
These two authors contributed equally.

Fundamental limits to learning closed-form mathematical models from data

Oscar Fajardo-Fontiveros Department of Chemical Engineering, Universitat Rovira i Virgili, Tarragona 43007, Catalonia    Ignasi Reichardt Department of Chemical Engineering, Universitat Rovira i Virgili, Tarragona 43007, Catalonia Department of Mechanical Engineering, Universitat Rovira i Virgili, Tarragona 43007, Catalonia    Harry R. De Los Rios Department of Computer Science and Mathematics, Universitat Rovira i Virgili, Tarragona 43007, Catalonia    Jordi Duch Department of Computer Science and Mathematics, Universitat Rovira i Virgili, Tarragona 43007, Catalonia    Marta Sales-Pardo Corresponding author: marta.sales@urv.cat Department of Chemical Engineering, Universitat Rovira i Virgili, Tarragona 43007, Catalonia    Roger Guimerà Corresponding author: roger.guimera@urv.cat Department of Chemical Engineering, Universitat Rovira i Virgili, Tarragona 43007, Catalonia ICREA, Barcelona 08010, Catalonia
(July 30, 2025)
Abstract

Given a finite and noisy dataset generated with a closed-form mathematical model, when is it possible to learn the true generating model from the data alone? This is the question we investigate here. We show that this model-learning problem displays a transition from a low-noise phase in which the true model can be learned, to a phase in which the observation noise is too high for the true model to be learned by any method. Both in the low-noise phase and in the high-noise phase, probabilistic model selection leads to optimal generalization to unseen data. This is in contrast to standard machine learning approaches, including artificial neural networks, which in this particular problem are limited, in the low-noise phase, by their ability to interpolate. In the transition region between the learnable and unlearnable phases, generalization is hard for all approaches including probabilistic model selection.


Introduction

For a few centuries, scientists have described natural phenomena by means of relatively simple mathematical models such as Newton's law of gravitation or Snell's law of refraction. Sometimes, they arrived at these models deductively, starting from fundamental considerations; more frequently, however, they derived them inductively from data. With increasing amounts of data available for all sorts of (natural and social) systems, one may argue that we are now in a position to inductively uncover new interpretable models for these systems. To this end, machine learning approaches that can automatically uncover closed-form models from data have recently been developed Džeroski and Todorovski (2007); Schmidt and Lipson (2009); Evans and Rzhetsky (2010); Brunton et al. (2016); Guimerà et al. (2020); Udrescu and Tegmark (2020) (Note 1). In physics alone, such approaches have been applied successfully to quantum systems Gentile et al. (2021), non-linear and chaotic systems Schmidt and Lipson (2009); Brunton et al. (2016), fluid mechanics Reichardt et al. (2020), and astrophysics Cranmer et al. (2020), among others Udrescu and Tegmark (2020).

A central assumption implicit in these approaches is that, given data, it is always possible to identify the correct underlying model. Here, we investigate the validity of this assumption. In particular, consider a dataset $D=\{(y_i,\mathbf{x}_i)\}$, with $i=1,\dots,N$, generated using the closed-form model $m^*$, so that $y_i = m^*(\mathbf{x}_i,\theta^*)+\epsilon_i$, with $\theta^*$ the parameters of the model and $\epsilon_i$ a random, unbiased observation noise drawn from a normal distribution with variance $s_\epsilon^2$ (Note 2). The question we are interested in is: Assuming that $m^*$ can be expressed in closed form, when is it possible to identify it as the true generating model among all possible closed-form mathematical models, for someone who does not know the true model beforehand? Note that our focus is on learning the structure of the model $m^*$ and not the values of the parameters $\theta^*$, a problem that has received much more attention from the theoretical point of view Zdeborová and Krzakala (2016). Additionally, we are interested in situations in which the dimension of the feature space $\mathbf{x}\in\mathbb{R}^k$ is relatively small (compared to typical feature spaces in machine learning settings), which is the relevant regime for symbolic regression and model discovery.
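As a concrete illustration of this setup, the following minimal Python sketch generates such a dataset; the generating model `true_model` and the parameter values are hypothetical stand-ins for illustration, not models used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_model(x, theta):
    # Hypothetical closed-form generating model m*(x, theta); any other
    # closed-form expression could play this role.
    return theta[0] * x[:, 0] * np.sin(theta[1] + x[:, 1])

def generate_dataset(N, s_eps, theta=(1.0, 0.5)):
    # y_i = m*(x_i, theta*) + eps_i, with unbiased Gaussian noise of
    # standard deviation s_eps; x_i uniform in [-2, 2], as in Fig. 1.
    X = rng.uniform(-2.0, 2.0, size=(N, 2))
    y = true_model(X, theta) + rng.normal(0.0, s_eps, size=N)
    return X, y

X, y = generate_dataset(N=100, s_eps=1.0)
```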

To address the model-learning question above, we formulate the problem of identifying the true generating model probabilistically, and show that probabilistic model selection is quasi-optimal at generalization, that is, at making predictions about unobserved data. This is in contrast to standard machine learning approaches, which, in this case, are suboptimal in the region of low observation noise. We then investigate the transition occurring between: (i) a learnable phase at low observation noise, in which the true model can in principle be learned from the data; and (ii) an unlearnable phase, in which the observation noise is too large for the true model to be learned from the data by any method. Finally, we provide an upper bound for the noise at which the learnability transition takes place, that is, the noise beyond which the true generating model cannot be learned by any method. This bound corresponds to the noise at which the models that are most plausible a priori, which we call trivial models, become more parsimonious descriptions of the data than the true generating model. Despite the simplicity of the approach, the bound provides a good approximation to the actual transition point.

Results

Probabilistic formulation of the problem

The complete probabilistic solution to the problem of identifying a model from observations is encapsulated in the posterior distribution over models, $p(m|D)$. The posterior gives the probability that each model $m=m(\mathbf{x},\theta)$, with parameters $\theta$, is the true generating model given the data $D$. Again, notice that we are interested in the posterior over model structures $m$ rather than model parameters $\theta$; thus, we obtain $p(m|D)$ by marginalizing $p(m,\theta|D)$ over all possible parameter values $\Theta$:

$$p(m|D) = \int_{\Theta} d\theta\; p(m,\theta|D) = \frac{1}{p(D)}\int_{\Theta} d\theta\; p(D|m,\theta)\,p(\theta|m)\,p(m)\,, \qquad (1)$$

where $p(D|m,\theta)$ is the model likelihood, and $p(\theta|m)$ and $p(m)$ are the prior distributions over the parameters of a given model and over the models themselves, respectively. The posterior over models can always be rewritten as

$$p(m|D)=\frac{\exp\left[-\mathcal{H}(m)\right]}{Z}\,, \qquad (2)$$

with $Z = p(D) = \sum_{\{m\}}\exp\left[-\mathcal{H}(m)\right]$ and

$$\mathcal{H}(m) = -\ln p(D,m) = -\ln\left[\int_{\Theta} d\theta\,p(D|m,\theta)\,p(\theta|m)\right] - \ln p(m)\,. \qquad (3)$$

Although the integral over parameters in Eq. (3) cannot, in general, be calculated exactly, it can be approximated as Schwarz (1978); Guimerà et al. (2020)

$$\mathcal{H}(m)\approx\frac{B(m)}{2}-\ln p(m)\,, \qquad (4)$$

where $B(m)$ is the Bayesian information criterion (BIC) of the model Schwarz (1978). This approximation results from using Laplace's method to integrate the distribution $p(D|m,\theta)\,p(\theta|m)$ over the parameters $\theta$. Thus, the calculation assumes that: (i) the likelihood $p(D|m,\theta)$ is peaked around $\hat\theta=\arg\max_\theta p(D|m,\theta)$, so that it can be approximated by a Gaussian around $\hat\theta$; and (ii) the prior $p(\theta|m)$ is smooth around $\hat\theta$, so that it can be taken as approximately constant within that Gaussian. Unlike in other contexts, in regression-like problems these assumptions are typically mild.
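In practice, Eq. (4) reduces to a few lines of code. The sketch below (ours, not taken from the machine-scientist code base) computes $\mathcal{H}(m)$ for a Gaussian likelihood evaluated at the maximum-likelihood parameters, as in Eqs. (11)-(13) of the Methods; `log_prior` stands for $\ln p(m)$.

```python
import numpy as np

def description_length(y, y_pred, k, log_prior):
    # BIC approximation to the description length, Eq. (4):
    # H(m) ~= B(m)/2 - ln p(m), for a model with k parameters.
    N = len(y)
    s2 = np.mean((y - y_pred) ** 2)  # MLE of the noise variance (Methods)
    bic = N * (np.log(2 * np.pi * s2) + 1) + (k + 1) * np.log(N)
    return bic / 2 - log_prior
```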

Within an information-theoretical interpretation of model selection, (m)\mathcal{H}(m) is the description length, that is, the number of nats (or bits if one uses base 2 logarithms instead of natural logarithms) necessary to jointly encode the data and the model with an optimal encoding Grünwald (2007). Therefore, the most plausible model—that is, the model with maximum p(m|D)p(m|D)—has the minimum description length, that is, compresses the data optimally.

The posterior in Eq. (2) can also be interpreted as in the canonical ensemble in statistical mechanics, with $\mathcal{H}(m)$ playing the role of the energy of a physical system, models playing the role of configurations of the system, and the most plausible model corresponding to the ground state. Here, we sample the posterior distribution over models $p(m|D)$ by generating a Markov chain with the Metropolis algorithm, as one would do for physical systems. To do this, we use the “Bayesian machine scientist” introduced in Ref. Guimerà et al. (2020) (Methods). We then select, among the sampled models, the most plausible one (that is, the maximum a posteriori model, the minimum description length model, or the ground state, in each interpretation).
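Schematically, one Metropolis step over models looks as follows (a sketch assuming a symmetric proposal; the actual sampler Guimerà et al. (2020) includes the appropriate acceptance corrections for its tree moves):

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_step(model, H, propose):
    # H(m) plays the role of the energy: a proposed model m' is accepted
    # with probability min(1, exp(H(m) - H(m'))), so shorter description
    # lengths are always accepted and longer ones only sometimes.
    candidate = propose(model)
    if rng.random() < np.exp(min(0.0, H(model) - H(candidate))):
        return candidate
    return model
```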

Probabilistic model selection yields quasi-optimal predictions for unobserved data

Probabilistic model selection as described above follows directly (and exactly, except for the explicit approximation in Eq. (4)) from the postulates of Cox's theorem Cox (1946); Jaynes (2003), and is therefore (Bayes) optimal when models are truly drawn from the prior $p(m)$. In particular, the minimum description length (MDL) model is the most compressive and the most plausible one, and any approach that selects, from $D$, a model other than the MDL model violates those basic postulates.

Figure 1: Probabilistic model selection makes quasi-optimal predictions about unobserved data. We select two models $m^*$, whose expressions are shown at the top of each column. (a-b) From each model, we generate synthetic datasets $D$ with $N$ points (shown, $N=100$) and different levels of noise $s_\epsilon$ (shown, $s_\epsilon=1$). Here and throughout the article, the values of the independent variables $x_1$ and $x_2$ are generated uniformly at random in $[-2,2]$. Vertical lines show the observation error $\epsilon_i$ for each point in $D$. For a model not drawn from the prior and data generated differently, see Supplementary Fig. S1. (c-d) For each dataset $D$ (with dataset sizes $N\in\{25,50,100,200,400\}$), we sample models from $p(m|D)$ using the Bayesian machine scientist Guimerà et al. (2020), select the MDL model (maximum $p(m|D)$) among those sampled, and use this model to make predictions on a test dataset $D'$, generated exactly as $D$. We show the prediction root mean squared error (RMSE) of the MDL model on $D'$ as a function of $N$ and $s_\epsilon$. For comparison, we also show the predictions from an artificial neural network (ANN, dotted lines; Methods). Since $s_\epsilon$ is the irreducible error, predictions on the diagonal ${\rm RMSE}=s_\epsilon$ are optimal. (e-f) We plot the prediction RMSE scaled by the irreducible error $s_\epsilon$; optimal predictions satisfy ${\rm RMSE}/s_\epsilon=1$ (dashed line).

We start by investigating whether this optimality in model selection also leads to the ability of the MDL model to generalize well, that is, to make accurate predictions about unobserved data. We show that the MDL model yields quasi-optimal generalization despite the fact that the best possible generalization is achieved by averaging over models Guimerà et al. (2020), and despite the BIC approximation in the calculation of the description length (Fig. 1). Specifically, we sample a model $m^*$ from the prior $p(m)$ described in Ref. Guimerà et al. (2020) and in the Methods; for a model not drawn from the prior, see Supplementary Text and Fig. S1. From this model, we generate synthetic datasets $D=\{(y_i,\mathbf{x}_i)\}$, $i=1,\dots,N$ (where $y_i=m^*(\mathbf{x}_i,\theta^*)+\epsilon_i$ and $\epsilon_i\sim{\rm Gaussian}(0,s_\epsilon)$), with different numbers of points $N$ and different levels of noise $s_\epsilon$. Then, for each dataset $D$, we sample models from $p(m|D)$ using the Bayesian machine scientist Guimerà et al. (2020), select the MDL model among those sampled, and use it to make predictions on a test dataset $D'$.

Because $D'$ is, like $D$, subject to observation noise, the irreducible error is $s_\epsilon$; that is, the root mean squared error (RMSE) of the predictions cannot be, on average, smaller than $s_\epsilon$. As we show in Fig. 1, the predictions of the MDL model achieve this optimal prediction limit, except for small $N$ and some intermediate values of the observation noise. This is in contrast to standard machine learning algorithms, such as random forests and artificial neural networks. These algorithms achieve the optimal prediction error at large values of the noise, but below a certain value of $s_\epsilon$ the prediction error stops decreasing and predictions become distinctly suboptimal (Fig. 1; random forests perform significantly worse than artificial neural networks, so those data are not shown).

In the limit $s_\epsilon\rightarrow\infty$ of high noise, all models make predictions whose errors are small compared to $s_\epsilon$. Thus, the prediction error is similar to $s_\epsilon$ regardless of the chosen model, which also means that it is impossible to correctly identify the model that generated the data. Conversely, in the $s_\epsilon\rightarrow 0$ limit, the limiting factor for standard machine learning methods is their ability to interpolate between the points in $D$, so their prediction error becomes independent of the observation error. By contrast, because Eqs. (2)-(4) provide consistent model selection, in this limit the MDL model should coincide with the true generating model $m^*$ and interpolate perfectly. This is exactly what we observe—the only error in the predictions of the MDL model is, again, the irreducible error $s_\epsilon$. Therefore, our observations show that probabilistic model selection leads to quasi-optimal generalization in the limits of high and low observation noise.

Figure 2: Phenomenology of the learnability transition. Using “Model 1” in Fig. 1, we generate 40 different datasets $D$ (with $N=100$) for each observation noise $s_\epsilon$. As before, the values of the independent variables $x_1$ and $x_2$ are generated uniformly at random in $[-2,2]$. For each dataset, we sample models from the posterior $p(m|D)$ and obtain the distribution of certain properties of the sampled models. (a) Distribution of description length gaps $\Delta\mathcal{H}(m)=\mathcal{H}(m)-\mathcal{H}(m^*)$; each line corresponds to a different dataset $D$. The true generating model has $\Delta\mathcal{H}(m)=0$, indicated by a horizontal dashed line. Positive gaps correspond to models $m$ that are less plausible than the true generating model $m^*$, and vice versa. When there is a model $m$ with a negative gap for a given dataset $D$, the true model is unlearnable from that dataset, because $m^*$ is not the most plausible model given the data. (b) Distribution of model description lengths $\mathcal{H}_M(m)=-\ln p(m)$ for each dataset $D$. The $\mathcal{H}_M(m^*)$ of the true generating model is indicated by a horizontal dashed line. Without loss of generality, the $\mathcal{H}_M(m^c)$ of the most plausible model a priori, $m^c=\arg\max_m p(m)$, or trivial model, is set to $\mathcal{H}_M(m^c)=0$.

Phenomenology of the learnability transition

Next, we establish the existence of the learnable and unlearnable regimes, and clarify how the transition between them happens. Again, we generate synthetic data $D$ using a known model $m^*$, as in the previous section, and sample models from the posterior $p(m|D)$. To investigate whether a model is learnable, we consider the ensemble of sampled models (Fig. 2); if no model in the ensemble is more plausible than the true generating model (that is, if the true model is also the MDL model), then we conclude that the model is learnable. Otherwise, the model is unlearnable: lacking any additional information about the generating model, it is impossible to identify it, since other models provide more parsimonious descriptions of the data $D$.

To this end, we first consider the gap $\Delta\mathcal{H}(m)=\mathcal{H}(m)-\mathcal{H}(m^*)$ between the description length of each sampled model $m$ and that of the true generating model $m^*$ (Fig. 2a). For low observation error $s_\epsilon$, all sampled models have positive gaps, that is, description lengths that are longer than or equal to that of the true model. This indicates that the most plausible model is the true model; therefore, the true model is learnable. By contrast, when the observation error grows ($s_\epsilon\gtrsim 0.6$ in Fig. 2a), some of the models sampled for some training datasets $D$ start to have negative gaps, which means that, for those datasets, these models become more plausible than the true model. When this happens, the true model becomes unlearnable. For even larger values of $s_\epsilon$ ($s_\epsilon>2$), the true model is unlearnable for virtually all training sets $D$.

To better characterize the sampled models, we next compute the description length of the model, that is, the term $\mathcal{H}_M(m)=-\ln p(m)$ in the description length, which measures the complexity of the model regardless of the data (Fig. 2b). For convenience, we set an arbitrary origin for $\mathcal{H}_M(m)$ so that $\mathcal{H}_M(m^c)=0$ for the models $m^c=\arg\max_m p(m)=\arg\min_m\mathcal{H}_M(m)$ that are most plausible a priori, which we refer to as trivial models; all other models have $\mathcal{H}_M(m)>0$. We observe that, for low observation error, most of the sampled models are more complex than the true model. Above a certain level of noise ($s_\epsilon\gtrsim 3$), however, most sampled models are trivial or very simple models with $\mathcal{H}_M(m)\approx 0$. These trivial models appear suddenly—at $s_\epsilon<2$ almost no trivial models are sampled at all.

Altogether, our analysis shows that for low enough observation error the true model is always recovered. Conversely, in the opposite limit only trivial models (those that are most plausible in the prior distribution) are considered reasonable descriptions of the data. Importantly, our analysis shows that trivial models appear suddenly as one increases the level of noise, which suggests that the transition to the unlearnable regime may be akin to a phase transition driven by changes in the model plausibility (or description length) landscape, similarly to what happens in other problems in inference, constraint satisfaction, and optimization Mézard and Montanari (2009); Zdeborová and Krzakala (2016).

Figure 3: Learnability transition and scaling. (a, b) For each of the two models in Fig. 1, we represent the learnability $\rho(s_\epsilon)$, that is, the fraction of datasets $D$ for which the true model $m^*$ is the most plausible one, and thus can be learned from the data. Vertical lines are estimates of the learnability transition point $s_\epsilon^\times$ from Eq. (8). (c, d) As in Fig. 1(e, f), we represent the scaled prediction root mean squared error (RMSE) for the MDL model. The peak in scaled prediction RMSE coincides with the learnability transition point. (e) Learnability as a function of the scaled noise $s_\epsilon/s_\epsilon^\times$ for all learnability curves (all values of $N$ and both models). The gray region $s_\epsilon/s_\epsilon^\times>1$ identifies the unlearnable phase.

To further probe whether that is the case, we analyze the transition in more detail. Our previous analysis shows that learnability is a property of the training dataset $D$: for the same level of observation noise, some instances of $D$ enable learning of the true model whereas others do not. Thus, by analogy with satisfiability transitions Mézard and Montanari (2009), we define the learnability $\rho(s_\epsilon)$ as the fraction of training datasets $D$ at a given noise $s_\epsilon$ for which the true generating model is learnable. In Fig. 3, we show the behavior of the learnability $\rho(s_\epsilon)$ for different sizes $N$, for the two models in Fig. 1 (see Supplementary Fig. S1 for another model). Consistent with the qualitative description above, we observe that the learnability transition occurs abruptly at a certain value of the observation noise; the transition shifts towards higher values of $s_\epsilon$ with increasing size of $D$.
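Operationally, estimating $\rho(s_\epsilon)$ at one noise level amounts to the following check over the sampled ensembles (a minimal sketch, with made-up numbers):

```python
import numpy as np

def learnability(min_gaps):
    # min_gaps[d] = smallest description-length gap Delta H(m) among all
    # models sampled for training dataset d (cf. Fig. 2a); the true model
    # is learnable from dataset d iff no sampled model beats it (gap >= 0).
    return float(np.mean(np.asarray(min_gaps) >= 0))

print(learnability([0.0, 3.2, -1.5, 7.0, 0.8]))  # -> 0.8
```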

Remarkably, the difficult region identified in the previous section, in which the prediction error deviates from the theoretical irreducible error $s_\epsilon$, coincides with the transition region. Therefore, our results paint a picture with three qualitatively distinct regimes: a learnable regime, in which the true model can always be learned and predictions about unobserved data are optimal; an unlearnable regime, in which the true model can never be learned but predictions are still optimal, because the error is driven by the observation noise and not by the model itself; and a transition regime, in which the true model is learnable only sometimes, and in which predictions are, on average, suboptimal. This, again, is reminiscent of the hard phase in satisfiability transitions and of other statistical learning problems Mézard and Montanari (2009); Ricci-Tersenghi et al. (2019); Maillard et al. (2020).

Learnability conditions

Finally, we obtain an upper bound to the learnability transition point by assuming that all the phenomenology of the transition can be explained by the competition between just two minima of the description length landscape $\mathcal{H}(m)$: the minimum corresponding to the true model $m^*$, and the minimum corresponding to the trivial model $m^c$ that is most plausible a priori,

$$m^c=\arg\max_m p(m)=\arg\min_m\mathcal{H}_M(m)\;. \qquad (5)$$

As noted earlier, $m^c$ is such that $\mathcal{H}_M(m^c)=0$ by our choice of origin (Note 3), and for our choice of priors it corresponds to trivial models without any operation, such as $m(\mathbf{x})={\rm const.}$ or $m(\mathbf{x})=x_1$. Below, we focus on one of these models, $m(\mathbf{x})={\rm const.}$, but our arguments (although not the precise calculations) remain valid for any other choice of prior and the corresponding $m^c$.

As we have also noted above, the true model is learnable when the description length gap $\Delta\mathcal{H}(m)=\mathcal{H}(m)-\mathcal{H}(m^*)$ is strictly positive for all models $m\neq m^*$. Conversely, the true model $m^*$ becomes unlearnable when $\Delta\mathcal{H}(m)=0$ for some model $m\neq m^*$. As per our assumption that the only relevant minima are those corresponding to $m^*$ and $m^c$, we therefore postulate that the transition occurs when the description length gap of the trivial model becomes, on average (over datasets), $\Delta\mathcal{H}(m^c)=0$. In fact, when $\mathcal{H}(m^c)\leq\mathcal{H}(m^*)$ the true model is certainly unlearnable; but other models $m$ could fulfill the condition $\mathcal{H}(m)\leq\mathcal{H}(m^*)$ earlier, thus making the true model unlearnable at lower observation noise. Therefore, the condition $\Delta\mathcal{H}(m^c)=0$ yields an upper bound to the value of the noise at which the learnability transition happens. We come back to this issue later; for now, we ignore all potential intermediate models.

Within the BIC approximation, the description length of each of these two models is (Methods)

$$\mathcal{H}(m^*)=\frac{N}{2}\left[\ln 2\pi\langle\epsilon^2\rangle_D+1\right]+\frac{k^*+1}{2}\ln N-\ln p(m^*)\,, \qquad (6)$$

$$\mathcal{H}(m^c)=\frac{N}{2}\left[\ln 2\pi\left(\langle\epsilon^2\rangle_D+\langle\delta^2\rangle_D\right)+1\right]+\ln N-\ln p(m^c)\,, \qquad (7)$$

where $k^*$ is the number of parameters of the true model, $\delta_i=m^c_i-m^*_i$ is the reducible error of the trivial model, and the averages $\langle\cdots\rangle_D$ are over the observations $i$ in $D$. Then, over many realizations of $D$, the transition occurs on average at

$$s_\epsilon^\times=\left[\frac{\langle\delta^2\rangle}{\left(\frac{p(m^c)}{p(m^*)}\right)^{\frac{2}{N}}N^{\frac{k^*-1}{N}}-1}\right]^{1/2}\,, \qquad (8)$$

where $\langle\delta^2\rangle$ is now the variance of $m^*$ over the observation interval rather than in any particular dataset.
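For completeness, here is a sketch of the short algebra behind Eq. (8), which the text states only as a result. Setting $\mathcal{H}(m^*)=\mathcal{H}(m^c)$ in Eqs. (6)-(7) and replacing $\langle\epsilon^2\rangle_D$ by its expected value $s_\epsilon^2$, the $N/2$ terms cancel and

$$\frac{N}{2}\ln\frac{s_\epsilon^2+\langle\delta^2\rangle}{s_\epsilon^2}=\frac{k^*-1}{2}\ln N+\ln\frac{p(m^c)}{p(m^*)}\;\Rightarrow\;1+\frac{\langle\delta^2\rangle}{s_\epsilon^2}=\left(\frac{p(m^c)}{p(m^*)}\right)^{\frac{2}{N}}N^{\frac{k^*-1}{N}}\,,$$

and solving for $s_\epsilon$ yields Eq. (8).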

Figure 4: Model description length and learnability conditions. For each of the two models and each size $N$, we plot the description length of the MDL model identified by the Bayesian machine scientist, averaged over 40 realizations of the training dataset $D$ (colored symbols). For each model and $N$, we also plot the theoretical description length of the true generating model $m^*$ (Eq. (6); solid black line) and of the trivial model $m^c$ (Eq. (7); dotted black line). As in Fig. 3, colored vertical lines are estimates of the learnability transition point, at which $\mathcal{H}(m^*)=\mathcal{H}(m^c)$ (Eq. (8)). To the right of this point (gray region; unlearnable phase), $\mathcal{H}(m^*)>\mathcal{H}(m^c)$, so the true model cannot be learned, from the data alone, by any method.

For large $N\gtrsim 100$ and model description lengths of the true model such that $\Delta_M^*\equiv\mathcal{H}_M(m^*)-\mathcal{H}_M(m^c)\sim O(1)$, the transition noise is well approximated by (Methods)

$$s_\epsilon^\times\approx\left[\frac{\langle\delta^2\rangle N}{2\Delta_M^*+(k^*-1)\ln N}\right]^{1/2}\;. \qquad (9)$$

Therefore, since $s_\epsilon^\times$ diverges as $N\rightarrow\infty$, the true model is learnable for any finite observation noise, provided that $N$ is large enough (which is just another way of seeing that probabilistic model selection with the BIC approximation is consistent Schwarz (1978)).
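Numerically, the bound and its large-$N$ approximation are straightforward to evaluate; the sketch below uses illustrative (made-up) values of $\langle\delta^2\rangle$, $\Delta_M^*$, and $k^*$:

```python
import numpy as np

def s_transition_exact(var_delta, delta_M, k_true, N):
    # Eq. (8); delta_M = ln[p(m^c)/p(m^*)] is the model-complexity gap.
    denom = np.exp(2 * delta_M / N) * N ** ((k_true - 1) / N) - 1
    return np.sqrt(var_delta / denom)

def s_transition_approx(var_delta, delta_M, k_true, N):
    # Large-N approximation, Eq. (9).
    return np.sqrt(var_delta * N / (2 * delta_M + (k_true - 1) * np.log(N)))

for N in (100, 400, 1600):  # the two expressions converge as N grows
    print(N, s_transition_exact(1.0, 5.0, 3, N), s_transition_approx(1.0, 5.0, 3, N))
```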

In Fig. 3, we show that the upper bound in Eq. (8) is close to the exact transition point from the learnable to the unlearnable phase, as well as to the peak of the prediction error relative to the irreducible error. Moreover, once represented as a function of the scaled noise $s_\epsilon/s_\epsilon^\times$, the learnability curves of both models collapse into a single curve (Fig. 3e), suggesting that the behavior at the transition may be universal. Additionally, the transition between $m^*$ and $m^c$ becomes more abrupt with increasing $N$, as the fluctuations in $\langle\delta^2\rangle_D$ become smaller. If this upper bound became tighter with increasing $N$, this would be indicative of a true, discontinuous phase transition between the learnable and the unlearnable phases.

To further understand the transition, and to further test the validity of the assumption leading to the upper bound $s_\epsilon^\times$, we plot the average description length (over datasets $D$) of the MDL model at each level of observation noise $s_\epsilon$ (Fig. 4). We observe that the description length of the MDL model coincides with the description length $\mathcal{H}(m^*)$ of the true model below the transition observation noise, and with the description length $\mathcal{H}(m^c)$ of the trivial model above the transition observation noise. Around $s_\epsilon^\times$, and especially for smaller $N$, the observed MDL is lower than both $\mathcal{H}(m^*)$ and $\mathcal{H}(m^c)$, suggesting that, in the transition region, a multiplicity of models (or Rashomon set Rudin (2019)) other than $m^*$ and $m^c$ becomes relevant.

Discussion

The bidirectional cross-fertilization between statistical learning theory and statistical physics goes back to the 1980s, when the paradigm in artificial intelligence shifted from rule-based approaches to statistical learning approaches Carleo et al. (2019). Nonetheless, the application of statistical physics ideas and tools to learning problems has so far focused on learning parameter values Engel and den Broeck (2004); Maillard et al. (2020); Aubin et al. (2020); Mignacco et al. (2022), or on specific problems such as learning probabilistic graphical models and, more recently, network models Guimerà and Sales-Pardo (2009); Decelle et al. (2011); Krzakala et al. (2016); Vallès-Català et al. (2018); Peixoto (2020). Thus, despite the existence of rigorous probabilistic model selection approaches Ando (2010), the issue of learning the structure of models, and especially closed-form mathematical models, has received much less attention from the statistical physics point of view Zdeborová and Krzakala (2016).

Our approach shows that, once the model-selection problem is formalized in such a way that models can be represented and enumerated, and their posteriors $p(m|D)$ can be estimated and sampled in the same way we do for discrete configurations in a physical system Guimerà et al. (2020), there is no fundamental difference between learning models and learning parameters (except, perhaps, for the discreteness of model structures). Therefore, the same richness that has been described at length in parameter learning can be expected in model learning, including the learnability transition that we have described here, but perhaps also other transitions related, for example, to the precise characterization of the description length landscape in the hard phase, in which generalization is difficult. We may also expect model-learning and parameter-learning transitions to interact. For example, there are limits to our ability to learn model parameters, even for noiseless data Mondelli and Montanari (2019); Barbier et al. (2019); Maillard et al. (2020); in this unlearnable phase of the parameter-learning problem, it seems that the true model should also be unlearnable, even in this noiseless limit, something we have not considered here. Our findings are thus the first step in the characterization of the rich phenomenology arising from the interplay between data size, noise, and parameter and model identification from data.

Our results on closed-form mathematical models may also open the door to addressing some problems of learning that have so far remained difficult to tackle. In deep learning, for example, the question of how many training examples are necessary to learn a certain task with precision is still open Carleo et al. (2019), whereas, as we have shown, in our context of (closed-form) model selection the answer to this question arises naturally.

Finally, we have shown that, when data are generated with a (relatively simple) closed-form expression, probabilistic model selection generalizes quasi-optimally, especially in the ideal regime of low observation noise. This is in contrast, as we have seen, to what happens in this case with some machine learning approaches such as random forests or artificial neural networks. Conversely, it would be interesting to understand how expressive closed-form mathematical models are, that is, to what extent they are able to describe and generalize when data are not generated using a closed-form mathematical model (for example, when the data correspond to the solution of a differential equation that cannot be expressed in closed form). We hope that our work encourages more research on machine learning approaches geared towards learning such interpretable models Rudin (2019), and on the statistical physics of such approaches.

METHODS

Prior over models

The probabilistic formulation of the model selection problem outlined in Eqs. (1)-(4) requires the choice of a prior distribution $p(m)$ over models—although the general, qualitative results described in the body of the text do not depend on a particular choice of prior. Indeed, no matter how priors are chosen, some model $m^c=\arg\max_m p(m)$ (or, at most, a finite subset of models, since uniform priors cannot be used Guimerà et al. (2020)) different from the true generating model $m^*$ will be most plausible a priori, so the key arguments in the main text hold.

However, the specific, quantitative results must be obtained for one specification of $p(m)$. Here, as in the previous literature Guimerà et al. (2020); Reichardt et al. (2020), we choose $p(m)$ to be the maximum entropy distribution that is compatible with an empirical corpus of mathematical equations Guimerà et al. (2020). In particular, we impose that the mean number $\langle n_o\rangle$ of occurrences of each operation $o\in\{+,*,\exp,\dots\}$ per equation is the same as in the empirical corpus, and that the fluctuations of these numbers $\langle n_o^2\rangle$ are also as in the corpus. Therefore, the prior is

$$p(m)\propto\exp\left[\sum_{o\in\mathcal{O}}\left(-\alpha_o n_o(m)-\beta_o n_o^2(m)\right)\right]\,, \qquad (10)$$

where $\mathcal{O}=\{+,*,\exp,\dots\}$, and the hyperparameters $\alpha_o\geq 0$ and $\beta_o\geq 0$ are chosen so as to reproduce the statistical features of the empirical corpus Guimerà et al. (2020). For this particular choice of prior, the models $m^c=\arg\max_m p(m)$ are those that contain no operations at all, for example $m^c={\rm const.}$; hence the use of the term trivial to denote these models in the text.
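For illustration, the model description length $\mathcal{H}_M(m)=-\ln p(m)$ implied by Eq. (10) can be computed from the operation counts alone. In this sketch the hyperparameter values are made up, since the fitted values depend on the corpus of Ref. Guimerà et al. (2020):

```python
# Hypothetical hyperparameters (the fitted values are corpus-dependent).
ALPHA = {"+": 1.0, "*": 1.1, "exp": 2.5}
BETA = {"+": 0.05, "*": 0.05, "exp": 0.3}

def model_description_length(op_counts):
    # H_M(m) = sum_o [alpha_o n_o(m) + beta_o n_o(m)^2], i.e. -ln p(m)
    # up to an additive constant; op_counts maps operation -> n_o(m).
    return sum(ALPHA[o] * n + BETA[o] * n ** 2 for o, n in op_counts.items())

print(model_description_length({}))                # trivial model: H_M = 0
print(model_description_length({"+": 1, "*": 2}))  # more operations, larger H_M
```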

Sampling models with the Bayesian machine scientist

For a given dataset $D$, each model $m$ has a description length $\mathcal{H}(m)$ given by Eq. (4). The Bayesian machine scientist Guimerà et al. (2020) generates a Markov chain of models $\{m_0,m_1,\dots,m_T\}$ using the Metropolis algorithm, as described next.

Each model is represented as an expression tree, and each new model $m_{t+1}$ in the Markov chain is proposed from the previous one $m_t$ by changing the corresponding expression tree: changing an operation or a variable in the tree (for example, proposing a new model $m_{t+1}=\theta_0+x_1$ from $m_t=\theta_0*x_1$); adding a new term to the tree (for example, $m_{t+1}=\theta_0*x_1+x_2$ from $m_t=\theta_0*x_1$); or replacing one block of the tree (for example, $m_{t+1}=\theta_0*\exp(x_2)$ from $m_t=\theta_0*x_1$) (see Guimerà et al. (2020) for details). Once a new model $m_{t+1}$ is proposed from $m_t$, it is accepted or rejected using the Metropolis rule.
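A minimal sketch of an expression tree and of the first kind of move (relabeling a node while preserving its arity) is shown below; the actual implementation in the machine-scientist repository is considerably richer:

```python
import random

OPS = {"+": 2, "*": 2, "exp": 1}    # operation -> arity
LEAVES = ["x1", "x2", "theta0"]      # variables and parameters

def relabel(tree):
    # Replace the operation at the root by another of the same arity
    # (or a leaf by another leaf), keeping the tree shape intact.
    if isinstance(tree, tuple):
        op, children = tree
        same_arity = [o for o, a in OPS.items() if a == OPS[op]]
        return (random.choice(same_arity), children)
    return random.choice(LEAVES)

print(relabel(("*", ["theta0", "x1"])))  # e.g. ('+', ['theta0', 'x1'])
```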

Note that the only input to the Bayesian machine scientist is the observed data $D$. In particular, the observation noise $s_\epsilon$ is unknown and must be estimated, via maximum likelihood, to calculate the Bayesian information criterion $B(m)$ of each model $m$.

Artificial neural network benchmarks

For the analysis of predictions on unobserved data, we use as benchmarks the following artificial neural network architectures and training procedures. The networks consist of: (i) an input layer with two inputs, corresponding to $x_1$ and $x_2$; (ii) four hidden fully connected feed-forward layers, with 10 units each and ReLU activation functions; and (iii) a linear output layer with a single output $y$.

Each network was trained with a dataset $D$ containing $N$ points, just as in the probabilistic model selection experiments. Training errors and validation errors (the latter computed on an independent set) were calculated, and the training process was stopped when the validation error had increased, on average, for 100 epochs; this typically entailed training for 1,000-2,000 epochs. This procedure was repeated three times, and the model with the overall lowest validation error was kept for making predictions on a final test set $D'$. The predictions on $D'$ are those reported in Fig. 1.
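The text does not specify a deep-learning framework or optimizer; assuming PyTorch and Adam, the described architecture corresponds to a sketch like the following:

```python
import torch
from torch import nn

# (i) two inputs (x1, x2); (ii) four hidden layers of 10 ReLU units each;
# (iii) a single linear output y.
net = nn.Sequential(
    nn.Linear(2, 10), nn.ReLU(),
    nn.Linear(10, 10), nn.ReLU(),
    nn.Linear(10, 10), nn.ReLU(),
    nn.Linear(10, 10), nn.ReLU(),
    nn.Linear(10, 1),
)
optimizer = torch.optim.Adam(net.parameters())
loss_fn = nn.MSELoss()

def train_step(X, y):
    # One gradient step; X has shape (N, 2) and y shape (N, 1).
    # Early stopping on a validation set is applied outside this function.
    optimizer.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```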

Model description lengths

The description length of a model is given by Eq. (4), and the BIC is

$$B(m)=-2\ln\left.{\cal L}(m)\right|_{\hat\theta}+(k+1)\ln N\,, \qquad (11)$$

where $\left.{\cal L}(m)\right|_{\hat\theta}=p(D|m,\hat\theta)$ is the likelihood of the model evaluated at the maximum likelihood estimator of the parameters, $\hat\theta=\arg\max_\theta p(D|m,\theta)$, and $k$ is the number of parameters in $m$. In a model-selection setting, one typically assumes that the deviations of the observed data are normally distributed independent variables, so that

$${\cal L}(m)=\prod_i\frac{1}{\sqrt{2\pi s_y^2}}\exp\left[-\frac{(y_i-m_i)^2}{2s_y^2}\right]=\frac{1}{(2\pi s_y^2)^{N/2}}\exp\left[-\sum_i\frac{(y_i-m_i)^2}{2s_y^2}\right]\,, \qquad (12)$$

where $m_i\equiv m(\mathbf{x}_i)$, and $s_y$ is the observed error, which in general differs from $s_\epsilon$ when $m\neq m^*$. The maximum likelihood estimator for $s_y^2$ is $s_y^2=\frac{1}{N}\sum_i(y_i-m_i)^2$; replacing it into Eqs. (4)-(12) gives

$${\cal H}(m)=\frac{N}{2}\ln 2\pi s_y^2+\frac{N}{2}+\frac{k+1}{2}\ln N-\ln p(m)\,. \qquad (13)$$

For $m=m^*$, we have $s_y^2=\frac{1}{N}\sum_i(y_i-m^*_i)^2=\langle\epsilon^2\rangle_D$. For any other model $m$, we define for each point the deviation from the true model, $\delta_i:=m_i-m^*_i$, so that $s_y^2=\frac{1}{N}\sum_i(\epsilon_i-\delta_i)^2=\langle\epsilon^2\rangle_D+\langle\delta^2\rangle_D-2\langle\delta\epsilon\rangle_D$. Plugging these expressions into Eq. (13), and noting that the cross term $\langle\delta\epsilon\rangle_D$ vanishes on average over realizations of $D$, we obtain the expressions in Eqs. (6) and (7).

Approximation of $s_\epsilon^\times$ for large $N$

Defining $x=1/N$ in Eq. (8) and using the Puiseux series expansion

$$a^{2x}\left(\frac{1}{x}\right)^{bx}=1+x\left(2\ln a+b\ln\frac{1}{x}\right)+O(x^2) \qquad (14)$$

around $x=0$, we obtain Eq. (9).
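Explicitly, a short worked step for the reader's convenience: with $x=1/N$, $a=p(m^c)/p(m^*)$, and $b=k^*-1$, the denominator of Eq. (8) becomes

$$a^{2x}\left(\frac{1}{x}\right)^{bx}-1\approx x\left(2\ln a+b\ln\frac{1}{x}\right)=\frac{2\Delta_M^*+(k^*-1)\ln N}{N}\,,$$

since $\ln a=\mathcal{H}_M(m^*)-\mathcal{H}_M(m^c)=\Delta_M^*$; substituting this into Eq. (8) gives Eq. (9).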

Data availability

We did not use any data beyond the synthetically generated data described in the manuscript.

Code availability

The code for the Bayesian machine scientist is publicly available as a repository from the following URL: https://bitbucket.org/rguimera/machine-scientist/.

References

  • Džeroski and Todorovski (2007) S. Džeroski and L. Todorovski, eds., Computational Discovery of Scientific Knowledge, Lecture Notes in Artificial Intelligence (Springer, 2007).
  • Schmidt and Lipson (2009) M. Schmidt and H. Lipson, Science 324, 81 (2009).
  • Evans and Rzhetsky (2010) J. Evans and A. Rzhetsky, Science 329, 399 (2010).
  • Brunton et al. (2016) S. L. Brunton, J. L. Proctor,  and J. N. Kutz, Proc. Natl. Acad. Sci. U.S.A. 113, 3932 (2016).
  • Guimerà et al. (2020) R. Guimerà, I. Reichardt, A. Aguilar-Mogas, F. A. Massucci, M. Miranda, J. Pallarès,  and M. Sales-Pardo, Sci. Adv. 6, eaav6971 (2020).
  • Udrescu and Tegmark (2020) S.-M. Udrescu and M. Tegmark, Sci. Adv. 6, eaay2631 (2020).
  • Note (1) Here and throughout this article, we refer to closed-form models as those that are expressed in terms of a relatively small number of basic functions, such as addition, multiplication, trigonometric functions, etc.
  • Gentile et al. (2021) A. A. Gentile, B. Flynn, S. Knauer, N. Wiebe, S. Paesani, C. E. Granade, J. G. Rarity, R. Santagati,  and A. Laing, Nat. Phys. 17, 837 (2021).
  • Reichardt et al. (2020) I. Reichardt, J. Pallarès, M. Sales-Pardo,  and R. Guimerà, Phys. Rev. Lett. 124, 084503 (2020).
  • Cranmer et al. (2020) M. Cranmer, A. Sanchez Gonzalez, P. Battaglia, R. Xu, K. Cranmer, D. Spergel,  and S. Ho, in Advances in Neural Information Processing Systems, Vol. 33, edited by H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan,  and H. Lin (Curran Associates, Inc., 2020) pp. 17429–17442.
  • Note (2) The assumption of Gaussian noise is standard in regression and symbolic regression problems, and it allows us to write the likelihood as in Eq. (12). In principle, one could assume other noise structures (for example, multiplicative noise) or even more general likelihoods, but these would be hard to justify in the context of regression and symbolic regression/model discovery. In any case, we do not expect that the qualitative behavior described in the manuscript should change in any way.
  • Zdeborová and Krzakala (2016) L. Zdeborová and F. Krzakala, Adv. Phys. 65, 453 (2016).
  • Schwarz (1978) G. Schwarz, Ann. Stat. 6, 461 (1978).
  • Grünwald (2007) P. D. Grünwald, The Minimum Description Length Principle (The MIT Press, Cambridge, Massachusetts, 2007).
  • Cox (1946) R. T. Cox, Am. J. Phys. 14, 1 (1946).
  • Jaynes (2003) E. T. Jaynes, Probability Theory: The Logic of Science (Cambridge University Press, 2003).
  • Mézard and Montanari (2009) M. Mézard and A. Montanari, Information, Physics, and Computation (Oxford University Press, 2009).
  • Ricci-Tersenghi et al. (2019) F. Ricci-Tersenghi, G. Semerjian,  and L. Zdeborová, Phys. Rev. E 99, 042109 (2019).
  • Maillard et al. (2020) A. Maillard, B. Loureiro, F. Krzakala,  and L. Zdeborová, in Advances in Neural Information Processing Systems, Vol. 33, edited by H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan,  and H. Lin (Curran Associates, Inc., 2020) pp. 11071–11082.
  • Note (3) Note that, although the origin of description lengths is not arbitrary, this choice is innocuous as long as we are only concerned with comparisons between models.
  • Rudin (2019) C. Rudin, Nat. Mach. Intell. 1, 206 (2019).
  • Carleo et al. (2019) G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto,  and L. Zdeborová, Rev. Mod. Phys. 91, 45002 (2019).
  • Engel and den Broeck (2004) A. Engel and C. V. den Broeck, Statistical Mechanics of Learning (Cambridge University Press, Cambridge, UK, 2004).
  • Aubin et al. (2020) B. Aubin, F. Krzakala, Y. Lu,  and L. Zdeborová, in Advances in Neural Information Processing Systems, Vol. 33, edited by H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan,  and H. Lin (Curran Associates, Inc., 2020) pp. 12199–12210.
  • Mignacco et al. (2022) F. Mignacco, F. Krzakala, Y. M. Lu, P. Urbani,  and L. Zdeborová, in Proceedings of the 37th International Conference on Machine Learning, ICML’20 (JMLR.org, 2022).
  • Guimerà and Sales-Pardo (2009) R. Guimerà and M. Sales-Pardo, Proc. Natl. Acad. Sci. U. S. A. 106, 22073 (2009).
  • Decelle et al. (2011) A. Decelle, F. Krzakala, C. Moore,  and L. Zdeborová, Phys. Rev. Lett. 107, 065701 (2011).
  • Krzakala et al. (2016) F. Krzakala, F. Ricci-Tersenghi, L. Zdeborová, R. Zecchina, E. W. Tramel,  and L. F. Cugliandolo, eds., Statistical Physics, Optimization, Inference, and Message-Passing Algorithms, 1st ed. (Oxford University Press, Oxford, UK, 2016).
  • Vallès-Català et al. (2018) T. Vallès-Català, T. P. Peixoto, M. Sales-Pardo,  and R. Guimerà, Phys. Rev. E 97, 062316 (2018).
  • Peixoto (2020) T. P. Peixoto, “Advances in network clustering and blockmodeling,”  (John Wiley & Sons Ltd, 2020) Chap. Bayesian stochastic blockmodeling.
  • Ando (2010) T. Ando, Bayesian model selection and statistical modeling (CRC Press, 2010).
  • Mondelli and Montanari (2019) M. Mondelli and A. Montanari, Foundations of Computational Mathematics 19, 703 (2019).
  • Barbier et al. (2019) J. Barbier, F. Krzakala, N. Macris, L. Miolane,  and L. Zdeborová, Proc. Natl. Acad. Sci. U.S.A. 116, 5451 (2019).
Acknowledgements.
This research was funded by project PID2019-106811GB-C31 from the Spanish MCIN/AEI/10.13039/501100011033 and by the Government of Catalonia (2017SGR-896).

Author contributions

MS-P and RG designed the research. OF-F, IR, HDLR, and RG wrote code and ran the computational experiments. All authors analyzed data. OF-F, IR, MS-P, and RG wrote the manuscript.

Competing interests

The authors declare no competing interests.