The Proof of Kolmogorov-Arnold May Illuminate NN Learning
Abstract.
Kolmogorov and Arnold, in answering Hilbert’s 13th problem (in the context of continuous functions), laid the foundations for the modern theory of Neural Networks (NNs). Their proof divides the representation of a multivariate function into two steps: The first (non-linear) inter-layer map gives a universal embedding of the data manifold into a single hidden layer whose image is patterned in such a way that a subsequent dynamic can then be defined to solve for the second inter-layer map. I interpret this pattern as “minor concentration” of the almost everywhere defined Jacobians of the interlayer map. Minor concentration amounts to sparsity for higher exterior powers of the Jacobians. We present a conceptual argument for how such sparsity may set the stage for the emergence of successively higher order concepts in today’s deep NNs and suggest two classes of experiments to test this hypothesis.
The Kolmogorov-Arnold theorem (KA) [kol56, arn57] resolved Hilbert’s 13th problem in the context of continuous functions (Hilbert’s literal statement). It took some time for applied mathematicians and computer scientists to notice that it addresses the representation power of shallow, but highly non-linear, neural nets [hed71]. The relevance of KA to machine learning (ML) has been much debated [gp89, v91].
The primary criticism is that the activation functions are not even once differentiable (even when the function to be represented is); indeed, they must be quite wild, and appear impossible to train. I agree with this criticism, but will argue here that it misses a more important point. The discussion of KA’s relevance has revolved around its statement. But in mathematics, proofs are generally more revealing of power than statements; I believe this is particularly true in the case of KA. The purpose of this note is to call attention to some wisdom embedded within the proof which I believe will be useful for the training of neural nets (NNs). As far as resurrecting KA from the dustbin of ML history, this has already been done earlier this year in [ZWV24], where the original single hidden layer of KA has been deepened to many layers while the interlayer maps are somewhat tamed (the authors use cubic B-splines).
Before explaining KA’s insight and its potential applicability, let me give a modern, optimized statement of KA quoted from [mor21]. This statement is perhaps unnecessarily succinct from the NN perspective: it gets by with a single outer function $g$, not $2n+1$ outer functions $g_q$ which might appear more natural. Also, in the NN context one would expect the inner functions to carry a second index $p$, as in $\phi_{q,p}$, and not merely be rationally independent scalars $\lambda_p$ times a single-index function $\phi_q$. But we take this statement as representative and refer the reader to Morris for more historical information on the individual contributions.
Theorem 1 (Kolmogorov, Arnold, Kahane, Lorentz, and Sprecher).
For any $n \in \mathbb{N}$, $n \geq 2$, there exist real numbers $\lambda_1, \lambda_2, \ldots, \lambda_n$ and continuous functions $\phi_q : [0,1] \to [0,1]$, for $q = 1, 2, \ldots, 2n+1$, with the property that for every continuous function $f : [0,1]^n \to \mathbb{R}$ there exists a continuous function $g : \mathbb{R} \to \mathbb{R}$ such that for each $(x_1, x_2, \ldots, x_n) \in [0,1]^n$,
$$ f(x_1, x_2, \ldots, x_n) \;=\; \sum_{q=1}^{2n+1} g\!\left( \sum_{p=1}^{n} \lambda_p \, \phi_q(x_p) \right). $$
The proof of KA is divided into the construction of inner functions $\phi_q$ and an outer function $g$. The inner functions taken together constitute an embedding, which we now denote $E$, $E : [0,1]^n \to \mathbb{R}^{2n+1}$. We will append a subscript $q$ when referring to a neuron coordinate $E_q$ of the embedding. $E$ is universal; it can be chosen once and for all, independent of the function $f$ to be represented. Our chief lesson regards $E$, although I also will make a comment on the non-linear dynamic used to converge to $g$ (given $E$).
In the mathematically idealized case of KA, feasible $E$ are actually dense in the space of continuous maps $[0,1]^n \to \mathbb{R}^{2n+1}$; they also have a crucial local feature, about to be described. This fact is reminiscent of the observation (for example see [gmrm23, skmf24]) that when NNs are randomly initialized, training is often seen to merely perturb the values of the early layers. These layers seem to be more in the business of assuming a form conducive to the learning of later layers, rather than attending to the particular data themselves.
The local property crucial to $E$ is a kind of irregular staircase structure, as might be used to approximate a general continuous function by one which is piecewise constant. More specifically, $E$ may be taken to be Lipschitz [ak17], and so by Rademacher’s theorem will be differentiable a.e. Such $E$ constructed in the proof of KA will have the property that at every point where the Jacobian $dE$ is defined it will, as an $n \times (2n+1)$ matrix in the “neuron basis,” have $n+1$ columns consisting entirely of zeros. So only one of its $\binom{2n+1}{n}$ maximal ($n \times n$) minors can be nonzero. (Of course, which minor is active will vary with the point $x$.) To picture what this means, imagine an irregular stack of sugar cubes of different sizes but with all faces perpendicular to one of the three coordinate axes, $x$, $y$, or $z$. Another condition says that the “inactive” coordinate on the axis-perpendicular faces ($x$, $y$, or $z$ respectively) must take distinct values. The actual situation is more like an irregular structure of $n$-dimensional cubes wafting through $(2n+1)$-dimensional space, now with $n+1$ inactive directions at any generic point, and all these inactive coordinate values distinct. Being distinct is what sets the stage towards finding at least an approximate outer function $g$. The inactive directions are crucial to the construction of $g$, and it is important that $n+1 > n$, so that the inactive coordinates are in the majority at all generic points of $E([0,1]^n)$.
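To make the picture concrete, here is a small numerical sketch (my own illustration, not code from the KA construction): for $n = 2$ a mock Jacobian $dE_x$ is a $2 \times 5$ matrix with three zero columns, and exactly one of its ten maximal ($2 \times 2$) minors survives.

```python
import numpy as np
from itertools import combinations

n = 2
dE = np.zeros((n, 2 * n + 1))            # Jacobian in the "neuron basis": n rows, 2n+1 columns
active = [1, 3]                          # at a generic point only n neuron coordinates are active
dE[:, active] = np.random.randn(n, n)    # arbitrary nonzero block in the active columns

# All maximal (n x n) minors: one for each choice of n columns out of the 2n+1.
minors = {cols: np.linalg.det(dE[:, list(cols)])
          for cols in combinations(range(2 * n + 1), n)}
nonzero = {c: round(m, 4) for c, m in minors.items() if abs(m) > 1e-12}
print(len(minors), "maximal minors; nonzero at column sets:", nonzero)
# Expected: 10 maximal minors, with the single nonzero one sitting at the active columns (1, 3).
```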
The very rough idea for how to choose $g$ is to use the inactive coordinate value on each “sugar cube face” as a hint: if $f$ is uniformly positive (negative) on the face with inactive coordinate value $t$, give $g(t)$ a small constant positive (negative) value and extend $g$ in a convex-PL manner. In the regime where the inactive coordinates are in the majority, it may be checked that such a strategy produces a $g_1$ leading to a useful approximation:
$$ \Big\| f - \sum_{q=1}^{2n+1} g_1 \circ E_q \Big\|_\infty \;\leq\; \theta \, \| f \|_\infty \quad \text{for some fixed } \theta < 1. $$
Now iterating on the approximation error (in a manner reminiscent of ResNets), and relying on additivity at the output node, one produces a series which converges exponentially fast to the desired outer function $g$, for which the approximation error has been driven to zero. Many inactive directions at each point are essential since they provide a stationary coordinate value on which a guess for $g$ can be made: a small positive constant if $f$ is positive throughout the face and a small negative constant if $f$ is negative throughout the face.
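The convergence claim can be unpacked as a standard contraction argument; the following sketch, with residuals $f_r$, partial outer functions $g_r$, and the contraction factor $\theta < 1$ from the inequality above (notation mine, chosen for illustration), is how “exponentially fast” arises:
$$ f_0 = f, \qquad f_r = f_{r-1} - \sum_{q=1}^{2n+1} g_r \circ E_q, \qquad \|f_r\|_\infty \leq \theta\,\|f_{r-1}\|_\infty \leq \theta^{\,r} \|f\|_\infty, $$
$$ g := \sum_{r \geq 1} g_r \quad\Longrightarrow\quad f - \sum_{q=1}^{2n+1} g \circ E_q = \lim_{r \to \infty} f_r = 0, $$
where each $g_r$ is produced from the residual $f_{r-1}$ by the face-by-face rule above, and the series for $g$ converges uniformly because $\|g_r\|_\infty$ is controlled by $\|f_{r-1}\|_\infty$.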
This sketch of the construction of the outer function $g$ reveals the importance of the stationarity condition and its implication of vanishing Jacobian minors in the embedding $E$. Remembering that admissible $E$ are dense, we see that the job of the first layer is simply to impress a certain microscopic texture on the embedded data which makes $g$ constructible (if not learnable). This is the division of labor that feels so striking: $E$ does not try to learn anything but sets up for success the task of finding $g$.
I wonder if a similar division of labor occurs between early and later stages of a NN as it is trained, and propose:
Proposal 1.
Search the natural Jacobian maps for minor concentration.
First, what are these natural Jacobians, and second, what is minor concentration? One may consider either a conventional feedforward DNN or the recently proposed KANets [ZWV24], or more elaborate architectures with residual connections and self-attention. The idea is to watch the data manifold flow through the NN. Initially, at $t = 0$, the data manifold is $M_0$, the values stored in the input neurons, and at time $t$ it is $M_t$, the image of the input in the $t$th layer (say after application of the activation functions). Let $L_t$, $t \geq 1$, be the map between layers $t-1$ and $t$, with $t = 1$ being a very interesting special case. At any point $p$ in the evolving data manifold ($p = x$, the previously defined coordinate on $[0,1]^n$, if $t = 1$) we may consider $dL_t(p)$, the Jacobian matrix (in neuron coordinates) of the forward map at the point $p$. The proposal is to find some context where it is possible to search (or at least sample) this huge family of Jacobians and look for minor concentration; a minimal sketch of the sampling step follows below. Based on understanding KA one may expect, particularly in some initial segment of the NN (perhaps all the way up to the penultimate layer), that training will cause these Jacobians to have their minors concentrated far beyond what would be seen in a random collection (e.g. Gaussian distributed entries) of linear maps. It seems best at first to start from the beginning and set $t = 1$. That is, one should look “in the wild” to see if a KA-style of data preparation is discovered, at least for certain tasks, and certain architectures, as a result of training.
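As a minimal sketch of the sampling step referred to above (the tiny tanh network, its widths, and the uniform sampling of $M_0$ are illustrative assumptions of mine, not prescriptions from the proposal):

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy feedforward net whose layer maps are L_t(y) = tanh(W_t y + b_t).
widths = [4, 16, 16, 8]                       # input layer plus three subsequent layers
params = [(rng.standard_normal((m, n)) / np.sqrt(n), rng.standard_normal(m))
          for n, m in zip(widths[:-1], widths[1:])]

def layer_jacobians(x):
    """Follow one data point through the net; return the Jacobian (in neuron
    coordinates) of each layer map L_t at the point that layer actually sees."""
    jacobians, y = [], x
    for W, b in params:
        pre = W @ y + b
        jacobians.append((1.0 - np.tanh(pre) ** 2)[:, None] * W)  # dL_t at this point
        y = np.tanh(pre)
    return jacobians

# Sample points of the input "data manifold" and collect the family of Jacobians
# to which a minor-concentration functional (sketched after the definition below) would be applied.
samples = rng.uniform(0, 1, size=(100, widths[0]))
family = [layer_jacobians(x) for x in samples]
print("per-layer Jacobian shapes:", [J.shape for J in family[0]])
```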
What is minor concentration?
Given an $m \times n$ matrix $A$ and an integer $k$, $1 \leq k \leq \min(m, n)$, minor concentration should be some quantity that measures how far from uniform the distribution of the absolute values of the $k \times k$ minors of $A$ is. The easiest formula is:
$$ \mathrm{MC}_k(A) \;=\; \frac{\big\| \Lambda^k A \big\|_{\ell^\infty}}{\big\| \Lambda^k A \big\|_{\ell^2}}, $$
where $\Lambda^k A$ is regarded as the vector of all $k \times k$ minors of $A$. That is, the ratio of the $\ell^\infty$ and $\ell^2$ norms.
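A minimal sketch of this functional, under the $\ell^\infty/\ell^2$ reading of the formula above (the function name and the two test matrices are mine, for illustration only):

```python
import numpy as np
from itertools import combinations

def minor_concentration(A, k):
    """MC_k(A): ell-infinity / ell-2 ratio of the vector of k x k minors of A.
    Values near 1 indicate a single dominant minor; small values indicate spread-out minors."""
    rows, cols = A.shape
    minors = np.array([np.linalg.det(A[np.ix_(r, c)])
                       for r in combinations(range(rows), k)
                       for c in combinations(range(cols), k)])
    return np.abs(minors).max() / (np.linalg.norm(minors) + 1e-30)

rng = np.random.default_rng(0)
A_generic = rng.standard_normal((2, 5))                                  # dense, generic linear map
A_ka = np.zeros((2, 5)); A_ka[:, [1, 3]] = rng.standard_normal((2, 2))   # KA-style: three zero columns
print(minor_concentration(A_generic, 2), minor_concentration(A_ka, 2))   # the latter equals 1
```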
As will become clear in the thought experiment below, one might also wish to design special purpose minor concentration functionals which reward not just a large, isolated minor but respond to a row or column of large minors inside $\Lambda^k(J)$, where the superscript $k$ indicates the $k$th exterior power. This type of pattern of large and small minors could plausibly arise through training.
Proposal 2.
Evaluate the effect of forced minor concentration on learning.
Independent of the results of Proposal 1, one could study the effect of encouraging/discouraging minor concentration during training. One way to do this would be to alternate traditional training protocols with a novel step in which a layer map would be updated along the gradient of a new objective function, one measuring MC. If MC is defined as above, these interleaved steps could be $\theta_t \mapsto \theta_t + \epsilon \, \nabla_{\theta_t} \mathrm{MC}$, where $\theta_t$ denotes the parameters of the layer map $L_t$. That is, with MC-learning rate $\epsilon$, move a layer map in the direction of increased minor concentration. The gradient might be taken over the last layer-map variables, although there are many choices with which to experiment. One might expect to find some portions of the NN and some stages of training where “concepts are gelling” (see below) in which a positive $\epsilon$ would accelerate learning and a negative $\epsilon$ delay learning. Interestingly, there could also be regimes where the reverse applies: at an early stage of learning, it could be harmful to have a concept gel prematurely; the NN may be better off keeping an open (Zen-like) mind. Experiments hindering/enhancing MC could get to the heart of “how learning works” [gmrm23].
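One possible rendering of such an interleaved step (a sketch only: the finite-difference gradient, the choice $k = 2$, and applying MC directly to a weight matrix, i.e. to the Jacobian of a purely linear layer, are my simplifications, not the proposal's prescription):

```python
import numpy as np
from itertools import combinations

def minor_concentration(A, k=2):
    """ell-infinity / ell-2 ratio over all k x k minors of A (as sketched above)."""
    rows, cols = A.shape
    minors = np.array([np.linalg.det(A[np.ix_(r, c)])
                       for r in combinations(range(rows), k)
                       for c in combinations(range(cols), k)])
    return np.abs(minors).max() / (np.linalg.norm(minors) + 1e-30)

def mc_step(W, epsilon, k=2, h=1e-5):
    """Move the layer weights W along a finite-difference gradient of MC_k.
    Positive epsilon encourages minor concentration; negative epsilon discourages it."""
    base, grad = minor_concentration(W, k), np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        Wp = W.copy()
        Wp[idx] += h
        grad[idx] = (minor_concentration(Wp, k) - base) / h
    return W + epsilon * grad

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4))
print("MC_2 before:", round(minor_concentration(W), 3))
for _ in range(100):          # in practice these steps would alternate with ordinary SGD updates
    W = mc_step(W, epsilon=5e-2)
print("MC_2 after :", round(minor_concentration(W), 3))
```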
Thought experiment/example
In the KA proof for $n = 2$ (so $2n+1 = 5$), the differential at a generic point $x$ of $[0,1]^2$ will be a $2 \times 5$ matrix $dE_x$, and the salient feature of $dE_x$ is that for any given $x$ it will have three of its five columns entirely zero (which three are zero will vary as $x$ varies), so in particular $\Lambda^2(dE_x)$, the second exterior power, will be a $1 \times 10$ matrix with at most a single nonzero entry. This is the prototypical example of minor concentration. Let’s consider another.
Say that between layers $t-1$ and $t$, the Jacobian of the non-linear map $L_t$ is $J$. In this thought experiment, let’s imagine we have a convolutional NN analyzing a picture of a face. Suppose the $(t-1)$st layer has learned some lines encoded in four neurons, determining a vector in $\mathbb{R}^4$, and say the $t$th layer has three neurons spanning $\mathbb{R}^3$ and may be looking to define a feature. Now $J$ is a $3 \times 4$ matrix. Arbitrarily pick $k = 2$, meaning we will be inquiring on how two “line” neurons stimulate two “feature” neurons. Now $\Lambda^2(J)$ is a $3 \times 6$ matrix. Suppose we see just one of its three rows with large values while the other 12 values are close to zero. Here is a story that we might tell: Let the feature-neurons $a$, $b$, and $c$ label the rows of $J$, so the pairs $\{a,b\}$, $\{a,c\}$, and $\{b,c\}$ label the rows of $\Lambda^2(J)$. Suppose it is the $\{a,c\}$ row with large minors. This means that as the information in the six pairs of “line” neurons varies to first order, we see the information in the $\{a,c\}$ pair vary robustly, but not so with the $\{a,b\}$ and $\{b,c\}$ pairs. That would be consistent with each of these three pairs being responsible for detecting a feature, say eye, nose, and mouth respectively. The most parsimonious explanation for the muted response of the eye and mouth pairs would be that the lines are not suggesting either of these features, so varying the incoming line data leaves the $\{a,b\}$ and $\{b,c\}$ neuron pairs in their “not an eye” and “not a mouth” base point states respectively. There is little or no variation in the eye or mouth recognizing neuron pairs because they are not finding what they have been trained to look for. On the other hand, at least some of the six pairwise line characteristics seem to be describing a variety of nose shapes as they move about. Because training involves both forward- and back-propagation, similar scenarios can be constructed where a column of $\Lambda^2(J)$ would be heavy. Thus, in addition to the most naïve measure of minor concentration (above), one should also look for row-wise and column-wise concentrations in $\Lambda^k(J)$. Looking for such concentrations would always come down to comparing the appropriate functional evaluated on Jacobians from trained NNs with its value on random matrices.
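A numerical rendering of this story (the specific matrix, the labels $a$, $b$, $c$, and the near-zero scale for the pinned neuron are all invented for illustration): with feature neuron $b$ held near its base-point state, only the $\{a,c\}$ row of the $3 \times 6$ matrix $\Lambda^2(J)$ carries appreciable weight.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Rows of J = feature neurons a, b, c; columns = the four "line" neurons.
J = np.vstack([
    rng.standard_normal(4),           # a: responds to the incoming lines
    1e-3 * rng.standard_normal(4),    # b: pinned near its "not an eye / not a mouth" base point
    rng.standard_normal(4),           # c: responds to the incoming lines
])

# Second exterior power: rows indexed by feature pairs {a,b}, {a,c}, {b,c},
# columns by the six pairs of line neurons.
row_pairs = list(combinations(range(3), 2))     # (0,1)={a,b}, (0,2)={a,c}, (1,2)={b,c}
col_pairs = list(combinations(range(4), 2))
L2J = np.array([[np.linalg.det(J[np.ix_(r, c)]) for c in col_pairs] for r in row_pairs])

for label, row in zip(["{a,b}", "{a,c}", "{b,c}"], L2J):
    print(label, "row norm:", round(float(np.linalg.norm(row)), 4))
# Only the {a,c} ("nose") row is heavy; the two rows involving the flat neuron b are muted.
```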
It is worth pointing out that when $k = 1$, minor concentration is quite close to the notion of “sparsity”: 1-minors concentrate when a few matrix entries are large (in absolute value) and the rest small. So, minor concentration for general $k$ is close to looking for sparsity in the exterior powers $\Lambda^k(J)$. It is worth commenting that the old notion of sparsity and its generalization to $k$-minor concentration only make sense in the context of linear maps between based vector spaces. Without preferred bases, neither the individual entries of matrices nor their exterior powers have any invariant meaning.
A caveat
I’d like to mention a final point which arose in conversation with Boris Hanin relating to the phenomenon of neural collapse. Observations presented in [koth22] suggest, for example, that if a convolutional network is trained to recognize farm animals, say cow, sheep, pig, and horse, then the penultimate layer may have the data compressed to some regular tetrahedron, with the four animal types at the four vertices; but these tetrahedron vertices will not generally be oriented along neuron axes. To put them in this form a rotation may be required. This suggests that information midway through the net may often be held diffusely and be difficult to detect within a small subset of neurons, and hence difficult to detect through minor concentration, at least for small $k$. However, sparsification, if applied during training, may already be emphasizing the neuron basis, and so facilitate concentration of $k$-minors for $k > 1$. If this is so, it could present an interesting avenue towards understanding how sparsification at the original ($k = 1$) matrix level could enhance minor concentration and facilitate learning.
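A toy illustration of this basis-dependence worry (entirely my own construction, not from the conversation cited above): a KA-style Jacobian with zero columns is maximally concentrated in the neuron basis, but after a generic rotation of the neuron coordinates the same map has its maximal minors spread out, so the concentration would go undetected.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
J = np.zeros((2, 5))
J[:, [1, 3]] = rng.standard_normal((2, 2))        # KA-style: concentrated in the neuron basis
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # a generic rotation of the five neuron coordinates

def mc2(A):
    """MC_2 as in the earlier sketch: ell-inf / ell-2 ratio over the ten 2 x 2 maximal minors."""
    m = np.array([np.linalg.det(A[:, list(c)]) for c in combinations(range(5), 2)])
    return np.abs(m).max() / np.linalg.norm(m)

print(mc2(J), mc2(J @ Q))   # 1.0 in the neuron basis vs. a noticeably smaller value after rotation
```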