Gaussian process regression with additive periodic kernels for two-body interaction analysis in coupled phase oscillators
Abstract
We propose a Gaussian process regression framework with additive periodic kernels for the analysis of two-body interactions in coupled oscillator systems. While finite-order Fourier expansions determined by Bayesian methods can still yield artifacts such as high-amplitude, high-frequency vibrations, our additive periodic kernel approach effectively circumvents these issues. Furthermore, by exploiting the additive and periodic nature of the coupling functions, we significantly reduce the effective dimensionality of the inference problem. We first validate our method on simple coupled phase oscillators and then demonstrate its robustness on more complex systems, including Van der Pol and FitzHugh-Nagumo oscillators, under conditions of biased or limited data. We next apply our approach to spiking neural networks modeled by Hodgkin-Huxley equations, in which we successfully recover the underlying interaction functions. These results highlight the flexibility and stability of Gaussian process regression in capturing nonlinear, periodic interactions in oscillator networks. Our framework provides a practical alternative to conventional methods, enabling data-driven studies of synchronized rhythmic systems across physics, biology, and engineering.
1 Introduction
The coupled phase-oscillator model [1, 2, 3] is commonly used to describe rhythmic behavior of various complex systems, such as the coordinated activity of neurons in the brain [4] or the synchronized flashing of fireflies [5]. This model allows for significant dimensionality reduction and can provide insight into the rhythmic behavior of the original system.
The coupling functions between the oscillators are represented as functions of the phase differences, and they determine the synchronization characteristics of the system. Estimating the coupling functions from observed data is therefore a crucial step in revealing the underlying mechanisms of rhythmic phenomena. Many studies have addressed this issue using methods such as Fourier series expansion [6, 7, 8, 9]. However, the truncation order of the Fourier series, which determines the complexity of the model and significantly affects the approximation accuracy, is typically chosen arbitrarily and is subject to the limitations of the evidence approximation. Furthermore, the estimated Fourier coefficients may take large absolute values, leading to high-amplitude, high-frequency oscillations. This phenomenon is undesirable since it means that the estimated coupling functions oscillate at unnaturally high frequencies.
To overcome these difficulties associated with Fourier truncation, we use Gaussian process regression, a non-parametric regression method that allows flexible modeling of complex functions through the use of a covariance function [10]. Specifically, we propose a novel approach that incorporates key characteristics of the coupling functions—periodicity and additivity—into the kernel, introducing an additive periodic kernel.
This paper is organized as follows. In Section 2, we briefly review coupled phase-oscillator models and Gaussian processes. In Section 3, we propose a method for extracting the coupling functions of oscillator systems from data exhibiting rhythmic phenomena using Gaussian process regression, which allows more flexible estimation than existing methods. The proposed method is applied to various systems in Section 4. In particular, we demonstrate the superiority of the proposed method over existing methods using coupled FitzHugh-Nagumo oscillators under conditions of biased or limited data. Furthermore, the proposed method is applied to a many-body system of spiking neurons, and it is confirmed that all coupling functions are estimated with high accuracy. Finally, we conclude the paper and provide further remarks in Section 5.
2 Preliminaries
2.1 Coupled phase oscillators
Let us consider a system comprising $N$ interacting dynamical systems:

(1) $\dot{\boldsymbol{X}}_i = \boldsymbol{F}_i(\boldsymbol{X}_i) + \sum_{j \neq i} \boldsymbol{G}_{ij}(\boldsymbol{X}_i, \boldsymbol{X}_j), \quad i = 1, \dots, N.$

Here, the function $\boldsymbol{F}_i$ represents the spontaneous dynamics of the element $i$, and $\boldsymbol{G}_{ij}$ represents a weak action from the element $j$ to the element $i$. We assume that each element has a stable limit cycle (i.e., each element is an oscillator). It is well known that such a system can be reduced to coupled phase oscillators [1, 2]:

(2) $\dot{\theta}_i = \omega_i + \sum_{j \neq i} \Gamma_{ij}(\theta_i - \theta_j).$

Here, $\theta_i$ and $\omega_i$ are the phase and natural frequency of the oscillator $i$, and the functions $\Gamma_{ij}$ are coupling functions, which represent the action from the oscillator $j$ to the oscillator $i$.
The advantage of employing the phase-oscillator model lies in the significant simplification of the system. While the original system (1) is generally multidimensional, the phase oscillators (2) assign only one variable to each limit cycle. Consequently, the focus can be directed solely towards rhythm synchronization characteristics.
2.2 Estimation of coupling functions
In this paper, we estimate all phase coupling functions $\Gamma_{ij}$ from phase time series $\{\theta_i(t_k)\}$ with $\theta_i(t_k) \in \mathbb{T}$ for $i = 1, \dots, N$. Here, $\mathbb{T} = \mathbb{R}/2\pi\mathbb{Z}$ represents the one-dimensional torus. Since the phase coupling functions determine the synchronization characteristics of the oscillators, it is desirable to estimate these functions from the observed time series.
However, obtaining time-series data that comprehensively spans a high-dimensional state space is exceedingly challenging. Even if such data were acquired, the amount of data would be enormous, necessitating significant computational resources for estimation. To avoid this difficulty, we focus on the fact that the phase coupling functions are represented as the sum of two-body interactions, which enables us to use an additive kernel. See section 3.1 for details.
3 Methodology: Gaussian process regression
We employ Gaussian process regression to estimate the coupling functions. In this section, we briefly review the procedure of Gaussian process regression.
Suppose we have time-series data $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^{N}$ with $N$ data points and a function $f$, such that

(3) $y_n = f(x_n) + \varepsilon_n$

for $n = 1, \dots, N$, where $\varepsilon_n \sim \mathcal{N}(0, \sigma^2)$ is a random variable that represents noise in the output. Our task is to predict the unknown function $f$ from the data $\mathcal{D}$ using a Gaussian process.
We start by assuming that the unknown regression function $f$ is drawn from a given Gaussian process prior,

(4) $f \sim \mathcal{GP}\big(m(x),\, k(x, x')\big),$

where $m$ is called a mean function and $k$ is called a covariance function. The covariance function should be chosen so that it reflects one's prior knowledge or belief about the regression function $f$; we discuss this in the next subsection. In many cases, the mean function is set to the constant zero function for simplicity.
The posterior distribution of $f$ can be calculated analytically by linear algebra, and is again a Gaussian process. With $m \equiv 0$, the posterior is given by the following closed form:

(5) $f \mid \mathcal{D} \sim \mathcal{GP}\big(\bar{m}(x),\, \bar{k}(x, x')\big),$

where the posterior mean function $\bar{m}$ and the posterior covariance function $\bar{k}$ are

(6) $\bar{m}(x) = \mathbf{k}(x)^{\top} (K + \sigma^2 I)^{-1} \mathbf{y},$

(7) $\bar{k}(x, x') = k(x, x') - \mathbf{k}(x)^{\top} (K + \sigma^2 I)^{-1} \mathbf{k}(x'),$

where $K_{nm} = k(x_n, x_m)$, $\mathbf{k}(x) = \big(k(x, x_1), \dots, k(x, x_N)\big)^{\top}$, and $\mathbf{y} = (y_1, \dots, y_N)^{\top}$.
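As a concrete illustration, the closed-form posterior above can be computed in a few lines of NumPy. This is a minimal sketch with names of our own choosing; it assumes a zero mean function and, for the demonstration, a squared-exponential kernel:

```python
import numpy as np

def gp_posterior(X, y, Xs, kernel, noise_var):
    """Posterior mean and covariance of a zero-mean GP at test inputs Xs.

    Direct implementation of the closed-form GP posterior; `kernel` is any
    positive-definite covariance function k(x, x')."""
    K = kernel(X[:, None], X[None, :])          # N x N Gram matrix
    Ks = kernel(Xs[:, None], X[None, :])        # M x N cross-covariances
    Kss = kernel(Xs[:, None], Xs[None, :])      # M x M test covariances
    # Cholesky of the noisy covariance K + sigma^2 I for numerically stable solves
    L = np.linalg.cholesky(K + noise_var * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha                           # posterior mean
    V = np.linalg.solve(L, Ks.T)
    cov = Kss - V.T @ V                         # posterior covariance
    return mean, cov

# Example with a squared-exponential kernel on noisy samples of sin(x)
rbf = lambda a, b: np.exp(-0.5 * (a - b) ** 2)
X = np.linspace(0, 2 * np.pi, 20)
y = np.sin(X) + 0.1 * np.random.default_rng(0).standard_normal(20)
Xs = np.linspace(0, 2 * np.pi, 50)
mu, Sigma = gp_posterior(X, y, Xs, rbf, noise_var=0.01)
```

The Cholesky factorization avoids forming the explicit inverse, which is the standard numerically stable way to evaluate these formulas.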
3.1 Choice of the covariance function
The covariance function is a fundamental component of Gaussian process regression, as it encodes our assumptions about the function we wish to predict (see [11] for a detailed discussion). In our problem setting, the coupling function has the torus as its domain, which means the function is $2\pi$-periodic in each argument, so it is appropriate to use a periodic kernel as the covariance function. In addition, since the coupling function is additive (i.e., it is represented as a sum of two-body interactions), the kernel can also be taken to be additive [12]. In conclusion, we propose to use the following additive periodic kernel to estimate the coupling functions:

(8) $k(\boldsymbol{\theta}, \boldsymbol{\theta}') = \sum_{j=1}^{N} \sigma_j^2 \exp\!\left( -b_j^2 \sin^2 \frac{\theta_j - \theta_j'}{2} \right).$

Here, $\sigma_j$ and $b_j$ are positive hyperparameters. $\sigma_j^2$ determines the amplitude of the covariance function, and $b_j$ can be regarded as an inverse lengthscale, which specifies the width of the covariance function along the $\theta_j$-axis.
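A minimal NumPy sketch of such an additive periodic kernel follows. The exact parameterization of Eq. (8) is an assumption here: this version uses the standard periodic form $\sigma_j^2 \exp(-b_j^2 \sin^2((\theta_j - \theta_j')/2))$ per coordinate, and all names are illustrative:

```python
import numpy as np

def additive_periodic_kernel(theta, theta_p, sigma, b):
    """Additive periodic kernel in the spirit of Eq. (8): a sum over the
    phase coordinates of 2*pi-periodic terms, where sigma[j]**2 sets the
    amplitude and b[j] the inverse lengthscale of the j-th term."""
    diff = np.asarray(theta) - np.asarray(theta_p)   # per-coordinate phase differences
    return np.sum(sigma**2 * np.exp(-(b**2) * np.sin(diff / 2.0)**2), axis=-1)

# The kernel is 2*pi-periodic in every coordinate:
rng = np.random.default_rng(1)
th = rng.uniform(0, 2 * np.pi, 3)
tp = rng.uniform(0, 2 * np.pi, 3)
sig, b = np.ones(3), 2.0 * np.ones(3)
k1 = additive_periodic_kernel(th, tp, sig, b)
k2 = additive_periodic_kernel(th + 2 * np.pi, tp, sig, b)
```

Because each summand depends on only one coordinate, the prior over the regression function is itself a sum of one-dimensional periodic functions, which is exactly the structure of the coupling terms.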
3.2 Optimization
In the Gaussian process regression described so far, the hyperparameters of the kernel and the noise strength $\sigma^2$ remain to be determined. For brevity of notation, we collect these parameters into $\Theta$.

To estimate $\Theta$, we consider maximum likelihood estimation, which selects the parameters most likely to have generated the data $\mathcal{D}$. The (marginal) log-likelihood for the parameters is

(9) $\log p(\mathbf{y} \mid \Theta) = -\frac{1}{2} \mathbf{y}^{\top} K_{\Theta}^{-1} \mathbf{y} - \frac{1}{2} \log |K_{\Theta}| - \frac{N}{2} \log 2\pi,$

where $K_{\Theta} = K + \sigma^2 I$ is the covariance matrix for the noisy data $\mathbf{y}$. The most probable parameters are therefore estimated by finding the maximum of the log-likelihood function.
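Numerically, the log marginal likelihood of this standard form is best evaluated through a Cholesky factorization, which yields both the quadratic term and the log-determinant stably. A sketch with illustrative names:

```python
import numpy as np

def log_marginal_likelihood(K_theta, y):
    """Log marginal likelihood of a zero-mean GP:
    -1/2 y^T K^-1 y - 1/2 log|K| - N/2 log(2 pi),
    where K_theta = K + sigma^2 I already includes the noise variance."""
    N = len(y)
    L = np.linalg.cholesky(K_theta)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # log|K| = 2 * sum(log diag(L)) via the Cholesky factor
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * N * np.log(2 * np.pi))

# Example: evaluate the log-likelihood on a toy periodic-kernel dataset
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, 10)
y = np.sin(X)
K = np.exp(-np.sin((X[:, None] - X[None, :]) / 2) ** 2) + 0.1 * np.eye(10)
ll = log_marginal_likelihood(K, y)
```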
Finding the hyperparameters that maximize the marginal likelihood is formulated as an optimization problem. In this paper, we use gradient ascent on the log-likelihood (equivalently, gradient descent on its negative) to find the maximum. The hyperparameters $\Theta$ are updated using the gradient in the following manner:

(10) $\Theta \leftarrow \Theta + \eta \, \nabla_{\Theta} \log p(\mathbf{y} \mid \Theta),$

where $\eta$ denotes the learning rate.
The stochastic gradient descent (SGD) method, regarded as a stochastic approximation of gradient descent, is another way to optimize. In SGD, the gradient is computed from a different minibatch at each step, and the parameters are updated accordingly. Its greatest benefit is that it requires less computation time to converge than the full-batch gradient method, which uses all the data at every step. It is also thought to be less likely to become trapped in local optima, thanks to the stochastic fluctuations in the gradient direction across minibatches. Note that the derivatives of the covariance function with respect to its hyperparameters can be calculated analytically, which enables efficient optimization using automatic differentiation. The implementation was carried out using GPflow [13].
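The following self-contained sketch illustrates the update of Eq. (10) for a single periodic kernel. Gradients are taken by central finite differences purely to avoid external dependencies; the actual implementation uses analytic/automatic differentiation through GPflow, and all function and variable names here are our own:

```python
import numpy as np

def fit_periodic_gp(X, y, n_steps=100, lr=0.01):
    """Maximize the marginal log-likelihood over the amplitude, squared
    inverse lengthscale, and noise variance of a single periodic kernel.
    Hyperparameters are optimized in log space to keep them positive;
    the best iterate found is returned."""
    def mll(log_p):
        s2, b2, noise = np.exp(log_p)               # amplitude, b^2, noise variance
        K = s2 * np.exp(-b2 * np.sin((X[:, None] - X[None, :]) / 2.0) ** 2)
        L = np.linalg.cholesky(K + noise * np.eye(len(X)))
        a = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return -0.5 * y @ a - np.log(np.diag(L)).sum() - 0.5 * len(X) * np.log(2 * np.pi)

    p = np.zeros(3)                                 # start at s2 = b^2 = noise = 1
    best_p, best_ll = p.copy(), mll(p)
    for _ in range(n_steps):
        # central finite-difference gradient (autodiff is used in practice)
        grad = np.array([(mll(p + h) - mll(p - h)) / 2e-4
                         for h in 1e-4 * np.eye(3)])
        p = np.clip(p + lr * grad, -8.0, 8.0)       # gradient ascent step
        ll = mll(p)
        if ll > best_ll:
            best_p, best_ll = p.copy(), ll
    return np.exp(best_p), best_ll

rng = np.random.default_rng(0)
Xd = rng.uniform(0, 2 * np.pi, 30)
yd = np.sin(Xd) + 0.1 * rng.standard_normal(30)
params, ll = fit_periodic_gp(Xd, yd)
```

Replacing the full-data gradient with minibatch gradients at each step turns this loop into the SGD variant described above.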
3.3 The significance of using Gaussian process regression
One of the main merits of Gaussian process regression is its ability to provide uncertainty estimates for its predictions, which is useful when the data are noisy or sparse. It also allows the modeling of complex functions without requiring the specification of a fixed functional form, which can be difficult to determine in many cases. Additionally, the use of a kernel function allows prior knowledge about the function, such as its smoothness or periodicity, to be incorporated, which can improve the accuracy of the estimates.
Gaussian process regression has the advantage that all calculations can be done in closed form using only matrix operations. However, as the number of data points $N$ increases, the inversion of $N \times N$ matrices is required, and the memory and computational costs become enormous. Many sparse approximations have been proposed to overcome the computational complexity of Gaussian process regression [14, 15, 16, 17, 18].
4 Numerical Simulations
4.1 Coupled phase oscillators
[Figure 1: Estimation results for the coupled phase oscillators.]
To confirm the validity of our approach, we first consider a simple 3-body system of coupled phase oscillators:

(11)

where $\theta_1$, $\theta_2$, $\theta_3$ are the phases of the oscillators. We prepare the time series by solving the ODEs (11) with the forward Euler method from to using a randomly selected initial value, and sample the data with a sampling rate of 0.2. We repeat this calculation 50 times. The estimation results are shown in Figure 1.
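The simulation protocol can be sketched as follows. Since the concrete frequencies and coupling functions of Eq. (11) are not reproduced here, the values below (sinusoidal couplings with strength 0.1 and near-unit frequencies) are placeholders of our own:

```python
import numpy as np

def simulate_phase_oscillators(omega, Gamma, theta0, t_end=500.0, dt=0.01,
                               sample_dt=0.2):
    """Forward-Euler integration of a coupled phase-oscillator system of the
    generic form of Eq. (2).

    omega : natural frequencies, shape (N,)
    Gamma : Gamma[i][j] is the coupling function from oscillator j to i
    """
    n = len(omega)
    theta = np.array(theta0, dtype=float)
    stride = int(round(sample_dt / dt))         # record every `stride` Euler steps
    samples = [theta.copy()]
    for step in range(1, int(round(t_end / dt)) + 1):
        dtheta = omega + np.array([
            sum(Gamma[i][j](theta[i] - theta[j]) for j in range(n) if j != i)
            for i in range(n)])
        theta = (theta + dt * dtheta) % (2 * np.pi)
        if step % stride == 0:
            samples.append(theta.copy())
    return np.array(samples)

# Hypothetical 3-oscillator example with sinusoidal couplings
rng = np.random.default_rng(0)
G = [[(lambda d, k=0.1: -k * np.sin(d)) for _ in range(3)] for _ in range(3)]
traj = simulate_phase_oscillators(np.array([1.0, 1.1, 0.9]), G,
                                  rng.uniform(0, 2 * np.pi, 3), t_end=50.0)
```

The sampled phases, paired with finite-difference estimates of their derivatives, form the regression data for the Gaussian process.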
4.2 Coupled Van der Pol oscillators
Next, we apply the proposed method to coupled Van der Pol oscillators:
(12) |
with for . The values of parameters are .
We reconstruct the phase time series from the obtained time-series data as follows. First, we calculate the geometric angle:

(13) $\varphi_i(t) = \arg\!\big( \hat{x}_i(t) + \mathrm{i}\, \hat{y}_i(t) \big),$

where $\hat{x}_i$ and $\hat{y}_i$ are the centered time series of $x_i$ and $y_i$, respectively. Even though $\varphi_i(t) \in \mathbb{T}$ for all $t$, it is inappropriate to regard $\varphi_i$ as the phase time series of the oscillator $i$, because $\varphi_i$ does not grow uniformly in time in the absence of interactions and noise. To overcome this difficulty, the following transformation from $\varphi_i$ to $\theta_i$ suffices [19, 20]:

(14) $\theta_i(t) = 2\pi \int_{0}^{\varphi_i(t)} \rho_i(\varphi)\, \mathrm{d}\varphi,$

where $\rho_i$ denotes the empirical distribution of $\varphi_i$. We regard $\theta_i$ as the phase of the oscillator $i$ and use these phases in the regression.
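The two-step reconstruction above can be sketched as follows, with the empirical-distribution integral of Eq. (14) implemented through the empirical CDF (i.e., normalized ranks); the function name and test signal are illustrative:

```python
import numpy as np

def reconstruct_phase(x, y):
    """Reconstruct a phase time series from a 2-D signal: compute the
    geometric angle ("protophase") of the centered trajectory, then map it
    through its empirical distribution so that the resulting phase grows
    uniformly on average (Kralemann et al.)."""
    # Geometric angle of the centered trajectory, wrapped to [0, 2*pi)
    proto = np.arctan2(y - y.mean(), x - x.mean()) % (2 * np.pi)
    # theta = 2*pi * (empirical CDF of the protophase), via rank statistics
    ranks = np.argsort(np.argsort(proto))
    return 2 * np.pi * (ranks + 1) / (len(proto) + 1)

# A deformed limit cycle: the raw angle is non-uniform, the transformed one is
t = np.linspace(0, 40 * np.pi, 4000, endpoint=False)
x = np.cos(t) + 0.5 * np.cos(2 * t)
y = 1.5 * np.sin(t)
theta = reconstruct_phase(x, y)
```

By construction the transformed phases are uniformly distributed on the circle, which is the defining property of a proper phase variable for an unperturbed oscillator.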
Estimation results are shown in Figure 2. The results are highly consistent with the true coupling functions derived using the adjoint method. For comparison purposes, we also present the estimated coupling functions with the existing method [9].
[Figure 2: Estimation results for the coupled Van der Pol oscillators, compared with the existing method.]
4.3 Coupled FitzHugh-Nagumo oscillators
Here is another example which demonstrates the effectiveness of the proposed method. Consider the following coupled FitzHugh-Nagumo oscillators [21, 22]:
(15) |
with for . Here, .
We estimated the coupling functions for various amounts of data (Figure 3). Specifically, we varied the number of initial values used to create the data (denoted by ) and the number of oscillator rounds per initial value used for estimation (denoted by ). We set the sampling rate to 0.2. The phase time series were reconstructed using the same method as in Section 4.2.
From Figure 3, it is evident that when the data sample size is small, our proposed method accurately matches the theoretical coupling function, whereas the existing method fails. This implies that our method is superior to existing methods when the amount of data is small and the data are biased.
[Figure 3: Estimation results for the coupled FitzHugh-Nagumo oscillators with various amounts of data.]
4.4 Spiking Neural Network oscillators
Finally, we consider a network of multiple coupled Hodgkin-Huxley models [23] and estimate the coupling functions for all of them. There are seven neurons, five of which are excitatory and two of which are inhibitory. Details of the Hodgkin-Huxley model are summarized in Appendix A.
We employed a kernel function in the form of equation (8) with . The lower graphs of Figures 4 and 5 show the estimation results compared to the theoretical coupling functions obtained by the phase reduction approach. We confirm that the coupling functions between the oscillators estimated by Gaussian process regression are qualitatively consistent with the theoretical results.
[Figure 4: Estimated coupling functions of the spiking neural network compared with theoretical ones.]
[Figure 5: Estimated coupling functions of the spiking neural network compared with theoretical ones (continued).]
5 Conclusions and Discussions
In this paper, we have focused on real data representing rhythmic phenomena and considered estimating the coupling functions of the phase-oscillator model behind them. A method for estimating the coupling functions using Gaussian process regression was proposed. By designing the kernel function flexibly, the regression can be performed without a finite-order approximation by Fourier series. We confirmed that this works well in the case of simple phase oscillators, the Van der Pol equations, and the FitzHugh-Nagumo equations. We also confirmed that the coupling functions can be successfully estimated, at least qualitatively, for a many-body system of as many as seven coupled spiking neurons.
Finally, we comment on the proposed method using Gaussian process regression. The main idea was to reduce the problem to a regression of the vector field of the phase differential equations. However, the data used in the regression may contain systematic errors because the derivative is approximated by finite-time differences. To overcome this problem, an idea has recently been proposed to compute the parameter derivatives of time-series data obtained from parameterized vector fields using the adjoint equations. We leave the application of the adjoint equations to the estimation of coupling functions as future work.
Acknowledgements
This work was supported in part by the following: MEXT KAKENHI Grant Numbers JP23H04467; JSPS KAKENHI Grant Numbers JP24H00723, JP22KJ1902, JP20K20520; JST BOOST Grant Number JPMJBS2407.
References
- [1] Yoshiki Kuramoto. Chemical oscillations, waves, and turbulence. In Springer Series in Synergetics, pages 22–34, 1984.
- [2] Arthur T. Winfree. The geometry of biological time. Springer New York, NY, 1980.
- [3] Hiroya Nakao. Phase reduction approach to synchronisation of nonlinear oscillators. Contemporary Physics, 57(2):188–214, 2016.
- [4] Atthaphon Viriyopase, Raoul-Martin Memmesheimer, and Stan Gielen. Analyzing the competition of gamma rhythms with delayed pulse-coupled oscillators in phase representation. Phys. Rev. E, 98:022217, Aug 2018.
- [5] B. Ermentrout. An adaptive model for synchrony in the firefly pteroptyx malaccae. Journal of Mathematical Biology, 29(6):571–585, 1991.
- [6] Roberto F. Galán, G. Bard Ermentrout, and Nathaniel N. Urban. Efficient estimation of phase-resetting curves in real neurons and its significance for neural-network modeling. Phys. Rev. Lett., 94:158101, Apr 2005.
- [7] Isao T. Tokuda, Swati Jain, István Z. Kiss, and John L. Hudson. Inferring phase equations from multivariate time series. Phys. Rev. Lett., 99:064101, Aug 2007.
- [8] Tomislav Stankovski, Andrea Duggento, Peter V. E. McClintock, and Aneta Stefanovska. Inference of time-evolving coupled dynamical systems in the presence of noise. Phys. Rev. Lett., 109:024101, Jul 2012.
- [9] Kaiichiro Ota and Toshio Aoyagi. Direct extraction of phase dynamics from fluctuating rhythmic data based on a Bayesian approach, 2014.
- [10] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
- [11] Motonobu Kanagawa, Philipp Hennig, Dino Sejdinovic, and Bharath K Sriperumbudur. Gaussian processes and kernel methods: A review on connections and equivalences. arXiv preprint arXiv:1807.02582, 2018.
- [12] David K Duvenaud, Hannes Nickisch, and Carl Rasmussen. Additive gaussian processes. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 24. Curran Associates, Inc., 2011.
- [13] Alexander G. de G. Matthews, Mark van der Wilk, Tom Nickson, Keisuke Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, and James Hensman. GPflow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research, 18(40):1–6, 2017.
- [14] Edward Snelson and Zoubin Ghahramani. Sparse gaussian processes using pseudo-inputs. In Proceedings of the 18th International Conference on Neural Information Processing Systems, page 1257–1264, Cambridge, MA, USA, 2005. MIT Press.
- [15] Joaquin Quiñonero Candela and Carl Edward Rasmussen. A unifying view of sparse approximate gaussian process regression. J. Mach. Learn. Res., 6:1939–1959, 2005.
- [16] Michalis Titsias. Variational learning of inducing variables in sparse gaussian processes. In Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics, volume 5 of Proceedings of Machine Learning Research, pages 567–574. PMLR, 16–18 Apr 2009.
- [17] James Hensman, Nicolò Fusi, and Neil D. Lawrence. Gaussian processes for big data. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, UAI’13, page 282–290, Arlington, Virginia, USA, 2013. AUAI Press.
- [18] James Hensman, Alexander Matthews, and Zoubin Ghahramani. Scalable Variational Gaussian Process Classification. In Guy Lebanon and S. V. N. Vishwanathan, editors, Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38 of Proceedings of Machine Learning Research, pages 351–360, San Diego, California, USA, 09–12 May 2015. PMLR.
- [19] Björn Kralemann, Laura Cimponeriu, Michael Rosenblum, Arkady Pikovsky, and Ralf Mrowka. Uncovering interaction of coupled oscillators from data. Phys. Rev. E, 76:055201, 2007.
- [20] Björn Kralemann, Laura Cimponeriu, Michael Rosenblum, Arkady Pikovsky, and Ralf Mrowka. Phase dynamics of coupled oscillators reconstructed from data. Phys. Rev. E, 77:066205, Jun 2008.
- [21] Richard FitzHugh. Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal, 1(6):445–466, 1961.
- [22] J. Nagumo, S. Arimoto, and S. Yoshizawa. An active pulse transmission line simulating nerve axon. Proceedings of the IRE, 50(10):2061–2070, 1962.
- [23] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4):500–544, 1952.
- [24] A. Destexhe, Z. F. Mainen, and T. J. Sejnowski. An Efficient Method for Computing Synaptic Conductances Based on a Kinetic Model of Receptor Binding. Neural Computation, 6(1):14–18, 01 1994.
Appendix A Spiking neurons
We describe below the details of the Hodgkin-Huxley model [23]. The excitatory neurons obey

(16) $C \dot{V}_i = -\bar{g}_{\mathrm{Na}}\, m_i^3 h_i (V_i - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n_i^4 (V_i - E_{\mathrm{K}}) - \bar{g}_{L} (V_i - E_{L}) + I_i, \qquad \dot{x}_i = \alpha_x(V_i)(1 - x_i) - \beta_x(V_i)\, x_i \quad (x = m, h, n),$

with the parameter values . The auxiliary functions $\alpha_x$ and $\beta_x$ are the voltage-dependent rate functions of the model. The inhibitory neurons obey a model of fast-spiking neurons,

(17)

with the parameter values . The auxiliary functions are again the corresponding voltage-dependent rate functions.
For each cell $i$, the input current $I_i$ is the sum of the bias and synaptic currents,

(18) $I_i = I_i^{\mathrm{bias}} + \sum_{j \in \mathcal{N}_i} I_{ij}^{\mathrm{syn}}.$

Here, $\mathcal{N}_i$ denotes the set of indices of the cells that send synaptic inputs to the $i$-th cell. We set the bias currents as for , respectively.
The current flowing through the synapses, $I_{ij}^{\mathrm{syn}}$, is modeled using the kinetic synapse model [24], where it is represented as

(19) $I_{ij}^{\mathrm{syn}} = g_{ij}\, s_{ij}(t)\, \big( E_{ij} - V_i \big).$

The fraction of bound receptor proteins is represented by $s_{ij}$, and its dynamics are described by the following equation:

(20) $\dot{s}_{ij} = \alpha\, [T]_j\, (1 - s_{ij}) - \beta\, s_{ij},$

where $[T]_j$ is the concentration of neurotransmitters, which is set to 1 when a spike is emitted by the presynaptic cell $j$ and reset to 0 after 1 millisecond. The constants $\alpha$ and $\beta$ govern the kinetics of $s_{ij}$, while $E_{ij}$ is the reversal potential and $g_{ij}$ is the synaptic conductance. The values used for excitatory and inhibitory synapses are and , respectively. Additionally, weak, independent Gaussian white noise is added to the membrane voltage $V$ and the channel variables $m$, $h$, and $n$, with zero mean and correlations $\langle \xi_i(t)\, \xi_j(t') \rangle \propto \delta_{ij}\, \delta(t - t')$, where $i$ and $j$ are the cell indices. The noise strengths used are and .
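The gating dynamics of Eq. (20) can be integrated with the forward Euler method as sketched below. The rate constants, spike times, and pulse duration handling are illustrative assumptions of our own, not the values used in the paper:

```python
import numpy as np

def simulate_gate(spike_times, t_end=50.0, dt=0.01, alpha=1.1, beta=0.19,
                  pulse_ms=1.0):
    """Euler integration of the kinetic synapse gate:
    ds/dt = alpha*[T]*(1 - s) - beta*s, with [T] = 1 for `pulse_ms`
    milliseconds after each presynaptic spike and 0 otherwise."""
    n = int(round(t_end / dt))
    s = np.zeros(n + 1)
    for k in range(n):
        t = k * dt
        # Transmitter concentration: 1 during the post-spike pulse, else 0
        T = 1.0 if any(ts <= t < ts + pulse_ms for ts in spike_times) else 0.0
        s[k + 1] = s[k] + dt * (alpha * T * (1.0 - s[k]) - beta * s[k])
    return s

# Gate response to two presynaptic spikes (times in ms)
s = simulate_gate([5.0, 25.0])
```

Each spike drives the gate toward its saturated value $\alpha/(\alpha+\beta)$ during the pulse, after which it decays exponentially at rate $\beta$.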