
PDE-Net 2.0: Learning PDEs from Data with A Numeric-Symbolic Hybrid Deep Network

Zichao Long
School of Mathematical Sciences
Peking University
zlong@pku.edu.cn
Yiping Lu
School of Mathematical Sciences
Peking University
luyiping9712@pku.edu.cn
Bin Dong
Beijing International Center for Mathematical Research
and Center for Data Science
Peking University
dongbin@math.pku.edu.cn
Abstract

Partial differential equations (PDEs) are commonly derived based on empirical observations. However, recent advances in technology enable us to collect and store massive amounts of data, which offers new opportunities for data-driven discovery of PDEs. In this paper, we propose a new deep neural network, called PDE-Net 2.0, to discover (time-dependent) PDEs from observed dynamic data with minor prior knowledge on the underlying mechanism that drives the dynamics. The design of PDE-Net 2.0 is based on our earlier work [1], where the original version of PDE-Net was proposed. PDE-Net 2.0 combines numerical approximation of differential operators by convolutions with a symbolic multi-layer neural network for model recovery. Compared with existing approaches, PDE-Net 2.0 has greater flexibility and expressive power because it learns both the differential operators and the nonlinear response function of the underlying PDE model. Numerical experiments show that PDE-Net 2.0 has the potential to uncover the hidden PDE of the observed dynamics and to predict the dynamical behavior for a relatively long time, even in a noisy environment.

Keywords Partial differential equations  \cdot dynamic system  \cdot convolutional neural network  \cdot symbolic neural network

1 Introduction

Differential equations, especially partial differential equations (PDEs), play a prominent role in many disciplines in describing the governing physical laws underlying a given system of interest. Traditionally, PDEs are derived mathematically or physically based on basic principles, e.g. from Schrödinger’s equations in quantum mechanics to molecular dynamics models, from Boltzmann equations to Navier-Stokes equations, etc. However, the mechanisms behind many complex systems in modern applications (such as many problems in multiphase flow, neuroscience, finance, biological science, etc.) are still largely unclear, and the governing equations of these systems are commonly obtained by empirical formulas [2, 3]. With the rapid development of sensors, computational power, and data storage in the last decade, huge quantities of data can now be easily collected, stored and processed. Such vast quantities of data offer new opportunities for data-driven discovery of (potentially new) physical laws. One may then ask the following intriguing question: can we learn a PDE model to approximate the observed complex dynamic data?

1.1 Existing Work and Motivations

Earlier attempts at data-driven discovery of hidden physical laws include [4, 5]. Their main idea is to compare numerical differentiations of the experimental data with analytic derivatives of candidate functions, and to apply symbolic regression and evolutionary algorithms to determine the nonlinear dynamical system. When the form of the nonlinear response function of a PDE is known except for some scalar parameters, [6] presented a framework to learn these unknown parameters by introducing regularity between two consecutive time steps using Gaussian processes. Later, in [7], a PDE-constrained interpolation method was introduced to uncover the unknown parameters of the PDE model. An alternative approach is known as sparse identification of nonlinear dynamics (SINDy) [8, 9, 10, 11, 12, 13]. The key idea of SINDy is to first construct a dictionary of simple functions and partial derivatives that are likely to appear in the equations. Then, it takes advantage of sparsity-promoting techniques (e.g. $\ell_1$ regularization) to select the candidates that most accurately represent the data. In [14], the authors studied the problem of sea surface temperature prediction (SSTP). They assumed that the underlying physical model was an advection-diffusion equation and designed a special neural network according to the general solution of the equation. Compared with traditional numerical methods, their approach showed improvements in accuracy and computational efficiency.

Recent work has greatly advanced the progress of PDE identification from observed data. However, SINDy requires building a sufficiently large dictionary, which may lead to a high memory load and computational cost, especially when the number of model variables is large. Furthermore, the existing methods based on SINDy treat the spatial and temporal information of the data separately and do not take full advantage of the temporal dependence of the PDE model. Although the framework presented by [6, 7] is able to learn hidden physical laws using less data than the approaches based on SINDy, the explicit form of the PDE is assumed to be known except for a few scalar learnable parameters. The approach of [14] is specifically designed for advection-diffusion equations and cannot be readily extended to other types of equations. Therefore, extracting governing equations from data in a less restrictive setting remains a great challenge.

The main objective of this paper is to design a transparent deep neural network that uncovers hidden PDE models from observed complex dynamic data with minor prior knowledge on the mechanisms of the dynamics, and that performs accurate predictions at the same time. We emphasize both model recovery and prediction because: 1) the ability to conduct accurate long-term prediction is an important indicator of the accuracy of the learned PDE model (the more accurate the prediction, the more confidence we have in the recovered PDE model); 2) the trained neural network can be readily used in applications and does not need to be re-trained when initial conditions are altered. Our inspiration comes from the latest development of deep learning techniques in computer vision. An interesting fact is that some popular networks in computer vision, such as ResNet [15, 16], have a close relationship with ODEs/PDEs and can be naturally merged with traditional computational mathematics in various tasks [17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. However, existing deep networks mostly emphasize expressive power and prediction accuracy. These networks are not transparent enough to reveal the underlying PDE models, although they may perfectly fit the observed data and perform accurate predictions. Therefore, we need to carefully design the network by combining knowledge from deep learning and numerical PDEs.

1.2 Our Approach

The proposed deep neural network is an upgraded version of our original PDE-Net [1]. The main difference is the use of a symbolic network to approximate the nonlinear response function, which significantly relaxes the requirement on the prior knowledge on the PDEs to be recovered. During training, we no longer need to assume the general type of the PDE (e.g. convection, diffusion, etc.) is known. Furthermore, due to the lack of prior knowledge on the general type of the unknown PDE models, more carefully designed constraints on the convolution filters as well as the parameters of the symbolic network are introduced. We refer to this upgraded network as PDE-Net 2.0.

Assume that the PDE to be recovered takes the following generic form

U_{t}=F(U,\nabla U,\nabla^{2}U,\ldots),\quad x\in\Omega\subset\mathbb{R}^{2},\quad t\in[0,T].

PDE-Net 2.0 is designed as a feed-forward network by discretizing the above PDE using forward Euler in time and finite differences in space. The forward Euler approximation of the temporal derivative makes PDE-Net 2.0 ResNet-like [15, 18, 21], and the finite differences are realized by convolutions with trainable kernels (or filters). The nonlinear response function $F$ is approximated by a symbolic neural network, which shall be referred to as $SymNet$. All the parameters of the $SymNet$ and the convolution kernels are jointly learned from data. To grant full transparency to PDE-Net 2.0, proper constraints are enforced on the $SymNet$ and the filters. Full details on the architecture and constraints will be presented in Section 2.

1.3 Relation with Model Reduction

Data-driven discovery of hidden physical laws and model reduction have a lot in common. Both are concerned with representing observed data using relatively simple models. The main difference is that model reduction emphasizes numerical precision more than acquiring the analytic form of the model.

It is common practice in model reduction to use a function approximator to express the unknown terms in the reduced models, such as approximating the subgrid stress for large-eddy simulation [27, 28, 29] or approximating interatomic forces for coarse-grained molecular dynamics systems [30, 31]. Our work may serve as an alternative approach to model reduction and help with analyzing the reduced models.

1.4 Novelty

The particular novelties of our approach are that we impose appropriate constraints on the learnable filters and use a properly designed symbolic neural network to approximate the response function $F$. Using learnable filters makes PDE-Net 2.0 more flexible, and enables more powerful approximation of unknown dynamics and longer-time prediction (see the numerical experiments in Section 3 and Section 4). Furthermore, the constraints on the learnable filters and the use of a deep symbolic neural network enable us to uncover the analytic form of $F$ with minor prior knowledge on the dynamics, which is the main advantage of PDE-Net 2.0 over the original PDE-Net. In addition, the composite representation by the symbolic network is more efficient and flexible than SINDy. Therefore, the proposed PDE-Net 2.0 is distinct from the existing learning-based methods for discovering PDEs from data.

2 PDE-Net 2.0: Architecture, Constraints and Training

Given a series of measurements of some physical quantities $\{U(t,x,y):\ t=t_{0},t_{1},\cdots,\ (x,y)\in\Omega\subset\mathbb{R}^{2}\}\subset\mathbb{R}^{d}$, with $d$ being the number of physical quantities of interest, we want to discover the governing PDEs from the observed data $\{U(t,x,y)\}$. We assume that the observed data are associated with a PDE that takes the following general form:

U_{t}(t,x,y)=F(U,U_{x},U_{y},U_{xx},U_{xy},U_{yy},\ldots), (1)

where $U(t,\cdot):\Omega\mapsto\mathbb{R}^{d}$, $F(U,U_{x},U_{y},U_{xx},U_{xy},U_{yy},\ldots)\in\mathbb{R}^{d}$, $(x,y)\in\Omega\subset\mathbb{R}^{2}$, and $t\in[0,T]$. Our objective is to design a feed-forward network, called PDE-Net 2.0, that approximates the unknown PDE (1) from its solution samples such that: 1) we are able to reveal the analytic form of the response function $F$ and the differential operators involved; 2) we can conduct long-term prediction of the dynamical behavior of the equation for any given initial condition. There are two main components of PDE-Net 2.0 that are combined in the same network: one is the automatic determination of the differential operators involved in the PDE and their discrete approximations; the other is the approximation of the nonlinear response function $F$. In this section, we start with the overall framework of PDE-Net 2.0 and then introduce the details of these two components. Regularization and training strategies are given near the end of this section.

2.1 Architecture of PDE-Net 2.0

Inspired by the dynamic system perspective of deep neural networks [17, 18, 19, 20, 21, 22], we consider forward Euler as the temporal discretization of the evolution PDE (1), and unroll the discrete dynamics to a feed-forward network. One may consider more sophisticated temporal discretization which naturally leads to different network architectures [21]. For simplicity, we focus on forward Euler in this paper.

2.1.1 $\delta t$-block:

Let $\tilde{U}(t+\delta t,\cdot)$ be the predicted value at time $t+\delta t$ based on $\tilde{U}(t,\cdot)$. Then, we design an approximation framework as follows

\tilde{U}(t+\delta t,\cdot)\approx\tilde{U}(t,\cdot)+\delta t\cdot SymNet_{m}^{k}(D_{00}\tilde{U},D_{01}\tilde{U},D_{10}\tilde{U},\cdots). (2)

Here, the operators $D_{ij}$ are convolution operators with underlying filters denoted by $q_{ij}$, i.e. $D_{ij}u=q_{ij}\circledast u$. The operators $D_{10}$, $D_{01}$, $D_{11}$, etc. approximate differential operators, i.e. $D_{ij}u\approx\frac{\partial^{i+j}u}{\partial^{i}x\partial^{j}y}$. In particular, the operator $D_{00}$ is a certain averaging operator. The purpose of introducing averaging operators instead of simply using the identity is to improve the expressive power of the network and enable it to capture more complex dynamics.

Other than the assumption that the observed dynamics is governed by a PDE of the form (1), we assume that the highest order of the PDE is less than some positive integer. Then, we can assume that $F$ is a function of $m$ variables with known $m$. The task of approximating $F$ in (1) is then a multivariate regression problem. In order to be able to identify the analytic form of $F$, we use a symbolic neural network, denoted by $SymNet_{m}^{k}$, to approximate $F$, where $k$ denotes the depth of the network. Note that, if $F$ is a vector-valued function, we use multiple $SymNet_{m}^{k}$ to approximate the components of $F$ separately.

Combining the aforementioned approximations of the differential operators and the nonlinear response function, we obtain the approximation framework (2), which will be referred to as a $\delta t$-block (see Figure 1). Details of these two components can be found in Section 2.2 and Section 2.3.
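To make the $\delta t$-block concrete, the following is a minimal PyTorch-style sketch of how (2) could be realized for a single physical quantity. The names `dt_block`, `filters` and `symnet` are illustrative assumptions rather than part of PDE-Net 2.0 itself; the details of the filters and of $SymNet$ follow in Sections 2.2 and 2.3.

```python
import torch
import torch.nn.functional as F

def dt_block(U, filters, symnet, dt):
    """One forward-Euler step of (2): U(t+dt) ~ U(t) + dt * SymNet(D00 U, D01 U, ...).

    U       : (batch, 1, H, W) tensor, a single physical quantity
    filters : (n_ops, 1, N, N) tensor of learnable kernels q_ij
    symnet  : a network mapping n_ops channels to 1 channel, pointwise
    """
    pad = filters.shape[-1] // 2
    # periodic padding, matching the periodic boundary condition used in the experiments
    U_pad = F.pad(U, (pad, pad, pad, pad), mode="circular")
    D = F.conv2d(U_pad, filters)      # (batch, n_ops, H, W): D00 U, D01 U, D10 U, ...
    return U + dt * symnet(D)         # forward Euler update
```

Note that `torch.nn.functional.conv2d` computes cross-correlation, which matches the convolution convention used later in Section 2.2.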

Figure 1: The schematic diagram of a $\delta t$-block.

2.1.2 Multiple $\delta t$-Blocks:

One $\delta t$-block only guarantees the accuracy of one-step dynamics, which does not take error accumulation into consideration. In order to facilitate long-term prediction, we stack multiple $\delta t$-blocks into a deep network, and call this network PDE-Net 2.0 (see Figure 2). The importance of stacking multiple $\delta t$-blocks will be demonstrated by our numerical experiments in Sections 3 and 4.

PDE-Net 2.0 can be described simply as: (1) stacking one $\delta t$-block multiple times; (2) sharing parameters across all $\delta t$-blocks. Given a set of observed data $\{U(t,\cdot)\}$, training a PDE-Net 2.0 with $n$ $\delta t$-blocks amounts to minimizing the accumulated error $||U(t+n\delta t,\cdot)-\tilde{U}(t+n\delta t,\cdot)||_{2}^{2}$, where $\tilde{U}(t+n\delta t,\cdot)$ is the output of the PDE-Net 2.0 (i.e. $n$ $\delta t$-blocks) with input $U(t,\cdot)$, and $U(t+n\delta t,\cdot)$ is the observed training data.

Figure 2: The schematic diagram of the PDE-Net 2.0.

2.2 Convolutions and Differentiations

In the original PDE-Net [1], the learnable filters are properly constrained so that we can easily identify their correspondence to differential operators. PDE-Net 2.0 adopts the same constraints as the original version of PDE-Net. For completeness, we review the related notions and concepts and provide more details.

A profound relationship between convolutions and differentiations was presented in [32, 33], where the authors discussed the connection between the order of sum rules of filters and the orders of differential operators. Note that the definition of convolution we use follows the convention of deep learning:

(f\circledast q)[l_{1},l_{2}]:=\sum_{k_{1},k_{2}}q[k_{1},k_{2}]f[l_{1}+k_{1},l_{2}+k_{2}].

This is essentially correlation rather than convolution in the mathematical convention. Note that if $f$ is of finite size, we use periodic boundary conditions.
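For clarity, a small NumPy sketch of this correlation-style convolution with periodic boundary conditions is given below; the function name `correlate_periodic` is ours and is only meant for illustration.

```python
import numpy as np

def correlate_periodic(f, q):
    """(f * q)[l1, l2] = sum_k q[k1, k2] * f[l1 + k1, l2 + k2], periodic in both axes."""
    N = q.shape[0]              # filter size (odd)
    r = (N - 1) // 2
    out = np.zeros_like(f, dtype=float)
    for k1 in range(-r, r + 1):
        for k2 in range(-r, r + 1):
            # np.roll(f, (-k1, -k2)) gives f[l1 + k1, l2 + k2] with periodic wrap-around
            out += q[k1 + r, k2 + r] * np.roll(f, (-k1, -k2), axis=(0, 1))
    return out
```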

The order of sum rules is closely related to the order of vanishing moments in wavelet theory  [34, 35]. We first recall the definition of the order of sum rules.

Definition 2.1 (Order of Sum Rules)

For a filter $q$, we say $q$ has sum rules of order $\alpha=(\alpha_{1},\alpha_{2})$, where $\alpha\in\mathbb{Z}^{2}_{+}$, provided that

\sum_{k\in\mathbb{Z}^{2}}k^{\beta}q[k]=0 (3)

for all $\beta=(\beta_{1},\beta_{2})\in\mathbb{Z}^{2}_{+}$ with $|\beta|:=\beta_{1}+\beta_{2}<|\alpha|$ and for all $\beta\in\mathbb{Z}^{2}_{+}$ with $|\beta|=|\alpha|$ but $\beta\neq\alpha$. If (3) holds for all $\beta\in\mathbb{Z}^{2}_{+}$ with $|\beta|<K$ except for a certain $\bar{\beta}\in\mathbb{Z}^{2}_{+}$ with $|\bar{\beta}|=J<K$, then we say $q$ has total sum rules of order $K\backslash\{J+1\}$.

In practical implementations, the filters are finite and can be understood as matrices. For an $N\times N$ filter $q$ ($N$ being an odd number), assuming the indices of $q$ start from $-\frac{N-1}{2}$, (3) can be written in the following simpler form

\sum_{l=-\frac{N-1}{2}}^{\frac{N-1}{2}}\sum_{m=-\frac{N-1}{2}}^{\frac{N-1}{2}}l^{\beta_{1}}m^{\beta_{2}}q[l,m]=0.

The following proposition from [33] links the orders of sum rules with orders of differential operators.

Proposition 2.1

Let $q$ be a filter with sum rules of order $\alpha\in\mathbb{Z}^{2}_{+}$. Then for a smooth function $F(x)$ on $\mathbb{R}^{2}$, we have

\frac{1}{\varepsilon^{|\alpha|}}\sum_{k\in\mathbb{Z}^{2}}q[k]F(x+\varepsilon k)=C_{\alpha}\frac{\partial^{\alpha}}{\partial x^{\alpha}}F(x)+O(\varepsilon),\quad\mbox{as }\varepsilon\rightarrow 0. (4)

If, in addition, $q$ has total sum rules of order $K\backslash\{|\alpha|+1\}$ for some $K>|\alpha|$, then

\frac{1}{\varepsilon^{|\alpha|}}\sum_{k\in\mathbb{Z}^{2}}q[k]F(x+\varepsilon k)=C_{\alpha}\frac{\partial^{\alpha}}{\partial x^{\alpha}}F(x)+O(\varepsilon^{K-|\alpha|}),\quad\mbox{as }\varepsilon\rightarrow 0. (5)

According to Proposition 2.1, an $\alpha$-th order differential operator can be approximated by convolution with a filter that has sum rules of order $\alpha$. Furthermore, according to (5), one can obtain a high order approximation of a given differential operator if the corresponding filter has total sum rules of order $K>|\alpha|+k$, $k\geqslant 1$. For example, consider the filter

q=\left(\begin{array}{rrr}1&0&-1\\ 2&0&-2\\ 1&0&-1\end{array}\right).

It has sum rules of order $(1,0)$ and total sum rules of order $3\backslash\{2\}$. Thus, up to a constant and proper scaling, $q$ corresponds to a discretization of $\frac{\partial}{\partial x}$ with second order accuracy.
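As a sanity check, one can compute the moments $\sum_{k}k^{\beta}q[k]$ of this filter numerically. The short NumPy script below does so, under the assumption (consistent with the claim above) that the first index, i.e. the $x$-direction, runs along the columns of the displayed matrix.

```python
import numpy as np
from math import factorial

# the example filter from the text
q = np.array([[1., 0., -1.],
              [2., 0., -2.],
              [1., 0., -1.]])

# assumption: k1 (the x-direction) indexes the columns, k2 (the y-direction) the rows
k = np.arange(-1, 2)
K2, K1 = np.meshgrid(k, k, indexing="ij")   # K1 varies along columns, K2 along rows

def moment(i, j):
    # (i,j)-moment as in the moment matrix defined below: 1/(i! j!) * sum_k k1^i k2^j q[k]
    return (K1**i * K2**j * q).sum() / (factorial(i) * factorial(j))

M = np.array([[moment(i, j) for j in range(3)] for i in range(3)])
print(M)   # only M[1,0] and the third-order moment M[1,2] are nonzero
```

All moments with $|\beta|<3$ vanish except the $(1,0)$-moment, which is consistent with sum rules of order $(1,0)$ and total sum rules of order $3\backslash\{2\}$.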

For an $N\times N$ filter $q$, define the moment matrix of $q$ as

M(q)=(m_{i,j})_{N\times N}, (6)

where

m_{i,j}=\frac{1}{i!j!}\sum_{k_{1},k_{2}=-\frac{N-1}{2}}^{\frac{N-1}{2}}k_{1}^{i}k_{2}^{j}q[k_{1},k_{2}],\quad i,j=0,1,\ldots,N-1. (7)

We shall call the $(i,j)$-element of $M(q)$ the $(i,j)$-moment of $q$ for simplicity. For any smooth function $f:\mathbb{R}^{2}\to\mathbb{R}$, we apply convolution with the filter $q$ to the sampled version of $f$. By Taylor’s expansion, one easily obtains the following formula

\sum_{k_{1},k_{2}=-\frac{N-1}{2}}^{\frac{N-1}{2}}q[k_{1},k_{2}]f(x+k_{1}\delta x,y+k_{2}\delta y)
=\sum_{k_{1},k_{2}=-\frac{N-1}{2}}^{\frac{N-1}{2}}q[k_{1},k_{2}]\sum_{i,j=0}^{N-1}\frac{\partial^{i+j}f}{\partial^{i}x\partial^{j}y}\bigg{|}_{(x,y)}\frac{k_{1}^{i}k_{2}^{j}}{i!j!}\delta x^{i}\delta y^{j}+o(|\delta x|^{N-1}+|\delta y|^{N-1})
=\sum_{i,j=0}^{N-1}m_{i,j}\delta x^{i}\delta y^{j}\cdot\frac{\partial^{i+j}f}{\partial^{i}x\partial^{j}y}\bigg{|}_{(x,y)}+o(|\delta x|^{N-1}+|\delta y|^{N-1}). (8)

From (8) we can see that the filter $q$ can be designed to approximate any differential operator with a prescribed order of accuracy by imposing constraints on $M(q)$.

For example, if we want to approximate $\frac{\partial u}{\partial x}$ (up to a constant) by the convolution $q\circledast u$, where $q$ is a $3\times 3$ filter, we can consider the following constraints on $M(q)$:

\left(\begin{array}{ccc}0&0&\star\\ 1&\star&\star\\ \star&\star&\star\end{array}\right)\quad\mbox{or}\quad\left(\begin{array}{ccc}0&0&0\\ 1&0&\star\\ 0&\star&\star\end{array}\right). (9)

Here, $\star$ means no constraint on the corresponding entry. The constraints described by the moment matrix on the left of (9) guarantee that the approximation accuracy is at least first order, and the one on the right guarantees an approximation of at least second order. In particular, when all entries of $M(q)$ are constrained, e.g.

M(q)=\left(\begin{array}{ccc}0&0&0\\ 1&0&0\\ 0&0&0\end{array}\right),

the corresponding filter is uniquely determined. In PDE-Net 2.0, all filters are learned subject to partial constraints on their associated moment matrices, with at least second order accuracy.
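One convenient way to impose such constraints is to treat the moment matrix itself as the trainable object and map it back to a filter, since (7) is an invertible linear relation $M(q)=VqV^{\top}$ with a Vandermonde-type matrix $V$. The NumPy sketch below illustrates this idea; the helper `filter_from_moments` is our own illustrative construction, and here all moments are frozen (the fully constrained case), whereas PDE-Net 2.0 freezes only the low-order moments and learns the rest.

```python
import numpy as np
from math import factorial

N = 5
k = np.arange(N) - (N - 1) // 2                                   # {-2, ..., 2}
V = np.array([[kk**i / factorial(i) for kk in k] for i in range(N)])
Vinv = np.linalg.inv(V)

def filter_from_moments(M):
    """Invert the linear relation M(q) = V q V^T of (7) to recover the filter q."""
    return Vinv @ M @ Vinv.T

# fully constrained example: m_{1,0} = 1 and all other moments 0, which (up to grid
# spacing) pins down a unique 5x5 filter approximating d/dx
M = np.zeros((N, N))
M[1, 0] = 1.0
q10 = filter_from_moments(M)
print(q10.round(4))
```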

2.3 Design of $SymNet$: a Symbolic Neural Network

The symbolic neural network $SymNet_{m}^{k}$ of PDE-Net 2.0 is introduced to approximate the multivariate nonlinear response function $F$ of (1). Neural networks have recently been proven effective in approximating multivariate functions in various scenarios [36, 37, 38, 39, 40]. For the problem at hand, we not only require the network to have good expressive power, but also good transparency, so that the analytic form of $F$ can be readily inferred after training. Our design of $SymNet_{m}^{k}$ is motivated by EQL/EQL÷ proposed in [41, 42].

The $SymNet_{m}^{k}$, as illustrated in Figure 3, is a network that takes an $m$ dimensional vector as input and has $k$ hidden layers.

Figure 3: The schematic diagram of $SymNet$.

Figure 3 shows the symbolic neural network with two hidden layers, i.e. $SymNet_{m}^{2}$, where $f$ is a dyadic operation unit, e.g. multiplication or division. In this paper, we focus on multiplication, i.e. we take $f(a,b)=a\times b$. Different from EQL/EQL÷, each hidden layer of $SymNet_{m}^{k}$ directly takes the outputs of the preceding layer as inputs, rather than a linear combination of them. Furthermore, it adds one additional variable (i.e. $f(\cdot,\cdot)$) at each hidden layer. To better understand $SymNet_{m}^{k}$, we present an example in Algorithm 1 showing how $SymNet_{6}^{2}$ is constructed. In particular, when $b^{i}=0,\,i=1,2,3$,

W^{1}=\left(\begin{array}{cccccc}1&0&0&0&0&0\\ 0&1&0&0&0&0\end{array}\right),\quad W^{2}=\left(\begin{array}{ccccccc}0&0&1&0&0&0&0\\ 0&0&0&1&0&0&0\end{array}\right)

and $W^{3}=(0,0,0,0,0,0,-1,-1)$, we have $SymNet_{6}^{2}(u,u_{x},u_{y},v,v_{x},v_{y})=-uu_{x}-vu_{y}$, which is the right-hand side of the equation for $\frac{\partial u}{\partial t}$ in the Burgers' equation without viscosity.

Algorithm 1 $SymNet_{6}^{2}$

Input: $u,u_{x},u_{y},v,v_{x},v_{y}\in\mathbb{R}$,

$(\eta_{1},\xi_{1})^{\top}=W^{1}\cdot(u,u_{x},u_{y},v,v_{x},v_{y})^{\top}+b^{1}$, $W^{1}\in\mathbb{R}^{2\times 6}$, $b^{1}\in\mathbb{R}^{2}$;
$f_{1}=\eta_{1}\cdot\xi_{1}$;
$(\eta_{2},\xi_{2})^{\top}=W^{2}\cdot(u,u_{x},u_{y},v,v_{x},v_{y},f_{1})^{\top}+b^{2}$, $W^{2}\in\mathbb{R}^{2\times 7}$, $b^{2}\in\mathbb{R}^{2}$;
$f_{2}=\eta_{2}\cdot\xi_{2}$;

Output: $F=W^{3}\cdot(u,u_{x},u_{y},v,v_{x},v_{y},f_{1},f_{2})^{\top}+b^{3}\in\mathbb{R}$, $W^{3}\in\mathbb{R}^{1\times 8}$, $b^{3}\in\mathbb{R}$.
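To make Algorithm 1 concrete, the following NumPy sketch implements $SymNet_{6}^{2}$ and plugs in the weights $W^{1},W^{2},W^{3}$ listed above; the function name `symnet_6_2` and the test values are our own illustration.

```python
import numpy as np

def symnet_6_2(x, W1, b1, W2, b2, W3, b3):
    """Algorithm 1: SymNet_6^2 with multiplication units, x = (u, ux, uy, v, vx, vy)."""
    eta1, xi1 = W1 @ x + b1            # W1: 2x6
    f1 = eta1 * xi1
    x1 = np.append(x, f1)              # hidden layer output: (u, ..., vy, f1)
    eta2, xi2 = W2 @ x1 + b2           # W2: 2x7
    f2 = eta2 * xi2
    x2 = np.append(x1, f2)             # (u, ..., vy, f1, f2)
    return W3 @ x2 + b3                # W3: 1x8

# the weights from the text recover the inviscid Burgers' response -u*ux - v*uy
W1 = np.array([[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]], float)        # picks (u, ux)
W2 = np.array([[0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0]], float)  # picks (uy, v)
W3 = np.array([[0, 0, 0, 0, 0, 0, -1, -1]], float)                    # -f1 - f2
u, ux, uy, v, vx, vy = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0
x = np.array([u, ux, uy, v, vx, vy])
print(symnet_6_2(x, W1, 0.0, W2, 0.0, W3, 0.0))  # equals -u*ux - v*uy = -14
```

With these weights the output equals $-uu_{x}-vu_{y}$, as claimed.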

The $SymNet_{m}^{k}$ can represent all polynomials of the variables $(x_{1},x_{2},\ldots,x_{m})$ with the total number of multiplications not exceeding $k$. If needed, one can add more operations to $SymNet_{m}^{k}$ to increase the capacity of the network.

Now, we show that $SymNet_{m}^{k}$ is more compact than the dictionaries of SINDy. For that, we first introduce some notions.

Definition 2.2

Define the set of all polynomials of $m$ variables $(x_{1},\cdots,x_{m})$ with the total number of multiplications not exceeding $k$ as $\mathcal{P}^{k}[x_{1},\cdots,x_{m}]$. Here, the total number of multiplications is counted as follows:

  • For any monomial of degree $k$, if $k\geq 2$, the number of multiplications of the monomial is counted as $k-1$. When $k=1$ or $0$, the count is $0$.

  • For any polynomial PP, the total number of multiplications is counted as the sum of the number of multiplications of its monomials.

For example, $\sum_{i=1}^{m}x_{i}+\sum_{i=1}^{k}x_{i}x_{i+1}$ and $\prod_{i=1}^{k+1}x_{i}$ with $k<m$ are all members of $\mathcal{P}^{k}[x_{1},\cdots,x_{m}]$. The elements of $\mathcal{P}^{k}[x_{1},\cdots,x_{m}]$ have simple forms when $k$ is relatively small. The following proposition shows that $SymNet_{m}^{k}$ can represent all polynomials in the variables $(x_{1},x_{2},\ldots,x_{m})$ with the total number of multiplications not exceeding $k$. Note that the actual capacity of $SymNet_{m}^{k}$ is larger than $\mathcal{P}^{k}[x_{1},\cdots,x_{m}]$, i.e. $\mathcal{P}^{k}[x_{1},\cdots,x_{m}]$ is a subset of the set of functions that $SymNet_{m}^{k}$ can represent.

Proposition 2.2

For any $P\in\mathcal{P}^{k}[x_{1},\cdots,x_{m}]$, there exists a set of parameters for $SymNet_{m}^{k}$ such that

P=SymNet_{m}^{k}(x_{1},\cdots,x_{m}).

Proof: We prove this proposition by induction. When $k=1$, the conclusion obviously holds. Suppose the conclusion holds for $k$. For any polynomial $P\in\mathcal{P}^{k+1}[x_{1},\cdots,x_{m}]$, we only need to consider the case when $P$ has a total number of multiplications greater than 1.

Take any monomial of $P$ that has degree greater than 1, which, without loss of generality, takes the form $x_{1}x_{2}\cdot A$, where $A$ is a monomial in the variables $(x_{1},\ldots,x_{m})$. Then, $P$ can be written as $P=x_{1}x_{2}A+Q$. Define the new variable $x_{m+1}=x_{1}x_{2}$. Then, we have

P=x_{m+1}A+Q\in\mathcal{P}^{k}[x_{1},\cdots,x_{m},x_{m+1}].

By the induction hypothesis, there exists a set of parameters such that $P=SymNet_{m+1}^{k}(x_{1},\cdots,x_{m+1})$.

We take the linear transform between the input layer and the first hidden layer of $SymNet_{m}^{k+1}$ as

W^{1}\cdot(x_{1},\cdots,x_{m})^{\top}+b^{1}=(x_{1},x_{2})^{\top}.

Then, the output of the first hidden layer is $x_{1},x_{2},\cdots,x_{m},x_{1}x_{2}$. Using it as the input of $SymNet_{m+1}^{k}$, we have

P(x_{1},\cdots,x_{m})=SymNet_{m+1}^{k}(x_{1},\cdots,x_{m},x_{1}x_{2})=SymNet_{m}^{k+1}(x_{1},\cdots,x_{m}),

which concludes the proof. $\square$

SINDy constructs a dictionary that includes all possible monomials up to a certain degree. Observe that there are in total $\binom{m+l}{l}$ monomials in $m$ variables with degree not exceeding $l\in\mathbb{N}$. Our symbolic network, however, is more compact. The following proposition compares the complexity of $SymNet_{m}^{k}$ and SINDy; its proof is straightforward.

Proposition 2.3

Let $P\in\mathcal{P}^{k}[x_{1},\cdots,x_{m}]$ and suppose $P$ has monomials of degree $\leq l$.

  • The memory load of $SymNet_{m}^{k}$ that approximates $P$ is $O(m+k)$. The number of flops for evaluating $SymNet_{m}^{k}$ is $O(k(m+k))$.

  • Constructing a dictionary with all possible monomials of degree $\leq l$ requires a memory load of $\binom{m+l}{l}$, and evaluating a linear combination of dictionary members requires $O\big(\binom{m+l}{l}\big)$ flops.

We use the following example to show the advantage of $SymNet$ over SINDy. Consider two variables $u,v$ and all of their derivatives of order $\leq 2$:

(u,v,u_{x},u_{y},\cdots,v_{yy})\in\mathbb{R}^{12}.

Suppose the polynomial to be approximated is $P=-uu_{x}-vu_{y}+u_{xx}$. For $k=l=3$, the size of the SINDy dictionary is $\binom{15}{3}=455$, and computing a linear combination of its elements requires $909$ flops. The memory load of $SymNet_{12}^{3}$, however, is 15, and an evaluation of the network requires 180 flops. Therefore, $SymNet$ can significantly reduce memory load and computational cost when the input data is large. Note that when $k$ is large and $l$ is small, $SymNet_{m}^{k}$ can be worse than SINDy. However, for system identification problems, we normally wish to obtain a compact representation (i.e. small $k$). Thus, $SymNet$ takes full advantage of this prior knowledge and can significantly save memory and computational cost, which is crucial in the training of PDE-Net 2.0.
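The dictionary size and flop count quoted above can be reproduced with a short computation (a sketch; the flop count simply charges one multiplication per dictionary entry plus the additions of the linear combination):

```python
from math import comb

m, l = 12, 3
dict_size = comb(m + l, l)      # monomials of degree <= 3 in 12 variables: 455
flops = 2 * dict_size - 1       # 455 multiplications + 454 additions = 909
print(dict_size, flops)
```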

2.4 Loss Function and Regularization

We adopt the following loss function to train the proposed PDE-Net 2.0:

L=L^{data}+\lambda_{1}L^{moment}+\lambda_{2}L^{SymNet},

where the hyper-parameters are chosen as $\lambda_{1}=0.001$ and $\lambda_{2}=0.005$. We now present details on each term of the loss function and introduce pseudo-upwind as an additional constraint on PDE-Net 2.0.

2.4.1 Data Approximation $L^{data}$

Consider the data set $\{U_{j}(t_{i},\cdot):1\leq i\leq n,1\leq j\leq N\}$, where $n$ is the number of $\delta t$-blocks and $N$ is the total number of samples. The index $j$ indicates the $j$-th solution path with a certain initial condition of the unknown dynamics. Note that one can split a long solution path into multiple shorter ones. We would like to train the PDE-Net 2.0 with $n$ $\delta t$-blocks. For a given $n\geq 1$, every pair of the data $\{U_{j}(t_{0},\cdot),U_{j}(t_{i},\cdot)\}$, for each $j$ and $i\leq n$, is a training sample, where $U_{j}(t_{0},\cdot)$ is the input and $U_{j}(t_{i},\cdot)$ is the label that the output of the network needs to match. We define the data approximation term $L^{data}$ as

L^{data}=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{N}l_{ij}/\delta t^{2},\quad\mbox{where}\quad l_{ij}=||U_{j}(t_{i},\cdot)-\tilde{U}_{j}(t_{i},\cdot)||_{2}^{2},

and $\tilde{U}_{j}(t_{i},\cdot)$ is the output of the PDE-Net 2.0 with $\tilde{U}_{j}(t_{0},\cdot)=U_{j}(t_{0},\cdot)$ as the input.
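A minimal PyTorch-style sketch of $L^{data}$ is given below; `model` is assumed to implement a single $\delta t$-block and is applied $n$ times, and the tensor shapes are illustrative.

```python
import torch

def data_loss(model, U0, U_obs, dt):
    """L^data: accumulated squared error over n delta-t blocks, scaled by 1/dt^2.

    U0    : (batch, d, H, W) initial snapshots U_j(t_0, .)
    U_obs : list of n tensors; U_obs[i] holds the observed snapshots U_j(t_{i+1}, .)
    """
    n = len(U_obs)
    loss, U = 0.0, U0
    for i in range(n):
        U = model(U)                                      # one delta-t block
        l_i = ((U_obs[i] - U) ** 2).sum(dim=(1, 2, 3))    # ||U_j(t_i) - U~_j(t_i)||_2^2
        loss = loss + l_i.sum() / dt ** 2
    return loss / n
```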

2.4.2 Regularization: $L^{moment}$ and $L^{SymNet}$

For a given threshold $s$, define the Huber loss function $\ell_{1}^{s}$ as

\ell_{1}^{s}(x)=\left\{\begin{array}{ll}|x|-\frac{s}{2}&\text{if }|x|>s,\\ \frac{1}{2s}x^{2}&\text{otherwise.}\end{array}\right.

Then, we define $L^{moment}$ as

L^{moment}=\sum_{i,j}\sum_{i_{1},j_{1}}\ell_{1}^{s}(M(q_{ij})[i_{1},j_{1}]),

where the $q_{ij}$ are the filters of PDE-Net 2.0 and $M(q)$ is the moment matrix of $q$. We use this loss function to regularize the learnable filters and reduce overfitting. In our numerical experiments, we use $s=0.01$.

Given $\ell_{1}^{s}$ as defined above, we use it to enforce sparsity on the parameters of $SymNet$. This helps to reduce overfitting and enables more stable prediction. The loss function $L^{SymNet}$ is defined as

L^{SymNet}=\sum_{p\in\text{parameters of }SymNet}\ell_{1}^{s}(p).

We set $s=0.001$ in our numerical experiments.
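For completeness, here is a short PyTorch-style sketch of the two regularization terms with the thresholded $\ell_{1}^{s}$ loss; `moment_matrices` and `symnet` are assumed handles to the trainable moment matrices and the $SymNet$ module.

```python
import torch

def huber(x, s):
    """The thresholded l1 loss l_1^s used for both regularization terms."""
    return torch.where(x.abs() > s, x.abs() - s / 2, x ** 2 / (2 * s))

def moment_loss(moment_matrices, s=0.01):
    """L^moment: Huber penalty on the entries of the moment matrices M(q_ij)."""
    return sum(huber(M, s).sum() for M in moment_matrices)

def symnet_loss(symnet, s=0.001):
    """L^SymNet: Huber penalty on all SymNet parameters to promote sparsity."""
    return sum(huber(p, s).sum() for p in symnet.parameters())
```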

2.4.3 Pseudo-upwind

In numerical PDEs, to ensure the stability of a numerical scheme, we need to design conservative schemes or use upwind schemes [43, 44, 45]. This is also important for PDE-Net 2.0 during inference. However, the challenge we face is that we do not know a priori the form or the type of the PDE. Therefore, we introduce a method, called pseudo-upwind, to help maintain the stability of PDE-Net 2.0.

Given a 2D filter $q$, define the flipping operators $\text{flip}_{x}(q)$ and $\text{flip}_{y}(q)$ as

\left(\text{flip}_{x}(q)\right)[k_{1},k_{2}]=-q[-k_{1},k_{2}],\quad k_{1},k_{2}=-(N-1)/2,\cdots,(N-1)/2

and

\left(\text{flip}_{y}(q)\right)[k_{1},k_{2}]=-q[k_{1},-k_{2}],\quad k_{1},k_{2}=-(N-1)/2,\cdots,(N-1)/2.

In each $\delta t$-block of PDE-Net 2.0, before we apply convolution with a filter, we first use $SymNet$ to determine whether we should use the filter as is or flip it first. We use the following univariate PDE as an example to demonstrate our idea. Given the PDE $u_{t}=F(u,\cdots)$, suppose the input of a $\delta t$-block is $u$. The pseudo-upwind procedure is described in Algorithm 2.

Algorithm 2 pseudo-upwind in a $\delta t$-block

Input: $u$

$u_{01}=q_{01}\circledast u$, $u_{10}=q_{10}\circledast u$
$\tilde{F}_{0}=SymNet_{6}^{k}(u,u_{01},u_{10},q_{02}\circledast u,q_{11}\circledast u,q_{20}\circledast u)$
if $\frac{\partial\tilde{F}_{0}}{\partial u_{01}}>0$ then
  $u_{x}=u_{01}$
else
  $u_{x}=\text{flip}_{x}(q_{01})\circledast u$
end if
if $\frac{\partial\tilde{F}_{0}}{\partial u_{10}}>0$ then
  $u_{y}=u_{10}$
else
  $u_{y}=\text{flip}_{y}(q_{10})\circledast u$
end if
$\tilde{F}=SymNet_{6}^{k}(u,u_{x},u_{y},q_{02}\circledast u,q_{11}\circledast u,q_{20}\circledast u)$

Return: $u+\delta t\tilde{F}$
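A rough PyTorch-style sketch of Algorithm 2 is shown below. It is only meant to illustrate the control flow: the partial derivatives $\partial\tilde{F}_{0}/\partial u_{01}$ and $\partial\tilde{F}_{0}/\partial u_{10}$ are obtained pointwise by automatic differentiation, and the choice between a filter and its flipped version is made per grid point with `torch.where`. The names `conv`, `flip_x_conv`, `flip_y_conv` and `symnet` are assumed wrappers around the filters $q_{ij}$ and the symbolic network, not part of the original algorithm.

```python
import torch

def pseudo_upwind_block(u, conv, flip_x_conv, flip_y_conv, symnet, dt):
    # u: (batch, 1, H, W); conv[key] applies q_key with circular padding;
    # symnet maps 6 input channels to 1 output channel pointwise
    u01 = conv["01"](u).detach().requires_grad_(True)
    u10 = conv["10"](u).detach().requires_grad_(True)
    others = [conv["02"](u), conv["11"](u), conv["20"](u)]
    F0 = symnet(torch.cat([u, u01, u10] + others, dim=1)).sum()
    g01, g10 = torch.autograd.grad(F0, (u01, u10))       # dF0/du01, dF0/du10 per grid point
    ux = torch.where(g01 > 0, conv["01"](u), flip_x_conv(u))   # pick the stencil direction
    uy = torch.where(g10 > 0, conv["10"](u), flip_y_conv(u))
    F = symnet(torch.cat([u, ux, uy] + others, dim=1))
    return u + dt * F                                     # forward Euler update
```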

Remark 2.1

Note that the algorithm does not exactly enforce upwind in general. This is why we call it pseudo-upwind. We further note that:

  • Given a PDE of the form $u_{t}=G(u)u_{x}+H(u)u_{y}+\lambda(u_{xx}+u_{yy})$, we can use $G(u)$ and $H(u)$ to determine whether we should flip a filter or not.

  • For a vector PDE, such as $U=(u,v)^{\top}$, we can use, for instance, $\frac{\partial\tilde{F}_{0}}{\partial u_{01}}$ and $\frac{\partial\tilde{F}_{0}}{\partial v_{10}}$ to determine how we should approximate $u_{x}$ and $u_{y}$ in the $\delta t$-block [44].

2.5 Initialization and training

In PDE-Net 2.0, the parameters can be divided into three groups: 1) the moment matrices that generate the convolution kernels; 2) the parameters of $SymNet$; and 3) hyper-parameters, such as the number of filters, the size of the filters, the number of $\delta t$-blocks, the number of hidden layers of $SymNet$, the regularization weights $\lambda_{1},\lambda_{2}$, etc. The parameters of $SymNet$ are shared across the computational domain $\Omega$ and are initialized by random sampling from a Gaussian distribution. For the filters, we initialize $D_{01},D_{10}$ with a second order pseudo-upwind scheme and all other filters with central differences. For example, if the size of the filters is set to $5\times 5$, then the initial values of the convolution kernels $q_{01},q_{02}\in\mathbb{R}^{5\times 5}$ are

q_{01}=\left(\begin{array}{ccccc}&&\bm{0}&&\\ 0&0&-3&4&-1\\ &&\bm{0}&&\end{array}\right),\quad q_{02}=\left(\begin{array}{ccccc}&&\bm{0}&&\\ 0&1&-2&1&0\\ &&\bm{0}&&\end{array}\right).

We use layer-wise training to train PDE-Net 2.0. We start by training PDE-Net 2.0 with only the first $\delta t$-block on a batch of data, and then use the result as the initialization and restart training with the first two $\delta t$-blocks on another batch. We repeat this procedure until all $n$ $\delta t$-blocks are included. Note that the parameters are shared across all $\delta t$-blocks. In addition, we add a warm-up step before the training of the first $\delta t$-block, in which we fix the filters and set the regularization terms to 0 (i.e. $\lambda_{1}=\lambda_{2}=0$). The warm-up step provides a good initial guess of the parameters of $SymNet$.

To demonstrate the necessity of having learnable filters, we compare PDE-Net 2.0 with learnable filters against PDE-Net 2.0 with fixed filters. To differentiate the two cases, we call the PDE-Net 2.0 with fixed filters the “Frozen-PDE-Net 2.0”. For Frozen-PDE-Net 2.0, the filters are fixed to the initial values that we use to train the regular PDE-Net 2.0. This is a natural choice, since if we knew a priori that the PDE is the Burgers’ equation, these filters would form a stable finite difference scheme. Intuitively, however, freezing the finite difference approximations of the differential operators during the training of PDE-Net 2.0 is not ideal, because one cannot possibly know which numerical scheme to use without knowing the form of the PDE. Therefore, for the inverse problem, it is better to learn both the PDE model and the discretization of the PDE model simultaneously. This assertion is supported by our empirical comparisons between the frozen and regular PDE-Net 2.0 in Tables 1 and 3.

3 Numerical Studies: Burgers’ Equation

Burgers’ equation is a fundamental partial differential equation in many areas such as fluid mechanics and traffic flow modeling. It has a lot in common with the Navier-Stokes equation, e.g. the same type of advective nonlinearity and the presence of viscosity.

3.1 Simulated data, training and testing

In this section, we consider a 2-dimensional Burgers’ equation with periodic boundary conditions on $\Omega=[0,2\pi]\times[0,2\pi]$,

\left\{\begin{array}{ll}\frac{\partial U}{\partial t}&=-U\cdot\nabla U+\nu\Delta U,\quad U=(u,v)^{\top},\\ U|_{t=0}&=U_{0}(x,y),\end{array}\right. (10)

with $(t,x,y)\in[0,4]\times\Omega$ and $\nu=0.05$.

The training data is generated by a finite difference scheme on a $128\times 128$ mesh and then restricted to a $32\times 32$ mesh. The temporal discretization is 2nd order Runge-Kutta with time step $\delta t=\frac{1}{1600}$; the spatial discretization uses a 2nd order upwind scheme for $\nabla$ and the central difference scheme for $\Delta$. The initial values $u_{0}(x,y),v_{0}(x,y)$ take the following form,

w(x,y)=\frac{2w_{0}(x,y)}{\max_{x,y}|w_{0}|}+c, (11)

where $w_{0}(x,y)=\sum_{|k|,|l|\leq 4}\lambda_{k,l}\cos(kx+ly)+\gamma_{k,l}\sin(kx+ly)$, $\lambda_{k,l},\gamma_{k,l}\sim\mathcal{N}(0,1)$, $c\sim\mathcal{U}(-2,2)$. Here, $\mathcal{N}(0,1)$ and $\mathcal{U}(-2,2)$ denote the standard normal distribution and the uniform distribution on $[-2,2]$, respectively. We also add noise to the generated data:

\widehat{U}(t,x,y)=U(t,x,y)+0.001\times MW, (12)

where $M=\max_{x,y,t}\{U(t,x,y)\}$ and $W\sim\mathcal{N}(0,1)$.
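For reproducibility, a NumPy sketch of the data generation in (11)-(12) is given below; the function names and the grid size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_initial_value(n=32):
    """Sample one initial field on an n x n periodic grid over [0, 2*pi)^2, as in (11)."""
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    w0 = np.zeros_like(X)
    for k in range(-4, 5):
        for l in range(-4, 5):
            lam, gam = rng.standard_normal(2)
            w0 += lam * np.cos(k * X + l * Y) + gam * np.sin(k * X + l * Y)
    c = rng.uniform(-2, 2)
    return 2 * w0 / np.abs(w0).max() + c

def add_noise(U):
    """Perturb a trajectory as in (12): U_hat = U + 0.001 * max(U) * W, W ~ N(0,1)."""
    return U + 0.001 * U.max() * rng.standard_normal(U.shape)
```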

Assuming we know a priori that the order of the underlying PDE is no more than 2, we use two $SymNet_{12}^{5}$ to approximate the right-hand-side nonlinear response function of (10) component-wise. Let $U=(u,v)^{\top}$, and denote the two $SymNet_{12}^{5}$ by $Net_{u}$ and $Net_{v}$, respectively. Then, each $\delta t$-block of PDE-Net 2.0 can be written as

\tilde{u}(t_{i+1},\cdot)=\tilde{u}(t_{i},\cdot)+\delta t\cdot Net_{u}(D_{00}\tilde{u},D_{01}\tilde{u},\cdots,D_{20}\tilde{u},D_{00}\tilde{v},\cdots,D_{20}\tilde{v}),
\tilde{v}(t_{i+1},\cdot)=\tilde{v}(t_{i},\cdot)+\delta t\cdot Net_{v}(D_{00}\tilde{u},D_{01}\tilde{u},\cdots,D_{20}\tilde{u},D_{00}\tilde{v},\cdots,D_{20}\tilde{v}),

where $\{D_{ij}:0\leq i+j\leq 2\}$ are convolution operators.

During training and testing, the data is generated on-the-fly. The filters are of size $5\times 5$. The total number of parameters in $Net_{u}$ and $Net_{v}$ is 336, and the number of trainable parameters in the moment matrices is 105 for $5\times 5$ filters ($6\times 5\times 5-45$, where 45 entries are fixed by the moment constraints). During training, we use BFGS, instead of SGD, to optimize the parameters. We use 28 data samples per batch to train each $\delta t$-block, and we only construct the PDE-Net 2.0 up to 9 layers, which requires 420 data samples in total during the whole training procedure.

3.2 Results and discussions

We first demonstrate the ability of the trained PDE-Net 2.0 to recover the analytic form of the unknown PDE model. We use the symbolic math tools in Python to obtain the analytic form of $SymNet$. Results are summarized in Table 1. As one can see from Table 1, we recover the terms of the Burgers’ equation with good accuracy, and using learnable filters helps with the identification of the PDE model. Furthermore, the terms that are not included in the Burgers’ equation all have relatively small weights in the $SymNet$ (see Figure 7).

Table 1: PDE model identification.
Correct PDE:          $u_{t}=-uu_{x}-vu_{y}+0.05(u_{xx}+u_{yy})$
                      $v_{t}=-uv_{x}-vv_{y}+0.05(v_{xx}+v_{yy})$
Frozen-PDE-Net 2.0:   $u_{t}=-0.906uu_{x}-0.901vu_{y}+0.033u_{xx}+0.037u_{yy}$
                      $v_{t}=-0.907vv_{y}-0.902uv_{x}+0.039v_{xx}+0.032v_{yy}$
PDE-Net 2.0:          $u_{t}=-0.979uu_{x}-0.973u_{y}v+0.052u_{xx}+0.051u_{yy}$
                      $v_{t}=-0.973uv_{x}-0.977vv_{y}+0.053v_{xx}+0.051v_{yy}$

We also demonstrate the ability of the trained PDE-Net 2.0 in prediction, i.e. its ability to generalize. After the PDE-Net 2.0 with $n$ $\delta t$-blocks ($1\leq n\leq 9$) is trained, we randomly generate 1000 initial conditions based on (11) and (12), feed them to the PDE-Net 2.0, and measure the relative error between the predicted dynamics (i.e. the output of the PDE-Net 2.0) and the actual dynamics (obtained by solving (10) using a high precision numerical scheme). The relative error between the true data $U$ and the predicted data $\tilde{U}$ is defined as $\epsilon=\frac{\|\tilde{U}-U\|_{2}^{2}}{\|U-\bar{U}\|_{2}^{2}}$, where $\bar{U}$ is the spatial average of $U$. The error plots are shown in Figure 4. Some of the images of the predicted dynamics are presented in Figure 5, and the error maps are presented in Figure 6. As we can see, even when trained with noisy data, the PDE-Net 2.0 is able to perform long-term prediction. Having multiple $\delta t$-blocks indeed improves prediction accuracy. Furthermore, PDE-Net 2.0 performs significantly better than Frozen-PDE-Net 2.0. This suggests that when we do not know the PDE, we cannot possibly know how to properly discretize it. Therefore, for inverse problems, it is better to learn both the PDE model and its discretization simultaneously.
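The relative error used here can be computed with a few lines of NumPy (a sketch with our own function name):

```python
import numpy as np

def relative_error(U_pred, U_true):
    """epsilon = ||U_pred - U_true||_2^2 / ||U_true - mean(U_true)||_2^2."""
    U_bar = U_true.mean()                       # spatial average of the true field
    return ((U_pred - U_true) ** 2).sum() / ((U_true - U_bar) ** 2).sum()
```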

Figure 4: Burgers’ equation: Prediction errors of the PDE-Net 2.0 (orange) and Frozen-PDE-Net 2.0 (blue) with $5\times 5$ filters. In each plot, the horizontal axis indicates the time of prediction in the interval $(0,400\times\delta t]=(0,4]$, and the vertical axis shows the relative errors. The banded curves indicate the 25%-100% percentile of the relative errors among 1000 test samples. The darker regions indicate the 25%-75% percentile of the relative errors, which shows that PDE-Net 2.0 can predict very well in most cases.
Figure 5: Burgers’ equation (panels: dynamics of $u$ and dynamics of $v$): The first row shows the images of the true dynamics. The last two rows show the images of the predicted dynamics using the Frozen-PDE-Net 2.0 (the second row) and PDE-Net 2.0 with 9 $\delta t$-blocks (the third row). Time step $\delta t=0.01$.
Figure 6: Burgers’ equation (panels: error maps of $u$ and error maps of $v$): The error maps of the Frozen-PDE-Net 2.0 (the first row) and PDE-Net 2.0 (the second row) with 9 $\delta t$-blocks. Time step $\delta t=0.01$.

3.3 Importance of $L^{SymNet}$ and Pseudo-upwind

This subsection demonstrates the importance of enforcing sparsity on the $SymNet$ and of using pseudo-upwind. As we can see from Figure 7, having sparsity constraints on the $SymNet$ helps suppress the weights of the terms that do not appear in the Burgers’ equation. Furthermore, Figure 8 and Figure 9 show that having a sparsity constraint on the $SymNet$ or using pseudo-upwind can significantly reduce the prediction errors.

Figure 7: Burgers’ equation: The coefficients of the remainder terms for the $u$-component (left) and $v$-component (right) with the sparsity constraint on the $SymNet$ (orange) and without the sparsity constraint (blue). The bands for both cases are computed based on 20 independent trainings.
Figure 8: Burgers’ equation: Prediction errors of the PDE-Net 2.0 with (orange) and without (blue) the sparsity constraint on the $SymNet$. In each plot, the horizontal axis indicates the time of prediction in the interval $(0,400\times\delta t]=(0,4]$, and the vertical axis shows the relative errors. The banded curves indicate the 25%-100% percentile of the relative errors among 1000 test samples.
Figure 9: Burgers’ equation: Prediction errors of the PDE-Net 2.0 with (orange) and without (blue) pseudo-upwind in each $\delta t$-block. In each plot, the horizontal axis indicates the time of prediction in the interval $(0,400\times\delta t]=(0,4]$, and the vertical axis shows the relative errors. The banded curves indicate the 25%-100% percentile of the relative errors among 1000 test samples.

4 Numerical Studies: Diffusion Equation

Diffusion phenomena have been studied in many applications in physics, e.g. the collective motion of micro-particles in materials due to the random movement of each particle, or modeling the distribution of temperature in a given region over time.

4.1 Simulated data, training and testing

Consider the 2-dimensional heat equation with periodic boundary conditions on $\Omega=[0,2\pi]\times[0,2\pi]$,

\left\{\begin{array}{ll}\frac{\partial u}{\partial t}&=c\Delta u,\quad(t,x,y)\in[0,1]\times\Omega,\\ u|_{t=0}&=u_{0}(x,y),\end{array}\right. (13)

where $c=0.1$. The training data of the heat equation is generated by 2nd order Runge-Kutta in time with $\delta t=\frac{1}{1600}$ and the central difference scheme in space on a $128\times 128$ mesh. We then restrict the data to a $32\times 32$ mesh. The initial value $u_{0}(x,y)$ is also generated from (11).

4.2 Results and discussions

The ability of the trained PDE-Net 2.0 to identify the PDE model is demonstrated in Table 2. As one can see from Table 2, we recover the terms of the heat equation with good accuracy. Furthermore, all the terms that are not included in the heat equation have much smaller weights in the $SymNet$.

We also demonstrate the ability of the trained PDE-Net 2.0 in prediction. The testing method is exactly the same as the method described in Section 3. Comparisons between PDE-Net 2.0 and Frozen-PDE-Net 2.0 are shown in Figure 10, where we can clearly see the advantage of learning the filters. Visualization of the predicted dynamics is given in Figure 11. All these results show that the learned PDE-Net 2.0 performs well in prediction.

Table 2: PDE model identification. Note that the largest term of the remainders (i.e. the ones that are not included in the table) is 8e-4 $u_{y}$ (for Frozen-PDE-Net 2.0) and 6e-5 $u$ (for PDE-Net 2.0).
Correct PDE:          $u_{t}=0.1(u_{xx}+u_{yy})$
Frozen-PDE-Net 2.0:   $u_{t}=0.103u_{xx}+0.103u_{yy}$
PDE-Net 2.0:          $u_{t}=0.999u_{xx}+0.998u_{yy}$
Figure 10: Diffusion equation: Prediction errors of the PDE-Net 2.0 (orange) and Frozen-PDE-Net 2.0 (blue). In each plot, the horizontal axis indicates the time of prediction in the interval $(0,150\times\delta t]=(0,1.5]$, and the vertical axis shows the relative errors. The banded curves indicate the 25%-100% percentile of the relative errors among 1000 test samples.
Figure 11: Diffusion equation: The first row shows the images of the true dynamics. The second row shows the predicted dynamics using the Frozen-PDE-Net 2.0 with 9 $\delta t$-blocks. The third row shows the predicted dynamics using the PDE-Net 2.0 with 9 $\delta t$-blocks. Here, $\delta t=0.01$.

5 Numerical Studies: Convection Diffusion Equation with A Reactive Source

Convection diffusion systems are mathematical models that describe the transfer of physical quantities, such as energy or materials, due to diffusion and convection. Specifically, a convection diffusion system with a reactive source can be used to model a large range of chemical systems in which the transfer of materials competes with the production of materials induced by chemical reactions.

5.1 Simulated data, training and testing

Consider a 2-dimensional convection diffusion equation with a reactive source and periodic boundary conditions on $\Omega=[0,2\pi]\times[0,2\pi]$:

\begin{split}u_{t}&=-uu_{x}-vu_{y}+\nu\Delta u+\lambda(A)u-\omega(A)v,\\ v_{t}&=-uv_{x}-vv_{y}+\nu\Delta v+\omega(A)u+\lambda(A)v,\\ &A^{2}=u^{2}+v^{2},\quad\omega=-\beta A^{2},\quad\lambda=1-A^{2},\end{split} (14)

where $(t,x,y)\in[0,1.5]\times\Omega$, $\nu=0.1$, and $\beta=1$. The training data is generated in the same way as for the Burgers’ equation in Section 3. A 2nd order Runge-Kutta scheme with time step $\delta t=1/10000$ is adopted for the temporal discretization. We choose a 2nd order upwind scheme for the convection terms and the central difference scheme for $\Delta$ on a $128\times 128$ mesh, and then restrict the data to a $32\times 32$ mesh. Noise is added in the same way as for the Burgers’ equation. The initial values $u_{0}(x,y),v_{0}(x,y)$ are also generated from (11).

5.2 Results and discussions

The capability of the trained PDE-Net 2.0 to identify the underlying PDE model is demonstrated in Table 3. As one can see, we recover the terms of the reaction convection diffusion equation with good accuracy. Furthermore, all the terms that are not included in this equation have relatively small weights in the $SymNet$.

We also demonstrate the ability of the trained PDE-Net 2.0 in prediction. The testing method is exactly the same as the method described in Section 3. Comparisons between PDE-Net 2.0 and Frozen-PDE-Net 2.0 are shown in Figure 12. Visualizations of the predicted dynamics and error maps are given in Figure 13 and Figure 14. Similar to what we observed in Section 3, we can clearly see the benefit of learning the discretizations. PDE-Net 2.0 obtains more accurate estimates of the coefficients of the nonlinear convection terms (i.e. the terms $-uu_{x}-vu_{y}$ in Table 3) and makes more accurate predictions (Figure 12) than Frozen-PDE-Net 2.0.

Table 3: PDE model identification.
Correct PDE:          $u_{t}=-uu_{x}-vu_{y}+0.1\Delta u+(v-u)(u^{2}+v^{2})+u$
                      $v_{t}=-uv_{x}-vv_{y}+0.1\Delta v-(v+u)(u^{2}+v^{2})+v$
Frozen-PDE-Net 2.0:   $u_{t}=-0.86uu_{x}-0.90vu_{y}+0.09u_{xx}+0.09u_{yy}+1.02u^{2}v-1.02u^{3}-1.01uv^{2}+1.01u+0.99v^{3}$
                      $v_{t}=-0.87uv_{x}-0.85vv_{y}+0.09v_{xx}+0.09v_{yy}+1.04u^{2}v-1.02uv^{2}-1.01v^{3}+0.99v-0.99u^{3}$
PDE-Net 2.0:          $u_{t}=-0.98vu_{y}-0.93uu_{x}+0.10u_{xx}+0.10u_{yy}-1.05uv^{2}+0.99v^{3}-0.98u^{3}+0.98u+0.97u^{2}v$
                      $v_{t}=-0.99uv_{x}-0.96vv_{y}+0.10v_{yy}+0.10v_{xx}-1.04u^{2}v-1.02v^{2}-1.02uv^{2}+1.01v-1.00u^{3}$
Figure 12: Reaction convection diffusion: Prediction errors of the PDE-Net 2.0 (orange) and Frozen-PDE-Net 2.0 (blue). In each plot, the horizontal axis indicates the time of prediction in the interval $(0,200\times\delta t]=(0,2]$, and the vertical axis shows the relative errors. The banded curves indicate the 25%-100% percentile of the relative errors among 1000 test samples.
Figure 13: Reaction convection diffusion (panels: dynamics of $u$ and dynamics of $v$): The first row shows the images of the true dynamics. The second row shows the predicted dynamics using the Frozen-PDE-Net 2.0 with 24 $\delta t$-blocks. The third row shows the predicted dynamics using the PDE-Net 2.0 with 24 $\delta t$-blocks. Here, $\delta t=0.01$.
Figure 14: Reaction convection diffusion (panels: error maps of $u$ and error maps of $v$): The error maps of the Frozen-PDE-Net 2.0 (the first row) and PDE-Net 2.0 (the second row) with 24 $\delta t$-blocks. Time step $\delta t=0.01$.

6 Conclusions and Future Work

In this paper, we proposed a numeric-symbolic hybrid deep network, called PDE-Net 2.0, for PDE model recovery from observed dynamic data. PDE-Net 2.0 is able to recover the analytic form of the PDE model with minor assumptions on the mechanisms of the observed dynamics. For example, it is able to recover the analytic form of Burgers’ equation with good confidence without any prior knowledge of the type of the equation. Therefore, PDE-Net 2.0 has the potential to uncover potentially new PDEs from observed data. Furthermore, after training, the network can perform accurate long-term prediction without re-training for new initial conditions. The limitations and possible future extensions of the current version of PDE-Net 2.0 are twofold: 1) having only addition and multiplication in the $SymNet$ may still be too restrictive, and one may include division as an additional operation in the $SymNet$ to further improve its expressive power; 2) using forward Euler as the temporal discretization is the most straightforward treatment, while a more sophisticated temporal scheme may help with model recovery and prediction. Both are worth further exploration. Furthermore, we would like to apply the network to real biological dynamic data. We will further investigate the reliability of the network and explore the possibility of uncovering new dynamical principles that have meaningful scientific explanations.

Acknowledgments

Zichao Long is supported by The Elite Program of Computational and Applied Mathematics for PhD Candidates of Peking University. Yiping Lu is supported by the Elite Undergraduate Training Program of the School of Mathematical Sciences at Peking University. Bin Dong is supported in part by Beijing Natural Science Foundation (No. 180001) and Beijing Academy of Artificial Intelligence (BAAI).

References

  • [1] Zichao Long, Yiping Lu, Xianzhong Ma, and Bin Dong. Pde-net: Learning pdes from data. In International Conference on Machine Learning, pages 3214–3222, 2018.
  • [2] Christopher Earls Brennen and Christopher E Brennen. Fundamentals of multiphase flow. Cambridge university press, 2005.
  • [3] Messoud Efendiev. Mathematical modeling of mitochondrial swelling.
  • [4] Josh Bongard and Hod Lipson. Automated reverse engineering of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 104(24):9943–9948, 2007.
  • [5] Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81–85, 2009.
  • [6] Maziar Raissi and George Em Karniadakis. Hidden physics models: Machine learning of nonlinear partial differential equations. Journal of Computational Physics, 357:125–141, 2018.
  • [7] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part ii): data-driven discovery of nonlinear partial differential equations. arXiv preprint arXiv:1711.10566, 2017.
  • [8] Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, page 201517384, 2016.
  • [9] Hayden Schaeffer. Learning partial differential equations via data discovery and sparse optimization. Proc. R. Soc. A, 473(2197):20160446, 2017.
  • [10] Samuel H Rudy, Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Data-driven discovery of partial differential equations. Science Advances, 3(4):e1602614, 2017.
  • [11] Haibin Chang and Dongxiao Zhang. Identification of physical processes via combined data-driven and data-assimilation methods. arXiv preprint arXiv:1810.11977, 2018.
  • [12] Hayden Schaeffer, Giang Tran, Rachel Ward, and Linan Zhang. Extracting structured dynamical systems using sparse optimization with very few samples. arXiv preprint arXiv:1805.04158, 2018.
  • [13] Zongmin Wu and Ran Zhang. Learning physics by data for the motion of a sphere falling in a non-newtonian fluid. Communications in Nonlinear Science and Numerical Simulation, 67:577–593, 2019.
  • [14] Emmanuel de Bezenac, Arthur Pajot, and Patrick Gallinari. Deep learning for physical processes: Incorporating prior scientific knowledge. arXiv preprint arXiv:1711.07970, 2017.
  • [15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630–645. Springer, 2016.
  • [17] Yunjin Chen, Wei Yu, and Thomas Pock. On learning optimized reaction diffusion processes for effective image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5261–5269, 2015.
  • [18] E Weinan. A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1–11, 2017.
  • [19] Eldad Haber and Lars Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2017.
  • [20] Sho Sonoda and Noboru Murata. Double continuum limit of deep neural networks. In ICML Workshop Principled Approaches to Deep Learning, 2017.
  • [21] Yiping Lu, Aoxiao Zhong, Quanzheng Li, and Bin Dong. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. In International Conference on Machine Learning, pages 3282–3291, 2018.
  • [22] Bo Chang, Lili Meng, Eldad Haber, Frederick Tung, and David Begert. Multi-level residual networks from dynamical systems view. In ICLR, 2018.
  • [23] Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. arXiv preprint arXiv:1806.07366, 2018.
  • [24] Tong Qin, Kailiang Wu, and Dongbin Xiu. Data driven governing equations approximation using deep neural networks. arXiv preprint arXiv:1811.05537, 2018.
  • [25] Steffen Wiewel, Moritz Becher, and Nils Thuerey. Latent-space physics: Towards learning the temporal evolution of fluid flow. arXiv preprint arXiv:1802.10123, 2018.
  • [26] Byungsoo Kim, Vinicius C Azevedo, Nils Thuerey, Theodore Kim, Markus Gross, and Barbara Solenthaler. Deep fluids: A generative network for parameterized fluid simulations. arXiv preprint arXiv:1806.02071, 2018.
  • [27] Karthikeyan Duraisamy, Ze J Zhang, and Anand Pratap Singh. New approaches in turbulence and transition modeling using data-driven techniques. In 53rd AIAA Aerospace Sciences Meeting, page 1284, 2015.
  • [28] Karthik Duraisamy, Gianluca Iaccarino, and Heng Xiao. Turbulence modeling in the age of data. Annual Review of Fluid Mechanics, 51:357–377, 2019.
  • [29] Chao Ma, Jianchun Wang, et al. Model reduction with memory and the machine learning of dynamical systems. arXiv preprint arXiv:1808.04258, 2018.
  • [30] WG Noid, Jhih-Wei Chu, Gary S Ayton, Vinod Krishna, Sergei Izvekov, Gregory A Voth, Avisek Das, and Hans C Andersen. The multiscale coarse-graining method. I. A rigorous bridge between atomistic and coarse-grained models. The Journal of Chemical Physics, 128(24):244114, 2008.
  • [31] Linfeng Zhang, Jiequn Han, Han Wang, Roberto Car, and E Weinan. Deep potential molecular dynamics: a scalable model with the accuracy of quantum mechanics. Physical Review Letters, 120(14):143001, 2018.
  • [32] Jian-Feng Cai, Bin Dong, Stanley Osher, and Zuowei Shen. Image restoration: total variation, wavelet frames, and beyond. Journal of the American Mathematical Society, 25(4):1033–1089, 2012.
  • [33] Bin Dong, Qingtang Jiang, and Zuowei Shen. Image restoration: Wavelet frame shrinkage, nonlinear evolution pdes, and beyond. Multiscale Modeling & Simulation, 15(1):606–660, 2017.
  • [34] Ingrid Daubechies. Ten lectures on wavelets, volume 61. SIAM, 1992.
  • [35] Stéphane Mallat. A wavelet tour of signal processing. Elsevier, 1999.
  • [36] Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review. International Journal of Automation and Computing, 14(5):503–519, 2017.
  • [37] Uri Shaham, Alexander Cloninger, and Ronald R Coifman. Provable approximation properties for deep neural networks. Applied and Computational Harmonic Analysis, 2016.
  • [38] Hadrien Montanelli and Qiang Du. Deep ReLU networks lessen the curse of dimensionality. arXiv preprint arXiv:1712.08688, 2017.
  • [39] Qingcan Wang et al. Exponential convergence of the deep neural network approximation for analytic functions. arXiv preprint arXiv:1807.00297, 2018.
  • [40] Juncai He, Lin Li, Jinchao Xu, and Chunyue Zheng. ReLU deep neural networks and linear finite elements. arXiv preprint arXiv:1807.03973, 2018.
  • [41] Georg Martius and Christoph H Lampert. Extrapolation and learning equations. arXiv preprint arXiv:1610.02995, 2016.
  • [42] Subham Sahoo, Christoph Lampert, and Georg Martius. Learning equations for extrapolation and control. In International Conference on Machine Learning, pages 4439–4447, 2018.
  • [43] Xu-Dong Liu, Stanley Osher, and Tony Chan. Weighted essentially non-oscillatory schemes. Journal of Computational Physics, 115(1):200–212, 1994.
  • [44] Chi-Wang Shu. Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws. In Advanced numerical approximation of nonlinear hyperbolic equations, pages 325–432. Springer, 1998.
  • [45] Randall J LeVeque. Finite volume methods for hyperbolic problems, volume 31. Cambridge University Press, 2002.