Implicit Neural Representations and the
Algebra of Complex Wavelets

T. Mitchell Roddenberry, Department of Electrical and Computer Engineering, Rice University
Vishwanath Saragadam, Department of Electrical and Computer Engineering, Rice University
Maarten V. de Hoop, Department of Computational Applied Mathematics and Operations Research, Rice University
Richard G. Baraniuk, Department of Electrical and Computer Engineering, Rice University
Abstract

Implicit neural representations (INRs) have arisen as useful methods for representing signals on Euclidean domains. By parameterizing an image as a multilayer perceptron (MLP) on Euclidean space, INRs couple the spatial and spectral features of a signal in a way that is not obvious in the usual discrete representation, paving the way for continuous signal processing and machine learning approaches that were not previously possible. Although INRs using sinusoidal activation functions have been studied in terms of Fourier theory, recent works have shown the advantage of using wavelets instead of sinusoids as activation functions, due to their ability to simultaneously localize in both frequency and space. In this work, we study such INRs and demonstrate how they resolve high-frequency features of signals from coarse approximations formed in the first layer of the MLP. This leads to multiple prescriptions for the design of INR architectures, including the use of complex wavelets, decoupling of low-pass and band-pass approximations, and initialization schemes based on the singularities of the desired signal.

1 Introduction

Implicit neural representations (INRs) have emerged as a set of neural architectures for representing and processing signals on low-dimensional spaces. By learning a continuous interpolant of a set of sampled points, INRs have enabled and advanced state-of-the-art methods in signal processing (Xu et al., 2022) and computer vision (Mildenhall et al., 2020).

Typical INRs are specially designed multilayer perceptrons (MLPs), where the activation functions are chosen so as to yield a desirable signal representation; some of these methods are demonstrated on a simple test image in Fig. 1.

Although these INRs can be easily understood at the first layer, owing to the simplicity of plotting the function associated with each neuron based on its weights and biases, the behavior of the network in the second layer and beyond is more opaque, apart from some theoretical developments in the particular case of a sinusoidal first layer (Yüce et al., 2022).

This work develops a broader theoretical understanding of INR architectures with a wider class of activation functions, followed by practical prescriptions rooted in time-frequency analysis. In particular, we

  1. Characterize the function class of INRs in terms of Fourier convolutions of the neurons in the first layer (Theorem 1);

  2. Demonstrate how INRs that use complex wavelet functions preserve useful properties of the wavelet, even after the application of the nonlinearities (Corollary 4);

  3. Suggest a split architecture for approximating signals, decoupling the smooth and nonsmooth parts into linear and nonlinear INRs, respectively (Section 4.3);

  4. Leverage connections with wavelet theory to propose efficient initialization schemes for wavelet INRs based on the wavelet modulus maxima for capturing singularities in the target functions (Section 5).

Following a brief survey of INR methods, the class of architectures we study is defined in Section 2. The main result bounding the function class represented by these architectures is stated in Section 3, which is then related to the algebra of complex wavelets in Section 4. The use of the wavelet modulus maxima for initialization of wavelet INRs is described and demonstrated in Section 5, before concluding in Section 6.

2 Implicit Neural Representations


Figure 1: Zoo of INRs for representing a simple image $\mathbb{R}^{2}\to\mathbb{R}$. Highlighted squares indicate error relative to the true image.

Wavelets as activation functions in MLPs have been shown to yield good function approximators (Zhang & Benveniste, 1992; Marar et al., 1996). These works have leveraged the sparse representation of functions by wavelet dictionaries in order to construct simple neural architectures and training algorithms for effective signal representation. Indeed, an approximation of a signal by a finite linear combination of ridgelets (Candès, 1998) can be viewed as one such MLP using wavelet activation functions.

Recently, sinusoidal activation functions in the first layer (Tancik et al., 2020) and beyond (Sitzmann et al., 2020; Fathony et al., 2020) have been shown to yield good function approximators, coupled with a harmonic analysis-type bound on the function class represented by these networks (Yüce et al., 2022). Other methods have used activation functions that, unlike sinusoids, are localized in space, such as gaussians (Ramasinghe & Lucey, 2021) or Gabor wavelets (Saragadam et al., 2023).

Following the formulation of Yüce et al. (2022), we define an INR to be a map $f_{\boldsymbol{\theta}}:\mathbb{R}^{d}\to\mathbb{C}$ defined in terms of a function $\psi:\mathbb{R}^{d}\to\mathbb{C}$, followed by an MLP with analytic (that is, entire on $\mathbb{C}$) activation functions $\rho^{(\ell)}:\mathbb{C}\to\mathbb{C}$ for layers $\ell=1,\ldots,L$:

$$
\begin{aligned}
\mathbf{z}^{(0)}(\mathbf{r}) &= \psi\big(\mathbf{W}^{(0)}\mathbf{r}+\mathbf{b}^{(0)}\big) \\
\mathbf{z}^{(\ell)}(\mathbf{r}) &= \rho^{(\ell)}\big(\mathbf{W}^{(\ell)}\mathbf{z}^{(\ell-1)}(\mathbf{r})+\mathbf{b}^{(\ell)}\big) \\
f_{\boldsymbol{\theta}}(\mathbf{r}) &= \mathbf{W}^{(L)}\mathbf{z}^{(L-1)}(\mathbf{r})+\mathbf{b}^{(L)},
\end{aligned}
\tag{1}
$$

where $\boldsymbol{\theta}$ denotes the set of parameters dictating the tensor $\mathbf{W}^{(0)}\in\mathbb{R}^{F_{1}\times d\times d}$, the matrices $\mathbf{W}^{(\ell)}\in\mathbb{C}^{F_{\ell+1}\times F_{\ell}}$ and $\mathbf{b}^{(0)}\in\mathbb{R}^{F_{1}\times d}$, and the vectors $\mathbf{b}^{(\ell)}\in\mathbb{C}^{F_{\ell+1}}$ for $\ell=1,\ldots,L$, with fixed integers $F_{\ell}$ satisfying $F_{L+1}=1$. We will henceforth refer to $\psi$ as the template function of the INR. Owing to the use of Gabor wavelets by Saragadam et al. (2023), we will refer to functions of the form (1) as WIRE INRs, although (1) also captures architectures that do not use wavelets, such as SIREN (Sitzmann et al., 2020).
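To make (1) concrete, the following NumPy sketch evaluates a WIRE INR at a single coordinate. The Gabor template, the polynomial activation $\rho(z)=z^{2}$, and all sizes are illustrative assumptions for exposition, not the configuration of any published implementation:

```python
import numpy as np

def gabor_template(x, omega=4.0, sigma=1.0):
    """Template function psi: R^d -> C, applied row-wise.

    A Gabor wavelet oscillating along the first coordinate;
    omega and sigma are illustrative hyperparameters."""
    r2 = np.sum(x ** 2, axis=-1)
    return np.exp(-r2 / (2 * sigma ** 2) - 2j * np.pi * omega * x[..., 0])

def wire_inr(r, W0, b0, Ws, bs, rho=lambda z: z ** 2):
    """Evaluate f_theta(r) as in (1). W0: (F1, d, d), b0: (F1, d)."""
    z = gabor_template(W0 @ r + b0)          # first layer, shape (F1,)
    for W, b in zip(Ws[:-1], bs[:-1]):       # hidden layers, rho analytic
        z = rho(W @ z + b)
    return Ws[-1] @ z + bs[-1]               # final linear readout

# Tiny usage example on R^2 with random (complex) hidden-layer weights.
rng = np.random.default_rng(0)
d, F1, F2 = 2, 8, 8
W0 = rng.normal(size=(F1, d, d))
b0 = rng.normal(size=(F1, d))
Ws = [rng.normal(size=(F2, F1)) + 1j * rng.normal(size=(F2, F1)),
      rng.normal(size=(1, F2)) + 1j * rng.normal(size=(1, F2))]
bs = [np.zeros(F2), np.zeros(1)]
print(wire_inr(np.array([0.3, -0.1]), W0, b0, Ws, bs))
```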

3 Main Results

For the application of INRs to practical problems, it is important to understand the function class that an INR architecture can represent. We will demonstrate how the function parameterized by an INR can be understood via time-frequency analysis, ultimately motivating the use of wavelets as template functions.

3.1 Expressivity of INRs

Noting that polynomials of sinusoids generate linear combinations of integer harmonics of said sinusoids, Yüce et al. (2022) bounded the expressivity of SIREN (Sitzmann et al., 2020) and related architectures (Fathony et al., 2020). These results essentially followed from identities relating products of trigonometric functions. For template functions that are not sinusoids, such as wavelets (Saragadam et al., 2023), these identities do not hold. The following result offers a bound on the class of functions represented by an INR.

Theorem 1.

Let $f_{\boldsymbol{\theta}}:\mathbb{R}^{d}\to\mathbb{C}$ be a WIRE INR. Assume that each of the activation functions $\rho^{(\ell)}$ is a polynomial of degree at most $K$, and that the Fourier transform of the template function $\psi$ exists (possibly only in the sense of tempered distributions). Let $\mathbf{W}^{(0)}\mathbf{r}=[\mathbf{W}_{1}\mathbf{r},\ldots,\mathbf{W}_{F_{1}}\mathbf{r}]^{\top}$ for $\mathbf{W}_{1},\ldots,\mathbf{W}_{F_{1}}\in\mathbb{R}^{d\times d}$ each having full rank, and also let $\mathbf{b}^{(0)}=[\mathbf{b}_{1},\ldots,\mathbf{b}_{F_{1}}]^{\top}$ for $\mathbf{b}_{1},\ldots,\mathbf{b}_{F_{1}}\in\mathbb{R}^{d}$. For $k\geq 0$, denote by $\Delta(F_{1},k)$ the set of ordered $F_{1}$-tuples of nonnegative integers $\boldsymbol{\ell}=[\ell_{1},\ldots,\ell_{F_{1}}]$ such that $\sum_{t=1}^{F_{1}}\ell_{t}=k$.

Let a point $\mathbf{r}_{0}\in\mathbb{R}^{d}$ be given. Then, there exists an open neighborhood $U\ni\mathbf{r}_{0}$ such that for all $\phi\in\mathcal{C}_{0}^{\infty}(U)$,

$$
\widehat{\phi\cdot f_{\boldsymbol{\theta}}}(\xi)=\left(\widehat{\phi}\ast\sum_{k=0}^{K^{L-1}}\sum_{\boldsymbol{\ell}\in\Delta(F_{1},k)}\widehat{\beta}_{\boldsymbol{\ell}}\mathop{\ast}_{t=1}^{F_{1}}\left(e^{i2\pi\langle\mathbf{W}_{t}^{-\top}\xi,\mathbf{b}_{t}\rangle}\,\widehat{\psi}\big(\mathbf{W}_{t}^{-\top}\xi\big)\right)^{\ast\ell_{t},\xi}\right)(\xi),
\tag{2}
$$

for coefficients $\widehat{\beta}_{\boldsymbol{\ell}}\in\mathbb{C}$ independent of $\mathbf{r}$, where $(\cdot)^{\ast\ell,\xi}$ denotes $\ell$-fold convolution of the argument with itself with respect to $\xi$ ($0$-fold convolution is defined by convention to yield the Dirac delta). Furthermore, the coefficients $\widehat{\beta}_{\boldsymbol{\ell}}$ are only nonzero when each $t\in[1,\ldots,F_{1}]$ such that $\ell_{t}\neq 0$ also satisfies $\mathbf{W}_{t}\mathbf{r}_{0}+\mathbf{b}_{t}\in\operatorname{supp}(\psi)$.

The proof is left to Appendix A. Theorem 1 illustrates two things. First, the output of an INR has a Fourier transform determined by convolutions of the Fourier transforms of the atoms in the first layer with themselves, serving to generate “integer harmonics” of the initial atoms determined by scaled, shifted copies of the template function $\psi$. Notably, this recovers (Yüce et al., 2022, Theorem 1). Second, the support of these scaled and shifted atoms is preserved, so that the output at a given coordinate $\mathbf{r}$ is dependent only upon the atoms in the first layer whose support contains $\mathbf{r}$.

Remark 2.

The assumptions behind Theorem 1 can be relaxed to capture a broader class of architectures. By imposing continuity conditions on the template function $\psi$, the activation functions can be reasonably extended to analytic functions. These extensions are discussed in Appendix B.

3.2 Effective Time-Frequency Support of INRs

As noted before, Theorem 1 describes the output of an INR at a point by self-convolutions of sums of the Fourier transforms of the atoms in the first layer whose support contains that point. So, for the remainder of this section, we can assume that $\mathbf{r}_{0}\in\mathbb{R}^{d}$ is fixed, and that we only consider indices $t$ such that $\psi(\mathbf{W}_{t}\mathbf{r}_{0}+\mathbf{b}_{t})\neq 0$.

Assume that the template function $\psi$ has compact support, which precludes $\widehat{\psi}$ from having compact support. However, if we make the further assumption that $\psi$ is smooth, then $\widehat{\psi}$ is rapidly decreasing. We will then say, informally, that $\widehat{\psi}$ has a compact effective support, denoted by the set $A\subset\mathbb{R}^{d}$. Making the approximation $\operatorname{supp}(\widehat{\psi})\approx A$, we have that $\operatorname{supp}(\widehat{\psi}(\mathbf{W}_{t}^{-\top}\xi))\approx\mathbf{W}_{t}^{\top}A$.

As a first approximation, then, we can estimate the frequency support “locally” around the point $\mathbf{r}_{0}\in\mathbb{R}^{d}$, in the sense of the Fourier transform following multiplication with a suitable cutoff function. Examining the summands in (2), the support of the self-convolutions in the Fourier domain is bounded by

$$
\operatorname{supp}\left(\mathop{\ast}_{t=1}^{F_{1}}\left(e^{i2\pi\langle\mathbf{W}_{t}^{-\top}\xi,\mathbf{b}_{t}\rangle}\,\widehat{\psi}\big(\mathbf{W}_{t}^{-\top}\xi\big)\right)^{\ast\ell_{t},\xi}\right)\subset\sum_{t=1}^{F_{1}}\ell_{t}\mathbf{W}_{t}^{\top}A,
$$

where the subset relationship is understood to be in the informal sense of effective support, and the sum on the RHS is understood as the Minkowski sum of the sets $\ell_{t}\mathbf{W}_{t}^{\top}A$. So, the frequency support described by Theorem 1 is effectively bounded by the set

$$
\bigcup_{k=0}^{K^{L-1}}\left\{\sum_{t=1}^{F_{1}}\ell_{t}\mathbf{W}_{t}^{\top}A \,:\, \boldsymbol{\ell}\in\Delta(F_{1},k)\right\}.
$$
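This harmonic-generation effect is easy to observe numerically in one dimension: the spectrum of the $n$-th power of a Gabor atom (cf. the Gabor wavelet example in Section 4.1) concentrates near $n$ times the atom's center frequency, with bandwidth growing like $\sqrt{n}$. A small sketch with an illustrative grid and parameters; the sign of the oscillation is chosen so that the spectral peak lands at $+\omega$ under NumPy's FFT sign convention:

```python
import numpy as np

# Powers of a Gabor atom: psi^n has its spectral peak near n*omega, with
# effective bandwidth growing like sqrt(n).
x = np.linspace(-8, 8, 4096)
omega, sigma = 3.0, 1.0
psi = np.exp(-x ** 2 / (2 * sigma ** 2) + 2j * np.pi * omega * x)
freqs = np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))
for n in (1, 2, 4, 8):
    spec = np.abs(np.fft.fftshift(np.fft.fft(psi ** n)))
    print(f"n={n}: peak at {freqs[np.argmax(spec)]:+.2f} (predicted {n * omega})")
```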

4 The Algebra of Complex Wavelets


Figure 2: Template functions in Fourier domain under exponentiation.

Of the INR architectures surveyed in Section 2, the only one to use a complex wavelet template function is WIRE (Saragadam et al., 2023), where a Gabor wavelet is used. Gabor wavelets are essentially asymmetric (in the Fourier domain) band-pass filters, which are necessarily complex-valued due to the lack of conjugate symmetry in the Fourier domain. We now consider the advantages of using complex wavelets, or more precisely progressive wavelets, as template functions for INRs by examining their structure as an algebra of functions.

4.1 Progressive Template Functions

For the sake of discussion, suppose that $d=1$, so that the INR represents a 1D function. The template function $\psi:\mathbb{R}\to\mathbb{C}$ is said to be progressive (more commonly called an analytic signal; we use this terminology to avoid confusion with the analytic activation functions used in the INR) if it has no negative frequency components, i.e., for $\xi<0$, we have $\widehat{\psi}(\xi)=0$ (Mallat, 1999). If $\psi$ is integrable, its Fourier transform is uniformly continuous, which implies that integrable progressive functions satisfy $\widehat{\psi}(0)=0$. Without embarking upon a long review of progressive wavelets (Mallat, 1999), we discuss some of the basic properties of progressive functions that are relevant to the discussion at hand. Progressive functions clearly remain progressive under scalar multiplication, shifts, and positive scaling: for arbitrary $s>0$, $u\in\mathbb{R}$, $z\in\mathbb{C}$, if $\psi(x)$ is progressive, then the function $z\cdot D_{s}T_{u}\psi(x):=z\cdot\psi((x-u)/s)$ is also progressive. Moreover, progressive functions are closed under multiplication (a simple consequence of the convolution theorem): if $\psi_{1}$ and $\psi_{2}$ are progressive, then $\psi_{3}(x):=\psi_{1}(x)\psi_{2}(x)$ is also progressive. That is, progressive functions constitute an algebra over $\mathbb{C}$.
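This closure property can be checked numerically: the product of two signals whose spectra are (essentially) confined to the nonnegative frequencies again has negligible negative-frequency energy. A minimal sketch, with illustrative frequencies and envelope:

```python
import numpy as np

# Product of two (essentially) progressive signals remains progressive:
# the negative-frequency energy of psi1 * psi2 is numerically negligible.
x = np.linspace(0.0, 1.0, 2048, endpoint=False)
psi1 = np.exp(2j * np.pi * 40 * x) * np.exp(-((x - 0.5) ** 2) / 0.01)
psi2 = np.exp(2j * np.pi * 25 * x)
spec = np.fft.fft(psi1 * psi2)
freqs = np.fft.fftfreq(x.size, d=x[1] - x[0])
neg_frac = np.sum(np.abs(spec[freqs < 0]) ** 2) / np.sum(np.abs(spec) ** 2)
print(f"negative-frequency energy fraction: {neg_frac:.2e}")  # ~ 0
```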

Example 1 (Complex Sinusoid).

For any $\omega>0$, the complex sinusoid $\psi(x;\omega)=\exp(-i2\pi\omega x)$ is a progressive function, as its Fourier transform is a Dirac delta centered at $\omega$. As pictured in Fig. 2 (Left), the powers $\psi^{n}(\cdot;\omega)$ are themselves complex sinusoids, with $\psi^{n}(x;\omega)=\psi(x;n\omega)$.

Example 2 (Gaussian).

The gaussian function, defined for some $\sigma>0$ as $\psi(x;\sigma)=\exp(-x^{2}/(2\sigma^{2}))$, is not a progressive function, as its Fourier transform is symmetric and centered about zero. Moreover, as pictured in Fig. 2 (Center-Left), its powers are also gaussian functions, $\psi^{n}(x;\sigma)=\psi(x;\sigma/\sqrt{n})$, whose Fourier transforms are likewise centered at zero. Unlike the complex sinusoid, the powers of the gaussian are all low-pass, but with increasingly wide passbands.

Example 3 (Gabor Wavelet).

For any $\omega,\sigma>0$, the Gabor wavelet defined as $\psi(x;\omega,\sigma)=\exp(-x^{2}/(2\sigma^{2})-i2\pi\omega x)$ is not a progressive function, as its Fourier transform is a gaussian centered at $\omega$ with standard deviation $1/\sigma$. However, the Fourier transforms of the powers $\psi^{n}$ for integers $n>0$ are gaussians centered at $n\omega$ with standard deviation $\sqrt{n}/\sigma$, as pictured in Fig. 2 (Center-Right). So, as $n$ grows sufficiently large, the effective support of $\widehat{\psi^{n}}$ will be contained in the positive reals, so that the Gabor wavelet can be considered progressive for the purposes of studying INRs.

A progressive function on $\mathbb{R}$ has Fourier support contained in the nonnegative real numbers. Of course, there is no obvious notion of nonnegativity that generalizes to $\mathbb{R}^{d}$ for $d>1$. Noting that the nonnegative reals form a convex conic subset of $\mathbb{R}$, we define the notion of a progressive function with respect to a conic subset of $\mathbb{R}^{d}$:

Definition 3.

Let $\Gamma\subseteq\widehat{\mathbb{R}}^{d}$ be a convex conic set, i.e., for all $\gamma_{1},\gamma_{2}\in\Gamma$ and $a_{1},a_{2}\geq 0$, we have that $a_{1}\gamma_{1}+a_{2}\gamma_{2}\in\Gamma$ (we henceforth refer to such sets simply as “conic”). A function $\psi:\mathbb{R}^{d}\to\mathbb{C}$ is said to be $\Gamma$-progressive if $\operatorname{supp}(\widehat{\psi})\subseteq\Gamma$. The function $\psi$ is said to be locally $\Gamma$-progressive at $\mathbf{r}_{0}\in\mathbb{R}^{d}$ if there exists some $\Gamma$-progressive function $\psi_{\mathbf{r}_{0}}:\mathbb{R}^{d}\to\mathbb{C}$ such that for all smooth functions $\phi\in\mathcal{C}_{0}^{\infty}(\mathbb{R}^{d})$ with support in a sufficiently small neighborhood of $\mathbf{r}_{0}$, we have

$$
\widehat{\phi\cdot\psi}=\widehat{\phi}\ast\widehat{\psi_{\mathbf{r}_{0}}}.
\tag{3}
$$

Curvelets (Candès & Donoho, 2004), for instance, are typically defined in a way that makes them $\Gamma$-progressive for some conic set $\Gamma$ that indicates the oscillatory direction of a curvelet atom. Observe that if $\Gamma$ is a conic set, then for any matrix $\mathbf{W}$, the set $\mathbf{W}^{\top}\Gamma$ is also conic. Thus, for a function $\psi$ that is $\Gamma$-progressive, the function $\psi(\mathbf{W}\mathbf{x})$ is $\mathbf{W}^{\top}\Gamma$-progressive.

Observe further that for two $\Gamma$-progressive functions $\psi_{1},\psi_{2}$, their product is also $\Gamma$-progressive. The closure of progressive functions under multiplication implies that an analytic function applied pointwise to a progressive function is progressive. For INRs as defined in (1), this yields the following corollary to Theorem 1.

Corollary 4.

Let $\Gamma\subseteq\widehat{\mathbb{R}}^{d}$ be conic, and let $\psi:\mathbb{R}^{d}\to\mathbb{C}$ be given, with Fourier support denoted $\Gamma_{0}:=\operatorname{supp}(\widehat{\psi})$. Let $\mathbf{W}^{(0)}\mathbf{r}=[\mathbf{W}_{1}\mathbf{r},\ldots,\mathbf{W}_{F_{1}}\mathbf{r}]^{\top}$ for $\mathbf{W}_{1},\ldots,\mathbf{W}_{F_{1}}\in\mathbb{R}^{d\times d}$ each having full rank. Assume that for each $t=1,\ldots,F_{1}$, we have $\mathbf{W}_{t}^{\top}\Gamma_{0}\subseteq\Gamma$. Then, the WIRE INR $f_{\boldsymbol{\theta}}:\mathbb{R}^{d}\to\mathbb{C}$ defined by (1) is a $\Gamma$-progressive function.

Moreover, if we fix some $\mathbf{r}_{0}\in\mathbb{R}^{d}$, and if the assumption $\mathbf{W}_{t}^{\top}\Gamma_{0}\subseteq\Gamma$ holds for the indices $t$ such that $\mathbf{W}_{t}\mathbf{r}_{0}+\mathbf{b}_{t}\in\operatorname{supp}(\psi)$, then $f_{\boldsymbol{\theta}}$ is locally $\Gamma$-progressive at $\mathbf{r}_{0}$.

The proof is left to Appendix C. Corollary 4 shows that INRs preserve the Fourier support of the transformed template functions in the first layer up to conic combinations, so that any advantages/limitations of approximating functions using such $\Gamma$-progressive functions are maintained.

Remark 5.

One may notice that a $\Gamma$-progressive function will always incur a large error when approximating a real-valued function, as real-valued functions have conjugate-symmetric Fourier transforms (apart from the case $\Gamma=\widehat{\mathbb{R}}^{d}$). For fitting real-valued functions, it is effective to simply fit the real part of the INR output to the function, as taking the real part of a function is equivalent to taking half of the sum of that function and its conjugate mirror in the Fourier domain. In the particular case of $d=1$, fitting the real part of a progressive INR to a function is equivalent to fitting the INR to that function's Hilbert transform.
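The Fourier-domain identity behind this remark can be verified directly: the spectrum of the real part of a signal is half the sum of the signal's spectrum and its conjugate mirror. A small sketch with a random progressive test spectrum (sizes are illustrative):

```python
import numpy as np

# For a signal f with spectrum spec, the spectrum of Re(f) equals
# 0.5 * (spec + conjugate mirror of spec).
rng = np.random.default_rng(1)
N = 256
spec = np.zeros(N, dtype=complex)
spec[1:N // 2] = rng.normal(size=N // 2 - 1) + 1j * rng.normal(size=N // 2 - 1)
f = np.fft.ifft(spec)                    # a progressive (analytic) signal
mirror = np.roll(spec[::-1], 1)          # spec evaluated at -xi (mod N)
assert np.allclose(np.fft.fft(f.real), 0.5 * (spec + np.conj(mirror)))
```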

4.2 Band-pass Progressive Wavelets

Corollary 4 holds for conic sets $\Gamma$, but is also true for a larger class of sets. If some set $\Gamma\subseteq\mathbb{R}^{d}$ is conic, it is by definition closed under all sums with nonnegative coefficients. Alternatively, consider the following weaker property:

Definition 6.

Let $\Gamma\subseteq\widehat{\mathbb{R}}^{d}$. $\Gamma$ is said to be weakly conic if $0\in\Gamma$ and, for all $\gamma_{1},\gamma_{2}\in\Gamma$ and $a_{1},a_{2}\geq 1$, we have that $a_{1}\gamma_{1}+a_{2}\gamma_{2}\in\Gamma$. A function $\psi:\mathbb{R}^{d}\to\mathbb{C}$ is said to be $\Gamma$-progressive if $\operatorname{supp}(\widehat{\psi})\subseteq\Gamma$. The function $\psi$ is said to be locally $\Gamma$-progressive at $\mathbf{r}_{0}\in\mathbb{R}^{d}$ if there exists some $\Gamma$-progressive function $\psi_{\mathbf{r}_{0}}:\mathbb{R}^{d}\to\mathbb{C}$ such that for all smooth functions $\phi\in\mathcal{C}^{\infty}(\mathbb{R}^{d})$ with support in a sufficiently small (again, not necessarily compact) neighborhood of $\mathbf{r}_{0}$, we have

$$
\widehat{\phi\cdot\psi}=\widehat{\phi}\ast\widehat{\psi_{\mathbf{r}_{0}}}.
\tag{4}
$$

The notion of a weakly conic set is illustrated in Fig. 3 (Left). Just as in the case of progressive functions for a conic set, the set of $\Gamma$-progressive functions for a weakly conic set $\Gamma\subseteq\widehat{\mathbb{R}}^{d}$ constitutes an algebra over $\mathbb{C}$. One can check, then, that Corollary 4 holds for weakly conic sets as well. Putting this into context, consider a template function $\psi$ such that $\widehat{\psi}$ vanishes in some neighborhood of the origin. Assume furthermore that $\operatorname{supp}(\widehat{\psi})$ is contained in some weakly conic set $\Gamma$.

Example 4 (Complex Meyer Wavelet).

The complex Meyer wavelet is most easily defined in terms of its Fourier transform. Define

$$
\widehat{\psi}(\xi):=\begin{cases}
\sin\!\big(\tfrac{3\xi}{4}-\tfrac{\pi}{2}\big) & \xi\in[2\pi/3,\,4\pi/3]\\
\cos\!\big(\tfrac{3\xi}{8}-\tfrac{\pi}{2}\big) & \xi\in[4\pi/3,\,8\pi/3]\\
0 & \text{otherwise}.
\end{cases}
$$

The complex Meyer wavelet and its powers are pictured in Fig. 2 (Right). Observe that these functions are not only progressive, but are also $\Gamma$-progressive for the weakly conic set $\Gamma=[2\pi/3,\infty)$. The Meyer scaling function, pictured by the dashed line in Fig. 2 (Right), has Fourier support that overlaps that of the complex Meyer wavelet itself, but not that of any of its powers.
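A direct transcription of this definition is given below; it is useful for checking that $\widehat{\psi}$ vanishes below $2\pi/3$, so that $\psi$ is indeed $\Gamma$-progressive for the weakly conic set $\Gamma=[2\pi/3,\infty)$ (the sampling grid is an illustrative choice):

```python
import numpy as np

def meyer_hat(xi):
    """Fourier transform of the (simplified) complex Meyer wavelet above."""
    xi = np.asarray(xi, dtype=float)
    out = np.zeros_like(xi)
    lo = (xi >= 2 * np.pi / 3) & (xi <= 4 * np.pi / 3)
    hi = (xi > 4 * np.pi / 3) & (xi <= 8 * np.pi / 3)
    out[lo] = np.sin(3 * xi[lo] / 4 - np.pi / 2)
    out[hi] = np.cos(3 * xi[hi] / 8 - np.pi / 2)
    return out

xi = np.linspace(-10, 10, 2001)
# Band-pass: no energy below the weakly conic set [2*pi/3, inf).
assert np.all(meyer_hat(xi[xi < 2 * np.pi / 3]) == 0)
```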

Applying this extension of Corollary 4, we see that if the atoms in the first layer of an INR using such a function $\psi$ have vanishing Fourier transform in some neighborhood of the origin, then the Fourier transform of the INR output also vanishes in that neighborhood.


Figure 3: (Left) A conic set $\Gamma_{1}\subset\mathbb{R}^{2}$, and a weakly conic set $\Gamma_{2}\subset\mathbb{R}^{2}$ (both truncated for illustration purposes). (Center) Modulus of a function $g$ given by the sum of four template atoms. (Right) Fourier transform of $f_{\boldsymbol{\theta}}(\mathbf{r})=\rho(g(\mathbf{r}))$, where $\rho(z)=-z+z^{2}-z^{3}$. The blue and orange cones correspond to the respectively highlighted parts of the function $g$. Effective Fourier supports of the template atoms constituting $g$ are enclosed by rectangles, and approximate centers of frequency support for each atom and product of atoms are marked by colored circles.

We illustrate this in $\mathbb{R}^{2}$ using a template function $\psi:\mathbb{R}^{2}\to\mathbb{C}$ given by the tensor product of a gaussian and a complex Meyer wavelet. Using this template function, we construct an INR with $F_{1}=4$ atoms in the first layer and a single polynomial activation function. The modulus of the sum of the template functions before applying the activation function is shown in Fig. 3 (Center). We then plot the modulus of the Fourier transform of $f_{\boldsymbol{\theta}}$ in Fig. 3 (Right). First, observe that since the effective supports of the transformed template functions lie in two disjoint sets, the Fourier transform of $f_{\boldsymbol{\theta}}$ can be separated into two cones, each corresponding to a region in $\mathbb{R}^{2}$. Second, since the complex Meyer wavelet vanishes in a neighborhood of the origin, these cones are weakly conic, so that the Fourier transform of $f_{\boldsymbol{\theta}}$ vanishes in a neighborhood of the origin as well, by Corollary 4 applied to weakly conic sets.

Remark 7.

The weakly conic sets pictured in Fig. 3 (Right) are only approximate bounds on the true Fourier support of the constituent atoms. We see that Corollary 4 still holds in an approximate sense, as the bulk of the Fourier support of the atoms is contained in each of the pictured cones.

4.3 A Split Architecture for INRs

Given that INRs preserve the band-pass properties of progressive template functions, it is natural to approximate functions using a sum of two INRs: one to handle the low-pass components using a scaling function, and the other to handle the high-pass components using a wavelet. We illustrate this in Fig. 4, where we fit two INRs to a test signal on $\mathbb{R}$ (Donoho & Johnstone, 1994).


Figure 4: (Left) Real part of INR output evaluated on $\mathbb{R}$. (Right) Fourier transform of INR output. (Top) Sum of the Gabor INR and scaling INR, with wavelet modulus maxima points. (Center) Individual outputs of the Gabor INR and scaling INR. (Bottom) Linear Gabor INR.

The first INR uses a gaussian template function $\psi(x)=\exp(-(\pi x)^{2}/6)$ with $L=1$, under the constraint that the weights $\mathbf{W}^{(0)}$ are all equal to one, i.e., the template atoms vary only in their abscissae. Such a network is essentially a single-layer perceptron (Zhang & Benveniste, 1992) for representing smooth signals. We refer to this network as the “scaling INR.”

The second INR uses a Gabor template function $\psi(x)=\exp(-(\pi x)^{2}/6)\exp(-i2\pi x)$ with $L=3$, where we initialize the weights in the first layer to be positive, thus satisfying the condition $\mathbf{W}_{t}^{\top}\Gamma\subseteq\Gamma$ in Corollary 4 for $\Gamma=\mathbb{R}^{+}$. Although $\psi$ is not progressive, its Fourier transform has fast decay, so we consider it to be essentially progressive, and thus approximately fulfilling the conditions of Corollary 4. We refer to this network as the “Gabor INR,” as it is the WIRE architecture (Saragadam et al., 2023) for signals on $\mathbb{R}$.

The reason for modeling a signal as the sum of a linear scaling INR and a nonlinear INR with a Gabor wavelet is apparent in Fig. 2 (Right), where the scaling function and powers of a complex Meyer wavelet are pictured. Observe that the portions of the Fourier spectrum covered by the scaling function and the high powers of the Gabor wavelet (as in an INR, by Theorem 1) are essentially disjoint. The idea behind this architecture is to use a simple network to approximate the smooth parts of the target signal, and then a more complicated nonlinear network to approximate the nonsmooth parts of the signal.
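A minimal PyTorch sketch of this split architecture follows, for a 1D target signal. The widths, depth, template parameters, polynomial activation, learning rate, and toy signal are all illustrative assumptions; they are not the exact configuration behind Fig. 4:

```python
import torch
import torch.nn as nn

class ScalingINR(nn.Module):
    """Linear 'scaling' INR: unit-scale gaussian atoms, learnable shifts."""
    def __init__(self, n_atoms=64):
        super().__init__()
        self.b = nn.Parameter(torch.linspace(-1.0, 1.0, n_atoms))
        self.out = nn.Linear(n_atoms, 1)

    def forward(self, x):                       # x: (N, 1)
        z = torch.exp(-(torch.pi * (x - self.b)) ** 2 / 6)
        return self.out(z)

class GaborINR(nn.Module):
    """Nonlinear INR: Gabor template first layer, complex hidden layers."""
    def __init__(self, n_atoms=64, width=64, depth=3):
        super().__init__()
        self.scale = nn.Parameter(torch.rand(n_atoms) + 0.5)   # positive init
        self.shift = nn.Parameter(2.0 * torch.rand(n_atoms) - 1.0)
        dims = [n_atoms] + [width] * (depth - 1) + [1]
        self.linears = nn.ModuleList(
            nn.Linear(m, n, dtype=torch.cfloat) for m, n in zip(dims, dims[1:]))

    def forward(self, x):                       # x: (N, 1)
        u = self.scale * x + self.shift
        z = torch.exp(-(torch.pi * u) ** 2 / 6 - 2j * torch.pi * u)
        for lin in self.linears[:-1]:
            z = lin(z) ** 2                     # analytic (polynomial) activation
        return self.linears[-1](z)

# Fit the real part of the sum of the two networks to a real toy signal.
scaling, gabor = ScalingINR(), GaborINR()
opt = torch.optim.Adam(list(scaling.parameters()) + list(gabor.parameters()), 1e-3)
x = torch.linspace(-1, 1, 512).unsqueeze(1)
y = torch.sign(x) * torch.exp(-4 * x ** 2)      # smooth except a jump at 0
for _ in range(1000):
    pred = scaling(x) + gabor(x).real
    loss = ((pred - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```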

We plot the real part of the sum of the scaling INR and Gabor INR in Fig. 4 (Top) along with the individual network outputs in Fig. 4 (Center). One can clearly see how the scaling INR captures the low-pass components of the signal, while the Gabor INR captures the transient behavior.

To see the role of the nonlinearities in the Gabor INR, we freeze the weights and biases in the first layer of the Gabor INR, and take an optimal linear combination of the resulting template atoms to fit the signal, thus yielding an INR with no nonlinearities beyond the template functions (Zhang & Benveniste, 1992). The real part of the resulting function is plotted in Fig. 4 (Bottom), where the singularities from the Gabor INR are severely smoothed. This reflects how the activation functions resolve high-frequency features from low-frequency approximations, as illustrated initially in Fig. 3.

5 Resolution of Singularities

A useful model for studying sparse representations of images is the cartoon-like image, which is a smooth function on $\mathbb{R}^{2}$ apart from singularities along a twice-differentiable curve (Candès & Donoho, 2004; Wakin et al., 2006).

The smooth part of an image can be handled by the scaling function associated to a wavelet transform, while the singular parts are best captured by the wavelet function. In the context of the proposed split INR architecture, the scaling INR yields a smooth approximation to the signal, and the Gabor INR resolves the remaining singularities.

5.1 Initialization With the Wavelet Modulus Maxima

As demonstrated by Theorem 1, the function $\psi$ in the first layer of an INR determines the expressivity of the network. Many such networks satisfy a universal approximation property (Zhang & Benveniste, 1992), but their value in practice comes from their implicit bias (Yüce et al., 2022; Saragadam et al., 2023) toward representing a particular class of functions. For instance, using a wavelet in the first layer results in sharp resolution of edges with spatially compact error (Saragadam et al., 2023). In the remainder of this section, we demonstrate how an understanding of singular points in terms of the wavelet transform can be used to bolster INR architectures and initialization schemes.

Roughly speaking, isolated singularities in a signal are points where the signal is nonsmooth, but is smooth in a punctured neighborhood around that point. Such singularities generate “wavelet modulus maxima” (WMM) curves in the continuous wavelet transform (Mallat, 1999), which have slow decay in the Fourier domain. With Theorem 1 in mind, we see that INRs can use a collection of low-frequency template atoms and generate a collection of coupled high-frequency atoms, while also preserving the spatial locality of the template atoms.

The combination of these insights suggests a method for the initialization of INRs. In particular, for a given number of template atoms $F_{1}$ in an INR, the network weights $\mathbf{W}^{(0)}$ and abscissae $\mathbf{b}^{(0)}$ should be initialized in a way that facilitates effective training of the INR via optimization methods.


Figure 5: Comparison of initialization schemes. We plot the mean and standard deviation of the MSE over $10$ trials.

We empirically demonstrate the difference in performance between INRs initialized at random and INRs initialized in accordance with the singularities in the target signal. Once again, we fit the sum of a scaling INR and a Gabor INR to the target signal in Fig. 4. In Fig. 5, we plot the mean squared error (MSE) for this setting after $1000$ training steps for both randomly initialized and strategically initialized INRs, for $F_{1}=Km$, where $K\in\{1,3,5,7\}$ and $m$ is the number of WMM points as determined by an estimate of the continuous wavelet transform of the target signal. The randomly initialized INRs have abscissae distributed uniformly at random over the domain of the signal. The strategically initialized INRs place $K$ template atoms at each WMM point (so, a deterministic set of abscissa points). Both initialization schemes randomly distribute the scale weights uniformly in the interval $[1,K]$. We observe that for all $K$, the MSE of the strategically initialized INR is approximately an order of magnitude less than that of the randomly initialized INR.
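A sketch of this initialization strategy for a sampled 1D signal follows. The helper names are hypothetical, and the single-scale wavelet transform below is a crude proxy for a full WMM computation, which would track modulus maxima across scales (Mallat, 1999):

```python
import numpy as np

def wmm_abscissae(y, x, scale=0.02, thresh=0.25):
    """Crude single-scale proxy for the WMM points of a sampled signal y(x)."""
    dx = x[1] - x[0]
    t = np.arange(-4 * scale, 4 * scale + dx, dx)
    atom = np.exp(-t ** 2 / (2 * scale ** 2) + 2j * np.pi * t / scale)
    w = np.abs(np.convolve(y, atom, mode="same"))  # wavelet modulus, one scale
    w /= w.max()
    peaks = (w[1:-1] > w[:-2]) & (w[1:-1] >= w[2:]) & (w[1:-1] > thresh)
    return x[1:-1][peaks]

def strategic_init(y, x, K=3, seed=0):
    """K template atoms per WMM point; scale weights uniform on [1, K]."""
    rng = np.random.default_rng(seed)
    b0 = np.repeat(wmm_abscissae(y, x), K)         # deterministic abscissae
    W0 = rng.uniform(1.0, K, size=b0.shape)        # random positive scales
    return W0, b0
```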


Figure 6: Image approximations and error using INRs that are (Left) randomly initialized and (Center) initialized using the Canny edge detector. (Right) PSNR of image approximations during training.

When $d=2$, e.g., for images, the WMM can be approximated by the gradients of the target signal to obtain an initial set of weights and biases for the Gabor INR. We evaluate this empirically on the set of images shown in Fig. 6. For each example, we approximate the target image using the proposed split INR architecture. For the WMM-based initialization, we apply a Canny edge detector (Canny, 1986) to encode the positions and directions of the edges; further details can be found in Appendix D. We then use a subset of edge locations as biases of the first layer of the Gabor INR. For simplicity, we initialize the weights such that each neuron generates a radially symmetric Gabor filter. For images consisting of isolated singularities, such as the circular edge example in the first row of Fig. 6, we observe that WMM-based initialization results in nearly $30$ dB higher PSNR, along with faster convergence, than its randomly initialized counterpart. A similar but less pronounced advantage can be seen for other images, such as the parrot with a blurred background (second row of Fig. 6). WMM-based initialization has limited advantages when the image has dense texture, such as the chest X-ray scan (third row of Fig. 6). Overall, we observe that WMM-based initialization has similar advantages for signals on both $\mathbb{R}$ and $\mathbb{R}^{2}$.

6 Conclusions

We have offered a time-frequency analysis of INRs, leveraging polynomial approximations of the behavior of MLPs beyond the first layer. By noting that progressive functions form an algebra over the complex numbers, we demonstrated that this analysis yields insights into the behavior of INRs using complex wavelets, such as WIRE (Saragadam et al., 2023). This naturally leads to a split architecture for approximating signals, which decouples the low-pass and high-pass parts of a signal using two INRs, roughly corresponding to the scaling and wavelet functions of a wavelet transform. Furthermore, the connection with the theory of wavelets yields a natural initialization scheme for the weights of an INR based on the wavelet modulus maxima of a signal.

INR architectures built using wavelet activation functions offer useful advantages for function approximation that balance locality in space and frequency. The structure of complex wavelets as an algebra of functions with conic Fourier support, combined with the application of INRs for interpolating sampled functions, suggests a connection with microlocal and semiclassical analysis (Monard & Stefanov, 2023). As future work, we aim to extend the results of this paper to synthesize true singularities at arbitrarily fine scales, despite the continuous and non-singular structure of INR approximations of a sampled signal. We also foresee the decoupling of the smooth and singular parts of a signal by the split INR architecture having useful properties for solving inverse problems and partial differential equations.

Acknowledgements

This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-1-2571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-22-1-0060; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047. Maarten de Hoop gratefully acknowledges support from the Department of Energy under grant DE-SC0020345, the Simons Foundation under the MATH+X program, and the corporate members of the Geo-Mathematical Imaging Group at Rice University.

References

  • Candès (1998) Emmanuel Jean Candès. Ridgelets: theory and applications. PhD thesis, Stanford University, 1998.
  • Candès & Donoho (2004) Emmanuel Jean Candès and David Leigh Donoho. New tight frames of curvelets and optimal representations of objects with piecewise $\mathcal{C}^{2}$ singularities. Communications on Pure and Applied Mathematics, 57(2):219–266, 2004.
  • Canny (1986) John Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679–698, 1986.
  • Donoho & Johnstone (1994) David Leigh Donoho and Iain M Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425–455, 1994.
  • Fathony et al. (2020) Rizal Fathony, Anit Kumar Sahu, Devin Willmott, and J Zico Kolter. Multiplicative filter networks. In International Conference on Learning Representations, 2020.
  • Mallat (1999) Stéphane Mallat. A wavelet tour of signal processing. Elsevier, 1999.
  • Marar et al. (1996) Joao Fernando Marar, Edson CB Carvalho Filho, and Germano C Vasconcelos. Function approximation by polynomial wavelets generated from powers of sigmoids. In Wavelet Applications III, volume 2762, pp.  365–374. SPIE, 1996.
  • Mildenhall et al. (2020) Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In IEEE European Conference on Computer Vision, 2020.
  • Monard & Stefanov (2023) François Monard and Plamen Stefanov. Sampling the X-ray transform on simple surfaces. SIAM Journal on Mathematical Analysis, 55(3):1707–1736, 2023.
  • Ramasinghe & Lucey (2021) Sameera Ramasinghe and Simon Lucey. Beyond periodicity: Towards a unifying framework for activations in coordinate-MLPs. In IEEE European Conference on Computer Vision, 2021.
  • Saragadam et al. (2023) Vishwanath Saragadam, Daniel LeJeune, Jasper Tan, Guha Balakrishnan, Ashok Veeraraghavan, and Richard G Baraniuk. WIRE: Wavelet implicit neural representations. In IEEE/CVF Computer Vision and Pattern Recognition Conference, pp.  18507–18516, 2023.
  • Sitzmann et al. (2020) Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 2020.
  • Tancik et al. (2020) Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems, 2020.
  • Wakin et al. (2006) Michael B Wakin, Justin K Romberg, Hyeokho Choi, and Richard G Baraniuk. Wavelet-domain approximation and compression of piecewise smooth images. IEEE Transactions on Image Processing, 15(5):1071–1087, 2006.
  • Xu et al. (2022) Dejia Xu, Peihao Wang, Yifan Jiang, Zhiwen Fan, and Zhangyang Wang. Signal processing for implicit neural representations. Advances in Neural Information Processing Systems, 35, 2022.
  • Yüce et al. (2022) Gizem Yüce, Guillermo Ortiz-Jiménez, Beril Besbinar, and Pascal Frossard. A structured dictionary perspective on implicit neural representations. In IEEE/CVF Computer Vision and Pattern Recognition Conference, 2022.
  • Zhang & Benveniste (1992) Qinghua Zhang and Albert Benveniste. Wavelet networks. IEEE Transactions on Neural Networks, 3(6):889–898, 1992.

Appendix A Proof of Theorem 1

Let $\mathbf{r}_{0}$ be given, and let $I\subseteq\{1,\ldots,F_{1}\}$ be the set of indices $t$ such that $\mathbf{W}_{t}\mathbf{r}_{0}+\mathbf{b}_{t}\in\operatorname{supp}(\psi)$. Then, there exists an open neighborhood $U\ni\mathbf{r}_{0}$ such that for all $\mathbf{r}\in U$, $\psi(\mathbf{W}_{t}\mathbf{r}+\mathbf{b}_{t})=0$ for any $t\notin I$. Thus, for any $\mathbf{r}\in U$, the assumptions on the activation functions imply that $f_{\boldsymbol{\theta}}(\mathbf{r})$ is expressible as a complex multivariate polynomial of $\{\psi(\mathbf{W}_{i}\mathbf{r}+\mathbf{b}_{i})\}_{i\in I}$ with degree at most $K^{L-1}$, i.e.,

$$
f_{\boldsymbol{\theta}}(\mathbf{r})=\sum_{k=0}^{K^{L-1}}\sum_{\boldsymbol{\ell}\in\Delta(|I|,k)}\widehat{\beta}_{\boldsymbol{\ell}}\prod_{i\in I}\psi^{\ell_{i}}(\mathbf{W}_{i}\mathbf{r}+\mathbf{b}_{i}),
$$

for some set of complex coefficients $\widehat{\beta}_{\boldsymbol{\ell}}$. The desired result follows immediately from the convolution theorem.

Appendix B Relaxed Conditions on the INR Architecture

Theorem 1 assumes that the template function $\psi:\mathbb{R}^{d}\to\mathbb{C}$ has a Fourier transform that exists in the sense of tempered distributions, and that the activation functions are polynomials, which is a stronger condition than merely assuming they are complex analytic. Here, we discuss ways in which this can be relaxed to include more general activation functions, as well as how template functions on Euclidean spaces other than $\mathbb{R}^{d}$ can be used.

B.1 Relaxing the Class of Activation Functions

In Theorem 1, two assumptions are made. The first is that the template function has a Fourier transform that exists, possibly in the sense of tempered distributions. The second is that the activation functions are polynomials of finite degree. However, given that Theorem 1 is a local result, mild assumptions on the template function allow for reasonable extension of this result to analytic activation functions.

Indeed, if $\psi$ is assumed to be continuous, then one can take $V\subset U$ to be a compact subset of the neighborhood guaranteed by Theorem 1. It follows that the functions $\{\psi(\mathbf{W}_{t}\mathbf{r}+\mathbf{b}_{t})\}_{t=1}^{F_{1}}$ are bounded over $V$. If the activation functions are then merely assumed to be analytic, the INR can be approximated uniformly well over $V$ by finite polynomials. Without repeating the details of the proof, this yields an “infinite-degree” version of Theorem 1, where for any $\phi\in\mathcal{C}_{0}^{\infty}(V)$, we have

$$
\widehat{\phi\cdot f_{\boldsymbol{\theta}}}(\xi)=\left(\sum_{k=0}^{\infty}\sum_{\boldsymbol{\ell}\in\Delta(F_{1},k)}\widehat{\beta}_{\boldsymbol{\ell}}\left[\widehat{\phi}\ast\mathop{\ast}_{t=1}^{F_{1}}\left(e^{i2\pi\langle\mathbf{W}_{t}^{-\top}\xi,\mathbf{b}_{t}\rangle}\,\widehat{\psi}\big(\mathbf{W}_{t}^{-\top}\xi\big)\right)^{\ast\ell_{t},\xi}\right]\right)(\xi).
$$

This condition is not strong enough to handle general continuous activation functions, since the Stone–Weierstrass theorem for approximating complex continuous functions on a compact Hausdorff space requires polynomial terms that include conjugates of the arguments. One could conceivably extend Theorem 1 in this way, but this would not be compatible with the algebra of $\Gamma$-progressive functions, since $\Gamma$-progressive functions are not generally closed under complex conjugation.

B.2 Template Functions From Another Dimension

It is possible to consider cases where $\psi$ is defined as a map from a space of different dimension, say $\psi:\mathbb{R}^{q}\to\mathbb{C}$. If $q<d$, then one can construct a map $\tilde{\psi}:\mathbb{R}^{d}\to\mathbb{C}$ so that $\tilde{\psi}(x_{1},\ldots,x_{q},x_{q+1},\ldots,x_{d})=\psi(x_{1},\ldots,x_{q})$. This, for instance, is the case with SIREN (Sitzmann et al., 2020) and the 1D variant of WIRE (Saragadam et al., 2023), where $\psi:\mathbb{R}\to\mathbb{R}$ is used for functions on higher-dimensional spaces. In this case, the Fourier transform of $\tilde{\psi}$ only exists in the sense of distributions. If $q>d$, then one can apply the results in this paper by treating the INR as a map from $\mathbb{R}^{q}$ to $\mathbb{C}$, followed by a restriction that “zeroes out” the excess coordinates, i.e., by restricting the domain of $f_{\boldsymbol{\theta}}$ to the subspace of points $\mathbf{x}=(x_{1},\ldots,x_{d},0,\ldots,0)$.

Appendix C Proof of Corollary 4

We will show that $f_{\boldsymbol{\theta}}$ is locally $\Gamma$-progressive at every point, from which the “global” $\Gamma$-progressive property follows.

Let $\mathbf{r}_{0}$ be given, and let $I\subseteq\{1,\ldots,F_{1}\}$ be the set of indices $t$ such that $\mathbf{W}_{t}\mathbf{r}_{0}+\mathbf{b}_{t}\in\operatorname{supp}(\psi)$. By Theorem 1, there exists an open neighborhood $U\ni\mathbf{r}_{0}$ such that for all $\mathbf{r}\in U$, $f_{\boldsymbol{\theta}}(\mathbf{r})$ takes the form of a complex multivariate polynomial of $\{\psi(\mathbf{W}_{i}\mathbf{r}+\mathbf{b}_{i})\}_{i\in I}$. Denoting this polynomial by $\mathcal{P}(\mathbf{r})$, we have that for any $\phi\in\mathcal{C}^{\infty}(U)$,

$$
\phi\cdot\mathcal{P}=\phi\cdot f_{\boldsymbol{\theta}}.
$$

Under the given assumptions, each of the terms $\{\psi(\mathbf{W}_{i}\mathbf{r}+\mathbf{b}_{i})\}_{i\in I}$ is a $\Gamma$-progressive function. Since $\Gamma$-progressive functions constitute an algebra over $\mathbb{C}$, any polynomial of them will yield a $\Gamma$-progressive function. Letting $U$ be the “sufficiently small neighborhood of $\mathbf{r}_{0}$,” this implies that $f_{\boldsymbol{\theta}}$ is locally $\Gamma$-progressive at $\mathbf{r}_{0}$, as desired.

Noting that $\Gamma$-progressive functions also constitute an algebra over $\mathbb{C}$ when $\Gamma$ is weakly conic, this result applies in that case as well.

Appendix D Wavelet Modulus Maxima Initialization in Two Dimensions

Figure 7: WMM initialization in two dimensions. (a) shows an image to be represented, while (b) shows the output of a Canny edge detector. (c) shows WMM-type bias initialization, with red dots showing some of the locations used as bias values.

For 2D images, we approximate the WMM locations as the edges of the image. Fig. 7 shows the flowchart for WMM-type initialization. We start with the image to be represented and perform Canny edge detection on it. We then use the binary edge map as a proxy for the WMM locations, taking the edge locations as bias values for each neuron in the Gabor INR.
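A sketch of this procedure, assuming a grayscale image, atom centers normalized to $[-1,1]^{2}$, and scikit-image's Canny detector; the smoothing scale and sampling strategy are illustrative choices:

```python
import numpy as np
from skimage import feature

def canny_wmm_init(img, n_atoms, sigma=2.0, seed=0):
    """Sample Gabor INR atom centers from Canny edges (a WMM proxy).

    Returns (n_atoms, 2) atom centers in [-1, 1]^2; each first-layer bias
    is then set so that its (radially symmetric) atom is centered there."""
    edges = feature.canny(img, sigma=sigma)        # binary edge map
    ys, xs = np.nonzero(edges)
    rng = np.random.default_rng(seed)
    idx = rng.choice(xs.size, size=min(n_atoms, xs.size), replace=False)
    h, w = img.shape
    return np.stack([2 * xs[idx] / (w - 1) - 1,
                     2 * ys[idx] / (h - 1) - 1], axis=1)
```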