
Linear optimal transport subspaces for point set classification

Mohammad Shifat-E-Rabbi, Naqib Sad Pathan, Shiying Li, Yan Zhuang, Abu Hasnat Mohammad Rubaiyat, and Gustavo K. Rohde

M.S.E. Rabbi is with the Department of Electrical and Computer Engineering, North South University, Dhaka, Bangladesh (e-mail: rabbi.mohammad@northsouth.edu). N.S. Pathan is with the Imaging and Data Science Laboratory and the Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, USA (e-mail: qpb3vt@virginia.edu). S. Li is with the Department of Mathematics, University of North Carolina - Chapel Hill, NC, USA (e-mail: shiyl@unc.edu). Y. Zhuang is with the Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, MD, USA (e-mail: yan.zhuang2@nih.gov). A.H.M. Rubaiyat is with the Imaging and Data Science Laboratory and the Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, USA (e-mail: ar3fx@virginia.edu). G.K. Rohde is with the Imaging and Data Science Laboratory, the Department of Biomedical Engineering, and the Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, USA (e-mail: gustavo@virginia.edu). M.S.E. Rabbi (rabbi.mohammad@northsouth.edu) is the corresponding author.
Abstract

Learning from point sets is an essential component of many computer vision and machine learning applications. The native, unordered, and permutation-invariant structure of set data is challenging to model, particularly for point set classification under spatial deformations. Here we propose a framework for classifying point sets that have undergone certain types of spatial deformations, with a particular emphasis on datasets featuring affine deformations. Our approach employs the Linear Optimal Transport (LOT) transform to obtain a linear embedding of set-structured data. Utilizing the mathematical properties of the LOT transform, we demonstrate its capacity to accommodate variations in point sets by constructing a convex data space, effectively simplifying point set classification problems. Our method, which employs a nearest-subspace algorithm in the LOT space, is label efficient, non-iterative, and requires no hyper-parameter tuning. It achieves competitive accuracy compared with state-of-the-art methods across various point set classification tasks. Furthermore, our approach exhibits robustness in out-of-distribution scenarios where the training and test distributions differ in deformation magnitude.

Index Terms:
particle-LOT, subspace modeling, classification, optimal transport.

I Introduction

Point sets provide valuable insights about object geometry, making them useful for a variety of applications, including object detection, recognition, segmentation, and tracking in fields such as robotics, autonomous vehicles, virtual reality, and computer vision, among others [1, 2, 3, 4, 5]. They represent the surface geometry of an object in an N-dimensional space as a set of points, obtained using various scanning technologies such as LiDAR or photogrammetry [6, 7], or by sampling a continuous probability density function over an N-D space [8]. However, modeling the set structure space for classification presents significant challenges due to the sparsity and noise in data, the accumulation of spatial deformations (such as affine deformations) in real-world point set data, and the high dimensionality of point sets, among other factors [9, 10, 1, 11]. Furthermore, defining a metric or distance function for point set classification is challenging because of the permutation-invariant nature of point sets, which arises from the arbitrary order of points in a set [1, 12]. Despite these challenges, there has been growing interest in developing new algorithms and techniques for point set classification.

In recent years, several research efforts have focused on point set classification, resulting in the development of various methods to address challenges in this area. Over the last few decades, point set classification methods have evolved from relying on feature engineering [13, 14, 15, 16] to utilizing deep neural networks that learn representations and use them in classification tasks [3, 1, 17]. Neural networks have emerged as a leading classification framework for point sets, providing end-to-end learning capabilities and eliminating the need for hand-crafted feature engineering. They have been shown to achieve high accuracy in several classification tasks and are well suited to parallel implementation on graphics processing units (GPUs) [3, 1, 17]. However, the effectiveness of neural network-based methods is often limited by their high data requirements [18], high computational costs [1], and vulnerability to out-of-distribution samples, e.g., adversarial attacks [19, 20, 21].

While the conventional approach to modeling point sets involves directly processing their coordinates, an alternative and less commonly used approach is to represent a point set as a deformation of another point set [22]. To this end, point set deformation models have been developed using the mathematics of optimal mass transport [22, 23]. These models treat a point set as a smooth, nonlinear, and invertible transformation of a reference point set structure. Such models can be estimated using the linear optimal transport (LOT) transform, which has found applications in various fields [23]. The LOT transform of a point set provides a linear embedding of that point set, which can then be compared with other point set data [23]. The LOT transform has been combined with various machine learning techniques and used in many applications [23, 20].

This paper introduces a new method for classifying point sets by expanding upon the LOT-based modeling frameworks. We start by introducing a transport generative model to define point set classes, where class elements can be conceived as instances of an unknown template point set pattern under the effect of unknown spatial deformations. Using the mathematical properties of the LOT transform, we establish that these point set classes, under our generative model (with certain conditions on spatial deformations), can be constructed as convex subspaces in the LOT space, which are capable of accommodating the variations in point set data. Subsequently, we propose a nearest subspace-based classifier in the LOT space for classifying point sets under the given generative model. Our model is also capable of mathematically encoding invariances by integrating mathematical knowledge of deformations known to be present in the data. In our experiments, we particularly focus on datasets experiencing affine deformations and demonstrate the effectiveness of our method compared to several state-of-the-art methods. Our approach exhibits particular strength in situations characterized by limited training data and in the challenging out-of-distribution setting, where the training and test distributions differ in terms of deformation magnitudes.

II Preliminaries

II-A Linear optimal transport embeddings

The fundamental principle of optimal transport theory relies on quantifying the amount of effort (measured as the product of mass and distance) required to rearrange one distribution to another, which gives rise to the Wasserstein metric between distributions. In the present study, we utilize a linearized version of this metric, as outlined in [23], which is constructed formally through a tangent space approximation of the underlying manifold.

Following the construction in [23], we define the linear optimal transport transform for probability measures in $\mathcal{P}_2(\mathbb{R}^L)$, the set of absolutely continuous measures with finite second moments and bounded densities. (Any $\mu\in\mathcal{P}_2(\mathbb{R}^L)$ has the following two properties: (i) bounded second moment, i.e., $\int\|x\|^2\,d\mu(x)<\infty$; (ii) absolute continuity with respect to the Lebesgue measure on $\mathbb{R}^L$ with bounded density, i.e., $\mu$ has a density function $f_\mu$ defined on $\mathbb{R}^L$ with $\|f_\mu\|_\infty<\infty$.) For simplicity, let us fix a reference measure $\sigma$ as the Lebesgue measure on a convex compact set of $\mathbb{R}^L$. Thanks to Brenier's theorem [24], there is a unique minimizer $T_\sigma^\mu$ of the following optimal transportation problem

\[
\min_{T_\sharp\sigma=\mu}\int_{\mathbb{R}^L}\|x-T(x)\|^2\,d\sigma(x), \tag{1}
\]

where the push-forward (transport) relation $T_\sharp\sigma=\mu$ is defined via $\mu(B)=\sigma(T^{-1}(B))$ for any measurable set $B\subseteq\mathbb{R}^L$. The linear optimal transport (LOT) transform is given by the correspondence

\[
\mu\mapsto T_\sigma^\mu, \tag{2}
\]

where each probability measure $\mu$ is identified with the optimal transport map $T_\sigma^\mu:\mathbb{R}^L\rightarrow\mathbb{R}^L$ from the fixed reference $\sigma$ to $\mu$, which lies in a linear space. The square root of the minimum value in (1) is the Wasserstein-2 distance between $\sigma$ and $\mu$ [25]. The LOT metric between two probability distributions $\mu,\nu\in\mathcal{P}_2(\mathbb{R}^L)$ is (with $\|T\|_\sigma:=\big(\int_{\mathbb{R}^L}\|T(x)\|^2\,d\sigma(x)\big)^{1/2}$)

\[
d_{\textrm{LOT}}(\mu,\nu):=\|T_\sigma^\mu-T_\sigma^\nu\|_\sigma. \tag{3}
\]

For simplicity, we denote by $\widehat{\mu}$ the LOT transform of $\mu$, i.e., $\widehat{\mu}=T_\sigma^\mu$ with the reference $\sigma$ fixed.

It turns out that the linearization ability of LOT is closely related to the scope of the following so-called composition property [26, 27]:

\[
T_\sigma^{g_\sharp\mu}=g\circ T_\sigma^\mu, \tag{4}
\]

where $g\in\mathcal{T}_L$, and $\mathcal{T}_L$ is the set of all diffeomorphisms from $\mathbb{R}^L$ to $\mathbb{R}^L$. In particular, given a convex $\mathcal{G}\subseteq\mathcal{T}_L$, the LOT embeddings of the measures deformed by maps in $\mathcal{G}$ form a convex set if every $g\in\mathcal{G}$ satisfies the composition property (4), as shown more formally below. (Note that in general $\mathcal{G}_\sharp\mu$ is not convex, since $(\lambda_1 g_1+\lambda_2 g_2)_\sharp\mu\neq\lambda_1{g_1}_\sharp\mu+\lambda_2{g_2}_\sharp\mu$.)

Proposition II.1 (Lemma A.2 in [27]).

Let $\mathcal{G}\subseteq\mathcal{T}_L$ be convex. Given $\mu\in\mathcal{P}_2(\mathbb{R}^L)$, define $\mathcal{G}_\sharp\mu:=\{g_\sharp\mu:g\in\mathcal{G}\}$. If (4) holds for all $g\in\mathcal{G}$, then $\widehat{\mathcal{G}_\sharp\mu}:=\{\widehat{\nu}:\nu\in\mathcal{G}_\sharp\mu\}$ is convex in the LOT transform domain.

When the dimension $L\geq 2$, it is shown in [26] that $g$ can only be a "basic" transformation (more specifically, a translation, an isotropic scaling, or a composition of the two) for the composition property (4) to hold for arbitrary $\mu$. Fortunately, [27] proposes an approximate composition property for perturbations of these basic transformations, whose set we denote $\mathcal{A}=\{h(x)=ax+b:a>0,\,b\in\mathbb{R}^L\}$.

Property 1 (Approximate composition, p. 388 in [27]; referred to there as $\delta$-compatibility). Let $\epsilon\geq 0$ and $\mu\in\mathcal{P}_2(\mathbb{R}^L)$. Let $g\in\mathcal{T}_L$ be such that $\|g-h\|\leq\epsilon$ for some $h\in\mathcal{A}$. Then there exists some $\delta$ such that

\[
\|T_\sigma^{g_\sharp\mu}-g\circ T_\sigma^\mu\|_\sigma<\delta. \tag{5}
\]

Remark: Using the $\widehat{\mu}$ notation for the LOT transform of $\mu$, we have

\[
\|\widehat{g_\sharp\mu}-g\circ\widehat{\mu}\|_\sigma<\delta. \tag{6}
\]

With the above approximate composition property, one can show the following approximate-convexity analog of Proposition II.1 using Lemmas A.3 and A.4 of [27]:

Proposition II.2.

Let $\epsilon\geq 0$ and let $\mathcal{G}\subseteq\mathcal{T}_L$ be convex such that for any $g\in\mathcal{G}$, there exists some $h\in\mathcal{A}$ with $\|g-h\|\leq\epsilon$. Given $\mu\in\mathcal{P}_2(\mathbb{R}^L)$, the set $\widehat{\mathcal{G}_\sharp\mu}:=\{\widehat{\nu}:\nu\in\mathcal{G}_\sharp\mu\}$ is $2\delta$-convex in the LOT transform domain, where $\delta$ is given by the approximate composition property above. In particular, for any $c\in[0,1]$ and $\widehat{{g_1}_\sharp\mu},\widehat{{g_2}_\sharp\mu}\in\widehat{\mathcal{G}_\sharp\mu}$ (with $g_1,g_2\in\mathcal{G}$),

\[
\|(1-c)\,\widehat{{g_1}_\sharp\mu}+c\,\widehat{{g_2}_\sharp\mu}-\widehat{{g_c}_\sharp\mu}\|<2\delta, \tag{7}
\]

where $g_c=(1-c)g_1+cg_2\in\mathcal{G}$.
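For intuition, consider the exact case: if $g_1(x)=a_1x+b_1$ and $g_2(x)=a_2x+b_2$ both lie in $\mathcal{A}$ (so one may take $\epsilon=0$ and, correspondingly, $\delta=0$), then the composition property (4) holds exactly and

\[
(1-c)\,\widehat{{g_1}_\sharp\mu}+c\,\widehat{{g_2}_\sharp\mu}=(1-c)\,g_1\circ\widehat{\mu}+c\,g_2\circ\widehat{\mu}=g_c\circ\widehat{\mu}=\widehat{{g_c}_\sharp\mu},
\]

since $g_c(x)=\big((1-c)a_1+c\,a_2\big)x+\big((1-c)b_1+c\,b_2\big)$ is again an element of $\mathcal{A}$. In this exact case, a convex combination of two embedded samples is itself the embedding of a valid deformed sample; the $2\delta$ bound in (7) states that this behavior is approximately preserved for perturbations of $\mathcal{A}$.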

II-B Discrete implementation for point sets

For the analysis of discrete point set data, a discrete version of the linear optimal transport (LOT) embedding is required. In this case, both the reference $\sigma$ and the target $\mu$ are chosen as discrete probability measures, represented by point sets in $\mathbb{R}^L$. A point set in an $L$-dimensional space is a finite set of points in $\mathbb{R}^L$. A point set $\Omega_s$ with $N$ points can be thought of as the image of an injective map $s:\{1,\cdots,N\}\rightarrow\mathbb{R}^L$. (Note that a point set may be associated with many injective maps; e.g., the image sets of $s\circ\gamma$ and $s$ are the same for any permutation $\gamma$.) Given a point set $\Omega_s$ with $N$ points, we define a discrete probability distribution associated with the point set as

\[
P_s:=\frac{1}{N}\sum_{\mathbf{x}\in\Omega_s}\delta_{\mathbf{x}}=\frac{1}{N}\sum_{i=1}^{N}\delta_{s(i)}. \tag{8}
\]

Given a diffeomorphism $g\in\mathcal{T}_L$, the push-forward distribution of $P_s$ under $g$ is given by

\[
g_{\#}P_s:=\frac{1}{N}\sum_{\mathbf{x}\in\Omega_s}\delta_{g(\mathbf{x})}=\frac{1}{N}\sum_{i=1}^{N}\delta_{g(s(i))}=P_{g\circ s}. \tag{9}
\]

Let $\mathcal{F}_{N,L}$ denote the collection of injective maps from $\{1,\cdots,N\}$ to $\mathbb{R}^L$. Given $s,r\in\mathcal{F}_{N,L}$, the optimal transportation (Wasserstein-2) distance between the associated distributions $P_s$ and $P_r$ can be obtained by solving the linear programming problem given below:

\[
d_W^2(P_s,P_r)=\min_{\pi\in\mathbb{R}^{N\times N}}\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{ij}\,|s(i)-r(j)|^2 \tag{10}
\]

where $\pi_{ij}\geq 0$ and $\sum_{i=1}^{N}\pi_{ij}=\sum_{j=1}^{N}\pi_{ij}=1/N$ for all $i,j=1,\cdots,N$. Let us fix some $r\in\mathcal{F}_{N,L}$ and use $P_r$ as a reference. It turns out that any minimizer matrix $\pi^*$ of the optimal transport problem in (10) is a permutation matrix [25]. In other words, there is a permutation $\sigma_s^*:\{1,\cdots,N\}\rightarrow\{1,\cdots,N\}$ such that

\[
\pi^*_{ij}=\begin{cases}1/N&\textrm{if }j=\sigma_s^*(i)\\0&\textrm{otherwise.}\end{cases}
\]

Hence, with $r$ fixed, an optimal transport map between $P_r$ and $P_s$ is determined by $\sigma^*_s$ and $s$. The LOT transform of $P_s$ is then defined as [23] (note that one can write $s\circ\sigma^*_s=\begin{bmatrix}s(\sigma_s^*(1)),\cdots,s(\sigma_s^*(N))\end{bmatrix}^T$; note also that $\sigma^*_s$ may not be unique in general, and we follow the implementation in [23] to estimate one of them)

\[
\widehat{P}_s:=s\circ\sigma^*_s, \tag{11}
\]

and the LOT distance between two point set measures is

\[
d_{\textrm{LOT}}(P_s,P_q):=\|\widehat{P}_s-\widehat{P}_q\|, \tag{12}
\]

where $s,q\in\mathcal{F}_{N,L}$.
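For concreteness, the discrete LOT embedding in equation (11) can be computed with an off-the-shelf linear assignment solver. The sketch below is illustrative only; the function names and the use of SciPy's Hungarian solver are our assumptions, not the exact implementation of [23]. It assumes both point sets contain the same number of points with uniform weights, so that a minimizer of (10) is a permutation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def lot_embed(s, r):
    """LOT embedding (eq. (11)) of point set s with respect to reference r.

    s, r : arrays of shape (N, L) holding N points in R^L with uniform weights.
    Since an optimal plan of eq. (10) is a permutation for uniform weights,
    the Hungarian algorithm recovers sigma*; the embedding is s re-indexed so
    that row i is the point of s matched to reference point r[i].
    """
    cost = cdist(r, s, metric="sqeuclidean")   # cost[i, j] = |r(i) - s(j)|^2
    row, col = linear_sum_assignment(cost)     # optimal permutation sigma*
    return s[col]                              # s o sigma*, aligned to r

def lot_distance(s, q, r):
    """LOT distance (eq. (12)) between point sets s and q w.r.t. reference r."""
    return np.linalg.norm(lot_embed(s, r) - lot_embed(q, r))
```

With this embedding, every point set becomes a fixed-size array aligned to the reference, so Euclidean operations such as averaging, projections, and subspace fitting become meaningful.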

III Transport-based classification problem statement

In this section, we present a generative model-based problem statement for the type of classification problems we discuss in this paper, building upon the preliminaries established earlier. Our focus is on point set classification, where every class can be viewed as a collection of instances of a prototype point set pattern (a template) observed under unknown spatial deformations. To formalize this concept, we introduce a generative model that provides a formal approach to characterizing point set data of this type.

Generative model: Let $\mathcal{G}_L\subset\mathcal{T}_L$ be a set of smooth one-to-one transformations in an $L$-dimensional space. The mass-preserving generative model for the $k$-th class is defined to be the set

\[
\mathbb{S}^{(k)}=\left\{P_{s_j^{(k)}}\,\middle|\,P_{s_j^{(k)}}=g_{j\#}P_{\varphi^{(k)}},~\forall g_j\in\mathcal{G}_L\right\} \tag{13}
\]

where $P_{\varphi^{(k)}}$ corresponds to the point set distribution of the prototype template pattern for the $k$-th class and $P_{s_j^{(k)}}$ represents the point set distribution of the $j$-th sample from the $k$-th class in $\mathbb{S}^{(k)}$. With these definitions, we can now construct a formal mathematical description of the generative model-based problem statement for point set classification.

Classification problem: Let the sets of point set distributions $\mathbb{S}^{(k)}$ be given as in equation (13). Given training samples $\{P_{s_1^{(1)}},P_{s_2^{(1)}},\cdots\}$ (class 1), $\{P_{s_1^{(2)}},P_{s_2^{(2)}},\cdots\}$ (class 2), $\cdots$, determine the class of an unknown distribution $P_s$.

Note that the generative model in equation (13) describes set-structured point set data, which are challenging to compare due to their permutation-invariant nature. The generative model above is also not guaranteed to be convex, presenting challenges for effective classification using machine learning techniques. In the subsequent sections, we present a solution to the above classification problem: we first restructure the point clouds by computing their linear optimal transport (LOT) embeddings, and we then approximate the resulting convex spaces with subspaces, as done in many image [21, 28], signal [29, 30], and gradient distribution [31] classification problems.

IV Proposed solution

The LOT transform, described in Section II, can significantly simplify the classification problem stated above by providing a convex, linear embedding for the set-structured point set data. Let us first investigate the generative model of equation (13) in the LOT transform space. Applying the approximate composition property (equation (6)) to the generative model in equation (13), we obtain the LOT-space generative model:

\[
\widehat{\mathbb{S}}^{(k)}=\left\{\widehat{P}_{s_j^{(k)}}\,\middle|\,\widehat{P}_{s_j^{(k)}}=g_j\circ\widehat{P}_{\varphi^{(k)}},~\forall g_j\in\mathcal{G}_L\right\} \tag{14}
\]

In this context, $\widehat{P}_{s_j^{(k)}}$ and $\widehat{P}_{\varphi^{(k)}}$ refer to the LOT embeddings of $P_{s_j^{(k)}}$ and $P_{\varphi^{(k)}}$, respectively, with respect to a reference structure $P_r$ (see equation (11)). Based on the preliminary results presented in Section II (Property 1, Proposition II.2, and related results), it is possible to establish the convexity of the set $\widehat{\mathbb{S}}^{(k)}$ up to a certain bound, subject to certain constraints. Furthermore, we can show that when $\mathbb{S}^{(k)}\cap\mathbb{S}^{(p)}=\varnothing$, the intersection of $\widehat{\mathbb{S}}^{(k)}$ with $\widehat{\mathbb{S}}^{(p)}$ is empty [21].

IV-A Training phase

Based on the aforementioned theoretical discussion, we put forward a straightforward, non-iterative training approach for the classification method. It involves computing a projection matrix that maps each sample in the LOT space onto the subspace $\widehat{\mathbb{V}}^{(k)}$ (as outlined in [21]) generated by the $2\delta$-convex set $\widehat{\mathbb{S}}^{(k)}$. Specifically, the subspace is

\[
\widehat{\mathbb{V}}^{(k)}=\mathrm{span}\left(\widehat{\mathbb{S}}^{(k)}\right)=\Big\{\sum_{j\in J}\alpha_j\widehat{P}_{s_j^{(k)}}\,\Big|\,\alpha_j\in\mathbb{R},~J\textrm{ is finite}\Big\}.
\]

Given a set of training samples $\{P_{s_1^{(k)}},P_{s_2^{(k)}},\cdots\}$, the first step in our proposed method is to apply the LOT transform to them using a reference distribution $P_{r^{(k)}}$. This results in the transformed samples $\{\widehat{P}_{s_1^{(k)}},\widehat{P}_{s_2^{(k)}},\cdots\}$. The reference distribution $P_{r^{(k)}}$ is obtained by selecting a point set at random from the training set and then applying random perturbations to it. Subsequently, we estimate $\widehat{\mathbb{V}}^{(k)}$ as

\[
\widehat{\mathbb{V}}^{(k)}=\mathrm{span}\{\widehat{P}_{s_1^{(k)}},\widehat{P}_{s_2^{(k)}},\cdots\}. \tag{15}
\]
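In practice, a finite-dimensional basis for this span can be obtained by stacking the flattened LOT embeddings of the class-$k$ training samples into a matrix and taking its leading left singular vectors, as in [21]. The sketch below is a minimal illustration under that assumption; the function name and the `var_keep` parameter (set to the 99% explained-variance cutoff used in our experiments, see Section V-A) are ours.

```python
import numpy as np

def fit_class_subspace(lot_embeddings, var_keep=0.99):
    """Estimate an orthonormal basis B^(k) for span{P_hat_s1, P_hat_s2, ...} (eq. (15)).

    lot_embeddings : list of (N, L) arrays, the LOT embeddings of one class,
                     all computed against the same class reference P_r^(k).
    Returns a matrix whose columns are orthonormal basis vectors retaining
    `var_keep` of the total variance of the class.
    """
    X = np.stack([e.reshape(-1) for e in lot_embeddings], axis=1)  # (N*L) x n_samples
    U, svals, _ = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(svals**2) / np.sum(svals**2)
    n_keep = int(np.searchsorted(energy, var_keep)) + 1            # smallest basis reaching the cutoff
    return U[:, :n_keep]                                           # basis matrix B^(k)
```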

The proposed method also provides a structure for mathematically encoding invariances with respect to deformations that are known to be present in the data [21, 28]. In this paper, we prescribe methods to encode invariances with respect to a set of affine transformations: translation, isotropic and anisotropic scaling, and shear. The deformation types used for encoding invariances and the corresponding spanning sets are described below.

1. Translation: Let $g(\mathbf{x})=\mathbf{x}+\mathbf{x}_0$ be the translation by $\mathbf{x}_0=((\mathbf{x}_0)_1,(\mathbf{x}_0)_2,\cdots,(\mathbf{x}_0)_L)\in\mathbb{R}^L$, and let $P_{s_g}=g_{\#}P_s$. Using equation (6), we have $\widehat{P}_{s_g}=\widehat{g_{\#}P_s}\approx g\circ\widehat{P}_s=\widehat{P}_s+\mathbf{x}_0$, where $\widehat{P}_s=((\widehat{P}_s)_1,(\widehat{P}_s)_2,\cdots,(\widehat{P}_s)_L)$. Consequently,

\[
\widehat{P}_{s_g}\approx\widehat{P}_s+\mathbf{x}_0=\widehat{P}_s+(\mathbf{x}_0)_1(1,0,0,\cdots)+(\mathbf{x}_0)_2(0,1,0,\cdots)+\cdots+(\mathbf{x}_0)_L(0,0,\cdots,1).
\]

Therefore, as in [21, 28], we define the spanning set for translation as

\[
\mathbb{U}_T=\{u_t(1),u_t(2),\cdots,u_t(L)\},\quad\textrm{where}~u_t(1)=(1,0,0,\cdots),~u_t(2)=(0,1,0,\cdots),~\cdots,~u_t(L)=(0,0,\cdots,1).
\]
2. Isotropic scaling: Let $g(\mathbf{x})=a\mathbf{x}$ be the normalized isotropic scaling of $P_s$ by $a$, where $a\in\mathbb{R}_+$ and $P_{s_g}=g_{\#}P_s$. Using equation (6), we have $\widehat{P}_{s_g}\approx g\circ\widehat{P}_s=a\widehat{P}_s$. As in [21, 28], no additional spanning set is required for isotropic scaling, since any subspace containing $\widehat{P}_s$ already contains its scalar multiple $a\widehat{P}_s$. Therefore, the spanning set for isotropic scaling is defined as $\mathbb{U}_{D_0}=\varnothing$.

3. Anisotropic scaling: Let $g(\mathbf{x})=\breve{\mathcal{D}}\mathbf{x}$ be the normalized anisotropic scaling of $P_s$, where $\breve{\mathcal{D}}=\mathrm{diag}(a_1,a_2,\cdots,a_L)$ with $a_i\neq a_j$, $a_i\in\mathbb{R}_+$, and $P_{s_g}=g_{\#}P_s$. Using equation (6), we have $\widehat{P}_{s_g}\approx g\circ\widehat{P}_s=\breve{\mathcal{D}}\widehat{P}_s=\left(a_1(\widehat{P}_s)_1,a_2(\widehat{P}_s)_2,\cdots,a_L(\widehat{P}_s)_L\right)$. Consequently,

\[
\widehat{P}_{s_g}\approx\breve{\mathcal{D}}\widehat{P}_s=a_1((\widehat{P}_s)_1,0,0,\cdots)+a_2(0,(\widehat{P}_s)_2,0,\cdots)+\cdots+a_L(0,0,\cdots,(\widehat{P}_s)_L).
\]

Therefore, the spanning set for anisotropic scaling is defined as

\[
\mathbb{U}_D=\{u_d(1),u_d(2),\cdots,u_d(L)\},\quad\textrm{where}~u_d(1)=((\widehat{P}_s)_1,0,0,\cdots),~u_d(2)=(0,(\widehat{P}_s)_2,0,\cdots),~\cdots,~u_d(L)=(0,0,\cdots,(\widehat{P}_s)_L).
\]
4. Shear: Let $g(\mathbf{x})=\mathcal{H}\mathbf{x}$ be the normalized shear of $P_s$, where the shear matrix $\mathcal{H}$ has ones on its diagonal and shear factors $k_{ij}\in\mathbb{R}$ in its off-diagonal positions, and $P_{s_g}=g_{\#}P_s$. Using equation (6), we have $\widehat{P}_{s_g}\approx g\circ\widehat{P}_s=\mathcal{H}\widehat{P}_s=\left((\widehat{P}_s)_1+k_{12}(\widehat{P}_s)_2+k_{13}(\widehat{P}_s)_3+\cdots,~(\widehat{P}_s)_2+k_{21}(\widehat{P}_s)_1+k_{23}(\widehat{P}_s)_3+\cdots,~\cdots,~(\widehat{P}_s)_L+k_{L1}(\widehat{P}_s)_1+k_{L2}(\widehat{P}_s)_2+\cdots\right)$. Consequently,

\[
\widehat{P}_{s_g}\approx\mathcal{H}\widehat{P}_s=\widehat{P}_s+k_{12}((\widehat{P}_s)_2,0,0,\cdots)+k_{13}((\widehat{P}_s)_3,0,0,\cdots)+\cdots+k_{21}(0,(\widehat{P}_s)_1,0,\cdots)+k_{23}(0,(\widehat{P}_s)_3,0,\cdots)+\cdots+k_{L1}(0,0,\cdots,(\widehat{P}_s)_1)+k_{L2}(0,0,\cdots,(\widehat{P}_s)_2)+\cdots.
\]

Therefore, the spanning set for shear is defined as

\[
\mathbb{U}_S=\{u_s(1,2),u_s(1,3),\cdots,u_s(L,L-1)\},\quad\textrm{where}~u_s(1,2)=((\widehat{P}_s)_2,0,0,\cdots),~u_s(1,3)=((\widehat{P}_s)_3,0,0,\cdots),~\cdots,~u_s(L,L-1)=(0,0,\cdots,(\widehat{P}_s)_{L-1}).
\]

Finally, in light of the preceding discussion, we can approximate the enriched subspace $\widehat{\mathbb{V}}^{(k)}_E$ as

\[
\widehat{\mathbb{V}}^{(k)}_E=\mathrm{span}\left(\{\widehat{P}_{s_1^{(k)}},\widehat{P}_{s_2^{(k)}},\cdots\}\cup\mathbb{U}_A\right), \tag{16}
\]

where $\mathbb{U}_A=\mathbb{U}_T\cup\mathbb{U}_{D_0}\cup\mathbb{U}_D\cup\mathbb{U}_S$.
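As an illustration of how the spanning sets above can be assembled in practice, the following sketch generates the vectors of $\mathbb{U}_T$, $\mathbb{U}_D$, and $\mathbb{U}_S$ from a single training embedding. It is a minimal interpretation of the construction in [21, 28]; the function name and the choice to generate the data-dependent vectors from each training sample are our assumptions.

```python
import numpy as np

def invariance_vectors(P_hat):
    """Spanning vectors U_T, U_D, and U_S (Section IV-A) derived from one LOT embedding.

    P_hat : (N, L) array, the LOT embedding of a training sample.
    Returns a list of flattened (N*L,) vectors to be appended to the class
    training matrix before computing the basis of eq. (16). U_D0 (isotropic
    scaling) contributes nothing, as noted in the text.
    """
    N, L = P_hat.shape
    vecs = []
    for j in range(L):                       # translation directions U_T
        u = np.zeros((N, L)); u[:, j] = 1.0
        vecs.append(u.reshape(-1))
    for j in range(L):                       # anisotropic scaling directions U_D
        u = np.zeros((N, L)); u[:, j] = P_hat[:, j]
        vecs.append(u.reshape(-1))
    for i in range(L):                       # shear directions U_S (i != j)
        for j in range(L):
            if i != j:
                u = np.zeros((N, L)); u[:, i] = P_hat[:, j]
                vecs.append(u.reshape(-1))
    return vecs
```

The enriched subspace $\widehat{\mathbb{V}}^{(k)}_E$ of equation (16) is then obtained by appending these vectors as additional columns of the class training matrix before computing the basis, as in the earlier subspace-fitting sketch.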

Figure 1: Percentage test accuracy comparison of different methods on synthetic datasets.

IV-B Testing phase

To classify a given test sample $P_s$, we first apply the LOT transform to $P_s$ to obtain its LOT space representation $\widehat{P}_{s,r^{(k)}}$ with respect to the reference $P_{r^{(k)}}$ (which was pre-selected during the training phase). Assuming that the test samples originate from the generative model presented in equation (13) (or equation (14)), we determine the class of an unknown test sample $P_s$ using the following expression:

\[
\arg\min_k d^2\left(\widehat{P}_{s,r^{(k)}},\widehat{\mathbb{V}}^{(k)}_E\right) \tag{17}
\]

where $d(\cdot,\cdot)$ is the distance between the test sample and a trained subspace in the LOT transform space. We estimate the distance between $\widehat{P}_{s,r^{(k)}}$ and the trained subspaces using $d^2\left(\widehat{P}_{s,r^{(k)}},\widehat{\mathbb{V}}^{(k)}_E\right)\approx\|\widehat{P}_{s,r^{(k)}}-B^{(k)}B^{(k)T}\widehat{P}_{s,r^{(k)}}\|^2_{L_2}$, where the matrix $B^{(k)}$ contains the basis vectors of the subspace $\widehat{\mathbb{V}}^{(k)}_E$ arranged in its columns.
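The decision rule in equation (17) then reduces to a few matrix products per class. The sketch below combines the earlier `lot_embed` and subspace-fitting helpers (both illustrative assumptions rather than the exact implementation) into the nearest-subspace classifier.

```python
import numpy as np

def classify(P_s, class_refs, class_bases):
    """Nearest-subspace decision rule of eq. (17) (illustrative sketch).

    P_s         : (N, L) array, the test point set.
    class_refs  : list of (N, L) reference point sets P_r^(k), one per class.
    class_bases : list of orthonormal basis matrices B^(k) of V_E^(k).
    Returns the class index k minimizing the squared projection residual.
    """
    errors = []
    for r, B in zip(class_refs, class_bases):
        v = lot_embed(P_s, r).reshape(-1)        # P_hat_{s, r^(k)}, flattened
        residual = v - B @ (B.T @ v)             # v minus its projection onto V_E^(k)
        errors.append(residual @ residual)       # squared distance d^2
    return int(np.argmin(errors))
```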

V Results

V-A Experimental setup

Our objective is to analyze how the proposed method performs compared to state-of-the-art approaches in terms of classification accuracy, required training data, and robustness in out-of-distribution scenarios under a limited training data setting. To achieve this, we created training sets of varying sizes from the original training set of each dataset under examination. We then trained the models using these training sets and assessed their performance on the original test set. Each training split was generated by randomly selecting samples (without replacement) from the original training set, and the experiments were repeated ten times for each split size. The same train-test data samples were used for all algorithms in each split.

In order to assess the effectiveness of the proposed approach, we utilized several comparison methods. These included PointNet [1], DGCNN [17], and a multilayer perceptron (MLP) [32] in the FSpool feature embedding space [16]. We also conducted a comparative analysis with various conventional machine learning techniques across different set feature embedding spaces. These included logistic regression (LR), kernel support vector machine (k-SVM), multilayer perceptron (MLP), and nearest subspace (NS) classifier models [32] in the GeM1, GeM2, GeM4 [13], COVpool [14, 15], and FSpool [16] embedding spaces. The performance of the proposed method was evaluated in relation to these baselines, in addition to out-of-distribution experiments. In the proposed method, we selected the number of basis vectors for each subspace $\widehat{\mathbb{V}}^{(k)}_E$ such that the chosen basis vectors captured up to 99% of the total variance explained by the $k$-th class.

To assess the relative performance of the methods, we evaluated them on several datasets, including Point cloud MNIST [33, 34], ModelNet [35], and ShapeNet [36] datasets. We additionally applied random translations, anisotropic scaling, and shear transformations to both the training and test sets of the datasets. For the ShapeNet dataset, we tested the methods under two experimental setups: the regular setup, where both the training and test sets contained point sets at the same deformation magnitude level, and the out-of-distribution setup, where the training and test sets contained point sets at different deformation magnitude levels.
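For completeness, a random affine deformation of the kind described above (translation, anisotropic scaling, and shear) can be generated as in the sketch below. The deformation magnitudes shown are illustrative placeholders, not the exact values used in our experiments.

```python
import numpy as np

def random_affine_deform(points, rng, t_max=0.5, scale_range=(0.75, 1.25), k_max=0.2):
    """Apply a random translation, anisotropic scaling, and shear to a point set.

    points : (N, L) array of point coordinates.
    rng    : numpy Generator, e.g. np.random.default_rng(0).
    The magnitude parameters are illustrative assumptions, not experimental settings.
    """
    N, L = points.shape
    D = np.diag(rng.uniform(*scale_range, size=L))                 # anisotropic scaling
    H = np.eye(L)
    off_diag = ~np.eye(L, dtype=bool)
    H[off_diag] = rng.uniform(-k_max, k_max, size=off_diag.sum())  # shear factors k_ij
    b = rng.uniform(-t_max, t_max, size=L)                         # translation vector
    return points @ (H @ D).T + b                                  # x -> H D x + b applied to each point
```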

Figure 2: The accuracy of different methods as a function of the number of training samples per class, evaluated on MNIST, ModelNet, and ShapeNet datasets.

V-B Accuracy in synthetic case

We first evaluated the effectiveness of the proposed method by comparing it with other state-of-the-art techniques on two synthetic datasets. The synthetic datasets were generated by selecting one sample per class from the point cloud MNIST and ShapeNet datasets, followed by introducing random translations, anisotropic scaling, and shear transformations to each selected sample to generate training and test sets. Specifically, the training set consisted of two samples per class, while the test set comprised 25 samples per class. The obtained comparative results are displayed in Fig. 1. As observed, the proposed method substantially outperformed the other methods in this synthetic scenario.

Figure 3: Performance assessment under an out-of-distribution experimental setup where training and test distributions vary in terms of deformation magnitudes. The performance of the methods was assessed in terms of percentage test accuracy and plotted against the number of training images per class.

V-C Accuracy and efficiency in real datasets

We conducted the performance evaluation of the proposed method by comparing it with several state-of-the-art techniques, including PointNet, DGCNN, and MLP in FSpool feature embedding space, on the MNIST, ShapeNet, and ModelNet datasets. Fig. 2 presents the average test accuracy values obtained for different numbers of training samples per class. The results demonstrate that our proposed method outperformed the other methods across the range of training sample sizes used to train the models. Notably, the proposed method’s accuracy vs. training size curves exhibited a smoother trend in most cases compared to the other methods.

V-D Out-of-distribution robustness

To assess the effectiveness of the proposed method under the out-of-distribution setting, we introduced a gap between the magnitudes of deformations in the training and test sets. Specifically, we used $\mathcal{G}_{out}$ as the deformation set for the ‘out-distribution’ test set, while $\mathcal{G}_{in}$ was the deformation set for the ‘in-distribution’ training set. We trained the models using the ‘in-distribution’ data and tested using the ‘out-distribution’ data. For our out-of-distribution experiment, we used the ShapeNet dataset with small deformations as the ‘in-distribution’ training set and the ShapeNet dataset with larger deformations as the ‘out-distribution’ test set (see Fig. 3). The results show that the proposed method outperformed the other methods by an even more significant margin under the challenging out-of-distribution setup, as shown in Fig. 3. Under this setup, the proposed method obtained accuracy figures closer to those in the standard experimental setup (i.e., ShapeNet in Fig. 2). On the other hand, the accuracy of the other methods declined significantly under the out-of-distribution setup compared to the standard experimental setup (see ShapeNet results in Figs. 2 and 3).

Figure 4: Comparative analysis of the percentage test accuracy results attained by the proposed method and the conventional machine learning techniques implemented across different feature embedding spaces.

V-E Comparison with set-embedding-based methods

We further evaluated the proposed method against various set embedding-based approaches in combination with classical machine learning methods. The study involved comparing the proposed method with different classifier techniques, including LR, k-SVM, MLP, and NS [32], that were employed with various set-to-vector embedding methods, such as GeM (1,2,4) [13], COVpool [14, 15], and FSpool [16]. Fig. 4 illustrates the percentage test accuracy results obtained from these modified experiments, along with the results of the proposed method for comparison. As shown in Fig. 4, the proposed method outperformed all these models in terms of test accuracy.

VI Discussion

This paper presents a new method for classifying point sets using linear optimal transport (LOT) subspaces. Our method is appropriate for problems where the data at hand can be represented as instances of prototype template point set patterns observed under smooth, nonlinear, and one-to-one transformations. The results achieved in different experimental scenarios indicate that our proposed approach can deliver accuracy results comparable to state-of-the-art methods, provided that the data adheres to the generative model specified in equation (13). Additionally, the nearest LOT subspace technique was shown to be more data-efficient in these cases, meaning that it can attain higher accuracy levels using fewer training samples.

Our proposed method maintains high classification accuracy even in challenging out-of-distribution experimental conditions, as depicted in Fig. 3, whereas the accuracy figures of the other methods decline sharply. These results indicate that our method provides a better overall representation of the underlying data distribution, resulting in robust classification performance. The key to achieving better accuracy under out-of-distribution conditions is that our method not only learns the deformations present in the data but also learns the underlying data model, including the types of deformations, such as translation, scaling, and shear, and their respective magnitudes. These deformation types can be learned from just a few training samples containing those deformations, as well as potentially from the mathematically prescribed invariances proposed in [28].

Our proposed method, which utilizes the nearest subspace classifier in the LOT domain, is more suitable for classification problems in the above category than general set embedding methods combined with classical machine learning classifiers, as demonstrated by its classification performance. Typically, point set data classes in their original domain do not admit effective linear embeddings, and commonly used set-to-vector representation techniques are inadequate for generating such embeddings, as indicated by the results. This presents a significant challenge for any machine learning approach to perform effectively. However, the subspace model is appropriate in the LOT domain, since the LOT transform provides a linear embedding and a convex data geometry. Moreover, considering the subspace model in the LOT space improves the generative nature of our proposed classification method by implicitly including the data points from the convex combinations of the provided training data points.

VII Conclusions

In this paper, we propose an end-to-end classification system designed for a specific category of point set classification problems, where a data class can be considered as a collection of instances of a template pattern observed under a set of spatial deformations. If these deformations are appropriately modeled as a collection of smooth, one-to-one, and nonlinear transformations, then the data classes become easily separable in the transform space, specifically the LOT space, due to the properties outlined in the paper. These properties also enable the approximation of data classes as convex subspaces in the LOT space, resulting in a more suitable data model for the nearest subspace method. As we observed in our experiments, this approach yields high accuracy and robustness against out-of-distribution conditions. Many point set classification problems can be formulated in this way, and therefore, our proposed solution has wide applicability.

Finally, we note that there can be many potential adaptations of the proposed method. For instance, the linear subspace method in the presented LOT space could be adjusted to incorporate alternative assumptions regarding the set that best represents each class. While some problems might benefit from a linear subspace method similar to the one described earlier, where all linear combinations are allowed, other problems may require constraining the model using linear convex hulls. Additionally, investigating the sliced-Wasserstein distance using the discrete CDT transform (as proposed in [31]) in conjunction with subspace models is another promising avenue for future research.

Our proposed approach provides promising results in point set classification and serves as a basis for further exploration in this domain. As the amount of 3D (or N-D) data continues to increase and accurate object recognition and scene understanding become more crucial, we believe that the combination of linear optimal transport embeddings and subspace modeling in the transform space will become increasingly significant in this context. We anticipate that our proposed method will inspire further research in this direction and lead to novel developments in recognizing 3D (or N-D) objects or distributions.

Acknowledgments

This work was supported in part by NIH grant GM130825, NSF grant 1759802, and CSBC grant U54-CA274499.

References

  • [1] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 652–660, 2017.
  • [2] H. Zhao, L. Jiang, C.-W. Fu, and J. Jia, “Pointweb: Enhancing local neighborhood features for point cloud processing,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5565–5573, 2019.
  • [3] Y. Li, R. Bu, M. Sun, W. Wu, X. Di, and B. Chen, “Pointcnn: Convolution on x-transformed points,” Advances in neural information processing systems, vol. 31, 2018.
  • [4] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, “Multi-view 3d object detection network for autonomous driving,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1907–1915, 2017.
  • [5] Y. Zhou and O. Tuzel, “Voxelnet: End-to-end learning for point cloud based 3d object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4490–4499, 2018.
  • [6] Y. Xu and U. Stilla, “Toward building and civil infrastructure reconstruction from point clouds: A review on data and key techniques,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 2857–2885, 2021.
  • [7] Q. Wang, Y. Tan, and Z. Mei, “Computational methods of acquisition and processing of 3d point cloud data for construction applications,” Archives of computational methods in engineering, vol. 27, pp. 479–499, 2020.
  • [8] L. Zhou, Y. Du, and J. Wu, “3d shape generation and completion through point-voxel diffusion,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5826–5835, 2021.
  • [9] F. Pomerleau, F. Colas, R. Siegwart, et al., “A review of point cloud registration algorithms for mobile robotics,” Foundations and Trends® in Robotics, vol. 4, no. 1, pp. 1–104, 2015.
  • [10] X. Wang, M. H. Ang Jr, and G. H. Lee, “Cascaded refinement network for point cloud completion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 790–799, 2020.
  • [11] J. Zeng, G. Cheung, M. Ng, J. Pang, and C. Yang, “3d point cloud denoising using graph laplacian regularization of a low dimensional manifold model,” IEEE Transactions on Image Processing, vol. 29, pp. 3474–3489, 2019.
  • [12] Y. Lu, X. Liu, A. Soltoggio, and S. Kolouri, “Slosh: Set locality sensitive hashing via sliced-wasserstein embeddings,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2566–2576, 2024.
  • [13] F. Radenović, G. Tolias, and O. Chum, “Fine-tuning cnn image retrieval with no human annotation,” IEEE transactions on pattern analysis and machine intelligence, vol. 41, no. 7, pp. 1655–1668, 2018.
  • [14] Q. Wang, J. Xie, W. Zuo, L. Zhang, and P. Li, “Deep cnns meet global covariance pooling: Better representation and generalization,” IEEE transactions on pattern analysis and machine intelligence, vol. 43, no. 8, pp. 2582–2597, 2020.
  • [15] D. Acharya, Z. Huang, D. Pani Paudel, and L. Van Gool, “Covariance pooling for facial expression recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 367–374, 2018.
  • [16] Y. Zhang, J. Hare, and A. Prügel-Bennett, “Fspool: Learning set representations with featurewise sort pooling,” arXiv preprint arXiv:1906.02795, 2019.
  • [17] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon, “Dynamic graph cnn for learning on point clouds,” Acm Transactions On Graphics (tog), vol. 38, no. 5, pp. 1–12, 2019.
  • [18] W. Liu, J. Sun, W. Li, T. Hu, and P. Wang, “Deep learning on point clouds and its application: A survey,” Sensors, vol. 19, no. 19, p. 4188, 2019.
  • [19] F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, and J. Zhu, “Defense against adversarial attacks using high-level representation guided denoiser,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1778–1787, 2018.
  • [20] S. Basu, S. Kolouri, and G. K. Rohde, “Detecting and visualizing cell phenotype differences from microscopy images using transport-based morphometry,” Proceedings of the National Academy of Sciences, vol. 111, no. 9, pp. 3448–3453, 2014.
  • [21] M. Shifat-E-Rabbi, X. Yin, A. H. M. Rubaiyat, S. Li, S. Kolouri, A. Aldroubi, J. M. Nichols, and G. K. Rohde, “Radon cumulative distribution transform subspace modeling for image classification,” Journal of Mathematical Imaging and Vision, vol. 63, pp. 1185–1203, 2021.
  • [22] S. Kolouri, S. R. Park, M. Thorpe, D. Slepcev, and G. K. Rohde, “Optimal mass transport: Signal processing and machine-learning applications,” IEEE signal processing magazine, vol. 34, no. 4, pp. 43–59, 2017.
  • [23] W. Wang, D. Slepčev, S. Basu, J. A. Ozolek, and G. K. Rohde, “A linear optimal transportation framework for quantifying and visualizing variations in sets of images,” International journal of computer vision, vol. 101, pp. 254–269, 2013.
  • [24] Y. Brenier, “Polar factorization and monotone rearrangement of vector-valued functions,” Commun. Pure Appl. Math., vol. 44, no. 4, pp. 375–417, 1991.
  • [25] C. Villani, Topics in Optimal Transportation. No. 58, American Mathematical Soc., 2003.
  • [26] A. Aldroubi, S. Li, and G. K. Rohde, “Partitioning signal classes using transport transforms for data analysis and machine learning,” Sampl. Theory Signal Process. Data Anal., vol. 19, no. 6, 2021.
  • [27] C. Moosmüller and A. Cloninger, “Linear optimal transport embedding: Provable wasserstein classification for certain rigid transformations and perturbations,” Information and Inference: A Journal of the IMA, vol. 12, no. 1, pp. 363–389, 2023.
  • [28] M. Shifat-E-Rabbi, Y. Zhuang, S. Li, A. H. M. Rubaiyat, X. Yin, and G. K. Rohde, “Invariance encoding in sliced-wasserstein space for image classification with limited training data,” Pattern Recognition, vol. 137, p. 109268, 2023.
  • [29] A. H. M. Rubaiyat, M. Shifat-E-Rabbi, Y. Zhuang, S. Li, and G. K. Rohde, “Nearest subspace search in the signed cumulative distribution transform space for 1d signal classification,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3508–3512, IEEE, 2022.
  • [30] A. H. M. Rubaiyat, S. Li, X. Yin, M. Shifat-E-Rabbi, Y. Zhuang, and G. K. Rohde, “End-to-end signal classification in signed cumulative distribution transform space,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
  • [31] Y. Zhuang, S. Li, M. Shifat-E-Rabbi, X. Yin, A. H. M. Rubaiyat, G. K. Rohde, et al., “Local sliced-wasserstein feature sets for illumination-invariant face recognition,” arXiv preprint arXiv:2202.10642, 2022.
  • [32] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al., “Scikit-learn: Machine learning in python,” the Journal of machine Learning research, vol. 12, pp. 2825–2830, 2011.
  • [33] C. Garcia, “Point cloud mnist 2d,” 2020.
  • [34] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  • [35] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, “3d shapenets: A deep representation for volumetric shapes,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1912–1920, 2015.
  • [36] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al., “Shapenet: An information-rich 3d model repository,” arXiv preprint arXiv:1512.03012, 2015.