Robust Manifold Nonnegative Tucker Factorization for Tensor Data Representation
Abstract
Nonnegative Tucker Factorization (NTF) minimizes the Euclidean distance or Kullback-Leibler divergence between the original data and its low-rank approximation; it often suffers from gross corruptions or outliers and neglects the manifold structure of the data. In particular, NTF suffers from rotational ambiguity: solutions with and without rotation transformations are equally good in the sense of yielding the maximum likelihood. In this paper, we propose three Robust Manifold NTF (RMNTF) algorithms that handle outliers by incorporating structural knowledge about them. They first apply a half-quadratic optimization algorithm to transform the problem into a general weighted NTF in which the weights are influenced by the outliers. Then, we introduce the correntropy induced metric, the Huber function and the Cauchy function, respectively, to define the weights and handle the outliers. Finally, we introduce a manifold regularization to overcome the rotational ambiguity of NTF. We compare the proposed methods with a number of representative references covering the major branches of NTF on a variety of real-world image databases. Experimental results illustrate the effectiveness of the proposed methods under two evaluation metrics (accuracy and NMI).
keywords:
Nonnegative Tucker Factorization, Manifold learning, Low-rank representation.
1 Introduction
Non-negative Tucker factorization (NTF), also known as nonnegative multilinear singular value decomposition (SVD), is a multiway extension of nonnegative matrix factorization [1]. It exploits the nonnegativity of data, which enhances the ability of part-based representation, and has received considerable attention in many fields, e.g., text mining [2], hyperspectral imaging [3], blind source separation [4] and data clustering [5].
Finding and exploiting low-rank approximations and low-dimensional manifolds in high-dimensional data is a fundamental problem in machine learning. In particular, the components extracted by principal component analysis (PCA) [6] or vector quantization (VQ) [7] may lose their physical meaning if nonnegativity is not preserved for high-dimensional real-world data. Hence, nonnegative matrix factorization (NMF) [8, 9] has been used to explore low-rank representations of given data. However, it is intractable for NMF to deal with high-order tensors. The order of a tensor is the number of dimensions of the array, and a mode is one of its dimensions [10]. For example, an RGB image can be represented by a third-order tensor with dimensions height × width × channel. When applying NMF to tensorial data, the first step is to reshape tensors into matrices, which often destroys the meaningful tensor structure, and the large number of parameters leads to higher memory demands [11]. Tensorial data can naturally characterize data from multiple aspects and preserve the structure information in each mode. However, they are typically high dimensional and difficult to handle in their original space. To address this problem, nonnegative tensor decomposition (NTD) methods have been proposed to directly exploit the multidimensional structure of tensors. NTD can be viewed as a multiway generalization of NMF which not only inherits the advantages of NMF but also provides a physically meaningful representation of multiway structure.
For nonnegative tensor data analysis, many nonnegative tensor decomposition methods are based on the CANDECOMP/PARAFAC (CP) [12], Tucker [13] and low-tubal-rank [14] models, respectively. In this paper, we focus on the Tucker model, since it enables a more flexible and interpretable decomposition by utilizing the interactions of latent factors. Existing NTF methods usually have the following major problems. First, many NTF methods decompose a high-dimensional tensor into a product of low-rank nonnegative projection matrices and a low-dimensional nonnegative core tensor by minimizing the Euclidean distance between their product and the original tensor data. They are optimal when the data are contaminated by additive Gaussian noise. However, they may fail on grossly corrupted data, since corruptions or outliers seriously violate the Gaussian noise assumption. Second, the Tucker-based model suffers from rotational ambiguity [15], i.e., solutions with and without rotation transformations are equally good in the sense of yielding the maximum likelihood [11]. This implies that NTF can only find arbitrary bases of the latent subspace.
1.1 Contributions
In this paper, we aim to explore robust manifold NTF methods to address the above challenges. Our contributions are threefold.
-
1.
Three robust manifold NTF algorithms are proposed. They first apply a half-quadratic optimization algorithm to transform the intractable problem into a weighted NTF model in which the weights can handle the outliers. They can be implemented by incorporating different prior distributions for the measurement between the original data and the reconstructed data. Unlike a Gaussian loss, more flexible metrics can handle outliers more efficiently. The weights are adjusted adaptively with respect to the error. To our knowledge, this is the first work on NTF with robust weighted learning to handle outliers.
-
2.
Three RMNTF algorithms that reach state-of-the-art performance are proposed. The three algorithms fall into one major subclass of NTF technologies, denoted as weighted NTF. Different distributions of the loss between the original data and the reconstructed data influence the ability to handle outliers. Specifically, RMNTF with the correntropy induced metric (RMNTF-CIM), RMNTF with the Huber function (RMNTF-Huber), and RMNTF with the Cauchy function (RMNTF-Cauchy) are studied in this paper.
-
3.
We demonstrate the convergence, robustness and invariance of RMNTF.
In this work, we first introduce some preliminaries and related work in the following two subsections, then present three RMNTF algorithms in Section 2. Section 3 presents the experimental results. Finally, Section 4 concludes our findings.
1.1.1 Notations
We denote vectors, matrices and tensors by bold lowercase letters (e.g., $\mathbf{x}$), bold uppercase letters (e.g., $\mathbf{X}$) and calligraphic letters (e.g., $\mathcal{X}$), respectively. $\mathbb{R}_+$ denotes the field of nonnegative real numbers. $\mathbb{E}[\cdot]$ denotes the expectation of a random variable. $\mathrm{vec}(\cdot)$ is the vectorization operator that turns a tensor into a column vector. The transpose of a vector or matrix is denoted by $(\cdot)^\top$. Symbols $\circ$, $\otimes$, $\circledast$ and $\odot$ denote the outer, Kronecker, Hadamard and Khatri-Rao products, respectively. $\mathcal{D}$ denotes the $N$th-order diagonal tensor formed by a given vector. $\mathbf{x}_i$, $\mathbf{X}_i$ and $\mathcal{X}_i$ denote the $i$th vector, matrix and tensor, respectively. $\mathbf{X}_{(n)}$ denotes the mode-$n$ unfolding of the tensor $\mathcal{X}$. $\mathbf{U}^{(n)}$ denotes the mode-$n$ factor matrix. $\times_n$ denotes the mode-$n$ tensor product.
Definition 1
(Mode-$n$ Product [16]): The mode-$n$ product of a tensor $\mathcal{G} \in \mathbb{R}^{J_1 \times \cdots \times J_N}$ with a matrix $\mathbf{U} \in \mathbb{R}^{I_n \times J_n}$ is denoted by $\mathcal{G} \times_n \mathbf{U}$. Each element can be represented as $(\mathcal{G} \times_n \mathbf{U})_{j_1 \cdots j_{n-1}\, i_n\, j_{n+1} \cdots j_N} = \sum_{j_n=1}^{J_n} g_{j_1 \cdots j_n \cdots j_N}\, u_{i_n j_n}$.
Definition 2
(Mode-$n$ Unfolding [17]): Also known as matricization or flattening, this is the process of reordering the elements of an $N$-way array into a matrix along a given mode. The mode-$n$ unfolding matrix of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is denoted as $\mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times \prod_{k \neq n} I_k}$ and arranges the mode-$n$ fibers to be the columns of the resulting matrix.
Definition 3
(Folding Operator [10]): Given a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, the mode-$n$ folding operator applied to a matrix $\mathbf{X}_{(n)}$ is denoted as $\mathrm{fold}_n(\mathbf{X}_{(n)})$, which is the inverse of the unfolding operator.
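To make these operators concrete, the following minimal NumPy sketch implements a mode-$n$ unfolding, its inverse folding, and the mode-$n$ product. The column ordering of this unfolding is one consistent convention and may differ from the ordering used in the paper; the helper names (`unfold`, `fold`, `mode_dot`) are ours, not part of any library.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move mode `mode` to the front, then flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of `unfold` for a tensor with the given target shape."""
    full_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

def mode_dot(tensor, matrix, mode):
    """Mode-n product: multiply the mode-n unfolding by `matrix` and fold back."""
    new_shape = list(tensor.shape)
    new_shape[mode] = matrix.shape[0]
    return fold(matrix @ unfold(tensor, mode), mode, new_shape)

# Tiny sanity check on a random third-order tensor.
X = np.random.rand(4, 5, 6)
U = np.random.rand(3, 5)            # maps mode 1 from dimension 5 to 3
Y = mode_dot(X, U, mode=1)
assert Y.shape == (4, 3, 6)
assert np.allclose(fold(unfold(X, 1), 1, X.shape), X)
```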
1.2 Related Work
NMF: Given a nonnegative data matrix $\mathbf{X} \in \mathbb{R}_+^{M \times N}$, where $N$ refers to the number of data points and $M$ indicates the feature dimension, the objective of NMF is to find two nonnegative low-rank factor matrices $\mathbf{U} \in \mathbb{R}_+^{M \times R}$ and $\mathbf{V} \in \mathbb{R}_+^{N \times R}$ whose product approximates the original data matrix, $\mathbf{X} \approx \mathbf{U}\mathbf{V}^\top$, generally with $R \ll \min(M, N)$. NMF incorporates the nonnegativity constraint and obtains a part-based representation while enhancing interpretability [8], and these methods have a close relation with K-means [18]. However, the factorization is generally nonunique, and many regularizers [19] or constraints [20] have been developed to alleviate the nonuniqueness of the decomposition. In particular, NMF methods have been proposed that utilize priors to obtain a better representation [21] and achieve robust clustering [22, 23].
However, the multiway structure of high-dimensional data such as RGB images or videos cannot be represented by NMF. Vectorizing such data may lead to an excessive number of parameters and disrupt the structure. Nonnegative tensor decomposition based methods have therefore been proposed for high-dimensional data.
Nonnegative CANDECOMP/PARAFAC (NCP) methods: The CP-based tensor model (Fig. 1) [24] decomposes a tensor $\mathcal{X}$ into a linear combination of rank-one tensors as follows:
\[
\mathcal{X} \approx \mathcal{D} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \cdots \times_N \mathbf{U}^{(N)} = \sum_{r=1}^{R} \mathbf{u}_r^{(1)} \circ \mathbf{u}_r^{(2)} \circ \cdots \circ \mathbf{u}_r^{(N)}, \tag{1}
\]
where $\mathcal{D}$ is the $N$th-order diagonal tensor and $\mathbf{u}_r^{(n)}$ is the $r$th column of the factor matrix $\mathbf{U}^{(n)}$. The CP rank of $\mathcal{X}$ is the smallest number $R$ of rank-one tensors in such a decomposition [11]. The CP-based tensor model assumes that each tensor element can be calculated by a summation of products, and it is restrictive since it only considers a limited set of possible interactions between latent factors.
Existing CP-based tensor models provide a flexible subspace representation but may not consider the intrinsic manifold information of high-dimensional data, which limits the performance of downstream tasks. A few regularization and prior-knowledge strategies have been studied for these models [11]. For example, Zhao et al. [25] formulated CP factorization as a hierarchical probabilistic model and employed a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and appropriate hyperpriors over all hyperparameters to determine the rank of the model. Zhao et al. [26] proposed a probabilistic model that recovers the underlying low-rank tensor, modeled by multiplicative interactions among multiple groups of latent factors, together with an additive sparse tensor modeling outliers. Zhou et al. [11] introduced concurrent regularizations which regularize the entire subspace in a concurrent and coherent way to avoid the strong scale restrictions of regularization. Chen et al. [27] proposed a generalized weighted low-rank tensor factorization which represents the sparse component as a mixture of Gaussians and unifies the Tucker and CP factorizations in a joint framework to handle complex noise and outliers. However, these methods neglect nonnegativity constraints and may not learn part-based, physically meaningful representations.
NTF methods: The Tucker model (Fig. 2) assumes that an original tensor $\mathcal{X}$ can be well approximated as
\[
\mathcal{X} \approx \mathcal{G} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \cdots \times_N \mathbf{U}^{(N)}, \tag{2}
\]
where $\mathcal{G}$ is the core tensor and the Tucker rank of $\mathcal{X}$ is denoted as $(R_1, R_2, \ldots, R_N)$ with $R_n \le I_n$.
We note that the CP decomposition is a special case of the Tucker decomposition. Although the Tucker decomposition is invariant to rotations of the factor matrices, it shares parameters across latent factor matrices through a core tensor. In contrast, CP decomposition methods force each factor vector to capture potentially redundant information [28], and they are more prone to overfitting than Tucker decompositions. The main interest in the Tucker model is to find subspaces for tensor approximation.
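As an illustration of the relation between the two models, the sketch below reconstructs a tensor from a Tucker core and factor matrices, and then obtains a CP-style approximation as the special case of a superdiagonal core; the helper functions repeat the unfolding convention sketched earlier and are illustrative assumptions, not library code.

```python
import numpy as np

def unfold(tensor, mode):
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    full_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

def mode_dot(tensor, matrix, mode):
    new_shape = list(tensor.shape)
    new_shape[mode] = matrix.shape[0]
    return fold(matrix @ unfold(tensor, mode), mode, new_shape)

def tucker_reconstruct(core, factors):
    """X_hat = G x_1 U^(1) x_2 U^(2) ... x_N U^(N)."""
    x_hat = core
    for n, U in enumerate(factors):
        x_hat = mode_dot(x_hat, U, n)
    return x_hat

# Tucker: a (2, 3, 4) core expanded to a (10, 12, 14) tensor.
core = np.random.rand(2, 3, 4)
factors = [np.random.rand(d, r) for d, r in zip((10, 12, 14), core.shape)]
X_tucker = tucker_reconstruct(core, factors)

# CP as the special case of a superdiagonal core with equal ranks.
R = 3
cp_core = np.zeros((R, R, R))
cp_core[np.arange(R), np.arange(R), np.arange(R)] = 1.0
cp_factors = [np.random.rand(d, R) for d in (10, 12, 14)]
X_cp = tucker_reconstruct(cp_core, cp_factors)
```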
Li et al. [29] introduced a manifold regularization on the core tensors of NTF which preserves the geometric information of the tensor. However, the representation space of the core tensor grows exponentially as the tensor order increases, which results in high computational complexity. Jiang et al. [30] added a graph Laplacian regularization on a low-dimensional factor matrix to improve the robustness of tensor decomposition. Sun et al. [5] proposed a heterogeneous tensor decomposition for clustering which performs dimensionality reduction on the first orders of the tensor and incorporates useful constraints on the last-mode factor matrix for clustering. However, these methods neglect nonnegativity constraints and may lose the physical meaning of the low-rank representations. Yin et al. [31] incorporated Laplacian Eigenmaps and Locally Linear Embedding as manifold regularization terms into the least-squares form of the NTF model. Pan et al. [32] introduced orthogonality constraints on a group of factor matrices of NTF, which not only helps to keep the inherent tensor structure but also performs well in data compression. Yin et al. [33] proposed a Hypergraph Regularized NTF which preserves nonnegativity in tensor factorization and uncovers the higher-order relationships among nearest neighborhoods. However, these methods decompose tensor data by minimizing the Euclidean distance, which fails on data that are not clean.
Manifold learning: Manifold learning encodes the geometric information of the data space. Its goal is to find a representation in which two objects that are close in the intrinsic geometry of the data manifold remain close to each other after dimension reduction. Based on this idea, many manifold learning algorithms have been proposed, such as ISOMAP [34], LLE [35], Laplacian eigenmaps [36] and locality preserving projections [37]. The advantage of introducing manifold structures is that the intrinsic geometric information of the data points is preserved, which has been shown to be useful in a wide range of applications, such as face recognition [38], text mining [19], and multimedia interaction [39].
Recently, the idea of manifold learning has been employed in matrix and tensor analysis. For instance, Cai et al. [19] proposed graph regularized nonnegative matrix factorization (GNMF), which utilizes the intrinsic geometric information. However, GNMF may not yield optimal solutions in the presence of noise or outliers, so several extensions of GNMF have been proposed. Moreover, NMF-based manifold learning methods may break the structure of multiway data. Hence, tensor-based manifold learning methods have been proposed, such as graph regularized nonnegative Tucker decomposition [40] and LLE-based nonnegative tensor decomposition [31]. These manifold nonnegative tensor decomposition methods assume that the noise distribution is Gaussian and may fail on grossly corrupted datasets.
2 Robust Manifold NTF methods
Because our methods rely on the half-quadratic theory, we first introduce the Half-Quadratic [41] minimization technique for the generic robust NTF framework.
2.1 Half-Quadratic Programming for Nonnegative Tucker Factorization
Half-Quadratic minimization was pioneered by Geman and Reynolds [42] to alleviate the computational burden of image reconstruction with nonconvex regularization. Let $\mathcal{X}$ denote an original tensor and $\hat{\mathcal{X}}$ denote a reconstruction of $\mathcal{X}$. Replacing the squared residual of the data-fidelity term in [43] on each entry with a generic function $\ell(\cdot)$ gives:
\[
J(\Theta) = \sum_{i_1, \ldots, i_N} \ell\big(e_{i_1 \cdots i_N}\big) + \Phi(\Theta), \qquad e_{i_1 \cdots i_N} = x_{i_1 \cdots i_N} - \hat{x}_{i_1 \cdots i_N}, \tag{3}
\]
where $e_{i_1 \cdots i_N}$ represents the residual error between the original tensor $\mathcal{X}$ and the reconstructed tensor $\hat{\mathcal{X}}$, $\ell(\cdot)$ is chosen to be robust to outliers or gross errors, and $\Phi(\Theta)$ denotes the regularization terms with respect to the parameters $\Theta$ (the factor matrices and the core tensor). The cost function involves the reconstruction error, which is nonlinear with respect to $\Theta$, and the regularization term. When the factor matrices and the core tensor have many nonzero entries or are ill-conditioned, the computation of the factorization is costly. Specifically, the loss function $\ell(\cdot)$ is possibly non-quadratic and non-convex, and it is difficult to optimize directly. Fortunately, half-quadratic minimization [44] has been developed to solve such intractable optimization problems. According to conjugate function and half-quadratic theory [44], the reconstruction error term can be expressed as
\[
\ell(e) = \min_{w} \; Q(e, w) + \varphi(w), \tag{4}
\]
where $\varphi(\cdot)$ is the conjugate function of $\ell(\cdot)$, $w$ is the corresponding additional auxiliary variable, and $Q(e, w)$ is a term that is quadratic in $e$ for fixed $w$. In this paper, we only consider the quadratic term of the multiplicative form [42]:
\[
Q(e, w) = \tfrac{1}{2}\, w\, e^{2}. \tag{5}
\]
Substituting (4) and (5) into (3), we have the augmented cost function:
\[
\tilde{J}(\Theta, \mathcal{W}) = \sum_{i_1, \ldots, i_N} \Big( \tfrac{1}{2}\, w_{i_1 \cdots i_N}\, e_{i_1 \cdots i_N}^{2} + \varphi\big(w_{i_1 \cdots i_N}\big) \Big) + \Phi(\Theta), \tag{6}
\]
where the tensor $\mathcal{W}$ collects the auxiliary weights.
The reconstruction error term involved in the augmented cost function is half-quadratic. Hence, its minimizer is calculated by alternating minimization. At each iteration we calculate
(7) |
When $\Theta$ is fixed, the minimization of the reconstruction error term is convex with respect to $\mathcal{W}$. The explicit optimal solution [45] can be determined as
\[
\mathcal{W} = \ell'(\mathcal{E}) \oslash \mathcal{E}, \tag{10}
\]
where $\mathcal{E}$ is the residual tensor, $\ell'(\cdot)$ is applied element-wise, and $\oslash$ denotes element-wise division.
It can be seen that the auxiliary variable depends only on the loss function $\ell(\cdot)$. Since outliers often cause large fitting errors, $\mathcal{W}$ is important for constraining the influence of such errors on the objective function. For large errors the weights should be small; on the contrary, for small errors the weights should be large. Therefore, the weight variable can be seen as an outlier mask. The frequently used loss functions are shown in Fig. 3.
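The discussion above fully determines the weight update once a loss is chosen. The sketch below gives the standard closed-form weights $w = \ell'(e)/e$ for the three losses used later (CIM, Huber, Cauchy) and a schematic alternating loop; the function names, the default parameters, and the `reconstruct`/`update_params` placeholders are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hq_weights(residual, loss="cim", sigma=1.0, cutoff=1.0):
    """Auxiliary weights w = l'(e)/e for the multiplicative half-quadratic form.

    These closed forms are standard for the three losses; the exact constants
    used in the paper may differ."""
    e = np.abs(residual)
    if loss == "cim":            # Gaussian-kernel / correntropy-induced weights
        return np.exp(-e ** 2 / (2.0 * sigma ** 2))
    if loss == "huber":          # quadratic inside the cutoff, linear outside
        return np.where(e <= cutoff, 1.0, cutoff / np.maximum(e, 1e-12))
    if loss == "cauchy":         # heavy-tailed Cauchy weights
        return 1.0 / (1.0 + (e / cutoff) ** 2)
    raise ValueError(loss)

def alternating_hq(X, reconstruct, update_params, params, n_iter=50, loss="cim"):
    """Schematic alternating scheme: update the outlier mask, then take one
    weighted-NTF step.  `reconstruct(params)` and `update_params(X, W, params)`
    are placeholders for the Tucker reconstruction and the weighted updates."""
    for _ in range(n_iter):
        W = hq_weights(X - reconstruct(params), loss=loss)
        params = update_params(X, W, params)
    return params
```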
2.2 Robust NTF with Manifold Regularization
By using nonnegativity constraints and a loss function that is robust to outliers, robust NTF can learn a part-based representation. Many robust NTF methods perform well in Euclidean space, but they fail to discover the intrinsic geometrical and discriminative structure of the data space. Here, we introduce a geometrically based regularization for our robust NTF framework.
First, suppose that the real data points lie on a low-dimensional manifold and that $\mathbf{v}_i$ is the representation of the data point $\mathbf{x}_i$ in the subspace. We assume that if $\mathbf{x}_i$ and $\mathbf{x}_j$ are neighbors in the data space, then their low-rank representations $\mathbf{v}_i$ and $\mathbf{v}_j$ are close to each other. We build a regularization term as follows
(11) |
where $f(\cdot)$ is the mapping function which projects a data point $\mathbf{x}_i$ to its low-rank representation $\mathbf{v}_i$, and the regularizer measures the smoothness of $f$ along the geodesics in the intrinsic geometry of the data.
Based on [19], we use a similarity graph over the data points. Suppose that the matrix $\mathbf{A}$ defines the affinity of the data points; we use the heat kernel to describe the similarity between each pair of data points whose nodes $i$ and $j$ are connected:
\[
A_{ij} = \exp\!\Big(-\frac{\|\mathbf{x}_i - \mathbf{x}_j\|^{2}}{2\sigma^{2}}\Big), \tag{12}
\]
where $\sigma$ is the width of the kernel used to control the similarity. Then, we calculate the diagonal degree matrix $\mathbf{D}$ with $D_{ii} = \sum_{j} A_{ij}$, and the Laplacian matrix is $\mathbf{L} = \mathbf{D} - \mathbf{A}$. The graph regularization can be estimated as follows:
\[
\mathcal{R} = \frac{1}{2} \sum_{i,j} \|\mathbf{v}_i - \mathbf{v}_j\|^{2} A_{ij} = \operatorname{tr}\!\big(\mathbf{V}^{\top} \mathbf{L}\, \mathbf{V}\big), \tag{13}
\]
where the rows of $\mathbf{V}$ are the low-rank representations $\mathbf{v}_i$.
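A minimal sketch of this graph construction, assuming a k-nearest-neighbor connectivity rule; the neighborhood size and kernel width are illustrative defaults, not values from the paper.

```python
import numpy as np

def heat_kernel_graph(X, n_neighbors=5, sigma=1.0):
    """Build a kNN affinity matrix with heat-kernel weights, its degree matrix,
    and the graph Laplacian L = D - A.  X has one sample per row."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    A = np.zeros_like(sq_dists)
    for i in range(X.shape[0]):
        # indices of the n_neighbors closest points, excluding the point itself
        nn = np.argsort(sq_dists[i])[1:n_neighbors + 1]
        A[i, nn] = np.exp(-sq_dists[i, nn] / (2.0 * sigma ** 2))
    A = np.maximum(A, A.T)            # symmetrize the kNN graph
    D = np.diag(A.sum(axis=1))
    L = D - A
    return A, D, L

def graph_regularizer(V, L):
    """tr(V^T L V): small when neighboring samples have similar rows of V."""
    return np.trace(V.T @ L @ V)
```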
2.3 RMNTF-CIM
Liu et al. [46] proposed the concept of cross-correntropy, a generalized similarity measure between two arbitrary scalar random variables $A$ and $B$ defined by
\[
V_{\sigma}(A, B) = \mathbb{E}\big[\kappa_{\sigma}(A - B)\big], \tag{14}
\]
where $\kappa_{\sigma}(\cdot)$ is a kernel function. In practice, the joint probability density function is unknown and only a finite number of data points are available. The sample estimator of correntropy can be represented as
\[
\hat{V}_{N,\sigma}(A, B) = \frac{1}{N} \sum_{i=1}^{N} \kappa_{\sigma}(a_i - b_i). \tag{15}
\]
Based on the above definition of correntropy, Liu et al. [46] proposed the correntropy induced metric (CIM) in the sample space, which is defined as
\[
\mathrm{CIM}(A, B) = \Big(\kappa_{\sigma}(0) - \hat{V}_{N,\sigma}(A, B)\Big)^{1/2}, \tag{16}
\]
where we use the Gaussian kernel in this paper, i.e., $\kappa_{\sigma}(e) = \exp\!\big(-e^{2}/(2\sigma^{2})\big)$, with $e = a - b$ denoting the residual.
Substituting the error on each entry in Tucker model with the CIM, we obtain the objective function of RMNTF-CIM:
(17) |
which is equivalent to solving the following optimization problem
(18) |
Then, we introduce the half-quadratic minimization of the CIM loss, in which the conjugate function of the loss defines the auxiliary weight variables.
2.3.1 Optimization of weighted tensor
When the factor matrices and the core tensor are fixed, the optimization problem with respect to the weight tensor can be solved separately:
(19) |
where , denotes the th row and th column element of matrix .
Let , then
(20) |
where denotes the element-wise division operation.
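Under the Gaussian-kernel assumption above, each entry's weight decays exponentially with its squared residual. A minimal sketch follows; the kernel-width heuristic is an assumption, not the schedule used in the paper.

```python
import numpy as np

def cim_weight_tensor(X, X_hat, sigma=None):
    """Element-wise CIM weights: close to 1 for well-fitted entries, near 0 for outliers."""
    E = X - X_hat
    if sigma is None:
        # a common heuristic: tie the kernel width to the average squared residual
        sigma = np.sqrt(np.mean(E ** 2)) + 1e-12
    return np.exp(-E ** 2 / (2.0 * sigma ** 2))
```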
2.3.2 Optimization of factor matrices
We use the Lagrange multiplier method and consider the mode- unfolding form, then:
(21) |
If , the objective function with respect to can be transformed as:
(22) |
where the last symbol is the nonnegative Lagrange multiplier for the nonnegativity constraint. The partial derivative of the objective with respect to the factor matrix is:
(23) |
Using the KKT conditions , we obtain the following equations
(24) |
Then, we obtain the update rules of :
(25) |
If , the objective function with respect to can be represented as:
(26) |
The partial derivative of is:
(27) |
Using the KKT conditions , we obtain the following equations:
(28) |
Then, we obtain the update rules of :
(29) |
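The exact update rules in (25) and (29) depend on quantities defined in equations (21)-(28) above. The following sketch is therefore only a schematic illustration of the standard numerator/denominator structure of a multiplicative update for a weighted Tucker fit with an optional graph term on one factor matrix, assuming the weighted squared-error objective of Section 2.1; it is not a verbatim transcription of the paper's rules.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_dot(T, U, mode):
    shp = list(T.shape)
    shp[mode] = U.shape[0]
    return fold(U @ unfold(T, mode), mode, shp)

def update_factor(X, W, core, factors, n, A=None, D=None, lam=0.0, eps=1e-12):
    """One multiplicative step for U^(n) of the weighted objective
    || sqrt(W) * (X - G x_1 U^(1) ... x_N U^(N)) ||_F^2 + lam * tr(U^(n)T (D - A) U^(n))."""
    U = factors[n]
    # Expand the core by every factor except U^(n); its mode-n unfolding B
    # satisfies Xhat_(n) = U^(n) @ B under this unfolding convention.
    expanded = core
    for k, Uk in enumerate(factors):
        if k != n:
            expanded = mode_dot(expanded, Uk, k)
    B = unfold(expanded, n)
    X_hat = mode_dot(expanded, U, n)
    num = unfold(W * X, n) @ B.T
    den = unfold(W * X_hat, n) @ B.T
    if A is not None:            # optional graph regularization on this factor
        num = num + lam * (A @ U)
        den = den + lam * (D @ U)
    return U * num / (den + eps)   # nonnegativity is preserved by the ratio form
```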
2.3.3 Optimization of the core tensor
For the subproblem of core tensor , we consider the vectorization form of (17):
(30) |
where denotes the all-one vector of length , , and denotes the Lagrange multipliers of .
(31) |
where .
(32) |
Using the KKT conditions , where denote the all-zero vector, we obtain the following equations
(33) |
Then, we obtain the update rules of :
(34) |
2.4 RMNTF-Huber
Robust statistics work well for model reconstruction when the observations contain noise or outliers. Popular M-estimators [47], such as the Huber and Cauchy functions, have been proposed for mining noisy data.

In this section, we use the Huber function in the reconstruction error term to measure the quality of the approximation, exploiting its connection between the $\ell_2$ norm and the $\ell_1$ norm:
\[
\ell_{\mathrm{Huber}}(e) =
\begin{cases}
\tfrac{1}{2} e^{2}, & |e| \le c,\\[2pt]
c\,|e| - \tfrac{1}{2} c^{2}, & |e| > c,
\end{cases} \tag{35}
\]
where $c$ is the cutoff parameter that trades off between the $\ell_2$-norm and $\ell_1$-norm behavior.
Substituting the Huber function on each entry in (3), we have the RMNTF-Huber by minimizing the following objective function:
(36) |
where .
Following equation (10), we obtain the optimal weight tensor as
\[
w_{i_1 \cdots i_N} =
\begin{cases}
1, & |e_{i_1 \cdots i_N}| \le c,\\[2pt]
c / |e_{i_1 \cdots i_N}|, & |e_{i_1 \cdots i_N}| > c.
\end{cases} \tag{37}
\]
The optimization of the factor matrices and the core tensor is the same as in RMNTF-CIM. Here, we note that the cutoff parameter $c$ is set to the median of the reconstruction errors.
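A minimal sketch of these Huber weights with the median-of-residuals cutoff; the small constant added for numerical safety is an assumption.

```python
import numpy as np

def huber_weight_tensor(X, X_hat):
    """Huber weights: 1 inside the cutoff, c/|e| outside, with c = median |residual|."""
    abs_e = np.abs(X - X_hat)
    c = np.median(abs_e) + 1e-12
    return np.where(abs_e <= c, 1.0, c / np.maximum(abs_e, 1e-12))
```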
2.5 RMNTF-Cauchy
For any tensor, we define the reconstruction error of RMNTF-Cauchy by applying the Cauchy function to each entry of the residual:
\[
\ell_{\mathrm{Cauchy}}(e) = \frac{c^{2}}{2} \ln\!\Big(1 + \big(e/c\big)^{2}\Big), \tag{38}
\]
where $c$ is the scale parameter of the Cauchy function.
As in equation (10), the optimal weight tensor can be represented as
\[
w_{i_1 \cdots i_N} = \frac{1}{1 + \big(e_{i_1 \cdots i_N}/c\big)^{2}}. \tag{39}
\]
The optimization of the factor matrices and the core tensor is the same as in RMNTF-CIM.
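A corresponding sketch for the Cauchy weights; the default scale parameter is an illustrative assumption.

```python
import numpy as np

def cauchy_weight_tensor(X, X_hat, c=1.0):
    """Cauchy weights: heavy-tailed down-weighting, w = 1 / (1 + (e/c)^2)."""
    E = X - X_hat
    return 1.0 / (1.0 + (E / c) ** 2)
```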
2.6 Discussion
The convergence of the algorithms is guaranteed by the following theorem:
Theorem 1
Proof 1
See Appendix A.1.1 for the proof of Theorem 1.
The robustness of RMNTF is guaranteed by the following Theorem:
Theorem 2
Suppose there are training images and a test image. If the optimal parameters (the core tensor, the factor matrices and the weight tensor) have been learned from the training images, the low-rank representation of the test image learned by RMNTF has the closed-form solution:
(40) |
where $\mathbf{I}$ denotes an identity matrix and $\mathbf{A}$ denotes the affinity matrix of the data points. If the test image is an outlier, then its weights are constrained to small values, and the low-rank representation of this outlier will be repaired through the manifold structure of the training data.
Proof 2
See Appendix A.1.2 for the proof of Theorem 2.
The uniqueness of RMNTF is guaranteed by the following Theorem:
Theorem 3
Since the solution in equation (A.39) is unique and the remaining factor can be uniquely estimated from it due to Lemma 2, the mode-$N$ unfolding of the RMNTF model has an essentially unique solution. By Lemma 1, the RMNTF model is essentially unique.
Proof 3
See Appendix A.1.3 for the proof of Theorem 3.
3 Experiments
In this section, we compare the proposed RMNTF-CIM, RMNTF-Huber, RMNTF-Cauchy with nine nonnegative matrix and tensor factorization methods on five image data sets.
3.1 Datasets
We conducted experiments on the COIL100, USPS, FEI, ORL and FERET image datasets.
COIL100 (https://www.kaggle.com/jessicali9530/coil100) is an object categorization image database including 100 classes of objects, each of which contains images taken at different observation angles. For preprocessing, we resize each image, keeping the RGB representation, with the nearest-neighbor interpolation algorithm, so that each image is represented as a third-order tensor. In total, we have 7200 tensor objects.
USPS Dataset (https://www.kaggle.com/bistaumanga/usps-dataset) consists of grayscale images of handwritten digits, with a number of images for each of the ten digit classes.
FEI Part 1 Dataset (https://fei.edu.br/~cet/facedatabase.html) is a subset of the FEI database, which consists of 700 color images collected from 50 individuals. Each individual has 14 different images under different poses and facial expressions. In our experiment, the images are resized, and they finally form a fourth-order tensor.
AT&T ORL Dataset (https://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html) consists of 400 grayscale face images collected from 40 individuals, with 10 images per individual taken under varying lighting conditions, facial expressions and facial details. In our experiment, the images are resized, and they finally form a third-order tensor.
FERET Dataset (https://www.nist.gov/itl/products-and-services/color-feret-database): This dataset [48] collects grayscale face images acquired from a number of subjects. It is widely used for evaluating face recognition and clustering. In our experiment, we use a subset of images from selected subjects, where each subject contains several images with varying poses, lighting conditions and shooting directions, and the subjects vary in gender and race. In total, we stack the images into a third-order tensor.
For COIL100, we randomly selected a subset of categories from the whole COIL100 for evaluation. For USPS, we evaluated both a randomly selected subset of categories and the full set of categories. For the FEI, ORL and FERET data sets, we randomly selected categories for the evaluation. For each comparison, we report the average results over several Monte-Carlo runs.
3.2 Compared algorithms
The hyperparameters of RMNTF-CIM, RMNTF-Huber, and RMNTF-Cauchy were set to the same default values in all experiments unless otherwise stated. We compared RMNTF with three NMF methods and several NTD methods, listed as follows:
-
1.
Nonnegative Matrix Factorization (NMF) [8]: We unfold the tensorial original data into a matrix, apply the NMF algorithm, and obtain a low-rank activation matrix, which is then used in the downstream tasks.
-
2.
Graph Regularized Nonnegative Matrix Factorization (GNMF) [19]: We unfold the tensorial data into a matrix and reduce its dimensionality with GNMF, which imposes a graph regularization term on the low-dimensional representation. The regularization coefficient and the remaining hyperparameter were fixed.
-
3.
Correntropy Induced Metric Nonnegative Matrix Factorization (CIMNMF) [49]: Similar to the above, we unfold the tensorial data into a matrix and apply CIMNMF. The loss between the unfolded tensor and the reconstructed data is based on the correntropy induced metric, which handles outlier rows by incorporating structural knowledge about outliers.
-
4.
Nonnegative Tucker Decomposition (NTD) [1]: NTD works directly on the tensorial data. It is constructed from the Tucker model with nonnegativity constraints and updated by multiplicative rules. This method can be considered a special case of RMNTF in which all entries of the weight tensor equal one and the manifold regularization is dropped.
-
5.
Nonnegative Tucker Decomposition with alpha-Divergence (αNTD) [50]: We apply αNTD directly to the tensorial data. It uses the α-divergence as a discrepancy measure and derives multiplicative update rules for NTD. The hyperparameter α was fixed.
-
6.
Sparse Nonnegative Tucker Decomposition (SparseNTD) [51]: The tensorial data is used directly in SparseNTD. It adopts sparsity and nonnegativity constraints on the core tensor and the factor matrices. The sparsity hyperparameter was fixed.
-
7.
Graph Regularized Nonnegative Tucker Decomposition (GNTD) [40]: GNTD uses the tensorial data directly; it constructs a nearest-neighbor graph to maintain the intrinsic manifold structure of the tensor and applies this constraint to one of the factor matrices. Its hyperparameters were fixed.
-
8.
Locally Linear Embedding Regularized Nonnegative Tucker Decomposition (LLENTD) [31]: We use the tensorial data directly in LLENTD. It incorporates Laplacian Eigenmaps and locally linear embedding as manifold regularization terms in the least-squares form of the nonnegative Tucker model. Its hyperparameters were fixed.
-
9.
Manifold Regularization Nonnegative Tucker Decomposition (MRNTD) [29]: MRNTD uses the tensor data directly. It imposes manifold regularization terms on the core tensors constructed in the NTD. Its hyperparameters were fixed.
-
10.
Hypergraph Regularized Nonnegative Tensor Factorization (hyperNTF) [33]: We feed the tensorial data into hyperNTF directly. It incorporates the higher-order relationships among nearest neighborhoods into the nonnegative Tucker decomposition. Its hyperparameters were fixed.
3.3 Evaluation Metrics
To evaluate the clustering performance of these algorithms, we adopt two commonly used metrics: 1) accuracy (ACC) and 2) normalized mutual information (NMI). The ACC is defined by
\[
\mathrm{ACC} = \frac{\sum_{i=1}^{N} \delta\big(y_i,\ \mathrm{map}(c_i)\big)}{N}, \tag{41}
\]
where $N$ is the total number of samples in a dataset, $c_i$ is the cluster label and $y_i$ is the ground-truth label of the $i$th sample. $\delta(a, b)$ is set to 1 if and only if $a = b$, and 0 otherwise. $\mathrm{map}(\cdot)$ is a permutation mapping function that maps each cluster label to the best matching label from the dataset.
Another metric, NMI, is defined by
\[
\mathrm{NMI}(C, C') = \frac{\mathrm{MI}(C, C')}{\max\big(H(C), H(C')\big)}, \tag{42}
\]
where $H(C)$ and $H(C')$ denote the entropies of the cluster sets $C$ and $C'$, respectively, and
\[
\mathrm{MI}(C, C') = \sum_{c_i \in C} \sum_{c'_j \in C'} p(c_i, c'_j)\, \log \frac{p(c_i, c'_j)}{p(c_i)\, p(c'_j)}, \tag{43}
\]
where $p(c_i)$ and $p(c'_j)$ represent the marginal probability distribution functions of $C$ and $C'$, respectively, and $p(c_i, c'_j)$ is their joint probability distribution function. NMI ranges from 0 to 1, equal to 1 if the two sets of clusters are identical and 0 if they are independent.
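A minimal sketch of these two metrics, assuming SciPy's Hungarian solver for the label mapping and scikit-learn's NMI (whose `average_method="max"` option matches the normalization in (42)); the helper names are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC with the best one-to-one label mapping found by the Hungarian algorithm."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    cost = np.zeros((labels.size, labels.size), dtype=int)
    for i, a in enumerate(labels):
        for j, b in enumerate(labels):
            cost[i, j] = np.sum((y_pred == a) & (y_true == b))
    row, col = linear_sum_assignment(-cost)      # maximize matched counts
    return cost[row, col].sum() / y_true.size

def clustering_nmi(y_true, y_pred):
    return normalized_mutual_info_score(y_true, y_pred, average_method="max")
```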
3.4 Main Results
3.4.1 Basis Visualization and Convergence
To compare the ability to extract parts-based features of tensor objects by NMF, GNMF, NTD, and RMNTF, we visualize the basis images extracted by each algorithm on 25 subjects randomly chosen from the AT&T ORL dataset in Fig. 4. From the experimental results, we note that the proposed RMNTF extracts more localized parts of face images than the other algorithms, since the images reconstructed from the bases are more homogeneous. This means that RMNTF can provide a sparser representation.
In addition, we investigate the convergence speed of RMNTF. We show the convergence curves of the three implementations of RMNTF on five image datasets in Fig. 5. It is shown that the proposed three algorithms exhibit fast convergence rates, usually taking less than 100 iterations.
3.4.2 Simulated Corruption and Clustering Results
In order to evaluate the robustness of RMNTF, we compare our algorithms with the state-of-the-art clustering algorithms on five image datasets contaminated by Laplace noise and salt & pepper noise. The experimental results on COIL100 and FEI are presented in Fig. 6 and Fig. 7. Due to the space limitation, the results on FERET, ORL and USPS are represented in the supplement file.
Laplace noise and salt & pepper noise commonly occur in image corruption. However, the cost functions of some traditional methods, such as NMF, usually adopt the Euclidean distance. They cannot deal well with this kind of data, since the actual noise distribution is not consistent with their noise assumption.
The simulated Laplace noise obeys a Laplace distribution. We set the deviation from 40 to 280 and add the noise to randomly chosen pixels of each image. The first two rows in Fig. 6 and Fig. 7 show the means and standard deviations of the average accuracy and NMI of RMNTF's three implementations and the other nine representative algorithms. The experimental results confirm that the RMNTF-based methods perform better than the other methods when the deviation of the Laplace noise is within 200. However, when the deviation is excessive and many outliers appear, the performance of all methods drops dramatically.
For salt & pepper noise, we set the percentage of contaminated pixels from 5 to 60 percent for each image. Results are shown in the last two rows of Fig. 6 and Fig. 7. As the fraction of corrupted pixels increases, only GNMF remains competitive with the RMNTF-based methods, and both outperform the other methods. When more than 30 percent of the pixels are corrupted, the performance of all algorithms degrades dramatically and gradually converges, because it becomes difficult to separate outliers from inliers.
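For reference, the two noise models can be simulated as below; the 0-255 intensity range, the per-pixel corruption rule and the default parameters are illustrative assumptions rather than the exact protocol of the paper.

```python
import numpy as np

def add_laplace_noise(images, scale=120.0, frac=1.0, rng=None):
    """Add Laplace-distributed noise to a random fraction of pixels (0-255 images)."""
    rng = np.random.default_rng(rng)
    noisy = images.astype(float).copy()
    mask = rng.random(noisy.shape) < frac
    noisy[mask] += rng.laplace(loc=0.0, scale=scale, size=int(mask.sum()))
    return np.clip(noisy, 0, 255)

def add_salt_pepper_noise(images, frac=0.3, rng=None):
    """Set a `frac` fraction of pixels to 0 or 255 with equal probability."""
    rng = np.random.default_rng(rng)
    noisy = images.astype(float).copy()
    mask = rng.random(noisy.shape) < frac
    noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    return noisy
```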
We have conducted experiments on FERET, ORL, and USPS. The results are listed in Appendix A .
3.5 Effects of the hyperparameters
This subsection investigates the effect of the hyperparameters of the RMNTF-based algorithms on the clustering performance. Experiments are conducted on five image datasets contaminated by Laplace noise with the deviation set to 120. There are two hyperparameters that need to be predefined. We report the average accuracy and NMI on 10 categories. For the three implementations of RMNTF, one hyperparameter was fixed while the other was varied over a range of values.
Fig. 8 shows the effect of the hyperparameters. The clustering performance oscillates at first and then tends to be stable as the first hyperparameter increases over its range, so it can be selected around the stable region. It can also be seen that the clustering performance is quite stable while the second, integer-valued hyperparameter varies over its range, which means it hardly influences RMNTF. In total, our methods are largely robust to the integer hyperparameter and tend to achieve better performance when the first hyperparameter is slightly smaller, but they may be influenced by it.
4 Conclusion
In this paper, we explored three robust cost functions for manifold-structured nonnegative Tucker factorization. To deal with the minimization of the non-convex cost functions, we derived an iterative half-quadratic optimization, which reduces the problem to a weighted Euclidean-distance NTF. The proposed methods further utilize manifold structure information to enhance accuracy and avoid rotational ambiguity. Due to the connection between robust loss functions and robust M-estimators, we adopt the CIM, Huber and Cauchy functions to replace the traditional Euclidean loss. The proposed methods combine manifold structure with robust loss functions to improve clustering accuracy under noisy data and outliers. We investigated the effect of Laplace noise and salt & pepper noise on the performance of the models. Under a small degree of noise interference, the proposed algorithms improve greatly compared with other algorithms; for example, on the FEI database, when the deviation of the Laplace noise is 50, the accuracy of our methods is improved by 10% in absolute terms compared with GNTD. In general, the experimental results show that the proposed methods outperform the comparison methods in terms of clustering accuracy and normalized mutual information under noisy data or outliers.
References
- [1] Y.-D. Kim, S. Choi, Nonnegative tucker decomposition, in: Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Minneapolis, MN, USA, 2007, pp. 1–8.
- [2] A. Schein, M. Zhou, D. Blei, H. Wallach, Bayesian poisson tucker decomposition for learning the structure of international relations, in: Proceedings of the 33rd International Conference on Machine Learning (ICML-16), PMLR, 2016, pp. 2810–2819.
- [3] A. Karami, M. Yazdi, G. Mercier, Compression of hyperspectral images using discerete wavelet transform and tucker decomposition, IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 5 (2) (2012) 444–450.
- [4] A. Cichocki, R. Zdunek, A. H. Phan, S. Amari, Nonnegative matrix and tensor factorizations: applications to exploratory multi-way data analysis and blind source separation, John Wiley & Sons, 2009.
- [5] Y. Sun, J. Gao, X. Hong, B. Mishra, B. Yin, Heterogeneous tensor decomposition for clustering via manifold optimization, IEEE Trans. Pattern Anal. Mach. Intell 38 (3) (2016) 476–489.
- [6] H. Abdi, L. J. Williams, Principal component analysis, Wiley Interdiscipl. Rev. Comput. Stat. 2 (4) (2010) 433–459.
- [7] R. M. Gray, Vector quantization, IEEE Assp Mag. 1 (2) (1984) 4–29.
- [8] D. Lee, H. Seung, Learning the parts of objects by non-negative matrix factorization, Nature 401 (Oct. 1999) 788–791.
- [9] J. Wang, X. Zhang, Deep nmf topic modeling, Neurocomputing 515 (2023) 157–173.
- [10] T. G. Kolda, B. W. Bader, Tensor decompositions and applications, SIAM Rev. 51 (3) (2009) 455–500.
- [11] Y. Zhou, H. Lu, Y.-M. Cheung, Probabilistic rank-one tensor analysis with concurrent regularizations, IEEE Trans. Cybern. 51 (7) (2021) 3496–3509.
- [12] Q. Shi, H. Lu, Y.-M. Cheung, Tensor rank estimation and completion via cp-based nuclear norm, in: Proc. ACM Conf. Inf. Knowl. Manage., Nov. 2017, pp. 949–958.
- [13] Q. Shi, Y.-M. Cheung, Q. Zhao, H. Lu, Feature extraction for incomplete data via low-rank tensor decomposition with feature regularization, IEEE Trans. Neural Netw. Learn. Syst. 30 (6) (Jun. 2018) 1803–1817.
- [14] Y. Zhou, Y.-M. Cheung, Bayesian low-tubal-rank robust tensor factorization with multi-rank determination, IEEE Trans. Pattern Anal. Mach. Intell. 43 (1) (May. 2019) 62–76.
- [15] M. E. Tipping, C. M. Bishop, Probabilistic principal component analysis, J. Roy. Stat. Soc. B (Stat. Methodol.) 61 (3) (1999) 611–622.
- [16] L. R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika 31 (3) (Sep. 1966) 279–311.
- [17] L. De Lathauwer, B. De Moor, J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl. 21 (4) (2000) 1253–1278.
- [18] C. H. Ding, T. Li, M. I. Jordan, Convex and semi-nonnegative matrix factorizations, IEEE Trans. Pattern Anal. Mach. Intell. 32 (1) (Jan. 2010) 45–55.
- [19] D. Cai, X. He, J. Han, T. Huang, Graph regularized nonnegative matrix factorization for data representation, IEEE Trans. Pattern Anal. Mach. Intell. 33 (8) (Aug. 2010) 1548–1560.
- [20] H. Liu, Z. Wu, X. Li, D. Cai, T. Huang, Constrained nonnegative matrix factorization for image representation, IEEE Trans. Pattern Anal. Mach. Intell. 34 (7) (Jul. 2012) 1299–1311.
- [21] J. Pan, N. Gillis, Generalized separable nonnegative matrix factorization, IEEE Trans. Pattern Anal. Mach. intell. 43 (5) (May. 2021) 1546–1561.
- [22] N. Guan, T. Liu, Y. Zhang, D. Tao, L. S. Davis, Truncated cauchy non-negative matrix factorization, IEEE Trans. Pattern Anal. Mach. intell. 41 (1) (Jan. 2019) 246–259.
- [23] B. D. Haeffele, R. Vidal, Structured low-rank matrix factorization: Global optimality, algorithms, and applications, IEEE Trans. Pattern Anal. Mach. Intell. 42 (6) (Jun. 2020) 1468–1482.
- [24] R. A. Harshman, Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis, UCLA Work. Pap. Phonetics 16 (1) (1970) 1–84.
- [25] Q. Zhao, L. Zhang, A. Cichocki, Bayesian cp factorization of incomplete tensors with automatic rank determination, IEEE Trans. Pattern Anal. Mach. Intell. 37 (9) (Sep. 2015) 1751–1763.
- [26] Q. Zhao, G. Zhou, L. Zhang, A. Cichocki, S.-I. Amari, Bayesian robust tensor factorization for incomplete multiway data, IEEE Trans. Neural Netw. Learn. Syst. 27 (4) (2015) 736–748.
- [27] X. Chen, Z. Han, Y. Wang, Q. Zhao, D. Meng, L. Lin, Y. Tang, A generalized model for robust tensor factorization with noise modeling by mixture of gaussians, IEEE Trans. Neural Netw. Learn. Syst. 29 (11) (Nov. 2018) 5380–5393.
- [28] S. Fang, R. M. Kirby, S. Zhe, Bayesian streaming sparse tucker decomposition, in: Proc. 35th Conference on Uncertainty in Artificial Intelligence (UAI), 2021.
- [29] X. Li, M. K. Ng, G. Cong, Y. Ye, Q. Wu, Mr-ntd: Manifold regularization nonnegative tucker decomposition for tensor data dimension reduction and representation, IEEE Trans. Neural Netw. Learn. Syst. 28 (8) (Aug. 2017) 1787–1800.
- [30] B. Jiang, C. Ding, J. Tang, B. Luo, Image representation and learning with graph-laplacian tucker tensor decomposition, IEEE Trans. Cybern. 49 (4) (Apr. 2019) 1417–1426.
- [31] W. Yin, Z. Ma, Le & lle regularized nonnegative tucker decomposition for clustering of high dimensional datasets, Neurocomputing 364 (2019) 77–94.
- [32] J. Pan, M. K. Ng, Y. Liu, X. Zhang, H. Yan, Orthogonal nonnegative tucker decomposition, SIAM J. Sci. Comput. 43 (1) (2021) B55–B81.
- [33] W. Yin, Z. Ma, Q. Liu, Hyperntf: A hypergraph regularized nonnegative tensor factorization for dimensionality reduction, arXiv preprint arXiv:2101.06827 (2021).
- [34] J. Tenenbaum, V. De Silva, J. C. Langford, A global geometric framework for nonlinear dimensionality reduction, science 290 (5500) (2000) 2319–2323.
- [35] S. Roweis, L. K. Saul, Nonlinear dimensionality reduction by locally linear embedding, science 290 (5500) (2000) 2323–2326.
- [36] M. Belkin, P. Niyogi, Laplacian eigenmaps and spectral techniques for embedding and clustering., in: Proc. Conf. Neural Inf. Process. Syst. (NIPS), 2002, pp. 585–591.
- [37] X. He, P. Niyogi, Locality preserving projections, Proc. Conf. Neural Inf. Process. Syst. (NIPS) (2004) 153–160.
- [38] X. He, S. Yan, Y. Hu, P. Niyogi, H.-J. Zhang, Face recognition using laplacianfaces, IEEE Trans. Pattern Anal. Mach. Intell. 27 (3) (Mar. 2005) 328–340.
- [39] Y. Yang, Y. Zhuang, F. Wu, Y. Pan, Harmonizing hierarchical manifolds for multimedia document semantics understanding and cross-media retrieval, IEEE Trans. Multimedia 10 (3) (Apr. 2008) 437–446.
- [40] Y. Qiu, G. Zhou, Y. Wang, Y. Zhang, S. Xie, A generalized graph regularized non-negative tucker decomposition framework for tensor data representation, IEEE Trans. Cybern. 52 (1) (2022) 594–607.
- [41] S. Boyd, L. Vandenberghe, Convex optimization, Cambridge, U.K.: Cambridge Univ. Press, 2004.
- [42] D. Geman, G. Reynolds, Constrained restoration and the recovery of discontinuities, IEEE Trans. Pattern Anal. Mach. Intell. 14 (3) (Mar. 1992) 367–383.
- [43] M. Nikolova, R. H. Chan, The equivalence of half-quadratic minimization and the gradient linearization iteration, IEEE Trans. Image Process. 16 (6) (Jun. 2007) 1623–1627.
- [44] M. Nikolova, M. K. Ng, Analysis of half-quadratic minimization methods for signal and image recovery, SIAM J. Sci. Comput. 27 (3) (2005) 937–966.
- [45] P. Charbonnier, L. Blanc-Féraud, G. Aubert, M. Barlaud, Deterministic edge-preserving regularization in computed imaging, IEEE Trans. Image Process. 6 (2) (Feb. 1997) 298–311.
- [46] W. Liu, P. P. Pokharel, J. C. Principe, Correntropy: Properties and applications in non-gaussian signal processing, IEEE Trans. Signal Processing 55 (11) (Nov. 2007) 5286–5298.
- [47] Z. Zhang, Parameter estimation techniques: A tutorial with application to conic fitting, Image Vis. Comput. 15 (1) (1997) 59–76.
- [48] P. J. Phillips, H. Moon, S. A. Rizvi, P. J. Rauss, The feret evaluation methodology for face-recognition algorithms, IEEE Trans. Pattern Anal. Mach. Intell. 22 (10) (Oct. 2000) 1090–1104.
- [49] L. Du, X. Li, Y.-D. Shen, Robust nonnegative matrix factorization via half-quadratic minimization, in: Proc. IEEE 12th Int. Conf. Data Mining (ICDM), Dec. 2012, pp. 201–210.
- [50] Y.-D. Kim, A. Cichocki, S. Choi, Nonnegative tucker decomposition with alpha-divergence, in: Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2008, pp. 1829–1832.
- [51] Y. Xu, Alternating proximal gradient method for sparse nonnegative tucker decomposition, Math. Programm. Comput. 7 (1) (2015) 39–70.
- [52] B. Jiang, C. Ding, J. Tang, Graph-laplacian pca: Closed-form solution and robustness, in: Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2013, pp. 3492–3498.
- [53] C. Van Loan, N. Pitsianis, Approximation with kronecker products, in: Linear Algebra for Large Scale and Real Time Applications, Springer, 1993, pp. 293–314.
- [54] K. Huang, N. D. Sidiropoulos, A. Swami, Non-negative matrix factorization revisited: Uniqueness and algorithm for symmetric decomposition, IEEE Trans. Signal Process. 62 (1) (Jan. 2014) 211–224.
Appendix A
A.1 Discussion
A.1.1 Generalization error bound and convergence analysis
Before we prove Theorem 1, we first give the definition of an upper bound auxiliary function.
Definition 4
A function $G(u, u')$ is an upper bound auxiliary function for $F(u)$ if the following conditions are satisfied:
\[
G(u, u') \ge F(u), \qquad G(u, u) = F(u). \tag{44}
\]
Corollary 1
If $G(u, u')$ is an upper bound auxiliary function for $F(u)$, then $F$ is non-increasing under the update rule
\[
u^{t+1} = \arg\min_{u} G\big(u, u^{t}\big). \tag{45}
\]
Proof 4
\[
F\big(u^{t+1}\big) \le G\big(u^{t+1}, u^{t}\big) \le G\big(u^{t}, u^{t}\big) = F\big(u^{t}\big). \tag{46}
\]
Definition 5
A function can be represented as an infinite sum of terms calculated from the values of the function's derivatives at a single point, which can be formulated as follows:
\[
f(u) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}\, (u - a)^{k}, \tag{47}
\]
where $a$ is the expansion point and $k$ is the order of the derivatives.
Given the above definitions, the objective functions of RMNTF-CIM with respect to the three blocks of variables are given in (19), (24) and (28). Then, we have the following lemmas.
Lemma 1
The auxiliary function for (19) is as follows:
(48) |
where .
Proof 5
For RMNTF-CIM, the objective function can be rewritten as
(49) |
It is obvious that , we only need to prove that . The first-order partial derivative of (17) in element-wise is
(50) |
By the equivalent relationship , we have
(51) |
where .
Hence, the first-order partial derivative of RMNTF-CIM is
(52) |
We assume that is a new variable independent of . Let , then (52) can be represented by
(53) |
where . The second-order derivative of RMNTF-CIM can be represented as:
(54) |
According to the Taylor expansion in Definition 5, we can rewrite to its Taylor expansion form with respect to :
(55) |
The upper bound auxiliary function for (19) is defined as:
(56) |
Because we have
(58) |
Now we can demonstrate that (57) holds and that (56) is the upper bound auxiliary function for (19); the updates therefore lead to a non-increasing objective function. Because the elements of the factor matrices are nonnegative and (56) is a convex function, its minimum value is achieved at
(59) |
Lemma 1 is proved.
Lemma 2
The auxiliary function for (24) is as follows:
(60) |
Proof 6
For RMNTF-CIM, the objective function with respect to can be represented as
(61) |
The first-order partial derivative of (61) in element-wise is
(62) |
Because of (51), we rewrite the first-order derivative of RMNTF-CIM:
(63) |
The tensor variable is independent of . The second-order derivative of RMNTF-CIM with respect to is represented as
(64) |
According to the Taylor expansion, we rewrite with respect to :
(65) |
The upper bound auxiliary function for (24) is denoted as:
(66) |
Because we have
(68) |
and
(69) |
Lemma 3
The auxiliary function for (28) is as follows:
(71) |
Proof 7
First, we vectorize the objective function with respect to the core tensor :
(72) |
And then, we obtain the first-order partial derivative of :
(73) |
The second-order partial derivative of :
(74) |
According to the Taylor expansion, we rewrite with respect to :
(75) |
The upper bound auxiliary function for (28) can be represented as:
(76) |
Because we have:
(77) |
Hence, .
A.1.2 Robustness analysis of RMNTF
One important property of RMNTF is that its reconstruction is robust against outliers, as shown in the experiments. Here, we show that this robustness is mainly due to the weight tensor and the regularization. Previous work on gLPCA [52] and GLTD [30] has demonstrated the robustness of Laplacian regularization. In this section, we further demonstrate that this property also holds for RMNTF.
Suppose we have learned the optimal parameters from the input training images. Now we have a test image and we aim to learn its low-rank representation and reconstruction, while the parameters learned from the training images are fixed. This can be formulated as solving the following problem:
(79) |
where is the th row of . Let , using the mode-n unfolding of tensor, problem (79) can be reformulated as:
(80) |
where . Let and . Then, we obtain first-order partial derivative as follows:
(81) |
We obtain the optimal as
(82) |
where is an identity matrix.
We note that problem (79) has a closed-form solution. In the standard NTF model the weights are uniform and the second term of equation (79) is removed; hence the system to be solved may be singular. In other words, if some elements of the test image are corrupted, the regularization and the weight tensor will restore these corruptions properly.
A.1.3 Invariance of RMNTF
RMNTF has the following invariance property: the reconstruction is invariant under transformations by permutation matrices and nonnegative diagonal matrices. Let $\mathbf{P}$ and $\boldsymbol{\Lambda}$ be a permutation matrix and a nonnegative diagonal matrix, respectively. The factor matrices and the core tensor are transformed as:
(88) |
Definition 6
(Uniqueness of NTF) The NTF model is essentially unique if, for any other NTF model yielding the same reconstruction, the factors differ only by a permutation matrix and a nonnegative diagonal matrix.
To demonstrate the uniqueness of the RMNTF model, we need to study the relationship between the uniqueness of the mode-$n$ unfolding of the RMNTF model and the uniqueness of the RMNTF model itself.
Lemma 1
If the RMNTF model is essentially unique, then the induced factorization is the essentially unique nonnegative matrix decomposition of the corresponding unfolded matrix.
Proof 8
Suppose there exists a non-trivial matrix such that is another solution of mode-n unfolding of RMNTF. Let , and , then, . However, when , it satisfied that . This contradicts the assumption that the RMNTF is essentially unique.
Lemma 2
If the mode-$N$ unfolding of the RMNTF model has an essentially unique nonnegative matrix decomposition, then the RMNTF model is essentially unique.
Proof 9
Suppose that the mode-$N$ unfolding has an essentially unique NMF solution, so that both factors can be uniquely estimated. We introduce a permutation matrix and a nonnegative diagonal matrix, and suppose that
(89) |
where . Here, we only need to demonstrate that can be uniquely estimated from .