Learning idempotent representation for subspace clustering
Abstract
The critical point for the success of spectral-type subspace clustering algorithms is to seek reconstruction coefficient matrices which faithfully reveal the subspace structures of data sets. An ideal reconstruction coefficient matrix should have two properties: 1) it is block diagonal, with each block indicating a subspace; 2) each block is fully connected. Though various spectral-type subspace clustering algorithms have been proposed, the reconstruction coefficient matrices constructed by these algorithms still have some defects. We find that a normalized membership matrix naturally satisfies the above two conditions. Therefore, in this paper, we devise an idempotent representation (IDR) algorithm to pursue reconstruction coefficient matrices approximating normalized membership matrices. IDR designs a new idempotent constraint for reconstruction coefficient matrices, and by combining it with doubly stochastic constraints, coefficient matrices which are close to normalized membership matrices can be directly obtained. We present the optimization algorithm for solving the IDR problem and analyze its computational burden as well as its convergence. Comparisons between IDR and related algorithms show the superiority of IDR. Extensive experiments conducted on both synthetic and real world datasets prove that IDR is an effective and efficient subspace clustering algorithm.
Index Terms:
subspace clustering, idempotent matrix, doubly stochastic constraint, normalized membership matrix
I Introduction
High-dimensional data samples arising in computer vision can be viewed as being generated from a union of linear subspaces [1, 2, 3, 4]. Subspace clustering, whose goal is to partition the data samples into several clusters with each cluster corresponding to a subspace, has attracted considerable attention. In the past decades, many kinds of subspace clustering algorithms have been proposed [5, 6, 7, 8, 9, 10]. Among them, spectral-type methods have shown excellent performance in many applications such as motion segmentation and face clustering [9, 10, 11, 12].
Without loss of generality, suppose that a clean data matrix X = [X_1, X_2, ..., X_k] contains n data samples drawn from k subspaces, where X_i denotes the sub-matrix containing the n_i data samples lying in the i-th subspace, n_1 + n_2 + ... + n_k = n, and X_i ∩ X_j = ∅ for i ≠ j. The framework of spectral-type subspace clustering algorithms consists of three parts. Firstly, they learn a reconstruction coefficient matrix Z satisfying X = XZ. Secondly, an affinity matrix W is built by using the obtained reconstruction coefficient matrix, i.e., w_ij = (|z_ij| + |z_ji|)/2, where w_ij and z_ij denote the (i, j)-th elements of W and Z respectively, and Z^T is the transpose of Z. Finally, a certain spectral clustering algorithm, e.g., normalized cuts (Ncuts) [13], is applied to W to get the final clustering results. It can be clearly seen that the performance of a spectral-type algorithm relies mainly on the learned reconstruction coefficient matrix. An ideal coefficient matrix should have inter-subspace sparsity and intra-subspace connectivity. Namely, if x_i and x_j belong to the same subspace, z_ij ≠ 0; otherwise, z_ij = 0.
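To make the three-step framework concrete, here is a minimal sketch (not taken from the paper). Step 1 uses the well-known LSR closed form as the simplest coefficient learner, and scikit-learn's normalized spectral clustering stands in for Ncuts; the function name and parameter values are illustrative only.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def spectral_type_clustering(X, k, lam=0.1):
    """Generic spectral-type pipeline: coefficients -> affinity -> spectral clustering.
    Step 1 uses the LSR closed form Z = (X^T X + lam*I)^{-1} X^T X; any other
    regularizer (SSC, LRR, BDR, IDR, ...) could be plugged in here instead."""
    n = X.shape[1]                                  # samples are columns of X
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)     # step 1: reconstruction coefficients
    W = 0.5 * (np.abs(Z) + np.abs(Z).T)             # step 2: affinity w_ij = (|z_ij|+|z_ji|)/2
    sc = SpectralClustering(n_clusters=k, affinity='precomputed', random_state=0)
    return sc.fit_predict(W)                        # step 3: spectral clustering (Ncuts-style)
```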
Different spectral-type methods use different regularizers to produce coefficient matrices with different characteristics. For instance, sparse subspace clustering (SSC) [9, 14] pursues sparse reconstruction coefficient matrices by introducing a sparse constraint [15]. Low-rank representation (LRR) [10, 16] seeks a low-rank reconstruction coefficient matrix by minimizing the nuclear norm of the coefficient matrix. Least squares regression (LSR) [17] defines a Frobenius norm regularizer and searches for a dense reconstruction coefficient matrix. Block diagonal representation (BDR) [18] provides a block diagonal reconstruction coefficient matrix by minimizing the sum of the k smallest eigenvalues of the Laplacian matrix of the coefficient matrix. Though these representative methods achieve promising results in different kinds of subspace clustering tasks, the obtained coefficient matrices still have some drawbacks. The coefficient matrices obtained by SSC are usually so sparse that they lack connectedness within each subspace. The block diagonal constraint used in BDR may not lead to correct clustering, since each block still may not be fully connected. On the other hand, although connectedness within subspaces is guaranteed in the dense coefficient matrices constructed by LRR and LSR, the coefficients of the inter-subspace samples are usually non-zero. To escape these dilemmas, three different types of methods have emerged. Firstly, some extensions of classical regularizers have been developed. For example, Zhang et al. extended the nuclear norm regularizer used in LRR to a kind of Schatten-p norm regularizer [19]. Xu et al. proposed a scaled simplex representation by adding a non-negative constraint and a scaled affine constraint to the coefficient matrix obtained in LSR [20]. Secondly, researchers began to use mixed regularizers of coefficient matrices. Li et al. proposed a structured sparse subspace clustering (SSSC) [21] by adding a re-weighted ℓ1-norm regularizer into SSC. The elastic net (EN) method defined a combination of the ℓ1-norm and Frobenius norm regularizers of the coefficient matrices [22, 23]. Zhuang et al. combined a sparse constraint and the nuclear norm regularizer to propose a non-negative low-rank and sparse representation method (NNLRSR) [24]. Tang et al. generalized NNLRSR and devised a structure-constrained LRR (SCLRR) [25]. Lu et al. presented a graph-regularized LRR (GLRR) algorithm by minimizing the nuclear norm and the Laplacian regularizer of the coefficient matrix simultaneously [26]. Tang et al. designed a dense block and sparse representation (DBSR) method which uses the matrix 2-norm (i.e., the maximal singular value) and a sparsity regularizer to compute a dense block and sparse coefficient matrix [27]. Thirdly, classical spectral-type subspace clustering algorithms have been integrated to build cascade models. Wei et al. devised a structured sparse relation representation by stacking SSC and LRR [28]. Sui et al. also provided a similar method to show the effectiveness of cascade models [29]. These extended methods outperform the classical algorithms to a certain extent, but they still cannot guarantee ideal coefficient matrices.
From the viewpoint of a spectral clustering algorithm, the best affinity matrix of a data set should have the following properties: if x_i and x_j belong to the same cluster, then the corresponding affinity equals 1; otherwise, it equals 0. Namely, the affinity matrix should be block diagonal and have the following formulation:
M = diag(1_{n_1} 1_{n_1}^T, 1_{n_2} 1_{n_2}^T, ..., 1_{n_k} 1_{n_k}^T),   (1)
where 1_{n_i} is a column vector with n_i elements, each of which equals 1. In the correlation clustering domain, M is called a membership matrix [30, 31, 32]. Moreover, it has been proved that a variation of the membership matrix, called the normalized membership matrix, is also adequate for spectral clustering. The normalized membership matrix corresponding to M is expressed as follows:
M̃ = diag((1/n_1) 1_{n_1} 1_{n_1}^T, (1/n_2) 1_{n_2} 1_{n_2}^T, ..., (1/n_k) 1_{n_k} 1_{n_k}^T).   (2)
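As a quick illustration (not part of the original paper), the following sketch builds a membership matrix and its normalized counterpart directly from ground-truth cluster labels; the function name and label encoding are our own choices.

```python
import numpy as np

def membership_matrices(labels):
    """Build the membership matrix M (Eq. (1)) and the normalized
    membership matrix (Eq. (2)) from integer cluster labels."""
    labels = np.asarray(labels)
    M = (labels[:, None] == labels[None, :]).astype(float)  # 1 inside a block, 0 elsewhere
    sizes = np.bincount(labels)[labels]                      # cluster size n_i of each point
    return M, M / sizes[None, :]                             # entries of block i become 1/n_i

M, M_norm = membership_matrices([0, 0, 0, 1, 1, 2, 2, 2])
```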
Back to the domain of subspace clustering, suppose an affinity matrix W is a normalized membership matrix. As mentioned above, in a spectral-type subspace clustering algorithm, the affinity matrix is defined as w_ij = (|z_ij| + |z_ji|)/2. If we force Z = Z^T and z_ij ≥ 0 (for all i, j), the best reconstruction coefficient matrix Z should be the same as W, namely Z should also be a normalized membership matrix. Such a Z is definitely inter-subspace sparse and intra-subspace connected. The property that every element in the i-th block equals 1/n_i means that each block of Z is fully connected. Hence, this kind of coefficient matrix is better than that obtained by BDR. Fig. 1 presents the coefficient matrices obtained by BDR and the algorithm proposed in this paper on a synthetic data set. We can see that all the coefficient matrices are block diagonal, but each block in the coefficient matrices obtained by the proposed algorithm is denser than that obtained by BDR. Hence the subspace structure of the data set can be revealed more faithfully by the proposed algorithm.
[Figure 1: Coefficient matrices obtained by BDR and the proposed algorithm on a synthetic data set.]
In [32], Lee et al. also suggested constructing a normalized membership matrix for subspace clustering. However, the so-called membership representation (MR) algorithm [32] takes three steps to finally get the coefficient matrix. Firstly, a certain subspace clustering algorithm, such as SSC or LRR, was used to get an initial coefficient matrix. Secondly, MR sought a membership matrix by using the obtained initial coefficient matrix. Finally, a normalized membership matrix was computed with the obtained membership matrix. In the last two steps, augmented Lagrangian method (ALM) [33] was applied to solve the corresponding optimization problems. Hence, besides the computation time used for finding an initial coefficient matrix, the time cost in the last two steps of MR was also high.
In this paper, we propose a new method to find a coefficient matrix which is as close to a normalized membership matrix as possible. The motivation of the proposed algorithm is the self-expressiveness property of the reconstruction coefficient vectors obtained by subspace clustering algorithms. As we know, spectral-type subspace clustering algorithms assume that the original data samples obey the self-expressiveness property [9], i.e., each data point can be well reconstructed by a linear combination of the other points in the given dataset. The self-expressiveness property of the obtained coefficient vectors means that each coefficient vector could be linearly reconstructed by the other coefficient vectors. Based on this proposition and the doubly stochastic constraints [34, 35], an idempotent representation (IDR) method for subspace clustering is proposed. An optimization algorithm for solving the IDR problem is also presented, and the convergence as well as the complexity analysis of the optimization algorithm are given. We also compare IDR with some related algorithms to show its superiority. Finally, extensive experiments conducted on both synthetic and real world databases show the effectiveness and efficiency of the IDR method.
The rest of the paper is organized as follows: we introduce the general formulation of spectral-type subspace clustering algorithms in Section II. In Section III, we propose the idea of idempotent representation (IDR) and the optimization algorithm for solving the IDR problem. Further discussions of IDR, such as the analyses of the convergence and complexity of the optimization algorithm and the connections between IDR and related algorithms, are given in Section IV. Comparative subspace clustering experiments on both synthetic and real world data sets are reported in Section V. Section VI presents the conclusions.
II Preliminary
Though there is a wide variety of existing spectral-type subspace clustering algorithms, the general objective function of these algorithms could be expressed as follows:
min_Z ||Z||_ξ   s.t.   X = XZ,   (3)
where ||Z||_ξ indicates a certain norm regularizer of Z and X is the data matrix. In real applications, data is often noisy or corrupted. Hence, a more robust version of the above problem could be defined as follows:
min_{Z, E} ||Z||_ξ + λ||E||_ℓ   s.t.   X = XZ + E,   (4)
where E is the error term, ||E||_ℓ is a certain measurement of E, and λ is a positive parameter which is used to balance the effects of ||Z||_ξ and ||E||_ℓ. Moreover, some algorithms add additional constraints on Z, which could be expressed as Z ∈ Ω. Then the main differences between the existing subspace clustering algorithms lie in the definitions of ||Z||_ξ, ||E||_ℓ and Ω. Table I summarizes the formulations of ||Z||_ξ, ||E||_ℓ and Ω for some representative subspace clustering algorithms.
Algorithms | ‖Z‖_ξ | ‖E‖_ℓ | Ω
---|---|---|---|
SSC | |||
LRR | - | ||
LSR | |||
BDR | |||
SSSC | |||
EN | + | ||
SCLRR | + | - | |
GLRR | + | - | |
DBSR | + | - |
Notice: λ denotes a trade-off parameter. diag(Z) denotes a column vector composed of the diagonal elements of Z, and 1 is a column vector with each element equal to 1. In BDR, the constraint forces the coefficient matrix to be block diagonal. In SSSC, the weight matrix is updated by the segmentation results in each iteration and I is an identity matrix. In GLRR, tr(·) denotes the trace of a matrix and L is the Laplacian matrix built by using K-nearest-neighbors (KNN) [36] on X.
All the algorithms mentioned in Table I can be solved by using ALM. Then Ncuts is used to get the final subspace clustering results.
III Idempotent representation
III-A Motivation
The key assumption that spectral-type subspace clustering algorithms have in common is that the data samples in X obey the self-expressiveness property [9]. Namely, each data sample can be approximately reconstructed by a linear combination of the other data points in the given dataset with tolerable errors. Thus X ≈ XZ, and Z records the reconstruction relationships of the original data samples.
In addition, as described in [10, 16], the obtained coefficient matrix Z is a representation of the original data matrix X, with z_i being the representation of x_i. Here, z_i and x_i are the i-th columns of Z and X respectively. Then it is reasonable to assume that the coefficient vectors also obey the self-expressiveness property (self-expressiveness property of coefficient vectors), namely each coefficient vector could be linearly reconstructed by the other coefficient vectors in Z. Therefore, we have
Z = ZW,   (5)
where W is a reconstruction coefficient matrix corresponding to Z. Moreover, we could hope W to be close to Z, because if Z is a good representation of X, W should follow the reconstruction relationships of the original data set, and Z records exactly these relationships. Therefore, the following equation holds:
Z = ZZ = Z².   (6)
The above equation means that Z is approximately an idempotent matrix.
It is easy to verify that an identity matrix is idempotent and is a solution to the problem X = XZ. Hence, in a spectral-type subspace clustering algorithm, the above idempotent constraint (Eq. (6)) alone is not sufficient for finding a good coefficient matrix. Fortunately, it can be checked that a normalized membership matrix is also an idempotent matrix. Hence, we will show how to add some necessary constraints to compel an idempotent reconstruction coefficient matrix to be a normalized membership matrix.
In fact, Lee et al. pointed out that an idempotent matrix is a normalized membership matrix if and only if it is doubly stochastic [32]. And a doubly stochastic matrix can be completely described by the following doubly stochastic conditions [34, 35]:
Z ≥ 0,   Z = Z^T,   Z1 = 1.   (7)
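As a sanity check (our own illustration, not from the paper), the sketch below verifies numerically that a normalized membership matrix satisfies both the idempotent constraint of Eq. (6) and the doubly stochastic conditions of Eq. (7); it also shows that its trace equals the number of clusters.

```python
import numpy as np

labels = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])                 # 3 clusters
same = (labels[:, None] == labels[None, :]).astype(float)
M_norm = same / np.bincount(labels)[labels][None, :]            # normalized membership matrix

assert np.allclose(M_norm @ M_norm, M_norm)                     # idempotent (Eq. (6))
assert (M_norm >= 0).all() and np.allclose(M_norm, M_norm.T)    # non-negative, symmetric
assert np.allclose(M_norm.sum(axis=1), 1.0)                     # rows sum to one (Eq. (7))
assert np.isclose(np.trace(M_norm), len(np.unique(labels)))     # trace = number of clusters
```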
However, these conditions still cannot prevent Z from being the identity matrix. As mentioned above, for revealing the subspace structure of a data set with k subspaces faithfully, a coefficient matrix should be block diagonal. Then, for an idempotent and doubly stochastic coefficient matrix Z, we could simply let tr(Z) = k; then Z would be block diagonal with k blocks. This constraint could also prevent Z from degenerating to the trivial solution, i.e., Z = I. Therefore, by integrating these constraints and the general formulation of subspace clustering algorithms, we define the idempotent representation (IDR) problem as follows:
min_{Z, E} ||ZZ − Z||_F² + λ||E||_{2,1}   s.t.   X = XZ + E, Z ≥ 0, Z = Z^T, Z1 = 1, tr(Z) = k,   (8)
where ||ZZ − Z||_F² denotes the idempotent regularizer of Z, namely it forces Z to be approximately idempotent. In most real applications, only part of the data samples are corrupted, hence we use the ℓ_{2,1} norm to measure the error term E.
Furthermore, all these restrictions imposed on Z will limit its representation capability. To alleviate this problem, we introduce an intermediate term S and propose the following relaxed problem:
min_{Z, S, E} ||SS − S||_F² + (γ/2)||Z − S||_F² + λ||E||_{2,1}   s.t.   X = XZ + E, S ≥ 0, S = S^T, S1 = 1, tr(S) = k,   (9)
where γ is also a positive parameter.
III-B Optimization
Similar to solving the existing subspace clustering problems, we use ALM [33] to find the solutions to the IDR problem (i.e., Eq. (9)). Firstly, we need to transform Eq. (9) into the following equivalent problem:
(10) |
where two auxiliary variables are introduced. Then the corresponding augmented Lagrangian function of Eq. (10) could be expressed as follows:
(11) |
where four Lagrangian multipliers and a penalty parameter are introduced. By minimizing this augmented Lagrangian function, each variable can be optimized alternately while fixing the others.
1. Fix other variables and update . Then in the iteration ( is the number of iterations),
(12) |
where and are the updated variables. It could be easily verified that , where is the pseudo-inverse of .
2. Fix other variables and update . Similar to updating ,
(13) |
Hence, . Because of the non-negative and symmetric constraints on , we further let and .
3. Fix other variables and update . We also could find
(14) |
Then .
4. Fix other variables and update . For updating , we could get the following problem:
(15) |
where . Note that the constraint is just imposed on the diagonal elements of , hence , if . Let and , then we have
(16) |
This problem could be solved by any off-the-shelf quadratic programming solver. Here we provide a more efficient method to obtain the solution to Problem (16). The Lagrangian function of Problem (16) is
(17) |
where a Lagrange multiplier is introduced. The optimal solution should satisfy that the derivative of Eq. (17) w.r.t. the variable is equal to zero, so we have
(18) |
Then for the element of , we have
(19) |
where and are the element of and respectively. According to the constraint in Problem (16), then
(20) |
Hence,
(21) |
By summarizing the above computations,
(22) |
where is a diagonal matrix with its diagonal vector being .
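The derivation above follows the standard pattern of solving a quadratic subproblem with a single linear equality constraint in closed form instead of calling a QP solver. Since the exact form of Problem (16) is not reproduced here, the following sketch (our own, with illustrative names) only shows the general technique: set the derivative of the Lagrangian to zero, then use the constraint to eliminate the multiplier.

```python
import numpy as np

def solve_sum_constrained_qp(A, b, c):
    """Minimize 0.5*v^T A v - b^T v subject to 1^T v = c, with A symmetric
    positive definite. Stationarity of the Lagrangian gives A v = b - lam*1;
    substituting into the constraint determines the multiplier lam."""
    ones = np.ones(A.shape[0])
    Ainv_b = np.linalg.solve(A, b)
    Ainv_1 = np.linalg.solve(A, ones)
    lam = (ones @ Ainv_b - c) / (ones @ Ainv_1)   # from 1^T v = c
    return Ainv_b - lam * Ainv_1                   # v = A^{-1}(b - lam*1)
```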
5. Fix other variables and update E. From Eq. (11), the corresponding subproblem can be easily obtained as follows:
(23) |
Lemma 1. Let Q be a given matrix. If the optimal solution to
min_E α||E||_{2,1} + (1/2)||E − Q||_F²   (24)
is E*, then the i-th column of E* is
[E*]_{:,i} = ((||q_i||_2 − α)/||q_i||_2) q_i  if ||q_i||_2 > α,  and 0 otherwise,   (25)
where q_i is the i-th column of Q.
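A minimal numpy sketch of the column-wise shrinkage in Lemma 1 (our own helper, assuming the standard ℓ_{2,1} proximal operator stated above):

```python
import numpy as np

def l21_shrink(Q, alpha):
    """Closed-form minimizer of alpha*||E||_{2,1} + 0.5*||E - Q||_F^2 (Lemma 1):
    each column of Q is shrunk towards zero by alpha in Euclidean norm."""
    norms = np.linalg.norm(Q, axis=0)                        # ||q_i||_2 for each column
    scale = np.maximum(norms - alpha, 0.0) / np.maximum(norms, 1e-12)
    return Q * scale                                         # small columns become zero
```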
6. Fix other variables and update the multipliers and the penalty parameter. The precise updating schemes for the parameters in Eq. (11) are summarized as follows:
(26) |
where and are two given positive parameters.
III-C Algorithm
We summarize the algorithmic procedure of IDR in Algorithm 1. For a data set, once the solutions Z and S to IDR are obtained, we use them to define two affinity graphs W_Z and W_S as W_Z = (|Z| + |Z^T|)/2 and W_S = (|S| + |S^T|)/2. Ncuts is then performed on the two graphs to get two segmentation results. Finally, the better one is chosen as the final result.
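A short sketch of this post-processing step (the function name and the scikit-learn spectral clustering stand-in for Ncuts are our own choices):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def idr_segmentations(Z, S, k):
    """Build the affinity graphs W_Z and W_S from the IDR solutions and cluster
    each one; the paper reports the better of the two segmentations."""
    results = []
    for C in (Z, S):
        W = 0.5 * (np.abs(C) + np.abs(C).T)   # symmetric, non-negative affinity
        sc = SpectralClustering(n_clusters=k, affinity='precomputed', random_state=0)
        results.append(sc.fit_predict(W))
    return results                             # two candidate label vectors
```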
IV Further analyses
IV-A Complexity analysis
We can see that the complexity of Algorithm 1 is mainly determined by the updating of the five variables. In each iteration, these variables all have closed-form solutions. Updating the reconstruction coefficients requires the pseudo-inverse of an n × n matrix, whose computational burden is O(n³). Updating the other coefficient variables takes O(n³) to compute products of n × n matrices, and updating the error term by using Lemma 1 also has a closed form. Hence, the time complexity of Algorithm 1 in each iteration is dominated by these O(n³) operations. In our experiments, the number of iterations of Algorithm 1 is always small, hence the total complexity remains of the same order.
IV-B Convergence analysis
We now present a theoretical convergence analysis of the proposed Algorithm 1.
Proposition 1: Algorithm 1 is convergent and the sequence generated by Algorithm 1 converges to a stationary point.
Proof: Algorithm 1 aims to minimize the augmented Lagrangian function in Eq. (11) by alternately updating the variables. Firstly, from the updating rule in Eq. (12), we have
(27) |
Note that is -strongly convex w.r.t. . The following inequality holds:
(28) |
Here we use Lemma B.5 in [37].
Secondly, according to the updating schemes for the remaining variables, it can be found that these variables have properties similar to the above. Hence, inequalities analogous to (28) hold for them. By adding these inequalities, we have
(29) |
Hence, the sequence of augmented Lagrangian values is monotonically decreasing and thus upper bounded. This implies that the generated variables are also bounded. Now, summing inequality (29) over the iterations, we have
(30) |
This implies that, when the number of iterations tends to infinity,
(31) |
Moreover, this holds clearly according to the definition. Therefore, the convergence of Algorithm 1 is guaranteed and the sequence converges to a stationary point of Eq. (10).
IV-C Comparative analysis with related algorithms
We now discuss the relationships between IDR and some related algorithms.
IV-C1 Comparative analysis with membership representation (MR)
As mentioned in Section I, MR also proposes to learn a normalized membership matrix as the reconstruction coefficient matrix [32]. However, MR is a cascade model which consists of three steps:
Firstly, an initial coefficient matrix is learned by using SSC or LRR.
Secondly, a membership matrix is constructed by solving the following problem:
(32) |
where the membership matrix is required to be positive semi-definite and a positive parameter is introduced.
Thirdly, after the membership matrix is obtained, a normalized membership matrix is achieved by optimizing the following problem:
(33) |
where the involved constant is set manually. We can see that the symmetric constraint is omitted in the above problem. Hence, the coefficient matrix found by MR may not be close to a normalized membership matrix.
Besides the computation for finding the initial coefficient matrix, Problems (32) and (33) also need to be solved by using the ALM method. Clearly, MR is very time-consuming.
Additionally, it can be seen that the performance of MR depends on the learned initial coefficient matrix. The value of the parameter in SSC or LRR will influence its performance, and how to choose an initial coefficient matrix is not discussed in [32]. Moreover, the three hyper-parameters in MR make parameter tuning difficult.
IV-C2 Comparative analysis with doubly stochastic subspace clustering (DSSC) [38]
Based on the descriptions in Section III-A, it can be seen that the normalized membership matrix obtained by IDR is a special case of a doubly stochastic matrix. Recently, Lim et al. devised a doubly stochastic subspace clustering (DSSC) algorithm [38] which pursues a doubly stochastic coefficient matrix. The objective of DSSC could be expressed as follows:
(34) |
where three parameters are involved and the constraint requires the coefficient matrix to be doubly stochastic, namely to satisfy the conditions presented in Eq. (7). By using two different strategies to solve the above problem, two different models, joint DSSC (J-DSSC) and approximation DSSC (A-DSSC), are presented. Among them, A-DSSC is a two-step algorithm which first uses LSR or EN to get an initial coefficient matrix and then computes a doubly stochastic matrix from it. On the other hand, the computational burden of J-DSSC is high, because in each iteration of J-DSSC, two intermediate matrices should be iteratively updated by using a linearized alternating direction method of multipliers (ADMM) [39]. Moreover, DSSC has three hyper-parameters, which also makes parameter adjustment difficult.
IV-C3 Comparative analysis with self-representation constrained LRR (SRLRR) [40]
The idempotent constraint on the coefficient matrix was first proposed in our previous work [40]. SRLRR defines the following problem:
(35) |
The main problem with SRLRR is that we had not built solid theoretical connections between the obtained coefficient matrix and a normalized membership matrix. The nuclear norm minimization and the affine constraint (i.e., the columns of the coefficient matrix sum to one) [41] in SRLRR are used to prevent the coefficient matrix from degenerating to trivial solutions. This is totally different from IDR.
Based on these comparisons, we can see that the existing subspace clustering methods which also aim to seek normalized membership matrices or doubly stochastic matrices all rely on certain existing regularizers of the coefficient matrix. IDR presents a rather different method for tackling subspace clustering problems.
V Experiments
V-A Experiment setup
V-A1 Datasets
Both synthetic and real world data sets are used in our experiments to verify the effectiveness of IDR. Four benchmark databases, including the Hopkins 155 motion segmentation data set [42], the ORL face image database [43], the AR face image database [44] and the MNIST handwritten digit database (http://yann.lecun.com/exdb/mnist/), are used for evaluation.
V-A2 Comparison methods
The representative and closely related algorithms SSC [14], LRR [16, 10], LSR [17], BDR [18], MR [32] and DSSC [38] are used for comparison. We provide the Matlab codes of IDR, MR and DSSC at https://github.com/weilyshmtu/Learning-idempotent-representation-for-subspace-segmentation. The Matlab codes for SSC and LRR can be found at http://www.vision.jhu.edu/code/ and http://sites.google.com/site/guangcanliu/ respectively, and the Matlab codes of LSR and BDR at https://canyilu.github.io/code/. All the experiments are conducted on a Windows-based machine with an Intel i7-4790 CPU, 20 GB of memory and MATLAB R2017b.
V-A3 Parameters Setting
Because the values of the parameters will influence the performance of the evaluated algorithms, for each compared method we tune all the parameters by following the suggestions in the corresponding references and retain those with the best performance on each data set. The chosen parameter settings for all algorithms are given in Table II. In particular, for MR, when SSC or LRR is used to obtain the initial coefficient matrix, the parameters of these two algorithms are chosen according to the description in [32].
Methods | Parameters |
---|---|
SSC | |
LRR | |
LSR | |
BDR | , |
MR | |
DSSC(JDSSC) | |
DSSC(ADSSC) | |
IDR |
V-A4 Evaluation metrics
For all the evaluated algorithms, we use the obtained coefficient matrices to construct the affinity graphs without any post-processing. For the performance evaluation, we use segmentation accuracy (SA) or segmentation error (SE), which is defined as follows:
SA = (Σ_{i=1}^{n} δ(p_i, map(q_i))) / n,   (36)
where p_i and q_i represent the ground truth and the output label of the i-th point respectively, δ(x, y) = 1 if x = y and δ(x, y) = 0 otherwise, and map(·) is the best mapping function that permutes the clustering labels to match the ground truth labels. The segmentation error is SE = 1 − SA.
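A small sketch of this metric (our own helper names; the optimal label permutation is found with the Hungarian algorithm from SciPy):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def segmentation_accuracy(y_true, y_pred):
    """SA: fraction of correctly labeled points under the best label permutation."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    t_classes, p_classes = np.unique(y_true), np.unique(y_pred)
    counts = np.zeros((t_classes.size, p_classes.size), dtype=int)
    for i, t in enumerate(t_classes):
        for j, p in enumerate(p_classes):
            counts[i, j] = np.sum((y_true == t) & (y_pred == p))
    rows, cols = linear_sum_assignment(-counts)    # maximize matched points
    return counts[rows, cols].sum() / y_true.size  # SE = 1 - SA
```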
V-B Experiments on a synthetic data set
We generate several subspaces, each of a fixed dimension, in an ambient space of higher dimension. We sample data points from each subspace and construct a data matrix. Moreover, a certain percentage of the data vectors are chosen randomly and corrupted by Gaussian noise with zero mean and a given variance. Finally, the evaluated algorithms are used to segment the data into subspaces. For each corruption percentage, the experiments are repeated for 20 trials, yielding a collection of subspace clustering tasks.
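A sketch of such a generator (the specific numbers of subspaces, dimensions and samples below are placeholders, since the exact values are not preserved in the extracted text):

```python
import numpy as np

def synth_subspace_data(k=5, d=4, D=50, n_per=50, corrupt_frac=0.3, sigma=0.1, seed=0):
    """Sample n_per points from each of k random d-dimensional subspaces of R^D,
    then corrupt a random fraction of the columns with Gaussian noise."""
    rng = np.random.default_rng(seed)
    blocks, labels = [], []
    for i in range(k):
        U, _ = np.linalg.qr(rng.standard_normal((D, d)))   # orthonormal basis of subspace i
        blocks.append(U @ rng.standard_normal((d, n_per)))
        labels += [i] * n_per
    X = np.hstack(blocks)
    idx = rng.choice(X.shape[1], int(corrupt_frac * X.shape[1]), replace=False)
    X[:, idx] += sigma * rng.standard_normal((D, idx.size))  # corrupted columns
    return X, np.array(labels)
```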
Similar experiments can be found in some existing references [16, 21], but there the parameters of each evaluated algorithm are fixed when the algorithm is performed on each sub-database; the best results over the parameter settings are then finally selected. However, performing subspace clustering on each sub-database should be viewed as a separate segmentation task. In our experiments, we hence let the parameter(s) of each algorithm vary in the corresponding intervals in Table II and record the highest segmentation accuracy of each evaluated algorithm on each sub-database. The mean of these highest segmentation accuracies (averaged over 20 random trials) of each algorithm versus the percentage of corruption is reported in Fig. 2.
In addition, IDR and BDR each produce two coefficient matrices that can be used to compute the clustering results. By using different methods (SSC and LRR) to construct the initial coefficient matrices, MR can obtain two different results. Based on different strategies, DSSC has two sub-models, namely JDSSC and ADSSC. Hence, in Fig. 2(a) we plot the accuracies of all the algorithms by selecting the better of the corresponding two results. The detailed segmentation accuracies of IDR and BDR with their two different coefficient matrices, the results of MR based on SSC and LRR (denoted as MR-SSC and MR-LRR), as well as those of JDSSC and ADSSC, are plotted in Fig. 2(b).
[Figure 2: Segmentation accuracies of the evaluated algorithms on the synthetic data set versus the percentage of corruption.]
From Fig. 2(a), we can see that 1) IDR consistently achieves the best results; 2) the performances of LRR, LSR, MR and DSSC are close to each other when the percentage of corruption is relatively small; 3) when the percentage of corruption becomes larger, MR dominates LRR, LSR and DSSC; 4) SSC is inferior to the other algorithms.
From Fig. 2(b), it can be seen that 1) the results obtained by the two different coefficient matrices of IDR and of BDR are close to each other; 2) the performances of JDSSC and ADSSC are also similar to each other; 3) however, the results of MR-LRR are much better than those of MR-SSC. This means that the performance of MR relies on the initial coefficient matrices.
To show the sensitivity of IDR to its two parameters λ and γ, we report the segmentation accuracies of IDR with varying parameter values. Sub-databases with different corruption percentages are used. The mean segmentation accuracy against the pairs of parameters is illustrated in Fig. 3.
[Figure 3: Mean segmentation accuracies of IDR on the synthetic data set versus its two parameters.]
From Fig. 3, we can see that 1) the performance of IDR is stable when the parameters vary within relatively large intervals; 2) when the corruption percentage is low, IDR is insensitive to the parameters, while when the corruption percentage is high, small λ and γ help IDR achieve good results. We believe that when a data set is clean, a normalized membership reconstruction coefficient matrix is easy to obtain, so the idempotent constraint can be satisfied. However, when most data samples are corrupted, the normalized membership reconstruction coefficient matrix is difficult to obtain. Hence, in such situations, the corresponding parameters should be small.
V-C Experiments on Hopkins 155 data set
The Hopkins 155 database is a well-known benchmark for testing the performance of subspace clustering algorithms. It consists of 120 sequences of two motions and 35 sequences of three motions. Each sequence is a separate clustering task, so there are 155 clustering tasks in total. The feature points of each sequence are extracted and tracked along the motion in all frames, and errors are manually removed for each sequence. Sample images from the Hopkins 155 database are illustrated in Fig. 4.
[Figure 4: Sample images from the Hopkins 155 database.]
We perform the experiments with the original data matrices and with the data matrices projected onto a lower-dimensional subspace (whose dimension is determined by the number of subspaces) by using principal component analysis (PCA) [36]. Then the segmentation error (i.e., SE = 1 − SA) of each evaluated algorithm is computed on each sequence.
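A sketch of this projection step (our own helper; the target dimension is an assumption: 4k, with k the number of motions, is a common choice in the motion segmentation literature, but the exact value used in the paper is not preserved in the extracted text):

```python
import numpy as np

def pca_project(X, k):
    """Project the columns of X onto the top 4k principal directions via SVD."""
    Xc = X - X.mean(axis=1, keepdims=True)       # center the feature trajectories
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    dim = min(4 * k, U.shape[1])                 # assumed projection dimension
    return U[:, :dim].T @ Xc
```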
Firstly, we collect the best results of each algorithm obtained on the 155 sequences with the parameters varying in the given intervals. The mean, median and std. (standard deviation) of the results are reported in Tables III and IV. From the two tables, we can see that 1) IDR achieves the best results in these experiments; 2) BDR and LSR also achieve competitive results; 3) MR-LRR and MR-SSC do not outperform their corresponding classical methods LRR and SSC, which means the post-processing of the coefficient matrices in MR may not always enhance the performance of LRR and SSC; 4) JDSSC fails to achieve satisfying results.
Methods | Average time | 2 motions | 3 motions | All motions | ||||||
---|---|---|---|---|---|---|---|---|---|---|
(sec.) | mean | median | std. | mean | median | std. | mean | median | std. | |
IDR-Z | ||||||||||
IDR-S | ||||||||||
SSC | ||||||||||
LRR | ||||||||||
LSR | ||||||||||
BDR-B | ||||||||||
BDR-Z | ||||||||||
MR-SSC | ||||||||||
MR-LRR | ||||||||||
JDSSC | ||||||||||
ADSSC |
Methods | Average time | 2 motions | 3 motions | All motions | ||||||
---|---|---|---|---|---|---|---|---|---|---|
(sec.) | mean | median | std. | mean | median | std. | mean | median | std. | |
IDR-Z | ||||||||||
IDR-S | ||||||||||
SSC | ||||||||||
LRR | ||||||||||
LSR | ||||||||||
BDR-Z | ||||||||||
BDR-B | ||||||||||
MR-SSC | ||||||||||
MR-LRR | ||||||||||
JDSSC | ||||||||||
ADSSC |
Moreover, we also report the average computation time of each algorithm on the 155 motion sequences in Tables III and IV. Clearly, LSR and ADSSC are much more efficient than the other algorithms. The average computation time of IDR is close to that of LRR, hence the computational burden of IDR is acceptable. We can also see that MR is time-consuming.
Secondly, we analyse the experimental results of each algorithm in another way. For each algorithm, we present in Fig. 5 the percentage of motions whose SEs are less than or equal to a given segmentation error level. We can see that the segmentation errors obtained by IDR-Z on all motions are below a small error level.
[Figure 5: Percentage of motions whose segmentation errors are less than or equal to a given error level.]
Finally, we also test the sensitivity of IDR to its parameters on the Hopkins 155 database. For a fixed pair of (λ, γ), we compute the segmentation error for all 155 segmentation tasks and then take the mean over the 155 tasks. By changing the values of λ and γ, we illustrate the performance of IDR against its parameters in Fig. 6.
[Figure 6: Performance of IDR on the Hopkins 155 database versus its parameters.]
Based on Fig. 6, we can still see that IDR is insensitive to its parameters and that small parameter values help IDR achieve better results.
V-D Experiments on face image databases
We now perform experiments on two benchmark face image databases, i.e., the ORL database [43] and the AR database [44]. Brief information on the two databases is given as follows:
The ORL database contains 400 face images (without noise) of 40 persons. Each individual has 10 different images. These images were taken at different times, with varying lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). In our experiments, each image is resized to a smaller fixed size.
The AR database consists of over 4,000 face images of 126 individuals. For each individual, 26 pictures were taken in two sessions (separated by two weeks) and each session contains 13 images. These images include frontal views of faces with different expressions, illumination conditions and occlusions. In our experiments, each image is resized to a smaller fixed size.
Moreover, the pixel values of the images in the two databases lie in [0, 255]. For efficient computation, we divide each pixel value by 255, so that the pixel values of each image fall into [0, 1]. This does not change the distribution of the original data sets. Some sample images from the ORL and AR databases are shown in Fig. 7.
[Figure 7: Sample images from the ORL and AR databases.]
We first randomly choose the images of a certain number of persons from the two databases; several different numbers of persons are used for the ORL database and for the AR database. Then the performances of the evaluated methods are tested on these sub-databases. With the parameters varying, the highest clustering accuracy of each algorithm on each sub-database is collected. These experiments are run for multiple trials, and the mean and standard deviation of the SAs obtained by each algorithm are reported in Tables V and VI respectively.
Methods | Number of persons | |||||
---|---|---|---|---|---|---|
IDR-Z | ||||||
IDR-S | ||||||
SSC | ||||||
LRR | ||||||
LSR | ||||||
BDR-Z | ||||||
BDR-B | ||||||
MR-SSC | ||||||
MR-LRR | ||||||
JDSSC | ||||||
ADSSC |
Methods | Number of persons | ||||
---|---|---|---|---|---|
IDR-Z | |||||
IDR-S | |||||
SSC | |||||
LRR | |||||
LSR | |||||
BDR-Z | |||||
BDR-B | |||||
MR-SSC | |||||
MR-LRR | |||||
JDSSC | |||||
ADSSC |
Clearly, the two tables show that in most cases IDR outperforms the other algorithms on the two databases. Especially on the AR database, IDR obtains much better results than the other evaluated algorithms.
We also compare the computation time of all the evaluated algorithms. For a face image database, on its sub-databases with a fixed number of persons, we compute the average computation time of each algorithm. The computation time of each algorithm versus the number of persons is illustrated in Fig. 8. Similar to the results obtained on the Hopkins 155 database, it can be seen that the computation time of IDR is acceptable. When the number of persons is relatively small, the computation cost of IDR is close to that of LRR; when it is relatively large, IDR is more efficient than LRR. However, JDSSC spends much more time than the other algorithms.
[Figure 8: Average computation time of each algorithm versus the number of persons.]
Finally, we test the convergence of IDR by using all the samples in the ORL database. We record the residuals of the constraints on the three variables in Eq. (10). Fig. 10 plots the residuals versus the number of iterations. It can be seen that the variables converge to a stationary point within a relatively small number of iterations of Algorithm 1, and after a moderate number of iterations the residuals are close to zero.
[Figure: Residuals of IDR versus the number of iterations on the ORL database.]
The performances of the evaluated algorithms on the whole ORL database are reported in Table VII. We can see that IDR still achieves the best results. In Table VII, the average computation time of each algorithm over different parameters is also reported. Moreover, the sensitivity of IDR to its parameters is illustrated in Fig. 10. It again shows that IDR is stable and obtains good results when the parameters are relatively small.
Methods | IDR-Z | IDR-S | SSC | LRR | LSR | BDR-Z | BDR-B | MR-SSC | MR-LRR | JDSSC | ADSSC |
---|---|---|---|---|---|---|---|---|---|---|---|
Segmentation Accuracy | |||||||||||
Average time (sec.) |
[Figure: Sensitivity of IDR to its parameters on the whole ORL database.]
V-E Experiments on MNIST data set
The MNIST database has 10 subjects, corresponding to the handwritten digits '0'-'9'. We first select a subset consisting of the first 100 samples of each subject's training set to form a sub MNIST database, and each image is resized to a fixed size. Some sample images from the database are illustrated in Fig. 11.
[Figure 11: Sample images from the MNIST database.]
We then follow methodologies similar to those used in the above experiments. Here, we randomly choose the images of several digits from the subjects' training data to build sub-databases. We also run the experiments for multiple trials and record the mean and standard deviation of the segmentation accuracies obtained by each algorithm in Table VIII.
Methods | Number of digits | ||||
---|---|---|---|---|---|
IDR-Z | |||||
IDR-S | |||||
SSC | |||||
LRR | |||||
LSR | |||||
BDR-Z | |||||
BDR-B | |||||
MR-SSC | |||||
MR-LRR | |||||
JDSSC | |||||
ADSSC |
From Table VIII, we can find that IDR still dominates the other algorithms; in fact, IDR achieves much better results than the other algorithms. In addition, we can see that the performances of the other algorithms are close to each other.
Moreover, we also plot the average computation time of each algorithm against the number of digits in Fig. 12(a), and show how the performances of IDR-Z and IDR-S change with the values of the parameters λ and γ in Fig. 12(b) and Fig. 12(c) respectively. For the visualization of IDR's sensitivity, here we use the sub-databases with a fixed number of digits.
[Figure 12: (a) Average computation time of each algorithm versus the number of digits; (b), (c) performances of IDR-Z and IDR-S versus the parameters.]
From Fig. 12, we can conclude that 1) the computation time of IDR is much less than that of MR and JDSSC; 2) the computation costs of MR-SSC and MR-LRR are much larger than those of the other algorithms; 3) IDR achieves better results with small λ and γ, which coincides with the experiments presented above.
Based on all the above experiments, we can draw the following conclusions: 1) IDR obtains satisfying subspace clustering results on different kinds of databases; 2) compared with the closely related algorithms, such as MR and DSSC, the computation cost of IDR is acceptable; 3) IDR is insensitive to its two parameters, although small parameter values help IDR achieve better results.
VI Conclusions
Spectral-type subspace clustering algorithms show excellent performance in subspace clustering tasks. The classical spectral-type methods use different norms of the reconstruction coefficient matrices in the hope of obtaining coefficient matrices satisfying intra-subspace connectivity and inter-subspace sparsity. In this paper, we design an idempotent constraint for reconstruction coefficient matrices based on the proposition that reconstruction coefficient vectors also obey the self-expressiveness property. By integrating doubly stochastic constraints, we present an idempotent representation (IDR) method for subspace clustering. Subspace clustering experiments conducted on both synthetic and real world data sets verify the effectiveness and efficiency of IDR.
References
- [1] L. Parsons, E. Haque, and H. Liu, “Subspace clustering for high dimensional data: A review,” SIGKDD Explor. Newsl., vol. 6, no. 1, pp. 90–105, 2004.
- [2] R. Vidal, “Subspace clustering,” IEEE Signal Processing Magazine, vol. 28, no. 2, pp. 52–68, 2011.
- [3] R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan, “Automatic subspace clustering of high dimensional data for data mining applications,” in SIGMOD 1998, Proceedings ACM SIGMOD International Conference on Management of Data, June 2-4, 1998, Seattle, Washington, USA, L. M. Haas and A. Tiwary, Eds. ACM Press, 1998, pp. 94–105.
- [4] H. Kriegel, P. Kröger, and A. Zimek, “Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering,” ACM Trans. Knowl. Discov. Data, vol. 3, no. 1, pp. 1:1–1:58, 2009.
- [5] R. Vidal and P. Favaro, “Low rank subspace clustering (lrsc),” Pattern Recognition Letters, vol. 43, pp. 47–61, 2014.
- [6] Y. Chu, Y. Chen, D. Yang, and M. Chen, “Reducing redundancy in subspace clustering,” IEEE Trans. Knowl. Data Eng., vol. 21, no. 10, pp. 1432–1446, 2009.
- [7] S. Yi, Z. He, Y. Cheung, and W. Chen, “Unified sparse subspace learning via self-contained regression,” IEEE Trans. Circuits Syst. Video Technol., vol. 28, no. 10, pp. 2537–2550, 2018.
- [8] Y. Ma, H. Derksen, W. Hong, and J. Wright, “Segmentation of multivariate mixed data via lossy data coding and compression,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 9, pp. 1546–1562, 2007.
- [9] E. Elhamifar and R. Vidal, “Sparse subspace clustering: algorithm, theory, and applications,” IEEE Trans Pattern Anal Mach Intell, vol. 35, no. 11, pp. 2765–81, 2013.
- [10] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, “Robust recovery of subspace structures by low-rank representation,” IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.
- [11] K. Kanatani, “Motion segmentation by subspace separation and model selection,” in Proceedings of the Eighth International Conference On Computer Vision (ICCV-01), Vancouver, British Columbia, Canada, July 7-14, 2001 - Volume 2. IEEE Computer Society, 2001, pp. 586–591.
- [12] Y. Ma, A. Y. Yang, H. Derksen, and R. M. Fossum, “Estimation of subspace arrangements with applications in modeling and segmenting mixed data,” SIAM Review, vol. 50, no. 3, pp. 413–458, 2008.
- [13] J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 888–905, 2000.
- [14] E. Elhamifar and R. Vidal, “Sparse subspace clustering,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), Miami, Florida, USA, Jun. 2009, pp. 2790–2797.
- [15] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.
- [16] G. Liu, Z. Lin, and Y. Yu, “Robust subspace segmentation by low-rank representation,” in Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010,, Haifa, Israel, jun 2010, pp. 663–670.
- [17] C.-Y. Lu, H. Min, Z.-Q. Zhao, L. Zhu, D.-S. Huang, and S. Yan, “Robust and efficient subspace segmentation via least squares regression,” in Proceedings of the 12th European Conference on Computer Vision, ECCV 2012, Florence, Italy, 2012, pp. 347–360.
- [18] C. Lu, J. Feng, Z. Lin, T. Mei, and S. Yan, “Subspace clustering by block diagonal representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 2, pp. 487–501, 2019.
- [19] H. Zhang, J. Yang, F. Shang, C. Gong, and Z. Zhang, “Lrr for subspace segmentation via tractable schatten-p norm minimization and factorization,” IEEE Transactions on Cybernetics, pp. 1–13, 2018.
- [20] J. Xu, M. Yu, L. Shao, W. Zuo, D. Meng, L. Zhang, and D. Zhang, “Scaled simplex representation for subspace clustering,” IEEE Transactions on Cybernetics, 2019.
- [21] C.-G. Li, C. You, and R. Vidal, “Structured sparse subspace clustering: A joint affinity learning and subspace clustering framework,” IEEE Trans Image Process, vol. 26, no. 6, pp. 2988–3001, 2017.
- [22] Y. Panagakis and C. Kotropoulos, “Elastic net subspace clustering applied to pop/rock music structure analysis,” Pattern Recognition Letters, vol. 38, pp. 46–53, 2014.
- [23] C. You, C. Li, D. P. Robinson, and R. Vidal, “Oracle based active set algorithm for scalable elastic net subspace clustering,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. IEEE Computer Society, 2016, pp. 3928–3937.
- [24] L. Zhuang, S. Gao, J. Tang, J. Wang, Z. Lin, Y. Ma, and N. Yu, “Constructing a nonnegative low-rank and sparse graph with data-adaptive features,” IEEE Trans. Image Processing, vol. 24, no. 11, pp. 3717–3728, 2015.
- [25] K. Tang, R. Liu, and J. Zhang, “Structure-constrained low-rank representation,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 12, pp. 2167–2179, 2014.
- [26] X. Lu, Y. Wang, and Y. Yuan, “Graph-regularized low-rank representation for destriping of hyperspectral images,” IEEE Transaction on Geoscience and Remote Sensing, vol. 51, no. 7-1, pp. 4009–4018, 2013.
- [27] K. Tang, D. B. Dunson, Z. Su, R. Liu, J. Zhang, and J. Dong, “Subspace segmentation by dense block and sparse representation,” Neural Networks, vol. 75, pp. 66–76, 2016.
- [28] L. Wei, F. Ji, H. Liu, R. Zhou, C. Zhu, and X. Zhang, “Subspace clustering via structured sparse relation representation,” IEEE Transactions on Neural Networks and Learning Systems, vol. PP, 2021.
- [29] Y. Sui, G. Wang, and L. Zhang, “Sparse subspace clustering via low-rank structure propagation,” Pattern Recognition, vol. 95, pp. 261–271, 2019.
- [30] C. Mathieu and W. Schudy, “Correlation clustering with noisy input,” in Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010, Austin, Texas, USA, January 17-19, 2010, M. Charikar, Ed. SIAM, 2010, pp. 712–728.
- [31] C. Swamy, “Correlation clustering: maximizing agreements via semidefinite programming,” in Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2004, New Orleans, Louisiana, USA, January 11-14, 2004, J. I. Munro, Ed. SIAM, 2004, pp. 526–527.
- [32] M. Lee, J. Lee, H. Lee, and N. Kwak, “Membership representation for detecting block-diagonal structure in low-rank or sparse subspace clustering,” in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015. IEEE Computer Society, 2015, pp. 1648–1656.
- [33] Z. Lin, M. Chen, and Y. Ma, “The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices,” CoRR, vol. abs/1009.5055, 2010.
- [34] R. Zass and A. Shashua, “A unifying approach to hard and probabilistic clustering,” in 10th IEEE International Conference on Computer Vision (ICCV 2005), 17-20 October 2005, Beijing, China. IEEE Computer Society, 2005, pp. 294–301.
- [35] ——, “Doubly stochastic normalization for spectral clustering,” in Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006, B. Schölkopf, J. C. Platt, and T. Hofmann, Eds. MIT Press, 2006, pp. 1569–1576.
- [36] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern classification, 2nd Edition. Wiley, 2001.
- [37] J. Mairal, “Optimization with first-order surrogate functions,” in Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, ser. JMLR Workshop and Conference Proceedings, vol. 28. JMLR.org, 2013, pp. 783–791.
- [38] D. Lim, R. Vidal, and B. D. Haeffele, “Doubly stochastic subspace clustering,” CoRR, vol. abs/2011.14859, 2020.
- [39] S. Ma, “Alternating proximal gradient method for convex minimization,” J. Sci. Comput., vol. 68, no. 2, pp. 546–572, 2016.
- [40] L. Wei, X. Wang, A. Wu, R. Zhou, and C. Zhu, “Robust subspace segmentation by self-representation constrained low-rank representation,” Neural Processing Letters, vol. 48, no. 3, pp. 1671–1691, 2018.
- [41] C. You, C.-G. Li, D. P. Robinson, and R. Vidal, “Is an affine constraint needed for affine subspace clustering?” ICCV, 2019.
- [42] R. Tron and R. Vidal, “A benchmark for the comparison of 3-d motion segmentation algorithms,” in 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), 18-23 June 2007, Minneapolis, Minnesota, USA. IEEE Computer Society, 2007.
- [43] F. Samaria and A. Harter, “Parameterisation of a stochastic model for human face identification,” in Proceedings of Second IEEE Workshop on Applications of Computer Vision, WACV 1994, Sarasota, FL, USA, December 5-7, 1994. IEEE, 1994, pp. 138–142.
- [44] A. Martinez and R. Benavente, “The ar face database,” CVC Technical Report 24, 1998.