
2021

The low-rank approximation of fourth-order partial-symmetric and conjugate partial-symmetric tensors

Amina Sabir [1] (amina_sabir1@126.com), Peng-fei Huang [2] (huangpf@mail.nankai.edu.cn), Qing-zhi Yang [2] (qz-yang@nankai.edu.cn)

[1] School of Mathematics and Statistics, Kashi University, Kashi 844008, P.R. China
[2] School of Mathematical Sciences and LPMC, Nankai University, Tianjin 300071, P.R. China
Abstract

We present an orthogonal matrix outer product decomposition for fourth-order conjugate partial-symmetric (CPS) tensors and show that the greedy successive rank-one approximation (SROA) algorithm can recover this decomposition exactly. Based on this matrix decomposition, the CP rank of a CPS tensor can be bounded by the matrix rank, which can be applied to low-rank tensor completion. Additionally, we give a rank-one equivalence property for CPS tensors based on the matrix SVD, which can be applied to the rank-one approximation of CPS tensors.

keywords:
Conjugate partial-symmetric tensor, approximation algorithm, rank-one equivalence property, convex relaxation
MSC Classification: 15A69, 15B57, 90C26, 41A50

1 Introduction

Tensor decomposition and approximation have significant applications in computer vision, data mining, statistical estimation, and so on; we refer to kolda2009tensor for a survey. Moreover, tensors arising from real applications often carry special symmetric structure. For instance, the symmetric outer product decomposition is particularly important in the blind identification of under-determined mixtures comon2008symmetric.

Jiang et al. jiang2016characterizing studied functions in multivariate complex variables that always take real values. They proposed conjugate partial-symmetric (CPS) tensors to characterize such polynomial functions, as a generalization of Hermitian matrices. Various examples of conjugate partial-symmetric tensors are encountered in engineering applications arising from signal processing, electrical engineering, and control theory aubry2013ambiguity; de2007fourth. Ni et al. Ni2019Hermitian and Nie et al. Nie2019Hermitian studied Hermitian tensor decomposition. Motivated by De Lathauwer et al. de2007fourth, we propose a new orthogonal matrix outer product decomposition model for CPS tensors, which exploits the orthogonality of the matrices involved.

It is well known that, unlike the matrix case, the best rank-r (r>1) approximation of a general tensor may not exist, and even when it does, it is NP-hard to compute de2008tensor. The greedy successive rank-one approximation (SROA) algorithm can be applied to compute a rank-r (r>1) approximation of a tensor; however, the theoretical guarantee for obtaining the best rank-r approximation is less developed. Zhang et al. zhang2001rank first proved that the successive algorithm exactly recovers the symmetric and orthogonal decomposition of real symmetrically and orthogonally decomposable tensors. Fu et al. 2018Successive showed that the SROA algorithm can exactly recover unitarily decomposable CPS tensors. We offer a theoretical guarantee of the SROA algorithm for our matrix decomposition model of CPS tensors.

Much multi-dimensional data from real practice forms fourth-order tensors and can be modeled as low-CP-rank tensors. However, it is very difficult to compute the CP rank of a tensor. Jiang et al. Jiang2018Low showed that the CP rank can be bounded by the rank of the square unfolding matrix of the tensor. Following this idea, we study low-rank tensor completion for fourth-order partial-symmetric tensors in particular.

Recently, Jiang et al. 2015Tensor proposed convex relaxations for solving a tensor optimization problem closely related to the best rank-one approximation problem for symmetric tensors. They proved an equivalence property between a rank-one symmetric tensor and its unfolding matrix. Yang et al. Yuning2016Rank studied the rank-one equivalence property for general real tensors. Based on these rank-one equivalence properties, the above-mentioned tensor optimization problem can be cast into a matrix optimization problem, which alleviates the difficulty of solving the tensor problem. In line with this idea, we study the rank-one equivalence property for fourth-order CPS tensors and transform the best rank-one tensor approximation problem into a matrix optimization problem.

In Section 2, we give some notation and definitions. The matrix-based outer product approximation model is proposed in Section 3, together with the successive matrix rank-one approximation (SMROA) algorithm for solving it. We show that the SMROA algorithm can exactly recover the matrix outer product decomposition or approximation of a CPS tensor in Section 4. Section 5 briefly discusses applications of our model. In Section 6, we present the rank-one equivalence property of fourth-order CPS tensors and propose an application based on it. Numerical examples are given in Section 7.

2 Preliminaries

All tensors in this paper are fourth-order. For any complex number $z=a+ib\in C$, $\bar{z}=a-ib$ denotes the conjugate of $z$. "$\circ$" denotes the outer product of matrices, namely $\mathcal{A}=X\circ Y$ means that

\mathcal{A}_{ijkl}=X_{ij}Y_{kl}.

$S^{n}$ denotes the set of $n$ by $n$ symmetric matrices; the entries of these matrices can be complex or real according to the context, without causing ambiguity. The inner product between $\mathcal{A},\ \mathcal{B}\in C^{n^{4}}$ is defined as

\left<\mathcal{A},\mathcal{B}\right>=\sum\limits_{i,j,k,l=1}^{n}\mathcal{A}_{ijkl}\bar{\mathcal{B}}_{ijkl}.
Definition 1.

A fourth-order tensor $\mathcal{A}$ is called symmetric if $\mathcal{A}$ is invariant under all permutations of its indices, i.e.,

\mathcal{A}_{ijkl}=\mathcal{A}_{\pi(ijkl)},\quad i,j,k,l=1,\cdots,n.
Definition 2.

(Ni2019Hermitian) A fourth-order complex tensor $\mathcal{A}\in C^{n_{1}\times n_{2}\times n_{1}\times n_{2}}$ is called a Hermitian tensor if

\mathcal{A}_{i_{1}i_{2}j_{1}j_{2}}=\bar{\mathcal{A}}_{j_{1}j_{2}i_{1}i_{2}}.

Jiang et al. jiang2016characterizing introduced the concept of conjugate partial-symmetric tensors as follows.

Definition 3.

A fourth-order complex tensor $\mathcal{A}\in C^{n^{4}}$ is called conjugate partial-symmetric (CPS) if

\mathcal{A}_{ijkl}=\mathcal{A}_{\pi(ij)\pi(kl)},\qquad\mathcal{A}_{ijkl}=\overline{\mathcal{A}}_{klij},\quad i,j,k,l=1,\cdots,n.
Definition 4.

A fourth-order tensor $\mathcal{A}\in R^{n^{4}}$ is called partial-symmetric if

\mathcal{A}_{ijkl}=\mathcal{A}_{\pi(ij)\pi(kl)}=\mathcal{A}_{\pi(kl)\pi(ij)},\quad i,j,k,l=1,\cdots,n.
Example 1.

(de2007fourth) In the blind source separation problem, the cumulant tensor is computed as

\mathcal{C}=\sum\limits_{r=1}^{R}k_{r}a_{r}\circ\bar{a}_{r}\circ\bar{a}_{r}\circ a_{r}.

By a permutation of the indices, it is in fact a conjugate partial-symmetric tensor.
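To make Example 1 concrete, the following small numerical sketch (our illustration, not from the paper; NumPy with 0-based indexing) builds the cumulant-type tensor and verifies that the permuted tensor $\mathcal{B}_{ijkl}=\mathcal{C}_{kijl}$ satisfies the CPS conditions of Definition 3.

```python
import numpy as np

# Build C = sum_r k_r * a_r ∘ conj(a_r) ∘ conj(a_r) ∘ a_r (Example 1).
rng = np.random.default_rng(0)
n, R = 4, 3
k = rng.standard_normal(R)                                  # real cumulants k_r
a = rng.standard_normal((R, n)) + 1j * rng.standard_normal((R, n))
C = np.einsum('r,ri,rj,rk,rl->ijkl', k, a, a.conj(), a.conj(), a)

# Permute indices: B[i,j,k,l] = C[k,i,j,l], i.e. B = sum_r k_r conj(a_r)∘conj(a_r)∘a_r∘a_r.
B = np.einsum('kijl->ijkl', C)

# CPS checks (Definition 3): partial symmetry and conjugate symmetry.
assert np.allclose(B, B.transpose(1, 0, 2, 3))              # B_ijkl = B_jikl
assert np.allclose(B, B.transpose(0, 1, 3, 2))              # B_ijkl = B_ijlk
assert np.allclose(B, B.transpose(2, 3, 0, 1).conj())       # B_ijkl = conj(B_klij)
```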

Definition 5.

The square unfolding $M(\mathcal{A})$ of a fourth-order tensor $\mathcal{A}\in C^{n^{4}}$ is defined as

M(\mathcal{A})_{(j-1)n+i,(l-1)n+k}=\mathcal{A}_{ijkl}.
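In 0-based NumPy indexing, Definition 5 reads $M[jn+i,\ ln+k]=\mathcal{A}[i,j,k,l]$, which is a transpose followed by a reshape. The following sketch (our helper names) implements the unfolding and the inverse folding of a length-$n^{2}$ vector into an $n\times n$ matrix used throughout this section.

```python
import numpy as np

def square_unfold(A):
    # M[j*n + i, l*n + k] = A[i, j, k, l]  (0-based version of Definition 5)
    n = A.shape[0]
    return A.transpose(1, 0, 3, 2).reshape(n * n, n * n)

def fold_vec(e, n):
    # (E)_{st} = e_{t*n + s}: column-major (Fortran-order) reshaping,
    # the 0-based version of (E_i)_{st} = (e_i)_{(t-1)n+s} used below.
    return e.reshape(n, n, order='F')
```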

3 Matrix outer product approximation model

Jiang et al. Jiang2018Low introduced the notion of M-decomposition for an even-order tensor $\mathcal{A}$, which is exactly the rank-one decomposition of $M(\mathcal{A})$, together with the notion of tensor M-rank.

For each $\mathcal{A}\in R^{n^{4}}$, let $M(\mathcal{A})=U\Sigma V^{T}=\sum\limits_{i=1}^{n^{2}}\sigma_{i}u_{i}\circ v_{i}$ be the SVD of $M(\mathcal{A})$; then $\mathcal{A}$ has the following decomposition form

\mathcal{A}=\sum\limits_{i=1}^{n^{2}}\sigma_{i}U_{i}\circ V_{i}, (1)

where $(U_{i})_{st}=(u_{i})_{(t-1)n+s}$ and $(V_{i})_{st}=(v_{i})_{(t-1)n+s}$ for $i=1,2,\cdots,n^{2}$, with $\left<U_{i},U_{j}\right>=\delta_{ij}$ and $\left<V_{i},V_{j}\right>=\delta_{ij}$ for $i,j=1,2,\cdots,n^{2}$.
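A minimal sketch of decomposition (1) (our illustration, reusing `square_unfold` and `fold_vec` from the previous sketch): take the SVD of $M(\mathcal{A})$ and fold the singular vectors back into matrices.

```python
import numpy as np

def m_decomposition(A):
    # SVD of the square unfolding: M(A) = U diag(sigma) V^T.
    n = A.shape[0]
    U, sigma, Vh = np.linalg.svd(square_unfold(A))
    Us = [fold_vec(U[:, i], n) for i in range(n * n)]
    Vs = [fold_vec(Vh[i, :], n) for i in range(n * n)]   # rows of Vh = columns of V
    return sigma, Us, Vs

# Sanity check of (1): A == sum_i sigma_i * U_i ∘ V_i, with (X∘Y)_ijkl = X_ij Y_kl.
n = 3
A = np.random.default_rng(1).standard_normal((n, n, n, n))
sigma, Us, Vs = m_decomposition(A)
A_rec = sum(s * np.einsum('ij,kl->ijkl', U, V) for s, U, V in zip(sigma, Us, Vs))
assert np.allclose(A, A_rec)
```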

We are particularly interested in tensors with symmetric structure. Analogously to De Lathauwer et al. de2007fourth, we prove that a CPS tensor admits the following matrix-based decomposition.

Theorem 1.

If $\mathcal{A}\in C^{n^{4}}$ is a conjugate partial-symmetric tensor, then it can be decomposed as

\mathcal{A}=\sum\limits_{i=1}^{r}\lambda_{i}E_{i}\circ\overline{E_{i}}, (2)

where $\lambda_{i}\in R$, the $E_{i}\in C^{n\times n}$ are symmetric matrices, and $\left<E_{i},E_{j}\right>=\delta_{ij}$ for $i,j=1,2,\cdots,r$. The decomposition is unique when the $\lambda_{i}$ are distinct.

Proof: Since $\mathcal{A}$ is conjugate partial-symmetric, the unfolding matrix $M(\mathcal{A})$ is Hermitian and can be decomposed as

M(\mathcal{A})=\sum_{i=1}^{r}\lambda_{i}e_{i}e_{i}^{*},

where $\lambda_{i}\in R$ and the $e_{i}\in C^{n^{2}}$, $i=1,\ldots,r$, are mutually orthogonal. Folding each $e_{i}$ into a matrix $E_{i}$ via $(E_{i})_{st}=(e_{i})_{(t-1)n+s}$, the $E_{i}$, $i=1,\ldots,r$, are mutually orthogonal, that is, $\langle E_{i},E_{j}\rangle=\delta_{ij}$. In this case, we have $\mathcal{A}=\sum\limits_{i=1}^{r}\lambda_{i}E_{i}\circ\bar{E}_{i}$.

From the eigen-decomposition of $M(\mathcal{A})$, we have $M(\mathcal{A})e_{\tau}=\lambda_{\tau}e_{\tau}$ for $\tau=1,\ldots,r$, i.e., $\sum_{k,l}a_{ijkl}(e_{\tau})_{(l-1)n+k}=\lambda_{\tau}(e_{\tau})_{(j-1)n+i}$ for all $i,j=1,\ldots,n$. Since $a_{ijkl}=a_{jikl}$ for all $i,j=1,\ldots,n$, it follows that $(e_{\tau})_{(j-1)n+i}=(e_{\tau})_{(i-1)n+j}$; thus $E_{\tau}$ is symmetric. The uniqueness of the decomposition follows naturally from the properties of the eigen-decomposition of a Hermitian matrix.
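The proof is constructive, as the following numerical sketch shows (our illustration): eigen-decompose the Hermitian matrix $M(\mathcal{A})$ and fold the eigenvectors. It reuses `square_unfold` and `fold_vec` from the sketches above and the CPS tensor `B` from the cumulant example.

```python
import numpy as np

def cps_matrix_decomposition(A, tol=1e-10):
    # M(A) is Hermitian for CPS A, so eigh gives real eigenvalues lambda_i
    # and orthonormal eigenvectors, which fold into the matrices E_i of (2).
    n = A.shape[0]
    lam, Q = np.linalg.eigh(square_unfold(A))
    return [(lam[i], fold_vec(Q[:, i], n)) for i in range(n * n)
            if abs(lam[i]) > tol]

pairs = cps_matrix_decomposition(B)
B_rec = sum(l * np.einsum('ij,kl->ijkl', E, E.conj()) for l, E in pairs)
assert np.allclose(B, B_rec)            # B = sum_i lambda_i E_i ∘ conj(E_i)
# When the nonzero lambda_i are distinct, each E_i is also symmetric (Theorem 1):
# all(np.allclose(E, E.T) for _, E in pairs)
```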

Remark 1.

Jiang et al. jiang2016characterizing gave a decomposition theorem for CPS tensors similar to Theorem 1. However, they established that theorem from the viewpoint of polynomial decomposition and did not explore the mutual orthogonality of the matrices in the decomposition model.

Definition 6.

Let $\mathcal{A}\in C^{n^{4}}$ be a CPS tensor. Define

rank_{M}(\mathcal{A})=\min\{r\mid\mathcal{A}=\sum\limits_{i=1}^{r}\lambda_{i}E_{i}\circ\bar{E}_{i},\ \lambda_{i}\in R,\ E_{i}\in S^{n}\}.

The $rank_{M}(\mathcal{A})$ is in fact the strongly symmetric M-rank $rank_{ssm}(\mathcal{A})$ defined by Jiang et al. Jiang2018Low. For a symmetric tensor $\mathcal{A}$, they also proved the equivalence between $rank_{ssm}(\mathcal{A})$ and

rank_{sm}(\mathcal{A})=\min\{r\mid\mathcal{A}=\sum\limits_{i=1}^{r}\lambda_{i}E_{i}\circ E_{i},\ \lambda_{i}\in R,\ E_{i}\in C^{n\times n}\}.

This equivalence also holds for fourth-order CPS tensors.

Theorem 2.

Let $\mathcal{A}\in C^{n^{4}}$ be a CPS tensor. Then $rank_{M}(\mathcal{A})=rank_{sm}(\mathcal{A})$.

Proof: It is obvious that $rank_{M}(\mathcal{A})\geq rank_{sm}(\mathcal{A})$. On the other hand, if $rank_{sm}(\mathcal{A})=r$, then $rank(M(\mathcal{A}))\leq r$. Since $rank(M(\mathcal{A}))=rank_{M}(\mathcal{A})$, we obtain the desired conclusion.

Corollary 3.

Let $\mathcal{A}\in R^{n^{4}}$ be a partial-symmetric tensor. Then one has

\mathcal{A}=\sum\limits_{i=1}^{r}\lambda_{i}E_{i}\circ E_{i}, (3)

where $\lambda_{i}\in R$, the $E_{i}$ are symmetric matrices, and $\left<E_{i},E_{j}\right>=\delta_{ij}$ for $i,j=1,2,\cdots,r$. Moreover, $rank_{M}(\mathcal{A})\leq\frac{n(n+1)}{2}$.

Proof: The first part is obvious according to Theorem 1. Since the matrices in $S^{n}$ form an $\frac{n(n+1)}{2}$-dimensional vector space, we have $rank_{M}(\mathcal{A})\leq\frac{n(n+1)}{2}$.

Fu et al. gave a vector-form rank-one decomposition for CPS tensors based on Theorem 1, as follows.

Theorem 4.

(fu2018decompositions, Theorem 3.2) $\mathcal{A}\in C^{n^{4}}$ is CPS if and only if it has the following partial-symmetric decomposition

\mathcal{A}=\sum\limits_{i}\lambda_{i}\bar{a}_{i}\circ\bar{a}_{i}\circ a_{i}\circ a_{i},

where $\lambda_{i}\in R$ and $a_{i}\in C^{n}$. That is, a CPS tensor can be decomposed as a sum of rank-one CPS tensors.

However, when we restrict the decomposition to the real domain, it no longer holds in general, since $\sum\limits_{i}\lambda_{i}a_{i}\circ a_{i}\circ a_{i}\circ a_{i}$ with $\lambda_{i}\in R$, $a_{i}\in R^{n}$ can only represent symmetric tensors. Thus, an extended rank-one approximation model for partial-symmetric tensors can be proposed based on Corollary 3.

Corollary 5.

Let $\mathcal{A}\in R^{n^{4}}$ be a partial-symmetric tensor. Then it can be decomposed as a sum of simple low-rank partial-symmetric tensors,

\mathcal{A}=\sum\limits_{i}\lambda_{i}(p_{i}\circ p_{i}\circ q_{i}\circ q_{i}+q_{i}\circ q_{i}\circ p_{i}\circ p_{i}). (4)

Proof: From Corollary 3, a partial-symmetric tensor satisfies $\mathcal{A}=\sum\limits_{i=1}^{r}\lambda_{i}E_{i}\circ E_{i}$, where the $E_{i}$ are symmetric. Each $E_{i}$ can therefore be decomposed as $\sum_{j=1}^{r_{i}}\beta_{i}^{j}u_{i}^{j}(u_{i}^{j})^{\top}$, thus

\mathcal{A}=\sum\limits_{i=1}^{r}\lambda_{i}\Big(\sum_{j=1}^{r_{i}}\beta_{i}^{j}u_{i}^{j}(u_{i}^{j})^{\top}\Big)\circ\Big(\sum_{k=1}^{r_{i}}\beta_{i}^{k}u_{i}^{k}(u_{i}^{k})^{\top}\Big)=\sum_{i=1}^{r}\lambda_{i}\Big(\sum_{j=1}^{r_{i}}(\beta_{i}^{j})^{2}\,u_{i}^{j}\circ u_{i}^{j}\circ u_{i}^{j}\circ u_{i}^{j}+\sum_{j<k}\beta_{i}^{j}\beta_{i}^{k}\big(u_{i}^{j}\circ u_{i}^{j}\circ u_{i}^{k}\circ u_{i}^{k}+u_{i}^{k}\circ u_{i}^{k}\circ u_{i}^{j}\circ u_{i}^{j}\big)\Big).

The desired decomposition form follows.
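A small sketch of this construction (our illustration): diagonalize each symmetric factor and collect the paired terms of (4); the diagonal terms $j=k$ carry weight $\lambda_{i}(\beta_{i}^{j})^{2}/2$ since the pair $p\circ p\circ q\circ q+q\circ q\circ p\circ p$ counts $u\circ u\circ u\circ u$ twice when $p=q$.

```python
import numpy as np

def expand_partial_symmetric(lams, Es, tol=1e-10):
    # Input: A = sum_i lams[i] * Es[i] ∘ Es[i] with real symmetric Es[i].
    # Output: terms (w, p, q) with A = sum w * (p∘p∘q∘q + q∘q∘p∘p), as in (4).
    terms = []
    for lam, E in zip(lams, Es):
        beta, U = np.linalg.eigh(E)              # E = sum_j beta_j u_j u_j^T
        idx = np.flatnonzero(np.abs(beta) > tol)
        for pos, j in enumerate(idx):
            for kk in idx[pos:]:
                w = lam * beta[j] * beta[kk]
                if kk == j:
                    w /= 2.0                     # pair counts u^{∘4} twice
                terms.append((w, U[:, j], U[:, kk]))
    return terms

# Reconstruction check:
# A_rec = sum(w * (np.einsum('i,j,k,l->ijkl', p, p, q, q)
#                  + np.einsum('i,j,k,l->ijkl', q, q, p, p)) for w, p, q in terms)
```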

Remark 2.

From the proof of Corollary 5, we can see that if $p_{i}\neq q_{i}$, then $p_{i}^{T}q_{i}=0$. Whether this decomposition form is the most compact one will be part of our future work.

We can discuss the case of skew partial-symmetric tensors in parallel.

Theorem 6.

We call $\mathcal{A}\in R^{n^{4}}$ a skew partial-symmetric tensor if

\mathcal{A}_{ijkl}=\mathcal{A}_{\pi(ij)\pi(kl)}=-\mathcal{A}_{\pi(kl)\pi(ij)},\quad i,j,k,l=1,2,\cdots,n.

Then one has

\mathcal{A}=\sum\limits_{i}\lambda_{i}(U_{i}\circ V_{i}-V_{i}\circ U_{i}),

and

\mathcal{A}=\sum\limits_{i}\lambda_{i}(p_{i}\circ p_{i}\circ q_{i}\circ q_{i}-q_{i}\circ q_{i}\circ p_{i}\circ p_{i}).

Proof: $M(\mathcal{A})$ is skew-symmetric according to the definition of the skew partial-symmetric tensor. Then $M(\mathcal{A})=\sum_{i}\lambda_{i}(u_{i}v_{i}^{T}-v_{i}u_{i}^{T})$. The rest of the proof is similar to that for partial-symmetric tensors, so we omit it.

Based on Theorem 1, we propose the following matrix outer product approximation model for a CPS tensor.

\underset{\lambda_{i}\in R,\ X_{i}\in S^{n}}{\min}\ \Big\|\mathcal{A}-\sum\limits_{i=1}^{r}\lambda_{i}X_{i}\circ\bar{X}_{i}\Big\|^{2}_{F}\quad s.t.\quad\left<X_{i},X_{j}\right>=\delta_{ij}. (5)

The successive rank-one approximation algorithm can also be applied to conjugate partial-symmetric tensors to find matrix outer product decompositions or approximations, as shown in Algorithm 1.

Algorithm 1 Successive Matrix Outer Product Rank-One Approximation (SMROA) Algorithm

Given a CPS tensor $\mathcal{A}\in C^{n^{4}}$. Initialize $\mathcal{A}_{0}=\mathcal{A}$.

for $j=1$ to $r$ do
     $(\lambda_{j},X_{j})\in\underset{\|X\|_{F}=1,X\in S^{n},\lambda\in R}{argmin}\|\mathcal{A}_{j-1}-\lambda X\circ\bar{X}\|_{F}$.
     $\mathcal{A}_{j}=\mathcal{A}_{j-1}-\lambda_{j}X_{j}\circ\bar{X}_{j}$.
end for
Return $\{\lambda_{j},X_{j}\}_{j=1}^{r}$

The main optimization subproblem in Algorithm 1 can be expressed as

(\lambda_{*},X_{*})\in\underset{\|X\|_{F}=1,X\in S^{n},\lambda\in R}{argmin}\|\mathcal{A}-\lambda X\circ\bar{X}\|_{F}^{2}. (6)

The objective function of (6) can be rewritten as

\|\mathcal{A}-\lambda X\circ\bar{X}\|_{F}^{2}=\|\mathcal{A}\|_{F}^{2}+\lambda^{2}-2\lambda\left<\mathcal{A},X\circ\bar{X}\right>,

from which we can derive that problem (6) is equivalent to

X_{*}\in\underset{\|X\|_{F}=1,X\in S^{n}}{argmax}\left|\left<\mathcal{A},X\circ\bar{X}\right>\right|, (7)

with $\lambda_{*}=\left<\mathcal{A},X_{*}\circ\bar{X}_{*}\right>$. We can solve (7) by transforming it into a matrix eigenproblem as follows:

x_{*}\in\underset{\|x\|=1,x\in C^{n^{2}}}{argmax}\left|x^{*}M(\mathcal{A})x\right|. (8)
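Putting the pieces together, here is a minimal sketch of Algorithm 1 driven by the eigenproblem (8) (our illustration, reusing `square_unfold` and `fold_vec` from earlier sketches): at each step the subproblem is solved by an eigenvector of the current Hermitian residual matrix whose eigenvalue has the largest magnitude.

```python
import numpy as np

def smroa(A, r):
    # SMROA on a CPS tensor A: each rank-one step solves (8) on M(A_{j-1}).
    n = A.shape[0]
    M = square_unfold(A)                     # Hermitian for CPS A
    lams, Xs = [], []
    for _ in range(r):
        w, Q = np.linalg.eigh(M)
        j = int(np.argmax(np.abs(w)))        # eigenvalue of largest magnitude
        lam, x = w[j], Q[:, j]
        M = M - lam * np.outer(x, x.conj())  # deflation: A_j = A_{j-1} - lam X∘conj(X)
        lams.append(lam)
        Xs.append(fold_vec(x, n))
    return lams, Xs
```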
Remark 3.

Zhang et al. zhang proved that if $\mathcal{A}\in R^{n^{4}}$ is symmetric, then

\underset{\|x_{i}\|=1,\ i=1,2,3,4}{\min}\|\mathcal{A}-\lambda x_{1}\circ x_{2}\circ x_{3}\circ x_{4}\|_{F}=\underset{\|x\|=1}{\min}\|\mathcal{A}-\lambda x\circ x\circ x\circ x\|_{F};

and if $\mathcal{A}$ is symmetric with respect to the first two and the last two modes respectively, then

\underset{\|x_{i}\|=1,\ i=1,2,3,4}{\min}\|\mathcal{A}-\lambda x_{1}\circ x_{2}\circ x_{3}\circ x_{4}\|_{F}=\underset{\|x\|=\|y\|=1}{\min}\|\mathcal{A}-\lambda x\circ x\circ y\circ y\|_{F}.

It is obvious that for a partial-symmetric tensor we also have

\underset{\|X_{i}\|_{F}=1,X_{i}\in R^{n\times n},\lambda\in R}{\min}\|\mathcal{A}-\lambda X_{1}\circ X_{2}\|_{F}=\underset{\|X\|_{F}=1,X\in S^{n},\lambda\in R}{\min}\|\mathcal{A}-\lambda X\circ X\|_{F}.
Remark 4.

It is well known that (6) is equivalent to the nearest Kronecker product problem golub2013matrix below:

(\lambda_{*},X_{*})\in\underset{\|X\|_{F}=1,X\in S^{n},\lambda\in R}{argmin}\|A-\lambda X\otimes\bar{X}\|_{F}^{2},

where $A_{(i-1)n+k,(j-1)n+l}=\mathcal{A}_{ijkl}$ and "$\otimes$" denotes the Kronecker product of matrices.
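For completeness, a sketch of this rearrangement (our helper name, 0-based indexing):

```python
import numpy as np

def kron_rearrange(A):
    # A_mat[i*n + k, j*n + l] = A[i, j, k, l], so that for a rank-one term
    # kron_rearrange(X ∘ conj(X)) == np.kron(X, X.conj()).
    n = A.shape[0]
    return A.transpose(0, 2, 1, 3).reshape(n * n, n * n)
```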

4 Exact Recovery for CPS tensors

In this section, we give a theoretical analysis of exact recovery of CPS tensors by the SMROA algorithm.

Theorem 7.

Let $\mathcal{A}$ be a CPS tensor with $rank_{M}(\mathcal{A})=r$, that is,

\mathcal{A}=\sum\limits_{i=1}^{r}\lambda_{i}E_{i}\circ\bar{E}_{i}.

If the $\lambda_{i}$ are distinct, then the SMROA algorithm obtains the exact decomposition of $\mathcal{A}$ after $r$ iterations.

We first state the following lemma before proving the above theorem.

Lemma 8.

Let $\mathcal{A}$ be a CPS tensor with $rank_{M}(\mathcal{A})=r$, that is,

\mathcal{A}=\sum\limits_{i=1}^{r}\lambda_{i}E_{i}\circ\bar{E}_{i}.

where the $\lambda_{i}$ are distinct. Suppose

\hat{X}_{1}\in\underset{X\in S^{n},\|X\|_{F}=1}{argmax}\left|\left<\mathcal{A},X\circ\bar{X}\right>\right|,\quad\hat{\lambda}_{1}=\left<\mathcal{A},\hat{X}_{1}\circ\bar{\hat{X}}_{1}\right>.

Then there exists $j\in\{1,2,\cdots,r\}$ such that

\hat{\lambda}_{1}=\lambda_{j},\quad\hat{X}_{1}=E_{j}.

Proof: According to Theorem 1, the $E_{i}$ are mutually orthogonal; thus $\{E_{1},\cdots,E_{r}\}$ is a subset of an orthonormal basis $\{E_{1},\cdots,E_{\frac{n(n+1)}{2}}\}$ of $S^{n}$, and $0\neq\lambda_{i}\in R$. Let $\hat{X}_{1}=\sum_{i}x_{i}E_{i}$, where $x_{i}=\left<\hat{X}_{1},E_{i}\right>$ for $i=1,2,\cdots,\frac{n(n+1)}{2}$. Since $\|\hat{X}_{1}\|_{F}=1$, we have $\sum_{i}|x_{i}|^{2}=1$. Reorder the indices such that

|\lambda_{1}|\geq|\lambda_{2}|\geq\cdots\geq|\lambda_{r}|. (9)

Then we obtain

\left|\left<\mathcal{A},\hat{X}_{1}\circ\bar{\hat{X}}_{1}\right>\right|=\left|\sum_{i=1}^{r}\lambda_{i}|x_{i}|^{2}\right|\leq|\lambda_{1}|.

On the other hand, the optimality of $\hat{X}_{1}$ leads to

\left|\left<\mathcal{A},\hat{X}_{1}\circ\bar{\hat{X}}_{1}\right>\right|\geq\left|\left<\mathcal{A},E_{1}\circ\bar{E}_{1}\right>\right|=|\lambda_{1}|.

Hence $|\lambda_{1}|\leq|\left<\mathcal{A},\hat{X}_{1}\circ\bar{\hat{X}}_{1}\right>|=|\hat{\lambda}_{1}|\leq|\lambda_{1}|$, so $|\hat{\lambda}_{1}|=|\lambda_{1}|$ and $|x_{1}|=1$. Therefore $\hat{X}_{1}=e^{i\theta}E_{1}$ for some $\theta\in[0,2\pi)$, and

\hat{\lambda}_{1}=\left<\mathcal{A},\hat{X}_{1}\circ\bar{\hat{X}}_{1}\right>=\left<\mathcal{A},E_{1}\circ\bar{E}_{1}\right>=\lambda_{1}.

Taking $x_{1}=1$, we have $\hat{X}_{1}=E_{1}$.

Now we prove Theorem 7.

Proof: By Lemma 8, there exists $j\in\{1,2,\cdots,r\}$ such that $\hat{X}_{1}=E_{j}$, $\hat{\lambda}_{1}=\lambda_{j}$. Let

\mathcal{A}_{1}=\mathcal{A}-\hat{\lambda}_{1}\hat{X}_{1}\circ\bar{\hat{X}}_{1}=\sum_{i\neq j}\lambda_{i}E_{i}\circ\bar{E}_{i},

and

\hat{X}_{2}\in\underset{X\in S^{n},\|X\|_{F}=1}{argmax}\left|\left<\mathcal{A}_{1},X\circ\bar{X}\right>\right|,\quad\hat{\lambda}_{2}=\left<\mathcal{A}_{1},\hat{X}_{2}\circ\bar{\hat{X}}_{2}\right>.

By a proof similar to that of Lemma 8, there exists $k\in\{1,2,\cdots,r\}\backslash\{j\}$ such that $\hat{\lambda}_{2}=\lambda_{k}$, $\hat{X}_{2}=E_{k}$. Repeating this argument, we can induce a permutation $\pi$ on $\{1,2,\cdots,r\}$ such that

\hat{\lambda}_{j}=\lambda_{\pi(j)},\quad\hat{X}_{j}=E_{\pi(j)},\quad j=1,2,\cdots,r.

Corollary 9.

Let $\mathcal{A}=\sum_{i=1}^{r}\lambda_{i}E_{i}\circ\bar{E}_{i}$, where $\{E_{1},\cdots,E_{r}\}$ is a subset of an orthonormal basis $\{E_{1},\cdots,E_{\frac{n(n+1)}{2}}\}$ of $S^{n}$, and the $0\neq\lambda_{i}\in R$ are distinct and ordered as $|\lambda_{1}|\geq|\lambda_{2}|\geq\cdots\geq|\lambda_{r}|$. Suppose $\{(\hat{\lambda}_{i},\hat{X}_{i})\}_{i=1}^{r}$ is the output of the SMROA algorithm for input $\mathcal{A}$. Then $\hat{\lambda}_{i}=\lambda_{i}$ and $\hat{X}_{i}=E_{i}$ for $i=1,2,\cdots,r$. In particular, if all $\lambda_{i}>0$, we have $\hat{\lambda}_{1}>\hat{\lambda}_{2}>\cdots>\hat{\lambda}_{r}$; if all $\lambda_{i}<0$, we have $\hat{\lambda}_{1}<\hat{\lambda}_{2}<\cdots<\hat{\lambda}_{r}$.

This corollary directly follows from the proof of Lemma 8.

Remark 5.