
L-structure least squares solutions of reduced biquaternion matrix equations with applications

Sk. Safique Ahmad (Department of Mathematics, Indian Institute of Technology Indore, Simrol, Indore-452020, Madhya Pradesh, India; email: safique@iiti.ac.in, safique@gmail.com) and Neha Bhadala (Research Scholar, Department of Mathematics, IIT Indore; research work funded by PMRF (Prime Minister's Research Fellowship); email: phd1901141004@iiti.ac.in, bhadalaneha@gmail.com)
Abstract

This paper presents a framework for computing the structure-constrained least squares solutions to the generalized reduced biquaternion matrix equations (RBMEs). The investigation focuses on three different matrix equations: a linear matrix equation with multiple unknown L-structures, a linear matrix equation with one unknown L-structure, and the general coupled linear matrix equations with one unknown L-structure. Our approach leverages the complex representation of reduced biquaternion matrices. To showcase the versatility of the developed framework, we utilize it to find structure-constrained solutions for complex and real matrix equations, broadening its applicability to various inverse problems. Specifically, we explore its utility in addressing partially described inverse eigenvalue problems (PDIEPs) and generalized PDIEPs. Our study concludes with numerical examples.

Keywords. Kronecker product, least squares problem, matrix equation, inverse problem, reduced biquaternion matrix, Moore-Penrose generalized inverse.

AMS subject classification. 15A22, 15B05, 15B33, 65F18, 65F45.

1 Introduction

In matrix theory, linear matrix equations play a crucial role. They appear widely in control theory, inverse problems, and linear optimal control [6, 8, 10]. Owing to their widespread application in various fields, one encounters the problem of finding approximate solutions for linear matrix equations. There are many different forms of matrix equations. Some simple examples of these are:

$$AX=B,\qquad AXB+CX^{T}D=E,\qquad AXB+CYD=E.$$

There have been several studies on real and complex matrix equations. See [3, 7, 11, 13] and references therein. We now turn our attention to quaternion and reduced biquaternion matrix equations.

In 1843, Hamilton first introduced the notion of quaternions. Several aspects of quantum physics, image processing, and signal processing rely on quaternion matrix equations and their general solutions [1, 19, 22]. Quaternion matrix equations have been the subject of several studies in the literature (for example, [12, 18, 20]). Quaternion multiplication is not commutative, which limits its use in many vital applications. Following Hamilton's discovery of quaternions, Segre introduced reduced biquaternions, which are commutative in nature. Reduced biquaternions are also known as commutative quaternions. Commutative multiplication simplifies numerous operations. For instance, Pei et al. [15, 16] demonstrated how reduced biquaternions outperform conventional quaternions in image and digital signal processing. Additionally, reduced biquaternions allow us to treat three- or four-dimensional vectors as one entity, facilitating efficient information processing. Due to this, it becomes imperative to learn how to solve the matrix equations arising from commutative quaternionic theory. Some studies in the literature have focused on reduced biquaternion matrix equations (RBMEs). For instance, Yuan et al. [17] discussed the Hermitian solution of the RBME $(AXB,CXD)=(E,G)$. Zhang et al. [21] investigated the least squares solutions of the matrix equations $AXC=B$ and $AX=B$. This paper focuses on the least squares structured solutions of generalized RBMEs. Our framework encompasses all structures in which any set of linear relationships between the matrix entries is allowed; a class of such matrices is called a reduced biquaternion L-structure. Surprisingly, the least squares Toeplitz, symmetric Toeplitz, Hankel, and circulant solutions of RBMEs have not been discussed in the literature, despite their significance in scientific computing, inverse problems, image restoration, and signal processing [2, 14, 23]. Given the above context, our interest lies in least squares L-structure solutions for generalized RBMEs, with specific attention to reduced biquaternion Toeplitz, symmetric Toeplitz, Hankel, and circulant solutions. This manuscript addresses the following generalized matrix equations:

$$\sum_{l=1}^{r}A_{l}X_{l}B_{l}=E, \qquad (1)$$
$$\sum_{l=1}^{r}A_{l}XB_{l}+\sum_{p=1}^{q}C_{p}X^{T}D_{p}=E, \qquad (2)$$
$$(A_{1}XB_{1},A_{2}XB_{2},\ldots,A_{r}XB_{r})=(E_{1},E_{2},\ldots,E_{r}). \qquad (3)$$

Moreover, this paper elucidates a range of applications for the proposed framework in solving inverse eigenvalue problems. Several applications of the inverse eigenvalue problem, which involves reconstructing matrices from prescribed spectral data, deal with structured matrices. When the spectral data contain only partial information about the eigenpairs, this kind of inverse problem is called a partially described inverse eigenvalue problem (PDIEP). In both PDIEP and generalized PDIEP, two pivotal questions arise: the theory of solvability and the numerical solution methodology (see textbook [4] and references therein). In the context of solvability, one major challenge has been identifying necessary or sufficient conditions for a PDIEP and a generalized PDIEP to be solvable. On the other hand, numerical solution methods aim to develop procedures that construct a matrix in a numerically stable manner when the given spectral data is feasible. In this paper, we have successfully developed a numerical solution methodology for both PDIEP and generalized PDIEP by employing our developed framework. Our attention is primarily directed toward two structures, namely Hankel and symmetric Toeplitz. In summary, the primary applications discussed in this article encompass:

  • We utilize our developed framework to determine structure-constrained solutions for complex and real matrix equations. This is possible because these matrix equations are special cases of RBMEs, which enables us to tackle various inverse eigenvalue problems. Using our framework, we offer a solution to PDIEP [4, Problems 5.1 and 5.2], which involves constructing a structured matrix from an eigenpair set.

  • We provide a framework for solving the generalized PDIEP for symmetric Toeplitz and Hankel structures.

The manuscript is organized as follows. Section 2 presents the notation and preliminary results. In Section 3, we first define the concept of reduced biquaternion L-structures and examine some reduced biquaternion L-structure matrices. Next, we present several useful lemmas. Section 4 outlines the general framework for solving the RBMEs. Subsection 4.1 delves into solving the RBME with multiple unknown L-structures. In Section 5, we apply the developed framework from Section 4 to specific cases and explore their practical implications. Finally, Section 6 provides the numerical verification of our developed results.

2 Notation and preliminaries

2.1 Notation

Throughout this paper, we denote by $\mathbb{Q}_{\mathbb{R}}$ the set of all reduced biquaternions. ${\mathbb{R}}^{m\times n}$, ${\mathbb{C}}^{m\times n}$, and $\mathbb{Q}_{\mathbb{R}}^{m\times n}$ denote the sets of all $m\times n$ real, complex, and reduced biquaternion matrices, respectively. Denote by $\mathbb{T}{\mathbb{R}}^{n\times n}$, $\mathbb{S}\mathbb{T}{\mathbb{R}}^{n\times n}$, $\mathbb{H}{\mathbb{R}}^{n\times n}$, ${\mathbb{C}}{\mathbb{R}}^{n\times n}$, $\mathbb{H}{\mathbb{C}}^{n\times n}$, $\mathbb{T}\mathbb{Q}_{\mathbb{R}}^{n\times n}$, $\mathbb{S}\mathbb{T}\mathbb{Q}_{\mathbb{R}}^{n\times n}$, $\mathbb{H}\mathbb{Q}_{\mathbb{R}}^{n\times n}$, and ${\mathbb{C}}\mathbb{Q}_{\mathbb{R}}^{n\times n}$ the sets of all $n\times n$ real Toeplitz, real symmetric Toeplitz, real Hankel, real circulant, complex Hankel, reduced biquaternion Toeplitz, reduced biquaternion symmetric Toeplitz, reduced biquaternion Hankel, and reduced biquaternion circulant matrices, respectively. For $A\in{\mathbb{R}}^{m\times n}$, the notations $A^{+}$ and $A^{T}$ denote the Moore-Penrose generalized inverse and the transpose of $A$. For $A\in{\mathbb{C}}^{m\times n}$, the notations $\Re(A)$ and $\Im(A)$ stand for the real and imaginary parts of $A$, respectively. A diagonal matrix $A=(a_{ij})\in\mathbb{Q}_{\mathbb{R}}^{n\times n}$ is written as $\mathrm{diag}(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})$, where $a_{ij}=0$ whenever $i\neq j$ and $a_{ii}=\alpha_{i}$ for $i=1,\ldots,n$. $I_{n}$ represents the identity matrix of order $n$. For $i=1,2,\ldots,n$, $e_{i}$ denotes the $i$th column of $I_{n}$. $0$ denotes the zero matrix of suitable size. $A\otimes B=(a_{ij}B)$ represents the Kronecker product of matrices $A$ and $B$. For a matrix $A=(a_{ij})\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$, $\mathrm{vec}(A)=\left[a_{1},a_{2},\ldots,a_{n}\right]^{T}$, where $a_{j}=\left[a_{1j},a_{2j},\ldots,a_{mj}\right]$ for $j=1,2,\ldots,n$. $\left\lVert\cdot\right\rVert_{F}$ represents the Frobenius norm, and $\left\lVert\cdot\right\rVert_{2}$ represents the $2$-norm (Euclidean norm). For $A\in\mathbb{Q}_{\mathbb{R}}^{m\times n_{1}}$ and $B\in\mathbb{Q}_{\mathbb{R}}^{m\times n_{2}}$, the notation $\left[A,B\right]$ represents the matrix $\begin{bmatrix}A&B\end{bmatrix}\in\mathbb{Q}_{\mathbb{R}}^{m\times(n_{1}+n_{2})}$.

The Matlab commands rand(m,n) and ones(m,n) return an $m\times n$ random matrix and an $m\times n$ matrix of all ones, respectively. Let $u$ and $v$ be vectors of size $n$. The Matlab command toeplitz(u,v) returns a Toeplitz matrix with $u$ as its first column and $v$ as its first row; toeplitz(u) returns a symmetric Toeplitz matrix with $u$ as its first column and first row; and hankel(u,v) creates a Hankel matrix with $u$ and $v$ as its first column and last row, respectively. We use the following abbreviations throughout this paper: RBME for reduced biquaternion matrix equation and PDIEP for partially described inverse eigenvalue problem.
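For readers who prefer Python, SciPy provides direct analogues of these commands; the short sketch below is our own illustration (not part of the paper) that mirrors them and checks the defining Toeplitz property:

```python
import numpy as np
from scipy.linalg import toeplitz, hankel

n = 4
u = np.arange(1, n + 1)        # first column
v = np.array([1, 5, 6, 7])     # first row (v[0] should agree with u[0])

T = toeplitz(u, v)   # Toeplitz: u is the first column, v the first row
S = toeplitz(u)      # symmetric Toeplitz: u is the first column and first row
H = hankel(u, v)     # Hankel: u is the first column, v the last row

# X is Toeplitz iff X[i, j] depends only on j - i (constant diagonals)
assert all(T[i, j] == T[i + 1, j + 1]
           for i in range(n - 1) for j in range(n - 1))
```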

2.2 Preliminaries

A reduced biquaternion can be expressed uniquely as $r=r_{0}+r_{1}\textit{i}+r_{2}\textit{j}+r_{3}\textit{k}$, where $r_{i}\in{\mathbb{R}}$ for $i=0,1,2,3$, and $\textit{i}^{2}=\textit{k}^{2}=-1$, $\textit{j}^{2}=1$, $\textit{i}\textit{j}=\textit{j}\textit{i}=\textit{k}$, $\textit{j}\textit{k}=\textit{k}\textit{j}=\textit{i}$, $\textit{k}\textit{i}=\textit{i}\textit{k}=-\textit{j}$. It can also be expressed as $r=d_{1}+d_{2}\textit{j}$, where $d_{1}=r_{0}+r_{1}\textit{i}$ and $d_{2}=r_{2}+r_{3}\textit{i}$ are complex numbers. The conjugate of $r$, denoted $\bar{r}$, is given by $\bar{r}=r_{0}-r_{1}\textit{i}-r_{2}\textit{j}-r_{3}\textit{k}$, and the norm of $r$ is $\left\lVert r\right\rVert=\sqrt{r_{0}^{2}+r_{1}^{2}+r_{2}^{2}+r_{3}^{2}}$. Note that, in general,

$$\|r\|^{2}\neq r\bar{r}.$$
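This inequality is easy to check concretely. Writing $r=d_{1}+d_{2}\textit{j}$ and using $\textit{j}^{2}=1$, the commutative product is $(d_{1}+d_{2}\textit{j})(e_{1}+e_{2}\textit{j})=(d_{1}e_{1}+d_{2}e_{2})+(d_{1}e_{2}+d_{2}e_{1})\textit{j}$, and the conjugate is $\bar{r}=\bar{d_{1}}-d_{2}\textit{j}$. A minimal Python sketch (our illustration; the complex-pair encoding below is an assumption of this note, not notation from the paper):

```python
import numpy as np

def rb_mul(p, q):
    """Product of reduced biquaternions stored as complex pairs p = (d1, d2),
    meaning p = d1 + d2*j; uses j**2 = 1."""
    d1, d2 = p
    e1, e2 = q
    return (d1 * e1 + d2 * e2, d1 * e2 + d2 * e1)

def rb_conj(p):
    d1, d2 = p
    return (np.conj(d1), -d2)   # conj(r) = conj(d1) - d2*j

def rb_norm(p):
    d1, d2 = p
    return np.sqrt(abs(d1) ** 2 + abs(d2) ** 2)

r = (1 + 2j, 3 + 4j)            # r = 1 + 2i + 3j + 4k
s = (5 - 1j, 2 + 0j)

assert rb_mul(r, s) == rb_mul(s, r)   # multiplication is commutative
print(rb_norm(r) ** 2)                 # approx 30
print(rb_mul(r, rb_conj(r)))           # (12-24j, 16-12j): not ||r||^2, as claimed
```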

In addition, we identify $r=d_{1}+d_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}$ with the complex vector $\Psi_{r}=[d_{1},d_{2}]\in{\mathbb{C}}^{1\times 2}$. Similarly, we identify any reduced biquaternion matrix $Z=Z_{1}+Z_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$, where $Z_{1},Z_{2}\in{\mathbb{C}}^{m\times n}$, with the complex matrix $\Psi_{Z}=\left[Z_{1},Z_{2}\right]\in{\mathbb{C}}^{m\times 2n}$. The Frobenius norm of $Z=(z_{ij})\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$ is defined as follows:

$$\left\lVert Z\right\rVert_{F}=\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}\left\lVert z_{ij}\right\rVert^{2}}.$$

We have

$$\left\lVert Z\right\rVert_{F}=\left\lVert\Psi_{Z}\right\rVert_{F}=\sqrt{\left\lVert Z_{1}\right\rVert_{F}^{2}+\left\lVert Z_{2}\right\rVert_{F}^{2}}=\sqrt{\left\lVert\Re(Z_{1})\right\rVert_{F}^{2}+\left\lVert\Im(Z_{1})\right\rVert_{F}^{2}+\left\lVert\Re(Z_{2})\right\rVert_{F}^{2}+\left\lVert\Im(Z_{2})\right\rVert_{F}^{2}}.$$

The complex representation of a matrix $Z=Z_{1}+Z_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$, denoted $h(Z)$, is defined as follows:

$$h(Z)=\begin{bmatrix}Z_{1}&Z_{2}\\ Z_{2}&Z_{1}\end{bmatrix}.$$

For $Y\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$ and $Z\in\mathbb{Q}_{\mathbb{R}}^{n\times p}$, we have

$$h(YZ)=h(Y)h(Z). \qquad (4)$$

For $q=q_{1}+q_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}$ and $Y=Y_{1}+Y_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$, $\Psi_{qY}$ can be expressed as

$$\Psi_{qY}=\left[q_{1}Y_{1}+q_{2}Y_{2},q_{1}Y_{2}+q_{2}Y_{1}\right]=\left[q_{1},q_{2}\right]\begin{bmatrix}Y_{1}&Y_{2}\\ Y_{2}&Y_{1}\end{bmatrix}=\Psi_{q}h(Y).$$

For $\alpha\in{\mathbb{R}}$, $Y=Y_{1}+Y_{2}\textit{j}$, and $Z=Z_{1}+Z_{2}\textit{j}$, we have $\Psi_{\alpha Y}=\alpha\Psi_{Y}$, $\Psi_{Y+Z}=\Psi_{Y}+\Psi_{Z}$, and

$$\Psi_{YZ}=\left[Y_{1}Z_{1}+Y_{2}Z_{2},Y_{1}Z_{2}+Y_{2}Z_{1}\right]=\left[Y_{1},Y_{2}\right]\begin{bmatrix}Z_{1}&Z_{2}\\ Z_{2}&Z_{1}\end{bmatrix}=\Psi_{Y}h(Z). \qquad (5)$$

Furthermore, the operator $\mathrm{vec}(\cdot)$ is linear: $\mathrm{vec}(Y+Z)=\mathrm{vec}(Y)+\mathrm{vec}(Z)$ and $\mathrm{vec}(\alpha Y)=\alpha\,\mathrm{vec}(Y)$. For $Z=Z_{1}+Z_{2}\textit{j}$, we have

$$\mathrm{vec}(Z)=\mathrm{vec}(Z_{1})+\mathrm{vec}(Z_{2})\textit{j}\quad\mbox{and}\quad\mathrm{vec}(\Psi_{Z})=\begin{bmatrix}\mathrm{vec}(Z_{1})\\ \mathrm{vec}(Z_{2})\end{bmatrix}. \qquad (6)$$

Let $Z=Z_{1}+Z_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$ and denote $\overrightarrow{Z}=\left[\Re(Z_{1}),\Im(Z_{1}),\Re(Z_{2}),\Im(Z_{2})\right]\in{\mathbb{R}}^{m\times 4n}$. We have

$$\mathrm{vec}(\overrightarrow{Z})=\begin{bmatrix}\mathrm{vec}(\Re(Z_{1}))\\ \mathrm{vec}(\Im(Z_{1}))\\ \mathrm{vec}(\Re(Z_{2}))\\ \mathrm{vec}(\Im(Z_{2}))\end{bmatrix}.$$

Clearly,

$$\left\lVert Z\right\rVert_{F}=\left\lVert\Psi_{Z}\right\rVert_{F}=\left\lVert\mathrm{vec}(\Psi_{Z})\right\rVert_{F}=\left\lVert\mathrm{vec}(\overrightarrow{Z})\right\rVert_{F}. \qquad (7)$$
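The identities (4), (5), and (7) can be verified numerically. A NumPy sketch (our own illustration, using the complex-pair encoding of $Z=Z_{1}+Z_{2}\textit{j}$):

```python
import numpy as np

rng = np.random.default_rng(0)

def cmat(m, n):
    return rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

def h(Z1, Z2):
    """Complex representation h(Z) of Z = Z1 + Z2*j."""
    return np.block([[Z1, Z2], [Z2, Z1]])

def rb_matmul(Y1, Y2, Z1, Z2):
    """Product (Y1 + Y2*j)(Z1 + Z2*j) as a complex pair."""
    return Y1 @ Z1 + Y2 @ Z2, Y1 @ Z2 + Y2 @ Z1

m, n, p = 3, 4, 5
Y1, Y2 = cmat(m, n), cmat(m, n)
Z1, Z2 = cmat(n, p), cmat(n, p)

# (4): h(YZ) = h(Y) h(Z)
P1, P2 = rb_matmul(Y1, Y2, Z1, Z2)
assert np.allclose(h(P1, P2), h(Y1, Y2) @ h(Z1, Z2))

# (5): Psi_{YZ} = Psi_Y h(Z), with Psi_Y = [Y1, Y2]
assert np.allclose(np.hstack([P1, P2]), np.hstack([Y1, Y2]) @ h(Z1, Z2))

# (7): ||Z||_F equals the Frobenius norm of Psi_Z
normZ = np.sqrt(np.linalg.norm(Z1, "fro") ** 2 + np.linalg.norm(Z2, "fro") ** 2)
assert np.isclose(normZ, np.linalg.norm(np.hstack([Z1, Z2]), "fro"))
```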

3 Reduced biquaternion L-structure matrices

This section aims to define the concept of reduced biquaternion L-structure and explore some specific examples of this class of matrices. A reduced biquaternion L-structure refers to the set of all reduced biquaternion matrices of a given order whose entries adhere to specific linear constraints. A notable example of this class includes unstructured matrices, where no linear restrictions are placed on the matrix entries. The subsequent definition offers a formalized explanation of this concept.

Definition 3.1.

Let $\Omega$ be a subspace of $\mathbb{Q}_{\mathbb{R}}^{mn}$. The subset of reduced biquaternion matrices of order $m\times n$ given by

$$L(m,n)=\{X\in\mathbb{Q}_{\mathbb{R}}^{m\times n}\;|\;\mathrm{vec}(X)\in\Omega\} \qquad (8)$$

is known as the reduced biquaternion L-structure.

Remark 3.2.

$\mathbb{Q}_{\mathbb{R}}$ and $\mathbb{Q}_{\mathbb{R}}^{n}$ are vector spaces over ${\mathbb{R}}$ with dimensions $4$ and $4n$, respectively.

To better comprehend the above definition, let us consider the following examples.

Example 3.3.

Let $A=\begin{bmatrix}0&0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&1&0&0\\ 0&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&1&0\\ 0&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0&0\end{bmatrix}$ and $\Omega_{1}=\{v\in\mathbb{Q}_{\mathbb{R}}^{9\times 1}\;|\;Av=0\}$. Clearly, $\Omega_{1}$ is a subspace of $\mathbb{Q}_{\mathbb{R}}^{9\times 1}$. The resulting reduced biquaternion L-structure is as follows:

$$L(3,3)=\{X\in\mathbb{Q}_{\mathbb{R}}^{3\times 3}\;|\;\mathrm{vec}(X)\in\Omega_{1}\}.$$

The subset above represents the class of diagonal matrices of size $3\times 3$. In this case, six linear restrictions are imposed on the entries of the matrix $X=(x_{ij})\in\mathbb{Q}_{\mathbb{R}}^{3\times 3}$, given by $x_{ij}=0$ for $i\neq j$. Hence, the collection of all reduced biquaternion diagonal matrices of a given order falls under the class of reduced biquaternion L-structures.

Other reduced biquaternion L-structure examples include the set of all reduced biquaternion Toeplitz, symmetric Toeplitz, Hankel, circulant, lower triangular, and upper triangular matrices of a given order. These classes of matrices consider only equality relationships between the matrix entries. Here is an example of a reduced biquaternion L-structure with some linear relationships between the matrix entries.

Example 3.4.

Let $B=\begin{bmatrix}1&-1&1&0&0&0&0&0&0\\ 0&0&0&1&1&-1&0&0&0\\ 0&0&0&0&0&0&1&-1&1\end{bmatrix}$ and $\Omega_{2}=\{v\in\mathbb{Q}_{\mathbb{R}}^{9\times 1}\;|\;Bv=0\}$. Clearly, $\Omega_{2}$ is a subspace of $\mathbb{Q}_{\mathbb{R}}^{9\times 1}$. The resulting reduced biquaternion L-structure is as follows:

$$L(3,3)=\{X\in\mathbb{Q}_{\mathbb{R}}^{3\times 3}\;|\;\mathrm{vec}(X)\in\Omega_{2}\}.$$

The above subset represents the collection of all reduced biquaternion matrices $X=(x_{ij})\in\mathbb{Q}_{\mathbb{R}}^{3\times 3}$ with the following linear restrictions imposed on the entries of $X$: $x_{11}+x_{31}=x_{21}$, $x_{12}+x_{22}=x_{32}$, and $x_{13}+x_{33}=x_{23}$.

The remainder of this section focuses on some reduced biquaternion L-structure matrices that frequently appear in practical applications. Our primary focus lies on reduced biquaternion Toeplitz, symmetric Toeplitz, Hankel, and circulant matrices. To commence our exploration, we first examine the vec-structure of some real structured matrices.

Definition 3.5.

A matrix $X\in{\mathbb{R}}^{n\times n}$ is Toeplitz if it has the following form:

$$X=\begin{bmatrix}x_{0}&x_{1}&x_{2}&\cdots&\cdots&x_{n-1}\\ x_{-1}&x_{0}&x_{1}&\ddots&&\vdots\\ x_{-2}&x_{-1}&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&x_{1}&x_{2}\\ \vdots&&\ddots&x_{-1}&x_{0}&x_{1}\\ x_{-n+1}&\cdots&\cdots&x_{-2}&x_{-1}&x_{0}\end{bmatrix}.$$

For $X\in{\mathbb{R}}^{n\times n}$, denote by $\mathrm{vec}_{T}(X)$ the following vector:

$$\mathrm{vec}_{T}(X)=\left[x_{-n+1},x_{-n+2},\ldots,x_{-1},x_{0},x_{1},\ldots,x_{n-1}\right]^{T}\in{\mathbb{R}}^{2n-1}. \qquad (9)$$
Definition 3.6.

A matrix $X\in{\mathbb{R}}^{n\times n}$ is symmetric Toeplitz if it has the following form:

$$X=\begin{bmatrix}x_{0}&x_{1}&x_{2}&\cdots&\cdots&x_{n-1}\\ x_{1}&x_{0}&x_{1}&\ddots&&\vdots\\ x_{2}&x_{1}&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&x_{1}&x_{2}\\ \vdots&&\ddots&x_{1}&x_{0}&x_{1}\\ x_{n-1}&\cdots&\cdots&x_{2}&x_{1}&x_{0}\end{bmatrix}.$$

For $X\in{\mathbb{R}}^{n\times n}$, denote by $\mathrm{vec}_{ST}(X)$ the following vector:

$$\mathrm{vec}_{ST}(X)=\left[x_{0},x_{1},x_{2},\ldots,x_{n-1}\right]^{T}\in{\mathbb{R}}^{n}. \qquad (10)$$
Definition 3.7.

A matrix $X\in{\mathbb{R}}^{n\times n}$ is Hankel if it has the following form:

$$X=\begin{bmatrix}x_{n-1}&\cdots&\cdots&x_{2}&x_{1}&x_{0}\\ \vdots&&\udots&x_{1}&x_{0}&x_{-1}\\ \vdots&\udots&\udots&\udots&x_{-1}&x_{-2}\\ x_{2}&x_{1}&\udots&\udots&\udots&\vdots\\ x_{1}&x_{0}&x_{-1}&\udots&&\vdots\\ x_{0}&x_{-1}&x_{-2}&\cdots&\cdots&x_{-n+1}\end{bmatrix}.$$

For $X\in{\mathbb{R}}^{n\times n}$, denote by $\mathrm{vec}_{H}(X)$ the following vector:

$$\mathrm{vec}_{H}(X)=\left[x_{n-1},x_{n-2},\ldots,x_{1},x_{0},x_{-1},\ldots,x_{-n+1}\right]^{T}\in{\mathbb{R}}^{2n-1}. \qquad (11)$$
Definition 3.8.

A matrix $X\in{\mathbb{R}}^{n\times n}$ is circulant if it has the following form:

$$X=\begin{bmatrix}x_{0}&x_{n-1}&\cdots&x_{2}&x_{1}\\ x_{1}&x_{0}&x_{n-1}&&x_{2}\\ \vdots&x_{1}&x_{0}&\ddots&\vdots\\ x_{n-2}&&\ddots&\ddots&x_{n-1}\\ x_{n-1}&x_{n-2}&\cdots&x_{1}&x_{0}\end{bmatrix}.$$

For $X\in{\mathbb{R}}^{n\times n}$, denote by $\mathrm{vec}_{C}(X)$ the following vector:

$$\mathrm{vec}_{C}(X)=\left[x_{0},x_{1},x_{2},\ldots,x_{n-1}\right]^{T}\in{\mathbb{R}}^{n}. \qquad (12)$$

In the following four lemmas, we describe the vec-structure of these classes of real structured matrices.

Lemma 3.9.

If $X\in{\mathbb{R}}^{n\times n}$, then $X\in\mathbb{T}{\mathbb{R}}^{n\times n}\iff\mathrm{vec}(X)=K_{T}\mathrm{vec}_{T}(X)$, where $\mathrm{vec}_{T}(X)$ is of the form (9), and the matrix $K_{T}\in{\mathbb{R}}^{n^{2}\times(2n-1)}$ is represented as

$$K_{T}=\begin{bmatrix}e_{n}&e_{n-1}&e_{n-2}&\cdots&e_{2}&e_{1}&0&\cdots&0&0\\ 0&e_{n}&e_{n-1}&\cdots&e_{3}&e_{2}&e_{1}&\cdots&0&0\\ \vdots&\vdots&\vdots&&\vdots&\vdots&&&\vdots&\vdots\\ 0&0&0&\cdots&e_{n}&e_{n-1}&\cdots&e_{2}&e_{1}&0\\ 0&0&0&\cdots&0&e_{n}&e_{n-1}&\cdots&e_{2}&e_{1}\end{bmatrix}.$$
Proof.

Consider the Toeplitz matrix
$$X=\begin{bmatrix}x_{0}&x_{1}&x_{2}&\cdots&\cdots&x_{n-1}\\ x_{-1}&x_{0}&x_{1}&\ddots&&\vdots\\ x_{-2}&x_{-1}&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&x_{1}&x_{2}\\ \vdots&&\ddots&x_{-1}&x_{0}&x_{1}\\ x_{-n+1}&\cdots&\cdots&x_{-2}&x_{-1}&x_{0}\end{bmatrix}.$$
Let $u_{i}$, for $i=1,2,\ldots,n$, denote the $i$th column of $X$, so that $\mathrm{vec}(X)=\begin{bmatrix}u_{1}\\ u_{2}\\ \vdots\\ u_{n}\end{bmatrix}$. We get

$$u_{1}=\begin{bmatrix}0&0&0&\cdots&0&1&0&\cdots&0&0\\ 0&0&0&\cdots&1&0&0&\cdots&0&0\\ \vdots&\vdots&\vdots&\udots&\vdots&\vdots&\vdots&&\vdots&\vdots\\ 0&0&1&\cdots&0&0&0&\cdots&0&0\\ 0&1&0&\cdots&0&0&0&\cdots&0&0\\ 1&0&0&\cdots&0&0&0&\cdots&0&0\end{bmatrix}\mathrm{vec}_{T}(X)=\begin{bmatrix}e_{n}&e_{n-1}&e_{n-2}&\cdots&e_{2}&e_{1}&0&\cdots&0&0\end{bmatrix}\mathrm{vec}_{T}(X),$$
$$u_{2}=\begin{bmatrix}0&0&0&\cdots&0&0&1&\cdots&0&0\\ 0&0&0&\cdots&0&1&0&\cdots&0&0\\ 0&0&0&\cdots&1&0&0&\cdots&0&0\\ \vdots&\vdots&\vdots&\udots&\vdots&\vdots&\vdots&&\vdots&\vdots\\ 0&0&1&\cdots&0&0&0&\cdots&0&0\\ 0&1&0&\cdots&0&0&0&\cdots&0&0\end{bmatrix}\mathrm{vec}_{T}(X)=\begin{bmatrix}0&e_{n}&e_{n-1}&\cdots&e_{3}&e_{2}&e_{1}&\cdots&0&0\end{bmatrix}\mathrm{vec}_{T}(X),$$
$$\vdots$$
$$u_{n}=\begin{bmatrix}0&0&0&\cdots&0&0&0&\cdots&0&1\\ 0&0&0&\cdots&0&0&0&\cdots&1&0\\ \vdots&\vdots&\vdots&&\vdots&\vdots&\vdots&&\vdots&\vdots\\ 0&0&0&\cdots&0&0&1&\cdots&0&0\\ 0&0&0&\cdots&0&1&0&\cdots&0&0\end{bmatrix}\mathrm{vec}_{T}(X)=\begin{bmatrix}0&0&0&\cdots&0&e_{n}&e_{n-1}&\cdots&e_{2}&e_{1}\end{bmatrix}\mathrm{vec}_{T}(X).$$

We have

$$\mathrm{vec}(X)=\begin{bmatrix}e_{n}&e_{n-1}&e_{n-2}&\cdots&e_{2}&e_{1}&0&\cdots&0&0\\ 0&e_{n}&e_{n-1}&\cdots&e_{3}&e_{2}&e_{1}&\cdots&0&0\\ \vdots&\vdots&\vdots&&\vdots&\vdots&&&\vdots&\vdots\\ 0&0&0&\cdots&e_{n}&e_{n-1}&\cdots&e_{2}&e_{1}&0\\ 0&0&0&\cdots&0&e_{n}&e_{n-1}&\cdots&e_{2}&e_{1}\end{bmatrix}\mathrm{vec}_{T}(X)=K_{T}\mathrm{vec}_{T}(X).$$
∎

Lemma 3.10.

If $X\in{\mathbb{R}}^{n\times n}$, then $X\in\mathbb{S}\mathbb{T}{\mathbb{R}}^{n\times n}\iff\mathrm{vec}(X)=K_{ST}\mathrm{vec}_{ST}(X)$, where $\mathrm{vec}_{ST}(X)$ is of the form (10). When $n$ is even, let $n=2l$. In this case, the matrix $K_{ST}\in{\mathbb{R}}^{n^{2}\times n}$ is represented as

$$K_{ST}=\begin{bmatrix}e_{1}&e_{2}&e_{3}&\cdots&e_{l}&e_{l+1}&\cdots&e_{n-1}&e_{n}\\ e_{2}&e_{1}+e_{3}&e_{4}&\cdots&e_{l+1}&e_{l+2}&\cdots&e_{n}&0\\ e_{3}&e_{2}+e_{4}&e_{1}+e_{5}&\cdots&e_{l+2}&e_{l+3}&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&&\vdots&\vdots\\ e_{l}&e_{l-1}+e_{l+1}&e_{l-2}+e_{l+2}&\cdots&e_{1}+e_{n-1}&e_{n}&\cdots&0&0\\ e_{l+1}&e_{l}+e_{l+2}&e_{l-1}+e_{l+3}&\cdots&e_{2}+e_{n}&e_{1}&\cdots&0&0\\ \vdots&\vdots&\vdots&&&&\ddots&\vdots&\vdots\\ e_{n-1}&e_{n-2}+e_{n}&e_{n-3}&\cdots&\cdots&\cdots&\cdots&e_{1}&0\\ e_{n}&e_{n-1}&e_{n-2}&\cdots&\cdots&\cdots&\cdots&e_{2}&e_{1}\end{bmatrix}.$$

When $n$ is odd, let $n=2l-1$. In this case, the matrix $K_{ST}\in{\mathbb{R}}^{n^{2}\times n}$ is represented as

$$K_{ST}=\begin{bmatrix}e_{1}&e_{2}&e_{3}&\cdots&e_{l}&e_{l+1}&\cdots&e_{n-1}&e_{n}\\ e_{2}&e_{1}+e_{3}&e_{4}&\cdots&e_{l+1}&e_{l+2}&\cdots&e_{n}&0\\ e_{3}&e_{2}+e_{4}&e_{1}+e_{5}&\cdots&e_{l+2}&e_{l+3}&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&&\vdots&\vdots\\ e_{l}&e_{l-1}+e_{l+1}&e_{l-2}+e_{l+2}&\cdots&e_{1}+e_{n}&0&\cdots&0&0\\ e_{l+1}&e_{l}+e_{l+2}&e_{l-1}+e_{l+3}&\cdots&e_{2}&e_{1}&\cdots&0&0\\ \vdots&\vdots&\vdots&&&&\ddots&\vdots&\vdots\\ e_{n-1}&e_{n-2}+e_{n}&e_{n-3}&\cdots&\cdots&\cdots&\cdots&e_{1}&0\\ e_{n}&e_{n-1}&e_{n-2}&\cdots&\cdots&\cdots&\cdots&e_{2}&e_{1}\end{bmatrix}.$$
Proof.

The proof is similar to Lemma 3.9. ∎

To enhance our understanding of the above lemma, we examine it for $n=6$ and $n=7$. In these cases, we have

$$K_{ST}=\begin{bmatrix}e_{1}&e_{2}&e_{3}&e_{4}&e_{5}&e_{6}\\ e_{2}&e_{1}+e_{3}&e_{4}&e_{5}&e_{6}&0\\ e_{3}&e_{2}+e_{4}&e_{1}+e_{5}&e_{6}&0&0\\ e_{4}&e_{3}+e_{5}&e_{2}+e_{6}&e_{1}&0&0\\ e_{5}&e_{4}+e_{6}&e_{3}&e_{2}&e_{1}&0\\ e_{6}&e_{5}&e_{4}&e_{3}&e_{2}&e_{1}\end{bmatrix},\qquad K_{ST}=\begin{bmatrix}e_{1}&e_{2}&e_{3}&e_{4}&e_{5}&e_{6}&e_{7}\\ e_{2}&e_{1}+e_{3}&e_{4}&e_{5}&e_{6}&e_{7}&0\\ e_{3}&e_{2}+e_{4}&e_{1}+e_{5}&e_{6}&e_{7}&0&0\\ e_{4}&e_{3}+e_{5}&e_{2}+e_{6}&e_{1}+e_{7}&0&0&0\\ e_{5}&e_{4}+e_{6}&e_{3}+e_{7}&e_{2}&e_{1}&0&0\\ e_{6}&e_{5}+e_{7}&e_{4}&e_{3}&e_{2}&e_{1}&0\\ e_{7}&e_{6}&e_{5}&e_{4}&e_{3}&e_{2}&e_{1}\end{bmatrix}.$$
Lemma 3.11.

If $X\in{\mathbb{R}}^{n\times n}$, then $X\in\mathbb{H}{\mathbb{R}}^{n\times n}\iff\mathrm{vec}(X)=K_{H}\mathrm{vec}_{H}(X)$, where $\mathrm{vec}_{H}(X)$ is of the form (11), and the matrix $K_{H}\in{\mathbb{R}}^{n^{2}\times(2n-1)}$ is represented as

$$K_{H}=\begin{bmatrix}e_{1}&e_{2}&e_{3}&\cdots&e_{n-1}&e_{n}&0&0&\cdots&0\\ 0&e_{1}&e_{2}&\cdots&e_{n-2}&e_{n-1}&e_{n}&\cdots&0&0\\ \vdots&\vdots&\vdots&&\vdots&\vdots&&&\vdots&\vdots\\ 0&0&0&\cdots&e_{1}&e_{2}&\cdots&e_{n-1}&e_{n}&0\\ 0&0&0&\cdots&0&e_{1}&e_{2}&\cdots&e_{n-1}&e_{n}\end{bmatrix}.$$
Proof.

The proof is similar to Lemma 3.9. ∎

Lemma 3.12.

If $X\in{\mathbb{R}}^{n\times n}$, then $X\in{\mathbb{C}}{\mathbb{R}}^{n\times n}\iff\mathrm{vec}(X)=K_{C}\mathrm{vec}_{C}(X)$, where $\mathrm{vec}_{C}(X)$ is of the form (12), and the matrix $K_{C}\in{\mathbb{R}}^{n^{2}\times n}$ is represented as

$$K_{C}=\begin{bmatrix}e_{1}&e_{2}&\cdots&e_{n-1}&e_{n}\\ e_{2}&e_{3}&\cdots&e_{n}&e_{1}\\ e_{3}&e_{4}&\cdots&e_{1}&e_{2}\\ \vdots&\vdots&&\vdots&\vdots\\ e_{n}&e_{1}&\cdots&e_{n-2}&e_{n-1}\end{bmatrix}.$$
Proof.

The proof is similar to Lemma 3.9. ∎
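All four selection matrices $K_{T}$, $K_{ST}$, $K_{H}$, and $K_{C}$ can be generated the same way: the $j$th column of $K$ is $\mathrm{vec}$ of the structured matrix whose $j$th parameter equals one and whose remaining parameters are zero. A NumPy/SciPy sketch built on this observation (our own illustration, not code from the paper; `K_from_param` is a hypothetical helper):

```python
import numpy as np
from scipy.linalg import toeplitz, hankel, circulant

def K_from_param(build, k, n):
    """Column j of K is vec(build(e_j)); vec stacks columns (Fortran order)."""
    K = np.zeros((n * n, k))
    for j in range(k):
        p = np.zeros(k)
        p[j] = 1.0
        K[:, j] = build(p).ravel(order="F")
    return K

n = 4
# parameter vectors follow (9)-(12): Toeplitz/Hankel use 2n-1 entries, the rest n
K_T = K_from_param(lambda p: toeplitz(p[n - 1 :: -1], p[n - 1 :]), 2 * n - 1, n)
K_ST = K_from_param(lambda p: toeplitz(p), n, n)
K_H = K_from_param(lambda p: hankel(p[:n], p[n - 1 :]), 2 * n - 1, n)
K_C = K_from_param(lambda p: circulant(p), n, n)

# check Lemma 3.9 on a random Toeplitz matrix
p = np.random.standard_normal(2 * n - 1)      # vec_T(X) as in (9)
X = toeplitz(p[n - 1 :: -1], p[n - 1 :])
assert np.allclose(X.ravel(order="F"), K_T @ p)
```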

In the following lemma, we present the vec-structure of reduced biquaternion L-structure matrices based on the vec-structure of real structured matrices.

Lemma 3.13.

If $X=X_{1}+X_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{n\times n}$, then

(i) $X\in\mathbb{T}\mathbb{Q}_{\mathbb{R}}^{n\times n}\iff\mathrm{vec}(\overrightarrow{X})=M_{T}\mathrm{vec}_{T}(\overrightarrow{X})$, where
$$M_{T}=\begin{bmatrix}K_{T}&0&0&0\\ 0&K_{T}&0&0\\ 0&0&K_{T}&0\\ 0&0&0&K_{T}\end{bmatrix},\qquad\mathrm{vec}_{T}(\overrightarrow{X})=\begin{bmatrix}\mathrm{vec}_{T}(\Re(X_{1}))\\ \mathrm{vec}_{T}(\Im(X_{1}))\\ \mathrm{vec}_{T}(\Re(X_{2}))\\ \mathrm{vec}_{T}(\Im(X_{2}))\end{bmatrix}.$$

(ii) $X\in\mathbb{S}\mathbb{T}\mathbb{Q}_{\mathbb{R}}^{n\times n}\iff\mathrm{vec}(\overrightarrow{X})=M_{ST}\mathrm{vec}_{ST}(\overrightarrow{X})$, where
$$M_{ST}=\begin{bmatrix}K_{ST}&0&0&0\\ 0&K_{ST}&0&0\\ 0&0&K_{ST}&0\\ 0&0&0&K_{ST}\end{bmatrix},\qquad\mathrm{vec}_{ST}(\overrightarrow{X})=\begin{bmatrix}\mathrm{vec}_{ST}(\Re(X_{1}))\\ \mathrm{vec}_{ST}(\Im(X_{1}))\\ \mathrm{vec}_{ST}(\Re(X_{2}))\\ \mathrm{vec}_{ST}(\Im(X_{2}))\end{bmatrix}.$$

(iii) $X\in\mathbb{H}\mathbb{Q}_{\mathbb{R}}^{n\times n}\iff\mathrm{vec}(\overrightarrow{X})=M_{H}\mathrm{vec}_{H}(\overrightarrow{X})$, where
$$M_{H}=\begin{bmatrix}K_{H}&0&0&0\\ 0&K_{H}&0&0\\ 0&0&K_{H}&0\\ 0&0&0&K_{H}\end{bmatrix},\qquad\mathrm{vec}_{H}(\overrightarrow{X})=\begin{bmatrix}\mathrm{vec}_{H}(\Re(X_{1}))\\ \mathrm{vec}_{H}(\Im(X_{1}))\\ \mathrm{vec}_{H}(\Re(X_{2}))\\ \mathrm{vec}_{H}(\Im(X_{2}))\end{bmatrix}.$$

(iv) $X\in{\mathbb{C}}\mathbb{Q}_{\mathbb{R}}^{n\times n}\iff\mathrm{vec}(\overrightarrow{X})=M_{C}\mathrm{vec}_{C}(\overrightarrow{X})$, where
$$M_{C}=\begin{bmatrix}K_{C}&0&0&0\\ 0&K_{C}&0&0\\ 0&0&K_{C}&0\\ 0&0&0&K_{C}\end{bmatrix},\qquad\mathrm{vec}_{C}(\overrightarrow{X})=\begin{bmatrix}\mathrm{vec}_{C}(\Re(X_{1}))\\ \mathrm{vec}_{C}(\Im(X_{1}))\\ \mathrm{vec}_{C}(\Re(X_{2}))\\ \mathrm{vec}_{C}(\Im(X_{2}))\end{bmatrix}.$$
Proof.

We will prove the first part; the remaining parts can be proved in a similar manner. The proof follows from the fact that $X\in\mathbb{T}\mathbb{Q}_{\mathbb{R}}^{n\times n}\iff\Re(X_{1}),\Im(X_{1}),\Re(X_{2}),\Im(X_{2})\in\mathbb{T}{\mathbb{R}}^{n\times n}$, together with Lemma 3.9. ∎

So far, we have examined how reduced biquaternion L-structure matrices in particular classes can be represented through real structured matrices. Based on the preceding discussion of reduced biquaternion L-structure matrix sets, we can summarize the findings as follows:

For $X=X_{1}+X_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$, we have $\overrightarrow{X}=\left[\Re(X_{1}),\Im(X_{1}),\Re(X_{2}),\Im(X_{2})\right]\in{\mathbb{R}}^{m\times 4n}$. Let $\mathcal{G}$ be a subspace of ${\mathbb{R}}^{4mn}$ and $M_{L}$ be the basis matrix for $\mathcal{G}$. The subset of real matrices of order $m\times 4n$ given by

$$L^{R}(m,4n)=\{\overrightarrow{X}\in{\mathbb{R}}^{m\times 4n}\;|\;\mathrm{vec}(\overrightarrow{X})\in\mathcal{G}\} \qquad (13)$$

will be called a real L-structure.

Remark 3.14.

$M_{L}$ represents the basis matrix of the subspace $\mathcal{G}$. To simplify notation, we will refer to $M_{L}$ as the basis matrix of $L^{R}(m,4n)$ throughout the entire manuscript.

Thus, we have the following Lemma.

Lemma 3.15.

Let $M_{L}$ be the basis matrix of $L^{R}(m,4n)$. Then $X\in L(m,n)\iff\mathrm{vec}(\overrightarrow{X})=M_{L}\mathrm{vec}_{L}(\overrightarrow{X})$, where $\mathrm{vec}_{L}(\overrightarrow{X})$ is the coordinate vector of $\overrightarrow{X}$ with respect to the basis matrix $M_{L}$.

Proof.

The proof follows from the generalization of Lemmas 3.9 and 3.13 to any L-structure matrix $X$. ∎

Now that we have described the reduced biquaternion L-structure, we turn our attention to solving an RBME. Our approach involves transforming the RBME into a complex matrix equation. To achieve this, we must study $\mathrm{vec}(AXB)$. For $A\in\mathbb{C}^{m\times n}$, $X\in\mathbb{C}^{n\times s}$, and $B\in\mathbb{C}^{s\times t}$, we have

$$\mathrm{vec}(AXB)=(B^{T}\otimes A)\mathrm{vec}(X). \qquad (14)$$

However, this identity does not carry over directly to reduced biquaternion algebra; we therefore investigate $\mathrm{vec}(\Psi_{AXB})$ rather than $\mathrm{vec}(AXB)$.

Lemma 3.16.

Let $A=A_{1}+A_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$, $X=X_{1}+X_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{n\times s}$, and $B=B_{1}+B_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{s\times t}$. Then

$$\mathrm{vec}(\Psi_{AXB})=\left(h(B)^{T}\otimes A_{1}+h(B\textit{j})^{T}\otimes A_{2}\right)\mathrm{vec}(\Psi_{X}).$$
Proof.

Using (4) and (5), we get

$$\Psi_{AXB}=\Psi_{A}h(XB)=\left[A_{1},A_{2}\right]h(X)h(B)=\left[A_{1},A_{2}\right]\begin{bmatrix}X_{1}&X_{2}\\ X_{2}&X_{1}\end{bmatrix}\begin{bmatrix}B_{1}&B_{2}\\ B_{2}&B_{1}\end{bmatrix}$$
$$=\left[A_{1}X_{1}B_{1}+A_{2}X_{2}B_{1}+A_{1}X_{2}B_{2}+A_{2}X_{1}B_{2},\;A_{1}X_{1}B_{2}+A_{2}X_{2}B_{2}+A_{1}X_{2}B_{1}+A_{2}X_{1}B_{1}\right].$$

Now from (6) and (14), we get

$$\begin{split}\mathrm{vec}(\Psi_{AXB})&=\begin{bmatrix}(B_{1}^{T}\otimes A_{1})\mathrm{vec}(X_{1})+(B_{1}^{T}\otimes A_{2})\mathrm{vec}(X_{2})+(B_{2}^{T}\otimes A_{1})\mathrm{vec}(X_{2})+(B_{2}^{T}\otimes A_{2})\mathrm{vec}(X_{1})\\ (B_{2}^{T}\otimes A_{1})\mathrm{vec}(X_{1})+(B_{2}^{T}\otimes A_{2})\mathrm{vec}(X_{2})+(B_{1}^{T}\otimes A_{1})\mathrm{vec}(X_{2})+(B_{1}^{T}\otimes A_{2})\mathrm{vec}(X_{1})\end{bmatrix}\\ &=\left(\begin{bmatrix}B_{1}^{T}&B_{2}^{T}\\ B_{2}^{T}&B_{1}^{T}\end{bmatrix}\otimes A_{1}+\begin{bmatrix}B_{2}^{T}&B_{1}^{T}\\ B_{1}^{T}&B_{2}^{T}\end{bmatrix}\otimes A_{2}\right)\begin{bmatrix}\mathrm{vec}(X_{1})\\ \mathrm{vec}(X_{2})\end{bmatrix}\\ &=\left(h(B)^{T}\otimes A_{1}+h(B\textit{j})^{T}\otimes A_{2}\right)\mathrm{vec}(\Psi_{X}).\end{split}$$
∎

Set

$$\mathcal{W}_{ns}=\begin{bmatrix}I_{ns}&\textit{i}\,I_{ns}&0&0\\ 0&0&I_{ns}&\textit{i}\,I_{ns}\end{bmatrix},\qquad\mathcal{S}_{ns}=\begin{bmatrix}Q_{ns}&0\\ 0&Q_{ns}\end{bmatrix}, \qquad (15)$$

where $Q_{ns}$ is the commutation matrix, a row permutation of the identity matrix $I_{ns}$ satisfying $Q_{ns}\mathrm{vec}(X)=\mathrm{vec}(X^{T})$ for any $X\in{\mathbb{C}}^{n\times s}$.
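A minimal sketch of how $Q_{ns}$, $\mathcal{W}_{ns}$, and $\mathcal{S}_{ns}$ can be realized numerically (NumPy; our own construction, assuming the column-stacking $\mathrm{vec}$ used throughout the paper):

```python
import numpy as np

def commutation(n, s):
    """Q_ns with Q_ns @ vec(X) = vec(X.T) for X in C^{n x s} (column-stacked vec)."""
    Q = np.zeros((n * s, n * s))
    for i in range(n):
        for j in range(s):
            Q[i * s + j, j * n + i] = 1.0
    return Q

def W(n, s):
    """W_ns of (15): maps [vec Re X1; vec Im X1; vec Re X2; vec Im X2]
    to vec(Psi_X) = [vec X1; vec X2]."""
    I, Z = np.eye(n * s), np.zeros((n * s, n * s))
    return np.block([[I, 1j * I, Z, Z], [Z, Z, I, 1j * I]])

n, s = 3, 2
Q = commutation(n, s)
X = np.random.standard_normal((n, s))
assert np.allclose(Q @ X.ravel(order="F"), X.T.ravel(order="F"))

X1 = np.random.standard_normal((n, s)) + 1j * np.random.standard_normal((n, s))
X2 = np.random.standard_normal((n, s)) + 1j * np.random.standard_normal((n, s))
vec_arrow = np.concatenate([M.ravel(order="F")
                            for M in (X1.real, X1.imag, X2.real, X2.imag)])
assert np.allclose(W(n, s) @ vec_arrow,
                   np.concatenate([X1.ravel(order="F"), X2.ravel(order="F")]))

S = np.kron(np.eye(2), Q)     # S_ns = diag(Q_ns, Q_ns) of (15)
```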

We have examined $\mathrm{vec}(\Psi_{AXB})$ within reduced biquaternion algebra. The following lemma gives the form of $\mathrm{vec}(\Psi_{AXB})$ when $X$ possesses an L-structure.

Lemma 3.17.

Let $A=A_{1}+A_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$, $X=X_{1}+X_{2}\textit{j}\in L(n,s)$, and $B=B_{1}+B_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{s\times t}$. Then

$$\mathrm{vec}(\Psi_{AXB})=\left(h(B)^{T}\otimes A_{1}+h(B\textit{j})^{T}\otimes A_{2}\right)\mathcal{W}_{ns}M_{L}\mathrm{vec}_{L}(\overrightarrow{X}),$$
$$\mathrm{vec}(\Psi_{AX^{T}B})=\left(h(B)^{T}\otimes A_{1}+h(B\textit{j})^{T}\otimes A_{2}\right)\mathcal{S}_{ns}\mathcal{W}_{ns}M_{L}\mathrm{vec}_{L}(\overrightarrow{X}).$$
Proof.

Using (6), (15), Lemma 3.15, and Lemma 3.16, we get

$$\begin{split}\mathrm{vec}(\Psi_{AXB})&=\left(h(B)^{T}\otimes A_{1}+h(B\textit{j})^{T}\otimes A_{2}\right)\mathrm{vec}(\Psi_{X})\\ &=\left(h(B)^{T}\otimes A_{1}+h(B\textit{j})^{T}\otimes A_{2}\right)\begin{bmatrix}\mathrm{vec}(X_{1})\\ \mathrm{vec}(X_{2})\end{bmatrix}\\ &=\left(h(B)^{T}\otimes A_{1}+h(B\textit{j})^{T}\otimes A_{2}\right)\begin{bmatrix}\mathrm{vec}(\Re(X_{1}))+\textit{i}\,\mathrm{vec}(\Im(X_{1}))\\ \mathrm{vec}(\Re(X_{2}))+\textit{i}\,\mathrm{vec}(\Im(X_{2}))\end{bmatrix}\\ &=\left(h(B)^{T}\otimes A_{1}+h(B\textit{j})^{T}\otimes A_{2}\right)\mathcal{W}_{ns}\mathrm{vec}(\overrightarrow{X})\\ &=\left(h(B)^{T}\otimes A_{1}+h(B\textit{j})^{T}\otimes A_{2}\right)\mathcal{W}_{ns}M_{L}\mathrm{vec}_{L}(\overrightarrow{X}).\end{split}$$

We have

$$\mathrm{vec}(\Psi_{X^{T}})=\begin{bmatrix}\mathrm{vec}(X_{1}^{T})\\ \mathrm{vec}(X_{2}^{T})\end{bmatrix}=\begin{bmatrix}Q_{ns}\mathrm{vec}(X_{1})\\ Q_{ns}\mathrm{vec}(X_{2})\end{bmatrix}=\mathcal{S}_{ns}\begin{bmatrix}\mathrm{vec}(X_{1})\\ \mathrm{vec}(X_{2})\end{bmatrix}=\mathcal{S}_{ns}\mathcal{W}_{ns}M_{L}\mathrm{vec}_{L}(\overrightarrow{X}).$$

The proof follows from some simple calculations. ∎
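Lemma 3.16 is the key device that turns a reduced biquaternion product into a complex linear map, and it can be sanity-checked directly. The sketch below (NumPy; our own illustration) compares both sides of the identity for random data, using $B\textit{j}=B_{2}+B_{1}\textit{j}$ so that $h(B\textit{j})$ simply swaps the roles of $B_{1}$ and $B_{2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
cm = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

def h(Z1, Z2):
    return np.block([[Z1, Z2], [Z2, Z1]])

m, n, s, t = 2, 3, 3, 2
A1, A2 = cm(m, n), cm(m, n)
X1, X2 = cm(n, s), cm(n, s)
B1, B2 = cm(s, t), cm(s, t)

# left-hand side: vec(Psi_{AXB}) computed from the product itself
P1, P2 = X1 @ B1 + X2 @ B2, X1 @ B2 + X2 @ B1            # XB
C1, C2 = A1 @ P1 + A2 @ P2, A1 @ P2 + A2 @ P1            # A(XB)
lhs = np.concatenate([C1.ravel(order="F"), C2.ravel(order="F")])

# right-hand side: (h(B)^T (x) A1 + h(Bj)^T (x) A2) vec(Psi_X)
vecPsiX = np.concatenate([X1.ravel(order="F"), X2.ravel(order="F")])
rhs = (np.kron(h(B1, B2).T, A1) + np.kron(h(B2, B1).T, A2)) @ vecPsiX

assert np.allclose(lhs, rhs)
```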

4 General framework for solving constrained RBMEs

The purpose of this section is to demonstrate how we can solve constrained generalized linear matrix equations over commutative quaternions. As part of our approach, the constrained RBME is reduced to the following unconstrained real matrix system:

$$\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}x=e, \qquad (16)$$

where $Q_{1},Q_{2}$ are real matrices of appropriate dimensions and $x,e$ are real vectors of suitable size. From [5, Theorem 2], the generalized inverse of a partitioned matrix $\left[U,V\right]$ is given by

$$\left[U,V\right]^{+}=\begin{bmatrix}U^{+}-U^{+}VH\\ H\end{bmatrix},$$

where

$$\left.\begin{aligned} H&=R^{+}+\left(I-R^{+}R\right)ZV^{T}U^{+T}U^{+}\left(I-VR^{+}\right),\qquad R=\left(I-UU^{+}\right)V,\\ Z&=\left(I+\left(I-R^{+}R\right)V^{T}U^{+T}U^{+}V\left(I-R^{+}R\right)\right)^{-1}.\end{aligned}\right\}$$

We have

$$\left[U,V\right]^{T+}=\left[U,V\right]^{+T}=\begin{bmatrix}U^{+}-U^{+}VH\\ H\end{bmatrix}^{T}=\left[U^{T+}-H^{T}V^{T}U^{T+},H^{T}\right].$$

By substituting $U=Q_{1}^{T}$ and $V=Q_{2}^{T}$, we get

$$\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}^{+}=\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right],\qquad\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}^{+}\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}=Q_{1}^{+}Q_{1}+RR^{+}, \qquad (17)$$

where

$$\left.\begin{aligned} H&=R^{+}+\left(I-R^{+}R\right)ZQ_{2}Q_{1}^{+}Q_{1}^{+T}\left(I-Q_{2}^{T}R^{+}\right),\qquad R=\left(I-Q_{1}^{+}Q_{1}\right)Q_{2}^{T},\\ Z&=\left(I+\left(I-R^{+}R\right)Q_{2}Q_{1}^{+}Q_{1}^{+T}Q_{2}^{T}\left(I-R^{+}R\right)\right)^{-1}.\end{aligned}\right\} \qquad (18)$$
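Formulas (17)–(18) can be validated against a generic pseudoinverse routine; in the sketch below (NumPy; our own test with arbitrary random sizes), `np.linalg.pinv` serves as the reference:

```python
import numpy as np

rng = np.random.default_rng(2)
Q1 = rng.standard_normal((5, 7))
Q2 = rng.standard_normal((4, 7))
n = Q1.shape[1]

P1 = np.linalg.pinv(Q1)                       # Q1^+
R = (np.eye(n) - P1 @ Q1) @ Q2.T              # R of (18)
PR = np.linalg.pinv(R)                        # R^+
E_R = np.eye(R.shape[1]) - PR @ R             # I - R^+ R
M = Q2 @ P1 @ P1.T                            # Q2 Q1^+ Q1^{+T}
Z = np.linalg.inv(np.eye(R.shape[1]) + E_R @ M @ Q2.T @ E_R)
H = PR + E_R @ Z @ M @ (np.eye(n) - Q2.T @ PR)

# (17): the pseudoinverse of the stacked matrix [Q1; Q2]
lhs = np.linalg.pinv(np.vstack([Q1, Q2]))
rhs = np.hstack([P1 - H.T @ Q2 @ P1, H.T])
assert np.allclose(lhs, rhs)
```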

Using [9] and the results mentioned above, we deduce the following lemma, which is helpful in developing the main results.

Lemma 4.1.

Consider the real matrix system $\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}x=e$. We have the following results:

(1) The matrix equation has a solution $x$ if and only if $\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}^{+}e=e$. In this case, the general solution is $x=\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right]e+\left(I-Q_{1}^{+}Q_{1}-RR^{+}\right)y$, where $y$ is an arbitrary vector of suitable size. Furthermore, if the consistency condition is satisfied, then the matrix equation has a unique solution if and only if the matrix $\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}$ has full column rank. In this case, the unique solution is $x=\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right]e$.

(2) The least squares solutions of the matrix equation can be expressed as $x=\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right]e+\left(I-Q_{1}^{+}Q_{1}-RR^{+}\right)y$, where $y$ is an arbitrary vector of suitable size, and the least squares solution with the least norm is $x=\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right]e$.

The following lemma will be used in the development of the main results.

Lemma 4.2.

Consider the matrix equation $Ax=b$, where $A\in{\mathbb{C}}^{m\times n}$, $x\in{\mathbb{R}}^{n}$, and $b\in{\mathbb{C}}^{m}$. The matrix equation $Ax=b$ is equivalent to the real linear system $\begin{bmatrix}\Re(A)\\ \Im(A)\end{bmatrix}x=\begin{bmatrix}\Re(b)\\ \Im(b)\end{bmatrix}$.

In the following subsection, we find $Q_{1}$, $Q_{2}$, and $e$ for each of the three constrained RBMEs and solve them.

Remark 4.3.

It is important to emphasize that the values of $Q_{1}$, $Q_{2}$, and $e$ vary depending on the specific matrix equation being solved.

4.1 Linear matrix equation in several unknown L-structures

The class of matrix equations (1) encompasses many important matrix equations; simple examples include $AXB+CYD=E$ and $AX+YB=E$. We now introduce a general framework for finding the least squares solutions of RBMEs of the form (1). The problem can be formally stated as follows:

Problem 4.1.

Let $A_{l}=A_{l1}+A_{l2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times n_{l}}$, $B_{l}\in\mathbb{Q}_{\mathbb{R}}^{s_{l}\times t}$, and $E=E_{1}+E_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times t}$ for $l=1,2,\ldots,r$. Let

$$\mathcal{N}_{LE}=\left\{[X_{1},X_{2},\ldots,X_{r}]\;\middle|\;X_{l}\in L_{l}(n_{l},s_{l}),\;\left\lVert\sum_{l=1}^{r}A_{l}X_{l}B_{l}-E\right\rVert_{F}=\min_{\widetilde{X}_{l}\in L_{l}(n_{l},s_{l})}\left\lVert\sum_{l=1}^{r}A_{l}\widetilde{X}_{l}B_{l}-E\right\rVert_{F}\right\}.$$

Then find $[X_{1E},X_{2E},\ldots,X_{rE}]\in\mathcal{N}_{LE}$ such that

$$\left\lVert[X_{1E},X_{2E},\ldots,X_{rE}]\right\rVert_{F}=\min_{[X_{1},X_{2},\ldots,X_{r}]\in\mathcal{N}_{LE}}\left(\left\lVert X_{1}\right\rVert_{F}^{2}+\left\lVert X_{2}\right\rVert_{F}^{2}+\cdots+\left\lVert X_{r}\right\rVert_{F}^{2}\right)^{\frac{1}{2}}.$$

To solve Problem 4.1, we employ the following notation: for $l=1,2,\ldots,r$, let $M_{L_{l}}$ be the basis matrix of $L_{l}^{R}(n_{l},4s_{l})$, and

$$S_{l}:=\left(h(B_{l})^{T}\otimes A_{l1}+h(B_{l}\textit{j})^{T}\otimes A_{l2}\right)\mathcal{W}_{n_{l}s_{l}}M_{L_{l}}, \qquad (19)$$
$$x:=\begin{bmatrix}\mathrm{vec}_{L_{1}}(\overrightarrow{X_{1}})\\ \mathrm{vec}_{L_{2}}(\overrightarrow{X_{2}})\\ \vdots\\ \mathrm{vec}_{L_{r}}(\overrightarrow{X_{r}})\end{bmatrix}. \qquad (20)$$

Additionally, $Q_{1}$, $Q_{2}$, and $e$ (as in (16)) take the following form:

$$Q_{1}:=\left[\Re(S_{1}),\Re(S_{2}),\ldots,\Re(S_{r})\right],\quad Q_{2}:=\left[\Im(S_{1}),\Im(S_{2}),\ldots,\Im(S_{r})\right],\quad e:=\begin{bmatrix}\mathrm{vec}(\Re(\Psi_{E}))\\ \mathrm{vec}(\Im(\Psi_{E}))\end{bmatrix}. \qquad (21)$$

If the matrix equation (1) is inconsistent, we provide least squares solutions instead. The following result solves Problem 4.1.

Theorem 4.4.

Let $A_{l}\in\mathbb{Q}_{\mathbb{R}}^{m\times n_{l}}$, $B_{l}\in\mathbb{Q}_{\mathbb{R}}^{s_{l}\times t}$, and $E\in\mathbb{Q}_{\mathbb{R}}^{m\times t}$ for $l=1,2,\ldots,r$. Let $Q_{1}$, $Q_{2}$, and $e$ be of the form (21) and $\mathcal{T}=\mathrm{diag}(M_{L_{1}},M_{L_{2}},\ldots,M_{L_{r}})$. Then

$$\mathcal{N}_{LE}=\left\{[X_{1},X_{2},\ldots,X_{r}]\;\middle|\;\begin{bmatrix}\mathrm{vec}(\overrightarrow{X_{1}})\\ \mathrm{vec}(\overrightarrow{X_{2}})\\ \vdots\\ \mathrm{vec}(\overrightarrow{X_{r}})\end{bmatrix}=\mathcal{T}\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right]e+\mathcal{T}\left(I-Q_{1}^{+}Q_{1}-RR^{+}\right)y\right\}, \qquad (22)$$

where $y$ is any vector of suitable size. The unique solution $[X_{1E},X_{2E},\ldots,X_{rE}]\in\mathcal{N}_{LE}$ to Problem 4.1 satisfies

$$\begin{bmatrix}\mathrm{vec}(\overrightarrow{X_{1E}})\\ \mathrm{vec}(\overrightarrow{X_{2E}})\\ \vdots\\ \mathrm{vec}(\overrightarrow{X_{rE}})\end{bmatrix}=\mathcal{T}\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right]e. \qquad (23)$$
Proof.

By using (7), we get

$$\left\lVert\sum_{l=1}^{r}A_{l}X_{l}B_{l}-E\right\rVert_{F}^{2}=\left\lVert\sum_{l=1}^{r}\Psi_{A_{l}X_{l}B_{l}}-\Psi_{E}\right\rVert_{F}^{2}=\left\lVert\sum_{l=1}^{r}\mathrm{vec}\left(\Psi_{A_{l}X_{l}B_{l}}\right)-\mathrm{vec}\left(\Psi_{E}\right)\right\rVert_{F}^{2}.$$

Using Lemma 3.17, we have

$$\mathrm{vec}\left(\Psi_{A_{l}X_{l}B_{l}}\right)=\left(h(B_{l})^{T}\otimes A_{l1}+h(B_{l}\textit{j})^{T}\otimes A_{l2}\right)\mathcal{W}_{n_{l}s_{l}}M_{L_{l}}\mathrm{vec}_{L_{l}}(\overrightarrow{X_{l}}).$$

Now using (19), we have

$$\sum_{l=1}^{r}\mathrm{vec}\left(\Psi_{A_{l}X_{l}B_{l}}\right)=\sum_{l=1}^{r}\left(h(B_{l})^{T}\otimes A_{l1}+h(B_{l}\textit{j})^{T}\otimes A_{l2}\right)\mathcal{W}_{n_{l}s_{l}}M_{L_{l}}\mathrm{vec}_{L_{l}}(\overrightarrow{X_{l}})=\sum_{l=1}^{r}S_{l}\mathrm{vec}_{L_{l}}(\overrightarrow{X_{l}}).$$

By using (20), (21), and Lemma 4.2, we get

$$\begin{split}\left\lVert\sum_{l=1}^{r}A_{l}X_{l}B_{l}-E\right\rVert_{F}^{2}&=\left\lVert\sum_{l=1}^{r}S_{l}\mathrm{vec}_{L_{l}}(\overrightarrow{X_{l}})-\mathrm{vec}\left(\Psi_{E}\right)\right\rVert_{F}^{2}\\ &=\left\lVert\left[S_{1},S_{2},\ldots,S_{r}\right]\begin{bmatrix}\mathrm{vec}_{L_{1}}(\overrightarrow{X_{1}})\\ \mathrm{vec}_{L_{2}}(\overrightarrow{X_{2}})\\ \vdots\\ \mathrm{vec}_{L_{r}}(\overrightarrow{X_{r}})\end{bmatrix}-\mathrm{vec}\left(\Psi_{E}\right)\right\rVert_{F}^{2}\\ &=\left\lVert\begin{bmatrix}\Re(S_{1})&\Re(S_{2})&\cdots&\Re(S_{r})\\ \Im(S_{1})&\Im(S_{2})&\cdots&\Im(S_{r})\end{bmatrix}x-\begin{bmatrix}\mathrm{vec}\left(\Re(\Psi_{E})\right)\\ \mathrm{vec}\left(\Im(\Psi_{E})\right)\end{bmatrix}\right\rVert_{F}^{2}\\ &=\left\lVert\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}x-e\right\rVert_{F}^{2}.\end{split}$$

Hence, Problem 4.1 can be solved by finding the least squares solutions of the following unconstrained real matrix system:

$$\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}x=e.$$

By Lemma 4.1, the least squares solutions of the above real matrix system are

$$x=\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right]e+\left(I-Q_{1}^{+}Q_{1}-RR^{+}\right)y,$$

where $y$ is any vector of suitable size, and the least squares solution with the least norm is $\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right]e$. Using Lemma 3.15, we have

$$\begin{bmatrix}\mathrm{vec}(\overrightarrow{X_{1}})\\ \mathrm{vec}(\overrightarrow{X_{2}})\\ \vdots\\ \mathrm{vec}(\overrightarrow{X_{r}})\end{bmatrix}=\mathcal{T}x.$$

Thus, we can obtain (22) and (23). ∎
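To make the construction concrete, here is a minimal end-to-end sketch of Theorem 4.4 for a single term ($r=1$) with a reduced biquaternion Toeplitz constraint on $X$. It assembles $S_{1}$ from (19), stacks $Q_{1},Q_{2}$ and $e$ as in (21), and, as a simplification of ours, solves the real system with SciPy's minimum-norm `lstsq` instead of the explicit formula (23); all helper constructions follow the earlier sketches and are our own illustration:

```python
import numpy as np
from scipy.linalg import toeplitz, block_diag, lstsq

rng = np.random.default_rng(3)
cm = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
vec = lambda M: M.ravel(order="F")

def h(Z1, Z2):                      # complex representation of Z1 + Z2*j
    return np.block([[Z1, Z2], [Z2, Z1]])

n = 3                               # here m = n = s = t = 3 for brevity
A1, A2, B1, B2, E1, E2 = (cm(n, n) for _ in range(6))

# W_{ns} of (15)
I, Z0 = np.eye(n * n), np.zeros((n * n, n * n))
Wns = np.block([[I, 1j * I, Z0, Z0], [Z0, Z0, I, 1j * I]])

# basis matrix M_L = diag(K_T, ..., K_T) for the Toeplitz structure (Lemma 3.13(i))
K_T = np.zeros((n * n, 2 * n - 1))
for j in range(2 * n - 1):
    p = np.zeros(2 * n - 1); p[j] = 1.0
    K_T[:, j] = vec(toeplitz(p[n - 1 :: -1], p[n - 1 :]))
M_L = block_diag(K_T, K_T, K_T, K_T)

S1 = (np.kron(h(B1, B2).T, A1) + np.kron(h(B2, B1).T, A2)) @ Wns @ M_L   # (19)
e_c = np.concatenate([vec(E1), vec(E2)])                                  # vec(Psi_E)
Q = np.vstack([S1.real, S1.imag])                                         # [Q1; Q2]
e = np.concatenate([e_c.real, e_c.imag])

xL = lstsq(Q, e)[0]                 # min-norm least squares solution, cf. (23)
V = (M_L @ xL).reshape(4, n * n)    # [vec Re X1; vec Im X1; vec Re X2; vec Im X2]
X1 = (V[0] + 1j * V[1]).reshape(n, n, order="F")
X2 = (V[2] + 1j * V[3]).reshape(n, n, order="F")

# X = X1 + X2*j is reduced biquaternion Toeplitz; report the residual norm
P1, P2 = X1 @ B1 + X2 @ B2, X1 @ B2 + X2 @ B1
R1, R2 = A1 @ P1 + A2 @ P2 - E1, A1 @ P2 + A2 @ P1 - E2
print(np.sqrt(np.linalg.norm(R1, "fro") ** 2 + np.linalg.norm(R2, "fro") ** 2))
```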

The following theorem presents the consistency condition for obtaining the solution $X_{l}\in L_{l}(n_{l},s_{l})$ for the RBME of the form (1) and a general formulation for the solution.

Theorem 4.5.

Consider the RBME of the form (1) and let $\mathcal{T}=\mathrm{diag}(M_{L_{1}},M_{L_{2}},\ldots,M_{L_{r}})$. Then the matrix equation (1) has an L-structure solution $X_{l}\in L_{l}(n_{l},s_{l})$, for $l=1,2,\ldots,r$, if and only if

$$\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}^{+}e=e, \qquad (24)$$

where $Q_{1}$, $Q_{2}$, and $e$ are of the form (21). In this case, the general solution $X_{l}\in L_{l}(n_{l},s_{l})$ satisfies

$$\begin{bmatrix}\mathrm{vec}(\overrightarrow{X_{1}})\\ \mathrm{vec}(\overrightarrow{X_{2}})\\ \vdots\\ \mathrm{vec}(\overrightarrow{X_{r}})\end{bmatrix}=\mathcal{T}\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right]e+\mathcal{T}\left(I-Q_{1}^{+}Q_{1}-RR^{+}\right)y,$$

where $y$ is any vector of suitable size. Further, if the consistency condition holds, then the RBME of the form (1) has a unique solution $X_{l}\in L_{l}(n_{l},s_{l})$ if and only if

$$\mathrm{rank}\left(\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}\right)=\dim\left(\begin{bmatrix}\mathrm{vec}_{L_{1}}(\overrightarrow{X_{1}})\\ \mathrm{vec}_{L_{2}}(\overrightarrow{X_{2}})\\ \vdots\\ \mathrm{vec}_{L_{r}}(\overrightarrow{X_{r}})\end{bmatrix}\right).$$

In this case, the unique solution $X_{l}\in L_{l}(n_{l},s_{l})$ satisfies

$$\begin{bmatrix}\mathrm{vec}(\overrightarrow{X_{1}})\\ \mathrm{vec}(\overrightarrow{X_{2}})\\ \vdots\\ \mathrm{vec}(\overrightarrow{X_{r}})\end{bmatrix}=\mathcal{T}\left[Q_{1}^{+}-H^{T}Q_{2}Q_{1}^{+},H^{T}\right]e.$$
Proof.

The proof follows from Lemma 4.1 and the fact that

$$\sum_{l=1}^{r}A_{l}X_{l}B_{l}=E\iff\begin{bmatrix}\Re(S_{1})&\Re(S_{2})&\cdots&\Re(S_{r})\\ \Im(S_{1})&\Im(S_{2})&\cdots&\Im(S_{r})\end{bmatrix}\begin{bmatrix}\mathrm{vec}_{L_{1}}(\overrightarrow{X_{1}})\\ \mathrm{vec}_{L_{2}}(\overrightarrow{X_{2}})\\ \vdots\\ \mathrm{vec}_{L_{r}}(\overrightarrow{X_{r}})\end{bmatrix}=\begin{bmatrix}\mathrm{vec}(\Re(\Psi_{E}))\\ \mathrm{vec}(\Im(\Psi_{E}))\end{bmatrix}.$$
∎

The remainder of this section addresses the least squares problem associated with matrix equations (2) and (3). This involves finding the least squares solutions of the following unconstrained real matrix system:

$$\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}\mathrm{vec}_{L}(\overrightarrow{X})=e. \qquad (25)$$

Let $M_{L}$ be the basis matrix of $L^{R}(n,4s)$. Using Lemma 3.15, we recover $\mathrm{vec}(\overrightarrow{X})$ from $\mathrm{vec}_{L}(\overrightarrow{X})$ as

$$\mathrm{vec}(\overrightarrow{X})=M_{L}\mathrm{vec}_{L}(\overrightarrow{X}).$$

The methodology for solving RBMEs of the form (2) and (3) remains the same as outlined in Subsection 4.1. Therefore, our focus here is solely on presenting the values of $Q_{1}$, $Q_{2}$, and $e$, while intentionally omitting the detailed results.
Linear matrix equation in one unknown L-structure
Consider the matrix equation (2) and let $A_{l}=A_{l1}+A_{l2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$, $B_{l}\in\mathbb{Q}_{\mathbb{R}}^{s\times t}$, $C_{p}=C_{p1}+C_{p2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times s}$, $D_{p}\in\mathbb{Q}_{\mathbb{R}}^{n\times t}$, and $E=E_{1}+E_{2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times t}$ for $l=1,2,\ldots,r$ and $p=1,2,\ldots,q$. Let

$$S:=\left(\sum_{l=1}^{r}\left(h(B_{l})^{T}\otimes A_{l1}+h(B_{l}\textit{j})^{T}\otimes A_{l2}\right)\right)\mathcal{W}_{ns}M_{L},$$
$$N:=\left(\sum_{p=1}^{q}\left(h(D_{p})^{T}\otimes C_{p1}+h(D_{p}\textit{j})^{T}\otimes C_{p2}\right)\right)\mathcal{S}_{ns}\mathcal{W}_{ns}M_{L}.$$

Then $Q_{1}$, $Q_{2}$, and $e$ (as in (25)) for solving the RBME of the form (2) are given by

$$Q_{1}:=\Re(S)+\Re(N),\quad Q_{2}:=\Im(S)+\Im(N),\quad e:=\begin{bmatrix}\mathrm{vec}(\Re(\Psi_{E}))\\ \mathrm{vec}(\Im(\Psi_{E}))\end{bmatrix}.$$

Generalized coupled linear matrix equations in one unknown L-structure
Consider the matrix equation (3) and let $A_{l}=A_{l1}+A_{l2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times n}$, $B_{l}\in\mathbb{Q}_{\mathbb{R}}^{s\times t}$, and $E_{l}=E_{l1}+E_{l2}\textit{j}\in\mathbb{Q}_{\mathbb{R}}^{m\times t}$ for $l=1,2,\ldots,r$. Let

$$T:=\begin{bmatrix}h(B_{1})^{T}\otimes A_{11}+h(B_{1}\textit{j})^{T}\otimes A_{12}\\ h(B_{2})^{T}\otimes A_{21}+h(B_{2}\textit{j})^{T}\otimes A_{22}\\ \vdots\\ h(B_{r})^{T}\otimes A_{r1}+h(B_{r}\textit{j})^{T}\otimes A_{r2}\end{bmatrix}\mathcal{W}_{ns}M_{L},\qquad z:=\begin{bmatrix}\mathrm{vec}(\Psi_{E_{1}})\\ \mathrm{vec}(\Psi_{E_{2}})\\ \vdots\\ \mathrm{vec}(\Psi_{E_{r}})\end{bmatrix}.$$

Then $Q_{1}$, $Q_{2}$, and $e$ (as in (25)) for solving the RBME of the form (3) are given by

$$Q_{1}:=\Re(T),\quad Q_{2}:=\Im(T),\quad e:=\begin{bmatrix}\Re(z)\\ \Im(z)\end{bmatrix}.$$

5 Applications

We now apply the framework developed in Section 4 to specific cases and examine how the theory carries over to various applications, including L-structure solutions of complex matrix equations, L-structure solutions of real matrix equations, PDIEPs, and generalized PDIEPs.

5.1 Solutions of the matrix equation $AXB+CYD=E$ for $[X,Y]\in\mathbb{H}{\mathbb{C}}^{n\times n}\times\mathbb{H}{\mathbb{C}}^{n\times n}$

As a special case, we now discuss the Hankel solutions of the complex matrix equation

$$AXB+CYD=E, \qquad (26)$$

where $A,C\in{\mathbb{C}}^{m\times n}$, $B,D\in{\mathbb{C}}^{n\times s}$, and $E\in{\mathbb{C}}^{m\times s}$. The following notations are required for solving matrix equation (26). Set

$$W:=\left(B^{T}\otimes A\right)\left[I_{n^{2}},\textit{i}\,I_{n^{2}}\right]\begin{bmatrix}K_{H}&0\\ 0&K_{H}\end{bmatrix},\qquad J:=\left(D^{T}\otimes C\right)\left[I_{n^{2}},\textit{i}\,I_{n^{2}}\right]\begin{bmatrix}K_{H}&0\\ 0&K_{H}\end{bmatrix}. \qquad (27)$$

Further, $Q_{1}$, $Q_{2}$, $x$, and $e$ (as in (16)) are given by

$$Q_{1}:=\left[\Re(W),\Re(J)\right],\quad Q_{2}:=\left[\Im(W),\Im(J)\right],\quad x:=\begin{bmatrix}\mathrm{vec}_{H}(\Re(X))\\ \mathrm{vec}_{H}(\Im(X))\\ \mathrm{vec}_{H}(\Re(Y))\\ \mathrm{vec}_{H}(\Im(Y))\end{bmatrix},\quad e:=\begin{bmatrix}\mathrm{vec}(\Re(E))\\ \mathrm{vec}(\Im(E))\end{bmatrix}. \qquad (28)$$

Using (14), (27), and Lemma 3.11, we have

$$\begin{split}\mathrm{vec}(AXB)&=(B^{T}\otimes A)\mathrm{vec}(X)\\ &=(B^{T}\otimes A)\left(\mathrm{vec}(\Re(X))+\textit{i}\,\mathrm{vec}(\Im(X))\right)\\ &=(B^{T}\otimes A)\left[I_{n^{2}},\textit{i}\,I_{n^{2}}\right]\begin{bmatrix}\mathrm{vec}(\Re(X))\\ \mathrm{vec}(\Im(X))\end{bmatrix}\\ &=(B^{T}\otimes A)\left[I_{n^{2}},\textit{i}\,I_{n^{2}}\right]\begin{bmatrix}K_{H}&0\\ 0&K_{H}\end{bmatrix}\begin{bmatrix}\mathrm{vec}_{H}(\Re(X))\\ \mathrm{vec}_{H}(\Im(X))\end{bmatrix}\\ &=W\begin{bmatrix}\mathrm{vec}_{H}(\Re(X))\\ \mathrm{vec}_{H}(\Im(X))\end{bmatrix}.\end{split}$$

Similarly,

\mathrm{vec}(CYD)=(D^{T}\otimes C)\left[I_{n^{2}},\textit{{i}}\,I_{n^{2}}\right]\begin{bmatrix}K_{H}&0\\ 0&K_{H}\end{bmatrix}\begin{bmatrix}\mathrm{vec}_{H}(\Re(Y))\\ \mathrm{vec}_{H}(\Im(Y))\end{bmatrix}=J\begin{bmatrix}\mathrm{vec}_{H}(\Re(Y))\\ \mathrm{vec}_{H}(\Im(Y))\end{bmatrix}.

Using (28) and Lemma 4.2, we have

\begin{aligned}AXB+CYD=E&\iff\mathrm{vec}(AXB)+\mathrm{vec}(CYD)=\mathrm{vec}(E)\\ &\iff W\begin{bmatrix}\mathrm{vec}_{H}(\Re(X))\\ \mathrm{vec}_{H}(\Im(X))\end{bmatrix}+J\begin{bmatrix}\mathrm{vec}_{H}(\Re(Y))\\ \mathrm{vec}_{H}(\Im(Y))\end{bmatrix}=\mathrm{vec}(E)\\ &\iff\left[W,J\right]\begin{bmatrix}\mathrm{vec}_{H}(\Re(X))\\ \mathrm{vec}_{H}(\Im(X))\\ \mathrm{vec}_{H}(\Re(Y))\\ \mathrm{vec}_{H}(\Im(Y))\end{bmatrix}=\mathrm{vec}(E)\\ &\iff\begin{bmatrix}\Re(W)&\Re(J)\\ \Im(W)&\Im(J)\end{bmatrix}x=\begin{bmatrix}\mathrm{vec}(\Re(E))\\ \mathrm{vec}(\Im(E))\end{bmatrix}\\ &\iff\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}x=e.\end{aligned}

Hence, the matrix equation AXB+CYD=E for [X,Y]\in\mathbb{H}\mathbb{C}^{n\times n}\times\mathbb{H}\mathbb{C}^{n\times n} can be solved by solving the following unconstrained real matrix system:

\begin{bmatrix}Q_{1}\\ Q_{2}\end{bmatrix}x=e.

Using Lemma 3.11, we have

\begin{bmatrix}\mathrm{vec}(\Re(X))\\ \mathrm{vec}(\Im(X))\\ \mathrm{vec}(\Re(Y))\\ \mathrm{vec}(\Im(Y))\end{bmatrix}=\begin{bmatrix}K_{H}&0&0&0\\ 0&K_{H}&0&0\\ 0&0&K_{H}&0\\ 0&0&0&K_{H}\end{bmatrix}x.
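The whole of this subsection condenses into a few lines of MATLAB. The helper hankel_basis below is a hypothetical name introduced here for illustration: it builds the matrix K_{H} satisfying \mathrm{vec}(X)=K_{H}\,\mathrm{vec}_{H}(X) for an n\times n Hankel matrix. The sketch is given under these assumptions and is not the paper's implementation.

function K = hankel_basis(n)
% Maps the 2n-1 defining entries h of an n-by-n Hankel matrix X,
% where X(i,j) = h(i+j-1), to vec(X): vec(X) = K*h.
K = zeros(n^2, 2*n-1);
for j = 1:n
    for i = 1:n
        K((j-1)*n + i, i+j-1) = 1;
    end
end
end

With this helper, the Hankel solutions of AXB+CYD=E over \mathbb{C} can be obtained as follows:

% Least squares Hankel solutions of A*X*B + C*Y*D = E (sketch)
n  = size(A,2);  KH = hankel_basis(n);  m1 = 2*n-1;
W  = kron(B.',A)*[eye(n^2), 1i*eye(n^2)]*blkdiag(KH,KH);   % as in (27)
J  = kron(D.',C)*[eye(n^2), 1i*eye(n^2)]*blkdiag(KH,KH);
x  = pinv([real(W) real(J); imag(W) imag(J)]) * [real(E(:)); imag(E(:))];
X  = reshape(KH*x(1:m1),n,n)        + 1i*reshape(KH*x(m1+1:2*m1),n,n);
Y  = reshape(KH*x(2*m1+1:3*m1),n,n) + 1i*reshape(KH*x(3*m1+1:4*m1),n,n);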

5.2 Solutions of matrix equation AXB+CYD=E for [X,Y]\in\mathbb{S}\mathbb{T}{\mathbb{R}}^{n\times n}\times\mathbb{S}\mathbb{T}{\mathbb{R}}^{n\times n}

As a special case, we now discuss the symmetric Toeplitz solutions of the real matrix equation

AXB+CYD=E, (29)

where A,C\in{\mathbb{R}}^{m\times n}, B,D\in{\mathbb{R}}^{n\times s}, and E\in{\mathbb{R}}^{m\times s}. Using Lemma 3.10, we have

\begin{aligned}AXB+CYD=E&\iff\mathrm{vec}(AXB)+\mathrm{vec}(CYD)=\mathrm{vec}(E)\\ &\iff\left(B^{T}\otimes A\right)\mathrm{vec}(X)+\left(D^{T}\otimes C\right)\mathrm{vec}(Y)=\mathrm{vec}(E)\\ &\iff\left(B^{T}\otimes A\right)K_{ST}\mathrm{vec}_{ST}(X)+\left(D^{T}\otimes C\right)K_{ST}\mathrm{vec}_{ST}(Y)=\mathrm{vec}(E)\\ &\iff\left[\left(B^{T}\otimes A\right)K_{ST},\left(D^{T}\otimes C\right)K_{ST}\right]\begin{bmatrix}\mathrm{vec}_{ST}(X)\\ \mathrm{vec}_{ST}(Y)\end{bmatrix}=\mathrm{vec}(E).\end{aligned}

Hence, the matrix equation AXB+CYD=E for [X,Y]\in\mathbb{S}\mathbb{T}{\mathbb{R}}^{n\times n}\times\mathbb{S}\mathbb{T}{\mathbb{R}}^{n\times n} can be solved by solving the following unconstrained real matrix system:

\widetilde{Q}x=\widetilde{e},

where \widetilde{Q}=\left[\left(B^{T}\otimes A\right)K_{ST},\left(D^{T}\otimes C\right)K_{ST}\right], x=\begin{bmatrix}\mathrm{vec}_{ST}(X)\\ \mathrm{vec}_{ST}(Y)\end{bmatrix}, and \widetilde{e}=\mathrm{vec}(E). Using Lemma 3.10, we have

\begin{bmatrix}\mathrm{vec}(X)\\ \mathrm{vec}(Y)\end{bmatrix}=\begin{bmatrix}K_{ST}&0\\ 0&K_{ST}\end{bmatrix}x.
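The real symmetric Toeplitz case is even simpler. In the sketch below, symtoeplitz_basis is again a hypothetical helper: it builds K_{ST} with \mathrm{vec}(X)=K_{ST}\,\mathrm{vec}_{ST}(X), where \mathrm{vec}_{ST}(X) collects the n entries of the first row of X.

function K = symtoeplitz_basis(n)
% Maps the n defining entries t of an n-by-n symmetric Toeplitz X,
% where X(i,j) = t(|i-j|+1), to vec(X): vec(X) = K*t.
K = zeros(n^2, n);
for j = 1:n
    for i = 1:n
        K((j-1)*n + i, abs(i-j)+1) = 1;
    end
end
end

% Least squares symmetric Toeplitz solutions of A*X*B + C*Y*D = E (sketch)
n  = size(A,2);  KST = symtoeplitz_basis(n);
Qt = [kron(B.',A)*KST, kron(D.',C)*KST];
x  = pinv(Qt) * E(:);
X  = toeplitz(x(1:n));  Y = toeplitz(x(n+1:2*n));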

5.3 PDIEP and Generalized PDIEP

In this subsection, we aim to demonstrate the application of our developed framework to a range of inverse problems. Here, we develop a numerical solution methodology for inverse problems in which the spectral constraints involve only partial eigenpair information rather than the entire spectrum. Mathematically, the problem statement is as follows:

Problem 5.1 (PDIEP).

Given vectors \{u_{1},u_{2},\ldots,u_{k}\}\subset\mathbb{F}^{n}, values \{\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\}\subset\mathbb{F}, and a set \mathcal{L} of structured matrices, find a matrix M\in\mathcal{L} such that

Mu_{i}=\lambda_{i}u_{i},\;\;\;\;i=1,2,\ldots,k,

where \mathbb{F} represents either the real field {\mathbb{R}} or the complex field {\mathbb{C}}.

To simplify the discussion, we will use the matrix pair \left(\Lambda,\Phi\right) to describe partial eigenpair information, where

\Lambda=\mathrm{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{k})\in\mathbb{F}^{k\times k},\;\mbox{and}\;\Phi=[u_{1},u_{2},\ldots,u_{k}]\in\mathbb{F}^{n\times k}. (30)
Remark 5.1.

PDIEP can be written as M\Phi=\Phi\Lambda. By using the transformations

A=I_{n},\;X=M,\;B=\Phi,\;\mbox{and}\;E=\Phi\Lambda,

we can find a solution to PDIEP by solving the matrix equation AXB=E for X\in\mathcal{L}.
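In MATLAB terms, Remark 5.1 amounts to a single structured least squares solve. A minimal sketch for a generic L-structure, assuming a basis matrix KL with \mathrm{vec}(M)=K_{L}\,x_{s} (for instance, the hypothetical hankel_basis(n) or symtoeplitz_basis(n) helpers above); Phi and Lambda hold the prescribed eigenpairs as in (30):

% PDIEP via the matrix equation M*Phi = Phi*Lambda (sketch)
n  = size(Phi,1);
G  = kron(Phi.', eye(n)) * KL;    % vec(M*Phi) = (Phi.' kron I_n) vec(M)
xs = pinv(G) * reshape(Phi*Lambda, [], 1);
M  = reshape(KL*xs, n, n);        % structured by construction

For complex data, the complex Moore-Penrose solve above returns the same minimizer as the split real system of Subsection 5.1, since K_{L} is real and the real and imaginary parts of x_{s} are independent unknowns.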

Next, we investigate generalized PDIEPs. In a nutshell, the problem is:

Problem 5.2 (Generalized PDIEP).

Given vectors \{u_{1},u_{2},\ldots,u_{k}\}\subset\mathbb{F}^{n}, values \{\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\}\subset\mathbb{F}, and a set \mathcal{L} of structured matrices, find a pair of matrices M,N\in\mathcal{L} such that

Mu_{i}=\lambda_{i}Nu_{i},\;\;\;\;i=1,2,\ldots,k,

where \mathbb{F} represents either the real field {\mathbb{R}} or the complex field {\mathbb{C}}.

Remark 5.2.

Generalized PDIEP can be written as M\Phi=N\Phi\Lambda, where \Lambda and \Phi are as in (30). By using the transformations

A=I_{n},\;X=M,\;B=\Phi,\;C=-I_{n},\;Y=N,\;D=\Phi\Lambda,\;\mbox{and}\;E=0,

we can find a solution to the generalized PDIEP by solving the matrix equation AXB+CYD=E for X,Y\in\mathcal{L}.
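Because the right-hand side is E=0, the stacked system is homogeneous and the trivial pair M=N=0 is always a least squares solution, so a nontrivial pencil must be picked from the (numerical) null space. One convenient choice, used here purely as an illustrative assumption about the normalization, is the right singular vector for the smallest singular value:

% Generalized PDIEP: M*Phi = N*Phi*Lambda with M, N L-structured (sketch)
n = size(Phi,1);  p = size(KL,2);          % KL as in the previous sketch
G = [kron(Phi.',eye(n))*KL, -kron((Phi*Lambda).',eye(n))*KL];
[~,~,V] = svd(G);
z = V(:,end);                              % minimizes ||G*z|| over unit vectors
M = reshape(KL*z(1:p), n, n);
N = reshape(KL*z(p+1:end), n, n);

Any nonzero scalar multiple of (M,N) solves the same problem, so the pencils reported in Section 6 may differ from this sketch by such a scaling.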

Though the primary emphasis of this paper is on inverse problems with symmetric Toeplitz or Hankel structure, the overall approach extends to any structure in which an arbitrary set of linear relationships among the matrix entries is permitted.

6 Numerical verification

In this section, we present numerical examples to verify our findings. All calculations are performed on an Intel Core i7-9700@3.00GHz/16GB computer using MATLAB R2021b software. We now present an example to verify our method for finding the least squares solution of the RBME of the form (1).

Example 6.1.

Let

A=rand(4,5)+rand(4,5)\textit{{j}},\;B=rand(5,7)+rand(5,7)\textit{{j}},
C=ones(4,5)+rand(4,5)\textit{{j}},\;D=rand(5,7)+ones(5,7)\textit{{j}}.

Let c_{1}=\left[\textit{{i}},\,2+\textit{{i}},\,0,\,1,\,\textit{{i}}\right], r_{1}=\left[\textit{{i}},\,0,\,2\textit{{i}},\,1,\,1+\textit{{i}}\right], c_{2}=\left[1,\,3\textit{{i}},\,2+3\textit{{i}},\,1,\,0\right], and r_{2}=\left[1,\,0,\,1,\,\textit{{i}},\,2\right]. Define

\widetilde{X}=\widetilde{X}_{1}+\widetilde{X}_{2}\textit{{j}},

where \widetilde{X}_{1}=toeplitz(c_{1},r_{1}) and \widetilde{X}_{2}=toeplitz(c_{2},r_{2}).

Let c_{3}=\left[2+\textit{{i}},\,4,\,\textit{{i}},\,1+3\textit{{i}},\,2\textit{{i}}\right], r_{3}=\left[2+\textit{{i}},\,7+6\textit{{i}},\,3+2\textit{{i}},\,\textit{{i}},\,1+\textit{{i}}\right], c_{4}=\left[1+3\textit{{i}},\,3\textit{{i}},\,2+3\textit{{i}},\,3,\,5+\textit{{i}}\right], and r_{4}=\left[1+3\textit{{i}},\,5,\,1+6\textit{{i}},\,3+\textit{{i}},\,2\textit{{i}}\right]. Define

\widetilde{Y}=\widetilde{Y}_{1}+\widetilde{Y}_{2}\textit{{j}},

where \widetilde{Y}_{1}=toeplitz(c_{3},r_{3}) and \widetilde{Y}_{2}=toeplitz(c_{4},r_{4}). Let E=A\widetilde{X}B+C\widetilde{Y}D. Hence, \left[\widetilde{X},\widetilde{Y}\right] is the least squares Toeplitz solution with the least norm of the RBME AXB+CYD=E.
Next, we take the matrices A,B,C,D, and E as input and apply Theorem 4.4 to calculate the least squares Toeplitz solution with the least norm of the RBME AXB+CYD=E. We obtain X=X_{1}+X_{2}\textit{{j}} and Y=Y_{1}+Y_{2}\textit{{j}}, where

X_{1}=\begin{bmatrix}0+1\textit{{i}}&0-0\textit{{i}}&0+2\textit{{i}}&1-0\textit{{i}}&1+1\textit{{i}}\\ 2+1\textit{{i}}&0+1\textit{{i}}&0-0\textit{{i}}&0+2\textit{{i}}&1-0\textit{{i}}\\ 0-0\textit{{i}}&2+1\textit{{i}}&0+1\textit{{i}}&0-0\textit{{i}}&0+2\textit{{i}}\\ 1+0\textit{{i}}&0-0\textit{{i}}&2+1\textit{{i}}&0+1\textit{{i}}&0-0\textit{{i}}\\ 0+1\textit{{i}}&1+0\textit{{i}}&0-0\textit{{i}}&2+1\textit{{i}}&0+1\textit{{i}}\end{bmatrix},\;X_{2}=\begin{bmatrix}1-0\textit{{i}}&0+0\textit{{i}}&1-0\textit{{i}}&0+1\textit{{i}}&2-0\textit{{i}}\\ 0+3\textit{{i}}&1-0\textit{{i}}&0+0\textit{{i}}&1-0\textit{{i}}&0+1\textit{{i}}\\ 2+3\textit{{i}}&0+3\textit{{i}}&1-0\textit{{i}}&0+0\textit{{i}}&1-0\textit{{i}}\\ 1+0\textit{{i}}&2+3\textit{{i}}&0+3\textit{{i}}&1-0\textit{{i}}&0+0\textit{{i}}\\ 0+0\textit{{i}}&1+0\textit{{i}}&2+3\textit{{i}}&0+3\textit{{i}}&1-0\textit{{i}}\end{bmatrix},
Y_{1}=\begin{bmatrix}2+1\textit{{i}}&7+6\textit{{i}}&3+2\textit{{i}}&0+1\textit{{i}}&1+1\textit{{i}}\\ 4+0\textit{{i}}&2+1\textit{{i}}&7+6\textit{{i}}&3+2\textit{{i}}&0+1\textit{{i}}\\ 0+1\textit{{i}}&4+0\textit{{i}}&2+1\textit{{i}}&7+6\textit{{i}}&3+2\textit{{i}}\\ 1+3\textit{{i}}&0+1\textit{{i}}&4+0\textit{{i}}&2+1\textit{{i}}&7+6\textit{{i}}\\ 0+2\textit{{i}}&1+3\textit{{i}}&0+1\textit{{i}}&4+0\textit{{i}}&2+1\textit{{i}}\end{bmatrix},\;Y_{2}=\begin{bmatrix}1+3\textit{{i}}&5+0\textit{{i}}&1+6\textit{{i}}&3+1\textit{{i}}&0+2\textit{{i}}\\ 0+3\textit{{i}}&1+3\textit{{i}}&5+0\textit{{i}}&1+6\textit{{i}}&3+1\textit{{i}}\\ 2+3\textit{{i}}&0+3\textit{{i}}&1+3\textit{{i}}&5+0\textit{{i}}&1+6\textit{{i}}\\ 3+0\textit{{i}}&2+3\textit{{i}}&0+3\textit{{i}}&1+3\textit{{i}}&5+0\textit{{i}}\\ 5+1\textit{{i}}&3+0\textit{{i}}&2+3\textit{{i}}&0+3\textit{{i}}&1+3\textit{{i}}\end{bmatrix}.

Clearly, X and Y are reduced biquaternion Toeplitz matrices. We have \epsilon=\left\lVert\left[X,Y\right]-\left[\widetilde{X},\widetilde{Y}\right]\right\rVert_{F}=1.7470\times 10^{-13}.
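The reported error can be reproduced with a one-line check (a sketch; X1,X2,Y1,Y2 are the computed components, Xt1,Xt2,Yt1,Yt2 are hypothetical names for the components of \widetilde{X} and \widetilde{Y}, and the reduced biquaternion Frobenius norm is assumed to aggregate the complex components):

% Frobenius error between computed and prescribed solutions (sketch)
err = norm([X1-Xt1, X2-Xt2, Y1-Yt1, Y2-Yt2], 'fro');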

From Example 6.1, we find that the error \epsilon is of the order of 10^{-13} and is negligible. This demonstrates the effectiveness of our method in determining the structure-constrained least squares solution to the RBME of the form (1). Next, we illustrate an example for finding the structure-constrained least squares solution to the RBME of the form (3).

Example 6.2.

Let

A=ones(4,5)+rand(4,5)\textit{{j}},\;B=ones(5,7)+rand(5,7)\textit{{j}},
C=rand(4,5)+rand(4,5)\textit{{j}},\;D=ones(5,7)+rand(5,7)\textit{{j}}.

Let c_{1}=\left[3+\textit{{i}},\,2+4\textit{{i}},\,6+\textit{{i}},\,2+\textit{{i}},\,3\textit{{i}}\right], r_{1}=\left[3\textit{{i}},\,7,\,3+2\textit{{i}},\,1+\textit{{i}},\,9+\textit{{i}}\right], c_{2}=\left[1+2\textit{{i}},\,5+3\textit{{i}},\,3\textit{{i}},\,1+7\textit{{i}},\,3\right], and r_{2}=\left[3,\,1+\textit{{i}},\,2+8\textit{{i}},\,2+\textit{{i}},\,2+2\textit{{i}}\right]. Define

\widetilde{X}=\widetilde{X}_{1}+\widetilde{X}_{2}\textit{{j}},

where \widetilde{X}_{1}=hankel(c_{1},r_{1}) and \widetilde{X}_{2}=hankel(c_{2},r_{2}). Let E=A\widetilde{X}B and F=C\widetilde{X}D. Hence, \widetilde{X} is the least squares Hankel solution with the least norm of the RBME \left(AXB,CXD\right)=\left(E,F\right).

Next, we take the matrices A,B,C,D,E, and F as input to calculate the least squares Hankel solution with the least norm of the RBME \left(AXB,CXD\right)=\left(E,F\right). We obtain X=X_{1}+X_{2}\textit{{j}}, where

X_{1}=\begin{bmatrix}3+1\textit{{i}}&2+4\textit{{i}}&6+1\textit{{i}}&2+1\textit{{i}}&0+3\textit{{i}}\\ 2+4\textit{{i}}&6+1\textit{{i}}&2+1\textit{{i}}&0+3\textit{{i}}&7+0\textit{{i}}\\ 6+1\textit{{i}}&2+1\textit{{i}}&0+3\textit{{i}}&7+0\textit{{i}}&3+2\textit{{i}}\\ 2+1\textit{{i}}&0+3\textit{{i}}&7+0\textit{{i}}&3+2\textit{{i}}&1+1\textit{{i}}\\ 0+3\textit{{i}}&7+0\textit{{i}}&3+2\textit{{i}}&1+1\textit{{i}}&9+1\textit{{i}}\end{bmatrix},\;X_{2}=\begin{bmatrix}1+2\textit{{i}}&5+3\textit{{i}}&0+3\textit{{i}}&1+7\textit{{i}}&3+0\textit{{i}}\\ 5+3\textit{{i}}&0+3\textit{{i}}&1+7\textit{{i}}&3+0\textit{{i}}&1+1\textit{{i}}\\ 0+3\textit{{i}}&1+7\textit{{i}}&3+0\textit{{i}}&1+1\textit{{i}}&2+8\textit{{i}}\\ 1+7\textit{{i}}&3+0\textit{{i}}&1+1\textit{{i}}&2+8\textit{{i}}&2+1\textit{{i}}\\ 3+0\textit{{i}}&1+1\textit{{i}}&2+8\textit{{i}}&2+1\textit{{i}}&2+2\textit{{i}}\end{bmatrix}.

Clearly, X is a reduced biquaternion Hankel matrix. We have \epsilon=\left\lVert X-\widetilde{X}\right\rVert_{F}=5.7042\times 10^{-13}.

From Example 6.2, we find that the error \epsilon is of the order of 10^{-13} and is negligible. This demonstrates the effectiveness of our method in determining the structure-constrained least squares solution to the RBME of the form (3).

Next, we discuss Hankel PDIEPs [4, Problem 5.1]. Given a set of vectors \left\{u_{1},u_{2},\ldots,u_{k}\right\}\subset\mathbb{C}^{n}, where k\geq 1, and a set of numbers \left\{\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\right\}\subset\mathbb{C}, our aim is to construct a Hankel matrix M\in{\mathbb{C}}^{n\times n} satisfying Mu_{i}=\lambda_{i}u_{i} for i=1,2,\ldots,k. A sketch of the computation appears below, followed by an illustrative example.
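The following MATLAB sketch instantiates the generic PDIEP solve of Remark 5.1 for the Hankel structure (hankel_basis is the hypothetical helper from Subsection 5.1; Phi_t and Lambda_t denote the prescribed partial eigeninformation):

% Hankel PDIEP from k prescribed eigenpairs (sketch)
n   = size(Phi_t,1);  KH = hankel_basis(n);
G   = kron(Phi_t.', eye(n)) * KH;
h   = pinv(G) * reshape(Phi_t*Lambda_t, [], 1);
M_t = reshape(KH*h, n, n);                        % Hankel by construction
res = norm(M_t*Phi_t - Phi_t*Lambda_t, 'fro');    % aggregate residual over the k eigenpairs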

Example 6.3.

To establish test data, we first generate a Hankel matrix M. Let c=\left[1+2\textit{{i}},\,2-4\textit{{i}},\,-1+3\textit{{i}},\,4\right] and r=\left[4,\,3+4\textit{{i}},\,2\textit{{i}},\,3\right]. Define M=hankel(c,r). Let \left(\Lambda,\Phi\right) denote its eigenpairs. We have \Lambda=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{4})\in\mathbb{C}^{4\times 4} and \Phi=\left[u_{1},u_{2},u_{3},u_{4}\right]\in\mathbb{C}^{4\times 4}, where

\left[\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\right]=\left[-3.8029+7.9250\textit{{i}},\,-2.7826-3.5629\textit{{i}},\,5.6954-1.0619\textit{{i}},\,6.8900+5.6998\textit{{i}}\right],

and their corresponding eigenvectors

\begin{bmatrix}u_{1}&u_{2}&u_{3}&u_{4}\end{bmatrix}=\begin{bmatrix}0.6240+0.0000\textit{{i}}&-0.4940-0.0377\textit{{i}}&-0.5395-0.2011\textit{{i}}&0.1572-0.2047\textit{{i}}\\ -0.6145-0.0885\textit{{i}}&-0.5863+0.0219\textit{{i}}&0.0172-0.1236\textit{{i}}&0.4818-0.1113\textit{{i}}\\ 0.4246+0.0774\textit{{i}}&0.1217-0.1368\textit{{i}}&0.5855+0.0000\textit{{i}}&0.6784+0.0000\textit{{i}}\\ -0.1893+0.0550\textit{{i}}&0.6138+0.0000\textit{{i}}&-0.5259-0.1832\textit{{i}}&0.4609-0.1275\textit{{i}}\end{bmatrix}.

Case \mathbf{1}. Reconstruction from one eigenpair (k=1): Let the prescribed partial eigeninformation be given by

\widetilde{\Lambda}=\lambda_{3}\in\mathbb{C}\;\textrm{and}\;\widetilde{\Phi}=u_{3}\in\mathbb{C}^{4\times 1}.

Construct the Hankel matrix \widetilde{M} such that \widetilde{M}u_{i}=\lambda_{i}u_{i} for i=3. By using the transformations A=I_{4},\;X=\widetilde{M},\;B=\widetilde{\Phi},\;\mbox{and}\;E=\widetilde{\Phi}\widetilde{\Lambda}, we find the Hankel solution to the matrix equation AXB=E. We obtain

\widetilde{M}=\begin{bmatrix}1.6614+0.3115\textit{{i}}&1.0564+0.6597\textit{{i}}&-1.8088+0.4921\textit{{i}}&2.6736-0.4763\textit{{i}}\\ 1.0564+0.6597\textit{{i}}&-1.8088+0.4921\textit{{i}}&2.6736-0.4763\textit{{i}}&2.0823-0.5222\textit{{i}}\\ -1.8088+0.4921\textit{{i}}&2.6736-0.4763\textit{{i}}&2.0823-0.5222\textit{{i}}&-1.7415+0.7505\textit{{i}}\\ 2.6736-0.4763\textit{{i}}&2.0823-0.5222\textit{{i}}&-1.7415+0.7505\textit{{i}}&1.2459+0.2833\textit{{i}}\end{bmatrix}.

Then, \widetilde{M} is the desired Hankel matrix.

Case \mathbf{2}. Reconstruction from two eigenpairs (k=2): Let the prescribed partial eigeninformation be given by

\widetilde{\Lambda}=\mathrm{diag}(\lambda_{2},\lambda_{3})\in\mathbb{C}^{2\times 2}\;\textrm{and}\;\widetilde{\Phi}=\left[u_{2},u_{3}\right]\in\mathbb{C}^{4\times 2}.

Construct the Hankel matrix \widetilde{M} such that \widetilde{M}u_{i}=\lambda_{i}u_{i} for i=2,3. By using the transformations A=I_{4},\;X=\widetilde{M},\;B=\widetilde{\Phi},\;\mbox{and}\;E=\widetilde{\Phi}\widetilde{\Lambda}, we find the Hankel solution to the matrix equation AXB=E. We obtain

\widetilde{M}=\begin{bmatrix}1.0000+2.0000\textit{{i}}&2.0000-4.0000\textit{{i}}&-1.0000+3.0000\textit{{i}}&4.0000-0.0000\textit{{i}}\\ 2.0000-4.0000\textit{{i}}&-1.0000+3.0000\textit{{i}}&4.0000-0.0000\textit{{i}}&3.0000+4.0000\textit{{i}}\\ -1.0000+3.0000\textit{{i}}&4.0000-0.0000\textit{{i}}&3.0000+4.0000\textit{{i}}&-0.0000+2.0000\textit{{i}}\\ 4.0000-0.0000\textit{{i}}&3.0000+4.0000\textit{{i}}&-0.0000+2.0000\textit{{i}}&3.0000+0.0000\textit{{i}}\end{bmatrix}.

Then, \widetilde{M} is the desired Hankel matrix.

Case \mathbf{1} (k=1):
  Eigenpair (\lambda_{3},u_{3}): residual \left\lVert\widetilde{M}u_{3}-\lambda_{3}u_{3}\right\rVert_{2}=2.7792\times 10^{-15}
Case \mathbf{2} (k=2):
  Eigenpair (\lambda_{2},u_{2}): residual \left\lVert\widetilde{M}u_{2}-\lambda_{2}u_{2}\right\rVert_{2}=3.1349\times 10^{-14}
  Eigenpair (\lambda_{3},u_{3}): residual \left\lVert\widetilde{M}u_{3}-\lambda_{3}u_{3}\right\rVert_{2}=2.2761\times 10^{-14}
Table 1: Residual \left\lVert\widetilde{M}u_{i}-\lambda_{i}u_{i}\right\rVert_{2} for Example 6.3

From Table 1, we find that the residual \left\lVert\widetilde{M}u_{i}-\lambda_{i}u_{i}\right\rVert_{2}, for i=3 in Case 1 and for i=2,3 in Case 2, is at most of the order of 10^{-14} and is negligible. This demonstrates the effectiveness of our method in solving the Hankel PDIEP.

Next, we discuss symmetric Toeplitz PDIEPs [4, Problem 5.2]. Given a set of real orthonormal vectors \left\{u_{1},u_{2},\ldots,u_{k}\right\}\subset\mathbb{R}^{n}, where k\geq 1, each symmetric or skew-symmetric, and a set of real numbers \left\{\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\right\}\subset\mathbb{R}, our aim is to construct a symmetric Toeplitz matrix T\in{\mathbb{R}}^{n\times n} satisfying Tu_{i}=\lambda_{i}u_{i} for i=1,2,\ldots,k. It is important to note that a vector u\in{\mathbb{R}}^{n} is called symmetric if Ju=u and skew-symmetric if Ju=-u, where J is the exchange matrix (a square matrix with ones on the anti-diagonal and zeros elsewhere). The computation mirrors the Hankel sketch above and is shown next, followed by an example.
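A sketch analogous to the Hankel case, now with the hypothetical symtoeplitz_basis helper from Subsection 5.2 and real data:

% Symmetric Toeplitz PDIEP from k prescribed eigenpairs (sketch)
n   = size(Phi_t,1);  KST = symtoeplitz_basis(n);
G   = kron(Phi_t.', eye(n)) * KST;
t   = pinv(G) * reshape(Phi_t*Lambda_t, [], 1);
T_t = toeplitz(t);                                % symmetric Toeplitz
res = norm(T_t*Phi_t - Phi_t*Lambda_t, 'fro');    % aggregate residual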

Example 6.4.

To establish test data, we first generate a real symmetric Toeplitz matrix T. Let c=\left[5.30,\,2.50,\,4.60,\,-3.70,\,2.80\right]. Define T=toeplitz(c). Let \left(\Lambda,\Phi\right) denote its eigenpairs. We have \Lambda=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{5})\in\mathbb{R}^{5\times 5} and \Phi=\left[u_{1},u_{2},u_{3},u_{4},u_{5}\right]\in\mathbb{R}^{5\times 5}, where

\left[\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4},\lambda_{5}\right]=\left[-4.6650,\,-1.0842,\,7.8650,\,10.4951,\,13.8891\right],

and their corresponding eigenvectors

\begin{bmatrix}u_{1}&u_{2}&u_{3}&u_{4}&u_{5}\end{bmatrix}=\begin{bmatrix}0.4627&0.4077&0.5347&-0.3460&-0.4627\\ -0.5347&0.2169&0.4627&0.6165&-0.2699\\ -0.0000&-0.7573&0.0000&-0.0193&-0.6528\\ 0.5347&0.2169&-0.4627&0.6165&-0.2699\\ -0.4627&0.4077&-0.5347&0.3460&-0.4627\end{bmatrix}.

Case \mathbf{1}. Reconstruction from two eigenpairs in which one eigenvector is odd and the other is even (k=2): Let the prescribed partial eigeninformation be given by

\widetilde{\Lambda}=\mathrm{diag}(\lambda_{1},\lambda_{2})\in\mathbb{R}^{2\times 2}\;\textrm{and}\;\widetilde{\Phi}=\left[u_{1},u_{2}\right]\in\mathbb{R}^{5\times 2}.

Construct the symmetric Toeplitz matrix \widetilde{T} such that \widetilde{T}u_{i}=\lambda_{i}u_{i} for i=1,2. By using the transformations A=I_{5},\;X=\widetilde{T},\;B=\widetilde{\Phi},\;\mbox{and}\;E=\widetilde{\Phi}\widetilde{\Lambda}, we find the symmetric Toeplitz solution to the matrix equation AXB=E. We obtain

\widetilde{T}=\begin{bmatrix}5.30&2.50&4.60&-3.70&2.80\\ 2.50&5.30&2.50&4.60&-3.70\\ 4.60&2.50&5.30&2.50&4.60\\ -3.70&4.60&2.50&5.30&2.50\\ 2.80&-3.70&4.60&2.50&5.30\end{bmatrix}.

Then, \widetilde{T} is the desired symmetric Toeplitz matrix.

Case \mathbf{2}. Reconstruction from two eigenpairs in which both eigenvectors are odd (k=2): Let the prescribed partial eigeninformation be given by

\widetilde{\Lambda}=\mathrm{diag}(\lambda_{1},\lambda_{3})\in\mathbb{R}^{2\times 2}\;\textrm{and}\;\widetilde{\Phi}=\left[u_{1},u_{3}\right]\in\mathbb{R}^{5\times 2}.

Construct the symmetric Toeplitz matrix \widetilde{T} such that \widetilde{T}u_{i}=\lambda_{i}u_{i} for i=1,3. By using the transformations A=I_{5},\;X=\widetilde{T},\;B=\widetilde{\Phi},\;\mbox{and}\;E=\widetilde{\Phi}\widetilde{\Lambda}, we find the symmetric Toeplitz solution to the matrix equation AXB=E. We obtain

\widetilde{T}=\begin{bmatrix}1.0667&3.1000&0.3667&-3.1000&-1.4333\\ 3.1000&1.0667&3.1000&0.3667&-3.1000\\ 0.3667&3.1000&1.0667&3.1000&0.3667\\ -3.1000&0.3667&3.1000&1.0667&3.1000\\ -1.4333&-3.1000&0.3667&3.1000&1.0667\end{bmatrix}.

Then, \widetilde{T} is the desired symmetric Toeplitz matrix.

Case \mathbf{1} (k=2):
  Eigenpair (\lambda_{1},u_{1}): residual \left\lVert\widetilde{T}u_{1}-\lambda_{1}u_{1}\right\rVert_{2}=5.7430\times 10^{-15}
  Eigenpair (\lambda_{2},u_{2}): residual \left\lVert\widetilde{T}u_{2}-\lambda_{2}u_{2}\right\rVert_{2}=1.2200\times 10^{-14}
Case \mathbf{2} (k=2):
  Eigenpair (\lambda_{1},u_{1}): residual \left\lVert\widetilde{T}u_{1}-\lambda_{1}u_{1}\right\rVert_{2}=2.2505\times 10^{-15}
  Eigenpair (\lambda_{3},u_{3}): residual \left\lVert\widetilde{T}u_{3}-\lambda_{3}u_{3}\right\rVert_{2}=6.1218\times 10^{-15}
Table 2: Residual \left\lVert\widetilde{T}u_{i}-\lambda_{i}u_{i}\right\rVert_{2} for Example 6.4

From Table 2, we find that the residual \left\lVert\widetilde{T}u_{i}-\lambda_{i}u_{i}\right\rVert_{2}, for i=1,2 in Case 1 and for i=1,3 in Case 2, is at most of the order of 10^{-14} and is negligible. This demonstrates the effectiveness of our method in solving the symmetric Toeplitz PDIEP.

Similar to PDIEP, one can solve the generalized PDIEP; the sketch given after Remark 5.2 carries over directly. We illustrate the generalized PDIEP for Hankel structure with the following example.

Example 6.5.

To establish test data, we first generate a linear matrix pencil M-\lambda N, where M and N are Hankel matrices. Let c_{1}=\left[4+2\textit{{i}},\,2-4\textit{{i}},\,-1+3\textit{{i}},\,4+3\textit{{i}}\right] and r_{1}=\left[4+3\textit{{i}},\,4\textit{{i}},\,9+2\textit{{i}},\,3+\textit{{i}}\right]. Define M=hankel(c_{1},r_{1}). Let c_{2}=\left[3+2\textit{{i}},\,6-\textit{{i}},\,-5+2\textit{{i}},\,4+7\textit{{i}}\right] and r_{2}=\left[4+7\textit{{i}},\,3+4\textit{{i}},\,2+2\textit{{i}},\,3-8\textit{{i}}\right]. Define N=hankel(c_{2},r_{2}). Let \left(\Lambda,\Phi\right) denote its eigenpairs. We have \Lambda=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{4})\in\mathbb{C}^{4\times 4} and \Phi=\left[u_{1},u_{2},u_{3},u_{4}\right]\in\mathbb{C}^{4\times 4}, where

\left[\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\right]=\left[-0.3953+0.6027\textit{{i}},\,0.3708-0.7155\textit{{i}},\,0.6743-0.3655\textit{{i}},\,0.6761+0.1157\textit{{i}}\right],

and their corresponding eigenvectors

\begin{bmatrix}u_{1}&u_{2}&u_{3}&u_{4}\end{bmatrix}=\begin{bmatrix}-0.4881+0.1767\textit{{i}}&-0.4811-0.3552\textit{{i}}&-0.7739+0.1499\textit{{i}}&0.7130+0.2870\textit{{i}}\\ 0.4383+0.4624\textit{{i}}&0.4236+0.5764\textit{{i}}&-0.8976+0.1024\textit{{i}}&0.1416+0.5177\textit{{i}}\\ 0.4194-0.5806\textit{{i}}&-0.1700+0.0352\textit{{i}}&-0.3007+0.3084\textit{{i}}&-0.3339+0.5007\textit{{i}}\\ -0.5678-0.0875\textit{{i}}&0.3392-0.1123\textit{{i}}&0.0061+0.1882\textit{{i}}&-0.3560-0.2370\textit{{i}}\end{bmatrix}.

Case \mathbf{1}. Reconstruction from one eigenpair (k=1): Let the prescribed partial eigeninformation be given by

\widetilde{\Lambda}=\lambda_{1}\in\mathbb{C}\;\textrm{and}\;\widetilde{\Phi}=u_{1}\in\mathbb{C}^{4\times 1}.

Construct the Hankel matrices \widetilde{M} and \widetilde{N} such that \widetilde{M}u_{i}=\lambda_{i}\widetilde{N}u_{i} for i=1. By using the transformations A=I_{4}, X=\widetilde{M}, B=\widetilde{\Phi}, C=-I_{4}, Y=\widetilde{N}, D=\widetilde{\Phi}\widetilde{\Lambda}, and E=0, we find the Hankel solution to the matrix equation AXB+CYD=E. We obtain

\widetilde{M}=\begin{bmatrix}1.0472+0.3406\textit{{i}}&1.1937+0.5288\textit{{i}}&0.8984+0.8802\textit{{i}}&1.0875+1.1282\textit{{i}}\\ 1.1937+0.5288\textit{{i}}&0.8984+0.8802\textit{{i}}&1.0875+1.1282\textit{{i}}&0.7748+1.0806\textit{{i}}\\ 0.8984+0.8802\textit{{i}}&1.0875+1.1282\textit{{i}}&0.7748+1.0806\textit{{i}}&0.6237+1.3399\textit{{i}}\\ 1.0875+1.1282\textit{{i}}&0.7748+1.0806\textit{{i}}&0.6237+1.3399\textit{{i}}&0.3267+1.2860\textit{{i}}\end{bmatrix},
\widetilde{N}=\begin{bmatrix}1.4161+0.7678\textit{{i}}&1.3606+0.9305\textit{{i}}&1.0320+0.8914\textit{{i}}&0.9574+1.1034\textit{{i}}\\ 1.3606+0.9305\textit{{i}}&1.0320+0.8914\textit{{i}}&0.9574+1.1034\textit{{i}}&0.8624+0.8961\textit{{i}}\\ 1.0320+0.8914\textit{{i}}&0.9574+1.1034\textit{{i}}&0.8624+0.8961\textit{{i}}&0.6464+0.9076\textit{{i}}\\ 0.9574+1.1034\textit{{i}}&0.8624+0.8961\textit{{i}}&0.6464+0.9076\textit{{i}}&0.5615+0.7072\textit{{i}}\end{bmatrix}.

Then, \widetilde{M}-\lambda\widetilde{N} is the desired Hankel matrix pencil.

Case \mathbf{2}. Reconstruction from two eigenpairs (k=2): Let the prescribed partial eigeninformation be given by

\widetilde{\Lambda}=\mathrm{diag}(\lambda_{1},\lambda_{3})\in\mathbb{C}^{2\times 2}\;\textrm{and}\;\widetilde{\Phi}=\left[u_{1},u_{3}\right]\in\mathbb{C}^{4\times 2}.

Construct the Hankel matrices \widetilde{M} and \widetilde{N} such that \widetilde{M}u_{i}=\lambda_{i}\widetilde{N}u_{i} for i=1,3. By using the transformations A=I_{4}, X=\widetilde{M}, B=\widetilde{\Phi}, C=-I_{4}, Y=\widetilde{N}, D=\widetilde{\Phi}\widetilde{\Lambda}, and E=0, we find the Hankel solution to the matrix equation AXB+CYD=E. We obtain

\widetilde{M}=\begin{bmatrix}0.2460-0.0000\textit{{i}}&-0.0696-0.0231\textit{{i}}&0.1118-0.0226\textit{{i}}&-0.0519+0.0436\textit{{i}}\\ -0.0696-0.0231\textit{{i}}&0.1118-0.0226\textit{{i}}&-0.0519+0.0436\textit{{i}}&0.0299+0.1325\textit{{i}}\\ 0.1118-0.0226\textit{{i}}&-0.0519+0.0436\textit{{i}}&0.0299+0.1325\textit{{i}}&0.1243-0.0621\textit{{i}}\\ -0.0519+0.0436\textit{{i}}&0.0299+0.1325\textit{{i}}&0.1243-0.0621\textit{{i}}&0.0711+0.0777\textit{{i}}\end{bmatrix},
\widetilde{N}=\begin{bmatrix}0.1767+0.0416\textit{{i}}&0.1067-0.0146\textit{{i}}&-0.0352+0.0850\textit{{i}}&0.0696-0.0910\textit{{i}}\\ 0.1067-0.0146\textit{{i}}&-0.0352+0.0850\textit{{i}}&0.0696-0.0910\textit{{i}}&-0.0943+0.1694\textit{{i}}\\ -0.0352+0.0850\textit{{i}}&0.0696-0.0910\textit{{i}}&-0.0943+0.1694\textit{{i}}&-0.0396+0.0850\textit{{i}}\\ 0.0696-0.0910\textit{{i}}&-0.0943+0.1694\textit{{i}}&-0.0396+0.0850\textit{{i}}&-0.0269+0.0197\textit{{i}}\end{bmatrix}.

Then, \widetilde{M}-\lambda\widetilde{N} is the desired Hankel matrix pencil.

Case \mathbf{1} (k=1):
  Eigenpair (\lambda_{1},u_{1}): residual \left\lVert\widetilde{M}u_{1}-\lambda_{1}\widetilde{N}u_{1}\right\rVert_{2}=2.7626\times 10^{-15}
Case \mathbf{2} (k=2):
  Eigenpair (\lambda_{1},u_{1}): residual \left\lVert\widetilde{M}u_{1}-\lambda_{1}\widetilde{N}u_{1}\right\rVert_{2}=1.0906\times 10^{-14}
  Eigenpair (\lambda_{3},u_{3}): residual \left\lVert\widetilde{M}u_{3}-\lambda_{3}\widetilde{N}u_{3}\right\rVert_{2}=2.7570\times 10^{-15}
Table 3: Residual \left\lVert\widetilde{M}u_{i}-\lambda_{i}\widetilde{N}u_{i}\right\rVert_{2} for Example 6.5

From Table 3, we find that the residual \left\lVert\widetilde{M}u_{i}-\lambda_{i}\widetilde{N}u_{i}\right\rVert_{2}, for i=1 in Case 1 and for i=1,3 in Case 2, is at most of the order of 10^{-14} and is negligible. This demonstrates the effectiveness of our method in solving the generalized PDIEP for Hankel structure.

Next, we illustrate an example of the generalized PDIEP for symmetric Toeplitz structure.

Example 6.6.

To establish test data, we first generate a linear matrix pencil M-\lambda N, where M and N are symmetric Toeplitz matrices. Let c_{1}=\left[7.80,\,5.50,\,3.70,\,-2.30,\,8.90\right]. Define M=toeplitz(c_{1}). Let c_{2}=\left[4.20,\,1.20,\,-3.50,\,3.90,\,9.80\right]. Define N=toeplitz(c_{2}). Let \left(\Lambda,\Phi\right) denote its eigenpairs. We have \Lambda=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{5})\in\mathbb{C}^{5\times 5} and \Phi=\left[u_{1},u_{2},u_{3},u_{4},u_{5}\right]\in\mathbb{C}^{5\times 5}, where

\left[\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4},\lambda_{5}\right]=\left[4.1157,\,-1.7144,\,0.2371,\,-0.1060+1.1336\textit{{i}},\,-0.1060-1.1336\textit{{i}}\right],

and their corresponding eigenvectors

\begin{bmatrix}u_{1}&u_{2}&u_{3}&u_{4}&u_{5}\end{bmatrix}=\begin{bmatrix}-0.2481&0.3192&-0.2773&-0.2700+0.7300\textit{{i}}&-0.2700-0.7300\textit{{i}}\\ -0.4470&-0.8953&-0.4115&0.6140+0.1425\textit{{i}}&0.6140-0.1425\textit{{i}}\\ -1.0000&1.0000&1.0000&-0.0000-0.0000\textit{{i}}&-0.0000+0.0000\textit{{i}}\\ -0.4470&-0.8953&-0.4115&-0.6140-0.1425\textit{{i}}&-0.6140+0.1425\textit{{i}}\\ -0.2481&0.3192&-0.2773&0.2700-0.7300\textit{{i}}&0.2700+0.7300\textit{{i}}\end{bmatrix}.

Case \mathbf{1}. Reconstruction from two eigenpairs (k=2): Let the prescribed partial eigeninformation be given by

\widetilde{\Lambda}=\mathrm{diag}(\lambda_{1},\lambda_{3})\in\mathbb{C}^{2\times 2}\;\textrm{and}\;\widetilde{\Phi}=\left[u_{1},u_{3}\right]\in\mathbb{C}^{5\times 2}.

Construct the symmetric Toeplitz matrices \widetilde{M} and \widetilde{N} such that \widetilde{M}u_{i}=\lambda_{i}\widetilde{N}u_{i} for i=1,3. By using the transformations A=I_{5}, X=\widetilde{M}, B=\widetilde{\Phi}, C=-I_{5}, Y=\widetilde{N}, D=\widetilde{\Phi}\widetilde{\Lambda}, and E=0, we find the symmetric Toeplitz solution to the matrix equation AXB+CYD=E. We obtain

\widetilde{M}=\begin{bmatrix}1.3921&1.0473&0.6772&-0.2032&0.6735\\ 1.0473&1.3921&1.0473&0.6772&-0.2032\\ 0.6772&1.0473&1.3921&1.0473&0.6772\\ -0.2032&0.6772&1.0473&1.3921&1.0473\\ 0.6735&-0.2032&0.6772&1.0473&1.3921\end{bmatrix},
\widetilde{N}=\begin{bmatrix}0.6339&0.1905&-0.3161&0.6055&0.7404\\ 0.1905&0.6339&0.1905&-0.3161&0.6055\\ -0.3161&0.1905&0.6339&0.1905&-0.3161\\ 0.6055&-0.3161&0.1905&0.6339&0.1905\\ 0.7404&0.6055&-0.3161&0.1905&0.6339\end{bmatrix}.

Then, \widetilde{M}-\lambda\widetilde{N} is the desired symmetric Toeplitz matrix pencil.

Case \mathbf{2}. Reconstruction from three eigenpairs (k=3): Let the prescribed partial eigeninformation be given by

\widetilde{\Lambda}=\mathrm{diag}(\lambda_{1},\lambda_{2},\lambda_{3})\in\mathbb{C}^{3\times 3}\;\textrm{and}\;\widetilde{\Phi}=\left[u_{1},u_{2},u_{3}\right]\in\mathbb{C}^{5\times 3}.

Construct the symmetric Toeplitz matrices \widetilde{M} and \widetilde{N} such that \widetilde{M}u_{i}=\lambda_{i}\widetilde{N}u_{i} for i=1,2,3. By using the transformations A=I_{5}, X=\widetilde{M}, B=\widetilde{\Phi}, C=-I_{5}, Y=\widetilde{N}, D=\widetilde{\Phi}\widetilde{\Lambda}, and E=0, we find the symmetric Toeplitz solution to the matrix equation AXB+CYD=E. We obtain

\widetilde{M}=\begin{bmatrix}0.9214&0.6497&0.4371&-0.2717&1.0513\\ 0.6497&0.9214&0.6497&0.4371&-0.2717\\ 0.4371&0.6497&0.9214&0.6497&0.4371\\ -0.2717&0.4371&0.6497&0.9214&0.6497\\ 1.0513&-0.2717&0.4371&0.6497&0.9214\end{bmatrix},
\widetilde{N}=\begin{bmatrix}0.4961&0.1417&-0.4134&0.4607&1.1576\\ 0.1417&0.4961&0.1417&-0.4134&0.4607\\ -0.4134&0.1417&0.4961&0.1417&-0.4134\\ 0.4607&-0.4134&0.1417&0.4961&0.1417\\ 1.1576&0.4607&-0.4134&0.1417&0.4961\end{bmatrix}.

Then, \widetilde{M}-\lambda\widetilde{N} is the desired symmetric Toeplitz matrix pencil.

Case \mathbf{1} (k=2):
  Eigenpair (\lambda_{1},u_{1}): residual \left\lVert\widetilde{M}u_{1}-\lambda_{1}\widetilde{N}u_{1}\right\rVert_{2}=3.3675\times 10^{-15}
  Eigenpair (\lambda_{3},u_{3}): residual \left\lVert\widetilde{M}u_{3}-\lambda_{3}\widetilde{N}u_{3}\right\rVert_{2}=2.3481\times 10^{-15}
Case \mathbf{2} (k=3):
  Eigenpair (\lambda_{1},u_{1}): residual \left\lVert\widetilde{M}u_{1}-\lambda_{1}\widetilde{N}u_{1}\right\rVert_{2}=6.9900\times 10^{-15}
  Eigenpair (\lambda_{2},u_{2}): residual \left\lVert\widetilde{M}u_{2}-\lambda_{2}\widetilde{N}u_{2}\right\rVert_{2}=2.4962\times 10^{-15}
  Eigenpair (\lambda_{3},u_{3}): residual \left\lVert\widetilde{M}u_{3}-\lambda_{3}\widetilde{N}u_{3}\right\rVert_{2}=2.5686\times 10^{-15}
Table 4: Residual \left\lVert\widetilde{M}u_{i}-\lambda_{i}\widetilde{N}u_{i}\right\rVert_{2} for Example 6.6

From Table 4, we find that the residual \left\lVert\widetilde{M}u_{i}-\lambda_{i}\widetilde{N}u_{i}\right\rVert_{2}, for i=1,3 in Case 1 and for i=1,2,3 in Case 2, is of the order of 10^{-15} and is negligible. This demonstrates the effectiveness of our method in solving the generalized PDIEP for symmetric Toeplitz structure.

7 Conclusions

In this manuscript, we have examined several L-structure reduced biquaternion matrix sets, including the reduced biquaternion Toeplitz, symmetric Toeplitz, Hankel, and circulant matrix sets. We have then proposed a generalized framework for finding the least squares L-structure solutions of the following RBMEs:

\sum_{l=1}^{r}A_{l}X_{l}B_{l}=E,
\sum_{l=1}^{r}A_{l}XB_{l}+\sum_{p=1}^{q}C_{p}X^{T}D_{p}=E,
(A_{1}XB_{1},A_{2}XB_{2},\ldots,A_{r}XB_{r})=(E_{1},E_{2},\ldots,E_{r}).

Lastly, we have discussed how our developed theory applies to various applications, including L-structure solutions to complex and real matrix equations, PDIEP, and generalized PDIEP.

References

  • [1] Thomas Bülow and Gerald Sommer. Hypercomplex signals — a novel extension of the analytic signal to the multidimensional case. IEEE Trans. Signal Process., 49(11):2844–2852, 2001.
  • [2] Eunice Carrasquinha, Conceicao Amado, Ana M Pires, and Lina Oliveira. Image reconstruction based on circulant matrices. Signal Processing: Image Communication, 63:72–80, 2018.
  • [3] King-wah Eric Chu. Singular value and generalized singular value decompositions and the solution of linear matrix equations. Linear Algebra Appl., 88/89:83–98, 1987.
  • [4] Moody Chu and Gene Golub. Inverse eigenvalue problems: theory, algorithms, and applications. OUP Oxford, 2005.
  • [5] Randall E. Cline. Representations for the generalized inverse of a partitioned matrix. J. Soc. Indust. Appl. Math., 12:588–600, 1964.
  • [6] Biswa Datta. Numerical methods for linear control systems, volume 1. Academic Press, 2004.
  • [7] Harley Flanders and Harald K. Wimmer. On the matrix equations AX-XB=C and AX-YB=C. SIAM J. Appl. Math., 32(4):707–710, 1977.
  • [8] Feliks Rouminovich Gantmacher and Joel Lee Brenner. Applications of the Theory of Matrices. Courier Corporation, 2005.
  • [9] Gene H. Golub and Charles F. Van Loan. Matrix Computations. JHU Press, 2013.
  • [10] Antony Jameson and Eliezer Kreindler. Inverse problem of linear optimal control. SIAM J. Control, 11:1–19, 1973.
  • [11] Tongsong Jiang and Musheng Wei. On solutions of the matrix equations X-AXB=C and X-A\overline{X}B=C. Linear Algebra Appl., 367:225–233, 2003.
  • [12] Huang Liping. The matrix equation AXB-GXD=E over the quaternion field. Linear Algebra Appl., 234:197–208, 1996.
  • [13] Jan R. Magnus. L-structured matrices and linear matrix equations. Linear and Multilinear Algebra, 14(1):67–88, 1983.
  • [14] S. Aasha Nandhini, S. Radha, P. Nirmala, and R. Kishore. Compressive sensing for images using a variant of Toeplitz matrix for wireless sensor networks. Journal of Real-Time Image Processing, 16(5):1525–1540, 2019.
  • [15] Soo-Chang Pei, Ja-Han Chang, and Jian-Jiun Ding. Commutative reduced biquaternions and their Fourier transform for signal and image processing applications. IEEE Trans. Signal Process., 52(7):2012–2031, 2004.
  • [16] Soo-Chang Pei, Ja-Han Chang, Jian-Jiun Ding, and Ming-Yang Chen. Eigenvalues and singular value decompositions of reduced biquaternion matrices. IEEE Trans. Circuits Syst. I. Regul. Pap., 55(9):2673–2685, 2008.
  • [17] Shi-Fang Yuan, Yong Tian, and Ming-Zhao Li. On Hermitian solutions of the reduced biquaternion matrix equation (AXB,CXD)=(E,G). Linear Multilinear Algebra, 68(7):1355–1373, 2020.
  • [18] Shi-Fang Yuan and Qing-Wen Wang. L-structured quaternion matrices and quaternion linear matrix equations. Linear Multilinear Algebra, 64(2):321–339, 2016.
  • [19] Shi-Fang Yuan, Qing-Wen Wang, and Xue-Feng Duan. On solutions of the quaternion matrix equation AX=B and their applications in color image restoration. Appl. Math. Comput., 221:10–20, 2013.
  • [20] Shifang Yuan and Anping Liao. Least squares solution of the quaternion matrix equation X-A\widehat{X}B=C with the least norm. Linear Multilinear Algebra, 59(9):985–998, 2011.
  • [21] Dong Zhang, Zhenwei Guo, Gang Wang, and Tongsong Jiang. Algebraic techniques for least squares problems in commutative quaternionic theory. Math. Methods Appl. Sci., 43(6):3513–3523, 2020.
  • [22] Fengxia Zhang, Musheng Wei, Ying Li, and Jianli Zhao. Special least squares solutions of the quaternion matrix equation AX=B with applications. Appl. Math. Comput., 270:425–433, 2015.
  • [23] Shuai Zhang and Meng Wang. Correction of corrupted columns through fast robust Hankel matrix completion. IEEE Trans. Signal Process., 67(10):2580–2594, 2019.