L-structure least squares solutions of reduced biquaternion matrix equations with applications
Abstract
This paper presents a framework for computing the structure-constrained least squares solutions to the generalized reduced biquaternion matrix equations (RBMEs). The investigation focuses on three different matrix equations: a linear matrix equation with multiple unknown L-structures, a linear matrix equation with one unknown L-structure, and the general coupled linear matrix equations with one unknown L-structure. Our approach leverages the complex representation of reduced biquaternion matrices. To showcase the versatility of the developed framework, we utilize it to find structure-constrained solutions for complex and real matrix equations, broadening its applicability to various inverse problems. Specifically, we explore its utility in addressing partially described inverse eigenvalue problems (PDIEPs) and generalized PDIEPs. Our study concludes with numerical examples.
Keywords. Kronecker product, least squares problem, matrix equation, inverse problem, reduced biquaternion matrix, Moore-Penrose generalized inverse.
AMS subject classification. 15A22, 15B05, 15B33, 65F18, 65F45.
1 Introduction
In matrix theory, linear matrix equations play a crucial role. They appear widely in control theory, inverse problems, and linear optimal control [6, 8, 10]. Owing to their widespread use in these fields, one frequently encounters the problem of finding approximate solutions of linear matrix equations. Matrix equations arise in many different forms; simple examples include AX = B, AXB = C, and the Sylvester equation AX + XB = C.
There have been several studies on real and complex matrix equations. See [3, 7, 11, 13] and references therein. We now turn our attention to quaternion and reduced biquaternion matrix equations.
In 1843, Hamilton first introduced the notion of quaternions. Several aspects of quantum physics, image processing, and signal processing rely on quaternion matrix equations and their general solutions [1, 19, 22]. Quaternion matrix equations have been the subject of several studies in the literature (for example, [12, 18, 20]). Quaternion multiplication is not commutative, which limits its use in many vital applications. Following Hamilton’s discovery of quaternions, Segre introduced reduced biquaternions, which are commutative in nature. Reduced biquaternions are also known as commutative quaternions. Commutative multiplication simplifies numerous operations. For instance, Pei et al. [15, 16] demonstrated how reduced biquaternions outperform conventional quaternions in image and digital signal processing. Additionally, reduced biquaternions allow us to treat three- or four-dimensional vectors as one entity, facilitating efficient information processing. It therefore becomes imperative to learn how to solve the matrix equations arising from commutative quaternionic theory. Some studies in the literature have focused on reduced biquaternion matrix equations (RBMEs). For instance, Yuan et al. [17] discussed Hermitian solutions of an RBME, and Zhang et al. [21] investigated least squares problems for RBMEs. This paper focuses on the least squares structured solutions of generalized RBMEs. Our framework encompasses all structures in which any set of linear relationships between the matrix entries is allowed; a class of such matrices is called a reduced biquaternion L-structure. Surprisingly, the least squares Toeplitz, symmetric Toeplitz, Hankel, and circulant solutions of RBMEs have not been discussed in the literature, despite their significance in scientific computing, inverse problems, image restoration, and signal processing [2, 14, 23]. Given the above context, our interest lies in least squares L-structure solutions of generalized RBMEs, with specific attention to reduced biquaternion Toeplitz, symmetric Toeplitz, Hankel, and circulant solutions. This manuscript addresses the following generalized matrix equations:
A_1 X_1 B_1 + A_2 X_2 B_2 + ... + A_k X_k B_k = E,    (1)
A_1 X B_1 + A_2 X B_2 + ... + A_k X B_k = E,    (2)
(A_1 X B_1, A_2 X B_2, ..., A_k X B_k) = (E_1, E_2, ..., E_k),    (3)
where the X_i in (1) and X in (2) and (3) are the unknown matrices, each constrained to lie in a prescribed L-structure.
Moreover, this paper elucidates a range of applications for the proposed framework in solving inverse eigenvalue problems. Several applications of the inverse eigenvalue problem, which involves reconstructing matrices from prescribed spectral data, deal with structured matrices. When the spectral data contain only partial information about the eigenpairs, this kind of inverse problem is called a partially described inverse eigenvalue problem (PDIEP). In both PDIEP and generalized PDIEP, two pivotal questions arise: the theory of solvability and the numerical solution methodology (see textbook [4] and references therein). In the context of solvability, one major challenge has been identifying necessary or sufficient conditions for a PDIEP and a generalized PDIEP to be solvable. On the other hand, numerical solution methods aim to develop procedures that construct a matrix in a numerically stable manner when the given spectral data is feasible. In this paper, we have successfully developed a numerical solution methodology for both PDIEP and generalized PDIEP by employing our developed framework. Our attention is primarily directed toward two structures, namely Hankel and symmetric Toeplitz. In summary, the primary applications discussed in this article encompass:
• We utilize our developed framework to determine structure-constrained solutions for complex and real matrix equations. This is possible because these matrix equations are special cases of RBMEs. It enables us to tackle various inverse eigenvalue problems. Using our framework, we offer a solution to PDIEP [4, Problems 5.1 and 5.2], which involves constructing a structured matrix from an eigenpair set.
• We provide a framework for solving generalized PDIEP for symmetric Toeplitz and Hankel structures.
The manuscript is organized as follows. Section 2 presents the notation and preliminary results. In Section 3, we first define the concept of reduced biquaternion L-structures and examine some reduced biquaternion L-structure matrices. Next, we present several useful lemmas. Section 4 outlines the general framework for solving the RBMEs. Subsection 4.1 delves into solving the RBME with multiple unknown L-structures. In Section 5, we apply the developed framework from Section 4 to specific cases and explore their practical implications. Finally, Section 6 provides the numerical verification of our developed results.
2 Notation and preliminaries
2.1 Notation
Throughout this paper, we denote as the set of all reduced biquaternions. , , and denote the sets of all real, complex, and reduced biquaternion matrices, respectively. Denote , , , , , , , , and as the sets of all real Toeplitz, real symmetric Toeplitz, real Hankel, real circulant, complex Hankel, reduced biquaternion Toeplitz, reduced biquaternion symmetric Toeplitz, reduced biquaternion Hankel, and reduced biquaternion circulant matrices, respectively. For , the notations and denote the Moore-Penrose generalized inverse and the transpose of . For , the notations and stand for the real and imaginary parts of , respectively. For a diagonal matrix , we denote it as , where whenever and for . represents the identity matrix of order . For , denotes the column of the identity matrix . denotes the zero matrix of suitable size. represents the Kronecker product of matrices . For matrix , , where . represents the Frobenius norm. represents the -norm or Euclidean norm. For and , the notation represents the matrix .
The MATLAB commands rand(m,n) and ones(m,n) return an m-by-n random matrix and an m-by-n matrix with all entries one, respectively. Let c and r be row vectors of size n. The MATLAB command toeplitz(c,r) returns a Toeplitz matrix with c as its first column and r as its first row. The MATLAB command toeplitz(c) returns a symmetric Toeplitz matrix with c as its first column and first row. The MATLAB command hankel(c,r) creates a Hankel matrix with c and r as its first column and last row, respectively.
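For concreteness, here is a short MATLAB illustration of these commands; the specific vectors are arbitrary examples chosen here and are not taken from the paper.

    c  = [1 2 3 4];               % first column
    R  = rand(3,4);               % 3-by-4 matrix with random entries
    O  = ones(3,4);               % 3-by-4 matrix with all entries one
    T  = toeplitz(c, [1 5 6 7]);  % Toeplitz matrix: first column c, first row [1 5 6 7]
    Ts = toeplitz(c);             % symmetric Toeplitz matrix: first column and first row c
    H  = hankel(c, [4 5 6 7]);    % Hankel matrix: first column c, last row [4 5 6 7]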
We use the following abbreviations throughout this paper:
RBME : reduced biquaternion matrix equation,
PDIEP : partially described inverse eigenvalue problem.
2.2 Preliminaries
A reduced biquaternion can be expressed uniquely as a = a_0 + a_1 i + a_2 j + a_3 k, where a_0, a_1, a_2, a_3 are real numbers and the imaginary units satisfy i^2 = k^2 = -1, j^2 = 1, ij = ji = k, jk = kj = i, and ki = ik = -j. It can also be expressed as a = c_1 + c_2 j, where c_1 = a_0 + a_1 i and c_2 = a_2 + a_3 i are complex numbers. The conjugate of a is denoted by ā. The norm of a is ||a|| = (a_0^2 + a_1^2 + a_2^2 + a_3^2)^(1/2), and we have ||a||^2 = |c_1|^2 + |c_2|^2.
In addition, we identify using a complex vector . Similarly, we identify any reduced biquaternion matrix , where , using a complex matrix . The Frobenius norm for is defined as follows:
We have
The complex representation of matrix , denoted as , is defined as follows:
For and , we have
(4)
For and , can be expressed as
For , , and , we have , , and
(5)
Furthermore, the operator is linear, which means that For , we have
(6)
Let and denote . We have
Clearly,
(7)
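Since the complex representation is used repeatedly in the sequel, the following MATLAB sketch illustrates the idea on a random example. It assumes, purely for illustration, the block form f(A) = [A1 A2; A2 A1] for A = A1 + A2 j with complex matrices A1 and A2; the paper's exact convention is not reproduced above, so this choice should be read as an assumption.

    % Reduced biquaternion product: since j^2 = 1 and j commutes with i,
    % (A1 + A2*j)(B1 + B2*j) = (A1*B1 + A2*B2) + (A1*B2 + A2*B1)*j.
    m = 3; n = 4; p = 2;
    A1 = randn(m,n) + 1i*randn(m,n);  A2 = randn(m,n) + 1i*randn(m,n);
    B1 = randn(n,p) + 1i*randn(n,p);  B2 = randn(n,p) + 1i*randn(n,p);
    f  = @(X1,X2) [X1 X2; X2 X1];     % assumed block form of the complex representation
    C1 = A1*B1 + A2*B2;  C2 = A1*B2 + A2*B1;
    norm(f(C1,C2) - f(A1,A2)*f(B1,B2), 'fro')   % ~0: the representation is multiplicative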
3 Reduced biquaternion L-structure matrices
This section aims to define the concept of reduced biquaternion L-structure and explore some specific examples of this class of matrices. A reduced biquaternion L-structure refers to the set of all reduced biquaternion matrices of a given order whose entries adhere to specific linear constraints. A notable example of this class includes unstructured matrices, where no linear restrictions are placed on the matrix entries. The subsequent definition offers a formalized explanation of this concept.
Definition 3.1.
Let be a subspace of . The subset of reduced biquaternion matrices of order given by
(8) |
is known as the reduced biquaternion L-structure.
Remark 3.2.
and are vector spaces over with dimensions and , respectively.
To better comprehend the above definition, let us consider the following examples.
Example 3.3.
Let and . Clearly, is a subspace of . The resulting reduced biquaternion L-structure is as follows:
The subset above represents the class of diagonal matrices of size . In this case, six linear restrictions are imposed on the entries of matrix , given by for . Hence, the collection of all reduced biquaternion diagonal matrices of a given order falls under the class of reduced biquaternion L-structure.
Other reduced biquaternion L-structure examples include the set of all reduced biquaternion Toeplitz, symmetric Toeplitz, Hankel, circulant, lower triangular, and upper triangular matrices of a given order. These classes of matrices consider only equality relationships between the matrix entries. Here is an example of a reduced biquaternion L-structure with some linear relationships between the matrix entries.
Example 3.4.
Let and . Clearly, is a subspace of . The resulting reduced biquaternion L-structure is as follows:
The above subset represents a collection of all reduced biquaternion matrices with the following linear restrictions imposed on the entries of matrix : , , and .
The remaining section focuses on some reduced biquaternion L-structure matrices that frequently appear in practical applications. Our primary focus lies on reduced biquaternion Toeplitz, symmetric Toeplitz, Hankel, and circulant matrices. To commence our exploration, we initially examine the vec-structure of some real structured matrices.
Definition 3.5.
A matrix is Toeplitz if it has the following form:
For , denote by the following vector:
(9) |
Definition 3.6.
A matrix is symmetric Toeplitz if it has the following form:
For , denote by the following vector:
(10) |
Definition 3.7.
A matrix is Hankel if it has the following form:
For , denote by the following vector:
(11) |
Definition 3.8.
A matrix is circulant if it has the following form:
For , denote by the following vector:
(12) |
In the following four lemmas, we describe the structure of some particular classes of real matrix sets.
Lemma 3.9.
If , then where is of the form (9), and the matrix is represented as
Proof.
Consider Clearly, is a Toeplitz matrix.
Let , for , denote the column of matrix . We have
We get
We have
∎
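The following MATLAB sketch checks the idea behind Lemma 3.9 numerically: the vec of a Toeplitz matrix is a fixed 0-1 matrix times a parameter vector formed from the first column and first row. The parameter ordering used below is one possible choice and may differ from the ordering in (9).

    n = 4;
    c = randn(n,1);  r = [c(1); randn(n-1,1)];   % first column and first row (sharing the (1,1) entry)
    T = toeplitz(c, r);
    p = [c; r(2:end)];                           % 2n-1 free parameters
    G = zeros(n^2, 2*n-1);                       % column k of G is vec of an elementary Toeplitz matrix
    for k = 1:2*n-1
        e = zeros(2*n-1,1);  e(k) = 1;
        G(:,k) = reshape(toeplitz(e(1:n), [e(1); e(n+1:end)]), [], 1);
    end
    norm(T(:) - G*p)                             % ~0: vec(T) depends linearly on the parameters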
Lemma 3.10.
If , then where is of the form (10). When is even, let . In this case, the matrix is represented as
When is odd, let . In this case, the matrix is represented as
Proof.
The proof is similar to Lemma 3.9. ∎
To enhance our understanding of the above lemma, we will examine it in the context of and . In this scenario, we have
Lemma 3.11.
If , then where is of the form (11), and the matrix is represented as
Proof.
The proof is similar to Lemma 3.9. ∎
Lemma 3.12.
If , then where is of the form (12), and the matrix is represented as
Proof.
The proof is similar to Lemma 3.9. ∎
In the following lemma, we present the vec-structure of reduced biquaternion L-structure matrices based on the vec-structure of real structured matrices.
Lemma 3.13.
If , then
Proof.
We will prove the first part, and the remaining parts can be proved in a similar manner. The proof follows from the fact that and using Lemma 3.9. ∎
Until now, we have examined the representation of a reduced biquaternion L-structure matrix using a real structure matrix for a particular class of matrix sets. Based on the preceding discussion about reduced biquaternion L-structure matrix sets, we can summarize the findings as follows:
For , we have . Let be a subspace of and be the basis matrix for . The subset of real matrices of order given by
(13) |
will be called a real L-structure.
Remark 3.14.
represents the basis matrix of the subspace . To simplify things, we will refer to as the basis matrix of throughout the entire manuscript.
Thus, we have the following Lemma.
Lemma 3.15.
Let be the basis matrix of . Then where corresponds to the representation of according to the basis matrix .
Now that we have described the reduced biquaternion L-structure, we turn our attention to solving a RBME. Our approach for addressing the RBME involves transforming it into a complex matrix equation. To achieve this, we must study . For , we have
(14) |
However, this differs in the context of reduced biquaternion algebra. Thus, we investigate rather than in reduced biquaternion algebra.
Lemma 3.16.
Let , . Then
Set
(15) |
where is the commutation matrix, a row permutation of the identity matrix .
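For readers implementing (15), a quick MATLAB check of the defining property of the commutation matrix may be helpful; the construction below is one standard way to build this row permutation of the identity, and the dimensions are placeholders.

    m = 3; n = 4;
    A   = randn(m,n);
    idx = reshape(reshape(1:m*n, m, n).', [], 1);   % index map from vec(A) to vec(A.')
    K   = eye(m*n);  K = K(idx, :);                 % row permutation of the identity matrix
    norm(K*A(:) - reshape(A.', [], 1))              % ~0: K*vec(A) = vec(A.')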
We have examined within reduced biquaternion algebra. The following lemma outlines when possesses an L-structure in reduced biquaternion algebra.
Lemma 3.17.
Let . Then
4 General framework for solving constrained RBMEs
The purpose of this section is to demonstrate how we can solve constrained generalized linear matrix equations over commutative quaternions. As part of our approach, the constrained RBME is reduced to the following unconstrained real matrix system:
(16) |
where are real matrices of appropriate dimension and are real vectors of suitable size. From [5, Theorem 2] the generalized inverse of a partitioned matrix is given by
where
We have
By substituting and , we get
(17) |
where
(18) |
Using [9] and the results mentioned above, we deduce the following lemma that is helpful in developing the main results.
Lemma 4.1.
Consider the real matrix system of the form . We have the following results:
(i) The matrix equation has a solution if and only if . In this case, the general solution is where is an arbitrary vector of suitable size. Furthermore, if the consistency condition is satisfied, then the matrix equation has a unique solution if and only if matrix is of full column rank. In this case, the unique solution is .
(ii) The least squares solutions of the matrix equation can be expressed as where is an arbitrary vector of suitable size, and the least squares solution with the least norm is .
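As a quick illustration of part (ii), the following MATLAB lines compare the minimum-norm least squares solution with another member of the least squares solution family for a rank-deficient system; the matrices here are arbitrary test data.

    M  = randn(8,3) * randn(3,5);               % 8-by-5 coefficient matrix of rank 3
    b  = randn(8,1);
    Mp = pinv(M);                               % Moore-Penrose generalized inverse
    x_min = Mp*b;                               % least squares solution with the least norm
    y  = randn(5,1);
    x_gen = Mp*b + (eye(5) - Mp*M)*y;           % another least squares solution
    [norm(M*x_min - b), norm(M*x_gen - b)]      % equal residuals
    [norm(x_min), norm(x_gen)]                  % x_min has the smaller norm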
The following lemma will be used for the development of main results.
Lemma 4.2.
Consider the matrix equation where , and The matrix equation is equivalent to the linear system
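Lemma 4.2 is based on a vectorization identity of this kind; the following one-line MATLAB check illustrates the standard real version, vec(A*X*B) = kron(B.', A)*vec(X), which is assumed here since the lemma's displayed statement is not reproduced above.

    A = randn(3,4);  X = randn(4,5);  B = randn(5,2);
    C = A*X*B;
    norm(kron(B.', A)*X(:) - C(:))              % ~0: A*X*B = C  <=>  kron(B.',A)*vec(X) = vec(C)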
In the following subsection, we aim to find , , and for each of the three constrained RBMEs and solve them.
Remark 4.3.
It is important to emphasize that the values of , , and vary depending on the specific matrix equation we are attempting to solve.
4.1 Linear matrix equation in several unknown L-structures
The class of matrix equations of the form (1) encompasses many important matrix equations. Simple examples include AX + YB = E and AXB + CYD = E. We now introduce a general framework for finding the least squares solutions of RBMEs of the form (1). The problem can be formally stated as follows:
Problem 4.1.
Let , , and for . Let
Then find such that
To solve Problem 4.1, we employ the following notations: for , let be the basis matrix of , and
(19)
(20) |
Additionally, , and (as in (16)) are in the following form:
(21) |
In case of inconsistency in matrix equation (1), we provide the least squares solutions. The following result provides the solution to Problem 4.1.
Theorem 4.4.
Proof.
By using (7), we get
Using Lemma 3.17, we have
Now using (19), we have
By using (20), (21), and Lemma 4.2, we get
Hence, Problem 4.1 can be solved by finding the least squares solutions of the following unconstrained real matrix system:
By using Lemma 4.1, the least squares solutions of the above real matrix system is:
where is any vector of suitable size, and the least squares solution with the least norm is Using Lemma 3.15, we have
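To make the procedure of this subsection concrete, the sketch below solves a small real analogue in MATLAB. It assumes, purely for illustration, a two-term equation A1*X1*B1 + A2*X2*B2 = E with both unknowns constrained to be Toeplitz, and it returns the minimum-norm least squares parameters as in Lemma 4.1; the reduced biquaternion case additionally passes through the complex representation.

    n  = 4;
    A1 = randn(3,n);  B1 = randn(n,6);  A2 = randn(3,n);  B2 = randn(n,6);
    E  = randn(3,6);
    G  = zeros(n^2, 2*n-1);                      % Toeplitz basis matrix: vec(X) = G*p
    for k = 1:2*n-1
        e = zeros(2*n-1,1);  e(k) = 1;
        G(:,k) = reshape(toeplitz(e(1:n), [e(1); e(n+1:end)]), [], 1);
    end
    M  = [kron(B1.', A1)*G, kron(B2.', A2)*G];   % stacked parameter vector [p1; p2]
    p  = pinv(M) * E(:);                         % minimum-norm least squares solution
    X1 = reshape(G*p(1:2*n-1), n, n);
    X2 = reshape(G*p(2*n:end), n, n);
    norm(A1*X1*B1 + A2*X2*B2 - E, 'fro')         % least squares residual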
The following theorem presents the consistency condition for obtaining the solution for the RBME of the form (1) and a general formulation for the solution.
Theorem 4.5.
Consider the RBME of the form (1) and let . Then the matrix equation (1) has an L-structure solution , for , if and only if
(24) |
where , and are in the form of (21). In this case, the general solution satisfies
where is any vector of suitable size. Further, if the consistency condition holds, then the RBME of the form (1) has a unique solution if and only if
In this case, the unique solution satisfies
Proof.
The remainder of this section focuses on addressing the least squares problem associated with matrix equations (2) and (3). This involves finding the least squares solutions of the following unconstrained real matrix system:
(25) |
Let be the basis matrix of . Using Lemma 3.15, we get from in the following way:
The methodology for solving RBMEs of the form (2) and (3) remains the same as outlined in Subsection 4.1. Therefore, our focus here is solely on presenting the values for , , and , while intentionally omitting the detailed results.
4.2 Linear matrix equation in one unknown L-structure
Consider the matrix equation (2) and let , , , for and . Let
, and (as in (25)) for solving RBME of the form (2) are in the following form:
4.3 Generalized coupled linear matrix equations in one unknown L-structure
Consider the matrix equation (3) and let , and for . Let
, and (as in (25)) for solving RBME of the form (3) are in the following form:
5 Applications
We now apply the framework developed in Section 4 to specific cases and examine how our developed theory carries over to various applications, including L-structure solutions to complex matrix equations, L-structure solutions to real matrix equations, PDIEP, and generalized PDIEP.
5.1 Solutions of matrix equation for
As a special case, we now discuss the Hankel solutions of the complex matrix equation
(26) |
where , , and . The following notations are required for solving matrix equation (26). Set
(27) |
Further , and (as in (16)) are given in the form:
(28) |
Using (14), (27), and Lemma 3.11, we have
Similarly,
Using (28) and Lemma 4.2, we have
Hence, matrix equation for can be solved by solving the following unconstrained real matrix system:
Using Lemma 3.11, we have
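The computations of this subsection can be prototyped directly in MATLAB. The sketch below assumes, only for illustration, that (26) has the form A*X*B = E and computes a complex Hankel least squares solution by combining the Kronecker vectorization with a Hankel basis matrix, whereas the development above reduces the problem to the unconstrained real system (28).

    n = 4;
    A = randn(3,n) + 1i*randn(3,n);
    B = randn(n,5) + 1i*randn(n,5);
    E = randn(3,5) + 1i*randn(3,5);
    G = zeros(n^2, 2*n-1);                       % Hankel basis matrix: vec(X) = G*p
    for k = 1:2*n-1
        e = zeros(2*n-1,1);  e(k) = 1;
        G(:,k) = reshape(hankel(e(1:n), e(n:end)), [], 1);
    end
    M = kron(B.', A) * G;                        % maps the Hankel parameters of X to vec(A*X*B)
    p = pinv(M) * E(:);                          % minimum-norm least squares parameters
    X = reshape(G*p, n, n);                      % complex Hankel least squares solution
    norm(A*X*B - E, 'fro')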
5.2 Solutions of matrix equation for
5.3 PDIEP and Generalized PDIEP
In this subsection, we demonstrate the application of our developed framework to a range of inverse problems. We develop a numerical solution methodology for inverse problems in which the spectral constraints involve only partial eigenpair information rather than the entire spectrum. Mathematically, the problem statement is as follows:
Problem 5.1 (PDIEP).
Given vectors , values , and a set of structured matrices, find a matrix such that
where the underlying scalar field is either the real or the complex numbers.
To simplify the discussion, we will use the matrix pair to describe partial eigenpair information, where
(30) |
Remark 5.1.
PDIEP can be written as . By using the transformations
we can find solution to PDIEP by solving matrix equation for .
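As an illustration of this remark, the following MATLAB sketch reconstructs a symmetric Toeplitz matrix from two prescribed eigenpairs by writing the eigen-relations T*X = X*Lambda as a linear system in the free parameters of T; the parameterization of T by its first column is a choice made here for illustration.

    n = 5;  k = 2;
    T_true = toeplitz(randn(n,1));               % symmetric Toeplitz test matrix
    [V, D] = eig(T_true);
    X = V(:,1:k);  Lambda = D(1:k,1:k);          % prescribed partial eigeninformation
    G = zeros(n^2, n);                           % symmetric Toeplitz basis matrix: vec(T) = G*p
    for j = 1:n
        e = zeros(n,1);  e(j) = 1;
        G(:,j) = reshape(toeplitz(e), [], 1);
    end
    M = kron(X.', eye(n)) * G;                   % T*X = X*Lambda  <=>  M*p = vec(X*Lambda)
    p = pinv(M) * reshape(X*Lambda, [], 1);
    T = reshape(G*p, n, n);
    norm(T*X - X*Lambda, 'fro')                  % ~0 when the prescribed data are consistent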
Next, we investigate generalized PDIEPs. In a nutshell, the problem is:
Problem 5.2 (Generalized PDIEP).
Given vectors , values , and a set of structured matrices, find pair of matrices such that
where the underlying scalar field is either the real or the complex numbers.
Remark 5.2.
Generalized PDIEP can be written as , where and are as in (30). By using the transformations
we can find solution to Generalized PDIEP by solving matrix equation for .
Though the primary emphasis of this paper is on inverse problems with symmetric Toeplitz or Hankel structures, the overall approach extends to any structure in which an arbitrary set of linear relationships among the matrix entries is permitted.
6 Numerical verification
In this section, we present numerical examples to verify our findings. All calculations are performed on an Intel Core computer using MATLAB software. We now present an example to verify our method for finding the least squares solution of the RBME of the form (1).
Example 6.1.
Let
Let , , , and . Define
where and .
Let , , , and . Define
where and . Let . Hence, is the least squares Toeplitz solution with the least norm of the RBME .
Next, we take matrices , and as input and apply Theorem 4.4 to calculate the least squares Toeplitz solution with least norm of the RBME . We obtain and , where
Clearly, and are reduced biquaternion Toeplitz matrices. We have .
From Example 6.1, we find that the error is in the order of and is negligible. This demonstrates the effectiveness of our method in determining the structure-constrained least squares solution to the RBME of the form (1). Next, we illustrate an example for finding the structure-constrained least squares solution to the RBME of the form (3).
Example 6.2.
Let
Let , , , and . Define
where and . Let and . Hence, is the least squares Hankel solution with the least norm of the RBME .
Next, we take matrices , and as input to calculate the least squares Hankel solution with least norm of the RBME . We obtain , where
Clearly, is a reduced biquaternion Hankel matrix. We have .
From Example 6.2, we find that the error is in the order of and is negligible. This demonstrates the effectiveness of our method in determining the structure-constrained least squares solution to the RBME of the form (3).
Next, we will discuss Hankel PDIEPs [4, Problem 5.1]. Given a set of vectors , where , and a set of numbers , our aim is to construct a Hankel matrix satisfying for . Now, we will illustrate this problem with an example.
Example 6.3.
To establish test data, we first generate a Hankel matrix . Let and . Define . Let denote its eigenpairs. We have and , where
and their corresponding eigenvectors
Case . Reconstruction from one eigenpair : Let the prescribed partial eigeninformation be given by
Construct the Hankel matrix such that for . By using the transformations , we find the Hankel solution to the matrix equation . We obtain
Then, is the desired Hankel matrix.
Case . Reconstruction from two eigenpairs : Let the prescribed partial eigeninformation be given by
Construct the Hankel matrix such that for . By using the transformations , we find the Hankel solution to the matrix equation . We obtain
Then, is the desired Hankel matrix.
Table 1: Residuals for the prescribed eigenpairs in the two cases of Example 6.3.
From Table 1, we find that the residual , for in Case and for in Case , is in the order of and is negligible. This demonstrates the effectiveness of our method in solving the Hankel PDIEP.
Next, we will discuss symmetric Toeplitz PDIEPs [4, Problem 5.2]. Given a set of real orthonormal vectors , where , each symmetric or skew-symmetric, and a set of real numbers , our aim is to construct a symmetric Toeplitz matrix satisfying for . It is important to note that a vector is called symmetric if and skew-symmetric if , where is the exchange matrix (a square matrix with ones on the anti-diagonal and zeros elsewhere). Now we illustrate this problem with an example.
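A quick MATLAB check of the symmetric and skew-symmetric vector notions, with J the exchange matrix, may clarify the terminology; the vectors below are arbitrary examples.

    n = 5;
    J = fliplr(eye(n));                  % exchange matrix: ones on the anti-diagonal
    x_sym  = [1; 2; 3; 2; 1];            % symmetric:       J*x_sym  equals  x_sym
    x_skew = [1; 2; 0; -2; -1];          % skew-symmetric:  J*x_skew equals -x_skew
    [norm(J*x_sym - x_sym), norm(J*x_skew + x_skew)]   % both 0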
Example 6.4.
To establish test data, we first generate a real symmetric Toeplitz matrix . Let . Define . Let denote its eigenpairs. We have and , where
and their corresponding eigenvectors
Case . Reconstruction from two eigenpairs in which one eigenvector is odd and other is even : Let the prescribed partial eigeninformation be given by
Construct the symmetric Toeplitz matrix such that for . By using the transformations , we find the symmetric Toeplitz solution to the matrix equation . We obtain
Then, is the desired symmetric Toeplitz matrix.
Case . Reconstruction from two eigenpairs in which both eigenvectors are odd : Let the prescribed partial eigeninformation be given by
Construct the symmetric Toeplitz matrix such that for . By using the transformations , we find the symmetric Toeplitz solution to the matrix equation . We obtain
Then, is the desired symmetric Toeplitz matrix.
Table 2: Residuals for the prescribed eigenpairs in the two cases of Example 6.4.
From Table 2, we find that the residual , for in Case and for in Case , is in the order of and is negligible. This demonstrates the effectiveness of our method in solving the symmetric Toeplitz PDIEP.
Similar to PDIEP, one can solve the generalized PDIEP. We illustrate the generalized PDIEP for Hankel structure by the following example.
Example 6.5.
To establish test data, we first generate a linear matrix pencil , where and are Hankel matrices. Let and . Define . Let and . Define . Let denote its eigenpairs. We have and , where
and their corresponding eigenvectors
Case . Reconstruction from one eigenpair : Let the prescribed partial eigeninformation be given by
Construct the Hankel matrices and such that for . By using the transformations , , , , , , and , we find the Hankel solution to the matrix equation . We obtain
Then, is the desired Hankel matrix pencil.
Case . Reconstruction from two eigenpairs : Let the prescribed partial eigeninformation be given by
Construct the Hankel matrices and such that for . By using the transformations , , , , , , and , we find the Hankel solution to the matrix equation . We obtain
Then, is the desired Hankel matrix pencil.
Table 3: Residuals for the prescribed eigenpairs in the two cases of Example 6.5.
From Table 3, we find that the residual , for in Case and for in Case , is in the order of and is negligible. This demonstrates the effectiveness of our method in solving the generalized PDIEP for Hankel structure.
Next, we will illustrate an example of generalized PDIEP for symmetric Toeplitz structure.
Example 6.6.
To establish test data, we first generate a linear matrix pencil , where and are symmetric Toeplitz matrices. Let . Define . Let . Define . Let denote its eigenpairs. We have and , where
and their corresponding eigenvectors
Case . Reconstruction from two eigenpairs : Let the prescribed partial eigeninformation be given by
Construct the symmetric Toeplitz matrices and such that for . By using the transformations , , , , , , and , we find the symmetric Toeplitz solution to the matrix equation . We obtain
Then, is the desired symmetric Toeplitz matrix pencil.
Case . Reconstruction from three eigenpairs : Let the prescribed partial eigeninformation be given by
Construct the symmetric Toeplitz matrices and such that for . By using the transformations , , , , , , and , we find the symmetric Toeplitz solution to the matrix equation . We obtain
Then, is the desired symmetric Toeplitz matrix pencil.
Table 4: Residuals for the prescribed eigenpairs in the two cases of Example 6.6.
From Table 4, we find that the residual , for in Case and for in Case , is in the order of and is negligible. This demonstrates the effectiveness of our method in solving the generalized PDIEP for symmetric Toeplitz structure.
7 Conclusions
In this manuscript, we have examined several L-structure reduced biquaternion matrix sets, including the reduced biquaternion Toeplitz, symmetric Toeplitz, Hankel, and circulant matrix sets. We have then proposed a generalized framework for finding the least squares L-structure solutions of the RBMEs (1), (2), and (3).
Lastly, we have discussed how our developed theory applies to various applications, including L-structure solutions to complex and real matrix equations, PDIEP, and generalized PDIEP.
References
- [1] Thomas Bülow and Gerald Sommer. Hypercomplex signals — a novel extension of the analytic signal to the multidimensional case. IEEE Trans. Signal Process., 49(11):2844–2852, 2001.
- [2] Eunice Carrasquinha, Conceicao Amado, Ana M Pires, and Lina Oliveira. Image reconstruction based on circulant matrices. Signal Processing: Image Communication, 63:72–80, 2018.
- [3] King-wah Eric Chu. Singular value and generalized singular value decompositions and the solution of linear matrix equations. Linear Algebra Appl., 88/89:83–98, 1987.
- [4] Moody Chu and Gene Golub. Inverse eigenvalue problems: theory, algorithms, and applications. OUP Oxford, 2005.
- [5] Randall E. Cline. Representations for the generalized inverse of a partitioned matrix. J. Soc. Indust. Appl. Math., 12:588–600, 1964.
- [6] Biswa Datta. Numerical methods for linear control systems, volume 1. Academic Press, 2004.
- [7] Harley Flanders and Harald K. Wimmer. On the matrix equations AX - XB = C and AX - YB = C. SIAM J. Appl. Math., 32(4):707–710, 1977.
- [8] Feliks Rouminovich Gantmacher and Joel Lee Brenner. Applications of the Theory of Matrices. Courier Corporation, 2005.
- [9] Gene H Golub and Charles F Van Loan. Matrix computations. JHU press, 2013.
- [10] Antony Jameson and Eliezer Kreindler. Inverse problem of linear optimal control. SIAM J. Control, 11:1–19, 1973.
- [11] Tongsong Jiang and Musheng Wei. On solutions of the matrix equations and . Linear Algebra Appl., 367:225–233, 2003.
- [12] Huang Liping. The matrix equation over the quaternion field. Linear Algebra Appl., 234:197–208, 1996.
- [13] Jan R. Magnus. L-structured matrices and linear matrix equations. Linear and Multilinear Algebra, 14(1):67–88, 1983.
- [14] S Aasha Nandhini, S Radha, P Nirmala, and R Kishore. Compressive sensing for images using a variant of toeplitz matrix for wireless sensor networks. Journal of Real-Time Image Processing, 16(5):1525–1540, 2019.
- [15] Soo-Chang Pei, Ja-Han Chang, and Jian-Jiun Ding. Commutative reduced biquaternions and their Fourier transform for signal and image processing applications. IEEE Trans. Signal Process., 52(7):2012–2031, 2004.
- [16] Soo-Chang Pei, Ja-Han Chang, Jian-Jiun Ding, and Ming-Yang Chen. Eigenvalues and singular value decompositions of reduced biquaternion matrices. IEEE Trans. Circuits Syst. I. Regul. Pap., 55(9):2673–2685, 2008.
- [17] Shi-Fang Yuan, Yong Tian, and Ming-Zhao Li. On Hermitian solutions of the reduced biquaternion matrix equation . Linear Multilinear Algebra, 68(7):1355–1373, 2020.
- [18] Shi-Fang Yuan and Qing-Wen Wang. L-structured quaternion matrices and quaternion linear matrix equations. Linear Multilinear Algebra, 64(2):321–339, 2016.
- [19] Shi-Fang Yuan, Qing-Wen Wang, and Xue-Feng Duan. On solutions of the quaternion matrix equation AX = B and their applications in color image restoration. Appl. Math. Comput., 221:10–20, 2013.
- [20] Shifang Yuan and Anping Liao. Least squares solution of the quaternion matrix equation with the least norm. Linear Multilinear Algebra, 59(9):985–998, 2011.
- [21] Dong Zhang, Zhenwei Guo, Gang Wang, and Tongsong Jiang. Algebraic techniques for least squares problems in commutative quaternionic theory. Math. Methods Appl. Sci., 43(6):3513–3523, 2020.
- [22] Fengxia Zhang, Musheng Wei, Ying Li, and Jianli Zhao. Special least squares solutions of the quaternion matrix equation with applications. Appl. Math. Comput., 270:425–433, 2015.
- [23] Shuai Zhang and Meng Wang. Correction of corrupted columns through fast robust Hankel matrix completion. IEEE Trans. Signal Process., 67(10):2580–2594, 2019.