
Channel Estimation for RIS Assisted Wireless Communications: Part II - An Improved Solution Based on Double-Structured Sparsity
(Invited Paper)

Xiuhong Wei, Decai Shen, and Linglong Dai
All authors are with the Beijing National Research Center for Information Science and Technology (BNRist) as well as the Department of Electronic Engineering, Tsinghua University, Beijing 100084, China (e-mails: weixh19@mails.tsinghua.edu.cn, sdc18@mails.tsinghua.edu.cn, daill@tsinghua.edu.cn). This work was supported in part by the National Key Research and Development Program of China (Grant No. 2020YFB1807201) and in part by the National Natural Science Foundation of China (Grant No. 62031019).
Abstract

Reconfigurable intelligent surface (RIS) can manipulate the wireless communication environment by controlling the coefficients of the RIS elements. However, due to the large number of passive RIS elements without signal processing capability, channel estimation in RIS assisted wireless communication systems requires high pilot overhead. In the second part of this invited paper, we propose to exploit the double-structured sparsity of the angular cascaded channels among users to reduce the pilot overhead. Specifically, we first reveal the double-structured sparsity, i.e., the angular cascaded channels of different users share completely common non-zero rows and partially common non-zero columns. By exploiting this double-structured sparsity, we further propose the double-structured orthogonal matching pursuit (DS-OMP) algorithm, in which the completely common non-zero rows and the partially common non-zero columns are jointly estimated for all users. Simulation results show that the pilot overhead required by the proposed scheme is lower than that of existing schemes.

Index Terms:
Reconfigurable intelligent surface (RIS), channel estimation, compressive sensing.

I Introduction

In the first part of this two-part invited paper, we have introduced the fundamentals, solutions, and future opportunities of channel estimation in the reconfigurable intelligent surface (RIS) assisted wireless communication system. One of the most important challenges is the high pilot overhead, since the RIS consists of a large number of passive elements without signal processing capability [1, 2]. By exploiting the sparsity of the angular cascaded channel, i.e., the cascade of the channel from the user to the RIS and the channel from the RIS to the base station (BS), the channel estimation problem can be formulated as a sparse signal recovery problem, which can be solved by compressive sensing (CS) algorithms with reduced pilot overhead [3, 4]. However, the pilot overhead of most existing solutions is still high.

In the second part of this paper, in order to further reduce the pilot overhead, we propose a double-structured orthogonal matching pursuit (DS-OMP) based cascaded channel estimation scheme by leveraging the double-structured sparsity of the angular cascaded channels (simulation codes are provided to reproduce the results presented in this paper: http://oa.ee.tsinghua.edu.cn/dailinglong/publications/publications.html). Specifically, we reveal that the angular cascaded channels associated with different users share completely common non-zero rows and partially common non-zero columns, which is termed "double-structured sparsity" in this paper. Then, by exploiting this double-structured sparsity, we propose the DS-OMP algorithm, built on the classical OMP algorithm, to realize channel estimation. In the proposed DS-OMP algorithm, the completely common row support and the partially common column supports are jointly estimated for all users, while the user-specific column supports are individually estimated for each user. After detecting all the supports mentioned above, the least squares (LS) algorithm is utilized to obtain the estimated angular cascaded channels. Since the double-structured sparsity is exploited, the proposed DS-OMP based channel estimation scheme is able to further reduce the pilot overhead.

The rest of the paper is organized as follows. In Section II, we introduce the channel model and formulate the cascaded channel estimation problem. In Section III, we first reveal the double-structured sparsity of the angular cascaded channels, and then propose the DS-OMP based cascaded channel estimation scheme. Simulation results and conclusions are provided in Section IV and Section V, respectively.

Notation: Lower-case and upper-case boldface letters $\mathbf{a}$ and $\mathbf{A}$ denote a vector and a matrix, respectively; $\mathbf{a}^{*}$ denotes the conjugate of vector $\mathbf{a}$; $\mathbf{A}^{T}$ and $\mathbf{A}^{H}$ denote the transpose and conjugate transpose of matrix $\mathbf{A}$, respectively; $\|\mathbf{A}\|_{F}$ denotes the Frobenius norm of matrix $\mathbf{A}$; $\mathrm{diag}(\mathbf{x})$ denotes the diagonal matrix with the vector $\mathbf{x}$ on its diagonal; $\mathbf{a}\otimes\mathbf{b}$ denotes the Kronecker product of $\mathbf{a}$ and $\mathbf{b}$. Finally, $\mathcal{CN}(\mu,\sigma^{2})$ denotes the circularly symmetric complex Gaussian distribution with mean $\mu$ and variance $\sigma^{2}$.

II System Model

In this section, we will first introduce the cascaded channel in the RIS assisted communication system. Then, the cascaded channel estimation problem will be formulated.

II-A Cascaded Channel

We consider that the BS and the RIS employ an $M$-antenna and an $N$-element uniform planar array (UPA), respectively, to simultaneously serve $K$ single-antenna users. Let $\mathbf{G}$ of size $M\times N$ denote the channel from the RIS to the BS, and $\mathbf{h}_{r,k}$ of size $N\times 1$ denote the channel from the $k$th user to the RIS ($k=1,2,\cdots,K$). The widely used Saleh-Valenzuela channel model is adopted to represent $\mathbf{G}$ as [5]

$$\mathbf{G}=\sqrt{\frac{MN}{L_{G}}}\sum_{l_{1}=1}^{L_{G}}\alpha^{G}_{l_{1}}\,\mathbf{b}\left(\vartheta^{G_{r}}_{l_{1}},\psi^{G_{r}}_{l_{1}}\right)\mathbf{a}\left(\vartheta^{G_{t}}_{l_{1}},\psi^{G_{t}}_{l_{1}}\right)^{T}, \qquad (1)$$

where $L_{G}$ represents the number of paths between the RIS and the BS, and $\alpha^{G}_{l_{1}}$, $\vartheta^{G_{r}}_{l_{1}}$ ($\psi^{G_{r}}_{l_{1}}$), and $\vartheta^{G_{t}}_{l_{1}}$ ($\psi^{G_{t}}_{l_{1}}$) represent the complex gain including the path loss, the azimuth (elevation) angle at the BS, and the azimuth (elevation) angle at the RIS for the $l_{1}$th path, respectively. Similarly, the channel $\mathbf{h}_{r,k}$ can be represented by

$$\mathbf{h}_{r,k}=\sqrt{\frac{N}{L_{r,k}}}\sum_{l_{2}=1}^{L_{r,k}}\alpha^{r,k}_{l_{2}}\,\mathbf{a}\left(\vartheta^{r,k}_{l_{2}},\psi^{r,k}_{l_{2}}\right), \qquad (2)$$

where $L_{r,k}$ represents the number of paths between the $k$th user and the RIS, and $\alpha^{r,k}_{l_{2}}$ and $\vartheta^{r,k}_{l_{2}}$ ($\psi^{r,k}_{l_{2}}$) represent the complex gain including the path loss and the azimuth (elevation) angle at the RIS for the $l_{2}$th path, respectively. $\mathbf{b}\left(\vartheta,\psi\right)\in\mathbb{C}^{M\times 1}$ and $\mathbf{a}\left(\vartheta,\psi\right)\in\mathbb{C}^{N\times 1}$ represent the normalized array steering vectors associated with the BS and the RIS, respectively. For a typical $N_{1}\times N_{2}$ ($N=N_{1}\times N_{2}$) UPA, $\mathbf{a}\left(\vartheta,\psi\right)$ can be represented by [5]

$$\mathbf{a}\left(\vartheta,\psi\right)=\frac{1}{\sqrt{N}}\left[e^{-j2\pi d\sin\left(\vartheta\right)\cos\left(\psi\right)\mathbf{n}_{1}/\lambda}\right]\otimes\left[e^{-j2\pi d\sin\left(\psi\right)\mathbf{n}_{2}/\lambda}\right], \qquad (3)$$

where $\mathbf{n}_{1}=[0,1,\cdots,N_{1}-1]$ and $\mathbf{n}_{2}=[0,1,\cdots,N_{2}-1]$, $\lambda$ is the carrier wavelength, and $d$ is the antenna spacing, usually satisfying $d=\lambda/2$.
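To make the UPA response concrete, the following short Python sketch evaluates the steering vector in (3) for an $N_{1}\times N_{2}$ array with half-wavelength spacing; the function name `upa_steering_vector` and the example angles are our own illustration, not part of the paper.

```python
import numpy as np

def upa_steering_vector(theta, psi, N1, N2, d_over_lambda=0.5):
    """Normalized UPA steering vector of Eq. (3), assuming half-wavelength spacing."""
    n1 = np.arange(N1)
    n2 = np.arange(N2)
    a1 = np.exp(-1j * 2 * np.pi * d_over_lambda * np.sin(theta) * np.cos(psi) * n1)
    a2 = np.exp(-1j * 2 * np.pi * d_over_lambda * np.sin(psi) * n2)
    return np.kron(a1, a2) / np.sqrt(N1 * N2)

# Example: response of a 16 x 16 UPA toward azimuth 30 deg, elevation 45 deg
a = upa_steering_vector(np.deg2rad(30), np.deg2rad(45), 16, 16)
print(a.shape, np.linalg.norm(a))   # (256,), unit norm
```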

Further, we denote $\mathbf{H}_{k}\triangleq\mathbf{G}\,\mathrm{diag}\left(\mathbf{h}_{r,k}\right)$ as the $M\times N$ cascaded channel for the $k$th user. Using the virtual angular-domain representation, $\mathbf{H}_{k}\in\mathbb{C}^{M\times N}$ can be decomposed as

$$\mathbf{H}_{k}=\mathbf{U}_{M}\tilde{\mathbf{H}}_{k}\mathbf{U}_{N}^{T}, \qquad (4)$$

where $\tilde{\mathbf{H}}_{k}$ denotes the $M\times N$ angular cascaded channel, and $\mathbf{U}_{M}$ and $\mathbf{U}_{N}$ are the $M\times M$ and $N\times N$ dictionary unitary matrices at the BS and the RIS, respectively [5]. Since there are limited scatterers around the BS and the RIS, the angular cascaded channel $\tilde{\mathbf{H}}_{k}$ has only a few non-zero elements, i.e., it is sparse.
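As a sanity check on the virtual angular-domain representation (4), the sketch below builds unitary dictionaries as Kronecker products of normalized DFT matrices (a common choice for UPAs, assumed here rather than specified by the paper) and verifies that an on-grid rank-one cascaded channel maps to a single non-zero angular entry.

```python
import numpy as np
from scipy.linalg import dft

def upa_dictionary(N1, N2):
    """Unitary angular dictionary for an N1 x N2 UPA, assumed to be the Kronecker
    product of normalized DFT matrices (on-grid virtual angles)."""
    return np.kron(dft(N1) / np.sqrt(N1), dft(N2) / np.sqrt(N2))

M1 = M2 = 8
N1 = N2 = 16
U_M = upa_dictionary(M1, M2)                 # M x M dictionary, M = 64
U_N = upa_dictionary(N1, N2)                 # N x N dictionary, N = 256

# Toy one-path cascaded channel built from on-grid dictionary columns:
b = U_M[:, 5]                                # BS angular bin 5
a = U_N[:, 17]                               # RIS angular bin 17
H_k = np.outer(b, a)                         # rank-one spatial cascaded channel
H_tilde = U_M.conj().T @ H_k @ U_N.conj()    # invert Eq. (4): H_tilde = U_M^H H_k U_N^*
print(np.count_nonzero(np.abs(H_tilde) > 1e-9))   # -> 1 non-zero element
```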

II-B Problem Formulation

In this paper, we assume that the direct channel between the BS and the user is known at the BS, since it can be easily estimated as in conventional wireless communication systems [5]. Therefore, we only focus on the cascaded channel estimation problem.

By adopting the widely used orthogonal pilot transmission strategy, all users transmit known pilot symbols to the BS via the RIS over $Q$ time slots for the uplink channel estimation. Specifically, in the $q$th ($q=1,2,\cdots,Q$) time slot, the effective received signal $\mathbf{y}_{k,q}\in\mathbb{C}^{M\times 1}$ at the BS for the $k$th user, after removing the impact of the direct channel, can be represented as

$$\mathbf{y}_{k,q}=\mathbf{G}\,\mathrm{diag}\left(\bm{\theta}_{q}\right)\mathbf{h}_{r,k}s_{k,q}+\mathbf{w}_{k,q}=\mathbf{G}\,\mathrm{diag}\left(\mathbf{h}_{r,k}\right)\bm{\theta}_{q}s_{k,q}+\mathbf{w}_{k,q}, \qquad (5)$$

where $s_{k,q}$ is the pilot symbol sent by the $k$th user, $\bm{\theta}_{q}=[\theta_{q,1},\cdots,\theta_{q,N}]^{T}$ is the $N\times 1$ reflecting vector at the RIS with $\theta_{q,n}$ representing the reflecting coefficient of the $n$th RIS element ($n=1,\cdots,N$) in the $q$th time slot, and $\mathbf{w}_{k,q}\sim\mathcal{CN}\left(0,\sigma^{2}\mathbf{I}_{M}\right)$ is the $M\times 1$ received noise with $\sigma^{2}$ representing the noise power. According to the cascaded channel $\mathbf{H}_{k}=\mathbf{G}\,\mathrm{diag}\left(\mathbf{h}_{r,k}\right)$, we can rewrite (5) as

$$\mathbf{y}_{k,q}=\mathbf{H}_{k}\bm{\theta}_{q}s_{k,q}+\mathbf{w}_{k,q}. \qquad (6)$$
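The second equality in (5) relies on the identity $\mathrm{diag}(\bm{\theta}_{q})\mathbf{h}_{r,k}=\mathrm{diag}(\mathbf{h}_{r,k})\bm{\theta}_{q}$, which moves the controllable reflecting vector to the right of the channel-dependent term. The following minimal numerical check (our own illustration with random toy data) confirms it.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))   # toy RIS-to-BS channel
theta = np.exp(1j * 2 * np.pi * rng.random(6))                       # toy reflecting vector
h = rng.standard_normal(6) + 1j * rng.standard_normal(6)             # toy user-to-RIS channel

# diag(theta) h == diag(h) theta, hence G diag(theta) h == (G diag(h)) theta == H_k theta
lhs = G @ np.diag(theta) @ h
rhs = (G @ np.diag(h)) @ theta
print(np.allclose(lhs, rhs))   # True
```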

After $Q$ time slots of pilot transmission, by assuming $s_{k,q}=1$, we can obtain the $M\times Q$ overall measurement matrix $\mathbf{Y}_{k}=[\mathbf{y}_{k,1},\cdots,\mathbf{y}_{k,Q}]$ as

$$\mathbf{Y}_{k}=\mathbf{H}_{k}\bm{\Theta}+\mathbf{W}_{k}, \qquad (7)$$

where $\bm{\Theta}=[\bm{\theta}_{1},\cdots,\bm{\theta}_{Q}]$ and $\mathbf{W}_{k}=[\mathbf{w}_{k,1},\cdots,\mathbf{w}_{k,Q}]$. By substituting (4) into (7), we can obtain

$$\mathbf{Y}_{k}=\mathbf{U}_{M}\tilde{\mathbf{H}}_{k}\mathbf{U}_{N}^{T}\bm{\Theta}+\mathbf{W}_{k}. \qquad (8)$$

Denoting $\tilde{\mathbf{Y}}_{k}=\left(\mathbf{U}_{M}^{H}\mathbf{Y}_{k}\right)^{H}$ as the $Q\times M$ effective measurement matrix and $\tilde{\mathbf{W}}_{k}=\left(\mathbf{U}_{M}^{H}\mathbf{W}_{k}\right)^{H}$ as the $Q\times M$ effective noise matrix, (8) can be rewritten as a CS model:

$$\tilde{\mathbf{Y}}_{k}=\tilde{\bm{\Theta}}\tilde{\mathbf{H}}_{k}^{H}+\tilde{\mathbf{W}}_{k}, \qquad (9)$$

where $\tilde{\bm{\Theta}}=\left(\mathbf{U}_{N}^{T}\bm{\Theta}\right)^{H}$ is the $Q\times N$ sensing matrix. Based on (9), the angular cascaded channel of each user $k$ can be estimated separately by conventional CS algorithms, such as the OMP algorithm. However, to guarantee the estimation accuracy, the pilot overhead required by conventional CS algorithms is still high.
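For reference, the sketch below shows how the received pilots of (7)-(8) are rearranged into the CS model (9); the helper name `to_cs_model` is ours, and the shapes follow the definitions above.

```python
import numpy as np

def to_cs_model(Y_k, Theta, U_M, U_N):
    """Rearrange the received pilots (7)-(8) into the sparse-recovery form of Eq. (9):
    Y_tilde = Theta_tilde @ H_tilde^H + W_tilde.  (Sketch; names are ours.)
    Y_k: M x Q received pilots; Theta: N x Q reflecting vectors (one column per slot)."""
    Y_tilde = (U_M.conj().T @ Y_k).conj().T      # Q x M effective measurement matrix
    Theta_tilde = (U_N.T @ Theta).conj().T       # Q x N sensing matrix
    return Y_tilde, Theta_tilde
```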

III Joint Channel Estimation for RIS Assisted Wireless Communication Systems

In this section, we will first reveal the double-structured sparsity of the angular cascaded channels. Then, by exploiting this important channel characteristic, we will propose a DS-OMP based cascaded channel estimation scheme to reduce the pilot overhead. Finally, the computational complexity of the proposed scheme will be analyzed.

III-A Double-Structured Sparsity of Angular Cascaded Channels

In order to further explore the sparsity of the angular cascaded channel in both its rows and columns, the angular cascaded channel $\tilde{\mathbf{H}}_{k}$ in (4) can be expressed as

$$\tilde{\mathbf{H}}_{k}=\sqrt{\frac{MN}{L_{G}L_{r,k}}}\sum_{l_{1}=1}^{L_{G}}\sum_{l_{2}=1}^{L_{r,k}}\alpha^{G}_{l_{1}}\alpha^{r,k}_{l_{2}}\,\tilde{\mathbf{b}}\left(\vartheta^{G_{r}}_{l_{1}},\psi^{G_{r}}_{l_{1}}\right)\tilde{\mathbf{a}}^{T}\left(\vartheta^{G_{t}}_{l_{1}}+\vartheta^{r,k}_{l_{2}},\psi^{G_{t}}_{l_{1}}+\psi^{r,k}_{l_{2}}\right), \qquad (10)$$

where both $\tilde{\mathbf{b}}\left(\vartheta,\psi\right)=\mathbf{U}_{M}^{H}\mathbf{b}\left(\vartheta,\psi\right)$ and $\tilde{\mathbf{a}}\left(\vartheta,\psi\right)=\mathbf{U}_{N}^{H}\mathbf{a}\left(\vartheta,\psi\right)$ have only one non-zero element, whose position corresponds to the column of $\mathbf{U}_{M}$ or $\mathbf{U}_{N}$ that matches the array steering vector in the direction $\left(\vartheta,\psi\right)$. Based on (10), we can find that each complete reflecting path $(l_{1},l_{2})$ provides one non-zero element of $\tilde{\mathbf{H}}_{k}$, whose row index depends on $\left(\vartheta^{G_{r}}_{l_{1}},\psi^{G_{r}}_{l_{1}}\right)$ and whose column index depends on $\left(\vartheta^{G_{t}}_{l_{1}}+\vartheta^{r,k}_{l_{2}},\psi^{G_{t}}_{l_{1}}+\psi^{r,k}_{l_{2}}\right)$. Therefore, $\tilde{\mathbf{H}}_{k}$ has $L_{G}$ non-zero rows, and each non-zero row has $L_{r,k}$ non-zero elements. The total number of non-zero elements is $L_{G}L_{r,k}$, which is usually much smaller than $MN$.

Figure 1: Double-structured sparsity of the angular cascaded channels.

More importantly, we can find that the different sparse channels $\{\tilde{\mathbf{H}}_{k}\}_{k=1}^{K}$ exhibit a double-structured sparsity, as shown in Fig. 1. Firstly, since different users communicate with the BS via the common RIS, the channel $\mathbf{G}$ from the RIS to the BS is common to all users. From (10), we can also find that $\{(\vartheta^{G_{r}}_{l_{1}},\psi^{G_{r}}_{l_{1}})\}_{l_{1}=1}^{L_{G}}$ is independent of the user index $k$. Therefore, the non-zero elements of $\{\tilde{\mathbf{H}}_{k}\}_{k=1}^{K}$ lie in $L_{G}$ completely common rows. Secondly, since different users share part of the scatterers between the RIS and the users, $\{\mathbf{h}_{r,k}\}_{k=1}^{K}$ may enjoy partially common paths with the same angles at the RIS. Let $L_{c}$ ($L_{c}\leq L_{r,k},\forall k$) denote the number of common paths of $\{\mathbf{h}_{r,k}\}_{k=1}^{K}$. Then, for $\forall l_{1}$, there always exist $L_{c}$ column angles $\{(\vartheta^{G_{t}}_{l_{1}}+\vartheta^{r,k}_{l_{2}},\psi^{G_{t}}_{l_{1}}+\psi^{r,k}_{l_{2}})\}_{l_{2}=1}^{L_{c}}$ shared by $\{\tilde{\mathbf{H}}_{k}\}_{k=1}^{K}$. That is to say, for each common non-zero row $l_{1}$ ($l_{1}=1,2,\cdots,L_{G}$), $\{\tilde{\mathbf{H}}_{k}\}_{k=1}^{K}$ share $L_{c}$ common non-zero columns. This double-structured sparsity of the angular cascaded channels can be summarized as follows from the perspectives of rows and columns, respectively.

  • Row-structured sparsity: Let $\Omega_{r}^{k}$ denote the set of row indexes of the non-zero elements of $\tilde{\mathbf{H}}_{k}$. Then we have

    $$\Omega_{r}^{1}=\Omega_{r}^{2}=\cdots=\Omega_{r}^{K}=\Omega_{r}, \qquad (11)$$

    where $\Omega_{r}$ represents the completely common row support of $\{\tilde{\mathbf{H}}_{k}\}_{k=1}^{K}$.

  • Partially column-structured sparsity: Let $\Omega_{c}^{l_{1},k}$ denote the set of column indexes of the non-zero elements in the $l_{1}$th non-zero row of $\tilde{\mathbf{H}}_{k}$. Then we have

    $$\Omega_{c}^{l_{1},1}\cap\Omega_{c}^{l_{1},2}\cap\cdots\cap\Omega_{c}^{l_{1},K}=\Omega_{c}^{l_{1},\mathrm{Com}},\quad l_{1}=1,2,\cdots,L_{G}, \qquad (12)$$

    where $\Omega_{c}^{l_{1},\mathrm{Com}}$ represents the partially common column support for the $l_{1}$th non-zero row of $\{\tilde{\mathbf{H}}_{k}\}_{k=1}^{K}$.

Based on the above double-structured sparsity, the cascaded channels for different users can be jointly estimated to improve the channel estimation accuracy.

III-B Proposed DS-OMP Based Cascaded Channel Estimation

In this subsection, we propose the DS-OMP based cascaded channel estimation scheme by integrating the double-structured sparsity into the classical OMP algorithm. The overall procedure is summarized in Algorithm 1, which includes three key stages to detect the supports of the angular cascaded channels.

Input: $\tilde{\mathbf{Y}}_{k},\forall k$; $\tilde{\bm{\Theta}}$; $L_{G}$; $L_{r,k},\forall k$; $L_{c}$.
Initialization: $\hat{\tilde{\mathbf{H}}}_{k}=\mathbf{0}_{M\times N},\forall k$.
1. Stage 1: Obtain the estimated completely common row support $\hat{\Omega}_{r}$ by Algorithm 2.
2. Stage 2: Obtain the estimated partially common column supports $\{\hat{\Omega}_{c}^{l_{1},\mathrm{Com}}\}_{l_{1}=1}^{L_{G}}$ based on $\hat{\Omega}_{r}$ by Algorithm 3.
3. Stage 3: Obtain the estimated column supports $\{\{\hat{\Omega}_{c}^{l_{1},k}\}_{l_{1}=1}^{L_{G}}\}_{k=1}^{K}$ based on $\hat{\Omega}_{r}$ and $\{\hat{\Omega}_{c}^{l_{1},\mathrm{Com}}\}_{l_{1}=1}^{L_{G}}$ by Algorithm 4.
4. for $l_{1}=1,2,\cdots,L_{G}$ do
5.     for $k=1,2,\cdots,K$ do
6.        $\hat{\tilde{\mathbf{H}}}^{H}_{k}(\hat{\Omega}_{c}^{l_{1},k},\hat{\Omega}_{r}(l_{1}))=\tilde{\bm{\Theta}}^{\dagger}(:,\hat{\Omega}_{c}^{l_{1},k})\tilde{\mathbf{Y}}_{k}(:,\hat{\Omega}_{r}(l_{1}))$
7.     end for
8. end for
9. $\hat{\mathbf{H}}_{k}=\mathbf{U}_{M}\hat{\tilde{\mathbf{H}}}_{k}\mathbf{U}_{N}^{T},\forall k$
Output: Estimated cascaded channel matrices $\hat{\mathbf{H}}_{k},\forall k$.
Algorithm 1 DS-OMP based cascaded channel estimation

The main procedure of Algorithm 1 can be explained as follows. Firstly, thanks to the row-structured sparsity, the completely common row support $\Omega_{r}$, which consists of the $L_{G}$ row indexes associated with the $L_{G}$ non-zero rows, is jointly estimated in Step 1. Secondly, thanks to the partially column-structured sparsity, the partially common column support $\Omega_{c}^{l_{1},\mathrm{Com}}$ of the $l_{1}$th non-zero row is jointly estimated in Step 2. Thirdly, the user-specific column supports of each user $k$ are individually estimated in Step 3. After detecting the supports of all sparse matrices, we adopt the LS algorithm to obtain the corresponding estimated matrices $\{\hat{\tilde{\mathbf{H}}}_{k}\}_{k=1}^{K}$ in Steps 4-8. It should be noted that the sparse signal in (9) is $\tilde{\mathbf{H}}_{k}^{H}$, so the sparse matrix estimated by the LS algorithm in Step 6 is $\hat{\tilde{\mathbf{H}}}_{k}^{H}$. Finally, we can obtain the estimated cascaded channels $\{\hat{\mathbf{H}}_{k}\}_{k=1}^{K}$ by transforming the angular-domain channels back into the spatial domain in Step 9.
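As a complement to the pseudocode, the following Python sketch mirrors Steps 4-9 of Algorithm 1: given the detected supports, it solves a small LS problem per non-zero row and per user, and then maps the angular estimate back to the spatial domain via (4). All function and variable names are our own, and the supports are assumed to be produced by the three stages detailed below.

```python
import numpy as np

def ls_recover(Y_tilde_list, Theta_tilde, row_supp, col_supps, U_M, U_N):
    """Sketch of Steps 4-9 of Algorithm 1: per detected non-zero row and per user,
    recover the non-zero entries of H_tilde_k^H by least squares, then return the
    spatial-domain cascaded channel via Eq. (4)."""
    M, N = U_M.shape[0], U_N.shape[0]
    H_hat = []
    for k, Y in enumerate(Y_tilde_list):              # Y: Q x M effective measurements of user k
        Ht = np.zeros((M, N), dtype=complex)          # angular-domain estimate of user k
        for i, m in enumerate(row_supp):              # m: BS angular bin of the i-th non-zero row
            cols = list(col_supps[k][i])              # RIS angular bins detected for user k
            A = Theta_tilde[:, cols]                  # Q x |cols| sub-sensing matrix
            x = np.linalg.pinv(A) @ Y[:, m]           # LS solution of Eq. (9) restricted to cols
            Ht[m, cols] = x.conj()                    # (9) involves H_tilde^H, hence the conjugate
        H_hat.append(U_M @ Ht @ U_N.T)                # Eq. (4): back to the spatial domain
    return H_hat
```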

In the following, we introduce in detail how the completely common row support, the partially common column supports, and the individual column supports are estimated in the three stages.

Input: $\tilde{\mathbf{Y}}_{k},\forall k$; $L_{G}$.
Initialization: $\mathbf{g}=\mathbf{0}_{M\times 1}$.
1. for $k=1,2,\cdots,K$ do
2.    $\mathbf{g}(m)=\mathbf{g}(m)+\|\tilde{\mathbf{Y}}_{k}(:,m)\|^{2}_{F}$, $\forall m=1,2,\cdots,M$
3. end for
4. $\hat{\Omega}_{r}=\Gamma\left(\mathcal{T}(\mathbf{g},L_{G})\right)$
Output: Estimated completely common row support $\hat{\Omega}_{r}$.
Algorithm 2 Joint completely common row support estimation

1) Stage 1: Estimating the completely common row support. Thanks to the row-structured sparsity of the angular cascaded channels, we can jointly estimate the completely common row support $\Omega_{r}$ of $\{\tilde{\mathbf{H}}_{k}\}_{k=1}^{K}$ by Algorithm 2.

From the virtual angular-domain channel representation (4), we can find that the non-zero rows of $\{\tilde{\mathbf{H}}_{k}\}_{k=1}^{K}$ correspond to the columns with high power in the effective measurement matrices $\{\tilde{\mathbf{Y}}_{k}\}_{k=1}^{K}$. Since $\{\tilde{\mathbf{H}}_{k}\}_{k=1}^{K}$ have completely common non-zero rows, $\{\tilde{\mathbf{Y}}_{k}\}_{k=1}^{K}$ can be jointly utilized to estimate the completely common row support $\Omega_{r}$, which improves the robustness against noise. Specifically, we use the $M\times 1$ vector $\mathbf{g}$ to accumulate the column powers of $\{\tilde{\mathbf{Y}}_{k}\}_{k=1}^{K}$, as in Step 2 of Algorithm 2. Finally, the indexes of the $L_{G}$ elements with the largest amplitudes in $\mathbf{g}$ are selected as the estimated completely common row support $\hat{\Omega}_{r}$ in Step 4, where $\mathcal{T}(\mathbf{x},L)$ denotes a pruning operator on $\mathbf{x}$ that sets all but the $L$ elements with the largest amplitudes to zero, and $\Gamma(\mathbf{x})$ denotes the support of $\mathbf{x}$, i.e., $\Gamma(\mathbf{x})=\{i:\mathbf{x}(i)\neq 0\}$.
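A compact Python sketch of Algorithm 2 is given below; it accumulates the per-column powers of all users' effective measurements and keeps the $L_{G}$ strongest columns (the function name and interface are our own).

```python
import numpy as np

def estimate_row_support(Y_tilde_list, L_G):
    """Sketch of Algorithm 2: sum the per-column powers of the effective
    measurements over all users and keep the L_G strongest columns."""
    g = sum(np.sum(np.abs(Y) ** 2, axis=0) for Y in Y_tilde_list)  # length-M power vector
    return np.sort(np.argsort(g)[-L_G:])                           # indexes of the L_G largest entries
```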

After obtaining the $L_{G}$ non-zero rows by Algorithm 2, we focus on estimating the column support $\Omega_{c}^{l_{1},k}$ for each non-zero row $l_{1}$ and each user $k$ in the following Stages 2 and 3.

2) Stage 2: Estimating the partially common column supports. Thanks to the partially column-structured sparsity of the angular cascaded channels, we can jointly estimate the partially common column supports $\{\Omega_{c}^{l_{1},\mathrm{Com}}\}_{l_{1}=1}^{L_{G}}$ of $\{\tilde{\mathbf{H}}_{k}\}_{k=1}^{K}$ by Algorithm 3.

Input: $\tilde{\mathbf{Y}}_{k},\forall k$; $\tilde{\bm{\Theta}}$; $L_{G}$; $L_{r,k},\forall k$; $L_{c}$; $\hat{\Omega}_{r}$.
Initialization: $\hat{\Omega}_{c}^{l_{1},k}=\emptyset,\forall l_{1},k$; $\mathbf{c}^{l_{1}}=\mathbf{0}_{N\times 1},\forall l_{1}$.
1. for $l_{1}=1,2,\cdots,L_{G}$ do
2.    for $k=1,2,\cdots,K$ do
3.       $\tilde{\mathbf{y}}_{k}=\tilde{\mathbf{Y}}_{k}(:,\hat{\Omega}_{r}(l_{1}))$, $\tilde{\mathbf{r}}_{k}=\tilde{\mathbf{y}}_{k}$
4.       for $l_{2}=1,2,\cdots,L_{r,k}$ do
5.         $n^{*}=\mathop{\mathrm{argmax}}_{n=1,2,\cdots,N}\|\tilde{\bm{\Theta}}^{H}(:,n)\tilde{\mathbf{r}}_{k}\|^{2}_{F}$
6.         $\hat{\Omega}_{c}^{l_{1},k}=\hat{\Omega}_{c}^{l_{1},k}\cup n^{*}$
7.         $\hat{\tilde{\mathbf{h}}}_{k}=\mathbf{0}_{N\times 1}$
8.         $\hat{\tilde{\mathbf{h}}}_{k}(\hat{\Omega}_{c}^{l_{1},k})=\tilde{\bm{\Theta}}^{\dagger}(:,\hat{\Omega}_{c}^{l_{1},k})\tilde{\mathbf{y}}_{k}$
9.         $\tilde{\mathbf{r}}_{k}=\tilde{\mathbf{y}}_{k}-\tilde{\bm{\Theta}}\hat{\tilde{\mathbf{h}}}_{k}$
10.        $\mathbf{c}^{l_{1}}(n^{*})=\mathbf{c}^{l_{1}}(n^{*})+1$
11.      end for
12.   end for
13.   $\hat{\Omega}_{c}^{l_{1},\mathrm{Com}}=\Gamma\left(\mathcal{T}(\mathbf{c}^{l_{1}},L_{c})\right)$
14. end for
Output: Estimated partially common column supports $\{\hat{\Omega}_{c}^{l_{1},\mathrm{Com}}\}_{l_{1}=1}^{L_{G}}$.
Algorithm 3 Joint partially common column support estimation

For the $l_{1}$th non-zero row, we only need to utilize the effective measurement vector $\tilde{\mathbf{y}}_{k}=\tilde{\mathbf{Y}}_{k}(:,\hat{\Omega}_{r}(l_{1}))$ to estimate the partially common column support $\Omega_{c}^{l_{1},\mathrm{Com}}$. The basic idea is that we first estimate a column support $\Omega_{c}^{l_{1},k}$ with $L_{r,k}$ indexes for each user $k$, and then select the $L_{c}$ indexes that are selected most often across all $\{\Omega_{c}^{l_{1},k}\}_{k=1}^{K}$ as the estimated partially common column support $\hat{\Omega}_{c}^{l_{1},\mathrm{Com}}$.

In order to estimate the column support for each user $k$, the correlation between the sensing matrix $\tilde{\bm{\Theta}}$ and the residual vector $\tilde{\mathbf{r}}_{k}$ needs to be calculated. As shown in Step 5 of Algorithm 3, the index of the column of $\tilde{\bm{\Theta}}$ most correlated with $\tilde{\mathbf{r}}_{k}$ is regarded as the newly found column support index $n^{*}$. Based on the updated column support $\hat{\Omega}_{c}^{l_{1},k}$ in Step 6, the estimated sparse vector $\hat{\tilde{\mathbf{h}}}_{k}$ is obtained by the LS algorithm in Step 8. Then, the residual vector $\tilde{\mathbf{r}}_{k}$ is updated in Step 9 by removing the contribution of the non-zero elements estimated so far. In particular, the $N\times 1$ vector $\mathbf{c}^{l_{1}}$ counts how many times each column index is selected in Step 10. Finally, the $L_{c}$ indexes of the elements with the largest values in $\mathbf{c}^{l_{1}}$ are selected as the estimated partially common column support $\hat{\Omega}_{c}^{l_{1},\mathrm{Com}}$ in Step 13.
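The voting procedure of Algorithm 3 can be sketched in Python as follows; for simplicity the sketch assumes the same number of paths $L_{r}$ for all users, and all identifiers are our own illustration.

```python
import numpy as np

def estimate_common_col_supports(Y_tilde_list, Theta_tilde, row_supp, L_r, L_c):
    """Sketch of Algorithm 3: per non-zero row, run an OMP pass for every user,
    count how often each RIS angular bin is selected, and keep the L_c most voted bins."""
    Q, N = Theta_tilde.shape
    common = []
    for m in row_supp:                                   # loop over detected non-zero rows
        votes = np.zeros(N)
        for Y in Y_tilde_list:                           # one OMP pass per user
            y = Y[:, m]
            r = y.copy()
            supp = []
            for _ in range(L_r):
                corr = np.abs(Theta_tilde.conj().T @ r)  # correlation with every column
                n_star = int(np.argmax(corr))
                supp.append(n_star)
                A = Theta_tilde[:, supp]
                x = np.linalg.pinv(A) @ y                # LS re-estimate on the current support
                r = y - A @ x                            # residual update
                votes[n_star] += 1
            # only the vote counts are kept here; the user-specific supports are redone in Stage 3
        common.append(np.sort(np.argsort(votes)[-L_c:])) # the L_c most frequently selected bins
    return common                                        # one L_c-element support per non-zero row
```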

3) Stage 3: Estimating the individual column supports. Based on the estimated completely common row support $\hat{\Omega}_{r}$ and the estimated partially common column supports $\{\hat{\Omega}_{c}^{l_{1},\mathrm{Com}}\}_{l_{1}=1}^{L_{G}}$, the column support $\Omega_{c}^{l_{1},k}$ for each non-zero row $l_{1}$ and each user $k$ can be estimated by Algorithm 4.

Input: $\tilde{\mathbf{Y}}_{k},\forall k$; $\tilde{\bm{\Theta}}$; $L_{G}$; $L_{r,k},\forall k$; $L_{c}$; $\hat{\Omega}_{r}$; $\{\hat{\Omega}_{c}^{l_{1},\mathrm{Com}}\}_{l_{1}=1}^{L_{G}}$.
Initialization: $\hat{\Omega}_{c}^{l_{1},k}=\hat{\Omega}_{c}^{l_{1},\mathrm{Com}},\forall l_{1},k$.
1. for $l_{1}=1,2,\cdots,L_{G}$ do
2.    for $k=1,2,\cdots,K$ do
3.       $\tilde{\mathbf{y}}_{k}=\tilde{\mathbf{Y}}_{k}(:,\hat{\Omega}_{r}(l_{1}))$
4.       $\hat{\tilde{\mathbf{h}}}_{k}=\mathbf{0}_{N\times 1}$
5.       $\hat{\tilde{\mathbf{h}}}_{k}(\hat{\Omega}_{c}^{l_{1},k})=\tilde{\bm{\Theta}}^{\dagger}(:,\hat{\Omega}_{c}^{l_{1},k})\tilde{\mathbf{y}}_{k}$
6.       $\tilde{\mathbf{r}}_{k}=\tilde{\mathbf{y}}_{k}-\tilde{\bm{\Theta}}\hat{\tilde{\mathbf{h}}}_{k}$
7.       for $l_{2}=1,2,\cdots,L_{r,k}-L_{c}$ do
8.         $n^{*}=\mathop{\mathrm{argmax}}_{n=1,2,\cdots,N}\|\tilde{\bm{\Theta}}^{H}(:,n)\tilde{\mathbf{r}}_{k}\|^{2}_{F}$
9.         $\hat{\Omega}_{c}^{l_{1},k}=\hat{\Omega}_{c}^{l_{1},k}\cup n^{*}$
10.         $\hat{\tilde{\mathbf{h}}}_{k}=\mathbf{0}_{N\times 1}$
11.         $\hat{\tilde{\mathbf{h}}}_{k}(\hat{\Omega}_{c}^{l_{1},k})=\tilde{\bm{\Theta}}^{\dagger}(:,\hat{\Omega}_{c}^{l_{1},k})\tilde{\mathbf{y}}_{k}$
12.        $\tilde{\mathbf{r}}_{k}=\tilde{\mathbf{y}}_{k}-\tilde{\bm{\Theta}}\hat{\tilde{\mathbf{h}}}_{k}$
13.      end for
14.   end for
15. end for
Output: Estimated individual column supports $\{\{\hat{\Omega}_{c}^{l_{1},k}\}_{l_{1}=1}^{L_{G}}\}_{k=1}^{K}$.
Algorithm 4 Individual column supports estimation

For the $l_{1}$th non-zero row, we have already estimated $L_{c}$ column support indexes by Algorithm 3. Thus, there remain $L_{r,k}-L_{c}$ user-specific column support indexes to be estimated for each user $k$. The column support $\hat{\Omega}_{c}^{l_{1},k}$ is initialized as $\hat{\Omega}_{c}^{l_{1},\mathrm{Com}}$. Based on $\hat{\Omega}_{c}^{l_{1},\mathrm{Com}}$, the estimated sparse vector $\hat{\tilde{\mathbf{h}}}_{k}$ and the residual vector $\tilde{\mathbf{r}}_{k}$ are initialized in Step 5 and Step 6. Then, the column support $\hat{\Omega}_{c}^{l_{1},k}$ for $\forall l_{1}$ and $\forall k$ can be estimated in Steps 7-13 by following the same idea as Algorithm 3.
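A matching sketch of Algorithm 4 is given below; each user's column support starts from the common part estimated above and is completed by $L_{r}-L_{c}$ standard OMP iterations (again assuming a common number of paths $L_{r}$ for all users; all identifiers are ours).

```python
import numpy as np

def estimate_user_col_supports(Y_tilde_list, Theta_tilde, row_supp, common_cols, L_r, L_c):
    """Sketch of Algorithm 4: initialize each user's column support with the common part
    and add the remaining L_r - L_c user-specific bins by standard OMP iterations."""
    col_supps = []
    for Y in Y_tilde_list:                                   # Y: Q x M effective measurements of one user
        per_user = []
        for i, m in enumerate(row_supp):
            y = Y[:, m]
            supp = list(common_cols[i])                      # start from the common column support
            A = Theta_tilde[:, supp]
            r = y - A @ (np.linalg.pinv(A) @ y)              # residual after the common part
            for _ in range(L_r - L_c):
                n_star = int(np.argmax(np.abs(Theta_tilde.conj().T @ r)))
                supp.append(n_star)
                A = Theta_tilde[:, supp]
                r = y - A @ (np.linalg.pinv(A) @ y)          # LS re-estimate and residual update
            per_user.append(np.array(supp))
        col_supps.append(per_user)
    return col_supps                                         # col_supps[k][i]: support of user k, row i
```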

Through the above three stages, the supports of all angular cascaded channels are estimated by exploiting the double-structured sparsity. It should be pointed out that, if there are no common scatterers between the RIS and the users, the double-structured sparse channel degenerates into the row-structured sparse channel. In this case, the cascaded channel estimation can still be carried out by the proposed DS-OMP algorithm, with Stage 2 removed.

III-C Computational Complexity Analysis

In this subsection, the computational complexity of the proposed DS-OMP algorithm is analyzed in terms of the three support-detection stages. In Stage 1, the computational complexity mainly comes from Step 2 of Algorithm 2, which calculates the power of the $M$ columns of the $Q\times M$ matrix $\tilde{\mathbf{Y}}_{k}$ for $k=1,2,\cdots,K$; the corresponding computational complexity is $\mathcal{O}(KMQ)$. In Stage 2, for each non-zero row $l_{1}$ and each user $k$ in Algorithm 3, the computational complexity $\mathcal{O}(NQL_{r,k}^{3})$ is the same as that of the OMP algorithm [6]. Considering the $L_{G}K$ iterations, the overall computational complexity of Algorithm 3 is $\mathcal{O}(L_{G}KNQL_{r,k}^{3})$. Similarly, the overall computational complexity of Algorithm 4 is $\mathcal{O}(L_{G}KNQ(L_{r,k}-L_{c})^{3})$. Therefore, the overall computational complexity of the proposed DS-OMP algorithm is $\mathcal{O}(KMQ)+\mathcal{O}(L_{G}KNQL_{r,k}^{3})$.

IV Simulation Results

In our simulations, the numbers of BS antennas, RIS elements, and users are set as $M=64$ ($M_{1}=8,M_{2}=8$), $N=256$ ($N_{1}=16,N_{2}=16$), and $K=16$, respectively. The number of paths between the RIS and the BS is $L_{G}=5$, and the number of paths from the $k$th user to the RIS is set as $L_{r,k}=8$ for $\forall k$. All spatial angles are assumed to lie on the quantized grids. Each element of the RIS reflecting matrix $\bm{\Theta}$ is selected from $\{-\frac{1}{\sqrt{N}},+\frac{1}{\sqrt{N}}\}$ by considering discrete phase shifts of the RIS [7]. We set $|\alpha^{G}_{l}|=10^{-3}d_{BR}^{-2.2}$, where $d_{BR}$ denotes the distance between the BS and the RIS and is assumed to be $d_{BR}=10$ m, and $|\alpha^{r,k}_{l}|=10^{-3}d_{RU}^{-2.8}$, where $d_{RU}$ denotes the distance between the RIS and the user and is assumed to be $d_{RU}=100$ m for $\forall k$ [7]. The SNR is defined as $\mathbb{E}\{\|\tilde{\bm{\Theta}}\tilde{\mathbf{H}}_{k}^{H}\|_{F}^{2}/\|\tilde{\mathbf{W}}_{k}\|_{F}^{2}\}$ according to (9) and is set to 0 dB.

We compare the proposed DS-OMP based scheme with the conventional CS based scheme [3] and the row-structured sparsity based scheme [4]. In the conventional CS based scheme, the OMP algorithm is used to estimate the sparse cascaded channel $\tilde{\mathbf{H}}_{k}$ for $\forall k$. In the row-structured sparsity based scheme, the common row support $\Omega_{r}$ with $L_{G}$ indexes is first estimated, and then, for each user $k$ and each non-zero row $l_{1}$, the column supports are estimated separately following the idea of the classical OMP algorithm. In addition, we consider the oracle LS scheme as our benchmark, where the supports of all sparse channels are assumed to be perfectly known.

Figure 2: NMSE performance comparison against the pilot overhead $Q$.

Fig. 2 shows the normalized mean square error (NMSE) performance comparison against the pilot overhead, i.e., the number of time slots $Q$ for pilot transmission. As shown in Fig. 2, to achieve the same estimation accuracy, the pilot overhead required by the proposed DS-OMP based scheme is lower than that of the two existing schemes [3, 4]. When there is no common path between the RIS and all users, i.e., $L_{c}=0$, the double-structured sparsity degenerates into the row-structured sparsity [4], so the NMSE performance of the proposed DS-OMP based scheme and the row-structured sparsity based scheme is the same. As the number of common paths $L_{c}$ between the RIS and the users increases, the NMSE performance of the proposed scheme improves and approaches the benchmark with perfect channel supports.
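For completeness, a common way to compute the NMSE plotted in Fig. 2 is sketched below; the exact averaging used in the paper's simulations is not specified here, so this definition (per-user normalization, averaged over users) is an assumption.

```python
import numpy as np

def nmse(H_hat_list, H_list):
    """NMSE averaged over users: mean of ||H_hat_k - H_k||_F^2 / ||H_k||_F^2 (our assumed definition)."""
    return np.mean([np.linalg.norm(Hh - H, 'fro') ** 2 / np.linalg.norm(H, 'fro') ** 2
                    for Hh, H in zip(H_hat_list, H_list)])
```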

V Conclusions

In this paper, we developed a low-overhead cascaded channel estimation scheme for RIS assisted wireless communication systems. Specifically, we first analyzed the double-structured sparsity of the angular cascaded channels among users. Based on this double-structured sparsity, we then proposed the DS-OMP algorithm to reduce the pilot overhead. Simulation results show that the pilot overhead required by the proposed DS-OMP algorithm is lower than that of existing algorithms. For future work, we will apply the double-structured sparsity to the super-resolution channel estimation problem by considering that the channel angles are continuous in practice.

References

[1] L. Dai et al., "Reconfigurable intelligent surface-based wireless communications: Antenna design, prototyping, and experimental results," IEEE Access, vol. 8, pp. 45913–45923, Mar. 2020.
[2] M. Di Renzo et al., "Reconfigurable intelligent surfaces vs. relaying: Differences, similarities, and performance comparison," IEEE Open J. Commun. Soc., vol. 1, pp. 798–807, Jun. 2020.
[3] P. Wang, J. Fang, H. Duan, and H. Li, "Compressed channel estimation for intelligent reflecting surface-assisted millimeter wave systems," IEEE Signal Process. Lett., vol. 27, pp. 905–909, May 2020.
[4] J. Chen, Y.-C. Liang, H. V. Cheng, and W. Yu, "Channel estimation for reconfigurable intelligent surface aided multi-user MIMO systems," arXiv preprint arXiv:1912.03619, Dec. 2019.
[5] C. Hu, L. Dai, T. Mir, Z. Gao, and J. Fang, "Super-resolution channel estimation for mmWave massive MIMO with hybrid precoding," IEEE Trans. Veh. Technol., vol. 67, no. 9, pp. 8954–8958, Sep. 2018.
[6] X. Gao, L. Dai, S. Zhou, A. M. Sayeed, and L. Hanzo, "Wideband beamspace channel estimation for millimeter-wave MIMO systems relying on lens antenna arrays," IEEE Trans. Signal Process., vol. 67, no. 18, pp. 4809–4824, Sep. 2019.
[7] Q. Wu and R. Zhang, "Beamforming optimization for wireless network aided by intelligent reflecting surface with discrete phase shifts," IEEE Trans. Commun., vol. 68, no. 3, pp. 1838–1851, 2020.