
Sparse Bayesian Learning-Based 3D Spectrum Environment Map Construction: Sampling Optimization, Scenario-Dependent Dictionary Construction and Sparse Recovery

Jie Wang, Qiuming Zhu, Zhipeng Lin, Qihui Wu, Yang Huang, Xuezhao Cai, Weizhi Zhong, Yi Zhao. This work was supported in part by the National Key Scientific Instrument and Equipment Development Project (No. 61827801), in part by the National Natural Science Foundation of China (No. 62271250), in part by the Natural Science Foundation of Jiangsu Province (No. BK20211182), and in part by the open research fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2022D04). J. Wang, Q. Zhu, Z. Lin, Y. Huang, Y. Zhao, Q. Wu and X. Cai are with The Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space, College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China (e-mail: bx2104906wangjie; zhuqiuming; linlzp; yang.huang.ceie; zhaoyi@nuaa.edu.cn, wuqihui2014@sina.com, caixz3312@outlook.com). W. Zhong is with The Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space, College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China (e-mail: zhongwz@nuaa.edu.cn).
Abstract

The spectrum environment map (SEM), which visualizes the information of the invisible electromagnetic spectrum, is vital for the monitoring, management, and security of spectrum resources in cognitive radio (CR) networks. In view of the limited number of spectrum sensors and the constrained sampling time, this paper presents a new three-dimensional (3D) SEM construction scheme based on sparse Bayesian learning (SBL). Firstly, we construct a scenario-dependent channel dictionary matrix by considering the propagation characteristics of the scenario of interest. To improve sampling efficiency, a maximum mutual information (MMI)-based optimization algorithm is developed for the layout of sampling sensors. Then, a maximum and minimum distance (MMD) clustering-based SBL algorithm is proposed to recover the spectrum data at the unsampled positions and construct the whole 3D SEM. Finally, we use simulation data of a campus scenario to construct 3D SEMs and compare the proposed method with the state-of-the-art. The recovery performance and the impact of different sparsity levels on the constructed SEMs are also analyzed. Numerical results show that the proposed scheme reduces the required number of spectrum sensors and achieves higher accuracy at low sampling rates.

Index Terms:
3D spectrum environment map, sparse Bayesian learning, mutual information, propagation channel model, clustering algorithm.

I Introduction

With the proliferation of various electronic devices, e.g., radio, radar, and navigation systems, the electromagnetic environment has become significantly more complex [1, 2, 3]. The spectrum environment map (SEM) visualizes spectrum-related information, including the time, frequency, and received signal strength (RSS) of signals, as well as the locations of sensor devices, on a geographical map [4, 5]. It is useful for abnormal spectral activity detection, radiation source localization, radio frequency (RF) resource management, and so on. However, constructing an accurate SEM for a scenario with many buildings is still difficult, because the number of sensor devices and the sampling time are limited in practice. Besides, with the development of space-air-ground integrated communication networks, electronic devices are distributed throughout the three-dimensional (3D) space.

Many SEM reconstruction methods have been proposed recently [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. They can be divided into two major categories, i.e., the direct construction methods driven by massive measured data and the indirect construction methods driven by channel propagation characteristics.

The direct construction methods typically employ interpolation algorithms to recover the missing data by mining the correlation within the given measured data. In [17], the authors utilized Kriging interpolation to construct two-dimensional (2D) SEMs at different frequencies. A tensor completion method was used in [9] to recover the missing data in both the spatial and temporal domains. Machine learning techniques have also been adopted for data-driven SEM construction. For example, from the perspective of image processing, the authors in [11] developed a generative adversarial network (GAN) to obtain the SEM from the sampling data. Deep neural networks were used in [10] to "learn" the intricate underlying structure from the given data and construct the SEM. Nevertheless, the aforementioned methods mainly focus on 2D SEM reconstruction and can only achieve satisfactory performance with a large amount of sampling data.

The indirect construction methods can greatly reduce the amount of sampling data by exploiting the rules of wireless signal propagation [18]. In [12, 13], the limited sampling data were used to estimate the transmitters' information, and the missing data were then recovered by using an ideal propagation model. Considering a realistic propagation model, the authors in [19] modeled the spectrum data as the tomographic accumulation of a spatial loss field.

However, in many practical cases, the number of sampling sensors is very limited and the spatial sampling rate may be much lower than the Nyquist rate. To tackle this issue, the compressed sensing (CS) technique is applied, which decomposes the measurement into the linear superposition of the sensing matrix and the sparse signal to realize data recovery under sparse sampling. For example, the authors in [20] constructed the SEM based on the least absolute shrinkage and selection operator (LASSO) with random sparse sensing data. In [15], the authors proposed an improved orthogonal matching pursuit (OMP) algorithm to build a compressed SEM with right-triangular (QR) pivoting-based sampling location optimization. However, traditional CS approaches, such as LASSO [21], OMP [22] and linear programming [23], provide only point estimates of the sparse signal. Besides, the SEM sensing matrix usually has high spatial correlation, which greatly deteriorates the recovery performance from noisy measurements. Sparse Bayesian learning (SBL) [24, 25, 26] can recover the exact sparse signal even under a highly correlated sensing matrix. An SBL-based SEM construction algorithm was proposed in [27]. The authors used a Laplacian function to describe the signal propagation model for sparse dictionary construction, which does not consider the realistic propagation scenario, especially the impact of buildings. Furthermore, the authors arranged the sampling sensors randomly, without optimizing the sampling locations (SLs) for different scenarios.

To fill these gaps, we propose a novel 3D SBL-based SEM construction scheme with optimized sensor sampling positions and a scenario-dependent dictionary that fully accounts for the scenario characteristics. The main novelties and contributions of this paper are summarized as follows:

  • A channel propagation dictionary design method for SEM construction under sparse sampling is proposed. By combining ray tracing (RT) simulation with a spatial interpolation algorithm, a scenario-dependent SEM construction dictionary is obtained.

  • A maximum mutual information (MMI)-based measurement matrix optimization architecture is proposed. By modeling the construction problem as a communication channel, the objective function is derived within the SBL framework and solved by a greedy algorithm, which improves the efficiency of spectrum data acquisition.

  • An improved SBL algorithm based on cluster analysis is developed for 3D compressed spectrum recovery. Maximum and minimum distance (MMD) clustering and dynamic threshold pruning are combined with the SBL, which recovers sparse signals and achieves accurate SEM construction.

The rest of this paper is organized as follows. Section II gives the 3D SEM construction model and the sparse sampling model. In Section III, the details of the proposed 3D SEM construction scheme are given and demonstrated. Then, Section IV presents the simulation and comparison results, and Section V concludes the paper.

Figure 1: (a): Realistic continuous 3D spectrum map under an urban scenario; (b): Constructed discrete 3D spectrum map.
Figure 2: Schematic illustration of 3D SEM construction process based on sparse sampling.

II System Model

II-A 3D SEM Model

As shown in Fig. 1, the region of interest (ROI) is discretized into several small cubes. Each cube is colored according to its RSS, where red cubes represent high RSS values and blue cubes represent low RSS values. The ROI constitutes a spectrum tensor $\bm{\chi}\in\mathbb{R}^{N_x\times N_y\times N_z}$ in the 3D space, where $N_x$, $N_y$, $N_z$ indicate the grid numbers along the $x$, $y$, $z$ dimensions, respectively. Technically, 3D SEM construction in this paper aims to recover the RSS values of all $N=N_x\times N_y\times N_z$ cubes based on the known RSS values of the sampled cubes.

A sparse signal vector $\bm{\omega}=[\omega_1,\omega_2,\ldots,\omega_n,\ldots,\omega_N]^{\mathrm{T}}\in\mathbb{R}^{N\times 1}$ can be defined as

$$\omega_n=\begin{cases}P_n^t,&\text{if there is an RF transmitter in the }n\text{th cube,}\\0,&\text{otherwise,}\end{cases}\quad(1)$$

where $P_n^t$ is the transmitting power of the RF transmitter in the $n$th cube. $\bm{\omega}$ is a $K$-sparse signal vector with $\|\bm{\omega}\|_0=K$. That is, if there are $K$ stationary RF transmitters denoted as $\{\mathbf{T}_k\}_{k=1}^K$, where $\mathbf{T}_k=(x_k^t,y_k^t,z_k^t)$ is the location of the $k$th RF transmitter, then $\bm{\omega}$ has $K$ ($K\ll N$) nonzero elements.

Suppose that we select $M$ SLs from all $N$ cubes, denoted by $\{\mathbf{S}_m\}_{m=1}^M$ with $\mathbf{S}_m=(x_m^s,y_m^s,z_m^s)$. The sampling rate is $r=M/N$. The Euclidean distance from the $k$th transmitter to the $m$th SL can be written as

$$d_{m,k}=\left\|\mathbf{T}_k-\mathbf{S}_m\right\|_2.\quad(2)$$

Under the line-of-sight (LOS) condition, the RSS from the $k$th transmitter at the $m$th SL can be approximated by using the free-space propagation model as

$$P_{m,k}^{r}=\frac{G_tG_r\lambda^2P_k^t}{(4\pi)^2 d_{m,k}^{\eta}},\quad(3)$$

where $G_t$ and $G_r$ are the antenna gains of the transceivers, $P_k^t$ denotes the transmitting power, $\lambda$ is the carrier wavelength, and $\eta$ is the path loss exponent. Since the RSS sensed at each SL may include the receive power from several RF transmitters [15], the total RSS can be approximately expressed as

$$t_m=\sum_{k=1}^{K}P_{m,k}^{r}.\quad(4)$$

It should be mentioned that the realistic propagation model is far more complex due to reflection, diffraction, and other propagation phenomena. The above data recovery method for the unsampled cubes is therefore only suitable for LOS propagation scenarios.
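As a minimal numerical sketch of (2)-(4); the function name, the unit antenna gains, and the default exponent $\eta=2$ are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def free_space_rss(transmitters, powers, locations, wavelength,
                   gt=1.0, gr=1.0, eta=2.0):
    """Total RSS at each sampling location under the LOS free-space
    model of Eqs. (2)-(4)."""
    rss = np.zeros(len(locations))
    for m, s in enumerate(locations):
        for k, tx in enumerate(transmitters):
            # Eq. (2): Euclidean distance from transmitter k to SL m
            d = np.linalg.norm(np.asarray(tx, float) - np.asarray(s, float))
            # Eq. (3): free-space receive power with path loss exponent eta
            rss[m] += gt * gr * wavelength**2 * powers[k] / ((4 * np.pi)**2 * d**eta)
    return rss  # Eq. (4): superposition over all K transmitters
```

The outer sum directly mirrors (4): each SL accumulates the free-space contribution of every transmitter.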

II-B Sparse Sampling Model

As shown in Fig. 2(a), the spectrum tensor $\bm{\chi}$ is first vectorized into $\mathbf{X}\in\mathbb{R}^{N\times 1}$. Compared with the total number of discretized cubes in the ROI, the number of stationary RF transmitters is much smaller ($K\ll N$). Since the spectrum vector $\mathbf{X}$ has high spatial correlation, it can be represented as the product of a sparse dictionary $\bm{\varphi}\in\mathbb{R}^{N\times N}$ and the sparse signal $\bm{\omega}$:

$$\mathbf{X}=\bm{\varphi}\bm{\omega},\quad(5)$$

where the element $\varphi_{i,j}$ of the sparse dictionary is defined as the channel gain, or propagation path loss, between the $i$th and the $j$th cubes. Let the RSS vector of the SLs be $\bm{t}\in\mathbb{R}^{M\times 1}$. The measurement matrix $\bm{\psi}\in\mathbb{R}^{M\times N}$ can be defined as

$$\psi_{i,j}=\begin{cases}1,&\text{if the }i\text{th SL is at the }j\text{th cube,}\\0,&\text{otherwise,}\end{cases}\quad(6)$$

where each row of $\bm{\psi}$ contains a single element equal to 1, denoting the SL's position in the ROI. Then, we have

$$\bm{t}=\bm{\psi}\mathbf{X}+\bm{\varepsilon}=\bm{\psi}\bm{\varphi}\bm{\omega}+\bm{\varepsilon}=\mathbf{\Phi}\bm{\omega}+\bm{\varepsilon},\quad(7)$$

where $\bm{\varepsilon}\in\mathbb{R}^{M\times 1}$ is the measurement noise, which obeys a zero-mean Gaussian distribution with variance $\sigma^2$, and $\mathbf{\Phi}$ is the sensing matrix.

This paper recovers the spectrum data in two steps. First, the sparse signal $\bm{\omega}$ is recovered from the noisy measurement vector $\bm{t}$ and the sensing matrix $\mathbf{\Phi}$. Then, the spectrum vector $\mathbf{X}$ is constructed according to (5). The sparse signal recovery (SSR) is equivalent to solving the following $l_0$-minimization problem:

$$\hat{\bm{\omega}}=\arg\min\|\bm{\omega}\|_0,\quad\text{s.t.}\ \bm{t}=\mathbf{\Phi}\bm{\omega}+\bm{\varepsilon}.\quad(8)$$

The $l_0$-minimization problem (8) is NP-hard, but it can be relaxed to the equivalent $l_1$-minimization problem [28]

$$\hat{\bm{\omega}}=\arg\min\|\bm{\omega}\|_1,\quad\text{s.t.}\ \bm{t}=\mathbf{\Phi}\bm{\omega}+\bm{\varepsilon}.\quad(9)$$

The authors in [29] have proved that if $\mathbf{\Phi}$ satisfies the restricted isometry property, $\bm{\omega}$ can be recovered by applying CS algorithms. In this paper, we focus on using sparse Bayesian theory to solve the recovery problem of SEM construction.
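To make the sampling model (5)-(7) concrete, the following toy sketch builds one instance end to end; the random nonnegative matrix merely stands in for the scenario-dependent dictionary of Section III-B, and all problem sizes are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 200, 40, 3                  # cubes, sampling locations, transmitters

# Stand-in dictionary (the scenario-dependent construction of Sec. III-B
# would supply the real channel-gain entries).
phi = np.abs(rng.standard_normal((N, N)))

# K-sparse transmitter-power vector, Eq. (1)
omega = np.zeros(N)
omega[rng.choice(N, size=K, replace=False)] = rng.uniform(1.0, 5.0, size=K)

X = phi @ omega                       # full spectrum vector, Eq. (5)

sl_idx = rng.choice(N, size=M, replace=False)
psi = np.eye(N)[sl_idx]               # 0/1 measurement matrix, Eq. (6)

eps = 0.01 * rng.standard_normal(M)   # zero-mean Gaussian noise
t = psi @ X + eps                     # noisy measurements, Eq. (7)
Phi = psi @ phi                       # sensing matrix
```

In the actual scheme, $\bm{\psi}$ would come from the MMI optimization of Section III-C rather than from this random selection.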

Figure 3: The flowchart of the proposed 3D SEM construction scheme.

III 3D SEM Construction based on SBL

III-A An Overview of the Proposed 3D SEM Construction Scheme

The proposed 3D SEM construction scheme is illustrated in Fig. 3. It mainly contains the scenario-dependent dictionary construction, the measurement matrix optimization, and the 3D SEM construction. Firstly, combining RT technology with an interpolation algorithm, we take the scenario into account and build a scenario-dependent sparse dictionary $\bm{\varphi}$. According to the maximum mutual information criterion, we design the SL selection scheme and obtain the measurement matrix $\bm{\psi}$. Then, based on the sparse dictionary and the measurement matrix, the sparse signal $\bm{\omega}$ is recovered by the SBL algorithm. Finally, we utilize the sparse dictionary and the recovered sparse signal to construct the full SEM.

In order to recover the sparse signal $\bm{\omega}$ for the SEM construction, we adopt the SBL theory and the following hierarchical sparse probabilistic model.

1) Noise Model: The sparse regression model (7) is defined with zero-mean Gaussian noise $\bm{\varepsilon}$ of unknown variance $\sigma^2$, so the Gaussian likelihood of $\bm{t}$ can be written as

$$p(\bm{t}|\bm{\omega},\sigma^2)=(2\pi\sigma^2)^{-M/2}\exp\left\{-\frac{\|\bm{t}-\mathbf{\Phi}\bm{\omega}\|^2}{2\sigma^2}\right\}.\quad(10)$$

A Gamma prior is then placed on the noise precision $\beta=(\sigma^2)^{-1}$; it is conjugate to the Gaussian distribution and greatly simplifies the analysis [30]:

$$p(\beta;c,d)=\text{Gamma}(\beta|c,d),\quad(11)$$

with

$$\text{Gamma}(\beta|c,d)=\Gamma(c)^{-1}d^{c}\beta^{c-1}e^{-d\beta},\quad(12)$$

where $c>0$ is the shape parameter, $d>0$ is the scale parameter, and $\Gamma(c)=\int_0^{\infty}t^{c-1}e^{-t}\,dt$.

2) Hierarchical Sparse Prior Model: To induce sparsity in $\bm{\omega}$, we place a sparseness-promoting prior on it. As in the relevance vector machine, $\bm{\omega}$ is given a two-layer hierarchical sparse prior, which shares the same properties as the Laplace prior while enabling convenient computation [24]. Specifically, each element of $\bm{\omega}$ is first given a zero-mean Gaussian prior

$$p(\bm{\omega}|\bm{\alpha})=\prod_{i=1}^{N}\mathcal{N}(\omega_i|0,\alpha_i^{-1}),\quad(13)$$

where $\bm{\alpha}=[\alpha_1,\alpha_2,\ldots,\alpha_N]^{\mathrm{T}}$. Then, to complete the specification of this hierarchical prior, we consider Gamma hyperpriors over $\bm{\alpha}$:

$$p(\bm{\alpha};a,b)=\prod_{i=1}^{N}\text{Gamma}(\alpha_i|a,b).\quad(14)$$

The graphical model is shown in Fig. 4. The overall prior $p(\bm{\omega})$ can be obtained by marginalizing over the hyperparameters $\bm{\alpha}$:

$$p(\bm{\omega})=\int p(\bm{\omega}|\bm{\alpha})\,p(\bm{\alpha})\,d\bm{\alpha}.\quad(15)$$
Figure 4: Graphical representation of the SBL model.

Since this integral is tractable for a Gamma $p(\bm{\alpha})$, the true prior $p(\bm{\omega})$ is a Student-t distribution, which promotes sparsity in $\bm{\omega}$ [31].
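For reference, carrying out the integral in (15) componentwise with the Gamma hyperprior (14) gives the standard closed form (a well-known result of the relevance vector machine literature, stated here for completeness):

```latex
p(\omega_i)=\int_0^{\infty}\mathcal{N}\!\left(\omega_i|0,\alpha_i^{-1}\right)\text{Gamma}(\alpha_i|a,b)\,d\alpha_i
=\frac{b^{a}\,\Gamma\!\left(a+\tfrac{1}{2}\right)}{\Gamma(a)\sqrt{2\pi}}\left(b+\frac{\omega_i^{2}}{2}\right)^{-\left(a+\frac{1}{2}\right)},
```

i.e., a Student-t density with $2a$ degrees of freedom, whose sharp peak at zero and heavy tails favor sparse $\bm{\omega}$.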

3) Sparse Bayesian Inference: Following Bayesian inference, the desired posterior distribution over all unknowns is

$$p(\bm{\omega},\bm{\alpha},\beta|\bm{t})\equiv p(\bm{\omega}|\bm{t},\bm{\alpha},\beta)\,p(\bm{\alpha},\beta|\bm{t}),\quad(16)$$

which decomposes into the 'weight posterior' and the 'hyperparameter posterior'. It can be shown that the weight posterior of $\bm{\omega}$ is Gaussian:

$$p(\bm{\omega}|\bm{t},\bm{\alpha},\beta)=\frac{p(\bm{t}|\bm{\omega},\bm{\alpha},\beta)\,p(\bm{\omega}|\bm{\alpha})}{p(\bm{t}|\bm{\alpha},\beta)}=\mathcal{N}(\bm{\omega}|\bm{\mu},\mathbf{\Sigma}),\quad(17)$$

with

$$\bm{\mu}=\beta\mathbf{\Sigma}\mathbf{\Phi}^{\mathrm{T}}\bm{t},\qquad\mathbf{\Sigma}=\left(\beta\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}+\mathbf{\Lambda}\right)^{-1},\quad(18)$$

where $\mathbf{\Lambda}=\text{diag}[\alpha_1,\alpha_2,\ldots,\alpha_N]$. SBL treats signal recovery from a statistical perspective. Given the sparse prior on $\bm{\omega}$ and the compressive samples $\bm{t}$, the posterior probability density function of the sparse vector $\bm{\omega}$ can be inferred. We then estimate $\bm{\omega}$ by the mean $\bm{\mu}$ and evaluate the recovery accuracy by the covariance $\mathbf{\Sigma}$. To calculate $\bm{\mu}$ and $\mathbf{\Sigma}$, we must estimate the hyperparameters $\bm{\alpha}$ and $\beta$ of the probabilistic model; the details of this estimation are discussed in Section III-D.
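The posterior update (18) is a few lines of linear algebra. A minimal sketch follows (direct inversion for clarity; a practical implementation would typically use a Cholesky factorization and combine this step with hyperparameter re-estimation and pruning):

```python
import numpy as np

def sbl_posterior(Phi, t, alpha, beta):
    """Weight posterior N(mu, Sigma) of Eq. (18):
    Sigma = (beta * Phi^T Phi + Lambda)^{-1},  mu = beta * Sigma * Phi^T * t."""
    Lam = np.diag(alpha)                               # Lambda = diag(alpha)
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + Lam)    # posterior covariance
    mu = beta * Sigma @ (Phi.T @ t)                    # posterior mean
    return mu, Sigma
```

With a vague prior (small $\alpha_i$) and high noise precision $\beta$, the mean $\bm{\mu}$ approaches the least-squares fit, as expected from (18).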

III-B Scenario-dependent Sparse Dictionary Construction

The traditional sparse dictionary usually adopts the free-space propagation model without considering scene information, which is not suitable for urban environments with many buildings. In an actual environment, propagation involves direct, reflected, and diffracted rays. Accordingly, a scenario-dependent dictionary is constructed here by analyzing the characteristics of the propagation channel. The RT technique, based on geometrical optics and the uniform theory of diffraction, has been used to predict all possible propagation path parameters on a given geographic map [32].

Assume that the positions of the $m$th transmitting cube and the $n$th receiving cube are $\bm{m}=[m_x,m_y,m_z]$ and $\bm{n}=[n_x,n_y,n_z]$, respectively. The propagation distance of the direct path is

$$d_{mn}=\sqrt{(m_x-n_x)^2+(m_y-n_y)^2+(m_z-n_z)^2}.\quad(19)$$

For an indirect path, we define the intersection coordinate on the scatterer as $\bm{h}=(H_x,H_y,H_z)$. Thus, the Euclidean distance of the $s$th ray between $\bm{m}$ and the scatterer, and that between the scatterer and $\bm{n}$, can be respectively calculated as

$$d_{h,m}^{s}=\sqrt{(H_x-m_x)^2+(H_y-m_y)^2+(H_z-m_z)^2},\quad(20)$$
$$d_{n,h}^{s}=\sqrt{(H_x-n_x)^2+(H_y-n_y)^2+(H_z-n_z)^2}.\quad(21)$$

The proposed RT-based dictionary construction method includes three steps, i.e., decomposition of the ray source, ray tracking, and superposition of the field strengths. Firstly, the ray source is decomposed into direct, reflected, and diffracted rays. If a ray arrives at the receiving position $\bm{n}$ via LOS propagation, the field intensity from $\bm{m}$ arriving at $\bm{n}$ is

$$\mathbf{E}_{\text{LOS}}=E_{1\text{m}}\frac{e^{-jld_{mn}}}{d_{mn}},\quad(22)$$

where $l$ is the wave number, $E_{1\text{m}}$ is the electric field intensity at $1\,\text{m}$ from $\bm{m}$, and $d_{mn}$ is the propagation distance of the direct path in (19). For a reflected ray, the electric field intensity can be expressed as

$$\mathbf{E}_{s}^{\text{R}}=\mathbf{E}_{\text{LOS}}\,R\,\frac{e^{-jl\left(d_{h,m}^{s}+d_{n,h}^{s}\right)}}{d_{h,m}^{s}+d_{n,h}^{s}},\quad(23)$$

where $R$ is the reflection coefficient. The electric field intensity of a diffraction path can be expressed as

$$\mathbf{E}_{s}^{\text{D}}=\frac{\mathbf{E}_{\text{LOS}}}{d_{n,h}^{s}}\,D\,\sqrt{\frac{d_{n,h}^{s}}{d_{h,m}^{s}\left(d_{h,m}^{s}+d_{n,h}^{s}\right)}}\;e^{-jl\left(d_{h,m}^{s}+d_{n,h}^{s}\right)},\quad(24)$$

where $D$ is the diffraction coefficient.

Then, according to (22)-(24), the final received field strength at $\bm{n}$ is obtained by the vector superposition of all the field strengths:

$$\mathbf{E}_n=\sum_{s=1}^{N_P}\mathbf{E}_s,\qquad\mathbf{E}_s\in\left\{\mathbf{E}_{\text{LOS}},\mathbf{E}_{s}^{\text{R}},\mathbf{E}_{s}^{\text{D}}\right\},\quad(25)$$

where $N_P$ is the total number of effective rays and $\mathbf{E}_s$ is the electric field intensity of a direct, reflected, or diffracted path. It should be mentioned that the direct ray disappears when the link between $\bm{m}$ and $\bm{n}$ is blocked, in which case the parameter calculation of the direct ray can be skipped [32]. Therefore, the total average RSS at $\bm{n}$ is

$$p^{n}=G^{n}G^{m}\left(\frac{\lambda}{4\pi}\right)^{2}\left|\frac{\mathbf{E}_n}{E_{1\text{m}}}\right|,\quad(26)$$

where $\lambda$ is the wavelength, and $G^{m}$ and $G^{n}$ are the antenna gains of the transmitter at $\bm{m}$ and the receiver at $\bm{n}$, respectively. Thus, the channel gain in dB between $\bm{m}$ and $\bm{n}$, i.e., the element $\varphi_{mn}$ in the $m$th row and $n$th column of matrix $\bm{\varphi}$, can be calculated as

$$\varphi_{mn}=-10\log_{10}\left(p^{n}/p^{m}\right),\quad(27)$$

where $p^{m}$ is the transmitting power of the $m$th transmitting cube.
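Assuming the ray tracer supplies the per-path coefficients and distances, the field superposition (22)-(26) can be sketched as follows; the function names and the tuple-based path description are illustrative assumptions, not an interface defined by the paper:

```python
import numpy as np

def received_field(l, E1m, d_direct, refl_paths=(), diff_paths=(), blocked=False):
    """Complex field at the receiving cube by vector superposition, Eq. (25).
    refl_paths / diff_paths: iterables of (coeff, d1, d2), where d1 is the
    source-to-scatterer and d2 the scatterer-to-receiver distance."""
    E_los = E1m * np.exp(-1j * l * d_direct) / d_direct       # Eq. (22)
    E = 0.0 + 0.0j if blocked else E_los                      # blocked link: drop direct ray
    for R, d1, d2 in refl_paths:                              # reflected rays, Eq. (23)
        E += E_los * R * np.exp(-1j * l * (d1 + d2)) / (d1 + d2)
    for D, d1, d2 in diff_paths:                              # diffracted rays, Eq. (24)
        E += (E_los / d2) * D * np.sqrt(d2 / (d1 * (d1 + d2))) \
             * np.exp(-1j * l * (d1 + d2))
    return E

def average_rss(E_n, E1m, wavelength, Gm=1.0, Gn=1.0):
    """Total average RSS at the receiver, Eq. (26)."""
    return Gm * Gn * (wavelength / (4.0 * np.pi))**2 * abs(E_n / E1m)
```

The dB channel gain of (27) then follows from two such RSS evaluations.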

Finally, we can predict the channel gain between any two cubes in the ROI and construct a scenario-dependent dictionary matrix. In this paper, we only calculate a small part of the dictionary matrix with RT and obtain the whole matrix by interpolation. Inverse distance weighted interpolation is adopted [33]. It assumes that each known value has a local influence that diminishes with distance, which is consistent with the radio propagation principle.

Let the data obtained by RT be $\varphi_g$, $g=1,2,\ldots,N_0$, corresponding to the element in the $g_x$th row and $g_y$th column of matrix $\bm{\varphi}$. The unknown value $\varphi_{ij}$ in the $i$th row and $j$th column can then be obtained by

$$\varphi_{ij}=\frac{\sum_{g=1}^{N_0}\left(d_{ij}^{g}\right)^{-p}\varphi_g}{\sum_{g=1}^{N_0}\left(d_{ij}^{g}\right)^{-p}},\quad(28)$$

where $d_{ij}^{g}=\sqrt{(i-g_x)^2+(j-g_y)^2}$ is the distance between the interpolation point and the known point, and $p$ is the distance exponent.
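A minimal sketch of the IDW rule (28); the exact-match shortcut is our own guard against division by zero, not a step stated in the paper:

```python
import numpy as np

def idw_interpolate(known_idx, known_vals, i, j, p=2.0):
    """IDW estimate of the missing dictionary entry phi_ij, Eq. (28).
    known_idx: (N0, 2) array of (g_x, g_y) indices computed by RT."""
    pts = np.asarray(known_idx, dtype=float)
    vals = np.asarray(known_vals, dtype=float)
    d = np.sqrt((i - pts[:, 0])**2 + (j - pts[:, 1])**2)
    if np.any(d == 0):                      # (i, j) coincides with an RT point
        return float(vals[d == 0][0])
    w = d**(-p)                             # inverse-distance weights
    return float(w @ vals / w.sum())
```

As expected from (28), closer RT-computed entries dominate the estimate, and two equidistant points yield their average.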

III-C MMI-based Sampling Optimization

The layout of the sampling sensors, i.e., the measurement matrix $\bm{\psi}$, has a significant effect on the SEM construction accuracy. In [34, 35], the authors used the mutual information between the predicted sensor observations and the current target location distribution to optimize the layout.

Different from traditional methods that use random selection, we model the sparse signal reconstruction problem as a communication channel in the information-theoretic sense, where the sparse signal $\bm{\omega}$ is the channel input and the SLs' RSS samples $\bm{t}$ are the output. The SLs are tasked to observe so as to increase the information about (i.e., reduce the uncertainty of) the state of $\bm{\omega}$. We select the SLs based on the MMI criterion.

We define the index set of candidate SLs as $\mathbf{S}$, and the subset of indices of already determined SLs as $\mathbf{S}_k$. When the SLs have different observation angles and perceptual uncertainties, the information gains attributable to different SLs can be quite different [36, 37]. Then, the SL set $\varsigma$ ($\varsigma\subset\mathbf{S}$, $|\varsigma|=M$) is chosen such that its observations $\bm{t}_{\varsigma}$ minimize the expected conditional entropy of the posterior distribution of $\bm{\omega}$, as given by

$$\hat{\varsigma}=\arg\min_{\varsigma\subset\mathbf{S}}H(\bm{\omega}|\bm{t}_{\varsigma}),\quad(29)$$

which is equivalent to maximizing the entropy reduction of $\bm{\omega}$, i.e., maximizing the mutual information:

$$\hat{\varsigma}=\arg\max_{\varsigma\subset\mathbf{S}}\left\{H(\bm{\omega})-H(\bm{\omega}|\bm{t}_{\varsigma})\right\}=\arg\max_{\varsigma\subset\mathbf{S}}I(\bm{\omega};\bm{t}_{\varsigma}),\quad(30)$$
$$I(\bm{\omega};\bm{t}_{\varsigma})=H(\bm{\omega})-H(\bm{\omega}|\bm{t}_{\varsigma})=\frac{1}{2}\ln|\mathbf{\Lambda}|-\frac{1}{2}\ln|\mathbf{\Sigma}|=\frac{1}{2}\ln\left|\mathbf{\Lambda}\left(\beta\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}+\mathbf{\Lambda}\right)\right|.\quad(31)$$

The derivation of (31) is given in Appendix A. In the initialization stage, there is no prior information about the sparse signal $\bm{\omega}$. Following SBL, we assign the same variance to every element, i.e., $\alpha_i=\alpha=1$. Then, (31) can be further converted to

$$I(\bm{\omega};\bm{t}_{\varsigma})=\frac{1}{2}\ln\left|\alpha\mathbf{I}\left(\beta\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}+\alpha\mathbf{I}\right)\right|=\frac{1}{2}\ln\left|\alpha\beta\mathbf{I}\left(\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}+\alpha\beta^{-1}\mathbf{I}\right)\right|,\quad(32)$$

with

$$\begin{aligned}\det\left(\alpha\beta\mathbf{I}\left(\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}+\beta^{-1}\alpha\mathbf{I}\right)\right)&=(\alpha\beta)^{N}\det\left(\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}+\beta^{-1}\alpha\mathbf{I}\right)\\&=\alpha^{2N}\det\left(\frac{\beta}{\alpha}\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}+\mathbf{I}\right)\\&=\alpha^{2N}\det\left(\frac{\beta}{\alpha}\mathbf{\Phi}\mathbf{\Phi}^{\mathrm{T}}+\mathbf{I}\right)\\&=\alpha^{2N-M}\beta^{M}\det\left(\mathbf{\Phi}\mathbf{\Phi}^{\mathrm{T}}+\frac{\alpha}{\beta}\mathbf{I}\right),\end{aligned}\quad(33)$$

where $\alpha/\beta$, the ratio of the noise variance to the sparse signal variance, is a sufficiently small number [36]. Therefore, by ignoring the constant terms, the final objective function of MMI-based sampling asymptotically approaches

$$\hat{\varsigma}=\arg\max_{\varsigma\subset\mathbf{S}}\ln\left(\det\left(\mathbf{\Phi}\mathbf{\Phi}^{\mathrm{T}}\right)\right)=\arg\max_{\varsigma\subset\mathbf{S}}\ln\left(\det\left(\bm{\psi}\bm{\varphi}\bm{\varphi}^{\mathrm{T}}\bm{\psi}^{\mathrm{T}}\right)\right),\quad(34)$$

where $\mathbf{\Phi}=\bm{\psi}\bm{\varphi}$, and $\bm{\psi}=(\mathbf{I}_N)_{\varsigma\cdot}$ consists of the rows of $\mathbf{I}_N$ indexed by the set $\varsigma$. Accordingly, the problem can be further expressed as

$$\bm{\psi}=\arg\max_{\varsigma\subset\mathbf{S}}\ln\left(\det\left(\mathbf{\Phi}\mathbf{\Phi}^{\mathrm{T}}\right)\right),\quad\text{s.t.}\ \bm{\psi}=(\mathbf{I}_N)_{\varsigma\cdot}.\quad(35)$$

We solve the above problem with a greedy algorithm. Let $\mathbf{\Phi}_t$ denote the sensing matrix after the $t$th SL selection:

$$\mathbf{\Phi}_t=\left[\bm{\varphi}_{i_1}^{\mathrm{T}},\bm{\varphi}_{i_2}^{\mathrm{T}},\cdots,\bm{\varphi}_{i_{t-1}}^{\mathrm{T}},\bm{\varphi}_{i_t}^{\mathrm{T}}\right]^{\mathrm{T}},\quad(36)$$

where $i_t$ ($i_t\in\varsigma$, $i_t\in\mathbf{S}$) is the index of the $t$th selected SL and $\bm{\varphi}_{i_t}$ is the $i_t$th row vector of the sparse dictionary matrix $\bm{\varphi}$. The greedy method performs the maximization step by step. The determinant can be expanded as

$$\begin{aligned}\det\left(\mathbf{\Phi}_t\mathbf{\Phi}_t^{\mathrm{T}}\right)&=\det\left(\begin{bmatrix}\mathbf{\Phi}_{t-1}\\\bm{\psi}_t\end{bmatrix}\begin{bmatrix}\mathbf{\Phi}_{t-1}^{\mathrm{T}}&\bm{\psi}_t^{\mathrm{T}}\end{bmatrix}\right)\\&=\bm{\psi}_t\left(\mathbf{I}-\mathbf{\Phi}_{t-1}^{\mathrm{T}}\left(\mathbf{\Phi}_{t-1}\mathbf{\Phi}_{t-1}^{\mathrm{T}}\right)^{-1}\mathbf{\Phi}_{t-1}\right)\bm{\psi}_t^{\mathrm{T}}\times\det\left(\mathbf{\Phi}_{t-1}\mathbf{\Phi}_{t-1}^{\mathrm{T}}\right).\end{aligned}\quad(37)$$

Therefore, the ttth row of 𝝍\bm{\psi} is determined according to the following formula,

𝝍t\displaystyle{\bm{\psi}_{t}} =argmaxit𝐒\𝐒kdet(𝚽t𝚽tT)\displaystyle=\mathop{\arg\max}\limits_{{i_{t}}\in{\mathbf{S}}\backslash{{\mathbf{S}}_{k}}}\det\left({{{\mathbf{\Phi}}_{t}}{\mathbf{\Phi}}_{t}^{\text{T}}}\right) (38)
=argmaxit𝐒\𝐒k𝝍t(𝐈𝚽t1T(𝚽t1𝚽t1T)1𝚽t1)𝝍tT.\displaystyle=\mathop{\arg\max}\limits_{{i_{t}}\in{\mathbf{S}}\backslash{{\mathbf{S}}_{k}}}{\bm{\psi}_{t}}\left({{\mathbf{I}}-{\mathbf{\Phi}}_{t-1}^{\text{T}}{{\left({{{\mathbf{\Phi}}_{t-1}}{\mathbf{\Phi}}_{t-1}^{\text{T}}}\right)}^{-1}}{{\mathbf{\Phi}}_{t-1}}}\right)\bm{\psi}_{t}^{\text{T}}.

After MM recursive iterations, the MM optimal SLs are finally selected and the measurement matrix 𝝍\bm{\psi} is obtained.
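As an illustrative sketch of the greedy selection in (36)-(38) (NumPy assumed; the function name, the small `eps` regularizer, and the dense-matrix handling are our own choices, not part of the paper):

```python
import numpy as np

def mmi_select_locations(phi, M, eps=1e-10):
    """Greedy MMI sampling-location selection, a sketch of Eqs. (36)-(38).

    phi : (N, L) sparse dictionary; each row corresponds to a candidate
          sampling location (SL). Returns the indices of the M selected rows.
    """
    N, L = phi.shape
    candidates = set(range(N))
    selected = []
    Phi = np.empty((0, L))                     # Phi_{t-1}, grows row by row
    for _ in range(M):
        if Phi.shape[0] == 0:
            # No rows selected yet: the projector is the identity
            P = np.eye(L)
        else:
            # Projector onto the orthogonal complement of Phi_{t-1}'s row space
            G = Phi @ Phi.T
            P = np.eye(L) - Phi.T @ np.linalg.solve(G + eps * np.eye(G.shape[0]), Phi)
        # Score each remaining candidate row by psi_t (I - ...) psi_t^T, Eq. (38)
        scores = {i: float(phi[i] @ P @ phi[i]) for i in candidates}
        best = max(scores, key=scores.get)
        selected.append(best)
        candidates.remove(best)
        Phi = np.vstack([Phi, phi[best]])
    return selected
```

In the first iteration the projector is the identity, so the row with the largest norm is picked, which agrees with maximizing the 1-by-1 determinant.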

III-D SEM Construction with Improved SBL

In Section II-B, we recover ω\omega with the mean μ\mu and evaluate the recovery accuracy by the variance 𝚺{\mathbf{\Sigma}}. The hyperparameters are estimated by maximum a posteriori (MAP) estimation [24] as

(𝜶,β)\displaystyle\left({\bm{\alpha},\beta}\right) =argmax𝜶,βp(𝜶,β|𝒕)\displaystyle=\mathop{\arg\max}\limits_{\bm{\alpha},\beta}p\left({\bm{\alpha},\beta|\bm{t}}\right) (39)
=argmax𝜶,βp(𝒕|𝜶,β)p(𝜶)p(β)\displaystyle=\mathop{\arg\max}\limits_{\bm{\alpha},\beta}p\left({\bm{t}|\bm{\alpha},\beta}\right)p\left(\bm{\alpha}\right)p\left(\beta\right)
=argmax𝜶,βlnp(𝒕|𝜶,β)p(𝜶)p(β),\displaystyle=\mathop{\arg\max}\limits_{\bm{\alpha},\beta}\ln p\left({\bm{t}|\bm{\alpha},\beta}\right)p\left(\bm{\alpha}\right)p\left(\beta\right),

which is equivalent to maximizing the product of the marginal likelihood p(𝒕|𝜶,β)p\left({\bm{t}|\bm{\alpha},\beta}\right) and the priors over the hyperparameters in the logarithmic domain. By ignoring the irrelevant terms, we can obtain the objective function

(𝜶,β)=12{log|𝐂|+𝒕T(𝐂)1𝒕}+i=1N(alogαibαi)+clogβdβ,𝐂=β1𝐈+𝚽𝚲1𝚽T.\begin{gathered}\mathcal{L}\left({\bm{\alpha},\beta}\right)=-\frac{1}{2}\left\{{\log\left|{\mathbf{C}}\right|+\bm{t}^{T}\left({\mathbf{C}}\right)^{-1}\bm{t}}\right\}\\ +\sum\limits_{i=1}^{N}{\left({a\log{\alpha_{i}}-b{\alpha_{i}}}\right)}+c\log\beta-d\beta,\\ {\mathbf{C}}=\beta^{-1}{\mathbf{I}}+{\mathbf{\Phi\Lambda}}^{-1}{{\mathbf{\Phi}}^{T}}.\\ \end{gathered} (40)

We exploit expectation-maximization (EM) to solve for α\alpha and β\beta. For α\alpha, the update procedure is equivalent to maximizing E𝝎|𝐭,𝜶,β[logp(𝝎|𝜶)p(𝜶)]E_{\bm{\omega}|{\mathbf{t}},\bm{\alpha},\beta}\left[{\log p\left({\bm{\omega}|\bm{\alpha}}\right)p\left(\bm{\alpha}\right)}\right], and the update rule can then be derived through differentiation,

αi=1+2aωi2+2b,ωi2=μi2+Σi,i,\begin{gathered}{\alpha_{i}}=\frac{{1+2a}}{{\left\langle{\omega_{i}^{2}}\right\rangle+2b}},\hfill\\ \left\langle{\omega_{i}^{2}}\right\rangle=\mu_{i}^{2}+{\Sigma_{i,i}},\hfill\\ \end{gathered} (41)

where Σi,i{\Sigma_{i,i}} represents the iith diagonal element of 𝚺{\mathbf{\Sigma}}. We maximize E𝝎|𝒕,α,β[logp(𝒕|𝝎,β)p(β)]E_{\bm{\omega}|\bm{t},{\mathbf{\alpha}},\beta}\left[{\log p\left({\bm{t}|\bm{\omega},\beta}\right)p\left(\beta\right)}\right] to update β\beta as

βnew=M+2c𝒕𝚽𝝁2+(βold)1i(1αiΣi,i)+2d.\beta^{new}=\frac{{M+2c}}{{\left\|{\bm{t}-{\mathbf{\Phi}}\bm{\mu}}\right\|^{2}+\left({\beta^{old}}\right)^{-1}\sum\limits_{i}{\left({1-{\alpha_{i}}{\Sigma_{i,i}}}\right)}+2d}}. (42)

The derivations of (41) and (42) are given in Appendix B. We iterate between (18), (41) and (42) until the convergence condition is satisfied, and then obtain the solution of ω\omega from μ\mu.
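The EM iteration described above can be sketched as follows (a minimal NumPy implementation of the updates (41) and (42) together with the standard SBL posterior; the function name, iteration cap, and tolerance are our own choices, and the pruning step discussed next is omitted):

```python
import numpy as np

def sbl_em(Phi, t, a=1e-6, b=1e-6, c=1e-6, d=1e-6, n_iter=200, tol=1e-6):
    """EM iterations for the SBL hyperparameters, a sketch of (41)-(42).

    Phi : (M, N) sensing matrix;  t : (M,) measurement vector.
    Returns the posterior mean mu (estimate of omega), alpha, and beta.
    """
    M, N = Phi.shape
    alpha = np.ones(N)          # prior precisions of omega
    beta = 1.0                  # noise precision
    mu = np.zeros(N)
    for _ in range(n_iter):
        mu_old = mu
        # Standard SBL posterior: Sigma = (Lambda + beta Phi^T Phi)^{-1},
        # mu = beta Sigma Phi^T t
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        # Hyperparameter updates, Eqs. (41) and (42)
        alpha = (1 + 2 * a) / (mu ** 2 + np.diag(Sigma) + 2 * b)
        resid = np.sum((t - Phi @ mu) ** 2)
        beta = (M + 2 * c) / (resid + np.sum(1 - alpha * np.diag(Sigma)) / beta + 2 * d)
        if np.linalg.norm(mu - mu_old) < tol:
            break
    return mu, alpha, beta
```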

Furthermore, in the traditional SBL algorithm, a fixed threshold thre_α1thre\_\alpha^{-1} is set to prune the small values of the recovered sparse signal in each iteration. The μi{\mu_{i}} is set to zero when αi1<thre_α1\alpha_{i}^{-1}<thre\_\alpha^{-1}, and thus the corresponding iith column of 𝚽{\mathbf{\Phi}} can be pruned out. The recovery algorithm based on the traditional SBL is unable to accurately restore the locations of all sources in the 3D SEM. Besides, it is difficult to determine the value of thre_α1thre\_\alpha^{-1} for different scenarios. However, the positions of the recovered signal sources are usually adjacent or close to those of the real signal sources in Cartesian coordinates. Therefore, we develop the clustering-based SBL (CSBL) algorithm with an adaptive threshold to improve the recovery accuracy.

Firstly, we define an adaptive dynamic threshold for the truncation in the pruning step of the algorithm iterations, which is

thre_α1=mean(𝜶^1)std(𝜶^1).thre\_\alpha^{-1}={\text{mean}}\left({{{\bm{\hat{\alpha}}}^{-1}}}\right)-{\text{std}}\left({{{\bm{\hat{\alpha}}}^{-1}}}\right). (43)

Through the adaptive dynamic threshold truncation, the αi1\alpha_{i}^{-1} values near the source points are retained to the greatest extent, while the cubes whose power falls below the threshold are discarded. The pruning rule is

αi1={αi1,if αi1>thre_α1,0,otherwise.{\alpha_{i}^{-1}}=\begin{cases}{\alpha_{i}^{-1},}&{\text{if }}{\alpha_{i}^{-1}}>thre\_\alpha^{-1},\\ {0,}&{\text{otherwise.}}\end{cases} (44)
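The adaptive pruning of (43)-(44) is straightforward to express in code (a sketch; the helper name is ours):

```python
import numpy as np

def adaptive_prune(alpha_inv):
    """Adaptive dynamic threshold pruning, Eqs. (43)-(44).

    alpha_inv : vector of alpha_i^{-1} values; entries at or below
    mean(alpha_inv) - std(alpha_inv) are zeroed, the rest are kept.
    """
    alpha_inv = np.asarray(alpha_inv, dtype=float)
    thre = alpha_inv.mean() - alpha_inv.std()   # Eq. (43)
    return np.where(alpha_inv > thre, alpha_inv, 0.0)  # Eq. (44)
```

Because the threshold is recomputed from the current iterate, no scenario-specific constant has to be tuned.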

Then, we propose to combine a clustering algorithm with the SBL. Since the number of transmitters is unknown, we employ the MMD clustering algorithm, which can adaptively determine the cluster seeds when the number of clusters is unknown and improves the efficiency of partitioning the dataset. The MMD clustering is a trial-based clustering algorithm in pattern recognition. It takes the farthest object as the next cluster center based on the Euclidean distance, and thus avoids the situation in which the cluster seeds are too close together, as may happen when the initial values are selected by the k-means method.
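A minimal sketch of MMD clustering follows (the seed-stopping ratio `theta` and the choice of the first seed are our own assumptions; the paper does not specify them):

```python
import numpy as np

def mmd_cluster(points, theta=0.5):
    """Maximum-minimum distance (MMD) clustering sketch.

    Seeds are added while the maximum of the per-point minimum distance to
    the current seeds exceeds theta times the distance between the first
    two seeds; every point then joins its nearest seed.
    """
    points = np.asarray(points, dtype=float)
    seeds = [0]                                     # first seed: arbitrary point
    d = np.linalg.norm(points - points[0], axis=1)
    seeds.append(int(np.argmax(d)))                 # second seed: farthest point
    d0 = np.linalg.norm(points[seeds[0]] - points[seeds[1]])
    min_d = np.minimum(d, np.linalg.norm(points - points[seeds[1]], axis=1))
    while min_d.max() > theta * d0:                 # add seeds adaptively
        s = int(np.argmax(min_d))
        seeds.append(s)
        min_d = np.minimum(min_d, np.linalg.norm(points - points[s], axis=1))
    dists = np.stack([np.linalg.norm(points - points[s], axis=1) for s in seeds])
    return np.argmin(dists, axis=0)                 # label = index of nearest seed
```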

At the end of the SBL iteration process, the solution ω\omega may only have a small number of significant coefficients. The remaining small coefficients, which contribute very little to the sparse signal, are negligible. Therefore, we first approximate ω\omega by neglecting the small coefficients as

ωi={0,20log10(ωimaxj(ωj))<δ,ωi,otherwise.{\omega_{i}}=\begin{cases}{0,}&{20{{\log}_{10}}\left({\frac{{{\omega_{i}}}}{{\mathop{\max}\limits_{j}({\omega_{j}})}}}\right)<\delta,}\\ {{\omega_{i}},}&{\text{otherwise.}}\end{cases} (45)

where δ\delta is a negative sparsity threshold. Accordingly, the set ={ζ1,ζ2,,ζq}\Im=\left\{{{\zeta_{1}},{\zeta_{2}},\ldots,{\zeta_{q}}}\right\} of the qq non-zero items of the preliminarily estimated ω\omega is obtained, which represents cubes in 3D space. Since these cubes are distributed in clusters in space, the candidate set \Im is adaptively divided into KK clusters, denoted by 1,2,,K{\aleph_{1}},{\aleph_{2}},\ldots,{\aleph_{K}}, by the MMD clustering. Then, by weighting the items in \Im with the averaging rule, the cluster centers are obtained as

{localkx0=tkωtlocaltxtkωtlocalky0=tkωtlocaltytkωtlocalkz0=tkωtlocaltztkωt,k=1,2,,K,\begin{array}[]{*{20}{c}}{\left\{\begin{gathered}local_{k}^{{x_{0}}}=\frac{{\sum\limits_{t\in{\aleph_{k}}}{\omega_{t}\cdot local_{t}^{x}}}}{{\sum\limits_{t\in{\aleph_{k}}}{\omega_{t}}}}\hfill\\ local_{k}^{{y_{0}}}=\frac{{\sum\limits_{t\in{\aleph_{k}}}{\omega_{t}\cdot local_{t}^{y}}}}{{\sum\limits_{t\in{\aleph_{k}}}{\omega_{t}}}}\hfill\\ local_{k}^{{z_{0}}}=\frac{{\sum\limits_{t\in{\aleph_{k}}}{\omega_{t}\cdot local_{t}^{z}}}}{{\sum\limits_{t\in{\aleph_{k}}}{\omega_{t}}}}\hfill\\ \end{gathered}\right.,}&{k=1,2,\ldots,K,}\end{array} (46)

where localtxlocal_{t}^{x}, localtylocal_{t}^{y} and localtzlocal_{t}^{z} are the xx, yy, zz coordinates of the ttth cube of set k{\aleph_{k}} in 3D space. localkx0local_{k}^{{x_{0}}}, localky0local_{k}^{{y_{0}}} and localkz0local_{k}^{{z_{0}}} are the xx, yy, zz coordinates of the cluster center of set k{\aleph_{k}}, which is also the re-estimated non-zero position in the updated sparse signal 𝝎{\bm{\omega}^{*}}. Simultaneously, we update the sparse coefficient as

ωk=tkωtωttkωt\omega_{k}^{*}=\frac{{\sum\limits_{t\in{\aleph_{k}}}{\omega_{t}\cdot\omega_{t}}}}{{\sum\limits_{t\in{\aleph_{k}}}{\omega_{t}}}} (47)

Eventually, the updated sparse signal 𝝎{\bm{\omega}^{*}} is obtained. According to the recovered sparse signal, the SEM construction model is

𝐗=𝝋𝝎,{\mathbf{X}}=\bm{\varphi}{\bm{\omega}^{*}}, (48)

where 𝐗{\mathbf{X}} is the constructed spectrum vector.
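The thresholding and weighted refinement of (45)-(47) can be sketched together (coefficients are assumed positive, power-like values; the function name and the default `delta` are ours):

```python
import numpy as np

def cluster_refine(omega, coords, labels, delta=-30.0):
    """Weighted cluster-center refinement, a sketch of Eqs. (45)-(47).

    omega  : (q,) preliminary non-zero coefficients (assumed positive)
    coords : (q, 3) cube coordinates of the non-zero items
    labels : (q,) cluster labels from MMD clustering
    delta  : negative dB sparsity threshold of Eq. (45) (value assumed)
    """
    omega = np.asarray(omega, float)
    coords = np.asarray(coords, float)
    labels = np.asarray(labels)
    # Eq. (45): drop coefficients more than |delta| dB below the maximum
    keep = 20 * np.log10(omega / omega.max()) >= delta
    omega, coords, labels = omega[keep], coords[keep], labels[keep]
    centers, weights = [], []
    for k in np.unique(labels):
        m = labels == k
        w = omega[m]
        centers.append((w[:, None] * coords[m]).sum(axis=0) / w.sum())  # Eq. (46)
        weights.append((w * w).sum() / w.sum())                         # Eq. (47)
    return np.array(centers), np.array(weights)
```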

TABLE I: The main simulation parameters
Parameters Value
Region of interest 100m×100m×50m100{\text{m}}\times 100{\text{m}}\times 50{\text{m}}
SEM tensor size 10×10×1010\times 10\times 10
Granularity of SEM tensor 10m×10m×5m10{\text{m}}\times 10{\text{m}}\times 5{\text{m}}
Number of RF transmitters (KK) 4, 8, 12, 16
Transmitting frequency (ff) 1 GHz
Transmitting power (PtP^{t}) 2 W
Positions of RF transmitters Randomly generated in the ROI
Sampling rate and number of RF transmitters in SEM construction performance r=0.1r=0.1, K=4K=4
Algorithms for comparison Random-SBL, Random-CSBL, Random-MSBL, MMI-SBL, MMI-CMSBL, Random-SWOMP

IV Simulation Results And Validations

IV-A Experiment Setup

In this section, the performance of the proposed 3D SEM construction method is analyzed and verified by simulations in the campus scenario shown in Fig. 5. The ROI is 100 m×100 m×50 m100{\text{ m}}\times 100{\text{ m}}\times 50{\text{ m}}. We first discretize the ROI into 10×10×1010\times 10\times 10 cubes, each of size 10 m×10 m×5 m10{\text{ m}}\times 10{\text{ m}}\times 5{\text{ m}}. Then we construct a spectrum tensor 𝝌10×10×10\bm{\chi}\in{\Re^{10\times 10\times 10}}. We denote the proposed algorithm as MMI-CMSBL. Six SEM construction algorithms, i.e., Random-SBL, Random-CSBL, Random-MSBL, MMI-SBL, MMI-CMSBL, and random stagewise weak orthogonal matching pursuit (Random-SWOMP) [38], are used to construct the 3D SEM. The main simulation parameters are shown in Table I, where "Random" means that the SLs are selected randomly from the ROI.

IV-B Sparse Signal Recovery Performance

We compare the proposed MMI-CMSBL with Random-SBL, Random-CSBL, Random-MSBL, MMI-SBL, and Random-SWOMP in terms of the SSR performance. The Mean Squared Error (MSE) of sparse signal recovery is defined as

d(𝝎est,𝝎true)=10log10(𝝎est𝝎true2/𝝎true2),d\left({{\bm{\omega}^{{\text{est}}}},{\bm{\omega}^{{\text{true}}}}}\right)=10\log_{10}\left({{{\left\|{{\bm{\omega}^{{\text{est}}}}-{\bm{\omega}^{{\text{true}}}}}\right\|}_{2}}/{{\left\|{{\bm{\omega}^{{\text{true}}}}}\right\|}_{2}}}\right), (49)

where 𝝎est{\bm{\omega}^{{\text{est}}}} and 𝝎true{\bm{\omega}^{{\text{true}}}} are the estimated and the true sparse signals, respectively.
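This metric maps directly to code (a one-line helper; the function name is ours):

```python
import numpy as np

def recovery_mse_db(omega_est, omega_true):
    """MSE of sparse signal recovery in dB, Eq. (49)."""
    omega_est = np.asarray(omega_est, float)
    omega_true = np.asarray(omega_true, float)
    return 10 * np.log10(np.linalg.norm(omega_est - omega_true)
                         / np.linalg.norm(omega_true))
```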

The simulation results in Fig. 6 show that the MSE of sparse signal recovery decreases as the sampling rate increases. The proposed MMI-CMSBL outperforms the other algorithms in terms of convergence speed. The Random-MSBL algorithm performs better than Random-CSBL, MMI-SBL and Random-SBL, which reveals that the propagation model-based SBL brings a larger performance gain in SSR than the traditional SBL by taking the scenario into account when constructing the sparse dictionary. Besides, the SBL performs better than other CS algorithms when recovering sparse signals whose sensing matrix is highly correlated.

Refer to caption
Figure 5: 3D SEM construction scenario.
Refer to caption
Figure 6: The MSE of sparse signal recovery performance comparisons (K=4K=4).
Refer to caption
Figure 7: 3D SEM construction performance visualization (K=4K=4, r=0.1r=0.1). (a) The true SEM; and (b) - (g) The spectrum situation recovery of MMI-CMSBL, Random-MSBL, MMI-SBL, Random-CSBL, Random-SBL and Random-SWOMP, respectively.

IV-C SEM Construction Performance

Fig. 7 presents the SEM construction results of Random-SBL, Random-CSBL, Random-MSBL, MMI-SBL, MMI-CMSBL and Random-SWOMP. We can see that the sparse signal recovery accuracy of MMI-CMSBL is higher than that of the other algorithms, and the spurious small components in Fig. 7 (b) are obviously reduced through the clustering and dynamic threshold operations.

The Root Mean Square Error (RMSE) is used to evaluate the RSS recovery performance. It is defined over the per-cube RSS difference between the estimated SEM and the real SEM, given by

RMSE=1Ni=1N|Pir,estPir,true|2,{\text{RMSE}}=\sqrt{\frac{1}{N}\sum\limits_{i=1}^{N}{{{\left|{P_{i}^{r,{\text{est}}}-P_{i}^{r,{\text{true}}}}\right|}^{2}}}}, (50)

where Pir,estP_{i}^{r,{\text{est}}} and Pir,trueP_{i}^{r,{\text{true}}} are the estimated and the actual RSS values at the iith cube, respectively. It can be seen from Fig. 8 that MMI-CMSBL achieves the best performance among all methods even at a very low sampling rate. SWOMP performs worse than the SBL-based algorithms due to the high correlation of the sparse dictionary. Furthermore, based on the sparse dictionary constructed by the RT technology, the MMI-CMSBL and Random-MSBL algorithms can rapidly converge and accurately recover the spectrum map at a low sampling rate. Comparing Fig. 6 and Fig. 8, we can see that when the MSE of the sparse signal is large, the SEM construction performance is unsatisfactory.
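A short helper for the RMSE of (50) (the function name is ours):

```python
import numpy as np

def sem_rmse(P_est, P_true):
    """RMSE between estimated and true RSS over all N cubes, Eq. (50)."""
    P_est = np.ravel(np.asarray(P_est, float))
    P_true = np.ravel(np.asarray(P_true, float))
    return np.sqrt(np.mean(np.abs(P_est - P_true) ** 2))
```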

IV-D Impact of Sparsity

The impact of KK on the performance of SEM construction and sparse signal recovery is shown in Figs. 9 - 12. Figs. 9 and 11 show that, as the sparsity KK increases, the SEM construction error and the MSE of sparse signal recovery with MMI-CMSBL increase. Meanwhile, as shown in Fig. 9, when the sampling rate is small, the SEM construction error first increases and then decreases with increasing KK. Fig. 11 shows that the MSE of sparse signal recovery increases with KK, and the signal can be successfully recovered at least when M>2Kln(N/K)M>2K\ln\left({N/K}\right). In addition, the convergence rate of the sparse support distortion improves as the sparsity KK decreases.
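As a quick numerical illustration of the condition M > 2K ln(N/K) (the helper name is ours; the sampling-rate remark assumes the simulated tensor of N = 1000 cubes):

```python
import numpy as np

def min_measurements(N, K):
    """Smallest integer M strictly satisfying M > 2*K*ln(N/K) (Section IV-D).

    For N = 1000 cubes and K = 4 transmitters this gives M = 45,
    i.e. a sampling rate of roughly 4.5%.
    """
    return int(np.floor(2 * K * np.log(N / K))) + 1
```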

Figs. 10 and 12 show that, as the sparsity KK increases, MMI-CMSBL still maintains excellent performance on SEM recovery and sparse signal recovery compared with the other algorithms at a fixed low sampling rate. MMI-CMSBL and Random-MSBL are less influenced by the sparsity KK than Random-SBL. As the number of RF transmitters increases, their mutual interference also increases. Additionally, CSBL achieves better performance than the traditional SBL. The visualization of the SEM construction with different sparsity is shown in Fig. 13.

V Conclusion

In this paper, we have investigated the issue of 3D SEM construction based on SBL, which is of high value for the efficient application of CR. We have first formulated 3D SEM construction as a sparse sampling problem by exploiting the underlying sparse nature of the 3D spectrum situation. Then, we have proposed a 3D scenario-dependent SEM construction scheme, which is composed of three components: sparse dictionary construction, sampling optimization and spectrum situation recovery. Firstly, considering the complexity of the electromagnetic propagation environment in the actual scenario, we have designed a scenario-dependent sparse dictionary for SEM construction based on the channel model. Secondly, we have developed an MMI-based sampling architecture to obtain the optimized measurement matrix. Based on the SBL framework, we have derived the optimization function of MMI sampling and solved it by a greedy algorithm. Finally, due to the ineffectiveness of the traditional SBL recovery algorithm in 3D SEM construction, a tailored MMD clustering-based SBL algorithm has been proposed. The recovered sparse signal achieves high precision through dynamic threshold pruning. We have also compared the sparse signal recovery performance and the SEM construction performance of six methods, i.e., Random-SBL, Random-CSBL, Random-MSBL, MMI-SBL, MMI-CMSBL and Random-SWOMP. The impact of sparsity on the situation recovery precision has been studied. Simulations have demonstrated the superiority of the proposed 3D SEM construction scheme.

Refer to caption
Figure 8: 3D SEM construction performance comparisons (K=4K=4).
Refer to caption
Figure 9: Impact of sparsity KK on the MMI-CMSBL performances of SEM construction.
Refer to caption
Figure 10: Impact of sparsity KK on the SEM construction performances of different algorithms (r=0.03r=0.03).
Refer to caption
Figure 11: Impact of sparsity KK on the MMI-CMSBL performances of sparse signal recovery.
Refer to caption
Figure 12: Impact of sparsity KK on the sparse signal recovery of different algorithms (r=0.03r=0.03).
Refer to caption
Figure 13: 3D SEM construction performance visualization with different sparsity KK (r=0.1r=0.1). (a) and (c) are the simulation SEM with K=8K=8 and K=12K=12, respectively. (b) and (d) present the SEM construction of MMI-CMSBL with K=8K=8 and K=12K=12, respectively.

Appendix A Proof of the equation (31)

In this part, we present the derivation of (31) in the text. Firstly, we introduce the Mahalanobis transformation lemma,

Lemma 1. For an arbitrary NN-dimensional Gaussian distribution ϑ𝒩(μ,Σ)\vartheta\sim\mathcal{N}\left({\mu,\Sigma}\right), we call ρ=Σ12[ϑμ]\rho={\Sigma^{-\frac{1}{2}}}\left[{\vartheta-\mu}\right] the Mahalanobis transformation, where

ρ𝒩(0,Ik),\rho\sim\mathcal{N}\left({0,{I_{k}}}\right), (51)

which means that each ρi{\rho_{i}} follows the standard Gaussian distribution 𝒩(0,1)\mathcal{N}\left({0,1}\right).

Proof: We have

p(ϑ)=(2π)N2|Σ|12exp[12(ϑμ)TΣ1(ϑμ)],p\left(\vartheta\right)={\left({2\pi}\right)^{-\frac{N}{2}}}{\left|\Sigma\right|^{-\frac{1}{2}}}\exp\left[{-\frac{1}{2}{{\left({\vartheta-\mu}\right)}^{T}}{\Sigma^{-1}}\left({\vartheta-\mu}\right)}\right], (52)
ϑ=Σ12ρ+μ,\vartheta={\Sigma^{\frac{1}{2}}}\rho+\mu, (53)

we compute the Jacobian determinant

𝐉=det[ϑρT]=|Σ|12.{\mathbf{J}}=\det\left[{\frac{{\partial\vartheta}}{{\partial{\rho^{T}}}}}\right]={\left|\Sigma\right|^{\frac{1}{2}}}. (54)

According to the change-of-variables lemma,

f(ρ)=fϑ[h(ρ)]×|𝐉|,f\left(\rho\right)={f_{\vartheta}}\left[{h\left(\rho\right)}\right]\times\left|{\mathbf{J}}\right|, (55)

where ϑf(ϑ)\vartheta\sim f\left(\vartheta\right) and ϑ=h(ρ)\vartheta=h\left(\rho\right) is the inverse function. |𝐉|\left|{\mathbf{J}}\right| is the absolute value of the Jacobian determinant. Then the distribution of ρ\rho is

p(ρ)=(2π)N2exp[12ρTρ].p\left(\rho\right)={\left({2\pi}\right)^{-\frac{N}{2}}}\exp\left[{-\frac{1}{2}{\rho^{T}}\rho}\right]. (56)

This proves the lemma. Then, for the continuous random variable ω\omega, the conditional entropy is

H(𝝎|𝐭)=p(𝒕)p(𝝎|𝒕)lnp(𝝎|𝒕)𝑑𝝎𝑑𝒕.H\left({\bm{\omega}|{\mathbf{t}}}\right)=-\int{p\left(\bm{t}\right)}\int{p\left({\bm{\omega}|\bm{t}}\right)\ln p\left({\bm{\omega}|\bm{t}}\right)}d\bm{\omega}d\bm{t}. (57)

The posterior distribution of ω\omega obeys the multidimensional Gaussian distribution 𝒩(𝝁,𝚺)\mathcal{N}(\bm{\mu},{\mathbf{\Sigma}}) with the mean and covariance given by (18). Since SBL uses the posterior mean to estimate ω\omega, we use the posterior distribution to approximate the distribution of ω\omega, and we have

H(𝝎|𝒕)\displaystyle H\left({\bm{\omega}|\bm{t}}\right) =H𝒕(𝝎)\displaystyle={H_{\bm{t}}}\left(\bm{\omega}\right) (58)
=p(𝝎)ln((2π)N2|𝚺|12\displaystyle=-\int{p\left(\bm{\omega}\right)}\ln({\left({2\pi}\right)^{-\frac{N}{2}}}{\left|{\mathbf{\Sigma}}\right|^{-\frac{1}{2}}}
exp[12(𝝎𝝁)T𝚺1(𝝎𝝁)])d𝝎\displaystyle\exp\left[{-\frac{1}{2}{{\left({\bm{\omega}-\bm{\mu}}\right)}^{T}}{{\mathbf{\Sigma}}^{-1}}\left({\bm{\omega}-\bm{\mu}}\right)}\right])d\bm{\omega}
=p(𝝎)[ln((2π)N2|𝚺|12)\displaystyle=\int{p\left(\bm{\omega}\right)}[\ln\left({{{\left({2\pi}\right)}^{-\frac{N}{2}}}{{\left|{\mathbf{\Sigma}}\right|}^{-\frac{1}{2}}}}\right)
12(𝝎𝝁)T𝚺1(𝝎𝝁)]d𝝎\displaystyle-\frac{1}{2}{\left({\bm{\omega}-\bm{\mu}}\right)^{T}}{{\mathbf{\Sigma}}^{-1}}\left({\bm{\omega}-\bm{\mu}}\right)]d\bm{\omega}
=ln((2π)N2|𝚺|12)\displaystyle=\ln\left({{{\left({2\pi}\right)}^{\frac{N}{2}}}{{\left|{\mathbf{\Sigma}}\right|}^{\frac{1}{2}}}}\right)
+12p(𝝎)[(𝝎𝝁)T𝚺1(𝝎𝝁)]𝑑𝝎\displaystyle+\frac{1}{2}\int{p\left(\bm{\omega}\right)\left[{{{\left({\bm{\omega}-\bm{\mu}}\right)}^{T}}{{\mathbf{\Sigma}}^{-1}}\left({\bm{\omega}-\bm{\mu}}\right)}\right]d\bm{\omega}}

By using Lemma 1, we then have

p(𝝎)[(𝝎𝝁)T𝚺1(𝝎𝝁)]𝑑𝝎\displaystyle\int{p\left(\bm{\omega}\right)\left[{{{\left({\bm{\omega}-\bm{\mu}}\right)}^{T}}{{\mathbf{\Sigma}}^{-1}}\left({\bm{\omega}-\bm{\mu}}\right)}\right]d\bm{\omega}} (59)
=p(ν)×νTν𝑑ν\displaystyle=\int{p\left({\mathbf{\nu}}\right)\times{{\mathbf{\nu}}^{T}}{\mathbf{\nu}}d\nu}
=i=1NE[νi2]=N.\displaystyle=\sum\limits_{i=1}^{N}{{\text{E}}\left[{\nu_{i}^{2}}\right]}=N.

Accordingly, we can obtain the conditional entropy term in the formula (31), as given by

H(𝝎|𝒕)\displaystyle H\left({\bm{\omega}|\bm{t}}\right) =ln((2π)N2|𝚺|12)+N2\displaystyle=\ln\left({{{\left({2\pi}\right)}^{\frac{N}{2}}}{{\left|{\mathbf{\Sigma}}\right|}^{\frac{1}{2}}}}\right)+\frac{N}{2} (60)
=ln((2πe)N2|𝚺|12)\displaystyle={\text{ln}}\left({{{\left({2\pi e}\right)}^{\frac{N}{2}}}{{\left|{\mathbf{\Sigma}}\right|}^{\frac{1}{2}}}}\right)
=N2(ln2π+1)+12ln|𝚺|.\displaystyle=\frac{N}{2}\left({\ln 2\pi+1}\right)+\frac{1}{2}\ln\left|{\mathbf{\Sigma}}\right|.

The derivation of H(𝝎)H\left(\bm{\omega}\right) can be obtained by analogy.

Appendix B Proofs of the equations (41) and (42)

To obtain Eq. (41), we first define

(𝜶)=E𝝎|𝒕,𝜶,β[lnp(𝝎|𝜶)p(𝜶)]\displaystyle\mathcal{L}\left(\bm{\alpha}\right)={\text{E}}_{\bm{\omega}|\bm{t},\bm{\alpha},\beta}\left[{\ln p\left({\bm{\omega}|\bm{\alpha}}\right)p\left(\bm{\alpha}\right)}\right] (61)
=E[12(log|𝚲1|+𝝎T𝚲𝝎)+n=1N(alnαnbαn)]\displaystyle={\text{E}}\left[{-\frac{1}{2}\left({\log\left|{{\mathbf{\Lambda}}^{-1}}\right|+\bm{\omega}^{T}{\mathbf{\Lambda}}\bm{\omega}}\right)+\sum\limits_{n=1}^{N}{\left({a\ln{\alpha_{n}}-b{\alpha_{n}}}\right)}}\right]
+con\displaystyle+con
=12i=1N(lnαi1+αi(μi2+Σi,i))\displaystyle=-\frac{1}{2}\sum\limits_{i=1}^{N}{\left({\ln\alpha_{i}^{-1}+{\alpha_{i}}\left({\mu_{i}^{2}+{\Sigma_{i,i}}}\right)}\right)}
+i=1N(alnαibαi)+con,\displaystyle+\sum\limits_{i=1}^{N}{\left({a\ln{\alpha_{i}}-b{\alpha_{i}}}\right)}+con,

with concon denoting the terms that are constant with respect to 𝜶\bm{\alpha}. Then, we let (𝜶)/αi=0\partial\mathcal{L}\left(\bm{\alpha}\right)/\partial\alpha_{i}=0 to find the stationary point of αi\alpha_{i}, as

(𝜶)/αi\displaystyle\partial\mathcal{L}\left(\bm{\alpha}\right)/\partial\alpha_{i} =(12αi112(μi2+Σi,i))+(aαi1b)\displaystyle=\left({\frac{1}{2}\alpha_{i}^{-1}-\frac{1}{2}\left({\mu_{i}^{2}+{\Sigma_{i,i}}}\right)}\right)+\left({a\alpha_{i}^{-1}-b}\right) (62)
=0.\displaystyle=0.

Solving this equation, we obtain

αi=1+2aμi2+Σi,i+2b.{\alpha_{i}}=\frac{{1+2a}}{{\mu_{i}^{2}+{\Sigma_{i,i}}+2b}}. (63)

To obtain Eq. (42), we define

(β)=E𝝎|𝒕,𝜶,β[lnp(𝒕|𝝎,β)p(β)]\displaystyle\mathcal{L}\left(\beta\right)={\text{E}}_{\bm{\omega}|\bm{t},\bm{\alpha},\beta}\left[{\ln p\left({\bm{t}|\bm{\omega},\beta}\right)p\left(\beta\right)}\right] (64)
=E[12(ln|𝐁|β(𝒕𝚽𝝎)T(𝒕𝚽𝝎))+clnβdβ]\displaystyle={\text{E}}\left[{\frac{1}{2}\left({\ln\left|{\mathbf{B}}\right|-\beta\left({\bm{t}-{\mathbf{\Phi}}\bm{\omega}}\right)^{T}\left({\bm{t}-{\mathbf{\Phi}}\bm{\omega}}\right)}\right)+c\ln\beta-d\beta}\right]
+con\displaystyle+con
=12(Mlnββj=1ME[(𝒕𝚽𝝎)j2])\displaystyle=\frac{1}{2}\left({M\ln\beta-\beta\sum\limits_{j=1}^{M}{{\text{E}}\left[{\left({\bm{t}-{\mathbf{\Phi}}\bm{\omega}}\right)_{j}^{2}}\right]}}\right)
+clnβdβ+con.\displaystyle+c\ln\beta-d\beta+con.

Then, we let (β)/β=0\partial\mathcal{L}\left(\beta\right)/\partial\beta=0 to find the stationary point of β\beta, and we obtain

βnew=M+2cj=1ME[(𝒕𝚽𝝎)j2]+2d,\beta^{new}=\frac{{M+2c}}{{\sum\limits_{j=1}^{M}{{\text{E}}\left[{\left({\bm{t}-{\mathbf{\Phi}}\bm{\omega}}\right)_{j}^{2}}\right]}+2d}}, (65)

where

j=1ME[(𝒕𝚽𝝎)j2]\displaystyle\sum\limits_{j=1}^{M}{{\text{E}}\left[{\left({\bm{t}-{\mathbf{\Phi}}\bm{\omega}}\right)_{j}^{2}}\right]} (66)
=E[𝒕𝚽𝝎22]\displaystyle={\text{E}}\left[{\left\|{\bm{t}-{\mathbf{\Phi}}\bm{\omega}}\right\|_{2}^{2}}\right]
=𝒕222𝒕T𝚽𝝁+𝚽𝝁22+tr(𝚺𝚽T𝚽)\displaystyle=\left\|\bm{t}\right\|_{2}^{2}-2\bm{t}^{T}{\mathbf{\Phi}}\bm{\mu}+\left\|{{\mathbf{\Phi}}\bm{\mu}}\right\|_{2}^{2}+tr\left({{\mathbf{\Sigma\Phi}}^{T}{\mathbf{\Phi}}}\right)
=𝒕𝚽𝝁22+β1tr(𝐈𝚺𝚲)\displaystyle=\left\|{\bm{t}-{\mathbf{\Phi}}\bm{\mu}}\right\|_{2}^{2}+\beta^{-1}tr\left({{\mathbf{I}}-{\mathbf{\Sigma\Lambda}}}\right)
=𝒕𝚽𝝁22+β1i=1N(1αiΣii).\displaystyle=\left\|{\bm{t}-{\mathbf{\Phi}}\bm{\mu}}\right\|_{2}^{2}+\beta^{-1}\sum\limits_{i=1}^{N}{\left({1-\alpha_{i}\Sigma_{ii}}\right)}.

References

  • [1] G. Ding, Q. Wu, L. Zhang, Y. Lin, T. A. Tsiftsis, and Y.-D. Yao, “An amateur drone surveillance system based on the cognitive internet of things,” IEEE Communications Magazine, vol. 56, no. 1, pp. 29–35, 2018.
  • [2] A. Ahmad, S. Ahmad, M. H. Rehmani, and N. U. Hassan, “A survey on radio resource allocation in cognitive radio sensor networks,” IEEE Communications Surveys & Tutorials, vol. 17, no. 2, pp. 888–917, 2015.
  • [3] B. Wang and K. R. Liu, “Advances in cognitive radio networks: A survey,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 1, pp. 5–23, 2011.
  • [4] H. B. Yilmaz, T. Tugcu, F. Alagöz, and S. Bayhan, “Radio environment map as enabler for practical cognitive radio networks,” IEEE Communications Magazine, vol. 51, no. 12, pp. 162–169, 2013.
  • [5] M. Höyhtyä, A. Mämmelä, M. Eskola, M. Matinmikko, J. Kalliovaara, J. Ojaniemi, J. Suutala, R. Ekman, R. Bacchus, and D. Roberson, “Spectrum occupancy measurements: A survey and use of interference maps,” IEEE Communications Surveys & Tutorials, vol. 18, no. 4, pp. 2386–2414, 2016.
  • [6] K. Sato and T. Fujii, “Kriging-based interference power constraint: Integrated design of the radio environment map and transmission power,” IEEE Transactions on Cognitive Communications and Networking, vol. 3, no. 1, pp. 13–25, 2017.
  • [7] Y. Hu and R. Zhang, “A spatiotemporal approach for secure crowdsourced radio environment map construction,” IEEE/ACM Transactions on Networking, vol. 28, no. 4, pp. 1790–1803, 2020.
  • [8] E. Dall’Anese, S.-J. Kim, and G. B. Giannakis, “Channel gain map tracking via distributed kriging,” IEEE Transactions on Vehicular Technology, vol. 60, no. 3, pp. 1205–1211, 2011.
  • [9] M. Tang, G. Ding, Q. Wu, Z. Xue, and T. A. Tsiftsis, “A joint tensor completion and prediction scheme for multi-dimensional spectrum map construction,” IEEE Access, vol. 4, pp. 8044–8052, 2016.
  • [10] S. Shrestha, X. Fu, and M. Hong, “Deep spectrum cartography: Completing radio map tensors using learned neural models,” IEEE Transactions on Signal Processing, vol. 70, pp. 1170–1184, 2022.
  • [11] X. Han, L. Xue, Y. Xu, and Z. Liu, “A radio environment maps estimation algorithm based on the pixel regression framework for underlay cognitive radio networks using incomplete training data,” Sensors, vol. 20, no. 8, p. 2245, 1 2020.
  • [12] H. B. Yilmaz and T. Tugcu, “Location estimation-based radio environment map construction in fading channels,” Wireless communications & mobile computing, vol. 15, no. 3, pp. 561–570, 1 2015.
  • [13] M. Pesko, T. Javornik, L. Vidmar, A. Kosir, M. Stular, and M. Mohorcic, “The indirect self-tuning method for constructing radio environment map using omnidirectional or directional transmitter antenna,” Eurasip Journal on Wireless Communications and Networking, vol. 2015, pp. 50–1–50–12, 1 2015.
  • [14] F. Shen, G. Ding, Z. Wang, and Q. Wu, “Uav-based 3d spectrum sensing in spectrum-heterogeneous networks,” IEEE Transactions on Vehicular Technology, vol. 68, no. 6, pp. 5711–5722, 2019.
  • [15] F. Shen, Z. Wang, G. Ding, K. Li, and Q. Wu, “3d compressed spectrum mapping with sampling locations optimization in spectrum-heterogeneous environment,” IEEE Transactions on Wireless Communications, vol. 21, no. 1, pp. 326–338, 2022.
  • [16] Q. Zhu, Y. Zhao, Y. Huang, Z. Lin, L. H. Wang, Y. Bai, T. Lan, F. Zhou, and Q. Wu, “Demo abstract: An uav-based 3d spectrum real-time mapping system,” in IEEE INFOCOM 2022 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2022, pp. 1–2.
  • [17] K. Sato, K. Suto, K. Inage, K. Adachi, and T. Fujii, “Space-frequency-interpolated radio map,” IEEE Transactions on Vehicular Technology, vol. 70, no. 1, pp. 714–725, 2021.
  • [18] K. Mao, Q. Zhu, M. Song, H. Li, B. Ning, G. F. Pedersen, and W. Fan, “Machine-learning-based 3-d channel modeling for u2v mmwave communications,” IEEE Internet of Things Journal, vol. 9, no. 18, pp. 17 592–17 607, 2022.
  • [19] D. Lee, S.-J. Kim, and G. B. Giannakis, “Channel gain cartography for cognitive radios leveraging low rank and sparsity,” IEEE Transactions on Wireless Communications, vol. 16, no. 9, pp. 5953–5966, 2017.
  • [20] J. A. Bazerque and G. B. Giannakis, “Distributed spectrum sensing for cognitive radio networks by exploiting sparsity,” IEEE Transactions on Signal Processing, vol. 58, no. 3, pp. 1847–1862, 2010.
  • [21] R. Tibshirani, “Regression shrinkage and selection via the lasso: a retrospective,” Journal of the Royal Statistical Society, Series B. Statistical Methodology, vol. 73, no. 3, pp. 273–282, 1 2011.
  • [22] S. K. Sahoo and A. Makur, “Signal recovery from random measurements via extended orthogonal matching pursuit,” IEEE Transactions on Signal Processing, vol. 63, no. 10, pp. 2572–2581, 2015.
  • [23] P. G. Ciarlet, Introduction to Numerical Linear Algebra and Optimisation.   Cambridge, UK: Cambridge University Press, 1989.
  • [24] M. E. Tipping, “Sparse bayesian learning and the relevance vector machine,” J. Mach. Learn. Res., vol. 1, pp. 211–244, 2001.
  • [25] C. M. Bishop and M. E. Tipping, “Variational relevance vector machines,” ArXiv, vol. abs/1301.3838, 2000.
  • [26] F. Kiaee, H. Sheikhzadeh, and S. Eftekhari Mahabadi, “Relevance vector machine for survival analysis,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 3, pp. 648–660, 2016.
  • [27] D.-H. Huang, S.-H. Wu, W.-R. Wu, and P.-H. Wang, “Cooperative radio source positioning and power map reconstruction: A sparse bayesian learning approach,” IEEE Transactions on Vehicular Technology, vol. 64, no. 6, pp. 2318–2332, 2015.
  • [28] M. Fornasier and H. Rauhut, Compressive Sensing.   New York, NY: Springer New York, 2011, pp. 187–228. [Online]. Available: https://doi.org/10.1007/978-0-387-92920-0_6
  • [29] M. A. Herman and T. Strohmer, “General deviants: An analysis of perturbations in compressed sensing,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 342–349, 2010.
  • [30] S. D. Babacan, R. Molina, and A. K. Katsaggelos, “Bayesian compressive sensing using laplace priors,” IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 53–63, 2010.
  • [31] M. E. Tipping, “Bayesian inference: An introduction to principles and practice in machine learning,” in Advanced Lectures on Machine Learning, 2003.
  • [32] Q. Zhu, K. Mao, M. Song, X. Chen, B. Hua, W. Zhong, and X. Ye, “Map-based channel modeling and generation for u2v mmwave communication,” IEEE Transactions on Vehicular Technology, vol. 71, no. 8, pp. 8004–8015, 2022.
  • [33] D. Denkovski, V. Atanasovski, L. Gavrilovska, J. Riihijärvi, and P. Mähönen, “Reliability of a radio environment map: Case of spatial interpolation techniques,” in 2012 7th International ICST Conference on Cognitive Radio Oriented Wireless Networks and Communications (CROWNCOM), 2012, pp. 248–253.
  • [34] H. Wang, G. Pottie, K. Yao, and D. Estrin, “Entropy-based sensor selection heuristic for target localization,” in Third International Symposium on Information Processing in Sensor Networks, 2004. IPSN 2004, 2004, pp. 36–45.
  • [35] F. Zhao, J. Shin, and J. Reich, “Information-driven dynamic sensor collaboration,” IEEE Signal Processing Magazine, vol. 19, no. 2, pp. 61–72, 2002.
  • [36] Y. Saito, T. Nonomura, K. Yamada, K. Nakai, T. Nagata, K. Asai, Y. Sasaki, and D. Tsubakino, “Determinant-based fast greedy sensor selection algorithm,” IEEE Access, vol. 9, pp. 68535–68551, 2021.
  • [37] K. Hintz, “A measure of the information gain attributable to cueing,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 2, pp. 434–442, 1991.
  • [38] T. Blumensath and M. E. Davies, “Stagewise weak gradient pursuits,” IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4333–4346, 2009.
[Uncaptioned image] Jie Wang received the B.S. degree in internet of things engineering from the College of Information Science and Technology, Nanjing Forestry University, Nanjing, China, in 2021. She is currently pursuing the Ph.D. degree in communications and information systems with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics. Her current research interests include spectrum mapping.
[Uncaptioned image] Qiuming Zhu received the B.S. degree in electronic engineering and the M.S. and Ph.D. degrees in communication and information system from the Nanjing University of Aeronautics and Astronautics (NUAA), Nanjing, China, in 2002, 2005, and 2012, respectively. He is currently a Professor in the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China. His current research interests include channel sounding, modeling, and emulation for the fifth/sixth generation (5G/6G) mobile communication, vehicle-to-vehicle (V2V) communication, and unmanned aerial vehicles (UAV) communication systems.
[Uncaptioned image] Zhipeng Lin received the Ph.D. degrees from the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China, and the School of Electrical and Data Engineering, University of Technology Sydney, NSW, Australia, in 2021. He is currently an Associate Researcher in the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China. His current research interests include signal processing, massive MIMO, spectrum sensing, and UAV communications.
[Uncaptioned image] Qihui Wu received the B.S. degree in communications engineering and the M.S. and Ph.D. degrees in communications and information system from the PLA University of Science and Technology, Nanjing, China, in 1994, 1997, and 2000, respectively. He is currently a Professor with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics. His current research interests include algorithms and optimization for cognitive wireless networks, soft-defined radio, and wireless communication systems.
[Uncaptioned image] Yang Huang received the B.S. and M.S. degrees from Northeastern University, China, in 2011 and 2013, respectively, and the Ph.D. degree from Imperial College London in 2017. He is currently an Associate Professor with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China. His research interests include wireless communications, MIMO systems, convex optimization, machine learning, and signal processing for communications. He has served as a Technical Program Committee (TPC) member for many international conferences, such as IEEE GLOBECOM.
[Uncaptioned image] Xuezhao Cai received the B.S. degree in information engineering from the School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China, in 2022. He is currently pursuing the M.S. degree in electronic information with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics. His current research is related to spectrum.
[Uncaptioned image] Weizhi Zhong received the B.S. degree in communications engineering and the M.S. and Ph.D. degrees in communications and information system from the PLA University of Science and Technology, Nanjing, China, in 1994, 1997, and 2000, respectively. Weizhi Zhong is currently a Professor with the College of Astronautics, Nanjing University of Aeronautics and Astronautics. Current research interests include algorithms and optimization for cognitive wireless networks, software-defined radio, and wireless communication systems.
[Uncaptioned image] Yi Zhao received the B.S. degree in information engineering from Nanjing University of Aeronautics and Astronautics (NUAA) in 2021. He is currently working towards the master's degree in electronic information engineering at NUAA. His current research focuses on temporal and spatial prediction and reconstruction of the spectrum situation.