
Learning-Based Massive Beamforming

Siyuan Lu, Shengjie Zhao and Qingjiang Shi School of Software Engineering
Tongji University
Shanghai, China
Email: {14_lusiyuan, shengjiezhao, shiqj}@tongji.edu.cn
Abstract

Developing resource allocation algorithms with strong real-time performance and high efficiency has been an imperative topic in wireless networks. Conventional optimization-based iterative resource allocation algorithms often suffer from slow convergence, especially for massive multiple-input-multiple-output (MIMO) beamforming problems. This paper studies learning-based efficient massive beamforming methods for multi-user MIMO networks. The considered massive beamforming problem is challenging in two respects. First, the beamforming matrix to be learned is very high-dimensional when the number of antennas is massive. Second, the objective is often time-varying and the solution space is not fixed due to communication requirements. These challenges make learning representations for massive beamforming an extremely difficult task. In this paper, by exploiting the structure of the popular WMMSE beamforming solution, we propose convolutional massive beamforming neural networks (CMBNN) using both supervised and unsupervised learning schemes, with particular designs of the network structure and input/output. Numerical results demonstrate the efficacy of the proposed CMBNN in terms of running time and system throughput.

Index Terms:
Beamforming, WMMSE, convolutional neural network, massive MIMO

I Introduction

The rapid development of deep learning in various applications has greatly changed many aspects of our life [1]. Beyond daily life, many research fields have also been revolutionized by deep learning, such as computer vision and natural language processing. In wireless communication research, deep learning (and machine learning) based methods are gaining increasing attention due to their efficacy. In response, embedding deep learning into fifth-generation (5G) mobile systems and wireless networks has become an increasingly hot topic in recent years [2, 3].

At the same time, the advantages of massive MIMO in energy efficiency, spectral efficiency, robustness, and reliability have made it indispensable in the 5G era [4, 5]. To improve communication quality in massive MIMO systems, downlink beamforming (or precoding) is one of the most important transmission technologies. For beamforming design, system throughput (weighted sum-rate) maximization under a total power constraint is an important metric of communication quality, and it is the focus of our paper.

Many beamforming algorithms are based on optimization theory, such as weighted minimum mean square error (WMMSE) [6], which finds locally optimal solutions of a formulated optimization problem through iterations. However, such optimization-based algorithms often suffer from high computational cost (e.g., WMMSE involves complex matrix inversion operations). When large-scale antenna arrays are deployed at the transmitter [7], the computational cost of these algorithms can be prohibitive. Meanwhile, low-complexity algorithms like the zero-forcing (ZF) method [8] cannot achieve good performance when the number of users or antennas becomes large.

As a result, deep learning-based methods have been proposed to solve such problems in recent years. Supervised deep neural networks (DNNs) have been applied to power control and can achieve sum-rate performance similar to the classical power allocation algorithm WMMSE [2]. Unsupervised learning can even match or exceed the performance of the WMMSE algorithm [9, 10]. Meanwhile, a hybrid precoding scheme with a DNN-based autoencoder [11] was proposed for millimeter-wave (mmWave) MIMO systems. Apart from the aforementioned DNN models, a distributed convolutional neural network (CNN)-based deep power control network was introduced [12] to maximize the system spectral efficiency or energy efficiency with local CSI. Furthermore, CNN-based beamforming neural networks (BNNs) were proposed [13] for three typical beamforming optimization problems in multi-user multiple-input-single-output (MISO) networks. For the sum-rate maximization problem, BNNs were trained using both supervised and unsupervised learning.

Although these deep learning-based methods address multi-user MIMO downlink beamforming, they mainly focus on the basic sum-rate maximization problem without considering more complicated situations such as user priorities or a varying number of streams per user. Besides, their neural network designs do not exploit the structure of the closed-form updates in the iterative algorithm.

In summary, two main challenges remain unaddressed in learning-based massive MIMO beamforming. First, as the number of antennas becomes large in a massive MIMO system, both the input and output of neural network (NN)-based methods are high-dimensional, which makes the network more complex and harder to train. Second, in real-world systems, the number of user streams and the user priorities both change over time, which means the solution space is not fixed. It is therefore quite challenging to handle these two cases without significantly increasing the neural network complexity.

In this paper, to tackle the above challenges, we propose a new deep learning framework called convolutional massive beamforming neural networks (CMBNN). The main contributions of this paper are summarized as follows:

1) We utilize the structure of the closed-form solution of the WMMSE algorithm in the design of the NN structure. In addition, we design a novel NN structure to cope with a varying number of user streams. By doing so, for the first time, we are able to handle beamforming with time-varying user priorities and a varying number of user streams without significantly increasing the NN complexity or sacrificing model performance.

2) Due to the use of problem structure in the design of our networks, all proposed NN structures are much simpler than existing approaches. This low complexity makes our method more appealing under the real-time requirements of 5G wireless communication systems.

II System Model and Problem Formulation

II-A System Model

Consider a single-cell multi-user massive MIMO system where the BS is equipped with $N_T$ transmit antennas and serves $K$ users, each equipped with $N_R$ antennas [14]. Let $\mathbf{V}_k\in\mathbb{C}^{N_T\times d_k}$ denote the transmit beamformer that the BS employs to send the signal $\mathbf{s}_k\in\mathbb{C}^{d_k\times 1}$ to user $k$. The transmitted BS signal is given by

\[
\mathbf{x}=\sum_{k=1}^{K}\mathbf{V}_k\mathbf{s}_k,
\]

where it is assumed that $\mathbb{E}\left[\mathbf{s}_k\mathbf{s}_k^H\right]=\mathbf{I}$.

Assuming a flat-fading channel model, the received signal $\mathbf{y}_k\in\mathbb{C}^{N_R\times 1}$ at user $k$ can be written as

\[
\begin{aligned}
\mathbf{y}_k &= \mathbf{H}_k\mathbf{x}+\mathbf{n}_k \\
&= \underbrace{\mathbf{H}_k\mathbf{V}_k\mathbf{s}_k}_{\text{desired signal of user }k}
 + \underbrace{\sum_{j=1,\,j\neq k}^{K}\mathbf{H}_k\mathbf{V}_j\mathbf{s}_j}_{\text{multi-user interference}}
 + \mathbf{n}_k,\quad\forall k
\end{aligned}
\tag{1}
\]

where the matrix $\mathbf{H}_k\in\mathbb{C}^{N_R\times N_T}$ represents the channel from the BS to user $k$, and $\mathbf{n}_k\in\mathbb{C}^{N_R\times 1}$ denotes additive white Gaussian noise with distribution $\mathcal{CN}(0,\sigma_k^2\mathbf{I})$. We assume that the signals for different users are independent of each other and of the receiver noises. In this paper, we treat the multi-user interference as noise and employ a linear receive beamforming strategy with $\mathbf{U}_k\in\mathbb{C}^{N_R\times d_k},\,\forall k$, so that the estimated signal $\hat{\mathbf{s}}_k\in\mathbb{C}^{d_k\times 1}$ is given by $\hat{\mathbf{s}}_k=\mathbf{U}_k^H\mathbf{y}_k,\,\forall k$.
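The signal model above can be checked numerically. The following NumPy sketch uses small hypothetical dimensions and random data, and verifies that the received signal decomposes into the desired term, the multi-user interference, and the noise:

```python
import numpy as np

rng = np.random.default_rng(0)
N_T, N_R, K, d = 8, 2, 2, 2  # illustrative sizes: antennas, users, streams

# Random channels, beamformers, symbols, and noise (hypothetical data).
H = [rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T)) for _ in range(K)]
V = [rng.standard_normal((N_T, d)) + 1j * rng.standard_normal((N_T, d)) for _ in range(K)]
s = [rng.standard_normal((d, 1)) + 1j * rng.standard_normal((d, 1)) for _ in range(K)]
sigma = 0.1
n = [sigma * (rng.standard_normal((N_R, 1)) + 1j * rng.standard_normal((N_R, 1)))
     for _ in range(K)]

x = sum(V[k] @ s[k] for k in range(K))      # transmitted BS signal
y = [H[k] @ x + n[k] for k in range(K)]     # received signal at each user, eq. (1)

# Decomposition of user 0's received signal as in eq. (1).
desired = H[0] @ V[0] @ s[0]
interference = sum(H[0] @ V[j] @ s[j] for j in range(1, K))
assert np.allclose(y[0], desired + interference + n[0])
```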

II-B Problem Formulation

A basic problem of interest is to find the transmit beamformers $\{\mathbf{V}_k\}$ such that the system weighted sum-rate is maximized subject to a total power constraint imposed by the BS power budget. Mathematically, it can be written as

\[
\begin{aligned}
\max_{\{\mathbf{V}_k\}}\quad&\sum_{k=1}^{K}\alpha_k R_k\\
\text{s.t.}\quad&\sum_{k=1}^{K}\mathrm{Tr}(\mathbf{V}_k\mathbf{V}_k^H)\leq P_{\max}
\end{aligned}
\tag{2}
\]

where $P_{\max}$ denotes the BS power budget, the weight $\alpha_k$ represents the priority of user $k$ in the system, and $R_k$ is the rate of user $k$, given by

\[
R_k\triangleq\log\det\left(\mathbf{I}+\mathbf{H}_k\mathbf{V}_k\mathbf{V}_k^H\mathbf{H}_k^H\left(\mathbf{A}_k+\sigma_k^2\mathbf{I}\right)^{-1}\right),
\]

where $\mathbf{A}_k\triangleq\sum_{j\neq k}\mathbf{H}_k\mathbf{V}_j\mathbf{V}_j^H\mathbf{H}_k^H$.
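As a worked example of the rate expression, the following sketch computes $R_k$ for given channels and beamformers (`user_rate` is a helper name introduced here for illustration):

```python
import numpy as np

def user_rate(H_list, V_list, k, sigma2):
    """R_k = log det(I + H_k V_k V_k^H H_k^H (A_k + sigma^2 I)^{-1}),
    where A_k is the interference covariance seen by user k."""
    Hk = H_list[k]
    N_R = Hk.shape[0]
    A_k = sum(Hk @ V @ V.conj().T @ Hk.conj().T
              for j, V in enumerate(V_list) if j != k)
    A_k = A_k + np.zeros((N_R, N_R))          # handles the single-user (empty-sum) case
    S_k = Hk @ V_list[k] @ V_list[k].conj().T @ Hk.conj().T
    M = np.eye(N_R) + S_k @ np.linalg.inv(A_k + sigma2 * np.eye(N_R))
    return np.linalg.slogdet(M)[1].real       # log|det M|; det M > 0 here
```

Since the signal covariance is positive semidefinite, the computed rate is always non-negative.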

Under the independence assumption on the $\mathbf{s}_k$'s and $\mathbf{n}_k$'s, the MSE matrix $\mathbf{E}_k$ can be written as

\[
\begin{aligned}
\mathbf{E}_k&\triangleq(\mathbf{I}-\mathbf{U}_k^H\mathbf{H}_k\mathbf{V}_k)(\mathbf{I}-\mathbf{U}_k^H\mathbf{H}_k\mathbf{V}_k)^H\\
&\quad+\sum_{m\neq k}\mathbf{U}_k^H\mathbf{H}_k\mathbf{V}_m\mathbf{V}_m^H\mathbf{H}_k^H\mathbf{U}_k\\
&\quad+\sum_{i=1}^{K}\frac{\sigma_k^2}{P_{\max}}\mathrm{Tr}(\mathbf{V}_i\mathbf{V}_i^H)\,\mathbf{U}_k^H\mathbf{U}_k
\end{aligned}
\tag{3}
\]

Following [6], we can obtain the equivalent WMMSE formulation

\[
\min_{\{\mathbf{W}_k,\mathbf{U}_k,\mathbf{V}_k\}}\quad\sum_{k=1}^{K}\alpha_k\left(\mathrm{Tr}(\mathbf{W}_k\mathbf{E}_k)-\log\det(\mathbf{W}_k)\right)
\]

Furthermore, inspired by the structure of ZF beamforming [8], to reduce the complexity in the massive-antenna scenario we also restrict the $\mathbf{V}_k$'s to the range space of $\mathbf{H}^H$, i.e., $\mathbf{V}_k=\mathbf{H}^H\mathbf{X}_k$ for some $\mathbf{X}_k\in\mathbb{C}^{KN_R\times d_k}$, where $\mathbf{H}\triangleq\left[\mathbf{H}_1^H~\mathbf{H}_2^H~\ldots~\mathbf{H}_K^H\right]^H\in\mathbb{C}^{KN_R\times N_T}$. As a result, by defining $\mathbf{M}_k\triangleq\mathbf{U}_k\mathbf{W}_k\mathbf{U}_k^H$, $\bar{\mathbf{H}}\triangleq\mathbf{H}\mathbf{H}^H\in\mathbb{C}^{KN_R\times KN_R}$, and $\bar{\mathbf{H}}_k\triangleq\mathbf{H}_k\mathbf{H}^H\in\mathbb{C}^{N_R\times KN_R}$ (the $k$-th block row of $\bar{\mathbf{H}}$), we can derive the three main steps of the corresponding WMMSE algorithm as follows:

\[
\mathbf{X}_k=\left(\sum_{j=1}^{K}\frac{\sigma_k^2}{P_{\max}}\alpha_j\mathrm{Tr}(\mathbf{M}_j)\bar{\mathbf{H}}+\sum_{i=1}^{K}\alpha_i\bar{\mathbf{H}}_i^H\mathbf{M}_i\bar{\mathbf{H}}_i\right)^{-1}\alpha_k\bar{\mathbf{H}}_k^H\mathbf{U}_k\mathbf{W}_k
\tag{4}
\]
\[
\mathbf{U}_k=\left(\sum_{j=1}^{K}\frac{\sigma_k^2}{P_{\max}}\mathrm{Tr}(\bar{\mathbf{H}}\mathbf{X}_j\mathbf{X}_j^H)\mathbf{I}+\sum_{i=1}^{K}\bar{\mathbf{H}}_i\mathbf{X}_i\mathbf{X}_i^H\bar{\mathbf{H}}_i^H\right)^{-1}\bar{\mathbf{H}}_k\mathbf{X}_k
\tag{5}
\]
\[
\mathbf{W}_k=\left(\mathbf{E}_k\right)^{-1}=\left(\mathbf{I}-\mathbf{U}_k^H\bar{\mathbf{H}}_k\mathbf{X}_k\right)^{-1}
\tag{6}
\]

The algorithm repeats the above three steps until convergence. For ease of exposition, it is termed reduced-WMMSE (R-WMMSE).
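The three updates can be sketched in NumPy as follows. This is a minimal, illustrative implementation assuming equal noise powers, a fixed iteration count in place of a convergence test, and a final rescaling to enforce the power budget; it is not the authors' production code:

```python
import numpy as np

def r_wmmse(H_list, d, Pmax=1.0, sigma2=1.0, alpha=None, iters=50, seed=0):
    """Sketch of the R-WMMSE loop, cycling through eqs. (5), (6), (4)."""
    rng = np.random.default_rng(seed)
    K = len(H_list)
    N_R = H_list[0].shape[0]
    alpha = np.ones(K) if alpha is None else np.asarray(alpha)
    H = np.vstack(H_list)                              # stacked channel, KN_R x N_T
    Hbar = H @ H.conj().T                              # KN_R x KN_R
    Hb = [H_list[k] @ H.conj().T for k in range(K)]    # block rows of Hbar
    c = sigma2 / Pmax
    # Random X_k (recall V_k = H^H X_k), scaled to meet the power budget.
    X = [rng.standard_normal((K * N_R, d)) + 1j * rng.standard_normal((K * N_R, d))
         for _ in range(K)]
    p = sum(np.trace(Hbar @ Xk @ Xk.conj().T).real for Xk in X)
    X = [Xk * np.sqrt(Pmax / p) for Xk in X]
    for _ in range(iters):
        J = sum(c * np.trace(Hbar @ Xj @ Xj.conj().T).real for Xj in X) * np.eye(N_R) \
            + sum(Hb[i] @ X[i] @ X[i].conj().T @ Hb[i].conj().T for i in range(K))
        U = [np.linalg.solve(J, Hb[k] @ X[k]) for k in range(K)]           # eq. (5)
        W = [np.linalg.inv(np.eye(d) - U[k].conj().T @ Hb[k] @ X[k])
             for k in range(K)]                                            # eq. (6)
        M = [U[k] @ W[k] @ U[k].conj().T for k in range(K)]
        A = sum(c * alpha[j] * np.trace(M[j]).real for j in range(K)) * Hbar \
            + sum(alpha[i] * Hb[i].conj().T @ M[i] @ Hb[i] for i in range(K))
        X = [alpha[k] * np.linalg.solve(A, Hb[k].conj().T @ U[k] @ W[k])
             for k in range(K)]                                            # eq. (4)
    V = [H.conj().T @ Xk for Xk in X]
    p = sum(np.trace(Vk @ Vk.conj().T).real for Vk in V)
    return [Vk * np.sqrt(Pmax / p) for Vk in V]        # enforce the power budget
```

Note that the per-iteration cost is dominated by inverses of $KN_R\times KN_R$ matrices rather than $N_T\times N_T$ ones, which is the point of the range-space restriction.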

III Proposed Method

Our key idea is to learn the above R-WMMSE algorithm using deep learning, so that the complexity can be further reduced by choosing an appropriate neural network structure and input/output.

III-A Reformulation

In previous works like [13], the noise power $\sigma_k^2$ is often fixed across all scenarios, so the trained network only adapts to that noise level. Here we remove the effect of noise by reformulating the problem. Let us define $\tilde{\mathbf{H}}_k=\sqrt{P_{\max}/\sigma_k^2}\,\mathbf{H}_k$ and

\[
\tilde{R}_k\triangleq\log\det\left(\mathbf{I}+\tilde{\mathbf{H}}_k\mathbf{V}_k\mathbf{V}_k^H\tilde{\mathbf{H}}_k^H\left(\sum_{j\neq k}\tilde{\mathbf{H}}_k\mathbf{V}_j\mathbf{V}_j^H\tilde{\mathbf{H}}_k^H+\mathbf{I}\right)^{-1}\right).
\]

Then we have the following proposition.

Proposition 1

Problem (2) is equivalent to

\[
\begin{aligned}
\max_{\{\mathbf{V}_k\}}\quad&\sum_{k=1}^{K}\alpha_k\tilde{R}_k\\
\text{s.t.}\quad&\sum_{k=1}^{K}\mathrm{Tr}(\mathbf{V}_k\mathbf{V}_k^H)\leq 1,
\end{aligned}
\tag{7}
\]

in the sense that the optimal solution to problem (7), multiplied by $\sqrt{P_{\max}}$, is also optimal for problem (2).

Because of this equivalence, we consider problem (7) throughout the rest of this paper. Moreover, for notational simplicity, we drop the tilde from all notations in (7).
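The equivalence is easy to verify numerically in the single-user case, where there is no interference and the two rates can be compared directly. A quick sanity sketch with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
N_T, N_R, Pmax, sigma2 = 4, 2, 10.0, 0.5
H = rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))
V = rng.standard_normal((N_T, N_R)) + 1j * rng.standard_normal((N_T, N_R))
V /= np.sqrt(np.trace(V @ V.conj().T).real)   # unit power: feasible for (7)

def rate(H, V, noise):
    """Single-user rate: log det(I + H V V^H H^H / noise)."""
    S = H @ V @ V.conj().T @ H.conj().T
    return np.linalg.slogdet(np.eye(N_R) + S / noise)[1].real

Ht = np.sqrt(Pmax / sigma2) * H               # the scaled channel of Sec. III-A
# The rate of sqrt(Pmax)*V under (2) equals the rate of V under (7).
assert np.isclose(rate(H, np.sqrt(Pmax) * V, sigma2), rate(Ht, V, 1.0))
```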

III-B Neural Network Architecture

Figure 1 presents the CNN-based network architecture for beamforming design, following the idea of [13], where CL and BN denote convolutional and batch normalization layers respectively, leaky ReLU is chosen as the activation function, and several dense layers are used after the flatten layer. The network architecture is further detailed as follows.

Figure 1: A basic neural network structure for massive beamforming

III-B1 Supervised Learning

Supervised learning is a straightforward way to train a beamforming neural network. All data samples can be generated by running the R-WMMSE algorithm. Since the input $\mathbf{H}$ or $\mathbf{H}\mathbf{H}^H$ of the CNN model is a complex matrix, we reshape it into an image-like tensor with two channels, one for the real part and one for the imaginary part. However, unlike traditional image processing with convolutional and pooling layers, we do not use pooling layers because they may cause information loss that would degrade the learning result. Adam and the Huber loss are selected as the optimizer and loss function, respectively.
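The two-channel reshaping is a lossless re-encoding of the complex input, as this small sketch shows:

```python
import numpy as np

def to_two_channel(M):
    """Stack the real and imaginary parts of a complex matrix as two image-like channels."""
    return np.stack([M.real, M.imag], axis=0)   # shape (2, rows, cols)

M = np.array([[1 + 2j, 3 - 1j], [0 + 1j, 2 + 0j]])
T = to_two_channel(M)
assert T.shape == (2, 2, 2)
assert np.allclose(T[0] + 1j * T[1], M)         # lossless round trip
```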

III-B2 Unsupervised Learning

Even if the Huber loss of supervised learning becomes small, the resulting weighted sum-rate is not necessarily large. The intuitive reason is that supervised learning does not directly aim to maximize the weighted sum-rate, and its performance is largely limited by the training samples. On the other hand, we have a direct objective, i.e., weighted sum-rate maximization. Hence, we can use the negative weighted sum-rate as an alternative training loss, which improves the sum-rate directly:

\[
L(\theta;h)\triangleq-\sum_{k=1}^{K}\alpha_k R_k(h,o)
\tag{8}
\]
where $h$ denotes the channel input and $o$ the output of the network with parameters $\theta$.

III-B3 Supervised + Unsupervised Learning

As the loss function of unsupervised learning is complicated, involving many complex matrix operations, both the loss calculation and the corresponding gradient computation are more time-consuming than those of a traditional loss (e.g., MSE). Considering the trade-off between convergence speed and accuracy, we choose to combine supervised and unsupervised learning to train the beamforming neural network. Specifically, supervised learning is used for pre-training and unsupervised learning for further refinement. In practice, only one or two epochs of unsupervised learning are enough.

III-C Design of Input and Output

In massive MIMO systems, the number of transmit antennas $N_T$ can be very large. Hence, if we directly take $\mathbf{H}$ and $\mathbf{V}_k$ (or $\mathbf{X}_k$) as input and output as in [13, 9], both would be high-dimensional matrices, making the neural network (NN) hard to train. Consequently, the NN input and output should be redesigned to reduce their size (and thus the training complexity and difficulty). From the R-WMMSE algorithm, we observe that the beamformer $\mathbf{V}_k$ is uniquely determined by $\mathbf{X}_k$, while $\mathbf{X}_k$ depends on $\mathbf{H}\mathbf{H}^H$. Hence, $\mathbf{H}\mathbf{H}^H$ can be used as the NN input, which has reduced size compared to $\mathbf{H}$ when $N_T\gg N_R$. Moreover, since $\mathbf{X}_k$ can be determined from $\mathbf{H}\mathbf{H}^H$ and $\{\mathbf{U}_k,\mathbf{W}_k\}$, we can take $\{\mathbf{U}_k,\mathbf{W}_k\}$ as the NN output in order to reduce the output size. Tables I and II list the dimensions of the various input/output schemes; different choices clearly lead to different sizes. Note that we generally have $N_T\gg N_R$, $N_T\geq KN_R$, and $K\geq N_R,d_k$ in the massive MIMO case.

TABLE I: Dimension of different inputs
Input | Dimension
$\mathbf{H}$ | $2\times(KN_R\times N_T)$
$\mathbf{H}\mathbf{H}^H$ | $2\times(KN_R\times KN_R)$

TABLE II: Dimension of different outputs
Output | Dimension
$\mathbf{V}_k$ | $2\times(N_T\times d_k)$
$\mathbf{X}_k$ | $2\times(KN_R\times d_k)$
$\mathbf{U}_k$ and $\mathbf{W}_k$ | $2\times(N_R\times d_k+d_k\times d_k)$
Figure 2: Reconstruction of input $\mathbf{H}\mathbf{H}^H$

Furthermore, due to the conjugate symmetry of $\mathbf{H}\mathbf{H}^H$ (its real part is symmetric and its imaginary part is antisymmetric), both parts are uniquely determined by their upper or lower triangular entries. Hence, we can further reduce the input size by combining the real and imaginary parts as shown in Fig. 2. As a result, the dimension of the NN input is finally reduced to $KN_R\times KN_R$. A similar operation can be applied to $\mathbf{W}_k$, further reducing the NN output size.
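One possible packing, in the spirit of Fig. 2 (the exact layout in the figure may differ): store the real part's lower triangle (with the diagonal) and the imaginary part's strict upper triangle in a single real matrix. The round trip is lossless:

```python
import numpy as np

def pack_hermitian(G):
    """Pack a Hermitian matrix into one real matrix: lower triangle (with
    diagonal) holds the real part, strict upper triangle the imaginary part."""
    return np.tril(G.real) + np.triu(G.imag, k=1)

def unpack_hermitian(P):
    real = np.tril(P) + np.tril(P, k=-1).T           # symmetric real part
    imag = np.triu(P, k=1) - np.triu(P, k=1).T       # antisymmetric imaginary part
    return real + 1j * imag

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
G = H @ H.conj().T                                   # Hermitian input HH^H
assert np.allclose(unpack_hermitian(pack_hermitian(G)), G)
```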

III-D Architecture Design for User Priority

In practice, each user $k$ may have a different priority weight $\alpha_k$ that often changes with time. Most current methods do not take this into consideration, and the NN input or structure must be carefully redesigned when the weights are considered. According to the R-WMMSE algorithm described above, both $\mathbf{X}_k$ and $\{\mathbf{U}_k,\mathbf{W}_k\}$ depend on $\alpha_k\mathbf{H}\mathbf{H}^H$.

Two intuitive ways to merge the weights into the NN are depicted in Figures 3 and 4. One is to merge the weights into the input as $K$ additional channels (Figure 3), and the other is to concatenate the weights after the convolution and flatten layers (Figure 4). Our simulation results show that both methods achieve reasonably good performance.

Figure 3: Merge weight into input as K channels
Figure 4: Concatenate weight after conv.

However, these two methods add computational complexity to the original network, which leads to extra time and cost. Interestingly, inspired by the update rule of $\mathbf{X}_k$ in (4), we find that by simply taking $\tilde{\mathbf{H}}\tilde{\mathbf{H}}^H$ as input, where $\tilde{\mathbf{H}}_k=\sqrt{\alpha_k}\mathbf{H}_k$ and $\tilde{\mathbf{H}}\triangleq\left[\tilde{\mathbf{H}}_1^H~\tilde{\mathbf{H}}_2^H~\ldots~\tilde{\mathbf{H}}_K^H\right]^H$, we can reach the same performance as the two intuitive methods above without any increase in network complexity.
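The weighted input can be sketched as follows: each user's channel is scaled by $\sqrt{\alpha_k}$ before stacking, so the single Gram-matrix input already carries the priorities, block by block:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N_R, N_T = 3, 2, 8
alpha = rng.uniform(0.5, 1.5, K)
alpha *= K / alpha.sum()                    # priorities summing to K, as in Sec. IV-B
H_list = [rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))
          for _ in range(K)]

# Scale each user's channel by sqrt(alpha_k) and stack; the single input
# G = H~ H~^H then carries the priorities without any extra input channel.
Ht = np.vstack([np.sqrt(alpha[k]) * H_list[k] for k in range(K)])
G = Ht @ Ht.conj().T
assert G.shape == (K * N_R, K * N_R)

# Block (k, j) of G equals sqrt(alpha_k * alpha_j) * H_k H_j^H.
blk = G[:N_R, N_R:2 * N_R]
assert np.allclose(blk, np.sqrt(alpha[0] * alpha[1]) * H_list[0] @ H_list[1].conj().T)
```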

III-E Architecture Design for Varying Number of User Streams

In practical systems, sometimes only one stream is transmitted for a given user. This raises a new challenge: the number of streams $d_k\ (d_k\leq N_R)$ can vary, while the dimension of the network output must be fixed. Table III shows the number of valid output elements for different $d_k$. Thus, to keep the network output dimension fixed, certain positions should be set to zero when a user transmits a single stream.

TABLE III: Dimension of different $d_k$
$d_k$ | $\mathbf{U}_k$ | $\mathbf{W}_k$ | Num of valid elements
2 | $2\times 2$ | $2\times 2$ | 12
1 | $2\times 1$ | $1\times 1$ | 5
Figure 5: Network structure for varying number of user streams

There are a few simple and intuitive solutions to this problem. The simplest is to directly merge the information about the number of user streams into the original input $\mathbf{H}\mathbf{H}^H$. Another is to ignore the number of streams and manually zero out the output positions corresponding to empty streams, which may result in a discontinuous loss function. Both methods gave unsatisfactory performance in our experiments.

To achieve better performance, we introduce an auxiliary network called the Index Network, whose function is to softly zero out the corresponding positions given the number of streams used. Specifically, as shown in Figure 5, the Index Network is separate from the main network; it takes the number of user streams as input and outputs a soft mask with the same dimension as the main network's output. The main network's output is multiplied element-wise by the mask to produce the final output. We find that this method effectively stabilizes training and improves performance.
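The masking effect can be illustrated with a hard (0/1) mask; the Index Network learns a soft version of this. The output layout below (real+imag of $\mathbf{U}_k$ followed by the real parameters of Hermitian $\mathbf{W}_k$) is a hypothetical ordering chosen to reproduce the valid-element counts of Table III:

```python
import numpy as np

def stream_mask(d_k, N_R=2):
    """Hard version of the Index Network mask: 1 at output positions valid for
    d_k streams, 0 elsewhere. Layout (hypothetical): 2*N_R*N_R real numbers
    for U_k (real+imag), then N_R*N_R real parameters for Hermitian W_k."""
    U = np.zeros((2, N_R, N_R)); U[:, :, :d_k] = 1.0   # first d_k columns of U_k
    W = np.zeros((N_R, N_R)); W[:d_k, :d_k] = 1.0      # top-left d_k x d_k of W_k
    return np.concatenate([U.ravel(), W.ravel()])

main_out = np.ones(12)                 # hypothetical flattened main-network output
masked = main_out * stream_mask(1)     # element-wise masking, as in Fig. 5
assert stream_mask(2).sum() == 12      # matches Table III for d_k = 2
assert stream_mask(1).sum() == 5       # matches Table III for d_k = 1
```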

During the testing stage, to ensure that the network outputs a beamformer with the correct number of streams, the output elements at the corresponding positions are manually set to zero in the last layer (a Lambda layer).

During the unsupervised learning phase, the weights of the Index Network are kept fixed, and the corresponding positions are likewise set to zero before the unsupervised loss is computed.

IV Experiments

IV-A Neural Network Configuration

The main network consists of one convolutional layer with 4 kernels of size $3\times 3$, followed by a batch normalization (BN) layer and a leaky ReLU activation layer, and then a single dense layer with 32 hidden units. The Index Network is of a similar scale to this dense layer. Our network is much simpler than those in previous works [13, 9], which mostly have more than two convolutional and dense layers with many more hidden units.

IV-B Data Generation

For weighted sum-rate maximization, the channel matrix $\mathbf{H}$ is generated from the complex Gaussian distribution with pathloss between the users and the BS. The pathloss is set to $128.1+37.6\log_{10}(\omega)$ [dB] [15], where $\omega$ is the distance between the user and the BS in km ($0.1\sim 0.3$). The noise power is the same for all users and is computed as $\sigma_k^2=10^{\frac{1}{K}\sum_k\log_{10}\frac{1}{N_R}\sum_{i,j}|\mathbf{H}_{k,ij}|^2}\times 10^{-\frac{\rm SNR}{10}}$, where the signal-to-noise ratio (SNR) is set to 20 dB. The priority coefficients $\alpha_k$ of the users are generated randomly with $\sum_k\alpha_k=K$, and $d_k$ is also generated randomly for each user $k$ ($d_k=2$ indicates dual-stream and $d_k=1$ indicates single-stream transmission). In the simulation, 45000 samples are generated for training and 5000 for testing. Table IV lists the three main test cases used in the following experiments, with $N_R=2$. The last test case is of great importance in industry.

TABLE IV: Three main beamforming test cases
Case | $N_T$ | $K$
1 | 8 | 2
2 | 8 | 4
3 | 32 | 12
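The channel and noise generation described above can be sketched as follows (a minimal reading of the stated recipe; the authors' exact sampling code is not given):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N_R, N_T, SNR_dB = 4, 2, 8, 20.0

# Distances of 0.1~0.3 km and the pathloss 128.1 + 37.6 log10(w) [dB].
w = rng.uniform(0.1, 0.3, K)
pl_dB = 128.1 + 37.6 * np.log10(w)
gain = 10 ** (-pl_dB / 10)                      # linear pathloss gain per user

# Complex Gaussian channels scaled by the pathloss.
H = [np.sqrt(gain[k] / 2) * (rng.standard_normal((N_R, N_T))
     + 1j * rng.standard_normal((N_R, N_T))) for k in range(K)]

# Common noise power from the average per-user channel energy and the target SNR.
avg_log_pow = np.mean([np.log10(np.sum(np.abs(H[k]) ** 2) / N_R) for k in range(K)])
sigma2 = 10 ** avg_log_pow * 10 ** (-SNR_dB / 10)
```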

IV-C Simulation Result

To test whether the predicted precoder $\mathbf{V}$ is good enough for the weighted sum-rate maximization problem, the predicted output is substituted back into the objective function, and the performance is defined as follows.

\[
f(\mathbf{H},\mathbf{V}_k)\triangleq\sum_{k=1}^{K}\alpha_k\tilde{R}_k(\mathbf{H},\mathbf{V}_k)
\tag{9}
\]
\[
\mathrm{Performance}\triangleq\frac{f(\mathbf{H},\mathbf{V}_{predict})}{f(\mathbf{H},\mathbf{V}_{true})}
\tag{10}
\]
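The metric of (9)-(10) can be sketched directly, using the normalized model of problem (7) (unit noise power); `weighted_sum_rate` and `performance` are helper names introduced here for illustration:

```python
import numpy as np

def weighted_sum_rate(H_list, V_list, alpha):
    """f(H, {V_k}) of eq. (9) under the normalized model (unit noise power)."""
    total = 0.0
    for k, Hk in enumerate(H_list):
        N_R = Hk.shape[0]
        A = sum(Hk @ V @ V.conj().T @ Hk.conj().T
                for j, V in enumerate(V_list) if j != k) + np.eye(N_R)
        S = Hk @ V_list[k] @ V_list[k].conj().T @ Hk.conj().T
        total += alpha[k] * np.linalg.slogdet(np.eye(N_R) + S @ np.linalg.inv(A))[1].real
    return total

def performance(H_list, V_pred, V_true, alpha):
    """Ratio of eq. (10): predicted objective over the reference objective."""
    return weighted_sum_rate(H_list, V_pred, alpha) / weighted_sum_rate(H_list, V_true, alpha)

rng = np.random.default_rng(0)
H = [rng.standard_normal((2, 8)) + 1j * rng.standard_normal((2, 8)) for _ in range(2)]
V = [rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2)) for _ in range(2)]
assert np.isclose(performance(H, V, V, [1.0, 1.0]), 1.0)  # identical predictions score 1
```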
Figure 6: Average execution time (in seconds) among CMBNN, ZF and WMMSE
Figure 7: Average performance among CMBNN, ZF and WMMSE

Figures 6 and 7 show the average execution time and the average performance compared with the WMMSE and ZF algorithms, respectively. It can be observed that 1) the proposed method achieves performance similar to the WMMSE algorithm in most cases, while significantly outperforming the ZF algorithm (which cannot handle user priorities); and 2) the proposed method requires less execution time (at the testing stage) than both WMMSE, which involves many matrix inversions, and ZF, which needs extra time to decide which beamforming vector should be used for single-stream transmission.

In summary, our proposed CMBNN model is superior in terms of both performance and efficiency.

V Conclusion

In this paper, we have proposed convolutional massive beamforming neural networks (CMBNN) with low complexity. Specifically, we have designed the neural network according to the structure of the optimization problem, so as to handle complex scenarios with time-varying user priorities and a varying number of user streams. Compared with existing methods in the literature, the proposed framework achieves better performance and higher efficiency.

References

  • [1] O. Simeone, “A very brief introduction to machine learning with applications to communication systems,” IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 4, pp. 648–664, 2018.
  • [2] H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, “Learning to optimize: Training deep neural networks for interference management,” IEEE Transactions on Signal Processing, vol. 66, no. 20, pp. 5438–5453, 2018.
  • [3] C. Zhang, P. Patras, and H. Haddadi, “Deep learning in mobile and wireless networking: A survey,” IEEE Communications Surveys & Tutorials, 2019.
  • [4] L. Lu, G. Y. Li, A. L. Swindlehurst, A. Ashikhmin, and R. Zhang, “An overview of massive MIMO: Benefits and challenges,” IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. 5, pp. 742–758, 2014.
  • [5] E. G. Larsson, O. Edfors, F. Tufvesson, and T. L. Marzetta, “Massive MIMO for next generation wireless systems,” IEEE Communications Magazine, vol. 52, no. 2, pp. 186–195, 2014.
  • [6] Q. Shi, M. Razaviyayn, Z.-Q. Luo, and C. He, “An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel,” IEEE Transactions on Signal Processing, vol. 59, no. 9, pp. 4331–4340, 2011.
  • [7] F. W. Vook, A. Ghosh, and T. A. Thomas, “MIMO and beamforming solutions for 5G technology,” in 2014 IEEE MTT-S International Microwave Symposium (IMS2014). IEEE, 2014, pp. 1–4.
  • [8] T. Yoo and A. Goldsmith, “On the optimality of multiantenna broadcast scheduling using zero-forcing beamforming,” IEEE Journal on Selected Areas in Communications, vol. 24, no. 3, pp. 528–541, 2006.
  • [9] H. Huang, W. Xia, J. Xiong, J. Yang, G. Zheng, and X. Zhu, “Unsupervised learning-based fast beamforming design for downlink MIMO,” IEEE Access, vol. 7, pp. 7599–7605, 2018.
  • [10] F. Liang, C. Shen, W. Yu, and F. Wu, “Towards optimal power control via ensembling deep neural networks,” IEEE Transactions on Communications, vol. 68, no. 3, pp. 1760–1776, 2020.
  • [11] H. Huang, Y. Song, J. Yang, G. Gui, and F. Adachi, “Deep-learning-based millimeter-wave massive MIMO for hybrid precoding,” IEEE Transactions on Vehicular Technology, vol. 68, no. 3, pp. 3027–3032, 2019.
  • [12] W. Lee, M. Kim, and D.-H. Cho, “Deep power control: Transmit power control scheme based on convolutional neural network,” IEEE Communications Letters, vol. 22, no. 6, pp. 1276–1279, 2018.
  • [13] W. Xia, G. Zheng, Y. Zhu, J. Zhang, J. Wang, and A. P. Petropulu, “A deep learning framework for optimization of MISO downlink beamforming,” IEEE Transactions on Communications, vol. 68, no. 3, pp. 1866–1880, 2020.
  • [14] G. Caire and S. Shamai, “On the achievable throughput of a multiantenna Gaussian broadcast channel,” IEEE Transactions on Information Theory, vol. 49, no. 7, pp. 1691–1706, 2003.
  • [15] H. Dahrouj and W. Yu, “Coordinated beamforming for the multicell multi-antenna wireless system,” IEEE Transactions on Wireless Communications, vol. 9, no. 5, pp. 1748–1759, 2010.