Through-the-Wall Radar under Electromagnetic Complex Wall: A Deep Learning Approach
Abstract
This paper employs deep learning to perform two-dimensional, multi-target locating in Through-the-Wall Radar under conditions where the wall is treated as a complex electromagnetic medium. We make five assumptions about the wall and two about the number of targets. Two target modes are considered: single target and double targets. The wall scenarios include a homogeneous wall, a wall with an air gap, an inhomogeneous wall, an anisotropic wall, and an inhomogeneous-anisotropic wall. Target locating is accomplished with a deep neural network. We constructed a dataset using the Python FDTD module and then modeled it using deep learning. Treating the wall as a complex electromagnetic medium, we achieved 97.7% accuracy for single-target 2D locating and 94.1% accuracy for two-target locating. Additionally, we observed a loss of accuracy when noise was added at low SNRs, although this degradation became small at high SNRs.
Index Terms:
Through-the-Wall Radar, Complex Electromagnetic Media, Deep Learning, Machine Learning.
I Introduction
Recently, Through-the-Wall Radar Imaging (TWRI) has grown in prominence as a field of study. Through-the-Wall Radar (TWR) can be used to locate, identify, classify, and track humans and moving objects beyond a wall [1, 2]. Due to the interference present in cluttered, multidimensional environments, it is difficult to identify and classify targets hidden behind walls and inside structures. Electromagnetic waves are used to accomplish this, but advancing this multifaceted technology requires a thorough understanding of how EM waves interact with internal and external objects. Non-destructive TWRI technology is effective at detecting and locating hidden targets, and a great deal of research has been conducted in this field. When the wall parameters are known, accurate target identification is possible [3, 4]. Another technique for determining the location of a human is to use Doppler processing [5, 6]. Certain techniques must be investigated to compensate for wall effects and obtain precise positioning and high-quality, focused images. As a result, wall parameters such as permittivity, thickness, and conductivity are critical for accurate target locating and imaging.

There are two types of methods for estimating wall parameters: conventional methods and machine-learning-based methods. The time-delay approach [7], filter-based methods [8], and the M-sequence sensor with a continuous basis estimator [9] are examples of conventional methods. Additionally, machine learning algorithms have been used to estimate wall parameters [10, 11]. Another approach for locating targets behind a wall is to employ techniques that do not require knowledge of the wall's parameters. Several such algorithms have been used, including inverse linear scattering algorithms based on the first-order Born approximation [12], auto-focusing strategies based on the spectral Green's function [13], auto-focusing techniques based on higher-order statistics [14], and the trajectory intersection method for estimating the target position [15].

Conventional machine learning approaches have recently been used for target detection and classification in TWR. Bufler et al. classified indoor targets using machine learning techniques [16]; in that study, a support vector machine (SVM) is used to distinguish humans from other indoor targets. Researchers have also proposed a kernel extreme learning machine technique for locating targets through walls with unknown parameters [17]; Zhang et al. developed a two-dimensional positioning method based on the assumption of a homogeneous wall and a circular metal cylinder target. Additionally, in [18], an extreme learning machine is used to propose a 3D locating technique for a homogeneous single-layer wall and a spherical metallic target. Wood et al. [19] utilized a machine learning approach to accomplish three goals, one of which is predicting an object's location; that work performs two-dimensional locating of a circular target using the K-Nearest Neighbors (KNN) algorithm, assuming a homogeneous non-magnetic wall. Finally, [20] presents a deep learning-based method to estimate wall parameters and object parameters simultaneously when the wall is homogeneous.
Machine learning is the study of designing machines that learn from provided examples and from their own experience. Indeed, machine learning is an attempt to build a machine capable of learning and functioning without every individual action being explicitly planned and dictated. Rather than programming each action, machine learning feeds data to a general algorithm, which then constructs its own logic based on that data [21]. As machine learning continues to develop and demonstrate its potential across a wide variety of problems, it is increasingly being applied to electromagnetic problems [22, 23, 24, 25]. Most machine learning algorithms are capable of detecting and classifying hidden patterns in signals [26]. This property makes machine learning an excellent tool in the field of radar, and suitable applications can be found by applying machine learning algorithms to radar problems [27, 28].

The purpose of this paper is to determine the two-dimensional location of targets. We employ multiple targets rather than a single one. Additionally, we use a complex electromagnetic model to create a more accurate representation of the wall, making it more difficult to locate targets. Rather than using standard machine learning techniques, we employ deep learning. We consider five possible scenarios for the wall: homogeneous wall, airgap wall, inhomogeneous wall, anisotropic wall, and inhomogeneous-anisotropic wall. Following that, we conduct multi-target localization.
II METHODOLOGIES
II-A Complex Electromagnetic Media
In this paper, we considered five distinct wall models: a homogeneous wall, a wall with an air gap, an inhomogeneous wall, an anisotropic wall, and an inhomogeneous-anisotropic wall. To approximate an inhomogeneous wall, we use multiple homogeneous layers, each with a specified relative permittivity $\varepsilon_r$. When these homogeneous layers are stacked together, an inhomogeneous wall is obtained, as expressed in Equation (1):
$$\varepsilon_r(y) \approx \varepsilon_{r,n}, \qquad y_{n-1} \le y < y_n, \quad n = 1, 2, \dots, 10 \qquad (1)$$
If the wall is a perfect nonmagnetic dielectric with only one direction of inhomogeneity, the permittivity is characterized by $\varepsilon_r = \varepsilon_r(y)$, $\mu_r = 1$, and $\sigma = 0$. The inhomogeneous wall is approximated by a series of homogeneous layers adjacent to one another; for example, a ten-centimeter wall is approximated by ten one-centimeter layers with varying permittivity values. The wall view is depicted in Figure 2.
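As a concrete illustration, a minimal Python sketch of such a piecewise-constant profile is given below; the layer permittivities and the wall position are hypothetical example values, not the exact values used in the paper.

```python
import numpy as np

# Hypothetical per-layer permittivities; the paper's actual values are not reproduced here.
layer_eps = [6.0, 5.0, 6.0, 4.0, 3.0, 2.0, 5.0, 2.0, 4.0, 3.0]  # one value per 1 cm layer

dy = 0.01                 # 1 cm grid spacing, matching the 0.01 m FDTD mesh
wall_start = 1.20         # assumed position of the wall's front face (metres)
wall_thickness = 0.10     # 10 cm wall

def eps_r(y):
    """Piecewise-constant relative permittivity along the through-wall direction."""
    if not (wall_start <= y < wall_start + wall_thickness):
        return 1.0                                  # free space outside the wall
    layer_index = int((y - wall_start) / dy)        # which 1 cm layer y falls into
    return layer_eps[min(layer_index, len(layer_eps) - 1)]

# Sample the profile on the simulation grid.
y_grid = np.arange(0.0, 2.0, dy)
profile = np.array([eps_r(y) for y in y_grid])
```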

In isotropic media, the direction of the electric or magnetic polarization is determined directly by the direction of the external electric or magnetic field. As a result, the permittivity and permeability of such media, $\varepsilon$ and $\mu$, can be expressed as scalar coefficients. On the other hand, there are many materials whose electric or magnetic polarization is not aligned with the applied electric or magnetic field. The permittivity and permeability of these media, referred to as anisotropic media, must therefore be expressed as matrices or tensors. In mathematics, anisotropy is defined as the lack of symmetry with respect to a set of spatial rotation transformations; an anisotropic structure appears different when viewed along different axes. In electromagnetics, an anisotropic structure is one that has at least one non-scalar constitutive parameter. In other words, in anisotropic media the $\mathbf{D}$ field is not parallel to $\mathbf{E}$, or the $\mathbf{B}$ field is not parallel to $\mathbf{H}$, and the constitutive relations take the form of Equation (2):
$$\mathbf{D} = \varepsilon_0\,\bar{\bar{\varepsilon}}_r\,\mathbf{E}, \qquad \mathbf{B} = \mu_0\,\bar{\bar{\mu}}_r\,\mathbf{H} \qquad (2)$$

where:

$$\bar{\bar{\varepsilon}}_r = \begin{bmatrix} \varepsilon_{xx} & \varepsilon_{xy} & \varepsilon_{xz} \\ \varepsilon_{yx} & \varepsilon_{yy} & \varepsilon_{yz} \\ \varepsilon_{zx} & \varepsilon_{zy} & \varepsilon_{zz} \end{bmatrix} \qquad (3)$$

$$\bar{\bar{\mu}}_r = \begin{bmatrix} \mu_{xx} & \mu_{xy} & \mu_{xz} \\ \mu_{yx} & \mu_{yy} & \mu_{yz} \\ \mu_{zx} & \mu_{zy} & \mu_{zz} \end{bmatrix} \qquad (4)$$
We utilize a uni-axial anisotropic non-magnetic perfect dielectric wall in this work, where the off-diagonal elements of the permittivity tensor are zero and only the main diagonal takes the values $\varepsilon_x$, $\varepsilon_y$, and $\varepsilon_z$; the permittivity of the wall is therefore given by the following tensor:
$$\bar{\bar{\varepsilon}}_r = \begin{bmatrix} \varepsilon_{x} & 0 & 0 \\ 0 & \varepsilon_{y} & 0 \\ 0 & 0 & \varepsilon_{z} \end{bmatrix} \qquad (5)$$
Another model for the wall is inhomogeneous-anisotropic. We represent the wall as multiple homogeneous layers, with each layer having a distinct tensor for inhomogeneity and anisotropy. Indeed, we have a wall composed of many homogeneous-anisotropic layers stacked on top of one another to form an inhomogeneous-anisotropic wall. The following is the problem’s assumed permittivity:
$$\bar{\bar{\varepsilon}}_r^{(n)} = \begin{bmatrix} \varepsilon_{x}^{(n)} & 0 & 0 \\ 0 & \varepsilon_{y}^{(n)} & 0 \\ 0 & 0 & \varepsilon_{z}^{(n)} \end{bmatrix}, \qquad n = 1, 2, \dots, 10 \qquad (6)$$
For the sake of simplicity, each layer reuses the diagonal form of Equation (5). We assumed the wall is ten centimeters thick and approximated it with ten distinct layers, whose permittivity values $[\varepsilon_x, \varepsilon_y, \varepsilon_z]$ are listed in Table I; a sketch of how these layer tensors can be assembled is given after the table. The overview of the inhomogeneous-anisotropic wall is shown in Figure 3.
layer number | $[\varepsilon_x, \varepsilon_y, \varepsilon_z]$ |
---|---|
1 | [6,3,2] |
2 | [5,5,2] |
3 | [6,4,2] |
4 | [4,6,2] |
5 | [3,4,2] |
6 | [2,3,2] |
7 | [5,2,2] |
8 | [2,4,2] |
9 | [4,3,2] |
10 | [3,5,2] |
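A minimal NumPy sketch, under the assumption that each layer's tensor is simply the diagonal matrix of Equation (6) built from the Table I triples, could look like this:

```python
import numpy as np

# Per-layer [eps_x, eps_y, eps_z] values taken from Table I (layers 1..10).
table_i = [
    [6, 3, 2], [5, 5, 2], [6, 4, 2], [4, 6, 2], [3, 4, 2],
    [2, 3, 2], [5, 2, 2], [2, 4, 2], [4, 3, 2], [3, 5, 2],
]

# One 3x3 diagonal permittivity tensor per 1 cm layer, as in Equation (6).
layer_tensors = [np.diag(triple).astype(float) for triple in table_i]

# Example: the tensor of the first layer.
print(layer_tensors[0])
# [[6. 0. 0.]
#  [0. 3. 0.]
#  [0. 0. 2.]]
```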

We modeled the wall with an air gap using three layers: two homogeneous wall layers on either side of a central air layer, which is a practical example of an electromagnetically complex wall.
II-B Deep Learning
Artificial neural networks, inspired by the neural cells of the brain, have seen rapid development and widespread application over the last two decades in areas including optimization, artificial intelligence, and many others. By modeling the structure of a neural cell and the neural network of the brain, an artificial neuron and neural network can be constructed. An overview of an artificial neuron and a neural cell is depicted in Figure 4. The inputs (input neurons) are $x_1, x_2, \dots, x_n$. In a neural network, each input $x_i$ has an associated weight $w_i$, and each input is multiplied by its weight. The sum function (sigma) then adds the products of the inputs and weights, and an activation function calculates the output value from this sum. If $f$ represents the activation function and $b$ represents a bias value, the output of a neuron can be written as follows:

$$y = f\left(\sum_{i=1}^{n} w_i x_i + b\right) \qquad (7)$$
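As a small illustration, a single neuron of Equation (7) can be evaluated in a few lines of NumPy; the weights, bias, and inputs below are arbitrary example values.

```python
import numpy as np

def relu(z):
    """ReLU activation, as in Equation (8)."""
    return np.maximum(0.0, z)

x = np.array([0.5, -1.2, 3.0])   # example inputs x_1..x_n
w = np.array([0.8, 0.1, -0.4])   # example weights w_1..w_n
b = 0.2                          # example bias

y = relu(np.dot(w, x) + b)       # Equation (7) with f = ReLU
print(y)
```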
A neural network is composed of multiple layers of neurons. In general, a neural network consists of three types of layers: input, hidden, and output. As the number of layers and the number of neurons in each hidden layer increase, the model becomes more complex. With many hidden layers and neurons, the network becomes a deep neural network, and the learning process is referred to as deep learning. The ReLU and linear activation functions, given in Equations (8) and (9), were used to design the deep neural network.
$$f(x) = \max(0, x) \qquad (8)$$

$$f(x) = x \qquad (9)$$
II-C FDTD
The finite-difference time-domain (FDTD) method is one technique for solving Maxwell's equations. Faraday's and Ampere's laws can be expressed as the following equations:
$$\nabla \times \mathbf{E} = -\mu_0 \mu_r \frac{\partial \mathbf{H}}{\partial t}, \qquad \text{(Faraday's Law)} \qquad (10)$$
$$\nabla \times \mathbf{H} = \varepsilon_0 \varepsilon_r \frac{\partial \mathbf{E}}{\partial t}, \qquad \text{(Ampere's Law)}$$
$\varepsilon_r$ and $\mu_r$ denote the relative permittivity and permeability, respectively. They can be scalar, tensorial, or location dependent, as explained previously. We employ TM$^z$ polarization in this work in order to rewrite Ampere's and Faraday's laws as follows [29]:
$$-\mu \frac{\partial \mathbf{H}}{\partial t} = \nabla \times \mathbf{E} =
\begin{vmatrix}
\hat{a}_x & \hat{a}_y & \hat{a}_z \\
\frac{\partial}{\partial x} & \frac{\partial}{\partial y} & 0 \\
0 & 0 & E_z
\end{vmatrix} \qquad (11)$$

$$\varepsilon \frac{\partial \mathbf{E}}{\partial t} = \nabla \times \mathbf{H} =
\begin{vmatrix}
\hat{a}_x & \hat{a}_y & \hat{a}_z \\
\frac{\partial}{\partial x} & \frac{\partial}{\partial y} & 0 \\
H_x & H_y & 0
\end{vmatrix} \qquad (12)$$
The scalar equations for $H_x$, $H_y$, and $E_z$ are obtained from (11) and (12):
$$\mu \frac{\partial H_x}{\partial t} = -\frac{\partial E_z}{\partial y} \qquad (13)$$

$$\mu \frac{\partial H_y}{\partial t} = \frac{\partial E_z}{\partial x} \qquad (14)$$

$$\varepsilon \frac{\partial E_z}{\partial t} = \frac{\partial H_y}{\partial x} - \frac{\partial H_x}{\partial y} \qquad (15)$$
Equations (13)–(15) can be expressed in finite-difference form by discretizing space and time, so that future fields can be written in terms of past fields. Assume that the indices $m$ and $n$ denote the spatial steps in the $x$ and $y$ directions, and $q$ denotes the temporal step. Additionally, the spatial step sizes are $\Delta_x$ and $\Delta_y$ in the $x$ and $y$ directions, respectively. The finite-difference approximation of (13) is expanded about the space-time point $(m, n+1/2, q)$. The resulting equation is:
$$\mu \frac{H_x^{q+\frac{1}{2}}[m, n+\frac{1}{2}] - H_x^{q-\frac{1}{2}}[m, n+\frac{1}{2}]}{\Delta_t} = -\frac{E_z^{q}[m, n+1] - E_z^{q}[m, n]}{\Delta_y} \qquad (16)$$
Solving for the future value in terms of past values, the equation can be rewritten as follows:
$$H_x^{q+\frac{1}{2}}[m, n+\tfrac{1}{2}] = H_x^{q-\frac{1}{2}}[m, n+\tfrac{1}{2}] - \frac{\Delta_t}{\mu \Delta_y}\left(E_z^{q}[m, n+1] - E_z^{q}[m, n]\right) \qquad (17)$$
Similarly, Equations (14) and (15) can be expanded about the space-time points $(m+1/2, n, q)$ and $(m, n, q+1/2)$, respectively, yielding:
$$H_y^{q+\frac{1}{2}}[m+\tfrac{1}{2}, n] = H_y^{q-\frac{1}{2}}[m+\tfrac{1}{2}, n] + \frac{\Delta_t}{\mu \Delta_x}\left(E_z^{q}[m+1, n] - E_z^{q}[m, n]\right) \qquad (18)$$

$$E_z^{q+1}[m, n] = E_z^{q}[m, n] + \frac{\Delta_t}{\varepsilon}\left(\frac{H_y^{q+\frac{1}{2}}[m+\tfrac{1}{2}, n] - H_y^{q+\frac{1}{2}}[m-\tfrac{1}{2}, n]}{\Delta_x} - \frac{H_x^{q+\frac{1}{2}}[m, n+\tfrac{1}{2}] - H_x^{q+\frac{1}{2}}[m, n-\tfrac{1}{2}]}{\Delta_y}\right) \qquad (19)$$
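A bare-bones sketch of the leapfrog structure of update equations (17)–(19) on a uniform grid is shown below; it assumes free space with no source or PML and is intended only to illustrate the update pattern, not to reproduce the simulation used in the paper.

```python
import numpy as np

nx, ny = 200, 200          # grid cells, matching the 2 m x 2 m / 0.01 m mesh
dx = dy = 0.01             # spatial steps (m)
c0 = 3e8
dt = dx / (2 * c0)         # time step satisfying the 2D Courant limit
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

Ez = np.zeros((nx, ny))
Hx = np.zeros((nx, ny - 1))
Hy = np.zeros((nx - 1, ny))

for q in range(1000):
    # Equation (17): update Hx from the y-difference of Ez
    Hx -= (dt / (mu0 * dy)) * (Ez[:, 1:] - Ez[:, :-1])
    # Equation (18): update Hy from the x-difference of Ez
    Hy += (dt / (mu0 * dx)) * (Ez[1:, :] - Ez[:-1, :])
    # Equation (19): update the interior Ez nodes from the curl of H
    Ez[1:-1, 1:-1] += (dt / eps0) * (
        (Hy[1:, 1:-1] - Hy[:-1, 1:-1]) / dx
        - (Hx[1:-1, 1:] - Hx[1:-1, :-1]) / dy
    )
```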
In the simulation, we consider a two-dimensional environment in the X–Y plane with dimensions of 2 x 2 meters, surrounded by perfectly matched layers. The spatial step size is set to one tenth of a wavelength in both the X and Y directions. Given the assumed frequency of 3 GHz, the wavelength is 0.1 m, so the mesh size is 0.01 m. For a 2 x 2 meter space, the number of meshes in each of the X and Y directions is therefore 200.
III Data Gathering
In a two-dimensional through-the-wall radar problem, all materials are invariant in the z-direction. We performed the simulations using the FDTD technique, implemented with the FDTD library in Python [30]. We consider square targets with dimensions of 10, 20, and 30 cm and a permittivity comparable to that of water, allowing the targets to serve as a rough analog of the human body. Additionally, we considered a wall thickness of 10 cm; in the case of the wall with an air gap, we used a total thickness of 15 cm to account for the 5 cm layer of air inside. We generated a plane wave with a frequency of 3 GHz using the FDTD library's line source; $E_z$, $H_x$, and $H_y$ are non-zero for TM$^z$ polarization. When the source's wave reaches the wall, part of it is reflected, while the remainder passes through, reaches the target, and scatters. The detector collects the scattered wave, and the recorded fields are used to construct the required dataset. We generate the dataset by moving the target or targets behind the wall within the coordinates X = [5, 85] and Y = [40, 100]. When there is only one target, we move it within these coordinates with a step of 10 cm in both the x and y directions, and for each position we record the coordinates of the target center together with the scattered fields in the database. When two targets exist, we proceed in the same manner, except that duplicate states are eliminated. The target location is the two-dimensional coordinate of the target's center. The constructed database thus stores the two-dimensional coordinates of the targets alongside the corresponding scattered field vectors. Figure 5 shows an overview of the problem, as well as the electric and magnetic fields associated with the homogeneous wall with two targets.
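Data generation with the Python fdtd library [30] can be sketched roughly as follows; the geometry indices, permittivity values, run length, and field readout below are illustrative assumptions rather than the authors' exact script.

```python
import numpy as np
import fdtd

fdtd.set_backend("numpy")

WAVELENGTH = 0.1                                            # 3 GHz -> 0.1 m
grid = fdtd.Grid(shape=(200, 200, 1), grid_spacing=0.01)    # 2 m x 2 m, 1 cm cells

# Absorbing boundaries on all four sides (perfectly matched layers).
grid[0:10, :, :] = fdtd.PML(name="pml_xlow")
grid[-10:, :, :] = fdtd.PML(name="pml_xhigh")
grid[:, 0:10, :] = fdtd.PML(name="pml_ylow")
grid[:, -10:, :] = fdtd.PML(name="pml_yhigh")

# A 10 cm homogeneous wall slab (illustrative permittivity value).
grid[15:185, 120:130, 0] = fdtd.Object(permittivity=4.0, name="wall")

# A 20 cm square target behind the wall (water-like permittivity, illustrative value).
grid[80:100, 60:80, 0] = fdtd.Object(permittivity=80.0, name="target")

# 3 GHz line source in front of the wall and a line detector collecting the return.
grid[30:170, 150, 0] = fdtd.LineSource(period=WAVELENGTH / 3e8, name="source")
grid[30:170, 160, 0] = fdtd.LineDetector(name="detector")

grid.run(total_time=500)

# The detector records E and H over time; exact attribute names may vary by version.
E = np.array(grid.detectors[0].E)
H = np.array(grid.detectors[0].H)
```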




We generate 63 data points for each wall scenario in the single-target mode and 756 data points in the two-target mode (this is not an exhaustive enumeration of all possible configurations). Since we have five different wall scenarios, the size of the dataset is multiplied by five; thus, when all data are provided to the model, we have 315 data points for the single-target mode and 3780 data points for the two-target mode. In each case, we split the data into training, test, and validation sets.
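The single-target count follows from the 10 cm grid of candidate positions: 9 x-positions by 7 y-positions gives 63 samples per wall scenario. A small sketch of that enumeration (positions in cm):

```python
from itertools import combinations

# Candidate target-centre positions in cm, stepped every 10 cm as described above.
xs = range(5, 86, 10)     # 5, 15, ..., 85  -> 9 values
ys = range(40, 101, 10)   # 40, 50, ..., 100 -> 7 values

single_positions = [(x, y) for x in xs for y in ys]
print(len(single_positions))            # 63 single-target samples per wall scenario

# For two targets, pairs of distinct positions could be drawn and duplicates removed;
# the paper keeps 756 of these pairings per wall scenario.
pair_candidates = list(combinations(single_positions, 2))
print(len(pair_candidates))             # 1953 possible pairs before sub-sampling
```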
IV NUMERICAL AND EXPERIMENTAL RESULTS
IV-A Without Noise
We used a Deep Neural Network (DNN) in this work to estimate the two-dimensional position of targets hidden behind a wall. The DNN is implemented in Python, and the model is constructed using the TensorFlow and Keras [31] frameworks. We used dense and dropout layers in the proposed deep neural network model. A fully connected (dense) layer connects to all of the neurons of the previous layer; its primary function is to combine the local features of the lower layer into the features of the upper layer. In a dropout layer, a random subset of neurons is ignored during each training step. Dropout is a regularization technique that discourages the network from relying on particular neurons, speeds up learning, and reduces the risk of overfitting. Dropout is applied to the connections between layers during training, but not during evaluation and testing. We propose a distinct deep learning model for each of the single-target and two-target locating modes that performs well regardless of the assumed wall model. Additionally, we investigated two distinct conditions: one without noise, which corresponds to an infinite SNR, and one with noise at various SNRs.
We employed a 9-layer deep neural network to locate a single target. In this case, the DNN input size is 285, equal to the combined length of the $E_z$, $H_x$, and $H_y$ field vectors, each of length 95. The DNN output has a size of 2, estimating the two numbers that represent the target's x and y coordinates. We chose a learning rate of 0.0001 and a batch size of 30. Additionally, we used the Adam optimizer and the Mean Squared Logarithmic Error (MSLE) loss function, defined as:
$$\text{MSLE} = \frac{1}{N}\sum_{i=1}^{N}\left(\log(y_i + 1) - \log(\hat{y}_i + 1)\right)^2 \qquad (20)$$
$y$ is the actual value, $\hat{y}$ is the estimated value, and $N$ is the total number of data points. MSLE is thus the mean of the squared differences between the actual and estimated values after a log transformation. The details of the designed DNN for this mode are summarized in Table II, and a sketch of such a model in code is given after the table.
layer number | layer | output shape | number of parameters | activation function |
---|---|---|---|---|
1 | dense_1 (Dense) | (,284) | 80940 | relu |
2 | dropout_1 (Dropout) | (,284) | 0 | - |
3 | dense_2 (Dense) | (,300) | 85500 | relu |
4 | dropout_2 (Dropout) | (,300) | 0 | - |
5 | dense_3 (Dense) | (,300) | 90300 | relu |
6 | dropout_3 (Dropout) | (,300) | 0 | - |
7 | dense_4 (Dense) | (,300) | 90300 | relu |
8 | dropout_4 (Dropout) | (,300) | 0 | - |
9 | dense_5 (Dense) | (,2) | 602 | linear |
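A minimal Keras sketch of a model with the layer widths of Table II is given below; the dropout rate is an assumed value, since it is not reported in the paper, and the accuracy metric mirrors the quantity reported in the results.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_single_target_model(input_dim=285, dropout_rate=0.2):
    """Dense/dropout stack following the layer sizes of Table II.
    The dropout rate is an assumption; the paper does not report it."""
    model = models.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(284, activation="relu"),
        layers.Dropout(dropout_rate),
        layers.Dense(300, activation="relu"),
        layers.Dropout(dropout_rate),
        layers.Dense(300, activation="relu"),
        layers.Dropout(dropout_rate),
        layers.Dense(300, activation="relu"),
        layers.Dropout(dropout_rate),
        layers.Dense(2, activation="linear"),   # (x, y) of the target centre
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss=tf.keras.losses.MeanSquaredLogarithmicError(),
        metrics=["accuracy"],
    )
    return model

# model = build_single_target_model()
# model.fit(X_train, y_train, batch_size=30, epochs=5000, validation_data=(X_val, y_val))
```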
The arrangement of dense and dropout layers in the presented model was chosen through trial and error to achieve high accuracy and low error with the smallest model possible. The accuracy and error graphs for single-target locating over 5000 epochs are shown in Figures 6(a) and 6(b). Additionally, we used a 12-layer deep neural network to locate two targets. The accuracy and error graphs for two targets over 1000 epochs are shown in Figures 6(c), 6(d), 6(e), and 6(f). In these two modes, we used the Mean Squared Logarithmic Error (MSLE) loss function and the Adam optimizer, with the learning rate set to 0.001 and the batch size set to 30. The details of the designed networks are summarized in Tables III and IV.
layer number | layer | output shape | number of parameters | activation function |
---|---|---|---|---|
1 | dense_1 (Dense) | (,285) | 81510 | relu |
2 | dropout_1 (Dropout) | (,285) | 0 | - |
3 | dense_2 (Dense) | (,300) | 85800 | relu |
4 | dropout_3 (Dropout) | (,300) | 0 | - |
5 | dense_3 (Dense) | (,300) | 90300 | relu |
6 | dropout_4 (Dropout) | (,300) | 0 | - |
7 | dense_4 (Dense) | (,300) | 90300 | relu |
8 | dropout_5 (Dropout) | (,300) | 0 | - |
9 | dense_5 (Dense) | (,300) | 90300 | relu |
10 | dropout_6 (Dropout) | (,300) | 0 | - |
11 | dense_6 (Dense) | (,300) | 90300 | relu |
12 | dense_7 (Dense) | (,4) | 1204 | linear |
Because the received signal does not change, the DNN input in the two-target mode is identical to that of the single-target mode. However, in two-target mode we must estimate the two-dimensional coordinates of both targets, so the network output has four neurons. Comparing the trainable parameters of the single- and two-target networks, as shown in Tables II, III, and IV, the single-target network has 347,642 parameters and the two-target network has 529,714, indicating that more targets require more parameters. This denser network is obtained by increasing the number of layers or the number of neurons within the layers.
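These parameter totals can be checked directly from the table entries, since a dense layer with $n_{in}$ inputs and $n_{out}$ outputs has $n_{in} n_{out} + n_{out}$ trainable parameters:

```python
def dense_params(n_in, n_out):
    # weights plus one bias per output neuron
    return n_in * n_out + n_out

# Layer widths implied by the parameter counts of Tables II and III;
# dropout layers contribute no trainable parameters.
single = [284, 284, 300, 300, 300, 2]            # input width followed by dense widths
double = [285, 285, 300, 300, 300, 300, 300, 4]

total_single = sum(dense_params(a, b) for a, b in zip(single[:-1], single[1:]))
total_double = sum(dense_params(a, b) for a, b in zip(double[:-1], double[1:]))
print(total_single, total_double)   # 347642 529714
```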
The accuracy, validation accuracy, loss, and validation loss values for the five wall models, as well as for a model trained on all data, in single- and double-target locating are shown in Table V. The proposed deep learning models have been tuned so that their accuracy and error are largely insensitive to the choice of wall model. As can be seen, the problem becomes more complicated as the number of targets increases, and accuracy decreases even though the network becomes denser.




 | | homogeneous | airgap | inhomogeneous | anisotropic | inhomogeneous-anisotropic | all data |
---|---|---|---|---|---|---|---|
Single Target | Accuracy | 97% | 96.9% | 96.9% | 94% | 96% | 97.7% |
 | Loss | 0.001 | 0.009 | 0.007 | 0.007 | 0.008 | 0.005 |
 | Validation Accuracy | 93% | 100% | 100% | 93.3% | 100% | 97.3% |
 | Validation Loss | 0.09 | 0.07 | 0.01 | 0.01 | 0.008 | 0.002 |
Two Targets | Accuracy | 95.6% | 94.7% | 96% | 95% | 96.2% | 94.1% |
 | Loss | 0.008 | 0.009 | 0.01 | 0.007 | 0.006 | 0.01 |
 | Validation Accuracy | 96.5% | 95% | 95.1% | 94.1% | 96.5% | 94.5% |
 | Validation Loss | 0.01 | 0.01 | 0.02 | 0.01 | 0.01 | 0.01 |
IV-B With Noise
Adding noise to the received signal brings the scenario closer to reality. In this section, we add noise to the received signal while still treating the wall as a complex electromagnetic medium, which increases the difficulty of the task. To evaluate the model's performance in the presence of noise, we introduced Additive White Gaussian Noise (AWGN) at varying SNR values. The accuracy for the validation dataset in the all-data mode is shown in Figure 7.
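A common way to add AWGN at a prescribed SNR is sketched below; this illustrates the general technique only, as the paper does not give its exact noise-injection code.

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Return the signal corrupted by white Gaussian noise at the given SNR (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: corrupt a recorded field vector at 10 dB SNR.
clean = np.sin(np.linspace(0, 20 * np.pi, 285))
noisy = add_awgn(clean, snr_db=10)
```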

We made no changes to the structure of the neural network models for the single-target and two-target modes in this scenario. As illustrated in Figure 8, accuracy increases with SNR, but comparing the single-target and two-target curves reveals that single-target mode is less accurate at low SNRs and more accurate at high SNRs. One explanation is that the single-target mode has a smaller dataset than the two-target mode, so its network tends to overfit after a number of epochs; we therefore use early stopping, terminating training before the network overfits. Because of this lack of data, single-target mode has lower accuracy at low SNRs than the two-target mode; however, as the SNR increases, the network overcomes this limitation and achieves greater precision than the two-target mode.
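Early stopping can be expressed with the standard Keras callback; the monitored quantity and patience below are assumptions, as the paper does not list them.

```python
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # stop when validation loss stops improving
    patience=50,                 # assumed patience; not reported in the paper
    restore_best_weights=True,   # keep the best weights seen during training
)

# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=5000, batch_size=30, callbacks=[early_stop])
```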
V CONCLUSIONS
We performed TWR multi-target two-dimensional locating in this paper using several models that approach a realistic wall. We considered five distinct wall models: homogeneous, with an air gap, inhomogeneous, anisotropic, and inhomogeneous-anisotropic. Additionally, we evaluated three distinct target sizes (10, 20, and 30 cm) and two target counts (single and double). We used two deep learning models, one for each mode (single and double targets); each model can locate targets of the three sizes mentioned above under the various wall conditions. Assuming infinite SNR (i.e., no noise), we achieved 97.7% accuracy on all data for single-target two-dimensional locating, which includes all assumed wall models and all targets in three different sizes. By introducing noise, we evaluated the models' performance at various SNR levels: low SNRs resulted in a noticeable reduction in accuracy, while at high SNRs the reduction was small.
References
- [1] A. Kumar, Z. Li, Q. Liang, B. Zhang, and X. Wu, “Experimental study of through-wall human detection using ultra wideband radar sensors,” Measurement, vol. 47, pp. 869–879, 2014.
- [2] A. Buonanno, M. D’Urso, G. Prisco, M. Felaco, L. Angrisani, M. Ascione, R. S. L. Moriello, and N. Pasquino, “A new measurement method for through-the-wall detection and tracking of moving targets,” Measurement, vol. 46, no. 6, pp. 1834–1848, 2013.
- [3] F. Ahmad, M. G. Amin, and S. A. Kassam, “Synthetic aperture beamformer for imaging through a dielectric wall,” IEEE transactions on aerospace and electronic systems, vol. 41, no. 1, pp. 271–283, 2005.
- [4] F. Ahmad, Y. Zhang, and M. G. Amin, “Three-dimensional wideband beamforming for imaging through a single wall,” IEEE Geoscience and remote sensing letters, vol. 5, no. 2, pp. 176–179, 2008.
- [5] Y. Ding, Y. Sun, G. Huang, R. Liu, X. Yu, and X. Xu, “Human target localization using doppler through-wall radar based on micro-doppler frequency estimation,” IEEE Sensors Journal, vol. 20, no. 15, pp. 8778–8788, 2020.
- [6] S. S. Chauhan, A. Basu, M. P. Abegaonkar, S. K. Koul et al., “Through the wall human subject localization and respiration rate detection using multichannel doppler radar,” IEEE Sensors Journal, vol. 21, no. 2, pp. 1510–1518, 2020.
- [7] P. Protiva, J. Mrkvica, and J. Machác, “Estimation of wall parameters from time-delay-only through-wall radar measurements,” IEEE Transactions on Antennas and Propagation, vol. 59, no. 11, pp. 4268–4278, 2011.
- [8] T. Jin, B. Chen, and Z. Zhou, “Image-domain estimation of wall parameters for autofocusing of through-the-wall sar imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 3, pp. 1836–1843, 2012.
- [9] F. Fereidoony, S. A. Mirtaheri, and S. Chamaani, “M-sequence sensor and continuous basis estimator for wall parameter estimation utilizing through the wall sensing,” IEEE Sensors Journal, vol. 17, no. 13, pp. 4083–4091, 2017.
- [10] H.-M. Zhang, Y.-R. Zhang, F.-F. Wang, and J.-L. An, “Application of support vector machines for estimating wall parameters in through-wall radar imaging,” International Journal of Antennas and Propagation, vol. 2015, 2015.
- [11] H. Zhang, Y. Zhang, Z. Wang, Z. Wu, and C. Zhang, “An efficient method based on machine learning for estimation of the wall parameters in through-the-wall imaging,” International Journal of Remote Sensing, vol. 37, no. 13, pp. 3061–3073, 2016.
- [12] F. Soldovieri and R. Solimene, “Through-wall imaging via a linear inverse scattering algorithm,” IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 4, pp. 513–517, 2007.
- [13] L. Li, W. Zhang, and F. Li, “A novel autofocusing approach for real-time through-wall imaging under unknown wall characteristics,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 1, pp. 423–431, 2009.
- [14] F. Ahmad, M. G. Amin, and G. Mandapati, “Autofocusing of through-the-wall radar imagery under unknown wall characteristics,” IEEE transactions on image processing, vol. 16, no. 7, pp. 1785–1795, 2007.
- [15] G. Wang, M. G. Amin, and Y. Zhang, “New approach for target locations in the presence of wall ambiguities,” IEEE Transactions on Aerospace and Electronic Systems, vol. 42, no. 1, pp. 301–315, 2006.
- [16] T. D. Bufler and R. M. Narayanan, “Radar classification of indoor targets using support vector machines,” IET Radar, Sonar & Navigation, vol. 10, no. 8, pp. 1468–1476, 2016.
- [17] H.-M. Zhang, S. Zhou, C. Xu, and Y.-R. Zhang, “A real-time automatic method for target locating under unknown wall characteristics in through-wall imaging,” Progress In Electromagnetics Research, vol. 89, pp. 189–197, 2020.
- [18] H.-M. Zhang, S. Zhou, C. Xu, and J. J. Zhang, “A robust approach for three-dimensional real-time target localization under ambiguous wall parameters,” Progress In Electromagnetics Research, vol. 97, pp. 145–156, 2020.
- [19] A. Wood, R. Wood, and M. Charnley, “Through-the-wall radar detection using machine learning,” Results in Applied Mathematics, vol. 7, p. 100106, 2020.
- [20] F. Ghorbani and H. Soleimani, “Simultaneous estimation of wall and object parameters in twr using deep neural network,” arXiv preprint arXiv:2111.04568, 2021.
- [21] C. M. Bishop, Pattern recognition and machine learning. springer, 2006.
- [22] K. Xu, L. Wu, X. Ye, and X. Chen, “Deep learning-based inversion methods for solving inverse scattering problems with phaseless data,” IEEE Transactions on Antennas and Propagation, vol. 68, no. 11, pp. 7457–7470, 2020.
- [23] Y. Sharma, H. H. Zhang, and H. Xin, “Machine learning techniques for optimizing design of double t-shaped monopole antenna,” IEEE Transactions on Antennas and Propagation, 2020.
- [24] F. Ghorbani, S. Beyraghi, J. Shabanpour, H. Oraizi, H. Soleimani, and M. Soleimani, “Deep neural network-based automatic metasurface design with a wide frequency range,” Scientific Reports, vol. 11, no. 1, pp. 1–8, 2021.
- [25] F. Ghorbani, J. Shabanpour, S. Beyraghi, H. Soleimani, H. Oraizi, and M. Soleimani, “A deep learning approach for inverse design of the metasurface for dual-polarized waves,” Applied Physics A, vol. 127, no. 11, pp. 1–7, 2021.
- [26] F. Ghorbani, J. Shabanpour, S. Monjezi, H. Soleimani, S. Hashemi, and A. Abdolali, “Eegsig: an open-source machine learning-based toolbox for eeg signal processing,” arXiv preprint arXiv:2010.12877, 2020.
- [27] X. L. Travassos, S. L. Avila, and N. Ida, “Artificial neural networks and machine learning techniques applied to ground penetrating radar: A review,” Applied Computing and Informatics, 2020.
- [28] U. K. Majumder, E. P. Blasch, and D. A. Garren, Deep Learning for Radar and Communications Automatic Target Recognition. Artech House, 2020.
- [29] J. B. Schneider, “Understanding the finite-difference time-domain method,” School of electrical engineering and computer science Washington State University, vol. 181, 2010.
- [30] “fdtd,” https://github.com/flaport/fdtd.
- [31] F. Chollet et al., “Keras,” https://keras.io, 2015.