
Factorization over interpolation: A fast continuous orthogonal matching pursuit

Gilles Monnoyer de Galland1,2, Luc Vandendorpe1 and Laurent Jacques2.
1CoSy. 2ISPGroup. ICTEAM/ELEN, UCLouvain, Louvain-la-Neuve, Belgium.
GM and LJ are funded by the Belgian FNRS.
Abstract

We propose a fast greedy algorithm to compute sparse representations of signals from continuous dictionaries that are factorizable, i.e., whose atoms can be separated as a product of sub-atoms. Existing algorithms strongly reduce the computational complexity of the sparse decomposition of signals in discrete factorizable dictionaries. Separately, existing greedy algorithms use interpolation strategies built on a discretization of continuous dictionaries to perform off-the-grid decompositions. Our algorithm combines the factorization and interpolation concepts to enable low-complexity computation of continuous sparse representations of signals. Its efficiency is highlighted by simulations of its application to a radar system.

Introduction

The computation of sparse representations is beneficial in many application fields such as radar signal processing, communications, or remote sensing [1, 2, 3]. This computation relies on the assumption that a signal $\boldsymbol{y}$ decomposes as a linear combination of a few atoms taken from a dictionary $\mathcal{D}$. In this paper, we focus on continuous parametric dictionaries [4, 5, 6], which associate each parameter $\boldsymbol{p}$ from the continuous parameter set $\mathcal{P}$ with an atom $\boldsymbol{a}(\boldsymbol{p})\in\mathbb{C}^{M}$. Thereby, the decomposition of $\boldsymbol{y}\in\mathbb{C}^{M}$ reads

$\boldsymbol{y}=\textstyle\sum_{k=1}^{K}\alpha_{k}\,\boldsymbol{a}(\boldsymbol{p}_{k}),$ (1)

where for all $k\in[K]$, $\alpha_{k}\in\mathbb{C}$ and $\boldsymbol{p}_{k}\in\mathcal{P}$.
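As a concrete illustration, a minimal numerical sketch of the model (1) could look as follows, here assuming, purely for illustration, atoms that are complex exponentials parameterized by a single scalar frequency (this toy dictionary is our own choice and not part of the formulation above).

```python
import numpy as np

M = 64                                    # signal dimension

def atom(p):
    # illustrative continuous atom: a complex exponential with frequency p in [0, 1)
    return np.exp(-2j * np.pi * p * np.arange(M)) / np.sqrt(M)

# K = 3 off-the-grid parameters and complex amplitudes
params = [0.1234, 0.4871, 0.7503]
alphas = [1.0 + 0.5j, -0.8, 0.3j]

# measurement y = sum_k alpha_k * a(p_k), cf. (1)
y = sum(a * atom(p) for a, p in zip(alphas, params))
```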

In some applications [7, 8, 9], the atoms $\boldsymbol{a}(\boldsymbol{p}_{k})$ factorize as a product of sub-atoms, each depending on a distinct set of components of $\boldsymbol{p}_{k}$. In that case, greedy algorithms such as those presented in [10, 11] can leverage this property to strongly reduce the computational complexity of the decomposition.

These approaches, however, capitalize on a discretization of $\mathcal{P}$ and assume that the parameters $\{\boldsymbol{p}_{k}\}_{k=1}^{K}$ match the resulting grid [12]. Yet, parameter estimates obtained from such discretized models are affected by grid errors [13]. Although a denser grid reduces this effect, it tremendously increases the dimensionality of the problem to solve. Continuous reconstruction algorithms do not require such dense grids as they perform off-the-grid estimation of the parameters [14, 15, 16]. In [17], a continuous version of Basis Pursuit is derived from the construction of an interpolated model that approximates the atoms. In [18], the authors similarly designed the Continuous OMP (COMP) from the same interpolation concept.

We propose a Factorized COMP (F-COMP) that combines the concepts of interpolation and factorization to enable a fast and accurate reconstruction of sparse signals. We applied our algorithm to the estimation of the ranges and speeds of targets using a Frequency Modulated Continuous Wave (FMCW) radar. Simulations validated the superiority of using low-density grids with off-the-grid algorithms instead of denser grids with on-the-grid algorithms.

Notations:

Matrices and vectors are denoted by bold uppercase and lowercase symbols, respectively. Tensors are denoted by bold calligraphic uppercase letters. The outer product is $\otimes$, $\|\cdot\|_{F}$ is the Frobenius norm, $[N]:=\{1,\cdots,N\}$, $\mathrm{j}=\sqrt{-1}$, and ${\sf c}$ is the speed of light.

Problem Statement

We consider the problem of estimating the values of $K$ parameters $\{\boldsymbol{p}_{k}\}_{k=1}^{K}\subset\mathcal{P}$ from a measurement vector $\boldsymbol{y}\in\mathbb{C}^{M}$. This measurement is assumed to decompose as (1), with $K\ll M$. The parameters are known to lie in a separable parameter domain $\mathcal{P}\subset\mathbb{R}^{L}$ such that $\mathcal{P}:=\mathcal{P}_{1}\times\cdots\times\mathcal{P}_{L}$ with $\mathcal{P}_{\ell}\subset\mathbb{R}$ for each $\ell\in[L]$. For all $k\in[K]$, $\boldsymbol{p}_{k}$ decomposes as

$\boldsymbol{p}_{k}:=(p_{k,1},\cdots,p_{k,L})^{\top},$ (2)

with $p_{k,\ell}\in\mathcal{P}_{\ell}$ for all $\ell\in[L]$. In (1), the atoms $\boldsymbol{a}(\boldsymbol{p}_{k})$ are taken from a continuous dictionary defined by $\mathcal{D}:=\{\boldsymbol{a}(\boldsymbol{p}):\boldsymbol{p}\in\mathcal{P}\}$.

In this paper, we consider the particular case of dictionaries whose atoms factorize into $L$ sub-atoms. More precisely, introducing the tensor $\boldsymbol{\mathcal{A}}(\boldsymbol{p})\in\mathbb{C}^{M_{1}\times\cdots\times M_{L}}$ that reshapes $\boldsymbol{a}(\boldsymbol{p})\in\mathbb{C}^{M}$ as

$\mathcal{A}_{m_{1},m_{2},\cdots,m_{L}}(\boldsymbol{p}):=a_{\bar{m}}(\boldsymbol{p}),$ (3)

with $\bar{m}:=m_{L}+\sum_{\ell=1}^{L-1}(m_{\ell}-1)\prod_{i=\ell+1}^{L}M_{i}\in[M]$, $m_{\ell}\in[M_{\ell}]$ for all $\ell\in[L]$, and $M=M_{1}M_{2}\cdots M_{L}$, we assume that the atom $\boldsymbol{\mathcal{A}}(\boldsymbol{p}_{k})$ decomposes as

$\boldsymbol{\mathcal{A}}(\boldsymbol{p}_{k}):=\boldsymbol{\psi}_{1}(p_{k,1})\otimes\cdots\otimes\boldsymbol{\psi}_{L}(p_{k,L}).$ (4)

In (4), each $\boldsymbol{\psi}_{\ell}(p_{k,\ell})\in\mathbb{C}^{M_{\ell}}$ is a sub-atom taken from the continuous dictionary $\mathcal{D}_{\ell}:=\{\boldsymbol{\psi}_{\ell}(p):p\in\mathcal{P}_{\ell}\}$. In the tensor-reshaped domain, the decomposition (1) becomes

$\boldsymbol{\mathcal{Y}}=\textstyle\sum_{k=1}^{K}\alpha_{k}\,\boldsymbol{\mathcal{A}}(\boldsymbol{p}_{k}),$ (5)

where $\boldsymbol{\mathcal{Y}}\in\mathbb{C}^{M_{1}\times\cdots\times M_{L}}$ is the tensor-shaped measurement, i.e., $\mathcal{Y}_{m_{1},m_{2},\cdots,m_{L}}:=y_{\bar{m}}$.
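Note that the index map $\bar{m}$ in (3) corresponds to a row-major (last-index-fastest) flattening. A short NumPy sketch of the reshaping (3)-(5) and of a factorized atom (4) is given below; the sub-atoms used there are illustrative choices of our own.

```python
import numpy as np
from functools import reduce

# illustrative dimensions with L = 3 sub-atoms
M_dims = (8, 4, 16)                        # (M_1, M_2, M_3)
M = int(np.prod(M_dims))

# vector measurement y and its tensor reshaping (5); the index map m_bar of (3)
# is the usual row-major (C-order) flattening, so a plain reshape suffices
y = np.random.randn(M) + 1j * np.random.randn(M)
Y = y.reshape(M_dims)                      # tensor-shaped measurement

def sub_atom(M_l, p_l):
    # hypothetical sub-atom psi_l(p_l): a complex exponential of length M_l
    return np.exp(-2j * np.pi * p_l * np.arange(M_l)) / np.sqrt(M_l)

# a factorized atom (4): outer product of the L sub-atoms
p = (0.13, 0.42, 0.78)                     # one parameter vector p_k
A = reduce(np.multiply.outer, [sub_atom(m, q) for m, q in zip(M_dims, p)])
assert A.shape == M_dims                   # A(p) lives in C^{M_1 x ... x M_L}
```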

Recovering the parameters $\{\boldsymbol{p}_{k}\}_{k=1}^{K}$ from the factorized model (5) can be made fast. For instance, in [10, 11], the authors consider an adaptation of OMP, which we coin Factorized OMP (F-OMP), that leverages the decomposition (4) to reduce the dimensionality of the recovery problem. Yet, F-OMP only enables the estimation of on-the-grid parameters taken from a finite discrete set. In the next section, we build a model based on the same grid which enables the greedy estimation of off-the-grid parameters while similarly leveraging the factorization.

Factorization over Interpolation

Starting from the general non-factorized model (1), the Continuous OMP (COMP) algorithm [18] extends OMP and succeeds in greedily estimating off-the-grid parameters. COMP operates with a parameter grid that results from a sampling of $\mathcal{P}$. The atoms of the continuous dictionary $\mathcal{D}$ are approximated by a linear combination of multiple atoms defined from the grid. This combination makes it possible to interpolate (from the grid) the atoms of $\mathcal{D}$ that are parameterized by off-the-grid parameters. Our algorithm F-COMP applies the same interpolation concept to the atoms $\boldsymbol{\mathcal{A}}(\boldsymbol{p})$, which are factorized as in (4).

Let us define the separable grid $\Omega_{\mathcal{P}}\subset\mathcal{P}$ such that $\Omega_{\mathcal{P}}=\Omega_{\mathcal{P}_{1}}\times\cdots\times\Omega_{\mathcal{P}_{L}}$, with $\Omega_{\mathcal{P}_{\ell}}:=\{\omega^{\ell}_{n_{\ell}}\}_{n_{\ell}=1}^{N_{\ell}}\subset\mathcal{P}_{\ell}$ for all $\ell\in[L]$. We propose a "factorization over interpolation" strategy where each off-the-grid atom $\boldsymbol{\mathcal{A}}(\boldsymbol{p}_{k})$ is interpolated by $I$ on-the-grid atoms

$\boldsymbol{\mathcal{A}}(\boldsymbol{p}_{k})\simeq\textstyle\sum_{i=1}^{I}c_{k}^{(i)}\,\boldsymbol{\mathcal{A}}^{(i)}[\boldsymbol{n}(k)].$ (6)

In (6), the indices $\boldsymbol{n}(k):=(n_{1}(k),\cdots,n_{L}(k))$ depend on the interpolation scheme and on $\boldsymbol{p}_{k}$, and each $\boldsymbol{\mathcal{A}}^{(i)}[\boldsymbol{n}(k)]$ is the $i$-th interpolation atom associated with the $\boldsymbol{n}(k)$-th element of the grid $\Omega_{\mathcal{P}}$. The coefficients $c_{k}^{(i)}$ are obtained from

$(c_{k}^{(1)},\cdots,c_{k}^{(I)})=\mathcal{C}_{\boldsymbol{n}(k)}(\boldsymbol{p}_{k}),$ (7)

where $\mathcal{C}_{\boldsymbol{n}(k)}(\boldsymbol{p})$ is a function defined by the choice of interpolation pattern [17, 18]. In this scheme, for all $i\in[I]$, we decompose the global interpolation atoms $\boldsymbol{\mathcal{A}}^{(i)}[\boldsymbol{n}(k)]$ using interpolation sub-atoms denoted by $\boldsymbol{\psi}_{\ell}^{(i)}[n_{\ell}(k)]$, i.e.,

$\boldsymbol{\mathcal{A}}^{(i)}[\boldsymbol{n}(k)]=\boldsymbol{\psi}_{1}^{(i)}[n_{1}(k)]\otimes\cdots\otimes\boldsymbol{\psi}_{L}^{(i)}[n_{L}(k)].$ (8)

The factorization (8) is enabled by the properties of the interpolant dictionaries. This is for instance the case for the dictionaries describing the FMCW chirp-modulated radar signals that we detail in Sec. 5. From such signals, we can efficiently estimate off-the-grid values of $\{\boldsymbol{p}_{k}\}_{k=1}^{K}$ using the Factorized Continuous OMP that we explain in the next section.
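As an illustration, a first-order Taylor scheme (the scheme used in Sec. 5, following [17, 18]) can interpolate $\boldsymbol{\mathcal{A}}(\boldsymbol{p})$ with $I=L+1$ atoms: the on-grid atom and its $L$ partial derivatives, each of which factorizes as in (8). The sketch below shows one possible instantiation for $L=2$ with illustrative complex-exponential sub-atoms; the nearest-node assignment of $\boldsymbol{n}(k)$ and the variable names are our own assumptions.

```python
import numpy as np

def psi(M_l, p):
    # illustrative sub-atom: complex exponential of length M_l
    return np.exp(-2j * np.pi * p * np.arange(M_l)) / np.sqrt(M_l)

def dpsi(M_l, p):
    # its derivative with respect to p: the derivative interpolation sub-atom
    return (-2j * np.pi * np.arange(M_l)) * psi(M_l, p)

M1, M2 = 8, 16
grid1 = np.arange(8) / 8                   # Omega_P1, step 1/8
grid2 = np.arange(16) / 16                 # Omega_P2, step 1/16

p = np.array([0.27, 0.61])                 # off-the-grid parameter p_k
n1 = int(np.argmin(np.abs(grid1 - p[0])))  # nearest grid node -> n(k)
n2 = int(np.argmin(np.abs(grid2 - p[1])))

# the I = 3 interpolation atoms, each factorized as in (8)
A0 = np.multiply.outer(psi(M1, grid1[n1]), psi(M2, grid2[n2]))
A1 = np.multiply.outer(dpsi(M1, grid1[n1]), psi(M2, grid2[n2]))
A2 = np.multiply.outer(psi(M1, grid1[n1]), dpsi(M2, grid2[n2]))

# coefficients C_n(p) of this Taylor scheme, cf. (7)
c = (1.0, p[0] - grid1[n1], p[1] - grid2[n2])
A_interp = c[0] * A0 + c[1] * A1 + c[2] * A2   # right-hand side of (6)
```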

Factorized Continuous OMP

Input: $K$, $\boldsymbol{\mathcal{Y}}$, $\{\boldsymbol{\mathcal{A}}^{(i)}[\boldsymbol{n}]\}_{(i,\boldsymbol{n})\in[I]\times\mathcal{N}}$, $\Omega_{\mathcal{P}}$.
Output: $\{\hat{\alpha}_{k}\}_{k=1}^{K}$, $\{\hat{\boldsymbol{p}}_{k}\}_{k=1}^{K}$.
begin
  Initialization: $\boldsymbol{\mathcal{R}}^{(1)}=\boldsymbol{\mathcal{Y}}$, $\Omega=\emptyset$, $k=1$;
  while $k\leq K$:
    $\hat{\boldsymbol{n}}(k)=\arg\min_{\boldsymbol{n}\in\mathcal{N}}\big(\min_{\boldsymbol{\beta}\in\mathbb{C}^{I}}\big\|\sum_{i=1}^{I}\beta_{i}\boldsymbol{\mathcal{A}}^{(i)}[\boldsymbol{n}]-\boldsymbol{\mathcal{R}}^{(k)}\big\|_{F}^{2}\big)$; (9)
    $\Omega\leftarrow\Omega\cup\{\hat{\boldsymbol{n}}(k)\}$;
    $\{\hat{\boldsymbol{\beta}}_{k^{\prime}}\}_{k^{\prime}=1}^{k}=\arg\min_{\{\boldsymbol{\beta}_{k^{\prime}}\in\mathbb{C}^{I}\}_{k^{\prime}=1}^{k}}\big\|\sum_{k^{\prime}=1}^{k}\sum_{i=1}^{I}\beta_{k^{\prime}}^{(i)}\boldsymbol{\mathcal{A}}^{(i)}[\hat{\boldsymbol{n}}(k^{\prime})]-\boldsymbol{\mathcal{Y}}\big\|_{F}^{2}$;
    $\boldsymbol{\mathcal{R}}^{(k+1)}=\boldsymbol{\mathcal{Y}}-\sum_{k^{\prime}=1}^{k}\sum_{i=1}^{I}\hat{\beta}_{k^{\prime}}^{(i)}\boldsymbol{\mathcal{A}}^{(i)}[\hat{\boldsymbol{n}}(k^{\prime})]$;
    $k\leftarrow k+1$;
  for all $k\in[K]$:
    $(\hat{\alpha}_{k},\hat{\boldsymbol{p}}_{k})=\arg\min_{\alpha\in\mathbb{C},\,\boldsymbol{p}\in\mathcal{P}}\big\|\alpha\,\mathcal{C}_{\hat{\boldsymbol{n}}(k)}(\boldsymbol{p})-\hat{\boldsymbol{\beta}}_{k}\big\|_{2}^{2}$. (10)
Algorithm 1: Factorized Continuous OMP (F-COMP)

Alg. 1 formulates F-COMP for a generic interpolation scheme. F-COMP leverages the factorized interpolated model (6) to estimate off-the-grid parameters with a reduced complexity with respect to COMP [18]. It follows the same steps as COMP and greedily minimizes $\big\|\boldsymbol{\mathcal{Y}}-\sum_{k=1}^{K}\alpha_{k}\sum_{i=1}^{I}c^{(i)}_{k}\boldsymbol{\mathcal{A}}^{(i)}[\boldsymbol{n}(k)]\big\|_{F}^{2}$. In Alg. 1, we use $\mathcal{N}$ to denote $[N_{1}]\times\cdots\times[N_{L}]$ and $\hat{\boldsymbol{\beta}}_{k}:=(\hat{\beta}^{(1)}_{k},\cdots,\hat{\beta}^{(I)}_{k})$, where $\hat{\beta}_{k}^{(i)}$ estimates $\alpha_{k}c^{(i)}_{k}$.

The decomposition of the atoms $\boldsymbol{\mathcal{A}}(\boldsymbol{p}_{k})$ makes it possible to compute the selection step (9) with a complexity of $O(IN_{1}\cdots N_{L}\min_{\ell\in[L]}(M_{\ell}))$ instead of $O(IN_{1}\cdots N_{L}M_{1}\cdots M_{L})$ in COMP. This is achieved by extending the methodology of F-OMP [11] to our interpolation-based model.
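To give an idea of where the gain comes from, the sketch below (an illustrative implementation choice, not necessarily the exact one of [11]) computes the inner products $\langle\boldsymbol{\mathcal{A}}^{(i)}[\boldsymbol{n}],\boldsymbol{\mathcal{R}}^{(k)}\rangle$ needed in (9) for all nodes of a two-dimensional grid by contracting the residual mode by mode with the sub-atom dictionaries, without ever forming the $M_{1}M_{2}$-dimensional atoms.

```python
import numpy as np

def grid_correlations(R, Psi1, Psi2):
    # R: (M1 x M2) residual tensor R^(k).
    # Psi1[i]: (M1 x N1) matrix whose columns are the i-th interpolation sub-atoms
    # on the grid Omega_P1 (similarly Psi2[i], of size M2 x N2); names are ours.
    # Returns an (I x N1 x N2) array of inner products <A^(i)[n], R^(k)>,
    # computed mode by mode instead of atom by atom.
    corr = []
    for P1, P2 in zip(Psi1, Psi2):
        T = P1.conj().T @ R                # (N1 x M2): contraction along mode 1
        corr.append(T @ P2.conj())         # (N1 x N2): contraction along mode 2
    return np.stack(corr)
```

For $L=2$ this costs roughly $O(I(N_{1}M_{1}M_{2}+N_{1}N_{2}M_{2}))$ operations. When the $I$ interpolation atoms at a given node are (approximately) orthonormal, the inner minimization in (9) then amounts to selecting the node maximizing $\sum_{i}|\langle\boldsymbol{\mathcal{A}}^{(i)}[\boldsymbol{n}],\boldsymbol{\mathcal{R}}^{(k)}\rangle|^{2}$; in general, a small $I\times I$ least-squares system must be solved per node.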

Application to Radars

We applied F-COMP to the estimation of the ranges $\{r_{k}\}_{k=1}^{K}$ and radial speeds $\{v_{k}\}_{k=1}^{K}$ of $K$ point targets using a Frequency Modulated Continuous Wave (FMCW) radar that emits chirp-modulated waveforms. The model is similar to the one provided in [9]. The received radar signal is coherently demodulated and sampled at a regular rate $1/T_{s}$, with $M_{s}$ samples acquired per chirp and $M_{c}$ chirps acquired in total. Under a few assumptions on the radar system [9, 19, 20], the resulting sampled measurement vector $\boldsymbol{y}\in\mathbb{C}^{M_{c}M_{s}}$ is approximated, for $m_{s}\in[M_{s}]$ and $m_{c}\in[M_{c}]$, by

$y_{m_{c}M_{s}+m_{s}}\simeq\sum_{k=1}^{K}\alpha_{k}\,e^{-\mathrm{j}2\pi\frac{B}{M_{s}}\frac{2r_{k}^{\prime}}{{\sf c}}m_{s}}\,e^{-\mathrm{j}2\pi f_{0}M_{s}T_{s}\frac{2v_{k}}{{\sf c}}m_{c}},$ (11)

where $r_{k}^{\prime}=r_{k}+\frac{f_{0}M_{s}T_{s}}{B}v_{k}$. In (11), $B$ and $f_{0}$ are, respectively, the bandwidth and the carrier frequency of the transmitted waveform. Given (11), the measurement $\boldsymbol{y}$ is reshaped as explained in Sec. 2 and expressed by (5) and (4) with $L=2$,

$(\boldsymbol{\psi}_{1}(r_{k}^{\prime}))_{m_{s}}:=e^{-\mathrm{j}2\pi\frac{B}{M_{s}}\frac{2r_{k}^{\prime}}{{\sf c}}m_{s}},$ (12)
$(\boldsymbol{\psi}_{2}(v_{k}))_{m_{c}}:=e^{-\mathrm{j}2\pi f_{0}M_{s}T_{s}\frac{2v_{k}}{{\sf c}}m_{c}}.$ (13)
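For concreteness, the sub-atom dictionaries (12)-(13) can be generated as in the sketch below, here with the system parameters of Fig. 1 and an arbitrary target; the variable names and the 1-based sample indexing are our own conventions.

```python
import numpy as np

c0 = 3e8                                   # speed of light [m/s]
B, f0, Ts = 200e6, 24e9, 5e-6              # bandwidth, carrier frequency, sampling period
Ms, Mc = 16, 16                            # samples per chirp, number of chirps (as in Fig. 1)

def psi_range(r_prime):
    # range sub-atom (12), indexed by m_s = 1, ..., Ms
    ms = np.arange(1, Ms + 1)
    return np.exp(-2j * np.pi * (B / Ms) * (2 * r_prime / c0) * ms)

def psi_speed(v):
    # speed sub-atom (13), indexed by m_c = 1, ..., Mc
    mc = np.arange(1, Mc + 1)
    return np.exp(-2j * np.pi * f0 * Ms * Ts * (2 * v / c0) * mc)

# one noiseless target following (11), written in the tensor form (5) with L = 2
alpha, r, v = 1.0, 35.0, 12.0              # arbitrary amplitude, range [m], speed [m/s]
r_prime = r + (f0 * Ms * Ts / B) * v       # range-Doppler coupling
Y = alpha * np.multiply.outer(psi_range(r_prime), psi_speed(v))
```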

In our application, we used an order-1 Taylor interpolation, as explained in [17, 18], to implement (6) and (8).

The radar model (11) is an approximation of the exact radar signal that enables the use of F-COMP. With the exact model, we can use COMP and expect a more accurate estimation of ranges and speeds, at the price of a higher computation time than F-COMP. This is observed in Fig. 1, where an estimation is counted as a miss when $\sqrt{\big(\frac{\hat{r}_{k}-r_{k}}{{\sf c}/2B}\big)^{2}+\big(\frac{\hat{v}_{k}-v_{k}}{{\sf c}/(4f_{0}M_{c}T_{c})}\big)^{2}}\geq 1$. The factorized algorithms (F-OMP and F-COMP) are faster but miss estimations more often than their non-factorized counterparts (OMP and COMP). The continuous algorithms have a lower miss rate because they are not affected by grid errors. F-COMP appears as the best trade-off between performance and computation time for most values of the miss rate it can reach.

Figure 1: Comparison of (a) the computation time and (b) the miss rate of (F)(C)OMP as a function of the number of bins in the location and velocity grids ($N^{*}=N_{1}=N_{2}$). The simulated system has $M^{*}=M_{1}=M_{2}=16$, $B=200\,$MHz, $f_{0}=24\,$GHz, $T_{s}=5\,\mu$s, and $T_{c}=M_{s}T_{s}$. Each dot is obtained by averaging the values resulting from 10,000 realisations of random sets of $K=5$ independent targets.

Conclusion

In this work, we designed the Factorized Continuous OMP, which leverages the factorized structure of dictionaries to efficiently compute continuous sparse representations. We proposed an implementation of the algorithm for a practical radar application. Although this implementation remains simple, our simulations showed that F-COMP offers the best trade-off between performance and computation time. In future work, we may investigate the extension of more sophisticated, higher-order interpolation schemes to factorizable dictionaries.

References

  • [1] R. Baraniuk and P. Steeghs. Compressive radar imaging. In 2007 IEEE Radar Conference, pages 128–133, April 2007.
  • [2] L. Zheng and X. Wang. Super-resolution delay-Doppler estimation for OFDM passive radar. IEEE Transactions on Signal Processing, 65(9):2197–2210, May 2017.
  • [3] Y. D. Zhang, M. G. Amin, and B. Himed. Sparsity-based DOA estimation using co-prime arrays. In IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, May 2013.
  • [4] Rémi Gribonval. Fast matching pursuit with a multiscale dictionary of Gaussian chirps. IEEE Transactions on Signal Processing, 49:994–1001, June 2001.
  • [5] R. M. Figueras i Ventura, P. Vandergheynst, and P. Frossard. Low-rate and flexible image coding with redundant representations. IEEE Transactions on Image Processing, 15(3):726–739, March 2006.
  • [6] Laurent Jacques and Christophe De Vleeschouwer. A geometrical study of matching pursuit parametrization. IEEE Transactions on Signal Processing, 56:2835–2848, August 2008.
  • [7] V. Winkler. Range Doppler detection for automotive FMCW radars. pages 166–169, November 2007.
  • [8] S. Lutz, D. Ellenrieder, T. Walter, and R. Weigel. On fast chirp modulations and compressed sensing for automotive radar applications. In 2014 15th International Radar Symposium (IRS), pages 1–6, June 2014.
  • [9] T. Feuillen, A. Mallat, and L. Vandendorpe. Stepped frequency radar for automotive application: Range-Doppler coupling and distortions analysis. In MILCOM 2016 - 2016 IEEE Military Communications Conference, pages 894–899, November 2016.
  • [10] S. Zubair and W. Wang. Tensor dictionary learning with sparse Tucker decomposition. pages 1–6, July 2013.
  • [11] Y. Fang, B. Huang, and J. Wu. 2D sparse signal recovery via 2D orthogonal matching pursuit. Science China Information Sciences, 55, January 2011.
  • [12] E. J. Candes and M. B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21–30, March 2008.
  • [13] H. Azodi, C. Koenen, U. Siart, and T. Eibert. Empirical discretization errors in sparse representations for motion state estimation with multi-sensor radar systems. pages 1–4, 04 2016.
  • [14] G. Tang, B. N. Bhaskar, P. Shah, and B. Recht. Compressed sensing off the grid. IEEE Transactions on Information Theory, 59(11):7465–7490, Nov 2013.
  • [15] K. V. Mishra, M. Cho, A. Kruger, and W. Xu. Super-resolution line spectrum estimation with block priors. pages 1211–1215, Nov 2014.
  • [16] Y. Traonmilin and J-F. Aujol. The basins of attraction of the global minimizers of the non-convex sparse spikes estimation problem. ArXiv, abs/1811.12000, 2018.
  • [17] C. Ekanadham, D. Tranchina, and E. P. Simoncelli. Recovery of sparse translation-invariant signals with continuous basis pursuit. IEEE Transactions on Signal Processing, 59(10):4735–4744, Oct 2011.
  • [18] K. Knudson, J. Yates, A. Huk, and J. Pillow. Inferring sparse representations of continuous signals with continuous orthogonal matching pursuit. Advances in neural information processing systems, 27, 04 2015.
  • [19] Y. Liu, H. Meng, G. Li, and X. Wang. Velocity estimation and range shift compensation for high range resolution profiling in stepped-frequency radar. IEEE Geoscience and Remote Sensing Letters, 7:791–795, November 2010.
  • [20] H. Bao. The research of velocity compensation method based on range-profile function. International Journal of Hybrid Information Technology, 7:49–56, 03 2014.