Compressive Sensing of ECG Signals using Plug-and-Play Regularization

K. N. Chaudhury was partially supported by Core Research Grant CRG/2020/000527 and SERB-STAR Award STR/2021/000011 from the Department of Science and Technology, Government of India.
Abstract
Compressive Sensing (CS) has recently attracted attention for ECG data compression. In CS, an ECG signal is projected onto a small set of random vectors. Recovering the original signal from such compressed measurements remains a challenging problem. Traditional recovery methods are based on solving a regularized minimization problem, where a sparsity-promoting prior is used. In this paper, we propose an alternative iterative recovery algorithm based on the Plug-and-Play (PnP) method, which has recently become popular for imaging problems. In PnP, a powerful denoiser is used to perform regularization implicitly, instead of using hand-crafted regularizers; this has been found to be more successful than traditional methods. In this work, we use a PnP version of the Proximal Gradient Descent (PGD) algorithm for ECG recovery. To ensure mathematical convergence of the PnP algorithm, the signal denoiser in question needs to satisfy a technical condition. We obtain a high-quality ECG signal denoiser fulfilling this condition by learning a Bayesian prior for small signal patches. This guarantees that the proposed algorithm converges to a fixed point irrespective of the initialization. Importantly, through extensive experiments, we show that the reconstruction quality of the proposed method is superior to that of state-of-the-art methods.
keywords:
ECG signals, compressive sensing, proximal gradient descent, plug-and-play regularization, GMM denoiser.

Department of Electrical Engineering, Indian Institute of Science, Bangalore 560012, India
1 Introduction
Electrocardiogram (ECG) is widely used for the diagnosis and monitoring of cardiac conditions such as hypertension [1], heart failure [2], and arrhythmia [3]. Essentially, an ECG signal is a recording of the electrical activity of the heart over time. The extensive use of ECG in healthcare creates a need for sophisticated signal processing techniques to efficiently compress, analyze, store, and transmit ECG signals. For example, in wearable devices, there is a need to reduce the energy consumed by data transmission and to use memory efficiently. Recently, several efforts have been made to develop wireless ECG sensors for continuous health monitoring, for which devices with low power consumption and low complexity are desirable. However, continuous wireless transmission of long-term biomedical data consumes a significant amount of energy. Compression of ECG signals is therefore helpful for achieving energy efficiency.
Compressive Sensing (CS) is a possible solution for signal compression. It employs random linear projections that aim to preserve the structure of the signal. The signal can be reconstructed from its projections using nonlinear recovery methods. In fact, several works have explored the application of CS to biomedical signal processing, including ECG [4, 5, 6], EMG [7, 8], EEG [9] signals and MRI images [10].
1.1 Compressive Sensing of ECG Signals
The data acquisition model in CS is given by [5, 6]

$$y = \Phi x + w, \qquad (1)$$

where $x \in \mathbb{R}^{n}$ is the original ECG signal of length $n$, $\Phi \in \mathbb{R}^{m \times n}$ is a compression matrix with $m < n$, and $w \in \mathbb{R}^{m}$ denotes the noise in the acquisition system. Typically, $w$ is assumed to be white Gaussian noise with mean $0$, whereas $\Phi$ is taken to be a random Gaussian or binary matrix [4, 5, 6].
The original signal can be approximately recovered from the measurements by solving the regularized inversion problem

$$\min_{x} \; \frac{1}{2}\,\|y - \Phi x\|^{2} + \lambda\, g(x), \qquad (2)$$

where the term $f(x) = \frac{1}{2}\|y - \Phi x\|^{2}$ forces consistency of the recovered signal with the measurements, whereas $g$ (known as the regularizer) acts as a penalty that forces the recovered signal to have some desirable property such as smoothness. Here, $\lambda$ is a positive scalar used to control the amount of regularization, and $\|\cdot\|$ denotes the $\ell_2$ norm.
A good regularizer is necessary since recovering $x$ from $y$ is an ill-posed problem (as $m < n$). Several regularizers have been explored for ECG compressive recovery, such as the weighted $\ell_1$ norm [11], other $\ell_p$ norms [12], total variation (TV) [13], and second-order sparsity-promoting functions [6]. Moreover, efficient recovery algorithms have been derived by exploiting the temporal correlation between successive samples; examples include regularized least-squares promoting gradient-domain sparsity [14], Block Sparse Bayesian Learning Bound-Optimization (BSBL-BO) [15], and Block Sparse Bayesian Learning with Expectation Maximization (BSBL-EM) [16]. The latter two are considered to be state-of-the-art.
1.2 Classical Regularization
It is well known that the $\ell_1$ norm promotes sparse solutions; moreover, natural signals such as ECG are known to be approximately sparse in suitably chosen domains [11, 13]. Therefore, sparsity-promoting regularizers based on the $\ell_1$ norm in the wavelet and gradient (TV) domains have traditionally been used for ECG reconstruction. The downside is that the $\ell_1$ norm is not differentiable. Hence, the objective function in (2) as a whole is non-differentiable, even though $f$ is differentiable. A good choice of iterative numerical solvers for such problems is the class of proximal algorithms [17], such as ADMM and Proximal Gradient Descent (PGD). A proximal algorithm generally consists of smaller subproblems which individually involve only one of the two functions; consequently, it is possible to take advantage of the differentiability of $f$. In this paper, we focus on PGD since it is a particularly simple proximal algorithm. PGD is sometimes known as the Iterative Shrinkage-Thresholding Algorithm (ISTA) [18]. Starting from an initial point $x_0$, PGD creates a sequence of points $(x_k)$ recursively using the rule
$$x_{k+1} = \mathrm{prox}_{\gamma\lambda g}\big(x_k - \gamma \nabla f(x_k)\big), \qquad (3)$$

where $\gamma > 0$ is a fixed parameter (known as the step size) and $\mathrm{prox}_{\gamma\lambda g}$ is a function known as the proximal operator of $\gamma\lambda g$:

$$\mathrm{prox}_{\gamma\lambda g}(z) = \operatorname*{arg\,min}_{x} \; \frac{1}{2}\,\|x - z\|^{2} + \gamma\lambda\, g(x). \qquad (4)$$
Note that if we put $\Phi = I$ (the identity matrix) in (2), then (2) reduces to (4) up to the scaling of the regularization parameter. Thus, the proximal operator can effectively be interpreted as a Gaussian denoising operator. For regularizers such as the $\ell_1$ and $\ell_2$ norms, the proximal operator has a closed-form formula [17]; hence, the PGD algorithm is easy to implement. As discussed in Section 2, the algorithm is guaranteed to converge under some mild conditions to a minimum of $f + \lambda g$. Note that every PGD step is the composition of two operations: the first (computing $x_k - \gamma \nabla f(x_k)$) is one step of the classical gradient descent algorithm and depends only on $f$, while the second (computing the proximal operator) depends only on $g$. This is why the algorithm is called Proximal Gradient Descent.
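For concreteness, here is a minimal NumPy sketch of the PGD/ISTA iteration (3)-(4) for the $\ell_1$ regularizer, whose proximal operator is elementwise soft-thresholding. The function names and the step-size choice are ours; this is an illustration of the algorithm, not the implementation used in the paper.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def pgd_l1(y, Phi, lam, n_iter=200):
    # PGD/ISTA for min_x 0.5 * ||y - Phi x||^2 + lam * ||x||_1.
    gamma = 1.0 / np.linalg.norm(Phi, 2) ** 2   # step size 1 / sigma_max(Phi)^2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)            # gradient of the quadratic data term
        x = soft_threshold(x - gamma * grad, gamma * lam)
    return x
```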
1.3 Plug-and-Play Regularization
Plug-and-play (PnP) regularization is a regularization technique that was recently developed in the image processing community [19, 20]. The main step in PnP is to replace the proximal operator by a powerful signal denoiser; as discussed in Section 2, this is motivated by the similarity of the proximal operator to a denoising operation. In the context of PGD, the operator $\mathrm{prox}_{\gamma\lambda g}$ is replaced by a signal denoiser $D$, so that the $k$-th step becomes $x_{k+1} = D\big(x_k - \gamma \nabla f(x_k)\big)$. This algorithm is known as PnP-PGD. Essentially, it amounts to taking one step of gradient descent, followed by denoising. Note that we no longer need to choose a regularizer since $g$ does not appear in the modified algorithm; the regularization is performed implicitly by the denoiser.
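The corresponding PnP-PGD iteration is sketched below, with the proximal step replaced by a generic denoiser callable. The initialization and step size are our choices and are not prescribed by the text.

```python
import numpy as np

def pnp_pgd(y, Phi, denoiser, n_iter=100):
    # PnP-PGD: x_{k+1} = D(x_k - gamma * grad f(x_k)), with f(x) = 0.5 * ||y - Phi x||^2.
    gamma = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = Phi.T @ y                               # simple initialization
    for _ in range(n_iter):
        v = x - gamma * Phi.T @ (Phi @ x - y)   # gradient step on the data-fidelity term
        x = denoiser(v)                         # implicit regularization via the denoiser
    return x
```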
PnP has yielded state-of-the-art results in many imaging problems. However, since there is no explicit regularizer involved, the aforementioned convergence to a minimum of $f + \lambda g$ does not apply.
1.4 Contribution
The contributions of this work are as follows.

1. We introduce the PnP framework for reconstructing ECG signals from CS measurements. To the best of our knowledge, PnP has not previously been used for this application. Through extensive experiments, we show that the proposed method, based on PnP-PGD, outperforms current state-of-the-art CS recovery methods for ECG signals.

2. Even though the convergence of PGD to a minimum of $f + \lambda g$ does not carry over to PnP-PGD, we show that a different form of convergence, known as fixed-point convergence, can be guaranteed if the ECG denoiser satisfies a condition known as contractivity. Thus, the challenge lies in designing a contractive ECG denoiser.

3. We derive a high-quality contractive ECG denoiser by modeling small signal patches as random vectors following a Gaussian Mixture Model (GMM). We show experimentally that the denoising performance of the GMM denoiser is comparable to or better than that of existing state-of-the-art ECG denoisers.
The rest of this paper is organized as follows. In Section 2, we give an overview of the PnP-PGD algorithm and explain the motivation behind its development. We derive the GMM denoiser in Section 3 and compare its denoising quality with that of existing ECG signal denoisers. In Section 4, we discuss how this denoiser can be used in a way that guarantees convergence of PnP-PGD for CS recovery. Numerical experiments on CS recovery of ECG signals are reported in Section 5, and we conclude the paper in Section 6. Some of the mathematical proofs are given in the Appendix.
2 Plug-and-Play PGD
We first state a standard convergence result for PGD and then discuss some convergence-related aspects of PnP. What makes PGD a simple yet powerful algorithm is its guarantee of convergence to a minimum of $f + \lambda g$. In the following theorem (and in the rest of the paper), we denote the largest singular value of a matrix $A$ by $\sigma_{\max}(A)$.
Theorem 1 ([18]).
Consider the PGD algorithm (3) for minimizing the function $f + \lambda g$, where $f(x) = \frac{1}{2}\|y - \Phi x\|^{2}$. Suppose $g$ is continuous and convex, and that $0 < \gamma \le 1/\sigma_{\max}(\Phi)^{2}$. Then the sequence of objective values $\big(f(x_k) + \lambda g(x_k)\big)$ converges to the minimum of $f + \lambda g$ as $k \to \infty$.
We now turn our attention to the PnP framework. Consider the definition of the proximal operator in (4). The minimization problem in (4) is similar to (2) with $\Phi = I$, the identity matrix. Hence, $\mathrm{prox}_{\gamma\lambda g}(z)$ is essentially a regularized inverse corresponding to the additive noise model $z = x + w$, where $w$ is Gaussian noise. In other words, the proximal operator is simply an additive Gaussian denoising operator.
It is well known in the image processing community that specially designed denoisers such as nonlocal means (NLM) [21] and BM3D [22] are superior to traditional denoisers based on regularization, e.g. $\ell_1$- or TV-regularized denoising. Motivated by this observation, the work in [19] explored how the performance of a proximal algorithm for image recovery is affected if the proximal operator is replaced by an arbitrary Gaussian denoiser $D$, such as NLM or BM3D. This scheme was named plug-and-play, since the denoiser serves as a pluggable module that replaces the proximal operator in an existing numerical solver. In the original work [19], the PnP scheme was developed for a different proximal algorithm, ADMM, and it was adapted to PGD in [20]. The PnP-PGD algorithm is thus recursively defined by
$$x_{k+1} = D\big(x_k - \gamma \nabla f(x_k)\big). \qquad (5)$$
Note that the same denoiser can be used for several kinds of image recovery problems within the PnP framework, since only the function $f$ changes from problem to problem. For this reason, PnP has attracted a lot of interest in the imaging community in the past few years. However, the use of PnP for recovering one-dimensional signals such as ECG has remained unexplored.
An immediate question arising from the PnP scheme is the following: does the sequence $(x_k)$ generated by (5) converge to some $x^{*}$? And if so, is $x^{*}$ optimal in some sense? The latter question can be resolved if $D$ is expressible as the proximal operator of some function $g$. In general, however, an arbitrary denoiser cannot be expressed in this way. As a result, the PnP-PGD algorithm cannot be interpreted as minimizing an objective of the form $f + \lambda g$, and the convergence result in Theorem 1 is not generally applicable. Therefore, we are left with determining whether at least the sequence $(x_k)$ converges. It turns out that such a guarantee can indeed be given under a technical condition on $D$.
Definition 1.
The denoiser $D : \mathbb{R}^{n} \to \mathbb{R}^{n}$ is said to be contractive if there exists $\rho \in (0, 1)$ such that for all points $x, x' \in \mathbb{R}^{n}$,

$$\|D(x) - D(x')\| \le \rho\, \|x - x'\|.$$
We now state a theorem that guarantees the convergence of PnP-PGD using a contractive denoiser.
Theorem 2.
Consider the PnP-PGD algorithm $x_{k+1} = D\big(x_k - \gamma \nabla f(x_k)\big)$, where $f(x) = \frac{1}{2}\|y - \Phi x\|^{2}$. Suppose $\gamma < 2/\sigma_{\max}(\Phi)^{2}$. Moreover, suppose the denoiser $D$ is contractive. Then, as $k \to \infty$, the sequence $(x_k)$ converges linearly (at an exponential rate) to a unique fixed point $x^{*}$ that does not depend on the initialization $x_0$.
While Theorem 2 is proved in the Appendix, we mention here that the proof uses the Banach Fixed Point Theorem [23, Th. 9.23], from which the linear rate of convergence follows.
Note the difference between the types of convergence addressed in Theorems 1 and 2: Theorem 1 claims convergence of the sequence of objective values $\big(f(x_k) + \lambda g(x_k)\big)$, whereas Theorem 2 is concerned with the sequence of iterates $(x_k)$. Theorem 2 essentially says that the PnP-PGD algorithm eventually stabilizes, in the sense that consecutive iterates $x_k$ and $x_{k+1}$ come arbitrarily close to each other. This property is known as fixed-point convergence [24] and is desirable for any recovery algorithm. Thus, by Theorem 2, it suffices for the denoiser to be contractive in order to have fixed-point convergence.
It is useful to compare our result with a similar result in the recent work [25], where the authors proved fixed-point convergence of PnP-PGD under a different set of assumptions. The convergence result in [25] applies when the loss function $f$ is strongly convex and the residual operator $D - I$ is Lipschitz continuous with a sufficiently small constant, where $I$ is the identity operator. In contrast, we require $D$ to be contractive and do not require strong convexity of $f$. In fact, in our case, $f$ is not strongly convex since the sensing matrix $\Phi$ has a non-trivial null space.
Various methods for Gaussian denoising of ECG signals have been explored in the literature; see [26] for a review. The state-of-the-art techniques include optimization-based methods such as TV, multiresolution methods such as wavelets, and empirical mode decomposition; a combination of these methods is sometimes used [27]. Further, nonlocal means (NLM) denoising has also been found to be promising [28]. However, to the best of our knowledge, there is no work that determines whether any of these denoisers is contractive. Can we design a high-quality contractive ECG signal denoiser? In the next section, we show that this can indeed be done. Specifically, we design a Gaussian denoiser that is affine in its input, $D(x) = Wx + b$, where $W$ is a symmetric matrix. The resulting PnP-PGD algorithm outperforms state-of-the-art methods for ECG recovery.

3 GMM Denoiser
Our ECG denoiser is inspired by an observation made in the context of images [29, 30]: a small patch of some fixed size belonging to a clean (i.e. noiseless) image can be well-modeled as a random vector whose density is a Gaussian Mixture Model (GMM). Such a density can be learned by fitting a GMM to a large collection of patches extracted from a set of clean images, usually belonging to a common class (e.g. face images). We apply this idea to model patches of ECG signals. Specifically, we extract a large collection of patches of length $p$ from a set of noiseless ECG signals as our training data. This is used to fit a GMM density (with a pre-determined number of components $K$) using the expectation-maximization (EM) algorithm. Essentially, we model clean ECG patches of length $p$ as random vectors drawn from this learned GMM density, which we denote by $q$. For $k = 1, \ldots, K$, let $\alpha_k$ be the mixture coefficient of the $k$-th component, $\mu_k \in \mathbb{R}^{p}$ be its mean, and $\Sigma_k \in \mathbb{R}^{p \times p}$ be its positive definite covariance matrix. Then $q$ is given by
$$q(x) = \sum_{k=1}^{K} \alpha_k\, \phi(x;\, \mu_k, \Sigma_k), \qquad (6)$$

where $\phi(\cdot\,; \mu, \Sigma)$ denotes the Gaussian density function with mean $\mu$ and covariance $\Sigma$.
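As an illustration of this training step, the sketch below fits a GMM to circularly extracted ECG patches using scikit-learn's EM implementation. The helper names and the default values of $p$ and $K$ are placeholders, not the values used in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_patches(signal, p):
    # All n circularly-padded patches of length p from a 1-D signal of length n.
    n = len(signal)
    idx = (np.arange(n)[:, None] + np.arange(p)[None, :]) % n
    return signal[idx]                          # shape (n, p)

def train_gmm(train_signals, p=16, K=10):
    # Fit a K-component GMM to clean ECG patches via expectation-maximization.
    patches = np.vstack([extract_patches(s, p) for s in train_signals])
    gmm = GaussianMixture(n_components=K, covariance_type='full', max_iter=200)
    gmm.fit(patches)
    # gmm.weights_, gmm.means_, gmm.covariances_ play the roles of (alpha_k, mu_k, Sigma_k).
    return gmm
```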
How can this model be used for denoising an ECG signal corrupted with Gaussian noise? Again, we borrow from a patch-based denoising framework that is quite popular in image processing [29, 30]. At a high level, this framework consists of the following steps:
1. Extract all possible patches of length $p$ from the noisy signal; if the signal length is $n$, then there are $n$ such patches (we apply circular padding to the signal). If $z \in \mathbb{R}^{n}$ denotes the noisy signal, the collection of patches is $\{P_i z : i = 1, \ldots, n\}$, where $P_i : \mathbb{R}^{n} \to \mathbb{R}^{p}$ is the linear operator that extracts the patch starting at the $i$-th location, i.e. the segment $(z_i, z_{i+1}, \ldots, z_{i+p-1})$, with indices taken modulo $n$.

2. Denoise each patch independently by computing a Bayesian estimate of its corresponding clean patch under an additive Gaussian noise model, using $q$ as the prior distribution of clean patches. Letting $d : \mathbb{R}^{p} \to \mathbb{R}^{p}$ denote the patch denoising operator, the collection of denoised patches is $\{d(P_i z) : i = 1, \ldots, n\}$.

3. Place each denoised patch back into its corresponding location in the signal. Each sample location belongs to $p$ overlapping denoised patches; take the average of the $p$ values at that location as the estimate of the corresponding sample of the denoised signal. This completes the overall denoising process, which is given by

$$D(z) = \frac{1}{p} \sum_{i=1}^{n} P_i^{\top}\, d(P_i z). \qquad (7)$$
The patch denoiser $d$ forms the core of this framework, and the overall denoising performance depends on the performance of $d$. One approach to incorporate the Bayesian prior in the patch denoising is to take $d$ as the maximum a-posteriori (MAP) estimator of the clean patch under a Gaussian noise model. That is, for a noisy patch $z = x + w$ (where $w$ is Gaussian noise), we can define $d(z)$ to be the mode of the conditional density of $x$ given $z$. However, it is known that this cannot be computed in closed form when the prior is a GMM [29]. Instead, motivated by [30], we choose $d$ to be the minimum mean-squared-error (MMSE) estimator of the clean patch:
$$d(z) = \mathbb{E}\,[\,x \mid z\,]. \qquad (8)$$

The theorem below gives a closed-form expression for $d$.
Theorem 3 ([30]).
Consider the additive noise model $z = x + w$, where $w$ is zero-mean white Gaussian noise with variance $\sigma^{2}$. Suppose $x$ has the GMM density $q$ given by (6). Then

$$d(z) = \mathbb{E}\,[\,x \mid z\,] = \sum_{k=1}^{K} \beta_k(z)\, \big(\mu_k + A_k (z - \mu_k)\big),$$

where $A_k = \Sigma_k\,(\Sigma_k + \sigma^{2} I)^{-1}$ and

$$\beta_k(z) = \frac{\alpha_k\, \phi(z;\, \mu_k, \Sigma_k + \sigma^{2} I)}{\sum_{j=1}^{K} \alpha_j\, \phi(z;\, \mu_j, \Sigma_j + \sigma^{2} I)}.$$
To summarize, the overall GMM denoiser is given by

$$D(z) = \frac{1}{p} \sum_{i=1}^{n} P_i^{\top} \sum_{k=1}^{K} \beta_k(P_i z)\, \big(\mu_k + A_k (P_i z - \mu_k)\big). \qquad (9)$$
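The sketch below is one way to implement (9): each circularly extracted patch is denoised with the closed-form MMSE estimate of Theorem 3, and the denoised patches are averaged back. It assumes the GMM parameters are available as NumPy arrays (e.g. from the fitting sketch above) and that the noise variance sigma2 is known; it is written for clarity rather than speed.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mmse_patch(z, alphas, mus, Sigmas, sigma2):
    # Closed-form MMSE estimate of a clean patch under the GMM prior (Theorem 3).
    K, p = mus.shape
    beta = np.array([alphas[k] * multivariate_normal.pdf(z, mus[k], Sigmas[k] + sigma2 * np.eye(p))
                     for k in range(K)])
    beta /= beta.sum()                          # posterior component probabilities
    est = np.zeros(p)
    for k in range(K):
        A_k = Sigmas[k] @ np.linalg.inv(Sigmas[k] + sigma2 * np.eye(p))
        est += beta[k] * (mus[k] + A_k @ (z - mus[k]))
    return est

def gmm_denoise(z, alphas, mus, Sigmas, sigma2):
    # Overall denoiser (9): denoise every patch and average the overlaps.
    n, p = len(z), mus.shape[1]
    out = np.zeros(n)
    for i in range(n):
        idx = np.arange(i, i + p) % n           # circular padding
        out[idx] += mmse_patch(z[idx], alphas, mus, Sigmas, sigma2)
    return out / p                              # each sample lies in exactly p patches
```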
In order to gauge the quality of the GMM denoiser, we perform a denoising experiment on a noisy ECG signal. Specifically, we compare its performance with the following ECG signal denoising schemes: TV [31], NLM [28], and wavelet-$\ell_1$ regularization. In Figure 1, we show a segment of a signal (assumed noiseless) from the Physionet MIT-BIH Arrhythmia Database [32, 33, 34]. The segment has length . We add white Gaussian noise such that the signal-to-noise ratio (SNR) of the noisy signal is dB. The SNR of an estimated signal $\hat{x}$ (here, the denoised signal) with respect to a reference signal $x$ (here, the clean signal) is defined as

$$\mathrm{SNR}(\hat{x}, x) = 20 \log_{10}\! \frac{\|x\|}{\|\hat{x} - x\|} \;\; \mathrm{dB}.$$
A higher SNR value indicates better estimation quality. The denoised signals obtained using the aforementioned schemes are shown in Figure 1. Observe that the GMM denoiser yields the highest SNR of all the methods; its visual quality is considerably better than that of TV and wavelet denoising, and comparable to that of NLM.
For a more extensive comparison, we repeat this experiment for different SNR values of the noisy signal. The SNR values of the denoised signal are noted in Table 1. Again, it is observed that the GMM denoiser outperforms the other denoisers, while NLM is the second-best method. In fact, for high noise levels (SNR of and dB), the gap in performance between GMM and NLM is quite high. A possible explanation is that when the noise level is high, reliable computation of weights for NLM is difficult and can result in spikes in the denoised signal.
Table 1: SNR (dB) of the denoised signals obtained using different methods, for different input (noisy) SNR levels.

Noisy | TV | NLM | Wavelet | GMM
---|---|---|---|---
4 Convergence Analysis
Recall that we would like $D$ to be contractive; however, due to the complexity of the expression in (9), it is difficult to determine whether this is the case. Fortunately, when using it within the larger PnP-PGD framework, we can modify the denoiser to make it contractive using a simple trick. Note that the coefficients $\beta_k(P_i z)$ in (9) are nonnegative and sum to $1$ for each $i$. Consider the situation where we replace them by fixed constants $c_k^{(i)}$ that do not depend on the input, but have the same properties: $c_k^{(i)} \ge 0$ for all $i, k$, and $\sum_{k=1}^{K} c_k^{(i)} = 1$ for all $i$. Then the denoiser becomes an affine function of its input, which we write as $D_W(z) = Wz + b$, where $b$ is a fixed vector and $W$ is given by
$$W = \frac{1}{p} \sum_{i=1}^{n} P_i^{\top} \Big( \sum_{k=1}^{K} c_k^{(i)}\, A_k \Big) P_i. \qquad (10)$$
Theorem 4.
Let $W$ be defined as in (10), where $c_k^{(i)} \ge 0$ for all $i, k$, and $\sum_{k=1}^{K} c_k^{(i)} = 1$ for all $i$. Suppose $n$ is a multiple of $p$. Then the largest eigenvalue of $W$, $\lambda_{\max}(W)$, is strictly less than $1$. Consequently, the denoiser $D_W$ is contractive, with the contraction constant being $\lambda_{\max}(W)$.
The proof is given in the Appendix. Note that the requirement that $n$ be a multiple of $p$ is not too restrictive, since we can pad the signal if it is not. We only need to find suitable coefficients $c_k^{(i)}$ to replace $\beta_k(P_i z)$ in (9). This can be done as follows. We first run a small number $k_0$ of PnP-PGD iterations using the input-dependent coefficients in the denoiser (9), to obtain an intermediate estimate $\hat{x}$. We then set $c_k^{(i)} = \beta_k(P_i \hat{x})$ for all $i$ and $k$; that is, we fix the coefficients at the values obtained from the intermediate point $\hat{x}$. The subsequent PnP-PGD iterations are run using the denoiser $D_W$ with these fixed coefficients. Since $D_W$ is contractive, it follows from Theorem 2 that the subsequent iterates converge to a unique fixed point $x^{*}$; consequently, the PnP-PGD algorithm converges.
The idea behind fixing the coefficients after $k_0$ iterations is that as $k$ increases, $x_k$ is expected to become more refined (in the sense of resembling the unknown signal); therefore, the coefficients computed at iteration $k_0$ are good enough to use for all subsequent iterations as well. In fact, a similar scheme has been used in PnP algorithms for image restoration [35, 24].
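Below is a sketch of the two-phase scheme just described: the mixture coefficients are frozen at a surrogate estimate, and the resulting affine denoiser is used for the remaining PnP-PGD iterations. The function names are ours, the dense construction of $W$ is for illustration only, and `pnp_pgd` refers to the earlier sketch.

```python
import numpy as np
from scipy.stats import multivariate_normal

def frozen_denoiser(x_ref, alphas, mus, Sigmas, sigma2):
    # Freeze the coefficients beta_k(P_i x_ref) at the surrogate x_ref and assemble
    # the affine denoiser z -> W z + b of (10). Dense n x n W is for illustration only.
    n, p = len(x_ref), mus.shape[1]
    K = len(alphas)
    A = [Sigmas[k] @ np.linalg.inv(Sigmas[k] + sigma2 * np.eye(p)) for k in range(K)]
    W, b = np.zeros((n, n)), np.zeros(n)
    for i in range(n):
        idx = np.arange(i, i + p) % n
        z = x_ref[idx]
        c = np.array([alphas[k] * multivariate_normal.pdf(z, mus[k], Sigmas[k] + sigma2 * np.eye(p))
                      for k in range(K)])
        c /= c.sum()                                       # fixed coefficients c_k^(i)
        B_i = sum(c[k] * A[k] for k in range(K))           # per-patch linear filter
        b_i = sum(c[k] * (mus[k] - A[k] @ mus[k]) for k in range(K))
        P = np.zeros((p, n)); P[np.arange(p), idx] = 1.0   # patch extractor P_i
        W += P.T @ B_i @ P
        b += P.T @ b_i
    return W / p, b / p

# Usage (hypothetical): freeze after a few adaptive PnP-PGD iterations, then continue.
# W, b = frozen_denoiser(x_hat, gmm.weights_, gmm.means_, gmm.covariances_, sigma2)
# assert np.linalg.eigvalsh(W).max() < 1.0         # contractivity check (Theorem 4)
# x_rec = pnp_pgd(y, Phi, lambda z: W @ z + b)     # remaining iterations use the fixed denoiser
```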
We note that although the patch denoiser in (8) is an MMSE estimator, the overall signal denoiser (9) is not. Therefore, existing convergence results for PnP with MMSE denoisers, e.g. [36], do not apply to our case. We also make an important remark on the similarities and differences between the proposed GMM denoiser and the denoiser in [37, 30]. Indeed, the idea of our GMM denoiser is inspired by [37, 30]. However, there are a couple of subtle differences:
• The denoiser in [37, 30] is scene-adapted, in the sense that the GMM distribution is tailored to the specific scene being reconstructed. This is possible because the application considered there is hyperspectral image sharpening, in which a complementary image of the same scene is available to obtain training data tailored for that scene. In contrast, in this work, we learn a single GMM distribution that is kept common for all the signals being reconstructed.

• The method used to replace the coefficients $\beta_k(\cdot)$ by fixed values is different in our paper from that in [37, 30]. This is because the approach in [37, 30] fundamentally relies on the availability of a complementary image, and thus cannot be applied to compressive sensing. In particular, in our paper, we take the $c_k^{(i)}$'s to be the coefficients computed from a surrogate signal obtained by running a few iterations of the PnP-PGD algorithm; this idea was inspired by [35]. In [37, 30], on the other hand, the coefficients are obtained from the complementary (multispectral or panchromatic) image.




5 Experimental Results
Database: To validate the proposed PnP-PGD method for ECG CS recovery, we use a subset of the data from the Physionet MIT-BIH Arrhythmia Database [32, 33, 34]. The database contains 48 half-hour excerpts of two-channel ambulatory ECG recordings, obtained from 47 subjects studied by the BIH Arrhythmia Laboratory. Every file consists of two lead recordings sampled at 360 Hz with 11 bits per sample of resolution.
Metrics: We quantify the performance of the proposed method using the following metrics: the SNR, defined in Section 3, and the mean-squared error (MSE), defined as

$$\mathrm{MSE} = \frac{1}{n}\, \|\hat{x} - x\|^{2},$$

where $\hat{x}$ and $x$ are the reconstructed and original ECG signals, respectively. Note that we are treating the Physionet signals as the true signals $x$; in reality, these signals also contain noise, which the above metrics neglect, though at most SNRs the (simulated) additive noise dominates [28].
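Small helpers for the two metrics under the above definitions; the function names are ours.

```python
import numpy as np

def snr_db(x_hat, x_ref):
    # SNR (in dB) of an estimate x_hat with respect to the reference x_ref.
    return 20.0 * np.log10(np.linalg.norm(x_ref) / np.linalg.norm(x_hat - x_ref))

def mse(x_hat, x_ref):
    # Mean-squared error between the reconstructed and original signals.
    return np.mean((x_hat - x_ref) ** 2)
```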
Compared methods: We compare with the following state-of-the-art methods: BSBL-BO [15], BSBL-EM [16], a sparse prior on the wavelet representation of the ECG signal [11], and TV regularization [13]. The parameters of all the methods were tuned to maximize the SNR. The codes were obtained from the publicly available implementations of [5, 15, 16]. All simulations were performed using MATLAB (R2021a) on a quad-core 3.80 GHz machine with 32 GB RAM.
The sensing matrix $\Phi$ is constructed by drawing each entry independently from the standard normal distribution $\mathcal{N}(0, 1)$ and then applying an orthonormalization step to ensure that the rows of $\Phi$ are orthonormal [12]. We trained the GMM on the set of all possible overlapping patches of length $p$ extracted from signal # in the dataset [32], which is of length . The number of GMM components is set to . The training time is found to be s. In all the experiments on CS reconstruction, we terminate the PnP-PGD algorithm after iterations, since we observed that the algorithm stabilizes by then.
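A sketch of this measurement setup: a Gaussian matrix whose rows are orthonormalized (here via a QR factorization, one common way of performing the orthonormalization), together with hypothetical glue code tying it to the earlier sketches. The variable names and values are placeholders.

```python
import numpy as np

def gaussian_sensing_matrix(m, n, seed=0):
    # m x n matrix with i.i.d. N(0, 1) entries, rows orthonormalized via QR.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, m))
    Q, _ = np.linalg.qr(A)                      # Q has orthonormal columns (n x m)
    return Q.T                                  # rows of Phi are orthonormal (m x n)

# Example (hypothetical): compress a length-n ECG segment x and recover it with PnP-PGD.
# Phi = gaussian_sensing_matrix(m, n)
# y = Phi @ x                                   # noiseless measurements, cf. (1)
# denoiser = lambda v: gmm_denoise(v, gmm.weights_, gmm.means_, gmm.covariances_, sigma2)
# x_rec = pnp_pgd(y, Phi, denoiser)
```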
In addition to the denoising results reported in Section 3, the results in the subsequent sections support the claim that GMM is a good prior for modeling ECG signal patches. Note that the entire training process can be done offline.
5.1 Goodness of GMM Modeling
In this experiment, we evaluate the goodness of the GMM model by visualizing some of the eigenvectors of the covariance matrices of the learned GMM. This type of visualization is commonly used in the image processing literature, e.g. [29]. In particular, it is observed in [29] that the eigenvectors corresponding to the largest few eigenvalues (of each covariance matrix) are relatively smooth and capture the large-scale structure of the patches, whereas the eigenvectors corresponding to the smallest few eigenvalues contain many fluctuations and thus capture the local structure. If the GMM is a good model, we expect to see a similar trend for ECG signal patches.
In our case, we fit a GMM with $K$ components to ECG patches of length $p$. In Figure 2, we plot randomly selected eigenvectors (having unit norm) corresponding to the largest few eigenvalues of the fitted GMM covariance matrices. In Figure 3, we show similar plots for the smallest few eigenvalues. The expected trend indeed holds in practice: the richness of textures, fluctuations and other local structures is captured by the signals in Figure 3, while most of the large-scale structure is captured by the signals in Figure 2.
5.2 Recovery from Noiseless Measurements
We study the signal reconstruction performance of our method (under zero measurement noise), especially when the number of measurements $m$ is much smaller than $n$. In Figure 4, we show a segment of the original ECG signal from [33] and its reconstruction obtained using the proposed method with measurements. The signal recovered using the GMM denoiser is visually similar to those produced by the other methods, and has a higher SNR.
5.3 Study of SNR for Different $m$
We next perform an exhaustive experiment where we vary the number of measurements $m$ while keeping the length $n$ of the signal fixed. The signal of length is used for the experiment. The results are reported in Figure 6. Each SNR value shown is obtained by averaging over independent trials. We note that the proposed method achieves the best recovery among all the methods.

5.4 Study of Average SNR over Different Signals
In this section, we study the performance of the proposed method on various ECG signals from the MIT-BIH database [33]. We consider a larger test set of ten signals (#100 to #109), shown in Figure 5, for performance evaluation. All the signals are of length . We also compare with a more recent method, ECGLet [38], in this section. We measure the average SNR (over all the test signals) of the reconstructed signals for different values of $m$. The results are reported in Figure 7. Note that the proposed method yields the highest SNR among all the methods considered, for every $m$.

5.5 Effect of Compression Ratio
In this section, we investigate the effect of the compression ratio (CR) on the quality of the reconstructed ECG signals. The CR is defined as

$$\mathrm{CR} = \frac{n - m}{n} \times 100\,\%, \qquad (11)$$

where $n$ is the length of the original signal and $m$ is the length of the compressed measurement vector. For each value of CR, we repeated the experiment times, and each time the sensing matrix $\Phi$ was randomly generated [5]. Figure 8 shows the variation of SNR with CR. It is worth noting that we obtain superior performance over the whole range of CR values. The input signal is a segment of an ECG signal from the MIT-BIH database [33].



5.6 Recovery from Noisy Measurements
The previous experiments show that the proposed method compares favorably with the other methods in the noiseless scenario, i.e. when $w = 0$ in (1). We now examine the recovery performance of our method from noisy compressed measurements. The signal of length and is used for this experiment. In Figure 9, we plot the MSE of the recovered signal as a function of the noise level in the input (specified in terms of the SNR of the input). The reported values are averaged over independent trials. To simulate the noisy measurements, we followed the approach in [28]. It is evident that the proposed method produces quality reconstructions from noisy measurements. Finally, we show a reconstruction result from noisy measurements in Figure 10. In this experiment, we add random Gaussian noise to the compressed measurements. None of the three methods in Figure 10 is able to completely mitigate the effect of noise. However, our method performs the best by a significant margin, resulting in an SNR of dB for the recovered signal, which is visually similar to the original signal. We observed that in low input-SNR scenarios, TV acts as a better denoiser than GMM, which might explain why the CS reconstruction performance of TV is higher than that of the proposed method when the input SNR is low.
5.7 Numerical Convergence
In this section, we numerically verify the convergence of the proposed PnP-PGD algorithm. For all the signals in Figure 5, we use PnP-PGD for reconstruction from measurements. As explained in Section 4, we take the surrogate signal to be the iterate obtained after running the algorithm for iterations. The plots of $\|x_{k+1} - x_k\|$ as a function of $k$ are shown in Figure 11 for each of the signals. Note that $\|x_{k+1} - x_k\|$ decays to $0$ as $k$ increases, which is a necessary condition for convergence.
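The residuals plotted here can be logged with a small variant of the earlier PnP-PGD sketch; the function name is ours.

```python
import numpy as np

def pnp_pgd_residuals(y, Phi, denoiser, n_iter=100):
    # Run PnP-PGD and record ||x_{k+1} - x_k|| at every iteration.
    gamma = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = Phi.T @ y
    residuals = []
    for _ in range(n_iter):
        x_new = denoiser(x - gamma * Phi.T @ (Phi @ x - y))
        residuals.append(np.linalg.norm(x_new - x))
        x = x_new
    return x, residuals
```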

Table 2: Variation of SNR (dB) with CR for SDAE and the proposed (GMM) method.

CR | 25 | 37.5 | 50 | 62.5 | 75 | 87.5 | 93.75
---|---|---|---|---|---|---|---
SDAE | 23.56 | 22.29 |||||
GMM | 42.85 | 39.56 | 37.18 | 33.29 | 27.16

5.8 Comparison with Deep Learning
For completeness, we compare the proposed method with a deep learning method for CS reconstruction, known as SDAE [39]. In this experiment, we use the non-invasive fetal ECG dataset from Physionet, which was also used in [39]. This database contains a series of 55 multichannel abdominal non-invasive fetal electrocardiogram (FECG) recordings, taken from a single subject between 21 and 40 weeks of pregnancy. We conducted the experiment on the signal with record id ecgca154. Table 2 shows the variation of SNR with CR for the proposed method and SDAE. We note that the proposed method outperforms SDAE in all cases except when the CR is very high.
5.9 Stable Recovery
We now show that the proposed method produces very stable reconstructions. For this experiment, we considered the signal with measurements. We ran trials of the method, so that its stability can be observed for different realizations of the sensing matrix $\Phi$. In Figure 12, we show the SNR variation for BSBL-BO (blue) and the proposed method (red). We observed a standard deviation of dB in SNR for BSBL-BO and dB for the proposed method. Thus, the proposed method is more stable than BSBL-BO. In fact, we observed that the contrast in stability is more pronounced for smaller $m$.
6 Conclusion
We introduced a novel framework for recovering ECG signals from compressively sensed measurements. Our method is based on the plug-and-play (PnP) paradigm that has recently become popular for image restoration problems. Essentially, the recovery method consists of repeating two main steps – inverting the forward model, and denoising – until stability is attained. We designed a high-quality ECG signal denoiser to be used in the denoising step. Moreover, we proved that the recovery algorithm is guaranteed to converge. Importantly, we showed via numerical experiments that our proposed method is superior to current state-of-the-art methods used for ECG CS recovery.
7 Appendix
7.1 Proof of Theorem 2
Since $\nabla f(x) = \Phi^{\top}(\Phi x - y)$, we can write the PnP-PGD algorithm as $x_{k+1} = T(x_k)$, where

$$T(x) = D\big(x - \gamma\, \Phi^{\top}(\Phi x - y)\big).$$

It is enough to prove that the function $T$ is contractive, since the convergence of $(x_k)$ to some unique fixed point at a linear rate would then follow from the Banach Fixed Point Theorem [23, Th. 9.23].

Let $\rho \in (0, 1)$ be the constant in Definition 1. Then for any $x, x' \in \mathbb{R}^{n}$, we have

$$\|T(x) - T(x')\| \le \rho\, \big\|(I - \gamma\, \Phi^{\top}\Phi)(x - x')\big\|.$$

Let $G = I - \gamma\, \Phi^{\top}\Phi$. Since $\Phi^{\top}\Phi$ is positive semidefinite, its singular values are also its eigenvalues. In particular, its eigenvalues lie in $[0, \sigma_{\max}(\Phi)^{2}]$. Since $\gamma < 2/\sigma_{\max}(\Phi)^{2}$, the eigenvalues of $G$ lie in $(-1, 1]$. Therefore, $\|G\| \le 1$. Thus, for all $x, x'$ we have

$$\|T(x) - T(x')\| \le \rho\, \|G\|\, \|x - x'\| \le \rho\, \|x - x'\|.$$

Since $\rho < 1$, the function $T$ is contractive.
7.2 Proof of Theorem 4
A proof can be found in [30, Appendix B]; for completeness, we give a different and more concise proof here. Note that each $A_k = \Sigma_k(\Sigma_k + \sigma^{2} I)^{-1}$ is symmetric positive semidefinite (p.s.d.); hence, for each $i$, the matrix $B_i = \sum_{k=1}^{K} c_k^{(i)} A_k$ is p.s.d. (being a convex combination of p.s.d. matrices). By the same logic, $W$ is p.s.d. Thus, to show $\lambda_{\max}(W) < 1$, we only need to show that $x^{\top} W x \le \eta\, \|x\|^{2}$ for all $x \in \mathbb{R}^{n}$, for some constant $\eta < 1$.

To prove this, first note that $\lambda_{\max}(A_k) < 1$ for all $k$, since each $\Sigma_k$ is positive definite and $\sigma^{2} > 0$. Since each $B_i$ is a convex combination of the $A_k$'s and since $\lambda_{\max}(\cdot)$ is a convex function on the set of symmetric matrices, we have $\lambda_{\max}(B_i) < 1$ for all $i$. Let

$$\eta = \max_{1 \le i \le n} \lambda_{\max}(B_i).$$

Clearly, $\eta < 1$. Note that for each $i$, we have $(P_i x)^{\top} B_i (P_i x) \le \eta\, \|P_i x\|^{2}$. Therefore, for any $x \in \mathbb{R}^{n}$,

$$x^{\top} W x = \frac{1}{p} \sum_{i=1}^{n} (P_i x)^{\top} B_i (P_i x) \le \frac{\eta}{p} \sum_{i=1}^{n} \|P_i x\|^{2} = \frac{\eta}{p}\, x^{\top} \Big( \sum_{i=1}^{n} P_i^{\top} P_i \Big) x.$$

Since $n$ is a multiple of $p$, $\sum_{i=1}^{n} P_i^{\top} P_i\, x$, which is the sum of all (re-embedded) patches of length $p$ extracted from $x$ (using circular padding), is simply equal to $p\, x$. Thus, we get that $x^{\top} W x \le \eta\, x^{\top} x$ for all $x$, i.e. $\lambda_{\max}(W) \le \eta < 1$.
References
- [1] S. Ahmad, S. Chen, K. Soueidan, I. Batkin, M. Bolic, H. Dajani, V. Groza, Electrocardiogram-assisted blood pressure estimation, IEEE Transactions on Biomedical Engineering 59 (3) (2012) 608–618.
- [2] L. Pecchia, P. Melillo, M. Sansone, M. Bracale, Discrimination power of short-term heart rate variability measures for CHF assessment, IEEE Transactions on Information Technology in Biomedicine 15 (1) (2010) 40–46.
- [3] M. I. Owis, A. H. Abou-Zied, A.-B. Youssef, Y. M. Kadah, Study of features based on nonlinear dynamical modeling in ECG arrhythmia detection and classification, IEEE Transactions on Biomedical Engineering 49 (7) (2002) 733–736.
- [4] H. Mamaghanian, N. Khaled, D. Atienza, P. Vandergheynst, Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes, IEEE Transactions on Biomedical Engineering 58 (9) (2011) 2456–2466.
- [5] Z. Zhang, T. P. Jung, S. Makeig, B. D. Rao, Compressed sensing for energy-efficient wireless telemonitoring of noninvasive fetal ECG via block sparse bayesian learning, IEEE Transactions on Biomedical Engineering 60 (2) (2012) 300–309.
- [6] J. K. Pant, S. Krishnan, Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning, IEEE Transactions on Biomedical Circuits and Systems 8 (2) (2013) 293–302.
- [7] A. Salman, E. G. Allstot, A. Y. Chen, A. M. Dixon, D. Gangopadhyay, D. J. Allstot, Compressive sampling of EMG bio-signals, Proc. IEEE International Symposium of Circuits and Systems (2011) 2095–2098.
- [8] A. M. Dixon, E. G. Allstot, D. Gangopadhyay, D. J. Allstot, Compressed sensing system considerations for ECG and EMG wireless biosensors, IEEE Transactions on Biomedical Circuits and Systems 6 (2) (2012) 156–166.
- [9] S. Aviyente, Compressed sensing framework for EEG compression, Proc. IEEE/SP 14th Workshop on Statistical Signal Processing (2007) 181–184.
- [10] M. Lustig, D. Donoho, J. M. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging, Magnetic Resonance in Medicine 58 (6) (2007) 1182–1195.
- [11] L. F. Polania, K. E. Barner, A weighted $\ell_1$ minimization algorithm for compressed sensing ECG, Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (2014) 4413–4417.
- [12] J. K. Pant, W.-S. Lu, A. Antoniou, New improved algorithms for compressive sensing based on $\ell_p$ norm, IEEE Transactions on Circuits and Systems II: Express Briefs 61 (3) (2014) 198–202.
- [13] Y. Liu, M. De Vos, I. Gligorijevic, V. Matic, Y. Li, S. Van Huffel, Multi-structural signal recovery for biomedical compressive sensing, IEEE Transactions on Biomedical Engineering 60 (10) (2013) 2794–2805.
- [14] J. K. Pant, S. Krishnan, Reconstruction of ECG signals for compressive sensing by promoting sparsity on the gradient, Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (2013) 993–997.
- [15] Z. Zhang, B. D. Rao, Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning, IEEE Journal of Selected Topics in Signal Processing 5 (5) (2011) 912–926.
- [16] Z. Zhang, B. D. Rao, Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation, IEEE Transactions on Signal Processing 61 (8) (2013) 2009–2015.
- [17] A. Beck, First-Order Methods in Optimization, SIAM, Philadelphia, PA, USA, 2017.
- [18] A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM Journal on Imaging Sciences 2 (1) (2009) 183–202.
- [19] S. V. Venkatakrishnan, C. A. Bouman, B. Wohlberg, Plug-and-play priors for model based reconstruction, Proc. IEEE Global Conference on Signal and Information Processing (2013) 945–948.
- [20] U. S. Kamilov, H. Mansour, B. Wohlberg, A plug-and-play priors approach for solving nonlinear imaging inverse problems, IEEE Signal Processing Letters 24 (12) (2017) 1872–1876.
- [21] A. Buades, B. Coll, J. M. Morel, A non-local algorithm for image denoising, Proc. IEEE Computer Vision and Pattern Recognition 2 (2005) 60–65.
- [22] K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE Transactions on Image Processing 16 (8) (2007) 2080–2095.
- [23] W. Rudin, Principles of Mathematical Analysis, New York: McGraw-Hill, 1976.
- [24] P. Nair, R. G. Gavaskar, K. N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms, IEEE Transactions on Computational Imaging 7 (2021) 337–348.
- [25] E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers, International Conference on Machine Learning (2019) 5546–5557.
- [26] S. Chatterjee, R. S. Thakur, R. N. Yadav, L. Gupta, D. K. Raghuvanshi, Review of noise removal techniques in ECG signals, IET Signal Processing 14 (9) (2020) 569–590.
- [27] S. Kumar, D. Panigrahy, P. K. Sahu, Denoising of Electrocardiogram (ECG) signal by using empirical mode decomposition (EMD) with non-local mean (NLM) technique, Biocybernetics and Biomedical Engineering 38 (2) (2018) 297–312.
- [28] B. H. Tracey, E. L. Miller, Nonlocal means denoising of ECG signals, IEEE Transactions on Biomedical Engineering 59 (9) (2012) 2383–2386.
- [29] D. Zoran, Y. Weiss, From learning models of natural image patches to whole image restoration, Proc. International Conference on Computer Vision (2011) 479–486.
- [30] A. M. Teodoro, J. M. Bioucas-Dias, M. A. Figueiredo, A convergent image fusion algorithm using scene-adapted Gaussian-mixture-based denoising, IEEE Transactions on Image Processing 28 (1) (2018) 451–463.
- [31] L. Condat, A direct algorithm for 1-D total variation denoising, IEEE Signal Processing Letters 20 (11) (2013) 1054–1057.
- [32] A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C. K. Peng, H. E. Stanley, PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals, Circulation 101 (23) (2000) e215–e220.
- [33] G. B. Moody, R. G. Mark, The impact of the MIT-BIH arrhythmia database, IEEE Engineering in Medicine and Biology Magazine 20 (3) (2001) 45–50.
- [34] Z. Lu, D. Y. Kim, W. A. Pearlman, Wavelet compression of ECG signals by the set partitioning in hierarchical trees algorithm, IEEE Transactions on Biomedical Engineering 47 (7) (2000) 849–856.
- [35] S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, C. A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation, IEEE Transactions on Computational Imaging 2 (4) (2016) 408–423.
- [36] X. Xu, Y. Sun, J. Liu, B. Wohlberg, U. S. Kamilov, Provable convergence of plug-and-play priors with MMSE denoisers, IEEE Signal Processing Letters 27 (2020) 1280–1284.
- [37] A. M. Teodoro, J. M. Bioucas-Dias, M. A. Figueiredo, Scene-adapted plug-and-play algorithm with convergence guarantees, IEEE International Workshop on Machine Learning for Signal Processing (2017) 1–6.
- [38] N. Ansari, A. Gupta, WNC-ECGlet: Weighted non-convex minimization based reconstruction of compressively transmitted ECG using ECGlet, Biomedical Signal Processing and Control 49 (2019) 1–13.
- [39] P. R. Muduli, R. R. Gunukula, A. Mukherjee, A deep learning approach to fetal-ECG signal reconstruction, Proc. National Conference on Communications (2016) 1–6.