Learned Multi-layer Residual Sparsifying Transform Model for Low-dose CT Reconstruction
Abstract
Signal models based on sparse representation have received considerable attention in recent years. Compared to synthesis dictionary learning, sparsifying transform learning involves highly efficient sparse coding and operator update steps. In this work, we propose a Multi-layer Residual Sparsifying Transform (MRST) learning model wherein the transform domain residuals are jointly sparsified over layers. In particular, the transforms for the deeper layers exploit the more intricate properties of the residual maps. We investigate the application of the learned MRST model for low-dose CT reconstruction using Penalized Weighted Least Squares (PWLS) optimization. Experimental results on Mayo Clinic data show that the MRST model outperforms conventional methods such as FBP and PWLS methods based on an edge-preserving (EP) regularizer or a learned single-layer transform (ST) model, particularly in preserving subtle image details.
Index Terms:
Low-dose CT, Statistical image reconstruction, Sparse representation, Transform learning, Unsupervised learning.

I Introduction
Signal models exploiting sparsity have been shown to be useful in a variety of applications such as compression, restoration, denoising, reconstruction, etc. Natural signals can be modeled as sparse in a synthesis dictionary (i.e., represented as linear combinations of few dictionary atoms or columns) or in a sparsifying transform domain.
Transforms such as wavelets [1] and the discrete cosine transform (DCT) are well-known to sparsify images. Methods such as synthesis dictionary learning [2] and analysis dictionary learning [3] allow adapting the model to the data. These dictionary learning problems are NP-hard in general and involve computationally expensive alternating-type algorithms that limit applicability to large-scale data. In contrast, the recently proposed sparsifying transform learning approaches [4] involve exact and highly efficient updates. In particular, the transform model approximates the signal as sparse in a transformed domain. Adopting a multi-layer sparsifying transform model enables sparsifying an input image successively over layers [5], creating a rich and more complete sparsity model, which forms the core of this work.
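To make the transform-sparsity idea concrete, here is a small illustrative sketch (not code from this work): a piecewise-constant patch is sparse under the 2D DCT, so hard-thresholding its coefficients keeps only a few nonzeros while reconstructing the patch almost exactly. The patch and threshold value are arbitrary choices for the example.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Piecewise-constant 8x8 patch: a vertical step edge.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0

coef = dctn(patch, norm="ortho")                    # 2D DCT coefficients
sparse = np.where(np.abs(coef) >= 0.1, coef, 0.0)   # hard-threshold small entries
recon = idctn(sparse, norm="ortho")

print(np.count_nonzero(sparse), "of", coef.size, "coefficients kept")
print("reconstruction error:", np.linalg.norm(recon - patch))
```

Only a handful of the 64 coefficients survive the threshold, yet the patch is recovered to numerical precision, which is exactly the sparsity property that learned transforms generalize beyond fixed analytical bases.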
One of the most important applications of such image models is medical image reconstruction. In particular, an important problem in computed tomography (CT) is reducing the X-ray exposure to patients while maintaining good image reconstruction quality. A conventional method for CT reconstruction is the analytical filtered back-projection (FBP) [6]. However, image quality degrades severely for FBP when the radiation dose is reduced. In contrast, model-based image reconstruction (MBIR) achieves comparable or better image quality [7].
A typical MBIR method for low-dose CT (LDCT) is the penalized weighted least squares (PWLS) approach. The cost function for PWLS includes a weighted quadratic data-fidelity term and a penalty term or regularizer capturing prior information or a model of the object [8, 9]. Recent works have shown promising LDCT reconstruction quality by incorporating data-driven models into the regularizer, where the models are learned from datasets of images or image patches. In particular, PWLS reconstruction with adaptive sparsifying transform-based regularization has shown promise for tomographic reconstruction [10, 11, 12]. Recent work has also shown that such methods may generalize better to new data than supervised deep learning schemes [13]. The adaptive transform-based image reconstruction algorithms can exploit a variety of image models [10, 11, 14] learned in an unsupervised manner, and involve efficient closed-form solutions for sparse coding.
In this work, we propose a new formulation and algorithm for learning a multi-layer transform model [5], where the transform domain residuals (the difference between transformed data and their sparse approximations) are successively sparsified over several layers. We refer to the model as a multi-layer residual sparsifying transform (MRST) model. The transforms are learned over several layers from images to jointly minimize the transform domain residuals across layers, while enforcing sparsity conditions in each layer. Importantly, the filters in deeper layers can help better exploit finer features (e.g., edges and correlations) in the residual maps. We investigate the performance of the (unsupervised) learned MRST model for LDCT reconstruction using PWLS. We propose efficient alternating minimization algorithms for both learning and reconstruction. Experimental results on Mayo Clinic data illustrate that the learned MRST model outperforms the conventional FBP as well as PWLS methods based on the non-adaptive edge-preserving (EP) regularizer and a learned single-layer transform model, particularly in preserving subtle details. In the following sections, we first discuss the proposed model and formulation, followed by our algorithms and experimental results on Mayo Clinic data.
II Problem Formulation
Here, we will introduce the proposed general multi-layer learning framework and the formulation for LDCT image reconstruction. The MRST learning cost and constraints are shown in Problem (P0), which is an extension of simple single-layer transform learning.
(P0)
$$\min_{\{\mathbf{\Omega}_l,\mathbf{Z}_l\}}\ \sum_{l=1}^{L}\left\{\left\|\mathbf{\Omega}_l\mathbf{R}_l-\mathbf{Z}_l\right\|_F^2+\eta_l^2\left\|\mathbf{Z}_l\right\|_0\right\}\quad \text{s.t.}\quad \mathbf{\Omega}_l^T\mathbf{\Omega}_l=\mathbf{I}\ \ \forall\, l,$$
$$\mathbf{R}_1=\mathbf{Y},\qquad \mathbf{R}_l=\mathbf{\Omega}_{l-1}\mathbf{R}_{l-1}-\mathbf{Z}_{l-1},\quad 2\le l\le L.$$
Here, $\{\mathbf{\Omega}_l\}$ and $\{\mathbf{Z}_l\}$ denote the learned transforms and sparse coefficient maps for the $L$ layers, respectively. Parameter $\eta_l$ controls the sparsity level (measured with the $\ell_0$ "norm" penalty) of $\mathbf{Z}_l$. The residual maps $\mathbf{R}_l$ are defined in recursive form over layers, with $\mathbf{R}_1=\mathbf{Y}$ denoting the input training data. Here, we assume $\mathbf{Y}\in\mathbb{R}^{n\times N}$ to be a matrix whose columns are (vectorized) patches drawn from image data sets. The unitary constraints on $\{\mathbf{\Omega}_l\}$ enable closed-form solutions for the sparse coefficient and transform update steps in our algorithms. The learned MRST model can then be used to construct a data-driven regularizer in PWLS, as shown in Problem (P1).
(P1)
$$\min_{\mathbf{x}\ge\mathbf{0}}\ \frac{1}{2}\left\|\mathbf{y}-\mathbf{A}\mathbf{x}\right\|_{\mathbf{W}}^2+\beta\,\mathsf{S}(\mathbf{x}),$$
$$\mathsf{S}(\mathbf{x})=\min_{\{\mathbf{Z}_l\}}\ \sum_{l=1}^{L}\left\{\left\|\mathbf{\Omega}_l\mathbf{R}_l-\mathbf{Z}_l\right\|_F^2+\gamma_l^2\left\|\mathbf{Z}_l\right\|_0\right\},\qquad \mathbf{R}_1=\left[\mathbf{P}_1\mathbf{x},\,\mathbf{P}_2\mathbf{x},\,\ldots,\,\mathbf{P}_N\mathbf{x}\right].$$
In particular, we reconstruct the image $\mathbf{x}$ from noisy sinogram data $\mathbf{y}$ by solving (P1), where $\mathbf{A}$ is the system matrix of the CT scan and $\mathbf{W}$ is the diagonal weighting matrix whose elements are the estimated inverse variances of the measurements. Operator $\mathbf{P}_j$ extracts the $j$th patch of $\mathbf{x}$ as $\mathbf{P}_j\mathbf{x}$. The $j$th columns of $\mathbf{R}_l$ and $\mathbf{Z}_l$ are denoted $\mathbf{r}_{l,j}$ and $\mathbf{z}_{l,j}$, respectively. The non-negative parameters $\{\gamma_l\}$ control the sparsity of the coefficient maps in the different layers, and $\beta$ captures the relative trade-off between the data-fidelity term and the regularizer.
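For concreteness, the residual recursion and the learning objective can be sketched in NumPy. This is an illustrative evaluation only, under stated assumptions: the transforms are random unitary matrices rather than learned ones, and the per-layer codes are obtained greedily by hard-thresholding each layer in sequence rather than by the exact joint block coordinate descent updates of Section III.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_threshold(x, t):
    """H_t: zero out entries with magnitude below the threshold t."""
    return np.where(np.abs(x) >= t, x, 0.0)

def mrst_objective(Y, transforms, thresholds):
    """Evaluate the multi-layer residual objective:
    R_1 = Y, Z_l = H(Omega_l R_l), R_{l+1} = Omega_l R_l - Z_l,
    cost = sum_l ||Omega_l R_l - Z_l||_F^2 + eta_l^2 ||Z_l||_0."""
    R, cost = Y, 0.0
    for Omega, eta in zip(transforms, thresholds):
        Z = hard_threshold(Omega @ R, eta)      # greedy per-layer codes
        cost += np.linalg.norm(Omega @ R - Z) ** 2 + eta**2 * np.count_nonzero(Z)
        R = Omega @ R - Z                       # residual fed to the next layer
    return cost

# Toy setup: 64-dimensional patches, 3 layers, random unitary transforms
# (the Q factor of a Gaussian matrix is orthogonal).
n, N, L = 64, 200, 3
Y = rng.standard_normal((n, N))
transforms = [np.linalg.qr(rng.standard_normal((n, n)))[0] for _ in range(L)]
print(mrst_objective(Y, transforms, thresholds=[1.0, 0.5, 0.25]))
```

A useful sanity check of the recursion: with an arbitrarily large threshold every $\mathbf{Z}_l$ is zero, and since the transforms are unitary the cost reduces to $L\|\mathbf{Y}\|_F^2$.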
III Algorithms for Learning and Reconstruction
III-A Transform Learning Algorithm
We propose an exact block coordinate descent (BCD) algorithm for the nonconvex Problem (P0) that cycles over updating each $\mathbf{Z}_l$ for $1\le l\le L$ (sparse coding step) and each $\mathbf{\Omega}_l$ for $1\le l\le L$ (transform update step). In each step, the remaining variables (those not being optimized) are kept fixed. The exact BCD algorithm ensures that the objective in (P0) is monotone decreasing over iterations and that it converges. In particular, under the unitarity condition on the transforms, every subproblem in the block coordinate descent minimization approach can be solved exactly. We initialize the algorithm with the 2D DCT for $\mathbf{\Omega}_1$ and the identity matrix for the deeper-layer transforms $\{\mathbf{\Omega}_l\}_{l=2}^{L}$, respectively. The initial $\{\mathbf{Z}_l\}$ are all-zero matrices. Since the residuals are defined recursively in (P0), to simplify the algorithmic description, we first define matrices $\mathbf{B}_l^{p}$ in (1), which can be regarded as backpropagation matrices from the $p$th to the $l$th layers.
(1)
$$\mathbf{B}_l^{p}\triangleq \mathbf{\Omega}_{p}\mathbf{\Omega}_{p-1}\cdots\mathbf{\Omega}_{l+1}\ \ (p>l),\qquad \mathbf{B}_l^{l}\triangleq \mathbf{I}.$$
III-A1 Sparse Coding Step for $\mathbf{Z}_l$
Here, we solve (P0) for $\mathbf{Z}_l$ with all other variables fixed. The corresponding subproblem is as in (2).
(2)
$$\min_{\mathbf{Z}_l}\ \sum_{p=l}^{L}\left\|\mathbf{\Omega}_p\mathbf{R}_p-\mathbf{Z}_p\right\|_F^2+\eta_l^2\left\|\mathbf{Z}_l\right\|_0.$$
Although the cost function is nonconvex and the deeper residual maps also depend on $\mathbf{Z}_l$ (as in (P0)), we can exploit the unitarity of the transforms to simplify and rewrite Problem (2) as $\min_{\mathbf{Z}_l}\,(L-l+1)\big\|\mathbf{\Omega}_l\mathbf{R}_l-\frac{1}{L-l+1}\sum_{p=l+1}^{L}(L-p+1)\,(\mathbf{B}_l^{p})^T\mathbf{Z}_p-\mathbf{Z}_l\big\|_F^2+\eta_l^2\|\mathbf{Z}_l\|_0$, up to an additive constant. This problem has a similar form as the single-transform sparse coding problem [4], and the optimal solution is obtained as in (3), where $H_{\tau}(\cdot)$ denotes the hard-thresholding operator that sets elements with magnitude less than the threshold $\tau$ to zero.
(3)
$$\hat{\mathbf{Z}}_l=H_{\eta_l/\sqrt{L-l+1}}\left(\mathbf{\Omega}_l\mathbf{R}_l-\frac{1}{L-l+1}\sum_{p=l+1}^{L}(L-p+1)\left(\mathbf{B}_l^{p}\right)^T\mathbf{Z}_p\right).$$
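The elementwise optimality of the hard-thresholding operator used in the sparse coding step can be checked numerically: for each entry, keeping the value incurs the squared-threshold $\ell_0$ penalty, while zeroing it incurs the squared entry, and any other value is dominated by one of these two candidates. A small self-contained check (illustrative only; the array values are arbitrary):

```python
import numpy as np

def hard_threshold(a, t):
    """Solve min_z ||a - z||_2^2 + t^2 * ||z||_0 elementwise:
    keep a_i when |a_i| >= t, otherwise set it to zero."""
    return np.where(np.abs(a) >= t, a, 0.0)

# Compare, per entry, the cost of keeping (t^2) versus zeroing (a_i^2).
a = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
t = 0.5
z = hard_threshold(a, t)
for ai, zi in zip(a, z):
    best = ai if t**2 <= ai**2 else 0.0
    assert zi == best
print(z)
```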
III-A2 Transform Update Step for $\mathbf{\Omega}_l$
Here, we fix $\{\mathbf{Z}_l\}$ and all transforms except the target $\mathbf{\Omega}_l$ in (P0), and solve the following subproblem:

(4)
$$\min_{\mathbf{\Omega}_l}\ \left\|\mathbf{\Omega}_l\mathbf{R}_l-\left(\mathbf{Z}_l+\frac{1}{L-l+1}\sum_{p=l+1}^{L}(L-p+1)\left(\mathbf{B}_l^{p}\right)^T\mathbf{Z}_p\right)\right\|_F^2\quad \text{s.t.}\quad \mathbf{\Omega}_l^T\mathbf{\Omega}_l=\mathbf{I}.$$

Denoting by $\mathbf{M}_l$ the matrix in parentheses in (4) and by $\mathbf{U}\mathbf{\Sigma}\mathbf{V}^T$ the full singular value decomposition (SVD) of $\mathbf{R}_l\mathbf{M}_l^T$, the optimal solution is

(5)
$$\hat{\mathbf{\Omega}}_l=\mathbf{V}\mathbf{U}^T.$$
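The unitary-constrained transform update is an orthogonal Procrustes-type problem with a closed-form SVD solution. The sketch below (illustrative, with a random toy setup; `unitary_transform_update` is our own helper name, not from this work) verifies the closed form by recovering a known unitary transform from noiseless targets:

```python
import numpy as np

def unitary_transform_update(R, M):
    """Closed-form solution of min_W ||W R - M||_F^2  s.t.  W^T W = I.
    With the full SVD R M^T = U S V^T, the minimizer is W = V U^T."""
    U, _, Vt = np.linalg.svd(R @ M.T)
    return Vt.T @ U.T

rng = np.random.default_rng(1)
n, N = 16, 100
R = rng.standard_normal((n, N))            # toy "residual" matrix
W_true = np.linalg.qr(rng.standard_normal((n, n)))[0]   # ground-truth unitary W
M = W_true @ R                             # noiseless sparse-code targets
W = unitary_transform_update(R, M)
print(np.allclose(W, W_true))
```

In the noiseless full-rank case the update recovers the generating transform exactly, and the returned matrix is unitary by construction.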
III-B Image Reconstruction Algorithm
The proposed PWLS-MRST algorithm for low-dose CT image reconstruction exploits the learned MRST model. We reconstruct the image by solving the PWLS problem (P1). We propose a block coordinate descent (BCD) algorithm for (P1) that cycles over updating the image $\mathbf{x}$ and each of the sparse coefficient maps $\mathbf{Z}_l$ for $1\le l\le L$.
III-B1 Image Update Step for $\mathbf{x}$
First, with the sparse coefficient maps $\{\mathbf{Z}_l\}$ fixed, we optimize for $\mathbf{x}$ in (P1) by solving the following subproblem:
(6)
$$\min_{\mathbf{x}\ge\mathbf{0}}\ \frac{1}{2}\left\|\mathbf{y}-\mathbf{A}\mathbf{x}\right\|_{\mathbf{W}}^2+\beta\sum_{l=1}^{L}\left\|\mathbf{\Omega}_l\mathbf{R}_l-\mathbf{Z}_l\right\|_F^2,$$
where $\mathbf{R}_1=[\mathbf{P}_1\mathbf{x},\ldots,\mathbf{P}_N\mathbf{x}]$ and the deeper residuals are defined recursively as in (P0), with $\mathsf{R}(\mathbf{x})\triangleq\beta\sum_{l=1}^{L}\|\mathbf{\Omega}_l\mathbf{R}_l-\mathbf{Z}_l\|_F^2$ denoting the regularizer term. We use the efficient relaxed linearized augmented Lagrangian method with ordered subsets (relaxed OS-LALM) algorithm [15] to obtain the solution to (6). The algorithmic details are shown in Algorithm 1. In each iteration of the OS-LALM method, we update the image $M$ times, corresponding to the $M$ ordered subsets. The matrices $\mathbf{A}_m$ and $\mathbf{W}_m$ and the vector $\mathbf{y}_m$ are sub-matrices of $\mathbf{A}$ and $\mathbf{W}$, and a sub-vector of $\mathbf{y}$, for the $m$th subset, respectively. Matrix $\mathbf{D}_{\mathbf{A}}$ represents a diagonal majorizing matrix of $\mathbf{A}^T\mathbf{W}\mathbf{A}$. We precompute the Hessian matrix of $\mathsf{R}(\mathbf{x})$ as in (8) to accelerate the algorithm, and the gradient of $\mathsf{R}(\mathbf{x})$ is as in (7).
(7)
$$\nabla \mathsf{R}(\mathbf{x})=2\beta\sum_{j=1}^{N}\mathbf{P}_j^T\left[\sum_{l=1}^{L}\left(\mathbf{\Omega}_l\mathbf{\Omega}_{l-1}\cdots\mathbf{\Omega}_1\right)^T\left(\mathbf{\Omega}_l\mathbf{R}_l-\mathbf{Z}_l\right)\right]_j,\qquad \text{where } [\cdot]_j \text{ denotes the } j\text{th column},$$
(8)
$$\nabla^2 \mathsf{R}(\mathbf{x})=2\beta L\sum_{j=1}^{N}\mathbf{P}_j^T\mathbf{P}_j.$$
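The precomputed Hessian is cheap because, for standard patch-extraction operators, $\sum_j \mathbf{P}_j^T\mathbf{P}_j$ is a diagonal matrix whose entries simply count how many patches cover each pixel. A small sketch under this assumption (illustrative sizes; `patch_coverage` is our own helper name):

```python
import numpy as np

def patch_coverage(img_shape, patch, stride):
    """Diagonal of sum_j P_j^T P_j for 2D overlapping patch extraction:
    entry (i, k) counts how many patches contain pixel (i, k)."""
    cover = np.zeros(img_shape)
    H, W = img_shape
    for i in range(0, H - patch + 1, stride):
        for k in range(0, W - patch + 1, stride):
            cover[i:i + patch, k:k + patch] += 1.0
    return cover

# 6x6 image, 2x2 patches, stride 1: corner pixels lie in one patch,
# interior pixels in four, so the matrix is diagonal but not scalar.
cover = patch_coverage((6, 6), patch=2, stride=1)
print(cover)
```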
III-B2 Sparse Coding Step for $\mathbf{Z}_l$
Similar to the sparse coding step during transform learning, with the image $\mathbf{x}$ fixed, the subproblem of (P1) with respect to each sparse coefficient map $\mathbf{Z}_l$ is (9), and its solution is given in (10).
(9) | |||
(10) |
IV Experimental Results
In this section, we evaluate the image reconstruction quality of the proposed PWLS-MRST algorithm and compare it with conventional FBP and the PWLS-EP (edge-preserving) method [16].
IV-A Transform Learning
To better understand the potential of the MRST model, we vary the number of layers and pre-learn transforms for ST, MRST2, MRST3, MRST5, and MRST7, which have 1, 2, 3, 5, and 7 layers, respectively. We used 7 slices of the Mayo Clinic data to train the models. For each model, we ran 1000 to 2000 iterations of the learning algorithm to ensure convergence. Fig. 1 shows some of the learned transforms, with each transform matrix row displayed as a square patch for simplicity. The single-layer transform displays edge-like and directional structures that sparsify the image. With more layers, however, finer-level features are learned that sparsify the transform domain residuals in the deeper layers. Nonetheless, transforms in deep layers can be more easily contaminated by noise in the training data, since the main image features are successively filtered out over the layers.
[Fig. 1: Learned sparsifying transforms for the ST and MRST models.]
IV-B Image Reconstruction
Here, we simulated low-dose CT measurements from full-dose CT images in the Mayo Clinic dataset with the GE 2D LightSpeed fan-beam geometry, corresponding to a monoenergetic source with a fixed number of incident photons per ray and no scatter. We ran 100 iterations of the PWLS-EP algorithm with FBP reconstructions as initializations, using the relaxed OS-LALM algorithm with ordered subsets. For the MRST models, we used the OS-LALM algorithm for the image update step with several inner iterations and ordered subsets, and ran more outer iterations for the deeper models (MRST3, MRST5, MRST7) than for the shallower ones (ST, MRST2). We set the regularization parameters for ST, MRST2, MRST3, MRST5, and MRST7 after tuning over ranges of values.
To compare the performance of the various models quantitatively, we compute the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) of the reconstructions in a region of interest (ROI). The RMSE in Hounsfield units (HU) between the ground truth image and the reconstructed image is defined as $\text{RMSE}=\sqrt{\sum_{i=1}^{N_p}(\hat{x}_i-x_i^{*})^2/N_p}$, where $\hat{x}_i$ and $x_i^{*}$ denote the pixel intensities of the reconstructed and ground truth images, respectively, and $N_p$ is the number of pixels in the ROI. The ROI here was a circular region (around the center) containing all the phantom tissues. Since the full-dose Mayo Clinic CT images still contain considerable noise, we cannot rely solely on RMSE and PSNR as metrics of image quality, as they also measure agreement with that noise. SSIM helps better evaluate the preservation of structural details in the reconstructed image.
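The RMSE-over-ROI computation described above can be sketched as follows (illustrative; the mask radius and HU values are arbitrary, and `rmse_hu`/`circular_roi` are hypothetical helper names, not from this work):

```python
import numpy as np

def circular_roi(shape, radius):
    """Boolean mask for a circular ROI centered in the image."""
    H, W = shape
    yy, xx = np.ogrid[:H, :W]
    return (yy - (H - 1) / 2) ** 2 + (xx - (W - 1) / 2) ** 2 <= radius**2

def rmse_hu(recon, truth, roi):
    """RMSE in Hounsfield units over the ROI pixels only."""
    diff = recon[roi] - truth[roi]
    return np.sqrt(np.mean(diff**2))

truth = np.full((128, 128), 1000.0)      # flat toy phantom, in HU
recon = truth + 10.0                     # constant 10 HU error everywhere
roi = circular_roi(truth.shape, radius=50)
print(rmse_hu(recon, truth, roi))
```

With a constant 10 HU error the ROI-restricted RMSE is exactly 10, which makes the helper easy to sanity-check before applying it to real reconstructions.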
We conducted experiments on three slices (L067-slice100, L192-slice100, L506-slice100) of the Mayo Clinic data. Fig. 2 shows the reconstructions of L067-slice100 using FBP, PWLS-EP, PWLS-ST, PWLS-MRST2, PWLS-MRST3, PWLS-MRST5, and PWLS-MRST7, respectively, at low incident photon intensity. TABLE I lists the RMSE, PSNR, and SSIM values of the reconstructions of the three test slices, with the best values bolded. The two-layer model (MRST2) provides the best RMSE and PSNR values among the methods. However, under the SSIM criterion, MRST5 and MRST7 outperform ST and MRST2. So which MRST model is better? Examining the reconstructed images, we see that although MRST2 and ST achieve lower RMSE and higher PSNR values than MRST5 and MRST7, they sacrifice some sharpness in the central region and suffer from loss of details. The deeper models are more effective at preserving subtle features, which is clearly more essential for clinical diagnosis. Furthermore, after considerable parameter tuning, we have observed that the deeper models offer more stable image quality as the regularization parameters are varied, i.e., they are more robust to oversmoothing.
TABLE I: RMSE (HU), PSNR (dB), and SSIM of reconstructions of the three test slices (best values in bold).

| Slice | Metric | FBP | EP | ST | MRST2 | MRST3 | MRST5 | MRST7 |
|---|---|---|---|---|---|---|---|---|
| L067 slice100 | RMSE (HU) | 101.1 | 34.2 | 30.3 | **29.1** | 29.8 | 30.5 | 31.4 |
| | PSNR (dB) | 27.4 | 36.8 | 37.8 | **38.2** | 38.0 | 37.8 | 37.5 |
| | SSIM | 0.359 | 0.728 | 0.724 | 0.731 | 0.731 | **0.732** | 0.731 |
| L192 slice100 | RMSE (HU) | 60.0 | 26.6 | 23.5 | **22.2** | 22.3 | 23.0 | 23.8 |
| | PSNR (dB) | 32.4 | 38.9 | 39.9 | **40.4** | **40.4** | 40.1 | 39.8 |
| | SSIM | 0.440 | 0.786 | 0.790 | 0.798 | 0.800 | 0.807 | **0.810** |
| L506 slice100 | RMSE (HU) | 56.8 | 32.6 | 27.1 | **26.0** | 26.2 | 27.5 | 28.7 |
| | PSNR (dB) | 32.8 | 37.6 | 39.2 | **39.6** | 39.5 | 39.1 | 38.7 |
| | SSIM | 0.489 | 0.780 | 0.790 | 0.791 | 0.793 | 0.797 | **0.802** |
V Conclusion
This paper proposes a general framework for Multi-layer Residual Sparsifying Transform (MRST) learning wherein the transform domain residual maps over several layers are jointly sparsified. Our work then applies MRST learning to low-dose CT (LDCT) image reconstruction by using a PWLS approach with a learned MRST-based regularizer. Experimental results illustrate the promising performance of the multi-layer scheme over single-layer learned sparsifying transforms. Learned MRST models also offer significant improvement over typical nonadaptive methods. In the future, we will investigate additional strategies such as pooling operations to reduce noise over layers.
VI Acknowledgment
The authors thank Dr. Cynthia McCollough, the Mayo Clinic, the American Association of Physicists in Medicine, and the National Institute of Biomedical Imaging and Bioengineering for providing the Mayo Clinic data.
References
- [1] Y. Pati, R. Rezaiifar, and P. Krishnaprasad, “Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition,” in Asilomar Conf. on Signals, Systems and Computers, 1993, vol. 1, pp. 40–44.
- [2] M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Sig. Proc., vol. 54, no. 11, pp. 4311–4322, Nov. 2006.
- [3] R. Rubinstein, T. Peleg, and M. Elad, “Analysis K-SVD: A dictionary-learning algorithm for the analysis sparse model,” IEEE Trans. Sig. Proc., vol. 61, no. 3, pp. 661–677, Feb. 2013.
- [4] S. Ravishankar and Y. Bresler, “Learning sparsifying transforms,” IEEE Trans. Sig. Proc., vol. 61, no. 5, pp. 1072–1086, Mar. 2013.
- [5] S. Ravishankar and B. Wohlberg, “Learning multi-layer transform models,” in Allerton Conf. on Comm., Control, and Computing, 2018, pp. 160–165.
- [6] L. A. Feldkamp, L. C. Davis, and J. W. Kress, “Practical cone beam algorithm,” J. Opt. Soc. Am. A, vol. 1, no. 6, pp. 612–619, June 1984.
- [7] I. A. Elbakri and J. A. Fessler, “Statistical image reconstruction for polyenergetic X-ray computed tomography,” IEEE Trans. Med. Imag., vol. 21, no. 2, pp. 89–99, Feb. 2002.
- [8] K. Sauer and C. Bouman, “A local update strategy for iterative reconstruction from projections,” IEEE Trans. Sig. Proc., vol. 41, no. 2, pp. 534–548, Feb. 1993.
- [9] J-B. Thibault, K. Sauer, C. Bouman, and J. Hsieh, “A three-dimensional statistical approach to improved image quality for multi-slice helical CT,” Med. Phys., vol. 34, no. 11, pp. 4526–4544, Nov. 2007.
- [10] L. Pfister and Y. Bresler, “Model-based iterative tomographic reconstruction with adaptive sparsifying transforms,” in Proc. SPIE, 2014, vol. 9020, pp. 90200H–1–90200H–11.
- [11] X. Zheng, S. Ravishankar, Y. Long, and J. A. Fessler, “PWLS-ULTRA: An efficient clustering and learning-based approach for low-dose 3D CT image reconstruction,” IEEE Trans. Med. Imag., vol. 37, no. 6, pp. 1498–1510, June 2018.
- [12] I. Y. Chun, X. Zheng, Y. Long, and J. A. Fessler, “Efficient sparse-view X-ray CT reconstruction using regularization with learned sparsifying transform,” in Proc. Intl. Mtg. on Fully 3D Image Recon. in Rad. and Nuc. Med, June 2017, pp. 115–119.
- [13] S. Ye, S. Ravishankar, Y. Long, and J. A. Fessler, “SPULTRA: Low-dose CT image reconstruction with joint statistical and learned image models,” IEEE Trans. Med. Imag., early access, Aug. 2019.
- [14] W. Zhou, J-F. Cai, and H. Gao, “Adaptive tight frame based medical image reconstruction: a proof-of-concept study for computed tomography,” Inverse Prob., vol. 29, no. 12, pp. 125006, Dec. 2013.
- [15] H. Nien and J. A. Fessler, “Relaxed linearized algorithms for faster X-ray CT image reconstruction,” IEEE Trans. Med. Imag., vol. 35, no. 4, pp. 1090–1098, Apr. 2016.
- [16] J. H. Cho and J. A. Fessler, “Regularization designs for uniform spatial resolution and noise properties in statistical image reconstruction for 3D X-ray CT,” IEEE Trans. Med. Imag., vol. 34, no. 2, pp. 678–689, Feb. 2015.