X-LRM: X-ray Large Reconstruction Model for
Extremely Sparse-View Computed Tomography Recovery in One Second
Abstract
Sparse-view 3D CT reconstruction aims to recover volumetric structures from a limited number of 2D X-ray projections. Existing feedforward methods are constrained by the limited capacity of CNN-based architectures and the scarcity of large-scale training datasets. In this paper, we propose an X-ray Large Reconstruction Model (X-LRM) for extremely sparse-view (≤10 views) CT reconstruction. X-LRM consists of two key components: X-former and X-triplane. Our X-former can handle an arbitrary number of input views using an MLP-based image tokenizer and a Transformer-based encoder. The output tokens are then upsampled into our X-triplane representation, which models the 3D radiodensity as an implicit neural field. To support the training of X-LRM, we introduce Torso-16K, a large-scale dataset comprising over 16K volume-projection pairs of various torso organs. Extensive experiments demonstrate that X-LRM outperforms the state-of-the-art method by 1.5 dB while achieving 27× faster speed and better flexibility. Furthermore, the downstream evaluation on lung segmentation tasks also suggests the practical value of our approach. Our code, pre-trained models, and dataset will be released at https://github.com/caiyuanhao1998/X-LRM
1 Introduction
Computed Tomography (CT) uses X-rays with strong penetrating power to reveal internal structures non-invasively. It is widely used in medical imaging for disease diagnosis, treatment planning, and surgical navigation [13, 14, 24, 25]. In particular, CT reconstruction aims to recover the 3D radiodensity of the object given 2D X-ray projections.
Traditional methods [18, 67, 54, 2] usually require hundreds or even thousands of X-ray projections to yield good reconstruction quality, which exposes patients and radiographers to significant harmful radiation. Recently, some self-supervised algorithms based on neural radiance fields (NeRF) [8, 68] or 3D Gaussian splatting (3DGS) [6, 70] have been designed to reconstruct CT with 40–50 projections. Yet, these methods usually require a long time (on the order of 15 minutes or more) for each reconstruction, and the radiation doses are still relatively high. In this work, we study extremely sparse-view (≤10 views) CT reconstruction in a feedforward manner to achieve inference within one second.

Some recent works [27, 36, 35, 34, 40] also try to explore this task. However, existing feedforward methods suffer from the following issues. (i) They rely on single-organ datasets containing fewer than 1,000 cases [52, 12, 43], which severely lack the diversity and scale required to develop robust and generalizable models. (ii) Previous feedforward methods are mainly based on convolutional neural networks (CNNs), which exhibit limitations in effectively capturing long-range dependencies and achieving optimal scalability in large-scale training. (iii) The number of the input projections of existing methods is fixed and cannot be adjusted due to the computational scheme of CNN, which lacks flexibility and limits the application in practice.
To cope with these problems, we design an X-ray Large Reconstruction Model (X-LRM) for extremely sparse-view (≤10 views) CT recovery. X-LRM consists of two parts: X-former and X-triplane. Firstly, X-former uses a multi-layer perceptron (MLP) based image tokenizer to split an arbitrary number of input images into patch tokens. Then X-former adopts a pure Transformer [17] encoder to compute the self-attention among these patch tokens. The output tokens of the X-former are then upsampled and reshaped into our X-triplane representation. The point feature of the X-triplane is fed into an MLP to learn an implicit neural field of the 3D volume radiodensity. To explore the potential of large-scale training, we collect a 3D CT reconstruction dataset, Torso-16K, containing over 16K volume-projection pairs. With the proposed techniques and dataset, our X-LRM can significantly benefit from large-scale training to boost the reconstruction performance and flexibly handle different numbers of input X-ray projections.
In a nutshell, our contributions can be summarized as:
• We propose a novel feedforward framework, X-LRM, for extremely sparse-view CT reconstruction.
• We design a Transformer-based encoder, X-former, to flexibly encode an arbitrary number of input X-ray projections. Besides, we present a new 3D representation, X-triplane, to model the radiodensity in X-ray imaging.
• We collect a large-scale dataset, Torso-16K, containing over 16K samples of 2D X-ray projections and 3D CT volumes. To the best of our knowledge, Torso-16K is the largest CT reconstruction benchmark and is over 18× larger than the existing largest dataset in the literature.
• Our X-LRM drastically outperforms the SOTA method by 1.5 dB in PSNR and achieves 27× faster inference speed.
2 Related Work
2.1 Sparse-View CT Reconstruction
We adopt a cone-beam CT (CBCT) setup that acquires multi-view 2D X-ray projections for volumetric reconstruction. Existing sparse-view CT reconstruction approaches can be categorized into optimization-based and prediction-based methods. Optimization-based methods iteratively refine the 3D volume to align with the measured projections. Traditional methods [2, 51, 55] formulate reconstruction as a maximum a posteriori problem, while learning-based methods leverage neural representations [68, 53, 7, 70, 6] and diffusion models [10, 11, 31]. Despite their effectiveness, these methods typically require minutes to hours to process a single case, making them impractical for real-time clinical applications. Prediction-based methods, in contrast, utilize neural networks such as CNNs [50] to learn semantic priors from external datasets. Given a test case, they employ pre-trained models for projection extrapolation [3, 19], slice denoising [59, 28, 40], or volume regression [34, 35, 36]. While these methods enable rapid inference, they are constrained by the limited capacity of CNN-based architectures and the scarcity of large-scale training datasets. We aim to cope with these problems with our X-LRM model.
2.2 Feedforward 3D Reconstruction
Unlike optimization-based methods such as NeRF [45] or 3D Gaussian splatting [30], which require a time-consuming optimization phase for shape recovery, feedforward 3D reconstruction aims to learn diverse geometry types (e.g., meshes [60, 64], implicit fields [44, 65], etc.) from input images in a single forward pass with neural network architectures. Benefiting from large-scale 3D datasets like Objaverse-XL [16, 15] and the scalability of Transformer architectures [58], the Large Reconstruction Model [23] and its subsequent variants [32, 56, 61, 63, 71, 20, 9] have greatly improved the reconstruction ability and efficiency of the field. However, due to the scarcity of CT reconstruction data and of 3D representations suited to X-ray imaging, current feedforward CT reconstruction methods often suffer from poor reconstruction quality and generalization ability. Our goal is to fill these research gaps.
3 Method

The pipeline of our method is shown in Fig. 2. Our X-LRM consists of two parts: X-former and X-triplane, corresponding to Fig. 2 (b) and Fig. 2 (c). X-former begins with an MLP-based image tokenizer. Then a Transformer-based encoder processes an arbitrary number of multi-view image tokens with view-associated ray information into patch-based features. These features are then mapped into triplane tokens through cross-attention in a triplane decoder. We upsample and unpatchify these tokens to form our X-triplane representation. Finally, we adopt an MLP to learn an implicit mapping from the 3D point features on the triplane to the corresponding volume radiodensity.
3.1 X-former
As aforementioned, existing feedforward methods, primarily CNN-based, struggle with large-scale training, capturing long-range dependencies, and handling varying numbers of projections. These limitations lead to unsatisfactory performance and poor flexibility. To cope with these problems, we design an X-former consisting of an MLP-based image tokenizer and a Transformer-based image encoder.
Image Tokenizer. As shown in Fig. 2 (b), the input to the tokenizer is the multi-view X-ray projections concatenated with their corresponding viewpoint camera conditions. We denote the concatenated input at the $i$-th view as $\mathbf{X}_i$. During training, X-former can take varying numbers of views, drawn from a set $\mathcal{N}$ of admissible view counts. For a specific count $N \in \mathcal{N}$, the input is $\{\mathbf{X}_i\}_{i=1}^{N}$, where $N$ can change dynamically during training.
Specifically, we adopt the Reference-Point Plücker Coordinate (RPPC) [9] as our camera condition since it provides more information about the ray position and the relative depth than the standard Plücker coordinate, and thus better captures spatial relationships. The camera condition of the $i$-th view is constructed from $\mathbf{o}_i$ and $\mathbf{d}_i$, the origins and directions of the pixel-aligned rays at that view.
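To make the camera conditioning concrete, the sketch below builds pixel-aligned ray origins and directions from a pinhole intrinsic matrix and a camera-to-world pose, and stacks a per-pixel 6-channel condition map from them. Using the ray's closest point to the world origin as the reference point is our reading of RPPC [9], and the names (`build_ray_condition`, `K`, `c2w`) are illustrative rather than taken from the released code.

```python
import torch

def build_ray_condition(K, c2w, H, W):
    """Per-pixel ray origins/directions and a 6-channel RPPC-style condition map.

    K:   (3, 3) float pinhole intrinsics; c2w: (4, 4) float camera-to-world pose.
    Returns a (6, H, W) tensor holding [reference_point, direction] per pixel.
    """
    # Pixel grid sampled at pixel centers.
    v, u = torch.meshgrid(torch.arange(H) + 0.5, torch.arange(W) + 0.5, indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1)      # (H, W, 3)

    # Back-project to camera space, rotate into world space, normalize.
    dirs = (pix @ torch.linalg.inv(K).T) @ c2w[:3, :3].T
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)              # ray directions d
    origins = c2w[:3, 3].expand_as(dirs)                       # ray origins o

    # Assumed reference point: the point on the ray closest to the world origin,
    # p = o - (o . d) d, which carries ray position and relative-depth information.
    ref = origins - (origins * dirs).sum(-1, keepdim=True) * dirs
    return torch.cat([ref, dirs], dim=-1).permute(2, 0, 1)     # (6, H, W)
```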
Subsequently, the tokenizer partitions each input view into non-overlapping patches and projects each patch into a latent embedding via an MLP layer. Then we fuse the patchified tokens of the different views by concatenating them along the sequence dimension to derive the initial patch-wise tokens $\mathbf{F}_0$.
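A minimal sketch of this tokenizer is given below, assuming a patch size of 16 and that each view is the X-ray projection concatenated channel-wise with its 6-channel camera condition; the class name and dimensions are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

class XTokenizer(nn.Module):
    """Split each (projection + camera condition) view into patches and embed them."""
    def __init__(self, in_ch=1 + 6, patch=16, dim=768):
        super().__init__()
        self.patch = patch
        self.proj = nn.Linear(in_ch * patch * patch, dim)   # per-patch MLP projection

    def forward(self, views):                 # views: (B, N, C, H, W), N may vary
        B, N, C, H, W = views.shape
        x = views.reshape(B * N, C, H, W)
        # Non-overlapping patches -> (B*N, num_patches, C*patch*patch)
        x = nn.functional.unfold(x, kernel_size=self.patch, stride=self.patch).transpose(1, 2)
        tokens = self.proj(x)                 # (B*N, num_patches, dim)
        # Concatenate the tokens of all N views along the sequence axis.
        return tokens.reshape(B, N * tokens.shape[1], -1)

# e.g. 8 views of 256x256 inputs -> a (1, 8*256, 768) token sequence F_0
tokens = XTokenizer()(torch.randn(1, 8, 7, 256, 256))
```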
Image Encoder. The initial tokens $\mathbf{F}_0$ are then encoded by a Transformer-based image encoder to produce the feature tokens $\mathbf{F}$, whose hidden dimension we denote as $C$. The image encoder consists of self-attention blocks [58], and each block comprises a multi-head self-attention layer and an MLP layer. We add layer normalization (LN) [4] before both layers. For the $j$-th self-attention block with input $\mathbf{F}_{j-1}$, we first split the normalized input into $N_h$ heads as

$$\big[\mathbf{X}_{j}^{1}, \mathbf{X}_{j}^{2}, \ldots, \mathbf{X}_{j}^{N_h}\big] = \mathrm{Split}\big(\mathrm{LN}(\mathbf{F}_{j-1})\big). \tag{1}$$

Then for the $k$-th head, we project the input $\mathbf{X}_{j}^{k}$ into query $\mathbf{Q}_{j}^{k}$, key $\mathbf{K}_{j}^{k}$, and value $\mathbf{V}_{j}^{k}$ as

$$\mathbf{Q}_{j}^{k} = \mathbf{X}_{j}^{k}\mathbf{W}_{Q}^{k}, \quad \mathbf{K}_{j}^{k} = \mathbf{X}_{j}^{k}\mathbf{W}_{K}^{k}, \quad \mathbf{V}_{j}^{k} = \mathbf{X}_{j}^{k}\mathbf{W}_{V}^{k}, \tag{2}$$

where $\mathbf{W}_{Q}^{k}$, $\mathbf{W}_{K}^{k}$, and $\mathbf{W}_{V}^{k}$ are learnable parameters of the projection layers and $d_k$ is the dimension of each head. Then the output of the $k$-th head of the $j$-th self-attention layer is computed as

$$\mathbf{H}_{j}^{k} = \mathrm{Softmax}\!\left(\frac{\mathbf{Q}_{j}^{k}\,{\mathbf{K}_{j}^{k}}^{\top}}{\sqrt{d_k}}\right)\mathbf{V}_{j}^{k}. \tag{3}$$

The $N_h$ heads are then concatenated and passed through a fully connected (FC) layer to derive the output of the self-attention as

$$\mathbf{A}_{j} = \mathrm{Concat}\big(\mathbf{H}_{j}^{1}, \ldots, \mathbf{H}_{j}^{N_h}\big)\,\mathbf{W}_{O}, \tag{4}$$

where $\mathbf{W}_{O}$ is the learnable parameter of the FC layer. Then we forward $\mathbf{A}_{j}$ to the MLP layer:

$$\hat{\mathbf{F}}_{j} = \mathbf{F}_{j-1} + \mathbf{A}_{j}, \qquad \mathbf{F}_{j} = \hat{\mathbf{F}}_{j} + \sigma\big(\mathrm{LN}(\hat{\mathbf{F}}_{j})\,\mathbf{W}_{1}\big)\,\mathbf{W}_{2}, \tag{5}$$

where $\sigma$ is the activation function, and $\mathbf{W}_{1}$ and $\mathbf{W}_{2}$ are learnable parameters of the MLP layers. Finally, the output of the last self-attention block of the image encoder is the feature $\mathbf{F}$. This process is illustrated in Fig. 2 (b).
Our X-former leverages the inherent flexibility of the transformer architecture, which can naturally process input tokens of different lengths. This allows our model to seamlessly train with different numbers of input views within a single training session, boosting the reconstruction performance and resulting in a unified framework capable of handling diverse multi-view configurations.
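The block below is a sketch of one such pre-LayerNorm self-attention block matching Eqs. (1)-(5); the hyperparameters are illustrative rather than the exact released configuration. Because nothing in the block depends on the sequence length, the same weights process 6-, 8-, or 10-view token sequences, which is what enables this flexibility.

```python
import torch
import torch.nn as nn

class SelfAttnBlock(nn.Module):
    """Pre-LN multi-head self-attention block (Eqs. 1-5), sequence-length agnostic."""
    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(), nn.Linear(dim * mlp_ratio, dim)
        )

    def forward(self, x):                                     # x: (B, L, dim), any L
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]     # MHSA + residual
        return x + self.mlp(self.norm2(x))                    # MLP + residual

block = SelfAttnBlock()
for n_views in (6, 8, 10):            # same weights, different numbers of input views
    _ = block(torch.randn(1, n_views * 256, 768))
```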
Dataset | Body Parts | # of Volumes |
---|---|---
AbdomenAtlas v1.0 [33] | Abdomen, Chest, Pelvis | 5,171 |
RSNA2023 [21] | Abdomen, Pelvis | 4,711 |
AMOS [26] | Abdomen | 1,851 |
PENGWIN [37] | Pelvis | 100 |
TCIA [1, 46] | Abdomen | 833 |
MELA [47] | Chest | 1,100 |
FLARE24 (subset) [41] | Abdomen, Chest | 1,868 |
FUMPE [42] | Chest | 35 |
LNDb [49] | Chest | 294 |
RibFrac [29, 66] | Abdomen, Chest | 660 |
Torso-16K (Ours) | Abdomen, Chest, Pelvis | 16,623 |

3.2 X-triplane
To lift the features from the 2D projections into 3D space, we design a Transformer-based decoder to map the 2D patch-wise features $\mathbf{F}$ into 3D triplane tokens $\mathbf{T}_{tok}$, whose hidden dimension we denote as $C_T$. $\mathbf{T}_{tok}$ is later upsampled and reshaped into our X-triplane representation. Then we adopt an MLP to learn an implicit mapping from the 3D point feature on the triplane representation to the corresponding radiodensity.
Triplane Decoder. As shown in Fig. 2 (c), the input of the triplane decoder includes the encoded feature $\mathbf{F}$ and a set of learnable triplane embeddings $\mathbf{E}_0$. Our triplane decoder consists of cross-attention blocks. Each cross-attention block comprises a cross-attention layer, a self-attention layer, and an MLP layer. To guide the reconstruction of the triplane tokens and lift the features into 3D space, we adopt the cross-attention mechanism to extract the 2D projection and camera information by querying the input feature $\mathbf{F}$.

Similar to self-attention, for the $j$-th cross-attention block in our triplane decoder, we first split the feature $\mathbf{F}$ and the input triplane embeddings $\mathbf{E}_{j-1}$ into $N_h$ heads as

$$\big[\mathbf{F}^{1}, \ldots, \mathbf{F}^{N_h}\big] = \mathrm{Split}(\mathbf{F}), \qquad \big[\mathbf{E}_{j-1}^{1}, \ldots, \mathbf{E}_{j-1}^{N_h}\big] = \mathrm{Split}\big(\mathrm{LN}(\mathbf{E}_{j-1})\big). \tag{6}$$

Then for the $k$-th cross-attention head, we project $\mathbf{E}_{j-1}^{k}$ into query $\mathbf{Q}_{j}^{k}$, and project $\mathbf{F}^{k}$ into key $\mathbf{K}_{j}^{k}$ and value $\mathbf{V}_{j}^{k}$ by three FC layers. Then the output of the $k$-th head of the $j$-th cross-attention layer is computed as

$$\mathbf{H}_{j}^{k} = \mathrm{Softmax}\!\left(\frac{\mathbf{Q}_{j}^{k}\,{\mathbf{K}_{j}^{k}}^{\top}}{\sqrt{d_k}}\right)\mathbf{V}_{j}^{k}. \tag{7}$$

Subsequently, the $N_h$ heads are concatenated and passed through an FC layer to obtain the output of the cross-attention layer:

$$\mathbf{A}_{j} = \mathrm{Concat}\big(\mathbf{H}_{j}^{1}, \ldots, \mathbf{H}_{j}^{N_h}\big)\,\mathbf{W}_{O}, \tag{8}$$

where $\mathbf{W}_{O}$ is the parameter of the FC layer. Similar to the previous self-attention (SA) and MLP layers in Sec. 3.1, we have

$$\mathbf{E}_{j}' = \mathbf{E}_{j-1} + \mathbf{A}_{j}, \qquad \mathbf{E}_{j} = \mathrm{MLP}\big(\mathrm{SA}(\mathbf{E}_{j}')\big), \tag{9}$$

where $\mathrm{SA}(\cdot)$ and $\mathrm{MLP}(\cdot)$ denote the residual self-attention and MLP sub-layers defined in Sec. 3.1. Finally, the output of the last cross-attention block of our triplane decoder is the triplane tokens $\mathbf{T}_{tok}$, as illustrated in Fig. 2 (c). $\mathbf{T}_{tok}$ is further upsampled by a deconvolution layer and unpatchified to form our X-triplane representation $\mathbf{T}$.
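A minimal sketch of one decoder block (cross-attention, then self-attention and MLP, Eqs. 6-9) and the deconvolution-based upsampling into plane features is given below; the token dimension, plane resolution, and upsampling factor are illustrative assumptions, not the released configuration.

```python
import torch
import torch.nn as nn

class TriplaneDecoderBlock(nn.Module):
    """Cross-attention block: triplane tokens query the encoded image tokens."""
    def __init__(self, dim=1024, heads=16):
        super().__init__()
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, tri, feat):        # tri: (B, 3*P*P, dim), feat: (B, L, dim)
        q = self.n1(tri)
        tri = tri + self.cross(q, feat, feat, need_weights=False)[0]  # lift 2D info into 3D tokens
        h = self.n2(tri)
        tri = tri + self.self_attn(h, h, h, need_weights=False)[0]
        return tri + self.mlp(self.n3(tri))

# Tokens for three 32x32 planes, then a 4x deconvolution into the final plane features.
tri = TriplaneDecoderBlock()(torch.randn(1, 3 * 32 * 32, 1024), torch.randn(1, 2048, 1024))
planes = tri.reshape(3, 32, 32, 1024).permute(0, 3, 1, 2)           # (3, 1024, 32, 32)
T = nn.ConvTranspose2d(1024, 32, kernel_size=4, stride=4)(planes)   # (3, 32, 128, 128)
```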
Type | Method | Time (s) | 6-View PSNR | 6-View SSIM | 8-View PSNR | 8-View SSIM | 10-View PSNR | 10-View SSIM
---|---|---|---|---|---|---|---|---
Traditional | FDK | 0.008 | 9.51 | 0.039 | 10.68 | 0.047 | 11.46 | 0.058
Traditional | ASD-POCS | 1.385 | 22.17 | 0.573 | 23.40 | 0.612 | 24.62 | 0.667
Traditional | SART | 1.400 | 22.61 | 0.537 | 23.56 | 0.548 | 24.57 | 0.585
2D Feedforward | FBPConvNet | \ul0.010 | 26.99 | 0.704 | 27.22 | 0.722 | 28.05 | 0.737
2D Feedforward | FreeSeed | 0.163 | 28.93 | 0.841 | \ul30.08 | 0.843 | 30.17 | 0.855
3D Feedforward | DIF-Net | 0.445 | 26.10 | 0.627 | 26.81 | 0.663 | 27.47 | 0.708
3D Feedforward | DIF-Gaussian | 0.621 | 28.19 | 0.813 | 28.53 | 0.820 | 29.52 | 0.848
3D Feedforward | C2RV | 3.837 | \ul29.51 | \ul0.850 | 29.83 | \ul0.849 | \ul30.96 | \ul0.871
3D Feedforward | X-LRM (Ours) | 0.141 | 31.05 | 0.910 | 31.24 | 0.912 | 31.33 | 0.915

Triplane Implicit Neural Field. Our X-triplane $\mathbf{T}$ is composed of three orthogonal feature planes: $\mathbf{T}_{xy}$, $\mathbf{T}_{yz}$, and $\mathbf{T}_{xz} \in \mathbb{R}^{R \times R \times D}$, where $R$ refers to the spatial resolution of each plane and $D$ is the dimension of the point feature. We then build an implicit neural field mapping from the position of a 3D point to its radiodensity.

For a given 3D point $\mathbf{p} = (x, y, z)$ within the unit bounding box (where each coordinate is normalized), we obtain its feature embeddings by projecting it onto the three orthogonal feature planes $\mathbf{T}_{xy}$, $\mathbf{T}_{yz}$, and $\mathbf{T}_{xz}$ at $(x, y)$, $(y, z)$, and $(x, z)$. We then apply bilinear interpolation to extract features from each plane. Taking the $xy$-plane and a point $(x, y)$ for instance, the interpolated feature value is computed as

$$\mathbf{f}_{xy}(x, y) = \sum_{i \in \{0,1\}} \sum_{j \in \{0,1\}} w_{i}^{x}\, w_{j}^{y}\, \mathbf{T}_{xy}(x_i, y_j), \tag{10}$$

where $x_0, x_1$ and $y_0, y_1$ are the neighboring grid points, and the interpolation weights are $w_{0}^{x} = \frac{x_1 - x}{x_1 - x_0}$ and $w_{1}^{x} = \frac{x - x_0}{x_1 - x_0}$ (and analogously for $w_{j}^{y}$). Applying this bilinear interpolation to all three planes, we obtain the feature representation at the point $\mathbf{p}$ as

$$\mathbf{f}(\mathbf{p}) = \mathrm{Concat}\big(\mathbf{f}_{xy}(x, y),\, \mathbf{f}_{yz}(y, z),\, \mathbf{f}_{xz}(x, z)\big). \tag{11}$$

As the radiodensity is isotropic and only related to the point property, we adopt an MLP to learn the mapping from the point feature to the radiodensity $\rho(\mathbf{p})$ as

$$\rho(\mathbf{p}) = \mathrm{MLP}\big(\mathbf{f}(\mathbf{p})\big). \tag{12}$$
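The query in Eqs. (10)-(12) can be sketched with `grid_sample`, which performs exactly this bilinear interpolation; concatenating the three plane features and the layer sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def query_triplane(T, pts):
    """Bilinearly sample point features from the three planes (Eqs. 10-11).

    T:   (3, D, R, R) features for the xy, yz, and xz planes.
    pts: (N, 3) points with coordinates normalized to [-1, 1].
    """
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    coords = torch.stack([
        torch.stack([x, y], -1),     # projection onto the xy-plane
        torch.stack([y, z], -1),     # projection onto the yz-plane
        torch.stack([x, z], -1),     # projection onto the xz-plane
    ])                               # (3, N, 2)
    feats = F.grid_sample(T, coords.unsqueeze(2), mode="bilinear",
                          align_corners=False)              # (3, D, N, 1)
    return feats.squeeze(-1).permute(2, 0, 1).flatten(1)    # (N, 3*D), concatenated

# Radiodensity head: a small MLP mapping the point feature to a scalar (Eq. 12).
mlp = nn.Sequential(nn.Linear(3 * 32, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
rho = mlp(query_triplane(torch.randn(3, 32, 128, 128), torch.rand(1024, 3) * 2 - 1))
```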
3.3 Training objective
Existing RGB 3D reconstruction methods mainly adopt a 2D rendering loss to achieve good image recovery quality. However, this supervision involves volume rendering, which needs to sample many 3D points along each ray and therefore takes a long time. Besides, in X-ray imaging, the 3D CT reconstruction is of greater concern than the 2D X-ray rendering. Thus, we adopt the more precise 3D reconstruction loss with varying numbers of input views as
$$\mathcal{L} = \sum_{N \in \mathcal{N}} \big\| \hat{\mathbf{V}}_{N} - \mathbf{V}_{gt} \big\|_{2}^{2}, \tag{13}$$

where $\mathcal{N}$ represents the training settings with different input view numbers $N$, $\hat{\mathbf{V}}_{N}$ refers to the CT volume reconstructed by our X-LRM given $N$ input views, and $\mathbf{V}_{gt}$ is the ground-truth CT volume.
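A sketch of this multi-view-count objective, assuming a mean-squared-error reconstruction term and a hypothetical `model` that maps a set of projections and camera conditions to a CT volume:

```python
import torch
import torch.nn.functional as F

def multi_view_count_loss(model, projections, cams, volume_gt, view_counts=(6, 8, 10)):
    """Sum the 3D reconstruction loss over several sampled input-view counts (Eq. 13).

    projections: (B, N_max, C, H, W) X-ray projections with matching camera conditions `cams`,
    volume_gt:   (B, 128, 128, 128) ground-truth radiodensity volume.
    """
    loss = 0.0
    for n in view_counts:
        idx = torch.randperm(projections.shape[1])[:n]        # pick n views at random
        vol_pred = model(projections[:, idx], cams[:, idx])   # predicted (B, 128, 128, 128) volume
        loss = loss + F.mse_loss(vol_pred, volume_gt)         # assumed MSE supervision
    return loss
```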
Type | Method | Time | 6-View PSNR | 6-View SSIM | 8-View PSNR | 8-View SSIM | 10-View PSNR | 10-View SSIM
---|---|---|---|---|---|---|---|---
Self-Supervised | NAF | 11m | 23.86 | 0.644 | 24.64 | 0.654 | 25.38 | 0.685
Self-Supervised | R2-Gaussian | \ul6m | 20.28 | 0.528 | 20.79 | 0.529 | 22.09 | 0.581
Self-Supervised | SAX-NeRF | 8h | 24.08 | 0.669 | 24.73 | 0.674 | 25.68 | 0.692
Diffusion Based | DDS | 12m | 24.42 | 0.529 | 25.64 | 0.570 | 26.64 | 0.607
Diffusion Based | DiffusionMBIR | 11h | \ul26.61 | \ul0.734 | \ul28.51 | \ul0.803 | \ul30.05 | \ul0.835
3D Feedforward | X-LRM (Ours) | 0.14s | 30.14 | 0.888 | 30.10 | 0.886 | 30.28 | 0.889

4 Experiment
4.1 Experimental Setup
Datasets.
Previous works rely on small datasets [43, 12, 52] (fewer than 1,000 samples), which limits the ability to train robust and generalizable models. To overcome this constraint, we introduce Torso-16K, the largest and most diverse CT reconstruction dataset, comprising 16,623 real-world CT scans from ten public datasets (Fig. 3). It covers key anatomical regions in clinical applications, including the chest, abdomen, and pelvis. Some examples are shown in Fig. 3. Torso-16K is split into 15,000 / 873 / 750 scans for training, validation, and testing. We standardize the CT scans by resampling and cropping them to a 50³ cm³ volume at 128³ voxel resolution. Radiodensity values are normalized from the Hounsfield unit range [-1000, 1000] to [0, 1], ensuring coverage of the primary organs of interest. Since most public datasets only provide CT volumes, we render multi-view X-ray projections via the TIGRE toolbox [5]. These 256²-resolution projections span the full 0–360° range with 3.92 mm² pixel spacing. To enhance realism, we add Gaussian and Poisson noise to simulate Compton scattering and adopt the geometry of the UCT 960+ scanner [57], with a 0.6 m source-object distance and a 1.118 m source-detector distance.
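For instance, the Hounsfield-unit normalization described above amounts to a clip-and-rescale; a minimal sketch (clipping values outside [-1000, 1000] is our assumption):

```python
import numpy as np

def normalize_hu(volume_hu, lo=-1000.0, hi=1000.0):
    """Map radiodensity from Hounsfield units in [lo, hi] to [0, 1]."""
    return (np.clip(volume_hu, lo, hi) - lo) / (hi - lo)

# e.g. air (-1000 HU) -> 0.0, water (0 HU) -> 0.5, dense bone (1000 HU) -> 1.0
```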
Implementation Details.
We implement X-LRM in PyTorch [48]. X-LRM is trained with the AdamW optimizer [39] with a weight decay coefficient of 0.05. The learning rate follows a cosine annealing scheduler [38] with a warm-up phase. For the network architecture, we utilize a ViT-B/16 Transformer encoder (12 layers with an embedding dimension of 768) to process the inputs into feature tokens. The Transformer-based triplane decoder maps these tokens into the X-triplane representation, and the MLP used for radiodensity queries has four layers.
During training, our model learns from a set of possible input view counts (6, 8, and 10 views). In each epoch, the same instance is processed three times, each with a different number of views selected from this set. Training is conducted on 8 RTX A5000 GPUs. For evaluation, we adopt the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) [62] as the quantitative metrics. Please note that PSNR is measured directly in 3D space, while SSIM is computed as the average of 2D slice-wise SSIM values.
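Under this protocol, the two metrics can be sketched as follows with scikit-image; averaging SSIM over axial slices is our assumption about the slicing axis.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_volume(pred, gt):
    """3D PSNR and mean slice-wise 2D SSIM for volumes normalized to [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = np.mean([structural_similarity(gt[i], pred[i], data_range=1.0)
                    for i in range(gt.shape[0])])   # average over 2D slices
    return psnr, ssim
```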

Method | Recon. PSNR | Recon. SSIM | Left Lung DICE | Left Lung ASD | Right Lung DICE | Right Lung ASD
---|---|---|---|---|---|---
FDK | 9.14 | 0.03 | 0.34 | 43.41 | 0.26 | 45.12
SART | 21.7 | 0.51 | 28.29 | 13.44 | 2.92 | 28.12
ASD-POCS | 21.48 | 0.53 | 25.35 | 15.62 | 2.52 | 31.84
FBPConvNet | 26.02 | 0.68 | \ul93.59 | \ul0.65 | \ul93.58 | \ul0.56
FreeSeed | 27.77 | \ul0.83 | 91.01 | 1.07 | 90.56 | 0.82
DIF-Net | 24.71 | 0.55 | 84.63 | 1.70 | 84.78 | 1.44
DIF-Gaussian | 26.84 | 0.79 | 92.16 | 0.83 | 91.69 | 0.72
C2RV | \ul28.24 | \ul0.83 | 91.47 | 0.88 | 90.28 | 0.87
X-LRM (Ours) | 30.59 | 0.92 | 95.21 | 0.49 | 94.63 | 0.48
4.2 Comparison with State-of-the-Art Methods
We evaluate our X-LRM model against baseline methods under different numbers of projection views (i.e. 6, 8, 10) using the following two different settings:
• Traditional and feedforward methods: Traditional methods are directly tested on the 750-sample test set. The feedforward methods are first trained on the training set and then tested on the 750-sample test set.
• Self-supervised and diffusion-based methods: We use a subset of 10 samples selected from the 750-sample test set, ensuring all 10 sub-datasets in Fig. 3 are covered. We test on this small subset due to the long inference time (up to 11 hours per sample) of these methods.
Quantitative Results. Firstly, we compare X-LRM with three traditional methods (FDK [18], SART [2], and ASD-POCS [54]) and five feedforward methods (FBPConvNet [27], FreeSeed [40], DIF-Net [34], DIF-Gaussian [35], and C2RV [36]). The results are reported in Tab. 2. (i) When reconstructing CT volumes from 6, 8, and 10 X-ray projection views, our X-LRM surpasses the SOTA 2D feedforward method, FreeSeed, by 2.12, 1.16, and 1.16 dB in PSNR. Compared to the SOTA 3D feedforward method, C2RV, X-LRM improves the performance by 1.54, 1.41, and 0.37 dB in PSNR, while enjoying over 27× faster inference speed. (ii) Unlike previous feedforward methods, X-LRM enjoys better flexibility as it can efficiently reconstruct CT volumes with different numbers of input views without training separate models, adapting well to input variations.
Secondly, we compare with three self-supervised methods (NAF [68], R2-Gaussian [69], and SAX-NeRF [8]) and two diffusion-based methods (DiffusionMBIR [10] and DDS [11]). The quantitative results are listed in Tab. 3. Our X-LRM achieves the best overall performance and the fastest inference speed. Compared with the second-best method, DiffusionMBIR, our X-LRM is 3.53 dB higher in PSNR. Compared with the second-fastest method, R2-Gaussian, our method is over 2570× faster in inference.
Method | Recon. PSNR | Recon. SSIM | Left Lung DICE | Left Lung ASD | Right Lung DICE | Right Lung ASD
---|---|---|---|---|---|---
NAF | 21.91 | 0.57 | 48.54 | 19.14 | 50.08 | 9.49
R2-Gaussian | 18.58 | 0.45 | 29.62 | 26.12 | 34.76 | 12.86
SAX-NeRF | 21.83 | 0.59 | 39.34 | 27.55 | 19.87 | 20.86
DiffusionMBIR | \ul25.31 | \ul0.72 | \ul93.10 | \ul0.74 | \ul93.25 | \ul0.67
DDS | 23.47 | 0.53 | 71.04 | 2.61 | 69.58 | 2.60
X-LRM (Ours) | 27.63 | 0.85 | 95.60 | 0.51 | 95.48 | 0.48
Qualitative Results.
The qualitative results are depicted in Fig. 4 (compared with feedforward methods) and Fig. 5 (compared with self-supervised and diffusion-based methods). As observed from the reconstructed slices, all baseline methods struggle with generating high-quality reconstructions, particularly in sparser-view scenarios. Both feedforward and optimization-based approaches exhibit noticeable blurriness and lack of fine details, leading to incomplete anatomical structures and texture inconsistencies. Structural elements, such as lung regions and organ boundaries, appear unclear, often blending into surrounding areas due to the loss of high-frequency details.
Metric | Base Model | + X-Triplane | + X-former
---|---|---|---
PSNR | 13.09 | 28.76 | 31.33 |
SSIM | 0.42 | 0.84 | 0.92 |
In contrast, our X-LRM yields visually sharper reconstructions with well-defined textures and more coherent anatomical structures. Across different view settings, it effectively preserves fine-grained details while maintaining spatial smoothness. Our method consistently reconstructs realistic features with minimal artifacts, demonstrating high-quality performance in extremely sparse-view CT reconstruction.
Application in Segmentation.
We evaluate the reconstructed CT volumes on a downstream medical segmentation task. We employ the LungMask toolkit [22] to segment the left and right lungs from the CT reconstructions produced by various methods and compare the results against the ground-truth segmentation obtained from the original CT scans. Specifically, we evaluate the lung test data from the 750-sample and 10-sample test sets, testing the corresponding baseline methods and reporting reconstruction performance (PSNR and SSIM) alongside left and right lung segmentation accuracy (DICE and ASD) for 6-view reconstructed volumes. As shown in Tab. 4 and Tab. 5, X-LRM achieves superior reconstruction quality, surpassing C2RV and DiffusionMBIR by 2.35 and 2.32 dB in PSNR, respectively. Additionally, the higher DICE scores and lower ASD values on both the left and right lungs indicate that the 3D segmentation of the CT volume reconstructed by X-LRM has a larger overlap and smaller boundary discrepancies with the segmentation mask of the ground-truth CT volume. Fig. 6 shows the visual comparison with four of the recent best methods. Both quantitative and qualitative results demonstrate the ability of X-LRM to preserve anatomical structures more accurately and maintain precise shape consistency, surpassing other methods in both reconstruction fidelity and segmentation alignment. These results further highlight the practical advantages of our method.
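For reference, the two segmentation metrics can be computed from binary masks as below; this is a generic sketch (DICE in %, ASD scaled by the voxel spacing), not the exact evaluation script.

```python
import numpy as np
from scipy import ndimage

def dice_and_asd(pred, gt, spacing=1.0):
    """DICE (%) and average surface distance between two binary 3D masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    dice = 200.0 * (pred & gt).sum() / (pred.sum() + gt.sum())

    # Surface voxels = mask minus its erosion; distances via Euclidean distance transforms.
    surf_p = pred ^ ndimage.binary_erosion(pred)
    surf_g = gt ^ ndimage.binary_erosion(gt)
    dist_to_g = ndimage.distance_transform_edt(~surf_g, sampling=spacing)
    dist_to_p = ndimage.distance_transform_edt(~surf_p, sampling=spacing)
    asd = (dist_to_g[surf_p].sum() + dist_to_p[surf_g].sum()) / (surf_p.sum() + surf_g.sum())
    return dice, asd
```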
4.3 Ablation Study
Ablation studies evaluate the effectiveness of the proposed modifications compared to the standard LRM, including X-former and X-triplane. Additionally, we assess the robustness of X-LRM under varying noisy scanning parameters, such as viewing angles, DSD, and DSO. The breakdown study is performed under the 10-view setting, while the robustness analysis is conducted under the 6-view setting.
Noisy Angles | Noisy DSO | Noisy DSD | PSNR | SSIM (%)
---|---|---|---|---
- | - | - | 31.05 (-0.00) | 91.04 (-0.00)
✓ | - | - | 30.93 (-0.12) | 90.95 (-0.09)
✓ | - | - | 30.62 (-0.43) | 90.71 (-0.33)
- | ✓ | - | 30.85 (-0.20) | 90.89 (-0.15)
- | ✓ | - | 30.67 (-0.38) | 90.73 (-0.31)
- | - | ✓ | 30.99 (-0.06) | 90.99 (-0.05)
- | - | ✓ | 30.93 (-0.12) | 90.95 (-0.09)
Break-down Ablation.
We adopt the Open-LRM [23] as the base reconstruction model to study the effect of each component of X-LRM towards higher performance. The results of the 10-view reconstruction are reported in Tab. 6. The base model only achieves a poor result of 13.09 dB in PSNR. After applying our X-triplane and X-former, the model gains 15.67 and 2.57 dB in PSNR, respectively. These results validate the effectiveness of our proposed methods.
Robustness Analysis.
We conduct a robustness analysis under varying noisy scanning parameters, including the viewing angles, source-to-origin distance (DSO), and source-to-detector distance (DSD), using the 6-view reconstruction setting. The introduced noise follows a uniform distribution around the nominal value of each parameter. With this noise, the projection images change, but the model still processes them as if they were captured under perfect conditions. Tab. 7 shows that X-LRM remains robust to noise in the scanning parameters. Smaller and larger viewing-angle shifts (PSNR -0.12 dB and -0.43 dB, SSIM -0.0009 and -0.0033) have minimal impact on our model. Noise in DSO and DSD only introduces minor effects, with the larger DSO perturbation causing the biggest decline (PSNR -0.38 dB, SSIM -0.0031). Despite these fluctuations, performance remains stable, demonstrating the reliability of X-LRM under real-world scanning variations.
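The perturbation protocol can be sketched as sampling uniform noise around the nominal geometry before rendering the test projections; the magnitudes below are placeholders, not the values used in Tab. 7.

```python
import numpy as np

def perturb_geometry(angles_deg, dso_mm=600.0, dsd_mm=1118.0,
                     d_angle=1.0, d_dso=5.0, d_dsd=5.0, seed=0):
    """Add uniform noise to the viewing angles, DSO, and DSD (placeholder magnitudes)."""
    rng = np.random.default_rng(seed)
    noisy_angles = angles_deg + rng.uniform(-d_angle, d_angle, size=len(angles_deg))
    noisy_dso = dso_mm + rng.uniform(-d_dso, d_dso)
    noisy_dsd = dsd_mm + rng.uniform(-d_dsd, d_dsd)
    return noisy_angles, noisy_dso, noisy_dsd

# 6 equally spaced views over 0-360 degrees; the projections are rendered with the
# perturbed geometry while the model still assumes the nominal scanning parameters.
noisy = perturb_geometry(np.linspace(0.0, 360.0, 6, endpoint=False))
```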
5 Conclusion
In this paper, we collect the largest dataset, Torso-16K, to enable large-scale training for CT reconstruction. Torso-16K is over 18× larger than the existing largest benchmark. We propose X-LRM, a Transformer-based feedforward framework consisting of X-former and X-triplane. X-former employs a tokenizer and Transformer backbone to flexibly encode an arbitrary number of input views, enabling X-LRM to reconstruct CT volumes without re-training. X-triplane decodes image tokens into a triplane representation and learns a neural implicit function to model 3D radiodensity. Experiments show that X-LRM surpasses the SOTA 3D feedforward method by 1.5 dB while achieving 27× faster speed, with its application in medical segmentation further highlighting its practical value.
References
- Ahmed et al. [2020] AA Ahmed, MM Elmohr, D Fuentes, MA Habra, SB Fisher, ND Perrier, M Zhang, and KM Elsayes. Radiomic mapping model for prediction of ki-67 expression in adrenocortical carcinoma. Clinical Radiology, 2020.
- Andersen and Kak [1984] Anders H Andersen and Avinash C Kak. Simultaneous algebraic reconstruction technique (sart): a superior implementation of the art algorithm. Ultrasonic imaging, 1984.
- Anirudh et al. [2018] Rushil Anirudh, Hyojin Kim, Jayaraman J Thiagarajan, K Aditya Mohan, Kyle Champley, and Timo Bremer. Lose the views: Limited angle ct reconstruction via implicit sinogram completion. In CVPR, 2018.
- Ba et al. [2016] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
- Biguri et al. [2016] Ander Biguri, Manjit Dosanjh, Steven Hancock, and Manuchehr Soleimani. Tigre: a matlab-gpu toolbox for cbct image reconstruction. Biomedical Physics & Engineering Express, 2016.
- Cai et al. [2024a] Yuanhao Cai, Yixun Liang, Jiahao Wang, Angtian Wang, Yulun Zhang, Xiaokang Yang, Zongwei Zhou, and Alan Yuille. Radiative gaussian splatting for efficient x-ray novel view synthesis. In ECCV, 2024a.
- Cai et al. [2024b] Yuanhao Cai, Jiahao Wang, Alan Yuille, Zongwei Zhou, and Angtian Wang. Structure-aware sparse-view x-ray 3d reconstruction. In CVPR, 2024b.
- Cai et al. [2024c] Yuanhao Cai, Jiahao Wang, Alan Yuille, Zongwei Zhou, and Angtian Wang. Structure-aware sparse-view x-ray 3d reconstruction. In CVPR, 2024c.
- Cai et al. [2024d] Yuanhao Cai, He Zhang, Kai Zhang, Yixun Liang, Mengwei Ren, Fujun Luan, Qing Liu, Soo Ye Kim, Jianming Zhang, Zhifei Zhang, et al. Baking gaussian splatting into diffusion denoiser for fast and scalable single-stage image-to-3d generation. arXiv preprint arXiv:2411.14384, 2024d.
- Chung et al. [2023] Hyungjin Chung, Dohoon Ryu, Michael T McCann, Marc L Klasky, and Jong Chul Ye. Solving 3d inverse problems using pre-trained 2d diffusion models. In CVPR, 2023.
- Chung et al. [2024] Hyungjin Chung, Suhyeon Lee, and Jong Chul Ye. Decomposed diffusion sampler for accelerating large-scale inverse problems. In ICLR, 2024.
- Cipriano et al. [2022] Marco Cipriano, Stefano Allegretti, Federico Bolelli, Mattia Di Bartolomeo, Federico Pollastri, Arrigo Pellacani, Paolo Minafra, Alexandre Anesi, and Costantino Grana. Deep segmentation of the mandibular canal: a new 3d annotated dataset of cbct volumes. Ieee Access, 2022.
- Cormack [1963] Allan Macleod Cormack. Representation of a function by its line integrals, with some radiological applications. Journal of applied physics, 1963.
- Cormack [1964] Allan Macleod Cormack. Representation of a function by its line integrals, with some radiological applications. ii. Journal of Applied Physics, 1964.
- Deitke et al. [2023a] Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak Gadre, et al. Objaverse-xl: A universe of 10m+ 3d objects. In NeurIPS, 2023a.
- Deitke et al. [2023b] Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. In CVPR, 2023b.
- Dosovitskiy et al. [2021] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
- Feldkamp et al. [1984] Lee A Feldkamp, Lloyd C Davis, and James W Kress. Practical cone-beam algorithm. JOSA A, 1984.
- Ghani and Karl [2018] Muhammad Usman Ghani and W Clem Karl. Deep learning-based sinogram completion for low-dose ct. In 2018 IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), 2018.
- He et al. [2024] Hao He, Yixun Liang, Luozhou Wang, Yuanhao Cai, Xinli Xu, Hao-Xiang Guo, Xiang Wen, and Yingcong Chen. Lucidfusion: Generating 3d gaussians with arbitrary unposed images. arXiv preprint arXiv:2410.15636, 2024.
- Hermans et al. [2024] Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, et al. Rsna 2023 abdominal trauma ai challenge: Review and outcomes. Radiology: Artificial Intelligence, 2024.
- Hofmanninger et al. [2020] Johannes Hofmanninger, Forian Prayer, Jeanny Pan, Sebastian Röhrich, Helmut Prosch, and Georg Langs. Automatic lung segmentation in routine imaging is primarily a data diversity problem, not a methodology problem. European Radiology Experimental, 2020.
- Hong et al. [2024] Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d. In ICLR, 2024.
- Hounsfield [1973] Godfrey N Hounsfield. Computerized transverse axial scanning (tomography): Part 1. description of system. The British journal of radiology, 1973.
- Hounsfield [1980] Godfrey N Hounsfield. Computed medical imaging. Science, 1980.
- Ji et al. [2022] Yuanfeng Ji, Haotian Bai, Chongjian Ge, Jie Yang, Ye Zhu, Ruimao Zhang, Zhen Li, Lingyan Zhanng, Wanling Ma, Xiang Wan, et al. Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. In NeurIPS, 2022.
- Jin et al. [2017a] Kyong Hwan Jin, Michael T McCann, Emmanuel Froustey, and Michael Unser. Deep convolutional neural network for inverse problems in imaging. TIP, 2017a.
- Jin et al. [2017b] Kyong Hwan Jin, Michael T McCann, Emmanuel Froustey, and Michael Unser. Deep convolutional neural network for inverse problems in imaging. IEEE transactions on image processing, 2017b.
- Jin et al. [2020] Liang Jin, Jiancheng Yang, Kaiming Kuang, Bingbing Ni, Yiyi Gao, Yingli Sun, Pan Gao, Weiling Ma, Mingyu Tan, Hui Kang, Jiajun Chen, and Ming Li. Deep-learning-assisted detection and segmentation of rib fractures from ct scans: Development and validation of fracnet. eBioMedicine, 2020.
- Kerbl et al. [2023] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 2023.
- Lee et al. [2023] Suhyeon Lee, Hyungjin Chung, Minyoung Park, Jonghyuk Park, Wi-Sun Ryu, and Jong Chul Ye. Improving 3d imaging with pre-trained perpendicular 2d diffusion models. In ICCV, 2023.
- Li et al. [2024a] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi. Instant3d: Fast text-to-3d with sparse-view generation and large reconstruction model. In ICLR, 2024a.
- Li et al. [2024b] Wenxuan Li, Chongyu Qu, Xiaoxi Chen, Pedro RAS Bassi, Yijia Shi, Yuxiang Lai, Qian Yu, Huimin Xue, Yixiong Chen, Xiaorui Lin, et al. Abdomenatlas: A large-scale, detailed-annotated, & multi-center dataset for efficient transfer learning and open algorithmic benchmarking. Medical Image Analysis, 2024b.
- Lin et al. [2023] Yiqun Lin, Zhongjin Luo, Wei Zhao, and Xiaomeng Li. Learning deep intensity field for extremely sparse-view cbct reconstruction. In MICCAI, 2023.
- Lin et al. [2024a] Yiqun Lin, Hualiang Wang, Jixiang Chen, and Xiaomeng Li. Learning 3d gaussians for extremely sparse-view cone-beam ct reconstruction. In MICCAI, 2024a.
- Lin et al. [2024b] Yiqun Lin, Jiewen Yang, Hualiang Wang, Xinpeng Ding, Wei Zhao, and Xiaomeng Li. C2RV: Cross-regional and cross-view learning for sparse-view cbct reconstruction. In CVPR, 2024b.
- Liu et al. [2023] Yanzhen Liu, Sutuke Yibulayimu, Yudi Sang, Gang Zhu, Yu Wang, Chunpeng Zhao, and Xinbao Wu. Pelvic fracture segmentation using a multi-scale distance-weighted neural network. In MICCAI, 2023.
- Loshchilov and Hutter [2016] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
- Loshchilov and Hutter [2017] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
- Ma et al. [2023] Chenglong Ma, Zilong Li, Junping Zhang, Yi Zhang, and Hongming Shan. Freeseed: Frequency-band-aware and self-guided network for sparse-view ct reconstruction. In MICCAI, 2023.
- Ma et al. [2024] Jun Ma, Bo Wang, Song Gu, Yao Zhang, Cheng Ge, and Chenyu You. MICCAI FLARE24 Task 1: Pan-cancer Segmentation in CT Scans, 2024.
- Masoudi et al. [2018] Mojtaba Masoudi, Hamid-Reza Pourreza, Mahdi Saadatmand-Tarzjan, Noushin Eftekhari, Fateme Shafiee Zargar, and Masoud Pezeshki Rad. A new dataset of computed-tomography angiography images for computer-aided detection of pulmonary embolism. Scientific Data, 2018.
- McCollough [2016] Cynthia McCollough. Tu-fg-207a-04: overview of the low dose ct grand challenge. Medical physics, 2016.
- Mescheder et al. [2019] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In CVPR, 2019.
- Mildenhall et al. [2021] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 2021.
- Moawad et al. [2021] A Moawad, D Fuentes, A Morshid, A Khalaf, M Elmohr, A Abusaif, JD Hazle, AO Kaseb, M Hassan, A Mahvash, et al. Multimodality annotated hcc cases with and without advanced imaging segmentation [data set]. The Cancer Imaging Archive, 2021.
- Organizers [2022] MELA Challenge Organizers. Mediastinal lesion analysis (mela) dataset, 2022.
- Paszke et al. [2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.
- Pedrosa et al. [2022] João Pedrosa, Guilherme, Carlos, Márcio, Patrícia, André, João, Eduardo, Isabel, António, and Aurélio. Lndb dataset, 2022.
- Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
- Sauer and Bouman [1993] Ken Sauer and Charles Bouman. A local update strategy for iterative reconstruction from projections. IEEE Transactions on Signal Processing, 1993.
- Setio et al. [2017] Arnaud Arindra Adiyoso Setio, Alberto Traverso, Thomas De Bel, Moira SN Berens, Cas Van Den Bogaard, Piergiorgio Cerello, Hao Chen, Qi Dou, Maria Evelina Fantacci, Bram Geurts, et al. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the luna16 challenge. Medical image analysis, 2017.
- Shen et al. [2022] Liyue Shen, John Pauly, and Lei Xing. Nerp: implicit neural representation learning with prior embedding for sparsely sampled image reconstruction. IEEE Transactions on Neural Networks and Learning Systems, 2022.
- Sidky and Pan [2008a] Emil Y Sidky and Xiaochuan Pan. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Physics in Medicine & Biology, 2008a.
- Sidky and Pan [2008b] Emil Y Sidky and Xiaochuan Pan. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Physics in Medicine & Biology, 2008b.
- Tochilkin et al. [2024] Dmitry Tochilkin, David Pankratz, Zexiang Liu, Zixuan Huang, Adam Letts, Yangguang Li, Ding Liang, Christian Laforte, Varun Jampani, and Yan-Pei Cao. Triposr: Fast 3d object reconstruction from a single image. arXiv preprint arXiv:2403.02151, 2024.
- United Imaging Healthcare [2023] United Imaging Healthcare. Uct 960+. https://eu.united-imaging.com/en/product-service/products/ct/uct-960, 2023.
- Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
- Wang et al. [2022] Ce Wang, Kun Shang, Haimiao Zhang, Qian Li, and S Kevin Zhou. Dudotrans: dual-domain transformer for sparse-view ct reconstruction. In International Workshop on Machine Learning for Medical Image Reconstruction, 2022.
- Wang et al. [2018] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images. In ECCV, 2018.
- Wang et al. [2023] Peng Wang, Hao Tan, Sai Bi, Yinghao Xu, Fujun Luan, Kalyan Sunkavalli, Wenping Wang, Zexiang Xu, and Kai Zhang. Pf-lrm: Pose-free large reconstruction model for joint pose and shape prediction. arXiv preprint arXiv:2311.12024, 2023.
- Wang et al. [2004] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 2004.
- Wei et al. [2024] Xinyue Wei, Kai Zhang, Sai Bi, Hao Tan, Fujun Luan, Valentin Deschaintre, Kalyan Sunkavalli, Hao Su, and Zexiang Xu. Meshlrm: Large reconstruction model for high-quality meshes. arXiv preprint arXiv:2404.12385, 2024.
- Wu et al. [2020] Rundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, and Baoquan Chen. Pq-net: A generative part seq2seq network for 3d shapes. In CVPR, 2020.
- Xu et al. [2019] Qiangeng Xu, Weiyue Wang, Duygu Ceylan, Radomir Mech, and Ulrich Neumann. Disn: Deep implicit surface network for high-quality single-view 3d reconstruction. In NeurIPS, 2019.
- Yang et al. [2024] Jiancheng Yang, Rui Shi, Liang Jin, Xiaoyang Huang, Kaiming Kuang, Donglai Wei, Shixuan Gu, Jianying Liu, Pengfei Liu, Zhizhong Chai, Yongjie Xiao, Hao Chen, Liming Xu, Bang Du, Xiangyi Yan, Hao Tang, Adam Alessio, Gregory Holste, Jiapeng Zhang, Xiaoming Wang, Jianye He, Lixuan Che, Hanspeter Pfister, Ming Li, and Bingbing Ni. Deep rib fracture instance segmentation and classification from ct on the ribfrac challenge. arXiv Preprint, 2024.
- Yu et al. [2006] Lifeng Yu, Yu Zou, Emil Y Sidky, Charles A Pelizzari, Peter Munro, and Xiaochuan Pan. Region of interest reconstruction from truncated data in circular cone-beam ct. TMI, 2006.
- Zha et al. [2022] Ruyi Zha, Yanhao Zhang, and Hongdong Li. Naf: neural attenuation fields for sparse-view cbct reconstruction. In MICCAI, 2022.
- Zha et al. [2024a] Ruyi Zha, Tao Jun Lin, Yuanhao Cai, Jiwen Cao, Yanhao Zhang, and Hongdong Li. R2-gaussian: Rectifying radiative gaussian splatting for tomographic reconstruction. In NeurIPS, 2024a.
- Zha et al. [2024b] Ruyi Zha, Tao Jun Lin, Yuanhao Cai, Jiwen Cao, Yanhao Zhang, and Hongdong Li. R2-gaussian: Rectifying radiative gaussian splatting for tomographic reconstruction. In NeurIPS, 2024b.
- Zhang et al. [2024] Kai Zhang, Sai Bi, Hao Tan, Yuanbo Xiangli, Nanxuan Zhao, Kalyan Sunkavalli, and Zexiang Xu. Gs-lrm: Large reconstruction model for 3d gaussian splatting. In ECCV, 2024.