
X-LRM: X-ray Large Reconstruction Model for
Extremely Sparse-View Computed Tomography Recovery in One Second

Guofeng Zhang1,∗, Ruyi Zha2,∗, Hao He3, Yixun Liang3, Alan Yuille1, Hongdong Li2, Yuanhao Cai1,†
1 Johns Hopkins University, 2 Australian National University, 3 HKUST
Abstract

Sparse-view 3D CT reconstruction aims to recover volumetric structures from a limited number of 2D X-ray projections. Existing feedforward methods are constrained by the limited capacity of CNN-based architectures and the scarcity of large-scale training datasets. In this paper, we propose an X-ray Large Reconstruction Model (X-LRM) for extremely sparse-view (<10 views) CT reconstruction. X-LRM consists of two key components: X-former and X-triplane. Our X-former can handle an arbitrary number of input views using an MLP-based image tokenizer and a Transformer-based encoder. The output tokens are then upsampled into our X-triplane representation, which models the 3D radiodensity as an implicit neural field. To support the training of X-LRM, we introduce Torso-16K, a large-scale dataset comprising over 16K volume-projection pairs of various torso organs. Extensive experiments demonstrate that X-LRM outperforms the state-of-the-art method by 1.5 dB while achieving 27× faster speed and better flexibility. Furthermore, the downstream evaluation on lung segmentation also suggests the practical value of our approach. Our code, pre-trained models, and dataset will be released at https://github.com/caiyuanhao1998/X-LRM.

footnotetext: ∗ = Equal Contribution. † = Corresponding Author

1 Introduction

Computed Tomography (CT) uses X-rays with strong penetrating power to reveal internal structures non-invasively. It is widely used in medical imaging for disease diagnosis, treatment planning, and surgical navigation [13, 14, 24, 25]. In particular, CT reconstruction aims to recover the 3D radiodensity of the object given 2D X-ray projections.

Traditional methods [18, 67, 54, 2] usually require hundreds or even thousands of X-ray projections to yield good reconstruction quality, which exposes patients and radiographers to significant harmful radiation. Recently, some self-supervised algorithms based on neural radiance fields (NeRF) [8, 68] or 3D Gaussian splatting (3DGS) [6, 70] have been designed to reconstruct CT from 40–50 projections. Yet, these methods usually require a long time (~15 minutes) for each reconstruction, and the radiation dose is still relatively high. In this work, we study extremely sparse-view (<10 views) CT reconstruction in a feedforward manner to achieve inference within one second.

Figure 1: Our X-LRM outperforms previous state-of-the-art 3D feedforward methods in terms of quality and efficiency, including DIF-Net [34], DIF-Gaussian [35], and C2RV [36]. Our collected CT dataset, Torso-16K, is over 18× larger than previous benchmarks: LUNA16 [52], ToothFairy [12], and AAPM-Myo [43].

Some recent works [27, 36, 35, 34, 40] also explore this task. However, existing feedforward methods suffer from the following issues. (i) They rely on single-organ datasets containing fewer than 1,000 cases [52, 12, 43], which severely lack the diversity and scale required to develop robust and generalizable models. (ii) Previous feedforward methods are mainly based on convolutional neural networks (CNNs), which exhibit limitations in effectively capturing long-range dependencies and in scaling to large-scale training. (iii) The number of input projections of existing methods is fixed and cannot be adjusted due to the computational scheme of CNNs, which lacks flexibility and limits practical application.

To cope with these problems, we design an X-ray Large Reconstruction Model (X-LRM) for extremely sparse-view (<10 views) CT recovery. X-LRM consists of two parts: X-former and X-triplane. Firstly, X-former uses a multi-layer perceptron (MLP) based image tokenizer to split an arbitrary number of input images into patch tokens. Then X-former adopts a pure Transformer [17] encoder to compute self-attention among these patch tokens. The output tokens of X-former are then upsampled and reshaped into our X-triplane representation. The point feature of the X-triplane is fed into an MLP to learn an implicit neural field of the 3D volume radiodensity. To explore the potential of large-scale training, we collect a 3D CT reconstruction dataset, Torso-16K, containing ~16K volume-projection pairs. With the proposed techniques and dataset, our X-LRM significantly benefits from large-scale training to boost reconstruction performance and flexibly handles different numbers of input X-ray projections.

In a nutshell, our contributions can be summarized as:

  • We propose a novel feedforward framework, X-LRM, for extremely sparse-view CT reconstruction.

  • We design a Transformer-based encoder, X-former, to flexibly encode an arbitrary number of input X-ray projections. Besides, we present a new 3D representation, X-triplane, to model the radiodensity in X-ray imaging.

  • We collect a large-scale dataset, Torso-16K, containing over 16K samples of 2D X-ray projections and 3D CT volumes. To the best of our knowledge, our Torso-16K is the largest CT reconstruction benchmark and is over 18× larger than the existing largest dataset in the literature.

  • Our X-LRM drastically outperforms the SOTA method by 1.5 dB in PSNR and achieves 27× faster inference speed.

2 Related Work

2.1 Sparse-View CT Reconstruction

We adopt a cone-beam CT (CBCT) setup that acquires multi-view 2D X-ray projections for volumetric reconstruction. Existing sparse-view CT reconstruction approaches can be categorized into optimization-based and prediction-based methods. Optimization-based methods iteratively refine the 3D volume to align with the measured projections. Traditional methods [2, 51, 55] formulate reconstruction as a maximum a posteriori problem, while learning-based methods leverage neural representations [68, 53, 7, 70, 6] and diffusion models [10, 11, 31]. Despite their effectiveness, these methods typically require minutes to hours to process a single case, making them impractical for real-time clinical applications. Prediction-based methods, in contrast, utilize neural networks such as CNNs [50] to learn semantic priors from external datasets. Given a test case, they employ pre-trained models for projection extrapolation [3, 19], slice denoising [59, 28, 40], or volume regression [34, 35, 36]. While these methods enable rapid inference, they are constrained by the limited capacity of CNN-based architectures and the scarcity of large-scale training datasets. We aim to cope with these problems with our X-LRM model.

2.2 Feedforward 3D Reconstruction

Unlike optimization-based methods such as NeRF [45] and 3D Gaussian splatting [30], which require a time-consuming optimization phase for shape recovery, feedforward 3D reconstruction aims to learn diverse geometry types (e.g., meshes [60, 64] and implicit fields [44, 65]) from input images in a single forward pass of a neural network. Benefiting from large-scale 3D datasets like Objaverse-XL [16, 15] and the scalability of Transformer architectures [58], the Large Reconstruction Model [23] and its subsequent variants [32, 56, 61, 63, 71, 20, 9] have greatly advanced reconstruction quality and efficiency in this field. However, due to the scarcity of CT reconstruction data, current feedforward CT reconstruction methods often suffer from poor reconstruction quality and generalization ability. Our goal is to fill these research gaps.

3 Method

Figure 2: The overall architecture of X-LRM: (a) We collect Torso-16K, the largest CT reconstruction dataset (Sec. 4.1) to the best of our knowledge. (b) Our X-Former features an image tokenizer and encoder, designed to process a variable number of input views (Sec. 3.1). (c) Our X-Triplane includes a triplane decoder followed by our implicit neural field, directly predicting the 3D CT volume $\hat{\mathbf{U}}$ (Sec. 3.2).

The pipeline of our method is shown in Fig. 2. Our X-LRM consists of two parts: X-former and X-triplane, corresponding to Fig. 2 (b) and Fig. 2 (c). X-former begins with an MLP-based image tokenizer. Then a Transformer-based encoder processes an arbitrary number of multi-view image tokens with view-associated ray information into patch-based features. These features are then mapped into triplane tokens through cross-attention in a triplane decoder. We upsample and unpatchify these tokens to form our X-triplane representation. Finally, we adopt an MLP to learn an implicit mapping from the 3D point features on the triplane to the corresponding volume radiodensity.

3.1 X-former

As aforementioned, existing feedforward methods, primarily CNN-based, struggle with large-scale training, capturing long-range dependencies, and handling varying numbers of projections. These limitations lead to unsatisfactory performance and poor flexibility. To cope with these problems, we design an X-former consisting of an MLP-based image tokenizer and a Transformer-based image encoder.

Image Tokenizer. As shown in Fig. 2 (b), the input to the tokenizer is multi-view X-ray projections $\mathbf{I}_i \in \mathbb{R}^{H\times W\times 1}$ concatenated with the corresponding viewpoint camera conditions $\mathbf{C}_i \in \mathbb{R}^{H\times W\times 6}$. We denote the input at the $i$-th view as $\mathbf{X}_i = [\mathbf{I}_i, \mathbf{C}_i] \in \mathbb{R}^{H\times W\times 7}$. During training, X-former can take varying numbers of views, denoted as $\mathcal{V} = \{V_1, V_2, \dots, V_m\}$, where $m$ is the number of different view-count settings. For a specific $V_i$, the input is denoted as $\mathbf{X} = [\mathbf{X}_1, \mathbf{X}_2, \dots, \mathbf{X}_{V_i}] \in \mathbb{R}^{V_i\times H\times W\times 7}$, where $V_i \in \mathcal{V}$ can change dynamically during training.

Specifically, we adopt the Reference-Point Plücker Coordinate (RPPC) [9] as our camera condition since it provides more information about the ray position and the relative depth than the standard Plücker coordinate. Thus, we have $\mathbf{C}_i = \left(\mathbf{o}_i - (\mathbf{o}_i\cdot\mathbf{d}_i)\,\mathbf{d}_i,\; \mathbf{d}_i\right)$, which better captures spatial relationships. Here, $\mathbf{o}_i$ and $\mathbf{d}_i$ denote the origins and directions of the pixel-aligned rays at the $i$-th view.
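
As a concrete illustration, the RPPC condition can be computed per pixel from the ray origins and directions. Below is a minimal PyTorch sketch; the tensor shapes and the function name are our own assumptions, not the released implementation:

```python
import torch

def rppc_condition(origins: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
    """origins, dirs: (H, W, 3) per-pixel ray origins and directions.
    Returns the 6-channel condition C_i = (o - (o . d) d, d)."""
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)        # normalize ray directions
    proj = (origins * dirs).sum(dim=-1, keepdim=True)    # o . d for every pixel
    ref_point = origins - proj * dirs                    # closest point on the ray to the world origin
    return torch.cat([ref_point, dirs], dim=-1)          # (H, W, 6) camera condition
```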

Subsequently, the tokenizer partitions each input view into non-overlapping patches and projects each patch into a latent space of dimension $d_E$ via an MLP layer. Then we fuse the patchified tokens of different views by concatenating them to derive the initial patch-wise tokens $\mathbf{H} \in \mathbb{R}^{n\times d_E}$.
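
A minimal sketch of such an MLP-based tokenizer is given below, assuming a patch size of 16 and the embedding dimension $d_E = 384$ reported in Sec. 4.1; the class name and exact patchify layout are illustrative only:

```python
import torch
import torch.nn as nn

class ImageTokenizer(nn.Module):
    def __init__(self, patch_size: int = 16, in_ch: int = 7, d_e: int = 384):
        super().__init__()
        self.p = patch_size
        self.proj = nn.Linear(in_ch * patch_size * patch_size, d_e)  # per-patch MLP projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (V, H, W, 7) stacked [I_i, C_i] inputs for V views
        v, h, w, c = x.shape
        x = x.view(v, h // self.p, self.p, w // self.p, self.p, c)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(v, -1, self.p * self.p * c)  # non-overlapping patches
        tokens = self.proj(x)                          # (V, n_patches, d_E)
        return tokens.reshape(-1, tokens.shape[-1])    # concatenate views: (n, d_E)
```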

Image Encoder. The feature tokens $\mathbf{H}$ are then encoded by a Transformer-based image encoder to produce the feature tokens $\mathbf{F} \in \mathbb{R}^{n\times d_E}$, where $d_E$ is the hidden dimension of our image encoder. The image encoder consists of $N_e$ self-attention blocks [58], and each block comprises a multi-head self-attention layer and an MLP layer. We add layer normalization [4] before both layers. For the $j$-th self-attention block, we first split the input $\mathbf{H}^j_{in}$ into $k_E$ heads as

$\mathbf{H}^{j}_{in} = [\mathbf{H}^{j}_{1}, \mathbf{H}^{j}_{2}, \dots, \mathbf{H}^{j}_{k_E}]. \qquad (1)$

Then for the $i$-th head, we project the input $\mathbf{H}^{j}_{i}$ into query $\mathbf{Q}^{j}_{i} \in \mathbb{R}^{n\times d_{ke}}$, key $\mathbf{K}^{j}_{i} \in \mathbb{R}^{n\times d_{ke}}$, and value $\mathbf{V}^{j}_{i} \in \mathbb{R}^{n\times d_{ke}}$ as

$\mathbf{Q}^{j}_{i} = \mathbf{H}^{j}_{i}\mathbf{W}_{\mathbf{Q}^{j}_{i}},\;\; \mathbf{K}^{j}_{i} = \mathbf{H}^{j}_{i}\mathbf{W}_{\mathbf{K}^{j}_{i}},\;\; \mathbf{V}^{j}_{i} = \mathbf{H}^{j}_{i}\mathbf{W}_{\mathbf{V}^{j}_{i}}, \qquad (2)$

where $\mathbf{W}_{\mathbf{Q}^{j}_{i}}$, $\mathbf{W}_{\mathbf{K}^{j}_{i}}$, $\mathbf{W}_{\mathbf{V}^{j}_{i}} \in \mathbb{R}^{d_E\times d_{ke}}$ are learnable parameters of the $fc$ layers and $d_{ke} = d_E / k_E$. Then the output of the $i$-th head of the $j$-th self-attention layer, $\mathbf{A}^{j}_{i}$, is computed as

$\mathbf{A}^{j}_{i} = \mathrm{softmax}\!\left(\frac{\mathbf{Q}^{j}_{i}(\mathbf{K}^{j}_{i})^{\top}}{\sqrt{d_{ke}}}\right)\mathbf{V}^{j}_{i} + \mathbf{H}^{j}_{i}. \qquad (3)$

Then the $k_E$ heads are concatenated and passed through a fully connected ($fc$) layer to derive the output of self-attention as

$\mathbf{H}^{j}_{mid} = [\mathbf{A}^{j}_{1}, \mathbf{A}^{j}_{2}, \dots, \mathbf{A}^{j}_{k_E}]\,\mathbf{W}^{j}_{s}, \qquad (4)$

where $\mathbf{W}^{j}_{s} \in \mathbb{R}^{d_E\times d_E}$ is the learnable parameter of the $fc$ layer. Then we forward $\mathbf{H}^{j}_{mid}$ to the MLP layer:

$\mathbf{H}^{j}_{out} = \sigma(\mathbf{H}^{j}_{mid}\mathbf{W}_1 + \mathbf{b}_1)\mathbf{W}_2 + \mathbf{b}_2 + \mathbf{H}^{j}_{mid}, \qquad (5)$

where $\sigma$ is the activation function, and $\mathbf{W}_1, \mathbf{W}_2, \mathbf{b}_1, \mathbf{b}_2$ are learnable parameters of the $fc$ layers. Finally, the output of the last layer of the image encoder is $\mathbf{F} = \mathbf{H}^{N_e}_{out} \in \mathbb{R}^{n\times d_E}$. This process is illustrated in Fig. 2 (b).
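
For concreteness, the sketch below implements one pre-LN self-attention block corresponding to Eqs. (1)-(5), using PyTorch's built-in multi-head attention; the residual placement follows the standard Transformer formulation, and the head count and MLP ratio are assumptions:

```python
import torch
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    def __init__(self, d_e: int = 384, n_heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_e)                                   # pre-norm before attention
        self.attn = nn.MultiheadAttention(d_e, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_e)                                   # pre-norm before the MLP
        self.mlp = nn.Sequential(
            nn.Linear(d_e, mlp_ratio * d_e), nn.GELU(), nn.Linear(mlp_ratio * d_e, d_e)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (B, n, d_E) patch tokens
        x = self.norm1(h)
        h = h + self.attn(x, x, x, need_weights=False)[0]  # multi-head self-attention, Eqs. (1)-(4)
        h = h + self.mlp(self.norm2(h))                    # MLP layer with residual, Eq. (5)
        return h
```

Stacking $N_e = 12$ such blocks over the concatenated view tokens yields $\mathbf{F}$.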

Our X-former leverages the inherent flexibility of the transformer architecture, which can naturally process input tokens of different lengths. This allows our model to seamlessly train with different numbers of input views within a single training session, boosting the reconstruction performance and resulting in a unified framework capable of handling diverse multi-view configurations.

| Dataset | Body Parts | # of Volumes |
| --- | --- | --- |
| AbdomenAtlas v1.0 [33] | Abdomen, Chest, Pelvis | 5,171 |
| RSNA2023 [21] | Abdomen, Pelvis | 4,711 |
| AMOS [26] | Abdomen | 1,851 |
| PENGWIN [37] | Pelvis | 100 |
| TCIA [1, 46] | Abdomen | 833 |
| MELA [47] | Chest | 1,100 |
| FLARE24 (subset) [41] | Abdomen, Chest | 1,868 |
| FUMPE [42] | Chest | 35 |
| LNDb [49] | Chest | 294 |
| RibFrac [29, 66] | Abdomen, Chest | 660 |
| Torso-16K (Ours) | Abdomen, Chest, Pelvis | 16,623 |
Table 1: The statistics of our collected Torso-16K benchmark. Torso-16K integrates ten public datasets covering major anatomical regions in different clinical applications.
Figure 3: Example volumes and X-ray projections in the Torso-16K dataset.

3.2 X-triplane

To lift the features from 2D projections into 3D space, we design a Transformer-based decoder to map the 2D patch-wise features $\mathbf{F}$ into 3D triplane tokens $\mathbf{Z} \in \mathbb{R}^{(3\times 32\times 32)\times d_D}$, where $d_D$ is the hidden dimension of the triplane decoder. $\mathbf{Z}$ is later upsampled and reshaped into our X-triplane representation. Then we adopt an MLP to learn an implicit mapping from the 3D point feature on the triplane representation to the corresponding radiodensity.

Triplane Decoder. As shown in Fig. 2 (c), the input of the triplane decoder includes $\mathbf{F}$ and a set of learnable triplane embeddings $\mathbf{E} \in \mathbb{R}^{(3\times 32\times 32)\times d_D}$. Our triplane decoder has $N_d$ cross-attention blocks. Each cross-attention block comprises a cross-attention layer, a self-attention layer, and an MLP layer. To guide the reconstruction of the triplane tokens and lift the features into 3D space, we adopt the cross-attention mechanism to extract 2D projection and camera information by querying the input features $\mathbf{F}$.

Similar to self-attention, for the $j$-th cross-attention block in our triplane decoder, we first split $\mathbf{F}$ and the input triplane embeddings $\mathbf{E}^{j}_{in}$ into $k_D$ heads as

$\mathbf{F} = [\mathbf{F}_1, \mathbf{F}_2, \dots, \mathbf{F}_{k_D}], \quad \mathbf{E}^{j}_{in} = [\mathbf{E}^{j}_{1}, \mathbf{E}^{j}_{2}, \dots, \mathbf{E}^{j}_{k_D}]. \qquad (6)$

Then for the $i$-th cross-attention head, we project the triplane embedding $\mathbf{E}^{j}_{i}$ into query $\mathbf{Q}^{j}_{i} \in \mathbb{R}^{(3\times 32\times 32)\times d_{kd}}$, and project $\mathbf{F}_i$ into key $\mathbf{K}^{j}_{i} \in \mathbb{R}^{n\times d_{kd}}$ and value $\mathbf{V}^{j}_{i} \in \mathbb{R}^{n\times d_{kd}}$ by three $fc$ layers, where $d_{kd} = d_D / k_D$. Then the output of the $i$-th head of the $j$-th cross-attention layer, $\mathbf{B}^{j}_{i}$, is computed as

$\mathbf{B}^{j}_{i} = \mathrm{softmax}\!\left(\frac{\mathbf{Q}^{j}_{i}(\mathbf{K}^{j}_{i})^{\top}}{\sqrt{d_{kd}}}\right)\mathbf{V}^{j}_{i} + \mathbf{E}^{j}_{i}. \qquad (7)$

Subsequently, the $k_D$ heads are concatenated and passed through an $fc$ layer to obtain the output of the cross-attention layer:

$\mathbf{E}^{j}_{mid} = [\mathbf{B}^{j}_{1}, \mathbf{B}^{j}_{2}, \dots, \mathbf{B}^{j}_{k_D}]\,\mathbf{W}^{j}_{c}, \qquad (8)$

where $\mathbf{W}^{j}_{c} \in \mathbb{R}^{d_D\times d_D}$ is the learnable parameter of the $fc$ layer. Similar to the self-attention (SA) and MLP layers in Sec. 3.1, we have

$\mathbf{E}^{j}_{out} = \mathrm{MLP}\big(\mathrm{SA}(\mathbf{E}^{j}_{mid}) + \mathbf{E}^{j}_{mid}\big) + \mathrm{SA}(\mathbf{E}^{j}_{mid}). \qquad (9)$

Finally, the output of the last layer of our triplane decoder is $\mathbf{Z} = \mathbf{E}^{N_d}_{out} \in \mathbb{R}^{(3\times 32\times 32)\times d_D}$, as illustrated in Fig. 2 (c). $\mathbf{Z}$ is further upsampled by a deconvolution layer and unpatchified into our X-triplane representation $\mathbf{T}$.
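
The following sketch shows one such cross-attention block (Eqs. (6)-(9)) in PyTorch, with triplane tokens as queries and image features as keys/values; the layer names, pre-normalization, and head count are our assumptions rather than the released code, and the image features are assumed to have already been projected to $d_D$:

```python
import torch
import torch.nn as nn

class TriplaneDecoderBlock(nn.Module):
    def __init__(self, d_d: int = 512, n_heads: int = 8):
        super().__init__()
        self.cross = nn.MultiheadAttention(d_d, n_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(d_d, n_heads, batch_first=True)
        self.norm_e = nn.LayerNorm(d_d)  # pre-norm for triplane tokens (queries)
        self.norm_f = nn.LayerNorm(d_d)  # pre-norm for image features (keys/values)
        self.norm_s = nn.LayerNorm(d_d)  # pre-norm before self-attention
        self.mlp = nn.Sequential(nn.Linear(d_d, 4 * d_d), nn.GELU(), nn.Linear(4 * d_d, d_d))

    def forward(self, e: torch.Tensor, f: torch.Tensor) -> torch.Tensor:
        # e: (B, 3*32*32, d_D) triplane tokens; f: (B, n, d_D) image feature tokens
        f_n = self.norm_f(f)
        e = e + self.cross(self.norm_e(e), f_n, f_n, need_weights=False)[0]  # cross-attention, Eqs. (6)-(8)
        s = self.self_attn(self.norm_s(e), self.norm_s(e), self.norm_s(e),
                           need_weights=False)[0]                            # self-attention on E_mid
        return self.mlp(s + e) + s                                           # Eq. (9)
```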

| Type | Method | Time (s)↓ | 6-View PSNR↑ / SSIM↑ | 8-View PSNR↑ / SSIM↑ | 10-View PSNR↑ / SSIM↑ |
| --- | --- | --- | --- | --- | --- |
| Traditional | FDK | **0.008** | 9.51 / 0.039 | 10.68 / 0.047 | 11.46 / 0.058 |
| Traditional | ASD-POCS | 1.385 | 22.17 / 0.573 | 23.40 / 0.612 | 24.62 / 0.667 |
| Traditional | SART | 1.400 | 22.61 / 0.537 | 23.56 / 0.548 | 24.57 / 0.585 |
| 2D Feedforward | FBPConvNet | 0.010 | 26.99 / 0.704 | 27.22 / 0.722 | 28.05 / 0.737 |
| 2D Feedforward | FreeSeed | 0.163 | 28.93 / 0.841 | 30.08 / 0.843 | 30.17 / 0.855 |
| 3D Feedforward | DIF-Net | 0.445 | 26.10 / 0.627 | 26.81 / 0.663 | 27.47 / 0.708 |
| 3D Feedforward | DIF-Gaussian | 0.621 | 28.19 / 0.813 | 28.53 / 0.820 | 29.52 / 0.848 |
| 3D Feedforward | C2RV | 3.837 | 29.51 / 0.850 | 29.83 / 0.849 | 30.96 / 0.871 |
| 3D Feedforward | X-LRM (Ours) | 0.141 | **31.05 / 0.910** | **31.24 / 0.912** | **31.33 / 0.915** |
Table 2: Comparison with traditional and feedforward methods on 750 test cases. The best results are in bold.
Figure 4: Qualitative results of feedforward methods. From top to bottom: 10-view axial, 8-view coronal, and 6-view sagittal slices.

Triplane Implicit Neural Field. Our X-triplane $\mathbf{T}$ is composed of three orthogonal feature planes: $\mathbf{T}_{xy}$, $\mathbf{T}_{yz}$, and $\mathbf{T}_{xz} \in \mathbb{R}^{(64\times 64)\times d_T}$, where $64\times 64$ is the spatial resolution of each plane and $d_T$ is the dimension of the point feature $\mathbf{P}_{\boldsymbol{x}} \in \mathbb{R}^{3\times d_T}$. We then build an implicit neural field mapping the position of a 3D point to its radiodensity.

For a given 3D point $\boldsymbol{x} = (x, y, z) \in [-1, 1]^3$ within the unit bounding box (where each coordinate is normalized), we obtain its feature embeddings by projecting it onto the three orthogonal feature planes $\mathbf{T}_{xy}$, $\mathbf{T}_{yz}$, and $\mathbf{T}_{xz}$ at $\mathbf{p}_{xy} = (x, y)$, $\mathbf{p}_{yz} = (y, z)$, and $\mathbf{p}_{xz} = (x, z)$, respectively. We then apply bilinear interpolation to extract features from each plane. Taking the $xy$-plane $\mathbf{T}_{xy}$ and a point $\mathbf{p}_{xy} = (x, y)$ as an example, the interpolated feature value is computed as

$\mathbf{T}_{xy}(\mathbf{p}_{xy}) = (1-\alpha)(1-\beta)\,\mathbf{T}_{xy}(x_0, y_0) + \alpha(1-\beta)\,\mathbf{T}_{xy}(x_1, y_0) + (1-\alpha)\beta\,\mathbf{T}_{xy}(x_0, y_1) + \alpha\beta\,\mathbf{T}_{xy}(x_1, y_1), \qquad (10)$

where $x_0, x_1$ and $y_0, y_1$ are the neighboring grid points, and the interpolation weights are $\alpha = x - x_0$ and $\beta = y - y_0$. Applying this bilinear interpolation to all three planes, we obtain the feature representation at the point $\boldsymbol{x}$ as

$\mathbf{P}_{\boldsymbol{x}} = \big(\mathbf{T}_{xy}(\mathbf{p}_{xy}), \mathbf{T}_{yz}(\mathbf{p}_{yz}), \mathbf{T}_{xz}(\mathbf{p}_{xz})\big). \qquad (11)$

As the radiodensity is isotropic and depends only on the point itself, we adopt an MLP to learn the mapping $f_{\text{INF}}$ from the point feature $\mathbf{P}_{\boldsymbol{x}}$ to the radiodensity $\rho_{\boldsymbol{x}}$ as

$f_{\text{INF}}: \big(\mathbf{T}_{xy}(\mathbf{p}_{xy}), \mathbf{T}_{yz}(\mathbf{p}_{yz}), \mathbf{T}_{xz}(\mathbf{p}_{xz})\big) \rightarrow \rho_{\boldsymbol{x}}. \qquad (12)$
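
A minimal PyTorch sketch of this triplane sampling and radiodensity MLP is shown below; it uses `grid_sample` for the bilinear interpolation of Eq. (10), while the plane-axis convention and the exact MLP layout are assumptions (four layers with hidden dimension 64, as stated in Sec. 4.1):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneField(nn.Module):
    def __init__(self, d_t: int = 32, hidden: int = 64):
        super().__init__()
        # f_INF: 4-layer MLP from the concatenated 3 * d_T point feature to radiodensity
        self.mlp = nn.Sequential(
            nn.Linear(3 * d_t, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, planes: torch.Tensor, pts: torch.Tensor) -> torch.Tensor:
        # planes: (B, 3, d_T, 64, 64) for the xy/yz/xz planes; pts: (B, N, 3) in [-1, 1]
        coords = torch.stack(
            [pts[..., [0, 1]], pts[..., [1, 2]], pts[..., [0, 2]]], dim=1
        )                                                   # (B, 3, N, 2) projections onto the planes
        b, _, d_t, h, w = planes.shape
        feats = F.grid_sample(
            planes.reshape(b * 3, d_t, h, w),
            coords.reshape(b * 3, -1, 1, 2),
            mode="bilinear", align_corners=True,
        )                                                   # bilinear interpolation, Eq. (10)
        feats = feats.squeeze(-1).reshape(b, 3 * d_t, -1).transpose(1, 2)  # (B, N, 3*d_T), Eq. (11)
        return self.mlp(feats).squeeze(-1)                  # radiodensity per point, Eq. (12)
```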

3.3 Training Objective

Existing RGB 3D reconstruction methods mainly adopt a 2D rendering loss to achieve good image recovery quality. However, this supervision involves volume rendering, which needs to sample many 3D points per ray and is therefore time-consuming. Besides, in X-ray imaging, the 3D CT reconstruction matters more than the 2D X-ray rendering. Thus, we adopt a more precise 3D reconstruction loss over varying numbers of input views:

$\mathcal{L}_{recon} = \frac{1}{m}\sum_{V_i\in\mathcal{V}} \big\|\hat{\mathbf{U}}_{V_i} - \mathbf{U}_{gt}\big\|^2, \qquad (13)$

where $\mathcal{V} = \{V_1, V_2, \dots, V_m\}$ represents the training settings with different input view numbers $V_i$, $\hat{\mathbf{U}}_{V_i}$ is the CT volume reconstructed by our X-LRM given $V_i$ input views, and $\mathbf{U}_{gt}$ is the ground-truth CT volume.
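
The loss in Eq. (13) can be sketched as follows, assuming `model(projections, cams)` returns the predicted CT volume for one view-count setting (the interface and the use of a mean-squared error as the L2 objective are our assumptions):

```python
import torch

def recon_loss(model, views_by_count: dict, u_gt: torch.Tensor) -> torch.Tensor:
    """views_by_count maps a view count V_i (e.g. 6, 8, 10) to its (projections, cams)."""
    losses = []
    for v_i, (projections, cams) in views_by_count.items():
        u_hat = model(projections, cams)              # reconstruct with V_i input views
        losses.append(((u_hat - u_gt) ** 2).mean())   # squared error against the ground-truth volume
    return torch.stack(losses).mean()                 # average over the m view settings
```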

| Type | Method | Time↓ | 6-View PSNR↑ / SSIM↑ | 8-View PSNR↑ / SSIM↑ | 10-View PSNR↑ / SSIM↑ |
| --- | --- | --- | --- | --- | --- |
| Self-Supervised | NAF | 11m | 23.86 / 0.644 | 24.64 / 0.654 | 25.38 / 0.685 |
| Self-Supervised | R2-Gaussian | 6m | 20.28 / 0.528 | 20.79 / 0.529 | 22.09 / 0.581 |
| Self-Supervised | SAX-NeRF | 8h | 24.08 / 0.669 | 24.73 / 0.674 | 25.68 / 0.692 |
| Diffusion-Based | DDS | 12m | 24.42 / 0.529 | 25.64 / 0.570 | 26.64 / 0.607 |
| Diffusion-Based | DiffusionMBIR | 11h | 26.61 / 0.734 | 28.51 / 0.803 | 30.05 / 0.835 |
| 3D Feedforward | X-LRM (Ours) | **0.14s** | **30.14 / 0.888** | **30.10 / 0.886** | **30.28 / 0.889** |
Table 3: Comparison with self-supervised and diffusion-based methods on 10 test cases. Our method is 3.53 dB higher and 2570× faster.
Figure 5: Qualitative comparison with self-supervised and diffusion-based methods on 6, 8, and 10-view CT reconstruction.

4 Experiment

4.1 Experimental Setup

Datasets.

Previous works rely on small datasets [43, 12, 52] (fewer than 1,000 samples), which limits the ability to train robust and generalizable models. To overcome this constraint, we introduce Torso-16K, the largest and most diverse CT reconstruction dataset, comprising 16,623 real-world CT scans from ten public datasets (Tab. 1). It covers key anatomical regions in clinical applications, including the chest, abdomen, and pelvis. Some examples are shown in Fig. 3. Torso-16K is split into 15,000 / 873 / 750 cases for training, validation, and testing. We standardize CT scans by resampling and cropping them to a 50³ cm³ volume at 128³ resolution. Radiodensity values are normalized from the Hounsfield unit range [-1000, 1000] to [0, 1], ensuring coverage of the primary organs of interest. Since most public datasets only provide CT volumes, we render multi-view X-ray projections with the TIGRE toolbox [5]. These 256² projections span the full angular range of 0°–360° with a 3.92 mm² pixel spacing. To enhance realism, we add Gaussian and Poisson noise to simulate Compton scattering and adopt the geometry of the UCT 960+ scanner [57], with a 0.6 m source-object distance and a 1.118 m source-detector distance.
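
For reference, the Hounsfield-unit normalization and the evenly spaced full-range scanning angles described above can be sketched as follows (the resampling step and the TIGRE projection-rendering calls are omitted; function names are our own):

```python
import numpy as np

def normalize_hu(volume_hu: np.ndarray) -> np.ndarray:
    """Map radiodensity from the [-1000, 1000] HU range to [0, 1]."""
    return np.clip((volume_hu + 1000.0) / 2000.0, 0.0, 1.0)

def scan_angles(n_views: int) -> np.ndarray:
    """n_views angles evenly covering the full 0-360 degree range (returned in radians)."""
    return np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
```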

Implementation Details.

We implement X-LRM in PyTorch [48]. X-LRM is trained with the AdamW optimizer [39] ($\beta_1 = 0.9$, $\beta_2 = 0.95$). The weight decay coefficient $\lambda$ is set to 0.05. The initial learning rate is set to $4\times 10^{-4}$ and follows a cosine annealing schedule [38] with a warm-up phase of 3,000 iterations. For the network architecture, we utilize a ViT-B/16 Transformer encoder, which processes $256\times 256$ inputs into 257 feature tokens at an embedding dimension of $d_E = 384$ with $N_e = 12$ layers. The Transformer decoder consists of $N_d = 12$ layers with an output dimension of $d_D = 512$, while the X-triplane has a feature dimension of $d_T = 32$. The MLP used for radiodensity queries has four layers with a hidden dimension of 64.
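
A minimal sketch of this optimization setup (AdamW with cosine annealing and a 3,000-iteration warm-up) is given below; the warm-up start factor and the scheduler composition are our assumptions:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

def build_optimizer(model, total_iters: int, warmup_iters: int = 3000):
    # AdamW with the paper's hyper-parameters: lr 4e-4, betas (0.9, 0.95), weight decay 0.05
    opt = torch.optim.AdamW(model.parameters(), lr=4e-4,
                            betas=(0.9, 0.95), weight_decay=0.05)
    warmup = LinearLR(opt, start_factor=1e-3, total_iters=warmup_iters)   # linear warm-up phase
    cosine = CosineAnnealingLR(opt, T_max=total_iters - warmup_iters)     # cosine annealing afterwards
    sched = SequentialLR(opt, [warmup, cosine], milestones=[warmup_iters])
    return opt, sched
```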

During training, our model is designed to learn from a set of possible input view counts, $\mathcal{V} = \{6, 8, 10\}$. In each epoch, the same instance is processed three times, each time with a different number of views selected from $\mathcal{V}$. Training is conducted on 8 RTX A5000 GPUs with a per-GPU batch size of 6 for 100 epochs. For evaluation, we adopt the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) [62] as quantitative metrics. Note that PSNR is measured directly in 3D space, while SSIM is computed as the average of 2D SSIM values over slices.
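
The two metrics can be sketched as follows, with PSNR computed over the whole volume and SSIM averaged over 2D slices (scikit-image is assumed to be available, and the slicing axis is an assumption):

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr_3d(pred: np.ndarray, gt: np.ndarray, data_range: float = 1.0) -> float:
    """PSNR measured directly over the full 3D volume."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_2d_mean(pred: np.ndarray, gt: np.ndarray, data_range: float = 1.0) -> float:
    """Average 2D SSIM over slices along the first axis of the volume."""
    return float(np.mean([structural_similarity(p, g, data_range=data_range)
                          for p, g in zip(pred, gt)]))
```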

Figure 6: Visual comparison of lung segmentation on 6-view reconstructed CT slices with the recent best self-supervised method NAF [68], 2D feedforward method FreeSeed [40], diffusion-based method DiffusionMBIR [10], and 3D feedforward method C2RV [36].
| Method | Recon. PSNR | Recon. SSIM | Left Lung DICE | Left Lung ASD↓ | Right Lung DICE | Right Lung ASD↓ |
| --- | --- | --- | --- | --- | --- | --- |
| FDK | 9.14 | 0.03 | 0.34 | 43.41 | 0.26 | 45.12 |
| SART | 21.7 | 0.51 | 28.29 | 13.44 | 2.92 | 28.12 |
| ASD-POCS | 21.48 | 0.53 | 25.35 | 15.62 | 2.52 | 31.84 |
| FBPConvNet | 26.02 | 0.68 | 93.59 | 0.65 | 93.58 | 0.56 |
| FreeSeed | 27.77 | 0.83 | 91.01 | 1.07 | 90.56 | 0.82 |
| DIF-Net | 24.71 | 0.55 | 84.63 | 1.70 | 84.78 | 1.44 |
| DIF-Gaussian | 26.84 | 0.79 | 92.16 | 0.83 | 91.69 | 0.72 |
| C2RV | 28.24 | 0.83 | 91.47 | 0.88 | 90.28 | 0.87 |
| X-LRM (Ours) | **30.59** | **0.92** | **95.21** | **0.49** | **94.63** | **0.48** |
Table 4: Quantitative lung segmentation comparison of traditional and feedforward methods for 6-view reconstructed CTs.

4.2 Comparison with State-of-the-Art Methods

We evaluate our X-LRM model against baseline methods under different numbers of projection views (i.e. 6, 8, 10) using the following two different settings:

  • Traditional and feedforward methods: Traditional methods are directly tested on the 750-sample test set. The feedforward methods are first trained on the train set and then tested on the 750-sample test set.

  • Self-supervised and diffusion-based methods: We use a subset of 10 samples selected from the 750-sample test set, ensuring all 10 sub-datasets in Fig. 3 are covered. We test on this small dataset due to the long inference time (up to 11 hours per sample) of these methods.

Quantitative Results. Firstly, we compare X-LRM with three traditional methods (FDK [18], SART [2], and ASD-POCS [54]) and five feedforward methods (FBPConvNet [27], FreeSeed [40], DIF-Net [34], DIF-Gaussian [35], and C2RV [36]). The results are reported in Tab. 2. (i) When reconstructing CT volumes from 6, 8, and 10 X-ray projection views, our X-LRM surpasses the SOTA 2D feedforward method, FreeSeed, by 2.12, 1.16, and 1.16 dB in PSNR. Compared to the SOTA 3D feedforward method, C2RV, X-LRM improves the performance by 1.54, 1.41, and 0.37 dB in PSNR, while enjoying over 27× faster inference speed. (ii) Unlike previous feedforward methods, X-LRM offers better flexibility, as it can efficiently reconstruct CT volumes from different numbers of input views without training separate models, adapting well to input variations.

Secondly, we compare with three self-supervised methods (NAF [68], R2-Gaussian [69], and SAX-NeRF [8]) and two diffusion-based methods (DiffusionMBIR [10] and DDS [11]). The quantitative results are listed in Tab. 3. Our X-LRM achieves the best overall performance and fastest inference speed. Compared with the second-best method, DiffusionMBIR, our X-LRM is 3.53 dB higher in PSNR. Compared with the second-fastest method, R2-Gaussian, our method is over 2570× faster in inference.

| Method | Recon. PSNR | Recon. SSIM | Left Lung DICE | Left Lung ASD↓ | Right Lung DICE | Right Lung ASD↓ |
| --- | --- | --- | --- | --- | --- | --- |
| NAF | 21.91 | 0.57 | 48.54 | 19.14 | 50.08 | 9.49 |
| R2-Gaussian | 18.58 | 0.45 | 29.62 | 26.12 | 34.76 | 12.86 |
| SAX-NeRF | 21.83 | 0.59 | 39.34 | 27.55 | 19.87 | 20.86 |
| DiffusionMBIR | 25.31 | 0.72 | 93.10 | 0.74 | 93.25 | 0.67 |
| DDS | 23.47 | 0.53 | 71.04 | 2.61 | 69.58 | 2.60 |
| X-LRM (Ours) | **27.63** | **0.85** | **95.60** | **0.51** | **95.48** | **0.48** |
Table 5: Quantitative lung segmentation results of self-supervised and diffusion-based methods for 6-view reconstructed CTs.

Qualitative Results.

The qualitative results are depicted in Fig. 4 (compared with feedforward methods) and Fig. 5 (compared with self-supervised and diffusion-based methods). As observed from the reconstructed slices, all baseline methods struggle with generating high-quality reconstructions, particularly in sparser-view scenarios. Both feedforward and optimization-based approaches exhibit noticeable blurriness and lack of fine details, leading to incomplete anatomical structures and texture inconsistencies. Structural elements, such as lung regions and organ boundaries, appear unclear, often blending into surrounding areas due to the loss of high-frequency details.

| Metric | Base Model | + X-Triplane | + X-former |
| --- | --- | --- | --- |
| PSNR | 13.09 | 28.76 | 31.33 |
| SSIM | 0.42 | 0.84 | 0.92 |
Table 6: Break-down ablation towards higher performance by adding the components of X-LRM. The ablation study is conducted under the 10-view CT reconstruction setting.

In contrast, our X-LRM yields visually sharper reconstructions with well-defined textures and more coherent anatomical structures. Across different view settings, it effectively preserves fine-grained details while maintaining spatial smoothness. Our method consistently reconstructs realistic features with minimal artifacts, demonstrating high-quality performance in extremely sparse-view CT reconstruction.

Application in Segmentation.

We evaluate the reconstructed CT volumes on a downstream medical segmentation task. We employ the LungMask toolkit [22] to segment the left and right lungs from the CT reconstructions produced by various methods and compare the results against the ground-truth segmentation obtained from the original CT scans. Specifically, we evaluate the lung test data from the 750-sample and 10-sample test sets, testing the corresponding baseline methods and reporting reconstruction performance (PSNR and SSIM) alongside left and right lung segmentation accuracy (DICE and ASD) for 6-view reconstructed volumes. As shown in Tab. 4 and Tab. 5, X-LRM achieves superior reconstruction quality, surpassing C2RV and DiffusionMBIR by 2.35 and 2.32 dB in PSNR, respectively. Additionally, the higher DICE scores and lower ASD values on both the left and right lungs indicate that the 3D segmentation of the CT volume reconstructed by X-LRM has a larger overlap and smaller boundary discrepancies with the segmentation mask of the ground-truth CT volume. Fig. 6 shows the visual comparison with four kinds of recent best methods. Both quantitative and qualitative results demonstrate the ability of X-LRM to preserve anatomical structures more accurately and maintain precise shape consistency, surpassing other methods in both reconstruction fidelity and segmentation alignment. These results further highlight the practical advantages of our method.
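
For reference, the DICE overlap used here can be sketched as below on binary lung masks (ASD additionally requires a surface-distance computation, which we omit; the function name is our own):

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))
```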

4.3 Ablation Study

Ablation studies evaluate the effectiveness of the proposed modifications compared to the standard LRM, including X-former and X-triplane. Additionally, we assess the robustness of X-LRM under varying noisy scanning parameters, such as viewing angles, DSD, and DSO. The breakdown study is performed under the 10-view setting, while the robustness analysis is conducted under the 6-view setting.

| Noisy Angles | Noisy DSO | Noisy DSD | PSNR | SSIM (×10⁻²) |
| --- | --- | --- | --- | --- |
| - | - | - | 31.05 (-0.00) | 91.04 (-0.00) |
| ±0.5° | - | - | 30.93 (-0.12) | 90.95 (-0.09) |
| ±1° | - | - | 30.62 (-0.43) | 90.71 (-0.33) |
| - | ±2 mm | - | 30.85 (-0.20) | 90.89 (-0.15) |
| - | ±3 mm | - | 30.67 (-0.38) | 90.73 (-0.31) |
| - | - | ±2 mm | 30.99 (-0.06) | 90.99 (-0.05) |
| - | - | ±3 mm | 30.93 (-0.12) | 90.95 (-0.09) |
Table 7: Ablation study of X-LRM’s robustness to noisy X-ray scanner parameters under a 6-view CT reconstruction setting. We ablate on three parameters: viewing angles, DSO and DSD.

Break-down Ablation.

We adopt Open-LRM [23] as the base reconstruction model to study the effect of each component of X-LRM towards higher performance. The results of the 10-view reconstruction are reported in Tab. 6. The base model only achieves a poor result of 13.09 dB in PSNR. After applying our X-triplane and X-former, the model gains 15.67 and 2.57 dB in PSNR, respectively. These results validate the effectiveness of our proposed components.

Robustness Analysis.

We conduct a robustness analysis under noisy scanning parameters, including viewing angles, source-to-origin distance (DSO), and source-to-detector distance (DSD), using the 6-view reconstruction setting. The introduced noise follows a uniform distribution, modeled as $\eta \sim \mathcal{U}(-\epsilon, +\epsilon)$. With this noise, the projection images change, but the model processes them as if they were captured under perfect conditions. Tab. 7 shows that X-LRM remains robust to noise in the scanning parameters. Viewing-angle shifts of $\pm 0.5^{\circ}$ (PSNR -0.12 dB, SSIM -0.0009) and $\pm 1^{\circ}$ (PSNR -0.43 dB, SSIM -0.0033) have minimal impact on our model. Noise in DSO and DSD also introduces only minor effects, with $\pm 3$ mm DSO causing the largest decline (PSNR -0.38 dB, SSIM -0.0031). Despite these fluctuations, performance remains stable, demonstrating the robustness of X-LRM to real-world scanning variations.
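
The parameter perturbation used in this study can be sketched as follows; the function signature and units are assumptions consistent with Tab. 7:

```python
import numpy as np

def perturb_geometry(angles_deg, dso_mm, dsd_mm,
                     eps_angle=1.0, eps_dso=3.0, eps_dsd=3.0, rng=None):
    """Add uniform noise eta ~ U(-eps, +eps) to the scanning geometry."""
    rng = np.random.default_rng() if rng is None else rng
    noisy_angles = angles_deg + rng.uniform(-eps_angle, eps_angle, size=np.shape(angles_deg))
    noisy_dso = dso_mm + rng.uniform(-eps_dso, eps_dso)   # source-to-origin distance
    noisy_dsd = dsd_mm + rng.uniform(-eps_dsd, eps_dsd)   # source-to-detector distance
    return noisy_angles, noisy_dso, noisy_dsd
```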

5 Conclusion

In this paper, we collect the largest dataset, Torso-16K, to enable large-scale training for CT reconstruction. Torso-16K is over 18× larger than the existing largest benchmark. We propose X-LRM, a Transformer-based feedforward framework consisting of X-former and X-triplane. X-former employs a tokenizer and Transformer backbone to flexibly encode an arbitrary number of input views, enabling X-LRM to reconstruct CT volumes without re-training. X-triplane decodes image tokens into a triplane representation and learns a neural implicit function to model 3D radiodensity. Experiments show that X-LRM surpasses the SOTA 3D feedforward method by 1.5 dB while achieving 27× faster speed, with its application in medical segmentation further highlighting its practical value.

References

  • Ahmed et al. [2020] AA Ahmed, MM Elmohr, D Fuentes, MA Habra, SB Fisher, ND Perrier, M Zhang, and KM Elsayes. Radiomic mapping model for prediction of ki-67 expression in adrenocortical carcinoma. Clinical Radiology, 2020.
  • Andersen and Kak [1984] Anders H Andersen and Avinash C Kak. Simultaneous algebraic reconstruction technique (sart): a superior implementation of the art algorithm. Ultrasonic imaging, 1984.
  • Anirudh et al. [2018] Rushil Anirudh, Hyojin Kim, Jayaraman J Thiagarajan, K Aditya Mohan, Kyle Champley, and Timo Bremer. Lose the views: Limited angle ct reconstruction via implicit sinogram completion. In CVPR, 2018.
  • Ba et al. [2016] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
  • Biguri et al. [2016] Ander Biguri, Manjit Dosanjh, Steven Hancock, and Manuchehr Soleimani. Tigre: a matlab-gpu toolbox for cbct image reconstruction. Biomedical Physics & Engineering Express, 2016.
  • Cai et al. [2024a] Yuanhao Cai, Yixun Liang, Jiahao Wang, Angtian Wang, Yulun Zhang, Xiaokang Yang, Zongwei Zhou, and Alan Yuille. Radiative gaussian splatting for efficient x-ray novel view synthesis. In ECCV, 2024a.
  • Cai et al. [2024b] Yuanhao Cai, Jiahao Wang, Alan Yuille, Zongwei Zhou, and Angtian Wang. Structure-aware sparse-view x-ray 3d reconstruction. In CVPR, 2024b.
  • Cai et al. [2024c] Yuanhao Cai, Jiahao Wang, Alan Yuille, Zongwei Zhou, and Angtian Wang. Structure-aware sparse-view x-ray 3d reconstruction. In CVPR, 2024c.
  • Cai et al. [2024d] Yuanhao Cai, He Zhang, Kai Zhang, Yixun Liang, Mengwei Ren, Fujun Luan, Qing Liu, Soo Ye Kim, Jianming Zhang, Zhifei Zhang, et al. Baking gaussian splatting into diffusion denoiser for fast and scalable single-stage image-to-3d generation. arXiv preprint arXiv:2411.14384, 2024d.
  • Chung et al. [2023] Hyungjin Chung, Dohoon Ryu, Michael T McCann, Marc L Klasky, and Jong Chul Ye. Solving 3d inverse problems using pre-trained 2d diffusion models. In CVPR, 2023.
  • Chung et al. [2024] Hyungjin Chung, Suhyeon Lee, and Jong Chul Ye. Decomposed diffusion sampler for accelerating large-scale inverse problems. In ICLR, 2024.
  • Cipriano et al. [2022] Marco Cipriano, Stefano Allegretti, Federico Bolelli, Mattia Di Bartolomeo, Federico Pollastri, Arrigo Pellacani, Paolo Minafra, Alexandre Anesi, and Costantino Grana. Deep segmentation of the mandibular canal: a new 3d annotated dataset of cbct volumes. IEEE Access, 2022.
  • Cormack [1963] Allan Macleod Cormack. Representation of a function by its line integrals, with some radiological applications. Journal of applied physics, 1963.
  • Cormack [1964] Allan Macleod Cormack. Representation of a function by its line integrals, with some radiological applications. ii. Journal of Applied Physics, 1964.
  • Deitke et al. [2023a] Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak Gadre, et al. Objaverse-xl: A universe of 10m+ 3d objects. In NeurIPS, 2023a.
  • Deitke et al. [2023b] Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. In CVPR, 2023b.
  • Dosovitskiy et al. [2021] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
  • Feldkamp et al. [1984] Lee A Feldkamp, Lloyd C Davis, and James W Kress. Practical cone-beam algorithm. JOSA A, 1984.
  • Ghani and Karl [2018] Muhammad Usman Ghani and W Clem Karl. Deep learning-based sinogram completion for low-dose ct. In 2018 IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), 2018.
  • He et al. [2024] Hao He, Yixun Liang, Luozhou Wang, Yuanhao Cai, Xinli Xu, Hao-Xiang Guo, Xiang Wen, and Yingcong Chen. Lucidfusion: Generating 3d gaussians with arbitrary unposed images. arXiv preprint arXiv:2410.15636, 2024.
  • Hermans et al. [2024] Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, et al. Rsna 2023 abdominal trauma ai challenge: Review and outcomes. Radiology: Artificial Intelligence, 2024.
  • Hofmanninger et al. [2020] Johannes Hofmanninger, Forian Prayer, Jeanny Pan, Sebastian Röhrich, Helmut Prosch, and Georg Langs. Automatic lung segmentation in routine imaging is primarily a data diversity problem, not a methodology problem. European Radiology Experimental, 2020.
  • Hong et al. [2024] Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d. In ICLR, 2024.
  • Hounsfield [1973] Godfrey N Hounsfield. Computerized transverse axial scanning (tomography): Part 1. description of system. The British journal of radiology, 1973.
  • Hounsfield [1980] Godfrey N Hounsfield. Computed medical imaging. Science, 1980.
  • Ji et al. [2022] Yuanfeng Ji, Haotian Bai, Chongjian Ge, Jie Yang, Ye Zhu, Ruimao Zhang, Zhen Li, Lingyan Zhanng, Wanling Ma, Xiang Wan, et al. Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. In NeurIPS, 2022.
  • Jin et al. [2017a] Kyong Hwan Jin, Michael T McCann, Emmanuel Froustey, and Michael Unser. Deep convolutional neural network for inverse problems in imaging. TIP, 2017a.
  • Jin et al. [2017b] Kyong Hwan Jin, Michael T McCann, Emmanuel Froustey, and Michael Unser. Deep convolutional neural network for inverse problems in imaging. IEEE transactions on image processing, 2017b.
  • Jin et al. [2020] Liang Jin, Jiancheng Yang, Kaiming Kuang, Bingbing Ni, Yiyi Gao, Yingli Sun, Pan Gao, Weiling Ma, Mingyu Tan, Hui Kang, Jiajun Chen, and Ming Li. Deep-learning-assisted detection and segmentation of rib fractures from ct scans: Development and validation of fracnet. eBioMedicine, 2020.
  • Kerbl et al. [2023] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 2023.
  • Lee et al. [2023] Suhyeon Lee, Hyungjin Chung, Minyoung Park, Jonghyuk Park, Wi-Sun Ryu, and Jong Chul Ye. Improving 3d imaging with pre-trained perpendicular 2d diffusion models. In ICCV, 2023.
  • Li et al. [2024a] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi. Instant3d: Fast text-to-3d with sparse-view generation and large reconstruction model. In ICLR, 2024a.
  • Li et al. [2024b] Wenxuan Li, Chongyu Qu, Xiaoxi Chen, Pedro RAS Bassi, Yijia Shi, Yuxiang Lai, Qian Yu, Huimin Xue, Yixiong Chen, Xiaorui Lin, et al. Abdomenatlas: A large-scale, detailed-annotated, & multi-center dataset for efficient transfer learning and open algorithmic benchmarking. Medical Image Analysis, 2024b.
  • Lin et al. [2023] Yiqun Lin, Zhongjin Luo, Wei Zhao, and Xiaomeng Li. Learning deep intensity field for extremely sparse-view cbct reconstruction. In MICCAI, 2023.
  • Lin et al. [2024a] Yiqun Lin, Hualiang Wang, Jixiang Chen, and Xiaomeng Li. Learning 3d gaussians for extremely sparse-view cone-beam ct reconstruction. In MICCAI, 2024a.
  • Lin et al. [2024b] Yiqun Lin, Jiewen Yang, Hualiang Wang, Xinpeng Ding, Wei Zhao, and Xiaomeng Li. C2rv: Cross-regional and cross-view learning for sparse-view cbct reconstruction. In CVPR, 2024b.
  • Liu et al. [2023] Yanzhen Liu, Sutuke Yibulayimu, Yudi Sang, Gang Zhu, Yu Wang, Chunpeng Zhao, and Xinbao Wu. Pelvic fracture segmentation using a multi-scale distance-weighted neural network. In MICCAI, 2023.
  • Loshchilov and Hutter [2016] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
  • Loshchilov and Hutter [2017] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
  • Ma et al. [2023] Chenglong Ma, Zilong Li, Junping Zhang, Yi Zhang, and Hongming Shan. Freeseed: Frequency-band-aware and self-guided network for sparse-view ct reconstruction. In MICCAI, 2023.
  • Ma et al. [2024] Jun Ma, Bo Wang, Song Gu, Yao Zhang, Cheng Ge, and Chenyu You. MICCAI FLARE24 Task 1: Pan-cancer Segmentation in CT Scans, 2024.
  • Masoudi et al. [2018] Mojtaba Masoudi, Hamid-Reza Pourreza, Mahdi Saadatmand-Tarzjan, Noushin Eftekhari, Fateme Shafiee Zargar, and Masoud Pezeshki Rad. A new dataset of computed-tomography angiography images for computer-aided detection of pulmonary embolism. Scientific Data, 2018.
  • McCollough [2016] Cynthia McCollough. Tu-fg-207a-04: overview of the low dose ct grand challenge. Medical physics, 2016.
  • Mescheder et al. [2019] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In CVPR, 2019.
  • Mildenhall et al. [2021] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 2021.
  • Moawad et al. [2021] A Moawad, D Fuentes, A Morshid, A Khalaf, M Elmohr, A Abusaif, JD Hazle, AO Kaseb, M Hassan, A Mahvash, et al. Multimodality annotated hcc cases with and without advanced imaging segmentation [data set]. The Cancer Imaging Archive, 2021.
  • Organizers [2022] MELA Challenge Organizers. Mediastinal lesion analysis (mela) dataset, 2022.
  • Paszke et al. [2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.
  • Pedrosa et al. [2022] João Pedrosa, Guilherme, Carlos, Márcio, Patrícia, André, João, Eduardo, Isabel, António, and Aurélio. Lndb dataset, 2022.
  • Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
  • Sauer and Bouman [1993] Ken Sauer and Charles Bouman. A local update strategy for iterative reconstruction from projections. IEEE Transactions on Signal Processing, 1993.
  • Setio et al. [2017] Arnaud Arindra Adiyoso Setio, Alberto Traverso, Thomas De Bel, Moira SN Berens, Cas Van Den Bogaard, Piergiorgio Cerello, Hao Chen, Qi Dou, Maria Evelina Fantacci, Bram Geurts, et al. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the luna16 challenge. Medical image analysis, 2017.
  • Shen et al. [2022] Liyue Shen, John Pauly, and Lei Xing. Nerp: implicit neural representation learning with prior embedding for sparsely sampled image reconstruction. IEEE Transactions on Neural Networks and Learning Systems, 2022.
  • Sidky and Pan [2008a] Emil Y Sidky and Xiaochuan Pan. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Physics in Medicine & Biology, 2008a.
  • Sidky and Pan [2008b] Emil Y Sidky and Xiaochuan Pan. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Physics in Medicine & Biology, 2008b.
  • Tochilkin et al. [2024] Dmitry Tochilkin, David Pankratz, Zexiang Liu, Zixuan Huang, Adam Letts, Yangguang Li, Ding Liang, Christian Laforte, Varun Jampani, and Yan-Pei Cao. Triposr: Fast 3d object reconstruction from a single image. arXiv preprint arXiv:2403.02151, 2024.
  • United Imaging Healthcare [2023] United Imaging Healthcare. Uct 960+. https://eu.united-imaging.com/en/product-service/products/ct/uct-960, 2023.
  • Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
  • Wang et al. [2022] Ce Wang, Kun Shang, Haimiao Zhang, Qian Li, and S Kevin Zhou. Dudotrans: dual-domain transformer for sparse-view ct reconstruction. In International Workshop on Machine Learning for Medical Image Reconstruction, 2022.
  • Wang et al. [2018] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images. In ECCV, 2018.
  • Wang et al. [2023] Peng Wang, Hao Tan, Sai Bi, Yinghao Xu, Fujun Luan, Kalyan Sunkavalli, Wenping Wang, Zexiang Xu, and Kai Zhang. Pf-lrm: Pose-free large reconstruction model for joint pose and shape prediction. arXiv preprint arXiv:2311.12024, 2023.
  • Wang et al. [2004] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 2004.
  • Wei et al. [2024] Xinyue Wei, Kai Zhang, Sai Bi, Hao Tan, Fujun Luan, Valentin Deschaintre, Kalyan Sunkavalli, Hao Su, and Zexiang Xu. Meshlrm: Large reconstruction model for high-quality meshes. arXiv preprint arXiv:2404.12385, 2024.
  • Wu et al. [2020] Rundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, and Baoquan Chen. Pq-net: A generative part seq2seq network for 3d shapes. In CVPR, 2020.
  • Xu et al. [2019] Qiangeng Xu, Weiyue Wang, Duygu Ceylan, Radomir Mech, and Ulrich Neumann. Disn: Deep implicit surface network for high-quality single-view 3d reconstruction. In NeurIPS, 2019.
  • Yang et al. [2024] Jiancheng Yang, Rui Shi, Liang Jin, Xiaoyang Huang, Kaiming Kuang, Donglai Wei, Shixuan Gu, Jianying Liu, Pengfei Liu, Zhizhong Chai, Yongjie Xiao, Hao Chen, Liming Xu, Bang Du, Xiangyi Yan, Hao Tang, Adam Alessio, Gregory Holste, Jiapeng Zhang, Xiaoming Wang, Jianye He, Lixuan Che, Hanspeter Pfister, Ming Li, and Bingbing Ni. Deep rib fracture instance segmentation and classification from ct on the ribfrac challenge. arXiv Preprint, 2024.
  • Yu et al. [2006] Lifeng Yu, Yu Zou, Emil Y Sidky, Charles A Pelizzari, Peter Munro, and Xiaochuan Pan. Region of interest reconstruction from truncated data in circular cone-beam ct. TMI, 2006.
  • Zha et al. [2022] Ruyi Zha, Yanhao Zhang, and Hongdong Li. Naf: neural attenuation fields for sparse-view cbct reconstruction. In MICCAI, 2022.
  • Zha et al. [2024a] Ruyi Zha, Tao Jun Lin, Yuanhao Cai, Jiwen Cao, Yanhao Zhang, and Hongdong Li. R2-gaussian: Rectifying radiative gaussian splatting for tomographic reconstruction. In NeurIPS, 2024a.
  • Zha et al. [2024b] Ruyi Zha, Tao Jun Lin, Yuanhao Cai, Jiwen Cao, Yanhao Zhang, and Hongdong Li. R2-gaussian: Rectifying radiative gaussian splatting for tomographic reconstruction. In NeurIPS, 2024b.
  • Zhang et al. [2024] Kai Zhang, Sai Bi, Hao Tan, Yuanbo Xiangli, Nanxuan Zhao, Kalyan Sunkavalli, and Zexiang Xu. Gs-lrm: Large reconstruction model for 3d gaussian splatting. In ECCV, 2024.