Computed Tomography Reconstruction Using Deep Image Prior and Learned Reconstruction Methods

In this work, we investigate the application of deep learning methods for computed tomography in the low-data regime. As motivation, we review some of the existing approaches and obtain quantitative results after training them with different amounts of data. We find that the learned primal-dual method achieves outstanding performance in terms of reconstruction quality and data efficiency. However, in general, end-to-end learned methods have two issues: a) they lack the classical guarantees of inverse problems and b) they generalize poorly when not trained with enough data. To overcome these issues, we bring in the deep image prior approach in combination with classical regularization. The proposed methods improve on the state-of-the-art results in the low-data regime.


Introduction
Deep learning approaches for solving ill-posed inverse problems currently achieve state-of-the-art reconstruction quality in terms of quantitative results. However, they require large amounts of training data, i.e., pairs of ground truths and measurements, and it is not clear how much is necessary to generalize well. For ill-posed inverse problems arising in medical imaging, such as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Magnetic Particle Imaging (MPI), or Computed Tomography (CT), obtaining such large amounts of training data is challenging. In particular, ground truth data is difficult to obtain, as it is, of course, impossible to take a photograph of the inside of the body. What learned methods usually consider as ground truths are simulations or high-dose reconstructions obtained with classical methods, such as filtered back-projection (FBP), which work reasonably well in the presence of a sufficiently large amount of low-noise measurements. In MRI, it is possible to obtain such reconstructions, but the acquisition process takes much time. Therefore, one potential benefit of learned approaches in MRI is to reduce the acquisition times [41]. In other applications, such as CT, it would be necessary to expose patients to high doses of X-ray radiation to obtain the required training ground truths.
There is yet another approach, called Deep Image Prior (DIP) [24], that also uses deep neural networks, for example, a U-Net. However, there is a remarkable difference: it does not need any learning, i.e., the weights of the network are not trained on data. This approach may seem less practical because reconstruction takes much more time than with learned methods. In the applications initially considered, for example, inpainting, denoising, and super-resolution, it is much easier to obtain or simulate data, which allows for the use of learned methods, and the DIP does not seem to have an advantage. However, these applications are not ill-posed inverse problems in the sense of Nashed [32]. The main issue is that, in some cases, they have a non-trivial null space, which makes the solution non-unique.
In this work, we aim to explore the application of the DIP together with other deep learning methods for obtaining CT reconstructions in a rather low-data regime. The structure of the paper and the main contributions are organized as follows. In Section 2, we briefly describe the CT reconstruction problem. Section 3 provides a summary of related articles and approaches, together with some background and insights that we use as motivation. The experienced reader may skip Sections 2 and 3 and go directly to Section 4, where we introduce the combination of the DIP with classical regularization methods and obtain theoretical guarantees. Subsequently, in Section 5, we propose an approach similar to the DIP but using an initial reconstruction given by any end-to-end learned method. Finally, in Section 6, we present a benchmark of the analyzed methods using different amounts of data from two standard datasets.

Computed Tomography
Computed tomography (CT) is one of the most valuable technologies in modern medical imaging [6]. It allows for a non-invasive acquisition of the inside of the human body using X-rays. Since the introduction of CT in the 1970s, technical innovations like new scan geometries have pushed the limits of speed and resolution. Current research focuses on reducing the potentially harmful radiation a patient is exposed to during the scan [6]. Approaches include measurements with lower intensity or at fewer angles. Both introduce particular challenges for reconstruction methods and can severely reduce the image quality. In our work, we compare multiple reconstruction methods in these low-dose scenarios for a basic 2D parallel beam geometry (cf. Figure 1).
In this case, the forward operator is given by the 2D Radon transform [35] and models the attenuation of the X-ray when passing through a body. We can parameterize the path of an X-ray beam by its distance from the origin s ∈ R and its angle ϕ ∈ [0, π]:

    L_{s,ϕ}(t) = s ω(ϕ) + t ω^⊥(ϕ),    ω(ϕ) := [cos(ϕ), sin(ϕ)]^T,  ω^⊥(ϕ) := [−sin(ϕ), cos(ϕ)]^T.
The Radon transform then calculates the integral of the attenuation coefficient x along the line with parameters s and ϕ,

    (Ax)(s, ϕ) = ∫_R x(L_{s,ϕ}(t)) dt.

According to Beer–Lambert's law, this integral equals the logarithm of the ratio between the intensity I_0 at the X-ray source and the intensity I_1 at the detector,

    (Ax)(s, ϕ) = ln(I_0 / I_1).

Calculating the transform for all pairs (s, ϕ) results in a so-called sinogram, which we also call the observation. To obtain a reconstruction x̂ from the sinogram, we have to invert the forward model. Since the Radon transform is linear and compact, the inverse problem is ill-posed in the sense of Nashed [32, 33].
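To make the line parameterization concrete, the following sketch numerically integrates a function along L_{s,ϕ} and checks it against the known Radon transform of a disk indicator. This is a toy illustration; the integration interval and step size are arbitrary choices, not part of the paper's setup.

```python
import numpy as np

# Numerically integrate f along the line L_{s,phi}(t) = s*w(phi) + t*w_perp(phi).
def radon_line_integral(f, s, phi, t_max=1.5, n=3001):
    t = np.linspace(-t_max, t_max, n)
    w = np.array([np.cos(phi), np.sin(phi)])
    w_perp = np.array([-np.sin(phi), np.cos(phi)])
    pts = s * w[None, :] + t[:, None] * w_perp[None, :]
    vals = f(pts[:, 0], pts[:, 1])
    return float(np.sum(vals) * (t[1] - t[0]))   # simple Riemann sum

# Indicator of a disk of radius r: its Radon transform is 2*sqrt(r^2 - s^2)
# for |s| <= r, independent of the angle phi.
r = 0.5
disk = lambda x, y: ((x**2 + y**2) <= r**2).astype(float)
val = radon_line_integral(disk, s=0.3, phi=np.pi / 3)   # analytic value: 2*sqrt(0.16) = 0.8
```

The angle independence of the disk's transform is a quick sanity check: evaluating at several ϕ values returns the same chord length.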

Related approaches and motivation
In this section, we first review and describe some of the existing data-driven and classical methods for solving ill-posed inverse problems, which have also been applied to obtain CT reconstructions. Subsequently, we review the DIP approach and related works.

In inverse problems, one aims at obtaining an unknown quantity, in this case the scan of the human body, from indirect measurements that frequently contain noise [12, 29, 36]. The problem is modeled by an operator A : X → Y between Banach or Hilbert spaces X and Y and the measured noisy data or observation

    y^δ = A x^† + τ.

The aim is to obtain an approximation x̂ of the true solution x^†, where τ, with ‖τ‖ ≤ δ, describes the noise in the measurement. Classical approaches for inverse problems include linear pseudo-inverses given by filter functions [29] and non-linear regularized inverses given by the variational approach

    x̂ ∈ arg min_x ‖Ax − y^δ‖² + α J(x),    (5)

where typical hand-crafted regularizers/priors J are ‖x‖₂², ‖x‖₁ and Total Variation (TV). The value of the regularization parameter α should be carefully selected. One way to do that, in the presence of a validation dataset with some ground truth and observation pairs, is to perform a line search and select the α that yields the best performance on average, assuming a uniform noise level. Given validation data {x_i^†, y_i^δ}_{i=1}^N, the data-driven parameter choice would be

    α̂ := arg max_α (1/N) ∑_{i=1}^N ℓ(T_α(y_i^δ), x_i^†),

where ℓ : X × X → R is some similarity measure, such as the PSNR or the SSIM, and T_α denotes the regularized inverse with parameter α. Data-driven regularized inverses for solving inverse problems in imaging have recently had great success in terms of reconstruction quality [2, 4, 15]. Three main classes are: end-to-end learned methods [1, 2, 15, 20, 38], learned regularizers [27, 30] and generative networks [5]. For this study, we only focus on the end-to-end learned methods.
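The data-driven line search for α can be sketched on a toy denoising problem: for A = Id with the Tikhonov regularizer J(x) = ‖x‖², the regularized inverse has the closed form T_α(y) = y / (1 + α). The signals, noise level, and grid below are illustrative assumptions; for simplicity the sketch minimizes the MSE rather than maximizing PSNR (both select the same α here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Closed-form regularized inverse for A = Id, J(x) = ||x||^2.
def T(alpha, y):
    return y / (1.0 + alpha)

# Hypothetical validation set {x_i, y_i}: clean signals plus Gaussian noise.
xs = [np.sin(np.linspace(0, 2 * np.pi, 64)) for _ in range(5)]
ys = [x + 0.3 * rng.standard_normal(x.shape) for x in xs]

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Line search: pick the alpha with the best average performance on validation data.
grid = [0.0, 0.01, 0.1, 0.3, 1.0, 3.0]
scores = {a: np.mean([mse(T(a, y), x) for x, y in zip(xs, ys)]) for a in grid}
alpha_hat = min(scores, key=scores.get)
```

With noisy data the selected α is strictly positive, reflecting the usual bias-variance trade-off of the regularization parameter.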

End-to-end learned methods
In this section, we briefly review the most successful end-to-end learned methods. Most of them were implemented and included in our benchmark.
3.1.1. Post-processing This method aims at improving the quality of the filtered back-projection (FBP) reconstructions from noisy or few measurements by applying a learned post-processing. Recent works [8, 21, 40] have successfully used a convolutional neural network (CNN), such as the U-Net [37], to remove artifacts from FBP reconstructions. In mathematical terms, given a possibly regularized FBP operator T_FBP, the reconstruction is computed using a network D_θ : X → X as

    x̂ = D_θ(T_FBP(y^δ)),

where the parameters θ of the network are learned from data.

Fully learned
Related methods aim at directly learning the inversion process from data while keeping the network architecture as general as possible. This idea was successfully applied in MRI by the AUTOMAP architecture [42]. The main building blocks consist of fully connected layers. Depending on the problem, the number of parameters can grow quickly with the data dimension. For mapping from sinogram to reconstruction in the LoDoPaB-CT dataset, such a layer would have 1000 × 513 × 362² ≈ 67 · 10⁹ parameters. This makes the naive approach infeasible for large CT data. He et al. [16] introduced an adapted two-part network, called iRadonMap. The first part reproduces the structure of the FBP. A fully connected layer is applied along s and shared over the rotation angle dimension ϕ, playing the role of the filtering. For each reconstruction pixel (i, j), only sinogram values on the sinusoid s = i cos(ϕ) + j sin(ϕ) have to be considered and are multiplied by learned weights. For the example above, the number of parameters in this layer reduces to 513² + 362² · 1000 ≈ 131 · 10⁶. The second part consists of a post-processing network. We choose the U-Net architecture for our experiments, which allows for a direct comparison with the FBP + U-Net approach.
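The parameter counts quoted above can be reproduced directly:

```python
# Parameter counts for the fully connected approaches discussed above:
# LoDoPaB-CT sinogram is 1000 angles x 513 beams, reconstruction is 362 x 362.
n_angles, n_beams, n_pix = 1000, 513, 362

# Naive fully connected layer: one weight per (sinogram value, pixel) pair.
naive = n_angles * n_beams * n_pix**2            # approx. 67e9

# iRadonMap-style first part: a 513 x 513 filtering layer shared over angles,
# plus one weight per angle for each of the 362^2 reconstruction pixels.
iradonmap = n_beams**2 + n_pix**2 * n_angles     # approx. 131e6
```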
3.1.3. Learned iterative schemes Another series of works [1, 2, 15] uses CNNs to improve iterative schemes commonly used in inverse problems for solving (5), such as gradient descent, proximal gradient descent or hybrid primal-dual algorithms. The idea is to unroll these schemes with a small number of iterations and to replace some operators by CNNs whose parameters are trained using ground truth and observation data pairs. The simplest one is probably proximal gradient descent, whose standard version is given by the iteration

    x^{(k+1)} = φ_{J,α,λ}(x^{(k)} − λ A*(A x^{(k)} − y^δ))

for k = 0 to L − 1, where φ_{J,α,λ} : X → X is the proximal operator. The corresponding learned iterative scheme is a partially learned approach, where each iteration is performed by a convolutional network ψ_{θ_k} that includes the gradients of the data discrepancy and of the regularizer as input. Moreover, the number of iterations is fixed and small, e.g., L = 10. The reconstruction operator is given by

    T_θ(y^δ) = (ψ_{θ_{L−1}} ∘ ⋯ ∘ ψ_{θ_0})(x^{(0)}),    x^{(0)} = A⁺ y^δ,

for any pseudo-inverse A⁺ of the operator A and θ = (θ_0, …, θ_{L−1}). Alternatively, x^{(0)} could simply be randomly initialized. Similarly, more sophisticated algorithms, such as hybrid primal-dual algorithms, can be unrolled and trained in the same fashion. In this work, we used implementations of the learned gradient descent [1] and the learned primal-dual method [2].
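A minimal sketch of such an unrolled scheme on a toy 1D linear problem might look as follows. The per-iteration networks are plain linear layers here instead of the CNNs used in the paper, and all sizes are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
n, m, L = 16, 24, 5
A = torch.randn(m, n) / m**0.5          # toy linear forward operator

class LearnedGradient(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # one small network per unrolled iteration, L fixed and small
        self.steps = torch.nn.ModuleList(
            [torch.nn.Linear(2 * n, n) for _ in range(L)]
        )

    def forward(self, y):
        x = A.T @ y                      # crude pseudo-inverse initialization
        for psi in self.steps:
            grad = A.T @ (A @ x - y)     # gradient of the data discrepancy
            x = x + psi(torch.cat([x, grad]))
        return x

model = LearnedGradient()
x_true = torch.randn(n)
y = A @ x_true
x_hat = model(y)
```

In practice the parameters of all L steps are trained jointly on ground truth and observation pairs, exactly as in the training objective described below.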
The approaches mentioned above all rely on a parameterized operator T_θ : Y → X, whose parameters θ are optimized using a training set of N ground truth samples x_i^† and their corresponding noisy observations y_i^δ. Usually, the empirical mean squared error is minimized, i.e.,

    θ̂ ∈ arg min_θ (1/N) ∑_{i=1}^N ‖T_θ(y_i^δ) − x_i^†‖².

After training, the reconstruction x̂ ∈ X from a noisy observation y^δ ∈ Y is given by x̂ = T_θ̂(y^δ). The main disadvantage of these approaches is that they do not enforce data consistency. As a consequence, some information in the observation could be ignored, yielding a result that might lack important features of the image. In medical imaging this is critical, since it might remove the indication of a lesion.

Null Space Network
In order to overcome this issue, the authors of [38] introduce a new approach called the Null Space Network. It takes the form

    F_θ = Id_X + (Id_X − A⁺A) ∘ Ψ_θ,

where the function Ψ_θ : X → X is defined by a neural network, A⁺ is the pseudo-inverse of A and Id_X − A⁺A = P_{ker(A)} is the projection onto the null space ker(A) of A. Consequently, the null space network F_θ satisfies the property AF_θ(x) = Ax for all x ∈ X. When combined with the pseudo-inverse, T_θ = F_θ ∘ A⁺, this yields an end-to-end learned approach with data consistency. Theoretical results for this approach have been proved in [38]. We did not include this approach in the comparison but leave it for a future study.
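The data-consistency property AF_θ(x) = Ax can be checked numerically on a toy matrix operator. The network standing in for Ψ_θ and all sizes are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
m, n = 6, 10                                 # underdetermined: non-trivial null space
A = torch.randn(m, n)
A_pinv = torch.linalg.pinv(A)

# Illustrative stand-in for Psi_theta; any X -> X network works here.
psi = torch.nn.Sequential(
    torch.nn.Linear(n, n), torch.nn.Tanh(), torch.nn.Linear(n, n)
)

def F(x):
    # add only the part of the network output that lies in ker(A)
    correction = psi(x)
    return x + correction - A_pinv @ (A @ correction)

x = torch.randn(n)
residual = float(torch.linalg.norm(A @ F(x) - A @ x))   # zero up to float error
```

Because A(Id − A⁺A) = 0, the learned correction can only change the null space component of the reconstruction, so the measured data is reproduced exactly regardless of the network weights.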

Deep Image Prior
The DIP is similar to the generative networks approach and the variational method. However, instead of having a regularization term J(x), the regularization is incorporated by the reparameterization x = ϕ(θ, z), where ϕ is a deep generative network with weights θ ∈ Θ and z is a fixed input, for example, random white noise. The approach is depicted in Figure 2 and consists in solving

    θ̂ ∈ arg min_θ ‖A ϕ(θ, z) − y^δ‖²,    x̂ = ϕ(θ̂, z).

In the original method, the authors use gradient descent with early stopping to avoid reproducing noise. This is necessary because the over-parameterization of the network makes it able to fit the noise. The regularization is a combination of early stopping (similar to the Landweber iteration) and the architecture [10]. The drawback is that it is not clear when to stop. In the original work, this is done using a validation set, selecting the number of iterations that performs best on average in terms of the PSNR. The prior is related to the implicit structural bias of this kind of deep convolutional network. In the original DIP paper [24], and more recently in [7, 17], it is shown that convolutional image generators optimized with gradient descent fit natural images faster than noise and learn to construct them from low to high frequencies. This effect is illustrated in Figure 3.
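A minimal DIP loop on a toy denoising problem (A = Id) might look as follows. The architecture, input size, and iteration count K are illustrative assumptions, not the paper's setup, where a convolutional U-Net/Encoder-Decoder is used.

```python
import torch

torch.manual_seed(0)
n = 64
x_true = torch.sin(torch.linspace(0, 6.28, n))
y = x_true + 0.3 * torch.randn(n)        # noisy observation, A = Id

z = torch.randn(32)                      # fixed random network input
phi = torch.nn.Sequential(
    torch.nn.Linear(32, 128), torch.nn.ReLU(), torch.nn.Linear(128, n)
)
opt = torch.optim.Adam(phi.parameters(), lr=1e-2)

losses = []
K = 300                                  # early stopping: small, fixed iteration count
for k in range(K):
    opt.zero_grad()
    loss = torch.sum((phi(z) - y) ** 2)  # || A phi(theta, z) - y^delta ||^2
    loss.backward()
    opt.step()
    losses.append(float(loss))

x_hat = phi(z).detach()
```

Only the weights θ are optimized; z stays fixed. Running the loop far beyond the early-stopping point would eventually fit the noise in y as well, which is exactly why K matters.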

Related work
The Deep Image Prior approach has inspired many other researchers to improve it by combining it with other methods [28, 31, 39], to use it for a wide range of applications [13, 14, 19, 22], and to offer different perspectives and explanations of why it works [7, 9, 10]. In [31], the authors bring in the concept of Regularization by Denoising (RED) and show how the two (DIP and RED) can be merged into a highly effective unsupervised recovery process. Another series of works also adds explicit priors, but on the weights of the network. In [39], this is done in the form of a multi-variate Gaussian, whose covariance matrix and mean are learned from a small dataset. In [9], the authors introduce a Bayesian perspective on the DIP by also incorporating a prior on the weights θ and conduct the posterior inference using stochastic gradient Langevin dynamics (SGLD). So far, the DIP has been used for denoising, inpainting, super-resolution, image decomposition [13], compressed sensing [39], PET [14], and MRI [22], among other applications. A similar idea [19] was also used for structural optimization, which is a popular method for designing objects such as bridge trusses, airplane wings, and optical devices. Rather than directly optimizing densities on a grid, they instead optimize the parameters of a neural network which outputs those densities.

Network architecture
In the paper by Ulyanov et al. [24], several architectures were considered, for example, ResNet, Encoder-Decoder (autoencoder) and U-Net. For inpainting of large holes, the autoencoder with depth 6 performed best, whereas for denoising a modified U-Net achieved the best results. The regularization happens mainly due to the architecture of the network, which reduces the search space and also biases the optimization process towards natural images. Therefore, for each application, it is crucial to choose an appropriate architecture and to tune hyper-parameters, such as the network's depth and the number of channels per layer. Optimizing the hyper-parameters is the most time-consuming part. In Figure 4 we show some reconstructions from the Ellipses dataset with different hyper-parameter choices. In this case, the U-Net without skip connections and with depth 5 (Encoder-Decoder) seems to achieve the best performance. One can see that when the number of channels is too low, the network does not have enough representation power. Also, if there are no skip connections, the higher the number of scales (equivalent to the depth), the stronger the regularization effect. The extraordinary success of this approach demonstrates that the architecture of the network has a significant influence on the performance of deep learning approaches that use similar kinds of networks.

Early-stopping
As mentioned before, in [24] it is shown that early stopping has a positive impact on the reconstruction results. The authors observed (cf. Figure 2) that in some applications, like denoising, the loss decreases quickly when fitting natural images but takes much longer when fitting noisy images. This empirical observation helps to determine when to stop. In Figure 5, one can observe how the reconstruction quality (measured by the PSNR and SSIM metrics) reaches a maximum and then deteriorates during the optimization process.

Deep Image Prior and classical regularization
In this section we analyze the DIP in combination with classical regularization, i.e., we include a regularization term J : X → R ∪ {∞}, such as TV. We give necessary assumptions under which we are able to obtain standard guarantees in inverse problems, such as existence of a solution, convergence, and convergence rates.
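For reference, a discrete TV regularizer of the kind used as J here can be sketched as follows. The anisotropic variant below is one common choice; the paper's exact discretization may differ.

```python
import numpy as np

# Anisotropic total variation: sum of absolute finite differences in both
# image directions.
def tv(x):
    return float(np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum())

flat = np.ones((8, 8))          # constant image: TV = 0
step = np.zeros((8, 8))
step[:, 4:] = 1.0               # a single vertical edge: one unit jump per row
```

Constant regions contribute nothing, while every edge contributes its jump height times its length, which is why TV favors piecewise constant reconstructions.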
In the general case, we consider X and Y to be Banach spaces, and A : X → Y a continuous linear operator. To simplify notation, we use ϕ(·) instead of ϕ(·, z), since the input to the network is fixed. Additionally, we assume that Θ is a Banach space, and ϕ : Θ → X is a continuous mapping.
With this approach, we get rid of the need for early stopping, i.e., the need to find an optimal number of iterations. Still, we introduce the problem of finding an optimal α, which is a classical issue in inverse problems. These problems are similar since both choices depend on the noise level of the observation data. The higher the noise is, the higher the value of α or the smaller the number of iterations for obtaining optimal results.
With this parameterization, the minimization problem becomes

    θ̂ ∈ arg min_θ ‖A ϕ(θ) − y^δ‖² + α J(ϕ(θ)).    (12)

If the range of ϕ is all of X, i.e., Ω := rg(ϕ) = X, this is equivalent to the standard variational approach in Equation (5). However, although the network can fit some noise, it cannot, in general, fit an arbitrary x ∈ X. This depends on the chosen architecture and is mainly because we do not use any fully connected layers. Nevertheless, the minimization in (12) is similar to the setting in Equation (5) restricted to Ω ∩ D, where D := D(A) ∩ D(J). If the following assumptions are satisfied, then all the classical theorems, namely well-posedness, stability, convergence, and convergence rates, still hold, cf. [18].
Assumption 1. The range Ω := rg(ϕ) of the network is closed.
Assumption 2. There exists a J-minimizing solution of Ax = y^† in Ω ∩ D, where y^† is the perfect noiseless data.
Assumption 1 guarantees that the restricted domain of A is closed, whereas Assumption 2 guarantees that there is a J-minimizing solution in the restricted domain.
The mapping ϕ : Θ → X has a neural network structure, with a fixed input z ∈ R^{n_0}, and can be expressed as a composition of affine mappings Γ_i and activation functions σ_i,

    ϕ(θ) = (σ_L ∘ Γ_L ∘ ⋯ ∘ σ_1 ∘ Γ_1)(z),

where Γ_i(x) = W_i x + b_i with weights W_i and biases b_i collected in θ. In the following, we analyze under which conditions we can guarantee that the range of ϕ (with respect to Θ) is closed.
Definition 2. An activation function σ : R^n → R^n is valid if it is continuous, monotone, and bounded, i.e., there exists c > 0 such that ‖σ(x)‖ ≤ c‖x‖ for all x ∈ R^n.
Lemma 1. Let ϕ be a neural network ϕ : Θ → X with L layers. If Θ is a compact set, and the activation functions σ i are valid, then the range of ϕ is closed.
Proof. In order to prove the result, we show that the range V_i after each layer of the network is compact.
i) From the compactness of Θ it follows that the set of weight matrices G_i is bounded and closed; therefore, the set V_i = {Γu : Γ ∈ G_i, u ∈ V_{i−1}} is also bounded. Let (v_n) ⊂ V_i, with v_n = Γ_n u_n, be a sequence converging to some v. By compactness, there is a subsequence with Γ_{n_k} → Γ̄ ∈ G_i and u_{n_k} → ū ∈ V_{i−1}; by continuity, the sequence (Γ_{n_k} u_{n_k}) converges to Γ̄ū; therefore, v = Γ̄ū ∈ V_i, which shows that V_i is closed.
ii) From i) and the fact that B_i is also compact, it follows that the set of affine images V_i + B_i is compact.
iii) It is easy to show that if the pre-image of a valid activation σ is compact, then its image is also compact.
In the first layer, V_0 = {z}; thus, it follows by induction that the range of ϕ : Θ → X is compact and, in particular, closed.
All activation functions commonly used in the literature, for example, sigmoid, hyperbolic tangent, and piece-wise linear activations, are valid. The bound on the weights of the network can be ensured by clipping the weights after each gradient update. In our implementation of the DIP approach, we use a sufficiently large bound and empirically check that Assumption 2 holds.
Remark 1. An alternative to the bound on the weights is to use only valid activation functions with closed range, for example, ReLU or leaky ReLU. However, it would then not be possible to use sigmoid or hyperbolic tangent activations. In our experiments, we observed that a sigmoid activation in the last layer performs better than a ReLU.
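The weight clipping mentioned above is straightforward to implement. A sketch with an arbitrary bound c (the network, optimizer, and bound are illustrative assumptions):

```python
import torch

torch.manual_seed(0)
net = torch.nn.Linear(4, 4)
opt = torch.optim.SGD(net.parameters(), lr=10.0)   # large step to force clipping

c = 0.5                                            # bound ensuring Theta stays compact
for _ in range(5):
    loss = net(torch.randn(4)).pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        for p in net.parameters():                 # clip after each gradient update
            p.clamp_(-c, c)

max_abs = max(float(p.abs().max()) for p in net.parameters())
```

After every update all weights lie in [−c, c], so the parameter set remains bounded and closed, as required by Lemma 1.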

Deep Image Prior with initial reconstruction
In this section, we propose a new method based on the DIP approach. It takes the result of any end-to-end learned method T : Y → X as an initial reconstruction and further enforces data consistency by optimizing over its deep-neural parameterization.
Definition 3 (Deep-neural parameterization). Given an untrained network ϕ : Θ × Z → X and a fixed input z ∈ Z, the deep-neural parameterization of an element x ∈ X with respect to ϕ and z is

    θ_0 ∈ arg min_θ ‖ϕ(θ, z) − x‖².

Figure 6: Graphical illustration of the DIP approach with initial reconstruction. The blue area refers to an approximation of some part of the space of natural images.
The projection onto the range of the network is possible because of Lemma 1, i.e., the range is closed. If ϕ is a deep convolutional network, for example, a U-Net, the deep-neural parameterization has similarities with other signal representations, such as the wavelet and Fourier transforms [19]. For image processing, such domains are usually more convenient than the classical pixel representation.
As shown in Figure 6, one way to enforce data consistency is to project the initial reconstruction onto the set where ‖Ax − y^δ‖ ≤ δ. The issue is that, due to the ill-posedness of the problem, the new solution (red point) will very likely contain artifacts. The proposed approach instead first obtains the deep-neural parameterization θ_0 of the initial reconstruction T(y^δ) and then uses it as a starting point to minimize over θ via gradient descent. The iterative process is carried out until ‖Aϕ(θ, z) − y^δ‖ ≤ δ or for a fixed number of iterations K determined by means of a validation dataset. This approach seems to force the reconstruction to stay close to the set of natural images because of the structural bias of the deep-neural parameterization. The procedure is listed in Algorithm 1, and a graphical representation is shown in Figure 6. The new method T̂ : Y → X is similar to other image enhancement approaches. For example, related methods [11] first compute the wavelet transform (parameterization) and then repeatedly smooth or shrink the coefficients (further optimization).
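The two-stage procedure can be sketched on a toy linear problem. Here a pseudo-inverse stands in for the learned method T, and the network, sizes, and iteration counts are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
n, m = 32, 20
A = torch.randn(m, n) / m**0.5
x_true = torch.randn(n)
y = A @ x_true + 0.01 * torch.randn(m)
x_init = torch.linalg.pinv(A) @ y        # stand-in for the learned method T(y^delta)

z = torch.randn(16)                      # fixed network input
phi = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, n)
)
opt = torch.optim.Adam(phi.parameters(), lr=1e-2)

# Stage 1: deep-neural parameterization of the initial reconstruction
# (only the identity operator is involved, so these iterations are cheap).
for _ in range(400):
    opt.zero_grad()
    torch.sum((phi(z) - x_init) ** 2).backward()
    opt.step()

# Stage 2: enforce data consistency by optimizing through the forward operator.
for _ in range(200):
    opt.zero_grad()
    torch.sum((A @ phi(z) - y) ** 2).backward()
    opt.step()

x_hat = phi(z).detach()
data_fit = float(torch.linalg.norm(A @ x_hat - y))
```

Stage 1 corresponds to computing θ_0 from Definition 3; stage 2 is the DIP refinement starting from θ_0, stopped after a fixed number of iterations as in Algorithm 1.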

Benchmark setup and results
For the benchmark, we implemented the end-to-end learned methods described in Section 3.1. We trained them on different data sizes and compared them with classical methods, such as FBP and TV regularization, and with the proposed methods. The datasets we use were recently released to benchmark deep learning methods for CT reconstruction [25]. They are accessible through the DIVα python library [26]. We also provide the code and the trained models in the following GitHub repository: https://github.com/oterobaguer/dip-ct-benchmark.

Algorithm 1: Deep Image Prior with initial reconstruction.

The LoDoPaB-CT Dataset
The low-dose parallel beam (LoDoPaB) CT dataset [25] consists of more than 40 000 two-dimensional CT images and corresponding simulated low-intensity measurements. Human chest CT reconstructions from the LIDC/IDRI database [3] are used as virtual ground truth. Each image has a resolution of 362 × 362 pixels. For the simulation setup, a simple parallel beam geometry with 1000 angles and 513 projection beams is used. To simulate low intensity, Poisson noise corresponding to a mean photon count of 4096 photons per detector pixel before attenuation is applied to the projection data. We use the standard dataset split defining in total 35 820 training pairs, 3522 validation pairs and 3553 test pairs.

Ellipses Dataset
As a synthetic dataset for imaging problems, random phantoms of combined ellipses are commonly used. We use the 'ellipses' standard dataset from the DIVα python library (as provided in version 0.4) [26]. The images have a resolution of 128 × 128 pixels. Measurements are simulated with a parallel beam geometry with only 30 angles and 183 projection beams. In addition to the sparse-angle setup, moderate Gaussian noise with a standard deviation of 2.5 % of the mean absolute value of the projection data is added to the projection data. In total, the training set contains 32 000 pairs, while the validation and test set consist of 3200 pairs each.

Implementation details
For the DIP with initial reconstruction, we used the learned primal-dual, which we consider to be state of the art for this task (see the results in Figure 9). For each datasize, we chose different hyper-parameters, namely the step-size η, the TV regularization parameter γ, and the number of iterations K, based on the available validation dataset (3 data-pairs for the smallest size).
Minimizing L(θ) in (18) is not trivial because TV is not differentiable. In our implementation, we use the PyTorch automatic differentiation framework [34] and the ADAM optimizer [23]. For the Ellipses dataset we use the ℓ₂-discrepancy term, whereas for LoDoPaB we use the Poisson loss.

Numerical results
We trained all the methods with different dataset sizes. For example, 0.1 % on the Ellipses dataset means we trained the model with 0.1 % (32 data-pairs) of the available training data and 0.1 % (3 data-pairs) of the validation data. Afterward, we tested the performance of the method on 100 samples of the test dataset. More details are depicted in Appendix B.
As expected, on both datasets, the fully learned method (iRadonMap) requires much data to achieve acceptable performance. On the Ellipses dataset, it outperformed TV using 100 % of the data, whereas on the LoDoPaB dataset, it performed just slightly better than the FBP. The learned post-processing (FBP+UNet) required much less data. It outperformed TV with only 10 % of the Ellipses dataset and 0.1 % of the LoDoPaB dataset. On the other hand, we find that the learned primal-dual is very data efficient and achieved the best performance. On both datasets, it outperformed TV, trained with only 0.1 % (32 data-pairs) and 0.01 % (4 data-pairs from the same patient) of the Ellipses and LoDoPaB datasets respectively. In Figure 7, we show some results from the test set.
The DIP+TV approach achieved the best results among the data-free methods. On average, it outperforms TV by 1 dB and 2 dB on the Ellipses and LoDoPaB datasets, respectively. In Figure 8, it can be observed that TV tends to produce flat regions but also strong staircase effects at the edges, whereas the combination with the DIP seems to produce more realistic edges. For the two smallest data sizes, it performs better than all the end-to-end learned methods.
The Deep Image Prior in combination with the learned primal-dual achieved the best results on the low-data regime. For the Ellipses dataset, it improved the quality of the reconstructions up to 1 dB on average. However, for dataset sizes bigger than 2 %, the method did not yield any significant change. On the LoDoPaB data, we did not find a notable improvement. For the smaller sizes, it did improve, but it was just as good as the DIP+TV approach. We believe that this approach is more useful in the case of having sparse measurements, as in the Ellipses dataset.
In Figure 10, we show some reconstructions obtained using this method for the Ellipses dataset and compare them with the initial reconstructions. The reconstructions have better data consistency w.r.t. the observed data (ℓ₂-discrepancy) and higher quality, both visually and in terms of the PSNR and SSIM measures. Moreover, the method needed fewer iterations than DIP+TV, even when also counting the iterations required to obtain the deep-neural parameterization of the initial reconstruction. These initial iterations are much faster because they only use the identity operator instead of the Radon transform.
In our setting, for the Ellipses dataset, the DIP+TV approach needs 8000 iterations to reach optimal performance on a validation dataset (5 ground truth and observation pairs). By using the initial reconstruction instead, it needs 4000 iterations with the identity operator and only 1000 with the Radon transform operator. On an nVidia GeForce GTX 1080 Ti graphics card, the original DIP takes approx. 6 min per reconstruction, whereas the proposed method takes 3 min (a 2× speedup). The Encoder-Decoder architecture used has approx. 2 · 10⁶ parameters in total.

Conclusions
In this work, we study the combination of classical regularization, deep-neural parameterization, and deep learning approaches for CT reconstruction. We benchmark the investigated methods and evaluate how they behave in low-data regimes. Among the data-free approaches, the DIP+TV method achieves the best results. However, it is considerably slower and, being data-free, cannot benefit from even a small dataset. On the other hand, the learned primal-dual method is very data efficient but lacks data consistency when not trained with enough data. These issues motivated us to adjust the reconstruction obtained with the learned primal-dual to match the observed data. We achieved this without introducing artifacts through a combination of classical regularization and the DIP. We also derived conditions under which theoretical guarantees hold and showed how to satisfy them.
The results presented in this paper offer several baselines for future comparisons with other approaches. Moreover, the proposed methods could be applied to other imaging modalities.