Image reconstruction for interrupted-beam x-ray CT on diagnostic clinical scanners

Low-dose x-ray CT is a major research area with high clinical impact. Compressed sensing using view-based sparse sampling and sparsity-promoting regularization has shown promise in simulations, but these methods can be difficult to implement on diagnostic clinical CT scanners since the x-ray beam cannot be switched on and off rapidly enough. An alternative to view-based sparse sampling is interrupted-beam sparse sampling. SparseCT is a recently proposed interrupted-beam scheme that achieves sparse sampling by blocking a portion of the beam using a multislit collimator (MSC). The use of an MSC necessitates a number of modifications to the standard compressed sensing reconstruction pipeline. In particular, we find that SparseCT reconstruction is feasible within a model-based image reconstruction framework that incorporates data fidelity weighting to account for penumbra effects and source jittering to account for the effect of partial source obstruction. Here, we present these modifications and demonstrate their application in simulations and real-world prototype scans. In simulations, compared to conventional low-dose acquisitions, SparseCT achieves smaller normalized root-mean-square differences and higher structural similarity measures at two dose reduction factors. In prototype experiments, we successfully apply our reconstruction modifications and maintain image resolution at a quarter-dose reduction level. The SparseCT design requires only small hardware modifications to current diagnostic clinical scanners, opening up new possibilities for CT dose reduction.

algorithm design in recent years, improving practicality for the clinic (Kim et al 2015, McGaffin and Fessler 2015, Nien and Fessler 2016).
Sparse sampling based on compressed sensing (Candes and Wakin 2008) is an alternative to tube-current reduction. Many sparse sampling methods for CT are based on view undersampling (Sidky et al 2006, Liu et al 2012). Aside from enabling dose reduction in combination with compressed sensing, another advantage of these schemes is that they reduce the amount of data collected, and thus decrease the time needed for reconstruction. Dose reduction factors as large as 8-fold have been demonstrated for sparse-view sampling schemes (Luo et al 2016). The performance of sparse-view CT has been evaluated in cardiac CT perfusion imaging (Chen et al 2008) and in cone-beam CT (CBCT) for use in image-guided surgery and radiotherapy (Bian et al 2010, Lee et al 2012). However, view-based sparse sampling methods cannot be implemented on current diagnostic clinical hardware since the source cannot be turned on and off rapidly enough to facilitate this mode of undersampling. As a result, these methods have only been demonstrated in simulation or on specialized CT systems. An alternative to view-based undersampling is interrupted-beam sampling, in which the source is partially blocked using a multislit collimator (MSC). Previous studies have investigated beam blocking for applications such as scatter correction (Zhu et al 2009) and dose reduction (Abbas et al 2013, Dong et al 2013, Koesters et al 2017). These studies have indicated that MSC designs may be able to outperform other sparse sampling schemes (Abbas et al 2013) or even current commercial low-dose imaging methods (Koesters et al 2017). MSC-based designs introduce new challenges that must be addressed: penumbra effects from the finite size of the source and corresponding coherence drop-offs related to collimator design parameters (Muckley et al 2017) have been modeled and simulated.
However, while a number of studies have examined MSC effects in isolation, a reconstruction pipeline on diagnostic clinical hardware has not previously been demonstrated.
In this paper we investigate image reconstruction for an interrupted-beam approach tailored for use on diagnostic clinical scanners, which we call 'SparseCT'. The SparseCT interrupted-beam method was initially investigated in simulation with physically unrealistic sampling models (Koesters et al 2017, Muckley et al 2017) or in simulation without modeling for uniform resolution. A first demonstration was recently achieved in a prototype experiment (Chen et al 2019). We extend the previous results to encompass a more comprehensive reconstruction scheme and test it both on simulation data with realistic sampling models and on a real-world beam-interrupting collimator prototype. Our simulation experiments include noise-altering penumbra effects and techniques to maintain uniform resolution in the reconstruction at different radiation dose reduction factors. With our collimator prototype data, we demonstrate the reconstruction pipeline for the first time on a real-world SparseCT system. Our paper begins by giving an overview of the MSC scanner design and the relevant system model in section 2. Then, we outline which steps in the reconstruction process must be modified in the presence of the MSC: this includes the topic of partial source obstruction in section 2.2 and the preprocessing pipeline in section 3.2.2. Finally, we demonstrate the results of these modifications in section 4 and discuss future directions in section 5.

Theory
The SparseCT MSC design replaces the standard adaptive collimator with a new collimator plate that contains many slits, as shown in figure 1. Specifications of these slits are denoted in terms of their footprint on the detector array, such that a one-row opening on the MSC would illuminate one row of the detector with an ideal point source. One example design is to open four rows out of every 16 rows; we indicate this notationally with W4S16, for a width of four rows and a spacing period of 16 rows. The W4S16 pattern would be repeated from one end of the collimator to the other, until all rows of the detector are accounted for. Figure 2 shows an illustration of the MSC and the source, including the location of the source, the MSC, and the undersampled x-ray beam. As shown in the figures, the rays begin at the source, are partially blocked by the MSC, pass through the subject, and then illuminate the detector.
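As a concrete illustration, the repeating width/spacing pattern can be expressed as a binary row-illumination mask. This is a minimal sketch assuming an ideal point source; the function name and the `offset` argument (a z-shift of the MSC, used later for its motion) are our own:

```python
import numpy as np

def msc_row_mask(n_rows, width, spacing, offset=0):
    """Binary detector-row illumination mask for a WxSy multislit collimator.

    Hypothetical helper: opens `width` rows out of every `spacing` rows,
    repeated across the detector; `offset` models a z-shift of the MSC.
    Assumes an ideal point source (no penumbra).
    """
    rows = (np.arange(n_rows) + offset) % spacing
    return rows < width

# W4S16: 4 open rows out of every 16 -> a quarter of rows illuminated
mask = msc_row_mask(64, width=4, spacing=16)
```

For W4S16, a quarter of the detector rows are illuminated, corresponding to a nominal 4-fold dose reduction; W2S16 would halve that again.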
As the gantry rotates around the subject, two motions occur: the first is flying focal spot motion accomplished by steering of the x-ray beam. This motion allows the scanner to effectively acquire two or more projections for a single projection angle, in order to increase z-axis resolution. When the MSC is included, this source motion leads to a shift of the MSC shadow on the detector, and thus continues to produce extra views for one projection angle, but with a predictable detector irradiation pattern determined by the source motion and the MSC design. The second motion that occurs throughout the scan is movement of the MSC itself under an actuator. Motion occurs along the z-direction. This paper discusses a linear motion pattern, with study of other motion patterns left to future work.

System model
Our system model relies on that of previous work (Thibault et al 2007). The pth data point, b_p, can be modeled as

b_p = f_p + ε_p,  with  ȳ_p = I_p exp(−a_p^T x),   (1)

where x ∈ R^N is the object being imaged with N voxels and I_p is the source intensity (Thibault et al 2007). The line integral projections, ȳ_p, are attenuated by the object, x, as described by the ray path in a_p. The Poisson variable f_p with mean and variance ȳ_p is called the quantum noise, while the Gaussian variable ε_p with variance σ² is called the electronic noise.
For simplification, we work with the postlog data, y_p, instead of working with (1) directly. We use the following linear model (Thibault et al 2007):

y = Ax + n,   (2)

where y = [y_1, ..., y_P]^T is the preprocessed data and A = [a_1, ..., a_P]^T is the system matrix of line integrals.
Section 2.2 describes the construction of A that takes into account partial obstruction of the x-ray source. n ∼ N(0, W^−1) is Gaussian noise designed to approximate the combined effects of both noise sources and the preprocessing steps. For clarity, we note that ȳ_p is the ideal, noiseless version of y_p, whereas y_p is the estimate of ȳ_p that arises from log-transform preprocessing. The covariance matrix, W^−1, is assumed to be diagonal, with its inverse being

W = diag{w_p},   (3)

where 1/w_p is the variance of the pth data point. Based on previous results (Thibault et al 2007), we use w_p = b_p, which leads to high-intensity pre-log data being more important in the reconstruction. Calculating y_p from b_p requires a logarithm and some spectral corrections that we describe in section 3.2.2. Given the signal model in (2), one could obtain the penalized weighted least squares (PWLS) estimate of x by solving the following optimization problem:

x̂ = argmin_{x ⩾ 0} (1/2)‖Ax − y‖²_W + R(x),   (4)

where the above estimate also enforces a non-negativity constraint on the attenuation coefficients in x. To compensate for aliasing due to MSC-based undersampling, we incorporate R(x), a regularization function that promotes sparsity. We define R(x) to have the following general form:

R(x) = Σ_{j=1}^J β_j ψ_j([Cx]_j),   (5)

where C ∈ R^{J×N} is a finite difference operator and [Cx]_j is the jth output of Cx. The function ψ_j(t) is a potential function. The classic compressed sensing choice for ψ_j(t) would be the absolute value function, thus yielding the ℓ1 regularizer. However, in our experiments we found that this could lead to salt-and-pepper noise artifacts, as demonstrated in previous results (Thibault et al 2007). Instead, we use the hyperbola function (Long et al 2010),

ψ_j(t) = δ² (√(1 + (t/δ)²) − 1).   (6)

With small values of δ, (6) approximates the classic total variation function used for compressed sensing. β_j is a space-varying regularization weight.
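The signal model and the statistical weighting can be sketched numerically. The snippet below is illustrative only (toy line integrals, photon count, and noise level are our own values, not scanner data): it draws prelog data with Poisson quantum noise and Gaussian electronic noise, forms the postlog data y_p, and sets the weights w_p = b_p so that high-count rays dominate the data-fidelity term:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy line integrals a_p^T x for a handful of rays (assumed values)
line_integrals = np.array([0.5, 1.0, 2.0, 4.0])
I0 = 1e5       # incident photon count I_p (assumed)
sigma = 10.0   # electronic noise standard deviation (assumed)

ybar = I0 * np.exp(-line_integrals)                        # ideal prelog means
b = rng.poisson(ybar) + rng.normal(0.0, sigma, ybar.shape)  # quantum + electronic noise
b = np.maximum(b, 1e-5 * I0)                               # clip negatives (section 3.2.2)

y = -np.log(b / I0)   # postlog data y_p, a noisy estimate of the line integrals
w = b                 # statistical weights w_p = b_p
```

Rays with heavier attenuation receive exponentially fewer photons, hence noisier postlog values and smaller weights in the reconstruction.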
We allow the regularization parameter to vary since the data fidelity weighting can induce resolution variation in the reconstructed image (Fessler and Rogers 1996). In particular, low weighting of projections through the center of the image volume can lead to over-regularization in this area. We compensate for this in our selection of the β_j weights. First, we calculate

κ_n = √( Σ_p w_p a²_pn / Σ_p a²_pn ),   (7)

where a_pn is the (p, n)th entry of A. We then apply

β_j = β ( Σ_n |c_jn| κ_n ) / ( Σ_n |c_jn| ),   (8)

where c_jn is the (j, n)th entry of C and β is a baseline value that governs the regularization level in the entire image volume. This choice of β_j values promotes uniform resolution in the reconstructed image (Fessler and Rogers 1996, Kim et al 2015).
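A minimal numerical sketch of this weighting scheme follows. It assumes the aggregate certainty weights κ_n = sqrt(Σ_p w_p a_pn² / Σ_p a_pn²) and that each β_j averages κ over the voxels its finite difference touches; the exact averaging in the cited works may differ, so treat this as one plausible instance:

```python
import numpy as np

def kappa_weights(A, w):
    """Per-voxel certainty: kappa_n = sqrt(sum_p w_p a_pn^2 / sum_p a_pn^2)."""
    num = (w[:, None] * A**2).sum(axis=0)
    den = (A**2).sum(axis=0)
    return np.sqrt(num / np.maximum(den, 1e-12))

def spatial_betas(C, kappa, beta=1.0):
    """One sketch of the mapping to beta_j: a |c_jn|-weighted average of kappa."""
    Cabs = np.abs(C)
    return beta * (Cabs @ kappa) / np.maximum(Cabs.sum(axis=1), 1e-12)

# Tiny example: 3 voxels, 2 rays; the last voxel is seen only by a low-weight ray
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
w = np.array([4.0, 1.0])          # statistical weights w_p
C = np.array([[-1.0, 1.0, 0.0],   # first-difference operator
              [0.0, -1.0, 1.0]])
kappa = kappa_weights(A, w)
betas = spatial_betas(C, kappa, beta=1.0)
```

Voxels seen mostly by low-weight rays get smaller κ, and hence smaller β_j, counteracting the over-regularization that uniform β would cause there.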

System operator and partial source obstruction
We construct the data operator, A, based on the classic Siddon line integral method (Siddon 1985). We choose to do this since the Siddon method introduces minimal parameterization to the system geometry, allowing adaptation to correct for the effects of partial source obstruction, a distortion the MSC introduces to the standard system geometry. Partial source obstruction results in each row of the detector only seeing a portion of the source. This shifts the effective location of the ideal point source, resulting in row-variant cone beam magnification effects. To compensate for partial source obstruction effects, we shift the location of the source for each row of the detector. We determine the size of the shift from offline simulation experiments on the scanner geometry for each MSC position and MSC design. Then, in our Siddon ray-tracing procedure (Siddon 1985), we apply this shift for each row of the detector and MSC position. Figure 2 shows a schematic of the partial source obstruction effect. To estimate what portion of the source was observable, we approximated the source as a summation of Gaussian basis functions based on experimental data. Then, we calculated the centroid of the observable source and used this information to perturb the point source location for each row of the detector in the projector.
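The centroid calculation can be sketched as follows. All numbers here are illustrative rather than calibrated scanner data: the focal-spot profile is approximated as a sum of Gaussian basis functions, and a detector row that sees only an interval of the source sees a shifted effective centroid, which is the perturbation applied to the point-source location in the projector:

```python
import numpy as np

def effective_source_shift(centers, weights, sigma, vis_lo, vis_hi, n=4001):
    """Shift of the effective point-source location under partial obstruction.

    The focal-spot profile along z is a sum of Gaussian basis functions
    (centers/weights/sigma are illustrative, not measured values). A row
    that sees only [vis_lo, vis_hi] of the source sees the centroid of
    that visible portion instead of the full-profile centroid.
    """
    z = np.linspace(min(centers) - 5 * sigma, max(centers) + 5 * sigma, n)
    profile = sum(a * np.exp(-0.5 * ((z - c) / sigma) ** 2)
                  for a, c in zip(weights, centers))
    full = (z * profile).sum() / profile.sum()          # unobstructed centroid
    vis = (z >= vis_lo) & (z <= vis_hi)
    part = (z[vis] * profile[vis]).sum() / profile[vis].sum()
    return part - full

# A row seeing only the upper half of a symmetric focal spot sees the
# effective source shifted upward
shift = effective_source_shift([0.0], [1.0], sigma=0.5, vis_lo=0.0, vis_hi=3.0)
```

In the actual pipeline this shift would be tabulated offline per detector row, MSC position, and MSC design, then applied during Siddon ray tracing.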

Optimization algorithm
There are several approaches for minimizing the cost function in (4): our chosen method was to use the ordered subsets/momentum algorithm (OS-momentum) (Kim et al 2015) due to its speed and ease of implementation. The OS-momentum method develops an optimization transfer algorithm for (4) and accelerates the optimization transfer algorithm using ordered subsets and Nesterov momentum (Nesterov 1983). We review this procedure for (4).
The first step is the creation of the surrogate, which can be done by upper-bounding the Hessian of (4), H, with a diagonal matrix, D. The Hessian can be decomposed into two parts given by the data-fitting term and the regularizer, i.e. H = H_f + H_R. Then,

D = D_f + D_R   (9)

is a diagonal upper bound provided that D_f ⪰ H_f and D_R ⪰ H_R, where D ⪰ H implies that D − H is positive semidefinite. Thus, we can consider the data-fitting and regularizer terms separately in two stages. A diagonal upper bound for the data-fitting part of the Hessian, A^T WA, can be found using the method of De Pierro (1995):

D_f = diag{|A|^T W |A| 1},   (10)

where |·| is the element-wise absolute value operation and 1 is the vector of all ones. This bound is commonly used for separable quadratic surrogates (SQS) methods (Erdoğan and Fessler 1999). The method of De Pierro also gives a bound for the regularizer (De Pierro 1995),

D_R = diag{|C|^T diag{β_j ψ̈_j(0)} |C| 1},   (11)

where ψ̈_j(0) is the maximum curvature of the potential function. Combining these into D = D_f + D_R gives the following separable quadratic surrogate update (Erdoğan and Fessler 1999) at iteration k for the cost function in (4):

x^(k+1) = [ x^(k) − D^(−1) ∇Ψ(x^(k)) ]_+,   (12)

where ∇Ψ(x^(k)) is the gradient computed at the point x^(k) and [·]_+ enforces non-negativity. Iteratively applying the update in (12) will descend the cost function in (4). Such an algorithm is typically slow, so we add ordered subsets and momentum accelerations (Kim et al 2015). The ordered subsets acceleration assumes that the overall cost function can be decomposed into M sums of functions that have the same shape. If this is satisfied, then we apply a similar update to that in (12), but with the scaled subset gradient M∇Ψ_m(x^(k)) in place of ∇Ψ(x^(k)). This can also be combined with Nesterov momentum by introducing auxiliary variables v and z (Kim et al 2015), giving the overall OS-momentum algorithm. All subsets are updated in the FFT bit-reversal order to ensure incoherence between consecutive updates.
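The following toy sketch illustrates the OS-momentum structure (SQS diagonal majorizer, scaled subset gradients, Nesterov momentum, non-negativity projection) on a tiny weighted least-squares problem without a regularizer. It is not the exact Kim et al implementation; in this example the row-interleaved subsets are deliberately identical, so the ordered-subsets "same shape" assumption holds exactly:

```python
import numpy as np

def os_momentum(A, y, W, n_iter=100, n_subsets=2):
    """Toy OS-momentum sketch for min_{x>=0} 0.5 * ||Ax - y||_W^2.

    D is the De Pierro/SQS diagonal majorizer diag{|A|^T W |A| 1},
    subsets interleave the sinogram rows, and Nesterov momentum is
    applied after every subset update.
    """
    P, N = A.shape
    D = np.abs(A).T @ (W @ (np.abs(A) @ np.ones(N)))  # diagonal majorizer entries
    subsets = [np.arange(m, P, n_subsets) for m in range(n_subsets)]
    x = np.zeros(N)
    z = np.zeros(N)
    t = 1.0
    for _ in range(n_iter):
        for idx in subsets:
            Am, ym = A[idx], y[idx]
            Wm = W[np.ix_(idx, idx)]
            grad = n_subsets * (Am.T @ (Wm @ (Am @ z - ym)))  # scaled subset gradient
            x_new = np.maximum(z - grad / D, 0.0)             # SQS step + non-negativity
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)     # Nesterov momentum
            x, t = x_new, t_new
    return x

# Consistent toy system whose two row-interleaved subsets are identical
A = np.array([[2.0, 1.0], [2.0, 1.0], [1.0, 3.0], [1.0, 3.0]])
x_true = np.array([1.0, 2.0])
y = A @ x_true
W = np.eye(4)
x_hat = os_momentum(A, y, W)
```

In general, OS methods are not guaranteed to converge when subset gradients disagree at the solution; the full algorithm relies on bit-reversal ordering and, in later variants, relaxation to control this.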

Methods
We performed two sets of experiments to investigate the efficacy of the interrupted-beam design: simulation experiments and real-world prototype experiments. Section 3.1 describes the simulation experiments, while section 3.2 describes the real-world prototype experiments. All experiments used the Siddon line integral method (Siddon 1985) implemented in the EMrecon toolbox (Kösters et al 2011) (available at emrecon.unimuenster.de).

Simulation experiments
Our simulation experiments used data from the Low Dose CT Grand Challenge (McCollough et al 2017). The data consisted of a central 736 × 64 × 4006 (detector channels × detector rows × projections) section of the original helical sinogram of a liver examination from patient 67. This section of data was cropped from the original data set from z = 125 mm to z = 165 mm, yielding 4006 projections. From this, we reconstructed a 768 × 768 × 32 image volume with 0.66 mm × 0.66 mm × 3 mm resolution. The images displayed in subsequent figures are from slice 15 of the image volume. Although the tungsten MSC has a very high attenuation factor and completely blocks the beam, the size of the focal spot leads to a non-binary sampling pattern corrupted by penumbra effects (Chen et al 2019). The signal in the penumbra region may experience different physical effects than the main signal region; this can be caused by a spectral shift due to the source location on the anode or a partially attenuated path through a corner of the MSC. For these reasons, we decided to exclude elements from the penumbra region that had more than an 80% attenuation due to the MSC. We corrupted the data using a noise model that incorporated penumbra attenuation effects (Chen et al 2017) based on the W4S16 and W2S16 sampling patterns (4-fold and 8-fold reduction factors). Then, we reconstructed by minimizing (4) using the OS-momentum algorithm (Kim et al 2015). We reconstructed images across a large array of values for the regularization parameter. We used δ = 5 HU based on a previous study (Long et al 2009). These results were compared to quarter- and eighth-dose tube-current reduction simulations based on the Siemens ReconCT software (Yu et al 2012), also reconstructed by minimizing (4) with the OS-momentum algorithm. A reference standard was reconstructed with uncorrupted data. Lastly, our experiments used a sinogram oversampling factor of 3 to reduce ringing from the Siddon ray-tracing procedure.
We did not simulate the effects of partial source obstruction in our simulation experiments. Our main task was to isolate the sampling and noise properties of the MSC to ascertain the feasibility of the approach on in vivo data. However, these effects are fully realized in the real-world experiments.

Prototype experiments

Data collection
Our real-world prototype consisted of a Siemens SOMATOM Force scanner equipped with a W4S16 collimator in place of the standard adaptive collimator. Figure 3 shows the W4S16 collimator secured in the adaptive collimator jaws of the Siemens scanner.
In our prototype experiments, we used the American College of Radiology (ACR) CT phantom. The phantom has inserts at known locations that can be used to determine the resolution of the system. Four data sets were acquired with the prototype scanner: a no-MSC, 1/4 tube-current air scan; a no-MSC, 1/4 tube-current ACR scan; an MSC air scan; and an MSC ACR scan. The air scans were acquired for log-domain preprocessing (described in section 3.2.2). From these data, we retrospectively simulated a scan in which the MSC has a piecewise linear motion from the first to the sixteenth position over one rotation. This generated a 920 × 96 × 2100 data set. Of this data set, elements that had greater than 80% attenuation due to the MSC were excluded. From this, we reconstructed a 768 × 768 × 32 image volume at 0.66 mm × 0.66 mm × 3 mm resolution. The reconstruction process used the custom preprocessing pipeline described in section 3.2.2 and the partial source obstruction model described in section 2.2.

Prototype preprocessing
The prototype experiments used a custom preprocessing pipeline to convert the prelog data points b p to postlog data points y p . Preprocessing pipelines consist of several corrections for non-ideal system factors. These factors include alterations to the spectral distribution of the x-ray source due to beam hardening, negative values that can arise in b p due to the electronic noise, geometric distortions, and anatomy-specific factors. Corrections for these factors can vary between vendors, and the details are often proprietary. Our goal is to build upon past literature on preprocessing while making modifications necessary for SparseCT. These include air calibration, log transformation, and beam-hardening correction. We apply these modifications to the data in our prototype experiments.
We first correct the negative values due to the electronic noise in b_p. Current systems have signal-dependent filters (SDFs) for handling small values that occur behind highly attenuating structures such as bone. However, the MSC itself is an abnormal highly attenuating structure, so we do not expect these filters to work in our setting. Instead, we apply clipping:

b̃_p = max(b_p, b_p,min),   (13)

where b_p,min = 10^−5 I_p. This limits the minimum value to be five orders of magnitude below the incident value. Clipping can create some bias in low-signal regions, but we design the reconstruction weights in (4) such that low-signal regions have relatively small effects on the final reconstruction.
We next apply the logarithm. This requires knowledge of the fluence, I_p, which is measured with an air scan. We collected air scans for each MSC position with our prototype. With these scans, we apply the logarithm to the clipped data as

y′_p = −log(b_p / I_p).   (14)

The final step is beam-hardening correction. For this, we relied on a proprietary two-compartment model from Siemens (related to that of Raupach et al (2001)). Since we apply the method in the post-log domain after the air-scan calibration, we assume that the beam-hardening method performs under normal operating conditions and that no further modifications are necessary due to the MSC. This gives the final postlog data for reconstruction:

y_p = BHC(y′_p),   (15)

where BHC(·) is the beam-hardening correction function. We did not perform scatter correction due to the antiscatter grid on the detector.
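The preprocessing chain of this section (clipping, air-calibrated logarithm, beam-hardening correction) can be sketched as below. Since the Siemens two-compartment beam-hardening correction is proprietary, BHC is replaced here with an illustrative postlog polynomial, and all numeric values are hypothetical:

```python
import numpy as np

def preprocess(b, I_air, bhc_coeffs=(1.0, 0.0)):
    """Prelog -> postlog preprocessing sketch (clip, log, beam hardening).

    `I_air` is the per-ray fluence from the matching MSC-position air scan.
    The real beam-hardening correction is proprietary; here BHC is an
    illustrative polynomial y -> c1*y + c2*y^2 applied in the postlog domain.
    """
    b_min = 1e-5 * I_air                 # five orders below the incident value
    b_clipped = np.maximum(b, b_min)     # clip electronic-noise negatives
    y = -np.log(b_clipped / I_air)       # air-calibrated log transform
    c1, c2 = bhc_coeffs
    return c1 * y + c2 * y**2            # stand-in for BHC(y)

b = np.array([-50.0, 2.0e4, 5.0e4])     # one negative prelog sample
I_air = 1.0e5 * np.ones(3)
y = preprocess(b, I_air)
```

Note how the negative sample is clipped to b_min rather than producing a NaN in the logarithm; its large postlog value is then down-weighted by the statistical weights in (4).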

Results

Simulation experiments
In our simulation experiments we compared to the full-dose reference scan shown in figure 4(a). We reconstructed this image with mild regularization to reduce the influence of the noise present in the original data. After calculating the reference image, we simulated SparseCT acquisitions with the W4S16 and the W2S16 collimator designs. For comparison, we also simulated 1/4th-dose and 1/8th-dose tube-current acquisitions and reconstructed them with the same algorithm as the full-dose scan. Then, we calculated the normalized root-mean-square difference (NRMSD) and structural similarity measure (SSIM) across six slices between the low-dose image volumes and the reference image volume. The SSIM calculations used 400 HU for the dynamic range since this was the viewing window size. All displayed images in figures 4 and 5 were reconstructed with the NRMSD-optimal β for that method. The results of these reconstructions are shown in figures 4 and 5. Both tube-current reduction and SparseCT acquisitions show image degradation as the dose is reduced in the simulated images in figures 4 and 5. At high dose reduction levels, the SparseCT acquisitions showed less image degradation, as illustrated in figure 5. To more fully characterize the methods, we also plotted SSIM and NRMSD versus the ground-truth image as a function of the regularization parameter. These are shown in figure 6. The SparseCT methods perform better for the metrics at lower regularization levels. Among our experiments, the minimum W4S16 NRMSD was 2.56% at β = 1270 with an SSIM of 0.8752. The minimum 1/4 tube-current NRMSD was 2.69% at β = 3002 with an SSIM of 0.8810. The minimum W2S16 NRMSD was 3.19% at β = 1270 with an SSIM of 0.8336. The minimum 1/8 tube-current NRMSD was 4.19% at β = 3999 with an SSIM of 0.8015.

Prototype experiments

Figure 8 compares results with a quarter-dose open collimator versus the MSC with linear motion by zooming in on the 7 line pairs per cm (lp cm−1) insert of the ACR phantom.
For these reconstructions, a β of 128 was used for all methods. The open-collimator results were obtained with the same dose reduction achieved for the SparseCT method and with the same reconstruction algorithm. Although both methods were able to resolve the 7 lp cm−1 insert in figure 8, we found inserts finer than 7 lp cm−1 were more difficult to resolve. Figures 4 and 5 suggest that SparseCT and tube-current reduction have different error characteristics. Although SSIM and NRMSD are similar for both methods, the errors for the tube-current reduction method are heavily concentrated in the interior of the abdomen, whereas the errors for SparseCT are more diffuse. This effect becomes greater at higher dose reduction factors. Model-based reconstruction methods for low-dose CT often rely on detailed system models and priors to compensate for this increased noise. Figure 6 suggests that with these models, the extent of regularization necessary to maintain high image quality is reduced with an interrupted-beam design. Nonetheless, the trends suggested by figures 4-6 should be considered a proof of concept of the potential of an interrupted-beam design. Further validation is necessary in future work.
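For reference, the NRMSD values reported above can be computed as in the sketch below. The excerpt does not spell out the exact normalization, so we assume the common definition 100·‖img − ref‖₂/‖ref‖₂:

```python
import numpy as np

def nrmsd(img, ref):
    """Normalized root-mean-square difference, as a percentage.

    One common definition (assumed here; the exact normalization used in
    the study is not given in this excerpt): 100 * ||img - ref||_2 / ||ref||_2.
    """
    return 100.0 * np.linalg.norm(img - ref) / np.linalg.norm(ref)

ref = np.full((8, 8), 100.0)   # toy reference slice
img = ref + 1.0                # uniform 1% error
err = nrmsd(img, ref)
```

SSIM, by contrast, requires a windowed comparison with an explicit dynamic range (400 HU in the study) and is typically computed with a library implementation rather than by hand.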

New considerations of interrupted-beam image reconstruction
Our reconstruction pipeline shares a number of aspects with other low-dose reconstruction methods. Although data fidelity weighting and regularization for uniform resolution were originally developed for tube-current reduction schemes (Fessler and Rogers 1996, Thibault et al 2007), we found they readily extend to the SparseCT setting to account for penumbra effects. We also found that much of the standard preprocessing pipeline can be preserved after air calibration. Still, SparseCT introduces some new considerations that are not present in other sparse sampling schemes such as view-based undersampling. In particular, we discovered that partial source obstruction can affect the data. Our correction for this required simulations of the source distribution and an alteration to the projection operator. Many implementations of modern projection operators (such as separable footprints (Long et al 2010)) are highly parameterized, but the jittering necessary for partial source obstruction precludes the use of detailed parameterizations. This was the primary motivation for our use of the very general Siddon method (Siddon 1985). In the future, we will examine whether more parameterized projection operators can be adapted for SparseCT.

Extensions to a fully-functional prototype
Here, we simulated a linear motion of the MSC by retrospectively combining scans at different MSC positions. These results readily extend to a prospective linear motion with a few modifications under development. First, the air calibration scan must be registered to each location of the MSC in the sinogram. This requires a process for detecting the location of the MSC, then interpolating between the two closest corresponding air scan locations in order to perform the appropriate air calibration. Another modification will register the partial source obstruction effects to the detected location of the MSC by first fitting a polynomial for partial source obstruction corrections based on simulations, then applying this polynomial to the appropriate position determined from the MSC registration procedure. After these modifications are applied, the rest of the pipeline described in this paper can be used for reconstruction.
Another possibility for extension is that of dual-energy CT. The MSC naturally segments the sinogram into high- and low-energy areas. In our case we entirely blocked some segments of the sinogram in order to achieve dose reduction, but an alternative approach would be to only partially attenuate these areas to achieve dual-energy CT imaging, an application that has been explored for CBCT systems (Lee et al 2017). The partial source obstruction models developed in this paper would continue to be relevant to reconstruction in this setting. Joint reconstruction techniques (Knoll et al 2014, Rigie and La Rivière 2015, Lee et al 2017) could be used to mitigate the image degradation from the low-energy filter by combining information from both segments of the sinogram.

Conclusion
We developed a new image reconstruction pipeline tailored to SparseCT acquisitions. Our method was based on a model-based image reconstruction (MBIR) framework, but included new modifications to account for aspects of the interrupted-beam design. We altered preprocessing methods and updated projectors for the effects of partial source obstruction. We confirmed that data fidelity weighting appropriately accounts for the signal variation due to penumbra effects. In retrospective simulation experiments, we observed similar NRMSD and SSIM values to tube-current reduction at 4-fold dose reduction and improved NRMSD and SSIM values at 8-fold dose reduction. Errors for tube-current reduction were dominated by the interior of the abdomen, whereas in SparseCT this distribution of errors was flattened. Finally, we observed that SparseCT was able to achieve similar resolution to tube-current reduction in prototype experiments. In the future, we will perform experiments with prospective motion and further develop the corresponding reconstruction methods to incorporate the latest advances, such as deep learning.