Quantitative Susceptibility Mapping through Model-based Deep Image Prior (MoDIP)

Supervised learning methods, being data-driven, have limited applicability for solving dipole inversion in Quantitative Susceptibility Mapping (QSM) when scan parameters vary across different objects. To address this generalization issue in supervised QSM methods, we propose MoDIP (Model-based Deep Image Prior), a novel training-free, model-based unsupervised method. MoDIP comprises a small, untrained network and a Data Fidelity Optimization (DFO) module. The network converges to an interim state, acting as an implicit prior for image regularization, while the optimization process enforces the physical model of QSM dipole inversion. Experimental results demonstrate MoDIP's excellent generalizability in solving QSM dipole inversion across different scan parameters. It is robust on pathological brain QSM, achieving over 32% higher accuracy than supervised deep-learning and traditional iterative methods. It is also 33% more memory-efficient and runs 4 times faster than conventional DIP-based approaches, enabling 3D high-resolution image reconstruction in under 4.5 minutes.

To reconstruct the susceptibility map from a Magnetic Resonance Imaging (MRI) phase measurement, a technique known as Quantitative Susceptibility Mapping (QSM) [18], [19], [20], involving multiple image processing steps, has been developed. These steps include phase unwrapping, background field removal, and dipole inversion. The final dipole inversion step computes the susceptibility map by performing a deconvolution with the unit dipole kernel, which poses an ill-posed inverse problem. Several iterative optimization approaches have been developed to solve this problem [21], [22], [23], [24], incorporating image regularization techniques to suppress noise and streaking artifacts. For instance, the MEDI method [25] assumes that the predicted QSM shares tissue anatomical boundaries with the local field map. The SFCR method [26] refines the morphological prior by including susceptibility structural features. The iLSQR method [27] solves QSM dipole inversion by estimating and removing streaking artifacts from an initial LSQR estimation. STAR-QSM [28] further improves susceptibility inversion for image data with high-intensity sources. There are also total field inversion [29], [30] and single-step QSM [31] methods, which directly derive susceptibility maps from the unwrapped or raw phase maps. However, these conventional regularization techniques often result in artefactual, blurry, and underestimated susceptibility maps. Additionally, they often require fine-tuning of the regularization parameters to adapt to varying scan parameters and objects.
Deep learning methods have been developed to solve dipole inversion in QSM. However, most supervised learning methods [32], [33], [34], [35] have been trained solely on 1 mm isotropic resolution, pure-axial acquisition datasets of the human brain. As a result, their applicability is restricted when dealing with varying spatial resolutions, acquisition orientations, brains with abnormal sources (e.g., high-intensity hemorrhages), or objects other than the human brain. To enhance model generalizability, some methods [36], [37], [38] have employed synthetic data and meta-learning strategies, while other methods [39], [40], [41] have incorporated deep neural networks in an iterative manner guided by the physical model of dipole inversion. A recent study [42] has also embedded affine transformations into an end-to-end model. While these supervised methods have demonstrated improvements in model generalization, their performance remains constrained by the availability and diversity of training datasets. In contrast, a self-supervised solution [43] has been proposed that exhibits greater robustness against image resolution variations through adaptive instance normalization; however, the effect of acquisition orientation was not investigated.
Deep Image Prior (DIP) [44] introduced deep neural networks as an implicit regularization technique for solving inverse problems. In the context of QSM, a recent approach called FINE (Fidelity Imposed Network Edit) [45] demonstrated the capabilities of DIP by fine-tuning pre-trained networks during the inference stage. However, this study highlighted the intensive forward and backward computations through the large U-net architecture, which consume substantial GPU memory and computational time. As a result, FINE imposed significant limitations on image size and resolution compared to conventional supervised methods.
In contrast to conventional DIP-based methods such as the original DIP [44] and FINE [45], which rely heavily on protracted iterations and intricate network architectures for image reconstruction, our proposed method, Model-based DIP (MoDIP), takes a different approach. It combines a compact mini-U-net with a Data Fidelity Optimization (DFO) module, enhancing generalization capabilities while substantially reducing computational complexity. The main contributions can be summarized as follows:

A. QSM Dipole Inversion
The perturbed local field b due to tissue susceptibility χ can be formulated as a convolution with the unit dipole kernel d. This physical model can be written in k-space using the following equation:

b = ℱ⁻¹{ D · ℱ{χ} },    (1)

where ℱ and ℱ⁻¹ denote the forward and inverse Fourier transforms, and D = ℱ{d}. The unit dipole kernel can be represented in k-space as:

D(k) = 1/3 − (k · h)² / |k|².    (2)

Here k = [kx, ky, kz] denotes the k-space coordinates and h = [hx, hy, hz] is the unit vector of projections of the field of view onto the main magnetic field vector B₀ [46]. The relationship between the image voxel size (a = [ax, ay, az]) and the k-space coordinates is given by:

k_i = n_i / (N_i a_i),  n_i = −N_i/2, …, N_i/2 − 1,  i ∈ {x, y, z},    (3)

where [Nx, Ny, Nz] denotes the image matrix size.
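As a concrete illustration, the dipole kernel of Eq. 2 can be constructed on a discrete k-space grid as follows. This is a minimal NumPy sketch, not the paper's implementation; the function name and the convention of zeroing the undefined k = 0 term are our own choices, and the grid follows NumPy's unshifted FFT ordering so the kernel can be applied directly with `np.fft.fftn`.

```python
import numpy as np

def dipole_kernel(matrix_size, voxel_size, b0_dir):
    """Unit dipole kernel D(k) = 1/3 - (k.h)^2 / |k|^2 on an FFT-ordered grid.

    matrix_size: (Nx, Ny, Nz); voxel_size: (ax, ay, az) in mm;
    b0_dir: unit vector of the main magnetic field direction.
    """
    # k_i = n_i / (N_i * a_i), in the same (unshifted) ordering as np.fft.fftn
    axes = [np.fft.fftfreq(n, d=a) for n, a in zip(matrix_size, voxel_size)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    kdoth = kx * b0_dir[0] + ky * b0_dir[1] + kz * b0_dir[2]
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kdoth**2 / k2
    D[k2 == 0] = 0.0  # zero out the undefined k = 0 term (a common convention)
    return D
```

For a pure-axial field direction h = [0, 0, 1], the kernel ranges from −2/3 (on the kz axis) to 1/3 (in the kx–ky plane), with the zero-valued "magic cone" in between making the inversion ill-posed.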
A minimization task can be formulated to solve the susceptibility χ from the local field b:

χ* = argmin_χ ‖ℱ⁻¹{D · ℱ{χ}} − b‖ + λ ℛ(χ),    (4)

where ℛ(χ) is a regularization term that complements the data fidelity term, and λ is the weighting factor for the regularization. Conventionally, the regularization term originates from prior assumptions such as sparsity and smoothness. However, such assumptions often yield suboptimal results and require manual parameter tuning for different data.
B. Supervised Learning for QSM

Supervised deep-learning dipole inversion approaches try to learn the optimal parameters θ of a pre-defined model f_θ for mapping from local field to tissue susceptibility:

θ* = argmin_θ Σ_i ‖f_θ(b_i) − χ_i‖,    (5)

where (b_i, χ_i) are pairs of inputs (local field maps) and labels (tissue susceptibility maps) for training. Data-driven intrinsic regularization is learned by fitting the entire training dataset. However, most existing supervised QSM methods are trained using 1 mm isotropic resolution images acquired in a pure-axial head orientation (h = [0, 0, 1]). Consequently, when applying these trained models to testing data with scan parameters different from the training dataset (e.g., anisotropic resolutions or oblique acquisitions [42]) or in out-of-distribution cases such as lesions with substantially higher susceptibility values than healthy brain tissue [28], [35], [47], the networks may fail to yield accurate results.

C. Deep Image Prior (DIP)
Deep Image Prior (DIP) [44] proposed optimizing the network weights θ by "fitting" a single inference data point, without the need for any pre-training. For QSM dipole inversion, the DIP optimization can be written as:

θ* = argmin_θ ‖ℱ⁻¹{D · ℱ{f_θ(z)}} − b‖,    (6)

where z is the network input (Gaussian noise in the original DIP paper). The network architecture implicitly provides regularization, since noise in the measurement is more challenging for the network to fit than the underlying clean data. However, the untrained nature of the network hinders the application of DIP in terms of reconstruction time, which typically requires hundreds to thousands of iterations. Moreover, it is susceptible to being trapped in local minima when sub-optimal hyper-parameters are set, as discussed in the original paper [44].

D. Fidelity Imposed Network Edit (FINE)
An alternative approach, FINE (Fidelity Imposed Network Edit) [45], has been proposed. This method involves training a deep neural network using paired local field and QSM datasets and fine-tuning the network weights for each local field input during inference. While sharing similarities with DIP, FINE requires supervised pre-training. In essence, FINE can be seen as a particular case of DIP in which the network is initialized with pre-trained weights instead of random initializations. When the pre-training and testing datasets closely align, the pre-trained FINE model is expected to perform more effectively than DIP's untrained model starting from scratch. However, FINE's performance diminishes when the pre-training datasets deviate from the refinement scenario, as we will show in our study. Moreover, the computational demand of the large network used in FINE limits its application to higher-resolution 3D volumes.

A. Model-based Deep Imaging Prior (MoDIP)
We propose a Model-based Deep Image Prior (MoDIP) method to improve the reconstruction speed, memory efficiency, and accuracy of the original DIP approach in QSM. As depicted schematically in Fig. 1, the bottom grey plane represents the manifold of QSM solutions to the local field that satisfy the exact model of Eq. 1. Due to measurement noise and errors in the local field map, the desired optimal solution χ* results in a non-zero cost and is therefore off this plane, as indicated by the red dot. Pure optimization with no prior (brown curve) and DIP with deep and large network architectures (dark blue curve) may converge to solutions in the zero-cost plane, far away from χ*, resulting in an overfitting effect that amplifies noise and errors in the reconstructed QSM images.
The proposed MoDIP method combines the concept of DIP with physics-model-based optimization. In MoDIP (green solid curve), the network adjusts its trajectory and converges to an initial estimation χ₁ of QSM. This network output serves as the starting point for the subsequent Data Fidelity Optimization (DFO) process (green dashed curve), leading to the final QSM prediction χ₂. The DFO process minimizes the QSM physical model objective, bringing the solution closer to its optimum. In MoDIP, the network's task is to transform the local field into an easily attainable interim state instead of directly converting it into QSM. This approach relieves the network of the excessive burden of dipole inversion and accelerates its convergence, enabling the use of a lightweight network to reduce computational costs and GPU memory requirements, thus facilitating the reconstruction of high-resolution 3D images. It is worth noting that, unlike the unrolled methods [39], [40], MoDIP performs only a single forward pass through the network, followed by multiple rapid and memory-efficient gradient descent optimization steps within each iteration.
Furthermore, we replace the Gaussian noise input in the original DIP paper with the local field map, which shares some image features with the susceptibility map, easing the network's transformation task.Lastly, in addition to the physical model loss, we introduce a local field Laplacian loss to promote tissue boundary consistency.
The overall methodology of MoDIP is illustrated in Fig. 2, which depicts the DIP process of updating the network parameters θ, followed by the optimization process of updating the susceptibility map χ. Detailed implementations are described in Alg. 1. First, an initial estimation of the susceptibility map, denoted as χ₁, is predicted from the local field b using a mini-U-net f_θ:

χ₁ = f_θ(b).    (7)

Following the forward pass through the network, K steps of gradient descent optimization are performed on the data fidelity term, and the final prediction χ₂ is computed:

χ^(k+1) = χ^(k) − α ∇_χ ‖ℱ⁻¹{D · ℱ{χ^(k)}} − b‖,  χ^(0) = χ₁,  χ₂ = χ^(K),    (8)

where ∇ denotes the gradient operation, and α is the step size for the gradient descent algorithm. Subsequently, a backward pass is performed to update the network parameters θ by minimizing the loss function:

ℒ = ‖ℱ⁻¹{D · ℱ{χ₂}} − b‖₁ + ‖∇²(ℱ⁻¹{D · ℱ{χ₂}}) − ∇²b‖₁,    (9)

where ∇² denotes the Laplacian operation. The first term represents the Mean Absolute Error (MAE) loss of the physical model, and the second term denotes the MAE loss of the Laplacians calculated for the predicted and measured local field maps.
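The DFO stage (Eq. 8) can be sketched in a few lines of NumPy. This is an illustrative implementation under one stated assumption: an L2 data-fidelity term is used so that the gradient has the closed form A(Aχ − b), where A = ℱ⁻¹Dℱ is self-adjoint because D is real; an MAE fidelity term, as in Eq. 9, would require a sign-based subgradient instead.

```python
import numpy as np

def forward_model(chi, D):
    """Apply the dipole convolution b = F^{-1}{ D . F{chi} } (Eq. 1).

    D must be in the same (unshifted) FFT ordering as np.fft.fftn."""
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))

def dfo_steps(chi1, b, D, alpha=1.2, K=10):
    """K gradient-descent steps on an L2 data-fidelity term, starting from
    the network output chi1 (the DFO module, Eq. 8, under an assumed L2 norm).

    The gradient of 0.5 * ||A chi - b||^2 is A(A chi - b), since A = F^{-1} D F
    is self-adjoint when the dipole kernel D is real-valued."""
    chi = chi1.copy()
    for _ in range(K):
        residual = forward_model(chi, D) - b
        chi -= alpha * forward_model(residual, D)  # apply A to the residual
    return chi
```

Because A has zero eigenvalues on the magic cone, these steps reduce the data-fidelity cost but cannot recover the null-space components on their own; that is the role of the network's implicit prior.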

B. Network Design and Implementation
MoDIP in this study utilized a mini-U-net architecture initialized with untrained random weights. As shown in Fig. 2, a simple encoder-decoder architecture was designed with a skip connection, similar to the modified U-net in [35], but with a pooling depth of 1 and channel numbers starting from 32. This mini-U-net consists of 8 convolutional blocks and 1 concatenation operation. All convolutional operations use a kernel size of 3×3×3, a stride of 1, and a zero-padding of 1, except for the last one, which uses a kernel size of 1×1×1 to reduce the feature dimension back to 1.
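For a rough sense of scale, the parameter count of such a network can be tallied with simple arithmetic. The channel plan below is hypothetical — the text specifies only the pooling depth of 1, the 32 starting channels, the 8 convolutional blocks, and the final 1×1×1 convolution — so the resulting total is illustrative rather than the paper's exact figure.

```python
def conv3d_params(c_in, c_out, k=3):
    """Learnable parameters of one 3D convolution: k^3 weights per in/out
    channel pair, plus one bias per output channel."""
    return k**3 * c_in * c_out + c_out

# Hypothetical depth-1 encoder-decoder plan: (in_channels, out_channels, kernel)
plan = [(1, 32, 3), (32, 32, 3),            # encoder level
        (32, 64, 3), (64, 64, 3),           # bottleneck after one pooling
        (64 + 32, 32, 3), (32, 32, 3),      # decoder level after skip concat
        (32, 32, 3), (32, 1, 1)]            # final 1x1x1 conv back to 1 channel
total = sum(conv3d_params(ci, co, k) for ci, co, k in plan)
```

Under this plan the network stays in the hundreds of thousands of parameters, consistent with the reported 98% reduction relative to the multi-million-parameter depth-4 U-nets used by DIP and FINE.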
In contrast, DIP and FINE employed a heavy-weight U-net architecture with a pooling depth of 4. In the case of DIP, the initial network weights are assigned randomly, whereas FINE initializes the U-net weights with those pre-trained on 1 mm isotropic QSM and local field pairs in pure-axial head orientation. The numbers of learnable parameters and the computational costs of the MoDIP, DIP, and FINE methods are reported in Fig. 3. The evaluations are performed on two sizes of testing data: the original matrix size of 256×256×128 and a reduced size of 144×144×128. It is evident from the bar chart that MoDIP has 98% fewer network parameters, requires 33% less memory, and runs 28% faster than the DIP and FINE methods. The reconstruction time reported in Fig. 3 corresponds to running each method for 200 iterations.
The Adam optimizer was chosen for the optimizations of DIP, MoDIP, and FINE, with an initial learning rate of 5 × 10⁻⁴ and a decay factor of 0.8 every 50 iterations. To ensure reproducibility, we manually fixed the random seed for model initialization. All experiments were conducted on a computer with an Intel 12700KF CPU, 32 GB RAM, and an RTX 4090 GPU with 24 GB VRAM. After thorough tests of different hyperparameter combinations, we empirically set the step size (α) to 1.2 and the number of gradient descent steps (K) to 10, which yielded the most favourable results while preserving computational efficiency.
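The step-decay schedule described above can be expressed as a small helper. This is a sketch; the `lr0` default mirrors the initial learning rate as printed in the text (which is partially garbled in this extraction), so treat it as a placeholder rather than a confirmed value.

```python
def learning_rate(iteration, lr0=5e-4, decay=0.8, every=50):
    """Step-decay schedule: multiply the initial rate `lr0` by `decay`
    once every `every` iterations (integer division picks the decay stage)."""
    return lr0 * decay ** (iteration // every)
```

For example, the rate stays at `lr0` for iterations 0–49, drops to `0.8 * lr0` at iteration 50, and to `0.64 * lr0` at iteration 100.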

A. Ablation and Comparison Studies
To investigate the effect of the added DFO module in MoDIP, we conducted an ablation study on an in-vivo brain dataset acquired at 3T with a multi-echo GRE sequence at 1 mm isotropic resolution in pure-axial orientation, processed using the standard QSM pipeline, including phase unwrapping with the best-path method [48] and background field removal using the RESHARP method [49]. We show the intermediate results of MoDIP before DFO, referred to as χ₁, and the results after DFO, referred to as χ₂. These results were also compared with DIP and Pure-DFO (i.e., conventional optimization on the data fidelity loss with no prior regularization). As observed in Fig. 4, unlike DIP, which targets QSM solutions directly, χ₁ in MoDIP converges to an image that does not represent a typical QSM contrast but retains some characteristics of the local field input. This suggests that the mini-U-net in MoDIP transforms the local field into an interim state instead of directly attempting the significantly more challenging task of transforming it into QSM. This is consistent with the theory depicted in Fig. 1, where MoDIP's interim χ₁ output serves as an implicitly regularized starting point for the subsequent DFO process. DIP alone is unable to reach the optimal QSM solution directly from the local field in 200 iterations, while performing DFO steps alone from the measurement for extensive iterations with no prior regularization (i.e., Pure-DFO) leads to noise and error amplification, as shown in Fig. 4 (bottom row).
In Fig. 5, we compare MoDIP, DIP, and FINE. Different network inputs (Gaussian noise vs. local field) for DIP and MoDIP were also compared. These three methods were evaluated on a simulated digital brain phantom. A COSMOS map, reconstructed from 5 GRE acquisitions with different head positions (as detailed in [29]), was resampled to an anisotropic resolution of 1×1×2 mm³ to serve as the susceptibility ground truth. This map was used to generate a local field map with a tilted acquisition angle (h = [0.5, 0.5, 0.71]), as shown in Fig. 5(a), according to Eq. 1. Each method was run for 500 iterations, and the model losses (Eq. 9) are plotted in Fig. 5(b). Reconstructed QSM images at iteration numbers 10, 20, 50, 100, and 200 are shown in Fig. 5(c), with error maps of the 200-iteration results in the last column and NRMSE values at the bottom. The results in Fig. 5 demonstrate that MoDIP exhibits significantly faster convergence and achieves a substantially smaller NRMSE than DIP and FINE. MoDIP already produces a reasonable estimation of QSM with just 10 iterations, while DIP and FINE require at least 200 iterations to achieve similar performance, demonstrating the effectiveness of the added DFO module. MoDIP with Gaussian noise as the network input showed dramatically downgraded performance compared to using the local field as the input, and DIP with noise input failed to reconstruct reasonable QSM results. These findings suggest that the network's transformation task benefits from the shared image features between the local field and the susceptibility map. For the rest of the paper, DIP and MoDIP results all use the local field as input. Even though the local field in the testing dataset differs in image resolution and acquisition orientation from the training dataset, the pre-trained FINE method exhibited a clear benefit of faster convergence compared to the untrained DIP method.
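The NRMSE metric reported throughout can be computed as follows. This assumes one common convention — normalization by the L2 norm of the label — since the text does not define the normalization explicitly.

```python
import numpy as np

def nrmse(pred, label):
    """Normalized root-mean-square error: ||pred - label||_2 / ||label||_2.

    Normalizing by the label's L2 norm is one common QSM convention;
    other papers normalize by the label's range or mean instead."""
    return np.linalg.norm(pred - label) / np.linalg.norm(label)
```

Under this convention, a perfect reconstruction gives 0, and an all-zero prediction gives exactly 1.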

B. Simulated and In-vivo Pathological Brains
We compared different methods on an anisotropic (a = [1, 1, 2] mm) and tilted (h = [0.5, 0.5, 0.71]) digital human brain phantom containing a simulated spherical hemorrhagic lesion with a radius of 2 mm and a susceptibility of 0.8 ± 0.05 ppm. Results in Fig. 6 demonstrate that MoDIP outperformed all other methods visually, followed by AFTER-QSM, exhibiting excellent tissue contrast for deep grey matter, white matter, and hemorrhage, with minimal artifacts across the entire brain. Conversely, the supervised U-net failed to achieve the task, while DIP and FINE substantially suppressed QSM contrast. Susceptibility measurements of the hemorrhage are reported at the bottom of Fig. 6, with MoDIP being the closest to the ground truth. Quantitative evaluation in Table I further confirms that MoDIP achieved the minimum deviations from the ground truth in the hemorrhage region (indicated by the red box in Fig. 6), the non-hemorrhage region (i.e., the rest of the brain outside the red box), and the entire brain.
Different methods were further compared on an in-vivo subject with a cavernous hemangioma, scanned at 3T with an anisotropic spatial resolution (a = [0.88, 0.88, 2] mm) in a slightly tilted acquisition orientation (h = [0.02, −0.12, 0.99]). The local field map was reconstructed using the iQFM method [47] from the raw phase in a single step. Fig. 7 displays the reconstruction results in three orthogonal views, with red arrows pointing to noticeable artifacts. Similar to the simulated hemorrhagic case, MoDIP and AFTER-QSM produced the most visually appealing susceptibility maps, exhibiting reduced streaking artifacts near the hemangioma while maintaining excellent susceptibility contrast in healthy tissues. Moreover, MoDIP yielded the highest hemangioma susceptibility, consistent with the simulated hemorrhagic results in Fig. 6.

C. Digital Geometric Phantom
Fig. 8 investigates the generalizability of different methods on a simulated geometric phantom. The phantom consists of 800 cuboids with side lengths randomly chosen from the range [1, 64] voxels, and each cuboid was assigned a uniform susceptibility value randomly selected from [−0.02, 0.02] ppm. These geometric susceptibility sources were arbitrarily placed and overlaid in an image of size 128×128×128, with zero susceptibility as the background.
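A generator for such a phantom might look like the sketch below. How overlapping cuboids combine is our assumption (later cuboids overwrite earlier ones), since the text says only that they are "overlaid"; the function name and seeding are also illustrative choices.

```python
import numpy as np

def geometric_phantom(shape=(128, 128, 128), n_cuboids=800,
                      side_range=(1, 64), chi_range=(-0.02, 0.02), seed=0):
    """Overlay randomly placed cuboids of uniform susceptibility (in ppm)
    on a zero background, following the digital geometric phantom description."""
    rng = np.random.default_rng(seed)
    phantom = np.zeros(shape)
    for _ in range(n_cuboids):
        sides = rng.integers(side_range[0], side_range[1] + 1, size=3)
        corner = [rng.integers(0, max(1, s - l + 1))  # keep cuboid inside the volume
                  for s, l in zip(shape, sides)]
        sl = tuple(slice(c, c + l) for c, l in zip(corner, sides))
        phantom[sl] = rng.uniform(*chi_range)  # later cuboids overwrite earlier ones
    return phantom
```

The resulting image is piecewise-constant, which is exactly the sparsity structure that hand-crafted regularizers in iLSQR and MEDI are designed to exploit.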
MoDIP demonstrated performance comparable to conventional non-deep-learning methods (i.e., iLSQR and MEDI) that employ manually crafted regularizations designed to align with the sparsity characteristics of the simple geometric phantom. MoDIP outperformed all other network-based methods by a substantial margin. These results highlight MoDIP's clear distinction from data-driven approaches and its ability to perform robustly across various test datasets.

D. Overfitting and Stopping Criterion
As noted in the original DIP paper [44], excessive iterations may lead to network overfitting and degraded image quality. Therefore, stopping the network weight optimization process in time is critical. To investigate the effects of network size on overfitting and determine the stopping criterion, we conducted a comparative study using an in-vivo brain dataset. This analysis compared the outcomes of our proposed MoDIP method, which employs a mini-U-net architecture, with those of a modified variant using a U-net architecture with a pooling depth of 4. This modified variant, referred to as MoDIP-D4, resembles the U-net architecture used in DIP and FINE. We also ran Pure-DFO to illustrate the detrimental artifacts caused by overfitting. Fig. 9 shows the MoDIP, MoDIP-D4, and Pure-DFO results at iteration numbers 50, 100, 200, 400, 800, and 1000. It is evident that MoDIP-D4 in early iterations showed more suppressed susceptibility contrast than MoDIP. However, as the number of iterations increased, MoDIP-D4 showed more pronounced shadow artifacts, underscoring the importance of a suitable stopping mechanism. This observation aligns with findings from the original DIP paper. In contrast, MoDIP demonstrated faster convergence to an optimally regularized solution without introducing artifacts during the iterations. These results suggest that our proposed MoDIP approach with a mini-U-net demands less computational memory, converges more rapidly, and is less prone to overfitting, eliminating the need for early stopping. In practice, MoDIP inference can be terminated after a fixed number of iterations or when the relative difference in total loss between two consecutive iterations falls below a predetermined threshold. Our testing across various cases indicates that 200 iterations strike a good balance between reconstruction accuracy and efficiency.
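The stopping rule described above can be written as a small check. This is a sketch: the tolerance value and iteration cap are hypothetical choices illustrating the criterion, not values given in the paper (beyond the 200-iteration recommendation used as the default cap).

```python
def should_stop(losses, tol=1e-4, max_iters=200):
    """Return True when MoDIP inference should terminate.

    losses: history of total loss values, one per completed iteration.
    Stops after `max_iters` iterations, or when the relative change in loss
    between the two most recent iterations falls below `tol`."""
    if len(losses) >= max_iters:
        return True
    if len(losses) < 2 or losses[-2] == 0:
        return False  # not enough history to measure a relative change
    return abs(losses[-1] - losses[-2]) / abs(losses[-2]) < tol
```

In an inference loop, this check would be evaluated once per iteration after appending the current total loss to the history.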

V. DISCUSSION
Previous work [44] has shown that the performance of DIP, partially determined by the extent of the implicit prior regularization, is highly influenced by the selection of network architecture and the number of iterations. Solely optimizing the network weights against the model loss faces limitations in reaching the optimal solution and risks overfitting with excessive iterations, undermining the benefits of DIP. These deep and large networks also require extensive GPU memory and converge slowly, making them impractical for full-sized 3D high-resolution volumes in QSM dipole inversion.
MoDIP elevates the DIP concept by coupling it with conventional optimization to address these issues. Instead of relying on deep and large networks, we employ a shallow and small network and guide it towards the QSM solution by optimizing directly on the image, without involving the network weights. The theory behind MoDIP is that a small network can more easily and quickly converge to a suitable intermediate state for subsequent physical-model-based optimization than a deep network can converge directly to the QSM solution. In MoDIP, both the neural network updates and the DFO steps are critical in reaching the optimal solution, sharing the workload and reducing task complexity: the network focuses on prior regularization, while image optimization enhances data fidelity. This idea aligns with unrolled deep learning methods like LPCNN [40] and MoDL-QSM [39]. However, in those methods, the network regularizers are learned from the training datasets and are explicitly applied to regularize the QSM images, which makes them less generalizable than MoDIP to various testing datasets. In theory, it is possible to fine-tune the pre-trained unrolled models during inference. However, this approach would be highly computationally demanding due to the involvement of multiple large networks. Similar to FINE, fine-tuning unrolled models becomes challenging when the test sets differ substantially from the training sets.
Ablation studies revealed that the small network not only makes MoDIP more practical and efficient, with significantly lower GPU memory requirements, but also helps alleviate the risk of overfitting over iterations. Replacing the random noise network input with the local field measurement contributes significantly to MoDIP's performance, possibly because the network benefits from shared image features between the local field input and the susceptibility map output. Pre-training the network weights with paired local field and QSM datasets (FINE) showed limited improvement over untrained DIP networks and could even reduce generalizability to different testing sets.
We also compared MoDIP with conventional non-deep-learning methods as well as supervised deep-learning methods. The evaluation datasets included healthy and pathological brains and a digital geometric phantom, varying in object shape, susceptibility range, and acquisition parameters. MoDIP achieved outstanding generalizability in accurately reconstructing susceptibility maps from these diverse testing datasets. Notably, MoDIP achieved these results in under 4.5 minutes without the need for pre-training on paired datasets, demonstrating its practicality and flexibility for real-world applications.
Our work has also identified several limitations of MoDIP. For instance, MoDIP does not account for local field preprocessing steps, such as brain extraction, phase unwrapping, and background field removal. As a result, the method may be susceptible to accumulated errors from these prior steps. Future work could explore a similar approach that implements MoDIP based on the physical model between the raw phase and QSM [50]. Another limitation is that the computational cost of MoDIP is still higher than that of conventional iterative methods. Specifically, MoDIP takes nearly twice as long as iLSQR for reconstruction when using the mini-U-net starting from 32 channels. One possible way to mitigate this computational burden is to reduce the number of network channels while increasing the number of DFO steps, although this may trade off performance. Future work could explore how to balance computational efficiency and performance in MoDIP.

VI. CONCLUSION
In this work, we propose MoDIP, an unsupervised deep learning method based on the DIP concept, specifically designed to address the challenging task of 3D high-resolution QSM dipole inversion. MoDIP represents an advancement in this field, harnessing the benefits of a compact untrained network and a data fidelity optimization process. Consequently, the method reduces computational costs and demonstrates exceptional generalizability and robustness in accurately reconstructing susceptibility maps from various challenging datasets without requiring any training process. MoDIP holds great promise for practical applications in clinical settings.

Figure 1. Visual representation of QSM reconstruction using different approaches: DIP with heavy-weight networks (dark blue curve), conventional image optimization (brown curve), and the proposed MoDIP method (green curve). χ* signifies the optimal QSM solution, while χ₁ and χ₂ correspond to MoDIP's network output and its subsequent optimization result, respectively. The bottom grey plane represents the manifold where solutions satisfy the physical model exactly.

Figure 2. The overall scheme of MoDIP for QSM reconstruction. The yellow plane represents the parameter space of the network (θ), while the 3D surface illustrates the physics model cost function. An initial QSM estimate (χ₁) is produced by the network and optimized under the physics model to obtain the predicted QSM (χ₂). The network weights are updated based on the loss (ℒ) calculated on χ₂. The top left of the figure shows the mini-U-net architecture, with blocks representing intermediate feature maps and arrows indicating different operations. The number of feature maps is specified at the bottom of each block.

Figure 3. Comparisons of parameter counts, memory, and time costs for FINE, DIP, and MoDIP. Black numbers indicate resource requirements. Initial segments represent results for an image size of 144×144×128, and complete bars portray results for an image size of 256×256×128.

Figure 4. Ablation study conducted on an in-vivo human brain. MoDIP intermediate results (MoDIP (χ₁)) and final outputs (MoDIP (χ₂)) were visually compared to DIP and Pure-DFO at 10, 20, 50, 100, and 200 iterations. The local field map was used as the input for all methods.

Figure 5. a) Oblique local field simulation procedure with the given voxel size a = [1, 1, 2] mm and acquisition orientation h = [0.5, 0.5, 0.71]. b) Iteration loss curves for the MoDIP, DIP, and FINE methods with different network inputs. c) QSM images from iterations 10, 20, 50, 100, and 200. Error maps computed between the label and the 200th-iteration results for each method are shown in the final column.

Figure 6. Comparison of various methods on a simulated brain (a = [1, 1, 2] mm, h = [0.5, 0.5, 0.7071]) depicting a spherical hemorrhage source (highlighted within a red box). Results and error maps are shown in both axial and coronal views. Hemorrhage areas are magnified with window levels adjusted for best visualization. The means and standard deviations of the hemorrhage susceptibility measurements are reported at the bottom.

Figure 7. QSM results of an in-vivo human brain (a = [0.875, 0.875, 2] mm, h = [0.02, −0.12, 0.99]) with a cavernous hemangioma using different methods. Red arrows indicate apparent artifacts, while green arrows highlight suppressed susceptibility tissue contrast. The hemangioma is delineated and measured using the blue contour, with a zoomed-in view and adjusted window level presented in the bottom row. The means and standard deviations of the hemangioma's susceptibility measurements are reported below the images.

Figure 8. QSM reconstruction results of a digital geometric phantom containing 80 cuboids using different methods. Error maps and NRMSE values are shown below.

TABLE I. NRMSE values of QSM results for a hemorrhagic brain. The hemorrhage area corresponds to the red box in Fig. 6.