Harmonic field extension for QSM with reduced spatial coverage using physics-informed generative adversarial network

Quantitative susceptibility mapping (QSM) is frequently employed to investigate brain iron related to brain development and disease within the deep gray matter (DGM). Nonetheless, the acquisition of whole-brain QSM data is time-intensive. An alternative approach is to focus the QSM acquisition on areas of interest such as the DGM by reducing the field-of-view (FOV), thereby shortening the scan time.


Introduction
Magnetic susceptibility is a tissue property that describes the extent of magnetization of a material in an external magnetic field. As such, it can serve as an indirect measure of disease biomarkers such as iron, myelin, calcium, and hemorrhage. Recently, a promising technique called quantitative susceptibility mapping (QSM) has emerged, which allows for the estimation of the spatial distribution of magnetic susceptibility in brain tissues from MRI phase measurements (Haacke et al., 2015; Wang and Liu, 2015). Due to its ability to reveal disease-related changes in tissue iron, QSM of the iron-rich deep grey matter (DGM) has been employed in multiple neurodegenerative studies, including Parkinson's, Alzheimer's, Huntington's disease, and healthy aging (Acosta-Cabronero et al., 2013; Bilgic et al., 2012; Langkammer et al., 2016).
Although the majority of QSM studies concentrate on the iron-rich DGM region, whole-brain coverage is generally necessary during acquisition to achieve accurate QSM results. However, acquiring the whole brain at a moderate to high resolution for QSM leads to long acquisition times, which in turn increases sensitivity to patient-induced motion effects. Various conventional approaches, such as parallel imaging, compressed sensing, echo-planar imaging, multi-contrast single sequences and ultra-fast acquisition methods, are used to shorten QSM scan time (Bilgic et al., 2015; De et al., 2020; Langkammer et al., 2015; Lustig et al., 2007; Sun et al., 2020; Sun and Wilman, 2015). Another way to accelerate acquisition without aliasing effects is to limit the field-of-view (FOV) by reducing the number of slices to cover only the area of interest, such as the iron-rich DGM. Limiting the FOV can also increase the spatial resolution for the same scan time, and it can be combined with any other acceleration technique (Elkady et al., 2016).
Despite its apparent simplicity, reducing the FOV leads to significant mis-estimation of susceptibility values in QSM, primarily due to two critical steps, background field removal (BFR) and dipole inversion, both of which perform poorly in the context of limited spatial coverage. Previous studies have demonstrated that BFR and dipole inversion introduce similar levels of error. While there are several studies on dipole inversion in limited FOV acquisitions (Karsa et al., 2019; Zhu et al., 2022), to the best of our knowledge, there has been no research on improving the performance of BFR in this scenario.
In QSM reconstruction, eliminating the background field, which originates from susceptibility sources outside the region of interest (ROI), is a critical pre-processing step to prevent degradation of the calculated susceptibility maps (Schweser et al., 2016). Several techniques have been introduced to address this issue. These methodologies exploit fundamental physics principles, representing the background field as a harmonic function within the ROI while expressing internal tissue-induced fields as non-harmonic functions. Methods such as SHARP (Schweser et al., 2011), V-SHARP (Wu et al., 2012), LBV (Zhou et al., 2014) and RESHARP (Sun and Wilman, 2014) have been proposed. Each comes with its own underlying assumptions and limitations, primarily around ROI boundaries (Schweser et al., 2017). SHARP and its derivative, V-SHARP, rely on the spherical mean value theorem but struggle to accurately determine internal field values near boundaries due to convolutional artifacts, with accuracy decreasing closer to the boundary. The LBV approach, which solves an elliptic partial differential equation under a boundary condition assumption, is likewise inaccurate near the boundary when that assumption is violated. RESHARP, which utilizes an objective function, fares better near boundaries but still exhibits errors because the boundary region is excluded from the objective function. In scenarios with a limited field of view (FOV), these conventional techniques exhibit significant distortion in the area of interest as the distance between the boundary and the corrected field decreases.
Recently, deep learning-based approaches have shown promising results in the task of extending images beyond their boundaries, a technique known as out-painting (Wang et al., 2019; Yang et al., 2019; Zhang et al., 2020). Inspired by this idea, we propose a method for extending the harmonic background field in a limited FOV, which provides additional phase information and increases the distance between the corrected field and the ROI boundary. A common approach to image extrapolation in natural image processing is the generative adversarial network (GAN) (Cheng et al., 2022; Nair et al., 2022; Van Hoorick, 2019). However, while an extrapolated RGB image may take on any visually plausible color, an extended background field must adhere to Laplace's equation, since it is required to be a harmonic function.
To address this challenge, we propose a harmonic background field extension method based on a physics-informed GAN that embeds the physical constraint into an additional loss term. Our method outperforms conventional BFR algorithms alone in a limited FOV and achieves performance comparable to the corrected field of the full FOV. Moreover, our approach overcomes the severe susceptibility underestimation of QSM with small spatial coverage by avoiding the error propagated from BFR.
For data preprocessing, brain masks were generated from magnitude images using FSL BET (Smith, 2002). Within the mask, the phase image was spatially unwrapped by Laplacian phase unwrapping (Li et al., 2011) and then processed by V-SHARP to remove the background phase.
To generate the input data for our algorithm, we utilized the local fields of 12 healthy subjects. Background fields were generated by performing a forward calculation of geometric susceptibility sources randomly placed outside the brain ROI. For each synthetic susceptibility source, a susceptibility value was randomly drawn from the range −200 ppb to 800 ppb (Zhu et al., 2022). The total fields used as training input were then obtained by adding the generated background field to the local field. Subsequently, we truncated the total field along the z-axis, limiting it to an axial slab approximately 30 mm thick. This slab was determined manually for each subject using segmented masks from FSL FIRST (Patenaude et al., 2011), covering the minimum range necessary to capture the entire DGM, encompassing structures such as the globus pallidus (GP), putamen (PU) and caudate nucleus (CN) (Fig. 1). To prepare the input for network training, we zero-padded the truncated fields above and below in the z dimension, extending the FOV in z to 144 mm. We then randomly cropped the padded total fields into 64 × 64 × 64 3D patches for network training. Given that the truncated slab represented only a minor portion of the brain, ensuring its representation in our training patches was vital. We identified the bounds of the truncated data to define the starting and ending points of each patch, then chose a stride in each dimension and generated patches along these strides within the predetermined starting and ending points. The total number of training patches was 9048, with a mini-batch size of 8.
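The forward calculation used to synthesize background fields from external susceptibility sources can be illustrated with the standard k-space dipole kernel. The following is a minimal numpy sketch under assumed conventions (B0 along the first array axis, dimensionless field in units of the susceptibility input), not the authors' exact implementation:

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel in k-space, with B0 assumed along the first (z) axis."""
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 1.0 / 3.0 - kz**2 / k2
    d[0, 0, 0] = 0.0  # DC term is undefined; set to zero by convention
    return d

def forward_field(chi):
    """Field perturbation induced by a susceptibility distribution chi."""
    return np.real(np.fft.ifftn(dipole_kernel(chi.shape) * np.fft.fftn(chi)))

# Synthetic background field: a point-like source placed outside the brain ROI
chi = np.zeros((16, 16, 16))
chi[1, 8, 8] = 0.8  # e.g. 800 ppb, the upper bound of the range used above
bkg = forward_field(chi)
```

Because the source lies outside the ROI, the resulting field is harmonic inside it, which is exactly the property the proposed network is trained to preserve.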

Network architecture & implementation detail
The GAN architecture (Goodfellow et al., 2020) used for harmonic field extension is depicted in Fig. 2; it was originally proposed for generative image inpainting (Yu et al., 2018). However, we modified the network structure to fit our 3D requirements. Our generator follows a 3D coarse-to-fine structure. Initially, the coarse network takes the masked total field and generates a rough prediction of the background field outside the input field. Subsequently, the refinement network takes the coarse prediction and produces the final prediction of the extended background field. The refinement network benefits from a more complete field view than the original input with its missing regions, allowing improved feature representation compared to a single coarse generator. Finally, the discriminator, trained to distinguish real fields from generated ones, is used in the adversarial training of the generative model to improve the quality of the generated fields by ensuring that they resemble real fields.
The proposed network's generator employs a 3D U-Net architecture similar to (Ronneberger et al., 2015). The encoder path uses convolutional kernels of size 3 × 3 × 3 with feature maps of 16, 32, 64, 128, and 256, while the decoder path uses the reverse feature maps followed by a final output prediction field. We applied instance normalization and a Leaky ReLU activation function sequentially over all convolutional layers, and we used max pooling with a filter size of 2 × 2 × 2 and a stride of 2.
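The per-layer normalization and activation described above can be sketched in numpy as follows. This is an illustrative sketch only: the channel-first (C, D, H, W) layout and the leaky slope of 0.01 are assumptions, as the paper does not state them.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each channel of a (C, D, H, W) volume over its spatial axes,
    independently per sample (instance normalization)."""
    mean = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for positive inputs, a small slope for negatives."""
    return np.where(x >= 0, x, alpha * x)

# One post-convolution stage: normalize, then activate
feat = np.random.randn(4, 8, 8, 8) * 3 + 5   # dummy feature maps
out = leaky_relu(instance_norm(feat))
```

Instance normalization (rather than batch normalization) keeps the statistics of each field patch independent of the mini-batch, which suits the small batch size of 8 used here.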
The discriminator part of the proposed network employs a 3D patch-based convolutional neural network. The network consists of convolutional kernels of size 4 × 4 × 4 with feature maps of 8, 16, 32 and 64. Each convolutional layer is followed by a Leaky ReLU activation function. Finally, we flatten the output of the final convolutional layer and pass it through a fully connected layer to obtain the discriminator's final output.
The loss function of the proposed physics-informed generative adversarial network comprises several components: an adversarial loss (L_adv), an L1 loss for the extended field (L_l1) and a physics-informed loss (L_lapla).
The objective function of the GAN (generator G, discriminator D) is defined as:

$\min_G \max_D \; \mathbb{E}_{x \sim P_r}[\log D(x)] + \mathbb{E}_{\tilde{x} \sim P_g}[\log(1 - D(\tilde{x}))]$ (1)

where x denotes samples drawn from the real data distribution P_r, and P_g represents the model distribution, implicitly defined by $\tilde{x} = G(z)$. In our proposed method, the input z to the generator is the output of the coarse network. To overcome the mode collapse commonly observed in traditional GANs, we adopted the Wasserstein GAN (WGAN) (Arjovsky et al., 2017):

$\min_G \max_{D \in \mathcal{D}} \; \mathbb{E}_{x \sim P_r}[D(x)] - \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})]$ (2)

where $\mathcal{D}$ is the set of 1-Lipschitz functions. A gradient penalty (GP) can further be added to enforce the Lipschitz constraint of the WGAN objective (Gulrajani et al., 2017):

$L_{adv} = \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] - \mathbb{E}_{x \sim P_r}[D(x)] + \lambda_{gp}\, \mathbb{E}_{\hat{x}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big]$ (3)

where $\lambda_{gp}$ is the weight of the gradient penalty and $\hat{x}$ is sampled uniformly along straight lines between pairs of real and generated samples.
To ensure the consistency of the generated output $\tilde{x}$ with the ground-truth data x, which comprises the truncated local field and an added full background field, we included an L1 loss, defined as follows:

$L_{l1} = \| m \odot (\tilde{x} - x) \|_1$ (4)

where m is a binary mask representing the full FOV and $\odot$ denotes pixel-wise multiplication. By minimizing this loss term, the model is encouraged to preserve data that aligns with the given input within the limited FOV and to accurately extrapolate the background field beyond it.
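As a hedged sketch, the masked L1 term can be computed as below; the mean reduction over the mask volume is an assumption about the exact normalization, which the paper does not state:

```python
import numpy as np

def masked_l1(pred, target, mask):
    """L1 loss restricted to the full-FOV mask (pixel-wise multiplication),
    averaged over the number of voxels inside the mask."""
    return np.abs(mask * (pred - target)).sum() / max(mask.sum(), 1)
```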
Our objective is not only to generate visually plausible fields, but also to ensure that the generated background field adheres to Laplace's equation. This constraint guarantees that the generated field is a harmonic function within the brain ROI $\Omega$:

$\nabla^2 H_{bkg}(r) = 0, \quad r \in \Omega$ (5)

where $\nabla^2$ is the Laplacian operator and $H_{bkg}$ is the background field. We implemented this physical constraint using the Fourier convolution theorem with a 3D 7-point discrete Laplacian kernel L:

$L_{lapla} = \big\| \mathrm{FT}^{-1}\big( \mathrm{FT}(L) \cdot \mathrm{FT}(H_{bkg}) \big) \big\|_1$ (6)

The 3 × 3 × 3 7-point discrete Laplacian kernel takes the value 1 at each of the six face-adjacent neighbors of the center voxel, −6 at the center, and 0 elsewhere (first plane: [0 0 0; 0 1 0; 0 0 0], middle plane: [0 1 0; 1 −6 1; 0 1 0], last plane: [0 0 0; 0 1 0; 0 0 0]). Here, FT⁻¹ and FT represent the inverse Fourier transform and Fourier transform, respectively.
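The Fourier-domain evaluation of the 7-point Laplacian described above can be sketched in numpy as follows; reducing the residual with an L1 norm over the ROI is an assumption about the exact form of the loss:

```python
import numpy as np

def discrete_laplacian(field):
    """Apply the 3D 7-point Laplacian via the Fourier convolution theorem."""
    k = np.zeros_like(field)
    k[0, 0, 0] = -6.0
    for ax in range(3):
        for shift in (1, -1):
            idx = [0, 0, 0]
            idx[ax] = shift          # index -1 wraps to the last voxel,
            k[tuple(idx)] = 1.0      # placing the kernel circularly
    return np.real(np.fft.ifftn(np.fft.fftn(k) * np.fft.fftn(field)))

def laplacian_loss(bkg_field, mask):
    """Mean |Laplacian| inside the ROI mask; zero for a harmonic field."""
    lap = discrete_laplacian(bkg_field)
    return np.abs(mask * lap).sum() / max(mask.sum(), 1)
```

A quick sanity check: a constant field has zero Laplacian everywhere, and a quadratic field z² has a discrete Laplacian of exactly 2 away from the periodic wrap-around boundary.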
Finally, we trained our coarse network with the reconstruction loss L_l1 along with the physical loss L_lapla. Our refinement network adds the adversarial loss L_adv from the discriminator, which measures the Wasserstein-1 distance between the original and the generated harmonic field. The overall loss functions for the coarse and refinement networks are:

$L_{coarse} = \lambda_{L1} L_{l1} + \lambda_{lapla} L_{lapla}$ (7)

$L_{refine} = \lambda_{L1} L_{l1} + \lambda_{lapla} L_{lapla} + \lambda_{adv} L_{adv}$ (8)

where $\lambda_{L1}$, $\lambda_{lapla}$ and $\lambda_{adv}$ are weights for the different loss terms. We determined the weights empirically as $\lambda_{L1} = 1$, $\lambda_{lapla} = 0.1$, $\lambda_{gp} = 10$ and $\lambda_{adv} = 0.01$. We used the Adam optimizer to minimize the loss function, with a learning rate of $10^{-3}$.
We trained each generator in our network separately rather than end-to-end. First, we trained the coarse generator by feeding all limited-FOV total field data and their corresponding ground-truth data to the network. Next, we trained the refinement network using the output of the coarse network and the corresponding ground-truth data.

Healthy in-vivo data
To assess the effectiveness of our algorithm in real clinical scenarios, we prospectively acquired data from a healthy subject with a limited FOV for evaluation purposes.

Patient in-vivo data
To further investigate the efficacy of our algorithm, we utilized Parkinson's disease patient test data. These data were acquired in an oblique-coronal orientation perpendicular to the midbrain using a 3D multi-echo GRE sequence on a 3T GE scanner (GE Healthcare, USA). The acquisition parameters were as follows: FOV = 192 × 192 × 38 mm³, flip angle = 20°, acquisition matrix size = 384 × 384 × 38, reconstruction matrix size = 512 × 512 × 38, TR = ms, TE = 14.0/25.7/37.5 ms, spatial resolution = 0.375 × 0.375 × 1 mm³ and total scan time of 4 min and 2 s. Data were resampled to 1 mm isotropic resolution to match the input of the network.
For data preprocessing of all evaluation datasets, we first extracted a brain mask from the magnitude image using BET. We then performed unwrapping of the phase images using Laplacian-based unwrapping.

Evaluation method

Visual analysis
To compare the reconstructed local field and QSM maps, we generated error maps with respect to the full FOV. Before proceeding with the BFR step, we replaced the total field portion of the extended output with the true measurements, preserving the truncated total field values of the input while keeping the extended field intact. These maps were generated using three different BFR methods (V-SHARP, RESHARP and LBV), and the QSM was produced using the iLSQR method (Li et al., 2015). Since we did not have paired full FOV data for the clinical patient data, we additionally generated susceptibility map weighted imaging (SMWI) (Gho et al., 2014; Nam et al., 2017) to qualitatively compare the image contrast in nigrosome 1. We generated the SMWI using a multiplication number (m) of 4 and a susceptibility threshold value (χ_th) of 0.75.
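A minimal sketch of how such an SMWI weighting might be computed is shown below. The linear paramagnetic weighting mask is an assumption based on the cited SMWI literature, not a verbatim reproduction of the authors' pipeline; the threshold is taken in the same units as the QSM map:

```python
import numpy as np

def smwi(magnitude, qsm, chi_th=0.75, m=4):
    """Susceptibility map weighted imaging with a paramagnetic weighting mask.

    The mask is 1 where qsm <= 0, 0 where qsm >= chi_th, and ramps down
    linearly in between; it is raised to the power m and multiplied into
    the magnitude image, suppressing strongly paramagnetic voxels.
    """
    w = np.clip((chi_th - qsm) / chi_th, 0.0, 1.0)
    return magnitude * w**m
```

With m = 4 and χ_th = 0.75 as above, iron-rich structures are darkened, which enhances the contrast of nigrosome 1 against the surrounding substantia nigra.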

Statistical analysis
We performed a mean susceptibility value analysis for each DGM structure (globus pallidus (GP), putamen (PU), and caudate nucleus (CN)) to determine how much susceptibility underestimation our proposed approach prevents in QSM. In addition, we calculated quantitative metrics, namely the normalized root mean squared error (NRMSE), high-frequency error norm (HFEN) and structural similarity index (SSIM), using the full FOV QSM map as a reference.
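Two of the reported metrics can be sketched as follows. The percent scaling of NRMSE and the Gaussian width used in the HFEN Laplacian-of-Gaussian filter are assumptions, as the paper does not state these conventions:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def nrmse(pred, ref):
    """Normalized root mean squared error, in percent of the reference norm."""
    return 100.0 * np.linalg.norm(pred - ref) / np.linalg.norm(ref)

def hfen(pred, ref, sigma=1.5):
    """High-frequency error norm: NRMSE between Laplacian-of-Gaussian
    filtered versions of the two maps (sigma is an assumed filter width)."""
    lp, lr = gaussian_laplace(pred, sigma), gaussian_laplace(ref, sigma)
    return 100.0 * np.linalg.norm(lp - lr) / np.linalg.norm(lr)
```

NRMSE captures overall error, while HFEN emphasizes edges and fine structure, which is why the two metrics together track both bulk susceptibility bias and boundary artifacts.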

Network output
Fig. 3 presents the network output in coronal and sagittal views for two simulation test datasets. Our proposed method generally follows the trend of the ground truth's harmonic background field, resulting in a more accurate extended harmonic background field, although it may exhibit slight differences in finer details. Furthermore, compared to models without the physics-informed loss (Laplacian loss), discontinuity at the boundary between the input and the extended field was reduced. A comparison of the local field and QSM results with and without the Laplacian loss is included in supplementary information Fig. S1, demonstrating improved results with the Laplacian loss. This suggests that discontinuity at the boundary during field extension can diminish the performance of the BFR.

Simulation results
Fig. 4 demonstrates the local field results using the V-SHARP, RESHARP and LBV methods and their relative error maps for different FOV coverages. Considering that full coverage was a 144 mm thick axial slab, 10 % truncation corresponds to 15 mm, 20 % to 30 mm, and 30 % to 45 mm. The relative errors between the limited FOV local field results and the full brain results are provided for each method. After the harmonic field was extended, we applied each of the three BFR methods to obtain the local field results. At every limited FOV range, there is a notable performance improvement when the proposed method is applied before the conventional methods, compared to using the three methods alone. The improvement is evident even at 10 % coverage, where the error was largest for all methods; here, the proposed method markedly enhanced performance by correcting nearly all artifacts arising from BFR in the limited FOV. The network outputs of the proposed method for each limited FOV coverage are included in supplementary information Fig. S2. Additionally, a comparison of the coarse and refinement network outputs is provided in supplementary information Fig. S3, and a comparison with a deep learning based BFR method in supplementary information Fig. S4.
Interestingly, from 30 % coverage onwards, the difference between limited FOV and full FOV was reduced for all three methods. At 10 % coverage, however, V-SHARP and RESHARP exhibited a noticeable performance decrease, although they showed results comparable to full coverage as coverage increased toward 30 %. This suggests that these two methods are more vulnerable to boundary artifacts than LBV. Therefore, when the influence of boundary artifacts is reduced by the proposed method, the performance improvement for V-SHARP and RESHARP is larger than that for LBV. It is important to note that errors arising during BFR in the limited FOV influenced the final QSM outcomes. Our proposed method consistently outperforms across all limited FOV coverages. For further detail, the QSM results for various limited FOV coverages are presented in supplementary information Fig. S5. In Fig. 5, NRMSE and HFEN are compared for each slice of the simulation test data limited to a 30 mm axial slab, across the three BFR methods. To standardize the comparison, an erosion of 3 voxels from the edges was applied. In terms of both NRMSE and HFEN, the proposed method showed better results for all three methods.
All three methods displayed a sharp V-shaped pattern, with NRMSE and HFEN values decreasing from both edge slices toward the center, reflecting the impact of artifacts from the boundary region. However, our proposed method demonstrated consistent performance for slices both near the boundary and at the center, underlining its robustness against boundary artifacts. Furthermore, in line with the findings from Fig. 4, these quantitative metrics also reveal that the V-SHARP and RESHARP methods are more sensitive to boundary artifacts than LBV. Fig. 6(A) presents axial and coronal views of the QSM results obtained by applying the iLSQR method to the local field results, derived by applying the three BFR methods to the simulation test data within a 30 mm thick axial slab. Notably, QSMs from all three reconstructions reveal improved estimation of susceptibility values when the proposed method was employed, compared to the results from the conventional BFR methods alone. This improvement is particularly noticeable in the DGM region, characterized by high susceptibility contrast.
The mean susceptibility and standard deviation across the QSM from the three methods are calculated in the DGM ROIs in Fig. 6(B). Results from our proposed method showed the smallest errors compared to the QSM with full coverage, thereby demonstrating the superior accuracy of the proposed method in correcting artifacts originating from BFR within a limited FOV.
Table 1 summarizes the quantitative metrics (NRMSE, HFEN, and SSIM) for the QSM obtained from the three BFR methods within a limited FOV. Our proposed method achieved the lowest NRMSE and HFEN values, as well as the highest SSIM value, across all three results. These findings suggest that our method exhibits superior performance according to the widely used evaluation criteria for QSM within a limited FOV.

Healthy in-vivo results
In Fig. 7, we demonstrate the applicability of our proposed method, showing local field and corresponding QSM results for data acquired prospectively with a limited FOV. Susceptibility underestimation in the QSM images and errors resulting from BFR were present for all three BFR methods, reflecting the patterns observed in the retrospectively truncated data. Furthermore, the enhancement of the local field and QSM results when our proposed method was applied closely resembles the improvement seen in the retrospectively truncated datasets. This illustrates the feasibility and effectiveness of our proposed method in real clinical environments. The network outputs for the prospectively acquired limited FOV data are presented in supplementary information Fig. S6.

Patient in-vivo results
For patient data, multi-echo SMWI acquired in the oblique-coronal plane with a limited FOV was used to better visualize the loss of nigrosome 1 signal, a key imaging biomarker in Parkinson's disease (PD). For these patient data, we employed V-SHARP for BFR, which showed the largest error on limited FOV data, and the iLSQR method for QSM. When we applied the proposed method to these in-vivo Parkinson's disease patient images, we observed a correction of the susceptibility underestimation in the QSM images (Fig. 8). Moreover, in the final SMWI images derived from this QSM, we noted increased contrast in areas such as the red nucleus (RN) (blue arrow) and the substantia nigra (SN) (yellow arrow).

Discussion
In this study, we proposed a novel solution to the challenge of BFR in limited FOV settings for QSM, leveraging a harmonic background field extension method grounded in a physics-informed GAN. Our method outperforms conventional BFR algorithms, with notable enhancement specifically in more limited FOV scenarios. Our proposed methodology also effectively addresses a problem commonly observed in QSM with small spatial coverage, the underestimation of susceptibility, by preventing the propagation of errors from the BFR process. In summary, our method offers a robust and efficient solution for reconstructing QSM in a limited FOV, which substantially shortens the scan time.
Despite the promising results demonstrated by this method, there are several limitations. Firstly, as seen in Fig. 5, while the proposed method displayed markedly improved performance compared to conventional algorithms alone on limited FOV data, it still exhibits a slight 'V'-shaped pattern of errors near the boundary slices compared to the central slices. This can be seen as a residual BFR error due to some level of inconsistency at the edges where the field is extended from the input data. This issue likely arises because we insert the total field, a combination of the background field and local field, rather than just the background field, and subsequently extend the background field. In a limited FOV situation, the BFR process may not succeed, making it impossible to extract the pure background field; consequently, the only option is to extend the background field from the total field. This supports our decision to use a deep learning-based approach instead of a polynomial expansion algorithm for extending the harmonic field. Although a polynomial expansion method has been proposed to extend the background field of conventional QSM using a Taylor expansion (Topfer et al., 2015), it assumes that BFR has been successfully performed beforehand and then extends the polynomial from a pure harmonic background field. As mentioned, in a limited FOV, successful BFR cannot be guaranteed beforehand, so this underlying assumption can be invalid. In essence, we are compelled to input the total field and extend the background field from there.
As depicted in Fig. 6(B), the correction of the underestimation of susceptibility values was apparent when utilizing our proposed method. However, some discrepancy remained compared to the full FOV results. This difference can be attributed to errors in the dipole inversion stemming from truncation of the dipole field, which extends beyond its physical susceptibility source. To investigate the effects of dipole inversion under a limited FOV more deeply, we applied our method across various FOV ranges and compared the reconstructed QSM results to the full FOV data slice by slice. All three methods showed a V-shaped accuracy pattern, with better results moving from the edges to the center. The 10 % limited FOV had overlapping errors at its boundaries, resulting in lower accuracy compared to the 20 % and 30 % cases. For the latter two, accuracy stabilized about 5 mm from the boundary despite the different FOV coverages. Details of this comparative analysis can be found in supplementary information Fig. S7. To address this issue, future research could investigate comprehensive methodologies, such as combining a deep learning based dipole inversion method for limited FOV (Zhu et al., 2022) with our result, to jointly consider the effects of BFR and dipole inversion in a limited FOV.
In our study, we addressed artifacts in conventional BFR methods only within the limited FOV. Recognizing that these methods rely on the separability of harmonic and non-harmonic fields, our method of extending a pure harmonic background effectively reduced the related artifacts. However, while our current approach synthesizes the harmonic background using basic forward calculations from geometric susceptibility sources, future research could explore more realistic methods that better mimic susceptibility patterns from areas such as the sinuses or ear cavities.
Despite the demonstrated performance of our model, it shares a common vulnerability with other CNNs: sensitivity to variations in resolution. It performs well with data that matches the resolution of the training data, but its accuracy might degrade at different resolutions. To overcome this issue, it might be beneficial to retrain the model with data at the desired resolution or to adopt other approaches (Oh et al., 2022; Xiong et al., 2023) designed to be more robust to changes in image resolution. For network training, we opted to use retrospectively truncated data from the full FOV rather than data directly acquired with a limited FOV. Even when acquisition parameters remain constant and the same patient is scanned, variations arise between slices solely due to changes in the FOV. Moreover, when data are collected within a restricted FOV, there is a pronounced reduction in the signal-to-noise ratio (SNR) at the peripheries of the slices. To guarantee high-quality training data, we therefore employed data retrospectively truncated from the full FOV. Additionally, when acquiring an actual limited FOV, an important consideration is the need to account for non-local effects caused by unacquired adjacent slices. Incorporating these non-local effects into the training data was challenging, so our method primarily focused on the ideal limited FOV resulting from truncation and the influence of the harmonic field. Nevertheless, when evaluating performance on data prospectively acquired within a limited FOV to assess our methodology's real-world applicability, the outcomes demonstrated a performance enhancement nearly identical to that observed with the retrospectively truncated data. This affirms the efficacy of our approach, as illustrated in Fig. 7.

Conclusion
We proposed a novel method utilizing a physics-informed GAN for harmonic background field extension, effectively resolving the issue of BFR in limited FOV situations. This approach surpasses conventional algorithms, significantly improving performance in limited FOV cases and effectively correcting susceptibility underestimation in QSM. Thus, our methodology offers an efficient and robust solution for QSM reconstruction in a limited FOV, which can lead to a substantial reduction in scan time.

Fig. 1 .
Fig. 1. Depiction of the DGM ROI on an in-vivo subject, which includes the caudate nucleus (CN), putamen (PU), and globus pallidus (GP) (A). The coronal and sagittal views of the in-vivo subject's QSM are illustrated in (B). The location of truncation was manually adjusted for each individual, with the DGM ROI (A) as a reference.

Fig. 2 .
Fig. 2. (A) Schematic of the proposed method, which includes a dual-generator and single-discriminator structure. (B) The generator employs a 3D U-Net architecture. (C) The discriminator, designed with 4 convolution layers, operates with a stride of 2. The number of channels for each layer is indicated below.

Fig. 3 .
Fig. 3. Qualitative comparison of the network output for two simulation test datasets. (a) Network input. (b) Ground truth obtained by adding the masked total field and the remaining synthetic background field. (c) Network output without the Laplacian loss. (d) Result generated by the proposed method. The full brain mask was applied for the network outputs (c) and (d). With the Laplacian loss, as opposed to without it, the field extends continuously at the boundary between the input and the extended field, as indicated by the blue arrow in the yellow box.

Fig. 4 .
Fig. 4. Local field results for simulation test data truncated to a limited FOV ranging from 10 % to 30 %, considering full brain coverage as 100 %, for three methods: V-SHARP, RESHARP, and LBV. Numbers in parentheses indicate the percentage of the full FOV. (b), (d), (f) show the local field results and error maps for the conventional methods. (c), (e), (g) depict the local field results and error maps when the proposed method is applied. The corresponding NRMSEs relative to full brain coverage are noted below each result.

Fig. 5 .
Fig. 5. Quantitative performance metrics, NRMSE and HFEN, for the three BFR methods on each slice of simulation test data within the limited FOV. (A) depicts the limited FOV data, truncated to a 30 mm thick axial slab. (B) and (C) show the NRMSE and HFEN values, respectively, comparing each slice's result with the full FOV results. Across all slices and methods, the improved performance of the proposed method is evident.

Fig. 6 .
Fig. 6. (A) QSM and respective error maps for the three different BFR methods. (B) Mean and standard deviation of susceptibility values in the DGM ROIs (caudate nucleus (CN), putamen (PU), and globus pallidus (GP)). Our proposed method demonstrates superior performance in correcting susceptibility underestimation in QSM with limited FOV, especially within the DGM ROIs.

Fig. 7 .
Fig. 7. Local field and corresponding QSM results obtained from prospectively acquired limited FOV in-vivo data. Upon employing our proposed method, visual improvements were observed across all three BFR methods.

Table 1
Quantitative performance metrics (NRMSE, HFEN, and SSIM) of QSM obtained from the three different BFR methods.