Plug-and-Play Latent Feature Editing for Orientation-Adaptive Quantitative Susceptibility Mapping Neural Networks

Quantitative susceptibility mapping (QSM) is a post-processing technique for deriving the tissue magnetic susceptibility distribution from MRI phase measurements. Deep learning (DL) algorithms hold great potential for solving the ill-posed QSM reconstruction problem. However, a significant challenge facing current DL-QSM approaches is their limited adaptability to magnetic dipole field orientation variations during training and testing. In this work, we propose a novel Orientation-Adaptive Latent Feature Editing (OA-LFE) module to learn the encoding of acquisition orientation vectors and seamlessly integrate them into the latent features of deep networks. Importantly, it can be directly plugged (Plug-and-Play, PnP) into various existing DL-QSM architectures, enabling reconstructions of QSM from arbitrary magnetic dipole orientations. Its effectiveness is demonstrated by combining the OA-LFE module with our previously proposed phase-to-susceptibility single-step instant QSM (iQSM) network, which was initially tailored for pure-axial acquisitions. The proposed OA-LFE-empowered iQSM, which we refer to as iQSM+, is trained in a self-supervised manner on a specially designed simulated brain dataset. Comprehensive experiments are conducted on simulated and in vivo human brain datasets, encompassing subjects ranging from healthy individuals to those with pathological conditions. These experiments involve various MRI platforms (3T and 7T) and compare our proposed iQSM+ against several established QSM reconstruction frameworks, including the original iQSM. iQSM+ yields QSM images with significantly improved accuracy and fewer artifacts, surpassing other state-of-the-art DL-QSM algorithms.

Deep learning (DL) methods are emerging as alternatives to traditional methods (Jung et al., 2022), with most focusing on the dipole inversion step (Yoon et al., 2018; Bollmann et al., 2019b; Gao et al., 2021; Feng et al., 2021; Jung et al., 2020; Lai et al.; Chen et al., 2020; Polak et al., 2020; Xiong et al., 2023a; Oh et al., 2022; Xiong et al., 2023b), achieving significantly better results than traditional iterative algorithms in much shorter reconstruction times. QSMnet (Yoon et al., 2018) first proposed to train a U-net (Ronneberger et al.) for QSM dipole inversion by using in vivo local field maps as training inputs and COSMOS QSM (Liu et al., 2009) images as training labels. This training scheme (learning the mapping from in vivo field maps to QSM labels reconstructed using traditional algorithms) was further improved by QSMnet+ (Jung et al., 2020), QSMGAN (Chen et al., 2020), VaNDI (Polak et al., 2020), LPCNN (Lai et al.), and MoDL-QSM (Feng et al., 2021) via training data augmentation and more advanced model designs (e.g., GANs (Goodfellow et al., 2014; Lei et al., 2020) or unrolled networks (Yang et al., 2016; Zhang and Ghanem, 2018; Aggarwal et al., 2018)). AutoQSM (Wei et al., 2019) also successfully trained a U-net for QSM reconstruction from total field maps within this scheme. Alternatively, DeepQSM (Bollmann et al., 2019b) and xQSM (Gao et al., 2021) chose to train deep neural networks (i.e., a U-net and an Octave U-net, respectively) on purely synthetic data composed of simple geometric shapes and on simulated human brain data generated with the dipole forward model (Schweser et al., 2011), where the training inputs and labels satisfy the underlying dipole convolution model. One method (Oh et al., 2022) used adaptive instance normalization to allow resolution-agnostic reconstruction, while AFTER-QSM (Xiong et al., 2023a) proposed to handle the local field from oblique and anisotropic acquisitions by adding affine transformations into the reconstruction
pipelines. There were also some methods designed for phase unwrapping, e.g., PHU-Net (Zhou et al., 2021), and background removal, e.g., SHARQnet (Bollmann et al., 2019a) and BFRNet (Zhu et al., 2022). In addition to these DL methods, which were designed for a certain step in the QSM reconstruction pipeline, our recently proposed iQSM (Gao et al., 2022) method was the first deep learning-based single-step QSM reconstruction technique. It enables direct QSM reconstruction from wrapped phase images in an end-to-end network, and outperforms numerous previous state-of-the-art methods by eliminating the error-accumulating intermediate steps.
However, the performance of most previous DL-QSM approaches degrades dramatically when the acquisition orientation with regard to the main magnetic field (i.e., the magnetic dipole field orientation) of the testing data differs from those used in training, which has been reported as a major challenge for current DL-QSM methods in a recent review (Jung et al., 2022). Physics-guided modules can be introduced into DL-QSM methods to enhance model generalizability to acquisition orientations. Two types of algorithms, i.e., (i) physics-guided unrolled neural networks, e.g., LPCNN (Lai et al.) and MoDL-QSM (Feng et al., 2021), and (ii) affine transformation-enabled end-to-end networks, i.e., AFTER-QSM (Xiong et al., 2023a), have demonstrated improvements over conventional pure-network-based QSM solutions. However, the performance of unrolled networks is significantly degraded when applied to data acquired at relatively large oblique angles, and AFTER-QSM is very memory-consuming and can over-sharpen the QSM images due to its second refinement network. Moreover, all these methods only perform the dipole inversion process starting from preprocessed local field maps.

Theory and Method
In this section, we first demonstrate that single-step QSM reconstruction from MRI raw phases of arbitrary acquisition orientations is an orientation vector-dependent inverse problem. Then, we introduce the proposed OA-LFE module and its integration with iQSM to form the new iQSM+ in detail.

Single-step QSM Reconstruction as an Inverse Problem
The local field δB_local induced by brain tissue is equal to the convolution between the magnetic susceptibility distribution χ and the unit dipole kernel d in the image domain, which can be expressed as a multiplication in k-space (Marques and Bowtell, 2005):

ΔB_local(k⃗) = D(k⃗) · X(k⃗),   (1)

D(k⃗) = 1/3 − (k⃗ · p⃗)² / |k⃗|²,   (2)

where D(k⃗) is the unit dipole kernel in k-space, X(k⃗) is the Fourier transform of χ, and p⃗ = [p_x, p_y, p_z] is the acquisition (magnetic dipole) orientation unit vector, whose elements are the vector projections of the acquisition Field-of-View (FOV) coordinates onto the main magnetic field (B⃗_0) direction. In most previous DL-QSM works, only the pure-axial acquisition (i.e., p⃗ = [0, 0, 1]) is considered, which simplifies Eq. 2 into Eq. 3:

D(k⃗) = 1/3 − k_z² / |k⃗|².   (3)

The relationship between the unwrapped MRI phase and the local field perturbation induced by the tissues inside the brain can be expressed through the Laplacian operator as follows (Li et al., 2014):

∇²φ_u = γ B_0 TE · ∇²δB_local,   (4)

where φ_u is the unwrapped MRI phase; B_0 is the main magnetic field strength; γ is the gyromagnetic ratio; and TE is the echo time. Substituting Eq. 1 and Eq. 2 into Eq. 4 yields:

∇²φ_u(x, y, z) = γ B_0 TE (∇²/3 − (p_x ∂/∂x + p_y ∂/∂y + p_z ∂/∂z)²) χ(x, y, z),   (5)

where [x, y, z] are the image-domain coordinates. The Laplacian of the unwrapped phase φ_u can be calculated using the Laplacian of trigonometric (LoT) functions (Gao et al., 2022; Li et al., 2014; Schofield and Zhu, 2003) from the wrapped phase φ_w as:

∇²φ_u = cos(φ_w) · ∇²sin(φ_w) − sin(φ_w) · ∇²cos(φ_w).   (6)

Summarizing all of the above equations, single-step QSM reconstruction from MRI raw phases can be formulated as:

A_p⃗ χ = (cos(φ_w) · ∇²sin(φ_w) − sin(φ_w) · ∇²cos(φ_w)) / (γ B_0 TE),   (7)

which mathematically describes single-step QSM reconstruction from MRI wrapped phase images as an orientation-dependent inverse problem, since the second-order derivatives can be formulated as linear operators using finite-difference approximations. Here, A_p⃗ is the orientation-dependent system operator, equivalent to the operation ∇²/3 − (p_x ∂/∂x + p_y ∂/∂y + p_z ∂/∂z)². In our previous iQSM work (Gao et al., 2022), we successfully solved Eq. 7 for the special case of p⃗ = [0, 0, 1] (i.e., the pure-axial dipole orientation). In this work, a DL-based solution of Eq. 7 at arbitrary p⃗ is proposed, which effectively encodes and fuses the orientation vector (p⃗) information into the proposed deep neural networks and learns the inverse of A_p⃗.
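For concreteness, the orientation-dependent dipole kernel in Eqs. 2 and 3 can be sketched in NumPy as below; the function name, grid size, and FFT-frequency conventions are our own illustrative choices:

```python
import numpy as np

def dipole_kernel(shape, p, voxel_size=(1.0, 1.0, 1.0)):
    """Unit dipole kernel D(k) = 1/3 - (k.p)^2 / |k|^2 (Eq. 2) for an
    arbitrary acquisition orientation unit vector p."""
    freqs = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    KX, KY, KZ = np.meshgrid(*freqs, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    kp = KX * p[0] + KY * p[1] + KZ * p[2]
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - kp**2 / k2
    D[0, 0, 0] = 0.0  # the k = 0 component is undefined; set it to zero
    return D

# Pure-axial acquisition (p = [0, 0, 1]) reduces to Eq. 3: 1/3 - kz^2/|k|^2
D_axial = dipole_kernel((32, 32, 32), np.array([0.0, 0.0, 1.0]))
# A 45-degree oblique acquisition, as tested later in the ablation study
D_oblique = dipole_kernel((32, 32, 32), np.array([0.0, 0.707, 0.707]))
```

Along the dipole axis the kernel equals 1/3 − 1 = −2/3, and perpendicular to it 1/3, the familiar magic-angle structure of the dipole response.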

iQSM+ Network Architecture
As shown in the bottom part of Fig. 2, the network architecture of the proposed iQSM+ is constructed by inserting the proposed novel OA-LFE module into our recently developed iQSM network (Gao et al., 2022), which consists of a learnable LoT Layer and a traditional U-net. The PnP OA-LFE module and the base LoT-Unet from iQSM are detailed as follows:

OA-LFE Module
As shown in Fig. 1(b), the proposed OA-LFE module takes both the network latent feature and the corresponding acquisition orientation vector as inputs, and outputs the Orientation-Adapted Features to the subsequent part of the network.
Orientation Vector Encoding: Suppose that the input latent feature of the OA-LFE module is denoted by H ∈ R^(N_c×N_x×N_y×N_z), where N_c is the channel number, and N_x, N_y, and N_z represent the height, width, and depth of the 3D QSM data, respectively. In the LFE module, the orientation vector p⃗ is first projected into a Feature Editing Kernel (FEK, K_fe ∈ R^(3×3×3)) and two Feature Editing Vectors (FEVs, V_fe ∈ R^(N_c×1)) using three 4-layer MLPs, each of which can be described as follows:

Z_{i+1} = SiLU(W_{i+1} × Z_i + b_{i+1}),  i = 0, 1, 2, 3,  with Z_0 = p⃗,

where Z_4 is the output of the MLP; W_{i+1} and b_{i+1} (e.g., b_3 ∈ R^(10×1)) are the associated weight matrices and bias terms; × represents matrix multiplication; and the Sigmoid Linear Unit (SiLU) is adopted as the activation function because it has been demonstrated to improve the performance of shallow networks (Elfwing et al., 2018). For FEK learning, the 27-element MLP output is reshaped into the 3×3×3 kernel K_fe.

Latent Feature Editing: The input hidden feature H is first convolved with the learned FEK (K_fe) to fuse the orientation information in the spatial dimension:

H_s = H ∗ K_fe,

where ∗ denotes 3D image convolution and K_fe ∈ R^(3×3×3) is the learned FEK (reshaped from the MLP output); zero-padding is conducted to make sure that the temporary feature H_s is of the same size as H.
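A minimal NumPy sketch of this orientation-vector encoding with untrained (randomly initialized) MLPs is given below; the hidden-layer widths of 10 and the channel number N_c = 16 are hypothetical choices for illustration, as the text specifies only the input and output dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def silu(x):
    return x / (1.0 + np.exp(-x))  # Sigmoid Linear Unit (SiLU) activation

def mlp4(p, widths):
    """Forward pass of an (untrained) 4-layer MLP projecting the 3-element
    orientation vector; weights are re-sampled per call for this sketch."""
    z = p
    dims = [3] + list(widths)
    for i in range(4):
        W = rng.standard_normal((dims[i + 1], dims[i])) * 0.1  # random init
        b = np.zeros((dims[i + 1], 1))
        z = silu(W @ z + b)
    return z

Nc = 16  # hypothetical channel number of the latent feature
p = np.array([[0.0], [0.707], [0.707]])            # 45-degree oblique orientation
K_fe = mlp4(p, (10, 10, 10, 27)).reshape(3, 3, 3)  # FEK: 27 outputs -> 3x3x3
V1_fe = mlp4(p, (10, 10, 10, Nc))                  # first FEV
V2_fe = mlp4(p, (10, 10, 10, Nc))                  # second FEV
```

In the trained network these three MLPs have fixed learned weights; here each call simply demonstrates the shapes flowing from p⃗ to the FEK and FEVs.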
Then, channel-dimension feature editing is conducted to take full advantage of the orientation information:

H_s^c = V1_fe ⊙ H_s + V2_fe,

where V1_fe ∈ R^(N_c×1) and V2_fe ∈ R^(N_c×1) are the two FEVs projected from the orientation vector, and ⊙ denotes channel-wise multiplication (each channel of H_s is scaled by the corresponding element of V1_fe and shifted by that of V2_fe). Finally, a skip connection from the latent-feature input H to the temporarily edited feature H_s^c is added, forming a residual block:

H_OA = H + H_s^c,

where H_OA ∈ R^(N_c×N_x×N_y×N_z) is the final output of the proposed OA-LFE module, i.e., the Orientation-Adapted Feature.
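The two editing steps and the residual skip can be sketched as follows. The channel-wise scale-and-shift form of the FEV editing is our reading of the description, and the hand-rolled `conv3d_same` mimics a deep-learning "convolution" (cross-correlation) with 1-voxel zero padding:

```python
import numpy as np

def conv3d_same(x, k):
    """3x3x3 cross-correlation (deep-learning 'convolution') with 1-voxel
    zero padding so the output matches the input size."""
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    nx, ny, nz = x.shape
    for i in range(3):
        for j in range(3):
            for l in range(3):
                out += k[i, j, l] * xp[i:i + nx, j:j + ny, l:l + nz]
    return out

def oa_lfe_edit(H, K_fe, V1_fe, V2_fe):
    """OA-LFE editing sketch: spatial fusion with the FEK, channel-wise
    editing with the two FEVs (scale-and-shift assumed), residual skip."""
    H_s = np.stack([conv3d_same(H[c], K_fe) for c in range(H.shape[0])])
    H_cs = V1_fe[:, :, None, None] * H_s + V2_fe[:, :, None, None]
    return H + H_cs  # Orientation-Adapted Feature H_OA

# Sanity check: an identity FEK with neutral FEVs leaves H_s = H,
# so the residual output is exactly 2 * H.
H = np.arange(2 * 4 * 4 * 4, dtype=float).reshape(2, 4, 4, 4)
K_id = np.zeros((3, 3, 3))
K_id[1, 1, 1] = 1.0
out = oa_lfe_edit(H, K_id, np.ones((2, 1)), np.zeros((2, 1)))
```

In the real network, H would be a learned latent feature and K_fe, V1_fe, and V2_fe would come from the orientation-encoding MLPs.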

Orientation-Adaptive iQSM
As shown in the bottom part of Fig. 2, the network backbone of the proposed iQSM+ is constructed by combining the proposed PnP OA-LFE modules into our previously developed iQSM network (Gao et al., 2022), which is composed of a novel LoT Layer and a traditional U-net. The OA-LFE modules are appended after every 3D convolutional layer in the U-net part to make full use of the orientation information.
The U-net part contains 18 three-dimensional convolutional layers with a kernel size of 3×3×3, a stride of 1×1×1, and 1-voxel zero padding to maintain the image size; 18 OA-LFE modules following the 3D convolutional layers to gain orientation adaptability; 4 max-pooling and 3D transposed-convolution layers of kernel size 2×2×2, introduced in the contracting and expanding paths, respectively, for multiscale learning; and 22 ReLU units as the activation functions, with a 1×1×1 3D convolutional layer without zero padding adopted as the output layer. Finally, a skip connection from the first channel of the LoT Layer output to the U-net output is added, forming a residual block (Kim et al., 2016), which helps the training process converge faster. Furthermore, this skip connection can mitigate the vanishing gradient problem during training (He et al., 2016).

Experiments
In this section, we first describe the details of network training for the proposed iQSM+. Then, we introduce how the comparative experiments were conducted to evaluate the performance of the proposed iQSM+ on MRI phase data acquired from various platforms. Institutional ethics board approvals were obtained for all MRI brain data used in this work.

Simulated-supervised Training Data Generation
A simulated-supervised training scheme was introduced to simulate varying orientation vectors spanning the entire 3D unit sphere for network training. As shown in the top part of Fig. 2, unit orientation vectors were first simulated using p⃗ = [sin(θ)cos(ϕ), sin(θ)sin(ϕ), cos(θ)], where θ ∼ [0, π] and ϕ ∼ [0, 2π] are random spherical coordinates varying from batch to batch. Next, simulated local field maps of various head orientations were calculated from the QSM ground truths using forward dipole convolution. Then, the corresponding background field maps were added to generate the total field images. Finally, the wrapped phases were simulated from the total field using a previously introduced phase evolution method, scaling with the main field strength and echo times (Gao et al., 2022). The above simulation process from the QSM labels to the training inputs, i.e., the wrapped phases, was encapsulated as a transform function. In this work, only training labels were explicitly prepared before network training, and the corresponding training inputs were dynamically created using this preprocessing transform function at the beginning of each training batch. A major advantage of this simulated-supervised training scheme is that we could simulate as many acquisition orientation vectors as desired with a limited number of training labels. In this study, the proposed iQSM+ was trained on 14400 training patches for 100 epochs, which means that in total 14400×100 head orientation vectors could be simulated using the proposed simulated-supervised training scheme.
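The label-to-input transform described above can be sketched in NumPy as follows (random orientation sampling, forward dipole convolution, background-field addition, and phase wrapping); the `scale` argument is a placeholder for the γ·B_0·TE phase-evolution factor:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_orientation():
    """p = [sin(t)cos(f), sin(t)sin(f), cos(t)], t ~ U[0, pi], f ~ U[0, 2pi]."""
    t, f = rng.uniform(0.0, np.pi), rng.uniform(0.0, 2.0 * np.pi)
    return np.array([np.sin(t) * np.cos(f), np.sin(t) * np.sin(f), np.cos(t)])

def wrapped_phase(chi, p, bg_field, scale=40.0):
    """Label-to-input transform sketch: forward dipole convolution of the
    QSM label, background-field addition, and phase wrapping."""
    freqs = [np.fft.fftfreq(n) for n in chi.shape]
    KX, KY, KZ = np.meshgrid(*freqs, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    kp = KX * p[0] + KY * p[1] + KZ * p[2]
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - kp**2 / k2
    D[0, 0, 0] = 0.0
    local = np.fft.ifftn(D * np.fft.fftn(chi)).real  # forward dipole model
    total = local + bg_field                         # add background field
    return np.angle(np.exp(1j * scale * total))      # wrap into (-pi, pi]

p = random_orientation()
chi = rng.standard_normal((16, 16, 16)) * 0.05       # toy susceptibility patch
phi_w = wrapped_phase(chi, p, bg_field=np.zeros((16, 16, 16)))
```

Because the transform runs per batch, a fresh orientation vector can be drawn for every training step, which is what lets a fixed set of labels cover the whole unit sphere of orientations.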
The proposed iQSM+ was trained on 14400 small QSM patches (size: 64×64×64) cropped from 96 full-sized QSM volumes (size: 144×196×128), which were acquired from 96 healthy volunteers at 3T (GE Discovery 750) using a 3D multi-echo GRE (ME-GRE) sequence with the following parameters: first TE/∆TE/TR = 3/3.3/29.8 ms, 8 unipolar echoes, FOV = 144×196×128 mm³, voxel size = 1 mm³ isotropic, total scan time = 5.3 min. To obtain the full-sized QSM labels, a traditional QSM reconstruction pipeline was performed on the 96 full-size ME-GRE raw phase images: BET (Smith, 2002) for brain mask segmentation, Laplacian unwrapping for phase unwrapping and total field estimation, RESHARP (Sun and Wilman, 2014) with 3-voxel brain erosion on the BET brain mask for accurate local field calculation, and iLSQR (Li et al., 2015) for dipole inversion. We also used the PDF method to estimate the corresponding background field maps for each QSM training label. The cropping was carried out by sliding a 64×64×64 window to traverse the full-size volumes with a stride of 16×22×12. Similar to our recent iQSM work, a simulated pathological QSM patch was also generated for each healthy QSM patch (14400 in total) using our previously developed simulation pipeline, and the two datasets were randomly mixed at the beginning of each training epoch during network training, as described in our recently proposed iQSM training strategies (Gao et al., 2022).

Network Training
The loss function used for iQSM+ training consists of two parts, i.e., the Mean-Squared-Error (MSE) loss between the network QSM predictions and the QSM training labels, and a model loss that measures the difference between the ground-truth local field maps and the estimated local fields calculated from the predicted QSM images:

L(θ) = (1/N) Σ_{n=1}^{N} ( ||ỹ_n(θ) − y_n||₂² + λ ||F⁻¹(D · F(ỹ_n(θ))) − F⁻¹(D · F(y_n))||₂² ),

where θ denotes the network parameters to be optimized; y_n represents the training labels (ground-truth QSM data); ỹ_n(θ) is the network output QSM prediction; F and F⁻¹ denote the Fourier transform and its inverse; D is the dipole kernel; n = {1, 2, 3, . . ., N} is the data index; N is the batch size; and λ is a weighting parameter between the two losses. In this work, λ is empirically set to 0.1. All neural networks in this study were implemented using PyTorch 1.8. The source codes for iQSM+ along with pre-trained network checkpoints are available at https://github.com/sunhongfu/deepMRI/tree/master/iQSM_Plus. The network parameters were initialized with Gaussian random variables with a mean of zero and a standard deviation of 0.01, except for the convolutional kernels in the LoT Layer, which were initialized using the 27-point-stencil Laplacian operators (Gao et al., 2022). The training batch size was set to 32 and the network was trained for 100 epochs using the Adam optimizer. The learning rate was set to 10⁻³, 10⁻⁴, and 10⁻⁵ for the first 40, the middle 40, and the final 20 epochs, respectively. It took around 24 hours to finish the network optimization on one Nvidia Tesla A6000 GPU.
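The two-part loss can be sketched as below, assuming the model loss compares the forward-dipole fields of the prediction and the label (our reading of the description); the 8³ axial dipole kernel built here is purely for illustration:

```python
import numpy as np

def iqsm_plus_loss(y_pred, y_true, D, lam=0.1):
    """Two-part training loss sketch: MSE between predicted and label QSM,
    plus a model (field-consistency) loss comparing the local fields
    obtained by forward dipole convolution of prediction and label."""
    mse = np.mean((y_pred - y_true) ** 2)
    f_pred = np.fft.ifftn(D * np.fft.fftn(y_pred)).real
    f_true = np.fft.ifftn(D * np.fft.fftn(y_true)).real
    model = np.mean((f_pred - f_true) ** 2)
    return mse + lam * model

# Toy pure-axial dipole kernel on an 8^3 grid for demonstration
freqs = [np.fft.fftfreq(8)] * 3
KX, KY, KZ = np.meshgrid(*freqs, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2
with np.errstate(divide="ignore", invalid="ignore"):
    D = 1.0 / 3.0 - KZ**2 / k2
D[0, 0, 0] = 0.0

rng = np.random.default_rng(2)
y = rng.standard_normal((8, 8, 8))
```

A perfect prediction drives both terms to zero, while the λ-weighted field term penalizes predictions that fit the label voxel-wise but violate the dipole physics.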

Evaluation Datasets
To demonstrate the performance and generalizability of the proposed iQSM+, extensive comparative experiments were conducted on a large number of simulated and in vivo brain datasets from different MRI platforms, including: 1. Simulated GRE phase measurements from two COSMOS healthy brains (dipole orientation vectors p⃗ = [0,0,1] and [0,0.707,0.707]) were tested in an ablation study to quantitatively evaluate the effectiveness of the novel OA-LFE modules.
Two more patients (one with multiple micro bleeding (MB) lesions and one with intracranial hemorrhage (ICH)) were also scanned in vivo using this 10-echo sequence at 3T with a pure-axial acquisition orientation (⃗ p =[0,0,1]) to investigate the effects of incorrect orientation vectors on iQSM+ reconstructions.
5. A total of 100 ME-GRE scans (50 at pure-axial orientation and 50 at oblique orientations) from 8 subjects in a public QSM dataset (Shi et al., 2022) acquired at 3T (Siemens Prisma) were used to quantitatively compare deep gray matter (DGM) susceptibility measurements of different QSM methods.

Performance Evaluation and Numerical Metrics
To demonstrate the effectiveness of the proposed PnP OA-LFE module, two additional iQSM networks without the OA-LFE modules, i.e., the original iQSM and iQSM-Mixed, were trained and compared with the proposed iQSM+ on both simulated and in vivo QSM data. The original iQSM was trained on phase data of pure-axial orientation, and iQSM-Mixed was trained on the same dataset as iQSM+ with local field maps generated using mixed dipole orientations. Unlike iQSM+, iQSM-Mixed shares the same network architecture as the original iQSM and thus has no OA-LFE module to take orientation vectors as an additional network input.
We also compared the proposed iQSM+ with several established multiple-step methods, including iLSQR (Li et al., 2015), LPCNN (Lai et al.), and AFTER-QSM (Xiong et al., 2023a), which in theory can handle oblique dipole inversions. All multi-step methods started from Laplacian unwrapping (Li et al., 2014) and RESHARP (Sun and Wilman, 2014) background field removal in this work. Similar to the AFTER-QSM (Xiong et al., 2023a) design, an Affined-iQSM was also introduced in this study, i.e., applying affine transformations for image rotation before and after the pretrained iQSM model, where the affine matrix was calculated from the dipole orientation vectors. The LPCNN and AFTER-QSM models were trained by their original authors without retraining on our simulated-supervised dataset. All deep learning method inferences were conducted on one Nvidia 4090 GPU, and the traditional non-deep-learning algorithms were run on one Intel(R) Core(TM) i7-13700F CPU.
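For illustration, a rotation matrix that aligns an oblique dipole orientation with the scanner z-axis, as such an affine approach conceptually requires, can be computed via Rodrigues' formula; this is a sketch and the actual Affined-iQSM/AFTER-QSM implementations may differ:

```python
import numpy as np

def rotation_to_axial(p):
    """Rotation matrix R with R @ p = [0, 0, 1]: rotates an oblique dipole
    orientation back to pure-axial (Rodrigues' vector-alignment formula)."""
    p = np.asarray(p, dtype=float) / np.linalg.norm(p)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(p, z)              # rotation axis (unnormalized)
    c = float(np.dot(p, z))         # cosine of the rotation angle
    if np.linalg.norm(v) < 1e-12:   # p already (anti-)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])  # skew (cross-product) matrix of v
    return np.eye(3) + K + K @ K / (1.0 + c)

R = rotation_to_axial([0.0, 0.707, 0.707])  # 45-degree oblique case
```

The resulting matrix can be used to resample the local field onto an axial grid before a pure-axial network and to rotate the prediction back afterwards.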
For quantitative evaluation of different QSM reconstruction frameworks, commonly used numerical metrics (Langkammer et al., 2018) were computed. In addition, mean susceptibility values of the globus pallidus (GP), putamen (PU), caudate (CN), substantia nigra (SN), and red nucleus (RN) in the one hundred in vivo public brain datasets (Shi et al., 2022) were measured and reported as bar graphs, with the DGM parcellation provided by the dataset authors. Line graphs of the numerical metrics and Region-of-Interest (ROI) susceptibility measurements (simulated hemorrhage and GP) on the ten simulated pathological datasets were also comprehensively compared across QSM methods.

Ablation Study: The Effectiveness of the Proposed PnP OA-LFE Module

Figure 3 compares the proposed iQSM+ with the original iQSM and iQSM-Mixed on two COSMOS-based simulated and four in vivo brain subjects. It is clear from the simulation results (Fig. 3(a)) that iQSM+ showed consistent reconstructions for phase data of both pure-axial (p⃗ = [0,0,1]) and 45° oblique (p⃗ = [0,0.707,0.707]) orientations, while the original iQSM failed to present reasonable results for the latter case, and iQSM-Mixed demonstrated apparently over-smoothed QSM results.
The original iQSM's NRMSE/SSIM/HFEN performances degraded from 46.75/90.66/36.50% to 117.08/70.00/127.27% due to the oblique acquisition, while the maximum HFEN deviation between the two orientations for iQSM+ was less than 5%, compared to 91.22% and 10.68% for the original iQSM and iQSM-Mixed, respectively. Similar trends can be observed from the in vivo results in Fig. 3(b). As indicated by the red arrows, iQSM failed to restore the desired QSM contrast in the GP and SN regions for phase data of oblique acquisition orientations, while iQSM-Mixed presented over-smoothed images. In contrast, iQSM+ showed consistent QSM results for all four different orientations. This ablation study confirms that the proposed OA-LFE modules significantly enhance the original iQSM and enable the new iQSM+ to accurately reconstruct QSM images from arbitrary head orientations.
An ablation study was also conducted to investigate the effectiveness of the proposed FEV and FEK designs using the 10 simulated pathological brain datasets described in Section 3.3 (Evaluation Datasets), with qualitative and quantitative results shown in Supp. Fig. 1 and Supp. Table 1, respectively. The performance of iQSM+ without the FEK or FEV module drops significantly, implying that both the FEV and FEK designs are effective for the QSM reconstruction task, with the FEV contributing more than the FEK.
Furthermore, the learned FEKs and FEVs in the first OA-LFE module across 10 different acquisition orientations are shown in Supp. Fig. 2. It is observed that the FEKs and FEVs change continuously with increasing acquisition orientation angles, which suggests some degree of interpretability and trustworthiness of the OA-LFE models.

Simulated Pathological Dataset
Figure 4 compares the proposed iQSM+ with various established QSM methods on two phase images with a simulated hemorrhage lesion (0.8 ppm) added onto a χ33 brain base. The upper two rows compare different QSM methods on phase data of axial orientation, while the bottom two rows illustrate the results of the oblique case. According to the error maps and the numerical metrics, the proposed iQSM+ showed the most consistent and best reconstruction results among all methods. Although Affined-iQSM could improve the original iQSM's results for oblique dipole orientations, it smoothed out some fine brain structures. This implies that the proposed OA-LFE modules can handle orientation vectors better than the non-learnable affine transformation-based rotations. It is also found that the multi-step methods dramatically underestimated the hemorrhage susceptibility compared with the iQSM-based single-step methods, which is consistent with the findings in our recent work (Gao et al., 2022).
More quantitative analyses based on this χ33 label were investigated and reported as line graphs in Fig. 5, concerning the robustness of various QSM methods on four evaluation metrics, i.e., NRMSE, SSIM, hemorrhage susceptibility, and GP measurement changes against the dipole orientation vectors (from pure-axial to 90° tilted). It is clear that the proposed iQSM+ showed the flattest curves along oblique angles, while the original iQSM's curves changed the most, which further confirms the effectiveness of the proposed OA-LFE modules. The proposed iQSM+ displayed the smallest NRMSE and the most accurate hemorrhage measurements. The hemorrhage curves in Fig. 5(c) clearly show that the multi-step algorithms significantly underestimated hemorrhage susceptibilities.

In vivo Dataset
QSM results of various methods on the SH patient are compared in Fig. 6. The upper two rows show reconstructions with 1-voxel brain edge erosion, while the bottom two rows demonstrate results with 4-voxel brain mask erosion. The proposed iQSM+ produced the most visually appealing results for the 1-voxel mask erosion data, while iLSQR and LPCNN showed substantial artifacts, AFTER-QSM showed significant contrast loss compared with its 4-voxel erosion result, iQSM failed to produce a clear reconstruction of the micro-bleeding, and Affined-iQSM showed noticeable over-smoothing effects. All iQSM-based methods showed more consistent reconstructions between the 1-voxel and 4-voxel mask-eroded data compared with the first three methods in Fig. 6.
Box charts in Fig. 7 compare different QSM algorithms based on the 100 brain subjects from the public QSM dataset (Shi et al., 2022). The left plot compares DGM measurements of various methods on data with neutral (pure-axial) orientation, while the right plot shows the box charts for QSM of oblique orientations. The proposed iQSM+ showed similar measurements to the χ33 reference, traditional iLSQR, and AFTER-QSM for neutral-orientation QSM data, while LPCNN showed significant differences in the PU, SN, and RN regions. For example, the median value of iQSM+ in the PU deviates by 3.9% from the χ33 reference, 31.40% lower than LPCNN. Similar trends were found for the oblique subjects, except that the original iQSM, trained only for pure-axial acquisitions, significantly under-performed all others.

The Influences of Incorrect Orientation Vectors on iQSM+ Results
The effects of incorrect orientation vectors on iQSM+ are shown in Fig. 8, demonstrating that the proposed OA-LFE indeed learned an effective encoding of dipole orientation vectors instead of simply memorizing 'normal-appearing' QSM results for every possible orientation. According to the error maps of the simulation results (Fig. 8(a)), deliberately feeding incorrect orientation vectors to iQSM+ resulted in undesirable reconstructions with dramatically larger errors. Similar results were also observed in the in vivo brain data. As pointed out by the red arrows in Fig. 8(b), providing iQSM+ with wrong orientation vectors resulted in substantial reconstruction errors and artifacts near the micro bleeds, SN, dentate nucleus, and hemorrhage regions.

Applicability of OA-LFE to Other DL-QSM Networks
All the above experiments were carried out to demonstrate the effectiveness and robustness of the proposed OA-LFE for the single-step iQSM. To demonstrate the versatility of the PnP OA-LFE module, we also applied it to another neural network, xQSM (Gao et al., 2021), which was solely designed for dipole inversion starting from the pre-processed local field map. Similar to the ablation studies designed for iQSM+ (Fig. 3), we trained an xQSM+ (i.e., OA-LFE-empowered xQSM) and compared it with the original xQSM and xQSM-Mixed (trained similarly to iQSM-Mixed), as shown in Fig. 9. xQSM+ showed much-improved results with enhanced orientation adaptability compared with the original xQSM and xQSM-Mixed, confirming that the proposed OA-LFE module is not limited to iQSM, but is also effective for the QSM dipole inversion neural networks that various research groups have proposed in recent years.

Discussion and Conclusion
In this work, we propose a novel PnP OA-LFE module to learn the encodings of dipole orientation vectors and effectively fuse them into deep neural networks, enabling QSM reconstructions from arbitrary acquisition orientations. We developed an iQSM+ method by incorporating the OA-LFE modules into our previously developed single-step iQSM network. To demonstrate the effectiveness of the proposed OA-LFE and the performance of iQSM+, extensive experiments were conducted on simulated and in vivo brain subjects from various MRI platforms to compare the proposed method with several established methods, including state-of-the-art DL-QSM methods and the original iQSM. In addition, we also validated the generalizability and versatility of the proposed OA-LFE module on deep neural networks designed for QSM dipole inversion, e.g., xQSM. Overall, this work presents a new DL paradigm enabling QSM researchers to develop novel orientation-adaptive QSM solutions by combining the PnP OA-LFE module into their network structures without a complete overhaul of their existing architectures.
The proposed iQSM+ was trained using a simulated-supervised framework, which can simulate as many dipole orientations as needed without scanning a large number of subjects at oblique head orientations. This network training strategy also allowed us to manually control the distribution of the orientation vectors.
Most previous DL-QSM methods focused on the final QSM dipole inversion step, e.g., QSMnet/QSMnet+ (Yoon et al., 2018; Jung et al., 2020), xQSM (Gao et al., 2021), LPCNN (Lai et al.), MoDL-QSM (Feng et al., 2021), etc., while traditional unwrapping algorithms like Laplacian unwrapping (Li et al., 2014) and background removal methods such as RESHARP (Sun and Wilman, 2014) were still necessary as preprocessing steps. In contrast, our previously proposed phase-to-susceptibility iQSM network was able to reconstruct QSM images directly from the MRI raw phases in a single step. Thanks to the PnP property of the proposed OA-LFE, the current version of iQSM+ not only handles arbitrary acquisition orientations, but also preserves almost all the advantages of the original iQSM (Gao et al., 2022) over the multi-step QSM frameworks, including robustness against brain edge erosion, ultra-fast reconstruction speed, and reduced errors from the intermediate steps, especially phase unwrapping, as we have shown in our iQSM work (Gao et al., 2022).
We evaluated the performance of the proposed iQSM+ on extensive MRI datasets scanned from multiple platforms with various sequence parameters (e.g., TR, TE, flip angle, number of echoes, etc.), which ensured a fair, comprehensive, and realistic comparison of different QSM methods. The proposed iQSM+ consistently showed improved results on QSM data of different parameters, compared with the original iQSM and other multi-step QSM methods.
In this study, we limited our experiments with the proposed iQSM+ and xQSM+ to human brain data scanned at 1 mm³ isotropic resolution. Data of anisotropic resolution would require image upsampling to isotropic voxel sizes. Theoretically, retraining or fine-tuning the iQSM+ network with matching anisotropic image resolution would improve the network's performance; however, this is more computationally expensive and time-consuming. In the future, we will investigate strategies such as those proposed in AFTER-QSM (Xiong et al., 2023a) and AdaIN-QSM (Oh et al., 2022) to more effectively and accurately process QSM data of anisotropic resolutions. In addition, the proposed Orientation-Adaptive (OA-LFE-embedded) neural networks were trained in a simulated-supervised manner, where the training labels reconstructed using previously developed QSM methods may contain artifacts that do not correspond to brain anatomy and thus influence the network performance. Furthermore, if there is severe subject motion in the MRI scans (e.g., due to respiratory effects or severe head motion), specially tailored motion correction algorithms, including deep learning-based methods, should be applied as preprocessing steps before the iQSM+ reconstruction to eliminate or suppress potential artifacts. Future work may investigate unsupervised algorithms to directly solve QSM from the raw MRI phase acquired at arbitrary acquisition orientations and spatial resolutions.

Data and Code Availability Statements
Data are available on request due to privacy/ethical restrictions. Source codes and trained networks are available at: https://github.com/sunhongfu/deepMRI/tree/master/iQSM_Plus.

Figure 1 :
Figure 1: The overall structure of the proposed (a) Orientation-Adaptive Neural Network, which is constructed by incorporating (b) Plug-and-Play Orientation-Adaptive Latent Feature Editing (OA-LFE) blocks into conventional deep neural networks. The proposed OA-LFE can learn the encoding of acquisition orientation vectors and seamlessly integrate them into the latent features of deep networks.

Figure 2 :
Figure 2: The proposed iQSM+ network architecture and the associated simulated-supervised training data generation pipeline, including a random dipole orientation vector generation, a forward field calculation for simulating the local field, a background field addition, and a final phase evolution to obtain wrapped phases.

Figure 3 :
Figure 3: Comparison of the original iQSM, iQSM-Mixed, and the proposed iQSM+ methods on (a) two simulated brains with different acquisition orientations, and (b) four in vivo brains scanned at multiple 3T MRI platforms. Red arrows point to the reconstruction errors in the original iQSM and iQSM-Mixed.

Figure 4 :
Figure 4: Comparison of different QSM methods on two simulated pathological brains with a hemorrhage lesion in the frontal white matter. The upper two rows show the results and corresponding error maps relative to the ground-truth QSM at neutral head orientation, while the bottom two rows illustrate the results at oblique dipole orientation (40° tilted). Red arrows point to the apparent errors in the hemorrhage lesion region.

Figure 5 :
Figure 5: Analysis of different QSM methods against the data acquisition orientations on 10 phase images simulated from a χ33 label. The line curves of different QSM results on four quantitative metrics, i.e., (a) NRMSE, (b) SSIM, (c) mean hemorrhage susceptibility, and (d) mean Globus Pallidus (GP) susceptibility, are compared.

Figure 6 :
Figure 6: Comparison of susceptibility maps computed from various QSM frameworks on an in vivo SH patient at 3T. The top two rows demonstrate the results with 1-voxel brain edge erosion, while the bottom two rows illustrate the results with 4-voxel mask erosion. The brain edge erosion was performed during the RESHARP background removal step for the LU+RESHARP-based QSM pipelines, while for the iQSM-related approaches, brain edge erosion was applied to the LoT layer outputs before they were input into the U-net part. Red arrows point to the reconstruction errors and artifacts in iQSM, iLSQR, and LPCNN, while yellow arrows point to the apparent over-smoothing and contrast loss in Affined-iQSM and AFTER-QSM.

Figure 7:
Figure 7: ROI analysis of different QSM methods on five deep grey matter regions from 100 in vivo subjects, categorized into (a) 50 with neutral acquisition orientations, and (b) 50 with oblique orientations.

Figure 8:

Figure 9:
Figure 9: Comparison of the original xQSM, xQSM-Mixed, and the proposed xQSM+ on (a) two simulated brains from the COSMOS data, and (b) two in vivo brain subjects scanned at 3T. Red arrows point to the reconstruction errors in the original xQSM and xQSM-Mixed. The yellow arrows point to the structural loss in xQSM-Mixed, while the green arrow points to the vein that was smoothed out in xQSM-Mixed but successfully preserved in xQSM+.