Generative AI for Rapid Diffusion MRI with Improved Image Quality, Reliability and Generalizability

Diffusion MRI is a non-invasive, in-vivo biomedical imaging method for mapping tissue microstructure. Applications include structural connectivity imaging of the human brain and detecting microstructural neural changes. However, acquiring high signal-to-noise ratio dMRI datasets with high angular and spatial resolution requires prohibitively long scan times, limiting usage in many important clinical settings, especially for children, the elderly, and in acute neurological disorders that may require conscious sedation or general anesthesia. We employ a Swin UNEt TRansformers (Swin UNETR) model, trained on augmented Human Connectome Project data and conditioned on registered T1 scans, to perform generalized denoising of dMRI. We also qualitatively demonstrate super-resolution with artificially downsampled HCP data in normal adult volunteers. Remarkably, Swin UNETR can be fine-tuned for an out-of-domain dataset with a single example scan, as we demonstrate on dMRI of children with neurodevelopmental disorders and of adults with acute evolving traumatic brain injury, each cohort scanned on different models of scanners with different imaging protocols at different sites. We exceed current state-of-the-art denoising methods in accuracy and test-retest reliability of rapid diffusion tensor imaging that requires only 90 seconds of scan time. Applied to tissue microstructural modeling of dMRI, Swin UNETR denoising achieves dramatic improvements over the state of the art in test-retest reliability of intracellular volume fraction and free water fraction measurements and can remove heavy-tail noise, improving biophysical modeling fidelity. Swin UNETR enables rapid diffusion MRI with unprecedented accuracy and reliability, especially for probing biological tissues in scientific and clinical applications. The code and model are publicly available at https://github.com/ucsfncl/dmri-swin.


Introduction
Diffusion MRI (dMRI) can provide valuable clinical information and assess tissue microstructure; however, its low signal-to-noise ratio (SNR) can result in poor diagnostic and quantitative accuracy [6]. To improve SNR, most dMRI protocols require low angular and spatial resolution or a long scan time, which limits usage in many important clinical settings. Therefore, there is great interest in achieving short patient scan times without compromising SNR or spatial and angular resolution.
Several supervised methods have been proposed to denoise brain dMRI scans; however, they are limited by their lack of generalizability. Often, they work on only one b-value and a prespecified set of diffusion-encoding directions, are built to predict only one set of microstructural parameters, or are trained and validated on the same dataset, such as the Human Connectome Project (HCP) [27,15]. Diffusion data can vary widely due to different acquisition parameters, scanners, and patient populations, and therefore unsupervised or self-supervised denoising methods are often preferred [10]. However, these self-supervised and unsupervised methods do not approach the performance of supervised techniques on data within the trained domain and can still perform variably on different out-of-domain datasets.
In addition, most dMRI denoising methods are evaluated only qualitatively or by denoising a subset of the data and evaluating the accuracy with respect to the full dataset. In this paper, we also evaluate denoising using external validation via test-retest reliability for Diffusion Tensor Imaging (DTI) [22] and Neurite Orientation Dispersion and Density Imaging (NODDI) [31] metrics to ensure precision, as well as via known Structural Covariance Networks (SCNs) [30,16] derived from those metrics in the white matter (WM) and gray matter (GM) to ensure biological accuracy. Finally, we pay special attention to the ability to remove heavy-tail noise, which can lead to biased biophysical metrics from fitting algorithms primarily designed for data corrupted with Gaussian noise.
Here we propose to use a Swin UNEt TRansformers (Swin UNETR) model [12] to denoise dMRI data conditioned on registered T1 scans. Unlike other supervised methods, which utilize a small subset of the HCP dataset (typically 40 subjects) for training [27,15], we use the full set of HCP data, training with all b-values and diffusion-encoding directions, and apply simple data augmentations, such as random flipping, rotation, scaling, and k-space downsampling. Training on a large dataset, close to 300,000 3D volumes across 1021 subjects, allows the Swin UNETR model to learn a denoising function that generalizes well to many different conditions. In addition to the Swin UNETR, we also train a UNet convolutional neural network model that lacks a transformer component to determine the effect, if any, of network architecture on denoising performance. We validate our approach on a held-out HCP Retest dataset as well as three external datasets acquired in different patient populations using different scanners and dMRI protocols, showing significant benefits over current state-of-the-art self-supervised and unsupervised methods. In addition to improvements in accuracy, we also show better repeatability on the HCP Retest dataset. Finally, we demonstrate that fine-tuning, even on only one subject, improves performance on out-of-domain datasets and that our approach can also super-resolve dMRI data, as assessed qualitatively in an HCP subject.

Data
In our experiments, we used data from four datasets. The first, comprising normal young adult volunteers, is separated into a training dataset and a held-out dataset within the same training domain. The other three are out-of-domain (OOD) patient research datasets used to establish generalizability and clinical applicability.
- HCP: 1021 subjects from the Human Connectome Project (HCP) Young Adult dataset [9], acquired using 90 diffusion-encoding directions at b-values of b = 1000, 2000, 3000 s/mm² with 1.25 mm resolution. We used all 1021 subjects for training and excluded any subjects in the HCP Retest dataset.
- HCP Retest: 44 subjects from the HCP Retest dataset, used for validating denoising performance and repeatability. It was acquired in the same way as the HCP dataset.
- TBI: 45 adult mild traumatic brain injury patients acquired two decades ago using protocols identical to [30] on a 3T GE scanner with 55 diffusion-encoding directions at b = 1000 s/mm² with a nominal resolution of 1.8 mm (0.9 mm in-plane after zero-interpolation in k-space). We randomly selected 5 subjects for fine-tuning and 40 subjects for testing.
- SPIN: 45 children ages 8-12 years with neurodevelopmental disorders, acquired on a 3T Siemens Prisma scanner with 64 and 96 diffusion-encoding directions at b-values of b = 1000 and 2500 s/mm², respectively (TE = 72.20 ms, TR = 2420 ms, flip angle = 85°), with 2.00 mm isotropic resolution. We randomly selected 5 subjects for fine-tuning and 40 subjects for testing.
- AHA: 8 children and adolescents ages 8-18 with intracerebral hemorrhage due to vascular malformations, primarily arteriovenous malformations (AVMs), undergoing resection, acquired on a 3T GE MR750 with 55 diffusion-encoding directions at b = 2000 s/mm² with a resolution of 2.00 mm that was zero-interpolated in-plane to 1.00 x 1.00 mm. Data were collected over three sessions: the first prior to resection, the second six months after surgery, and the third one year after surgery. Lesion location was determined from radiology notes. We selected a ninth subject for fine-tuning, leaving a total of 23 sessions for testing (one subject had only two sessions).
DMRI data were skull-stripped with SynthStrip [13], corrected for eddy current-induced distortions and subject movement with Eddy [1], and aligned to structural 3D T1 scans with Boundary-Based Registration [11]. T1 scans were also skull-stripped using SynthStrip and segmented using SynthSeg [2]. For the AHA dataset, because the hemorrhagic lesion impacted SynthSeg performance, T1 scans were instead segmented using FreeSurfer's recon-all command with the SynthSeg option enabled; in the few cases where recon-all failed, the recon-all-clinical command was used instead. Finally, co-registered T1 and dMRI scans were resampled with 5th order spline interpolation at 1.25 mm and used as inputs to the model.

Denoising Validation
To evaluate the denoising algorithms and the possible scan time speedup, we measure performance on both fully-sampled and subsampled data. We chose the minimum number of diffusion gradients necessary for unique fits: 6 for DTI, 15 for a 4th order spherical harmonic, and 28 for a 6th order spherical harmonic, along with one b=0 s/mm² volume. We select the directions to minimize the condition number of the design matrix using the procedure described in [28,25]. For NODDI estimation, we only use the fully-sampled HCP acquisition. We compare the performance of the Swin UNETR and UNet models with three state-of-the-art unsupervised/self-supervised machine learning methods for dMRI denoising: block-matching and 4D filtering (BM4D) [20], Marchenko-Pastur Principal Component Analysis (MPPCA) [29], and Patch2Self (P2S) [10].
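To make the direction-selection criterion concrete, the following is a minimal numpy sketch that searches candidate gradient subsets for one minimizing the condition number of the DTI design matrix. The actual procedure follows [28,25]; the random-search strategy, function names, and trial count here are our own illustrative assumptions.

```python
import numpy as np

def dti_design_matrix(dirs):
    """Rows of the DTI design matrix for unit gradient directions (N x 3):
    [gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz]."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([x * x, y * y, z * z, 2 * x * y, 2 * x * z, 2 * y * z], axis=1)

def pick_directions(candidates, k=6, n_trials=5000, seed=0):
    """Random search (illustrative, not the paper's exact optimizer) for a
    k-subset of candidate directions minimizing the condition number."""
    rng = np.random.default_rng(seed)
    best_idx, best_cond = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(candidates), size=k, replace=False)
        c = np.linalg.cond(dti_design_matrix(candidates[idx]))
        if c < best_cond:
            best_idx, best_cond = idx, c
    return best_idx, best_cond
```

A well-conditioned design matrix keeps the tensor fit stable against noise, which is why the subsampled 6-direction protocols remain usable.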
To evaluate DTI estimation, the mean absolute error (MAE) between the ground truth fully-sampled dataset and the model predictions in the subsampled dataset was computed for the principal eigenvector (V1), fractional anisotropy (FA), axial diffusivity (AD), radial diffusivity (RD), and mean diffusivity (MD). For evaluating higher order spherical harmonics, the Jensen-Shannon distance (JSD) between the ground truth and model predictions, projected onto a uniformly distributed 362-direction hemisphere, was used [4]. In each case, the ground truth was found by fitting the model using all acquired diffusion gradient directions. These errors were reported for WM and GM. DTI estimation was only conducted on the lowest shell (b=1000 s/mm²) for multi-shell datasets, whereas spherical harmonic estimation was conducted on every shell. To evaluate super-resolution performance, dMRI data from an HCP subject were k-space downsampled by a factor of two and then upsampled to emulate a low-resolution acquisition.
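The JSD metric above can be sketched as follows, assuming the ground-truth and predicted spherical harmonic amplitudes have already been projected onto the same hemisphere directions; `odf_jsd` and the clipping step are our own illustrative choices, with `scipy.spatial.distance.jensenshannon` providing the distance itself.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def odf_jsd(amp_true, amp_pred, eps=1e-12):
    """Jensen-Shannon distance between two ODF amplitude profiles sampled
    on a shared hemisphere. Amplitudes are clipped to be non-negative and
    treated as discrete distributions; base=2 bounds the distance in [0, 1]."""
    p = np.clip(np.asarray(amp_true, float), 0, None) + eps
    q = np.clip(np.asarray(amp_pred, float), 0, None) + eps
    return jensenshannon(p / p.sum(), q / q.sum(), base=2)
```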
We explore denoising in an OOD dataset for clinical translation by applying our model to the AHA dataset, which consists of children with intracerebral hemorrhage undergoing lesion resection, scanned before and after intervention. We collect global WM and GM DTI metrics and measure perilesional changes in DTI microstructure over time from both subsampled and fully-sampled shells. To investigate the output distribution and the effect of signal rectification, we measure the signal in the lateral ventricles, a region which consists almost solely of free water and has a heavily-attenuated uniform signal, across all diffusion-encoding directions, and compare the resulting signal intensity histograms produced by the denoising methods.
Microstructural repeatability was assessed using the HCP Retest dataset by measuring the within-subject coefficient of variation (CoV) for DTI and NODDI parameters across the two sessions. Briefly, we consider DTI test-retest reliability performance on both the fully-sampled and subsampled b=1000 s/mm² shell. For NODDI estimation, we consider the full multi-shell HCP acquisition. To evaluate repeatability in WM regions, we conduct tract-based spatial statistics (TBSS) analysis via FA registration to a template using the Johns Hopkins University (JHU) atlas [26].
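One common two-session formulation of the within-subject CoV is sketched below, assuming paired per-subject metric values from the test and retest sessions; the exact aggregation the paper uses may differ, so treat this as an illustrative definition.

```python
import numpy as np

def within_subject_cov(test, retest):
    """Within-subject coefficient of variation (%) for paired test-retest
    measurements. With two sessions, the per-subject SD reduces to
    |x1 - x2| / sqrt(2); the per-subject SD/mean ratios are RMS-averaged."""
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    sd = np.abs(test - retest) / np.sqrt(2.0)
    mean = (test + retest) / 2.0
    return 100.0 * np.sqrt(np.mean((sd / mean) ** 2))
```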
We also investigate the repeatability of SCNs by measuring the mean absolute difference between SCNs derived from the first and second sessions. We quantify both GM SCNs, which encompass only cortical GM regions from the Desikan-Killiany-Tourville atlas, and WM SCNs, which encompass only the JHU WM tracts given by TBSS analysis. We evaluate performance for DTI SCN repeatability on the subsampled b=1000 s/mm² shell and for NODDI SCN repeatability on the fully-sampled acquisition using all shells. Finally, we also compute the MAE between the denoised subsampled DTI SCNs and the ground truth DTI SCN, computed by collecting the average DTI values for each region across both sessions using the fully-sampled b=1000 s/mm² shell.
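As a sketch of how an SCN and its repeatability error could be computed, assuming a matrix of per-subject regional metric values (the standard construction treats the across-subject Pearson correlation between regions as the network edge weight; the paper's exact pipeline may differ):

```python
import numpy as np

def scn(regional_metrics):
    """Structural covariance network: Pearson correlation between regions
    across subjects. regional_metrics: (n_subjects, n_regions), e.g. mean
    FA per atlas region for each subject."""
    return np.corrcoef(regional_metrics, rowvar=False)

def scn_repeatability_error(metrics_s1, metrics_s2):
    """Mean absolute difference between off-diagonal SCN entries derived
    from two sessions of the same cohort."""
    a, b = scn(metrics_s1), scn(metrics_s2)
    mask = ~np.eye(a.shape[0], dtype=bool)
    return np.mean(np.abs(a[mask] - b[mask]))
```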

Training and Implementation
The Swin UNETR model [12,17] is implemented using PyTorch [23] and MONAI [3] (Fig. S5). The model was trained on an NVIDIA V100 GPU using mean-squared error loss between the model output and ground truth. To obtain ground truth dMRI data for training, a 6th order spherical harmonic was fit for each shell and projected onto the acquired directions. We chose AdamW [18] as the optimizer with a learning rate of 1e-5 and trained for 14 epochs using gradient clipping with a maximal norm of 1.0 and 16-bit precision. During training, we first downsample the dMRI scan with a probability of 0.5 in frequency space to an anisotropic resolution between 1.25 and 3 mm and linearly upsample back to 1.25 mm resolution [19]. Random patches of 128 x 128 x 128 are cropped from the scan, randomly flipped with a probability of 0.5 along all axes, and randomly rotated by 0, 90, 180, or 270 degrees along all axes with equal probability. The input dMRI patch is normalized to have zero mean and unit variance, and the input T1 patch is normalized to have zero mean and a standard deviation uniformly log-scaled between 0.25 and 4.0. For inference, we use a sliding window approach with an overlap of 0.875 and 5th order spline interpolation to upsample the data. Fine-tuning was performed via additional training on external data from one held-out subject out of five, with a learning rate of 1e-6 for three epochs; the average result for validation was reported. Finally, since our models are trained and evaluated on the HCP dataset at 1.25 mm resolution, model predictions are resampled to native dMRI resolution for external OOD dataset validation. UNet model training, fine-tuning, and validation were conducted in the same way, except training was extended to 20 epochs. Training for both the Swin and UNet models continued until the training loss stabilized.
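The k-space downsampling augmentation described above can be sketched in plain numpy: zero out spatial frequencies outside a central k-space box, then return to image space. This is a minimal illustration of the idea, not the implementation from [19]; the per-axis factor handling and function name are our own assumptions.

```python
import numpy as np

def kspace_downsample(vol, factors):
    """Emulate a lower-resolution acquisition: keep only a central k-space
    box (one entry of `factors` per axis; factor f keeps ~1/f of the
    frequencies along that axis), then invert back to image space.
    Output shape matches the input, mimicking linear upsampling."""
    k = np.fft.fftshift(np.fft.fftn(vol))
    mask = np.zeros_like(k, dtype=bool)
    slices = tuple(
        slice(n // 2 - int(n / (2 * f)), n // 2 + int(np.ceil(n / (2 * f))))
        for n, f in zip(vol.shape, factors)
    )
    mask[slices] = True
    return np.real(np.fft.ifftn(np.fft.ifftshift(np.where(mask, k, 0))))
```

Training on such low-passed inputs is also what lets the model double as a qualitative super-resolver at inference time.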

Denoising Performance
For DTI estimation, the Swin model achieves the lowest MAE in all metrics in WM and GM in the HCP and TBI validation datasets, even without any fine-tuning (Table 1). For the SPIN dataset, the Swin model also outperforms all other models, except for FA estimation in GM, where BM4D achieved the best result. In the AHA dataset, with the exception of MD and RD, the Swin model achieves lower MAE than any other method. While the UNet performs better than BM4D, MPPCA, and P2S in most settings, its performance still lags behind the Swin model, especially the fine-tuned Swin model. Patch2Self performs worse than the other denoising methods in this setting, especially for V1 estimation. Qualitative comparisons are consistent with these quantitative results and show that the Swin model is able to capture more of the finer features in the WM and GM microstructure without the excessive smoothing of BM4D and MPPCA (Supplementary Figs. S1, S2, S3, S4).
Finally, we examine the noise distribution of each denoising algorithm in the lateral ventricles of one subject from the AHA dataset (Fig. 1B). Unlike the other denoising algorithms, the Swin model is able to transform the original data distribution, a heavy-tailed Rician-like distribution, into a more Gaussian-like distribution with significantly smaller variance and skew but greater kurtosis.
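The distributional comparison above amounts to inspecting sample moments of the signal in a uniform-signal region. A minimal sketch, assuming the ventricle voxel intensities are available as an array (the function name and returned summary are our own):

```python
import numpy as np
from scipy import stats

def signal_moments(ventricle_signal):
    """Summarize the residual-noise distribution in a uniform-signal region.
    A Rician-like distribution shows positive skew; scipy's kurtosis is the
    excess kurtosis, which is 0 for a Gaussian."""
    x = np.asarray(ventricle_signal, float).ravel()
    return {
        "variance": np.var(x),
        "skew": stats.skew(x),
        "kurtosis": stats.kurtosis(x),
    }
```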

Test-Retest Reliability
The Swin model achieves the lowest CoV between the test and retest datasets for all metrics in GM, and for AD and MD in WM, using the 6-direction subsampled shell, as well as for MD and RD in both GM and WM using the 90-direction fully-sampled shell (Table 2). When using 90 directions, applying no denoising leads to the lowest CoV for AD, and MPPCA achieves the best result for FA estimation.
For repeatability of intracellular volume fraction (ICVF), fiber orientation dispersion index (ODI), and free water fraction (ISOVF), where all acquired data was used, Swin outperforms all other denoising methods except for ODI estimation in GM, where P2S is slightly better. In particular, Swin excels at ICVF repeatability, achieving close to 50% lower CoV than the next best method on average, and achieves dramatically lower CoV on regional GM and WM measurements (Fig. 1A). Swin also achieves considerably better test-retest reliability than the other denoising approaches for ISOVF, especially in global and regional GM measurements (Fig. 1A).
Swin generates the most accurate FA, MD, and RD SCNs in WM and FA and RD SCNs in GM. Patch2Self achieves the lowest error for AD and MD SCNs in GM, while MPPCA has the most accurate AD SCN in WM. MPPCA achieves the lowest SCN repeatability error in WM for AD, FA, and RD estimation, while Patch2Self has the most repeatable SCN in WM for ODI estimation and in GM for AD and FA estimation. Swin denoising has the lowest SCN repeatability error for MD, ICVF, and ISOVF in WM as well as MD, RD, ICVF, ODI, and ISOVF in GM. Once again, Swin performed remarkably better than all competing methods for ICVF and ISOVF, achieving CoV values that are one-sixth to one-third of those from no denoising or denoising with P2S, MPPCA, or BM4D.

Clinical Validation
The Swin model achieves qualitatively superior denoising even in poor quality data (CNR = 1.2375), which was the lowest CNR in the AHA dataset (Fig. 2). Swin denoising conducted with only 6 directions approaches the data quality achieved by acquiring all 55 directions, resulting in a 9-fold speedup of scan time. Applying Swin to all 55 directions removes much of the FA noise associated with the AVM and its surrounding hemorrhage. Fine-tuning leads to modest improvements in image quality for both the UNet and Swin models.
The Swin model, even with 6 directions, records DTI values consistent with those reported by other denoising algorithms with access to all 55 directions (Fig. S6). Swin is able to achieve lower FA than the other methods with only 6 directions, barring P2S, which yields unrealistically low values for WM, showing Swin's ability to reduce some of the noise that artifactually inflates FA. In addition, apart from FA estimation in GM, Swin denoising generates similar values for both 6-direction and 55-direction acquisitions, indicating that tissue microstructural metrics remain consistent even as the angular resolution is increased. The Swin model also consistently produces lower FA, MD, RD, and AD values than the other denoising methods in the perilesional space across all three sessions in all subjects (Fig. S7). MPPCA, BM4D, and no denoising (RAW) tend to follow the same values.

Super-Resolution
Although our model was not trained for super-resolution, it can be used to resample a dMRI dataset to 1.25 mm resolution (Fig. 3). Qualitative comparison shows that Swin is able to capture more of the fine microstructure than BM4D and MPPCA in the posterior periventricular WM and avoids excessive blurring.

Discussion
To the best of the authors' knowledge, this is the first supervised dMRI denoising method that can be applied, without modification, to denoise dMRI datasets with widely varying scanners, patient populations, and acquisition parameters. We have validated our approach on a held-out portion of the HCP dataset as well as three external OOD datasets consisting of children with neurodevelopmental disorders, adults with TBI, and both children and adolescents with intracranial hemorrhage before and after resection of their vascular malformations. We have shown that our method can produce more accurate DTI metrics and spherical harmonic coefficients than other denoising methods with the minimum amount of data required for a unique fit. In addition, we have demonstrated the superior test-retest reliability of our denoising method for both DTI and NODDI metrics as well as SCNs derived from those metrics. Finally, we show that Swin denoising has the potential to track brain microstructural changes with greater accuracy than other denoising models.
With Swin denoising, drastic scan time speedup is possible. Most dMRI protocols require at least 5 b=0 s/mm² and 30 b=1000 s/mm² acquisitions to obtain accurate DTI metrics, taking at least ten minutes [14]. With Swin denoising, a dMRI acquisition that can obtain accurate DTI metrics is easily achievable in under two minutes. In fact, such a minimal 6-direction acquisition would take 90 seconds with an HCP protocol, 100 seconds with the TBI or AHA protocol, and only 20 seconds with the SPIN protocol. This speedup can allow dMRI to be used clinically in vulnerable populations and would significantly reduce motion artifacts that degrade image quality.
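The scan-time arithmetic behind these figures can be sketched as a one-TR-per-volume approximation, which ignores calibration scans and multiband details and so underestimates wall-clock time slightly; the TR used below is the one stated for the SPIN protocol.

```python
def dti_scan_time(n_b0, n_dirs, tr_seconds):
    """Approximate scan time assuming one TR per acquired volume.
    Ignores calibration/prep time, so real protocols run somewhat longer."""
    return (n_b0 + n_dirs) * tr_seconds

# SPIN protocol (TR = 2.42 s), minimal 1 b=0 + 6 directions:
t = dti_scan_time(1, 6, 2.42)  # roughly 17 s, consistent with "only 20 seconds"
```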
The Swin model showed high repeatability in all cases, but especially for the NODDI metrics of tissue intracellular volume fraction and free water fraction, often achieving better than 50% lower CoV than the next best method. We believe that this could be due to the ability of the Swin model to produce an approximately Gaussian output distribution. Unlike other denoising methods, the Swin model is able to remove the heavy-tailed noise inherent in dMRI data. Even BM4D, which is designed to correct for Rician noise, fails at this task. We believe that the Swin model, and to a lesser extent the UNet, succeed in this respect partly because, unlike other denoising models, they have access to the T1 anatomical image volume, which aids in tissue segmentation, differentiating GM from WM from CSF. In addition, both models are trained to reduce mean-squared error, which heavily penalizes outliers and thus reduces the heavy tail.
There were cases where the Swin model was not the best. For instance, the Swin model performs worse than MPPCA in WM for 6th order spherical harmonic fitting, even in HCP data, possibly because the Swin model denoises one direction at a time, whereas MPPCA is able to collectively denoise all directions (Table S2). By processing each dMRI volume separately, we are able to consider the full brain volume and avoid artifact propagation across volumes, but we do not utilize the correlation across volumes, unlike the transformer patch-based approach in [15]. Due to GPU memory constraints, a trade-off exists between utilizing spatial and angular correlations. Data compression techniques, such as quantized variational auto-encoders, should be explored to overcome this obstacle [8].
Our results were achieved with simple data augmentations, although using more extensive simulations, such as those in [21], could lead to greater generalizability. In addition, all inputs had to be resampled to 1.25 mm isotropic resolution, but with further data augmentation or resolution-independent architectures [5], it may be possible to perform denoising at native resolution if super-resolution is not desired. Our results require a T1 anatomical scan to be acquired in conjunction with the dMRI data. While most datasets with dMRI data also collect T1 scans, this might not always be the case, and further work should be conducted to determine how important anatomical information, in the form of a T1 scan, is to denoising performance and whether T2 and other sequences can provide complementary information and further improve performance. Finally, our denoising method is designed to be performed after correction for patient movement, susceptibility-induced distortions, and eddy-current distortions via the Eddy command, as well as co-registration with a T1 scan. However, it may be fruitful to pursue 2D denoising that can clean raw data slice-by-slice to improve the performance of Eddy itself and yield better downstream results.
In addition, we trained a simple UNet which, while performing better than the other self-supervised methods, did not outperform the Swin model. This could be due to the ability of Swin transformers to capture long-range dependencies better than a UNet, especially when provided with enough data [7]. It could also be because the same hyper-parameters were used for both the Swin and the UNet models, and those hyper-parameters were simply better suited to the Swin model. Further investigation, with extensive hyper-parameter tuning on various datasets, would be necessary to determine the optimal architecture. In principle, we believe that most advanced neural network architectures, with sufficient complexity, would perform adequately.
We believe that some of the generalizability of our model may be attributed to grokking, a phenomenon in which a neural network with good training loss but poor generalization will, upon further training, transition to perfect generalization [24]. We anecdotally observed grokking after the sixth epoch of training and suspect this is due to using weight decay and AdamW adaptive stochastic gradient descent training on a sufficiently large dataset with data augmentation. A better understanding of what leads to grokking could be instrumental in designing generative AI models that generalize well at scale for healthcare.
Finally, fine-tuning on even one subject consistently led to improved denoising performance for DTI fitting, but the results were mixed for higher order spherical harmonic fitting. This could be because fine-tuning reduces model bias but increases variance, which can accumulate error over the 15 or 28 directions used to compute the 4th or 6th order spherical harmonics, respectively, compared to the six directions used in DTI estimation. In addition, we did not experience significant performance benefits by fine-tuning on more than one subject. Recent work has demonstrated that large language models can effectively learn from a single example, and our model, while significantly smaller, might also exhibit similar behavior via successful domain adaptation with limited fine-tuning data. To the best of our knowledge, fine-tuning has never been reported before in the context of dMRI denoising and merits further investigation.
Acknowledgements. HCP data were provided by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van

Introduction
For a list of all abbreviations used in the manuscript and their definitions, see Table S1.

Methods
A mask for the perilesional space was found by taking the largest connected component of the region that FreeSurfer recon-all segments as "unknown" inside the brain mask. We dilate this mask two-fold and take only the dilated portion to be perilesional.
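This masking step can be sketched with scipy.ndimage, assuming binary numpy arrays for the "unknown" segmentation and brain mask; the interpretation of "two-fold" as two dilation iterations, and the function name, are our own assumptions.

```python
import numpy as np
from scipy import ndimage

def perilesional_mask(unknown_seg, brain_mask, n_dilate=2):
    """Perilesional mask: largest connected component of the 'unknown'
    segmentation inside the brain mask, dilated, keeping only the rim
    of new voxels (the dilated portion)."""
    labels, n = ndimage.label(unknown_seg & brain_mask)
    if n == 0:
        return np.zeros_like(brain_mask, dtype=bool)
    sizes = np.bincount(labels.ravel())[1:]       # sizes of labels 1..n
    core = labels == (1 + int(np.argmax(sizes)))  # largest component
    dilated = ndimage.binary_dilation(core, iterations=n_dilate)
    return dilated & ~core & brain_mask           # dilated rim only
```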

Results
Fig. S1. Visual comparison between the ground truth (GT), no denoising (RAW), BM4D, MPPCA, UNet, and Swin without fine-tuning (SWIN) for denoising on validation data from the HCP dataset.

For 4th and 6th order spherical harmonic fitting, the Swin model achieves the lowest JSD in GM across all datasets (Table S2). In particular, the Swin model outperforms all other denoising methods in the external datasets (AHA, TBI, SPIN), with the exception of 6th order estimation of b=1000 s/mm² WM spherical harmonics in SPIN and b=2000 s/mm² WM spherical harmonics in AHA, where no denoising has the lowest JSD. For the HCP dataset, MPPCA had a lower JSD than the Swin model for 6th order spherical harmonic estimation in higher shells. The UNet model achieves lower JSD than the Swin model in several metrics in the HCP dataset, but performs worse in the external datasets.
The Swin model is able to achieve lower byte-lengths of dMRI data compressed with zlib using both high and low compression levels (Table S3).

Table S2. JSD between ground truth and estimation using 15-direction (4th order) and 28-direction (6th order) HCP, SPIN, TBI, and AHA data in white matter (WM) and gray matter (GM) via no denoising (RAW), P2S, BM4D, MPPCA, UNET, and SWIN (with and without fine-tuning). Best results are bolded.

Fig. 1 .
Fig. 1. (A) ICVF and ISOVF CoV (%) in select WM and GM regions and (B) histogram of signal intensity in the b=2000 s/mm² shell in the lateral ventricles of an AHA patient. Results using no denoising (RAW), P2S, BM4D, MPPCA, SWIN, and UNET denoising are displayed.

Fig. 3 .
Fig. 3. Visual comparison between the ground truth (GT), no post-processing (RAW), MPPCA, BM4D, UNet, and Swin without fine-tuning (SWIN) for super-resolution in the posterior periventricular WM of an HCP subject. Data were k-space downsampled by a factor of two and then upsampled with 5th order spline interpolation back to 1.25 mm.
Essen and Kamil Ugurbil; U54 MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University. TBI data were acquired as part of a research project funded by NIH R01NS060886 (Principal Investigator: Pratik Mukherjee). SPIN data were acquired as part of a research project funded by NIH R01 MH116950 (Principal Investigators: Pratik Mukherjee and Elysa J. Marco). AHA data were acquired with funding from the American Heart Association (AHA) Bugher Foundation (Principal Investigators: Heather Fullerton, Christine Fox, Helen Kim, and Pratik Mukherjee).

Fig. S5 .
Fig. S5. A general overview of the Swin UNETR architecture. The input is a concatenated 3D T1 and dMRI scan, which is encoded by a Swin transformer at multiple resolutions and fed into a residual convolutional neural net (CNN) decoder to reconstruct the ground truth dMRI scan.

Fig. S7 .
Fig. S7. Average DTI metrics (AD, FA, MD, RD) from denoising dMRI data in the perilesional space of AHA subjects across three sessions: first session data were taken one day prior to AVM resection, second session data six months after AVM resection, and third session data one year after AVM resection. Subject 4 has data from only sessions one and three.

Table 1 .
MAE of FA, MD, RD, AD, and V1 estimation using six-direction HCP,

Table S1 :
Table of Abbreviations

Table S3 .
The sum of all dMRI data byte-lengths in GB for the HCP test-retest diffusion dataset undergoing no denoising (RAW), Patch2Self (P2S), MPPCA, BM4D, and Swin denoising, compressed with zlib (the DEFLATE algorithm) at compression levels of 1 (lowest) and 9 (highest). Best results are bolded.

Table S4 :
Coefficient of variation (%) for FA estimation via no denoising (RAW), P2S, BM4D, MPPCA, and SWIN for gray matter cortical regions in HCP test-retest data. Best results are bolded.

Table S6 :
Coefficient of variation (%) for RD estimation via no denoising (RAW), P2S, BM4D, MPPCA, and SWIN for gray matter cortical regions in HCP test-retest data. Best results are bolded.

Table S7 :
Coefficient of variation (%) for AD estimation via no denoising (RAW), P2S, BM4D, MPPCA, and SWIN for gray matter cortical regions in HCP test-retest data. Best results are bolded.

Table S8 :
Coefficient of variation (%) for FA estimation via no denoising (RAW), P2S, BM4D, MPPCA, and SWIN for white matter regions in HCP test-retest data. Best results are bolded.

Table S11 :
Coefficient of variation (%) for AD estimation via no denoising (RAW), P2S, BM4D, MPPCA, and SWIN for white matter regions in HCP test-retest data. Best results are bolded.

Table S15 :
Coefficient of variation (%) for NDI estimation via no denoising (RAW), P2S, BM4D, MPPCA, and SWIN for white matter regions in HCP test-retest data. Best results are bolded.

Table S16 :
Coefficient of variation (%) for ODI estimation via no denoising (RAW), P2S, BM4D, MPPCA, and SWIN for white matter regions in HCP test-retest data. Best results are bolded.