Deregulation of the Purine Pathway in Pre-Transplant Liver Biopsies Is Associated with Graft Function and Survival after Transplantation

The current shortage of livers for transplantation has increased the use of marginal organs sourced from donation after circulatory death (DCD). However, these organs have a higher incidence of graft failure, and pre-transplant biomarkers that predict graft function and survival remain limited. Here, we aimed to find biomarkers of liver function before transplantation to allow better clinical evaluation. Matched pre- and post-transplant liver biopsies from DCD (n = 24) and donation after brain death (DBD, n = 70) donors were collected. Liver biopsies were analysed using mass spectrometry molecular phenotyping. Discriminant analysis was used to identify metabolites that differentiated the two groups. Five metabolites in the purine pathway were investigated. Of these, the ratios of the levels of four metabolites to those of urate differed between DBD and DCD biopsies at the pre-transplantation stage (q < 0.05). The ratios of adenosine monophosphate (AMP) and adenine levels to those of urate also differed between biopsies from recipients experiencing early graft function (EGF) and those from recipients experiencing early allograft dysfunction (EAD) (q < 0.05). Using random forest, a panel consisting of alanine aminotransferase (ALT) and the ratios of AMP, adenine and hypoxanthine levels to urate levels predicted EGF with an area under the curve (AUC) of 0.84 (95% CI (0.71, 0.97)). Survival analysis revealed that the metabolite classifier could stratify six-year survival outcomes (p = 0.0073). At the pre-transplantation stage, a panel composed of purine metabolites and ALT could improve the prediction of EGF and survival.


Introduction
There is an increasing need for organ transplantation, but the number of organs available remains insufficient [1,2]. This is reflected by the number of people registered in the Organ Donor Register (ODR) in the UK, which decreased from 2018 to 2019 [3], while in the same period the number of patients on the active transplant list increased by 20%, reaching 432 [3]. This stark surge in the demand for liver transplants (LT) is attributable to the global incidence of alcohol-related fatty liver disease, cirrhosis and hepatitis [4]. To safeguard patients, pre-transplant donor screening is used to determine the probability of a successful liver transplant. Optimal donor parameters in the case of donation after circulatory death (DCD) include age (<60 years), weight (<100 kg), intensive care stay (<5 days), functional warm ischaemia time (fWIT, <20 min), cold ischaemia time (<8 h) and steatosis (<10%) [5]. These values have resulted in up to 20% of donation-after-brain-death (DBD) organs not meeting the clinical criteria [6] and a 78% increase in the discard rate of DCD livers [7]. The application of these criteria can result in a number of otherwise transplantable organs being discarded [8]. Therefore, identifying specific pre-transplantation markers of liver damage could assist in expanding the pool of transplantable livers.
Currently, the standard assessment of liver dysfunction is carried out using liver function tests that evaluate the concentrations of liver enzymes such as alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST) and gamma-glutamyl transferase (GGT) [9,10]. However, such tests lack sensitivity and specificity and can be affected by patient factors such as genetics, medicines and other non-associated diseases [11][12][13][14]. Thus far, transcriptomics and genomics have been used to discover biomarkers in liver pathophysiology [15]. Metabolomics has also been employed to decipher metabolic fluxes in liver disease [16]. A systematic review on the use of metabolomics to discover liver biomarkers for transplantation outcomes in liver tissue biopsies highlighted promising results [17]. These first studies identified lipid molecules [18][19][20], tryptophan, kynurenine and S-adenosylmethionine as liver biomarkers [21,22].
The objective of this study was to employ a molecular phenotyping approach to investigate, at both the pre- and post-transplantation stages, hundreds of polar metabolites in hepatic tissue from two distinct donor types, viz., DBD and DCD donors. Following this, the association between the metabolites that differed between these donor types and the clinical outcomes, viz., early allograft dysfunction (EAD) and early graft function (EGF), was investigated. Then, EGF prediction models were built, and survival analysis based on metabolites and clinical variables was performed. The study workflow is illustrated in Figure S1.

Patients and Samples
This study received prior approval from the ethics committee at King's College Hospital (ethical approval number 09/H0802/100), and informed consent was obtained from all subjects. The methods were carried out in accordance with the ethical guidelines of the 1975 Declaration of Helsinki, and no donor organs were obtained from executed prisoners or other institutionalized persons.
Overall, 94 Tru-Cut tissue biopsies were obtained from the left lobe of livers pre- and post-transplantation. The first (pre-transplant) biopsy was taken at the end of cold preservation, prior to implantation, and the second (post-transplant) biopsy was obtained approximately 1 h after graft reperfusion. A separate biopsy was obtained for histopathological evaluation of donor steatosis. Biopsies were immediately snap-frozen in liquid nitrogen and stored at −80 °C until extraction for LC-MS analysis. In all procedures, liver allografts were flash-cooled and perfused with University of Wisconsin preservation solution until the time of transplantation.
The study included two types of adult donors: DBD (n = 35) and DCD (n = 12). A wide spectrum of donor clinical data was collected for comparison among groups and for correlation with metabolite levels. In the DCD group, functional WIT (fWIT) was calculated from the time when systolic blood pressure fell below 50 mmHg to the time of aortic cannulation. All recipients were patients with stable chronic liver disease who did not require hospitalization prior to transplantation. They also presented with a similar severity of liver disease, represented by scores assessed using the Model for End-Stage Liver Disease (MELD) at the time of listing for LT. DCD donor liver grafts were randomly selected from transplants performed from August 2011 to August 2014, and all grafts were matched with DBD grafts performed in the same period. After transplantation, all patients received immunosuppressive therapy with tacrolimus and prednisolone. Graft performance was assessed based on serum AST, serum bilirubin and international normalised ratio (INR) levels after transplantation [23]. According to graft performance, recipients were classified into two groups, i.e., patients showing EAD (n = 10) and those showing EGF (n = 37). Survival data were collected for 34 recipients from the time of transplantation (between 2011 and 2014) until April 2019. The relevant donor and recipient details are listed in Table 1.

Sample Treatment
Sample preparation for all 94 biopsies followed our previously published method [24]. We transferred 100 µL of the lower aqueous phase from all samples to clean vials for further analysis. The samples were kept in the sample chamber at 4 °C, and the injection volume was 5 µL with the full-loop function (20 µL loop size). Chromatographic and spectrometric conditions for the analysis of polar metabolites followed a published protocol [25]. Quality control (QC) samples were run every 8 samples, in random order.

Statistics
All data were processed with the "XCMS" package in "R Studio" (version 1.0.153), and multivariate analyses were conducted in both "R Studio" and "SIMCA" (version 14, MKS Umetrics AB, Umeå, Sweden). Multivariate analysis included the pre- and post-transplant matched samples (n = 94; DBD n = 70, DCD n = 24) and 17 QCs. Principal component analysis (PCA) was carried out to detect outliers and to examine the distribution of the QCs. All pre-transplant data were then divided into a training dataset (DBD n = 30, DCD n = 5) and a test dataset (DBD n = 5, DCD n = 7). An orthogonal projections to latent structures discriminant analysis (OPLS-DA) model was built on the training dataset to examine the profiles of pre-transplant samples in the DBD and DCD groups, and the test dataset was used to assess the prediction ability of the model. An S-plot derived from the OPLS-DA model was then used to select features based on covariance (p1) and correlation (p(corr)) values (p1 > 0.1). Metabolic features in the LC-MS data were measured using Waters MassLynx software (Waters Corporation, Milford, MA, USA), and feature concentrations were expressed as ratios of peak areas to the internal standards' peak areas. Identification was performed by searching metabolite masses against in-house and public metabolite databases [26][27][28]. Metabolite structures and fragmentation patterns in the MS2 data were confirmed by comparison with those of pure standard molecules.
To compare the DBD and DCD groups, as well as the EAD and EGF groups, at the pre- and post-transplantation stages, the levels of the identified metabolites and their pairwise ratios were examined with the univariate non-parametric Mann-Whitney test (two-sided) followed by Benjamini-Hochberg correction. The ratios of selected metabolites in the normal (non-steatotic, n = 21) and steatotic (mild and moderate steatosis, n = 26) groups were also investigated. A post-hoc power calculation was performed for the EGF (n = 37) and EAD (n = 10) participants using the metabolite ratio values in "G*Power 3.1".
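The Benjamini-Hochberg step converts the raw two-sided Mann-Whitney p-values into the q-values quoted throughout the paper. A stdlib-only Python sketch of the procedure (illustrative only; the study itself worked in R and SPSS):

```python
def benjamini_hochberg(pvals):
    """Convert raw p-values to Benjamini-Hochberg q-values (FDR):
    q for the i-th smallest p-value is min over j >= i of m * p_(j) / j,
    where m is the number of tests."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down so the running minimum
    # enforces monotonicity of the q-values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        q[i] = running_min
    return q
```

For example, `benjamini_hochberg([0.01, 0.04, 0.03, 0.005])` returns q-values in the original order, each at least as large as its raw p-value.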
Furthermore, random forest machine learning and receiver operating characteristic (ROC) analysis were applied to choose the best predictors of EGF from the selected ratios. Three ROC curves were determined: the highest possible area under the curve (AUC) using clinical variables alone, using metabolites (or their ratios) alone, and using the combination of metabolites and clinical variables (packages "caret", "randomForest", "pROC" and "ggplot2" in R Studio). Correlation analyses between annotated metabolites and clinical features (serum AST, bilirubin, GGT) were then conducted; these calculations were performed in SPSS 23 (IBM, Armonk, NY, USA). Figures were plotted in GraphPad Prism 6 (GraphPad, La Jolla, CA, USA).
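The AUC reported by the ROC analysis has a useful rank-based reading: it equals the probability that a randomly chosen EGF case receives a higher classifier score than a randomly chosen EAD case (the Mann-Whitney U statistic rescaled to [0, 1]). A minimal Python sketch with hypothetical scores (the study used the R package "pROC"):

```python
def auc_from_scores(scores_pos, scores_neg):
    """AUC as the probability that a random positive case outscores a
    random negative case; ties count 1/2. Equivalent to the Mann-Whitney
    U statistic divided by n_pos * n_neg."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Under this reading, the panel's AUC of 0.84 means it ranks a random EGF case above a random EAD case roughly 84% of the time.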
Metabolite ratios, clinical variables and donor type were compared for their power to predict survival. Two logistic regression models were fitted to make predictions based on metabolite ratios and clinical variables, respectively, while donor type was used as a predictor directly. Participants were stratified into two equal-sized groups based on each of the three prediction models, and the survival of these strata was compared with Kaplan-Meier curves (packages "survival", "survminer" and "ggplot2" in R 3.4.2).
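For reference, the Kaplan-Meier estimator behind these curves multiplies, at each observed death time, the fraction of at-risk patients who survive it. A stdlib-only Python sketch with hypothetical follow-up data (the study used the R packages "survival" and "survminer"):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. `times` are follow-up times and
    `events` is 1 for death, 0 for censoring. Returns (time, S(t))
    steps at each observed death time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n_at_t = 0
        # Group all subjects sharing this event/censoring time.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= n_at_t
    return curve

# Example: 4 patients; deaths at t = 1 and t = 3, censoring at t = 2 and t = 4.
# kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0]) -> [(1, 0.75), (3, 0.375)]
```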

Clinical Outcomes
Demographics of all 94 patients in both groups are presented in Table 1. There were no significant differences between DBD and DCD groups in age, EAD/EGF, liver enzyme levels, hepatic steatosis or serum bilirubin levels. Differences were observed in recipient ages (p < 0.05) between groups.

Multivariate Model and Feature Selection
The unit variance (UV)-scaled dataset was first inspected to detect outliers. Next, the comparison between DBD and DCD samples at the pre-transplant stage was performed. An OPLS-DA model was built with a training dataset (DBD n = 30, DCD n = 5), and the model was tested with a test dataset (DBD n = 5, DCD n = 7). As shown in the misclassification table, the test samples in the DBD group were predicted with 100% accuracy, while the DCD samples were predicted with 85.71% accuracy (Table S1).
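Unit-variance scaling centres each metabolic feature on its mean and divides by its standard deviation, so abundant and scarce metabolites contribute comparably to the PCA/OPLS-DA models. A minimal stdlib-Python sketch of the transformation (the study performed this step in SIMCA):

```python
import math

def uv_scale(matrix):
    """Unit-variance (auto)scaling of a samples-by-features matrix:
    each column is mean-centred and divided by its sample standard
    deviation, giving every feature mean 0 and variance 1."""
    n, p = len(matrix), len(matrix[0])
    scaled = [[0.0] * p for _ in range(n)]
    for j in range(p):
        col = [row[j] for row in matrix]
        mean = sum(col) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in col) / (n - 1))
        for i in range(n):
            # Constant features carry no discriminating information; map to 0.
            scaled[i][j] = (matrix[i][j] - mean) / sd if sd > 0 else 0.0
    return scaled
```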
In order to identify which metabolic features were the strongest discriminators between DBD and DCD at pre-transplant, an S-plot (Figure S2) derived from the OPLS-DA model was used to select 12 features based on the criteria stated in Section 2. From the 12 selected features, 5 metabolites were annotated (Table S2).
Five features were identified as purines at pre-transplant, and their levels in DBD and DCD are represented as bar plots in Figure S3. Additionally, jittered scatter plots of the ratios of the levels of four purines to those of urate are shown in Figure 1. At the pre-transplant stage, the ratios AMP/urate, adenosine/urate, adenine/urate and hypoxanthine/urate were significantly higher in the DBD group than in the DCD group (q < 0.001). Moreover, the scatter plots showed that the ratios AMP/urate and adenine/urate were higher in the EGF group than in the EAD group. The Mann-Whitney test confirmed that the mean ratios adenine/urate and AMP/urate were significantly different between EAD and EGF (q < 0.05), whereas adenosine/urate and hypoxanthine/urate showed no significant difference between the EAD and EGF groups.

Figure 1. Results are presented as mean ± SD; p-values were derived from Mann-Whitney tests, followed by Benjamini-Hochberg false discovery rate (FDR) correction (* q < 0.05, *** q < 0.001). DBD, donation after brain death; DCD, donation after circulatory death; EGF, early graft function; EAD, early allograft dysfunction.
At the post-transplant stage, the ratios AMP/urate, adenine/urate and hypoxanthine/urate (q < 0.05) were significantly higher in the DBD group compared to the DCD group. Additionally, the scatter plot illustrated that the ratios AMP/urate, adenosine/urate and adenine/urate were elevated in the EGF group compared to the EAD one (q < 0.05).
The comparison of metabolite ratio levels between the normal and steatotic groups revealed no significant difference (Figure S4). In addition, a post-hoc power calculation was performed to assess this study, giving 77% power to detect differences between the EGF and EAD groups.

Random Forest with Metabolites and Clinical Variables
Machine learning was applied to identify variables acting as classifiers between the EAD and EGF groups. From the included variables (AMP/urate, adenine/urate, hypoxanthine/urate, adenosine/urate, ALT, bilirubin, AST, GGT, steatosis status and donor age), high importance scores for EGF were observed for the ratios AMP/urate, adenine/urate and hypoxanthine/urate and for ALT (Figure 2A). The prediction ability of the purine ratios and ALT at pre-transplant was evaluated with ROC analysis. The accuracy, area under the curve (AUC), sensitivity and specificity of individual metabolites, enzymes and their various combinations in predicting EGF are listed in Table 2. The combination of the three purine-to-urate ratios and ALT showed reliable prediction ability with a high AUC, while the combination of the four purine-to-urate ratios demonstrated relatively higher accuracy, specificity and sensitivity (Figure 2B). Using random forest, a panel composed of ALT and the ratios of AMP, adenine and hypoxanthine levels to urate levels predicted EGF with an AUC of 0.84 (95% CI (0.71, 0.97)). In comparison, an AUC of 0.71 (95% CI (0.52, 0.90)) was achieved using the clinical parameters alone.
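The accuracy, sensitivity and specificity values in Table 2 derive from a classifier's confusion matrix. A minimal Python sketch with hypothetical labels, treating EGF as the positive class (the study computed these in R):

```python
def classification_metrics(y_true, y_pred, positive="EGF"):
    """Accuracy, sensitivity and specificity from true vs. predicted
    labels, with `positive` as the class of interest."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return {
        "accuracy": (tp + tn) / len(y_true),
        # Sensitivity: fraction of true EGF cases correctly identified.
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        # Specificity: fraction of true EAD cases correctly identified.
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }
```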
In order to investigate whether the levels of liver enzymes were associated with those of the analysed purines, partial correlation analysis was performed. Purine relative amounts in pre- and post-transplant samples, together with serum AST, bilirubin and GGT in donors on the day of operation (day 0) and in recipients on the day after the operation (day 1), were included in the correlation analyses. As shown in Table 3, the only correlation that remained significant (q < 0.05) after Benjamini-Hochberg correction was that between hypoxanthine and serum bilirubin.

Table 3. Partial correlation analysis (Pearson's correlation, adjusting for patient age) between the levels of the five selected metabolites and those of liver enzymes; p-values are represented as q-values after applying Benjamini-Hochberg correction; * p or q < 0.05, ** p or q < 0.01.

Survival Analysis Based on Purines, Clinical Variables and Donation Groups
The purine ratio predictor (AMP/urate, adenine/urate, hypoxanthine/urate, adenosine/urate) stratified the participants: all five deaths occurred in the <50% stratum, in which the metabolites predicted a lower chance of survival (Figure 3A; p = 0.0073). For the clinical predictor (ALT, bilirubin, AST, GGT, steatosis status and donor age), three of the five deaths occurred in the <50% stratum, indicating no significant prediction (Figure 3B; p = 0.54). Similarly, three of the five deaths occurred in the DBD group, and group class could not predict survival (Figure 3C; p = 0.15).

Discussion
The five metabolites that were highly correlated to DCD are generated in the purine metabolism pathway (Figure 4) [29,30]. Metabolites in the purine pathway have a myriad of functions and are important in regulating inflammation [31] and oxidative injury and as markers of cell death. In liver tissue undergoing cold and warm ischemia, the dysregulation of their levels could be related to energy, inflammation and ischemic tissue damage [32][33][34].
Purines can act as physiological regulators of leucocyte function [35], but to be functional they must be released in the appropriate microenvironment following stimuli [36]. It is thought that liver inflammation is due to a cascade of inflammatory events that occur mainly in donors after brain death [37]. On the other hand, our studies have shown that DCD grafts undergo low inflammation and increased hepatocellular damage due to warm ischaemia time [37,38].
In this study, AMP and adenine were found to be critically decreased in DCD. Studies have shown that adenine and AMP have a protective function during ischaemia [39]. Roy et al. found that, in addition to being mediators of graft recovery, high levels of AMP during ischaemia, when oxygen is low, indicate that ATP is still being generated [40,41]. This might explain why DBD allografts and the EGF group showed increased levels of both metabolites, as a higher energy reserve could improve post-transplant graft function [42].
AMP is also known to be protective during inflammation. It is generated from ATP and ADP by ectoapyrase (CD39) and released at the site of vascular injury when platelets aggregate to promote endothelial barrier function during inflammation [30]. Michael et al. found that overexpression of CD39 and hence increased AMP production conferred protection in both warm and cold hepatic ischaemia [43].
Adenine has been employed as a substrate to promote recovery. It has been shown that cells dying as a result of ischaemia undergo lysis to release adenine [44]. Kartha et al. demonstrated in vitro that adenine nucleotides accelerated structural and functional recovery in epithelial cells [45]. This would suggest that DBD liver allografts ( Figure S3B), with elevated levels of adenine at pre-transplant, may recover more rapidly.
Although not significantly elevated at pre-transplant, the levels of adenosine and hypoxanthine showed the same trend as those of AMP and adenine. Wyatt et al. found that a solution containing hypoxanthine and adenosine enhanced functional organ recovery after ischaemia/reperfusion (I/R) injury in dogs [46].
Increased levels of urate were observed in DCD livers pre-transplant (q < 0.001) (Figure S3E). In humans, urate is the final product of purine metabolism [47]. In an experiment conducted by Matthew et al., in which hepatic ischaemia was induced for 30 min followed by 60 min of reperfusion, urate levels increased by over 300% after ischaemia and by 600% during the first 30 min of reperfusion [48]. Clear differences were revealed between the DBD and DCD groups, as well as between the EGF and EAD groups, when the ratios of purine levels to urate levels were investigated (Figure 1). Epidemiological studies have also suggested that urate levels increase during I/R injury [49]. DCD allografts are more prone to I/R injury due to their exposure to a period of warm ischaemia [50].
We then assessed the prediction ability of classifiers including purines for the outcomes of EGF and longer-term survival. The model for EGF revealed that combining the three ratios (AMP/urate, adenine/urate and hypoxanthine/urate) with ALT gave the highest diagnostic potential for EGF prediction, reaching an AUC of 84% with a confidence interval of 71% to 97%. While higher accuracies were observed when purine levels were combined with known risk factors and enzyme markers (Table 2), the width of the confidence interval shows that our study needs replication in a bigger cohort. Also, considering the average post-hoc power of 77% to distinguish EGF from EAD, at least 52 samples (EGF = 41, EAD = 11, preserving the sample ratio used in this study) would be needed to increase the power to 80%. The alterations we observed in the metabolite ratios were not related to steatosis status, as no significant difference was observed between the normal and steatotic groups.

The survival analysis revealed that the metabolite ratios were the best predictor of survival compared with the other classifiers, i.e., clinical variables and the type of liver donor, better predicting deaths in this small dataset. These preliminary results indicate that purine ratios may be useful in predicting prognosis, in addition to clinical profiles or donor graft types. However, the small number of patients in this study, and the fact that samples were limited to biopsies from operations conducted in a single centre, mean that validation should be performed through a multicentre trial assessing early graft function. A further limitation is that the first biopsy was taken before reperfusion; for optimal results, a biopsy taken from the donor should also be included in the study design.
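To give a feel for how the quoted power scales with the group sizes, a power calculation for a two-sided, two-sample comparison can be sketched with a normal approximation. This is a simplification of the t-test-based calculation performed in G*Power, and the standardised effect size used below is hypothetical:

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_sample_power(d, n1, n2, z_alpha=1.96):
    """Approximate power of a two-sided two-sample test at the 5% level
    for standardised effect size d (normal approximation to the t-test).
    The effect size d is a hypothetical input, not a value from the study."""
    z = d / math.sqrt(1.0 / n1 + 1.0 / n2)
    # Probability of rejecting in either tail under the alternative.
    return norm_cdf(z - z_alpha) + norm_cdf(-z - z_alpha)
```

Under this approximation, growing the groups from (EGF = 37, EAD = 10) to (EGF = 41, EAD = 11) raises power for any fixed effect size, consistent with the sample-size estimate above.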
To translate this panel to the clinic and minimise its turnaround time (TAT), the test could be performed intraoperatively using available technology such as rapid evaporative ionisation mass spectrometry (REIMS).

Conclusions
In this study, the combination of AMP/urate, adenine/urate, hypoxanthine/urate and ALT proved to have higher prediction ability compared to a combination of conventional liver function and risk markers. This study proposes a panel of small molecules at pre-transplantation that can aid in testing liver tissue quality for liver transplantation.
Supplementary Materials: The following are available online at http://www.mdpi.com/2077-0383/9/3/711/s1, Table S1: Misclassification table for the test dataset based on the training dataset model; Table S2: Annotation of markers based on molecular weight, retention time and collision-induced dissociation fragmentation of five metabolites; Figure S1: Study workflow; Figure S2: Metabolic feature selection from the S-plot; Figure S3

Conflicts of Interest:
The authors declare no conflict of interest.