Machine-learning prediction for quasiparton distribution function matrix elements

There have been rapid developments in the direct calculation in lattice QCD (LQCD) of the Bjorken-x dependence of hadron structure through large-momentum effective theory (LaMET). LaMET overcomes the previous limitation of LQCD to moments (that is, integrals over Bjorken x) of hadron structure, allowing LQCD to directly provide results in the kinematic regions where the experimental values are least known. LaMET requires large-momentum hadron states to minimize its systematics and to reach small x reliably. This means that very fine lattice spacings, needed to minimize lattice artifacts of order (P_z a)^n, will become crucial for next-generation LaMET-like structure calculations. Furthermore, such calculations require operators with long Wilson-link displacements, especially in finer lattice units, increasing the communication costs relative to those of the propagator inversion. In this work, we explore whether machine-learning algorithms can predict correlators to reduce the computational cost of these LQCD calculations. We consider two algorithms, a gradient-boosting decision tree and a linear model, applied to LaMET data: the matrix elements needed to determine the kaon and η_s unpolarized parton distribution functions (PDFs), the meson distribution amplitude (DA), and the nucleon gluon PDF. We find that both algorithms can reliably predict the target observables, with different prediction accuracies and systematic errors. Predictions from smaller Wilson-link displacements z to larger ones work better than predictions in momentum p, due to the higher correlation among the data.


I. INTRODUCTION
In the early days, probing hadron structure with lattice QCD (LQCD) was limited to only the first few moments, due to complications arising from the breaking of rotational symmetry by the discretized Euclidean spacetime. The nonzero lattice spacing breaks the symmetry group of Euclidean spacetime from O(4) to the discrete hypercubic subgroup H(4). Due to the reduced symmetry, the required operators are more complicated and often either suffer from divergences or mix with other operators under renormalization. This is treatable but complicated. As a result, even with increasing computational resources becoming available to the lattice-QCD community, LQCD hadron-structure calculations were limited to the lowest few moments (see Refs. [1,2] and references within for more details). Although modeling the x dependence to reproduce the calculated lattice moments was attempted [3], this only yields the difference between quark and antiquark contributions rather than the individual (anti)quark contributions. Experiments such as E665 at FNAL can probe the nucleon sea flavor asymmetry, which lattice QCD could not address with traditional moment calculations alone. Similarly, STAR at RHIC is probing the polarized (anti)quark structure of the nucleon, and the future electron-ion collider will further study the sea structure. Facing these challenges, LQCD required a new, computationally friendly approach to extend its applicability to calculations of PDFs and catch up with ongoing experimental efforts.
Large-momentum effective theory (LaMET) [4] is one of the most widely adopted new methods for calculating the full x dependence of hadron structure. In the LaMET framework, we take an operator containing a Wilson line along a spatial direction and boost the nucleon momentum toward the speed of light, tilting the spacelike line segment toward the light-cone direction. The time-independent, spatially nonlocal correlators at finite P_z can be directly evaluated on the lattice. For example, the unpolarized quark distribution of a hadron can be calculated via (schematically)

q_lat(x, μ, P_z) = ∫ dz/(4π) e^{i x P_z z} ⟨P| ψ̄(z) Γ U_z(z, 0) ψ(0) |P⟩,

where U_z is a discrete gauge link in the z direction, Γ = γ_t, x = k/P_z, μ is the renormalization scale, and ⃗P is the momentum of the hadron, taken such that P_z → ∞. The q_lat(x, μ, P_z), often called the "quasi-PDF" [5], is related to the light-cone PDF through a factorization theorem: the former can be factorized into a perturbative matching coefficient and the latter, up to power corrections suppressed by the nucleon momentum. This factorization theorem is founded in LaMET [4,6-9], where the matching coefficient can be calculated exactly in perturbation theory. Lattice-QCD results using LaMET already include the isovector quark PDF of the nucleon [10-14], the pion generalized parton distribution [15], the meson distribution amplitudes (DAs) [16,17], and the nonperturbative renormalization in the regularization-independent momentum-subtraction scheme [18,19]. Certain technical issues regarding the nonperturbative renormalization were raised and addressed in Refs. [14,18-23]. The finite-volume effect in the nucleon quasi-PDF was studied in Ref. [24].
Even with these promising results published and efforts ongoing, much work remains to be done. For example, most work so far has been limited to a single ensemble; more detailed studies incorporating the systematic errors from lattice artifacts, such as finite-volume and lattice-spacing effects, are necessary to reach precision LQCD PDFs. Larger boost momentum in the hadron is important to suppress finite-momentum corrections, as well as to access the antiquark distribution and the small-x quark distribution. Ensembles with smaller lattice spacing (a^{-1} > 4 GeV) will become crucial in the next generation of LaMET calculations. To reach larger boost momentum P_z, a smaller lattice spacing is needed to control the (P_z a)^n lattice artifacts, similar to how heavy-quark studies must control the heavy-quark mass artifacts of order (m_q a)^n. Likely, more than O(100,000) measurements will be needed to obtain a good signal-to-noise ratio for the three-point correlators and to allow us to disentangle the excited states from the ground state. A larger number of degrees of freedom will be necessary to keep the finite-volume systematic within the consensus optimal region, M_π L ≈ 4. More communication costs will be incurred transporting the Wilson link from one side of the lattice to the other, which can easily become a dominant cost of the calculation. Although optimizing communication efficiency may address the latter problem, we hope to find a method that works for both the large-momentum and the Wilson-link displacement issues that are characteristic of LaMET and similar approaches.
Recently, the authors of Ref. [25] introduced a machine-learning (ML) approach predicting observables by taking advantage of the correlations between lattice-QCD observables. Two types of data with high-statistics measurements, O(100,000), were used in those studies: nucleon isovector charges and the CP-violating phase induced by the quark chromoelectric dipole moment interactions. The authors found a reduction in the computational cost by 7%-35%, depending on the observable, showing very promising potential for lattice-QCD applications. In this work, we are interested in finding out how well the ML approach works in the difficult computational situation of LaMET-type observables and whether computational cost can be saved for future large lattices (say, 64^3 × 192 and above). Although this paper focuses on the quasi-PDF, what we learn here also applies to the pseudo-PDF^1 [27-30] correlators, since the building blocks of the matrix elements are the same.
The structure of this paper is as follows: In Sec. II we briefly describe the two ML algorithms used in this work. Section III demonstrates the application of both algorithms to LaMET-type observables, including the correlators for the meson distribution amplitude, the kaon and η_s parton distribution functions, and the nucleon gluon parton distribution function, and compares the results of the ML predictions. We summarize the conclusions and future prospects of this work in Sec. IV.

II. MACHINE-LEARNING ALGORITHM
ML works by optimizing a prediction model mapping input data to output data, creating a function, inferred from the data, that approximates the relationship between them. The model is built from a set of data whose labels (outputs) are known, and it is then applied to predict the labels of a new set of data whose labels are unknown, assuming that a consistent mapping function between the input and output data exists. In this study, we use regression algorithms, a class of ML approaches, to make quantitative predictions of lattice-QCD measurements. Specifically, the supervised ML regression algorithms we use are simple linear regression and the gradient-boosting tree (GBT) algorithm [31], as implemented in the PYTHON scikit-learn package [32]. Although all the data we use in this test have labels, we divide them into labeled and unlabeled sets by hiding the labels of the unlabeled dataset. The labeled dataset is used for the training and bias-correction procedures, while the unlabeled dataset is used to test the trained regression algorithm.
Gradient boosting is one of the techniques creating a strong model from an ensemble of weak prediction models [33,34]. For GBT, shallow decision trees are used as the weak learners (nested mappings as elements of a more complicated function approximation), applied in series to build the active model

F(x) = Σ_{i=1}^{N_est} r_i h_i(x),    (2)

where N_est is the number of estimators, r_i is the learning rate, and h_i(x) is the function of the i-th base decision tree, chosen to minimize the loss function

L = Σ_j [C_j − F(x_j)]^2,

where the subscript j iterates over the training-data samples. In this work, the loss function is chosen to be the mean squared error, and the depth of the decision trees is fixed at 3. To optimize the ML predictions, we must choose the model parameters in Eq. (2) within the proper range. Two parameters are tuned explicitly in this process: the learning rate r and the number of estimators N_est. The prediction accuracy of GBT is compared with that of the linear regression model on the same set of data. Quantitatively, the prediction accuracy of the regression models is represented by the fit variance F_v, defined as

F_v = 1 − ⟨(C_ul − C_pred)^2⟩ / σ^2,

where C_ul and C_pred are the observed and predicted measurements on the unlabeled dataset, respectively, and σ^2 is the variance of the observed measurements. A higher value of F_v indicates a better fit quality; the maximum value, F_v = 1, corresponds to a perfect prediction, C_pred = C_ul. In practical calculations, F_v can be calculated on the bias-correction subset of the labeled dataset, which is described below. However, we use the unlabeled dataset for the calculation of F_v, because C_ul are available in this test study.

¹Another paper [26] applied the neural network (NN) algorithm to the inversion problem of reconstructing the PDF from pseudo-PDF matrix elements, though the model was trained and tested on mock datasets instead of real lattice data.
A prediction from an ML algorithm may be biased due to prediction error. We follow the bias-correction strategy introduced in Ref. [25] to remove the bias in our estimate and define the bias-corrected prediction as

⟨C^BC_pred⟩ = ⟨C_pred⟩_ul + ⟨C − C_pred⟩_BC,

where the brackets with subscripts "ul" and "BC" denote averages over the unlabeled and bias-correction datasets, respectively. After bias correction, the expectation value of the prediction becomes the same as the expectation value of the ground truth, and its statistical error includes the systematic error due to inaccurate predictions. After the bias correction, therefore, our main concern is reducing the statistical error of the final estimate. We normalize the labeled data so that the standard deviation of each input measurement becomes 1 before passing it to the ML algorithms. The subsets of the data (training, bias-correction, and unlabeled datasets) described in Ref. [25] are chosen such that the configurations are evenly distributed. The notational conventions used throughout this work are given in Table I.
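The bias-correction step above can be illustrated with a small numerical sketch (all numbers are hypothetical; the model here is replaced by a stand-in that systematically overshoots by a constant):

```python
import numpy as np

def bias_corrected_mean(pred_ul, pred_bc, meas_bc):
    """Bias-corrected estimate in the spirit of Ref. [25]: the mean
    prediction on the unlabeled set plus the mean residual
    (measurement minus prediction) on the bias-correction set."""
    return np.mean(pred_ul) + np.mean(meas_bc - pred_bc)

rng = np.random.default_rng(1)
truth_ul = rng.normal(1.0, 0.1, 900)   # ground truth on unlabeled set
truth_bc = rng.normal(1.0, 0.1, 500)   # measurements on BC set

# A stand-in "model" with a systematic +0.05 bias plus small noise.
pred_ul = truth_ul + 0.05 + rng.normal(0.0, 0.01, 900)
pred_bc = truth_bc + 0.05 + rng.normal(0.0, 0.01, 500)

est = bias_corrected_mean(pred_ul, pred_bc, truth_bc)
# The +0.05 bias largely cancels in the corrected estimate, at the price
# of the BC residual's statistical fluctuation entering the error budget.
```

The raw prediction mean is off by about 0.05, while the corrected estimate agrees with the true unlabeled mean within statistical noise.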
The errors of the predictions are estimated using the bootstrap method. We randomly draw bootstrap samples of the labeled and unlabeled datasets and partition the labeled one into training and BC datasets. We train the model and estimate the bias correction on each bootstrap sample of labeled data, make predictions on the corresponding sample of unlabeled data, and calculate the average of the results for the unlabeled data. The error is then estimated over all bootstrap samples.
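The core of this error estimate, stripped of the training step, can be sketched as follows (synthetic per-configuration values; in the actual procedure the training/BC/unlabeled partition and model fit are redone on each bootstrap sample):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(0.5, 0.05, size=300)  # hypothetical per-config values

# Bootstrap error of the mean: resample with replacement, recompute the
# estimator on each resample, and take the spread over resamples.
n_boot = 200
boots = np.array([np.mean(rng.choice(data, size=data.size, replace=True))
                  for _ in range(n_boot)])
err = np.std(boots)
```

The bootstrap spread reproduces the expected standard error of the mean, sigma/sqrt(N) ≈ 0.003 here.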

A. Predictions of meson quasi-DA measurements
Meson DAs ϕ_M are important universal quantities appearing in many factorization theorems, which allow for the description of exclusive processes at large momentum transfers Q^2 ≫ Λ^2_QCD [35,36]. Such quantities can be calculated using LaMET [4,8] by computing time-independent spatial correlators (the quasi-DA) on the lattice, followed by a matching procedure with corrections suppressed by the hadron momentum. The light-cone meson DA can be extracted from the quasi-DA according to LaMET. The quasi-DA can be obtained by computing the following correlators for K^- and η_s, as presented in Refs. [16,17] (schematically),

⟨0| ψ̄_1(⃗x + zẑ, t) γ_z γ_5 U(⃗x, ⃗x + zẑ) ψ_2(⃗x, t) |M(⃗P)⟩,

where {ψ_1, ψ_2} are {u, s} for K^- and {s, s} for η_s, and U(⃗x, ⃗x + zẑ) is the Wilson line connecting lattice site ⃗x to ⃗x + zẑ. We perform the calculation using gauge ensembles with clover valence fermions on a 48^3 × 144 lattice with 2+1+1 flavors (degenerate up and down, strange, and charm degrees of freedom) of highly improved staggered quarks (HISQ) [38] generated by the MILC Collaboration [39]. The lattice spacing is a ≈ 0.06 fm, and m^sea_π = 310 MeV. Hypercubic (HYP) smearing [40] is applied to the configurations. The bare quark masses and clover parameters are tuned to recover the lowest pion mass of the staggered quarks in the sea. Correlators are calculated from momentum-smearing sources [41] using 20 source locations on each of the 95 configurations (1900 measurements in total).
We make two predictions using the ML algorithm. One is to predict the correlators at larger link length z pred from the correlators at z in <z pred . The other is to predict the correlators of larger momentum p pred from the correlators of p in <p pred .
To determine which input data to use for these predictions, we first check the correlations among datasets with different momenta, link lengths, and time slices. The results are shown in Fig. 1, where the target data are the two-point quasi-DA correlators at z_pred = 4, p_pred = 5, t_pred = 7. Larger time slices, in addition to having larger errors, show weaker correlation with the target data. This suggests that we should use input data close to the time slice of the target data. On the other hand, we should be able to extend the range of momenta or links of the input.
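The correlation check amounts to computing correlation coefficients between candidate input and target observables across measurements. A minimal sketch on synthetic data (two correlator samples sharing a common fluctuation, standing in for measurements at neighboring link lengths on the same configurations; all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n_meas = 1900

# Hypothetical correlator samples at two link lengths: a shared
# configuration-level fluctuation plus independent noise.
common = rng.normal(0.0, 1.0, n_meas)
c_z3 = 1.0 + 0.10 * common + 0.03 * rng.normal(0.0, 1.0, n_meas)
c_z4 = 0.8 + 0.09 * common + 0.05 * rng.normal(0.0, 1.0, n_meas)

# Correlation between candidate input (z = 3) and target (z = 4) data,
# the quantity mapped out in Fig. 1.
corr = np.corrcoef(c_z3, c_z4)[0, 1]
```

Here the shared fluctuation gives a correlation near 0.84; in the real data, this coefficient decides which momenta, links, and time slices are worth feeding to the model.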
In the training process, we tried learning rates in {0.5, 0.2, 0.1, 0.02, 0.01, 0.005, 0.002} and numbers of estimators in {100, 150, 200, 250, 300}. The corresponding fit variances are plotted in a heat map with range [0,1], as shown in Fig. 2. Considering the fit quality for both p predictions and z predictions, we selected the parameters r = 0.1 and N_est = 150 as having the highest fit quality in both cases; these are used for the further meson-DA predictions.
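One inexpensive way to fill such a heat map, sketched here on synthetic data (the grid values follow the text; the data, coefficients, and split sizes are illustrative assumptions), is scikit-learn's `staged_predict`, which yields the prediction after every boosting stage from a single training pass per learning rate:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)

# Synthetic stand-in for labeled correlator data (hypothetical).
X = rng.normal(0.0, 1.0, size=(900, 3))
y = X @ np.array([0.5, 0.3, 0.1]) + rng.normal(0.0, 0.05, size=900)
X_tr, y_tr = X[:400], y[:400]
X_ul, y_ul = X[400:], y[400:]

# One fit per learning rate fills a whole row of the (r, N_est) heat map:
# staged_predict evaluates every intermediate estimator count for free.
fv = {}
for r in [0.5, 0.2, 0.1, 0.02]:
    gbt = GradientBoostingRegressor(max_depth=3, learning_rate=r,
                                    n_estimators=300).fit(X_tr, y_tr)
    for n, y_hat in enumerate(gbt.staged_predict(X_ul), start=1):
        if n in (100, 150, 200, 250, 300):
            fv[r, n] = 1.0 - np.mean((y_ul - y_hat) ** 2) / np.var(y_ul)

best_r, best_n = max(fv, key=fv.get)
```

Scanning this way makes the qualitative pattern in Fig. 2 easy to reproduce: smaller learning rates need more estimators to reach the same F_v.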
The datasets were evenly distributed into three parts: training data, bias-correction data, and unlabeled test data. In practice, we want to minimize the labeled data size without sacrificing much prediction quality. We varied the amounts of training data and bias-correction data from 300 to 500, while keeping the number of unlabeled test data fixed at N_ul = 900, to look for the best trade-off between reduced data size and prediction quality. The results are shown in Fig. 3. When the correlation is obvious, a small number of training and bias-correction data provides a precise estimate that is very close to the true observations for the unlabeled dataset. When the correlation is vague, the prediction becomes more precise as one increases the size of the training or bias-correction datasets. Based on the plot, we picked N_tr = 400, N_BC = 500 for the further estimations.

FIG. 1. Correlations between the target η_s DA C_2pt data at z_pred = 4, p_pred = 5, t_pred = 7 and input data at different link lengths (momenta) and time slices for z prediction (left) and p prediction (right). The correlation decays quickly, especially at larger t.
To further check the consistency of our predictions with the observations, we calculate the effective mass from C_2pt and compare the results, using the standard log-ratio definition

m_eff(t) = ln[C_2pt(t)/C_2pt(t + 1)].

We then compared different input data to be used for z prediction in Table II. The bias correction makes the prediction noisier by converting the systematic error into statistical error, but it improves the accuracy of the prediction in most cases.
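The effective-mass check can be sketched as follows (the log-ratio convention is assumed here, since the paper's equation is given only implicitly; the correlator numbers are a toy example):

```python
import numpy as np

def effective_mass(c2pt):
    """Effective mass from the log-ratio of neighboring time slices,
    m_eff(t) = ln[C_2pt(t) / C_2pt(t+1)] (assumed standard convention)."""
    c = np.asarray(c2pt, dtype=float)
    return np.log(c[:-1] / c[1:])

# A pure ground-state exponential C(t) = A exp(-m t) gives a flat
# effective mass equal to m; excited states appear as curvature at small t.
t = np.arange(10)
c2pt = 3.0 * np.exp(-0.8 * t)
m_eff = effective_mass(c2pt)
```

Comparing m_eff from predicted and observed correlators, as in Table II, tests whether the ML prediction preserves the physics content and not just the central values.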
For small datasets, such as what we have for the quasi-DA data, it can be difficult for the GBT model to extract the nonlinear pattern of the training dataset. As a consequence, the fit quality of the GBT model on the test data is poor, and the simpler linear regression shows better performance. Sometimes, however, when the input data are noisy (e.g., larger-t data), the linear regression fails with poor prediction quality, as shown in Table III, while GBT is able to capture the correlation and make predictions. Using cleaner and more correlated input data, such as the closest time slice, momentum, and link, can significantly improve the fit quality for linear regression. Overall, the linear model is more accurate than GBT, and both models perform better for z prediction than for p prediction. It is also clear that more estimators are needed for smaller learning rates; that N_est can be increased without worsening the prediction indicates that the model is robust against overfitting.

After determining the parameters, we run the ML program and show the effective mass of our predictions along with the observed datasets for both the p and z predictions in Fig. 4. The linear model works well for z prediction, but the GBT model and the p predictions still need to be improved.

FIG. 3. The observed and z-predicted η_s DA effective mass at p_pred = 5, z_pred = 4, t_pred = 4 with input p_in = 5, z_in ∈ [0, 3], t_in ∈ [3, 5] for different choices of training-data and bias-correction-data counts. The left (right) plot is the prediction of the GBT (linear) model. The horizontal axis is N_tr + 0.1 N_BC, with N_ul = 900 fixed. The GBT parameters are N_est = 150, r = 0.1. The blue points are predictions with bias correction for the unlabeled test data, and the brown points are observations for the unlabeled test data.

TABLE III. Effective mass calculated from the prediction of η_s DA C_2pt at p_pred = 5, z_pred = 4, t_pred = 10 with different models and different input time slices. Models are trained with N_tr = 400, N_BC = 500, N_ul = 900, N_est = 150, r = 0.1. The linear model has better performance on correlated, cleaner data but fails when more uncorrelated, noisy input data are included. The GBT is more stable and less sensitive to these inputs.


B. Predictions of kaon quasi-PDFs
As Nambu-Goldstone bosons associated with dynamical chiral SU(3) symmetry breaking, the pion and kaon serve as a fundamental testing ground for our understanding of QCD at the hadronic scale. Ab initio calculations of hadron PDFs from lattice QCD provide theoretical input for particle-discovery experiments and Standard-Model (SM) tests at colliders [42]. After decades of theoretical and experimental efforts, the precision required of PDFs for more stringent tests of the SM has increased significantly. In Ref. [43], we presented the first direct lattice calculation of the valence-quark distribution in the pion, using the MILC HISQ coarse ensemble with M_π ≈ 330 MeV. Since the computational cost of quasi-PDF measurements on an ensemble at lighter pion mass or reduced lattice spacing would increase significantly, in this work we investigate ML algorithms to reduce the computational cost.
We test on the meson unpolarized quasi-PDF measurements on the lattice, schematically,

C_2pt(⃗p, t) = Σ_⃗x e^{-i⃗p·⃗x} ⟨M_ps(⃗x, t) M†_ps(⃗0, 0)⟩,
C_3pt(z, ⃗p, t, t_sep) = Σ_{⃗x,⃗y} e^{-i⃗p·⃗x} ⟨M_ps(⃗x, t_sep) q̄(⃗y + zẑ, t) γ_i U(⃗y + zẑ, ⃗y) q(⃗y, t) M†_ps(⃗0, 0)⟩,

where C_3pt is the three-point correlator, C_2pt is the two-point correlator, M_ps = q̄γ_5 q is the pseudoscalar meson operator, z is the length of the Wilson link, U_μ(x, t) is the gauge link, and γ_i are Dirac matrices. For this study we use Wilson clover valence quarks on a MILC HISQ ensemble. The lattice spacing is a ≈ 0.12 fm, the lattice volume is V = 40^3 × 64, and the sea pion mass is M^sea_π ≈ 220 MeV. The valence-quark masses are tuned to match the valence pion mass to the sea pion mass. We adopt Gaussian momentum smearing [41] to generate the quark sources, to enhance the ground-state signal at nonzero momentum near 1.55 GeV. The Gaussian smearing width is chosen to be 3, with 50 iterations, and the momentum parameter is k = 4.82. Measurements are made on 495 configurations, using 4 quark-source locations per configuration, making 1960 measurements in total. Measurements are averaged over these quark sources before being passed to the ML algorithm, as this has been shown to provide predictions with smaller statistical errors. The ratio of the three-point correlator to the two-point correlator is a useful way to extract the matrix elements:

R(z, ⃗p, t, t_sep) = C_3pt(z, ⃗p, t, t_sep) / C_2pt(⃗p, t_sep),    (14)

where R(t) is the ratio at the operator insertion time t, and t_sep is the meson source-sink temporal separation.
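The ratio extraction can be sketched with a single-state toy model (all numbers hypothetical), which makes its key property explicit: with only the ground state present, R(t) is flat and equals the bare matrix element.

```python
import numpy as np

def ratio(c3pt, c2pt_at_tsep):
    """R(t) = C_3pt(z, t_sep, t) / C_2pt(t_sep). For 0 << t << t_sep the
    excited-state contamination dies off and R(t) plateaus at the bare
    matrix element."""
    return np.asarray(c3pt, dtype=float) / c2pt_at_tsep

# Single-state toy model: ground-state energy m, overlap A, matrix element.
t_sep, m, A, mat_elem = 6, 0.9, 2.0, 0.35
c2pt = A * np.exp(-m * t_sep)
c3pt = np.full(t_sep - 1, mat_elem * A * np.exp(-m * t_sep))
R = ratio(c3pt, c2pt)
```

In the real data, excited states bend R(t) near the source and sink, which is why the ML comparisons later quote R(t) at fixed insertion times.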

Kaon quasi-PDF results
For the kaon quasi-PDF, the meson operator is K = ūγ_5 s. We first check the correlations for the three-point correlators with insertion operator γ_4. Generally, the correlations are better than in the DA case. The correlations between different times are shown in Fig. 5. The correlation is insensitive to the insertion time, but sensitive to the difference between the two-point time slice and the three-point source-sink time separation. Because the correlators are similar for different insertion times, we can use all the insertion time slices as input in the same procedure. An anomaly is observed in the momentum correlation in Fig. 6, which may be due to the different numbers of measurements for p ∈ {3, 4} and p ∈ {5, 6}, since we had an extra run for p ∈ {5, 6} with different source locations. The link correlation is displayed in Fig. 7; it decays slowly in the z direction, suggesting that we may use more data at different links as inputs.

FIG. 5. Correlations between C_2pt and C_3pt of the kaon quasi-PDF at different time separations (left, with insertion time t_3pt = t_sep/2) and at different insertion times (right, with t_sep = 6). The correlation is insensitive to the insertion time.
Again, we compare the parameters used for the GBT model. Figure 8 shows the fit-variance estimate F_v from both the z prediction (with p_in = p_pred and z_in < z_pred) and the p prediction (with p_in < p_pred and z_in = z_pred), using the GBT model trained on 400 measurements. The horizontal axis shows the number of estimators N_est, and the vertical axis shows the learning rate r. The target measurement is at p_pred = 4, z_pred = 4, t_sep = 5, and t = 2. For each prediction we used both C_3pt and C_2pt. In either case a set of fit parameters can be chosen as, e.g., N_est = 150, r = 0.1. As expected, with a reduced learning rate, one needs more estimators to achieve a similar fit variance. With fixed learning rate, the fit variance becomes stable as we keep increasing N_est, indicating that the model is robust against overfitting. Figure 9 compares the final predictions among various numbers of training and bias-correction measurements: N_tr and N_BC are selected from {80, 160, 240, 320, 400}, and the number of unlabeled measurements is fixed at N_ul = 1180. The fit parameters are as above. We observe a reduced error in the final predictions with increased N_tr and N_BC.
Using p prediction for the kaon quasi-PDFs can reduce the computational cost, because calculating C_3pt at different momenta requires calculating different propagators from different sequential sources. The effective computational savings of the ML calculation can be derived by considering the number of propagators needed to achieve the same precision as in a calculation without ML. In our case, to use p_in = 3 to predict p_pred = 4, we need to calculate N_in propagators at p_in = 3 and N_BC + N_tr propagators at p_pred = 4 for the ML setup. Then, we can use the model to obtain the N_ul predictions at p_pred = 4. This amount of data is equivalent to a non-ML calculation with N_in propagators at p_in = 3 and N_ul × σ^2(R_ul)/σ^2(R_comb) propagators at p_pred = 4. The cost with ML can thus be quantified by

cost = (N_in + N_tr + N_BC) / [N_in + N_ul σ^2(R_ul)/σ^2(R_comb)],

where N_BC, N_tr, and N_ul are the numbers of propagator calculations (which represent the computational cost) needed to obtain the corresponding datasets (bias-correction, training, and unlabeled), and N_in is that of the input data. The ratio σ^2(R_ul)/σ^2(R_comb) is the scaling factor for the effective number of measurements we can obtain by employing the ML prediction, accounting for the increase of statistical error due to prediction error. We assume that the errors of the observables scale as 1/√N as the number of measurements increases. For the cost estimate, we use the average of the ratios over different insertion time slices. We calculate R_comb = C^comb_3pt/C_2pt from each bootstrap sample by taking the weighted average of the measurements on the labeled data and the bias-corrected predictions on the unlabeled data in each sample,

C^comb_3pt = [N_lab ⟨C_3pt⟩_lab + N_ul ⟨C^BC_pred⟩_ul] / (N_lab + N_ul),

while the error σ(R_comb) is estimated from all bootstrap results. A smaller cost indicates a higher prediction efficiency, so we vary N_tr and N_BC to find the optimal cost reduction, as shown in Fig. 10. By choosing optimal N_tr and N_BC, we can obtain about a 20% reduction in computational cost.
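The cost bookkeeping can be sketched numerically. The formula below is reconstructed from the discussion in the text (the exact normalization is an assumption), and the input numbers are purely illustrative:

```python
def ml_cost(n_in, n_tr, n_bc, n_ul, var_ul, var_comb):
    """Relative propagator cost of the ML setup versus a direct calculation
    of the same precision. Reconstructed from the discussion in the text:
    the numerator counts propagators actually computed, while the
    denominator rescales n_ul by var_ul/var_comb into an effective number
    of direct measurements. The normalization is an assumption."""
    return (n_in + n_tr + n_bc) / (n_in + n_ul * var_ul / var_comb)

# Illustrative numbers (hypothetical): if prediction error makes R_comb 25%
# noisier than a same-size direct measurement, the ML run still costs only
# about 80% of the equivalent direct calculation.
cost = ml_cost(n_in=1180, n_tr=240, n_bc=240, n_ul=1180,
               var_ul=1.0, var_comb=1.25)
```

A cost below 1 means the ML setup is cheaper; the optimization over N_tr and N_BC in Fig. 10 is a scan of this quantity.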
Figure 11 shows the fitted results from both the z prediction and the p prediction at N_tr = 240, N_BC = 240, while Table IV compares several sets of p and z predictions with the observations; the last column of the table shows the fit quality. The z predictions are consistent with the unlabeled data for both models, but the p predictions still need to be improved.

η_s quasi-PDF results
For the η_s quasi-PDF data, the meson operator is the pseudoscalar s̄γ_5 s. The η_s data have better signals, and the correlations among the η_s data show the same patterns as those of the kaon. Therefore, we select the same parameters, N_est = 150 and r = 0.1, for the model training. Comparing Fig. 12 with Fig. 8, we can see that the fit quality is slightly improved by the cleaner dataset. We infer that, with more labeled kaon quasi-PDF data available for model training, the kaon model would show better performance as well. The predictions are compared with the observations in Fig. 14; both the z predictions and the p predictions are more precise than in the kaon case. Figure 13 shows the cost for different {N_tr, N_BC} sets; the linear model shows a better optimal reduction than in the kaon case. Overall, the cost reductions are 12%-18% at the optimal choices of the training and bias-correction dataset sizes.
FIG. 12. Estimates of the fit variance F_v as a function of the learning rate r and the number of estimators N_est from the η_s quasi-PDF measurements at p_pred = 4, z_pred = 4, t_sep = 5, and t_pred = 2. N_tr = 400 and N_ul = 1180 are used. The left (right) side shows the results from the z prediction (p prediction). The performance is better than that of the model for the kaon data.
FIG. 13. Observations and predictions of the ratio R(t = 2) of the η_s quasi-PDF correlators at p_pred = 4, z_pred = 4, t_sep = 5 from input data at p_in = 3, z_in = 4, t_sep = 5. The red line shows the effective cost averaged over R(t ∈ [1, 4]). The left and right sides show the results from the GBT and linear models, respectively. We use N_est = 150, r = 0.1 for the GBT model. The horizontal axis shows N_tr + 0.1 N_BC, and the number of unlabeled measurements is fixed at 1180. Points in blue are predictions with bias correction, and orange points are observations.

TABLE IV. Observations and predictions of the ratio R(t = 2) of the kaon quasi-PDF correlators at p_pred = 4, z_pred = 4, t_sep = 5 from different models and inputs. We use N_est = 150, r = 0.1 for the GBT model. The models are trained on 240 measurements with 240 bias-correction measurements, and then tested on 1180 unlabeled measurements. The linear model shows a better fit variance than GBT.

C. Gluon quasi-PDF matrix elements
The gluon PDF contributes at next-to-leading order to deep inelastic scattering (DIS) cross sections, and it enters at leading order in jet production. Global fits have combined the data from both DIS and jet-production cross sections, and the constraints on the gluon PDF from the experimental side are improving. On the theoretical side, however, the gluon PDF is poorly known, since PDFs cannot be calculated using perturbative QCD. Recently, it has been found that they can be calculated directly in lattice QCD using large-momentum effective theory. The gluon unpolarized quasi-PDF matrix elements are computed on the lattice from the three-point correlators C_3pt(z, t_sep, t) of the nucleon interpolating fields with the insertion of the gluon operator, which is defined in Eq. (3) of Ref. [44]. After applying the renormalization to the bare matrix elements, the results from different numbers of HYP-smearing steps are consistent with each other and with the phenomenology result 0.42(2) within the uncertainties [44], with the exception of the 10-step case. Therefore, we apply 5 steps of HYP smearing to the gluon quasi-PDF operators in this work. The ratio R of the three-point correlator to the two-point correlator follows the same definition as in Eq. (14). For the fit-variance estimates here, N_ul = 81920 is used; the fit variance is closely related to the correlations between the input data and the unlabeled data, and the z prediction works much better than the p prediction.

FIG. 17. The GBT (left) and linear-regressor (right) results: the observed and p-/z-predicted gluon-correlator ratios for the overlap valence fermions at p_pred = 2, z_pred = 3, t_sep = 8, using r = 0.02 and N_est = 150, for different counts of training data and bias-correction data. The horizontal axis is N_tr + 0.1 N_BC, with N_ul = 143360 fixed. The blue points are predictions with bias correction for the unlabeled test data, and the orange points are observations for the unlabeled test data.

Predictions of the gluon correlators with the overlap valence fermions
To make z/p predictions of correlators based on smaller z/p values, we should first check the correlations among correlators with different momenta and link lengths.
FIG. 18. The observed and predicted ratios of the gluon-correlator C_3pt to C_2pt for the overlap-valence-fermion lattice ensemble at p_pred = 2, z_pred = 3 from p_in = 1, z_in = 3 (upper) and from p_in = 2, z_in = 2 (lower), using N_tr = 30720, N_BC = 30720, N_ul = 143360, r = 0.02, and N_est = 150. The GBT and linear-regressor results are shown on the left and right, respectively. The predictions with bias correction do not improve much over the raw predictions.
In Fig. 15, we show the correlations between the three-point correlation function at p_pred = 2, z_pred = 3 and the same three-point correlation functions at various choices of momenta p_in = {0, 1, 3} and link lengths z_in = {1, 2, 3, 4}. The source-sink time separation is fixed at t_sep = 8. We notice that the correlations between different momenta are weaker than the correlations between different link lengths in this case, which results in a relatively low p-prediction fit variance, as shown in Fig. 16.
The fit variances F_v from the p prediction and the z prediction are shown in Fig. 16 for learning rates in {0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5} and numbers of estimators in {100, 150, 200, 250, 300}. The target measurement is with p_in = [0, 1], p_pred = 2, z_in = z_pred = 3, t_sep = 8, and t = 4. We used both C_3pt and C_2pt for the prediction. Considering the F_v for the z/p predictions shown in Fig. 16, we choose r = 0.02, N_est = 150 as the parameter set for further work.
For the p prediction, we varied the number of training data and bias-correction data from 15360 to 30720, while keeping the number of unlabeled test data fixed at N_ul = 143360, to compare their performance. The results are shown in Fig. 17. We will use N_tr = 30720, N_BC = 30720 in the following p/z predictions.
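The three-way split used here (N_tr labeled training configurations, N_BC labeled bias-correction configurations, N_ul configurations measured only with the cheap observables) can be sketched as below. The data and the linear model are illustrative assumptions; what the sketch shows is how the average residual on the independent labeled set corrects the prediction bias, converting the ML systematic error into a statistical one.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Toy data: "cheap" observables X predict an "expensive" correlator y.
def make(n):
    X = rng.normal(size=(n, 3))
    y = 1.0 + X @ np.array([0.7, 0.4, 0.2]) + 0.2 * rng.normal(size=n)
    return X, y

X_tr, y_tr = make(300)    # N_tr: labeled training configurations
X_bc, y_bc = make(300)    # N_BC: labeled bias-correction configurations
X_ul, y_ul = make(3000)   # N_ul: configurations measured only cheaply

model = LinearRegression().fit(X_tr, y_tr)

# Bias correction: the mean residual on the independent labeled set is
# added back to the average prediction on the unlabeled set.
bias = np.mean(y_bc - model.predict(X_bc))
estimate = np.mean(model.predict(X_ul)) + bias
print(f"bias-corrected estimate = {estimate:.3f}, truth = {np.mean(y_ul):.3f}")
```

The cost saving comes from y being computed on only N_tr + N_BC configurations while the estimate still benefits statistically from all N_ul cheap measurements.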
With the ML model parameters and the dataset we obtained from the overlap-fermion ensembles, we show the results of our predictions, along with the observed datasets, for both the p_pred and z_pred predictions in Fig. 18. In the prediction, we can use any p_in < p_pred or z_in < z_pred as input. In Table V, we list the corresponding two-point and three-point correlator data; the data at insertion time t = 4 are shown. From the table, we can see that the p predictions are poor for both models because the correlations are weak, as shown in Fig. 18. The z predictions are better than the p predictions, and the linear model performs better than GBT.

Predictions of the gluon correlators for clover valence fermions
We repeat the procedure we established for the overlap valence fermions on the clover fermions, checking the correlations among correlators with different momenta and link lengths. In Fig. 19, we show the correlations between the three-point correlation functions at p_pred = 5, z_pred = 3 and those at various values of p_in = {0, 2, 4}, z_in = {0, 1, 2, 3}. The source-sink time separation is fixed to t_sep = 8. The correlations between different momenta are much stronger than in the overlap case, which leads to a much higher p-prediction fit variance, as shown in Fig. 20. The correlations in the clover-fermion case are stronger than in the overlap-fermion case because the sources of the proton correlators are constructed differently in the two cases: for the overlap fermions, we use a grid spatial source, which requires gauge averaging to obtain consistent correlators and therefore has weaker correlations, while the clover fermions use a single spatial location per time source and do not suffer from this problem.
We use the same fit-variance F_v estimation as in the overlap case. The target measurement is p_in = 4, p_pred = 5, z_in = 2, z_pred = 3, t_sep = 8, and t = 4. From Fig. 20, we obtain r = 0.2, N_est = 200 as the parameters we will use in the following process. These two figures indicate that stronger correlations between the input and target data are needed to obtain good fit variances.

FIG. 19. Correlation coefficients between the three-point correlation functions at p_pred = 5, z_pred = 3 and those at various values of p_in = {0, 2, 4}, z_in = {0, 1, 2, 3}, calculated using the clover valence fermions. Different z at the same p show higher correlation than different p at the same z.

TABLE V. Observations and predictions of gluon-correlator ratios for the overlap valence fermions at p_pred = 2, z_pred = 3, t_sep = 8, t = 4, using N_tr = 30720, N_BC = 30720, N_ul = 1433600, r = 0.02, and N_est = 150. For the z predictions, the linear model shows a better fit variance than GBT. The p predictions are poor for both models because the correlations are weak, as shown in Fig. 18.

FIG. 20. Gluon-correlator ratio fit variance for the z prediction (left) and p prediction (right) for the clover valence fermions. With a stronger correlation between the input and target data, a smaller learning rate and number of estimators are needed to obtain a good fit-variance score.
Again, to compare their performance, we varied the number of training data and bias-correction data from 1440 to 2880, while keeping the number of unlabeled test data fixed at N_ul = 23040. The observed, p- and z-predicted gluon-correlator C_3pt and C_2pt ratios of the clover valence fermions at p_pred = 5, z_pred = 3, t_sep = 8 are shown in Fig. 21. Comparing these results, we will use N_tr = 2880, N_BC = 2880 in the following p and z predictions.
The observed/predicted gluon-correlator ratios of the clover valence fermions for the GBT and linear-regressor models at p_pred = 5, z_pred = 3 are shown in Fig. 22. The linear model gives slightly better results. In Table VI, two-point and three-point correlator data at p_in = 4, z_in = 2, t_sep = 8 are used for predicting the p_pred = 5, z_pred = 3, t_sep = 8 correlator. The data at insertion time t = 4 are shown. Compared with the overlap-fermion result in Table V, the fit variance is much higher, due to the input data having stronger correlations with the target data.

FIG. 22. The observed/predicted gluon-correlator ratios of the clover valence fermions at p_pred = 5, z_pred = 3 from p_in = 4, z_in = 3 (upper) and p_pred = 5, z_pred = 3 from p_in = 5, z_in = 2 (lower), using N_tr = 2880, N_BC = 2880, N_ul = 230400, r = 0.2, and N_est = 200. The GBT and linear-regressor results are shown in the left and right columns, respectively. The predictions with bias correction do not improve much over the raw predictions.

IV. SUMMARY
In this article, we applied ML techniques to quasi-DA and quasi-PDF correlators. Using both the GBT and linear models, we tried to predict the C_2pt for meson quasi-DAs and the C_3pt for meson and gluon quasi-PDFs at larger momenta and link lengths, which are noisier and require more computational resources. By predicting from the computationally less expensive data, we are able to reduce the computational cost. Systematic uncertainties from the ML prediction errors are converted into statistical uncertainties by the bias-correction procedure. With full bootstrap resampling, we effectively estimated and compared the errors of the different model predictions.
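The bootstrap error estimate mentioned above can be sketched in a few lines. The per-configuration values here are synthetic placeholders for the bias-corrected predictions; the point is only the resampling pattern: draw configurations with replacement, recompute the mean, and take the spread of the bootstrap means as the statistical uncertainty.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy per-configuration bias-corrected values; in the real analysis these
# are the predicted-plus-bias samples entering the final average.
vals = 0.42 + 0.05 * rng.normal(size=500)

# Bootstrap: resample configurations with replacement, recompute the mean,
# and use the spread of the bootstrap means as the uncertainty estimate.
n_boot = 1000
boot_means = np.array([
    rng.choice(vals, size=vals.size, replace=True).mean()
    for _ in range(n_boot)
])
mean, err = vals.mean(), boot_means.std()
print(f"mean = {mean:.3f} +/- {err:.4f}")
```

Resampling the predicted and observed datasets with the same bootstrap draws is what allows the errors of the different model predictions to be compared on equal footing.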
Table VII summarizes the best fit variances F_v of all the predictions we investigated. We observe that for the meson datasets, the data at different link lengths are more correlated than those at different momenta. Consequently, the z predictions for both models work much better than the p predictions. The ML approach for the z prediction of quasi-DAs and meson quasi-PDFs is very precise, while the p predictions and the predictions for gluon quasi-PDFs show relatively worse precision. Comparing the two ML regression models, we find that the linear model is preferred on cleaner datasets when the correlations between the input and target data are good enough, such as in the z prediction of meson DAs and meson PDFs. On the other hand, the GBT model is more robust to noisy and less obviously correlated inputs. For the p prediction of meson quasi-PDFs, both models are able to give a computational cost reduction of 16%.

TABLE VII. Summary of fit variance F_v for all the cases we investigate. A larger value of F_v indicates a better prediction, and a perfect prediction yields F_v = 1.0. In general, z predictions work better than p predictions, and the linear model shows better performance than GBT on our dataset.