Extraction of the Sivers function with deep neural networks

Deep Neural Networks (DNNs) are a powerful and flexible tool for information extraction and modeling. In this study, we use DNNs to extract the Sivers functions by globally fitting Semi-Inclusive Deep Inelastic Scattering (SIDIS) and Drell-Yan (DY) data. To make predictions of this Transverse Momentum-dependent Distribution (TMD), we construct a minimally biased model using data from COMPASS and HERMES. The resulting Sivers function model, constructed using SIDIS data, is also used to make predictions for DY kinematics specific to the valence and sea quarks, with careful consideration given to experimental errors, data sparsity, and the complexity of phase space.


I. INTRODUCTION
The development of modern information extraction techniques has far outpaced experimental progress. Even with decades of experimental results, there is limited data on Transverse Momentum Dependent Parton Distribution Functions (TMDs) for global analyses that can be applied to a 3D phenomenological interpretation. Theoretical efforts are providing an ever-evolving framework and toolbox to interpret the data, leading to new areas of research that optimize information extraction unique to spin physics and the internal structure of hadrons. Artificial intelligence (AI) is accelerating data-driven research through the capacity of deep neural networks (DNNs) to serve as function approximators. DNNs can learn to approximate any relationship contained within data, provided there are no limits on the number of neurons, layers, and computing power. With enough data and training, such approximations can be made to nearly any desired degree of accuracy.
The Sivers function is the most studied of the eight leading-twist TMD distributions that pertain to polarized nucleons. The Sivers distribution is naively time-reversal odd and is expected to be process-dependent, which leads to a distribution equal in magnitude but opposite in sign in Semi-Inclusive Deep Inelastic Scattering (SIDIS) compared to the Drell-Yan (DY) process [1,2]. The Sivers distribution, as measured from transverse single-spin asymmetries, provides information on the correlation between the nucleon's spin and the angular distribution of outgoing hadrons in SIDIS or the di-muons in DY. The quark Sivers function, ∆N f_q/N↑, describes the number density in momentum space of unpolarized quarks inside a transversely polarized target nucleon with nuclear spin N↑. A non-zero Sivers function indicates a contribution of quark orbital angular momentum to the target's spin.
TMD extraction and modeling with sensitivity to TMD evolution and factorization is critical and provides predictive power in the collinear limit characterizing the nonperturbative effects. Representing evolution also accounts for the momentum-dependent QCD interactions between partons inside the hadron, which affect their distribution and can significantly impact observables. Some theoretical approaches use the parton model approximation without TMD evolution and demonstrate good agreement with experimental results. At the time of their origin, these calculations assumed that evolution effects in asymmetries are suppressed, as asymmetries are ratios of cross-sections in which evolution and higher-order effects should cancel out [3][4][5]. The more modern picture [6] implies this is likely an oversimplification. There is now strong evidence that the main effect of evolution takes place at lower energies (< 10 GeV) [6,7], so it is precisely this domain that offers the best opportunity to capture these important QCD relations in TMD distributions. On the other hand, studies that incorporated TMD evolution, as seen in [8,9], encountered difficulties and did not achieve better agreement with the Drell-Yan data compared to earlier analyses [4,5,10]. This situation poses a challenge in confirming the validity of any theoretical implementation with respect to the TMD factorization theorem or the modern utilities required to preserve it, particularly the scale-dependent universal nonperturbative evolution kernel, the so-called Collins-Soper (CS) kernel. However, the work in [7,11] demonstrated the conditional universality of the Sivers function using a simultaneous fit to SIDIS, Drell-Yan, and W/Z boson production data, including TMD evolution and the universal nonperturbative CS kernel extracted in [12] from unpolarized measurements. In contrast, the working principle of DNNs as a means of extraction can facilitate the necessary flexibility to capture not only the TMD and DGLAP evolution features but also the full complexity of QCD. In this regard, a carefully designed DNN model accompanied by a systematic method of extraction can incorporate not only the evolution features but also components not yet quantifiable in the formalism. The inherent capacity of DNNs to reduce dimensionality and organize intricate correlations in the data can be used to help determine the correct theoretical implementation. The challenge, of course, is that once captured in a DNN model, disentangling these features into formal expressions consistent with factorization is nontrivial, and we expect it to be the focus of much future work.
The approach outlined in this article highlights the inherent flexibility of employing specialized DNNs in combination with candidate multiplicative expressions holding some type of separate parameterization, which may carry incorrect ansatzes or assumptions (biases). This can be done while still rendering a high-quality numerical model encoding all information available in the data. These types of biases can be managed by training the model with the biased term included. The DNN term then becomes uniquely parameterized to account for the biases. Consequently, the resulting combination of terms in the model remains largely unbiased, even though each individual term may not be independently meaningful. This schema not only facilitates the exploration of the existing formalism found in the literature but also provides a means to derive and test novel phenomenology using a technique that can directly manage and study the biases in individual terms. This will be discussed in more detail in Section VI.
The phenomenology used to interpret and analyze experimental data relies on TMD factorization [13-22], proven for single-spin asymmetries (SSAs) described in terms of convolutions of TMDs. The Sivers function has been previously extracted from SIDIS data by several groups, with generally consistent results [4,23-27]. All previous phenomenological fits of the Sivers function (and other TMDs) require an ansatz characterizing the shape of the distribution combined with an assumed form of the Bjorken-x dependence. This can lead to ambiguity in determining both the quantitative results from the fit and the qualitative features of the momentum distributions and their associated dynamics. The functional form of the Bjorken-x dependence is usually offered only as a placeholder and is assumed to at least contain the appropriate ingredients to facilitate the extraction. This is undoubtedly a considerable oversimplification, but one that has permitted significant progress. In the following analysis, we perform a global fit with the goal of illustrating a method that can also permit significant progress even with data limitations, and with assumptions and ansatzes in place. We focus on testing the extraction ability of a DNN representing a single term to maximize information extraction and minimize both the fit error and the analytical ambiguity associated with the interpretation of a generic Bjorken-x dependence, N_q(x). There are, of course, broader implications of how these tools can be used to accelerate the field, some of which we will also touch on through our extraction of the Sivers function.
The exceptional capacity of DNNs for function approximation is rigorously established through the Universal Approximation Theorem [28,29]. This is the advantage of DNNs over other machine learning (ML) approaches. In this regard, even the mere existence of a function implies that DNNs can be used to represent it and work with it without actually knowing its functional form. With such a high level of abstraction, one can make use of the available data and make assessments not otherwise possible, even given an arbitrary degree of complexity. The complexity can be contained in the data relationships as well as in the experimental uncertainties. In order to make optimal use of experimental uncertainties, a detailed analysis must be provided on their estimated scale and correlations.
DNNs are also Turing-complete, implying the potential to simulate the computational aspects of any real-world general-purpose computing process. The implication is that there is potential for a type of generalizable framework that can be utilized and further developed over time without knowledge of the exact rigorous details of the underlying mechanisms. Provided appropriately detailed global models, higher-level symbolic regressions can then be performed to infer the strict mathematical form. However, such an approach requires access to a significant amount of experimental data that holds the necessary information. Without a sufficient amount of quality data, no matter the number of nodes or the sophistication of the architecture, DNNs are limited in what useful information they can extract, along with any other technique. Even with this constraint, DNNs can make considerable advancements with the use of sparse data with large experimental uncertainties. Generally, fitting data with large errors, or performing computational tasks with inherently fuzzy logic, are tasks for which it is difficult to make optimal use of modern computational resources. DNNs are uniquely suited for such challenges.
In the remainder of this article, a first-level DNN extraction of N_q(x) is performed to deduce the Sivers function from a global analysis using HERMES and COMPASS data. This investigation is exploratory, with the intention of developing tools and techniques that minimize error and maximize utility, which we hope to expand upon in further work. The choice in this article to focus on the Sivers function, rather than the unpolarized TMD f_q/N(x, k⊥), is made to more clearly demonstrate the power of the method even given the limitations in data. Our global fit of f_q/N(x, k⊥) will be presented in later work. The examination of N_q in relation to x is prioritized over other possibilities only to illustrate how generating functions can be utilized to measure and analyze systematics arising from the extraction produced. As we demonstrate in this work, this technique enables a systematic enhancement of information extraction.
In Section II, we present the formalism of the Sivers function and the kinematics for both SIDIS and DY. In Section III, a discussion of the fitting techniques of N_q(x) is presented with a focus on the methodology of the DNN approach. Section IV explains in detail the extraction technique, starting with model testing. We perform a baseline fit using the classical MINUIT χ² minimization algorithm and then perform the DNN fit, demonstrating with pseudodata the fidelity of the procedure. We then walk through the final DNN fit to experimental data for the polarized proton and the deuteron separately. The results of the fits are presented in Section V. Some further analysis and systematic studies are reported in Section VI. Then in Section VII we present the implications of the results in terms of the 3D tomography of the proton. In Section VIII, we perform a preliminary analysis of TMD evolution and finally, in Section IX, some concluding remarks are provided.

II. KINEMATICS AND FORMALISM
With the spin of the proton perpendicular to the transverse plane, the Sivers function is expected to reflect an anisotropy of quark momentum distributions for the up and down quarks, indicating that their motion is in opposite directions [1,2]. This is manifestly due to quark orbital angular momentum (OAM). The most interesting and relevant aspects of the OAM, such as magnitude and partonic distribution shape as a function of the proton's state, cannot be determined by the Sivers effect alone. However, systematic studies can be performed to investigate the full 3D momentum distribution of the quarks in a transversely polarized proton, which can be used in concert with other information to exploit multidimensional partonic degrees of freedom using a variety of hard processes. Here, we focus specifically on SIDIS and DY, but it should be noted that there is significant potential for broader model development that can come from combining all available data from multiple processes with additional constraints using the simultaneous DNN fitting approach presented here.
The Sivers function describes a difference in probabilities, which implies that it need not be positive definite. Making a comparison between the Sivers function from the DY process and the SIDIS process is still the focus of much experimental and theoretical effort. Under time reversal, the future-pointing Wilson lines are replaced by past-pointing Wilson lines that are appropriate for factorization in the DY process. This implies that the Sivers function is not uniquely defined and cannot exhibit process universality, as it depends on the contour of the Wilson line. This feature of the Sivers function is directly tied to the QCD interactions between the quarks (or gluons) active in the process, resulting in a conditional universality, as shown in [30]. This fundamental prediction still needs to be tested. Direct sign tests [4,10,31] can be performed, but experimental proof would require an analysis over a broad phase space of both SIDIS and DY, with consideration given to flavor and kinematic sensitivity for both valence and sea quarks. Our analysis will, in part, rely on this relationship rather than making direct tests of the validity of the sign change.

A. SIDIS process
The Semi-Inclusive Deep Inelastic Scattering (SIDIS) process involves scattering a lepton off of a polarized nucleon and measuring the scattered lepton and a fragmented hadron. In the nucleon-photon center-of-mass frame, the nucleon three-momentum ⃗p is along the ẑ-axis and its spin-polarization ⃗S_T lies in the plane perpendicular (transverse) to the ẑ-axis. In Fig. 1, the struck quark, the virtual photon (with four-momentum ⃗q), and the lepton belong to a plane called the "lepton plane" (represented in yellow). The fragmented hadron with momentum ⃗p_h and its projection onto the x̂-ŷ plane (i.e., ⃗p_hT) belong to the so-called "hadron plane" (represented in transparent green). Thus, the transverse momentum ⃗k⊥ of the struck quark and ⃗p_hT fall in the transverse plane (represented in transparent blue) perpendicular to both the lepton plane and the hadron plane. The azimuthal angle ϕ_h of the produced hadron h is the angle between the lepton plane and the hadron plane [32].
To perform a global analysis using a generalized DNN approach, it is not necessary to postulate an expression or functional form for the shape of the partonic distributions or for any aspect of the nonperturbative contribution, which encodes information about the intrinsic quark-gluon correlations within the nucleon. However, to provide a more careful demonstration, we do not generalize and focus directly on N_q(x) to explore the use of quality metrics in combination with techniques to improve the overall extraction. It is useful to demonstrate this using a very standard approach that is already well-developed in the literature. With this in mind, the x and k⊥ dependence of the TMDs (and the z and p⊥ dependence of the FFs) are decoupled. It is common to apply a Gaussian parametrization for the transverse momentum dependence, but this imposes a bias implying that the transverse momentum is nonperturbative and primarily driven by the intrinsic properties of the colliding hadrons rather than hard gluon radiation. Such biases can be managed by the technique itself, which we expect to become clear through the examples to come. There are several options for the choice of formalism, and there is still much ongoing debate on how the formalism should correctly be implemented for single-spin asymmetries to best respect factorization theory. Our choice of formalism is purely illustrative, and we intend no favoritism of one strategy over another. The focus of the present work is the demonstration of how the methods and tools can be implemented for arbitrary formalism. There are, however, some very useful characteristics in one of the traditional descriptions of the TSSA [33] that offer a clear means of detailing the necessary steps. However, we stress that the type of parameterization used in [4,34] does not have the complete features of TMD evolution but is comparable to full TMD evolution at next-to-leading logarithmic (NLL) accuracy.
The differential cross-section for the SIDIS process depends on both collinear parton distribution functions (PDFs) f_q/N(x; Q²) and fragmentation functions D_h/q(z; Q²), where q is the quark flavor, N represents the target nucleon, h is the hadron type produced by the process, and z is the momentum fraction of the final-state hadron with respect to the virtual photon. A simplified version of the SIDIS differential cross-section can be written up to O(k⊥/Q) as [26,35], where ŝ, û are partonic Mandelstam invariants, f_q/p↑(x, k⊥) is the unpolarized quark distribution with transverse momentum k⊥ inside a transversely polarized (with spin ⃗S_T) proton with three-momentum ⃗p, and ∆N f_q/p↑(x, k⊥) denotes the Sivers function that carries the nucleon's spin-polarization effects on the quarks, which can be considered as a modulation of the unpolarized quark PDFs [4]. Here N_q(x) is considered a factorized x-dependent function with a form that has yet to be formally established. In fact, the forms of f_q/N(x, k⊥) and h(k⊥) are also assumed, and these expressions should be considered embedded biases. For the best quality model of the Sivers function, the form of f_q/N(x, k⊥) could be determined and parameterized separately using unpolarized data; however, it is instructive to preserve these original forms for this exercise. The parameter m₁ allows the k⊥ Gaussian dependence of the Sivers function to be different from that of the unpolarized TMDs [4]. f_q(x; Q²) is the collinear PDF for flavor q, obtained from the CTEQ6l [36] grid through LHAPDF [37], initially to be consistent with [4] during method testing and later improved using NNPDF4.0 [38] for the real extraction, whereas the fragmentation functions for π±,0 are from [39], and for K± are from [40] (DSS formalism), from recent global analyses of fragmentation functions at next-to-leading-order (NLO) accuracy in QCD.
In terms of cross-section ratios, the Sivers asymmetry in the SIDIS process can be parameterized [4] and re-arranged as

A_UT^sin(ϕh−ϕS) = A₀(z, p_hT, m₁) × [Σ_q N_q(x) e²_q f_q(x) D_h/q(z)] / [Σ_q e²_q f_q(x) D_h/q(z)],

with the k⊥-integrated PDFs f_q(x) and fragmentation functions D_h/q(z, p⊥) (before p⊥ integration), and with ⟨k²⊥⟩ = 0.57 ± 0.08 GeV² and ⟨p²⊥⟩ = 0.12 ± 0.01 GeV² from the fits [41,42] to HERMES multiplicities [43]. Note that we use a shorthand notation for the PDFs, FFs, and TMDs by omitting Q² in the expressions for convenience, as is done in the literature.
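The factorized ratio above reduces to A₀ times a flavor-weighted average of N_q(x), with weights e²_q f_q(x) D_h/q(z). A minimal sketch of that structure, with purely illustrative stand-ins for N_q, the collinear PDFs, and the FFs (none of these are the CTEQ6l/NNPDF4.0 or DSS sets used in the analysis):

```python
# Sketch of the factorized SIDIS Sivers asymmetry:
#   A_UT = A0(z, p_hT, m1) * sum_q N_q(x) e_q^2 f_q(x) D_hq(z)
#                          / sum_q        e_q^2 f_q(x) D_hq(z)
# All N_q, f_q, D_hq below are hypothetical toy inputs for illustration.

E_Q2 = {"u": (2.0 / 3) ** 2, "d": (1.0 / 3) ** 2, "s": (1.0 / 3) ** 2}  # e_q^2

def sivers_asymmetry(x, z, A0, N_q, f_q, D_hq):
    """A0 times the weighted flavor average of N_q(x)."""
    num = sum(N_q[q](x) * E_Q2[q] * f_q[q](x) * D_hq[q](z) for q in E_Q2)
    den = sum(E_Q2[q] * f_q[q](x) * D_hq[q](z) for q in E_Q2)
    return A0 * num / den

# Toy inputs (assumptions for illustration only, not fitted distributions)
N_q = {"u": lambda x: 0.3 * x, "d": lambda x: -0.2 * x, "s": lambda x: 0.0}
f_q = {"u": lambda x: (1 - x) ** 3 / x,
       "d": lambda x: 0.5 * (1 - x) ** 4 / x,
       "s": lambda x: 0.1 * (1 - x) ** 6 / x}
D_hq = {"u": lambda z: (1 - z) ** 1.2,
        "d": lambda z: (1 - z) ** 2.0,
        "s": lambda z: (1 - z) ** 2.0}

A = sivers_asymmetry(x=0.2, z=0.4, A0=0.5, N_q=N_q, f_q=f_q, D_hq=D_hq)
```

Because the ratio is a weighted average, the asymmetry is automatically bounded by A₀ times the extreme values of N_q(x), which is why bounding N_q(x) controls the positivity of the full expression.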
Through this azimuthal asymmetry, the SIDIS process provides information about the correlations between the transverse momentum of the partons, accessed through the fragmented hadrons, and the spin of the target itself. In this regard, SIDIS allows one to study the structure of individual hadrons by selecting these decay fragments at the detection level. In general, SIDIS provides access to a wide range of TMDs and allows for studying TMDs of hadrons carrying different flavors and polarizations.
For our present analysis, HERMES and COMPASS have the best polarized-proton target data for SIDIS, while COMPASS has the best polarized-neutron target data. In the COMPASS data, the neutron target is actually a polarized deuteron, but the neutron carries over 90% of the deuteron polarization when polarized in solid-state form. The JLab data on polarized ³He is of a different class of experiments and will not be combined with the polarized-deuteron data from COMPASS. It is worth noting that the uncertainties in the experimental data can greatly differ depending on the choice of polarized target.
FIG. 2. Kinematics of the DY process in the hadronic center-of-mass frame.

B. DY process
Consider the Drell-Yan process A↑B → l⁺l⁻X, where A↑ is a transversely polarized target and B is the hadron beam. In the hadronic c.m. frame, the four-momentum q = (q₀, q_T, q_L) and the invariant mass squared Q²_M of the final-state dilepton pair, Feynman-x (x_F), and the Mandelstam variable s are related through the standard kinematic relations. In the appropriate kinematical region, at order O(k⊥/Q_M), and in the hadronic c.m. frame, the Sivers single-spin asymmetry can be given as in [3,33]. Note that here we follow the same convention as in [3,25,44,45] for the azimuthal angle in the A-B center-of-mass frame, with the hadron A↑ moving along the positive ẑ-axis and hadron B along the negative ẑ-axis. Thus the mixed product ⃗S_T · (⃗p × ⃗k⊥i), upon integration in k⊥i (where i = {1, 2}), yields a sin(ϕ_γ − ϕ_S) = cos ϕ_γ (when ϕ_S = π/2) dependence for the Sivers asymmetry, which implies an overall [− sin²(ϕ_γ − ϕ_S)] in Eq. (15). For the case in which the polarized hadron A↑ moves along the −ẑ axis (i.e., for the process BA↑ → l⁺l⁻X), the corresponding overall factor is [+ sin²(ϕ_γ − ϕ_S)]. The analytical integration of the numerator and denominator of Eq. (15) can be carried out and further simplified, with the assumption ⟨k²⊥1⟩ = ⟨k²⊥2⟩ = ⟨k²⊥⟩ = 0.25 GeV² as in [3].
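One consequence of the Gaussian ansatz quoted above is that the widths of the two incoming parton k⊥ distributions add in the dilepton transverse momentum spectrum: ⟨q_T²⟩ = ⟨k²⊥1⟩ + ⟨k²⊥2⟩. A quick numerical check of this property (a 1D illustration, using the ⟨k²⊥⟩ = 0.25 GeV² value assumed in [3]):

```python
import numpy as np

# With Gaussian k_T distributions for the two incoming partons, the dilepton
# q_T spectrum is their convolution, so the Gaussian variances add:
#   <q_T^2> = <k_T1^2> + <k_T2^2>.
k2_1, k2_2 = 0.25, 0.25                # variances (GeV^2), as assumed in [3]
k = np.linspace(-10.0, 10.0, 4001)     # k grid (GeV), tails fully contained
dk = k[1] - k[0]

g1 = np.exp(-k**2 / (2 * k2_1)) / np.sqrt(2 * np.pi * k2_1)
g2 = np.exp(-k**2 / (2 * k2_2)) / np.sqrt(2 * np.pi * k2_2)

conv = np.convolve(g1, g2, mode="same") * dk   # numerical convolution density
second_moment = np.sum(k**2 * conv) * dk       # variance of the convolution
```

The numerically computed `second_moment` reproduces k2_1 + k2_2 = 0.5 GeV², which is why a single effective Gaussian width can be used after the k⊥ integration.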
Through this azimuthal asymmetry, the Drell-Yan process allows one to preferentially probe partons from the target and beam hadrons through the quark-antiquark annihilation process of interest, resulting in a dimuon pair in the detector. SIDIS only permits the measurement of a convolution of the TMDs with a fragmentation function, whereas Drell-Yan allows the direct measurement of the TMDs without the complications of fragmentation functions and final-state interactions. Coupled with its innate sensitivity to sea quarks, Drell-Yan is a critical process for determining the TMDs of the sea quarks.

III. FITTING Nq(x)
To obtain accurate three-dimensional tomographic information on quarks and gluons inside the nucleon, it is critical to extract TMDs with minimal model dependence and little to no unknown bias. Fitting with statistical analysis tools such as MINUIT [46] relies on χ² minimization or log-likelihood functions to compute the best-fit parameter values and uncertainties, including correlations between parameters. This class of algorithms has well-established statistical methods that have been used for decades in various scientific fields, making them a reliable and trusted tool. In frequentist statistics, the reliability of a χ² minimization fitting method can be evaluated through the concept of hypothesis testing. The fitting method minimizes the difference between the observed data and the expected theoretical model, expressed through a χ² statistic. The χ² statistic follows a known distribution, and the probability of obtaining a value as extreme as the observation can be directly calculated. The reliability of the χ² minimization fitting method depends greatly on the ability to accurately estimate the theoretical uncertainties and the degree to which the model approximates the observed data. When these conditions are met, the method can provide a reliable estimate of the parameters that describe the model and its uncertainties. However, chi-square fits can be sensitive to the choice of initial parameter values and may not always converge to the correct solution. Fitting with DNNs can provide considerable advantages and does not inherently sacrifice the statistical framework provided by chi-square fits, but it is worth touching on some key attributes needed in the method in order to best maintain statistical relevance and interpretation of the resulting fits.
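For concreteness, the χ² statistic discussed above is the sum of squared, uncertainty-normalized residuals between data and model. A minimal sketch with hypothetical data points:

```python
def chi2(y_obs, y_model, sigma):
    """Chi-square statistic: sum of squared, error-normalized residuals."""
    return sum(((o - m) / s) ** 2 for o, m, s in zip(y_obs, y_model, sigma))

# Hypothetical measured asymmetries, a candidate model, and uncertainties
y_obs   = [0.021, 0.034, 0.055]
y_model = [0.020, 0.036, 0.050]
sigma   = [0.005, 0.004, 0.010]

value = chi2(y_obs, y_model, sigma)
# For a goodness-of-fit test, compare `value` against the chi-square
# distribution with N_dof = (number of points - number of parameters).
```

A DNN fit can minimize exactly this quantity (or an equivalent loss), which is why the frequentist interpretation of the fit quality can be preserved.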
To preserve the statistical robustness and reliability of traditional χ² minimization fitting when using a DNN, it is important to carefully consider the data quality, model selection, validation, interpretation, and testing criteria. The quality of the data used to train the DNN should be as high as possible to ensure the DNN learns the correct relationships between inputs and outputs. This is crucial because the reliability of the DNN is only as good as the quality of the data it is trained on. Quantifying differences between the training data and the real data used in the fit can be challenging and can lead to unknown biases and systematic errors. In the method used here, Monte Carlo data, which has been tuned and matched to the experimental data, is utilized. This is done by successively extracting information from the experimental data to impose on the generated Monte Carlo data and then using the improved Monte Carlo data to further refine the extraction technique.
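A common way to implement such error-aware pseudodata (a sketch of the general idea, not the authors' exact Monte Carlo tuning procedure) is to smear the central value of each point with Gaussian noise scaled by its quoted uncertainty:

```python
import random

def make_replica(central, sigma, seed=None):
    """One pseudodata replica: central values smeared by Gaussian noise
    with the quoted experimental uncertainty of each point."""
    rng = random.Random(seed)
    return [c + rng.gauss(0.0, s) for c, s in zip(central, sigma)]

# Hypothetical asymmetry values and their uncertainties
central = [0.02, 0.03, 0.05]
sigma   = [0.005, 0.004, 0.01]

replica = make_replica(central, sigma, seed=7)  # seed makes the draw repeatable
```

Repeating this over many replicas and refitting each one is a standard way to propagate experimental uncertainties into the extracted model.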
The choice of DNN architecture, activation functions, regularization techniques, and other hyperparameters should be carefully made to minimize over-fitting and maximize generalization performance. Cross-validation techniques can be used to tune these hyperparameters and ensure the best possible fit to the data. The quality of the fit should be quantified with a metric that is well-defined and can be interpreted statistically. This could still be the χ² statistic but may also be one of a variety of possible loss functions. The trained DNN should be validated on an independent test dataset to ensure that it generalizes well to new data and that it does not over-fit the training data. This is critical because over-fitting can lead to an unreliable and unstable model.
When interpreting the results of the DNN fit, it is important to carefully examine the relationships between the inputs and outputs. To better understand how the DNN makes its predictions, techniques such as feature importance and attention mechanisms can be used. While DNNs can have a reliable statistical interpretation, it requires a more detailed analysis than traditional algorithms like MINUIT.
Directed testing of the model predictions and verification of reliability through multiple trials is crucial. Studies to test accuracy and precision are useful, along with quantifying the robustness of the extraction method itself once a DNN architecture has been chosen. In this regard, it is important to prove that the method can be flexible as well as correct. It is not normally useful to have a model and method that yields highly accurate results but only for a fine slice of phase space under particular conditions.
The conventional chi-square minimization routines are limited in their flexibility and applicability to more complex problems because they assume a specific functional form of the relationship between inputs and outputs. In contrast, DNNs can learn complex and nonlinear relationships, making them suitable for tasks that require some degree of abstraction and where there is no known specific functional form of the relationship. DNNs are proven to be universal approximators and can handle large amounts of data while generalizing well to new data, which improves their accuracy and robustness. This is especially helpful in the present application, where DNNs can be used to build better models as new experimental data becomes available.
The ambiguity in the literature around N_q(x) and the accompanying factorized terms in the Sivers function (h(k⊥) and f_q/N(x, k⊥; Q²)) makes it a good candidate for a DNN extraction. In most Sivers function extractions [5, 8, 23, 25-27, 35, 47-51], N_q(x) differs either by its parameterization or by the treatment of q in N_q(x). For example, in [7,11,26,27], all anti-quarks (ū, d̄, and s̄) were treated the same (or combined) and referred to as the 'unbroken sea'.
Our first step is a generalization of the MINUIT fit parameterization of N_q(x) for all light quark flavors. Section IV B summarizes the corresponding MINUIT fit results using iminuit (the Python version of MINUIT) [46]. In these fits we use the same dataset as in [4] and obtain the fit parameters for N_q(x) as defined in Eq. (27). This expression is generalized for all light quark flavors, where N_q is a scalar for quark flavor q. After our consistency check, this parameterization is used as a pseudodata generator to train and test the DNN model. We emphasize that our original MINUIT fit parameterization is used as a tool to demonstrate that the DNN model is capable of predicting (or confirming) the 19-parameter model used to generate pseudodata, as illustrated in Section IV C. It should be clarified that we do not rely on the form of Eq. (27) in any way, and any function that can be used to generate quality pseudodata could be used. After building confidence in the DNN model from these preliminary tests, we move towards extracting the Sivers functions from experimental data from the SIDIS process with a polarized-proton target (see Section IV D). Previous work on global fits to SIDIS data considered the data from polarized-proton targets, polarized-deuteron targets, and polarized ³He gas targets as a combined dataset. This is usually motivated only by the overall lack of separate polarized-target data. In [7], isospin symmetry was assumed for f⊥_1T,u and f⊥_1T,d for the COMPASS2009 [52] and COMPASS2017 [53] datasets. However, these are very different polarized targets with very different results, so we explore whether our method is sensitive to flavor and target dependence by fitting each polarized-target dataset independently. As the nuclear effects on the Sivers functions are not very well understood, separately extracting the same observable with different types of polarized targets that contain different flavor dominance and different nuclear effects could provide valuable insight. Significant data would be required for each target type, so there are clear limitations at the moment and our fits can only be considered preliminary in this regard.
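In [4]-style fits, the x-dependence is typically parameterized as N_q(x) = N_q x^α_q (1−x)^β_q (α_q+β_q)^(α_q+β_q) / (α_q^α_q β_q^β_q); we show this common literature form here as an assumption for illustration (the exact expression used in this analysis is Eq. (27)). A useful property of the normalization factor is that the peak value of the function equals N_q:

```python
def N_q(x, Nq, alpha, beta):
    """A common x-dependence parameterization from the Sivers literature:
    N_q(x) = Nq * x^alpha * (1-x)^beta * (alpha+beta)^(alpha+beta)
             / (alpha^alpha * beta^beta).
    The normalization makes the peak value equal Nq (at x = alpha/(alpha+beta)),
    so requiring |Nq| <= 1 bounds this factor everywhere on 0 < x < 1."""
    coeff = (alpha + beta) ** (alpha + beta) / (alpha ** alpha * beta ** beta)
    return Nq * x ** alpha * (1 - x) ** beta * coeff

# Illustrative (not fitted) parameter values
alpha, beta = 2.0, 3.0
peak_x = alpha / (alpha + beta)
value_at_peak = N_q(peak_x, Nq=0.5, alpha=alpha, beta=beta)
```

This peak-normalization property is what makes the scalar N_q directly interpretable as the maximum strength of the flavor-q Sivers modulation.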
The DNN's unique capacity to manage abstraction allows for the capture of additional complexity in a semi-model-independent way. To obtain the most information from the data and fit results, we attempt to decompose some of this abstraction using the following conditions:
• DNN fits to SIDIS data from proton and deuteron targets are performed independently to obtain separate models.
• No kinematic cuts are applied to take full advantage of the available data and to allow the DNN to build implicit inclusion of the necessary corrections related to TMD factorization.
• A Sivers function for each light-quark flavor is obtained to ensure the SU(3) flavor-breaking effects in QCD are also contained.
The technique to achieve the extraction under the aforementioned conditions is somewhat novel, so explicit details are provided step by step for clarity in the next section.

IV. THE EXTRACTION TECHNIQUE
The factorized form N_q(x)h(k⊥)f_q/N(x, k⊥) assumes the validity of Eq. (4), leading to model dependencies associated with this assertion and with the Gaussian interpretation of the k⊥ distributions of the partons. This introduces a bias to the analysis, but it is a known bias and can be managed. Within the construction of a DNN model in the fit function, there could be considerable variation in the definitions (or assumptions) of h(k⊥) or f_q/N(x, k⊥) while still leading to a quality fit of N_q^DNN(x) to the data, due to its considerable flexibility and high number of parameters with respect to either of the other two terms in the function. This makes the final N_q^DNN(x) highly dependent on the choice of h(k⊥) and f_q/N(x, k⊥), so the resulting N_q^DNN(x) model must always be used with the same definitions of h(k⊥) and f_q/N(x, k⊥) it was trained with. However, no particular definition of h(k⊥) or f_q/N(x, k⊥) is required to make the combination N_q^DNN(x)h(k⊥)f_q/N(x, k⊥) unbiased and perform well in the fit. From this standpoint, we do not consider this to be a fully model-independent extraction, but every attempt is made to make it minimally model-dependent while still preserving the original assumptions relevant to the phenomenology, for transparency of the schema. The DNN treatment of N_q(x) enables the flexibility of handling all the light quark flavors q = {u, ū, d, d̄, s, s̄} independently. The method presented is intended to be somewhat analogous to the approach of treating N_q(x) as a fit function to be parameterized, for the sake of illustrating the advantages. It is natural to extend this study towards a symbolic regression of N_q(x) or the full expression of the Sivers function, but that is not the scope of the present work.
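As a concrete example of such a known, managed bias, the k⊥ modulation of [4] takes the form h(k⊥) = √(2e)(k⊥/m₁)exp(−k⊥²/m₁²), normalized so that |h| ≤ 1 everywhere. A small numerical check of that normalization (the functional form is taken from [4]; the m₁ value here is illustrative, not the fitted one):

```python
import math

def h(k_perp, m1):
    """Sivers k_perp modulation of [4]:
    h(k_perp) = sqrt(2e) * (k_perp / m1) * exp(-k_perp^2 / m1^2).
    Its maximum is exactly 1, reached at k_perp = m1 / sqrt(2), which keeps
    the factorized Sivers term within the positivity bound when the
    x-dependent factor is itself bounded by 1 in magnitude."""
    return math.sqrt(2 * math.e) * (k_perp / m1) * math.exp(-k_perp**2 / m1**2)

m1 = 0.8  # GeV; illustrative value of the free parameter
peak = h(m1 / math.sqrt(2.0), m1)
```

Because any rescaling of h(k⊥) can be absorbed by the DNN term during training, the particular normalization matters only for interpreting the individual factors, not for the quality of the combined fit.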
FIG. 4. The block diagram for SIDIS Sivers asymmetries (see Eq. (8)). N_q denote the DNN models for each quark flavor q = {u, d, s, ū, d̄, s̄}.

FIG. 5. The block diagram for DY Sivers asymmetries (see Eq. (22)). N_q denotes the DNN models for each quark flavor q = {u, d, s, ū, d̄, s̄}, obtained by training on SIDIS Sivers asymmetries.
We use a relationship for N_q(x), similar to the seminal work [4], as a tool to generate our pseudodata for testing accuracy and reproducibility only. This definition is not used in any aspect of the final DNN fit results. The generic feedforward DNN structure for N_q(x) that we use in this work is represented in Fig. 3. As we consider SU(3) flavor-symmetry breaking in QCD, we have N_u(x), N_ū(x), N_d(x), N_d̄(x), N_s(x), and N_s̄(x) to handle the six light-quark flavors independently. Bjorken-x is the only input to the initial layer of each N_q(x), and the final layer is a single-node output. The m_1 in A_0(z, p_hT, m_1), as defined in Eq. (9), is treated as a free parameter, with its initialization obtained from our first chi-square minimization fit (discussed in Section IV B), and then allowed to vary throughout the DNN training process with SIDIS data, as shown in Fig. 4. The DNN model results are then used to infer the projections for both SIDIS kinematics and DY kinematics (see Fig. 5).
There are many open-source software libraries for machine learning and artificial intelligence. For this particular extraction, we use TensorFlow [54]. A deep feedforward architecture is used, with hidden layers of multiple nodes whose weights are initialized randomly by Gaussian sampling around zero with a standard deviation of 0.1. The degree of potential non-linearity is introduced into the network by the choice of activation function, a selection that can have a substantial impact on DNN performance and training dynamics. We chose the ReLU6 activation function, a variant of the Rectified Linear Unit (ReLU). ReLU6 has been shown empirically to perform better under low-precision conditions by encouraging the model to learn sparse features earlier, which is beneficial for learning complex patterns and relationships from the experimental data. We also use L1 (Least Absolute Shrinkage and Selection Operator) regularization, a technique used to prevent overfitting and improve the model's performance and generalization ability while encouraging sparsity and feature selection. L1 regularization encourages sparsity in the activations by adding a penalty term to the loss function that is proportional to the absolute value of the weights [55]. With this regularization term, the most important inputs receive the largest weights, so that noisy or redundant information is discarded. The strength of the regularization is controlled by the magnitude of the regularization coefficient, which is set to 10^-12. Additionally, we use a dynamically decreasing learning rate: the learning rate is automatically reduced by 10% if the training loss has not decreased within the last 200 epochs (i.e., patience = 200). The optimizer is Adam, and the loss function is the mean squared error. During the hyperparameter optimization process, there were slight deviations in the number of layers, nodes per layer, initial learning rate, batch size, and number of epochs, but the basics of the scheme just described remain consistent for all DNNs used.
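The core ingredients just described (ReLU6 activations, Gaussian initialization with standard deviation 0.1, a single Bjorken-x input, and an MSE loss with an L1 weight penalty of 10^-12) can be sketched in a few lines of numpy. The layer sizes here are illustrative only; the actual fits use TensorFlow.

```python
import numpy as np

# Minimal numpy sketch of a per-flavor network N_q(x): one input
# (Bjorken-x), ReLU6 hidden activations, a linear single-node output,
# and MSE + L1 loss. Layer sizes are illustrative assumptions.

rng = np.random.default_rng(0)

def relu6(z):
    """ReLU6 activation: min(max(z, 0), 6)."""
    return np.minimum(np.maximum(z, 0.0), 6.0)

def init_layers(sizes, std=0.1):
    """Gaussian weight initialization around zero with std 0.1."""
    return [(rng.normal(0.0, std, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Forward pass; the final layer is linear (single-node output)."""
    a = np.atleast_2d(x).T  # shape (batch, 1): Bjorken-x is the only input
    for i, (W, b) in enumerate(layers):
        z = a @ W + b
        a = z if i == len(layers) - 1 else relu6(z)
    return a.ravel()

def loss(layers, x, y, l1=1e-12):
    """Mean squared error plus an L1 weight penalty (coefficient 10^-12)."""
    mse = np.mean((forward(layers, x) - y) ** 2)
    return mse + l1 * sum(np.abs(W).sum() for W, _ in layers)

layers = init_layers([1, 32, 32, 1])   # illustrative architecture
xs = np.linspace(0.01, 0.4, 50)        # Bjorken-x grid
l0 = loss(layers, xs, np.zeros_like(xs))
```

Training (Adam updates, the ReduceLROnPlateau-style learning-rate schedule) is omitted here; only the forward pass and the regularized loss are shown.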
Data shuffling is used to randomly organize the data into the batches that pass through the training process. When training, it is crucial to expose the DNN to a diverse range of training samples. Shuffling the training data ensures that the DNN does not learn patterns based on the sequential order of the data; instead, it promotes a more randomized and representative distribution, preventing the model from becoming biased toward any specific subset or ordering. This randomization helps the model generalize better and makes it more robust to variations in the input data, ultimately improving its overall performance and accuracy. Batch size is a critical component of successful training as well: a batch size too large for a small data set, even a well-shuffled one, can also lead to a biased model. In what follows, we train two dedicated DNN models: one for the SIDIS data with a polarized proton target and one for the data with a polarized deuteron (neutron) target. Since the number of available data points for the deuteron target is significantly smaller than for the proton target, care is taken to find the optimum configuration of shuffling and batch size for the deuteron case, whereas for the proton the impact of shuffling and batch size is nearly negligible.
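The shuffle-then-batch scheme amounts to re-permuting the data each epoch before splitting it into mini-batches, which can be sketched as follows (the data and batch size are placeholders):

```python
import numpy as np

# Sketch of per-epoch shuffling and batching: the data are freshly
# permuted before being split, so the network never sees a fixed
# ordering of the points.

def shuffled_batches(x, y, batch_size, rng):
    """Yield (x, y) mini-batches in a fresh random order."""
    idx = rng.permutation(len(x))
    for start in range(0, len(x), batch_size):
        sel = idx[start:start + batch_size]
        yield x[sel], y[sel]

rng = np.random.default_rng(1)
x = np.arange(10.0)          # placeholder kinematic points
y = x**2                     # placeholder asymmetry values
batches = list(shuffled_batches(x, y, batch_size=4, rng=rng))
```

For a small dataset (as for the deuteron target), choosing the batch size so that each batch remains a representative sample of the whole set is the practical concern discussed above.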
Our strategy is to first perform an exercise using only pseudodata to verify the extraction method that will ultimately be used on the real experimental data. First, we devise a generating function for the SIDIS Sivers asymmetry data using a conventional χ2-minimization routine (MINUIT in this case), without following the popular assumption of an "unbroken sea" [4], in order to generalize the treatment of quarks and antiquarks. We perform a series of conventional MINUIT fits step-wise to obtain the final 19 parameters for the case of broken SU(3) flavor symmetry in QCD. We then produce pseudodata (replicas) for the SIDIS asymmetry by sampling from the mock experimental errors using the generating function, with kinematics and binning in x, z, and p_hT as in the experimental data. A DNN model is then constructed with all hyperparameters tuned to achieve the highest possible accuracy and precision. Here our nomenclature becomes quite specific: we refer to the resulting distribution of DNN fits as a DNN model. The first model obtained with the method for a particular set of data is referred to as the First Iteration. At this stage, we use the distribution of fit results to obtain the mean and the error band from the initial DNN model and reparameterize the generating function so that it produces more realistic pseudodata. The DNN fits are performed again, improving the quality (both accuracy and precision) of the resulting fits, to produce a Second Iteration DNN model. The iterations can be repeated until the resulting model no longer improves within the experimental uncertainties. In this way, the DNN model approaches the best approximation of the Sivers functions relative to the true values put into the generating function.
After confirming that the method works well, an extraction of the Sivers function using the SIDIS experimental data is performed. We treat the data for a polarized proton target and a polarized deuteron target separately for two reasons. First, fitting these together would introduce a bias that would need to be managed directly, even assuming isospin symmetry in the u- and d-quark Sivers functions. Second, our approach leaves open the possibility of exploring the nuclear dependence of the Sivers functions. The construction of the DNN models for proton and deuteron is analogous. To perform the fit, another First Iteration, as previously described, is carried out by developing and tuning a DNN model using the data from the real experiment rather than pseudodata from the generating function. In the subsequent iterations, we use the generating function to tune the hyperparameters to achieve the highest possible quality of fit in comparison with the results from the First Iteration. Once a tuned model is obtained, we perform an extended study evaluating the algorithmic uncertainty as well as the systematic uncertainty of the DNN extraction method.
To elaborate on the pedagogy of this method, we organize the remainder of this section into the following subsections: (IV A) Data selection, (IV B) MINUIT fits for SU(3) flavor, (IV C) DNN method testing, and (IV D) DNN model from real data.

A. Data selection
Generally, TMD factorization is considered valid only under the kinematic restrictions Λ ≪ Q and p_hT ≪ zQ. The first of these requirements is important because it is only in this limit that one can apply massless-hadron kinematics, resulting in near-negligible contamination from power corrections; work where Q > 2 GeV is common. The second requirement sets the kinematic range in which a traditional factorization scheme will work. Additionally, the transverse mass scale should generally be preserved around (m/(zQ))^2. We do not strictly adhere to any of these guidelines but instead study the consequences of using progressively less restrictive cuts. For the global data selection, we focus our attention on the fixed-target SIDIS and DY data. For the proton DNN fits, the kinematically 3D-binned data from HERMES2020 [56] are left out for validation, and only the 1D-binned HERMES2020 data are used for the training. For the neutron DNN fits, the polarized 3He data from Jefferson Lab [57,58] are left out and used to test the new projections of the DNN model trained on the deuteron COMPASS data only.
Table I summarizes the kinematic coverage, the number of data points, and the reaction types of the datasets considered in this work. In addition to the SIDIS datasets used in the fits, the polarized DY dataset from the COMPASS experiment is also listed in Table I, as we demonstrate the predictive capability of the DNN model by comparing the projections with the real data points. The DY projections are made using the trained SIDIS DNN model, assuming the sign change expected from conditional universality. For training the DNN model related to the proton target, we use the HERMES2009 [59], COMPASS2015 [60], and HERMES2020 [56] data points associated with 1D kinematic binning, leaving the HERMES2020 [56] data associated with 3D kinematic binning to compare with the projections from the trained model. The COMPASS2009 [52] dataset with a polarized-deuteron target is used for the neutron Sivers extraction as a separate DNN model.

TABLE I. Kinematic coverage, reaction types, and number of data points for each dataset considered in this work.

For the initial χ2-minimization fit with MINUIT, the same datasets are used as in [4] for consistency: HERMES2009 [59], COMPASS2009 [52], and COMPASS2015 [60]. This fit is described in the next subsection. The errors in Fit 1 of Table II are quite large, and many of the parameters are consistent with zero. This highlights the challenges of reproducing this type of analytical fit with a standard χ2-minimization fit. The last column, Fit 5, is a fit of our generating function to the proton-DNN model, which provides the most accurate parameterization of N_q to date. Again, we point out that we do not use this parameterization for any physics extraction but only to generate pseudodata.

B. MINUIT fits for SU(3) flavor
The analysis begins with a χ2-minimization fit with MINUIT, similar to the approach in [4], except that we expand the number of parameters to treat each of the light-quark flavors separately. The results of the MINUIT fits are shown in Table II. Fit 1 reproduces the original fit results of Anselmino et al. directly from [4]. Here the N_q(x) used for the u and d quarks is Eq. (27), but N_q(x) = N_q for the antiquarks. In this fit there are three parameters, N_q, α_q, and β_q, for each quark flavor, a single N_q for each antiquark, plus m_1, resulting in a 9-parameter fit. Fit 2 is a test to reproduce the same parameterization as in Fit 1. We note that in Fit 2 none of the 9 parameters are fixed or have bounds imposed. Both of these first columns consider only u and d quarks and antiquarks. The Fit 1 parameters were used as the initial values for Fit 2. The difference between these two sets of fit parameters demonstrates the challenge of systematic consistency with this method, though some parameters match reasonably well. For Fit 3 we use the same convention but add the strange quark, introducing four additional parameters, N_s, α_s, β_s, and N_s̄, which leads to a 13-parameter fit. To initialize the 13 parameters in Fit 3, we use the corresponding values from Fit 2 and zeros for the rest. Fit 4 uses Eq. (27) for both quarks and antiquarks, so that the treatment of all three light-quark flavors is the same. In addition to the parameters from Fit 3, Fit 4 contains six more parameters for the antiquarks. The result of Fit 4 leads to a larger N_s̄ value to compensate for the fact that α_s̄ and β_s̄ are now present in the fit. However, the motivation behind performing Fit 4 in this way is to generalize N_q(x) in a flavor-independent fashion for both quarks and antiquarks. Fit 4 is the final fit that we use to generate pseudodata for testing the DNN fits and for calculating the model's accuracy. The last column, Fit 5, uses the analytical generating function to fit the proton-DNN model in fine bins. The error of the model is propagated to each bin, and a MINUIT fit is performed, providing a highly accurate parameterization of N_q in this particular functional form. We provide this as a basis for comparison.
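The step-wise initialization scheme (fit a small parameter set, then reuse the converged values to seed the enlarged fit) can be sketched as follows. MINUIT is replaced here by scipy for self-containedness, and the Eq. (27)-type ansatz below is an assumption of this sketch:

```python
import numpy as np
from scipy.optimize import minimize

# Step-wise chi^2 fit sketch: a small fit ("Fit 2") seeds the next fit
# ("Fit 3") with its converged values. scipy's Nelder-Mead stands in for
# MINUIT, and the normalized N_q(x) ansatz is an assumed Eq. (27)-type
# form, not the paper's exact expression.

def Nq(x, N, a, b):
    """Ansatz whose maximum over x equals N (Anselmino-type normalization)."""
    return N * x**a * (1 - x)**b * (a + b)**(a + b) / (a**a * b**b)

def chi2(params, x, y, err):
    N, a, b = params
    if a <= 0 or b <= 0:
        return 1e12  # keep shape parameters in their physical domain
    return np.sum(((Nq(x, N, a, b) - y) / err) ** 2)

rng = np.random.default_rng(2)
x = np.linspace(0.05, 0.6, 30)
y = Nq(x, 0.4, 1.0, 4.0) + rng.normal(0, 0.01, x.size)  # toy "data"
err = np.full_like(x, 0.01)

# "Fit 2": initial values taken from a previous (published) fit
fit2 = minimize(chi2, x0=[0.3, 0.8, 3.0], args=(x, y, err),
                method="Nelder-Mead")
# "Fit 3": re-initialize with the previous converged parameters
fit3 = minimize(chi2, x0=fit2.x, args=(x, y, err), method="Nelder-Mead")
```

The same seeding pattern extends to Fits 3 and 4 of Table II, where parameters absent from the previous fit are initialized to zero.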

C. DNN Method Testing
We develop a systematic method of constructing, optimizing, and testing the DNN fits by using pseudodata to ensure a quality extraction from the experimental data. Our approach uses a combination of Monte Carlo sampling and synthetic data generation. The pseudodata points are randomly generated by sampling within multi-Gaussian distributions centered around each experimental data point, with variance given by the experimental uncertainty. Many pseudodata DNN fits (instances) are performed together to obtain the uncertainty of the resulting DNN model (mean and distribution).

TABLE II. Fit 1: original fit results from [4]; Fit 2: re-fit similar to [4]; Fit 3: fit results including strange quarks; Fit 4: fit results with the same treatment for all three light-quark flavors.

The general approach is to use existing experimental data to parameterize a fit function and then use it to generate new synthetic data (replicas) with similar characteristics. The pseudodata are generated with a known Sivers function so that the extraction technique can be explicitly tested. An error bar is assigned to each new data point, taken directly from the experimental uncertainties reported for the complete set of kinematic bins. This approach aims to produce pseudodata that simulate the experimental data as closely as possible, with particular sensitivity to phase space, so that the test metrics are also relevant for the real experimental extraction. To accomplish this, the pseudodata generator must be very well tuned to the kinematic range of the experimental data; hence, the generating function contains as much feature-space information as possible. It is important to emphasize that the metrics we use to quantify the improvement of the Second Iteration over the First Iteration are sensitive to phase space. The accuracy, Eq. (29), is defined as the proximity of the mean of the DNN fits to the true Sivers function, ε_q(x, k⊥) = |⟨∆N f_q/N↑(x, k⊥)⟩_replicas − ∆N f_q/N↑^true(x, k⊥)|, and the precision, Eq. (30), as the standard deviation σ_q(x, k⊥) of the replicas. The generating function used to produce the true value of the Sivers function is improved in the process of optimizing the DNN hyperparameters. This approach improves the generating function and the DNN fit with each iteration. As a result, more realistic data can be generated in each iteration, which enables better hyperparameter optimization and testing for the experimental data in the subsequent iteration. Note that "experimental data" here still refers to pseudodata replicas that are generated using the real experimental data rather than the generating function.
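The replica-based accuracy and precision metrics can be computed point-by-point over phase space and then summarized by their maxima, as in the following sketch (the "true" curve and error scale are placeholders):

```python
import numpy as np

# Phase-space-sensitive test metrics: accuracy as the deviation of the
# replica mean from the known truth, precision as the replica standard
# deviation, both evaluated per phase-space point and summarized by
# their maxima. Truth curve and error scale are illustrative.

rng = np.random.default_rng(3)
truth = np.sin(np.linspace(0, 1, 40))                 # stand-in "true Sivers"
replicas = truth + rng.normal(0, 0.02, (1000, 40))    # Gaussian-sampled replicas

accuracy = np.abs(replicas.mean(axis=0) - truth)      # eps_q, per point
precision = replicas.std(axis=0)                      # sigma_q, per point

eps_max, sigma_max = accuracy.max(), precision.max()  # maxima over phase space
```

With 1000 replicas the statistical error added by the replica sampling itself scales as the per-point error divided by sqrt(1000), which is the "negligible statistical error" condition discussed below.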
In the pseudodata test, the same number of replicas is used in the First Iteration and in the Second Iteration. The number of replicas should be kept the same across consecutive iterations to control statistical error variation from the replicas. Accurately propagating the experimental uncertainty with the replica approach requires a sufficient number of replicas so that only negligible statistical error from the replicas is added. For the present study, our pseudodata uncertainty is simplified and is represented by a single error bar that contains the experimental statistical and systematic errors provided by the data publication. These are the steps to perform the DNN method testing with pseudodata using the generating function:
1. Generate pseudodata for the SIDIS Sivers asymmetry using the generating function (the initial function used is Fit 4 in Table II).
2. First Iteration: construct a DNN fit and tune its hyperparameters by training with the pseudodata from Step 1. Use 10% of the pseudodata for the validation in each epoch.

3. Determine the optimum number of epochs by analyzing the training and validation losses for each replica in Step 2.
4. Improve the generating function: (a) Perform an intermediate DNN fit to the experimental data, using the optimized hyperparameters from Steps 2 and 3. (b) Use the trained DNN fit from (a) to infer the asymmetry over the 3D kinematics (x, z, p_hT) in fine bins. (c) Perform a MINUIT fit on the fine-binned data to obtain the new generating-function parameterization.

5. Perform Step 1 again using the improved generating function.
6. Second Iteration: Perform Step 2 again with the pseudodata generated from Step 5.
7. Perform a comparison of the Sivers functions extracted at the First Iteration vs Second Iteration in terms of the accuracy, precision and the magnitude of the loss-function at the final epoch.
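The two-iteration flow of Steps 1 through 7 can be sketched compactly as follows. Every component here is a stand-in (a toy generating function and a trivial "training" step); the sketch only shows how each iteration's model feeds the refit of the generator for the next iteration:

```python
import numpy as np

# Compact sketch of the two-iteration testing loop. 'gen_func' plays the
# generating function; 'train_dnn' is a trivial stand-in for the tuned
# DNN fit that returns a replica ensemble.

rng = np.random.default_rng(4)
x = np.linspace(0.02, 0.4, 25)

def gen_func(x, scale):
    return scale * x * (1 - x)                    # toy asymmetry shape

def make_replicas(truth, err, n=200):
    """Step 1: Gaussian-sample pseudodata around the 'data' points."""
    return truth + rng.normal(0, err, (n, truth.size))

def train_dnn(replicas):
    """Stand-in for Steps 2-3: here simply the replica ensemble."""
    return replicas

err = 0.01
truth = gen_func(x, 1.0)
model1 = train_dnn(make_replicas(truth, err))        # First Iteration
scale2 = np.mean(model1) / np.mean(x * (1 - x))      # Step 4: refit generator
truth2 = gen_func(x, scale2)
model2 = train_dnn(make_replicas(truth2, err))       # Second Iteration (5-6)

# Step 7: compare accuracy between iterations
acc1 = np.abs(model1.mean(axis=0) - truth).max()
acc2 = np.abs(model2.mean(axis=0) - truth2).max()
```

In the real procedure the refit in Step 4 is a MINUIT fit to fine-binned DNN output, and the accuracy comparison uses the Eq. (29) and Eq. (30) metrics rather than this single maximum deviation.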
The accuracy metric is a critical part of the testing and is used to understand the resulting extraction method's strengths and weaknesses. A truly robust method would consistently be able to make accurate predictions for any generating-function parameterization over the relevant kinematic range. To verify the robustness of our extraction method, we test it at the final stage with many artificial Sivers functions and their corresponding pseudodata.
The hyperparameter optimization process is a systematic process of trial and error, with architecture changes such as adding or removing layers and changing the number of nodes per layer. The initial learning rate is varied, as is the batch size. Care is taken to achieve the lowest training and validation losses. We monitor the stability of the loss by examining the magnitude and frequency of its fluctuations; a moderate and even downward trend is optimal. After the generating function is improved, in principle it holds more information, so the number of layers and nodes is increased to better fit the new, more realistic pseudodata in the Second Iteration. If there are too few hidden layers and nodes, it becomes apparent when the training loss does not decrease compared to previous fits; too many additional layers and nodes result in instability of the training loss. Tuning continues until little improvement in accuracy and precision can be discerned.
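The trial-and-error search over depth, width, and initial learning rate amounts to evaluating candidate configurations and keeping the one with the lowest loss. The following toy sketch mimics the under/over-capacity behavior described above; the loss surrogate and grid values are assumptions of this sketch:

```python
import numpy as np

# Toy hyperparameter search: scan depth, width, and initial learning
# rate, keep the configuration with the lowest surrogate loss. The
# surrogate penalizes both too-small and too-large networks, mimicking
# the capacity behavior described in the text.

def toy_val_loss(n_layers, n_nodes, lr):
    """Stand-in for a full training run (NOT a real loss)."""
    capacity = n_layers * n_nodes
    return abs(np.log(capacity / 256.0)) + 0.1 * abs(np.log(lr / 1e-3))

best = min(
    ((nl, nn, lr) for nl in (2, 4, 6)
                  for nn in (32, 64, 128)
                  for lr in (1e-4, 1e-3, 1e-2)),
    key=lambda c: toy_val_loss(*c),
)
```

In the actual procedure each candidate configuration involves a full training run, and the stability of the loss curve is inspected alongside its final value.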
FIG. 6. A comparison of the DNN model before (left column) and after (right column) hypertuning with the improved generating function. In the First Iteration the DNN model is based on Fit 4, whereas in the Second Iteration the model is based on the hyper-tuned DNN fit. Hypertuning optimization leads to higher accuracy and precision with the same number of epochs. The dotted line in each case represents the generating function (the true value), while the solid line represents the mean of the DNN fits. In the top two plots u is blue, s is green, and d is red; similarly for the antiquarks in the lower two plots.

TABLE III. Optimized hyperparameter configurations: C^i_0 and C^f_0 for pseudodata from the generating function, C^i_p and C^f_p for SIDIS data with a polarized-proton target, and C^i_d and C^f_d for SIDIS data with a polarized-deuteron target, where i and f indicate the First Iteration and Second Iteration, respectively. The initial learning rate is listed (×10^-4), as is the final training loss (×10^-3). The accuracy and precision in each case are the maxima over the phase space.

Figure 6 illustrates the results from the First Iteration (two plots on the left-hand side) and the Second Iteration (two plots on the right-hand side). When generating fine-binned kinematics for the Sivers asymmetry for a given kinematic variable, the averaged values are used for the other variables, including Q², as the Q² evolution is nearly negligible over the considered kinematics. The sets of hyperparameters in the First Iteration and the Second Iteration are given in the columns for each DNN model in Table III. The listed learning rate (×10^-4) is the initial learning rate, as a dynamically decreasing learning rate is used (explained in Sec. IV). The accuracy ε_q(x, k⊥) is defined in Eq. (29), and the results in the table correspond to the maximum deviation of the mean of the replicas from the true values, ε^max_q; the precision σ_q(x, k⊥) is defined in Eq. (30), and the results are the maximum standard deviations of the replicas, σ^max_q, in units of 10^-3. The improvement in both accuracy ε^max_q and precision σ^max_q can be observed from the closeness of the solid line (mean of the 1000 DNN replicas) to the dashed line (generating function) for quarks (upper plots) and antiquarks (lower plots). The improvement of the DNN model in each case is significant, to the point where it is difficult to distinguish the solid line from the corresponding dashed line, indicating a high degree of accuracy and precision. We also observe that the training loss in the Second Iteration is about one order of magnitude smaller than that of the First Iteration.

D. DNN model from real data
In contrast to the testing of the DNN model with pseudodata from the generating function, we now describe the steps for applying the DNN fit method to real experimental data. The pseudodata test of Sec. IV C used the combined proton and deuteron data, as in all previous global fits of the Sivers function. In the following extraction with real experimental data, the proton and deuteron data are fitted separately. To take full advantage of the information provided by the model testing in the previous section, the steps from Sec. IV C are performed again separately for the proton and deuteron data. The starting hyperparameters for the first DNN fit in this section, including the architecture, initial learning rate, batch size, and optimal number of epochs, are all determined based on the best accuracy and precision obtained with the well-tuned pseudodata from the generating function in each case. This provides more information on the initial hyperparameters, so that even the First Iteration is a quality fit. In the following steps, two distinct DNN models are developed, one for the proton-quark Sivers asymmetry and one for the neutron-quark Sivers asymmetry. The following procedure is common to both.
1. First Iteration: construct a DNN fit and tune its hyperparameters by training with the experimental data. Use 10% of the data for validation in each epoch.
2. Identify the optimum number of epochs as the point where the validation loss exceeds the training loss (see Fig. 8 for an example).
3. Perform a DNN fit, with the optimized hyperparameters from Step 1 and the number of epochs determined from Step 2, using all the data without leaving any for validation.

4. Improve the generating function:
(a) Use the tuned DNN model in step 3 to infer the asymmetry over the 3D kinematics (x, z, p hT ) in fine bins.
(b) Perform a MINUIT fit to obtain the new generating function. (c) Produce pseudodata for the SIDIS asymmetry using the generating function from the previous step.

5. Perform Step 1 and Step 2 with the pseudodata from Step 4 (from the improved generating function).
6. Second Iteration: perform a DNN fit with the optimized hyperparameters from Step 5 using experimental data without leaving any data for validation.
7. Perform a comparison of the Sivers functions extracted at the First Iteration vs Second Iteration in terms of the accuracy, precision and the magnitude of the loss-function at the final epoch.
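The crossover criterion of Step 2 above (stop at the first epoch where the validation loss exceeds the training loss, cf. Fig. 8) can be sketched as follows, with toy loss curves standing in for the real training history:

```python
import numpy as np

# Crossover criterion for the optimum number of epochs: the first epoch
# at which the validation loss exceeds the training loss. The loss
# curves below are illustrative toys, not real training histories.

def optimum_epochs(train_loss, val_loss):
    """First epoch with val > train, or the last epoch if no crossover."""
    above = np.nonzero(val_loss > train_loss)[0]
    return int(above[0]) if above.size else len(train_loss) - 1

epochs = np.arange(500)
train = 1.0 / (1 + 0.02 * epochs)                       # monotone toy loss
val = train + np.maximum(0.0, 0.002 * (epochs - 300))   # diverges past ~300
```

After the optimum epoch count is determined this way, Step 3 retrains on all the data (no validation split) for exactly that many epochs.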
Although architectural specifics such as the number of hidden layers, nodes per layer, and learning rate may be modified in the hyperparameter optimization step, the number of epochs and the number of replicas remain the same. The overall feedforward architecture also remains consistent, for simplicity. Table III shows the optimized hyperparameter configurations, represented by C with superscripts {i, f} for the First Iteration and the Second Iteration and subscripts {0, p, d} for pseudodata, proton data, and deuteron data, respectively.

FIG. 8. An example of the determination of the optimum number of epochs (about 300 in this case, marked by the vertical red dashed line) at the First Iteration, based on the crossover between the training (blue curve) and validation (orange curve) losses. The training-loss behavior up to the optimum number of epochs, using all data points, is represented by the green curve for the First Iteration and the red curve for the Second Iteration.
Clearly, the pseudodata from the generating function still play a critical role in the tuning and testing of the fit to the real experimental data. Accuracy is necessarily determined using pseudodata, so that the mean of the final DNN model can be compared directly to the true Sivers asymmetry. This procedure for determining accuracy assumes that the true Sivers asymmetry from the experiment can be approximated by the well-tuned generating function after the final iteration. There is a systematic error associated with analyzing accuracy this way, but this type of error can be estimated.
After completing the extraction method described above, we perform a systematic uncertainty assessment of the overall extraction method. To test the reliability of the extraction, we adjust the parameters of the generating function and repeat the extraction process, again following the full set of steps, for several variations of pseudodata. Using the Sivers asymmetry generated from the optimized generating function, the absolute differences between the mean of the DNN model and the true value over k⊥ are used to estimate the systematic uncertainties of the final DNN model from this extraction technique.
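This assessment amounts to repeating the extraction for several perturbed generating functions and taking the envelope of |model mean − truth| over k⊥, which can be sketched as follows (the k⊥ shape, parameter values, and replica error are placeholders):

```python
import numpy as np

# Sketch of the systematic-uncertainty assessment: repeat the extraction
# for several perturbed generating-function parameters and take the
# envelope of |replica mean - truth| over k_T. All shapes and scales
# here are illustrative stand-ins.

rng = np.random.default_rng(5)
kT = np.linspace(0.0, 1.5, 60)

def true_sivers(kT, m1):
    return (kT / m1) * np.exp(-kT**2 / m1**2)     # toy k_T shape

bands = []
for m1 in (0.7, 0.8, 0.9):                        # varied generator parameters
    truth = true_sivers(kT, m1)
    replicas = truth + rng.normal(0, 0.01, (300, kT.size))
    bands.append(np.abs(replicas.mean(axis=0) - truth))

sys_uncert = np.max(bands, axis=0)                # envelope over variations
```

The resulting band is what accompanies the extracted Sivers functions as the method's systematic uncertainty in the figures of Sec. V.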

V. RESULTS
In this section, we present the results from two separate DNN models, the proton-DNN and the deuteron-DNN, along with their optimized hyperparameters. The number of parameters in the optimized proton-DNN model is 11 million, whereas for the deuteron-DNN model it is 2.8 million. Only SIDIS data were used to train the DNN models in this exploratory AI-based extraction technique. The optimized hyperparameter configurations for both models are provided in the columns with subscripts p and d, respectively, in Table III. To quantitatively represent the improvement made by performing the steps of the previous section, we present our accuracy and precision results in the lower part of Table III.

A. DNN fit to SIDIS data
We now explore the results and compare some of our final fits and projections with those of other global fits. Note that the χ2 values are calculated for each kinematic bin based on Pearson's reduced χ2 statistic; the values indicated in our plots are calculated after the analysis is complete, rather than as part of the minimization process.
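The per-bin diagnostic is simply Pearson's χ2 divided by the degrees of freedom, computed after the fit; a minimal sketch with placeholder numbers:

```python
import numpy as np

# Pearson's reduced chi^2 for one kinematic bin, computed after the fit
# as a diagnostic (not as part of the minimization). The data, model,
# and error values below are placeholders.

def reduced_chi2(data, model, err, n_params=0):
    """chi^2 / d.o.f. for one kinematic bin."""
    chi2 = np.sum(((data - model) / err) ** 2)
    return chi2 / (len(data) - n_params)

data = np.array([0.02, 0.05, 0.01, -0.01])
model = np.array([0.025, 0.045, 0.012, -0.008])
err = np.array([0.01, 0.01, 0.012, 0.011])
val = reduced_chi2(data, model, err)
```

Since these values are computed post hoc, n_params is zero here; a fitted-parameter count would be subtracted if the χ2 were part of the minimization itself.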
The plots of the SIDIS Sivers asymmetry data and our resulting DNN models (for proton and deuteron) are shown in Fig. 9. Each plot includes the partial χ2 values for the particular x, z, and p_hT bins for each hadron type. HERMES2009 [59] (top left), HERMES2020 [56] (top right), and COMPASS2015 [60] (bottom left) are described by the proton-DNN model with a root-mean-square error (RMSE) of 0.0225, whereas the COMPASS2009 [52] (bottom right) dataset is described by the deuteron-DNN model with an RMSE of 0.0220. In comparison with [4], there are some improvements in describing the proton SIDIS data on π± and K+ in HERMES2009, which can be seen quantitatively in the partial χ2 values from the DNN model. This indicates that the possible effects attributed to TMD evolution [6,8,48], which were assumed to be the cause of the larger χ2 values in [4] for the proton data on π+, may have been somewhat integrated into the DNN model. Although HERMES2020 [56] reported SIDIS data in both 1D and 3D kinematic bins, in our fits we use the data in the form of 1D kinematic bins to be consistent with the rest of the datasets.
The deuteron-DNN model's description of the COMPASS2009 [52] data is shown in the bottom-left sub-figure of Fig. 9. Without applying any cuts to the data, the DNN model yields an RMSE of 0.0220, covering the full range of the x, z, and p_hT kinematic projections of the COMPASS2009 dataset. This is in contrast to the limited kinematic coverage considered in [5,7,9,51]; notably, the data points at p_hT > 1 GeV are described somewhat better by the deuteron-DNN model compared to the fits in [4,26]. This suggests that performing dedicated fits to data from specific polarized nucleon targets enables better information extraction, which is true for both DNN and other fitting approaches. The advantage of DNNs in this case is that they perform well even with limited data.
We did not include the JLab [57,58] data in our deuteron-DNN model fits, in order to use them as a projection test for the neutron Sivers asymmetry. Our projection indicates good agreement with the 3He data, but both the data and the projection are largely consistent with zero. It is also important to note that in this work we do not impose any isospin-symmetry condition (such as f⊥u1T = f⊥d1T) on the SIDIS data with the deuteron target, as was done in [7]. The successful construction of two different proton and neutron Sivers functions may indicate that our DNN approach can be particularly useful for analyzing data from polarized nucleons in different nuclei, potentially opening up a new way of exploring the nuclear effects associated with TMDs.

B. Sivers in Momentum Space
The extracted Sivers functions, including the systematic uncertainties from the DNN models, at x = 0.1 and Q² = 2.4 GeV², are shown in Fig. 10, represented by the mean with 68% confidence-level (CL) error bands. The corresponding optimized hyperparameter configurations for the proton-DNN and deuteron-DNN models are C^f_p and C^f_d, respectively, as given in Table III. The Sivers functions extracted using the deuteron-DNN model are consistent with zero, considering the accompanying systematic uncertainties. However, this is still a significant result, given the limited statistics of the SIDIS data with a deuteron target. The Sivers functions extracted using the proton-DNN model have small systematic uncertainties. Note that we use the ∆N f_q/p↑(x, k⊥) notation, as in [4], to represent the Sivers functions in our plots; one can use Eq. (3) to convert between notations. Comparing the extracted Sivers functions in k⊥-space to other extractions in the literature can also be useful, although we have not included those curves in our plots. In summary, the proton-DNN model extractions are relatively precise, with narrower error bands compared to those in [3, 4, 7-9, 11, 26].
The first transverse moments from the DNN models are given in Fig. 11, with 68% CL error bands, using the optimized hyperparameter configurations. Comparing with the results in Fig. 1 of [51], as shown in Fig. 12, we see that the xf⊥(1)u1T from the DNN model is more consistent with [5,8] in the vicinity of x = 0.1, although it is consistent with [27] at x = 0.01. The xf⊥(1)d1T is, in general, consistent with the extractions from [4,5,8,26,27,51,61]. Additionally, the extracted behavior is consistent with the qualitative observation in [26], which was originally a prediction from the large-Nc limit of QCD [62]. Most importantly, the DNN model is able to capture the feature of the u and d quarks orbiting in opposite directions without imposing this constraint directly, as done in [45]. In terms of a quantitative assessment, Eq. (32) could be accurate in the large-Nc limit if the isospin-breaking effects are also included at next-to-leading order in O(1/Nc).
Regarding the light sea quarks, the proton-DNN model extracts features such as ∆N f_{d̄/p↑}(x) < 0, even considering the scale of the uncertainties. This is similar to observations from a theoretical calculation based on the SU(2) chiral Lagrangian [63] and to predictions in the large-N_c limit of QCD [62]. The central values extracted in [27] are qualitatively similar to the features seen in Fig. 11, which are small but non-zero within the uncertainties. Additionally, the corresponding central values extracted in [4] are both negative but consistent with zero. The first transverse moments x f^{⊥(1)q}_{1T}(x), in the case of SU(3) flavor, from our DNN result are more precise (narrower error bands) than those in [4,23,26]. However, the error bands are slightly larger than those in JAM20 [5], which includes more data from SIDIS, DY, SIA, and pp collisions, and parameterizes the Sivers, Collins, and transversity TMDs together.

SIDIS Projections
In Fig. 13, we compare the SIDIS Sivers asymmetries (in red) projected onto the HERMES2020 3D kinematic bins with the experimental measurements (in blue). These results are obtained using our proton-DNN model and are accompanied by 68% CL error bands. We also provide calculated partial χ² values for each kinematic bin as a quantitative assessment. Unlike in [7], we have made projections for all the data points, since we have not applied any data cuts. Relatively large partial χ²/N_pt values (as also observed in [7]) occur only in a couple of K⁺ and K⁻ bins. In Fig. 14, we present the projected SIDIS Sivers asymmetries for the JLab kinematics [57,58], obtained using our deuteron-DNN model. The figure includes 68% CL error bands and a comparison with the JLab neutron Sivers asymmetry data. These results are consistent with those reported in [4,8,9,61].
FIG. 14. The deuteron-DNN model projections (red) with 68% CL error bands, for JLab kinematics [57,58], in comparison with measured data (blue) without the systematic uncertainty.
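The partial χ² bookkeeping behind these comparisons is straightforward; a minimal sketch (the numbers below are toy values, not actual HERMES measurements):

```python
import numpy as np

def partial_chi2(asym_pred, asym_meas, err_meas):
    """Partial chi^2 for one kinematic bin: sum of squared residuals
    normalized by the measurement uncertainty."""
    r = (np.asarray(asym_pred) - np.asarray(asym_meas)) / np.asarray(err_meas)
    return float(np.sum(r**2))

# Toy bin with 3 points; chi2/N_pt near 1 indicates statistical
# compatibility between the projection and the data.
pred = [0.020, 0.035, 0.050]
meas = [0.025, 0.030, 0.060]
err  = [0.010, 0.010, 0.020]
chi2 = partial_chi2(pred, meas, err)
chi2_per_point = chi2 / len(pred)
```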

DY Projections
The resulting DNN model, based on the SIDIS Sivers asymmetries, is capable of projecting the Sivers asymmetries in DY experiments, which can be sensitive to either valence quarks or sea quarks depending on the relevant kinematic coverage. For example, the COMPASS2017 polarized DY Sivers asymmetry measurements [53] are dominated by the valence quarks, and the upcoming SpinQuest (E1039) experiment's polarized DY asymmetry measurements [64] will be dominated by the sea quarks. For these DY projections, we follow the block diagram represented in Fig. 5, which includes the assumption of the sign change of the Sivers function in DY relative to the SIDIS process, mentioned in Eq. (1). Using the trained proton-DNN model, we make projections of the DY Sivers asymmetries for the COMPASS2017 experiment [53], with a proton target and a pion beam, using CTEQ6l [36] for the proton PDFs and JAM21PionPDFnlo [65] for the pion PDFs. Meanwhile, using both the proton-DNN model and the deuteron-DNN model, we make predictions for the SpinQuest experiment. The kinematic inputs are x₁ (beam), x₂ (target), x_F (= x₁ − x₂), q_T (transverse momentum of the virtual photon), and Q_M (di-lepton invariant mass).
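The kinematic inputs listed above are related at leading order by x_F = x₁ − x₂ and Q_M² ≃ x₁ x₂ s; a small helper illustrates this (the function name and the example numbers are ours, for illustration only):

```python
import math

def dy_kinematics(x1, x2, s):
    """Leading-order Drell-Yan kinematics: Feynman-x x_F = x1 - x2 and
    di-lepton invariant mass Q_M = sqrt(x1 * x2 * s), with s in GeV^2."""
    x_F = x1 - x2
    Q_M = math.sqrt(x1 * x2 * s)
    return x_F, Q_M

# Illustrative valence-dominated point (not an actual COMPASS bin).
x_F, Q_M = dy_kinematics(x1=0.5, x2=0.2, s=357.0)
```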
The projected DY Sivers asymmetries for the COMPASS2017 kinematics, obtained using the trained proton-DNN model, are compared with the data [53]. Some distinct advantages are clear with our DNN approach compared to standard analytical fitting.

A. Systematic Study of Data Cuts
Neural networks, especially deep neural networks, have a profound capacity for learning complex patterns and relationships in data, allowing them to model non-linear and high-dimensional functions without any prior information on the function's dimensionality or form. When performing global fits, the kinematic dependencies can be intricate and difficult to capture using conventional analytical approaches, leading to the need to limit the data in order to achieve a decent fit. DNNs can adapt to these complexities and, in some cases, benefit from the lack of kinematic cuts, making the resulting model more robust while remaining sensitive to a wide range of kinematic variables.
The TMD factorization formula for the SIDIS hadronic tensor W^{µν} was defined in [6,16], where all non-perturbative information (the soft part) is encoded in F_{f/N↑} and D_{h/f}, whereas the hard part is perturbatively calculable. To preserve the validity of TMD factorization, strict kinematic cuts are normally applied. However, applying stringent cuts to already limited experimental data reduces the statistical significance of the model and may lead to the loss of valuable information. We explore a range of possible cuts with the intention of preserving as much data as possible, taking care not to introduce bias by selecting only data that lead to better fits. The ability of DNNs to perform well even when trained on a broader range of data with complex correlations is a major advantage over analytical fitting. It is therefore possible, whenever there is no direct conflict with the necessary factorization theorem, to include data points from regions that would otherwise be excluded by traditional kinematic cuts, allowing the DNN model to implicitly incorporate the necessary corrections.
DNNs have the ability to implicitly capture higher-order effects that would be nearly impossible to obtain from a direct analytical fit, unless the initial ansatz is very fortunate or the functional form has been proven to contain the necessary physics. TMD factorization relies on specific assumptions about the dominance of certain terms in the cross-section calculation. One critical limitation is q_hT ≪ Q, which is required for the derivation of the factorization property suitable for the case of relatively low transverse momentum. In this factorization scheme, approximations are made that have errors of order (q_hT/Q).
For q_hT greater than Q, the conventional formalism with integrated fragmentation functions would no longer be valid. This directly leads to the restriction of SIDIS data to a region where p_hT ≪ zQ, which can severely reduce the available data. TMD factorization loses accuracy at large q_hT, with fractional errors characterized as (q_hT/Q)^α. The Collins and Soper approach [16] gives (m/Q) errors for the full range of q_hT; it treats the TMD term as a first approximation to the cross section and allows a correction to be applied as an additive contribution from ordinary collinear factorization. Such corrections can be implicitly captured when training a DNN model over the full range of p_hT. This is the case even if the assumptions on which such corrections are based are not relevant in all kinematic regions of the applicable data. The only requirement is that TMD factorization does not break down at a rigid boundary but instead remains valid at a cost to the model's accuracy. The scale of such errors can be numerically estimated, so there is an advantage to using as much data as possible and then studying the systematic effects of specific limitations. This approach allows the implicit inclusion of higher-order effects in the DNN model, providing the means for a more comprehensive analysis that can handle a wider kinematic range without sacrificing factorization validity. However, careful consideration and validation are necessary to ensure the reliability of the model, as well as accurate quantification of the systematic error as a function of p_hT.
The applicability of TMD factorization was ensured by applying cuts to the SIDIS data based on various criteria in the literature. Although the limit q_T ≪ Q [12,13,15,16,69-72] covers the conventional TMD factorization formalism, the large-Q requirement is needed to suppress the power corrections ∼ m²/Q² and ∼ Λ²/Q², where Λ is a general non-perturbative scale of QCD. Since m and Λ are ∼ 1 GeV, the condition Q > 2 GeV was applied on top of the condition δ = p_hT/(zQ) ≤ 0.3 in [7,11], although the phenomenological region for δ is 0.2 − 0.3 in order for TMD factorization to be valid. It is well known from global-fit efforts that accommodating more data points is a greater challenge when using a smaller upper bound for δ. Therefore, a more conservative limit q_T/Q < 0.75 was used in [9] to retain a large enough data set to perform a meaningful fit. On the other hand, a combination of cuts, Q² > 1.63 GeV², 0.2 < z < 0.6, 0.2 < p_hT < 0.9 GeV, was applied in [5] and discussed in detail in [73], with the standpoint that data which satisfy p_hT ≪ Q may not satisfy q_T ≪ Q, depending on the value of z_h, because q_T ≃ p_hT/z_h, and may therefore be difficult to describe in a TMD approach. This is the case not only for Sivers function extractions but also for extractions of other TMDs [74].
In this exploratory effort with DNNs, the aforementioned power corrections are not directly imposed. In addition to the basic data cut Q² > 1 GeV², we performed the Q² > 2 GeV² and p_hT < zQ cuts separately with the proton-DNN model to understand the impact on the extracted Sivers functions. In addition, a dedicated DNN fit has been performed with the JAM20 [5] cuts, i.e. Q² > 1.63 GeV², 0.2 < z < 0.6, 0.2 < p_hT < 0.9 GeV, as a demonstration of the impact of such a selective combination of cuts on the extracted Sivers functions.
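The cut schemes discussed above can be sketched as a simple filter; the threshold values are taken from the text, while the data points below are invented purely for illustration:

```python
import math

def passes_cuts(pt, scheme):
    """Apply one of the kinematic-cut schemes to a SIDIS data point,
    given as a dict with keys Q2 (GeV^2), z, and phT (GeV)."""
    Q = math.sqrt(pt["Q2"])
    if scheme == "baseline":   # basic cut only
        return pt["Q2"] > 1.0
    if scheme == "largeQ":     # suppress m^2/Q^2, Lambda^2/Q^2 power corrections
        return pt["Q2"] > 2.0
    if scheme == "delta":      # p_hT < z*Q, i.e. delta = p_hT/(z*Q) < 1
        return pt["phT"] < pt["z"] * Q
    if scheme == "jam20":      # JAM20-style combination of cuts
        return (pt["Q2"] > 1.63 and 0.2 < pt["z"] < 0.6
                and 0.2 < pt["phT"] < 0.9)
    raise ValueError(scheme)

points = [
    {"Q2": 1.2, "z": 0.3, "phT": 0.25},
    {"Q2": 3.0, "z": 0.5, "phT": 0.40},
    {"Q2": 4.0, "z": 0.7, "phT": 1.00},
]
kept = {s: sum(passes_cuts(p, s) for p in points)
        for s in ("baseline", "largeQ", "delta", "jam20")}
```

As the toy counts show, the stricter combinations retain fewer points, which is exactly the trade-off between factorization validity and statistical significance discussed above.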
The results are plotted in Fig. 17, representing only the Sivers functions for the u and d flavors. In summary, all analyzed cuts respecting Q² > 2 GeV² and p_hT < zQ, except the JAM20 [5] cuts, are consistent with the DNN model that only contains the Q² > 1 GeV² cut, which for this data set amounts to no cut at all. Note that Q² > 1 GeV² is the recommended generic cut in [16], assuming the application of corrections and error estimates for increasing q_T. The deviation measured in this study for cuts that depend strictly on q_T is only ∼ 2%, so it is clearly advantageous to incorporate a wide range of q_T to ensure the implicit inclusion of the corrective contributions to the hadronic tensor from the Y-term in the DNN model. Fig. 18 shows the resulting Sivers functions from the proton-DNN for all six light quark flavors, with relatively small uncertainty bands caused by the selection of the JAM20 [5] cuts. Note that the uncertainty band represents the statistical component with 68% CL from 1000 replica models.
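The 68% CL bands quoted throughout are percentile bands over replica models; a minimal sketch of this construction, using a toy stand-in for the replica curves (not our actual trained replicas):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 1000 replica models evaluated on a k_perp grid:
# each row is one replica's curve with a randomly varied normalization.
k = np.linspace(0.0, 1.0, 50)
replicas = np.array([(0.10 + 0.02 * rng.standard_normal())
                     * k * np.exp(-4.0 * k**2)
                     for _ in range(1000)])

# Central curve from the replica mean; 68% CL band from the
# 16th and 84th percentiles, taken pointwise on the grid.
central = replicas.mean(axis=0)
lo, hi = np.percentile(replicas, [16, 84], axis=0)
```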
FIG. 17. Solid lines with light bands represent the u (blue) and d (red) Sivers functions using the cut Q² > 1 GeV². The resulting DNN models from all other tested cuts are also shown.
FIG. 18. Sivers functions from a DNN model retrained with the JAM20 [5] cuts applied to the data, demonstrating that being selective with the data can reduce the error bands of the fit but may also add an unintentional bias.

B. Systematic Study on Choice of h(k ⊥ )
In the original framework [4], the Sivers function is written in the factorized form of Eq. 4, where h(k⊥) is of an unknown form that is simply postulated by the authors (in [4]) based on the assumed kinematic response. Indeed, the analytical expression of h(k⊥) has been treated with various types of ansatz, mostly with Gaussian-type parameterizations [4,5,7,35,51]. It is also entirely possible that this term has nothing to do with the proper theoretical treatment, as suggested in [6]. When a term in the factorized TMD expression is required to manage some kinematic behavior but has not been formally derived, the DNN analysis presented here can be particularly useful, as it allows for a high-quality fit despite the unknown form of h(k⊥). After the determination of the other terms using the DNN, such as N_q(x) in this case, the terms can be separated and studied independently to determine their interpretation. If h(k⊥) or any of the multiplicative terms are biased, the DNN can compensate for the bias. This is done by building an architecture that has orders of magnitude more parameters (usually thousands) than the expression in question, h(k⊥) in this case, which effectively reduces the weight of h(k⊥) in the fit. This can be done progressively and systematically, with intentional control of how much each term contributes to the fit. In the fit performed for this analysis, we build the DNN to optimize the loss directly, so the contribution of h(k⊥) to the final fit is small, though the DNN becomes specialized to the particular h(k⊥). In this way, the DNN directly mitigates any type of incorrect ansatz, provided the final number of DNN parameters is very large with respect to h(k⊥). This can be done while still producing a high-quality numerical model encoding all information available in the data. The DNN term becomes uniquely parameterized to account for the biases, while the resulting combination of terms in the model remains largely unbiased,
even though each individual term may not be independently meaningful. It should be noted that for our analysis, the choice of having the DNN model N_q(x) rather than N_q(x)h(k⊥), or some other choice, is arbitrary and is done purely as an exercise to demonstrate the flexibility of the technique. To study the systematic variation of the choice of h(k⊥), we used the same candidate functions for h(k⊥) that were also used in [35]. The model must be trained separately with each h(k⊥), creating entirely different models for N_q(x) that result in the same Sivers function, as shown in Fig. 19. It is clear that the DNN N_q(x) is capable of incorporating both types of h(k⊥) without affecting the Sivers functions in the final model, as well as the asymmetries (with deviations of less than 1%).
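To illustrate the bounded-ansatz idea numerically, two commonly used shapes can be compared: the Gaussian-type form used in, e.g., [4], and a simple rational alternative (these are illustrative choices, not necessarily the exact candidates of [35]). Both respect the positivity bound |h(k⊥)| ≤ 1, peaking at exactly 1:

```python
import math

SQRT_2E = math.sqrt(2.0 * math.e)

def h_gauss(k, M1):
    """Gaussian-type ansatz: h(k) = sqrt(2e) * (k/M1) * exp(-k^2/M1^2);
    reaches its maximum of exactly 1 at k = M1/sqrt(2)."""
    return SQRT_2E * (k / M1) * math.exp(-k**2 / M1**2)

def h_rational(k, M1):
    """Illustrative rational alternative: h(k) = 2*k*M1 / (k^2 + M1^2);
    reaches its maximum of exactly 1 at k = M1."""
    return 2.0 * k * M1 / (k**2 + M1**2)

# Both candidates respect |h(k)| <= 1, so either can be absorbed
# into the factorized Sivers form with the DNN compensating.
M1 = 0.5
peak_gauss = h_gauss(M1 / math.sqrt(2.0), M1)
peak_rational = h_rational(M1, M1)
```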

VII. THE 3D TOMOGRAPHY OF PROTON
The TMD density of unpolarized quarks inside a proton polarized in the ŷ-direction can be graphically represented using the relation [7,51]

ρ^a_{p↑}(x, k_x, k_y; Q²) = f^a_1(x, k⊥²; Q²) − (k_x/M_p) f^{⊥a}_{1T}(x, k⊥²; Q²),

where k⊥ is a two-dimensional vector (k_x, k_y), and the unpolarized TMD and the Sivers function for quark flavor a are represented by f^a_1(x, k⊥²; Q²) and f^{⊥a}_{1T}(x, k⊥²; Q²), respectively. The corresponding quark density distributions from our proton-DNN model for all light quark flavors in SU(3) flavor at x = 0.1 and Q² = 2.4 GeV² are shown in Fig. 20. The observed shifts in each quark flavor are linked to the correlation between the orbital angular momentum (OAM) of quarks and the spin of the proton. The results shown in Fig. 20 provide evidence of non-zero OAM in the wave function of the proton's valence and sea quarks. The proton-DNN model calculations for the u and d quarks are similar to those reported in [7,51], where the distortion has a positive shift for the u quark and a negative shift for the d quark with respect to the +x direction. From the results in Fig. 20, the proton-DNN model demonstrates that a virtual photon traveling towards a polarized proton "sees" an enhancement of the quark distribution: in momentum space, more u and ū quarks to its right-hand side and more d and d̄ quarks to its left-hand side. Moreover, the resultant shifts for the ū and s quarks from the proton-DNN model are also in agreement with [7]. In the low-x region, the momentum-space quark density becomes almost symmetric [51], indicating that the Sivers effect becomes smaller and the corresponding experimentally observed asymmetry is small. The forthcoming data from Jefferson Lab at 12 GeV and the Fermilab SpinQuest experiment, and the anticipated future data from the Electron-Ion Collider [75-77], along with their extensive kinematic coverage, are expected to provide invaluable insights into the 3D structure of the nucleon. Obtaining a model-independent estimate of quark angular momentum requires parton distributions that simultaneously depend on both momentum and position [78-81]. In addition to experimental observations, lattice QCD (LQCD) computations provide a valuable tool for QCD phenomenology from first principles. For instance, LQCD has been utilized to investigate the Sivers effect and other TMD observables at different pion masses [82], as well as the generalized parton distribution at the physical pion mass [83]. Additionally, LQCD results on the Collins-Soper kernel over a range of b_T (the Fourier conjugate of the transverse momentum) are useful for global fits of TMD observables from different processes [84]. In this way, LQCD can complement the experimental data and open up an avenue to enhance the DNN method to explore the 3D structure of nucleons more directly.
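The density relation above can be evaluated on a (k_x, k_y) grid; as a minimal sketch with toy Gaussian stand-ins for the two TMDs (the widths and normalizations are invented, not our fitted values):

```python
import numpy as np

M_P = 0.938  # proton mass (GeV)

def rho(kx, ky, f1, f1T_perp):
    """Density of quark flavor a in a proton polarized along +y:
    rho = f1(k_perp^2) - (kx / M_P) * f1T_perp(k_perp^2)."""
    k2 = kx**2 + ky**2
    return f1(k2) - (kx / M_P) * f1T_perp(k2)

# Toy Gaussian stand-ins for the unpolarized TMD and the Sivers function.
f1 = lambda k2: np.exp(-k2 / 0.25) / (np.pi * 0.25)
f1T_perp = lambda k2: -0.1 * np.exp(-k2 / 0.20) / (np.pi * 0.20)

# Slice along k_y = 0: a negative f1T_perp (u-quark-like) distorts
# the otherwise symmetric density toward positive k_x.
kx = np.linspace(-1.0, 1.0, 201)
vals = rho(kx, np.zeros_like(kx), f1, f1T_perp)
peak_kx = kx[np.argmax(vals)]
```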

VIII. EXPLORING EVOLUTION
The solution of the TMD evolution equations [12,16] involves the nonperturbative Collins-Soper kernel D(b). In the literature, the evolution scales have generally been selected as in [6-8, 16, 85], and global fits have been performed using some form of evolution factor as a function of the Collins-Soper kernel. Although a full analysis incorporating TMD evolution in the DNN fit is beyond the scope of this work, a preliminary DNN fit has been performed by modifying N_q(x) to N_q(x, Q²), adding a separate input node for Q² in addition to x. Fig. 21 shows the percentage of the Sivers asymmetry (A^{sin(ϕ_h−ϕ_S)}_{UT}) vs. Q² (GeV²) in comparison with [6]. The preliminary version of the TMD evolution from the DNN is in agreement with the observation in [6] within 68% CL (with 1000 replica models) regarding the suppression of the full asymmetry faster than ∼ 1/√Q but slower than ∼ 1/Q². Moreover, the dynamic trend and consistency with the evolution behavior seen in [6] indicate that a complete evolution treatment at large Q may be a collective effect worthy of deeper investigation with our DNN approach.
As presented, the Sivers function satisfies DGLAP evolution but not TMD evolution. We have, however, introduced the first steps of how such an analysis could be performed. To perform the best possible evolution analysis, it would be preferable to start with no prior analytical ansatz.
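The Q² input-node modification described above can be sketched with a toy fully connected network in plain NumPy; the layer sizes and the use of log(Q²) as the second input are illustrative choices, not our tuned hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def init_mlp(sizes):
    """Random weights and zero biases for a small fully connected network."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x, q2):
    """N_q(x, Q^2): the only structural change from N_q(x) is a second
    input node, here carrying log(Q^2)."""
    a = np.array([x, np.log(q2)])
    for i, (W, b) in enumerate(params):
        a = a @ W + b
        if i < len(params) - 1:
            a = np.tanh(a)  # hidden-layer activation
    return float(a[0])

params = init_mlp([2, 16, 16, 1])  # 2 inputs: x and log(Q^2)
val = forward(params, x=0.1, q2=2.4)
```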

IX. CONCLUSION AND DISCUSSION
In conclusion, this article demonstrates the effectiveness of using specialized Deep Neural Networks (DNNs) as part of a fit function for a global extraction of the Sivers function from experimental transverse single-spin asymmetry data. It has been clearly shown that these tools are highly flexible and can be used to build accurate models even when the factorized terms contain biases. By training the model with these terms present, the DNN can account for the biases, resulting in a largely unbiased final model. This approach enables the exploration of the existing formalism and the testing of new phenomenology while managing and studying biases in individual terms. The proposed method can minimize the errors and ambiguity associated with the ill-defined expressions normally constructed to meet the factorization requirements.
Our proposed method leverages artificial intelligence (AI) to perform global fits and extract the Sivers distributions of unpolarized quarks in both polarized protons and neutrons. We use a generating function to ensure robustness and accuracy in the extraction process. Progressive improvement in the extracted information can be achieved by optimizing the architecture and the corresponding hyperparameters using pseudodata generated in the same kinematic bins as the real data, and then translating that improvement to the extraction from real experimental data. Our schema handles complex and sparse data effectively, with the generating function providing additional quality control and a means to quantify accuracy and precision.
As the first attempt to extract the Sivers function using AI techniques, we chose the N_q(x) parameterization of the deep neural network to incorporate all x-dependent features. This initial DNN extraction of N_q(x) uses SIDIS data from HERMES and COMPASS to build a Sivers function model, which can then be used to make predictions of the Sivers asymmetries for both SIDIS and DY processes. The fitting method successfully extracts the Sivers function for the valence quarks and light quark flavors in a flavor-independent manner. This investigation is exploratory, with the intention of developing tools and techniques that minimize error and maximize utility, which we hope to expand upon in further work. The trained model predicts results consistent with experimental data, demonstrating its predictive capability. Projections of the sea-quark Sivers function are made for the SpinQuest experiment with polarized proton and deuteron targets.
There are two key challenges when using data to extract TMDs: the applicability of TMD factorization, and achieving meaningful fit results. An exploratory effort has been performed to address both with our schema, using a variety of data cuts. As a result, the DNN study shows that varying the q_T-dependent cuts produces only small deviations in the extracted Sivers functions compared to the result obtained with a wide range of q_T, while simultaneously respecting the TMD factorization theorem. We also analyze the TMD evolution of the Sivers asymmetry, which aligns with previous work and indicates the potential of the DNN approach. Further analysis in this direction is warranted.
Our method and techniques are intended to be simple and reproducible, highlighting the potential of AI for data analysis and information extraction.Cooperation between experimentalists, theorists, and the growing computational efforts is crucial for accelerating advancements in the field.A global effort to standardize data collection, organization, and storage in an unbinned format with detailed covariance information must be developed to take full advantage of the AI tools now available.AI and its emerging technologies will continue to accelerate the progress of data-driven physics, but the speed of that progress largely depends on the level of foresight and cooperation within the community.

FIG. 1. Kinematics of the SIDIS process in the nucleon-photon center-of-mass frame.

FIG. 3. A generic representation of the DNN architecture for N_q(x), where q = {u, d, s, ū, d̄, s̄}, and a^{(n)}_m represents node m in hidden layer n. The figure shows only up to n = 3 for demonstration purposes.

(IV C) DNN model training with pseudodata, and (IV D) DNN model training with real experimental data.

FIG. 7. The qualitative improvement of the extracted Sivers functions for u (blue), d (red), and s (green) quarks at x = 0.1 and Q² = 2.4 GeV², using the optimized proton-DNN model at the second iteration (solid lines with dark-colored 68% CL error bands), compared to the first iteration (dashed lines with light-colored 68% CL error bands).

FIG. 9. The DNN fit results of the SIDIS Sivers asymmetries (red), accompanied by 68% CL error bands, in comparison with the actual data (blue). The proton-DNN model is trained with the HERMES2009, HERMES2020, and COMPASS2015 data, whereas the deuteron-DNN model is trained with the COMPASS2009 data. The calculated partial χ² values are provided as quantitative assessments for all kinematic bins.

The optimized hyperparameter configurations are C2 and C3 in Table III for the proton-DNN and deuteron-DNN models, respectively. The calculated moments using the deuteron-DNN model are consistent with zero within the systematic uncertainties.

FIG. 13. Projections onto the HERMES2020 3D kinematic bins using the proton-DNN model, including 68% CL error bands (in red), in comparison with the actual data points (in blue).

FIG. 21. The Sivers asymmetry evolution in Q², compared to the result from [6]. The red solid line and the band represent the mean and standard deviation of the A^{sin(ϕ_h−ϕ_S)}_{UT} from 1000 replica models of the proton-DNN at x = 0.12, z = 0.32, p_hT = 0.14 GeV.

TABLE I. The SIDIS and DY datasets considered in the fits. The DY data are included to demonstrate the predictive capability of the DNN model: we make projections using the trained SIDIS DNN model, assuming a sign change, to predict the real experimental DY data points.

TABLE II. Collection of MINUIT fit results. Fit 1 is from Anselmino et al.

TABLE III. Summary of the optimized sets of hyperparameters. The labels C^i_0 and C^f_0 denote results from the generating-function pseudodata, C^i_p and C^f_p the proton-DNN results, and C^i_d and C^f_d the deuteron-DNN results.