Machine learning Sasakian and $G_2$ topology on contact Calabi-Yau $7$-manifolds

We propose a machine learning approach to study topological quantities related to the Sasakian and $G_2$-geometries of contact Calabi-Yau $7$-manifolds. Specifically, we compute datasets for certain Sasakian Hodge numbers and for the Crowley-Nordström invariant of the natural $G_2$-structure of the $7$-dimensional link of a weighted projective Calabi-Yau $3$-fold hypersurface singularity, for 7549 of the 7555 possible $\mathbb{P}^4(\textbf{w})$ projective spaces. These topological quantities are then machine learnt with high performance scores: the Sasakian Hodge numbers are learnt from the $\mathbb{P}^4(\textbf{w})$ weights alone, using both neural networks and a symbolic regressor, which achieve $R^2$ scores of 0.969 and 0.993 respectively. Additionally, properties of the respective Gröbner bases are well-learnt, leading to a vast improvement in computation speeds which may be of independent interest. The data generation and analysis further led us to raise novel conjectures.

While the holonomy reduction to $G_2$ amounts to a difficult non-linear PDE, pragmatically it may be relaxed in a number of ways, by considering the fundamental notion of a $G_2$-structure: a non-degenerate 3-form $\varphi$ which induces a so-called $G_2$-metric $g_\varphi$; its failure to give rise to a metric with $\mathrm{Hol}(g_\varphi) \subset G_2$ is encoded by its full torsion tensor $T := \nabla^{g_\varphi}\varphi$. A $G_2$-structure is closed if $d\varphi = 0$ and coclosed if $d\psi = 0$, where $\psi := *_\varphi \varphi$, and the torsion-free condition $T = 0$ is equivalent to $\varphi$ being both closed and coclosed. We propose therefore to work with coclosed $G_2$-structures on certain contact Calabi-Yau (cCY) 7-manifolds, which are closely related to the weighted projective Calabi-Yau 3-folds famously studied in [33].
Despite their unsuitability to M-theory, torsionful $G_2$-structures retain relevance in the context of (3+7)-dimensional heterotic supergravity with flux, as demonstrated by [34-36]. Indeed, as shown by [37], one can explicitly solve the corresponding Strominger system on cCY 7-manifolds, by way of coclosed $G_2$-structures yielding non-trivial scalar and $G_2$-instanton gauge fields, with constant dilaton, as well as an $H$-flux with prescribed Chern-Simons defect, in accordance with the 'anomaly-free' condition referred to as the heterotic Bianchi identity.
Topological invariants of CY links. Contact Calabi-Yau manifolds were introduced by Tomassini and Vezzoni in [38]; they consist of Sasakian manifolds endowed with a closed basic complex volume form, which is 'transversally holomorphic' in the sense of foliations. It was shown in [39] that such a manifold naturally carries a coclosed $G_2$-structure.
A special class of such structures arises from Calabi-Yau links, which were first discussed from the perspective of $G_2$ topology in [40]. A 7-dimensional weighted link $K_f$ is obtained as the intersection of a sufficiently small $S^9 \subset \mathbb{C}^5$ with a weighted homogeneous affine variety (defined by the zero locus of the polynomial $f$) having an isolated singularity at the origin. Milnor showed that such links are 2-connected compact smooth manifolds; indeed $K_f$ is the total space of a Hopf $S^1$-bundle over a (weighted) projective 3-orbifold in $\mathbb{P}^4(\mathbf{w})$, for appropriate choices of polynomial degree and weighted $\mathbb{C}^\times$-action, see §2. Interestingly, the dataset of possible weights that admit these CY 3-folds consists of the 7555 cases classified in [33]. Therefore, we pursue the construction of a Calabi-Yau link for each of these weight systems, computing the following two types of topological invariants.
From the perspective of Sasakian topology, the (basic) Hodge numbers $h^{p,q}$ can be obtained as the dimensions of certain linear subspaces of the Milnor algebra $M_f = \mathbb{C}[[z_1, \dots, z_5]]/J_f$, where $J_f$ is the corresponding Jacobian ideal of $f$ [41]. We provide the first systematic computation of the Sasaki-Hodge numbers $\{h^{3,0}, h^{2,1}\}$ for this class of 7-dimensional CY links.
On the other hand, considering their $G_2$-topology, a CY link bounds an 8-dimensional Milnor fibre which smoothly extends the $G_2$-structure $\varphi$ as a spinor field; hence it is possible to explicitly compute the Crowley-Nordström (CN) homotopy invariant $\nu(\varphi) \in \mathbb{Z}/48\mathbb{Z}$, introduced in [42]. Building upon the calculations first carried out in [40], we obtain an exhaustive dataset of $\nu$-invariants for Calabi-Yau links.
Machine Learning cCY topology. We analyse these two sets of topological data from a perspective similar to what has been done for Calabi-Yau manifolds [1]. In the standard Calabi-Yau case, the weights defining the ambient projective space are sufficient to uniquely determine the Calabi-Yau 3-fold's Hodge numbers, motivating ML of the known formulas from weights to Hodge numbers [43]. However, in the 7-dimensional Calabi-Yau link case, no such explicit formula is known, and one would initially expect the specific choice of polynomial coefficients to change the topology. Extending the ML techniques to these link invariants would establish the existence of an approximate formula for Sasakian Hodge numbers in terms of the weight information alone, whose true form ML interpretability techniques may then uncover. Such a formula would provide new insights into Sasakian structures and be dramatically quicker to compute, as well as open the door to applications to other related invariants, including those of $G_2$-structures, as motivated by their ML in this work.
We therefore extend previous work learning CY Hodge numbers from weights to predicting Calabi-Yau link topological properties (namely Sasakian Hodge numbers and CN invariants). We find that, whilst the machine is able to learn the Sasakian Hodge number topology of these manifolds with high performance measures, the same cannot be established for the CN invariant. The datasets of the weighted Calabi-Yau polynomials used in the link construction, with the computed Sasakian Hodge numbers and CN invariants, as well as the scripts used for analysis and machine learning, are available on GitHub [44].
This letter is organised as follows: in §2 we survey some background to contact Calabi-Yau manifolds, G 2 -geometry, and machine learning; in §3 we describe the methodology for the construction of the Calabi-Yau link data and perform relevant statistical analysis of the datasets of invariants; in §4 we present the results of the machine learning investigations; and we conclude in §5, discussing some future prospects.

Calabi-Yau links
One may interpret structure group reductions on an odd-dimensional contact metric manifold $(K^{2n+1}, \eta, \xi, g)$ as 'even-dimensional' structures 'transverse' with respect to an $S^1$-action along the fibres of a submersion $S^1 \to K \to V$. Here $\eta \in \Omega^1(K)$ denotes the contact form and $\xi \in \mathfrak{X}(K)$ its (unit) dual Reeb field, such that $\eta(\xi) = 1$. Whenever clear from context, we will omit mention of the Riemannian metric $g$, for simplicity.
In particular, Sasakian geometry may be seen as transverse Kähler geometry, corresponding to the reduction of the transverse holonomy group to $U(n)$. Such manifolds are equipped in addition with a transverse complex structure $J \in \mathrm{End}(TK)$ such that $J \circ J = -I_{TK} + \eta \otimes \xi$, yielding a decomposition of forms by basic bi-degree, and a transverse symplectic form $\omega = d\eta \in \Omega^{1,1}(K)$, all of which satisfy suitable compatibility conditions; for more details see e.g. [45, §2] or the canonical reference [46]. Furthermore, Sasakian manifolds with special transverse holonomy $SU(n)$ are studied by Habib and Vezzoni [39, §6.2.1]: they carry a transverse complex volume form $\Upsilon$ satisfying $\Upsilon \wedge \overline{\Upsilon} = c_n\, \omega^n$, for a dimensional constant $c_n$, and $d\Upsilon = 0$, with $\omega = d\eta$.
Such a polynomial $f$ defines an affine variety $V_f$ which, in general, has a singularity at the origin.
Assuming that the origin is an isolated singularity, the intersection of $V_f$ with a surrounding small hypersphere $S^{2n+3}_\varepsilon$ is a compact smooth $(2n+1)$-manifold $K_f = V_f \cap S^{2n+3}_\varepsilon$, the so-called weighted link of the singularity [47]. A weighted link $K_f$ of degree $d$ and weight $\mathbf{w}$ is a Calabi-Yau link if
$$ d = \sum_i w_i , \tag{1} $$
which precisely guarantees the existence of a cCY structure on $K_f$. The dimension of the moduli space of these cCY structures is well-understood and discussed in §5.

Sasakian Hodge numbers of a CY link
The $\mathbb{C}^\times(\mathbf{w})$-action on $\mathbb{C}^{n+2}$ induces a contact-metric $S^1$-action on $K_f$. It admits finitely many distinct isotropy subgroups, contained in some finite subgroup $\Gamma \subset S^1$, so that $K_f$ admits a double fibration over a projective $n$-orbifold $V \subset \mathbb{P}^{n+1}(\mathbf{w})$. The following key theorem allows us to compute certain mixed Hodge numbers $h^{p,q}(K_f)$, for $p+q = n$, from the dimensions of the primitive cohomology groups $H^n_0(V^*_f)$, which in turn can be obtained from the Milnor algebra. A brief survey of Sasakian Hodge numbers can be found in Appendix A.
Theorem 2 ([41, Theorem 1.2], [48, 49]). Let $f$ be a $\mathbf{w}$-homogeneous polynomial on $\mathbb{C}^n$ of degree $d$. Given $p+q = n$, let $\ell = (p+1)d - \sum_i w_i$, and denote by $(M_f)_\ell$ the linear subspace of the Milnor algebra consisting of degree-$\ell$ elements. Then $h^{p,q}(K_f) = \dim_{\mathbb{C}} (M_f)_\ell$.
When (1) is satisfied, i.e. $K_f$ is a Calabi-Yau link, the condition reduces to $\ell = pd$.
Finally, Moriyama expresses the dimension of the moduli space of cCY structures on a given 7-dimensional link $K_f$ in terms of the Sasakian Hodge numbers [50]; see (2). In particular, the third Betti number $b_3$ is completely determined by $h^{2,1}$ and $h^{3,0}$, which we have computed in this work. The remaining term is $h^{1,1}_S$, which is not calculable via Theorem 2; however, it may be accessible by other means. For instance, in the study of Calabi-Yau manifolds [33, 51, 52], there is a well-established notion of homological mirror symmetry between Hodge numbers. We propose that if one could extend this to a notion of mirror symmetry among links, perhaps one could access $h^{1,1}$ for a link as the $h^{2,1}$ of the respective 'mirror'. For this dataset, we have enumerated the $h^{2,1}$'s exhaustively, so we could in principle know all the terms in (2), and subsequently the dimensions of the moduli space of cCY structures, at least for CY link mirror pairs.

The Crowley-Nordström invariant on cCY 7-manifolds
For an arbitrary closed 7-manifold with $G_2$-structure $(Y^7, \varphi)$, Crowley and Nordström have defined a $\mathbb{Z}/48\mathbb{Z}$-valued homotopy invariant $\nu(\varphi)$, which is a combination of topological data from a compact coboundary 8-manifold with $\mathrm{Spin}(7)$-structure $(W^8, \Psi)$ extending $(Y^7, \varphi)$, in the sense that $Y = \partial W$:
$$ \nu(\varphi) := \chi(W) - 3\sigma(W) \mod 48 , $$
where $\chi$ is the real Euler characteristic and $\sigma$ is the signature.

Weak R-equivalence of weighted polynomials
In light of Theorem 2, we will observe in §3 that the Sasakian Hodge numbers and the natural CN invariant of a CY link depend only upon the Milnor algebra $M_f$. The relation between the Milnor algebras of different weighted homogeneous polynomials was examined in [53], where the notion of R-equivalence is introduced. Theorem 2 in [53] gives a sufficient condition for R-equivalence between two such polynomials, quoted below:

Theorem 5. Let $f, g$ be $\mathbf{w}$-homogeneous polynomials on $\mathbb{C}^n$ of degree $d$, such that $J_f = J_g$; then $f$ is R-equivalent to $g$.

However, our initial empirical observations (as detailed in §3.3) suggested that any homogeneous polynomial with the same weight vector (up to permutations), and no further singularities, has the same $\ell$-degree subspaces of the Milnor algebra, up to linear isomorphism. This then implies that their respective Sasakian Hodge numbers and CN invariants coincide. This motivates us to propose the following definition and conjecture.

Definition 6. Two weighted homogeneous polynomials $f, g$ on $\mathbb{C}^n$ are said to be weakly R-equivalent if the respective $\ell$-degree linear subspaces of their Milnor algebras are isomorphic, for each $\ell$ such that $p + q = n$, as in Theorem 2.
Conjecture 7. Consider two weighted homogeneous polynomials $f, g$ on $\mathbb{C}^n$ of the same degree $d$; if their weight vectors $\mathbf{w}_f$ and $\mathbf{w}_g$ coincide (up to permutations), then $f$ and $g$ are weakly R-equivalent.
The Conjecture is somewhat surprising, since it encompasses cases in which the Jacobian ideals $J_f$ and $J_g$ are non-isomorphic, and thus $M_f$ and $M_g$ are not equivalent. As we will see in §3, although certain steps in the algorithms to compute the Sasakian Hodge numbers and CN invariant involve the Gröbner basis, which is directly related to the Jacobian ideal, the results of these computations seem to depend only on the initial sets of weights of the $\mathbb{C}^\times$-action.

Machine Learning
Aiming at an audience in the community of mathematics and theoretical physics, we provide a very brief introduction to neural networks, the architecture we use in our investigation [30, 54, 55]. We begin by introducing the neuron, the building block of any neural network. A neuron is a vertex in an oriented graph, which takes in a set of input data $\{x_i\}$ and produces a single numerical output $\hat{y}$, by the following three steps: 1. each input $x_i$ is multiplied by a weight $W_i$, giving $W_i x_i$; 2. all the weighted inputs are summed and a bias $b$ is added, giving $\sum_i W_i x_i + b$; 3. the sum is passed through a non-linear activation function, which produces the output $\hat{y} = \mathrm{act}\left(\sum_i W_i x_i + b\right)$.
ReLU is perhaps the most standard example of a non-linear activation function; it is defined as $\mathrm{ReLU}(x) = \max(0, x)$, and is the activation function used in this work. A neural network is then simply a collection of neurons stratified in a series of layers, whereby the neurons in each layer are connected by edges to neurons in the previous and next layers.
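As an illustration, the three steps above can be sketched in a few lines of numpy; the weights and biases here are arbitrary illustrative values, not those of the trained models in this work:

```python
import numpy as np

def relu(x):
    """ReLU activation: max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def neuron(x, W, b):
    """A single neuron: weight the inputs, sum with a bias, then activate."""
    return relu(np.dot(W, x) + b)

# A toy network stratified into layers: each layer applies an affine map
# followed by the ReLU activation, exactly as each neuron does.
def forward(x, layers):
    for W, b in layers:
        x = relu(W @ x + b)
    return x

x = np.array([2.0, -1.0, 0.5])                      # example inputs
print(neuron(x, np.array([1.0, 1.0, 2.0]), -0.5))   # relu(2 - 1 + 1 - 0.5) = 1.5
```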
The process of training a neural network starts with partitioning the dataset into training data, from which the network will learn, and test data, which is only used after training to evaluate the network's performance. The training process involves repeatedly calculating the 'error', some measure of the difference between the predicted model outputs and the true known outputs for the training data. During training, the weights and biases are stochastically updated in order to reduce this error measure. Computing the error requires a choice of loss function; for regression problems one typically uses either the mean absolute error (MAE) or the mean squared error (MSE),
$$ \mathrm{MAE} = \frac{1}{N}\sum_{i=1}^N |y_i - \hat{y}_i| , \qquad \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^N (y_i - \hat{y}_i)^2 , \tag{8} $$
where $y_i$ and $\hat{y}_i$ are the true and predicted values, respectively, and $N$ is the dataset size. The method by which we change the weights and biases to minimise the loss is called the optimisation algorithm, the simplest of which is stochastic gradient descent (SGD). There are more advanced optimisation methods that build on SGD, of which Adam [56] is a popular choice and is the particular optimiser we adopt.

For regression tasks, typical performance metrics include the MAE (8) as well as the $R^2$ score, defined as the proportion of the variance in the dependent variable that is predictable from the independent variable(s):
$$ R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} , \tag{9} $$
where $\bar{y}$ is the mean output. Therefore an $R^2$ score close to 1 means the regression model is a good fit, whereas a score close to 0 means the model is a poor fit. Additionally, despite this being a regression problem, we introduce a classification-inspired metric: Accuracy. We define this to be the proportion of test predictions within a fixed distance of the true value, where this fixed distance is a bound equal to 0.05 times the range of the true values. This also evaluates in the range $[0, 1]$, where a value of 1 indicates perfect learning.

Cross-validation is a method commonly used to get an unbiased evaluation of the learning, whereby the full dataset is shuffled and then split into $k$ non-overlapping subsets. Each subset acts as the test dataset once, whilst the remaining $(k-1)$ subsets are combined to create the complementary training dataset. $k$ independent identical neural network models are then each trained on one of these training datasets and evaluated on the complementary test dataset, with the evaluation scores recorded. The mean evaluation scores, with their standard errors, are then calculated and used to measure the model performance.
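For concreteness, the MAE, $R^2$ and custom Accuracy metrics above can be sketched in numpy; the helper names `mae`, `r2` and `accuracy`, and the sample data, are ours for illustration only:

```python
import numpy as np

def mae(y, y_hat):
    """Mean absolute error, eq. (8)."""
    return np.mean(np.abs(y - y_hat))

def r2(y, y_hat):
    """R^2 score, eq. (9): 1 minus residual over total sum of squares."""
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

def accuracy(y, y_hat, frac=0.05):
    """Proportion of predictions within frac * range(y) of the true value."""
    bound = frac * (np.max(y) - np.min(y))
    return np.mean(np.abs(y - y_hat) <= bound)

y     = np.array([0.0, 10.0, 20.0, 30.0])
y_hat = np.array([0.5, 10.5, 19.0, 30.0])
print(mae(y, y_hat))       # 0.5
print(accuracy(y, y_hat))  # bound = 1.5, all predictions within it: 1.0
```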
Continuous datasets may be quantitatively tested for correlations via the Product Moment Correlation Coefficient (PMCC), a linear method providing first-order insight into potential dataset dependencies. For random variables $X, Y$ with respective means $\mu_X, \mu_Y$ and standard deviations $\sigma_X, \sigma_Y$, and with expectation values over the datasets denoted $E(\cdot)$, it is defined as
$$ \mathrm{PMCC} = \frac{E\left[(X - \mu_X)(Y - \mu_Y)\right]}{\sigma_X \sigma_Y} . $$
It takes values in the range $[-1, 1]$, with perfect (anti-)correlation represented by a PMCC of 1 ($-1$), and no correlation by a PMCC of 0.

A more interpretable supervised learning method is symbolic regression [57]. In this method a basis of functions is assembled into an expression which is fit to training data; the basis used later in this work was $\{+, -, *, /\}$. These methods are genetic algorithms at their core: a population of expressions (represented as trees) is first initialised and then evaluated on the training data with respect to a standard loss (8), perturbed by a parsimony term which rewards simplicity in the expression (akin to regularisation techniques in traditional ML preventing overfitting). The fittest individuals are then selected for cross-breeding between their expression trees, with subsequent mutation, to form the next generation of expressions. This process is then iterated to convergence, providing an array of candidate expressions from which one can deduce information about the true functional form.
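A minimal numpy sketch of the PMCC definition above, with sample data invented for illustration:

```python
import numpy as np

def pmcc(x, y):
    """Product Moment Correlation Coefficient of two samples:
    covariance of the centred data over the product of standard deviations."""
    mx, my = np.mean(x), np.mean(y)
    cov = np.mean((x - mx) * (y - my))
    return cov / (np.std(x) * np.std(y))

x = np.array([1.0, 2.0, 3.0, 4.0])
print(pmcc(x, 2 * x + 1))   # perfectly correlated: 1.0
print(pmcc(x, -x))          # perfectly anti-correlated: -1.0
```

This agrees with `np.corrcoef`, since both use the population (ddof = 0) moments.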

Data Generation & Analysis
As previously stated, the Calabi-Yau 3-folds arising in the link construction are hypersurfaces in complex weighted projective space $\mathbb{P}^4(\mathbf{w})$. Such spaces are compact Fano orbifolds (with positive curvature), constructed by quotienting $\mathbb{C}^5 \setminus \{0\}$ by a $\mathbb{C}^\times$-action weighted by a vector $\mathbf{w}$ of 5 entries. It was shown in [33] that the list of weight vector combinations which lead to unique weighted projective spaces whose anticanonical divisors are compact and Ricci-flat is finite, with $N = 7555$ cases.
For each $\mathbb{P}^4(\mathbf{w})$, any hypersurface in the anticanonical divisor class can be represented as a weighted homogeneous polynomial of degree $d = \sum_i w_i$. Throughout this class, there is freedom in the choice of complex coefficients for each of the monomial terms in the hypersurface's defining polynomial equation. Any choice of coefficients, such that the surface does not become more singular, defines a Calabi-Yau 3-fold. All of these share the same Hodge numbers, but may otherwise be topologically distinct [33]. In addition, there is redundancy between choices of coefficient sets due to polynomial symmetries (such as coordinate transformations, coefficient normalisation, etc.), allowing multiple sets of coefficients to define the same 3-fold.
The dataset of Calabi-Yau links considered in this work was constructed using one Calabi-Yau from each of the respective 7555 $\mathbb{P}^4(\mathbf{w})$'s. In each case, the Calabi-Yau polynomial was first selected to have all monomial coefficients equal to 1. Physically, this may be interpreted as considering equivalent points on the Coulomb branches of the vacuum expectation value moduli spaces, when the Calabi-Yau manifolds are used for string compactification [58]. However, 1484 of the 7555 polynomial hypersurfaces intersected singularities in the ambient space, leading to a higher-dimensional singularity structure on the links. To avoid this, for these 1484 cases other polynomials were sampled, with coefficients from $\{1, 2, 3, 4, 5\}$, until the singularity structure was exclusively the isolated singularity at the origin, as required for the link construction.¹
To exemplify this process, consider the weight vector $\mathbf{w} = (22, 29, 49, 50, 75)$, whose degree $d = 225$ ($= \sum_i w_i$) monomial basis has 7 terms. We thus initialise the Calabi-Yau polynomial equation with the complex coefficient vector $(a_1, \dots, a_7)$, set $a_1 = \dots = a_7 = 1$, and check the singularity structure of the resulting hypersurface. In this case, the singular locus defined by this polynomial has dimension 0, i.e. it is the isolated singularity at the origin, and hence no further singularity structure is introduced. We therefore accept this Calabi-Yau 3-fold, adding it to our database for topological invariant computation (no further sampling of the $a_i$ values is required).

¹ Practically, the dimension of the singular locus of each Calabi-Yau polynomial was computed over a finite field of prime characteristic (101). Since this field reduction from the complex numbers cannot decrease the dimension of the singularity structure, where the dimension was 0 the polynomial was accepted. Where the observed singularity dimension was higher, a selection of other primes (251, 1993, 1997) was used to check for bad field reduction; where the dimension was 0 in any of those cases, the polynomial was accepted, as the increase in observed singularity dimension was due to this bad reduction. Where the singularity dimension did not decrease to 0, the polynomial was resampled until one with singularity dimension 0 was found (each time only 1 resample was required).
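The 7-term monomial basis of this weight system can be recovered by direct enumeration; below is a minimal sketch, where the helper `weighted_monomials` is illustrative and not part of the paper's released scripts:

```python
from itertools import product

def weighted_monomials(weights, degree):
    """Enumerate exponent vectors e with sum_i e_i * w_i == degree,
    i.e. the monomial basis of w-homogeneous polynomials of that degree."""
    bounds = [degree // w + 1 for w in weights]
    return [e for e in product(*(range(b) for b in bounds))
            if sum(ei * wi for ei, wi in zip(e, weights)) == degree]

basis = weighted_monomials((22, 29, 49, 50, 75), 225)
print(len(basis))   # 7 monomial terms, as in the example above
```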
For 7549 of the 7555 Calabi-Yaus selected in this way, the topological properties of the corresponding links were calculated: namely the Sasakian Hodge numbers $\{h^{3,0}, h^{2,1}\}$, from Theorem 2, and the CN invariant, from (4). It is worth emphasising that, since the list of weight vectors which lead to complex 3-dimensional Calabi-Yaus is finite, and since the topological invariants computed are conjectured to be identical for all Calabi-Yau polynomials with the same weight vector (via Conjecture 7, inspired by the initial empirical observations exemplified in §3.3), the data generated for these 7555 manifolds would be exhaustive for this link construction.²
The polynomial generation and topological invariant computations were performed in sagemath [59], with the help of macaulay2 [60] and singular [61]. Computation of each of the topological invariants required the respective Gröbner bases of the Calabi-Yau polynomials; these bases are notoriously expensive to compute, with at worst doubly-exponential time complexity [62], and took $\sim 100{,}000$ core hours on our High-Performance Computing (HPC) cluster. Hence, as a side product of these computational efforts, the Gröbner basis for a selection of the Calabi-Yau polynomials considered (one for each possible weight vector) is provided, along with the corresponding topological quantities, on this work's GitHub.
The distribution of the lengths of the Gröbner bases for 7549 out of the 7555 Calabi-Yau polynomials is shown in Figure 1. Due to the non-trivial connection between weights and basis length, and the importance of the basis length in determining whether invariant computation is even feasible, the prediction of Gröbner basis lengths was independently investigated, as detailed in §4.1.

Sasakian Hodge Numbers
As outlined in §2.2, the computation of the Hodge numbers $h^{3,0}$ and $h^{2,1}$ associated with each weighted homogeneous polynomial is done by an algorithmic implementation of the explicit formula in Theorem 2, from [41]:

Algorithm 1: Computation of Sasakian Hodge numbers via Theorem 2.
Require: $f(z_1, \dots, z_5)$, a homogeneous polynomial on $\mathbb{C}^5$.
Require: $\mathbf{w} = (w_1, \dots, w_5)$, the weight vector associated with the polynomial $f$.
Ensure: $[h^{3,0}, h^{2,1}]$, the Sasakian Hodge numbers associated to $(f, \mathbf{w})$.

Step 4 of Algorithm 1, the Gröbner basis generation, corresponds to a well-known super-exponential (hard) routine [63, 64]. In order to perform the computations within a feasible time for all 7555 polynomials³, we implemented a parallel version of Algorithm 1 using sagemath [59] and its built-in interface to singular [61], and executed the job on an HPC cluster.
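To illustrate the graded-dimension count underlying Theorem 2, the following sympy sketch treats the Fermat quintic with unit weights $\mathbf{w} = (1,1,1,1,1)$ and $d = 5$, a toy stand-in for the paper's sagemath/singular pipeline: it computes the Gröbner basis of the Jacobian ideal and counts the standard monomials in the degree-$\ell$ piece of the Milnor algebra, with $\ell = pd = 10$ for $p = 2$:

```python
from itertools import product
import sympy as sp

z = sp.symbols('z1:6')
f = sum(zi**5 for zi in z)                 # Fermat quintic: w = (1,1,1,1,1), d = 5
J = [sp.diff(f, zi) for zi in z]           # generators of the Jacobian ideal J_f
G = sp.groebner(J, *z, order='grevlex')    # Groebner basis (the expensive step)

# Exponent vectors of the leading monomials of the basis elements.
lead = [sp.Poly(sp.LM(g, *z, order='grevlex'), *z).monoms()[0] for g in G.exprs]

def dim_milnor_piece(ell):
    """Dimension of the degree-ell graded piece of M_f = C[z]/J_f, counted
    as the standard monomials (those divisible by no leading monomial)."""
    dim = 0
    for e in product(range(ell + 1), repeat=5):
        if sum(e) == ell and not any(all(ei >= li for ei, li in zip(e, lm))
                                     for lm in lead):
            dim += 1
    return dim

print(dim_milnor_piece(10))   # 101
```

Here the degree-10 count equals 101, the familiar $h^{2,1}$ of the quintic, since the Milnor algebra of the Fermat quintic is spanned by monomials with all exponents at most 3 and its Poincaré polynomial is palindromic.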
The Sasakian $h^{3,0}$ values computed for the 7549 Calabi-Yau links all take the value 1, matching the value known for all Calabi-Yau 3-folds, which corresponds to the unique holomorphic volume form. The Sasakian $h^{2,1}$ values range from 1 to 416; their frequency distribution is shown in Figure 2. Despite the similar structure to the distribution of Gröbner basis lengths, we note that there is only a mild positive correlation, with PMCC $\sim 0.65$.
Due to the aforementioned successes of ML in predicting the Hodge numbers of these Calabi-Yaus [1, 8], it is natural to consider how this performance extrapolates to Sasakian Hodge numbers, which will be addressed in §4.2. To compare directly the CY Hodge numbers with the Sasakian Hodge numbers of the links built from the same polynomials, a cross-plot of the respective $h^{2,1}$ values is given in Figure 3. This plot shows that these topological invariants are strongly correlated (PMCC $\sim 0.99$), and that the Sasakian Hodge number is bounded above by the CY Hodge number, suggesting the following mathematical conjecture:

Conjecture 8. The Sasakian Hodge number $h^{2,1}_S$ of a Calabi-Yau link is bounded above by the Hodge number $h^{2,1}_{CY}$ of the Calabi-Yau 3-fold built from the same $\mathbf{w}$-homogeneous polynomial: $h^{2,1}_S \leq h^{2,1}_{CY}$.

We note that the analogous bound also technically holds for $1 = h^{3,0}_S \leq h^{3,0}_{CY} = 1$, as it may well for other yet uncomputed Sasakian Hodge numbers. Hence, from the successes in previous work on learning Calabi-Yau Hodge numbers [1, 8], and this strong correlation with Sasakian Hodge numbers, the investigation into their ML prediction is well-motivated.

³ Of which only 6 were considered timed-out for this letter.

Crowley-Nordström invariant
To compute the CN invariant for polynomials in our dataset, we modify a procedure developed and described in [40], which utilises Steenbrink's signature theorem. The CN invariant of a link was computed in [40] in terms of its degree and weights, along with the signature $(\mu_-, \mu_0, \mu_+)$ of the intersection form on $H^4(V_f, \mathbb{R})$; Steenbrink [48] proved that this signature can be computed combinatorially from the degree and weights. In [40], this procedure was originally implemented as two separate scripts, one in singular and one in Mathematica [65]. We improve upon this by combining those into a single python script. We are then able to take advantage of parallel processing, pooling, and the powerful computational resources of our HPC to compute the CN invariant for 7549 out of the 7555 Calabi-Yau links.
The CN invariants computed fully span the range of possible values, which are odd integers from 1 to 47, cf. [40, Proposition 3.2]. Their frequency distribution is shown in Figure 4, which exhibits an unexpected periodicity of 12 in the most populous invariant values ($\sim 500$). In particular, we note the occurrence of the CN invariants 27 and 35, where previous work had not identified examples in these topological classes [40]. Below we provide an explicit example of a Calabi-Yau polynomial that leads to a link in each of these classes (noting the repetition of the example considered earlier in §3): CN: 27, weights: $(22, 29, 49, 50, 75)$, polynomial:

Explicit Weak R-Equivalence
To corroborate the weak R-equivalence predicted in Conjecture 7, we show the observed behaviour for the previously selected example, as stated in (15).
In performing the checks of weak R-equivalence, we considered 10 permutations of the example weight system, and 50 polynomials per permutation (with general integer coefficients in the range $(0, 100)$, such that the singular locus dimension is still 0; here, 100 was selected as it is the bound set by the prime 101 used for the coefficient ring characteristic). In each case, the computed CN invariant was $\nu = 27$, and $(h^{3,0}_S, h^{2,1}_S) = (1, 2)$. We observe that, while these values of the invariants were the same, the Gröbner basis lengths changed among different weight permutations (but were the same for different polynomials with the same weight system permutation). This behaviour was expected, since the permutation of weights effectively amounts to a relabelling of the coordinates.
In addition to running these checks for the quoted example, the same procedure was repeated for 100 weight systems randomly selected from the database (selecting those with generally shorter polynomials, for computational efficiency), again considering 50 polynomials per weight system, and in all cases the weak R-equivalence was verified. Code to run these checks in general scenarios is also made available in this article's GitHub. Below are two example weight permutations of (15), each with two respective Calabi-Yau polynomials. These were all included in the explicit checks above and hence lead to the same topological invariants.

Machine Learning
In order to investigate the efficacy of ML techniques in learning the topological invariants of this dataset of Calabi-Yau links, NNs were chosen as the prototypical tool from supervised learning. Since the output invariants take a large range of values in each case, the NNs were set up for a regression-style problem. The NNs used the same architecture in each case: neuron layer sizes of (16, 32, 16), ReLU activation, trained on an MSE loss using an Adam optimiser. These layer sizes and the other hyperparameters were set after some heuristic tuning for the Gröbner basis ML, then reused for the other investigations for consistency. Each NN hence amounts to a map of the form
$$ \mathrm{NN} = f_4 \circ f_3 \circ f_2 \circ f_1 : \mathbb{R}^5 \to \mathbb{R} , \tag{17} $$
such that each $f_i$ acts via a linear then a non-linear action, as $f_i(x) = \mathrm{act}(W_i x + b_i)$. In each case, the regression NNs were trained on 5 different partitions of the dataset into 80:20 train:test splits, in accordance with cross-validation, to provide statistical error on the metrics used to assess learning performance. The NNs were implemented in python with the use of scikit-learn [66].
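A minimal scikit-learn sketch of this cross-validated setup follows; the data here is a synthetic stand-in (the real weight vectors and invariants are on the paper's GitHub), while the hyperparameters follow the text:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.integers(1, 100, size=(500, 5)).astype(float)   # stand-in 'weight vectors'
y = X.sum(axis=1) + rng.normal(0.0, 5.0, size=500)      # stand-in 'invariant'

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Layer sizes (16, 32, 16), ReLU activation, MSE loss, Adam optimiser.
    nn = MLPRegressor(hidden_layer_sizes=(16, 32, 16), activation='relu',
                      solver='adam', max_iter=500, random_state=0)
    nn.fit(X[train], y[train])
    pred = nn.predict(X[test])
    bound = 0.05 * (y[test].max() - y[test].min())       # 'Accuracy' bound
    scores.append((r2_score(y[test], pred),
                   mean_absolute_error(y[test], pred),
                   np.mean(np.abs(y[test] - pred) <= bound)))

r2s, maes, accs = np.array(scores).T
print(f"R2 = {r2s.mean():.3f}, MAE = {maes.mean():.1f}, Acc = {accs.mean():.3f}")
```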
These NNs were trained to predict, from an input of the weight vectors, the respective Calabi-Yau link Gröbner basis length, Sasakian Hodge number $h^{2,1}$, and CN invariant. The first two of these investigations are detailed in the subsequent sections, whilst the final investigation is only briefly detailed here, since this architecture could not learn to predict the CN invariant sufficiently well: performance had an $R^2$ value of $\sim 0.004$. Even reducing the problem to a binary classification between the two most populous classes ($\nu = 1$ and $\nu = 25$) did not lead to accuracies much above 0.5, indicating no significant learning and highlighting the highly non-trivial dependency of this invariant on the input polynomial and weight data, despite our computations showing the invariant depends only on the weight data, in accordance with Conjecture 7.

Gröbner Basis Length
By a significant margin, the computational bottleneck (in terms of both time and memory) of the invariant calculations was the generation of the Gröbner basis. Many initial runs failed from memory overload at this step for specific Calabi-Yau links. Through trial and error, we diverted our resources to allocate more computational power to the harder cases with larger Gröbner bases. However, it would have been substantially more efficient to have had approximately accurate predictions of which links would require more computational power.
This problem again lends itself to ML, where a simple regression model can provide quick estimates of Gröbner basis length, and thus guide the subsequent allocation of computational resources. Previously, ML methods have been used to help optimise specific steps of the Gröbner basis algorithm [67, 68], and to decide when a Gröbner basis would be useful [69]. Yet only recently has ML been used to predict Gröbner basis properties directly; e.g. in [70] predictions of Gröbner basis length reached $R^2$ scores $\sim 0.4$ for binomial ideals of 2 terms.
For our specific construction the polynomials have many more than 2 terms (the longest polynomial having 680), with 5 generators per ideal, one for each partial derivative of the polynomial. Therefore it is prudent to investigate the performance of regression NNs in predicting Gröbner basis length for our Calabi-Yau links. These NNs used the same hyperparameters as previously specified in (17), with neuron layer sizes (16, 32, 16), ReLU activation, training on an MSE loss with an Adam optimiser on an 80:20 train:test data split. Performance measures for the learning were:
$$ R^2 = 0.964 \pm 0.002 , \quad \mathrm{MAE} = 122 \pm 2 , \quad \mathrm{Accuracy} = 0.947 \pm 0.005 , \tag{18} $$
showing excellent performance, especially when comparing the MAE score to the range of Gröbner basis lengths shown in Figure 1. We also note that whilst the number of monomial terms in a polynomial does correlate with Gröbner basis length, the correlation is not strong (PMCC $\sim 0.66$), and hence ML is considerably more useful in estimating the length of a Gröbner basis. These results corroborate the motivation to use ML in improving computational efficiency, particularly here in the application of calculating topological invariants; perhaps the architectures exploit the fact that the data represents similar geometries to aid learning in this case. Inspired by this success, the authors hope to extend, in future work, the application of ML to learning more properties of Gröbner bases, as well as the basis elements directly.

Sasakian Hodge Numbers
Extending work where NNs have shown surprising success in predicting Hodge numbers of Calabi-Yau manifolds, we now investigate their success in predicting the Sasakian Hodge number h^{2,1}, whose computation was described in §3.1.
We use the same NN regressor architecture described in (17): neuron layer sizes (16, 32, 16), ReLU activation, training on an MSE loss with an Adam optimiser on an 80:20 train:test data split. The h^{2,1} values for the 7549 computed Calabi-Yau links were learnt with equally strong performance measures, exemplifying the efficacy of ML methods in predicting more subtle topological parameters.
Comparing the predicted h^{2,1} outputs of a trained NN to the h^{2,1}_{CY} values of the input weight system (instead of the intended h^{2,1}_S values) produces lower performance scores across the dataset: R^2 = 0.915, MAE = 9.88, Accuracy = 0.912. This reassures us that the NNs are learning the Sasakian topology rather than the correlated (and well-learnt in previous work) Calabi-Yau base topology.
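The sanity check above amounts to scoring one set of predictions against two candidate target vectors. A minimal sketch, with illustrative synthetic arrays in place of the real Hodge-number data: predictions built to sit close to one target and only loosely near a correlated one should score visibly better on the former.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error

# Illustrative stand-in arrays: a model's predictions, the intended
# Sasakian h^{2,1} targets, and correlated Calabi-Yau h^{2,1} values.
rng = np.random.default_rng(1)
h21_sasakian = rng.integers(0, 300, size=500).astype(float)
predictions  = h21_sasakian + rng.normal(0, 5, size=500)    # near Sasakian target
h21_cy       = h21_sasakian + rng.normal(0, 30, size=500)   # only correlated

for name, target in [("Sasakian", h21_sasakian), ("Calabi-Yau", h21_cy)]:
    print(f"{name}: R^2 = {r2_score(target, predictions):.3f}, "
          f"MAE = {mean_absolute_error(target, predictions):.2f}")
```

A larger gap between the two R^2 scores is what supports the claim that the network learns the Sasakian rather than the Calabi-Yau topology.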

Sasakian Hodge Symbolic Regression
Motivated by the highly accurate regression results, one is led to expect these NN functions to approximate a true relation between the Sasakian Hodge numbers and the weights used to define the Calabi-Yau links. In spirit, this could be a phenomenon similar to what was observed for the Poincaré polynomial, cf. [71].
To distil some mathematical insight about the function space of appropriate approximations of the Sasakian Hodge numbers from the weights, as equivalently and independently probed by the NN architectures, we implement symbolic regression using PySR [72] to provide interpretable relations between inputs and outputs, which should help guide any investigation of this direct relation bypassing the Milnor algebra Gröbner basis computation. The scripts used for the symbolic regression analysis are also available at this work's GitHub. The highest-performing equation on the independent test data (10% of the dataset) proposed by PySR to model the Sasakian h^{2,1}_S is given in (20); this expression may shed some light on the structure of the true function. The h^{2,1}_S value predicted by (20) for each of the Calabi-Yau links considered is plotted against the true values in Figure 5. Additionally the plot shows the equivalent predictions of a trained NN, whose predictions are less accurate, particularly at higher invariant values.
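Since PySR itself requires a Julia backend, the following self-contained sketch instead illustrates the model-selection loop that a symbolic regressor automates: fit each formula in a small hand-written candidate library by least squares and keep the one with the best hold-out R^2. The data and candidate formulas here are entirely hypothetical stand-ins, not the relation found in (20).

```python
import numpy as np

# Synthetic stand-in data: 5-component "weight systems" and a target
# built (for illustration) as a noisy multiple of the weight sum.
rng = np.random.default_rng(2)
w = rng.integers(1, 40, size=(400, 5)).astype(float)
y = 2.0 * w.sum(axis=1) + rng.normal(0, 1, size=400)

# Hand-written library of candidate closed-form features f(w); the
# symbolic regressor searches such expressions automatically.
candidates = {
    "sum(w)":        w.sum(axis=1),
    "prod(w)^(1/2)": np.sqrt(w.prod(axis=1)),
    "max(w)":        w.max(axis=1),
}

train, test = np.arange(360), np.arange(360, 400)

def score(f):
    # Least-squares coefficient a in y ~ a * f(w), scored on hold-out R^2.
    a = (y[train] @ f[train]) / (f[train] @ f[train])
    resid = y[test] - a * f[test]
    return 1 - resid.var() / y[test].var()

best = max(candidates, key=lambda k: score(candidates[k]))
print("best candidate:", best, "| hold-out R^2 =", round(score(candidates[best]), 3))
```

In actual use, `PySRRegressor.fit` performs this search over a far richer expression space defined by user-chosen binary and unary operators.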

Predicting the Remaining Invariants
As mentioned in §3, the topological invariants have been computed for all but 6 of the 7555 weight systems, for which HPC walltime was successively exceeded due to the enormous computational cost. In practice, these computations failed at walltimes of 1000 core-hours using > 200 GB of RAM. However, as demonstrated in the previous sections, machine learning architectures have shown great success in modelling these topological invariants. It therefore makes sense to apply the trained models to predict the invariant values for the remaining (currently computing) six weight systems. The prediction results are shown in Table 1.
Due to the excessive computation time for these remaining 6 weight systems, we expect their Gröbner basis lengths to lie in the tail of the distribution in Figure 1. The extrapolation capabilities of the NN models were briefly tested by training the same NN architecture to predict Gröbner basis length from the input weight system on the 95% of the dataset pairs with shortest lengths, then testing on the remaining 5% (378 bases). The NN had lower performance scores: R^2 = 0.594, MAE = 452, Accuracy = 0.484, highlighting the care needed when interpreting statistical methods for out-of-distribution predictions. However, the predictions do correctly identify these test weight systems as having long bases: the minimum predicted basis length on the test data is 2391, far exceeding the mean of the full dataset (1290). Whilst the NN models predict large lengths for the remaining weight systems' Gröbner bases in Table 1, with values significantly above the average, they do not predict values exceeding the current highest value of 7299. These predictions are high, which is useful for our implementation, but less likely to be exact.
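The extrapolation protocol above — train on the 95% of pairs with shortest bases, test on the longest 5% — can be sketched as follows, again with synthetic stand-in data rather than the paper's dataset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in for (weight system, Grobner basis length) pairs.
rng = np.random.default_rng(3)
X = rng.integers(1, 50, size=(2000, 5)).astype(float)
y = X.sum(axis=1) ** 1.5 + rng.normal(0, 10, size=2000)

# Sort by target and split: train on the 95% shortest bases, test on
# the longest 5%, probing out-of-distribution extrapolation.
order = np.argsort(y)
cut = int(0.95 * len(y))
tr, te = order[:cut], order[cut:]

model = MLPRegressor(hidden_layer_sizes=(16, 32, 16), activation="relu",
                     solver="adam", max_iter=2000, random_state=0).fit(X[tr], y[tr])
pred = model.predict(X[te])

print(f"hold-out R^2 = {r2_score(y[te], pred):.3f} | "
      f"min predicted length = {pred.min():.0f} | "
      f"train mean = {y[tr].mean():.0f}")
```

The diagnostic of interest is whether the minimum prediction on the held-out tail still clearly exceeds the training mean, mirroring the check performed in the text.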
Conversely, the Sasakian Hodge number values have been shown to correlate only loosely with the Gröbner basis length, so there is only loose intuition that these Hodge numbers should be high. Our Conjecture 8 provides a bound on the Sasakian h^{2,1} values, set by the respective Calabi-Yau h^{2,1} values: 348, 387, 462, 491, 246, 275. Interestingly, all predictions from both the NN and the symbolic regression satisfy this bound. Furthermore, the better-performing symbolic regression model (20) predicts Hodge numbers very close to this bound, which is sensible behaviour, as demonstrated by Figure 3.
Despite the NN and symbolic regressor predictions differing considerably in some cases, mathematicians and physicists are pragmatically often most interested in discerning when Hodge numbers are particularly low, such as 0 or 1, as indications of phenomena such as exactness, rigidity or unobstructedness. Hence, these learnt models can very quickly provide confidence in whether this is the case, making predictions which can test conjectures or corroborate theoretical expectations.
Our own difficulty in computing the topological invariants for these final 6 cases illustrates the potential of these machine learning models. Where direct computation is not feasible, ML methods can provide predictions for quantities of interest (such as our CY link topological invariants) with statistical confidence, providing invaluable insight to guide refinement of the computation and further the progress of academic research.

Conclusion
In this work, real 7-dimensional Calabi-Yau links were constructed from complex 3-dimensional Calabi-Yau hypersurfaces in 7549 of the 7555 complex 4-dimensional weighted projective spaces that admit them. It was observed, and conjectured, that any Calabi-Yau hypersurface with the correct singularity properties leads to the same Sasakian Hodge number h^{2,1} and CN invariant; once computed for the final 6 evasive cases, this will produce an exhaustive list of these invariants for the Calabi-Yau link construction.
The datasets of these invariants were statistically analysed, and NN regressors were used to successfully predict the respective Sasakian h^{2,1} values from the ambient P^4(w) weights alone. The same architectures were not successful in predicting the CN invariant, but did show surprising success in predicting the length of the Gröbner basis from the weights. These regressors can hence be used to streamline future computation of Gröbner bases by informing efficient computational resource allocation.
The exhaustive list of Calabi-Yau link data, as well as the Python scripts used for their analysis and ML, are made available at this work's corresponding GitHub. Avenues for future work include studying whether one can obtain Sasakian structures on more general links built from general toric varieties rather than weighted projective spaces, as well as applying ML techniques to other Gröbner basis properties to further streamline their computation.

Appendix A. Sasakian Hodge decomposition
We establish some notation and elementary facts about the complexified tangent bundle, largely adapted from [73]. The contact structure splits the tangent bundle as T M = B ⊕ N_ξ, where B = ker(η) and N_ξ is the real line bundle spanned by the Reeb field ξ. The transverse complex structure Φ satisfies (Φ|_B)^2 = −1, so the eigenvalues of the complexified operator Φ_C are ±√−1. The complexification B_C := B ⊗_R C splits as B_C = B^{1,0} ⊕ B^{0,1}, so we obtain a decomposition of vector bundles

Λ^k(B_C)^* = ⊕_{p+q=k} Λ^p(B^{1,0})^* ⊗ Λ^q(B^{0,1})^*.

This induces the decomposition of spaces of sections

Ω^k_B(M) = ⊕_{p+q=k} Ω^{p,q}_B(M), where Ω^{p,q}_B(M) := Γ(M, Λ^p(B^{1,0})^* ⊗ Λ^q(B^{0,1})^*).

Let us briefly describe the transverse complex geometry on a Sasakian manifold (M, η, ξ, g, Φ), following [45]

Figure 5: ML architecture predictions of the h^{2,1}_S values against the true values, for the 7549 Calabi-Yau link constructions considered, from: (a) a trained NN; and (b) the best symbolic regression model of equation (20).
and references therein. Relative to the Reeb foliation, the usual Hodge star induces a transverse Hodge star operator *_T : Ω^k_B(M) → Ω^{m−(k+1)}_B(M)