Bayesian Neural Networks for Fast SUSY Predictions

One of the goals of current particle physics research is to obtain evidence for new physics, that is, physics beyond the Standard Model (BSM), at accelerators such as the Large Hadron Collider (LHC) at CERN. The searches for new physics are often guided by BSM theories that depend on many unknown parameters, which, in some cases, makes testing their predictions difficult. In this paper, machine learning is used to model the mapping from the parameter space of the phenomenological Minimal Supersymmetric Standard Model (pMSSM), a BSM theory with 19 free parameters, to some of its predictions. Bayesian neural networks are used to predict cross sections for arbitrary pMSSM parameter points, the mass of the associated lightest neutral Higgs boson, and the theoretical viability of the parameter points. All three quantities are modeled with average percent errors of 3.34% or less and in a time significantly shorter than is possible with the supersymmetry codes from which the results are derived. These results are a further demonstration of the potential for machine learning to model accurately the mapping from the high dimensional spaces of BSM theories to their predictions.


Introduction
The discovery of the Higgs boson at the LHC in 2012 [1,2] marked the end of the search for the Standard Model (SM) particles. With the completion of the SM, physicists' focus over the next decade or so is on understanding the physics of electroweak symmetry breaking [3] by following two broad strategies: comparing precision measurements of Higgs boson properties with SM predictions [4] and conducting direct searches for physics beyond the SM (BSM). This marks a methodological change in particle physics, moving from a well-posed search for particles predicted by a well-tested theory to searching for any evidence of new physics guided, in part, by the predictions of BSM theories. A popular group of candidate theories for BSM physics are the supersymmetric (SUSY) theories. These theories provide potential solutions to the hierarchy problem, permit gauge coupling unification at high energies [5], and provide a promising candidate for a dark matter particle [6].
The simplest formulation of supersymmetry consistent with the SM is the Minimal Supersymmetric Standard Model (MSSM). The MSSM uses the same gauge group as the SM and assumes minimal particle content and R-parity conservation. Despite being a minimal model, the MSSM has 105 free parameters [7] beyond those of the SM, making a thorough exploration of the model challenging. The typical approach is to select a set of parameter points within an accessible subset of the parameter space and compute observables, such as cross sections, for each point. While meaningful results have been obtained from approaches like this [8], the expensive computations limit our ability to investigate the theoretical parameter spaces thoroughly and limit our ability to use standard likelihood methods [9] to make inferences about the parameter spaces. Through the methods outlined in this paper, we show that an accurate, fast mapping of BSM theory parameters to predictions can be constructed based on recently available tools that implement sampling via Hamiltonian Monte Carlo (HMC).
A simplification of the full MSSM, which is also a prototypical example of a BSM theory, is the phenomenological MSSM (pMSSM) [10,11,12,13,14,15]. The pMSSM has no new sources of CP violation, no flavor changing neutral currents, and includes first and second generation universality. These assumptions, which are consistent with experimental facts, reduce the 105 free parameters to just 19 [15]. The large reduction in the number of free parameters is useful in that it renders calculations with this model feasible, while the number remains large enough to make the pMSSM a good proxy of the MSSM. The parameter space is also complex enough to highlight the advantages of the use of machine learning for the rapid calculation of observables. While this is the theory studied in this paper, the pMSSM is merely an interesting example of a high-dimensional theory that illustrates the proposed technique for fast predictions.
Machine learning has been successfully applied to several problems in high energy physics [16] and SUSY in particular. In [17], neural networks were shown to be capable of determining restrictions on BSM parameter spaces given experimental data, and in [18] random forests were used to classify pMSSM parameter points as excluded or not excluded by ATLAS and CMS searches. In [19], neural networks are used to calculate the profile likelihood ratios for a variation of the pMSSM, while in [20], an alternative to random sampling, called active learning, is used to explore the pMSSM parameter space. Additionally, Bayesian neural networks, the type of machine learning used in this paper, along with boosted decision trees, were used in [21,22,23] to aid in the detection of single top quarks at the Tevatron as well as in neutrino background and signal discrimination [24]. For recent reviews of the use of machine learning in the physical sciences see, for example, [16,25] and the recently released machine learning inference toolkit MadMiner [26].
In this paper, we use Bayesian neural networks [27] to model the mapping of the parameter space of the pMSSM to its predictions. The program SOFTSUSY [28] is used to calculate particle spectra and decay chains, while the program Prospino2 is used to calculate cross sections for neutralino chargino pair production [29] at the LHC at 14 TeV [30]. These programs encode algorithms that give accurate predictions, but only point by point in the parameter space. However, given predictions computed at a large number of parameter points, we show how Bayesian neural networks (BNNs) can be used to create prediction functions that map parameters to predictions. This is demonstrated for three different prediction functions:
1. A map from parameters to a classification of whether a given pMSSM parameter point is physically or numerically viable as determined by SOFTSUSY.
2. A map from parameters to the predicted cross section for neutralino chargino production.
3. A map from parameters to the predicted light neutral Higgs boson mass.
With these functions, it is possible to assess quickly whether a pMSSM parameter point is valid, whether it yields a neutral Higgs boson mass consistent with the observed value, and to predict the cross section for neutralino chargino production.

Mathematical Details
Our goal is to predict the physical or numerical viability of pMSSM parameter points, predict the neutralino chargino production cross section, and predict the mass of the lightest neutral Higgs boson, and to do so as accurately and rapidly as possible. The pMSSM parameters are listed in Table 1. In this paper, we model these functions as BNNs [27].

Bayesian neural networks
In the Bayesian approach to neural networks [27], the goal is to infer a probability density p(θ | D) over the parameter space of the network given training data D. For the pMSSM, the training data D = {(t_k, x_k)} comprise targets t_k associated with the pMSSM parameter points x_k. A Bayesian neural network is a functional of the posterior density,

p(θ | D) = p(D | θ) π(θ) / p(D),   (1)

E[F] = ∫ F(x, θ) p(θ | D) dθ,   (2)

where p(D | θ) is the likelihood of the data, π(θ) a prior density, and F(x, θ) a function whose average over the parameter space is desired. For example, setting F(x, θ) = δ(y − f(x, θ)), where f(x, θ) is a neural network, yields the predictive density,

p(y | x, D) = ∫ δ(y − f(x, θ)) p(θ | D) dθ.   (3)

In practice, the posterior density is represented by an ensemble of neural networks whose parameters are sampled from the posterior density using the Hamiltonian Monte Carlo method [27,31]. Since the method approximates Eq. (3), it automatically furnishes an estimate of the uncertainty in the predictions y from some measure of the width of the predictive density.
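As an illustrative sketch (not the actual BNN implementation), the ensemble approximation of Eq. (3) can be summarized as follows; the toy surrogate make_network, the parameter values, and the ensemble size are hypothetical stand-ins for networks whose parameters were sampled by HMC.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_network(theta):
    # Toy surrogate for f(x, theta): a 1D function whose slope is the "parameter".
    return lambda x: theta * x

# The parameter draws below play the role of HMC samples from p(theta | D).
thetas = rng.normal(loc=2.0, scale=0.1, size=135)
ensemble = [make_network(t) for t in thetas]

def predictive_summary(x, ensemble):
    """Point-cloud approximation of p(y | x, D): mean and width over the ensemble."""
    ys = np.array([f(x) for f in ensemble])
    return ys.mean(), ys.std()

mean, std = predictive_summary(3.0, ensemble)  # prediction and its uncertainty
```

The mean serves as the point prediction and the standard deviation as its uncertainty, exactly as described in the text.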

Likelihood and Prior
A crucial step in constructing a Bayesian neural network is modeling the joint probability p(t, x, θ), which, as noted above, is usually factorized into a likelihood function p(D | θ) and a prior π(θ). The likelihood is the product ∏_j p(t_j | x_j, θ) p(x_j | θ) over all sampled pMSSM parameter points. However, if we make the reasonable assumption that the pMSSM parameters x are independent of the network parameters θ, that is, p(x_j | θ) = p(x_j), the likelihood function to be modeled is p(t | x, θ).
Likelihood. For the classifier, we take the likelihood function p(t | x, θ) to be a Bernoulli density with targets t = 1 and 0 for the viable (valid) and non-viable (invalid) pMSSM parameter points, respectively. For the regression models, the targets are either the cross sections computed using Prospino2 or the Higgs boson masses computed using SOFTSUSY. For regression, the likelihood function p(t | x, θ) is chosen to be a normal density N(t; f(x, θ), σ) with mean f(x, θ) and an unspecified variance σ².
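Purely for illustration, the two likelihood choices can be written as the negative log-likelihood terms that would enter an HMC potential energy; this is a hedged sketch, not code from the package of [35].

```python
import numpy as np

def bernoulli_nll(t, p):
    """Classifier likelihood: targets t in {0, 1}, p = sigmoid network output."""
    eps = 1e-12  # guard against log(0)
    return -np.sum(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

def gaussian_nll(t, f, sigma):
    """Regression likelihood: normal density N(t; f(x, theta), sigma)."""
    return np.sum(0.5 * ((t - f) / sigma) ** 2
                  + np.log(sigma) + 0.5 * np.log(2.0 * np.pi))
```

In the regression case σ is unspecified and is treated as an additional parameter to be inferred.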
If one neglects theoretical uncertainties, codes such as Prospino2 and SOFTSUSY provide deterministic predictions t = P(x). Consequently, the interpretation of the probabilistic mapping from x to t is somewhat subtle. Since the predictions are noiseless, strictly speaking the likelihood describing t and x is the δ-function δ(t − P(x)), which is approximated using p(t | x, θ) = N(t; f(x, θ), σ).
Prior. Choosing a prior over a high-dimensional parameter space is an extremely difficult problem. It is particularly challenging for functions as complex as neural networks, in which the parameters θ have no obvious meaning. For such cases, well-motivated methods have been proposed to construct so-called objective priors (see, for example, [32]). However, these methods are computationally prohibitive for high-dimensional spaces and are not guaranteed to yield satisfactory results. Therefore, in practice, the prior is chosen for computational simplicity and its ability to yield satisfactory results. Furthermore, by using a hierarchical prior whose parameters are constrained by a hyper-prior, increased flexibility is introduced that makes it possible to tune the prior by varying the hyper-parameters in order to improve the quality of the results. The overall prior π(θ) is a product of the priors for all network parameters and the associated hyper-priors.
An obvious choice for a prior is a product of zero-mean normal densities, one for each neural network parameter. This choice is equivalent to imposing L2 regularization in standard neural network training (see, for example, [33]). However, Neal [27] showed (see, also, [34]) that due to the central limit theorem Bayesian neural networks with normal priors tend to converge to a Gaussian process prior in the limit of an infinite number of hidden nodes. Since this is not necessarily the desired behavior, in this work we have chosen to use Cauchy priors, which alter the behavior of large Bayesian neural networks. For additional details about the prior used see [35].
There is another prior whose effect we need to consider. As noted in the next section, x ∼ p(x), where p(x) is a flat prior in the bounded region of the pMSSM parameter space shown in Table 1.
Two possible concerns come to mind.The first is that the experimentally accessible region of the pMSSM may not be fully covered by the region listed in Table 1.The second is that our results may be sensitive to the choice of the sampling density p(x).
As noted above, the likelihood p(D | θ) depends on our model p(x | θ) for the pMSSM parameter space sampling density, p(x). However, the posterior density over the network parameter space is given by p(θ | D) = ∏_j p(t_j | x_j, θ) p(x_j | θ) π(θ) / p(D), which, given the assumption p(x | θ) = p(x) and noting that p(D) ∝ ∏_j p(x_j), shows that the explicit dependence on p(x) drops out. Of course, the implicit dependence on p(x) remains through the specific training sample {x_j}. As a consequence, the accuracy of results will depend on the sampling density of the training data. While a uniform sampling of the parameter space is the simplest to implement, it would clearly be better to place more points where they are needed most. One way to do so would be to weight each pMSSM point x_j by a likelihood function that incorporates LHC data.
In addition to the dependence of results on the training data, results also depend on the prior π(θ). In a more thorough study, the effect of this prior on the inferences would be assessed by weighting every neural network in the ensemble by the ratio of the new prior to the one with which the ensemble was generated. The ability to do this in a practical way was not available at the time of writing, but is now available in the Bayesian neural network package used in this work [35].

Data Sets
In this section, we describe the data sets used to construct the prediction functions, that is, the functions mapping pMSSM parameter points to predictions. We use three independent data sets, labeled VPAR, OHIGGS, and XSEC, and follow standard practice by dividing each into three sets: training, validation, and test sets in the percentages 80%, 10%, and 10%, respectively. The first set was used to train the BNN models, the second was used to assess the models' performance during training, and the third (the test set) was used to evaluate the performance of the trained models.
As noted above, our choice of p(x) may miss subsets of the experimentally viable pMSSM points. But, in order to determine the boundary at which the experimental sensitivity drops below a given threshold, careful studies of the predicted signals and associated Standard Model backgrounds, using analyses optimized for different integrated luminosities, would be needed. Such studies would be extremely interesting, but are beyond the scope of this paper. Instead, we restrict our attention to the region of the pMSSM parameter space that has been used in other studies [13,36].
VPAR. This data set consists of 500,000 pMSSM points randomly sampled from the subspace given in Table 1. Each pMSSM parameter is sampled independently from a uniform distribution over its range. For each point, SOFTSUSY is used to compute sparticle masses and decays. Of the points sampled, 60.61% were labeled as invalid and 39.39% as valid.
OHIGGS. This data set consists of 567,597 points with sparticle masses computed using SOFTSUSY. The points were sampled in the same way as for VPAR, but only valid points that yielded a lightest neutral Higgs boson mass between 110 and 130 GeV were kept; this mass window contains the majority of the 591,337 valid points in the full data set and spans a much smaller range.
XSEC. This data set consists of 202,264 pMSSM points with decays computed using SOFTSUSY and cross sections computed at next-to-leading order (NLO) accuracy using Prospino2. The SOFTSUSY calculations were reused from the generation of the OHIGGS data set.

Data preparation
The data sets were prepared for training using several different normalization schemes. For the input data, here the 19 pMSSM parameters, we scaled and shifted each parameter to have zero mean and unit variance. The targets of the VPAR data set were left unchanged since the values are 0 and 1. The targets of the XSEC data set were log normalized, that is, the natural logarithm of the cross sections was computed and the values shifted and scaled to have zero mean and unit variance. This was done because the distribution of the log of the cross section was roughly normal. The targets of the OHIGGS data set were also normalized to have zero mean and unit variance by subtracting the mean and dividing by the standard deviation.
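The normalization schemes just described amount to standardizing the inputs and log-standardizing the XSEC targets; the following sketch uses randomly generated stand-in data, not the actual data sets.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-4000.0, 4000.0, size=(1000, 19))     # stand-in 19 pMSSM parameters
xsec = rng.lognormal(mean=0.0, sigma=2.0, size=1000)  # stand-in cross sections (fb)

# Inputs: shift and scale each parameter to zero mean and unit variance.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# XSEC targets: take the natural log first, then standardize,
# because the log of the cross section is roughly normally distributed.
log_xsec = np.log(xsec)
t_std = (log_xsec - log_xsec.mean()) / log_xsec.std()
```

The same shift-and-scale transformation applied to log_xsec is applied directly to the OHIGGS mass targets.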

Architecture and Training
The Bayesian approach can be applied to any machine learning model. However, because of the computational burden of Markov chain Monte Carlo methods, even when aspects of the calculations can be parallelized, the models used tend to be smaller than the ones trained (that is, fitted) using optimization methods such as stochastic gradient descent. A detailed technical description of the BNN implementation we have used, as well as details of the Hamiltonian Monte Carlo (HMC) sampling method, is given in [35], and the package developed for this work is available at [37]. But, for completeness, we briefly describe the main features of the models we used and their training.
Each model is a fully connected feed-forward neural network with 19 inputs, one for each pMSSM parameter, and 5 linear layers, each with 50 nodes. The output node is a sigmoid function for the classifier and is linear for the regression functions. The activation functions are variations on a ReLU [35]. In order to provide a good starting point for the Hamiltonian Monte Carlo sampling, each model was trained by minimizing the mean squared error for regression and the mean binary cross entropy for classification with a batch size of 32. The training was run for three cycles of 30 epochs each using AMSGRAD [38]. An epoch is a pass over the full training data, while a cycle, in this context, is training with a given learning rate. For regression, the learning rates were 0.01 for the first cycle, 0.001 for the second, and 0.0001 for the third, while for classification the learning rates were a factor of 10 smaller. For each cycle, the network with the smallest validation error served as the starting point for the next cycle, and the best network of the last cycle was used as the starting point for the HMC sampling.
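A minimal numpy sketch of this architecture (19 inputs, fully connected layers of 50 nodes, a ReLU-like activation, and a sigmoid or linear output) is given below; the random weights stand in for a single HMC sample of θ, and none of this is the actual TensorBNN code.

```python
import numpy as np

rng = np.random.default_rng(2)

def init_params(sizes):
    # One (weights, biases) pair per layer; random values are placeholders.
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, params, classifier=False):
    h = x
    for W, b in params[:-1]:
        h = np.maximum(0.0, h @ W + b)     # ReLU-style hidden activations
    W, b = params[-1]
    out = h @ W + b                        # linear output node
    return 1.0 / (1.0 + np.exp(-out)) if classifier else out

# 19 pMSSM inputs -> hidden layers of 50 nodes -> 1 output
params = init_params([19, 50, 50, 50, 50, 50, 1])
y = forward(np.zeros((1, 19)), params, classifier=True)
```

With classifier=True the output lies in (0, 1) and can be thresholded as described in the Results section; with classifier=False the output is an unbounded regression value.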
Network sampling was run until there was evidence of convergence, which we took to mean no statistically significant changes in the predictive distributions. For example, for the cross section regression function the sampling was run for 13,500 epochs after a burn-in of 100 epochs. Here an epoch is a sequence of deterministic steps through the neural network parameter space governed by a symplectic approximation to Hamilton's equations. (For details, see [35].) In order to produce an approximately iid sample of networks for inference, the original sample was down-sampled by a factor of 100, yielding a sample of size 135 for the cross section regression function, and by a factor of 10 for the other two functions. For completeness, Table 2 lists the values of the hyper-parameters we used in the HMC sampling, but we defer to [35] for an explanation of these quantities. The training was done using an Nvidia RTX-2080 Ti GPU as well as an Intel Xeon Silver 4210 CPU, and the different networks took on the order of two to five days to train.

Metrics
The ensemble of neural networks ŷ = f(x, θ_j) constitutes a point cloud approximation of the predictive distribution p(y | x, D), which is the full Bayesian solution to the mapping problems. From p(y | x, D) useful summaries can be computed. For example, we can take the mean and standard deviation of p(y | x, D) as estimates of the prediction and the uncertainty in the prediction, respectively. Other commonly used summaries are the mode, median, and credible intervals.
The main advantage of the Bayesian approach to machine learning models, and neural networks in particular, is that it furnishes an estimate of the uncertainty in the output of a model. It is therefore important to assess the reliability of the prediction functions by examining the uncertainty estimates that come with the predictions. Since we are following a Bayesian approach, it would be natural to use Bayesian measures to assess the results of the models. However, we have chosen to follow common practice in machine learning and assess the reliability of our results using frequentist measures. We do so for two reasons. Firstly, any procedure can be treated as a black box whose performance can be assessed using frequentist methods irrespective of what lies within the box. Secondly, frequentist questions, such as how often a model gets something right, are often easy to answer so long as one makes clear what reference ensemble is being used to make the assessments. Frequentist measures are particularly useful when a calibration of a Bayesian procedure may be necessary [39,40], as is the case when aspects of the procedure, such as the choice of prior, are largely the result of pragmatic considerations rather than formal construction from first principles.
Our choice of reliability measures is, therefore, informed by the following considerations. We want metrics that are easy to compute, simple to understand, and commonly used to assess the quality of fitted machine learning models. The quality of the approximations has been assessed using two main metrics:
• the ratio of the mean absolute difference to the target y, E[|ŷ − y|] / y, and
• the relative frequency with which the 3-standard-deviation interval about the mean of p(y | x, D) brackets the true prediction, y.
The second metric provides a frequentist calibration of the Bayesian 3-standard-deviation credible intervals. We are asking for the confidence level of these intervals, which is a purely frequentist measure, if the credible intervals were to be interpreted as approximate confidence intervals. As noted above, the use of such a metric deviates from pristine Bayesian methodology; nevertheless, it accords with a commonly used approach for assessing statistical results whatever their provenance [39,40]. The quality of the classifier was assessed using the standard machine learning measures precision, recall, and F1 [41]. In the current context, the precision is given by

P = R N(V) / [R N(V) + P(+ | V̄) N(V̄)],

where R = P(+ | V) is the recall and N(V) and N(V̄) are the numbers of valid and invalid pMSSM points, respectively. A result is + if a pMSSM point is classified as valid and − otherwise. Note that the recall can be computed from

R = TP / (TP + FN),

where the numerator is the number of true positives and the denominator is the sum of true positives and false negatives. The quantity F1, the harmonic mean

F1 = 2 P R / (P + R),

provides a single number that is a compromise between precision and recall. Recall is the fraction of valid points that are correctly identified and is an intrinsic characteristic of a classifier, whereas precision depends on the ratio N(V̄)/N(V), which is a property of the data set to which the classifier is applied. In particle physics, the quantity obtained by setting N(V̄) = N(V) in the precision, that is, by using a balanced data set, is typically referred to as the discriminant.
In practice, the precision and recall are approximated using the number of true positive predictions, TP, the number of false positive predictions, FP, and the number of false negative predictions, FN, as follows,

P ≈ TP / (TP + FP),   R ≈ TP / (TP + FN).

For a given input feature vector x, here a pMSSM parameter point, a machine learning model would typically provide a single estimate of the quantity it models, while a BNN model provides a point cloud approximation to the full predictive distribution, Eq. (3). We can assess the impact of using a posterior distribution rather than the output from a single network by noting how the metrics change when applied to the mean, the mean minus 3 standard deviations, and the mean plus 3 standard deviations.
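These metrics can be sketched in a few lines of code; the inputs below are illustrative numbers, not results from the paper.

```python
import numpy as np

def percent_error(yhat, y):
    """Mean ratio of the absolute difference to the target, in percent."""
    return np.mean(np.abs(yhat - y) / np.abs(y)) * 100.0

def coverage_3sigma(yhat, sigma, y):
    """Fraction of targets inside the 3-standard-deviation interval."""
    return np.mean(np.abs(yhat - y) <= 3.0 * sigma)

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and their harmonic mean F1, from TP, FP, FN counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2.0 * p * r / (p + r)
```

The same three functions can be evaluated at the mean, the mean minus 3 standard deviations, and the mean plus 3 standard deviations to assess the impact of using the full predictive distribution.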

Results
For each pMSSM prediction function, and a given pMSSM parameter point x, we use the mean of the predictive distribution, p(y | x, D), as an estimate of the corresponding prediction from either SOFTSUSY or Prospino2, while the standard deviation of the predictive distribution is taken as an estimate of the uncertainty in the prediction. Figure 1 shows examples of the predictive distributions for the validity, Higgs boson mass, and log cross section predictions. As stated above, it is these distributions that constitute the full Bayesian solutions rather than the summaries, but the summaries are nevertheless useful. Below, we use them to assess the reliability of these distributions.

The viability classifier (VPAR)
The three performance metrics for the VPAR classifier are given in Table 3. A pMSSM point is classified as valid if the point estimate, the average over the ensemble of networks, exceeds a cutoff of 0.5. Alternatively, one can apply the cutoff to the average plus 3 standard deviations. This causes many points with high uncertainty to be classified as valid and results in a much higher recall score and slightly lower F1 and precision scores. We expect the average over the ensemble of networks to be most useful for attaining the highest classification accuracy, while the average plus 3 standard deviations would be useful in selecting valid points that could serve as input to a program such as SOFTSUSY.
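The two classification rules above can be sketched as follows, with made-up ensemble means and standard deviations standing in for real classifier outputs:

```python
import numpy as np

# Illustrative ensemble summaries for three pMSSM points (not real outputs).
mean = np.array([0.60, 0.30, 0.45])   # ensemble-averaged classifier outputs
std = np.array([0.05, 0.02, 0.10])    # ensemble standard deviations

valid_mean = mean > 0.5                   # rule aimed at highest accuracy
valid_upper = (mean + 3.0 * std) > 0.5    # high-recall preselection rule
```

The third point illustrates the difference: its mean fails the cut, but its high uncertainty pushes the mean-plus-3-standard-deviations rule above the cutoff, so it would be retained for a follow-up SOFTSUSY check.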

The cross section regression function (XSEC)
After training on the VPAR data set, networks were trained on the XSEC data set. For a given pMSSM parameter point, we again take the average, ŷ, of the distribution of network outputs as an estimate of the quantity being modeled, here the predicted cross section in femtobarns. We use the associated standard deviation, σ, to exclude pMSSM parameter points for which log(ŷ − 3σ) > 3, as pMSSM points with cross sections of this size or larger have been excluded at the LHC [42,13]. This cut removed 5.5% of the generated pMSSM points. On average, with this cut, the cross section is estimated with a percentage uncertainty of 3.34% and the true value fell within 3 standard deviations of the estimated cross section 99% of the time. The standard deviation can be used to flag pMSSM points for which the network-based prediction is highly uncertain. If we exclude the 45 pMSSM points with cross sections that differ by an order of magnitude or more between the bounds ŷ − 3σ and ŷ + 3σ, the relative uncertainty in the predictions falls to 3.04%.
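The exclusion cut just described can be sketched as follows; the cross sections and uncertainties are illustrative values in femtobarns (so that e³ ≈ 20 fb sets the scale of the cut), not points from the actual data set.

```python
import numpy as np

yhat = np.array([5.0, 30.0, 100.0, 1.0])   # illustrative predicted cross sections (fb)
sigma = np.array([0.5, 1.0, 2.0, 0.2])     # illustrative predictive std. deviations

# Exclude points whose lower bound yhat - 3*sigma already satisfies log(.) > 3,
# i.e. points confidently above the LHC-excluded cross section scale.
lower = np.maximum(yhat - 3.0 * sigma, 1e-12)  # guard against log of non-positive values
keep = np.log(lower) <= 3.0
kept = yhat[keep]
```

Only points whose entire 3-standard-deviation lower bound lies above the excluded scale are dropped, so highly uncertain points are retained.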
Figure 2 shows the two measures of uncertainty in the BNN predictions: one computed directly from the known errors of the BNN predictions and the other from the estimated standard deviations of the predictive distribution. The bias in the BNN predictions is negligible. However, the point cloud of network outputs overestimates the uncertainty in the BNN predictions. It is also clear that the BNN behaves as expected in that the uncertainty is greater where there are fewer data for training.
When predicting cross sections for many pMSSM points simultaneously on a GPU, a single prediction per network, in the ensemble of networks, took on average 49 nanoseconds for the prediction alone and 94 nanoseconds when the computational overhead is included. The BNN for the above results used an ensemble of 135 networks, which implies a computation rate of 12.7 microseconds per prediction including the overhead. This is approximately 16.5 million times faster than running Prospino2, which took about 3.5 minutes per NLO prediction. Sampling via any Markov chain Monte Carlo method, including Hamiltonian Monte Carlo, produces a sequence of correlated predictions. While one may estimate mean quantities using such predictions, estimating uncertainties from these points requires more care because of the correlation. It is usually simpler to down-sample to a set of approximately iid points by keeping every nth point along a chain, where, ideally, the integer n is somewhat larger than the correlation length along the chain. For example, the ensemble size of 135 for the cross section prediction function is a compromise between the desire to have an ensemble large enough to produce distributions such as those in Fig. 1 and small enough to permit very fast predictions.
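In code, thinning a correlated HMC chain to an approximately iid ensemble reduces to keeping every nth draw; the array below is a stand-in for the 13,500 post-burn-in epochs mentioned above.

```python
import numpy as np

chain = np.arange(13500)  # stand-in for 13,500 correlated post-burn-in draws
n = 100                   # thinning factor; ideally larger than the correlation length
thinned = chain[::n]      # approximately iid sample used for fast predictions
```

A thinning factor of 100 yields exactly the ensemble of 135 networks quoted for the cross section prediction function.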
When cross sections are required for a large number of pMSSM points, the leading order prediction is frequently used as it is faster to compute. However, on average, it is approximately 19% less precise than the prediction at next-to-leading order (NLO) accuracy, while, as noted above, on average the BNN matches the NLO prediction to within 3.0% when the few poorly estimated cross section predictions are excluded. Note, however, that even after excluding these outliers, we find that there are still a number of points for which the BNN does worse than the leading order prediction. But, as can be seen in Figure 3, this happens for a small percentage of the pMSSM points we considered.

The Higgs boson mass function (OHIGGS)
The OHIGGS data set was analyzed using both regression and classification approaches with the same ensemble of networks. For both analyses the data set was split into two subsets. One consisted of all pMSSM points where either the true Higgs boson mass was within 2 GeV of 125 GeV [2], or the point's 3-standard-deviation interval overlapped with this range. The other subset consisted of all remaining points. Within the first subset, the average percent error was 0.10% and 87.4% of the time the true value was within 3 standard deviations of the predicted value. In the second subset, the percent error was 0.14% and 86.7% of the time the credible interval contained the true value. If the predictive distributions were Gaussian, these credible intervals, if interpreted as approximate confidence intervals, undercover by about 13%, which shows they are not as well calibrated as the ones for the cross section data set.
In the classifier approach, the labels were positive if the true Higgs boson mass was within 2 GeV of 125 GeV, and negative if it was not. We take a prediction to be positive, that is, good, if any part of its credible interval overlaps the desired range. Using this classification criterion, the precision of the network was 0.926, its recall was 0.997, and its F1 score was 0.960. We therefore conclude that using the BNN to identify pMSSM parameter points with a light neutral Higgs boson consistent with the measured Higgs boson mass will remove very few pMSSM points that yield Higgs boson masses consistent with observation. Using this approach in conjunction with the regression will also allow very accurate labelling of the selected masses, as the regression error on the selected pMSSM points is low.
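The interval-overlap criterion used for this classification can be sketched as follows, with hypothetical Higgs boson mass predictions in GeV:

```python
import numpy as np

yhat = np.array([124.8, 121.0, 128.5])  # hypothetical predicted masses (GeV)
sigma = np.array([0.2, 0.5, 0.4])       # hypothetical predictive std. deviations

# A point is labeled positive if its 3-standard-deviation interval
# overlaps the window 125 +/- 2 GeV, i.e. [123, 127] GeV.
lo, hi = yhat - 3.0 * sigma, yhat + 3.0 * sigma
positive = (lo <= 127.0) & (hi >= 123.0)
```

Because any overlap counts as positive, this criterion favors recall over precision, which matches the numbers quoted above.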
We see in Figure 4 that the estimated uncertainty from the ensemble of networks does not match the uncertainty computed from the actual errors. Moreover, in contrast to the results for the cross section, the Higgs boson mass BNN underestimates the uncertainties, though, as expected, the uncertainties are larger where there are fewer data.

Discussion
The utility of Bayesian neural networks is that they directly approximate the predictive distribution p(y | x, D), that is, they provide a probability density over the space of the quantity being modeled. Moreover, their implementation on GPUs yields a significant increase in prediction speed. With the GPUs used, we achieve a computation speed about 50,000 times faster than SOFTSUSY and 16.5 million times faster than Prospino2 running on a single CPU.
In principle, the predictive distribution encodes the uncertainty in a given BNN prediction. There are two kinds of uncertainty that should be accounted for. The first is the uncertainty arising from the fact that a finite amount of data are used to train, that is, fit the models. The second is the uncertainty due to the fact that we do not know which model should be fitted. However, to the degree that the neural network models used in this paper are sufficiently flexible, the uncertainties reported in this paper automatically include both.
However, as is true of all Bayesian inference, the results depend on the likelihood function as well as on the prior. In this work, we have chosen the form of the prior for computational simplicity, with parameters constrained by hyper-priors for added flexibility. However, even granting the form of the hierarchical prior, it is still necessary to choose the values of the hyper-parameters. We have made no attempt, so far, to optimize those choices, which may explain both the over- and under-estimates of the standard deviations associated with the BNN predictions. But the fact that the problem depends upon hyper-parameters, as is true of all machine learning models, can be turned into a virtue. For example, by weighting the output of each network in the ensemble of networks by the ratio of the hyper-prior, with its hyper-parameters viewed as variables, to the hyper-prior with which the HMC sampling was done, we may be able to use standard optimization techniques to improve the results by optimizing the choice of hyper-parameters.

Conclusion
Given a large number of predictions of interest from high-dimensional models such as the pMSSM, we showed that BNNs, implemented on GPUs, can successfully model these predictions with computation speeds from 50,000 to 16.5 million times greater than those of the programs that yielded the predictions. This makes it possible to make rapid, accurate predictions for model points other than those used to construct the BNNs. In particular, we were able to classify accurately whether a given pMSSM parameter point is valid, as determined by SOFTSUSY, with a maximum F1 score of 0.957 and a recall of 0.987. We were also able to predict cross sections for the production of supersymmetric particles that matched the predictions at NLO accuracy to about 3% on average. Finally, we could classify whether pMSSM parameters would give a Higgs boson mass of 125 ± 2 GeV with a recall of 0.998 and an F1 score of 0.942.
These results indicate that it will be possible to more easily filter out pMSSM parameter combinations that are either unphysical or that predict values for the Higgs boson mass inconsistent with observation. It will also be much easier to study the impact of varying different parameters on the production cross sections of supersymmetric particles. Finally, the predictions studied in this paper are just a small number of the potentially interesting ones. The methods described in this paper, which are available in the TensorBNN package [35], should make it possible to produce rapid predictions of other SUSY observables, given a large sample of such predictions from programs such as SOFTSUSY and Prospino2. Moreover, as noted in our discussion, there is much room for improvement.

Figure 3 :
Figure 3: The cdf for the difference in relative uncertainty between (top) the prediction at leading order and that of the BNN and (bottom) between the BNN and the leading order. The vast majority of the BNN-based predictions are more precise, in the sense of being closer to the corresponding NLO predictions, than the predictions at leading order.

Figure 4 :
Figure 4: (top) The solid and dotted lines are computed from the errors of the BNN. The black line is the bias of the BNN as a function of the Higgs boson mass. The dotted lines represent the 1 and 2 standard deviation intervals. The green and yellow bands are root mean square intervals computed from the standard deviations furnished by the BNN. (bottom) The distribution of the Higgs boson mass.

Table 1 :
The 19 parameters of the pMSSM and the subset of the pMSSM parameter space considered in this paper.

Table 2 :
TensorBNN network training parameters for each of the prediction functions. Explanations of the parameters may be found in [35].

Table 3 :
VPAR: performance metrics. The program would catch false positives but not false negatives.