GANs for generating EFT models

We initiate the automated generation of physical models by computer, satisfying both experimental and theoretical constraints. In particular, we present a framework which allows the generation of effective field theories. We use Generative Adversarial Networks for this task and generate examples which go beyond those known to the machine. As a starting point, we apply this idea to the generation of supersymmetric field theories. Here, the machine is trained on consistent examples of supersymmetric field theories with a single field and generates new examples of such theories. The generated potentials exhibit distinct properties, here the number of minima of the scalar potential, with values not found in the training data. We comment on potential further applications of this framework.

A key activity in fundamental physics is to come up with models satisfying experimental constraints and theoretical paradigms. Finding such solutions requires human experience, imagination, and intuition about which extensions to consider. Only in very rare situations is a complete classification of solutions possible, and human exploration is generically limited in time and sometimes in imagination. It would be very exciting to explore automated model generation and to see which model-building potential machines can have in the context of fundamental physics. The aim is to have a tool which can generate models with pre-defined properties. Our effort comes at a time when machines are able, though in different settings, to come up with 'creative' solutions to problems going beyond human capability, such as in the context of AlphaGo Zero [1].
The language in which models are formulated in fundamental physics is that of effective field theories, which can, in many cases, be characterised by a field theory Lagrangian. The latter determines the couplings among fields and their respective dynamics. Theoretical and experimental constraints are implemented through structure in the couplings, for instance by requiring invariance of the Lagrangian under symmetries, such as invariance under the Lorentz group. Given a particular requirement, such as invariance under spacetime symmetries, it is a common problem to find theories consistent with that symmetry. Such a list of requirements can be seen as the rules of the game which are imposed on the allowed models. The goal is to explore the space of models which are consistent with these symmetries and to determine which types of dynamics can appear. In most cases, physicists know consistent examples but do not know the general space of solutions. Finding consistent solutions which go beyond the known types of solutions is a common problem in physics.
As a first example in this direction, we automatise the search for new supersymmetric models. Supersymmetry (SUSY) is one of the leading candidates for Beyond-the-Standard-Model (BSM) physics, potentially addressing the electroweak hierarchy problem, and it is preferred in ultraviolet-complete theories arising in string theory. Large experimental efforts are underway to search for low-energy remnants of supersymmetry at colliders and as dark matter candidates. The low-energy observables of supersymmetry crucially depend on how supersymmetry is broken. In the absence of gravity, i.e. in the global limit of supersymmetry, the models of supersymmetry breaking are relatively limited, two prominent classes being those of [2,3]. An extension of the available models of supersymmetry breaking, potentially leading to different phenomenological signatures, is still highly desirable.
From a theoretical point of view, supersymmetric models offer a very tractable avenue for automatisation strategies. The simplest setup is that of a single chiral superfield with no gauge symmetries. In this context, taking canonical kinetic terms, the superpotential governs all dynamics. This superpotential is a holomorphic function in one variable. Generating new models thus becomes the task of generating holomorphic functions. Additional properties, such as the number of minima of the scalar potential or the masses in the minimum, could be added as further requirements.
As a first step, we restrict ourselves in this paper to generating superpotentials for a single field. Put concretely, we build a generator for a single-field superpotential in a discretised box. The problem of generating such a superpotential is then equivalent to generating an image with two colour channels [13] and a particular local property, holomorphicity. Holomorphicity can be checked locally by testing whether the Cauchy-Riemann equations for a function f(z = x + iy) = u(x, y) + iv(x, y) are satisfied:

∂_x u = ∂_y v ,   ∂_y u = −∂_x v .   (1)

A function is holomorphic when these conditions are satisfied everywhere.
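On a discretised grid, the Cauchy-Riemann conditions (1) can be checked with finite differences. A minimal Python sketch (grid size and sample function are illustrative, not those of our numerical experiments):

```python
import numpy as np

def cauchy_riemann_residuals(f, dx):
    """Finite-difference Cauchy-Riemann residuals for a complex field f
    sampled on a square grid with spacing dx (x along axis 0, y along axis 1).
    Both residuals vanish (up to discretisation error) iff f is holomorphic."""
    du_dx, du_dy = np.gradient(f.real, dx)
    dv_dx, dv_dy = np.gradient(f.imag, dx)
    r1 = du_dx - dv_dy   # first Cauchy-Riemann condition
    r2 = du_dy + dv_dx   # second Cauchy-Riemann condition
    return r1, r2

# Sample the holomorphic function f(z) = z^2 on a grid
x = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, x, indexing="ij")
Z = X + 1j * Y
r1, r2 = cauchy_riemann_residuals(Z**2, x[1] - x[0])
# Interior residuals are at machine-precision level for a quadratic
```

For a non-holomorphic input such as (z̄)², the residuals are of order one, so this local test cleanly separates the two classes.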
In this paper, we present numerical examples based on a 64 × 64 grid, allowing for simulations to be carried out on 'standard' desktop GPUs in reasonable time. The output has two channels, reflecting the fact that we are interested in a complex-valued function. To build such a generator we use a Generative Adversarial Network (GAN) structure [4]. GANs have been extremely successful in generating images with particular properties [14]. As part of the applications of machine learning in particle physics and astrophysics (cf. [5,6]), GANs are also particularly useful for circumventing costly simulations such as detector simulations [7] or galaxy shape measurements for dark energy surveys [8].
We would like to stress that one difficulty of GANs is the reconstruction of global features (e.g. generating images of animals with the correct numbers of characteristic features; examples can be found in [9]). In our case, this is not a 'bug' but a feature. Globally distinct but locally inseparable features are actually particularly interesting in the context of superpotentials. Here, they can correspond, for instance, to multiple minima of the scalar potential, which are highly relevant in models of early-Universe cosmology. This seems to be a very intriguing avenue for model building, which the machine performs here by combining many known local features into a new global structure. This is precisely what is done in much BSM model building, e.g. in bottom-up string model building [10].

Numerical setup:
The basic idea of GANs is that two networks, the discriminating and generating network, are trained to compete against each other: the discriminating network is optimised to distinguish between real and fake data, whereas the generating network is optimised to produce fake data which tricks the discriminating network. In our case, the input for the discriminator network consists of generated images from the generating network and examples of superpotentials which we have generated from some known holomorphic functions. The overall structure of the network is shown in Figure 1 at the top.
For simplicity, we start with polynomial-type superpotentials up to a maximal degree N:

W(z) = Σ_{n=0}^{N} a_n z^n .   (2)

The coefficients a_n are complex-valued and their real and imaginary parts are initially drawn from a uniform distribution in a given range (−x, x). We then normalise the input such that the maximal absolute value of the real and imaginary parts of the superpotential in the interval of choice (−z, z) is 1. We report on our parameter choices in due course, when we describe our numerical experiments. In Figure 2 we show one example of the scalar potential

V = |∂_z W|² ,   (3)

associated to such a polynomial superpotential.

FIG. 1: Network structure. Top: The overall structure of the GAN. Middle: The layout of the discriminator network. The numbers indicate the respective output dimensions of the blocks of layers. D1 is the input layer. D2-D5 are each a combination of a convolutional layer, a LeakyReLU activation, and a dropout layer. D6 is a dense layer with a sigmoid activation. Bottom: The layout of the generator network. G1 is the noise input layer. G2 consists of a dense layer, batch normalisation, linear activation, and a dropout layer. G3-G4 consist of an upsampling layer followed by a convolutional layer, batch normalisation, and linear activation. G5 does not contain an upsampling layer but otherwise has the same type of layers as G3-G4. G6 is a convolutional layer with a tanh activation. A table with the exact layer structure for both the discriminator and the generator can be found at the end of this article.

Our examples are drawn from a probability distribution which differs from the probability distribution underlying general holomorphic functions. The goal of a generator is to draw from an underlying probability distribution, typically the same as that of the input. In our case, there are three probability distributions of interest: 1. The probability distribution of the input superpotentials, which is essentially related to the underlying probability distribution of the polynomial coefficients. 2. The probability distribution associated to general superpotentials. 3. The probability distribution of complex-valued non-holomorphic functions. A cartoon of the three spaces is shown in Figure 3.
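The construction of the training samples described above can be sketched as follows (a toy version in Python; the function and parameter names are our own, and the default values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_superpotential(grid=64, degree=2, coeff_range=1.0, box=2.0):
    """Sample a random polynomial superpotential W(z) = sum_n a_n z^n on a
    discretised box, normalised so that the maximal absolute value of the
    real and imaginary parts is 1, and return it as a two-channel image."""
    # complex coefficients with Re, Im uniform in (-coeff_range, coeff_range)
    a = rng.uniform(-coeff_range, coeff_range, (degree + 1, 2)) @ np.array([1, 1j])
    x = np.linspace(-box, box, grid)
    X, Y = np.meshgrid(x, x, indexing="ij")
    W = np.polyval(a[::-1], X + 1j * Y)   # np.polyval wants highest degree first
    W /= max(np.abs(W.real).max(), np.abs(W.imag).max())
    return np.stack([W.real, W.imag], axis=-1)

# A small training batch of two-channel "superpotential images"
batch = np.stack([sample_superpotential() for _ in range(16)])
```

All values land in [−1, 1], matching the tanh output range of the generator described below.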
Our goal is to build a discriminating network which only distinguishes between holomorphic and non-holomorphic functions, rather than one which simply singles out the 'known' polynomial functions. To achieve this, the basic idea is to equip the discriminating network only with the power of checking for the local property (holomorphicity) and not for the global properties required to identify polynomials. Note that this is precisely what certain GAN layouts achieve involuntarily in the context of image generation [9].
A visualisation of our network layout for this goal is shown in Figure 1 in the middle (discriminator) and at the bottom (generator). The network design is very similar to networks used for generating fake MNIST samples, such as the one found in [11]. Here we have adapted the structure of the discriminating network to feature convolutional layers with a size of two by two pixels and a stride of one. This ensures that the network is capable of checking the local consistency condition of holomorphicity (cf. (1)). The final activation of the generating network is a tanh, generating a number between −1 and 1 for each image point. The detailed network structure can be found at the end of this article.
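A sketch of this layout in Keras is given below. The filter counts, noise dimension, and generator kernel sizes are our assumptions for illustration; the exact layer structure is listed in the tables at the end of the article. The essential design choice, 2×2 convolutions with stride 1 in the discriminator, is taken from the text above.

```python
import numpy as np
from tensorflow.keras import layers, models

def build_discriminator():
    """D2-D5: 2x2 convolutions with stride 1, so the network can only probe
    local structure (the Cauchy-Riemann conditions); D6: sigmoid real/fake
    score. Filter counts are illustrative assumptions."""
    m = models.Sequential()
    for filters in (32, 64, 128, 256):            # D2-D5
        m.add(layers.Conv2D(filters, kernel_size=2, strides=1, padding="same"))
        m.add(layers.LeakyReLU(0.2))
        m.add(layers.Dropout(0.25))
    m.add(layers.Flatten())
    m.add(layers.Dense(1, activation="sigmoid"))  # D6
    return m

def build_generator():
    """G2: dense projection to a coarse feature map (linear activation is the
    Dense default); G3-G4: two upsampling stages 16 -> 32 -> 64; G5: same
    layer types without upsampling; G6: two-channel tanh output in (-1, 1)."""
    m = models.Sequential()
    m.add(layers.Dense(16 * 16 * 64))             # G2
    m.add(layers.BatchNormalization())
    m.add(layers.Dropout(0.25))
    m.add(layers.Reshape((16, 16, 64)))
    for filters in (64, 32):                      # G3-G4
        m.add(layers.UpSampling2D())
        m.add(layers.Conv2D(filters, kernel_size=3, padding="same"))
        m.add(layers.BatchNormalization())
    m.add(layers.Conv2D(16, kernel_size=3, padding="same"))  # G5
    m.add(layers.BatchNormalization())
    m.add(layers.Conv2D(2, kernel_size=3, padding="same", activation="tanh"))  # G6
    return m

# A 100-dim noise vector maps to a 64x64 two-channel "superpotential"
fake = build_generator()(np.zeros((1, 100), dtype="float32"))
score = build_discriminator()(fake)
```

Because every discriminator kernel sees only a 2×2 neighbourhood, the network can verify the local consistency condition (1) but cannot, by construction, fit the global shape of the training polynomials.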
For our training set we use 10,000 polynomial superpotentials, which are generated from a choice of underlying parameters as described above. We then train our networks using the RMSprop optimiser and a batch size of 256. Our implementation is based on TensorFlow and Keras. We have performed hyperparameter tuning regarding the optimiser. The examples we present in this letter are based on a learning rate of 2 × 10⁻⁴ and a decay rate of 6 × 10⁻⁸. In the following, we present results based on a training set with polynomials of degree 2, a range of coefficients {−1, 1}, and a box size of {−2, 2}. We have performed tests with polynomials up to degree 5, varied the ranges of coefficients from {−1, 1} to {−5, 5}, and used box sizes of length {2, 4, 6}. We have also searched over different grid sizes.

Before turning to the results, let us briefly comment on how the results from the trained generator have to be scrutinised in two ways:

1. Can the numerical solutions be seen as holomorphic functions, given that numerical errors are inevitably present? Defining the error simply as the deviation from the Cauchy-Riemann equations (1) is incomplete, as it does not allow a comparison, on dimensional grounds, to the actual scales involved in the potential. We hence multiply by a length scale δz, here taken to be the lattice spacing. The errors are then

e1 = δz |∂_x u − ∂_y v| ,   e2 = δz |∂_y u + ∂_x v| .   (4)

These errors should be small compared to the scales involved in the superpotential. Comparing the error at each point in the grid with the corresponding superpotential value is misleading, as the superpotential can vanish while its derivatives do not. To avoid this problem, we look at the distribution of errors, its mean, and the respective 95 percent confidence level, where the latter takes into account the spread of the error. We confront these values with the mean absolute value of the superpotentials.
For potentials with interesting properties, we also perform a visual check of whether there is a correlation between the errors and the structure of the potential.
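The error measure of Eq. (4) and the summary statistics we compare against can be sketched as follows (a minimal Python version; the normalisation conventions are as described in the text):

```python
import numpy as np

def holomorphy_errors(W, dz):
    """Errors e1, e2 of Eq. (4): Cauchy-Riemann deviations multiplied by the
    lattice spacing dz, for a complex field W on a square grid
    (x along axis 0, y along axis 1)."""
    du_dx, du_dy = np.gradient(W.real, dz)
    dv_dx, dv_dy = np.gradient(W.imag, dz)
    return dz * np.abs(du_dx - dv_dy), dz * np.abs(du_dy + dv_dx)

def error_report(W, dz):
    """Mean error and 95th percentile of the pooled error distribution,
    confronted with the mean absolute value of the superpotential."""
    e1, e2 = holomorphy_errors(W, dz)
    e = np.concatenate([e1.ravel(), e2.ravel()])
    return e.mean(), np.percentile(e, 95), np.abs(W).mean()

# For a holomorphic sample the mean error is far below the mean |W| scale
x = np.linspace(-2, 2, 64)
X, Y = np.meshgrid(x, x, indexing="ij")
mean_e, p95, mean_w = error_report((X + 1j * Y)**2, x[1] - x[0])
```

For a trained generator, `mean_e` and `p95` computed over a batch of outputs play the roles of the mean error and the 95% confidence value tracked during training.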
2. Are the numerical solutions well approximated by polynomial superpotentials? Our aim is to obtain results which are not necessarily fit by polynomials, in order to explore the space of holomorphic functions. The basic idea is to fit the real data with a polynomial and to see that the same fit to the generated data is not a good one. Different methods might be suitable for this task; here we use a method based on least-squares optimisation. To establish whether the generated results are polynomials of a particular degree, we perform a least-squares fit to a general polynomial of that degree, using the real and imaginary parts as separate data points, i.e. minimising

χ² = Σ_i (O_i − E_i(α))² ,   (5)

where O_i denotes the discrete data points which have been generated and E_i(α) the corresponding predictions obtained from a model with parameters α [15]. In the case of the training data, the fit clearly reproduces the original coefficients, in particular giving vanishing coefficients for powers higher than those present in the original polynomial. Conversely, the fit worsens when the fitting polynomial is of lower degree than the original polynomial. Applied to the generated data, this method can signal that the generator creates functions in a larger class than that of the training data.
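This least-squares test can be sketched in a few lines (our own implementation for illustration; for complex data, minimising the summed squared complex residual is equivalent to treating real and imaginary parts as separate data points in Eq. (5)):

```python
import numpy as np

def fit_cost(W, degree, box=2.0):
    """Least-squares fit of a complex polynomial of the given degree to a
    superpotential W sampled on a square grid; returns the cost of Eq. (5)
    and the fitted coefficients (constant term first)."""
    n = W.shape[0]
    x = np.linspace(-box, box, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    Z = (X + 1j * Y).ravel()
    # design matrix with columns 1, z, z^2, ..., z^degree
    A = np.stack([Z**k for k in range(degree + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, W.ravel(), rcond=None)
    resid = W.ravel() - A @ coeffs
    # sum over real and imaginary residuals, as in Eq. (5)
    return float(np.sum(resid.real**2 + resid.imag**2)), coeffs
```

On a degree-2 training sample the degree-2 cost vanishes and the coefficients are reproduced; fitting the same sample with a lower degree gives a large cost, mirroring the behaviour described above.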
A sample of the evolution of our generating network for a fixed noise input is shown in Figure 6, where we show the scalar potential and the errors at different training steps. Starting from completely noisy output, the network is trained to produce outputs which, on visual inspection, look similar to the polynomials we started with (cf. Figure 2), though some show notable differences. The errors are initially very noisy, as expected, and become significantly smaller, as desired. The evolution of the expectation value of the absolute value of the superpotential, averaged over the entire grid and over 16 fixed noise inputs, together with the mean errors and their respective 95% confidence values, is shown in Figure 5. We clearly see that the errors, upon training, become smaller than the superpotential expectation value, as desired. Hence the network identifies, up to small errors, what a holomorphic function is. Checking whether the generated superpotentials are of degree two, we find that the generator produces solutions clearly going beyond polynomials of degree two. A simple check reveals that the training set has at most one minimum per potential, whereas some generated solutions have multiple minima (cf. Figure 4). When fitting the generated superpotentials to polynomials of varying degree, we note that the fits get better the higher the degree (unlike for the training data). The mean values of the cost (5) are 18.02, 5.46, 3.05 for degrees (2, 3, 5), averaged over 10,000 generated examples. The generator has thus been able to identify consistent superpotentials of a type "unknown" to it. For degree-2 polynomials we can clearly visualise the difference and see that the generator finds solutions which are physically distinct from those in the training set. The next step, on which we only comment briefly here, is an analysis of which analytic models the generator has produced.
The aim is to find an analytic function which shares the properties of the noisy numerical potential (e.g. the number of minima and the overall shape of the potential). As an example along these lines, we have performed fits of polynomial models with varying degrees (cf. Figure 4 for a polynomial fit of degree 5). For complicated models, it would be necessary to fit with other functions (e.g. exponentials, logarithms, etc.).
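The global property we track above, the number of minima of the discretised scalar potential, can be counted with a simple neighbourhood test (a heuristic sketch of our own; on noisy generated samples some smoothing would be needed first):

```python
import numpy as np

def count_local_minima(V):
    """Count strict local minima of a potential V sampled on a 2D grid,
    comparing each interior point to its 8 neighbours."""
    c = V[1:-1, 1:-1]
    is_min = np.ones_like(c, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            # neighbour values shifted by (di, dj) for every interior point
            shifted = V[1 + di : V.shape[0] - 1 + di, 1 + dj : V.shape[1] - 1 + dj]
            is_min &= c < shifted
    return int(is_min.sum())
```

A single-well potential such as |z|² yields one minimum, while a double-well profile yields two, which is exactly the distinction between the training set and some of the generated samples.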
Further explorations along these lines are clearly exciting. However, at this stage we leave it for the future and only comment on some of the applications we can envision.

Outlook:
We can envision many applications and future developments of such generating networks. Let us list a few examples: • The class of polynomial potentials is clearly not the most sophisticated example we can envision, but it should be seen as a good toy example for establishing the structural difference between the training set and the generated set. Following a similar strategy, it will be very intriguing to scrutinise more sophisticated classes of functions.
• In the context of supersymmetric model building, along the lines of our work, another application is to ask which models of supersymmetry breaking can be found. How do our results generalise to systems with multiple fields? Which properties of the potential can be obtained in the context of (post-)inflationary cosmology?
FIG. 6: Top: Evolution of the potential for a fixed noise input. Snapshots are taken after steps 100 (beginning), 1000 (middle), and 20000 (end). The normalisation of the colour bar is the same in all three instances. Middle: Evolution of the errors e1 and e2 as defined in Equation (4). The snapshots are taken at the same times as for the potential and the grid is the same as for the potential. Again the colour bar is the same for all six plots. Videos involving multiple examples will be available online.
• In the context of string compactifications, can we obtain further consistent compactifications lying outside the realm of current models? For instance, can physical systems with particular features (e.g. spectral properties, mass hierarchies) be constructed?
• Such generators are of crucial importance to make distinct statements on mapping out the swampland [12], i.e. which directions in theory space are no-go areas, and to generate experimental predictions of string theory. To make such statements precise, we clearly have to know about the solution space of string theory which is out of the reach of current technology apart from small classes of models.
• An interesting avenue will be to build generators which can treat the theory directly on the Lagrangian level. A setup which can generate consistent Lagrangian theories, subject to checking symmetry conditions, would open several doors. For instance, it would be exciting to explore supersymmetric solutions which involve non-trivial background gauge fields or non-linear realisations of supersymmetry. Which supergravity solutions could be recovered with such techniques?
Overall, this paper should be seen as a proof of concept and not as aiming at an extensive analysis. Although clearly interesting, studying the available generator techniques is beyond the scope of this article. We have simply demonstrated, using one technique, that this approach can lead to interesting models which go beyond the known models, i.e. the models known to the network. It will clearly be interesting to build a generator where we can steer the deviation from polynomial superpotentials, i.e. set how far away from them the generated functions should be. We think that this result is very intriguing, as it opens up the possibility of exploring new models in fundamental physics with generating techniques.
We are very much looking forward to going beyond "line 4" in the context of particle physics models [1]. It is exciting to see which model-building intuition and sophistication the computer can achieve.
Acknowledgements: It is a pleasure to thank Ben Hoyle and Fabian Ruehle for discussions. Special thanks to Ivo Sachs for supportive discussions and for the initial stimulus which motivated this project. HE is supported by a Carl Friedrich von Siemens Research Fellowship of the Alexander von Humboldt Foundation. SK's research is funded by the ERC Advanced Grant "Strings and Gravity" (Grant No. 320040).

Details on neural network architecture:
In Tables I and II we list the detailed layer structure of the discriminator and generator networks used in this work. During training we have varied the relative training rates of the discriminating and generating networks. We have varied the batch sizes (128, 256). For some choices of training sets we observed mode collapse. Again, we stress that the purpose of this article is not to identify the best network design for all possible input configurations. Although interesting, we have also not yet investigated other box shapes, which correspond to different coverings of the complex plane. We noted that edge effects might play a role, i.e. the errors tend to be larger at the edges. To avoid such problems, one possibility is to start with a larger grid and then restrict to the potential on the smaller grid. The noise input of the generator is drawn from a uniform distribution in the range (−1, 1).