THE LANGUAGE OF HYPERELASTIC MATERIALS

The automated discovery of constitutive laws is an emerging research area that focuses on automatically obtaining symbolic expressions describing the constitutive behavior of solid materials from experimental data. Existing symbolic/sparse regression methods either rely on the availability of libraries of material models, which are typically hand-designed by a human expert using known models as reference, or deploy generative algorithms of exponential complexity, which are practicable only for very simple expressions. In this paper, we propose a novel approach to constitutive law discovery that relies on formal grammars as an automated and systematic tool to generate constitutive law expressions complying with physics constraints. We deploy the approach for two tasks: i) automatically generating a library of valid constitutive laws for hyperelastic isotropic materials; ii) performing data-driven discovery of hyperelastic material models from displacement data affected by different noise levels. For the library generation task, we demonstrate the flexibility and efficiency of the proposed methodology in alleviating hand-crafted features and human intervention. For the data-driven discovery task, we demonstrate its accuracy, robustness and significant generalizability.


Introduction
From the mechanics of a beating heart to the haptics of a robotic arm, from the rupture of an aneurysm sac to the aeroelasticity of a spaceship, mechanical phenomena controlled by the behavior and properties of different types of materials are ubiquitous in nature and in engineering applications alike. Thus, understanding, modeling and characterizing the mechanical response of materials has been an important research focus for centuries. Accordingly, extracting so-called material models (or constitutive laws), i.e. relations between stresses, strains and often additional variables, from experimental data has been and still is a central task in solid mechanics [1]. The conventional approach to this underlying inverse problem requires a large number of experiments and a tedious trial-and-error iterative procedure, involving at each iteration the choice of a model from the available literature (largely based on experience) and the subsequent calibration of its unknown parameters [2], often referred to as model updating, which can be performed under both deterministic and stochastic schemes [3,4]. However, recent advances in imaging techniques are increasingly motivating a paradigm shift from point-wise measurements to full-field data acquisition [5]. At the same time, the stunning recent successes of machine learning, and more generally of data-driven techniques, have spawned a dramatic surge in the adoption of such algorithms also in the field of material modeling [6]; see [7] for a recent review. In this work, we leverage machine learning techniques to automate model selection and calibration by condensing them into the single task of model discovery; see [2] for the distinction between model identification and model discovery. Moreover, we are interested in approaches that deliver as output an interpretable material model, i.e. one that can be represented by mathematical expressions satisfying appropriate constraints, thus moving away from the model-free paradigm [8].
Figure 1: A graphical representation of the process proposed in this manuscript. In a first step, a virtual material library is assembled offline using a formal grammar. In a second step, a symbolic regression algorithm is trained offline on the generated virtual material library. In a third step, constitutive laws are discovered from experimental data by solving the resulting inverse problem with gradient-free optimization and the model trained in the second step.
In general, approaches that derive mathematical expressions from targeted experimental data are commonly referred to as symbolic regression methods. In what follows, we review different symbolic regression methods proposed in the literature for general (i.e. not necessarily material) model discovery and comment on their advantages and shortcomings in relation to the discovery of constitutive laws. For the sake of a unified presentation, we choose to view symbolic regression through the lens of operations on graphs. We refer readers not familiar with the graph representation of mathematical expressions to Appendix A, which reviews some basic notions that facilitate reading. From this perspective, symbolic regression algorithms can be viewed as structured ways to add and discard edges of a graph between nodes, with the nodes containing so-called primitives (i.e. constants, variables, mathematical operations, or small expressions). Thus, the construction of a symbolic regression method entails three main choices: i) how the primitives are chosen; ii) what the underlying graph topology is; iii) how to ensure the derivation of meaningful expressions for the chosen graph topology. We hereby discuss existing classes of such algorithms in terms of these defining choices.

Equation Learner. A simple and interpretable framework for symbolic regression is the Equation Learner (EQL) [9]. In EQL, the primitives include mathematical operations, such as sin, cos, identity and multiplication (·), and the input variable X, whereas the underlying topology is a weighted directed graph representing all the functions that provide an output y given an input X, see Figure 2. This is equivalent to considering a fully connected neural network whose weights are the weights of the graph and whose activation functions are the primitive operations. Multiple layers of the network are considered for approximating complex functions, and different paths of the graph represent different mathematical expressions. The method is trained such that, given an input, it determines the edges of the graph that constitute the simplest path whose resulting expression evaluates closest to the measurements. Even though this method has been successfully employed for different tasks, it comes with some important drawbacks. First, there is no systematic way to constrain the method to provide meaningful mathematical expressions. Moreover, EQL is difficult to train for primitives such as division, exponentiation and logarithms, even though ways to alleviate this issue have been proposed [10,11]. Since these operations are present in many material models, the application of this method to constitutive law discovery can be problematic. An EQL-based approach for the discovery of hyperelastic constitutive laws is presented in [12,13]; the authors propose methods that consider the invariants of the Cauchy-Green tensor as inputs and activation functions consisting of unary operations. These authors do not consider constitutive laws containing operations such as division, exponentiation or logarithms.

Sparse Regression.
An alternative family of algorithms is based on Sparse Regression (SR) [14,15,16]. For constructing an SR algorithm, primitives are chosen to be expressions, rather than operations, and the underlying topology is that of a weighted directed acyclic graph connecting the root node (i.e. the input X) with the primitives, see Figure 2. This is equivalent to considering a fully connected network without activation functions, trained to provide the weighted combination of edges that best fits the given measurements, while also imposing sparsity [17] with the purpose of obtaining a parsimonious, i.e. simple, mathematical expression. In SR, whether a generated expression is meaningful is (at least partially) determined by the primitives. For example, considering simple constitutive laws already as primitives is likely to result in a valid final constitutive law. This approach has been successfully applied in different areas of scientific discovery [18] and also to the discovery of constitutive laws, with applications to hyperelasticity [2,19,20,21,22,23], viscoelasticity [24], plasticity [25,26] and generalized standard material models [27]. The main drawback of SR is that a human expert needs to hand-design the primitives; this inevitably introduces a bias in the discovery process and restricts the search space of candidate expressions to those included in the starting library.
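As a concrete illustration of the SR idea, the sketch below fits a weighted combination of hand-designed candidate energy terms to synthetic data and prunes small coefficients via sequential thresholded least squares. The feature library, the parameter values, and the Neo-Hookean ground truth are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Synthetic data from an assumed Neo-Hookean ground truth:
# W = 0.5*(I1 - 3) + 1.5*(J - 1)^2, sampled at random invariant states.
rng = np.random.default_rng(0)
I1 = 3.0 + rng.uniform(0.0, 2.0, size=200)
J = 1.0 + rng.uniform(-0.2, 0.2, size=200)
W = 0.5 * (I1 - 3.0) + 1.5 * (J - 1.0) ** 2

# Hand-designed library of candidate terms (the "primitives" of the SR graph).
library = {
    "(I1-3)":      I1 - 3.0,
    "(I1-3)^2":    (I1 - 3.0) ** 2,
    "(J-1)^2":     (J - 1.0) ** 2,
    "log(J)":      np.log(J),
    "(I1-3)(J-1)": (I1 - 3.0) * (J - 1.0),
}
Theta = np.column_stack(list(library.values()))

# Sequential thresholded least squares to impose sparsity (SINDy-style).
coef = np.linalg.lstsq(Theta, W, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 1e-3
    coef[small] = 0.0
    active = ~small
    coef[active] = np.linalg.lstsq(Theta[:, active], W, rcond=None)[0]

# Surviving terms form the discovered (interpretable) model.
model = {name: c for name, c in zip(library, coef) if c != 0.0}
```

With noiseless data and the true terms present in the library, only the two ground-truth terms survive the thresholding, with coefficients close to 0.5 and 1.5.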
Genetic Programming. Instead of imposing sparsity on a directed acyclic graph, other methods achieve sparsity by operating directly on trees; in other words, sparsity naturally follows from the tree structure. A standard approach for performing symbolic regression on trees is Genetic Programming (GP) [28,29]. Here, an initial population of S-expressions is constructed randomly by combining different primitives, i.e. constants as well as unary or binary operators [30]. At each generation, actions are performed that alter step-by-step the structure of the trees in the population, by either mutating the primitives or removing/adding sub-expressions. This approach is not equipped with any structured way to impose constraints at each step of the generation of mathematical expressions. Moreover, the complexity of the method grows very fast with the depth of the tree [31], making it practically applicable only to small mathematical expressions. The prohibitive computational cost for potentially complex expressions makes GP unsuitable for constitutive law discovery. Nevertheless, attempts in this direction have been made [32,33,34,26]. In these cases, initial expression trees are evolved using mutation and crossover operations, either as a standalone procedure or as part of a more general pipeline.
Deep Symbolic Regression. Deep Symbolic Regression (DSR) [35] exploits recurrent neural networks to predict the probability of child nodes given the parent node. To construct the DSR algorithm one needs to choose a set of primitives, namely constants as well as unary and binary operations, and a specific sequence of operations that corresponds to a top-to-bottom, left-to-right ordering of tree nodes, see Figure 2. The algorithm randomly chooses a root node; then, using auto-regressive sampling, it computes the probability of the next primitive on the tree conditioned on the previous one. The model is trained using a risk-seeking policy gradient to generate best-fitting expressions with high probability. In-situ constraints that zero out the probability of particular children given the parents can be considered to shrink the search space [35,17,36]. The difficulty in applying this methodology to constitutive law discovery (which, to the best of our knowledge, has not yet been attempted) is that the constraints are imposed at each step of the generation but not on the whole expression. Moreover, the generation process requires a second optimization step to calibrate constants, which may end up violating constitutive law constraints.
Large-Scale Pre-Trained Models. The approaches introduced so far learn a model that needs to be re-trained as new data points are added. To address this issue in symbolic regression, Large-Scale Pre-Trained (LSPT) models have been proposed [37,38,39]. LSPT models are trained once on a library of valid symbolic expressions and then provide the probability of an expression given a new data point. Constructing an LSPT model entails choosing primitives as a set of constants and mathematical operations, and a tree topology with a sequence of operations that corresponds to a top-to-bottom, left-to-right ordering of the tree nodes. By sampling valid sequences of primitives, a library of potential expressions is constructed and then evaluated for a range of the input variable X and of the constant values. The pair consisting of the input variable X and the evaluation of the expression y is encoded using a transformer architecture, and the encoding is then used by a different transformer architecture to provide a distribution over expressions. The whole process is trained end-to-end by supervised learning, matching the function evaluations and the tree labels. After the model is trained, a beam search is performed for pairs (X, y) to discover expressions that provide y with high likelihood, see Figure 2. This method could potentially translate to constitutive law discovery, but it would need to be pre-trained on a large library of expressions. At present, there exists no systematic way to construct such a library, nor to impose constraints during library creation or the expression generation process.
Grammars and Variational Autoencoders. Formal grammars have been explored for the generation of molecules as well as of mathematical expressions using sequence-to-sequence [40,41] and recursive [42] learning. The discovery of both new molecules and mathematical expressions can be achieved with a deep generative model, more specifically a variational autoencoder (VAE). In Kusner et al. [40], the authors propose a method that first extracts from a tree the sequence of grammar rules that generates it and then encodes the sequence using a low-dimensional vector. In [42], the authors directly encode a tree using a recursive procedure that encodes the rule that generated the parent node given the children. In both cases, a decoding algorithm based on a grammar is considered to guarantee syntactic validity of the expressions. After the VAE model is trained, a gradient-free optimization procedure, such as Bayesian optimization or an evolutionary strategy, can be employed to explore the low-dimensional space of the VAE and find the expression that best fits a new set of measurements. Methods based on grammars possess low complexity, constrain the tree generation using rules, and generalize. To the best of our knowledge, they have never been used for constitutive law discovery.

Figure 2: Schematic representation of abstractions of different symbolic regression methods. For EQL and SR, we perform supervised learning to learn a map from X to y parameterized by a network, while also imposing sparsity on the network. After the training is completed, we track the remaining edges to discover the form of the equation. The DSR and LSPT methods are constructed by randomly generating and evaluating trees based on a set of primitives, so they do not directly involve supervised learning.
The above survey of available tools for symbolic regression suggests the following desirable characteristics for symbolic regression algorithms applied to data-driven constitutive law discovery:
• Accommodation of primitive operations of significant complexity (e.g. including exp, log or /), without exhibiting training instabilities.
• Alleviation of human bias via flexible and automated generation of new constitutive expressions.
• Low complexity, meaning that the complexity of the discovery process should not increase exponentially with the length of the expression.
• Possibility to incorporate constraints on both the final expression and the steps of the expression generating process.
• Generalization of the symbolic regression algorithm, i.e. the potential to provide expressions that are valid for new and unseen data points without re-training.
In this paper, we propose a symbolic regression pipeline for constitutive law discovery that satisfies all the above-listed requirements. Due to the advantages of formal grammars mentioned earlier (low complexity, ability to systematically embed constraints stemming from domain knowledge, and generalization capability), we explore grammar-based symbolic regression and develop formal grammars specifically designed for constitutive laws. In this first investigation, we focus on hyperelasticity, whereby the material behavior is fully encoded by the elastic strain energy density function, for which we seek a parsimonious mathematical expression.
The proposed pipeline is composed of three steps: a library construction step, a pre-training step, and a data-driven discovery step, as illustrated in Figure 1. In the first step, we define a formal grammar that generates mathematical expressions in the form of trees. We ensure that the expressions derived by the grammar are valid constitutive laws (i.e., in our case, valid elastic strain energy density functions) partly by applying constraints during the generation process and partly by performing empirical acceptance checks on the final expressions. The grammar is then used to perform an offline library generation of constitutive laws. Note that this library is a useful result per se, as it may be deployed in place of a hand-crafted library in the context of constitutive model discovery based on sparse regression [2]. In the second step, this library is used to train offline a symbolic regression algorithm that takes advantage of the tree structure of the mathematical expressions and of the grammar of the constitutive laws. The model is trained to encode a tree into a low-dimensional latent vector representation and to decode this representation back into a tree, creating a low-dimensional manifold of tree representations. In the third step, we use a gradient-free optimization approach [43,44] to search this low-dimensional manifold for the vectorial representation of the model that best fits the given measurements. Therefore, the expensive step of training the deep learning model is executed only once, and discovery is subsequently performed without re-training.
The remainder of the manuscript is structured as follows. In Section 1 we focus on the notion of formal grammars and describe the Context-Free and Regular Tree Grammar classes, their properties and semantics. Section 2 is devoted to the construction of a formal language, the Language of Hyperelastic Materials, where the constitutive law constraints are either included in the grammar or enforced on the derived expressions. In this section we also discuss how the Language of Hyperelastic Materials can be deployed for the automatic construction of a library of hyperelastic constitutive laws, and present examples of the generation of such a library. In Section 3, we introduce a symbolic regression method combining VAEs and a Regular Tree Grammar defined for constitutive laws, which fulfils all the requirements of the earlier list. Finally, we propose a pipeline for discovering constitutive laws from available data and illustrate an example using artificially generated full-field displacement data. Conclusions and an outlook close the paper in Section 4.
In this section, we present two different types of formal grammars, namely Context-Free Grammars and Regular Tree Grammars, along with a simple example to showcase their use. To enable a high-level intuition of how a grammar works, we provide a simplified schematic representation of a tree created using the proposed grammar in Figure 3. The node labels are variables, e.g. L, Ψ, S, of which one, in this case S, is assigned as the tree root. During tree generation, these variables are substituted with different primitives (i.e. constants, e.g. 2, 3, variables, e.g. I1, J, or operators, e.g. +, −, ·), using predefined substitution rules, e.g. r1, r2. The number of arguments that each primitive takes (e.g. + takes two arguments while J takes none) determines the number of its children and thus the tree connectivity.

Context-Free Grammars
Roughly described, a Context-Free Grammar (CFG) is a systematic way of generating tree structures that represent syntactically and semantically meaningful expressions using a set of rules. CFGs are defined as a tuple G = {Φ, Σ, R, S}, where Φ is a set of non-terminal symbols, Σ is an alphabet of terminal symbols, R is a set of production rules, and S denotes a special non-terminal symbol called the starting symbol. Let us explain the meaning of these terms.
• Non-terminal symbols are the variable node labels (i.e. L, Ψ, S in Figure 3). They are syntactic variables that cannot appear in an expression as standalone entities; to generate a sentence, or, in our context, a mathematical expression, they are substituted by terminal symbols through production rules.
• Terminal symbols are the primitives. They are the building blocks that compose sentences or, in our context, mathematical expressions. In our case, the terminal symbols can be constants, variables, operations (such as in Figure 3), or sub-expressions.
• Production rules are user-defined rules to substitute non-terminals with other non-terminal or terminal symbols. Each substitution rule consists of a left-hand side that contains a non-terminal symbol, and a right-hand side that contains a mixture of terminals and non-terminals to be substituted for the left-hand side. E.g., the rule Ψ → sL, with Ψ, L ∈ Φ and s ∈ Σ, substitutes Ψ with sL in a sentence. If a non-terminal symbol appears on the left-hand side of more than one production rule, it can be replaced by any of the right-hand sides of these rules. By recursive substitution of the non-terminal symbols using the production rules, initiating from the starting symbol S, sentences that contain only terminal symbols are finally derived.
Note that because the production rules of CFGs contain only one non-terminal on the left-hand side, the sentences of CFGs can be expressed as trees, called derivation trees, as illustrated in Figure 3. Each derivation tree corresponds to a terminal sentence, or, in our context, to a mathematical expression. A language L(G) is defined as the set of all possible terminal sentences that can be derived by applying the production rules of the grammar starting from S, or all possible ways that the nodes of a derivation tree can be connected starting from S. We now illustrate these concepts with a simple example. Consider the grammar G_NH = {Φ, Σ, R, S}, where: Here, C, Ψ, and L denote the non-terminals, with C as the starting symbol; Ψ and L are used for the recursive substitution of operations and literals (i.e. constants and variables), respectively. More specifically, Ψ describes both unary and binary operations between terminals and non-terminals, whereas L rewrites the integers 1, 3, the real constants a, c and the variables I1, J. The numbers next to the production rules serve as their annotation. Note that although rule (3) is written as (L − L), this does not mean that the rule always produces zero values: each non-terminal L is treated separately, so the two L's in rule (3) do not necessarily take the same value.
If we identify I1, J with the first invariant of the right Cauchy-Green deformation tensor and the third invariant of the deformation gradient in finite deformation kinematics, we can produce the non-dimensional elastic strain energy density function of a simple Neo-Hookean model, W = a·(I1 − 3) + b·(J − 1)², by performing recursive substitutions of non-terminal symbols beginning from the starting symbol C, as follows: The grammar G_NH can not only produce this specific Neo-Hookean model, but also generate any other expression that can be derived as a combination of the introduced production rules {(1), ..., (11)}. Examples derived from G_NH are shown in Figure 4. All the possible terminal strings derived by recursively substituting the non-terminal symbols C, Ψ, L ∈ Φ (starting from C) using the production rules constitute the context-free language L(G_NH).
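To make the derivation mechanics concrete, the sketch below implements a toy CFG in the spirit of G_NH and derives random terminal strings by recursive substitution; the rule set is an illustrative reconstruction, not the paper's exact rules (1)-(11).

```python
import random

# Illustrative CFG in the spirit of G_NH (assumed rule set, for demonstration).
# Keys are non-terminals; everything else is a terminal emitted verbatim.
RULES = {
    "C":   [["PSI", "+", "PSI"], ["PSI"]],
    "PSI": [["a", "*", "(", "L", "-", "L", ")"],
            ["(", "L", "-", "L", ")", "^2"],
            ["L"]],
    "L":   [["I1"], ["J"], ["1"], ["3"]],
}

def derive(symbol, rng):
    """Recursively substitute non-terminals until only terminals remain."""
    if symbol not in RULES:          # terminal symbol: emit as-is
        return symbol
    rhs = rng.choice(RULES[symbol])  # pick one production rule at random
    return "".join(derive(s, rng) for s in rhs)

exprs = {derive("C", random.Random(seed)) for seed in range(20)}
```

Strings such as a*(I1-3)+(J-1)^2, i.e. a Neo-Hookean-like energy, are among the expressions derivable from this toy rule set.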

Regular Tree Grammars
Another type of grammar suitable for our application is a Regular Tree Grammar (RTG), defined as the tuple G = {Φ, Σ, R, S}, where Σ is now a ranked alphabet. This is defined as an alphabet augmented by specifying the arity of each primitive, i.e. the number of arguments each primitive takes. To explain the difference between CFGs and RTGs, we now consider a Regular-Tree version of G_NH, which we denote as Ĝ_NH, where: L → I1() (10), L → J() (11)}.
There are two immediate differences between Ĝ_NH and G_NH. First, we provide the arity of each primitive in the alphabet. We consider operations with arity 2, such as + and −, with arity 1, such as ()², and with arity 0, i.e. constants and variables. Operations such as ()² can also be defined to have arity 2 in the general case where the exponent is also an argument. Second, we write all rules in Polish notation, see Appendix A, to denote that we are working with trees and not strings. As for CFGs, the sentences (or mathematical expressions) of RTGs are expressed as derivation trees. Also, as in the context-free case, the Regular-Tree Language L(Ĝ) is defined as the set of all the trees that can be generated using the RTG Ĝ.
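The following sketch illustrates the ranked-alphabet idea: every primitive carries an arity, each rule rewrites a non-terminal into a primitive plus child non-terminals, and a derivation therefore is a tree, serializable in Polish notation. The rule set and depth cap are illustrative assumptions.

```python
import random

# Ranked alphabet: primitive -> arity (number of children), illustrative.
ARITY = {"+": 2, "*": 2, "-": 2, "sq": 1, "I1": 0, "J": 0, "a": 0, "1": 0, "3": 0}

# RTG-style rules: non-terminal -> (primitive, child non-terminals).
RULES = {
    "C":   [("+", ["PSI", "PSI"])],
    "PSI": [("*", ["L", "PSI"]), ("sq", ["PSI"]), ("-", ["L", "L"])],
    "L":   [("I1", []), ("J", []), ("a", []), ("1", []), ("3", [])],
}

def generate(nt, rng, depth=0, max_depth=4):
    options = RULES[nt]
    if depth >= max_depth:  # force non-recursive rules near the depth cap
        options = [r for r in options if all(c == "L" for c in r[1])] or options
    label, child_nts = rng.choice(options)
    return (label, [generate(c, rng, depth + 1, max_depth) for c in child_nts])

def polish(tree):
    """Serialize the tree top-to-bottom, left-to-right (Polish notation)."""
    label, children = tree
    return " ".join([label] + [polish(c) for c in children])

tree = generate("C", random.Random(3))
seq = polish(tree).split()
# A valid Polish sequence satisfies the arity balance sum(arity - 1) == -1.
assert sum(ARITY[t] - 1 for t in seq) == -1
```

The arity-balance check at the end is exactly what makes the ranked alphabet useful: connectivity follows from arities alone, so the flat Polish sequence determines the tree uniquely.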

Characteristics and Operations of Tree Grammars
In general, trees are fully described by their node labels and connectivity. For the special case of a tree derived from an RTG, the tree can be fully characterized by the list of rules used for its generation, as we state more precisely in the following. In the previous section we have shown that a unique Neo-Hookean model can be derived by a set of rules. For an RTG, the opposite also holds, meaning that all the information required to describe the specific Neo-Hookean model is uniquely represented by a set of rules. More precisely, it can be proven mathematically [42] that this property holds for unambiguous RTGs, i.e. RTGs in which each rule has a unique right-hand side. One can also prove that any RTG can be made unambiguous and that algorithms handling RTGs possess linear complexity in both computation and memory [42]; see Appendix B for details on the properties of RTGs. In view of the above, there are two meaningful actions to be performed on a tree derived from an RTG: i. given the tree, extract its characterization (i.e. the list of rules of the corresponding grammar used for its generation), which is denoted as parsing; and ii. given a characterization, i.e. a list of rules, generate the tree that it corresponds to, which is called generation. In Appendix B we provide details on tree parsing and generation algorithms. Note that knowledge of the characterization is key for performing tree-based symbolic regression, as the characterization (i.e. the list of rules) is the dependent variable that the symbolic regression algorithm learns to predict.
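A minimal sketch of the two actions, parsing (tree to rule list) and generation (rule list to tree), for an unambiguous RTG; the rule set below is illustrative, not the paper's grammar, and rule lists follow the pre-order (top-to-bottom, left-to-right) traversal.

```python
# Illustrative unambiguous RTG: (non-terminal, primitive, child non-terminals).
RULES = [
    ("C",   "+",  ["PSI", "PSI"]),  # rule 0
    ("PSI", "*",  ["L", "L"]),      # rule 1
    ("PSI", "sq", ["L"]),           # rule 2
    ("L",   "I1", []),              # rule 3
    ("L",   "J",  []),              # rule 4
    ("L",   "a",  []),              # rule 5
]
BY_LABEL = {(nt, lab): i for i, (nt, lab, _) in enumerate(RULES)}

def parse(tree, nt="C"):
    """Parsing: extract the pre-order list of rule indices that generate a tree."""
    label, children = tree
    i = BY_LABEL[(nt, label)]
    out = [i]
    for child, child_nt in zip(children, RULES[i][2]):
        out.extend(parse(child, child_nt))
    return out

def generate(rules, pos=0, nt="C"):
    """Generation: rebuild the tree from a rule list; returns (tree, next_pos)."""
    i = rules[pos]
    assert RULES[i][0] == nt
    label, child_nts = RULES[i][1], RULES[i][2]
    pos += 1
    children = []
    for cnt in child_nts:
        sub, pos = generate(rules, pos, cnt)
        children.append(sub)
    return (label, children), pos

# a*I1 + J^2 as a tree, round-tripped through its rule-list characterization.
tree = ("+", [("*", [("a", []), ("I1", [])]), ("sq", [("J", [])])])
seq = parse(tree)
rebuilt, _ = generate(seq)
assert rebuilt == tree  # the bijection for unambiguous RTGs
```

The round trip is what makes the rule list a usable regression target: the flat integer sequence carries exactly the same information as the tree.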
Importantly, the bijective relation between trees and the sets of grammar rules used for their generation only holds for (unambiguous) RTGs, and not for CFGs [45]. On the other hand, CFGs are known to be more expressive than RTGs [46]. Looking ahead to the two main outcomes of this work, namely automated generation of a material model library and automated model discovery based on data (Sections 2 and 3, respectively), it is clear that CFGs are more suitable for the first task, in which expressivity is important and the unique definition of the set of grammar rules corresponding to a given model is not essential, whereas RTGs are more suitable for the second task. For this reason, we will make use of CFGs in Section 2 and of RTGs in Section 3.
A grammar describes the rules that create sentences and constitutes the syntax of a language. However, syntax alone is not enough to define a language that is useful for our purposes; the additional ingredient needed is semantics, i.e. the meaning of the sentences that the language produces. While it is difficult to define semantics in natural languages, in physics this is more straightforward. For our purposes, we consider as semantically valid the expressions that satisfy the constraints of the constitutive law relations.
In this section, our objective is to generate the Language of Hyperelastic Materials. We first review the constraints that need to be satisfied for a mathematical expression to represent a valid elastic strain energy density function. We then discuss ways of enforcing these constraints; this is done partially during the construction of the grammar and partially through checks on the final derived expressions. Subsequently, we propose a general language for hyperelastic materials, i.e. a language whose expressions are valid elastic strain energy density functions (i.e. functions which correspond to valid hyperelastic constitutive laws). Throughout this paper, we assume that a suitable non-dimensionalization has been introduced, such that we only deal with non-dimensional strain energy density functions.
We consider finite-deformation kinematics and denote with u ∈ R³ the displacement field, with F = I + Grad u the deformation gradient (where Grad is the gradient operator with respect to the reference coordinates), and with C = FᵀF the right Cauchy-Green deformation tensor. The principal invariants of C are defined as

I1 = tr C,  I2 = ½[(tr C)² − tr(C²)],  I3 = det C = J²,  with J = det F,

where tr denotes the trace operator. For nearly incompressible materials, it is customary to decompose the deformation gradient F into a volume-preserving (or isochoric) part, F_iso, and a volume-altering part, F_vol:

F = F_vol F_iso,  where F_vol = J^(1/3) I and F_iso = J^(−1/3) F.

This decomposition translates to the right Cauchy-Green tensor and its invariants as follows:

C_iso = F_isoᵀ F_iso = J^(−2/3) C,  Ī1 = J^(−2/3) I1,  Ī2 = J^(−4/3) I2.

With the decomposition of the deformation gradient, a frequent choice is to write the strain energy density function W(F) as the sum of isochoric, W_iso, and volumetric, W_vol, contributions:

W(F) = W_iso(Ī1, Ī2) + W_vol(J).
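The kinematic quantities above can be sketched numerically as follows; the sample deformation gradient is an arbitrary illustrative choice.

```python
import numpy as np

# Sample deformation gradient (shear plus some dilatation), illustrative values.
F = np.array([[1.1, 0.3, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])
J = np.linalg.det(F)
C = F.T @ F                                  # right Cauchy-Green tensor

# Principal invariants of C.
I1 = np.trace(C)
I2 = 0.5 * (np.trace(C) ** 2 - np.trace(C @ C))
I3 = np.linalg.det(C)                        # equals J**2

# Volumetric/isochoric split F = F_vol F_iso with F_iso = J**(-1/3) F.
F_iso = J ** (-1.0 / 3.0) * F
C_iso = F_iso.T @ F_iso
I1_bar = np.trace(C_iso)                     # equals J**(-2/3) * I1

assert np.isclose(I3, J ** 2)
assert np.isclose(np.linalg.det(F_iso), 1.0)  # isochoric part preserves volume
```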

Requirements for Hyperelastic Constitutive Laws
In the following, we briefly overview the main requirements that hyperelastic constitutive laws have to satisfy according to continuum mechanics theory. For more details, see [47,48].
Thermodynamic Consistency A hyperelastic material is one for which we postulate the existence of an elastic strain energy density function W : GL⁺(3) → R, F ↦ W(F), where GL⁺(3) is the set of invertible second-order tensors with positive determinant. The laws of thermodynamics lead to the Clausius-Duhem inequality

P : Ḟ − Ẇ ≥ 0,

where : denotes the tensor dot product. The Clausius-Duhem inequality is satisfied as an equality for an arbitrary Ḟ by the following definition of the first Piola-Kirchhoff stress tensor:

P = ∂W/∂F.

Symmetry of the Stress Tensor Balance of angular momentum implies the symmetry of the Cauchy stress tensor σ = J⁻¹ P Fᵀ, and results in the fact that the strain energy density function W(F) needs to satisfy

P Fᵀ = F Pᵀ.

Objectivity (Frame Indifference) Objectivity means independence from the choice of the observer, which in hyperelasticity can be expressed as

W(QF) = W(F)  for all Q ∈ SO(3) and F ∈ GL⁺(3).
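The relation P = ∂W/∂F and the induced symmetry of the Cauchy stress can be checked numerically; the sketch below uses central finite differences on a compressible Neo-Hookean energy with illustrative parameter values.

```python
import numpy as np

mu, lam = 1.0, 2.0  # illustrative material parameters

def W(F):
    # Compressible Neo-Hookean energy (one common form, assumed here).
    J = np.linalg.det(F)
    I1 = np.trace(F.T @ F)
    return 0.5 * mu * (I1 - 3.0) - mu * np.log(J) + 0.5 * lam * np.log(J) ** 2

def PK1(F, h=1e-6):
    """First Piola-Kirchhoff stress P = dW/dF by central finite differences."""
    P = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            dF = np.zeros((3, 3))
            dF[i, j] = h
            P[i, j] = (W(F + dF) - W(F - dF)) / (2.0 * h)
    return P

F = np.array([[1.2, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 0.95]])
P = PK1(F)
sigma = np.linalg.det(F) ** (-1.0) * P @ F.T   # Cauchy stress
assert np.allclose(sigma, sigma.T, atol=1e-5)  # balance of angular momentum
```

Because this W depends on F only through C (via I1 and J), objectivity holds by construction, and the symmetry of sigma follows as the final assertion verifies.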

Material Symmetry
The strain energy density function should reflect the desired type of material symmetry, e.g. isotropy or a specific class of anisotropy, which can be formalized as

W(FQ) = W(F)  for all Q ∈ G,

where G is the symmetry group of the material.
Polyconvexity The polyconvexity of the strain energy density function guarantees sequential weak lower semicontinuity, which along with coercivity is a sufficient condition for the existence of minimizers, i.e. of solutions to boundary value problems under general boundary conditions and body forces [49,50,51]. Also importantly, polyconvexity is sufficient for quasiconvexity, which in turn implies rank-one convexity. For twice differentiable and smooth elastic strain energy density functions, rank-one convexity is equivalent to ellipticity [52,51]. W(F) is polyconvex if and only if there exists a function P, convex in its arguments, such that

W(F) = P(F, Cof F, det F).

Normalization The undeformed configuration must be stress-free and store no energy:

W(I) = 0,  P(I) = 0.

In other words, an undeformed configuration implies no stresses and stores no energy.

Growth Condition
The volumetric growth condition requires that the strain energy density grow to infinity as the volumetric deformation tends to zero or to infinity:

W(F) → ∞  as J → 0⁺ or J → ∞.

Its physical meaning is that an infinitesimal material volume cannot grow to infinity or be compressed to a point. Note that this is only one type of growth condition, and probably the most widely used one. For a comprehensive discussion on growth conditions and coercivity we refer the readers to [51].

Non-Negativity of the Strain Energy Density
The strain energy density also needs to satisfy W (F) ≥ 0, as a negative strain energy density is physically meaningless.

Enforcement of Constraints
Now that the requirements for elastic strain energy densities have been defined, we discuss how these constraints can be translated into language semantics. The grammar G_NH introduced in Section 1 leads to expressions that do not necessarily satisfy the constraints in Section 2.1. If we consider, e.g., W = (J − 0.5)² + 3·I1² (one of the examples in Figure 4 with a = 0.5), it is evident that this expression does not satisfy the normalization condition. In this section, we propose a process to embed some of the constitutive law constraints directly in the definition of the grammar and to apply additional semantic checks to the generated expressions in order to ensure that the remaining constraints are satisfied. We thereby create a grammar, and a corresponding language, whose derived expressions automatically comprise valid strain energy density functions.
Intrinsic Constraints Some of the requirements in Section 2.1 can be easily fulfilled a priori in the grammar construction. Thermodynamic consistency for hyperelastic materials is trivially satisfied; symmetry of the stress tensor and frame indifference are automatically fulfilled by considering W as a function of the deformation gradient through the right Cauchy-Green deformation tensor C. In terms of material symmetry, from now on we focus on isotropic hyperelasticity; isotropy is automatically guaranteed by considering W as a function of the invariants of C.
To satisfy normalization, we correct W(F) as follows:

W_corr(F) = W(F) − W_0 − W_c, (4)

where W_0 and W_c are corrections to satisfy the strain energy density and the stress normalization, respectively, given by

W_0 = W(F)|_{F=I}, (5)
W_c = n (J − 1), (6)

where n = ∂W/∂J|_{F=I}. To derive the stress correction we follow [53].
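Using the example expression W = (J − 0.5)^2 + 3 I1^2 from above, the corrections can be sketched numerically; this is a minimal illustration in which the finite-difference derivative stands in for the symbolic computation following [53], and all function names are ours:

```python
# Illustrative energy from the text: W = (J - 0.5)^2 + 3*I1^2,
# written as a function of the invariants I1 = tr(C) and J = det(F).
def W(I1, J):
    return (J - 0.5) ** 2 + 3.0 * I1 ** 2

# Invariants at the undeformed configuration F = I (3D): I1 = 3, J = 1.
I1_ref, J_ref = 3.0, 1.0

# Energy correction W_0 = W evaluated at F = I.
W0 = W(I1_ref, J_ref)

# Stress correction coefficient: n = dW/dJ at F = I (central finite
# difference here; the paper computes it symbolically).
h = 1e-6
n = (W(I1_ref, J_ref + h) - W(I1_ref, J_ref - h)) / (2 * h)

def W_corrected(I1, J):
    return W(I1, J) - W0 - n * (J - 1.0)

# Both normalization conditions now hold at F = I:
print(W0)                          # 27.25 -> energy stored by the raw expression
print(W_corrected(I1_ref, J_ref))  # 0.0   -> corrected energy vanishes
```

Note that n = 1 here, so the stress correction is exactly W_c = (J − 1), the sub-tree added to the original expression.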

Extrinsic Constraints
The remaining requirements in Section 2.1 are not satisfied a priori, but are verified on the final produced expression. If they are not fulfilled, the expression is discarded. In order to empirically check whether the growth condition holds, we choose F as the diagonal matrix

F = diag(a, 1, 1),

for which J = a. We set a to a large positive real number a_l to check the case J → ∞, and to a small positive real number a_s for the case J → 0+. Then, we choose a large positive real number H and consider the growth condition satisfied if W > H for a = a_l and a = a_s. In our later numerical examples, we choose a_l = 10^4, a_s = 10^−4 and H = 10^3.
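This empirical check can be sketched in a few lines; the candidate energy W below is a hypothetical example, not a model from the paper:

```python
# Empirical growth-condition check: evaluate the candidate energy on
# F = diag(a, 1, 1) for a very large and a very small a, and require W > H.
# The energy below is a hypothetical candidate with a term singular at J = 0.
def W_of_a(a):
    J = a                        # det(F) for F = diag(a, 1, 1)
    I1 = a ** 2 + 2.0            # tr(C) = tr(F^T F)
    return (I1 - 3.0) + (J - 1.0) ** 2 + (1.0 / J - 1.0) ** 2

def growth_condition_holds(W_of_a, a_l=1e4, a_s=1e-4, H=1e3):
    # Accept only if the energy exceeds H for both J -> infinity and J -> 0+.
    return W_of_a(a_l) > H and W_of_a(a_s) > H

print(growth_condition_holds(W_of_a))   # True
```

An expression without a term that blows up as J → 0+ (e.g. W = I1 − 3 alone) would fail the a_s check and be discarded.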
To check (again empirically) the non-negativity and the monotonicity of the strain energy density function, we consider one-parametric deformation gradients corresponding to several simple tests, namely Uni-axial Tension (UT), Bi-axial Tension (BT), Uni-axial Compression (UC), Bi-axial Compression (BC), Simple Shear (SS) and Pure Shear (PS) [2], as well as three additional loading paths denoted as N1, N2 and N3, all parameterized in (7) by a real non-negative parameter γ (we give γ values such that all the expressions are well defined). The deformation paths resulting from the parameterized deformation gradients in (7) are visualized in Figure 5, revealing a reasonable coverage of the invariant space, at least in the first quadrant. For all cases, W(F(γ)) should be a positive and monotonically increasing function of γ. We then consider parameters γ1, γ2 with γ2 > γ1 and check that

W(F(γ2)) > W(F(γ1)) ≥ 0. (8)

Figure 5: Deformation paths used to check positivity and monotonicity of the elastic strain energy density, plotted in the space of the isochoric invariants.
If Equation (8) does not hold for a pair (γ1, γ2) for any of the loading conditions in (7), the mathematical expression is rejected.
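A minimal sketch of this accept/reject check follows, assuming an illustrative uniaxial-tension parameterization F = diag(1 + γ, 1, 1) and a hypothetical candidate energy; neither is the paper's exact choice from (7):

```python
# Empirical positivity/monotonicity check along a one-parametric loading path.
def F_uniaxial_tension(gamma):
    return [1.0 + gamma, 1.0, 1.0]       # principal stretches of a diagonal F

def W_from_diag(F):
    I1 = sum(x * x for x in F)           # tr(C) for diagonal F
    J = F[0] * F[1] * F[2]
    return (I1 - 3.0) + (J - 1.0) ** 2   # hypothetical candidate energy

def passes_checks(path, W, gammas):
    values = [W(path(g)) for g in gammas]
    nonneg = all(v >= 0.0 for v in values)
    # Monotonicity: W(F(gamma_2)) > W(F(gamma_1)) for every gamma_2 > gamma_1.
    increasing = all(v2 > v1 for v1, v2 in zip(values, values[1:]))
    return nonneg and increasing

gammas = [0.1 * k for k in range(1, 20)]
print(passes_checks(F_uniaxial_tension, W_from_diag, gammas))   # True
```

In the full pipeline the same routine is run over all loading paths, and a single failing pair (γ1, γ2) rejects the expression.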
Polyconvexity The requisite of polyconvexity deserves a separate discussion. In view of the equivalence mentioned in Section 2.1 and of an additivity property [50], a possible simple choice of a polyconvex strain energy density is the following:

W(F) = W1(Ĩ1) + W2(Ĩ2^{3/2}) + W3(J),

with W1, W2 and W3 convex and monotonically increasing with respect to their arguments. Since Ĩ1 and Ĩ2^{3/2} are polyconvex [50], the resulting strain energy density function is naturally polyconvex. Note that this choice is quite restrictive, as we are not including any terms containing both Ĩ1 and Ĩ2 (such mixed terms are not polyconvex [50]). Empirical tests on the convexity of each expression could potentially be performed; however, one would need to consider deformation gradient tensors parameterized for different loading cases that also adequately cover the space of kinematic quantities. Designing the form of the deformation gradient tensors for such loading paths is not a trivial task, and we leave it as a subject of future research.
The approach we follow in this paper allows for more flexibility. As will become clear later, we work with a strain energy density function of the form

W(F) = Ŵ(Ĩ1, Ĩ2^{3/2}, J);

thus, we bias the grammar to derive polyconvex expressions through the choice of the exponent for Ĩ2; however, we do not strictly guarantee convexity or monotonicity of Ŵ. Over polyconvexity, which may be regarded as too strong a requirement for practical purposes, we prefer a more flexible grammar construction that allows more exotic expressions to be generated. However, already at this point we would like to stress that the grammar construction fully controls the properties of the derived expressions; using mathematical tools from formal languages, these properties can be provably controlled in a rigorous manner [46]; hence, in principle, they provide the possibility to strictly enforce polyconvexity if desired.
Example To exemplify the process of hyperelastic model generation including constraint enforcement, we refer back to G_NH and apply a minor modification. Without loss of generality, we consider a = 0.5, b = 1.5 to define a specific material, and the isochoric invariant Ĩ1 in place of I1. We present the grammar construction in two ways. First, we rewrite G_NH upon implementation of the above modifications. As we already did in Section 1.1, we generate an expression by consecutively applying a sequence of production rules, i.e. r = [(1), (2), (2), (6), (3), (9), (6), (13), (8), (11), (2), (9), (7), (12)], in a top-to-bottom, left-to-right fashion. As a second alternative, we introduce two new non-terminal symbols Ψ_iso and Ψ_vol to indicate the isochoric and volumetric parts of the strain energy density, and define production rules for each of these non-terminals, e.g. L_iso → Ĩ1 (14). With this grammar definition, it is possible to derive expressions which distinguish between volumetric and deviatoric (isochoric) contributions, which is a common choice in constitutive modeling of hyperelastic materials. From this grammar we again sample a sequence of production rules, i.e. r = [(1), (2), (6), (4), (3), (11), (8), (15), (10), (13), (6), (11), (9), (14)], and consecutively apply them in a top-to-bottom, left-to-right fashion to rewrite the non-terminals until we derive a terminal expression. Note that with this second version, where separate non-terminals are considered for the isochoric and volumetric parts of the strain energy density, the number of grammar rules is larger, since we need rules for each non-terminal. For the sake of simplicity, in the following developments of this paper we will consider a single non-terminal Ψ and not distinguish between volumetric and deviatoric contributions. However, considering both terms separately would be straightforward, as shown above.
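The top-to-bottom, left-to-right application of a rule sequence amounts to a leftmost derivation, which can be sketched as follows; the toy grammar and rule numbering below are illustrative stand-ins, not the paper's exact G_NH:

```python
# A toy context-free derivation in the spirit of G_NH: each rule rewrites the
# leftmost matching non-terminal in the current string of symbols.
RULES = {
    1: ("PSI", ["PSI", "+", "PSI"]),                    # split energy into two terms
    2: ("PSI", ["C", "*", "(", "L", "-", "3", ")"]),
    3: ("PSI", ["C", "*", "(", "J", "-", "1", ")", "^", "2"]),
    4: ("L", ["I1~"]),                                  # isochoric first invariant
    5: ("C", ["0.5"]),
    6: ("C", ["1.5"]),
}

def derive(start, rule_sequence):
    symbols = [start]
    for r in rule_sequence:
        lhs, rhs = RULES[r]
        i = symbols.index(lhs)    # leftmost occurrence = top-to-bottom, left-to-right
        symbols[i:i + 1] = rhs
    return " ".join(symbols)

# PSI -> PSI + PSI -> ... -> a Neo-Hookean-like energy with a = 0.5, b = 1.5:
expr = derive("PSI", [1, 2, 5, 4, 3, 6])
print(expr)   # 0.5 * ( I1~ - 3 ) + 1.5 * ( J - 1 ) ^ 2
```

Each rule index consumed from r rewrites exactly one non-terminal, so the sequence fully determines the derivation tree.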
The obtained expression for W(F) does not satisfy the normalization conditions, because W(F = I) = 27.25 and the stress correction evaluates to W_c = (J − 1). Thus, we perform the corrections (5) and (6) and derive the corrected expression, eq. (9). Performing the corrections is equivalent to adding a sub-tree to the original expression, see Figure 6. In Figure 6 we also provide the predicted displacement field for the benchmark in [2], reproduced in Figure 7, considering W(F) from eq. (9).
We then execute the remaining empirical checks on volumetric growth and non-negativity, both of which hold in this case. Therefore, we conclude that the derived expression can indeed be deemed a constitutive law that can characterize a hyperelastic material.

Generating the Language of Hyperelastic Materials
So far, we considered a simple grammar, based on which we could generate simple expressions guaranteed to be valid (simple) constitutive laws. In order to obtain more flexible expressions, we now consider a more complicated alphabet of terms, include production rules with more operations, and derive constants via recursive substitution of non-terminals instead of explicitly defining production rules that rewrite non-terminals with reals. In the choice of the primitives, we consider logarithms, exponentials, monomials and other operations that often appear in constitutive relations. To keep a reasonably compact grammar, we do not consider different non-terminals and production rules for the isochoric and volumetric parts of the strain energy density, but a single non-terminal for both. However, if desired, this can be changed as exemplified earlier. Finally, we define the energy and stress corrections as production rules for the non-terminal Ψ. The resulting grammar of hyperelastic materials, G_HM, then includes, among others, the production rules L → exp(L) (13), L → Ĩ1 (14), L → J (16) and M → P.P (18). In G_HM, we purposely define multiple production rules for the non-terminals L, P, D; this choice enforces recursions that increase the expressivity of the grammar. When two rules have the same left-hand side, we randomly select one to apply. In CFGs, non-terminals should be treated independently in each production rule; however, for the purpose of the normalization correction (see eq. (4)), we enforce that the same non-terminal Ψ is substituted on the left-hand side in the production rules (1), (3) and (4). Now we return to eq.
(9), and show how this simple constitutive law can also be derived from the present, more complex grammar via the sequence of production rules r = [(1), (3), (4), (2), (5), (10), (7), (16), (17), (18), (19), (21), (26), (5), (9), (18), (21), (10), (14)], with top-to-bottom, left-to-right ordering. By evaluating the production rules r = [(3), (4)] we derive eq. (9). It takes more steps to complete the derivation and obtain the strain energy density expression because G_HM contains more numerous and more general production rules than G_NH. As usual, the growth, non-negativity and monotonicity conditions are empirically checked a posteriori.
To summarize: in the grammar of hyperelastic materials G_HM, thermodynamic consistency, the symmetry of the stress tensor, objectivity and the material symmetry conditions are satisfied a priori; the normalization of the stress and of the strain energy density is satisfied by construction; and the growth, non-negativity and monotonicity conditions are empirically assessed a posteriori by testing the obtained expression on a number of simple one-parametric deformation paths.
The proposed approach is highly versatile, because with minimal changes to the grammar one can describe different types of materials. For example, in the case where anisotropy is present due to fiber reinforcement, additional invariants need to be considered to describe the deformation along the direction of anisotropy. For a material reinforced with two fiber families, G_HM can be further adjusted by considering additional production rules [54] and by adding J4, J5, J6, J7, i.e. the anisotropic invariants for the two fiber families, to the alphabet. The ensuing language L(G_HM) will include constitutive models for this anisotropy case. Clearly, analogous modifications can be applied for more complex anisotropy cases.
We provided the derivation of one expression by choosing a sequence of production rules from G_HM. All the possible derivation trees that the grammar can produce comprise the Language of Hyperelastic Materials L(G_HM). In the next section, we elaborate on how this language can be used for the automated generation of a library of constitutive models, see the first block of Figure 1.

Deriving Parsimonious Expressions
The recursive substitution process in the grammar of hyperelastic materials G_HM is guaranteed neither to terminate nor to provide a parsimonious expression, i.e. a shallow tree. To alleviate these issues, the tree generation process can be manipulated so as to enforce or favor the generation of shallow trees. There are two main options. One option is to attribute different probabilities to the production rules that share the same left-hand side, favoring those that produce terminal strings.
The other is to specify a fixed maximum number of operations to obtain the final expression. In this subsection, we briefly discuss these two alternative procedures.
Probabilistic Context-Free Grammars As discussed earlier, in a CFG several production rules may have the same non-terminal on their left-hand side; as a result, a criterion is needed to select one of these production rules during tree generation. A naive approach would consider a uniform probability over all production rules for a specific non-terminal. Greater flexibility and efficiency can be obtained using Probabilistic Context-Free Grammars (PCFGs) [55,56,57,58]. PCFGs can be constructed from CFGs by assigning probabilities to the production rules with the same non-terminal on the left-hand side, such that these probabilities sum up to one. The probability for each rule can be assigned by the user or learned from data. As an example, a PCFG can be obtained from G_NH (for a = 0.5, b = 1.5) by assigning probabilities to its production rules (given in square brackets). Clearly, the probability of a constitutive law is given by the product of all probabilities associated with the rules chosen for its derivation, i.e.

P(W) = ∏_{i=1}^{Np} P(r_i),
where r = [r1, ..., rNp] is the sequence of production rules that derives W. It is also clear that the probabilities of all possible final expressions sum up to one [57]. Deriving a deeper tree, i.e. a more complex expression, requires a longer sequence r, which includes a larger number of production rules and is thus associated with a lower probability. Even though this is an important property that can be used to favor the generation of parsimonious expressions, it requires a careful design of the grammar, as unrealistically low probabilities may be obtained already for simple models. For example, with G_NH the sequence of rules that produces the Neo-Hookean model W = 0.5 (Ĩ1 − 3) + 1.5 (J − 1)^2 is r = [(1), (2), (3), (5), (10), (9), (8), (11), (7)], and the probability of this model being produced from the grammar (with the individual rule probabilities in the previous example) is already quite low. Clearly, it is possible to modify the grammar and/or the individual rule probabilities to influence the probability of the resulting model form, but such modifications would require hand-engineering, thereby introducing bias into the model generation. Thus, PCFGs are not well suited for the generation of expressions. Moreover, PCFGs are not efficient for performing data-driven discovery of long expressions, because these will be discovered with low probability. They are frequently used for tasks involving the parsing of expressions, where the probability of each rule is learned from data rather than manually assigned by a user. For example, we could consider a probabilistic grammar if we were given a library of constitutive laws and wished to determine the probabilities of individual rules being used in deriving a valid expression. This could be useful in a scenario where there is uncertainty about the number and form of the rules used to derive expressions. In such a case, we could define a more general grammar than we believe is needed and then, by learning suitable
probabilities, we could discover the rules used in generating the library. Probabilistic grammars are also useful in understanding the decoding process that we employ in Section 3.
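The PCFG mechanics can be sketched as follows: rules sharing a left-hand side carry probabilities that sum to one, and the probability of a derived expression is the product of the probabilities of the rules used. The toy grammar and probability values are illustrative, not the paper's G_NH assignment:

```python
import random

# A tiny PCFG over energy-like expressions. For each non-terminal, the rule
# probabilities sum to one.
PCFG = {
    "PSI": [(0.2, ["PSI", "+", "PSI"]),
            (0.5, ["C", "*(I1~-3)"]),
            (0.3, ["C", "*(J-1)^2"])],
    "C":   [(0.5, ["0.5"]), (0.5, ["1.5"])],
}

def sample(symbol, rng):
    """Expand a symbol; return (terminal string, probability of this derivation)."""
    if symbol not in PCFG:                       # terminal token
        return symbol, 1.0
    rules = PCFG[symbol]
    p_rule, rhs = rng.choices(rules, weights=[p for p, _ in rules])[0]
    prob = p_rule
    parts = []
    for s in rhs:
        text, p = sample(s, rng)
        parts.append(text)
        prob *= p                                # product over all applied rules
    return "".join(parts), prob

rng = random.Random(0)
expr, prob = sample("PSI", rng)
print(expr, prob)
```

Because every extra rule multiplies in a factor below one, deeper derivation trees necessarily receive strictly smaller probabilities, which is exactly the property discussed above.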

Constraining the Number of Operations in a Tree Derivation
The second alternative is to decide a priori the maximum number of operations that the grammar is allowed to perform in order to derive an expression. To realize this constraint, we construct the grammar rules such that they do not allow for long recursions. For this purpose, we introduce a non-terminal symbol for each operation, e.g. by transforming rule (2), and for each of these non-terminal symbols we define unary and binary operations that derive terminal nodes in one step. In this way, we derive expressions of pre-set maximum length and complexity. We exemplify how this process works in practice in the next section.
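The effect of such a construction can be sketched with a depth-bounded generator: every non-terminal either branches into fixed-arity operations or, once the depth budget is exhausted, is forced to derive a terminal in one step. The symbol sets below are illustrative:

```python
import random

# Depth-bounded tree generation with a hard budget on nesting depth.
UNARY = ["log", "exp"]
BINARY = ["+", "*"]
TERMINALS = ["I1~", "J", "0.5", "1.5"]

def generate(depth, rng):
    if depth == 0:
        return rng.choice(TERMINALS)             # forced terminal derivation
    kind = rng.choice(["unary", "binary", "terminal"])
    if kind == "unary":
        return f"{rng.choice(UNARY)}({generate(depth - 1, rng)})"
    if kind == "binary":
        left, right = generate(depth - 1, rng), generate(depth - 1, rng)
        return f"({left} {rng.choice(BINARY)} {right})"
    return rng.choice(TERMINALS)

rng = random.Random(0)
expr = generate(depth=3, rng=rng)
print(expr)   # nesting can never exceed the preset depth budget of 3
```

Unlike the unconstrained recursive substitution, this process always terminates and bounds the complexity of the resulting expression by construction.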

Automatic Generation of a Hyperelastic Material Model Library
In this section, we demonstrate that the grammar-based generation of elastic strain energy density functions proposed in the previous sections can be used for the automatic creation of a material model library, which can possibly serve as the basis for sparse regression approaches such as those in [2,27]. Moreover, we develop an automated computational pipeline which, once the mathematical expression for a valid constitutive law is generated, produces the finite element code needed to solve boundary value problems where the material behavior obeys such a constitutive law.
For the hyperelastic model library generation, we deploy a CFG for deriving mathematical expressions. As illustrated earlier, the grammar provides a systematic way of checking whether the expressions are syntactically and semantically valid, and it is chosen to be as expressive as possible. For the purpose of obtaining parsimonious expressions, we no longer use the grammar G_HM defined in Section 2.3, but a modified grammar G_HMp. Here, U stands for unary, Y for binary, T for combinations of unary and binary operations, V for invariants, and L for both invariants and constants. We also denote real numbers with O, integers with D, and parts of real numbers with P. We use the symbol "|" to separate rules with the same non-terminal on the left-hand side, for which we consider a uniform probability. As explained in Section 2.4, in this new grammar we limit the left-hand-side recursions in the production rules in order to create shallow trees that correspond to parsimonious expressions, by introducing non-terminal symbols and operations of predetermined depth. For this reason, we define grammar rule (2) of G_HMp as Ψ → Ψ1 + Ψ2 + Ψ3 + Ψ4 instead of Ψ → Ψ + Ψ, and introduce L, U, Y, T to derive terminal nodes in fewer steps. Thereby, we enforce that the strain energy density is represented by a tree of maximum depth equal to seven. We use this grammar to produce syntactically valid expressions of elastic strain energy densities, which also satisfy the intrinsic semantic constraints. The full semantic validity of these expressions is then assessed by checking whether the growth and non-negativity conditions hold, see Section 2.2. If these conditions hold, the expression is accepted as a valid constitutive law; if not, a new expression is constructed. Figure 8 illustrates several examples of valid final expressions.
Clearly, the total number of models (or trees) that a library can potentially contain depends on the grammar and can be computed using combinatorial calculus. In this work, we do not generate all possible models stemming from G_HMp, but fix upfront a desired number, and stop the generation process right after obtaining a number of valid models equal to the target. The automatic generation of the constitutive law library needs to be performed using symbolic computations, here with SymPy [59]. The wall-clock time for the generation of a tree on a single processor is of order O(10^−4) seconds, computed on an Alienware m16 laptop with an Intel i9-13900HX CPU and 32 GB RAM. In our experience, the probability of a generated tree being accepted is about 33%; the entirety of the discarded trees violates the growth condition of the strain energy density function, whereas about 33% violate both the growth and the non-negativity conditions. To further speed up the process, we leverage the independence of the generation process for different trees and use multi-threading, whereby each tree is generated by a different core of the CPU in an asynchronous manner. Thus, the total time for a model library generation depends on the number of available CPU units. For example, generating 1,000,000 valid trees with 100 CPU units available would take a few minutes. These numbers naturally depend on the chosen number of operations in grammar rule (2), and we expect the acceptance rate of a tree to drop as this number increases, and the average time to produce a tree to increase accordingly. Once a material model library is created automatically, for its practical use it is of fundamental importance to automate the process leading from the availability of a constitutive law to the solution of a boundary value problem embedding such law. The goal should be to seamlessly integrate the library creation with finite element simulations, without having to perform changes to a finite element code for every
different strain energy density function. This requires the possibility to use automatic differentiation to derive the stress and the tangent stiffness tensors from the elastic strain energy density [60], a possibility provided by modern finite element tools such as FEniCS [61,62,63] or AceGen/AceFEM [60]. In Figure 9, we present the displacement fields computed by solving the boundary value problem in Figure 7 with δ = 0.3, for four of the hyperelastic constitutive models in Figure 8, using the FEniCS library. We discretize the domain using 1024 quadratic triangular elements with a three-point Gauss quadrature rule.
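The generate-and-filter library construction described above can be sketched as follows; the candidate generator, the validity check and the resulting acceptance rate are toy stand-ins for the grammar sampling and the semantic checks of Section 2.2:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Candidate "models" are sampled independently, checked, and accepted or
# rejected until the target library size is reached.
def random_candidate(seed):
    rng = random.Random(seed)
    c1, c2 = rng.uniform(-1, 1), rng.uniform(0, 2)
    return (c1, c2)   # stands for W = c1*(I1~ - 3) + c2*(J - 1)^2

def is_valid(model):
    # Toy semantic check: both coefficients must be positive so that the
    # energy is non-negative and growing along the test paths.
    c1, c2 = model
    return c1 > 0 and c2 > 0

def build_library(target_size, max_tries=10_000, workers=4):
    library, seed = [], 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(library) < target_size and seed < max_tries:
            seeds = range(seed, seed + 64)       # one batch of candidates
            seed += 64
            for model in pool.map(random_candidate, seeds):
                if is_valid(model):
                    library.append(model)
    return library[:target_size]

lib = build_library(100)
print(len(lib))   # 100
```

Since candidates are independent, the batches parallelize trivially; in the real pipeline each worker additionally runs the symbolic corrections and the empirical growth/non-negativity checks before a model enters the library.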
The code for the generation of the hyperelastic material model library and the code for solving the boundary value problem in Figure 7 will be made publicly available at the time of publication.

Data-Driven Discovery of Hyperelastic Constitutive Laws
One of the possible motivations for generating the Language of Hyperelastic Materials is to use it within the context of data-driven constitutive model discovery. As discussed in the introduction, constitutive model identification is typically formulated as an inverse problem, where the unknown parameters in a chosen model are to be inferred on the basis of available experimental information from a tested system. Constitutive model discovery goes one step further, as it integrates model selection with parameter identification [27]. In Section 1 we discussed the properties that a symbolic regression method needs to possess in order to be useful for constitutive law discovery. More specifically, we consider only algorithms that allow for generalization, and therefore consider a structure composed of a map from the language L(G) to a latent vector space R^nVAE and another map back to L(G). In this way, the latent space can be searched during the discovery stage for latent vectors that decode to expressions fitting the data. Moreover, the chosen dimensionality reduction method should accept trees as the data structure of the input, as well as feature an encoder-decoder structure that can be combined with formal grammars. To our knowledge, the RTGVAE algorithm is the only dimensionality reduction technique that fits these criteria. We have chosen to work with grammar-based methods due to their advantages: they overcome human bias, since they are not restricted to producing known models or combinations thereof, and the grammar construction imposes semantic and syntactic constraints on the constitutive law expressions. In this section, we aim at showcasing the solution of the inverse data-driven constitutive law discovery problem using a method that features reasonable complexity, accommodates different primitive operations without training instabilities, and generalizes across different materials. For this purpose, we consider a combination of the Recursive Tree Grammar
Variational Autoencoder (RTGVAE) method proposed by [42], applied to the Language of Hyperelastic Materials formulated in the previous section, and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [43,44] for gradient-free optimization. Our purpose in choosing RTGVAE is to show that the proposed methodology can be seamlessly integrated with an already established symbolic regression method. However, in principle any other symbolic regression method could be considered, e.g. an LSPT model, in combination with our grammar-generated hyperelastic constitutive laws. In symbolic regression methods that consider a grammar, the latent space is constructed by encoding sequences of grammar rules. Therefore, when performing the data-driven discovery, the models produced in the process of optimization are biased to satisfy the grammar and thus be semantically and syntactically valid. If we considered an LSPT model, for example, the latent space would not be constructed by encoding grammar rules, which means that there would be a higher probability of obtaining invalid expressions during the discovery process.

Figure 8: Different constitutive laws from the language of hyperelastic materials L(G_HMp). These expressions satisfy all the constitutive law constraints.

Symbolic Regression Using Grammar and Model Discovery
We discussed in Sections 1 and 1 how trees can be characterized via grammar rules instead of their node labels and connectivity; see B for more information. In B we also discuss the operations of extracting a nested list of rules given a tree, called parsing, and of deriving a tree given a list of rules, called generation. A tree VAE is built on these two operations. More specifically, its encoder extracts the nested list of production rules that derive different trees (inputs) and produces their low-dimensional vectorial encodings. The decoder takes an encoding and provides the most probable nested list of rules corresponding to it (outputs). The VAE is trained to provide, with high probability, the correct list of rules given a latent vector. In the next paragraphs, we discuss in more detail how the encoder, the decoder and the loss function are defined.

Tree Encoder
The purpose of a tree encoder is to provide a systematic way to represent trees that adhere to a certain set of grammar rules using a low-dimensional vectorial representation. This encoding represents hierarchical or nested lists as a vector, which is a data structure that is easier to handle in computer programs. We explain how the encoder works by comparing encoding to the bottom-up parsing process, as the two are closely connected. In parallel to parsing, the encoder assigns a representation to each child and recursively maps the representations of the children to the representation of the parent using a function f_r for each rule r. In [42], the f_r functions are defined as Recursive Neural Networks [64,65,66] to capture the hierarchical structure of the trees. This process begins from the leaf nodes of the tree and stops when it reaches the root, at which point the whole tree is encoded. Then, two functions map the representation of the tree to the mean μ and the covariance σ of the multivariate Gaussian distribution that models the latent variables of the VAE. Finally, the so-called reparameterization trick [67] is performed to sample a latent vector of the VAE; see Figure 10 for a visual explanation of the encoding process.
More precisely, the tree encoding is defined as a function ϕ : L(Ĝ) → R^nVAE from a language L(Ĝ) to an nVAE-dimensional space. Consider a grammar rule of the form Ψ → s(L1, ..., Lk) that rewrites a non-terminal Ψ with a k-ary operation and produces the k children non-terminals L1, ..., Lk. For each grammar rule, we introduce a function f_r : R^{k×n} → R^n (with n a user-defined integer) that maps the representations of the k children to the representation of the parent. If the rule rewrites the non-terminal with a nullary operation, i.e. a constant, then f_r is a constant vector of size n. In practice, f_r is defined as a fully connected neural network with a single layer,

f_r(t_1, ..., t_k) = tanh( Σ_{j=1}^{k} U_{r,j} t_j + b_r ),

where U_{r,j} ∈ R^{n×n} are the weights and b_r ∈ R^n the biases of the neural network. To obtain a representation of the whole tree, f_r is applied during parsing. Therefore, the tree encoder is defined by a recursive equation of the form

ϕ(s(t̂_1, ..., t̂_k)) = f_r(ϕ(t̂_1), ..., ϕ(t̂_k)),

where r ∈ R is the rule that derives s(t̂_1, ..., t̂_k) from Ψ. For the VAE, the encoding probability q_ϕ(z|ŵ), where ŵ is the derivation tree of an expression, is a Gaussian with mean μ : R^n → R^nVAE and diagonal covariance σ : R^n → R^nVAE. Both μ and σ are defined as linear fully connected networks with trainable parameters. The mean and the covariance are then used to perform the reparameterization z = μ + ε ⊙ σ, where ε ∼ N(0, α · I) and α is a positive real number that scales the covariance of ε. When the encoding finishes, the entire tree is represented by a vector z ∈ R^nVAE.
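The bottom-up recursion can be sketched in pure Python: each rule r has its own single-layer map f_r that combines the children's vectors into the parent's vector. Weights are random here, whereas in the RTGVAE they are trained; the rule names, vector size and tanh nonlinearity are illustrative assumptions:

```python
import math
import random

N = 4                                    # representation size n (assumption)
rng = random.Random(0)

def rand_matrix():
    return [[rng.gauss(0, 0.5) for _ in range(N)] for _ in range(N)]

# One weight matrix per (rule, child slot), plus a bias per rule; nullary
# rules (leaves) are constant vectors.
WEIGHTS = {("plus", 0): rand_matrix(), ("plus", 1): rand_matrix(),
           ("square", 0): rand_matrix()}
BIAS = {"plus": [0.1] * N, "square": [0.1] * N}
LEAF = {"I1~": [1.0, 0, 0, 0], "J": [0, 1.0, 0, 0]}

def f_r(rule, children):
    # single-layer map: tanh(sum_j U_{r,j} t_j + b_r)
    out = list(BIAS[rule])
    for j, child in enumerate(children):
        U = WEIGHTS[(rule, j)]
        for i in range(N):
            out[i] += sum(U[i][m] * child[m] for m in range(N))
    return [math.tanh(x) for x in out]

def encode(tree):
    # tree = ("rule", subtree, ...) or a leaf label; recursion mirrors parsing
    if isinstance(tree, str):
        return LEAF[tree]
    rule, *kids = tree
    return f_r(rule, [encode(k) for k in kids])

z = encode(("plus", ("square", "J"), "I1~"))   # encodes (J)^2 + I1~ bottom-up
print(len(z))   # 4
```

In the full model, z would then be mapped by the linear networks μ and σ and reparameterized into the latent vector of the VAE.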
Tree Decoder The purpose of a tree decoder is to provide a process to generate trees conditioned on a low-dimensional vectorial representation. Even though the decoder and the generation process discussed in B both work in a top-down manner, initiating from the starting symbol, they present a notable difference. At each step l of the generation process, the generation algorithm recovers a non-terminal Ψ from the tail of the list of non-terminals, applies the rule r_l = Ψ → s(L1, ..., Lk), removes Ψ from the list and stores L1, ..., Lk, see B. In contrast to the generation process, the decoding process does not take a list of grammar rules as input, but a vectorial encoding z. For this reason, the rule r_l to be applied at the l-th step is not known a priori, but needs to be chosen during the tree generation.
The tree decoding is defined as the opposite process of the encoding, or, more precisely, as a function ψ : R^nVAE → L(Ĝ) from an nVAE-dimensional space to the language L(Ĝ). Mirroring the encoder, the first step of the decoding process is to lift the dimensions of the vector z ∈ R^nVAE using a linear fully connected network ρ : R^nVAE → R^n to obtain the vector s = ρ(z) that represents the entire tree. Then, at each step l of the decoding process, the algorithm takes a vector s and a non-terminal Ψ as inputs. The vectorial representation s is used to choose which rule r_l will be applied, as follows. Let N_Ψ be the number of rules with Ψ on their left-hand side. A linear fully connected network h_Ψ : R^n → R^{N_Ψ} is employed to obtain a score vector λ = h_Ψ(s) for each non-terminal Ψ. Then we choose a rule r_l, with l ∈ [1, ..., N_Ψ], based on the softmax probability

p(r_l | s) = exp(λ_l) / Σ_{m=1}^{N_Ψ} exp(λ_m).

For choosing the rules to be applied for the non-terminals L1, ..., Lk, the representation of the parent needs to be mapped to the representations of the children. For this purpose, a neural network with k layers g_{r,1}, ..., g_{r,k} is defined for each rule r = Ψ → s(L1, ..., Lk), mapping the parent representation to the vector representation t_j of each child, where t_j = g_{r,j}(s) for j ∈ [1, ..., k]. At each step of the decoder, the tree representation s is updated by removing the representation at step l from the overall tree representation, i.e.
s is substituted by s − t_l. We repeat this process until no non-terminal symbol is left. The decoding function can then be written in a recursive manner, eq. (15). Using ψ with Ψ = C, where C is the starting symbol, we recursively generate a tree ŵ given a vector z. Even though the decoder learns probabilities of production rules conditioned on a library of valid material laws, which means that it is strongly biased to produce valid constitutive laws, it is not explicitly constrained to always satisfy the constitutive law constraints. This means that there exists a probability that the discovery process results in an expression that is not semantically valid.

Figure 10: Left: the encoder maps the representation of the children to the representation of the parent using f_r, where r is the rule used to produce the children. The process starts from the leaf nodes and continues until it reaches the root, encoding the whole tree. Middle: the reparameterization trick is performed to sample a latent vector z. Right: the decoder generates a tree from a low-dimensional vector z by first using the function h_Ψ to decide the next rule to be applied, beginning from the starting symbol, and then the function g_{r,j} to predict the vectorial representations of the children.
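The decoder's rule choice can be sketched as follows: a score vector λ = h_Ψ(s) over the N_Ψ rules for a non-terminal is turned into probabilities with a softmax, and the next rule is sampled from them. The linear map h_Ψ is random here; in the RTGVAE it is trained:

```python
import math
import random

N = 4
rng = random.Random(0)
H_PSI = [[rng.gauss(0, 1) for _ in range(N)] for _ in range(3)]  # 3 rules for Psi

def softmax(scores):
    m = max(scores)                      # shift for numerical stability
    exps = [math.exp(x - m) for x in scores]
    total = sum(exps)
    return [e / total for e in exps]

def choose_rule(s, rng):
    # scores lambda = h_Psi(s), then sample a rule index from softmax(lambda)
    scores = [sum(row[i] * s[i] for i in range(N)) for row in H_PSI]
    probs = softmax(scores)
    return rng.choices(range(len(probs)), weights=probs)[0], probs

s = [0.3, -0.1, 0.8, 0.5]
rule, probs = choose_rule(s, rng)
print(rule, [round(p, 3) for p in probs])   # probabilities sum to one
```

Repeating this choice for every pending non-terminal, with the child representations t_j = g_{r,j}(s) and the update s ← s − t_l, reproduces the top-down decoding loop described above.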

Loss Function
The VAE introduces a probability density function q_ϕ(z|ŵ), parameterized by the encoder, and the conditional probability p_ψ(ŵ|z) of decoding the input using the latent vector z. The autoencoder is trained to minimize the loss

L(ϕ, ψ) = −E_{q_ϕ(z|ŵ)}[ log p_ψ(ŵ|z) ] + β D_KL(q_ϕ || N), (16)

where D_KL(q_ϕ || N) is the Kullback-Leibler divergence between the distribution parameterized by the encoder and the standard normal distribution N, and β is a user-defined parameter that controls the effect of this term on the overall loss [68]. Eq. (16) trains the model to maximize the probability of choosing the correct rule at each step of the decoding process, given that all the choices in the previous steps have been correct [42].
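For a diagonal Gaussian encoder against the standard normal prior, the KL term of eq. (16) has a well-known closed form, which can be sketched directly; the reconstruction term is represented by a placeholder negative log-likelihood:

```python
import math

# Closed-form KL divergence of q = N(mu, diag(sigma^2)) from N(0, I):
# D_KL = 0.5 * sum_i (mu_i^2 + sigma_i^2 - 1 - log sigma_i^2).
def kl_diag_gaussian_vs_standard(mu, sigma):
    return 0.5 * sum(m * m + s * s - 1.0 - math.log(s * s)
                     for m, s in zip(mu, sigma))

def beta_vae_loss(reconstruction_nll, mu, sigma, beta=1.0):
    # reconstruction_nll stands for -log p_psi(w|z), e.g. the summed
    # cross-entropy of the rule choices made during decoding
    return reconstruction_nll + beta * kl_diag_gaussian_vs_standard(mu, sigma)

# The KL term vanishes exactly when the encoder matches the prior:
print(kl_diag_gaussian_vs_standard([0.0, 0.0], [1.0, 1.0]))   # 0.0
print(beta_vae_loss(2.3, [0.5, -0.5], [0.8, 1.2], beta=4.0))
```

Larger β pushes the encoder distribution toward the prior, trading reconstruction accuracy for a smoother, better-organized latent space, which matters for the search described next.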
Inverse Problem Solution The data-driven discovery stage of the process consists of searching the reduced space of the VAE for an encoding which, when decoded, provides a model whose evaluation matches the measured data well. At this stage, we do not focus on the optimal solution of this problem, but rather opt for an evolutionary approach, which is a popular choice for the solution of inverse formulations linked to multi-step model evaluation processes [69]. We consider a global rather than a gradient-based optimization method, because we expect the landscape of the latent space to present multiple local minima. The data-driven discovery is performed on displacement measurements from only one type of loading (biaxial tension), which does not necessarily provide enough information to constrain the discovery to one material model, and thus to a global minimum. In this case, we opt for a covariance matrix adaptation evolutionary strategy (CMA-ES) [43,44], which generates candidate solutions from a probability distribution whose parameters are iteratively updated. The CMA-ES method fits naturally with global optimization in the latent space of VAEs, as both consider a multivariate normal distribution over the latent vectors z. A difference lies in the fact that CMA-ES uses a full covariance matrix during the adaptation process, while the VAE considers a diagonal covariance matrix. For measuring the goodness of fit, a root mean square error metric is adopted, reflecting the discrepancy between the model evaluation and the available measurements. CMA-ES is one of several possible optimizers from the class of evolutionary methods, which allow for a more flexible formulation of the objective function. We opt for CMA-ES because it rests on a stronger mathematical foundation, based on the maximum-likelihood principle, than metaheuristic schemes such as Genetic Algorithms [70] and Particle Swarm Optimization [71].
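The latent-space search can be illustrated with a deliberately simplified evolutionary strategy. Unlike the full CMA-ES of [43,44], this sketch keeps a fixed diagonal covariance and only adapts the mean, and the quadratic objective stands in for the actual decode-and-evaluate RMSE:

```python
import random

def rmse(candidate, data):
    # Stand-in objective: discrepancy between a candidate "decoded model"
    # evaluation and the measurements (here just vectors of equal length).
    return (sum((c - d) ** 2 for c, d in zip(candidate, data)) / len(data)) ** 0.5

def es_search(objective, dim, pop=50, iters=30, sigma=0.1, seed=0):
    # Minimal evolutionary search with a fixed diagonal covariance: sample a
    # population around the mean, keep the best quarter, recenter the mean.
    # CMA-ES additionally adapts a full covariance matrix and the step size.
    rng = random.Random(seed)
    mean = [0.0] * dim
    for _ in range(iters):
        population = [[m + rng.gauss(0.0, sigma) for m in mean] for _ in range(pop)]
        population.sort(key=objective)
        elite = population[: pop // 4]
        mean = [sum(p[i] for p in elite) / len(elite) for i in range(dim)]
    return mean

target = [0.3, -0.2, 0.5]  # pretend latent code of the best-fitting model
best = es_search(lambda z: rmse(z, target), dim=3)
print([round(b, 2) for b in best])
```

In the actual pipeline, the objective would decode each candidate z into a constitutive law and compare its finite element evaluation against the measurements.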

Constitutive Law Discovery Based on Data
In this section, we demonstrate how the proposed grammar-based approach can be used to perform efficient data-driven model discovery, thus covering the full spectrum of tasks in Figure 1. The model discovery is geometry- and load-agnostic, meaning that the same process would work without any change for any geometry, loading or boundary condition. We consider a simple RTG and generate a language for hyperelastic materials, which we then use to train an RTGVAE [42]. As a final step of the process, we employ CMA-ES [43,44] to discover the model that best fits synthetic data contaminated with different levels of noise. In real applications, these measurements may come from experiments or from higher-fidelity (e.g., multiscale) finite element computations.
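As an illustration of how a library of candidate expressions can be sampled from a tree grammar, the sketch below draws random derivations from a toy RTG-like rule set. The rules, the depth cap and the string output are illustrative simplifications, not the grammar used in this work:

```python
import random

# Toy RTG-like rule set: each non-terminal maps to rules (label, child
# non-terminals); these rules are illustrative, not the paper's grammar.
GRAMMAR = {
    "Psi": [("+", ["Psi", "Psi"]), ("*", ["L", "L"]), ("(.)^2", ["L"])],
    "L": [("I1-3", []), ("I2-3", []), ("J-1", []), ("0.5", []), ("2", [])],
}

def derive(symbol, rng, depth=0):
    rules = GRAMMAR[symbol]
    if depth >= 4:  # keep expressions parsimonious: forbid self-recursion deep down
        rules = [r for r in rules if symbol not in r[1]] or rules
    label, children = rng.choice(rules)
    if not children:
        return label
    return "{}({})".format(label, ", ".join(derive(c, rng, depth + 1) for c in children))

rng = random.Random(42)
library = sorted({derive("Psi", rng) for _ in range(50)})  # deduplicated candidates
print(len(library), library[0])
```

Rejection of semantically invalid expressions (e.g. those violating physics constraints) would be applied as a filter on top of such sampled candidates.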
We consider two different setups for the discovery process, one based on supervised learning and one based on unsupervised learning. We denote as supervised learning the case in which the experimental values of the elastic strain energy density are assumed to be known. This is an unrealistic assumption, since the elastic strain energy density cannot be measured experimentally. However, this setup is used initially, as it is the simplest option and suffices to demonstrate the useful properties of the grammar-based model discovery approach. One such property is that, when the baseline model cannot be represented using the rules defined in the grammar, the discovery process recovers a constitutive law expression that represents similar physics. We demonstrate this property by evaluating the generalization performance of the discovered expression under different loading conditions. Another useful property of the approach is its robustness to noise in the displacement data. We demonstrate this property by introducing different noise levels in the measurements. To combine these useful properties with a realistic loss function, we adopt the unsupervised learning approach, i.e. we assume that only displacement fields and the related values of the global applied force are available from experimental measurements, which is a realistic scenario as discussed in [2]. In the unsupervised case, we demonstrate that the proposed approach retains the properties of robustness and generalization and provides accurate results. For both discovery scenarios, we consider one loading step in the discovery process, for δ = 0.3.
In the cases of EQL and SR, see Section 1, the data-driven discovery process is designed such that the learning algorithm chooses, out of a library of primitives, expressions or operations, the combination that best fits the data. The underlying assumption of these methods is that the constitutive laws can be described by a weighted sum of an a priori chosen basis. The selection of such a basis biases the discovery process toward recovering combinations of already known relations, and it is conditioned by the expertise of the person designing the library. However, in real applications the form of the constitutive law is not always known a priori, such that an informed choice of the basis can be made. In such cases, a weighted sum of the library elements may not accurately describe the constitutive law, because the library construction may be missing important information. With our grammar-based approach, on the one hand, we can fully automatically generate a much wider library of material models than is possible by manual construction, as already demonstrated in Section 2.5; on the other hand, we can discover an interpretable material model which interprets the data well even when the true model underlying the data is not present in the library. While the capability of finding a good approximation for a model not present in the library was already demonstrated with SR [2], the final model still had to be a combination of the library terms. This is no longer the case in the present setting, as we demonstrate in this section.
Data Generation To generate the data used for the discovery, we consider the benchmark proposed in [2,72], which emulates digital image correlation measurements using computational data generated by the finite element method. The domain is a plate with a central hole which undergoes symmetric bi-axial loading controlled by a displacement parameter δ. Due to symmetry, only a quarter of the plate is studied, with symmetry boundary conditions on the left and bottom boundaries [2], see Figure 7. Plane-strain conditions are assumed. In the data generation, we consider five different (baseline) hyperelastic constitutive models:

• A Neo-Hookean (NH) model containing a quadratic volumetric term
• An Isihara (IS) model [73] containing a quadratic deviatoric term
• A Haines-Wilson (HW) model [74] containing a cubic deviatoric term
• A Gent-Thomas (GT) model [75] containing a logarithmic deviatoric term
• An Ogden (OG) model [76] depending on the principal stretches

We set the applied displacement parameter to δ = 0.3 and, for each constitutive model, we obtain the displacement field u, which we then use to compute the deformation gradient F and the volumetric and isochoric invariants of the right Cauchy-Green deformation tensor J, Ĩ1, Ĩ2. We add artificial noise to the displacement data as follows:

ũ_a = u_a + ε_a,  ε_a ∼ N(0, σ*²),  a = 1, ..., n_dofs,

with n_dofs as the number of degrees of freedom in the finite element discretization. We adopt two levels of noise, namely σ* = 10⁻⁴ and σ* = 10⁻³, as in [2]. Denoising of the resulting displacement data ũ is performed as in [72].
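A minimal sketch of the noise contamination step, assuming i.i.d. Gaussian perturbations of standard deviation σ* applied to each displacement degree of freedom (the toy displacement vector is illustrative):

```python
import random

def add_noise(u, sigma, seed=0):
    # Perturb each displacement degree of freedom with i.i.d. Gaussian noise
    # of standard deviation sigma, emulating measurement error:
    # u_tilde_a = u_a + eps_a, eps_a ~ N(0, sigma^2), a = 1, ..., n_dofs.
    rng = random.Random(seed)
    return [ua + rng.gauss(0.0, sigma) for ua in u]

u = [0.01 * i for i in range(10)]   # toy displacement vector (n_dofs = 10)
u_noisy = add_noise(u, sigma=1e-3)
print(max(abs(a - b) for a, b in zip(u, u_noisy)))  # perturbation on the order of sigma
```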
Choosing the Grammar For the reasons explained in Section 1.3, we adopt an RTG for the data-driven discovery of the material model. With respect to the grammars used so far (e.g. the CFG G_HMp in eq. (11)), we introduce some simplifications; namely, we consider the terminals J − 1, Ĩ1 − 3 and Ĩ2 − 3 to promote expressions for which the normalization holds a priori. However, the normalization conditions are not strictly enforced, as the final expression could contain operations that violate them, e.g. additions between scalars. Moreover, we rewrite C as Ψ1 + Ψ2 + Ψ3 to derive final expressions containing a smaller number of operations, such that the grammar derives parsimonious expressions. The designed RTG, ĜHMs, is defined as follows, with ranked alphabet

Σ = {+ : 2, • : 2, (•)² : 1, (•)³ : 1, log : 1, (Ĩ1 − 3) : 0, (Ĩ2 − 3) : 0, (J − 1) : 0, 0.2 : 0, 0.5 : 0, 0.7 : 0, 1.5 : 0, 2 : 0, 3 : 0}. (18)

In eq. (18), as done previously, we use the symbol "|" to separate grammar rules with the same left-hand side. The grammar ĜHMs contains additions and multiplications between terminals and can thus potentially derive the NH model. However, the log(Ĩ2/3) operation of the GT model and the additional + operations of the IS and HW models cannot be used when deriving the library of constitutive laws; hence these constitutive laws cannot be recovered exactly. Thus, for the data obtained using these laws, our goal is to discover parsimonious expressions that deliver a physical behavior similar to the baseline using the available grammar ĜHMs. Moreover, in the case of the OG model, the principal stretches are not included in the grammar. Therefore, the symbolic regression pipeline is forced to discover a surrogate that performs closely to the OG model using the invariants.
Training Setup We derive expressions from the language L(ĜHMs) and construct a library of N_train = 600,000 valid constitutive models for different hyperelastic materials. We then train the recursive tree VAE model on this library. For the VAE, we set β = 0.0001 in eq. (16) and α = 0.0001 for the reparameterization trick, see Section 3.1. We choose the latent dimension of the VAE as n = 8, the learning rate as 0.001 and the width of the recursive encoder and decoder as 128. We check the performance of the model against N_test = 100 test expressions using a root mean tree edit distance metric [77].
Data-driven Discovery Once the model is trained on the library, we perform gradient-free optimization in the latent space of the RTGVAE to discover different hyperelastic material models using the CMA-ES method. For CMA-ES in the supervised case, we consider 1,000 new solutions (trees) per iteration, a zero mean, a variance of 0.1 and a maximum of 10 iterations per adaptation. In the unsupervised case, we consider 100 new solutions per iteration and a maximum of 15 iterations per adaptation.
In the supervised case, we assess the goodness of fit using a root mean square error metric:

RMSE = sqrt( (1/N_n) Σ_{i=1}^{N_n} (W_i − Ŵ_i)² ),

where W_i and Ŵ_i are respectively the baseline and the predicted strain energy densities at node i, and N_n = 2,752 is the number of nodes in the non-uniform finite element mesh used for data generation. The baseline strain energy density is computed from the available dense (full-field) displacement measurements using the baseline expression of the constitutive law. The computation of the root mean square error is performed using the SymEngine [78] SymPy interface to decrease the wall-clock time. For assessing the prediction accuracy, we employ the relative L2 error metric referred to the entire domain:

Relative L2 Error = ∥W − Ŵ∥₂ / ∥W∥₂,

and analogously for other quantities, such as the first component of the first Piola-Kirchhoff stress tensor.
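Both metrics are straightforward to compute; the sketch below evaluates them on illustrative toy values of the baseline and predicted strain energy densities:

```python
import math

def rmse(w_true, w_pred):
    # Root mean square error over the N_n mesh nodes.
    n = len(w_true)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(w_true, w_pred)) / n)

def relative_l2_error(w_true, w_pred):
    # ||W - W_hat||_2 / ||W||_2 over the entire domain.
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(w_true, w_pred)))
    den = math.sqrt(sum(a ** 2 for a in w_true))
    return num / den

w_true = [1.0, 2.0, 3.0, 4.0]   # toy nodal strain energy densities
w_pred = [1.1, 1.9, 3.2, 3.8]
print(round(rmse(w_true, w_pred), 4), round(relative_l2_error(w_true, w_pred), 4))
```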
For the unsupervised learning scenario, we realistically assume the displacement field and the reaction force to be known, and we compensate for the unknown baseline strain energy densities by enforcing the balance of linear momentum, both locally in the interior of the domain and globally on the loaded boundary, through the loss function in [72]; see [72] for more details. We additionally apply a penalty term to approximately satisfy the stress normalization conditions.
In Table 1, we report the wall-clock time to run one iteration of the CMA-ES method with respect to the number of new solutions generated at each adaptation step, on an Alienware m16 laptop with an Intel i9-13900HX CPU and 32 GB RAM.
New solutions per iteration                100    500    1,000    5,000
Supervised case: wall-clock time (s)         1      5       11       60
Unsupervised case: wall-clock time (s)      60    600    1,300    7,000

Table 1: Approximate wall-clock time in seconds of the discovery process for different numbers of new solutions per iteration used for the covariance adaptation.
Assessing the Discovery Process for the NH Model First, we test the performance of the symbolic regression algorithm in discovering the NH model from displacement data with σ* = 0, 10⁻⁴, 10⁻³. The best discovered expressions, as well as the relative L2 errors between the baseline and the predicted strain energy density for both the supervised and the unsupervised cases, are presented in Table 2 for different noise levels. Being contained in the language generated by the starting grammar, the NH model is discovered exactly for all cases in the unsupervised setting. In the supervised setting, it is discovered exactly for the noise-free case and for the noisy case with σ* = 10⁻⁴. For the σ* = 10⁻³ case, a very accurate approximation is discovered, with a relative L2 error of 1.3%.
Assessing the Discovery Process for the IS, HW, GT and OG Models We then run the discovery algorithm for each of the IS, HW, GT and OG models separately. Since none of these models is contained in the language generated by the grammar in use, we do not expect an exact discovery. The best discovered models for the noise-free, σ* = 10⁻⁴ and σ* = 10⁻³ cases, as well as the relative L2 errors between the baseline and the predicted strain energy density, are presented in Table 3. We observe that for the IS and the HW models the relative L2 error is less than 1.5% for all noise levels irrespective of supervision, while for the GT model the error increases to 4.2% for the 10⁻³ noise level in the supervised case and to 4.7% in the unsupervised case. For the OG model, the relative error is between 2 and 3% for all cases in the supervised setting and 4.7% in the unsupervised one.

Table 2: Best discovered expressions and relative L2 errors between the best predicted and the baseline strain energy density for the NH model, obtained from data with different noise levels for both the supervised and the unsupervised cases.
We note that the models discovered from the IS data are very close to the baseline ones. While in the grammar (18) we defined the constitutive law as Ψ1 + Ψ2 + Ψ3, which means that the expressions in the library contain two binary + operations, the discovery process returns an expression containing three binary + operations; this indicates that the decoding process can predict expressions that are not limited to the strict structure of those contained in the library. Thus, interestingly, the discovered models for the noiseless and the lowest-noise cases have the same structure as the baseline model, but different material constants. We attribute the discrepancy in the constants to inaccuracies of the symbolic regression algorithm and the inability of the CMA-ES method to determine the global minimum of the optimization problem. Also interestingly, the predicted expression for the HW model is less complicated than the baseline and yet achieves excellent accuracy, with an error below 1% independently of the noise level. In the supervised case, the discovered GT expression is a linearized version of the baseline, corresponding to a generalized Mooney-Rivlin model, which is reasonable given the relatively limited amount of deformation included in the data (a similar result is reported in [2]). Interestingly, the discovered constitutive laws for the OG model also have the same structure as the linearized GT (or generalized Mooney-Rivlin) model, only with different constants. In the unsupervised case, the process discovers the exact same model for both the OG and the GT baselines. These results further reflect the dependence of the inference process on the richness of the available dataset, including the amount of non-linearity that is observed.

Generalization to Different Loading Conditions
In practice, we are interested in expressions that describe the behavior of a material under general loading conditions. For this reason, we now assess the performance of the discovered models in an extrapolation setup (i.e. for problems different from the one in Figure 7). For each discovered model, we perform the six simple loading tests in eq. (7) in the range γ ∈ [0, 1]. We compare the baseline and the discovered constitutive laws in terms of strain energy density and first component of the first Piola-Kirchhoff stress tensor for all six loading conditions and noise levels. We present the results for the NH, IS, HW and GT models in Figures 11, 12, 13 and 14, respectively.
As expected, the NH model predictions are identical to the baseline for the noise-free and σ* = 10⁻⁴ cases, while they are very close to the baseline for σ* = 10⁻³. For the IS model, the predictions for the strain energy density in the noise-free case are very close to the baseline, while for the noisy cases we observe discrepancies, especially in UC. The predictions for the first component of the first Piola-Kirchhoff stress tensor are very accurate for the noise-free case, but present discrepancies for the noisy cases, especially for SS and UC loading. For the HW model, the predictions for the strain energy density are quite accurate for all loading conditions and noise levels, except for UC and BC, where we observe discrepancies at higher deformations. For the first component of the first Piola-Kirchhoff stress, we observe more pronounced discrepancies, especially for the UC, SS and PS loading conditions and in the presence of noise. For the GT model, we observe very good agreement between the baseline and the predicted model for all loading conditions, for both strain energy density and stress, in the noise-free and σ* = 10⁻⁴ cases. However, for the noise level σ* = 10⁻³ we observe large discrepancies for most loading conditions, which is reasonable considering the significant level of noise contamination and the usage of the raw noise-affected data with no denoising. For all the models, and across all loading cases, slightly larger discrepancies are obtained for the loading conditions involving shear. This can be attributed to the fact that the data used for discovery stem from biaxial tension only, which provides limited information on the shear response.

Table 3 (supervised case, excerpt): Model, Noise Level | Discovered Expression | Relative L2 Error — IS, σ* = 0: W = Ĩ1 − 3 + 0.5…

Conclusions
In this paper, we proposed formal grammars (specifically, Context-Free Grammars and Regular Tree Grammars) as conceptual and practical tools for the automated discovery of material models, featuring low complexity, the ability to systematically embed constraints stemming from domain knowledge, and generalization capability. Leveraging formal grammars specifically designed for constitutive laws (focusing on hyperelasticity in this initial paper), we achieved two important goals. On the one hand, we could automatically generate extensive libraries of hyperelastic material models including a desired, potentially very large number of terms which satisfy the relevant physics constraints, combine high expressivity with parsimony, and can be deployed for model discovery based on sparse regression. On the other hand, we proposed and explored a grammar-based symbolic regression pipeline for constitutive law discovery, composed of a library construction step, a pre-training step and a data-driven discovery step, and leveraging a combination of the Recursive Tree Grammar Variational Autoencoder method [42] and the Covariance Matrix Adaptation Evolutionary Strategy [43,44] for gradient-free optimization. We demonstrated that the approach is able to discover accurate and parsimonious hyperelastic models on four different benchmark cases. We hereby put forth an important first step toward establishing the language of material models, along with a number of possible useful downstream tasks and applications.

However, the proposed method in its current form presents certain limitations. First, the efficiency and computational wall-clock time of the library generation are limited by the rate of rejection of the expressions, because some of the physics constraints are not imposed a priori but are checked after the final expression is generated. Moreover, the decoder of the VAE does not allow checking the semantic validity of the generated expression during the decoding process, which implies that expressions may be yielded that do not satisfy the constitutive law constraints. A naive approach for imposing these constraints on the output of the decoder would be the following: consider a strain energy function of the form W(F) = W̃(F) + W0 + Wc. In this case, the symbolic regression model learns to predict W̃(F), and the corrections W0 and Wc are applied after the decoding takes place as physics-informed constraints. Even though this is possible, this particular approach has two drawbacks: a) the evaluation of the corrections relies on symbolic computations, which makes the training prohibitively expensive, and b) as the physics-informed constraints are not a natural part of the grammar (and therefore of the encoding process), they are likely to disorient the optimization process and move it away from possibly suitable areas of the solution space. A more involved approach to adding physics constraints would be through the definition of an attribute grammar [79], where the attribute rules enforce the semantics, in this case the constitutive law constraints. Furthermore, the CMA-ES method is an evolutionary strategy, which implies that different runs of CMA-ES can lead to different results and to expressions offering different levels of approximation to the baseline. Additionally, the decoder is not suited to run on GPUs, as it sequentially decides on the most probable grammar rule given a non-terminal symbol from a list of non-terminals; this limitation applies whenever one chooses to work with tree structures. Finally, in this paper we considered grammars that only describe the behavior of isotropic hyperelastic materials, but the proposed methodology is extendable to more general and complex classes of material behavior. All these limitations will be addressed in forthcoming research.

A Representation of Mathematical Expressions
Mathematical expressions can be represented using graphs, which can then be used to perform downstream regression or classification tasks. In a general approach, we can view mathematical operations, variables or expressions, which we collectively call primitives, as graph node labels, and define different expressions as different ways of connecting these nodes. To explain this concept, we first introduce some basic terminology on graphs, S-expressions and Polish notation. We then summarize the construction of mathematical expressions using graph representations and their utilization in the context of symbolic regression.
Basic Glossary on Graphs Throughout the manuscript, we use specific nomenclature to describe different graphs and their nodes. For this purpose, we provide a short glossary accompanied by Figure 16 to explain these terms. A general graph is a topology whose edges are undirected, meaning that moving both from x to 3 and from 3 to x is allowed; all nodes can be connected with all other nodes without restrictions, and cycles, including nodes connected with themselves, are allowed to exist. A directed acyclic graph is a topology where the edges are directed, meaning that moving only from x to 3 is allowed; all nodes can be connected with all other nodes, but no cycles are allowed. A weighted directed acyclic graph is a directed acyclic graph whose edges also carry weights. The weights can be thought of as the importance of the edges, or as how strongly correlated two nodes are. A tree is a graph topology in which nodes are connected to other nodes in a specific hierarchy. The node − is called the parent of the nodes • and y, which are called the children of −. A tree takes its name from the maximum number of children appearing in it. For example, a tree where nodes have at most two children is called binary, with three ternary, and with k k-ary. The nodes without children are called leaf nodes, and the node without a parent is the root node of the tree; see the green and magenta nodes in Figure 16, respectively.
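The tree terminology above maps directly onto a minimal data structure; the sketch below builds the binary tree of the expression x • 3 − y (the one traced in Figure 17) and checks the parent/child/leaf/root relations:

```python
class Node:
    # Minimal tree node: a label, an optional parent, and ordered children.
    def __init__(self, label):
        self.label, self.parent, self.children = label, None, []

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

    def is_leaf(self):
        return not self.children

    def is_root(self):
        return self.parent is None

# Build the binary tree for the expression (x * 3) - y.
root = Node("-")
times = root.add(Node("*"))
times.add(Node("x"))
times.add(Node("3"))
y = root.add(Node("y"))

print(root.is_root(), y.is_leaf(), [c.label for c in root.children])
```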
Graph and Tree Representations, S-expressions A graph representation of a mathematical expression is one where the graph nodes represent primitives, see Figure 17 left, and the edges of the graph describe different possible orders of operations. By considering orders of operations between the primitives, implying a direction on the graph edges, and different paths on the graph, possible expressions are defined, see Figure 17 middle. A more simplified, yet also more restricted, representation that a priori assumes the order of the operations is the tree representation, see Figure 17 right. Any tree can be represented using a recursive structure of nested lists called symbolic expressions or S-expressions.
Polish Notation To better clarify the relation between a mathematical formula and a tree, we introduce the so-called Polish notation. This is a mathematical notation in which operators precede their operands. Mathematical operations on any number of arguments written in Polish notation can be easily translated into trees, because their nested operation format is compatible with S-expressions. For example, let us consider the red path in Figure 17, which provides the expression x • 3 − y. We can write this expression in Polish notation as −(•(x, 3), y) and then translate it into a tree structure, as shown in Figure 17 right.
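The correspondence between Polish notation, S-expressions and trees can be made concrete in a few lines of code; nested tuples play the role of S-expressions here, and the operator set is a minimal illustrative choice:

```python
# A tree in Polish notation maps directly onto nested tuples (S-expressions):
# x * 3 - y  ->  -( *(x, 3), y )  ->  ("-", ("*", "x", 3), "y")

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def evaluate(node, env):
    # Leaves are numbers or variable names; internal nodes are (op, left, right).
    if isinstance(node, (int, float)):
        return node
    if isinstance(node, str):
        return env[node]
    op, left, right = node
    return OPS[op](evaluate(left, env), evaluate(right, env))

tree = ("-", ("*", "x", 3), "y")   # the red-path expression x*3 - y
print(evaluate(tree, {"x": 2.0, "y": 1.0}))  # 2*3 - 1 = 5.0
```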
Construction of Expressions using Graph Representations By choosing the possible order of operations between primitives, either by considering different paths in directed acyclic graphs or different trees, we can generate expressions. Evidently, not all paths provide meaningful mathematical expressions; e.g. in Figure 17, the blue path delivers x13y. To ensure that only meaningful expressions are generated, constraints can be imposed on the choice of the graph paths or, in the case of trees, on the choice of the children given the parent. It is also clear that the space of possible expressions grows exponentially with the number of nodes; thus the choice of the primitives plays a crucial role in the size of this space.

Discovering Expressions from Data (Symbolic Regression) As discussed in the previous paragraphs, expressions are constructed by choosing orders of operations between nodes and paths that provide meaningful results. Symbolic regression is a systematic way of finding a path in a graph or a tree that corresponds to the expression that best fits given data. Different symbolic regression algorithms differ in their choices of i. primitives, i.e. whether they are only mathematical operations and variables or whether they can also consist of sub-expressions; ii. graph representations, i.e. directed acyclic graph or tree representations; iii. schemes to enforce that the resulting expressions are meaningful. A further choice is that of the objective function to be minimized to optimize the fit to the given data, see Figure 18.

B Regular Tree Grammars
The purpose of this section is to provide a more detailed description of the properties of RTGs, and more specifically of how production rules and trees/sub-trees are defined. Moreover, we present the parsing and generation algorithms with simple examples.
Properties of Regular Tree Grammars An RTG is defined as the 4-tuple Ĝ = {Φ, Σ, R, S}, where Φ is a set of non-terminal symbols, Σ a ranked alphabet, R a set of production rules, and S the starting non-terminal symbol. Consider for simplicity Σ = {+ : 2, a : 0, 3 : 0} and a + 3 a derivation of the grammar using a list of production rules. Writing a + 3 in Polish notation as +(a, 3), we can define a tree as ŵ = s(t̂1, t̂2), where s = +, t̂1 = a and t̂2 = 3. More generally, a tree with k children, a k-ary tree, can be defined as ŵ = s(t̂1, ..., t̂k), where s ∈ Σ is an element of the alphabet and t̂1, ..., t̂k are sub-trees defined using rules involving elements of the alphabet. In this definition, k = 0 denotes leaf nodes, i.e. nodes without children. The production rules are defined as Ψ → ŵ, where Ψ ∈ Φ is a non-terminal symbol and ŵ a possible sub-tree derived by performing recursive substitutions of non-terminal symbols, starting from Ψ, using production rules that have Ψ on their left-hand side. More specifically, the production rule for an operation s of arity k of the alphabet is defined as Ψ → s(L1, ..., Lk), where L1, ..., Lk ∈ Φ are non-terminal symbols. As discussed in the main text, the trees produced by a grammar are characterized by production rules. In RTGs, the trees are derived in a recursive, hierarchical manner, and we represent this hierarchical structure of rules using nested lists. We provide examples of sub-tree expressions derived starting from symbol Ψ of ĜNH, see eq. (20), together with the nested lists of production rules that derive them, in Figure 19.

Expression: •(a, (•)²(−(I1, a))), Rules: r = [2, [6, [4, [3, [10, 6]]]]]

Figure 19: Examples of sub-trees produced by the grammar. We present the derivations of the sub-trees in Polish notation to emphasize the fact that we are now working with tree grammars.
We now use the expressions in Figure 19 as examples, together with the grammar ĜNH presented in eq. (20), to explain how the nested list structures work. The list containing the rules that generate the expression −(J, a) comprises two levels. The second level, [11, 6], corresponds to the rules that generate the left and the right children of −, respectively. The first level, [3, ...], corresponds to the rule that generates −. Therefore, what the nested list [3, [11, 6]] represents in words is: "the first level of the tree is generated using rule (3), and on the second level the left child is generated using rule (11) and the right child using rule (6)". For more complicated cases, the same logic applies. For example, in the expression •(a, (•)²(−(I1, a))), the nested list [2, [6, [4, [3, [10, 6]]]]] can be connected to the tree structure as follows: • The first level of the tree is derived by rule (2), which then creates two children, meaning r = [2, [ , ]].
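The expansion of such a nested rule list into an expression can be sketched as follows. Note that the rule table and the flat-children list convention used here are illustrative assumptions, chosen to mimic (not reproduce) the numbered rules of ĜNH:

```python
# Hypothetical rule table loosely mimicking the numbered rules of G_NH:
# each entry maps a rule id to (node label, arity).
RULES = {
    2: ("*", 2), 3: ("-", 2), 4: ("(.)^2", 1),
    6: ("a", 0), 10: ("I1", 0), 11: ("J", 0),
}

def build(nested):
    # A flat-children convention: [rule_id, child, child, ...]; a bare integer
    # denotes a leaf rule. Each entry is expanded recursively into Polish notation.
    if isinstance(nested, int):
        label, _ = RULES[nested]
        return label
    rule_id, *children = nested
    label, _ = RULES[rule_id]
    return "{}({})".format(label, ", ".join(build(c) for c in children))

print(build([3, 11, 6]))                 # -(J, a)
print(build([2, 6, [4, [3, 10, 6]]]))    # *(a, (.)^2(-(I1, a)))
```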

Figure 3: Schematic explanation of a tree derivation using a grammar. The non-terminals are color-coded based on the grammar rules that substitute them. For example, L is substituted by 3, I1 or J and is re-written by the grammar rules L → 3 | I1 | J. On the other hand, Ψ and S are re-written by +, −, • or pow, corresponding to S → +(L, Ψ) | −(L, Ψ) and Ψ → pow(L, L) | •(L, L), respectively. The black arrows show two possible tree derivations generated by these substitutions.

Figure 4: Binary trees derived from the context-free grammar G_NH; they are all part of the language L(G_NH).
In the undeformed configuration, i.e. for F = I, it must hold that W(F = I) = 0 and P(F = I) = 0.

Figure 6: Left: the tree representation of W = 3 • Ĩ1² + (J − 0.5)². Middle: the tree representation of the expression W = 3 • Ĩ1² + (J − 0.5)² − 26.25 − J, with the correction as a sub-tree attached to the original expression. Right: the displacement plot with W used as constitutive law for the boundary value problem in Figure 7 with δ = 0.3.

Figure 7: Benchmark boundary value problem considered in this work, adapted from [2].

Figure 9: We choose four constitutive law relations out of those presented in Figure 8 and perform a finite element simulation for the benchmark boundary value problem in Figure 7. We present the results for δ = 0.3.

Figure 10: A schematic representation of the recursive tree VAE process for the case of W(F) = (J − 0.5)² + 3 • Ĩ1². Left: the encoder maps the representation of the children to the representation of the parent using f_r, where r is the rule used to produce the children. The process starts from the leaf nodes until it reaches the root and encodes the whole tree. Middle: the reparameterization trick is performed to sample a latent vector z. Right: the decoder generates a tree from a low-dimensional vector z by first using the function h_Ψ to decide the next rule to be applied, beginning from the starting symbol, and then the functions g_{r,j} to predict the vectorial representations of the children.

Figure 11: Neo-Hookean model. Left: comparison between the baseline and the predicted strain energy density. Right: comparison between the first component of the Piola-Kirchhoff stress tensor for loading conditions not in the training set, evaluated for different displacement magnitudes parameterized by γ. For all cases, the solid red line represents the baseline, the dashed blue line the prediction for noise-free data, the dashed orange line the prediction for σ* = 10^−4, and the dashed green line the prediction for σ* = 10^−3.

Figure 12: Isihara model. Left: comparison between the baseline and the predicted strain energy density. Right: comparison between the first component of the Piola-Kirchhoff stress tensor for loading conditions not in the training set, evaluated for different displacement magnitudes parameterized by γ. For all cases, the solid red line represents the baseline, the dashed blue line the prediction for noise-free data, the dashed orange line the prediction for σ* = 10^−4, and the dashed green line the prediction for σ* = 10^−3.

Figure 13: Haines-Wilson model. Left: comparison between the baseline and the predicted strain energy density. Right: comparison between the first component of the Piola-Kirchhoff stress tensor for loading conditions not in the training set, evaluated for different displacement magnitudes parameterized by γ. For all cases, the solid red line represents the baseline, the dashed blue line the prediction for noise-free data, the dashed orange line the prediction for σ* = 10^−4, and the dashed green line the prediction for σ* = 10^−3.

Figure 14: Gent-Thomas model. Left: comparison between the baseline and the predicted strain energy density. Right: comparison between the first component of the Piola-Kirchhoff stress tensor for loading conditions not in the training set, evaluated for different displacement magnitudes parameterized by γ. For all cases, the solid red line represents the baseline, the dashed blue line the prediction for noise-free data, the dashed orange line the prediction for σ* = 10^−4, and the dashed green line the prediction for σ* = 10^−3.

Figure 15: Ogden model. Left: comparison between the baseline and the predicted strain energy density. Right: comparison between the first component of the Piola-Kirchhoff stress tensor for loading conditions not in the training set, evaluated for different displacement magnitudes parameterized by γ. For all cases, the solid red line represents the baseline, the dashed blue line the prediction for noise-free data, the dashed orange line the prediction for σ* = 10^−4, and the dashed green line the prediction for σ* = 10^−3.

Figure 16: Examples of four graph topologies. Far left: a general graph; middle left: a directed acyclic graph; middle right: a weighted directed acyclic graph; far right: a binary tree.

An example of a general graph: each node contains a primitive, i.e. either a mathematical operation or a variable. The path in red provides the expression x · 3 − y, or −(·(x, 3), y) in Polish notation. The path in blue does not deliver a meaningful expression. Tree representation of −(·(x, 3), y).
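An expression in this tree form can be evaluated directly by recursion over the nodes. The snippet below is a minimal illustration using the figure's example; the operator set and the tuple encoding of the tree are our own choices.

```python
# Evaluate the prefix-notation tree -( *(x, 3), y ) from the figure.
OPS = {"-": lambda a, b: a - b, "*": lambda a, b: a * b}

def evaluate(tree, env):
    """Recursively evaluate a (op, left, right) tuple tree; strings are
    looked up as variables, numbers are returned as-is."""
    if isinstance(tree, (int, float)):
        return tree
    if isinstance(tree, str):
        return env[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, env), evaluate(right, env))

tree = ("-", ("*", "x", 3), "y")
print(evaluate(tree, {"x": 2, "y": 1}))  # 2*3 - 1 = 5
```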

Figure 17: Left: a possible representation of mathematical expressions using an acyclic graph. Middle: examples of paths on a directed acyclic graph. Right: an example of a tree.

Figure 18: Symbolic regression methods can be constructed by choosing: i. a set of primitives consisting of operations and sub-expressions (top left); ii. a representation of the possible functions, i.e. directed acyclic graphs (top right); and iii. a way of sampling valid expressions based on constraints.

Table 3: Best discovered expressions and relative L_2 errors between the best predicted and the baseline strain energy density for the IS, HW and GT models, obtained from data with different noise levels, for both the supervised and the unsupervised cases.

...explained by the fact that the extent of shear in the training data is quite limited. Similar observations were made in previous works with model discovery approaches based on SR [2].