Differentiable graph-structured models for inverse design of lattice materials

Architected materials possessing physico-chemical properties adaptable to disparate environmental conditions embody a disruptive new domain of materials science. Fueled by advances in digital design and fabrication, materials shaped into lattice topologies enable a degree of property customization not afforded to bulk materials. A promising avenue for design inspiration lies in the irregular micro-architectures of nature. However, the immense design variability unlocked by such irregularity is challenging to probe analytically. Here, we propose a new computational approach using a graph-based representation for regular and irregular lattice materials. Our method uses differentiable message passing algorithms to calculate mechanical properties, thereby allowing automatic differentiation with surrogate derivatives to adjust both the geometric structure and local attributes of individual lattice elements, yielding inversely designed materials with desired properties. We further introduce a graph neural network surrogate model for structural analysis at scale. The methodology is generalizable to any system representable as heterogeneous graphs.


Introduction
Materials used to build future space infrastructure, especially those built directly on other planetary bodies, will be required to meet demanding conditions with environment-specific material properties, yet still be relatively easy to process and shape 1. The constraints imposed by local planetary resources limit the palette of material compositions that engineers can exploit to meet desired performance. Much like in nature, engineers will thus come to rely on the optimisation of a material's topology in addition to its chemical makeup in order to achieve the desired properties under the limitations of the local milieu. Here on Earth, bone, plant stems, dragonfly wings, coral, and radiolarians 2 are just some examples of natural lattice materials that showcase how intricate architecture is used to achieve extreme mechanical performance with a limited choice of constituents 3. Enhancements in, e.g., strength, stiffness, impact toughness, fluid transport, and thermal insulation are all found while conserving light weight and minimizing mass transport. Moreover, an understanding of topological features often unlocks deformation modes and damage-tolerant mechanisms not achieved by the bulk material alone.
Inspired by this, synthetic truss-based lattice materials - a subset of so-called Architected Materials - comprise a highly active area of research in materials science. This activity is largely owed to the emergence of modern digital design and fabrication tools like 3D printing, or more formally, additive manufacturing. At the European Space Agency (ESA), capabilities for 3D printing lattice materials from a variety of space-relevant polymers, metals and in-situ planetary regoliths have been pioneered over the past decade 4,5. The geometric freedom afforded by this technology, however, also creates an overwhelming design space of possible topological features, much like in nature. Human engineering has not been afforded the same evolutionary timescales as the natural optimisation pathways that drive biology's architectures. As such, the complete lattice material design space cannot be explored by engineering intuition alone.
An emergent stream in computational materials science demonstrates high potential for machine learning (ML)-driven design to aid engineers in the exploration of chemical and topological landscapes. Recent years have seen unprecedented demonstrations of physics-informed learning models capable of high-fidelity material property predictions based on atomic-level interactions. Key among them are, e.g., the deployment of graph neural networks (GNNs) to demystify previously unsolved phase transition dynamics in glassy systems 6, and the use of generative adversarial networks (GANs) and variational autoencoders (VAEs) to predict complex molecular topologies in crystalline nanoporous materials like zeolites and metal-organic frameworks 7,8.
Along with these developments, research into mechanical metamaterials has also pivoted towards ML as a means to augment the engineer's intuition with a data-driven geometric design language 9-18. Inverse design - the prescription of desired target properties and optimisation over various candidate microstructures - has taken off as a vehicle for ML-aided simulations. Whether supervised (data-driven) 19,20 or unsupervised (physics-driven) 21, these ML-based prediction models more accurately determine lattice mechanics, and dramatically accelerate the search for candidate architectures that meet target mechanical properties, forgoing the computationally expensive meshing required in finite element (FE) based simulations and conventional Topology Optimisation calculations 22.
Conventionally, these mechanical lattice materials are formed from periodically repeating unit cells, as this makes their properties addressable analytically and via numerical homogenization. Recent studies have pioneered deep neural networks toward inverse design of auxetic lattices, mapping subtle design variations in unit cell construction parameters to gains in total effective lattice properties via surrogate models of homogenization at much lower computational cost 23,24. While powerful, homogenization assumes uniformity of lattice properties throughout the global material, and does not account for, e.g., local imperfections, stress concentrations and edge-effects from partially constrained unit cells.

[Figure 1: A lattice material is represented as a graph L (left) which contains vector attributes both on the nodes (e.g. position) and edges (e.g. Young's modulus and beam cross-sectional area). A differentiable forward model F takes the graph as input and predicts material properties. Comparing these predictions with desired properties, automatic differentiation is used to change the material (e.g., move nodes and remove or add beams) to better satisfy those properties. This process is repeated iteratively until a material with the desired properties is found.]
Contrary to periodic lattice materials used for lightweighting (such as hexagonal honeycombs), irregular architected materials - similar to those found in nature - have been shown to boast extraordinary damage tolerance 25, anisotropic functional grading 26 and other surprising properties emerging from local defects and aperiodicity. Control over the most mechanically 'beneficial' features of these irregular tilings must happen at local defects and cannot be exerted analytically 27, leaving a near-infinite space of heterogeneous geometric combinations not easily modelled. For this reason, most modern synthetic lattice designs have been restricted to periodic structures, leaving out an ever-growing design space whose potential remains untapped.
To tap into this design space, we propose a gradient-descent-based optimisation approach for inverse-designing emergent properties of irregular lattices. The core idea of our methodology is to represent lattice materials as heterogeneous graphs and perform computations directly on the topology of this graph using an operation called 'message passing'. This allows us to seamlessly link effective mechanical properties of the superstructure to local lattice elements (e.g. defects) in a differentiable way - hence enabling the usage of gradient descent to iteratively apply targeted modifications to the lattice, reaching a design with desired properties after only tens to hundreds of iterations.
More specifically, using message passing, we construct a differentiable forward model that predicts mechanical properties of lattice materials, which is then used in reverse through the application of automatic differentiation to change local lattice properties - such as the cross-sectional area, parent material composition, and node positions of individual beams - until a lattice structure with a set of desired global mechanical properties has been found (Figure 1). Furthermore, by combining this inverse-design approach with a technique from computational neuroscience called 'surrogate gradients' 28, removal or addition of beams to the lattice using gradient information during inverse design is enabled, opening up the possibility to quickly move through a huge space of candidate designs.
Conceptually, the introduced approach is inspired by recent work using GNNs to predict mechanical properties of periodic metamaterials, where the GNN was trained only on the periodic unit cell structure 29. Our work extends this by enabling real-time structural feedback and inverse design for arbitrary aperiodic lattices, including complicated irregular topologies with local defects as found in natural materials.
In the following, we first provide a brief description of the methods used to model and characterize lattice materials. Afterwards, we introduce the general idea of our proposed inverse design methodology, which we denote as 'differentiable lattices'. Finally, two realizations of differentiable lattices are presented: one based on an exact finite element method (requiring no training on data) and one using an approximate surrogate model, i.e., a GNN trained on simulated data. With both realizations, we demonstrate inverse design via gradient descent. Our initial focus is on the in-plane elastic properties of two-dimensional lattices for simplicity, though the methodology is easily transferred to three-dimensionally architected designs.

2D lattice materials
Before introducing differentiable lattices, we briefly summarise how 2D lattice materials are modelled in the remainder of this work. For more details on the used methods and concepts, see the experimental procedures. For details on simulations, see the supplemental experimental procedures.

Notation and units
In general, sets are denoted using calligraphic letters, vectors and matrices using bold letters, and scalars using normal letters. We summarise a lattice L as a tuple L = (X, E, A_E) consisting of node coordinates r_i = (x_i, y_i) ∈ X for each node i; the set of edges (beams) E between nodes, i.e., (i, j) ∈ E if an edge exists between nodes i and j; and edge attributes a_ij ∈ A_E. For instance, lattice beams are characterized by their local Young's modulus E, cross-sectional area A and second moment of area I. In principle, these can be chosen differently for each beam in the lattice. For simplicity, we choose one global value for all beams here. Thus, for a homogeneous lattice, we have a_ij = (E, A, I) ∀ (i, j) ∈ E.
Distances between nodes are given by r_ij = r_j − r_i. The length and orientation of a beam between nodes i and j are given by 30

L_ij = |r_ij| = √((x_j − x_i)² + (y_j − y_i)²),  (1a)
c_ij = cos ϑ_ij = (x_j − x_i) / L_ij,  (1b)
s_ij = sin ϑ_ij = (y_j − y_i) / L_ij.  (1c)

In the following, we abbreviate L = L_ij, s = s_ij and c = c_ij. Here, s and c denote sin ϑ and cos ϑ respectively, where ϑ is the angle a beam makes with the lattice's base plane. For simplicity, we assume that all lattices reside within a normalised bounding box with height b_y and width b_x. The proposed approach can be used for microscopic as well as macroscopic lattice structures. Here, we report results for a chosen set of material input parameters, with E = 2 GPa, b_x = b_y = 1 cm and A = 2 × 10⁻⁵ cm².
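These geometric quantities follow directly from the node coordinates. As an illustrative sketch (in Python with NumPy; the function name is our own, not from the paper):

```python
import numpy as np

def beam_geometry(r_i, r_j):
    """Length L_ij and direction cosines (c_ij, s_ij) of the beam from
    node i to node j, following Equations (1a)-(1c)."""
    dx, dy = np.asarray(r_j, float) - np.asarray(r_i, float)
    L = np.hypot(dx, dy)          # L_ij = |r_j - r_i|
    return L, dx / L, dy / L      # (L_ij, cos(theta), sin(theta))
```

For example, a beam from (0, 0) to (3, 4) has length 5 and direction cosines (0.6, 0.8).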

Modelling 2D lattice materials
To develop a general geometric model for lattice materials, we set out from the simplified case of a 2D lattice material, similar to a honeycomb sandwich panel used commonly in various engineering applications. To model in-plane mechanical properties, we employ the direct stiffness method - a finite element matrix method derived from static analysis - to model the elastic properties of our lattice material 31. In it, a lattice is treated as a collection of connected beams, where each beam between nodes i and j is characterized by its stiffness matrix K_ij. The stiffness equation allows us to calculate the reaction of the beam element to a given load. Here, u_i = (u_i^x, u_i^y, u_i^φ) are the resulting node displacements due to external forces and moments F_i = (F_i^x, F_i^y, M_i^φ), and φ characterises the resulting bending of beam elements. In this work, we use generalized Euler-Bernoulli beam elements that can deform both along and perpendicular to their longitudinal axis (see experimental procedures for details).
The global stiffness matrix G for the whole lattice (i.e., more than two nodes) is constructed by summing up the contributions of each individual stiffness matrix for every node (see experimental procedures). The final stiffness equation for a lattice material with N nodes, relating the stacked displacement vector u = (u_1, …, u_N) to the stacked force vector F = (F_1, …, F_N), is then given by

F = G u.  (3)
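The assembly of G can be sketched as follows. For brevity, this sketch uses axial-only (truss) elements with two degrees of freedom per node rather than the full Euler-Bernoulli elements with bending terms used in the paper; the scatter-add pattern into the global matrix is the same (all names are our own):

```python
import numpy as np

def truss_stiffness(E, A, r_i, r_j):
    # Axial-only (truss) element: a simplification of the full
    # Euler-Bernoulli beam, which also carries bending DOFs.
    dx, dy = np.asarray(r_j, float) - np.asarray(r_i, float)
    L = np.hypot(dx, dy)
    c, s = dx / L, dy / L
    b = np.array([-c, -s, c, s])
    return (E * A / L) * np.outer(b, b)      # 4x4 element stiffness K_ij

def assemble_global(nodes, edges, E, A):
    # Sum each element matrix into the (2N x 2N) global stiffness matrix G.
    N = len(nodes)
    G = np.zeros((2 * N, 2 * N))
    for i, j in edges:
        dof = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
        G[np.ix_(dof, dof)] += truss_stiffness(E, A, nodes[i], nodes[j])
    return G
```

The resulting G is symmetric, and fixing boundary nodes then allows solving F = G u for the free displacements.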

Characterising 2D lattice materials
A variety of mechanical in-plane properties are available to characterize the behaviour and functionality of 2D lattice materials. In this work, we focus on the relative density ρ, effective elastic modulus E* and Poisson's ratio ν*. The asterisk denotes "effective" material properties for the entire lattice material, i.e., a global response, not just individual beams. For non-isotropic materials, the elastic modulus and Poisson's ratio are direction-dependent. Here, as a proof of concept, we restrict ourselves to these quantities measured along the vertical axis. However, the presented results are applicable to properties measured along any axis, e.g., to inverse design the elastic modulus in the horizontal and vertical directions at the same time.
A detailed description of these mechanical properties and how they are obtained using the direct stiffness method is given in the experimental procedures. For regular grids in 2D, E* and ν* can be determined analytically 32-34 (see Table S1). We use this to test our numerical approach, confirming that analytical and numerical values agree for regular square, equilateral triangular, hexagonal honeycomb and reentrant honeycomb lattices with differing relative densities (Figure 2).

Differentiable lattices
In the following, we first introduce the general graph-based framework for inverse designing lattice materials. Subsequently, we demonstrate two different realizations of this framework: one using the exact direct stiffness method, and one using GNNs trained on simulated data.

Representing lattices as graphs
Lattices lend themselves to being modelled as graphs, with edges representing beams and nodes representing the locations where beams connect with each other. This allows an efficient and expressive description of lattices, where additional information such as node coordinates, beam cross-sectional area, and beam Young's modulus can be encoded as node and edge features (i.e., real-valued vectors stored on nodes and edges) - something that is not possible when representing lattices as, e.g., images.

Calculating on graphs
A recent and widely adopted approach to performing calculations on graph-structured data is message passing 35-37. Message passing describes information flow between nodes that are directly connected by an edge, meaning that calculations are performed on edges using feature vectors that are locally available to that edge. Usually, operations using message passing can be decomposed into two steps: a messaging step and a reduction step (Figure 3). In the messaging step, node features are sent along edges, where they are combined with edge features (and other node features) to calculate new edge features. In the reduction step, newly calculated edge features are sent to neighbouring nodes and combined to form new node features. These operations are performed on all edges and nodes in parallel, allowing efficient and scalable computations on the graph structure itself. Most importantly, this realizes differentiable operations on the discrete structure of the graph, allowing us to utilize gradient-descent-based optimization to, e.g., change initial features or even the structure of the graph itself to change the output of the calculation implemented by message passing. This is enabled because computations follow the structure of the graph: all operations are differentiable, but the sequence of operations (i.e., how and which features are combined) is determined by the connectivity of the graph.

[Figure 2: Comparison between analytical values (Table S1) and values obtained using our finite element method. We show the effective elastic modulus (left) and Poisson's ratio (right) for different tilings (square, equilateral triangle, honeycomb and reentrant honeycomb) and relative densities. To change the relative density, we repeated numerical experiments for lattices with different numbers of cells and beam cross-sectional areas.]

[Figure 3, steps 2-4: (2) On the edges, propagated node features as well as edge features are combined via a function ϕ0 to create new edge features; ϕ0 can take any shape (e.g., a neural network). (3) Edge features are reduced using a function ϕ1 that satisfies permutation invariance (i.e., the order of arguments does not matter) and is applicable to a varying number of arguments. (4) The node features are updated from the reduced edge features using a function ϕ2 (which, again, can take any shape).]
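The messaging and reduction steps can be sketched generically as follows (plain Python; phi0 and phi2 stand in for arbitrary differentiable maps such as small MLPs, and the reduction is a permutation-invariant sum):

```python
def message_passing_step(x, edges, e, phi0, phi2):
    """One message-passing round on an undirected graph (sketch).
    x: node features {node: value}; e: edge features {(i, j): value}."""
    # Messaging step: combine both endpoint features with the edge feature.
    new_e = {(i, j): phi0(x[i], x[j], e[(i, j)]) for (i, j) in edges}
    # Reduction step: sum incoming messages per node (sum is
    # permutation-invariant and works for any number of neighbours).
    agg = {i: 0.0 for i in x}
    for (i, j), msg in new_e.items():
        agg[i] += msg
        agg[j] += msg   # undirected edge: the message flows both ways
    # Update step: new node features from old features + aggregated messages.
    new_x = {i: phi2(x[i], agg[i]) for i in x}
    return new_x, new_e
```

Because every step is an ordinary differentiable operation, gradients can flow from any downstream output back to the node and edge features.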

Differentiable algorithms for inverse design
We propose to use differentiable message passing algorithms for property prediction on lattices represented as graphs, which in turn can then be used to realize an iterative inverse design approach using gradient-descent-based optimization. In general, we denote by F(L, S) a function that takes node coordinates, graph edges and edge attributes L = (X, E, A_E) as input and returns one or several material properties of interest, e.g., effective elastic modulus, Poisson's ratio or simply the displacement of each node given a certain external distortion. Optionally, F can also take specific constraints S as input, describing which nodes in the lattice are forced or kept fixed. To ease notation, we omit writing S as an argument of F in the following. Although F is still ambiguous here, it can take several shapes, as will be shown later. Generally, F is composed of several message passing steps, followed by pooling operations (combining node features) and standard differentiable operations such as neural networks.
For inverse design, we compare the predicted property F(L) with a desired target value ζ (which, if several properties are predicted, takes the form of a vector). How well prediction and target agree is measured using a loss function L, in our case an L1 loss, L(L) = |F(L) − ζ|. This loss function is used to find a lattice material with the desired target properties by minimizing it using gradient descent, e.g., by iteratively changing the geometry of the lattice (adjusting node positions using ∇r_i or removing/adding beams as described in the next subsection) or by changing the material properties of individual beams (i.e., changing edge features such as the cross-sectional area of individual beams). In this work, we restrict ourselves to geometric changes to find lattices with desired mechanical properties. Gradient descent is implemented using automatic differentiation, which is readily available in current deep learning libraries such as TensorFlow and PyTorch.
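A minimal sketch of such an inverse-design loop is shown below. Central finite differences stand in for the automatic differentiation used in the paper, and the forward model is a toy scalar function rather than a lattice property predictor; the decaying step size is our own addition for stability under the non-smooth L1 loss:

```python
import numpy as np

def inverse_design(forward, x0, target, lr=0.05, steps=200, eps=1e-6):
    """Gradient-descent inverse design (sketch): minimize the L1 loss
    |forward(x) - target| over design parameters x."""
    x = np.asarray(x0, float).copy()
    for t in range(steps):
        grad = np.zeros_like(x)
        for k in range(x.size):
            d = np.zeros_like(x)
            d[k] = eps
            # central finite difference of the L1 loss w.r.t. x[k]
            grad[k] = (abs(forward(x + d) - target)
                       - abs(forward(x - d) - target)) / (2 * eps)
        x -= lr / (1 + 0.1 * t) * grad   # decaying step size for stability
    return x
```

With a differentiable forward model, the finite-difference loop would simply be replaced by a framework's autograd.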

Masking edges
To enable the removal or addition of beams in a lattice using gradient descent, we introduce an approach inspired by Ying et al. 38 where each edge obtains an additional attribute: a mask value m_ij ∈ ℝ that is used to decide whether a beam is realized in the lattice between nodes i and j, i.e., a_ij = (E, A, I, m_ij). In our case, the masking value is turned into a binary decision by applying the Heaviside step function θ(•), which is used to mask away the contribution of an edge during the reduction step - as if it were not present in the lattice (m^θ_ij = 1: beam exists; m^θ_ij = 0: beam does not exist). In Ying et al. 38, masks are only used to remove edges from a graph, since enabling the addition of edges between all possible nodes would be computationally unfeasible. However, in our case, a node can only be connected to a selected few other nodes in its local neighbourhood, since long beams spanning the whole material are not of interest to us. Thus, when starting from, e.g., a triangular lattice, we can include additional candidate beams to neighbouring nodes that are initially masked out, but can be added during the inverse design process. To guarantee that we do not add crossing beams, we generate a list C_(i,j) that contains, for each edge (i, j), the other edges that would physically cross it. From this, a final mask value is obtained which unites the two conditions for a beam to be active in the lattice: its mask value has to be greater than 0, and all other beams that would cross it have to be masked out.
During inverse design, both the list of crossing beams as well as the list of locally neighbouring nodes where beams could be introduced can be adjusted, enabling a complete geometric restructuring of the lattice material.In this work, for simplicity, we only update the list of crossing beams to ensure valid lattice designs with non-crossing beams.
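One plausible way to combine the two activation conditions is sketched below; the exact form in which the paper unites them (its final mask equation) is not reproduced here, so the combination rule is our assumption:

```python
def effective_mask(m, crossing):
    """Combined beam activation (sketch). A beam is active iff its mask
    value is positive AND every beam in its crossing list is masked out.
    The way the two conditions are combined is our assumption, not the
    paper's exact equation."""
    theta = {k: 1.0 if v > 0.0 else 0.0 for k, v in m.items()}  # Heaviside
    active = {}
    for k in m:
        blocked = any(theta[c] > 0.0 for c in crossing.get(k, []))
        active[k] = theta[k] * (0.0 if blocked else 1.0)
    return active
```

A beam with a negative mask value is inactive, and an otherwise-active beam is suppressed as long as any beam crossing it is active.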

Surrogate gradients
One problem remains: the decision function for masking, Equation (5), has a Dirac delta distribution as its derivative, meaning that the derivative vanishes everywhere except at the threshold m_ij = 0. This slows down optimization via gradient descent tremendously - a problem that is well known in other areas such as computational neuroscience, where gradient-based learning for spiking neural networks faces the same issue. However, recently, an approach called 'surrogate gradients' 28,39 has been introduced that enables robust gradient-based learning of spiking neural networks.
For learning to mask edges in a graph (or beams in a lattice), we apply the same trick: instead of using the Dirac delta function, we substitute it with a surrogate function with non-vanishing parts away from the threshold. A multitude of choices exist for surrogate functions. Specifically for this work, we use a mirrored Lorentz function g as in Zenke et al. 40, with α ∈ ℝ⁺ being a choosable hyperparameter and |x| the absolute value of x. If not stated otherwise, we use α = 1.
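In a sketch, the forward pass keeps the hard Heaviside decision while the backward pass uses the surrogate. We assume the SuperSpike-style form g(x) = 1/(α|x| + 1)² associated with Zenke and colleagues; the paper's exact normalization may differ:

```python
import numpy as np

def heaviside(x):
    # Forward pass: the hard, binary masking decision.
    return (np.asarray(x) > 0).astype(float)

def surrogate_grad(x, alpha=1.0):
    """Backward pass (sketch): replace the Dirac-delta derivative of the
    Heaviside step with a mirrored Lorentz-type function,
    g(x) = 1 / (alpha * |x| + 1)^2 -- an assumed SuperSpike-style form."""
    return 1.0 / (alpha * np.abs(x) + 1.0) ** 2
```

In an autodiff framework this pair would be registered as a custom operation, so that gradients flow through masked beams even far from the threshold.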

Message passing finite element
As a first realization of F, we show how the direct stiffness method can be realized using message passing to form an end-to-end differentiable pipeline that returns exact mechanical properties given the graph representation of a lattice material.

Model description
The direct stiffness approach consists of several steps: 1) constructing the stiffness matrix for each beam, 2) adding masking to enable optimization of the beam connectivity, 3) combining those matrices into a global stiffness matrix describing the whole lattice, and 4) applying the experimental protocol for acquiring the desired mechanical property. In our framework, these steps take the following form:

1) Stiffness matrices: For an edge (i, j), node features r_i and r_j (the "messages") are turned into edge features L_ij, s_ij and c_ij, see Equations (1a) to (1c). Those edge features are sufficient to construct the stiffness matrix K_ij on the edge, see Equations (9) and (10) in the experimental procedures.

2) Masking: To mask a beam, the binary masking value m^θ_ij is multiplied with the stiffness matrix K_ij. Hence, if a beam is masked out, its contribution will not appear in the global stiffness matrix.

3) Global stiffness matrix: Constructing the global stiffness matrix is equivalent to a pooling operation that takes the features (masked stiffness matrices) of each edge and combines them into one global quantity valid for the whole graph (see Figure 4 for an illustration).

4) Properties: As discussed in the supplemental experimental procedures, calculating the resulting deformation of the lattice given an external load requires operations such as selecting parts of arrays and matrices, matrix-vector products, solving a system of linear equations and addition, all of which are differentiable. Similarly, the additional operations needed to obtain the effective elastic modulus and Poisson's ratio (e.g. linear regression) are differentiable as well. For the relative density, we can simply sum up L_ij, as calculated using message passing in 1), over all edges (i, j) ∈ E with m^θ_ij > 0 to obtain L* and subsequently ρ (see Equation (12) in the experimental procedures).

Thus, mechanical properties such as the relative density, resulting node displacements due to a load, effective elastic modulus and Poisson's ratio can all be obtained in a fully differentiable framework, starting with message passing on the graph representation of the lattice, followed by an edge-wise pooling operation and a series of ordinary differentiable operations, i.e., operations on data without graph structure, such as scalars, vectors and matrices.
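As an example of step 4), the relative density reduces to a masked sum over beam lengths. The normalization below (active beam lengths times an in-plane beam width w, over the bounding-box area b_x b_y) is our assumption standing in for Equation (12), which lives in the experimental procedures:

```python
import numpy as np

def relative_density(nodes, edges, mask, w, bx, by):
    """Relative density sketch: total length of active (unmasked) beams
    times an assumed in-plane beam width w, divided by the bounding-box
    area bx * by. The normalization is our assumption, not Equation (12)."""
    L_total = 0.0
    for i, j in edges:
        if mask[(i, j)] > 0:   # only unmasked beams contribute
            dx, dy = (np.asarray(nodes[j], float)
                      - np.asarray(nodes[i], float))
            L_total += np.hypot(dx, dy)
    return w * L_total / (bx * by)
```

Every operation here (distances, masked sum, division) is differentiable away from the mask threshold, which is what allows ρ to enter the loss as a regularizer.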
The described forward model F(L) specifically takes the following information from the graph L as input: the edge list E, the masking values M = {m_ij | (i, j) ∈ E}, and the node features (node coordinates) X. Since it is fully differentiable, it can be used to inverse design a lattice with desired properties using automatic differentiation - both to change continuous properties (such as node coordinates) and discrete properties (such as beam existence). In the following, we show this for two scenarios: obtaining a global target property such as a certain effective elastic modulus or Poisson's ratio, and obtaining a certain (functional) deformation given a loading scenario.

Designing lattices with target properties
We demonstrate our approach with two examples: starting with a regular honeycomb grid, we inverse design node positions and beam connectivity to acquire a lattice with an effective elastic modulus that is one order of magnitude higher than originally, while keeping the relative density ρ of the lattice constant (Figure 5A). In addition, we turn an initially regular triangular lattice with positive Poisson's ratio into a lattice with a negative Poisson's ratio of ν_target = −0.5, again with the condition of keeping ρ unchanged (Figure 5B). In both cases, we start with all edges unmasked (i.e. beams that are originally not available cannot be added during inverse design), and the used loss function is an L1 loss with an additional regularization term for the relative density; see the supplemental experimental procedures for details.
In both cases, the required target values are achieved after a small number of iterations. For the honeycomb lattice, the inverse design leads to a restructuring that is more akin to a square lattice, which matches our expectations, as square grids have high axial stiffness. To get a configuration with the same relative density as the initial lattice, beams that only weakly influence E* are removed after the target elastic modulus has been reached. For the triangular lattice, a configuration is found that, in general, promotes inward bending of elements during compression, resulting in a negative Poisson's ratio. Peculiar features are the arc-like structures on the bottom and top of the lattice that promote such inward movements.

Designing lattices with target deformations
In addition to global material properties, we can also inverse design the node displacements of the lattice in response to a given load. In Figure 6A, a lattice material that can perform a grabbing motion is found through inverse design. We start with a hexagonal honeycomb lattice with a small cavity on the right side - which will eventually become the grabbing part. To initiate the grabbing motion, the top left part of the lattice is compressed downwards while the bottom left part is kept in place. Initially, this leads to an outward bending of the right part of the lattice. Instead, we turn this into an inward grabbing motion through inverse design. To achieve this, we provide a target deformation in y- and x-direction (shown as crosses in Figure 6A) for some of the nodes (colored squares) in the material's cavity. As shown in Figure 6A (middle), the four nodes converge towards their target behaviour after around 40 iterations. The inverse design creates interesting functional structures, such as a lever-like arrangement (situated in the bottom left corner of the hole in the lattice, shown in gold in Figure 6A) that pulls up the bottom-right part of the material when the lattice is compressed.
As a second example, we demonstrate in Figure 6B that our approach can be used to design a lattice with several target behaviours. Specifically, we design a lattice structure that keeps a flat surface when only the top left half (load 1) or top right half (load 2) is pushed downwards (with the other half free to move). As a target behaviour, we aim at keeping the top surface flat, i.e., the free-moving part has to mimic the movement that would occur if the whole top surface were pushed downwards. Although the target behaviour is only learned for one particular load strength, the acquired design is valid over a large range of external deformations, see Figure S1. For the scenarios shown in Figure 6A,B, we allow new beams, initially not present in the lattice, to be added between nodes, drastically increasing the design space.
To show that our method can be scaled to much larger lattices, we design a honeycomb lattice composed of 635 cells to have no displacement in x-direction (for surface nodes) given a load in y-direction (Figure 6C) - which is equivalent to having a lattice material with ν* = 0. Starting from a hexagonal honeycomb lattice, after only nine iterations a peculiar design is found using a motif resembling tilted reentrant honeycombs, producing the desired effect. We emphasize here that this motif is completely and autonomously emergent, with no prior conception of auxetic behaviour introduced to the model. A magnified version of the design, featuring node positions, is shown in Figure S2.
In general, the computational complexity of the exact model for F is dominated by solving the stiffness equation (Equation (3)), which scales with the cube of the number of cells (see Note S1 and Figure S3 for details). Still, this is counter-balanced by the fact that gradient descent typically finds a suitable design within a few iterations - making the approach viable even for larger lattices. To obtain results faster, a surrogate model for the finite element analysis can be used, i.e., trading accuracy for speed, such as GNNs trained on simulated or experimental data, where the computational complexity scales only linearly with the number of cells.

Graph neural networks
Instead of a finite element method, F can also be an approximate model obtained using, e.g., machine learning. This is particularly useful when an analytical treatment is not feasible, computationally too slow, or when only experimental data is available. Recently, GNNs using convolutional operators 35,37,41 have reached competitive performance on a variety of graph inference tasks such as link prediction and node classification, translating into applications such as molecule property prediction 42, mesh-based simulations 43, modelling glassy systems 6, generalized neural algorithm learners 44, as well as materials science and chemistry 45. They are especially interesting due to their ability to deal with graphs that have a varying number of nodes as well as carry numerical features on nodes and edges. Hence, in the following, we provide a proof of concept for using GNNs both for predicting the properties of lattice materials and for inverse designing novel lattices.

Dataset generation
For training, evaluating and testing GNNs on predicting mechanical properties, we require an extensive dataset. As a proof of concept, we generated simulated data using the direct stiffness method. Lattices with different base tilings (square, equilateral triangle, hexagonal honeycomb and reentrant honeycomb) and deformations (random node displacements and beam/node removals) have been generated, with 4000, 200 and 1000 lattices of each tiling (train, validation and test data, respectively) - i.e., in total 16000 training, 800 validation and 4000 test samples. A detailed description of these perturbations can be found in the supplemental experimental procedures and Table S2. A visualisation of examples from the training dataset and dataset statistics are shown in Figure S4 and Figure S5.

Property prediction
First, we train models that utilize message passing to predict the in-plane material properties of 2D lattices. We investigate two different GNN architectures: (i) a GNN based on the simple EdgeConv layer introduced in Wang et al. 46, and (ii) a message passing neural network (MPNN) architecture used for molecule property prediction 42. The model architectures are explained in detail in the experimental procedures. For comparison, we further train a linear regression and a CatBoost model 47 (i.e., a gradient-boosted tree) on handcrafted features extracted from the lattices, such as cell density and relative density (see Table S3). As an alternative focusing only on the geometry of the lattice, we also train a convolutional neural network (CNN) on image representations of the lattices.
To estimate the performance of the models, we report in Table 1 the root mean squared error (RMSE) calculated on the test set. The best model during training is selected using the validation split. In general, the CNN, CatBoost, and MPNN perform similarly, clearly outperforming linear regression. The best performance is reached by the EdgeConv GNN. More specifically, we found that the models learn to predict the effective elastic modulus from the lattice geometry (or tabular features) very well, while the Poisson's ratio is much harder to learn. Especially for square lattices, we found that all models perform rather badly, although good performance is reached for reentrant honeycomb and hexagonal honeycomb lattices. This is illustrated in Figure 7A, where we compare predicted and ground-truth values for the EdgeConv GNN. These are promising results, especially since both GNN architectures are end-to-end differentiable and can thus be used as an approximate replacement of F in the inverse design framework.

Inverse design using graph neural networks
To showcase the usage of GNNs for 2D lattice inverse design, we use an EdgeConv GNN trained to predict the effective elastic modulus as the forward model F. To enable changing the beam structure of the lattice, we have to properly implement masking of edges in the message passing architecture of the GNN, which is explained in the experimental procedures. In Figure 7B, we show that inverse design is possible through a trained model, yielding a lattice design that has the desired target properties. To ensure that the GNN did not simply return a lattice design from the training dataset, we show the two closest lattices from the training data in Figure S6.

Discussion
For inverse design, the exact forward model (message passing finite element) has the strong advantage that no training data is required at all. In addition, the presented approach allows additional physical information of the lattice to be encoded as either node or edge features. Thus, the scheme can be generalized to create completely heterogeneous lattices (each beam with different values of E and A) to satisfy a given set of target properties. In fact, due to the model being exact, a complete restructuring of the lattice geometry and its properties is possible without ever leaving the regime where the forward model is valid, unless modifications are made that result in a free-hinged structure or disconnected material. However, the model requires a full finite element implementation that, depending on the size of the material and the degree of realism of the used finite element approach, can be computationally inefficient and slow, having a cubic scaling relationship with respect to the number of cells in the lattice. Still, as only a few iterations are required to find a single lattice design, we are confident that the approach scales to more complex applications and can be utilized as a design assistant to explore novel, irregular lattice materials.
An alternative to using an exact forward model is training a surrogate model, e.g., GNNs, which is computationally less demanding (linear scaling) than running a finite element simulation. Furthermore, it can be applied to experimentally recorded data and a variety of (non-linear) mechanical properties. In this work, we trained GNNs on predicting the effective elastic modulus E* and Poisson's ratio ν* of lattice materials. Although good performance is reached for predicting E*, the model struggled with predicting ν* accurately, especially for small absolute values of ν*. To better understand the predictive power of GNNs in this application, a higher number of mechanical properties should be investigated in future work, as well as alternative GNN architectures that are more tailored to the problem of lattice material property prediction. Moreover, we only used lattices with a predefined base tiling and a small range of cells. In future work, a larger dataset that covers a vast regime of lattice topologies (e.g., based on Voronoi grids) could strongly benefit the training process.
A major downside of using GNNs is that a lot of training data is required, likely more than is accessible from experimental observations. In this case, a transfer learning approach might be used, where the model is first trained on simulated data and then fine-tuned, or partially retrained to predict novel properties, using experimental data. Another possibility for reducing the amount of training data required, as well as increasing the performance and robustness of the model, is a hybrid approach that integrates part of the analytical model into the GNN, essentially molding knowledge about the underlying physics into the GNN architecture. Finally, to allow beams to be added or removed during inverse design, edge mask values have to be added to the GNN; the details strongly depend on the choice of GNN architecture. For instance, in this work we showed how masking can be integrated into a GNN architecture using EdgeConv layers.
Different from inverse design using an exact, analytical forward model (e.g., Figures 5 and 6), the optimization process is noisier when using a surrogate model. Furthermore, we found that it works less reliably the further one moves away from the value regime covered during training of the model. In fact, the design loop can get stuck in configurations of the lattice that are not realistic (e.g., disconnected components) without being able to recover on its own. Although this can be accounted for in the inverse design framework, for instance by not allowing changes to the geometry that result in unphysical lattice materials, this is a clear downside of the approach compared to using an exact model. Nevertheless, especially for larger lattices, inverse design with GNNs is extremely fast (Figure S3). Thus, if sufficient data is available to train a model, it can be used to quickly generate candidate designs, which then only need to be refined for a few iterations with the exact, but much slower, message passing finite element model, drastically speeding up the inverse design.
To mask beams in and out of the lattice design during inverse design, we used a technique called surrogate gradients. Without surrogate gradients, we found that only continuous properties of the lattice, such as node positions, are changed during gradient descent. This is not too surprising, since the gradient of the threshold function is a Dirac delta distribution, which vanishes everywhere except at 0, and thus the mask values of beams are only rarely changed. In contrast, with surrogate gradients, beams were continuously added or removed during inverse design, enabling a discrete restructuring of the lattice.
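The mechanics of surrogate gradients can be illustrated with a minimal, self-contained sketch (pure Python, not the pyLattice2D implementation; the threshold, width α, and learning rate are illustrative choices): the forward pass applies a hard threshold to a beam's mask value, while the backward pass substitutes the derivative of a smooth sigmoid of width α in place of the vanishing true gradient.

```python
import math

def heaviside(m, threshold=0.5):
    """Forward pass: hard threshold deciding whether a beam is present."""
    return 1.0 if m >= threshold else 0.0

def surrogate_grad(m, threshold=0.5, alpha=10.0):
    """Backward pass: derivative of a sigmoid with width alpha centred on
    the threshold, used in place of the true derivative of the step
    function (which vanishes everywhere except at the threshold)."""
    s = 1.0 / (1.0 + math.exp(-alpha * (m - threshold)))
    return alpha * s * (1.0 - s)

# Toy inverse-design step: the loss favours removing this beam
# (dL/d(mask output) = +1), but the beam is currently present (m = 0.9).
m = 0.9
lr = 0.5
for _ in range(20):
    # Chain rule with the surrogate standing in for the step's derivative.
    grad = 1.0 * surrogate_grad(m)
    m -= lr * grad

# With the surrogate gradient the mask value drifts below the threshold,
# flipping the discrete beam state; with the true gradient (zero away from
# the threshold) m would essentially never change.
```

Here the gradient is largest near the threshold, so mask values close to flipping are updated most strongly, which is exactly the behaviour that lets discrete beams be added or removed.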
When using this method, there is some ambiguity in which function to use as the surrogate gradient. However, it has recently been observed that learning with surrogate gradients is robust to the shape of the surrogate function, at least for training spiking neural networks 39. In this work, we only used one type of surrogate function, with adjustable width α, which we found not to drastically change performance.
We used a masking scheme here that incorporates information about crossing beams in the inverse design framework. Although we decided to simply mask out all crossing beams, in principle, a scheme could be developed that samples a "winning" beam from a group of crossing beams. For instance, the Gumbel-Softmax trick 48, which enables sampling from categorical distributions compatibly with automatic differentiation, could be used to select the winning beam based on the edge mask values.
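Such a sampling step could look as follows, using the Gumbel-max formulation (the hard argmax variant of the Gumbel-Softmax trick; differentiation during training would use the softmax relaxation instead). The group of crossing beams and their mask values here are purely illustrative.

```python
import math
import random

def sample_winning_beam(mask_values, rng=random):
    """Sample one 'winning' beam from a group of crossing beams, treating
    the edge mask values as unnormalized log-probabilities.
    Gumbel-max: argmax_i (m_i + g_i) with g_i ~ Gumbel(0, 1)."""
    gumbels = [-math.log(-math.log(rng.random())) for _ in mask_values]
    scores = [m + g for m, g in zip(mask_values, gumbels)]
    return max(range(len(scores)), key=lambda i: scores[i])

# Illustrative group of three mutually crossing beams with edge mask values.
random.seed(0)
crossing_group = [0.1, 2.5, 0.3]
counts = [0, 0, 0]
for _ in range(1000):
    counts[sample_winning_beam(crossing_group)] += 1
# The beam with the largest mask value wins most often, but not always,
# keeping the selection stochastic; the softmax relaxation of this argmax
# is what makes the scheme compatible with automatic differentiation.
```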
Although we only focused on 2D lattices here, the introduced method is extendable to more complex structures and materials.
For instance, 3D truss-based lattices can be encoded in the same way as 2D lattices: node coordinates (in 3D) and an adjacency matrix describe the structure itself, while stiffness matrices are again calculated for each beam element using message passing. In fact, our method is naturally compatible with any truss-based lattice, and we are additionally confident that it can be generalized to more complex lattice types in the future, such as sheet- and shell-based materials. We envision this being achieved, e.g., by representing the mesh of local elements (instead of beams) as a heterogeneous graph and using message passing between unit elements to model their interactions.
An analogous approach has shown promise for property prediction from the grain microstructures of polycrystalline metallic alloys 49 .
To conclude, we present a framework that utilizes differentiable graph representations of lattices to perform both property prediction and inverse design. The main focus hereby lies on using message passing algorithms, which perform calculations directly on the geometric structure of the lattice's graph representation and allow the modification of local material properties and beam connectivity using automatic differentiation to optimize global properties of the lattice. We show that finite element methods can be realized using message passing, or that GNNs trained on simulated data can act as surrogate models thereof. This yields an efficient and expressive way of both describing and parametrizing lattices as well as modelling their behaviour mathematically.
Our approach constitutes an important step towards enabling automatic inverse design of irregularly structured lattice materials.
Moreover, it opens up a new set of tools developed in the graph machine learning literature, such as the GNNExplainer method, for analyzing both regular and irregular lattices. We hope that this will spark new ideas for representing 3D-printable materials and lead to a wealth of novel approaches and tools that assist practitioners in designing new (multi-)functional materials. An intriguing example of a technology that is highly synergistic with our approach is the recently introduced mechanical neural network 50 (MNN), a physical, experimental lattice structure constructed with programmable-stiffness beam elements. Similar to how analogue implementations of neural networks have been trained 51-54, MNNs could be trained 'in-the-loop' using our method, i.e., with forward passes being done on the physical device, while backward calculations (i.e., error backpropagation) for adjusting the stiffness of individual beams are done using our model on an edge device. This way, MNNs could be trained in real-time to create adaptive and smart structures in the real world.
Finally, we would like to stress that the approach introduced in this work is not limited to describing lattices, but is applicable to any system that can be described by a graph representation together with a forward model that predicts its properties, thus allowing automatic differentiation to change the structure of the input graph until it satisfies a set of desired properties.

Experimental procedures

Generalized Euler-Bernoulli beam elements
The stiffness matrix K_ij of the generalized beam element is obtained by combining the stiffness matrices of rod elements K_ij^rod and Euler-Bernoulli beam elements K_ij^beam. Rod elements are used here to model the deformation of lattice elements along their longitudinal axis; their corresponding stiffness matrix is given in ref. 30. Euler-Bernoulli beam elements model the bending of beam elements where the beam length is much larger than the characteristic dimension of the cross section; their stiffness matrix is also given in ref. 30.
We assume beam elements with a square-shaped cross-sectional area, which corresponds to A = b·h, where b and h denote cross-section depth and height, respectively. In a square beam, these are both equal to the beam thickness 55 t.

Global stiffness matrix
The global stiffness matrix G can be constructed iteratively: first, start with G being the zero matrix (i.e., all elements are zero). Then, for each beam connecting two nodes i and j in the lattice, G is updated by adding the blocks of the beam's local stiffness matrix k = K_ij to the corresponding blocks of G. The index notation a:b denotes the range of integers from a to b, i.e., K_{0:3} is the sub-matrix of K consisting only of its first three rows. We further introduce the specific index notation K_{ij} = K_{3·i:3·i+3, 3·j:3·j+3}, where the sub-selection is applied to both rows and columns.
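The assembly scheme can be written out as a short sketch (pure Python with nested lists for clarity; a real implementation, including pyLattice2D's, would use dense or sparse tensors). Each beam's 6×6 local stiffness matrix k is split into four 3×3 blocks that are added into the blocks of G addressed by the node indices i and j, with 3 degrees of freedom per node.

```python
def add_block(G, k, rows, cols, block_r, block_c):
    """Add the 3x3 block (block_r, block_c) of the local matrix k into G
    at node-block position (rows, cols), using G_ij = G[3i:3i+3, 3j:3j+3]."""
    for a in range(3):
        for b in range(3):
            G[3 * rows + a][3 * cols + b] += k[3 * block_r + a][3 * block_c + b]

def assemble_global_stiffness(num_nodes, beams):
    """beams: list of (i, j, k) with k the 6x6 local stiffness matrix of
    the beam connecting nodes i and j."""
    n = 3 * num_nodes
    G = [[0.0] * n for _ in range(n)]  # start with the zero matrix
    for i, j, k in beams:
        add_block(G, k, i, i, 0, 0)  # K_00 -> G_ii
        add_block(G, k, i, j, 0, 1)  # K_01 -> G_ij
        add_block(G, k, j, i, 1, 0)  # K_10 -> G_ji
        add_block(G, k, j, j, 1, 1)  # K_11 -> G_jj
    return G

# Two beams sharing node 1: contributions accumulate in the shared block.
# (Identity local matrices are used here only to make the sums visible.)
identity6 = [[float(a == b) for b in range(6)] for a in range(6)]
G = assemble_global_stiffness(3, [(0, 1, identity6), (1, 2, identity6)])
# G[3][3] == 2.0: node 1 receives K_11 from beam (0,1) and K_00 from beam (1,2)
```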

Relative density
The most influential design parameter for the in-plane mechanical properties of a cellular lattice material is its relative density 32, ρ = ρ*/ρs. This is defined as the ratio between the lattice's density ρ* and the density of the parent solid material ρs. For most practical scenarios, this ratio is equal to the material volume fraction contained within a known bounding box, following the relation ρ = t·L* / (bx·by), where L* = Σ_{(i,j)∈E} L_ij is the sum of all beam lengths, t = √A the beam thickness (and width), and by and bx the height and width of the bounding box, respectively.
In cases where the bounding box is not known or considered, the relative density of common periodic lattices can also be determined analytically. Table S1 lists the relative density as a function of beam thickness t and regular beam length L for various periodic regular tilings. Note that contributions from material at nodal beam intersections are assumed negligible. The analytical relationships of Table S1 are only applicable for sufficiently slender beams with 32 ρ < 0.2. Otherwise, stress distribution within nodes and the emergence of axial shear effects in thicker beams cause the model to break down at higher relative densities. More robustly, Meza et al. 56 confirmed that the classical predictions apply if strut dimensions fall within the regime (t/l) ≲ 0.05.
To ensure our generalized beam element model yields viable elastic properties, we limit all our experiments to a regime of relative densities 0.05 < ρ ≤ 0.19, contingent on the varying number of cells and beam thicknesses in each observation. In most cases, our beams also satisfy the slenderness criterion 56.

Effective elastic modulus
The effective elastic modulus E* describes how strongly a material resists externally induced deformations. In general, E* is given by the slope of the linear regime of a material's stress-strain curve.
We use the following experimental setup to determine E* for arbitrary 2D lattice materials, including regular as well as irregular ones. First, the material is glued between two plates in the y-direction, i.e., we have a top and a bottom plate (Figure 8). To obtain the stress-strain curve, the top plate is then iteratively pushed downwards, forcing the top nodes of the lattice to move, while the bottom nodes are constrained in place. This yields different strains, i.e., displacements in the y-direction. The stress is then obtained by measuring the force the material applies on the top plate in response to the induced displacements, and dividing by the cross-sectional area of the lattice's bounding box. After collecting several stress values for increasing strains, the effective elastic modulus is given by the slope of the resulting stress-strain curve. How this experimental setup translates into simulations is described in detail in the supplemental experimental procedures. In general, it requires constructing the global stiffness matrix G, applying constraints and deformations, solving the stiffness equation for u_i, and updating the node coordinates r_i for all nodes i, which has to be repeated several times to record the stress-strain curve.
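The final step, extracting E* as the slope of the recorded stress-strain curve, can be sketched as a least-squares fit through the origin (pure Python; the strain and stress readings below are invented for illustration and do not come from the simulations):

```python
def elastic_modulus(strains, stresses):
    """Slope of the linear stress-strain regime, fitted through the
    origin: E* = sum(eps * sigma) / sum(eps^2)."""
    num = sum(e * s for e, s in zip(strains, stresses))
    den = sum(e * e for e in strains)
    return num / den

# Illustrative readings from iteratively pushing the top plate downwards.
strains = [0.001, 0.002, 0.003, 0.004]
stresses = [2.0, 4.0, 6.0, 8.0]  # perfectly linear: sigma = 2000 * eps
E_star = elastic_modulus(strains, stresses)
# E_star is approximately 2000 (exact up to floating-point rounding)
```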

Poisson's ratio
The Poisson's ratio measures how the width of a material changes due to a forced compression of its height. For instance, many materials will widen when compressed, which corresponds to a positive Poisson's ratio. However, so-called auxetic materials do the opposite: when compressed, their width is reduced as well, which corresponds to a negative Poisson's ratio. If the width of the material does not change at all, its Poisson's ratio is 0.
The Poisson's ratio can be obtained with a similar experimental setup as the effective elastic modulus, except that we measure the change in width due to a strain in the y-direction. Hence, the Poisson's ratio is given by the slope of a strain-strain curve, as explained in detail in the supplemental experimental procedures. The width change is calculated by taking the difference between the mean x-displacement u_x^r of all nodes on the outer right surface and the mean x-displacement u_x^l of all nodes on the outer left surface of the material.
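A sketch of this width-change measurement for a single load step (pure Python; the node displacements and dimensions are illustrative): the transverse strain comes from the mean x-displacements of the left- and right-surface nodes, and ν* = −ε_x/ε_y.

```python
def width_change(right_ux, left_ux):
    """Difference of mean x-displacements of right- and left-surface nodes."""
    return sum(right_ux) / len(right_ux) - sum(left_ux) / len(left_ux)

def poisson_ratio(width0, dwidth, height0, dheight):
    """nu* = -eps_x / eps_y for a single load step."""
    eps_x = dwidth / width0
    eps_y = dheight / height0
    return -eps_x / eps_y

# Illustrative compression step: height shrinks by 1%, material widens.
dw = width_change(right_ux=[0.004, 0.006], left_ux=[-0.004, -0.006])
nu = poisson_ratio(width0=1.0, dwidth=dw, height0=1.0, dheight=-0.01)
# Widening under compression gives a positive Poisson's ratio (here an
# artificial nu = 1.0, chosen only to keep the arithmetic simple).
```

An auxetic response corresponds to a negative dwidth under the same compression, flipping the sign of ν*.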

Graph neural networks
We investigate graph neural networks that work directly on the graph structure of our lattices, using only the information contained in L.
Here, we use the EdgeConv model 46, where the node features r_i are updated as follows:

r_i^(l+1) = max_{j ∈ N_i} ( W · (r_j^(l) − r_i^(l)) + W_0 · r_i^(l) ),

with r_i^(0) = r_i, where N_i is the set containing all nodes connected to node i, and W and W_0 are weight matrices. The full model consists of several EdgeConv layers, followed by a dense deep neural network Φ that returns a property prediction for each node. The final prediction is then obtained by averaging over all nodes.
In addition, we investigate the MPNN model that has been proposed for molecule property prediction 42. For this model, we use the distance vector between nodes r_ij, the length of the beam L_ij, as well as the orientation c_ij as edge features e_ij = (r_ij, L_ij, c_ij). Node features are first preprocessed using a multi-layer neural network ϕ_r. They are then updated using an NNConv layer and a gated recurrent unit (GRU),

r_i^(l+1) = GRU( r_i^(l), h_i^(l+1) ),

with r_i^(0) = h_i^(0) = ϕ_r(r_i), where ϕ_e is a neural network that takes the edge features as input and returns the matrix used inside the NNConv layer. This step is repeated N times, after which a graph embedding r_g is obtained by pooling over all nodes, r_g = pool( { r_i^(N) } ). For pooling, we use the Set2Set operator. From r_g, a prediction is obtained through a final multi-layer neural network ϕ_p.

Masking EdgeConv
For EdgeConv, message passing yields the following edge features for each edge (j, i):

E_ji = W · (r_j^(l) − r_i^(l)) + W_0 · r_i^(l).

Masking is done by replacing the features of masked edges with values below the minimum of all unmasked edge features, yielding masked edge features M_ji. Node features are then updated by choosing the maximum value (elementwise) over all neighbouring edges. Hence, if an edge is masked, its value is not picked by the max operation, and it appears as if the edge does not exist in the graph.
The masking also guarantees that a masked edge's features cannot be larger than the minimum of the unmasked edge features, ensuring that a masked value is never chosen by accident. If the graph is bidirectional, both the forward and backward edge between two nodes are masked with the same mask value, i.e., m_ji ≡ m_ij, resulting in M_ji ≡ M_ij. Self-connections (i, i) in the graph are not masked, which also guarantees that masking still works if nodes become disconnected from the remaining graph.
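How the masked max-aggregation behaves can be sketched elementwise (pure Python; the subtraction-to-a-floor scheme shown here is one possible realization, not necessarily the exact pyLattice2D formula): masked edges have their features pushed below the minimum of all unmasked features, so the elementwise max can never select them.

```python
def mask_edge_features(edge_feats, masks):
    """Push features of masked edges (mask == 0) below the smallest
    unmasked feature value so the max-aggregation ignores them.
    Assumes at least one unmasked edge per node, which holds when
    self-connections are never masked."""
    flat = [v for f, m in zip(edge_feats, masks) if m for v in f]
    floor = min(flat) - 1.0
    return [f if m else [floor] * len(f) for f, m in zip(edge_feats, masks)]

def aggregate(edge_feats):
    """Elementwise max over all neighbouring edge features."""
    return [max(col) for col in zip(*edge_feats)]

# Node with three incoming edges; the second edge is masked out.
feats = [[0.2, -1.0], [9.0, 9.0], [0.5, -3.0]]
masks = [1, 0, 1]
out = aggregate(mask_edge_features(feats, masks))
# The masked edge's large values are ignored: out == [0.5, -1.0]
```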

Data and code availability
This publication is accompanied by an extensive Python module (based on PyTorch) for analysing and designing 2D lattices called pyLattice2D (see Note S3), which is publicly available on GitLab 57. An archived version can be found under the following DOI: 10.5281/zenodo.8239350. It contains the experiments performed in this work, as well as many convenience functions, e.g., for the comfortable generation of various 2D lattice geometries. Default parameters as well as details for simulations can be found in the supplemental experimental procedures.
• Every five iterations, the expressions for M_ij are updated by re-checking which beams in the lattice cross. This is done to avoid solutions with crossing beams. Related to Figure 6A.
The EdgeConv model consists of three EdgeConv layers (see Equation (13)) with 200 hidden neurons each. The deep neural network consists of three layers with [400, 200, 1] neurons.
The specification of the MPNN model can be found in pyLattice2D (in models/MPNN/networks the class LatticeNNConv) with parameters hid_nfeat set to 15 and num_message_passing set to 3.
For the CNN, we use a batch size of 50 and weight regularization of 10^−5. Training data is randomly flipped horizontally and vertically. The specifications of the CNN architecture can be found in pyLattice2D (in models/MPNN/networks, the class CNN).
Related to Figure 7 and Table 1.

Inverse design using GNNs
For inverse design, we trained an EdgeConv model with 150 hidden neurons per EdgeConv layer; otherwise, the architecture is the same as in "Training machine learning models" in the supplemental experimental procedures. For the inverse design loop, we again use an L1 loss with β = 100 and F now being the trained EdgeConv model. Only original beams of the triangle grid can be added or removed during training. We initialize the mask randomly with values m_ij ∼ U(0.2), which was necessary to prevent the model from removing too many beams at once during the first few iterations. For coordinates, we use the learning rate γ_r = 0.0001, and for the edge mask, γ_m = 0.001. All parameters are regularized (weight decay) with strength 10^−6.
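A plausible form of such an L1 design loss, combining the property target with a β-weighted relative-density constraint, can be sketched as follows (pure Python; the exact loss terms are not spelled out in the text, so this is an assumed reconstruction with illustrative target values):

```python
def design_loss(pred_modulus, pred_density,
                target_modulus, target_density, beta=100.0):
    """Assumed L1 inverse-design loss: property mismatch plus a
    relative-density constraint weighted by beta (beta = 100 as in the
    experiments). Target and predicted values here are illustrative."""
    return (abs(pred_modulus - target_modulus)
            + beta * abs(pred_density - target_density))

loss = design_loss(1.2, 0.101, target_modulus=1.0, target_density=0.1)
# ~0.2 from the modulus term plus 100 * 0.001 = ~0.1 from the density term
```

The large β keeps the relative density effectively pinned to its target while the modulus term drives the restructuring.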
For comparison, we use an exact forward model (FE in Figure 7B) which uses 10 iterations of direct stiffness with δ = 0.002 to determine E * .
Related to Figure 7B.

Figure 1 :
Figure 1: Schematic illustration of the proposed framework. A lattice material is represented as a graph L (left) which contains vector attributes on both the nodes (e.g., position) and edges (e.g., Young's modulus and beam cross-sectional area). A differentiable forward model F takes the graph as input and predicts material properties. Comparing these predictions with desired properties, automatic differentiation is used to change the material (e.g., move nodes and remove or add beams) to better satisfy those properties. This process is repeated iteratively until a material with the desired properties is found.

Figure 2 :
Figure 2: Comparison of analytical values (Table S1) and values obtained using our finite element method. We show the effective elastic modulus (left) and Poisson's ratio (right) for different tilings (square, equilateral triangle, honeycomb and reentrant honeycomb) and relative densities. To change the relative density, we repeated numerical experiments for lattices with different numbers of cells and beam cross-sectional areas.

Figure 3 :
Figure 3: Illustration of the message passing steps. For clarity, we only show the operations on one edge, although in practice these operations are performed on all edges in parallel. (1) Node features r_i are propagated along connecting edges. (2) On the edges, propagated node features as well as edge features are combined via a function ϕ_0 to create new edge features. ϕ_0 can take any shape (e.g., a neural network). (3) Edge features are reduced using a function ϕ_1 that satisfies permutation invariance (i.e., the order of arguments does not matter) and is applicable to a varying number of arguments. (4) The node features are updated from the reduced edge features using a function ϕ_2 (which, again, can take any shape).

Figure 4 :
Figure 4: Illustration of message passing finite element.Using message passing on the graph representation of the lattice, first the stiffness matrix of each beam is computed (locally on each edge of the graph), from which the global stiffness matrix G is then constructed (by pooling from all edges).

Figure 5 :
Figure 5: Inverse designing lattices to have desired mechanical properties. Lattices without load are shown in light gray and with load in black. Top nodes (red) are forced, while bottom nodes (gray) are constrained to not move. In the middle plots, target values of properties are shown as dashed lines. (A) Inverse designing a square lattice to have a higher effective elastic modulus while keeping the relative density constant. Deformations are magnified by a factor of 10. (B) Inverse designing a triangle lattice to have a negative Poisson's ratio while keeping the relative density constant. Deformations are not magnified.

Figure 6 :
Figure 6: Inverse designing a lattice to yield certain deformation responses to loads. Colors and line styles as in Figure 5. Nodes with target behaviour are shown as colored squares. (A) Inverse design of a grabbing tool. Progress towards the target for all four nodes is shown in the middle plot, colored like the corresponding nodes (left and right, colored squares). In the lattice illustrations (left and right), target deformations are indicated by crosses. A lever-like structure that emerged during inverse design is highlighted in gold. Deformations are magnified by a factor of 4. See also Video S1. (B) Inverse designing the response to two different load scenarios. As can be seen in the middle figure, if we only train on load scenario 1, the target behaviour for load scenario 2 is not reached. However, training on both scenarios leads to a solution that satisfies both target behaviours. Deformations are magnified by a factor of 5. See also Figure S1 and Video S2. (C) Inverse designing a large honeycomb lattice to have zero Poisson's ratio. Initially, loading the lattice leads to outward bulging. With our framework, a design is found that leads to no bulging, featuring local tiling motifs resembling tilted reentrant honeycombs and triangles. See also Figure S2 and Video S3.

Figure 7 :
Figure 7: Experiments using graph neural networks as the forward model. (A) Model prediction vs. ground truth for the EdgeConv model. (B) Inverse design using a GNN as the forward model F. A triangle lattice is adjusted to feature a reduced effective elastic modulus while its relative density has to remain unchanged. Target values for the elastic modulus and relative density are shown as dashed lines. The value predicted by the GNN, F(L), is shown in blue (solid line), while the exact value obtained using finite element analysis is shown in red (dash-dotted line). Colors and line styles as in Figure 5. Deformations are magnified by a factor of 2. See also Figure S6.

Figure 8 :
Figure 8: Experimental setup for determining both the effective elastic modulus and the Poisson's ratio of a lattice material along the loading direction. The material, here a lattice with honeycomb tiling, is placed between two plates. By pressing the top plate downwards while keeping the bottom plate fixed, the material gets compressed. With this approach, stress-strain as well as strain-strain curves are derived, from which the above-mentioned material properties are calculated.

Figure S2 :
Figure S2: Magnified illustration of the inverse-designed lattice shown in Figure 6C. Connection points between beams are shown as nodes. It should be noted that, although the design looks symmetric at first glance, it is not.

Figure S3 :
Figure S3: Relative time required for a single forward pass of the exact algorithm using message passing finite element (FE) and the approximate algorithm using GNNs. Experiments are done using square lattices. Time complexity is given relative to the time required for a square lattice with N_0 = 4 × 4 cells.

Table 1 :
Table 1: Experimental results for training prediction models (see Note S2 for details).