Data-Driven Structural Design Optimization for Petal-Shaped Auxetics Using Isogeometric Analysis

Focusing on the structural optimization of auxetic materials using data-driven methods, a back-propagation neural network (BPNN) based design framework is developed for petal-shaped auxetics using isogeometric analysis. Adopting a NURBS-based parametric modelling scheme with a small number of design variables, the highly nonlinear relation between the input geometry variables and the effective material properties is obtained using a BPNN-based fitting method, which is demonstrated in this work to give high accuracy and efficiency. Such BPNN-based fitting functions also enable an easy analytical sensitivity analysis, in contrast to the generally complex procedures of typical shape and size sensitivity approaches.

achieve enhanced properties using numerical optimization approaches [Ren, Das, Tran et al. (2018)], since the pioneering work in [Sigmund (1994, 1995)]. Such studies are mostly based on topology optimization [Kaminakis and Stavroulakis (2012); Schwerdtfeger, Wein, Leugering et al. (2011); Wang (2018); Wang, Sigmund and Jensen (2014); Wang, Gao, Luo et al. (2017); Wang, Luo, Zhang et al. (2014); Xia, Shi and Xia (2019); Zheng, Xiao, Gao et al. (2019)], with experimental validations using samples made from additive manufacturing [Schwerdtfeger, Wein, Leugering et al. (2011); Wang (2018); Wang, Sigmund and Jensen (2014)]. In contrast, studies using the shape optimization approach are limited to the works in [Choi and Cho (2018); Clausen, Wang, Jensen et al. (2015); Weeger, Narayanan and Dunn (2019); Zhu, Wang and Poh (2018)] for chiral structures and [Kumar, Wang, Poh et al. (2019); Wang, Poh, Dirrenberger et al. (2017); Wang, Poh, Zhu et al. (2019)] for petal-shaped structures. These numerical design optimizations are often based on the computational homogenization of a representative volume element (RVE) to achieve tunable effective properties, and have attracted increasing attention [Xia and Breitkopf (2017)]. Since the multi-scale homogenization scheme involves additional boundary value problems associated with each RVE at different regions of a structure [Da, Cui, Long et al. (2017); Hou, Cai, Sapanathan et al. (2019); Xia and Breitkopf (2014); Shi (2017, 2018); Xu, You and Du (2015)], Wang et al. [Wang, Arabnejad, Tanzer et al. (2018); Wang, Xu and Pasini (2017)] studied the relation between the effective mechanical properties of optimal designs and the corresponding relative density, which is then utilized to speed up the homogenization computations. This provides an instructive idea for accelerated homogenization computations. Isogeometric analysis (IGA) [Benson, Bazilevs, Hsu et al.
(2010); Hughes, Cottrell and Bazilevs (2005)] combines the basis functions used in computer-aided design (CAD), e.g., non-uniform rational B-splines (NURBS), with the discretization shape functions of the finite element method (FEM). Compared to the conventional FEM, one essential improvement of IGA is the direct design-to-analysis approach, i.e., analysis follows directly from the CAD framework without further FE discretization. IGA thus has huge potential in structural shape and topology optimization [Qian (2010); Wall, Frenzel and Cyron (2008); Wang and Benson (2016); Xia, Wang, Wang et al. (2017); Xie, Wang, Xu et al. (2018)]. This naturally leads to the design of auxetic structures with smooth, curved features using IGA [Choi and Cho (2018); Kumar, Wang, Poh et al. (2019); Wang, Poh, Dirrenberger et al. (2017); Wang, Poh, Zhu et al. (2019); Weeger, Narayanan and Dunn (2019)]. Among these works, the petal-shaped auxetics inherit the re-entrant features of their counterpart star-shaped auxetics. Data-driven methods [Kirchdoerfer and Ortiz (2016); Kirchdoerfer and Ortiz (2017); Nguyen and Keip (2018)] often require large experimental datasets, and then utilize machine learning methods to find the relations and guide the designs. Machine learning (e.g., artificial neural networks [Basheer and Hajmeer (2000); Pineda (1987)] and deep learning [LeCun, Bengio and Hinton (2015)]) has become increasingly important in many advanced fields, e.g., computer vision [Akhtar and Mian (2018); LeCun and Bengio (1998)] and natural language processing [Collobert and Weston (2008); Collobert, Weston, Bottou et al. (2011)]. These techniques can construct models of complex input-output relations, and are thus suitable for the prediction of mechanical properties and the design of structures. The pioneering work of Le et al.
[Le, Yvonnet and He (2015); Yvonnet, Monteiro and He (2013)] presented a multiscale data-driven homogenization framework for the design of nonlinear elastic composite RVEs. Combining neural networks and RVE analyses, Bessa et al. [Bessa, Bostanabad, Liu et al. (2017)] developed a data-driven framework for materials with uncertain events. Gu et al. [Gu, Chen and Buehler (2018)] proposed a machine-learning method for optimizing composite structures to achieve exceptional toughness and strength. Lei et al. [Lei, Liu, Du et al. (2018)] applied machine learning techniques to moving morphable component-based structure optimization. Li et al. [Li, Kafka, Gao et al. (2019)] used clustering discretization methods to generate material performance databases for training neural networks, and applied them to material mechanism prediction and structure optimization. Liu et al. [Liu and Wu (2019); Liu, Wu and Koishi (2019)] proposed deep material networks to describe multiscale heterogeneous materials. While research efforts on machine learning methods for material and structure design have received increasing attention, such design concepts are still limited by high training costs and complicated implementation procedures. This paper proposes a data-driven machine learning framework for designing petal-shaped auxetics with a back-propagation neural network (BPNN) [Goh (1995); Wang, Zeng and Chen (2015)]. The current work adopts the isogeometric parameterization method proposed in [Wang, Poh, Zhu et al. (2019)], where only a small number of design variables is required to describe the geometric model. An analytical sensitivity analysis is provided based on the BPNN approach. Subject to an effective stiffness constraint, a design methodology to achieve optimal effective properties can be accelerated with the BPNN-based fitting function. In the following, Section 2 introduces the basic theories of IGA-based computational homogenization.
A detailed description of the BPNN-based fitting method is presented in Section 3, and based on that the design optimization model is provided in Section 4. In Section 5, the BPNN training process is discussed in detail. Two groups of numerical design studies are shown in Section 6. Finally, conclusions are drawn in Section 7.

IGA-based computational homogenization
A NURBS curve of order $p$ consists of a knot vector $\Xi = [\xi_1, \xi_2, \ldots, \xi_{n+p+1}]$ and $n$ control points $P_i$. Each control point corresponds to a weighted basis function

$R_{i,p}(\xi) = \dfrac{N_{i,p}(\xi)\, w_i}{\sum_{j=1}^{n} N_{j,p}(\xi)\, w_j},$

where the B-spline basis functions $N_{i,p}(\xi)$ are defined recursively as [Boor (1972)]

$N_{i,0}(\xi) = \begin{cases} 1 & \text{if } \xi_i \le \xi < \xi_{i+1} \\ 0 & \text{otherwise} \end{cases}, \qquad N_{i,p}(\xi) = \frac{\xi - \xi_i}{\xi_{i+p} - \xi_i} N_{i,p-1}(\xi) + \frac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}} N_{i+1,p-1}(\xi).$

Accordingly, a generic variable $f$ (e.g., coordinate, force, or displacement) at parametric coordinate $(\xi, \eta)$ can be evaluated from the corresponding control variables $f_i$ by

$f(\xi, \eta) = \sum_i R_i(\xi, \eta)\, f_i.$

In IGA, the displacement field of a deformed geometry patch can similarly be written as

$u(\xi, \eta) = \sum_i R_i(\xi, \eta)\, u_i,$

where $u_i$ is the displacement control variable of the $i$-th control point. Using this interpretation in FEM, the stiffness matrix, unknown displacement vector, and corresponding load vector for each NURBS patch can be obtained. Assembling them together with multi-point constraints (MPCs) yields a system of equations

$\begin{bmatrix} K & C^{\mathrm{T}} \\ C & 0 \end{bmatrix} \begin{bmatrix} u \\ \lambda \end{bmatrix} = \begin{bmatrix} f \\ g \end{bmatrix},$

where $\lambda$ are the Lagrange multipliers enforcing the constraints. A representative volume element (RVE) is used to obtain the effective properties of a petal-shaped auxetic structure. The RVEs for the three types of petal structures are defined in Fig. 1. Note that the RVE of the tri-petal structure is a symmetric parallelogram which combines two tri-petal cells. For each RVE, $V$ denotes the design domain and $S$ its boundary; the symbols '$-$' and '$+$' denote corresponding boundaries on opposite sides of the RVE. Within an RVE, the bulk material occupies a sub-domain $\Omega$ ($\Omega \subset V$) with boundary $\Gamma$, and the remaining region of the RVE is void. With a macro strain $E$ and periodic boundary conditions imposed on $\Omega$, a boundary value problem within the RVE is solved to extract the macro stress field

$\Sigma = \frac{1}{|V|} \int_{\Omega} \sigma \, \mathrm{d}\Omega.$

The macro stress-strain relation for plane stress can be written as

$\Sigma = \bar{C}(\bar{E}, \bar{\nu})\, E,$

where $\bar{E}$ and $\bar{\nu}$ are the effective Young's modulus and Poisson's ratio, respectively.
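The Cox-de Boor recursion and the rational (NURBS) basis above can be sketched as follows. This is an illustrative stand-alone implementation, not the solver used in the paper; the function names are our own, and the code evaluates a curve point as the weighted sum of control points described in the text.

```python
import numpy as np

def bspline_basis(i, p, xi, knots):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(xi).

    `i` is a zero-based span index; `knots` holds the knot vector
    Xi = [xi_1, ..., xi_{n+p+1}]. The 0/0 convention for repeated
    knots is handled by skipping degenerate terms.
    """
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (xi - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, xi, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - xi) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, xi, knots)
    return left + right

def nurbs_point(xi, knots, p, ctrl, weights):
    """Evaluate a NURBS curve point sum_i R_{i,p}(xi) P_i with rational basis."""
    n = len(ctrl)
    N = np.array([bspline_basis(i, p, xi, knots) for i in range(n)])
    W = N * weights
    return (W[:, None] * ctrl).sum(axis=0) / W.sum()
```

With unit weights the rational basis reduces to the plain B-spline basis, so a quadratic curve with three control points reproduces the familiar Bezier parabola.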
For the plane stress condition with a macro strain $E = [1\ 0\ 0]^{\mathrm{T}}$, the effective Poisson's ratio can be evaluated as

$\bar{\nu} = \Sigma_{22} / \Sigma_{11},$

and the effective Young's modulus as

$\bar{E} = \Sigma_{11}\,(1 - \bar{\nu}^2).$

The IGA-based parameterization scheme for the petal-shaped auxetics studied in this work follows the methodology presented in [Wang, Poh, Zhu et al. (2019)], where only 8 variables $(l_1, l_2, l_3, l_4, h_4, w_1, w_2, w_3)$ are needed to describe a unit cell (Fig. 2). For simplicity, the width of the connecting bar $w_3$ is set identical to the petal width $w_2$. A single material is considered in the unit cell. The design variables for the petal-shaped auxetic design thus reduce to $x = [l_1, l_2, l_3, l_4, h_4, w_1, w_2]$.
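The extraction of the effective properties from the macro stress can be sketched as below. This is a minimal sketch assuming the standard isotropic plane-stress constitutive form with the unit macro strain $E = [1, 0, 0]^{\mathrm{T}}$; the exact post-processing in the paper is not shown, so the formulas here follow that assumption.

```python
import numpy as np

def effective_properties(sigma_macro):
    """Recover (nu_eff, E_eff) from the macro stress produced by the unit
    macro strain E = [1, 0, 0]^T under an isotropic plane-stress law.

    With E11 = 1 the constitutive relation gives
        Sigma11 = E_eff / (1 - nu_eff**2),  Sigma22 = nu_eff * Sigma11,
    so nu_eff = Sigma22 / Sigma11 and E_eff = Sigma11 * (1 - nu_eff**2).
    """
    s11, s22 = sigma_macro[0], sigma_macro[1]
    nu_eff = s22 / s11
    E_eff = s11 * (1.0 - nu_eff**2)
    return nu_eff, E_eff
```

Feeding the routine the macro stress of a known material recovers its modulus and (possibly negative) Poisson's ratio.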

BP-neural-network-based fitting method
Extracting the effective properties of petal-shaped auxetics from the abovementioned IGA-based homogenization framework can be computationally expensive. To improve the computational efficiency, a data-driven BPNN-based fitting method is proposed here. The BP (back-propagation) neural network is a type of multilayer feed-forward network, in which the learning process consists of a signal forward-propagation stage and an error back-propagation stage. Compared to other machine learning methods, BPNN has two outstanding advantages, i.e., strong nonlinear mapping ability and a flexible network structure. Since the nonlinear relationship between the geometric parameters and the effective properties is not overly complicated, a three-layer BPNN offers the best trade-off between accuracy and efficiency when fitting it, and the three-layer structure is therefore employed herein to fit the nonlinear relationship between the design variables (i.e., geometric parameters) and the effective properties. The data-driven method presented herein is developed for BPNN and requires large databases. Given a training set $D = \{(x_k, y_k),\ k = 1, 2, \ldots, m\}$, $x_k$ is the input neuron data, i.e., the geometric parameters, $y_k$ is the output neuron data, i.e., the effective mechanical properties (Poisson's ratio and Young's modulus), and $m$ is the number of samples. The dataset is populated using the IGA-based computational homogenization: different design variables are selected randomly within the feasible domain, and the effective properties are then extracted by the IGA homogenization method. Fig. 3 shows the BPNN structure with 7 input neurons, an as-yet-undetermined number of hidden neurons, and 2 output neurons, in which the weight between the h-th hidden neuron and the j-th output neuron is defined as $w_{hj}$ and the weight between the i-th input neuron and the h-th hidden neuron is defined as $v_{ih}$.
Additionally, bias neurons are added to the input layer and hidden layer, with the bias of the h-th hidden neuron defined as $b_h^1$ and the bias of the j-th output neuron defined as $b_j^2$. For a training sample $(x_k, y_k)$, the final output is calculated by

$\hat{y}_j = f\!\left(\sum_{h=1}^{q} w_{hj}\, b_h + b_j^2\right), \quad j = 1, \ldots, l,$

where $q$ and $l$ denote the numbers of hidden and output neurons ($l = 2$ in this work), respectively, and $b_h$ is the output of the h-th hidden neuron, expressed as

$b_h = f(\alpha_h), \qquad \alpha_h = \sum_{i=1}^{d} v_{ih}\, x_i + b_h^1,$

with $d$ the number of input neurons ($d = 7$ in this work), $\alpha_h$ the input of the h-th hidden neuron, and $f$ the sigmoid activation function $f(z) = 1/(1 + e^{-z})$. The derivative of $f$ satisfies

$f'(z) = f(z)\,(1 - f(z)).$

To find the optimal fitting function, the key issue is to minimize the loss error, which is set for each training sample as

$E_k = \frac{1}{2} \sum_{j=1}^{l} (\hat{y}_j - y_j)^2.$

To reduce the loss error, the update rule can be expressed as

$w_{hj} \leftarrow w_{hj} + \Delta w_{hj}.$

Given a learning rate $\eta$, $\Delta w_{hj}$ can be calculated by stochastic gradient descent (SGD) [Bottou (2010)] as

$\Delta w_{hj} = -\eta\, \frac{\partial E_k}{\partial w_{hj}}.$

According to the chain rule,

$\frac{\partial E_k}{\partial w_{hj}} = \frac{\partial E_k}{\partial \hat{y}_j} \frac{\partial \hat{y}_j}{\partial w_{hj}} = g_j\, b_h,$

where $g_j$ can be rewritten as

$g_j = (\hat{y}_j - y_j)\, \hat{y}_j\, (1 - \hat{y}_j).$

Similarly,

$\Delta v_{ih} = -\eta\, e_h\, x_i, \qquad \Delta b_h^1 = -\eta\, e_h, \qquad \Delta b_j^2 = -\eta\, g_j,$

where

$e_h = b_h\, (1 - b_h) \sum_{j=1}^{l} w_{hj}\, g_j.$

Note that the training objective is to minimize the loss function $L$, defined as the average loss error over all training samples,

$L = \frac{1}{m} \sum_{k=1}^{m} E_k.$

After BPNN training, the weights and biases are obtained, and the fitting function can be expressed as

$y = F(x),$

where $y = [y_1, y_2]$ represents the effective mechanical properties, with $y_1$ denoting the effective Poisson's ratio and $y_2$ the effective Young's modulus. Henceforth, the IGA-based computational homogenization can be replaced by the fitting function, which establishes an explicit relationship between the geometric parameters and the mechanical properties. The flowchart of the data-driven design framework is shown in Fig. 4: the framework first uses the database to train the fitting function by BPNN, then replaces the IGA-based computational homogenization with the fitting function, based on which structures with optimal properties can be designed efficiently.
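The forward pass and the SGD back-propagation updates described above can be sketched as a minimal NumPy implementation. This is an illustrative sketch of a three-layer sigmoid network trained per-sample, not the code used in the paper; the 7-15-2 sizes match the network eventually selected, while the learning rate and initialization are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, q, l, eta = 7, 15, 2, 0.1          # input, hidden, output sizes; learning rate

V = rng.normal(0, 0.1, (d, q))        # input->hidden weights v_ih
W = rng.normal(0, 0.1, (q, l))        # hidden->output weights w_hj
b1 = np.zeros(q)                      # hidden biases b_h^1
b2 = np.zeros(l)                      # output biases b_j^2

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):
    b = sigmoid(x @ V + b1)           # hidden outputs b_h
    yhat = sigmoid(b @ W + b2)        # network outputs yhat_j
    return b, yhat

def sgd_step(x, y):
    """One stochastic-gradient update for a single sample (x, y)."""
    global V, W, b1, b2
    b, yhat = forward(x)
    g = (yhat - y) * yhat * (1 - yhat)          # output-layer deltas g_j
    e = b * (1 - b) * (W @ g)                   # hidden-layer deltas e_h
    W -= eta * np.outer(b, g)
    b2 -= eta * g
    V -= eta * np.outer(x, e)
    b1 -= eta * e
    return 0.5 * np.sum((yhat - y) ** 2)        # per-sample loss E_k
```

Training then amounts to looping `sgd_step` over shuffled samples for a number of epochs while monitoring the average loss on a held-out validation set.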

Design optimization problem
An intriguing advantage of the BPNN-based fitting function is that it parameterizes the structure intrinsically: the gradient of the effective mechanical properties (i.e., the output neurons) with respect to the geometric parameters (i.e., the input neurons) can be derived analytically from the network.
The design optimization problem here is to optimize a given petal-shaped auxetic to achieve a tunable Poisson's ratio at a certain stiffness, i.e.,

$\min_{x} \ \left(\bar{\nu}(x) - \bar{\nu}^{*}\right)^2 \quad \text{s.t.} \quad \bar{E}(x) \ge \bar{E}^{*}, \quad x^{L} \le x \le x^{U},$

where $\bar{\nu}^{*}$ is the target effective Poisson's ratio, $\bar{E}^{*}$ is the stiffness constraint, and $x^{L}$ and $x^{U}$ are the lower and upper bounds of the design variables $x$, respectively. The sensitivity analysis requires the full (design) derivatives of the effective properties with respect to each design variable. Based on the BPNN framework, these derivatives can be written analytically via the chain rule through the two layers,

$\frac{\partial \hat{y}_j}{\partial x_i} = \sum_{h=1}^{q} v_{ih}\, b_h (1 - b_h)\, w_{hj}\, \hat{y}_j (1 - \hat{y}_j),$

and similarly for both outputs $\bar{\nu}$ and $\bar{E}$. The sensitivity analysis in a conventional IGA-based computational homogenization approach can be complicated; in several cases, a semi-analytical approach is adopted to facilitate implementation, though at the expense of higher computational cost. The BPNN-based function describes an explicit relationship between the geometric parameters and the mechanical properties, which enables a simple analytical derivation of the gradient and thus improves the computational efficiency significantly.
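The analytical sensitivity of the network outputs with respect to the design variables can be sketched as below. This is an illustrative sketch for a trained two-layer sigmoid network (weight/bias names are our own); the Jacobian follows from chaining the sigmoid derivatives, so no finite differencing of the homogenization is needed.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def output_and_jacobian(x, V, W, b1, b2):
    """Analytic sensitivity d(yhat_j)/d(x_i) of a three-layer sigmoid BPNN.

    Chain rule through the two sigmoid layers gives
        J[i, j] = sum_h v_ih * b_h (1 - b_h) * w_hj * yhat_j (1 - yhat_j),
    computed here with broadcasting as
        J = (V * b(1-b)) @ (W * yhat(1-yhat)).
    """
    b = sigmoid(x @ V + b1)
    yhat = sigmoid(b @ W + b2)
    J = (V * (b * (1 - b))) @ (W * (yhat * (1 - yhat)))   # shape (d, l)
    return yhat, J
```

A quick finite-difference check confirms the analytic Jacobian, which is what makes gradient-based optimization on the fitting function cheap.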

BP Neural network training
Three different auxetic structures with tri-, tetra- and hexa-petals are used to train the BPNN, respectively. The tetra-petal case is discussed in detail to find the optimal BPNN structure. The BP neural networks comprise 3 layers, with the input layer consisting of 7 neurons (corresponding to the geometric parameters of the petal structure) and the output layer consisting of 2 neurons (corresponding to the effective Poisson's ratio and the effective Young's modulus).

Data set
Datasets are essential in BPNN training and determine the upper limit of the fitting accuracy. In this study, three types of datasets are generated through IGA-based computational homogenization, with 3000 training, 500 validation and 500 test samples. The tetra-petal case is discussed here in detail, with upper and lower bounds set for the size and shape design variables to avoid non-feasible solutions. Generally, the fitting function is prone to large errors when the input variables are close to the upper and lower bounds. To increase the accuracy of the fitting function, the sampling range of the design variables in the training set is therefore defined slightly larger than that of the validation and test sets, while the validation and test sets share the same data distribution. First, the upper and lower bounds of the validation and test sets are defined. To avoid overlaps between the petals and connecting bars, the upper and lower bounds for the design variables $l_1, l_2, l_3, l_4$ are set to [3, 3, 3, 1] and [0.8, 0.5, 0.5, 0.25], respectively. The upper and lower bounds for the design variable $h_4$ are set to 11 and 7, respectively, so that the petals are small enough to fit within a unit cell, yet large enough to avoid excessive curvature at the vertices. For the validation and test sets, the upper and lower bounds for the tetra-petal design variables are thus [3, 3, 3, 1, 11, 1, 1] and [0.8, 0.5, 0.5, 0.25, 7, 0.3, 0.2], respectively, while the sampling ranges of the design variables in the training set are 20% larger. The input data are selected randomly within this domain, and the output data are generated by IGA-based computational homogenization from the input data. The distribution of the datasets is shown in Fig. 5, in which the 1st-7th variables on the horizontal axis represent the input neuron data, i.e., the design variables, and the 8th and 9th variables represent the output neuron data, i.e., the effective mechanical properties. Note that the test set has the same data distribution as the validation set.
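The sampling scheme above can be sketched as follows. This is an illustrative sketch: the bounds are the tetra-petal values from the text, but the exact rule for widening the training range by 20% is not specified in the paper, so symmetric widening about each variable's range is our assumption, and `homogenize` is a stand-in name for the IGA RVE solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tetra-petal validation/test bounds from the text: [l1, l2, l3, l4, h4, w1, w2].
lower = np.array([0.8, 0.5, 0.5, 0.25, 7.0, 0.3, 0.2])
upper = np.array([3.0, 3.0, 3.0, 1.0, 11.0, 1.0, 1.0])

def sample_inputs(n, widen=0.0):
    """Uniform random design variables; `widen` grows each range symmetrically
    (widen=0.2 reproduces the 20%-wider sampling used for the training set)."""
    half = 0.5 * (upper - lower) * widen
    return rng.uniform(lower - half, upper + half, size=(n, 7))

# Stand-in for the IGA-based computational homogenization (assumed interface).
def homogenize(x):
    raise NotImplementedError("replace with the IGA RVE solver")

X_train = sample_inputs(3000, widen=0.2)
X_val = sample_inputs(500)
X_test = sample_inputs(500)
# Y_train = np.array([homogenize(x) for x in X_train])  # (nu_eff, E_eff) pairs
```

Sampling the validation and test sets from the same (unwidened) range gives them the same distribution, as the text requires.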

Data preprocessing
The raw neuron data differ significantly in magnitude from one another. To increase the training efficiency, preprocessing of the training data is required. The input data are standardized using the mean and standard deviation, with the normalization of each input neuron expressed as

$\hat{x}_i = \frac{x_i - \mu_i}{\sigma_i},$

where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the i-th input neuron data, respectively. The output data are scaled to fall within the range [0, 1], with the preprocessing of each output neuron expressed as

$\hat{y}_j = \frac{y_j - y_j^{\min}}{y_j^{\max} - y_j^{\min}},$

where $y_j^{\min}$ and $y_j^{\max}$ are the minimum and maximum values of the j-th output neuron data, respectively. Note that the preprocessing parameters are computed from the training set only; the validation and test sets are transformed using the same parameters.
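The two scalings above can be sketched in a few lines; the key point, reflected in the split between `fit_scalers` and `transform`, is that the parameters are fitted on the training set only and then reused for the validation and test sets. The function names are our own.

```python
import numpy as np

def fit_scalers(X_train, Y_train):
    """Compute preprocessing parameters from the training set ONLY."""
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    ymin, ymax = Y_train.min(axis=0), Y_train.max(axis=0)
    return mu, sd, ymin, ymax

def transform(X, Y, mu, sd, ymin, ymax):
    """Standardize inputs (zero mean, unit std); min-max scale outputs to [0, 1]."""
    return (X - mu) / sd, (Y - ymin) / (ymax - ymin)
```

Applying `transform` to validation or test data with the training-set parameters may produce outputs slightly outside [0, 1], which is expected and harmless.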

BPNN structure
To find the optimal BPNN structure for this fitting task, four different structures are compared: 7-5-2, 7-15-2, 7-30-2 and 7-50-2, in which the three numbers denote the numbers of input, hidden and output neurons, respectively. The iterative history of the fitting error (defined as the average squared error between the predicted and original values over all samples) for the above cases is shown on the left of Fig. 6, while the frequency distribution histograms of the sample errors (defined as the squared error between the predicted and original values for each sample) are presented in the middle for the validation set and on the right for the training set. From Fig. 6, it can be observed that the average errors of the training and validation sets drop simultaneously during training, with the latter being smaller at all times. In the histograms, the error range of the validation set is much smaller than that of the training set. Similarly, in Tab. 1, the fitting error on the validation set is much smaller than that on the training set, which demonstrates that expanding the sampling range of the training set is effective in guaranteeing the fitting accuracy on the validation set. The number of hidden-layer neurons has a significant effect on the training process. If the number of hidden neurons is too small, the network lacks the necessary learning and information-processing capacity; indeed, the case with 5 hidden neurons shows a large error. If the number of hidden neurons is too large, the complexity of the network structure increases significantly and the learning speed slows down. Thus, the number of hidden neurons should be set to a suitable value, which is taken as 15 in this work.

Test set
After the fitting function is obtained with the 7-15-2 BPNN structure, the test set (500 random samples) is used to demonstrate its accuracy; the relative errors of the effective Poisson's ratio and Young's modulus are shown in Fig. 7. The average relative errors of the Poisson's ratio and the Young's modulus are 0.097 and 0.007, respectively. The difference between the relative errors shows that the fitted values of the Young's modulus have better accuracy than those of the Poisson's ratio, likely because values smaller than $10^{-2}$ are present in the Poisson's ratio dataset, which are prone to induce high relative errors. Using the same method, the fitting functions of the tri-petal and hexa-petal cases are obtained. The average error and the fitting error distribution are plotted in Fig. 8, and the relative error distributions on the test sets are shown in Fig. 9. For the validation and test sets, the upper and lower bounds for the design variables of the tri-petals are set to [3, 3, 4, 1, 13.5, 1, 1] and [0.5, 0.5, 0.5, 0.25, 7, 0.3, 0.2], respectively, while those of the hexa-petals are set to [3, 3, 3, 3, 10.5, 1, 1] and [0.8, 0.5, 0.5, 0.25, 7, 0.3, 0.2], respectively. The sampling ranges of the design variables in the training sets are again 20% larger than those of the validation and test sets. Tab. 2 lists the fitting losses on the validation and training sets for the tri-petals and hexa-petals. The results show that the fitting errors of the two validation cases reach a magnitude of $10^{-4}$. The average relative errors of the Poisson's ratio and the Young's modulus are 0.153 and 0.013 for the hexa-petals, and 0.111 and 0.007 for the tri-petals, respectively. The fitting error and relative error of the hexa-petal case are larger than those of the tri-petals, indicating that the hexa-petal case has a more complicated nonlinear relationship, which could be fitted by a more complex BPNN structure in future work.

Numerical design studies
Since the BP neural network is used to fit the relationship between the design variables and the mechanical properties, its efficiency and accuracy must be assessed. For each kind of petal-shaped auxetic, an explicit fitting function is obtained. Based on these fitting functions, the design limit curves for the tri-petals and hexa-petals are provided, giving the lowest Poisson's ratio attainable with a reference structure over a range of stiffness constraints.

Demonstration for accuracy and efficiency
To demonstrate the accuracy and efficiency of BPNN-based fitting method, three different petal-shaped auxetic structures are tested in this section.

Figure 10: The petal-shaped structures of the tri-petal, tetra-petal and hexa-petal auxetics, with their design variables

As shown in Tab. 3, the computational time of the BPNN-based fitting is much smaller than that of the IGA-based computational homogenization. Moreover, the Poisson's ratio and Young's modulus obtained by the BPNN-based fitting are close to those of the IGA-based computational homogenization, demonstrating the high efficiency and accuracy of the BPNN-based fitting method.

Optimal solution for petal design
To find the design limit, i.e., the lowest achievable Poisson's ratio under a given stiffness constraint, the target value for the effective Poisson's ratio in the objective function of Eq. (24) is set to $\bar{\nu}^{*} = -1$. The design optimization study is implemented sequentially over a range of stiffness-constraint values $\bar{E}^{*}$ for the tri-petal and hexa-petal auxetics. The upper and lower bounds for the petal-shaped design variables are set the same as for the validation sets of the above section. Two cases are studied: Case A with IGA-based computational homogenization and Case B with the BPNN-based fitting method. Using these two methods, the design limit curves for the tri- and hexa-petal auxetics are shown in Fig. 11; the optimal solutions from low to high stiffness for each case are shown in Figs. 12-15, and the corresponding optimal design parameters are listed in Appendix A.
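The sequential sweep over stiffness constraints can be sketched as follows. This is an illustrative sketch, not the optimizer used in the paper (which is not named): it assumes a trained fitting function `predict` (x to [nu_eff, E_eff]) and its analytic `jacobian` are available, and uses SciPy's SLSQP with warm starting between constraint levels.

```python
import numpy as np
from scipy.optimize import minimize

def design_limit_curve(predict, jacobian, lower, upper, E_targets, x0):
    """Trace the lowest attainable Poisson's ratio over a sweep of stiffness
    constraints E_eff >= E_target, warm-starting each solve from the last.

    predict(x)  -> array [nu_eff, E_eff]
    jacobian(x) -> array of shape (n_vars, 2) with the analytic sensitivities
    """
    curve, x = [], np.asarray(x0, dtype=float)
    for E_t in E_targets:
        cons = [{"type": "ineq",
                 "fun": lambda v, Et=E_t: predict(v)[1] - Et,
                 "jac": lambda v, Et=E_t: jacobian(v)[:, 1]}]
        res = minimize(lambda v: predict(v)[0], x,
                       jac=lambda v: jacobian(v)[:, 0],
                       bounds=list(zip(lower, upper)),
                       constraints=cons, method="SLSQP")
        x = res.x                       # warm start for the next stiffness level
        curve.append((E_t, float(predict(x)[0])))
    return curve
```

Because both the objective and the constraint use the cheap analytic network gradients, each point on the curve costs only a handful of function evaluations, consistent with the reported two-orders-of-magnitude speed-up.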
As shown in Fig. 11, a stationary point is obtained for the lowest attainable Poisson's ratio of each petal-shaped case, and the design limit curves obtained by the BPNN-based fitting function are similar to those from the IGA-based computational homogenization, which demonstrates its accuracy. In the ranges of lower and higher effective stiffness, the petal-shaped structures lose their auxeticity. From Figs. 12-15, we can observe that the structures obtained from the IGA-based computational homogenization and the BPNN-based fitting are similar. The general observations are: (1) the petal width tends to increase; (2) the petal length tends to reach its upper bound for a low Poisson's ratio and its lower bound for a high stiffness; and (3) the petal shrinks from an initially bouffant shape to a stout design with increasing stiffness. The computational time for each point on the design limit curves using IGA-based computational homogenization ranges from several minutes to an hour. In contrast, the optimization analyses based on the BPNN fitting function cost only tens of seconds on average, an acceleration of the design process by two orders of magnitude. Since the gradient can be obtained analytically and efficiently within the BPNN framework, the optimization is accelerated significantly. The results in this paper thus showcase the efficiency and accuracy of the BPNN-based fitting method.

Conclusion
This paper proposed a novel data-driven method to design petal-shaped auxetics with IGA utilizing a BP neural network. Based on the BPNN training, the relationship between the geometric parameters and the effective mechanical properties (Poisson's ratio and Young's modulus) is expressed accurately by a fitting function. The fitting errors on the validation sets of the three petal-shaped cases are on the order of $10^{-4}$, which demonstrates the high accuracy of the method. The computation of the mechanical properties is accelerated by two orders of magnitude, demonstrating the efficiency of the method. The sensitivity analysis can be obtained analytically and easily in the BPNN-based design framework, which avoids complicated shape and size sensitivity analyses. Numerical design studies to obtain the design limit curves of the (minimum) effective Poisson's ratio over a range of stiffness constraints are presented. The design limit curves can be obtained in a very short time using the BPNN-based fitting function, and the optimal solutions are very close to those determined from the IGA-based computational homogenization. Our future work will mainly focus on three aspects: (1) extending the BPNN-based design framework to multi-material auxetics and other material designs, e.g., lattice materials [Da, Yvonnet, Xia et al. (2018)]; (2) exploring other machine learning methods for possible improved efficiency [Mohri, Rostamizadeh and Talwalkar (2018)] (e.g., reinforcement learning, decision trees, support vector machines); (3) combining fast-calculation methods, such as parallel computing [Wang, Wang, Deng et al. (2015); Wang, Wang, Wang et al. (2013)] and algorithm acceleration [Liao, Zhang, Wang et al. (2019)], to further improve the computing efficiency of the machine learning training process.

Acknowledgment: This work has been supported by the National Natural Science Foundation of China (Grant Nos. 51705158 and 51805174) and the Fundamental Research Funds for the Central Universities (Grant Nos. 2018MS45 and 2019MS059). These supports are gratefully acknowledged.

Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.