Neural Networks Approach for Hyperelastic Behaviour Characterization of ABS under Uniaxial Solicitation

Recent developments in computer-aided polymer processing have created the need for an accurate description of material behavior under the combined effect of applied stress and temperature. To serve this purpose, in this study, experimental data from uniaxial tensile tests on a thermoplastic halter (CTPH) made of hyperelastic material, subjected to the combined effects of applied stress and temperature, are coupled with numerical simulations to obtain the parameters required to characterize such materials. First, the stresses and displacements of the thermoplastic halter are recorded during the experiment. Thereafter, the Mooney-Rivlin and Ogden theories of hyperelasticity are employed to define the constitutive model of the thermoplastic halter (CTPH), and the nonlinear equilibrium equations of the process are solved by the finite element method with the Abaqus software. As a last step, a neural algorithm (ANN model) is employed to minimize the difference between calculated and measured parameters and thereby determine the material constants of the Mooney-Rivlin and Ogden models. Although the developed procedure can be applied to several polymeric materials, in this paper the technique is successfully implemented for acrylonitrile-butadiene-styrene (ABS). Using these coefficients, the material behavior of ABS is reproduced with the Mooney-Rivlin and Ogden constitutive laws. The material model obtained in this study for ABS can be implemented into industrial and academic software for application and design purposes.


INTRODUCTION
Polymers occupy a prominent place in the materials industry; their characterization during processing, as well as during their use in diverse industrial fields, requires an advanced knowledge of their mechanical and thermal behavior.
At the processing level, forming a polymeric sheet into a desired shape is done by means of blow molding or thermoforming, for example. Generally, the deformation is very fast, non-uniform and multiaxial, and occurs at a forming temperature above the glass transition temperature [1]. Modeling this phenomenon remains delicate and raises several difficulties, at the experimental level [2] as much as at the modeling level, in terms of large computational times as well as the implementation of nonlinear constitutive laws [3].
On the other hand, designing polymeric parts of different shapes and geometries requires a rigorous control of the external constraints and of the mechanical limits (such as the yield point and the failure stress). Modeling the mechanical behavior of polymers in this state proves to be useful, even crucial, in order to improve the working process.
Generally, the material constants embedded in the behavior models can be determined by using a modified Levenberg-Marquardt algorithm [4,5], or Powell's iterative algorithm, a conjugate-direction method without gradient calculation [6], to minimize the difference between the experimental curves and the theoretical response [7,8]. However, as underlined in several references [9], the convergence of the solution of the equilibrium equation, which governs the movement of the thermoplastic flat plate, depends on the initial choice of the material parameters embedded in the model.
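To make the classical route concrete, the following sketch fits the first-order Ogden uniaxial stress expression used later in this paper, σ = μ(λ^α - λ^(-α/2)), to a stress-stretch curve with a basic Levenberg-Marquardt loop. This is an illustration of the technique, not the authors' code: the stretch grid, the "experimental" data (generated synthetically from known constants), the starting guess and the damping schedule are all assumptions.

```python
# Illustrative Levenberg-Marquardt fit of the first-order Ogden uniaxial
# stress model to a (synthetic) stress-stretch curve.  Not the authors' code.
import numpy as np

def ogden_stress(lam, mu, alpha):
    """First-order Ogden Cauchy stress in incompressible uniaxial tension."""
    return mu * (lam**alpha - lam**(-alpha / 2))

def fit_ogden_lm(lam, sigma, mu0=1.0, alpha0=2.0, iters=200):
    """Minimal Levenberg-Marquardt: damped Gauss-Newton with step acceptance."""
    p = np.array([mu0, alpha0], dtype=float)
    damping = 1e-3
    sse = np.sum((ogden_stress(lam, *p) - sigma) ** 2)
    for _ in range(iters):
        mu, alpha = p
        r = ogden_stress(lam, mu, alpha) - sigma           # residual vector
        # Analytic Jacobian of the model with respect to (mu, alpha)
        d_mu = lam**alpha - lam**(-alpha / 2)
        d_alpha = mu * np.log(lam) * (lam**alpha + 0.5 * lam**(-alpha / 2))
        J = np.column_stack([d_mu, d_alpha])
        # Damped normal equations: (J^T J + damping*I) step = -J^T r
        step = np.linalg.solve(J.T @ J + damping * np.eye(2), -J.T @ r)
        new_p = p + step
        new_sse = np.sum((ogden_stress(lam, *new_p) - sigma) ** 2)
        if new_sse < sse:            # accept the step, relax the damping
            p, sse = new_p, new_sse
            damping *= 0.5
        else:                        # reject the step, increase the damping
            damping *= 10.0
    return p

# Synthetic "experimental" curve generated with known constants mu=2, alpha=3
lam = np.linspace(1.0, 2.0, 25)
sigma_exp = ogden_stress(lam, mu=2.0, alpha=3.0)
mu_fit, alpha_fit = fit_ogden_lm(lam, sigma_exp)
```

With clean data the loop recovers the generating constants; with a poor starting guess or noisy data it can stall in a local minimum, which is exactly the initialization sensitivity noted above and the motivation for the neural approach of this work.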
To remedy this problem, in this work, an approach combining experimental and numerical tools, based on an artificial neural network model [10,11], is used to model the behavior of thermoplastic flat plates under uniaxial tensile solicitation. The algorithms associated with artificial neural networks can prove to be an interesting alternative for the determination of the mechanical parameters.
Recently, neural networks have been used in many areas, including classification, pattern recognition, speech synthesis, diagnosis, identification and control of dynamic systems, and even the improvement of part quality in various industrial processes such as injection molding or blow molding [12,13].
The use of neural networks in identification processes is justified by their capacity to approximate nonlinear functions and to reproduce rather complex behavior. Their performance improves continuously through dynamic learning [14], which makes the neural identification robust with respect to the parametric variations and disturbances that can affect the operation of the studied system.
Modeling with neural networks requires model selection, which is a crucial step in designing a neural network, as with any nonlinear model. Indeed, this phase should lead to choosing a model whose complexity is sufficient to fit the data, but not excessive.
Their use in identification problems in the mechanical field remains topical; the pioneering works are those proposed by Erchiqui [11] and [15] for identifying the hyperelastic behavior of a polymeric sheet in the bubble inflation technique. The same approach was used with the aim of characterizing the viscoelastic behavior of an ABS polymeric sheet in the bubble inflation technique [16].
In this work, the power of neural networks is exploited to identify the nonlinear behavior of an ABS flat plate under uniaxial tension. Our neural algorithm is coupled with a finite element code [17] in order to reproduce the behavior of the material; the hyperelastic Mooney-Rivlin [18] and Ogden [19] models are considered, and a multilayer perceptron is used for this application.
As a first approximation, models with a first-order strain energy function are considered. However, the obtained results show the need to increase the order of the strain energy function. The strength of the neural approach is then exploited to handle the additional material constants.

BEHAVIOUR MODEL
In a static tensile test, a volume element undergoing tensile stress in a single direction is subject to the Poisson effect, as shown in Fig. 1 below. This implies that the extensions, or stretches, in the y and z directions are equal:

λ₂ = λ₃ (1)

Fig. 1. Volume element of the tensile specimen in the non-deformed and deformed state
We can define the principal stretch with respect to the x axis as:

λ₁ = l/l₀ (2)

where l and l₀ are the instantaneous and original gauge lengths. The assumptions of plane stress and of the incompressibility of the thermoplastic material are taken into account. It follows that the principal stretches in the y and z directions can be written as:

λ₂ = λ₃ = λ₁^(-1/2) (3)

The components of the Cauchy stress tensor reduce to:

σ₂₂ = σ₃₃ = σ₁₃ = σ₂₃ = σ₃₁ = σ₃₂ = 0 (4)

In this work, we consider hyperelastic materials. They are defined by the existence of a scalar function W, called the strain energy function, from which the stresses can be derived at each point. In order to satisfy the objectivity requirement, the strain energy function must be invariant under changes of the observer frame of reference. It is well known that the right Cauchy-Green deformation tensor C is invariant under such changes. Thus, if the strain energy function is written as a function of C, it automatically satisfies the objectivity principle. The general stress-strain relationship is given by the formula:

S = 2 ∂W/∂C (5)

where S is the second Piola-Kirchhoff stress tensor. The different models that exist in the literature define the strain energy as a function of the strain field. The Rivlin theory [18] for isotropic materials describes the energy as a function of the three Cauchy-Green strain invariants I₁, I₂ and I₃.
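The kinematics described above can be checked numerically. The sketch below (an illustration, with an arbitrary stretch value) builds the deformation gradient for incompressible uniaxial tension, forms C = FᵀF, and computes the three invariants, verifying that incompressibility gives I₃ = 1.

```python
# Sketch: invariants of the right Cauchy-Green tensor C = F^T F for an
# incompressible uniaxial stretch.  The stretch value is illustrative.
import numpy as np

lam = 1.04  # principal stretch in x (strains stay near 4% in this study)

# Deformation gradient for incompressible uniaxial tension:
# diag(lam, lam^-1/2, lam^-1/2), so that det(F) = 1
F = np.diag([lam, lam**-0.5, lam**-0.5])
C = F.T @ F

I1 = np.trace(C)                                  # first invariant
I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))     # second invariant
I3 = np.linalg.det(C)                             # third invariant (volume)
```

For this deformation state I₁ = λ² + 2/λ and I₂ = 2λ + 1/λ², while I₃ = 1 confirms that the assumed principal stretches satisfy the incompressibility constraint.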
Based on the assumption of incompressibility of the material, the stress can be obtained from the following Mooney-Rivlin strain energy function [18]:

W = C₁(I₁ - 3) + C₂(I₂ - 3) (6)

where C₁ and C₂ are the material constants. The use of two terms in the series is usually considered sufficient to describe the elastic modulus in both uniaxial and biaxial deformation modes; we will see, however, that to describe the behaviour of our material with more accuracy we have to go beyond the first order.
On the other hand, Ogden [19] proposed to link the strain energy directly to the principal stretches:

W = Σᵢ (μᵢ/αᵢ)(λ₁^αᵢ + λ₂^αᵢ + λ₃^αᵢ - 3) (7)

where μᵢ and αᵢ are material constants. The use of a series of three terms can give very good results in representing the overall behaviour of materials. In addition, the simplicity of the mathematical formulation and the direct use of the extensions make the model widely used in numerical problems of large hyperelastic deformations.
The stress difference for an incompressible hyperelastic material is given by:

σᵢᵢ - σⱼⱼ = λᵢ ∂W/∂λᵢ - λⱼ ∂W/∂λⱼ (8)

For a Mooney-type material, we can generally write, for the first-order development:

λᵢ ∂W/∂λᵢ = 2C₁λᵢ² - 2C₂λᵢ⁻² (9)

and for an Ogden-type material, for the first-order approximation:

λᵢ ∂W/∂λᵢ = μλᵢ^α (10)

Equation (8) can then be rewritten as follows, for a Mooney material:

σ₁₁ - σ₂₂ = 2C₁(λ₁² - λ₂²) - 2C₂(λ₁⁻² - λ₂⁻²) (11)

and for an Ogden material:

σ₁₁ - σ₂₂ = μ(λ₁^α - λ₂^α) (12)

By considering equations (1), (2) and (3), expressions (11) and (12) can be written respectively as:

σ₁₁ - σ₂₂ = 2C₁(λ² - λ⁻¹) - 2C₂(λ⁻² - λ) (13)

σ₁₁ - σ₂₂ = μ(λ^α - λ^(-α/2)) (14)

By substituting in our case, which is simple uniaxial tension, σ₂₂ = 0, and the Cauchy principal stress in the x direction takes the following form, for the Mooney and Ogden materials respectively:

σ₁₁ = 2(λ² - λ⁻¹)(C₁ + C₂/λ) (15)

σ₁₁ = μ(λ^α - λ^(-α/2)) (16)
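The closed-form uniaxial Cauchy stresses of the first-order Mooney-Rivlin and Ogden models given above are simple to evaluate; the sketch below codes them directly (the parameter values used for checking are illustrative, not the identified constants).

```python
# Minimal sketch: closed-form uniaxial Cauchy stresses for the first-order
# Mooney-Rivlin and Ogden models (incompressible material).

def mooney_stress(lam, c1, c2):
    """sigma = 2*(lam^2 - 1/lam)*(C1 + C2/lam) for uniaxial tension."""
    return 2.0 * (lam**2 - 1.0 / lam) * (c1 + c2 / lam)

def ogden_stress(lam, mu, alpha):
    """sigma = mu*(lam^alpha - lam^(-alpha/2)), first-order Ogden."""
    return mu * (lam**alpha - lam**(-alpha / 2.0))
```

Both expressions vanish at λ = 1, as they must, and the Ogden model with α = 2 and μ = 2C₁ reduces to the neo-Hookean special case of the Mooney-Rivlin model with C₂ = 0, which is a convenient cross-check of either implementation.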

EXPERIMENTAL
The grips, whose faces were covered with an elastomeric film to prevent slippage during loading, were mounted on a Zwick-Roell Z005 displacement-controlled tensile testing machine. Dogbone ABS (acrylonitrile-butadiene-styrene) specimens were tested according to the guidelines prescribed by the ASTM D 638-03 standard [20]; Fig. 2 shows the specimen dimensions according to this standard.
Preconditioning of the samples was carried out with a preload of 0.5 MPa with a load speed of 1 mm/min. The stress was determined by dividing the instantaneous load by the original cross-sectional area. The stretch was obtained by dividing the instantaneous gauge length by the original length. The original length of each specimen was taken as the gauge length after test specimen preconditioning and at a load of 0.5 MPa. The gauge length was the vertical distance between the clamps gripping the test specimen.
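The data reduction described above is straightforward; the following sketch codes it for reference. The cross-section and gauge-length values are not taken from the paper: they are the nominal ASTM D638 Type I dimensions, assumed here purely for illustration.

```python
# Sketch of the data reduction: engineering stress and stretch from raw
# load-extension readings.  Specimen dimensions are assumed nominal
# ASTM D638 Type I values, not measurements from this study.
width_mm, thickness_mm = 13.0, 3.2   # assumed nominal cross-section (mm)
gauge_length_mm = 50.0               # assumed nominal gauge length (mm)
area_mm2 = width_mm * thickness_mm   # original cross-sectional area

def eng_stress_mpa(load_n):
    """Engineering stress: instantaneous load over original cross-section."""
    return load_n / area_mm2         # N / mm^2 == MPa

def stretch(extension_mm):
    """Stretch: instantaneous gauge length over original gauge length."""
    return (gauge_length_mm + extension_mm) / gauge_length_mm
```

With these assumed dimensions, a 2 mm extension corresponds to a stretch of 1.04, i.e. the roughly 4% strain at which the specimens are later reported to fail.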
The tension experiment was carried out until the specimen's failure; the stress-versus-strain evolution was recorded simultaneously using a data acquisition system. Note that all tests were carried out at a controlled ambient temperature of 25°C.

NEURAL NETWORKS MODELING
An artificial neural network is a set of interconnected neurons whose connections carry weights. These weights assume values that define the behavior of the whole structure. The adaptation of the latter, through a suitable learning algorithm, allows the network to learn, memorize and generalize information.

Architecture
In general, a multilayer network is composed of an input layer, an output layer and a number of hidden layers [22]. Each neuron is connected to all the neurons of the next layer via connections whose weights are real numbers. Usually, a neuron in the hidden layers or in the output layer performs a weighted summation of its inputs followed by a nonlinear transformation, which is often the sigmoid function, while a neuron in the input layer simply passes its input to its output.

Learning Process
During the learning process, a neural network changes its behavior so as to move towards a clearly defined goal. In other words, learning consists of adjusting the connection weights so that the outputs of the network are as close as possible to the desired outputs for all the examples used.
Initially, the learning of a multilayer network was carried out with the gradient method. Since then, most optimization methods have been applied to this problem, such as the conjugate gradient method, the Levenberg-Marquardt algorithm [4] or Kalman filters [23].
The literature on the learning of multilayer neural networks is thus very rich. However, we limit our study to the back-propagation algorithm combined with a Levenberg-Marquardt optimization algorithm [24], which is a modified version of the gradient method and remains the most widely used. It is an optimization algorithm that seeks to minimize a cost function measuring the gap between the desired output and that delivered by the network [25].
The back-propagation algorithm can be described in two phases. First, a propagation (or relaxation) phase: the input values are presented to the network's input layer, and the network propagates the information from one layer to the next up to the output layer, which gives the response of the network. Second, the back-propagation phase: the difference between the outputs of the network and the desired outputs is computed and used to change the weights of the output neurons; the error is then back-propagated towards the input layer, which allows the adaptation of the hidden-layer weights.
In our network, we used a back-propagation learning algorithm associated with a Levenberg-Marquardt optimization algorithm [26], with a transfer function of sigmoid type. In our architecture, the error in the presentation of an example, denoted by E, can be given by the following expression:

E = (1/2) Σₖ (dₖ - yₖ)²

where dₖ and yₖ are the desired and computed outputs of neuron k of the output layer, and f is the sigmoid activation function defined by:

f(x) = 1 / (1 + e⁻ˣ)

The adjustment of the network weights by the back-propagation algorithm is ensured through the following iterative equation:

w(i+1) = w(i) - ε ∂E/∂w(i)

where ε and i denote the step and the iteration number.
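The update rule above can be sketched in a few lines. The example below runs one back-propagation step on a single-hidden-layer perceptron with sigmoid activations; the layer sizes, the desired output vector and the step ε are illustrative assumptions, not the network actually used for the identification.

```python
# Minimal sketch of sigmoid error back-propagation on a one-hidden-layer
# perceptron.  Sizes and values are illustrative, not the paper's network.
import numpy as np

def f(x):                                  # sigmoid transfer function
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x = rng.random(4)                          # input (excitation) vector
d = np.array([0.2, 0.7])                   # desired output (two constants)
W1 = rng.standard_normal((3, 4)) * 0.5     # input  -> hidden weights
W2 = rng.standard_normal((2, 3)) * 0.5     # hidden -> output weights
eps = 0.1                                  # iteration step

def forward(W1, W2):
    h = f(W1 @ x)                          # hidden-layer outputs
    y = f(W2 @ h)                          # network response
    return h, y

h, y = forward(W1, W2)
E0 = 0.5 * np.sum((d - y) ** 2)            # error before the update

# Back-propagation: chain rule, using f'(net) = y*(1 - y) for the sigmoid
delta2 = (y - d) * y * (1 - y)             # dE/dnet at the output neurons
delta1 = (W2.T @ delta2) * h * (1 - h)     # error propagated to hidden layer
W2 = W2 - eps * np.outer(delta2, h)        # w(i+1) = w(i) - eps * dE/dw
W1 = W1 - eps * np.outer(delta1, x)

_, y_new = forward(W1, W2)
E1 = 0.5 * np.sum((d - y_new) ** 2)        # error after one update
```

A single small gradient step decreases the error E for this example; repeating the update over all examples of the learning base is the training loop described above.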
The derivative of E with respect to a weight depends on its position in the network. If the weight connects the hidden layer to the output layer, then:

∂E/∂wₖⱼ = -(dₖ - yₖ) f′(netₖ) hⱼ

and if the weight connects the input layer to the hidden layer, then:

∂E/∂wⱼᵢ = -[Σₖ (dₖ - yₖ) f′(netₖ) wₖⱼ] f′(netⱼ) xᵢ

where netₖ is the weighted sum received by neuron k, hⱼ the output of hidden neuron j, and xᵢ the input i. It should be emphasized that the convergence of the back-propagation algorithm depends on a variety of parameters, namely:
• The complexity of the database and the order of presentation of the examples. Indeed, various studies have shown that a random presentation of the examples allows faster learning [25].
• The structure of the considered neural network, and especially the number of neurons in the hidden layers, which must be neither too large, to preserve the generalization ability of the network, nor too small, to preserve its learning capability.
• The initial setting of the algorithm parameters: essentially, the initial values of the weights and the iteration step must be chosen so as to allow fast convergence.
The multilayer neural network that we propose has the architecture shown in Fig. 3 below, where X₁ and X₂ denote C₁ and C₂ for the Mooney material, and μ and α for the Ogden-type material.
The used network consists of an input layer, hidden layers and an output layer. The number of neurons in the output layer depends on the response expected from the network. For the first-approximation identification program, two neurons are required in the output layer to give the response to the excitation. The two output neurons represent the material parameters embedded in the Mooney-Rivlin and Ogden models, while the input of the network is the variance of the corresponding stresses. For the second- or third-approximation models, the output layer takes more than two neurons, depending on the representation of the behavior model.
The simulation data are used for network learning, and the error back-propagation algorithm is used for the perceptron learning. The Levenberg-Marquardt optimization algorithm is also used to minimize the error during the back-propagation operation. Generally, the use of at most three hidden layers gives satisfying results; this optimization technique is more powerful than other optimization algorithms, but it requires more CPU memory.
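One way to picture the learning base described above is as pairs of (stress response, material constants). The sketch below builds such a base; as an assumption for illustration, the finite element solution is replaced by the closed-form uniaxial Mooney-Rivlin stress, and the parameter ranges are arbitrary, not those of the paper.

```python
# Sketch of a learning base for the identification network: each example
# pairs a stress curve (network input) with the material constants that
# produced it (desired output).  The forward FE solution is replaced here
# by the closed-form uniaxial Mooney-Rivlin stress; ranges are illustrative.
import numpy as np

lam = np.linspace(1.0, 1.04, 11)           # stretch grid (about 4% strain)

def mooney_uniaxial(lam, c1, c2):
    """Closed-form incompressible uniaxial Mooney-Rivlin Cauchy stress."""
    return 2.0 * (lam**2 - 1.0 / lam) * (c1 + c2 / lam)

rng = np.random.default_rng(1)
n_examples = 200
params = np.column_stack([rng.uniform(0.5, 5.0, n_examples),    # C1 (MPa)
                          rng.uniform(0.0, 2.0, n_examples)])   # C2 (MPa)

# Inputs: one stress curve per example; targets: the (C1, C2) pair
inputs = np.array([mooney_uniaxial(lam, c1, c2) for c1, c2 in params])
targets = params
```

Training the perceptron on such pairs inverts the forward model: presented with a measured stress curve, the network returns the material constants, which is the role the ANN plays in this study.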

RESULTS AND DISCUSSION
Fig. 4. Meshed geometry used for the finite element modeling
Generally, the specimen reaches the failure point at a strain of around 4%, which justifies the choice of this value for the hyperelastic models. Both the Mooney-Rivlin and Ogden models represent the mechanical behavior of the material. The relative error nevertheless remains significant, and its maximum value reaches up to 30% at some measurement points (Fig. 6). This error is only marginally acceptable and justifies our choice to proceed to the second order for both models, in order to describe the mechanical behavior of the ABS flat plate more accurately.
Theoretical material constants for the Mooney-Rivlin and Ogden models are obtained by using the neural networks approach for ABS. The material constants are given in Table 1 for the Mooney-Rivlin and Ogden models respectively.
Fig. 3. Architecture of the network: an input layer containing an excitation vector, hidden layers, and an output layer containing a response vector

Going beyond the first order for both the hyperelastic Mooney-Rivlin and Ogden models proves to be delicate: the material constants become unstable, and the identification neural network algorithm becomes heavy and time consuming. However, the obtained results show the reliability of the used approach. Table 2 presents the hyperelastic material constants for the second-order Mooney-Rivlin and Ogden models, while Fig. 7 shows the concordance between the Mooney-Rivlin and Ogden models. It is observed that the relative error barely exceeds 9%, and the behavior can be reproduced more accurately. The numerical model for ABS fits the experimental data much better. In order to minimize the difference between the calculated and measured parameters and so determine the material constants of the Mooney-Rivlin and Ogden models, a neural algorithm (ANN model) is applied as a last step. The developed technique is successfully implemented with two constitutive models for the ABS material.

Fig. 5. Deformed geometry and von Mises stress distribution in the last step
The error obtained in the first-order development of both the Mooney-Rivlin and Ogden models is around 30%; moving to the second order of the hyperelastic models proves more suitable, and the error between the calculated stress and the experimental results is reduced to 4%. However, this choice is much more expensive in terms of CPU calculation time. The material data obtained in this work can be implemented into industrial and academic software for application and design purposes.