A rapid multi-scale design tool for the prediction of wrinkle defect formation in composites

The consolidation of uncured material is an important factor in the design and manufacture of thick laminated composite structures, since it is a key driver for part quality and the formation of defects. A new modelling approach and its implementation are presented here. The constitutive relation, based on kinematic enrichment, has been derived from the orientation and volume of the micro-constituents of the material, their respective constitutive laws and the orientation of the interfaces between them. It is applicable to any layered structure, in particular those made of soft anisotropic materials. The proposed method has been implemented in a commercial Finite Element (FE) software package via a user material. Its ability to predict wrinkles during the manufacture of laminated composites demonstrates its performance as a design tool, as this provides a challenging test case for any numerical platform.


H I G H L I G H T S
• A computationally efficient modelling framework for the mechanical response of soft layered structures is proposed.
• The effect of microstructural features is captured, and the predicted behaviours are independent of the mesh size.
• The prediction of wrinkles formed during the manufacture of laminated composites illustrates the scheme's performance.
• Run times were reduced from 2 weeks (using a high-fidelity model) to below 30 minutes with minimal compromise in accuracy.

Introduction
The macroscale mechanical properties of materials are intimately linked to the way their micro-structure responds to loading and the interaction between different phases within the material. Therefore, a current trend in engineering is to design materials at the meso-, micro- and even nano-scale in order to obtain specific structural properties. Many examples can be found in the fields of composite materials [1,2] and additive manufacturing [3,4]. However, as manufacturing a structure and subsequently testing it is both costly and time consuming, the full potential of the concept can only be realised if fast and predictive design tools, allowing the "virtual" exploration of as much of the design space as possible, are available.
Modelling the influence of the inner micro-structure on the apparent macroscopic properties of a material has been an active field of research since the early 1900s and the formulation of the generalised continuum theory [5]. However, it only started to gain momentum at the end of the 1960s [6]. Since then, a number of methods aiming at amending the classical continuum theory to include a length characteristic of the material's inner structure have been proposed. These methods include non-local [7] and gradient theories [8]. Although these have mainly been used to predict failure, recent work has shown the relevance of these methods to model the influence of microstructural mechanisms (e.g. yarn bending) in the simulation of the deformation of technical textiles [9] and 3D woven fabrics [10], and to model how the deformation of each individual layer affects the formation of out-of-plane wrinkles during the manufacture of laminated composites [11]. The main idea behind generalised continuum theory is to smooth the deformation over a volume of the size of the localisation area. Hence, to allow proper regularisation, the spatial discretisation needs to be sufficiently refined. Moreover, these methods require some refinement of the continuum mechanics theory that involves either the use of additional degrees of freedom [11] or the extra computation of gradient [9,10] or non-local terms. As a result, they tend to be difficult to implement and CPU-intensive.
Multi-scale modelling is a more direct way of accounting for micro-structural effects on the macro-structural response. Several frameworks have been proposed over the years (see Ref. [12]) and they can roughly be classified into 2 categories, which can be combined, as in Ref. [13]. Concurrent methods refer to cases where different sub-volumes of the global domain are solved with various resolutions and scales. Hierarchical methods, on the other hand, describe the set of techniques where different scales are solved and coupled in the same part of the domain. The underlying assumption behind classical multi-scale modelling is that, at the structural scale, the material appears to be homogeneous. Under this assumption, the effective properties for the macro-scale can be formulated by investigating the behaviour of a "small" heterogeneous, statistically representative volume of material at the micro-scale across the macro-domain (i.e. the RVE). The requirement for how "small" the RVE needs to be (i.e. the scale separation principle) to capture size effects properly has recently been studied in Ref. [14]. It was shown that, for classical homogenisation, the RVE needs to be at least 1 order of magnitude smaller than the size of the macro-domain, but that this requirement becomes less stringent as the homogenisation order increases. As they require running multiple analyses for the different scales considered, multi-scale methods need a lot of memory and are CPU-intensive. Therefore, they are not well suited to the modelling of full-size components (e.g. an Airbus A320 wing is about 15 m long) or to design studies, which require hundreds or thousands of simulations of the same model with slightly different parameters each time.
Recently, aiming at modelling localised failure in geomaterials, Nguyen et al. [15] proposed a homogenisation framework that enriches the macro-scale strain field by considering the strains in each of the micro-constituents of the material and the internal equilibrium conditions at each of their interfaces. The developing crack is described as a weak discontinuity linked to the rest of the otherwise homogeneous block of material through the macro-homogeneity condition [16]. Only inelastic loading inside the weak discontinuity and elastic unloading outside it were originally considered [17,18], but this was later generalised as a homogenisation tool for heterogeneous structures where the micro-constituents are non-linear [19]. The scheme allows the formulation of a constitutive relation that possesses an intrinsic length scale and provides the same degree of enhancement as more mathematically complex theories. Despite being memory intensive, the scheme allows fast simulations using minimal CPU resources and is well adapted for design purposes. Moreover, it can be used in frameworks other than FE, such as mesh-free methods [20].
Fibre reinforced polymers (FRP) are a good example of heterogeneous materials (see Fig. 1) that can be designed at the meso- and micro-scale to obtain the desired mechanical performance. To minimise weight, FRP parts are designed with non-constant thickness. This leads to the existence of thickness transition regions where the periodicity of the repeating pattern is lost and where continuum mechanics rules cannot apply. Similarly, in thin sections, the size of the heterogeneities (i.e. the layers or plies) through the thickness violates the scale separation principle. Therefore, most of the predictive tools currently available to composites designers have been developed for composite structures only. They are based on analytical formulations and plate theories [21] that often assume that the material is elastic and are only applicable to thin parts. Another limitation is that they seldom account for the manufacturing-induced defects and design features that inevitably occur [22] and that can have a significant knock-down effect on a part's structural performance [23,24]. Hence, safety factors used across the industry are high and the resulting designs are over-conservative.
The present contribution shows how the kinematic enrichment approach can provide a viable way to analyse the manufacture of laminated structures, as a customised version of a method that is otherwise more general and can be deployed for any type of material. The paper focuses on the analysis of the numerically challenging case of consolidation-induced wrinkling defects formed during the manufacture of composites, for which the kinematic enrichment approach is reformulated within the large deformation framework. It is shown that, in comparison to previous work [25,26] where every single ply of the structure was modelled, the approach allows for considerably faster run times, with minimal compromise in accuracy.

Large deformation quantities definitions
This section gives the definition of certain tensorial quantities used to reformulate the kinematically enriched constitutive modelling scheme in the large deformation framework. Fig. 2 shows that the deformation gradient, F, can be multiplicatively decomposed as the superposition of a rigid body rotation, R, and the right stretch tensor, U. As R is independent of the type of material considered, it is preferable to keep any constitutive law independent of it. This justifies the choice made here to characterise the material deformation through the Green-Lagrange strain, E:

\[ \mathbf{E} = \frac{1}{2}\left(\mathbf{C} - \mathbf{I}\right) = \frac{1}{2}\left(\mathbf{F}^{T}\mathbf{F} - \mathbf{I}\right) \tag{1} \]

where C = FᵀF is the right Cauchy-Green deformation tensor, I is the identity tensor and Aᵀ denotes the transpose of A.
The Cauchy stress, σ, is a measure of the force, df, acting on an element of area dS in the deformed configuration (Y on Fig. 2). It is an Eulerian quantity expressed as df = σᵀ·n dS (where n is the normal to dS). Its determination requires the definition of a new frame every time the material undergoes deformation, which makes the formulation of a constitutive law in the finite strain theory framework tedious. Therefore, it is usually preferred to formulate constitutive laws based on other stress measures (and their work conjugates, so that the strain energy is conserved). Hence, here, the 2nd Piola-Kirchhoff stress, S, is used. S is a symmetric tensor (i.e. it is purely defined in the material frame) and is work-conjugate with E. As illustrated in Fig. 2, S is formed by pulling back df into the reference configuration, so that we have:

\[ \mathbf{F}^{-1}\,d\mathbf{f} = \mathbf{S}^{T}\cdot\mathbf{n}_{0}\,dS_{0} \]

where n₀ is the normal vector to the element of area (dS₀) in the undeformed configuration (Y₀). It follows that σ can be determined through the relation:

\[ \boldsymbol{\sigma} = \frac{1}{J}\,\mathbf{F}\,\mathbf{S}\,\mathbf{F}^{T} \tag{2} \]

where J controls the volume change of the material and is defined as J = det(F).
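As an illustration, the kinematic and stress quantities above can be evaluated numerically. The following is a minimal sketch (the function names are illustrative, not part of the implementation described in this paper):

```python
import numpy as np

def green_lagrange(F):
    """Green-Lagrange strain E = 0.5 * (C - I), with C = F^T F (Eq. (1))."""
    C = F.T @ F                       # right Cauchy-Green deformation tensor
    return 0.5 * (C - np.eye(3))

def cauchy_from_pk2(F, S):
    """Push the 2nd Piola-Kirchhoff stress forward to the Cauchy stress,
    sigma = (1/J) F S F^T (Eq. (2)), with J = det(F) the volume change."""
    J = np.linalg.det(F)
    return (F @ S @ F.T) / J

# A 10% uniaxial stretch along direction 1, no rotation (F = U, R = I)
F = np.diag([1.1, 1.0, 1.0])
E = green_lagrange(F)                 # E[0, 0] = 0.5 * (1.1**2 - 1) = 0.105
```

Note that for F = I (undeformed configuration) the Cauchy and 2nd Piola-Kirchhoff stresses coincide, as expected from Eq. (2).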

Kinematic enrichment
In this section, the concept of kinematically enriched constitutive modelling is reformulated in the large deformation theory framework. The apparent response of a material made from two phases that join at an interface of normal n is derived from the knowledge of the material laws of the two micro-structural constituents and their volume fractions (f⁽¹⁾ and f⁽²⁾), defined as the ratios of their volumes (V₁ and V₂) to the total volume of the material (i.e. V = V₁ + V₂). The case where the two phases are non-linear, anisotropic and exhibit large deformation is considered. Finally, the concept is both generalised and particularised for the description of laminated structures.
The formulation of the apparent constitutive law of a bi-material (Fig. 3) starts by making the assumption, widely used in multi-scale mechanics (see Ref. [12]), that the apparent deformation gradient is the unweighted volume average of the deformation gradient in each of the phases:

\[ \bar{\mathbf{F}} = f^{(1)}\mathbf{F}^{(1)} + f^{(2)}\mathbf{F}^{(2)} \tag{3} \]

where Ā denotes the apparent value of A in the bi-material.
The second main equation used comes from the assumption that the two constituent materials cannot separate. This implies that the deformation is continuous in the interface's plane and that the strain components in that plane are the same in the two phases. As these have different properties, the stress tensor is discontinuous across the interface. On the other hand, the stress equilibrium imposes the continuity of the out-of-plane stresses and the discontinuity of the out-of-plane strains (hence the analogy made here with the modelling of material failure). In the original kinematic enrichment formulation [15], the strain in material 2 (see Fig. 3) was expressed as the sum of the strain in material 1 and of an extra term acting only perpendicularly to the interface. Similar considerations were made by Leone [27] in the context of failure in composite materials at large deformation. They proposed a decomposition of the deformation gradient which holds many similarities with Nguyen et al.'s kinematic enrichment. Inspired by these examples, the deformation gradient in material 2 is decomposed as:

\[ \mathbf{F}^{(2)} = \mathbf{F}^{(1)} + \tilde{\mathbf{F}} \tag{4} \]

where F̃ is a kinematic enrichment term for the deformation gradient, which can be written as the tensor product of an enrichment vector with the interface normal n. n points towards the direction 3 of a local coordinate system attached to the interface.
To simplify the equations derived, the Mandel-Voigt notation (where stresses and strains and their related rates are stored as column vectors rather than matrices) is adopted in the rest of Section 2. Conveniently, in the simple case of the bi-material considered here, the averaging scheme of Eq. (3) is carried over to the Green-Lagrange strain and its rate. Thus, starting by combining Eq. (3) with Eq. (4), it is demonstrated in Appendix A (where tensorial rather than Voigt notations are used) that the apparent strain rate in the bi-material is expressed as:

\[ \dot{\bar{\mathbf{E}}} = f^{(1)}\dot{\mathbf{E}}^{(1)} + f^{(2)}\dot{\mathbf{E}}^{(2)} \tag{5} \]

where Ė⁽ᵐ⁾ is the strain rate in material m (m = 1 or 2). It is, similarly, shown in Appendix B that Eq. (4) yields:

\[ \dot{\mathbf{E}}^{(2)} = \dot{\mathbf{E}}^{(1)} + \dot{\tilde{\mathbf{E}}}, \qquad \dot{\tilde{\mathbf{E}}} = \mathbf{N}\dot{\boldsymbol{\beta}} \tag{6} \]

where Ẽ̇ is a strain rate enrichment term, which acts only through the traction components on the interface and is written in Voigt notation as the product of a projection matrix N (built from the interface normal) with an enrichment rate vector β̇. The last equation coming into play derives from the Hill-Mandel macro-homogeneity condition [16], which expresses the apparent virtual work done by the bi-material as the volume average of the virtual works done by each of the constituent materials:

\[ \bar{\mathbf{S}}^{T}\delta\dot{\bar{\mathbf{E}}} = f^{(1)}\mathbf{S}^{(1)T}\delta\dot{\mathbf{E}}^{(1)} + f^{(2)}\mathbf{S}^{(2)T}\delta\dot{\mathbf{E}}^{(2)} \tag{7} \]

It is important to note that, although different physical quantities are used, Eqs. (5)-(7) are mathematically completely equivalent to those derived in the multiple publications by Nguyen et al. (see Ref. [15] for example). From now on, the same mathematical development is therefore followed. Substituting Eq. (5) into Eq. (6) gives rise to:

\[ \dot{\mathbf{E}}^{(1)} = \dot{\bar{\mathbf{E}}} - f^{(2)}\dot{\tilde{\mathbf{E}}}, \qquad \dot{\mathbf{E}}^{(2)} = \dot{\bar{\mathbf{E}}} + f^{(1)}\dot{\tilde{\mathbf{E}}} \tag{8} \]

Eq. (8) can then be inserted into Eq. (7), thus leading to:

\[ \bar{\mathbf{S}}^{T}\delta\dot{\bar{\mathbf{E}}} = \left(f^{(1)}\mathbf{S}^{(1)} + f^{(2)}\mathbf{S}^{(2)}\right)^{T}\delta\dot{\bar{\mathbf{E}}} + f^{(1)}f^{(2)}\left(\mathbf{S}^{(2)} - \mathbf{S}^{(1)}\right)^{T}\delta\dot{\tilde{\mathbf{E}}} \tag{9} \]

Since Eq. (9) must be verified for any values of δĖ̄ and δẼ̇, it follows that:

\[ \bar{\mathbf{S}} = f^{(1)}\mathbf{S}^{(1)} + f^{(2)}\mathbf{S}^{(2)} \tag{10} \]

and

\[ \mathbf{N}^{T}\mathbf{S}^{(1)} = \mathbf{N}^{T}\mathbf{S}^{(2)} \tag{11} \]

Eq. (10) provides a direct link between the apparent stress in the bi-material and the respective stresses of its constituents. Eq. (11) ensures that the continuity of the traction vector across the interface between the two homogeneous materials is met. In the case of non-linear materials, these equations are used in their incremental forms:

\[ \dot{\bar{\mathbf{S}}} = f^{(1)}\dot{\mathbf{S}}^{(1)} + f^{(2)}\dot{\mathbf{S}}^{(2)} \tag{12} \]

\[ \mathbf{N}^{T}\dot{\mathbf{S}}^{(1)} = \mathbf{N}^{T}\dot{\mathbf{S}}^{(2)} \tag{13} \]

The two phases of the bi-material are assumed to be locally homogeneous. Their mechanical behaviour (i.e. deformation) is controlled by a strain energy density function (X⁽ᵐ⁾) that, in the most general case, depends on the Green-Lagrange strain, its rate and as many tensorial state variables as necessary. The non-linear mechanical behaviour described by X⁽ᵐ⁾(E⁽ᵐ⁾, Ė⁽ᵐ⁾, P₁⁽ᵐ⁾, …, Pₙ⁽ᵐ⁾) can be approximated by a piecewise-linear function, so that the incremental constitutive relation of material m (m = 1 or 2) can be expressed as:

\[ \dot{\mathbf{S}}^{(m)} = \mathbf{D}^{(m)}\dot{\mathbf{E}}^{(m)} \tag{14} \]
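The stress averaging of Eq. (10) and the traction-continuity check of Eq. (11) are simple to evaluate once a Voigt convention is fixed. The following sketch assumes a Voigt ordering of [11, 22, 33, 12, 13, 23] and an interface normal along direction 3; the helper names are illustrative:

```python
import numpy as np

# Projection matrix N: picks the traction components (33, 13, 23) acting
# on an interface whose normal is direction 3, out of a Voigt stress
# vector ordered [11, 22, 33, 12, 13, 23] (an assumed convention).
N = np.zeros((6, 3))
N[2, 0] = N[4, 1] = N[5, 2] = 1.0

def apparent_stress(f1, S1, S2):
    """Volume-averaged apparent stress of the bi-material (Eq. (10))."""
    return f1 * S1 + (1.0 - f1) * S2

def traction_residual(S1, S2):
    """Traction-continuity residual across the interface (Eq. (11));
    a zero vector means the interface is in equilibrium."""
    return N.T @ S1 - N.T @ S2
```

Two stress states that share the same 33, 13 and 23 components satisfy Eq. (11) exactly, however different their in-plane components are, which is precisely the weak-discontinuity picture described above.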

Finite element implementation

Tangent stiffness
Eqs. (8) and (12)-(14) completely describe the apparent behaviour of the bi-material based on the individual behaviours of its constituents. The implementation of this behaviour as a user material subroutine for any FE package is based on the expression of the kinematic enrichment term Ẽ̇ as a function of the apparent strain rate in the bi-material, the respective volume fractions of the two phases and their mechanical behaviours, which are controlled by their tangent stiffnesses D⁽¹⁾ and D⁽²⁾. Inserting Eq. (8) into Eq. (14) leads to:

\[ \dot{\mathbf{S}}^{(1)} = \mathbf{D}^{(1)}\left(\dot{\bar{\mathbf{E}}} - f^{(2)}\dot{\tilde{\mathbf{E}}}\right), \qquad \dot{\mathbf{S}}^{(2)} = \mathbf{D}^{(2)}\left(\dot{\bar{\mathbf{E}}} + f^{(1)}\dot{\tilde{\mathbf{E}}}\right) \tag{15} \]

Eq. (15) can then be introduced into Eq. (13) and rearranged to give:

\[ \mathbf{N}^{T}\left(\mathbf{D}^{(1)} - \mathbf{D}^{(2)}\right)\dot{\bar{\mathbf{E}}} = \mathbf{N}^{T}\left(f^{(2)}\mathbf{D}^{(1)} + f^{(1)}\mathbf{D}^{(2)}\right)\mathbf{N}\,\dot{\boldsymbol{\beta}} \]

which can be simplified noting A = Nᵀ(f⁽²⁾D⁽¹⁾ + f⁽¹⁾D⁽²⁾)N:

\[ \dot{\boldsymbol{\beta}} = \mathbf{A}^{-1}\,\mathbf{N}^{T}\left(\mathbf{D}^{(1)} - \mathbf{D}^{(2)}\right)\dot{\bar{\mathbf{E}}} \tag{16} \]

The combination of Eqs. (16), (15) and (12) finally allows the apparent stiffness matrix of the bi-material, which relates the increment of the apparent 2nd Piola-Kirchhoff stress to the increment of the apparent Green-Lagrange strain, to be expressed as:

\[ \bar{\mathbf{D}} = f^{(1)}\mathbf{D}^{(1)} + f^{(2)}\mathbf{D}^{(2)} - f^{(1)}f^{(2)}\left(\mathbf{D}^{(1)} - \mathbf{D}^{(2)}\right)\mathbf{N}\,\mathbf{A}^{-1}\,\mathbf{N}^{T}\left(\mathbf{D}^{(1)} - \mathbf{D}^{(2)}\right) \tag{17} \]
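The condensation described by Eqs. (16) and (17) reduces to a few lines of linear algebra. The sketch below assumes D1 and D2 are 6×6 Voigt tangent stiffness matrices and N is the 6×3 interface projection matrix; the function names are illustrative:

```python
import numpy as np

def enrichment_rate(D1, D2, f1, N, E_dot):
    """Solve Eq. (16) for the enrichment rate vector beta_dot, so that
    E_tilde_dot = N @ beta_dot restores traction-rate continuity."""
    f2 = 1.0 - f1
    A = N.T @ (f2 * D1 + f1 * D2) @ N          # 3x3 interface operator
    b = N.T @ (D1 - D2) @ E_dot
    return np.linalg.solve(A, b)

def apparent_stiffness(D1, D2, f1, N):
    """Condensed apparent stiffness of the bi-material (Eq. (17))."""
    f2 = 1.0 - f1
    A = N.T @ (f2 * D1 + f1 * D2) @ N
    dD = D1 - D2
    return f1 * D1 + f2 * D2 - f1 * f2 * dD @ N @ np.linalg.solve(A, N.T @ dD)
```

When the two phases are identical (D1 = D2), the correction term vanishes and the apparent stiffness reduces to the common stiffness, as Eq. (17) requires.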

Stress return algorithm
As the constituent materials are non-linear and the strain rates are not infinitesimal, the insertion of Eq. (15) into Eq. (13) is not sufficient to guarantee that the traction continuity condition is satisfied. Small residual stresses progressively build up at the interface, increment after increment, and lead to an increasingly inaccurate solution and convergence difficulties of the numerical scheme. An iterative procedure (Newton-Raphson) which ensures that these residual stresses (r) stay minimal is therefore required. The procedure starts by defining:

\[ \mathbf{r} = \mathbf{N}^{T}\mathbf{S}^{(1)} - \mathbf{N}^{T}\mathbf{S}^{(2)} \tag{18} \]

Based on the previously reached stress states in the two constituent materials, the first order Taylor expansion of r gives:

\[ \mathbf{r}_{new} = \mathbf{r}_{previous} + \mathbf{N}^{T}d\mathbf{S}^{(1)} - \mathbf{N}^{T}d\mathbf{S}^{(2)} \tag{19} \]

where dS⁽¹⁾ and dS⁽²⁾ are the iterative stresses in materials 1 and 2 respectively. An incremental form of Eq. (15) then allows one to write:

\[ d\mathbf{S}^{(1)} = -f^{(2)}\mathbf{D}^{(1)}\mathbf{N}\,d\dot{\boldsymbol{\beta}}, \qquad d\mathbf{S}^{(2)} = f^{(1)}\mathbf{D}^{(2)}\mathbf{N}\,d\dot{\boldsymbol{\beta}} \tag{20} \]

As r should be null, r_new = 0 is enforced, finally leading to:

\[ d\dot{\boldsymbol{\beta}} = \left[\mathbf{N}^{T}\left(f^{(2)}\mathbf{D}^{(1)} + f^{(1)}\mathbf{D}^{(2)}\right)\mathbf{N}\right]^{-1}\mathbf{r}_{previous} \tag{21} \]

ΔĒ is given by the FE code at the beginning of the increment. It is used to calculate a first guess for Ẽ̇ using Eq. (16). This then feeds into Eq. (15) and allows an estimate for r = r_previous to be calculated. If the convergence criterion ‖r_previous‖ / ‖NᵀS⁽¹⁾‖ < TOLERANCE is met, then Ẽ̇ is used to calculate Ṡ̄ using Eq. (17). If, on the other hand, convergence is not achieved, the value of Ẽ̇ is updated using Eq. (21) and the whole process is repeated until convergence is reached. Throughout the paper, TOLERANCE = 10⁻⁴ is used.
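The stress-return procedure above can be sketched as the following loop. The callables stress1 and stress2, standing in for the (possibly non-linear) constituent material laws, and the other names are assumptions of this illustration:

```python
import numpy as np

def stress_return(stress1, stress2, D1, D2, f1, N, E_dot,
                  tol=1e-4, max_it=25):
    """Iterate on the enrichment rate until the interfacial traction
    residual r (Eq. (18)) falls below tolerance (Eqs. (19)-(21)).
    stress1/stress2 map a phase strain rate to a phase stress."""
    f2 = 1.0 - f1
    A = N.T @ (f2 * D1 + f1 * D2) @ N
    # First guess from the linearised problem (Eq. (16))
    beta = np.linalg.solve(A, N.T @ (D1 - D2) @ E_dot)
    for _ in range(max_it):
        S1 = stress1(E_dot - f2 * (N @ beta))      # Eq. (15), phase 1
        S2 = stress2(E_dot + f1 * (N @ beta))      # Eq. (15), phase 2
        r = N.T @ (S1 - S2)                        # Eq. (18)
        if np.linalg.norm(r) < tol * max(np.linalg.norm(N.T @ S1), 1e-30):
            break
        beta += np.linalg.solve(A, r)              # Eq. (21) update
    return S1, S2, beta
```

For linear phase laws the first guess is already exact and the loop exits on the first residual check; the iterations only do work when the constituent laws are non-linear, which is exactly the situation the stress return is designed for.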

Implementation of the laminate constitutive relation
The proposed scheme derived for a bi-material lends itself well to generalisation to laminated structures. Instead of deriving cumbersome analytical equations, a laminate is constructed by a series of homogenisation steps over pairs of layers, following the procedure described above. The macroscopic response obtained from the homogenisation of the first 2 layers of the laminate is homogenised with the 3rd layer. The result of this procedure is then homogenised with a fourth layer, and so on up to the n-th layer of the laminate. The method holds some similarities with the work by Nguyen et al. [28], who described the progressive development of multiple shear bands in porous sandstones subjected to high compressive loading. In the present case, however, the volume of the homogenised material evolves as new layers are considered.
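The recursive two-layer procedure can be sketched as follows, with homogenise_pair standing for the bi-material step (e.g. Eq. (17)); the function and parameter names are illustrative:

```python
def laminate_stiffness(layer_D, layer_t, N, homogenise_pair):
    """Homogenise an n-layer laminate by repeated two-phase steps:
    layers 1 and 2 first, the result with layer 3, and so on.
    layer_D:   per-layer tangent stiffnesses
    layer_t:   per-layer thicknesses (set the volume fractions)
    homogenise_pair(D1, D2, f1, N): apparent stiffness of a bi-material
    """
    D, t = layer_D[0], layer_t[0]
    for Di, ti in zip(layer_D[1:], layer_t[1:]):
        f1 = t / (t + ti)    # fraction of the already-homogenised block
        D = homogenise_pair(D, Di, f1, N)
        t += ti              # the homogenised volume grows at each step
    return D
```

Note that the volume fraction f1 is recomputed at every step from the accumulated thickness, reflecting the remark above that the volume of the homogenised material evolves as new layers are considered.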
The implementation of this procedure in the commercial FE package Abaqus (see Fig. 4) starts with the mapping of the structure's layup (see Fig. 1) onto the FE mesh (more details are provided later in Section 4.2). As a function of their zone of influence, each integration point is assigned a layup description containing information on the material's stacking sequence, properties and dimensions. The information is saved as .csv files. To avoid multiple file openings, writes and closings slowing down the code's execution, a UEXTERNALDB subroutine was written. This allows the information contained in these .csv files to be extracted and stored (before any calculation is made) in a Fortran module that makes the data accessible at any time during the simulation. Additional modules are created to store each material's strain and strain rate at each integration point, at both the current and previous increments. The "strains and their rates in the previous increment" module is updated with the values of the "current strains and strain rates" module when the subroutine is called at the beginning of an increment.
The stress tensors and the stiffness matrix are calculated in the subroutine and returned to the software. Each time the UMAT is called (i.e. at each integration point and at each increment), the layup description at the given integration point is read from the "layup description" module. This data is then stored in an allocatable array, the size of which is controlled by the number of materials in the laminate at the integration point under consideration. The homogenised deformation gradient and velocity gradient (passed by the software) are used to calculate the Green-Lagrange strain and its rate for the homogenised block (Eq. (1)). A loop, ranging from the number of materials in the laminate (n) down to 2, is then initiated to progressively decompose the homogenised strains and their rates into strains and strain rates in the different layers of the structure (Eqs. (14), (16) and (8)). At each increment of the loop, the size of the homogenised block decreases. Eq. (15) finally allows the corresponding stress tensors to be calculated. If the stress residual on the interface between the material and the homogenised block is greater than a certain tolerance, the kinematic strain enrichment term is updated (Eq. (21)) and the procedure is repeated. When r is small enough, the loop over the material's constituents is decremented by 1. Finally, once all the stresses in the different constituents have been updated, a re-homogenisation process, which determines the homogeneous approximation of the stress and of the tangent stiffness matrix of the laminated structure, is triggered. Starting from the second layer, Eqs. (16) and (15) are repeatedly used to homogenise material i with the already homogenised block response resulting from the previous i − 1 homogenisation steps. i is incremented by 1 each time and the process stops when i = n.
The approach allows for a full 3D approximation of the stress distribution in laminated composites. It is well suited to the modelling of structures that fall outside the traditional domain of application of laminated plate theory [21]. It is thus able to handle thick structures made of non-linear material. Unlike other methods [29,30], there is no need to decompose the stress and strain tensors, and in-plane/out-of-plane coupling phenomena are intrinsically present in the formulation.

The mechanics of prepreg stacks
In the rest of the paper, some of the characteristics of the kinematic enrichment framework developed in Section 2 are showcased in the context of the modelling of wrinkle formation during the consolidation of thick stacks of fibrous reinforcements impregnated with resin, which is particularly relevant for applications in composite manufacturing and design. The present section briefly recalls the nature of the physical phenomena involved and presents some baseline model verification.

Consolidation-driven defect generation in composite manufacture
During the manufacture of a composite, the consolidation of a (thick) stack of fibre reinforcements impregnated with viscous resin may lead to the formation of wrinkle defects following the creation of excess length. Hence, as illustrated in Fig. 5, upon consolidation (under an applied pressure P), the thickness and the length of the path on top of the part (l) reduce. If the layers can neither compress nor slip with respect to one another, the only way to accommodate the length change is through out-of-plane deformation of the plies. On the other hand, if the plies can move freely, the "book-end" effect is observed.

Fig. 5. Consolidation of prepreg stacks over curved tools (schematic adapted from Ref. [26]).
The thickness reduction of a prepreg (i.e. a sheet of fibre reinforcement impregnated with resin) under an applied pressure is intimately linked to the flow mode of the resin. At low resin viscosity, a filtration mode (also known as bleeding) is observed. Under bleeding, the resin flows through the fibre network, the configuration of which is almost unaffected. For high viscosity systems, however, fibre squeezing is observed, i.e. the fibres are carried by the resin when it transversely expands under the application of the compaction load. The pioneering work of Hubert and Poursartip [31] showed that, in a manufacturing cycle, the two flow modes may coexist. A more quantitative relation between flow transition and thickness evolution of prepreg stacks was proposed in Belnoue et al. [32]. Recent work performed at the École Centrale de Nantes highlighted the necessity to properly understand the coupling effect between thickness change and ply interface properties. Hence, the study of the kinematics of copper threads inserted between plies of a stack of unidirectional (UD) carbon prepreg [33] revealed that the application of through-thickness compression can result in the relative rotation of the fibrous reinforcements. Later, in situ observations (through a glass compaction platen) of the consolidation of impregnated fibrous reinforcements demonstrated the influence of the interfacial properties on the flow mode transition [34].
Another, better known, phenomenon is the influence of the interactions between the reinforcements (i.e. fibres) themselves [35,36], and also between the plies [37-40], on the apparent bending behaviour of the stack.

Hyper-viscoelastic modelling of UD viscous prepreg and resin rich interfaces
The physical phenomena described in the above section show that stacks of prepreg sheets should be modelled at the ply level, with the ply interfaces explicitly represented. The model proposed in Belnoue et al. [32] was chosen to describe the behaviour of the composite plies. To account for the influence of ply interactions on the apparent bending and shear responses of the stack, ply interfaces are represented as thin extra layers of viscous fluid connecting the plies (i.e. 10% of the initial ply thickness, which is set as 0.2 mm for IMA-M21). As the ply-scale model already considers ply interaction and its implications on the compressive response of the stack, the role of the interfaces in the laminate-scale model is to include the relative movement of the plies. For consistency of the formulation, the interfaces are modelled in the same way as the plies, but without including the stiff elastic response in the fibre direction. As a result, the compaction response is unaffected, whilst the very compliant nature of the resin rich regions under shear is accounted for. It is worth noting that, for a woven material, a constitutive relation accounting for nesting and yarn friction may be required, but the overall kinematic enrichment scheme would remain the same.
The general thermodynamic potential underlying the model is additively decomposed into an elastic part (X_e) related to the deformation of the tape in the fibre direction and a viscous part (X_v) controlling the resin flow. The corresponding second Piola-Kirchhoff stress tensor is expressed as:

\[ \mathbf{S} = \frac{\partial X_{e}}{\partial \mathbf{E}} + \frac{\partial X_{v}}{\partial \dot{\mathbf{E}}} \tag{22} \]

To ensure that the description of the deformation stays the same regardless of the coordinate system, X_e and X_v are expressed as functions of invariants of the Green-Lagrange strain and strain rates. I₁ = Tr(C) registers the average stretches of all lines and I₃ = J² = det(C) controls the volume change of the material. I₄ = Tr(A₀C) (where A₀ is a structural tensor that characterises the local directional properties of the material and is defined as A₀ = a₀ ⊗ a₀) determines the material's longitudinal extension and compression. The physical meaning of I₅ = Tr(A₀C²) is more difficult to establish, but it influences both the shear and bending behaviour in the fibre direction. Finally, J₂ = ½Tr(Ċ²) is used to bring in the dependence of the material upon the loading rate.
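The invariants listed above can be evaluated directly from the deformation gradient and its rate. A minimal sketch (the function name is illustrative; a0 is the unit fibre direction in the reference frame):

```python
import numpy as np

def invariants(F, F_dot, a0):
    """Invariants used by the hyper-viscoelastic potential."""
    C = F.T @ F                          # right Cauchy-Green tensor
    C_dot = F_dot.T @ F + F.T @ F_dot    # its rate
    A0 = np.outer(a0, a0)                # structural tensor a0 (x) a0
    I1 = np.trace(C)                     # average squared stretch
    I3 = np.linalg.det(C)                # = J^2, volume change
    I4 = np.trace(A0 @ C)                # squared fibre stretch
    I5 = np.trace(A0 @ C @ C)            # shear/bending in fibre direction
    J2 = 0.5 * np.trace(C_dot @ C_dot)   # loading-rate dependence
    return I1, I3, I4, I5, J2
```

In the undeformed state (F = I, zero rate) these evaluate to I₁ = 3, I₃ = I₄ = I₅ = 1 and J₂ = 0, which is a useful sanity check when implementing the potential.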
X_v takes the general form given in Eq. (23), with one expression holding upon squeezing and another under bleeding. l₀, w₀ and h₀ are the initial length, width and thickness of the prepreg sheet. w_l and w_f are the aspect ratios of a unit cell at the squeeze/bleed transition and at the compaction limit, and d is the size of the fibres in the plane perpendicular to the fibre direction. To simplify the expressions, I₁ₛ = 1 is used. At constant temperature, only 4 material parameters (as b takes different values under squeezing and bleeding) need to be determined [32]. The parameters a and b are linked to the viscosity of the resin, whilst the parameter k controls the initial size of the inter-fibre channels.
The expression of X_e is given in Eq. (24), where k controls the material incompressibility upon squeezing through a penalty method and is set as k = E_L (where E_L is the Young's modulus in the fibre direction). The shear modulus in the longitudinal plane, μ_L, is set so that it is of the same order of magnitude as in Ref. [41]. Upon the transition between squeezing and bleeding, k is ramped down to k = μ_L/2, allowing the fibre volume fraction to increase. b controls the material (in)extensibility in the direction of the fibres and can be expressed as b = E_L − 2μ_L.
Throughout the paper, Hexcel® IMA-M21 prepreg is considered. All the models were run assuming that the process is isothermal (the values of the material parameters at 90 °C from Table 1 were applied). This can be justified by the fact that, past 70 °C, the resin viscosity is very low and there is no evolution of the parameters a, b and k beyond this point. In a thick laminate made from IMA-M21, it takes two and a half hours for the resin to gel. This means that all of the laminate's flow and deformation up to vitrification occurs very early in the cure cycle. Based on these considerations, it is assumed that a cure simulation model is not required. In the resin rich layers, E_L = 0 is set.

The role of the interfaces
To illustrate how the interfaces affect the apparent bending behaviour of a prepreg stack, and to verify that the new model framework behaves as intended, a cantilever bending test of 3 different layups was simulated. The stacks were 170 mm long, 20 mm wide and 3.2 mm thick. The 3 different configurations considered were a quasi-isotropic layup [0/+45/−45/90]₂S (Case 1), a UD layup [0]₁₆ (Case 2), and a UD layup identical to Case 2 but where the pure resin layers were artificially stiffened by setting k = 100,000 (Case 3).
In each model, the beams were fully fixed at one end (right-hand side in Fig. 6) and left to deform under self-weight (a gravity load was applied) for 30 min. Incompatible mode eight-node solid elements C3D8I were used. A dynamic implicit analysis in Abaqus/Standard was run. The resulting deformed shapes are presented in Fig. 6. The quasi-isotropic layup, with fewer fibres aligned with the beam's longitudinal direction, was the most compliant. Cases 2 and 3 also behave differently. The stiff interfaces in Case 3 are responsible for a stiffer apparent bending response of the stack, as it is more difficult for the layers to slide with respect to one another.

Consolidation
Initial model assessment was performed by simulating compaction experiments on small uncured laminates (see Fig. 7). Cross-ply [90/0]₈ (CP) and blocked-ply [90₄/0₄/90₄/0₄] (BP) laminates with 15 mm by 15 mm in-plane dimensions were considered. The experiments performed at T = 90 °C in Ref. [32] were simulated. Reduced integration linear solid elements C3D8R (with hourglass control) were used and a static analysis in Abaqus/Standard was run. The samples were modelled as square blocks of material with 1 element through the thickness, thus only considering the effective gauge section of the cruciform-shaped samples. These blocks were embedded between 2 rigid plates. A surface-to-surface contact with a Coulomb friction coefficient of 0.2 [42] was defined to model the interaction between the compaction plates and the samples. The load cycle was applied through a point load acting at the reference point of one of the plates. The results are presented in Fig. 7 a/. A similar evolution with time of the specimen thicknesses compared to the experiments was observed, thus proving that the homogenisation scheme does not affect the results. The layup effects reported were well captured, with BP specimens behaving fundamentally differently from CP samples, despite the material parameters used having been extracted from the experiments on the CP samples only. The more stepped behaviour of the BP model compared to the experiments is due to inaccuracies of the underlying ply model and was also reported in Ref. [32].
Finally, an additional case of a [0/45]₈ laminate of the same geometry and subjected to the same thermo-mechanical loading as in Ref. [32] was also simulated. Fig. 7 b/ shows that the laminate is macroscopically under shear, which corresponds to the relative rotation of the plies. The fibre rotation captured is close to the 14° reported in Ref. [33]. This deformation mode would not be possible without the interaction between the plies being modelled (see Ref. [11]).

Fig. 6. Simulation of a cantilever bending test in the case of 3 different layups, showing that the model is able to capture the effect of different layups but also the effect of layer sliding on the apparent bending behaviour of a prepreg stack.

Consolidation over an external radius
More robust validation was performed by simulating the autoclave consolidation of a 6 mm-thick specimen with 300 mm by 80 mm in-plane dimensions over a convex L-shaped tool (see Fig. 5). The tool radius was 15 mm. A UD layup with the fibres running along the path of the tool was used. Micrographs of a physical demonstrator and the high fidelity model for this case were presented in Ref. [26] and are used here to compare with, and validate, the new numerical approach. The FE model using the kinematically enriched model was set up with C3D8R solid elements with hourglass control. A dynamic implicit analysis in Abaqus/Standard was run. 191 elements were used for the "homogenised" laminate (compared with the 41,000 elements reported in Ref. [26] for the high fidelity model). The tool was modelled as a rigid body. The interaction between the tool and the elements representing the prepreg stack was represented through the definition of a penalty contact with the same frictional behaviour as in Section 3.4. The autoclave pressure was applied, on the top of the laminate, as a homogeneous pressure ramping from 0 to 7 bar in 600 s and then held at 7 bar for 3000 s. As illustrated in Fig. 8, the wrinkle severity predicted by the new approach is very similar to that predicted by the high resolution model. More importantly, the time necessary to run an analysis was reduced from 3 days down to 30 min using the same system configuration.
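The autoclave cycle described above (a 600 s linear ramp to 7 bar followed by a 3000 s hold) maps naturally onto a tabular amplitude definition of the kind FE solvers accept. The helper below is a sketch of how such a time/pressure table could be generated; the function name and the sampling of the ramp are our assumptions, only the 7 bar/600 s/3000 s values come from the text:

```python
# Sketch (assumed helper): build a (time, pressure) table for the
# autoclave cycle -- a linear ramp from 0 to 7 bar over 600 s, then a
# 3000 s hold at 7 bar -- such as could feed a tabular *Amplitude card.

def pressure_amplitude(p_max_bar=7.0, t_ramp=600.0, t_hold=3000.0, n_ramp=10):
    """Return a list of (time [s], pressure [Pa]) points."""
    p_max = p_max_bar * 1.0e5              # 1 bar = 1e5 Pa
    points = []
    for i in range(n_ramp + 1):            # linear ramp, sampled n_ramp+1 times
        t = t_ramp * i / n_ramp
        points.append((t, p_max * i / n_ramp))
    points.append((t_ramp + t_hold, p_max))  # hold at peak pressure
    return points

amp = pressure_amplitude()
print(amp[0], amp[-1])   # (0.0, 0.0) (3600.0, 700000.0)
```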

Severely tapered laminate
The final validation case was to compare predictions for wrinkle formation in a severely tapered part manufactured with hard tooling on both sides of the laminate. This was also studied extensively in Ref. [26], where it was shown that a combination of excess length generation and an initial mismatch between the mould (designed based on the final part's shape) and the unconsolidated stack of prepreg leads to the formation of wrinkles. Only the most computationally challenging case, showing the most severe wrinkles, is considered. A ply-by-ply illustration of half a sample (with the symmetry plane on the right-hand side of the picture) prior to consolidation is provided in Fig. 1. The sample consisted of a double taper laminate of 300 mm × 300 mm in-plane dimensions and was manufactured with 137 plies in the thick section and 23 plies in the thin section. The model was set up as in Section 4.1, with the exception of the pressure being applied through the reference point of the top tool, modelled as a rigid body. The symmetry of the problem was used to reduce the model size. The complexity of the stacking sequence in the tapered region (see Fig. 1) and the number of different layup files (see Section 2.3.3) to be created made it difficult to build the model manually. Therefore, a pre-processor was written which automatically creates an input file for Abaqus using, as inputs, a ply book (i.e. a file containing the definition of the stacking order, orientation, geometry and position of all the constituent plies of the part) and a CAD geometry of the bottom surface of the part (in the .stp format).
Fig. 9 illustrates the workflow of the pre-processor, which was written as a Python script for the 3D parametric modeller FreeCAD [43]. The bottom surface of the part is first meshed in 2D, with quadratic elements, using the finite element mesh generator Gmsh [44] (included within FreeCAD). The information contained in the ply book is then used to generate surfaces with the same geometry and the same in-plane position as the plies, but all placed on the same plane (i.e. the ply thicknesses are not accounted for). Searching for the intersections between each node and the plies collapsed onto a single plane then determines the number of plies stacked at each position and attributes an unconsolidated "thickness" to each node. It also allows the part layup to be mapped onto each element by checking for the intersection between each element's centroid and each "collapsed" ply. With the knowledge of the part's thickness at each node, it is then straightforward to build a 3D mesh from the 2D mesh. Finally, an Abaqus input file and layup definition files are written.
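The ply-counting step of the pre-processor can be sketched in a few lines of Python. The snippet below is a deliberately simplified stand-in: plies are axis-aligned rectangles and the containment test is a bounding-box check, whereas the actual tool intersects arbitrary CAD outlines via FreeCAD and Gmsh. The 137/23 ply counts follow the double-taper example above; the in-plane dimensions of the dropped plies and the 0.25 mm ply thickness are illustrative assumptions:

```python
# Simplified sketch of the pre-processor's stacking step: plies are
# "collapsed" onto one plane as 2D outlines, and the unconsolidated
# laminate thickness at a point is the ply thickness times the number
# of ply outlines containing that point.

def plies_covering(point, plies):
    """Count collapsed ply outlines (xmin, ymin, xmax, ymax) containing point."""
    x, y = point
    return sum(1 for (x0, y0, x1, y1) in plies
               if x0 <= x <= x1 and y0 <= y <= y1)

def nodal_thickness(point, plies, ply_thickness):
    """Unconsolidated stack thickness attributed to a mesh node."""
    return plies_covering(point, plies) * ply_thickness

# Toy double-taper layup: 23 full-size plies plus 114 shorter "dropped"
# plies, giving 137 plies in the thick half and 23 in the thin half.
plies = [(0.0, 0.0, 300.0, 300.0)] * 23 + [(0.0, 0.0, 150.0, 300.0)] * 114
print(nodal_thickness((75.0, 150.0), plies, 0.25))   # thick section: 137 plies
print(nodal_thickness((225.0, 150.0), plies, 0.25))  # thin section: 23 plies
```

The same containment test, applied at element centroids instead of nodes, is what maps the layup definition onto each element.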
Results of the analysis are presented in Fig. 10, where the micrograph of the physical component and the high fidelity model already presented in Ref. [26] are also displayed. As in the corner section example, the kinematically enriched constitutive modelling is much more efficient than the high fidelity approach (about 30× faster per CPU). Yet, the compromise in accuracy is minimal, as illustrated by the contour plots of the displacement field in the compaction direction extracted from the two analyses of different fidelity. Moreover, the wrinkle wavelength, which is not captured very well by the high fidelity model, seems to be better represented by the new "homogenised" scheme. This is believed to be related to the current model not allowing ply separation at the interfaces, which implicitly accounts for some level of tack between the plies. On the other hand, the "resin rich" regions in the middle of the wrinkles, which originate from ply separation, cannot be captured by the new approach. Such a capability could, however, be added in the future, provided that a ply tack and separation model becomes available.

Performance and limitations of the scheme
The speed improvement reported in the two previous sections originates mostly from the reduction in the number of degrees of freedom compared to the high fidelity modelling approach. Moreover, the number of contacts, which are notoriously responsible for slowing down the convergence of implicit FE schemes, is also reduced. Lastly, stress concentration regions and highly anisotropic plies are smeared out into a homogenised response when communicating with the solver, which also helps improve the convergence of the numerical scheme.
A small study was performed to investigate the minimum number of elements and the modelling fidelity required. The size of the baseline mesh for the double taper model was set up so that the average in-plane size of the elements was about one quarter of the expected wavelength of the wrinkles. To capture bending properly, four elements through the thickness were used. Two extra cases were run: one with the same through-thickness resolution but with a refined mesh in-plane; and one with the same in-plane resolution but with only two elements through the thickness. 3D views of the three tested cases are presented in Fig. 11. Highly irregular unstructured meshes were purposely used, as this is more general and challenging for the model. This helped to highlight some of the limitations of the approach. The two main wrinkles predicted by the baseline model branched out and joined in two places (see the white lines highlighting the wrinkle paths in Fig. 11). This is an artefact of the mesh pattern that can be associated with mesh-bias dependency. The phenomenon is removed when the in-plane mesh size is finer, but this is at the expense of the computational cost. Hence, the accuracy of the prediction of the wrinkles' path needs to be traded off against the run-time. One solution to improve the computational efficiency of the scheme is to decrease the resolution of the through-thickness mesh. The simulation that uses only two elements through the thickness of the stack gives very similar results to the higher resolution baseline mesh but decreases the computational cost, more than halving the run-time. However, it must be noted that a simulation with only one element through the thickness failed to converge, as the complex kinematics of the prepreg stack cannot be captured accurately when too few degrees of freedom are used. The last case, presented in the bottom right corner of Fig. 11, corresponds to a model with a similar in-plane mesh density to that of the baseline case, but where the mesh is more regular. As highlighted in the figure, this eliminated the wrinkle branching. The run-time was kept low through the use of only two elements through the thickness. Finally, it must be noted that the wavelength, k, of the predicted wrinkles stays the same whatever the level of mesh refinement. This highlights the good level of regularisation obtained through the kinematic enrichment approach.
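The mesh-sizing rule of thumb used for the baseline model (an in-plane element size of about one quarter of the expected wrinkle wavelength) translates directly into an element budget. The sketch below uses illustrative panel dimensions and an assumed 40 mm wavelength rather than values from the study:

```python
# Back-of-the-envelope element budget for a wrinkle-resolving mesh:
# in-plane element size is set to wavelength / elems_per_wave, here one
# quarter of the expected wrinkle wavelength, as in the baseline model.

def element_budget(lx, ly, wavelength, n_thickness=2, elems_per_wave=4):
    """Estimate the total solid-element count for an lx-by-ly panel."""
    h = wavelength / elems_per_wave            # target in-plane size
    nx = max(1, round(lx / h))                 # elements along x
    ny = max(1, round(ly / h))                 # elements along y
    return nx * ny * n_thickness

# e.g. a 300 mm x 300 mm panel with an assumed 40 mm wrinkle wavelength
# and two elements through the thickness
print(element_budget(300.0, 300.0, 40.0))      # -> 1800
```

Halving the in-plane element size quadruples this count, which is why the accuracy of the wrinkle-path prediction has to be traded off against run-time.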
One of the particular problems faced by the composite manufacturing community (see Ref. [45]) is the stochastic nature of material properties and the variability in processing conditions. These can cause the location and severity of defects to diverge from the tight tolerances required by industry (especially in the aerospace sector). The formation of such defects can thus not be predicted from a knowledge of the geometry and ideal manufacturing cycle alone. The run-times obtained with the new approach now make it possible to study these phenomena numerically and bring us a step closer to design for manufacturing of composite structures. This would, in particular, be eased by recent developments in the application of artificial intelligence in the field of materials (e.g. Ref. [46]). In the slightly longer term, a kinematic enrichment-based multi-level homogenisation procedure, where a prepreg architecture and composition are designed on a computer to improve a component's manufacturability (see Ref. [47] for example), would also be feasible. Lastly, the framework derived here is flexible enough to be used with any constituent materials and can find applications beyond the field of composites manufacturing. Additive manufacturing, biomechanics and the structural performance of micro-electronics components are amongst the fields where the method could find a direct application route.

Conclusions
In this paper, a new framework is proposed for modelling structures made from soft anisotropic layered materials, which is computationally efficient enough to be used in the early stages of design. For example, it can show how different layups and geometries affect part quality in the case of laminated composites. The kinematically enriched constitutive modelling approach used is inspired by the work of Nguyen et al. [15] on failure in geomaterials and was reformulated in the large deformation framework. The new approach takes account of discontinuities at the material level, making its implementation into commercial FE codes much easier than other methods providing similar degrees of enhancement of the displacement field (i.e. Cosserat models, XFEM, etc.). In the case of large lab-scale specimens, a significant reduction in the number of elements required can be achieved, thus suppressing the need for the ply-by-ply modelling approach. This new approach makes the prospect of modelling real-size components much more feasible, especially if coupled with new software architectures that allow better scaling of the FE method with the number of CPUs [48].

CRediT authorship contribution statement
A rapid multi-scale design tool for the prediction of wrinkle defect formation in composite components. Jonathan P.-H. Belnoue*, Stephen R. Hallett, Bristol Composites Institute (ACCIS), University of Bristol, Bristol BS8 1TR, UK

Fig. 4. Flowchart of the implementation of the kinematically enriched modelling scheme of a laminate into a UMAT subroutine for the commercial FE package Abaqus.

Fig. 7. Simulation of consolidation experiments. a/ Thickness evolution with time for BP and CP specimens; experimental results (dotted lines) are superimposed with model predictions. b/ Top view of a [0/45]₈ sample showing apparent macroscopic in-plane shearing; the original mesh is superimposed on the deformed specimen.

Fig. 8. The kinematic enrichment approach allows predictions of wrinkle formation in a corner section (see comparison with experimental results on the left) that are similarly good to those of the high fidelity approach.

Fig. 9. Flowchart illustrating the different operations performed by the pre-processor.


Fig. 10. Comparison of the predictions for wrinkle formation in a severely tapered laminate made by the ply-by-ply and the kinematic enrichment approaches.

Fig. 11. 3D views of the deformed configurations of FE models of the double taper specimen with different mesh densities and patterns.