Phase field topology optimisation for 4D printing

This work concerns a structural topology optimisation problem for 4D printing based on the phase field approach. The concept of 4D printing as a targeted evolution of 3D printed structures can be realised in a two-step process. One first fabricates a 3D object with multi-material active composites and applies external loads in the programming stage. Then, a change in an environmental stimulus and the removal of loads cause the object to deform in the programmed stage. The dynamic transition between the original and deformed shapes is achieved with appropriate applications of the stimulus. The mathematical interest is to find an optimal distribution of the materials such that the 3D printed object attains a targeted configuration in the programmed stage as closely as possible. Casting the problem as a PDE-constrained minimisation problem, we consider a vector-valued order parameter representing the volume fractions of the different materials in the composite as a control variable. We prove the existence of optimal designs and formulate first order necessary conditions for minimisers. Moreover, by suitable asymptotic techniques, we relate our approach to a sharp interface description. Finally, the theoretical results are validated by several numerical simulations both in two and three space dimensions.


Introduction
Four-dimensional (4D) printing [2,52,61] entails the combination of additive manufacturing (3D printing) and active material technologies to create printed composites capable of morphing into different configurations in response to various environmental stimuli. First designs of such composites consist of active material components, such as piezoelectric ceramics, hydrogels or shape memory polymers [24], in the form of fibers integrated within a passive elastomeric matrix [37]. These multi-material active composites were originally difficult to manufacture, owing to the fragility of the materials involved [44]. However, with 3D printing techniques it is nowadays feasible to fabricate these active composites to a high degree of precision, resulting in so-called printed active composites (PACs) [50]. For an overview of other 4D printing strategies besides PACs in the construction of smart materials allowing direct stimuli-responsive transformations, we refer to [67].
The shape shifting functionality of the active components enables the self-actuating and self-assembling potentials of PACs, allowing them to fold, bend, twist, expand and contract when a stimulus is applied, and return to their original configurations after the stimulus is removed. This property has led to the fabrication of intelligent active hinges and origami-like objects [35,37], mesh structures [24,64] and self-actuated deformable solids [59] in the form of graspers and smart key-lock systems. We refer to the review article [52] and the references cited therein for more applications of 4D printing. The shape memory behaviour of the PACs can be programmed in a two-step cycle: The first (programming) step involves deforming the structure from its permanent shape to a metastable temporary shape, and the second (recovery) step involves applying an appropriate stimulus so that the structure regains its original shape. A typical stimulus is heat (in combination with light [36] or water [7]), in which programmed PACs alter their shapes when the temperature rises above or drops below a critical value.
With the advances in the state-of-the art 3D printing technologies, the designs of PACs need not be limited to the conventional fibre-matrix architectures first considered in [37]. In particular, the distribution of active and passive materials in the designs can take on more complicated geometries to better fulfil the intended functionalities of the PACs. This opens up the possibility of a computational design approach guided by a structural topology optimisation framework. In the context of 3D printing, see, e.g., the review [26], this framework has been applied to explore optimising support structures to overhang regions [3,46,51], as well as self-support designs respecting the overhang angle constraints [20,32,43,45]. For active materials and active composites, [40,54] studied how to pattern thin-film layers within a multi-layer structure with the aim of generating large shape changes via spatially varying eigenstrains within the microstructures, while [50] aimed to optimise the microstructures of PACs matching various target shapes after a thermomechanical training and activation cycle. Later works incorporated nonlinear thermoelasticity [39,59], thermo-mechanical cycles of shape memory polymers [12], reversible deformations [49], as well as multi-material designs [65] within the topology optimisation framework.
In many of the aforementioned contributions, the topology optimisation is implemented numerically with the level-set method or the solid isotropic material with penalisation (SIMP) approach. In this work we employ an alternative approach based on the phase field methodology [17], which allows a straightforward extension to the multiphase setting [13,62] involving multiple (possibly distinct) types of active materials within the design. In particular, this opens up the design to multiphase PACs that can memorise more than two shapes [38,47,58,63,66]. The phase field-based structural topology optimisation approach has been popularised in recent years by many authors, with applications in nonlinear elasticity [55], stress constraints [19], compliance optimisation [15,60], elastoplasticity [4], eigenfrequency maximisation [31,60], graded-material design [21], shape optimisation in fluid flow [28,29,30] and more recently for 3D printing with overhang angle constraints [32].
Taking inspiration from the setting of Maute et al. [50], we formulate a structural topology optimisation problem for a multiphase PAC with the objective of finding optimal distributions of active and passive materials so that the resulting composite matches targeted shapes as close as possible. An additional perimeter regularisation term, in the form of a multiphase Ginzburg-Landau functional, is added, and our contribution involves a mathematical analysis of the resulting multiphase structural topology optimisation problem with emphasis on the rigorous derivation of minimisers and optimality conditions. A sharp interface asymptotic analysis is performed to obtain a set of optimality conditions applicable in a level set-based shape optimisation framework. We perform numerical simulations in two and three spatial dimensions to show the optimal distributions of active and passive components in order to match with various target shapes for the PACs.
The rest of this paper is organised as follows: in Section 2 we formulate the phase field structural optimisation problem to be studied, and present several preliminary mathematical results. In Sections 3 and 4 we analyse the design optimisation problem and establish analytical results concerning minimisers and optimality conditions. The sharp interface limit is explored in Section 5 and, finally, in Section 6 we present the numerical discretisation and several simulations of our approach.

Problem formulation
Within a bounded domain Ω ⊂ R^d, d ∈ {2, 3}, with Lipschitz boundary Γ := ∂Ω, we assume there are L types of linearly elastic materials, whose volume fractions are encoded with the help of a vectorial phase field variable ϕ = (ϕ_1, …, ϕ_L) : Ω → ∆^L, where ∆^L denotes the Gibbs simplex in R^L:

∆^L := { v ∈ R^L : v_i ≥ 0 for i = 1, …, L, Σ_{i=1}^L v_i = 1 }.

For our application to PACs, we take ϕ_L as the volume fraction of the passive elastic material, and ϕ_1, …, ϕ_{L−1} as the volume fractions of (possibly different) active elastic materials. Note that in the two-phase case L = 2, we simply have ϕ = (ϕ_1, ϕ_2), and due to the relation ϕ_1 + ϕ_2 = 1 we may instead use the scalar difference function φ := ϕ_1 − ϕ_2 to encode ϕ via the relation ϕ = (½(1 + φ), ½(1 − φ)). This particular scenario will be employed later on, when dealing with the connection between the problem we are going to analyse and the corresponding sharp interface limit in Section 5, as well as for the numerical simulations presented in Section 6.
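The vector and scalar encodings in the two-phase case are mutually inverse, which is convenient in implementations. A minimal numerical sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

# Two-phase case L = 2: encode the vector phase field phi = (phi_1, phi_2)
# on the Gibbs simplex by the scalar difference varphi = phi_1 - phi_2,
# and recover phi via phi = (0.5*(1 + varphi), 0.5*(1 - varphi)).

def to_scalar(phi):
    """Map (phi_1, phi_2) with phi_1 + phi_2 = 1 to varphi = phi_1 - phi_2."""
    return phi[0] - phi[1]

def to_vector(varphi):
    """Recover (phi_1, phi_2) on the Gibbs simplex from varphi in [-1, 1]."""
    return np.array([0.5 * (1.0 + varphi), 0.5 * (1.0 - varphi)])

phi = np.array([0.7, 0.3])          # e.g. 70% active, 30% passive material
varphi = to_scalar(phi)
assert np.allclose(to_vector(varphi), phi)        # round trip
assert np.isclose(to_vector(varphi).sum(), 1.0)   # stays on the simplex
```

The pure phases correspond to varphi = ±1, which is the convention used for the scalar problem in Sections 5 and 6.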
The shape shifting mechanism considered in [50,68] involves two levels of temperature and one set of external loads, with one temperature T_H higher than a critical transition temperature T_g of the active materials (e.g., the glass transition temperature for shape memory polymers), and the other temperature T_L lower than the critical temperature. The printed composite is first heated to T_H, and the shape memory cycle starts at T_H and proceeds as follows: First, external loads are applied to deform the printed composite while the temperature remains at T_H, with the new configuration being known as the programming stage (or Stage 1). Then, the temperature is decreased while the loads are maintained on the printed composite, and the loads are removed once the temperature reaches T_L. The resulting shape at T_L is the desired shape and we denote it as the programmed stage (or Stage 2). Increasing the temperature to T_H enables the printed composite to recover its original shape, and this ends the shape memory cycle; see Figure 1 for the thermo-mechanical processing steps involving the two stages.
To capture the above behaviour, following [50] we consider a model for each stage. In the programming stage (Stage 1), we consider an elastic displacement u : Ω → R^d and decompose the domain boundary Γ into a partition Γ = cl(Γ_D) ∪ cl(Γ_N) with relatively open subsets Γ_D and Γ_N such that Γ_D ∩ Γ_N = ∅ and Γ_D ≠ ∅, where cl(A) denotes the closure of a set A, and we assign a prescribed displacement U on Γ_D and surface loads g on Γ_N. Under a linearised elasticity setting, the balance of momentum yields the following system of equations for the displacement u:

−div(C(ϕ)E(u)) = F in Ω,  u = U on Γ_D,  (C(ϕ)E(u))n = g on Γ_N,  (2.1)

with a phase-dependent elasticity tensor C, body force F, outer unit normal n, and symmetrised gradient E(u) := ½(∇u + (∇u)^T). One example of C(ϕ) is the weighted sum C(ϕ) = Σ_{i=1}^L ϕ_i C_i with constant elasticity tensors C_i for the pure phases. After the change in temperature from T_H to T_L and after the programming loads in Stage 1 have been removed, the PAC experiences deformations due to residual stresses generated during the thermomechanical processing steps. When the temperature falls below T_g, the active elastic materials undergo a phase transition from a soft rubbery state to a glassy state that has a higher Young's modulus. We introduce a new variable ũ : Ω → R^d to denote the displacement in the programmed stage (Stage 2), and as in [50], model the strains from the programming stage (Stage 1) as eigenstrains for ũ. These eigenstrains are present only in the regions of active elastic materials, which we model with a fixity function χ : R^L → [0, ∞). The shape fixity for a shape memory material is the ratio (expressed as a percentage) between the strain in the stress-free state after the programming step and the maximum strain [1]. For example, if the deformation elongates the material, the fixity quantifies the ability of the material to hold the temporary elongated length when the stress is removed. It is clear from the definition that for a passive elastic material the fixity is zero, and so we set χ = 0 in the region {ϕ_L = 1} of the passive elastic material.
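A common choice of phase-dependent elasticity tensor is the weighted sum C(ϕ) = Σ_i ϕ_i C_i of the pure-phase tensors; whether this coincides with the paper's (unreproduced) example is an assumption on our part. A sketch in 2D plane-strain Voigt notation with invented moduli, checking that the mixture stays symmetric and positive definite on the simplex:

```python
import numpy as np

def isotropic_voigt_2d(E, nu):
    """Plane-strain isotropic stiffness in Voigt notation (3x3)."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return np.array([[lam + 2*mu, lam,        0.0],
                     [lam,        lam + 2*mu, 0.0],
                     [0.0,        0.0,        mu ]])

def C_of_phi(phi, C_list):
    """Phase-weighted elasticity tensor C(phi) = sum_i phi_i * C_i."""
    return sum(p * C for p, C in zip(phi, C_list))

# Soft active phase and stiff passive phase (made-up moduli).
C_active  = isotropic_voigt_2d(E=1.0,  nu=0.3)
C_passive = isotropic_voigt_2d(E=10.0, nu=0.3)

C_mix = C_of_phi([0.25, 0.75], [C_active, C_passive])
# Linear mixing preserves symmetry and positive definiteness on the simplex.
assert np.allclose(C_mix, C_mix.T)
assert np.all(np.linalg.eigvalsh(C_mix) > 0)
```

In particular, the uniform bounds required of C in (A2) hold automatically for such a mixture because the pure-phase tensors are positive definite.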
Decomposing the domain boundary Γ into a possibly different partition Γ = cl(Γ̃_D) ∪ cl(Γ̃_N) with relatively open subsets Γ̃_D and Γ̃_N such that Γ̃_D ∩ Γ̃_N = ∅ and Γ̃_D ≠ ∅, where we assign a prescribed displacement Ũ on Γ̃_D and surface loads g̃ on Γ̃_N, the equations for the programmed stage (Stage 2) read as

−div(C̃(ϕ)(E(ũ) − χ(ϕ)E(u))) = F̃ in Ω,  ũ = Ũ on Γ̃_D,  (C̃(ϕ)(E(ũ) − χ(ϕ)E(u)))n = g̃ on Γ̃_N,  (2.2)

with a phase-dependent elasticity tensor C̃ and body force F̃. In the above, the change in the elasticity tensor from C in Stage 1 to C̃ in Stage 2 encodes the change in the elastic properties of the PAC when the temperature changes from T_H to T_L. Similarly to [50], here we have neglected the strains arising from thermal expansion in (2.1) and (2.2).
In the next section, under a suitable functional framework, we demonstrate that (2.1) and (2.2) are uniquely solvable, with the solutions depending continuously on ϕ. Since ϕ controls the distribution of the passive and active elastic materials, it is natural to ask for specific material distributions that optimise certain cost functionals related to the design of PACs. Motivated by [50], we primarily focus on the following cost functional

J(ϕ, ũ) := ½ ∫_{Γ_tar} W(ũ − u_tar) · (ũ − u_tar) dH^{d−1} + γ ∫_Ω ( (ε/2)|∇ϕ|² + (1/ε)Ψ(ϕ) ) dx,  (2.3)

where γ > 0 is a weighting factor, ũ is a solution to (2.2) depending on ϕ (and also on u, a solution to (2.1)), W ∈ R^{d×d} is a fixed weighting matrix, Γ_tar is a subset of the boundary Γ̃_N, H^{d−1} denotes the (d−1)-dimensional Hausdorff measure, ε > 0 is a fixed constant related to the thickness of the interfacial layers, and Ψ : R^L → R is a non-negative multi-well potential that attains its minimum at the corners {e_1, …, e_L} (the unit vectors in R^L) of the Gibbs simplex ∆^L. The first term in (2.3) is a target shape matching term, where we would like to match the displacement ũ in Stage 2 with a prescribed deformation u_tar over the surface Γ_tar ⊂ Γ̃_N by minimising the squared difference weighted by the matrix W. The second term is the well-known Ginzburg-Landau functional in the multiphase setting that serves as a form of perimeter regularisation. It provides some form of regularity to our design solutions and penalises designs that have large interfaces between the different phases of elastic materials. For our problem we introduce the design space

U_ad := { ϕ ∈ H^1(Ω, R^L) : ϕ(x) ∈ ∆^L for a.e. x ∈ Ω },

and our design problem can be formulated as the following:

(P)  minimise J(ϕ, ũ) over ϕ ∈ U_ad, subject to (2.1) and (2.2).

Remark 2.1. For the existence theory for optimal designs to (P), it is also possible to consider a more general form of the cost functional with Carathéodory integrands satisfying suitable growth conditions (see [14, Remark 5]). In our current setting the bulk integrands vanish and the boundary integrand is ĥ(s, ũ) = ½ X_{Γ_tar}(s) W(ũ − u_tar) · (ũ − u_tar), with X_{Γ_tar} the characteristic function of Γ_tar. Remark 2.2.
It is also possible to consider mass constraints for ϕ of the form

α ≤ (1/|Ω|) ∫_Ω ϕ dx ≤ β

for fixed vectors α, β ∈ ∆^L (possibly also α = β), where the inequalities are taken component-wise. These are convex constraints, and thus, when included in the definition of U_ad, the design space remains a closed and convex set. Then, in the corresponding necessary optimality condition, associated Lagrange multipliers will appear; see [13,14] for more details.
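A discrete toy version of a cost of the type (2.3), a target-matching term plus the Ginzburg-Landau term, can be evaluated on a 1D grid in the scalar two-phase encoding. All data below (potential, weights, grid) are invented for illustration:

```python
import numpy as np

# Toy 1D analogue of the cost (2.3): boundary matching + Ginzburg-Landau
# perimeter regularisation with the smooth double well Psi(s) = (1 - s^2)^2.

def ginzburg_landau(varphi, h, eps):
    """Riemann sum of eps/2*|varphi'|^2 + Psi(varphi)/eps over the grid."""
    grad = np.gradient(varphi, h)
    psi = (1.0 - varphi**2) ** 2
    return np.sum(0.5 * eps * grad**2 + psi / eps) * h

def cost(u_hat, u_tar, w, varphi, h, eps, gamma):
    """Matching term (scalar weight w standing in for W) + gamma * GL term."""
    matching = 0.5 * np.sum(w * (u_hat - u_tar) ** 2)
    return matching + gamma * ginzburg_landau(varphi, h, eps)

x = np.linspace(-1.0, 1.0, 4001)
h = x[1] - x[0]
eps = 0.05
varphi = np.tanh(np.sqrt(2.0) * x / eps)   # diffuse interface at x = 0

J = cost(u_hat=np.array([1.1]), u_tar=np.array([1.0]), w=2.0,
         varphi=varphi, h=h, eps=eps, gamma=1.0)
assert J > 0.0
```

A sharp jump in varphi would make the gradient term blow up as the grid is refined, which is how the functional penalises designs with large (or irregular) interfaces.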
Notation. For a Banach space X, we denote its topological dual by X^*, and the corresponding duality pairing by ⟨·,·⟩_X. For any p ∈ [1, ∞] and k > 0, the standard Lebesgue and Sobolev spaces over Ω are denoted by L^p := L^p(Ω) and W^{k,p} := W^{k,p}(Ω) with the corresponding norms ‖·‖_{L^p(Ω)} and ‖·‖_{W^{k,p}(Ω)}. In the special case p = 2, these become Hilbert spaces and we employ the notation H^k := H^k(Ω) = W^{k,2}(Ω) with the corresponding norm ‖·‖_{H^k(Ω)}. For our subsequent analysis, we introduce the spaces

H^1_D(Ω, R^d) := { v ∈ H^1(Ω, R^d) : v = 0 on Γ_D },  H^1_{D̃}(Ω, R^d) := { v ∈ H^1(Ω, R^d) : v = 0 on Γ̃_D }.

For brevity, the corresponding norms are denoted by the same symbol ‖·‖_{H^1(Ω)} if no confusion may arise. Vectors, matrices, and vector- or matrix-valued functions will be denoted by bold symbols. Furthermore, for a subset Γ_N ⊂ Γ, we consider the function space

H^{1/2}_{00}(Γ_N, R^d) := { v ∈ H^{1/2}(Γ_N, R^d) : v̄ ∈ H^{1/2}(Γ, R^d) },

where v̄ denotes the trivial extension of v to Γ, and we endow it with the norm ‖v‖_{H^{1/2}_{00}(Γ_N, R^d)} := ‖v̄‖_{H^{1/2}(Γ, R^d)}. We highlight that the above definition is not redundant as, in general, the trivial extension of an H^{1/2}(Γ_N, R^d)-function need not belong to H^{1/2}(Γ, R^d). Moreover, (H^{1/2}_{00}(Γ_N, R^d), L^2(Γ_N, R^d), H^{1/2}_{00}(Γ_N, R^d)^*) forms a Hilbert triple (see, e.g., [48]).
For the forthcoming analysis we make the following structural assumptions.
(A1) Ω ⊂ R^d, d ∈ {2, 3}, is a bounded domain with C^{1,1} or convex boundary Γ = ∂Ω that admits a partition Γ = cl(Γ_D) ∪ cl(Γ_N) with relatively open subsets Γ_D and Γ_N such that Γ_D ∩ Γ_N = ∅ and Γ_D ≠ ∅. Here, cl(A) denotes the closure of the set A. The same assumptions are made for Γ̃_D and Γ̃_N.
(A2) The elasticity tensor C is assumed to be a tensor-valued function C ∈ C^{1,1}(R^L, R^{d×d×d×d}) with components C_{ijkl}, i, j, k, l ∈ {1, …, d}. Moreover, it fulfils the symmetry conditions C_{ijkl} = C_{jikl} = C_{klij}, and there exist positive constants C_0, C_1, and C_2 such that, for all ϕ, h ∈ R^L and all E ∈ R^{d×d}_sym,

C_0 |E|² ≤ C(ϕ)E : E,  |C(ϕ)E| ≤ C_1 |E|,  |(C(ϕ) − C(h))E| ≤ C_2 |ϕ − h| |E|.

The set R^{d×d}_sym consists of the symmetric (d × d)-matrices. The same assumptions are made for C̃.

(A3) The potential decomposes as the sum of Ψ ∈ C^{1,1}(R^L) and the indicator function I_∆ of the simplex ∆^L, defined as I_∆(s) = 0 if s ∈ ∆^L and I_∆(s) = +∞ otherwise.
(A4) The function χ : R^L → R is of class C^{1,1}(R^L) and there exists a positive constant χ_0 such that 0 ≤ χ(ϕ) ≤ χ_0 for all ϕ ∈ R^L.

(A5) The data of the problems satisfy F, F̃ ∈ L^2(Ω, R^d), g ∈ H^{1/2}_{00}(Γ_N, R^d)^*, g̃ ∈ H^{1/2}_{00}(Γ̃_N, R^d)^*, U ∈ H^{1/2}(Γ_D, R^d) and Ũ ∈ H^{1/2}(Γ̃_D, R^d).

(A6) The target displacement u_tar ∈ L^2(Γ_tar, R^d), where Γ_tar ⊂ Γ̃_N.
Analysis of the design optimisation problem

Linear elasticity system with mixed boundary conditions
In this section we provide a preliminary well-posedness result for the following linear elasticity system with mixed boundary conditions:

−div(C E(u) − F) = f in Ω,  u = 0 on Γ_D,  (C E(u) − F)n = g on Γ_N.  (3.1)

The well-posedness of the system (3.1) is formulated as follows.
Proposition 3.1. In addition to (A1), suppose that C ∈ L^∞(Ω, R^{d×d×d×d}) fulfils the symmetry conditions of (A2) and satisfies, for some positive constants λ and Λ,

λ|E|² ≤ C(x)E : E and |C(x)E| ≤ Λ|E| for a.e. x ∈ Ω and all E ∈ R^{d×d}_sym.  (3.2)

Then, for every F ∈ L^2(Ω, R^{d×d}), f ∈ L^2(Ω, R^d) and g ∈ H^{1/2}_{00}(Γ_N, R^d)^*, there exists a unique weak solution u ∈ H^1_D(Ω, R^d) to (3.1) in the sense that

∫_Ω (C E(u) − F) : E(ζ) dx = ∫_Ω f · ζ dx + ⟨g, ζ⟩_{H^{1/2}_{00}(Γ_N, R^d)} for all ζ ∈ H^1_D(Ω, R^d),  (3.3)

and there exists a positive constant C independent of u such that

‖u‖_{H^1(Ω)} ≤ C ( ‖F‖_{L^2(Ω)} + ‖f‖_{L^2(Ω)} + ‖g‖_{H^{1/2}_{00}(Γ_N, R^d)^*} ).  (3.4)

Note that whenever g ∈ L^2(Γ_N, R^d) ⊂ H^{1/2}_{00}(Γ_N, R^d)^*, we can identify the duality product in (3.3) as the standard boundary integral, that is, ⟨g, ζ⟩ = ∫_{Γ_N} g · ζ dH^{d−1}. Proof of Proposition 3.1. The variational equality (3.3) admits a unique solution by a direct application of the Lax-Milgram theorem (cf., e.g., [5]). In this direction, we set V := H^1_D(Ω, R^d), a(u, ζ) := ∫_Ω C E(u) : E(ζ) dx, and ⟨F, ζ⟩ := ∫_Ω (f · ζ + F : E(ζ)) dx + ⟨g, ζ⟩. It readily follows from the assumptions on F, f, and g that F ∈ V^*. With the above notation, (3.3) can then be rewritten as the variational problem a(u, ζ) = ⟨F, ζ⟩ for all ζ ∈ V. Thus, to apply the Lax-Milgram theorem, it is sufficient to show that the bilinear form a(·,·) is continuous and coercive in V. By (3.2) we have the continuity of a, while (3.2) and Korn's inequality yield the V-coercivity, with a constant C_K arising from Korn's inequality. Thus, the existence and uniqueness of u ∈ H^1_D(Ω, R^d) solving (3.3), as well as the estimate (3.4), readily follow from the Lax-Milgram theorem.
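The Lax-Milgram solvability can be observed numerically in one space dimension. The following sketch solves a 1D analogue of the mixed Dirichlet-Neumann problem with linear finite elements and compares with the exact solution; the problem data and the 1D reduction are our own illustration, not from the paper:

```python
import numpy as np

# 1D analogue of the mixed boundary value problem (3.1):
#   -(c u')' = f on (0, 1),  u(0) = 0 (Dirichlet),  c u'(1) = g (Neumann),
# with c = 1, f = 1, g = 0, whose exact solution is u(x) = x - x^2/2.
# Linear finite elements; Lax-Milgram guarantees unique solvability.

n = 200
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Stiffness matrix and load vector on the free nodes x_1, ..., x_n.
K = np.zeros((n, n))
F = np.full(n, h)          # int f * hat_i dx = h at interior nodes
F[-1] = 0.5 * h            # half a hat at the Neumann end (+ g, here g = 0)
for i in range(n):
    K[i, i] = 2.0 / h if i < n - 1 else 1.0 / h
    if i + 1 < n:
        K[i, i + 1] = K[i + 1, i] = -1.0 / h

u = np.concatenate([[0.0], np.linalg.solve(K, F)])
exact = x - 0.5 * x**2
assert np.max(np.abs(u - exact)) < 1e-8   # nodally exact in this 1D setting
```

The Dirichlet condition is enforced by eliminating the constrained node, while the Neumann datum enters the load vector, mirroring how g enters the right-hand side of the weak form (3.3).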
We end this section with another abstract result that will be useful for the subsequent analysis. Consider the following problem with inhomogeneous data on the Dirichlet boundary, which we refer to as system (3.5). Well-known theory yields that, for every U ∈ H^{1/2}(Γ_D, R^d), there exists a unique weak solution u ∈ H^1(Ω, R^d). The proof follows similarly to the above as a direct consequence of the Lax-Milgram theorem. This allows us to introduce the associated solution operator, which we call the extension operator, mapping the boundary datum U to the unique weak solution u of system (3.5).

Well-posedness of the state systems
For a cleaner presentation, we abuse notation and use the same variables u and ũ to denote u_new and ũ_new. Then, the well-posedness of (2.1) and (2.2) (equivalently (3.7) and (3.8)) is formulated as follows.
Moreover, there exists a positive constant C, independent of ϕ, such that the stability estimate (3.11) holds. Proof. For (3.9) we invoke Proposition 3.1 with suitable specifications of the data, and for (3.10) we argue analogously, to obtain the existence and uniqueness of the solutions u and ũ. Lastly, the estimate (3.11) can be obtained from (3.4) and the uniform boundedness of the tensors C and C̃ in (A2).
where R > 0 is fixed, and let (u_i, ũ_i) denote the unique solutions to systems (3.7) and (3.8) corresponding to ϕ_i, but with the same data F, F̃, g, g̃, H and H̃. Then, there exists a positive constant C, independent of the differences, such that the estimate (3.12) holds. Proof. To start, we consider the difference between the variational equalities (3.9)-(3.10) written for (ϕ_1, u_1, ũ_1) and for (ϕ_2, u_2, ũ_2), and denote the differences by u := u_1 − u_2 and ũ := ũ_1 − ũ_2, to infer (3.13)-(3.14). Choosing ζ = u in (3.13) and invoking condition (2.4) yields (3.15). By Korn's inequality we infer (3.16). Then, inserting ζ = ũ in (3.14), and invoking the Lipschitz continuity of C̃ and χ from (A2) and (A4), as well as (3.16), yields (3.17). Applying Korn's inequality leads to (3.12).
Let {ϕ_n}_{n∈N} ⊂ U_ad converge almost everywhere in Ω to some ϕ_* ∈ U_ad, and let (u_n, ũ_n) denote the solutions to (3.9) and (3.10) corresponding to data ϕ_n, F, F̃, g, g̃, H and H̃. Then, it holds that, as n → ∞, (u_n, ũ_n) converge to (u_*, ũ_*), the unique solutions to (3.9) and (3.10) corresponding to data ϕ_*, F, F̃, g, g̃, H and H̃.
Proof. From (3.11) we infer that u_n and ũ_n are bounded in H^1_D(Ω, R^d) and H^1_{D̃}(Ω, R^d), respectively, and thus there exist limit functions (u_*, ũ_*) attained as weak limits along a non-relabelled subsequence. To obtain strong convergence, in (3.13) and (3.14) we substitute ϕ_1 = ϕ_n, ϕ_2 = ϕ_*, u_1 = u_n, u_2 = u_*, u = u_n − u_* and ũ = ũ_n − ũ_*. Then, by virtue of the dominated convergence theorem, we infer the relevant strong convergence of the coefficient terms as n → ∞. Hence, in the analogue of the first inequality in (3.15) we see that the integral on the right-hand side converges to zero, which implies via Korn's inequality that u_n → u_* strongly in H^1_D(Ω, R^d). By the generalised dominated convergence theorem, we have the analogous strong convergences for the Stage 2 quantities, and thus, in the analogue of the first inequality in (3.17) we see that the integral on the right-hand side converges to zero, leading to the assertion ũ_n → ũ_* strongly in H^1_{D̃}(Ω, R^d). Thus, by combining the weak convergences with the above norm convergences, the claim follows.
The above analysis for systems (3.7) and (3.8) allows us to define some solution operators. Namely, we introduce the overall solution operator S : ϕ ↦ ũ in (3.18), as well as the intermediate operators S_1 : ϕ ↦ (ϕ, u) and S_2 : (ϕ, u) ↦ ũ, where u = u(ϕ) and ũ = ũ(ϕ, u) are the unique solutions obtained from Theorem 3.1. Then, the overall solution operator S in (3.18) is simply the composition of the intermediate mappings S_1 and S_2, i.e., S = S_2 ∘ S_1. In particular, we can define the reduced cost functional J_red : U_ad → R, J_red : ϕ ↦ J(ϕ, S(ϕ)).
We can now state the existence of optimal designs: problem (P) admits at least one minimiser ϕ_* ∈ U_ad. Proof. As the proof is nowadays standard with the direct method of the calculus of variations, let us briefly outline the main points. Consider a minimising sequence {ϕ_n}_{n∈N} ⊂ U_ad for the reduced cost functional J_red, which satisfies lim_{n→∞} J_red(ϕ_n) = inf_{φ∈U_ad} J_red(φ) < ∞. This yields that {ϕ_n}_{n∈N} is bounded in H^1(Ω, R^L) ∩ L^∞(Ω, R^L). By standard compactness arguments, since U_ad is closed and convex, we obtain a limit function ϕ_* ∈ U_ad such that ϕ_n → ϕ_* weakly* in H^1(Ω, R^L) ∩ L^∞(Ω, R^L) along a non-relabelled subsequence. Consequently, by (3.11) the sequence {ũ_n = S(ϕ_n)}_{n∈N} is bounded in H^1_{D̃}(Ω, R^d), and on invoking Corollary 3.1 there exists a limit function ũ_* ∈ H^1_{D̃}(Ω, R^d) such that, along a non-relabelled subsequence, ũ_n → ũ_* strongly in H^1_{D̃}(Ω, R^d) as n → ∞. Continuity of the boundary trace operator gives ũ_n → ũ_* strongly in L^2(Γ̃_N, R^d), and thus

∫_{Γ_tar} W(ũ_n − u_tar) · (ũ_n − u_tar) dH^{d−1} → ∫_{Γ_tar} W(ũ_* − u_tar) · (ũ_* − u_tar) dH^{d−1}.

By Fatou's lemma and the a.e. convergence of ϕ_n to ϕ_*, we have liminf_{n→∞} ∫_Ω Ψ(ϕ_n) dx ≥ ∫_Ω Ψ(ϕ_*) dx, and using also the weak lower semicontinuity of the L^2-norm of the gradient, we infer that J_red(ϕ_*) ≤ liminf_{n→∞} J_red(ϕ_n). This shows that ϕ_* is a solution to (P).

Optimality conditions
To derive the first order necessary optimality conditions for ϕ * , we first study the linearised system for the linearised variables introduced below, and use adjoint variables to provide a simplification of the optimality condition.

Linearised systems and Fréchet differentiability
Here, we analyse some differentiability properties of the solution operators S_1 and S_2 introduced above. This will help us to formulate the first order optimality conditions of (P).
The directional derivative v is the unique weak solution to the linearised system (4.2), in the sense that the associated variational equality over Ω holds for all admissible test functions, where u is the unique solution to (3.9) associated with ϕ obtained from Theorem 3.1.
Proof. Firstly, the unique solvability of the linearised system (4.2) follows directly from the application of Proposition 3.1 upon choosing Next, we take ϕ ∈ U ad and h ∈ L ∞ (Ω, R L ) such that ϕ h := ϕ + h ∈ U ad , and set Since the first component of S 1 is just the identity in L ∞ (Ω, R L ), we only need to investigate the Fréchet differentiability of the second component S 2 1 . In this direction, we denote with v being the unique solution to the linearised system (4.2) associated to ϕ and h.
Our aim is to show the existence of a positive constant C such that the estimate (4.4) holds, which would then imply the Fréchet differentiability of the operator S_1^2. To this end, we subtract from (3.9) for ϕ_h the sum of (3.9) for ϕ and (4.3) for v to obtain a variational equality for w. Choosing ζ = w and using (2.4), we infer a first bound. The Lipschitz continuity of C yields a positive constant C_C controlling the remainder terms, and keeping in mind the estimate obtained from Theorem 3.2, we find a bound quadratic in h. Then, by Korn's inequality we infer (4.4), and hence the claimed Fréchet differentiability of S_1.
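The quadratic remainder estimate behind Fréchet differentiability is easy to observe numerically on a toy scalar "solution operator"; the example below is purely illustrative and unrelated to the elasticity system:

```python
import numpy as np

# Toy solution operator: u(phi) solves (1 + phi^2) u = f, so
#   S(phi) = f / (1 + phi^2),   S'(phi) h = -2 phi h f / (1 + phi^2)^2.
# Frechet differentiability means the remainder
#   S(phi + h) - S(phi) - S'(phi) h
# is O(|h|^2), mirroring the role of estimate (4.4).

f, phi = 1.0, 0.3

def S(p):
    return f / (1.0 + p * p)

def dS(p, h):
    return -2.0 * p * h * f / (1.0 + p * p) ** 2

ratios = []
for h in [1e-1, 1e-2, 1e-3]:
    remainder = abs(S(phi + h) - S(phi) - dS(phi, h))
    ratios.append(remainder / h**2)

# Quadratic remainder: remainder / h^2 stays bounded as h -> 0.
assert max(ratios) < 10.0
```

As h shrinks, the ratio remainder/h² approaches |S''(φ)|/2, confirming the second-order smallness of the remainder.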
Before presenting the Fréchet differentiability of S_2, let us provide a formal discussion. Recall that ũ = S_2(ϕ, u), and thus the directional derivative DS_2(ϕ, u)(h, k) of S_2 at (ϕ, u) along a direction (h, k) will be given by the sum of the partial derivatives, where D_ϕ and D_u represent the partial derivatives with respect to ϕ and u, respectively. Hence, we expect ϑ := DS_2(ϕ, u)(h, k) to be a sum of two functions v := D_ϕ S_2(ϕ, u)(h) and w := D_u S_2(ϕ, u)(k), and the Fréchet differentiability of S_2 can be established by demonstrating an estimate with a remainder quadratic in (h, k). The result is formulated as follows: ϑ is the unique weak solution to the linearised system (4.9), where u is the unique solution to (3.9) associated with ϕ and ũ is the unique solution to (3.10) associated with (ϕ, u).
Proof. Unique solvability of (4.9), (4.10) and (4.11) is obtained by an application of Theorem 3.1, and thus we focus on demonstrating the crucial estimate (4.7). Let ϕ ∈ U_ad. We observe that by subtracting from (3.10) for (ϕ + h, u + k, ũ_{h,k}) the sum of (3.10) for (ϕ, u, ũ) and (4.9) for ϑ, we obtain (4.12). Before proceeding with some computations, let us point out the identities (4.13) and (4.14). Similar to (4.5), we have, for positive constants C_χ and C_C, the corresponding Lipschitz estimates, and upon choosing ζ = ξ in (4.12), we infer a first bound. Then, employing the Young and Hölder inequalities, (4.6), as well as the stability estimates deduced from (3.12), we find a bound quadratic in (h, k), and thus we obtain by Korn's inequality the estimate (4.7), which verifies the Fréchet differentiability of S_2. Furthermore, it is clear that by the uniqueness of the solutions, the sum v + w is equal to ϑ. To establish the identification of the partial derivatives D_ϕ S_2(ϕ, u)(h) = v and D_u S_2(ϕ, u)(k) = w, we argue as follows: Consider k = 0, so that from (4.11) we obtain that w = 0 and ϑ = v. Then, in (4.12) with k = 0, we now have for ξ = ũ_h − ũ − v the estimate (4.16), where we made use of ũ_{h,0} = ũ_h. A short calculation shows that the remainder is of quadratic order in h, which entails that D_ϕ S_2(ϕ, u)(h) = v. The other identification D_u S_2(ϕ, u)(k) = w is in fact more straightforward as S_2(ϕ, ·) is a linear operator. This completes the proof.

Adjoint systems
We now move to the investigation of some adjoint systems which, as typically happens in constrained minimisation problems, will allow us to simplify the formulation of the optimality conditions of (P).
As the proof of existence and uniqueness is simply an application of Theorem 3.1, we omit the details.
Remark 4.1. Notice that the adjoint variable p to the Stage 1 deformation u is dependent on the adjoint variable q to the Stage 2 deformation ũ. This backwards dependence has some parallels with the adjoint systems associated to time-dependent state equations, which have to be solved backwards in time. Let (p, q) denote the solutions to the adjoint systems (4.18) and (4.20) corresponding to (ϕ_*, u_*, ũ_*). Then, it necessarily holds that the variational inequality (4.21) below is satisfied.

First order optimality conditions
Any minimiser ϕ_* of (P) satisfies the variational inequality (4.21), where we set Ψ_{,ϕ} as the vector of partial derivatives of Ψ.
Proof. As U_ad is a non-empty, closed and convex set, standard results in optimal control and convex analysis yield that the first order necessary optimality condition for ϕ_* is the variational inequality (4.22), expressed in terms of the linearised variable ϑ, where ϑ is the unique solution to (4.9) with ϕ = ϕ_*, u = u_*, ũ = ũ_*, h = φ − ϕ_* and k = v from (4.3) (also with h = φ − ϕ_*). To simplify the above relation, the procedure is to compare the equalities obtained from (4.3) with ζ = p, (4.9) with k = v and ζ = q, (4.18) with ζ = ϑ and (4.20) with ζ = v. A short calculation then shows an identity which allows us to remove the dependence on ϑ from (4.22) and leads to (4.21).
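The variational inequality over the closed convex set U_ad can be illustrated on a finite-dimensional toy problem: minimising a quadratic over the Gibbs simplex, where the minimiser is the Euclidean projection and the first order condition can be checked directly (the example and all data are ours):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the Gibbs simplex via the sort-based rule."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

# Toy reduced objective J(phi) = 0.5*|phi - a|^2 over the simplex; the
# minimiser is the projection of a, and the first order condition is the
# variational inequality  grad J(phi*) . (psi - phi*) >= 0 for admissible psi.
a = np.array([0.9, 0.5, -0.2])
phi_star = project_simplex(a)
grad = phi_star - a

rng = np.random.default_rng(0)
for _ in range(100):
    psi = rng.dirichlet(np.ones(3))      # random admissible competitor
    assert grad @ (psi - phi_star) >= -1e-12
```

Note that the gradient need not vanish at the minimiser; the inequality only forbids descent directions pointing into the admissible set, which is exactly the structure of (4.21).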

Sharp interface asymptotics
In this section we study the behaviour of solutions in the sharp interface limit ε → 0. For ε > 0, we denote by J^ε_red the reduced cost functional, where S is the solution operator defined in (3.18), so that it can be expressed as the sum J^ε_red(ϕ) = G(ϕ) + γE_ε(ϕ) of the target-matching term G and the Ginzburg-Landau energy E_ε. The asymptotic behaviour of solutions can be studied under the framework of Γ-convergence. In order to state the result some preparation is needed. A function ϕ ∈ L^1(Ω, R^L) is termed a function of bounded variation in Ω, written as ϕ ∈ BV(Ω, R^L), if its distributional derivative is a finite matrix-valued Radon measure. For ϕ ∈ BV(Ω, T) taking values in the finite set T := {e_1, …, e_L}, we write E^i_ϕ := {x ∈ Ω : ϕ(x) = e_i} and define the essential boundary ∂*E^i_ϕ as the set of points where, for any ρ > 0, the balls B_ρ(x) contain a non-negligible portion of both E^i_ϕ and its complement. Then, the Γ-convergence of E_ε to E_0 as ε → 0 is expressed as the following assertions: • Liminf property. If {ϕ_ε}_{ε>0} is a sequence such that liminf_{ε→0} E_ε(ϕ_ε) < ∞ and ϕ_ε → ϕ_0 in L^1(Ω, R^L), then ϕ_0 ∈ BV(Ω, T) with E_0(ϕ_0) ≤ liminf_{ε→0} E_ε(ϕ_ε).
Proof. The proof relies on the Γ-convergence of the Ginzburg-Landau functional and the stability of Γ-convergence under continuous perturbations. By Corollary 3.1 and the continuity of the trace operator, we see that G is continuous. For arbitrary ψ ∈ BV(Ω, T), we invoke the limsup property to find a sequence {ψ_ε}_{ε>0} such that ψ_ε → ψ strongly in L^1(Ω, R^L) with E_ε(ψ_ε) → E_0(ψ). As ϕ_ε minimises J^ε_red, we see that J^ε_red(ϕ_ε) ≤ J^ε_red(ψ_ε). By the non-negativity of G, the above estimate implies sup_{ε∈(0,1]} E_ε(ϕ_ε) < ∞, and by the compactness property we deduce that there exists a limit function ϕ_0 ∈ BV(Ω, T) such that ϕ_ε → ϕ_0 strongly in L^1(Ω, R^L) along a non-relabelled subsequence. Continuity of G then gives G(ϕ_ε) → G(ϕ_0), and invoking the liminf property we subsequently infer that J^0_red(ϕ_0) ≤ liminf_{ε→0} J^ε_red(ϕ_ε) ≤ limsup_{ε→0} J^ε_red(ψ_ε) = J^0_red(ψ). As ψ is arbitrary, this shows that ϕ_0 is a minimiser of J^0_red. We now return to the beginning of the proof and consider using the limsup property to construct a sequence {ϕ_ε}_{ε>0} that converges strongly to ϕ_0 in L^1(Ω, R^L). Then, following a similar argument we arrive at the convergence of the cost values, which provides the claimed assertion lim_{ε→0} J^ε_red(ϕ_ε) = J^0_red(ϕ_0).
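The heuristics behind the Γ-limit can be checked in 1D for the scalar two-phase energy with the smooth potential Ψ(s) = (1 − s²)² (this specific potential is our choice for illustration): along the optimal profile tanh(√2 x/ε) the energy is independent of ε and equals the surface constant c_Ψ = ∫_{−1}^{1} √(2Ψ(s)) ds = 4√2/3, which is the weight multiplying the perimeter in the limit functional.

```python
import numpy as np

# 1D Modica-Mortola check: E_eps(phi) = int eps/2 |phi'|^2 + Psi(phi)/eps
# with Psi(s) = (1 - s^2)^2.  Along phi(x) = tanh(sqrt(2) x / eps) the
# energy is eps-independent and equals c_Psi = 4*sqrt(2)/3.

def gl_energy(eps, R=1.0, n=200001):
    x = np.linspace(-R, R, n)
    phi = np.tanh(np.sqrt(2.0) * x / eps)
    dphi = np.gradient(phi, x)
    integrand = 0.5 * eps * dphi**2 + (1.0 - phi**2) ** 2 / eps
    return np.sum(integrand) * (x[1] - x[0])

c_psi = 4.0 * np.sqrt(2.0) / 3.0
for eps in [0.1, 0.05, 0.025]:
    assert abs(gl_energy(eps) - c_psi) < 1e-3
```

Shrinking ε localises the transition layer without changing the energy, which is why minimisers of E_ε converge to sharp interface configurations whose perimeter is weighted by c_Ψ.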

Formally matched asymptotic expansions
We turn our attention towards the optimality condition (4.21) and study its sharp interface limit ε → 0 with the method of formally matched asymptotic expansions, where we assume that the functions ϕ_ε, u_ε, ũ_ε, p_ε, and q_ε admit asymptotic expansions in powers of ε. From Lemma 5.1 we saw that ϕ_ε converges to a function ϕ_0 ∈ BV(Ω, T) as ε → 0, and thus for 0 < ε < 1, we expect ϕ_ε to change its values rapidly on a length scale proportional to ε. This inspires us to consider two asymptotic expansions of (ϕ_ε, u_ε, ũ_ε, p_ε, q_ε) in the bulk and interfacial regions (to be defined below), and the procedure is to match these expansions in an intermediate region to deduce the expected equations in the sharp interface limit. We follow the ideas in [13], which treats a similar system of equations, and refer the reader to, e.g., [18,25,32,33] for more details regarding the methodology. Recalling that T = Ψ^{−1}(0) = {e_1, …, e_L} is the set of corners of the Gibbs simplex ∆^L, we partition the domain Ω into regions Ω_i, i = 1, …, L, where Ω_i = {x ∈ Ω : ϕ_0(x) = e_i}. Then, we assume the functions ϕ_ε, u_ε, ũ_ε, p_ε, and q_ε are sufficiently smooth and admit an asymptotic expansion in powers of ε for all points x ∈ Ω away from the interfaces Γ_ij = ∂Ω_i ∩ ∂Ω_j for i, j ∈ {1, …, L}, i ≠ j. This is known as the outer expansion. Furthermore, we assume that the higher order terms of the expansion of ϕ_ε take values in TΣ^L, the tangent space of the affine hyperplane Σ^L = {v ∈ R^L : Σ_{i=1}^L v_i = 1}, so that by the above construction ϕ_ε(x) ∈ ∆^L for ε sufficiently small. We assume that there are constant elasticity tensors C_i and C̃_i for i = 1, …, L, satisfying the standard symmetry conditions and being positive definite. Then, for ϕ = (ϕ_1, …, ϕ_L) such that ϕ(x) ∈ ∆^L, we consider C(ϕ) = Σ_{i=1}^L ϕ_i C_i and C̃(ϕ) = Σ_{i=1}^L ϕ_i C̃_i. Then, substituting the outer expansions into the state systems (2.1), (2.2) and the adjoint systems (4.17) and (4.19), to leading order we obtain, for i = 1, …, L, the corresponding bulk equations in each region Ω_i, where χ_i := χ(e_i).
It then remains to furnish the above with boundary conditions for (u_0, ũ_0, p_0, q_0) on the interfaces Γ_ij for i, j ∈ {1, …, L}, i < j, which we assume are smooth hypersurfaces that can be obtained in the limit ε → 0. These boundary conditions can be inferred with the help of a corresponding inner expansion for (ϕ_ε, u_ε, ũ_ε, p_ε, q_ε) in the interfacial regions bordering two bulk regions Ω_i and Ω_j. To this end, we focus on one particular interface Γ_ij and introduce a change of coordinates with the help of a local parameterisation γ : U → R^d, where U ⊂ R^{d−1} is a spatial parameter domain. Close to γ(U), consider the signed distance function d such that d(x) > 0 if x ∈ Ω_j and d(x) < 0 if x ∈ Ω_i, so that the unit normal ν to Γ_ij points from Ω_i to Ω_j. Introducing the rescaled signed distance z = d/ε, a local parameterisation of x ∈ R^d close to γ(U) can be given as x = γ(s) + ε z ν(s). This representation allows us to infer expansions for gradients, divergences and Laplacians [34] for scalar functions b(x) = b(s(x), z(x)) and vector functions j(x) = j(s(x), z(x)), involving the curvature κ_{Γ_ij} of Γ_ij, the surface gradient operator ∇_{Γ_ij}, and the surface divergence operator div_{Γ_ij} on Γ_ij. Then, for points close to Γ_ij, we assume an inner expansion of the form ϕ_ε(x) = Φ_0(s, z) + ε Φ_1(s, z) + …, and analogously with coefficients U, Ũ, P, Q for the remaining variables. Lastly, we assume that in a tubular neighbourhood of Γ_ij the outer expansions and the inner expansions hold simultaneously. Within this intermediate region certain matching conditions relating the outer expansions to the inner expansions must hold; for a scalar function b(x) admitting an outer expansion Σ_{k=0}^∞ ε^k b_k(x) and an inner expansion Σ_{k=0}^∞ ε^k B_k(s, z), these are stated for x ∈ Γ_ij in [34, Appendix D]. Consequently, we denote the jump of a quantity b across Γ_ij as [b], the difference of the limits from Ω_j and Ω_i. Note that the above matching conditions also apply to vectorial functions. We introduce the orthogonal projection onto the tangent space TΣ^L
, so that the optimality condition (4.21) can be expressed in the strong form (5.3). We substitute the inner expansions into the equations (2.1a), (2.2a), (4.17a), (4.19a) and (5.3) and collect terms of the same order. Then, we perform some computations to deduce the boundary conditions posed on Γ ij . As the subsequent analysis is similar to that performed in [13,32], we omit most of the straightforward details. In the sequel we write ½(B + B^T ) for the symmetric part of a second order tensor B, and use the shorthand E X for the corresponding symmetrised quantity associated with X ∈ {U , U , P , Q}.
To leading order O(1/ε 2 ), equations (2.1a) and (4.17a) yield second order equations in z for U 0 and Q 0 . Multiplying by U 0 and Q 0 , respectively, integrating over z ∈ R, integrating by parts and applying the matching conditions allows us to deduce that ∂ z U 0 = ∂ z Q 0 = 0, and hence both U 0 and Q 0 are constant in z. Applying the matching conditions we infer that u 0 and q 0 do not jump across Γ ij . Then, to leading order O(1/ε 2 ), equations (2.2a) and (4.19a) yield the analogous statements, on account of the fact that ∂ z U 0 = ∂ z Q 0 = 0. Hence, we also obtain that all leading order inner functions are constant in z.

To first order O(1/ε), we get from (2.1a) and (4.17a) further equations. Integrating over z ∈ R and using the matching conditions yields (5.4). Similarly, from equations (2.2a) and (4.19a) to first order O(1/ε), integrating over z ∈ R and applying the matching conditions, we obtain (5.5). Turning now to the optimality condition (5.3), to leading order we obtain an ordinary differential system for Φ 0 . Following [18], Φ 0 can be chosen independent of s and as the solution to this system such that lim z→−∞ Φ 0 (z) = e i and lim z→+∞ Φ 0 (z) = e j . To the next order O(1), we obtain an equation (5.6) for Φ 1 , where Ψ ,ϕϕ denotes the Hessian matrix of Ψ, and we have used that ∂ z Φ 0 ∈ T Σ L . Note that, by the fact that ∂ z X 0 = 0 for X ∈ {U , U , P , Q} and by the symmetry of the elasticity tensors, C ijkl = C jikl , we have the relations (5.7) for any X, Y ∈ {U , U , P , Q}.

To obtain a solution Φ 1 , a solvability condition has to hold, which can be derived by multiplying (5.6) with ∂ z Φ 0 and integrating over z. Using the relations (5.7), after integrating by parts and applying the matching conditions, we obtain (5.8). We employ the identities obtained from (5.4), (5.5) and (5.7), as well as ∂ z ν = 0, to simplify the resulting terms. Via a similar calculation we infer the corresponding identity for the adjoint quantities, and thus, setting b ij := ∫_R 2|∂ z Φ 0 | 2 dz and applying the matching conditions for E X and ∂ z X 1 with X ∈ {U , U , P , Q}, we obtain from (5.8) the solvability condition (5.9) that has to hold on Γ ij . 
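The solvability argument for Φ 1 is an instance of the Fredholm alternative for the linearised interface operator; the following schematic (our reconstruction, with details as in [13,18]) indicates its structure.

```latex
% Writing the O(1) equation (5.6) schematically as
%   L Phi_1 := \partial_{zz} Phi_1 - \Psi_{,\phi\phi}(\Phi_0)\,\Phi_1 = f(s,z),
% differentiating the Phi_0 system in z shows \partial_z \Phi_0 \in \ker L,
% so a bounded solution Phi_1 exists only if f is orthogonal to it:
\begin{equation*}
  \int_{-\infty}^{\infty} f(s,z)\cdot \partial_z \Phi_0(z)\, \mathrm{d}z = 0 .
\end{equation*}
% Evaluating this orthogonality relation with the matching conditions
% yields the condition (5.9) on Gamma_ij.
```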
Thus, the sharp interface limit consists of the equations (5.1) and (5.2) posed in Ω i , 1 ≤ i ≤ L, furnished with the boundary conditions (5.9) and the jump conditions derived above.

Remark 5.1. It is possible to consider the sharp interface limit near a triple junction where three regions meet. We refer to [13,18,53] for more details regarding the asymptotic analysis around a triple junction.
Definition 5.1. The space V ad of admissible velocity fields is defined as the set of all sufficiently smooth velocity fields V on [−τ, τ ] × Ω, where τ > 0 is a fixed small constant, such that the following properties hold. Then, the space T ad of admissible transformations is defined as the set of solutions T to the associated ordinary differential equations. Notice that by the second property it holds T t (Ω) = Ω for all t ∈ [−τ, τ ].

Let V ∈ V ad be an admissible velocity field with corresponding transformation T ∈ T ad . For ϕ ∈ U ad we define ϕ t := ϕ • T −1 t , along with the unique solutions (u t , u t ), where u t = S 2 1 (ϕ t ) and u t = S(ϕ t ). Setting (ϕ 0 , u 0 , u 0 ) = (ϕ, u, u) and following a similar proof to [14, Lem. 25], we define for τ 0 > 0 sufficiently small the function F 1 . Using a change of variables y = T t (x) and the relation [57, Prop. 2.47] for the boundary transformation, we obtain an identity in which E y (u) = ½(∇ y u + (∇ y u)^T ) for u : T t (Ω) → R d , and dH d−1 y denotes the (d − 1)-dimensional Hausdorff measure related to y. From the properties of the mapping T t , we find that T t (Γ N ) = Γ N . Hence, from the above identity we observe that F 1 (t, u t ) = 0. Denoting by D u F 1 the partial derivative of F 1 with respect to its second argument, we find an explicit expression for D u F 1 (0, u), where we have used the relation ∇T −1 t | t=0 = I, the identity matrix. As D u F 1 (0, u) is an isomorphism by the Lax–Milgram theorem, the application of the implicit function theorem allows us to infer that the mapping t ↦ u t is differentiable, with derivative u[V] characterised by a variational identity whose right-hand side reads

∫ Ω (∇F V(0) + F div(V(0))) · ζ dx + ∫ Γ N (∇g V(0) + g div(V(0)) − n · ∇V(0)n) · ζ dH d−1 (5.11)

for all ζ ∈ H 1 D (Ω, R d ). 
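The ordinary differential equations generating T ad follow the standard speed (velocity) method of shape calculus; a generic form (our reconstruction, following standard references such as [57]) reads:

```latex
% Speed method: T_t is the flow map of the velocity field V,
\begin{equation*}
  \frac{\mathrm{d}}{\mathrm{d}t}\, T_t(x) = V\bigl(t, T_t(x)\bigr),
  \qquad T_0(x) = x, \qquad t \in [-\tau, \tau],
\end{equation*}
% and designs are perturbed geometrically via phi_t = phi \circ T_t^{-1}.
```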
In the above, we have made use of the relations for transformed gradients and surface measures (see [57]). Furthermore, substituting ζ = u[V] into (5.11), by means of Korn's inequality and the smoothness of V(0), we obtain the estimate (5.12). Via a similar procedure, for a small τ 0 > 0, we consider an analogous auxiliary function for u t . Then, by a change of variables y = T t (x), we find that its partial derivative with respect to the second argument is an isomorphism, so by the implicit function theorem we infer that the mapping t ↦ u t is differentiable, with derivative u[V] being the unique solution to a distributional equation (5.13). Substituting this derivative into (5.13), then using (5.12) and Korn's inequality, we obtain the estimate (5.14).

The next result details an optimality condition for minimisers ϕ ε to (P ε ) obtained via geometric variations.
Proof. For any V ∈ V ad , let T ∈ T ad be the associated transformation, and consider the scalar function g(t) := J ε red (ϕ ε • T −1 t ) for t ∈ (−τ 0 , τ 0 ), where τ 0 is sufficiently small. As ϕ ε is a minimiser of J ε red , we have g ′ (0) = 0. The derivative of the interfacial energy part can be obtained as in [41, Lem. 7.5]. Denoting u ε (t) = S(ϕ ε • T −1 t ), the derivative of G(ϕ ε • T −1 t ) can be obtained by a standard change of variables, leading to (5.15).
The convergence of (5.15) as ε → 0 to its sharp interface limit is formulated as follows.
Remark 5.3. With more regularity, it is possible to relate (5.17) with the solvability condition (5.9) in the two-phase setting, see [14,41] for more details.

Numerical simulations
In this section we present the finite element discretisation and showcase several numerical simulations in two and three dimensions for the two-phase case. Namely, we have L = 2 and we consider the optimal distribution of a single type of active material within a passive material.

Finite element discretisation
We assume that Ω is a polyhedral domain and let T h be a regular triangulation of Ω into disjoint open simplices. Associated with T h are the piecewise linear finite element spaces S h , where we denote by P 1 (o) the set of all affine linear functions on a simplex o, cf. [22]. In addition we define

V h = {ζ ∈ S h : |ζ| ≤ 1 in Ω}, (6.1)

as well as the corresponding vector-valued finite element spaces. We also let (·, ·) denote the L 2 -inner product on Ω, and let (·, ·) h be the usual mass lumped L 2 -inner product on Ω associated with T h . In addition, we write (A, B) C := (CA, B) for any fourth order tensor C and any matrices A and B. Finally, τ denotes a chosen uniform time step size.

We now introduce finite element approximations of the state equations (3.9) and (3.10), the adjoint systems (4.18) and (4.20), as well as the optimality condition (4.21), with a pseudo-time evolution based on an L 2 -gradient flow approach. In particular, we consider the obstacle potential (5.10) with Ψ(s) = ½(1 − s 2 ), which leads to a variational inequality. The fully discrete numerical scheme is (6.2), consisting of the subsystems (6.2a)–(6.2e). We implemented the scheme (6.2) with the help of the finite element toolbox ALBERTA, see [56]. To increase computational efficiency, we employ adaptive meshes, which have a finer mesh size h f within the diffuse interfacial regions and a coarser mesh size h c away from them, see [9,10] for a more detailed description. In each iteration, we first solve the linear system (6.2a) to obtain u n h , then (6.2b) for u n h , then (6.2c) for q n h , then (6.2d) for p n h , and finally the variational inequality (6.2e) for ϕ n h . In two space dimensions we employ the package LDL, see [23], together with the sparse matrix ordering AMD, see [6], in order to solve the linear systems (6.2a)–(6.2d). In three space dimensions, on the other hand, we use a preconditioned conjugate gradient algorithm with a W-cycle multigrid step as preconditioner. In order to solve the variational inequality (6.2e) we employ a nonlinear multigrid method similar to [42].
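The interplay of L 2 -gradient flow, obstacle potential and projection onto |ϕ| ≤ 1 can be illustrated in a drastically simplified setting: a one-dimensional explicit projected-gradient step for the Ginzburg–Landau energy alone, without the elastic coupling. This is a minimal sketch under illustrative discretisation choices, not the paper's multigrid scheme for (6.2e).

```python
import numpy as np

def projected_gradient_flow(phi0, eps, gamma, h, tau, steps):
    """Explicit projected L2-gradient flow for the 1D Ginzburg-Landau energy
    with obstacle potential Psi(s) = (1 - s^2)/2: each step moves in the
    direction gamma*(eps*lap(phi) + phi/eps) and then clips phi to [-1, 1],
    which realises the discrete obstacle constraint by projection."""
    phi = phi0.copy()
    for _ in range(steps):
        lap = np.empty_like(phi)
        lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2
        lap[0] = (phi[1] - phi[0]) / h**2      # homogeneous Neumann
        lap[-1] = (phi[-2] - phi[-1]) / h**2   # boundary conditions
        phi = phi + tau * gamma * (eps * lap + phi / eps)
        phi = np.clip(phi, -1.0, 1.0)          # obstacle projection
    return phi

def ginzburg_landau_energy(phi, eps, gamma, h):
    """Discrete energy gamma * sum( (eps/2)|phi'|^2 + Psi(phi)/eps ) * h."""
    grad = np.diff(phi) / h
    interfacial = 0.5 * eps * np.sum(grad**2) * h
    obstacle = np.sum(0.5 * (1.0 - phi**2)) * h / eps
    return gamma * (interfacial + obstacle)
```

Starting from a small random mixture, the iteration drives ϕ towards the pure phases ±1 separated by diffuse interfaces of width proportional to ε, with the energy decreasing monotonically for a sufficiently small step size τ.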
For the computational domain we will choose Ω = [0, L 1 ] × [−1/2, 1/2] in two dimensions, and Ω = [0, · · · ] in three dimensions, with positive lengths L i , i ∈ {1, 2}, given below. For the physical parameters we loosely follow the settings in [50]. In particular, for the forcings we choose F = F = 0 throughout, as well as g = 0. For the interpolated elasticity tensors we choose C(s) = ½[(1 + s)C + + (1 − s)C − ], where the two tensors C ± are defined through the Young's moduli E ± and Poisson ratios ν ± via E + = 3, E − = 0.7, ν + = ν − = 0.45. For the visualisation of the progress of the discrete gradient flow computations, we define the discrete cost functional J h (ϕ n h , u n h ). As the initial data ϕ 0 h we choose a random mixture with mean zero. Choosing other initial data, including random mixtures with positive or negative mean, had no visible influence on the obtained optimal shapes.
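The interpolation C(s) = ½[(1 + s)C + + (1 − s)C − ] with isotropic tensors built from (E ± , ν ± ) can be written down directly; the function names below are illustrative, and the Lamé-parameter conversion is the standard one for isotropic elasticity.

```python
import numpy as np

def isotropic_tensor(E, nu, d=3):
    """Fourth order isotropic elasticity tensor
    C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk),
    with Lame parameters from Young's modulus E and Poisson ratio nu."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    I = np.eye(d)
    return (lam * np.einsum('ij,kl->ijkl', I, I)
            + mu * (np.einsum('ik,jl->ijkl', I, I)
                    + np.einsum('il,jk->ijkl', I, I)))

def interpolated_tensor(s, C_plus, C_minus):
    """Affine interpolation C(s) = 0.5*[(1+s)*C_plus + (1-s)*C_minus],
    so C(+1) = C_plus (the phase phi = 1) and C(-1) = C_minus."""
    return 0.5 * ((1.0 + s) * C_plus + (1.0 - s) * C_minus)
```

With the values from the text, `isotropic_tensor(3.0, 0.45)` and `isotropic_tensor(0.7, 0.45)` give the two endpoint tensors; the interpolant inherits the minor and major symmetries for every s.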

Numerical simulations in two dimensions
For the target shapes we consider the parabolic profile (6.8a) with c tar > 0, and the cosine profile (6.8b) with c tar > 0 and k tar > 0. In Figure 2 we plot two examples of the profiles in (6.8) for the domain length L 1 = 12.
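Plausible forms of the two profiles can be sketched as follows. The parabolic form mirrors the three-dimensional analogue u tar (x 1 , x 2 ) = c tar x 1 2 quoted in Section 6.3; the cosine form is an illustrative assumption only, since the precise formula (6.8b) is not reproduced above.

```python
import numpy as np

def parabolic_profile(x1, c_tar):
    """Parabolic target displacement, cf. (6.8a); mirrors the 3D
    analogue u_tar(x1, x2) = c_tar * x1^2 given in the text."""
    return c_tar * x1**2

def cosine_profile(x1, c_tar, k_tar, L1):
    """Cosine-type target displacement with k_tar oscillations over [0, L1];
    this functional form is a hypothetical stand-in for (6.8b)."""
    return c_tar * (1.0 - np.cos(k_tar * np.pi * x1 / L1))
```

Both profiles vanish at x 1 = 0, so the target deformation grows away from the clamped end of the domain.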
In each of the Figures 3, 4, 5 and 6, we provide visualisations of the numerical solution ϕ n h at various pseudo-times (black denotes the passive material {ϕ n h = −1} and grey denotes the active material {ϕ n h = 1}), together with the corresponding displacements u n h (in red) and u n h (in green). As mentioned before, each time the gradient flow is started from a random mixture ϕ 0 h with zero mean. We also provide pseudo-time plots of the cost functional J h (ϕ n h , u n h ), of the proportions of the elastic energy E h,tar ( u n h ) and the interfacial energy γE h (ϕ n h ), as well as log-plots of the elastic energies. The parameter details are summarised in the respective figure captions. For all the presented simulations we choose the parameters ε = 1/(8π) and γ = 0.01. In each case the cost functional decreases monotonically, but the proportions of the two energies (elastic vs interfacial) differ from case to case.
The first experiment in Figure 3 is for the parabolic profile on the domain [0, 6] × [−1/2, 1/2]. We observe that in the optimal distribution of material, the active phase occupies most of the lower domain. This ensures that in the programmed stage, the printed active composite is able to attain the desired target shape.

[Figure 3: the domain [0, 6] × [−1/2, 1/2] for the target shape (6.8a) with c tar = 0.075, W = Id and Γ tar = ∂Ω, with ε = 1/(8π) and γ = 0.01. We display ϕ n h and the displacements u n h (red) and u n h (green) at pseudo-times t = 0.1, 0.5, 1. Below we show plots of the cost functional J h (ϕ n h , u n h ), the proportion in it of the elastic energy E h,tar ( u n h ) (solid blue) and the interfacial energy γE h (ϕ n h ) (dashed red), as well as of log 10 E h,tar ( u n h ).]
Not surprisingly, a very different distribution of material is obtained when changing the target shape functional to enforce a cosine profile at the programmed stage. As can be seen from Figure 4, the optimal distribution of the active material is now given by an elongated region that connects the lower left of the domain with the upper right.
It turns out that on longer (or thinner) domains, far less active material is needed to achieve significant deformations at the programmed stage. For example, in Figure 5 a minuscule amount of active material, spread over several connected components arranged at the bottom of the domain, is sufficient to produce the desired parabolic target shape.
Similarly, we observe in Figure 6 that three strategically placed small amounts of active material guarantee a cosine profile at the programmed stage for the printed active composite.

Numerical simulations in three dimensions
[Figure caption: the target shape (6.8b) with c tar = 0.25, k tar = 2, W = e 2 ⊗ e 2 and Γ tar = ∂Ω, with ε = 1/(8π) and γ = 0.01. We display ϕ n h and the displacements u n h (red) and u n h (green) at pseudo-times t = 1, 2, 5. Below we show plots of the cost functional J h (ϕ n h , u n h ), the proportion in it of the elastic energy E h,tar ( u n h ) (solid blue) and the interfacial energy γE h (ϕ n h ) (dashed red), as well as of log 10 E h,tar ( u n h ).]

[Figure caption: the target shape (6.8a) with c tar = 0.02, W = Id and Γ tar = ∂Ω, with ε = 1/(8π) and γ = 0.01. We display ϕ n h and the displacements u n h (red) and u n h (green) at pseudo-times t = 0.5, 1, 2. Below we show plots of the cost functional J h (ϕ n h , u n h ), the proportion in it of the elastic energy E h,tar ( u n h ) (solid blue) and the interfacial energy γE h (ϕ n h ) (dashed red), as well as of log 10 E h,tar ( u n h ).]

[Figure caption: the target shape (6.8b) with c tar = 1, k tar = 1.5, W = Id and Γ tar = ∂ top Ω, with ε = 1/(8π) and γ = 0.01. We display ϕ n h and the displacements u n h (red) and u n h (green) at pseudo-times t = 0.1, 0.5, 1. Below we show plots of the cost functional J h (ϕ n h , u n h ), the proportion in it of the elastic energy E h,tar ( u n h ) (solid blue) and the interfacial energy γE h (ϕ n h ) (dashed red), as well as of log 10 E h,tar ( u n h ).]

For the target shapes we consider a function of the form
and choose u tar from one of the following: the parabolic profile u tar (x 1 , x 2 ) = c tar (x 1 ) 2 in (6.9), its cosine analogue in (6.10), and a twist profile in (6.11). Notice that (6.9) and (6.10) are simply the three dimensional analogues of the parabolic profile (6.8a) and the cosine profile (6.8b), respectively. On the other hand, (6.11) yields a linear profile in x 1 with a twisting in the x 2 -direction.
In each of the Figures 7, 8, 9 and 10 we provide visualisations of the numerical solution ϕ n h at various pseudo-times (black denotes the passive material {ϕ n h = −1} and grey denotes the active material {ϕ n h = 1}), together with the corresponding displacements u n h (darker colours indicating lower values of u n h · e 3 and lighter colours indicating higher values of u n h · e 3 ). We also provide pseudo-time plots of the energy functionals, similarly to the 2D simulations in the previous subsection. The parameter details are summarised in the respective figure captions. In the first three figures we take ε = 1/(4π) and choose γ as either 0.1 or 0.01. In each simulation the cost functional decreases monotonically, but the proportions of the two energies (elastic vs interfacial) differ from case to case.
The first simulation is a direct 3D analogue of the computation previously shown in Figure 3, the only difference being that here we restrict the set Γ tar to the upper part of the boundary ∂Ω. As expected, the observed results are very close to the ones seen previously in the 2D setting. In particular, the active phase occupies the lower half of the domain, with a dip towards the right end of the domain.

[Figure caption: we display ϕ n h (side view and bottom view) and the displacement u n h (with colour coding for u n h · e 3 ) at pseudo-times t = 1, 2, 5. Below we show plots of the cost functional J h (ϕ n h , u n h ), the proportion in it of the elastic energy E h,tar ( u n h ) (solid blue) and the interfacial energy γE h (ϕ n h ) (dashed red), as well as of log 10 E h,tar ( u n h ).]

On more elongated domains we once again observe that relatively little active material can result in large deformations at the programmed stage. For example, the strategic placement of the active component seen in Figure 8 yields a large cosine profile deformation. It is interesting to note that by simply changing the weighting matrix W in the target energy functional, we obtain a completely different optimal distribution of active material. This can be seen in Figure 9, where the only change to the previous simulation is W = Id, rather than W = e 3 ⊗ e 3 . Now there are just two connected components for the active phase, one at the lower left part of the domain and one at the middle of the top of the domain. Yet the obtained deformations at the programmed stage are very similar.
Our final numerical simulation is for a twisted target shape. In particular, we use the target function (6.11) with c tar = 0.1 on the domain Ω = [0, 6] × [−3, 3] × [0, 1], with W = e 3 ⊗ e 3 and Γ tar = ∂ right Ω. In Figure 10 we provide visualisations of the numerical solution ϕ n h at various pseudo-times, and the corresponding displacements u n h (here darker colours indicate lower values of | u n h | and lighter colours indicate higher values of | u n h |). For this computation we take ε = 1/(2π) and γ = 0.02. At the optimal configuration we observe an elaborate distribution of the active material, which yields a twisted shape of the component at the programmed stage.

[Figure caption: the domain [0, · · · ] × [0, 1] for the target shape (6.10) with c tar = 1, k tar = 2, W = e 3 ⊗ e 3 and Γ tar = ∂ top Ω, with ε = 1/(4π) and γ = 0.1. We display ϕ n h (side view and bottom view) and the displacement u n h (with colour coding for u n h · e 3 ) at pseudo-times t = 1, 2, 10. Below we show plots of the cost functional J h (ϕ n h , u n h ), the proportion in it of the elastic energy E h,tar ( u n h ) (solid blue) and the interfacial energy γE h (ϕ n h ) (dashed red), as well as of log 10 E h,tar ( u n h ).]

[Figure caption: the domain [0, · · · ] × [0, 1] for the target shape (6.10) with c tar = 1, k tar = 2, W = Id and Γ tar = ∂ top Ω, with ε = 1/(4π) and γ = 0.01. We display ϕ n h (side view and bottom view) and the displacement u n h (with colour coding for u n h · e 3 ) at pseudo-times t = 0.1, 0.5, 1. Below we show plots of the cost functional J h (ϕ n h , u n h ), the proportion in it of the elastic energy E h,tar ( u n h ) (solid blue) and the interfacial energy γE h (ϕ n h ) (dashed red), as well as of log 10 E h,tar ( u n h ).]

[Figure caption: the target shape (6.11) with c tar = 0.1, W = e 3 ⊗ e 3 and Γ tar = ∂ right Ω, with ε = 1/(2π) and γ = 0.02. We display ϕ n h and the displacement u n h (with colour coding for | u n h |) at pseudo-times t = 0.1, 1, 2. In the middle we show plots of the cost functional J h (ϕ n h , u n h ), the proportion in it of the elastic energy E h,tar ( u n h ) (solid blue) and the interfacial energy γE h (ϕ n h ) (dashed red), as well as of log 10 E h,tar ( u n h ). In the bottom we show a backward view of the first row of plots.]