Physics-informed machine learning in asymptotic homogenization of elliptic equations

We apply physics-informed neural networks (PINNs) to first-order two-scale periodic asymptotic homogenization of the property tensor in a generic elliptic equation. The lack of differentiability of the property tensor at sharp phase interfaces is circumvented by making use of a diffuse interface formulation. Periodic boundary conditions are incorporated strictly through the introduction of an input transfer layer (Fourier feature mapping), which takes the sine and cosine of the inner products of the position vectors and the reciprocal lattice vectors. This, together with the absence of Dirichlet boundary conditions, results in a lossless application of the boundary conditions. The only loss terms are then due to the differential equation itself, which removes the necessity of scaling the loss entries. To demonstrate the versatility of the formulation, which is based on the reciprocal lattice vectors, crystalline arrangements defined by Bravais lattices are used. We also show that considering integer multiples of the reciprocal basis in the Fourier mapping leads to improved convergence of high-frequency functions. We consider applications in one, two, and three dimensions. Periodic composites, composed of embeddings of monodisperse inclusions in the form of disks/spheres in the two-/three-dimensional matrix, are considered. For demonstration purposes, stochastic monodisperse disk arrangements are also considered.


Introduction
Machine learning (ML) has shown remarkable success in computer vision [1], medical diagnosis [2], natural language processing [3], and financial applications [4]. Following the pioneering work of Lagaris et al. [5], the universal approximation property of deep neural networks [6,7,8] has been used as an alternative method in scientific computing for solving partial differential equations (PDEs) in physics. In particular, physics-informed neural networks (PINNs) allow approximating the solution of PDEs using their strong form, without the need for a weak-form derivation and discretization as required, e.g., by the finite element method (FEM) [9]. Unlike standard discretization techniques, the solution obtained from an artificial neural network (ANN) is differentiable and has a closed analytic form that allows subsequent calculations [5]. That is, the function value and its derivatives can be computed exactly at any point of the problem domain with a trained network. In such a meshless approach, the solution is approximated by a deep neural network whose parameters are found by minimizing the loss functions associated with the PDE. Hence, the problem of finding the solution to the PDE is converted into an optimization problem.
The use of ANNs is nowadays extensive, from solving integro-differential equations to fractional and stochastic differential equations, see, e.g., [9,10,11]; and it is expected to become more prominent with the advent of accessible, scalable, and modular deep learning frameworks (such as TensorFlow [12] and PyTorch [13]), as well as specialized libraries, e.g., DeepXDE [9] and SciANN [14]. The increased efficiency of ANNs that results from reduced training time, mediated by the improvement of parallel architectures, is also exploited in applications in solid mechanics [15], amongst others. For an extensive survey on the state of the art of PINNs and additional libraries, see [16]. In contrast to classical ML approaches, where the deep learning model is trained against a dataset with the aim of minimizing the error with respect to that dataset, the loss function to be minimized in PINNs is composed of residual norms of the governing differential equation and the associated initial and boundary conditions (BCs) [5,17,18]. Therefore, a key and nontrivial task in solving PDEs with ANNs is the application of appropriate BCs, the methods for which can be divided into exact and approximate ones.
An extensive study on approximate penalty-based methods for the imposition of Dirichlet, Neumann, Robin, and periodic boundary conditions is given in [19]. These methods penalize the loss term (residual) due to boundary conditions with a coefficient. However, this coefficient, which is found through a trial-and-error process, influences the network's convergence and accuracy [20]. The exact imposition of BCs, on the other hand, does not suffer from these weaknesses. For problems with irregular boundaries, a radial basis function network can be used for the exact satisfaction of the boundary conditions [17].
An essential class of BCs is periodic boundary conditions (PBCs), which are used, for example, in applications involving periodic structures in computational engineering [21], such as physical problems related to crystals, or periodic unit cell problems that are solved within asymptotic periodic homogenization [22,23,24,25], amongst many others. When PBCs are applied in ANNs, an extensively used method is the penalty method, which yields an approximate BC imposition. In [19], periodicity is satisfied using penalty terms multiplied by prescribed parameters in the loss function. In this sense, the loss term represents the residual norms of the periodic conditions for the function and its first derivative. We reiterate that penalty parameters are problematic, something generally known for penalty-based methods. For example, in shallow neural networks with one hidden layer, which use cosine activation functions to create Fourier neural networks [26], a key problem is the determination of the penalty coefficients and the requirement of knowing the period of the estimated function for the activation and loss functions. Other works on Fourier networks yielding Fourier series approximation properties are [27,28,29]. On the other hand, imposing exact C∞, or C^k (k > 0), PBCs can be achieved by using an additional layer that transfers the input to either C∞ or C^k periodic functions [30]. An alternative approach is to apply a transfer function to the solution variable that requires periodic conditions [31], which leads to neural networks that satisfy the periodic boundary conditions of the microscopic problem. The work we present here is partly based on this approach.
In this work, we are interested in determining the effective (macroscopic) properties of composite material systems whose micro-(constituent-level) physics is given by a generic elliptic PDE. It is worth noting that there are analytically derived bounds for this type of problem, e.g., the (arithmetic) Voigt and (harmonic) Reuss averages, as well as effective-medium theories such as the Maxwell, self-consistent, and differential effective-medium approximations. However, these approximations use limited microstructural descriptors, such as the phase volume fraction and shape, and hence their validity is limited to dilute composite systems whose constituents possess weakly contrasting physical properties. Only full-field computational micro-macro transfer techniques provide the required accuracy for materials with random phase distributions, which create phase interactions, and with high property contrast; to this end, the finite element method has conventionally been used over the last years, see, e.g., [32,22,23].
We work on physics-informed deep learning with the exact imposition of PBCs for problems with arbitrary periodicity. To this end, we use the scientific ML and physics-informed library DeepXDE [9]. All the problems studied here include cuboidal domains with orthogonal phases, and we impose PBCs by adding a C∞ periodic layer to the neural network with cos and sin functions. Therefore, the training phase of the neural network does not involve loss terms corresponding to the boundary conditions, and there is no need to consider any penalty factor. This naturally leads to the deep neural network solution and its derivative being periodic. One advantage of our approach is that the emerging solution is valid outside the training region [26], which, to the best of our knowledge, has only been investigated partially for rectilinear domains in [30]. A detailed review of machine-learning-mediated multiscale modeling and simulation is given in [33].
We consider first-order asymptotic homogenization problems that arise in a broad range of physical phenomena and are described in terms of elliptic PDEs of divergence type. Our numerical approach has two important elements that are particularly relevant to this type of problem:
1. As we use reciprocal unit cell vectors, our method can be applied to any form of crystal in 2D or 3D, not necessarily rectilinear. This is crucial in homogenization problems, as primitive unit cells of microstructures can take various geometries.
2. Solutions of unit cell problems arising in homogenization are computed up to a constant.This is ideal for our PINNs environment as it allows us to use the loss term composed only of the PDE residual term without the need for any additional scaling.
The rest of the paper is structured as follows. Section 2 provides a summary of the theory behind first-order asymptotic homogenization and outlines the working principles of PINNs. In Section 3 we apply our approach to a selection of problems in 1D, 2D, and 3D. The results are compared for both local field distributions and the computed macroscopic property tensors. We show that using PINNs in the asymptotic homogenization of property tensors provides highly accurate results with ample time gains, especially once complex phase geometries with rapid fluctuations are considered.

Theory
2.1 Crystal Arrangements, Bravais Lattices, and Lattice-Periodic Functions

A Bravais lattice is a set of regularly spaced points, each of which has a position R given as an integer combination of three translation vectors a_i for i = 1, 2, 3, with R = m a_1 + n a_2 + o a_3, where m, n, and o are integers [34,35]. The translation vectors a_i are called direct lattice vectors or primitive translation vectors. The domain containing only one lattice point, with the smallest volume a_1 · (a_2 × a_3) that fills the entire space upon translation with integer combinations of a_i, is the primitive unit cell. Fig. 1 shows selected Bravais lattices in 1D, 2D, and 3D, which are used in the subsequent application problems.
In one dimension, there exists only one Bravais lattice, a regular grid of spacing a such that each point has a position R = ma, where m is an integer. In two dimensions, there are five Bravais lattices: oblique (O), rectangular (R), centered rectangular (R_c), hexagonal (H), and square (S). In three dimensions, there are in total 14 Bravais lattices. Exemplary lattices and corresponding unit cells in 1D, 2D, and 3D are depicted in Fig. 1. The choice of the primitive unit cell is not unique unless it is a Wigner-Seitz cell, created by Voronoi decomposition applied to the Bravais lattice. In Fig. 1, the unit cells B_1, S_1, R^c_1 and C^s_1 correspond to Wigner-Seitz cells, whereas B_2, S_2, R^c_2 and C^s_2 constitute other, arbitrary primitive unit cells. However, the unit cell R^c_3 for the centered rectangular lattice is not a primitive unit cell. Direct and reciprocal lattice vectors for the lattices given in Fig. 1 are listed in Table 1.
Table 1: Direct and reciprocal lattice vectors for each Bravais lattice given in Fig. 1. For the demonstration of the direct lattice vector a and the vector c, see Fig. 1. For the 1D lattice, a = |a| = L. Here the directions of the unit Cartesian basis e_i conform with c_i for i = 1, 2, 3.

dimension lattice type direct lattice vectors reciprocal lattice vectors
such that a_i · b_j = 2πδ_ij. Any lattice-periodic function ξ(x) with ξ(x) = ξ(x + R) can be written as the Fourier series, see, e.g., [34,35],

ξ(x) = Σ_G ξ̂_G exp(iG · x),   (2)

where x represents the position vector, the sum runs over all reciprocal lattice vectors G, and the ξ̂_G are generally complex Fourier coefficients.
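The duality relation a_i · b_j = 2πδ_ij determines the reciprocal basis mechanically from the direct lattice vectors: collecting the a_i as rows of a matrix A, the b_j are the rows of 2π(A^{-1})^T. A minimal numpy sketch, using a hexagonal lattice purely as an illustration:

```python
import numpy as np

def reciprocal_lattice(A):
    """Given direct lattice vectors as rows of A, return reciprocal
    lattice vectors as rows of B such that a_i . b_j = 2*pi*delta_ij."""
    return 2.0 * np.pi * np.linalg.inv(A).T

# 2D hexagonal lattice with spacing 1 (rows are a_1 and a_2)
A = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3.0) / 2.0]])
B = reciprocal_lattice(A)

# Duality check: A @ B.T must equal 2*pi times the identity
assert np.allclose(A @ B.T, 2.0 * np.pi * np.eye(2))
```

The same one-liner covers any Bravais lattice in 2D or 3D, which is what makes the reciprocal-vector formulation independent of the cell shape.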

First-order Asymptotic Homogenization
A generic form of elliptic PDEs is given as

∂/∂x_i ( a_ij(x) ∂u(x)/∂x_j ) = f(x)   (3)

for i, j = 1, 2, 3, where x denotes the three-dimensional (3D) position vector, u is the quantity of interest, a_ij is the property tensor, and f is a source term. Equation (3) arises in many different contexts [24]. For example, in electrostatics, u and a may refer to the electric potential and the dielectric constant, respectively; in magnetostatics, it describes the magnetic (scalar) potential, where a is the magnetic permeability; and in continuum mechanics, it describes the pressure field of slow viscous fluid flow in porous materials, where a models the permeability of the medium.

Figure 1: Selected Bravais lattices in 1D, 2D and 3D. The choice of the primitive unit cell is not unique. B_i, S_i, R^c_i and C^s_i correspond to unit cells for the one-dimensional, two-dimensional square, centered rectangular and three-dimensional simple cubic lattices, respectively. Unit cells with i = 1, 2 constitute primitive unit cells, and those with i = 1 are referred to as Wigner-Seitz cells. R^c_3 is not a primitive unit cell, since more than one lattice point occupies the cell domain. Instead, R^c_3 corresponds to a rectangular unit cell with direct lattice vectors c_i for i = 1, 2.
We consider lattice-periodic two-phase material systems with linear physical properties that can be represented in terms of Fourier series, i.e., a(x) = Σ_G â_G exp(iG · x), such that a(x) = a(x + R). For such systems, the material domain V corresponding to the primitive unit cell of the lattice encapsulates all material characteristics. Henceforth, let x denote the cell-scale (microscale) position that captures the fast variations of the field. It is related to the macroscale position M x, which captures the slow variations of the field, by x = M x/ε, where 0 < ε ≪ 1 is a small parameter representing the separation between scales.
Following the formal steps of first-order two-scale asymptotic homogenization, see, e.g., [24,32], the macroscopic property tensor a* is obtained through the averaging operation

a*_im = (1/|V|) ∫_V [ a_im(x) + a_ij(x) ∂χ_m(x)/∂x_j ] dV   (4)

for i, m = 1, 2, 3, where χ_m(x) is a lattice-periodic corrector function that solves the cell problem

∂/∂x_i ( a_ij(x) ∂χ_m(x)/∂x_j ) = −∂a_im(x)/∂x_i.   (5)

Equation (5) corresponds to three problems, indexed with m = 1, 2, 3, each of which has a source term computed from the divergence of the m-th column vector of the property tensor a. In the literature, two approaches are used to solve Eq. (5), see, e.g., [32], where the unit cell is required to satisfy Hill-Mandel boundary conditions, see, e.g., [36,37,38,39]. The first approach solves the problem directly, without any transformation. This poses some challenges, particularly when dealing with sharp phase interfaces, at which ∂a_ij/∂x_i → ∞. Although this can be resolved through a diffuse interface formulation, the presence of random material microstructures with complex phase geometries still creates difficulties when implementing the problem in, e.g., commercial finite element packages. An alternative approach is to reformulate the problem such that it can be solved completely with Dirichlet boundary conditions, see, e.g., [32,40,23]. To this end, the unit cell is subjected to unit gradients of the solution variable along the space directions. The total solution is then a superposition of a linearly varying field and a periodic fluctuation field, the latter corresponding to the corrector function. In this work, we use the former method and solve Eq. (5) by PINNs using an appropriate regularization for the source term.

Physics-Informed Neural Networks (PINNs)
Our theoretical approach for this part follows the works by Lu et al. [9] and Haghighat et al. [15].The artificial neural networks used here are referred to as feed-forward neural networks (FNNs).In FNNs a recursive application of linear and nonlinear transformations to the input proves to be sufficient in the solution of most PDEs [9].
We consider an L-layer neural network denoted by N^L(x; Θ): R^{dim_in} → R^{dim_out}, where dim_in and dim_out are the network input and output dimensions, respectively. The total number of hidden layers is L − 1, and N^L(x; Θ) represents the surrogate model that gives the approximate solution for the unknown u of the differential equation. Here, x denotes the input vector, and Θ := {W, b} represents the set of trainable network parameters, where W and b denote the weight matrices and bias vectors, respectively. The output vector is denoted by y. With the identifications z_0 ← x and y ← z_L, the feed-forward network propagates the input vector through its layers via the operation sequence

z_k = σ(W_k z_{k−1} + b_k),  k = 1, …, L,

where σ is the nonlinear activation function, e.g., the hyperbolic tangent (tanh) or the logistic sigmoid (1/[1 + exp(−x)]); in the output layer, σ is typically taken as the identity. The outputs, the weight matrix, and the bias vector of layer k are denoted by z_k ∈ R^{q_k}, W_k ∈ R^{q_k×q_{k−1}} and b_k ∈ R^{q_k}, respectively, where q_k is the number of neurons in layer k.
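The operation sequence above can be sketched in a few lines of numpy; the layer sizes below (one input, three hidden layers of width 50, one output) are illustrative and match the network sizes used later in the paper only by coincidence of the chosen numbers:

```python
import numpy as np

def forward(x, weights, biases):
    """Feed-forward pass z_k = sigma(W_k z_{k-1} + b_k); the output
    layer is linear (identity activation)."""
    z = x
    for W, b in zip(weights[:-1], biases[:-1]):
        z = np.tanh(W @ z + b)            # hidden layers with tanh
    W, b = weights[-1], biases[-1]
    return W @ z + b                       # linear output layer

rng = np.random.default_rng(0)
sizes = [1, 50, 50, 50, 1]                 # dim_in, hidden widths, dim_out
weights = [rng.standard_normal((q, p)) for p, q in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(q) for q in sizes[1:]]

u_hat = forward(np.array([0.3]), weights, biases)
```

Training then amounts to adjusting `weights` and `biases` to minimize the loss discussed below.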
Unlike conventional ANNs, PINNs aim at solving differential equations, which requires the computation of the corresponding derivatives. The inputs of the network are the problem coordinates, i.e., the independent variables representing the problem domain (space and time), whereas the output is the smooth field representing the solution of the PDE. In effect, the derivatives of the network output with respect to the network input are required, and the compositional structure of the network calls for the chain rule. In DeepXDE, this is achieved by the method referred to as automatic differentiation with backpropagation [9,41].
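To make the chain-rule computation concrete, the derivative of a one-hidden-layer tanh network with respect to its input can be written down by hand and checked against a finite difference. This is only an illustration of what automatic differentiation computes, not of how DeepXDE implements it:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((8, 1)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)

def u(x):
    """Surrogate u(x) = W2 tanh(W1 x + b1) + b2."""
    return (W2 @ np.tanh(W1 @ x + b1) + b2)[0]

def du_dx(x):
    # chain rule: du/dx = W2 diag(1 - tanh^2(W1 x + b1)) W1
    s = np.tanh(W1 @ x + b1)
    return (W2 @ ((1.0 - s**2)[:, None] * W1))[0, 0]

x = np.array([0.4])
h = 1e-6
fd = (u(x + h) - u(x - h)) / (2 * h)      # central finite difference
assert abs(du_dx(x) - fd) < 1e-6
```

Reverse-mode automatic differentiation generalizes exactly this bookkeeping to networks of arbitrary depth, which is why the derivative of the PINN surrogate is available at any point, not just at mesh nodes.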
A schematic plot of a PINN for solving general PDEs is given in Fig. 2 for input x and output û, which approximates the actual solution u. Since the current problem is quasi-static, we do not consider time variations of the fields. P and B are the differential and boundary operators to be satisfied. For given domain and boundary training sets, denoted by T_P and T_B, respectively, the neural network aims at minimizing the loss function L(θ; T) by varying the network parameters θ. This process is called training. The converged parameter set giving the optimum output is denoted by θ*. The loss function L(θ; T) is defined as the weighted sum of the residuals of the PDE, L_P(θ; T_P), and of the boundary conditions, L_B(θ; T_B),

L(θ; T) = ω_P L_P(θ; T_P) + ω_B L_B(θ; T_B),

where ω_P and ω_B are the loss weights associated with the PDE and boundary loss terms.

PINNs with Fourier Features
Fourier feature mappings were introduced in [42] as a way to approximate shift-invariant kernels and have since been used in many machine learning applications. The basic idea is to project input points into a higher- or lower-dimensional space using Fourier bases, thereby making the learning process of the neural network more efficient.
An example is the work of Ref. [43], where Fourier feature mappings are applied in computer vision and graphics applications. It is shown there that mapping the input coordinates with Fourier features enables ANNs to learn high-frequency functions that might not be accessible to standard networks. In particular, Fourier feature mappings can mitigate the so-called spectral bias. Spectral bias arises because of the rapid decay of the eigenvalues of the neural tangent kernel (NTK), the kernel that approximates the network's training behavior. Such rapid decay of the eigenvalues leads, in turn, to a very slow convergence to the high-frequency components of the target function. Hence, as is shown in [43], adding a Fourier feature mapping effectively transforms the NTK into a shift-invariant kernel while enabling it to tune its spectrum by choosing appropriate frequencies in the Fourier mapping.
In our work, we use a similar approach in terms of Fourier features, with a mapping that projects the input data onto a Fourier basis. As shown in the next section, one key element of our formulation is that the convergence of high-frequency functions is achieved by considering integer multiples of the reciprocal base vectors, which differs from other applications where random multiples are used.
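A generic Fourier feature mapping sends an input x through a frequency matrix B to the pair [cos(xBᵀ), sin(xBᵀ)]. The sketch below uses integer multiples of a single reciprocal basis value b = 2π/L (the choice advocated here) rather than random frequencies; the shape conventions are illustrative:

```python
import numpy as np

def fourier_features(x, B):
    """Map inputs x of shape (n, d) through frequency rows B of shape
    (k, d) to [cos(x B^T), sin(x B^T)], shape (n, 2k)."""
    proj = x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# Frequencies as integer multiples of the reciprocal basis b = 2*pi/L
L = 2.0
b = 2.0 * np.pi / L
B = np.array([[m * b] for m in range(1, 4)])   # harmonics m = 1, 2, 3

x = np.linspace(0.0, L, 5)[:, None]
feats = fourier_features(x, B)
# Exact periodicity: features at x and x + L coincide
assert np.allclose(feats, fourier_features(x + L, B))
```

Because cos(mb(x + L)) = cos(mbx + 2πm), the mapped features are periodic to machine precision, independent of any downstream weights.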

PINNs in Periodic Homogenization
In what follows, we show how periodic boundary conditions are implemented exactly, without any need for an explicit imposition between periodically located boundary collocation points, thereby eliminating the residual loss term related to periodic boundary conditions. To this end, we derive trigonometric basis representations of Eq. (2) for one, two, and three dimensions.

One-Dimensional Bravais Lattice
A reciprocal lattice, with lattice point positions G = mb, can be identified, where m is an integer and b = 2π/a such that ab = 2π. For a scalar periodic function ξ(x), one has ξ(x) = ξ(x + R) with R = na, where n is an integer. In view of Eq. (2), ξ(x) can be written as

ξ(x) = Σ_m ξ̂_m exp(imbx),

where the ξ̂_m are the complex Fourier coefficients. If ξ(x) is a real function, the above series can be expressed in terms of trigonometric functions,

ξ(x) = α_0 + Σ_{m≥1} [ α_m cos(mbx) + β_m sin(mbx) ],

where α_m and β_m are the corresponding real Fourier coefficients. Using multiple-angle trigonometric identities, every cos(mbx) and sin(mbx) can be rewritten as a polynomial in cos(bx) and sin(bx), so the sum can be recast in terms of these two basic harmonics alone. This shows that ξ(x) can be represented as

ξ(x) = F_1D(cos(bx), sin(bx)),

where F_1D is a nonlinear combination of cos(bx) and sin(bx). Therefore, instead of approximating the nonlinear smooth function ξ(x), one can approximate F_1D in an ANN by modifying the neural network architecture, adding the C∞ periodic basis functions as a transfer layer, as demonstrated in Fig. 3. Here, the cos and sin functions in the added layer can be seen as activation functions prior to which no weights are applied. This provides a priori and exact satisfaction of the C∞ periodicity of the solution ξ(x). We note that a similar argument was given in [44,45], where it was reported that a periodic function with a given period can be decomposed into a weighted summation of Fourier series basis functions.
It is also worth remarking that, without loss of generality, we can use integer multiples of the reciprocal basis in the representation above, which still satisfy exact PBCs and, in addition, accelerate the learning of high-frequency functions, as a consequence of Fourier feature mappings [43]. Therefore, our approach provides a well-defined route to identify the number of Fourier basis functions required for improved convergence (see Section 3 for a detailed analysis of different applications). This is in contrast to the method used in [30], where an arbitrary number of trigonometric functions is considered in the representation, which, as we have shown above, is unnecessary. Indeed, the method proposed in [30] can always be reduced to our approach; to test the validity of our method, some problems discussed in [30] are solved in Appendix A.
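The construction can be demonstrated end to end with a toy network: a periodic transfer layer feeding a small tanh network. The weights below are random stand-ins for trained parameters; the point is that the output is exactly a-periodic for any weights, so no boundary loss term is ever needed:

```python
import numpy as np

def periodic_layer(x, b, n_harmonics):
    """C-infinity periodic transfer layer: [cos(m b x), sin(m b x)]
    for m = 1..n_harmonics, applied before any trainable weights."""
    m = np.arange(1, n_harmonics + 1)
    return np.concatenate([np.cos(m * b * x), np.sin(m * b * x)])

def network(x, b, n_harmonics, weights, biases):
    z = periodic_layer(x, b, n_harmonics)
    for W, bb in zip(weights[:-1], biases[:-1]):
        z = np.tanh(W @ z + bb)
    return weights[-1] @ z + biases[-1]

rng = np.random.default_rng(2)
a, n = 1.5, 4                       # lattice spacing, number of harmonics
b = 2.0 * np.pi / a
sizes = [2 * n, 20, 1]
weights = [rng.standard_normal((q, p)) for p, q in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(q) for q in sizes[1:]]

x = 0.37
u = network(x, b, n, weights, biases)
# Periodic by construction, for arbitrary (untrained) weights:
assert np.allclose(u, network(x + a, b, n, weights, biases))
assert np.allclose(u, network(x - 3 * a, b, n, weights, biases))
```

Increasing `n_harmonics` corresponds to the integer-multiple Fourier features discussed above and, per [43], speeds up the learning of high-frequency solution components.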

Two-and Three-Dimensional Bravais Lattices
For a two-dimensional Bravais lattice, and considering that ξ(x) is a real periodic function, which requires ξ̂_G = ξ̂*_{−G}, Eq. (2) can be rewritten as a double trigonometric Fourier series in the phases b_1 · x and b_2 · x, with real coefficients α_{m,n}, β_{m,n}, γ_{m,n}, and δ_{m,n}. Using multiple-angle trigonometric identities, the sum can be expressed in the form

ξ(x) = F_2D( cos(b_1 · x), sin(b_1 · x), cos(b_2 · x), sin(b_2 · x) ),

where F_2D is a nonlinear function. As before, the problem of approximating the nonlinear smooth function ξ(x) is replaced by approximating F_2D in an ANN, modifying the neural network architecture by adding the C∞ periodic basis functions as a transfer layer, as demonstrated in Fig. 4. It is important to remark that the pairs of trigonometric functions entering F_2D provide a linearly independent irreducible basis for the solution; hence choosing these pairs of functions is sufficient to implement PBCs with neural networks in a periodic lattice of any shape. This is partly mentioned in [30] for the case of square lattices. We note, however, that no justification is given there for why trigonometric functions can be used to implement PBCs, as we provide here.
Extending from 2D to 3D is straightforward, and it suffices to say that a real periodic function ξ(x) is represented by the corresponding nonlinear function F_3D of the form

ξ(x) = F_3D( cos(b_1 · x), sin(b_1 · x), cos(b_2 · x), sin(b_2 · x), cos(b_3 · x), sin(b_3 · x) ).
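That these features enforce periodicity for a lattice of any shape, not only a rectilinear one, can be checked directly: under any translation R = m a_1 + n a_2, the phases b_i · x shift by integer multiples of 2π. A short sketch (hexagonal lattice chosen as a non-rectilinear example):

```python
import numpy as np

def periodic_features_2d(x, B):
    """[cos(b_i . x), sin(b_i . x)] for the reciprocal basis rows b_i
    of B; exactly periodic under R = m a_1 + n a_2."""
    proj = B @ x
    return np.concatenate([np.cos(proj), np.sin(proj)])

# Hexagonal lattice: direct vectors as rows of A, reciprocal B = 2*pi*inv(A)^T
A = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3.0) / 2.0]])
B = 2.0 * np.pi * np.linalg.inv(A).T

x = np.array([0.21, -0.73])
R = 2 * A[0] - 5 * A[1]                 # an arbitrary lattice translation
f0 = periodic_features_2d(x, B)
assert np.allclose(f0, periodic_features_2d(x + R, B))
```

Since b_i · a_j = 2πδ_ij, the projection B(x + R) differs from Bx only by multiples of 2π, so the cos/sin features, and any network built on them, repeat exactly on the lattice.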

Loss Contributions in Periodic Homogenization with PINNs
In this work, we use PINNs to approximate the solution of Eq. (5), whose derivative is then used to compute the integral given in Eq. (4). We note that a solution up to a constant is sufficient to this end, eliminating the need to apply Dirichlet boundary conditions. With the a priori and exact satisfaction of PBCs by adding a C∞ periodic transfer layer, as demonstrated in the previous sections, the residual loss term corresponding to the boundary conditions is zero, i.e., L_B(θ; T_B) = 0.
With the elimination of the boundary terms, the loss reduces to the PDE residual alone, min_θ L(θ; T) = ω_P L_P(θ; T_P). In the conventional application of PINNs, L_P is the mean squared norm of the PDE residuals over the collocation points. Alternatively, one may individually scale the loss contribution of the PDE at each material point, assuming isotropic constituent properties with a(x_k) = a(x_k)1; here x_k denotes the collocation point at which the PDE residual is evaluated.

Applications
In this section, we solve illustrative examples of regular and stochastic periodic material systems in 1D, 2D, and 3D. In the studied problems, the crystal structure is formed by attaching the basis containing a circular disk to every lattice point of the space lattice given in Fig. 1. We consider a property contrast of a_1/a_2 = 100, where a_1 corresponds to the inclusion property a_i. This contrast is relatively large compared to the applications reported in the literature. In Ref. [31], for example, in which PINN-based homogenization in electrostatics is investigated, a phase contrast of 10 was considered. The property of the matrix material is taken as a_m = a_2 = 1. In [46], a phase contrast below 10 was used for the elasticity moduli of the constituents when seeking a physics-informed neural network solution of a continuum micromechanical problem.

Numerical Details
Unless otherwise stated, the neural network is chosen with a density (layer width) of 50 and a depth of 3, as these values were observed to suit the models to be trained well. The learning rate is varied between 0.001 and 0.010. In the optimization, the first 5000 epochs use the Adam optimizer, whereas the remaining 25000 use L-BFGS. Individually scaled and unscaled loss terms, see Eqs. (17) and (16), respectively, are considered. Networks with and without the Fourier feature layer are also tested; without the Fourier feature layer, the application of periodic boundary conditions is only approximate. No additional scaling is applied to the corresponding loss terms.
In all of the problems of this study, the collocation points are generated using the Hammersley low-discrepancy sequence, which allows the training points to be distributed more evenly over the problem domain, leading to more accurate solutions. The number of collocation points should allow sampling from the smoothing region of the property field distribution. This region is the only one providing nonzero source terms, hence a nontrivial solution.
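For reference, the Hammersley set is simple to construct: the first coordinate of point i is i/n and the remaining coordinates are radical inverses in successive prime bases. The minimal implementation below is a sketch of the construction (in practice the sampler shipped with the ML library is used):

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n, dim, primes=(2, 3, 5, 7)):
    """First n points of the Hammersley set in [0, 1)^dim: coordinate 0
    is i/n, the rest are radical inverses in successive prime bases."""
    pts = np.empty((n, dim))
    for i in range(n):
        pts[i, 0] = i / n
        for d in range(1, dim):
            pts[i, d] = radical_inverse(i, primes[d - 1])
    return pts

pts = hammersley(512, 2)   # e.g., a training batch of 512 points in 2D
```

Unlike pseudo-random sampling, consecutive points fill gaps left by earlier ones, which is why fewer collocation points suffice to resolve the thin smoothing region around the phase interface.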
The model training with neural networks is accomplished on NVIDIA Tesla P100 and V100 GPUs. The training times vary depending on the batch size and on the depth and density of the network.

A One-dimensional Problem
In 1D, Eq. (4) becomes

a* = (1/L) ∫_V a(x) [ 1 + dχ(x)/dx ] dx,

where a(x) is a scalar property field and x is the spatial coordinate. The one-dimensional composite unit cell domain amounts to V = {x : −L/2 ≤ x ≤ L/2}, and the cell problem of Eq. (5) reduces to

d/dx [ a(x) ( dχ(x)/dx + 1 ) ] = 0.

Integrating the above equation twice, using the periodicity of χ(x) with χ(−L/2) = χ(L/2) to fix the integration constant, and rearranging, we obtain

a* = L [ ∫_V dx/a(x) ]^{−1}.
Letting its center correspond to the origin, we select V_1 := {x : −r < x < r} as the domain of phase 1. The remaining domain, or phase 2, represents the matrix material, and it is defined as V_2 := V \ V_1. In order to have a continuous derivative at the interfaces located at x = ±r, we regularize the a(x) function with a smooth interface as follows,

a(x) = a_2 + [(a_1 − a_2)/2] [ tanh((x + r)/ξ) − tanh((x − r)/ξ) ],

where a_1 and a_2 are the properties of phase 1 and phase 2, respectively, and ξ is the numerical regularization (or smoothing) parameter. Putting this expression into Eq. (20) and evaluating the integral, we obtain an expression in terms of f(x) = tanh((x − r)/ξ). Note that as ξ → 0, f(x) → 1 for x > r and f(x) → −1 for x < r. Therefore, as ξ → 0 we obtain

a* → [ φ_i/a_1 + (1 − φ_i)/a_2 ]^{−1},  φ_i = 2r/L,

which corresponds to the conventional Reuss (harmonic) average for 1D composites in the sharp-interface limit.
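The sharp-interface limit can be checked numerically: evaluating the harmonic average of the regularized profile and comparing it with the Reuss value. The tanh profile below is an assumed form consistent with f(x) = tanh((x − r)/ξ) above, and the parameter values mirror the 1D example of this paper (a_1/a_2 = 100, L = 200ξ):

```python
import numpy as np

# Assumed diffuse-interface regularization, consistent with the text's
# f(x) = tanh((x - r)/xi): a -> a1 inside |x| < r, a2 outside
def a_profile(x, a1, a2, r, xi):
    return a2 + 0.5 * (a1 - a2) * (np.tanh((x + r) / xi)
                                   - np.tanh((x - r) / xi))

a1, a2 = 100.0, 1.0          # phase contrast a1/a2 = 100
xi = 1.0
L = 200.0 * xi
phi = 0.3                    # inclusion volume fraction, phi = 2r/L
r = 0.5 * phi * L

# 1D effective property is the harmonic average a* = L / int dx/a(x)
x = np.linspace(-L / 2, L / 2, 200001)
g = 1.0 / a_profile(x, a1, a2, r, xi)
integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x))  # trapezoid rule
a_star = L / integral

reuss = 1.0 / (phi / a1 + (1.0 - phi) / a2)  # sharp-interface Reuss limit
```

For these parameters the diffuse-interface value lies a few percent above the Reuss limit; this is the same interface-smoothing discrepancy noted later for the finite element and PINN results.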

PINN Implementation and Numerical Results
As an illustrative example, we take L = 200ξ as the size of the one-dimensional composite domain, with ξ = 1, and consider inclusion volume fractions of φ_i = {0.10, 0.20, …, 0.90}.
We can see from Eq. (21) that the maximum absolute slope |a′(x)| occurs at the inflection point x = r, with |a′(r)| = |a_2 − a_1|/(2ξ). Therefore, reducing ξ or increasing the phase contrast, which yields a larger property gap |a_2 − a_1|, has a similar influence on the property gradients and should thus be considered when selecting the number of collocation points. Here, the training batch size is selected as 512. In more general cases with lower phase contrast, however, a smaller number of collocation points also works.
The PINNs simulations will be compared to high-resolution finite element simulations done with ABAQUS, which will be used to generate the ground truth by discretizing the one-dimensional domain with 5000 elements.
The solution field and its spatial derivative are shown in Figs. 5 and 6, respectively. The spatial derivative of the predicted solution is determined at selected collocation points using automatic differentiation in TensorFlow. Results from the PINN simulations are compared to finite element solutions (shown as discrete circular markers). In both figures, the left column corresponds to solutions with individual loss scaling, Eq. (17), and the right column to solutions without it. Our numerical experiments show that the effect of this modification is marginal for lower phase contrasts. However, we observe that the scaling increases accuracy for low volume fractions with a high-contrast phase, as can be noticed by comparing the top panels of the two columns. Our results show that meaningful solutions over a wide range of phase volume fractions are only obtained when Fourier features are included.
The training histories of the normalized loss function are given in Fig. 7. The approximate periodic boundary condition treatment does not provide an accurate prediction, regardless of the learning rate. Thus, without Fourier features, the solution struggles to converge. For the loss computations with Fourier features, for which the imposition of the periodic boundary conditions is exact, large fluctuations are observed in the stochastic gradient descent phase.
In the L-BFGS optimization phase, on the other hand, the loss reduces monotonically. Figure 8 demonstrates that although a lower learning rate generally shows better performance in terms of convergence, a higher learning rate reaches convergence faster.
The effective property predictions at the end of 30000 epochs are given as a function of the inclusion volume fraction in Fig. 9. In accordance with the solution field distributions and the loss diagrams, neural networks with a Fourier feature layer provide accurate predictions, showing excellent agreement with the finite element solutions. The integrated effective properties can be equivalently computed using Eq. (22). As anticipated, the computed effective properties are close to the Reuss (harmonic) average. The remaining discrepancy is due to the diffuse interface considered in the finite element and neural network solutions.

Two-dimensional Problems
We now consider the problem of finding the solution to Eq. (5) in a 2D system. At the microscale, we assume isotropy for each constituent, such that a_im(x) = a(x)δ_im. In effect, the source term satisfies ∂a_im(x)/∂x_i = ∂a(x)/∂x_m. Finding the solution χ_m(x) for the source term associated with m allows computation of the m-th column vector of the effective property tensor a*, see Eq. (4).
For 2D, the two load cases required are those with source terms defined in terms of property field gradients, f_1 = ∂a(x)/∂x_1 and f_2 = ∂a(x)/∂x_2. In view of Eq. (4) with the chosen Cartesian basis, the computation of the components a*_11 and a*_21 requires the solution χ_1(x) for the source term f_1. The solution χ_2(x) obtained for the source term f_2, on the other hand, allows computation of a*_22 and a*_12 = a*_21, respectively. The matrix and inclusion properties are selected as a_m = 1 and a_i = 2, respectively. We consider the regularized property distribution

a(x) = a_m + [(a_i − a_m)/2] [ 1 − tanh( (|x − x_i| − r)/ξ ) ].

Here, x_i denotes the position of the inclusion and r its radius. As before, ξ is the regularization parameter that controls the sharpness of the phase interface.
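The regularized field and its gradient, which supplies the cell-problem source term, can be sketched as follows. The radial tanh profile is an assumed form (the natural 2D analogue of the 1D regularization), and the inclusion position, radius, and ξ below are illustrative values, not those of the paper's unit cells:

```python
import numpy as np

# Assumed radial analogue of the 1D tanh regularization: a -> a_i inside
# the disk of radius r centered at x_c, a_m in the matrix
def a_field(x, x_c, r, xi, a_i, a_m):
    d = np.linalg.norm(x - x_c)
    return a_m + 0.5 * (a_i - a_m) * (1.0 - np.tanh((d - r) / xi))

def source_f(x, x_c, r, xi, a_i, a_m):
    """f_m = da/dx_m, the analytic gradient used as the cell-problem
    source; it is concentrated in the diffuse interface zone."""
    dvec = x - x_c
    d = np.linalg.norm(dvec)
    sech2 = 1.0 / np.cosh((d - r) / xi) ** 2
    return -0.5 * (a_i - a_m) / xi * sech2 * dvec / d

a_i, a_m = 2.0, 1.0
x_c = np.array([0.5, 0.5])
r, xi = 0.25, 0.02

# a_i inside the inclusion, a_m far away
assert abs(a_field(np.array([0.5, 0.45]), x_c, r, xi, a_i, a_m) - a_i) < 1e-6
assert abs(a_field(np.array([0.0, 0.0]), x_c, r, xi, a_i, a_m) - a_m) < 1e-6
```

The sech² factor shows explicitly why the collocation points must resolve the band |x − x_i| ≈ r: outside it the source vanishes and the cell problem is trivially satisfied.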
In the following, through the solution of the cell problems with the source terms f 1 and f 2 for the V-periodic corrector functions χ 1 (x) and χ 2 (x), respectively, we compute a for the selected microstructures. The gradients of χ 1 (x) and χ 2 (x), required in Eqs. (24) and (25), are obtained from the differentiable ANN solution.
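Given samples of a(x) and the corrector gradients over the unit cell, the effective tensor follows from a cell average. The sketch below assumes the standard first-order form a mj = ⟨a(x) (δ mj + ∂χ j /∂x m )⟩ with a midpoint rule; the sign convention may differ from Eq. (4):

```python
import numpy as np

def effective_property(a_vals, grad_chi1, grad_chi2):
    """Midpoint-rule estimate of the homogenized 2x2 tensor, assuming the
    standard cell average a_mj = <a(x) (delta_mj + d chi_j / d x_m)>.
    a_vals: (N,) property samples; grad_chi1, grad_chi2: (N, 2) corrector
    gradients sampled at the same (uniformly distributed) cell points."""
    a_eff = np.empty((2, 2))
    for j, g in enumerate((grad_chi1, grad_chi2)):
        for m in range(2):
            delta = 1.0 if m == j else 0.0
            a_eff[m, j] = np.mean(a_vals * (delta + g[:, m]))
    return a_eff
```

For a homogeneous cell (zero corrector gradients) this reduces to a times the identity, a useful sanity check.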

Two-dimensional Square Lattice
We start by considering a single primitive unit cell with a single central circular inclusion. Inclusion volume fractions of φ i = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6} are considered, see Table 2. With Neumann's principle and the second-order tensor formalism for the property tensor, such microstructures possess isotropic macroscopic properties [47,48], leading to a = a 11 1 with a 11 = a 22 and a 12 = a 21 = 0. Thus, it suffices to find the solution for one of the source terms only. Here, we use f 1 = ∂a(x)/∂x 1 . Individually scaled partial differential equations are considered in the loss term. PBCs are exactly satisfied through low- and high-frequency Fourier features with a single and the first 10 integer multiples of the reciprocal base vector, respectively. 25600 collocation points are used. For the adam optimizer, a learning rate of 0.010 is used.
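The periodic input-transfer layer can be sketched as follows: for each integer multiple k of the reciprocal base vectors, the sine and cosine of the inner products k b · x are stacked, so any downstream network output is exactly lattice-periodic. A sketch with names of our choosing:

```python
import numpy as np

def fourier_features(x, b, n_modes=10):
    """Periodic input-transfer layer: sin/cos of <x, k*b_i> for integer
    multiples k = 1..n_modes of the reciprocal base vectors (rows of b).
    x: (N, d) positions; b: (d, d) reciprocal basis; returns (N, 2*d*n_modes)."""
    phases = np.concatenate(
        [k * (x @ b.T) for k in range(1, n_modes + 1)], axis=-1
    )
    return np.concatenate([np.sin(phases), np.cos(phases)], axis=-1)
```

Since b i · c j = 2π δ ij, shifting x by any direct lattice vector shifts every phase by a multiple of 2π and leaves the features, and hence the network output, unchanged; n_modes = 1 and n_modes = 10 correspond to the low- and high-frequency variants.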
Fig. 10 depicts the contour plots for the property field a(x), the source term ∂a(x)/∂x 1 , and the ANN solution fields χ(x) and |∇χ| at the end of 30000 epochs for high-frequency Fourier features. The results are compared to high-resolution finite element simulations conducted with ABAQUS; see Fig. 11. The finite element model includes 260000 linear quadrilateral elements and 260401 nodes. A local element refinement is applied at the diffuse interface zone. As in the 1D case, the finite element solutions are assumed to correspond to the ground truth. It is observed that high-frequency Fourier features with the first 10 integer multiples of the reciprocal base vector lead to a better agreement with the ground truth. The training histories for the normalized loss function given in Fig. 12 support this observation.
Fig. 13 shows the ANN predictions for the effective property a 11 and their comparison to the ABAQUS solution as well as the analytical truncated series expansion solution of Godin [49] cited in [50]; see C.1. The ANN predictions agree excellently with the finite element simulation results for both low- and high-frequency Fourier feature solutions. The reason for the discrepancy with the analytical solution is, as before, the sharp interface assumption considered in the latter.
We continue our studies on the 2D square lattice with a periodic yet stochastic disk distribution resulting in an inclusion fraction slightly above φ i = 0.30. The phase distribution and the source terms are depicted in Fig. 14. Since a random structure with a relatively small volume element is considered, an anisotropic property is anticipated. Thus, solutions to both source terms are considered, yielding the complete set of property tensor components. 65536 collocation points with 50×3 and 100×6 ANN architectures are used in training. In both cases, high-frequency Fourier features with the first 10 integer multiples of the reciprocal base vector are considered.
Considering the stochastic nature of the ANN training process and the absence of ground truth, we reverted to repeated solutions. For each architecture, the ANN is trained three times. The computed effective property tensor components with a chosen basis that conforms to the square dimensions are given in Table B1 in Appendix B. As also demonstrated by the scaled training loss history plots given in Fig. 16, the 100×6 ANN architecture gives results with higher accuracy and precision, with the following representations in terms of mean and standard deviation to four decimal places for each component: a 11 = 2.2969 ± 0.0018, a 12 = −0.0261 ± 0.0002, a 21 = −0.0296 ± 0.0010, a 22 = 2.4012 ± 0.0022.
Corresponding solution fields for the applied source terms are given in Fig. 15. In light of our computations, the following representation to two decimal places constitutes an accurate representation of the symmetric effective property tensor in matrix form, revealing a small anisotropy: a = [2.30 −0.03; −0.03 2.40].

Figure 9: ANN solution of the effective properties a at the end of 30000 epochs and its comparison with the Voigt (arithmetic average) and Reuss (harmonic average) analytical bounds, and the ABAQUS solution given in circle markers, which is in agreement with the integral given in Eq. (22). The left column belongs to the results with scaling and the right without scaling. Dashed results have a learning rate of 0.001 and continuous lines of 0.01. Black curves give results for no Fourier feature, whereas dark blue and red for low- and high-frequency Fourier features with a single and the first 10 integer multiples of the reciprocal base vector, respectively.

Two-dimensional Centered Rectangular Lattice
In this part, our applications focus on the centered rectangular lattice with L 1 /L 2 = 3, where L 1 = |c 1 | and L 2 = |c 2 |. We consider an inclusion volume fraction of φ i = 0.3. The area of the primitive unit cell is arranged to be equal to that of the square unit cell considered in the previous example, with L 1 L 2 = L². This provides identical radii for the circular inclusions. For R c 3 , which has double the area, the number of collocation points is doubled. Learning rates of 0.001 and 0.010 are considered with 50×3 and 100×6 ANN architectures. Like before, considering the stochastic nature of the ANN training process and the absence of ground truth, we reverted to repeated solutions. For each architecture, the ANN is trained three times. For R c 1 and R c 2 , higher accuracy was recorded with a learning rate of 0.001 and the 100×6 ANN architecture possessing low-frequency Fourier features. For R c 3 , on the other hand, a learning rate of 0.010 with the 50×3 ANN architecture possessing high-frequency Fourier features with the first 10 integer multiples of the reciprocal base vector performed best as far as the final training loss is concerned, see Fig. 18. Fig. 17 depicts the contour plots for the property field a(x), the source terms ∂a(x)/∂x 1 and ∂a(x)/∂x 2 , and the corresponding ANN solution fields χ(x) and |∇χ| at the end of 30000 epochs for the best-performing hyperparameter set for each case. The utilization of Fourier features in conjunction with reciprocal unit cells allows for the satisfaction of periodicity in non-rectangular problem domains. As observed from the contour plots, the choice of unit cell does not change the solution.
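For a non-rectangular cell such as the centered rectangular lattice, the reciprocal base vectors entering the Fourier features follow from the duality condition b i · c j = 2π δ ij. A minimal sketch (function name is ours):

```python
import numpy as np

def reciprocal_basis_2d(c1, c2):
    """Reciprocal base vectors of a 2D Bravais lattice with direct base
    vectors c1, c2; the rows of 2*pi*C^-1 (C having c1, c2 as columns)
    satisfy b_i . c_j = 2*pi*delta_ij by construction."""
    Cinv = np.linalg.inv(np.column_stack([c1, c2]))
    return 2.0 * np.pi * Cinv[0], 2.0 * np.pi * Cinv[1]
```

Feeding these b i into the Fourier feature layer makes the network output periodic with respect to the oblique cell, without any boundary loss term.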
The computed effective property tensor components are given in Table B2 in Appendix B for R c 1 , R c 2 and R c 3 . As demonstrated in Fig. 18, the training losses for all of the unit cells are sufficiently small at the end of 30000 epochs. Nevertheless, the learning trends differ due to the difference in the selected ANN hyperparameters. An orthotropic effective property matrix, when using a chosen basis that conforms to the rectangular dimensions, is expected, with non-zero diagonal terms and zero off-diagonal terms, resulting in a 11 ≠ a 22 and a 12 = a 21 = 0. For R c 1 , a conversion of the tabulated values to mean and standard deviation to four decimal places for each component results in a 11 = 1.7402 ± 0.0042, a 12 = 0.0018 ± 0.0021, a 21 = 0.0042 ± 0.0016, a 22 = 2.2820 ± 0.0023. For R c 2 , this results in a 11 = 1.7423 ± 0.0025, a 12 = −0.0004 ± 0.0012, a 21 = 0.0010 ± 0.0013, a 22 = 2.2846 ± 0.0026. Finally, for R c 3 , this gives a 11 = 1.7378 ± 0.0008, a 12 = 0.0000 ± 0.0000, a 21 = 0.0004 ± 0.0014, a 22 = 2.2946 ± 0.0023. These results show a good agreement between the solutions employing different unit cells. In light of our computations, the following representation to two decimal places emerges as a good approximation for the symmetric effective property tensor in matrix form, revealing anisotropy: a = [1.74 0.00; 0.00 2.29].

Three-dimensional Simple Cubic Lattice
The ANN training batch size corresponds to 512000. A learning rate of 0.010 is used with 50×3 ANN architectures with both low- and high-frequency Fourier features, where the ANN is trained three times for each hyperparameter combination.
For the symmetry group possessed by this material microstructure, a spherical second-order effective property tensor is anticipated. Using a chosen basis that conforms to the cubic dimensions, this gives non-zero and equal diagonal terms and zero off-diagonal terms, resulting in a 11 = a 22 = a 33 and a ij = 0 for i ≠ j with i, j = 1, 2, 3. Thus, in our computations, we consider only the x-gradient of the property field in the source term. The results show that ANNs with high-frequency Fourier features incorporating the first 10 integer multiples of the reciprocal base vector performed best; see Fig. 20. The contour plots for the PINN solution χ(x) and the flux norm |∇χ(x)| fields of the cell problem for the periodic unit cells C 1 and C 2 are given for this hyperparameter set in Fig. 19. As observed from the contour plots, the choice of unit cell does not change the solution.
The computed effective property tensor components are given in Table B3 in Appendix B for C 1 and C 2 . For the more accurate solution with high-frequency Fourier features incorporating the first 10 integer multiples of the reciprocal base vector, the mean and standard deviation of the tabulated effective property matrix components result in a 11 = 2.4550 ± 0.0016, a 12 = 0.0001 ± 0.0001, a 13 = −0.0001 ± 0.0002 for C 1 , and in a 11 = 2.4553 ± 0.0020, a 12 = 0.0001 ± 0.0000, a 13 = 0.0000 ± 0.0002 for C 2 . These results show a good agreement between the solutions employing different unit cells. Considering a 0.30 inclusion fraction but a sharp interface condition, the truncated series expansion solution of Lam [51] cited in [50], given in C.2, gives a = 2.2742. Similar to our observations for the 1D and 2D square lattices with regular circular inclusion distributions for which analytical solutions exist, the ANN solution for the main diagonal term a 11 results in a slightly larger prediction. This discrepancy is due to the utilized diffuse interface assumption. The following representation in matrix form to two decimal places emerges as a good approximation for the effective property tensor:

a C = [2.46 0.00 0.00; 0.00 2.46 0.00; 0.00 0.00 2.46]. (29)

Conclusion
We applied physics-informed neural networks (PINNs) to determine effective (macroscopic) properties of heterogeneous media using first-order two-scale periodic asymptotic homogenization. A relatively large phase contrast is targeted. To this end, a generic divergence-type elliptic equation is considered, which applies to a broad range of physics problems, including electrostatics, magnetostatics, steady-state (time-independent) heat and electrical conduction, and anti-plane elasticity. The unit cell problem for the elliptic equation can be solved only up to a constant. Since the standard integral solution for the property tensors relies only on the gradient of the solution field, the requirement for uniqueness of the solution is dismissed. This, together with the absence of Dirichlet boundary conditions, resulted in a lossless boundary condition application. ANN architectures with an input-transfer layer (Fourier feature mapping) materializing reciprocal lattice vectors are considered. This allowed an exact imposition of periodic boundary conditions for periodic domains with arbitrary and nonrectangular shapes pertaining to crystalline arrangements defined with different Bravais lattices, as well as improved convergence of high-frequency functions during training once integer multiples of the reciprocal basis are considered in the Fourier mapping. As illustrative examples, we considered applications in one, two, and three dimensions with various regular and stochastic composites. We summarized the influence of the utilized ANN architecture and hyperparameters on the prediction quality.

A Implementing Periodic Boundary Conditions in Test Cases
In this section, we provide further evaluations of the effectiveness of Fourier features in incorporating periodic boundary conditions. Some of these problems are discussed in Ref. [30]. In all the problems, training is performed using 5000 adam optimization steps, followed by 25000 L-BFGS optimization steps.
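The two-phase schedule (a first-order adam phase followed by L-BFGS) can be sketched as follows. This is an illustrative reimplementation with full-batch Adam and SciPy's L-BFGS-B, not the paper's code; names and step counts are ours:

```python
import numpy as np
from scipy.optimize import minimize

def two_phase_fit(loss_grad, w0, adam_steps=200, lr=1e-2):
    """Illustrative two-phase schedule: full-batch Adam steps followed by
    L-BFGS. loss_grad(w) must return the pair (loss, gradient)."""
    w = w0.astype(float).copy()
    m, v = np.zeros_like(w), np.zeros_like(w)
    b1, b2, eps = 0.9, 0.999, 1e-8
    for t in range(1, adam_steps + 1):
        _, g = loss_grad(w)
        m = b1 * m + (1.0 - b1) * g
        v = b2 * v + (1.0 - b2) * g * g
        # bias-corrected Adam update
        w -= lr * (m / (1.0 - b1 ** t)) / (np.sqrt(v / (1.0 - b2 ** t)) + eps)
    res = minimize(loss_grad, w, jac=True, method="L-BFGS-B")
    return res.x, res.fun
```

The stochastic phase moves the parameters into a good basin; the quasi-Newton phase then reduces the loss monotonically, matching the trend seen in the training histories.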
In the solution to this problem, the ANN density and depth are taken as 50 and 3, respectively. 512 points are sampled from the domain. Learning rates of 0.001 and 0.010 are considered. The target periodic function is given in Eq. (30), and the resulting ANN approximations are shown in Fig. 22. The first 5000 epochs (marked with a vertical dashed line) belong to the adam optimizer, whereas the rest to L-BFGS. In the numerical trials, learning rates of 0.001 (dashed lines) and 0.010 (continuous lines) are used. Black curves give results for no Fourier feature, whereas dark blue and red for low- and high-frequency Fourier features with a single and the first 10 integer multiples of the reciprocal base vector, respectively.

A.2 Two-dimensional Helmholtz Problem with Periodic Boundary Conditions
In this problem, we solve a Helmholtz equation, Eq. (31), which is x- and y-periodic over the square domain Ω = {(x, y) : 0 ≤ x ≤ 4, 0 ≤ y ≤ 4}. The parameter λ is selected as λ = 10. Following Ref. [30], the source term f(x, y) is selected to give the desired solution u(x, y) of Eq. (31).
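This manufactured-solution setup can be sketched as follows. The target u(x, y) below is a hypothetical 4-periodic choice (the actual function of Ref. [30] is not reproduced here), and the sign convention Δu − λu = f is an assumption:

```python
import numpy as np

L_DOM, LAM = 4.0, 10.0  # domain period and Helmholtz parameter lambda

def u_exact(x, y):
    """Hypothetical 4-periodic target solution (for illustration only)."""
    k = 2.0 * np.pi / L_DOM
    return np.sin(k * x) * np.cos(k * y)

def f_source(x, y):
    """Manufactured source, assuming the convention  lap(u) - lam*u = f;
    for the choice above, lap(u) = -2*k^2*u, so f = (-2*k^2 - lam)*u."""
    k = 2.0 * np.pi / L_DOM
    return (-2.0 * k ** 2 - LAM) * u_exact(x, y)
```

Training then minimizes the PDE residual against f, and the known u serves to measure the approximation error.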

B Further Results
In this section, additional predictions made using Artificial Neural Networks (ANN) are presented in Tables B1, B2 and B3.

C Analytical Solutions
C.1 Two-dimensional Square Circular Disk Distributions
A square and periodic arrangement of disks possesses isotropic effective properties once the considered property tensors are of second order [48]. Thus, the effective property tensor a can be represented by a scalar a where a = a 1. For a matrix with circular inclusions in a square and periodic arrangement, with the respective properties denoted by a m and a i , the truncated series expansion solution of Godin [49] cited in [50] gives the analytical expression for a.

C.2 Three-dimensional Cubic Sphere Distributions
Coming to the sphere distributions in 3D, the truncated series expansion solution of Lam [51] cited in [50] gives the analytical expression for a.

a 3 = L e 3 and b 3 = 2π/L e 3 . The reciprocal Bravais lattice constitutes another set of regularly spaced points with positions G = m b 1 + n b 2 + o b 3 , with m, n and o being integers. The noncoplanar vectors b i for i = 1, 2, 3 are called the reciprocal lattice vectors, and they can be derived from the lattice vectors.

Figure 1: Selected Bravais lattices in 1D, 2D and 3D. The choice of the primitive unit cell is not unique. B i , S i , R c i and C i correspond to unit cells for the one-dimensional, two-dimensional square, centered rectangular and three-dimensional simple cubic lattices, respectively. Unit cells with i = 1, 2 constitute primitive unit cells, whereas for i = 1 they are referred to as Wigner-Seitz cells. R c 3 is not a primitive unit cell, as more than one lattice point is occupied in the cell domain. Instead, R c 3 corresponds to a rectangular unit cell which has direct lattice vectors c i for i = 1, 2. For the 1D lattice, a = |a|. For the 2D square lattice, |c 1 | = |c 2 | and ϕ = 90°. For the 2D centered rectangular lattice, |c 1 | = |c 2 | and ϕ ≠ 90°. Finally, for the 3D simple cubic lattice, |c 1 | = |c 2 | = |c 3 | and ϕ = 90°.
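In 3D, the reciprocal lattice vectors can be computed from the direct ones with the usual cross-product formula, so that b i · a j = 2π δ ij. A minimal sketch (function name is ours):

```python
import numpy as np

def reciprocal_basis_3d(a1, a2, a3):
    """Reciprocal lattice vectors b1 = 2*pi*(a2 x a3)/V, etc., with
    V = a1 . (a2 x a3) the cell volume, so that b_i . a_j = 2*pi*delta_ij."""
    V = np.dot(a1, np.cross(a2, a3))
    return (2.0 * np.pi * np.cross(a2, a3) / V,
            2.0 * np.pi * np.cross(a3, a1) / V,
            2.0 * np.pi * np.cross(a1, a2) / V)
```

For the simple cubic lattice a i = L e i this reduces to b i = 2π/L e i, matching the expressions above.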

Figure 2: Schematic PINN with the conventional application of BCs within the context of computational homogenization. P and B denote differential and boundary operators, respectively. T P and T B represent collocation points for the PDE and the boundary conditions, respectively. The image is adapted from [9].

Figure 3:

Figure 4 :
Figure 4: Schematic representation of how to implement exact C ∞ periodic boundary conditions in two-dimensional problems through an added C ∞ periodic input transfer layer.

Figure 5 :
Figure 5: The ANN predictions for the solution field at the end of 30000 epochs and comparison to ABAQUS results (discrete circular markers). The light blue line represents the smoothed material property distribution. The left column belongs to the results with scaling and the right column without scaling. Dashed curves have a learning rate of 0.001 and continuous ones 0.01 for the adam optimizer. Black curves give results for no Fourier feature, whereas dark blue and red for low- and high-frequency Fourier features with a single and the first 10 integer multiples of the reciprocal base vector, respectively.

Figure 6 :
Figure 6: The ANN predictions for the solution field derivative at the end of 30000 epochs and comparison to ABAQUS results (discrete circular markers). The light blue line represents the smoothed material property distribution. The left column belongs to the results with scaling, and the right column is without scaling. Dashed curves have a learning rate of 0.001 and continuous ones 0.01 for the adam optimizer. Black curves give results for no Fourier feature, whereas dark blue and red for low- and high-frequency Fourier features with a single and the first 10 integer multiples of the reciprocal base vector, respectively.

Figure 7 :
Figure 7: Training history of the loss function for the cell-problem PINN solution. The first 5000 epochs (marked with a vertical red dashed line) belong to the adam optimizer, whereas the rest to L-BFGS. The left column belongs to the results with scaling and the right without scaling. Dashed results have a learning rate of 0.001 and continuous lines of 0.01. Black curves give results for no Fourier feature, whereas dark blue and red for low- and high-frequency Fourier features with a single and the first 10 integer multiples of the reciprocal base vector, respectively.

Figure 8 :
Figure 8: Training history of the loss function for the cell-problem PINN solution for the first 500 adam epochs. The top row belongs to the results with scaling and the bottom without scaling. Dashed results have a learning rate of 0.001 and continuous lines of 0.01. Black curves give results for no Fourier feature, whereas dark blue and red for low- and high-frequency Fourier features with a single and the first 10 integer multiples of the reciprocal base vector, respectively.

Figure 14 :
Figure 14: The contour plots for the property (a) and the source terms as property derivatives along the x- and y-directions for the square periodic unit cell with random disk distribution. The intervals [min, max] of the contour plots are (a) [1, 100], (b) and (c) [−49.5, 49.5].

Figure 15: The contour plots for the ANN solution χ(x) for (a) and (c) and the flux norm |∇χ(x)| for (b) and (d) fields of the cell-problem for the square periodic unit cell. The intervals [min, max] of the contour plots are (a) [0, 55], (b) [0, 3.6], (c) [0, 55], and (d) [0, 5.3]. Results are obtained using a 100×6 architecture with high-frequency Fourier features with the first 10 integer multiples of the reciprocal base vector.

Figure 16:

Figure 18 :
Figure 18: Training history of the normalized loss function for the ANN solution considering unit cells of the cell problem for periodic unit cells R c 1 (black), R c 2 (blue) and R c 3 (red). For each hyperparameter set, three training sessions are run. The first 5000 epochs (marked with a vertical dashed line) belong to the adam optimizer, whereas the rest to L-BFGS. For R c 1 and R c 2 , a learning rate of 0.001 with the 100×6 ANN architecture possessing low-frequency Fourier features is used, whereas, for R c 3 , a learning rate of 0.010 with the 50×3 ANN architecture possessing high-frequency Fourier features with the first 10 integer multiples of the reciprocal base vector is used.

Figure 21:

Figs. 21 and 22 demonstrate that the ANN with high-frequency Fourier features using the first 10 integer multiples of the reciprocal base vector performs best in approximating the function, in both the convergence rate and the final accuracy. The training loss at the end of 30000 epochs is nearly four orders of magnitude larger for ANNs without Fourier features.

Figure 22 :
Figure 22: Plots for the 1D periodic function given in Eq. (30) and its ANN approximation (left) and the absolute error in the ANN approximation (right) over the problem domain x ∈ [0, 2]. Results for a learning rate of 0.010 are demonstrated. Black curves give results for no Fourier feature, whereas dark blue and red for low- and high-frequency Fourier features with a single and the first 10 integer multiples of the reciprocal base vector, respectively.

Table B1 :
ANN predictions with two different architectures for the source terms f 1 and f 2 for the random and periodic inclusion distribution considering a square lattice. For f 1 , the subscript i = 1; otherwise i = 2. Results are obtained using high-frequency Fourier features with the first 10 integer multiples of the reciprocal base vector.

Table B2 :
ANN predictions for the source terms f 1 and f 2 considering the three different unit cells and the centered rectangular lattice. For R c 1 and R c 2 , a learning rate of 0.001 with the 100×6 ANN architecture possessing low-frequency Fourier features is used, whereas, for R c 3 , a learning rate of 0.010 with the 50×3 ANN architecture possessing high-frequency Fourier features with the first 10 integer multiples of the reciprocal base vector is used. For f 1 , the subscript i = 1; otherwise i = 2.

Table B3 :
ANN predictions for the source term f 1 considering the two different unit cells C 1 and C 2 and the simple cubic lattice. A learning rate of 0.010 with the 50×3 ANN architecture is utilized with low-frequency and high-frequency Fourier features, respectively possessing a single and the first 10 integer multiples of the reciprocal base vector.