De-homogenization using Convolutional Neural Networks

This paper presents a deep learning-based de-homogenization method for structural compliance minimization. By using a convolutional neural network to parameterize the mapping from a set of lamination parameters on a coarse mesh to a one-scale design on a fine mesh, we avoid solving the least-squares problems associated with traditional de-homogenization approaches and save time correspondingly. To train the neural network, a two-step custom loss function has been developed which ensures a periodic output field that follows the local lamination orientations. A key feature of the proposed method is that the training is carried out without any use of or reference to the underlying structural optimization problem, which renders the method robust and insensitive with respect to domain size, boundary conditions, and loading. A post-processing procedure utilizing a distance transform on the output field skeleton is used to project the desired lamination widths onto the output field while ensuring a predefined minimum length-scale and volume fraction. To demonstrate that the deep learning approach has excellent generalization properties, numerical examples are shown for several different load and boundary conditions. For an appropriate choice of parameters, the de-homogenized designs perform within $7-25\%$ of the homogenization-based solution at a fraction of the computational cost. With several options for further improvements, the scheme may provide the basis for future interactive high-resolution topology optimization.

De-homogenization of a 200 × 50 homogenization-based solution onto a fine mesh of 8000×2000 elements. The de-homogenized design performs within 17% of the homogenization solution, adheres to a predefined minimum relative thickness, and was obtained in just 15 seconds on a modern-day laptop.

Introduction
In many ways the fields of computer vision and topology optimization are closely related, e.g. in both cases, the optimization domain constitutes a set of elements/pixels in a grid, and the problem can be formulated as minimizing a functional to obtain an optimized configuration. In computer vision, an optimized configuration could be a set of segmentation labels, which in the context of topology optimization [9] could be thought of as material phases. Due to the closeness of the tasks, it is only reasonable to assume that some of the impact which deep learning has had on the field of computer vision will transfer to the field of topology optimization.
Led by this promise, recent years have seen a surge in publications on the application of deep learning [23] to topology optimization within a variety of problems such as minimum compliance [20,45,10,42,11,29,21], thermal compliance [27,25,26], micro-structure design [2,37,41,22,12] and generative design [32]. Despite these efforts, deep learning has yet to make a major impact within topology optimization, and only performs at a similar level to traditional topology optimization methods at very low design resolutions, or for highly restricted problem domains and boundary conditions, where the cost of generating a synthetic dataset and training a neural network is not prohibitive. Put bluntly, and due to the above reasons, none of the so far published deep learning end-to-end methods allow for a level of generalization which makes it advisable to use them for practical applications. Moreover, for end-to-end machine learning to have a significant impact on the field of topology optimization, it is clearly a requirement that it is not only faster and scales better, but also that the solutions produced are of comparable quality to those computed using more conventional optimization methods [9]. Furthermore, this must be true even when the network is provided inputs that are very different from those in the training data. This appears to be a very challenging problem, but attractive alternatives to end-to-end learning present themselves. For example, it is generally much easier to incorporate constraints and ensure a physically sound design in the classical mathematical programming methods, where only one design problem is considered at a time, whereas convolutional neural networks have shown state-of-the-art performance on tasks such as image denoising [39] and upsampling [24] where local information plays an important role. Hence, a combined method, in which the two approaches complement each other, seems to be a much more viable direction to pursue [13].
Here we report on a deep learning approach that allows for problem definition generalization and is capable of producing high-resolution, detailed designs. The key to this achievement lies in the fact that the proposed method does not provide an end-to-end topology optimization approach. Instead, it provides an alternative to traditional de-homogenization approaches [33,16,3,17] by using the orientations from a homogenization-based topology optimization solution [8,16] as inputs for a fully convolutional neural network [34], which upsamples them to an intermediate field $\tilde{\rho}$. The network is trained using a synthetic dataset generated by sampling the spatial gradient orientations from a field of low-frequency sines, and an encoding based on splitting the angular information into multiple channels is used to ease learning for the network. A custom loss function ensures that the intermediate field is periodic according to a predefined wave-length and that its spatial gradients are orthogonal to the input directions. Furthermore, two regularization terms are included that favor a solution with sufficient spatial variation, and where branching mainly takes place in the solid phase. Figure 2 shows the data pipeline from the orientation field to the intermediate field and subsequent post-processing. The post-processing consists of a series of fast steps, all running in linear time, which project the desired lamination widths onto the intermediate field to yield the de-homogenized one-scale design. We remark that solving the homogenization-based topology optimization problem is already very fast, i.e. the problem is close to convex, and much information can be extracted even from coarse finite element meshes [17]. The current bottleneck in de-homogenization methods is the expensive solution of one large-scale least-squares problem per lamination direction. In this work, we propose a learning-based solution to this problem that promises significant time savings.
With computation times around 2 seconds for projection onto a 2400×1200 mesh on a standard laptop and lots of potential for further improvements and porting to GPUs, this approach may pave the way for future real-time de-homogenization in the iterative homogenization-based optimization loop [30].
The structure of the paper is as follows: in Section 2 the relevant methods are outlined, including homogenization-based topology optimization, the network architecture, the loss function, and the post-processing scheme. Section 3 presents the synthetic dataset generation and input encoding. In Section 4, results for several different minimum compliance examples are presented and compared to a reference solution provided by the homogenization-based method. Finally, the strengths and weaknesses of the proposed method and potential improvements are discussed in Section 5.

Methods
This section provides a brief introduction to homogenization-based topology optimization, followed by a detailed description of the proposed neural network and lamination width projection scheme.

Homogenization-based topology optimization
The seminal papers by Bendsøe and Kikuchi [8,6] introduced the so-called homogenization-based topology optimization approach. Crucially, this method overcame the ill-posedness of element- or node-based design parameterizations by introducing optimal, multi-scale, so-called rank-N (or slightly sub-optimal rectangular-hole) microstructures, oriented along principal stress directions. This approach was considered quite complex and also resulted in "grey" solutions that could not be manufactured. Therefore, and specifically after the simpler density approach was proven physically admissible [7], homogenization-based topology optimization was largely abandoned in favor of simpler approaches. Now, when simpler density approaches have reached giga-scale resolutions [5] that require access to supercomputers, and advances in additive manufacturing techniques allow the realization of efficient multi-scale structures, there has been a renewed interest in homogenization-based topology optimization approaches that can be run efficiently on standard computers and whose output can be realized through novel de-homogenization approaches [33,16,3,17].
The homogenization-based approach to topology optimization can be explained through the visualization in Figure 3. A design domain (a) is discretized into a relatively low number of elements, and each element has three design variables. Lamination parameters $\mu_1$ and $\mu_2$ denote fractions of solid material at two different length scales, and $\theta$ determines the orientation. For this so-called rank-2 material, homogenized effective properties can be determined in closed form (see e.g. [9]). Based on analytical gradient computations, these three design fields can be optimized, resulting in e.g. the cantilever beam structure in Figure 3a, where grey color indicates optimized element-wise volume fractions and blue and red lines indicate the direction and proportions of the two local lamination thicknesses. The exact formulation of the homogenization-based topology optimization approach used here follows next.

Optimization problem
The goal of the considered homogenization-based topology optimization problem is to minimize the structural compliance $C$, i.e. the work done by external forces, subject to a volume constraint. This objective is augmented by two additional terms: an angle regularization term $F_\theta$ [17] and a penalization of the solid area [15]. The design variables are the two element-wise lamination parameter fields $\boldsymbol{\mu}_1$, $\boldsymbol{\mu}_2$, the angle field $\boldsymbol{\theta}$, as well as an auxiliary material indicator field $s$.
The full optimization problem, including constraints, is stated as
$$\begin{aligned}
\min_{\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\boldsymbol{\theta},s} \quad & \frac{\mathbf{f}^T\mathbf{u}}{C_{ref}} + \gamma_\theta\, F_\theta(\boldsymbol{\theta}) + \Gamma\, \frac{\mathbf{v}^T \mathbf{s}}{\mathbf{v}^T \mathbf{1}} \\
\text{s.t.} \quad & \mathbf{K}(\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\boldsymbol{\theta},s)\,\mathbf{u} = \mathbf{f}, \\
& \mathbf{v}^T\boldsymbol{\rho}_d \le V_{max}, \\
& \mathbf{0} \le \boldsymbol{\mu}_1, \boldsymbol{\mu}_2 \le \mathbf{1}.
\end{aligned}$$
Here $\mathbf{K}$ is the stiffness matrix, $\mathbf{f}$ is the force vector, $\mathbf{u}$ is the solution to the equilibrium equations, $\mathbf{v}$ is a vector of element volumes, $\boldsymbol{\rho}_d$ is a vector of element densities, $V_{max}$ is the maximum amount of allowed material, and $C_{ref}$ is a reference compliance. Finally, $\gamma_\theta$ and $\Gamma$ are constants used to scale the importance of each term in the objective, and $\mu_{min}$ is the minimum bound on the admissible layer thickness, from here on referred to as the minimum relative thickness.
The relation between design variables and physical element densities is
$$\rho_{d,e} = \bar{\mu}_{1,e} + \bar{\mu}_{2,e} - \bar{\mu}_{1,e}\,\bar{\mu}_{2,e}, \quad e \in \Omega,$$
where $\alpha_{out} = 10^{-9}$ is a small value indicating the stiffness in the void phase, and $\Omega$ denotes the optimization domain. Here, $\tilde{(\cdot)}$ denotes filtered variables and $\bar{(\cdot)}$ denotes projected variables.
With this formulation, we ensure clear designs without non-physical lamination parameters in the interval $[0, \mu_{min})$. For a detailed description of this approach, the reader is referred to the paper by Giele et al. [15].

Convolutional neural network
The homogenization-based topology optimization solution contains a set of lamination parameters describing the underlying microstructure in each element, but does not in itself provide a physically realizable solution, as infinite periodicity of the microstructure is assumed within each element (Figure 3b). Together with the element-wise varying lamination angles, this poses the difficult challenge of seamlessly connecting the microstructure in adjacent elements to form a mechanically sound and efficient de-homogenized design at finite periodicity. In the context of previous de-homogenization approaches, this is where the current bottleneck lies [33,16,3]. The most expensive part of existing de-homogenization approaches is the solution of a least-squares problem on an intermediate mesh to identify two auxiliary fields with gradients corresponding to the normals of the two lamination directions. Hence, it is interesting to investigate if this task can be performed more efficiently with a learning-based approach.
The most straightforward way to utilize a neural network for solving this task would be to do it in a supervised manner [31]. In the supervised learning case, the network takes the lamination parameters as input and outputs a de-homogenized design which is then evaluated against a ground-truth de-homogenized design generated using existing de-homogenization methods [16,3], i.e. the neural network $G$ creates a mapping
$$\boldsymbol{\rho} = G(\boldsymbol{\mu}_1, \boldsymbol{\mu}_2, \boldsymbol{\theta}) \quad (5)$$
and is evaluated using an error estimate, such as the mean-squared error, between the predicted and ground-truth design,
$$\mathcal{L}_{MSE} = \frac{1}{N}\sum_{e=1}^{N} \left(\rho_e - \rho_{gt,e}\right)^2,$$
where $\boldsymbol{\rho}_{gt}$ denotes the ground-truth design.
In practice, there are several downsides to such an approach. First of all, a very large amount of computation is required in order to generate a dataset of ground-truth designs with sufficient resolution and quality. Secondly, even though de-homogenization methods have improved tremendously in recent years, most methods still require some degree of manual intervention, which is both cumbersome and undesirable to do for thousands of samples. Also, an entirely target-based loss function only ensures pixel-wise accuracy, but does not ensure the structural integrity of the design as a whole. Thus, if a few crucial pixels are predicted with low accuracy, the design may contain structural disconnects but exhibit a low geometrical error overall. Of these two issues, the dataset generation is the most pressing, as minor structural disconnects can often be fixed by a subsequent post-processing scheme. Therefore, and due to the large amount of computation and manual work needed to make a supervised approach work, an unsupervised approach [14] is taken in this paper. Our solution is based on the observation that the mechanical soundness of the structure is provided by the homogenization-based solution. Thus, our convolutional network only needs to produce structures of a given periodicity from the lamination orientations provided by the homogenization-based solution, whereas the lamination widths are projected onto the design in a post-processing step. This problem greatly resembles texture synthesis, which crucially allows us to train the network using synthetic fields. However, initial tests using the orientation field directly as input for the neural network did not show promising results, and thus an alternative encoding based on splitting the angular field into multiple channels, each modeled by a Gaussian, was developed as detailed in Section 3.
Given the activations in each channel, the network $G$ performs the following mapping
$$\tilde{\boldsymbol{\rho}} = G(H),$$
where $H$ are the Gaussian activations, and $\tilde{\boldsymbol{\rho}}$ is an intermediate field whose gradients follow the lamination orientations, i.e. $|\nabla\tilde{\rho} \cdot \mathbf{e}_\theta| \approx 0$. Note that for a rank-2 microstructure description the lamination orientations correspond to the principal stress orientations.
A fully convolutional neural network [34] is chosen as the desired network architecture as it allows an arbitrary input size, and requires much fewer learning parameters compared to a fully connected neural network. In the bottom layer of the network, a series of Residual Neural Network (ResNet) blocks [19] are used to extract local features from the input orientations.
To accurately represent the orientation information in the intermediate field, a total upsampling factor of ×8 is applied. Each upsampling layer consists of a nearest-neighbour upsampling followed by two convolutional layers, which has proven less prone to checkerboard artefacts compared to other upsampling methods such as pixel-shuffle or transposed convolution [28]. The final output layer consists of a 3 × 3 kernel followed by a Sigmoid activation to scale the output between 0 and 1. More information on the exact network architecture can be found in Appendix B.
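The upsampling strategy described above can be sketched in NumPy: nearest-neighbour upsampling followed by a convolution, which avoids the checkerboard artefacts associated with transposed convolutions. This is a minimal illustration, not the actual network code; the averaging kernel is a placeholder standing in for learned convolution weights.

```python
import numpy as np

def nearest_upsample(x, factor=2):
    """Nearest-neighbour upsampling by an integer factor (single channel)."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def conv2d_same(x, k):
    """Naive 'same' 2D convolution with zero padding (illustration only)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def upsample_block(x):
    """One upsampling step: x2 nearest-neighbour followed by a smoothing conv."""
    k = np.full((3, 3), 1.0 / 9.0)  # placeholder kernel; learned in practice
    return conv2d_same(nearest_upsample(x, 2), k)

x = np.random.rand(8, 8)
y = upsample_block(x)
```

Three applications of such a block yield the ×8 total upsampling factor used in the paper.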
As a loss function, the dot-product between the orientations from the homogenization-based solution and the normalized gradients of the intermediate field is used,
$$\mathcal{L}_{dot} = \frac{1}{N}\sum_{e=1}^{N} \left|\hat{\mathbf{e}}_{\tilde{\rho},e} \cdot \mathbf{e}_{\theta,e}\right|,$$
where $\mathbf{e}_\theta$ is the vectorized version of the lamination angle, and $\hat{\mathbf{e}}_{\tilde{\rho}} = \nabla\tilde{\boldsymbol{\rho}} / \|\nabla\tilde{\boldsymbol{\rho}}\|_2$ is calculated using the Sobel operator. To avoid very small gradient magnitudes from influencing numerical stability during training, a mean filter is applied to $\nabla\tilde{\boldsymbol{\rho}}$ before normalization. Figure 4 shows a visual representation of the dot-product loss for a small patch of $\tilde{\boldsymbol{\rho}}$ near the edge of the domain. It can be seen that the loss is generally higher near the edge, as the Sobel operator does not provide an accurate estimate of the gradient there, while orientations further from the domain boundaries are near orthogonal. To counteract this issue, replication padding is added to the neural network input.

To control the periodicity of the output field, a windowed Fourier transform
$$\tilde{F}(\mathbf{k}) = \mathcal{F}\{w \odot \tilde{\boldsymbol{\rho}}\}(\mathbf{k})$$
is utilized as the basis for an additional term in the loss function. Here a Hamming window $w$ has been used to account for spectral leakage [18]. This additional term measures the energy inside a specified frequency band $B$ relative to the total energy of the spectrum,
$$E_\omega = \frac{\sum_{\mathbf{k} \in B} |\tilde{F}(\mathbf{k})|^2}{\sum_{\mathbf{k}} |\tilde{F}(\mathbf{k})|^2}.$$
In terms of de-homogenization it is most intuitive to think of this frequency band as describing the desired wave-length $\varepsilon$. Since an exact fulfillment of the desired wave-length leads to a very narrow frequency band, which is impractical during optimization, it is desirable to assign a small fixed width of $b$ pixels to the frequency band. Given a square image of width $H$, the frequency interval is simply
$$B = \left[\frac{H}{\varepsilon} - \frac{b}{2},\; \frac{H}{\varepsilon} + \frac{b}{2}\right],$$
where we assume the zero-frequency is at the origin. The spectral loss is included in the total loss function as
$$\mathcal{L} = \mathcal{L}_{dot} - \lambda_\omega E_\omega.$$
Here $\lambda_\omega$ is a weight factor, and a minus sign is used in front of $E_\omega$ because we seek to maximize the energy inside the specified frequency band while minimizing the total objective.
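The two loss terms can be sketched as follows. This is a simplified NumPy illustration of the ideas rather than the actual (differentiable, GPU-based) training code; the mean-filter size and the radial definition of the frequency band are assumptions.

```python
import numpy as np
from scipy import ndimage

def dot_product_loss(rho, e_theta, eps=1e-8):
    """Mean |e_rho . e_theta| with Sobel-based gradient estimation."""
    gx = ndimage.sobel(rho, axis=1)
    gy = ndimage.sobel(rho, axis=0)
    # Mean filter before normalization, for numerical stability.
    gx = ndimage.uniform_filter(gx, size=3)
    gy = ndimage.uniform_filter(gy, size=3)
    norm = np.sqrt(gx**2 + gy**2) + eps
    dot = (gx * e_theta[..., 0] + gy * e_theta[..., 1]) / norm
    return np.mean(np.abs(dot))

def spectral_band_energy(rho, wavelength, band=4):
    """Energy fraction inside a radial frequency band around H / wavelength."""
    H = rho.shape[0]
    w = np.outer(np.hamming(H), np.hamming(H))  # Hamming window
    F = np.abs(np.fft.fft2(w * (rho - rho.mean())))**2
    fy, fx = np.meshgrid(np.fft.fftfreq(H) * H, np.fft.fftfreq(H) * H,
                         indexing='ij')
    r = np.sqrt(fx**2 + fy**2)
    mask = np.abs(r - H / wavelength) <= band / 2
    return F[mask].sum() / F.sum()

# A plane wave with a period of 8 pixels: its gradient points along x,
# so a lamination direction along y should give a near-zero dot loss.
H = 64
rho = 0.5 + 0.5 * np.sin(2 * np.pi * np.arange(H) / 8.0)[None, :] * np.ones((H, 1))
e_theta = np.zeros((H, H, 2))
e_theta[..., 1] = 1.0
```

For this test field the band energy peaks when the queried wave-length matches the true period, which is exactly the behavior the spectral loss rewards.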
If no regularization is used, the objective in eq. 12 tends to favor solutions with very low spatial variation in terms of magnitude. This might lead to numerical instabilities when performing normalization of the image gradient orientations, especially since the computations are performed on a single-precision GPU. Two different approaches have been tested in order to counteract an overly smooth representation of $\tilde{\boldsymbol{\rho}}$. The simplest approach is to add weak Gaussian noise to the input orientation fields before performing the input encoding. This helps the network learn more robust representations of the intermediate field, essentially pushing the values towards the extremes of the Sigmoid function, where small variations have less impact. In practice the Gaussian noise alone is often enough to ensure sufficient spatial variation. In addition to the Gaussian noise, a total variation term is added to the loss to enforce a predefined total variation of the intermediate field.
It is important to note that the total variation term on its own does not provide an explicit way of controlling the periodicity of the intermediate field, as the total variation may be increased in multiple ways, i.e. through increasing the contrast, the periodicity, or a mixture of both. Thus, the total variation and spectral losses should be used in conjunction,
$$\mathcal{L}_\tau = \lambda_\tau \left| TV(\tilde{\boldsymbol{\rho}}) - \tau \right|,$$
where $\tau$ is the desired total variation, and $\lambda_\tau$ is a weight factor.
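A minimal sketch of such a total variation target term, assuming a discrete TV defined as the mean absolute forward difference (the exact discretization used by the authors is not stated):

```python
import numpy as np

def total_variation(rho):
    """Discrete total variation: mean absolute forward differences."""
    dx = np.diff(rho, axis=1)
    dy = np.diff(rho, axis=0)
    return np.mean(np.abs(dx)) + np.mean(np.abs(dy))

def tv_loss(rho, tau, lam=1.0):
    """Penalize deviation from a target total variation tau."""
    return lam * abs(total_variation(rho) - tau)
```

A constant field has zero total variation, so the term pushes the network away from overly smooth outputs toward the prescribed amount of spatial variation.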
For orientational changes to take place in the intermediate field, the periodicity must increase or branches must occur. Since the periodicity is fixed by the spectral loss, branching is the only option. Without any regularization, areas around a branching point often take intermediate values. This is undesirable if a thresholding approach is later used to obtain a fully solid-void design, as it might lead to disconnects. Thus, it is preferable to drive branches towards the solid phase. One possible way to do this is to use the dot-product loss in eq. 8 as an indicator function for branching, based on the observation that peaks in the dot-product loss most often coincide with branching, see Figure 5.
Using the dot-product loss surface directly as an indicator function introduces an unnecessary amount of noise into the optimization, as there are many small peaks in the loss surface not related to branching. Furthermore, the error will often be large near the edge of the domain, where the Sobel operator provides a poor estimate of the gradients. To create a cleaner indicator function, the loss surface is first smoothed using a small Gaussian kernel $g$,
$$I_{dot} = g * \mathcal{L}_{dot}.$$
Subsequently, all values in $I_{dot}$ closer than 5 pixels to the edge of the domain are set to zero. Now, a loss term favoring branching in the solid phase may be formulated based on the indicator function,
$$\mathcal{L}_b = \frac{1}{N}\sum_{e=1}^{N} \left[ I_{dot} \odot (1 - \tilde{\boldsymbol{\rho}}) \right]_e,$$
where $\odot$ indicates element-wise multiplication, and $1 - \tilde{\boldsymbol{\rho}}$ is used such that only dot-product errors in the void phase are penalized. This naturally drives the intermediate field towards values very close to one, but still far enough apart to provide meaningful image gradients.
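The indicator construction and branching loss can be sketched as follows; a simplified NumPy illustration, with the Gaussian width chosen arbitrarily here.

```python
import numpy as np
from scipy import ndimage

def branching_loss(dot_loss_surface, rho, sigma=1.0, border=5):
    """Smooth the dot-product loss surface with a Gaussian, zero out a
    border margin (where Sobel gradients are unreliable), then penalize
    indicated branching occurring in the void phase (1 - rho)."""
    I = ndimage.gaussian_filter(dot_loss_surface, sigma)
    I[:border, :] = I[-border:, :] = 0.0
    I[:, :border] = I[:, -border:] = 0.0
    return np.mean(I * (1.0 - rho))
```

If the field is solid (rho = 1) wherever branching is indicated, the loss vanishes; peaks near the domain edge are suppressed by the border masking.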
Including the branching loss leads to the final loss function used in this work, i.e.
$$\mathcal{L} = \mathcal{L}_{dot} - \lambda_\omega E_\omega + \lambda_\tau \left| TV(\tilde{\boldsymbol{\rho}}) - \tau \right| + \lambda_b \mathcal{L}_b,$$
where $\lambda_\omega$, $\lambda_\tau$ and $\lambda_b$ are scaling parameters used to weigh the importance of the spectral, total variation and branching losses in the full objective.
However, minimizing all four terms simultaneously often leads to poor local minima, and requires careful balancing of the four objectives to obtain useful results. To remedy this issue, the training is performed in two steps. In the first step, the network is trained with only the orientation, spectral and total variation losses enabled, i.e. $\lambda_b = 0$. This ensures that the generated intermediate field follows the input orientations and the prescribed periodicity. In the second step, the spectral loss is disabled and the branching loss is enabled instead, i.e. $\lambda_\omega = 0$. The network is then initialized with the weights from the first training step and trained until convergence. This procedure makes sense because the branching loss relies on branches being the main source of error in the orientation loss, which is not the case during the early stages of the first training phase. On the other hand, the spectral loss disturbs the generation of well-defined branches, as they violate the prescribed periodicity. The overall training procedure is outlined in Algorithm 1.
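The two-step schedule can be outlined schematically as follows. The epoch split between the two steps is an assumption (the paper only states 15 epochs in total), and the weight values correspond to those reported in Section 4; `two_step_schedule` is a placeholder name, not the authors' code.

```python
# Schematic two-step training schedule. Step 1: orientation + spectral +
# total variation losses. Step 2: warm-start from step 1, swap the
# spectral loss for the branching loss.

def two_step_schedule(epochs_step1, epochs_step2):
    schedule = []
    for _ in range(epochs_step1):
        schedule.append({"lambda_omega": 1.0, "lambda_tau": 1.0,
                         "lambda_b": 0.0})   # spectral on, branching off
    for _ in range(epochs_step2):
        schedule.append({"lambda_omega": 0.0, "lambda_tau": 1.0,
                         "lambda_b": 2.0})   # spectral off, branching on
    return schedule

weights = two_step_schedule(10, 5)  # assumed 10/5 split of the 15 epochs
```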

Lamination width projection
The homogenization-based topology optimization is based on optimally oriented rank-2 microstructures as described in Section 2.1. However, to achieve practically realizable single-scale microstructures, we convert the rank-2 microstructures to single-scale rectangular-hole microstructures with little loss in stiffness [38], as discussed in [16].
Given the intermediate field $\tilde{\boldsymbol{\rho}}_1$ for a single lamination direction, a post-processing procedure is applied in order to project the desired lamination width onto the intermediate field. Below, the post-processing procedure is outlined for a single direction, but the procedure is identical for the other direction. To make sure that the smallest features in $\tilde{\boldsymbol{\rho}}_1$ are resolved by at least three pixels, a bilinear upsampling to a finer mesh $T_f$ is first performed. An estimate of the required upsampling factor can be calculated as
$$m_{up} = \left\lceil \frac{h_{min}}{\mu_{min}\,\varepsilon_i} \right\rceil,$$
where $\mu_{min}$ is the minimum relative thickness, $h_{min}$ is the minimum feature size in pixels, and $\varepsilon_i$ is the wave-length in pixels/period on the intermediate mesh $T_i$.
Once $\tilde{\boldsymbol{\rho}}_1$ is of reasonable resolution, the skeleton (c.f. Figure 6c) of the solid phase is extracted. However, in order to do so, a few preparation steps are made. First, $\tilde{\boldsymbol{\rho}}_1$ is normalized to the range 0-1, as the network output does not always utilize the entire range of the Sigmoid.
Second, branching regions are solidified by extracting all maxima above a certain threshold in the dot-product loss surface and setting all pixels within a radius of $\varepsilon_i/4$ to 1, c.f. Figure 6b. Lastly, the upsampled intermediate field is converted to a binary image by setting all values above the mean to 1. Once the preparation steps have been performed, a skeletonization [44] is performed on the binary image to obtain the underlying structure of the intermediate field. Since skeletonization thins the solid part of the density field to one-pixel thickness, a dilation with a pixel radius of 1 is applied to avoid point connections in diagonal bars. The dilated skeleton now serves as the minimum thickness for all members in the structure.
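The binarization and dilation steps can be sketched as follows. The skeletonization itself would typically be done with a morphological thinning routine such as `skimage.morphology.skeletonize`, which is assumed rather than implemented here; the toy example below starts directly from a one-pixel skeleton.

```python
import numpy as np
from scipy import ndimage

def binarize(rho):
    """Threshold the (0-1 normalized) intermediate field at its mean."""
    return (rho > rho.mean()).astype(np.uint8)

def dilate_skeleton(skel):
    """Dilate a one-pixel skeleton by radius 1 so that diagonal bars do
    not connect only through single corner points; the result serves as
    the minimum member thickness."""
    return ndimage.binary_dilation(skel, structure=np.ones((3, 3)))

# Toy example: a one-pixel diagonal line.
skel = np.eye(10, dtype=np.uint8)
thick = dilate_skeleton(skel)
```

After dilation, diagonally adjacent skeleton pixels share full edge-connected neighbours, which is what prevents point connections.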
Having established the underlying skeleton and minimum thickness of all members in the structure, a width is now assigned to each member. Here a distance transform $D_1$ (c.f. Figure 6d) is used to turn the skeleton image into a distance field. Given the distance field, a thickness can be assigned to each member of the skeleton by using the lamination width $\mu_1$ to adaptively threshold the distance field,
$$\boldsymbol{\rho}_1 = H\!\left(\frac{\mu_1}{2} - D_1\right),$$
where $H(x)$ is the Heaviside step function.
There are a few caveats to this procedure. First, due to branching, the periodicity in $\tilde{\boldsymbol{\rho}}_1$ is not entirely uniform. This means that if $D_1$ is normalized to the 0-1 range, outliers will influence the normalization and cause an erroneous projection. To remedy this issue, a histogram of $D_1$ can be used to identify outlier values and determine an appropriate clipping value before normalizing the distance field. Secondly, $\mu_1$ should be clipped according to $h_{min}/(P \cdot m_{up})$ and subsequently normalized, such that the minimum feature size corresponds to the minimum width resolvable by the mesh. Figure 6 gives an overview of the entire post-processing procedure from the intermediate field to the 1-directional density field.
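The adaptive thresholding of the distance field can be sketched as follows; a simplified illustration in which the width is expressed directly in pixels and the histogram-based outlier clipping is omitted.

```python
import numpy as np
from scipy import ndimage

def project_width(skel, mu, eps_f):
    """Assign a width of roughly mu * eps_f pixels to each skeleton member
    by thresholding the Euclidean distance transform of the skeleton."""
    D = ndimage.distance_transform_edt(skel == 0)  # distance to skeleton
    return (D <= mu * eps_f / 2.0).astype(np.uint8)

# Toy example: a horizontal skeleton line through a 21x21 image.
skel = np.zeros((21, 21), dtype=np.uint8)
skel[10, :] = 1
rho1 = project_width(skel, mu=0.2, eps_f=20.0)  # target width ~4-5 px
```

Larger lamination widths simply admit pixels further from the skeleton, so member thickness scales with $\mu_1$ as intended.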
Once the density fields for both lamination orientations have been calculated, the global density field can be obtained as the union of the two fields, i.e. the element-wise maximum $\boldsymbol{\rho} = \max(\boldsymbol{\rho}_1, \boldsymbol{\rho}_2)$.

Dataset and input encoding
One of the absolute strengths of the method proposed in this paper is that the training data does not rely on the physics of the underlying structural optimization problem. When training a neural network, using data from the desired application domain in most cases leads to the best performance. In our case this type of data would correspond to a set of orientation fields generated using the homogenization-based approach. However, such orientation fields are both expensive to obtain and might contain singularities, especially around regions where load and boundary conditions have been applied. These singularities often result in non-integrable vector fields, which are not suited for de-homogenization [36]. To avoid singularities and lessen the computational burden, a synthetic dataset is used instead.
To obtain a synthetic orientation field, we first create a synthetic scalar field $F$ by summing products of low-frequency sine functions,
$$F(x, y) = \sum_{i=1}^{n}\sum_{j=1}^{m} a_{ij}\,\sin(\omega_i x + \phi_i)\,\sin(\omega_j y + \phi_j),$$
and obtain the needed vector field by taking the gradient, $\mathbf{e} = \nabla F$. Since the gradient magnitude is of no importance to the orientation, the gradients are normalized to yield $\hat{\mathbf{e}} = \nabla F / \|\nabla F\|_2$. From the global orientation field, smaller patches with different aspect ratios may be sampled. To ensure a somewhat smooth orientation field within each patch, a constraint on the maximum angular change $\theta_{max}$ between neighboring orientations is introduced. This constraint is enforced by resampling a patch if the constraint is violated, which also ensures that the singularities around local extrema in the global field are avoided. Furthermore, the global orientation field is resampled every $N_{rs}$ iterations with a random number of sine contributions from the set $C = \{6, 8, 10\}$. Figure 7 shows an example of four patches sampled using the above-mentioned approach, while Table 1 provides an overview of the parameters used for the sampling procedure.
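The sampling of a synthetic orientation field can be sketched as follows. The specific frequency range, amplitudes, and phases below are illustrative assumptions; only the overall recipe (sum of products of low-frequency sines, gradient, normalization) follows the text.

```python
import numpy as np

def synthetic_orientations(H=128, n=6, seed=0):
    """Sum products of low-frequency sines into a scalar field F, then
    return unit orientation vectors from the normalized gradient of F."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, H),
                       indexing='ij')
    F = np.zeros((H, H))
    for _ in range(n):
        wx, wy = rng.uniform(1, 4, size=2) * 2 * np.pi  # low frequencies
        px, py = rng.uniform(0, 2 * np.pi, size=2)
        F += rng.uniform(0.5, 1.0) * np.sin(wx * x + px) * np.sin(wy * y + py)
    gx, gy = np.gradient(F)
    norm = np.sqrt(gx**2 + gy**2) + 1e-12
    return np.stack([gx / norm, gy / norm], axis=-1)

e = synthetic_orientations()
```

Patches would then be cropped from `e` and rejected whenever neighboring orientations differ by more than the θ_max threshold.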
Table 1: Parameters used to sample the synthetic dataset.
  Global field size: H = W = 800
  Patch sizes: 80×80, 60×120, 40×160
  Sine contributions C: n = m ∈ {6, 8, 10}
  θ_max: 25 deg
  N_rs: 100
From Figure 7b it can be seen that the variation within each patch is generally not that large. Thus, to reduce computation and memory consumption, the orientation vectors are subsampled to half the resolution using a 2×2 block average. Notice that even though the orientation vectors are subsampled, it remains important to use a relatively large patch size to capture sufficient context. Initial tests using the orientation field $\mathbf{e}$ directly as input for the neural network did not show promising results, and thus an alternative encoding based on splitting the angular input into $N$ different channels was used. Each channel is modeled using a Gaussian with centers $c$ equally spaced in the interval $[0; \pi]$ and a kernel radius of $r = 2\pi/N$, evaluated at the input angle $x$.
The encoding constitutes a mapping $\mathbb{R}^1 \to \mathbb{R}^N$ from the angular information of $\mathbf{e}$ to an activation in each of the $N$ channels and is applied in an element-wise manner. This type of input encoding bears resemblance to binning, but ensures a smooth representation of the activation in each channel, as the contribution of the angular input to each channel is weighted by the Gaussian. Figure 8 shows the activations in each of the channels for $N = 12$ and an orientation field with a close to $90°$ phase shift. There are a few caveats which must be taken into account when performing this type of encoding. First, $\mathbf{e}$ is a 2-directional field, and thus invariant to rotations of $\pi$. This means that any angular values outside the $[0; \pi]$ range, in which the Gaussians are defined, can simply be shifted to the equivalent value inside the range. Second, to preserve the $\pi$-periodicity, any activation at one end of the interval must be properly reflected at the other end. This is done by introducing a set of support Gaussians which extend the $[0; \pi]$ interval by $3r$ at each end. Any activations in these support functions are mapped to the appropriate channel at the opposite end of the interval.
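A minimal sketch of such an encoding follows. Instead of explicit support Gaussians, this version enforces the π-periodicity by evaluating the periodic angular distance directly, which is mathematically equivalent for the reflection behavior described above; the exact Gaussian parameterization (how the kernel radius maps to the width) is an assumption.

```python
import numpy as np

def encode_angles(theta, N=12):
    """Map pi-periodic angles to N Gaussian channel activations with
    centers equally spaced in [0, pi) and kernel radius r = 2*pi/N."""
    centers = np.linspace(0.0, np.pi, N, endpoint=False)
    r = 2 * np.pi / N
    x = np.mod(theta, np.pi)                     # fold into [0, pi)
    d = np.abs(x[..., None] - centers[None, :])  # distance to each center
    d = np.minimum(d, np.pi - d)                 # pi-periodic distance
    return np.exp(-(d / r) ** 2)

H = encode_angles(np.array([0.1, 0.1 + np.pi]))
```

Because the distance is computed modulo π, an angle and its π-rotated counterpart produce identical activations, matching the 2-directionality of the field.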

Numerical examples
The proposed method generalizes to a wide range of minimum compliance problems, as the lamination fields and angles provided by the homogenization-based topology optimization method implicitly take boundary and load conditions into account. To demonstrate this capability, the method is tested on three different problems: the Michell cantilever beam, the double-clamped beam and the L-shaped beam. See Figure 9 for domain sizes and the exact placement of boundaries and loads for the three cases. To avoid stress singularities at load locations, the force is applied along a line of $\frac{1}{10}L$ in the Michell cantilever case, a line of $\frac{1}{20}L$ in the L-shaped beam case, and on a block of $\frac{1}{10}L \times \frac{1}{10}L$ in the double-clamped beam case. Furthermore, all elements with a density $\rho > 0.99$ and within a small radius around the load are set to solid. To generate the lamination widths $\boldsymbol{\mu}_1, \boldsymbol{\mu}_2$ used during the post-processing scheme and the angles $\boldsymbol{\theta}$ serving as input for the neural network, the optimization problem stated in eq. 1 is solved until convergence with $\gamma_\theta = 0$, $\Gamma = 0.05$, and a filter radius of $r_{min} = 1.2$ for varying mesh sizes, volume fractions and minimum relative thicknesses $\mu_{min}$. For all of the numerical examples presented below, replication padding with a width of 2 pixels has been added to the angular input for the neural network to provide a better estimate of the image gradient near domain boundaries.

Network training for specified periodicity
Two different neural networks have been used to generate the results presented in the following sections: one trained for a periodicity equal to a wave-length of $\varepsilon_i = 10h_i$ and one for a wave-length of $\varepsilon_i = 20h_i$. Here $h_i$ refers to the element size on the intermediate mesh $T_i$. Both networks have been trained using the two-step training procedure specified in Algorithm 1. In the first step, weight factors $\lambda_\omega = 1$, $\lambda_\tau = 1$ and $\lambda_b = 0$ were used, while weight factors $\lambda_\omega = 0$, $\lambda_\tau = 1$ and $\lambda_b = 2$ were used during the second step of training. In all cases, the target total variation was set to $\tau = (2/\varepsilon_i)^2$, and the fixed size of the frequency band was set to $b = 4$ pixels. For both the first and second steps of training, the Adam optimizer was used. A dataset consisting of 20,000 orientation fields, sampled as described in Section 3 and encoded using 24 Gaussian channels, was used for training. The complete training process, i.e. application of Algorithm 1 for the full 15 epochs on this dataset, took ≈ 1 hour on an NVIDIA Titan X GPU, and once trained, a forward pass of the network takes only a few milliseconds on a GPU. To demonstrate that the proposed method can also be used on a standard CPU, all computation times presented in the following tables were measured on a single Intel Core i7 2.6 GHz CPU. If a GPU is available, significant speed-ups over the stated times can be expected, as the two most expensive operations of the post-processing scheme, skeletonization and the distance transform, can also be performed on a GPU. Wagner [40] reports a 50-times speed-up when performing skeletonization on the GPU, while Zampirolli et al. [43] report a factor-five speed-up for the Euclidean distance transform on the GPU.

Michell cantilever beam
The extensively studied Michell cantilever beam [35] serves as a good starting point for investigating the performance of the neural network-based de-homogenization approach for various parameter configurations. Table 2 shows the results for a fixed homogenization-based resolution of 60×30 with varying periodicity, minimum relative thickness, and volume fraction. The performance is measured in terms of compliance and compliance times volume fraction, normalized by the corresponding reference values from the homogenization-based solution. Note that for small volume fractions, compliance and volume are close to inversely proportional. Consequently, their product can be used as an approximate means to compare structures with slightly varying volume fractions.

Table 2: Performance and computational cost of the neural network-based de-homogenization approach on the Michell cantilever beam for a 60 × 30 input. Here h c is the element size on the coarse mesh (homogenization mesh), ε i is the wave-length on the intermediate mesh (output of the neural network), µ min is the minimum relative thickness, V ref is the reference volume fraction from the homogenization solution, C ref is the reference compliance from the homogenization solution, h f is the element size on the fine mesh (de-homogenization mesh), ε f is the wave-length on the fine mesh, V f is the volume fraction of the de-homogenized design, C f is the compliance of the de-homogenized design, C f V f /(C ref V ref ) is the ratio between the performance of the de-homogenized design and the reference solution (lower is better), and t f is the computation time for the de-homogenization in seconds.
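The normalized performance measure used in the tables is a one-liner; the function name below is illustrative only.

```python
def normalized_performance(C_f, V_f, C_ref, V_ref):
    """Compliance times volume fraction of the de-homogenized design,
    normalized by the homogenization-based reference (lower is better)."""
    return (C_f * V_f) / (C_ref * V_ref)

# A de-homogenized design with 25% higher compliance at the same volume
# fraction yields a ratio of 1.25, i.e. it performs 25% worse.
ratio = normalized_performance(2.5, 0.5, 2.0, 0.5)
```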
Despite the proposed method not relying on any physical (finite element) modeling in either training or post-processing, the de-homogenized designs perform very well compared to the reference homogenization-based results. Moreover, the discrepancy is consistently reduced as the wave-length is decreased: on average, the de-homogenized solutions perform around 25.7% worse in terms of C f · V f for ε i = 20h i , and 22.3% worse for ε i = 10h i . It is also observed that the proposed method performs best for high volume fractions, which in part can be explained by the fact that structural member thicknesses are better resolved as the volume increases, but also because the stiffness of the resulting sub-optimal curved forks is much higher for higher local volume fractions. Quantitatively, the periodicity does not seem to affect the performance significantly, and thus one could argue that a larger wave-length should be preferred, as the minimum relative thickness can then be resolved on a coarser mesh. However, visual inspection of the designs indicates that the lower wave-length designs are more robust; for example, the diagonal bars in Figure 10c are only resolved by one period in the high wave-length case, whereas multiple periods are present in the low wave-length case, cf. Figure 11c. Boundary artefacts in the form of non-load-carrying material are present in many of the designs, especially for larger wave-lengths. One idea to remove these could be the scheme proposed by Groen et al. [16], in which finite element analyses are used to iteratively identify non-load-carrying material and set the corresponding elements to void. Incorporating this post-processing procedure could help reduce the volume fraction used by the neural network-based de-homogenization approach, which generally tends to overshoot the reference volume fraction slightly, i.e. by 5.7% on average for the large wave-length case and 2.4% for the small wave-length case.

Table 3: Performance and computational cost of the neural network-based de-homogenization approach on the Michell cantilever beam for a high-resolution input.
Next, a higher-resolution homogenization-based topology optimization result is used as input to the neural network. The results for a 240×120 input de-homogenized onto a 5760×2880 mesh are shown in Table 3. Here one would expect a higher performance than in the low-resolution input case, as the homogenization-based solution itself is of higher quality and individual sub-optimal branches should have less impact on the final de-homogenized design. Compared to the low-resolution input case, a significant improvement in performance can indeed be observed, with the average performance in terms of C f · V f being within 13% of the reference value. However, this comes at an increased CPU cost, as discussed next.

Computational cost
The computational costs associated with the method are much lower than those of the standard TO approach, and the method outperforms current state-of-the-art de-homogenization methods several-fold. The de-homogenization of the 60 × 30 homogenization-based solution onto 1440 × 720 and 2400 × 1200 meshes took ≈ 1.3s and ≈ 2.2s, respectively; standard topology optimization at the same resolutions, e.g. using the top88 code, is far more expensive. A nice property of the proposed method is that the computational cost of all major operations (convolution, bi-linear interpolation, skeletonization and Euclidean distance transform) scales linearly in time, making the method very well-suited for generating very high-resolution designs. As an example, for the 5760 × 2880 mesh (16.5 million elements), direct solvers either become extremely memory-consuming or break down entirely, and parallel computing [1] in combination with e.g. multi-grid [4] preconditioners would have to be leveraged in order to perform standard TO. With the proposed method, the optimization can be carried out on a coarse-scale mesh, e.g. 240 × 120 elements, using the homogenization method, and afterwards projected onto the high-resolution mesh. All of this can be accomplished in a couple of minutes on a modern-day laptop without having to leverage complicated multi-processing frameworks or multi-scale solvers. Figure 13 shows the average computation time for the six designs in Figure 12, along with a breakdown of the computation time for the different sub-components of the framework. Here the loss function evaluation, solidify branches, network forward pass, skeletonization, and distance transform are the most costly components. The solidify branches component is mostly a safety measure to make sure that branches are always solid, but is often not needed, as the two-step training procedure is enough to assure this in most cases. Since it is the only component in the post-processing procedure relying on a loss function evaluation, dropping it could save up to 40% of the computational cost.
The three remaining costly components can, as mentioned earlier, all be run on a GPU for a significant speed-up. In terms of memory consumption, the proposed method is able to handle inputs up to a resolution of 800 × 800 on a single 11 GB GPU. This corresponds to a de-homogenized design with a resolution of 32000 × 32000 when using ε i = 10h i , which should be more than enough for representing any currently manufacturable design in 2D. For 3D purposes, the framework can, with no further theoretical development, be extended to multi-GPU usage by wrapping the model with a distribution module, such as tf.distribute in TensorFlow or torch.nn.DataParallel in PyTorch. Considering these significant advantages in terms of computational cost, a loss in performance of 7-25% is not prohibitive, and in the low-resolution case, where the de-homogenization only takes a few seconds, the proposed approach could serve as a real-time evaluation tool for a discrete and manufacturable version of the homogenization-based design. In the high-resolution case, the de-homogenized design could serve as a good starting guess for high-resolution shape or topology optimization, thus saving a large number of iterations.
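The skeleton-plus-distance-transform step at the heart of the post-processing can be sketched with SciPy. This is a minimal illustration, not the authors' exact procedure: given a (precomputed) skeleton of the network output, the Euclidean distance transform yields each pixel's distance to the nearest member centre-line, and thresholding that distance against half the desired member width in pixels dilates each member to its target thickness. The function name `project_widths` and the uniform half-width are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def project_widths(skeleton, half_width):
    """Grow skeleton pixels into solid members of the desired thickness.

    skeleton:   boolean array, True on member centre-lines.
    half_width: desired half-thickness in pixels (scalar or array).
    """
    # distance_transform_edt measures the distance to the nearest zero
    # entry, so the skeleton mask is inverted first: skeleton pixels get
    # distance 0, and the distance grows away from the centre-lines.
    dist = distance_transform_edt(~skeleton)
    return dist <= half_width

# Demo: a single horizontal centre-line grown to a 3-pixel-wide member.
skeleton = np.zeros((7, 7), dtype=bool)
skeleton[3, :] = True
solid = project_widths(skeleton, half_width=1.0)
```

Both `skeletonize` (e.g. from scikit-image) and the distance transform have GPU counterparts, which is what enables the speed-ups cited above.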

Double-clamped beam
As a second example, the proposed method is applied to the double-clamped beam with center loading. An input of size 200×50 is considered and de-homogenized at two different wave-lengths, with the fine-scale mesh size chosen according to the minimum relative thickness. The results are shown in Table 4. Here the average performance is 22.4% worse than the reference case, and the volume constraint is violated by an average of 2.3%. The best performance is achieved for the small wave-length case, but at the cost of double the computation time.

Table 4: Performance and computational cost of the neural network-based de-homogenization approach on the double-clamped beam example.

L-shaped beam
As a last example, the L-shaped beam is considered. This example presents a more challenging case due to the singularity at the sharp corner of the design domain. Directly solving the optimization problem in eq. 1 leads to small regions near the sharp corner in which the angles are turned 90° compared to neighboring elements. One way to remedy this issue is the vector field combing method presented in Stutz et al. [36], which provides a consistent labeling of the lamination orientations. Table 5 and Figure 15 show the results for the L-shaped beam using the combed input fields. It can be seen that the higher periodicity only leads to a slightly better performance, with both designs being within roughly 12% of the reference solution.

Table 5: Performance and computational cost of the neural network-based de-homogenization approach on the L-shaped beam.
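The idea behind combing can be illustrated with a toy 1D sketch. This is a strong simplification of the method in Stutz et al. [36], under the assumption that lamination orientations are only defined modulo 90° (swapping the two layers rotates the labeling by 90°), so a consistent labeling can be chosen greedily by shifting each angle by a multiple of π/2 toward its already-combed neighbour.

```python
import math

def comb_angles(theta, period=math.pi / 2):
    """Greedy 1D combing: pick, for each angle, the representative
    theta + k * period closest to the previously combed angle."""
    combed = [theta[0]]
    for t in theta[1:]:
        k = round((combed[-1] - t) / period)
        combed.append(t + k * period)
    return combed

# The third entry is flipped by 90 degrees relative to its neighbours;
# combing shifts it back so the sequence varies smoothly.
angles = [0.0, 0.1, 0.1 + math.pi / 2, 0.2]
combed = comb_angles(angles)
```

The real method operates on 2D vector fields and must propagate consistency across the whole domain, but the local decision is of this form.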

Conclusions and future work
An unsupervised deep learning-based de-homogenization framework capable of generating high-resolution, manufacturable, and continuous microstructures has been presented. Compared to previous deep learning-based topology optimization approaches, the proposed method does not seek to perform topology optimization in an end-to-end manner, but instead utilizes the solution from a traditional homogenization-based topology optimization as input. This leads to the, perhaps, most important feature of the proposed methodology: it is, to a large extent, insensitive to domain size, loading, and boundary conditions. The proposed fully convolutional neural network approach is capable of synthesizing continuous and periodic microstructures, while the input, in the form of an orientation field, guides the network towards a mechanically sound design. A subsequent post-processing scheme assigns an appropriate thickness to each member of the final design and ensures manufacturability by imposing a minimum relative thickness. Training of the neural network is extremely inexpensive, as synthetic orientation fields can be used as training data, and the trained network generalizes to a wide range of problems, as the underlying load and boundary conditions are implicitly enforced by the homogenization-based input. Numerical examples show that the proposed method is a factor 5 to 10 faster than current state-of-the-art de-homogenization approaches and has a performance within 7-25% of the reference solution (before post-processing). While only 2D examples are shown, the very low computational cost of the method makes it an obvious candidate for 3D applications. Furthermore, as all operations in the proposed de-homogenization pipeline support GPU computation, a GPU implementation could pave the way for interactive de-homogenization, i.e. projection during the optimization, in 2D cases.
There are several ways in which the current method could be improved. For example, lamination widths could be used directly as input to the neural network, instead of being applied in the post-processing scheme. This would allow the neural network to trade a high error in inconsequential parts of the domain, i.e. regions with very little material, for a lower error in solid parts of the domain. As of now, the proposed method also tends to slightly overshoot the prescribed volume fraction. This problem could be remedied by iteratively performing finite element analyses to identify non-load-carrying members in the final design and remove them. Furthermore, a graph-based post-processing scheme could be used to identify U-shaped branches and turn them into V-shaped branches, which are known to be mechanically superior when designing for minimum compliance. Finally, the iso-contour of the design could be extracted directly from the distance transform utilized in the post-processing scheme, such that the design can be described using a conforming mesh. This would yield a smoother boundary representation while allowing a more efficient finite element analysis during post-processing and subsequent shape optimization. An obvious idea, to be studied in future work, is the extension to 3D, which is expected to result in even larger time savings than reported here for the 2D case.
A Michell cantilever - 120×60 designs

Table 1: Performance and computational cost of the neural network-based de-homogenization approach on the Michell cantilever beam for a 120 × 60 input.

B Network architecture
The neural network consists of 9 convolutional layers, four ResNet blocks, and three upsampling layers. The first two layers use kernel sizes of 7 and 5, respectively, while all other layers use a kernel size of 3. Batch normalization is applied between the convolutional layers, and ReLU is used as the activation function in all layers except the last, which uses a sigmoid. For upsampling, nearest-neighbor interpolation is used, as this is less prone to checkerboard artifacts than learned upsampling filters. The network architecture is summarized in Table 1.
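A PyTorch sketch of an architecture matching this description is given below. It is an approximation, not the authors' exact network: the channel counts (24 Gaussian input channels, 64 internal channels), the ordering of the blocks, and the ×2 upsampling factor are assumptions; only the layer counts, kernel sizes, normalization, and activations are taken from the text.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Standard residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))

class DehomNet(nn.Module):
    """Sketch: 9 conv layers, 4 ResNet blocks, 3 nearest-neighbour upsamplings."""
    def __init__(self, in_ch=24, ch=64):
        super().__init__()
        conv = lambda i, o, k: nn.Sequential(
            nn.Conv2d(i, o, k, padding=k // 2), nn.BatchNorm2d(o), nn.ReLU())
        up = lambda: nn.Upsample(scale_factor=2, mode="nearest")
        self.net = nn.Sequential(
            conv(in_ch, ch, 7), conv(ch, ch, 5),          # convs 1-2 (kernels 7, 5)
            *[ResBlock(ch) for _ in range(4)],            # four ResNet blocks
            up(), conv(ch, ch, 3),                        # conv 3
            up(), conv(ch, ch, 3),                        # conv 4
            up(), conv(ch, ch, 3),                        # conv 5
            conv(ch, ch, 3), conv(ch, ch, 3), conv(ch, ch, 3),  # convs 6-8
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())       # conv 9 + sigmoid
    def forward(self, x):
        return self.net(x)
```

With three ×2 upsamplings, a 60 × 30 input would map to a 480 × 240 output; the actual intermediate-mesh resolution used by the authors may differ.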