Hierarchical remeshing strategies with mesh mapping for topology optimisation

This work investigates the use of hierarchical mesh decomposition strategies for topology optimisation using the bi-directional evolutionary structural optimisation algorithm. The proposed method uses a dual mesh system that decouples the design variables from the finite element analysis mesh. The investigation focuses on previously unexplored aspects of these techniques, examining the effect of five meshing parameters on the analysis solving time (i.e. computational effort) and the analysis quality (i.e. solution optimality). The foreground mesh parameters, including the adjacency ratio and the minimum and maximum element sizes, were varied independently across the solid and void domain regions. Within the topology optimisation, strategies for controlling the mesh parameters were investigated. The differing effects of these parameters on the efficiency and efficacy of the analysis and optimisation stages are discussed, and recommendations are made for parameter combinations. Among the key findings were that increasing the adjacency ratio improved efficiency only modestly (the largest effect came from the minimum and maximum element size parameters) and that the most dramatic reduction in solve time can be achieved by not setting the minimum element size too low, assuming mapping onto a background mesh with a minimum element size of 1. © 2016 The Authors. International Journal for Numerical Methods in Engineering. Published by John Wiley & Sons, Ltd.


INTRODUCTION
Structural optimisation is commonly split into three main categories: sizing optimisation, shape optimisation and topology optimisation (TO). These names reflect the degree of design freedom that each type allows. TO [1] aims to solve a material distribution problem, with the ultimate aim of determining where material should be removed and where material should be retained within a predefined domain. It therefore offers the greatest freedom of the three categories to change the structure. TO is typically carried out on a fixed finite element analysis (FEA) mesh that is defined in advance of the optimisation procedure, and these mesh elements also commonly form the optimisation design variables. The resolution of this mesh constrains the minimum topological features that can be represented and so limits the optimality of the final design. The FEA is usually the most time-consuming stage of the TO process. Modifying the analysis mesh during the course of optimisation has two primary benefits. Firstly, the optimisation can be carried out more efficiently, as fewer degrees of freedom (DOFs) reduce computational expense and storage requirements. Secondly, the structure can be represented in finer detail where this detail is required, without requiring the whole mesh to be of high resolution. This should provide improved FEA of the structure and increased overall optimality of the structure.
The earliest example in the literature of integrating an adaptive meshing strategy into a TO routine was that of Maute and Ramm in 1995 [2], and there has been a steady stream of work on this topic since then. Generally, there are two main categories of methods for modifying the analysis mesh: modifying an existing mesh through refining/coarsening [3] or complete remeshing to generate a new mesh [2,4,5]. Some approaches use global refinement, where the mesh is incrementally refined across the whole domain without reference to the actual structure [6,7].
Proposed approaches include locally varying the polynomial degree of the finite element shape functions (p-adaptivity), locally relocating, aligning or moving the mesh (r-adaptivity) [8,9] or, most commonly, locally refining or coarsening the resolution of the mesh (h-adaptivity) [3,4,8,10-27]. There are also differences in how the analysis mesh is connected in cases where quadrilateral or hexahedral elements are used: some keep hanging nodes in the analysis and use multi-point constraint (MPC) equations [10,11,15,20,22,28], while others avoid hanging nodes by using connection templates. Strategies also vary in how often the meshes are modified: some use the same mesh until the optimisation has converged or until a set number of iterations has been reached, after which the mesh is modified and the optimisation restarted from the previous optimisation result.
A feature of some approaches is to use multiple meshes to allow decoupling of the finite elements, design variables and material distribution density elements so that their resolutions can differ [4,23,24,26,27,28,29]. The majority of the studies coupled the adaptive meshing with density-based optimisation approaches such as the homogenisation method or solid isotropic material with penalisation, with the exceptions being the level-set approach [8,9,16] and bi-directional evolutionary structural optimisation (BESO) [3].
All the cases in the literature where a quadtree decomposition approach is used insist that the quadtree be balanced, ensuring no adjacent elements are of vastly different size. This is particularly important when the quadtree mesh is used as a basis for the creation of a triangular mesh, where an unbalanced decomposition can produce greatly distorted triangles, which lead to inaccurate analysis results. To achieve this balancing, a constraint is imposed on the element adjacency edge ratio of 2:1. There are no instances in the literature of remeshing coupled with TO where this ratio is allowed to exceed 2:1. There also does not appear to be any detailed study of the effect of the meshing parameters on the optimisation results. In addition, there are no instances in the literature where the local meshing parameters are allowed to change incrementally through an optimisation run. This work aims to contribute to the topic by addressing each of these three areas. The main research questions to be answered by this study are as follows: (1) What is the effect of using fixed mesh decomposition parameters on the efficiency and efficacy of the optimisation? (2) What is the effect of using incremental parameter variation strategies on the efficiency and efficacy of the optimisation? (3) Is it possible to identify the most suitable set of mesh decomposition parameters for the best compromise between speed and optimality?
678 A. PANESAR ET AL.

METHODOLOGY
The key methodology of this research is a TO algorithm coupled with a remeshing strategy. This method was used to allow the effect of meshing parameters and application strategies on the optimised solutions to be investigated. In addition to this, investigations were carried out to understand the effect of the remeshing strategy on the analysis solve time and the analysis quality. This section is divided into three main subsections. Firstly, the remeshing strategy is described. Secondly, the TO algorithm and test cases defined are presented. Thirdly, the methodology for assessing the effect of the remeshing parameters on the results is explained.

Methodology: remeshing strategy
The remeshing method used consists of three key concepts, which are described in this section. The first is a dual mesh system, which enables separate meshes to be used for the analysis and optimisation stages. The second is the decomposition procedure, which generates the foreground finite element analysis mesh from the design variable mesh. The third is the use of MPC equations to allow hanging nodes in a mesh of undistorted elements.
2.1.1. Dual-mesh system. In order to enable flexibility in the mesh configuration used for the FEA, the analysis elements were decoupled from the discrete (i.e. solid or void) design variables by defining an additional mesh, termed the 'background mesh'. At each optimisation iteration, the analysis results from the foreground mesh (i.e. the analysis quadtree/octree elements) were mapped (spatially, in a one-to-one fashion) onto the background mesh (i.e. the design variables), which had a constant predefined resolution and remained unchanged throughout the optimisation. At each iteration of the optimisation, a completely new foreground mesh was generated; the details of how this was realised are provided in the next section.

Mesh decomposition procedure.
Quadtree decomposition is a technique used to partition 2D space by recursively subdividing an element into four, starting with the largest allowable element (Figure 1b). This method was used to generate elements for the FEA stage. The equivalent 3D version is termed octree decomposition (Figure 1c). For this work, the elements were kept square (2D) or cubic (3D). The primary condition used to determine whether an analysis element of the foreground mesh should be subdivided was whether there was some level of heterogeneity (i.e. intermediate averaged densities) in the corresponding area of the background mesh. Enforcing this condition ensured the analysis elements had only two possible states (i.e. solid or void). For a predefined threshold, t = 0.5, the element was subdivided if (max(B) − min(B)) > t, where B denotes the region of the background mesh containing the elements that correspond to the foreground element being considered for subdivision, as shown in Figure 2.
Constraining this subdivision were three primary parameters, namely the minimum and maximum element dimensions, e_min and e_max, respectively, and the maximum number of elements adjacent to an element edge (2D) or face (3D), a. Figure 3 summarises the decomposition procedure: the initial decomposition was carried out on the basis of the homogeneity and element dimension constraints, followed by an enforcement stage for the adjacent elements constraint. The resulting foreground mesh consists of the largest possible elements that satisfy the aforementioned conditions. The material property associated with each foreground mesh element was obtained from the underlying background mesh.
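The subdivision logic above can be sketched as a short recursive routine. This is an illustrative reconstruction, not the authors' implementation: the function and variable names (`needs_split`, `decompose`, `e_min`, `e_max`, `t`) are our own, and the adjacency-constraint enforcement pass is omitted for brevity.

```python
import numpy as np

def needs_split(background, x, y, size, t=0.5):
    """Heterogeneity test: subdivide if the covered background region
    mixes solid and void, i.e. (max(B) - min(B)) > t."""
    region = background[y:y + size, x:x + size]
    return (region.max() - region.min()) > t

def decompose(background, x, y, size, e_min, e_max, leaves):
    """Recursively split a square cell into four until it is homogeneous,
    no larger than e_max and no smaller than e_min."""
    if size > e_max or (size > e_min and needs_split(background, x, y, size)):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                decompose(background, x + dx, y + dy, half, e_min, e_max, leaves)
    else:
        leaves.append((x, y, size))

# Toy 8x8 background: solid left half (density 1), void right half (density 0).
bg = np.zeros((8, 8))
bg[:, :4] = 1.0
cells = []
decompose(bg, 0, 0, 8, e_min=1, e_max=8, leaves=cells)
# The root cell is heterogeneous, so it splits once into four homogeneous quadrants.
```

In a full implementation, a balancing pass would then re-split any leaf whose neighbours violate the adjacency ratio, as described in the text.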

Multi-point constraint equation generation.
To enable a mesh such as that shown in Figure 1b to be analysed using FEA, the hanging nodes, h_1 to h_5, must be specified in terms of their relationship to their surrounding reference nodes, r_1 to r_4. These relationships are known as MPC equations; one MPC equation is required for each displacement component of each hanging node. The shape functions of the isoparametric elements are used to relate a hanging node to the four reference nodes of the larger element face (Figure 1a). For each hanging node, the weightings w_1 to w_4, corresponding to reference nodes r_1 to r_4, were calculated as

w_1 = (a − ξ)(b − η)/(4ab), w_2 = (a + ξ)(b − η)/(4ab), w_3 = (a + ξ)(b + η)/(4ab), w_4 = (a − ξ)(b + η)/(4ab)    (1)

where ξ and η are the parameters that define any point within the (ξ, η)-coordinate plane defined over the element in consideration, and 2a and 2b are the larger element face edge lengths (for the elements used in this work, which are always square or cubic, a = b).
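The bilinear shape-function weighting can be checked with a few lines of code. This is a sketch under the assumption that the reference nodes r_1 to r_4 are ordered counter-clockwise starting from the corner at (−a, −b); the function name is ours.

```python
def mpc_weights(xi, eta, a, b):
    """Bilinear shape-function weights of reference nodes r1..r4 for a point
    (xi, eta) inside a 2a-by-2b element face.  Corner ordering assumed
    counter-clockwise from (-a, -b)."""
    denom = 4.0 * a * b
    return ((a - xi) * (b - eta) / denom,
            (a + xi) * (b - eta) / denom,
            (a + xi) * (b + eta) / denom,
            (a - xi) * (b + eta) / denom)

# A hanging node at the midpoint of the right edge of a unit half-width element
# (xi = a, eta = 0) should be tied equally to the two nodes on that edge.
w = mpc_weights(1.0, 0.0, 1.0, 1.0)
```

As expected for an edge midpoint, the two edge nodes each carry weight 0.5 and the opposite nodes carry zero, and the weights always sum to 1 (partition of unity).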
The procedure for including MPC equations in the FEA is now explained. In a static FE analysis, the equilibrium of the structure is governed by a system of equations expressed in terms of the displacements of the nodes:

K u = F    (2)

where K is the stiffness matrix, u the nodal displacements vector and F the vector of nodal applied forces. Equation (2) can be expressed as

K u = P + R + Q    (3)

where P is the vector of external forces directly applied to the nodes, R is the vector of forces on the nodes from the MPC equations and Q is the vector of forces from single-point constraints. The MPCs establish a relationship between the displacements of the dependent nodes u_m and the independent nodes u_n:

u_m = G u_n    (4)

where G is a matrix of weighting factors. The forces on the dependent and independent nodes, R_m and R_n, respectively, must be in equilibrium: the total work carried out by them should be zero for any displacement vector:

u_m^T R_m + u_n^T R_n = 0    (5)

Therefore, using equation (4), we obtain

u_n^T (G^T R_m + R_n) = 0    (6)

which, holding for any displacement vector u_n, allows calculation of the forces R_n:

R_n = −G^T R_m    (7)

Methodology: topology optimisation
The bi-directional evolutionary structural optimisation (BESO) method [30,31] was used for TO in this work. This was primarily because it made identification of boundaries straightforward, as the structure is inherently discrete throughout the optimisation (because of the two possible design variable values). The compliance-based optimisation problem (compliance being a measure of the inverse of stiffness) can be stated as

minimise C = (1/2) F^T u, subject to V* − Σ_i V_i x_i = 0, x_i ∈ {x_min, 1}    (8)

where C is the compliance, or alternatively the total strain energy (SE), V_i is the elemental volume, V* is the target structural volume, and x_i = x_min or 1 for void or solid regions, respectively. For this work, x_min was set at 1e-6. This BESO formulation uses a material interpolation scheme:

E(x_i) = E_1 x_i^p    (9)

where E_1 is the Young's modulus for the solid region and p is the penalty exponent. As detailed in [30,31], and explained by Bendsoe and Sigmund 2003 [32], the sensitivity of the objective function with respect to a change in the ith element is

∂C/∂x_i = −(p/2) x_i^(p−1) u_i^T K_i^0 u_i    (10)

where K_i^0 denotes the elemental stiffness matrix for elements in the solid region.
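The MPC elimination derived above can be illustrated numerically: stacking the identity over G gives a transformation T from independent DOFs to all DOFs, and the reduced system T^T K T u_n = T^T F contains no constraint forces, since they cancel out of the reduced equations. The 3-DOF system, matrix values and names below are our own toy example, not the paper's model.

```python
import numpy as np

# Toy 3-DOF system K u = F with one dependent DOF (index 2).
K = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])
F = np.array([1.0, 2.0, 0.0])

# MPC: u2 = 0.5*u0 + 0.5*u1, i.e. u_m = G u_n with G a 1x2 weighting matrix.
G = np.array([[0.5, 0.5]])
T = np.vstack([np.eye(2), G])      # maps independent DOFs to all DOFs

K_red = T.T @ K @ T                # reduced (condensed) stiffness
F_red = T.T @ F                    # reduced load vector
u_n = np.linalg.solve(K_red, F_red)
u = T @ u_n                        # recover the full displacement vector
```

The recovered solution satisfies the constraint exactly: the dependent displacement equals the weighted average of its reference displacements, mirroring how a hanging node is tied to its edge nodes.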
The relative ranking of the elemental sensitivities for both solid and void region elements can be expressed as

α_i = (1/2) u_i^T K_i^0 u_i for x_i = 1 (solid); α_i = (x_min^(p−1)/2) u_i^T K_i^0 u_i for x_i = x_min (void)    (11)

where α_i is the sensitivity number for the ith element. It can be seen that the sensitivity number for the solid region is equal to the elemental SE, while the sensitivity number for the void region depends on the value of p. For this work, p was selected to be 1, and therefore the sensitivity number was equal to the SE for both regions. An external finite element solver, MSC Nastran [33], was used to calculate this. As a non-uniform mesh was used for this work, the sensitivity number included the effect of the element size: the SE was mapped onto the background mesh elements after dividing by the foreground element area or volume to obtain the SE density [34]. The rest of the procedure is typical of the BESO methodology, as described in the aforementioned sources. The overall optimisation procedure used for this work is summarised by the flow chart in Figure 3.
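The size-normalised mapping step described above (dividing each foreground element's SE by its area before writing it onto the covered background cells) can be sketched as follows. The foreground layout, array shapes and function name are illustrative assumptions, not the paper's code.

```python
import numpy as np

def map_se_density(foreground, background_shape):
    """Map elemental strain energy onto the background mesh as an SE *density*.

    foreground: list of (x, y, size, strain_energy) square elements, where
    (x, y) indexes the background cell at the element's corner.
    """
    sed = np.zeros(background_shape)
    for x, y, size, se in foreground:
        # Divide by element area so large elements do not dominate the ranking.
        sed[y:y + size, x:x + size] = se / (size * size)
    return sed

# A 2x4 background covered by one 2x2 foreground element and four 1x1 elements.
fg = [(0, 0, 2, 8.0),   # 2x2 element, SE = 8  ->  density 2 in each cell
      (2, 0, 1, 3.0),
      (2, 1, 1, 1.0),
      (3, 0, 1, 0.0),
      (3, 1, 1, 0.0)]
sed = map_se_density(fg, (2, 4))
```

With p = 1, this SE density is exactly the per-cell sensitivity number that the BESO update then filters and ranks.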

Topology optimisation test cases.
The majority of the studies were carried out in 2D, with a few carried out in 3D to assess the consistency of any effects observed. For the 2D cases, a cantilever plate (a zero-thickness, i.e. z = 0, version of Figure 4) was used with a background mesh domain size of x = 256 by y = 128 elements, where the objective was to minimise compliance subject to a 0.5 volume fraction (V_f) constraint. In this implementation, the domain edge lengths were set to a power of 2 to ensure the domain can always be subdivided down to elements of size 1 by 1. The BESO parameters (refer to Table I) were set as an evolutionary rate, ER, of 1% and a filter radius, r, of 6, where the filtering was carried out using the nodal strain energies over the region of influence (i.e. a circle/sphere of radius r) around an element of the background mesh [34]. For the 3D cases, a background mesh domain size of x = 128 by y = 64 by z = 32 elements with an ER value of 2% was used to reduce the model run time, subject to the same objective and volume fraction constraint. Further information on the empirically determined value of the terminating iteration, I_term, of 70 is provided in Section 2.3.4.

Methodology: investigation of the remeshing parameter effects
The primary concern of this investigation was the effect of the remeshing parameters on the optimisation efficiency and efficacy. Within this scope, four investigations into parameter effects were carried out. The first of these looked at the effect of hanging nodes on the analysis solve time. The second looked at the effect of parameters on the analysis quality. The third looked at the effect of the parameters on the analysis time. The fourth looked at the effect of the parameters on the TO results, using both fixed parameters and incremental variation strategies, to assess if a good balance between optimisation efficiency and efficacy could be achieved. Referring to Figure 3, only the FEA stage computation time was measured for this work, for two main reasons. Firstly, it is the analysis stage that is usually the bottleneck for the TO routine, and so it largely dictates the efficiency of the whole process. Secondly, the methods implemented for the actual mesh handling and TO work were written in MATLAB [35] (in a non-optimised form as this was not the primary focus of the study), and therefore, their efficiencies are not comparable with the analysis solver, which was a commercial piece of software. While there would be some time penalty to pay for carrying out complex mesh operations, these were considered of secondary importance to the analysis stage itself.

Investigation into the effect of hanging nodes on analysis solve time.
As was explained in Section 2.1, the foreground mesh contains hanging nodes that must be defined in terms of their relationship to their adjacent independent nodes. To evaluate whether the presence of these hanging nodes has an effect on the analysis solve time of the model, a study was carried out using samples consisting of a constant number of independent nodes (total nodes minus hanging nodes) but with varying proportions of hanging nodes. Figure 5 summarises how the samples were generated from a base checkerboard arrangement with a specified number of elements being subdivided into four smaller elements (i.e. one level of decomposition). The checkerboard ensured that no subdivided elements were adjacent. Each element subdivided introduces one additional independent node and four additional hanging nodes. For each independent node added, a different independent node was removed from the outer boundary of the mesh, thus ensuring the total quantity of independent nodes remained constant. MPC equations were then generated to link the hanging and independent nodes.
For the example shown in Figure 5, the nodes on the boundary encompassed by the dashed lines were removed to compensate for the additional independent nodes added by the subdivided elements. The elements around the checkerboard were not allowed to be removed so that the nodes generated on the boundary of a subdivided element were actually treated as hanging nodes rather than just independent nodes on the boundary. In addition, other elements were retained to ensure that the load location and quantity of single-point constraints remained constant. While the number of elements in the mesh varied, any change in time taken by the solver such as for reading in and processing model files would be marginal.
As explained in [36], for most practical cases it is reasonable to assume that the computational time is proportional to N√N, where N is the number of DOFs. This is based on the assumptions that, in a 2D shell-type structure with a mesh of n × n nodes, the number of DOFs is proportional to n^2 and the bandwidth of the stiffness matrix is proportional to n. For this study, it was of interest to see whether the inclusion of hanging nodes, as opposed to independent nodes, increased the solve time. The study was carried out for two domain sizes, and each sample was analysed three times to obtain an average measure of solve time. In addition, to investigate more fully the effect of the number of hanging and independent nodes on the solve time, the dataset from each analysis model of the 2D fixed parameter combination optimisations (results in Section 3.4) that had a mesh with hanging nodes was studied. These models were also analysed three times.
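The N√N scaling assumption has a simple consequence that is worth making explicit: halving the element edge length in 2D roughly quadruples the DOFs, so the expected solve-time cost grows by a factor of about 4^1.5 = 8. A minimal sketch under that assumption (the function name and DOF counts are illustrative):

```python
def relative_cost(n_dof, n_dof_ref):
    """Relative solve-time estimate under the t ~ N*sqrt(N) assumption [36]."""
    return (n_dof / n_dof_ref) ** 1.5

# Uniformly refining a 2D mesh once: 4x the DOFs -> ~8x the solve time.
ratio = relative_cost(4 * 10_000, 10_000)
```

This super-linear growth is why avoiding unnecessarily small minimum element sizes pays off so strongly in the results reported later.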

Investigation into the effect of remeshing parameters on analysis quality.
As mentioned earlier, the meshes generated using the decomposition procedure are influenced by three primary parameters: the minimum and maximum element dimensions, e_min and e_max, respectively, and the maximum number of elements adjacent to any larger element edge or face, a. The parameters a and e_max were varied independently for the solid and void regions, giving a_s and e_max,s for solid regions and a_v and e_max,v for void regions. The parameter e_min remained consistent for both solid and void regions, as this is required for the mesh to be compatible at the region interfaces. The value selection lists for the five parameters are shown in Table II.

This study looked at applying parameter combinations to an existing structure to generate a mesh for analysis. As well as examining the effect this had on solve time, the effect on the SE density was studied. The TO procedure uses the elemental SE density to determine how the structure changes, so any effect of the mesh parameters on this quantity could affect the optimised structure. To quantify the difference between results, including any additional variation over the combination set studied, the absolute difference, D, between all combinations was calculated:

D = Σ_i |SED_i^A − SED_i^B|    (13)

where SED represents the elemental SE density, i denotes the individual element, and A and B denote the two parameter combinations being compared. Note that, to reduce the skewing effect of the values at the boundary conditions, these regions were masked from the study. Prior to updating the design variables in the TO procedure based on these SE density results, a filtering process was carried out, as indicated in Figure 3.
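The absolute-difference measure D, including the boundary-region masking described above, amounts to a masked L1 distance between two SE density fields. A minimal sketch (the function name, array values and mask are our own illustration):

```python
import numpy as np

def abs_difference(sed_a, sed_b, mask=None):
    """Absolute difference D between two elemental SE density fields.

    mask: optional boolean array; True marks elements to include, so
    boundary-condition regions can be excluded as described in the text.
    """
    diff = np.abs(sed_a - sed_b)
    if mask is not None:
        diff = diff[mask]
    return diff.sum()

# Two small SE density fields from two hypothetical parameter combinations.
a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([[1.0, 1.5],
              [3.5, 4.0]])
mask = np.array([[True, True],
                 [False, True]])   # exclude one boundary-condition element
D = abs_difference(a, b, mask)
```

In the study this quantity is evaluated between every pair of parameter combinations, before and after filtering, to judge how strongly the mesh influences the sensitivity field.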

Investigation into the effect of remeshing parameters on analysis time.
In order to evaluate the effect of the decomposition parameters, the optimised topology shown in Figure 6 was used for structural FEA with meshes generated using a number of parameter combinations. Of interest in this study was the effect of these parameter combinations on the analysis time. Figure 6 also shows, as a demonstration, the mesh generated using the latter of these combinations.

Investigation into the effect of remeshing parameters on optimisation.
This section provides details of two investigations that seek to illuminate the influence of the remeshing parameters on the optimised results when (a) they remain fixed and (b) they are incrementally modified as the optimisation iterations progress.

The first investigation kept the parameters fixed throughout the optimisation iterations in order to investigate the effect of the parameters a, e_max and e_min. A number of parameter combinations for the decomposition were used, specifically those contained in Table III. These were used on both the 2D and 3D test cases. In 3D, a was squared to constrain the number of smaller elements adjacent to a larger element on a face, rather than an edge. Note that the 3D test case used a different background mesh resolution to the 2D one; so, for example, an e_max value of 8 is 6.25% and 3.125% of the longest domain edge length for 3D and 2D, respectively.
Variation strategies were investigated that incrementally modify the remeshing parameter values as the optimisation iterations progress. To investigate whether variation strategies can be used to achieve a good balance between optimisation efficiency and efficacy, a number of parameter combinations were defined for four of the five parameters. e_max and e_min were kept equal across the solid and void regions, while a_s and a_v were controlled independently. Figure 7 shows the varying e_max and e_min strategies. The first strategy, shown in Figure 7a, was designed to start e_max small, to give the highest quality FEA, and then to increase it as the structure becomes better defined, to improve efficiency; e_min was kept small to allow fine changes. The second strategy, shown in Figure 7b, was designed to start e_max large and decrease it as the optimisation progresses, to gradually improve the quality of the FEA as the structure becomes better defined; e_min was again kept small to allow fine changes. The third strategy, shown in Figure 7c, was designed to start both e_max and e_min large and decrease them as the structure becomes more defined. The fourth strategy, shown in Figure 7d, was to keep e_max large throughout but to refine e_min as the structure becomes more defined, to allow smaller changes to occur. Throughout all of these strategies, a was kept constant at 2 (in 2D) and 2^2 (in 3D); note that in 3D this constrains the quantity of elements adjacent to a face, rather than an edge. Figure 8 shows the a_v and a_s strategies. The first strategy, shown in Figure 8a, was designed to start with a 2:1 ratio in the solid region and then progressively increase it to improve efficiency once the structure was better defined; the ratio in the void region was kept constant at its maximum. The second strategy, shown in Figure 8b, was designed to be the opposite of the first strategy for the solid region: the ratio was kept high and progressively reduced as the structure became better defined.
The third and fourth strategies, shown in Figure 8c and d, were designed to be similar to the first and second strategies, respectively, but consistent across the solid and void regions. For all of these strategies, e_min was kept constant at 1 and e_max was kept constant at 16. This value of e_max was chosen because no benefit for e_max > 16 was observed in the preliminary trials (using fixed parameters) for the domain size and model setup used, as there was insufficient space in the void or solid regions to effectively accommodate elements of this size. Therefore, for the strategies where e_max was increased progressively, the limit was set at 16. While in the cases where e_max was decreased progressively there would be space to accommodate larger elements (up to half the minimum domain dimension), it is unlikely that such an extreme element size would provide FEA results of adequate accuracy on which to start the optimisation, and so this was limited to 32. Preliminary trials showed that the volume fraction constraint was typically met by about 70 iterations, so this was used as a guide to determine the iteration increments for parameter value changes.
The two measures used to evaluate the effect of the decomposition parameters on optimisation efficiency and efficacy were the analysis solving time and the total SE of the optimised topology. Because of varying convergence times for the different parameter combinations, the efficiency measure was defined as the total solving time up to the point that the volume fraction constraint was satisfied.

Figure 9. Convergence study showing the change in total strain energy and the convergence of the gap between these values as the mesh resolution increased: 'a' represents a structure with a relatively poorly represented thin member and 'b' represents a structure with well-represented thicker members.
With a constant evolution rate, this was at a constant iteration across all samples. The solving time was recorded from FEA solver log files and consisted of the central processing unit time, which represents the actual time taken to execute the code, rather than real-world elapsed time (wall time). To ensure as reliable a time measure as possible, all unnecessary operating system tasks were stopped or moved to alternative processors and only a single processor was assigned to the model solving.
Each model was analysed three times to reduce the effect of any variations. The total SE was calculated from a reanalysis of the final optimised topology using a mesh resolution eight times that of the background resolution. This was because, for some structures that consisted of relatively thin members, the total SE was affected by the poor representation of these thin members by the finite elements. To accommodate this, a convergence study was carried out, primarily to ensure an accurate relative difference across optimised structures rather than an accurate absolute total SE value. To this end, a number of mesh resolutions (scaled from the background mesh) were analysed for two dissimilar structures. As shown in Figure 9, a converged gap (to two decimal places) between the two total SE values was achieved when using a scale factor of eight, so this was used for the subsequent post analyses.

Results of effect of hanging nodes on analysis solve time
The first study carried out was to investigate the effect on solve time of having hanging nodes within the mesh. Figure 10 shows the results for two meshes with fixed quantities of independent nodes and varying quantities of hanging nodes, and therefore of MPC equations. It can be seen that the solve time increases linearly as the quantity of hanging nodes increases. Both sets of results have approximately the same gradient, indicating that the cost per hanging node is independent of the total number of independent nodes in the mesh.
A large set of analysis results was generated during the 2D fixed parameter combination optimisation runs (Section 3.4), and so these iteration analysis results were studied to investigate more fully the effect of the quantities of hanging and independent nodes on the solve time. Only the results that corresponded to an analysis model with a mesh containing hanging nodes were included, giving a total of 52 230 data points, and a surface of the following form was fitted to these data:

t_s = 1.053 × 10^−4 n_i + 1.06 × 10^−4 n_h + 2.263    (14)

where n_i and n_h represent the quantities of independent and hanging nodes, respectively. The goodness-of-fit statistics are R^2 = 0.9911 and root mean squared error = 0.05598. From this study, it was found that the effect on solve time of adding one independent node is slightly less than that of adding one hanging node (as evident from the coefficients of n_i and n_h), but for the purposes of this paper they will be considered approximately equal, and subsequent results will be presented with respect to the total number of nodes (independent and hanging) rather than the actual solve time. This reduces the effect of individual timing discrepancies and allows different hardware settings to be used for larger models, to reduce run time without affecting the results.
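The fitted surface of equation (14) can be evaluated directly; the coefficients below are taken verbatim from the text, while the function name and the example node counts are our own illustration (times are solver CPU seconds).

```python
def predicted_solve_time(n_independent, n_hanging):
    """Solve-time estimate from the fitted surface of equation (14)."""
    return 1.053e-4 * n_independent + 1.06e-4 * n_hanging + 2.263

# Example mesh: 20 000 independent nodes and 5 000 hanging nodes.
t = predicted_solve_time(20_000, 5_000)
```

The near-identical coefficients make the paper's simplification explicit: counting total nodes, regardless of type, predicts solve time almost as well as the full two-term fit.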
While the contributions to solve time from an independent node and a hanging node are approximately equal, in practice the addition of one hanging node in a quadtree mesh allows a substantial reduction in the quantity of independent nodes. The extent of this reduction is presented in the following section.

Results of effect of parameters on analysis quality
The two parameter combinations compared represent what could be considered the two extremes of the values included: the first combination uses the largest adjacency ratios, a_s and a_v, and the largest range in element sizes; the second uses the smallest ratio and range, i.e. a uniform mesh. The '*' indicates that the adjacency rule parameters, a_s and a_v, are inactive when the maximum element size, e_max, is 1. The latter of the two results has been thresholded to match the maximum value of the former, and the ranges of both have been normalised. Some difference between the two results can be observed. The absolute difference (refer to equation (13)) between all combinations (Figure 11c) and the absolute difference between the post-filtered results (Figure 11d) were calculated. This showed that the effect of the mesh on the post-filtered values was significantly less than on the prefiltered results, in particular on the structural boundaries. It was observed that the largest differences, which were due to the finite element size in those particular regions, were some distance from the boundaries within the structure, and therefore these could potentially affect the optimised topology through the inclusion of new holes.

The maximum absolute difference across all thresholded and range-normalised results was 0.6 for the prefiltered result compared with 0.25 for the post-filtered result. In addition, the standard deviations were calculated to be 0.18 and 0.1 for the prefiltered and post-filtered results, respectively. This shows that the filtering process is useful for reducing the effect of the meshing parameters, as well as serving as a checkerboard filter.

Results of effect of parameters on analysis time
The FEA solve time was recorded for each mesh generated by each parameter combination. These results are shown in Figure 12. It was found that the biggest relative change in solve time (compared with its closest variant parameter combinations) came from using the combination [a_s a_v e_max,s e_max,v e_min] = [2 2 2 2 1]; with these parameters, the total quantity of nodes in the mesh was reduced by 68%. A further, not insignificant, reduction of 11% was found for the parameter combination [2 2 4 4 1].
Although subsequent parameter combinations further reduced the quantity of nodes, the difference was more marginal. As discussed previously, it was found that the computational cost of an additional hanging node was slightly more than the cost of an additional independent node (Figure 10). Table IV shows the total quantities of nodes in the various meshes; once the proportion of hanging nodes passed 50% of the total node quantity, the reduction in total nodes no longer reduced the cost enough to offset the cost increase caused by the increased quantity of hanging nodes. In addition, the extent of the changes that can be made to the mesh by relaxing this constraint further is low.
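This trade-off can be sketched with a toy cost model (the per-node cost coefficients and node counts below are illustrative assumptions, not the measured values): reducing the total node count only pays off while the hanging-node share stays moderate.

```python
# Assumed relative per-node solve costs: a hanging node is taken to be
# slightly dearer than an independent node (illustrative values only).
C_IND, C_HANG = 1.0, 1.2

def solve_cost(n_independent, n_hanging):
    """Toy linear model of FEA solve cost."""
    return C_IND * n_independent + C_HANG * n_hanging

uniform  = solve_cost(10_000, 0)      # uniform fine mesh, no hanging nodes
moderate = solve_cost(6_000, 1_500)   # 7,500 nodes, 20% hanging
extreme  = solve_cost(3_200, 4_200)   # 7,400 nodes, ~57% hanging

# 'extreme' has fewer total nodes than 'moderate', yet a higher modelled cost:
# once hanging nodes dominate, they outweigh the node-count saving.
print(uniform, moderate, extreme)
```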

Results of effect of fixed parameters (2D).
This section presents the results for the effect of the (different) fixed parameters: the maximum dimension parameter, the adjacency rule parameter and the minimum dimension parameter. Figure 13 shows the effect of the maximum dimension parameter on the total SE and the total quantity of nodes, whereas Figure 14 shows the effects on the resultant topologies.
For the adjacency rule parameter, Figure 15 shows the effect it has on the total SE and the total quantity of nodes, whereas Figure 16 shows its influence on the resultant topologies.
As the minimum dimension parameter must be equal at the interface between the solid and void regions, it cannot be varied independently for these regions; there is therefore only one set of results for this parameter, shown quantitatively in Figure 17 and qualitatively in Figure 18.

Results of incremental parameter variation (2D).
With reference to the strategies introduced in the methodology (Section 2.3.4), the effect of varying the parameter combinations over the course of an optimisation, in terms of 'total nodes of benchmark' and 'total SE' for the final topology, is shown in Figures 19 and 20.

Results of complete set (2D).
The full sample set consisted of 187 fixed parameter combinations, which included the subset presented in Section 3.4.1. These results are shown in Figure 21 (total strain energy (SE) from the post analysis against the total quantity of nodes, up to the iteration limit, for all parameter combinations from Table II). Three distinct regions were identified, which are highlighted in the figure. The parameter combinations that generated the results for regions 1 and 3 are shown in Figure 21. Region 2 contained the majority of parameter combinations and is shown enlarged in Figure 22. Overlaid on these fixed parameter combination results are the incremental variation combinations, marked with stars, with labels corresponding to sets 1 and 2, combinations a-d. It might be expected that the finest mesh would give the best result, but these results show that this was not the case here: better results were obtained by coarsening the mesh, which indicates that the benchmark result was actually a local optimum. It also means that low-level comparisons between individual data points are to some extent difficult.

Results for fixed parameters (3D).
Following on from the 2D results presented in Section 3.4.1 for the fixed parameter combinations (specifically, the maximum dimension, adjacency rule and minimum dimension parameters), this section presents the 3D results for the same parameter combinations (in the 3D cases, the adjacency rule acted on a face rather than an edge; therefore, these parameter values were squared). Figure 23 shows the result for the uniform mesh that was used as the benchmark topology against which to compare the other results. Figure 24 shows the effect of the maximum dimension parameter on the total SE and the total quantity of nodes, whereas Figure 25 shows the effects on the resultant topologies.
For the adjacency rule parameter, Figure 26 shows the effect it has on the total SE and the total quantity of nodes, whereas Figure 27 shows its influence on the resultant topologies.
As was the case with the 2D results, the minimum dimension parameter must be equal at the interface between the solid and void regions; it cannot be varied independently for these regions, so there is only one set of results for this parameter. Consistent with the previous 3D results, Figure 28 shows the final iteration analysis meshes. However, while for a minimum dimension value of 1 this mesh corresponds to the actual optimised structure, when the value is greater than 1 it does not. Therefore, for these cases, the corresponding background mesh structure is also shown to the right.

DISCUSSION
Recall that the research questions posed in Section 1 were as follows:
1. What is the effect of using fixed decomposition meshing parameters on the efficiency and efficacy of the optimisation?
2. What is the effect of using incremental parameter variation strategies on the efficiency and efficacy of the optimisation?
3. Is it possible to identify the most suitable set of decomposition parameters for the best compromise between speed and optimality?
To aid in answering these questions, Table V summarises the effect of the parameters on the optimisation results. For the maximum element size parameter, the effect on run time is shown as the reduction in the total number of nodes compared with the uniform mesh results (benchmark). For the adjacency rule parameter, the effect on run time was compared with the run time when using just a 2:1 ratio with the maximum value of the maximum element size, i.e. the further reduction possible, in percentage points (pp), from relaxing this constraint. The effect of the minimum element size was also compared with the maximum value of the maximum element size, but for these results, the additional percentage reduction is shown. For all cases, the change in total SE was compared with the benchmark result. Note that, as mentioned previously, the benchmark result is not necessarily the optimum.
Looking first at the effect of the maximum dimension parameter varied for the solid region only, the run time was significantly reduced (by about 65% for the largest value) from the benchmark, with marginal change in total SE. Varied for the void region only, the run time was less significantly reduced (by about 20%), with negligible change in total SE. When varied consistently across both regions, the run time was reduced significantly, by up to about 85%, with marginal change in total SE. From these results, the void-region value has a negligible effect on the final result, and so the void mesh can be kept as coarse as possible. The effect of the solid-region value is greater than that of the void-region value.
Regarding the adjacency rule varied for the solid region only, this gave only a marginal additional time saving, of just 1-2%, over using a 2:1 ratio with the maximum solid element size. The actual change in the total quantity of elements from relaxing this meshing restriction is low. The particular structure generated would also have some effect on the benefit of this parameter, as wider regions would allow greater scope for larger elements to be used. The effect of this parameter on the total SE was a gradual increase as the parameter value increased: the overall effect was marginal in 2D but more significant in 3D. Varying only the void-region adjacency rule made a marginal additional time saving, of less than 1%, over using a 2:1 ratio with the maximum solid element size. The effect on the total SE was marginal and about the same as that of the corresponding void maximum dimension parameter. Varying the adjacency rule consistently across the solid and void regions made a small additional time saving over the solid-only variation, which was more significant in the 2D case. The effect on the total SE in 2D was similar to that of the corresponding maximum dimension variation, but in 3D, the effect was more significant.
Overall, in 3D, the maximum dimension parameter had a more significant effect on the actual topology generated compared with the adjacency rule, even after taking into account the different domain sizes. As the adjacency rule varied, the structure remained fairly consistent: a structure without any through holes in it, i.e. its thickness varied over the length of the domain. There was some small difference when the adjacency rule was set at 8 (in 3D), but overall, the variation was much smaller compared with varying the maximum dimension parameter. From these results, there was only a slight further reduction in time from increasing the adjacency rule past 2 (a maximum of 4.4%). While in 2D the effect on the total SE was marginal, in 3D the effect was significantly greater, with the potential for poorer results depending on the extent of the relaxation. Less substantial changes in the mesh (with respect to the number of nodes) are possible by only varying the adjacency rule parameters.
Regarding the minimum dimension parameter, which has to be varied consistently over the solid and void regions, this made a significant impact on the time taken, further reducing it (compared with using the maximum element size at its maximum value) by over 90%. This is due to the huge reduction in elements, particularly in the extreme case. The effect on the total SE was higher than for the other parameters, but this value was most heavily biased by the extreme cases. Going from a value of 1 to 2 resulted in a further time reduction of 51% (2D) with a change in total SE of just 0.1% (2D). Going from 2 to 4 resulted in a further time reduction of 30 pp (2D) with a change in total SE of 0.5% (2D), and going from 4 to 8 resulted in a further time reduction of 12 pp (2D) with a change in total SE of about 4.5% (2D). This trend was also observed in 3D: going from 1 to 2 gave a further reduction in time of 71% (3D) with a change in total SE of 0.2% (3D); going from 2 to 4, a further reduction of 22 pp (3D) with a change in total SE of 2% (3D); and going from 4 to 8, a further reduction of 5 pp (3D) with a change in total SE of 8% (3D). Recall that the 3D test case used a different resolution to the 2D one, so a value of 8 is 6.25% and 3.125% of the longest domain edge length for 3D and 2D, respectively. Therefore, for the 3D case, the analysis quality is actually worse than the corresponding 2D case for the same actual parameter value. From these results, it was found that increasing the minimum dimension parameter slightly does not significantly affect the quality of the solution, but it has a drastic effect if increased too far. This assumes that a mapping strategy is used where the background mesh is of a resolution equivalent to a minimum dimension value of 1.
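The reported 2D reductions can be tabulated in a short worked example (the name d_min is an illustrative label, not necessarily the paper's notation; the percentages are the 2D figures quoted above, accumulated in percentage points against the same baseline):

```python
# 2D figures from the text: further solve-time reductions accumulated against
# the minimum-element-size-1 baseline, plus the associated total-SE change.
reduction_pp = {2: 51, 4: 51 + 30, 8: 51 + 30 + 12}   # % of baseline time saved
se_change    = {2: 0.1, 4: 0.5, 8: 4.5}               # % change in total SE

for d_min in (2, 4, 8):
    remaining = 100 - reduction_pp[d_min]
    print(f"d_min={d_min}: {remaining}% of baseline solve time, dSE={se_change[d_min]}%")

# Relative coarseness of the value 8: 3.125% of the longest 2D edge (256)
# but 6.25% of the longest 3D edge (128), so 8 is effectively coarser in 3D.
print(8 / 256 * 100, 8 / 128 * 100)
```

This makes the diminishing return explicit: by a value of 8, only 7% of the baseline 2D solve time remains, but the SE penalty has grown to about 4.5%.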
Looking at the effect of using an incremental parameter variation strategy, as was shown in Figure 22, out of both incremental sets, strategies 2b and 2d were best overall. Compared with the others in set 2 (2a and 2c), they were better from a solution optimality (i.e. lower total SE) point of view. In all four strategies of set 2, the maximum dimension parameter was fixed at its maximum (16) throughout. The reason that the solve time was approximately bimodal across the results is that at the start of the optimisation there are much larger homogeneous regions of the structure, and so relaxing the adjacency rule then permits significantly fewer elements than relaxing it towards the end of the optimisation, when the structure is more or less defined. From a total SE point of view, strategies 1a, 2a and 2c seem to be bad choices. In 1a, the time taken is also significantly longer, as the maximum dimension parameter is at its lowest throughout, meaning that the earlier iterations contain a larger quantity of elements. The total time taken is also quite heavily biased by the iterations that have highly constraining values for either the maximum or minimum dimension parameters, where the quantity of elements is high. Therefore, further investigations are required to evaluate the effect of the length of the increments used for the incremental variation strategies to enable fine tuning.
Referring to the total fixed parameter dataset results in Figure 21, the objective of the optimisation was to minimise the total SE, and so it is preferable for the data points to be located at the bottom-left corner of the set, as these are the stiffest structures that took the least time to analyse. With this in mind, the parameters for the points filled in black, marked 13-17, were identified (shown in Figure 22). These results had a range of parameter combinations. Relaxing the solid-region parameters to 4 and 8 seems to only increase the total SE slightly but makes a more substantial reduction to the time taken. Common to all is a solid maximum dimension of 2 or 4. For the majority of the cases, the minimum dimension is 1, apart from point 17. In this latter case, the minimum and maximum dimension parameters are equal, where in the other cases the maximum was larger. It may be that a higher minimum dimension is tolerable so long as the maximum is not too large, although this result was the worst of the five from a total SE point of view. On the basis of the results shown in region 3 of Figure 21, using a large minimum dimension value gives significantly worse total SE results. On the basis of the results shown in region 1, ignoring the void region parameters, all of the parameter combinations were [2 * 1 * 1]. If the void region had no effect, then it would be expected that all of these results would be approximately equal. However, this appears to only be the case if the void maximum dimension is 2 (points 2-4) or when the ratio of the void-region parameters is 1 (points 5-7).
From investigating the effect of including a hanging node in an FE mesh, it was found that the effect on solve time of adding one independent node is slightly less than that of adding one hanging node. However, in reality, the addition of one hanging node in a quadtree/octree mesh allows a substantial reduction in the quantity of independent nodes. On the basis of the analysis results for a fixed topology with varying parameter combinations, it was noted that, because of the large finite elements in the interior of the structure (i.e. some distance from the boundaries), there could be significant repercussions on the topology of the optimised solution (through the inclusion of new holes).
Discussing the remeshing methodology, there are some points worth considering. The SE values calculated for the foreground elements are mapped onto the background mesh after dividing by the area or volume of the element to acquire the SE density. These mapped values are constant across the region of the background mesh covered by the foreground element. For larger elements, this means that a larger proportion of the covered background mesh elements would be marked for removal as the BESO update procedure is carried out. In contrast, with a fine mesh, the strain energies are more localised and therefore more accurate. To some extent, the filtering process that averages the values after mapping alleviates this problem. The filtering process also allows the structure of a model with an analysis mesh generated using a parameter value greater than 1 (i.e. a foreground mesh consisting of large elements) to be represented more smoothly.
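A minimal sketch of this mapping and filtering step (an illustrative reconstruction, not the authors' implementation; the element layout, values and the 3x3 box filter are invented for the example): each foreground element's SE is converted to a density and spread uniformly over the background cells it covers, then a simple neighbourhood average smooths the field.

```python
import numpy as np

def map_to_background(fg_elements, shape):
    """fg_elements: list of (row0, col0, size, se); background cells are 1x1."""
    bg = np.zeros(shape)
    for r0, c0, size, se in fg_elements:
        # SE density is constant over all background cells the element covers.
        bg[r0:r0 + size, c0:c0 + size] = se / size**2
    return bg

def box_filter(field):
    """Average each cell with its 3x3 neighbourhood (edge-padded)."""
    padded = np.pad(field, 1, mode="edge")
    return sum(padded[i:i + field.shape[0], j:j + field.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

# One 2x2 foreground element (se=8) and two 1x1 elements (se=1) on a 4x4 grid.
fg = [(0, 0, 2, 8.0), (2, 2, 1, 1.0), (3, 3, 1, 1.0)]
bg = map_to_background(fg, (4, 4))
smoothed = box_filter(bg)
print(bg[0, 0], smoothed.shape)
```

The large element produces a uniform density of 2.0 over the four cells it covers, which the filter then blends with its neighbours, mimicking the smoothing effect described above.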

CONCLUSIONS
Topology optimisation techniques are increasingly being employed for the design of additively manufactured parts. These techniques have been traditionally used for simple test cases in the literature or as an approximate first design step in practice, where manufacturing constraints mean that optimised designs cannot be readily realised. As additive manufacturing enables complex topologies to be manufactured with little constraint, making the TO process computationally tractable for real-life 3D cases with high-resolution designs becomes an urgent research task. The proposed hierarchical mesh decomposition strategy is shown to be an effective approach to addressing this challenge.
In this work, for the first time, octree remeshing with adjacency ratios over 2:1 is investigated for TO with independent control over the solid and void regions. The investigation explored the analysis solving time (i.e. computational effort) and the analysis quality (i.e. solution optimality) to fully understand the repercussions of the remeshing parameters.
From the results obtained in this work, the following conclusions can be drawn. Although the effect on analysis solve time of including a hanging node in an FE mesh is slightly more than that of adding one independent node, within the context of hierarchical remeshing, one hanging node can eliminate several independent nodes, thereby resulting in efficient computation.
It was found that the void region had less effect when the mesh for the solid region was more tightly controlled. Furthermore, the parameters acting on the void region had significantly less effect on the result compared with those acting on the solid region, in particular when the void-region value is 2. Therefore, the choice of parameter combinations for the void region can be made fairly flexibly without noticeably affecting the efficiency and efficacy of the optimisation.
Increasing the adjacency ratio values increased the efficiency only modestly, whereas increasing the maximum element dimension significantly reduced the run time, with marginal change in the quality of the solution. In 3D, the maximum element dimension had a more significant effect on the actual topology generated than the adjacency ratio. This is because less substantial changes in the mesh are possible by only varying the adjacency rule parameters. The best compromise between speed and optimality can be achieved by setting the maximum element dimension fairly high for the void region, while the solid-region value needs to adaptively change from small to large as the topology becomes well defined. Concerning the minimum element size, it is suggested to set this at a level where the ratio of the largest domain edge length to the minimum element size is 64 (256:4 for 2D and 128:2 for 3D), as this has a marginal effect on the total SE.
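The suggested setting can be checked with one-line arithmetic (variable names here are illustrative): both recommended combinations give the same 64:1 ratio of longest domain edge to minimum element size.

```python
# Longest-domain-edge : minimum-element-size pairs quoted in the text.
cases = {"2D": (256, 4), "3D": (128, 2)}
ratios = {name: edge // d_min for name, (edge, d_min) in cases.items()}
print(ratios)  # both cases give a 64:1 ratio
```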