Sensitivity Analysis and Filtering of Machinable Parts Using Density-Based Topology Optimization

Abstract: Topology optimization has become a popular tool for designing optimal shapes while meeting specific objectives and restrictions. However, the shape resulting from the optimization process may not be easy to manufacture using typical methods such as machining and may require interpretation and validation. Additionally, the final shape depends on the chosen parameters. In this study, we conduct a sensitivity analysis of the main parameters involved in 3D topology optimization (penalization and filter radius), focusing on the density-based method. We analyze the features and characteristics of the results, concluding that a machinable part requiring little interpretation is not attainable with by-default topology optimization. Therefore, we propose a new method for obtaining more manufacturable and easily interpretable parts. The main goal is to assist designers in choosing appropriate parameters and understanding what to consider when seeking optimized shapes, giving them a new plug-and-play tool for manufacturable designs. We chose the density-based topology optimization method due to its popularity in commercial packages, so the conclusions may directly influence designers' work. Finally, we verify the study results through different cases to ensure the validity of the conclusions.


Introduction
Topology optimization (TO) is a design tool for obtaining optimal shapes under known conditions, based on the optimization problem definition. In other words, it aims to achieve the best possible result for a given function while applying specific constraints. TO has been used increasingly in recent years, with growing research on the topic and the development of commercial software for obtaining topology-optimized parts, as mentioned by Shiye and Jiejiang [1]. Nevertheless, the foundations of TO theory were established over a century ago [2], with major applications of the modern theory in the guidelines developed by Dorn [3]. Years later, the computerization of topology optimization arrived [4], with further applications in the digital era [5].
In this section, a brief explanation of density-based TO is presented in order to show the methods used, the solving techniques, and the filters, as well as the application of the studied parameters. The values of these and MATLAB® codes are provided for replicating the results of the study, which can be applied to the well-known TO codes discussed in the literature [6,7]. This chapter is divided into two sections: one related to the TO method used, and the other related to the filters oriented towards subtractive manufacturing.


Density-Based Topology Optimization
The density-based approach starts from a design domain Ω that is discretized using nodes and elements. This design domain is the maximum space (line, surface, or volume) where the final design could exist. A density value is assigned to each element of the discretized design domain; hence, it is parametrized by this density. The density value is not binary, so it can vary from 0 (void) to 1 (solid), governed by an interpolation function [8,9]. Historically, the use of only black and white elements easily leads to an ill-posed result [10]. Many solutions have been proposed [11–13], but none are totally satisfactory. The proposal to include not only binary values for the density of each element was presented by Bendsøe and later by Mlejnek in their studies [8,9]. Those values are calculated following a power rule [14], which will be explained later in this manuscript.
The density-based approach is extensively described in several studies [6,15]. It follows the simplified flowchart shown in Figure 1, where every step is numbered for future reference. This study gives special attention to levels 4 and 5 of the flowchart in Figure 1, as these processes involve the parameters under study. Specifically, for the interpolation in level 4, the Solid Isotropic Material with Penalization (SIMP) method is utilized. Additionally, a density filter is employed as the filtering technique in level 5.


Interpolation Scheme: The SIMP
The SIMP is based on the relation between the stress state of every element and its relative density. The formulation is included in Equation (1):

E_i = E_min + x_i^p (E_0 − E_min),  (1)

where E_i is the element Young's modulus, E_min is the almost-zero elastic modulus of the void material (used to avoid mathematical incompatibility issues), E_0 is the elastic modulus of the original material, x_i is the relative density of element i, and p is the penalization factor. This penalization factor is one of the parameters used in the sensitivity analysis of this manuscript. According to Equation (1), it is evident that a higher p will result in a more binary outcome [16–18], corresponding to the graph of a power rule function presented in Figure 2. Increasing the penalization raises the degree of the SIMP polynomial, resulting in a more discrete relation between the tensional state of the element and its density. SIMP represents a robust, highly configurable, and computationally efficient scheme. It does not require homogenization and is adaptive to various design conditions and constraints. Due to these advantages, SIMP is chosen as the initial interpolation scheme for the research conducted in this work. However, using SIMP presents challenges such as dealing with intermediate densities, mesh dependency, and potential non-convergence [19].
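To make the effect of p concrete, the following sketch evaluates Equation (1) for several penalization values. The study's published code is MATLAB®; this is an illustrative Python transcription, and the function name and defaults are ours:

```python
def simp_modulus(x, p, e0=1.0, e_min=1e-9):
    """SIMP interpolation (Eq. (1)): Young's modulus for relative density x.

    x     -- relative density of the element, in [0, 1]
    p     -- penalization factor (values of 3 to 6 are studied in this paper)
    e0    -- elastic modulus of the solid material (set to 1 in the study)
    e_min -- near-zero modulus of the void material, avoiding singular stiffness
    """
    return e_min + x**p * (e0 - e_min)

# An intermediate density of 0.5 loses stiffness rapidly as p grows, which is
# what drives the optimizer towards a nearly binary (0/1) layout.
for p in (3, 4, 5, 6):
    print(p, simp_modulus(0.5, p))
```

Note how the endpoints are unaffected (x = 0 gives E_min and x = 1 gives E_0); only the intermediate densities are penalized.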


Filtering: The Density Filter
The density-based method employs the Finite Element Method (FEM, see step 3 in Figure 1) and the SIMP method to assign each element a relative density value ranging from 0 to 1. The classical approach is finding a black and white layout (zeros and ones) that minimizes the compliance of the structure. A maximum volume fraction in relation to the original volume of the design domain is usually utilized as the upper-limit optimization constraint [6].
Nonetheless, even with SIMP, mesh dependency, checkerboard, ill-posed, or local minima results can still arise [20]. To address these issues, the density filter [21] is employed in this study. A basic formulation of this filter is presented in Equation (2):

x̃_i = ( Σ_{j∈N_i} H_ij v_j x_j ) / ( Σ_{j∈N_i} H_ij v_j ),  (2)

where x̃_i is the filtered density of element i, N_i represents the neighborhood of that element, v_j is the volume of element j, x_j is the density of element j, and H_ij is a weight factor. The neighborhood of an element is defined as all the elements whose center is within a certain distance R from the center of the element under analysis. This can be defined as in Equation (3):

N_i = { j : dist(i, j) ≤ R },  (3)

where dist(i, j) is the Euclidean distance between elements i and j. The weight factor H_ij can be defined as in Equation (4):

H_ij = R − dist(i, j),  (4)

where j is an element inside the neighborhood N_i. The constant R is defined as the filter radius and is the second parameter used in the sensitivity analysis throughout this paper. Now,
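Equations (2)–(4) can be sketched on a small 2D grid of unit-volume elements as follows. This is an illustrative Python version under our own naming; the study's actual code is MATLAB® and operates on the 3D mesh:

```python
import math

def density_filter(x, R):
    """Linear-weight density filter (Eqs. (2)-(4)) on a 2D grid of unit elements.

    x -- 2D list of densities; element centres sit at integer coordinates
    R -- filter radius; the neighborhood N_i holds every element whose centre
         lies within distance R of element i (Eq. (3))
    Returns the filtered densities x_tilde.
    """
    rows, cols = len(x), len(x[0])
    xt = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            num, den = 0.0, 0.0
            for a in range(rows):
                for b in range(cols):
                    d = math.hypot(i - a, j - b)
                    if d <= R:              # element (a, b) belongs to N_i
                        h = R - d           # linear weight H_ij (Eq. (4))
                        num += h * x[a][b]  # unit element volumes, v_j = 1
                        den += h
            xt[i][j] = num / den            # Eq. (2)
    return xt
```

With a uniform density field the filter acts as an identity, while an isolated solid element is smeared over its neighborhood, which is what suppresses checkerboard patterns.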

x̃_i from Equation (2), known as the filtered density of the element, is substituted into the SIMP interpolation (Equation (1)) as in Equation (5):

E_i = E_min + x̃_i^p (E_0 − E_min).  (5)

Study Description and Novelty
Density-based TO is the most popular method in commercial packages such as the Altair HyperWorks suite [22], Ansys suite [23], Dassault Systèmes suite [24], and PTC suite [25]. This manuscript analyzes the two main parameters involved in the TO process: the penalization and the filter radius. This study aims to determine the best parameters for obtaining manufacturable results with minimal interpretation for three-axis machining, assisting designers in making informed decisions and saving time in the preprocessing and simulation phases.
A wide range of values is considered for the mentioned parameters, along with calculation times, discreteness, and machinability of the results. All these data are ordered and presented later in this study. It is important to note that other studies [1,16,26] make remarkable contributions in this area, but none of them are specifically oriented to a concrete manufacturing method, and they are often limited to a short set of values for the studied parameters. Other works [27,28] concentrate on achieving these manufacturable parts by adding filters and additional constraints to the TO process, but a more direct application is sought in the present paper.
This work also develops a novel analysis tool that can assess the machinability of a part for a three-axis machine. This tool is later applied as a filter to automatically modify the non-extra-constrained part, making it more machinable and less demanding to interpret. MATLAB® R2023a is used for its ease of use within the scientific and design communities and to allow the replication of the research results.
The structure of the paper is organized as follows:
• The initial section discusses the chosen parameter value criteria, along with the load case and material parameters of the main study case for the sensitivity analysis.
• Following that, the parameters used for analyzing the results are presented and explained, with particular emphasis on the measure of discreteness and the evaluation of machinability.
• Subsequently, the results of the 92 simulations are presented and analyzed, allowing conclusions related to the aim of the study to be drawn.
• The machining filter is presented and verified through different verification tests.
• Finally, the conclusions drawn from the study and potential avenues for future research are presented.
In summary, novelty is given in two fields: the extensive sensitivity analysis of a 3D case, and the machining analysis and filter. Regarding the latter, this document includes a machining analysis tool that identifies whether an element is manufacturable via three-axis machining. It calculates the global machinability as a result, achieving a 100% fully machinable design. This analysis tool is used for both the machining filter and the sensitivity analysis. Finally, a novel machining filter is presented in combination with the machining analysis tool. It takes into account the density threshold for displaying the elements and does not require big changes in the sensitivity analysis. These are differentiating points with respect to existing machining filters [27–29].

Parameter Value Selection Criteria
The filter radius (R) and the penalization (p) are the two main parameters being analyzed. These parameters should not be chosen arbitrarily but require careful consideration to achieve a satisfactory outcome.
For the penalization p, previous studies [1,26] indicate that a minimum value of three should be used for isotropic materials with a Poisson ratio ν ≈ 1/3, within the defined topology optimization method. The relative density of each element follows the SIMP power rule, and it is observed that higher penalization values lead to sharper trends towards either 0 or 1 density. To analyze the parameter's impact, values ranging from three (3) to six (6), inclusive, with increments of one (1), are chosen based on previous studies [1,26]. These values allow one to observe the transitional changes in the resultant shapes when larger penalization values are employed.
For the filter radius R, the values were taken from different scopes, and they are summarized in Table 1. These sources include:
• Values used in other mentioned studies (OS).
• Values derived from the Euclidean distance between the element under analysis and the neighboring layer of elements (ED).
• Large estimated values intended to observe the behavior of the method with extreme values (LV).
The values of the filter radii in Table 1 were rounded to reduce computation time. This approach was chosen because the filter radius is a highly recurrent value throughout the process. Rounding the number to a value that does not overlap with the next filter radius value has no impact on the result.

Case Study Description and Material
The study aims to analyze parameters in a 3D structure, so the initial design space needs meaningful dimensions in the x, y, and z directions. The typical load case involves minimizing compliance under a volume fraction constraint in a cantilever beam with a vertically applied distributed force. In this study, a point force (tip load) is used instead to emphasize the 3D nature of the case.
Considering that the element size is 1 in all directions, the initial design space consists of 80 elements in the x direction, 30 elements in the y direction, and 20 elements in the z direction. This choice helps generate fewer flat shapes in the optimization result. Figure 3 illustrates the case study with the applied load and the number of elements.
For this study, a volume fraction of 0.2 is applied, and a material with a Poisson ratio (ν) of 0.3 is used. The value of E_min in Equation (1) is set to 10^−9, which is close enough to zero to avoid affecting calculations and to prevent mathematical singularities. E_0 is set to 1, a value typically used in the mentioned previous studies for research and for reducing calculation time. These properties are chosen for comparing the results to other studies [1,26] and having a direct application in the known and accessible MATLAB® codes [6,7]. Convergence is achieved when the maximum change in the density of any element between two consecutive iterations is less than 0.01. This is the recommended value based on the experience of previous studies [6].
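The convergence criterion above can be expressed in a few lines. This Python sketch mirrors the stopping test used in the study (the study's code is MATLAB®; the names here are ours):

```python
def converged(x_old, x_new, tol=0.01):
    """Stopping criterion used in the study: the optimization loop ends once no
    element density changes by more than tol between consecutive iterations.

    x_old, x_new -- flat sequences of element densities from two iterations
    tol          -- maximum allowed density change (0.01 in this study)
    """
    return max(abs(a - b) for a, b in zip(x_old, x_new)) < tol
```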


Parameters for the Results Analysis
The sensitivity analysis aims to evaluate machinability by exploring different filter radii and penalizations to assist designers in their projects. The parameters extracted from the simulation results are intended to demonstrate this characteristic objectively. These parameters include the number of iterations, time per iteration, measure of non-discreteness, and machinability.
The number of iterations and time per iteration are directly obtained from MATLAB® and will be presented in the subsequent section. However, further explanations are needed for the measure of non-discreteness and machinability parameters, which are provided below.

Measure of Non-Discreteness
Discreteness refers to a global characteristic of the final design, indicating how binary the result is, with distinct void (x̃_i = 0) and solid (x̃_i = 1) elements. This parameter is important because the result representation includes only elements with a density x̃_i above a threshold (in this case, 0.5). Consequently, there are elements that do not appear in the final design but still contribute to compliance, and their significance increases as the non-discreteness of the result grows. Since the designer needs to interpret the shape displayed in the TO result, and an objective of the study is to obtain easily interpretable components, it is essential to control and minimize the non-discreteness.
To analyze the characteristic of discreteness, Sigmund introduced a tool called the measure of non-discreteness (M_nd), which is widely utilized in the investigation of this matter [30]. The measure of non-discreteness is represented by Equation (6):

M_nd = ( Σ_{i=1}^{N} 4 x̃_i (1 − x̃_i) / N ) × 100%,  (6)

where N is the total number of elements in the design domain and i is the ID of each element. It should be noted that the lower the value of M_nd, the more binary the analysis results are.
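Equation (6) reduces to a one-line computation. The sketch below is an illustrative Python version (the study's code is MATLAB®); it returns 0% for a fully black-and-white design and 100% when every density equals 0.5:

```python
def non_discreteness(x):
    """Sigmund's measure of non-discreteness M_nd (Eq. (6)), in percent.

    x -- iterable of (filtered) element densities in [0, 1]
    Each term 4 * x * (1 - x) is 0 at the endpoints and peaks at x = 0.5,
    so the average quantifies how gray the design is.
    """
    xs = list(x)
    return sum(4.0 * xi * (1.0 - xi) for xi in xs) / len(xs) * 100.0
```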

Machinability
The machinability of the optimized shape is analyzed by assessing the accessibility of a hypothetical tool to each frontier element. A frontier element refers to the last element with a density higher than the chosen threshold. This analysis involves an element-by-element evaluation.
To initiate the analysis, the set of elements exceeding the selected density threshold is divided into distinct groups:
• The external layer (EL): comprising all the elements located on the boundaries of the design domain. These elements are always accessible because they do not have any solid interface with the exterior.
• The core layers (CL): representing the internal solid region, consisting of all the elements forming the shape of the part excluding the frontier elements.
• The frontier layer (FL): consisting of the elements above the density threshold and located between the solid and void phases.
The frontier layer (FL) can be determined by considering that it includes all the elements exceeding the threshold but not belonging to the EL or CL groups, as stated in Equation (7):

FL = {elements above the threshold} \ (EL ∪ CL).  (7)

In the subsequent step, the FL is divided into three distinct groups based on the accessibility of the elements for a tool:
1. Elements directly accessible for a tool: elements within the FL that can be readily machined using a tool.
2. Elements not directly accessible but machinable by machining a neighboring element: elements that are not directly reachable by a tool but can still be manufactured through the machining of a neighboring element.
3. Non-machinable elements: elements within the FL that are not machinable.
This division allows for a more detailed understanding of the machinability characteristics of the elements within the FL, providing insights into their accessibility for machining operations. For the FL elements in groups 1 and 2, the identification process is as follows. For an element to be considered directly accessible for a tool, it must satisfy a specific condition: at its free faces, there must be a row of elements with densities below the selected threshold that extends to the boundaries of the design domain. To illustrate this process, let us consider the simplified 2D example in Figure 4, where the identification process is depicted. The blue elements represent those with densities exceeding the threshold, while the green element e(x,y) is the element currently being analyzed.
To determine whether an element satisfies the condition of being directly accessible for a tool in the y axis, the row and column leading to that element are analyzed from the outside towards the inside in both directions (shaded elements in the central pane of Figure 4). If, at any point during the analysis, an element is encountered with a density exceeding the threshold, the condition is not fulfilled for that specific element.
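The direct-accessibility test of Figure 4 can be sketched in 2D as follows (an illustrative Python version with hypothetical names; the paper's tool performs the equivalent check on the 3D grid):

```python
def directly_accessible(cell, solid, rows, cols):
    """2D sketch of the direct-accessibility test: a frontier element is
    directly reachable by the tool along an axis if every element between it
    and the domain boundary, in that approach direction, is below threshold.

    cell  -- (r, c) element under analysis
    solid -- set of (r, c) cells above the density threshold
    """
    r, c = cell
    paths = (
        [(rr, c) for rr in range(r)],            # from the top edge down
        [(rr, c) for rr in range(r + 1, rows)],  # from the bottom edge up
        [(r, cc) for cc in range(c)],            # from the left edge in
        [(r, cc) for cc in range(c + 1, cols)],  # from the right edge in
    )
    # Accessible if at least one approach direction is completely free.
    return any(all(p not in solid for p in path) for path in paths)
```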
To be considered machinable through its neighbor, an element must meet a specific condition: a free row must extend to its free face. To determine this, the neighboring rows are analyzed from the outside towards the inside until reaching the free face of the element. If, during this analysis, none of the studied lines contains an element with a density above the threshold, the condition is fulfilled. Figure 5 illustrates this process in a similar manner to Figure 4, but in the x axis direction.
The output parameter is the percentage of machinable elements with respect to all the frontier elements. A value of 100% would mean that the design is fully manufacturable. The MATLAB® code of this process and the rest of the machining filter (which is explained in subsequent sections) is shown in Appendix A.
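The output parameter is then a simple ratio. A minimal sketch, assuming the frontier layer and the set of machinable elements (groups 1 and 2) have already been identified:

```python
def machinability(frontier, machinable):
    """Global machinability in percent: the share of frontier-layer elements
    classified as machinable (groups 1 and 2). 100% means the displayed design
    is fully manufacturable by the hypothetical three-axis tool.

    frontier   -- set of frontier-layer elements
    machinable -- set of elements the accessibility tests marked machinable
    """
    if not frontier:
        return 100.0  # no frontier elements: nothing blocks the tool
    return 100.0 * len(machinable & frontier) / len(frontier)
```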


Sensitivity Analysis
The results of the TOs are presented in Tables A1–A4, which can be found in Appendices B.1–B.4. Note that the relative density x_i of each element is represented with a greyscale factor in Tables A2–A4, where a darker element represents a value closer to 1. Only elements with a relative density x_i > 0.5 are shown. Numerical results for the analysis are shown in Table 1, where:

• Sim. ID: the identifier of the specific TO case.
• Iterations: the number of iterations required to achieve the convergence condition. Note that a maximum of 5000 iterations was set, so if the TO reaches this value, convergence is not reached.
• It. Time: the average time the TO took per iteration.
• Objective: the compliance of the structure, calculated as in Equation (8):

c = F^T U = U^T K U = Σ_{e=1}^{N} (x_e)^p u_e^T k_0 u_e,  (8)

where K, U, and F are the global stiffness matrix and the global displacement and force vectors, respectively; N is the total number of elements; u_e is the displacement vector of element e; k_0 is the element stiffness matrix; and x_e is the density of element e.
• Non-Discreteness: the parameter described in Section 2.3.1.
• Machinability: the parameter described in Section 2.3.2.
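Equation (8) can be evaluated either globally or element by element. Below is a small Python sketch of both forms (illustrative names; the study computes this inside its MATLAB® TO loop):

```python
def compliance(K, U):
    """Global form of Eq. (8): c = U^T K U, for a small dense stiffness
    matrix K (list of lists) and a displacement vector U."""
    KU = [sum(kij * uj for kij, uj in zip(row, U)) for row in K]
    return sum(ui * kui for ui, kui in zip(U, KU))

def compliance_simp(x, p, elem_energy):
    """Element-wise form of Eq. (8): c = sum_e x_e^p * (u_e^T k_0 u_e),
    where elem_energy[e] holds the precomputed strain energy u_e^T k_0 u_e."""
    return sum(xe**p * w for xe, w in zip(x, elem_energy))
```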
Appl. Sci. 2024, 14, 6260

Next, in Figures 6 and 7, some remarkable results are displayed along with fitted curves of the discrete results data.
As the filter radius increases, it is expected that higher filter radius values lead to simpler shapes, as the density filter in Equation (2) affects a greater number of elements. However, when the value of the radius becomes very large, the results start to exhibit discontinuities, and the computational cost increases.
A machining filter in combination with a Heaviside step filter [15,31-33] may improve the M_nd-machinability combination, as will be presented in the next section.
Lower filter radii and lower penalization values lead to better compliance outcomes. Additionally, the results show lower dispersion for the smallest radii. These findings are expected, because more complex shapes result in a higher moment of inertia, leading to increased stiffness. This can clearly be seen in Figure 9, where the corrected data are represented using a two-term exponential fit.

Additions to Filtering
The machining filter is positioned inside the optimization algorithm, which in this case is the Optimality Criteria (OC) method described by Liu and Tovar [6], using the parameters described in previous studies [34-37]. The flowchart depicted in Figure 10 illustrates the position of the filters (machining filter and Heaviside step filter) using the numbering notation from Figure 1. The code now includes three additions: the machining filter, the Heaviside step filter, and the ellipsoid-shaped density filter. These implementations are explained below and positioned inside the OC routine.

Machining Filter
The new machining filter developed for this study uses the analysis method described in Section 2.3.2 to identify elements. Non-machinable elements are detected, and an individual machinability radius, R_mach, is assigned to each of them. The determination of R_mach involves analyzing the neighboring elements of the element under analysis, using the filter radii described in Equation (4) and Table 1. This process is performed individually for each Cartesian axis, so the designer can obtain parts that are machinable in a specific direction. The analysis begins with the smallest practical radius of 1.5. If a neighboring element is deemed machinable through direct access or neighbor access, this radius is adopted, and the elements influenced by it are not analyzed further until the next iteration, for computational efficiency.
If no machinable element is found within the neighborhood defined by the radius, the next radius in Table 1 is examined, and this process continues until at least one machinable element is found. The MATLAB® code for this process is shown in Appendices B and C.
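The escalating radius search can be sketched as follows. This is an illustrative Python reconstruction, not the paper's MATLAB implementation from Appendices B and C; the function and variable names (`assign_machinability_radius`, `machinable_mask`) and the list of candidate radii are assumptions for the example:

```python
import numpy as np

# Candidate machinability radii, smallest first (Table 1 lists the actual
# study values; 1.5 is the smallest practical radius mentioned in the text).
CANDIDATE_RADII = [1.5, 2.0, 2.5, 3.0, 3.5, 4.0]

def assign_machinability_radius(element, machinable_mask, coords):
    """Return the smallest candidate radius whose neighborhood around
    `element` contains at least one machinable element.

    `machinable_mask` is a boolean array over all elements, and `coords`
    holds the (x, y, z) center of every element.
    """
    dists = np.linalg.norm(coords - coords[element], axis=1)
    for r in CANDIDATE_RADII:
        in_ball = dists <= r
        in_ball[element] = False  # exclude the element under analysis
        if np.any(machinable_mask & in_ball):
            return r
    return CANDIDATE_RADII[-1]  # fall back to the largest radius

# Tiny 1D example: three collinear elements spaced 1 apart; only the last
# one is machinable, so element 0 needs a radius reaching distance 2.
coords = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
mask = np.array([False, False, True])
print(assign_machinability_radius(0, mask, coords))  # 2.0
```

Skipping the elements already covered by an adopted radius until the next iteration, as the text describes, would be a straightforward bookkeeping addition on top of this loop.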
Once the non-machinable elements are detected, as explained in Section 2.3.2, and the filter radii are obtained, the filtering process can begin.It is graphically described in Figure 11, and it is also broken down step by step in the next section.
The process starts with the blueprint design, which is the resulting density field of the design variable update in the OC loop. This is represented in Figure 11a, where blue elements have a density above the chosen threshold. The filter radii are then obtained using the MATLAB® code of Appendix A, considering that, in this case, a machinable result in the x direction is desired. The elements highlighted in pink in Figure 11b are those affected by the filter radii. Two conditions are then evaluated: the element density is above the density threshold, and pink elements are present in the machining direction at any point of the density field. If an element meets both conditions, its density is set to zero; this gives the filter a first access point from which to work, as described next. This first column filtering is represented in Figure 11c, where the elements in red are those that meet the mentioned conditions. In the same figure, the yellow arrows point to the nearest pink elements in the machining direction, evidencing the existence of non-accessible elements. Figure 11d shows the design after the first column filtering.
The rest of the columns are filtered in a similar way: the elements to filter are selected if they meet the conditions described above, but in this case they take the density of the element before them in the machining direction, the (x − 1, y) element. In Figure 11e, the element detection for the second column is represented analogously to Figure 11c. In Figure 11f, green arrows indicate the elements from which the filtered elements take their new density.
Figure 11g shows the density field after the second column filtering. The process continues until no elements meet the filtering conditions. Figure 11h shows the final design after the complete filtering process. The MATLAB® code of the filter itself is shown in Appendix C.
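The column-by-column filtering of Figure 11 can be sketched as a single sweep in the machining direction. Assuming a 2D field indexed (x, y) as in the figure, a simplified Python approximation (the actual filter, in Appendix C, iterates until no element meets the conditions) is:

```python
import numpy as np

def machining_filter_x(x, pink, threshold=0.5):
    """Single-sweep sketch of the machining filter in the +x direction.

    `x` is a 2D density field indexed (x, y); `pink` marks the elements
    covered by the machinability radii (Figure 11b). An element is filtered
    when its density exceeds `threshold` and a pink element exists somewhere
    along its row in the machining direction. The first column is zeroed
    (Figure 11c,d); subsequent columns copy the (x - 1, y) density, which
    propagates the zeros forward (Figure 11e-h).
    """
    x = x.copy()
    rows_with_pink = pink.any(axis=0)  # pink anywhere along x, per y row
    mask = (x[0] > threshold) & rows_with_pink
    x[0, mask] = 0.0
    for col in range(1, x.shape[0]):
        mask = (x[col] > threshold) & rows_with_pink
        x[col, mask] = x[col - 1, mask]
    return x

# 3x2 example: row y = 0 contains pink elements, so its material is removed
# column by column; row y = 1 is untouched.
pink = np.array([[True, False], [True, False], [False, False]])
print(machining_filter_x(np.ones((3, 2)), pink))
```

A single forward sweep already makes each filtered row accessible from the x = 0 face; the iterative version in the paper repeats the sweep until the density field stops changing.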
Finally, a kernel filter, as described in Equation (2), is applied as a damping tool: forcing some elements to a fixed density can produce a disintegrated density field and thus hinder the convergence of the OC routine. This filter acts as the density filter [21,38] presented in Section 1. It is also where the third addition, the ellipsoid-shaped filter, is applied (Section 4.3).
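As a reference for this damping step, a minimal sketch of a kernel density filter with conic weights is shown below, in 1D for brevity (Equation (2) applies the same weighting over a spherical 3D neighborhood):

```python
import numpy as np

def density_filter_1d(x, R):
    """Kernel density filter with conic weights, w_ij = max(0, R - d_ij).

    Each filtered density is the weighted average of its neighbors within
    radius R, which smooths out isolated spikes left by the machining
    filter. Shown in 1D for brevity; Equation (2) is its 3D counterpart.
    """
    idx = np.arange(x.size)
    d = np.abs(idx[:, None] - idx[None, :]).astype(float)
    w = np.maximum(0.0, R - d)          # conic (linearly decaying) weights
    return (w @ x) / w.sum(axis=1)

# A single solid element is smeared onto its immediate neighbors.
x = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
print(density_filter_1d(x, 1.5))  # [0.0, 0.2, 0.6, 0.2, 0.0]
```

The damping effect is visible in the example: the forced density jump is spread over the neighborhood instead of remaining a sharp discontinuity.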

Heaviside Step Filter
To achieve a manufacturable part with minimal discretization of the element densities, a chain of close filters built from a Heaviside function, as described by Schevenels and Sigmund, Andreassen et al., or Pellens et al. [15,31,32], is used, first applying a dilation step and lastly an erosion step. Sigmund [30] demonstrated the use of close filters for reducing the non-discreteness of the result while preserving the original (non-filtered) shape. The Heaviside step function used corresponds to the smooth approximation proposed by Wang et al. [39], where β is the Heaviside smoothness parameter: a β approaching zero filters the densities linearly without making any changes, while a β tending to infinity produces a step function at the value defined by the parameter η. When η = 1, the expression corresponds to an erosion filter; when η = 0, it corresponds to a dilation filter.

Here, x̃_e is the element density before the Heaviside filtering, and x̄_e is the filtered element density. The aim of applying first a dilation filter and later an erosion filter is to avoid a non-volume-preserving strategy. In this case, a close-open filter is applied, as specified by Sigmund in his study [30].
The sensitivities must be modified to ensure accurate calculation of the new design variables in every OC iteration. This is done through the chain rule described in Equation (10), where "∧" represents the dilation operator and "∨" the erosion operator. The tilde represents the density filtered by the kernel filter.
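The Wang et al. projection is commonly written as x̄_e = [tanh(βη) + tanh(β(x̃_e − η))] / [tanh(βη) + tanh(β(1 − η))]; since the equation itself was lost in this text, the following Python sketch reconstructs it under that assumption, together with the derivative needed in the sensitivity chain rule:

```python
import numpy as np

def heaviside_project(x, beta, eta):
    """Smooth Heaviside projection in the form commonly attributed to
    Wang et al. (a reconstruction; the paper's Equation is not reproduced
    here). Densities below eta are pushed towards 0 and densities above
    eta towards 1 as beta grows; beta -> 0 leaves them unchanged."""
    num = np.tanh(beta * eta) + np.tanh(beta * (x - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

def heaviside_derivative(x, beta, eta):
    """d(projected)/d(input), the factor entering the chain rule of
    Equation (10) when correcting the sensitivities."""
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return beta * (1.0 - np.tanh(beta * (x - eta)) ** 2) / den

# Large beta snaps intermediate densities to 0/1 around eta = 0.5.
x = np.array([0.2, 0.5, 0.8])
print(heaviside_project(x, beta=64.0, eta=0.5))  # ~ [0, 0.5, 1]
```

With η = 0 the same expression projects any positive density towards 1 (dilation), and with η = 1 it keeps only fully solid densities (erosion), consistent with the close-open strategy described above.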

Ellipsoid-Shaped Density Filter
The main issue when extrapolating results from the sensitivity analysis to other design spaces is the mesh dependency of density-based TO [19]. The shape of the optimized part depends directly on the density filter described in Equation (2): because of the relation between the filter size and the design domain size, using the same filter size with different design domains leads to different resultant shapes.
To address this characteristic, the original sphere-shaped filter is generalized to an ellipsoid (the sphere being the special case of equal semi-axes), so that each semi-axis can be varied individually with the three Cartesian dimensions of the initial design domain. As conic weights are used as the weighting factor [38], the radius of the ellipsoid at the same latitude and longitude as the analyzed element, R_fil,ijk, must be obtained to calculate the weight of the neighboring elements.
To obtain this radius, six initial parameters are needed: a, b, and c, the semi-axes' dimensions given by the user, and three filter radii, R_min,x, R_min,y, and R_min,z, respectively. The relative coordinates of the element being analyzed in the kernel filter with respect to the base element are denoted R_i, R_j, and R_k.
The angles α and ω in Figure 12 are the geocentric latitude and longitude, respectively, and can be obtained with trigonometry from these relative coordinates.

The coordinates of the ellipsoid surface at the same latitude and longitude as the analyzed element, R_x, R_y, and R_z, can be geometrically calculated. By combining these values with the general formula of an ellipsoid, Equation (12) is obtained for the calculation of the desired radius. The three implementations are presented and highlighted in green in the flowchart of Figure 13. The flowchart represents level 6 of Figure 1 and the blue block of Figure 10. The machining filter is divided into the three phases described in Appendices A.1-A.3, and the density filter is computed using the ellipsoid-shaped neighborhood.
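The radius of an ellipsoid along a given geocentric latitude α and longitude ω follows from substituting the direction vector into the ellipsoid equation. A small Python sketch of this geometry (illustrative; not a reproduction of the paper's Equation (12)) is:

```python
import numpy as np

def ellipsoid_radius(a, b, c, alpha, omega):
    """Distance from the ellipsoid center to its surface along the
    direction with geocentric latitude `alpha` and longitude `omega`
    (compare Figure 12).

    The unit direction is (cos a cos w, cos a sin w, sin a); scaling it by
    r and substituting into (x/a)^2 + (y/b)^2 + (z/c)^2 = 1 gives r.
    """
    dx = np.cos(alpha) * np.cos(omega)
    dy = np.cos(alpha) * np.sin(omega)
    dz = np.sin(alpha)
    return 1.0 / np.sqrt((dx / a) ** 2 + (dy / b) ** 2 + (dz / c) ** 2)

# Along each Cartesian axis, the radius reduces to the corresponding
# semi-axis, as expected.
print(ellipsoid_radius(3.0, 2.0, 1.0, 0.0, 0.0))        # 3.0 (x axis)
print(ellipsoid_radius(3.0, 2.0, 1.0, 0.0, np.pi / 2))  # 2.0 (y axis)
print(ellipsoid_radius(3.0, 2.0, 1.0, np.pi / 2, 0.0))  # 1.0 (z axis)
```

The resulting R_fil,ijk then replaces the constant radius in the conic-weight computation, so the neighborhood stretches or shrinks with each Cartesian dimension of the design domain.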

Results
This section is divided into two parts. The first validates the proposed additions to the original TO methodology: a concrete case from the sensitivity analysis is chosen, and the machining filter, the ellipsoid-shaped density filter, and the Heaviside step filter are applied separately and together, to show their impact on the results and verify their performance. The second subsection summarizes the conclusions reached in this study, highlighting the points most useful to the potential designers who read this paper.

Validation
In this subsection, six different validation cases are analyzed based on the subcase R = 3 and p = 6, whose data and shape can be seen in Appendices B.1 and B.2. All the figures are colored rainbow-like along the machining direction "i", starting in blue with the elements (1, y, z) and finishing in red with the elements (80, y, z). This gives a clearer view of the results and of the differences between the resultant shapes of the validation cases.


• In case (d), the machining filter is applied to the standard case.
• Case (e) combines case (b) and case (c); that is, the standard case with the Heaviside step filter and the ellipsoid-shaped density filter.
• Finally, case (f) shows the resultant shape of case (e) combined with the machining filter. In this case, the three additions described in Section 4 are applied.
For cases (c), (d), (e), and (f), which are the cases where the Heaviside filter is applied, the searching methodology described in [40] is used, updating the design variables every 25 iterations or when the updating conditions are met. Finally, case (g) is showcased, where the number of iterations between design variable updates is reduced to 10 while keeping the conditions of case (f). This is done to reduce the total number of iterations and to search for an alternative design. A video showcasing case (g) and the AVD described in [40] is available in the Data Availability Statement section.
Relevant validation data are presented in Table 2, which shows the number of iterations before convergence, the measure of non-discreteness, the machinability, and the compliance (as objective function) of all the verification cases.

Conclusions
Through this study, several conclusions can be drawn regarding the parameters to be considered for designers when using TO as the design methodology, as well as how to fix the issues found.The proposed solutions can be interpreted as potential implementations for commercial software packages in the future, as this kind of methodology has not been implemented yet.
In the sensitivity analysis, modifications of the penalization and filter radius are analyzed in a concrete 3D case. A total of 92 simulations are showcased, presenting the most complete study of this kind found in the bibliography. The results are discussed in Section 3, but additional notes are included regarding the paper's objective of aiding designers. When designing functional parts, it is important to aim for a low M_nd and high machinability. Additionally, the objective function (compliance) should be minimized while meeting the prescribed constraints, the volume fraction in this case study.
The main issue detected in the sensitivity analysis is that a low measure of non-discreteness M_nd and high machinability cannot be achieved at the same time. This can be seen in Figure 7, and especially in Figure 8. As the volume fraction is the same for all the simulations, the objective clearly depends on the penalization, with better results for low penalizations, as Figure 9 and Appendix B.1 evidence. However, worse non-discreteness results are obtained for low penalization factors (Figure 7). Machinability trends show that results with a higher filter radius are more manufacturable, as shown in Figure 7.
Recognizing the limitations in achieving good machinability and non-discreteness, the need for additional tools to achieve the best results in terms of interpretation and manufacturability is acknowledged.For this paper, three additions have been studied and applied (two of them being a novelty): the machining filter, the ellipsoid-shaped density filter, and the Heaviside step filter.The machining filter was developed for obtaining machinable parts in a concrete direction.The ellipsoid-shaped density filter has the aim of obtaining results with concrete features in certain selective directions (looking at, for example, the results of the sensitivity analysis).The Heaviside step filter is applied for obtaining low interpretation of the results or, in other words, low measure of non-discreteness.
Table 2 and Appendix C evidence the successful addition of the three filters, separately and combined. In the cases where the machining filter is applied (cases (d) and (f)), a machinability of 100% is achieved at a higher computational cost (more iterations until convergence and more time per iteration). In the cases where the Heaviside step filter is used (cases (b), (d), (e), and (f)), M_nd is noticeably lower than in the "standard" cases. Finally, in the cases where the ellipsoid-shaped density filter is applied (cases (c) and (e)), the machinability is slightly better than in the cases without the filter.
Considering this, and especially looking at cases (f) and (g), the behavior and effectiveness of the developed filters are verified, as a low M_nd is obtained while achieving 100% machinability. The increase in iterations until convergence is a normal consequence of adding extra constraints (filters, in this case) that increase nonlinearity. An increase in the objective function with respect to the standard case is also observed, which is an expected result. By modifying the AVD [40] parameters, convergence can be achieved while maintaining good discreteness and 100% machinability. Geometrical restrictions introduce dramatic limitations on the possible optimum shapes, but they are the only way to mathematically ensure that an optimum design is reached and to allow designers to use the obtained shapes with little interpretation of the results.
In summary, the sensitivity analysis carried out in this study is a good tool for designers choosing the initial parameters of their TOs. It can also be useful for imposing certain geometrical features through the ellipsoid-shaped density filter. The Heaviside step filter is a valuable tool for obtaining designs that require little interpretation (with high discretization of the density field), ensuring a good results interpretation stage. The presented machining filter is shown to be an interesting tool for obtaining machinable parts in a given direction, and it has been demonstrated that it can be a powerful tool in combination with the other additions presented in the paper. Nevertheless, a convergence or searching strategy should be used, such as Adaptive Variable Design (which has proven its effectiveness), other optimization algorithms, or a custom user strategy.

Figure 1 .
Figure 1.Simplified flowchart of the density-based topology optimization process.


Figure 2 .
Figure 2. Graphical representation of the SIMP interpolation power rule for penalization values between 1 and 100.


Figure 3 .
Figure 3. Study load case and design domain.


Figure 4 .
Figure 4. Element directly accessible for a tool detection process.In blue, solid phase; in green, the element under analysis in coordinates (x,y).Red arrows indicate hypothetical tool trajectory.

Figure 5 .
Figure 5. Element accessible for a tool through a neighbor detection process.In blue, solid phase; in green, the element under analysis in coordinates (x,y).Red arrows indicate hypothetical tool trajectory.


Figure 6 .
Figure 6.Comparison of the iterations and time per iteration in every TO.(a) Sim.ID vs. Iterations graph: dots are the discrete data, and the blue line represents the corrected data.(b) Sim.ID vs. Mean Iteration Time: dots are the discrete data, and the blue line represents the corrected data.


Figure 8 .
Figure 8. Comparison of machinability and measure of non-discreteness. Dots are the discrete data, and the blue line represents the corrected data.

Figure 7 .
Figure 7.Comparison of the measure of non-discreteness and machinability in every TO.(a) Non-Discreteness vs. Sim.ID: dots are the discrete data, and the blue line represents the corrected data.(b) Machinability vs. Sim.ID: dots are the discrete data, and the blue line represents the corrected data.


Figure 9 .
Figure 9.Comparison of Sim.ID and objective function results.Dots are the discrete data, and the blue line represents the corrected data.


Figure 10 .
Figure 10.Flowchart of the TO method, with the machining filter highlighted in pink for clarity of later explanations.Coloured in blue, the Optimality Criteria loop is showcased.In pink, the machining filter step is represented.


Figure 11 .
Figure 11. Machining filter, step by step, in x direction. Yellow arrows represent the distance of the filtered element to the next non-machinable element in the x direction. Green arrows represent the element from which the filtered element adopts its new density. (a) Initial blueprint design. (b) Highlighted in pink, the elements covered by the filter radii. (c) First column filtering. (d) Design after first column filtering. (e) Element detection for subsequent columns (second column represented). (f) Filtering of subsequent columns (second column represented). (g) Design after second column filtering. (h) Final design after filtering.


Figure 12 .
Figure 12.Geometrical parameters of the ellipsoid for obtaining its radius from the Cartesian coordinates of the elements being analyzed.


Figure 13 .
Figure 13.OC routine with the machining filter highlighted in pink and the filtering steps colored in green.
• Case (a) of Figures A1-A4 of Appendices C.1-C.4 represents the standard case without any additional filter.
• Case (b) of Figures A1-A4 of Appendices C.1-C.4 is the resultant shape of the standard case in combination with the Heaviside step filter, with η = 0.5.
• In Figures A1c-A4c of Appendices C.1-C.4, the ellipsoid-shaped density filter is applied to the standard case. Fixing the column of p = 6 in Appendices B.2-B.4, the radius of each Cartesian direction is chosen to obtain, in that axis, the features of the corresponding case of the sensitivity analysis. In this case, R = 6, R = 5, and R = 3.5 are used as filter radii to obtain a more manufacturable result with the features of R = 6 and p = 6 (Appendix B.4) in the x direction, R = 5 and p = 6 (Appendix B.4) in the y direction, and R = 3.5 and p = 6 (Appendix B.3) in the z direction.



Table 1 .
Chosen study filter radii, the reason for choosing each radius, and the percentage of the elements in the largest dimension that each radius represents. OS = Other Studies. ED = Euclidean Distance. LV = Large Values.