From μCT data to CFD: an open-source workflow for engineering applications

The generation of high-quality volume meshes from µCT data is a challenging task that is mandatory for CFD simulations of fluid flows within or around scanned objects. Despite the growing relevance of CT-based CFD simulations, a standardised, free-to-use workflow is still lacking. In this work, an open-source-based workflow is presented that covers all steps from the CT image stack to the final volume mesh. The software packages ImageJ, ParaView, and Blender are used for surface reconstruction and processing of µCT data; the resulting meshes are evaluated regarding geometric parameters, surface fidelity, and memory usage. For volume meshing, the mesh generator snappyHexMesh implemented in OpenFOAM is used. A detailed description of this workflow is provided in the Supplementary Materials. We demonstrate how potential pitfalls and shortcomings can be avoided and why surface smoothing is especially important to preserve the surface area of scanned objects. Using CT data of a 10 ppi open cell foam, we illustrate that data decimation can reduce the time required for volume meshing by up to 45% and the RAM by up to 75%. The resulting meshes are used to simulate flow fields and heat transport with a surface heat source. The resulting temperature fields show that differences in the surface area and recesses in an unsmoothed mesh can affect the outcome of heat transport simulations, which is highly relevant for reactive CFD simulations. The results highlight the quality of our workflow's outcome, which can help engineers who rely on CT reconstruction processes, also in other applications.


Introduction
Computational Fluid Dynamics (CFD) is a prominent method in the engineering community to obtain deep insights into flow, temperature, and/or species fields in and around objects of interest in single- and multi-phase environments. While in the first stages of CFD modeling the objects were trivial and generated manually [e.g. flow around bluff bodies (Krajnovic & Davidson, 2000)], nowadays it is possible to investigate complex structures like porous materials (Wood et al., 2020). The increasing object complexity requires the invention of novel tools to digitize the geometry. One possible way is to use surrogate structures based on geometrical parameters of the real object. For example, Voronoi tessellations are frequently used to generate digital replicas of Open Cell Foams (OCF) (Wehinger et al., 2016), and most recently an open-source workflow to generate such structures was presented (Agostini et al., 2022). However, such surrogates are only applicable if local phenomena in the real object are not important. To implement a real object into a CFD model, it must be scanned and the resulting data must be processed properly. Possible methods of collecting the geometrical data are laser and radar scanners, sonar, Nuclear Magnetic Resonance (NMR), or X-ray Computed Tomography (CT) (Rychlik et al., 2004). The latter is by far the most popular. The combination of CT with CFD simulations is referred to as image-based CFD.
The first field to combine CT with CFD was biomedical engineering. In 1993, first attempts were made to investigate the blood flow in a stenosed arterial bifurcation (Tasciyan et al., 1993). Since then, numerous works have been published regarding the cardiovascular system (Kagadis et al., 2008; Qiu et al., 2018; Tippe et al., 1999) and the respiratory tract (Inthavong et al., 2009; Nowak et al., 2003; Quadrio et al., 2016; Shukla et al., 2020; Tabe et al., 2021). It took many years for image-based CFD to find its way into other engineering disciplines. Apart from the pioneering work of Menegazzi and Trapy (1997), who analysed an engine cooling system using a combination of CFD and CT data, frequent usage started only in the late 2000s (Tabor et al., 2008). Over the last decade, extensive research has been carried out based on image-based CFD, including works on droplet spreading in porous media (Aboukhedr et al., 2017), electrolyte flow in batteries (Emmel et al., 2020), conjugate heat transfer in monolithic catalyst supports (Sinn et al., 2020), and even reaction simulations in catalytic reactors (Dong et al., 2018). The application fields grow as rapidly as the need for specific knowledge of the whole process towards the simulation of real objects.
Converting CT data to volume meshes, which are the basis for Finite Volume Method-based CFD, is by no means trivial. The amount of data can be immense and thus challenging to handle. This is especially true for fine geometries like porous media, which is why data reduction is of utmost importance. Hence, the pre-processing, which is already considered one of the most time-consuming steps in CFD simulations (Ali et al., 2019), becomes even more challenging. Surprisingly, although much knowledge on and many methods for the meshing of CT data exist (see Chapter 2), the meshing procedure for these complex geometries is still very time consuming and prone to error. The main reason is probably a lack of standardised meshing workflows that can deal with complex geometries and are easy to use for CFD engineers who are not experts in digital image processing and surface reconstruction. This paper presents a straightforward, 'easy-to-use' workflow for processing CT image sequences into sound volume meshes, which can be used as the discretised calculation domain for CFD simulations. The workflow is based on open-source software prominent in the scientific community. Our aim is to show a standardised way to generate a sound mesh, both to facilitate the pre-processing and to overcome additional challenges of image-based CFD. Furthermore, the influence of the process steps on the morphology of sample objects is shown, as well as the influence of smoothing surface meshes on the CFD results of conjugate heat transfer simulations in an OCF reactor. The workflow is explained in detail throughout this manuscript and the Supplementary Materials. Furthermore, the utilized STL files and simulation cases are uploaded to a repository. The following key questions will be addressed in the present manuscript:
• Is it reasonable to use uncompressed CT data for meshing?
• How do volume and surface preservation depend on smoothing?
• Are there significant differences in CFD results between smoothed and unsmoothed meshes?

Theory
Several works have been published describing methods to either create proper surface meshes or to create volume meshes directly out of the image stacks of CT scans (Inthavong et al., 2009; Manmadhachary et al., 2016; Wang et al., 2010; Young et al., 2008). Almost all the research on converting CT data to digital meshes was done in the field of biomedical engineering, which is why the starting point in many cases is Digital Imaging and Communications in Medicine (DICOM) data. These files contain the CT data as image stacks or voxel data, the basis for further processing. In the following, common meshing methods are grouped into direct and indirect methods (visualized in Figure 1).

Direct methods
In direct methods, the volume mesh is generated directly from voxel-based CT data. This clearly has the advantage of few process steps and the possibility to automate the meshing process to a large extent. A very basic and simple approach converts the voxel-based data into a hexahedral mesh (Approach 1 in Figure 1). This method is very robust and the resulting mesh is consistent. However, one of the big disadvantages is a stepped surface and hence a large overestimate of the surface area (Young et al., 2008). In the case of a sphere, this overestimate can be about 50%, depending on the ratio between voxel size and sphere radius. Another direct method was shown by Müller and Rügsegger (1995), which is based on the well-established Marching Cubes (MC) algorithm (Lorensen & Cline, 1987) and is referred to as Volumetric Marching Cubes (VoMaC, see Approach 2 in Figure 1). The VoMaC algorithm subdivides a voxel into tetrahedrons, making it possible to use MC and create a volume mesh in one step. Young et al. (2008) improved the VoMaC algorithm further to enable multi-region meshing and better control over cell size and cell type. Using MC without any further processing can still lead to a considerable overestimate of surface areas (Lindblad & Nyström, 2002).

Figure 1. The approaches presented in the literature are divided into direct and indirect methods. While approaches of the first group directly create volume meshes out of CT data, the indirect methods process the CT data to create a surface mesh, and additional software has to be used for creating the volume meshes. Detailed descriptions: The first approach generates cubes out of the voxels, while the second approach (Volumetric Marching Cubes, VoMaC) uses the Marching Cubes (MC) algorithm as a basis. The third approach utilizes Non-Uniform Rational B-Splines (NURBS) to represent the contours, the fourth approach is the classical MC algorithm, and the fifth approach works with a point cloud based on grey prediction (GP) modelling.
Generally, direct methods are barely used in engineering sciences.

Indirect methods
Indirect methods firstly create sound surface meshes, which are commonly saved as 'Standard Tessellation Language' (STL) files. These files are used in a second step to create a discretisation of the calculation domain. Although Young et al. (2008) already showed an improved method for the direct creation of a volume mesh from 3D image data, most approaches from CT to CFD still use this intermediate step of surface reconstruction (Lintermann, 2021; Meinicke et al., 2017a, 2017b; Sadeghi et al., 2020; Sinn et al., 2019). The main reason is probably the fact that image processing, surface reconstruction, and simulation software are not yet integrated into one single package. Furthermore, it is easier to detect flaws or tackle problems if the process is divided into several steps. Three indirect methods can be distinguished. The first indirect method creates three-dimensional models by reconstructing the surface for every CT image and extracting data points representing the contours (Approach 3 in Figure 1). These points are fitted with B-spline or Non-Uniform Rational B-Spline (NURBS) curves and lofted throughout the sequence of CT images to create a three-dimensional surface model (Grove et al., 2011; Manmadhachary et al., 2016; Wang et al., 2010). On the one hand, this approach ensures a smooth surface representation and a substantial data reduction. On the other hand, several steps are required, alongside in-depth knowledge of NURBS-based computing and fitting (Grove et al., 2011). The second indirect method is based on the MC algorithm, which creates a triangulated surface mesh from the voxel-based data (Approach 4 in Figure 1). This approach is state of the art and therefore implemented in many software packages, but it suffers from uncertainties regarding a precise surface representation (Lindblad & Nyström, 2002).
Within this approach, more sophisticated variants have been presented, combining the setting of the Hounsfield value, creation of an iso-surface, and smoothing of the surface (Yoo, 2011). Nonetheless, such sophisticated variants are very specific to biomedical applications, and the accuracy of the resulting STLs in terms of surface area is hardly known. In the third indirect method, grey prediction modelling is used to create point data from CT images (Approach 5 in Figure 1). Thereafter, the points of adjacent layers are connected (triangulated) and an STL file can be generated (Wang et al., 2010). The disadvantage is once again an unknown surface fidelity, which is particularly dependent on the resolution of the CT scan as well as the accuracy of the grey prediction modelling.
The process steps used in the open-source workflow presented here are closest to Approach 4. We apply an intermediate step of surface reconstruction using an algorithm similar to MC (see Section 3.3.2).

General workflow
The workflow applied in this work contains an intermediate step of surface reconstruction and is thus an indirect method. The main reasons are better control over every process step and the broad availability of the required algorithms in existing software. Beyond that, established CFD software like OpenFOAM ® (Weller et al., 1998) and STAR-CCM+ (Siemens PLM, Plano, Texas, USA) uses STL files as the basis for creating volume meshes. The overall process steps are shown in Figure 2. After μCT (micro-computed tomography) scanning, the voxel-based data is available as an image sequence. These images are processed (ImageJ), the surface is reconstructed, and an STL file is generated (ParaView). The resulting STL file is further processed by decimating the data and smoothing the surface (Blender). For the comparative geometries with known surface and volume (sphere and Periodic Open Cell Structure, POCS), the final STL files are assessed regarding volume and surface fidelity. For the 10 ppi OCF, volume meshes are generated and CFD simulations are performed to examine the influence of surface mesh smoothing on the volume mesh generation and the CFD results.

Geometries and μCT scanning
Apart from a 10 and a 20 ppi OCF, we investigate a simple sphere geometry as well as a POCS with known surface and volume to assess the fidelity of the geometrical parameters. The POCS has a cubic unit cell and is generated using the script-based, open-source software OpenSCAD (openscad.org). Both strut diameter and unit cell height are comparable to the values of the 10 ppi OCF. Afterwards, the POCS is 3D printed using an ELEGOO ® Mars UV photocuring printer (Shenzhen, China) with an x-y resolution of 47 μm and a z-axis accuracy of 1.25 μm. The photopolymer resin used is the ELEGOO LCD UV 405 nm rapid resin. The open cell foams (manufactured by Hofman Ceramics GmbH, Breitscheid, Germany) as well as the sphere are made from alumina. A standard calliper is used to measure the diameter and height of the samples. The sphere has a diameter of 12.04 mm. The OCFs have a diameter of 25 mm and a height of 22 mm. The geometrical parameters of the objects are summarized in Table 1.

Figure 2. Schematic of the workflow from the physical object to the final volume mesh. The investigated objects are a sphere, a POCS, and a 10 and a 20 ppi OCF. The surface mesh represents the geometry, which is the basis for volume mesh generation. The process steps are performed with ImageJ (I), ParaView (II), Blender (III) and OpenFOAM (IV). All scale bars represent 5 mm.

Table 1 (extraction fragment; the full table summarizes the geometrical parameters of the objects): Strut diameter [mm]: 1.5, 1.5 ± 0.5 (c), 0.87 ± 0.13 (e); a preceding row reads 3.3 ± 0.9 (c) and 1.9 ± 0.3 (d). Footnotes: a Measured with calliper. b Obtained from surface meshes. c Sinn et al., 2020. d Sadeghi et al., 2020. e Calculated with literature correlation (Inayat et al., 2011).
High-resolution X-ray tomography is used to generate digital representations of the examined geometries (X-ray source: XS160NFOF, GE Sensing & Inspection Technologies, USA; detector: Shad-o-Box 6K HS, Teledyne DALSA, Canada). All scans are performed at TU Dresden using a molybdenum target. The acceleration voltage is set to 90 kV and the cathode current to 170 μA, with an exposure time of 0.18 s for the POCS and 1-1.1 s for the other geometries. The angular resolution of 0.25° results in 1440 radiograms per sample. The geometric magnification is 6.05 for the sphere sample, 3.05 for the OCF geometries, and 2.05 for the POCS. The resulting voxel size of the reconstructed data is 8.18 μm for the sphere, 16.23 μm for the foams, and 24.15 μm for the POCS. Compared to a minimum resolution of 10 voxels per sphere radius reported in the literature, the resolutions of the datasets used here are more than sufficient, with a minimum of 31 voxels per strut radius for the POCS.
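The reported numbers can be tied together with a small consistency check. In cone-beam CT, the voxel size equals the detector pixel pitch divided by the geometric magnification; the pitch itself is not stated above, so it is back-computed here:

```python
# Consistency check for the scan parameters: voxel size = detector pixel
# pitch / geometric magnification. The pitch is not stated in the text,
# so it is recovered from the reported voxel sizes and magnifications.
pairs = {                 # sample: (voxel size [um], geometric magnification)
    "sphere": (8.18, 6.05),
    "foams":  (16.23, 3.05),
    "POCS":   (24.15, 2.05),
}
pitch = {name: vox * mag for name, (vox, mag) in pairs.items()}
# All three samples imply the same pitch of ~49.5 um, so the reported
# voxel sizes and magnifications are internally consistent.
```

All three samples imply a pitch of about 49.5 μm, confirming that the reported values are internally consistent.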

Post-processing and surface mesh
A standard operating procedure for the process steps can be found in the Supplementary Materials. The workflow from μCT images to the processed surface mesh can be divided into three main steps: (I) post-processing of the image stacks in ImageJ (median filter, binning, and binarisation), (II) surface reconstruction and STL generation in ParaView (contour filter), and (III) file size reduction and surface smoothing in Blender (decimate collapse algorithm, simple smoothing algorithm).
These steps are performed consecutively. To generate an STL file out of the CT data, the binarisation in step I and the surface reconstruction in step II are the only mandatory steps. The other optional steps help to reduce the file size and improve the surface fidelity of the objects. In the following, we describe the process steps in more detail.

ImageJ
The post-processing of the CT data (step I) is done using the open-source software ImageJ Fiji (https://imagej.net/software/fiji/). Firstly, the image stacks are cropped to the region of interest and a median filter (radius 3 px) is applied. This removes artifacts and noise in the images and is beneficial for the upcoming edge detection (Bovik et al., 1987; Reddy et al., 2017). Thereafter, the images are binned with a factor of 0.5 in all three directions, so that only half of the pixels are left in each direction. Consequently, the voxel size is doubled and the data is reduced significantly (to 1/8 of its size). This step is optional and reduces both the file size and the resolution of the data. However, in the case of the OCFs, binning is necessary to be able to process the data further. For the sphere and the POCS, both the binned and the original image stacks are used from now on to be able to analyse the differences in the resulting STL files. To reconstruct the surface, object and background pixels are identified and separated by applying the default auto-threshold in ImageJ to the grey values of the images. This algorithm is based on the iterative procedure published by Ridler and Calvard (1978): in every iteration, the average grey values of the background and object pixels are computed and the threshold is set to the arithmetic mean of these two values. After thresholding, the images are segmented and the pixel values are normalized by the object value to obtain a pseudo-binary image stack (values of 0 and 1 only).
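The iterative thresholding scheme described above can be sketched in a few lines. This is an illustration of the Ridler-Calvard procedure on a synthetic image, not a copy of ImageJ's implementation:

```python
import numpy as np

def isodata_threshold(img, tol=0.5, max_iter=100):
    """Iterative threshold after Ridler & Calvard (1978): set the threshold
    to the mean of the average background and average object grey values and
    repeat until it no longer changes."""
    t = float(img.mean())
    for _ in range(max_iter):
        obj, bg = img[img > t], img[img <= t]
        if obj.size == 0 or bg.size == 0:
            break
        t_new = 0.5 * (obj.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Synthetic slice: dark background (~40) with a brighter square object (~200).
rng = np.random.default_rng(0)
img = rng.normal(40.0, 5.0, (64, 64))
img[16:48, 16:48] = rng.normal(200.0, 5.0, (32, 32))
t = isodata_threshold(img)                # converges roughly midway (~120)
binary = (img > t).astype(np.uint8)       # pseudo-binary slice (0/1 only)
```

For well-separated grey-value populations, the threshold converges after only a few iterations.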

ParaView
This image stack is loaded into ParaView (paraview.org) for surface reconstruction (step II). The contour filter is used to create an iso-surface representing the surface of the objects. This filter is part of the Visualization Toolkit (VTK) library (Kitware Inc., Clifton Park, New York, USA). It detects points with a certain value (in this case 0.5) within the interpolated data set and connects them to form a triangulated surface. This surface can be exported directly as an STL file. This approach is trivial but also very efficient, since the outcome is almost identical to that of the MC algorithm, because the MC algorithm also uses linear interpolation. MC is a divide-and-conquer approach creating cells (marching cubes) that are bounded by the centers of 8 pixels of two adjacent slices (Lorensen & Cline, 1987). After finding the cells at the borders, linear interpolation is used to approximate the intersection points of the marching cubes with the iso-surface (Lindblad & Nyström, 2002; Lorensen & Cline, 1987). In this work, the interpolated binary values of each slice are used for surface reconstruction, resulting in a similar surface appearance.
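The interpolation step shared by the contour filter and MC can be sketched as follows (a minimal illustration, not VTK's implementation). It also shows why pseudo-binary data yields a stepped surface:

```python
import numpy as np

def iso_crossing(p0, p1, v0, v1, iso=0.5):
    """Linear interpolation of the point where the iso-surface crosses the
    edge between two voxel centres p0 and p1 with scalar values v0 and v1:
    solve v(t) = iso on the segment."""
    t = (iso - v0) / (v1 - v0)
    return p0 + t * (p1 - p0)

# With pseudo-binary data (0 = background, 1 = object), every crossing of
# the 0.5 iso-surface lands exactly midway between two voxel centres,
# which produces the stepped surface appearance discussed in Section 3.3.3.
p = iso_crossing(np.array([0.0, 0.0, 0.0]),   # background voxel centre
                 np.array([1.0, 0.0, 0.0]),   # object voxel centre
                 0.0, 1.0)
```

Because all crossings sit at mid-edge positions, the reconstructed triangles can only take a few discrete orientations, which is the geometric origin of the surface-area overestimate.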

Blender
In step III, the mesh is decimated and smoothed. The size of the STL file can be huge, depending on the complexity of the geometry and the resolution of the CT scan (in our case between 250 MB for the binned sphere sample and 12.3 GB for the original 20 ppi OCF, if saved as binary files). Such large files cause several problems. Firstly, they cost time, since the computational time for the volume mesh increases with file size. Secondly, considerable hardware resources (especially RAM) are required to process the data further (and also to generate volume meshes, see Section 4.2). Further processing is also required to tackle problems like non-manifold edges or unconnected vertices, which can result from faulty CT scans. The data is thus reduced using the open-source software Blender ® (blender.org). The built-in decimate modifier is used in collapse mode with a factor of 0.1, so that only 10% of the faces are retained. The decimation with the chosen factor of 0.1 does influence the shape of the geometries, which is shown exemplarily for the 10 ppi OCF in the Supplementary Materials (Section S1.2). The decimate modifier uses an iterative edge contraction algorithm based on quadric error metrics, comparable to the one proposed by Garland and Heckbert (1997, 1998). In contrast to binning (step I), decimation reduces the file size while preserving the shape of the geometry as much as possible, so only little geometrical information is lost. A more detailed description of the decimate algorithm can be found in the Supplementary Materials (Section S1.1).
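The error metric behind this decimation can be illustrated in a few lines. This is a sketch of the quadric bookkeeping only (following Garland & Heckbert, 1997), not Blender's implementation:

```python
import numpy as np

def plane_quadric(n, d):
    """Fundamental error quadric K_p = p p^T of the plane n.x + d = 0 with
    unit normal n (Garland & Heckbert, 1997)."""
    p = np.append(n, d)
    return np.outer(p, p)

def quadric_error(Q, v):
    """Sum of squared distances of vertex v to all planes accumulated in Q,
    evaluated as vh^T Q vh with homogeneous vh = (v, 1)."""
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)

# Accumulate the quadrics of two orthogonal planes through the origin
# (z = 0 and x = 0), mimicking the triangle planes around a mesh vertex.
Q = plane_quadric(np.array([0.0, 0.0, 1.0]), 0.0) \
  + plane_quadric(np.array([1.0, 0.0, 0.0]), 0.0)

err_on  = quadric_error(Q, np.array([0.0, 5.0, 0.0]))  # on both planes -> 0
err_off = quadric_error(Q, np.array([1.0, 0.0, 2.0]))  # 1^2 + 2^2 -> 5
```

During decimation, the edge whose collapse introduces the smallest such error is contracted first, which is why flat regions lose many triangles while curved features are preserved.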
As a last processing step for the surface mesh, smoothing is performed to correct the stepped surface representation. For this, the smooth modifier in Blender is used. The algorithm is very simple and based on averaging of edge center points (detailed description in the Supplementary Materials, Section S1.1). In this work, two iterations and a factor of 1 are used, since the results show good agreement with the real surface (Section 4.1).
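A minimal sketch of such an averaging smoother is given below. It is illustrative only: Blender's smooth modifier operates on mesh edges, whereas here a 2D polyline stands in for a stair-cased surface profile:

```python
import numpy as np

def smooth(verts, neighbors, factor=1.0, iterations=2):
    """Minimal averaging smoother: move every vertex towards the mean of
    its neighbours, repeated a fixed number of times (an illustration of
    the factor/iteration idea, not Blender's smooth modifier itself)."""
    v = np.asarray(verts, dtype=float).copy()
    for _ in range(iterations):
        avg = np.array([v[nb].mean(axis=0) for nb in neighbors])
        v = v + factor * (avg - v)
    return v

# A stepped 2D polyline; the end points list only themselves as
# neighbours, so they stay pinned in place.
verts = [[0, 0], [1, 1], [2, 0], [3, 1], [4, 0]]
neighbors = [[0], [0, 2], [1, 3], [2, 4], [4]]
out = smooth(verts, neighbors, factor=1.0, iterations=2)
# After two iterations with factor 1, the peak-to-peak height of the
# steps is halved (1.0 -> 0.5) while the x-coordinates are untouched.
```

The step amplitude shrinks with every iteration, which is exactly the mechanism that removes the staircase pattern (and, if overdone, would also shrink real features).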

Volume mesh
The STL files can serve as the basis for the volume mesh generation using a variety of meshing and/or simulation software. This is step IV of the presented workflow. Here, only the reconstructed surface of the 10 ppi OCF is used for volume meshing and simulation. Its complex structure serves as a model for porous geometries, which are nowadays often used in engineering applications. Both single- as well as multi-region meshes are generated to be able to evaluate the performance of the different STL files in both cases.
The mesh generator snappyHexMesh (sHM) implemented in the software OpenFOAM is used. While most other mesh generators build the mesh from scratch, sHM starts from a given mesh and modifies it to fit the geometry. The base mesh is generated using blockMesh, a purely hexahedral mesh generator implemented in OpenFOAM. In the next step, sHM refines the mesh and snaps adjacent vertices onto the surface of the STL geometry. This is done based on mesh quality targets that can be set separately (see OpenFOAM cases in the repository). The final mesh is not purely hexahedral due to different refinement levels and the resulting tetrahedrons in the transition areas. The meshing is performed on 6 cores of an AMD ® Ryzen ® 7 2700 processor. A detailed workflow can be found in the Supplementary Materials (Section S6).
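For orientation, a heavily stripped-down snappyHexMeshDict could look as follows. All values and the surface name foam.stl are placeholders, not the settings used in this work (those are provided with the OpenFOAM cases in the repository):

```
castellatedMesh true;      // refine the blockMesh background mesh
snap            true;      // snap vertices onto the STL surface
addLayers       false;

geometry
{
    foam.stl { type triSurfaceMesh; name foam; }
}

castellatedMeshControls
{
    maxLocalCells        1000000;
    maxGlobalCells       8000000;
    nCellsBetweenLevels  3;
    features             ();
    refinementSurfaces   { foam { level (2 3); } }   // refine near the STL
    resolveFeatureAngle  30;
    refinementRegions    {}
    locationInMesh       (0.001 0.001 0.001);        // a point in the fluid
}

snapControls
{
    nSmoothPatch 3;
    tolerance    2.0;
    nSolveIter   30;
}

meshQualityControls
{
    maxNonOrtho 65;        // one of the mesh quality targets mentioned above
}
```

A complete dictionary contains further mandatory entries (e.g. addLayersControls and the full set of quality controls); this fragment only shows where the refinement level, the snapping settings, and the quality targets enter.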

Governing equations and simulation setup
All CFD simulations are performed using OpenFOAM (v6, https://openfoam.org/download/6-source/), which is based on the Finite Volume Method. A single-phase, steady-state, and laminar fluid flow is simulated. Single- and multi-region simulations are performed using rhoSimpleFoam and chtMultiRegionFoam, respectively. For detailed explanations of the assumption of laminar flow and the governing equations in the fluid phase, see Section S3 in the Supplementary Materials.
In the case of multi-region simulations, a solid phase is considered. This solid phase can be described solely by the energy equation, which simplifies to

ρ_s ∂h_s/∂t = −∂q_i/∂x_i, with q_i = −λ_s ∂T/∂x_i and dh_s = c_p,s dT,

where x_i are the Cartesian coordinates, q_i denotes the heat flux through the solid due to conduction, λ_s is the solid's thermal conductivity, c_p,s the specific heat capacity of the solid, and h_s the enthalpy of the solid. The solid is assumed to be mullite. To mimic the heat production during a heterogeneous catalytic reaction, a heat source is introduced at the interface between the OCF and the fluid. This boundary condition is coded manually by using an existing boundary condition and introducing a reference temperature gradient at the baffle between solid and fluid (source code in the repository). The constant temperature gradient leads to a constant heat flux at the baffle. The required temperature gradient for a certain heat flux q can be calculated using Fourier's law, q = −λ_s ∂T/∂n, with T being the temperature. The value of the heat flux is set to 6000 W·m⁻², leading to a total heat source of about 47 W. This is based on the heat of reaction of the CO₂ methanation, which is estimated to be 44.1 W for a similar OCF by Sinn et al. (2020). A summary of other model assumptions can be found in Table S1 in the Supplementary Materials. The pressure at all boundaries except the outlet is treated as zero gradient. A no-slip condition for the velocity is set at the reactor wall as well as at the OCF surface. For the single-region simulations, a fixed temperature of 600 K is set at the boundary between fluid and OCF. In the case of the multi-region simulations, the surface-based heat source is applied. See Figure 3 for an overview of the simulation domain and the boundary conditions.
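The numbers above can be tied together in a short worked calculation. Note that the thermal conductivity of mullite is not stated in the text; the value used below (~6 W m⁻¹ K⁻¹) is an assumed literature ballpark, so the derived gradient is only illustrative:

```python
# Worked numbers for the surface heat source. The flux q and the total
# heat are taken from the text; the mullite conductivity is an ASSUMED
# ballpark value (~6 W m^-1 K^-1), not a figure given in the paper.
q = 6000.0              # prescribed heat flux at the baffle [W m^-2]
Q_total = 47.0          # reported total heat source [W]
lambda_s = 6.0          # assumed thermal conductivity of mullite [W m^-1 K^-1]

A_interface = Q_total / q        # implied interface area [m^2] (~78 cm^2)
grad_T = q / lambda_s            # magnitude of the reference temperature
                                 # gradient at the baffle, |dT/dn| = q / lambda_s
```

The implied interface area of roughly 78 cm² is simply Q/q; the reference gradient scales inversely with the conductivity actually assigned to the solid.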

Results
The Results chapter is divided into three main parts. In the first part, the influence of the process steps on the surface meshes is investigated. These process steps are binning in ImageJ (I), surface reconstruction in ParaView (II), and decimating and smoothing in Blender (III). The surface area and the volume are determined using Blender, with the exception of the volume of the image stack, which is calculated from the number of solid voxels in the stack. In the second part, the generation of volume meshes for the 10 ppi OCF is examined. Using two different surface meshes (with and without process step III), the differences with regard to the required hardware and meshing time are shown. In the third part, the two generated volume meshes are utilized for CFD simulations to show the influence of surface smoothing on CFD results.

Sphere
In this section, we compare both the volume and the surface area of the sphere geometry at specific steps of the presented workflow: the image stack, the surface reconstruction (STL), the surface decimation by quadric error metrics, and the smoothed surface (Figure 4). We further compare the influence of binning for all steps. We did not determine the surface area of the image stack, because the voxel-based data cannot represent the real surface sufficiently (see Chapter 2). The values are normalised with respect to the real values of the sphere, A = 4πr² = 4.554 cm² and V = (4/3)πr³ = 914.08 mm³. The difference between binned and original data is very low for both the surface area and the volume (Figure 4(C,D); less than 2% difference during all steps). This clearly shows that the chosen resolution is sufficient to represent the geometry; binning the data does not have a significant influence on the geometric parameters. For all process steps, the volume of the sphere is represented well, with a maximum deviation of 0.5%, indicating a high accuracy of the surface reconstruction and STL processing (see Figure 4(C)). However, the surface area is significantly overestimated after surface reconstruction (see Figure 4(D)). For the binned data, the overestimate is about 9.4%, while for the original data it is 10.8%. This agrees well with the data from Lindblad and Nyström (2002) as well as Nyström et al. (2002), who found an overestimate in surface area of about 8.8% for the marching cubes reconstruction of a sphere. While the decimation does not significantly influence the surface area, it is reduced significantly by the simple smoothing algorithm. After smoothing, the deviation from the real value of the sphere is less than 1%. This surface improvement is visible in Figure 4(A,B): the steps in the surface that cause the overestimate vanish almost completely after smoothing.
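The two observations for the sphere (volume preserved, stepped surface area strongly overestimated) can be reproduced with a toy voxelisation. The 0.1 mm voxel size below is illustrative and much coarser than the scan resolution, chosen only to keep the example small:

```python
import numpy as np

# Voxelise the sphere (r = 6.02 mm, as in the text) at an illustrative
# 0.1 mm voxel size by testing voxel centres against the sphere equation.
r, h = 6.02, 0.1
n = int(np.ceil(r / h)) + 2
ax = (np.arange(-n, n + 1) + 0.5) * h          # voxel-centre coordinates
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
solid = X**2 + Y**2 + Z**2 <= r**2

# Voxel counting reproduces the volume almost exactly ...
vol_voxel = solid.sum() * h**3
vol_true = 4.0 / 3.0 * np.pi * r**3            # 914.08 mm^3

# ... but the stair-stepped voxel surface (total area of faces between
# solid and void voxels) tends towards 1.5x the true area, i.e. the ~50%
# overestimate quoted for purely voxel-based surfaces in Chapter 2.
faces = sum(int(np.abs(np.diff(solid.astype(np.int8), axis=a)).sum())
            for a in range(3))
area_voxel = faces * h**2
area_true = 4.0 * np.pi * r**2                 # 455.4 mm^2 = 4.554 cm^2
```

The marching-cubes-style reconstruction reduces this error from ~50% to the ~9-11% reported above because the interpolated triangles can tilt, but only smoothing removes the remaining staircase bias.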

POCS
For the POCS, both the volume and the surface area are normalised with respect to the digital parent. Similar to the sphere, the process steps do not have a significant influence on the volume of the object (Figure 5(C)). Again, the surface area is overestimated before smoothing (Figure 5(D)), and smoothing once more reduces it by removing the stepped surface (see Figure 5(A,B)). However, the surface area after smoothing is lower than that of the digital parent (Figure 5(D)). Furthermore, the volume is overestimated by 3% to 5% (Figure 5(C)).
The reason for the underestimation of the surface area after smoothing and the overestimation of the volume is rooted in the manufacturing of the POCS. Figure 6(A) shows an orthographic projection of the reconstructed POCS (yellow) superimposed on the digital parent (black). Clearly, the crossing points of the struts are rounded off in the reconstructed POCS, leading to a higher volume and a lower surface area. This is also seen in the reflected light microscope image (Figure 6(B)). The rounding is due to imperfections of the 3D print. We thus still consider smoothing beneficial to generate a surface appearance closer to reality.

OCF
Figure 7 shows how the process steps perform on the OCF geometries. Here, the volume is normalised by the value of the image stack and the surface area is normalised with respect to the STL file. This is because the OCFs have a complex structure and hollow struts, which makes it very hard to measure the volume and especially the geometrical surface area experimentally. Additionally, only the binned data is shown here, since the original data files are very large and no significant impact on the geometrical parameters can be detected.
Still, the process steps do not have a significant influence on the volume (see Figure 7(C)). The maximum deviation is 0.4% for the smoothed 10 ppi sample. Again, the smoothing step leads to a substantial reduction of the surface area, with 12.9% for the 10 ppi and 16.6% for the 20 ppi sample (see Figure 7(D)). Compared to the other geometries, this reduction is considerable. The reason is most likely the complex structure of the OCFs with their small cavities inside the struts. This leads to a higher average curvature of the object, so smoothing has a bigger influence on the surface area. The enhancement of the surface representation can be seen in Figure 7(A,B) as well as in the superimposition in Figure 8. Sharp edges and artefacts are reduced, while the overall geometry is maintained.
Additionally, both the binning and the decimation reduce the data significantly for all samples, as can be seen from Table 2. The original datasets of the OCFs could not be decimated and smoothed, as 64 GB of RAM were not enough for further processing. This illustrates the problem of the huge amount of data, which impedes the handling of the geometries. However, too much binning can also lead to a loss of the overall geometric structure if the voxel size increases towards the size of the smallest structures in an object. Hence, it is beneficial to perform most of the data reduction using the quadric error metrics decimation. In this way, as many geometric details as possible can be used for surface reconstruction, and the data reduction is based on an algorithm specifically made to preserve the surface appearance.
In conclusion, the performed process steps help to reduce the file size without significantly affecting the volume of the objects. Also, the quadric error metrics-based decimation by a factor of 0.1 does not modify the surface area, indicating the good performance of the algorithm. However, the surface reconstructions lead to an overestimate of the surface areas comparable to the deviations reported for the MC algorithm. The simple smoothing algorithm removes the stepped surface appearance, leading to a good representation of the surface area. It clearly does not make sense to use uncompressed data, since the decimated and smoothed meshes represent the real geometry better while requiring less memory.

Volume meshes
For volume meshing, only the 10 ppi OCF meshes are used: the one based on the binned, unprocessed STL (1.3 GB, mesh UP) and the one based on the processed STL (130 MB, mesh P). The meshing procedure and parameters are identical for the two meshes. Since both single-region and multi-region simulations are performed, both types of volume meshes are generated (see Section S4 in the Supplementary Materials for the grid independence study). Figure 9 shows the meshing time as well as the average RAM usage for meshes that reached grid independence. Clearly, both measures are reduced significantly by decimating and smoothing the STL. For the multi-region meshes, the meshing time decreases by more than 45% (Figure 9(A)), while the required RAM decreases by more than 75% (Figure 9(B)). The saved hardware resources are significant, and the same trend is observed for the single-region mesh. The volume difference between the two meshes is below 1%, in accordance with the surface meshes. When comparing the surface area of the volume-meshed OCF (not shown), the difference between the meshes decreases compared to the surface meshes: the surface of the volume mesh from the unprocessed STL is 2.2% (single-region) and 3.9% (multi-region) larger than that of the volume mesh from the processed STL. The volume meshing thus has a smoothing effect on the OCF's surface, which is due to a larger cell size compared to the resolution of the surface meshes (about 32.46 μm resolution of the binned μCT data compared to 100 μm edge length at the OCF interface in the volume mesh, both single- and multi-region). However, the surface fidelity of the processed mesh is still better than that of the unprocessed one. Although the volume mesh cannot resolve the imperfections of the unprocessed surface mesh, recesses on the OCF surface can still be detected. The reason for that lies in the way the volume mesh is generated.
Meshing algorithms try to place or move the nodes of a volume mesh on the projected surface of the surface mesh. If the surface mesh is stepped, defects in the volume mesh are more likely because of the abrupt change of the surface location.
In conclusion, processing the STL increases the performance of the volume meshing process significantly. The meshing time is reduced by up to 45%, and the average RAM usage decreases from more than 25 GB to about 6 GB. Furthermore, the meshing process mitigates the overestimation of the surface area for the unsmoothed STL, having a smoothing effect on the OCF surface.

Influence on CFD results and the importance of smoothing
In the previous section we showed the smoothing caused by the volume meshing. However, differences in the surface area between mesh UP and mesh P are still present, and the surface fidelity differs considerably. In this section we show an application example of the generated volume meshes. It serves as a proof of feasibility, but also evaluates the influence of surface-mesh smoothing on CFD results. Since we do not attempt to reproduce a real experiment, we do not validate the simulation results against measurements.

Single-region simulations
For the single-region simulations, a fixed temperature of 600 K is set at the boundary between fluid and OCF, emulating a heated OCF with infinite thermal conductivity. Since the temperature of the reactor wall is identical to the inlet temperature of the fluid (500 K), the only heat entering the system is caused by the OCF.
To prove the quality of the CFD results, the pressure drop of the OCF for the different velocities is plotted against an analytical correlation in Figure S6 in the Supplementary Materials. The simulated pressure drops are in good agreement with the literature. Comparing mesh UP and mesh P, both flow and temperature fields do not show significant differences, so these results are not discussed here and can also be found in the Supplementary Materials (Section S5). However, the influence of the poor surface representation of mesh UP on the heat transfer becomes visible when evaluating the heat flux through the fluid/OCF boundary. Despite a higher volume (+0.8%) and a higher surface area (+2.2%), the integral heat flow of the OCF in mesh UP is 1.28% lower compared to mesh P (values for a superficial velocity of 0.5 m s⁻¹). The reason is clearly seen in Figure 10(A,B). The heat flux depends on the temperature gradient near the wall and is therefore coupled with the near-wall velocity of the fluid. The smoother surface of mesh P leads to a consistently high velocity gradient near the wall, whereas recesses in the unsmoothed mesh UP cause lower near-wall velocities and thus lower temperature gradients. The consequence is a higher integral heat flow for the smoother mesh.
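The integral heat flow referred to above is the local wall heat flux weighted with the boundary face areas. A hedged post-processing sketch with entirely made-up numbers (in practice the face areas, wall distances, and near-wall temperatures would be sampled from the simulation case; the first-order gradient approximation is our simplification):

```python
import numpy as np

k = 0.03        # assumed fluid thermal conductivity / W m^-1 K^-1
T_wall = 600.0  # fixed boundary temperature from the single-region setup / K

# Illustrative per-face data: area / m^2, wall distance / m,
# near-wall fluid temperature / K (all values invented for the sketch).
face_area = np.array([1.0e-8, 2.0e-8, 1.5e-8])
wall_dist = np.array([5.0e-5, 5.0e-5, 5.0e-5])
T_cell = np.array([520.0, 540.0, 530.0])

# First-order wall heat flux q = -k * (T_cell - T_wall) / d, positive
# when heat flows from the hot wall into the cooler fluid.
q = -k * (T_cell - T_wall) / wall_dist

# Integral heat flow over the boundary: sum of flux times face area.
Q = np.sum(q * face_area)
print(f"integral heat flow: {Q:.3e} W")
```

The sketch makes the coupling explicit: a recess that lowers the near-wall temperature gradient lowers `q` on its faces and therefore the integral `Q`, even if the total face area is larger.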
To conclude, no significant differences between the smoothed and unsmoothed meshes could be detected regarding the pressure drop or the local velocity and temperature fields. However, the comparison of the integral heat fluxes showed a 1.28% higher value for mesh P, although both volume and surface area of the OCF are lower compared to mesh UP. The reason for this are the recesses in mesh UP, which decrease the local heat flux. Smoothing the STL thus has a positive influence on the CFD results.

Multi-region simulations
The multi-region meshes are used to perform conjugate heat transfer simulations. For this, a surface-based heat source is added at the interface between the fluid and the solid region. This is a simple way to mimic the heat production during a heterogeneous catalytic reaction, assuming a homogeneous heat production proportional to the surface area. In this case, a surface power density of 6000 W m⁻² is introduced.
A radial averaging method is used to compare radial temperature profiles (similar to Sinn et al., 2020). We chose two slices, located at the front and back of the foam, which we divided into 10 annuli of equal area. The temperature was averaged within these annuli (Figure 11(A)). Hence, a radial temperature profile can be extracted for the fluid and the solid region at two distinct positions at the front and back of the foam. Results for an inlet velocity of 0.3 m s⁻¹ can be seen in Figure 11(B,C), where the temperatures are plotted over the normalized radius. The solid temperature is higher than the fluid temperature, because a large part of the heat is introduced into the solid due to its higher thermal conductivity. Also, the temperature is higher towards the back of the OCF, since the fluid heats up downstream, effectively reducing its cooling effect. The temperature decreases with the radius, underlining the strong effect of wall cooling through conduction in the foam struts.
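Annuli of equal area follow directly from choosing the outer radii as r_i = R·sqrt(i/N), so that every ring covers the same area πR²/N. A minimal sketch of this averaging (our own implementation, not the exact script used for the figures):

```python
import numpy as np

def radial_average(x, y, T, R, n_annuli=10):
    """Average T within n_annuli equal-area rings on a circular slice.

    x, y: in-plane coordinates of the sample points relative to the axis.
    Equal-area annuli have outer radii r_i = R * sqrt(i / n), so every
    ring covers the same area pi * R^2 / n.
    """
    r = np.hypot(x, y)
    edges = R * np.sqrt(np.arange(n_annuli + 1) / n_annuli)
    # Annulus index of every sample point, clipped to the outermost ring.
    idx = np.clip(np.searchsorted(edges, r, side="right") - 1, 0, n_annuli - 1)
    T_avg = np.array([T[idx == i].mean() for i in range(n_annuli)])
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    return r_mid / R, T_avg  # normalized mid-radii, averaged temperatures
```

Called with the cell-centre coordinates and temperatures of one slice, the function returns the ten normalized mid-radii and the corresponding ring-averaged temperatures that form one profile in Figure 11(B,C).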
Results from Figure 11 show lower temperatures for the smoothed OCF (mesh P) due to the lower surface area. The radial averaging shows an effect on the temperature distribution especially in the centre of the foam (small r/R), with a maximum difference of 9 K. The difference decreases with increasing r/R because of the high conduction through the struts.
Although the difference between the two meshes seems to be quite low, the influence on simulations with catalytic surface reactions can be of importance. The amount of accessible surface area is of utmost importance for heterogeneous catalytic reactions, since it (together with the catalyst loading) determines the number of active sites. As a result, the overestimated surface area in mesh UP would lead to a higher conversion. In exothermic reactions, higher temperatures and additionally accelerated reaction kinetics are the consequences. Thus, small differences in the surface area could have a significant influence on the whole system. Furthermore, the simulation of heterogeneous catalytic reactions is especially dependent on the surface fidelity and the near-wall cells. Reactions take place at the interface between solid and fluid and, numerically, the heat of reaction is introduced into the fluid cells at the solid surface (this is the case for the OpenFOAM add-on DUO, Deutschmann et al., 2020). The resulting high gradients in temperature and species concentration require cells of high quality to guarantee convergence of the simulations and realistic results. Additionally, recesses like the ones we showed for mesh UP can lead to further problems, since the transport of the reaction heat is limited due to missing convection. This makes complex reaction simulations more prone to errors. The improved discretization based on the smoothed STL is therefore beneficial for sound reactive CFD simulations.

Figure 11. Radial averaging of temperature. (A) Illustration of the temperature analysis method. Two slices are located 3 mm downstream of the beginning (front) and 3 mm upstream of the end of the OCF (back). 10 annuli with equal areas were used to perform a radial averaging of the temperatures. (B) Radial temperature distribution in the front slice for the two different meshes. (C) Radial temperature distribution in the back slice for the two different meshes (evaluation method similar to Sinn et al., 2020).
In conclusion, the surface-based heat source method reveals differences between the smoothed and unsmoothed mesh. Both heat flux and temperature are higher for mesh UP, which is due to the overestimation of the surface area. In actual reaction simulations, the influence of the unsmoothed mesh on the results can be even higher, which is why smoothing of the surface mesh is highly desirable.

Discussion
The results clearly show the quality of the workflow's outcome and that both the decimating and smoothing algorithms in Blender are suitable to process the μCT data. While the decimate modifier based on quadric error metrics saves time and resources in volume meshing, the surface smoothing is necessary to preserve the surface area. To our knowledge, this study shows for the first time how a stepped surface representation of unsmoothed CT reconstructions can influence CFD results. Hence, special care should be taken when processing CT data, and the presented workflow can form the basis for that.
The open-source nature gives the opportunity to establish a standardised, freely usable way to prepare image-based CFD simulations. In the literature, the reported ways of processing CT data and generating volume meshes for simulations are very diverse and often both fragmentary and based on commercial software tools. For surface-mesh generation, the software Mimics (Materialise NV, Belgium) is widely used in biomedical applications (Cherobin et al., 2018; Kou et al., 2018; Qi et al., 2018; Sun et al., 2022), while in-house segmentation codes are used as well (Thune et al., 2021). In other disciplines the approaches are more diverse, ranging from commercial software like ScanIP (Della Torre et al., 2014; Yang et al., 2019), Retomo (Aboukhedr et al., 2017), VGStudio Max (Dong et al., 2018), and Octopus (Emmel et al., 2020; developed by Dierick et al., 2004) to open-source software like ImageJ (Emmel et al., 2020; Moreno-Casas et al., 2020; Plachá et al., 2020; Shimotori et al., 2021). Further processing is done in software like SOLIDWORKS (Sun et al., 2022), STAR-CCM+ (Dong et al., 2018; Thune et al., 2021), ANSA (Aboukhedr et al., 2017), and Geomagic Studio (Qi et al., 2018). This diversity, in combination with the commercial nature of the software packages used, makes it difficult for other scientists to reproduce the data, which is essential to maintain good scientific practice. The presented workflow overcomes these challenges and provides a potential guideline for the standardised pre-processing of image-based CFD.
Furthermore, the workflow can easily be extended for special demands. Apart from the influence of surface smoothing on CFD results shown here, the model reconstruction itself also impacts CFD results, as was shown by Cherobin et al. (2018): different Hounsfield value thresholds can change both the geometry and the pressure loss and flow rate in simulated nasal airflow. However, in the present study of a single-component material, the default auto threshold in ImageJ preserved the geometrical parameters very well, which is why there was no need to consider threshold differences. For applications in biomedical engineering with a large number of different tissues to be segmented, the choice of a proper threshold can be of higher importance. Also, in special applications like image-based CFD of cement pores, deconvolution techniques have to be used to make segmentation possible at all (Yang et al., 2019). Consequently, the present workflow has to be extended with a special focus on the segmentation of the CT data if such applications are considered.

Conclusion
In this work, we introduce an easy-to-use workflow for the processing of CT image sequences into sound volume meshes. This workflow is based on open-source software, and the resulting meshes can be used directly for CFD simulations. A detailed description of the workflow including the STL data and CFD cases is provided in the Supplementary Materials. The basic steps of the workflow are: [I] post-processing of the CT image stack (ImageJ), [II] reconstruction of the surface (ParaView), [III] decimating and smoothing of the surface mesh (Blender), and [IV] volume meshing as a basis for CFD simulations (snappyHexMesh in OpenFOAM). It was possible to reduce the STL file size by 90% without influencing the surface area or the volume. By using geometries with known volume and surface area, the surface meshes could be evaluated in terms of the preservation of morphological parameters. While the volume was almost constant throughout all the process steps, the surface area was overestimated by up to 10.8% due to the stepped surface after surface reconstruction. Smoothing of the surface resulted in a better agreement with the real values. In summary, processing the CT data reduced the file sizes to a large extent and improved the overall surface fidelity. This reduced the volume meshing time by up to 45% and the RAM usage by up to 75%.
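The morphological evaluation mentioned above can be performed directly on the triangulated surface: the area is the sum of the triangle areas, and the enclosed volume follows from the divergence theorem as a sum of signed tetrahedra. A minimal sketch (our own helper, assuming a closed, consistently oriented mesh):

```python
import numpy as np

def mesh_area_volume(vertices, faces):
    """Surface area and enclosed volume of a closed triangle mesh.

    vertices: (n, 3) float array; faces: (m, 3) int array of vertex
    indices with consistent outward orientation. The volume is the sum
    of the signed tetrahedra spanned by each triangle and the origin.
    """
    tri = vertices[faces]                      # (m, 3, 3) triangle corners
    a, b, c = tri[:, 0], tri[:, 1], tri[:, 2]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = np.einsum("ij,ij->i", a, np.cross(b, c)).sum() / 6.0
    return area, abs(volume)

# Sanity check on a unit cube (12 outward-oriented triangles):
# expected area 6 and volume 1.
v = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
f = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
              [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
              [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
area, vol = mesh_area_volume(v, f)
print(area, vol)  # 6.0 1.0
```

Running such a check on the reconstructed STL at every process step is how deviations like the 10.8% area overestimation can be quantified against geometries of known dimensions.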
Furthermore, CFD simulations were performed to assess the influence of surface smoothing on the simulation results. Recesses in the unsmoothed mesh led to a poor representation of the heat flux at the fluid/solid interface. Moreover, the applied surface heat source showed higher temperatures in the unsmoothed mesh due to its larger surface area. Hence, smoothing of surface reconstructions is advisable and can be of high importance if the surface area or the quality of the near-wall cells is especially important (e.g. in the case of catalytic surface reactions).
Finally, we discussed the findings and the workflow itself by comparing it with recent literature in the field of image-based CFD. To date, a wide range of (often commercial) software is used, which makes it difficult to reproduce data. Our workflow offers the opportunity to overcome this challenge, since it relies only on open-source software. While the application to single-component materials is our main focus, the workflow can easily be extended for special applications, where the choice of a proper threshold during segmentation is of higher importance.
In summary, we showed that processing CT data with open-source software can save both time and resources and deliver high-quality meshes for subsequent CFD simulations. The provided standardised workflow facilitates the work of engineers in the field of image-based CFD and is not limited to the presented cases of foams or POCS.

Acknowledgements
The authors thank … (ZARM, Uni Bremen) for his helpful assistance with the coding of the surface heat source in OpenFOAM. Furthermore, Kevin Kuhlmann thanks Prof. Dr. Rodion Groll, head of the group of Thermofluid Dynamics at ZARM (Uni Bremen), for helpful discussions regarding this manuscript and assistance in numerical and fluid dynamic questions.

Data deposition
All data relevant to this work and detailed explanations are available under https://doi.org/10.5281/zenodo.6521710.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
This work was supported by the German Research Foundation (DFG) through the research training group GRK 1860 (Micro-, Meso- and Macroporous Nonmetallic Materials: Fundamentals and Applications, MIMENIMA).