Towards efficient structure prediction and pre-compensation in multi-photon lithography

Microscale 3D printing technologies have been of increasing interest in industry and research for several years. Unfortunately, the fabricated structures always deviate from their designs, often caused by the physico-chemical properties of the resin during and after the printing process. Here, we show first steps towards a simple, fast, and easy-to-implement algorithm to predict the final structure topography for multi-photon lithography, also known as Direct Laser Writing (DLW). The three main steps of DLW, (i) exposure of a photo resin, (ii) cross-linking of the resin, and (iii) subsequent shrinkage, are approximated by mathematical operations, showing promising agreement with experimental observations. For example, the root-mean-square error (rmse) between the unmodified 3D print of a radially symmetrically chirped topography and our predicted topography is only 0.46 $\mu$m, whereas the rmse between this 3D print and its target is 1.49 $\mu$m. Thus, our robust predictions can be used prior to the printing process to minimize undesired deviations between the target structure and the final 3D printed structure. Using a Downhill-Simplex algorithm to identify the optimal prediction parameters, we were able to reduce the rmse from 4.04 $\mu$m to 0.33 $\mu$m within only two correction loops in our best-case scenario (rmse = 0.72 $\mu$m after one loop). Consequently, this approach can eliminate the need for many structural optimization loops to produce highly conformal, high-quality micro structures in the future.


Introduction
As one of the most flexible and high-resolution 3D printing technologies, multi-photon lithography, a.k.a. Direct Laser Writing (DLW), has become established within the past 25 years. In a nutshell, a femtosecond-pulsed laser beam operating at a near-infrared wavelength is tightly focused into a photo resin. There, a multi-photon-absorption-initiated polymerization takes place along a pre-programmed relative movement between focus and resin [1,2]. The technology's foundations were laid by Maruo et al. in 1997 [3], and nowadays applications can be found in a wide range of disciplines: in integrated photonics, for example, the selective Bragg reflection band of liquid crystalline photo resins can be adjusted continuously in the visible range when applying DLW, paving the way for "true-color 3D" (or 4D) micro fabrication [4]. Further applications can be found in life science for the fabrication and mimicking of 3D cellular micro-environments with tunable properties [5], in micro-optics for the fabrication of, e.g., Fresnel lenses onto fibers [6], in micro-mechanics [7], micro-fluidics [8], or in the context of topological photonics [9,10], to name just a few. Even in industrial research, DLW is becoming more and more important, e.g., for providing ISO-conform calibration measures [11,12] or for industrial prototyping and mastering [13]. Especially recent progress towards direct laser writing of metals [14] opens up completely new possibilities in the future.
Unfortunately, directly laser-written structures always exhibit some deformation compared to their original designs. These deformations can be observed either as shrinkage or bulging of the structures and occur both during and after the printing process. Since the density ρ of a negative-tone photo resin in its unpolymerized state is smaller than the density of the polymerized state [15], there must be a loss in volume V when the mass m is conserved, following ρ = m/V. Here, shrinkage depends on the degree of polymerization and thus on the intensity of the laser used for printing. During the development of the printed structures, further shrinkage takes place due to the dissolution of soluble components. The latter include unpolymerized monomers and unreacted photoinitiators of the resin [16]. Since the developer induces slight swelling of the structures, capillary effects during the subsequent drying process also lead to shrinkage [17], which has been found to be the most prominent contributor [18].
The aforementioned bulging of the printed structures is mainly caused by the so-called proximity effect [19][20][21]. Here, single voxels along the laser focus' trajectory overlap in space and time while printing a structure, leading to a local increase of the exposure dose. As a consequence, a structure designed to be flat, such as a disc, usually shows a curvature in its topography with a height maximum in its lateral center.
Both structure-deforming aspects, the bulging and the shrinkage, cannot be avoided and ultimately result in structure deviations of up to 30% [22]. Fortunately, there are different ways of minimizing those deviations: (i) by pre-compensating the target shape prior to the printing [11,15,17,23,24], (ii) by spatially adjusting the laser dose during printing [12,13,15], or (iii) by improving the development process subsequent to the printing [22,25]. While the latter does not take the proximity effect into account and is therefore fundamentally limited, the first two approaches aim for an overall homogeneous cross-linking of the resin and can theoretically achieve arbitrarily high structural conformance. In our case, spatially adjusting the laser dose requires significantly more computer storage, because a laser intensity value has to be added to each single coordinate within the programmed structure to be printed. This is especially important for large structures and can sum up to several additional GB, depending on the discretization. Although this approach allows for the smallest overall printing time, the calibration of how to adjust the laser's intensity for the respective structures is very challenging and has been patented recently [26].
Therefore, we rather focus on the pre-compensation of the target structure, which is usually designed on the computer, translated into coordinates for the DLW system and then printed. The printed structure is then measured, e.g. with a confocal microscope, and the thus accessible difference between the 3D print and its target is incorporated into a second version of the target structure. After printing this second version, its topography is measured again so that the target structure can be adjusted a second time. This process is repeated until the printed structure meets the respective requirements [11,24]. Even though this approach leads to very high structural conformance, it is still time-consuming and labor-intensive, as five to ten (or even more) correction iterations are not uncommon [11].
Predictions of the outcome of directly laser-written structures reach back to first considerations on the expected widths of single lines by Fischer et al. in 2013 [27] or voxel and pillar dimensions by Purtov et al. in 2019 [28]. Taking the reaction diffusion during the polymerization into account dramatically complicates the modeling of structure prediction, even for simple lines, as recently published by Pingali and Saha [29]. There, it was only possible to reliably predict line widths, but not line heights or aspect ratios. Guney and Fedder published a promising semi-empirical analytic model, obtained through simulations and fitting, for estimating widths and heights of single lines in 2016 [30]. Unfortunately, this approach also cannot be easily transferred to voluminous structures of several tens or hundreds of microns in length. Palmer et al. recently focused on the simulation of additively manufactured metallic micro structures [31]. Although this fabrication method is based on direct laser writing, the photonic processes differ fundamentally in many points; hence, its simulation algorithm cannot be directly transferred to conventional direct laser writing of polymers. The most recent work in this field was published by Adão et al. in 2022 [32]. They established an algorithm to predict the resulting laser dose for the resin's exposure at each coordinate, taking experimental parameters like scan speed, laser power, and numerical aperture into account. The presented scanning electron microscope images of the fabricated waveguide structures indicate an impressive similarity to their predictions, unfortunately without providing a quantitative value. On closer inspection, however, one realizes that the shrinkage behaviour is not taken into account, hence providing only a qualitative agreement and still showing deviations from the target structure.
Those deviations are uncritical for the presented functionality of the fabricated waveguides, but can be crucial for, e.g., the functionality of diffractive elements or for determining the metrological characteristics of calibration measures.
In contrast to this, we report on a fast computable and easy-to-implement algorithm that predicts the above-mentioned undesired deformations for different types of structures. It takes shrinkage and proximity effects into account and is optimized for a quantitatively high conformity of (bulky) target and printed topographies. This offers the possibility to directly pre-compensate the structures to be printed in order to achieve the highest possible conformity between the target and the 3D printed topography.

Materials & methods
All micro-structures presented within this study were designed as 2D surface matrices. The pixel indices within the matrices correspond to lateral positions (x, y), whereas the respective matrix entries represent the height values at those positions, z(x, y). Exporting these surface matrices to stl-files allows for a common translation into coordinates for the DLW system, using the software Describe (Nanoscribe GmbH & Co. KG). If not explicitly described otherwise, we used Describe to discretize the structures into equidistant axial planes with a so-called slicing distance of 0.1 µm, and each plane into lateral lines with a 0.1 µm spacing, called the hatching distance. The thus generated data can be interpreted by the associated 3D printer Photonic Professional GT+ (Nanoscribe GmbH & Co. KG), using galvanometric mirrors for lateral, and a piezo stage for axial positioning. A constant laser power of roughly 40 mW illuminating the complete entrance pupil of a 63x objective (NA = 1.4, Carl Zeiss Microscopy Deutschland GmbH) and a constant writing speed of 20,000 µm/s were used for all structures, as well as IP-S (Nanoscribe GmbH & Co. KG) as photo resin. For all experiments we scanned the structures in a unidirectional way along the y-axis and kept all fabrication parameters (e.g., acceleration and deceleration of the galvo mirrors) constant. The development procedure after printing onto cleaned and silanized glass substrates [33] followed the manufacturer's specifications: first, resting for 20 minutes in propylene glycol methyl ether acetate (PGMEA), second, resting for five minutes in isopropanol, and third, drying gently with nitrogen.
For measuring the structure's topographies, a µSurf confocal microscope (NanoFocus AG) equipped with a 100x objective (NA = 0.95) and a 60x objective (NA = 0.9, Olympus Europa SE & Co. KG, both) were used. To obtain the best possible results, the respective exposure times were optimized for each single measurement.
In complete analogy to the designed target structures, the data measured in this way correspond to 2D surface matrices. To be able to compare both data-sets, a correct alignment is of great importance: first, the non-measured points are interpolated using the bivariate spline function scipy.interpolate.BivariateSpline of Python's SciPy package [34]. Second, the target data is rescaled to fit the resolution of the measurement, utilizing the linear interpolation function scipy.interpolate.RegularGridInterpolator [34]. Thanks to the square imprints of all the structures, the measurement is then rotated manually, controlled by eye, with respect to a vertical reference line that crosses the measurement field and to which the vertical edges of the structures are aligned in parallel. In this way, we align our measurement as closely as possible with the respective target surface, whereupon the data is cropped. Since both fabrication and measurement are not perfectly plane-parallel processes, a least-squares plane adjustment is used as a final pre-processing step to remove tilt.
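The rescaling and tilt-removal steps can be sketched in a few lines of NumPy/SciPy; the function names and the unit-square parametrization below are our own illustrative choices, not the exact code used in this study:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def rescale_target(target, out_shape):
    """Resample a target height matrix onto the (finer) measurement grid
    via linear interpolation on a normalized unit-square parametrization."""
    ny, nx = target.shape
    interp = RegularGridInterpolator(
        (np.linspace(0, 1, ny), np.linspace(0, 1, nx)), target)
    yy, xx = np.meshgrid(np.linspace(0, 1, out_shape[0]),
                         np.linspace(0, 1, out_shape[1]), indexing="ij")
    return interp(np.stack([yy, xx], axis=-1))

def remove_tilt(z):
    """Subtract the least-squares plane a*x + b*y + c from a height map."""
    ny, nx = z.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(z.size)])
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    return z - (A @ coeffs).reshape(z.shape)
```

The plane fit solves for the coefficients of a·x + b·y + c that best match the height map and subtracts the resulting plane, removing any residual tilt from fabrication or measurement.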
The obtained data is used as input information for the algorithm presented below. Everything shown in the following was performed on a low-cost personal computer (PC) with only 4 GB RAM, an Intel Pentium CPU G6950 processor with 2.80 GHz, and an Intel HD graphics card with 64 GB VRAM. The algorithm's calculation times range from under one second to five seconds excluding the optional parameter calibration routine (see section 4), and from under one minute to five minutes including the calibration, depending on the chosen discretization.

The prediction algorithm
In a first step, the target structure is converted into a 3D point-cloud pc_tar(x, y, z), containing the values 1 and 0, representing points within (1 = True) and outside (0 = False) the structure. This is schematically shown in Fig. 1 (a) as an overlay with a height-representing colormap. By adjusting the point-cloud's resolution, different hatching and slicing distances can be simulated; e.g., doubling the lateral resolution of the point-cloud means halving the hatching distance.
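A minimal sketch of this voxelization step (the function name and the regular voxel pitch dz are assumptions for illustration):

```python
import numpy as np

def height_map_to_pointcloud(z_map, dz, z_max):
    """Convert a 2D height matrix z(x, y) into a boolean occupancy grid
    pc_tar(x, y, z): True for voxels below the local surface height."""
    nz = int(round(z_max / dz))
    z_levels = (np.arange(nz) + 0.5) * dz        # voxel-center heights
    # broadcasting (ny, nx, 1) against (nz,) yields (ny, nx, nz)
    return z_map[..., None] > z_levels
```

Halving dz doubles the axial resolution of the point-cloud and thus simulates half the slicing distance, in analogy to the lateral case described above.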
Next, the spatial extent of the nearly Gaussian-shaped laser focus is modeled as it moves through the photo resin during the printing process. Within this manuscript, we use a 3D Gaussian distribution f_foc(x, y, z), illustrated as inset (1) in Fig. 1, to represent the exposing laser focus. As we claim this to serve as 'first steps towards predictions', an independent adjustment of the lateral and axial expansions of the distribution was initially more important to us than the implementation of the theoretically correct point spread function. Moreover, the experimental laser focus always suffers from aberrations, typically leading to even stronger deviations from its theoretical shape than those caused by our 3D Gaussian approximation [35]. The exposure of the resin is then approximated by convolving this Gaussian function with the structure point-cloud, resulting in the exposure matrix

$m_{exp} = pc_{tar} * f_{foc}$ (1)

illustrated in Fig. 1 (b). Bisecting the exposure matrix into a part above and a part below a certain value allows the matrix to be binarized to values of 0 and 1, shown in Fig. 1 (c). This yields a threshold-dependent exposure point-cloud pc_exp and assumes the existence of a minimum intensity for initializing cross-linking (compare threshold model [2,21,27]). As investigated by Waller et al. [19], the diffusion of radicals and the cross-linking of monomers within the exposed resin can be approximated by a Gaussian distribution, whose full width at half maximum (FWHM) defines the spatial extent. Hence, by calculating the convolution of pc_exp with this second Gaussian distribution f_cl(x, y, z), shown as inset (2) in Fig. 1, the resulting cross-linking in 3D space within the resin is modeled as

$m_{cl} = pc_{exp} * f_{cl}$ (2)

and shown in Fig. 1 (d).
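The two convolutions with the intermediate binarization can be sketched with scipy.ndimage.gaussian_filter, which convolves with a normalized Gaussian kernel; the FWHM-to-σ conversion and all parameter values here are illustrative assumptions, not calibrated values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# FWHM = 2 * sqrt(2 * ln 2) * sigma for a Gaussian
FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def predict_crosslinking(pc_tar, fwhm_foc, fwhm_cl, threshold, voxel_pitch):
    """Exposure convolution, threshold binarization, and cross-linking
    convolution. fwhm_foc / fwhm_cl are (lateral, lateral, axial) widths
    in µm; voxel_pitch is the grid spacing in µm."""
    sig_foc = [f * FWHM_TO_SIGMA / voxel_pitch for f in fwhm_foc]
    sig_cl = [f * FWHM_TO_SIGMA / voxel_pitch for f in fwhm_cl]
    m_exp = gaussian_filter(pc_tar.astype(float), sigma=sig_foc)  # exposure
    pc_exp = m_exp > threshold                                    # binarize
    m_cl = gaussian_filter(pc_exp.astype(float), sigma=sig_cl)    # cross-link
    return pc_exp, m_cl
```

Note that gaussian_filter expects σ in voxel units, hence the division by the voxel pitch.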
Subsequently, a linear mapping operation (equation 3) transfers this cross-linking matrix into a shrinkage matrix m_s, ascribing to every point of m_cl a specific shrinkage factor κ:

$m_s = \kappa_{min} + (\kappa_{max} - \kappa_{min}) \cdot m_{cl} / \max(m_{cl})$ (3)

with κ_max and κ_min being the maximal and minimal shrinkage factors, respectively. This linear mapping is illustrated in Fig. 1 (e) and includes the assumption of shrinkage factors being positive, non-zero, and not larger than one to mathematically map the experimentally observed shrinkage. Typically, our algorithm is initiated with κ_max = 1 and κ_min = 0.7. Due to the complexity of a holistic description of shrinkage in 3D, only the axial direction is considered directly here as an approximation, although the lateral influences due to cross-linking are of course also taken into account. This approach seems justifiable, since the previously mentioned deformations can be satisfactorily represented by local height adjustments for the rather simple 2.5D structures studied here, as already exploited for correction, e.g., in [12]. As a price for this simplification we accept not only the limitation to 2.5D structures but also that vertical edges within topographies are not deformed laterally, as is often observed experimentally. Accounting for true 3D shrinkage will be a future step, paving the way for the prediction of arbitrary 3D structures.

Figure 2. Correction workflow. After printing (1) and measuring (2) a target structure, the presented prediction algorithm is used in combination with the Downhill-Simplex approach to identify the ideal prediction parameters (3). Subsequently, the latter are used for predicting again to iteratively modify the target structure until the prediction equals the very first target (4). Afterwards, the corresponding pre-compensated structure is printed (5). To further increase the conformity, one can optionally repeat steps (2) to (5) one time, represented by step (6).
The z-averaged 2D shrinkage matrix then reads

$m_s^{2D}(x, y) = \frac{1}{N_{xy}} \sum_z m_s(x, y, z)$ (4)

where N_xy is the number of z-values at the lateral position (x, y) within the matrix. In the same fashion, pc_exp is translated into a 2D matrix pc_exp^2D, representing the structure's height values as respective matrix entries. As shown in Fig. 1 (f), multiplying the obtained z-averaged shrinkage matrix m_s^2D(x, y) with pc_exp^2D takes only the shrinkage at the exposed positions into account and results in the final lithographic print prediction, which is illustrated as a point-cloud in Fig. 1 (g).
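Putting the shrinkage mapping (equation 3) and the z-averaging (equation 4) together, the final height prediction can be sketched as follows; the normalization by the maximum of m_cl and the restriction of the z-average to exposed voxels are our reading of the text, so treat this as an assumption-laden sketch:

```python
import numpy as np

def predict_topography(m_cl, pc_exp, dz, k_min=0.7, k_max=1.0):
    """Map cross-linking linearly onto shrinkage factors (stronger
    cross-linking -> less shrinkage), z-average them over the exposed
    voxels, and scale the exposed height accordingly."""
    m_s = k_min + (k_max - k_min) * m_cl / m_cl.max()   # eq. (3)
    n_xy = np.maximum(pc_exp.sum(axis=-1), 1)           # N_xy, avoid /0
    m_s_2d = (m_s * pc_exp).sum(axis=-1) / n_xy         # eq. (4)
    h_exp = pc_exp.sum(axis=-1) * dz                    # exposed height map
    return m_s_2d * h_exp
```

For a uniformly cross-linked block, the shrinkage factor is κ_max everywhere and the predicted topography simply equals the exposed height.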
Some of the parameters within this prediction algorithm are given by the experimental writing process itself and are therefore fixed, like hatching and slicing distances or the laser power. Other parameters, like the FWHM of the cross-linking distribution or the minimal and maximal shrinkage factors, are not directly accessible, since these parameters depend on the structures to be printed. Fortunately, all of these parameters represent real physical quantities whose range of values can be roughly identified by other experiments. For example, the FWHM of the laser focus can be estimated by fabricating and measuring individual voxels, or the range of influence of molecular diffusion can be estimated by the spatio-temporally controlled fabrication of single lines [19]. Seven such physical-quantity-based parameters are necessary for the prediction algorithm shown here. The default values are based on our experimental observations. Of course, these values are influenced by the structure to be printed and do not guarantee accurate results. Therefore, we additionally use a Downhill-Simplex algorithm [36] (DSA) to determine the optimal simulation parameters.
This algorithm generally minimizes the value of a given function of multiple variables. By expressing the deviation between the prediction result and the corresponding measurement data of a structure as a function of the simulation parameters, the identification of the most appropriate simulation parameters is automated. Hence, one has to print and measure a first, unmodified version of the target structure so that the DSA can identify those ideal simulation parameters. On the one hand, this requires the printing of a so-called "calibration structure"; on the other hand, the most suitable, structure-dependent simulation parameters are obtained. The related general workflow is illustrated in Fig. 2, and the quality of the respective results will be shown and discussed within the following section.
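SciPy's Nelder-Mead implementation of the Downhill-Simplex method can serve as a stand-in for this parameter search; the cost function (rmse between prediction and measurement) follows the text, while the function names and tolerance values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def fit_parameters(params0, predict, measured):
    """Find prediction parameters minimizing the rmse between predicted
    and measured topography. `predict(params) -> 2D height map` stands
    for the prediction algorithm sketched above."""
    def cost(p):
        diff = predict(p) - measured
        return np.sqrt(np.mean(diff ** 2))
    res = minimize(cost, params0, method="Nelder-Mead",
                   options={"xatol": 1e-3, "fatol": 1e-4})
    return res.x, res.fun
```

Because Nelder-Mead is derivative-free, the prediction pipeline does not need to be differentiable; it is simply re-evaluated at each simplex vertex.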

Results & discussion
Using the presented prediction algorithm for estimating the outcome of direct laser written structures requires the aforementioned set of simulation parameters. Without any measured data, one can only use educated guesses or use the default values given in section 4.
But if a calibration structure has been printed, it is recommended to use the parameters found by the Downhill-Simplex algorithm to predict the 3D printed surfaces (steps (1) to (3) of the workflow in Fig. 2). The results of this approach are shown in Fig. 3 for two exemplary types of structures: a circular chirped topography (CIN) and an areal roughness calibration measure (AIR). These two types of structures were specifically selected because they are widely used in the field of metrology: the AIR-type is based on an actual engineering surface and is designed with a tailored height distribution to provide a linear Abbott curve. This is used for calibrating the height axis of measuring instruments as well as for determining their roughness properties [11]. The CIN-type, however, serves as a resolution-calibrating topography and is characterized by smooth topographic waves with a radially increasing wavelength. In contrast to other resolution-calibrating measures, the CIN is much less prone to measuring artifacts, due to the smooth waves. Additionally, it allows for determining the measuring instrument's transfer function in a continuous way and is not limited to specific axis orientations [12]. Because of these metrological benefits, and especially because of the accurate measurability due to the chosen amplitudes and topographic frequencies, height deformations during and after the printing process are expected to be experimentally well observable. Hence, our predictions and pre-corrections should be well comparable, making AIR and CIN very well suited as test structures.
In both cases, the predicted surfaces in Fig. 3 (c) and (f) are very close to the actually measured ones in (b) and (e) and differ significantly, as expected, from their target shapes (a) and (d). To quantify the similarity between those topographies, one can use, e.g., the root-mean-square error (rmse) and Matlab's 2D correlation coefficient [37]

$CC = \frac{\sum_n \sum_m (A_{nm} - \bar{A})(B_{nm} - \bar{B})}{\sqrt{\left(\sum_n \sum_m (A_{nm} - \bar{A})^2\right)\left(\sum_n \sum_m (B_{nm} - \bar{B})^2\right)}}$ (5)

with n, m being the indices of the matrices A, B, and $\bar{A}$, $\bar{B}$ representing the matrices' mean values. The values obtained for the results shown in Fig. 3 confirm this first impression. The origin of the observed deviations between 3D print and target has already been explained in the introduction, and the prediction algorithm maps them quite well, as represented, e.g., by root-mean-square errors of 0.46 µm and 0.93 µm, respectively. Not only the total offset along the axial dimension due to the elongation of the exposing laser focus, but also the lift towards the center of the structure due to vignetting and the proximity effect are well captured. Note that doubling the footprint of the structures from 50 µm for the CIN to 100 µm for the AIR does not worsen the algorithm's performance.
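Both quantities are straightforward to compute; a NumPy sketch (the function names are our own):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two height matrices."""
    return np.sqrt(np.mean((a - b) ** 2))

def corr2(a, b):
    """Matlab-style 2D correlation coefficient between two matrices."""
    da, db = a - a.mean(), b - b.mean()
    return (da * db).sum() / np.sqrt((da ** 2).sum() * (db ** 2).sum())
```

Note that corr2 is invariant to offsets and scaling of the height values, whereas the rmse penalizes them; reporting both therefore captures complementary aspects of the conformity.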
Due to those promising prediction capabilities, we use our approach for pre-corrections, illustrated by steps (4) and (5) of Fig. 2. Ideally, the algorithm allows the target structure to be modified so that the resulting 3D print matches the original target. Note that this process requires only two prints in total: the unmodified "calibration structures" and the final corrected structures.
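One simple way to realize the iterative target modification of step (4) is residual feedback, i.e., the deviation between the prediction and the original target is added back onto the design. This is a sketch of the idea under the assumption that the prediction acts directly on the design height map; it is not necessarily the exact update rule used:

```python
import numpy as np

def precompensate(target, predict, n_loops=2):
    """Iteratively adjust the design height map until its prediction
    approaches the original target (`predict` maps a design height map
    to a predicted printed topography)."""
    design = target.copy()
    for _ in range(n_loops):
        design += target - predict(design)   # feed the residual back
    return design
```

For a toy model with 20% uniform shrinkage (predict = lambda d: 0.8 * d), two loops already reduce the residual from 20% to below 1% of the structure height.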
However, the prediction of the modified structures within the loop of step (4) will be less accurate, since the simulation parameters have been optimized for the unmodified 3D print. For instance, the rmse between the 'old-parameter-predicted' modified CIN structure and its corresponding 3D print is 5.058 µm, confirming the aforementioned suspicion. To compensate for this, the described process can be repeated using the 'first generation' of correction in step (5) for a second run, the optional but recommended step (6). This of course increases the number of total prints by one and is referred to as the 'second generation' of correction. We observed that the best results are achieved by those second generation structures (compare table 1). Three printing processes are still far fewer than the up to ten or even more often required elsewhere [11].
Two exemplary second generation correction results for (i) a circular chirped (CIN) and (ii) an areal roughness (AIR) structure are shown in Fig. 4 and Fig. 5, respectively. The measured final 3D prints (d) are characterized by a very high conformity to their target surfaces (a), which can be seen nicely within the profile plots (e) & (f). The deviations between the finally produced topographies and the respective targets can be greatly reduced. In terms of the root-mean-square error (rmse), these deviations decrease from 4.044 µm to 0.332 µm for CIN and from 3.490 µm to 0.477 µm for AIR, respectively. Besides, the values for the first generation corrections, as well as the corresponding 2D correlation coefficients are given in Tab. 1 and quantitatively confirm the improvements.
As these two types of structures are supposed to image specific metrological characteristics, we can also have a look at, e.g., the axial amplification coefficient (α_z) or the quadratic areal roughness parameter S_q of the AIR structure (see references [11,38,39] for details about the metrology). As shown in Tab. 1, the amplification coefficient for the unmodified calibration structure deviates by roughly 23%, which is slightly improved to 21% by the first generation correction. The second generation, on the other hand, significantly improves this metrological characteristic towards only 8% deviation. As a second example, the S_q deviates by 30 nm, by 110 nm, and by 15 nm through the different correction generations. Similar behaviours can be observed for the other metrological characteristics, as well as for the generally quantifying rmse and CC values (see Tab. 1), once again underlining the overall conformity enhancement obtained by the presented method.
This can be additionally strengthened by the convergence of the simulation parameters under the Downhill-Simplex algorithm. There, the FWHM of the exposing laser focus changes from the default values (σ_xy^exp = 0.5 µm, σ_z^exp = 1.5 µm) to roughly 0.41 µm and 2.15 µm, respectively, for the 2nd generation AIR-type structure. Hence, the aspect ratio of the simulated voxel increases from 1.5 (default) to 5.2, which, based on the manufacturer's specifications, is much more realistic for this full-volume structure and the photo resin used [40]. To name a second example, the spatial cross-linking extent slightly decreases from σ_xy^cl = σ_z^cl = 7 µm (default) to values between 4.4 µm and 5.7 µm for both AIR and CIN, which is also more realistic, following the calculations of reference [19]. An analogous behaviour can be observed for all 2nd generation correction simulation parameters, demonstrating the power of our approach.

Table 1. Improvement over correction generations. The 2D correlation coefficients CC and root-mean-square errors rmse for the structures depicted in Figs. 4 and 5 are shown for each correction generation. The values are calculated with reference to their respective target topographies. Moreover, the metrological characteristics of the AIR structure (axial amplification coefficient α_z, linearity deviation l_z, arithmetic S_a, and quadratic S_q areal roughness) are compared as well.

Summary & outlook
In this study, we have presented an approach to predict the topography of directly laser-written structures. Our algorithm considers several physical quantities as simulation parameters to account for the main physico-chemical processes during fabrication that are responsible for undesired deviations. In addition to fixed parameters, such as hatching and slicing distances or laser power, parameters that are difficult to access, such as the effective exposure or the spatial region of cross-linking in the photo resin, can be automatically optimized for each structure. Since the resulting 3D printing predictions are very promising (e.g., the rmse is reduced from more than 4 µm down to less than 1 µm), an iterative application of the algorithm allows a reasonable pre-compensation of the structures to be printed. In the end, one can expect a very high match between target and printed structure within only two or optionally three printing steps. This match is, for instance, represented by the 2D correlation coefficient being 0.27 and 0.48 for the unmodified CIN and AIR structures, respectively. The first correction generation improves these values to 0.69 and 0.76, whereas the second generation further enhances them to 0.85 and 0.83. Since, e.g., the properties of the used photo resin or the size of the point spread function can be seen as covered by the automatically optimized prediction parameters, our approach should be adaptable to other resins, objectives, and different kinds of structures. In contrast to that flexibility, we are currently fundamentally limited to 2.5D structures due to the mathematical working principle of the prediction algorithm. However, for many applications 2.5D structures are sufficient, as micro-lenses, Fresnel lenses, diffraction gratings, and prisms all belong to this class.
An extensive investigation in terms of topographical, material, as well as process capabilities will provide further insight into the limitations of our approach, but exceeds the 'first steps' aimed at with this publication. As further future work, we will speed up the identification of optimal simulation parameters. Conceivable here would be a neural network, trained by our prediction algorithm, that can set the optimal simulation parameters for each generation, ideally even without the need to print any calibration structures beforehand. For this purpose, we will extend our prediction algorithm to take into account the holistic shrinkage behavior in 3D, in order to predict arbitrarily complex 3D structures as well as the deformation of vertical edges.