Influence of aberrations and roughness on the chromatic confocal signal based on experiments and wave-optical modeling

This paper addresses the influence of wave-optical aberrations and surface roughness on the chromatic confocal signal and the resulting measurement errors. Two possible approaches exist for implementing chromatic confocal imaging, based on either refraction or diffraction. Both concepts are compared, and an expression for the expected longitudinal chromatic aberration when using a diffractive optical element is derived. Since most chromatic confocal sensors are point sensors, the discussion of wave-optical aberrations focuses on spherical aberrations. Contrary to common belief, the effect of spherical aberrations cannot be eliminated in the calibration process using, for instance, a piezo-mounted mirror. It will be shown that even a diffraction-limited system with a peak-to-valley spherical aberration smaller than 0.25 wavelengths suffers from measurement errors. Experimental results are presented to highlight this important issue. In order to develop a deeper understanding of the underlying physics, a wave-optical simulation environment has been realized. This wave-optical model furthermore enables the investigation of the influence of roughness. So far, the choice of numerical aperture when investigating a rough surface has been based on a heuristic approach. Using the wave-optical simulations, an explanation for the increased noise when employing a low numerical aperture to examine rough surfaces is derived. Furthermore, a formula is presented to support the selection of the correct numerical aperture with regard to the roughness parameters of the surface under investigation.


Introduction
Confocal imaging is a powerful technique for the investigation of biological and technical samples. It is based on placing a pinhole in the image plane to reduce the influence of multiple scattering on the recorded signal [1]. In combination with a precision x-y-z stage, this approach enables 3D sectioning of objects with high axial and lateral resolution. However, the scanning movement is very time-consuming. This problem is solved in chromatic confocal microscopy: the longitudinal scanning via a piezo or a precision motorized stage is replaced by depth-encoded longitudinal chromatic aberrations. Therefore, chromatic confocal microscopy only requires scanning along the lateral directions, which results in a significant reduction of measurement time. Axial information is accessed via a hyperchromate, by which different wavelengths are focused at different depths. However, chromatic confocal microscopy with a small numerical aperture (NA) suffers from a highly disturbed signal at rough or strongly scattering surfaces, which results in measurement errors. It is, therefore, of utmost importance to select the correct optics for a particular application. Moreover, aberrations, especially spherical aberrations, can lead to a misinterpretation of the confocal signal. Therefore, an unmet need exists to fully understand the signal formation process, which would enable the correction of the signal by replacing the measured axial position with the axial position of an aberration-free signal of the same optical parameters. For fast implementation, different likely scenarios can already be simulated to generate a large database. In this manner, a good starting point for a possible artificial intelligence-based method as well as for other iterative approaches can be assured.
A rather limited number of publications can be found that focus on modelling the confocal or chromatic confocal signal. Preliminary studies based on paraxial wave-optical modelling [2,3] provide good results only for systems with small NA and insignificant defocus [4,5]. In [6] a wave-vectorial description has been used to analyse the influence of aberrations in confocal imaging and the change of the confocal peak position with respect to the degree of aberration and the tilt angle of the object. The modelled confocal signal discussed in [7] uses an overlap integral between the object and the illumination function impinging on it, which offers the advantage of reduced computational time, since the wavefield is calculated in the object plane and not the back-scattered wavefield in the detector plane. However, wave-optical phenomena that are only accounted for when calculating the back-scattered wavefield in the detector plane, such as the speckle effect, are not taken into consideration. The chromatic confocal signal was discussed in [8] using several approaches, including a wave-optical one. However, the signal derivation was based on the intensity point spread function in the object and detector planes and hence did not account for the speckle effect, which arises from the roughness of the surface corresponding to a different phase distribution for each wavelength. Speckle noise has a significant impact on the measurement and the corresponding measurement uncertainty. It represents an inherent artifact of all coherent imaging modalities [9]. It arises from the interference of scattered light that does not completely fill the entrance pupil of the optical system, due to surface roughness or very fine object structures, as discussed in [9,10]. In such cases Abbe's resolution criterion, the recording of the first diffraction order together with the zeroth order, is no longer fulfilled.
Consequently, diffracted light that corresponds to high spatial object frequencies or large object tilts interferes, resulting in an uncorrelated coherent superposition that is observed as a speckle pattern. The speckle intensity probability function follows a negative exponential curve, which means that more dark speckles than bright speckles exist [9]. Therefore, the investigation of rough surfaces with a low-NA hyperchromate in confocal imaging results in an unreliable signal. In chromatic confocal imaging the speckle contrast is furthermore increased due to the increased temporal coherence of the recorded light in comparison to conventional confocal imaging with the same broadband light source. A brief discussion of the speckle effect using a wave-optical approach, based solely on simulated data, was presented by the authors in [11]. Besides the missing experimental confirmation of the model, neither the effect of a tilted object nor the resulting change in axial position was addressed in [11].

Modelling
In figure 1(a) the schematic diagram of a typical unfolded refraction-based hyperchromate is shown. A double-pass system can be simplified to a single-pass system only when a plane mirror is oriented perpendicular to the optical axis of the hyperchromate. This is usually not the case, and therefore this simplification is not applicable in the following discussion. In fact, due to the topography and tilt of the object, a different transfer function is obtained for the passage of light from the object to the confocal pinhole. A spatially coherent but temporally incoherent light source with a known spectral intensity distribution is used as the input of the system. The entire simulation is based on the numerical implementation of wave-optical propagation methods, as discussed in [12]. We have used the scalar implementation, which does not account for possible polarization effects. This approximation is valid for an aplanatic system up to an NA of 0.5 [13]. For higher NA, a rigorous model should in theory be employed, which likewise accounts for polarization effects. Despite these approximations, the scalar model used here should be sufficient to provide good guidance for selecting a sufficiently high NA when investigating rough surfaces or a system with aberrations. In order to save computation time, we have chosen an asymmetric representation of the chromatic confocal arrangement, as shown in figure 1. The incident wavefield is collimated and, for on-axis object points, propagates parallel to the optical axis. It starts at the pupil plane of the central wavelength. It is important to point out that the pupil plane of each wavelength is located at a different axial position. The light then interacts with the hyperchromate, where different wavelengths are focused at different depths. Two different approaches exist for realizing the required longitudinal chromatic aberrations.
The first is based on a lens assembly using glasses that exhibit a sufficiently strong dispersion. The resulting longitudinal chromatic aberration represents the sum of the longitudinal aberrations of the different lenses employed, as schematically shown in figure 1(a).
Therefore, even with high numerical aperture objectives, working ranges of a few mm can be obtained. These hyperchromates are commercially available from various manufacturers, such as Precitec Optronik GmbH, Stil SAS and Micro-epsilon Messtechnik GmbH & Co. KG [14][15][16]. For simplicity and without loss of generality, it is assumed that all wavelengths cover the same diameter in the pupil plane. Hence, due to the different focal lengths associated with each wavelength, the numerical aperture decreases with increasing wavelength. For refractive hyperchromates the transfer function, which relates the wavelength λ to the combined focal length f_comb, can be written as a second-order polynomial, f_comb(λ) = q_1 λ² + q_2 λ + q_3, with the factors q_1, q_2 and q_3; a second-order polynomial is usually sufficient to characterize the transfer function. The transfer function is provided either by the optics design, by the manufacturer or by an experimental calibration process using a piezo-mounted mirror or a precision motorized stage. Experimental calibration is the most accurate approach, accounting for manufacturing and setup imperfections. Another approach to realize a high-NA hyperchromate is based on the combination of a chromatic aberration-corrected objective, such as an achromat or apochromat, and a blazed Fresnel lens [14], which causes the incident light to be diffracted, as shown in figure 1(b). The diffractive convergent lens can be combined with a diverging concave lens, which compensates the refractive and diffractive power for the central wavelength [15]. In that manner, the diffracted cone is homogeneously distributed around the optical path of the central wavelength. Furthermore, the numerical aperture of all wavelengths can be the same if the diffractive optical element is placed in the pupil plane. This approach offers significant advantages when object points outside the optical axis, field points, have to be investigated, as proposed and pointed out in [15].
A different numerical aperture for each wavelength results in a different magnification for that wavelength. In other words, not only the longitudinal position changes with the wavelength but also the lateral position, as schematically shown in figure 2, since the dispersion effect is solely caused by the diffractive optical element. The wavelength-dependent focal length can be calculated from the combination of the effective focal length of the microscope objective f_micro, the concave lens f_con and the converging diffractive optical lens f_diff, with respect to the principal plane of the microscope objective [16]. First, the combined focal length of the diffractive Fresnel lens and the concave refractive lens is calculated. According to [15], the Fresnel lens is designed such that its diffractive power is compensated by the refractive power of the concave lens for the central wavelength λ_c, i.e. f_con = −f_diff(λ_c), where the focal length of the diffractive lens scales as f_diff(λ) = f_diff(λ_c) λ_c/λ. Under the thin-element approximation and the assumption that the principal planes of both lenses coincide, the resulting focal length of the diverging concave lens and the converging Fresnel lens becomes 1/f_dc(λ) = (λ − λ_c)/(λ_c f_diff(λ_c)). Hence, the focal length representing the combination of all three lenses becomes 1/f_comb(λ) = 1/f_micro + 1/f_dc(λ). The equations derived give a good estimate of the necessary focal powers of the concave lens and the diffractive element in order to obtain the desired measurement range. Furthermore, the NA with respect to the wavelength and the derived combined focal length is likewise very important for generating a realistic model. This relates in particular to scenarios where the diffractive dispersive element is not positioned in the pupil plane of the microscope objective but displaced from it by a distance d_L2.
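As an illustration of this thin-element combination, the wavelength dependence of the combined focal length can be sketched numerically. This is a hedged sketch, not the paper's code: it assumes the standard scaling f_diff(λ) = f_diff(λ_c)·λ_c/λ of a diffractive lens and the compensation f_con = −f_diff(λ_c); all parameter values are illustrative.

```python
import numpy as np

def f_combined(lam, lam_c, f_micro, f_diff_c):
    """Combined focal length of microscope objective, concave lens and
    blazed Fresnel lens (thin elements, principal planes merged).
    Assumes f_diff(lam) = f_diff_c * lam_c / lam and f_con = -f_diff_c,
    so refractive and diffractive power cancel at the central wavelength."""
    lam = np.asarray(lam, dtype=float)
    # power of the diffractive/concave pair: 1/f_dc = (lam - lam_c) / (lam_c * f_diff_c)
    power_dc = (lam - lam_c) / (lam_c * f_diff_c)
    return 1.0 / (1.0 / f_micro + power_dc)

# illustrative values: central wavelength is focused at f_micro,
# shorter wavelengths are diffracted less and focused farther away
lam = np.array([780e-9, 850e-9, 940e-9])
f = f_combined(lam, lam_c=850e-9, f_micro=20e-3, f_diff_c=1.0)
```

A weak diffractive element (long f_diff_c) spreads the foci over a range of tens to hundreds of micrometres, which matches the order of magnitude of the measurement ranges discussed below.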
First, the diameter of the microscope objective's aperture is calculated from its NA and focal length f_micro. From the schematic ray diagram shown in figure 2, trigonometric relationships can be derived for the case that the diffractive element is placed at a distance d_L2 from the pupil plane. The shortest wavelength λ_min is diffracted divergently, hitting the edge of the aperture of the microscope objective. The diffraction angle ε′_min for λ_min can be calculated from the geometric relationships shown in figure 2. For the central wavelength, the diffractive power and refractive power compensate each other. A symmetric angular distribution of the diffraction angles for the shortest wavelength λ_min and the longest wavelength λ_max after passage through the combined diffractive-refractive optical element is assumed.
Here u′ is the angle of convergence in the object plane, which is a function of the wavelength.
The NA follows similarly, as does the corresponding pupil diameter for the combined lens with focal length f_comb when illuminated with a collimated beam. It is important to note that, in case the diffractive element is placed in the pupil plane, the same angle u′ is obtained for all wavelengths, as pointed out in [15].
With the knowledge of the correspondence of wavelength to focal length and numerical aperture, the modelling process can be realized as depicted in figure 3. A detailed explanation of the propagation steps between the individual planes is given in the following: (1) Pupil plane of the central wavelength: The incident plane wave is propagated from the pupil plane of the central wavelength to the pupil plane corresponding to its wavelength. The numerical propagation is performed using the angular spectrum implementation of the Rayleigh-Sommerfeld diffraction integral, as discussed in [9]. The propagation distance is defined by the difference in focal length: a focal length larger than the reference focal length of the central wavelength results in a negative propagation distance, a smaller one in a positive propagation distance.
(2) Pupil plane of the individual wavelength: A binary mask representing the spatial frequency bandwidth of the corresponding wavelength is multiplied with the complex wavefield. The diameter of the mask can be calculated from equation (14) for a diffraction-based system. For a refraction-based system, it is assumed that all wavelengths fill the entire aperture, resulting in the same pupil diameter for the Fourier-transform arrangement, as shown in figure 1(a); the NA therefore refers to the maximum NA, which corresponds to the shortest wavelength focused closest to the objective. Moreover, at the pupil plane of the individual wavelength the phase function of possible wave aberrations, such as spherical aberrations, can be applied. It is important to point out that the aberrations scale inversely proportional to the wavelength employed [17]. In the next step, the pixel number of the pupil plane is increased to at least twice the pupil diameter. This step is necessary to avoid aliasing artifacts that can arise from a tilt of the object. The pupil plane and the object plane are related to each other via a Fourier transform. Hence, the object plane corresponding to the wavelength is obtained via the application of a 2D discrete Fourier transform. It is important to note that there is a slight difference in the pixel size Δx_obj in the object plane of the individual wavelengths: Δx_obj(λ) = λ f_comb(λ)/(N Δx_pupil), with N the pixel number and Δx_pupil the starting pixel size in the pupil plane.
(3) Object-plane of individual wavelength: The complex wavefield in the object plane is then propagated via the angular spectrum method to the reference object-plane, which in the simplest case is the object-plane that corresponds to the central wavelength.
(4) Reference object-plane: At this stage, we account for the topography of the object, which enables simulating the influence of roughness and object tilt. However, first the different pixel size of each wavelength needs to be taken into consideration. This is done by first cropping the topography map to a region that is representative of the wavelength, followed by interpolation to the pixel size corresponding to the wavelength while keeping the number of pixels unchanged. Afterwards, the topography data need to be translated into phase data.
The phase scales inversely proportional with the wavelength. The smaller the wavelength the larger is the range of phase that has to be covered to represent the topography range. Finally, the complex wavefield can be multiplied with the object function.
(5) Object-plane individual wavelength: Afterwards, the exit wave needs to be propagated to the wavelength corresponding object planes (inversion of step 3).
(6) Pupil-plane individual wavelength: After applying a 2D inverse Fourier transform, the complex wave arrives at the pupil plane corresponding to its wavelength. At this plane, the 2D Fourier transform has been applied four times within the propagation scheme shown in figure 3. Therefore, the pixel size obtained is the same for all wavelengths and matches the one used in point (2). In case the object is a mirror oriented perpendicularly to the optical axis, the obtained pupil function matches the hyperchromate's pupil function. In other words, the pupil function defined in point (2) is projected into the pupil plane of the reflection path. In this projection process, path changes introduced by an inclined object surface or the topography are accounted for. The complex wavefield arriving at the pupil plane of the reflection path is multiplied with the hyperchromate's pupil function including the corresponding wave aberrations. Obviously, the interaction of the projected pupil function from point (2) and the reflection-path pupil function is limited to the overlap region. Hence, the non-overlapping regions have to be set to zero. In order to reduce the computational effort, the size of the pupil plane is cropped to the original number of pixels defined in point (1).
(7) Pupil-plane central wavelength: A slightly modified angular spectrum method, as demonstrated in [18], is now applied to arrive at the reference pupil plane, highlighted in the schematic diagram via an asterisk (AS*). This step is motivated by the fact that the final pixel size in the detector plane differs for different wavelengths because of the final Fourier transform that has to be applied. Hence, adjusting the pixel size before the final Fourier transform compensates for this pixel-size mismatch, ensuring the same pixel size in the detector plane. The modified angular spectrum method contains a numerical lens, which enables the adjustment of the pixel size in the reconstruction process. (8) The procedure from step (1) to (7) is repeated until all wavelengths have been propagated through the system.
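The plane-to-plane propagations and the object interaction of step (4) can be sketched in a few lines. This is a minimal scalar illustration of the angular spectrum method and the double-pass height-to-phase conversion, not the authors' implementation; grid parameters are illustrative.

```python
import numpy as np

def angular_spectrum(u, lam, dx, dz):
    """Propagate a complex field u by a distance dz (may be negative)
    using the scalar angular spectrum method."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / lam**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)          # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(u) * H)

def object_function(height, lam):
    """Reflection-mode object function: the light travels the height
    difference twice, so the phase is phi = 2 * (2*pi/lam) * h."""
    return np.exp(1j * 4 * np.pi * height / lam)

# a plane wave keeps unit modulus under propagation
u0 = np.ones((256, 256), dtype=complex)
u1 = angular_spectrum(u0, lam=850e-9, dx=1e-6, dz=50e-6)
```

Multiplying the propagated field with `object_function` at the reference object plane implements step (4); the same propagator with a negative `dz` realizes the inverse propagations.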
The necessary pixel size in the input plane (pupil plane of the central wavelength) and the sampling rate are determined by: (i) the necessary pupil diameter to cover the entire NA; (ii) the corresponding pixel size Δx_obj in the object plane, which should be smaller than half the Airy disc according to the Nyquist criterion; this condition is already fulfilled due to the Fourier transform relationship between pupil plane and object plane; (iii) although only a few pixels, typically the central 3×3, are binned to obtain the spectral signal, a much larger area has to be covered to simulate the complex wavefield in the detector plane, likewise accounting for the defocus-caused broadening of the complex wavefields of the other wavelengths incident on the detector plane. If the detector plane is chosen too small, the defocused light pattern that extends beyond the detector plane will be wrapped into the simulated wavefield and result in disturbing aliasing artifacts, which lead to a wrong signal. With the NA used and the axial distance Δz_obj between the in-focus position and the axial position of the most distant wavelength, an expression for the minimum number of pixels N required is obtained.
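The sampling conditions can be checked with the Fourier-transform relation between pupil and object plane, Δx_obj = λ·f_comb/(N·Δx_pupil). A small sketch with illustrative (not the paper's) parameter values:

```python
import numpy as np

def object_pixel_size(lam, f_comb, n_pixels, dx_pupil):
    """Pixel size in the object plane for a Fourier-transform
    arrangement between pupil plane and object plane."""
    return lam * f_comb / (n_pixels * dx_pupil)

def nyquist_ok(lam, na, dx_obj):
    """Nyquist criterion: object pixel smaller than half the Airy
    disc diameter 1.22 * lam / NA."""
    return dx_obj < 0.5 * 1.22 * lam / na

# illustrative values: 2048 pixels of 20 um pad the 20 mm pupil
# (2 * f_comb * NA) to roughly twice its diameter, as required above
lam, f_comb, na = 850e-9, 20e-3, 0.5
n_pixels, dx_pupil = 2048, 20e-6
dx_obj = object_pixel_size(lam, f_comb, n_pixels, dx_pupil)
```

With these numbers dx_obj is about 0.42 μm, comfortably below the roughly 1 μm Nyquist limit, illustrating why the padding of the pupil plane automatically satisfies condition (ii).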

Experimental results
At first, measurements have been conducted with an infrared (780 nm-940 nm) hyperchromate with a 200 μm longitudinal working range (∼1.25 μm/nm) and a working distance of 20 mm. The hyperchromate has an NA of 0.5 and delivers a diffraction-limited spot (peak-to-valley wave aberration smaller than 0.25 waves for the minimum wavelength and a root-mean-square aberration smaller than 0.07 waves for the entire wavelength range). Through a single-mode 2×2 fibre coupler (Thorlabs TW805R5A2) the light was launched from the broadband light source (Superlum M-T-850-HP) into the hyperchromate. The light back-scattered and back-reflected by the object passes through the objective, where the light of the focused wavelength passes through the fibre with maximum intensity, whereas the neighbouring wavelengths are recorded with reduced intensity. At the other end of the fibre a spectrometer (Thorlabs CCS175/M) records the spectral signal.

Influence of spherical aberrations
The object under investigation is a plane mirror. The measurements have been recorded on the optical axis; hence only spherical aberrations can occur. For the description of the wave aberrations, fringe indexing has been used to represent the Zernike polynomials [19]. Although the objective is diffraction limited, higher-order spherical aberrations occurred, which result in an asymmetric profile of the chromatic confocal signal. The spectral signal obtained is shown by the solid black line in figure 4. The simulation was performed in 0.1 nm steps from 770 nm to 810 nm and has been iteratively adjusted to match the experimental data. The code was run in Matlab 2017 on an Intel Core i7 processor with 3.5 GHz and 16 GB of RAM. The quality of the match between both data sets can be evaluated quantitatively using the R-squared coefficient and the root-mean-square value (RMS) [20], which result in R² = 0.9849 and rms = 0.0258. The aberration phase map is displayed on the right-hand side of figure 4 within the graphical representation.
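The R² and RMS agreement measures follow the usual definitions; a small sketch with placeholder signals (the experimental data themselves are not reproduced here):

```python
import numpy as np

def goodness_of_fit(measured, simulated):
    """Coefficient of determination R^2 and root-mean-square
    deviation between two sampled spectral signals."""
    residual = measured - simulated
    ss_res = np.sum(residual**2)
    ss_tot = np.sum((measured - measured.mean())**2)
    r2 = 1.0 - ss_res / ss_tot
    rms = np.sqrt(np.mean(residual**2))
    return r2, rms

# placeholder signals: a Gaussian peak and a slightly shifted copy
x = np.linspace(-1, 1, 401)
meas = np.exp(-x**2 / 0.02)
sim = np.exp(-(x - 0.01)**2 / 0.02)
r2, rms = goodness_of_fit(meas, sim)
```

A small lateral shift between the two peaks already reduces R² noticeably, which is why these two numbers are a sensitive check of the simulated signal shape.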
Due to the asymmetric profile, the commonly applied centre-of-gravity algorithm, which due to its larger data basis is generally considered to perform better than a peak detection algorithm, measures an axial position different from the real one.

Influence of object tilt
The same experimental configuration as explained in section 3.1 has been used. The mirror has now been tilted by an angle of 15°. The simulation has been adjusted to account for the tilt angle. The tilt of the object in the reflected path corresponds to a phase ramp which, according to the Fourier shift theorem, as discussed in [12] on page 8, causes a lateral displacement in the Fourier domain, which is represented by the pupil plane. If only the tilt of the object is of concern and a specularly reflective object is considered (no diffuse scattering or topographic variations), the propagation steps (3) and (5) can be omitted.
The object's tilt angle α can directly be related to a lateral shift in the pupil plane corresponding to the wavelength. Since the reflected beam is deviated by twice the tilt angle, the amount of shift d_shift in pixels can be calculated as d_shift = f_comb(λ) tan(2α)/Δx_pupil, with Δx_pupil the pixel size in the pupil plane corresponding to the wavelength. The final pupil function corresponds to the overlap between the pupil function with and without shift. For an inclined object the resulting chromatic confocal signal becomes more symmetric, as shown in figure 5. This effect is due to a reduced number of different foci arising from the spherical aberrations. In detail, at large object tilt the paraxial rays in close proximity to the optical axis are reflected outside the aperture, whereas the marginal rays from at least one side of the objective can still be recorded, as schematically shown in figure 6(b).
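The tilt-induced pupil shift can be evaluated directly. The relation below is a reconstruction from the text (reflection deviates the beam by twice the tilt angle, which maps to a lateral displacement in the Fourier plane); the parameter values are illustrative, not taken from the setup:

```python
import numpy as np

def pupil_shift_pixels(alpha_deg, f_comb, dx_pupil):
    """Lateral shift of the pupil function caused by an object tilted
    by alpha: the reflected beam is deviated by 2*alpha, giving a
    displacement f_comb * tan(2*alpha) in the Fourier (pupil) plane."""
    alpha = np.deg2rad(alpha_deg)
    return f_comb * np.tan(2 * alpha) / dx_pupil

# illustrative values: 15 deg tilt, 20 mm focal length, 10 um pupil pixels
shift = pupil_shift_pixels(15.0, f_comb=20e-3, dx_pupil=10e-6)
```

For a 15° tilt the shift amounts to more than a thousand pixels in this example grid, illustrating why a large part of the reflected pupil no longer overlaps with the hyperchromate's aperture.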
The corresponding R-squared coefficient and root-mean-square value comparing the simulated and experimentally obtained data shown in figure 5 for the 15° tilted mirror are R² = 0.9989 and rms = 0.0191. This investigation has been extended further to analyse the influence of a rising degree of first-order spherical aberration W(4,0) on the measurement uncertainty. According to fringe indexing, peak-to-valley aberration values of 0.125, 0.25 and 0.5 times the maximum wavelength (940 nm) have been investigated. The results are shown in figure 7. It can be concluded that an increased degree of spherical aberration does not only result in an asymmetric profile but likewise introduces a shift of the peak position and an increased intensity of the side peaks. The peak positions for the mirror in the tilted and tilt-free positions are listed in table 1. For the tilt-free position, the maximum shift in wavelength compared to the aberration-free curve was 4.2 nm for the half-wavelength PV aberration value. For the tilted object position the shift was not as severe and oppositely directed, towards shorter wavelengths, with a maximum shift of 0.8 nm for the half-wavelength PV aberration value. Hence, in the case of the investigated maximum spherical aberration of half a wavelength PV, a wavelength difference of 4.6 nm would be obtained, resulting in a height shift of 5.75 μm for the 200 μm measurement range over the 160 nm wavelength range. This height shift corresponds to an error of almost 3% with regard to the measurement range.

Influence of roughness
One of the major applications of chromatic confocal microscopy is the determination of roughness parameters from the measured height data, such as the mean roughness Ra and the roughness depth Rz. Measurements with a Stil OP10000 (NA = 0.2, spot size = 51 μm, working distance = 65 mm, measurement range = 10 mm) and a Precitec 5002227 (NA = 0.5, spot size = 5 μm, working distance = 4.5 mm, measurement range = 300 μm) hyperchromate have been conducted. The measurement data have been recorded at a lateral sampling rate of 1 μm. The height data associated with the spectral confocal signal at the different NA are shown in figure 8.
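For reference, Ra, Rq and Rz can be computed from a mean-line-corrected roughness profile as follows. This is a generic sketch following the usual DIN EN ISO 4287 conventions (Rz averaged over five sampling lengths), not the evaluation software used for the measurements:

```python
import numpy as np

def roughness_params(z, n_sections=5):
    """Ra, Rq and Rz from a roughness profile z referenced to its
    mean line; Rz is the mean peak-to-valley over n_sections
    sampling lengths (DIN EN ISO 4287 convention)."""
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                       # reference to the mean line
    ra = np.mean(np.abs(z))                # arithmetic mean roughness
    rq = np.sqrt(np.mean(z**2))            # rms roughness
    sections = np.array_split(z, n_sections)
    rz = np.mean([s.max() - s.min() for s in sections])
    return ra, rq, rz

# sanity check with a sinusoid of amplitude 1: Ra = 2/pi, Rq = 1/sqrt(2)
x = np.linspace(0, 10 * 2 * np.pi, 5000)
ra, rq, rz = roughness_params(np.sin(x))
```

The sinusoidal sanity check makes the relations between the three parameters explicit before they are applied to measured profiles.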
Already from the scale it can be concluded that the measurements taken with the NA of 0.2 are not trustworthy. The measurement data obtained with the NA of 0.5, in contrast, are in much better agreement with the reference data (figure 9). Motivated by the large discrepancy between the data presented, the cause of these large measurement errors in the determination of roughness values when employing a smaller NA will be investigated in the following. For the simulations, the height data from a reference measurement system that delivers an axial and lateral resolution higher than that of the chromatic confocal system can be used as the input topography map.
Possible reference measurement systems are white-light interference microscopy or atomic force microscopy. Alternatively, if the roughness parameters of the sample are known, the topography map can be modelled using the rms roughness Rq and the correlation length l_corr. Furthermore, a Gaussian height probability distribution (HPD) is assumed for the model, as discussed in [21,22], which can be defined as p(z) = exp(−z²/(2 Rq²))/(√(2π) Rq). The correlation length relates to the autocorrelation function (ACF), which estimates the degree of similarity of the surface heights. The ACF for a lag value l_j is given by ACF(l_j) = ⟨z(x) z(x + l_j)⟩/Rq². The correlation length l_corr is a parametrization of the ACF and represents the distance at which the absolute value of the ACF falls below 1/e of its zero-lag value. According to [21], l_corr can likewise be related to the rms slope value s via s = √2 Rq/l_corr. It is also possible to compute more complex topography maps with an asymmetric HPD, which is discussed in detail in [23]. The shape of the HPD gives information about the surface processing technique used [24], such as milling, turning, grinding or honing.
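A synthetic profile with Gaussian height statistics, a target Rq and a Gaussian ACF can be generated by low-pass filtering white noise, a common construction consistent with [21,22]; the kernel width w = l_corr/2 follows from the 1/e definition of the correlation length. A sketch with illustrative grid values:

```python
import numpy as np

def synthetic_surface(n, dx, rq, l_corr, seed=0):
    """1D rough profile with Gaussian height distribution, rms
    roughness rq and a Gaussian ACF whose 1/e decay length is l_corr.
    White Gaussian noise is filtered with a Gaussian kernel; the
    kernel width w = l_corr/2 makes the ACF fall to 1/e at l_corr."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n)
    x = (np.arange(n) - n // 2) * dx
    w = l_corr / 2.0
    kernel = np.exp(-x**2 / (2 * w**2))
    # circular convolution of noise and kernel via the FFT
    z = np.real(np.fft.ifft(np.fft.fft(noise) * np.fft.fft(np.fft.ifftshift(kernel))))
    return z * rq / np.std(z)              # rescale to the target rms

# parameters of the investigated roughness standard, illustrative grid
z = synthetic_surface(n=4096, dx=1e-6, rq=2.2e-6, l_corr=26e-6)
```

The resulting profile can be fed directly into the topography step of the simulation pipeline when no reference measurement is available.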
The experiment was carried out on a PTB-calibrated roughness standard. The data provided are Ra, Rz and Rmax.
In order to determine Rq and the correlation length l_corr, measurements using a white-light interferometer (Polytec TMS1200) with a Mirau objective of NA 0.3 have been taken; the retrieved topography map is shown in figure 10. The PTB-specified cutoff wavelength λc of 0.8 mm (DIN EN ISO 4288), a measurement length of at least 4 mm and a bandwidth-limited phase-corrected profile filter λs of 2.5 μm (DIN EN ISO 16610-21, DIN EN ISO 3274) were selected to separate the roughness from shape and waviness. In addition to the Gaussian lowpass and highpass filters used to extract the roughness, a further Gaussian lowpass filter of 6 μm × 6 μm was first applied to the raw data. This filter has been chosen in agreement with Polytec GmbH. Its need arises from the non-tactile opto-digital data generation process in white-light interference microscopy compared to the tactile mechanical approach used by PTB. For the opto-digital data, additional sources of noise impact the signal during its conversion from photons to electrons into a digital discretized signal, as pointed out in [25]. These generally high-frequency noise sources need to be suppressed to enable a fair comparison. Only after the application of the lowpass Gauss filter could the same values as specified by PTB be obtained, and the corresponding values Rq = 2.22 μm and l_corr = 26 μm have been calculated.
With these values the feasibility of the synthetically generated surface topography, as defined in [21,22], can also be confirmed. The chromatic confocal experiment was carried out with a chromatic confocal objective (Precitec model 5000227). The NA of the objective is 0.5, while the measurement range and the working distance are 300 μm and 4.5 mm, respectively. The spot size provided by the manufacturer is 5 μm. This objective was fixed in a Thorlabs cage system and an adjustable iris aperture was attached in front of it, as shown in figure 11. The iris aperture facilitated taking measurements at different NA. A photonic crystal fibre-based supercontinuum white-light laser source from NKT Photonics was used in the experiment. Light from the laser source was coupled into a 2×2 wideband single-mode fibre coupler. An Ocean Optics HR2000+ spectrometer with a 2048 pixel sensor was used to capture the confocal signals. The spectral range of the spectrometer is 200 nm to 1100 nm, which results in a spectral resolution of 0.44 nm/pixel. The surface profile was constructed and filtered according to the DIN standards and finally the roughness parameters were calculated.
The experimental and simulated signals for the different NA are shown in figure 12. The previous findings with the Stil objective could be confirmed. At an NA of 0.2 the experimental as well as the simulated data display a rugged profile with an ill-defined peak position. The pupil function for the focused wavelength, shown in the bottom right corner of the graph, likewise exhibits a noisier structure. With increasing NA the profile becomes less noisy. In fact, an NA of 0.3 is already sufficient to enable a trustworthy recovery of the roughness parameters, which can be confirmed from both measurement and simulation. For an NA of 0.5 the simulation results in a slightly narrower spectral signal. This can be attributed to the fact that in the simulation a collimated homogeneous input beam is assumed, whereas its true profile is Gaussian, which results in a reduction of intensity at the periphery of the pupil and hence a widening of the resulting chromatic confocal peak. The R² and rms values for the comparison of experimental and simulated data are listed in table 2.
A physical explanation for the increased noise at smaller NA can be derived from the Nyquist sampling criterion. The phase difference corresponding to the height difference between two adjacent object positions within the Airy disc should not be larger than π/2. A representative height difference between two adjacent object points can be obtained from the rms slope, which can be calculated by rearranging equation (26) or from the roughness profile. Due to the double-pass arrangement in reflection mode, the height difference should not be larger than λ/4, which results in the following condition for the required NA: NA ≥ 2.44 s = 2.44 √2 Rq/l_corr. Since both the Airy disc diameter and the maximum height step are related to the wavelength, a wavelength-independent expression is obtained.
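The resulting selection rule can be evaluated numerically. The factor 2.44 = 4·0.61 arises from the Airy radius 0.61λ/NA and the λ/4 height limit; with s = √2·Rq/l_corr this reproduces the minimum NA of 0.292 quoted for the investigated sample (a sketch of the derived condition):

```python
import numpy as np

def minimum_na(rq, l_corr):
    """Minimum NA to avoid a speckle-dominated signal on a rough
    surface: with rms slope s = sqrt(2) * Rq / l_corr (Gaussian ACF),
    the height step across the Airy radius 0.61 * lam / NA must stay
    below lam / 4, giving NA >= 2.44 * s (wavelength drops out)."""
    s = np.sqrt(2) * rq / l_corr
    return 2.44 * s

# roughness standard investigated here: Rq = 2.2 um, l_corr = 26 um
na_min = minimum_na(rq=2.2e-6, l_corr=26e-6)
```

This directly supports the experimental finding that an NA of 0.2 is insufficient for this sample, whereas 0.3 already yields trustworthy roughness values.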
Another way to think about this problem can be derived from the generation of speckle with regard to Abbe's resolution criterion. In contrast to the specular surface discussed in sections 3.1 and 3.2, the light now interacts with a rough scattering surface, which results in diffraction and corresponding diffraction orders. Abbe's resolution criterion states that the first order and the zeroth order need to be recorded in order to recover the corresponding object point. In fact, both the zeroth and the first order interfere in the image plane, resulting in a sharply imaged object point. Due to the inclination represented by the rms slope value, the zeroth diffraction order is inclined to the optical axis by an angle that for small angles can be approximated by the slope value s. In order to enable the recording of at least one of the two first orders, the NA of the objective needs to be larger than twice the slope. Otherwise, the first and zeroth orders cannot interfere with each other, which results in an uncorrelated phase difference and the appearance of speckle. In the worst case, this phase difference is π, which results in destructive interference and hence a significantly reduced intensity in the image plane. This explains the increased noise of the chromatic confocal signal at small NA.
Obviously, these equations fail if the rms roughness is smaller than λ/10, at which point the surface acts like a mirror and exhibits very little scattering. For the investigated roughness sample with an Rq of 2.2 μm and an l_corr of 26 μm, a minimum NA of 0.292 is required, which could be confirmed by our investigation.

Conclusions
A realistic wave-optical simulation tool has been developed and validated in different scenarios using experimental data. It could be shown that spherical aberrations in combination with an inclined object result in a different pupil function and hence a different chromatic confocal signal. Even a diffraction-limited system with a peak-to-valley wavefront error of λ/4 exhibits a significant peak shift that amounts to more than 1% of the measurement range.
Furthermore, the influence of roughness with respect to the NA employed has been demonstrated. At small NA disturbing artifacts occurred, which can be attributed to the speckle effect. Finally, a formula has been derived, which helps to select a sufficiently large NA to avoid the speckle effect and hence guarantees the recording of a noise-reduced signal.
Future work will be based on the development of a rigorous model so that higher NA and polarization effects can likewise be investigated. Moreover, other sources of noise that impact the signal during its conversion from photon to a digitized value should likewise be accounted for in the modelling process. In that manner, a simulation tool will become available that enables realistic rendering and hence the generation of a large number of synthetic data. This data could, for instance, be used for the application of artificial intelligence algorithms to enable an improvement of the measurement uncertainty while relaxing the optical performance requirements imposed on the system.