Snapshot 3D reconstruction of liquid surfaces

In contrast to static objects, liquid structures such as drops, blobs, waves and ripples on water surfaces are challenging to image in 3D for two main reasons. First, the transient nature of these phenomena requires snapshot imaging fast enough to freeze the motion of the liquid. Second, the transparency of liquids and the specular reflections from their surfaces induce complex image artefacts. In this article we present a novel imaging approach to reconstruct in 3D the surface of irregular liquid structures from a single snapshot. The technique, named Fringe Projection - Laser Induced Fluorescence (FP-LIF), uses a high concentration of fluorescent dye in the probed liquid. By exciting this dye with a fringe projection structured laser beam, fluorescence is generated primarily at the liquid surface and imaged at a backward angle. By analysing the deformation of the initially projected fringes using phase-demodulation image post-processing, the 3D coordinates of the liquid surface are deduced. The approach is first tested numerically on a simulated pending drop in order to analyse its performance. Then, FP-LIF is applied to two experimental cases: a quasi-static pending drop as well as a transient liquid sheet. We demonstrate reconstruction RMS errors of 1.4% and 6.1% for the simulated and experimental cases, respectively. The technique presented here demonstrates, for the first time, a fringe projection approach based on LIF detection to reconstruct liquid surfaces in 3D. FP-LIF is promising for the study of more complex liquid structures and paves the way for high-speed 3D videography of liquid surfaces.


Introduction
The ability to reconstruct an object in 3D is of interest for a large number of applications. Examples of three-dimensional optical imaging include the reconstruction of dental pieces for improved accuracy [1], of bridges to aid maintenance work [2], of plants/fruits to measure their growth [3] and 3D mapping of the seafloor [4]. In these applications a structured light beam was used to illuminate the object of interest and the pattern that was diffusely reflected from its surface was imaged. From the analysis of the deformed light pattern, the surface of the object could be reconstructed in 3D. Generally, methods using Structured Illumination require several image acquisitions following a scanning procedure to obtain a single 3D reconstruction [1][2][3][4]. In situations where high temporal resolution is needed this multiple image acquisition is not desired, for example when imaging fast moving and/or deforming elements such as liquid bodies.
The 3D reconstruction of liquid bodies such as drops, blobs and ligaments, as well as of liquid sheets, is highly desirable for analysing the process of liquid fragmentation occurring, for example, in spray systems [5]. Most of the 3D reconstruction techniques for sprays are based on averaged imaging using either X-rays [6][7][8][9][10][11] or optical approaches [12,13]. However, averaged 3D reconstructions depict the general/global spray structure, but not the stochastic and random structures of each individual injection event nor the dynamics acting on liquid surfaces. Such information requires, instead, snapshot 3D imaging to optically freeze the liquid motion. This is a challenging task where only a few examples of successful techniques can be found in the literature. For example, various optical techniques have been used for 3D reconstruction of the position of large and isolated drops, including tomographic shadowgraphy [14], in-line holography [15][16][17] and plenoptic imaging [16]. Plenoptic imaging has also been used to produce snapshot 3D reconstructions of sprays where bulk features such as the spray angle could be estimated [18]. Another recent example applies high-speed X-ray tomography, where three high-flux rotating anode X-ray tube sources have been used together with three intensified high-speed cameras [19]. Thanks to this approach, 3D snapshot images could be recorded at 10 kHz, creating a 4D spray movie. Despite its merits, this configuration requires multiple intensified high-speed cameras, where cost can be an obstacle since the number of views directly limits the spatial resolution.
In addition to temporal resolution, a second challenge with liquids lies both in their transparency, leading to refraction, and in the occurrence of specular reflections from their surfaces. This differs from opaque objects, through which light does not pass and at whose surfaces diffuse back-reflection occurs. To address these issues and reduce transparency, additives can be mixed with the liquid. For example, a fluorescing dye can be used at low concentration to induce a fluorescence signal proportional to the illuminated liquid volume [20], or paint can be employed to create diffuse reflection near the liquid surface [21]. To avoid refraction and reflection effects at the liquid/air interface, another solution consists of using X-rays instead of visible light, as previously mentioned [6][7][8][9][10][11][19]. Finally, some approaches instead take advantage of light refraction to reconstruct 3D structures by analysing the refractive distortion of a binary pattern from different angles [22,23], or exploit specular light reflection from the object surface for 3D reconstruction [24]. Digital Fringe Projection (DFP) is a technique based on Structured Illumination where a digital projector is employed to project a pattern consisting of a number of parallel lines. When those fringes are projected onto a 3D object and observed from an angle, the lines curve as a function of the surface structure. By analysing the deformation of those lines using a post-processing procedure, the topological structure of the object can be extracted directly for each pixel. The technique has been used on static objects [25] as well as on moving objects [26], and the concept of Fringe Projection can enable 3D reconstruction from a single snapshot image [27,28].
The approach proposed in this article is based on DFP, to take advantage of its capability of obtaining 3D images from a single snapshot. However, to apply the technique to liquids, the challenges related to transparency and specular reflections must be solved. We tackle this problem by adding a fluorescent dye to the liquid at a fairly high concentration. With this solution, a fluorescence signal is emitted very close to the liquid surface when the liquid is illuminated at an appropriate excitation wavelength. By using a fluorescence band-pass filter in front of the camera, the specularly reflected light from the surface is rejected while the fluorescence signal close to the surface is detected. This new technique, called Fringe Projection - Laser Induced Fluorescence (FP-LIF), is illustrated in Fig. 1, where its general principle is described.
In this article, the methods for 3D reconstruction based on phase demodulation are first reported and the one employed here for FP-LIF is described in detail. Then, a simulated case of a pending drop is investigated to evaluate the Root Mean Square Error (RMSE) of the reconstructions and to optimize a key parameter of the technique. Once verified by the numerical calculations, the technique is applied experimentally to the reconstruction of both pending drops and a transient liquid sheet.

Method for 3D reconstruction with phase demodulation
The image recorded with the FP-LIF technique, Fig. 1, called a fringe pattern, can be modelled by Eq. (1). For the foreground (fg), A is the background illumination, B is the modulation amplitude and ϕ is the phase of the fringe pattern. The background (bg) is simply modelled as noise ε. As shown by Takeda et al. in [27], the phase ϕ of the fringe pattern encodes the imaged object's third coordinate and, together with a calibration and triangulation, enables estimation of real 3D coordinates for each foreground pixel. It is therefore important to estimate the phase of the fringe pattern, a process called phase demodulation.

I(x, y) = A(x, y) + B(x, y) cos(ϕ(x, y)) + ε(x, y),  (x, y) ∈ fg
I(x, y) = ε(x, y),  (x, y) ∈ bg     (1)

The process of phase demodulation is not new and multiple methods have been developed for a variety of fringe pattern types. The type of interest for this application is called open fringe patterns, where the gradient of the image phase ϕ never crosses the zero frequency. The opposite type, closed fringe patterns, includes closed curved fringes that never occur with the fringe projection used by FP-LIF.
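As a minimal illustration of Eq. (1), the following Python sketch synthesises a fringe pattern for a flat object (linear phase), with the background modelled as pure noise. The helper name and parameter values are illustrative, not taken from the original work.

```python
import numpy as np

def fringe_pattern(shape=(64, 64), period=16.0, A=1.0, B=1.0,
                   noise_sigma=0.05, fg_mask=None, rng=None):
    """Synthesise an image following Eq. (1): foreground pixels are
    A + B*cos(phi) + noise, background pixels are noise only."""
    rng = np.random.default_rng(0) if rng is None else rng
    _, x = np.mgrid[0:shape[0], 0:shape[1]]
    phi = 2 * np.pi * x / period            # linear carrier phase (flat object)
    if fg_mask is None:
        fg_mask = np.ones(shape, dtype=bool)
    I = np.where(fg_mask, A + B * np.cos(phi), 0.0)
    return I + rng.normal(0.0, noise_sigma, shape), phi
```

For a real measurement the phase ϕ is of course unknown; the point of the phase demodulation methods below is to recover it from I alone.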

Review of phase demodulation methods
A first class of phase demodulation methods are phase-stepping methods. With these, multiple images are used to extract one phase map; an example is given by Zhang et al. in [29]. Each image has a different phase shift of the fringes and at least three images are required to find a unique solution for A, B and ϕ for each pixel. One advantage of this approach is that the phase can be extracted independently, pixel by pixel. However, requiring multiple recorded images for each demodulation and 3D reconstruction is, as mentioned, not preferred here, and a single image method is favored.
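A standard three-step variant (with phase shifts of -2π/3, 0 and +2π/3, not necessarily the shifts used in [29]) recovers the wrapped phase per pixel in closed form, since A and B cancel:

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Wrapped phase from three fringe images I_k = A + B*cos(phi + d_k)
    with shifts d = (-2*pi/3, 0, +2*pi/3). Using the identities
    sqrt(3)*(I1 - I3) = 3*B*sin(phi) and 2*I2 - I1 - I3 = 3*B*cos(phi),
    both A and B cancel out pixel by pixel."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```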
One class of single image demodulation methods is called Regularized Phase Trackers. The approach iteratively finds the phase for each pixel by minimizing a local energy function [30,31] and can be effective for both open and closed fringe patterns. Limitations of the method are the requirement of parameter initialisation and, in some cases, pre-normalisation of the fringe pattern [30]. A different single image phase demodulation method exploits the fact that a fringe pattern describes Lissajous ellipses [32]; by estimating the parameters of these ellipses the phase is demodulated.
The phase demodulation method used in this article is the Continuous Wavelet Transform (CWT) method. This method is related to the Fourier methods for phase demodulation introduced by Takeda et al. [27], originally known as Fourier Transform Profilometry (FTP). FTP applies a filtering in the Fourier domain of the image and, after an inverse Fourier transform, the parameters of the fringe pattern, A, B and ϕ, can be estimated. The method is computationally fast, but its greatest drawback is that, being a global method, it has trouble adapting to the non-stationary fringe patterns often found in phase demodulation problems [33][34][35]. To solve this problem, either the Windowed Fourier Transform (WFT) or the CWT method can be used, both of which gain localisation in the spatial domain [33,35,36]. In simulations it has been found that the CWT and WFT methods have a higher computational time than the FTP method but improved accuracy [37]. The next section provides a more detailed explanation of the Continuous Wavelet Transform phase demodulation method.
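For reference, a minimal FTP-style demodulation in the spirit of [27] can be written as below; the rectangular band-pass filter and its width are illustrative assumptions, not the filter of the original paper.

```python
import numpy as np

def ftp_demodulate(I, carrier, bw=0.05):
    """Isolate the +1 carrier lobe of a vertical-fringe image in the Fourier
    domain, inverse-transform and take the argument to obtain the wrapped
    phase. `carrier` is the fringe frequency in cycles/pixel along x."""
    F = np.fft.fft2(I)
    fx = np.fft.fftfreq(I.shape[1])
    fy = np.fft.fftfreq(I.shape[0])
    FX, FY = np.meshgrid(fx, fy)
    lobe = (np.abs(FX - carrier) < bw) & (np.abs(FY) < bw)  # keep +1 order only
    return np.angle(np.fft.ifft2(F * lobe))
```

Because the filter is global, a single window must fit the whole image, which is exactly the limitation that motivates the spatially localised WFT/CWT methods.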

Phase demodulation with the Continuous Wavelet Transform method
The implementation of the Continuous Wavelet Transform (CWT) method requires some wavelet theory, where a wavelet is a wave-like oscillation packet. One example is the 2D Morlet wavelet, illustrated in Fig. 2 and given in Eq. (2).
For the Morlet wavelet function, Eq. (2), 2π/k0 is the base period length of the wavelet, σ corresponds to the standard deviation of a Gaussian envelope and ε controls the degree of anisotropy of the envelope, where ε = 1 is isotropic. To be considered a wavelet, the Fourier coefficient of the zero frequency must be close to zero, which is fulfilled for k0 ≥ 6 [33]. By using a value of ε greater than 1, the wavelet can include more wave directions with similar frequency, as seen in the Fourier transform of the Morlet wavelet, Fig. 2.
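A common form of the anisotropic 2D Morlet wavelet is sketched below. Since the explicit expression of Eq. (2) is not reproduced here, this particular form (plane-wave carrier along x under a Gaussian envelope compressed by ε in y) should be read as an assumption consistent with the parameter roles described above.

```python
import numpy as np

def morlet_2d(x, y, k0=6.0, sigma=0.6, eps=3.0):
    """Assumed 2D Morlet wavelet: complex carrier exp(i*k0*x) under a
    Gaussian envelope; eps > 1 narrows the envelope in y, which widens the
    wavelet's Fourier support in ky and thus admits more wave directions."""
    envelope = np.exp(-(x**2 + (eps * y)**2) / (2.0 * sigma**2))
    return np.exp(1j * k0 * x) * envelope
```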

Fig. 2.
For phase demodulation using the CWT method a wavelet is required; the one used in this work is the Morlet wavelet shown above. The real part of the complex spatial function, Eq. (2), is plotted in the left panel with its Fourier transform F on the right. From Eq. (2), k0 = 6, σ = 0.6 and ε = 3, where the parameters have been adapted to improve simulation results from the initial values suggested by Watkins et al. [33].
With a chosen (mother) wavelet Ψ, daughters can be created through scaling and translation as (up to the choice of normalisation)

Ψ_{a,b}(x) = (1/a) Ψ((x − b)/a),

where a is the scaling parameter and b is the translation parameter. In general a is a positive real value and b can be any real value, but since b is usually connected to a discretely sampled signal, such as an image, it takes integer values of the sample coordinate. The period length of the daughter Morlet wavelet is calculated as 2πa/k0. The Continuous Wavelet Transform (CWT) of a 2D function f, in our case the recorded image, is calculated as

W_f(a, b) = ∫ f(x) Ψ*_{a,b}(x) dx,

where * denotes complex conjugation. The transform is a function with three dimensions: one for the scale a and two for the translation b. In addition, one could rotate the wavelet as a fourth dimension, but this is not necessary here since the fringe patterns are open and all fringes can be approximated to lie in one direction. W_f(a, b) for each combination of a and b is named a wavelet coefficient, and together the absolute values of all coefficients are known as the scalogram.
With the theory of wavelets and the CWT, one approach to demodulate the phase ϕ(x) in an image f is to first find the ridge of the scalogram, calculated as

a_ridge(b) = argmax_a |W_f(a, b)|.

The ridge is the daughter wavelet scale a that best matches the fringe pattern frequency at pixel b. This is the heaviest computational step, and with the approximation above (no wavelet rotation) one dimension is dropped, which reduces the computation time. From the ridge, the wrapped phase is estimated with the phase method, described in [33], as

ϕ_w(b) = arg( W_f(a_ridge(b), b) ).

The result is wrapped in the interval (−π, π] as a consequence of taking the argument of a complex number. Removing this wrapping is called phase unwrapping, which can be accomplished using several different mathematical approaches [38]. We used a phase unwrapping algorithm implemented in the scikit-image python package [39] to estimate the continuous phase ϕ. Note that the unwrapped phase differs from the absolute phase by an unknown multiple of 2π; to retrieve the absolute phase, the correct phase value must be known in at least one pixel.

Now, consider how the CWT method is most efficiently implemented. A fast implementation, as concluded by Watkins [33], is to use the Fast Fourier Transform (FFT) and the convolution theorem of the Fourier transform. The continuous wavelet transform of an image can be written as a 2D convolution, and with the Fourier convolution theorem the coefficients for all pixels b at a single scale a are obtained from one forward and one inverse FFT. This is the implementation used for the phase demodulation results in this article.
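Putting the pieces together, a compact sketch of the ridge-based CWT demodulation using the FFT convolution theorem is given below. The wavelet's Fourier-domain form and its normalisation are assumptions (per-scale amplitude factors are dropped, which leaves the extracted phase unchanged); the function names are illustrative.

```python
import numpy as np

def cwt_ridge_phase(I, scales, k0=6.0, sigma=0.6, eps=3.0):
    """For each scale a, correlate the image with a daughter Morlet wavelet
    via the FFT convolution theorem; keep, per pixel b, the coefficient of
    the scale maximising |W_f(a, b)| (the ridge) and return its argument,
    i.e. the wrapped phase."""
    F = np.fft.fft2(I)
    ky = 2 * np.pi * np.fft.fftfreq(I.shape[0])
    kx = 2 * np.pi * np.fft.fftfreq(I.shape[1])
    KX, KY = np.meshgrid(kx, ky)
    best = np.zeros_like(F)
    best_mag = np.zeros(I.shape)
    for a in scales:
        # Assumed Fourier transform of the scaled Morlet wavelet: a Gaussian
        # centred on the carrier k0/a; constant amplitude factors omitted.
        Psi_hat = np.exp(-0.5 * sigma**2 * ((a * KX - k0)**2 + (a * KY / eps)**2))
        Wf = np.fft.ifft2(F * Psi_hat)
        upd = np.abs(Wf) > best_mag
        best[upd], best_mag[upd] = Wf[upd], np.abs(Wf[upd])
    return np.angle(best)  # wrapped; unwrap e.g. with skimage.restoration.unwrap_phase
```

The loop over scales makes the cost one FFT pair per scale, which is why the ridge search is the heaviest step.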

Triangulating 3D coordinates from image phase
With the calculated unwrapped phase of the fringe pattern ϕ(x, y), this section shows the calculations for reconstructing 3D points of an object surface. Here, a model of the global phase ϕ(X, Y, Z) in the world coordinate system is required, together with a camera model that includes information on how world coordinates transform into camera coordinates, [X, Y, Z]^T → [x, y]^T. A 2D illustration of how fringes are projected into the image from a surface is found in Fig. 3. The projected light induces fluorescence close to the 3D structure surface (white stripes), connected to a linear global phase in the X direction, Eq. (10). The projection is imaged from an angle θ, where the orthographic camera converts the global coordinates [X, Y, Z]^T into camera coordinates [x, y]^T. The projection can be represented using a camera matrix, Eq. (9). From the global phase and the camera matrix, the expression in Eq. (13) is derived for extracting the third coordinate d from the image phase.
The 3D points of an object can be described by a set of homogeneous 3D coordinates, where mainly d(X, Y) is a challenge to find in an image. Homogeneous coordinates are generally used for projective geometry since they allow an affine or a projective transform to be represented by a single matrix. The homogeneous coordinate adds one dimension to the original coordinates, and a homogeneous point still represents the same Cartesian point after being scaled by a non-zero value. If the point is scaled so that the last coordinate is 1, that coordinate can be removed to get the corresponding Cartesian coordinates. The camera that connects the world coordinate system [X, Y, Z]^T and the camera coordinate system [x, y]^T is, as mentioned, represented by a camera matrix P. P is divided into two parts: firstly, an intrinsic camera matrix K that includes the lens and sensor properties of the camera, here modelled by the scale s (the number of pixels per unit of length) and the principal image point [x0, y0]^T; secondly, the extrinsic camera matrix C that represents an orthographic projection from the camera's position and rotation, as seen in Fig. 3. C depends only on the angle θ, since orthographic cameras are not affected by the distance from the camera to the 3D object. It should be noted that the camera and the projection source need to be carefully aligned, as they are assumed to lie in the same plane.
The global phase will be approximated as linear in X, even though the real fringes, projected through interference of laser light, diverge with distance. However, for a small enough range of d relative to the distance between the illumination source and the object, the error is negligible. The global phase is modelled as

ϕ(X, Y, Z) = 2πX / T_g,     (10)

where T_g is the global period length of the fringes. To include the projection property of the fringes one can replace T_g with δ_T Z + T_g, where δ_T represents the fringe divergence. Now the connection between the 3D points on the object surface and the 2D camera points is calculated by camera projection of the object points D.
Solving for X in the first row of Eq. (11) and inserting into the global phase, Eq. (10), gives a model for the image phase,

ϕ(x, y) = (2π/T) [ (x − x0) − d(x, y) s sin(θ) ],     (12)

where T_g has been replaced by the more practical T = T_g s cos(θ), which is the base period length of the fringe pattern when d = 0, measured in pixels. Finally, Eq. (12) can be solved for d,

d(x, y) = [ (x − x0) − T ϕ(x, y) / (2π) ] / (s sin(θ)).     (13)

The estimated 3D points can then be expressed in image pixel coordinates. Note that Eq. (12) assumes that the absolute phase is known from the phase demodulation. If this is not the case, the calculations for X might be corrupted; one approximate solution is to assume sin(θ) ≈ 0, since θ is quite small. For the calibration of this system the parameters θ, s, x0, y0 and T are required. θ is found from the setup and, with a telecentric lens, s can be found as the number of pixels per unit of length in world coordinates, where it is assumed that the same s holds in both image directions. x0 and y0 are approximated quite well by the center of the image. Finally, T can be estimated from a calibration image of a flat 3D object orthogonal to the illumination direction, d(x, y) = 0. Such a calibration image has a single frequency over the whole image, since the fringe pattern phase is linear. The period T can either be extracted from the frequency peak found in the Fourier transform or by estimating the pixel length between the fringes in the image.
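A direct implementation of the phase-to-depth relation (Eqs. (12)-(13)) is sketched below. The explicit form used here, ϕ = (2π/T)[(x − x0) − d·s·sin θ], is a reconstruction from the surrounding text, so the sign conventions may need flipping for a given setup.

```python
import numpy as np

def depth_from_phase(phi, x0, T, s, theta):
    """Invert the assumed image-phase model
    phi(x, y) = (2*pi/T) * ((x - x0) - d(x, y) * s * sin(theta))
    for the depth d. The pixel column index x runs along the fringe
    direction; phi must be the absolute (unwrapped, anchored) phase."""
    x = np.arange(phi.shape[1], dtype=float)[None, :]
    return ((x - x0) - T * phi / (2.0 * np.pi)) / (s * np.sin(theta))
```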
As a side note, with the setup illustrated in Fig. 3 it is important to have the phase in the X-direction, i.e. vertical fringes. If the phase is in the Y-direction, no information on d will be stored in the image, as seen in the second row of Eq. (11). For a corresponding setup with horizontal fringes, the camera can be put on top of the projection instead of at its side.

Description of the simulation
The following procedure is used to simulate fringe pattern images of a rotationally symmetric simulated drop. Both the drop and example fringe patterns are shown in Fig. 4 for different parameter sets. The simulated drop is defined by the radius r as a function of Y, a combination of two curves: a third degree polynomial and a quarter circle. The polynomial parameters are set so that both curves have equal value and derivative at Y = 0. The radius is then calculated for a Y-range of [−1.6, 1.0] and the 3D structure for each Y value is a half circle along X with the corresponding radius. Values of X outside of the drop in an image are masked as background. With the 3D structure d set as the simulated drop, which is a function of the global parameters X and Y, the simulated phase of a fringe pattern image is calculated using Eq. (12). To use this equation, a projection of the drop into the orthographic camera's coordinates x and y must be performed; for a structure with rotational symmetry about the same axis as θ this reduces to a simple scaling, where the minus sign for y originates in the opposite directions of the camera y and world Y axes. After this projection, some of the coordinates of the calculated structure will be shadowed by the structure itself, and therefore all values with d(x, y) < 0 are masked as background. The image phase is now calculated using Eq. (12). Then the image I, first without noise, is calculated with Eq. (1) where A = B = 1 for the foreground pixels. To simulate a lower modulation depth B at high frequencies, the image is convolved with a 3x3 smoothing kernel. Finally, independent normally distributed noise is added to each pixel with a variance σ² calculated from a chosen Signal to Noise Ratio (SNR). The main assumption used in this simulation is that the recording of a fringe pattern image behaves like the equations derived in section 2.3. In addition, it is assumed that the lower modulation depth at high frequencies can be simulated with a 3x3 smoothing kernel and that all pixel noise is normally distributed with spatial independence. The whole simulation is computed using python with the basic packages numpy and scipy.
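The drop profile and the noise step can be sketched as follows. The cubic's coefficients and the SNR definition (SNR = B/σ) are illustrative assumptions, not stated in the article; the coefficients are chosen so that the cubic matches the quarter circle in value and slope at Y = 0 and, additionally, vanishes smoothly at Y = −1.6.

```python
import numpy as np

def drop_radius(Y):
    """Drop radius r(Y): quarter circle for Y in [0, 1]; for Y < 0 a cubic
    with r(0) = 1 and r'(0) = 0 (matching the circle) and, as an extra
    illustrative choice, r(-1.6) = 0 with r'(-1.6) = 0."""
    cubic = 1.0 - 1.171875 * Y**2 - 0.48828125 * Y**3
    circle = np.sqrt(np.clip(1.0 - Y**2, 0.0, None))
    return np.clip(np.where(Y >= 0.0, circle, cubic), 0.0, None)

def add_pixel_noise(I, snr, rng=None):
    """Independent Gaussian pixel noise; assumes SNR = B / sigma with B = 1."""
    rng = np.random.default_rng(0) if rng is None else rng
    return I + rng.normal(0.0, 1.0 / snr, I.shape)
```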

Simulation results
For the simulation results, the Root Mean Square Error (RMSE) measure is used to estimate the overall quality of a 3D reconstruction. It is calculated as

RMSE = sqrt( (1/N) Σ_{(x,y) ∈ fg} ( d(x, y) − d̂(x, y) )² ),

where N is the number of pixels in the foreground of the image, d is the correct third coordinate structure and d̂ is the estimated third coordinate structure. All RMSE values have been scaled by the maximum value of d to obtain a percentage error. In addition, an estimated expected value of the RMSE, the mean RMSE, is used. It is calculated as the mean of 20 reconstruction RMSEs with the same simulation parameters, to get a value less influenced by the noise of a single fringe pattern simulation.
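The error measure can be written compactly as below; the helper is a direct transcription of the definition above.

```python
import numpy as np

def rmse_percent(d_true, d_est, fg_mask):
    """Foreground RMSE between true and estimated depth maps,
    scaled by max(d_true) and expressed in percent."""
    err = d_true[fg_mask] - d_est[fg_mask]
    return 100.0 * np.sqrt(np.mean(err**2)) / np.max(d_true)
```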
To visualise the accuracy of the FP-LIF technique, cross sections of 3D reconstructions are shown in Fig. 5. The reconstructions follow the correct structure, and a fine scale is required to resolve the smallest errors. The RMSE for the full structure is 1.4%, but this includes the large errors on the left edge; when these are disregarded (see the top right panel in Fig. 6) the RMSE drops to 0.8%. The errors close to the edges of the structure can be explained by two factors. Firstly, the reconstruction of a pixel uses a small area of neighbours; for edge pixels this means that background pixels will corrupt the reconstruction. The second, more complex, factor is the larger gradient of the structure d in the areas around the edges. A large gradient corresponds to either a large or a low so-called instantaneous frequency of the signal, which can explain the error. The instantaneous frequency is the local frequency of the lines in an area of the image, equal to the gradient of the phase ϕ divided by 2π. At the right end of the simulated fringe patterns, a low instantaneous frequency is seen in the x-direction, since the distance between the lines is larger there. Here the direction of the fringes is almost horizontal, since the y-gradient is dominant, which is a challenge for the phase demodulation method. The wavelet rotation mentioned in section 2.2 would improve the results, but the error from the background pixels would remain. On the left part of the images there is instead a large instantaneous frequency, which is a challenge firstly because high frequencies are harder to image with a large modulation depth. In addition, discretely sampled signals must follow the Nyquist criterion, which states that frequencies above 0.5 (T < 2) cannot be resolved. For the drop 3D structure, the largest instantaneous frequency is above this limit for T = 7, which means that these areas are impossible to reconstruct.

For both high and low instantaneous frequencies, the magnitude of the mentioned errors is connected to the base period length T, which makes it of interest for further analysis. The quality of the phase demodulation is largely dependent on T, which means that it should be tuned for optimal reconstruction. In this work the period T is optimized by reconstructing the simulated 3D drop structure from fringe patterns produced using different combinations of base period lengths T and SNR noise levels; for each combination, the mean RMSE of 20 3D reconstructed fringe patterns is calculated. The result is shown in Fig. 6 for an SNR range of 1 to 20 and T from 6 to 17 pixels, where the error map is shown for three different cases. The cases differ in which pixels have been used as foreground to calculate the error: the full structure, only pixels on the left edge, or the remainder when the left-edge pixels are removed. A first observation is that the errors for the full structure and for the edge look quite similar, with the difference that the edge errors are around five times larger. It can be concluded that a majority of the pixels with large errors are found close to the left edge of the structure, which is expected from the previous discussion. What is surprising is that for T ∼ 7 pixels the error initially decreases with lower SNR, where intuitively it should be the other way around. More noise here seems to improve the reconstructions on the left edge, even though the pixels on the edge are impossible to reconstruct correctly as a consequence of the Nyquist criterion. When observing the errors without the left edge, the estimations improve with less noise, as expected. From these simulations it can be concluded that the optimal base period length T depends on the application. If the structure on the left edge is important to reconstruct correctly, a larger T in the range 12 − 15 pixels should be used. However, if the mentioned pixels are not of interest, T in the range 7 − 9 pixels seems to be optimal for reconstructing the simulated drop.
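The T/SNR sweep protocol can be emulated end-to-end on a toy problem. The sketch below uses a flat fringe pattern and a simple Fourier-lobe demodulation instead of the full drop/CWT pipeline, so the absolute numbers are not comparable to Fig. 6; it only illustrates the mean-RMSE-over-repeats procedure.

```python
import numpy as np

def demod(I, carrier, bw=0.05):
    """Toy demodulation: keep the +1 Fourier lobe, return the wrapped phase."""
    F = np.fft.fft2(I)
    FX, FY = np.meshgrid(np.fft.fftfreq(I.shape[1]), np.fft.fftfreq(I.shape[0]))
    lobe = (np.abs(FX - carrier) < bw) & (np.abs(FY) < bw)
    return np.angle(np.fft.ifft2(F * lobe))

def mean_phase_rmse(T, snr, repeats=10, shape=(64, 64), seed=0):
    """Mean wrapped-phase RMSE over `repeats` noisy realisations,
    mirroring the mean-of-20-RMSEs protocol used in the article."""
    rng = np.random.default_rng(seed)
    _, x = np.mgrid[0:shape[0], 0:shape[1]]
    phi = 2 * np.pi * x / T
    errs = []
    for _ in range(repeats):
        I = 1.0 + np.cos(phi) + rng.normal(0.0, 1.0 / snr, shape)
        e = np.angle(np.exp(1j * (demod(I, 1.0 / T) - phi)))
        errs.append(np.sqrt(np.mean(e**2)))
    return float(np.mean(errs))
```

Sweeping `mean_phase_rmse` over a grid of T and SNR values then yields an error map analogous in spirit to Fig. 6.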

Experimental 3D reconstruction of liquid structures
Gathered experimental data and results presented in this article together with fringe pattern 3D reconstruction software can be found at https://spray-imaging.com/fp-lif.html.

Description of the experimental setup
In this experiment, Fluorescein is mixed with water at a concentration corresponding to a 1:100 ratio. In addition to being a non-toxic dye, Fluorescein presents the advantage of a high quantum yield, generating a strong fluorescence signal. The dye is excited at 447 nm wavelength using a 4 W continuous wave collimated diode laser. To create the sinusoidal structure projected on the imaged liquid, a series of optical components are employed, as illustrated in Fig. 7. The beam is first expanded by a factor of 10 to illuminate a 25 mm square Ronchi grating of 10 lp/mm frequency, which imprints a periodic square pattern onto the beam. Then a convex spherical lens of 200 mm focal length focuses the beam into a spatial filter, where all diffraction orders are blocked except the first order spatial frequency beams, which are transmitted. After crossing the spatial filter, the two first order beams diverge, overlap and interfere, creating a perfectly sinusoidal illumination pattern over a long distance. Note that a similar optical arrangement has been used in [40,41]. Images were recorded with a 50 Hz 5.5 megapixel sCMOS camera (Andor, Zyla) mounted with a telecentric lens (Edmund optics 1.0X GoldTL), where the lens is chosen to approximate the orthographic camera assumed in the post-processing triangulation. The camera is placed at θ ≈ −21° from the illumination axis. Each pixel images an area of 6.5 µm x 6.5 µm, which corresponds to s ≈ 154 pixels/mm. The period of the modulation gives T ≈ 16 pixels.

Experimental results
The main experimental result is the 3D reconstruction of a pending drop shown in Fig. 8. To investigate the validity of the proposed methodology, we assume the pending drop to be rotationally symmetric, so that the edge of the structure drawn from the original fringe pattern can be rotated to produce a full 3D structure. In Fig. 9, a comparison is made between cross sections of the reconstructed structure of the drop and a rotationally symmetric version, with an RMSE value of 6.1%. This agreement indicates a good accuracy of the triangulation model and the experimental arrangement for the current imaging conditions. In the model used for the fringe pattern images, Eq. (1), there is a foreground and a background, where the background is simply modelled as noise. For the experimental images a slightly different definition of the background is used: the background consists of areas of the image without clear fringes. These areas would cause large errors in the reconstruction even though they are not just zero-mean noise. However, the definition of what clear fringes look like is unclear, and currently a time consuming manual segmentation is performed to find the foreground. One clear segmentation challenge is visible in the pending drop fringe pattern, where a circular area on the right part of the image has very weak fringes, possibly as a result of an inhomogeneous laser beam. Even though there is liquid here, it is classified as background since the reconstruction is inaccurate for these pixels. Another segmentation challenge is areas with a high local frequency of the fringe pattern that is not resolved, which are also segmented as background. These challenges can be addressed experimentally by changing the parameters T and θ and cleaning the optics, but the problem of foreground segmentation will remain, which means that it is of interest to further understand and possibly automatise this process.

Fig. 10.
A proof of concept example of using the FP-LIF technique on a more turbulent liquid surface structure with a short shutter time of 50 µs. For a 3D visualisation go to https://3d.sprayimaging.com/onion.
Figure 10(a) shows pending drops and their shape evolution prior to break-up, and Fig. 10(b) is an example of FP-LIF applied to a spray system, where the technique is used to reconstruct a turbulent liquid structure in 3D. Here, a liquid sheet is formed at the exit of a pressure-swirl nozzle. Due to the swirling motion, ligaments are formed and break up further downstream into drops. This example illustrates the capability of the FP-LIF technique in extracting a 3D view of the liquid disintegration process from a single snapshot. The experimental fringe pattern is recorded with a short shutter time of around 50 µs, which means that this technique can be used with high speed cameras. A challenge here lies in that shorter exposure times directly reduce the SNR; thus, for higher frame rates it might be necessary to increase the dye concentration and/or the laser power. Given the 3D reconstruction results obtained with the presented FP-LIF technique, it is of great interest to know the limiting spatial resolution of the reconstructions. There are three dimensions that should be handled differently. Firstly, the resolution of the Y dimension for the orthographic case can simply be set to the pixel size, 6.5 µm for the pending drop. The resolutions for X and d are intertwined, since both depend on the image x-dimension in the calculation; in addition, there are limits on the calculation of d originating in the phase demodulation that also propagate to the X dimension. These complex resolution limits of X and d, and their connection to the setup parameters T and θ, have not been further analysed in this work, but they are of great significance for the future success of the FP-LIF technique.

Conclusions
The Fringe Projection - Laser Induced Fluorescence (FP-LIF) technique has been created and developed to enable 3D reconstruction of liquid surfaces from a single experimental image. The technique uses a structured laser beam to illuminate liquids that contain a high concentration of fluorescent dye. This ensures that the laser-induced fluorescence signal originates primarily very close to the illuminated liquid surface. By recording images at an angle, information related to the third dimension is encoded within the phase of the imaged pattern. The phase is then estimated through Continuous Wavelet Transform phase demodulation, and the 3D coordinates are finally obtained by triangulation. The simulated results show that the phase demodulation method used here is capable of reconstructing 3D structures with an RMS error corresponding to only 1.4% of the maximum structure size. The base fringe pattern period length is also found to be optimal between 7−9 pixels for the simulated data. The experimental results provide a proof of concept for using FP-LIF for 3D reconstruction of a slowly moving pending drop. This reconstruction agrees well with a rotationally symmetric approximation of the drop. Finally, the technique has also been applied to a more challenging transient liquid sheet, where a 3D reconstruction of more complex liquid shapes has been demonstrated. Future developments of the technique include increasing the spatial resolution, to image liquid bodies as small as possible, as well as the promising prospect of high-speed 3D imaging of liquid structures in spray systems.
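The demodulation step summarised above can be illustrated with a minimal 1-D sketch: a complex Morlet wavelet is correlated with the fringe signal over a range of scales, and the phase is read off along the ridge of maximum modulus. This is an illustrative toy implementation, not the code used in the paper; the wavelet parameter w0 and the scale range are assumptions.

```python
import numpy as np

def cwt_phase_1d(sig, scales, w0=6.0):
    """Ridge-based CWT phase demodulation of a 1-D fringe signal: at each
    pixel, pick the scale with the largest wavelet-transform modulus and
    return the unwrapped phase of the coefficient on that ridge."""
    sig0 = np.asarray(sig, dtype=float) - np.mean(sig)
    n = len(sig0)
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        m = int(10 * s) | 1                     # odd support, ~±5 scale units
        t = (np.arange(m) - m // 2) / s
        w = np.exp(1j * w0 * t - t**2 / 2) / np.sqrt(s)   # complex Morlet
        # np.correlate conjugates its second argument, so this extracts the
        # analytic (positive-frequency) component of the fringe signal.
        coeffs[i] = np.correlate(sig0.astype(complex), w, mode='same')
    ridge = np.abs(coeffs).argmax(axis=0)       # scale of maximum modulus
    return np.unwrap(np.angle(coeffs[ridge, np.arange(n)]))
```

For a pure carrier of period T the recovered phase grows linearly at 2π/T per pixel; on a deformed fringe pattern, the deviation of the unwrapped phase from this linear carrier term is what the triangulation converts into the depth coordinate d.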

Fig. 1 .
Fig. 1. General description of the FP-LIF approach: Periodic fringes are projected onto a 3D liquid structure containing a fluorescing dye at high concentration. The fluorescing fringes are then imaged at a backward angle θ. By analysing the phase of the modulation from the recorded fringe pattern, a 3D surface reconstruction can be obtained. Here, (a) shows the light intensity along the green line of the recorded fringe pattern, while (b) shows the modulation of the incident fringe projection. The change of frequency between (a) and (b) is induced by the 3D structure of the object surface. Thus, the shift of the modulation peaks between the signal (a) and the reference (b) is connected to the depth coordinate Z, as shown in (c).

Fig. 3 .
Fig. 3. Illustration of orthographic fringe projection and imaging with the FP-LIF technique. The projected light induces fluorescence close to the 3D structure surface (white stripes), connected to a linear global phase in the X direction, Eq. (10). The projection is imaged from an angle θ, where the orthographic camera converts the global coordinates [X, Y, Z]^T into camera coordinates [x, y]^T. The projection can be represented using a camera matrix, Eq. (9). From the global phase and the camera matrix, the expression in Eq. (13) is derived for extracting the third coordinate d from the image phase.

Fig. 4 .
Fig. 4. The simulated drop 3D structure and simulated fringe patterns. The top left image shows the drop structure, where the colour represents d(x, y), and the remaining fringe pattern images were simulated with different base period lengths T and Signal to Noise Ratios (SNR). For a 3D visualisation of the drop go to https://3d.spray-imaging.com/simulated_drop.
The noisy image is rescaled to the range [0, 255] and each pixel value is then rounded to an integer to simulate a recorded image. The static parameters used for all simulations are s ≈ 193 pixels/au, θ = 15° and c = 0, and the simulated images have a resolution of 512×512 pixels.
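The rescaling-and-rounding step described here can be written compactly; a minimal sketch (the function name is ours):

```python
import numpy as np

def quantise_like_camera(img):
    """Rescale a floating-point fringe image to [0, 255] and round each
    pixel to the nearest integer, simulating an 8-bit camera recording."""
    img = np.asarray(img, dtype=float)
    img = (img - img.min()) / (img.max() - img.min()) * 255.0
    return np.rint(img).astype(np.uint8)
```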

Fig. 5 .
Fig. 5. 3D reconstructed cross sections of the simulated drop show good accuracy compared to the true structure. The reconstructions are produced from fringe patterns similar to the ones found in Fig. 4, for all combinations of base period lengths T = 7 and 12 pixels and SNR = ∞, 20 and 2. Both horizontal cross sections (a) and (b) along y_c, and vertical cross sections (c) and (d) along x_c are shown. Here, (b) and (d) show the errors relative to the original structure in detail. In (a), large errors appear mainly close to the left edge, which is connected to a high instantaneous frequency of the pattern that is hard to demodulate. If such large errors were found in the reconstruction of an experimental image, the corresponding areas would be segmented as background.

Fig. 6 .
Fig. 6. Optimization of the period length T for 3D reconstruction of fringe patterns shows an optimal value between 12−15 pixels for the full structure, or 7−9 pixels if the left edge is disregarded. The three maps (full structure, left edge and remaining pixels, as indicated in the top right panel) show mean RMSE values for 3D reconstructions of the simulated drop structure with different combinations of noise level (SNR) and base fringe pattern period T.

Fig. 8 .
Fig. 8. Experimental 3D reconstruction of a pending drop. Panel (a) shows the fringe pattern recorded using the FP-LIF technique, with a transparent overlay of the manual foreground segmentation; only pixels inside this area are reconstructed in 3D. Panel (b) shows the 3D reconstruction from the front, and a slightly rotated version is shown in (c). The 3D reconstructions are rendered with the 3D computer graphics software Blender. For a 3D visualisation go to https://3d.spray-imaging.com/drop.

Fig. 9 .
Fig. 9. The assumption of rotational symmetry for the pending drop is used here to validate the experimental 3D reconstruction in Fig. 8. The blue envelope line in panel (a) is drawn on the edge of the original fringe pattern image, and the distance from the centre of the envelope to the line is used as the radius of the rotationally symmetric drop at each Y coordinate. The illustration is similar to the one found in Fig. 5: both a horizontal (c) and a vertical (d) cross section are compared to the rotationally symmetric structure along y_c and x_c, as indicated in panel (b). Panel (b) also shows the full 3D reconstruction, where the colour indicates the estimated d. Here, x_c is chosen as the calibrated world coordinate X = 0.
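The cross-section comparison described in this caption can be sketched as follows: for a rotationally symmetric drop, a horizontal cross-section of radius r has a front-surface depth d(x) = √(r² − x²). The function names and the percentage normalisation (RMSE relative to the reference maximum) are our assumptions about the comparison, written for illustration.

```python
import numpy as np

def symmetric_depth(radius, x):
    """Front-surface depth of a rotationally symmetric cross-section of
    radius r at lateral offset x: d(x) = sqrt(r**2 - x**2), NaN outside."""
    xa = np.asarray(x, dtype=float)
    d = np.full(xa.shape, np.nan)
    inside = np.abs(xa) <= radius
    d[inside] = np.sqrt(radius**2 - xa[inside] ** 2)
    return d

def rmse_percent(recon, ref):
    """RMSE between a reconstructed and a reference profile, expressed as
    a percentage of the reference maximum; NaN pixels are ignored."""
    m = ~np.isnan(recon) & ~np.isnan(ref)
    return 100.0 * np.sqrt(np.mean((recon[m] - ref[m]) ** 2)) / np.nanmax(ref)
```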

Fig. 10 .
Fig. 10. Examples of experimental images with different shapes. (a) shows the shape evolution of pending drops at different times (t_1, t_2, t_3) until they break up. (b) Experimental 3D reconstruction of a transient liquid sheet, visualised similarly to Fig. 9. This is a proof-of-concept example of using the FP-LIF technique on a more turbulent liquid surface structure with a short shutter time of 50 µs. For a 3D visualisation go to https://3d.sprayimaging.com/onion.