Lagrangian particle tracking in three dimensions via single-camera in-line digital holography

Lagrangian particle trajectories are measured in three spatial dimensions with a single camera using the method of digital in-line holography. Lagrangian trajectories of 60–120 μm diameter droplets in turbulent air obtained with data from one camera compare favorably with tracks obtained from a simultaneous dual-camera data set, the latter having high spatial resolution in all three dimensions. Using the single-camera system, particle motion along the optical axis is successfully tracked, allowing for long, continuous 3D tracks, but the depth resolution based on standard reconstruction methods is not sufficient to obtain accurate acceleration measurements for that component. Lagrangian velocity distributions for all three spatial components agree within reasonable sampling uncertainties and Lagrangian acceleration distributions agree for the two lateral components. An equivalent single-camera, imaging-based 2D tracking system would be challenged by the particle densities tested, but the holographic configuration allows for 3D tracking in the dilute limit. The method also allows particle size, shape and orientation to be measured along the trajectory. Lagrangian measurements of particle size provide a direct measure of particle size uncertainty under realistic conditions sampled from the entire measurement volume.


Introduction
During the last decade there has been rapid progress in the ability to track small particles in three spatial dimensions and in time, thereby opening the way for studies of Lagrangian properties of turbulence and of the dynamics of inertial particles in turbulence [1]-[8]. The ability to visualize the Lagrangian structure of turbulence provides a fundamentally different perspective from the Eulerian one that dominated in previous decades. Furthermore, the dynamics of the particles themselves are a fascinating problem with implications for a host of physical processes, from planet formation to precipitation in clouds via droplet collisions [9,10].
The primary approach for three-dimensional (3D) particle tracking has been the use of multiple (usually four) fast cameras in stereo-view configurations [11,12]. The optical systems provide particle positions at incremental time steps and it then becomes a problem of matching the particles between frames to create Lagrangian tracks; accurate tracking depends on the optical configuration, including appropriate multi-camera calibration, and on algorithms for finding and matching particles based on spatial position and various time derivatives of position [13,14]. In this paper, we discuss an alternative method for obtaining data suitable for 3D particle tracking, based on digital in-line holography. Digital holography has been used for particle tracking in several studies (see section 2) and here we investigate the ability to achieve successful 3D tracking and simultaneous particle size measurement with a single camera (achieved in practice by taking data with two cameras having orthogonal views and ignoring one of them). The resulting depth resolution is sufficient for three-dimensional particle tracking, but not yet for high-precision determination of higher-order derivatives such as acceleration along the depth coordinate. This method has several pragmatic advantages in terms of system complexity and cost, but also provides additional information on particle size (and shape and orientation, if desired) that is valuable in some experiments. The purpose of this paper, therefore, is to introduce the tracking method and to illustrate its utility, not just in theory, but with example measurements from a laboratory turbulence chamber seeded with particles. We attempt to provide enough detail that its relative place in the suite of flow-visualization tools can be evaluated, but a detailed inter-comparison with other methods is not attempted here.
Advantages of the in-line holographic particle-tracking method we discuss here over traditional particle tracking methods can be summarized as follows: (i) The method requires one camera, as opposed to four, so costs are proportionally lower and optical alignment is significantly simpler. (ii) The method requires significantly less laser power because near-forward scattered light is measured, so again the cost is proportionally lower and the technical and safety complexities of working with high-power lasers are avoided. (iii) If a single camera is used, no particle matching between cameras is required, so the alignment of multiple cameras does not have to be considered and one step of data processing is eliminated. In our comparisons we have also found that particle tracks from the single-camera mode are longer, on average, than those from the dual-camera mode. (iv) Digital holography, in addition to providing 3D spatial information, can also provide access to particle size and, when relevant, shape and orientation. This opens up possibilities for studies of size-dependent processes, of particle-particle interactions such as coalescence, or of particle orientation due to local fluid properties (e.g. vorticity and shear).
One result of these factors is that the single-camera, in-line holographic tracking method is well suited to complex measurement environments, such as in atmospheric flows, where it is desirable to minimize system complexity and size. Such flows often have a wide range of particle sizes and holography therefore allows measurement statistics to be conditioned on particle size or shape. The digital holographic particle tracking method also has disadvantages, including computationally intensive digital hologram reconstruction, limitation to relatively low particle number densities and limitation to relatively small sample volumes if high spatial resolution is desired (e.g. if particle size is measured).
The paper is organized as follows. After a brief review of previous work on holographic particle tracking, we describe the two-camera experimental setup with which the data used in this paper were obtained, including the methods for digital hologram reconstruction, for determining particle position and for matching particles between multiple cameras. We then provide an overview of the particle tracking methods used on the same data set for dual-camera and single-camera (where one camera is ignored) modes and compare results for the two methods from Lagrangian tracks of settling particles in a turbulent flow. Finally, we describe results for particle size determination along Lagrangian tracks, summarize the main results of the paper and briefly discuss some future directions and remaining challenges.

Holographic tracking overview
Digital holographic particle tracking velocimetry (HPTV) has been applied to a variety of problems, including flow in a microchannel [15,16], wall boundary layer evolution [17] and microorganism behavior [18,19]. Each of these studies uses fast digital cameras to achieve multi-frame continuous tracking (as opposed to two-frame methods that yield only instantaneous velocity vectors), as well as microscopy to minimize the depth uncertainty. Microscopy has the advantage of a high numerical aperture and therefore a small depth uncertainty, but it is limited to a short working distance, which is impractical for measuring particles in turbulent flows with macroscopic geometries. The tracking methods presented later in this paper are suitable for dealing with the large depth uncertainties of single-camera holographic configurations with a small numerical aperture. A similar experimental setup has been used in recent studies of the motion of large, nearly neutrally buoyant particles in turbulent flows [20]. A recent review of holographic PTV methods and other PIV methods that measure in three dimensions has been published by Arroyo and Hinsch [21].
One of the limitations of digital holography as applied to 3D tracking has been the relatively poor depth resolution, which is in part inherent to the method due to the low numerical aperture of typical implementations. There has been some progress in overcoming the poor depth resolution [22]-[24] that makes tracking with a single camera more robust. A hybrid method, apparently first suggested by Sheng et al [25], is to use two holographic stereo views of the same volume, recorded by one camera via a folding mirror, such that the poorly constrained depth dimension of one view corresponds to a well-constrained lateral dimension of the other view and vice versa. In this work, we have implemented a similar two-view holographic system, but with each view recorded by a separate camera, as a reference to evaluate the performance of single-camera particle tracking. The dual-camera system allows us to determine particle tracks with sufficiently high precision to obtain particle accelerations as well as velocities, with very high tracking efficiency (a small number of incorrect links between frames relative to the total number of links in all of the tracks). A unique aspect of this study is that we use the high-accuracy dual-camera tracks to evaluate the performance of simultaneous single-camera particle tracking. Holography can also yield particle size, shape and orientation, all of which are of potential interest in some tracking studies [12,19,26]. In this study, we use the continuous tracking capability to evaluate the precision of particle sizing, because we can assume that the true particle size does not change along a track owing to the very short measurement times. Once the particle size precision is determined, the particle size itself can be used to identify possible tracking errors (or other events of interest, such as droplet coalescence).

Experimental system and hologram analysis
We begin by describing the experimental system for obtaining holograms of particles in turbulence. Subsequently we describe how holograms are reconstructed, how particle positions in individual holograms are estimated, how particles are matched between the two cameras and how particles are tracked from frame to frame in time.

Experimental system
The measurements presented here were taken in the turbulence chamber shown schematically in figure 1. The chamber is a 50-cm-side cube with a speaker-driven jet at each vertex, inspired by the design of Hwang and Eaton [27]. The jets are driven with random, uncorrelated signals so as to generate turbulence that is approximately homogeneous and isotropic, although it should be pointed out that the data presented here place no special requirements on the flow. The chamber is filled with air and, at the speaker amplitudes used in this study, the turbulence in the measurement volume has a root-mean-square velocity fluctuation of u_rms = 0.6 m s⁻¹ and an energy dissipation rate of ε = 1.4 m² s⁻³, determined via laser-Doppler velocimetry measurements (also consistent with the second-order longitudinal velocity structure function obtained from the tracking measurements). The corresponding Taylor microscale is λ_T = (15 ν u_rms²/ε)^(1/2) = 7 mm, where ν is the kinematic viscosity of air, giving a Taylor-microscale Reynolds number of R_λ = λ_T u_rms/ν = 260. The Kolmogorov length and time scales are η = (ν³/ε)^(1/4) = 220 µm and τ_η = (ν/ε)^(1/2) = 3.3 ms. Each vertical side of the chamber has an optical-quality window, with two serving as the entrance windows for the collimated laser beam and two serving as the camera view windows. At the top of the chamber, water droplets from a vibrating-orifice aerosol generator (TSI Model 3450) are allowed to settle into the turbulent flow. Relatively narrow droplet size distributions with mean diameters ranging between approximately 10 and 300 µm can be produced, depending on the orifice diameter. In this experiment a 27 µm orifice was used with the orifice vibration turned off, producing droplets of approximately 60-120 µm diameter in the chamber.

Figure 1. A top view of the experimental setup as well as the camera coordinate systems referred to in the text. The optical setup consists of a pulsed laser, spatial filter, collimating lens, beam splitter, two mirrors and two high-speed cameras (numbered 1 and 2). Camera 1 was ignored for tracking in single-camera mode. The turbulence chamber is a cube with eight speakers, one in each corner. The particle generator is directly above the common volume sampled by both cameras; this common volume is located at the center of the turbulence chamber. The possible particle locations as estimated by each camera are illustrated as gray prolate spheroids, with white dots representing a possible best estimate. A black particle is at the actual particle position, which corresponds to the intersection of the two gray prolate spheroids (within the accuracy of the calibration of the cameras relative to each other). The coordinate system for each camera is left-handed, simply for convenience so that +y is vertically up and +z is the depth coordinate.
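As a numerical check, the quoted turbulence scales follow directly from u_rms and ε; the sketch below recomputes them, with the kinematic viscosity of air (ν ≈ 1.5 × 10⁻⁵ m² s⁻¹) an assumed round value not stated in the text.

```python
# Recompute the quoted turbulence scales from u_rms and epsilon.
# nu is an assumed round value for air; small differences in nu account
# for small differences from the quoted numbers.
u_rms = 0.6      # m/s, rms velocity fluctuation
eps = 1.4        # m^2/s^3, energy dissipation rate
nu = 1.5e-5      # m^2/s, kinematic viscosity of air (assumed)

lambda_T = (15 * nu * u_rms**2 / eps) ** 0.5   # Taylor microscale, ~7-8 mm
R_lambda = lambda_T * u_rms / nu               # Taylor-scale Reynolds number
eta = (nu**3 / eps) ** 0.25                    # Kolmogorov length, ~220 um
tau_eta = (nu / eps) ** 0.5                    # Kolmogorov time, ~3.3 ms
```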
The optical system consists of a frequency-doubled, 527 nm, 50 mW Nd:YLF Q-switched laser pulsed at 2600 Hz, with a pulse duration of approximately 20 ns. The beam is focused by a 10× magnification microscope objective, passed through a 15 µm pinhole and collimated with a 75 mm diameter, 1 m focal length lens. The collimated beam is about 50 mm in diameter. The beam is then passed through a beam splitter and guided by mirrors through the center of the chamber and then directed onto two CMOS cameras. The volume over which the two camera views overlap is 17.6 × 13.2 × 17.6 mm³. The cameras are Phantom v7.1 cameras with 800 × 600, 22 µm pixels and 1 µs exposure time. These cameras are capable of 6688 frames per second but we ran them at 2600 Hz, sufficiently fast for the particle velocities measured in this experiment. A single continuous run of 8800 frames (about 3.4 s) was collected using both cameras.

Obtaining particle positions from holograms
The digitally recorded holograms are reconstructed by computing the diffraction pattern at different distances z from the hologram plane. Prior to reconstruction the holograms are processed to eliminate large-scale intensity variations by setting the lowest spatial-frequency components to zero, via a high-pass filter in the Fourier domain [28]. Given a sufficiently clean optical setup this type of background noise removal is sufficient to prepare the holograms for reconstruction. The holograms are then numerically reconstructed using the convolution method in Fourier space, as summarized by Fugal et al [29] and similarly used by Kim and Lee [16]. This method has the advantage of uniform sample spacing, as pointed out by Kreis [30], and is computationally efficient [16,29]. We reconstruct each hologram at 100 µm depth (z-coordinate) intervals.
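A minimal sketch of this reconstruction step is given below using the angular-spectrum kernel, one common form of the Fourier-space convolution propagation; it is offered as an illustration under these assumptions, not as the authors' exact implementation, and the function name and arguments are hypothetical.

```python
import numpy as np

def reconstruct(hologram, z, wavelength, dx):
    """Propagate a (background-filtered) hologram to depth z using the
    angular-spectrum kernel; a sketch of Fourier-space convolution
    reconstruction, not the authors' code.

    hologram : 2D array (intensity) at the hologram plane
    z        : reconstruction distance from the hologram plane [m]
    wavelength, dx : laser wavelength and detector pixel pitch [m]
    """
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)   # spatial frequencies for pixel pitch dx
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(1j * (2 * np.pi / wavelength) * z * np.sqrt(np.maximum(arg, 0.0)))
    kernel[arg < 0] = 0.0           # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(hologram) * kernel)

# A reconstructed volume is built by stepping z in 100 um increments, e.g.:
# planes = [np.abs(reconstruct(h, z, 527e-9, 22e-6)) for z in np.arange(0, 5e-3, 100e-6)]
```

The uniform sample spacing noted in the text is visible here: the output grid is the same as the input grid for every depth z.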
The number of particles in the common sample volume varies between 0 and 20, with a mean of 5 and a standard deviation of 3 according to the dual-camera configuration. This is an order of magnitude below the maximum number of 80-100 µm diameter particles that a hologram of this size can contain before individual particles are no longer resolvable [31].
Each particle is found by applying an intensity threshold to the entire 3D reconstructed volume and joining neighboring voxels (volume elements) into groups that contain a particle reconstruction. These reconstructed particles have the shape of highly elongated prolate spheroids, with an equatorial radius (or minor axis) equal to the radius of the particle (or the diffraction spot size, whichever is larger) and a length that is a function of the depth position uncertainty for a particle of that size (shown schematically in figure 1). The depth position is determined by constructing a maximum-intensity profile as a function of depth along the optical axis. The profile is obtained by taking the maximum intensity of all the pixels in the voxel group at each depth position.
Several local maxima (typically two peaks) appear in this intensity-versus-z profile, whose spacing and shape depend on particle size, detector properties and noise in the hologram [23]. The mean depth of these two peaks represents a best estimate of the position of the particle [23]. Where a peak appears as a superposition of peaks, the mean of those peak positions is used to find the centroid of the combined peak. The theoretical depth uncertainty scales as δ_z ∼ d²/λ, where d is the particle diameter and λ is the wavelength [24], [32]-[34]. In our case of d ∼ 80 µm particles and λ = 527 nm, this theoretical uncertainty is approximately δ_z ≈ 10 mm. As discussed later, we find that our z-estimation method yields a standard deviation of δ_z ≈ 500 µm when comparing the dual-camera measured positions with a single camera's reconstructed depth position (see section 4.2).
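The depth estimate from the maximum-intensity profile, together with the d²/λ scaling, can be sketched as follows. This is a deliberately simplified illustration: the peak-merging and noise handling of the real pipeline are omitted, and the function name and arguments are hypothetical.

```python
import numpy as np

def depth_estimate(voxels, intensities, z_coords):
    """Estimate particle depth from a reconstructed voxel group (sketch).

    voxels      : (N, 3) integer voxel indices (ix, iy, iz)
    intensities : (N,) reconstructed intensity at each voxel
    z_coords    : 1D array mapping depth index iz -> physical depth
    Builds the maximum-intensity-vs-z profile and returns the mean z of
    the local maxima, as described in the text.
    """
    profile = np.zeros(len(z_coords))
    for k, inten in zip(voxels[:, 2], intensities):
        profile[k] = max(profile[k], inten)     # max over pixels at each depth
    peaks = [k for k in range(1, len(profile) - 1)
             if profile[k] > 0
             and profile[k] >= profile[k - 1] and profile[k] > profile[k + 1]]
    return z_coords[peaks].mean() if peaks else None

# Theoretical depth uncertainty, delta_z ~ d^2 / lambda:
d, lam = 80e-6, 527e-9
delta_z = d**2 / lam    # ~12 mm, the ~10 mm order of magnitude quoted above
```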
In addition to the depth position we must estimate the lateral particle position, because the particle images cover several pixels. The particles used in this experiment are water droplets and can therefore be assumed to be round. For simplicity we use a weighted-averaging algorithm: a threshold is applied to the hologram reconstruction at the best estimate of particle depth position z_0 and the resulting group of pixels represents the individual particle image. The center of the particle is determined by averaging the positions of its pixels weighted by their gray-level intensity values. If g(x, y) represents the pixel gray value at position (x, y), the x- and y-coordinates of the particle center are given by

x_c = Σ g(x, y) x / Σ g(x, y),   y_c = Σ g(x, y) y / Σ g(x, y),

where the sums run over all pixels in a group. The resulting lateral position resolution can be taken to be twice the standard deviation of the difference between raw and smoothed particle tracks, with resulting standard deviations of δ_x = δ_y = 2 µm.
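In code, the intensity-weighted centroid is a one-liner; the sketch below is illustrative (the function name and argument layout are assumptions).

```python
import numpy as np

def weighted_centroid(pixels, gray):
    """Intensity-weighted centroid of a thresholded particle image.

    pixels : (N, 2) array of (x, y) pixel coordinates in the group
    gray   : (N,) gray values g(x, y) of those pixels
    Implements x_c = sum(g*x)/sum(g), y_c = sum(g*y)/sum(g).
    """
    w = gray / gray.sum()
    return pixels[:, 0] @ w, pixels[:, 1] @ w
```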

Camera position calibration and particle matching
In this section, we discuss our method of determining how the two cameras are positioned relative to each other (camera position calibration) and how we determine which particle in one camera's view is the match to a particle in the other camera's view (particle matching). In each hologram the depth position uncertainty is much greater than the lateral position uncertainty, resulting in position confidence contours in the shape of a prolate spheroid. The two cameras see the same set of particles in the common sample volume, but the poorly resolved depth coordinates are orthogonal, as shown in figure 1. We calibrate the camera positions by searching for common particles between the two camera views in the neighborhood of the overlapping sample volume. The calibration precision is improved by matching many particles in many holograms. Once the relative position of the two cameras is calibrated, particles in the two camera views are matched by searching in a volume related to the particle position uncertainties (e.g. approximate dimensions 2δ_z × 2δ_z × 4δ_y), wherein the counterpart particle should appear.
In the rest of this section, we outline the algorithm with which we match particles between the two camera views.

Camera position calibration.
Stereo methods require careful and somewhat tedious calibration of the relative positions of the cameras, usually accomplished by using very precise position measurements or by imaging common objects of known size and spatial separation [35]. Here, we use a more convenient method and simply use the particles that we intend to track to calibrate the camera positions. We match particles between the two camera views using the two-frame match-probability tracking algorithm described by Baek and Lee [36]. Instead of matching two frames separated in time, we match two frames separated in space; and instead of the spatial offset representing particle motion, the spatial offset is the offset in calibrated position between the two cameras (referred to here as the 'pseudo-displacement'). We begin camera calibration with an initial estimate of the relative positions (and uncertainties in those positions) of the two cameras, based on straightforward direct measurements in the lab. Then, using the particle positions obtained from hologram reconstruction (see section 3.2), we search for possible matches between the two camera views. The particle positions in camera 2 are transformed to the coordinate system of camera 1 using the initial guess. This is accomplished by using the affine coordinate transformation

x^(1) = z^(2) + C_x,   y^(1) = y^(2) + C_y,   z^(1) = x^(2) + C_z,   (2)

where C_x, C_y and C_z are the estimated x-, y- and z-coordinates of camera 2 in camera 1's coordinate system, and the permutation of axes reflects the orthogonal views (the depth axis of each camera lies along a lateral axis of the other). It is assumed that the effect of rotations about the other axes of camera 2 and of deviations of the two camera views from orthogonality are negligible (an initial optical alignment in the laboratory allows this). In searching for particle matches, we are careful to account for the relatively large depth uncertainty in the other camera's view. The Baek and Lee algorithm requires that any single particle's neighbors do not move much relative to each other in the second frame.
Since we are comparing two measurements at the same instant in time, the neighboring particles should not move at all relative to each other, except for apparent movement due to position estimation error. The Baek and Lee algorithm requires that the particles have a maximum displacement, which limits the extent of the search region for matching particles in the second frame. Rather than searching in a spherical volume as in the standard Baek and Lee algorithm, we search in a rectangular-prism-shaped volume (defined by a maximum pseudo-displacement in each rectilinear component). In our case, the maximum pseudo-displacement is found by taking into account the uncertainty in the initial guess and assuming a worst-case scenario for the position measurement errors of the same particle in both cameras. Initially we do not know in which direction one camera is offset relative to the other, but we know that all the particles move together, and so we require that particle pseudo-displacements (the vector difference of a matched pair of particle positions, one from each camera) all point in approximately the same direction. Two pseudo-displacements are considered similar if their vector difference is less than a maximum difference (equivalent to T_q in the Baek and Lee algorithm). We define the maximum allowable difference between the particle pseudo-displacements, in each rectilinear component, as the maximum possible difference between two correct pseudo-displacements (ones for which the matches between the cameras are good) based on worst-case position uncertainties.
Baek and Lee show that the number of incorrect matches grows as the ratio of the average inter-particle distance to the maximum allowable displacement between frames decreases, especially when that ratio is less than one. This is the case here, given the large uncertainty in the guessed camera positions and in the depth positions of the particles. To remove these false matches, we require that every possible match have a minimum number of neighbor particles that also have matches with a similar pseudo-displacement. This minimum number M_n is found using a trial-and-error algorithm. It depends on both the neighborhood threshold in the Baek and Lee algorithm and the particle density. We have observed that requiring too many close neighbor matches (high M_n) results in too few matches and greater uncertainty in the position calibration. On the other hand, requiring too few (low M_n) results in including many false matches and degrades the accuracy of the position calibration. Therefore, M_n is found by a linear search algorithm (an algorithm that iterates through successive values of M_n) that primarily tries to keep the approximate width of the distribution of the y-components of the pseudo-displacements less than the lateral uncertainty in particle positions (the standard deviation of the y-components of the pseudo-displacements is required to be less than √2 times the standard deviation of the estimated y-error distribution from holography) and secondarily tries to obtain as many matches between the cameras as possible. We calibrate the camera positions using a series of reconstructed holograms containing few particles on average. Because of strong fluctuations in the number of particles throughout the series, the frames are grouped into blocks for which a corresponding M_n and position calibration are calculated. Because of the sparseness, the chances of particle coincidences causing matching errors are extremely small.
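The neighbor-support filter for rejecting false matches might be sketched as follows. This is a simplified stand-in: the spatial-neighborhood restriction and the linear search for M_n are omitted, and the function name and arguments are assumptions.

```python
import numpy as np

def filter_matches(pseudo_disp, max_diff, m_n):
    """Keep candidate matches whose pseudo-displacement is shared by at
    least m_n other candidates, componentwise within max_diff (the
    rectangular, not spherical, similarity criterion from the text).

    pseudo_disp : (N, 3) pseudo-displacement vectors of candidate matches
    max_diff    : (3,) maximum allowed componentwise difference
    m_n         : minimum number of supporting candidates
    Returns indices of the surviving candidates. Sketch only.
    """
    keep = []
    for i, pd in enumerate(pseudo_disp):
        # count other candidates with a 'similar' pseudo-displacement
        support = int(np.sum(np.all(np.abs(pseudo_disp - pd) < max_diff, axis=1))) - 1
        if support >= m_n:
            keep.append(i)
    return keep
```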
The correction to the initial guess of the relative positions of the cameras is taken to be the mean pseudo-displacement after the above calibration algorithm has been applied to all data blocks. Gaussian statistics were used to estimate an uncertainty in this mean pseudo-displacement. Using Gaussian statistics is safe as long as the mean is found using a very large number of pseudo-displacements and their distributions are reasonably Gaussian. The uncertainty along the optical axis, which is the largest, is discussed in section 4.2 and is close enough to Gaussian for our purposes. The uncertainty is obtained by multiplying the expected Gaussian uncertainty by a constant somewhat greater than unity (4 for x and z, and 10 for y). This constant takes into account tracking errors and deviations of the real distribution of pseudo-displacements from the assumed Gaussian form, and is determined via simulations of matching of randomly spaced particles.

Particle matching.
The matches found by the camera position calibration algorithm could in principle be used to determine which particle in one camera corresponds to which particle in the other, but that algorithm is too conservative for this purpose: requiring a minimum number of neighbors with a similar pseudo-displacement eliminates incorrect matches, but at the same time eliminates many correct ones. A type of nearest-neighbor algorithm is used instead.
Using the camera position calibration (the initial estimate corrected with the mean pseudo-displacement found in section 3.3.1), the particle positions in camera 2 are transformed to the coordinates of camera 1 in the same way as in section 3.3.1. A worst-case scenario, in which particle positions deviate from their real positions by the maximum of their uncertainty (both the uncertainty from the hologram reconstruction and the uncertainty in the camera position calibration, discussed in sections 3.2 and 3.3.1), provides the maximum vector separation allowed for possible matches. The maximum vector separation is used to normalize the vector difference in position for all possible matches (so that the uncertainties in each direction are scaled to be approximately equal). The possible match with the smallest normalized separation (magnitude) is taken to be a match; all other possible matches involving either one of the matched particles are then eliminated. This is repeated until all possible matches are either made into matches or eliminated. Each match is taken to describe a single particle seen by both cameras. This particle's x-, y- and z-coordinates are taken to be, respectively, the x-coordinate of the particle in camera 1, the average of the y-coordinates of the particle in both cameras and the x-coordinate of the particle in camera 2. A record of all the matches (which particle in camera 1 matches which particle in camera 2 and the resulting common particle seen by both cameras) is kept for use in section 4.2, so that it may be determined which tracks found while ignoring one camera correspond to tracks found using information from both cameras.

We track particles in the dual-camera (stereo) mode, as well as in the single-camera mode, and then compare the results. The dual-camera mode is taken as the reference standard for particle tracks due to its higher spatial resolution in all three spatial dimensions.
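The greedy, normalized nearest-neighbor matching between the two views can be sketched as follows. This is a hypothetical minimal implementation, not the authors' code; the function name and arguments are illustrative.

```python
import numpy as np

def match_particles(p1, p2, max_sep):
    """Greedy normalized nearest-neighbor matching between camera views.

    p1, p2  : (N1, 3), (N2, 3) particle positions, both in camera 1's frame
    max_sep : (3,) maximum allowed separation per axis (worst-case errors)
    Separations are normalized by max_sep so the axes weigh equally; the
    pair with the smallest normalized separation is matched first, both
    particles are removed, and the process repeats until no admissible
    pairs remain. Sketch only.
    """
    diffs = (p1[:, None, :] - p2[None, :, :]) / max_sep
    admissible = np.all(np.abs(diffs) <= 1.0, axis=2)
    dist = np.linalg.norm(diffs, axis=2)
    dist[~admissible] = np.inf
    matches = []
    while np.isfinite(dist).any():
        i, j = np.unravel_index(np.argmin(dist), dist.shape)
        matches.append((int(i), int(j)))
        dist[i, :] = np.inf      # remove both particles from further matching
        dist[:, j] = np.inf
    return matches
```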
We can be confident that the tracks obtained with the dual-camera system are reliable because we have implemented stereo particle-tracking algorithms that have been shown to be reliable [13,14,35,36]. Furthermore, turbulence statistics and velocity structure functions obtained from the tracks are mutually self-consistent and in agreement with independent estimates. By comparison, the single-camera system obtains accurate tracks in the two lateral dimensions, but in the depth dimension the tracking is more difficult. As a result, we sometimes refer to this as '2.5D' tracking: the depth resolution available in the single-camera mode is sufficient to track in the third dimension (unlike a single camera operating in imaging mode), but with limited spatial resolution in that dimension. In our implementation it has been possible to obtain velocity statistics in the depth (z) dimension, but not higher-order derivatives such as acceleration.
In spite of this limitation, single-camera tracking allows for a very simple operating geometry, low required laser power and no need for the multi-camera calibration that can be troublesome in complex measurement environments. Another benefit is that it provides a longer usable depth (z) range than the dual-camera mode, and thereby a larger sample volume within the fluid under study. In what follows we describe the single- and dual-camera tracking methods and then make a detailed comparison of the two methods based on measurements of settling, inertial particles in a turbulent flow.

Particle tracking
Once particles have been found, their positions estimated and, for tracking in dual-camera mode, matched between cameras, they can be tracked between consecutive frames. Particle tracking velocimetry (PTV) in flows with relatively low particle number densities has been developed quite extensively, both for 2D and 3D geometries. There are several methods for particle tracking, including algorithms based on nearest neighbor, three-frame minimum acceleration and four-frame minimum change in acceleration [13,14,35]. In our dual-camera experiment we adopted the relatively simple three-consecutive-frame tracking method, which is implemented as follows. First, particle positions in frames n − 1 and n are selected; the predicted location in frame n + 1 is then estimated by linear extrapolation,

x̂_{n+1} = 2x_n − x_{n−1},

and a search volume is centered on this estimate. If one particle is found there, the link is established (a link is defined as a section of a track from a particle in one frame to a particle in the next). If more than one particle is found in the search volume, all possible tracks between frame n − 1 and frame n + 1 are built; the acceleration is then calculated for each possible track and the trajectory with the minimum acceleration is selected [13]. (The first link of each track is determined using a simple nearest-neighbor condition.) Use of the minimum-acceleration method presumes that the camera frame rate is sufficiently high that measured turbulent velocities vary little between frames. For turbulent flow this means that the time between frames τ_sample must be a fraction of the Kolmogorov time scale τ_η [14]. For our system configuration τ_η/τ_sample ≈ 10, so this condition is met. For the single-camera tracking, where one camera is ignored, we also use the three-frame minimum-acceleration tracking algorithm, but some procedures are modified to deal with the different lateral versus depth resolutions.
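The three-frame linking step can be sketched as follows, using the standard linear-extrapolation predictor for the minimum-acceleration criterion; the function name and arguments are illustrative, not the authors' code.

```python
import numpy as np

def extend_track(x_prev, x_curr, candidates, radius):
    """Three-frame minimum-acceleration link (sketch).

    Predict the frame n+1 position by linear extrapolation,
    x_pred = 2*x_n - x_(n-1), then among candidates inside the search
    radius pick the one minimizing the discrete acceleration
    |x_(n+1) - 2*x_n + x_(n-1)| (i.e. the distance to x_pred).
    Returns the index of the chosen candidate, or None if the track ends.
    """
    x_pred = 2 * x_curr - x_prev
    d = np.linalg.norm(candidates - x_pred, axis=1)
    inside = np.where(d <= radius)[0]
    if inside.size == 0:
        return None
    return int(inside[np.argmin(d[inside])])
```

Minimizing the distance to the extrapolated position is equivalent to minimizing the magnitude of the discrete second difference, i.e. the acceleration, over the three frames.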
The single-camera method provides the x and y positions of particles accurately, but with rather large uncertainty in the z positions, as discussed in section 3.2. The prolate-spheroid-shaped particle intensity profiles are often about 1 mm long in the z-direction. Because of this we rely more on the x and y positions, and track particles in x-y space using the minimum-acceleration method: the x-y positions of the particle in frames n − 1 and n are used to estimate the x-y particle position for frame n + 1, and so on. The z uncertainty, however, is much larger than the typical displacement of the particle between two consecutive frames. Therefore, we use the average of the z positions of the particle in frames n − 1 and n to estimate the center of the particle search volume for frame n + 1. For our experimental configuration the typical particle displacement is u_rms τ_sample = 160 µm and the depth uncertainty is δ_z ≈ 1 mm (cf section 3.2). We set the z search neighborhood for frame n + 1 to several times δ_z, but this is a rather loose condition. Figure 2, panel (a), shows a typical track. The red ×'s represent the particle positions in consecutive frames before fitting (described below). It is clearly observed that the z position is scattered along the track as a result of the z depth uncertainty.
Once a track has been identified, it is possible to use adjacent estimates of the z-position to improve the overall position estimate. This is accomplished by fitting a smooth curve to the z-position versus time. We select the LOESS method (locally weighted scatterplot smoothing, using least squares fitting of a second-degree polynomial) [37]. The measured z-positions are divided into local neighborhoods and a smooth curve is fitted to them via local regression. We found that most z-positions deviate from the fitted curve by less than 1 mm, a promising trend (e.g. see figure 2). However, there are a significant number of outliers with z-positions scattered far from the fitted curve. This is due to noise in the reconstructed image, which causes the peak finding algorithm used to refine the z-position estimate to fail. In order to prevent these outliers from adversely affecting the overall fit, we exclude them from the fitting algorithm via the use of 'robust weights' given by a bisquare function. Figure 2 shows a typical fitting result; the resulting tracks are compared to the dual-camera tracks in the following section.
The ability to track particles improves as the mean particle displacement between frames becomes small compared to the mean inter-particle spacing. For a three-dimensional system this can be expressed through the ratio α_3D = λ_3D/r, with λ_3D ≈ (1/n)^{1/3} and r = √3 u_rms τ_sample. For our system we estimate α_3D ≈ 34, with number density n = N_3D/V_3D based on the observed mean number of particles N_3D in the sample volume V_3D defined by the overlapping laser beams. The 3D tracking results, therefore, are in the highly dilute limit, minimizing the potential for tracking errors. The success of single-camera tracking, however, must be compared to 2D image-based particle tracking, so we define α_2D = λ_2D/r, where λ_2D ≈ (1/n_2D)^{1/2} and r is corrected to account for two rather than three velocity components. It must be kept in mind that the 2D particle density n_2D = N_2D/A_2D is based on the observed mean number of particles N_2D in the camera cross section A_2D, and that N_2D = n A_2D L is determined by the image cross section and the full depth L of the chamber containing particles. For our system, with a depth of 50 cm, we approach α_2D ∼ 1, a significantly more challenging tracking scenario. Furthermore, it should be noted that α_3D and α_2D are further reduced when one considers the typical, positively-skewed counting fluctuations, which result in number densities significantly above the mean. Ultimately, this means that the tracking results obtained here are in the dilute limit for 3D tracking, but not for an equivalent 2D tracking system.
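The two dilution ratios can be computed directly from the definitions above. In this sketch the function names are ours, and the number density in the usage comments is purely illustrative, since the paper reports only the resulting ratios, not n itself.

```python
import numpy as np

def alpha_3d(n, u_rms, tau_sample):
    """Dilution ratio for 3D tracking: mean inter-particle spacing
    lambda_3D = (1/n)^(1/3) divided by the mean 3D displacement
    between frames, r = sqrt(3) * u_rms * tau_sample."""
    lam = n ** (-1.0 / 3.0)
    dr = np.sqrt(3.0) * u_rms * tau_sample
    return lam / dr

def alpha_2d(n, depth_L, u_rms, tau_sample):
    """Dilution ratio for projected (2D image) tracking: all particles
    through the full chamber depth L are superposed on the image
    plane, so the areal density is n_2D = n * L, and the displacement
    is corrected for two rather than three velocity components."""
    n2d = n * depth_L
    lam = n2d ** -0.5
    dr = np.sqrt(2.0) * u_rms * tau_sample
    return lam / dr

# Illustrative usage (n = 1e6 m^-3 is an assumed value, not measured):
# alpha_3d(1e6, 0.43, 3.75e-4) gives a ratio of order 30, well dilute;
# alpha_2d(1e6, 0.5, 0.43, 3.75e-4) is far smaller, showing how the
# depth projection makes the equivalent 2D tracking problem harder.
```

The key point the sketch makes quantitative is that projecting a 50 cm deep volume onto one image plane collapses the spacing from λ_3D to λ_2D, driving the 2D ratio toward unity.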

Evaluation of single-camera tracking in three dimensions
We can find 3D particle tracks using positions obtained with a single camera, using averaging and fitting to compensate for the depth (z-position) uncertainty. The dual-camera method provides 3D particle tracks with very high precision and can therefore be taken as a standard for comparison. Here, we evaluate how closely single-camera tracking compares to dual-camera tracking under realistic conditions with particles in a turbulent flow. We achieve this by first identifying common particles that are tracked in both single- and dual-camera configurations, using the record of the matches between the two cameras found in section 3.3.2, and then by checking how many of the particle links between frames differ.
To begin with, figure 3(a) shows a comparison of the two types of tracks in the common sample volume, just for the lateral tracking of the single-camera method (red dots are 'true' positions from the dual-camera 3D tracking and blue circles are positions from the single-camera tracking), and it can be seen that the tracks are very well matched. The fully three-dimensional tracks are shown in figure 3(b), where it can be seen that the z-component shows larger deviation because of the depth uncertainty of the holographic reconstruction. This can be visualized more effectively in the animation shown in figure 4, which shows the formation of six individual particle tracks with particle position estimates from each camera alone, as well as the dual-camera, matched-particle position estimates. The animation clearly shows the relatively poor depth resolution compared to the lateral resolution for each single-camera view, although it should be noted that the displayed depth estimates are those prior to implementation of the smoothing algorithm.

Figure 4. (Available from stacks.iop.org/NJP/10/125013/mmedia.) An animation of six particle tracks, illustrating the relative particle position accuracy of the single-camera and dual-camera configurations. For each track, first an overhead (x, z) view is displayed (one camera measuring along the x-direction and one along the z-direction). Then for each track the sample volume is rotated to allow the track's 3D shape to be visualized. The blue and black dots represent the position estimates from each camera alone and the red dot is the position measurement using both cameras. Thus the blue and black dots jog up and down or side to side within the uncertainty of their depth positions while the red dot is relatively steady (note that the depth position estimates are those prior to smoothing).

Table 1. The number of links obtained by the same tracking method applied to the same particles appearing in the common sample volume, for the dual-camera and single-camera configurations. 'Common links' are links made identically for both dual- and single-camera configurations, whereas 'Conflicting links' are links for which the two configurations agree on the particle in either the first or the second frame but disagree on the other. We note that the single-camera tracking mode provides more links and fewer broken tracks because fewer detections are needed to obtain a particle position.

It is also worth noting that the accuracy of the z-position estimate resulting from the holographic reconstruction, and the subsequent smoothing and fitting, can be improved to some extent by increasing the camera sampling rate. Typically the time between frames for a turbulent flow is a fraction of the Kolmogorov time scale [14], but oversampling would allow multiple depth measurements to be averaged. The maximum benefit of such oversampling would result if the depth estimates could be considered independent, but that is a question left for future studies. The relative performance of single-camera and dual-camera tracking is summarized in table 1. Two classes of tracks are compared: those with a minimum length of 3 frames, in order to look at total tracking efficiency, and those with a minimum length of 10 frames, in order to focus on the long tracks useful for Lagrangian studies. This comparison is made using only the common sample volume corresponding to the dual-camera system (cf figure 1). As seen in table 1, more links are made in the single-camera configuration because the dual-camera configuration requires that particles be detected in, and matched between, both cameras. Particles that appear to be missing in either camera will cause tracks to break or have missing links in dual-camera mode.
The table also shows the number of common links (links made identically by both configurations) and conflicting links (links where both configurations agree on the particle in either the first or second frame but disagree on the particle in the other frame). The large number of common links and the small number of conflicting links demonstrate that the single-camera mode tracks in a very similar way to the dual-camera mode. It should be noted that, in addition to the higher rate of tracking, many more tracks would be made if we considered the larger sample volume available in single-camera mode.

Figure 5. Distribution of particle depth-position errors before (red) and after (blue) the smoothing algorithm is applied. The standard deviation before fitting is δz = 540 µm and after fitting is δz = 240 µm. The error is determined by comparing the z-component of the position vectors from single-camera tracks to those from dual-camera tracks.
We can also measure how accurate the depth positions from a single camera are by using the second camera to objectively measure the 'true' depth position. To our knowledge this is the first direct measure of the depth (z-position) accuracy of in-line holography from independent optical measurements. The more common test of depth position accuracy is to reconstruct images of particles deposited on a planar glass slide, fit a line to the particle position estimates and consider the scatter about that line [22,23]. The distribution of deviations between the single-camera depth position (not yet fitted) and the corresponding matched-particle position from the dual-camera configuration is shown in figure 5. We find that the bias (mean deviation) is 47 µm and the standard deviation is 540 µm. After the single-camera depth positions are smoothed using the aforementioned algorithm, the distribution becomes much narrower, as shown in figure 5, and the standard deviation falls to 240 µm.

Comparison of Lagrangian flow properties
The tracking procedure establishes links between particle positions in consecutive frames, and these links can be used to calculate the Lagrangian particle velocity and acceleration. Velocity and acceleration components in x and y are obtained after applying a linear least-squares fit with a second-degree polynomial for smoothing, in order to reduce noise (some type of smoothing is standard in most tracking methods [14]).

Figure 6. Distributions of the three velocity components for the coordinate system of the camera used for single-camera tracking. The red curve is for the dual-camera mode and the blue curve is for the single-camera mode. The two tracking methods compare favorably for all three components.
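Differentiating a smoothed track component in this way might be implemented as follows. This is a sketch: the 11-point window is an assumed value for illustration, not the authors' filter length.

```python
import numpy as np

def velocity_acceleration(t, x, window=11):
    """Estimate Lagrangian velocity and acceleration along one track
    component by sliding a second-degree least-squares polynomial over
    the positions; the first and second derivatives of the local fit,
    evaluated at the window center, give v and a.  Endpoints where the
    window does not fit are left as NaN."""
    half = window // 2
    v = np.full(len(t), np.nan)
    a = np.full(len(t), np.nan)
    for i in range(half, len(t) - half):
        s = slice(i - half, i + half + 1)
        # Fit x(t) ~ c2*dt^2 + c1*dt + c0 with dt centered on t[i].
        c2, c1, c0 = np.polyfit(t[s] - t[i], x[s], 2)
        v[i] = c1            # dx/dt at t[i]
        a[i] = 2.0 * c2      # d2x/dt2 at t[i]
    return v, a
```

The same filter acts as the smoothing step, since the polynomial fit averages out position noise over the window.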
Distributions of the three velocity components are shown in figure 6, where it can be seen that all three components (x, y, z) for the two methods have similar distributions. The root-mean-square velocity components for the dual-camera and single-camera modes are, respectively, v_{2,x} = 0.41, v_{2,y} = 0.41, v_{2,z} = 0.41, and v_{1,x} = 0.42, v_{1,y} = 0.41 and v_{1,z} = 0.43 m s^{-1}. Both v_y distributions have a mean value of −0.27 m s^{-1}, which is consistent with a terminal fall speed of approximately 0.30 m s^{-1} for d = 80 µm water droplets. The data displayed here were obtained from 8800 consecutive holograms, and tracks containing 10 or more links were used. We note that the distributions should not necessarily be identical, because the two tracking algorithms do not necessarily produce tracks for all of the same particles, and because matching of particles between two cameras inevitably leaves fewer particles to be tracked (cf table 1).
The acceleration distributions, shown in figure 7, show excellent agreement for the lateral (x and y) components. The a_z distribution for the single-camera mode, however, is broader than that for the dual-camera mode, clearly a result of the reduced accuracy of tracking along the depth coordinate. The root-mean-square acceleration components for the dual-camera and single-camera modes are, respectively, a_{2,x} = 20, a_{2,y} = 20, a_{2,z} = 21 and a_{1,x} = 19 m s^{-2}, comparable with the Kolmogorov acceleration of a_η ≈ (ε³/ν)^{1/4} ≈ 20 m s^{-2} (although such close agreement between large-particle and fluid accelerations is surely coincidental, given that there is a scaling factor somewhat greater than unity relating the actual RMS fluid acceleration to the (ε³/ν)^{1/4} Kolmogorov scaling [11]). Two observations are relevant to this relatively poor performance for Lagrangian acceleration in the depth coordinate. Firstly, it should be noted that the z-coordinate can be used for particle tracking, with detailed Lagrangian statistics then obtained using just the x- and y-coordinates. Secondly, there are several ways in which the depth position accuracy can be increased over what we have shown here. Possibilities include using smaller particles, since depth uncertainty scales with the square of particle diameter, using a higher resolution detector and, as mentioned previously, increasing the hologram sample rate, which should lead to better depth resolution through averaging. It is plausible, therefore, that depth resolution could be enhanced so as to reach a level suitable for measurement of Lagrangian acceleration, but this remains for future work.

Figure 7. Distributions of the three Lagrangian acceleration components for the coordinate system of the camera used for single-camera tracking. The red curve is for the dual-camera mode and the blue curve is for the single-camera mode. The two tracking methods compare favorably for the two lateral components (x, y), as shown in panels (b) and (c). The poor depth (z-coordinate) resolution diminishes the ability to obtain accurate acceleration distributions in this coordinate, as can be seen in panel (a).
The observed performance of the single-camera and the dual-camera systems is consistent with estimates of the velocity and acceleration resolutions. It was already shown in section 4.2 that the depth uncertainty in the particle position estimates for the single-camera system is δz = 240 µm after the smoothing algorithm is applied. The lateral position uncertainties, also estimated by comparing the unsmoothed and smoothed tracks, are δy = 2 µm, or about 1/10 of the pixel size. We state only the result for δy for brevity; it should be understood that the result for the x-component is the same, as are all three components in the dual-camera system. For the raw, unsmoothed data the lateral resolutions for velocity and acceleration can be estimated, assuming negligible error in the time measurement, to be δv_y ≈ 2δy/τ_sample = 0.01 m s^{-1} and δa_y ≈ 2δy/τ_sample² = 27 m s^{-2}, respectively. The corresponding resolutions along the depth coordinate for the single-camera system are δv_z ≈ 2δz/τ_sample = 1.2 m s^{-1} and δa_z ≈ 2δz/τ_sample² = 3200 m s^{-2}. The smoothing algorithm applied in the lateral (x, y) dimensions is comparable to a 10-point filter and decreases the difference in neighboring position estimates by a factor of approximately 10, which leads to a corresponding factor of 10 improvement in the velocity and acceleration resolutions. Along the depth dimension a 40-point filter is applied, resulting in a factor of 40 improvement. Because δv_y ≈ 0.001 m s^{-1} and δa_y ≈ 2.7 m s^{-2} are well below the Kolmogorov scales of v_η = (εν)^{1/4} ≈ 0.07 m s^{-1} and a_η ≈ 21 m s^{-2}, we can be confident that the lateral tracking is accurate (as is the 3D tracking with the dual-camera configuration). For the single-camera configuration the depth resolution after filtering gives δv_z ≈ 0.03 m s^{-1} and δa_z ≈ 80 m s^{-2}, so the velocity resolution is just sufficient for estimation of fine-scale velocity, but the estimated accelerations are dominated by noise.
One might consider additional filtering along the z-dimension, but the 40-point smoothing scale applied in the depth dimension already gives an effective smoothing time of 40τ_sample = 15 ms, only a factor of 2 smaller than the expected velocity smoothness scale of approximately 10τ_η = 33 ms. Taken together, these factors are consistent with the distributions shown in figures 6 and 7, and they also provide some insight into which system parameters can potentially lead to improved tracking with single-camera in-line holographic systems.
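The resolution arithmetic above can be reproduced directly. Here τ_sample is inferred from the statement that 40τ_sample = 15 ms, so small rounding differences from the quoted values are expected.

```python
# Position uncertainties and frame interval as quoted in the text;
# tau_sample = 15 ms / 40 is an inference, not a directly stated value.
tau = 15e-3 / 40.0       # s, frame interval (~0.375 ms)
dy = 2e-6                # m, lateral position uncertainty
dz = 240e-6              # m, depth uncertainty after smoothing

# Raw (unsmoothed) resolutions from finite differences of positions.
dv_y = 2 * dy / tau          # ~0.01 m/s
da_y = 2 * dy / tau ** 2     # ~27 m/s^2
dv_z = 2 * dz / tau          # ~1.2 m/s
da_z = 2 * dz / tau ** 2     # ~3200 m/s^2

# Smoothing: ~10-point filter laterally, ~40-point filter in depth.
dv_y_s, da_y_s = dv_y / 10, da_y / 10    # ~0.001 m/s, ~2.7 m/s^2
dv_z_s, da_z_s = dv_z / 40, da_z / 40    # ~0.03 m/s, ~80 m/s^2
```

Comparing these numbers with the Kolmogorov scales v_η ≈ 0.07 m s^{-1} and a_η ≈ 21 m s^{-2} makes the conclusion immediate: lateral tracking resolves both velocity and acceleration, while the filtered depth channel resolves velocity but not acceleration.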
In summary, single-camera tracking is able to produce 3D particle tracks with approximately the same Lagrangian velocity and acceleration distributions in x-y space as the more accurate dual-camera tracking; the z velocity distributions also compare favorably. We can safely say, therefore, that single-camera holographic tracking can produce particle tracks capable of providing useful measurements of 3D Lagrangian properties of particles in turbulent flows. The single-camera mode is also able to track more particles than the dual-camera mode and can have significantly larger sample volumes. The z depth information allows long 3D tracks to be obtained, albeit with larger uncertainty for Lagrangian properties in that component.

Particle size along Lagrangian tracks
As has been mentioned previously, there are many problems in which knowledge of particle size, shape, or orientation along Lagrangian tracks could be valuable. For example, with sufficient particle-size-measurement precision it would be possible to make direct observations of particle aggregation or droplet coalescence. Shape information is an important aspect of flows with deformable particles, such as bubble dynamics in liquid turbulence [12]. And lastly, the spatial orientation of nonspherical particles could be used as a local, Lagrangian probe of fluid vorticity or shear. In this section we discuss only the measurement of particle size, but adaptation for shape and spatial orientation studies should be relatively straightforward.
Size estimation of particles goes back to the earliest studies in particle holography [38] and digital in-line holography has been shown to be able to provide accurate particle size distributions [26]. In this study we use a simple pixel-counting method [29] for particle size determination. To be precise, we estimate the size of particles by counting the number of pixels above an intensity threshold applied to the reconstructed particle image at its focus, and then convert this number of pixels to the diameter of a circle of the same area. More advanced methods based on edge detection can be used and these are especially suitable for nonspherical particles.
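The pixel-counting size estimate amounts to the following. This is a minimal sketch; the intensity threshold and pixel pitch are inputs that must be supplied by the user, and the fixed-threshold choice is the simple variant described in the text.

```python
import numpy as np

def equivalent_diameter(image, threshold, pixel_pitch):
    """Size a focused particle image by pixel counting: count the
    pixels above an intensity threshold and return the diameter of
    the circle with the same area.

    image       : 2D array of reconstructed intensity at best focus
    threshold   : fixed intensity threshold for binarization
    pixel_pitch : physical size of one pixel (same units as output)
    """
    area_pixels = np.count_nonzero(image > threshold)
    area = area_pixels * pixel_pitch ** 2
    return 2.0 * np.sqrt(area / np.pi)
```

For a synthetic bright disk of radius 10 pixels on a dark background, the routine returns a diameter close to 20 pixel pitches, with the residual error set by the jagged binarized perimeter, which is the digitization effect discussed below.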
As we track a particle throughout the sample volume, we evaluate how consistently holography can measure the particle size by observing how the size estimate changes from frame to frame. Along any given Lagrangian track the particle size is estimated under varying optical conditions and with realistic noise fluctuations. The data for all particles therefore allow us to sample the entire sample volume, including the full range of depth along the optical axis and the regions near the lateral edges of the reconstructed volume where reconstruction artifacts might influence size retrieval. In this analysis, we consider the full lateral extent of the hologram and a broader range of optical depths by taking advantage of the larger sample volume available to a single-camera system, accepting tracks within the depth range 310 ≤ z ≤ 410 mm. In that respect these measurements provide a more realistic assessment of size uncertainty than the typical calibration methods based, for example, on stationary glass beads. In order to make a study of the size uncertainty we must be confident that all observed variations in particle size are due to estimation uncertainty and not to real particle size changes. Possible sources of particle size changes include evaporation, coalescence of drops and tracking errors (connecting the tracks of two distinct particles). Evaporation is negligible because the residence time of the droplets in the measurement volume (order 0.1 s) is much less than the evaporation time scale (order 100 s, the time scale being proportional to droplet diameter squared and to the ambient relative humidity, for which a conservative estimate of 90% is used) [39]. The coalescence probability scales as the particle number density squared and is therefore negligible in our study. The minimum acceleration tracking method also has a very low rate of tracking errors owing to the low particle number density.
We have confirmed both of these points by checking individual tracks to ensure that no jumps in size are present, such as would result from droplet coalescence or from a tracking error (see more on tracking errors below). Figure 8 displays the standard deviation of particle diameter σ_d versus mean diameter d̄ along individual Lagrangian tracks for the full data set. The scatter in standard deviation lies mostly between 5 and 10 µm, with some particle tracks displaying standard deviations up to approximately 25 µm, or about one pixel pitch. It can be seen clearly that over the full range of droplet diameters measured there is a quite constant (independent of particle diameter) lower bound of σ_d ≈ 5 µm, which is very near the square root of the pixel pitch. We interpret this as being consistent with the threshold pixel counting technique we are using: it is essentially a form of binary Gauss digitization, for which the expected error in area scales with the perimeter length (i.e. the area of a circle is estimated with uncertainty that scales with the diameter, or the linear grid scale) [40]. Figure 9 provides a synthesis of the results in the form of a distribution of size uncertainties. Specifically, we have plotted the distribution of particle diameter deviations for each particle along its own track, normalized by the mean diameter along that track: (d − d̄)/d̄. The deviations are approximately normally distributed with a relative deviation of approximately ±10%. We note that this is suitable for particle coalescence studies in a monodisperse system, where coalescence of like-sized droplets leads to an increase in diameter of 2^{1/3} ≈ 1.3. In absolute terms, for our particle size distribution we measure a mean particle diameter of 89 µm with a full-width at half maximum of approximately 20 µm, or one pixel width.
This noise level, greater than the minimum possible for a simple equivalent-area estimate, most likely results from the threshold binarization, which by definition is sensitive to intensity variations between holograms. More robust particle sizing algorithms using dynamic threshold binarization or edge detection certainly could improve the result.
Particle size can also be used as a tracking criterion [19]. While size matching is unnecessary for tracking at our low particle densities, we have found that the few particle tracks with σ_d/d̄ > 0.3 usually indicate an incorrect match. We have visually confirmed that the tracking algorithm makes an error at these points. This condition can be implemented as a check after tracking has been performed, as we have done here, or in principle the size information could be used in parallel with the tracking algorithm. One possible result would be a combined minimum-acceleration/minimum-size-deviation tracking method.
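As a post-tracking check, the size criterion is a one-liner per track. The function name is ours; only the 0.3 threshold on σ_d/d̄ comes from the text.

```python
import numpy as np

def flag_suspect_tracks(track_diameters, max_rel_std=0.3):
    """Flag tracks whose relative size fluctuation sigma_d / d_mean
    exceeds a threshold (0.3 in the text), a symptom of a mismatched
    link joining two different particles into one track.

    track_diameters : iterable of per-track diameter sequences
    Returns a list of booleans, True meaning 'suspect track'.
    """
    flags = []
    for d in track_diameters:
        d = np.asarray(d, dtype=float)
        flags.append(bool(d.std() / d.mean() > max_rel_std))
    return flags
```

The same statistic could instead be folded into the cost function of the tracker itself, giving the combined minimum-acceleration/minimum-size-deviation method suggested above.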

Summary
The work presented here has demonstrated several aspects of Lagrangian particle tracking with in-line digital holography. Some of these aspects have been considered individually in other published work, but here we have attempted to make a study of particle tracking that includes the full spectrum of the problem, from hologram recording, to reconstruction, particle finding, particle sizing and particle tracking. The study utilized particles in a highly turbulent fluid rather than an idealized flow or static particles. The key aspect is the simultaneous use of a dual-camera system, allowing us to evaluate the accuracy of the single-camera method directly. Important results of this study can be summarized as follows:

1. A single-camera digital in-line holographic system can successfully track particles in three spatial dimensions, albeit with poorer resolution along the optical axis. We can refer to this as 2.5D tracking to distinguish it from tracking with high resolution in all three spatial dimensions.

2. Lagrangian trajectories, including velocity and acceleration distributions, from the lateral (x, y) components compare well with those statistics obtained from full 3D tracking.

3. Spatial resolution along the optical axis of the system is sufficiently high that trajectories and velocities are comparable to those obtained from dual-camera tracking, but accelerations are dominated by noise.

4. The tracking ability along the optical axis can be improved in future work through several methods. Improved algorithms for determining particle depth-position can be developed (here we have used a relatively straightforward method). Using smaller particles (due to the d² dependence of depth resolution) or digital detectors with higher resolution both lead to improved depth resolution. Finally, we have speculated that depth resolution can be improved by increasing the hologram sample rate and therefore the averaging of position estimates.

5. The combination of single- and dual-camera modes allows us to make a direct evaluation of the depth resolution of our hologram reconstruction and particle-position algorithms. Depth resolution is significantly improved by utilizing peak-finding routines developed from a knowledge of depth profiles for idealized particles.

6. Particle size can be measured along Lagrangian tracks with approximately 10% resolution, even using a simple sizing algorithm, and this opens the door to using size information to improve track quality.

7. Tracking and particle sizing with a single-camera holographic system allow a significantly larger volume to be investigated than with a dual-camera system with similar lateral tracking resolution. Purely 2D tracking in this large volume would be challenging, but moving to 3D places the problem in the dilute tracking limit. (A study of 3D tracking efficiency versus dilution is left for future work.)

As mentioned in the introduction, in-line holographic particle tracking has certain advantages relative to standard stereo tracking methods. The method requires only a single camera and therefore a greatly simplified optical setup, with no multi-camera calibration or particle-matching algorithms required. Furthermore, the required laser powers are reduced by orders of magnitude. The method is capable of sampling relatively large volumes, depending on the desired resolution. Lastly, in-line holography is able to provide continuous measurements of particle size, shape and orientation along the tracks. Disadvantages of the holographic technique are the relatively poor depth resolution and the heavy computational burden for post-processing. The depth resolution can be improved by various methods, as discussed previously, and the computational burden, of course, grows continuously lighter due to advances in computational hardware and software.
We can conclude that single-camera, digital in-line holography is a viable technique for Lagrangian particle tracking, and one that is especially well suited to difficult environments. The best chance for future improvements, in our opinion, lies in the continued development of data processing algorithms to make particle finding and positioning more rapid and more accurate, and the subsequent tracking algorithms more robust.