Three-dimensional flame measurements with large field angle

A system for three-dimensional computed tomography of chemiluminescence was developed to measure flames over a large field angle. Nine gradient-index rod lenses, a customized 9 × 1 fiber endoscope, and only one camera are used. Its large field of view, simplicity, and low cost make the system attractive for inner flow field diagnostics. To study the bokeh effect introduced by the imaging system into the reconstructions, fluorescent beads were used to determine the blurring function. Experiments on a steady diffusion flame were conducted to validate the system, comparing three projection models: clear-imaging, out-of-focus imaging, and deconvolution. Taking the bokeh effect and the run-times into account, the results suggest that the deconvolution model provides the best trade-off, improving reconstruction accuracy without increasing computational time.


Introduction
Using just the photons emitted by the combustion reaction, chemiluminescence measurements provide important indicators of heat release rate [1-4], equivalence ratio [2,5], and excited-species concentrations [6,7], without requiring lasers to be subjected to harsh conditions. This advantage enables researchers to combine multiple 2D projections of the target flame, taken from various views, with reconstruction algorithms to obtain multidimensional measurements. Floyd et al. termed this approach computed tomography of chemiluminescence (CTC) [8] and demonstrated its potential as a diagnostic tool for thermal-fluid studies of the spatial structure of the flame and its physical variation. In recent years, three-dimensional CTC (3D-CTC) has been widely studied because of great improvements in optical sensing and computing technologies, which have made high-resolution, instantaneous 3D measurements achievable in practice. Floyd et al. [8] systematically studied various projection models and reconstruction algorithms, as well as the effects of viewpoint resolution and noise. They also greatly reduced the equipment required to perform CTC, making high-resolution 3D measurements much more viable. In their experiments [9], ten commercial cameras combined with mirrors provided simultaneous multi-directional projections of the flame, with an optical distance of 637 mm and a corresponding 22-mm object domain. Algorithms such as the algebraic reconstruction technique (ART), combined with nonparallel, perspective-corrected cylindrical projections, were applied to achieve image reconstructions. Fiber-based endoscopes were used in the 3D-CTC experiments of Ma et al. [10-12], making the 3D-CTC procedure much more flexible by not requiring a direct line of sight. Two or more cameras were used in their studies.
The degradation in spatial resolution was investigated, and the capacity to resolve sub-millimeter flame features was demonstrated; also 3D measurements at kilohertz sampling rates in a supersonic combustor were performed.
The object domain in their experiments was 9.5 × 9.5 × 9.5 mm³ for an object distance of about 200 mm [10], and 6 × 6 × 12 mm³ for an object distance of about 100 mm [11]. Tomographic inversion by simulated annealing and ART algorithms was used in the computation. Wang et al. [13] proposed a camera calibration method to determine the spatial locations and intrinsic parameters of the cameras; twelve cameras were synchronized and triggered for simultaneous multiple-projection capture. The multiplicative ART algorithm was used in their reconstruction work. The field of view of their system is 20 × 36 × 20 mm³ for an object distance of about 560 mm.
In most of these previous studies, the object domain is quite small compared to the object distance, i.e., the field of view is small; yet a large field of view is important for 3D-CTC measurements. First, at a fixed lens size, a larger field of view corresponds to a closer object distance, which improves the temporal resolution by shortening exposure times. Second, and more importantly, a large field of view is vital for inner flow field diagnostics, such as in scramjet [14] and swirl combustion systems [15], because of limitations in the number and size of the optical windows.
When working with large field angles, blurring caused by the limited depth of field (DOF) must be taken into account. Floyd et al. [8] used ray tracing to determine the projection geometry and applied a weighting scheme based on the blurring circle and the pixel intersection ratio. Ma et al. [16] used Monte Carlo methods to calculate point spread functions describing how a point source is projected through their imaging system. Wang et al. [13] also accounted for the size of the blurring circle in their calibration of the imaging system. When the field of view increases significantly, blurring becomes more important because of the shallow DOF; it can couple with other intrinsic characteristics of the optical system, complicating the simulation of the projection process.
To measure flames at large field angles while mitigating blurring, we propose a low-cost 3D-CTC system that combines gradient-index (GRIN) lenses with a customized fiber bundle having nine inputs and one output. Using only one CCD camera, the system is inexpensive (about $5000) and easy to operate. The DOF and the realistic bokeh effects of this imaging system are analyzed. Three projection models were adopted, with the intensity distribution over the blurring circle taken into account; unlike in previous studies, these intensities were obtained using an experimental calibration method. To validate the 3D-CTC system, we used ten small CH4 diffusion flames arranged in a ring, and compared the image reconstructions from the three models.

CTC technique formulation
The physical configuration of the CTC problem is illustrated in Fig. 1. To reconstruct the 3D chemiluminescence (e.g., OH* or CH*) structure of a flame, the object is imaged from different angles simultaneously. To execute the tomography computationally, the 3D object domain and the 2D images are discretized into voxels and pixels, respectively, in different Cartesian coordinate systems. We use F to denote the distribution of chemiluminescence intensities in the object domain. The emission intensity I_q recorded by the CCD from viewing angle q is then expressed as

I_q(m, n) = Σ_{i,j,k} W_q(m, n; i, j, k) F(i, j, k),

where i, j, k are the indices of the voxels; m, n are the indices of the pixels; and W is the weight matrix representing the relationship between a pixel and the voxels.
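The projection relation above is a linear system: each view's image is the weight matrix applied to the flattened voxel intensities. The following sketch illustrates this forward model on a toy grid; the dimensions, sparsity pattern, and random weights are hypothetical stand-ins for the geometry-derived W of the paper.

```python
import numpy as np

# Hypothetical coarse grid, for illustration only.
n_vox = 4 * 4 * 2          # voxels (i, j, k) flattened
n_pix = 6 * 6              # pixels (m, n) of one view, flattened

rng = np.random.default_rng(0)

# W maps voxel emissions to pixel counts for one viewing angle q.
# In the real system W is sparse and determined by the imaging model.
W = rng.random((n_pix, n_vox)) * (rng.random((n_pix, n_vox)) < 0.2)

F = rng.random(n_vox)      # chemiluminescence intensity per voxel

# Forward projection: I_q(m, n) = sum over (i, j, k) of W * F
I_q = W @ F

print(I_q.shape)           # (36,): one simulated projection, flattened
```

Stacking the nine views' equations gives the full system that the reconstruction algorithm inverts.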

Multi-directional imaging system
For our multi-directional imaging system (Fig. 2), projections were obtained using a customized fiber bundle with nine inputs and only one output, recorded by a single CCD camera (IMI 147FT). Each input is an assemblage of 13,000 fibers circularly bunched within an area 1.5 mm in diameter. The nine fiber bundles guide the light signals to the CCD in a 3 × 3 array; hence the total number of fibers is 117,000. The core diameter of each fiber is 12 µm and the pixel size of the CCD is 6.45 µm × 6.45 µm. The object domain is 45 × 45 × 20 mm³ at an object distance of about 65 mm, establishing a field angle of about 50°, much larger than in previous studies. To obtain the maximum DOF, nine GRIN lenses were used to relay the optical image of the object to the nine fiber inputs. The virtue of this kind of lens for imaging is its symmetrically smooth radial decrease in refractive index. It is commonly used as an objective or relay lens in endoscopes for its small size and weight, as well as its advantage in correcting the Petzval field curvature [17]. There are various studies on the image quality of GRIN lenses using geometric optics [18,19] and numerical techniques [20,21]. The GRIN lenses used were 6.54 mm in length and 2 mm in diameter, with a center refractive index of 1.64 and a field angle of about 50°. At the object distance of 65 mm, the DOF is 86 mm for a permissible circle of confusion of 0.035 mm. To determine the DOF and the actual distribution of the blurring circle produced by a point light source through the GRIN-lens-based imaging system, a fluorescent bead about 100 µm in diameter was used to calibrate the blurring function on the pixels. In our experience, the size of the bead barely affects the calibration results if it is less than 200 µm; the ratio of object to image size is about 30:1 in this case.
Ideally, the image of the bead is then about 3.3 µm in size, smaller than the pixel (6.45 µm × 6.45 µm). Therefore, a small bead can be treated as a point source.
We placed the bead at several positions, and the projections appear as blurring circles [Fig. 2(b)]. The intensity distributions were recorded at these positions: Fig. 3(a) shows the measured distribution at a fixed object distance of 65 mm as the distance between the bead and the optical axis varies, whereas Fig. 3(b) shows the measured distribution on the optical axis as the object distance varies. The calibration depths were chosen based on the object distance and the size of the object domain, and should cover the object domain. As seen in these distributions, the blurring circle changes little over the domain of the imaging system, although its diameter varies slightly with the GRIN lens. We find that a Gaussian fit of the intensity in the radial direction is appropriate, consistent with the fitting given in Gosselin et al. [22]. The best fit to the calibration data presented in Fig. 3 corresponds to a Gaussian with a full width at half maximum of 4.5 pixels.
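The fitted 4.5-pixel full width at half maximum (FWHM) fixes the Gaussian blur profile completely, since FWHM = 2√(2 ln 2) σ. A minimal sketch of the resulting radial blur model:

```python
import numpy as np

# Convert the fitted blur FWHM (4.5 pixels) to the Gaussian sigma.
fwhm = 4.5
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # ~1.91 pixels

# Radial Gaussian model of the blurring-circle intensity (peak normalized).
def blur_profile(r, sigma=sigma):
    return np.exp(-r**2 / (2.0 * sigma**2))

# Sanity check: at r = FWHM/2 the profile drops to half its peak value.
half = blur_profile(fwhm / 2.0)
print(round(half, 3))   # 0.5 by construction
```

This profile supplies the per-pixel intensity weights used later by the out-of-focus imaging model.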

Coordinates calibration
To determine the imaged center position on the pixels of an arbitrary spatial point in the object domain, a modified version of the calibration method proposed by Wang et al. [13] was used. The relationship between the world coordinate system (x, y, z) and the coordinate system (x", y", z") of each view is expressed as

(x", y", z")ᵀ = R(θ, φ, ε) (x, y, z)ᵀ + (Tx, Ty, Tz)ᵀ,

where R is the rotation matrix composed from the coordinate-transformation angles θ (rotation about the y axis), φ (pitch angle), and ε (spin angle); see Fig. 4(a). The object distance Tz is the distance between the GRIN lens and the center of the target domain along the optical axis; Tx and Ty are the other two translation components in the world coordinate system. Unlike Wang et al. [13], the 3D pattern was produced by a device based on a laser and a 3D motorized precision translation stage [Fig. 4(b)]. The beam from a He-Ne laser (3 mW) was focused onto a frosted glass plate, which diffuses the luminous point source evenly in all directions; the whole installation has full translational degrees of freedom to cover the object domain. The accuracy of the translation stage is about 10 µm. The focused laser point was imaged simultaneously in all nine views at each position in a single shot. A total of 81 positions and shots were used to calculate all the parameters of the lens imaging model; the results are listed in Table 1.
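The world-to-view transform can be sketched as a composition of elementary rotations plus the translation (Tx, Ty, Tz). The rotation composition order below is an assumption for illustration; the calibration determines the actual angles and translations per view.

```python
import numpy as np

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def world_to_view(p, theta, phi, eps, T):
    # theta: rotation about y; phi: pitch; eps: spin; T = (Tx, Ty, Tz).
    # Composition order spin * pitch * rotation-about-y is an assumption.
    R = rot_z(eps) @ rot_x(phi) @ rot_y(theta)
    return R @ np.asarray(p, float) + np.asarray(T, float)

# With zero angles the transform reduces to a pure translation: (1, 2, 68).
print(world_to_view([1, 2, 3], 0, 0, 0, [0, 0, 65]))
```

Fitting θ, φ, ε, Tx, Ty, Tz per view against the 81 calibrated laser-point positions is what Table 1 reports.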

Flames for the validation experiment
This low-cost system overcomes the limitations of optical windows, making 3D-CTC more viable for applications in interior flow fields, such as scramjets. The annular burner has ten holes uniformly distributed around the jet nozzle (Fig. 5), each producing a small CH4 diffusion flame. The inner and outer diameters of the flame ring were 25.18 mm and 37.74 mm; each flame is slightly larger than the hole that produces it, whose diameter is about 5.2 mm.
To test the capability of the system on steady flames, nine projections were obtained for the image reconstruction [Fig. 6(a)]; they were captured simultaneously using the CCD camera with an exposure time of 100 µs. The intensity distribution of the ninth view in Fig. 6(a), along the red line, is presented in Fig. 6(b). The minimum value is less than 0.05, indicating that each small flame is isolated even though the interspatial gap between adjacent flames is near zero (Fig. 5). We remark that the flame images of Figs. 6(a) and 6(b) are original, and hence show blurring; the intensity gradient between two actual flames is sharper than it appears in Fig. 6(b). The emissions of the ten small flames are not identical. CH4 gas is fed from one side (near flame 8) and the total flow rate is quite small (0.15 L/min), so the flow rates at the small holes are not exactly equal. Even though the flames are stable, flames 7, 8, and 9 have stronger luminous intensities than the flames on the opposite side.

Projection processing models
The traditional parallel-projection model is not suitable in practice for our optical setup because of the light-collecting characteristics of the lens [13]; this is even more evident in our experiments because of the large field angle. As the projection onto the CCD is based on incoming photon counts, not only the intersection area of the pixel and the blurring circle from the object point but also the intensity distribution over the blurring circle are important factors in the projection. The blurring arises from several sources and can be modeled using convolution [23]. The inverse operation, deconvolution, is increasingly used in image processing to enhance resolution and contrast in microscopy and micro-CT [24-26].
In this study, three projection processing models were compared (Fig. 7): (a) Clear-imaging model: blurring is ignored, the measured projections are used as input data, and the voxel-to-pixel weight factors are determined by pinhole imaging theory. (b) Out-of-focus imaging model: the measured projections are used as input data, and the voxel-to-pixel weight factors are determined from the intersection area times the Gaussian-fitted intensity corresponding to the separation between the pixel and the blurring-circle center. (c) Deconvolution model: Lucy-Richardson deconvolutions of the measured projections (using a Gaussian kernel) are used as input data, and the voxel-to-pixel weight factors are determined as in model (a). To simplify the calculation, we approximate with polygons any intersection area that is not covered completely by the blurring circle.
For models (a) and (b), the flame images were not corrected or filtered before applying the tomography algorithm, whereas in model (c) the Lucy-Richardson deconvolution was applied; no other correction or filter was used.
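The Lucy-Richardson step of model (c) can be sketched as follows. This is a generic 1D illustration with a Gaussian kernel, not the paper's 2D implementation; the kernel width is taken from the calibrated 4.5-pixel FWHM blur.

```python
import numpy as np

def gaussian_psf(size, sigma):
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def richardson_lucy(observed, psf, n_iter=50):
    """1D Lucy-Richardson deconvolution sketch (the paper uses 2D images)."""
    est = np.full_like(observed, observed.mean())   # flat positive start
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# A spike blurred by the PSF is substantially re-sharpened.
psf = gaussian_psf(11, sigma=1.9)          # sigma from the 4.5-pixel FWHM
true = np.zeros(64); true[32] = 1.0
observed = np.convolve(true, psf, mode="same")
restored = richardson_lucy(observed, psf)
print(restored.max() > observed.max())     # True: peak partially recovered
```

Because the update is multiplicative, the estimate stays nonnegative, which matches the physical constraint on emission intensities.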
The object domain was discretized into 432,000 voxels (120 × 120 × 30), whereas the number of pixels is 810,000 (300 × 300 × 9). Although there are more equations than unknowns, this discrete problem is still ill-posed. As discussed by Daun et al. [29], prior knowledge of the flame can be used to mitigate the ill-posedness. The prior knowledge used here is that all flames are completely imaged in all views and that the emission intensity at the edge of the object domain is zero. Moreover, we believe that the calibrated blurring may mitigate the ill-posedness of the problem by acting as a filter.
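An iterative ART-style solver of the kind cited in the introduction can be sketched as follows; the weight matrix, relaxation factor, and sweep count here are illustrative, and the nonnegativity clamp plays the role of the physical prior discussed above.

```python
import numpy as np

def art_reconstruct(W, p, n_sweeps=200, relax=0.5):
    """ART (Kaczmarz-style) sketch: W is (n_pix, n_vox), p the projections."""
    F = np.zeros(W.shape[1])
    row_norms = (W * W).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):
            if row_norms[i] == 0:
                continue
            # Relax F toward satisfying equation i: W[i] @ F = p[i].
            F += relax * (p[i] - W[i] @ F) / row_norms[i] * W[i]
            F = np.maximum(F, 0.0)   # prior: emission is nonnegative
    return F

# Tiny consistent system: recover a known voxel distribution.
rng = np.random.default_rng(1)
W = rng.random((40, 10))
F_true = rng.random(10)
p = W @ F_true
F_rec = art_reconstruct(W, p)
print(float(np.abs(F_rec - F_true).max()))   # small residual error
```

In the real problem each of the 810,000 pixel equations contributes one such update per iteration, and the reported reconstructions were output after 2000 iterations.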

Results
To visualize the reconstructions from the three projection models, volume-rendered views of the flames were generated (Fig. 8). The top views correspond to a pitch angle of 90° and differ from the nine imaging views. All results were obtained under the same computational settings, with a resolution of 0.375 mm in the x-z plane and 0.667 mm along the y direction, and were output after 2000 iterations. All three results depict the ten small flames arranged in a ring. The diameter of the inner circle is about 67 voxels (25.13 mm) and that of the outer circle about 100 voxels (37.50 mm), matching the burner dimensions well (Fig. 5).
As there is no obvious difference in the overall view, we conclude that all three models are capable of good reconstructions of the overall flame structure. Nevertheless, the computational times differ (Table 2). Model (b) expends extra time in calculating the weight matrix, as an additional operation involving the Gaussian distribution is required each time a voxel-to-pixel weight factor is computed. For the same procedure settings, the run-time for model (b) is 141,532 seconds, about 17 times that of models (a) and (c). To examine the quantitative intensity profiles and compare the absolute differences between models, we obtained the intensity profile (Fig. 9) along the red circles drawn in Fig. 8. Note that most values in the interspaces between adjacent flames are smaller for model (b) than for models (a) and (c). In particular, at the minimum between flames 5 and 6 (and also between flames 10 and 1), the reconstructed intensity from model (b) is smaller than that from model (c), which in turn is smaller than that from model (a). As discussed in Section 2.4, the actual value between adjacent flames should be close to zero; thus the best fidelity in the reconstruction is provided by model (b).
The reconstruction error could in principle be defined from the difference between the real distribution F of the object domain and the reconstructed distribution F_rec. Unfortunately, F is not known in the experiments, so the exact reconstruction error cannot be obtained. To compare the models, we instead define Diff as the error between the measured and reconstructed projections:

Diff_k = Σ_{m,n} | p_measured(m, n) − p_computed^k(m, n) |,   (6)

where p_measured is the distribution of the projections measured by the CCD, and p_computed^k is the distribution of the projections computed at iteration k of the reconstruction.
Although Diff does not indicate reconstruction accuracy, it reflects the progress of the iterative calculation, and a small Diff is a necessary condition for a good reconstruction. From the plot of Diff for the three models (Fig. 10), the difference between measured and reconstructed projections decreases more rapidly for model (b) than for models (a) and (c). The final summed Diff for all models is about 4.0 × 10⁵ over 810,000 pixels, implying an average absolute error of 0.49 per pixel for measured projection values in the range 0 to 160 counts. Models (a) and (c) have comparable computational run-times, and no obvious difference appears in their top-view reconstructions (Fig. 8). However, a clear difference is found in the horizontal slices (Fig. 11). For the methane-air diffusion flame used, each small flame should be a cylindrical cone [30]. In the marked ellipse area, better toroidal shapes in the intensity distribution are reproduced by model (c), indicating that deconvolution may help improve the spatial resolution.
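The Diff metric of Eq. (6) and the per-pixel figure quoted above reduce to a one-line computation; the arrays below are hypothetical placeholders for the measured and computed projections.

```python
import numpy as np

def diff_metric(p_measured, p_computed):
    """Summed absolute projection mismatch, as in Eq. (6)."""
    return float(np.abs(np.asarray(p_measured) - np.asarray(p_computed)).sum())

# Per-pixel average error implied by the reported summed Diff
# of ~4.0e5 over 810,000 pixels:
print(round(4.0e5 / 810_000, 2))   # 0.49 counts per pixel

# Toy usage with placeholder projection values:
print(diff_metric([1.0, 2.0], [0.5, 3.0]))   # 1.5
```

Against the 0-160 count measurement range, the 0.49-count average residual indicates the iterations have converged closely onto the measured projections.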

Conclusion
To increase the field angle of 3D-CTC diagnostics while mitigating blurring, GRIN lenses were combined with a customized 9 × 1 fiber endoscope and only one CCD camera. Nine projections from different views were obtained simultaneously in a single shot, making the system quite simple and low cost. The object domain is 45 × 45 × 20 mm³ at an object distance of about 65 mm, corresponding to a field angle of about 50°.
Because lenses and a fiber bundle were used in this imaging system, blurring cannot be ignored. Small fluorescent beads were used to calibrate the blurring function. The results showed that the blurring circle barely changes within the object domain of the imaging system (65 ± 25 mm), and a 2D Gaussian fit was found appropriate for the intensity in the radial direction.
Three projection models (clear-imaging, out-of-focus imaging, and deconvolution) were employed for image reconstructions using experimental measurements of ten small flames arranged in a ring. The diameters of the reconstructed flame ring match the burner dimensions well for all three models, illustrating their sub-millimeter resolution capability. Top views of the reconstructed flames show that the out-of-focus imaging model provides the best fidelity; nevertheless, its computational cost is 17 times that of the other two models. The deconvolution model produces more distinctive details than the clear-imaging model, demonstrating that adopting deconvolution may help improve the spatial resolution.