Development of a beam optimization method for absorption-based tomography

Absorption tomography is an imaging technique that has been used to image multiple scalar parameters simultaneously, such as temperature and species concentration, for combustion diagnostics. Practical combustors, such as internal combustion engines and gas turbine engines, allow only limited optical access, and typically only a few (ca. 20-40) beams are available to probe the domain of interest. With such limited spatial sampling, it is non-trivial to optimize the beam arrangement for a faithful reconstruction. Previous efforts on beam optimization either rely on heuristic/empirical methods lacking rigorous mathematical derivation or were derived by assuming certain prior information in the tomographic inversion. This paper develops an approach that is expected to be especially useful when prior information is not easily available or is not intended to be included in the inversion process. We demonstrate that the orthogonality between rows of the weight matrix directly correlates with reconstruction fidelity and can be used as an effective predictor for beam optimization. A systematic comparison between our method and existing methods in the literature supports the validity of our approach. We expect this method to be valuable not only for absorption tomography but also for other tomographic modalities.

© 2017 Optical Society of America

OCIS codes: (120.1740) Combustion diagnostics; (110.6955) Tomographic imaging.

Vol. 25, No. 6 | 20 Mar 2017 | OPTICS EXPRESS 5982
#280676 https://doi.org/10.1364/OE.25.005982
Received 6 Dec 2016; revised 5 Feb 2017; accepted 16 Feb 2017; published 7 Mar 2017

References and links
1. J. Ballester and T. García-Armingol, “Diagnostic techniques for the monitoring and control of practical flames,” Prog. Energ. Combust. 36(4), 375–411 (2010).
2. L. A. Rahn, “Laser-based combustion diagnostics,” Opt. Photonics News 7(9), 23–29 (1996).
3. V. A. Miller, V. A. Troutman, M. G. Mungal, and R. K. Hanson, “20 kHz toluene planar laser-induced fluorescence imaging of a jet in nearly sonic crossflow,” Appl. Phys. B 117(1), 401–410 (2014).
4. P. S. Hsu, N. Jiang, J. R. Gord, and S. Roy, “Fiber-coupled, 10 kHz simultaneous OH planar laser-induced fluorescence/particle-image velocimetry,” Opt. Lett. 38(2), 130–132 (2013).
5. M. Luong, R. Zhang, C. Schulz, and V. Sick, “Toluene laser-induced fluorescence for in-cylinder temperature imaging in internal combustion engines,” Appl. Phys. B 91(3-4), 669–675 (2008).
6. W. Cai and C. F. Kaminski, “Multiplexed absorption tomography with calibration-free wavelength modulation spectroscopy,” Appl. Phys. Lett. 104(15), 154106 (2014).
7. W. Cai and C. F. Kaminski, “A tomographic technique for the simultaneous imaging of temperature, chemical species, and pressure in reactive flows using absorption spectroscopy with frequency-agile lasers,” Appl. Phys. Lett. 104(3), 034101 (2014).
8. S. Shi, J. Wang, J. Ding, Z. Zhao, and T. H. New, “Parametric study on light field volumetric particle image velocimetry,” Flow Meas. Instrum. 49, 70–88 (2016).
9. P. M. Lillo, M. L. Greene, and V. Sick, “Plenoptic single-shot 3D imaging of in-cylinder fuel spray geometry,” Z. Phys. Chem. 229, 549–560 (2014).
10. W. Cai and C. Kaminski, “A numerical investigation of high-resolution multispectral absorption tomography for flow thermometry,” Appl. Phys. B 119(1), 29–35 (2015).
11. W. Cai, X. Li, and L. Ma, “Practical aspects of implementing three-dimensional tomography inversion for volumetric flame imaging,” Appl. Opt. 52(33), 8106–8116 (2013).
12. W. Cai, X. Li, F. Li, and L. Ma, “Numerical and experimental validation of a three-dimensional combustion diagnostic based on tomographic chemiluminescence,” Opt. Express 21(6), 7050–7064 (2013).
13. M. P. Wood and K. B. Ozanyan, “Simultaneous temperature, concentration, and pressure imaging of water vapor in a turbine engine,” IEEE Sens. J. 15(1), 545–551 (2015).
14. G. T. Herman, Image Reconstruction from Projections (Springer-Verlag, 1979).
15. M. Allen, E. Furlong, and R. Hanson, “Tunable diode laser sensing and combustion control,” in Applied Combustion Diagnostics, K. Kohse-Höinghaus and J. B. Jeffries, eds. (Taylor & Francis, 2002), pp. 479–498.
16. C. Kim, M. Kim, J. Abell, W. Bewley, C. Merritt, C. Canedy, I. Vurgaftman, and J. Meyer, “Mid-IR distributed-feedback interband cascade lasers,” in SPIE OPTO (International Society for Optics and Photonics, 2013), pp. 86311–86318.
17. S. Dupont, Z. Qu, S.-S. Kiwanuka, L. E. Hooper, J. C. Knight, S. R. Keiding, and C. F. Kaminski, “Ultra-high repetition rate absorption spectroscopy with low noise supercontinuum radiation generated in an all-normal dispersion fibre,” Laser Phys. Lett. 11(7), 075601 (2014).
18. G. Millot, S. Pitois, M. Yan, T. Hovhannisyan, A. Bendahmane, T. W. Hänsch, and N. Picqué, “Frequency-agile dual-comb spectroscopy,” Nat. Photonics 20, 27–30 (2016).
19. Y. Wang, M. G. Soskind, W. Wang, and G. Wysocki, “High-resolution multi-heterodyne spectroscopy based on Fabry-Perot quantum cascade lasers,” Appl. Phys. Lett. 104(3), 031114 (2014).
20. S. O’Hagan, T. Pinto, P. Ewart, and G. A. D. Ritchie, “Multi-mode absorption spectroscopy using a quantum cascade laser for simultaneous detection of NO and H2O,” Appl. Phys. B 122, 1–10 (2016).
21. W. Cai and C. F. Kaminski, “Tomographic absorption spectroscopy for the study of gas dynamics and reactive flows,” Prog. Energ. Combust. 59, 1–31 (2017).
22. X. An, M. S. Brittelle, P. T. Lauzier, J. R. Gord, S. Roy, G. H. Chen, and S. T. Sanders, “Demonstration of temperature imaging by H2O absorption spectroscopy using compressed sensing tomography,” Appl. Opt. 54(31), 9190–9199 (2015).
23. P. Wright, N. Terzija, J. L. Davidson, S. Garcia-Castillo, C. Garcia-Stewart, S. Pegrum, S. Colbourne, P. Turner, S. D. Crossley, and T. Litt, “High-speed chemical species tomography in a multi-cylinder automotive engine,” Chem. Eng. J. 158(1), 2–10 (2010).
24. S. A. Tsekenis, N. Tait, and H. McCann, “Spatially resolved and observer-free experimental quantification of spatial resolution in tomographic images,” Rev. Sci. Instrum. 86(3), 035104 (2015).
25. N. Terzija, J. L. Davidson, C. A. Garcia-Stewart, P. Wright, K. B. Ozanyan, S. Pegrum, T. J. Litt, and H. McCann, “Image optimization for chemical species tomography with an irregular and sparse beam array,” Meas. Sci. Technol. 19(9), 094007 (2008).
26. J. Song, Y. Hong, H. Pan, and G. Wang, “Beam arrangement on two-dimensional temperature reconstruction based on laser absorption spectroscopy,” in International Symposium on Photoelectronic Detection & Imaging (2013).
27. D. McCormick, M. G. Twynstra, K. J. Daun, and H. McCann, “Optimising laser absorption tomography beam arrays for imaging chemical species in gas turbine engine exhaust plumes,” in International Society for Industrial Process Tomography (2014).
28. M. G. Twynstra and K. J. Daun, “Laser-absorption tomography beam arrangement optimization using resolution matrices,” Appl. Opt. 51(29), 7059–7068 (2012).
29. M. P. Wood and K. B. Ozanyan, “Optimisation of a tomography sensor for imaging of temperature in a gas turbine engine,” in IEEE Sensors 2013, pp. 1–4.
30. S. J. Grauer, P. J. Hadwin, and K. J. Daun, “Bayesian approach to the design of chemical species tomography experiments,” Appl. Opt. 55(21), 5772–5782 (2016).
31. A. Guha and I. Schoegl, “Tomographic laser absorption spectroscopy using Tikhonov regularization,” Appl. Opt. 53(34), 8095–8103 (2014).
32. K. J. Daun, S. J. Grauer, and P. J. Hadwin, “Chemical species tomography of turbulent flows: Discrete ill-posed and rank deficient problems and the use of prior information,” J. Quant. Spectrosc. Radiat. Transf. 172, 58–74 (2016).
33. P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion (SIAM, 1999).
34. K. J. Daun, S. L. Waslander, and B. B. Tulloch, “Infrared species tomography of a transient flow field using Kalman filtering,” Appl. Opt. 50(6), 891–900 (2011).
35. L. Ingber, “Simulated annealing: practice versus theory,” Math. Comput. 18, 29–57 (1993).
36. A. Corana, M. Marchesi, C. Martini, and S. Ridella, “Minimizing multimodal functions of continuous variables with the simulated annealing algorithm,” ACM Trans. Math. Softw. 13(3), 262–280 (1987).
37. S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science 220(4598), 671–680 (1983).
38. M. Tsuzuki and T. Martins, Simulated Annealing: Strategies, Potential Uses & Advantages (Nova Science Publishers, 2014).
39. W. Cai and L. Ma, “Applications of critical temperature in minimizing functions of continuous variables with simulated annealing algorithm,” Comput. Phys. Commun. 181(1), 11–16 (2010).
40. S. Bejaoui, S. Batut, E. Therssen, N. Lamoureux, P. Desgroux, and F. Liu, “Measurements and modeling of laser-induced incandescence of soot at different heights in a flat premixed flame,” Appl. Phys. B 118(3), 449–469 (2015).
41. R. Gordon, R. Bender, and G. T. Herman, “Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and x-ray photography,” J. Theor. Biol. 29(3), 471–481 (1970).
42. K. Sun, “Utilization of multiple harmonics of wavelength modulation spectroscopy for practical gas sensing,” Ph.D. thesis (Stanford University, 2013).
43. M. S. Irandoost, M. Ashjaee, M. H. Askari, and S. Ahmadi, “Temperature measurement of axisymmetric partially premixed methane/air flame in a co-annular burner using Mach–Zehnder interferometry,” Opt. Lasers Eng. 74, 94–102 (2015).


Introduction
Optical imaging techniques are indispensable for studies of complex turbulent combustion, which is dominated by the interplay between turbulence, chemical kinetics, and transport phenomena [1,2]. These techniques can be grouped into two categories, namely planar imaging [3–5] and computational imaging [6–8]. The former typically requires substantial optical access to deploy both excitation and detection paths. However, in practical applications such as engine diagnostics, the available optical access is extremely limited, making planar imaging techniques impractical to implement. The second category, computational imaging, can be further divided into light field imaging [9] and tomography [10–13]. Light field imaging can achieve 3D spatial resolution with a single camera, but usually needs a large field of view (FOV) for signal collection and, again, is not suitable for engine measurements. Tomography, on the other hand, reconstructs the target field from its cross-sectional projections [14]. Among the tomographic modalities, tomographic absorption spectroscopy (TAS) is the most promising, as it inherits the advantages of both absorption spectroscopy and tomography, including 2D spatial resolution, species specificity, and versatility [15]. Due to recent progress in semiconductor lasers [16], frequency-agile light sources [17], and advanced spectroscopic schemes [18–20], TAS is developing into a technique that provides an unprecedented opportunity for combustion diagnostics [21]. Furthermore, with the development of fiber optics and the theory of compressed sensing [22], TAS can be implemented with minimal optical access, which is highly desirable for engine diagnostics. For example, TAS has been demonstrated on a multi-cylinder automotive engine for imaging of hydrocarbon vapor with only 27 beams [23]. A spatially resolved and observer-free method was recently developed to quantify the spatial resolution of a similar sparse-sampling system with 31 beams [24]. With such limited spatial sampling, it is non-trivial to arrange the beams optimally so as to obtain the richest spatial information for both acceptable reconstruction fidelity and spatial resolution.
Numerous methods have been proposed to achieve an optimal beam configuration. All of these methods try to define either a qualitative or a quantitative indicator that can predict the reconstruction fidelity. Terzija et al. proposed a beam optimization strategy based on the hypothesis that the reconstruction can be improved by maximizing discrepancies between the beams in the Radon space [25]. Song et al. developed the so-called grid weight factor (GWF) method to predict the best beam configuration out of four candidates [26]. However, both methods are heuristic/empirical and lack a solid mathematical foundation. Twynstra et al. mathematically proved that the Frobenius distance between the so-called resolution matrix and the identity matrix directly correlates with the reconstruction error [27–29]. Since the resolution matrix is derived by incorporating a smoothness prior, it is especially useful in data-limited cases, in which the inclusion of even an uninformative prior can significantly improve the reconstruction accuracy. Analogously, more informative priors, such as a noise model and the spatial covariance of the fluctuating concentration field, can be incorporated into a maximum a posteriori (MAP) estimate through a Bayesian formulation to derive a possible indicator [30]. Grauer et al. proved that the MAP uncertainty, quantified by the trace of the so-called posterior covariance matrix, is also a good predictor of the reconstruction error [30]. Both approaches illustrate how an indicator can be designed for beam optimization by explicit consideration of the available a priori information at an early stage of implementing a TAS system.
In this work we introduce an alternative approach, which treats spatial sampling and regularization as separate and independent issues. We hypothesize that the orthogonality between the rows of the weight matrix is directly correlated with the reconstruction accuracy. Thus, a good reconstruction can be obtained by maximizing the orthogonality between the subspaces defined by the rows of the weight matrix, which are determined by the physical positions of the beams. Since this method is derived only from the weight matrix, without any assumptions about the target flow field, we expect it to be especially useful for cases in which a priori information is not easily available or is not intended to be incorporated in the reconstruction process. We consider our method complementary to the existing ones derived by assuming certain prior information. In the remainder of this paper, we first derive and validate our method, and then compare it against representative existing methods to show its validity.
The remainder of the paper is organized as follows: Section 2 first describes the mathematical formulation of both tomographic inversion and beam optimization methods, and then briefly reviews the simulated annealing (SA) algorithm needed for the beam optimization process; Section 3 proposes our beam optimization method and discusses comparison results obtained from simulative experiments; and the last section summarizes this work.

Mathematical formulation of absorption tomography
The mathematical formulation of absorption tomography has been discussed extensively [31,32], and we provide a brief summary here for the readers' convenience. The definition of the coordinate system is shown in Fig. 1. The positions of all beams are specified by φ = {φ_1, φ_2, ..., φ_K} with φ_i = (d_i, θ_i), where d_i is the perpendicular distance from the i-th beam to the origin (defined as the center of the tomography field), θ_i is the angle between the beam and the positive direction of the x axis, and K is the total number of beams used in the experiment. As the i-th monochromatic beam, produced by the emitter, traverses the tomography field and is picked up by the detector, its intensity is attenuated by absorption of the target species. The corresponding absorbance can be derived from the Beer-Lambert law and is given as

p_i = \ln\left( I_{0,i} / I_{t,i} \right) = \int f(l, \nu) \, dl,   (1)

where I_{0,i} and I_{t,i} are the incident and transmitted light intensities, respectively, and f(l, ν) is the profile of the absorption coefficient along the line of sight (LOS) at frequency ν, which is a function of the absorbing species concentration, gas temperature, and pressure along the LOS. By discretizing the tomography field into n² square pixels, each of which contains a uniform gas with respect to the reconstructed property (e.g., the absorption coefficient), Eq. (1) can be approximated by a "ray-sum" as

p_i \approx \sum_{j=1}^{n^2} A_{ij} f_j,   (2)
where A_ij represents the chord length of the i-th beam within the j-th pixel, and f_j is the absorption coefficient of the j-th pixel. It should be noted that this approximation introduces an artificial error, the so-called discretization error, since the physical target field is continuous rather than discrete. Repeating Eq. (2) for all beams results in a linear equation system that can be described as

A \vec{f} = \vec{p},   (3)
where A is the weight matrix, \vec{f} the distribution of the absorption coefficient arranged into a column vector, and \vec{p} the vector of LOS measurements.
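As a concrete illustration, the weight matrix A can be assembled numerically from the beam parameters (d_i, θ_i). The sketch below approximates the chord lengths A_ij by dense point sampling along each beam; it is a simplified stand-in for an exact ray-tracing routine, and the domain size and sampling density are illustrative assumptions, not values from the paper.

```python
import numpy as np

def weight_matrix(beams, n, half_width=1.0, n_samples=2000):
    """Approximate weight matrix A for an n x n pixel grid covering
    [-half_width, half_width]^2. Each beam is (d, theta): d is the
    perpendicular distance from the origin, theta the angle with the
    positive x axis. Chord lengths A_ij are estimated by summing the
    step length dt over sample points falling inside each pixel."""
    A = np.zeros((len(beams), n * n))
    span = 4.0 * half_width                    # long enough to cross the domain
    t = np.linspace(-span / 2, span / 2, n_samples)
    dt = t[1] - t[0]
    for i, (d, theta) in enumerate(beams):
        # Closest point to the origin lies along the normal direction,
        # then march along the beam direction (cos(theta), sin(theta)).
        x = d * np.cos(theta + np.pi / 2) + t * np.cos(theta)
        y = d * np.sin(theta + np.pi / 2) + t * np.sin(theta)
        inside = (np.abs(x) < half_width) & (np.abs(y) < half_width)
        col = ((x[inside] + half_width) / (2 * half_width) * n).astype(int).clip(0, n - 1)
        row = ((y[inside] + half_width) / (2 * half_width) * n).astype(int).clip(0, n - 1)
        np.add.at(A[i], row * n + col, dt)     # accumulate path length per pixel
    return A
```

Exact chord-length routines (e.g., Siddon-type ray tracing) would replace the sampling loop in a production implementation.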
In practical applications, only a limited number of beams are available, making the linear equation system rank-deficient [33]. To maximize the sampling efficiency, the probing beams are usually arranged in an irregular manner, e.g., as in [25]. The ART algorithm is an iterative reconstruction method often used in this case to solve the equation system, and can be described as

\vec{f}^{\,n+1} = \vec{f}^{\,n} + \beta \, \frac{p_i - \vec{A}_i \cdot \vec{f}^{\,n}}{\|\vec{A}_i\|^2} \, \vec{A}_i,   (4)

where the superscript n indicates the iteration number, the subscript i represents the i-th beam (i = 1, 2, ..., K), β ∈ (0, 2] is the relaxation factor, and \vec{A}_i is the row vector from the i-th row of the weight matrix A. Figure 2 illustrates how the iterative process (i.e., Eq. (4)) can be interpreted from a geometrical perspective. For simplicity, only two beams are assumed in this case and the relaxation factor β is set to unity. In the figure, the coordinate system V-O-W is defined in the two-dimensional vector space, and the purple lines S_1 and S_2 stand for the subspaces with normal vectors \vec{A}_1 and \vec{A}_2, respectively. The iteration usually starts from a guess solution \vec{f}^{\,0} and progressively approaches the final solution \vec{f} through alternating projections of the current solution onto the two subspaces. Each projection essentially updates the solution by adding a correction vector (represented as a red line) to the current solution. When β ≠ 1, the correction vector is either stretched (β > 1) or compressed (β < 1), leading to faster or slower convergence. Obviously, convergence can be accelerated by maximizing the angle between the two subspaces: for example, when S_2 is orthogonal to S_1, only two iterations are needed to reach convergence. For the numerical experiments conducted in the following sections, \vec{f}^{\,0} is set to a vector of zeros for simplicity.
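The ART update of Eq. (4) can be sketched as a Kaczmarz-type sweep over the rows of A, projecting the current estimate onto each beam's hyperplane. The zero initial guess follows the text; the sweep count is an illustrative assumption.

```python
import numpy as np

def art(A, p, beta=1.0, n_sweeps=50):
    """ART reconstruction: repeatedly project the current estimate onto
    the hyperplane A_i . f = p_i of each beam equation, with relaxation
    factor beta in (0, 2]."""
    f = np.zeros(A.shape[1])          # initial guess f^0 = 0, as in the paper
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            Ai = A[i]
            norm2 = Ai @ Ai
            if norm2 > 0:             # skip any all-zero row
                f += beta * (p[i] - Ai @ f) / norm2 * Ai
    return f
```

With two orthogonal rows the sweep converges in a single pass, which is exactly the geometric argument made around Fig. 2.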

Methods for beam optimization
It is well recognized that the beam arrangement greatly influences the reconstruction quality [25,26,28]. This paper aims to develop a new method to optimize the beam set, denoted above as φ, to achieve the best tomographic performance in terms of reconstruction accuracy.
In the following sub-sections, we first introduce two representative beam optimization methods from the literature and then propose our own.
The first method in the literature was derived rigorously and relies on the so-called resolution matrix; its detailed derivation can be found in [28,30], and we briefly summarize it here for the readers' convenience. To alleviate the ill-posedness of the TAS problem, constrained Tikhonov regularization, which enforces both non-negativity and smoothness, can be incorporated into Eq. (3), and the inversion process is cast into a minimization problem as

\vec{f}_\lambda = \arg\min_{\vec{f} \ge 0} \left\| \begin{bmatrix} A \\ \lambda L \end{bmatrix} \vec{f} - \begin{bmatrix} \vec{p} \\ 0 \end{bmatrix} \right\|_2^2,   (5)

where \vec{f}_\lambda is the regularized solution, λ the regularization factor, and L the Laplacian matrix defined as

L_{ij} = \begin{cases} 1, & i = j \\ -1/w, & \text{if pixel } j \text{ neighbors pixel } i \\ 0, & \text{otherwise,} \end{cases}   (6)

where w is the total number of grids neighboring the i-th grid.
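The Laplacian matrix L of Eq. (6) can be assembled directly for an n × n grid. The sketch below assumes 4-connected neighbors (up, down, left, right), which is one common reading of the definition; edge and corner pixels simply have fewer neighbors (smaller w).

```python
import numpy as np

def laplacian(n):
    """Smoothness-prior matrix L for an n x n pixel grid:
    L_ii = 1 and L_ij = -1/w for the w 4-connected neighbors of pixel i,
    following Eq. (6). Pixels are indexed row-major, i = r*n + c."""
    N = n * n
    L = np.eye(N)
    for i in range(N):
        r, c = divmod(i, n)
        nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        nbrs = [(rr, cc) for rr, cc in nbrs if 0 <= rr < n and 0 <= cc < n]
        for rr, cc in nbrs:
            L[i, rr * n + cc] = -1.0 / len(nbrs)
    return L
```

Note that each row of L sums to zero, so a spatially uniform field incurs no smoothness penalty.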
The least-squares solution to the Tikhonov-regularized problem can be written as

\vec{f}_\lambda = \vec{f} + (R - I)\,\vec{f} + \left( A^T A + \lambda^2 L^T L \right)^{-1} A^T \vec{e},   (7)

where R = (A^T A + λ² L^T L)^{-1} A^T A is the so-called resolution matrix, I the identity matrix, and \vec{e} the measurement noise. The latter two terms in the equation are the errors caused by regularization and measurement noise, respectively. It has been shown that, by choosing a proper λ, the error caused by noise is negligible [28]. In this case, the regularization error should be minimized for an optimal solution, which can be done by minimizing the Frobenius distance between R and I as

F_{RM} = \| R - I \|_F.   (8)

It has to be noted that the incorporation of the smoothness prior provides additional linear equations. Through singular value decomposition (SVD) of the augmented matrix [A; λL], we found that the prior can essentially provide an additional n² − K non-trivial singular values that help span the null space of the equation system. However, it can also alter the original K singular values of A when λ is over-estimated, in which case the smoothness condition distorts the information provided by the spatial sampling [34]. Thus, the parameter λ should be chosen in such a way that the largest K of the n² singular values of the augmented matrix do not overwhelm the original K non-trivial singular values of A.
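Under the formulation above, the RM indicator ‖R − I‖_F can be evaluated directly from A, L, and λ via the regularized normal equations. The following is a sketch of that computation, not the original authors' implementation.

```python
import numpy as np

def resolution_cost(A, L, lam):
    """Frobenius distance F_RM = ||R - I||_F for the resolution matrix
    R = (A^T A + lam^2 L^T L)^{-1} A^T A, i.e. the map from the true
    field to its Tikhonov-regularized reconstruction (Eq. (7))."""
    n = A.shape[1]
    M = A.T @ A + lam**2 * (L.T @ L)
    R = np.linalg.solve(M, A.T @ A)   # solve M R = A^T A column-wise
    return np.linalg.norm(R - np.eye(n), 'fro')
```

When A has full column rank and λ → 0, R → I and the cost vanishes, consistent with the regularization error being purely a consequence of the smoothness prior.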
The second method in the literature relies on a predictor, the so-called grid weighted factor (GWF) [26], defined as Eq. (9). The GWF assigns each grid cell a weight based on m, the total number of beams passing through the j-th grid, with the weight decaying exponentially as m grows. Unfortunately, no rigorous derivation is available for this approach.
As noted above, the two representative methods either rely on a heuristic indicator or were derived by explicitly incorporating certain prior information into the tomographic inversion. Here, we aim to develop an approach that is expected to be especially useful when prior information is not easily available or is only included implicitly through, e.g., iterative algorithms. As illustrated in Fig. 2, the fastest convergence is achieved when two subspaces are orthogonal. Such orthogonality also means the two corresponding linear equations are linearly independent of each other. In this case, the spatial sampling using these two beams is exploited to the maximum extent and provides the most useful information about the flow field. To quantify the linear independence between any beam pair, we propose the concept of orthogonal degree (OD). The OD between the i-th and j-th beams is defined as

OD_{i,j} = \frac{\vec{A}_i \cdot \vec{A}_j}{\|\vec{A}_i\| \, \|\vec{A}_j\|}.   (10)
To illustrate this concept, a simple tomographic example with a 3 × 3 discretization is depicted in Fig. 3. Four beams are used to probe the domain of interest in this case. The normal vectors of the corresponding subspaces are

\vec{A}_1 = (1,0,0,1,0,0,1,0,0),
\vec{A}_2 = (0,1,0,0,1,0,0,1,0),
\vec{A}_3 = (0,0,0,0,0,0,1,1,1),
\vec{A}_4 = (0,0,0,1,1,1,0,0,0).

It has to be noted that OD is not the cosine of the angle between the two beams in the physical space, but of the angle between their corresponding subspaces in the 9D vector space. For example, Beams 1 and 2 are parallel in the physical space but their corresponding subspaces are orthogonal; on the other hand, Beams 1 and 3 are orthogonal in the physical space but their corresponding subspaces are angled. Note also that every element of \vec{A}_i and \vec{A}_j is non-negative, leading to an OD ranging from zero to unity. When two beams transect the same set of pixels, the corresponding OD is unity or close to unity, meaning the two beams provide essentially the same or very similar spatial information. The ODs of all beam pairs can be calculated and arranged in matrix format (referred to as MOD hereafter) as

MOD = \begin{bmatrix} OD_{1,1} & \cdots & OD_{1,K} \\ \vdots & \ddots & \vdots \\ OD_{K,1} & \cdots & OD_{K,K} \end{bmatrix}.   (11)
Intuitively, the beam optimization problem can be cast into a minimization problem whose cost function is defined as, e.g., the average value or a weighted summation of all elements in the matrix. However, such definitions are ensemble parameters, which cannot reflect the value of a specific element. If any element in MOD is large, the two corresponding beams provide very similar spatial information, making one of them redundant. For this reason, the cost function should be designed to ensure that every element in MOD is small, so as to avoid such beam pairs. Thus, we define the cost function for the minimization problem as

F_{MOD} = \max\left( MOD - I \right),   (12)

where max(·) returns the maximum element of a matrix. It has to be noted that MOD is a symmetric matrix, so only its upper-triangular part needs to be calculated for each function evaluation to avoid redundant computation. The identity matrix is subtracted from MOD to remove the elements on the diagonal, each of which is the OD of a beam with itself. The minimization problem can then be solved using a standard global optimizer such as the simulated annealing (SA) algorithm [35,36], which is the topic of the following section.
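The cost function F_MOD is cheap to evaluate: form the matrix of orthogonal degrees from the row-normalized weight matrix and take its largest off-diagonal entry. A minimal sketch (computing the full symmetric matrix for clarity, rather than only the upper triangle as the text suggests for efficiency):

```python
import numpy as np

def f_mod(A):
    """F_MOD = max(MOD - I), where MOD_ij = (A_i . A_j)/(||A_i|| ||A_j||)
    is the orthogonal degree between beams i and j (Eqs. (10)-(12))."""
    norms = np.linalg.norm(A, axis=1)
    MOD = (A @ A.T) / np.outer(norms, norms)   # symmetric, ones on the diagonal
    return np.max(MOD - np.eye(A.shape[0]))    # largest off-diagonal OD
```

A perfectly orthogonal beam set gives F_MOD = 0, while two beams transecting the same pixels in the same proportions give F_MOD = 1.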

Simulated annealing algorithm
The simulated annealing (SA) algorithm was proposed in 1983 [37] and has been demonstrated to be a powerful tool for pinpointing a global minimum buried under a plethora of local minima. The SA algorithm numerically simulates an annealing process in which a material is first heated until it melts and then gradually cooled until it crystallizes into a state with minimum thermodynamic free energy [38]. In the algorithm, the cost function f is analogous to the free energy of the material that has to be reduced. A parameter T is used to model the physical temperature and control the probability of random walks according to the Metropolis criterion [39]:

\varphi \leftarrow \begin{cases} \varphi_{new}, & \text{if } \Delta f = f(\varphi_{new}) - f(\varphi) \le 0 \\ \varphi_{new}, & \text{else if } rand < \exp(-\Delta f / T) \\ \varphi, & \text{otherwise,} \end{cases}   (13)
where φ_new is the new solution generated by adding a random disturbance to the current solution φ, and rand is a random number drawn uniformly from (0, 1). During the annealing process, the new solution φ_new is always accepted if its cost function is smaller; otherwise it can still be accepted, with probability exp(−Δf/T). This mechanism enables the algorithm to escape local minima. As T decreases, the algorithm becomes less and less likely to accept a new solution with a larger function value, and it gradually converges to the global solution.
The flow chart shown in Fig. 4 provides a detailed description of the algorithm customized for the beam optimization problem. At the beginning, the algorithm is initialized by setting parameters including the upper and lower bounds of the variables and the initial temperature T_0. During the optimization, the optimal solution φ_opt and the corresponding cost function f(φ_opt) are recorded for each loop, within which a new solution φ_new is created and the corresponding function value f(φ_new) is calculated. The solution is accepted or rejected according to the Metropolis criterion defined in Eq. (13). This process is repeated a certain number of times before each temperature reduction, which is realized by setting T = αT, where α is a positive constant smaller than unity. Here, α is set to 0.85 as suggested in [36]. The initial temperature T_0 is selected such that the random walks at the initial temperature can cover the whole search space. The algorithm terminates when the relative change of f(φ_opt) between several consecutive temperature states becomes insignificant, i.e., smaller than a small positive number such as 10^−3.
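The annealing loop described above can be sketched as follows. The step size, inner-loop length, and single-state stopping check are illustrative simplifications of the schedule in the text (which checks the change over several consecutive temperature states).

```python
import numpy as np

def simulated_anneal(cost, phi0, bounds, T0=1.0, alpha=0.85,
                     n_inner=100, tol=1e-3, rng=None):
    """Generic SA loop: random walk with Metropolis acceptance (Eq. (13)),
    geometric cooling T <- alpha * T, and a stop when the best cost barely
    changes across one temperature state (simplified stopping rule)."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    phi, f = phi0.copy(), cost(phi0)
    phi_opt, f_opt = phi.copy(), f
    T = T0
    while True:
        f_prev = f_opt
        for _ in range(n_inner):
            step = 0.1 * (hi - lo) * rng.standard_normal(phi.size)
            phi_new = np.clip(phi + step, lo, hi)
            f_new = cost(phi_new)
            # Metropolis criterion: always accept improvements; accept
            # uphill moves with probability exp(-delta_f / T).
            if f_new < f or rng.random() < np.exp(-(f_new - f) / T):
                phi, f = phi_new, f_new
                if f < f_opt:
                    phi_opt, f_opt = phi.copy(), f
        T *= alpha                                   # cooling schedule
        if abs(f_prev - f_opt) < tol * max(abs(f_prev), 1e-12):
            return phi_opt, f_opt
```

For the beam optimization problem, `cost` would be one of F_MOD, F_RM, or F_GWF evaluated on the weight matrix built from the 64-dimensional vector of (d, θ) pairs.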

Demonstration of the new method
To verify our hypothesis that the cost function F_MOD defined above is an effective predictor of the reconstruction error, we performed proof-of-concept numerical demonstrations on all three methods for comparison. To facilitate the discussion, we hereafter refer to the methods using F_MOD, F_RM, and F_GWF as MOD, RM, and GWF, respectively. It is assumed that 32 beams, which can be arranged freely, are used to probe the flow field. The beam optimization problem can then be cast into a minimization problem with a total of 64 variables (i.e., 32 d's and 32 θ's). The minimization problems with F_MOD, F_RM, and F_GWF were solved using the SA algorithm introduced in the previous section. The evolution of the cost functions (labeled as red lines with solid circles) is plotted individually in each panel of Fig. 5 as a function of the number of temperature reductions. Each curve was normalized, as only the relative change is important here. It has to be pointed out that, for RM, the optimal regularization factor λ varies with φ (i.e., the beam configuration) for a given discretization, and should be chosen such that the largest 32 of the 100 non-trivial singular values of [A; λL] do not overwhelm the original 32 non-trivial singular values of A. Figure 6 shows how λ was determined for an example beam arrangement, i.e., the one optimized by RM in Panel (b) of Fig. 5. The singular values of A and of [A; λL] for various λ's are shown in the figure. As can be seen, when λ = 1, the largest singular values of the augmented matrix start to overwhelm their counterparts of A, indicating that the optimal λ lies between 0.1 and 1 [28]. For this case, λ was therefore chosen to be 0.5. Similar procedures were applied consistently to determine λ for the other cases to make sure RM works properly. On the other hand, no additional assumption about the flow field is made in the optimization processes of MOD and GWF; only the spatial sampling, i.e., the beam configuration, is considered. To study the correlation between F_MOD, F_RM, F_GWF and the reconstruction error, for each method we took the optimal beam arrangement at each temperature state and tested it on a representative phantom shown in Panel (a) of Fig. 7, which features two Gaussian peaks mimicking the multi-modal situations encountered in combustion applications. It has to be noted that the magnitude of the target field is set arbitrarily here, since only the relative variations within the field affect the tomographic inversion. The 32 LOS absorption 'measurements' were artificially simulated from the phantom and the beam configurations.
Tomographic inversions were then performed using both ART and Tikhonov reconstruction. The reconstruction error was quantified by

e = \frac{\|\vec{f}_{rec} - \vec{f}_{exact}\|_2}{\|\vec{f}_{exact}\|_2},   (14)

where \vec{f}_{exact} represents the phantom and \vec{f}_{rec} the reconstruction, respectively. The corresponding reconstruction errors are plotted for all methods in the three panels of Fig. 5. The r² correlations between the cost functions and the reconstruction errors from both ART and Tikhonov reconstruction were also calculated for each case and labeled r²_ART and r²_Tik, respectively. As indicated by the r² correlations, both F_RM and F_GWF follow the reconstruction errors throughout the optimization process more closely than F_MOD; however, GWF ended with a large reconstruction error, while RM ended with a small one that is nevertheless still larger than that of MOD. The results shown in Fig. 5 suggest that a good r² correlation does not necessarily qualify an effective predictor. To show how MOD works, a video (see Visualization 1) was made to visually present the evolution of both the beam arrangement being optimized and the corresponding reconstructions from ART.
We also tested these methods on the other representative phantoms shown in Panels (b) and (c) of Fig. 7, respectively. The second phantom was designed to test the beam optimization methods on flames featuring sharp edges, such as a McKenna flame [40]. The third phantom was used to simulate a turbulent flame with more complex structures. The same conclusions were drawn from these studies.

Comparison of various beam arrangements
To demonstrate how beam optimization can improve the reconstruction quality for the case discussed in Section 3.1 i.e. the one with 10 × 10 discretization, we compare here a few representative beam configurations, which are, plotted both in the physical space shown as Fig. 8(a) and the so-called Radon space [28] shown as Fig. 8(b).Beam arrangements #1 (parallel beams) and #2 (fanned beams) represent the commonly adopted beam sampling strategies which distribute the beams uniformly within each projection; Beam arrangement #3 and #4 had been used in practical applications in which the beams are deployed in an irregular and sparse manner [25]; and the remaining three arrangements #5, #6, and #7 are those optimized with GWF, RM and MOD respectively.Again, numerical studies were performed on these phantoms.Tomographic inversions were carried out with each of these beam arrangements by using the ART algorithm, and the corresponding reconstructions for Phantom 1 are shown in Fig. 9.As can be seen, the reconstructions vary dramatically from case to case, indicating spatial sampling has a critical effect on the reconstruction accuracy.Beam arrangements #1 and #2 result in a poor reconstruction quality due to the insufficient sampling of the marginal areas especially the corners of the tomography field.The irregular and sparse beam arrangements (#3 and #4) shown in Figs.8(a) and 8(b) acquired poor reconstructions even though it had been shown to produce good reconstruction quality by applying an iterative method and a median filter dozens of times [25].However, in this case the good performance is attributed to the smoothness regularization enforced by the median filter rather than the spatial sampling itself.The regularization for tomographic inversion is a separate topic which will not be discussed in detail here.Excellent discussion on regularization can be found in [32,33].In the Beam arrangement of #5, most of the beams were arranged at an angle of either π/4 or 3π/4 
since maximizing Eq. (9) forces the beams to 1) traverse as many pixels as possible and 2) avoid sampling the same pixels, since the weight contributing to GWF decays exponentially as the number of beams passing through a pixel increases. Both beam arrangements #6 and #7 led to better reconstructions than the others. Beam arrangement #7 features the smallest OD, i.e. the lowest linear dependencies among all beam pairs, providing the richest spatial information for tomographic reconstruction. We also tested the two optimized beam arrangements from RM and MOD using Tikhonov reconstruction [28]. The corresponding results are shown in Fig. 10. Panels (a) and (b) are from Tikhonov reconstruction with λ set to 0.5, and Panels (c) and (d) are the results for λ = 0.05. As can be seen, the RM method outperforms MOD for λ = 0.5. This is understandable: when the regularization is relatively strong, the error caused by regularization dominates, and the RM method can effectively reduce the regularization error. On the other hand, when the regularization is relatively weak, the error caused by insufficient spatial sampling overwhelms the regularization error, and in this case MOD wins. Tikhonov reconstruction shows similar performance to ART when λ = 0.05. Since the MOD method is derived from the spatial sampling alone, without any assumptions about the flow field, we expect it to be especially useful when prior information is not easily available or not intended to be embedded in the inversion process. On the other hand, the RM method is recommended when a smoothness prior can be assumed. However, we want to point out that MOD is not inherently incompatible with regularization. Regularization can always be incorporated in the tomographic inversion, either explicitly or implicitly. For example, the ART algorithm used here essentially entails the implicit prior that the best solution is the one with maximum entropy (see [41] for more details). However, since ART only
updates the pixels that are traversed by the sampling beams, when the arranged beams cannot cover all the pixels, the uncovered pixels will remain at the initial guess. In this case, prior information, e.g. smoothness, which can connect the covered and uncovered pixels, should be used instead. From the comparisons discussed above we can conclude that 1) reconstruction quality can be improved substantially using the optimized arrangements, and 2) the arrangement optimized by MOD outperforms the others. The same conclusions hold for the other phantoms, as discussed in the following subsections.
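The ART updates discussed above admit a compact sketch. The snippet below is an illustrative implementation, not the exact code used in this work: each beam equation a_i · x = b_i defines a hyperplane, and one sweep projects the current estimate onto each hyperplane in turn. The toy two-pixel system and the relaxation parameter are assumptions for demonstration only.

```python
import numpy as np

def art_sweep(A, b, x, relax=1.0):
    """One full sweep of the Algebraic Reconstruction Technique (ART).

    Each row a_i of the weight matrix A defines one beam equation
    a_i . x = b_i; the estimate x is projected onto the corresponding
    hyperplane, one beam at a time.
    """
    for a_i, b_i in zip(A, b):
        norm2 = a_i @ a_i
        if norm2 > 0:  # skip beams that miss every pixel
            x = x + relax * (b_i - a_i @ x) / norm2 * a_i
    return x

# toy 2-pixel, 2-beam system with exact solution x = [1, 2]
A = np.array([[1.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 3.0])
x = np.zeros(2)
for _ in range(50):
    x = art_sweep(A, b, x)
```

Note that, consistent with the discussion above, a pixel crossed by no beam (a zero column of A) is never updated and stays at its initial value.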

Effects of noise on reconstruction accuracy
To examine the performance of the above-mentioned beam arrangements under noisy conditions, we repeated the reconstruction processes after adding 5% uniform noise to the LOS measurements, i.e. the right-hand side of Eq. (3). Again, the optimized beam arrangement from MOD achieved a better reconstruction than that from RM for both Phantom 1 and Phantom 3. The results shown in Panel (b) of Fig. 11 reveal that MOD successfully recovered the twin peaks in Phantom 1, whereas RM failed. Thus, another advantage of MOD is its good noise immunity. This merit is extremely desirable for harsh combustion environments, where the tomographic system suffers from window fouling and mechanical vibration [42].
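For concreteness, the noise model can be sketched as follows; only the 5% uniform level is taken from the text, while the multiplicative form and the seeded generator are illustrative assumptions.

```python
import numpy as np

def add_uniform_noise(b, level=0.05, rng=None):
    """Corrupt the LOS measurements b (the RHS of the linear system)
    with multiplicative uniform noise of the given level (5% here)."""
    rng = rng or np.random.default_rng(0)
    return b * (1.0 + level * rng.uniform(-1.0, 1.0, size=b.shape))

b = np.linspace(1.0, 2.0, 8)   # synthetic noise-free measurements
b_noisy = add_uniform_noise(b)
```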
To further validate our method, we tested GWF, RM, and MOD on more phantoms, with the results plotted in Fig. 12. Panel (a) shows the reconstruction errors for 100 phantoms generated by randomly adjusting the positions of the twin Gaussian peaks in Phantom 1. The blue line with triangles, red line with circles, and black line with squares represent the results for GWF, RM, and MOD, respectively. Panel (b) is the counterpart of Panel (a) but with another 100 phantoms generated by randomly adjusting the positions of the square and the Gaussian peak in Phantom 2. As can be seen, MOD consistently outperforms GWF and RM for all these phantoms.
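The random phantom family and the error metric can be sketched as follows; the peak width, the admissible centre range, and the normalized rms error form are illustrative assumptions, as the paper's exact definitions are given elsewhere.

```python
import numpy as np

def gaussian_peak(n, center, sigma=1.5):
    """One Gaussian peak on an n x n grid (illustrative parameters)."""
    y, x = np.mgrid[0:n, 0:n]
    cx, cy = center
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))

def twin_peak_phantom(n=10, rng=None):
    """Phantom-1-style field: two Gaussian peaks at randomly
    adjusted positions inside the 10 x 10 domain."""
    rng = rng or np.random.default_rng(0)
    c1, c2 = rng.uniform(1, n - 2, size=(2, 2))
    return gaussian_peak(n, c1) + gaussian_peak(n, c2)

def reconstruction_error(f_rec, f_true):
    """Normalized rms error used to compare reconstructions."""
    return np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)

phantom = twin_peak_phantom()
```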

Further analysis on beam optimization
To further analyze our method, we tested it with five more cases. The simulation conditions for those cases are summarized in Table 1. The optimized beam arrangements for all cases are plotted in the physical space in the first row of Fig. 13, and the corresponding Radon plots are shown in the second row. By analyzing the optimized beam arrangements and Radon plots in Fig. 13, a common pattern can be observed. In all cases, the rows are transected by a set of quasi-horizontal beams whose corresponding subspaces are orthogonal in the vector space; beams arranged in this way are the most effective in reducing the cost function F_MOD. Similarly, all the columns are transected by a set of quasi-vertical beams. The remaining beams are distributed almost symmetrically and uniformly around the origin within the other projections.
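As a sketch of how a cost of the F_MOD kind can be evaluated, the snippet below computes one plausible realization of the orthogonality degree: the mean absolute cosine between all distinct row pairs of the weight matrix A. The exact normalization used in the paper's definition of OD may differ.

```python
import numpy as np

def orthogonality_degree(A):
    """Mean |cos| between all distinct row pairs of the weight matrix A:
    0 for mutually orthogonal beam equations, 1 for fully parallel ones
    (an assumed form of the OD / F_MOD cost)."""
    R = A / np.linalg.norm(A, axis=1, keepdims=True)  # unit-length rows
    G = np.abs(R @ R.T)                               # |cos| for every pair
    iu = np.triu_indices(A.shape[0], k=1)             # distinct pairs only
    return G[iu].mean()
```

A perpendicular horizontal/vertical beam pair contributes zero to this cost, which is consistent with the quasi-horizontal and quasi-vertical sets observed in the optimized arrangements.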

Conclusion
In summary, this paper proposes a new beam optimization method that aims to increase the linear independence between the equations defined by the probing beams. The numerical studies performed in this work validated its effectiveness and good noise immunity. Since our method is derived without any pre-assumption about the target flow field, such as smoothness, we expect it to be especially useful when prior information is not easily available or not intended to be embedded in the tomographic reconstruction at a later stage, making it a perfect complement to the existing methods. Moreover, accelerated convergence is possible if a projections-onto-convex-sets algorithm such as ART is used for the tomographic inversion. Finally, we want to point out that this method is not limited to absorption tomography but can also be applied to other tomographic modalities, such as interferometric tomography [43] and deflectometric tomography [44].

Fig. 1. Definition of the coordinate system and the specifications for the i-th beam transecting the tomography field.

Fig. 2. Schematic of the iterative process of the ART algorithm.

Fig. 3. Illustration of the concept of orthogonality degree with a 3 × 3 tomographic problem.

Fig. 4. Flow chart of the SA algorithm customized for the beam optimization problem.
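A generic simulated-annealing loop of the kind shown in the flow chart can be sketched as follows; the cooling schedule, the proposal mechanism, and the 1-D demo cost are illustrative assumptions, not the customized settings of the paper.

```python
import numpy as np

def anneal_beams(cost, perturb, beams0, T0=1.0, cooling=0.95, n_iter=2000, seed=0):
    """Generic simulated-annealing minimisation of a beam-arrangement
    cost such as F_MOD; `cost` and `perturb` are problem-specific
    callables supplied by the user."""
    rng = np.random.default_rng(seed)
    beams, f = beams0, cost(beams0)
    best, f_best = beams, f
    T = T0
    for _ in range(n_iter):
        cand = perturb(beams, rng)
        f_cand = cost(cand)
        # always accept improvements; accept uphill moves with
        # Boltzmann probability exp(-dF / T), then cool down
        if f_cand < f or rng.random() < np.exp(-(f_cand - f) / T):
            beams, f = cand, f_cand
            if f < f_best:
                best, f_best = beams, f
        T *= cooling
    return best, f_best

# 1-D demo cost: anneal x^2 from x0 = 3.0 toward the minimum at x = 0
best, f_best = anneal_beams(cost=lambda x: x * x,
                            perturb=lambda x, rng: x + rng.normal(0.0, 0.1),
                            beams0=3.0)
```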

Fig. 5. The evolution of the cost functions along with the corresponding reconstruction errors from both ART and Tikhonov reconstruction for GWF, RM, and MOD, respectively.

Fig. 6. Singular values of the weight matrix A and the augmented matrices [A; λL] for various λ, for the beam arrangement optimized by RM as shown in Panel (b) of Fig. 5.

Fig. 8. (a) Seven example beam configurations represented in the physical space. (b) Seven example beam configurations represented in the Radon space.

Fig. 9. Reconstructions for the beam arrangements shown in Figs. 8(a) and 8(b) using the ART algorithm.

Fig. 10. Reconstructions for the beam arrangements shown in Panels #6 and #7 of Fig. 8(a) using the Tikhonov regularization method. Panels (a) and (b) are from Tikhonov reconstruction with λ set to 0.5; Panels (c) and (d) are the results for λ = 0.05.
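The Tikhonov reconstructions compared in Fig. 10 can be sketched via the standard augmented least-squares formulation; taking L as the identity is an illustrative assumption, as the paper builds [A; λL] with its own regularization operator L.

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Tikhonov reconstruction: minimise ||A x - b||^2 + lam^2 ||L x||^2
    by solving the augmented least-squares system [A; lam*L] x = [b; 0]."""
    n = A.shape[1]
    if L is None:
        L = np.eye(n)  # identity prior; the paper's operator L may differ
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# demo: with a tiny lam the solution approaches the unregularised one
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0])
x = tikhonov(A, b, lam=1e-8)
```

Increasing lam trades data fidelity for the prior encoded by L, which is why the relative ranking of RM and MOD flips between λ = 0.5 and λ = 0.05.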
Due to the obvious superiority of MOD and RM, we repeated the simulative studies shown above with the beam arrangements optimized by these two methods. The corresponding reconstructions are shown in Fig. 11. Panels (a) and (c) are the reconstructions for Phantom 1 and Phantom 3, respectively, using the optimized beam arrangement from RM; Panels (b) and (d) are the counterparts of Panels (a) and (c) using the MOD method.

Fig. 11. Panels (a) and (c) are the reconstructions for Phantom 1 and Phantom 3 using the optimized beam arrangement from RM; Panels (b) and (d) are the counterparts of Panels (a) and (c) using the MOD method. The ART algorithm was used in these cases.

Fig. 12. Panel (a) shows the reconstruction errors for 100 phantoms generated by randomly adjusting the positions of the twin Gaussian peaks in Phantom 1; Panel (b) is the counterpart of Panel (a) but with another 100 phantoms generated by randomly adjusting the positions of the square and the Gaussian peak in Phantom 2.

Fig. 13. Five beam optimization cases using MOD. The first row shows the optimized beam arrangements in the physical space; the second row is the counterpart of the first row in the Radon space.