Digital spectral separation methods and systems for bioluminescence imaging

We propose a digital spectral separation (DSS) system and methods to extract spectral information optimally from a weak multispectral signal, such as in bioluminescence imaging (BLI) studies. This system utilizes our newly invented spatially-translated spectral-image mixer (SSM), which consists of dichroic beam splitters, a mirror, and a DSS algorithm. The DSS approach overcomes the shortcomings of the data acquisition scheme used in current BLI systems. Primarily, using our DSS scheme, spectral information will not be filtered out. Accordingly, truly parallel multi-spectral multi-view acquisition is enabled for the first time to minimize experimental time and optimize data quality. This approach also permits recovery of the bioluminescent signal time course, which is useful to study the kinetics of multiple bioluminescent probes using multi-spectral bioluminescence tomography (MSBT).

© 2008 Optical Society of America

OCIS codes: (170.3880) Medical and biological imaging; (110.3010) Image reconstruction techniques.

References and links

1. R. Weissleder and V. Ntziachristos, "Shedding light onto live molecular targets," Nat. Med. 9, 123-128 (2003).
2. V. Ntziachristos, J. Ripoll, L. H. V. Wang, and R. Weissleder, "Looking and listening to light: the evolution of whole-body photonic imaging," Nat. Biotechnol. 23, 313-320 (2005).
3. M. K. So, C. J. Xu, A. M. Loening, S. S. Gambhir, and J. H. Rao, "Self-illuminating quantum dot conjugates for in vivo imaging," Nat. Biotechnol. 24, 339-343 (2006).
4. C. Kuo, O. Conquoz, T. Troy, D. Zwarg, and B. Rice, "Bioluminescent tomography for in vivo localization and quantification of luminescent sources from a multiple-view imaging system," Mol. Imaging 4, 370 (2005).
5. C. Kuo, H. Xu, and B. Rice, "Improved techniques in diffuse luminescent tomography on whole animal meshes," in Annual Meeting of The Society for Molecular Imaging, Hawaii, 2006.
6. C. Kuo, O. Conquoz, T. Troy, H. Xu, and B. Rice, "Three-dimensional reconstruction of in vivo bioluminescent sources based on multi-spectral imaging," J. Biomed. Opt. 12, 024007 (2007).
7. G. Wang, E. A. Hoffman, and G. McLennan, "Systems and methods for bioluminescent computed tomographic reconstruction," US Patent Application No. 10/791140, 2002.
8. G. Wang, et al., "Development of the first bioluminescent CT scanner," Radiology 229, 566 (2003).
9. W. X. Cong, et al., "Practical reconstruction method for bioluminescence tomography," Opt. Express 13, 6756-6771 (2005).
10. A. Cong and G. Wang, "Multi-spectral bioluminescence tomography: methodology and simulation," Int. J. Biomed. Imaging, ID 57614 (2006).
11. W. Cong and G. Wang, "Boundary integral method for bioluminescence tomography," J. Biomed. Opt. 11, 020503 (2006).
12. W. Cong, et al., "A Born-type approximation method for bioluminescence tomography," Med. Phys. 33, 679-686 (2006).
13. G. Wang, et al., "In vivo mouse studies with bioluminescence tomography," Opt. Express 14, 7801-7809 (2006).
14. W. Cong, A. Cong, H. Shen, Y. Liu, and G. Wang, "Flux vector formulation for photon propagation in the biological tissue," Opt. Lett. 32, 2837-2839 (2007).
15. V. Ntziachristos, "Fluorescence molecular imaging," Annu. Rev. Biomed. Eng. 8, 1-33 (2006).
16. G. Wang, Y. Li, and M. Jiang, "Uniqueness theorems in bioluminescence tomography," Med. Phys. 31, 2289-2299 (2004).
17. W. Han, W. Cong, and G. Wang, "Mathematical study and numerical simulation of multispectral bioluminescence tomography," Int. J. Biomed. Imaging, ID 54390 (2006).
18. G. Alexandrakis, F. R. Rannou, and A. F. Chatziioannou, "3D bioluminescence imaging by use of a combined optical-PET tomographic system: a computer simulation feasibility study," Phys. Med. Biol. 50, 4225-4241 (2005).
19. A. J. Chaudhari, et al., "Hyperspectral and multispectral bioluminescence optical tomography for small animal imaging," Phys. Med. Biol. 50, 5421-5441 (2005).
20. H. Dehghani, et al., "Spectrally resolved bioluminescence optical tomography," Opt. Lett. 31, 365-367 (2006).
21. X. Qian, R. Svensson, X. Y. Ying, H. Shen, W. Cong, M. Henry, and G. Wang, "Measurement of temperature-dependent bioluminescent spectra in vivo," in Annual Meeting of The Society for Molecular Imaging, Providence, Rhode Island, 2007.
22. H. Lang and C. Bouwhuis, "Optical system for a color television camera," US Patent 3202039, 1961.
23. K. Hideo Hoshuyama, "Color separation device of solid-state image sensor," US Patent 7,138,663 B2, 2003.
24. H. Macleod, Thin Film Optical Filters (Taylor and Francis, Philadelphia, PA, 2001).
25. G. Wang, H. Shen, D. Kumar, X. Qian, and W. Cong, "The first bioluminescence tomography system for simultaneous acquisition of multi-view and multi-spectral data," Int. J. Biomed. Imaging, ID 58601 (2006).
26. C. Lawson and R. Hanson, Solving Least Squares Problems (Prentice-Hall, Englewood Cliffs, 1974).
27. A. Björck, Numerical Methods for Least Squares Problems (SIAM, Philadelphia, PA, 1996).
28. J. Cantarella and M. Piatek, "tsnnls: a sparse nonnegative least-squares solver," http://www.cs.dug.edu/~piatek/tsnnls/ (2004).
29. R. Storn and K. Price, "Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces," J. Global Optim. 11, 341-359 (1997).
30. K. Price, R. Storn, and J. Lampinen, Differential Evolution: A Practical Approach to Global Optimization (Springer, Berlin, 2004).
31. V. Feoktistov, Differential Evolution: In Search of Solutions (Springer, New York, 2006).
32. A. Cong, W. Cong, H. Shen, et al., "Bioluminescence tomography using genetic algorithm," working paper, 2007.

#89375 - $15.00 USD. Received 2 Nov 2007; revised 5 Dec 2007; accepted 18 Jan 2008; published 24 Jan 2008. (C) 2008 OSA, 4 February 2008 / Vol. 16, No. 3 / Optics Express 1719.


Introduction
Among molecular imaging modalities, optical imaging, and bioluminescence imaging in particular, has attracted remarkable attention for its unique advantages in probing capabilities, sensitivity, specificity, and cost-effectiveness [1,2,3]. There are two main companies in the bioluminescence imaging area: BERTHOLD TECHNOLOGIES (http://www.berthold.com/), which provides the NightOWL II LB 983 systems, and Caliper Life Sciences (http://www.caliperls.com), which markets the IVIS imaging systems, including IVIS Lumina, IVIS 100, IVIS 200, IVIS-3D and IVIS-Spectrum [4,5,6]. These systems have gained significant popularity over the past years. However, due to light scattering and absorption in biological tissues, bioluminescence imaging is seriously limited in quantitative studies when bioluminescent sources are deep inside a small animal.
Bioluminescence tomography (BLT) is an emerging and promising molecular imaging technology that aims to reconstruct a 3D bioluminescent source distribution, reflecting the concentration of bioluminescent cells in a living mouse, from external views around the mouse [7,8,9,10,11,12,13,14]. Similar to the development of x-ray CT from planar radiography, BLT allows localized and quantitative analyses of bioluminescent source/probe distributions, while existing planar bioluminescent imaging is primarily qualitative. While fluorescence molecular tomography (FMT) of small animals has seen significant progress over the past four years [15], BLT is only emerging. The major reason for this developmental delay is that the bioluminescence tomography problem is less accurately posed and significantly more challenging than the fluorescence problem. We have studied and rigorously demonstrated this point [16]. For this reason, BLT development requires additional information to compensate for the lack of intrinsic information in the external bioluminescent measurement. It is already recognized that a significant improvement in BLT reconstruction quality can be achieved by 1) utilization of spectral information in the bioluminescent signal, 2) incorporation of a priori knowledge, and 3) registration of an individualized anatomical volume and the corresponding attenuation map derived optically at the wavelengths of bioluminescence propagation.

Fig. 1. Bioluminescent intensity change over time at 37 °C using the Xenogen IVIS 100 in a transgenic mouse injected with 150 mg/kg D-luciferin [21].
To improve the BLT result, we proposed multi-spectral bioluminescence tomography (MSBT), which samples the bioluminescence spectrum into a number of bands or channels for multi-wavelength imaging and then performs the source reconstruction [10,17]. Other groups in the BLT field have also studied multi-spectral bioluminescence tomography [18,19,20]. The results all demonstrate that multi-spectral information can improve the BLT reconstruction.
All the current commercial BLI/BLT systems use a filter wheel to capture multiple spectral images sequentially, each time with a different filter. This approach assumes that the bioluminescence signal is stable during the data acquisition process. Unfortunately, this assumption is only realistic for a very short time. Several studies show that the time course of the bioluminescence signal can span a very broad dynamic range during BLI/BLT experiments. For example, as shown in Fig. 1, the bioluminescent signal changes over 10-fold during a 30-minute experiment [21]. In an MSBT experiment, at least 4 views and 2 spectral channels are needed, and the experimental time is often over 40 minutes using existing systems such as the IVIS 100. In this situation, the assumption is clearly violated, and it becomes very difficult to improve the fundamentally compromised data quality retrospectively. Hence, this kind of measurement will introduce significant inconsistency into the resulting datasets and most likely render the reconstruction useless.
In the photography field, there are other designs that can acquire a multi-spectral signal simultaneously. The three-CCD (charge-coupled device) technology [22] is used in some cameras and camcorders; it relies on three dichroic prisms to divide the spectrum and direct each spectral component to its corresponding CCD. Since three CCDs are required, the hardware expense is high, and the approach is not suitable for low f-number (large aperture) applications. Recently, Foveon Inc. (http://www.foveon.com/) introduced a new type of imaging sensor with three layers for three spectral bands, based on the fact that red, green, and blue signals penetrate silicon to different depths; this means that the spectral partition is fixed. Nikon has an alternative design that puts a micro-lens and micro-dichroic array in front of the sensor [23]. Since the spectrum-splitting device is rigidly attached to the sensor, the spectral partition cannot be changed flexibly.
In this paper, we present a digital spectral separation (DSS) approach, which uses dichroic beam splitters and a digital spectral separation method to extract different spectral components from multi-spectral images. Primarily, spectral information will not be filtered out, and the time course of the bioluminescence signal can be optimally recovered. As a result, truly parallel multi-spectral multi-view acquisition is enabled, substantially reducing the experimental time for multi-spectral imaging. Furthermore, all the component images are on the same focal plane of a single CCD camera, providing maximum image clarity. In the subsequent sections, we present a representative prototype, describe the associated DSS algorithms, and report numerical simulation results. In the last section, we discuss several relevant issues and conclude the paper.

SSM device
Figure 2 illustrates a two-spectrum spatially-translated spectral-image mixer (SSM). The base of this device is a glass plate with two surface coatings: a long-wave-pass dichroic coating [24] on the first surface and a mirror coating on the back surface. To demonstrate the principle of the SSM, let us assume that both the dichroic coating and the mirror have ideal physical characteristics, and that the light source contains two spectral components: blue and red. The long-wave-pass dichroic coating transmits the light of the longer wavelength (red) and reflects that of the shorter wavelength (blue). The red component is then reflected by the mirror and mixed with the blue component after a spatial shift. Note that in the mixed signal, the first and last parts are spectrally pure, while those in between are spectrally mixed.
This layered combination of the dichroic coating and the mirror surface is the key to our proposed digital spectral separation. From now on, for convenience, we call such a device a spatially-translated spectral-image mixer, or splitter-covered signal-reflection mirror (SSM). We write SSM_{p|q} for an SSM device with a dichroic coating that stops a signal in band p and transmits a signal in band q; as a special case, a mirror can be denoted as SSM_{p+q|0}. Note that the surfaces in an SSM are not necessarily flat and can be adapted into different shapes and orientations for various applications. In contrast to the most popular filter-wheel techniques, the most important advantage of an SSM is that it utilizes all the intercepted photons of the various colors, with all the involved virtual images on the focal plane of the camera.
In our prototype design, the dichroic coating on the first surface will have a stop band of [380, 540] nm and a pass band of [540, 780] nm. At a 45° incident angle, the stop band can have a > 97% reflectivity, and the pass band a > 90% transmittance. The second surface will be coated with an enhanced silver coating or a dielectric mirror stack coating, which can have a 99.9% reflectivity. The base of the SSM device is made of optical-grade synthetic fused silica, which has a > 94% external transmittance from 380 nm to 780 nm. The thickness of the SSM device will be set between 5 mm and 10 mm. This SSM device can be customized from CVI Optical Components and Assemblies (http://www.cvilaser.com/).
Let a be the stop-band reflectivity and b be the pass-band transmittance. As shown in Fig. 3, the SSM device can be modeled as follows. The beam splitter reflects ap + (1 − b)q and transmits (1 − a)p + bq. The initially transmitted part is reflected back to the dichroic coating by the mirror, and is reflected again into the glass in the proportion a(1 − a)p + b(1 − b)q of the total energy, with the rest, (1 − a)²p + b²q, going through the dichroic coating to the CCD. The reflected part repeats this route until its energy vanishes. The beams transmitted to the CCD can be calculated as

T_0 = ap + (1 − b)q, T_l = a^(l−1)(1 − a)²p + b²(1 − b)^(l−1)q, l = 1, 2, ...,

where T_0 is the direct reflection off the first surface and T_l is the sub-image produced after l mirror reflections, each shifted by l times the spatial shift. Of course, the process cannot be traced forever; after l = t reflections the residual energy becomes

a^t(1 − a)p + b(1 − b)^t q,

which can be ignored for practical purposes when it is comparable with the Poisson data noise.
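As a sanity check on the reflection bookkeeping above, the series can be tabulated numerically. This is a minimal sketch with our own function names, assuming an ideal back mirror; the per-bounce fractions follow directly from the a/b model just described, and the fractions plus the residual sum to unity for each band:

```python
# Sketch of the SSM multiple-reflection energy model (our own variable
# names; a = stop-band reflectivity, b = pass-band transmittance).
# The l-th sub-image reaching the CCD (l = 0 is the direct reflection
# off the dichroic coating; l >= 1 are mirror bounces, each shifted by
# one additional spatial offset) carries these fractions of p and q:

def ssm_image_fractions(a, b, n_reflections):
    """Fraction of p- and q-energy carried by each shifted sub-image."""
    images = [(a, 1.0 - b)]                      # direct reflection T_0
    for l in range(1, n_reflections + 1):
        images.append((a**(l - 1) * (1.0 - a)**2,
                       b**2 * (1.0 - b)**(l - 1)))
    return images

def residual_energy(a, b, t):
    """Energy still trapped in the glass after t mirror bounces."""
    return a**t * (1.0 - a), b * (1.0 - b)**t

imgs = ssm_image_fractions(a=0.99, b=0.90, n_reflections=3)
res_p, res_q = residual_energy(0.99, 0.90, 3)
print(imgs, res_p, res_q)
```

For the prototype coating values (a = 0.99, b = 0.90) the residual after three bounces is already below 1% of either band, consistent with truncating the series early.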
The coefficients a and b can be used to decide the number of significant reflections. Another important parameter is the spatial shift. This spatial shift can be easily calculated from the thickness of the SSM, the incident angle, and the refractive index of the glass. We can also measure the spatial shift after the experimental setting is fixed, as shown in section 4.3.
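The geometric calculation mentioned above can be sketched as follows, assuming a plane-parallel plate and the standard displaced-ghost-image formula; the thickness, angle, and refractive-index values below are illustrative (fused silica has n ≈ 1.46 in the visible):

```python
import math

def ssm_spatial_shift(thickness_mm, incident_deg, n_glass):
    """Lateral separation between the image reflected at the dichroic
    (first) surface and the image reflected at the mirrored back
    surface of a plane-parallel plate:
        d = 2 t tan(theta_r) cos(theta_i),
    with theta_r the refraction angle inside the glass."""
    ti = math.radians(incident_deg)
    tr = math.asin(math.sin(ti) / n_glass)   # Snell's law
    return 2.0 * thickness_mm * math.tan(tr) * math.cos(ti)

# 5 mm fused-silica plate (n ~ 1.46) at the 45-degree incidence used
# in the prototype:
shift = ssm_spatial_shift(5.0, 45.0, 1.46)
print(round(shift, 2), "mm")
```

This gives a shift of roughly 3.9 mm for a 5 mm plate, i.e., on the order of ten binned pixels in the simulation geometry used later.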

Four-view DSS prototype
As shown in Fig. 4, the four-view DSS system, which is based on our previously published four-view design [25], is unique in that it allows digital separation of two spectral components. The main difference from the previous design is that the four mirrors are replaced by four SSMs.
In this system, two of the SSMs are mirrors (5 mm thick), which redirect the original left and right views to the CCD. The other two SSMs are of the same type (SSM_{p|q}) and redirect the top and bottom views with remixed spectral components. The whole setting can be rotated by 90° to acquire two independent views from each orthogonal direction around the mouse body. The resulting datasets give us sufficient information to recover the two spectral components for all four views.
Figure 5 shows a comparison between our proposed DSS system and an ordinary multi-view system. Assume that a bioluminescence image has two spectral components, blue and red, with differing time courses (the middle row of Fig. 5). To perform a two-spectrum imaging experiment, a conventional multi-view system needs to take two images using red and blue filters sequentially (the upper row of Fig. 5). For each data acquisition session, part of the energy (blue, then red) is filtered out, and the associated information is not recorded. In contrast, our DSS system does not filter out any information and fully utilizes all the photons throughout an entire experimental process (the lower row of Fig. 5). Specifically, for the image the DSS system captures using the first SSM device, the first surface of that SSM device is a dichroic beam splitter that reflects the blue component and transmits the red component. The red component is then reflected by the mirror coating on the second surface. As a result, the red and blue signals form two images with a spatial shift on the CCD. If the bioluminescent signal is spatially localized relative to that shift on the CCD, the DSS will directly separate the two spectral components; otherwise, the two images will be mixed on the CCD and can be digitally separated. Similarly, the image captured by DSS using a mirror contains both the red and blue components, offering sufficient information for digital separation. A composite image can thus be regarded as a linear combination of two virtual images in different spectral bands after a spatial shift. Let us assume that a line object has two spectral components p and q. An SSM_{p|q} splits and mixes the spectra as follows (in the ideal case a = b = 1):

M_k = p_k + q_{k−i}, k = 1, ..., n + i,

where M is the image on the CCD, M_k the kth pixel of M, p_k the kth pixel of p, q_k the kth pixel of q, p_k = q_k = 0 for k < 1 and k > n, and i > 0 the spatial shift.
If the distance between the two coatings changes, the spatial shift changes accordingly (i will be different). That is, we can obtain a new set of linear equations; for example, if the distance is set to zero (mirror coating only), we have M′_k = p_k + q_k for k = 1, ..., n. Combining these two equation sets, we have

M_k = p_k + q_{k−i}, k = 1, ..., n + i,
M′_k = p_k + q_k, k = 1, ..., n,

which is an over-determined linear system with 2n variables and 2n + i equations.
Given the mathematical model of SSM_{p|q} in 2.1, with imperfect coefficients the linear system takes the form

M_k = p_k + q_k, k = 1, ..., n,
M′_k = a p_k + (1 − b) q_k + (1 − a)² p_{k−i} + b² q_{k−i}, k = 1, ..., n + i,

where M_k and M′_k are the kth pixels in the images acquired with the mirror and with SSM_{p|q}, respectively (keeping terms through the first mirror reflection; further reflections add analogous shifted terms). Since bioluminescence images are generally very weak and only a very small area carries signal, most of the variables p_k and q_k are forced to zero, which makes the system smaller and more over-determined.
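As an illustration of how such a system can be assembled and solved, here is a small sketch using the ideal (a = b = 1) model and SciPy's non-negative least-squares routine; the signal values, sizes, and shift are our own choices, not the paper's:

```python
import numpy as np
from scipy.optimize import nnls

# Recover two spectral columns p, q from a mirror image (M_k = p_k + q_k)
# and an ideal SSM image (M'_k = p_k + q_{k-i}), stacked into one
# over-determined system (2n variables, 2n + i equations).
n, i = 20, 5
p_true = np.zeros(n); p_true[6:10] = [1.0, 3.0, 2.5, 0.8]   # sparse signal
q_true = np.zeros(n); q_true[7:11] = [0.5, 2.0, 3.2, 1.1]

def shift(v, s, length):
    out = np.zeros(length)
    out[s:s + len(v)] = v
    return out

M_mirror = p_true + q_true                                  # n equations
M_ssm = shift(p_true, 0, n + i) + shift(q_true, i, n + i)   # n+i equations

# System matrix over the unknowns x = [p; q]
A = np.zeros((2 * n + i, 2 * n))
A[:n, :n] = np.eye(n); A[:n, n:] = np.eye(n)           # mirror rows
A[n:, :n] = np.vstack([np.eye(n), np.zeros((i, n))])   # SSM rows: p unshifted
A[n:, n:] = np.vstack([np.zeros((i, n)), np.eye(n)])   # SSM rows: q shifted by i
b = np.concatenate([M_mirror, M_ssm])

x, _ = nnls(A, b)          # non-negative least squares
p_rec, q_rec = x[:n], x[n:]
```

With noise-free data the system has full column rank and the non-negative solver recovers both bands exactly.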

Multiple-component algorithm
In the preceding subsection, we have considered the two-component case, which can be extended to the multiple-component case. Let us assume that there are m spectral components p^(1), ..., p^(m). One straightforward extension is the use of m − 1 dichroic beam-splitter layers. There is another way to un-mix more spectral components using single-dichroic-layer SSMs. Specifically, we can use m different SSMs: SSM_{p^(1)|p^(2)+...+p^(m)}, SSM_{p^(1)+p^(2)|p^(3)+...+p^(m)}, ..., SSM_{p^(1)+...+p^(m)|0}. The equations can be set up as follows: for the l-th SSM with spatial shift i_l,

M^(l)_k = Σ_{j=1}^{l} p^(j)_k + Σ_{j=l+1}^{m} p^(j)_{k−i_l}, l = 1, ..., m.

In this linear system, the shifts i_l need not be distinct.
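To illustrate that m single-layer SSMs determine m components, the following sketch (our own construction and sizes) assembles the system matrix for m = 3 and checks that it has full column rank even when two of the shifts coincide:

```python
import numpy as np

# Sketch of the multiple-component measurement model: the l-th
# single-layer SSM reflects components 1..l in place and shifts
# components l+1..m by i_l, so
#   M^(l)_k = sum_{j<=l} p^(j)_k + sum_{j>l} p^(j)_{k - i_l}.
m, n = 3, 12
shifts = [4, 4, 0]      # i_l need not be distinct; the last SSM is a mirror

def ssm_row_block(l, i_l):
    """(n + i_l) x (m*n) equation block for the l-th SSM measurement."""
    A = np.zeros((n + i_l, m * n))
    for j in range(m):                      # j indexes p^(j+1)
        off = 0 if j < l else i_l           # components 1..l unshifted
        A[off:off + n, j * n:(j + 1) * n] += np.eye(n)
    return A

A = np.vstack([ssm_row_block(l, shifts[l - 1]) for l in range(1, m + 1)])
print(A.shape, np.linalg.matrix_rank(A))    # full column rank => solvable
```

Full column rank means the stacked system determines all m·n unknown pixels uniquely in the least-squares sense.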

Least squares reconstruction
In all the aforementioned cases, the corresponding linear system can be expressed as AX = B, where A denotes the system matrix associated with the spectral separation setup, B the measurement (the composite image), and X the spectral components of interest. Considering the measurement noise, the linear least-squares method is preferred to ensure a stable solution.
In addition to the linear system, we impose non-negativity constraints on the variables to confine the results further. The least-squares problem is given by

min_X ||AX − B||², subject to X ≥ 0.

Since A is a sparse matrix, a sparse non-negative least-squares algorithm can be used to solve the problem efficiently [26,27,28].
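A sparse formulation might look as follows; this sketch uses SciPy's bound-constrained least-squares solver as a stand-in for the dedicated sparse NNLS codes cited above [26,27,28], with illustrative sizes and signals:

```python
import numpy as np
from scipy.sparse import eye, hstack, vstack
from scipy.optimize import lsq_linear

# Sparse non-negative least-squares solve of AX = B for the
# mirror + SSM system (sizes, shift, and signals are illustrative).
n, i = 50, 8
rng = np.random.default_rng(1)
x_true = np.zeros(2 * n)
x_true[15:25] = rng.uniform(1.0, 5.0, 10)           # sparse p band
x_true[n + 18:n + 30] = rng.uniform(1.0, 5.0, 12)   # sparse q band

A = vstack([hstack([eye(n), eye(n)]),                       # mirror: p_k + q_k
            hstack([eye(n + i, n), eye(n + i, n, k=-i)])])  # SSM: p_k + q_{k-i}
A = A.tocsr()
b = A @ x_true

res = lsq_linear(A, b, bounds=(0, np.inf))   # non-negativity via bounds
x_hat = res.x
```

`lsq_linear` accepts sparse matrices directly, so the same code scales to full-image systems where most entries of A are zero.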
We have also developed a differential evolution (DE) algorithm [29,30,31] for DSS in C/C++ by combining DE with the non-negative linear least-squares algorithm.
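The paper does not spell out the DE/NNLS split, but one natural reading is that DE searches the nonlinear coating parameters while NNLS solves the inner linear problem. The sketch below implements that reading with SciPy's `differential_evolution` on a toy first-order model; all names, sizes, and coefficient values are our assumptions:

```python
import numpy as np
from scipy.optimize import nnls, differential_evolution

# DE + NNLS sketch: differential evolution searches the coating
# coefficients (a, b); for each candidate, an inner NNLS solve gives
# the best spectral components, and DE minimizes the data residual.
# First-order SSM model only (multiple reflections neglected).
rng = np.random.default_rng(3)
n, i = 10, 3
a_true, b_true = 0.95, 0.80
p = np.abs(rng.normal(size=n)); q = np.abs(rng.normal(size=n))

def system(a, b):
    """Stacked mirror + first-order SSM matrix for given (a, b)."""
    E = np.eye(n)
    S = np.vstack([np.zeros((i, n)), np.eye(n)])    # shift by i rows
    Ep = np.vstack([E, np.zeros((i, n))])
    top = np.hstack([E, E])                          # mirror rows
    bot = np.hstack([a * Ep + (1 - a)**2 * S,        # SSM rows, p part
                     (1 - b) * Ep + b**2 * S])       # SSM rows, q part
    return np.vstack([top, bot])

meas = system(a_true, b_true) @ np.concatenate([p, q])

def residual(coef):
    _, rnorm = nnls(system(*coef), meas)   # inner non-negative LS solve
    return rnorm

best = differential_evolution(residual, bounds=[(0.7, 1.0), (0.5, 1.0)],
                              seed=0, tol=1e-10, maxiter=200)
a_hat, b_hat = best.x
```

The outer search is derivative-free, so the non-smoothness introduced by the active-set NNLS solve poses no difficulty.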

Static DSS simulation with perfect coefficients
All the numerical simulations were conducted on an Intel Xeon 3.20 GHz machine with 4 GB memory, using MATLAB 2006, the Intel C/C++ compiler 10.0, and the Intel Math Kernel Library 9.1. The first simulation was done with a real bioluminescent image of a mouse. In this case, the original pixel size was about 0.07 mm. A 5 × 5 pixel binning was applied to form a 180 × 80 image, as shown in Fig. 6(a). Since this was a spectrally mixed image, there was no spectral information. As shown in Fig. 6(b), we assumed that there were two spectral components p and q in the image, with percentages of 40% and 60%, respectively. In the simulation, we assumed that a = 0.99 and b = 0.90 for the dichroic coating. A three-reflection model was used to generate the images:

M_k = ap_k + (1 − b)q_k + Σ_{l=1}^{3} [a^(l−1)(1 − a)² p_{k−li} + b²(1 − b)^(l−1) q_{k−li}].

For M, the spatial shift was i = 10 pixels (about 3.5 mm). Poisson noise was added to the images according to the signal level. As shown in Fig. 6(c), the top image used SSM_{p|q} while the lower image used a mirror. The DSS algorithm was then applied to every column of the images to recover the spectral information using the same model.
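The forward generation step can be sketched as follows; the 1D Gaussian signals below are stand-ins for a column of the real mouse image, while the coefficients, shift, and three-reflection truncation match the values quoted above:

```python
import numpy as np

# Forward generation for one image column: three-reflection SSM model
# with a = 0.99, b = 0.90, shift i = 10, then Poisson noise scaled to
# the signal level (counts). The Gaussian profiles are illustrative.
a, b, i, n = 0.99, 0.90, 10, 80
rng = np.random.default_rng(5)
k = np.arange(n)
p = 4000.0 * np.exp(-0.5 * ((k - 35) / 6.0) ** 2)   # 40% band (counts)
q = 6000.0 * np.exp(-0.5 * ((k - 40) / 7.0) ** 2)   # 60% band (counts)

def shifted(v, s, length):
    out = np.zeros(length); out[s:s + len(v)] = v
    return out

L = n + 3 * i
M_ssm = (shifted(a * p + (1 - b) * q, 0, L)                       # direct
         + shifted((1 - a)**2 * p + b**2 * q, i, L)               # bounce 1
         + shifted(a * (1 - a)**2 * p + b**2 * (1 - b) * q, 2 * i, L)
         + shifted(a**2 * (1 - a)**2 * p + b**2 * (1 - b)**2 * q, 3 * i, L))
M_mirror = p + q
M_ssm_noisy = rng.poisson(M_ssm).astype(float)       # shot noise
M_mirror_noisy = rng.poisson(M_mirror).astype(float)
```

The noisy pair (mirror image, SSM image) is then what a column-wise DSS solve would take as input.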
Figure 6(d) shows the reconstructed spectral components, and Fig. 6(e) the remixed result. The simulation results were in excellent agreement with the original spectral image; the reconstruction errors for the two bands were 5.4% and 4.2%, respectively. We also noticed that as the spatial shift increased, the reconstruction error decreased. For example, using the same settings as in the first experiment, if the spatial shift was set to 20 pixels, the errors were 4.4% and 3.5%, respectively; and if the spatial shift was 30 pixels, the errors were only 4.1% and 3.0%. Clearly, if the spatial shift is set sufficiently large, the signals in the two spectral bands will be fully separated.

Dynamic DSS simulation
The second simulation was to recover two spectral images and the corresponding time-course functions. Assume that we have a 1D image with two spectra p and q, as shown in Fig. 7(a). The time courses for the two spectra were p(t) and q(t), in the form of chi-square functions. In our recent study of the time courses of bioluminescent probes, the time-course functions could be approximated by two different Gaussian/chi-square distribution functions before and after the signal peak. For simplicity, here we used a chi-square function to approximate each time course, as shown in Fig. 7(b). In the simulation, over the time interval from 4 to 28 minutes, we captured 12 images with an exposure time of 2 minutes per image. Images [1,5,9] were captured using a mirror, images [2,6,10] using an SSM with a 5-pixel shift, images [3,7,11] using an SSM with a 10-pixel shift, and images [4,8,12] using an SSM with a 15-pixel shift. Figure 7(a) shows the reconstructed results for p and q, with errors of 2.3% and 3.6%, respectively. Figure 7(b) shows the recovered time-course functions, with errors of 1.6% and 3.3%, respectively. We regenerated the dataset with random Poisson noise, and our DSS program gave similar results.
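A stripped-down version of the time-course recovery can be sketched as follows; unlike the full dynamic DSS algorithm, this toy assumes the two spatial profiles are already known and recovers only the per-frame amplitudes, with all shapes and parameters invented for illustration:

```python
import numpy as np
from scipy.stats import chi2

# Chi-square-shaped time courses p(t), q(t) modulate two fixed (here
# assumed known) spatial profiles; each frame is a snapshot, and the
# two per-frame amplitudes are recovered by least squares from the
# mirror frames. (The full algorithm solves profiles and courses jointly.)
n = 40
k = np.arange(n)
p_prof = np.exp(-0.5 * ((k - 15) / 4.0) ** 2)
q_prof = np.exp(-0.5 * ((k - 24) / 5.0) ** 2)
t = np.arange(4.0, 28.0, 2.0)                 # 12 frame times (minutes)
p_t = chi2.pdf(t / 3.0, df=4)                 # illustrative time courses
q_t = chi2.pdf(t / 5.0, df=6)

frames = np.outer(p_t, p_prof) + np.outer(q_t, q_prof)   # 12 mirror frames
P = np.column_stack([p_prof, q_prof])
amps, *_ = np.linalg.lstsq(P, frames.T, rcond=None)      # 2 x 12 amplitudes
p_rec, q_rec = amps
```

Because the two spatial profiles are linearly independent, the per-frame amplitudes, and hence the sampled time courses, are recovered exactly in this noise-free toy.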
A similar experiment conducted with the IVIS Spectrum system (a filter-wheel based system) to capture the multi-spectral data for four views would marginally capture one dataset (8 images), while the time-course information would be completely lost.

Fig. 7. Dynamic DSS simulation with a two-spectrum 1D image. (a) Spectral components p and q and the recovered results p′ and q′, and (b) dynamic functions p(t) and q(t) as well as the recovered time courses p′(t) and q′(t), respectively.

Static DSS simulation with imperfect coefficients
The third simulation was similar to the first one, but with imperfect coefficients a and b. The forward imaging was implemented using forward ray tracing, one of the most accurate simulation methods in optical engineering. Our DSS algorithm was then applied to the realistically simulated dataset to compare the recovered result and the filtered counterpart.
As shown in Fig. 8(a), the focal length of the lens was set to 55 mm, the aperture of the lens to 50 mm, and the distance from the lens to the virtual images to 500 mm. The thickness of each SSM component was 5 mm, with true coefficients a = 0.95 and b = 0.80. It is well known that the coefficients of a dichroic coating change when the incident angle varies; in our case, the half cone angle (within which a photon can enter the lens) for each point on the virtual image is less than arcsin(50/2/500) = 2.87°. Since this aperture is very small, we can ignore the variation in the coefficients. A cylindrical object was then placed in the middle of the four-view DSS prototype. To reduce the forward simulation time, we simulated only a band on the cylindrical surface. The band source forms line images on the CCD. The two images captured from the mirror and the SSM are shown in Fig. 8(b). To measure the spatial shift, we used a small point source on the cylinder surface. The point source was spectrally pure within the pass band of the SSM. The distance was measured between the corresponding light spots formed on the CCD. In this configuration, the spatial shift was 9 pixels, as shown in Fig. 8(c). This method can also be used to calibrate the coefficients a and b; in the calibration, a much stronger light source (such as a laser) would greatly reduce the noise. In the simulation, the calibration procedure was repeated several times, and the calibrated coefficients were found to be a = 0.9487 and b = 0.7970. Finally, the DSS result is shown in Fig. 8(d). The reconstruction errors were about 2.27% and 2.49% for spectral bands p and q, respectively. The relative error in the third simulation was at the same level as the errors in the first and second simulations.
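The shift-measurement step lends itself to a simple cross-correlation sketch (our own construction; the paper measures the spot distances directly):

```python
import numpy as np

# A spectrally pure point source appears once in the mirror image and
# once (shifted) in the SSM image; the lag maximizing their
# cross-correlation gives the spatial shift in pixels. Signal values
# are illustrative.
n, true_shift = 64, 9
spot = np.zeros(n); spot[20] = 1000.0
mirror_img = spot + np.random.default_rng(7).poisson(2.0, n)   # dark noise
ssm_img = np.roll(spot, true_shift) + np.random.default_rng(8).poisson(2.0, n)

corr = np.correlate(ssm_img - ssm_img.mean(),
                    mirror_img - mirror_img.mean(), mode="full")
lag = np.argmax(corr) - (n - 1)    # offset of the full-mode zero lag
print("measured shift:", lag)
```

With a bright calibration source the correlation peak dominates the dark noise, so the lag estimate is robust at integer-pixel resolution.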

Discussions and conclusions
Bioluminescent signals are generally very weak. Even using a state-of-the-art liquid-nitrogen-cooled CCD camera, several minutes are needed to obtain a decent signal-to-noise ratio. To obtain multi-spectral data, the filter-wheel approach requires long imaging times. The digital spectral separation approach proposed here provides a novel solution for multi-spectral low-light imaging: we shift the spectral components in space and collect all of them simultaneously, instead of filtering out any spectral information. Due to space limitations, we could not cover all possible designs using digital spectral separation principles. For example, the mirror and the dichroic beam splitter need not be parallel, and both can be shaped flexibly as needed. We can have one-view, two-view, eight-view, or even cone-shaped designs. In the presented prototype design, the system still needs to take several images to recover multiple spectral components; for example, compared with the four-view BLT system, the four-view DSS prototype will not reduce the experimental time. Alternatively, we can capture all the spectral information in one shot, as shown in Fig. 9(a), where (A, ..., H) are eight SSMs and each SSM covers two parts of the surface. For example, part 1 is covered by A and B, so the same portion of the mouse body surface can be imaged twice by two adjacent optical elements. Therefore, the spectral information can be simultaneously recorded and subsequently recovered, up to two components. However, the 3D mouse body surface model is required for signal calibration. This type of design can be extended to separate more spectral components, as shown in Fig. 9(b). Note that we can also use more views to separate fewer spectral components. Compared with the ordinary IVIS system, the eight-view DSS prototype can reduce the experimental time 8-fold. In addition to the time saving, it automatically compensates for the temporal variation of the signal. Compared with the multi-view design, the experimental time can be cut in half.
In the single-probe case, the time-varying ratios between different spectral components are the same, and this ratio can be easily estimated from the bioluminescent images. If we have two or more types of probes in the mouse, the time-varying behaviors of the different probes will not be the same. Because these probes generally differ in spectral profiles, the time courses of the different spectral profiles will not be the same. In this case, the dynamic DSS algorithm will be very useful to monitor the time course of each spectral component according to Eq. (8). To recover the multi-probe source distributions, we definitely need to apply a BLT algorithm to the multi-spectral images extracted using the DSS algorithm, since the DSS algorithm itself cannot reconstruct multi-probe distributions directly.
Most importantly, the applications are not limited to bioluminescence imaging. The proposed DSS concept can be applied to any multi-spectral imaging study in which faster data acquisition is desirable. The focus of this paper is on the DSS method; further efforts will be devoted to system prototypes and reconstruction methods. Theoretical and animal studies on the performance optimization (such as noise minimization) of the DSS scheme will be conducted in the future.
In conclusion, we have proposed a DSS approach for optimal acquisition of multi-spectral data. Specifically, we have formulated the static and dynamic DSS algorithms and described a four-view DSS prototype for bioluminescence tomography. Our results should be generally useful for multi-spectral imaging in the case of weak and/or variable signals.

Fig. 2. Two spectral components are separated, shifted, and mixed using a dichroic coating and a mirror.

Fig. 6. Static DSS simulation with a real bioluminescent view of a mouse. (a) Original mouse bioluminescent view, (b) two spectral components q (upper) and p (lower), (c) two composite images M (upper) and M′ (lower) captured by DSS, (d) two recovered spectral components q′ (upper) and p′ (lower), and (e) the recombined image p′ + q′.

Fig. 8. Static DSS simulation with ray-tracing data and imperfect coefficients. (a) System configuration of the forward ray tracing for an SSM component, (b) line images formed on the CCD as simulated via ray tracing, (c) the spatial shift due to the SSM, (d) the spectral components p and q and their recovered counterparts p′ and q′.

Fig. 9. Digital spectral separation (DSS) in one shot. (a) Eight-view design in the two-component case, (b) twelve-view design in the three-component case.