Radiometric and design model for the Tunable Light-guide Image Processing Snapshot Spectrometer (TuLIPSS)

The tunable light-guide image processing snapshot spectrometer (TuLIPSS) is a novel remote sensing instrument that can capture a spectral image cube in a single snapshot. The optical modelling application for the absolute signal intensity on a single pixel of the sensor in TuLIPSS has been developed through a numerical simulation of the integral performance of each optical element in the TuLIPSS system. The absolute spectral intensity of TuLIPSS can be determined either from the absolute irradiance of the observed surface or from the tabulated spectral reflectance of various land covers and by the application of a global irradiance approach. The model is validated through direct comparison of the simulated results with observations. Based on tabulated spectral reflectance, the deviation between the simulated results and the measured observations is less than 5% of the spectral light flux across most of the detection bandwidth for a Lambertian-like surface such as concrete. Additionally, the deviation between the simulated results and the measured observations using global irradiance information is less than 10% of the spectral light flux across most of the detection bandwidth for all surfaces tested. This optical modelling application of TuLIPSS can be used to assist the optimal design of the instrument and explore potential applications. The influence of the optical components on the light throughput is discussed with the optimal design being a compromise among the light throughput, spectral resolution, and cube size required by the specific application under consideration. The TuLIPSS modelling predicts that, for the current optimal low-cost configuration, the signal to noise ratio can exceed 10 at 10 ms exposure time, even for land covers with weak reflectance such as asphalt and water. 
Overall, this paper describes the process by which the optimal design is achieved for particular applications and directly connects the parameters of the optical components to the TuLIPSS performance.



Selection of Dispersive Element:
While diffractive gratings are the most commonly used dispersive elements in spectrometers, they cause particular problems when applied to the TuLIPSS system, making prisms the better choice. A grating generally produces multiple diffraction orders simultaneously, which decreases the light throughput and generates significant stray light, so baffles are usually needed in the mechanical housing design. In TuLIPSS, about 50 layers of fiber blocks need to be dispersed onto their void spaces with a spacing of about 250 µm, and the effective focal length of the collimating lens is 180 mm. Thus the angle difference between two adjacent layers is about 0.08°. This small field angle readily generates substantial spectral cross talk between the selected diffraction order of one layer and a different diffraction order of its neighboring layers. The dispersion of a prism, on the other hand, comes from the variation of its refractive index with wavelength λ, and can therefore be determined by applying Snell's law at each interface of the prism. For a prism with apex angle α as shown in fig. S1, Snell's law gives sin θ₁ = n₁(λ) sin θ₂ at the entrance face and n₁(λ) sin θ₃ = sin θ₄ at the exit face, with θ₂ + θ₃ = α. The deviation angle δ(λ) is then given by

δ(λ) = θ₁ + θ₄ − α.    (S1)

As equation (S1) shows, the deviation angle of a light ray is determined by the incident angle, the refractive index, and the apex angle of the prism. Under the small-angle approximation, the deviation angle for a prism in air is linear in the refractive index n₁(λ): δ(λ) ≈ (n₁(λ) − 1)α. The dispersion Δ of the prism is the difference in deviation angles between the two extreme wavelengths (λ_min, λ_max) of the spectral band transmitted by the prism: Δ = δ(λ_min) − δ(λ_max). Though single prisms generally show less dispersive power and are nonlinear in wavelength, their high throughput and low stray light make them a better choice than gratings for the current TuLIPSS system.
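The deviation and dispersion relations above can be evaluated numerically; the sketch below traces a ray through the two faces of a single prism in air. The incident angle, apex angle, and BK7-like refractive indices are illustrative values, not the actual TuLIPSS design parameters.

```python
import math

def prism_deviation(theta1_deg, alpha_deg, n):
    """Deviation angle of a single prism in air, per equation (S1).

    Snell's law at each interface: sin(theta1) = n*sin(theta2) and
    n*sin(theta3) = sin(theta4), with theta2 + theta3 = alpha.
    The deviation is delta = theta1 + theta4 - alpha.
    """
    theta1 = math.radians(theta1_deg)
    alpha = math.radians(alpha_deg)
    theta2 = math.asin(math.sin(theta1) / n)   # refraction at entrance face
    theta3 = alpha - theta2                     # internal geometry of the prism
    theta4 = math.asin(n * math.sin(theta3))   # refraction at exit face
    return math.degrees(theta1 + theta4 - alpha)

# Dispersion between the two extreme wavelengths of the band,
# using approximate BK7 indices at 460 nm and 700 nm (illustrative).
n_min, n_max = 1.5240, 1.5131     # n(lambda_min), n(lambda_max)
delta_min = prism_deviation(10.0, 15.0, n_min)
delta_max = prism_deviation(10.0, 15.0, n_max)
dispersion = delta_min - delta_max  # Delta = delta(lambda_min) - delta(lambda_max)
```

For these values the deviation is close to the small-angle estimate (n − 1)α, confirming the approximate linearity in refractive index.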
Compound prisms are solid assemblies of multiple single prisms cemented together. The deviation angle δ(λ) and the dispersion Δ can be obtained by concatenating the refraction equations at each interface. By adjusting the apex angles and the Abbe numbers of the single prisms, it is possible to retain the dispersion while making the deviation at the central wavelength, δ((λ_min + λ_max)/2), zero [1][2][3]. Thus, light at the central wavelength that enters at some angle with respect to the optical axis exits the prism at the same angle with respect to the optical axis. This direct-vision dispersion is a substantial advantage in maintaining the TuLIPSS in a direct-view geometry, and helps keep the mechanical housing design compact. Beyond direct-vision dispersion, compound prisms can also achieve large dispersion; for example, the double Amici direct-vision compound prism has 23° of dispersion across the visible spectrum, equivalent to a 1300 lines/mm grating [1]. By taking advantage of the multiple degrees of freedom available in a compound prism design, the angular dispersion can be made linear in wavelength [2] or linear in wavenumber [3].
In the current TuLIPSS system design, a relatively large spectral band, e.g. 460 nm-700 nm, must be dispersed into a relatively narrow void space, e.g. 250 µm (about 40 camera pixels). Thus a low angular dispersion (less than 0.08°) satisfies the spectral-sampling requirement. With this requirement met, high light throughput and low stray light become the dominant factors in selecting the dispersive element. The low-cost single BK7 prism (discussed in the main text) is a good candidate for the dispersive element, and the spectral sampling can be tuned by changing the apex angle of the prism. For future implementations, a compound prism will allow us to shrink the system size.
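The 0.08° figure follows from the small-angle relation between angular dispersion and spatial spread at the detector: the dispersed spectrum spans roughly f·Δ, with f the collimating-lens focal length (180 mm from the text) and Δ the angular dispersion in radians. A one-line check:

```python
import math

f_mm = 180.0      # effective focal length of the collimating lens (from the text)
void_um = 250.0   # available void space between adjacent fiber layers

# Largest angular dispersion whose dispersed line still fits in the void space,
# using the small-angle relation spread ~= f * Delta.
delta_max_deg = math.degrees(void_um * 1e-3 / f_mm)
```

This evaluates to about 0.08°, the spectral-sampling limit quoted above.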

TuLIPSS Modelling Application
Here, we describe the modelling application developed to determine the light throughput of the TuLIPSS instrument, taking account of the various parameters that describe the surface radiance of the land cover being observed. The application was developed using the graphical user interface feature of Matlab (see Fig. S2). In this tool, both the slide bars and the text menu can be used to change the configuration and properties of the optical components. The pop-up menus for the camera and filters select the quantum efficiency curve of the camera and the filter transmission curve, respectively. The pop-up menu for irradiance selects either the directional hemispherical reflectance of a particular land cover from the NASA spectral library (reference) or the absolute irradiance measured with the observation geometry of the scene observed. The 'Plot' click-button generates a plot of the signal response in units of picojoules, representing the total energy recorded by a single pixel during the exposure time; the 'To Photons' click-button converts the units from picojoules to photons. The 'Poisson' click-button generates the digital counts (or analog-to-digital units) of the measured system, considering the quantum yield of the sensor with shot noise, under the assumption that the photons reaching the detector follow a Poisson distribution.
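The 'Poisson' conversion step can be sketched as follows: the expected photon number per pixel is scaled by the quantum efficiency, a Poisson variate models the shot noise in the photoelectrons, and the camera gain converts electrons to digital counts. The quantum efficiency and gain values here are illustrative assumptions, not the parameters of the TuLIPSS camera.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_counts(photons, qe, gain_e_per_adu):
    """Convert expected photons per pixel to digital counts with shot noise.

    Photoelectrons are drawn from a Poisson distribution with mean
    photons * qe, then divided by the gain (electrons per ADU).
    qe and gain_e_per_adu are hypothetical example values.
    """
    electrons = rng.poisson(photons * qe)   # shot-noise realisation
    return electrons / gain_e_per_adu       # analog-to-digital units

adu = to_counts(photons=5000.0, qe=0.6, gain_e_per_adu=2.0)
```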

Lookup table
The phase-shifting calibration provides the phase and modulation value for each pixel on the detector. However, one fiber core's dispersed line on the detector occupies a region of pixels. The final phase for each fiber core was determined in two steps: (1) a phase together with a modulation was interpolated at every wavelength position of the core; (2) the fiber core's final phase was the average of all the phase values using their modulations as the weight. As a result, two columns representing the fiber core's x and y coordinate on the object plane are appended to the lookup table.
Furthermore, the modulations at all wavelengths were also summed to determine the core's final modulation. Due to imperfections in the fabrication procedure and mismatch between the cores and the pinhole, a few fiber cores have lower transmission than usual. These low-transmission fibers cause dark-spot artifacts on the reconstructed image. Therefore, cores with modulations below a certain threshold are removed from the lookup table. With a higher threshold, the reconstructed image is more homogeneous at the expense of lower spatial sampling.
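The per-core reduction described above (modulation-weighted phase average, summed modulation, and threshold rejection) can be sketched as a small helper; the function name and threshold value are illustrative, not from the TuLIPSS code.

```python
import numpy as np

def core_phase_and_modulation(phases, modulations, threshold):
    """Collapse per-wavelength calibration values to one entry per fiber core.

    phases, modulations: 1-D arrays interpolated at every wavelength
    position along the core's dispersed line. The final phase is the
    modulation-weighted average; the final modulation is the sum.
    Returns None when the summed modulation is below the threshold,
    i.e. the low-transmission core is dropped from the lookup table.
    """
    total_mod = modulations.sum()
    if total_mod < threshold:
        return None   # exclude low-transmission core to avoid dark spots
    phase = np.average(phases, weights=modulations)
    return phase, total_mod
```

Raising the threshold removes more cores, trading spatial sampling for image homogeneity, exactly the trade-off noted above.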

Flat field and dark count correction
The purpose of the flat-field correction is to remove the non-uniform light transmission of the fiber bundle. In our measurement, the flat field is the global irradiance reflected from a white target screen. This flat field is captured by the detector with the white screen close to the objective lens and out of focus, which provides uniform illumination of the fiber bundle. The dark count is captured with the same exposure time as the data acquisition while the aperture of the fore-optics is covered. The flat field is first corrected by subtracting the dark count. The raw data are then corrected by subtracting the dark count and normalized by the corrected flat field. The flat-field normalization eliminates the intensity heterogeneity arising from variations in fiber transmission.
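The correction chain above amounts to a dark subtraction followed by division by the dark-subtracted flat field. A minimal sketch, assuming the flat field is scaled to unit mean before division (a common convention; the text does not specify the scaling):

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Dark-subtract and flat-field normalise a raw frame.

    flat: frame of the defocused white screen (uniform illumination);
    dark: frame taken with the fore-optics covered, same exposure time.
    """
    flat_corr = flat.astype(float) - dark   # dark-corrected flat field
    flat_corr /= flat_corr.mean()           # unit-mean scaling (assumed)
    return (raw.astype(float) - dark) / flat_corr
```

Pixels behind high-transmission fibers are divided by proportionally larger flat-field values, so the fiber-to-fiber intensity heterogeneity cancels.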

Reconstruction
Having generated the lookup table and the flat-field corrected object image, each single-channel image is reconstructed in two steps: (1) a single lookup-table column provides the detector coordinates of all fibers at that wavelength, and the fibers' intensities are obtained by interpolation at these coordinate positions on the raw image; (2) the interpolated intensities are remapped onto a mesh grid according to their phase values, and a linear interpolation algorithm is used to estimate the values at the grid points. The composite image is generated by combining all spectral channels and is pseudo-colored using a wavelength-to-RGB conversion algorithm.
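The two reconstruction steps can be sketched as below. The function name, argument shapes, and the use of scipy's `griddata` for the linear remapping are assumptions for illustration, not the TuLIPSS code's actual interface; step 1 uses a nearest-pixel lookup for brevity where the text interpolates at sub-pixel positions.

```python
import numpy as np
from scipy.interpolate import griddata

def reconstruct_channel(raw, det_xy, obj_xy, grid_x, grid_y):
    """Rebuild one spectral channel from the raw detector frame.

    det_xy: (N, 2) detector (x, y) coordinates of every fiber at this
    wavelength (one lookup-table column); obj_xy: (N, 2) fiber
    coordinates on the object plane from the phase calibration.
    """
    # Step 1: sample fiber intensities at their detector positions.
    rows = np.clip(np.round(det_xy[:, 1]).astype(int), 0, raw.shape[0] - 1)
    cols = np.clip(np.round(det_xy[:, 0]).astype(int), 0, raw.shape[1] - 1)
    intensities = raw[rows, cols]

    # Step 2: remap onto a regular object-plane grid by linear interpolation.
    return griddata(obj_xy, intensities, (grid_x, grid_y), method="linear")
```

Running this once per lookup-table column yields the full spectral cube, which can then be combined and pseudo-colored as described.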