Open Access
Compact and ultracompact spectral imagers: technology and applications in biomedical imaging
6 April 2023
Abstract

Significance

Spectral imaging, which includes hyperspectral and multispectral imaging, can provide images in numerous wavelength bands within and beyond the visible light spectrum. Emerging technologies that enable compact, portable spectral imaging cameras can facilitate new applications in biomedical imaging.

Aim

With this review paper, researchers will (1) understand the technological trends of upcoming spectral cameras, (2) understand the new applications that portable spectral imaging has unlocked, and (3) select appropriate spectral imaging systems for their specific applications.

Approach

We performed a comprehensive literature review in three databases (Scopus, PubMed, and Web of Science). We included only fully realized systems with definable dimensions. To best accommodate many different definitions of “compact,” we included a table of dimensions and weights for systems that met our definition.

Results

There is a wide variety of contributions from the industrial, academic, and hobbyist spaces. A variety of new engineering approaches, such as Fabry–Perot interferometers, spectrally resolved detector arrays (mosaic arrays), microelectromechanical systems, 3D printing, light-emitting diodes, and smartphones, were used in the construction of compact spectral imaging cameras. In bioimaging applications, these compact devices were used for in vivo and ex vivo diagnosis and in surgical settings.

Conclusions

Compact and ultracompact spectral imagers are the future of spectral imaging systems. Researchers in the bioimaging fields are building systems that are low-cost, fast in acquisition time, and mobile enough to be handheld.

1.

Introduction and Motivation

Light interacts with objects through various means of scattering, absorption, reflection, and transmission. In the late 18th to early 19th century, studies of light emitted from chemical flames and celestial bodies demonstrated that each chemical compound has a distinct fingerprint in its interactions with electromagnetic radiation. Nowadays, using instruments called spectrometers or spectrographs, incoming radiation can be separated by wavelength, and the resulting spectrum can be matched to determine the types and amounts of chemicals present. Whereas spectrometers typically analyze a compound within a limited field of view, spectral imaging expands the field of view to include the spatial morphology of the subject. Spectral imaging is thus more powerful than spectrometry: it reveals not just the types and amounts of chemicals but also their spatial distribution.1 As such, spectral imaging is used in almost every application that requires imaging, such as satellite imaging,2 agriculture,3 food science,4 and art.5 In biomedical imaging, spectral imaging has advantages over regular color imaging. The combination of spectral and morphological information provides a better understanding of physiological processes, which RGB imaging and spectroscopy alone cannot achieve.6 Both Li et al.6 and Lu and Fei7 produced seminal literature reviews that discuss the different processes spectral imaging can reveal and help measure, such as metabolic processes; retinal oxygen saturation; tumors on the surface of the skin, tongue, and mucosa; and ischemia in the intestine and the brain. Because spectral imaging mostly captures reflected and scattered light at nonionizing wavelengths, invasiveness and potential harm are minimal.

The motivation for compact and lightweight spectral camera systems came from remote sensing: smaller and lighter cameras meant more space for other instruments. As spectral imaging was adapted to other fields, compact spectral cameras proved useful because they enabled on-the-spot sample acquisition and analysis without cumbersome setup. Since the first generations of spectral cameras in the 1970s, innovations in compact spectrometry, manufacturing processes, materials science, and computation have enabled compact and ultracompact spectral imaging systems.8 Innovations came from both industry and academia: while compact commercial devices were developed using proprietary solutions, many devices used within academia relied on low-cost, commercial-off-the-shelf (COTS) components. Since the last review of medical hyperspectral imaging from our research group,7 several new applications of spectral imaging in the medical and biological fields have become possible due to the development of compact and ultracompact spectral cameras. And yet, spectral imaging devices are still not widely used in the biomedical field.6,9 With this review paper, we hope that researchers can (1) understand the technological trends of upcoming spectral cameras, (2) understand the new applications that portable spectral imaging has unlocked, and (3) select appropriate spectral imaging systems for their specific applications. Section 4 reviews the acquisition methods. Section 5 discusses components that enable the miniaturization of spectral cameras. In Sec. 6, we focus on a special subset of compact spectral cameras: those that are both compact and low-cost, built using off-the-shelf components and low-cost manufacturing processes. In Sec. 7, we provide specific applications of compact spectral cameras in biomedical research. Finally, in Sec. 8 we provide an extended discussion on the future of compact spectral cameras, in terms of both engineering and biomedical applications.

2.

Scope and Methodology

In this paper, the term “spectral imager” refers to both hyperspectral imagers (HSIs) and multispectral imagers (MSIs). In the early years of spectral imaging research, the term “imaging spectrometer” was also common.10–15 In general, MSIs capture <20 bands of wavelengths, whereas HSIs can capture 20 to hundreds of bands.6 Some literature uses the term “ultraspectral imaging,” which refers to systems that collect hundreds to thousands of bands.16 Spectral imaging captures light reflected, scattered, and fluoresced from a sample, as in conventional imaging. We do not cover other imaging modalities that also rely on the spectral response of tissues, such as laser speckle contrast imaging (LSCI),17 Raman spectroscopy,18 and optical coherence tomography.19 Readers should not confuse spectral imaging with multispectral photoacoustic imaging, which uses the formation of sound waves following light absorption to image at different wavelengths.20 We only discuss cameras using digital sensors, as research in analog spectral cameras is almost nonexistent. Finally, the definition of “compact” as used in the literature varies by field. Table 1 displays dimensions and weights for systems that meet our definition of “compact”: without external lenses or cables, these systems weigh no more than 5 kg (11 lbs). We also define “ultracompact” cameras as systems that weigh <500 g (1.1 lbs). For comparison, commercial high-end digital single-lens reflex (DSLR) cameras typically weigh <2.5 kg (5.5 lbs), midrange commercial webcams weigh 80 to 100 g (3 to 4 oz), and modern handheld spectrometers usually weigh no more than 1 kg (2.2 lbs).8 Our findings are summarized in Table 1. In this paper, we use the notation h×w×λ to denote the pixel raster size of the hypercube, where h and w refer to the height and width in the spatial dimensions, and λ refers to the number of bands in the spectral dimension.
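To make the h×w×λ notation concrete, the following is a minimal sketch in Python/NumPy (with hypothetical dimensions) showing how a hypercube is indexed: one spatial pixel yields a spectrum of λ values, and one band yields an h×w image.

```python
import numpy as np

# Hypothetical hypercube: h x w spatial pixels, n_bands spectral bands
h, w, n_bands = 512, 640, 100
datacube = np.zeros((h, w, n_bands), dtype=np.float32)

# The spectrum of a single spatial pixel is a 1D array of n_bands values
spectrum = datacube[256, 320, :]

# A single spectral band is a 2D spatial image
band_image = datacube[:, :, 42]
```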

Table 1

Compilation of compact spectral imaging systems used in medical and biological applications. Cust., customized or commercial-off-the-shelf systems; Comm., commercialized systems that can be purchased; LED, light-emitting diode; PGP, prism-grating-prism; AOTF, acousto-optic tunable filter; IMS, image mapping spectrometer; LCTF, liquid crystal tunable filter; FPI, Fabry–Perot interferometer; SRDA, spectrally resolved detector array; CTIS, computed tomographic imaging spectrometer; SFDI, spatial frequency-domain imaging; CCD, charge-coupled device; CMOS, complementary metal oxide semiconductors; CVD, cardiovascular disease; AMD, age-related macular degeneration; E, ex vivo; I, in vivo; H, human; A, animal; P, phantom. Weight refers to the weight of the camera and the lens only, not including other optical systems that may be used in the acquisition of biological samples.


Our literature search used the combination of the following search terms in three databases (Scopus, PubMed, and Web of Science): “compact” OR “miniature,” “hyperspectral” OR “multispectral,” and “camera” OR “imager.” We included only fully realized systems with definable dimensions and excluded developments of single components. There was a wide variety of contributions from the industrial, academic, and hobbyist spaces.

3.

Historical Progress

The earliest applications of spectral imaging systems were point scan cameras used for Earth remote sensing.71 A point scan camera called the Multispectral Scanner System (MSS) was used onboard the Earth Resources Technology Satellite (later called LANDSAT-1) in 1972.2 The system provided invaluable satellite images for the purpose of identifying, managing, and surveying geographical resources. To relay light, the system used a set of 24 fiber optic cables that transmitted light to the detectors. Because charge-coupled devices (CCDs) were not available at that time, the detectors used were photomultiplier tubes: extremely sensitive vacuum tubes that generated a voltage upon exposure to radiation in the visible to near-infrared range. Because of the extra components, MSSs were extremely bulky by today’s standards. The four-band system weighed up to 48 kg, measured 40×59×89 cm, and consumed up to 42 W of power.72 Nevertheless, its success prompted NASA and the Jet Propulsion Laboratory to develop multispectral cameras for future LANDSAT missions. In 1975, NASA launched LANDSAT-2 and equipped it with an even larger (64 kg, 54×62×126 cm) five-band MSS with added infrared capability. In 1979, NASA developed the airborne imaging spectrometer (AIS), which included a 32×32 pixel mercury cadmium telluride imaging sensor coupled with a silicon CCD multiplexer. AIS had a compact design to be flown on aircraft, measuring 30×30×20 cm.2 A series of AIS instruments followed, the most successful being the airborne visible/infrared imaging spectrometer (AVIRIS).
While new satellite-based spectral systems were being developed, hyperspectral imaging was deemed unnecessary for most space-based applications, and the only hyperspectral system that enjoyed long-term usage was the 202-wavelength Hyperion camera onboard the Earth Observing-1 (EO-1) satellite.2 At the same time, commercial hyperspectral cameras for land observation were developed with new imaging sensor technologies. An example was the compact airborne spectrographic imager (CASI), developed in 1989. CASI used a 512×288 pixel CCD to record up to 512 spatial pixels or 288 spectral bands in the 430 to 780 nm range.73

Up until 2000, progress in spectral imaging left much to be desired, especially when compared with parallel progress in commercial cameras and other consumer electronics.74 Development of compact spectral imaging systems prospered after the 2000s, owing to developments in portable spectroscopy.8 These developments did not stem from any new optical architecture, but rather from manufacturing methods that matured enough to create miniature imaging components. The number of transistors in an integrated circuit has been observed to double every two years for the last 50 years, a trend colloquially known as Moore’s law. Smaller imaging sensors that followed Moore’s law, lithography processes, microelectromechanical systems (MEMS), microcontrollers, and 3D printers were some of the innovations that made new spectral cameras smaller and lighter. With advances in consumer unmanned aerial vehicles (UAVs or drones), ultracompact spectral imaging systems were developed for the purpose of being carried by UAVs.75 Modern UAVs have payload limits ranging from 125 g to 5.5 kg, limiting the maximum weight of compact spectral cameras.76 However, UAVs were not the only impetus for compact and ultracompact imaging systems. As other fields, such as biomedical imaging, industrial imaging, and environmental monitoring, moved from laboratory analysis to on-site imaging, the demand for compact spectral cameras grew. In the early phase of development, tradeoffs were often required between performance and size. However, new generations of ultracompact devices seem to overcome those limitations altogether, as demonstrated by a recent line scan imaging system that weighed only 400 g (0.9 lbs), yet was capable of capturing 100 spectral bands at a resolution of <5 nm.77 Figures 1 and 2 illustrate this progress in size with a sample of spectral cameras over time.

Fig. 1

Timeline of the progress in size for spectral imaging. 1. MSS-1, 1972. 2. MSS-2, 1975. 3. CASI, 1989. 4. Digital airborne imaging spectrometer, 1993. 5. Compact high-resolution imaging spectrometer, 2000. 6. Headwall Hyperspec, 2009. 7. Resonon PikaL, 2012. 8. BaySpec OCI-1000, 2014. 9. Imec Snapscan, 2018. 10. Specim IQ, 2018. After the 2000s, spectral imaging systems became remarkably smaller due to manufacturing advances. All selected imaging systems required motion of the camera or the sensor to acquire the spectral cube.


Fig. 2

A selection of ultracompact spectral imagers. (a) Two snapshot imaging cameras that weighed <30  g (reproduced from Ref. 76). (b) A handheld snapshot camera that weighed <500  g (reproduced from Ref. 77). (c) A spatial scanning camera used in UAV applications (reproduced from Ref. 78). (d) A snapscan imaging camera being used in a laboratory setting (reproduced from Ref. 79).


4.

Acquisition Overview

The goal of spectral imaging is to acquire a datacube, a 3D block of data with two spatial dimensions and one spectral dimension (Fig. 3). In a datacube, the smallest unit of resolution is called the voxel, the equivalent of a pixel in digital images. The spatial dimensions represent the field of view of interest, and the spectral dimension represents the different wavelengths. There are five methods of acquiring the datacube, classified by how much of the datacube is captured within one exposure: point scanning, line scanning, spatial–spectral scanning, spectral scanning, and snapshot imaging.80 Because spatial–spectral scanning relies on translation to image the entire datacube, we group spatial–spectral scanning as a subset of line scanning.81 Acquisition methods can also be classified by how they acquire the spectral component: through interference filters, monochromators, or interferometers.80 Interferometers and interference filters rely on the same optical mechanism: both superimpose light to produce interference patterns. However, interference filters typically use interference to block certain wavelengths of light, whereas interferometers use interference to generate signals that can be measured and processed downstream. Each method offers its own engineering strengths and weaknesses, so choosing an appropriate datacube acquisition method for a specific application is important.

Fig. 3

Comparison of different methods of datacube acquisitions. The shaded regions correspond to the section captured in one exposure. The arrows show the direction of scanning. (a) The hypercube. (b) Point scanning. (c) Line scanning. (d) Spectral scanning. (e) Spatial–spectral scanning. (f) Snapshot scanning.


4.1.

Point Scanning and Line Scanning

Point scanning and line scanning, collectively called spatial scanning or imaging spectrograph methods, acquire the entire spectrum section by section and use mechanical means to scan the entire space. These methods are also known as “whiskbroom”/“spotlight” and “push-broom,” respectively, terms that originated in satellite imaging. In a point scanning imager, an aperture only allows light from a small section to pass through a monochromator, which disperses the light onto a sensor array. The method of scanning point by point in whiskbroom imagers is reminiscent of confocal microscopy, in which a pinhole is also used to block light that is out of focus. Hyperspectral confocal microscopy combines both methods with few modifications, producing 3D structures that can be analyzed by spectral values.82,83 In a line scanning imager, the aperture is a slit that allows a sliver of light to be dispersed onto a two-dimensional (2D) sensor plane (the direction of dispersion is perpendicular to the direction of the slit). Compared with point scanning, line scanning can capture the same field of view in less time, but the higher number of sensor elements means that there are potentially more elements to calibrate and a higher chance of sensor artifacts.6 Line scanning remains the dominant method of acquisition in biomedical imaging7 and remote sensing.81
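The line scanning (push-broom) acquisition described above can be sketched as follows. This is a minimal Python/NumPy sketch: `read_line_frame` is a hypothetical stand-in for one slit exposure, which yields all spectral bands for one spatial line, and mechanical scanning over h lines fills the datacube.

```python
import numpy as np

# Sketch of push-broom acquisition: each exposure yields a (w x n_bands)
# frame for one spatial line; scanning over h lines fills the datacube.
h, w, n_bands = 100, 640, 50

def read_line_frame(line_idx):
    """Hypothetical stand-in for one slit exposure (w spatial x n_bands spectral)."""
    rng = np.random.default_rng(line_idx)
    return rng.random((w, n_bands))

datacube = np.empty((h, w, n_bands))
for y in range(h):  # mechanical scan perpendicular to the slit
    datacube[y] = read_line_frame(y)
```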

As mentioned, the use of spectral imaging for practical purposes began with the MSS in 1972. In this system, the point-scanning imager used a mirror and the motion of the spacecraft to achieve perpendicular and parallel scanning, respectively.84 Nowadays, in the field of remote sensing, mirrors combined with the motion of the aircraft are still used as the means of acquiring the full spatial component.71,85 By contrast, in histology, fluorescence, and confocal microscopy imaging, the subject being imaged is moved with the help of programmable platforms.60 Several line scanning cameras that used mirrors to image nearby objects have been proposed. However, these devices all required spatial calibration to avoid image distortions.86,87 In the works by Sigernes et al.86 and Gutiérrez-Gutiérrez et al.,87 the mirror is on a rotational axis parallel to the line scanning slit. When the mirror rotates, only a line section gets reflected and focused onto the dispersive device.

4.2.

Spectral Scanning

In spectral scanning (also called staring, framing, or band sequential), the datacube is captured spatially all at once but only at selected wavelengths. The process of wavelength selection is done using bandpass spectral filters, which can be either interference filters, variable filters, or interferometers. Because no filter has an infinitely small bandwidth, the resulting image captured at each wavelength should be considered more as a function of the filter’s spectral response, quantified by the following equation:

I(i) = ∫₀^∞ mᵢ(λ) r(λ) dλ.
In this equation, I(i) is the intensity of the datacube captured by filter i, mᵢ(λ) is the spectral response of the filter, and r(λ) is the aggregated spectrum that reaches the imager. Some early spectral scanning systems used filter wheels, in which interference filters can be switched in and out. As the name suggests, the filters were arranged in a circle; during capture, the entire wheel rotated and cycled through every waveband. The advantages of filter wheels included cost and simplicity; the disadvantages included speed, size, lack of customization, and a small number of bands. An alternative to mechanical filter wheels was electronic tunable filters, mainly acousto-optic tunable filters (AOTF),88–90 liquid crystal tunable filters (LCTF),62,63,91–93 and Fabry–Perot interferometers (FPI). Other mechanisms for manufacturing tunable filters exist, such as surface plasmon coupled tunable filters;94 however, the three types described earlier remain the most popular. An LCTF uses birefringent crystals to bandpass light: birefringent crystals and polarizing filters are stacked in alternating layers, and modifying the birefringent crystals’ retardance with electrical inputs changes the polarization state of the output, which in turn bandpasses only selected wavelengths.91 An AOTF uses a crystal that selectively bandpasses wavelengths based on radio frequency inputs; it functions similarly to a diffraction grating but passes only one constructive waveband.95 Compared with LCTFs, AOTFs have faster switching times and lower power demands.93 They also have no moving components, making them the preferred method for vibration-sensitive applications.93 Interferometers refer to a broad class of spectral imaging layouts that use different arrangements of reflective surfaces to produce interference, such as the Michelson interferometer, the Sagnac interferometer, the Mach–Zehnder interferometer, and the FPI.96
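The filter-response integral above can be evaluated numerically. A minimal sketch, assuming an illustrative Gaussian filter response with a 10 nm FWHM centered at 550 nm and a flat incoming spectrum (all values are hypothetical):

```python
import numpy as np

# Discretized version of I(i) = integral of m_i(lambda) * r(lambda) d(lambda).
wl = np.arange(400.0, 701.0, 1.0)      # wavelength grid in nm (1 nm steps)

center, fwhm = 550.0, 10.0             # hypothetical filter parameters
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
m_i = np.exp(-0.5 * ((wl - center) / sigma) ** 2)   # Gaussian filter response
r = np.ones_like(wl)                   # flat spectrum reaching the imager

d_lambda = 1.0
I_i = float(np.sum(m_i * r) * d_lambda)  # Riemann-sum approximation of the integral
```

A narrower FWHM shrinks the area under mᵢ(λ) and thus the measured intensity, which is the throughput tradeoff discussed later in Sec. 4.6.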

4.3.

Spatial-Spectral Scanning

Spatial–spectral scanning (also called windowing) captures the datacube in both the spatial and spectral directions within a single exposure, effectively sampling a diagonal “slice” of the datacube. Even though datacube reconstruction is harder to visualize in spatial–spectral scanning, this method offers advantages in acquisition speed and movement. Methods of spatial–spectral scanning often use dispersion elements that are location-dependent, such as linear variable filters (LVFs).97 With a simple grating element in the fore optics, a line scanning spectral camera can be converted into a spatial–spectral scanning camera.98 The grating element transmits light that passes through the aperture slit based on both its spatial and spectral values. Pichette et al.79 developed a commercial device called the “Snapscan” imager that used location-dependent interference filters. In their system, a series of Fabry–Perot filters overlays the imaging sensor. The bandpass values of the filters vary across the translation direction, so moving the sensor samples both the spatial and spectral components. Fourier transform spectrometers can also be used as a mechanism to achieve spatial–spectral scanning.13,99

4.4.

Snapshot Scanning

Snapshot scanning cameras capture the entire spatial structure at multiple wavelengths in one exposure.100 The implementations of snapshot cameras are extremely diverse and can be classified into the following main categories: dispersive-based methods, coded-aperture methods, speckle-based methods, and spectrally resolved detector methods.101 Dispersive-based methods rely on dispersive elements (gratings or prisms) to split the incoming images into different wavelengths that are recorded either on a single sensor or on multiple sensors. An example of a system that uses multiple sensors to record the spectral images is the beam splitting camera.100 These types of cameras are often seen in television and movie production, although they only have three sensors, for red, green, and blue images. An example of a system that records the entire datacube on a single sensor is the computed tomographic imaging spectrometer (CTIS). CTIS is popular in the hobbyist and academic spaces, as it is easy to construct with low-cost components.102–104 However, the reconstruction of the datacube using CTIS requires inverse projection, which can be computationally expensive. Another problem is that CTIS records both the spatial and spectral data on a single sensor, which requires a tradeoff between spatial and spectral resolution. Coded-aperture methods use a patterned filter (mask/coded aperture) at the location of the aperture, which “encodes” the incoming wavelengths by lossy compression. The compressed data can record more light compared with CTIS, but this also means that some of the wavelengths will be lost.105 Speckle-based systems reconstruct the datacube by correlating the speckle data and the wavelengths. Spectrally resolved detector arrays (SRDA)39,53–55 use interference filters manufactured on top of an imaging sensor to capture snapshot images.
For example, color filter arrays, the most popular of which are Bayer filters, are used in consumer digital cameras to simulate human color vision.106 Snapshot cameras that use SRDA are fast, compact, and lightweight, and require no additional movement mechanics. However, they downsample the spatial dimension, which can result in aliasing of the spatial data if the Nyquist limit is not obeyed. There is also a tradeoff between the number of filters and the spatial pixel resolution: the more distinct filters in the mosaic, the lower the per-band spatial resolution and light throughput.100 Readers who are interested in snapshot imaging implementations should consult the literature review by Hagen and Kudenov.100
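To illustrate the SRDA spatial downsampling tradeoff, the sketch below assumes a hypothetical 4×4 repeating mosaic tile (16 bands): each band's image is obtained by sampling the raw frame with a stride of 4 in both spatial directions, so a 1024×2048 sensor yields only a 256×512 image per band.

```python
import numpy as np

# Sketch of spectral extraction from an SRDA mosaic sensor, assuming an
# illustrative 4x4 repeating filter tile (16 bands). Each band's image is
# the raw frame subsampled by the tile period in each spatial dimension.
tile = 4                                  # hypothetical mosaic period
H, W = 1024, 2048                         # raw sensor resolution
raw = np.random.default_rng(0).random((H, W))

bands = np.empty((H // tile, W // tile, tile * tile))
for dy in range(tile):
    for dx in range(tile):
        # strided slicing picks every pixel covered by filter (dy, dx)
        bands[:, :, dy * tile + dx] = raw[dy::tile, dx::tile]
```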

4.5.

Conversion from RGB to Spectral Images

There exists a separate line of research that generates spectral images without using spectral cameras. Regular color cameras use three color filters in the red, green, and blue (RGB) wavelengths to simulate human color vision. However, these filters are broadband filters with a large amount of overlap, and transformation algorithms can be used to extrapolate multispectral or hyperspectral data. Furthermore, hyperspectral data are often sparse, making recovery of spectral data more feasible.107 All conversion methods generate a mapping function that transforms three-wavelength data to multiwavelength data. The function is approximated by minimizing the differences between the generated spectral data and the expected spectral images. One of the first methods used for this was Wiener estimation.42,108,109 Arad and Ben-Shahar107 used a matching pursuit algorithm to construct a mapping from natural RGB to hyperspectral images. In recent years, machine learning algorithms, including neural networks, have been investigated to learn the mapping. Koundinya et al.110 proposed 2D and 3D convolutional neural network (CNN) systems that mapped RGB images to 31-waveband images in the 400 to 700 nm range. Alvarez-Gila et al.111 used generative adversarial networks (GANs) to produce spectral images from RGB. All these methods still face significant constraints. As noted by Signoroni et al.112 in a review of neural networks for spectral image analysis, several of these generation methods can only produce outputs in the visible light range. Furthermore, they require extensive training inputs of RGB images and corresponding spectral images, which themselves require a spectral camera to capture. Most input images used for training were outdoor images, which can make the mapping unsuitable for indoor or laboratory settings.
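The linear-mapping idea behind Wiener estimation can be sketched with synthetic data. In this minimal sketch, the camera sensitivity matrix and training spectra are random placeholders, not a real calibration; the estimator learns a 3 → n_bands matrix from paired RGB/spectral training data.

```python
import numpy as np

# Wiener-style linear estimator: learn W (n_bands x 3) from paired training
# data, then apply it to new RGB pixels. All data here are synthetic.
rng = np.random.default_rng(1)
n_bands, n_train = 31, 500

S = rng.random((n_bands, n_train))   # training spectra (ground truth)
A = rng.random((3, n_bands))         # hypothetical RGB camera sensitivities
R = A @ S                            # corresponding RGB responses

# W = S R^T (R R^T)^{-1}: least-squares mapping from RGB to spectra
W = S @ R.T @ np.linalg.inv(R @ R.T)

rgb_pixel = R[:, 0]                  # estimate the spectrum of one pixel
est = W @ rgb_pixel
```

Because only three broadband measurements are available, the estimate is the best linear guess, not an exact recovery, which is why the sparsity priors and learned mappings cited above improve on it.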

4.6.

Comparisons of Acquisition Methods

The rule of thumb has been that spatial scanning cameras can achieve high spatial and spectral resolution at the cost of acquisition time, snapshot cameras are fast, and spectral scanning cameras achieve high spatial resolution while trading between acquisition time and spectral resolution.6 Considering the state of the art available, it is important to re-examine this convention. We start by comparing two key spectral variables: the spectral resolution (defined as the full-width half-maximum, or FWHM, value at the center wavelength8) and the spectral range. In this paper, we use the term “bandwidth” interchangeably with FWHM. Some authors, such as Hagen and Kudenov,100 found the term “resolution” in digital spectral cameras confusing, as it can also be used to describe the number of spectral samples. Point scan and line scan systems have reliable spectral resolution: many commercial systems have FWHM in the range of 2.5 to 5 nm and are capable of capturing more than 100 spectral bands at once.81 One line scan system for remote sensing reported an FWHM as narrow as 1.85 nm.113 Spectral scanning systems that use filter wheels typically achieved 3 to 10 bands,38,114–116 and each individual filter had a bandwidth ranging anywhere from 30 to 250 nm.117 On the other hand, tunable filters can achieve very fine spectral resolution. AOTFs can achieve a bandwidth of 0.1 nm;118 however, commercially available AOTF filters often have a minimum FWHM in the 2 to 6 nm range.91,119 LCTF systems often have coarser spectral resolution than AOTFs, ranging from 4 to 30 nm.91 Interferometers typically achieve good spectral resolution: FPIs driven by piezoelectric actuators had spectral bandwidths in the range of 10 to 20 nm.120 The FWHM also affects the light throughput, as interference filters with smaller bandwidths have lower light throughput. Some filters’ specifications use the term “optical density” to quantify the amount of energy blocked.
Applications with weak signals, such as autofluorescence imaging, require filters with an optical density of 6 or greater. However, a low signal also means a low signal-to-noise ratio (SNR), which affects image quality.
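Optical density relates to transmission as OD = −log₁₀(T), so an OD-6 filter passes only one part in a million of the blocked light. A quick sketch of the conversion (function names are our own):

```python
import math

# Optical density quantifies blocking: OD = -log10(transmission).
def transmission(od):
    """Fraction of light transmitted by a filter of the given optical density."""
    return 10.0 ** (-od)

def optical_density(t):
    """Optical density corresponding to a transmitted fraction t."""
    return -math.log10(t)

t_od6 = transmission(6)   # an OD-6 blocking filter passes one part in a million
```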

In terms of functional range, the wavelength ranges of these devices depend more on the sensor type than on the acquisition method. The functional range of silicon-based sensors is constrained to 400 to 1000 nm because of silicon’s photoelectric properties. Researchers who want to investigate wavelengths in the short-wave infrared (SWIR) and beyond (1000 to 12,000 nm) should use an alternative semiconductor material such as indium gallium arsenide (InGaAs, also named GaInAs). In terms of spectral wavelength selection, LCTFs, AOTFs, and FPIs provide an advantage over spatial scanning, snapshot scanning, and filter wheels, because the former can tune to arbitrary wavelengths while the latter have discrete wavelength selection. In many cases, the specific selection of center wavelengths is not a major concern, so this advantage is not often exploited.

The spatial pixel resolution of spatial scanning and spectral scanning systems depends mostly on the sensor resolution and the binning (grouping of several pixels into one) used. In snapshot cameras, the optical architecture determines the maximum pixel resolution available. If the snapshot architecture captures the entire datacube representation on a single detector plane, as in the case of the CTIS, IMS, and SRDA architectures, then the spatial resolution is often poor. Ford et al.12 used a 2048×2048 pixel detector to capture a datacube of dimension 203×203×55 using CTIS. A recent CTIS system captured up to 315 wavebands but had a pixel resolution of 116×110 pixels.102 Many other commercial snapshot cameras can capture 100 or more wavebands but have spatial pixel resolutions of no more than 100×100.81 The acquisition time of point scanning, line scanning, and spatial–spectral scanning systems can be described as rapid. A recent series of line scanning cameras for industrial settings was capable of scanning 576 spatial lines/s or 2880 spatial–spectral elements/s.121 Because spatial and spatial–spectral scanning systems are used in many industrial and remote sensing environments, the readout speed of these systems also depends on the translation speed of the camera or the samples. The acquisition time of spectral scanning systems relies on the switching time of the filters. Filter wheels achieve switching times on the order of seconds, LCTFs on the order of 50 to 500 ms, and AOTFs on the order of 10 to 50 μs.93,100 As for FPIs, the switching time ranges from 5 to 50 ms,122 although <3 ms switching times have been reported.123 The fact that snapshot cameras can capture the datacube in one exposure does not necessarily mean that they have low acquisition times.
Snapshot systems using mosaic filters can be very fast: one system was capable of capturing a 256×256×32 pixel datacube at a rate of 340 images per second.121 Processing time has also long been a consideration. Unlike spatial scanning, spectral and snapshot scanning methods all require postprocessing to stitch together the spectrum.74 Some methods require extensive processing to generate the datacube, such as CTIS (Fourier slice theorem) or Fourier transform spectroscopy (inverse Fourier transform). However, with advances in computing power, the processing time required to generate the datacube after acquisition is becoming similar across all platforms.100
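As a back-of-envelope comparison, the total acquisition time of a spectral scanning system scales as n_bands × (exposure + switching time). The sketch below uses illustrative midpoints of the switching-time ranges cited above; the exposure time and band count are hypothetical:

```python
# Back-of-envelope total acquisition time for a spectral scanning system:
# n_bands sequential exposures, each followed by one filter switch.
def scan_time(n_bands, exposure_s, switch_s):
    return n_bands * (exposure_s + switch_s)

n_bands, exposure = 50, 0.010                    # 50 bands, 10 ms exposure each
t_wheel = scan_time(n_bands, exposure, 1.0)      # filter wheel: ~1 s per switch
t_lctf = scan_time(n_bands, exposure, 0.100)     # LCTF: ~100 ms per switch
t_aotf = scan_time(n_bands, exposure, 30e-6)     # AOTF: ~30 us per switch
```

Under these assumptions the filter wheel takes tens of seconds while the AOTF scan is dominated by the exposure time alone, consistent with the orders of magnitude cited above.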

Light throughput, the amount of light in the datacube that the detector can measure, affects the SNR. It has long been known theoretically that some methods of spectroscopic acquisition have higher light throughput than others, which in turn increases the SNR.100 However, among spatial and spatial–spectral scanning systems, the theoretical differences in SNR are not significant.14 Hagen et al.124 argued for the snapshot advantage, which is the increased throughput that comes from the fact that snapshot cameras capture the entire datacube at once. However, the snapshot advantage is only available for a select number of snapshot architectures, such as CTIS, IMS, and CASSI. In practice, throughput also depends on the filter transmission rate and the quantum efficiency (QE) of the sensors. Finally, some imaging applications are more suitable for certain methods of acquisition. Confocal microscopy, for example, can only be coupled with point scanning imaging because only a small section of light is imaged.100

5.

Technical Aspects of Compact Spectral Cameras

Components of a spectral imager include the optical system (such as lens, endoscope, or microscope), the spectral dispersion system (such as a monochromator or interferometer), the digital image detector, the control module, and the mechanical elements (gears and housing).6 As the spectral and optical systems depend on how the datacube is acquired, we group them by acquisition method instead. For some spectral imaging systems, illumination is also a critical component.

5.1.

Spatial Scanning

Miniature spatial scanning systems use two main classes of architectures: reflective grating and transmission grating/prism.74 In both, after the light passes through the aperture, it is collimated (made parallel) and then dispersed, and the individual wavelengths are refocused onto the array detector.15 In the Czerny–Turner reflective grating configuration, a curved mirror acts as the collimating mirror, and a second curved mirror refocuses the diffracted light onto the imaging sensor. In the Offner configuration, three concentric elements (collimating mirror, reflective grating mirror, and focusing mirror) make up the optical components [Fig. 4(a)]. Even though the Offner configuration provides better spectrographic ability, manufacturing it was not possible until the 1990s owing to the lack of precision lithography technology.74 Offner spectrometers are commonly used due to their low aberration.127 Warren et al.126 used a monolithic block of glass as the transmitting medium for the Offner relay, reducing the volume and weight of the imager to 0.54 kg [Fig. 4(b)].

Fig. 4

The Offner spectrograph is an example of a reflective grating configuration. (a) The working mechanism of an Offner spectrograph camera (adapted from Ref. 125). (b) An example of a device that uses an Offner spectrograph (reproduced from Ref. 126).

JBO_28_4_040901_f004.png

Many low-cost and compact systems used prisms128–130 or transmission gratings70,85,102,131–133 as the monochromator. The problem is that these systems are prone to artifacts caused by misalignment of the optical components.85 Key artifacts include chromatic aberration (the lens not focusing all wavelengths properly), smile (bending of the spectral line), keystone (bending of the spatial data), and stray light (unwanted light from other sources).134 Some optical architectures produce fewer artifacts; for example, Offner spectrometers show less smile and keystone than Czerny–Turner and transmission grating architectures.10,135 High-quality optical components, such as achromatic lenses, also reduce aberration to a degree, but they can increase the manufacturing cost of the device. Laboratory calibration can also be performed to reduce smile and keystone.127 In commercial spatial scanning systems, smile ranges between −0.1 and 0.1 pixels, and keystone reaches a maximum of 3.5 pixels at 1000 nm.136 The grism (prism and grating) can reduce some chromatic aberration and is also a common monochromator in compact spectral imagers.134,137 The prism–grating–prism (PGP) was first seen in the work of Aikio74 as a method of building ultracompact push broom imagers. In a PGP, two identical prisms sandwich a volume transmission grating [Fig. 5(a)]. Compared with a prism or transmission grating alone, the PGP disperses light linearly, has high throughput, and is extremely robust.74 The greatest advantage of the PGP, however, is its ability to disperse light within a small space, enabling the development of miniaturized spectral imaging systems. Multiple compact spectral imagers used a PGP as their dispersive element121,137,139–141 [Fig. 5(b)]. Some commercial systems also use high-quality manufactured transmission holographic gratings, such as the one reported by Wu et al.,77 who demonstrated an ultracompact line scan imager using a volume phase grating.
Table 2 summarizes the different common methods of dispersion used in spatial scanning imaging cameras.

Fig. 5

The PGP is an example of a transmission grating configuration in which no reflective device is used. (a) The working mechanism of a PGP imaging camera (adapted from Ref. 125). (b) An example of a device that uses a PGP (reproduced from Ref. 138).

JBO_28_4_040901_f005.png

Table 2

Optical comparison among common types of spatial scanning imagers.

Type | Collimating method | Dispersion method | Focusing method
Czerny–Turner spectrometer | Concave mirror | Flat reflective grating | Concave mirrors
Offner spectrometer | Concave mirror | Convex reflective grating | Concave mirrors
Prism–grating–prism | Optical lenses | A transmission grating between two prisms | Optical lenses
Holographic grating | Optical lenses | Volume phase holographic transmission grating | Optical lenses
Grism | Optical lenses | Transmission grating followed by prism | Optical lenses

Due to their nature of acquisition, many laboratory-based spatial scanning imagers require either the stage or the camera to move. These methods of acquisition are not suitable for bioengineering applications such as surgical guidance or in vivo imaging.129 To develop spectral imagers that are compact and usable for live imaging, new spatial scanning acquisition techniques have been devised to overcome the movement problem. These techniques use internal microelectronic devices such as digital micromirror devices (DMDs)142 and piezoelectric motors70,79,143 to move the imaging sensors or optical components. Several commercial systems, such as those shown by Wu et al.77 and Behmann et al.,144 are fast enough to capture accurate spatial data entirely handheld.

5.2.

Spectral Scanning

Of the compact spectral scanning systems that we surveyed, tunable filters were preferred over filter wheels due to their small size and narrow bandwidth. Nevertheless, some have succeeded in miniaturizing filter wheel systems; for example, Kim et al.22 applied a filter wheel with nine wavebands on top of a smartphone camera. While electrically tunable filters can achieve arbitrary waveband selection, efforts to miniaturize them have been hindered by the lack of suitable compact configurations. Both the AOTF and the LCTF require large external power and a long optical pathway, which makes them unsuitable candidates for ultracompact systems.94 While the filters in AOTF and LCTF systems are themselves compact and lightweight (often <1 kg),88,89,118,145,146 it is the weight of the filter driver that adds to the weight of these devices. Ishida et al.147 were able to deploy an LCTF-based system on top of a UAV; however, the UAV was only able to fly for 10 min due to the high payload weight. On the other hand, interferometric imaging spectrometers are becoming increasingly compact and have found their way into many applications. We discuss two primary types of compact spectral cameras that use interferometers as their dispersive elements: Fourier transform imaging spectrometers (FTIS) and FPIs (Figs. 6 and 7).

All the previously discussed systems used filters to select wavelengths from broadband light sources. However, the light sources themselves can serve as the mechanism for spectral scanning. In such systems, an arrangement of narrow-band light sources illuminates the subject at different wavelengths, and the reflected light measured by the electronic detector corresponds to the spectral response of the subject. Light-emitting diodes (LEDs) are a common way to achieve the variable illumination useful for multispectral or hyperspectral systems. Various LEDs are discussed in Sec. 5.6.1. Table 3 compares the common methods used to produce spectral scanning imagers.

Fig. 6

FTIS. (a) Working mechanism of an FTIS system (adapted from Ref. 148). (b) Image of a compact FTIS camera that uses a focal plane birefringent interferometer (FPBI in the figure) in front of the detector (reproduced from Ref. 149).

JBO_28_4_040901_f006.png

Fig. 7

FPI. (a) Working mechanism of an FPI system (adapted from Ref. 150). (b) A compact FPI chip that is driven by electrostatic actuation (reproduced from Ref. 151).

JBO_28_4_040901_f007.png

Table 3

Comparisons of several types of spectral scanning cameras. AOTF, acousto-optic tunable filter; LCTF, liquid crystal tunable filter; FTIS, Fourier transform imaging spectrometer; FPI, Fabry–Perot interferometer; FWHM, full-width at half-maximum (measured in nm). Wavelengths refer to the operating range of wavelengths. VIS, visible wavelengths (400 to 700 nm); NIR, near-infrared wavelengths (700 to 2000 nm). Switching speed refers to time to change from one wavelength to another one; ms stands for milliseconds (1/1000 s).

Type | Wavelength operating range | FWHM (nm) | Number of bands | Switching speed
Filter wheel | VIS-NIR | 30 to 250 | <50 | 1 s
LED-based | VIS | 20 to 70 | <100 | 500 ms
AOTF | VIS-NIR | 2 to 6 | >1000 | 1 ms
LCTF | VIS-NIR | 5 to 30 | >1000 | 100 ms
FTIS | VIS-NIR | 10 to 50 | >1000 | 10 ms
FPI | VIS-NIR | 10 to 50 | >1000 | 10 ms

5.2.1.

Fourier transform imaging spectrometers

An interferometer splits the incoming light wave into two waves that are then superimposed onto each other. The superimposition has a slight delay, producing an interference pattern (interferogram) whose magnitude depends on the delay. In the late 19th century, Albert Michelson showed that one can use the inverse Fourier transform to convert the interferogram into the actual spectrum of the incoming light.152 This is the basis of Fourier transform spectrometry [Fig. 6(a)]. When detector noise dominates other noise sources, Fourier transform spectrometry has higher SNR than dispersive-based spectrometers and higher throughput than slit-based spectrometers.153 However, previous imaging spectrometers that used mechanical interferometers (also called Michelson interferometers) suffered from many drawbacks; mainly, they required accurate mechanical movement, making them unsuitable for field deployment.154 In recent years, birefringent crystals (which have different refractive indices depending on light polarization and propagation direction) have been used to build compact interferometers for FTIS.96,149,155 The majority of birefringent crystal schemes used either Wollaston or Savart prisms.155 Both optical schemes use prisms to separate polarized light and collimate it to introduce a delay. Perri et al.155 introduced a birefringent interferometer called the translating-wedge-based identical pulses encoding system and commercialized a compact hyperspectral camera using it. Xu et al.149 used a birefringent interferometer at the focal plane to produce an ultracompact spectral camera [Fig. 6(b)].
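The core recovery step can be sketched numerically: a toy interferogram built from two spectral lines is converted back into a spectrum with a fast Fourier transform. The wavenumbers and OPD sampling below are arbitrary illustrative values, not parameters of any cited instrument.

```python
import numpy as np

# Toy Fourier-transform spectrometry: the interferogram sampled over
# optical path difference (OPD) is a sum of cosines, one per spectral
# line; an FFT recovers the spectrum. All values are illustrative.

n = 1000
d_opd = 1e-7                                  # OPD sampling step (m)
opd = np.arange(n) * d_opd
sigma1, sigma2 = 1.6e6, 1.8e6                 # two lines (wavenumber, 1/m)

interferogram = (np.cos(2 * np.pi * sigma1 * opd)
                 + 0.5 * np.cos(2 * np.pi * sigma2 * opd))

spectrum = np.abs(np.fft.rfft(interferogram)) # recovered spectral magnitude
sigma_axis = np.fft.rfftfreq(n, d=d_opd)      # wavenumber axis (1/m)

top2 = sorted(sigma_axis[np.argsort(spectrum)[-2:]])
print(top2)                                   # peaks near 1.6e6 and 1.8e6 1/m
```

The two strongest bins land at the input wavenumbers, with the weaker line recovered at half the amplitude of the stronger one.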

5.2.2.

Fabry–Perot interferometers

In its simplest form, an FPI is an arrangement of two parallel or curved mirrors that passes a band of wavelengths determined by the separation distance between the mirrors. Suppose that two highly reflective surfaces are separated by a distance d by a medium with refractive index n; a collimated beam arriving at normal incidence will then be transmitted, by constructive interference, at the wavelengths satisfying 2dn = mλ for m = 1, 2, 3, …,156 with all other wavelengths being almost entirely reflected by the Fabry–Perot filters [Fig. 7(a)]. An FPI can therefore become a tunable filter by varying the distance between the two mirrors. However, FPI filters transmit wavelengths periodically, and the spacing between two transmitted wavelengths is called the free spectral range (FSR):

FSR = λ²/(2d).
In this equation, the FSR and the mirror separation are inversely related: to increase the FSR of the FPI, a small value of d must be selected. If the FPI operates within the infrared region, this value can be as low as several microns.157 Like the FTIS, the FPI has a throughput advantage compared with dispersive-based spatial scanning cameras. However, the realization of FPIs in spectrometry is relatively recent because their fabrication requires highly reflective surfaces to narrow the transmitted bandwidth. Instead of a single reflective medium, alternating high and low refractive index materials are arranged to create highly reflective Bragg mirrors.158,159 The reflectivity of the mirrors determines the FWHM of the filter through the following relationship:
F = π√R/(1 − R),
where F is the finesse (the ratio of the FSR to the FWHM) and R is the reflectivity of the cavity mirrors. If the distance between the mirrors is fixed, the FPI is called an etalon and is more often used in LVFs and snapshot scanning cameras (see Secs. 5.3.2 and 5.3.5). There are many methods to vary the distance between the mirrors. In piezo-actuated methods, piezo devices produce strong physical displacements when a voltage is applied. In capacitive or electrostatically actuated methods, the moving plate is tensioned by springs and moves under electrostatic force157 [Fig. 7(b)]. FPIs can be manufactured through photolithography and assembled through either surface micromachining or bulk micromachining (see Sec. 5.4 for more discussion). This manufacturing process enables ultracompact FPI filters. From the equations, it is important to note that the constructive interference in an FPI passes not only the central wavelength but also the other orders that satisfy 2dn = mλ. Appropriate long-pass and short-pass blocking filters should therefore be used.54
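These relations (transmission orders, FSR, and finesse) can be illustrated with a short numerical sketch; the 1 μm gap and R = 0.9 reflectivity below are arbitrary illustrative values.

```python
import math

# Numerical sketch of the ideal FPI relations in the text: transmission
# orders (2*d*n = m*lambda), free spectral range, and finesse. The 1 um
# gap and R = 0.9 reflectivity are arbitrary illustrative values.

def transmitted_wavelengths_nm(d_nm, n=1.0, lo=400.0, hi=1000.0):
    """Wavelengths satisfying 2*d*n = m*lambda inside [lo, hi] nm."""
    out, m = [], 1
    while 2 * d_nm * n / m >= lo:
        lam = 2 * d_nm * n / m
        if lam <= hi:
            out.append(round(lam, 1))
        m += 1
    return out

def fsr_nm(lam_nm, d_nm):
    return lam_nm ** 2 / (2 * d_nm)            # FSR = lambda^2 / (2d)

def finesse(R):
    return math.pi * math.sqrt(R) / (1 - R)    # F = pi*sqrt(R)/(1 - R)

d = 1000.0                                     # 1 um mirror gap, in nm
print(transmitted_wavelengths_nm(d))           # [1000.0, 666.7, 500.0, 400.0]
print(round(fsr_nm(500.0, d) / finesse(0.9), 1))  # FWHM near 500 nm: 4.2 nm
```

The first print shows why blocking filters are needed: a single 1 μm gap passes four different orders across the visible range, not one wavelength.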

5.3.

Snapshot Scanning

The mechanisms used to acquire a spectral cube in a snapshot manner are numerous. Here, we describe the common methods used in compact spectral imaging: CTIS, SRDA, compressive sensing, the image mapping spectrometer (IMS), and the LVF.

5.3.1.

Computed tomographic imaging spectroscopy

The basis for CTIS was proposed in the early 1990s and was refined by Descour and Dereniak11 and Johnson et al.104 After an objective lens, light passes, in order, through the dispersive elements of the CTIS: an aperture, a collimator lens, and a grating or reflective dispersive device [Fig. 8(a)]. The aperture can be either a square or a slit, depending on whether the desired purpose is imaging or line spectrometry. After the dispersive device, a focusing lens focuses the light onto the staring sensor. What appears on the sensor is a series of projections of the hypercube, arranged with the zeroth order in the center and higher orders farther from the center. Dispersive devices are often transmissive, but reflective devices have also been developed. Reconstruction from the projection slices is done using the Fourier slice theorem. Reconstruction is more accurate with a higher number of projections. However, with limited sensor size, a higher number of projections also means that the reconstructed hypercube will have a lower spatial resolution. Due to advances in computing power, reconstruction-based systems such as CTIS can now be realized at lower cost. Habel et al.103 demonstrated a CTIS camera that uses a DSLR camera and low-cost components. Salazar-Vazquez and Mendez-Vazquez102 demonstrated an entirely open-source CTIS system with 3D-printed housing and off-the-shelf optical components. Their imager had a significantly lower cost yet achieved a higher number of wavebands than previous CTIS cameras.

Fig. 8

Optical layouts of some common snapshot imaging systems. (a) CTIS. (b) Coded-aperture imaging. (c) SRDA. (d) IMS (reproduced from Ref. 100).

JBO_28_4_040901_f008.png

5.3.2.

Spectrally resolved detector array

While Bayer filters are common and can be manufactured cheaply for consumer electronics, the same cannot be said for the SRDAs used in snapshot cameras [Fig. 8(b)]. This comes down to the fact that Bayer filters use organic pigments or dyes, which are cheap but have large bandwidths.106 To manufacture SRDAs with bandwidths narrow enough for accurate scientific use, other types of interference filters must be used, including plasmonic filters, silicon nanowires, Fabry–Perot etalons, cavity-enhanced multispectral photodetectors, and multilayer quantum-well infrared photodetectors.100,106 Compared with architectures for spatial and spectral scanning cameras, SRDAs are typically not robust across multiple applications: the number and values of the wavebands are fixed, which means that either the application must be specific or the SRDA must be custom-made, both of which limit research efficiency. Nevertheless, SRDA systems have found many applications in bioimaging; we found systems that used SRDAs in fluorescence microscopy,23 fluorescence endoscope imaging,160,161 skin imaging,40 and fundus imaging.54 Recovering the full spectral image from the acquired mosaiced image is not a simple task because the mosaic image is a sparse representation of the captured data. Multiple authors have proposed generic algorithms to tackle this problem. Miao et al.162 proposed a binary tree algorithm to reconstruct the final spectral image. Wu et al.163 used sparse encoding to estimate reconstruction candidates and then used a heuristic method to search for the optimal reconstruction. Sawyer et al.164 compiled different established reconstruction algorithms and produced a Python package for open-source distribution.
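As a minimal illustration of mosaic-to-cube recovery, the sketch below rebuilds a 16-band cube from a toy 4×4 mosaic frame by simply collecting the one sample per cell for each band, trading spatial resolution for spectra; the published algorithms cited above interpolate far more carefully.

```python
import numpy as np

# Minimal mosaic-to-cube recovery for a 4x4 SRDA pattern: each 4x4 cell
# of the raw frame holds one sample of each of 16 bands; here we simply
# gather the one sample per cell for every band, trading spatial
# resolution for spectra. Published methods interpolate more carefully.

def demosaic_nearest(raw, cell=4):
    h, w = raw.shape
    cube = np.empty((h // cell, w // cell, cell * cell), raw.dtype)
    for i in range(cell):
        for j in range(cell):
            band = i * cell + j
            cube[:, :, band] = raw[i::cell, j::cell]  # one sample per cell
    return cube

raw = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 raw mosaic frame
cube = demosaic_nearest(raw)
print(cube.shape)                               # (2, 2, 16)
```

Note the 16-fold loss of spatial pixels relative to the raw frame, which is the resolution penalty of mosaic-filter snapshot imaging discussed earlier.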

There are two methods of manufacturing SRDAs: one is directly depositing and etching the filter layer on top of the sensor (monolithic integration), and the other is producing the filter layer separately and then mounting it on top of the sensor (hybrid integration).165 On the monolithic integration end, a group of researchers in Belgium advanced many aspects of SRDA manufacturing using FPIs.76,79,117,121,166 By depositing etalons of different cavity heights directly on top of the sensor, the resulting effect is like depositing interference filters with different bandpass values. As in the engineering of movable FPIs, Bragg mirrors should be used in place of a single reflective material such as silver or aluminum.117 The materials varied; however, TiO2 and SiO2 were commonly used as the high and low refractive index materials.167 For a more in-depth article on the deposition and etching process, refer to Ref. 168. Out of this work, the Belgian group developed many ultracompact commercial snapshot, spatial, and spatial–spectral scanning cameras that use different interference filter patterns. For snapshot cameras, the filters were grouped into 4×4 square cells repeated throughout the entire sensor.54,55,76 For spatial and spatial–spectral scanning cameras, the filters change bandpass values (and cavity heights) linearly across the sensor, resulting in a "staircase" pattern.121,169 Most recently, SRDAs using etalons have been monolithically integrated on top of InGaAs sensors, making these spectral cameras functional in the SWIR range as well.170,171 Elsewhere, other researchers are finding ways to bring manufacturing costs down. Yu et al.172 proposed a batch wafer manufacturing method that used silver and aluminum oxide (Al2O3) as the dielectric materials.

5.3.3.

Compressive sensing

Coded-aperture imaging cameras project the datacube onto the 2D imaging sensor through a coded mask [Fig. 8(c)]. In theory, it is not possible to resolve the image obtained from the coded aperture back into the datacube because many different datacubes can produce the same image on the sensor. However, by imposing constraints on the datacube, such as sparsity and low variation, reconstructing a unique datacube becomes possible. Žídek et al.173 demonstrated that a compact spectral camera using compressive sensing can be constructed from off-the-shelf components and several custom optical lenses.
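The role of the sparsity constraint can be illustrated with a toy recovery problem: a sparse signal observed through an underdetermined random sensing matrix is reconstructed with iterative soft thresholding (ISTA). All dimensions and parameters below are illustrative choices, unrelated to any cited system.

```python
import numpy as np

# Toy compressive-sensing recovery illustrating the sparsity constraint:
# a 3-sparse signal observed through 40 random measurements (out of 100
# unknowns) is recovered with ISTA. The dimensions, step size, and
# threshold are illustrative choices, not parameters of a cited system.

rng = np.random.default_rng(0)
n, m, k = 100, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [1.0, -2.0, 1.5]
y = A @ x_true                                 # underdetermined: m < n

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2         # gradient step <= 1/L
lam = 0.01                                     # sparsity weight
for _ in range(2000):                          # ISTA iterations
    z = x - step * (A.T @ (A @ x - y))         # gradient step on ||Ax-y||^2/2
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print(np.max(np.abs(x - x_true)))              # small reconstruction error
```

Without the soft-threshold step, gradient descent alone would converge to one of infinitely many solutions consistent with y; the sparsity penalty is what singles out the true datacube-like signal.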

5.3.4.

Image mapping spectrometer

The IMS (also called the image slicing spectrometer) provides a one-to-one mapping of the datacube voxels onto the detector's pixels. The captured image is optically "sliced" by thin strips of mirrors, and a dispersive element then disperses the spectral elements of the image onto the sensor. The mirrors are arranged such that, after dispersion, the spatial–spectral dispersion of the sliced image fills the sensor [Fig. 8(d)]. IMS originated in astronomy and has been used in fluorescence microscopy174 and retinal imaging.175 In recent years, several compact systems that use IMS have emerged. Bedard et al. built a compact IMS system with high spectral–spatial resolution (350×355×41 pixels) and a good acquisition rate (7.2 frames/s). Pawlowski et al.176 used a lenslet array as the optical slicing component in their ultracompact snapshot system; the optical element measured only 27×27×8 mm, and the resolution was 210×279×54 pixels.

5.3.5.

Linear variable filter

An LVF is a monochromator whose dispersive property varies with spatial location. It is typically used in spatial–spectral imagers (see Sec. 3.3 for the "Snapscan" concept). Ding et al.27 adapted the concept to snapshot imaging by incorporating lenslets (small lenses that project subimages onto small parts of the sensor). The LVF is created using an FPI of varying cavity height. The lenslet array splits the image into subimages, each of which falls onto a different part of the LVF. Snapshot imaging is achieved by combining the spectrally filtered subimages into the datacube. Conceptually, this approach is similar to SRDA and suffers from similar drawbacks.

5.4.

Mechanical Components

Mechanical systems refer to the nonoptical systems that drive the optical components and provide the mechanical housing. Here, we discuss MEMS, which are commonly used in compact spectral cameras. MEMS refers to small systems that are manufactured using semiconductor fabrication techniques. These devices were hypothesized as theoretically possible as early as the 1960s177 and now encompass a large range of devices including RF switches, cantilevers, piezoelectric actuators, comb-drive actuators, and resonators.178 MEMS are electrically reliable, dissipate little heat, and can be scaled up to large-volume, low-cost manufacturing processes.8 Prior to spectral imagers, MEMS were used in portable spectrometers.179–181 The single most important process in the manufacturing of MEMS is photolithography, the process of etching complex nanopatterns into a photosensitive polymer using light.182 Photolithography enables MEMS to be produced in high volume and with consistent quality. To integrate multiple MEMS components, two main methods are used: surface micromachining (depositing the desired system on top of a sacrificial layer that is later washed away) or bulk micromachining (directly shaping the substrate without the need for a sacrificial layer). In hyperspectral and multispectral imaging systems, both surface and bulk micromachining have been used to produce MEMS.157 In FPIs, MEMS serve as the actuator that drives the distance between the two reflective surfaces. This can be accomplished using either piezo-actuators or electrostatic actuators.183 According to Trops et al.,183 piezo-actuated FPIs have larger optical apertures and higher SNR, whereas electrostatically actuated FPIs are mass-producible but have smaller optical apertures and lower SNR. In terms of spectral range, piezo-actuated MEMS FPIs have a wider tuning range than electrostatically actuated MEMS FPIs.
Rissanen et al.184 and Näsilä et al.159 used surface micromachining to produce MEMS FPI spectral cameras that work with mobile phone cameras. Näsilä et al.123 used the same technique to produce an ultracompact (<40 g) spectral camera. Another type of MEMS is the digital micromirror device (DMD), an array of microscopic mirrors on a chip that can be individually activated and rotated. A common use of DMDs is in push-broom imaging, where the DMD replaces the slit translation mechanism: the DMD selects a narrow section of the image and reflects that section onto the grating. This mechanism was seen in works by Arablouei et al.142 and Dong et al.185,186 A comb-drive MEMS was used by Wang et al.187 to rotate a mirror within their handheld spectral imaging camera. There are notable limitations to the range of movement in MEMS. On small scales, mechanical strain and stress behave much differently than in their macroscale counterparts. The pull-in phenomenon occurs when the electrostatic forces between MEMS elements exceed the restoring mechanical forces, which "pulls in" the MEMS components toward each other and can cause a breakdown.178 In spectral imagers, this is most relevant to MEMS-enabled FPI imagers, in which the air gap between the mirrors cannot be made smaller than two-thirds of the initial unactuated gap without triggering pull-in.159,183,188
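The two-thirds gap limit directly bounds the first-order tuning range of an electrostatically actuated FPI, as the following sketch shows; the 300 nm unactuated air gap is an illustrative value.

```python
# The pull-in limit means the gap can only close to 2/3 of its rest value,
# bounding the first-order (m = 1) passband of an electrostatic MEMS FPI.
# The 300 nm unactuated air gap is an illustrative value.

def tuning_range_nm(d0_nm, n=1.0, order=1):
    """Wavelength range for gaps between the pull-in limit and d0."""
    d_min = 2.0 * d0_nm / 3.0          # smallest stable gap (pull-in limit)
    lam_min = 2.0 * d_min * n / order  # 2*d*n = m*lambda at the limit
    lam_max = 2.0 * d0_nm * n / order  # at the unactuated gap
    return lam_min, lam_max

lo, hi = tuning_range_nm(300.0)
print(round(lo, 1), round(hi, 1))      # 400.0 600.0
```

In this illustrative case, the filter spans only 400 to 600 nm at first order, which is one reason piezo-actuated FPIs, free of the pull-in limit, offer wider tuning ranges.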

For compact devices intended for outdoor use, the mechanical housing is as important as every other component. Crocombe8 discussed the importance of rigid and durable housing in portable spectrometers used in manufacturing. Even though devices in a clinical setting are not subject to harsh environmental factors in the same way that devices in industrial or earth science fields are, mechanical housing is still an important factor in biomedical imaging devices. Many housing systems, if used in noncommercial devices, are likely to be highly customized using 3D printing. 3D printing has many advantages over traditional manufacturing, such as the ability to go from design to print within a short amount of time, the low cost of plastic, the lack of screws and adhesives, and the fact that, when printing different designs, the printer configuration remains largely the same.85,189 The rise of 3D printing in optical instrumentation is due not just to the wide availability of 3D printers but also to the open-source movement.190 For a review of 3D printing technologies and additive manufacturing, see Ngo et al.191 3D printing can be used for many components of spectral imaging systems. Ghassemi et al.192 and Cavalcanti et al.59 used 3D printing to develop biological phantom models for hyperspectral imaging. Ortega et al.60 created custom 3D-printed gears to mechanically move the sample in a push broom microscopic imager. However, the most common use of 3D printing in spectral imagers is for the housing of optical and dispersive elements.85,102,131,159,187,193 Once researchers and hobbyists had access to 3D printers, design and manufacturing became an iterative process owing to the speed and low cost that 3D printing brings. Design is often done with the help of computer-aided design (CAD) software.
The most common 3D printing method used was fused deposition modeling (FDM), which uses a heated nozzle to deposit semiliquid filament in a layer-by-layer manner. Depending on the printer chosen, 3D printing using FDM can achieve high printing resolution and is capable of fitting optical lenses without much calibration.131,189 However, FDM parts can have weak mechanical properties and poor aesthetic appearance compared with more advanced additive manufacturing methods, such as stereolithography or powder bed fusion.191 The most common material used was polylactic acid, commonly abbreviated as PLA.191 PLA is notable for having a low melting point and good biocompatibility, which makes it possible for researchers to use PLA to build biomedical components into their spectral cameras.51,59 However, PLA has some structural downsides. PLA is known to shrink during printing; Sigernes et al.85 suggested making the actual design 1% to 2% larger than intended so that the optical components fit. Where optical alignment is critical, 3D-printed housing can potentially produce optical misalignment and imaging artifacts. Despite shrinkage being a common problem in polymer-based printing, few studies discuss solutions to counter it.194 Beyond accounting for the shrinkage in the initial design, Pearre et al.195 suggested bounding the printed components with solid structures. Alternatively, inkjet technology using plastic powder and cyanoacrylate adhesive was used by Wang et al.187 to print the housing for their handheld spectral camera.

5.5.

Electronics Components

Electronics are the driving components of all spectral imaging systems. "Electronics systems" refers to the systems that provide illumination, capture images, control hardware, and transfer data. Systems that are expected to perform their work remotely, such as UAV-based systems, also require a means to store power and data. Here, we discuss illumination with a special focus on LEDs, sensors, microcontrollers, and the expected power consumption of these systems.

5.5.1.

Power consumption

While it is still the norm for many laboratory-based imaging systems to have no battery and to instead draw power from the grid,47,54 many UAV-based systems require batteries to deliver reliable performance for a reasonable amount of time. Even though battery life is not typically specified for many commercial systems, UAV flight times of 12 to 90 min76 give a good idea of how long spectral cameras should operate. A more commonly quoted specification for remote battery-powered spectral cameras is power consumption, which ranges from 5 to 10 W,134,137 comparable to the power consumption of UAVs (10 to 20 W) but still greater than that of many consumer products such as smartphones (<500 mW).
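A back-of-envelope estimate translates these power figures into operating time; the 20 Wh battery budget below is an assumed illustrative value, not a specification from any cited system.

```python
# Rough operating-time estimate for a battery-powered spectral camera.
# The 20 Wh battery budget is an assumed value; the 5 and 10 W draws are
# the camera power consumptions quoted in the text.

def operating_time_min(battery_wh, draw_w):
    """Minutes of operation for a given battery capacity and power draw."""
    return 60.0 * battery_wh / draw_w

for draw_w in (5.0, 10.0):
    print(f"{draw_w:.0f} W draw: {operating_time_min(20.0, draw_w):.0f} min")
```

Note that on a UAV the same battery must also feed the motors, so the camera's share of the budget shortens the achievable flight time well below these figures.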

5.5.2.

Sensor technologies

Most digital imaging sensors used in compact spectral imaging fall into two broad categories: charge-coupled devices (CCDs) and active pixel sensors, also called complementary metal oxide semiconductor (CMOS) sensors. CCDs use linked MOS capacitors to transfer electric charge from the pixels toward the shift register, where it is amplified. CMOS sensors use MOSFET switches to access and amplify each pixel individually.196 The size and cost of CMOS sensors benefit greatly from advances in semiconductor fabrication, which has been observed to double the number of transistors every 2 years.197 However, CMOS sensors have higher dark current than CCDs.198 The majority of compact spectral CCD/CMOS sensors use silicon as the semiconductor, which has an operational range between 550 and 900 nm. Longer wavelengths in the SWIR range (900 to 1700 nm) can penetrate deeper into biological tissues and reveal more underlying features; however, capturing them requires alternative sensor materials, such as InGaAs.25,158 Alternative architectures for CCD and CMOS used in spectral imaging include the intensified CCD21,199 and the electron-multiplying CCD.200

5.5.3.

Microcontrollers

Many spectral imaging systems use microcontrollers to provide system control. In recent years, open-source systems have made spectral imagers lower-cost and more customizable. These microcontrollers are part of the open-design movement, which aims for free collaboration and the sharing of schematics and software. Common open-source systems include microcontroller boards aimed at specific tasks (such as the Arduino Uno) and microcomputers capable of full-fledged control tasks (such as the Raspberry Pi). Programming these devices is much easier than with previous generations of programmable circuit boards, as the newer devices use USB connections to transfer instructions. Furthermore, these devices have large support communities, making them the preferred option for off-the-shelf spectral cameras and for prototyping new designs.

Open-source microcontrollers in spectral imaging devices perform three main tasks: (1) driving optical components, (2) controlling illumination, and (3) sending and receiving signals. Nevala and Baden193 used an Arduino Uno microcontroller in a spatial scanning camera to move mirrors along a predefined path. The microcontroller also drove the spectrometer via a transistor–transistor logic gate to capture data. Ortega et al.60 used a similar microcontroller to drive a stepper motor that translated the microscope stage for a line scan imager to acquire images. The system by Näsilä et al.159 used an MEMS actuator driven by an AC actuator signal; a microcontroller board drove the actuator over the I2C interface. Some spectral imaging systems used narrowband LEDs to provide illumination, with the camera capturing the reflected light and constructing the datacube in the same manner as a spectral imager. However, fast acquisition requires synchronization between the LEDs and image capture. Ohsaki et al.201 constructed a flickerless LED system controlled by a Raspberry Pi microcomputer. Di Cecilia et al.202,203 used a microcomputer to drive a pulse current controlling the LEDs, synchronized with the shutter: once the camera shutter was pressed, the microcomputer turned off the LEDs. Typically, the spectral imaging system was directly connected to the workstation through a USB connection, or the data were stored in memory for later retrieval. Näsilä et al.159 prototyped an FPI-based spectral imaging system whose main controller was a Raspberry Pi microcomputer. The microcomputer sent signals to the FPI driver, received images from the camera module, and sent images through Wi-Fi to a workstation for further analysis. A similar setup was employed by Salazar-Vazquez and Mendez-Vazquez102 in their CTIS-based spectral camera: the microcomputer, also a Raspberry Pi, sent commands to the camera module and transferred images through Wi-Fi to the workstation.
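The LED/shutter synchronization loop these controllers implement can be sketched in a hardware-agnostic way. The sketch below is illustrative only: `set_led` and `capture` are hypothetical stand-ins for the GPIO and camera calls a Raspberry Pi or Arduino system would actually make.

```python
import time

def acquire_band(set_led, capture, wavelength_nm, settle_s=0.01):
    """Turn on one narrowband LED, let it stabilize, trigger the camera,
    then turn the LED off: the basic synchronization loop used in
    illumination scanning. set_led/capture are stand-ins for hardware calls."""
    set_led(wavelength_nm, True)
    time.sleep(settle_s)        # let the LED output stabilize
    frame = capture()
    set_led(wavelength_nm, False)
    return frame

# Simulated hardware: each "frame" records the LED states at capture time
state = {}
def set_led(wl, on):
    state[wl] = on

capture = lambda: dict(state)
datacube = [acquire_band(set_led, capture, wl) for wl in (450, 550, 650)]
```

Running the loop over all LED wavelengths yields one frame per band; only one LED is on at each capture, which is the property the synchronization exists to guarantee.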

5.6.

Illumination

For most biomedical imaging applications, imaging is performed indoors using artificial illumination, which poses various challenges for acquisition. It is important to differentiate between luminance and radiance. Radiance refers to the radiant power emitted per unit solid angle per unit projected source area, integrated across all wavelengths, whereas luminance is a human vision-centric measurement: the radiance weighted by the response curve of the human eye. For spectral imaging devices, radiance is the more appropriate measurement of light sources, whereas for regular cameras and human activities, luminance is more appropriate. Unlike outdoor illumination, which can exceed 100,000 lux, indoor systems using incandescent light provide only around 10,000 to 20,000 lux. Spectral imaging requires more illumination than comparable RGB or monochrome cameras because spectral cameras capture only the energy associated with a narrow range of wavelengths. Furthermore, many spectral imaging systems have additional filters that reduce the incoming light: some require bandpass, low-pass, or high-pass filters to block out unnecessary wavelengths,54 and some use beam splitters either to achieve snapshot imaging or to capture both live images and spectral images.200 Additional illumination is therefore often needed during acquisition, and the added components can increase the footprint of the system. The addition of light sources should be balanced against the application: if the light intensity is too high, tissue damage can be irreversible. The most vulnerable organ is the eye. The permissible exposure limit of the human retina depends on both the wavelength and the exposure time. Rees and Dobre204 calculated the maximum exposure power to the eye at 0 deg to be 180 μW at 5 s and 120 μW at 30 s.
For thermal light sources, Yan et al.205 evaluated the maximum exposure at a direct angle to be 1.8 × 10³ × t^0.25 J/cm², with time t in seconds and t < 3 × 10⁴ s. These values apply only to humans; for animals whose retinal diameter differs from humans', the maximum permitted exposure varies, and there are no definitive guidelines on retinal light exposure limits for animals. For other organs, the limits are more forgiving and often orders of magnitude larger than many illumination needs. Surgical lighting, for example, has a suggested illuminance ranging from 40,000 to 160,000 lux,206 which is often sufficient for acquisitions in the visible light range.
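The radiance/luminance distinction above can be made concrete: luminance is obtained by weighting spectral radiance with the CIE photopic response V(λ) and scaling by 683 lm/W. The coarse five-band sampling of V(λ) below is an assumption for illustration, not a substitute for the full tabulated function.

```python
import numpy as np

# Coarse samples of the CIE photopic luminosity function V(lambda)
wavelengths = np.array([450, 500, 550, 600, 650])    # nm
V = np.array([0.038, 0.323, 0.995, 0.631, 0.107])    # dimensionless

def luminance(spectral_radiance, d_lambda=50.0):
    """Luminance (cd/m^2) from spectral radiance samples (W/(m^2 sr nm)):
    radiance weighted by the eye's photopic response, scaled by 683 lm/W.
    Rectangle-rule integration over d_lambda-wide bins."""
    return 683.0 * float(np.sum(spectral_radiance * V)) * d_lambda

# A spectrally flat source of 0.01 W/(m^2 sr nm)
flat = np.full_like(V, 0.01)
print(round(luminance(flat), 1))  # → 715.1
```

Two sources with equal luminance can deliver very different radiance in the narrow bands a spectral camera samples, which is why radiance is the relevant quantity for these systems.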

There are different geometries for arranging the light source, the subject, and the imaging camera. Amigo et al.207 detailed several laboratory setups: frontal, lateral, contrast-transmission, diffuse, co-axial, and dark field. In each case, illumination should be as uniform as possible. Sawyer et al.208 detailed three types of nonuniformity: spatial uniformity, which refers to differences in incident illumination across the object; angular uniformity, which refers to differences between incident and shadowed areas; and spectral uniformity, which refers to spatial uniformity across all wavelengths. They compared three different light setups for wide-field imaging: an LED ring, a fiber halogen ring, and a diffusion dome. All three achieved similar spatial and spectral uniformity, but the diffuse scattering dome achieved the highest angular uniformity.
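Residual spatial nonuniformity is commonly removed in software through white/dark reference calibration (flat-fielding). The following is a minimal sketch of that standard reflectance calibration, not a procedure specific to any cited system:

```python
import numpy as np

def flat_field_correct(raw, white_ref, dark_ref):
    """Normalize a raw band image by white and dark reference images to
    remove spatial illumination nonuniformity:
    R = (raw - dark) / (white - dark)."""
    num = raw.astype(float) - dark_ref
    den = np.clip(white_ref.astype(float) - dark_ref, 1e-6, None)
    return np.clip(num / den, 0.0, 1.0)

# Hypothetical vignetted illumination: edges receive half the light
white = np.array([[50.0, 100.0, 50.0]])   # white reference tile
dark = np.zeros((1, 3))                   # shutter-closed reference
raw = np.array([[25.0, 50.0, 25.0]])      # uniform 50% reflector
print(flat_field_correct(raw, white, dark))  # → [[0.5 0.5 0.5]]
```

After correction, the uniform reflector reads uniformly despite the nonuniform illumination; the same division is applied per wavelength band to address spectral nonuniformity.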

For the illumination type, we cover the three most commonly used options: halogen incandescent lamps, gas-discharge lamps, and LEDs. Halogen light operates in the ultraviolet, visible, and NIR regions. It offers a continuous spectrum suitable for acquisition with many bands. However, halogen light has a low color temperature (3200 to 5000 K) and can appear yellow at low illumination.206 Halogen and LED are the most common light sources in microscopes, so many spectral imaging applications that use microscopy also use halogen or LED as the default illumination. Gas-discharge lamps, which include xenon, mercury, and mercury–argon lamps, often have higher color temperatures. Xenon lamps have color temperatures ranging from 4000 to 6000 K, which makes them closer to outdoor lighting. Gas-discharge lamps often have energy spikes in the NIR region, which can affect acquisitions; because of these spikes, gas-discharge lamps are not used in retinal illumination and retinal surgeries. Both gas-discharge and incandescent lamps emit a large amount of heat. If this is a concern, a fiber optic guide is needed to direct the light away from the light source. Tunable lasers are also seen in spectral imaging devices,209–211 although not in compact systems, due to the size and power consumption of the laser components.

5.6.1.

Light-emitting diode

The advantages of LEDs for illumination scanning systems are numerous: they are low-cost and fast and have low power consumption, low heat dissipation, and long life.206 A common approach to LED-based spectral imaging is to illuminate the object with different LEDs, each emitting light within a specific waveband. Alignment of the LEDs is important for uniform illumination. Figure 9 shows different methods of arranging LEDs in a compact spectral imaging system. A common layout is the ring, where all LEDs are arranged in a circle.22,24,25,38 Li et al.212 proposed a setup in which the LEDs were arranged in a mosaic fashion but warned that this arrangement can lead to nonuniform lighting across wavelengths. Bolton et al.37 arranged a 16-LED setup into a 4×4 rectangular array; light uniformity was ensured by choosing LEDs with very wide illumination angles. If the LEDs are arranged in a circle, it is better to have multiples of each type placed opposite each other. Delpueyo et al.24 and Rey-Barroso et al.25 proposed a system using 32 LEDs in which four of each type were arranged 90 deg apart [Fig. 9(a)]. For the acquisition camera, Shrestha and Hardeberg213 suggested that a monochrome camera may perform better than a commercial RGB camera because RGB cameras need to perform demosaicing operations, which alter the actual spectral readings. To control the LEDs, a separate power source and controller are needed to synchronize the LEDs with the acquisition camera. LEDs do not require a high-power source; a configuration built by Kim et al.22,38 needed only a 3.7 V battery to drive a multi-LED system [Fig. 9(b)]. If the LEDs emit different wavelengths, the activation voltage can differ for each LED.37 Bolton et al.37 used different resistor values to produce the different activation voltages from the same input voltage [Fig. 9(c)]. The biggest downside of LEDs is the limitation on wavelengths: the spectral imaging systems described can acquire only as many wavelengths as there are different LEDs available. For many researchers, this may not be a problem. Some applications focus only on specific excitation and absorption wavelengths and need only three to four types of LEDs, making well-chosen LEDs very beneficial. Another potential engineering problem comes from the quality of the LEDs: commercially available LEDs can have bandwidths anywhere from 20 to 70 nm. If this is a problem, interference filters can be applied to achieve a narrower bandwidth.214
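The per-LED resistor values mentioned above follow directly from Ohm's law: with a shared supply, each LED type's forward voltage dictates its own current-limiting resistor. A small illustrative calculation, with hypothetical forward voltages and drive current (not the values from Ref. 37):

```python
def series_resistor(v_supply, v_forward, i_forward_a):
    """Current-limiting resistor for one LED: R = (Vs - Vf) / If.
    LEDs of different wavelengths have different forward voltages, so a
    shared supply needs a different resistor per LED type."""
    return (v_supply - v_forward) / i_forward_a

# Hypothetical values: 5 V rail, 20 mA drive current
for name, vf in [("red", 1.8), ("green", 2.2), ("blue", 3.2)]:
    print(name, round(series_resistor(5.0, vf, 0.020)), "ohm")
```

Shorter-wavelength LEDs generally have higher forward voltages, so they need smaller resistors to draw the same current from the same rail.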

Fig. 9

Different uses of LEDs in a compact imaging system. (a) A system that used 32 LEDs arranged in a circle, with four of each type spaced 90 deg apart (reproduced from Ref. 24). (b) A system that used a single white LED and a rotating spectral filter (reproduced from Ref. 22). (c) A system that arranged LEDs into a four-by-four rectangular plane (reproduced from Ref. 37).

JBO_28_4_040901_f009.png

6.

Low-Cost and Compact Spectral Cameras

A large variety of compact cameras came not from commercial manufacturers but from academics and hobbyists who wanted to build custom imagers that are more accessible and more customizable. Many low-cost systems needed to strike a balance among size, acquisition speed, spatial resolution, and spectral resolution. However, with recent advances in custom components, additive manufacturing, smartphones, and compact spectrometers, custom low-cost devices have improved in quality and compactness.

6.1.

Commercial Off-the-Shelf Components

COTS, or commercial off-the-shelf, refers to components of a spectral camera that can be bought and assembled into a functional unit. Assembling COTS components requires knowledge of optical system design, electronics, control software, and hardware housing. The total price of all components added together is often much lower than the unit price of one commercial spectral camera of similar capability. Furthermore, COTS components allow researchers a high degree of customization. As mentioned in the previous section, the components of a spectral imager include the optical system, the spectral dispersion system, the digital image detector, the control module/electronics, and the mechanical elements. It is common for researchers to buy each of these components individually and assemble them, because of the wide availability of low-cost options to choose from (Fig. 10). Alternatively, some researchers opted for portable spectrometers and digital cameras as the acquisition module and built the rest of their system around those.

Fig. 10

Spectral devices enabled using commercial off-the-shelf components. (a) A spatial scanning camera built using customized optical components and 3D printed housing (reproduced from Ref. 85). (b) A spatial scanning camera that uses portable spectrometers as the spectral acquisition device (reproduced from Ref. 193). (c) A smartphone-powered spectral imaging systems (reproduced from Ref. 215).

JBO_28_4_040901_f010.png

6.2.

Optical Setup and Calibration

In the past, optical systems for research had to be built on rigid optical tables and required large footprints. It is now possible to build optical systems from COTS components that are lightweight and have small spatial dimensions. Sigernes et al.85 [Fig. 10(a)] and Fortuna and Johansen216 built spatial acquisition systems using transmission gratings that were carried by UAVs. Kim et al.22 built a customized optical system to complement smartphone image acquisition. The system used a planoconcave lens, a reflective flat mirror, linear polarizers, and bandpass filters: the linear polarizers and bandpass filters filtered the LED light sources of the smartphone, whereas the flat mirror and planoconcave lens made the illumination broader and more uniform. Jeon et al.217 introduced a new type of diffractive optical element fabricated using photolithography and reactive-ion etching. Exploiting the wavelength dependence of Fresnel diffraction, the element produces spectrally varying point spread functions. Although 3D-printed optical elements, such as reflective mirrors, focusing lenses, and prisms, already exist,194 they have not yet been seen in any spectral imaging system. Customized mechanical housings are required to hold these optical components; for that, 3D printing is the preferred option.

Even though COTS optical components are vastly cheaper than their commercial counterparts, the lack of quality control can harm acquisition quality. Smile, keystone, and chromatic aberration are all optical problems these customized systems face. We discussed smile and keystone in Sec. 5.1 in relation to spatial scanning systems. Chromatic aberration is a property of optical systems in which the focus of different wavelengths is not aligned: the point-spread function of the image depends on the wavelength, which makes focusing difficult. For spectral scanning systems, one method of dealing with chromatic aberration is to refocus for each wavelength; however, this method cannot be replicated with other types of spectral acquisition. A more robust engineering solution is to use achromatic lenses, which come at a higher cost than regular optical components. Mirror-based grating systems, such as the Offner imaging spectrometer discussed earlier, also reduce aberration. If low-cost optical systems are expected to perform accurately, extensive calibration is necessary. Sigernes et al.85 discussed the steps to perform chromatic and sensitivity calibration of their customized camera. Riihiaho et al.218 performed smile correction by calculating the matrix that transforms curved spectral lines into straight lines for each spectrum. Henriksen et al.219 proposed an algorithm that performs real-time correction of smile and keystone based on matrix operations of calibrated results.
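Smile correction of the kind described can be sketched as a per-row remapping of the sensor frame. The integer-shift version below is a deliberate simplification (an assumption for illustration: practical methods, including the cited ones, interpolate at subpixel precision):

```python
import numpy as np

def straighten_smile(frame, shifts):
    """Shift each spatial row of a (rows x spectral) sensor frame by its
    calibrated smile offset so curved spectral lines become straight.
    shifts holds one integer column offset per row (from calibration)."""
    out = np.zeros_like(frame)
    for r, s in enumerate(shifts):
        out[r] = np.roll(frame[r], -s)
    return out

# Toy frame: a spectral line that drifts one column between rows (smile)
frame = np.zeros((3, 5))
for r, c in [(0, 1), (1, 2), (2, 1)]:
    frame[r, c] = 1.0
shifts = [0, 1, 0]                  # measured curvature per row
straight = straighten_smile(frame, shifts)
print(straight[:, 1])  # → [1. 1. 1.] : the line is now vertical
```

The shift table is measured once from a calibration source with known emission lines and then applied to every frame, which is what makes real-time correction feasible.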

6.3.

Portable Spectrometers

Compact and portable spectrometers have been developed and used for commercial, health, and scientific purposes.8 Many custom imagers in fact incorporated one or more commercially available spectrometers into their designs. By incorporating spectrometers, researchers did not need to worry about the optical architecture and could focus on the scanning methods. Spectrometers also have much finer spectral resolution than line scanning spectral cameras. However, because the field of view of a spectrometer is typically small, a dense, oversampled scanning scheme is needed; otherwise, the spatial resolution will be very low. Devices that used compact spectrometers were usually point-scanning or line scanning imagers. If only one spectrometer is used, the device acts as a point-scanning camera, and the researcher must devise an appropriate scanning path. Nevala and Baden193 equipped a commercial spectrometer with an effective spectral range of 350 to 950 nm with two mirrors controlled by Arduino microcontrollers [Fig. 10(b)]. Because the scanner had a circular window, they proposed circular scanning paths based on Fermat's spiral. In the system proposed by Stuart et al.,220 light entered a focusing lens before being redirected by two movable mirrors into a spectrometer. Commercial spectrometers were used in confocal microscopy by Frank et al.,221 who connected the confocal pinhole to a spectrograph using a fiber optic cable. If multiple spectrometers are used, the device becomes a line scanning imager; Uto et al.222 used a series of eight spectrometers for a line scan imager.
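A Fermat spiral path over a circular window can be generated with Vogel's golden-angle model (r proportional to the square root of the sample index). The sketch below is illustrative only and is not the specific path construction of Nevala and Baden:

```python
import numpy as np

def fermat_spiral_points(n_points, scale=1.0):
    """Sample points along a Fermat spiral, r = scale * sqrt(k), using the
    golden angle between consecutive points (Vogel's model) so samples
    cover a circular field of view roughly uniformly."""
    k = np.arange(n_points)
    theta = k * np.deg2rad(137.508)   # golden angle
    r = scale * np.sqrt(k)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

pts = fermat_spiral_points(200, scale=0.5)
# All points lie inside a circle of radius scale * sqrt(n_points - 1)
print(float(np.max(np.hypot(pts[:, 0], pts[:, 1]))))
```

The square-root radius keeps the areal sampling density roughly constant from center to edge, which suits a point-scanning spectrometer with a circular aperture.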

6.4.

Smartphones

Personal diagnostics with smartphones is gaining attention in healthcare and bioengineering because smartphones are readily available, portable, power efficient, and capable of high-quality imaging. This trend is reflected in spectroscopy and spectral imaging alike. Crocombe8 outlined four main uses of smartphones in spectrometry and spectral imaging: (1) to receive data from the spectrometer as a separate module; (2) to receive data from and (3) to send commands to the optical module; and (4) to be the optical module itself. The fourth use is only applicable in the visible and near-infrared regions, as the silicon detectors used in CMOS and CCD only operate within the 400 to 1000 nm wavelengths. Spectroscopic solutions have been developed for the smartphone for food quality control,223 lab-on-a-chip diagnosis,224,225 and fluorescence spectroscopy.226 Similarly, spectral imagers have been developed with smartphones in mind for biomedical, remote sensing, and field laboratory purposes. Kim et al.38 developed a system that used smartphones as the controlling center for their MSI. In their system, the smartphone sent signals to a microcontroller unit that drove LEDs and received photos from a CMOS camera. Näsilä et al.159 and Rissanen et al.184 developed FPI-based spectral imagers with a smartphone as the centerpiece, both as a controlling device that sent and received signals and as the optical camera itself. Smartphone cameras are extremely powerful and fast, which makes them ideal candidates for spectral imaging modules. If the devices are used in personal healthcare settings, custom parts need to be developed for smartphone spectral imaging to fit the specific needs. Stuart et al.215 built a spectrograph with the smartphone camera as the acquisition device [Fig. 10(c)]. Bolton et al.51 and Cavalcanti et al.59 used 3D printing to create a custom smartphone-based spectral colposcope and otoscope, respectively. Their systems used LEDs driven by microcontrollers to provide narrowband illumination. These designs also used the smartphone's ability to transfer signals over Bluetooth or Wi-Fi to construct an internet of things (IoT) system for spectral image acquisition and analysis. In some applications, the smartphone can also serve as a geo-tagger, providing GPS data along with the images so that images can be registered together.227

6.5.

Open-Source Design

Open-source design refers to the creation of software and hardware with the intention of freely sharing those designs for others to use, study, modify, and even commercialize without concerns of copyright or patent infringement. This permission is granted through many different forms of open-source licenses. Whereas the nature of software makes open-source software easy to distribute, open-source hardware still requires tangible components to manufacture and assemble. Ideally, open-source hardware designers use low-cost components, 3D-printed components, or other open-source hardware components and run free open-source software. Some spectral imaging researchers published systems that were open source with the goal of easily sharing research progress. As an example of the open-source model, Riihiaho et al.218 built a 3D-printed spectral imaging camera based on designs from Sigernes et al.85 Some further modifications were made to reduce smile and aberration, and these were publicly shared along with the 3D CAD files. They also built and publicly shared software that performs aberration correction. Nevala and Baden193 published the entire hardware and software design of their system, along with raw spectral data captured by the camera. Low-cost spectral imaging was driven not just by open-source hardware but also by open-source software: many researchers published hyperspectral processing software as free and open source. Berisha et al.228 published a framework for GPU-accelerated processing of large-scale biomedical hyperspectral images. The entire software and all its algorithms were published under a license that allows inclusion in both free and commercial systems.

7.

Applications of Compact Spectral Cameras in Biomedical Imaging

Many bioimaging researchers use compact and ultracompact cameras in both laboratory and clinical settings. Systems have been built for light-field microscopes,32,43,47,60 confocal imaging microscopes,229,230 fundus cameras,54,55 endoscopes,46,48,56,140,200,231 laparoscopes,63,65,66,70 colposcopes,51,52 and otoscopes.59 Spectral imaging for biomedical purposes has its own specifications that influence the choice of system. Most commonly, images are captured in the visible–near-infrared (VIS-NIR) range from 400 to 1500 nm. Imaging in the visible range often leverages the contrast between different types of dyes and stains, whereas imaging in the infrared range often leverages the intrinsic biochemical signatures of biotissues.232 VIS and NIR camera sensors need to be separate, the first being silicon-based and the second InGaAs-based. The spatial resolution should be sufficient to differentiate key features. In head and neck cancer (HNC) diagnosis, for example, the positive margin (margin of cancer plus normal tissue) for cancer identification is around 5 to 10 mm.233 The spatial resolution can be measured with an optical target, such as the USAF target. If the spatial resolution of the camera is insufficient, magnification devices can be used. Acquisition can be slow for ex vivo applications but must be fast for in vivo applications: in a spectral colonoscopy system, Kumashiro et al.48 found that the acquisition time could not exceed 5 s; otherwise, the image quality became unusable. Faster acquisition, in turn, requires stronger illumination. In surgical or clinical settings, systems must be sterilized or isolated from the environment; drapes, resin, and sealants can be used in these situations.

7.1.

Diagnosis and Monitoring of Diseases

The primary diagnostic targets for spectral imaging researchers were the skin, the oral cavity, and the retina, the organs most accessible for in vivo diagnosis. Using endoscopic systems, the luminal organs of the lower abdominal region could also be imaged with spectral imagers. However, in vivo research in this area was scarce because it requires systems with fast acquisition times and low illumination. For ex vivo applications, most spectral imaging research was done on pathology slides or tissues. Ex vivo tissues were imaged either in a tabletop setting using reflectance data or through a microscope, using reflectance or transmission data.

7.1.1.

Skin cancer

Both melanoma and nonmelanoma skin cancer (MNSC), which includes Bowen's disease, Kaposi's sarcoma, basal cell carcinoma (BCC), and squamous cell carcinoma, are on the rise.234 While MNSC is more common, melanoma has a higher mortality rate.235 Early diagnosis and removal of both melanoma and MNSC are necessary; however, many forms of skin cancer appear similar to benign neoplasms.236 Dermatoscopy can be used to differentiate benign from malignant skin lesions, but it requires clinician input. Spectral imaging has been used to improve automatic classification with high sensitivity.236 In the early 2000s, Hewett et al.21,237 developed a portable imaging station to determine the tumor margin of MNSC lesions. They used 5-aminolevulinic acid (5-ALA) to induce the fluorescing molecule protoporphyrin IX (ppIX). After application of 5-ALA to the skin, fluorescence images of MNSC lesions were taken at the 400, 540, 600, and 635 nm wavelengths. Their results showed a clear outline of the lesion at the 635 nm wavelength, which corresponds to the maximum ppIX fluorescence. In the same period, Elbaum et al.236 developed a handheld spectral imaging system to automatically classify melanoma and melanocytic nevi (pigmented nevi, moles). From a set of 10 spectral images obtained from 430 to 950 nm, 822 candidate parameters were extracted in the wavelet and spatial domains, intended to represent morphological and textural elements of the lesions. By training an expert system on a dataset of 63 melanoma images and 183 melanocytic nevi images, they achieved 100% sensitivity at 85% specificity. A similar handheld system [Fig. 11(a)] was developed by Delpueyo et al.24 to classify melanoma and BCC against pigmented nevi [Fig. 11(b)]. First-order statistics were used to extract morphological features, and their system achieved 91.3% sensitivity at 54.5% specificity for both melanoma and BCC.
Rey-Barroso et al.25 continued the study with a second handheld camera that images in the NIR region (995 to 1613 nm); longer wavelengths penetrate deeper into the skin and reveal more features pertinent to skin cancer diagnosis. With the same set of features, they achieved a sensitivity of 85.7% at 76.9% specificity, an improvement in specificity over the results of Delpueyo et al. without much loss in sensitivity.
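The sensitivity and specificity figures quoted throughout this section are computed from confusion counts. A minimal sketch follows; the counts are illustrative assumptions chosen only to echo (not reproduce) the Elbaum et al. figures:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN), computed over malignant cases;
    specificity = TN / (TN + FP), computed over benign cases. These are
    the two figures of merit reported by the classification studies."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts (not taken from any cited study)
sens, spec = sensitivity_specificity(tp=63, fn=0, tn=156, fp=27)
print(round(sens, 3), round(spec, 3))  # → 1.0 0.852
```

Because the two quantities trade off against each other as the decision threshold moves, studies report them as a pair rather than a single accuracy number.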

Fig. 11

(a) A compact and handheld spectral imaging system that used LEDs to provide spectral scanning. The system weighed 500 g and could capture images in eight wavelengths. (b) Using the system described in (a), different spectral signatures could be seen for nevi, melanoma, and basal cell carcinoma (reproduced from Ref. 24).

JBO_28_4_040901_f011.png

Many melanoma diagnostic tools that use spectral imaging exist on the market; however, they are costly.238 Several researchers developed low-cost, accessible smartphone solutions for identifying malignant melanoma. Kim et al.22 built a smartphone spectral imaging system that photographed 10 wavebands from 440 to 690 nm. They also created a software platform that identified lesion margins and graded lesion severity from spectral images. Ding et al.27 built a similar snapshot system using a smartphone camera. They photographed nevus lesions and identified elevated optical density of the nevus region in the 550 to 640 nm wavelengths, which correspond to the peak absorption wavelengths of melanin and oxygenated hemoglobin. Uthoff et al.28 developed a smartphone spectral imaging system that mapped oxygen concentration, melanin concentration, and erythema measurements onto images of squamous cell carcinoma. However, these methods were only able to visualize the outlines of skin lesions; further clinical studies are necessary to show that these smartphone-based methods can classify malignant versus benign melanocytic lesions.
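The elevated optical density reported by Ding et al. derives from reflectance relative to a reference. The following is a minimal sketch of the standard optical density definition, not their specific processing pipeline:

```python
import numpy as np

def optical_density(reflectance, reference):
    """Optical density OD = -log10(R / R0): stronger absorption (e.g., by
    melanin or hemoglobin) gives a larger OD relative to normal skin."""
    ratio = np.clip(np.asarray(reflectance, dtype=float) / reference, 1e-6, None)
    return -np.log10(ratio)

# A lesion reflecting 40% of the light that the reference skin reflects
print(round(float(optical_density(0.4, 1.0)), 3))  # → 0.398
```

Applied per wavelength band, this yields an OD spectrum whose peaks in the 550 to 640 nm region reflect melanin and oxygenated hemoglobin absorption.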

7.1.2.

Head and neck cancers

HNC refers to cancers that originate from the nasal and oral cavities, nasopharynx, oropharynx, hypopharynx, larynx, and esophagus. Up to 90% of HNC cases are squamous cell carcinoma,239 so the majority of the studies reviewed focus on head and neck squamous cell carcinoma (HNSCC). Liu et al.45 used an AOTF camera to perform tongue tumor pixel segmentation. For classification, they used the sparse representation method; their system achieved 96.5% pixel accuracy on a dataset of 65 tumor images. Bedard et al.46 built a snapshot imaging system for capturing both autofluorescence and reflectance data. They used the system to photograph the oral cavity in healthy individuals and in patients with oral cancer. Using spectral unmixing, they were able to (1) highlight vasculature in the lip region and (2) determine boundaries of tumors in the oral cavity. To image deeper into the oral cavity, spectral imaging systems used endoscopes. In in vivo diagnosis, perfusion is often a feature of interest because higher perfusion can indicate neoplasm. Köhler et al.70 created a spectral laparoscopic system for imaging the esophagus. Using customized metrics, they visualized the hemoglobin index of a resected esophagus from a patient with Barrett's syndrome. Our research lab contributed new research on the identification of HNSCC ex vivo with the use of a compact spectral camera (Fig. 12). Ma et al.47 imaged histologic slides resected from the laryngeal and hypopharyngeal regions of patients with HNSCC [Fig. 12(a)]. They proposed two different classification methods: (1) a support-vector machine (SVM) using spectra of segmented nuclei as input data and (2) a convolutional neural network (CNN) using small image patches as input data [Fig. 12(b)]. They found that the CNN classifier performed better than the SVM classifier; however, classifiers trained on spectral images did not outperform classifiers trained on RGB images.
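The spectral unmixing used by Bedard et al. rests on the linear mixing model, in which each pixel spectrum is approximately a weighted sum of endmember spectra. A minimal least-squares sketch with toy spectra follows; published methods typically add non-negativity and sum-to-one constraints, which this simplification omits:

```python
import numpy as np

def unmix(pixel_spectrum, endmembers):
    """Estimate endmember abundances by ordinary least squares,
    pixel ≈ endmembers @ abundances (the linear mixing model)."""
    abundances, *_ = np.linalg.lstsq(endmembers, pixel_spectrum, rcond=None)
    return abundances

# Toy endmember spectra (columns): two chromophores over 4 bands
E = np.array([[1.0, 0.1],
              [0.8, 0.3],
              [0.4, 0.7],
              [0.1, 1.0]])
pixel = E @ np.array([0.7, 0.3])       # a known 70/30 mixture
print(np.round(unmix(pixel, E), 3))    # → [0.7 0.3]
```

Mapping the recovered abundances across the image is what highlights chromophore-rich structures such as vasculature.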

Fig. 12

(a) A compact hyperspectral Snapscan camera on top of a microscopic system (reproduced from Ref. 47). (b) Using the system described, a series of hyperspectral digital pathology images were acquired. The slides were from patients diagnosed with squamous head and neck carcinoma. A machine learning system was used to produce a probability heat map of cancer occurrence (reproduced from Ref. 240).

JBO_28_4_040901_f012.png

7.1.3.

Lower abdominal cancer

Diagnosis of cancerous tumors in the lower abdominal region incorporates endoscopic tools. Kumashiro et al.48 used a commercial compact spectral camera for both in vivo and ex vivo identification of colorectal tumors. For ex vivo observation, the camera was connected to a stereoscopic microscope; for in vivo observation, it was connected to a colonoscope. They found that, in both analyses, the absorption rate of healthy mucosa at the 525 nm wavelength was significantly lower than that of adenocarcinoma. For in vivo data, tumor classification and real-time tumor mapping were also attempted, achieving a sensitivity of 75.0%. Erfanzadeh et al.50 and Zeng et al.49 developed a handheld spectral imaging system for classification purposes: Erfanzadeh et al. imaged resected ovaries and found that the system could potentially distinguish malignant tumors from benign ovarian cysts, and Zeng et al. used the system to image resected colorectal tumors. Mink et al.52 recognized the need for low-cost tools for cervical cancer diagnosis and developed a smartphone-based spectral colposcope, which showed promise in augmenting biopsy and regular colposcopy. Baltussen et al.66 used two different spectral cameras, one operating in the visible light range (400 to 1000 nm) and one in the NIR range (900 to 1700 nm), to classify four types of tissue: fat, healthy colorectal wall, colorectal tumor, and mucosa. They found that the near-infrared camera slightly outperformed the visible light camera in classification. Sun et al.241 produced a dataset of 880 hyperspectral images of cholangiocarcinoma (cancer of the bile duct) using an AOTF-based system. They used two types of CNN (Inception-V3 and ResNet-50) to classify cancerous from normal tissues and achieved a 2% accuracy increase using hyperspectral data over RGB data.

7.1.4.

Other types of cancers

In many forms of solid-tumor cancer, surgery is often the necessary treatment, and it is important that surgical resection removes all malignant tissue. Spectral imaging of ex vivo tissue could be a method to identify or improve the cancer/normal margin. Van Manen et al.53 used a snapshot camera to image excised breast tumors. They found that, on average, tumor tissues had significantly higher fluorescence intensity than healthy tissues from 450 to 950 nm, except at the 619, 629, 897, and 934 nm wavelengths. Using the hierarchical stochastic neighbor embedding algorithm, they reduced the number of wavelengths used as features and improved segmentation accuracy. Ortega et al.242 combined a compact spatial scanning imager with an upright microscope and a custom scanning stage and used the system to generate a database of hyperspectral brain tumor histology. For classification between tumor and normal regions, three supervised classification methods were used: a linear neural network, an SVM, and a random forest classifier. The neural network classifier had the best overall accuracy of 78.2%, with a sensitivity of 75.44% at 77.03% specificity.

7.1.5.

Healing from burns, scars, and wounds

External factors such as age, lifestyle, smoking, and diet influence the healing process. Homeostatic imbalances such as ischemia (low perfusion), hypoxia, edema, and necrotic tissue also directly affect healing. To gauge the healing of burns, scars, and wounds, constant monitoring of perfusion, oxygenation rates, and tissue formation must be performed along with visual analysis.243 Methods used for wound monitoring include angiography with indocyanine green dye,244–246 LSCI,247–249 optical coherence tomography,250,251 laser Doppler flowmetry,252 and high-resolution ultrasound imaging.253,254 While no method offers distinct advantages over the others, reflectance spectral imaging offers compact hardware and low-cost, noninvasive imaging.

Marotz et al.255 and Holmer et al.256 used a compact push-broom camera that operated in the VIS-NIR region for the purpose of assessing skin transplantation wounds. Reflectance data in the visible region were used to calculate the relative hemoglobin concentration and oxygenation rate in the superficial dermis layer [Figs. 13(b) and 13(c)]. On the other hand, near-infrared reflectance data revealed deeper circulation in the subcutaneous layers and were used for estimating deep perfusion [Fig. 13(e)]. These data were mapped over the wound to assess the presence of ischemia [Fig. 13(f)]. Kulcke et al.34 later used the same commercial spectral system to image wound healing over a period of 2 weeks and showed that the perfusion and tissue oxygenation rates returned to normal levels after 2 weeks. Rutkowski et al.33 used a compact spectral camera to monitor wound healing treatment with cold atmospheric plasma. They showed improved angiogenesis in both the dermis and subcutaneous tissue layers through cold atmospheric plasma treatment in vivo.

Fig. 13

Estimation of physiological values from a wound photograph. (a) RGB image, (b) and (c) relative and segmented tissue oxygenation mapping (StO2 in the figure), (d) reconstructed RGB image from hyperspectral data, (e) near-infrared-based perfusion data (NIR in the figure), and (f) relative tissue hemoglobin index (THI in the figure) (reproduced from Ref. 257).

JBO_28_4_040901_f013.png

7.1.6.

Pressure sore and vascular occlusion

Pressure sores, also called pressure ulcers or bedsores, are injuries in which the skin is damaged by continuous pressure. Pressure sores are common among bed-ridden patients and often develop around areas with bony protrusions, such as the heels, tailbone, hips, and ankles.258 To prevent pressure sores, monitoring of vascular occlusion is important.259 Van Manen et al.39 combined snapshot spectral imaging with LSCI to image the upper arm during and after occlusion. During occlusion, perfusion in the epidermis decreased, which was detected with LSCI. From spectral imaging, the oxygenation rate was determined. They found that the oxygenation rate measured with spectral imaging correlates with perfusion measured with LSCI. Their findings showed that spectral imaging can be helpful in monitoring epidermal blood flow. Klaessens et al.260 developed a compact spectral scanning system using an LCTF to image a constricted hand in the 420 to 730 nm wavelengths. During occlusion, they showed that the concentrations of deoxygenated and oxygenated hemoglobin increased and decreased, respectively. After the occlusion was released, the absorption and oxygenation levels overshot before stabilizing to regular levels after a short period of time. He and Wang42 used smartphones to record hyperspectral images of fingers under pressure and showed similar results. Chang et al.31 used a compact commercial snapshot camera to measure oxygen saturation and monitor bed sores in enrolled patients. A condition similar to the pressure ulcer is the foot ulcer, an ulceration that occurs below the ankle. Foot ulcers are common among people with diabetes and can be chronic in nature.261 Foot ulcers can also be monitored through oxy- and deoxysaturation for both healing and prevention.262–264 Yang et al.265 developed a compact push-broom imager that analyzes oxygen saturation to predict the healing quality of foot ulcers.
Yudovsky et al.266 developed a custom spectral imaging system using LEDs that illuminated at 15 different wavebands between 450 and 700 nm to analyze oxy- and deoxysaturation. The researchers used the system to predict foot ulcers before they form. Lee et al.267 performed a pilot study using two commercial spectral cameras, one in the hemoglobin absorption wavelengths (542 to 578 nm) and one in the near-infrared spectrum (760 to 830 nm).
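The oxy/deoxysaturation analyses described above typically fit measured absorbance at several wavelengths to known hemoglobin extinction spectra via a (modified) Beer–Lambert model. A minimal sketch of such a least-squares fit; the extinction values below are illustrative placeholders, not taken from any of the cited systems:

```python
import numpy as np

# Illustrative extinction coefficients of oxy- and deoxyhemoglobin at the
# illumination wavelengths (placeholder values, not real tabulated data).
# Rows: wavelengths; columns: [HbO2, Hb].
E = np.array([
    [32.0, 40.0],   # ~450 nm
    [39.0, 39.0],   # ~545 nm (near an isosbestic point)
    [50.0, 37.0],   # ~560 nm
    [3.2, 14.7],    # ~660 nm
])

def oxygen_saturation(absorbance):
    """Least-squares fit of A = E @ c to recover relative concentrations
    c = [HbO2, Hb], then StO2 = HbO2 / (HbO2 + Hb)."""
    c, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    return c[0] / (c[0] + c[1])

# Synthetic measurement: a 70%-oxygenated mixture should be recovered.
c_true = np.array([0.7, 0.3])
print(round(oxygen_saturation(E @ c_true), 2))  # → 0.7
```

With more wavebands (e.g., the 15 LED bands mentioned above), the same overdetermined fit simply gains rows in `E`, improving robustness to noise.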

7.1.7.

Retinal diseases

Fundus imaging poses special engineering challenges for compact imagers. Because the eye is a small organ with limited reflection spectra, fundus imaging requires both illumination and magnification for accurate assessment.268 Many different biomarkers and diseases can be detected from the spectral response of the eyes. In clinical settings, most fundus cameras are bulky table-top systems, but many compact handheld and smartphone-based systems have been developed; for reports on these compact systems, consult the reviews by Panwar et al.269 and Wintergerst et al.270 In humans, the macula is covered by the macular pigments lutein, zeaxanthin, and meso-zeaxanthin. These pigments are known to contribute to visual acuity.271 The appearance of this pigmented region can serve as a biomarker for visual or neurological diseases and is often quantified by the macular pigment optical density (MPOD). Diabetic retinopathy is a complication of diabetes that affects the vessels of the retina and can cause blindness in some cases. Oxygen saturation of eye vessels can be used to infer the progression of retinopathy in vivo.272 Early explorations in using compact spectral cameras for oxygenation monitoring were conducted by Johnson et al.104 through a custom-made CTIS. The system was designed to capture 50 spectral bands within the 450 to 700 nm range. Using the snapshot camera, they were able to demonstrate age-related changes in retinal vessels’ oxygen saturation in two healthy volunteers 30 years apart in age. A CTIS system was later used by Fawzi et al.58 to estimate MPOD in place of other methods, such as autofluorescence imaging and Raman spectroscopy. Li et al.55 used a compact SRDA camera with a microscope to image the rat retina. The authors demonstrated that the oxygenation rate can be successfully extracted from the snapshot camera. Kaluzny et al.54 continued the study in human subjects.
By connecting the camera to a table-top fundus imaging system, they were able to image the human retina in spectral datacubes. Using a best-fit model, they estimated the oxygen saturation rates in the retinal arteries and veins. Through repeated imaging of the same eye, they achieved a mean standard deviation of 1.4%, showcasing the high repeatability of the system. They also estimated MPOD, demonstrating that the snapshot system can extract multiple physiological data from only one measurement.

7.1.8.

Diseases of the central nervous system

An exciting recent application of spectral imaging in bioengineering is the monitoring and diagnosis of neurological diseases through retinal imaging. The retina contains neurons that are directly linked to the brain, so it follows that biomarkers for many neurological diseases, such as Alzheimer’s disease (AD), Parkinson’s disease, and multiple sclerosis, can be seen through retinal imaging.273 Amyloid beta (Aβ) peptide has been identified in the brains of people with AD and is a known biomarker. Similarly, autopsies of patients with AD show high concentrations of Aβ in the retina.274 Through spectral imaging of transgenic Alzheimer’s mice using a compact endoscope, More et al.56 showed that in the wavelengths from 450 to 700 nm, there exists a marked difference in the optical spectra between wild-type mice and transgenic mice (Fig. 14). They also showed that changes in the optical spectra strongly correlated with the accumulation of Aβ in the retina and the progression of AD over time.

Fig. 14

(a) A compact spectral imaging system used to image the retina of mice. (b) Using the system described, the retina of wild-type mice (WT) and transgenic mice for Alzheimer’s disease (APP1/PS1) were imaged. The result showed a significant difference in the spectral signature between the two (reproduced from Ref. 56).

JBO_28_4_040901_f014.png

7.2.

Surgical Guidance

Lu and Fei7 identified four key benefits of using spectral imaging in surgical guidance: (1) visualization of microsurgery features, (2) hyperspectral tumor segmentation during resection, (3) monitoring of tissue oxygenation rate, and (4) visualization of large organs. Compact spectral imagers benefit surgical guidance greatly because they free up limited space in the operating room. Because spectral imaging is good at identifying blood oxygenation status, many researchers have used it in the operating room to monitor blood flow. Anastomotic insufficiency is a break or leak in a surgical suture and is among the most serious complications in colorectal surgery. Jansen-Wilken et al.69 used a spectral camera to detect the anastomosis site during small bowel surgery. Many other researchers explored similar surgical complications, most commonly ischemia.63,67,275 Akbari et al.67 used two cameras that operated in the 400 to 1000 nm and 900 to 1700 nm wavelength ranges for bowel surgery. They found that the highest contrast between normal and ischemic regions in the intestine was seen in the 765 to 830 nm wavelength range, and they used an SVM to evaluate ischemia progression over time.
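Finding the wavelength range with the highest normal/ischemic contrast, as Akbari et al. report, amounts to a per-band comparison of mean spectra. A minimal sketch with synthetic spectra; the contrast peak is placed at 800 nm purely for illustration and does not reproduce the cited measurements:

```python
import numpy as np

# Synthetic mean reflectance spectra for two tissue states (illustrative
# only): find the band where the two classes differ the most.
wavelengths = np.arange(400, 1001, 5)          # 400–1000 nm in 5 nm steps
normal = 0.5 + 0.20 * np.exp(-((wavelengths - 800) / 120.0) ** 2)
ischemic = 0.5 + 0.05 * np.exp(-((wavelengths - 800) / 120.0) ** 2)

contrast = np.abs(normal - ischemic)           # per-band contrast
best_band = wavelengths[np.argmax(contrast)]   # most discriminative band
print(best_band)  # → 800
```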

7.2.1.

Spectral endoscopy

Many researchers have investigated the use of spectral imaging in combination with surgical visualization tools. Endoscopic tools, which aid minimally invasive surgery, are often combined with compact spectral cameras. Kumashiro et al.48 attached a mobile spectral camera to a colonoscope. They directly observed colon lesions in vivo during biopsy. However, because the camera had a long acquisition time, scanning time was limited to 5 s and scanning resolution was limited to 200×200  pixels. Nevertheless, they found significant differences in absorption between normal mucosa and adenocarcinoma regions at the 525 nm wavelength. Laparoscopy is an operation performed through small incisions with the aid of cameras and is minimally invasive. Clancy et al.63 built a custom laparoscope and measured bowel oxygenation rate during clamping in vivo. While their system was built for minimally invasive surgeries, imaging was performed during open surgery. Zhang et al.65 used a similar system to identify different types of tissues. They identified an improvement in classification accuracy using multispectral images over RGB images. However, their system was tested on ex vivo tissues and not during live surgery. Some researchers built laparoscopic systems with dual channels so that both live video and spectral images can be captured simultaneously. This was commonly done with the use of beam splitters. Köhler et al.70 developed a spatial scanning laparoscopic camera for aiding esophagus surgery that captured monochromatic video and spectral images. To validate their system, they compared it with a commercial spectral camera developed for surgical settings. The specimen used was an ex vivo human esophagus with adenocarcinoma. They found that their own compact laparoscopic system showed carcinoma classification results consistent with those of commercial devices.
A similar dual-camera endoscopic system that showed both spectral images and real-time video was developed by Yoon et al.231 They demonstrated the system using an ex vivo pig esophagus and used linear unmixing to estimate the concentration of a staining solution (see Fig. 15). They stained the esophagus with methylene blue (MB) solution and used the measured absorbance to calculate the MB concentration throughout the tissue. Yoon et al.276 later improved the system and used it for clinical testing on 10 patients who underwent colonoscopy. They used spectral angle mapping to extract features and k-nearest neighbors to classify between normal mucosa and polyps.
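Spectral angle mapping paired with a k-nearest-neighbors classifier, the approach named above, can be sketched in a few lines. The reference spectra below are toy values, not data from the study; the key property shown is that the spectral angle is invariant to overall illumination intensity:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; invariant to overall scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def knn_classify(x, train_spectra, train_labels, k=3):
    """k-nearest neighbors using spectral angle as the distance metric."""
    d = np.array([spectral_angle(x, t) for t in train_spectra])
    nearest = np.argsort(d)[:k]
    return np.bincount(train_labels[nearest]).argmax()

# Toy training set: label 0 = normal mucosa, 1 = polyp (made-up spectra).
rng = np.random.default_rng(0)
normal = np.array([1.0, 2.0, 3.0, 2.0])
polyp = np.array([3.0, 2.0, 1.0, 1.0])
train = np.vstack([normal + 0.1 * rng.standard_normal((5, 4)),
                   polyp + 0.1 * rng.standard_normal((5, 4))])
labels = np.array([0] * 5 + [1] * 5)

# A brighter (scaled) version of the normal spectrum still classifies as 0.
print(knn_classify(2.0 * normal, train, labels))  # → 0
```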

Fig. 15

Images taken from an ex vivo pig esophagus by a compact spectral endoscope. (a)–(c) The esophagus taken before staining. (d)–(f) The esophagus after MB staining. (g) and (h) The estimation map of MB concentration using two different linear unmixing methods. (i) The spectral absorbance plot for the shaded areas in (h). In (c) and (f), the color corresponds to the color segment in the corresponding image to the left (reproduced from Ref. 231).

JBO_28_4_040901_f015.png

7.2.2.

Spectral surgical microscope

A surgical microscope or operating microscope is an optical microscopy system designed to aid surgical procedures. They are useful to the point of necessity in microsurgery.206 The use of microscopy in surgical theaters dates back to the early 20th century. Today, surgical microscopes are used in many types of surgeries, ranging from micro-operations, such as dentistry, neurosurgery, and ENT surgery, to macro-operations, such as spine surgery, tumor resection, and plastic and reconstructive surgery. Modern surgical microscopes are engineering marvels, combining high-power optics, precise maneuverability, and good stability. They also have many digital components, which allow combination with many imaging modules, such as fluorescence imaging, optical coherence tomography, laser speckle imaging, and spectral imaging. Spectral imaging is of special interest to many researchers because the noninvasive and noncontact nature of the technology means that it can provide visual aids with minimal complications. Furthermore, surgical microscopes often provide light sources sufficient for both the surgical operation and image acquisition in the form of xenon, halogen, or LED light. This reduces the need for additional illumination as seen in other acquisition setups.

Many setups for spectral imaging with a surgical microscope used a monochrome imaging camera, a broadband light source, and variable filters or a filter wheel. Van Brakel et al.61 used an LCTF for the filters in their setup. They used the system to create high-resolution images of a dental implant; more specifically, they used the spectral data to estimate the soft tissue thickness and height surrounding the implant. To do this, they used a model of absorbance based on mucosa thickness. Postprocessing was required to align spectral images due to motion blur. Nevertheless, the system was able to estimate soft tissue thickness consistent with previous literature. Both Roblyer et al.277 and Martin et al.278 used filter wheels for their acquisition setups. For illumination, Roblyer et al. replaced the original light sources of the surgical microscope with a mercury lamp. While Martin et al. used a monochromatic camera, Roblyer et al. used an RGB camera and performed processing directly on the raw images. Pichette et al.64 used an ultracompact camera with a surgical microscope. The system used a 4×4 SRDA to detect 16 spectral bands in the range of 481 to 632 nm. With the system, they segmented blood vessels, assessed hemoglobin concentration, and detected potential vasomotion and epileptic spike responses.
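A 4×4 SRDA of the kind mentioned above interleaves 16 band filters across the sensor, and band images are recovered by subsampling each filter position. A minimal sketch, assuming a simple repeating 4×4 mosaic layout (real sensor layouts and demosaicking pipelines vary):

```python
import numpy as np

def demosaic_4x4(raw):
    """Split a raw mosaic frame into 16 quarter-resolution band images by
    taking every 4th pixel starting at each position in the 4x4 tile."""
    bands = {}
    for i in range(4):
        for j in range(4):
            bands[4 * i + j] = raw[i::4, j::4]
    return bands

raw = np.arange(8 * 8).reshape(8, 8)   # toy 8x8 raw frame
bands = demosaic_4x4(raw)
print(len(bands), bands[0].shape)  # → 16 (2, 2)
```

Each band image has 1/16 of the sensor's pixel count, which is the spatial-resolution price a mosaic snapshot sensor pays for single-exposure spectral capture.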

8.

Discussion and Future Directions

Spectral imaging, which includes both multispectral and hyperspectral imaging, acquires images in many wavelengths within and beyond the visible light range. The vast amount of data acquired both spatially and spectrally offers benefits in understanding the biochemical composition of tissues and its spatial distribution. The main drive for designing compact and lightweight spectral cameras came from remote sensing, where small cameras are necessary to fit onto UAVs. However, many other imaging-intensive fields, such as biomedical imaging, benefited tremendously from this progress in the development of compact systems. We reviewed the technological progress made in the engineering and manufacturing of compact spectral cameras and found that current compact systems are vastly superior to many of their bulky counterparts of only 20 years ago. While the engineering principles have existed since the mid-20th century, it was manufacturing progress that drove the miniaturization of many previously cumbersome systems. For example, the physics behind FPI systems has been known since the late 19th century. However, manufacturing them in large quantities and small sizes required lithography, which has only been practical since the 1990s. As such, we expect that future progress in generating more compact cameras will come from new manufacturing techniques. We also expect future researchers to focus on the engineering and design of compact snapshot cameras. Compared with spatial and spectral scanning cameras, snapshot cameras have fewer mechanical components, which makes them prime candidates for miniaturization. The variety of methods available for capturing snapshot images means that researchers can pursue different paths toward miniaturization as well. As of this writing, some of the smallest existing spectral cameras (<30  g) are all snapshot cameras using SRDA technology.53,54,76,161,279

8.1.

Hardware Limitations and Potential Solutions

Currently, there is still a large tradeoff among spatial scanning, spectral scanning, and snapshot imaging cameras. The tradeoff has three main components: spatial resolution, spectral resolution (number of bands), and acquisition time. Spatial and spectral scanning systems are optimized for spatial and spectral resolution, whereas snapshot systems are optimized for acquisition time. For a “dream” spectral imaging system to achieve high spatial and spectral resolution within a short acquisition time, several engineering barriers must be overcome. First, what should the acquisition mechanism be? If the system is snapshot, it needs to record the entire datacube onto the image sensor. The sensor for such a system would be much larger than any counterpart RGB imaging sensor; this alone can impact the size of the camera. If the system is spatial or spectral scanning, then the mechanism for translating or moving the filter must be fast enough to scan through the entire field in a reasonable time. Second, how will data recording work? Due to the enormous amount of data spectral cameras capture, most systems, even commercial ones, send raw data to and receive commands from another computer. Behmann et al.144 described a commercial system that performed live calibrations without user input. New generations of spectral camera systems are heading toward on-the-spot image processing within the system hardware, which can make analysis much more seamless.
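The snapshot sensor-size barrier described above can be made concrete with back-of-envelope arithmetic: a snapshot sensor must hold the whole datacube in one exposure, so its pixel count scales with spatial samples times bands. The numbers below are illustrative, not taken from any specific system:

```python
# Back-of-envelope datacube budget for a hypothetical snapshot system.
spatial = 1024 * 1024        # desired spatial samples (1 megapixel)
bands = 100                  # desired spectral bands
bit_depth = 12               # bits per sample

# A snapshot sensor needs one pixel per datacube sample.
pixels_needed = spatial * bands
cube_megabytes = pixels_needed * bit_depth / 8 / 1e6

print(pixels_needed, round(cube_megabytes))  # → 104857600 157
```

A ~105-megapixel sensor per frame, and ~157 MB per uncompressed cube, illustrates both why snapshot sensors trade away resolution and why most systems stream raw data to an external computer.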

We briefly touched upon the power issues of existing cameras. For now, power consumption is not a pressing concern because many laboratory-based systems draw power from the grid. However, in a future where biomedical spectral imaging systems are used in low-resource settings, remote power sources and low power consumption must be a research priority. Another potential problem for compact spectral cameras is system cooling. Thermal radiation emitted by the instrument itself is negligible in the visible-NIR range but can affect imagers working in the long-wave infrared (LWIR, 8000 to 14,000 nm) range.280 To minimize this effect, LWIR spectral cameras need a dedicated cooling apparatus, which increases the size and weight of the system.

8.2.

Quest for Lower Costs

A barrier to the popularization of spectral imaging research is the high cost of commercial systems. As of now, a commercial spectral imaging camera can cost tens of thousands of dollars, orders of magnitude more than its commercial RGB counterpart. This is due to the small market size and complex manufacturing process. Manufacturing accurate spectral imaging systems requires precise spectral components and elaborate calibration processes, similar to the manufacturing of other precision instruments. To counter this, many research facilities and hobbyists have produced customized spectral cameras using off-the-shelf components. We reviewed the use of open-source hardware and software, 3D printing, smartphones, low-cost spectroscopy, and off-the-shelf optical systems. One common thread that enables all of them is the rise of personal simulation and modeling software. Ray-tracing software, optic simulation software, and 3D modeling software make customization possible. Still, a wide gap exists between the quality of systems built from off-the-shelf components and that of commercial systems. Many commercial systems rely on high-precision manufacturing standards; for example, SRDA, MEMS, and Offner spectrometers require lithographic fabrication, which is limited to research labs and clean room operations.121,166 If systems are built using low-cost components, extensive optical calibrations are needed, typically using a secondary spectral camera or a known light source. Improper calibration severely distorts experimental outcomes, especially in applications that require high spectral and spatial precision.
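The calibration referred to above most commonly includes white/dark reference normalization, which converts raw sensor counts to relative reflectance and removes illumination and dark-current nonuniformity. A minimal sketch of this standard flat-field correction:

```python
import numpy as np

def calibrate_reflectance(raw, white, dark, eps=1e-9):
    """Flat-field correction: R = (raw - dark) / (white - dark),
    clipped to [0, 1]. `white` is a frame of a diffuse white reference
    and `dark` a frame with the lens capped (same exposure settings)."""
    r = (raw - dark) / np.maximum(white - dark, eps)
    return np.clip(r, 0.0, 1.0)

# Toy 1x2 frames: counts of 60 and 110 between dark=10 and white=210.
raw = np.array([[60.0, 110.0]])
white = np.array([[210.0, 210.0]])
dark = np.array([[10.0, 10.0]])
print(calibrate_reflectance(raw, white, dark).tolist())  # → [[0.25, 0.5]]
```

For a hyperspectral cube, the same correction is applied per band, which is why an unstable light source or a poor white reference distorts every downstream spectral measurement.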

Progress in making more low-cost customized spectral cameras will come from engineering with off-the-shelf components and from the open-source movement. Researchers should share new system designs through open-access journals and invite collaboration and improvements. While open-source design is becoming increasingly accepted, it faces significant challenges in medical research. Many new designs did not undergo the strict regulation necessary for medical devices. They also had lackluster business models, which discouraged further development. Intellectual property laws that cover open-source designs differ between hardware and software. Software is a “created work” that is often legally protected under copyright, whereas hardware is an “invention” that is protected under patent. Many companies that manufacture open-source hardware still protect their products under a trademark, which acts as a form of quality assurance.281 3D printing and the open-source movement have intertwined roots; much of the software used to design and model in 3D is free and has large exchange forums on the internet. The majority of individuals involved in the 3D printing community are also involved in open-source projects, which shows a mindset of collaboration.282 We summarized the use of 3D printing for manufacturing customized parts and housings of compact spectral imagers. However, the biggest issue with 3D-printed housing is mechanical durability. PLA, the most commonly used material for 3D printing, is cheap and accurate but has low heat tolerance and poor outdoor durability. It also has the potential to shrink after cooling. Future researchers interested in developing 3D-printed housings for hyperspectral cameras should study different materials and their durability; materials such as acrylonitrile butadiene styrene, carbon fiber filaments, and metal-filled filaments can serve as alternatives to PLA.

We believe that smartphones will be an important component of future compact low-cost spectral imagers. Smartphones are already engineered to be extremely compact, with good processing power and low power consumption. They are also relatively low-cost and widely available. Smartphone cameras have made considerable progress in the last decades. Researchers should use smartphones not just as a camera substitute but also as a control unit. For this, open-source phone operating systems should be preferred because they allow control software to be written and shared freely. Smartphones are also connected to the communication network, which means that they can be used in a larger IoT system. We expect more smartphone-based systems to be used in low-resource settings or in coordinated clinical trials. A potential problem is that smartphones come in a variety of different configurations and have short-term technical support, so long-term development and collaboration can be difficult.

8.3.

Future of Spectral Cameras in Bioimaging

In the second part of the paper, we reviewed the use of compact spectral imagers in the biomedical imaging field. In many cases, building biomedical spectral imaging systems was similar to assembling “plug-and-play” components. We believe that compact imaging systems will become much more common in clinical settings. Their smaller size means that they occupy less space, leaving more room for other instruments. We predict that compact spectral cameras in biomedical imaging will have two types of applications. First, they will replace existing bulky spectral imaging systems. Surgical and diagnostic devices will be the main beneficiaries of this change. In surgery, smaller cameras can be fitted on top of endoscopes, surgical microscopes, and operating robots without much interference with the surgical process. Already, spectral imaging has enabled the visualization of key features during surgery, but it has not been widely adopted because of bulky sizes.7 Compact spectral cameras can allow for widespread adoption of spectral imaging technologies in the operating room. We believe that snapshot cameras will be the dominant systems in these situations because of their rapid acquisition time. Compact spectral systems will also be used increasingly for in vivo diagnosis of skin and retinal diseases. The strength of spectral devices comes from the fact that they are noninvasive and fast, which means that sensitive organs such as the retina and skin are prime candidates for diagnosis. These organs also contain well-known spectrally sensitive information, most notably blood oxygenation rate. The lighter weights will enable some systems to go from tabletop to handheld, making them much more convenient for clinical settings.

Second, compact spectral cameras will be used to research new physiological processes. If we want to see more biomedical imaging research using compact cameras, we must understand more about the spectral signatures of physiological processes. Knowledge of hemoglobin spectra and skin physiology has already helped researchers construct elaborate models to diagnose conditions such as melanoma, burns, wounds, ulcers, diabetic foot, and erythema. Large pathology datasets have been used to construct classification algorithms for digital staining, cancerous cell diagnosis, and cellular segmentation.283 We expect similar progress will be made in the retina, as it is currently linked to many complex diseases of the central nervous system.284 Recently, new understanding of how amyloid beta affects the scattering profile led to the development of hyperspectral imaging for Alzheimer’s diagnosis.56,57,274 We advocate for the creation and sharing of hyperspectral imaging databases. Currently, most spectral imaging databases are satellite images used for remote sensing. Only a handful of hyperspectral databases are for biomedical purposes,285,286 and they cover specific diseases. Creating new databases is difficult: acquisition requires a hyperspectral camera system and many patients or specimens. Databases also require large hosting space, which can reach hundreds of gigabytes. However, the scientific contribution of such databases will be invaluable if they advance our understanding of human physiology.

We also predict the use of compact spectral imaging alongside other imaging modalities. Spectral imaging is not always the superior imaging modality. New applications are limited by the penetration depth of light through skin tissue. Light in the VIS-NIR region has a penetration depth ranging from 0.48 mm at 550 nm to 3.57 mm at 850 nm. For comparison, photoacoustic multispectral imaging can achieve a penetration depth of up to 5 cm with a handheld system.20 To circumvent this shortcoming, imaging in the SWIR wavelengths has been proposed to provide greater penetration depths.88 By combining compact spectral imagers with other imaging modalities, multimodal systems have been constructed to provide point-of-care analysis to patients.108,225 Several imaging modalities pair well with hyperspectral imaging systems. Optical coherence tomography (OCT) captures images with depths of several millimeters at micrometer resolution. The combination of OCT and spectral imaging provides both surface chemical information and depth information.287,288 LSCI provides dynamic blood vessel movement information, which pairs well with spectral imaging’s ability to resolve blood oxygenation content.289 Raman spectroscopy provides detailed chemical information at a small spatial scale, which complements spectral imaging.290 Compact devices exist for many of these modalities, which means that multimodal systems can remain compact and convenient.

9.

Conclusion

From large and bulky systems used on satellites and aircraft for remote sensing, spectral cameras have evolved into compact and portable systems. Both the technology and the applications of spectral imaging are still not fully mature. New manufacturing methods and advances in computational speed mean that spectral cameras of the future can be high-quality, fast, and compact while remaining affordable. As spectral cameras become more compact and low-cost, more individuals will be able to use them to benefit biomedical research.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

This research was supported in part by the U.S. National Institutes of Health (NIH) Grants (R01CA156775, R01CA204254, R01HL140325, R01CA154475, and R21CA231911), by the Cancer Prevention and Research Institute of Texas (CPRIT) Grant RP190588, and by the Eugene McDermott Fellowship 202009 at the University of Texas at Dallas. The authors thank Kelden Pruitt and Ling Ma for contributing feedback to the manuscript.

References

1. 

J. Qin, “Chapter 5 – Hyperspectral imaging instruments,” Hyperspectral Imaging for Food Quality Analysis and Control, 129 –172 Academic Press, San Diego (2010). Google Scholar

2. 

A. F. Goetz, “Three decades of hyperspectral remote sensing of the Earth: a personal view,” Remote Sens. Environ., 113 S5 –S16 https://doi.org/10.1016/j.rse.2007.12.014 RSEEA7 0034-4257 (2009). Google Scholar

3. 

L. M. Dale et al., “Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: a review,” Appl. Spectrosc. Rev., 48 (2), 142 –159 https://doi.org/10.1080/05704928.2012.705800 APSRBB 0570-4928 (2013). Google Scholar

4. 

Y.-Z. Feng and D.-W. Sun, “Application of hyperspectral imaging in food safety inspection and control: a review,” Crit. Rev. Food Sci. Nutr., 52 (11), 1039 –1058 https://doi.org/10.1080/10408398.2011.651542 CRFND6 0099-0248 (2012). Google Scholar

5. 

A. Polak et al., “Hyperspectral imaging combined with data classification techniques as an aid for artwork authentication,” J. Cult. Heritage, 26 1 –11 https://doi.org/10.1016/j.culher.2017.01.013 (2017). Google Scholar

6. 

Q. Li et al., “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt., 18 (10), 100901 https://doi.org/10.1117/1.JBO.18.10.100901 JBOPFO 1083-3668 (2013). Google Scholar

7. 

G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt., 19 (1), 010901 https://doi.org/10.1117/1.JBO.19.1.010901 JBOPFO 1083-3668 (2014). Google Scholar

8. 

R. A. Crocombe, “Portable spectroscopy,” Appl. Spectrosc., 72 (12), 1701 –1751 https://doi.org/10.1177/0003702818809719 APSPA4 0003-7028 (2018). Google Scholar

9. 

S. J. Leavesley et al., “A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods,” J. Biophotonics, 11 (1), e201600227 https://doi.org/10.1002/jbio.201600227 (2018). Google Scholar

10. 

M. P. Chrisp, “Convex diffraction grating imaging spectrometer,” (1999). Google Scholar

11. 

M. R. Descour and E. L. Dereniak, “Nonscanning no-moving-parts imaging spectrometer,” Proc. SPIE, 2480 48 –64 https://doi.org/10.1117/12.210908 (1995). Google Scholar

12. 

B. K. Ford, M. R. Descour and R. M. Lynch, “Large-image-format computed tomography imaging spectrometer for fluorescence microscopy,” Opt. Express, 9 (9), 444 –453 https://doi.org/10.1364/OE.9.000444 OPEXFF 1094-4087 (2001). Google Scholar

13. 

R. F. Horton, “Optical design for a high-etendue imaging Fourier-transform spectrometer,” Proc. SPIE, 2819 300 –315 https://doi.org/10.1117/12.258077 (1996). Google Scholar

14. 

R. G. Sellar and G. D. Boreman, “Comparison of relative signal-to-noise ratios of different classes of imaging spectrometer,” Appl. Opt., 44 (9), 1614 –1624 https://doi.org/10.1364/AO.44.001614 APOPAI 0003-6935 (2005). Google Scholar

15. 

C. T. Willoughby, M. A. Folkman and M. A. Figueroa, “Application of hyperspectral-imaging spectrometer systems to industrial inspection,” (1996). https://doi.org/10.1117/12.230385 Google Scholar

16. 

C.-C. Kung, M.-H. Lee and C.-L. Hsieh, “Development of an ultraspectral imaging system by using a concave monochromator,” J. Chin. Inst. Eng., 35 (3), 329 –342 https://doi.org/10.1080/02533839.2012.655535 CKCKDZ 0253-3839 (2012). Google Scholar

17. 

D. Briers et al., “Laser speckle contrast imaging: theoretical and practical limitations,” J. Biomed. Opt., 18 (6), 066018 https://doi.org/10.1117/1.JBO.18.6.066018 JBOPFO 1083-3668 (2013). Google Scholar

18. 

H. Abramczyk and B. Brozek-Pluska, “Raman imaging in biochemical and biomedical applications. Diagnosis and treatment of breast cancer,” Chem. Rev., 113 (8), 5766 –5781 https://doi.org/10.1021/cr300147r CHREAY 0009-2665 (2013). Google Scholar

19. 

S. Marschall et al., "Optical coherence tomography—current technology and applications in clinical and biomedical research," Anal. Bioanal. Chem., 400 (9), 2699 –2720 https://doi.org/10.1007/s00216-011-5008-1 ABCNBP 1618-2642 (2011). Google Scholar

20. 

A. Taruttis and V. Ntziachristos, “Advances in real-time multispectral optoacoustic imaging and its applications,” Nat. Photonics, 9 (4), 219 –227 https://doi.org/10.1038/nphoton.2015.29 NPAHBY 1749-4885 (2015). Google Scholar

21. 

J. Hewett et al., "The application of a compact multispectral imaging system with integrated excitation source to in vivo monitoring of fluorescence during topical photodynamic therapy of superficial skin cancers," Photochem. Photobiol., 73 (3), 278 –282 https://doi.org/10.1562/0031-8655(2001)073<0278:TAOACM>2.0.CO;2 PHCBAP 0031-8655 (2001). Google Scholar

22. 

S. Kim et al., “Smartphone-based multispectral imaging: system development and potential for mobile skin diagnosis,” Biomed. Opt. Express, 7 (12), 5294 –5307 https://doi.org/10.1364/BOE.7.005294 BOEICL 2156-7085 (2016). Google Scholar

23. 

A. S. Luthman et al., “Fluorescence hyperspectral imaging (fHSI) using a spectrally resolved detector array,” J. Biophotonics, 10 (6-7), 840 –853 https://doi.org/10.1002/jbio.201600304 (2017). Google Scholar

24. 

X. Delpueyo et al., “Multispectral imaging system based on light-emitting diodes for the detection of melanomas and basal cell carcinomas: a pilot study,” J. Biomed. Opt., 22 (6), 065006 https://doi.org/10.1117/1.JBO.22.6.065006 JBOPFO 1083-3668 (2017). Google Scholar

25. 

L. Rey-Barroso et al., “Visible and extended near-infrared multispectral imaging for skin cancer diagnosis,” Sensors, 18 (5), 1441 https://doi.org/10.3390/s18051441 SNSRES 0746-9462 (2018). Google Scholar

26. 

I. Lihacova et al., “A method for skin malformation classification by combining multispectral and skin autofluorescence imaging,” Proc. SPIE, 10685 1068535 https://doi.org/10.1117/12.2306203 (2018). Google Scholar

27. 

H. Ding et al., “Smartphone based multispectral imager and its potential for point-of-care testing,” Analyst, 144 (14), 4380 –4385 https://doi.org/10.1039/C9AN00853E ANLYAG 0365-4885 (2019). Google Scholar

28. 

R. D. Uthoff et al., “Point-of-care, multispectral, smartphone-based dermascopes for dermal lesion screening and erythema monitoring,” J. Biomed. Opt., 25 (6), 066004 https://doi.org/10.1117/1.JBO.25.6.066004 JBOPFO 1083-3668 (2020). Google Scholar

29. 

J. Spigulis et al., “Smartphone snapshot mapping of skin chromophores under triple-wavelength laser illumination,” J. Biomed. Opt., 22 (9), 091508 https://doi.org/10.1117/1.JBO.22.9.091508 JBOPFO 1083-3668 (2017). Google Scholar

30. 

H. Qi et al., “A hand-held mosaicked multispectral imaging device for early stage pressure ulcer detection,” J. Med. Syst., 35 (5), 895 –904 https://doi.org/10.1007/s10916-010-9508-x JMSYDA 0148-5598 (2011). Google Scholar

31. 

M.-C. Chang et al., “Multimodal sensor system for pressure ulcer wound assessment and care,” IEEE Trans. Ind. Inf., 14 (3), 1186 –1196 https://doi.org/10.1109/TII.2017.2782213 (2017). Google Scholar

32. 

Q. Li et al., "Microscopic hyperspectral imaging system used in rat skin analysis," in 2012 5th Int. Conf. Biomed. Eng. and Inf., 76 –79 (2012). https://doi.org/10.1109/BMEI.2012.6512880 Google Scholar

33. 

R. Rutkowski et al., “Hyperspectral imaging for in vivo monitoring of cold atmospheric plasma effects on microcirculation in treatment of head and neck cancer and wound healing,” Clin. Plasma Med., 7 52 –57 https://doi.org/10.1016/j.cpme.2017.09.002 (2017). Google Scholar

34. 

A. Kulcke et al., “A compact hyperspectral camera for measurement of perfusion parameters in medicine,” Biomed. Eng./Biomed. Tech., 63 (5), 519 –527 https://doi.org/10.1515/bmt-2017-0145 (2018). Google Scholar

35. 

J. Marotz et al., “3D-perfusion analysis of burn wounds using hyperspectral imaging,” Burns, 47 (1), 157 –170 https://doi.org/10.1016/j.burns.2020.06.001 BURND8 0305-4179 (2021). Google Scholar

36. 

D. Thiem et al., “Hyperspectral analysis for perioperative perfusion monitoring: a clinical feasibility study on free and pedicled flaps,” Clin. Oral Investig., 25 (3), 933 –945 https://doi.org/10.1007/s00784-020-03382-6 (2021). Google Scholar

37. 

F. J. Bolton et al., “Portable, low-cost multispectral imaging system: design, development, validation, and utilization,” J. Biomed. Opt., 23 (12), 121612 https://doi.org/10.1117/1.JBO.23.12.121612 JBOPFO 1083-3668 (2018). Google Scholar

38. 

S. Kim et al., “Smartphone-based multispectral imaging and machine-learning based analysis for discrimination between seborrheic dermatitis and psoriasis on the scalp,” Biomed. Opt. Express, 10 (2), 879 –891 https://doi.org/10.1364/BOE.10.000879 BOEICL 2156-7085 (2019). Google Scholar

39. 

L. Van Manen et al., “Feasibility of a snapshot hyperspectral imaging for detection of local skin oxygenation,” Proc. SPIE, 10873 108730Q https://doi.org/10.1117/12.2507840 (2019). Google Scholar

40. 

Q. He and R. K. Wang, “Analysis of skin morphological features and real-time monitoring using snapshot hyperspectral imaging,” Biomed. Opt. Express, 10 (11), 5625 –5638 https://doi.org/10.1364/BOE.10.005625 BOEICL 2156-7085 (2019). Google Scholar

41. 

E. Zherebtsov et al., “Hyperspectral imaging of human skin aided by artificial neural networks,” Biomed. Opt. Express, 10 (7), 3545 –3559 https://doi.org/10.1364/BOE.10.003545 BOEICL 2156-7085 (2019). Google Scholar

42. 

Q. He and R. Wang, “Hyperspectral imaging enabled by an unmodified smartphone for analyzing skin morphological features and monitoring hemodynamics,” Biomed. Opt. Express, 11 (2), 895 –910 https://doi.org/10.1364/BOE.378470 BOEICL 2156-7085 (2020). Google Scholar

43. 

L. Chandler, A. Chandler and A. Periasamy, “Novel snapshot hyperspectral imager for fluorescence imaging,” Proc. SPIE, 10498 1049837 https://doi.org/10.1117/12.2300933 (2018). Google Scholar

44. 

F. Wang, A. Behrooz and M. Morris, “High-contrast subcutaneous vein detection and localization using multispectral imaging,” J. Biomed. Opt., 18 (5), 050504 https://doi.org/10.1117/1.JBO.18.5.050504 JBOPFO 1083-3668 (2013). Google Scholar

45. 

Z. Liu, H. Wang and Q. Li, “Tongue tumor detection in medical hyperspectral images,” Sensors, 12 (1), 162 –174 https://doi.org/10.3390/s120100162 SNSRES 0746-9462 (2012). Google Scholar

46. 

N. Bedard et al., “Multimodal snapshot spectral imaging for oral cancer diagnostics: a pilot study,” Biomed. Opt. Express, 4 (6), 938 –949 https://doi.org/10.1364/BOE.4.000938 BOEICL 2156-7085 (2013). Google Scholar

47. 

L. Ma et al., “Hyperspectral microscopic imaging for automatic detection of head and neck squamous cell carcinoma using histologic image and machine learning,” Proc. SPIE, 11320 113200W https://doi.org/10.1117/12.2549369 (2020). Google Scholar

48. 

R. Kumashiro et al., “Integrated endoscopic system based on optical imaging and hyperspectral data analysis for colorectal cancer detection,” Anticancer Res., 36 (8), 3925 –3932 ANTRD4 0250-7005 (2016). Google Scholar

49. 

Y. Zeng et al., "A multispectral hand-held spatial frequency domain imaging system for imaging human colorectal cancer," Proc. SPIE, 10874 108740T https://doi.org/10.1117/12.2510320 (2019). Google Scholar

50. 

M. Erfanzadeh et al., “Low-cost compact multispectral spatial frequency domain imaging prototype for tissue characterization,” Biomed. Opt. Express, 9 (11), 5503 –5510 https://doi.org/10.1364/BOE.9.005503 BOEICL 2156-7085 (2018). Google Scholar

51. 

F. J. Bolton et al., “Development and bench testing of a multi-spectral imaging technology built on a smartphone platform,” Proc. SPIE, 9699 969907 https://doi.org/10.1117/12.2218694 (2016). Google Scholar

52. 

J. W. Mink et al., “Initial clinical testing of a multi-spectral imaging system built on a smartphone platform,” Proc. SPIE, 9699 96990R https://doi.org/10.1117/12.2218693 (2016). Google Scholar

53. 

L. Van Manen et al., “Snapshot hyperspectral imaging for detection of breast tumors in resected specimens,” Proc. SPIE, 10856 108560I https://doi.org/10.1117/12.2507835 (2019). Google Scholar

54. 

J. Kaluzny et al., “Bayer filter snapshot hyperspectral fundus camera for human retinal imaging,” Curr. Eye Res., 42 (4), 629 –635 https://doi.org/10.1080/02713683.2016.1221976 CEYRDM 0271-3683 (2017). Google Scholar

55. 

H. Li et al., “Snapshot hyperspectral retinal imaging using compact spectral resolving detector array,” J. Biophotonics, 10 (6-7), 830 –839 https://doi.org/10.1002/jbio.201600053 (2017). Google Scholar

56. 

S. S. More, J. M. Beach and R. Vince, “Early detection of amyloidopathy in Alzheimer’s mice by hyperspectral endoscopy,” Invest. Ophthalmol. Vis. Sci., 57 (7), 3231 –3238 https://doi.org/10.1167/iovs.15-17406 (2016). Google Scholar

57. 

S. S. More et al., “In vivo assessment of retinal biomarkers by hyperspectral imaging: early detection of Alzheimer’s disease,” ACS Chem. Neurosci., 10 (11), 4492 –4501 https://doi.org/10.1021/acschemneuro.9b00331 (2019). Google Scholar

58. 

A. A. Fawzi et al., “Recovery of macular pigment spectrum in vivo using hyperspectral image analysis,” J. Biomed. Opt., 16 (10), 106008 https://doi.org/10.1117/1.3640813 JBOPFO 1083-3668 (2011). Google Scholar

59. 

T. C. Cavalcanti et al., "Smartphone-based spectral imaging otoscope: system development and preliminary study for evaluation of its potential as a mobile diagnostic tool," J. Biophotonics, 13 (6), e201960213 https://doi.org/10.1002/jbio.201960213 (2020). Google Scholar

60. 

S. Ortega et al., “Hyperspectral push-broom microscope development and characterization,” IEEE Access, 7 122473 –122491 https://doi.org/10.1109/ACCESS.2019.2937729 (2019). Google Scholar

61. 

R. van Brakel et al., “The effect of zirconia and titanium implant abutments on light reflection of the supporting soft tissues,” Clin. Oral Implants Res., 22 (10), 1172 –1178 https://doi.org/10.1111/j.1600-0501.2010.02082.x (2011). Google Scholar

62. 

D. Nouri, Y. Lucas and S. Treuillet, “Hyperspectral interventional imaging for enhanced tissue visualization and discrimination combining band selection methods,” Int. J. Comput. Assist. Radiol. Surg., 11 (12), 2185 –2197 https://doi.org/10.1007/s11548-016-1449-5 (2016). Google Scholar

63. 

N. T. Clancy et al., “Intraoperative measurement of bowel oxygen saturation using a multispectral imaging laparoscope,” Biomed. Opt. Express, 6 (10), 4179 –4190 https://doi.org/10.1364/BOE.6.004179 BOEICL 2156-7085 (2015). Google Scholar

64. 

J. Pichette et al., “Intraoperative video-rate hemodynamic response assessment in human cortex using snapshot hyperspectral optical imaging,” Neurophotonics, 3 (4), 045003 https://doi.org/10.1117/1.NPh.3.4.045003 (2016). Google Scholar

65. 

Y. Zhang et al., “Tissue classification for laparoscopic image understanding based on multispectral texture analysis,” Proc. SPIE, 9786 978619 https://doi.org/10.1117/12.2216090 (2016). Google Scholar

66. 

E. J. Baltussen et al., “Hyperspectral imaging for tissue classification, a way toward smart laparoscopic colorectal surgery,” J. Biomed. Opt., 24 (1), 016002 https://doi.org/10.1117/1.JBO.24.1.016002 JBOPFO 1083-3668 (2019). Google Scholar

67. 

H. Akbari et al., “Detection and analysis of the intestinal ischemia using visible and invisible hyperspectral imaging,” IEEE Trans. Biomed. Eng., 57 (8), 2011 –2017 https://doi.org/10.1109/TBME.2010.2049110 IEBEAX 0018-9294 (2010). Google Scholar

68. 

B. Jansen-Winkeln et al., “Determination of the transection margin during colorectal resection with hyperspectral imaging (HSI),” Int. J. Colorectal Dis., 34 (4), 731 –739 https://doi.org/10.1007/s00384-019-03250-0 IJCDE6 1432-1262 (2019). Google Scholar

69. 

B. Jansen-Winkeln et al., "Hyperspectral imaging of gastrointestinal anastomoses [Hyperspektral-Imaging bei gastrointestinalen Anastomosen]," Der Chirurg, 89 (9), 717 –725 https://doi.org/10.1007/s00104-018-0633-2 (2018). Google Scholar

70. 

H. Köhler et al., “Laparoscopic system for simultaneous high-resolution video and rapid hyperspectral imaging in the visible and near-infrared spectral range,” J. Biomed. Opt., 25 (8), 086004 https://doi.org/10.1117/1.JBO.25.8.086004 JBOPFO 1083-3668 (2020). Google Scholar

71. 

A. F. Goetz et al., “Imaging spectrometry for earth remote sensing,” Science, 228 (4704), 1147 –1153 https://doi.org/10.1126/science.228.4704.1147 SCIEAS 0036-8075 (1985). Google Scholar

72. 

J. C. Lansing, Jr. and R. W. Cline, "The four- and five-band multispectral scanners for Landsat," Opt. Eng., 14 (4), 144312 https://doi.org/10.1117/12.7971838 (1975). Google Scholar

73. 

S. K. Babey and C. D. Anger, “Compact airborne spectrographic imager (CASI): a progress review,” Proc. SPIE, 1937 152 –163 https://doi.org/10.1117/12.157052 (1993). Google Scholar

74. 

M. Aikio, “Hyperspectral prism-grating-prism imaging spectrograph,” (2001). Google Scholar

75. 

Y. Zhong et al., “Mini-UAV-borne hyperspectral remote sensing: from observation and processing to applications,” IEEE Geosci. Remote Sens. Mag., 6 (4), 46 –62 https://doi.org/10.1109/MGRS.2018.2867592 (2018). Google Scholar

76. 

B. Geelen et al., “A tiny VIS-NIR snapshot multispectral camera,” Proc. SPIE, 9374 937414 https://doi.org/10.1117/12.2077583 (2015). Google Scholar

77. 

H. Wu et al., “Miniaturized handheld hyperspectral imager,” Proc. SPIE, 9101 91010W https://doi.org/10.1117/12.2049243 (2014). Google Scholar

78. 

J. Sandino et al., “Aerial mapping of forests affected by pathogens using UAVs, hyperspectral sensors, and artificial intelligence,” Sensors, 18 (4), 944 https://doi.org/10.3390/s18040944 SNSRES 0746-9462 (2018). Google Scholar

79. 

J. Pichette, W. Charle and A. Lambrechts, “Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan,” Proc. SPIE, 10110 1011014 https://doi.org/10.1117/12.2253614 (2017). Google Scholar

80. 

X. Prieto-Blanco et al., “Optical configurations for imaging spectrometers,” Computational Intelligence for Remote Sensing, 1 –25 Springer, Berlin, Heidelberg (2008). Google Scholar

81. 

T. Adão et al., “Hyperspectral imaging: a review on UAV-based sensors, data processing and applications for agriculture and forestry,” Remote Sens., 9 (11), 1110 https://doi.org/10.3390/rs9111110 RSEND3 (2017). Google Scholar

82. 

A. D. Elliott et al., “Real-time hyperspectral fluorescence imaging of pancreatic β-cell dynamics with the image mapping spectrometer,” J. Cell Sci., 125 (20), 4833 –4840 https://doi.org/10.1242/jcs.108258 JNCSAI 0021-9533 (2012). Google Scholar

83. 

W. F. Vermaas et al., “In vivo hyperspectral confocal fluorescence imaging to determine pigment localization and distribution in cyanobacterial cells,” Proc. Natl. Acad. Sci. U. S. A., 105 (10), 4050 –4055 https://doi.org/10.1073/pnas.0708090105 (2008). Google Scholar

84. 

A. M. Mika, “Three decades of Landsat instruments,” Photogramm. Eng. Remote Sens., 63 (7), 839 –852 (1997). Google Scholar

85. 

F. Sigernes et al., “Do it yourself hyperspectral imager for handheld to airborne operations,” Opt. Express, 26 (5), 6021 –6035 https://doi.org/10.1364/OE.26.006021 OPEXFF 1094-4087 (2018). Google Scholar

86. 

F. Sigernes et al., “Multipurpose spectral imager,” Appl. Opt., 39 (18), 3143 –3153 https://doi.org/10.1364/AO.39.003143 APOPAI 0003-6935 (2000). Google Scholar

87. 

J. A. Gutiérrez-Gutiérrez et al., “Custom scanning hyperspectral imaging system for biomedical applications: modeling, benchmarking, and specifications,” Sensors, 19 (7), 1692 https://doi.org/10.3390/s19071692 SNSRES 0746-9462 (2019). Google Scholar

88. 

V. Batshev et al., “Polarizer-free AOTF-based SWIR hyperspectral imaging for biomedical applications,” Sensors, 20 (16), 4439 https://doi.org/10.3390/s20164439 SNSRES 0746-9462 (2020). Google Scholar

89. 

O. Polschikova et al., “AOTF-based optical system of a microscope module for multispectral imaging techniques,” Proc. SPIE, 10592 105920H https://doi.org/10.1117/12.2297614 (2017). Google Scholar

90. 

Q. Li et al., “AOTF based hyperspectral tongue imaging system and its applications in computer-aided tongue disease diagnosis,” in 2010 3rd Int. Conf. Biomed. Eng. and Inf., 1424 –1427 (2010). Google Scholar

91. 

R. Abdlaty et al., “Hyperspectral imaging: comparison of acousto-optic and liquid crystal tunable filters,” Proc. SPIE, 10573 105732P https://doi.org/10.1117/12.2282532 (2018). Google Scholar

92. 

C. Balas, “A novel optical imaging method for the early detection, quantitative grading, and mapping of cancerous and precancerous lesions of cervix,” IEEE Trans. Biomed. Eng., 48 (1), 96 –104 https://doi.org/10.1109/10.900259 IEBEAX 0018-9294 (2001). Google Scholar

93. 

N. Gupta, “Development of staring hyperspectral imagers,” in IEEE Appl. Imagery Pattern Recognit. Workshop (AIPR), (2011). https://doi.org/10.1109/AIPR.2011.6176379 Google Scholar

94. 

J. F. Turner II and A. H. Zalavadia, "A novel surface plasmon coupled tunable wavelength filter for hyperspectral imaging," Proc. SPIE, 10376 103760A https://doi.org/10.1117/12.2274671 PSISDG 0277-786X (2017). Google Scholar

95. 

N. Gat, "Imaging spectroscopy using tunable filters: a review," Proc. SPIE, 4056 50 –64 (2000). https://doi.org/10.1117/12.381686 Google Scholar

96. 

C. Bai et al., “Compact birefringent interferometer for Fourier transform hyperspectral imaging,” Opt. Express, 26 (2), 1703 –1725 https://doi.org/10.1364/OE.26.001703 OPEXFF 1094-4087 (2018). Google Scholar

97. 

B. Delauré et al., “The geospectral camera: a compact and geometrically precise hyperspectral and high spatial resolution imager,” Int. Arch. Photogramm. Remote Sens. and Spatial Inf. Sci., XL-1/W1 69 –74 https://doi.org/10.5194/isprsarchives-XL-1-W1-69-2013 1682-1750 (2013). Google Scholar

98. 

B. Couce et al., “A windowing/pushbroom hyperspectral imager,” in Int. Conf. Knowl.-Based and Intell. Inf. and Eng. Syst., 300 –306 (2006). Google Scholar

99. 

M. H. Köhler et al., “Hyperspectral imager for the mid-infrared spectral range using a single-mirror interferometer and a windowing method,” OSA Contin., 2 (11), 3212 –3222 https://doi.org/10.1364/OSAC.2.003212 (2019). Google Scholar

100. 

N. A. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng., 52 (9), 090901 https://doi.org/10.1117/1.OE.52.9.090901 (2013). Google Scholar

101. 

K. Monakhova et al., “Spectral DiffuserCam: lensless snapshot hyperspectral imaging with a spectral filter array,” Optica, 7 (10), 1298 –1307 https://doi.org/10.1364/OPTICA.397214 (2020). Google Scholar

102. 

J. Salazar-Vazquez and A. Mendez-Vazquez, “A plug-and-play hyperspectral imaging sensor using low-cost equipment,” HardwareX, 7 e00087 https://doi.org/10.1016/j.ohx.2019.e00087 (2020). Google Scholar

103. 

R. Habel, M. Kudenov and M. Wimmer, “Practical spectral photography,” Comput. Graphics Forum, 31 (2pt2), 449 –458 https://doi.org/10.1111/j.1467-8659.2012.03024.x (2012). Google Scholar

104. 

W. R. Johnson et al., “Snapshot hyperspectral imaging in ophthalmology,” J. Biomed. Opt., 12 (1), 014036 https://doi.org/10.1117/1.2434950 JBOPFO 1083-3668 (2007). Google Scholar

105. 

G. R. Arce et al., "Compressive coded aperture spectral imaging: an introduction," IEEE Signal Process. Mag., 31 (1), 105 –115 https://doi.org/10.1109/MSP.2013.2278763 ISPRE6 1053-5888 (2013). Google Scholar

106. 

A. S. Luthman, “Spectral imaging systems and sensor characterisations,” Spectrally Resolved Detector Arrays for Multiplexed Biomedical Fluorescence Imaging, 9 –50 Springer International Publishing, Cham (2018). Google Scholar

107. 

B. Arad and O. Ben-Shahar, "Sparse recovery of hyperspectral signal from natural RGB images," in Comput. Vision – ECCV 2016, 19 –34 (2016). Google Scholar

108. 

J. H. Song, C. Kim and Y. Yoo, “Vein visualization using a smart phone with multispectral Wiener estimation for point-of-care applications,” IEEE J. Biomed. Health Inf., 19 (2), 773 –778 https://doi.org/10.1109/JBHI.2014.2313145 (2014). Google Scholar

109. 

H.-Y. Yao et al., “Hyperspectral ophthalmoscope images for the diagnosis of diabetic retinopathy stage,” J. Clin. Med., 9 (6), 1613 https://doi.org/10.3390/jcm9061613 (2020). Google Scholar

110. 

S. Koundinya et al., “2D-3D CNN based architectures for spectral reconstruction from RGB images,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit. Workshops, (2018). https://doi.org/10.1109/CVPRW.2018.00129 Google Scholar

111. 

A. Alvarez-Gila, J. Van De Weijer and E. Garrote, “Adversarial networks for spatial context-aware spectral image reconstruction from RGB,” in Proc. IEEE Int. Conf. Comput. Vision Workshops, (2017). https://doi.org/10.1109/ICCVW.2017.64 Google Scholar

112. 

A. Signoroni et al., “Deep learning meets hyperspectral image analysis: a multidisciplinary review,” J. Imaging, 5 (5), 52 https://doi.org/10.3390/jimaging5050052 (2019). Google Scholar

113. 

M. B. Stuart, A. J. McGonigle and J. R. Willmott, “Hyperspectral imaging in environmental monitoring: a review of recent developments and technological advances in compact field deployable systems,” Sensors, 19 (14), 3071 https://doi.org/10.3390/s19143071 SNSRES 0746-9462 (2019). Google Scholar

114. 

T. E. Renkoski, U. Utzinger and K. D. Hatch, “Wide-field spectral imaging of human ovary autofluorescence and oncologic diagnosis via previously collected probe data,” J. Biomed. Opt., 17 (3), 036003 https://doi.org/10.1117/1.JBO.17.3.036003 JBOPFO 1083-3668 (2012). Google Scholar

115. 

M. A. Afromowitz et al., “Multispectral imaging of burn wounds: a new clinical instrument for evaluating burn depth,” IEEE Trans. Biomed. Eng., 35 (10), 842 –850 https://doi.org/10.1109/10.7291 IEBEAX 0018-9294 (1988). Google Scholar

116. 

S. Salsone et al., “Histological validation of near-infrared reflectance multispectral imaging technique for caries detection and quantification,” J. Biomed. Opt., 17 (7), 076009 https://doi.org/10.1117/1.JBO.17.7.076009 JBOPFO 1083-3668 (2012). Google Scholar

117. 

N. Tack et al., “A compact, high-speed, and low-cost hyperspectral imager,” Proc. SPIE, 8266 82660Q https://doi.org/10.1117/12.908172 (2012). Google Scholar

118. 

A. Machikhin, V. Pozhar and V. Batshev, "Double-AOTF-based aberration-free spectral imaging endoscopic system for biomedical applications," J. Innov. Opt. Health Sci., 8 (3), 1541009 https://doi.org/10.1142/S1793545815410096 (2015). Google Scholar

119. 

J. Katrašnik et al., “Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging systems,” Chemometr. Intell. Lab. Syst., 101 (1), 23 –29 https://doi.org/10.1016/j.chemolab.2009.11.012 CILSEN 0169-7439 (2010). Google Scholar

120. 

B. Guo et al., “Wide-band large-aperture Ag surface-micro-machined MEMS Fabry–Perot interferometers (AgMFPIs) for miniaturized hyperspectral imaging,” Proc. SPIE, 10545 105450U https://doi.org/10.1117/12.2286438 (2018). Google Scholar

121. 

P. Gonzalez et al., “A novel CMOS-compatible, monolithically integrated line-scan hyperspectral imager covering the VIS-NIR range,” Proc. SPIE, 9855 98550N https://doi.org/10.1117/12.2230726 (2016). Google Scholar

122. 

S. Poger and E. Angelopoulou, “Multispectral sensors in computer vision,” (2001). Google Scholar

123. 

A. Näsilä et al., “Cubic-inch MOEMS spectral imager,” Proc. SPIE, 10931 109310F https://doi.org/10.1117/12.2508420 (2019). Google Scholar

124. 

N. A. Hagen et al., “Snapshot advantage: a review of the light collection improvement for parallel high-dimensional measurement systems,” Opt. Eng., 51 (11), 111702 https://doi.org/10.1117/1.OE.51.11.111702 (2012). Google Scholar

125. 

J. Qin et al., “Line-scan hyperspectral imaging techniques for food safety and quality applications,” Appl. Sci., 7 (2), 125 https://doi.org/10.3390/app7020125 (2017). Google Scholar

126. 

C. P. Warren et al., “Miniaturized visible near-infrared hyperspectral imager for remote-sensing applications,” Opt. Eng., 51 (11), 111720 https://doi.org/10.1117/1.OE.51.11.111720 (2012). Google Scholar

127. 

J. Zhou et al., “Design and laboratory calibration of the compact pushbroom hyperspectral imaging system,” Proc. SPIE, 7506 75062M https://doi.org/10.1117/12.838159 (2009). Google Scholar

128. 

H. Du et al., “A prism-based system for multispectral video acquisition,” in IEEE 12th Int. Conf. Comput. Vision, (2009). https://doi.org/10.1109/ICCV.2009.5459162 Google Scholar

129. 

M. Abdo, V. Badilita and J. Korvink, “Spatial scanning hyperspectral imaging combining a rotating slit with a Dove prism,” Opt. Express, 27 (15), 20290 –20304 https://doi.org/10.1364/OE.27.020290 OPEXFF 1094-4087 (2019). Google Scholar

130. 

S.-H. Baek et al., “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graphics, 36 (6), 1 –12 https://doi.org/10.1145/3130800.3130896 ATGRDF 0730-0301 (2017). Google Scholar

131. 

R. B. Saager et al., “Portable (handheld) clinical device for quantitative spectroscopy of skin, utilizing spatial frequency domain reflectance techniques,” Rev. Sci. Instrum., 88 (9), 094302 https://doi.org/10.1063/1.5001075 RSINAK 0034-6748 (2017). Google Scholar

132. 

M. Abdo et al., “Dual-mode pushbroom hyperspectral imaging using active system components and feed-forward compensation,” Rev. Sci. Instrum., 89 (8), 083113 https://doi.org/10.1063/1.5025896 RSINAK 0034-6748 (2018). Google Scholar

133. 

M. Pisani and M. Zucco, “Simple and cheap hyperspectral imaging for astronomy (and more),” Proc. SPIE, 10677 1067706 https://doi.org/10.1117/12.2309835 (2018). Google Scholar

134. 

T. Hyvärinen et al., “Compact high-resolution VIS/NIR hyperspectral sensor,” Proc. SPIE, 8032 80320W https://doi.org/10.1117/12.887003 (2011). Google Scholar

135. 

L. Ziph-Schatzberg et al., “Compact, high performance hyperspectral systems design and applications,” Proc. SPIE, 9482 94820W https://doi.org/10.1117/12.2177564 (2015). Google Scholar

136. 

W. Bakker, H. van der Werff and F. van der Meer, “Determining smile and keystone of lab hyperspectral line cameras,” in 2019 10th Workshop Hyperspectral Imaging and Signal Process.: Evol. in Remote Sens., 1 –5 (2019). Google Scholar

137. 

L. Yang et al., “Design of miniaturization hyperspectral imager based on CMOS sensor,” Proc. SPIE, 11151 1115127 https://doi.org/10.1117/12.2533027 (2019). Google Scholar

138. 

M. Bajic et al., “Airborne sampling of the reflectivity by the hyperspectral line scanner in a visible and near infrared wavelengths,” in Proc. 24th Symp. Eur. Assoc. of Remote Sens. Lab., 25 –27 (2004). Google Scholar

139. 

X. Liu et al., “Fast hyperspectral imager driven by a low-cost and compact galvo-mirror,” Optik, 224 165716 https://doi.org/10.1016/j.ijleo.2020.165716 OTIKAJ 0030-4026 (2020). Google Scholar

140. 

F. Cai et al., “Compact dual-channel (hyperspectral and video) endoscopy,” Front. Phys., 8 110 https://doi.org/10.3389/fphy.2020.00110 (2020). Google Scholar

141. 

Z. Xu, Y. Jiang and S. He, “Multi-mode microscopic hyperspectral imager for the sensing of biological samples,” Appl. Sci., 10 (14), 4876 https://doi.org/10.3390/app10144876 (2020). Google Scholar

142. 

R. Arablouei et al., “Fast and robust pushbroom hyperspectral imaging via DMD-based scanning,” Proc. SPIE, 9948 99480A https://doi.org/10.1117/12.2239107 (2016). Google Scholar

143. 

M. Abdo et al., “Automatic correction of diffraction pattern shift in a pushbroom hyperspectral imager with a piezoelectric internal line-scanning unit,” Proc. SPIE, 10110 1011004 https://doi.org/10.1117/12.2248467 (2017). Google Scholar

144. 

J. Behmann et al., “Specim IQ: evaluation of a new, miniaturized handheld hyperspectral camera and its application for plant phenotyping and disease detection,” Sensors, 18 (2), 441 https://doi.org/10.3390/s18020441 SNSRES 0746-9462 (2018). Google Scholar

145. 

V. Pozhar et al., “Hyperspectral monitoring AOTF-based apparatus,” J. Phys.: Conf. Ser., 1368 022046 https://doi.org/10.1088/1742-6596/1368/2/022046 JPCSDZ 1742-6588 (2019). Google Scholar

146. 

M. Gaponov et al., “Acousto-optical imaging spectrometer for unmanned aerial vehicles,” Proc. SPIE, 10466 104661V https://doi.org/10.1117/12.2288303 (2017). Google Scholar

147. 

T. Ishida et al., “A novel approach for vegetation classification using UAV-based hyperspectral imaging,” Comput. Electron. Agric., 144 80 –85 https://doi.org/10.1016/j.compag.2017.11.027 CEAGE6 0168-1699 (2018). Google Scholar

148. 

V. Saptari, Fourier Transform Spectroscopy Instrumentation Engineering, SPIE Press, Bellingham, Washington (2003). Google Scholar

149. 

Y. Xu et al., “Ultra-compact Fourier transform imaging spectrometer using a focal plane birefringent interferometer,” Opt. Lett., 43 (17), 4081 –4084 https://doi.org/10.1364/OL.43.004081 OPLEDP 0146-9592 (2018). Google Scholar

150. 

H. J. Dutton, Understanding Optical Communications, 1st ed., Prentice Hall PTR, New Jersey (1998). Google Scholar

151. 

A. Rissanen et al., “VTT’s Fabry-Perot interferometer technologies for hyperspectral imaging and mobile sensing applications,” Proc. SPIE, 10116 101160I https://doi.org/10.1117/12.2255950 (2017). Google Scholar

152. 

P. Connes, “Early history of Fourier transform spectroscopy,” Infrared Phys., 24 (2-3), 69 –93 https://doi.org/10.1016/0020-0891(84)90052-6 INFPAD 0020-0891 (1984). Google Scholar

153. 

M. Persky, “A review of spaceborne infrared Fourier transform spectrometers for remote sensing,” Rev. Sci. Instrum., 66 (10), 4763 –4797 https://doi.org/10.1063/1.1146154 RSINAK 0034-6748 (1995). Google Scholar

154. 

A. R. Harvey and D. W. Fletcher-Holmes, “Birefringent Fourier-transform imaging spectrometer,” Opt. Express, 12 (22), 5368 –5374 https://doi.org/10.1364/OPEX.12.005368 OPEXFF 1094-4087 (2004). Google Scholar

155. 

A. Perri et al., “Hyperspectral imaging with a TWINS birefringent interferometer,” Opt. Express, 27 (11), 15956 –15967 https://doi.org/10.1364/OE.27.015956 OPEXFF 1094-4087 (2019). Google Scholar

156. 

M. Vaughan, The Fabry-Perot Interferometer: History, Theory, Practice and Applications, 1st ed., Taylor and Francis, London (2017). Google Scholar

157. 

M. Ebermann et al., “Tunable MEMS Fabry-Pérot filters for infrared microspectrometers: a review,” Proc. SPIE, 9760 97600H https://doi.org/10.1117/12.2209288 (2016). Google Scholar

158. 

N. Gupta, P. R. Ashe and S. Tan, “Miniature snapshot multispectral imager,” Opt. Eng., 50 (3), 033203 https://doi.org/10.1117/1.3552665 (2011). Google Scholar

159. 

A. Näsilä et al., “Hand-held MEMS hyperspectral imager for VNIR mobile applications,” Proc. SPIE, 10545 105450R https://doi.org/10.1117/12.2286472 (2018). Google Scholar

160. 

A. S. Luthman et al., “Bimodal reflectance and fluorescence multispectral endoscopy based on spectrally resolving detector arrays,” J. Biomed. Opt., 24 (3), 031009 https://doi.org/10.1117/1.JBO.24.3.031009 JBOPFO 1083-3668 (2018). Google Scholar

161. 

S. Blair et al., “A 27-band snapshot hyperspectral imaging system for label-free tumor detection during image-guided surgery,” in Label-free Biomed. Imaging and Sens. (LBIS), (2019). Google Scholar

162. 

L. Miao et al., “Binary tree-based generic demosaicking algorithm for multispectral filter arrays,” IEEE Trans. Image Process., 15 (11), 3550 –3558 https://doi.org/10.1109/TIP.2006.877476 IIPRE4 1057-7149 (2006). Google Scholar

163. 

R. Wu et al., “Optimized multi-spectral filter arrays for spectral reconstruction,” Sensors, 19 (13), 2905 https://doi.org/10.3390/s19132905 SNSRES 0746-9462 (2019). Google Scholar

164. 

T. W. Sawyer et al., “Opti-MSFA: a toolbox for generalized design and optimization of multispectral filter arrays,” Opt. Express, 30 (5), 7591 –7611 https://doi.org/10.1364/OE.446767 OPEXFF 1094-4087 (2022). Google Scholar

165. 

S. Tisserand, “Custom Bayer filter multispectral imaging: emerging integrated technology,” in 2019 10th Workshop Hyperspectral Imaging and Signal Process.: Evol. in Remote Sens. (WHISPERS), 1 –4 (2019). Google Scholar

166. 

B. Geelen, N. Tack and A. Lambrechts, “A snapshot multispectral imager with integrated tiled filters and optical duplication,” Proc. SPIE, 8613 861314 https://doi.org/10.1117/12.2004072 (2013). Google Scholar

167. 

Q. Meng et al., “Study on the optical property of the micro Fabry-Perot cavity tunable filter,” Proc. SPIE, 7516 75160T https://doi.org/10.1117/12.840675 (2009). Google Scholar

168. 

S.-W. Wang et al., “16 × 1 integrated filter array in the MIR region prepared by using a combinatorial etching technique,” Appl. Phys. B, 82 (4), 637 –641 https://doi.org/10.1007/s00340-005-2102-0 (2006). Google Scholar

169. 

R. Hahn et al., “Detailed characterization of a hyperspectral snapshot imager for full-field chromatic confocal microscopy,” Proc. SPIE, 11352 113520Y https://doi.org/10.1117/12.2556797 (2020). Google Scholar

170. 

J. Dougherty, T. Jennings and M. Snikkers, “Compact camera technologies for real-time false-color imaging in the SWIR band,” Proc. SPIE, 8899 889907 https://doi.org/10.1117/12.2032737 (2013). Google Scholar

171. 

P. Gonzalez et al., “An extremely compact and high-speed line-scan hyperspectral imager covering the SWIR range,” Proc. SPIE, 10656 106560L https://doi.org/10.1117/12.2304918 (2018). Google Scholar

172. 

X. Yu et al., “Batch fabrication and compact integration of customized multispectral filter arrays towards snapshot imaging,” Opt. Express, 29 (19), 30655 –30665 https://doi.org/10.1364/OE.439390 OPEXFF 1094-4087 (2021). Google Scholar

173. 

K. Žídek et al., “Compact and robust hyperspectral camera based on compressed sensing,” Proc. SPIE, 10151 101510N https://doi.org/10.1117/12.2250268 (2016). Google Scholar

174. 

L. Gao, R. T. Kester and T. S. Tkaczyk, “Compact Image Slicing Spectrometer (ISS) for hyperspectral fluorescence microscopy,” Opt. Express, 17 (15), 12293 –12308 https://doi.org/10.1364/OE.17.012293 OPEXFF 1094-4087 (2009). Google Scholar

175. 

L. Gao, R. T. Smith and T. S. Tkaczyk, “Snapshot hyperspectral retinal camera with the image mapping spectrometer (IMS),” Biomed. Opt. Express, 3 (1), 48 –54 https://doi.org/10.1364/BOE.3.000048 BOEICL 2156-7085 (2012). Google Scholar

176. 

M. E. Pawlowski et al., “High speed image mapping spectrometer for biomedical applications,” in Opt. in the Life Sci. Congr., (2017). Google Scholar

177. 

R. P. Feynman, “There’s plenty of room at the bottom: an invitation to enter a new field of physics,” Resonance, 16 (9), 890 –905 https://doi.org/10.1007/s12045-011-0109-x (2011). Google Scholar

178. 

A. H. Nayfeh, M. I. Younis and E. M. Abdel-Rahman, “Dynamic pull-in phenomenon in MEMS resonators,” Nonlinear Dyn., 48 (1-2), 153 –163 https://doi.org/10.1007/s11071-006-9079-z NODYES 0924-090X (2007). Google Scholar

179. 

M. Kraft, A. Kenda and H. Schenk, “Hand-held high-speed spectrometers based on micro-electro-mechanical components,” in Proc. Symp. Photonics Technol. for 7th Framework Prog., (2006). Google Scholar

180. 

M. Kraft et al., “MEMS-based compact FT-spectrometers-a platform for spectroscopic mid-infrared sensors,” in Sensors, (2008). https://doi.org/10.1109/ICSENS.2008.4716400 Google Scholar

181. 

G. Jodor et al., “A MEMS-based Fourier transform spectrometer,” Fourier Transf. Spectrosc., 84 FMD11 https://doi.org/10.1364/FTS.2003.FMD11 (2003). Google Scholar

182. 

J. W. Judy, “Microelectromechanical systems (MEMS): fabrication, design and applications,” Smart Mater. Struct., 10 (6), 1115 https://doi.org/10.1088/0964-1726/10/6/301 SMSTER 0964-1726 (2001). Google Scholar

183. 

R. Trops et al., “Miniature MOEMS hyperspectral imager with versatile analysis tools,” Proc. SPIE, 10931 109310W https://doi.org/10.1117/12.2506366 (2019). Google Scholar

184. 

A. Rissanen et al., “MEMS FPI-based smartphone hyperspectral imager,” Proc. SPIE, 9855 985507 https://doi.org/10.1117/12.2229575 (2016). Google Scholar

185. 

X. Dong et al., “DMD-based hyperspectral microscopy with flexible multiline parallel scanning,” Microsyst. Nanoeng., 7 (1), 68 https://doi.org/10.1038/s41378-021-00299-2 (2021). Google Scholar

186. 

X. Dong et al., “DMD-based hyperspectral imaging system with tunable spatial and spectral resolution,” Opt. Express, 27 (12), 16995 –17006 https://doi.org/10.1364/OE.27.016995 OPEXFF 1094-4087 (2019). Google Scholar

187. 

Y. Wang et al., “MEMS scanner based handheld fluorescence hyperspectral imaging system,” Sens. Actuators, A, 188 450 –455 https://doi.org/10.1016/j.sna.2011.12.009 (2012). Google Scholar

188. 

P. Levin et al., “A wafer level packaged fully integrated hyperspectral Fabry-Perot filter with extended optical range,” in IEEE 32nd Int. Conf. Micro Electro Mech. Syst. (MEMS), (2019). https://doi.org/10.1109/MEMSYS.2019.8870831 Google Scholar

189. 

K. D. Long et al., “Multimode smartphone biosensing: the transmission, reflection, and intensity spectral (TRI)-analyzer,” Lab Chip, 17 (19), 3246 –3257 https://doi.org/10.1039/C7LC00633K LCAHAM 1473-0197 (2017). Google Scholar

190. 

C. Zhang et al., “Open-source 3D-printable optics equipment,” PLoS One, 8 (3), e59840 https://doi.org/10.1371/journal.pone.0059840 POLNCL 1932-6203 (2013). Google Scholar

191. 

T. D. Ngo et al., “Additive manufacturing (3D printing): a review of materials, methods, applications and challenges,” Compos. Part B, 143 172 –196 https://doi.org/10.1016/j.compositesb.2018.02.012 (2018). Google Scholar

192. 

P. Ghassemi et al., “Rapid prototyping of biomimetic vascular phantoms for hyperspectral reflectance imaging,” J. Biomed. Opt., 20 (12), 121312 https://doi.org/10.1117/1.JBO.20.12.121312 JBOPFO 1083-3668 (2015). Google Scholar

193. 

N. Nevala and T. Baden, “A low-cost hyperspectral scanner for natural imaging and the study of animal colour vision above and under water,” Sci. Rep., 9 (1), 10799 https://doi.org/10.1038/s41598-019-47220-6 (2019). Google Scholar

194. 

T. Blachowicz, G. Ehrmann and A. Ehrmann, “Optical elements from 3D printed polymers,” e-Polymers, 21 (1), 549 –565 https://doi.org/10.1515/epoly-2021-0061 EPOLCI 1618-7229 (2021). Google Scholar

195. 

B. W. Pearre et al., “Fast micron-scale 3D printing with a resonant-scanning two-photon microscope,” Addit. Manuf., 30 100887 https://doi.org/10.1016/j.addma.2019.100887 (2019). Google Scholar

196. 

P. Magnan, “Detection of visible photons in CCD and CMOS: a comparative view,” Nucl. Instrum. Methods Phys. Res., Sect. A, 504 (1-3), 199 –212 https://doi.org/10.1016/S0168-9002(03)00792-7 (2003). Google Scholar

197. 

M. T. Bohr and I. A. Young, “CMOS scaling trends and beyond,” IEEE Micro, 37 (6), 20 –29 https://doi.org/10.1109/MM.2017.4241347 IEMIDZ 0272-1732 (2017). Google Scholar

198. 

J. Janesick, J. T. Andrews and T. Elliott, “Fundamental performance differences between CMOS and CCD imagers: Part 1,” Proc. SPIE, 9591 959102 https://doi.org/10.1117/12.2189941 (2006). Google Scholar

199. 

Y. Sun et al., “Endoscopic fluorescence lifetime imaging for in vivo intraoperative diagnosis of oral carcinoma,” Microsc. Microanal., 19 (4), 791 https://doi.org/10.1017/S1431927613001530 MIMIF7 1431-9276 (2013). Google Scholar

200. 

H.-T. Lim and V. M. Murukeshan, “A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications,” Sci. Rep., 6 24044 https://doi.org/10.1038/srep24044 (2016). Google Scholar

201. 

M. Ohsaki et al., “Hyperspectral imaging using flickerless active LED illumination,” Proc. SPIE, 10338 103380Z https://doi.org/10.1117/12.2266765 (2017). Google Scholar

202. 

L. Di Cecilia, F. Marazzi and L. Rovati, “A hyperspectral imaging system for the evaluation of the human iris spectral reflectance,” Proc. SPIE, 10045 100451S https://doi.org/10.1117/12.2252184 (2017). Google Scholar

203. 

L. Di Cecilia, F. Marazzi and L. Rovati, “Hyperspectral imaging of the human iris,” Diffuse Opt. Spectrosc. Imaging VI, 10412 104120R https://doi.org/10.1117/12.2286173 (2017). Google Scholar

204. 

S. Rees and G. Dobre, “Maximum permissible exposure of the retina in the human eye in optical coherence tomography systems using a confocal scanning laser ophthalmoscopy platform,” Proc. SPIE, 8925 89250N https://doi.org/10.1117/12.2044819 (2014). Google Scholar

205. 

B. Yan et al., “Maintaining ocular safety with light exposure, focusing on devices for optogenetic stimulation,” Vis. Res., 121 57 –71 https://doi.org/10.1016/j.visres.2016.01.006 VISRAM 0042-6989 (2016). Google Scholar

206. 

L. Ma and B. Fei, “Comprehensive review of surgical microscopes: technology development and medical applications,” J. Biomed. Opt., 26 (1), 010901 https://doi.org/10.1117/1.JBO.26.1.010901 JBOPFO 1083-3668 (2021). Google Scholar

207. 

J. M. Amigo and S. Grassi, Data Handling in Science and Technology, 17 –34, Elsevier (2019). Google Scholar

208. 

T. W. Sawyer, A. S. Luthman and S. E. Bohndiek, “Evaluation of illumination system uniformity for wide-field biomedical hyperspectral imaging,” J. Opt., 19 (4), 045301 https://doi.org/10.1088/2040-8986/aa6176 JOOPDB 0150-536X (2017). Google Scholar

209. 

S. R. Patel et al., “A prototype hyperspectral system with a tunable laser source for retinal vessel imaging,” Invest. Ophthalmol. Vis. Sci., 54 (8), 5163 –5168 https://doi.org/10.1167/iovs.13-12124 (2013). Google Scholar

210. 

M. E. Martin et al., “Development of an advanced hyperspectral imaging (HSI) system with applications for cancer detection,” Ann. Biomed. Eng., 34 (6), 1061 –1068 https://doi.org/10.1007/s10439-006-9121-9 ABMECF 0090-6964 (2006). Google Scholar

211. 

V. V. Podlipnov et al., “Experimental determination of soil moisture on hyperspectral images,” Comput. Opt., 42 (5), 877 –884 https://doi.org/10.18287/2412-6179-2017-42-5-877-884 COOPE3 0955-355X (2018). Google Scholar

212. 

H.-N. Li et al., “Multi-spectral imaging using LED illuminations,” in 2012 5th Int. Congr. Image and Signal Process., 538 –542 (2012). Google Scholar

213. 

R. Shrestha and J. Y. Hardeberg, “How are LED illumination based multispectral imaging systems influenced by different factors?,” in Int. Conf. Image and Signal Process., 61 –71 (2014). Google Scholar

214. 

N. Everdell et al., “Multispectral imaging of the ocular fundus using light emitting diode illumination,” Rev. Sci. Instrum., 81 (9), 093706 https://doi.org/10.1063/1.3478001 RSINAK 0034-6748 (2010). Google Scholar

215. 

M. B. Stuart et al., “Low-cost hyperspectral imaging with a smartphone,” J. Imaging, 7 (8), 136 https://doi.org/10.3390/jimaging7080136 (2021). Google Scholar

216. 

J. Fortuna and T. A. Johansen, “A lightweight payload for hyperspectral remote sensing using small UAVs,” in 2018 Workshop Hyperspectral Image and Signal Process.: Evol. in Remote Sens. (WHISPERS), 1 –5 (2018). Google Scholar

217. 

D. S. Jeon et al., “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics, 38 117 https://doi.org/10.1145/3306346.3322946 (2019). Google Scholar

218. 

K. A. Riihiaho, M. A. Eskelinen and I. Pölönen, “A do-it-yourself hyperspectral imager brought to practice with open-source Python,” Sensors, 21 (4), 1072 https://doi.org/10.3390/s21041072 SNSRES 0746-9462 (2021). Google Scholar

219. 

M. B. Henriksen et al., “Real-time corrections for a low-cost hyperspectral instrument,” in 2019 10th Workshop Hyperspectral Imaging and Signal Process.: Evol. in Remote Sens. (WHISPERS), 1 –5 (2019). Google Scholar

220. 

M. B. Stuart et al., “Low-cost hyperspectral imaging system: design and testing for laboratory,” Sensors, 20 (11), 3293 https://doi.org/10.3390/s20113293 SNSRES 0746-9462 (2020). Google Scholar

221. 

J. H. Frank et al., “A white light confocal microscope for spectrally resolved multidimensional imaging,” J. Microsc., 227 (3), 203 –215 https://doi.org/10.1111/j.1365-2818.2007.01803.x JMICAR 0022-2720 (2007). Google Scholar

222. 

K. Uto et al., “Development of a low-cost hyperspectral whiskbroom imager using an optical fiber bundle, a swing mirror, and compact spectrometers,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 9 (9), 3909 –3925 https://doi.org/10.1109/JSTARS.2016.2592987 (2016). Google Scholar

223. 

G. Rateni, P. Dario and F. Cavallo, “Smartphone-based food diagnostic technologies: a review,” Sensors, 17 (6), 1453 https://doi.org/10.3390/s17061453 SNSRES 0746-9462 (2017). Google Scholar

224. 

D. Erickson et al., “Smartphone technology can be transformative to the deployment of lab-on-chip diagnostics,” Lab Chip, 14 (17), 3159 –3164 https://doi.org/10.1039/C4LC00142G LCAHAM 1473-0197 (2014). Google Scholar

225. 

T. Laksanasopin et al., “A smartphone dongle for diagnosis of infectious diseases at the point of care,” Sci. Transl. Med., 7 (273), 273re1 https://doi.org/10.1126/scitranslmed.aaa0056 (2015). Google Scholar

226. 

K. E. McCracken et al., “Smartphone-based fluorescence detection of bisphenol A from water samples,” RSC Adv., 7 (15), 9237 –9243 https://doi.org/10.1039/C6RA27726H (2017). Google Scholar

227. 

L. Wang et al., “LeafSpec: an accurate and portable hyperspectral corn leaf imager,” Comput. Electron. Agric., 169 105209 https://doi.org/10.1016/j.compag.2019.105209 CEAGE6 0168-1699 (2020). Google Scholar

228. 

S. Berisha et al., “SIproc: an open-source biomedical data processing platform for large hyperspectral images,” Analyst, 142 (8), 1350 –1357 https://doi.org/10.1039/C6AN02082H ANLYAG 0365-4885 (2017). Google Scholar

229. 

S. Rattanavarin et al., “Handheld multispectral confocal microscope for cervical cancer diagnosis,” in 2012 Int. Conf. Opt. MEMS and Nanophotonics, 41 –42 (2012). Google Scholar

230. 

I. W. Jung et al., “2-D MEMS scanner for handheld multispectral dual-axis confocal microscopes,” J. Microelectromech. Syst., 27 (4), 605 –612 https://doi.org/10.1109/JMEMS.2018.2834549 JMIYET 1057-7157 (2018). Google Scholar

231. 

J. Yoon et al., “A clinically translatable hyperspectral endoscopy (HySE) system for imaging the gastrointestinal tract,” Nat. Commun., 10 1902 https://doi.org/10.1038/s41467-019-09484-4 NCAOBW 2041-1723 (2019). Google Scholar

232. 

M. Pilling and P. Gardner, “Fundamental developments in infrared spectroscopic imaging for biomedical applications,” Chem. Soc. Rev., 45 (7), 1935 –1957 https://doi.org/10.1039/C5CS00846H CSRVBR 0306-0012 (2016). Google Scholar

233. 

K. K. Tasche et al., “Definition of “close margin” in oral cancer surgery and association of margin distance with local recurrence rate,” JAMA Otolaryngol.–Head Neck Surg., 143 (12), 1166 –1172 https://doi.org/10.1001/jamaoto.2017.0548 (2017). Google Scholar

234. 

Z. Apalla et al., “Skin cancer: epidemiology, disease burden, pathophysiology, diagnosis, and therapeutic approaches,” Dermatol. Ther. (Heidelb.), 7 (1), 5 –19 https://doi.org/10.1007/s13555-016-0165-y (2017). Google Scholar

235. 

F. Bray et al., “Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: Cancer J. Clin., 68 (6), 394 –424 https://doi.org/10.3322/caac.21492 (2018). Google Scholar

236. 

M. Elbaum et al., “Automatic differentiation of melanoma from melanocytic nevi with multispectral digital dermoscopy: a feasibility study,” J. Am. Acad. Dermatol., 44 (2), 207 –218 https://doi.org/10.1067/mjd.2001.110395 JAADDB 0190-9622 (2001). Google Scholar

237. 

J. Hewett et al., “Fluorescence detection of superficial skin cancers,” J. Mod. Opt., 47 (11), 2021 –2027 https://doi.org/10.1080/09500340008232454 JMOPEW 0950-0340 (2000). Google Scholar

238. 

E. Song et al., “Paired comparison of the sensitivity and specificity of multispectral digital skin lesion analysis and reflectance confocal microscopy in the detection of melanoma in vivo: a cross-sectional study,” J. Am. Acad. Dermatol., 75 (6), 1187 –1192.e2 https://doi.org/10.1016/j.jaad.2016.07.022 JAADDB 0190-9622 (2016). Google Scholar

239. 

S. Marur and A. A. Forastiere, “Head and neck cancer: changing epidemiology, diagnosis, and treatment,” Mayo Clin. Proc., 83 (4), 489 –501 https://doi.org/10.4065/83.4.489 (2008). Google Scholar

240. 

L. Ma et al., “Hyperspectral microscopic imaging for the detection of head and neck squamous cell carcinoma on histologic slides,” Proc. SPIE, 11603 116030P https://doi.org/10.1117/12.2581970 (2021). Google Scholar

241. 

L. Sun et al., “Diagnosis of cholangiocarcinoma from microscopic hyperspectral pathological dataset by deep convolution neural networks,” Methods, 202 22 –30 https://doi.org/10.1016/j.ymeth.2021.04.005 MTHDE9 1046-2023 (2022). Google Scholar

242. 

S. Ortega et al., “Detecting brain tumor in pathological slides using hyperspectral imaging,” Biomed. Opt. Express, 9 (2), 818 –831 https://doi.org/10.1364/BOE.9.000818 BOEICL 2156-7085 (2018). Google Scholar

243. 

M. G. Sowa et al., “Review of near-infrared methods for wound assessment,” J. Biomed. Opt., 21 (9), 091304 https://doi.org/10.1117/1.JBO.21.9.091304 JBOPFO 1083-3668 (2016). Google Scholar

244. 

K. M. Patel et al., “Use of intraoperative indocyanin-green angiography to minimize wound healing complications in abdominal wall reconstruction,” J. Plast. Surg. Hand Surg., 47 (6), 476 –480 https://doi.org/10.3109/2000656X.2013.787085 (2013). Google Scholar

245. 

C. Holm et al., “Intraoperative evaluation of skin-flap viability using laser-induced fluorescence of indocyanine green,” Br. J. Plast. Surg., 55 (8), 635 –644 https://doi.org/10.1054/bjps.2002.3969 BJPSAZ 0007-1226 (2002). Google Scholar

246. 

J. T. Alander et al., “A review of indocyanine green fluorescent imaging in surgery,” Int. J. Biomed. Imaging, 2012 940585 https://doi.org/10.1155/2012/940585 (2012). Google Scholar

247. 

F. Lindahl, E. Tesselaar and F. Sjöberg, “Assessing paediatric scald injuries using laser speckle contrast imaging,” Burns, 39 (4), 662 –666 https://doi.org/10.1016/j.burns.2012.09.018 BURND8 0305-4179 (2013). Google Scholar

248. 

A. Rege et al., “In vivo laser speckle imaging reveals microvascular remodeling and hemodynamic changes during wound healing angiogenesis,” Angiogenesis, 15 (1), 87 –98 https://doi.org/10.1007/s10456-011-9245-x (2012). Google Scholar

249. 

W. Heeman et al., “Clinical applications of laser speckle contrast imaging: a review,” J. Biomed. Opt., 24 (8), 080901 https://doi.org/10.1117/1.JBO.24.8.080901 JBOPFO 1083-3668 (2019). Google Scholar

250. 

A. J. Deegan et al., “Optical coherence tomography angiography monitors human cutaneous wound healing over time,” Quant. Imaging Med. Surg., 8 (2), 135 https://doi.org/10.21037/qims.2018.02.07 (2018). Google Scholar

251. 

M. J. Cobb et al., “Noninvasive assessment of cutaneous wound healing using ultrahigh-resolution optical coherence tomography,” J. Biomed. Opt., 11 (6), 064002 https://doi.org/10.1117/1.2388152 JBOPFO 1083-3668 (2006). Google Scholar

252. 

L. Atiles et al., “Laser Doppler flowmetry in burn wounds,” J. Burn Care Rehabil., 16 (4), 388 –393 https://doi.org/10.1097/00004630-199507000-00003 (1995). Google Scholar

253. 

M. Dyson et al., “Wound healing assessment using 20 MHz ultrasound and photography,” Skin Res. Technol., 9 (2), 116 –121 https://doi.org/10.1034/j.1600-0846.2003.00020.x (2003). Google Scholar

254. 

S. C. Gnyawali et al., “High-resolution harmonics ultrasound imaging for non-invasive characterization of wound healing in a pre-clinical swine model,” PLoS One, 10 (3), e0122327 https://doi.org/10.1371/journal.pone.0122327 POLNCL 1932-6203 (2015). Google Scholar

255. 

J. Marotz et al., “Extended perfusion parameter estimation from hyperspectral imaging data for bedside diagnostic in medicine,” Molecules, 24 (22), 4164 https://doi.org/10.3390/molecules24224164 (2019). Google Scholar

256. 

A. Holmer et al., “Hyperspectral imaging in perfusion and wound diagnostics: methods and algorithms for the determination of tissue parameters,” Biomed. Eng./Biomed. Tech., 63 (5), 547 –556 https://doi.org/10.1515/bmt-2017-0155 (2018). Google Scholar

257. 

A. Holmer et al., “Oxygenation and perfusion monitoring with a hyperspectral camera system for chemical based tissue analysis of skin and organs,” Physiol. Meas., 37 (11), 2064 https://doi.org/10.1088/0967-3334/37/11/2064 PMEAE3 0967-3334 (2016). Google Scholar

258. 

J. M. Evans et al., “Pressure ulcers: prevention and management,” Mayo Clinic Proc., 70 (8), 789 –799 https://doi.org/10.4065/70.8.789 (1995). Google Scholar

259. 

J. H. Klaessens et al., “Non-contact tissue perfusion and oxygenation imaging using a LED based multispectral and a thermal imaging system, first results of clinical intervention studies,” Proc. SPIE, 8572 857207 https://doi.org/10.1117/12.2003807 (2013). Google Scholar

260. 

J. H. Klaessens et al., “Non-invasive skin oxygenation imaging using a multi-spectral camera system: effectiveness of various concentration algorithms applied on human skin,” Proc. SPIE, 7174 717408 https://doi.org/10.1117/12.808707 (2009). Google Scholar

261. 

G. S. Lazarus et al., “Definitions and guidelines for assessment of wounds and evaluation of healing,” Wound Repair Regener., 2 (3), 165 –170 https://doi.org/10.1046/j.1524-475X.1994.20305.x WREREU 1067-1927 (1994). Google Scholar

262. 

A. Nouvong et al., “Evaluation of diabetic foot ulcer healing with hyperspectral imaging of oxyhemoglobin and deoxyhemoglobin,” Diabetes Care, 32 (11), 2056 –2061 https://doi.org/10.2337/dc08-2246 DICAD2 0149-5992 (2009). Google Scholar

263. 

W. Jeffcoate et al., “Use of HSI to measure oxygen saturation in the lower limb and its correlation with healing of foot ulcers in diabetes,” Diabetic Med., 32 (6), 798 –802 https://doi.org/10.1111/dme.12778 DIMEEV 1464-5491 (2015). Google Scholar

264. 

L. Khaodhiar et al., “The use of medical hyperspectral technology to evaluate microcirculatory changes in diabetic foot ulcers and to predict clinical outcomes,” Diabetes Care, 30 (4), 903 –910 https://doi.org/10.2337/dc06-2209 DICAD2 0149-5992 (2007). Google Scholar

265. 

Q. Yang et al., “Investigation of the performance of hyperspectral imaging by principal component analysis in the prediction of healing of diabetic foot ulcers,” J. Imaging, 4 (12), 144 https://doi.org/10.3390/jimaging4120144 (2018). Google Scholar

266. 

D. Yudovsky, A. Nouvong and L. Pilon, “Hyperspectral imaging in diabetic foot wound care,” J. Diabetes Sci. Technol., 4 (5), 1099 –1113 https://doi.org/10.1177/193229681000400508 (2010). Google Scholar

267. 

C. J. Lee et al., “Quantitative results of perfusion utilising hyperspectral imaging on non-diabetics and diabetics: a pilot study,” Int. Wound J., 17 1809 –1816 https://doi.org/10.1111/iwj.13469 (2020). Google Scholar

268. 

E. DeHoog and J. Schwiegerling, “Fundus camera systems: a comparative analysis,” Appl. Opt., 48 (2), 221 –228 https://doi.org/10.1364/AO.48.000221 APOPAI 0003-6935 (2009). Google Scholar

269. 

N. Panwar et al., “Fundus photography in the 21st century: a review of recent technological advances and their implications for worldwide healthcare,” Telemed. e-Health, 22 (3), 198 –208 https://doi.org/10.1089/tmj.2015.0068 (2016). Google Scholar

270. 

M. W. Wintergerst et al., “Smartphone-based fundus imaging: where are we now?,” Asia-Pac. J. Ophthalmol., 9 (4), 308 –314 https://doi.org/10.1097/APO.0000000000000303 (2020). Google Scholar

271. 

E. Loskutova et al., “Macular pigment and its contribution to vision,” Nutrients, 5 (6), 1962 –1969 https://doi.org/10.3390/nu5061962 (2013). Google Scholar

272. 

A. Guduru et al., “Oxygen saturation of retinal vessels in all stages of diabetic retinopathy and correlation to ultra-wide field fluorescein angiography,” Invest. Ophthalmol. Vis. Sci., 57 (13), 5278 –5284 https://doi.org/10.1167/iovs.16-20190 (2016). Google Scholar

273. 

T. E. Yap et al., “Retinal correlates of neurological disorders,” Ther. Adv. Chronic Dis., 10 2040622319882205 https://doi.org/10.1177/2040622319882205 (2019). Google Scholar

274. 

X. Hadoux et al., “Non-invasive in vivo hyperspectral imaging of the retina for potential biomarker use in Alzheimer’s disease,” Nat. Commun., 10 4227 https://doi.org/10.1038/s41467-019-12242-1 NCAOBW 2041-1723 (2019). Google Scholar

275. 

H. Akbari et al., “Hyperspectral imaging and diagnosis of intestinal ischemia,” in 2008 30th Annu. Int. Conf. IEEE Eng. in Med. and Biol. Soc., 1238 –1241 (2008). Google Scholar

276. 

J. Yoon et al., “First experience in clinical application of hyperspectral endoscopy for evaluation of colonic polyps,” J. Biophotonics, 14 (9), e202100078 https://doi.org/10.1002/jbio.202100078 (2021). Google Scholar

277. 

D. M. Roblyer et al., “Multispectral optical imaging device for in vivo detection of oral neoplasia,” J. Biomed. Opt., 13 (2), 024019 https://doi.org/10.1117/1.2904658 JBOPFO 1083-3668 (2008). Google Scholar

278. 

R. Martin, B. Thies and A. O. Gerstner, “Hyperspectral hybrid method classification for detecting altered mucosa of the human larynx,” Int. J. Health Geogr., 11 (1), 21 https://doi.org/10.1186/1476-072X-11-21 (2012). Google Scholar

279. 

“Imec and XIMEA launch industrial grade hyperspectral camera solution addressing the high-quality standards for machine vision applications,” https://www.imec-int.com/en/press/imec-and-ximea-launch-industrial-grade-hyperspectral-camera-solution-addressing-high-quality (2020). Google Scholar

280. 

H. Holma et al., “Advances in hyperspectral LWIR pushbroom imagers,” Proc. SPIE, 8032 80320X https://doi.org/10.1117/12.884078 (2011). Google Scholar

281. 

G. Niezen, P. Eslambolchilar and H. Thimbleby, “Open-source hardware for medical devices,” BMJ Innov., 2 (2), 78 –83 https://doi.org/10.1136/bmjinnov-2015-000080 (2016). Google Scholar

282. 

J. Moilanen and T. Vadén, “3D printing community and emerging practices of peer production,” First Monday, 18 (8), https://doi.org/10.5210/fm.v18i8.4271 (2013). Google Scholar

283. 

S. Ortega et al., “Hyperspectral and multispectral imaging in digital and computational pathology: a systematic review,” Biomed. Opt. Express, 11 (6), 3195 –3233 https://doi.org/10.1364/BOE.386338 BOEICL 2156-7085 (2020). Google Scholar

284. 

A. London, I. Benhar and M. Schwartz, “The retina as a window to the brain: from eye research to CNS disorders,” Nat. Rev. Neurol., 9 (1), 44 –53 https://doi.org/10.1038/nrneurol.2012.227 (2013). Google Scholar

285. 

H. Fabelo et al., “In vivo hyperspectral human brain image database for brain cancer detection,” IEEE Access, 7 39098 –39116 https://doi.org/10.1109/ACCESS.2019.2904788 (2019). Google Scholar

286. 

R. Leon et al., “Non-invasive skin cancer diagnosis using hyperspectral imaging for in situ clinical support,” J. Clin. Med., 9 (6), 1662 https://doi.org/10.3390/jcm9061662 (2020). Google Scholar

287. 

S. Dontu et al., “Combined spectral-domain optical coherence tomography and hyperspectral imaging applied for tissue analysis: preliminary results,” Appl. Surf. Sci., 417 119 –123 https://doi.org/10.1016/j.apsusc.2017.03.175 ASUSEE 0169-4332 (2017). Google Scholar

288. 

R. Guay-Lord et al., “Combined optical coherence tomography and hyperspectral imaging using a double-clad fiber coupler,” J. Biomed. Opt., 21 (11), 116008 https://doi.org/10.1117/1.JBO.21.11.116008 JBOPFO 1083-3668 (2016). Google Scholar

289. 

S. Lee et al., “Multimodal imaging of laser speckle contrast imaging combined with mosaic filter-based hyperspectral imaging for precise surgical guidance,” IEEE Trans. Biomed. Eng., 69 (1), 443 –452 https://doi.org/10.1109/TBME.2021.3097122 IEBEAX 0018-9294 (2021). Google Scholar

290. 

I. J. Maybury et al., “Comparing the effectiveness of hyperspectral imaging and Raman spectroscopy: a case study on Armenian manuscripts,” Heritage Sci., 6 (1), 1 –15 https://doi.org/10.1186/s40494-018-0206-1 (2018). Google Scholar

Biography

Minh H. Tran is a PhD candidate in bioengineering at the University of Texas at Dallas. His research is in artificial intelligence, machine learning, and disease diagnosis.

Baowei Fei holds the Cecil H. and Ida Green Chair in Systems Biology Science and is a professor of bioengineering at the University of Texas at Dallas and a professor of radiology at UT Southwestern Medical Center. He is the director of the Quantitative BioImaging Laboratory. He is a fellow of the International Society for Optics and Photonics and a fellow of the American Institute for Medical and Biological Engineering.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Minh H. Tran and Baowei Fei "Compact and ultracompact spectral imagers: technology and applications in biomedical imaging," Journal of Biomedical Optics 28(4), 040901 (6 April 2023). https://doi.org/10.1117/1.JBO.28.4.040901
Received: 18 October 2022; Accepted: 28 February 2023; Published: 6 April 2023
Keywords: imaging systems; cameras; imaging spectroscopy; biomedical optics; biomedical applications; tunable filters; optical filters
