HyperColorization: Propagating spatially sparse noisy spectral clues for reconstructing hyperspectral images

Hyperspectral cameras face challenging spatial-spectral resolution trade-offs and are more affected by shot noise than RGB photos taken over the same total exposure time. Here, we present a colorization algorithm to reconstruct hyperspectral images from a grayscale guide image and spatially sparse spectral clues. We demonstrate that our algorithm generalizes to varying spectral dimensions for hyperspectral images, and show that colorizing in a low-rank space reduces compute time and the impact of shot noise. To enhance robustness, we incorporate guided sampling, edge-aware filtering, and dimensionality estimation techniques. Our method surpasses previous algorithms in various performance metrics, including SSIM, PSNR, GFC, and EMD, which we analyze as metrics for characterizing hyperspectral image quality. Collectively, these findings provide a promising avenue for overcoming the time-space-wavelength resolution trade-off by reconstructing a dense hyperspectral image from samples obtained by whisk or push broom scanners, as well as hybrid spatial-spectral computational imaging systems.


Introduction
Hyperspectral cameras (HSCs) have the potential to significantly improve performance in tasks such as object detection or classification by capturing images with a broader spectral range and higher spectral resolution [1,2]. Example applications include food quality control [3], detecting crop disease in agriculture [4], pigment mapping in art inspection [5], mineral classification in geology [6], astronomy [7], and tumor detection in biomedical engineering [8]. Many of these tasks call for tightly targeted interventions that require high spatial resolution, but in traditional HSCs there is a rigid trade-off between spatial and spectral resolution that cannot adjust to scene content. Additionally, spectral resolution divides light throughput across numerous spectral channels, so HSCs are more susceptible to shot noise than comparable RGB cameras at equal exposure times. These drawbacks may explain why HSCs have not seen widespread adoption despite their advantages in many important applications.
Traditional hyperspectral cameras acquire three-dimensional data cubes by sequentially capturing two-dimensional slices. In the case of pushbroom cameras, the capture occurs one spatial row at a time, employing a narrow slit and dispersing prism to cover all wavelengths simultaneously [9]. An alternative method involves capturing the entire scene wavelength by wavelength. While conventional systems typically rely on numerous narrowband filters [10], recent advancements, as demonstrated by Zhang et al. [11] and Feng et al. [12], reveal the feasibility of achieving high spectral resolution with a reduced number of broadband spectral filters in concert with an artificial-intelligence-driven reconstruction algorithm. However, these approaches still require a sacrifice in spatial or temporal resolution to capture the suggested channels, and may require retraining when used on new datasets.
Recent advances in hyperspectral imaging have introduced novel approaches, including compressed sensing and hybrid camera systems. Compressed sensing techniques involve reconstructing hyperspectral images (HSI) from fewer measurements than the reconstructed data cube contains.

Fig. 1. HyperColorization simplified pipeline with an example image from [30]. We use a grayscale image to guide a whisk/push broom scanner or a computational imager to collect spatially sparse spectral samples. These noisy samples are used to estimate the best colorization dimension. Note that we show spatially uniform sampling at a rate of 3% but can see additional performance gains from image-guided sampling. Finally, the sparse samples are densified using an optimization-based spectral sample propagation algorithm. The code is available at https://github.com/NUBIVlab/HyperColorization and includes demos, documentation of each figure, and spectral results for every pixel in each image shown.
Our contributions include:
• Proposal of an ideal dimensionality approximation algorithm that estimates the optimal rank of color space for effective colorization from noisy spectral samples.
• Introduction of grayscale-image-guided sampling methods that intelligently adjust the sampling frequency for whisk and push broom scanners or suggest sampling locations for computational imaging systems.
• Analysis of the trade-off between the number of measurements and the exposure time for each measurement within a fixed time budget, providing insights into optimizing data collection strategies.

Hyperspectral colorization using optimization
Our color propagation algorithm is based on the assumption that neighboring pixels with similar intensities have a high probability of sharing similar spectral responses. Our premise is directly analogous to the assumption of Levin et al. in [22], where they postulate that nearby pixels with similar gray levels should possess similar RGB colors. In Levin's work, color propagation was performed specifically on the U and V channels of the YUV color space. This deliberate choice allowed the propagation of colors while preserving the original luminance information encoded in the Y channel. However, perception-driven hyperspectral and multispectral color spaces are comparatively underdeveloped, limiting generalizability to HSIs. We show in Figure 2 that grayscale guide images generated by summing across spectral channels in the visible range meet this assumption for HSIs, even when the target spectra include or consist entirely of near-infrared bands. The correlation between the gray-level difference of neighboring guide image pixels and the L1 distance between their spectra in Fig. 2a (correlation: 0.99) confirms that grayscale guide images are informative for visible spectra. The persistence of this correlation when spectra extend into the near-infrared (NIR) range (400-1100 nm in Fig. 2b, correlation 0.97; 700-1100 nm in Fig. 2c, correlation 0.78) suggests a promising robustness to the spectral sensitivity used to generate guide images.

Our algorithm requires a grayscale guide image G ∈ R^(x×y) aligned with a spatially sparse data cube M ∈ R^(x×y×λ) that contains measured spectral clues. In the case of hybrid camera systems, data in M is collected by the spectral camera and G is captured by a grayscale camera. To simulate the measurement process, we sample spectral vectors from an HSI to obtain M, and we average the channels in the visible range to obtain G. We use bold r and s to indicate 2D pixel coordinates; therefore, M(r) and M(s) represent λ-dimensional vectors. To propagate spectral clues, we individually minimize the criterion below over all spectral channels c:

J(Ĥ(·,·,c)) = Σ_r α(r) [ Ĥ(r, c) − Σ_{s∈N(r)} w_{r,s} Ĥ(s, c) ]²    (1)

We minimize this equation over Ĥ ∈ R^(x×y×λ), which serves as the densified version of M. In the equation above, N(r) represents the set of pixels in the n-by-n patch surrounding r, where n = 3 for the results demonstrated in this paper. The value of α(r) is 2 if N(r) contains a spectral clue; otherwise, it is set to 1. Finally, w_{r,s} is the weight that encodes the similarity between pixels r and s in the grayscale image. We calculate w_{r,s} using one of the affinity functions described in [22]:

w_{r,s} ∝ exp( −(G(r) − G(s))² / (2σ_r²) )    (2)

In Eq. 2, σ_r² is the variance of the grayscale intensities in the patch surrounding r. If r and s have similar intensities in the grayscale image G, then w_{r,s} assumes a higher value, enforcing a similar spectral response between the two pixels.
Propagating spectral clues in this manner causes luminance information to be lost in the output, as HSIs do not have a dedicated Y channel to preserve this information as in the YUV color space. As a result, errors in overall brightness can arise across local patches in the image (Fig. 3b). However, the relative intensities in spectral response vectors remain accurate, so the problem can be fixed by normalizing the spectral vectors according to the grayscale image using Eq. 3:

Ĥ′(r, c) = Ĥ(r, c) · κ G(r) / Σ_{c′} Ĥ(r, c′)    (3)

where κ is a camera-pair-dependent parameter determined by the normalized spectral response functions q_g(λ) and q_s(λ) of the grayscale and spectral cameras. This equation assumes a linear relationship between the L1 norm of the spectral vector and the intensity of the grayscale image. Fig. 3a shows the simulated guide image used to renormalize the original reconstruction in panel b (31 channels projected to RGB for visualization), producing the corrected reconstruction shown in panel c.

Traditional methods like bilateral or edge-aware filtering are popular for handling noisy images. However, applying a bilateral filter after color propagation is ineffective because propagation spreads each spectral sample across a large area, creating spatially consistent noise. Instead, a better approach is to average spatially close spectral samples before color propagation. We perform this averaging with edge-aware filtering to prevent color bleed:

M′(r) = E(r) M(r) + (1 − E(r)) · (1/|N_m(r)|) Σ_{s∈N_m(r)} M(s)    (4)

Here, E ∈ [0, 1] is the blurred result of Canny edge detection on the grayscale image with a normalized Gaussian filter; it acts as a weight that captures the spatial proximity of r to an edge, and N_m(r) is the neighborhood of spectral measurements in the vicinity of r, which includes the nearest |N_m(r)| pixels. We show results using a 21 × 21 pixel neighborhood and a 31 × 31 Gaussian kernel with variance 11.
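The renormalization step above reduces to a per-pixel rescaling. A minimal sketch, assuming the cube and guide are already in matched linear units with the camera-pair gain folded into a single scalar kappa (the function name is our own):

```python
import numpy as np

def renormalize_luminance(hsi, guide, kappa=1.0, eps=1e-8):
    """Rescale each pixel's spectrum so its L1 norm is proportional to
    the grayscale guide value at that pixel.

    hsi:   (H, W, L) propagated hyperspectral cube.
    guide: (H, W) grayscale guide image.
    kappa: camera-pair-dependent gain (assumed known/calibrated).
    """
    l1 = hsi.sum(axis=2, keepdims=True)       # per-pixel L1 norm
    return hsi * (kappa * guide[..., None]) / (l1 + eps)
```

After this step, each pixel's spectral shape is unchanged but its overall brightness matches the guide image, which is what removes the patchy luminance artifacts of Fig. 3b.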

Hyperspectral colorization in a learned subspace
It is well known that the collection of distinct spectral responses in a natural image occupies a low-rank space, and there are many approaches to represent HSIs using low-dimensional models [32][33][34][35]. In this paper, we follow the results of Chakrabarti et al. in [32] and Lee et al. in [35], treating a hyperspectral image as a 2D matrix H ∈ R^(p×λ), where p is the number of pixels and λ is the dimension of the spectral vector of each pixel. By applying singular value decomposition (SVD), we can extract λ singular vectors that span R^λ. Fig. 4 displays the singular vectors corresponding to the most significant singular values learned from the CAVE dataset [36].
Treating these singular vectors as an orthonormal basis, we can project our hyperspectral image onto the top k < λ vectors, effectively reducing the number of channels. This compressed representation of the hyperspectral data can be viewed as a learned color space, and we will show that the HyperColorization algorithm can work without loss of fidelity in the low-dimensional space, given that a proper value for k is chosen.
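The basis learning and projection can be sketched in a few lines of numpy (function names are ours; the released code may organize this differently):

```python
import numpy as np

def learn_basis(train_hsi_list, k):
    """Learn a k-dimensional spectral basis from training HSIs via SVD.

    Each HSI is (H, W, L); all pixels are stacked into a (P, L) matrix
    and the top-k right-singular vectors are returned as a (k, L) basis.
    """
    X = np.concatenate([h.reshape(-1, h.shape[-1]) for h in train_hsi_list])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]

def project(hsi, basis):
    """Project an (H, W, L) cube onto the learned subspace and back.

    Returns the (H, W, k) coefficients (the 'learned color space') and
    the (H, W, L) rank-k reconstruction.
    """
    coeffs = hsi @ basis.T
    return coeffs, coeffs @ basis
```

Colorization is then run on the k coefficient channels instead of the λ spectral channels, and the result is mapped back with the same basis.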
Fig. 4. Singular vectors corresponding to the largest 6 singular values learned from the CAVE dataset [36] and their RGB projections for coefficients from -15 to +15.Note that RGB channels have been allowed to saturate.
In the presence of noise, the choice of dimensionality k for HyperColorization becomes critical to performance. By propagating spectral clues in a lower dimension, we not only reduce computational costs but also remove noise components that are orthogonal to the subspace spanned by the selected vectors, thereby increasing the signal-to-noise ratio (SNR). However, if k is chosen too small, the singular vectors will underfit the data and cause a drop in HyperColorization accuracy. On the other hand, if k is chosen too large, the projection may "overfit" to the measurement, preserving shot noise. The optimal reconstruction dimensionality depends on the intrinsic dimensionality of the scene and the level of noise present. Accurately estimating this dimensionality before colorization is crucial.
To estimate the optimal reconstruction dimensionality, we have devised a method that leverages noisy spectral samples and our basis vectors. As depicted in Fig. 5, we make the following observations: (1) As the noise levels increase, the minimum variance of projections onto our learned color channels also tends to increase. (2) The location of the 'elbow' points, as defined in [37], gradually shifts towards lower dimensions with greater noise. Both phenomena can be attributed to the white characteristic of Poisson noise, which evenly distributes the noise power across dimensions. These two features (minimum and elbow point of the explained variance curve) can be used to predict the reconstruction dimension of highest performance.
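These two features can be computed directly from the noisy clues. The sketch below assumes one common 'elbow' definition (the point furthest from the chord joining the curve's endpoints); the definition in [37] may differ in detail.

```python
import numpy as np

def explained_variance(clues, basis):
    """Variance of noisy spectral clues projected on each basis vector.

    clues: (N, L) measured spectra; basis: (K, L) rows = singular vectors.
    Returns a length-K curve whose minimum rises with noise level.
    """
    proj = clues @ basis.T
    return proj.var(axis=0)

def elbow(curve):
    """Index of the elbow: the point with the largest perpendicular
    distance to the straight line joining the curve's two endpoints."""
    x = np.arange(len(curve), dtype=float)
    y = np.asarray(curve, dtype=float)
    p0 = np.array([x[0], y[0]])
    p1 = np.array([x[-1], y[-1]])
    d = (p1 - p0) / np.linalg.norm(p1 - p0)   # unit chord direction
    vecs = np.stack([x, y], axis=1) - p0
    dist = np.abs(vecs[:, 0] * d[1] - vecs[:, 1] * d[0])
    return int(dist.argmax())
```

The curve minimum and the elbow index are then fed to the regression model described below to pick the colorization dimension.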
In Fig. 6, we performed a grid search to determine the best colorization dimensionality at each exposure time over the image from [30] shown in Fig. 1. To quantify reconstruction quality, we used earth mover's distance (EMD) because, compared to the other quality metrics also shown in Fig. 6, it offers two advantages. First, it scales more linearly with the log of exposure time, avoiding abrupt transitions and plateaus (compare purple lines across subfigures). Second, it retains spectral diversity even under extreme noise (compare 5 vs. 2 basis vectors at the lowest exposure time). Generalizing from this example (see Fig. S1-3 for more), we trained a second-order polynomial regression model to predict the best-EMD reconstruction dimension using the CAVE dataset [36]. The predicted dimensionality is shown in blue and repeated across subfigures.
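For reference, the per-pixel EMD used in this grid search can be computed with scipy's 1-D Wasserstein distance by treating each normalized spectrum as a distribution over wavelength. A minimal sketch (the function name and looping strategy are ours):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def mean_spectral_emd(gt, rec, wavelengths, eps=1e-12):
    """Mean per-pixel earth mover's distance between ground-truth and
    reconstructed spectra.

    gt, rec: (H, W, L) cubes; wavelengths: (L,) band centers in nm.
    Each spectrum is normalized to unit mass before comparison.
    """
    g = gt.reshape(-1, gt.shape[-1])
    r = rec.reshape(-1, rec.shape[-1])
    out = np.empty(len(g))
    for i in range(len(g)):
        gw = g[i] / (g[i].sum() + eps)
        rw = r[i] / (r[i].sum() + eps)
        out[i] = wasserstein_distance(wavelengths, wavelengths, gw, rw)
    return out.mean()
```

Because the distance is measured along the wavelength axis, a spectrum shifted by one 10 nm band incurs an EMD of 10, which is what makes the metric degrade smoothly rather than in jumps.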

Grayscale guided sampling for push and whisk broom cameras
Images of natural scenes often feature large patches with low-frequency brightness changes and consistent color spectra, alongside localized image patches with high-frequency spatial details and more diverse color spectra.When a grayscale image of the scene is available before spectral sampling, spatial sampling patterns can be adapted to prioritize high-variance patches.
During color propagation, spectral samples compete to fill the surrounding region. When the sampling pattern is uniform, the distance between samples is equal, ensuring fair competition between them. If our sampling algorithm does not preserve this uniformity as much as possible, some samples may overflow their true extent by not being challenged by other samples. Consequently, segmentation-based sampling algorithms, such as the one discussed in [15], are not well-suited for our color propagation algorithm.
Our adaptive sampling algorithm considers the diversity of gray levels and local spatial features to determine whether a local patch in the image might be rich in spectral characteristics. For uniform sampling with a push broom scanner, we regularly sample every k = 1/r_s rows, where r_s is the sampling ratio, giving equal weight to each row. In contrast, our adaptive sampling method for push broom scanners assigns a weight to each row based on two features. The first feature is the number of unique gray levels nearby in a reduced-bit-depth representation of the grayscale image. The second feature is the number of 'good features to track' [38] in the vicinity in the grayscale image. We scale and normalize these features so that the minimum value is approximately 0.1 and the mean is 1, and combine them using weighted averaging. As we traverse the image, we accumulate the weight and take a measurement when the accumulated value surpasses the threshold k. Weights are adjusted according to

w′ = Γ(w),  where Γ(x) := 0.1 + 0.9 x^γ,

with γ = 0.7 for the results reported in this paper, based on a grid search. Figure 8a shows a posterized guide image, with weights shown in panel b and guided sample locations in c. The insets in Fig. 8d show an example location where guided sampling leads to a more accurate reconstruction compared to uniform sampling. Additional sampling examples for push and whisk broom sensors can be seen in Fig. S4-6.
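The row-scheduling logic above can be sketched as a simple accumulator. This illustration takes the per-row weights as given (computing them would additionally need the gray-level-diversity and feature-count scores described above), and the function name is ours:

```python
import numpy as np

def guided_row_sampling(row_weights, rate):
    """Select scan rows by accumulating per-row weights.

    row_weights: iterable of non-negative weights, normalized to mean ~1.
    rate: desired sampling ratio (e.g. 0.25 samples one row in four on
    average). A row is sampled whenever the accumulator crosses 1/rate,
    so heavy rows are sampled densely and flat regions sparsely.
    """
    thresh = 1.0 / rate
    acc = 0.0
    rows = []
    for i, w in enumerate(row_weights):
        acc += w
        if acc >= thresh:
            rows.append(i)
            acc -= thresh
    return rows
```

With all weights equal to 1 this degenerates to uniform sampling; boosting a row's weight pulls samples toward it while the carried-over remainder keeps the overall budget at the requested rate.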
The adaptive sampling method for whisk broom scanners extends the algorithm for push broom scanners. Once the rows for sampling are chosen, we assign weights to each pixel along those rows based on the two features described above. We accumulate the weights on each pixel and take a measurement when the sum surpasses a threshold determined by the desired sampling rate.

Results
We tested our algorithm in simulation using the five datasets in Table 1, assuming a Poisson-Gaussian noise model when simulating the grayscale image and the HSI. Specifically, the value at each pixel of the simulated grayscale image G(r) is

G(r) = Pois( β t Σ_λ H*(r, λ) ) + N(μ, σ²),

and the pixel value of the simulated HSI H(r, λ) is

H(r, λ) = Pois( β t H*(r, λ) ) + N(μ, σ²),

where H*(r, λ) is the HSI measurement from the dataset that we use as ground truth. The coefficient β is the brightness adjustment factor, which we set to 9.6 × 10^7 photons per full grayscale range. This approximately corresponds to the brightness level of an office setting [40] using a FLIR grayscale camera [41]. The exposure time t was set by each experiment. Finally, we set the parameters of the Gaussian noise to μ = 0 and σ = 0.1. The metrics we report in our experiments include peak signal-to-noise ratio (PSNR), goodness factor coefficient (GFC), spectral similarity value (SSV), and SSIM [42]. Additionally, we propose using earth mover's distance (EMD) [43], or "Wasserstein distance," to compare ground-truth and reconstructed spectra at each pixel, averaged over the image. This distribution-comparison metric provides a useful assessment of hyperspectral accuracy, particularly when spatial reconstruction errors are negligible.
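This measurement model can be simulated in a few lines. The sketch below is our own reading of the setup: the ground-truth cube is assumed normalized so that a value of 1 corresponds to the full grayscale range (β photons), and the Gaussian read noise is applied in those same normalized units, which is an assumption.

```python
import numpy as np

def simulate_measurement(hsi_gt, t, beta=9.6e7, mu=0.0, sigma=0.1, rng=None):
    """Simulate Poisson-Gaussian noisy grayscale and HSI measurements.

    hsi_gt: (H, W, L) ground-truth cube in [0, 1] normalized units.
    t: exposure time; beta: photons per full grayscale range.
    Returns (gray, hsi), both back in normalized units.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    scale = beta * t
    clean = np.clip(hsi_gt, 0, None)
    # spectral measurement: per-band shot noise plus Gaussian read noise
    hsi = (rng.poisson(scale * clean).astype(float) / scale
           + rng.normal(mu, sigma, clean.shape))
    # grayscale guide: photons summed over bands before detection
    gsum = clean.sum(axis=-1)
    gray = (rng.poisson(scale * gsum).astype(float) / scale
            + rng.normal(mu, sigma, gsum.shape))
    return gray, hsi
```

Shorter exposure times t shrink the Poisson rate and so raise the relative shot noise, which is the knob used in the fixed-time-budget experiments below.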

Sampling patterns
We conducted experiments using random, uniform, and guided sampling patterns for both push and whisk broom scanners. Table 2 compares the performance of uniform and guided sampling patterns at a fixed sampling ratio; see Table S1 to compare results of all sampling patterns at various sampling ratios.

Imaging with a fixed time budget
When utilizing scanner-type imaging modalities within a limited timeframe, a balance must be struck between the number of samples acquired and the exposure time per sample. Increasing the sampling ratio improves the ability to capture finer details but leads to higher Poisson noise per sample. This is what allows us to break the traditional trade-offs of these scanning modalities: lowering noise on a subset of spectral samples and regaining spatial resolution through the guide image. Table 3 shows the advantage of reconstructing HSIs from 10% of pixels rather than spreading the same total exposure time across all pixels. Conversely, reducing the number of samples risks missing colored regions entirely. Our framework does not inherently determine the optimal sampling ratio for a specific time budget and scene structure. However, based on insights from our data, we provide a practical guideline, depicted in Fig. 9. Examining Fig. 9 reveals that the best strategy is to lean towards capturing more samples at the expense of more shot noise per sample, because our framework offers several tools to deal with noise, including dimensionality reduction and edge-aware filtering. However, it is crucial to acknowledge a fundamental limitation: colors of regions that were never sampled cannot be recovered. Undersampling an image is only justifiable when our primary interest lies in preserving lower-frequency regions. The bulge at the lower end of the violin plots indicates that, even when the image is undersampled, most of its content remains accurately colorized. These trends are confirmed on images from the Harvard, KAIST, and CAVE datasets shown in Fig. S7-9.

Fig. 9. HyperColorization has a per-image optimal sampling ratio and favors oversampling. We show here the histograms of errors on the 400 × 400 pixel scale Bear & Fruit image [30] with a total time budget of 5 seconds under guided whisk broom sampling. Notice that as the sampling ratio (top x-axis) increases, the exposure time per pixel (bottom x-axis) decreases. Violin plots represent the distribution of error across pixels. While the optimal sampling ratio varies from image to image, spatial undersampling is a more serious problem than the increased photon noise due to spatial oversampling.

Comparison to the state of the art
In our study, we compared HyperColorization with other hybrid camera systems and a snapshot technique. Among the hybrid techniques, SASSI [15] and Cao et al. [16] use RGB cameras for guidance and grayscale sensors with dispersing prisms for spectral sample collection. To isolate the measurements on the image sensor, SASSI employs a spatial light modulator (SLM) while Cao et al. use a static mask. Both then densify the collected spectral samples using their respective algorithms under the guidance of the captured RGB image.
HyperColorization is designed to densify spectral samples from arbitrary imaging modalities. Besides whisk and push broom scanners, our algorithm can be seamlessly integrated into hybrid spectral imagers to enhance their color propagation accuracy. The results presented in Table 3 demonstrate that HyperColorization outperforms the spatial densification algorithms employed by SASSI and Cao et al. while maintaining a lower computational cost than Cao et al. Additionally, see Fig. 10 to compare reconstructions of hybrid architectures over two images from the Harvard and KAIST datasets in the visible band, and see Fig. 11 to compare reconstructions over two images from the ICVL dataset in the visible and NIR bands. Finally, see Figs. 12 and S10 for results on a color checker. HyperColorization boosts the performance of traditional pushbroom sensors by allowing targeted exposure of color samples, exceeding traditional spatial-spectral resolution trade-offs by allocating a short time budget to a smaller number of pixels before recovering spatial detail with a grayscale guide image. HyperColorization also outperforms the spectral sample propagation techniques introduced in the hybrid approaches SASSI [15] and Cao et al. [16], with HyperColorization results reported here under uniform whisk broom sampling.

Another important class of spectral imagers is snapshot cameras. We benchmarked our technique against Choi et al. [39], who used a coded aperture snapshot spectral imager. In Choi et al.'s approach, the spectral image is reconstructed from a single image captured through a dispersing prism and a coded aperture. The inverse problem solved by such cameras is highly ill-posed; however, they simplify the optical system and the data collection process. Due to the difficulty of the inversion, Choi et al.'s reconstruction takes around 12 hours, which is significantly longer than the hybrid camera systems previously discussed. Fig. 13 shows an example comparison of HyperColorization and Choi et al., demonstrating our sharper recovery of edges and texture while also obtaining more faithful spectral curves.

Conclusion
We presented a novel hyperspectral imaging framework called HyperColorization that enables traditional hyperspectral cameras, such as whisk and push broom scanners, to break the time-space-wavelength resolution trade-off by 'colorizing' a grayscale image using spatially sparse spectral clues. We demonstrated that colorization within a learned subspace yields superior results in the presence of noise, and that the optimal dimensionality of this subspace can be estimated directly from noisy spectral measurements. Grayscale-image-guided sampling algorithms offer slight improvements over uniform sampling patterns. Finally, our analysis of the trade-off between sampling ratio and exposure time per pixel suggests that prioritizing higher sampling ratios is more advantageous, as our framework provides effective tools for managing noise but lacks the ability to recover regions that were never sampled.
Next steps in improving HyperColorization involve refining grayscale-image-guided sampling algorithms, integrating spatial basis models into the colorization process, enhancing algorithm robustness for varied sampling scenarios, and devising heuristics to determine optimal sample numbers. Another interesting avenue is to venture beyond the visible spectrum, particularly into the NIR and short-wave infrared (SWIR) range. In these bands, we are not bound by the low-dimensional subspace seen in the visible range, creating more challenges but also greater rewards. In these spectral bands, there is potential to develop spectral densification algorithms that support imaging modalities capable of recovering from compressed or sparse representations along the spectral dimension, such as those discussed in [11,12]. This would potentially decrease the number of measurements they need to take for accurate reconstructions.
Computational imaging systems have enabled the development of snapshot hyperspectral imagers. Despite their low data collection times, the computational decoding of the captured data can still be time-consuming, posing practical limitations. While our framework has been primarily discussed in the context of whisk and push broom scanners, a key advantage of our approach lies in its adaptability to other imaging modalities as a post-processing step. In future work, we aim to build on this framework to decrease the computational cost and mathematical complexity of the decoding process in computational imagers. We hope to achieve this by alleviating the ill-posed nature of the decoding step: isolating the measurements would allow the intricate inversion task to be divided into more manageable sub-problems.

Fig. 2 .
Fig. 2. Luminance differences between neighboring pixels carry information about spectral differences. The absolute difference between values on grayscale guide images (simulated by summing across visible spectral channels) is correlated with the L1 distance between their spectra. This correlation is strongest in (a), where the spectral differences are taken along the same wavelength range as the guide image. The relationship weakens in (b), where spectral bands not included in the guide image are considered, and is weaker still in (c), where there is no spectral overlap between the guide image's spectral bands and our target spectral bands. Data shown here is drawn from the dustbin and bulb images of the ICVL dataset [31], which includes 519 bands from 390 to 1043 nm.

Fig. 3 .
Fig. 3. Luminance adjustment. (a): Grayscale guide image, created by summing across wavelengths, with red dots indicating the uniformly spaced locations of spectral clues (shown over 2×2 pixels for visibility). (b): Color propagation does not preserve luminance information on hyperspectral images. (c): Renormalization of image (b) based on image (a) using Eq. 3 removes these artifacts. Image from [32] contains 31 channels from 400 to 700 nm and has been projected to RGB here for visualization.

Fig. 5 .
Fig. 5. The relationship between noise level and the projection of hyperspectral clues onto our learned basis. The elbow location and vertical offset of these curves vary with noise and can be used to predict the highest-performing reconstruction dimensionality.

Fig. 6 .
Fig. 6. Reconstruction error across metrics, exposure times, and reconstruction dimensions for the sample Bear & Fruit image. (a) We propose EMD as a useful metric for hyperspectral quality due to the well-behaved purple curve. Generalizing to the full CAVE dataset, we can approximately predict (blue) this curve using the features described in Fig. 5. (b) Other error metrics show sudden jumps and plateaus in their best-performing dimension, and suggest very low dimensions at short exposure times.

Fig. 7
Fig. 7 illustrates the effects of reconstruction dimensionality on samples with a simulated exposure time of 40 s per pixel. A high-dimensional reconstruction (27D, shown in yellow) overfits to the noise spikes seen in the measurement (dotted black), while a low-dimensional reconstruction (3D, shown in magenta) underfits the data and excessively smooths the spectra. The correct dimension (9D, shown in blue) more closely matches the ground truth spectra (solid black).

Fig. 7 .
Fig. 7. The impact of reconstruction dimension with a 1% sampling ratio. In this example, 3-dimensional reconstruction underfits the data, while 27-dimensional reconstruction overfits to noise. The optimal reconstruction dimensionality is 9. Image [30] contains 31 visible bands, projected onto RGB for visualization.

Fig. 8 .
Fig. 8. Guided sampling improves performance. (a) A posterized guide image from [30] is shown with line-by-line scores for the features that guide sampling (b). (c) Our adaptive algorithm adjusts the sampling ratio so that regions expected to have higher color diversity receive a higher sample density. (d) An example inset where uniform sampling fails to capture enough spectral samples to colorize accurately. 31-channel data is projected onto RGB for visualization.

Fig. 10 .
Fig. 10. Reconstruction results on example images in the visible range from the Harvard [32] (top) and KAIST [39] (bottom) datasets for two different noise levels simulating different exposure times. We compare to two hybrid architecture methods, SASSI and Cao et al., with parameters set to maximize PSNR (SASSI rank 1; Cao et al. filter sizes 31 with spatial variance 21 and spectral variance 0.03, set by grid search), which still result in noisy and desaturated spectra and spatial artifacts such as noise and pixelation (orange insets) or color blotches and edge artifacts (blue insets).

Fig. 11 .
Fig. 11. Reconstruction results of hybrid architectures on two example images from the ICVL dataset with 3% guided sampling. These examples have 519 channels in the visible and NIR. HyperColorization results are obtained by projecting the noisy images onto the top 31 singular vectors trained from the same images, with guide images generated from the full spectral range including NIR, conditions likely to provide an upper bound on HyperColorization performance at this task. Hyperparameters of SASSI are left at their default values, and hyperparameters of Cao et al. are set to the same values reported in Fig. 10. Noise suppression causes SASSI to lose saturation, while Cao et al.'s results exhibit spatial artifacts.

Fig. 12 .
Fig. 12. Comparison of hybrid camera systems on a 31-channel, 420-720 nm image of a color checker from the KAIST dataset [39]. (a) Projections onto RGB show spatial noise reduction in HyperColorization, with PSNR and EMD reported for hyperspectral data over all pixels. (b) Spectra from sample patches, showing noisy measurements in the yellow patch for Cao et al. because their algorithm does not modify the initial color samples. (c) These noisy locations are visible across methods in the yellow patch inset. See Figure S10 for spectral results in all patches, which are summarized numerically in (d).

Fig. 13 .
Fig. 13. Comparison of Choi et al. [39] and HyperColorization (3% guided whisk broom sampling, no noise added). HyperColorization reconstructs higher-quality HSIs with dramatically less wall time (on the order of seconds vs. hours), but requires a more complicated optical system that involves two cameras.

Table 1 .
Datasets used for simulations.

Table 3 .
Comparison of HyperColorization to classic and hybrid camera systems using 21 images selected from Harvard, KAIST and CAVE datasets.