Comparison of portable Vis-NIR hyperspectral imaging and snapscan SWIR hyperspectral imaging for the evaluation of meat authenticity

Highlights
• Portable Vis-NIR and SWIR hyperspectral imaging were used for meat authenticity.
• Linear and non-linear chemometric methods were used to analyze the data.
• The best results were achieved with Vis-NIR HSI combined with non-linear methods.
• Vis-NIR and SWIR HSI may be used for online/in-line routine screening of meat authenticity.


Introduction
Meat authenticity and meat fraud detection are challenging for scientists, industries, regulatory agencies, and consumers. Meat authenticity is nevertheless important for various social and health issues, in particular for consumers following a religious diet (halal, kosher, etc.) and those who suffer from allergies (Dashti et al., 2021; Kamruzzaman, 2021). The authenticity of food products also affects the credibility of companies. Different analytical methods, including polymerase chain reaction (PCR) (Kumar, 2021), chromatographic and mass spectrometry (MS) techniques (Stachniuk et al., 2021), and immunoassay-based methods (Mandli et al., 2018), have been developed for meat authentication. Current approaches for meat fraud detection have various limitations: they are time-consuming, laborious, expensive, and destructive, and they usually require highly skilled technicians (Dashti et al., 2021; Kamruzzaman, 2021; Kamruzzaman et al., 2016). Hence, there is a need to develop new non-invasive, rapid and accurate detection techniques, and especially to design online/portable detection instruments that can be applied throughout the food chain. Hyperspectral imaging (HSI) integrates spectroscopic and imaging techniques in one system. This combination simultaneously provides physical and spatial characteristics of the sample under analysis, such as shape, size, appearance and color, and intrinsic chemical and molecular information through spectral analysis (Wu & Sun, 2013a, 2013b). Among the available technologies, HSI is relatively new and offers a promising, non-destructive alternative that combines speed, accuracy, and reliability for different production processes (Siche et al., 2016).
Recently, HSI techniques have received much attention for non-destructive food analysis, including meat and meat products. They have been successfully implemented for meat quality assessment (Jia et al., 2022; Kamruzzaman, ElMasry, et al., 2013; Zhang et al., 2019), detection of offal adulteration in ground meat (Grundy et al., 2022; Jiang et al., 2021; Kamruzzaman et al., 2014) and meat speciation (Al-Sarayreh et al., 2017; Jiang et al., 2019; Kamruzzaman et al., 2012; Kamruzzaman, Sun, et al., 2013; Rady & Adedeji, 2020).
Previous studies have mainly used benchtop hyperspectral instruments for meat authenticity applications. Considering the extent of meat adulteration and the need for rapid detection at various stages of the meat supply chain, there is an urgent need to develop non-destructive, fast and portable techniques for on-site analysis of meat authenticity.
The objectives of this study were: (a) to compare the practicability of two hyperspectral imaging systems with different wavelength ranges, a portable visible-near infrared (Vis-NIR, 400-1000 nm) system and a snapscan short wave infrared (SWIR, 1116-1670 nm) system, in combination with different linear and non-linear multivariate classification methods to authenticate different meat species (lamb, beef, chicken and pork); and (b) to quantify pork in lamb, beef and chicken samples with different linear and non-linear regression methods. Finally, this research identifies the most informative wavelengths (for these cameras) and the best multivariate methods for meat speciation, given the inherent features of portable and snapscan HSI.

Collection and preparation of meat samples
Lamb (n = 40), beef (n = 40), chicken (n = 40), and pork (n = 40) meat samples (160 in total) were purchased from local butchers and supermarkets in the Netherlands over four months in 2020 (Table S1). Species-specific PCR was used as a reference method to verify the species of the samples (Scholtens et al., 2017). This method was selected for its high sensitivity and relatively rapid analysis. Samples that tested negative were removed from the dataset.
Meat samples (approximately 300 g) were prepared by manually removing skin and visible fat with a kitchen knife; the meat was then cut into small cubes (2 × 2 × 2 cm) and ground separately using a meat grinder (Tristar VM-4210, Smartwares group, Tilburg, The Netherlands) with the coarse grinding disc (hole diameter 0.7 cm). After grinding, portions of the ground meat samples were mixed by hand. Mixtures were prepared by adding 2 %, 5 %, 10 %, 25 %, and 50 % pork (w/w) to lamb, beef, and chicken samples; homogeneity was checked visually, and the mixtures were prepared by randomization of samples (Table S2). Samples were vacuum-sealed, stored at −18.5 °C in the dark, and conditioned at room temperature for 12 h in sealed containers before data acquisition.

Vis-NIR (400-1000 nm) spectral imaging
Data acquisition in the Vis-NIR range was performed using a portable Specim IQ hyperspectral line-scan camera with push-broom technology (Specim, Spectral Imaging Ltd., Oulu, Finland). Images were acquired in the 400-1000 nm wavelength range in reflectance mode. The spatial resolution of the recorded data was 512 × 512 pixels and the spectral resolution was 7 nm, resulting in 204 spectral bands across the wavelength range. Halogen-based illumination (2 × 1250 W) covering the full 400-1000 nm range was used. A 95 % white reference tile was used for calibration before data acquisition.

SWIR (1116-1670 nm) spectral imaging
An IMEC SWIR hyperspectral snapscan imaging camera (Interuniversity Microelectronics Centre, Leuven, Belgium) with 108 spectral bands in the 1116-1670 nm range, equipped with an Optec 16 mm F1.7 SWIR lens (Optec S.p.A., Parabiago, Italy), was used. The system was controlled by the IMEC Snapscan software (version 1.3.0.8, IMEC). Four small halogen lamps were utilized for homogeneous illumination of the samples. Before data acquisition, the camera was calibrated using a 95 % reflectance white standard (WS) tile of 200 × 200 mm, with an integration time of 2.5 ms for the 640 × 512 × 108 (x × y × λ) hypercubes.
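White-reference calibration of this kind can be sketched as follows (a hypothetical Python/NumPy sketch, not part of the acquisition software; the optional dark-frame correction is a common extra step assumed here, not described in the text):

```python
import numpy as np

def calibrate_reflectance(raw, white, dark=None, white_reflectance=0.95):
    """Convert raw camera counts to reflectance using a white reference
    tile of known reflectance (95 % here). The dark frame is optional."""
    raw = np.asarray(raw, dtype=float)
    white = np.asarray(white, dtype=float)
    dark = np.zeros_like(raw) if dark is None else np.asarray(dark, dtype=float)
    return white_reflectance * (raw - dark) / (white - dark)

# A pixel reading 400 counts where the white tile reads 500 counts
# corresponds to a reflectance of 0.95 * 400 / 500 = 0.76.
refl = calibrate_reflectance(raw=[[400.0]], white=[[500.0]])
```

The same formula applies per band across the whole hypercube, since NumPy broadcasts the division element-wise.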

Data analysis
Background segmentation, extraction of spectral data from the hyperspectral images, and all calculations in the present research were carried out using MATLAB (R2017b, The MathWorks Inc.), PLS_Toolbox (R9.0, Eigenvector Inc.), and MIA_Toolbox (version 3.1, Eigenvector Inc.). Regions of interest (ROIs) in the Vis-NIR data were selected by applying a mask created by subtracting a low-reflectance band from a high-reflectance band.
ROIs in the SWIR images were selected by manually removing the background using the MIA_Toolbox. The mean spectrum of each ROI was calculated by averaging the spectra of all pixels within the ROI at each wavelength.
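The band-subtraction masking and ROI averaging steps can be sketched on a synthetic hypercube (hypothetical Python/NumPy code; the band indices and threshold are illustrative assumptions, not values from the study):

```python
import numpy as np

def roi_mask(cube, high_band, low_band, threshold=0.1):
    """Create a sample mask by subtracting a low-reflectance band from a
    high-reflectance band and thresholding the difference image."""
    diff = cube[:, :, high_band] - cube[:, :, low_band]
    return diff > threshold

def mean_roi_spectrum(cube, mask):
    """Average the spectra of all pixels inside the ROI at each wavelength."""
    return cube[mask].mean(axis=0)

# Synthetic 4 x 4 x 5 hypercube: a 2 x 2 "sample" region reflects strongly
# in band 3 and weakly in band 0; the background is zero everywhere.
cube = np.zeros((4, 4, 5))
cube[1:3, 1:3, 3] = 0.8
cube[1:3, 1:3, 0] = 0.2
mask = roi_mask(cube, high_band=3, low_band=0)
spectrum = mean_roi_spectrum(cube, mask)   # mean spectrum over 4 ROI pixels
```

The subtraction works because sample pixels differ strongly between the two chosen bands while background pixels do not, so a simple threshold separates them.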

Exploring the pattern in pure meat samples
Principal component analysis (PCA) was used to reveal clustering patterns among the samples (PCA score plots) and to assess which wavelengths most influence the separation of the samples (PCA loading plots) (Abdi & Williams, 2010).
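The score/loading idea can be sketched with scikit-learn on synthetic stand-in spectra (hypothetical data and code; the real analysis used PLS_Toolbox and MIA_Toolbox in MATLAB):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-in: 20 mean spectra x 50 wavelengths; the first 10
# "species A" spectra carry an extra wavelength-dependent offset.
X = rng.normal(0, 0.01, (20, 50))
X[:10] += 0.5 * np.linspace(0, 1, 50)
X = X - X.mean(axis=0)              # mean-center before PCA

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)           # score plot: clustering of samples
loadings = pca.components_          # loading plot: wavelength influence
```

Because the species offset dominates the variance in this toy example, PC1 separates the two groups, mirroring how the score plots in the study cluster the meat species, while the loadings show which wavelengths drive that separation.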
Venetian blinds cross-validation was used to assess model performance and minimize the risk of overfitting. In PLS-DA, the optimal number of PLS factors (latent variables, LVs) was selected by venetian blinds cross-validation (number of data splits: 10, thickness: 1) according to the minima of the RMSEC and RMSECV plots. The performance of the classification models was evaluated by the percentage of correctly classified samples during calibration and, afterwards, with external validation. Performance parameters such as sensitivity, specificity, accuracy and error rate were derived from the confusion matrix to better assess the classification performance.
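Both elements, venetian-blinds fold assignment and confusion-matrix-derived metrics, can be sketched compactly (hypothetical Python; the study used PLS_Toolbox, and the 4-class confusion matrix below is invented for illustration):

```python
import numpy as np

def venetian_blinds(n_samples, n_splits=10):
    """Venetian blinds CV: sample i is assigned to fold i mod n_splits."""
    return [np.arange(i, n_samples, n_splits) for i in range(n_splits)]

def classification_metrics(cm):
    """Per-class sensitivity and specificity plus overall accuracy and
    error rate from a confusion matrix (rows: true, columns: predicted)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp        # missed samples of each class
    fp = cm.sum(axis=0) - tp        # samples wrongly assigned to each class
    tn = total - tp - fn - fp
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": tp.sum() / total,
            "error_rate": 1.0 - tp.sum() / total}

folds = venetian_blinds(40, n_splits=10)        # 10 folds of 4 samples
# Invented confusion matrix for 4 classes (lamb, beef, chicken, pork)
metrics = classification_metrics([[9, 1, 0, 0],
                                  [0, 10, 0, 0],
                                  [0, 0, 10, 0],
                                  [1, 0, 0, 9]])
```

With 38 of 40 samples on the diagonal, the overall accuracy is 95 % and the error rate 5 %, which is exactly the kind of summary reported in Tables 1 and 2.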

Regression methods
For the quantification of pork in meat samples, different linear and non-linear regression models were tested: partial least squares regression (PLSR), support vector machine regression (SVM-R) and an artificial neural network based on a multi-layer back-propagation perceptron (ANN-MLP) were applied to predict the percentage of adulteration.

Software
Clustering (PCA), classification (PLS-DA, SVM and ANN-BPN) and regression (PLSR and SVM-R) analyses were performed using PLS_Toolbox version 9.0 (Eigenvector, USA). The ANN-MLP regression model was applied using the MVC1 toolbox (Olivieri et al., 2004) written for MATLAB. Before chemometric modeling, an appropriate data pretreatment was chosen based on the performance of the chemometric model; the effects of the different data pretreatments are presented in the results and discussion section. A flowchart of the complete analysis of the hyperspectral data, from image acquisition to multivariate analysis, is shown in Figure S1.

Fig. 1 shows the averaged Vis-NIR and SWIR spectra obtained from the four pure meat species. Given the heavy overlap between the spectra of the different samples, chemometric analysis is necessary to extract useful information.

Exploring the pattern in pure meat samples
Different preprocessing techniques were tested; a gap-segment 2nd-order derivative with external parameter orthogonalization (EPO, 2 PCs) and mean centering were applied before PCA, as these preprocessing methods performed best for the Vis-NIR and SWIR HSI cameras, respectively (results of the other preprocessing methods tested not shown).
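As an illustrative sketch of such derivative pre-processing (hypothetical Python/SciPy code on random stand-in spectra; a Savitzky-Golay second derivative is used here as a common stand-in for the gap-segment derivative, and the EPO step is omitted):

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(spectra, window=15, polyorder=2):
    """Second-derivative spectra (Savitzky-Golay smoothing derivative)
    followed by mean-centering across samples."""
    d2 = savgol_filter(spectra, window_length=window, polyorder=polyorder,
                       deriv=2, axis=1)
    return d2 - d2.mean(axis=0)    # mean-center each wavelength channel

# Stand-in data: 20 mean spectra x 204 Vis-NIR bands
spectra = np.random.default_rng(2).normal(0, 1, (20, 204))
Xp = preprocess(spectra)
```

Derivative pre-processing removes baseline offsets and slopes, and mean-centering ensures the PCA decomposes variance around the average spectrum rather than around zero.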
The PCA score plots in Fig. 2 A and B show separation between the groups in score space for both cameras when classifying the four meat species. In the Vis-NIR PCA score plot, PC1 captures the maximum variance and provides the best separation (clustering) between species. The highest loadings on PC1 and PC2 (Fig. 2 C) were found around 418, 550 and 578 nm, associated with meat pigments, while wavelengths between 800 and 1000 nm could be related to meat composition (water, protein and fat). In contrast, in the SWIR PCA score plot PC1 captures the maximum variance, but the best clustering was along PC2. According to the loadings on PC1 and PC2 (Fig. 2 D), the wavelengths between 1116 and 1500 nm (associated with water and protein content) contribute most to the clustering of the samples (Cozzolino & Murray, 2004; Downey et al., 2000). The better separation between the species in the PCA score plots of the Vis-NIR spectra compared with the SWIR spectra may be caused by the different wavelength ranges: the Vis-NIR features incorporate information on the myoglobin and hemoglobin content of the different meat species, while the SWIR region mainly captures information on the macro-composition, such as moisture, protein, and fat.
Therefore, the better separation between species with Vis-NIR HSI than with SWIR HSI could be related to its capture of both pigment color (visible region) and muscle composition (NIR region) information. These results are in agreement with those reported by other authors (Alomar et al., 2003; Cozzolino & Murray, 2004; Kamruzzaman et al., 2016; Moran et al., 2018).
Interestingly, the type of cut was not found to be relevant in the PCA analysis.
Furthermore, the wavelengths with higher loading values are important for explaining the variance and are potential wavelengths for differentiating the four species (see Fig. 2 C and D). These wavelengths can be used for meat classification instead of the whole spectral range.

Discrimination (classification) of pure meat samples
The most appropriate pre-processing strategy was chosen according to the highest sensitivity and specificity and the lowest classification error. The best-performing PLS-DA models for the Vis-NIR and SWIR cameras were achieved with baseline correction plus a 1st derivative (order: 2, window: 19 pt), and with a 2nd derivative (order: 2, window: 21 pt) plus orthogonal signal correction (OSC), respectively (Tables 1 and 2).
The corresponding best total accuracies for Vis-NIR and SWIR were 90 % and 83 %, respectively. For the SVM method, the most appropriate pre-processing strategy was likewise chosen according to the highest sensitivity and specificity and the lowest classification error: a 1st derivative (order: 2, window: 15 pt) and a 2nd derivative (order: 2, window: 9 pt) plus Pareto scaling (scaling each variable by the square root of its standard deviation) were the best pre-processing methods for Vis-NIR and SWIR, respectively. The best total accuracies for Vis-NIR and SWIR were 96 % and 88 %, respectively (Tables 1 and 2). For the ANN-BPN method, the best results were achieved with SNV for both cameras, with best total accuracies of 94 % and 89 % for Vis-NIR and SWIR, respectively (Tables 1 and 2). Tables 1 and 2 show that non-linear models such as SVM with an RBF kernel and ANN-BPN have better prediction ability than the linear model (PLS-DA) for both HSI systems.
The total accuracy ranged between 90 and 96 % for Vis-NIR and between 83 and 89 % for SWIR. In our work, the Vis-NIR camera showed superior performance for the classification of pure lamb, beef, chicken, and pork compared with the SWIR camera. This could be due to the different and wider wavelength range of the Vis-NIR hyperspectral camera (400-1000 nm) compared with the SWIR hyperspectral camera (1116-1670 nm). Visual differences were observed between the sample species in both the visible and NIR regions (Fig. S1 A).
Peaks around 418, 546, and 578 nm may be related to hemoglobin and myoglobin absorption. This showed that both pigment color information (from the visible region) and the composition of the meat (from the NIR region) provide information that can be used for speciation purposes (Alamprese et al., 2013; Cozzolino & Murray, 2004; Dashti et al., 2021; Moran et al., 2018). Snapscan HSI systems have limited spectral information for red-meat speciation and authentication, while offering advantages such as portability and video-rate imaging (Al-Sarayreh et al., 2020).

Table 1
Classification performance (in %) of PLS-DA, SVM (kernel function: RBF), and ANN (BPN algorithm) models for the classification of lamb, beef, chicken, and pork with the Vis-NIR hyperspectral camera (400-1000 nm).

Detection of adulteration levels of pork in the meat of other species
The main objective of this contribution was to quantify the level of adulteration (pork) in minced lamb, beef, and chicken. Regression models were developed using PLS, SVM, and ANN-MLP to correlate the spectral data with the level of adulteration. Different spectral pretreatments were assessed for all models, and the best models were selected according to the highest R² and lowest RMSE for calibration and cross-validation. Full cross-validation and a test set were used to validate all predictive models internally and externally. As illustrated in Table 3, the overall performance of the ANN-MLP regression model (a fully non-linear model) in cross-validation and prediction was better than that of the PLS and SVM regression models. For the Vis-NIR hyperspectral camera, the best quantification models were obtained for the detection of pork in beef, followed by pork in chicken and, finally, pork in lamb. The weaker results for the quantification of pork in lamb could be due to the similarity between their spectra (Fig. S2 A).
The results revealed that spectral pre-treatment improved the performance of the PLS and SVM models the most. In the ANN models, spectral pre-treatment mainly affected the RMSE rather than the R² (Table 3).
As can be seen in Table 4, the performance of the ANN-MLP regression model in cross-validation and prediction was also better than that of the PLS and SVM regression models. Different spectral preprocessing methods were tried for the SWIR spectra, but no suitable PLS or SVM regression models could be obtained. According to Tables 3 and 4, the Vis-NIR camera overall performed better for the quantification of pork adulteration in the other species (lamb, beef, and chicken).

Table 3
Performance comparison of the three regression models for quantification of pork in beef, lamb, and chicken meat samples (without preprocessing and with the best preprocessing) with Vis-NIR hyperspectral camera (400-1000 nm).

Conclusions
This contribution compared the potential of two HSI cameras for the speciation and adulteration quantification of different meat species, combined with linear and non-linear multivariate classification (PLS-DA, SVM, and ANN-BPN) and regression (PLSR, SVM-R, and ANN-MLP) methods. For classification, the total accuracy was between 90 and 96 % for Vis-NIR and between 83 and 89 % for SWIR, and the best performances were achieved with ANN-BPN for both HSI cameras.
For pork quantification in the other species, the ANN-MLP regression method showed better cross-validation and prediction performance than the PLS and SVM regression methods in both HSI systems, and the best performance was achieved with the portable Vis-NIR HSI camera. The results demonstrated that spectral data collected with portable Vis-NIR hyperspectral imaging combined with non-linear multivariate methods provide better performance for meat speciation and for the detection of pork adulteration in minced lamb, beef, and chicken than snapscan SWIR hyperspectral imaging.
Funding
This work was supported by the Dutch Ministry of Agriculture, Nature and Food Quality (Knowledge base grant, KB-38-001-008).