Article

Blind Quality Assessment of Iris Images Acquired in Visible Light for Biometric Recognition †

1 Department of Computer and Information Science, University of Konstanz, 78457 Konstanz, Germany
2 Department of Computer Science, Norwegian University of Science and Technology, N-2802 Gjøvik, Norway
* Author to whom correspondence should be addressed.
† This paper is an extended version of the conference paper: Jenadeleh, M.; Pedersen, M.; Saupe, D. Realtime quality assessment of iris biometrics in visible light. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018.
Sensors 2020, 20(5), 1308; https://doi.org/10.3390/s20051308
Submission received: 4 January 2020 / Revised: 23 February 2020 / Accepted: 25 February 2020 / Published: 28 February 2020
(This article belongs to the Special Issue Biometric Systems)

Abstract

Image quality is a key issue affecting the performance of biometric systems. Ensuring the quality of iris images acquired under unconstrained imaging conditions in visible light poses many challenges to iris recognition systems. Poor-quality iris images increase the false rejection rate and degrade the overall performance of such systems. Methods that can accurately predict iris image quality can improve the efficiency of quality-control protocols in iris recognition systems. We propose a fast blind/no-reference metric for predicting iris image quality. The proposed metric is based on statistical features of the sign and the magnitude of local image intensities. The experiments, conducted with a reference iris recognition system and three datasets of iris images acquired in visible light, showed that the quality of iris images strongly affects the recognition performance and is highly correlated with the iris matching scores. Rejecting poor-quality iris images improved the performance of the iris recognition system. In addition, we analyzed the effect of iris image quality on the accuracy of the iris segmentation module in the iris recognition system.

1. Introduction

The stability of iris patterns over the human lifespan and their uniqueness was first noticed in 1987 [1]. Since then, biometric iris recognition has been extensively investigated for accurate and automatic personal identification and authentication [2]. Most commercial iris recognition systems use near-infrared (NIR) images. However, due to the popularity of smartphones and similar handheld devices with digital cameras, iris recognition systems using images taken in visible light have recently been developed [3,4,5].
Image quality is a key factor affecting the performance of iris recognition systems [6,7,8]. In the biometric recognition literature, a biometric quality measure is a covariate that is measurable, influences performance, and is actionable [9,10,11]. Quality measurement can include subject and image covariates. Subject covariates are attributes of the person, such as eyelid occlusion, glare, iris deformation, or wearing of glasses. Image covariates depend on the sensor and acquisition conditions, such as focus, noise, resolution, compression artifacts, and illumination effects. In this work, we develop a real-time quality measure for image covariates as an actionable quality score, e.g., to decide whether an input iris image sample should be enrolled into a dataset or rejected so that a new sample is captured.
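To make the notion of an actionable quality score concrete, the following minimal sketch (in Python) gates enrollment on the predicted quality; the function names, the threshold, and the retry count are illustrative placeholders, not components of our system:

```python
def acquire_iris_sample(capture_fn, quality_fn, threshold=0.8, max_attempts=5):
    """Hypothetical acquisition loop driven by an actionable quality score:
    re-capture until a sample clears the threshold or attempts run out.
    capture_fn() returns an image; quality_fn(img) returns a score in [0, 1)."""
    best_img, best_q = None, float("-inf")
    for _ in range(max_attempts):
        img = capture_fn()
        q = quality_fn(img)
        if q >= threshold:
            return img, q               # quality sufficient: enroll this sample
        if q > best_q:
            best_img, best_q = img, q   # remember the best attempt so far
    return best_img, best_q             # caller may reject and prompt a re-capture
```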
The performance of an iris recognition system in visible light suffers from all of the image quality factors mentioned above. To overcome this problem, some researchers have considered image quality in different ways for iris recognition systems [12,13,14,15,16,17]. However, these systems fall short in two ways:
  • The considered image covariates and distortions are limited. Only frequently occurring distortions, such as Gaussian blur, noise, motion blur, and defocus, are taken into account. However, authentic iris images, especially those taken by handheld devices, may additionally suffer from other types of distortion.
  • Typically, quality assessment is applied to accurately segmented iris images. However, image distortion also affects the performance of the segmentation module of iris recognition systems. Thus, poor image quality can lead to poorly segmented irises and an increased false rejection rate.
In this paper, we propose a general-purpose and fast image quality method that aims to assess the distortion of iris images acquired in unconstrained environments. This method can be used for real-time quality prediction of iris images to rapidly filter image samples with poor quality. Iris images with insufficient quality could lead to high dissimilarity scores for matching pairs and increase the false rejection rate of an iris recognition system. We investigate the effect of iris image quality on the recognition performance of a reference iris recognition system for three challenging iris image datasets acquired in visible light.
This paper is an extended version of our conference paper [18] and mostly a part of the Ph.D. thesis of the first author [19]. The remainder of the paper is organized as follows: Section 2 surveys the literature on iris image quality assessment and iris recognition systems. Section 3 presents the proposed metric for iris image quality assessment. In Section 4, experiments are conducted to study the effect of image quality on the accuracy of iris segmentation. In Section 5, the improvements achieved by filtering poor-quality iris images are discussed using three performance measures on three large iris image datasets acquired in visible light. The paper concludes with suggestions for future research in Section 6.

2. Related Work

In this section, we review the literature on iris image quality assessment, followed by a brief overview of some state-of-the-art iris recognition systems.
Recently, research has been reported to improve the performance of iris recognition systems by considering image quality, but with certain limitations. In some studies, image quality has been examined by considering only certain quality factors, such as sharpness [20], defocus [21], and JPEG compression [22]. These metrics alone cannot be expected to produce reliable quality assessments of authentic in-the-wild iris images.
In other work, iris image quality metrics are applied after segmentation of the iris. In [23], the result of the iris segmentation module is used to form a quality score. Happold [24] proposed a method for predicting the matching score of an iris image pair from quality features computed on precisely segmented iris images; a dataset of iris image pairs labeled with the corresponding matching scores was used for training. Therefore, these methods cannot be used to measure iris image quality in the iris recognition pipeline before segmentation.
Several metrics for iris image quality were developed based on a fusion of several quality measures of image and subject covariates. The authors of [25,26] combined quality measures relating to motion blur, angular deviation, occlusion, and defocus into an overall quality value of an input iris image. These quality metrics were developed for NIR images acquired under traditional, controlled NIR acquisition settings. However, images taken in visible light under uncontrolled lighting conditions differ markedly in appearance [3]. Therefore, these methods may not be directly applicable to evaluating the quality of iris images in visible light. Li et al. [27] proposed a method for predicting an iris matching score based on iris quality factors such as motion blur, illumination, off-angle, occlusions, and dilation. This method requires segmented irises to compute some of these quality factors (dilation and occlusions).
The authors of [10] used combined subject and image covariates, such as the degree of defocusing, occlusion, reflection, and illumination, to form an overall quality score. They focused on the evaluation of iris images after iris segmentation, which allows the systems to process images of poor and good quality in the acquisition phase. They considered only a few image covariates for quality estimation.
Proença [3] proposed a metric for the quality assessment of iris images taken in visible light. This metric measures six image quality attributes: focus, off-angle, motion, occlusion, iris pigmentation level, and pupil dilation. Then, the impact of image quality on feature matching was analyzed. The results showed a significant performance improvement of the iris recognition system when low-quality images were avoided. However, this method requires precisely segmented iris images, and apart from the motion-blur score, the measured quality factors mostly relate to the subject's covariates.
The authors of [12] proposed an approach that automatically selects the regions of an iris image whose patterns change most distinguishably between the reference iris image and the distorted version, and computes the feature from these regions. The measured occlusion and dilation are combined into a total image quality score to study the correlation between iris image quality and iris recognition accuracy.
In the approach of [28], the image quality is assessed locally, based on a fusion schema at the pixel level using a Gaussian mixture model, which gives a probabilistic measure of the quality of local regions of the iris image. The local quality measure is used to detect the poorly segmented pixels and remove them from the fusion process of a sequence of iris images.
Recently, many image quality methods have been proposed for perceptual quality assessment of natural images [29,30,31,32,33,34,35]. Some of these models use statistics of completed local binary patterns (CLBP) as part of their feature vectors. In [33], joint statistics of local binary patterns (LBP) and CLBP patterns produced quality-aware features, and a regression function was trained to map the feature space to the perceived quality scores. In [32], features based on several local image descriptors, such as CLBP, local configuration patterns (LCP), and local phase quantization (LPQ), were extracted, and a support vector regressor was then used to predict the quality scores. These models are trained to predict the perceptual quality of natural images. Liu et al. [36,37] studied some of these methods for filtering low-quality iris images. This study showed inconsistencies in the predicted quality, e.g., removing more low-quality images did not always increase the performance of the iris recognition system. In addition, they removed the low-quality images for each subject separately. Therefore, the filtered images do not have the same range of quality, and there is no global quality-filtering threshold.
In summary, some of the methods for iris quality assessment, such as [25,26], are proposed for NIR images, and only a few types of distortion are considered. Some other quality metrics, like those in [3,23,24], require a segmented iris image to calculate their quality features. They also take limited distortion types into account and are not expected to work well for quality assessment of authentic iris images taken in visible light in arbitrary environmental conditions. Iris recognition systems based on authentic images will broaden the scope of iris recognition systems, and require more research to develop robust metrics for quality assessment of authentically distorted iris images.
Since we used an iris recognition system as a reference system in this paper, in the following, we briefly review some state-of-the-art iris recognition systems.
The fast iris recognition (FIRE) system for images acquired by mobile phones in visible light was proposed by Galdi et al. [38]. It is based on the combination of three classifiers by exploiting iris color and texture information. Raja et al. [39] proposed a recognition system for iris images captured in visible light. This method extracts deep sparse features from image blocks and the whole iris image in different color channels to form the feature vector for an input iris image. Minaee et al. [40] proposed an iris feature extraction method based on textural and scattering transform features. The principal component analysis (PCA) technique is used to reduce the extracted feature dimension.
Recently, OSIRIS version 4.1, an open-source iris recognition system, was proposed by Othman et al. [41]. This system follows the classic Daugman method [42] with some improvements in the segmentation, normalization, coding, and matching modules. For iris and pupil segmentation, the Viterbi algorithm is used for optimal contour detection. For normalization, a non-circular iris normalization is performed using the coarse contours detected by the Viterbi algorithm. The coding module is based on 2-D Gabor filters, which are calculated at different scales and resolutions. Finally, the matching module calculates the global dissimilarity score between two iris codes using the Hamming distance. We used this system as the reference iris recognition system.

3. Proposed Method

In this section, we present our fast and general-purpose method for assessing the quality of iris images acquired in visible light.
Earlier works on iris recognition [42,43] employed block-based operations to obtain iris features. Therefore, we can infer that the most distinctive information in the iris pattern comes from the local patterns of an iris image rather than from global features. Local binary patterns (LBP) and their derivatives have been successfully used in many pattern recognition applications, including texture classification [44,45,46], image retrieval [47,48], object recognition [49,50], action recognition [51,52], and biometric recognition [53,54,55,56].
Most of the LBP-based biometric recognition methods use statistical analysis of local patterns for their feature extraction. Wu et al. [29] showed that image distortions could change the statistics of LBPs. They then examined the statistics of the LBPs to suggest an index for evaluating natural image quality. However, this index does not accurately predict image quality for some common image distortions, such as Gaussian blur and impulse noise.
In the proposed differential sign–magnitude statistics index (DSMI), sign and magnitude patterns are first derived. Then, the statistical characteristics of these patterns are analyzed for their sensitivity to iris image distortion. Statistical features of specific coincidence patterns with high sensitivity to image distortion are identified. A weighted nonlinear mapping is applied to the features to form the iris image quality score. This metric takes advantage of the observation that low-quality iris images have fewer of these patterns compared with those in high-quality iris images.

3.1. Proposed Quality Metric

Our iris image quality metric uses statistical features extracted from patterns of signs and magnitudes of local intensity differences. Then, certain locally weighted statistics of specific sign–magnitude coincidence patterns are used to define the quality score. Guo et al. [46] suggested a completed local binary pattern (CLBP) to represent the local difference information that is missed in the LBP representation of an image [57]. We investigate how common distortions in iris images could alter the statistics of the CLBP. Then, a quality metric based on a specific coincidence of sign and magnitude patterns of the CLBP is proposed.
In CLBP, a local grayscale image patch is represented by its central pixel, and the local differences are given by $d_p = x_p - x_c$, where $x_c = I(c)$ is the gray value of the central pixel of the given patch and $x_p$ is the gray value of a pixel in the neighborhood. A local difference $d_p$ can be decomposed into two components, its sign and its magnitude. These signs and magnitudes of local differences are combined into corresponding patterns, CLBP-S and CLBP-M, as follows.
Let $C = \{(i,j) \mid i = 0, \ldots, M-1,\ j = 0, \ldots, N-1\}$ be the set of pixels of a normalized grayscale image $I$ of width $N$ and height $M$ pixels. For a given pixel $c \in C$, let $x_c$ and $x_p$, $p = 0, \ldots, P-1$, denote the gray values of the center pixel $c$ and of the $P$ points on a circle of radius $R$ about $x_c$. For example, if the coordinates of $x_c$ are $(0,0)$, then the coordinates of $x_p$ are $(R \cos(2\pi p/P), R \sin(2\pi p/P))$. The grayscale value $x_p$ is estimated by interpolation if its coordinates do not coincide with the center of a pixel. Then, the CLBP-S patterns are defined by
$$\mathrm{CLBP\text{-}S}_{P,R}(c) = \sum_{p=0}^{P-1} b_p \cdot 2^p, \qquad b_p = \begin{cases} 1, & x_p \ge x_c \\ 0, & \text{otherwise.} \end{cases} \qquad (1)$$
The CLBP-S operator generates the same code as the original LBP operator. The CLBP magnitude patterns are defined similarly by
$$\mathrm{CLBP\text{-}M}_{P,R}(c) = \sum_{p=0}^{P-1} b_p \cdot 2^p, \qquad b_p = \begin{cases} 1, & m_p \ge z \\ 0, & \text{otherwise,} \end{cases} \qquad (2)$$
where $m_p = |x_p - x_c|$ is the magnitude of the local difference $d_p$. Furthermore, the threshold value $z$ is the average local difference in the $P$-neighborhoods of all center pixels together, i.e.,
$$z = \frac{1}{|C| \, P} \sum_{c \in C} \sum_{p=0}^{P-1} |x_p - x_c|. \qquad (3)$$
For each pixel $c \in C$, we consider the $P$-bit binary representation of the sums in Equations (1) and (2) as binary codes of $\mathrm{CLBP\text{-}S}_{P,R}$ and $\mathrm{CLBP\text{-}M}_{P,R}$. Using these binary representations, we define rotation-invariant indices or patterns for CLBP-S and CLBP-M in a manner similar to that proposed by Ojala et al. [57] for LBP codes. Equation (4) gives the rotation-invariant indices of CLBP-S,
$$\mathrm{CLBP\text{-}S}_{P,R}^{riu2}(c) = G\big(\mathrm{CLBP\text{-}S}_{P,R}(c)\big) = G\Big(\sum_{p=0}^{P-1} b_p 2^p\Big) = \begin{cases} \sum_{p=0}^{P-1} b_p, & U\big(\sum_{p=0}^{P-1} b_p 2^p\big) \le 2 \\ P+1, & \text{otherwise.} \end{cases} \qquad (4)$$
Here, $U$ gives the number of bit changes (0 to 1 or 1 to 0) of the $P$-bit binary representation of a number (including the circular shift),
$$U\Big(\sum_{p=0}^{P-1} b_p 2^p\Big) = \sum_{p=0}^{P-1} \big| b_p - b_{\operatorname{mod}(p+1,\,P)} \big|.$$
Similarly, Equation (5) gives the uniform rotation-invariant patterns of CLBP-M,
$$\mathrm{CLBP\text{-}M}_{P,R}^{riu2}(c) = G\big(\mathrm{CLBP\text{-}M}_{P,R}(c)\big). \qquad (5)$$
Note that these indices, $\mathrm{CLBP\text{-}S}_{P,R}^{riu2}$ and $\mathrm{CLBP\text{-}M}_{P,R}^{riu2}$, range over the set $\{0, \ldots, P+1\}$. The indices from 0 up to $P$ correspond to local sign and magnitude patterns with at most two bit changes and thus denote uniform local patterns. All non-uniform patterns are assigned to the remaining index $P+1$.
$\mathrm{CLBP\text{-}S}_{P,R}^{riu2}$ generates fewer codes than the basic CLBP-S and carries less textural information, since it simplifies the local structure. $\mathrm{CLBP\text{-}M}_{P,R}^{riu2}$ provides a compact representation of the textural information derived from local magnitude patterns.
Figure 1 provides an illustration for the case of $P = 4$ neighbors at distance $R = 1$ from the central pixel of a patch. We obtain six indices $k$ and $l$ for sign and magnitude patterns, corresponding to five rotation-invariant uniform patterns ($k, l = 0, \ldots, 4$) and one index ($k, l = 5$) that represents all non-uniform patterns.
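For concreteness, the pattern extraction described above can be sketched in a few lines of NumPy. This is illustrative only: it processes interior pixels, assumes the grid-aligned case $P = 4$, $R = 1$ (so no interpolation is required), and is not optimized for speed.

```python
import numpy as np

def neighbor_stack(img, P=4, R=1):
    """Gray values x_p of the P circular neighbors (stacked along axis 0)
    and the center values x_c, restricted to interior pixels. For P = 4,
    R = 1 the neighbors fall exactly on the pixel grid."""
    M, N = img.shape
    offsets = [(int(round(R * np.sin(2 * np.pi * p / P))),
                int(round(R * np.cos(2 * np.pi * p / P)))) for p in range(P)]
    center = img[R:M - R, R:N - R]
    neigh = np.stack([img[R + dy:M - R + dy, R + dx:N - R + dx]
                      for dy, dx in offsets])
    return neigh, center

def riu2_index(bits):
    """Rotation-invariant uniform index G of a circular bit pattern:
    the number of set bits if U <= 2 (Equations (4) and (5)),
    otherwise P + 1, the catch-all index for non-uniform patterns."""
    P = len(bits)
    u = sum(abs(int(bits[p]) - int(bits[(p + 1) % P])) for p in range(P))
    return int(bits.sum()) if u <= 2 else P + 1

def clbp_sm_riu2(img, P=4, R=1):
    """CLBP-S^riu2 and CLBP-M^riu2 index maps, values in {0, ..., P+1}."""
    neigh, center = neighbor_stack(img.astype(np.float64), P, R)
    d = neigh - center                       # local differences d_p = x_p - x_c
    signs = (d >= 0).astype(np.int8)         # sign component, Equation (1)
    z = np.abs(d).mean()                     # global threshold z, Equation (3)
    mags = (np.abs(d) >= z).astype(np.int8)  # magnitude component, Equation (2)
    s_idx = np.apply_along_axis(riu2_index, 0, signs)
    m_idx = np.apply_along_axis(riu2_index, 0, mags)
    return s_idx, m_idx
```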
Finally, the local indices for sign and magnitude have to be combined to give a quality indicator for an iris image as a whole. We first join the two types of indices into a set of bitmaps $V_{k,l}(c)$, indexed by $k, l$,
$$V_{k,l}(c) = \begin{cases} 1, & \mathrm{CLBP\text{-}S}_{P,R}^{riu2}(c) = k \ \text{and}\ \mathrm{CLBP\text{-}M}_{P,R}^{riu2}(c) = l \\ 0, & \text{otherwise.} \end{cases} \qquad (6)$$
For each pair $k, l$ of indices, we form a weighted sum of $V_{k,l}(c)$ over all pixels $c$, which is nonlinearly scaled to the unit interval by $r(x) = 1 - e^{-ax}$ as follows:
$$Q_{k,l} = r\left( \frac{1}{|C|} \sum_{c \in C} \frac{V_{k,l}(c)}{\hat{\sigma}^2(c) + \delta^2} \right). \qquad (7)$$
Here, $\hat{\sigma}^2(c)$ is the local variance of the $P$ neighboring pixels of the center pixel $c$, and $\delta^2$ is a small constant to prevent division by zero. The parameters $\delta^2$ and $a$ are empirically set to 0.00025 and 0.01, respectively.
In Equation (7), the normalization by the local variance emphasizes local minima and maxima; normalizing the scores to the range $[0, 1)$ merely eases the interpretation of the quality scores. The value of $Q_{k,l}$ is considered as an image quality score derived from the sign pattern $k$ and the magnitude pattern $l$. In our experiments, we used four neighbors ($P = 4$) at unit distance ($R = 1$) from the central pixel $c$ of a local patch.
Our experiments showed that $Q_{k,l}$ with the specific coincidence of the sign pattern $k = 0$ and the magnitude pattern $l = 0$ has a high correlation with iris image quality. Therefore, we used $Q_{0,0}$ as our proposed DSMI quality score. We had summarized the proposed DSMI metric in our conference paper [18], considering, however, only the selected coincidence sign–magnitude patterns.
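Building on the sketch above, the score of Equation (7) follows directly. Again, this is a minimal illustration with the empirically chosen constants $a = 0.01$ and $\delta^2 = 0.00025$:

```python
def dsmi(img, P=4, R=1, k=0, l=0, a=0.01, delta2=0.00025):
    """DSMI quality score Q_{k,l} of Equation (7); the proposed metric is
    Q_{0,0}. Higher scores predict higher iris image quality."""
    img = img.astype(np.float64)
    neigh, _ = neighbor_stack(img, P, R)
    local_var = neigh.var(axis=0)  # sigma_hat^2(c): variance of the P neighbors
    s_idx, m_idx = clbp_sm_riu2(img, P, R)
    v = ((s_idx == k) & (m_idx == l)).astype(np.float64)  # V_{k,l}(c), Eq. (6)
    x = (v / (local_var + delta2)).mean()  # locally weighted average over C
    return 1.0 - np.exp(-a * x)            # r(x) = 1 - exp(-a x)
```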

3.2. Empirical Justification

Inspired by Wu et al. [29], we examine the distinctiveness of each pattern of $\mathrm{CLBP\text{-}S}_{4,1}^{riu2}$ that coincides with patterns of $\mathrm{CLBP\text{-}M}_{4,1}^{riu2}$ for separating high-quality iris images from distorted versions. To that end, we generated an artificially distorted iris image dataset from 600 pristine high-quality references taken from the Warsaw-BioBase-Smartphone-Iris v1.0 [4], UTIRIS [58], and $GC^2$ multi-modal [36] datasets. A total of 3 to 12 samples per eye from 75 individuals were selected. This dataset was used only to justify our choice of specific sign–magnitude patterns and to investigate how filtering out low-quality iris images using the DSMI metric affects the performance of the segmentation module of the reference iris recognition system. The reference iris images have no content-dependent deformations such as eyelid occlusion and were selected from individuals with high, medium, and low degrees of iris pigmentation. The irises of all of these reference images were segmented accurately by the reference iris recognition system.
Five common image distortions at different levels, plus a multiple distortion, were used to distort the reference iris images. These distortions are Gaussian blur (GB), motion blur (MB), white Gaussian noise (WGN), salt-and-pepper (impulse) noise (IN), and overexposure (OE). The parameters of each function and the number of distorted versions of each reference image are listed in Table 1. In addition to the individual types of distortion, we generated multiply distorted iris images (GB+WGN): we first distorted the images with GB and then with WGN. Since GB tends to occur during the acquisition phase due to the different working conditions of the image sensors, we applied it first. WGN is a noise model that mimics the effects of random processes such as sensor noise due to poor illumination and thermal noise in the imaging device. For simplicity, the recommendation of [59] was followed, and WGN was applied last.
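For illustration, the GB+WGN multiple distortion can be generated as sketched below; the blur and noise levels shown are placeholders, while the actual levels are those listed in Table 1:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def distort_gb_wgn(img, blur_sigma=2.0, noise_sigma=10.0, seed=0):
    """GB+WGN multiple distortion: Gaussian blur first (acquisition stage),
    then additive white Gaussian noise (sensor stage), following the
    ordering recommended in [59]. Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    out = gaussian_filter(img.astype(np.float64), sigma=blur_sigma)
    out += rng.normal(0.0, noise_sigma, size=out.shape)
    return np.clip(out, 0, 255).astype(np.uint8)
```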
To analyze the discrimination power of the scores $Q_{k,l}$ for separating the high-quality reference images from their distorted versions, we show the distributions of the corresponding scores $Q_{k,l}$ for some selected combinations of $k$ and $l$ in Figure 2. Visual inspection clearly shows that the coincidence of sign–magnitude patterns with $k = 0$ and $l = 0$ gives the greatest discrimination power. The predicted quality scores for the reference iris images are mostly between 0.8 and 1, and the scores for the distorted versions are mostly less than 0.8. Therefore, we chose this coincidence pattern to form our DSMI quality metric ($\mathrm{DSMI} = Q_{0,0}$).

4. Iris Segmentation Accuracy

The performance of iris segmentation in a classical iris recognition system has a significant impact on the overall performance. In this section, we analyze how image distortions affect the performance of the segmentation module and how quality filtering could improve the segmentation.
Most state-of-the-art iris recognition systems for iris images acquired in visible light, such as FIRE [38], the system of Raja et al. [39], and OSIRIS version 4.1 [41], could serve as reference iris recognition systems. We chose OSIRIS version 4.1 because (1) OSIRIS is an open-source iris recognition system that facilitates reproducible experiments, (2) it shows high recognition performance [41], and (3) it was used as the reference iris recognition system in several recent biometric recognition studies [4,60,61,62,63,64]. The segmentation module of OSIRIS version 4.1 uses the Viterbi algorithm to detect the iris and pupil contours [65]. The outputs are contours of the iris, which represent the inner boundary between the pupil and iris and the outer boundary between the iris and sclera, resulting in a binary mask for the iris.
For our experiments, we used the artificially distorted dataset from the previous section, summarized in Table 1. We segmented all iris images using the OSIRIS segmentation module. The mask of the segmented iris of each reference image was taken as the ground truth for comparison with the segmentation results for the distorted versions. The iris segmentation error is computed as the fraction of mislabeled pixels,
$$e = \frac{1}{|C|} \sum_{c \in C} T(c) \oplus M(c), \qquad (8)$$
where $|C|$ is the cardinality of the pixel set $C$ of an iris image, and $T$ and $M$ represent the ground-truth and the generated iris masks, respectively. The symbol $\oplus$ represents the exclusive OR operation to identify the segmentation error. If the error $e$ was below the threshold 0.05, the iris segmentation was assumed to be correct. The threshold value was set manually by the authors.
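As a reference, a minimal sketch of this error measure, assuming binary NumPy masks of equal size:

```python
import numpy as np

def segmentation_error(truth_mask, test_mask):
    """Fraction of mislabeled pixels, Equation (8): per-pixel XOR of the
    ground-truth and generated iris masks, averaged over the image."""
    t = np.asarray(truth_mask, dtype=bool)
    m = np.asarray(test_mask, dtype=bool)
    return float(np.mean(t ^ m))

def is_correctly_segmented(truth_mask, test_mask, threshold=0.05):
    """A segmentation counts as correct if e stays below the manually
    chosen threshold of 0.05."""
    return segmentation_error(truth_mask, test_mask) < threshold
```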
In Figure 3, we show the fractions of incorrectly segmented irises for the different types of distortion and for low, medium, and high degrees of iris pigmentation. The fractions are given as functions of the percentage of low-quality images that were filtered out using the proposed DSMI quality metric.
The results shown indicate a clear correlation between the DSMI quality of iris images and segmentation accuracy. Therefore, filtering out poor-quality images before segmentation will improve the performance by reducing the number of incorrectly segmented images, as indicated by the negative slopes of the plots.
In summary, the experiments performed in this section show that the accuracy of the segmentation module varies for iris images with different pigmentations and different distortions. Highly pigmented iris images present a greater challenge for the reference iris recognition system, while the system is more robust for the segmentation of low-pigmented iris images. However, filtering out poor-quality iris images using the proposed DSMI metric increases the accuracy of iris segmentation.

5. Experimental Results

In this section, we investigate to what extent filtering out poor-quality iris images with the proposed quality metric improves the performance of the reference iris recognition system. We also compare our DSMI quality metric with the BRISQUE [66] and WAV1 [67] image quality metrics. BRISQUE uses statistical features extracted from pixel intensities to train a support vector machine for predicting image quality. Pertuz et al. [67] compared 15 metrics to estimate the blur of an image. In their study, WAV1 performed better than the others. WAV1 uses statistical properties of the discrete wavelet transform coefficients. Since blur is a common distortion of iris images taken by handheld imaging devices such as smartphones, we also compare our method with the WAV1 metric. Our experiments were conducted on three large authentic iris image datasets acquired in visible light.

5.1. Iris Image Datasets

There are many iris image datasets recorded with near-infrared cameras such as CASIA V4 [68], CASIA-Iris-Mobile-V1 [69], IIT Delhi [70], and ND CrossSensor Iris 2013 [71]. However, there are just a few iris image datasets acquired in visible light. Four are widely used in iris recognition research: UTIRIS [58], UBIRIS [72], MICHE [73], and VISOB [74].
An optometric framework in a controlled environment was used for capturing the irises of the UTIRIS dataset, resulting in high-quality iris images. UBIRIS iris images were taken from moving subjects and at different distances, resulting in more heterogeneous images compared to UTIRIS. Nevertheless, the pictures have good quality, better than the expected quality of iris images captured by handheld devices. The MICHE and VISOB datasets are challenging datasets for iris recognition systems, including images with varying degrees of iris pigmentation and eye make-up. In addition, the quality of the images is impaired by lack of focus, gaze deviations, specular reflections, eye occlusion, different lighting conditions, and motion blur.
Instead, we chose three datasets of the $GC^2$ multi-modal biometric dataset [36] because they contain the authentically distorted iris images typically seen when capturing irises with handheld devices such as smartphones. In addition, the iris images were taken from many subjects with different handheld cameras in uncontrolled environments at different distances. Iris pigmentation varied, from European subjects with bright iris textures to Asian subjects with very dark iris textures. Besides the various authentic distortions corresponding to the image covariates, the iris images are subject to a variety of quality losses related to the subject's covariates, such as gaze deviation, off-angle, reflections, eye closure, and make-up. Also, the datasets contain 12–15 iris images of varying quality per eye and person, which is useful for studying the effect of quality filtering. The iris images have more than 30 different resolutions.
  • The first dataset of $GC^2$, REFLEX, was taken with a Canon D700 camera using a Canon EF 100 mm f/2.8 L macro lens (18 megapixels). It contains 1422 irises of 48 subjects. A total of 12 to 15 samples were taken per eye (left and right).
  • The second dataset, LFC, contains iris images taken by a light field camera. The LFC dataset contains 1454 iris images from the right and left eyes of 49 subjects. For each eye, 13 to 15 samples were taken.
  • The third dataset, PHONE, was taken by a smartphone (Google Nexus 5, 8 megapixel camera). It contains 1379 iris images from the right and left eyes of 50 subjects, and 12 to 15 samples were taken per eye.
We compare an iris image with all iris images from the same dataset. Table 2 summarizes these datasets and shows the number of matching and non-matching iris pairs. Figure 4 shows some samples from these datasets, and Figure 5 shows the histograms of the quality scores of the datasets, estimated by the proposed DSMI metric.

5.2. Iris Recognition Performance Analysis

To evaluate the performance improvement of iris recognition achieved by quality filtering with an image quality metric, we used three performance measures, namely Daugman's decidability index [75], the area under the receiver operating characteristic curve (AUC), and the equal error rate (EER). We compared the performance of three image quality metrics when used for quality filtering. Given a threshold for a metric, we rejected those images that exhibited a quality lower than the threshold. The thresholds for each of the three metrics were chosen such that 1/4, 1/2, and 3/4 of the images were rejected. In our experiments, OSIRIS version 4.1 was used as the reference iris recognition system.
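In code, this filtering protocol reduces to a quantile threshold on the quality scores; a minimal sketch (the function name is ours):

```python
import numpy as np

def quality_filter(scores, fraction):
    """Reject the given fraction (e.g., 0.25, 0.5, 0.75) of images with
    the lowest quality scores. Returns a boolean keep-mask and the
    threshold, i.e., the fraction-quantile of the scores."""
    scores = np.asarray(scores, dtype=np.float64)
    threshold = np.quantile(scores, fraction)
    return scores >= threshold, threshold
```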

5.2.1. Daugman’s Decidability Index

Daugman’s decidability index [75] is a widely used method for assessing the performance of iris recognition systems [3,36,75]. In an iris recognition system like OSIRIS, a binary phase code is derived for each presented iris image. Then, the fractional Hamming distance to the phase code of a reference iris image is computed. The distributions of these Hamming distances are compared between a set of matching and a set of non-matching iris image pairs from a test dataset. The larger the overlap between the distributions, the more likely recognition errors become. The Daugman index ( d ) measures the separation of these distributions by
d = | μ E μ I | 1 2 ( σ E 2 + σ I 2 ) ,
where μ E and μ I are the means and σ E and σ I are the standard deviations of the distributions. Larger values correspond to better discrimination. We follow this procedure using the G C 2 multi-modal biometric dataset and plot the histograms of the Hamming distances for the matching and the non-matching iris pairs in Figure 6. For visualization, normal distributions were fitted to the histograms.
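A minimal sketch of this computation, assuming arrays of fractional Hamming distances for the non-matching ($E$) and matching ($I$) pairs:

```python
import numpy as np

def decidability(hd_impostor, hd_genuine):
    """Daugman's decidability index d' of Equation (9): separation of the
    non-matching (impostor) and matching (genuine) Hamming-distance
    distributions. Larger values mean better discrimination."""
    mu_e, mu_i = np.mean(hd_impostor), np.mean(hd_genuine)
    var_e, var_i = np.var(hd_impostor), np.var(hd_genuine)
    return abs(mu_e - mu_i) / np.sqrt(0.5 * (var_e + var_i))
```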
We can now study the effect of quality filtering on the performance of the iris recognition system. In Figure 7, we show Daugman’s decidability index as a function of the fraction of removed poor-quality images. DSMI, BRISQUE, and WAV1 image quality metrics were used for quality filtering. Filtering out low-quality iris images using the DSMI metric leads to the largest performance improvement in the REFLEX dataset, while quality filtering in the PHONE dataset leads only to small improvements. This could be due to the DSMI metric performing better in quality assessment on iris images in the REFLEX dataset or to the PHONE dataset posing a greater challenge to the reference iris recognition system. The Daugman index for the PHONE dataset is only 1.36, compared to 2.02 and 1.90 for REFLEX and LFC, respectively (see Figure 6).
From the Daugman decidability index values in the three test datasets, as shown in Figure 7, we can conclude that filtering out the iris images with the poorest quality using the proposed DSMI metric improves the recognition accuracy of the reference iris recognition system. The BRISQUE metric also performs well on the REFLEX dataset, but it is not consistent for quality filtering on the LFC and PHONE datasets. WAV1 is inconsistent for quality filtering on all three test datasets.

5.2.2. Receiver Operating Characteristic Curve

The area under the curve (AUC) of the receiver operating characteristic (ROC) is a widely used performance metric for comparing the accuracy of iris recognition systems. The iris recognition system with the larger AUC is considered to be a more accurate system.
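For modest sample sizes, the AUC can be computed directly as the probability that a matching pair has a smaller Hamming distance than a non-matching pair; a minimal sketch (the all-pairs comparison is quadratic, so it is illustrative rather than efficient):

```python
import numpy as np

def auc_from_distances(hd_genuine, hd_impostor):
    """AUC as P(HD_genuine < HD_impostor), i.e., the Mann-Whitney
    statistic; ties count one half."""
    g = np.asarray(hd_genuine, dtype=np.float64)[:, None]
    i = np.asarray(hd_impostor, dtype=np.float64)[None, :]
    return float((g < i).mean() + 0.5 * (g == i).mean())
```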
To visualize and measure the performance improvements of the reference iris recognition system obtained by filtering out poor-quality iris images, the ROC curves were generated for each dataset by plotting the true positive rate against the false positive rate at various fractional Hamming distances (see Figure 8).
Figure 8 shows the ROC curves for the three test datasets with different quality filtering thresholds using our DSMI metric, BRISQUE, and WAV1 metrics. The solid red lines in Figure 8 show the performance of the reference iris recognition system without quality filtering. Without quality filtering, the corresponding AUC value for the REFLEX dataset is 0.9065, for the LFC dataset it is 0.8861, and for the PHONE dataset it is 0.8226. The AUC values show again that the PHONE dataset is the most challenging one for the reference iris recognition system.
We also computed the AUC values after removing 1/4, 1/2, and 3/4 of the iris images with the poorest quality from each test dataset. The AUC values are listed in the figure legends for all of the test datasets. Using the proposed DSMI metric for quality filtering increased the AUC value in all test datasets.
In the REFLEX dataset, filtering out a quarter of the iris images with the poorest quality using the DSMI metric greatly improves the performance of the reference iris recognition system in terms of AUC by 0.0406 (4.5%). However, filtering out the second quarter only increases AUC by 0.0062 (0.65%). This indicates that the middle two quarters of the iris images have a small quality deviation, and filtering a part of these images does not result in a considerable improvement in the performance of the iris recognition system. However, filtering the third quarter of the iris images with the poorest quality improves the AUC significantly by 0.0336 (3.5%).
The performance improvements for the LFC dataset after filtering out the first, second, and third quarters of the iris images with the poorest quality using the DSMI metric are 0.0278 (3.1%), 0.0124 (1.4%), and 0.0104 (1.1%), respectively. The values for performance improvement on the PHONE dataset are 0.0049 (0.6%), 0.0127 (1.5%), and 0.0413 (4.9%). Filtering out the first quarter of the iris images with the poorest quality using the DSMI metric only slightly improves the AUC value, but filtering out three quarters of the iris images with the poorest quality improves the performance significantly by 7.2%. We visualized these performance improvements in Figure 9.
The analysis of the AUC values shows that the performance of the reference iris recognition system has improved by quality filtering in all test datasets when using the DSMI metric for quality assessment. In contrast, BRISQUE is consistent for quality filtering for the REFLEX dataset, but not for the other two test datasets. WAV1 shows inconsistent performance in all test datasets.
The reason for this could be that the DSMI metric is optimized for assessing the quality of iris images, whereas BRISQUE is optimized for the perceptual quality of natural images. Both, however, can assess image quality for different image distortions. The WAV1 metric is optimized for blur assessment. Since blur is common in iris images taken with handheld devices, we also compared our method with the WAV1 metric. However, the iris images in the test datasets have more complicated authentic in-the-wild image distortions, and these distortions degrade the performance of WAV1 on all test datasets.

5.2.3. Equal Error Rate

The equal error rate (EER) is the rate at which both accept and reject errors are equal. The EER is used for comparing the accuracy of classification systems with different receiver operating characteristic (ROC) curves. With the EER approach, the system with the lowest EER is considered the most accurate.
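A minimal sketch that estimates the EER by sweeping the Hamming-distance threshold; the grid resolution of 1001 steps is an arbitrary choice:

```python
import numpy as np

def equal_error_rate(hd_impostor, hd_genuine):
    """EER: sweep the decision threshold and locate the point where the
    false accept rate (impostor pairs accepted) equals the false reject
    rate (genuine pairs rejected)."""
    thresholds = np.linspace(0.0, 1.0, 1001)
    far = np.array([(np.asarray(hd_impostor) < t).mean() for t in thresholds])
    frr = np.array([(np.asarray(hd_genuine) >= t).mean() for t in thresholds])
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2.0
```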
Table 3 lists the EER values obtained when the three image quality metrics were used to filter out poor-quality iris images from the test datasets. The greatest performance improvement is achieved by filtering out poor-quality iris images with the DSMI metric on the REFLEX dataset. The PHONE dataset is the most challenging dataset for the reference iris recognition system, resulting in higher EER values.
The results confirm that rejecting poor-quality images using the proposed DSMI metric improves the iris recognition performance consistently, while this observation does not hold for BRISQUE and WAV1 metrics.
In summary, for all of the test iris image datasets (REFLEX, LFC, PHONE) and all of the performance evaluation methods (Daugman's decidability index, AUC, EER), the performance of the reference iris recognition system (OSIRIS, version 4.1) increased consistently when iris images with the poorest quality were filtered out using the proposed DSMI quality metric. In contrast, for the other two image quality metrics (BRISQUE, WAV1), the experiments showed inconsistencies, i.e., removing more low-quality images did not always increase the performance of the reference iris recognition system.
Figure 10 shows some iris samples from the test datasets with poor quality scores predicted by the proposed DSMI metric. These samples are filtered out when we remove a quarter of the iris images with the poorest quality from each test dataset. If we pass these samples to the reference iris recognition system, all of them are falsely rejected. Thus, the proposed DSMI metric can be used to decide, based on the quality score, whether an input iris sample should be enrolled in a dataset or rejected so that a new sample is captured. Although our method is designed to consider only image covariates, some subject covariates, such as eyelid occlusion due to blinking, may also result in motion blur or other image quality distortions that can be measured by our proposed quality metric, as shown in Figure 10c. All iris samples shown in Figure 10 suffer from authentic image distortion and other quality degradation due to subject covariates.
Figure 11 shows some iris samples with DSMI scores that are higher than the threshold for filtering out one quarter of the iris samples with the poorest quality from each test dataset. Our proposed framework passes these images to iris segmentation and identification when only a quarter of the iris images with the poorest quality is filtered out. However, all of these samples are falsely rejected by the reference iris recognition system. Some of these images have quality degradations related to subject covariates, such as eyelashes obscuring the iris or closed eyes.
The iris samples shown in Figure 11 have fewer image distortions than those shown in Figure 10; therefore, our quality metric predicts higher quality scores for them. If we filter out half of the iris samples with the poorest quality, these samples are filtered out as well. However, setting a higher quality-filtering threshold may reject some iris samples unnecessarily.

5.3. Computational Complexity

It is straightforward to assess the computational complexity of the DSMI quality metric by checking the algorithmic steps outlined in Section 3.1 one by one. The result is a time complexity linear in the size of the input image. More precisely, it is $O(N \times M \times P)$, where $N \times M$ is the image size in pixels and $P$ is the number of points checked in the neighborhood of each pixel for deriving the sign and magnitude patterns.
We also recorded the actual speed of the quality metric using our implementation, running on an MSI GP60 laptop with an Intel Core i7 processor and 16 GB RAM with MATLAB version 2018b on Ubuntu 18.04.3 LTS. We computed the DSMI quality scores on four parts of the test datasets, each containing iris images of the same size in pixels, ranging from $596 \times 397$ up to $2036 \times 1358$ (see Table 4). The table confirms the linear time complexity, amounting to roughly $0.06 \times 10^{-6}$ seconds per pixel (at $596 \times 397 \approx 2.4 \times 10^5$ pixels, this is about 15 ms per frame). At that processing speed, a throughput of 66 frames per second (FPS) can be achieved at resolution $596 \times 397$. For the higher resolutions, $625 \times 537$, $1233 \times 810$, and $2036 \times 1358$, the speed is 40, 16, and 6 FPS, respectively. Therefore, the proposed method can be used to assess the quality of iris images in interactive applications, such as iris recognition systems based on handheld imaging devices.

6. Conclusions and Future Work

In this paper, we presented a fast image quality metric based on statistical features of the sign–magnitude transform to estimate the quality of iris images acquired by handheld devices in visible light. This method can be used to decide, based on the quality score, whether an input iris sample should be enrolled in a dataset or rejected so that a new sample is captured, thereby improving the speed and the recognition rate of the reference iris recognition system.
We conducted extensive experiments to demonstrate these improvements using three performance methods for measuring the iris recognition accuracy on three large datasets acquired in unconstrained environments in visible light. The experiments showed that the proposed approach improved the accuracy of the reference iris recognition system.
However, we would like to point out that the inclusion of quality filtering in an iris recognition system can increase the computational costs of iris image recognition, and some iris images may be rejected unnecessarily. This could be caused by an error in the quality metric, by too conservative a setting of the quality threshold, or by quality factors related to the subject covariates. In our future work, we will propose a metric for iris image quality assessment that takes all of these factors into account. Another direction for future work is to develop an algorithm that monitors criteria such as recognition performance, the time and number of photos required per person, and customer satisfaction, in order to dynamically adapt the threshold for quality filtering and achieve optimal performance.
It may also be promising to examine the use of the proposed quality metric to assess the quality of other biometric images, such as face images and NIR biometric images.

Author Contributions

Conceptualization, M.J. and M.P.; investigation, M.J.; methodology, M.J., M.P. and D.S.; validation, M.J. and D.S.; writing—original draft preparation, M.J.; writing—review and editing, M.J., M.P. and D.S.; visualization, M.J., M.P. and D.S.; supervision, M.P. and D.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Exzellenzstrategie des Bundes und der Länder (the Excellence Strategy of the German Federal and State Governments), the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project-ID 251654672—TRR 161 and the Research Council of Norway within project no. 221073 HyPerCept–Color and quality in higher dimensions.

Acknowledgments

The authors thank Jon Yngve Hardeberg, Katrin Franke, and Sokratis Katsikas for their helpful discussions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Flom, L.; Safir, A. Iris Recognition System. U.S. Patent 4,641,349, 3 February 1987.
  2. Daugman, J. New methods in iris recognition. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2007, 37, 1167–1175.
  3. Proença, H. Quality assessment of degraded iris images acquired in the visible wavelength. IEEE Trans. Inf. Forensics Secur. 2011, 6, 82–95.
  4. Trokielewicz, M. Iris recognition with a database of iris images obtained in visible light using smartphone camera. In Proceedings of the 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Sendai, Japan, 29 February–2 March 2016; pp. 1–6.
  5. Raja, K.B.; Raghavendra, R.; Vemuri, V.K.; Busch, C. Smartphone based visible iris recognition using deep sparse filtering. Pattern Recognit. Lett. 2015, 57, 33–42.
  6. Thavalengal, S.; Bigioi, P.; Corcoran, P. Iris authentication in handheld devices-considerations for constraint-free acquisition. IEEE Trans. Consum. Electron. 2015, 61, 245–253.
  7. Thavalengal, S.; Bigioi, P.; Corcoran, P. Evaluation of combined visible/NIR camera for iris authentication on smartphones. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, MA, USA, 7–12 June 2015; pp. 42–49.
  8. Bharadwaj, S.; Vatsa, M.; Singh, R. Biometric quality: A review of fingerprint, iris, and face. EURASIP J. Image Video Process. 2014, 2014, 34.
  9. Phillips, P.J.; Beveridge, J.R. An introduction to biometric-completeness: The equivalence of matching and quality. In Proceedings of the 2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems, Washington, DC, USA, 28–30 September 2009; pp. 1–5.
  10. Daugman, J.; Downing, C. Iris Image Quality Metrics with Veto Power and Nonlinear Importance Tailoring. Available online: https://pdfs.semanticscholar.org/60a3/a6f3e3e047fa1602b735f0682d2a01c84953.pdf (accessed on 12 January 2017).
  11. Beveridge, J.R.; Givens, G.H.; Phillips, P.J.; Draper, B.A. Factors that influence algorithm performance in the face recognition grand challenge. Comput. Vis. Image Underst. 2009, 113, 750–762.
  12. Belcher, C.; Du, Y. A selective feature information approach for iris image-quality measure. IEEE Trans. Inf. Forensics Secur. 2008, 3, 572–577.
  13. Pillai, J.K.; Patel, V.M.; Chellappa, R.; Ratha, N.K. Secure and robust iris recognition using random projections and sparse representations. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1877–1893.
  14. Zhou, Z.; Du, E.Y.; Belcher, C.; Thomas, N.L.; Delp, E.J. Quality fusion based multimodal eye recognition. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Seoul, Korea, 14–17 October 2012; pp. 1297–1302.
  15. Shi, C.; Jin, L. A fast and efficient multiple step algorithm of iris image quality assessment. In Proceedings of the Second International Conference on Future Computer and Communication, Wuhan, China, 21–24 May 2010; pp. 589–593.
  16. Dong, W.; Sun, Z.; Tan, T.; Wei, Z. Quality-based dynamic threshold for iris matching. In Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 1949–1952.
  17. Makinana, S.; Van Der Merwe, J.J.; Malumedzha, T. A fourier transform quality measure for iris images. In Proceedings of the International Symposium on Biometrics and Security Technologies, Kuala Lumpur, Malaysia, 26–27 August 2014; pp. 51–56.
  18. Jenadeleh, M.; Pedersen, M.; Saupe, D. Realtime quality assessment of iris biometrics under visible light. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 556–565.
  19. Jenadeleh, M. Blind Image and Video Quality Assessment. Ph.D. Thesis, Universität Konstanz, Konstanz, Germany, October 2018.
  20. Chen, L.; Han, M.; Wan, H. The fast iris image clarity evaluation based on Brenner. In Proceedings of the 2nd International Symposium on Instrumentation and Measurement, Sensor Network and Automation (IMSNA), Toronto, ON, Canada, 23–24 December 2013; pp. 300–302.
  21. Starovoitov, V.; Golińska, A.K.; Predko-Maliszewska, A.; Goliński, M. No-Reference Image Quality Assessment for Iris Biometrics. In Image Processing and Communications Challenges 4; Springer: Berlin, Germany, 2013; pp. 95–100.
  22. Bergmüller, T.; Christopoulos, E.; Fehrenbach, K.; Schnöll, M.; Uhl, A. Recompression effects in iris recognition. Image Vis. Comput. 2017, 58, 142–157.
  23. Mottalli, M.; Mejail, M.; Jacobo-Berlles, J. Flexible image segmentation and quality assessment for real-time iris recognition. In Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 1941–1944.
  24. Happold, M. Learning to predict match scores for iris image quality assessment. In Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), Clearwater, FL, USA, 29 September–2 October 2014; pp. 1–8.
  25. Kalka, N.D.; Zuo, J.; Schmid, N.A.; Cukic, B. Estimating and fusing quality factors for iris biometric images. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2010, 40, 509–524.
  26. Li, X.; Sun, Z.; Tan, T. Comprehensive assessment of iris image quality. In Proceedings of the 18th IEEE International Conference on Image Processing (ICIP), Brussels, Belgium, 11–14 September 2011; pp. 3117–3120.
  27. Li, X.; Sun, Z.; Tan, T. Predict and improve iris recognition performance based on pairwise image quality assessment. In Proceedings of the International Conference on Biometrics (ICB), Madrid, Spain, 4–7 June 2013; pp. 1–6.
  28. Othman, N.; Dorizzi, B. Impact of quality-based fusion techniques for video-based iris recognition at a distance. IEEE Trans. Inf. Forensics Secur. 2015, 10, 1590–1602.
  29. Wu, Q.; Wang, Z.; Li, H. A highly efficient method for blind image quality assessment. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 339–343.
  30. Ma, K.; Liu, W.; Liu, T.; Wang, Z.; Tao, D. dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs. IEEE Trans. Image Process. 2017, 26, 3951–3964.
  31. Jenadeleh, M.; Moghaddam, M.E. BIQWS: Efficient Wakeby modeling of natural scene statistics for blind image quality assessment. Multimed. Tools Appl. 2017, 76, 13859–13880.
  32. Freitas, P.G.; da Eira, L.P.; Santos, S.S.; Farias, M.C. Image quality assessment using BSIF, CLBP, LCP, and LPQ operators. Theor. Comput. Sci. 2020, 805, 37–61.
  33. Wu, Q.; Li, H.; Wang, Z.; Meng, F.; Luo, B.; Li, W.; Ngan, K.N. Blind image quality assessment based on rank-order regularized regression. IEEE Trans. Multimed. 2017, 19, 2490–2504.
  34. Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun. 2014, 29, 856–863.
  35. Gu, J.; Meng, G.; Redi, J.A.; Xiang, S.; Pan, C. Blind image quality assessment via vector regression and object oriented pooling. IEEE Trans. Multimed. 2018, 20, 1140–1153.
  36. Liu, X.; Pedersen, M.; Charrier, C.; Bours, P. Can no-reference image quality metrics assess visible wavelength iris sample quality? In Proceedings of the IEEE International Conference on Image Processing, Beijing, China, 17–20 September 2017; pp. 3530–3534.
  37. Liu, X.; Charrier, C.; Pedersen, M.; Bours, P. Performance evaluation of no-reference image quality metrics for visible wavelength iris biometric images. In Proceedings of the 26th European Signal Processing Conference (EUSIPCO 2018), Rome, Italy, 3–7 September 2018.
  38. Galdi, C.; Dugelay, J.L. FIRE: Fast iris recognition on mobile phones by combining colour and texture features. Pattern Recognit. Lett. 2017, 91, 44–51. [Google Scholar] [CrossRef]
  39. Raja, K.B.; Raghavendra, R.; Venkatesh, S.; Busch, C. Multi-patch deep sparse histograms for iris recognition in visible spectrum using collaborative subspace for robust verification. Pattern Recognit. Lett. 2017, 91, 27–36. [Google Scholar] [CrossRef]
  40. Minaee, S.; Abdolrashidi, A.; Wang, Y. Iris recognition using scattering transform and textural features. In Proceedings of the 2015 IEEE Signal Processing and Signal processing Education Workshop (SP/SPE), Salt Lake City, UT, USA, 9–12 August 2015; pp. 37–42. [Google Scholar]
  41. Othman, N.; Dorizzi, B.; Garcia-Salicetti, S. OSIRIS: An open source iris recognition software. Pattern Recognit. Lett. 2016, 82, 124–131. [Google Scholar] [CrossRef]
  42. Daugman, J. How iris recognition works. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 21–30. [Google Scholar] [CrossRef]
  43. Miyazawa, K.; Ito, K.; Aoki, T.; Kobayashi, K.; Nakajima, H. An effective approach for iris recognition using phase-based image matching. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1741–1756. [Google Scholar] [CrossRef] [PubMed]
  44. Nguyen, V.D.; Nguyen, D.D.; Nguyen, T.T.; Dinh, V.Q.; Jeon, J.W. Support local pattern and its application to disparity improvement and texture classification. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 263–276. [Google Scholar] [CrossRef]
  45. Liu, L.; Lao, S.; Fieguth, P.W.; Guo, Y.; Wang, X.; Pietikäinen, M. Median robust extended local binary pattern for texture classification. IEEE Trans. Image Process. 2016, 25, 1368–1381. [Google Scholar] [CrossRef] [PubMed]
  46. Guo, Z.; Zhang, L.; Zhang, D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 2010, 19, 1657–1663. [Google Scholar]
  47. Dubey, S.R.; Singh, S.K.; Singh, R.K. Multichannel decoded local binary patterns for content-based image retrieval. IEEE Trans. Image Process. 2016, 25, 4018–4032. [Google Scholar] [CrossRef]
  48. Murala, S.; Wu, Q.J. Local mesh patterns versus local binary patterns: Biomedical image indexing and retrieval. IEEE J. Biomed. Health Inf. 2014, 18, 929–938. [Google Scholar] [CrossRef]
  49. Satpathy, A.; Jiang, X.; Eng, H.L. LBP-based edge-texture features for object recognition. IEEE Trans. Image Process. 2014, 23, 1953–1964. [Google Scholar] [CrossRef]
  50. Shang, J.; Chen, C.; Pei, X.; Liang, H.; Tang, H.; Sarem, M. A novel local derivative quantized binary pattern for object recognition. Visual Comput. 2017, 33, 221–233. [Google Scholar] [CrossRef]
51. Yu, M.; Liu, L.; Shao, L. Structure-preserving binary representations for RGB-D action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1651–1664. [Google Scholar] [CrossRef]
  52. Chen, C.; Liu, M.; Liu, H.; Zhang, B.; Han, J.; Kehtarnavaz, N. Multi-Temporal Depth Motion Maps-Based Local Binary Patterns for 3-D Human Action Recognition. IEEE Access 2017, 5, 22590–22604. [Google Scholar] [CrossRef]
  53. Kang, W.; Wu, Q. Contactless palm vein recognition using a mutual foreground-based local binary pattern. IEEE Trans. Inf. Forensics Secur. 2014, 9, 1974–1985. [Google Scholar] [CrossRef]
  54. Popplewell, K.; Roy, K.; Ahmad, F.; Shelton, J. Multispectral iris recognition utilizing hough transform and modified LBP. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 1396–1399. [Google Scholar]
  55. Hezil, N.; Boukrouche, A. Multimodal biometric recognition using human ear and palmprint. IET Biom. 2017, 6, 351–359. [Google Scholar] [CrossRef]
  56. Piciucco, E.; Maiorana, E.; Campisi, P. Palm vein recognition using a high dynamic range approach. IET Biom. 2018, 7, 439–446. [Google Scholar] [CrossRef]
  57. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
58. Hosseini, M.S.; Araabi, B.N.; Soltanian-Zadeh, H. Pigment melanin: Pattern for iris recognition. IEEE Trans. Instrum. Meas. 2010, 59, 792–804. [Google Scholar] [CrossRef]
  59. Jayaraman, D.; Mittal, A.; Moorthy, A.K.; Bovik, A.C. Objective quality assessment of multiply distorted images. In Proceedings of the 2012 Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 4–7 November 2012; pp. 1693–1697. [Google Scholar]
  60. Czajka, A.; Bowyer, K.W.; Krumdick, M.; VidalMata, R.G. Recognition of image-orientation-based iris spoofing. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2184–2196. [Google Scholar] [CrossRef]
  61. Raghavendra, R.; Raja, K.B.; Busch, C. Exploring the usefulness of light field cameras for biometrics: An empirical study on face and iris recognition. IEEE Trans. Inf. Forensics Secur. 2016, 11, 922–936. [Google Scholar] [CrossRef]
  62. Talreja, V.; Ferrett, T.; Valenti, M.C.; Ross, A. Biometrics-as-a-service: A framework to promote innovative biometric recognition in the cloud. In Proceedings of the 2018 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 12–14 January 2018; pp. 1–6. [Google Scholar]
  63. Zhao, D.; Fang, S.; Xiang, J.; Tian, J.; Xiong, S. Iris template protection based on local ranking. Secur. Commun. Netw. 2018, 2018, 1–9. [Google Scholar] [CrossRef]
  64. Thavalengal, S. Contributions to Practical Iris Biometrics on Smartphones. Ph.D. Thesis, National University of Ireland, Galway, Ireland, May 2016. [Google Scholar]
  65. Sutra, G.; Garcia-Salicetti, S.; Dorizzi, B. The Viterbi algorithm at different resolutions for enhanced iris segmentation. In Proceedings of the Fifth IAPR International Conference on Biometrics (ICB), New Delhi, India, 29 March–1 April 2012; pp. 310–316. [Google Scholar]
  66. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  67. Pertuz, S.; Puig, D.; Garcia, M.A. Analysis of focus measure operators for shape-from-focus. Pattern Recognit. 2013, 46, 1415–1432. [Google Scholar] [CrossRef]
  68. CASIA V4. Available online: http://biometrics.idealtest.org/dbDetailForUser.do?id=4 (accessed on 2 May 2016).
  69. CASIA-Iris-Mobile-V1. Available online: http://biometrics.idealtest.org/dbDetailForUser.do?id=13 (accessed on 25 May 2016).
  70. Kumar, A.; Passi, A. Comparison and combination of iris matchers for reliable personal authentication. Pattern Recognit. 2010, 43, 1016–1026. [Google Scholar] [CrossRef]
  71. ND-CrossSensor-Iris-2013 Dataset. Available online: https://cse.nd.edu/labs/cvrl/data-sets/biometrics-data-sets (accessed on 12 June 2016).
72. Proença, H.; Filipe, S.; Santos, R.; Oliveira, J.; Alexandre, L.A. The UBIRIS.v2: A database of visible wavelength iris images captured on-the-move and at-a-distance. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1529. [Google Scholar] [CrossRef] [PubMed]
  73. De Marsico, M.; Nappi, M.; Riccio, D.; Wechsler, H. Mobile Iris Challenge Evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17–23. [Google Scholar] [CrossRef]
  74. Rattani, A.; Derakhshani, R.; Saripalle, S.K.; Gottemukkula, V. ICIP 2016 competition on mobile ocular biometric recognition. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 320–324. [Google Scholar]
  75. Daugman, J. Biometric Decision Landscapes. Technical Report 482. University of Cambridge, Computer Laboratory, 2000. Available online: https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-482.pdf (accessed on 10 July 2016).
Figure 1. The patterns in the upper row correspond to $\mathrm{CLBP\text{-}S}_{4,1}^{riu2}$, which compares the gray value of the central pixel of a patch ($x_c$) with the gray values of its four neighbors ($x_p$). Black and white disks denote neighbor values smaller and greater than the central pixel value, respectively. In the lower row, $\mathrm{CLBP\text{-}M}_{4,1}^{riu2}$ compares the absolute differences between the gray values of the central pixel and its neighbors ($|x_c - x_p|$) with the threshold $z$ from Equation (3). Hatched and white disks denote absolute differences smaller and greater than the threshold, respectively. Note that the patterns are rotation invariant; in the case of $P = 4$ shown here, the patterns for $k, l = 1, 2, 3, 5$ may be rotated by multiples of 90 degrees without changing the values of $\mathrm{CLBP\text{-}S}_{4,1}^{riu2}$ and $\mathrm{CLBP\text{-}M}_{4,1}^{riu2}$.
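To make the sign and magnitude encoding of Figure 1 concrete, the following minimal NumPy sketch computes both riu2 label maps for $P = 4$, $R = 1$. Since Equation (3) is not reproduced on this page, the threshold $z$ is assumed to be the mean absolute neighbor difference over the image, as is common in the CLBP literature [46]; the function names are illustrative, not from a reference implementation.

```python
import numpy as np

def riu2_table(P=4):
    """Map each P-bit pattern to its rotation-invariant uniform-2 label:
    the number of ones if the circular pattern has at most two 0/1
    transitions, and P + 1 otherwise."""
    table = np.empty(2 ** P, dtype=np.int64)
    for code in range(2 ** P):
        bits = [(code >> i) & 1 for i in range(P)]
        transitions = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
        table[code] = sum(bits) if transitions <= 2 else P + 1
    return table

def clbp_sm_maps(img):
    """CLBP-S and CLBP-M riu2 label maps for P = 4, R = 1."""
    img = np.asarray(img, dtype=np.float64)
    c = img[1:-1, 1:-1]
    # signed differences to the four neighbors: right, up, left, down
    diffs = [img[1:-1, 2:] - c, img[:-2, 1:-1] - c,
             img[1:-1, :-2] - c, img[2:, 1:-1] - c]
    # global magnitude threshold z (assumption: mean absolute difference,
    # following the CLBP literature [46]; the paper's Equation (3) may differ)
    z = np.mean(np.abs(np.asarray(diffs)))
    table = riu2_table(P=4)
    # pack the four sign / magnitude bits of each pixel into a 4-bit code
    s_code = sum((d >= 0).astype(np.int64) << i for i, d in enumerate(diffs))
    m_code = sum((np.abs(d) >= z).astype(np.int64) << i for i, d in enumerate(diffs))
    return table[s_code], table[m_code]
```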
Figure 2. The solid red lines show the distributions of the quality scores of the high-quality iris images, and the dotted blue lines show the distributions for the distorted versions, with the distortion type indicated on the right side of each row. The quality scores $Q_{k,l}$ are formed from four different coincidences of sign ($k$) and magnitude ($l$) patterns, shown at the bottom of each column. The first column shows the histograms of the quality score $Q_{0,0}$, and the second, third, and fourth columns show the histograms of the coincidence patterns $Q_{0,l}$ with $l \neq 0$, $l = \text{all}$, and $l = 4$, respectively.
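The precise definition of $Q_{k,l}$ is given in the main text; read at face value, the caption suggests the relative frequency of pixels at which sign pattern $k$ coincides with magnitude pattern $l$. A hedged sketch of that reading, reusing the label maps from the snippet above (the name `coincidence_frequency` is ours):

```python
import numpy as np

def coincidence_frequency(s_map, m_map, k, l):
    """Relative frequency of pixels whose CLBP-S label equals k and whose
    CLBP-M label satisfies l: an integer label, 'all' (any magnitude
    label), or 'nonzero' (any magnitude label except 0)."""
    sel = (s_map == k)
    if l == "nonzero":
        sel = sel & (m_map != 0)
    elif l != "all":
        sel = sel & (m_map == l)
    return float(np.mean(sel))
```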
Figure 3. Segmentation performance of the reference iris recognition system on iris images with high, medium, and low pigmentation under the different distortion types. The fraction of incorrectly segmented images is plotted against the percentage of low-quality images filtered out according to the differential sign–magnitude statistics index (DSMI) metric.
Figure 4. Iris image samples with high, medium, and low pigmentation from the multi-modal biometric dataset $GC^2$ [36]. The first, second, and third rows show images from the REFLEX, LFC, and PHONE datasets, respectively.
Figure 5. Normalized histograms of the quality scores according to the DSMI metric on the three test iris datasets.
Figure 6. Normal distributions fitted to the normalized histograms of Hamming distances of matching (solid lines) and non-matching (dashed lines) iris pairs for the three test image datasets.
Figure 7. Daugman's decidability index for all iris images and after filtering out different fractions of the poorest-quality iris images using three image quality metrics on the three test datasets.
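For reference, Daugman's decidability index [75] quantifies the separation between the matching and non-matching Hamming-distance distributions of Figure 6:

\[
d' = \frac{|\mu_m - \mu_n|}{\sqrt{\tfrac{1}{2}\left(\sigma_m^2 + \sigma_n^2\right)}},
\]

where $\mu_m$, $\sigma_m^2$ and $\mu_n$, $\sigma_n^2$ are the means and variances of the matching and non-matching distance distributions, respectively; a larger $d'$ indicates better separability.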
Figure 8. The receiver operating characteristic (ROC) curves for the three test datasets (REFLEX, LFC, and PHONE) under different quality-filtering thresholds using our DSMI metric, BRISQUE, and WAV1. The solid red, dashed blue, dot-dashed green, and dotted black lines correspond to no quality filtering and to filtering out one-quarter, half, and three-quarters of the poorest-quality images, respectively.
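The quality-filtering thresholds behind these curves amount to a percentile cut on the predicted quality scores. A minimal sketch, assuming higher scores indicate better quality (the helper name is ours):

```python
import numpy as np

def keep_by_quality(scores, fraction_removed):
    """Boolean mask dropping the given fraction of poorest-quality
    images, assuming higher scores indicate better quality."""
    scores = np.asarray(scores, dtype=np.float64)
    threshold = np.percentile(scores, 100.0 * fraction_removed)
    return scores > threshold
```

For example, `keep_by_quality(dsmi_scores, 0.25)` keeps the images that survive the one-quarter filtering step.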
Figure 9. Area under the curve (AUC) values for all iris images and after removing different fractions of the poorest-quality iris images.
Figure 10. The first row shows iris samples from the multi-modal biometric dataset $GC^2$ [36] that are classified as low-quality by our DSMI metric. All of these samples would be falsely rejected with high dissimilarity scores (>0.47) by the reference iris recognition system. However, if a quarter of the poorest-quality iris images is filtered out of each test dataset, these samples are removed and not passed to the iris recognition system. The second row shows the output of the segmentation module of the reference iris recognition system. The DSMI scores are listed below the iris samples.
Figure 11. The first row shows iris samples from the multi-modal biometric dataset $GC^2$ [36] that our DSMI metric classifies as of sufficient quality when only one quarter of the poorest-quality iris images is filtered out; these images are therefore passed to the iris recognition pipeline for further processing. However, all of these samples would be falsely rejected by the reference iris recognition system with high dissimilarity values (>0.47). The second row shows the output of the segmentation module of the reference iris recognition system. The DSMI scores are listed below the iris samples.
Table 1. Summary of the artificially distorted iris image dataset.

Reference iris images:

| Degree of Iris Pigmentation | Number of Individuals | Number of All Iris Images |
|---|---|---|
| High | 25 | 200 |
| Medium | 25 | 200 |
| Low | 25 | 200 |

Distorted iris images:

| Distortion Type | MATLAB Function | Parameter Interval | Distorted Versions | All Distorted Iris Images |
|---|---|---|---|---|
| GB (Gaussian blur) | imgaussfilt(I, sigma) | 0.5–5 | 10 | 6000 |
| IN (impulse noise) | imnoise(I, 'salt & pepper', density) | 0.05–0.6 | 12 | 7200 |
| OE (overexposure) | I + t | 10–100 | 10 | 6000 |
| MB (motion blur) | H = fspecial('motion', len, theta); imfilter(I, H, 'replicate') | 10–60; 10–60 | 36 | 21,600 |
| WGN (white Gaussian noise) | imnoise(I, 'gaussian', 0, V) | 0.002–0.02 | 10 | 6000 |
| GB+WGN | imgaussfilt(I, sigma); imnoise(I, 'gaussian', 0, V) | 0.5–5; 0.002–0.02 | 100 | 60,000 |
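For readers without MATLAB, the listed distortions can be approximated with scikit-image and SciPy. The sketch below is an approximation of, not a substitute for, the exact imgaussfilt/imnoise/fspecial behavior; it assumes grayscale images scaled to $[0, 1]$, so the overexposure offset $t$ is rescaled from the 8-bit range, and the combined GB+WGN distortion simply chains the two corresponding calls.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, util

def motion_kernel(length, theta):
    """Rough stand-in for fspecial('motion', length, theta): a normalized
    line kernel of the given length, rotated by theta degrees."""
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0
    k = ndimage.rotate(k, theta, reshape=False, order=1)
    return k / k.sum()

def distort(img, kind, **p):
    """Apply one Table 1 distortion to a grayscale image in [0, 1]."""
    if kind == "GB":    # Gaussian blur, sigma in 0.5-5
        return filters.gaussian(img, sigma=p["sigma"])
    if kind == "IN":    # salt-and-pepper (impulse) noise, density in 0.05-0.6
        return util.random_noise(img, mode="s&p", amount=p["density"])
    if kind == "OE":    # overexposure I + t, with t rescaled from the 8-bit range
        return np.clip(img + p["t"] / 255.0, 0.0, 1.0)
    if kind == "MB":    # motion blur; mode='nearest' mimics 'replicate'
        return ndimage.convolve(img, motion_kernel(p["len"], p["theta"]),
                                mode="nearest")
    if kind == "WGN":   # zero-mean white Gaussian noise, variance in 0.002-0.02
        return util.random_noise(img, mode="gaussian", var=p["V"])
    raise ValueError(f"unknown distortion type: {kind}")
```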
Table 2. Summary of the $GC^2$ dataset.

| | REFLEX | LFC | PHONE |
|---|---|---|---|
| Number of subjects | 48 | 49 | 50 |
| Total images | 1422 | 1454 | 1379 |
| Samples per eye | 12–15 | 13–15 | 12–15 |
| Matching pairs | 9457 | 10,045 | 9092 |
| Non-matching pairs | 975,450 | 1,056,485 | 941,039 |
| Camera | Canon D700 | Light field camera | Nexus phone |
| Lowest resolution | 1085 × 724 | 327 × 218 | 450 × 300 |
| Highest resolution | 2813 × 1876 | 1080 × 1080 | 1811 × 1208 |
Table 3. Equal error rate (EER) values after filtering out different fractions of the poorest-quality iris images from each test dataset. The table shows the EER when all iris images are passed to the iris recognition system (0%) and after filtering out one quarter, half, and three quarters of the poorest-quality images from the REFLEX, LFC, and PHONE datasets using the DSMI, BRISQUE, and WAV1 quality metrics.

| Removed Part | REFLEX: DSMI | REFLEX: BRISQUE | REFLEX: WAV1 | LFC: DSMI | LFC: BRISQUE | LFC: WAV1 | PHONE: DSMI | PHONE: BRISQUE | PHONE: WAV1 |
|---|---|---|---|---|---|---|---|---|---|
| 0% | 0.1469 | 0.1469 | 0.1469 | 0.1770 | 0.1770 | 0.1770 | 0.2466 | 0.2466 | 0.2466 |
| 25% | 0.0987 | 0.1202 | 0.1714 | 0.1500 | 0.1604 | 0.1562 | 0.2418 | 0.2374 | 0.2594 |
| 50% | 0.0878 | 0.0978 | 0.1963 | 0.1376 | 0.1528 | 0.1692 | 0.2293 | 0.2276 | 0.2595 |
| 75% | 0.0382 | 0.0520 | 0.2443 | 0.1287 | 0.1724 | 0.1955 | 0.1845 | 0.2412 | 0.2434 |
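The EER is the operating point at which the false acceptance and false rejection rates coincide. A minimal sketch of its computation from genuine (matching) and impostor (non-matching) Hamming distances, where smaller distances indicate better matches:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER from genuine (matching) and impostor (non-matching) Hamming
    distances; a pair is accepted when its distance is <= threshold."""
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine > t).mean() for t in thresholds])    # false rejections
    far = np.array([(impostor <= t).mean() for t in thresholds])  # false acceptances
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```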
Table 4. Comparison of the average running time on four sets of iris images with different resolutions.

| Image Resolution | 596 × 397 | 625 × 537 | 1233 × 810 | 2036 × 1358 |
|---|---|---|---|---|
| Average running time per image (seconds) | 0.015 | 0.026 | 0.061 | 0.181 |
| Average running time per pixel (microseconds) | 0.065 | 0.062 | 0.062 | 0.064 |
| Frames per second (FPS) | 66 | 40 | 16 | 6 |
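The per-pixel times and frame rates follow from the per-image times by simple arithmetic; a quick check on the first column (remaining discrepancies are rounding):

```python
# Sanity check for the first column of Table 4 (differences are rounding):
width, height, t_image = 596, 397, 0.015
print(f"{t_image / (width * height) * 1e6:.3f} us/pixel")  # ~0.063 vs. 0.065
print(f"{1.0 / t_image:.0f} frames per second")            # ~67 vs. 66
```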
