1 Introduction

Biometric security systems play a major role in today’s world in discriminating one person from another and proving a person’s authenticity. Furthermore, due to the outbreak of the COVID-19 virus, there is strong interest in contactless security systems, and an iris recognition system is an excellent option. An “iris recognition system” is a form of “biometric authentication” that uses the human iris to authenticate individuals. Nowadays, the need for high security makes a reliable, fast and automatic personal identification system part of everyday life. The iris is not only unique but also remains unchanged throughout a person’s lifetime.

Iris recognition involves a number of steps, namely image acquisition, image segmentation, image normalization, feature extraction and matching. In recent years, iris recognition has become a trending topic because it places few demands on “user cooperation” and “imaging conditions”. However, under such imaging conditions, captured iris images can suffer from various factors, i.e. “noise”, “gaze deviation”, “iris rotation”, “absence of iris”, “motion/defocus blur” and “occlusions” due to “eyelid/eyelash/hair/glasses”, which make iris recognition a challenging task. The performance of an “iris recognition” system depends on “iris segmentation”, which separates the “iris region” from the entire captured eye image. The whole process of iris segmentation can be divided into two steps, i.e. “iris localization” and “noise detection”. The first step separates the iris part from the sampled eye image for use by subsequent steps. The second step finds noise in the iris part, i.e. eyelashes, eyelids and reflections. Many researchers have proposed “iris segmentation” and/or “localization schemes” such as “histogram and thresholding”, the “Circular Hough Transform (CHT)”, the “Integro-differential operator (IDO)”, “active contour models”, “graph cuts”, “Genetic Algorithm based CHT (GACHT)” or “deep learning”. Many of these methods are unable to deal with eye images containing noise factors such as “eyebrows, eyelashes, contact lenses, non-uniform illumination, defocus and/or eyeglasses”. The “CHT” and “IDO” are found to be robust against noise, but fail at higher noise densities (NDs). In contrast, “histogram” and “thresholding” based approaches prove to be fast, but their robustness against noise is very low. To address these issues, this study proposes an effective “segmentation” approach.

To solve the above issues, this paper focuses on two essential tasks, i.e. noise removal and iris segmentation. The major contributions of the current work can be summarized as follows.

To obtain accurate iris segmentation from a noisy iris image, we have modified the first step of the Wildes approach. In the improved Wildes method (IWM), step 1 obtains the edge map of the input noisy iris image using the ICWFL method. A comparative analysis of six standard edge detection techniques against the proposed ICWFL method is carried out: Roberts (Bhardwaj and Mittal, 2012), Sobel (El-Khamy et al., 2000), Prewitt (Gonzalez and Woods, 2002), Canny (1986), ICA (Xuan and Hong, 2017) and BE3 (Mittal et al., 2019), versus ICWFL (Kumawat and Panda, 2021). The results are shown in Table 2; ICWFL (Kumawat and Panda, 2021) outperforms the six edge detection techniques above. To obtain a “smooth” and “noise-free” image from the noisy edge map produced in the above step, we apply the proposed hybrid restoration fusion filter (HRFF). A variety of Noise Densities (NDs) ranging from 10 to 95 dB are added to the edge map image of step 1. After obtaining the smooth grey scale edge map image, we apply the Wildes approach to find the inner and outer boundaries of the edge image, i.e. pupil and iris. We have compared the segmentation accuracy at different NDs (10–95 dB). The proposed filter gives the best iris segmentation accuracy even under high noise in comparison with standard filters such as the “Median Filter (MF)” (Kumar et al., 2020), “Hybrid Median Filter (HMF)” (Rakesh et al., 2013), “Novel Adaptive Fuzzy Switching Median (NAFSM)” (Kenny and Nor, 2010), “Based on Pixel Density Filter (BPDF)” (Erkan and Gokrem, 2018) and “Different Applied Median Filter (DAMF)” (Erkan et al., 2018).

2 Related work

For the past two years, due to the outbreak of the COVID-19 virus, people throughout the globe have been interested in contactless security systems. Owing to its contactless nature, this technology is extensively used in industries, offices, educational institutions, etc. A standard iris biometric system consists of four modules, i.e. eye “image acquisition”, “iris segmentation”, “feature extraction”, and “matching and recognition”. Here, “iris segmentation” plays a very important role in the overall system’s performance: if segmentation is not done accurately, iris recognition fails and the person cannot be authenticated properly. Next we describe some “state-of-the-art” schemes for iris segmentation.

A study of performance metrics for biometric security is carried out in Sivaram et al. (2019). Daugman (2004) presented how an iris recognition system works, with a comparison among 9.1 million users acquired in trials in Britain, the USA, Korea and Japan. Iris quality assessment is carried out in Nathan et al. (2006) by analysing seven quality factors that affect recognition. Rao et al. (2020) optimise the parameters used in an iris recognition system with the aim of making the system user friendly and minimizing the “time and space” complexities. A deep learning-based iris segmentation approach named IrisParseNet is proposed in Wang et al. (2020); it is a complete “iris segmentation” solution in which the “iris mask and parameterized” boundaries are found jointly by modelling them in a unified multitask network. Hunny et al. (2012) proposed an iris segmentation algorithm and an adaptive SURF descriptor for iris recognition using an adaptive threshold method; the method can handle all possible transformations. Rahmani and Narouei (2020) proposed an automatic iris segmentation approach using a graphics processing unit (GPU) to detect the border between iris and pupil. In another recent work, Lubos et al. (2021) give a comprehensive overview of iris recognition datasets, focusing on a quantitative analysis of the scholarly publication data used for iris recognition; from the Web of Science online library, they reviewed 158 different iris datasets employed in 689 research articles. In another work, Pathak et al. (2019) present an effective method for segmenting the iris, sclera and pupil, where the input image is pre-processed using a bilateral filter. An “iris localization” method based on the Hough Transform is proposed in Sunanda and Shikha (2015), where the Hough transform identifies circles and lines and Canny edge detection is used to improve accuracy. An analysis of iris segmentation using the IDO and the Hough Transform (HT) is presented in Zainal et al. (2013), who studied the performance of the HT on the CASIA database. To perform “iris recognition”, “iris segmentation” is essential (Wildes, 1997). Daugman’s and Wildes’ are the two well-known iris segmentation methods, as cited in Peihua and Xiaomin (2008), Manchanda et al. (2013) and Jan and Min-Allah (2020); the CHT is used to localize the iris in Verma et al. (2012), Kennedy et al. (2018) and Cherabit et al. (2012). An improved version of the Wildes method for iris segmentation is used in this paper.

Li et al. (2010) proposed a novel method to segment the iris in noisy iris images. They proposed a “limbic boundary localization” algorithm that combines K-means clustering with an improved version of the Hough transform. They also proposed an upper eyelid detection approach that combines a parabolic integro-differential operator with RANSAC (Random Sample Consensus), utilizing edges detected by a “one-dimensional edge detector”. Specular highlights are removed from the segmented iris images.

An innovative algorithm for segmenting iris images captured in noisy environments is proposed in Labati and Scotti (2010). This method can extract the iris from an input eye image in an uncontrolled environment, where both reflections and occlusions are present. Iris segmentation is performed in three steps. The first step locates the centers of both the pupil and the iris in the input image. In the second step, two image strips containing the iris boundaries are extracted and linearized. The last step locates the iris boundary points in the strips by performing a regularization operation. Different types of occlusions, such as reflections and eyelashes, are then identified and removed from the final segmented area. Jeong et al. (2010) propose an iris segmentation method for non-ideal iris images. Here an AdaBoost eye detection method is used to reduce the iris detection error caused by two circular edge detection operations. The method then employs color segmentation to detect obstructions caused by ghost effects of visible light. If the extracted “corneal specular reflections” in the detected pupil and iris regions are absent, the captured iris image is determined to be a closed-eye one.

Khan and Kong (2022) developed an iris segmentation approach based on the Laplacian of Gaussian (LOG) filter in the presence of noise. The LOG filter, together with region growing, is used to detect the pupil boundary; the zero crossings of the LOG filter are then employed to mark the inner and outer boundaries. In a recent work, Malinowski and Saeed (2022) proposed an iris segmentation method that is insensitive to light reflections and mirrored reflections, and works well even when the pupil and iris are not positioned perpendicularly to the camera. The algorithm is effective for noisy and poor-quality eye images thanks to edge approximation using the “harmony search algorithm”. A comprehensive review of iris recognition techniques is presented in Malgheet et al. (2021); iris recognition is divided into seven phases, the methods associated with each phase are reviewed, and both the traditional and the deep learning approaches are presented. Abdelwahed et al. (2020) presented a segmentation algorithm for iris recognition that hybridizes Daugman’s Integro Differential Operator (IDO) with edge-based methods, exploiting the strengths of both to increase precision and reduce recognition time. In another research work (Abdulwahid et al., 2020), H.J. Abdulwahid presented an effective method for locating the iris in an eye image. In the first step, a mixture of gamma transform and contrast enhancement mechanisms is used to isolate the iris area. Next, the statistical image parameters mean and standard deviation are employed as features for detecting the outer iris boundary; the IDO technique is used to detect the inner iris boundary.

3 Proposed methodology for segmenting an iris

Figure 1 illustrates the flowchart of the proposed methodology for accurately segmenting an iris from an eye image using the IWM approach. It includes three modules: image acquisition, image preprocessing and image segmentation. Image preprocessing is divided into two parts, i.e. edge detection using ICWFL and noise removal from the iris region using the novel HRFF.

Fig. 1 Flowchart of the proposed methodology for accurate iris segmentation

3.1 Image acquisition

In this paper, we have gathered all images from the publicly available \(IIT\_Delhi\) database. Table 1 summarizes this database. It contains eye images of 224 subjects, with 10 variations per subject, so a total of 2240 eye images are available. In this paper we consider 5 subjects, each with 10 variations, for a total of 50 eye images. All samples are of size \(320\times 240\) pixels, in BMP format, captured under NIR (Near Infrared) illumination. To test the working of the introduced HRFF filter within the proposed IWM segmentation scheme, we take all the noisy sample images and evaluate the effectiveness of the proposed filter over a variety of Noise Densities (NDs) from 10 to 95 dB; for convenience, the paper reports results at 10 dB, 30 dB, 50 dB, 70 dB and 95 dB.

Table 1 Information of database

3.2 Image preprocessing

In the first step of image preprocessing, the input RGB eye image from the IIT_Delhi database is converted to greyscale (gs) for the subsequent processing and segmentation tasks. Then 10 dB noise is added to the gs image. The next step is to detect the edges in the noisy input iris image.
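As a concrete illustration of this preprocessing step, the sketch below loads an eye image, converts it to grayscale and corrupts it with salt and pepper noise (the noise type used in Table 4). The file name and the mapping of the paper’s dB-style noise density to a fraction of corrupted pixels are illustrative assumptions, not part of the original method.

```python
import cv2
import numpy as np

def add_salt_pepper(gs: np.ndarray, density: float, seed=None) -> np.ndarray:
    """Corrupt a fraction `density` of pixels with salt (255) or pepper (0)."""
    rng = np.random.default_rng(seed)
    noisy = gs.copy()
    mask = rng.random(gs.shape)
    noisy[mask < density / 2] = 0           # pepper
    noisy[mask > 1 - density / 2] = 255     # salt
    return noisy

eye = cv2.imread("sample_eye.bmp")              # hypothetical IIT Delhi sample
gs = cv2.cvtColor(eye, cv2.COLOR_BGR2GRAY)      # RGB -> grayscale (gs)
noisy = add_salt_pepper(gs, density=0.10)       # e.g. the paper's "10 dB" level
```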

3.2.1 Edge detection using ICWFL (improved Canny with fuzzy logic)

The proposed “edge detector” ICWFL is applied to generate the edge map gradients E(x,y) from the smooth gray scale eye image I(x,y); it works well for accurate edge detection. To test the accuracy of the proposed ICWFL edge detector for iris segmentation, we carried out a comparative analysis of existing edge detectors against the proposed method. The concept behind the ICWFL approach is described in Kumawat and Panda (2021). The following algorithm steps elaborate the working of the ICWFL edge detector:

Step 1::

The “Canny edge detection (CED)” method uses a “Gaussian filter” for image smoothing, which is unable to detect edges reliably in “low contrast and noisy images”. To improve edge detection accuracy, an edge detection algorithm should smooth noise strongly while smoothing edge points only weakly. Keeping this in mind, we use a “median filter”, a “non linear digital filter”, because it “preserves sharp edges” while removing “noise”. Each output pixel of this filter is the “median of the gray levels” in the “neighborhood of that pixel”:

$$\begin{aligned} Op(i,j)=median_{(p,q)\in Sw_{ij}} {s(p,q)} \end{aligned}$$
(1)

where s(p,q) denotes the sampled image, \(Sw_{ij}\) denotes the pixels under the window mask, and Op(i,j) represents the “output image”.
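In code, Eq. (1) is a sliding-window median; a minimal sketch, reusing the noisy image from the preprocessing sketch above and assuming a \(3\times 3\) window (the paper does not fix a window size):

```python
from scipy.ndimage import median_filter

# Op(i,j) = median of the gray levels s(p,q) inside the window Sw_ij (Eq. 1)
smoothed = median_filter(noisy, size=3)
```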

Step 2::

To calculate the “gradient amplitude”, “CED” uses a small “2×2 neighborhood” window to compute the “finite difference mean value”. This can miss some “real edges” and is very sensitive to “noise”. We therefore calculate the gradients in three directions, i.e. (i) the “horizontal gradient in the X direction”, (ii) the “vertical gradient in the Y direction” and (iii) the “diagonal gradient in both X and Y directions”. Here, the gradients are calculated using the “Prewitt filter”. If we define I as the source image and \(G_{x}\) and \(G_{y}\) as two images that at each point contain the horizontal and vertical derivative approximations, then

$$\begin{aligned} G_{x}= & {} \begin{pmatrix} +1 &{} 0 &{} -1\\ +1 &{} 0 &{} -1\\ +1 &{} 0 &{} -1 \end{pmatrix} *I \end{aligned}$$
(2)
$$\begin{aligned} G_{y}= & {} \begin{pmatrix} +1 &{} +1 &{} +1\\ 0 &{} 0 &{} 0\\ -1 &{} -1 &{} -1 \end{pmatrix} *I \end{aligned}$$
(3)

where * denotes the 2-dimensional convolution operation. In the horizontal mask \(G_{x}\), the center column is zero, so the mask does not include the original pixel values; it computes the difference between the pixel values to the right and to the left of an edge. This increases the edge intensity values, enhancing the edge relative to the original image. In the second mask \(G_{y}\), the center row consists of zeros, so the original edge values are likewise excluded; it computes the difference between the pixel intensities above and below, making the edge visually clear. Each mask contains entries of opposite sign, and the entries of each mask sum to zero. At each point in the image, the resulting gradient approximations can be combined to give the gradient magnitude:

$$\begin{aligned} G=\sqrt{G_{x}^{2}+G_{y}^{2}} \end{aligned}$$
(4)

The gradient direction can likewise be calculated as:

$$\begin{aligned} \theta = \arctan \left( \frac{G_{y}}{G_{x}}\right) \end{aligned}$$
(5)

where \(\theta \) denotes the direction angle. A value of \(\theta \) equal to zero indicates a vertical edge that is darker on its right side.
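Eqs. (2)–(5) translate directly into two convolutions followed by element-wise magnitude and angle computations; a sketch, continuing from the median-filtered image above:

```python
import numpy as np
from scipy.ndimage import convolve

Kx = np.array([[+1, 0, -1],
               [+1, 0, -1],
               [+1, 0, -1]], dtype=float)    # Prewitt mask G_x, Eq. (2)
Ky = np.array([[+1, +1, +1],
               [ 0,  0,  0],
               [-1, -1, -1]], dtype=float)   # Prewitt mask G_y, Eq. (3)

I = smoothed.astype(float)
Gx = convolve(I, Kx)                 # horizontal derivative approximation
Gy = convolve(I, Ky)                 # vertical derivative approximation
G = np.hypot(Gx, Gy)                 # gradient magnitude, Eq. (4)
theta = np.arctan2(Gy, Gx)           # gradient direction, Eq. (5); arctan2 handles Gx = 0
```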

Step 3::

The gradient-computed image contains both “thick” and “thin” edges. The “Non-Maximum Suppression (NMS)” step mitigates the thick ones. It is an “edge-thinning technique” applied to find “the largest” edge, since the edges extracted from the “gradient values” are “still blurred” after gradient calculation. To calculate the “NMS” value, “ICWFL” uses a “\(3\times 3\) mask” of pixels in which each pixel has “eight neighboring pixels (E, W, N, S, SE, SW, NW, NE)”. Pixel comparison along the gradient direction is shown in Fig. 2.

Fig. 2 Pixel comparison along with its direction

Consider the edge in Fig. 3 below, which has three edge points. Assume that the point (x, y) has the largest edge gradient. Check the edge points in the direction perpendicular to the edge and verify whether their gradients are less than that of (x, y). If they are, we can suppress those non-maxima points along the curve, as shown in Eq. (6).

Fig. 3 Example of NMS calculation

$$\begin{aligned} M(x,y)= {\left\{ \begin{array}{ll} |\nabla s |(x,y), &{}\quad \text {if}\quad |\nabla s |(x,y)> |\nabla s |(x',y')\; \text {and}\; |\nabla s |(x,y) > |\nabla s |(x'',y'')\\ 0, &{}\quad \text {otherwise} \end{array}\right. } \end{aligned}$$
(6)

That is, if a pixel’s gradient value is “greater than that of its adjacent pixels” along the gradient direction, the value is kept; otherwise it is suppressed.
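A minimal sketch of the NMS rule in Eq. (6), with the gradient direction quantized to the four principal orientations (a common simplification; the exact eight-neighbour comparison scheme of Fig. 2 is in the original):

```python
import numpy as np

def non_max_suppression(G: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """Keep a pixel only if its gradient magnitude exceeds both
    neighbours along the (quantized) gradient direction -- Eq. (6)."""
    M = np.zeros_like(G)
    angle = np.rad2deg(theta) % 180             # fold directions into [0, 180)
    for x in range(1, G.shape[0] - 1):
        for y in range(1, G.shape[1] - 1):
            a = angle[x, y]
            if a < 22.5 or a >= 157.5:          # gradient ~ horizontal
                n1, n2 = G[x, y - 1], G[x, y + 1]
            elif a < 67.5:                      # gradient ~ 45 degrees
                n1, n2 = G[x - 1, y + 1], G[x + 1, y - 1]
            elif a < 112.5:                     # gradient ~ vertical
                n1, n2 = G[x - 1, y], G[x + 1, y]
            else:                               # gradient ~ 135 degrees
                n1, n2 = G[x - 1, y - 1], G[x + 1, y + 1]
            if G[x, y] > n1 and G[x, y] > n2:   # local maximum: keep
                M[x, y] = G[x, y]
    return M

M = non_max_suppression(G, theta)
```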

Step 4::

After the NMS step, a few edge pixels may still be affected by “noise and scalar variation”. To account for these “spurious responses”, it is necessary to filter out pixels with a “weak gradient value” and preserve edge pixels with a “high gradient value”. This is done by selecting “high and low threshold values”. If an edge pixel’s “gradient” value is higher than the “high threshold”, it is marked as a “strong edge pixel”. If the “gradient” value is smaller than the “high threshold” but “larger than the low threshold”, the pixel is marked as a “weak edge pixel”.

If an edge pixel’s value is smaller than the “low threshold”, it is “suppressed”. The “traditional Canny edge detection algorithm” uses two fixed “manual global threshold values” to filter out all the “false edges”. But as images get more complex, different “local areas” need “different threshold values” to accurately find the “real edges”. A threshold set too high can miss important information, while a threshold set too low will falsely identify “irrelevant information such as noise” as important. It is difficult to give a “generic threshold” that “works well” on all images. So, the main improvement in this step is to preserve all candidate edges, whether false or true, without setting the thresholds manually. Both the high and low thresholds are obtained from the following equations.

$$\begin{aligned} thigh=0.5*\left[ \frac{1}{n}\right] \end{aligned}$$
(7)

Here, “thigh” refers to the high threshold and “n” represents the total number of pixels in the given input image.

$$\begin{aligned} tlow=0.5*thigh \end{aligned}$$
(8)

Here, “tlow” refers to the low threshold.
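The double-threshold classification of this step can be sketched as follows, using the thresholds of Eqs. (7)–(8) and continuing from the NMS output M above. Applying them to gradient magnitudes normalized to [0, 1] is our assumption; the paper does not state the scale:

```python
import numpy as np

n = M.size                      # total number of pixels in the image
thigh = 0.5 * (1.0 / n)         # high threshold, Eq. (7)
tlow = 0.5 * thigh              # low threshold, Eq. (8)

Mn = M / M.max()                # assumed normalization of gradient magnitudes
strong = Mn >= thigh            # strong edge pixels
weak = (Mn >= tlow) & (Mn < thigh)   # weak edge pixels; the rest are suppressed
```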

Step 5::

In the last step, all “unnecessary edges” are “suppressed”, i.e. those that are either “weak” or “not connected” to “strong edges”.

Step 6::

Traditional edge detection procedures have drawbacks: the “edge thickness” is fixed, and parameters such as the “threshold” are difficult to set. The advantage of a “fuzzy rule based technique” is that the “thickness of the edge” can be controlled by “altering the rules” and “output parameters”, which drastically reduces the complexity of the problem. In the proposed work, the output image of the “improved version of the Canny edge detection algorithm” is fed to a “Fuzzy Inference System (FIS)”.

Step 7::

A FIS is designed that takes the “process values as input” and converts them into the “fuzzy plane”. A “fuzzy rule base” is defined that determines and marks the “edge pixels” in the “output image”. In this step, to “preprocess the image” before the “FIS” is applied, the concept of a “window or mask” is used, as shown in Fig. 4. The mask takes the greyscale sample values \(S_{1}, S_{2}, S_{3},\ldots ,S_{8}\) of the “eight neighborhood pixels” around the “center pixel S”, which acts as the output pixel, as shown in Fig. 4a. Figure 4b shows the “processed window mask”, where \(\vartriangle S_{j} = S_{j} - S\) for \( j = 1, 2, 3,\ldots , 8\).

Step 8::

In the “fuzzifier stage”, an “input membership function” is used to “map” the “grey levels” of the image to a new set of “linguistic values”.

Step 9::

In the “defuzzifier” or “output stage”, the “grey level” values are “mapped” to “new crisp values”. In the current work, “defuzzification” is done with the “Centroid of Area (COA)” method.

Fig. 4 Proposed \(3\times 3\) window mask: a window mask; b processed window mask

Algorithm steps for ICWFL edge detector

Input: smooth gray scale (gs) eye image obtained from Sect. 3.2

I::

Apply the “median filter (MF)” to the gs image.

II::

Calculate the “gradient magnitude and direction” on the median-filtered gs image.

III::

Perform the “edge thinning” (NMS) technique on the output image of step II.

IV::

Apply a “double threshold” to discard the edge pixels with “weak” gradient values and keep those with “strong” gradient values.

V::

Perform “hysteresis” to track the edges, which yields the final “improved Canny edge detected” image.

VI::

Scan the above output image with a “\(3\times 3\) window mask”.

VII::

Design a FIS that takes the “eight scanned pixels” as “crisp inputs” and converts them into the “linguistic variables” “Low, Mid and High” using a “triangular membership function (Tmf)”.

VIII::

For the above “\(3\times 3\) window mask” inputs, apply 24 “fuzzy rules” to obtain the “fuzzy outputs”, i.e. “weak, strong or partial edges”, using a “Gaussian membership function (Gmf)” based on combinations of the three “linguistic variables”.

IX::

Using the “Centroid of Area (COA)” method, “defuzzify” the above “fuzzy output” to get the noisy edge map image.

Output: improved Canny with fuzzy logic (ICWFL) noisy edge map image

End of Algorithm

Figure 5 illustrates the flowchart of the proposed ICWFL algorithm.

Fig. 5 Flowchart of the proposed ICWFL edge detector algorithm

Figure 6 shows the edge map gradients of various edge detectors. Figure 6a shows the smooth grey scale original image, and (b, c) show the horizontal and vertical edge map gradients of (a). Figure 6d shows the vertical edge map gradient of the Roberts (Bhardwaj and Mittal, 2012) edge detector, and (e–i) show the vertical edge map gradients of the Sobel (El-Khamy et al., 2000), Prewitt (Gonzalez and Woods, 2002), Canny (1986), ICA (Xuan and Hong, 2017) and BE3 (Mittal et al., 2019) edge detectors, respectively. Figure 6j shows the vertical edge map gradient of the ICWFL edge detector, which produces finer, smoother edges than all the existing edge detectors.

Fig. 6 Edge map gradients of various edge detectors on noisy iris images: a gray scale of sampled noisy image; b horizontal edge mapped iris; c vertical edge mapped iris; d Roberts (Bhardwaj and Mittal, 2012) vertical edge mapped iris; e Sobel (El-Khamy et al., 2000) vertical edge mapped iris; f Prewitt (Gonzalez and Woods, 2002) vertical edge mapped iris; g Canny (1986) vertical edge mapped iris; h ICA (Xuan and Hong, 2017) vertical edge mapped iris; i BE3 (Mittal et al., 2019) vertical edge mapped iris; j ICWFL (Kumawat and Panda, 2021) vertical edge mapped iris

3.2.2 Novel hybrid restoration fusion filter (HRFF)

As the output image of Sect. 3.2.1 contains unwanted noise, we propose a hybrid restoration fusion filter (HRFF) to obtain a smooth, clean image. HRFF is applied to the grey noisy edge map E(x,y) to obtain a clean image SI(x,y). The novel “HRFF” is built on the multiresolution concept using “image fusion”, combined with important features of two restoration filters, i.e. DWF (Deconvolution using Wiener Filter) (Trambadia and Dholakia, 2015) and DLR (Deconvolution using Lucy-Richardson Filter) (Al-Taweel et al., 2015). The motivation behind “image fusion based on wavelets” is “coefficient combination”: the coefficients can be merged in a way that suits a particular application, to obtain the best quality in the fused image. The following algorithm steps detail the working of HRFF:

Algorithm steps for Hybrid Restoration Fusion Filter (HRFF)

Input: two noisy edge map eye images obtained from the output of Sect. 3.2.1

I::

Apply the two non-blind deconvolution algorithms, i.e. DWF and DLR, to the input images

II::

Perform “DWT decomposition” on above restored images

III::

After decomposition, the “approximation and detail components” can be separated. We modify only the approximation coefficients of both restored images; the detail coefficients remain unchanged

IV::

Take the approximation coefficients of both the DWF- and DLR-restored images, then apply the DWF and DLR filters again to these approximation coefficients, giving the modified restored filters, i.e. modified DWF (MDWF) and modified DLR (MDLR)

V::

Assign the MDWF and MDLR images to s1 and s2, and fix the “fusion factor (ff)” at 0.8. The formulation of the fused image Fs from s1 and s2 is shown below:

$$\begin{aligned}&Fs1=(1-ff)*s1; \end{aligned}$$
(9)
$$\begin{aligned}&Fs2=ff*s2; \end{aligned}$$
(10)
$$\begin{aligned}&Fs=Fs1+Fs2; \end{aligned}$$
(11)

where, written in terms of the wavelet coefficient matrices, the fused image Fs is:

$$\begin{aligned} Fs=\left[ \begin{array}{cc} MWAC &{} WHC\\ WVC &{} WDC\\ \end{array}\right] + \left[ \begin{array}{cc} MLAC &{} LHC\\ LVC &{} LDC\\ \end{array}\right] \end{aligned}$$
(12)
VI::

After fusion, four coefficients of the double hybrid restoration filtered image are obtained: the Fused Modified Wiener Lucy Approximated Coefficient FMWLAC(MWAC, MLAC), the Fused Wiener Lucy Horizontal Coefficient FWLHC(WHC, LHC), the Fused Wiener Lucy Vertical Coefficient FWLVC(WVC, LVC) and the Fused Wiener Lucy Detailed Coefficient FWLDC(WDC, LDC).

VII::

Perform the “IDWT (Inverse Discrete Wavelet Transform)” to get the “resultant image”

VIII::

The result is the fused “double hybrid modified restoration filter synthesized” eye image

Output : HRFF eye image obtained

End of Algorithm
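A minimal end-to-end sketch of the HRFF steps above, with scipy’s Wiener filter standing in for DWF, scikit-image’s Richardson–Lucy deconvolution for DLR, and PyWavelets for the DWT/IDWT. The 'haar' wavelet, the \(3\times 3\) uniform PSF and the iteration counts are illustrative assumptions:

```python
import numpy as np
import pywt
from scipy.signal import wiener
from skimage.restoration import richardson_lucy

def hrff(edge_map: np.ndarray, ff: float = 0.8) -> np.ndarray:
    img = edge_map.astype(float) / 255.0
    psf = np.ones((3, 3)) / 9.0                          # assumed blur kernel

    s1 = wiener(img, (3, 3))                             # DWF restoration (step I)
    s2 = richardson_lucy(img, psf, num_iter=10)          # DLR restoration (step I)

    WAC, (WHC, WVC, WDC) = pywt.dwt2(s1, "haar")         # Wiener sub-bands (step II)
    LAC, (LHC, LVC, LDC) = pywt.dwt2(s2, "haar")         # Lucy-Richardson sub-bands

    # Steps III-IV: re-filter only the approximation bands (MDWF / MDLR)
    MWAC = wiener(WAC, (3, 3))
    MLAC = richardson_lucy(np.clip(LAC, 0, None), psf, num_iter=5, clip=False)

    # Step V, Eqs. (9)-(11): weighted fusion with fusion factor ff = 0.8
    fuse = lambda a, b: (1 - ff) * a + ff * b
    fused = (fuse(MWAC, MLAC),
             (fuse(WHC, LHC), fuse(WVC, LVC), fuse(WDC, LDC)))

    return pywt.idwt2(fused, "haar")                     # Steps VI-VII: IDWT
```

Here the same ff weighting of Eqs. (9)–(11) is applied to every sub-band, which is one reading of Eq. (12)’s sum of the two coefficient matrices.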

Step 1::

The process of HRFF includes a unified “double hybrid restoration filter” for noise reduction, combining important features of the two restoration filters DWF (Trambadia and Dholakia, 2015) and DLR (Al-Taweel et al., 2015). We apply the “DWF filter and DLR filter” separately to the two “noisy eye images” obtained from Sect. 3.2.1.

Step 2::

The “restored DWF image and restored DLR image” are decomposed using a “multiresolution approach, i.e. the DWT”.

Step 3::

The “DWT decomposition” yields 4 bands, transforming the image from the “spatial domain to the frequency domain” using the “2-D Discrete Wavelet Transform (DWT)”. The image is split along “vertical and horizontal lines”, representing the “first order of the DWT”, and consists of four parts: the “Approximated coefficient (AC), Horizontal coefficient (HC), Vertical coefficient (VC) and Diagonal coefficient (DC)”. We thus obtain 8 sets of coefficients, the first four derived from the DWF filter and the other four from the DLR filter. The coefficients resulting from the “DWF filter” are the “Wiener Approximated Coefficient (WAC)”, “Wiener Horizontal Coefficient (WHC)”, “Wiener Vertical Coefficient (WVC)” and “Wiener Diagonal Coefficient (WDC)”. Similarly, for the “DLR filter” we get the coefficients “LAC (Lucy-Richardson Approximated Coefficient), LHC (Lucy-Richardson Horizontal Coefficient), LVC (Lucy-Richardson Vertical Coefficient) and LDC (Lucy-Richardson Diagonal Coefficient)”. This paper uses the “multiresolution decomposition” of the “discrete two-dimensional wavelet transform” before applying the concept of “image fusion”. Once the “decomposition” is done, the “approximation and detail components” can be separated. Of the four bands, the “low-frequency coefficients” of the wavelet transform retain “most of the energy” of the source images. For this reason, the double hybrid restoration filter is applied to the “WAC and LAC” components only, while the coefficients “WHC, WVC, WDC and LHC, LVC, LDC” remain unaffected.

Step 4::

After applying the “DWF filter” to the “WAC coefficients”, the decomposition of the “modified DWF restoration” has the coefficients “MWAC, WHC, WVC and WDC”. Only the “approximated coefficients” are modified; all other coefficients “remain unchanged”. After applying the “DWT”, the four resulting coefficients can be represented as:

$$\begin{aligned} AC= & {} [(s(i,j)*\phi _{-i}\phi _{-j})(2p,2q)]_{(p,q)\in z^2} \end{aligned}$$
(13)
$$\begin{aligned} HC= & {} [(s(i,j)*\phi _{-i}\psi _{-j})(2p,2q)]_{(p,q)\in z^2} \end{aligned}$$
(14)
$$\begin{aligned} VC= & {} [(s(i,j)*\psi _{-i}\phi _{-j})(2p,2q)]_{(p,q)\in z^2} \end{aligned}$$
(15)
$$\begin{aligned} DC= & {} [(s(i,j)*\psi _{-i}\psi _{-j})(2p,2q)]_{(p,q)\in z^2} \end{aligned}$$
(16)

where AC, HC, VC and DC represent the “Approximated coefficient, Horizontal coefficient, Vertical coefficient and Diagonal coefficient” of the given image, \(\phi \) and \(\psi \) represent the “scaling and wavelet functions”, \(z^2\) represents the “size of the image”, p, q indicate the “coordinates of the image”, and s(i,j) represents the given “sample image” on which the “level-1 DWT decomposition” is applied.
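Eqs. (13)–(16) correspond to one level of the 2-D DWT; a small self-contained check with PyWavelets (the 'haar' wavelet and the random test image are assumptions):

```python
import numpy as np
import pywt

s = np.random.rand(240, 320)                 # stand-in for a restored eye image
AC, (HC, VC, DC) = pywt.dwt2(s, "haar")      # the four first-order sub-bands
r = pywt.idwt2((AC, (HC, VC, DC)), "haar")   # perfect reconstruction round trip
assert np.allclose(r, s)
```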

Step 5::

In this step, image fusion is applied to the coefficients of both the “MDWF” and “MDLR” filters. This work uses an “image fusion scheme” based on the “wavelet transform”.

Step 6::

After fusing both filters, the four coefficients of the double hybrid restoration filtered image are obtained.

Step 7::

To obtain a fused image having the properties of both modified hybrid restoration filters, “image composition” based on the “IDWT” is performed to get the resultant image. The formulation of the “fusion model” for image restoration using the DWT is summarized in algorithm step V. Figure 7 illustrates the flowchart of the novel HRFF approach.

Fig. 7 Flowchart of the novel HRFF approach

The efficiency of the proposed HRFF is tested on the noisy edge map image output by Sect. 3.2.1. We have tested the effect of the proposed HRFF filter at various NDs, i.e. 10 dB, 30 dB, 50 dB, 70 dB and 95 dB, as can be seen in Fig. 8. We conclude that the “proposed filter” works well at the high ND of 95 dB as well as at the lower NDs of 10 dB, 30 dB, 50 dB and 70 dB. Moreover, with the new double hybrid restoration filter combined with the multiresolution approach to image fusion, we obtain a smooth, fine image that retains all the important information from the degraded image.

The proposed HRFF has the following properties:

1. Noise-smoothing filters applied to noisy images generally tend to blur the image while reducing the noise. HRFF shows the same behaviour: in the process of noise reduction it produces a somewhat blurred output image.

2. The filter is particularly good at removing salt and pepper noise from an image, but it also works well for other noise types.

3. HRFF preserves sharp edges during noise reduction. The process yields an image with fewer abrupt intensity transitions, which ultimately leads to noise reduction.

4. As HRFF combines two well-known filters, the Wiener filter and the Lucy-Richardson filter, it runs faster than the other filters considered.

So, this filter is used to suppress image noise, enhance edges and improve edge clarity.

Fig. 8 Various NDs on the smooth gray scale image with the novel HRFF: a sampled (ND 10 dB) noisy image with the novel filter; b ND 30 dB image with the novel filter; c ND 50 dB image with the novel filter; d ND 70 dB image with the novel filter; e ND 95 dB image with the novel filter

A numerical comparison is shown in Table 2, considering three image quality assessment (IQA) parameters, i.e. MAE (Mean Absolute Error), RMSE (Root Mean Squared Error) and PSNR (Peak Signal-to-Noise Ratio). The table compares the output HRFF image with the input noisy ICWFL edge map image across six edge detectors. The MAE is low for all the edge-detected filtered images, but for the proposed ICWFL it is lowest, i.e. 0.3742 for the HRFF image. For RMSE, the HRFF image shows the least error for the ICWFL edge map image. For the third parameter, PSNR, a higher value indicates higher image quality; for the ICWFL edge map image, applying HRFF gives a PSNR of 66.4635, the highest among all the existing edge-detected images. From these numerical results we conclude that combining the ICWFL edge-detected image with HRFF gives much higher image quality than the six existing edge detectors, i.e. Roberts (Bhardwaj and Mittal, 2012), Sobel (El-Khamy et al., 2000), Prewitt (Gonzalez and Woods, 2002), Canny (1986), ICA (Xuan and Hong, 2017) and BE3 (Mittal et al., 2019). Furthermore, the numerical values of the noisy and filtered images differ for all the edge detection procedures: for the two error-based IQA parameters, MAE and RMSE, the error of the noisy images exceeds that of the HRFF image, and for PSNR the information content of the HRFF image exceeds that of the noisy image.
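For reference, the three IQA parameters of Table 2 can be computed as below; `ref` is the clean reference image and `img` the noisy or filtered one, both assumed to be float arrays on a 0–255 scale:

```python
import numpy as np

def mae(ref, img):
    return np.mean(np.abs(ref - img))            # Mean Absolute Error

def rmse(ref, img):
    return np.sqrt(np.mean((ref - img) ** 2))    # Root Mean Squared Error

def psnr(ref, img, peak=255.0):
    return 10 * np.log10(peak ** 2 / np.mean((ref - img) ** 2))  # in dB
```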

Table 2 Comparison of various edge detectors on noisy and filtered images based on IQA parameters

3.3 Image segmentation based on IWM approach

The Improved Wildes Method (IWM) is a modified version of the existing Wildes method. It takes as input the HRFF filtered image from Sect. 3.2.2 and finds the outer and inner circular boundaries. The following algorithm illustrates the procedure for finding the iris and pupil in the given smooth HRFF eye image.

Algorithm steps for IWM approach

Input ::

HRFF smooth image obtained from Sect. 3.2.2

Step 1::

Find the outer boundary (iris) in the given image

Step 2::

Initialize the center coordinates and radius for the outer circle

Step 3::

Find the inner boundary (pupil) in the given image

Step 4::

Initialize the center coordinates and radius for the inner circle

Step 5::

Compute the circle gradients for both the inner and outer circles

Step 6::

Test whether the computed gradient is a maximum

Step 7::

If it is a maximum, construct the circle with the initialized coordinates

Output: segmented smooth eye image with iris and pupil boundaries

End of Algorithm
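A minimal sketch of this boundary search: for candidate centres and radii, the edge-map evidence on the circle (the accumulator of Eq. 17 below) is summed and the parameters with the maximum response are kept. The coarse grid, the 64 sampled angles and the radius ranges in the usage comment are illustrative assumptions:

```python
import numpy as np

def find_circle(edge_map: np.ndarray, r_lo: int, r_hi: int, step: int = 4):
    """Return (x_p, y_p, r_p) of the circle with the strongest edge response."""
    h, w = edge_map.shape
    angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    best, best_params = -np.inf, None
    for xp in range(r_lo, h - r_lo, step):           # candidate centre rows
        for yp in range(r_lo, w - r_lo, step):       # candidate centre columns
            for rp in range(r_lo, r_hi, step):       # candidate radii
                xs = (xp + rp * np.cos(angles)).astype(int)
                ys = (yp + rp * np.sin(angles)).astype(int)
                ok = (xs >= 0) & (xs < h) & (ys >= 0) & (ys < w)
                score = edge_map[xs[ok], ys[ok]].sum()   # accumulator response
                if score > best:
                    best, best_params = score, (xp, yp, rp)
    return best_params

# e.g. outer boundary (iris) first, then the pupil over a smaller radius range:
# iris = find_circle(hrff_edge_map, 80, 120)
# pupil = find_circle(hrff_edge_map, 20, 60)
```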

After obtaining the smooth gray scale edge map gradients, the segmentation approach is applied to find the inner and outer boundaries of the eye image, where the inner boundary is the pupil and the outer boundary is the iris. The iris and pupil are segmented together using the existing Wildes approach, whose input is the filtered gray scale smooth edge map image. To find the outer boundary, the center coordinates and radius are initialized and the circle is then constructed from them. Three parameters define any circle: \((x_{p},y_{p},r_{p})\), where \((x_{p},y_{p})\) are the center coordinates and \(r_{p}\) is the radius. The accumulator is then:

$$\begin{aligned} IWM_{p}= \sum _{q=1}^{m} {IWM}(x_{q},y_{q},x_{p},y_{p},r_{p}) \end{aligned}$$
(17)

where m is the total number of pixels in the edge map image and \({IWM}(x_{q},y_{q},x_{p},y_{p},r_{p})\) is the basic circle equation, i.e. \(\sqrt{(x_{q}-x_{p})^2 + (y_{q}-y_{p})^2} - r_{p}\). Hence the radius of the pupil is given by \(r_{p}= \sqrt{(x_{q}-x_{p})^2 + (y_{q}-y_{p})^2}\). Here \(q = 1,2,3,\ldots ,m\) and \(IWM_{p}\) denotes the circle range between the lower and upper radius limits; the lower radius limit \(r_{l}\) and the upper radius limit \(r_{u}\) are each parameterized as \((x_{p},y_{p},r_{p})\). A radius range is always required to localize the circle. We then compute the circle gradients in both the “horizontal and vertical directions” and check whether the edge map gradient is maximal. If it is, the outer boundary of the circle is detected in the vertical direction; otherwise, the center coordinates and radius are initialized again to construct a new circle. This process is repeated until the maximum edge map gradient is found for the outer boundary of the circle, i.e. the iris; similar steps are followed for the pupil. Figure 9 illustrates a flowchart of the proposed IWM image segmentation approach.

Fig. 9 Flowchart of the image segmentation IWM approach

3.4 Comparison between existing Wildes and improved novel Wildes approach for segmenting an iris

There are a number of shortcomings in the existing Wildes method (WM) that can be solved using the IWM approach:

1::

WM requires “threshold values” to be chosen for “edge detection”, which may remove “critical edge points” and cause circles/arcs to go undetected, whereas the improved Wildes approach does not require any threshold values for edge detection.

2::

WM is “computationally intensive” due to its “brute-force approach” and is not suitable for “real-time applications”, but the proposed algorithm executes very fast and can be applied in real time. This can be seen in Table 4, which compares the execution time of all the existing filters with the proposed one.

3::

The WM approach is highly sensitive to image noise, whereas the IWM method is less noise-sensitive. Even at a high ND such as 95 dB, IWM works well and can segment the iris and pupil from the input eye image.

4::

When the noise density is high, the Wildes method is unable to detect the iris properly, whereas IWM is still able to detect it, though the result is somewhat blurred.

5::

The IWM approach produces more accurate results than the existing Wildes approach. Figure 10 below shows the noisy and segmented images for the existing Wildes approach, and Fig. 11 shows the noisy and segmented images for the IWM approach.

6::

When an eye image is segmented with the existing WM method, the result contains the iris circle, pupil circle, and noise from eyelids and eyelashes, as shown in Fig. 10a; Fig. 10b shows only the iris and pupil of the segmented image in Fig. 10a. In Fig. 11a, the IWM method is applied to the given sample eye images, combining the ICWFL edge-detected image with the HRFF filtered image; the iris, pupil and the noise from eyelids and eyelashes are shown, where the noise is less than in Fig. 10a. Figure 11b shows only the iris and pupil of the sample eye image of Fig. 11a, with the eyelid and eyelash noise removed. Table 3 shows that the IWM approach is better: for sample image S1, WM gives an iris radius of 99 whereas IWM gives 100, i.e. the boundary is detected properly. Similar findings hold for the pupil.

Fig. 10 Result images of the WM approach: a noisy sample images S1, S2, S3; b segmented sample images S1, S2, S3

Fig. 11 Result images of the IWM approach: a noisy sample images S1, S2, S3; b segmented sample images S1, S2, S3

Table 3 Numerical analysis of different segmenting approaches

In this way we can segment the iris and pupil from the input eye image using the novel IWM segmentation approach. The main difference between the existing approach and the proposed one lies in the edge map and the filter; the major contribution of this paper is an approach that produces effective iris segmentation to authenticate people in less time, reduces complexity and increases reliability.

The implemented approach was tested on sample eye images containing noise of low, mid or high density. The test results demonstrate that the “Wildes algorithm” detects the iris efficiently and with high accuracy on low noise density images; its behaviour on higher noise density images is improved by the additional preprocessing (HRFF filtering with ICWFL) applied to those images.

Fig. 12 Five sampled noisy images of each person as input from the IIT Delhi database with varying noise density (ND): a ND 10 dB; b ND 30 dB; c ND 50 dB; d ND 70 dB; e ND 95 dB

Fig. 13 Five sampled segmented noisy images of each person as input from the IIT Delhi database, with noise density 10 dB removed by different filters using IWM: a median filter (Kumar et al., 2020); b NAFSM filter (Kenny and Nor, 2010); c BPDF filter (Erkan and Gokrem, 2018); d DAMF filter (Erkan et al., 2018); e HMF filter (Rakesh et al., 2013); f own proposed HRFF filter (Kumawat and Panda, 2021)

Fig. 14 Five sampled segmented noisy images of each person as input from the IIT Delhi database, with noise density 30 dB removed by different filters using IWM: a median filter (Kumar et al., 2020); b NAFSM filter (Kenny and Nor, 2010); c BPDF filter (Erkan and Gokrem, 2018); d DAMF filter (Erkan et al., 2018); e HMF filter (Rakesh et al., 2013); f own proposed HRFF filter (Kumawat and Panda, 2021)

Fig. 15 Five sampled segmented noisy images of each person as input from the IIT Delhi database, with noise density 50 dB removed by different filters using IWM: a median filter (Kumar et al., 2020); b NAFSM filter (Kenny and Nor, 2010); c BPDF filter (Erkan and Gokrem, 2018); d DAMF filter (Erkan et al., 2018); e HMF filter (Rakesh et al., 2013); f own proposed HRFF filter (Kumawat and Panda, 2021)

Fig. 16 Five sampled segmented noisy images of each person as input from the IIT Delhi database, with noise density 70 dB removed by different filters using IWM: a median filter (Kumar et al., 2020); b NAFSM filter (Kenny and Nor, 2010); c BPDF filter (Erkan and Gokrem, 2018); d DAMF filter (Erkan et al., 2018); e HMF filter (Rakesh et al., 2013); f own proposed HRFF filter (Kumawat and Panda, 2021)

Fig. 17 Five sampled segmented noisy images of each person as input from the IIT Delhi database, with noise density 95 dB removed by different filters using IWM: a median filter (Kumar et al., 2020); b NAFSM filter (Kenny and Nor, 2010); c BPDF filter (Erkan and Gokrem, 2018); d DAMF filter (Erkan et al., 2018); e HMF filter (Rakesh et al., 2013); f own proposed HRFF filter (Kumawat and Panda, 2021)

Fig. 18 Nine sampled segmented speckle-noisy images of each person as input from the IIT Delhi database with noise density (ND) 70 dB: a S1 eye image; b S2 eye image; c S3 eye image; d S4 eye image; e S5 eye image; f S6 eye image; g S7 eye image; h S8 eye image; i S9 eye image

4 Simulation and results

To prove the efficiency of the proposed IWM algorithm, we carried out a performance analysis of different existing restoration filters against the proposed HRFF at various NDs, i.e. 10 dB, 30 dB, 50 dB, 70 dB and 95 dB. Both the visual and the numerical results show the accuracy of the proposed HRFF applied within the IWM algorithm. Figures 12, 13, 14, 15, 16 and 17 show a comparative analysis of five existing filters, i.e. MF (Kumar et al., 2020), HMF (Rakesh et al., 2013), NAFSM (Kenny and Nor, 2010), DAMF (Erkan et al., 2018) and BPDF (Erkan and Gokrem, 2018), with the own proposed (op) filter (Kumawat and Panda, 2021). At low NDs all these filters give more or less similar results. But as the noise increases to 70 dB and 95 dB, all the existing filters fail to restore the original image, whereas HRFF still retains almost all of its information. Figure 18 shows nine sampled segmented images taken from the IIT Delhi database with 70 dB speckle noise. Figure 18a shows the first sample of eye images, i.e. S1; each sample has ten variations, but due to space constraints this paper presents only seven variations of each sample. Figure 18b shows the second sample, S2, and Fig. 18c–i show samples S3 to S9. This paper reports various image quality parameters, i.e. PSNR (Peak Signal-to-Noise Ratio), SNR (Signal-to-Noise Ratio) and Resolution of the sampled segmented eye images against the noisy image, and various accuracy parameters, i.e. IR (Iris Ratio), PR (Pupil Ratio), PSIR (Performance of Segmenting Iris Ratio), PSPR (Performance of Segmenting Pupil Ratio) and FAR (False Acceptance Rate), whose equations are given below:

Table 4 Filters applied to 50 sampled images containing salt and pepper noise at different densities, i.e. 10 dB, 30 dB, 50 dB, 70 dB, 95 dB
Table 5 PSNR of various filters applied to 90 sampled images containing speckle noise at 70 dB noise density
Table 6 SNR of various filters applied to 90 sampled images containing speckle noise at 70 dB noise density
Table 7 Resolution parameter of various filters applied to 90 sampled images containing speckle noise at 70 dB noise density
Table 8 False acceptance rate (FAR) parameter of various filters applied to 90 sampled images containing speckle noise at 70 dB noise density
$$\begin{aligned} IR = \left[ \frac{accurate \, iris \, segmentation}{accurate \, iris \, segmentation \, + \, inaccurate \, iris \, segmentation}\right] *100 \end{aligned}$$
(18)
$$\begin{aligned} PR = \left[ \frac{accurate \, pupil \, segmentation}{accurate \, pupil \, segmentation \, + \, inaccurate \, pupil \, segmentation}\right] *100 \end{aligned}$$
(19)
$$\begin{aligned} PSIR&= \left[ \frac{accurate \, pupil \, segmentation}{accurate \, pupil \, segmentation \, + \, inaccurate \, iris \, segmentation}\right] *100\\ PSPR&= \left[ \frac{accurate \, iris \, segmentation}{accurate \, iris \, segmentation \, + \, inaccurate \, pupil \, segmentation}\right] *100 \end{aligned}$$
(20)
$$\begin{aligned} FAR = \left[ \frac{Number \, of \, iris \, false \, acceptances}{Total \, number \, of \, iris \, acceptances}\right] *100 \end{aligned}$$
(21)
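These ratios translate directly into code; a small sketch, where the accurate/inaccurate counts come from visual inspection of the segmentation results as in Table 4:

```python
def ratio(accurate: int, inaccurate: int) -> float:
    """Shared form of IR, PR, PSIR and PSPR (Eqs. 18-20), in percent."""
    return accurate / (accurate + inaccurate) * 100

def far(false_accepts: int, total_accepts: int) -> float:
    """False acceptance rate, Eq. (21), in percent."""
    return false_accepts / total_accepts * 100

# e.g. 50 accurately segmented irises out of 50 samples at 10 dB: ratio(50, 0) == 100.0
```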

The percentage values of these parameters for the different filters compared with our proposed one are reported in Table 4. Considering the accuracy of the existing filters versus the proposed one, the values are comparable at lower NDs: at ND 10 dB the own proposed filter achieves \(90\%\) while MF achieves \(92\%\), HMF \(84\%\), DAMF \(70\%\), BPDF \(54\%\) and NAFSM \(62\%\). But as ND increases from 10 to 95 dB the accuracy decreases: at 95 dB, MF, HMF and BPDF fail completely, while the op HRFF still shows \(44\%\) accuracy. Similarly, out of 50 sample images at 10 dB, the number of accurately segmented irises with the op filter is 50, i.e. 100%, the highest among all the existing filters. At 95 dB, the op filter accurately segments 24 of the 50 samples, which is again the highest. Considering PSIR%, the op filter achieves 100% at 10 dB and 89% at 30 dB. Similar findings hold for PSPR%, IR% and PR%: the op filter’s PSPR% at 10 dB is 90.91, the maximum among all filters, and at 95 dB its PSIR% is 45.83 while its PSPR% is 46.15. At 70 dB its IR% is 86 and PR% is 84, both much higher than the other filters. We therefore conclude that at higher NDs our filter outperforms all the other filters.

We have also compared the performance of the proposed algorithm with the existing algorithms in terms of execution time and image quality parameters. Table 4 shows that at the different NDs our op filter takes less time to produce the segmentation result than all the other filters; for example, at 95 dB the op filter takes 6.166192 s, the lowest among all filters.

Tables 5 and 6 show the PSNR and SNR values of the various filters under 70 dB speckle noise on nine samples of eye images, each with ten variations. Here, the OP filter (Kumawat and Panda, 2021) produces higher PSNR and SNR values than the existing filters, reflecting its image quality: image quality is better when PSNR and SNR are high, and degraded when they are low. Table 7 shows the resolution of the various filters alongside the noisy images for all nine samples.

From Table 7, we conclude that the noisy segmented images have higher resolution values, and that applying the filters to the noisy images decreases the resolution. The own proposed filter’s resolution value is very low compared to the existing filters, showing that it is very efficient at removing noise from the images compared to the others.

Fig. 19 Nine sampled segmented speckle-noisy images of each person as input from the IIT Delhi database with noise density (ND) 70 dB, based on different filters: a PSNR; b SNR; c resolution; d FAR

Fig. 20 Nine sampled segmented speckle-noisy images of each person as input from the IIT Delhi database with noise density (ND) 70 dB, based on different filters using IWM: a median filter (Kumar et al., 2020); b HMF filter (Rakesh et al., 2013); c DAMF filter (Erkan et al., 2018); d BPDF filter (Erkan and Gokrem, 2018); e NAFSM filter (Kenny and Nor, 2010); f own proposed HRFF filter (Kumawat and Panda, 2021)

Table 8 shows the false acceptance rate values of each filter on the noisy images. Here, the total number of iris acceptances is 90, which is also the total number of sample eye images, and the number of false acceptances is denoted FA. In this table, the OP filter (Kumawat and Panda, 2021) produces very low FAR values compared to the existing filters. A low FAR ensures that no unauthorized person is granted access; otherwise an unauthorized person would be authenticated and data could be lost. The comparisons of the image quality parameters PSNR, SNR and Resolution, along with the accuracy parameter FAR, are shown in Fig. 19: Fig. 19a plots the PSNR values of all filters, Fig. 19b the SNR values over the nine samples, Fig. 19c the resolution values over the nine samples, and Fig. 19d the FAR values of all filters over the 90 images together. Figure 20 shows the nine samples under six different filters, where Fig. 20a–f represent MF (Kumar et al., 2020), HMF (Rakesh et al., 2013), DAMF (Erkan et al., 2018), BPDF (Erkan and Gokrem, 2018), NAFSM (Kenny and Nor, 2010) and the own proposed (op) filter (Kumawat and Panda, 2021), applied to all samples S1 to S9. Thus ICWFL edge detection in combination with HRFF is well suited to an iris segmentation algorithm, and the proposed algorithm can be implemented in real-time applications as it performs iris segmentation in very little time.

5 Conclusion

In the present paper, an accurate iris segmentation scheme is presented that is robust to noise. The Wildes iris segmentation approach is modified to achieve accurate segmentation. As mentioned earlier, an iris biometric system consists of four modules, i.e. “image acquisition”, “iris segmentation”, “feature extraction” and “matching and recognition”. Each module has its own importance and contributes to accurate and reliable iris recognition, but of the four, iris segmentation is crucial to the overall system accuracy. Keeping this in mind, this paper focuses on two important aspects, i.e. edge detection of the iris and pupil using the ICWFL method and reduction of unwanted noise with the help of HRFF. This edge detection and noise reduction procedure is incorporated into the Wildes method of iris segmentation. A performance analysis using various accuracy parameters shows that IWM outperforms the Wildes method of iris segmentation.

6 Future work

Future work could develop a complete iris recognition system. This paper addresses only the first two steps of iris recognition, i.e. preprocessing and segmentation. In future, an automated scheme for feature extraction and matching can be developed in order to design a complete iris recognition system, yielding a reliable system well suited to real-time applications.