1 Introduction

Ear recognition, a field within biometrics, concerns the use of ear images to identify individuals. Much like fingerprints, ears are unique to an individual; even identical twins have distinguishable ears [23]. Like facial images, ear images can be captured from a distance, making them a useful biometric for security, surveillance, and related purposes. Researchers have explored this topic extensively over the last two decades, investigating techniques for extracting features from ear images and comparing them [10, 28]. Successful feature extraction techniques in ear recognition and other biometrics include Principal Component Analysis (PCA)-based [29,30,31,32, 37], wavelet-based [5, 13, 18, 25], Support Vector Machine (SVM)-based [4, 26, 27], and neural network-based and other [1, 2, 7, 9, 11, 15, 22, 24, 27, 33, 39, 40] methods. Among these techniques, PCA has been used both for feature extraction, in the form of eigenvectors, and for dimensionality reduction. Several PCA based image classification and dimensionality reduction methods, including techniques for 3D and hyperspectral images, have been reported in the literature [14, 16, 34, 35]. Research has shown that extended PCA based methods outperform standard PCA in terms of computational cost, dimensionality reduction capability, memory usage, and classification accuracy. The application of single image PCA based ear recognition has been reported in the literature [32, 38]. However, the classification accuracy of these PCA based techniques is lower than that of learning based techniques. Learning based techniques, in turn, are computationally expensive, data dependent, and require extensive training data, which may not always be available. Consequently, there is demand for robust, low computation cost ear recognition techniques that are less data dependent while offering acceptable accuracy. Recent research on the application of extensions of PCA to hyperspectral image classification [35] has shown the potential of PCA based methods to deliver significantly higher classification and recognition accuracy at a much lower computational cost.

This paper investigates robust, low computation cost PCA based algorithms for ear recognition, building upon the successes of PCA based techniques in hyperspectral image processing. To accomplish this, it examines methods of generating a hyperspectral-like image from an input grayscale image in order to increase the matching accuracy of PCA based ear recognition techniques. This investigation has resulted in a multi-band, single image PCA based ear recognition technique, called Two Dimensional Multi-Band PCA (2D-MBPCA). Initial experimental results for 2D-MBPCA were published in [36]. That algorithm either uses the greedy hill climbing optimization method or splits the full grayscale range equally among the target number of images. It then performs the standard PCA method on the resulting set of images, extracting their principal components as features, which are used for recognition. The performance of the algorithm was assessed using images from two benchmark datasets, and the results show that 2D-MBPCA significantly outperforms PCA based matching algorithms. Unlike other PCA based methods, the technique presented here does not require input images to be projected into a common eigenspace. Instead, the input image is divided into multiple images based on its pixel values, representing the data in a novel fashion that can then be subjected to PCA. In [36], the authors presented two methods, called equal size and greedy hill climbing based partitioning techniques, for generating multiple images from the input image. This paper presents further results, including a new partitioning method, called histogram based boundary calculation, which utilizes the histogram of a set of training images to determine the pixel value boundaries. The proposed technique then applies the standard PCA method to each resulting set of images to extract their principal components, which are used as features for recognition. To maximize the performance of the proposed technique, the intersection of the number-of-features and total eigenvector energy curves, which is empirically consistent with the matching performance of the proposed technique, is used to determine the optimal number of images to be generated from the input image. Experimental results on the images of two benchmark ear image datasets demonstrate that the proposed 2D-MBPCA technique greatly outperforms traditional PCA applied to single images and the well-known 'eigenfaces' technique [31], while providing very competitive results with those of the learning based techniques. Moreover, the computational burden of the proposed 2D-MBPCA algorithm has been evaluated and compared with other PCA based and learning based ear recognition techniques, demonstrating that 2D-MBPCA generates results competitive with learning based algorithms at a fraction of their computational cost. The proposed 2D-MBPCA can also be applied to other biometric modalities, such as iris recognition, as demonstrated by Ghaffari et al. [12], who reported significantly higher recognition performance than that of traditional techniques. In addition, the proposed algorithm can be applied to other biometric applications, including face recognition.
Furthermore, the proposed algorithm has significant potential to be incorporated with other statistical or learning based recognition techniques and to improve their performance. The main contributions of this paper are: a) development of a Two Dimensional Multi-Band PCA (2D-MBPCA) ear recognition technique; b) generation of a hyperspectral-like image cube from a single grayscale image; c) use of three different methods, called equal size, greedy hill climbing, and histogram based boundary calculation, for generating multiple images from the input image; and d) use of the intersection of the number-of-features and total eigenvector energy curves to determine the optimal number of images to be generated from the input image, together with its experimental verification.

The rest of the paper is organized as follows: Section 2 introduces the proposed 2D-MBPCA method and its image partitioning algorithms, Section 3 details the benchmark ear image datasets and presents the experimental results, and Section 4 concludes the paper.

2 2D multi-band PCA technique

In this section, the proposed 2D Multi-Band PCA (2D-MBPCA) ear recognition technique is presented. The method is inspired by the application of PCA to hyperspectral images, which has been shown to produce highly accurate classification results. 2D-MBPCA divides the input ear image into multiple bands, mimicking a hyperspectral image, and then applies standard PCA to the resulting bands. The resulting eigenvectors are used as features for matching. Consequently, 2D-MBPCA translates the success of PCA based hyperspectral image classification to the single image ear recognition domain. The proposed 2D-MBPCA method comprises the following four components: A) pre-processing; B) multiple-image generation; C) standard PCA; and D) matching. Figure 1 shows the block diagram of the proposed algorithm.

Fig. 1 A block diagram of the proposed 2D-MBPCA algorithm

2.1 Pre-processing

Let E be the set of all ear images, where each image in E is of size x × y. It is assumed that the input image \(e \in E\) is an 8-bit grayscale image. First, each pixel value \(p \in e\) is converted to a new value \(p'\) as shown in (1):

$$ p' = p/255 $$
(1)

Histogram equalization is then applied to the resulting image to increase its contrast. To do so, the Probability Mass Function (PMF) \(P_X\) of the image is first calculated:

$$ P_{X} (x_{k} )=P(X=x_{k} ) \text{ for } k=0,1,...,255 $$
(2)

where \(X = \{x_{0}, x_{1}, \ldots, x_{255}\}\) represents the pixel values and \(P_X(x_k)\) is the probability of a pixel falling in bin k. The resulting PMF is then used to calculate the Cumulative Distribution Function (CDF) \(C_X\) of the image:

$$ C_{X} (k)=P(X \le x_{k}) \text{ for } k=0,1,...,255 $$
(3)

where \(C_X(k)\) is the cumulative probability of \(X \le x_k\). Finally, each pixel value within the image is mapped to a new value using the resulting CDF. After histogram equalization, each of the resulting images in E is ready to be converted into multiple images.
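As an illustration, the following is a minimal NumPy sketch of this pre-processing step, assuming the input is an 8-bit grayscale image stored as a 2D array; the function name is illustrative rather than taken from the authors' implementation (which was written in MATLAB).

```python
import numpy as np

def preprocess(image_8bit):
    """Normalize an 8-bit grayscale ear image to [0, 1] and apply histogram
    equalization through the empirical CDF, following (1)-(3)."""
    # (1) scale pixel values to [0, 1]
    p = image_8bit.astype(np.float64) / 255.0

    # (2) probability mass function over 256 equally sized bins
    hist, _ = np.histogram(p, bins=256, range=(0.0, 1.0))
    pmf = hist / p.size

    # (3) cumulative distribution function
    cdf = np.cumsum(pmf)

    # map each pixel to the cumulative probability of its bin
    bin_idx = np.minimum((p * 256).astype(int), 255)
    return cdf[bin_idx]
```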

2.2 Multiple-image generation

The proposed 2D-MBPCA method can use any method to generate multiple images from the histogram equalized input image. In this investigation, three boundary calculation methods, called equal size, histogram based, and greedy hill climbing based, were used to determine the boundaries. These methods are detailed in the following subsections. The multiple image generation can be formulated as follows. Assume e is the histogram equalized input image and let N be the number of images to be generated from e. The proposed algorithm uses N-1 boundaries to split the pixels of the input image into N target images according to their values. Let \(B = \{b_{1}, b_{2}, \ldots, b_{N-1}\}\) be the boundary values. The pixels of the input image e are then divided into N target images as follows:

  1. Generate N images of the same size as e and set all their pixels to zero. Let these images be \(F = [f_{1}, f_{2}, \ldots, f_{N}]\).

  2. Assign each pixel p in e whose value lies in the range \([0, b_{1}), [b_{1}, b_{2}), \ldots, [b_{N-1}, 1]\) to image \(f_{1}, f_{2}, \ldots, f_{N}\), respectively.

The input image e has now been partitioned into the set of images F, which can be considered, in a sense, a multispectral image in which each \(f \in F\) captures its own intensity band. Figure 2 shows an example of multiple image generation, where the input image has been divided into four images using equal size boundary calculation, yielding the bands [0, 0.25), [0.25, 0.5), [0.5, 0.75), and [0.75, 1] for images \(f_{1}\), \(f_{2}\), \(f_{3}\), and \(f_{4}\), respectively.
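The band-image generation step described above can be sketched as follows, assuming the pre-processed image has values in [0, 1] and the boundaries have already been computed by one of the methods in the following subsections; the function name is illustrative.

```python
import numpy as np

def generate_band_images(e, boundaries):
    """Split a pre-processed image e (values in [0, 1]) into N band images
    using the N-1 boundary values, as described above."""
    # interval edges: [0, b1), [b1, b2), ..., [b_{N-1}, 1]
    edges = np.concatenate(([0.0], np.sort(boundaries), [np.inf]))
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        f = np.zeros_like(e)          # start from an all-zero image
        mask = (e >= lo) & (e < hi)   # pixels belonging to this band
        f[mask] = e[mask]
        bands.append(f)
    return bands                      # N images, one per intensity band
```

For example, `generate_band_images(e, [0.25, 0.5, 0.75])` yields the four band images illustrated in Fig. 2.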

Fig. 2 An example of multiple image generation: a) pre-processed input image e from the IITD II dataset [17]; resulting multiple images: b) f1, c) f2, d) f3, e) f4

In this research, three different methods to calculate the boundaries for image partitioning, called equal size, histogram based, and greedy hill climbing based, are introduced. Equal size boundary calculation divides the pixel value range into equal bands based on the selected number of bands. The histogram based technique takes a training subset of the dataset and filters its histogram to determine the boundaries. The greedy hill climbing technique tries all possible positions at which to add a boundary; once a single boundary is found, it iterates to add further boundaries until the matching accuracy no longer improves. These methods are detailed in the following subsections.

2.2.1 Equal size boundary calculation

Let N be the number of target images into which the input image e is to be divided. The pixel value boundaries \(B = \{b_{1}, b_{2}, \ldots, b_{N-1}\}\) are then calculated using (4):

$$ b_{n}=n/N \text{ for } n=1,2, ..., (N-1) $$
(4)
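A minimal sketch of (4), with an example for N = 4 producing the boundaries used in Fig. 2; the function name is illustrative.

```python
def equal_size_boundaries(N):
    """Equally spaced pixel-value boundaries b_n = n / N, n = 1..N-1, as in (4)."""
    return [n / N for n in range(1, N)]

# equal_size_boundaries(4) -> [0.25, 0.5, 0.75]
```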

2.2.2 Histogram based boundary calculation

The proposed histogram based boundary calculation method first divides the pre-processed image dataset into training and test images. The algorithm then calculates the histogram of all images within the training set using 256 equally sized bins. The resulting histogram is smoothed using a 1D Gaussian low pass filter with coefficients [0.25 0.5 0.25]. The minima of the smoothed histogram are then used as the boundaries for the 2D-MBPCA algorithm. This process is repeated several times to determine the set of boundaries that maximizes the accuracy on the training set, and the resulting boundaries are finally used for testing. An example of such a histogram is shown in Fig. 3, and the block diagram of the histogram based technique is shown in Fig. 4.
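The following sketch illustrates one plausible implementation of the histogram based boundary calculation, assuming pre-processed training images with values in [0, 1]. Interpreting the filter order (see Section 3.7) as the number of passes of the [0.25 0.5 0.25] kernel is an assumption, as is the function name.

```python
import numpy as np

def histogram_boundaries(training_images, filter_order=14):
    """Estimate band boundaries from the pooled 256-bin histogram of a
    training set: smooth the histogram with a Gaussian-like low pass
    kernel and use its local minima as boundaries (a sketch)."""
    pooled = np.concatenate([img.ravel() for img in training_images])
    hist, edges = np.histogram(pooled, bins=256, range=(0.0, 1.0))

    # repeated convolution with [0.25 0.5 0.25] approximates a
    # higher-order Gaussian low pass filter (assumed interpretation)
    kernel = np.array([0.25, 0.5, 0.25])
    smoothed = hist.astype(np.float64)
    for _ in range(filter_order):
        smoothed = np.convolve(smoothed, kernel, mode="same")

    # interior local minima of the smoothed histogram become boundaries
    minima = np.where((smoothed[1:-1] < smoothed[:-2]) &
                      (smoothed[1:-1] < smoothed[2:]))[0] + 1
    return edges[minima]
```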

Fig. 3 A histogram of the training set from the IITD II dataset [17] with the partitions \(B = \{b_{1}, b_{2}, \ldots, b_{N-1}\}\)

Fig. 4 A block diagram of the histogram based boundary calculation method

2.2.3 Greedy hill climbing based boundary calculation

The greedy hill climbing algorithm calculates the boundaries by iteratively running 2D-MBPCA on a training set of images. The algorithm is initialized with a set of training images called input_images, an empty set of boundaries called bnds, and a measure of the overall best matching percentage found so far called top_percent. It then attempts to find the first optimal boundary to add to the set of boundaries found. This is accomplished using a variable curr_bnd, representing the candidate boundary to be added to the set of optimal boundaries (bnds), which is initialized to a pre-selected step size κ. Two other variables, best_percent and opt_bnd, which hold the best matching accuracy found in this iteration and its associated boundary point, are initialized to zero. While curr_bnd is less than one, a temporary set of boundaries temp_bnds is created by copying bnds, concatenating curr_bnd, and sorting the result. 2D-MBPCA is performed on input_images using temp_bnds, with the resulting correct match percentage stored as matching_percent. If matching_percent is higher than the best_percent found so far in this iteration, best_percent is set to matching_percent and curr_bnd is saved as opt_bnd. The variable curr_bnd is then incremented by κ. When curr_bnd finally exceeds one, the algorithm has a single boundary opt_bnd that may be permanently added to the boundary set bnds. If the overall matching percentage has increased in this iteration by adding opt_bnd (i.e., if best_percent is greater than top_percent), top_percent becomes best_percent, opt_bnd is permanently added to bnds, bnds is sorted, and a new iteration begins. If adding opt_bnd does not increase the overall matching percentage, the algorithm returns the already calculated set bnds as the best set of boundaries found, and these boundaries are then used on the test set of images. A block diagram of the proposed greedy hill climbing technique is shown in Fig. 5.

Fig. 5 A block diagram of the proposed greedy hill climbing based boundary calculation method

A κ value of 0.05 was chosen as a compromise between performance, overfitting, and computation time for the results presented in this paper. Although this greedy hill climbing approach is not guaranteed to find the global optimum for boundary values, it produces sufficient results while simultaneously reducing computation time when compared to a brute force method.
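A compact sketch of the greedy hill climbing procedure described above is given below. The evaluate callback, assumed to run 2D-MBPCA with the candidate boundaries on the training images and return the correct-match percentage, and the function name are illustrative.

```python
def hill_climb_boundaries(input_images, evaluate, kappa=0.05):
    """Greedy hill climbing over boundary values (a sketch of the procedure
    described in Section 2.2.3). `evaluate(images, bnds)` is assumed to run
    2D-MBPCA with the given boundaries and return the matching accuracy."""
    bnds = []            # boundaries found so far
    top_percent = 0.0    # overall best matching percentage
    while True:
        best_percent, opt_bnd = 0.0, None
        curr_bnd = kappa
        # try every candidate boundary position on a grid of step kappa
        while curr_bnd < 1.0:
            temp_bnds = sorted(bnds + [curr_bnd])
            matching_percent = evaluate(input_images, temp_bnds)
            if matching_percent > best_percent:
                best_percent, opt_bnd = matching_percent, curr_bnd
            curr_bnd += kappa
        # stop when adding another boundary no longer improves accuracy
        if opt_bnd is None or best_percent <= top_percent:
            return bnds
        top_percent = best_percent
        bnds = sorted(bnds + [opt_bnd])
```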

2.3 Principal component analysis

Assume F is a set of N images of the same size, \(F = \{f_{1}, f_{2}, \ldots, f_{N}\}\). For each image \(f \in F\), a mean adjusted image \(f'\) is created as follows:

$$ f' = f - \overline{f} $$
(5)

where \(\overline{f}\) is the mean value of the pixels in image f. Every image \(f'\) is then converted to a column-wise vector, allowing F to be represented as a two dimensional matrix S. PCA is then performed using Singular Value Decomposition (SVD) on matrix S, yielding the following decomposition:

$$ S = U {\Sigma} V^{T} $$
(6)

where U is a unitary matrix, the columns of V are the orthonormal eigenvectors of the covariance matrix of S, and Σ is a diagonal matrix of the corresponding singular values. The eigenvectors form a basis for an eigenspace for each set of images F. The resulting principal components in V are finally used for matching.
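A minimal sketch of this step, assuming the band images are NumPy arrays of identical size; the function name is illustrative and the convention for V follows the description above.

```python
import numpy as np

def band_pca(band_images):
    """Apply standard PCA to a set of band images F via SVD (Section 2.3)
    and return the principal components (columns of V) used as features."""
    # mean-adjust each band image and stack the images as columns of S
    cols = [(f - f.mean()).ravel() for f in band_images]
    S = np.stack(cols, axis=1)                    # shape: (x*y, N)

    # S = U Sigma V^T; columns of V are ordered by decreasing singular value
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    return Vt.T, sigma                            # V and its singular values
```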

2.4 Matching

Let \(M = [m_{1}, m_{2}, \ldots, m_{N-1}]\) be the set of principal components of a query image q and let r be an image in the image dataset R with principal components \(L = [l_{1}, l_{2}, \ldots, l_{N-1}]\). Each Euclidean distance \(d_{n} \in D = [d_{1}, d_{2}, \ldots, d_{N-1}]\) between q and r can be calculated using (7):

$$ d_{n}=\sqrt{\sum\left(m_{n}-l_{n}\right)^{2}} $$
(7)

After the calculation of the Euclidean distances between the principal components, they are averaged into a single distance metric, as written in (8):

$$ AvD=\frac{1}{N-1}\sum\limits_{n=1}^{N-1} d_{n} $$
(8)

The best match for query image q in the image dataset R is the image for which AvD is minimized.
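A minimal sketch of the matching step, under the assumption that the principal components of each image are stored as a list of equal-length vectors; function names are illustrative.

```python
import numpy as np

def average_distance(M, L):
    """Average Euclidean distance between corresponding principal
    components of a query and a database image, as in (7)-(8)."""
    dists = [np.linalg.norm(m - l) for m, l in zip(M, L)]
    return sum(dists) / len(dists)

def best_match(query_components, database_components):
    """Index of the database image whose components minimize AvD;
    `database_components` is assumed to be a list of per-image
    component lists."""
    scores = [average_distance(query_components, L) for L in database_components]
    return int(np.argmin(scores))
```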

3 Experimental results

To assess the performance of the proposed 2D-MBPCA technique and compare it against the standard Principal Component Analysis (PCA) method for ear recognition, experimental results were generated using two benchmark ear image datasets, the Indian Institute of Technology Delhi II (IITD II) [17] and the University of Science and Technology Beijing I (USTB I) [8], which are widely used in the literature [3, 4, 25, 27, 36]. These two datasets were selected due to their widespread use and because they are pre-aligned. The IITD II dataset consists of 793 images of the right ear of 221 participants. Each participant was photographed between three and six times, with each image being an 8-bit grayscale image of size 180 × 50 pixels. The images of the IITD II dataset are tightly cropped, of equal size, and manually centered and aligned. The USTB I dataset consists of 180 images of the right ear of 60 participants, each of whom was photographed three times. The images in this dataset are 8-bit grayscale of size 150 × 80 pixels. The images in USTB I are tightly cropped; however, they exhibit some slight rotation and shearing. Example images from these two datasets are illustrated in Fig. 6, where Fig. 6a-b and c-d show images from the IITD II and USTB I datasets, respectively.

Fig. 6 Sample images of two unique individuals from the IITD II dataset (a-b) [17] and from the USTB I dataset (c-d) [8]

The proposed 2D-MBPCA technique, using the three different boundary selection algorithms described in Section 2.2, the standard single-image PCA method, and the eigenfaces method were applied to the images of the two datasets. The algorithms were all assessed via the Rank-1 and Rank-5 criteria [10]. For each subject, two images were randomly selected to serve as database images and a third image was randomly selected as a query image. Given a particular query image, if it was correctly matched with an image of the same subject in the database, it was marked as a Rank-1 image. Similarly, if an image of the same subject was found within the five closest images to the query image, it was marked as a Rank-5 image. The percentages of Rank-1 and Rank-5 images in the dataset are then reported as the Rank-1 and Rank-5 accuracies. This process was repeated for two additional permutations, with the Rank-1 and Rank-5 accuracies averaged across all trials. It should be mentioned that 10% of each image dataset was randomly selected and used for calculating the boundary values in the histogram based boundary calculation experiment. The same 10% was also used for tuning the κ parameter for the greedy hill climbing based boundary calculation. The remaining 90% of each image dataset was then used to generate the experimental results.
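The Rank-1 and Rank-5 criteria can be sketched as follows, assuming a precomputed matrix of AvD distances between query and database images; names are illustrative.

```python
import numpy as np

def rank_k_accuracy(distance_matrix, query_labels, db_labels, k):
    """Percentage of queries whose correct subject appears among the k
    database images with the smallest distances (Rank-1 for k=1,
    Rank-5 for k=5). distance_matrix[i, j] is assumed to hold the AvD
    between query i and database image j."""
    hits = 0
    for i, q_label in enumerate(query_labels):
        nearest = np.argsort(distance_matrix[i])[:k]
        if any(db_labels[j] == q_label for j in nearest):
            hits += 1
    return 100.0 * hits / len(query_labels)
```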

3.1 Experimental results for the standard PCA method

To generate results for the standard PCA method, PCA was applied to each image individually. The resulting eigenvectors were then used for matching. The results for both the IITD II and USTB I image datasets are presented in Table 1. From Table 1, it can be seen that the performance of PCA on IITD II is higher than its performance on USTB I. This can be explained by the fact that some images within the USTB I dataset are slightly rotated.

Table 1 Rank-1 and Rank-5 matching accuracy (%) for Standard PCA

3.2 Experimental results for independent component analysis

Independent Component Analysis (ICA) was applied to each input image, extracting its components. The resulting components were then used for matching. The results for both the IITD II and USTB I image datasets are presented in Table 2. From Table 2, it can be seen that, like PCA, the performance of ICA on the IITD II dataset is higher than its performance on the USTB I dataset.

Table 2 Rank-1 and Rank-5 matching accuracy (%) for ICA

3.3 Experimental results for the eigenfaces method

To create experimental results for the eigenfaces PCA based method [6], 10% of the images of each ear dataset were used to calculate the eigenvectors, and the remaining images were projected onto those eigenvectors to create eigenears. The resulting eigenears were compared using the Euclidean distance. The experimental results are tabulated in Table 3. From Table 3, it can be seen that the eigenfaces method vastly outperforms single image PCA on the images of the IITD II dataset, yet shows only a marginal performance increase on the images of the USTB I dataset. This is due to the fact that some of the images of the USTB I dataset are rotated and PCA is not a rotation invariant transform.

Table 3 Rank-1 and Rank-5 matching accuracy (%) for the ‘eigenfaces’ PCA method [6]

3.4 Experimental results for the 2DPCA method

The 2DPCA method [32] was applied to each input image, generating its feature matrix. The resulting feature matrices were used for matching using the nearest neighbor distance measure. The experimental results are tabulated in Table 4. From Table 4, it can be seen that the 2DPCA method outperforms both single image PCA and the ‘eigenfaces’ technique on the images of both datasets.

Table 4 Rank-1 and Rank-5 matching accuracy (%) for the 2DPCA method [32]

3.5 Experimental results for the (2D)2 PCA method

The (2D)2 PCA method [38] was applied to each input image, generating its feature matrix. As in 2DPCA, the resulting feature matrices were used to perform matching using the nearest neighbor distance measure. The experimental results are tabulated in Table 5. From Table 5, it can be seen that the (2D)2 PCA method slightly outperforms the 2DPCA algorithm on which it is based, as well as single image PCA and the 'eigenfaces' technique.

Table 5 Rank-1 and Rank-5 matching accuracy (%) for the (2D)2 PCA method [38]

3.6 Experimental results for the proposed 2D-MBPCA using equal size boundaries

The proposed 2D-MBPCA method was applied to both the IITD II and USTB I datasets as discussed in Section 3. Experimental results were generated for up to twenty bands with a step size of one. For each number of bands (N), the number of principal components used for matching was varied between one and N-1. The number of correct matches was calculated for each combination of boundaries and number of principal components. A subset of the results for both the IITD II and USTB I datasets is presented in Tables 6, 7, 8, and 9, with the most accurate trial in bold. The matching accuracy produced using the equal band size method increases with the number of partitions up to a certain point and then decreases. For brevity, only the results up to and including the maximum accuracy are included in the tables. From the results presented in Tables 6-9, it can be noted that the proposed 2D-MBPCA method greatly outperforms the standard PCA method on both the IITD II and USTB I datasets: the Rank-1 accuracy improved by 56.41% and 51.11%, and the Rank-5 accuracy improved by 44.19% and 38.99%, on the IITD II and USTB I datasets, respectively. In addition, the proposed 2D-MBPCA technique significantly outperforms the eigenfaces method, demonstrating a Rank-1 accuracy improvement of 2.98% and 20.18% on the images of the IITD II and USTB I datasets, respectively. Furthermore, it can be observed that there is a direct correlation between the number of features used for a given number of boundaries and the matching accuracy. Interestingly, both the IITD II and USTB I datasets require the same number of bands to reach their maximum Rank-1 accuracy.

Table 6 Rank-1 matching accuracy (%) using equal size boundaries on the IITD II dataset
Table 7 Rank-5 matching accuracy (%) using equal size boundaries on the IITD II dataset
Table 8 Rank-1 matching accuracy (%) using equal size boundaries on the USTB I dataset
Table 9 Rank-5 matching accuracy (%) using equal size boundaries on the USTB I dataset

3.7 Experimental results for the proposed 2D-MBPCA using histogram based boundaries

In this section, the performance of 2D-MBPCA is assessed when using the histogram based boundary selection algorithm introduced in Section 2.2.2. 10% of each dataset was withheld as a validation set. Experimental results for different filter orders were generated for each dataset individually using its respective validation set, and the filter order with the highest performance for each dataset was selected. Figures 7 and 8 show the relationship between filter order and matching accuracy for the IITD II and USTB I datasets. Tables 10 and 11 show a subset of the results from both figures, with the best performing filter order in bold. From these figures and tables, it can be seen that the matching accuracy increases as the order of the Gaussian low-pass filter increases, reaching a saturation point when the order of the filter is 14 and 18 for the IITD II and USTB I datasets, respectively. The resulting optimum filter orders were used to generate the experimental results.

Fig. 7 Rank-1 and Rank-5 matching accuracy as a function of filter order for the IITD II dataset

Fig. 8 Rank-1 and Rank-5 matching accuracy as a function of filter order for the USTB I dataset

Table 10 Rank-1 and Rank-5 matching accuracy (%) using histogram based boundaries on the IITD II dataset
Table 11 Rank-1 and Rank-5 matching accuracy (%) using histogram based boundaries on the USTB I dataset

3.8 Experimental results for the proposed 2D-MBPCA using greedy hill climbing based boundaries

For this experiment, 2D-MBPCA was performed using the greedy hill climbing based boundary selection method described in Section 2.2.3. κ values from 0.01 to 0.1 with a step size of 0.01 were tested on a 10% validation set, and the value κ = 0.05 was chosen as a compromise between matching accuracy and computational complexity. Although this approach is not guaranteed to find the global optimum for the boundaries, it produces sufficient results while simultaneously reducing computation time. The results are shown in Table 12, from which it can be seen that the greedy hill climbing based boundary selection method generates promising results on both datasets.

Table 12 Rank-1 and Rank-5 matching accuracy (%) using greedy hill climbing based boundary selection on the IITD II and USTB I datasets

To give the reader a summary of the performance of the proposed 2D-MBPCA when using its three different boundary selection algorithms, its matching accuracies on the IITD II and USTB I datasets are tabulated in Tables 13 and 14, respectively. Furthermore, the Cumulative Match Characteristic (CMC) curves for both datasets are presented in Figs. 9 and 10.

Table 13 Matching accuracy (%) of the proposed 2D-MBPCA technique using equal size, histogram, and greedy hill climbing boundary selection algorithms on the IITD II dataset
Table 14 Matching accuracy (%) of the proposed 2D-MBPCA technique using equal size, histogram, and greedy hill climbing boundary selection algorithms on the USTB I dataset
Fig. 9 CMC curve for the IITD II dataset for the Equal Size, Histogram, and Greedy Hill Climbing boundary techniques

Fig. 10 CMC curve for the USTB I dataset for the Equal Size, Histogram, and Greedy Hill Climbing boundary techniques

To enable the reader to compare the performance of the proposed 2D-MBPCA technique with both PCA based and state of the art learning based techniques, Rank-1 matching results for the PCA based techniques, including 2D-MBPCA, single image PCA, ICA, eigenfaces [6], 2DPCA [32], and (2D)2 PCA [38], as well as results for the learning based techniques, including BSIF with SVM [4] and neural network with SVM [27], are tabulated in Table 15. In addition, CMC curves for the PCA based algorithms are presented in Figs. 11 and 12. From Table 15, it can be seen that the proposed method significantly outperforms the state of the art PCA based techniques. Furthermore, these results show that 2D-MBPCA gives results competitive with those of the learning based algorithms on the images of both the IITD II and USTB I datasets. From Figs. 11 and 12, it is evident that the proposed 2D-MBPCA algorithm gives significantly higher performance than single image PCA and the eigenfaces techniques. Experimental results also show that the proposed method is slightly sensitive to ear rotation and yaw; therefore, an image registration step, which adjusts images according to a template orientation, could mitigate the effect of ear rotation and yaw, resulting in improved accuracy.

Table 15 Rank-1 matching accuracy (%) for the proposed 2D-MBPCA and the state of the art PCA and learning based ear recognition methods
Fig. 11 CMC curve for the IITD II dataset for the Single Image PCA, Eigenfaces, and 2D-MBPCA techniques

Fig. 12 CMC curve for the USTB I dataset for the Single Image PCA, Eigenfaces, and 2D-MBPCA techniques

3.9 Justification of the achieved performance

From the experimental results, it is clear that the proposed 2D-MBPCA technique significantly outperforms other PCA based methods. This improvement can be explained by the fact that the proposed technique expands the feature space by a factor of b − 1, where b is the number of bands: the number of features for single image PCA is xy, while the number of features for 2D-MBPCA is xy(b − 1), where the original image is of size x × y. Although increasing the number of bands linearly increases the feature space, the effectiveness of the features is limited by the energy of the individual eigenvectors. Consequently, there is a theoretical limit on the maximum number of features, and thus on the number of bands, that can be used for matching. This limitation is consistent with the experimental results in Section 3.8, where the matching performance of 2D-MBPCA first increases as the number of bands increases, reaches a maximum, and then decreases. To demonstrate this finding, the number of features and the total eigenvector energy for each number of bands were calculated for both datasets and are illustrated in Figs. 13 and 14.
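For illustration, using the IITD II image size reported in Section 3 (180 × 50 pixels) and a band count of six, which is approximately where the curves in Figs. 13 and 14 intersect, the feature counts are:

$$ xy = 180 \times 50 = 9000 \ \text{(single image PCA)}, \qquad xy(b-1) = 9000 \times 5 = 45000 \ \text{(2D-MBPCA, } b = 6\text{)} $$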

Fig. 13 The number of features and total eigenvector energy versus the number of bands, where the intersection indicates the number of bands for maximum achievable performance, for the IITD II dataset [17]

Fig. 14 The number of features and total eigenvector energy versus the number of bands, where the intersection indicates the number of bands for maximum achievable performance, for the USTB I dataset [8]

From Figs. 13 and 14, it can be seen that as the number of bands increases, the total eigenvector energy decreases. The intersection of the eigenvector energy and number-of-features curves occurs at approximately six bands in both figures; this is exactly the number of bands that produces the maximum matching accuracy for the proposed technique on the IITD II and USTB I datasets using the greedy hill climbing technique. This implies that the distribution of the energy across the image eigenvectors is a function of the number of images into which the input image is divided. Increasing the number of multi-band images changes the distribution of the energy across the image eigenvectors: it initially consolidates most of the eigenvector energy into a smaller number of eigenvectors, until the optimal number of images is reached. As the number of multi-band images rises above this optimum, the eigenvector energy is spread across more eigenvectors. This theoretically puts a cap on the maximum number of multi-band images.

3.10 Execution time

Ear recognition techniques are generally classified into two main categories: statistical based and learning based techniques. Statistical based techniques, including PCA, ICA, Eigenfaces, 2DPCA, (2D)2 PCA, and the proposed 2D-MBPCA algorithm, extract some statistics or features directly from the image and use these features to find the best match, while learning based techniques use a range of information including image statistics, features, and other data extracted from the image dataset to train classifiers such as neural networks and support vector machines. Learning based techniques then use the trained classifiers to find the best match for an input query image. Consequently, learning based ear recognition algorithms are much more computationally expensive than their statistical based counterparts.

To give the reader a sense of the computational complexity of the proposed 2D-MBPCA algorithm with respect to other statistical based methods, as well as state of the art learning based techniques, 2D-MBPCA, single image PCA, ICA, eigenfaces [6], 2DPCA [32], (2D)2 PCA [38], BSIF with SVM [4], and neural network with SVM [27] were implemented in MATLAB. The resulting algorithms were executed on a Windows 10 personal computer equipped with a 7th generation Intel Core i7 processor, an Nvidia GTX 1080 graphics card, and a 512 GB Toshiba NVMe solid-state drive (no other applications, updates, or background programs were running during the computation). The average computation time for processing a query image using each algorithm was measured using 100 randomly selected query images from each dataset (the learning based techniques had already been trained, and their training time is not included in the measurement). The resulting measurements are tabulated in Table 16.

Table 16 Average execution time (milliseconds) of the proposed 2D-MBPCA and the state of the art PCA based and learning based algorithms

From Table 16, it can be seen that the execution time of the proposed technique is almost the same as that of the single image PCA method. However, the eigenfaces technique is significantly faster than the proposed 2D-MBPCA algorithm. This is because 2D-MBPCA performs PCA on each query image, whereas the eigenfaces method simply projects the query image onto pre-calculated eigenvectors. In comparison to the learning based techniques, the execution time of the proposed 2D-MBPCA technique is significantly lower while generating competitive matching performance. The learning based methods require a training phase which is computationally intensive and has not been counted in the results presented in Table 16. In addition, the performance of learning based techniques deteriorates significantly under cross-dataset validation, while the performance of the proposed 2D-MBPCA method is independent of the dataset. It is well known that the performance of learning based techniques depends on the feature extraction techniques they use. The results presented in this paper, which show that the proposed 2D-MBPCA algorithm achieves significantly higher performance than statistical based techniques, indicate that 2D-MBPCA extracts more high-energy features, which presumably convey more accurate information about the image, than other PCA based techniques. Therefore, the proposed 2D-MBPCA technique has the potential to further improve the performance of learning based ear recognition algorithms when used as their primary feature extractor. Furthermore, 2D-MBPCA has the potential to improve areas in which multi-banding may help with separation, such as image and video object segmentation [19,20,21].

4 Conclusion

In this paper, a Two Dimensional Multi-Band PCA (2D-MBPCA) method was presented. The proposed algorithm takes an ear image and performs histogram equalization on it. Multiple images are then generated from the resulting histogram-equalized image using one of three boundary selection techniques. The standard PCA method is then applied to the resulting images, extracting their eigenvectors, which are used for feature matching via the Euclidean distance. The proposed technique uses the intersection of the number-of-features and total eigenvector energy curves, which was consistent with the empirical performance of the proposed technique on two image datasets, as the optimal number of multiple images to be created from the input image. Experimental results show that the proposed 2D-MBPCA algorithm significantly outperforms both the standard PCA and the eigenfaces methods on images from two standard benchmark datasets. Furthermore, the proposed technique gives results competitive with those of the learning based methods at a fraction of their computational cost.