1 Introduction

The increasing capacity of data storage technology has been accompanied by enormous growth in multimedia content, especially image repositories. This has created the need for efficient content-based image indexing and retrieval frameworks. Here, content refers to image features such as color, texture, and shape. The advantage of content-based image indexing is that an index is assigned to an image automatically based on its content rather than on textual keywords. Techniques based on image content have shown strong results in retrieving relevant images from large repositories.

The local binary pattern, popularly known as LBP, is a texture-based image indexing and retrieval framework proposed by Ojala et al. [1]. LBP is a simple and efficient operator that extracts texture features from an image using a 3 × 3 neighborhood. The LBP operator represents an image using 256 distinct binary patterns.

Other variants of LBP include rotation-invariant LBP and uniform LBP, which represent an image efficiently with fewer feature vectors compared to the standard LBP.

Due to its effectiveness and simplicity, the LBP operator has been used in a number of applications such as face recognition [2], facial expression recognition [3], texture segmentation [4], and texture-based classification [5–7].

Various extensions to the standard LBP have been proposed in the literature for image and face classification applications [8–13].

An extension of the LBP operator, the volume local binary pattern (VLBP), has been proposed for representing texture in video [10, 11].

The rest of this chapter is organized as follows: Sect. 2 discusses the LBP in detail; Sect. 3 describes our proposed novel frameworks; Sect. 4 discusses experiments and results; and Sect. 5 concludes the chapter.

2 Local Binary Pattern

The local binary pattern operator, proposed by Ojala et al. [1], is mainly used for texture-based image classification and retrieval. The standard LBP operation is based on a 3 × 3 neighborhood. In this chapter, we mainly concentrate on a 3 × 3 neighborhood for proposing the new frameworks. The standard steps for finding the LBP code of an image are as follows:

  1. Step 1:

    After selecting a 3 × 3 neighborhood, threshold the eight neighboring pixels based on the sign of the difference between each neighbor and the center pixel value.

  2. Step 2:

    Multiply the thresholded results by the corresponding binary weights and sum all the values to obtain the code for the center pixel.

Let $p_{1}, p_{2}, \ldots, p_{8}$ be the neighbor pixels and $p_{c}$ the center pixel of a 3 × 3 neighborhood. Then, the LBP code for the center pixel is obtained by the following equation:

$${\text{LBP}}(p_{c}) = \sum\limits_{a = 1}^{8} {2^{a - 1} \, f(p_{a} - p_{c})}$$
(1)

where $f(v)$ is the threshold function

$$f(v) = \begin{cases} 1, & v \ge 0 \\ 0, & \text{otherwise} \end{cases}$$

and $p_{c}$ is the center pixel and $p_{a}$ is the $a$-th neighbor pixel.

The procedure for finding the local binary pattern (LBP) is shown in Fig. 1.

Fig. 1

Calculation of LBP values for center pixel of a 3 × 3 neighborhood
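To make the above procedure concrete, the following is a minimal Python/NumPy sketch of the standard 3 × 3 LBP computation. The function name lbp_image, the particular neighbor ordering, and the choice to skip border pixels are our own illustrative assumptions rather than part of the original formulation.

```python
import numpy as np

def lbp_image(img):
    """Compute the standard 3x3 LBP code for each interior pixel.

    Neighbors are weighted 1, 2, 4, ..., 128 going around the center,
    matching Eq. (1): sum over a of 2^(a-1) * f(p_a - p_c).
    Border pixels are skipped for simplicity.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # (row, col) offsets of the eight neighbors, visited in a fixed order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            center = img[r, c]
            code = 0
            for a, (dr, dc) in enumerate(offsets):
                if img[r + dr, c + dc] - center >= 0:  # threshold f(v)
                    code += 2 ** a   # a is 0-based, so 2**a equals 2^(a-1) of Eq. (1)
            out[r, c] = code
    return out
```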

3 Proposed Method

In this section, we propose three novel clockwise local difference binary pattern (CWLDBP) algorithms for a content-based image indexing and retrieval framework. Our framework is based on the 3 × 3 pixel neighborhoods of an image. Consider the 3 × 3 neighborhood shown in Fig. 2.

Fig. 2

An example of 3 × 3 neighborhood

As shown in Fig. 2, P1, P2, …, P9 denote the positions of the pixels in a 3 × 3 neighborhood. As explained earlier, our proposed method focuses on a 3 × 3 neighborhood.

3.1 Algorithm 1: For CWLDBP1

  1. Step 1:

    In a 3 × 3 neighborhood of an image, find the difference between the values of the pixels at positions P1 and P7, then P4 and P8, then P9 and P3, and then P6 and P2. That is

    • V1 = pixel value at (P1) − pixel value at (P7)

    • V2 = pixel value at (P4) − pixel value at (P8)

    • V3 = pixel value at (P9) − pixel value at (P3)

    • V4 = pixel value at (P6) − pixel value at (P2)

  2. Step 2:

    Threshold the values V1, V2, V3, and V4 using a threshold T; in our case, T = 0. That is, check whether each of V1, V2, V3, and V4 is greater than zero. If a value is greater than zero, it is set to one; otherwise, it is set to zero. For example

    • If V1 > 0

    • V1 = 1

    • Else

    • V1 = 0

    • End

  3. Step 3:

    Multiply the thresholded vector [V1, V2, V3, V4] obtained in Step 2 element-wise by the weight vector [1, 2, 4, 8] and sum the results to obtain the new code for the center pixel P5.

The procedure followed in the proposed Algorithm 1 is shown in Fig. 3, with an example calculation of the new value for the center pixel at position P5 (of Fig. 2).

Fig. 3

Example calculation of center pixel value using Algorithm 1
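As an illustration of Algorithm 1, the following is a minimal Python/NumPy sketch of CWLDBP1 applied to a full grayscale image. It assumes that the positions P1–P9 in Fig. 2 are numbered row by row (P1, P2, P3 on the top row; P4, P5, P6 in the middle; P7, P8, P9 on the bottom) and it skips border pixels; these choices are ours and are not specified in the chapter.

```python
import numpy as np

# (row, col) offsets of positions P1..P9 relative to the center P5,
# assuming a row-major numbering of Fig. 2.
POS = {1: (-1, -1), 2: (-1, 0), 3: (-1, 1),
       4: (0, -1),  5: (0, 0),  6: (0, 1),
       7: (1, -1),  8: (1, 0),  9: (1, 1)}

# CWLDBP1 difference pairs of Step 1: (P1, P7), (P4, P8), (P9, P3), (P6, P2).
PAIRS_1 = [(1, 7), (4, 8), (9, 3), (6, 2)]

def cwldbp1(img, pairs=PAIRS_1):
    """Compute the 4-bit CWLDBP1 code for each interior pixel."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            code = 0
            for i, (pa, pb) in enumerate(pairs):
                ra, ca = POS[pa]
                rb, cb = POS[pb]
                v = img[r + ra, c + ca] - img[r + rb, c + cb]
                if v > 0:             # Step 2: threshold with T = 0
                    code += 2 ** i    # Step 3: weights [1, 2, 4, 8]
            out[r, c] = code
    return out
```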

Fig. 4

Example calculation of center pixel value using Algorithm 2

3.2 Algorithm 2: For CWLDBP2

  1. Step 1:

    In a 3 × 3 neighborhood of an image, find the difference between the values of the pixels at positions P7 and P9, then P8 and P6, then P3 and P1, and then P2 and P4. That is

    • V1 = pixel value at (P7) − pixel value at (P9)

    • V2 = pixel value at (P8) − pixel value at (P6)

    • V3 = pixel value at (P3) − pixel value at (P1)

    • V4 = pixel value at (P2) − pixel value at (P4)

  2. Step 2:

    Threshold the values V1, V2, V3, and V4 using a threshold T; in our case, T = 0. That is, check whether each of V1, V2, V3, and V4 is greater than zero. If a value is greater than zero, it is set to one; otherwise, it is set to zero. For example

    • If V1 > 0

    • V1 = 1

    • Else

    • V1 = 0

    • End

  3. Step 3:

    Multiply the thresholded vector [V1, V2, V3, V4] obtained in Step 2 element-wise by the weight vector [1, 2, 4, 8] and sum the results to obtain the new code for the center pixel P5.

The procedure followed in the proposed Algorithm 2 is shown in Fig. 4, with an example calculation of the new value for the center pixel at position P5.

Fig. 5

Example calculation of center pixel value using Algorithm 3
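CWLDBP2 differs from CWLDBP1 only in which pixel pairs are differenced. The short sketch below works through Steps 1–3 for a single 3 × 3 patch; the patch values are made up for illustration, and the row-major P1–P9 layout is again our assumption.

```python
import numpy as np

# A hypothetical 3x3 patch; positions P1..P9 assumed row-major as in Fig. 2.
patch = np.array([[6, 5, 2],    # P1 P2 P3
                  [7, 6, 1],    # P4 P5 P6
                  [9, 8, 7]])   # P7 P8 P9
P = {k + 1: int(v) for k, v in enumerate(patch.flatten())}

# Step 1: CWLDBP2 differences (P7 - P9), (P8 - P6), (P3 - P1), (P2 - P4).
V = [P[7] - P[9], P[8] - P[6], P[3] - P[1], P[2] - P[4]]

# Steps 2 and 3: threshold at T = 0, then weight by [1, 2, 4, 8] and sum.
bits = [1 if v > 0 else 0 for v in V]
code = sum(b * w for b, w in zip(bits, [1, 2, 4, 8]))
print(bits, code)   # for this patch: [1, 1, 0, 0] -> 3
```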

3.3 Algorithm 3: For CWLDBP3

  1. Step 1:

    In a 3 × 3 neighborhood of an image, find the difference between the values of the pixels at positions P1 and P7, then P4 and P8, P9 and P3, P6 and P2, P7 and P9, P8 and P6, P3 and P1, and finally P2 and P4. That is

    • V1 = pixel value at (P1) − pixel value at (P7)

    • V2 = pixel value at (P4) − pixel value at (P8)

    • V3 = pixel value at (P9) − pixel value at (P3)

    • V4 = pixel value at (P6) − pixel value at (P2)

    • V5 = pixel value at (P7) − pixel value at (P9)

    • V6 = pixel value at (P8) − pixel value at (P6)

    • V7 = pixel value at (P3) − pixel value at (P1)

    • V8 = pixel value at (P2) − pixel value at (P4)

  2. Step 2:

    Threshold the values V1, V2, …, V8 using a threshold T; in our case, T = 0. That is, check whether each of V1, V2, …, V8 is greater than zero. If a value is greater than zero, it is set to one; otherwise, it is set to zero. For example

    • If V1 > 0

    • V1 = 1

    • Else

    • V1 = 0

    • End

  3. Step 3:

    Multiply the thresholded vector [V1, V2, V3, V4, V5, V6, V7, V8] obtained in Step 2 element-wise by the weight vector [1, 2, 4, 8, 16, 32, 64, 128] and sum the results to obtain the new code for the center pixel at position P5.

The procedure followed in the proposed Algorithm 3 is shown in Fig. 5, with an example calculation of the new value for the center pixel at position P5.
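CWLDBP3 combines the four differences of CWLDBP1 with the four differences of CWLDBP2 into an 8-bit code. A minimal sketch follows, again assuming a row-major P1–P9 layout and skipping border pixels; these are our assumptions.

```python
import numpy as np

# (row, col) offsets for P1..P9, assuming row-major numbering in Fig. 2.
POS = {1: (-1, -1), 2: (-1, 0), 3: (-1, 1),
       4: (0, -1),  5: (0, 0),  6: (0, 1),
       7: (1, -1),  8: (1, 0),  9: (1, 1)}

# Eight difference pairs of CWLDBP3: the CWLDBP1 pairs followed by the CWLDBP2 pairs.
PAIRS_3 = [(1, 7), (4, 8), (9, 3), (6, 2),
           (7, 9), (8, 6), (3, 1), (2, 4)]

def cwldbp3(img):
    """Compute the 8-bit CWLDBP3 code for each interior pixel."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            code = 0
            for i, (pa, pb) in enumerate(PAIRS_3):
                ra, ca = POS[pa]
                rb, cb = POS[pb]
                if img[r + ra, c + ca] - img[r + rb, c + cb] > 0:  # threshold T = 0
                    code += 2 ** i                                 # weights 1..128
            out[r, c] = code
    return out
```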

An example grayscale image from the Corel-1k dataset and the resultant images after applying the proposed algorithms are shown in Fig. 6.

Fig. 6

a Gray scale image; b result of CWLDBP1; c result of CWLDBP2; d result of CWLDBP3

4 Experiments and Results

In this section, we discuss the experiments and results of our proposed algorithms. To evaluate their performance, we have used the Corel-1k dataset, which consists of one thousand images in ten different categories (one hundred images per category). We compare our proposed algorithms with existing 3 × 3 neighborhood-based methods, namely rotation-invariant LBP (LBP-ri), uniform LBP (LBP-u2), and uniform rotation-invariant LBP (LBP-riu2).

In our experiments, to find the best matches for a given query image, we used the k-nearest neighbor search algorithm.
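As an illustration of the retrieval step, the sketch below describes each image by a 256-bin histogram of its pattern codes and ranks database images by a k-nearest-neighbor search over those histograms. The histogram descriptor and the Euclidean distance are our assumptions, since the chapter does not specify the feature vector or the distance measure.

```python
import numpy as np

def histogram_feature(code_img, n_bins=256):
    """Describe an image by the normalized histogram of its per-pixel codes.

    Assumption: a normalized code histogram is a common LBP-style descriptor;
    the chapter does not state the exact feature vector used.
    """
    hist, _ = np.histogram(code_img, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def knn_retrieve(query_feat, db_feats, k=10):
    """Return indices of the k nearest database images by Euclidean distance.

    db_feats is an (N, n_bins) array with one feature vector per image.
    """
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]
```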

The experimental results (precision in %, based on the top 10 images retrieved for each query image) are shown in Table 1.

Table 1 Results in terms of precision (n = 10) (%) for various categories (S. no) of images in Corel-1k dataset

The experimental results (recall in %, based on the top 100 images retrieved for each query image) are shown in Table 2.

Table 2 Results in terms of recall (n = 100) (%) for various categories (S. no) of images in Corel-1k dataset

In the above tables, the image categories of Corel-1k are 1-Africans, 2-Beaches, 3-Buildings, 4-Buses, 5-Dinosaurs, 6-Elephants, 7-Flowers, 8-Horses, 9-Mountains, and 10-Food; AVG denotes the average over all categories.
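For reference, precision and recall at n retrieved images can be computed as in the following sketch. The figure of 100 relevant images per category follows from the structure of Corel-1k; the function and variable names are illustrative.

```python
def precision_recall_at_n(retrieved_labels, query_label, n, category_size=100):
    """Precision and recall over the top-n retrieved images.

    Each Corel-1k category contains 100 images, so recall is measured
    against that category size.
    """
    top = retrieved_labels[:n]
    relevant = sum(1 for lab in top if lab == query_label)
    precision = relevant / n
    recall = relevant / category_size
    return precision, recall
```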

5 Conclusion

In this chapter, we have proposed three new algorithms based on a 3 × 3 neighborhood of an image, intended for a content-based image indexing and retrieval framework. We have evaluated the proposed algorithms on the Corel-1k dataset, and they have shown reasonable results in terms of precision and recall. In future work, we will try to improve the performance of the proposed algorithms with the help of Gabor filters.