An Efficient Two-layer based Technique for Content-based Image Retrieval

Fawzi Abdul Azeez Salih1, Alan Anwer Abdulla2,3*

1Department of Computer Science, College of Science, University of Sulaimani, Sulaimani, Iraq, 2Department of Information Technology, College of Commerce, University of Sulaimani, Sulaimani, Iraq, 3Department of Information Technology, University College of Goizha, Sulaimani, Iraq

Corresponding author’s e-mail: Alan Anwer Abdulla, Department of Information Technology, College of Commerce, University of Sulaimani, Sulaimani, Iraq; Department of Information Technology, University College of Goizha, Sulaimani, Iraq. E-mail: alan.abdulla@univsul.edu.iq
Received: 07-01-2021 Accepted: 30-03-2021 Published: 05-04-2021
DOI: 10.21928/uhdjst.v5n1y2021.pp28-40


ABSTRACT

The rapid advancement and exponential evolution of multimedia applications have raised research attention on content-based image retrieval (CBIR). This technique plays a significant role in searching for and finding images similar to a query image by extracting visual features. In this paper, an approach with two layers of search, known as two-layer based CBIR, has been developed. The first layer compares the query image to all images in the dataset based on local features extracted using the bag of features (BoF) mechanism, which retrieves a certain number of images most similar to the query image. In other words, the first layer aims to eliminate the images most dissimilar to the query image, reducing the range of search in the image dataset. In the second layer, the query image is compared to the images obtained in the first layer based on extracted texture- and color-based features. The discrete wavelet transform (DWT) and local binary pattern (LBP) were used as texture features. For the color features, three different color spaces were used, namely RGB, HSV, and YCbCr; the color spaces are utilized by calculating the mean and entropy of each channel separately. Corel-1K was used for evaluating the proposed approach. The experimental results demonstrate the superior performance of the proposed two-layer concept over current state-of-the-art techniques in terms of precision rate, achieving 82.15% and 77.27% for the top-10 and top-20, respectively.

Index Terms: CBIR, Feature Extraction, Color Descriptor, DWT, LBP

1. INTRODUCTION

Besides content-based image retrieval (CBIR), digital image processing plays a vital role in numerous areas such as medical image processing and analysis [1], image inpainting [2], pattern recognition [3], biometrics [4], multimedia security [5], and information hiding [6]. In the area of image processing and computer vision, CBIR has grown increasingly as an advanced research topic. CBIR refers to a system that retrieves images similar to a query image from an image dataset without the help of captions and/or descriptions of the images [7]. There are two main mechanisms for image retrieval: text-based image retrieval (TBIR) and CBIR [8]. TBIR was first introduced in 1970 as a way to search for and retrieve images from an image dataset [9]. In this kind of image retrieval mechanism, the images are denoted by text, and the text is then used to retrieve or search for the images. The TBIR method depends on manual text search or keyword matching against existing image keywords, and the result relies on human labeling of the images. The TBIR approach requires information such as image keywords, image location, image tags, image name, and other information related to the image. Human involvement is needed in the challenging process of entering information for the images in the dataset. The drawbacks of TBIR are as follows: 1- it leads to inaccurate results if humans annotate the dataset incorrectly; 2- a single keyword of image information is not effective in conveying the overall image description; and 3- it is based on manual annotation of the images, which is time consuming [10]. Researchers introduced CBIR as a new mechanism for image retrieval to overcome the above-mentioned limitations of TBIR. It is considered a popular technique for retrieving, searching, and browsing images matching query information from a broad dataset of images.
In CBIR, visual features such as low-level features (color, texture, and/or shape) or bag of features (BoF) are extracted from the images to find the most similar images in the dataset [11]. Fig. 1 illustrates the general block diagram of the CBIR mechanism [12].


Fig. 1. General block diagram of CBIR mechanism.

Fig. 1 shows the block diagram of a basic CBIR system, which involves two phases: feature extraction and feature matching. The first phase extracts the image features, while the second phase matches these features [13]. Feature extraction is the process of extracting features from the images in the dataset, storing them in feature vectors, and extracting features from the query image. Feature matching, on the other hand, is the process of comparing the features extracted from the query image with those extracted from the dataset images using a similarity distance measurement. An image in the dataset is considered a match to the query image if the distance between its feature vector and that of the query image is small enough. The matched images are then ranked by similarity, from the smallest distance value to the largest, and the retrieved images are selected according to the lowest distance values. The essential objective of CBIR systems is to improve efficiency by increasing performance through combinations of features [9]. Image features can be categorized into two types: global features and local features. Global features extract information from the entire image, while local features work locally, focusing on key points in the image [14]. For a large image dataset, the images relevant to a query image are very few; therefore, eliminating irrelevant images is important. The main contribution of this research is to first eliminate the irrelevant images in the dataset and then find the most similar images among the remaining ones. The remainder of the paper is organized as follows: Section 2 discusses the related work. Section 3 introduces background on the techniques used and presents the proposed approach. Section 4 shows the experimental results. Finally, Section 5 gives the conclusions.

2. RELATED WORK

Studies on developed CBIR techniques have been extensive and mainly focus on analyzing and investigating interest points/areas such as corners, edges, contours, maxima shapes, ridges, and global features [15]. Some of these approaches are concerned with combining/fusing certain types of extracted features, since such a strategy helps describe the image content efficiently [13], [16]. This section reviews the most important and relevant existing works on CBIR. The main competition in this research area is increasing the precision rate, which reflects the efficiency of retrieving the most similar images correctly. Kato et al. were the first to investigate this field of study, in 1992, developing a technique for sketch retrieval, similarity retrieval, and sense retrieval to support visual interaction [17]. Sketch retrieval accepts the image data of sketches, similarity retrieval evaluates similarity based on the personal view of each user, and sense retrieval evaluates text data and image data at the content level based on the personal view. Yu et al., in 2013, proposed an effective image retrieval system based on the BoF model, using two ways of integrating descriptors [18]. Scale-invariant feature transform (SIFT) and local binary pattern (LBP) descriptors were integrated on one hand, and histogram of oriented gradients (HOG) and LBP descriptors were integrated on the other. The first integration, namely SIFT-LBP, provided the better precision rate, reaching 65% for top-20 using the Jaccard similarity measurement. Shrivastava et al., in 2014, introduced a new scheme for CBIR based on region of interest (ROI) codes, in which an effective feature set consisting of a dominant color and LBP was extracted from both the query image and the dataset images. This technique achieved, using the Euclidean distance measurement, a precision rate of 76.9% for top-20 [19].
DWT, as a global feature, and the gray level co-occurrence matrix (GLCM), as a local feature, were extracted and fused in the algorithm introduced by Gupta et al., in 2015; as a result, a precision rate of 72.1% was obtained for top-20 using Euclidean distance [20]. Another technique was introduced by Navabi et al., in 2017, for CBIR based on extracting color and texture features. The technique used the color histogram and color moments as color features. The principal component analysis (PCA) statistical method was applied for dimensionality reduction. Finally, the Minkowski distance measurement was used to find the most similar images. As reported, this technique achieved a precision rate of 62.4% for top-20 [21]. Nazir et al., in 2018, proposed a new CBIR technique by fusing extracted color and texture features [22]. The color histogram (CH) was used to extract color information, and DWT as well as the edge histogram descriptor (EHD) were used to extract texture features. As the authors claimed, this technique achieved a precision rate of 73.5% for top-20 using the Manhattan distance measurement. Pradhan et al., in 2019, developed a new CBIR scheme based on the multi-level colored directional motif histogram (MLCDMH) [23]. This scheme extracts local structural features at three different levels. Its image retrieval performance has been evaluated using different Corel/natural, object, texture, and heterogeneous image datasets. For Corel-1K, precision rates of 64% and 59% were obtained for top-10 and top-20, respectively. Sadique et al., in 2019, developed a new CBIR technique by extracting global and local features [7]. A combination of the speeded up robust features (SURF) descriptor with color moments, as local features, and a modified GLCM, as a global feature, led this technique to obtain a precision rate of 70.48% for top-20 using the Manhattan similarity measurement. Also in 2019, Ahmed and Naqvi
proposed another technique for CBIR using object and color features [24]. The authors claimed that this technique outperformed others in certain categories of the benchmark datasets Caltech-101 and Corel-1000, gaining a precision rate of 76.5% for top-20 using Euclidean distance. Differently from the techniques discussed above, Qazanfari et al., in 2019, investigated the HSV color space for developing a CBIR technique [25]. As reported in this work, the human visual system is very sensitive to color as well as edge orientation, and the color histogram and color difference histogram (CDH) are two kinds of low-level feature extraction that are meaningful representatives of the image's color and edge orientation information. This technique used the Canberra distance measurement to measure the similarity between the extracted features of the query image and the images in the dataset, achieving a precision rate of 74.77% for top-20. Rashno et al., in 2019, developed an algorithm in which HSV, RGB, and the norm of low frequency components were used to extract color features, and DWT was used to extract texture features [26]. The ant colony optimization (ACO) feature selection technique was then used to select the most relevant features. Eventually, the Euclidean distance measurement was used to measure the similarity between the query and the images in the dataset. The results reported in this work showed that this approach reached a precision rate of 60.79% for top-20. Finally, Aiswarya et al., in 2020, proposed a CBIR technique that uses multi-level stacked autoencoders for feature selection and dimensionality reduction [27]. A query image space is created before the actual retrieval process by combining the query image with similar images from the local image dataset (images in the device gallery) to maintain the image saliency in the visual contents.
The features corresponding to the elements of the query image space are then searched against the characteristics of images in a global dataset. This technique achieved a precision rate of 67% for top-10.

3. BACKGROUND

This section provides detailed background information on the important techniques used in the proposed approach, namely the SURF feature descriptor, color-based features, texture-based features, and feature matching techniques.

3.1. SURF Feature Descriptor

Many feature descriptors are available, and SURF is one of the most common and significant; it can be considered a local feature descriptor. In comparison with global features such as color, texture, and shape, local features can capture more detailed characteristics of an image. This rotation- and scale-invariant descriptor performs well in terms of distinctiveness, repeatability, and robustness [12]. SURF is used in many applications, such as the bag of features (BoF) model, which has been used successfully in image analysis and classification [28]. In the BoF technique, the SURF descriptor is typically used first to extract local features. Then, K-means clustering is used to initialize M center points, creating M visual words. The K-means clustering algorithm takes the feature space as input and reduces it to M clusters as output. The image is then represented as a code word histogram by mapping the local features onto the vocabulary [28]. Fig. 2 illustrates the methodology of image representation based on the BoF model.


Fig. 2. Methodology of the BoF technique for representing Image in CBIR.

SURF features are extracted from the dataset images; then the K-means clustering algorithm takes the feature space as input and reduces it into clusters as output. The center of each cluster is called a visual word, and the combination of visual words forms the dictionary, also known as the codebook or vocabulary. Finally, using the visual words of the dictionary, a histogram is constructed for each image. The resulting histograms are then added to the inverted index of the BoF model [25].
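The vocabulary construction and histogram mapping described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: random vectors stand in for SURF descriptors (SURF itself is not assumed to be available), and the small k-means routine is a bare-bones stand-in for a library clusterer.

```python
import numpy as np

def kmeans(descriptors, k, iters=20, seed=0):
    # Minimal k-means: learns k "visual words" from local descriptors.
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center.
        d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned descriptors.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bof_histogram(descriptors, centers):
    # Map each local descriptor to its nearest visual word and count
    # word occurrences, giving the normalized code-word histogram.
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Toy example: 200 random 64-dimensional "descriptors", 8 visual words.
rng = np.random.default_rng(1)
desc = rng.random((200, 64))
vocab = kmeans(desc, k=8)
h = bof_histogram(desc, vocab)
print(h.shape)  # (8,)
```

In the paper's setting the vocabulary is much larger (k = 500 gives the best precision in Section 4), but the mechanics are the same.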

3.2. Texture-based Features Extraction

Texture-based features can be considered powerful low-level features for image search and retrieval applications. Many works on texture analysis, classification, and segmentation have been developed over the last four decades; yet, there is no unique definition of texture-based features. Texture is an attribute representing the spatial arrangement of the gray levels of the pixels in a region or image. In other words, texture-based features can be used to separate and extract prominent regions of interest in an image and apply to visual patterns that have properties of homogeneity independent of a single color or intensity [9]. Texture analysis methods can be categorized into statistical, structural, and spectral [15]. DWT and LBP are the two texture feature extraction methods used in this work.

3.2.1. Discrete wavelet transform (DWT)

The DWT is considered an efficient multiresolution technique and is easy to compute [29]. At each level, the signal is decomposed into four frequency sub-bands: low-low (LL), low-high (LH), high-low (HL), and high-high (HH) [30]. DWT is used to transform an image from the spatial domain into the frequency domain; the structure of the DWT is illustrated in Fig. 3 [22], [26].


Fig. 3. DWT sub-bands.

The wavelet transform can be applied to images as 2-dimensional signals. To decompose an image to k levels, the transform is first applied to all rows up to level k while the columns of the image are kept unchanged. Then, the same operation is applied to the columns while keeping the rows unchanged. In this manner, the frequency components of the image are obtained up to level k. These frequency components at various levels allow better analysis of the original image or signal [26]. For more details about DWT, see [31].
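The rows-then-columns decomposition can be sketched with a one-level Haar wavelet, the simplest DWT. This is an illustrative sketch (the paper does not state which wavelet it uses); the averaging/differencing form below is one common unnormalized Haar convention.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar DWT: filter the rows first, then the columns,
    # producing the LL, LH, HL, and HH sub-bands.
    img = img.astype(float)
    # Row transform: pairwise average (low-pass) and difference (high-pass).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column transform applied to both row-filtered results.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar_dwt2(img)
print(LL.shape)  # (4, 4): each sub-band is half the size per dimension
```

Applying the same function to the LL band again yields the next decomposition level, which is how the k-level structure in Fig. 3 is built.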

3.2.2. Local binary pattern (LBP)

The concept of LBP was originally proposed by Ojala et al. in 1996 [29], [32]. LBP can be considered a texture analysis approach unifying structural and statistical models. A characteristic of LBP is that the LBP operator is invariant to monotonic gray-level changes [33]. In the LBP calculation, a 3 × 3 neighborhood of the image is first selected, and then the LBP code of the center pixel is computed from the intensity values of its neighboring pixels based on the following equations [34]:

LBP = Σ_{k=0}^{n-1} s(I_k − I_c) · 2^k        (1)

s(x) = 1 if x ≥ 0, and s(x) = 0 otherwise        (2)

where n is the number of neighboring pixels around the center pixel, Ik is the intensity value of the kth neighboring pixel, and Ic is the intensity value of the center pixel. An example of LBP is presented in Fig. 4 [34].
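Equations (1) and (2) can be computed directly for the basic 3 × 3 case. The sketch below is a straightforward implementation; the clockwise-from-top-left bit ordering is one common convention and is an assumption here, since the bit order is not fixed by the equations.

```python
import numpy as np

def lbp_3x3(img):
    # Basic 3x3 LBP: each of the 8 neighbours is thresholded against the
    # centre pixel (s(x) = 1 if x >= 0, else 0) and weighted by 2^k.
    img = img.astype(int)
    c = img[1:-1, 1:-1]  # all pixels that have a full 3x3 neighbourhood
    # Neighbour offsets, clockwise starting from the top-left corner.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for k, (dy, dx) in enumerate(offs):
        nb = img[1 + dy : img.shape[0] - 1 + dy,
                 1 + dx : img.shape[1] - 1 + dx]
        code += ((nb - c) >= 0).astype(int) << k
    return code  # LBP codes in [0, 255]

demo = np.array([[6, 5, 2],
                 [7, 6, 1],
                 [9, 8, 7]])
print(lbp_3x3(demo))  # [[241]]
```

For the 3 × 3 example above, the neighbours ≥ 6 set bits 0, 4, 5, 6, and 7, giving the code 1 + 16 + 32 + 64 + 128 = 241.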


Fig. 4. An example of LBP operator.

Fig. 4 shows the LBP spectrum of the Lena image with different circular domain radii and numbers of sampling points; correspondingly, the fineness of the texture information in the obtained LBP spectrum differs. Taking the Lena image as an example, as the sampling radius increases, the gray-scale statistical values of the LBP map become sparser [34].

In the proposed approach presented in this paper, after LBP is applied to the LH and HL sub-bands of the DWT, 512 features are extracted to represent the image.

3.3. Color-based Features Extraction

Color is considered a basic feature observed when viewing an image, revealing a variety of information [12]. Color is a widely used feature in image retrieval techniques [35], [36]. Color points create a color space, and various color spaces based on perceptual concepts are used for color representation [23]. Among all color spaces, YCbCr and HSV have the mentioned perceptual characteristic. In YCbCr, Y represents the luminance, while the color is represented by Cb and Cr [37]. In our proposed approach, the mean and entropy of each component of the RGB, HSV, and YCbCr color spaces are calculated as color features, giving 18 color-based features in total.
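The 18-feature color descriptor (mean and entropy per channel of RGB, HSV, and YCbCr) can be sketched as below. This is an illustrative implementation under stated assumptions: the HSV conversion uses the standard-library `colorsys` per pixel, the YCbCr conversion uses the common BT.601 coefficients scaled to [0, 1], and the entropy is computed over a 256-bin histogram; the paper does not specify these details.

```python
import colorsys
import numpy as np

def channel_entropy(ch, bins=256):
    # Shannon entropy (bits) of a channel's normalized histogram.
    hist, _ = np.histogram(ch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def color_features(rgb):
    # rgb: H x W x 3 array with values in [0, 1].
    # Returns 18 features: mean + entropy per channel of RGB, HSV, YCbCr.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # HSV via the standard-library converter (per pixel; fine for a sketch).
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb.reshape(-1, 3)])
    hsv = hsv.reshape(rgb.shape)
    # YCbCr (BT.601 coefficients, rescaled so all channels lie in [0, 1]).
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    feats = []
    for ch in (r, g, b, hsv[..., 0], hsv[..., 1], hsv[..., 2], y, cb, cr):
        feats.append(float(ch.mean()))
        feats.append(channel_entropy(ch))
    return np.array(feats)

img = np.random.default_rng(0).random((16, 16, 3))
f = color_features(img)
print(f.shape)  # (18,)
```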

3.4. Feature Matching

There is a variety of similarity measurements used to determine the similarity between the query image and the images in the dataset [9]. The Manhattan distance is used as the similarity measurement for both layers of the proposed approach in this work, as given in equation (3) [36]:

d(x, y) = Σ_{i=1}^{k} |x_i − y_i|        (3)

where x is the feature vector of the query image, y is the feature vector of an image in the dataset, and k is the dimension of the feature vector. The Manhattan distance is also known as the city block distance. In general, the Manhattan distance is non-negative; zero indicates identical feature vectors, and larger values indicate less similarity [9].
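Equation (3) and the ranking step it feeds can be sketched in a few lines; the function names here are illustrative, not from the paper.

```python
import numpy as np

def manhattan(x, y):
    # City-block distance: sum of absolute per-dimension differences.
    return float(np.abs(np.asarray(x) - np.asarray(y)).sum())

def rank_images(query_vec, dataset_vecs, top=3):
    # Rank dataset images by ascending Manhattan distance to the query;
    # the smallest distance is the best match.
    dists = [manhattan(query_vec, v) for v in dataset_vecs]
    order = np.argsort(dists)
    return list(order[:top]), [dists[i] for i in order[:top]]

q = [1.0, 2.0, 3.0]
db = [[1.0, 2.0, 3.0],   # identical -> distance 0
      [4.0, 2.0, 3.0],   # distance 3
      [0.0, 0.0, 0.0]]   # distance 6
idx, d = rank_images(q, db)
print(idx, d)  # [0, 1, 2] [0.0, 3.0, 6.0]
```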

3.5. Proposed Approach

This section describes the details of the proposed two-layer approach in the following steps:

  • 1. Let the query image be denoted by Q, and let I = {I1, I2,…, In} refer to the dataset consisting of n images.

  • 2. The first layer of the proposed approach involves the following steps:

  • a. QBoF and IBoF represent the feature vectors of Q and I, respectively, after the BoF technique is applied.

  • b. To find the similarity between QBoF and IBoF, the Manhattan similarity measurement is used, and as a result, the M images most similar to the query image are retrieved.

  • 3. The second layer of the proposed approach, which includes the following steps, is applied to the query image Q as well as the M most similar images obtained in the first layer.

  • a. Extract the following features from Q and each Mi:

  • • Let L = {l1, l2,…, l512} be the vector of 512 extracted texture-based features after LBP is applied to the LH and HL sub-bands of the DWT; 256 features are extracted from each sub-band.

  • • Let C = {c1, c2,…, c18} be the 18 extracted color-based features, representing the mean and entropy of the three components of the RGB, HSV, and YCbCr color spaces; 6 features are extracted from each of these color spaces.

  • • Let F = {L, C} represent the feature vector formed by fusing (concatenating) all 530 features extracted in the previous steps.

  • • Finally, QF and MFi represent the fused feature vectors of Q and Mi, respectively.

  • b. To find the similarity between QF and MFi, the Manhattan similarity measurement is used to retrieve the images most similar to the query image.

The block diagram of the proposed two-layer approach is illustrated in Fig. 5.


Fig. 5. Block diagram of the proposed two-layer CBIR approach.
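The two-layer procedure above can be sketched as a short retrieval driver. This is a minimal illustration, not the paper's code: random vectors stand in for the real BoF histograms and the fused 530-dimensional feature vectors, and the function and variable names are ours.

```python
import numpy as np

def manhattan(x, y):
    # City-block distance between two feature vectors.
    return float(np.abs(x - y).sum())

def two_layer_retrieval(q_bof, db_bof, q_full, db_full, M=110, N=10):
    # Layer 1: rank all dataset images by BoF-histogram distance and keep
    # only the M closest, discarding the most dissimilar images.
    d1 = np.array([manhattan(q_bof, v) for v in db_bof])
    keep = np.argsort(d1)[:M]
    # Layer 2: re-rank the M survivors using the fused texture (DWT + LBP)
    # and color feature vectors, and return the N best matches.
    d2 = np.array([manhattan(q_full, db_full[i]) for i in keep])
    return keep[np.argsort(d2)[:N]]

# Toy run with random vectors standing in for the real features.
rng = np.random.default_rng(0)
db_bof = rng.random((50, 500))    # 500 visual words per image (k = 500)
db_full = rng.random((50, 530))   # 512 texture + 18 color features
hits = two_layer_retrieval(db_bof[7], db_bof, db_full[7], db_full, M=20, N=5)
print(hits[0])  # image 7 is its own best match
```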

4. EXPERIMENTAL RESULTS

Comprehensive experiments are conducted in this section to evaluate the performance of the proposed approach in terms of precision rate, the most common confusion-matrix-based measurement used in the research area of CBIR. In addition, the proposed approach is compared to current existing works.

4.1. Dataset

The Corel-1K image dataset has been used; it is a public and well-known dataset containing 1000 images in 10 categories, each category consisting of 100 images with a resolution of 256 × 384 or 384 × 256 [37], [38]. The categories are: African people, beaches, buildings, buses, dinosaurs, elephants, flowers, horses, mountains, and foods [37].

4.2. Evaluation Measurements

To evaluate the performance of the proposed approach, the precision measurement has been used, which is the ratio of the number of correctly retrieved images to the total number of images retrieved from the tested dataset. It is computed based on the following equation [38], [39]:

Precision = (Rc / Rt) × 100

where Rc represents the number of correctly retrieved images and Rt represents the total number of retrieved images. In this study, top-10 and top-20 have been tested: top-10 indicates that 10 images are retrieved in total, and top-20 indicates that 20 images are retrieved in total.
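The precision equation above amounts to a simple count over the top-k retrieved images. The sketch below uses a made-up label list purely for illustration.

```python
def precision_at_k(retrieved_labels, query_label, k):
    # Precision = Rc / Rt: correctly retrieved images over the total
    # number of retrieved images (here, the top-k), as a percentage.
    topk = retrieved_labels[:k]
    correct = sum(1 for lbl in topk if lbl == query_label)
    return 100.0 * correct / k

# Hypothetical ranked category labels for a "horse" query:
ranked = ["horse", "horse", "flower", "horse", "bus",
          "horse", "horse", "horse", "dino", "horse"]
print(precision_at_k(ranked, "horse", 10))  # 70.0
```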

4.3. Results

The experiments carried out in this work include two parts: (a) the single-layer CBIR model and (b) the two-layer CBIR model. The first part evaluates the single-layer models: the BoF technique alone on one hand, and the CBIR technique based on extracting texture and color features on the other. In the second part, the proposed two-layer model is assessed. The experiments are detailed in the following steps:

1. The BoF-based CBIR technique is tested using different numbers of clusters, as the BoF technique relies on the K-means clustering algorithm to create the clusters, commonly called visual words. The number of clusters cannot be selected automatically; manual selection is needed. To select the proper number of clusters (i.e., the value of k), different numbers of clusters have been tested to obtain the best precision for the BoF technique. The precision results for the different numbers of clusters are presented in the following tables.

From Tables 1 and 2, it is quite obvious that the best result is achieved when k = 500 for both top-10 and top-20.

TABLE 1: Precision rate of BoF technique for different number of clusters for top-10


TABLE 2: Precision rate of BoF technique for different number of clusters for top-20


2. The DWT sub-bands, and concatenations of the sub-bands, have been tested as texture features, as presented in the following tables.

From Tables 3 and 4, one can observe that the best result is obtained when the LH and HL sub-bands are concatenated for both top-10 and top-20.

TABLE 3: Precision rate for the DWT sub-bands for top-10


TABLE 4: Precision rate for the DWT sub-bands for top-20


3. In this step, LBP is applied to the DWT sub-bands, specifically the LH and HL sub-bands.

In the proposed method, LBP is extracted from the DWT sub-bands to form a local feature descriptor. To achieve this, we performed the DWT decomposition and considered the high-frequency sub-bands HL and LH. These sub-bands contain edge and contour details of the image that are significant for extracting pose- and expression-relevant features with the aid of LBP. We ignored the low-frequency LL sub-band and the high-frequency HH sub-band, as the latter mostly contains noise with negligible feature details. To preserve spatial characteristics and form a robust local feature descriptor, multi-region LBP pattern-based features [4] are obtained from non-overlapping regions of the DWT sub-bands {HL, LH}; these are statistically significant and offer reduced dimensionality with increased robustness to noise. Each of the sub-bands {HL, LH} is equally divided into m non-overlapping rectangular regions R0, R1, …, Rm, each of size (x, y) pixels. From each of these m regions, local LBP features, each with 256 labels, are extracted separately. The features from the two sub-bands are then concatenated into one vector of 512 features; the results are given in Tables 5 and 6.

TABLE 5: Precision rate for LBP for top-10


TABLE 6: Precision rate for LBP for top-20


From Tables 5 and 6, one can observe that the best result is obtained when the LBP features of the LH and HL sub-bands are concatenated.
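Putting the pieces together, the 512-dimensional texture descriptor (a 256-bin LBP histogram from each of the LH and HL sub-bands) can be sketched as below. This is an illustrative, self-contained sketch: it uses a one-level Haar DWT and a basic 3 × 3 LBP as stand-ins, and it histograms each whole sub-band rather than the m per-region histograms described above.

```python
import numpy as np

def haar_bands(img):
    # One-level Haar DWT (rows then columns); return the LH and HL bands.
    img = img.astype(float)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    return LH, HL

def lbp_hist(band):
    # 3x3 LBP codes of a band, summarized as a 256-bin histogram.
    b = np.rint(band).astype(int)
    c = b[1:-1, 1:-1]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for k, (dy, dx) in enumerate(offs):
        nb = b[1 + dy : b.shape[0] - 1 + dy, 1 + dx : b.shape[1] - 1 + dx]
        code += ((nb - c) >= 0).astype(int) << k
    return np.bincount(code.ravel(), minlength=256)[:256]

def texture_vector(img):
    # Concatenate the LBP histograms of the LH and HL sub-bands
    # into the 512-dimensional texture feature vector.
    LH, HL = haar_bands(img)
    return np.concatenate([lbp_hist(LH), lbp_hist(HL)])

img = np.random.default_rng(0).random((64, 64)) * 255
print(texture_vector(img).shape)  # (512,)
```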

4. Before selecting an appropriate color descriptor, the choice of color space is important, and a color model must be chosen for the color feature extraction process [35]. This step evaluates the impact of extracting color features by testing different color spaces: RGB, YCbCr, and HSV. In other words, the mean and entropy of each color component have been calculated, and the results are presented in the following tables.

The results presented in Tables 7 and 8 demonstrate that combining the extracted features of all the tested color spaces provides the best precision rate.

TABLE 7: Precision rate for color-based features for top-10


TABLE 8: Precision rate for color-based features for top-20


5. Finally, all the extracted features are fused by concatenating the LBP features extracted in step 3 and the color features extracted in step 4; the results are given in Table 9.

TABLE 9: Concatenation of texture-based feature and color-based feature


6. Eventually, the proposed two-layer approach has been tested. In the first layer, the BoF technique (with k = 500) is applied and the M most similar images are retrieved, where M is user defined. In the second layer, color and texture features are extracted from the query image and the M remaining images and, as a result, the N most similar images are retrieved. The following tables investigate the best value of M; Tables 10 and 11 show the results for different values of M for top-10 and top-20, respectively.

TABLE 10: Precision rate for different number of M for top-10


TABLE 11: Precision results for different number of M for top-20


Results in Tables 10 and 11 demonstrate that the best precision is obtained for M = 100 and M = 200. For this reason, values of M in the range 100 to 200 have also been investigated to obtain a better precision result, as shown in Tables 12 and 13.

TABLE 12: Precision rate of the proposed approach for different number of M for top-10


TABLE 13: Precision rate of the proposed approach for different number of M for top-20


From Tables 12 and 13, it is clear that the best result is obtained when M = 110 for both top-10 and top-20. Further experiments have been carried out to compare the proposed approach with state-of-the-art techniques, as shown in Table 14.

TABLE 14: Precision rate of the tested CBIR techniques


According to the results presented in Table 14, the best performance (precision rate) is achieved by the proposed approach for both top-10 and top-20. All the tested state-of-the-art techniques, except the technique of Ahmed and Naqvi [24], were evaluated either for top-10 or for top-20 only, which is why some cells in Table 14 do not contain a precision rate.

5. CONCLUSION

A two-layer based CBIR approach for filtering out the images in the dataset that are dissimilar to the query image has been developed in this study. In the first layer, BoF is used and, as a result, the M most similar images remain for the next layer; meanwhile, the most dissimilar images are eliminated and, hence, the range of search is narrowed for the next step. The second layer concentrates on concatenating the extracted texture- and color-based features. The results obtained by the proposed approach demonstrate the impact of the two-layer concept in improving the precision rate compared to existing works. The proposed approach has been evaluated using the Corel-1K dataset, and precision rates of 82.15% and 77.27% for top-10 and top-20, respectively, were achieved. In the future, other feature extractors need to be investigated, and feature selection techniques need to be added to select the most important features, which should further increase the precision rate.

REFERENCES

[1]. Z. F. Mohammed and A. A. Abdulla. “Thresholding-based white blood cells segmentation from microscopic blood images”. UHD Journal of Science and Technology, vol. 4, no. 1, 9, 2020.

[2]. M. W. Ahmed and A. A. Abdulla. “Quality improvement for exemplar-based image inpainting using a modified searching mechanism”. UHD Journal of Science and Technology, vol. 4, no. 1, 1, 2020.

[3]. H. Liu, J. Yin, X. Luo and S. Zhang. “Foreword to the Special Issue on Recent Advances on Pattern Recognition and Artificial Intelligence”. Springer, Berlin, 1, 2018.

[4]. A. Wojciechowska, M. Choraś and R. Kozik. “Evaluation of the Pre-processing Methods in Image-Based Palmprint Biometrics”. Springer, International Conference on Image Processing and Communications, 1, 2017.

[5]. A. A. Abdulla, S. A. Jassim and H. Sellahewa. “Secure Steganography Technique Based on Bitplane Indexes”. 2013 IEEE International Symposium on Multimedia, 2013.

[6]. A. A. Abdulla. “Exploiting Similarities between Secret and Cover Images for Improved Embedding Efficiency and Security in Digital Steganography”. Department of Applied Computing, The University of Buckingham, United Kingdom, pp. 1-235, 2015.

[7]. S. Farhan, B. K. Biswas and R. Haque. “Unsupervised Content-Based Image Retrieval Technique Using Global and Local Features”. International Conference on Advances in Science, Engineering and Robotics Technology, 2, 2019.

[8]. R. S. Patil, A. J. Agrawal. “Content-based image retrieval systems:A survey”. Advances in Computational Sciences and Technology, vol. 10, 9, pp. 2773-2788, 2017.

[9]. H. Shahadat and R. Islam. “A new approach of content based image retrieval using color and texture features”. Current Journal of Applied Science and Technology, vol. 21, no. 1, pp. 1-16, 2017.

[10]. A. Sarwar, Z. Mehmood, T. Saba, K. A. Qazi, A. Adnan and H. Jamal. “A novel method for content-based image retrieval to improve the effectiveness of the bag-of-words model using a support vector machine”. Journal of Information, vol. 45, pp. 117-135, 2019.

[11]. L. K. Paovthra and S. T. Sharmila. “Optimized feature integration and minimized search space in content based image retrieval”. Procedia Computer Science, vol. 165, pp. 691-700, 2019.

[12]. A. Masood, M. A. Shahid and M. Sharif. “Content-based image retrieval features:A survey”. The International Journal of Advanced Networking and Applications, vol. 10, no. 1, pp. 3741- 3757, 2018.

[13]. S. Singh and S. Batra. “An Efficient Bi-layer Content Based Image Retrieval System”. Springer, Berlin, 3, 2020.

[14]. Y. D. Mistry. “Textural and color descriptor fusion for efficient content-based image retrieval”. Iran Journal of Computer Science, vol. 3, pp. 1-15, 2020.

[15]. K. T. Ahmed, A. Irtaza, M. A. Iqbal. “Fusion of Local and Global Features for Effective Image Extraction”. Elsevier, Amsterdam, Netherlands, vol. 51, pp. 76-99, 2019.

[16]. M. O. Divya and E. R. Vimina. “Maximal Multi-channel Local Binary Pattern with Colour Information for CBIR”. Springer, Berlin, 2, 2020.

[17]. T. Kato. “Database Architecture for Content-based Image Retrieval”. International Society for Optics and Photonics, vol. 1662, pp. 112-123, 1992.

[18]. J. Yu, Z. Qin, T. Wan and X. Zhang. “Feature integration analysis of bag-of-features model for image retrieval”. Neurocomputing, vol. 120, pp. 355-364, 2013.

[19]. N. Shrivastava. “Content-based Image Retrieval Based on Relative Locations of Multiple Regions of Interest Using Selective Regions Matching”. Elsevier, Amsterdam, Netherlands, vol. 259, pp. 212-224, 2014.

[20]. E. Gupta and R. S. Kushwah. “Combination of Global and Local Features Using DWT with SVM for CBIR”. 2015 4th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015.

[21]. E. Gupta and R. S. Kushwah. “Content-based image retrieval through combined data of color moment and texture”. International Journal of Computer Science and Network Security, vol. 17, pp. 94-97, 2017.

[22]. A. Nazir, R. Ashraf, T. Hamdani and N. Ali. “Content Based Image Retrieval System by using HSV Color Histogram, Discrete Wavelet Transform and Edge Histogram Descriptor”. 2018 International Conference on Computing, Mathematics and Engineering Technologies, 4, 2018.

[23]. P. Jitesh, A. Ashok, P. A. Kumar and B. Haider. “Multi-level colored directional motif histograms for content-based image retrieval”. The Visual Computer, vol. 36, pp. 1847-1868, 2020.

[24]. K. T. Ahmed and S. H. Naqvi. “Convolution, Approximation and Spatial Information Based Object and Color Signatures for Content Based Image Retrieval”. 2019 International Conference on Computer and Information Sciences, 2019.

[25]. H. Qazanfari, H. Hassanpour and K. Qazanfari. “Content-based image retrieval using HSV color space features”. International Journal of Computer and Information Engineering, vol. 13, no. 10, pp. 537-545, 2019.

[26]. E. Rashno. “Content-based image retrieval system with most relevant features among wavelet and color features”. Iran University of Science and Technology, pp. 1-18, 2019.

[27]. K. S. Aiswarya, N. Santhi and K. Ramar. “Content-based image retrieval for mobile devices using multi-stage autoencoders”. Journal of Critical Reviews, vol. 7, pp. 63-69, 2020.

[28]. J. Zhou, X. Liu, W. Liu and J. Gan. “Image retrieval based on effective feature extraction and diffusion process”. Multimedia Tools and Applications, vol. 78, no. 5, pp. 6163-6190, 2019.

[29]. P. Srivastava. “Content-Based Image Retrieval Using Multiresolution Feature Descriptors”. Springer, Berlin, pp. 211-235, 2019.

[30]. I. A. Saad. “An efficient classification algorithm for image retrieval based on color and texture features”. Journal of AL-Qadisiyah for Computer Science and Mathematics, vol. 10, no. 1, pp. 42-53, 2018.

[31]. M. S. Haji. “Content-based image retrieval: A deep look at features prospectus”. International Journal of Computational Vision and Robotics, vol. 9, no. 1, pp. 14-37, 2019.

[32]. V. Geetha, V. Anbumani, S. Sasikala and L. Murali. “Efficient Hybrid Multi-level Matching with Diverse Set of Features for Image Retrieval”. Springer, Berlin, pp. 12267-12288, 2020.

[33]. R. Boukerma, S. Bougueroua and B. Boucheham. “A Local Patterns Weighting Approach for Optimizing Content-Based Image Retrieval Using a Differential Evolution Algorithm”. 2019 International Conference on Theoretical and Applicative Aspects of Computer Science, 2019.

[34]. Y. Cai, G. Xu, A. Li and X. Wang. “A novel improved local binary pattern and its application to the fault diagnosis of diesel engine”. Shock and Vibration, vol. 2020, 9830162, 2020.

[35]. G. Xie, B. Guo, Z. Huang, Y. Zheng and Y. Yan. “Combination of Dominant Color Descriptor and Hu Moments in Consistent Zone for Content Based Image Retrieval”. IEEE Access, vol. 8, pp. 146284-146299, 2020.

[36]. A. C. Nehal and M. Varma. “Evaluation of Distance Measures in Content Based Image Retrieval”. 2019 3rd International Conference on Electronics, Communication and Aerospace Technology, pp. 696-701, 2019.

[37]. S. Bhardwaj, G. Pandove and P. K. Dahiya. “A futuristic hybrid image retrieval system based on an effective indexing approach for swift image retrieval”. International Journal of Computer Information Systems and Industrial Management Applications, vol. 12, pp. 1-13, 2020.

[38]. S. P. Rana, M. Dey and P. Siarry. “Boosting content based image retrieval performance through integration of parametric and nonparametric approaches”. Journal of Visual Communication and Image Representation, vol. 58, pp. 205-219, 2019.

[39]. M. K. Alsmadi. “Content-Based Image Retrieval Using Color, Shape and Texture Descriptors and Features”. Springer, Berlin, pp. 1-14, 2020.