Performance enhancement method for multiple license plate recognition in challenging environments

Multiple-license plate recognition is gaining popularity in Intelligent Transport System (ITS) applications for security monitoring and surveillance. Advancements in acquisition devices have increased the availability of high-definition (HD) images, which can capture multiple vehicles at once. Since a license plate (LP) occupies a relatively small portion of an image, detecting it is considered a challenging task. The overall performance deteriorates further when this factor is combined with varying illumination conditions, such as night, dusk, and rain. Because it is difficult to locate a small object in an entire image, this paper proposes a two-step approach for plate localization in challenging conditions. In the first step, the Faster Region-based Convolutional Neural Network (Faster R-CNN) is used to detect all the vehicles in an image, which provides scaled information for locating plates. In the second step, morphological operations are employed to reduce non-plate regions, while geometric properties are used to localize plates in the HSI color space. This approach increases accuracy and reduces processing time. For character recognition, a look-up table (LUT) classifier using adaptive boosting with the modified census transform (MCT) as a feature extractor is used. Both the proposed plate detection and character recognition methods significantly outperform conventional approaches in terms of precision and recall for multiple plate recognition.

• The proposed technique improves the accuracy of plate detection in challenging environments with non-uniform illumination and low resolution (depending on distance from the camera). Our MLPR technique divides the plate detection problem into vehicle detection and plate localization, which yields scaled information for plate localization and helps remove background noise and clutter. • The proposed plate recognition algorithm does not require uniform lighting conditions and can handle low resolution and angular plates. Moreover, the character recognition stage is robust to varying illumination, low resolution, different orientations, and multiple fonts. Experimental results show that characters as small as 6 × 9 pixels are recognized effectively.
In this paper, Faster R-CNN is used for vehicle detection, followed by plate localization using morphological operations in the HSI color space. The geometric properties of area and aspect ratio of connected pixels are used for character segmentation. Moreover, this paper uses the texture-based feature extraction method MCT, which is robust to illumination changes and low resolution [13], with a lookup table classifier in a boosting framework for character recognition. The remainder of this paper is organized as follows. Related work is reviewed in Sect. 2. Section 3 presents the proposed license plate detection and recognition method, and detailed simulation results are presented in Sect. 4. Finally, the conclusion is given in Sect. 5.

Related work
This section briefly introduces the recent advances and published works in the plate recognition domain.

License plate detection
Most of the existing work on plate detection targets a single vehicle in an image. However, the demand for multiple plate detection has increased considerably owing to the growth of multilane road structures in modern cities. Edge detection methods consider an area with a high density of characters to be an LP. Combining this property with the geometric properties of plates has been widely used to extract LPs. Vertical edge detection is more robust than horizontal edge detection, which produces inaccurate results owing to errors caused by the car bumper area [3]. A fast and robust vertical edge detection method was proposed that increases detection speed by eliminating unwanted lines [14]. Yepez et al. [15] proposed a plate detection method based only on morphological operations. They developed an algorithm to select an appropriate structuring element (SE) from a set of SEs by training these SEs on the whole dataset. This approach does not perform well for multiple license plate recognition, owing to variations in the sizes of plates within an image.
In [16], a block processing method was proposed that detects the LP area by finding the maximum edge magnitudes among the blocks. Connected component labeling (CCL) [17] was used in binary images to label groups of connected pixels and localize the plate using attributes such as the height/width ratio and area. In [18], a character-based approach was used to localize an LP by calculating the distance between the characters on the LP. Rizwan et al. [2] proposed a method for detecting Chinese plates that exploits the chromatic component of the YDbDr color space and eliminates non-plate regions using an average energy map and the edge information of the plate.
Kamal et al. combined AdaBoost [19] with Haar-like features in a cascaded manner for license plate detection (LPD) [6], and genetic algorithms [20] have been used to classify and identify plates based on color information, with geometric attributes and CCL used for localization. In [21], the authors proposed an entropy-based feature selection method, followed by an SVM classifier, for plate detection. That method performed segmentation by identifying the luminance channel [22] and then applied Otsu's thresholding for binary segmentation of that channel. It was only able to produce reasonable results on a small number of images.
Recently, deep learning architectures have also been used for LPD [23]. CNNs have been used in cascaded form, where the first CNN classifier searches for any text in the image and the second classifier rejects false positives, i.e., distinguishes other text from the text on the LP [24]. Xiang et al. [25] proposed a CNN-based network that extracts low- and high-level features at several stages to distinguish plate details from the background, followed by a three-loss-layer architecture for accurate plate detection. To enhance efficiency, the researchers in [26] used the advanced structure of Faster R-CNN [27], which detects plates directly in an end-to-end manner. A modified YOLO [28] was used for license plate localization and was capable of detecting license plates with variations such as rotation, skewness, and different orientations; however, this method had high computational complexity.
Faster R-CNN with VGG-16 as the feature extractor, without the fully connected layers, was used for license plate localization in [29]. Moreover, DL-based image enhancement [30] and denoising [31] techniques can be applied to improve overall license plate detection accuracy. LPD can also be treated as a special case of scene text detection, and several such methods have been presented for text/number detection [32]. The technique of Xue et al. [33] identifies dense text boundary points to detect scene text, which captures the shape and location of text lines, in contrast to other methods that use segmentation-based techniques. These methods require strong contextual information. Since plate characters have little relation to contextual semantic information, issues such as varying lighting conditions combined with low resolution and viewing angle further deteriorate the overall performance of such techniques.

Character segmentation
Character segmentation is a key step used to isolate characters for recognition. The most popular methods use the geometric properties of area and aspect ratio [34]; horizontal and vertical projections of characters have been used to segment plates [35]; and multiple features have been combined to segment the characters of an LP [36].
A two-stage process based on a Convolutional Neural Network (CNN) is proposed in [37] to segment and recognize characters (0-9, A-Z). Tarigan et al. [38] proposed an LP segmentation technique consisting of horizontal character segmentation, connected component labeling, verification, and scaling.

Character recognition
Recently, many methods and classification techniques have been proposed for recognition [23]. The template-matching method [4] calculates the correlation between a character and templates, and the template with the maximum correlation value is taken as the character. However, it performs poorly under variable character sizes, noise, and rotation. A multilayer NN was trained to recognize characters [5]. A multistage classifier was used to recognize lower-case characters, upper-case characters, digits, and two-line plates; this technique's performance deteriorates with varying illumination and small character sizes [39].
In [40], a CNN and a bi-directional long short-term memory (BLSTM) network are combined for plate recognition. The CNN is used for feature extraction due to its high discrimination ability, while the BLSTM extracts context information from past inputs, followed by dense cluster voting (DCV) for classification. Bulan et al. [41] proposed a segmentation- and annotation-free method for plate recognition. They proposed a two-stage classifier, which first used a winnows classifier for candidate region extraction, followed by a CNN for plate classification. For optical character recognition, a segmentation-free approach using hidden Markov models (HMMs) was proposed. In [42], a research group developed a robust ALPR technique for unconstrained environments. In [43], researchers presented a novel architecture for Chinese LPR by cascading a CNN and extreme learning machines (ELMs). The CNN is applied for feature extraction and the ELM is used as a classifier, which yields encouraging results with short training times. In [10], a cascaded recurrent neural network (RNN) method integrated with short-term memory (STM) is proposed to recognize sequential features, which are extracted from the whole license plate via a CNN. In addition, by adopting an encoder-decoder architecture, character recognition can be regarded as scene text recognition (STR), in which the encoder extracts features and the decoder outputs the character sequence. The RNN, with wide applications in natural language processing, has been used extensively in STR [44]. However, one major drawback of RNNs is that they process data sequentially. Yu et al. [45] presented a semantic reasoning module that utilizes parallel transmission to mitigate the limitation of one-way transmission of context. Similarly, in [46], researchers utilized two transformers, one for image-to-character mapping and the second for character-to-word mapping, respectively.

Proposed license plate detection and recognition method
This section describes the architecture of the proposed system. Figure 2 shows the overall architecture of the license plate detection method. For readers' easy understanding, we divide our developed method into the following interconnected steps.

Vehicle detection
Object detection is becoming a complex problem with the increase in applications such as multiple object tracking [47] and self-driving cars. Many handcrafted methods, such as HoG and Haar features [47], and deep learning methods (R-CNN [48], Faster R-CNN [27], YOLO [49]) have been proposed recently. However, some have slow processing speeds and others have low accuracy. Faster R-CNN has shown the best detection rates among deep learning object detectors with real-time processing capabilities. However, its performance deteriorates for small object detection, for instance, LP localization in our case. Therefore, this paper uses Faster R-CNN for vehicle detection, which provides relevant, scaled information in an image. Faster R-CNN shows excellent results for vehicle detection because vehicles are large compared with plates in multiple license plate detection scenarios. Faster R-CNN is divided into two parts: a region proposal network (RPN) that generates proposals for vehicle regions, followed by Fast R-CNN for vehicle/non-vehicle classification, which efficiently refines the proposals and detects the vehicles. To generate feature maps of the input image, we employ a pre-trained VGG-16 [50] model consisting of 13 convolutional, 5 max-pooling, and fully connected layers. The feature maps are fed to the RPN, which scans each map with a sliding window and generates proposals with bounding boxes for vehicle regions. For multiple vehicle detection scenarios, the network has to detect vehicles of multiple scales and aspect ratios, as the distance between vehicle and camera varies. To deal with variable scales, anchors are introduced in the RPN, using three scales (128 × 128, 256 × 256, and 512 × 512) and three aspect ratios (1:1, 1:2, 2:1), which results in 9 anchors at each location. As the size of each region proposal differs, it is difficult to design an efficient architecture for different sizes.
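The anchor layout described above can be sketched as follows. This is a minimal illustration of how 3 scales × 3 aspect ratios yield 9 anchor boxes per sliding-window location; it is not the exact parameterization of any particular Faster R-CNN implementation.

```python
import numpy as np

def generate_anchors(scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    """Enumerate anchor boxes centered at the origin.

    Each ratio r is interpreted as height/width; every anchor keeps
    the area scale*scale of its base scale.  Returns an array of
    (x1, y1, x2, y2) boxes, one row per scale/ratio combination.
    """
    anchors = []
    for s in scales:
        area = float(s * s)
        for r in ratios:
            w = np.sqrt(area / r)   # w * h = area with h = r * w
            h = w * r
            anchors.append((-w / 2.0, -h / 2.0, w / 2.0, h / 2.0))
    return np.array(anchors)

anchors = generate_anchors()
print(anchors.shape)  # (9, 4): nine anchors per location
```

In the full network these template boxes are translated to every sliding-window position on the feature map, so the RPN scores the same nine shapes at each location.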
Region of interest (RoI) pooling simplifies the problem by extracting fixed-size feature representations. The features from RoI pooling are flattened into vectors. Eventually, these vectors are fed into two fully connected layers, one for vehicle/non-vehicle classification based on each RoI's SoftMax probability, and the other for predicting the rectangular coordinates of each region. In this method, the Faster R-CNN architecture is trained using stochastic gradient descent with momentum (SGDM), which minimizes the error and quickly updates the weights. SGD uses only one sample from the training data to update the weights, whereas the gradient descent (GD) method must consider the entire training dataset to update the weights/parameters. An initial learning rate of 0.001 was used for training the VGG-16 parameters and 0.0001 for the remaining parameters, for 50 k iterations. One image was randomly sampled per batch for training. Each image was resized to 600 and 1400 pixels for the shorter and longer sides, respectively. Figure 3 shows the results of vehicle detection in an image using Faster R-CNN.
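To make the RoI pooling step concrete, the following sketch max-pools an arbitrary rectangular region of a 2-D feature map into a fixed grid. It assumes a single channel and integer RoI coordinates for simplicity; production implementations operate on multi-channel tensors and map RoIs through the feature-map stride.

```python
import numpy as np

def roi_pool(feature_map, roi, out_size=(7, 7)):
    """Max-pool an RoI (x1, y1, x2, y2) on a 2-D feature map into a
    fixed out_size grid, mimicking Fast/Faster R-CNN RoI pooling."""
    x1, y1, x2, y2 = roi
    region = feature_map[y1:y2, x1:x2]
    oh, ow = out_size
    # Partition the region into oh x ow roughly equal cells.
    h_edges = np.linspace(0, region.shape[0], oh + 1).astype(int)
    w_edges = np.linspace(0, region.shape[1], ow + 1).astype(int)
    out = np.zeros(out_size)
    for i in range(oh):
        for j in range(ow):
            cell = region[h_edges[i]:h_edges[i + 1], w_edges[j]:w_edges[j + 1]]
            out[i, j] = cell.max() if cell.size else 0.0
    return out
```

Whatever the proposal size, the output is always `out_size`, which is what lets the subsequent fully connected layers have a fixed input dimension.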

License plate localization
After successful vehicle detection, the next step is to locate the LP using morphological operations in the HSI color space. This color space is known to be closely related to human color perception [51]. The vehicle area is converted to the HSI color space, which separates the color information from the intensity [51]. The current approach uses hue information to determine colored-background plates by defining specific criteria, as our main aim is to find yellow-green and orange plates. Based on our experiments, the following criteria proved sufficient for our requirements.

Background Color is Blue
For the reader's information, we note that several other color spaces based on tristimulus values exist, such as CMY, HSI, and YCbCr [51]. Since we use the HSI color space at this stage, we note that the equations defining HSI effectively rotate the nominal RGB color cube and scale it to fit within a larger color cube. Although these conversions require considerable computation to correctly determine and interpret HSI signals, we observe that, in our case, using hue to decide the background color does not overfit to the investigated data. In addition, white-background plates and monochrome images are located using the intensity information of the HSI color space. Figure 4a shows the binarization result of the intensity channel, and Fig. 4b shows the segmentation result of the hue channel.
After segmentation and binarization, the candidate area contains regions of connected pixels. These connected components are labeled using the 4-connectivity labeling method so that each pixel in a connected region has the same label. Edge detection methods involve matrix multiplications that increase the computational cost. Therefore, morphological operations, given in (1) and (2), respectively, have been used instead of edge detection.
In multiple plate detection, the size of LPs depends upon the distance of a car from the camera.
Thus, having more than one SE for one task can increase the computational load. After testing and verification on several test images, an optimum SE was selected. Figure 5a, b shows the effect of the morphological operations on both binary images; after these operations, most of the non-LP regions are removed. Finally, we apply two geometric conditions, on area and aspect ratio, to locate the license plate. In multiple license plate detection, plate size varies depending on the distance from the camera, so having multiple area and aspect ratio values is not an optimal solution. Experiments on a large number of test images were performed to find optimum values. Accordingly, area values between 1500 and 4000 pixels and an aspect ratio of 0.2-0.6 are used in the proposed method for plate localization. A similar process is carried out on the remaining detected vehicles. Figure 6 shows the overall result of detecting multiple license plates with the proposed method.
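The geometric filtering step above can be sketched as follows, using the stated area and aspect-ratio ranges. The `(x, y, w, h, area)` tuple format for labeled components is an assumed representation for illustration.

```python
def plate_candidates(regions):
    """Filter labeled regions by the paper's geometric criteria:
    area between 1500 and 4000 pixels, aspect ratio (h/w) 0.2-0.6."""
    kept = []
    for (x, y, w, h, area) in regions:
        aspect = h / w
        if 1500 <= area <= 4000 and 0.2 <= aspect <= 0.6:
            kept.append((x, y, w, h))
    return kept

regions = [
    (10, 20, 90, 30, 2400),   # plate-like: aspect 0.33, area in range
    (5, 5, 40, 40, 1600),     # square blob: aspect 1.0, rejected
    (0, 0, 300, 90, 20000),   # too large, rejected
]
print(plate_candidates(regions))  # [(10, 20, 90, 30)]
```

Note that the area test uses the pixel count of the component rather than the bounding-box area, so thin clutter with a plate-sized bounding box is still rejected.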

Character segmentation
The localized LP now proceeds to the segmentation step. This is a crucial step, as recognition depends entirely on how well the characters are separated from one another. In this study, pixel connectivity in binary images is used for segmenting characters [52]. First, the LP regions are binarized using Otsu's threshold method [35]. Next, morphological thinning is performed to reduce the joins between the LP boundary and the text, and between characters, which can negatively impact the process. The connected components are then labeled based on pixel connectivity, and components whose area and aspect ratio fall within the expected ranges are detected as characters. Figure 7 shows segmented characters from several number plates with varying lighting conditions, different backgrounds, and multiple sizes.
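The 4-connectivity labeling used to isolate character blobs can be sketched with a simple breadth-first search over a binary image; this is an illustrative pure-Python version of what a vectorized labeling routine would do.

```python
from collections import deque

def label_components(binary):
    """4-connectivity labeling of a binary image (nested lists of 0/1),
    returning each connected component as a set of (row, col) pixels."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    components = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                comp, q = set(), deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.add((y, x))
                    # Visit the four edge-adjacent neighbors only.
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                components.append(comp)
    return components

img = [[1, 1, 0, 1],
       [1, 0, 0, 1],
       [0, 0, 0, 0]]
print([len(c) for c in label_components(img)])  # [3, 2]
```

Each returned component corresponds to one candidate character blob, whose area and bounding-box aspect ratio are then checked as described above.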

Character recognition
In multiple plate detection scenarios, the sizes of plates vary depending on the distance from the camera. Therefore, the characters isolated during the segmentation step also have variable sizes, so techniques such as template matching do not perform well because they require a fixed size. The resolution of characters plays a crucial role in their identification. Moreover, conventional approaches do not perform well in challenging environments and various illumination conditions, e.g., rainy, dusk, cloudy, and underground parking images. For character recognition, AdaBoost with the modified census transform (MCT) [13] as a feature extractor is used with a lookup table (LUT) classifier. The LUT is efficient at classifying multi-Gaussian samples, and its sensitivity to a fixed number of bins suits the character recognition process. Table 1 shows the algorithm for character recognition. Texture-based analysis plays a vital role in vision-based applications, focusing mainly on deriving texture features from neighborhood properties. The local binary pattern (LBP) computes a local representation of texture by comparing the center pixel to its neighboring pixels within a defined mask. However, LBP features show poor results when the center pixel value changes due to varying illumination. Therefore, in our proposed method, MCT features are used, and they provide excellent texture description for character recognition under changing lighting conditions. Figure 8 shows the calculation of the MCT feature with a 3 × 3 window from a segmented character. The MCT first computes the mean intensity value of the 3 × 3 window around a specific pixel. For each pixel in the window, the MCT assigns "1" if the pixel value is higher than the mean and "0" otherwise. This binary value is converted to a decimal number to obtain the feature value.
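The MCT computation for one 3 × 3 window can be sketched directly from this description. Because all nine pixels can never simultaneously exceed their own mean, the all-ones pattern never occurs, which is why a 3 × 3 kernel yields 511 distinct feature values rather than 512.

```python
import numpy as np

def mct_value(window):
    """Modified census transform of a 3x3 window: compare each pixel
    to the window mean and read the nine bits as an integer."""
    window = np.asarray(window, dtype=float)
    bits = (window > window.mean()).astype(int).ravel()
    return int("".join(map(str, bits)), 2)

w = [[10, 10, 10],
     [10, 90, 10],
     [10, 10, 10]]
print(mct_value(w))  # only the bright center exceeds the mean -> 0b000010000 = 16
```

Unlike LBP, the comparison is against the window mean rather than the center pixel, which is what makes the pattern stable when the center pixel alone is perturbed by illumination.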
This integer value represents an independent local pattern; therefore, a 3 × 3 kernel can produce a total of 511 feature values. Next, the LUT classifier is used to classify the MCT features at every pixel location of the character, producing 511-bin feature indices. A 511-bin histogram Γ(x) is created for all samples in the training set. The LUT assigns +1 to a bin if the positive samples outnumber the negative ones and -1 otherwise, as shown in (3). Figure 9 shows an example of the LUT classifier, where rows represent the pattern value and columns represent the weak classifier candidates. AdaBoost is an iterative method that, in every learning iteration, selects the weak classifier (pixel location) with the minimum weighted error. Finally, a strong classifier is constructed from the sum of all weak pixel classifiers, as shown in (4). Since character recognition is a multiclass problem, we use the one-against-all classification technique to construct k = 50 classifiers for 50 classes. Each classifier is trained by taking positive examples from one class and negative examples from the remaining classes. The output of the multi-class classifier is the class with the maximum output among all binary classifiers. From the outputs of the multiple binary classifiers, the multi-classifier generates a vector output S, as shown in (5).
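A schematic sketch of the LUT weak classifier and the boosted strong classifier follows. The AdaBoost sample-reweighting loop that selects pixel locations and the weights alpha is omitted; the histogram inputs are assumed to have been accumulated over training samples beforehand.

```python
import numpy as np

def lut_weak_classifier(pos_hist, neg_hist):
    """Build a lookup table over 511 MCT bins: +1 where positive
    samples dominate a bin, -1 otherwise (cf. Eq. (3))."""
    return np.where(pos_hist > neg_hist, 1, -1)

def strong_classify(mct_values, luts, alphas):
    """Alpha-weighted sum of the weak LUT classifiers (cf. Eq. (4));
    a positive sum accepts the candidate character class."""
    score = sum(a * lut[v] for a, lut, v in zip(alphas, luts, mct_values))
    return 1 if score > 0 else -1
```

In the one-against-all scheme, 50 such strong classifiers are evaluated and the class whose classifier produces the maximum score is reported.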

Experimental results and discussion
This section provides experimental results on published datasets as well as a private multiple license plate dataset to demonstrate the effectiveness of the proposed method.

a) Benchmarks
Peking University (PKU) dataset This dataset was presented and collected by Yuan et al. [53]. It comprises 5 groups. Groups 1 to 4 contain images with a single license plate of cars/trucks on highways and roads under varying illumination conditions. Group 5 contains 1152 images with multiple vehicles under diverse environmental conditions, comprising low illumination, low resolution (size), and varying contrast.
Application-oriented license plate dataset (AOLP) [54] Contains a total of 2049 images of Taiwanese license plates. Based on the targeted applications, this dataset is divided into three categories: AC (access control, 681 images), LE (law enforcement, 757 images), and RP (road patrol, 611 images). Specifically, AC contains images captured by a stationary camera of vehicles passing at low speed, LE contains images captured by roadside cameras of cars moving at variable speeds, and RP, the most challenging part of the dataset, contains images captured from law enforcement vehicles.
Media Lab dataset [55] Contains 706 images of Greek license plates in constrained as well as unconstrained environments. This dataset is divided into different groups based on various conditions, such as blurred, color, and grayscale images, LPs with a close view, shadows, and images containing more than one vehicle. As discussed, the AC and LE subsets of the AOLP dataset are less challenging than the RP subset, which contains images with blurriness and distortion introduced by the motion of the camera. As can be seen from Table 3, our method outperformed [57] and the robust attention method [58] on all three subsets of this dataset. However, Table 2 shows that we achieved accuracy comparable to that of [59], which has reported the best accuracy to date. The performance of the proposed method on the AOLP dataset thus shows that it is effective under the challenging conditions of distortion, blurriness, and rotation.

Xu et al. [56] presented the CCPD dataset, which is currently the largest Chinese license plate dataset available; Table 2 describes the details of this dataset.
As already mentioned, the subgroup of the PKU dataset most relevant to our application is G5, which contains multiple license plate images in challenging conditions. As can be seen from Table 4, the proposed method achieves accuracy comparable to all state-of-the-art methods on groups G1-G4 of the dataset. Moreover, the proposed algorithm outperforms all methods on the G5 subgroup, which has huge vehicle diversity and plates with multiple orientations; this further accentuates the suitability of the proposed scheme for multiple plate detection. Table 5 shows that the group-wise accuracy of the proposed method on the CCPD dataset is better than that of other state-of-the-art algorithms. Luo et al. [63] is the only method performing better on the tilt and rotate groups, as that method was explicitly designed for tilted/rotated plates. The other state-of-the-art methods compared include a text recognition approach [64], a multi-cascaded CNN-based approach [65], and an attention-based method that utilizes an Xception CNN for feature extraction and a recurrent neural network for decoding. As can be seen from the results, the proposed method's accuracy is better on the groups representing unconstrained environments, such as weather and challenge, where the state-of-the-art methods were unable to detect plates in extreme reflective glare and adverse weather. In the Media Lab dataset, the number of images is too small to train the model effectively; therefore, we adopted the training protocol used by [59], which performs fourfold cross-validation: the subsets are divided into four equal random parts, three of which are used for training and the fourth for testing. It is evident from Table 6 that the proposed method outperformed the existing methods; this dataset contains relatively less challenging illumination and weather conditions than the other datasets and also contains good-resolution images.
The private dataset contains a total of 4179 images (resolution 1920 × 1080), of which 2179 were used for training and 2000 for testing, taken in varying illumination conditions and environments, i.e., night, day, dusk, cloudy weather, rainy weather, and parking areas. Table 7 compares the results of the proposed detection method with existing methods, in terms of recall and precision, when applied to images with multiple license plates. The proposed method outperforms conventional methods in terms of both precision and recall. There were 5543 vehicles in the 2000 images used for testing. The proposed method detected 5361 LPs correctly, with an accuracy (recall rate) of 96.72%. Recall is defined as the fraction of ground-truth plates that are correctly detected, and precision as the fraction of detections that are correct. Figure 10 shows the results for challenging illumination and weather conditions throughout the day.
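The reported recall can be reproduced directly from these counts. Since the total number of detections (including false positives) is not reported here, the precision argument below is illustrative only.

```python
def recall_precision(true_plates, detected, correct):
    """Recall = correct detections / ground-truth plates;
    precision = correct detections / all detections."""
    return correct / true_plates, correct / detected

# Figures reported for the private test set: 5543 plates, 5361 detected correctly.
recall, _ = recall_precision(5543, 5543, 5361)
print(f"{100 * recall:.2f}%")  # 96.72%
```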
The recall ratio of the proposed method is 13% higher than that of the edge detection method, since the edge method was unable to detect color-background number plates. Precision is higher compared with the AdaBoost method, as the AdaBoost method also detects headlights and text as license plates. Li et al. [24] trained a 37-class CNN system for character detection in images, followed by a CNN classifier as a false-positive eliminator. This method also produced more false positives in real-world scenarios where images contained text other than LPs. YOLO and SSD both underperformed on the private dataset, as both were unable to detect the small license plates in these images (small owing to distance from the camera). Similarly, the detection results of the text detection techniques [32,33] are less accurate, as their performance deteriorates when low resolution (small plate size) is combined with other environmental factors such as varying illumination, angular plates, and weather conditions.
Character recognition performance is evaluated in Table 3. The proposed recognition method was tested on all successfully detected plates, and its performance was compared with popular methods. The first method uses the scale-invariant feature transform (SIFT) for feature extraction and a support vector machine (SVM) for classification. The second method is a three-layer multilayer neural network for character recognition. The third method is a traditional convolutional neural network with two convolutional layers and two fully connected layers, followed by an SVM for classification.
The SIFT- and SVM-based method was unable to classify characters under partial occlusion in rainy images, the effect of vehicle headlights in basement images, and exposure to strong sunlight. The ANN produced the worst results, as broken characters and two-font characters on plates were unrecognizable by this method. As the CNN can learn features automatically, it performed better than both existing methods; however, its performance degraded for low-resolution characters (owing to camera distance). We also compared against state-of-the-art scene text recognition methods, including methods using RNNs/transformers [44] and the semantic reasoning network (SRN) [46]. The proposed method outperformed these methods, as they were unable to perform well in an unconstrained environment that includes occlusions (raindrops), very low resolution, characters with extreme reflective glare, and little semantic meaning. Moreover, the proposed method outperformed these methods in terms of accuracy in challenging conditions, such as varying illumination, as per Table 8. Figure 11 compares the character recognition performance of conventional approaches with that of the proposed method on low-resolution characters, demonstrating the superiority of the proposed scheme over the benchmarks. Table 9 lists the overall (detection + recognition) results obtained under different lighting and weather conditions. Our dataset consists of images taken during night, day, dusk, and cloudy weather. Images taken on cloudy and sunny days produce better results due to consistent lighting, except when LPs are affected by sunlight reflected from the surroundings. Images of cars parked in a basement also produced good results, with the exception of reflections from other cars' headlights. The worst results occurred at dusk, owing to the rapidly varying illumination at that time of day.
Moreover, dimming sunlight, street lights, and car headlights have a negative impact on the overall performance of the method, especially when vehicles are far from the camera. Results for images taken in rainy conditions are also encouraging; however, some characters were not recognized because the images were blurred by water pouring down the windscreen of the car carrying the camera.
In terms of scalability, the proposed algorithm's performance should remain consistent and must not deteriorate drastically. To evaluate scalability, we analyzed execution time while increasing the number of plate images and the number of processors, respectively. We observed that processing time increases as the number of plate images grows from 1000 to 4000; however, this increase is mitigated by increasing the number of processors, as shown in Fig. 12.

Ablation study
A detailed analysis demonstrating the effectiveness of the proposed method is presented in this section. A license plate occupying a small portion of the image loses critical information after the several down-sampling stages applied by CNN-based object detectors; hence, these methods were not able to achieve better accuracy. Table 10 compares state-of-the-art object detectors on various LP sizes. It is evident from the table that as plate size decreases and conditions become more challenging, the accuracy of the object detectors drops significantly. Thus, based on these results and the fact that multiple license plate recognition involves small plates and challenging conditions, the proposed method uses Faster R-CNN as the object detector and then localizes the license plate region using image processing techniques. To further improve overall accuracy, the selection of an optimal number of scales and anchor boxes plays a crucial role. Therefore, we conducted multiple experiments, varying the scales and aspect ratios, to evaluate the performance of Faster R-CNN on our dataset. Table 11 compares the results of different combinations of scales and aspect ratios. We can conclude that the default setting of 3 anchor scales and 3 aspect ratios produces the best results for vehicle detection. Furthermore, the same LP color can take different color-component values during day and night hours, which is a difficult condition to handle, particularly in extreme cases; the proposed algorithm handles this difficulty efficiently. In this application, the HSI color model has been applied, in which any color component can be altered separately without disturbing the others. This property is very effective in dealing with adverse conditions, such as extreme lighting, and resulted in high precision even in challenging conditions.
Since this paper targets multiple license plate recognition, the computational performance will not match that of methods designed for images with a single license plate. Table 12 shows the time consumed by each part of the algorithm. The proposed approach requires only 570 ms to detect a license plate, which includes vehicle detection and plate detection, in an image with a resolution of 1920 × 1080 pixels. Moreover, it takes roughly 91 ms to recognize the characters of a license plate. Although our two-step approach does not outperform other methods in terms of computational performance, it is sufficient to achieve the real-time processing speed required for ITS applications.

Conclusions and future work
In this paper, a multiple license plate recognition method for high-resolution images was presented, which works in challenging illumination conditions in real-time scenarios. The proposed technique divided plate detection into two steps. In the first step, Faster R-CNN was used to detect all the vehicles in an image, resulting in scaled information for locating plates. Then, morphological operations were used to reduce non-plate regions, and geometric properties were used to localize plates in the HSI color space. Finally, character recognition was performed by a LUT classifier using adaptive boosting with the MCT as a feature extractor. Experimental results showed that the detection rate of the proposed method is much higher than that of existing methods, with an overall detection rate of 96.72% and a recognition rate of 98.02% in multiple-LP and varying-illumination scenarios. The proposed algorithm might be suitable for real-time ITS applications [32]. Future work could focus on developing a parallel version of the algorithm, which we believe will further reduce the time required to recognize a license plate.