Colour Vision Model-Based Approach for Segmentation of Traffic Signs

This paper presents a new approach to segmenting traffic signs from the rest of a scene via CIECAM, a colour appearance model. This approach not only puts CIECAM to practical use for the first time since it was standardised in 1998, but also introduces a new way of segmenting traffic signs in order to improve the accuracy of colour-based approaches. A comparison with other colour spaces, including HSI and CIELUV, as well as the RGB colour space, is also carried out. The results show that CIECAM performs better than the other three spaces, with accuracy rates of 94%, 90%, and 85% for sunny, cloudy, and rainy days, respectively. The results also confirm that CIECAM predicts colour appearance similarly to average observers.


INTRODUCTION
Recognising a traffic sign correctly, at the right time and in the right place, is very important to ensure a safe journey, not only for car drivers but also for their passengers, as well as for pedestrians crossing the road at the time. Sometimes, due to a sudden change of viewing conditions, traffic signs can hardly be spotted or recognised until it is too late, which gives rise to the need for an automatic system to assist car drivers in recognising traffic signs. Normally, such a driver-assistance system requires real-time recognition to match the speed of the moving car, which in turn requires speedy processing of images. Segmentation of potential traffic signs from the rest of a scene should therefore be performed before recognition in order to save processing time. In this study, segmentation of traffic signs based on colour is investigated.
Colour is a dominant visual feature and undoubtedly conveys key information for drivers. Colour information is widely used in traffic sign recognition systems [1, 2], especially for segmenting traffic signs from the rest of a scene. Colour is regulated not only for the traffic sign category (red = stop, yellow = danger, etc.) but also for the tint of the paint that covers the sign, which should correspond, within a tolerance, to a specific wavelength in the visible spectrum [3]. The most discriminating colours for traffic signs include red, orange, yellow, green, blue, violet, brown, and achromatic colours [4, 5].
Broadly speaking, three major approaches are applied in traffic sign recognition: colour-based, shape-based, and neural-network-based recognition. Due to the colour nature of traffic signs, the colour-based approach has become very popular.

Traffic sign segmentation based on colour
Many researchers have developed various techniques in order to make full use of the colour information carried by traffic signs. Tominaga [6] creates a clustering method in a colour space, whilst Ohlander et al. [7] employ a recursive region-splitting approach to achieve colour segmentation. The colour spaces they apply are HSI (hue, saturation, intensity) and L*a*b*. These colour spaces are normally limited to a single lighting condition, D65. Hence, the range of each colour attribute, such as hue, has to be narrowed down, since weather conditions change with colour temperatures ranging from 5000 K to 7000 K.
Many other researchers focus on a few colours contained in the signs. For example, Kehtarnavaz et al. [8] process "stop" signs of mainly a red colour, whilst Kellmeyer and Zwahlen [9] have created a system to detect "warning" signs by combining the colours red and yellow; their system is able to detect 55% of the "warning" signs within 55 images. Another system detecting "danger" and "prohibition" signs has been developed by Nicchiotti et al. [10], applying the hue, saturation, and lightness (HSL) colour space. Paclík et al. [11] classify traffic signs into different colour groups, whilst Zadeh et al. [12] have created subspaces in RGB space to enclose the variations of each colour in each of the traffic signs; the subspaces are formed by training on clusters of signs and are determined by the ranges of colours, which are then applied to segment the signs. Similar work is conducted by Priese et al. [13], applying a parallel segmentation method based on the HSV colour space and working on "prohibition" signs. Yang et al. [14] focus on red triangular signs and define a colour range to perform segmentation based on RGB; the authors have developed several additional procedures, based on the estimation of shape, size, and location of the primarily segmented areas, to improve the performance of the RGB method. Miura et al. [15] use both colour and intensity to determine candidate traffic signs and confine themselves to detecting white circular and blue rectangular regions. Their multiple-threshold approach is good for not missing any candidate, but it detects many false candidate regions.
Due to changing weather conditions, such as sunny days, cloudy days, and evening times when all sorts of artificial lights are present [3], the colours of traffic signs as well as of illumination sources appear different, with the result that most colour-based techniques for traffic sign segmentation and recognition may not work properly all the time. So far, no widely accepted method is available [16, 17].
In this study, traffic signs are segmented based on colour content using CIECAM97s, the standard colour appearance model recommended by the CIE (Commission Internationale de l'Eclairage, the International Commission on Illumination) [18, 19].

CIECAM colour appearance model
CIECAM, or CIECAM97s, the colour appearance model recommended by the CIE (Commission Internationale de l'Eclairage), was initially studied by a group of researchers in the UK between the mid-1980s and early 1990s over two consecutive three-year projects. They built on Hunt's colour vision model [20-23], a simplified theory of colour vision covering chromatic adaptation together with a uniform colour space, and conducted a series of psychophysical experiments to study human perception under different viewing conditions simulating real viewing environments. In total, about 40 000 data points were collected for a variety of media, including reflection papers, transparencies, 35 mm projection slides, and textile materials. These data were applied to evaluate and further develop Hunt's model, which was standardised in 1998 as a simple colour appearance model by the CIE [19], called CIECAM. It can predict colour appearance as accurately as an average observer and is expected to extend traditional colorimetry (e.g., CIE XYZ and CIELAB) to the prediction of the observed appearance of coloured stimuli under a wide variety of viewing conditions. The model takes into account the tristimulus values (X, Y, and Z) of the stimulus, its background, its surround, the adapting stimulus, the luminance level, and other factors such as cognitive discounting of the illuminant. The output of a colour appearance model includes mathematical correlates for the perceptual attributes of brightness, lightness, colourfulness, chroma, saturation, and hue. Table 1 summarises the input and output information for CIECAM.
In this study, the colour attributes of lightness (J), chroma (C), and hue angle (h) are applied, which are calculated in (1):

h = tan^-1(b/a),  a = R_a - 12G_a/11 + B_a/11,  b = (R_a + G_a - 2B_a)/9,
J = 100 (A/A_W)^(cz),  A = [2R_a + G_a + (1/20)B_a - 2.05] N_bb,
C = 2.44 s^0.69 (J/100)^(0.67n) (1.64 - 0.29^n),  (1)

where R_a, G_a, B_a are the postadaptation cone responses with detailed calculations in [23], s is the saturation correlate, c and z are surround-dependent constants, and A_W is the A value for the reference white. The constants N_bb, N_cb are calculated as

N_bb = N_cb = 0.725 (1/n)^0.2,  (2)

where n = Y_b/Y_W, the ratio of the Y values for the background and the reference white, respectively. Since it was standardised, CIECAM has not been applied in a practical application. In the present study, this model is investigated for the segmentation of traffic signs, and comparisons with other colour spaces, including CIELUV, HSI, and RGB, are also carried out on the performance of sign segmentation.
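The attribute calculations above can be sketched in code. The following is a minimal illustration of the CIECAM97s-style opponent signals, hue angle, achromatic response, and lightness; the function names are illustrative, the constants c and z come from the full model [19, 23], and the chromatic adaptation step that produces R_a, G_a, B_a is omitted:

```python
import math

def ciecam_hue_and_achromatic(ra, ga, ba, nbb):
    """Opponent signals a, b; hue angle h (degrees); achromatic response A,
    from post-adaptation cone responses (CIECAM97s-style sketch)."""
    a = ra - 12 * ga / 11 + ba / 11
    b = (ra + ga - 2 * ba) / 9
    h = math.degrees(math.atan2(b, a)) % 360
    A = (2 * ra + ga + ba / 20 - 2.05) * nbb
    return a, b, h, A

def lightness(A, Aw, c, z):
    """Lightness correlate J = 100 (A / Aw)^(c z)."""
    return 100 * (A / Aw) ** (c * z)
```

For an achromatic stimulus (equal post-adaptation responses) the opponent signals vanish, and a stimulus whose achromatic response equals that of the reference white has lightness 100.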

Image data collection
A high-quality Olympus digital camera (C-3030 Zoom), calibrated before shooting, is employed to capture pictures under real viewing conditions [24]. The collection of sign images reflects the variety of viewing conditions and the variations in the sizes of traffic signs caused by the changing distance between a traffic sign and the driver (the position from which pictures are taken). The viewing conditions consist of two elements: one is the weather, including sunny, cloudy, and rainy conditions; the other is the viewing angle, with complex traffic sign positions as well as multiple signs at a junction, which distort the shapes of signs to some degree. The distance between the driver (and therefore the car) and the sign determines the size of the traffic sign inside an image and is related to the recognition speed. According to The Highway Code [25], UK, the stopping distance should be more than 10 meters at 30 MPH (miles per hour), giving around 10 seconds to brake the car in case of emergency. Therefore, the photos are taken at distances of 10, 20, 30, 40, and 50 meters from each sign. In total, 145 pictures have been taken, including 52, 60, and 33 pictures on sunny, rainy, and cloudy days, respectively. All the photos are taken with similar camera settings.

Initial estimation of viewing conditions
To apply CIECAM model, a quick and rough classification takes place first to determine a particular set of viewing parameters for each of three categories of viewing conditions, that is, sunny, cloudy, and rainy.
Since most sign photos are taken from similar driving positions, at a normal viewing position an image consists of three parts from top to bottom, containing the sky, the signs/scene, and the road surface, respectively. If, however, an image misses one or two parts (for example, an image taken uphill may miss the road surface), it is classified into the sunny-day condition, which can be corrected during the recognition stage.
Based on this information, image classification can be carried out using the saturation of the sky or the texture of the road. The degree of saturation of the sky (blue in this case) decides the sunny, cloudy, or rainy status, and is determined using a threshold method based collectively on the information from our sign database. In terms of sky colour, a sunny sky is clearly distinguishable from cloudy and rainy skies. To separate cloudy from rainy days, however, another measure has to be introduced by studying the texture of the road, which appears in the bottom third of an image. The road texture is measured using the fast Fourier transform, with the average magnitude (AM) as the threshold, shown in (4):

AM = (1/N_c) Σ_(j,k) |F(j, k)|,  (4)

where |F(j, k)| are the amplitudes of the spectrum calculated by (5) and N_c is the number of frequency components:

F(u, v) = Σ_(m=0)^(M-1) Σ_(n=0)^(N-1) f(m, n) exp[-2πi(um/M + vn/N)],  (5)

where f(m, n) is the image, m, n are the pixel coordinates, M, N are the numbers of image rows and columns, and u, v are the frequency components [26].
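The classification step can be sketched as follows. The AM computation follows the FFT-based texture measure described above; the threshold values and the direction of the cloudy/rainy decision are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def average_magnitude(road_patch):
    """Average FFT magnitude of the bottom-third road region,
    used as a texture measure for thresholding."""
    spectrum = np.fft.fft2(road_patch.astype(float))
    return np.abs(spectrum).mean()

def classify_weather(sky_saturation, road_patch,
                     sat_thresh=0.35, am_thresh=50.0):
    """Rough weather classification: sunny by sky saturation,
    then cloudy vs. rainy by road texture (thresholds are illustrative)."""
    if sky_saturation > sat_thresh:
        return "sunny"
    return "cloudy" if average_magnitude(road_patch) > am_thresh else "rainy"
```

A perfectly flat (textureless) patch has an average magnitude equal to its mean grey level, so a smooth, wet road would fall below the texture threshold in this sketch.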

Traffic sign segmentation
After classification, the reference white is obtained by measuring a piece of white paper many times over a period of two weeks using a colour meter (CS-100A) under each viewing condition. The averages of these values are given in Table 2 and applied in the subsequent calculations. The images taken under real viewing conditions are transformed from RGB to CIE XYZ values using the transform in (6), obtained during the camera calibration procedure, and then to LCH (lightness, chroma, hue), the space generated by the CIECAM model. The range of hue, chroma, and lightness for each weather condition is then calculated as given in Table 3; these values are the means ± standard deviations. Only hue and chroma are employed in the segmentation, in the consideration that lightness hardly changes with viewing conditions. These ranges are applied as thresholds to segment potential traffic sign pixels. Pixels within the ranges are then clustered using a quad-tree histogram method [27], which recursively divides the image into quadrants until all elements are homogeneous or until a predefined "grain" size is reached. Figure 1 demonstrates the interface for traffic sign segmentation, in which three potential signs are segmented from the image; the bottom-right segment is, however, the rear part of a car.
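A minimal sketch of the thresholding and quad-tree steps follows. The hue/chroma ranges and the grain size are illustrative placeholders for the per-weather values of Table 3, and the recursion below captures only the splitting idea of the quad-tree histogram method [27]:

```python
import numpy as np

def segment_by_hue_chroma(hue, chroma, h_range, c_range):
    """Mark pixels whose hue angle and chroma fall inside the
    per-weather threshold ranges (mean +/- std)."""
    h_lo, h_hi = h_range
    c_lo, c_hi = c_range
    return (hue >= h_lo) & (hue <= h_hi) & (chroma >= c_lo) & (chroma <= c_hi)

def quadtree_blocks(mask, x0=0, y0=0, size=None, grain=4):
    """Recursively split a binary mask into quadrants until each block
    is homogeneous or the predefined "grain" size is reached.
    Returns (x, y, size, contains_sign_pixels) tuples; assumes a square mask."""
    if size is None:
        size = mask.shape[0]
    block = mask[y0:y0 + size, x0:x0 + size]
    if block.all() or (~block).all() or size <= grain:
        return [(x0, y0, size, bool(block.any()))]
    half = size // 2
    out = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        out += quadtree_blocks(mask, x0 + dx, y0 + dy, half, grain)
    return out
```

Homogeneous regions collapse to a single block, so clustered sign pixels emerge as a small set of coarse quadrants rather than individual pixels.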

EXPERIMENTAL RESULTS
To evaluate the results of segmentation, two measures are used: one is the probability of correct detection, denoted by P_c; the other is the probability of false detection, that is, the proportion of segmented regions containing no sign. To evaluate the CIECAM model, a different set of 128 pictures is selected, including 48 pictures taken on sunny days and 53 and 27 pictures taken on rainy and cloudy days, respectively. Within these images, a total of 142 traffic signs are visible; among them, 53, 32, and 57 signs are under sunny, cloudy, and rainy conditions, respectively. The results of segmentation are listed in Table 4, which illustrates that for sunny days 94% of signs have been correctly segmented using the CIECAM model. However, it also gives 23% false segments, that is, regions without any signs at all, like the segment at the bottom right of Figure 1 showing the rear part of a car. Table 4 also demonstrates that the model works better on sunny days than on cloudy or rainy days, the last two viewing conditions receiving P_c values of 90% and 85%, respectively. Although the segmentation process gives some false segments, these can be discarded during the second phase of shape classification and recognition described in other papers [28]. Figure 2 demonstrates the rejection of falsely segmented regions after both segmentation and recognition procedures. During the shape classification and recognition stages, the system first checks all the segments and discards the non-sign segments. For all 128 pictures, 99% of false positive regions were discarded: 58% of them were rejected by the shape classification procedure and 41% by the following recognition procedure. The foveal system for traffic sign (FOSTS) recognition, which applies the behavioural model of vision (BMV), retrieves the correct sign matching the segment of interest; the correct signs have been stored in a database in advance. Figure 3 demonstrates an interface for sign recognition [28].
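The two evaluation measures reduce to simple ratios. The helper below mirrors the paper's P_c and false-detection measures; the exact denominators the authors used (total visible signs, total segmented regions) are an assumption here:

```python
def detection_rates(correct_signs, total_signs, false_regions, total_regions):
    """Probability of correct detection P_c and false-detection rate,
    both as fractions of their respective totals (assumed denominators)."""
    p_c = correct_signs / total_signs
    p_false = false_regions / total_regions
    return p_c, p_false
```

For example, 47 of 50 signs correctly segmented gives P_c = 0.94, matching the sunny-day figure reported in Table 4.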

COMPARISON WITH HSI AND CIELUV METHODS
In the literature, HSI and CIELUV are the most commonly used spaces for segmentation based on colour. A comparison with the CIECAM approach applied in this study is therefore carried out. The conversion to HSI (hue, saturation, and intensity), which is claimed to be much closer to human perception [27] than RGB, the space in which images are originally represented, is shown in (8):

I = (R + G + B)/3,
S = 1 - 3 min(R, G, B)/(R + G + B),
H = cos^-1{[(R - G) + (R - B)] / [2 ((R - G)^2 + (R - B)(G - B))^(1/2)]},  (8)

with H = 360° - H when B > G.
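The RGB-to-HSI conversion above can be implemented directly. This is the standard geometric form of the conversion (hue is conventionally set to 0 for achromatic pixels, where it is undefined):

```python
import math

def rgb_to_hsi(r, g, b):
    """Standard RGB -> HSI conversion; r, g, b are in [0, 1].
    Returns hue in degrees, saturation and intensity in [0, 1]."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0  # hue undefined for achromatic colours; 0 by convention
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i
```

Pure red maps to hue 0° and pure green to 120°, which is a quick sanity check for the angle convention.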

CIELUV is recommended by the CIE for specifying colour differences and is approximately uniform: equal scale intervals represent approximately equal perceived differences in the attributes considered. The space has been widely used for evaluating colour differences in connection with the colour rendering of light sources and with colour-difference control in surface colour industries, including textiles, painting, and printing. The attributes generated by the space are hue (H), chroma (C), and lightness (L), as described in (9) [29]:

L* = 116 (Y/Y_0)^(1/3) - 16,
u* = 13 L* (u' - u'_0),  v* = 13 L* (v' - v'_0),
C*_uv = (u*^2 + v*^2)^(1/2),  H_uv = tan^-1(v*/u*),  (9)

where Y_0, u'_0, v'_0 are the Y, u', v' values for the reference white.
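The CIELUV LCH attributes can be computed from XYZ as follows; this sketch uses the standard CIE formulas (including the linear branch for very dark colours, which the equations above omit):

```python
import math

def xyz_to_luv_lch(x, y, z, xn, yn, zn):
    """XYZ -> CIELUV -> (lightness, chroma, hue angle in degrees).
    (xn, yn, zn) is the reference white."""
    def chromaticity(x, y, z):
        d = x + 15 * y + 3 * z
        return 4 * x / d, 9 * y / d  # u', v'
    yr = y / yn
    l = 116 * yr ** (1 / 3) - 16 if yr > 0.008856 else 903.3 * yr
    u_p, v_p = chromaticity(x, y, z)
    un_p, vn_p = chromaticity(xn, yn, zn)
    u = 13 * l * (u_p - un_p)
    v = 13 * l * (v_p - vn_p)
    c = math.hypot(u, v)
    h = math.degrees(math.atan2(v, u)) % 360
    return l, c, h
```

The reference white itself maps to L = 100 with zero chroma, which mirrors the behaviour of the CIECAM lightness correlate.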
The segmentation procedure using these two spaces is similar to that of CIECAM. Firstly, the colour ranges for each attribute are obtained for each weather condition. Then, images are segmented using thresholding method based on these colour ranges. Table 5 gives the results of comparison between these three colour spaces.
These data show that for each weather condition, CIECAM outperforms the other two spaces, with correct segmentation rates of 94%, 90%, and 85%, respectively, for sunny, cloudy, and rainy conditions. CIELUV performs better than HSI for the cloudy and rainy conditions. HSI also gives the largest percentage of false segmentation, with 29%, 37%, and 39%, respectively, for the sunny, cloudy, and rainy weather conditions. The results further show that all colour spaces perform worse on rainy days than under the other two weather conditions (sunny and cloudy), which is in line with everyday experience: visibility is worse for drivers on a rainy day than on a sunny or cloudy one. Figure 4 demonstrates the results of segmentation carried out with the three colour spaces. CIECAM gives two correct segments containing signs. CIELUV also segments the two signs correctly but gives one false segment without any signs. HSI, by contrast, gives two correct sign segments and two false segments, which again illustrates that HSI performs the worst in the colour-based traffic sign segmentation task.

TRAFFIC SIGN SEGMENTATION BASED ON RGB
Comparison with the RGB colour space for the segmentation of traffic signs is also carried out on a calibrated monitor, with the calibrated colour temperature set to the average daytime D65. On the basis of a preliminary evaluation, the RGB composition characteristic for traffic signs was determined as a set of threshold conditions on the red, green, and blue components of a pixel for each sign colour (e.g., red signs). In addition, when determining whether a segmented region is a potential traffic sign, two additional conditions are taken into account, as follows.
(i) The size of clustered colour blobs is no less than 10 × 10 pixels.
(ii) The width/height ratio of the segmented region is in the range 0.5 to 1.5.
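These two region checks translate directly into code; the function name is illustrative:

```python
def is_potential_sign(width, height, min_size=10):
    """Apply the two additional region conditions: the blob must be at
    least 10x10 pixels, and its width/height ratio must lie in 0.5-1.5."""
    if width < min_size or height < min_size:
        return False
    ratio = width / height
    return 0.5 <= ratio <= 1.5
```

The aspect-ratio bound rejects long, thin regions (e.g., road markings or kerbs) that pass the colour thresholds but cannot be signs.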
The same group of pictures (n = 128) as tested by CIECAM is segmented based on the approach described above. The results obtained are listed in Table 6.
Comparison with the data presented in Table 4 indicates that the probability of correct traffic sign segmentation by RGB is lower than that by CIECAM for sunny and cloudy weather conditions. In addition, the probability of false positive detection is much higher for the RGB method and strongly depends on the weather conditions.

CONCLUSIONS AND DISCUSSIONS
This paper introduces a new colour-based approach for the segmentation of traffic signs. It applies the CIE colour appearance model, which was developed based on human perception. The experimental results show that the CIECAM model performs very well and can give very accurate segmentation results, with up to a 94% accuracy rate for sunny days. When compared with HSI, CIELUV, and RGB, the three most popular colour spaces used in colour segmentation research, CIECAM outperforms the other three. This result not only confirms that the model's prediction is closer to the average observer's visual perception but also opens up a new approach for colour segmentation in image processing. However, CIECAM is computationally more complex than the other colour spaces, requiring more than 20 calculation steps, which poses a problem when processing video images in real time. At the moment, the processing time for segmentation can be reduced to 1.8 seconds, and the recognition time is 0.19 seconds (for 86 signs in the traffic sign database scanned from The Highway Code [25], UK, and arranged by colour and shape), giving about 2 seconds for processing one frame. Video images usually arrive at 8 frames per second, which means that the total time (segmentation time plus recognition time) should be no more than 0.125 seconds per frame to keep up. Therefore, more work needs to be done to further optimise the segmentation and recognition algorithms in order to meet the demands of real-time traffic sign recognition; incorporating the other method explained in [30] can also be an approach. Although the correct segmentation rate is less than 100% when applying CIECAM, the main reason is that the sign images are too small in some scenes. When processing video images, the signs of interest become larger as the car approaches them. Hence, the correct segmentation rate can be expected to improve.

ACKNOWLEDGMENTS
This work is partly supported by The Royal Society, UK, under the International Scientific Exchange Scheme and partly sponsored by Russian Foundation for Basic Research, Russia, Grant no. 05-01-00689. Their support is gratefully acknowledged.
