Developing and Validating a Simulation Model for Counting and Classification of Vehicles

An algorithm-based approach is presented for extracting traffic data using video image processing. An offline program extracts vehicles, tracks them, and provides a vehicle count for a short period of time, using background subtraction, shadow removal, and pixel analysis to extract moving objects. The results show that the algorithm counts 95% of the vehicles; the remaining error is attributed to shaking in the video feed. The counts were analyzed by statistical regression against the observation method, and the values of R Square and Significance F confirm their credibility. The classification of vehicles was performed using the improfile command in MATLAB video image processing, which computes the intensity values along a line or multiline path in an image. The program was developed to detect vehicles in traffic videos and produce vehicle counts over small time periods as an assistance tool for researchers who need vehicle counting.


INTRODUCTION
Speed, flow, and density are macroscopic parameters characterizing the traffic stream as a whole, while headway and spacing are microscopic measures distinguishing individual vehicles (TRB 2010). High demand for computer algorithms and technological solutions has arisen from computer vision techniques for real-time traffic analysis and monitoring (Mathew & Rao 2007). The most convincing applications are in vehicle tracking, where the crucial issue is initiating a track automatically. In recent years, image processing has been applied to the field of traffic research with goals that include queue detection, incident detection, vehicle classification, and vehicle counting (Zehang et al. 2004; Dailey et al. 2000).
Many researchers have tried to develop methods that can be applied in video-based traffic surveillance (Qian et al. 2013). Applications of video-based surveillance include vehicle tracking, counting the number of vehicles, calculating vehicle velocity, finding vehicle trajectories, classifying vehicles, estimating traffic density, finding traffic flow, license plate recognition, etc. (Puan et al. 2014; Trivedi et al. 2007). Tracking is the process of using measurements obtained from a target in order to maintain an estimate of its current state. A track is a state trajectory estimated from a set of measurements that have been associated with the same target. In tracking targets, there are two major problems: the uncertainty associated with the measurements and their accuracy, which is usually modeled by additive noise, and the uncertainty of the measurement origin, since a measurement used in the tracking algorithm may not have originated from the target of interest (Morris & Trivedi 2008).

THEORETICAL WORK

PIXEL ANALYSIS
Computing the absolute difference between two successive images pixel by pixel and comparing it with a preset threshold is one of the basic methods of change detection. A pixel I(i,j) is considered a foreground pixel if the frame difference d(i,j) satisfies:

d(i,j) = |I_t(i,j) - I_t-1(i,j)| > T (1)

where T denotes the preset threshold and I_t(i,j) and I_t-1(i,j) are the pixels at the current and previous frames (Jianxin et al. 2003; Avery et al. 2004). Note that the difference of two corresponding pixels in subsequent frames is the main problem with this simple approach. If the pixel under consideration lacks texture and is part of a moving object, the absolute difference can be less than the threshold, and the pixel will be wrongly classified as a background pixel. Random camera noise can further confuse the decision.
To overcome this problem, a background image B is constructed using several frames from the video. Instead of comparing with the previous frame, the current frame is compared with the background image:

d(i,j) = |I_t(i,j) - B(i,j)| > T (2)

For background image construction, a histogram is built at each pixel location. A pixel can take a value between 0 and 255, and frames with the same pixel value are grouped together to form the histogram. Since the foreground region is covered by moving vehicles of different colors, it can be assumed that the pixel value occurring most frequently in the histogram is the background pixel. In this way a background image is generated.
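The thresholding of Eq. (2) and the histogram-mode background construction can be sketched as follows. This is an illustrative Python version rather than the paper's MATLAB implementation; the function names, the toy frames, and the threshold value T=30 are assumptions for the example, not values from the paper.

```python
# Illustrative sketch of mode-based background construction (not the paper's code).
# Frames are assumed to be grayscale images stored as lists of lists of 0-255 values.
from collections import Counter

def build_background(frames):
    """Per-pixel mode over all frames: the most frequent value at each
    location is taken as the background pixel, since moving vehicles of
    varying colors are assumed not to dominate any histogram bin."""
    rows, cols = len(frames[0]), len(frames[0][0])
    bg = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            hist = Counter(f[i][j] for f in frames)
            bg[i][j] = hist.most_common(1)[0][0]
    return bg

def foreground_mask(frame, bg, T=30):
    """Pixel (i, j) is foreground if |I_t(i,j) - B(i,j)| > T, as in Eq. (2).
    T=30 is an assumed threshold for illustration."""
    return [[1 if abs(p - b) > T else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, bg)]

# Toy example: three 2x2 frames where pixel (0,0) is briefly crossed by a "vehicle".
frames = [[[50, 10], [10, 10]],
          [[200, 10], [10, 10]],
          [[50, 10], [10, 10]]]
bg = build_background(frames)          # mode at each pixel -> [[50, 10], [10, 10]]
mask = foreground_mask(frames[1], bg)  # only (0,0) differs by more than T
```

A real implementation would operate on full video frames; the per-pixel histogram mode is the same idea at any resolution.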

SHADOW REMOVAL
An edge-based algorithm for removing cast shadows was proposed by (Xiao et al. 2007). They based their algorithm on three observations made after generating edges on foreground images: 1. Cast shadows present sharp edges because the illumination source is far from the objects; 2. The vehicle has significant edges, whereas the corresponding shadow is generally edgeless; 3. The edge of the cast shadow attaches to the boundary region of the moving foreground mask.
According to the assumption of (Porikli & Thornton 2005), "shadow decreases the luminance and changes the saturation, yet it does not affect the hue". Figure (1) shows the color vector I(p) projected onto the background vector B(p) to obtain the luminance change h:

h = |I(p)| cos(φ) (3)

where φ is the angle between B(p) and I(p). Another angle is computed between the background vector and the (1,1,1) vector. They further define the luminance ratio r from h and the magnitude of the background vector. Pixels satisfying the following criteria are considered shadow pixels:

φ < φ_max and r_1 < r < r_2 (4)

where φ_max is the maximum angle separation and r_1 and r_2 determine the maximum allowed darkness and brightness, respectively. In this research, this approach is used to eliminate all shadow pixels.
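The shadow test of Eqs. (3)-(4) can be sketched as follows, in Python rather than MATLAB. The bound values phi_max, r_dark, and r_bright, and the working definition r = h / |B(p)|, are illustrative assumptions, since the paper does not give numeric bounds.

```python
# Illustrative sketch of the angle/luminance-ratio shadow test (assumed bounds).
import math

def shadow_pixel(I, B, phi_max=0.1, r_dark=0.3, r_bright=0.95):
    """Classify a pixel color vector I against the background vector B (RGB).

    A shadow pixel keeps roughly the same hue direction as the background
    (small angle phi) but is darker (luminance ratio r between the darkness
    and brightness bounds). Both vectors are assumed nonzero."""
    dot = sum(a * b for a, b in zip(I, B))
    nI = math.sqrt(sum(a * a for a in I))
    nB = math.sqrt(sum(b * b for b in B))
    phi = math.acos(min(1.0, dot / (nI * nB)))  # angle between I(p) and B(p)
    h = nI * math.cos(phi)                      # projection of I onto B: luminance change
    r = h / nB                                  # assumed luminance ratio definition
    return phi < phi_max and r_dark < r < r_bright
```

For example, a pixel at half the background's brightness along the same color direction passes the test, while an unchanged pixel (r = 1.0) or a differently colored pixel (large phi) does not.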

VEHICLE TRACKING
The distance between all vehicle images, computed from the coordinates of their centres, was used by (Atkociunas et al. 2005) to find the tracked vehicles in subsequent frames. First, they marked the geometric centre of each vehicle, calculated as follows:

Xc = (1/n) Σ Xj ,  Yc = (1/n) Σ Yj (5)

where Xc and Yc are the vehicle centre coordinates and Xj and Yj are the coordinates of the pixels lying within the boundary of the target vehicle. Their tracking assumption was that "the displacement of the image centre of the observed vehicle in two neighboring frames is less than the distance between it and the other vehicles' centres in the same or neighboring frames". The distances between all vehicles in frame n and all vehicles in frame (n+1) are then calculated from the coordinates (Xc, Yc) of their centres to find the tracked vehicle:

D = sqrt[ (Xc,n - Xc,n+1)^2 + (Yc,n - Yc,n+1)^2 ] (6)
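The centroid of Eq. (5) and the minimum-distance matching can be sketched as follows. This is an illustrative Python version; the function names and toy coordinates are assumptions, not the paper's data.

```python
# Illustrative sketch of centroid-based nearest-neighbour tracking.
import math

def centroid(pixels):
    """Geometric centre as in Eq. (5): the mean of the (x, y) coordinates
    of the pixels lying inside the vehicle mask."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def match_tracks(centres_n, centres_n1):
    """For each vehicle centre in frame n, pick the nearest centre in
    frame n+1; under the displacement assumption above, the minimum-distance
    pairing identifies the same vehicle in both frames."""
    matches = {}
    for i, (x1, y1) in enumerate(centres_n):
        dists = [math.hypot(x1 - x2, y1 - y2) for (x2, y2) in centres_n1]
        matches[i] = dists.index(min(dists))
    return matches

# Toy example: two vehicles move slightly between frames.
# matches maps each index in frame n to its nearest index in frame n+1.
matches = match_tracks([(0, 0), (10, 10)], [(1, 0), (10, 11)])  # {0: 0, 1: 1}
```

A production tracker would also handle vehicles entering or leaving the scene; this sketch only shows the distance criterion itself.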

Finding the minimum distance gives the tracked vehicle. Applying this method to all the vehicles present in the current image keeps them all tracked.

METHODOLOGY
An optical flow estimation model was used to estimate the motion vectors in each frame of the video sequence. Binary feature images were produced by thresholding and performing morphological closing on the motion vectors. The model locates the cars in each binary feature image using the Blob Analysis block (Figure 2).
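Morphological closing (dilation followed by erosion) can be sketched on a binary mask as follows. This pure-Python version is illustrative; the 3x3 neighbourhood with truncation at the image borders is an assumption, not the Simulink block's exact parameterization.

```python
# Illustrative sketch of morphological closing on a binary image
# (list of lists of 0/1), using an assumed 3x3 neighbourhood.

def _neigh(img, i, j):
    """Values in the 3x3 neighbourhood of (i, j), truncated at borders."""
    rows, cols = len(img), len(img[0])
    return [img[a][b]
            for a in range(max(0, i - 1), min(rows, i + 2))
            for b in range(max(0, j - 1), min(cols, j + 2))]

def dilate(img):
    """Set a pixel if any neighbour is set: grows blobs outward."""
    return [[1 if any(_neigh(img, i, j)) else 0
             for j in range(len(img[0]))] for i in range(len(img))]

def erode(img):
    """Keep a pixel only if all neighbours are set: shrinks blobs back."""
    return [[1 if all(_neigh(img, i, j)) else 0
             for j in range(len(img[0]))] for i in range(len(img))]

def closing(img):
    """Morphological closing = dilation followed by erosion; it fills
    small gaps inside a detected vehicle blob without shifting its edges."""
    return erode(dilate(img))
```

For example, a one-pixel hole in a blob row such as [1, 1, 0, 1, 1] is filled to [1, 1, 1, 1, 1] by the closing, which is what makes the subsequent blob analysis see one vehicle instead of two fragments.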
The "From Multimedia File" block reads audio samples, video frames, or both from a multimedia file and imports the data into the Simulink model. The "Color Space Conversion" block converts color information between color spaces; the conversion from the R'G'B' color space to intensity is defined by the following equation:

intensity = 0.299 R' + 0.587 G' + 0.114 B' (7)

The "Optical Flow" block estimates the direction and speed of object motion from one image or video frame to another using either the Horn-Schunck or the Lucas-Kanade method. To compute the optical flow between two images, the following optical flow constraint equation must be solved:

I_x u + I_y v + I_t = 0 (8)

The "Thresholding and Region Filtering" block represents a series of subsystems that: 1. Provide an input port for the subsystem or model; 2. Represent mathematical functions including logarithmic, exponential, power, and modulus functions; 3. Compute the mean value along the specified dimension of the input or across time (running mean); 4. Use the Neighborhood size parameter to specify the size of the neighborhood; 5. Perform morphological closing on an intensity or binary image. (Source: http://www.mathworks.com)
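The intensity conversion of Eq. (7) can be sketched as follows; the function name is illustrative, and the weights shown are the BT.601 luma coefficients commonly used by the Simulink Color Space Conversion block for R'G'B'-to-intensity conversion.

```python
# Illustrative sketch of the R'G'B'-to-intensity conversion of Eq. (7).

def rgb_to_intensity(r, g, b):
    """Weighted sum of the gamma-corrected channels:
    I = 0.299 R' + 0.587 G' + 0.114 B' (BT.601 luma weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure white maps to full intensity, pure black to zero:
white = rgb_to_intensity(255, 255, 255)  # 255.0
black = rgb_to_intensity(0, 0, 0)        # 0.0
```

Green contributes the most to intensity because the weights track the human eye's sensitivity, which is why a green vehicle appears brighter than an equally saturated blue one after conversion.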
Finally, the Draw Shape block was used to draw a green rectangle around the cars that pass beneath the white line. Figure (4) shows four windows representing the input screen, the vector analysis for the vehicles, and the thresholding analysis, concluding with the result window. It is necessary to select sites with a high level of traffic flow that give realistic results suitable for statistical analysis.
To collect correct and sufficient data satisfying the requirements of the statistical calculations and representations, selected sites should satisfy the following: 1. An accessible vantage point exists, allowing data collection without affecting the observed traffic behavior; 2. Vehicle flow varies over the times of the day; 3. A range of vehicle movement types and traffic compositions is represented.

DATA COLLECTION
During the daytime of working days (7:30-10:30 am), surveys were made of three highway sections in the two opposite directions along LEBUHRAYA KAJANG SILK, a divided multilane highway crossing KAJANG city, MALAYSIA. Based on the surveys, divided multilane highway sections were selected, since these were found to satisfy the objectives and specifications of the data collection. Fifteen minutes was adopted as the time period along the specified working time, which gives (12) time segments per section per direction; in total, (72) time segments were recorded over all sections and directions. The recorded data were abstracted from the video films with the aid of the MATLAB® program. Developed by MathWorks, MATLAB® allows matrix manipulations, plotting of functions and data, implementation of algorithms, and creation of user interfaces.
Unnecessary objects were removed from the video sections using the shadow removal technique, based on the edge-generation algorithm for foreground object images and omitting the background disruption by changing the saturation level.

DATA ANALYSIS AND RESULTS
The procedures presented in this section perform a multiple regression analysis between the observed and programmed counts based on the survey, to establish the credibility of the program counts as a substitute for observed counts, and then classify the vehicles using the video image processing technique.

MULTIPLE REGRESSION ANALYSIS
A multiple regression analysis is performed when there are two different variables and the differences between them must be compared to establish significance. To do so, we must first state the dependent and independent variables for the analysis. Linear regression models with more than one independent variable are referred to as multiple linear models, as opposed to simple linear models with one independent variable (Orlov 1996). Equation (9) shows the general formula of regression models:

y = b0 + b1 x1 + b2 x2 + ... + bp xp (9)

Where:
y = dependent variable (predicted by the regression model)
b0 = intercept (or constant)
bi (i = 1, 2, ..., p) = ith coefficient corresponding to xi
xi (i = 1, 2, ..., p) = ith independent variable from the total set of p variables
p = number of independent variables (number of coefficients)
n = number of observations (experimental data points)

The two variables in this project are the observed and programmed counts based on the survey. First, the Hypothesized Mean Difference is set to (0), meaning no difference is assumed between the two groups, and Alpha is set to (0.05). The two groups are then analyzed by regression. Table (1) and Figure (5) show the analysis results for the observed and programmed counts for Section A (7:30-10:30), North Bound (NB). The results show that the value of R Square is (0.9971). The second piece of information is the value of Significance F, (5.41E-14), which is less than the Alpha value of (0.05), indicating that the regression is statistically significant. The Intercept Coefficient is (-8.3294). The high R Square shows that the model's result can be accepted despite the minor reading error between the observation and program variables, as there is no significant mean difference.
Classification modelling was made to classify the vehicles, as a tool to prevent miscounting small vehicles such as motorcycles, by scanning the intensity along the longitudinal line of the vehicle across pixels. Enough data was gathered (72 time segments) across the specified sections to show the credibility of the algorithm model against real counting, which takes a long time and much effort to collect, especially during bad weather. Therefore, if the algorithm model is to be applied, the resulting values need to be adjusted according to the methodology section.
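For one predictor (programmed vs. observed counts), the least-squares fit and the R Square reported in Table (1) can be sketched as follows; the function names and toy data are illustrative assumptions, not the paper's counts.

```python
# Illustrative sketch of a least-squares line fit and R Square
# for one predictor, as used to compare observed and programmed counts.

def fit_line(x, y):
    """Least-squares slope b1 and intercept b0 for y = b0 + b1*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return b0, b1

def r_squared(x, y, b0, b1):
    """R Square = 1 - SS_residual / SS_total: the fraction of the variance
    in y explained by the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

# Toy counts (hypothetical, not the survey data): a near-perfect agreement
# between observed x and programmed y gives an R Square close to 1.
b0, b1 = fit_line([100, 120, 140, 160], [99, 118, 141, 158])
r2 = r_squared([100, 120, 140, 160], [99, 118, 141, 158], b0, b1)
```

An R Square near 1, as with the paper's value of 0.9971, means the programmed counts track the observed counts almost exactly up to a constant offset and scale.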
FIGURE 1. Weak shadow is defined as a conic volume around the corresponding background vector.

Figure (3) shows the subsystem block of Thresholding and Region Filtering model.

FIGURE 2.


FIGURE 4. Display Results Windows. Source: http://www.mathworks.com

FIGURE 5. Line fit plot for program and observation.

TABLE 1. Summary output for Section A (NB).