VIDEO-BASED VEHICLE DETECTION ON A TWO-WAY ROAD Summary

The paper presents a method of vehicle detection on a two-way road. Vehicle detection is carried out on the basis of the video stream from a camera placed over the road. The input image sequence is created from consecutive frames taken from the video stream, and its images are processed one by one. A detection field is defined for each lane of the road. Images from the input image sequence are converted into a point image representation, and the sums of the edge points within the detection fields are calculated. States of the detection fields are determined on the basis of the calculated sums, and vehicles are detected by analysis of these states. Experimental results are provided.


INTRODUCTION
In contemporary traffic systems, image data are utilized for the determination of traffic parameters [1,5,8]. Traffic parameters can be determined by the application of image analysis in video-based traffic systems. Such systems are usually complex and multistage, and use various processing methods. The determination of traffic parameters can be carried out in stages such as filtering, edge detection, morphological operations, segmentation, background updating, shadow elimination, determination of vehicle parameters, feature extraction, vehicle identification and vehicle tracking [6,7,10].
In the process of the video-based determination of traffic parameters, image processing is often performed. An important image processing technique is edge detection. The popular techniques of edge detection are gradient methods based on discrete convolution. In gradient methods of edge detection based on discrete convolution, various masks are utilized, e.g., Roberts masks, Sobel masks and Prewitt masks [4,11]. There are also other well-known edge detection methods [9,12].
The proposed video-based method of vehicle detection is based on image conversion into point representation [3]. Vehicle detection is carried out by the determination of changes to the detection field state. The detection field state depends on the sum of the edge points calculated within the detection field [2]. The proposed method of vehicle detection is intended for automatic measurement systems of road traffic parameters.

ALGORITHM OF VEHICLE DETECTION
The algorithm of vehicle detection on a two-way road uses image data obtained from a camera placed over a road in a measuring station. Input image data consist of consecutive frames taken from an input video stream, which form the input image sequence. Vehicle detection on a two-way road is performed as follows:

- description of an input image sequence
- definition of detection fields
- image conversion into the point representation
- determination of states of the detection fields
- vehicle detection

Each image from the input image sequence is labelled with an ordinal number. The properties of images from the input image sequence depend on the applied camera. The content of individual images differs in the number of objects and their location. The quality of images from the input image sequence can change with the time of day and weather conditions. Images from the input image sequence are processed one by one.
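The steps above can be sketched as one processing loop over the input image sequence. The sketch below is illustrative, not the authors' implementation: the conversion into point representation is simplified to a horizontal-gradient threshold, detection fields are plain rectangles, and all names and threshold values are my assumptions.

```python
import numpy as np

def convert_to_points(image, t=40):
    # Stand-in conversion into point representation: horizontal-gradient
    # thresholding only (the paper uses small gradients in four directions).
    g = np.abs(np.diff(image.astype(np.int32), axis=1))
    b = np.zeros(image.shape, dtype=np.uint8)
    b[:, :-1] = (g >= t).astype(np.uint8)
    return b

def field_sum(points, field):
    n_u, n_b, m_l, m_r = field            # rows n_U..n_B, columns m_L..m_R
    return int(points[n_u:n_b + 1, m_l:m_r + 1].sum())

def process_sequence(images, fields, r_occupied=4, r_free=1):
    # One occupancy pointer P(k) per detection field; a vehicle is counted
    # on the transition "occupied" -> "free" (the vehicle has left the field).
    pointers = [0] * len(fields)
    detections = []
    for i, image in enumerate(images):    # images processed one by one
        points = convert_to_points(image)
        for k, field in enumerate(fields):
            s = field_sum(points, field)
            if pointers[k] == 0 and s >= r_occupied:
                pointers[k] = 1
            elif pointers[k] == 1 and s <= r_free:
                pointers[k] = 0
                detections.append((i, k))  # vehicle detected in lane k
    return detections
```

A synthetic sequence with a bright "vehicle" passing through one detection field yields a single detection at the frame where the field becomes free again.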

DESCRIPTION OF THE INPUT IMAGE SEQUENCE
The input image sequence consists of consecutive images from the input video stream at the frame rate of f frames per second (fps). Each image of the input image sequence is in the greyscale format with an intensity resolution of 8 bits and a size of M x N (columns by rows) pixels. The position of an image in the input image sequence is marked by the sequence ordinal number denoted by i. Examples of images from the input image sequence, with a size of 256 x 256 pixels and a frame rate of 30 fps, are shown in Figure 1. As neighbouring images in the input image sequence are spaced at 1/f-second intervals, the time resolution of images in the input image sequence is equal to 1/f s.
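For illustration, a captured frame can be reduced to the assumed 8-bit greyscale format and assigned its time position in the sequence. This is a minimal sketch; the helper names and the use of BT.601 luma weights are my assumptions, not taken from the paper.

```python
import numpy as np

def to_greyscale_8bit(frame: np.ndarray) -> np.ndarray:
    # Reduce an RGB frame to the 8-bit greyscale format assumed for the
    # input image sequence (ITU-R BT.601 luma weights, an assumption here).
    if frame.ndim == 2:
        return frame.astype(np.uint8)
    weights = np.array([0.299, 0.587, 0.114])
    return np.clip(np.rint(frame @ weights), 0, 255).astype(np.uint8)

def frame_time(i: int, f: float) -> float:
    # Image i of the sequence lies i / f seconds into the video stream,
    # so the time resolution of the sequence is 1/f s.
    return i / f
```

For a 30 fps stream, `frame_time(30, 30.0)` gives 1.0 s, matching the stated 1/f-second spacing of neighbouring images.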

DEFINITION OF THE DETECTION FIELDS
For each road lane, one rectangular detection field is defined. Examples of images from the input image sequence, with the marked detection fields, are shown in Figure 2. Each detection field spans columns mL to mR and rows nU to nB: the detection fields are mR - mL + 1 pixels wide and run across each road lane, while their length, equal to nB - nU + 1, is set small, thereby allowing a two-state interpretation of features in the detection field.
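A detection field can thus be represented by its column and row bounds. The helper below is an illustrative sketch (the function and key names are mine) that records the width and length exactly as defined above.

```python
def detection_field(m_l: int, m_r: int, n_u: int, n_b: int) -> dict:
    # Rectangular detection field for one road lane: columns m_L..m_R run
    # across the lane, rows n_U..n_B along it.  The width is m_R - m_L + 1
    # and the length n_B - n_U + 1 is kept small so that the field content
    # admits a two-state (free/occupied) interpretation.
    assert m_l <= m_r and n_u <= n_b
    return {"cols": (m_l, m_r), "rows": (n_u, n_b),
            "width": m_r - m_l + 1, "length": n_b - n_u + 1}
```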

IMAGE CONVERSION INTO POINT REPRESENTATION
The conversion of an image into point representation processes the source image in the bitmap format into the target binary image. Image conversion into point representation is performed with the use of small image gradients [3].

Z. Czapla
Conversion of an image into point representation uses two image matrices. Image matrix A contains pixel values of the source image. Binary image matrix B has the same size as matrix A and is allocated for the target point values. Except for the border elements, for each element a(n, m) of matrix A (1 <= n <= N - 2 and 1 <= m <= M - 2), the magnitudes of small image gradients in the horizontal, vertical and diagonal directions are determined. The magnitudes of the small row gradient (gr), the small column gradient (gc), the small gradient "diagonal down" (gd) and the small gradient "diagonal up" (gu) are determined, respectively, by the following equations:

gr = |a(n, m + 1) - a(n, m)|,   gc = |a(n + 1, m) - a(n, m)|,   (1)

gd = |a(n + 1, m + 1) - a(n, m)|,   gu = |a(n - 1, m + 1) - a(n, m)|.   (2)

The maximum value of the gradient magnitudes is determined for each processed element of matrix A according to the following equation:

gmax = max(gr, gc, gd, gu).   (3)

The target binary values are determined on the basis of the appropriate maximum value of the gradient magnitudes and the threshold value denoted by T, as follows:

b(n, m) = 1 if gmax >= T, and b(n, m) = 0 otherwise.   (4)
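The conversion can be sketched directly from these gradient definitions: for each non-border pixel, the four small-gradient magnitudes are computed and their maximum is compared with the threshold T. A minimal Python sketch (function and variable names are mine, not the authors'):

```python
import numpy as np

def to_point_representation(a: np.ndarray, t: int) -> np.ndarray:
    # Convert a greyscale image (matrix A) into a binary point
    # representation (matrix B) using small image gradients.
    a = a.astype(np.int32)            # avoid uint8 wrap-around in differences
    n_rows, m_cols = a.shape
    b = np.zeros((n_rows, m_cols), dtype=np.uint8)
    # Process every non-border element of matrix A.
    for n in range(1, n_rows - 1):
        for m in range(1, m_cols - 1):
            gr = abs(a[n, m + 1] - a[n, m])        # small row gradient
            gc = abs(a[n + 1, m] - a[n, m])        # small column gradient
            gd = abs(a[n + 1, m + 1] - a[n, m])    # small gradient "diagonal down"
            gu = abs(a[n - 1, m + 1] - a[n, m])    # small gradient "diagonal up"
            g_max = max(gr, gc, gd, gu)
            b[n, m] = 1 if g_max >= t else 0       # edge point when g_max >= T
    return b
```

On a synthetic image with a vertical step edge, the ones in the output mark the pixels immediately to the left of the step, where the row gradient exceeds the threshold.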

DETERMINATION OF STATES OF THE DETECTION FIELDS
The states of the detection fields are described by pointers of occupancy. The pointer value is equal to 0 for the state "detection field free", and to 1 for the state "detection field occupied". The state of a detection field is determined on the basis of the sum of the edge points inside the detection field [2,3].
The sum of the edge points for the detection field k in the image i of the input image sequence is calculated as

S(k, i) = sum of b(n, m) over all points (n, m) of the detection field k.   (5)

Average sums of the edge points within the detection fields are calculated for the current image i and the p previous images. For images satisfying i > p, the average sums are given by

Savg(k, i) = (1 / (p + 1)) * [S(k, i - p) + ... + S(k, i)].   (6)

The state of the detection field k is described by the pointer of occupancy P(k) (equal to 0 for the state "detection field free" and equal to 1 for the state "detection field occupied"). The state of the detection field k changes from "detection field free" to "detection field occupied" if the following condition is satisfied:

P(k) = 0 and Savg(k, i) >= R(k)O,   (7)

where R(k)O is the threshold value for the change into the occupied detection field. The state of the detection field k changes from "detection field occupied" to "detection field free" if the condition

P(k) = 1 and Savg(k, i) <= R(k)F   (8)

is satisfied, where R(k)F is the threshold value for the change into the free detection field.
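A minimal sketch of this state-update logic, assuming the occupancy decision compares the average edge-point sum over the current and p previous images with the thresholds R(k)O and R(k)F (one plausible reading of the original conditions); the class and parameter names are mine:

```python
from collections import deque

class DetectionField:
    # Occupancy state of one detection field, updated image by image.
    # The moving average covers the current image and the p previous images;
    # R_O and R_F are the thresholds for the "occupied" and "free" changes.
    def __init__(self, p: int, r_occupied: float, r_free: float):
        self.sums = deque(maxlen=p + 1)   # S(k, i) for the last p + 1 images
        self.r_occupied = r_occupied      # R(k)_O
        self.r_free = r_free              # R(k)_F
        self.pointer = 0                  # P(k): 0 = free, 1 = occupied

    def update(self, edge_point_sum: int) -> int:
        self.sums.append(edge_point_sum)
        if len(self.sums) < self.sums.maxlen:
            return self.pointer           # i <= p: not enough history yet
        avg = sum(self.sums) / len(self.sums)   # average sum Savg(k, i)
        if self.pointer == 0 and avg >= self.r_occupied:
            self.pointer = 1              # "detection field occupied"
        elif self.pointer == 1 and avg <= self.r_free:
            self.pointer = 0              # "detection field free"
        return self.pointer
```

Because the two thresholds form a hysteresis, short fluctuations of the edge-point sum around a single level do not toggle the pointer back and forth.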

VEHICLE DETECTION
A vehicle moving through the detection field changes its state. Initially, the state of the detection field is "detection field free". A vehicle driving into the detection field changes its state to "detection field occupied". The vehicle leaving the detection field changes its state from "detection field occupied" back to "detection field free". Vehicles driving through the detection fields are shown in Figure 4.
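Detecting vehicles from the sequence of pointer values then amounts to spotting the state transitions described above. A minimal sketch (counting on the "occupied" to "free" transition is my choice of convention; the source counts a detection per pass but does not fix which edge of the transition is used):

```python
def count_vehicles(pointer_sequence) -> int:
    # Count one vehicle per transition of the occupancy pointer from
    # 1 ("detection field occupied") back to 0 ("detection field free"),
    # i.e. when the vehicle has left the detection field.
    count = 0
    for previous, current in zip(pointer_sequence, pointer_sequence[1:]):
        if previous == 1 and current == 0:
            count += 1
    return count
```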