Integrating the Projective Transform with Particle Filtering for Visual Tracking

This paper presents the projective particle filter, a Bayesian filtering technique integrating the projective transform, which describes the distortion of vehicle trajectories on the camera plane. The characteristics inherent to traffic monitoring, and in particular the projective transform, are integrated in the particle filtering framework in order to improve the tracking robustness and accuracy. It is shown that the projective transform can be fully described by three parameters, namely, the angle of view, the height of the camera, and the ground distance to the first point of capture. This information is integrated in the importance density so as to explore the feature space more accurately. By providing a fine distribution of the samples in the feature space, the projective particle filter outperforms the standard particle filter on different tracking measures. First, the resampling frequency is reduced due to a better fit of the importance density for the estimation of the posterior density. Second, the mean squared error between the feature vector estimate and the true state is reduced compared to the estimate provided by the standard particle filter. Third, the tracking rate is improved for the projective particle filter, hence decreasing track loss.


Introduction and Motivations
Vehicle tracking has been an active field of research within the past decade due to the increase in computational power and the development of video surveillance infrastructure. The area of Intelligent Transportation Systems (ITSs) is in need of robust tracking algorithms to ensure that top-end decisions such as automatic traffic control and regulation, automatic video surveillance, and abnormal event detection are made with a high level of confidence. Accurate trajectory extraction provides essential statistics for traffic control, such as speed monitoring, vehicle count, and average vehicle flow. Therefore, as a low-level task at the bottom end of ITS, vehicle tracking must provide accurate and robust information to the higher-level modules making intelligent decisions. In this sense, intelligent transportation systems are a major breakthrough since they alleviate the need for devices that can be prohibitively costly or simply impractical to implement. For instance, the installation of inductive loop sensors generates traffic perturbations that cannot always be afforded in dense traffic areas. Also, robust video tracking enables new applications, such as vehicle identification and customized statistics, that are not available with current technologies, for example, suspect vehicle tracking or differentiated vehicle speed limits. At the top end of the system are high-level tasks such as event detection (e.g., accident and animal crossing) or traffic regulation (e.g., dynamic adaptation and lane allocation). Robust vehicle tracking is therefore necessary to ensure effective performance.
Several techniques have been developed for vehicle tracking over the past two decades. The most common ones rely on Bayesian filtering, and on Kalman and particle filters in particular. Kalman filter-based tracking usually relies on background subtraction followed by segmentation [1,2], although some techniques implement spatial features such as corners and edges [3,4] or use Bayesian energy minimization [5]. Exhaustive search techniques involving template matching [6] or occlusion reasoning [7] have also been used for tracking vehicles. Particle filtering is preferred when the hypothesis of multimodality is necessary, for example, in case of severe occlusion [8,9]. Particle filters offer the advantage of relaxing the Gaussian and linearity constraints imposed upon the Kalman filter. On the downside, particle filters only provide a suboptimal solution, which converges in a statistical sense to the optimal solution; the estimation error decreases as O(1/√N_S), where N_S is the number of particles. Consequently, they are computation-intensive algorithms. For this reason, particle filtering techniques for visual tracking have been developed only recently, with the widespread availability of powerful computers. Particle filters for visual object tracking were first introduced by Isard and Blake, as part of the CONDENSATION algorithm [10,11], and by Doucet [12]. Arulampalam et al. provide a more general introduction to Bayesian filtering, encompassing particle filter implementations [13]. Within the last decade, the interest in particle filters has been growing exponentially. Early contributions were based on Kalman filter models; for instance, Van Der Merwe et al. discussed an extended particle filter (EPF) and proposed an unscented particle filter (UPF), using the unscented transform to capture second-order nonlinearities [14]. Later, a Gaussian sum particle filter was introduced to reduce the computational complexity [15].
There has also been a plethora of theoretical improvements to the original algorithm, such as the kernel particle filter [16,17], the iterated extended Kalman particle filter [18], the adaptive sample size particle filter [19,20], and the augmented particle filter [21]. As far as applications are concerned, particle filters are widely used in a variety of tracking tasks: head tracking via active contours [22,23], edge and color histogram tracking [24,25], sonar [26], and phase [27] tracking, to name a few. Particle filters have also been used for object detection and segmentation [28,29], and for audiovisual fusion [30].
Many vehicle tracking systems have been proposed that integrate features of the object, such as the traditional kinematic model parameters [2,7,31-33] or scale [1], in the tracking model. However, these techniques seldom integrate information specific to the vehicle tracking problem, which is key to the improvement of track extraction; rather, they are general estimators disregarding the particular traffic surveillance context. Since particle filters require a large number of samples in order to achieve accurate and robust tracking, information pertaining to the behavior of the vehicle is instrumental in drawing samples from the importance density. To this end, the projective fractional transform is used to map the vehicle position in the real world to its position on the camera plane. In [35], Bouttefroy et al. proposed the projective Kalman filter (PKF), which integrates the projective transform into the Kalman tracker to improve its performance. However, the PKF tracker differs from the proposed particle filter tracker in that the former relies on background subtraction to extract the objects, whereas the latter uses color information to track the objects.
The aim of this paper is to study the performance of a particle filter integrating vehicle characteristics in order to decrease the size of the particle set for a given error rate. In this framework, the task of vehicle tracking can be approached as a specific application of object tracking in a constrained environment. Indeed, vehicles do not evolve freely in their environment but follow particular trajectories. The most notable constraints imposed upon vehicle trajectories in traffic video surveillance are summarized below.
Low Definition and Highly Compressed Videos. Traffic monitoring video sequences are often of poor quality because of the inadequate infrastructure of the acquisition and transport system. Therefore, the size of the sample set (N_S) necessary for vehicle tracking must be large to ensure robust and accurate estimates.
Slowly-Varying Vehicle Speed. A common assumption in vehicle tracking is the uniformity of the vehicle speed. The narrow angle of view of the scene and the short period of time a vehicle is in the field of view justify this assumption, especially when tracking vehicles on a highway.
Constrained Real-World Vehicle Trajectory. Normal driving rules impose a particular trajectory on the vehicle. Indeed, the curvature of the road and the different lanes constrain the position of the vehicle. Figure 1 illustrates the pattern of vehicle trajectories resulting from projective constraints that can be exploited in vehicle tracking.
Projection of Vehicle Trajectory on the Camera Plane. The trajectory of a vehicle on the camera plane undergoes severe distortion due to the low elevation of the traffic surveillance camera. The curve described by the position of the vehicle converges asymptotically to the vanishing point.
We propose here to integrate these characteristics to obtain a finer estimate of the vehicle feature vector. More specifically, the mapping of the real-world vehicle trajectory through a fractional transform enables a better estimate of the posterior density. A particle filter is thus implemented that integrates cues of the projection in the importance density, resulting in a better exploration of the state space and a reduction of the variance in the trajectory estimation. Preliminary results of this work have been presented in [34]; this paper develops the work further. Its main contributions are: (i) a complete description of the homographic projection problem for vehicle tracking and a review of the solutions proposed to date; (ii) an evaluation of the projective particle filter tracking rate on a comprehensive dataset comprising around 2,600 vehicles; (iii) an evaluation of the resampling accuracy for the projective particle filter; (iv) a comparison of the performance of the projective particle filter and the standard particle filter using three different measures, namely, the resampling frequency, the mean squared error, and the tracking drift. The rest of the paper is organized as follows. Section 2 introduces the general particle filtering framework. Section 3 develops the proposed Projective Particle Filter (PPF). An analysis of the PPF performance versus the standard particle filter is presented in Section 4 before concluding in Section 5.

Bayesian and Particle Filtering
This section presents a brief review of Bayesian and particle filtering. Bayesian filtering provides a convenient framework for object tracking due to the weak assumptions on the state space model and the first-order Markov chain recursive properties. Without loss of generality, let us consider a system with state x of dimension n and observation z of dimension m. Let x_{1:k} ≜ {x_1, ..., x_k} and z_{1:k} ≜ {z_1, ..., z_k} denote, respectively, the set of states and the set of observations prior to and including time instant t_k. When the process and observation noises, v_{k−1} and n_k, respectively, are assumed to be additive, the state space model can be expressed as

x_k = f(x_{k−1}) + v_{k−1},   (1)
z_k = h(x_k) + n_k,   (2)

where the vector-valued functions f and h are the process and observation functions, respectively. Bayesian filtering aims to estimate the posterior probability density function (pdf) of the state given the observations, p(x_k | z_{1:k}). The probability density function is estimated recursively, in two steps: prediction and update. First, let us denote by p(x_{k−1} | z_{1:k−1}) the posterior pdf at time t_{k−1}, and let us assume it is known. The prediction stage relies on the Chapman-Kolmogorov equation to estimate the prior pdf p(x_k | z_{1:k−1}):

p(x_k | z_{1:k−1}) = ∫ p(x_k | x_{k−1}) p(x_{k−1} | z_{1:k−1}) dx_{k−1}.   (3)

When a new observation becomes available, the prior is updated as follows:

p(x_k | z_{1:k}) = p(z_k | x_k) p(x_k | z_{1:k−1}) / λ_k,   (4)

where p(z_k | x_k) is the likelihood function and λ_k = ∫ p(z_k | x_k) p(x_k | z_{1:k−1}) dx_k is a normalizing constant. As the posterior pdf p(x_k | z_{1:k}) is recursively estimated through (3) and (4), only the initial density p(x_0 | z_0) needs to be known. Monte Carlo methods, and more specifically particle filters, have been extensively employed to tackle the Bayesian estimation problem represented by (3) and (4) [36,37]. Multimodality enables the system to evolve in time with several hypotheses on the state in parallel. This property is practical to corroborate or reject an eventual track after several frames.
However, the Bayesian problem cannot then be solved in closed form, as in the Kalman filter, due to the complex density shapes involved. Particle filters rely on Sequential Monte Carlo (SMC) simulations, as a numerical method, to circumvent the direct evaluation of the Chapman-Kolmogorov equation (3). Let us assume that a large number of samples {x^i_k, i = 1, ..., N_S} are drawn from the posterior distribution p(x_k | z_{1:k}). It follows from the law of large numbers that

p(x_k | z_{1:k}) ≈ Σ_{i=1}^{N_S} w^i_k δ(x_k − x^i_k),   (5)

where the w^i_k are positive weights satisfying Σ_i w^i_k = 1, and δ(·) is the Dirac delta function. However, because it is often difficult to draw samples from the posterior pdf, an importance density q(·) is used to generate the samples x^i_k. It can then be shown that the recursive estimate of the posterior density via (3) and (4) can be carried out by the set of particles, provided that the weights are updated as follows [13]:

w^i_k ∝ w^i_{k−1} p(z_k | x^i_k) p(x^i_k | x^i_{k−1}) / q(x^i_k | x^i_{k−1}, z_k).   (6)

The choice of the importance density q(x^i_k | x^i_{k−1}, z_k) is crucial in order to obtain a good estimate of the posterior pdf. It has been shown that the set of particles and associated weights {x^i_k, w^i_k} will eventually degenerate, that is, most of the weight will be carried by a small number of samples while a large number of samples have negligible weight [38]. In such a case, and because samples are not drawn from the true posterior, the degeneracy problem cannot be avoided and resampling of the set needs to be performed. Nevertheless, the closer the importance density is to the true posterior density, the slower the set {x^i_k, w^i_k} degenerates; a good choice of importance density reduces the need for resampling. In this paper, we propose to model the fractional transform mapping the real-world space onto the camera plane and to integrate the projection in the particle filter through the importance density q(x_k | x_{k−1}, z_k).
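As a concrete illustration, the weight update (6) can be sketched on a toy one-dimensional model. The Gaussian densities, their variances, and the observation-centered importance density below are illustrative assumptions for the sketch, not the model used in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N_S = 500

# Toy 1-D model: x_k = x_{k-1} + v, z_k = x_k + n (illustrative only).
def transition_pdf(x, x_prev, sigma_p=1.0):          # p(x_k | x_{k-1})
    return np.exp(-0.5 * ((x - x_prev) / sigma_p) ** 2) / (sigma_p * np.sqrt(2 * np.pi))

def likelihood(z, x, sigma_n=0.5):                   # p(z_k | x_k)
    return np.exp(-0.5 * ((z - x) / sigma_n) ** 2) / (sigma_n * np.sqrt(2 * np.pi))

# Importance density q(x_k | x_{k-1}, z_k): a Gaussian pulled toward the observation.
def q_sample(x_prev, z, sigma_q=0.8):
    return rng.normal(0.5 * (x_prev + z), sigma_q)

def q_pdf(x, x_prev, z, sigma_q=0.8):
    mu = 0.5 * (x_prev + z)
    return np.exp(-0.5 * ((x - mu) / sigma_q) ** 2) / (sigma_q * np.sqrt(2 * np.pi))

x_prev = np.zeros(N_S)                               # particles at time k-1
w = np.full(N_S, 1.0 / N_S)                          # uniform previous weights
z = 1.2                                              # current observation

x = q_sample(x_prev, z)                              # draw from the importance density
w = w * likelihood(z, x) * transition_pdf(x, x_prev) / q_pdf(x, x_prev, z)   # update as in (6)
w = w / w.sum()                                      # normalize so the weights sum to 1

estimate = np.sum(w * x)                             # weighted posterior mean estimate
```

Because the samples come from q rather than the posterior, the ratio in the update corrects the weights so that the weighted set still approximates p(x_k | z_{1:k}), as in (5).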

Projective Particle Filter
The particle filter developed is named Projective Particle Filter (PPF) because the vehicle position is projected on the camera plane and used as an inference to diffuse the particles in the feature space. One of the particularities of the PPF is that it differentiates between the importance density and the transition prior pdf, whereas the SIR (Sampling Importance Resampling) filter, also called the standard particle filter, does not. Therefore, we need to define the importance density from the fractional transform, as well as the transition prior p(x_k | x_{k−1}) and the likelihood p(z_k | x_k), in order to update the weights in (6).

Linear Fractional Transformation.
The fractional transform is used to estimate the position of the object on the camera plane (x) from its position on the road (r). The physical trajectory is projected onto the camera plane as shown in Figure 2. The distortion of the object trajectory happens along the direction d, tangential to the road. The axis d_p is parallel to the camera plane; the projection x of the vehicle position on d_p is thus proportional to the position of the vehicle on the camera plane. The value of x is scaled by X_vp, the projection of the vanishing point on d_p, to obtain the position of the vehicle in terms of pixels. For practical implementation, it is useful to express the projection along the tangential direction d onto the d_p axis in terms of video footage parameters that are easily accessible, namely: (i) the angle of view of the camera (θ), (ii) the height of the camera (H), and (iii) the ground distance (D) between the camera and the first location captured by the camera.
It can be inferred from Figure 2, after applying the law of cosines, that (7) holds, where cos α = (D + r)/√(H² + (D + r)²) and β = arctan(D/H) + θ/2. After squaring and substituting these expressions in (7), we obtain (8). Grouping the terms in x leads to a quadratic form, whose physically acceptable root gives (11). However, because D ≫ H and θ is small in practice (see Table 1), the angle β is approximately equal to π/2 and, consequently, (11) simplifies to x = rH/(D + r). Note that this result can be verified using the triangle proportionality theorem. Finally, we scale x with the position of the vanishing point X_vp in the image to find the position of the vehicle in terms of pixel location, which yields (12). (The position of the vanishing point can either be approximated manually or estimated automatically [39]; in our experiments, it is estimated manually.) The projected speed and the observed size of the object on the camera plane are also important variables for the tracking problem, and hence it is necessary to derive them. Let v = dr/dt and ẋ = dx/dt. Differentiating (12), after substituting for x (x = rH/(D + r)) and eliminating r, yields the observed speed of the vehicle on the camera plane (13). The observed size b of the vehicle can also be derived from the position x if the real size s of the vehicle is known. If the center of the vehicle is at r, its extremities are located at r + s/2 and r − s/2; applying the fractional transform to these extremities yields (14). The state vector is modeled with the position, the speed and the size of the vehicle in the image,

x = [x, y, ẋ, ẏ, b]^T,

where x and y are the Cartesian coordinates of the vehicle, ẋ and ẏ are the respective speeds, and b is the apparent size of the vehicle; more precisely, b is the radius of the circle best fitting the vehicle shape.
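The simplified transform and the observed speed can be sketched numerically. The pixel-scaled form X_vp·r/(D + r) and the closed form of the speed below are reconstructions under the simplification x = rH/(D + r) (with the scaling chosen so the asymptote maps to X_vp); they are illustrations, not quotes of Eqs. (12)-(13):

```python
# Simplified fractional transform (valid when D >> H and theta is small):
# pixel position of a vehicle at ground distance r, with X_vp the pixel
# position of the vanishing point (reconstructed form, stated as an assumption).
def project_position(r, D, X_vp):
    return X_vp * r / (D + r)

# Observed pixel speed: differentiating the transform and eliminating r gives
# xdot = v * (X_vp - x)^2 / (D * X_vp), with v the real-world road speed.
def project_speed(x, v, D, X_vp):
    return v * (X_vp - x) ** 2 / (D * X_vp)
```

As a sanity check, the derivative of project_position with respect to r, multiplied by v, agrees with project_speed; this is exactly how eliminating r turns the derivative of (12) into a function of the pixel position only.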
Object tracking is traditionally performed using a standard kinematic model (Newton's laws), taking into account the position, the speed and the size of the object. (The size of the object is essentially maintained for the purpose of likelihood estimation.) In this paper, the kinematic model is refined with the estimation of the speed and the object size through the fractional transform along the distorted direction d; the process function f, defined in (1), is modified accordingly. It is important to note that, since the fractional transform acts along the x-axis, the function f_ẋ provides a better estimate than a simple kinematic model taking into account the speed of the vehicle. On the other hand, the distortion along the y-axis is much weaker and such an estimation is not necessary. One novel aspect of this paper is the estimation of the vehicle speed along the x-axis and of its size through f_ẋ(x) and f_b(x), respectively. It is worthwhile noting that the standard kinematic model of the vehicle is recovered when f_ẋ(x) = ẋ and f_b(x) = b; g denotes the standard kinematic model in the sequel. The samples of the PPF are drawn from the importance density q(x_k | x_{k−1}, z_k) = N(x_k; f(x_{k−1}), Σ_q), and the standard kinematic model is used in the prior density p(x_k | x_{k−1}) = N(x_k; g(x_{k−1}), Σ_p), where N(·; µ, Σ) denotes the normal distribution with covariance matrix Σ centered on µ. The distributions are considered Gaussian to evenly spread the samples around the estimated state vector at time step k.
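The refined process function can be sketched as follows. The quadratic scaling factor used here for both the apparent speed and the apparent size is a derived assumption (constant real-world speed and size under the simplified transform, where both quantities scale with (X_vp − x)²); the paper's exact f_ẋ and f_b follow (13) and (14):

```python
import numpy as np

# Hypothetical sketch of the PPF process function f for the state [x, y, xdot, ydot, b].
# As the vehicle moves toward the vanishing point X_vp, the apparent speed and
# apparent size both shrink with (X_vp - x)^2 under the simplified transform.
def process_f(state, X_vp):
    x, y, xdot, ydot, b = state
    x_new = x + xdot
    shrink = ((X_vp - x_new) / (X_vp - x)) ** 2      # distortion factor along d
    return np.array([x_new, y + ydot, xdot * shrink, ydot, b * shrink])

# Standard kinematic model g, recovered when the distortion factor is dropped.
def process_g(state):
    x, y, xdot, ydot, b = state
    return np.array([x + xdot, y + ydot, xdot, ydot, b])

# Importance-density sampling q(x_k | x_{k-1}, z_k) = N(x_k; f(x_{k-1}), Sigma_q).
def draw_from_importance(state, X_vp, Sigma_q, N_S, rng):
    return rng.multivariate_normal(process_f(state, X_vp), Sigma_q, size=N_S)
```

Replacing process_f by process_g in the last function recovers the SIR-style sampling from the prior, which is the configuration used by the standard particle filter in the comparisons of Section 4.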

Likelihood Estimation. The estimation of the likelihood
is based on the distance between color histograms, as in [40]. Let us define an M-bin histogram H = {H[u]}_{u=1,...,M}, representing the distribution of J color pixel values c_i, as

H[u] = Σ_{i=1}^{J} δ(κ(c_i) − u),

where u indexes the bins regularly spaced on the interval [1, M], κ is a linear binning function providing the bin index of pixel value c_i, and δ(·) is the Kronecker delta function. The pixels c_i are selected from a circle of radius b centered on (x, y). Indeed, after projection on the camera plane, the circle is the standard shape that best delineates the vehicle. Let us denote the target and the candidate histograms by H_t and H_x, respectively. The Bhattacharyya distance between the two histograms is defined as

d(H_t, H_x) = (1 − Σ_{u=1}^{M} √(H_t[u] H_x[u]))^{1/2}.

Finally, the likelihood is calculated as p(z_k | x^i_k) ∝ exp(−λ d²(H_t, H_x)), where λ is a spread constant.
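This likelihood can be sketched as below, with a linear binning κ on a single color channel; the bin count M = 8 and the spread constant λ = 20 are assumed values for the sketch, not parameters reported in the paper:

```python
import numpy as np

# Sketch of the color-histogram likelihood: M-bin normalized histograms,
# Bhattacharyya distance, and an exponential likelihood shape.
def color_histogram(pixels, M=8):
    """pixels: array of scalar color values in [0, 256). Linear binning kappa."""
    bins = np.minimum((pixels * M / 256).astype(int), M - 1)
    H = np.bincount(bins, minlength=M).astype(float)
    return H / H.sum()                               # normalize to a distribution

def bhattacharyya(H_t, H_x):
    # Guard against tiny negative values caused by floating-point round-off.
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(H_t * H_x))))

def hist_likelihood(H_t, H_x, lam=20.0):
    return np.exp(-lam * bhattacharyya(H_t, H_x) ** 2)
```

Identical histograms give distance 0 and likelihood 1; the likelihood decays quickly as the candidate region drifts off the target, which is what concentrates the particle weights on the vehicle.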

Projective Particle Filter Implementation.
Because most approaches to tracking take the prior density as the importance density, the samples x^i_k are usually drawn directly from the standard kinematic model. In this paper, we differentiate between the prior and the importance density to obtain a better distribution of the samples. The initial state x_0 is chosen as x_0 = [x_0, y_0, 10, 0, 20]^T, where x_0 and y_0 are the initial coordinates of the object. The parameters are selected to cater for the majority of vehicles. The position (x_0, y_0) of the vehicle is estimated either manually or with an automatic procedure (see Section 4.2). The speed along the x-axis corresponds to the average pixel displacement for a speed of 90 km·h−1, and the apparent size b is set so that the elliptical region for histogram tracking encompasses at least the vehicle. The size is overestimated to fit all cars and most standard trucks at initialization; the size is then adjusted through tracking by the particle filters. The value x_0 is used to draw the initial set of samples {x^i_0}. The transition prior p(x_k | x_{k−1}) and the importance density q(x_k | x_{k−1}, z_k) are both modeled with normal distributions. The prior covariance matrix and mean are initialized as Σ_p = diag([6 1 1 1 4]) and µ_p = g(x_0), respectively, and Σ_q = diag([1 1 0.5 1 4]) and µ_q = f(x_0) for the importance density. These initializations represent the physical constraints on the vehicle speed.
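With the numbers above, the initialization can be sketched as follows; drawing the initial set by perturbing x_0 with the importance covariance is an assumption about a detail the text leaves implicit:

```python
import numpy as np

# State ordering [x, y, xdot, ydot, b]; values from this section:
# xdot = 10 px/frame (about 90 km/h for this geometry), apparent size b = 20 px.
def init_particles(x0, y0, N_S, rng):
    state0 = np.array([x0, y0, 10.0, 0.0, 20.0])
    Sigma_q = np.diag([1.0, 1.0, 0.5, 1.0, 4.0])     # importance-density covariance
    particles = rng.multivariate_normal(state0, Sigma_q, size=N_S)
    weights = np.full(N_S, 1.0 / N_S)                # uniform initial weights
    return particles, weights
```

The overestimated size (b = 20) makes the initial histogram region cover the whole vehicle; subsequent updates shrink b toward the true apparent size as the filter iterates.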
A resampling scheme is necessary to avoid the degeneracy of the particle set. Systematic resampling [41] is performed when the variance of the weight set is too large, that is, when the number of effective samples N_eff falls below a given threshold N, arbitrarily set to 0.6 N_S in the implementation. The number of effective samples N_eff is evaluated as

N_eff = 1 / Σ_{i=1}^{N_S} (w^i_k)².   (20)

The implementation of the projective particle filter algorithm is summarized in Algorithm 1.
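The effective-sample-size test and the systematic resampling step can be sketched as follows, using the 0.6 N_S threshold stated above:

```python
import numpy as np

def n_eff(weights):
    """Effective sample size: 1 / sum_i (w_i)^2, as in (20)."""
    return 1.0 / np.sum(weights ** 2)

def systematic_resample(particles, weights, rng):
    """Systematic resampling: one uniform draw, N_S evenly spaced pointers."""
    N_S = len(weights)
    positions = (rng.random() + np.arange(N_S)) / N_S
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0                                 # guard against round-off
    idx = np.searchsorted(cumsum, positions)
    return particles[idx], np.full(N_S, 1.0 / N_S)

# Resample only when N_eff falls below the threshold 0.6 * N_S used in the paper.
def maybe_resample(particles, weights, rng, thresh=0.6):
    if n_eff(weights) < thresh * len(weights):
        return systematic_resample(particles, weights, rng)
    return particles, weights
```

Because the PPF importance density tracks the posterior more closely, N_eff stays high for longer and this branch is taken less often, which is exactly the resampling-frequency effect measured in Section 4.1.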

Experiments and Results
In this section, the performances of the standard and the projective particle filters are evaluated on traffic surveillance data. Since the two vehicle tracking algorithms possess the same architecture, the difference in performance can be attributed to the distribution of particles through the importance density integrating the projective transform. The experimental results presented in this section aim to evaluate (1) the improvement in sample distribution with the implementation of the projective transform, (2) the improvement in the position error of the vehicle by the projective particle filter, (3) the robustness of vehicle tracking (in terms of an increase in tracking rate) due to the fine distribution of the particles in the feature space.
The algorithm is tested on 15 traffic monitoring video sequences, labeled Video 001 to Video 015. The number of vehicles and the duration of the video sequences, as well as the parameters of the projective transform, are summarized in Table 1. Around 2,600 moving vehicles are recorded in the set of video sequences. The videos range from clear weather to cloudy with weak illumination conditions. The camera was positioned above highways at a height ranging from 5.5 m to 8 m. Although the camera was placed at the center of the highways, a shift in its position has no effect on the performance, apart from the earlier detection of vehicles and the length of the vehicle path. On the other hand, a rotation of the camera would affect the value of D and the position of the vanishing point X_vp. The video sequences are low definition (128 × 160) to comply with the characteristics of traffic monitoring sequences. The video sequences are footage of vehicles traveling on a highway. Although the roads are straight in the dataset, the algorithm can be applied to curved roads with approximation of the parameters over short distances, because the projection tends to linearize the curves in the image plane.

Distribution of Samples.
An evaluation of the importance density can be performed by comparing the distribution of the samples in the feature space for the standard and the projective particle filters. Since the degeneracy of the particle set indicates the degree of fitting of the importance density through the number of effective samples N eff (see (20)), the frequency of particle resampling is an indicator of the similarity between the posterior and the importance density. Ideally, the importance density should be the posterior. This is not possible in practice because the posterior is unknown; if the posterior were known, tracking would not be required.
First, the mean squared error (MSE) between the true state of the feature vector and the set of particles is presented without resampling in order to compare the tracking accuracy of the projective and standard particle filters based solely on the performance of the importance and prior densities, respectively. Consequently, the fit of the importance density to the vehicle tracking problem is evaluated. Furthermore, computing the MSE provides a quantitative estimate of the error. Since there is no resampling, a large number of particles is required in this experiment: we chose N S = 300. Figure 3 shows the position MSE for the standard and the projective particle filters for 80 trajectories in Video 008 sequence; the average MSEs are 1.10 and 0.58, respectively.
Second, the resampling frequencies for the projective and the standard particle filters are evaluated on the entire dataset. A decrease in the resampling frequency is the result of a better (i.e., closer to the posterior density) modeling of the density from which the samples are drawn. The resampling frequency is expressed as the percentage of time steps k at which resampling is triggered. Figure 4 displays the resampling frequencies across the entire dataset for each particle filter. On average, the projective particle filter resamples 14.9% of the time and the standard particle filter 19.4% of the time; that is, the standard particle filter resamples about 30% more often.
For the problem of vehicle tracking, the importance density q used in the projective particle filter is therefore more suitable for drawing samples, compared to the prior density used in the standard particle filter. An accurate importance density is beneficial not only from a computational perspective since the resampling procedure is less frequently called, but also for tracking performance, as the particles provide a better fit to the true posterior density. Subsequently, the tracker is less prone to distraction in case of occlusion or similarity between vehicles.

Trajectory Error
Evaluation. An important measure in vehicle tracking is the variance of the trajectory. Indeed, high-level tasks, such as abnormal behavior or DUI (driving under the influence) detection, require accurate tracking of the vehicle and, in particular, a low MSE for the position. Figure 5 displays a track estimated with the projective particle filter and the standard particle filter. It can be inferred qualitatively that the PPF achieves better results than the standard particle filter. Two experiments are conducted to evaluate the performance in terms of position variance: one with semiautomatic variance estimation, and the other with ground-truth labeling to evaluate the influence of the number of particles.
In the first experiment, the performance of each tracker is evaluated in terms of MSE. In order to avoid the tedious task of manually extracting the ground truth of every track, a synthetic track is generated automatically based on the parameters of the real-world projection of the vehicle trajectory on the camera plane. Figure 6 shows that the theoretical and the manually extracted tracks match almost perfectly. The initialization of the tracks is performed as in [35]. However, because the initial position of the vehicle when tracking starts may differ from one track to another, it is necessary to align the theoretical and the extracted tracks in order to cancel the bias in the estimation of the MSE. Furthermore, the variance estimation is semiautomatic since the match between the generated and the extracted tracks is visually assessed. It was found that the Video 005, Video 006, and Video 008 sequences provide the best matches overall. The MSE over the 205 vehicle tracks contained in these 3 sequences is reported in Table 2 for a sample set size of 100. It can be inferred from Table 2 that the PPF consistently outperforms the standard particle filter. It is also worth noting that the higher MSE in this experiment, compared to the one presented in Figure 3 for Video 008, is due to the smaller number of particles; even with resampling, the particle filters do not reach the accuracy achieved with 300 particles.
Figure 7: Position mean squared error versus number of particles for the standard and the projective particle filter.
In the second experiment, we evaluate the performance of the two tracking algorithms with respect to the number of particles. Here, the ground truth is manually labeled in the video sequence. This experiment serves as a validation of the semiautomatic procedure described above, as well as an evaluation of the effect of the particle set size on the performance of both the PPF and the standard particle filter. To ensure the impartiality of the evaluation, we arbitrarily decided to extract the ground truth for the first 5 trajectories in the Video 001 sequence. Figure 7 displays the average MSE over 10 epochs for the first trajectory and for different values of N_S. Figure 8 presents the average MSE over 10 epochs on the 5 ground-truth tracks for N_S = 20 and N_S = 100. The experiments are run over several epochs to increase the confidence in the results, owing to the stochastic nature of particle filters. It is clear that the projective particle filter outperforms the standard particle filter in terms of MSE. The higher accuracy of the PPF, with all parameters being identical in the comparison, is due to the finer estimation of the sample distribution by the importance density and the consequent adjustment of the weights.

Tracking Rate
Evaluation. An important problem encountered in vehicle tracking is the phenomenon of tracker drift. We propose here to estimate the robustness of the tracking by introducing a tracking rate based on a drift measure, that is, by estimating the percentage of vehicles tracked without severe drift (vehicles for which the track is not lost). The tracking rate primarily aims to detect the loss of vehicle track and, therefore, evaluates the robustness of the tracker. Robustness is differentiated from accuracy in that the former is a qualitative measure of tracking performance while the latter is a quantitative measure, based on an error measure as in Section 4.2, for instance. The drift measure for vehicle tracking is based on the observation that vehicles converge toward the vanishing point; therefore, the position of the vehicle along the tangential axis is monotonically decreasing. As a consequence, we measure the number of steps where the vehicle position decreases (p_d) and the number of steps where the vehicle position increases or remains constant (p_i), the latter being characteristic of tracker drift. Note that horizontal drift is seldom observed since the distortion along this axis is weak. A track is declared lost when the p_i steps dominate, and the rate of vehicles tracked without severe drift is computed from the ratio of p_d to the total number of steps p_d + p_i. The tracking rate is evaluated for the projective and standard particle filters. Figure 9 displays the results for the entire traffic surveillance dataset: the projective particle filter yields a better tracking rate than the standard particle filter across the whole dataset. Figure 9 also shows that the difference between the tracking rates is not as pronounced as the difference in MSE, because the standard particle filter already performs well on vehicle tracking; nevertheless, the projective particle filter still yields a reduction in the drift of the tracker.
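The drift-based track check can be sketched as follows. The convention that a healthy track decreases along the tangential axis follows the text; the severe-drift threshold of half the steps is an assumed value for the sketch:

```python
import numpy as np

# A correctly tracked vehicle moves monotonically toward the vanishing point
# along the tangential axis, so "increase or constant" steps indicate drift.
def track_ok(x_positions, thresh=0.5):
    steps = np.diff(x_positions)
    p_d = np.sum(steps < 0)                          # steps toward the vanishing point
    p_i = np.sum(steps >= 0)                         # steps away or constant (drift)
    return p_d / (p_d + p_i) > thresh                # assumed severe-drift threshold

def tracking_rate(tracks):
    """Fraction of vehicle tracks that are not lost to severe drift."""
    return float(np.mean([track_ok(t) for t in tracks]))
```

Because the measure only needs the sign of the position increments, it stays qualitative (robustness) and is decoupled from the MSE (accuracy), as noted above.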

Discussion.
The experiments show that the projective particle filter performs better than the standard particle filter in terms of sample distribution, tracking error, and tracking rate. The improvement is due to the integration of the projective transform in the importance density. Furthermore, the implementation of the projective transform requires very simple calculations under the simplifying assumptions leading to (12). Overall, since the projective particle filter requires fewer samples than the standard particle filter to achieve better tracking performance, the increase in computation due to the projective transform is offset by the reduction in sample set size. More specifically, the projective particle filter requires the computation of the vector-valued process function and the ratio γ_k for each sample. For the process function, (13) and (14), representing f_ẋ(x) and f_b(x), respectively, must be computed. The computational burden is low, assuming that constant terms can be precomputed. On the other hand, the projective particle filter yields a gain in the sample set size since fewer particles are required for a given error, and the resampling is 30% more efficient.
The projective particle filter performs better on the three different measures. The projective transform leads to a reduction in resampling frequency: since the distribution of the particles represents the posterior more accurately, the degeneracy of the particle set is slower. The mean squared error is reduced since the particles concentrate around the actual position and size of the vehicle. The drift rate benefits from the projective transform since the tracker is less distracted by similar objects or by occlusion. The improvement is beneficial for applications that require vehicle "locking", such as vehicle counts, or other applications for which performance is not based on the MSE. It is worthwhile noting here that the MSE and the tracking rate are independent: it can be observed from Figure 9 that the tracking rate is almost the same for Video 005, Video 006, and Video 008, but there is a factor of 2 between the MSEs of Video 005 and Video 008 (see Table 2).

Conclusion
A plethora of algorithms for object tracking based on Bayesian filtering are available. However, these systems fail to take advantage of traffic monitoring characteristics, in particular the slowly-varying vehicle speed, the constrained real-world vehicle trajectory, and the projective transform of vehicles onto the camera plane. This paper proposed a new particle filter, namely, the projective particle filter, which integrates these characteristics into the importance density. The projective fractional transform, which maps the real-world position of a vehicle onto the camera plane, provides a better distribution of the samples in the feature space. However, since the prior is not used for sampling, the weights of the projective particle filter have to be readjusted. The standard and the projective particle filters have been evaluated on traffic surveillance videos using three different measures representing robust and accurate vehicle tracking: (i) the degeneracy of the sample set is reduced when the fractional transform is integrated within the importance density; (ii) the tracking rate, measured through drift evaluation, shows an improvement in the robustness of the tracker; (iii) the MSE on the vehicle trajectory is reduced with the projective particle filter. Furthermore, the proposed technique outperforms the standard particle filter in terms of MSE even with fewer particles.