This invention relates to a fusion algorithm for a Vidar traffic surveillance system.
A traditional radar based traffic surveillance system uses a Doppler radar for vehicle speed monitoring, which measures vehicle speed along the line of sight (LOS). The measured Doppler frequency is

f_D = K υ_t cos(φ_t),

where K is a Doppler frequency conversion constant, υ_t is the vehicle speed, and φ_t is called the Doppler cone angle, or simply the Doppler angle. Although a Doppler radar based system has the advantage of a long detection range, there are several difficulties associated with the traditional radar based system: (1) the Doppler radar beam angle is too large to precisely locate vehicles within the radar beam; (2) the angle between the vehicle moving direction and the LOS is unknown and therefore needs to be small enough for reasonable speed estimation accuracy; and (3) since all velocity vectors on the same equal-Doppler cone produce the same Doppler frequency, a single Doppler measurement cannot distinguish among them.
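The cosine dependence above is the source of difficulty (2): if the Doppler angle is assumed to be zero but is actually φ_t, the LOS measurement underestimates the true speed by a factor of cos(φ_t). A minimal numeric sketch of this sensitivity (the constant K and the speeds below are illustrative, not from the source):

```python
import math

# Illustrative constant: f_D = K * v * cos(phi).  For a 24 GHz radar,
# K = 2 * f_c / c is roughly 160 Hz per (m/s); the exact value is an assumption.
K = 160.0

def doppler_frequency(speed_mps, doppler_angle_deg):
    """Doppler frequency seen by the radar (Hz)."""
    return K * speed_mps * math.cos(math.radians(doppler_angle_deg))

def apparent_speed(f_d):
    """Speed a naive system reports when it assumes the Doppler angle is zero."""
    return f_d / K

true_speed = 30.0  # m/s
for angle in (0.0, 10.0, 30.0):
    f_d = doppler_frequency(true_speed, angle)
    err = true_speed - apparent_speed(f_d)
    print(f"angle={angle:5.1f} deg  apparent={apparent_speed(f_d):6.2f} m/s  error={err:5.2f} m/s")
```

At 10 degrees the underestimate is about 1.5%, at 30 degrees about 13%, which is why the traditional system requires a small Doppler angle.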
A video camera based traffic surveillance system uses a video camera to capture the traffic scene and relies on computer vision techniques to calculate vehicle speeds indirectly. Precise vehicle locations can be identified. However, since no direct speed measurements are available and the camera has a finite number of pixels, the video camera based traffic surveillance system can be used only in short-range applications.
A video-Doppler-radar (Vidar) traffic surveillance system combines the Doppler radar based system and the video based system into a single system that preserves the advantages of both and overcomes the shortcomings of both. A patent application on the Vidar traffic surveillance system has been filed by the first author, Patent Application No. 12266227.
A Vidar traffic surveillance system may include a first movable Doppler radar to generate a first radar beam along the direction of a first motion ray, a second movable Doppler radar to generate a second radar beam along the direction of a second motion ray, a third fixed Doppler radar to generate a third radar beam along a direction ray, a video camera to serve as an information fusion platform by intersecting the first and second radar motion rays with the camera virtual image plane, a data processing device to process Doppler radar and video information, a tracking device to continuously point the surveillance system to the moving vehicle, and a recording device to continuously record the complete information of the moving vehicle.
Robustly matching information from a video camera and multiple Doppler radars is a prerequisite for information fusion in a Vidar traffic surveillance system. However, because video and Doppler radar sensors have different modalities, matching information between them is difficult. The special video-radar geometry introduced in Vidar makes correct matching between a video sequence and Doppler signals possible. This invention describes a robust algorithm to match video signals to Doppler radar signals, and an algorithm to fuse the matched video and Doppler radar signals.
A fusion algorithm for a Vidar traffic surveillance system may include the following steps: (1) deriving Doppler angles from a video sequence; (2) generating estimated Doppler signals from estimated Doppler angles; (3) matching estimated Doppler signals to the measured Doppler signals of two moving Doppler radars; (4) finding the best match between the estimated and measured Doppler signals; (5) forming a three-scan, range-Doppler geometry from the stationary Doppler radar and estimated Doppler angles; (6) matching video signals to stationary Doppler radar signals; (7) fusing the matched video and Doppler radar signals to generate moving vehicle velocity and range information.
The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which, like reference numerals identify like elements, and in which:
The functional flow chart of the algorithm is shown in the accompanying drawing.
Derive Doppler Angles from a Video Sequence
The objective of this step (step 105 in the flow chart) is to derive Doppler angle estimates of the moving vehicle from the captured video sequence.
Referring to the geometry shown in the drawings, the Doppler frequency measured by the first moving Doppler radar is

f_D1 = K_1 υ_t1,

where υ_t1 is the radial velocity of the vehicle relative to the first moving radar, and the Doppler frequency measured by the third (stationary) Doppler radar is

f_D3 = K_3 υ_t cos(θ_t),

where υ_t is the vehicle speed and θ_t is the Doppler angle. Since the motion of the first moving Doppler radar is known as a_1 cos(ωt + φ_1), we have

f_D1 = K_1 [υ_t + a_1 cos(ωt + φ_1)] cos(θ_t).
A similar equation may be derived for the second moving Doppler radar,

f_D2 = K_2 [υ_t + a_2 cos(ωt + φ_2)] cos(θ_t),
and a_1, a_2, K_1, K_2, K_3, φ_1, and φ_2 are all known from calibration. Given Doppler angle estimates θ̂_{t_k}, the estimated Doppler signal amplitudes are

Â_{1k} = K_1 a_1 cos(θ̂_{t_k}) and Â_{2k} = K_2 a_2 cos(θ̂_{t_k}).
Within a predefined time window, cosine signals (Doppler signals) may be generated as

Â_{1k} cos(ωt + φ_1) and Â_{2k} cos(ωt + φ_2), t_k − L ≤ t ≤ t_k, (15)

where L is the window length. It is straightforward to match the estimated cosine signals to the measured Doppler signals in the single-vehicle case using a least-squares method, which is performed in step 106 of the flow chart.
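The single-vehicle least-squares match can be sketched as follows: sample both the estimated signal Â cos(ωt + φ) and each measured Doppler signal over the window [t_k − L, t_k] and score the match by the residual sum of squares. All numeric values below are illustrative, not from the source:

```python
import math

def sampled_signal(amplitude, omega, phase, t0, t1, n=200):
    """Sample A*cos(omega*t + phase) at n points on [t0, t1]."""
    dt = (t1 - t0) / (n - 1)
    return [amplitude * math.cos(omega * (t0 + i * dt) + phase) for i in range(n)]

def lsq_residual(sig_a, sig_b):
    """Least-squares score: sum of squared differences between two signals."""
    return sum((a - b) ** 2 for a, b in zip(sig_a, sig_b))

# Illustrative parameters (omega, phase, window, and calibration constants).
omega, phi1, L, tk = 2 * math.pi * 5.0, 0.3, 0.5, 10.0
K1, a1 = 160.0, 0.05

theta_hat = math.radians(12.0)           # Doppler angle estimated from video
A1k = K1 * a1 * math.cos(theta_hat)      # estimated amplitude A_1k
estimated = sampled_signal(A1k, omega, phi1, tk - L, tk)

# Two candidate measured Doppler signals; the first is the true match.
measured_good = sampled_signal(K1 * a1 * math.cos(math.radians(12.5)), omega, phi1, tk - L, tk)
measured_bad = sampled_signal(K1 * a1 * math.cos(math.radians(45.0)), omega, phi1, tk - L, tk)

best = min((measured_good, measured_bad), key=lambda m: lsq_residual(estimated, m))
print(lsq_residual(estimated, measured_good) < lsq_residual(estimated, measured_bad))
```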
From the video camera, multiple pairs of Doppler angles are estimated, θ̂_{t_k}^i, i = 1, …, N, where N is the number of vehicles, which in turn generate multiple cosine signals

Â_{1k}^i cos(ωt + φ_1) and Â_{2k}^i cos(ωt + φ_2), i = 1, …, N.

Using multiple hypothesis testing, the moving-radar Doppler data set {D_1^i, D_2^i} corresponding to the ith vehicle may be identified (also in step 106 of the flow chart). By combining the matched signals from the three Doppler radars, a more accurate Doppler frequency of the ith vehicle may be determined.
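With N vehicles, the multiple-hypothesis test amounts to searching over assignments of estimated signals to measured Doppler tracks and keeping the assignment with the smallest total least-squares residual. A brute-force sketch over permutations, adequate for small N (all data here are illustrative):

```python
import math
from itertools import permutations

def residual(sig_a, sig_b):
    return sum((x - y) ** 2 for x, y in zip(sig_a, sig_b))

def make_signal(amplitude, omega=2 * math.pi * 5.0, phase=0.3, n=100):
    return [amplitude * math.cos(omega * (i / 100.0) + phase) for i in range(n)]

# Estimated signals for N=3 vehicles (amplitudes from video-derived angles).
estimated = [make_signal(a) for a in (7.8, 5.6, 3.1)]
# Measured Doppler tracks, deliberately shuffled: true order is 1, 0, 2.
measured = [make_signal(a) for a in (5.7, 7.9, 3.0)]

def best_assignment(estimated, measured):
    """Permutation p minimizing sum_i residual(estimated[i], measured[p[i]])."""
    return min(
        permutations(range(len(measured))),
        key=lambda p: sum(residual(e, measured[j]) for e, j in zip(estimated, p)),
    )

print(best_assignment(estimated, measured))  # -> (1, 0, 2)
```

For larger N, the same assignment problem can be solved in polynomial time with the Hungarian algorithm instead of enumerating permutations.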
When two vehicles are close to each other, Doppler angles alone cannot set them apart. The stationary Doppler radar signals should provide additional information about their speeds. In general, it is relatively more accurate for a camera to measure an angle than derive a velocity. On the other hand, it is relatively more accurate for a Doppler radar to measure a velocity than derive an angle. The contribution of this invention is to robustly tie together the angle information from a video camera and the Doppler (velocity) information from a Doppler radar. In this invention, we will match angle rates from video signals to stationary Doppler radar signals via a unique three-scan geometry.
A three-scan geometry is formed from three consecutive scans, as shown in the drawings: the ranges from the Vidar device to the ith vehicle at scans k, k+1, and k+2 are a^i + Δ_k^i, a^i, and a^i − Δ_{k+1}^i, where Δ_k^i and Δ_{k+1}^i are the radial range changes between scans, and Δθ_k^i and Δθ_{k+1}^i are the corresponding changes in Doppler angle derived from the video sequence, with

cos(Δθ_k^i) = (Oq_k^i · Oq_{k+1}^i) / (|Oq_k^i| |Oq_{k+1}^i|),

where Oq_k^i = [u_k^i, υ_k^i, f] and Oq_{k+1}^i = [u_{k+1}^i, υ_{k+1}^i, f] are the locations of the ith vehicle on the image plane and f is the focal length. Assume a constant velocity model, i.e., υ_t^i is constant, so that the vehicle travels the same distance in consecutive scan intervals.
Using the cosine law to equate those two distances, we have the constrained equation for the three-scan geometry (step 107 in the flow chart):
(a^i + Δ_k^i)^2 + (a^i)^2 − 2(a^i + Δ_k^i) a^i cos(Δθ_k^i) = (a^i − Δ_{k+1}^i)^2 + (a^i)^2 − 2(a^i − Δ_{k+1}^i) a^i cos(Δθ_{k+1}^i). (18)
Solving the following quadratic equation for a^i,

(a^i)^2 [2 cos(Δθ_{k+1}^i) − 2 cos(Δθ_k^i)] + a^i [2Δ_k^i + 2Δ_{k+1}^i − 2Δ_k^i cos(Δθ_k^i) − 2Δ_{k+1}^i cos(Δθ_{k+1}^i)] + (Δ_k^i)^2 − (Δ_{k+1}^i)^2 = 0, (19)

we may find the range from the Vidar device to the vehicle, which is performed in step 108 of the flow chart.
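Eq. (19) is a quadratic in the unknown range a^i, so it can be solved in closed form once Δ_k^i, Δ_{k+1}^i and Δθ_k^i, Δθ_{k+1}^i are known. A sketch with a synthetic constant-velocity scenario used to check the solver (the scenario numbers are illustrative, not from the source):

```python
import math

def norm(p):
    return math.sqrt(sum(c * c for c in p))

def angle_between(p, q):
    """Angle (rad) between rays from the camera origin to points p and q."""
    dot = sum(a * b for a, b in zip(p, q))
    return math.acos(dot / (norm(p) * norm(q)))

def solve_range(dk, dk1, dthk, dthk1):
    """Solve the quadratic Eq. (19) for the middle-scan range a."""
    A = 2 * math.cos(dthk1) - 2 * math.cos(dthk)
    B = (2 * dk + 2 * dk1
         - 2 * dk * math.cos(dthk) - 2 * dk1 * math.cos(dthk1))
    C = dk ** 2 - dk1 ** 2
    disc = B * B - 4 * A * C
    roots = [(-B + s * math.sqrt(disc)) / (2 * A) for s in (1.0, -1.0)]
    return max(roots)  # keep the positive (physical) root for this scenario

# Synthetic scenario: constant-velocity vehicle, camera at the origin.
T, v = 0.1, (30.0, 0.0, 0.0)                        # scan interval (s), velocity
p1 = (20.0, 60.0, 2.0)                              # position at the middle scan
p0 = tuple(c - vi * T for c, vi in zip(p1, v))      # one scan earlier
p2 = tuple(c + vi * T for c, vi in zip(p1, v))      # one scan later

dk, dk1 = norm(p0) - norm(p1), norm(p1) - norm(p2)  # radial range changes
dthk, dthk1 = angle_between(p0, p1), angle_between(p1, p2)  # angle changes

a = solve_range(dk, dk1, dthk, dthk1)
print(a, norm(p1))                                  # recovered vs true range
```

Under the constant-velocity assumption the recovered root coincides with the true middle-scan range, which is exactly the consistency that step 108 exploits.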
The criterion for matching video signals to stationary Doppler radar signals becomes validating the following equation: given an arbitrary Doppler signal pair from the stationary Doppler radar, say the jth pair, which yields the radial range changes Δ_k^j and Δ_{k+1}^j, the pair matches the ith video track if

(a^i)^2 [2 cos(Δθ_{k+1}^i) − 2 cos(Δθ_k^i)] + a^i [2Δ_k^j + 2Δ_{k+1}^j − 2Δ_k^j cos(Δθ_k^i) − 2Δ_{k+1}^j cos(Δθ_{k+1}^i)] + (Δ_k^j)^2 − (Δ_{k+1}^j)^2 = 0. (21)
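For noisy data the left-hand side of Eq. (21) will not be exactly zero, so a natural test (an implementation assumption of this sketch, not stated in the source) is to evaluate it for each candidate radar pair j and accept the pair with the smallest absolute residual:

```python
import math

def eq21_residual(a_i, dth_k, dth_k1, d_k, d_k1):
    """Left-hand side of Eq. (21): near zero when radar pair j matches track i."""
    return (a_i ** 2 * (2 * math.cos(dth_k1) - 2 * math.cos(dth_k))
            + a_i * (2 * d_k + 2 * d_k1
                     - 2 * d_k * math.cos(dth_k)
                     - 2 * d_k1 * math.cos(dth_k1))
            + d_k ** 2 - d_k1 ** 2)

# Video track i (illustrative values): range and Doppler-angle changes.
a_i, dth_k, dth_k1 = 63.28, 0.04563, 0.04428

# Candidate radar pairs j -> (delta_k, delta_k+1); pair 0 is the true match.
candidates = {0: (-0.883, -1.011), 1: (-2.10, -2.45), 2: (-0.20, -0.31)}

best_j = min(candidates,
             key=lambda j: abs(eq21_residual(a_i, dth_k, dth_k1, *candidates[j])))
print(best_j)  # -> 0
```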
Once the matched video and Doppler radar signals are found, they are fed into a stochastic model for fusion, which is performed in step 109 of the flow chart.
Assume the kinematics of the ith vehicle satisfy a stochastic constant velocity (CV) model (Eq. (22)), where X_k^i = [x^i, y^i, z^i]_k is the ith vehicle's 3D coordinate. The positional measurement equation (Eq. (25)) may be established from the matched video signals, and the velocity measurement equation (Eq. (28)) may be established from the matched Doppler radar signals.
Eqs. (22), (25), and (28) form a stochastic system for vehicle information fusion, and an extended Kalman filter may be used to estimate the position and velocity of the vehicle. For a CV model, a minimum of two scans is needed to converge; for a constant acceleration (CA) model, a minimum of three scans is needed.
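A minimal sketch of the fusion filter, assuming a linear 1D CV model with direct position (video-derived) and velocity (Doppler-derived) measurements. The source's Eqs. (22), (25), and (28) are not reproduced in this document, so the matrices below are generic placeholders; the real system is 3D and uses an extended Kalman filter for the nonlinear camera measurement:

```python
# 1D constant-velocity Kalman filter: state x = [position, velocity],
# 2x2 covariance P as nested lists.  All noise values are illustrative.

T = 0.1                            # scan interval (s)
q, r_pos, r_vel = 0.5, 4.0, 0.25   # process / measurement noise variances

x = [0.0, 0.0]
P = [[100.0, 0.0], [0.0, 100.0]]

def predict(x, P):
    """CV prediction: pos += vel*T; covariance propagated through F=[[1,T],[0,1]]."""
    xp = [x[0] + T * x[1], x[1]]
    Pp = [[P[0][0] + T * (P[1][0] + P[0][1]) + T * T * P[1][1] + q,
           P[0][1] + T * P[1][1]],
          [P[1][0] + T * P[1][1],
           P[1][1] + q]]
    return xp, Pp

def update(x, P, z, idx, r):
    """Scalar update of state component `idx` with measurement z, variance r."""
    s = P[idx][idx] + r
    k = [P[0][idx] / s, P[1][idx] / s]      # Kalman gain
    innov = z - x[idx]
    x = [x[0] + k[0] * innov, x[1] + k[1] * innov]
    P = [[P[i][j] - k[i] * P[idx][j] for j in range(2)] for i in range(2)]
    return x, P

# Feed five scans of a vehicle starting at 60 m and moving at 30 m/s.
for k in range(1, 6):
    x, P = predict(x, P)
    x, P = update(x, P, 60.0 + 30.0 * k * T, 0, r_pos)  # matched video position
    x, P = update(x, P, 30.0, 1, r_vel)                 # matched Doppler velocity
print(round(x[0], 1), round(x[1], 1))
```

After a few scans the state estimate settles near the true position and speed, illustrating why the CV model needs only two scans in principle before the filter converges.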