The present invention relates to a system and method for tracking a moving object and determining its instantaneous distance from a vehicle. More particularly, the present invention provides a system and method for a low-computational-complexity implementation of a Kalman filter in the post-tracking/measurement stage to remove the noise error introduced in the measurement of the position of the moving object.
The invention relates to the automatic detection of pedestrians along with the measurement of their distance from a moving vehicle. Nowadays, driver and pedestrian safety is becoming more and more important worldwide as the number of road accidents has increased significantly. These accidents are a major cause of loss of life and property. Recognition of pedestrians enables early warning, which helps in reducing the probability of such accidents.
There are numerous approaches for detecting pedestrians and measuring their distance from a moving vehicle. Most of them involve the use of sensors or cameras for detecting obstacles. These methods also typically involve SIFT, SURF, or other sophisticated image processing algorithms that are inherently robust in identifying identical stereo points, resulting in accurate triangulation for position estimation. However, these image processing algorithms are inherently complex and demand exhaustive computation, which in turn requires more processing power to retain good time performance of the system. One such method (US20070154068) provides for estimating the distance to an obstacle from a moving automotive vehicle equipped with a monocular camera. Specifically, the method includes image processing techniques used to reduce errors in the estimated distance in real time. A dimension, e.g. a width, is measured in the respective images of two or more image frames, thereby producing measurements of the dimension. This method provides a reduction in noise, but it involves the use of complex equations for measuring distance. Other such methods are less effective at night (in poor light conditions) and in unfavorable weather conditions.
There is, therefore, a need for a method and system that provide automatic detection of pedestrians and their distance from the moving vehicle with less error or noise in the tracked images, and that involve less complexity. The method and system should also be capable of working in night mode and in unfavorable weather conditions.
A method for identifying a moving object in front of a vehicle and determining its distance from the said vehicle is disclosed. The method comprises the steps of capturing real time images falling in the range of an image capturing device in a plurality of frames, and synchronizing the frames so captured to obtain a synchronized frame or an identical frame. The method further comprises processing the captured image in the synchronized frame or the identical frame by executing processor-implemented steps of: identifying the image by a selected feature, wherein the feature includes pixels; refining or tracking the moving object by kernel-based correlation tracking over the identical frames; triangulating the moving object so detected to obtain an instantaneous distance between the moving object and the vehicle; and refining the instantaneous distance so measured by using a Kalman filter layer to remove noise produced in the image of the moving object.
A system for identifying a moving object in front of a vehicle and determining an instantaneous distance between the moving object and said vehicle is disclosed. The system comprises an image capturing device adapted to capture images falling in the range of the image capturing device in a plurality of frames, and a system clock coupled with the image capturing device for synchronizing the plurality of frames to obtain a synchronized frame. The system further comprises a processor for processing the captured image in the synchronized frame and capable of executing programmed instructions for filtering and refining the captured images by using a Kalman filter layer, and a calculating unit for determining the instantaneous distance between the moving object and the vehicle.
Some embodiments of this invention, illustrating its features, will now be discussed:
The words “comprising”, “having”, “containing” and “including”, and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
It must also be noted that as used herein and in the appended claims, the singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Although any systems, methods, apparatuses, and devices similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, the preferred systems and parts are now described.
The disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms.
Following are the definitions for the understanding of the invention:
Real time: a system is real time when it can support the execution of applications with time constraints on that execution.
Moving object: any object which is in motion with respect to the vehicle.
Real time image: an image that can be animated on screen at the same speed as the real-world object.
In front of: the position of a moving object which falls directly in the range of the image capturing device.
Instantaneous distance: the distance between the moving object and the vehicle without any delay.
The present invention discloses a method and system (1) for identifying a moving object (2) in front of a vehicle (3). More particularly, the present invention discloses a method and a system (1) for determining an instantaneous distance between the moving object (2) and the vehicle (3). The moving object (2) includes but is not limited to pedestrians, automobiles, animals, etc.
In accordance with an aspect, as illustrated in
In accordance with an aspect, the image captured by the image capturing device (4) in the synchronized frame is processed through a processor (6). The moving object (2) is identified by applying an identification or detection algorithm. For the algorithm part, the first step is to collect features using Haar Basis Functions on the image of the moving object (2).
The image can be represented in terms of an equation:

ii(x, y) = Σ_{x′≤x, y′≤y} i(x′, y′)

where ii is the integral image and i is the original image. The feature includes pixels.
This integral image can be computed faster using the following recursive function.
s(x,y)=s(x,y−1)+i(x,y)
ii(x,y)=ii(x−1,y)+s(x,y)
With s(x, −1)=ii(−1, y)=0 initially.
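The recurrence above can be sketched in code as follows (an illustrative Python sketch; the function name and array layout are editorial assumptions, not part of the disclosure):

```python
import numpy as np

def integral_image(i):
    """Compute the integral image ii of image i via the recurrence
    s(x, y) = s(x, y-1) + i(x, y);  ii(x, y) = ii(x-1, y) + s(x, y),
    with s(x, -1) = ii(-1, y) = 0 initially."""
    h, w = i.shape
    s = np.zeros((h, w))
    ii = np.zeros((h, w))
    for x in range(h):
        for y in range(w):
            s[x, y] = (s[x, y - 1] if y > 0 else 0.0) + i[x, y]
            ii[x, y] = (ii[x - 1, y] if x > 0 else 0.0) + s[x, y]
    return ii
```

The result is equivalent to a double cumulative sum over both axes, which is why a single pass over the image suffices.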
On this integral image of the moving object (2), a Haar-like feature is computed using a block size of 12×28.
After a considerable number of features has been collected from both positive (human) and negative (non-human) samples, an AdaBoost classifier is trained. Referring to
As illustrated in
According to the quantized color scheme and by way of specific example:
Let h, s, and v be the HSV-domain color values, with s and v normalized to [0,1], for particular R, G, and B values, and let index be the quantized bin index.
A pure black area can be found in
a gray area can be found in
and
a white area can be found in
The color area is found in
The quantization of h, s and v value for different s and v is done as follows:
Let Hindex, Sindex and Vindex be the quantized index for different h, s and v values.
Finally the histogram bin index is given by the following equation
index=4*Hindex+2*Sindex+Vindex+8;
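A minimal sketch of the bin-index mapping, assuming (as the "+8" offset implies but the text does not state) that indices 0-7 are reserved for the achromatic black/gray/white bins; the function name and the achromatic handling are illustrative assumptions:

```python
def hsv_bin_index(h_index, s_index, v_index, achromatic=None):
    # Chromatic pixels map per the equation in the text:
    #   index = 4*Hindex + 2*Sindex + Vindex + 8
    # Achromatic (black/gray/white) pixels are assumed to occupy
    # bins 0..7; pass their assumed bin via `achromatic`.
    if achromatic is not None:
        return achromatic
    return 4 * h_index + 2 * s_index + v_index + 8
```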
The correlation between two segmented objects is defined via the Bhattacharyya distance, d(y) = √(1 − ρ[p(y), q̂]), where ρ[·] is the Bhattacharyya coefficient between p and q̂.
Now, if any moving object (2) is segmented in the previous frame and not segmented in the current frame, it undergoes kernel-based tracking as follows.
A candidate model is created in the previous frame using the same histogram model described earlier. In the current frame, a target model is fitted at the same position and within the same rectangle as in the previous frame.
Now the distance between the target model and the target candidate histogram is calculated as d(y) = √(1 − ρ[p(y), q̂]), where ρ[·] is the Bhattacharyya coefficient between p and q̂.
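The Bhattacharyya coefficient and distance for two normalized histograms can be sketched as follows (an illustrative helper, not part of the disclosure):

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    # rho[p, q] = sum_i sqrt(p_i * q_i) for two normalized histograms
    return float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q))))

def bhattacharyya_distance(p, q):
    # d(y) = sqrt(1 - rho[p(y), q_hat]); clamp against tiny negative
    # values caused by floating-point rounding when rho is near 1
    rho = bhattacharyya_coefficient(p, q)
    return float(np.sqrt(max(0.0, 1.0 - rho)))
```

Identical histograms give a distance of 0; histograms with disjoint support give a distance of 1, which is what makes the measure usable as a correlation score between segmented objects.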
Now the displacement of the target centre is calculated by the weighted mean.
Once the new location of the target is found, the moving object that went missing is recovered by the above tracking process.
Now the image capturing device (4) is calibrated using Zhang's calibration method.
The epipolar geometry between the two views is established as
m2^T F m1 = 0
where F is the fundamental matrix corresponding to the (m1 → m2) correspondence points. From F one can compute the essential matrix E, which is related to F by
F = Kr^(−T) E K1^(−1)
such that,
E = [t]x R
where t and R are translation and rotation of the image capturing device (4).
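The relation E = [t]x R uses the skew-symmetric cross-product matrix of t; a minimal sketch (function names are editorial):

```python
import numpy as np

def skew(t):
    # [t]_x: the 3x3 skew-symmetric matrix with skew(t) @ v == cross(t, v)
    return np.array([[0.0,  -t[2],  t[1]],
                     [t[2],  0.0,  -t[0]],
                     [-t[1], t[0],  0.0]])

def essential_matrix(t, R):
    # E = [t]_x R, from the camera translation t and rotation R
    return skew(np.asarray(t, dtype=float)) @ np.asarray(R, dtype=float)
```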
Once the moving object is tracked in the plurality of image capturing devices (4), Harris corner matching is done within the rectangle of the moving object (2).
The corners are matched using standard correlation and an instantaneous distance between the moving object (2) and the vehicle (3) is calculated using standard stereo-vision based triangulation technique.
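For a rectified stereo pair, the standard triangulation reduces to Z = f·B/d, where d is the disparity between matched corners; the following is a minimal sketch under that standard pinhole assumption (the disclosure does not spell out its triangulation equations):

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Distance Z (meters) from disparity for a rectified stereo pair:
    Z = f * B / d, with d = x_left - x_right in pixels,
    f the focal length in pixels, B the camera baseline in meters."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity
```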
In accordance with an aspect, the processing further includes refinement of the instantaneous distance measured between the moving object (2) and the vehicle (3) by using a Kalman filter layer to remove noise created due to a complex background and poor light or weather conditions, for example fog, rain, etc. The complex background results in difficulties in identifying the moving object.
As illustrated in
A linear system can be described as:
The state equation:

x_{k+1} = A·x_k + B·u_k + w_k   (1)

The output equation:

y_k = C·x_k + z_k   (2)
In the above equations, A, B, and C are matrices, k is the time index, x defines the state of the system (1), u is a known input to the system (1), y is the measured output, and w and z are noise terms called the process noise (w) and the measurement noise (z). One cannot measure x directly; rather, one can measure y, which in turn is a function of x, but one cannot take y at face value as it is corrupted by the noise z. Running a Kalman filter layer gives a better estimate of the current state x, which in turn gives corrected measurements of its components (in our case x consists of position and velocity).
This is how filtering is done on the measured raw data using the Kalman filtering layer.
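One predict/update cycle of such a filter can be sketched as follows (an illustrative sketch with the known input u omitted, matching the zero-input model adopted later in the text; variable names are editorial):

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle for x_{k+1} = A x_k + w_k, y_k = C x_k + z_k.
    x: state estimate, P: estimate covariance, y: new measurement."""
    # Predict the next state and its covariance
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the measurement y
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

Fed with position measurements, the filter's state estimate converges toward the true position and velocity after a few iterations, which is the behavior the text relies on when it initializes the velocity to zero.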
In accordance with an embodiment, by way of specific example, the vehicle (3) is considered to be linearly approaching the moving object (2). It is also assumed that the velocity of the moving object (2) is negligible with respect to the vehicle (3). So in effect, it is a system (1) consisting of a vehicle (3) approaching a fixed object with constant velocity, and so the distance between the vehicle (3) and the fixed object decreases monotonically with time. This is normally a fair assumption, as the speed of the vehicle (3) is normally much greater than the velocity of the moving object (2). When the two are comparable (as is the case when the vehicle is slowing down), one may not need automatic object detection to avoid accidents.
Referring to
For a zero acceleration model, the instantaneous position and velocity of the vehicle (3) are governed by below mentioned equations.
p_{k+1} = p_k + Δt·v_k + p̃_k   (3)
and,
v_{k+1} = v_k + ṽ_k   (4)
where Δt is the measurement interval, p_k is the position of the vehicle (3), v_k is the velocity of the vehicle (3), and p̃_k and ṽ_k are the position and velocity noise. It is to be noted that the stereo-vision based distance measurement data serves as the position information (p) of the vehicle (3).
Mathematically speaking, the state of the system (1) consists of the position (p) and velocity (v) as follows:

x_k = [p_k  v_k]^T   (5)

Since we measure the position (p) at the Δt interval in the output y, our linear equations boil down to:

x_{k+1} = [1 Δt; 0 1]·x_k + w_k   (6)

y_k = [1 0]·x_k + z_k   (7)

where z_k is the measurement noise.
Comparing equations (1, 2) with equations (6, 7), we note that A = [1 Δt; 0 1], B = 0 (there is no known input u here), and C = [1 0].
To start the Kalman filtering we need prior knowledge of three more parameters, i.e. the initial state x, the process noise covariance Q, and the measurement noise covariance R.
For every moving object (2) to be detected, we set the initial position to the first measured distance, and we set the initial velocity to zero: we did not have any mechanism/instrumentation to feed the vehicle's (3) instantaneous velocity to the algorithm in real time, and we found it better to leave it as zero and let the filter adapt to the velocity after a few iterations.
Process noise covariance (Q) and measurement noise covariance (R) are given as:
Q = E(w_k·w_k^T) and R = E(z_k·z_k^T)   (8)
where w_k^T and z_k^T are the transposes of w_k and z_k, and E(·) signifies the expected value.
The measurement noise covariance is relatively easy to evaluate, as it is the square of the standard deviation in the measured data. We have measured it offline (the actual process of tuning it is described in sec V) and fed the value to the algorithm as a constant throughout the simulation (true for all moving objects).
We are now only left with calculating the process noise covariance. This is also determined offline and fed to the algorithm as a constant value (true for all pedestrians). The details of tuning it are described in sec V. The process covariance is given by:
The basic parameter to be assumed/experimented with is the noise in velocity, so that Q can be estimated. Say the velocity noise (i.e. the standard deviation in velocity) is v_noise.
Since the position is proportional to the time interval Δt times the velocity, and the velocity noise is v_noise, the variance in position is a = (Δt)²·(v_noise)². Next, we calculate the variance in velocity as b = (v_noise)². Finally, the covariance of the position noise and the velocity noise equals the standard deviation of the position noise times the standard deviation of the velocity noise, which can be calculated as c = (Δt·v_noise)·(v_noise) = Δt·(v_noise)². Combining these equations we have:

Q = [a c; c b] = (v_noise)²·[Δt² Δt; Δt 1]
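The derivation of a, b, and c above can be sketched directly (an illustrative helper; the function name is editorial):

```python
import numpy as np

def process_noise_cov(dt, v_noise):
    """Q = [[a, c], [c, b]] with a = dt^2 * v_noise^2,
    b = v_noise^2, and c = dt * v_noise^2, as derived in the text."""
    a = (dt ** 2) * (v_noise ** 2)
    b = v_noise ** 2
    c = dt * (v_noise ** 2)
    return np.array([[a, c], [c, b]])
```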
The v_noise parameter is determined offline to choose an optimum value for Q. This can be treated as an offline training process done on some captured data, which is then used as a constant always and applied on the real data.
To give it physical significance (much like a root mean square quantity), we have further modified the total error to get the final error (in meters) as:
error = √(total_error / n)   (12)
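Equation (12) can be sketched as follows, under the assumption (the defining equations for total_error are not reproduced in the text) that total_error is the sum of squared deviations between the filtered distances and the reference data set:

```python
import math

def rms_error(filtered, reference):
    # total_error is assumed to be the sum of squared deviations
    # between the filtered distances and the reference data set;
    # equation (12) then yields the final error in meters.
    total_error = sum((f - r) ** 2 for f, r in zip(filtered, reference))
    n = len(filtered)
    return math.sqrt(total_error / n)
```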
In accordance with an aspect, the present invention also discloses a system (1) for identifying a moving object (2) in front of a vehicle (3). More particularly, the present invention discloses a system (1) for determining distance between the moving object (2) and the vehicle (3).
Referring to
In accordance with an embodiment, the system (1) comprises a plurality of image capturing devices (4). The image capturing device (4) includes a camera. The camera is mounted on the vehicle (3) and is in motion.
In accordance with an embodiment, the system (1) further comprises of a control unit (7) for measuring and fixing steering angles and velocity of the vehicle (3) based on the dimensions measured by the Kalman filter layer.
By way of specific example, let us assume the moving object (2) to be a pedestrian. The pedestrian can be assumed to be fixed relative to the vehicle (3), as the velocity of the vehicle (3) is far higher than that of the walking pedestrian. In that case, the average rate of change of the vehicle-pedestrian distance (Z) determines the velocity of the vehicle (3). We get Z for the detected pedestrians, and we also have timestamps (t) for each measurement. So the velocity (V) can be determined using that information as described below:
Speed can be measured as:
Say the algorithm has detected n pedestrians with corresponding vehicle-pedestrian distances z_{t1,1}, z_{t1,2}, …, z_{t1,n} in timeframe t1 (easily seen from the timestamp) and z_{t2,1}, z_{t2,2}, …, z_{t2,n} in timeframe t2; then the speed (s) of the vehicle can be determined as:
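The published equation for s is not reproduced in the text; a plausible sketch, assuming the speed is the per-pedestrian rate of closure averaged over the n detections, is:

```python
def vehicle_speed(z_t1, z_t2, t1, t2):
    """Assumed formula: each pedestrian i contributes a rate
    (z_t1[i] - z_t2[i]) / (t2 - t1); the mean over the n detections
    is taken as the vehicle speed s. This averaging is an editorial
    assumption, not the published equation."""
    n = len(z_t1)
    return sum(a - b for a, b in zip(z_t1, z_t2)) / (n * (t2 - t1))
```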
Steering angle may be calculated as:
This can be measured by measuring the optical flow. Let opticalFlowLeft be the optical flow for the left half of the image and opticalFlowRight be the optical flow for the right half of the image. Then the steering angle (theta) can be computed as:
Velocity or direction may be calculated as:
Combining the speed (s) and steering angle (theta) information gives the velocity/direction information of the vehicle (3).
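The published equation for theta is likewise not reproduced; one hedged sketch, assuming theta is proportional (via an assumed gain) to the normalized difference of the two half-image flow magnitudes, is:

```python
def steering_angle(flow_left, flow_right, gain=1.0):
    """Hypothetical sketch only: the disclosure's equation for theta
    is not reproduced in the text. Here theta is taken as gain times
    the normalized difference of opticalFlowLeft and opticalFlowRight;
    both the gain and the sign convention are editorial assumptions."""
    denom = abs(flow_left) + abs(flow_right)
    if denom == 0:
        return 0.0
    return gain * (flow_left - flow_right) / denom
```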
In accordance with an embodiment, the system (1) further comprises an alarm generating device, which may be a module for alerting an operator of the vehicle by generating an alarm.
The system of the present invention also applies 10-point smoothing (a moving average filter) directly on the raw data and calculates the error to see how this naïve approach compares with the Kalman output. Table 1 shows a comparative study of the error. It shows the Kalman technique to be the best candidate for reducing the measurement error to a significant level.
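The naïve baseline compared in Table 1 can be sketched as follows (an illustrative sketch; the handling of the first few samples, where a full 10-point window is not yet available, is an assumption):

```python
def moving_average(data, window=10):
    """Trailing moving average used as the naive smoothing baseline;
    the window is assumed to shrink at the start of the sequence."""
    out = []
    for k in range(len(data)):
        lo = max(0, k - window + 1)   # shrink the window at the start
        out.append(sum(data[lo:k + 1]) / (k + 1 - lo))
    return out
```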
Implemented on a 2 GHz x86 processor, the proposed system was able to accurately detect and track pedestrians from a vehicle moving at 25 km/hr, and the processing time taken was less than 200 msec, which is fast enough for generating pedestrian alarms.
To obtain optimized Q and R as inputs to the Kalman filter, we varied Q (through v_noise) and R over a wide range (0.01 to 1000) through pure simulation and calculated the error based on the reference data set. The result is plotted in
When applied to other moving objects, it also showed good results, as illustrated in
The preceding description has been presented with reference to various embodiments of the invention. Persons skilled in the art and technology to which this invention pertains will appreciate that alterations and changes in the described process and methods of operation can be practiced without meaningfully departing from the principle, spirit and scope of this invention.
Number | Date | Country | Kind |
---|---|---|---|
1911/MUM/2010 | Jun 2010 | IN | national |
0969/MUM/2011 | Mar 2011 | IN | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/IN2011/000436 | 6/30/2011 | WO | 00 | 3/13/2013 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2012/001709 | 1/5/2012 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
7139411 | Fujimura et al. | Nov 2006 | B2 |
7209221 | Breed et al. | Apr 2007 | B2 |
7385680 | Tamaki et al. | Jun 2008 | B2 |
7418113 | Porikli | Aug 2008 | B2 |
8164628 | Stein et al. | Apr 2012 | B2 |
20050100192 | Fujimura | May 2005 | A1 |
20060187305 | Trivedi | Aug 2006 | A1 |
20070154068 | Stein | Jul 2007 | A1 |
20080144925 | Zhu | Jun 2008 | A1 |
20090080701 | Meuter | Mar 2009 | A1 |
20090309966 | Chen | Dec 2009 | A1 |
20100057305 | Breed | Mar 2010 | A1 |
Number | Date | Country | |
---|---|---|---|
20130194419 A1 | Aug 2013 | US |