The present invention relates to motor vehicle vision system applications that track or display the location of the vehicle relative to roadway lane markers, and more particularly to a method of consolidating lane marker position information from successively generated video images.
Motor vehicle forward vision data generated by a video camera mounted at or near the driver's eye level can be processed to identify various items of interest such as roadway lane markers. The vision system can then determine the location of the vehicle relative to the lane markers, for displaying video information to the driver or for detecting lane changing and/or driving patterns indicative of a drowsy driver. Most of these applications require lane marker detection in a region of about 5–30 meters forward of the vehicle, where the lane markers can be reliably approximated as straight lines. However, dashed or periodic lane markers can have relatively large gaps, and frequently only a fraction of a lane marker is visible to the camera in any given video frame, particularly in mechanizations where a portion of the roadway within the video frame is obscured by the hood or fenders of the vehicle. Since this can degrade the ability of the lane tracking system to perform the intended functions, it would be beneficial if the information obtained from successively generated video images could be consolidated to provide more complete lane marker data, either for display or lane detection purposes.
The present invention is directed to a method of consolidating lane marker position information by projecting lane marker information from a previously generated video frame into a current video frame. Projecting the lane marker information involves transforming the detected markers from the previous frame to world coordinates, and predicting their position in the current video frame based on measured vehicle rotation and translation parameters. The projected marker coordinates can be combined with world coordinates of lane markers from the current video frame for lane detection applications, or converted to image plane coordinates and combined with lane marker image plane coordinates of the current video frame for driver display purposes.
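The conversion from world coordinates to image plane coordinates mentioned above can be illustrated with a flat-road pinhole camera model. This is a minimal sketch, not taken from the disclosure: the function name `world_to_image` and the camera height, focal length, and principal-point values are illustrative assumptions.

```python
def world_to_image(x, y, cam_height=1.2, focal_px=800.0, cx=320.0, cy=240.0):
    """Map a world-coordinate lane-marker point to image-plane pixels.

    x : down-range distance from the vehicle (meters, must be > 0)
    y : cross-range distance from the vehicle's longitudinal axis (meters)
    Assumes a flat road and a pinhole camera with its optical axis parallel
    to the road surface; cam_height, focal_px, cx, cy are illustrative values.
    """
    u = cx + focal_px * y / x               # column: cross-range scaled by 1/x
    v = cy + focal_px * cam_height / x      # row: drops below horizon as 1/x
    return u, v
```

A point on the vehicle's own axis (y = 0) maps to the image column of the principal point, and more distant points rise toward the horizon row, matching the usual perspective appearance of lane markers.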
The method of the present invention is carried out in a vehicle-mounted vision system designed, among other things, to capture video images of a scene in the forward path of the vehicle for analysis and/or display to the driver. One of the principal objectives of the video image analysis is to identify lane markers painted on the roadway and the location of the host vehicle relative to the markers.
The principal region of interest for purposes of lane marker identification and tracking comprises the portions of the roadway approximately 5–30 meters forward of the vehicle 10. The outside boundary of this region within the real world and image plane views of
The present invention enhances the available information by projecting lane marker coordinate data from a previous video frame into the current video frame and consolidating the projected and current lane marker coordinate data to provide a more complete representation of the lane markers and their position with respect to the host vehicle, whether the lane marker data is used for display purposes or lane tracking algorithms. Projection of the previously captured lane marker coordinate data involves characterizing the vehicle's movement in terms of its speed (translation) and yaw rate (rotation) or similar parameters. Thus, a system for carrying out the method of the present invention is represented by the block diagram of
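As a rough sketch of the consolidation step described above, the following Python fragment projects previous-frame world coordinates using vehicle speed and yaw rate and merges them with the current frame's detections. The function name `consolidate_frames`, and the assumption that the per-frame travel distance and heading change are d = VS·Δt and φ = YR·Δt, are illustrative; the 5–30 meter gating reflects the region of interest stated in the description.

```python
import math

def consolidate_frames(prev_markers, curr_markers, vs, yr, dt,
                       min_range=5.0, max_range=30.0):
    """Merge lane markers projected from the previous frame with those
    detected in the current frame.

    prev_markers, curr_markers : lists of (x, y) world coordinates (meters);
        x is down-range, y is cross-range from the vehicle's axis
    vs : vehicle speed VS (m/s); yr : yaw rate YR (rad/s); dt : frame period (s)
    """
    d, phi = vs * dt, yr * dt                       # travel and heading change
    dx, dy = d * math.cos(phi), d * math.sin(phi)   # frame-to-frame origin shift
    projected = [((x - dx) * math.cos(phi) + (y - dy) * math.sin(phi),
                  -(x - dx) * math.sin(phi) + (y - dy) * math.cos(phi))
                 for x, y in prev_markers]
    # Keep current detections, plus projected points still in the region of interest
    merged = list(curr_markers)
    merged += [(x, y) for x, y in projected if min_range <= x <= max_range]
    return merged
```

In practice the merged set would feed either the display path (after conversion to image plane coordinates) or the lane detection path, as the description indicates.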
Referring to the flow diagram of
As mentioned above, the projection of lane marker coordinates from a given video frame to the next successive video frame according to this invention involves translating the coordinates based on vehicle speed VS and rotating the coordinates based on yaw rate YR. The starting point is the world coordinate pair (x, y) of a previously identified lane marker, where the x-coordinate represents down-range distance from the vehicle 10 and the y-coordinate represents cross-range distance from the vehicle's central longitudinal axis (as may be represented by the arrow 14 in
Δx(n) = d(n)·cos φ(n), and
Δy(n) = d(n)·sin φ(n)
This frame-to-frame origin shift of the vehicle is applied to the lane marker coordinates of the prior video frame. For any such coordinate pair (xᵢ(n), yᵢ(n)), the projected coordinate pair in the next video frame, (x̃ᵢ(n+1), ỹᵢ(n+1)), is given by:
x̃ᵢ(n+1) = (xᵢ(n) − Δx(n))·cos φ(n) + (yᵢ(n) − Δy(n))·sin φ(n), and
ỹᵢ(n+1) = −(xᵢ(n) − Δx(n))·sin φ(n) + (yᵢ(n) − Δy(n))·cos φ(n)
The terms (xᵢ(n) − Δx(n)) and (yᵢ(n) − Δy(n)) account for the vehicle translation, while the factors cos φ(n) and sin φ(n) account for the vehicle rotation.
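The projection equations can be sketched directly in Python. This is a minimal illustration, assuming the per-frame distance d(n) and heading change φ(n) are obtained from vehicle speed VS and yaw rate YR over the frame interval Δt; the function name `project_markers` is hypothetical.

```python
import math

def project_markers(markers, vs, yr, dt):
    """Project lane-marker world coordinates from frame n into frame n+1.

    markers : list of (x, y) pairs; x is down-range (m) from the vehicle,
              y is cross-range (m) from the vehicle's longitudinal axis
    vs      : vehicle speed VS (m/s)
    yr      : yaw rate YR (rad/s)
    dt      : frame interval (s)
    """
    d = vs * dt              # distance traveled between frames
    phi = yr * dt            # heading change between frames
    dx = d * math.cos(phi)   # origin shift Δx(n), down-range
    dy = d * math.sin(phi)   # origin shift Δy(n), cross-range
    projected = []
    for x, y in markers:
        # Translate by the origin shift, then rotate into the new frame
        xp = (x - dx) * math.cos(phi) + (y - dy) * math.sin(phi)
        yp = -(x - dx) * math.sin(phi) + (y - dy) * math.cos(phi)
        projected.append((xp, yp))
    return projected
```

For straight-line driving (YR = 0) the rotation vanishes and each marker simply slides toward the vehicle by the distance traveled, which matches intuition for the frame-to-frame shift.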
In summary, the present invention provides a simple and cost-effective method of consolidating identified coordinates of images successively generated by a vehicle vision system. The consolidated coordinates provide enhanced display and improved lane marker tracking. While the method of the present invention has been described with respect to the illustrated embodiment, it is recognized that numerous modifications and variations in addition to those mentioned herein will occur to those skilled in the art. Accordingly, it is intended that the invention not be limited to the disclosed embodiment, but that it have the full scope permitted by the language of the following claims.