An embodiment relates generally to lane marker detection of a road using an image-based capture device.
Camera-based lane marker detection systems are used to detect lanes or road segments of a vehicle road. Most systems work reasonably well in highway scenarios but cannot handle the complex environments of local scenarios. Such camera-based systems are susceptible to incorrectly distinguishing road objects such as curbs and lane markers from shadows cast by trees, buildings, and other environmental conditions. Furthermore, camera-based systems are typically challenged by conditions such as sharp curves in the road, poor weather, or a low sun angle. Since curves in roads are more difficult to detect than straight lines, such systems have the disadvantage of requiring slower processing times to accurately detect the lane markers.
An advantage of the invention is low-cost and reliable detection of lane markers in a road using an image capture device. The invention provides for the use of a low-cost image capture device, such as a camera, for reliably detecting lane markers along curved or straight roads. The lane markers of sharp curves are distinguished from distracting factors such as shadows from buildings and trees, as well as poorly painted lane markers.
An embodiment contemplates a method of detecting road lane markers in a vehicle road using an imaging device. Road input image data is captured using the imaging device. Lighting normalization is applied to the road input image data. The method detects the road lane markers in a few main orientations in the input image. In each main orientation, the normalized input data is convolved with an oriented edge detection filter for generating an oriented edge-based filter response. The normalized input data is convolved with an oriented line detection filter for generating an oriented line-based filter response. Candidate lane markers are selected in response to the edge-based filter response and line-based filter response in the neighboring angles of each main orientation. A transformation technique is applied to the candidate lane markers for identifying the lane markings in the neighboring angles of each main orientation.
An embodiment contemplates a lane marker detection system. An imaging device is provided for capturing road input data. A processor receives the captured road input data received by the imaging device. The processor applies lighting normalization to the road input data. The processor processes the normalized input data in a few main orientations. In each main orientation, the processor convolves the normalized input data with an oriented edge detection filter for generating an oriented edge-based filter response. The processor convolves the normalized input data with an oriented line detection filter for generating an oriented line-based filter response. The processor selects candidate lane markers from the edge-based filter response and the line-based filter response. In each main orientation, the processor applies a transformation technique to the selected candidate lane markers for identifying the line segments of lane markings. An output device is provided for identifying a location of each of the lane markers.
FIG. 5a is an image representation of an oriented line filtering process.
FIG. 5b is a graphical representation of the oriented line filtering process.
FIG. 6a is an image representation of an oriented edge filtering process.
FIG. 6b is a graphical representation of the oriented edge filtering process.
FIG. 7a is an image representation of a combined oriented edge and line filtering process.
FIG. 7b is a graphical representation of the combined oriented edge and line filtering process.
There is shown in
The lane marker detection system 10 includes an image capture device 12 including, but not limited to, a camera. The image capture device 12 captures an image of the road, typically the area directly in front of the vehicle. The captured image is processed for identifying both the edges of a lane marker as well as the line (i.e., main body) of the lane marker.
The lane marker detection system 10 further includes a processor 16 for receiving and processing the captured image data by the image capture device 12. A memory storage device 18 may also be provided for storing and retrieving the captured data.
The processor 16 executes a program that filters the captured image data in real time for determining the presence and location of one or more lane markers in the vehicle road. The detected lane markers are provided to an output device 14 such as an autonomous steering module or an image display device. The autonomous steering module may use the processed information for autonomously maintaining vehicle position within the road between the detected lane markers. The image display device, which may include, but is not limited to, monitor-type displays, projection-type imaging, holograph-type imaging, or similar imaging displays, may use the processed information for highlighting the lane markers in the image display device for providing visual enhancement of the road to the driver of the vehicle. The term highlighting refers to identifying the location of the lane markers or road segments in the image data and may be performed by any comparable method for identifying the location of the lane markers in the captured image data.
In block 20, the image capture device captures an image of a candidate road segment exterior of the vehicle as illustrated in
The normalized captured image data is split into a plurality of parallel processing paths for lane marker detection in a few main orientation angles. In block 22, in each main orientation angle α, the normalized captured image data is convolved with an oriented line detection filter with angle α for detecting the lines with an angle close to α (i.e., the main body portion) of the lane marker. In addition, the normalized captured image data is convolved with an oriented edge detection filter with angle α for detecting the edges with an angle close to α bordering the line of the lane marker.
FIGS. 5a and 5b show an image representation and a graphical representation, respectively, of a respective oriented line detection filter applied to captured image data for generating a line-based filter response. As shown in the filter response of the image representation in
FIGS. 6a and 6b show an image representation and a graphical representation, respectively, of a respective oriented edge detection filter applied to captured image data for generating an edge-based filter response. The edge-based filter applied to the captured image data enhances the edges for detecting the edges of a respective lane marker by generating a negative response and a positive response as shown in the filter response of
FIGS. 7a and 7b show an image representation and a graphical representation, respectively, of a combined edge and line oriented filtering process applied to the normalized captured input data. As shown in both the graphical representation and the image representation, the oriented edge detection filters and the oriented line detection filters are applied cooperatively for detecting the lines and edges of lane markers in the vehicle road to identify the candidate points of lane markers in the image.
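The cooperative filtering of block 22 can be sketched for a single orientation as follows (a minimal illustration assuming the vertical orientation α = 90° only; the kernel sizes, the zero-mean weighting, and the synthetic road patch are assumptions; a full implementation would rotate the kernels for each main orientation angle):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D correlation (adequate here, since the kernels are
    used directly as templates and no flipping is required)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def line_kernel(width):
    """Line kernel for a vertical marker: positive over the marker body,
    negative on both flanks, zero mean."""
    k = np.full((5, 3 * width), -0.5)
    k[:, width:2 * width] = 1.0
    return k

def edge_kernel(width):
    """Edge kernel for a vertical marker: positive on the bright side and
    negative on the dark side of a dark-to-bright transition."""
    k = np.zeros((5, 2 * width))
    k[:, :width] = -1.0
    k[:, width:] = 1.0
    return k

# Synthetic normalized road patch: dark asphalt with a bright 3-pixel marker.
img = np.zeros((20, 30))
img[:, 14:17] = 1.0

line_resp = conv2d(img, line_kernel(3))
edge_resp = conv2d(img, edge_kernel(3))
```

On this patch the line response peaks where the kernel's center band aligns with the marker body, while the edge response is strongly positive at the dark-to-bright edge and strongly negative at the bright-to-dark edge, matching the positive/negative response pattern described above.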
In block 23, a Hough transform technique is applied for identifying the line segments of lane markings from the candidate points of lane markers. The Hough transform is a feature extraction technique used to find imperfect instances of objects within a class of shapes among the candidate points being analyzed. The Hough transform is concerned with the identification of lines among the candidate points in the image, and more specifically, the identification of the positions and angles of those lines. For example, the oriented edge detection filter and the oriented line detection filter with an angle α in block 22 are used as a pre-processor to obtain candidate points that lie on the desired line with an angle close to α (i.e., points having large positive line filter responses, and large positive and large negative edge filter responses along the direction perpendicular to α) in the normalized image. Due to imperfections in either the oriented edge detection filter or the oriented line detection filter, or due to noise in the normalized captured image data, there may be missing pixels on the desired lines/curves or spurious points in the filtering results. Therefore, it is possible to group the candidate points into candidate line segments by parameterizing lines based on the candidate points in the image. The Hough transform technique basically determines whether there is enough evidence of a line based on the candidate points; if there is, the parameters of the line are calculated. The Hough technique parameterizes lines in the Hough domain with two parameters, ρ and θ, where ρ represents the distance between the line and the origin, and θ is the angle of the line. Using this parameterization, the equation is written as follows:
ρi = x cos θi + y sin θi
For a line with parameters (ρ,θ) in an image plane, all the points that lie on the line obey the above equation. As a result, for the candidate image points, the Hough transform algorithm determines which lines can be extracted and which lines can be eliminated.
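The grouping described above can be sketched with a basic accumulator-voting implementation of the transform (a minimal, illustrative sketch; the helper name hough_peak and the resolution parameters rho_res and theta_step_deg are assumptions, and a practical implementation would return every bin above a vote threshold rather than only the single peak):

```python
import numpy as np

def hough_peak(points, rho_res=1.0, theta_step_deg=5):
    """Vote candidate (x, y) points into a (rho, theta) accumulator using
    rho = x*cos(theta) + y*sin(theta), then return the strongest line as
    (rho, theta in degrees, vote count)."""
    thetas_deg = np.arange(0, 180, theta_step_deg)
    thetas = np.deg2rad(thetas_deg)
    pts = np.asarray(points, dtype=float)
    max_rho = float(np.hypot(pts[:, 0], pts[:, 1]).max()) + rho_res
    acc = np.zeros((int(2 * max_rho / rho_res) + 1, thetas.size), dtype=int)
    for x, y in pts:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        r_idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[r_idx, np.arange(thetas.size)] += 1
    r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
    return r_i * rho_res - max_rho, float(thetas_deg[t_i]), int(acc.max())

# Example: 25 collinear points on the vertical line x = 12, plus two outliers
# that the peak vote count ignores.
pts = [(12, y) for y in range(25)] + [(3, 7), (20, 4)]
rho, theta_deg, votes = hough_peak(pts)
```

Consistent with the embodiment, the θ search could further be restricted to the neighboring angles of each main orientation α rather than the full 0° to 180° range.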
In block 24, a false alarm mitigation analysis test is applied for verifying that each identified lane marker extracted by the Hough technique is in fact a lane marker. To execute the false alarm mitigation analysis test, a length (l) of each identified lane marker is determined in world coordinates through camera calibration with respect to the ground plane. Next, the length (l) of each identified lane marker is compared to a predetermined length. The predetermined length is representative of a minimum length that a respective lane marker must have in order to be considered a lane marker. If the length (l) is greater than the predetermined length, then the identified lane marker is considered to be a lane marker. If the length (l) is less than the predetermined length, then the identified lane marker is considered not to be a lane marker.
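The length-based test of block 24 reduces to a threshold comparison once the segment endpoints have been projected onto the ground plane. A minimal sketch (the 3.0 m minimum length and the function names are illustrative assumptions; the embodiment leaves the predetermined length unspecified):

```python
import math

def marker_length_m(p0, p1):
    """Euclidean length, in meters, of a segment whose endpoints have already
    been projected into world (ground-plane) coordinates via camera calibration."""
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1])

def passes_length_test(p0, p1, min_length_m=3.0):
    """Keep the candidate only if it is at least min_length_m long."""
    return marker_length_m(p0, p1) >= min_length_m
```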
A second false alarm mitigation analysis test may be applied in addition to or as an alternative to the false alarm mitigation analysis test described above as shown in
In block 25, lane marker detection is applied to the output device as shown in
For each main orientation angle α, in step 32, the normalized input image is convolved with the oriented edge detection filter for detecting the edge responses of lane markers with orientation close to the angle α captured in the input image. In step 33, the normalized input data is convolved with the oriented line detection filter for detecting the line responses of lane markers with orientation close to the angle α captured in the input image. The filtering using the oriented edge and line detection filters is performed simultaneously.
In step 34, candidate lane markers are detected from the filter responses. The candidate lane markers are detected from the cooperative filtering of the line of the lane markers and the edges of the lane markers. For each main orientation angle α, the oriented edge detection filter and the oriented line detection filter with angle α are used to obtain candidate points that lie on the desired line with an angle close to α. The selected candidate points have line filter responses greater than a predetermined threshold, as well as edge filter responses greater than a positive predetermined threshold and less than a negative predetermined threshold at a predetermined distance (e.g., a few pixels) along the direction perpendicular to α in the normalized image. In step 35, a Hough transform technique is applied to the candidate lane markers detected in the line-based and edge-based filter responses. The Hough transform technique extracts the identified lane markers and eliminates any outliers in the filter responses.
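The thresholding rule of step 34 can be sketched for the vertical orientation as follows (a minimal illustration; the threshold values, the two-pixel offset, and the use of np.roll, which wraps at the image border and would need masking in practice, are assumptions):

```python
import numpy as np

def select_candidates(line_resp, edge_resp, line_th, edge_th, offset):
    """Vertical-orientation candidate test: keep a pixel when its line response
    exceeds line_th, the edge response `offset` pixels to its left (the
    direction perpendicular to alpha) is above edge_th, and the edge response
    `offset` pixels to its right is below -edge_th."""
    left = np.roll(edge_resp, offset, axis=1) > edge_th     # edge_resp[r, c - offset]
    right = np.roll(edge_resp, -offset, axis=1) < -edge_th  # edge_resp[r, c + offset]
    return (line_resp > line_th) & left & right

# Toy responses: a marker body at column 5 with edges at columns 3 and 7.
line_resp = np.zeros((4, 11)); line_resp[:, 5] = 1.0
edge_resp = np.zeros((4, 11)); edge_resp[:, 3] = 1.0; edge_resp[:, 7] = -1.0
mask = select_candidates(line_resp, edge_resp, 0.5, 0.5, 2)
```

Only pixels satisfying all three conditions at once survive, which is what lets the cooperative test reject shadows that produce an edge without a line, or bright blobs that produce a line without the paired positive/negative edges.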
In step 36, a false alarm mitigation analysis is applied to the identified lane markers extracted from the Hough transform technique. In step 37, a determination is made whether a false alarm mitigation analysis is satisfied. If the false alarm mitigation analysis is satisfied, then the location of the lane markers is identified in an output device in step 38.
In step 37, if the determination is made that the false alarm mitigation analysis is not satisfied, then a determination is made that at least one of the identified lane markers is not a lane marker, and the routine proceeds to step 39 where the routine ends.
In step 40, a length (l) of each identified lane marker is determined in world coordinates through camera calibration with respect to the ground plane. In step 41, the determined length (l) is compared to a predetermined length. In step 42, a determination is made whether the length (l) is greater than the predetermined length. If the length (l) is smaller than the predetermined length, then the routine proceeds to step 39 where the routine is terminated. If the length (l) is greater than the predetermined length, then the routine proceeds to step 38.
In step 38, lane markers of the road are highlighted in the output device such as an image display device for visually enhancing the location of the lane marker to the driver of the vehicle. Alternatively, the location of the lane markers may be provided to an output device such as an autonomous steering module for autonomously maintaining the vehicle position between the lane markers.
In step 50, a distance (d) is determined between a pair of parallel lane markers in world coordinates through camera calibration with respect to the ground plane. In step 51, the distance (d) is compared to a predetermined distance. In step 52, a determination is made whether the distance (d) is greater than the predetermined distance. If the distance (d) is less than the predetermined distance, then the routine proceeds to step 39 where the routine is terminated. If the distance (d) is greater than the predetermined distance, then the routine proceeds to step 38.
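For two parallel lines with the same Hough angle θ, the perpendicular distance between them is simply the difference of their ρ parameters, so the test of steps 50 through 52 can be sketched as follows (the 2.5 m threshold and the function name are illustrative assumptions; the embodiment leaves the predetermined distance unspecified):

```python
def passes_width_test(rho1_m, rho2_m, min_width_m=2.5):
    """Second false alarm test: for a pair of parallel lane-marker lines
    (equal theta), |rho1 - rho2| is their perpendicular distance in world
    coordinates; keep the pair only if it exceeds min_width_m."""
    return abs(rho1_m - rho2_m) > min_width_m
```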
In step 38, lane markers of the road are highlighted in the image display device for visually enhancing the location of the lane marker to the driver of the vehicle. Alternatively, the location of the lane markers may be provided to an autonomous steering module for autonomously maintaining the vehicle position between the lane markers.
While certain embodiments of the present invention have been described in detail, those familiar with the art to which this invention relates will recognize various alternative designs and embodiments for practicing the invention as defined by the following claims.
Number | Name | Date | Kind |
---|---|---|---|
5675489 | Pomerleau | Oct 1997 | A |
6819779 | Nichani | Nov 2004 | B1 |
7151996 | Stein | Dec 2006 | B2 |
7876926 | Schwartz et al. | Jan 2011 | B2 |
20020159622 | Schneider et al. | Oct 2002 | A1 |
20040042638 | Iwano | Mar 2004 | A1 |
20040151356 | Li et al. | Aug 2004 | A1 |
20060210116 | Azuma | Sep 2006 | A1 |
20080109118 | Schwartz et al. | May 2008 | A1 |
20100014713 | Zhang et al. | Jan 2010 | A1 |
20100014714 | Zhang et al. | Jan 2010 | A1 |
Number | Date | Country | |
---|---|---|---|
20100014714 A1 | Jan 2010 | US |