The present invention generally relates to the detection of objects in a traffic scene and more specifically relates to the identification of road signs.
Traffic scenes typically have a large amount of information that a driver has to process. Because drivers are faced with many distractions, they may not pay attention to road signs. Elderly drivers find it especially difficult to read and understand the posted road signs. This may result in hazardous situations that can lead to collisions. To solve this problem, auto manufacturers have used vision systems to automate road sign recognition. However, vision systems are problematic due to the complexity of traffic scenes and the constantly changing traffic environment. The use of vision systems is further complicated by the fact that there are no common standards for road signs in different countries. The signs may also differ from one region to another within the same country.
The recognition process of road signs is typically divided into two phases: the segmentation phase and the recognition phase. In the segmentation phase, the road signs are identified and separated from the rest of the traffic scene. In the recognition phase, the signs are read and classified. The classification usually involves image processing techniques such as optical character recognition (“OCR”) and pattern recognition.
In many instances the segmentation phase is the bottleneck of the recognition process. The most common method used for segmentation is color segmentation, which is problematic because the apparent color of a sign can vary with the time of day and illumination. Other prior art solutions have attempted to identify road signs by their geometric shape, assuming that road signs have standard geometric shapes within a certain region. These solutions are also troublesome because road signs are often partially obstructed by other objects or rotated with respect to the camera used to obtain their images. Because traffic scenes are cluttered with many objects, geometric shape detection proves to be very complex and imposes an increased computational load.
The present invention provides a system that uses the polarization of light to detect road signs. In the present invention, an object identification system includes at least one processor; a light source coupled to the at least one processor and configured to emit light towards a retroreflective object and a non-retroreflective object; a first sensor coupled to the at least one processor, the first sensor configured to detect light having a first polarization orientation; and a second sensor coupled to the at least one processor, the second sensor configured to detect light having a second polarization orientation substantially orthogonal to the first polarization orientation.
In another form of the present invention, the object identification system includes at least one processor; a light source coupled to the at least one processor, the light source configured to emit light towards a retroreflective object and a non-retroreflective object; a sensor coupled to the at least one processor, the sensor configured to detect light reflected by the retroreflective object and light reflected by the non-retroreflective object.
In yet another form of the present invention, the object identification system includes at least one processor; a light source coupled to the at least one processor and configured to emit light towards a retroreflective object and a non-retroreflective object; a light detection device coupled to the at least one processor, the light detection device including a light splitting means configured to divide light having a first polarization orientation from light having a second polarization orientation substantially orthogonal to the first polarization orientation; and a first sensor and a second sensor coupled to the light splitting means, the first sensor configured to detect light having the first polarization orientation and the second sensor configured to detect light having the second polarization orientation.
In still another form of the present invention, the object identification system includes a light source configured to emit light towards a retroreflective object and a non-retroreflective object, the light source including polarizing means configured to polarize the emitted light in a first polarization orientation; a first sensor including a first sensor filter means configured to filter light to the first sensor having the first polarization orientation; a second sensor including a second sensor filter means configured to filter light to the second sensor having a second polarization orientation substantially orthogonal to the first polarization orientation; and at least one processor coupled to each of the light source, the first sensor and the second sensor, the at least one processor including memory storing software capable of being executed by the at least one processor to carry out the steps of instructing the first sensor to detect a first image and the second sensor to detect a second image, the first and second images having corresponding pixels that form regions when aligned; performing at least one image extraction technique to extract the regions having a predetermined phase and partial polarization; comparing the extracted regions to known characteristics of retroreflective objects; and performing at least one image processing technique to read the text on the retroreflective object.
In another form of the present invention, a method of detecting an object is provided, the method including the steps of emitting polarized light towards a retroreflective object and a non-retroreflective object; filtering light reflected by the retroreflective object and the non-retroreflective object; and detecting the reflected light having the same polarization orientation as the emitted light.
The above-mentioned and other features of this invention, and the manner of attaining them, will become more apparent and the invention itself will be better understood by reference to the following description of embodiments of the invention taken in conjunction with the accompanying drawings, wherein:
Corresponding reference characters indicate corresponding parts throughout the several views. Although the drawings represent embodiments of the present invention, the drawings are not necessarily to scale and certain features may be exaggerated in order to better illustrate and explain the present invention.
The embodiments disclosed below are not intended to be exhaustive or limit the invention to the precise forms disclosed in the following detailed description. Rather, the embodiments are chosen and described so that others skilled in the art may utilize their teachings.
Light available in most environments is partially polarized. As illustrated in the accompanying drawings, partially polarized light may be regarded as a combination of an unpolarized component and a linearly polarized component 22, and may be analyzed by passing the light through a linear polarizer 20.
The polarization state of partially polarized light may be described by its phase and partial polarization. The phase of the polarization is defined as the orientation of linearly polarized component 22 relative to a reference position, e.g., the orientation Imax of maximum transmitted intensity relative to the zero (0) degree position of polarizer 20. The partial polarization ratio provides a measure of the degree of polarization. To estimate the phase and partial polarization, three polarization orientation measurements are needed: zero (0) degrees, forty-five (45) degrees and ninety (90) degrees. Using these polarizer orientations, the phase (theta) and partial polarization can be calculated as follows:
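The equations themselves are not reproduced in the text above. The following display, written in standard polarimetric form with I_0, I_45 and I_90 denoting the intensities measured behind the polarizer at the zero (0), forty-five (45) and ninety (90) degree orientations, is offered as an illustrative reconstruction rather than a verbatim restatement of the original equations:

```latex
\theta = \frac{1}{2}\arctan\!\left(\frac{2I_{45} - I_{0} - I_{90}}{I_{0} - I_{90}}\right),
\qquad
p = \frac{\sqrt{\left(I_{0} - I_{90}\right)^{2} + \left(2I_{45} - I_{0} - I_{90}\right)^{2}}}{I_{0} + I_{90}}
```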
In many applications, such as road sign detection, accurate estimation of the polarization state is not necessary. When differentiating between two orthogonal polarization states, two crossed polarizer orientations of zero (0) degrees and ninety (90) degrees are sufficient. The phase and the partial polarization can be approximated by using the following equations:
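As above, the approximations are not reproduced in the text; a two-orientation form consistent with the crossed zero (0) and ninety (90) degree polarizers described, offered as an illustrative reconstruction, is:

```latex
\theta \approx
\begin{cases}
0^{\circ}, & I_{0} \ge I_{90} \\
90^{\circ}, & I_{0} < I_{90}
\end{cases}
\qquad
p \approx \frac{\lvert I_{0} - I_{90} \rvert}{I_{0} + I_{90}}
```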
The use of the above equations reduces computational and hardware complexity. Accordingly, the present invention utilizes these equations in identifying objects in a traffic scene.
The use of the term “retroreflective” hereinafter refers to a characteristic of an object that allows the object to reflect incident light back toward its source and to preserve the polarization state of the incident light. This concept is exhibited in the accompanying drawings.
A first embodiment of the object identification system of the present invention, object identification system 100, is shown in the accompanying drawings.
Linear polarizing filters 132, 134 (“illumination polarizers”) are attached to light sources 130, 131, respectively, either formed in or added to the transparent covers of light sources 130, 131. Illumination polarizers 132, 134 have the same polarization orientation (e.g., phase = zero (0) degrees, forty-five (45) degrees, ninety (90) degrees, etc.). Polarizers 132, 134 may also be integrated with light sources 130, 131. Other light sources, for example lasers, can emit polarized light without the use of polarizers 132, 134.
Linear polarizing filters 152, 154 (“sensor polarizers”) are respectively attached to image sensors 150, 151. Other embodiments of object identification system 100 may include three or more image sensors and corresponding sensor polarizers. Sensor polarizers 152, 154 pass the component of light with polarization along their orientations to image sensors 150, 151, and image sensors 150, 151 detect the brightness of the polarized light components.
One of sensor polarizers 152, 154 has the same polarization as illumination polarizers 132, 134. For example, if illumination polarizers 132, 134 have a zero (0) degree polarization, then either sensor polarizer 152 or sensor polarizer 154 has a zero (0) degree polarization. The other of sensor polarizers 152, 154 has a polarization orthogonal (i.e., a ninety (90) degree difference) to the polarization of illumination polarizers 132, 134. Returning to the above example, if illumination polarizers 132, 134 and, consequently, sensor polarizer 152 have a zero (0) degree polarization, then sensor polarizer 154 has a ninety (90) degree polarization.
The operation of object identification system 100 is now explained with reference to traffic scene 200, illustrated in the accompanying drawings.
Traffic scene 200 includes various objects, including objects 210, 220, 240. Objects 210, 220 are non-retroreflective and may include stationary and/or mobile objects found at any typical traffic scene, for example, vehicles, trees, pedestrians, light poles, telephone poles, buildings, etc. Objects 210, 220 reflect unpolarized light, illustrated by reflected light 126a, 126b, 126c, 127a, 127b, 127c (represented as dashed lines in the accompanying drawings).
Object 240 is retroreflective, thereby maintaining the polarization orientation of incident light 122, 124 and reflecting light 123, 125 back toward their respective sources. Accordingly, light 123 reflected from object 240 has the same polarization orientation 121 as polarized incident light 122, and reflected light 125 has the same polarization orientation 121 as polarized incident light 124. Sensor polarizer 152 enables reflected light 123 to pass to image sensor 150 because illumination polarizers 132, 134 and sensor polarizer 152 have zero (0) degree polarization orientations. The intensity of reflected light 123 captured by image sensor 150 is greater than the intensity of reflected light 125 captured by image sensor 151 because sensor polarizer 154 has a ninety (90) degree polarization. The orthogonal relationship between sensor polarizer 152 and sensor polarizer 154 provides the maximum discrimination because light that remains polarized in a given orientation passes minimally through a polarizer oriented orthogonally to that orientation.
After respective image sensors 150, 151 detect reflected light 123, 125, 126b, 126c, 127b, 127c, each of image sensors 150, 151 creates an image of scene 200 using known imaging techniques. Using the phase and partial polarization equations detailed above, processor 160 calculates the phase and partial polarization of reflected light 123, 125 on a pixel-by-pixel basis. More specifically, processor 160 aligns the two images and calculates the phase and partial polarization for each pair of corresponding pixel elements. Processor 160 then uses known image processing segmentation techniques (e.g., thresholding, edge-finding, blob analysis, etc.) to extract regions of pixels that meet predetermined phase and partial polarization requirements.
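For illustration only, a minimal sketch of this pixel-by-pixel computation and threshold-based segmentation is given below, assuming the two-orientation approximation shown earlier. The use of NumPy/SciPy, the array names, and the threshold value are assumptions and are not part of the original disclosure.

```python
# Illustrative sketch only; not part of the original disclosure.
import numpy as np
from scipy import ndimage

def segment_retroreflective_regions(img_parallel, img_cross,
                                    min_partial_polarization=0.5):
    """img_parallel: aligned image taken behind the polarizer matching the
    illumination (e.g., image sensor 150); img_cross: aligned image taken
    behind the orthogonal polarizer (e.g., image sensor 151)."""
    i0 = img_parallel.astype(np.float64)
    i90 = img_cross.astype(np.float64)
    total = i0 + i90
    total[total == 0] = 1e-9                 # guard against division by zero

    # Two-orientation approximation of the polarization state (see above).
    partial_polarization = np.abs(i0 - i90) / total
    phase = np.where(i0 >= i90, 0.0, 90.0)   # degrees

    # Candidate retroreflective pixels: strongly polarized, with phase
    # matching the zero (0) degree illumination polarization of this example.
    mask = (partial_polarization >= min_partial_polarization) & (phase == 0.0)

    # Simple blob analysis: label connected regions of candidate pixels.
    labels, num_regions = ndimage.label(mask)
    return labels, num_regions, partial_polarization
```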
The detection step is followed by the recognition step. After extracting the regions, processor 160 compares the extracted regions against predetermined features of the object that system 100 is being used to identify. Such features may include minimum-maximum size, shape and aspect ratio. If object 240 is determined to be within the tolerance levels of the predefined features, then object 240 is detected as being a strong candidate for the object that system 100 is being used to identify. While this embodiment describes the use of two image sensors 150, 151 and two corresponding sensor polarizers 152, 154, other embodiments of the present invention may include three image sensors and three sensor polarizers.
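A minimal sketch of this feature-screening step is given below. The particular size and aspect-ratio limits are hypothetical placeholders, not values taken from the disclosure.

```python
# Illustrative sketch only; feature limits are hypothetical placeholders.
from scipy import ndimage

def filter_candidate_regions(labels, min_area=200, max_area=20000,
                             min_aspect=0.5, max_aspect=2.0):
    """Keep labeled regions whose bounding-box size and aspect ratio fall
    within the predetermined tolerances."""
    candidates = []
    for region_slice in ndimage.find_objects(labels):
        if region_slice is None:          # label number unused
            continue
        rows, cols = region_slice
        height = rows.stop - rows.start
        width = cols.stop - cols.start
        area = height * width
        aspect = width / float(height) if height else 0.0
        if min_area <= area <= max_area and min_aspect <= aspect <= max_aspect:
            candidates.append(region_slice)
    return candidates
```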
In an exemplary embodiment of the present invention, object 240 is a traffic road sign. Road signs are typically coated with known retroreflective materials such as paint or tape. In other embodiments of the invention, object 240 includes any retroreflective object found in a traffic scene, for example, markers on side guard rails, lane markings such as “Botts' dots” or “cat's eyes,” and construction barrels and barricades.
One specific use of object identification system 100 is in a vehicle to detect and read a retroreflective speed limit sign. As described above, system 100 first uses polarization sensing to detect the speed limit sign in a traffic scene. Processor 160 compares the features of the detected speed limit sign to those of standard speed limit signs and filters out regions not containing the predetermined features of standard speed limit signs. Processor 160 next executes software that instructs processor 160 to use an OCR technique to read the text string(s) on the speed limit sign, to extract the numerals in the text string, and to determine the speed limit posted on the sign. Example OCR techniques suitable for use with the present invention include, but are not limited to, spatial template matching, contour detection, neural networks, fuzzy logic and Fourier transforms. Processor 160 may then compare the posted speed limit to the vehicle speed taken from speedometer 164 and generate a warning to the vehicle's driver if the vehicle's speed exceeds the speed limit.
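A minimal sketch of this final reading and comparison step is given below. The ocr_read routine stands in for whichever OCR technique is used (template matching, neural network, etc.), and the function name, units and warning format are assumptions for illustration.

```python
# Illustrative sketch only; ocr_read is a stand-in for any OCR technique.
import re

def check_speed_limit(sign_image, vehicle_speed, ocr_read):
    """Read the numeric speed limit from the detected sign and compare it
    with the speed reported by the vehicle's speedometer."""
    text = ocr_read(sign_image)           # e.g., "SPEED LIMIT 55"
    digits = re.findall(r"\d+", text)
    if not digits:
        return None                       # no readable speed limit found
    speed_limit = int(digits[-1])         # last numeral read on the sign
    if vehicle_speed > speed_limit:
        return f"Warning: vehicle speed {vehicle_speed} exceeds posted limit {speed_limit}"
    return None
```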
Additional embodiments of the object identification system are shown in the accompanying drawings.
In object identification system 300, one of sensor polarizers 352, 354 has the same polarization orientation as illumination polarizer 332. The other of sensor polarizers 352, 354 has a polarization orientation orthogonal to the polarization of illumination polarizer 332 so as to provide the maximum discrimination between reflected light 323 passed through sensor polarizer 352 and captured by image sensor 350, and reflected light 324 passed through sensor polarizer 354 and captured by image sensor 351. Image sensors 350, 351 may share housing 370.
As shown in the accompanying drawings, another embodiment employs a single lens 460 and a beam splitter 470 that directs light to two image sensors 450, 452.
Light source 430 emits incident light 422, polarized by illumination polarizer 432 and having orientation 121, toward traffic scene 200, and retroreflective object 240 reflects light 423 having the same polarization orientation 121 back in the direction of light source 430. Lens 460 captures reflected light 423 and directs it to beam splitter 470, which passes the component of reflected light 423 having zero (0) degree polarization to image sensor 452 and the component having ninety (90) degree polarization to image sensor 450. Therefore, the maximum discrimination is again provided between the reflected light detected by image sensor 452 and the reflected light detected by image sensor 450. Each of image sensors 450, 452 then creates an image of scene 200 using known imaging techniques, and processor 160 calculates the phase and partial polarization of reflected light 423 on a pixel-by-pixel basis as described above.
In another embodiment of the invention, shown in the accompanying drawings, the object identification system uses a single image sensor.
Another embodiment of the object identification system of the present invention also uses a single image sensor, as shown in the accompanying drawings.
The steps performed by the multiple embodiments of the inventive object identification system are shown in the accompanying drawings.
While this invention has been described as having an exemplary design, the present invention may be further modified within the spirit and scope of this disclosure. This application is therefore intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains.