The present application claims priority from Japanese patent application JP2011-201660 filed on Sep. 15, 2011, the content of which is hereby incorporated by reference into this application.
The present invention relates to a technology for recognizing external worlds by using an image sensor, and particularly to a technology for detecting an object regardless of the distance to the object.
Development of preventive safety systems that prevent accidents is under way in order to reduce casualties caused by traffic accidents. A preventive safety system is a system that operates in a situation in which the possibility of an accident is high. For example, a pre-crash safety system has been put to practical use, which calls a driver's attention by a warning when there is a possibility that the self-vehicle will collide with a vehicle traveling ahead of it, and reduces damage to occupants by applying an automatic brake when the collision cannot be avoided.
As a method of detecting a vehicle in front of the self-vehicle, a method of imaging the area ahead of the self-vehicle with a camera mounted on the vehicle and recognizing a shape pattern of the vehicle, that is, a vehicle pattern, from the captured image has been known. For example, Japanese Patent Application Laid-Open Publication No. 2005-156199 discloses a method of detecting a vehicle by determining the edges of both ends of the vehicle. However, since the appearance of the vehicle differs depending on the distance, high detection precision cannot be achieved by applying the same processing regardless of whether the range is long or short. For example, since resolution deteriorates in the long range, a feature having high discriminability cannot be determined, and as a result, detection precision deteriorates. To address this, methods of changing the processing content depending on the distance or the approach state have been proposed (see Japanese Patent Application Laid-Open Publication Nos. 2007-072665 and H10(1998)-143799).
According to Japanese Patent Application Laid-Open Publication No. 2007-072665, an object candidate that may become an obstacle to traveling is detected by a background subtraction method, and a template defined for each distance is applied to the detected object candidate so as to discriminate what the object is. However, when the object is missed in the initial object candidate detection, the object cannot be discriminated.
According to Japanese Patent Application Laid-Open Publication No. H10(1998)-143799, a template for tracking a vehicle is switched based on the relative velocity of the vehicle detected by a stereo camera so as to improve tracking performance. However, this does not improve initial detection performance.
In view of the above problems, the present invention has been made in an effort to provide a method and a device for recognizing external worlds that more suitably detect an object regardless of the distance, and a vehicle system using the same.
An embodiment of the present invention provides a method for recognizing external worlds performed by an external world recognizing device that analyzes a captured image and detects an object, in which the external world recognizing device sets a first area and a second area for detecting the object in the image, and, when detecting the object in the set second area, detects the object by using both an object pattern and a background pattern of the corresponding object pattern.
Another embodiment of the present invention provides a device for recognizing external worlds that analyzes a captured image and detects an object, including: a processing area setting unit that sets a first area and a second area for detecting the object in the image; and first and second object detecting units that detect the object in the set first area and second area, respectively, wherein the first object detecting unit uses only an object pattern when detecting the object, and the second object detecting unit uses both the object pattern and a background pattern of the corresponding object pattern when detecting the object.
Yet another embodiment of the present invention provides a vehicle system including an external world recognizing device that detects a vehicle by analyzing an image acquired by capturing the vicinity of a self-vehicle. The external world recognizing device includes a processing unit and a storage unit, and the storage unit stores a first classifier and a second classifier. The processing unit sets, in the image, a first area for detecting a vehicle and a second area of a longer range than the first area; detects a vehicle rectangular shape of the vehicle in the first area by determining a vehicle pattern by means of the first classifier; detects the vehicle rectangular shape of the vehicle in the second area by determining the vehicle pattern and a background pattern of the corresponding vehicle pattern by means of the second classifier; corrects the vehicle rectangular shape detected in the second area; and computes a time to collision (TTC) with the self-vehicle based on the vehicle rectangular shape detected by using the first classifier or the vehicle rectangular shape detected and corrected by using the second classifier.
According to the embodiments of the present invention, the object can be detected more appropriately regardless of the distance to the object.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the following description, a vehicle, in particular a vehicle that travels ahead of the self-vehicle, is described as an example of the object to be detected, but the object to be detected is not limited thereto and may be, for example, a pedestrian.
Accordingly, in the device for recognizing external worlds according to each embodiment, a plurality of classifiers are prepared according to distance, and the classifiers are switched so as to improve object detection performance at all distances. Specifically, in the short range, the object is detected by using a classifier based only on the object pattern to be detected, and in the long range, the object is detected by using a classifier based on both the object pattern and the background pattern. The reason is as follows. In the long range, where the object pattern is unclear, concurrently using the background pattern increases the amount of available information and may therefore increase the detection rate. In the short range, where the object pattern is clear, not using the background pattern may decrease erroneous detection. In the device for recognizing external worlds according to each embodiment, classifiers having these different characteristics are defined and switched appropriately between the short range and the long range so as to more suitably detect the object regardless of the distance.
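As a rough illustration of this switching, the following sketch applies an object-only classifier in the short-range area and an object-plus-background classifier in the long-range area. The area layout, window generation, and classifier objects are hypothetical placeholders and not the actual implementation of the embodiments.

```python
# Minimal sketch of distance-dependent classifier switching (hypothetical names).
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

Window = Tuple[int, int, int, int]          # (x, y, w, h) candidate rectangle

@dataclass
class ProcessingArea:
    windows: List[Window]                   # candidate windows scanned in this area
    short_range: bool                       # True: near the self-vehicle, False: far

def detect(areas: Iterable[ProcessingArea],
           clf_object_only: Callable[[Window], int],
           clf_object_plus_background: Callable[[Window], int]) -> List[Window]:
    """Apply the object-only classifier in the short range and the
    object-plus-background classifier in the long range; keep positive windows."""
    detections: List[Window] = []
    for area in areas:
        clf = clf_object_only if area.short_range else clf_object_plus_background
        detections.extend(w for w in area.windows if clf(w) > 0)  # +1: object, -1: not
    return detections
```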
In each embodiment described below, as the objects 11 and 12, four-wheeled vehicles that travel ahead are described as an example, but the objects are not limited thereto. For example, a two-wheeled vehicle or a pedestrian may also be suitably detected by the same module.
The first classifier 203, which is applied to an image part area x, may be represented as in Equation 1.
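Equation 1 itself does not survive in this text; the following is a plausible reconstruction in LaTeX, assuming the standard weighted-voting (boosted) form that the explanation below describes.

$$H_1(x) = \operatorname{sign}\!\left(\sum_{t=1}^{T} \alpha_t\, h_t(x)\right) \quad \text{(Equation 1)}$$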
Herein, x represents the image part area 402, H1(x) represents the first classifier, ht(x) represents a weak classifier, and αt represents the weight coefficient of the weak classifier ht(x). That is, the first classifier 203 is configured by weighted voting of T weak classifiers. Sign( ) is the sign function: when the value in the parentheses on the right side is positive, +1 is returned, and when the value is negative, −1 is returned. The weak classifier ht(x) in the parentheses on the right side may be represented as in Equation 2.
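Equation 2 is likewise missing from this text; a plausible reconstruction, assuming a simple threshold test on the t-th feature amount as described next:

$$h_t(x) = \begin{cases} +1, & f_t(x) > \theta \\ -1, & \text{otherwise} \end{cases} \quad \text{(Equation 2)}$$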
Herein, ft(x) represents the t-th feature amount and θ represents a threshold. As the feature amount, Haar-like features (differences in average luminance among areas) or histograms of oriented gradients (HoG) features may be used. Other feature amounts may be used, or co-occurrence features in which different feature amounts are combined may be used. For selecting the feature amounts and learning the weight coefficients, a learning method such as adaptive boosting (AdaBoost) or random forest may be used.
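The evaluation described by Equations 1 and 2 can be sketched as follows. This is a minimal illustration that assumes the feature amounts ft(x) have already been computed for a candidate window; it is not the actual classifier of the embodiments.

```python
from typing import List

def weak_classifier(feature_value: float, threshold: float) -> int:
    """Equation 2: +1 if the t-th feature amount exceeds the threshold, otherwise -1."""
    return 1 if feature_value > threshold else -1

def strong_classifier(feature_values: List[float],
                      thresholds: List[float],
                      weights: List[float]) -> int:
    """Equation 1: sign of the weighted vote of the T weak classifiers."""
    vote = sum(a * weak_classifier(f, th)
               for f, th, a in zip(feature_values, thresholds, weights))
    return 1 if vote > 0 else -1    # +1: vehicle pattern, -1: non-vehicle
```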
The time to collision (TTC) computing unit 208 computes the relative distance z to the detected vehicle based on the detected vehicle rectangular shape, and then computes the time to collision with the self-vehicle.
Alternatively, the relative distance z may be acquired as follows by using the focal length f, a vehicle height Hi on the image and a camera installation height Ht.
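The referenced equation is not reproduced here. Assuming the usual flat-road pinhole relation in which the camera installation height Ht projects to Hi pixels on the image at distance z, a plausible form of the omitted equation is:

$$z = \frac{f \cdot H_t}{H_i}$$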
The TTC may be acquired as in the following equation based on the relative distance z and the relative velocity vz (the time derivative of z) acquired as above.
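The equation itself is omitted from this text; given the definitions above, it is presumably the ratio of relative distance to relative velocity (sign convention aside):

$$\mathrm{TTC} = \frac{z}{v_z}$$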
In the first embodiment described above, the following effects can be acquired by detecting the vehicle while switching between the first classifier 203 and the second classifier 206. In the short range area having high resolution, since the image pattern of the vehicle itself can be fully exploited, a high detection rate can be achieved while suppressing erroneous detection. In the long range area having low resolution, the detection rate can be significantly improved by increasing the amount of information by using both the vehicle pattern and the pattern other than the vehicle. In addition, since each area is limited and vehicle detection suitable for that area is performed, the processing load is reduced.
Next, a device for recognizing external worlds according to a second embodiment will be described. Components of the device for recognizing external worlds according to the second embodiment that are the same as those of the first embodiment are designated by the same reference numerals, and a description thereof will be omitted.
A processing area setting method in the processing area setting unit 702 is the same as that in the first embodiment. For example, the bottom position B1 on the image is acquired by assuming that the start point of the short range area is the ND[m] point, and the parameters X1, W1 and H1 indicating the position and the size of the area are prescribed to set the first area 303. Similarly, the bottom position B2 on the image is acquired by assuming that the start point of the long range area is the FD[m] point, and the parameters X2, W2 and H2 indicating the position and the size of the area are prescribed to set the second area 304. Of course, setting the points of the short range and the long range is not limited thereto. Vehicle detection is performed by using the vehicle detector 204 for each processing area acquired as above.
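As a rough illustration of how a bottom position and area parameters might be derived from an assumed distance, the following sketch assumes a flat road, a simple pinhole model, and hypothetical parameter values; it is not the actual setting procedure of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class Area:
    x: int   # left position on the image
    y: int   # top position on the image
    w: int   # width
    h: int   # height

def bottom_row(distance_m: float, focal_px: float, cam_height_m: float,
               horizon_row: int) -> int:
    """Row of the road surface at the given distance, assuming a flat road:
    the ground point at distance z projects focal*Hc/z pixels below the horizon."""
    return int(horizon_row + focal_px * cam_height_m / distance_m)

def make_area(distance_m: float, x: int, w: int, h: int,
              focal_px: float = 1000.0, cam_height_m: float = 1.2,
              horizon_row: int = 240) -> Area:
    """Set an area whose bottom position corresponds to the assumed start distance."""
    b = bottom_row(distance_m, focal_px, cam_height_m, horizon_row)
    return Area(x=x, y=b - h, w=w, h=h)

# Example: a short range area starting at ND = 10 m and a long range area at FD = 40 m.
area_short = make_area(10.0, x=100, w=440, h=200)
area_long = make_area(40.0, x=220, w=200, h=80)
```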
In the second embodiment described above, by setting the processing areas based on a lane detection result, only the area required for traveling is searched, which reduces the calculation amount. By setting the processing areas by using the yaw rate, the vicinity of the predicted course of the self-vehicle, in particular, may be searched preferentially, and as a result, the calculation amount may be reduced.
Hereinafter, as a third embodiment, an embodiment applied to a vehicle system will be described. Components of the device for recognizing external worlds according to this embodiment that are the same as those of the first embodiment are designated by the same reference numerals, and a description thereof will be omitted.
A flow of recognizing external worlds in the CPU 1006 will be described. First, the processing area setting unit 201 sets the first area and the second area in the image input from the camera 1000. The vehicle detector 204 detects the vehicle by using the first classifier 203 stored in the memory 1005 with respect to the image of the first area, and detects the vehicle by using the second classifier 206 stored in the memory 1005 with respect to the image of the second area. The rectangular correction unit 207 performs rectangular correction by using the known background/vehicle ratio. The time to collision (TTC) computing unit 208 then computes the time to collision (TTC).
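The correction and TTC steps of this flow might be sketched as follows. The background/vehicle ratio, the symmetric correction rule, and the numeric values are assumptions for illustration only, not the embodiment's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def correct_rectangle(r: Rect, background_ratio: float = 0.2) -> Rect:
    """Shrink a long-range detection that includes background by a known
    background/vehicle ratio (assumed symmetric here) so that the rectangle
    fits the vehicle itself."""
    dx = int(r.w * background_ratio / 2)
    dy = int(r.h * background_ratio / 2)
    return Rect(r.x + dx, r.y + dy, r.w - 2 * dx, r.h - 2 * dy)

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """TTC = relative distance / relative velocity (meaningful only while closing)."""
    return float("inf") if closing_speed_mps <= 0 else distance_m / closing_speed_mps

# Example: correct a long-range detection, then compute the TTC.
vehicle_rect = correct_rectangle(Rect(300, 180, 60, 48))
ttc = time_to_collision(distance_m=40.0, closing_speed_mps=8.0)   # 5.0 seconds
```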
Lastly, the collision risk computing unit 1007 computes a collision risk based on a predetermined reference by using the time to collision (TTC) computed by the time to collision (TTC) computing unit 208. When the collision risk computing unit 1007 determines that there is a risk, the speaker 1001 outputs a warning by using warning sound or voice. When it is determined that the risk is even higher, the driving controlling device 1002 avoids the collision by applying the brake.
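As an illustration only, a two-level decision of this kind might look as follows; the TTC thresholds are hypothetical and are not values given in the embodiment.

```python
def collision_response(ttc_s: float,
                       warn_threshold_s: float = 3.0,
                       brake_threshold_s: float = 1.5) -> str:
    """Map the computed TTC to an action: warn first, brake when the risk is higher."""
    if ttc_s <= brake_threshold_s:
        return "apply_brake"      # driving controlling device puts on the brake
    if ttc_s <= warn_threshold_s:
        return "warn_driver"      # speaker outputs warning sound or voice
    return "no_action"
```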
In the third embodiment described above, a collision warning system that issues a warning when it is determined that there is a risk may be implemented by computing the time to collision (TTC) by means of the external world recognizing device, thereby supporting the driver's driving. Likewise, a pre-crash safety system that applies the brake when it is determined that the risk is very high may be implemented, thereby supporting the driver's driving and reducing damage in a collision.
The present invention is not limited to the embodiments described above, and various changes can be made without departing from the spirit of the present invention. For example, the embodiments are described in detail in order to make the present invention easy to understand, and the present invention is not necessarily limited to including all of the described components. Further, some of the components of one embodiment can be substituted with components of another embodiment, and components of another embodiment can be added to the components of the one embodiment. Other components can be added, deleted, or substituted with respect to some of the components of each embodiment.
Some or all of the components, functions, processing units, processing modules and the like may be implemented by hardware, for example, by designing them as integrated circuits. The case in which some or all of them are implemented by software that implements each component, each function and the like has been primarily described; information including programs, data, files and the like that implement each function may be stored not only in the memory but also in recording devices such as a hard disk or a solid state drive (SSD), or in recording media such as an IC card, an SD card or a DVD, and, when needed, the information may be downloaded and installed through a wireless network.