The invention relates to a method for operating a driver assistance device in a motor vehicle. An image of an environmental region of the motor vehicle is captured by means of a camera of the driver assistance device. In addition, sensor data relating to the environmental region is captured by means of a sensor different from the camera. The invention also relates to a driver assistance device for performing such a method as well as to a motor vehicle with such a driver assistance device.
Driver assistance devices are already known from the prior art in diverse configurations. On the one hand, camera systems are known which have a plurality of video cameras attached to the motor vehicle, the images of which can be displayed on a display in the motor vehicle. The images of the cameras can also be subjected to image processing, and additional functionalities can be provided based on the images. For example, object identification is effected based on the images such that the camera system can serve as a collision warning system. On the other hand, systems are also known which are formed for measuring distances between the motor vehicle and obstacles located in its environment. These are, for example, ultrasonic sensors, which can be disposed distributed on the front and the rear bumper of the motor vehicle. Each ultrasonic sensor then has its own capturing range, which represents a partial segment of a common capturing range of the entire ultrasonic sensor system. Thus, each ultrasonic sensor measures distances in its own capturing range.
It is also already prior art to combine a camera system with a sensor system in a motor vehicle. Such a sensor fusion is known, for example, from the document GB 2463544 A. There, a plurality of ultrasonic sensors is employed, which are for example attached to a bumper. The environmental region of the motor vehicle detected by the ultrasonic sensors is additionally imaged by means of a camera. A computing device processes the sensor data of the ultrasonic sensors as well as the images of the camera at the same time. On the one hand, the images are displayed on a display in the motor vehicle; on the other hand, it is examined by means of the computing device whether a detected object approaches the motor vehicle. If appropriate, a warning signal is then output.
In the prior art, the sensor fusion is thus effected such that all information from the different sensor systems, namely from the camera on the one hand and the ultrasonic sensors on the other hand, is collected and processed at the same time in a common computing device.
It is the object of the invention to demonstrate a solution as to how, in a method of the initially mentioned kind, the images of the camera on the one hand and the sensor data of the sensor on the other hand can be combined with each other better than in the prior art.
According to the invention, this object is achieved by a method, by a driver assistance device as well as by a motor vehicle having the features according to the respective independent claims. Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
A method according to the invention serves for operating a driver assistance device of a motor vehicle by capturing an image of an environmental region of the motor vehicle by means of a camera of the driver assistance device as well as by capturing sensor data relating to the environmental region by means of a sensor different from the camera, namely for example an ultrasonic sensor. According to the invention, it is provided that an object located in the environmental region is identified in the image by means of an electronic computing device of the driver assistance device, and that the sensor data of the sensor is used for identifying the object in the image.
Thus, the effect according to the invention is achieved in that the computing device identifies the vehicle-external object in the image not, or not only, based on the image data, but (also) based on the sensor data of the at least one sensor. The invention is based on the realization that, with the aid of the detection algorithms known from the prior art, which serve for detecting objects based on images, it is not always possible to identify the object in the captured image. Object identification solely based on the image data is not possible, or only possible to a restricted extent, in particular in a near range of up to about 0.5 m from the motor vehicle. It can occur that an object located in this near range is depicted in the captured images, but cannot be identified solely based on the images. The invention therefore takes the approach of using the sensor data of the sensor for identifying the object in the captured image. For example, this can be configured such that, if the object cannot be identified based on the image data, the same object is identified solely based on the sensor data. By contrast, if the object is identified both based on the sensor data and based on the image, the identification of the object in the image can be effected both depending on the sensor data and depending on the image data. Overall, the sensor fusion is thus improved compared to the prior art, and the accuracy and reliability of the object identification in the image of the camera are increased.
The identification of the object in the image can for example be effected such that at least one region of the object depicted in the image is surrounded by a bounding box. Such an approach of labelling an object identified based on image data in the image by means of a bounding box is already known, for example, from the document JP 2011/119917 A. In this embodiment, however, it is proposed to generate such a bounding box not or not only based on the image data of the camera, but additionally or alternatively also based on the sensor data of the sensor. This embodiment exploits the fact that a sensor operating according to the echo propagation time method has a certain detection range and measures distances only in this detection range. In particular with ultrasonic sensors, this detection range is relatively narrow such that, in the presence of a plurality of sensors, the position of the object relative to the motor vehicle, and therefore also the position of the object in the captured image, can be determined with good accuracy. The bounding box generated based on the sensor data can for example have a width in the image of the camera which corresponds to the width of the detection range of the sensor. Such a camera image with the bounding box can then be used in very different ways: on the one hand, this image can be displayed on a display in the motor vehicle such that the driver is informed about the detected object. On the other hand, this image with the bounding box can also be transmitted to other driver assistance systems in the motor vehicle, which can use the image for providing different functionalities in the motor vehicle. Such a system can for example be a collision warning system, which is able to generate a warning signal for warning the driver based on the image.
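Purely as an illustration of how a bounding box might be derived from the sensor data alone, the following sketch in Python maps an ultrasonic detection to a box in the image. The image size, the pixel span occupied by each detection range, the sensor ids and the distance-to-row mapping are assumptions chosen only for this example; they are not taken from the description.

```python
# Hypothetical sketch: a bounding box generated solely from ultrasonic data.
# All numeric values below are illustrative assumptions.

IMAGE_WIDTH, IMAGE_HEIGHT = 640, 480          # assumed camera image size in pixels
SENSOR_MAX_RANGE_M = 2.2                      # assumed maximum ultrasonic range

# Assumed horizontal pixel span that each sensor's narrow detection range
# occupies in the camera image (sensor id -> (left, right) pixel columns).
SENSOR_IMAGE_SPANS = {1: (40, 190), 2: (190, 340), 3: (340, 490), 4: (490, 600)}

def box_from_sensor(sensor_id, distance_m, assumed_object_height_px=120):
    """Return a bounding box (left, top, right, bottom) in image pixels.

    The width of the box corresponds to the width of the sensor's detection
    range in the image; the lower edge is placed at the image row belonging
    to the measured distance (a simple, assumed ground-plane mapping).
    """
    left, right = SENSOR_IMAGE_SPANS[sensor_id]
    # Nearer objects appear lower in the image; clamp to the assumed sensor range.
    d = min(max(distance_m, 0.0), SENSOR_MAX_RANGE_M)
    bottom = int(IMAGE_HEIGHT - (d / SENSOR_MAX_RANGE_M) * (IMAGE_HEIGHT / 2))
    top = max(0, bottom - assumed_object_height_px)
    return (left, top, right, bottom)

# Example: an object detected at 0.4 m in the detection range of the first sensor.
print(box_from_sensor(1, 0.4))   # -> (40, 316, 190, 436)
```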
In an embodiment, it is provided that the object is identified in the image by means of the computing device both based on the sensor data of the sensor and based on the image of the camera. This is in particular provided if the object identification is possible both based on the sensor data and based on the image data of the camera, thus if the object is located in an overlapping region between the detection range of the sensor and an image analysis range in which the object identification based on the image data is also possible. This embodiment has the advantage that the vehicle-external object can be identified particularly reliably and precisely in the image of the camera. It combines the advantages of the object identification based on the image data on the one hand with the advantages of the object identification based on the sensor data on the other hand, such that the respective disadvantages of the two object identification methods can be avoided.
For example, this can be effected such that a first bounding box is generated in the image based on the sensor data of the sensor, while a second bounding box is generated based on the image of the camera (thus by means of image processing). The two bounding boxes can then be merged into a common bounding box. In this way, the generation of the bounding box in the image of the camera is particularly precise.
Particularly preferably, the identification of the object involves determining a width of the object in the image based on the image of the camera, while the position of a lower end of the object in the image is determined based on the sensor data of the sensor. This embodiment is based on the realization that both the object identification based on the image data and the identification based on the sensor data have their own weak points. In the object identification based on the image data, the exact determination of the lower end of the object in the image is not possible, or only possible to a restricted extent, due to the detection algorithms used (optical flow, ego-motion compensation methods); with these detection algorithms, for example, the feet of pedestrians can only be detected inexactly. On the other hand, the determination of the width of the object in the image based on the sensor data of the sensor is only possible with restricted accuracy. For this reason, it is proposed here to use the image data of the camera for determining the width of the object in the image and to use the sensor data of the sensor for determining the position of a lower end of the object in the image. The respective disadvantages of the two identification methods (based on the image data on the one hand and on the sensor data on the other hand) can therefore be avoided, and the object identification can be effected particularly precisely.
The latter embodiment can for example be realized such that the merging of the two bounding boxes is performed in a very specific manner: for the common bounding box, the width of the second bounding box (based on the camera data) as well as the position of a lower edge of the first bounding box in the image (based on the sensor data) can be adopted. The common bounding box thus has the width of the bounding box generated based on the image data, while the position of the lower edge of the common bounding box corresponds to the position of the lower edge of the bounding box generated based on the sensor data. The common bounding box thus reflects the actual position of the object in the image particularly precisely.
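A minimal sketch of this merging rule might look as follows; the representation of a bounding box as a (left, top, right, bottom) pixel tuple is an assumption, since the description does not prescribe any particular representation.

```python
def merge_boxes(sensor_box, camera_box):
    """Merge the sensor-based and the camera-based bounding boxes.

    Both boxes are (left, top, right, bottom) tuples in image pixels. The
    common box adopts the width (left and right edges) of the camera-based
    box and the position of the lower edge of the sensor-based box; the
    camera-based box height is kept above the new lower edge.
    """
    cam_left, cam_top, cam_right, cam_bottom = camera_box
    _, _, _, sensor_bottom = sensor_box
    height = cam_bottom - cam_top
    return (cam_left, sensor_bottom - height, cam_right, sensor_bottom)

# Example: the camera box supplies the object width, the sensor box its lower end.
print(merge_boxes((60, 300, 170, 420), (50, 310, 180, 430)))  # -> (50, 300, 180, 420)
```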
As already explained, it can occur that the object is within the detection range of the sensor, but outside of an image analysis range in which the identification of the object based on the image data is possible at all. In such a case, the same object is preferably identified in the image solely based on the sensor data of the sensor. The deficiencies of the object identification based on the sensor data are accepted in this embodiment. However, this embodiment allows the vehicle-external object to be identified in the image of the camera even in the absence of an object identification based on the image data.
By contrast, if the object is outside of the detection range of the sensor, the same object is identified in the image solely based on the image of the camera. Thus, if the object cannot be identified based on the sensor data, solely the image data of the camera is used for the object identification. This is in particular the case if the object is relatively far from the motor vehicle, namely at a distance of, for example, greater than 2.2 m. At such a distance, the object can no longer be detected with the aid of the sensor, and the object identification is performed solely based on the image data.
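The resulting case distinction could be sketched roughly as follows; the thresholds of about 0.5 m and 2.2 m are taken from the description above, whereas the function itself and the box representation are only illustrative assumptions (the merge_boxes sketch from above is reused).

```python
ULTRASONIC_MAX_RANGE_M = 2.2   # beyond this, the sensor no longer detects the object
CAMERA_MIN_RANGE_M = 0.5       # below this, image-based identification is unreliable

def identify_object(distance_m, sensor_box, camera_box):
    """Select how the bounding box in the image is generated.

    sensor_box and camera_box are (left, top, right, bottom) tuples or None
    if the respective detection path did not yield a result.
    """
    if sensor_box is None or distance_m > ULTRASONIC_MAX_RANGE_M:
        return camera_box                       # outside the sensor range: image data only
    if camera_box is None or distance_m < CAMERA_MIN_RANGE_M:
        return sensor_box                       # near range: sensor data only
    return merge_boxes(sensor_box, camera_box)  # overlap region: merge both boxes
```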
Preferably, an ultrasonic sensor is used for capturing the sensor data relating to the environmental region of the motor vehicle. Overall, a plurality of ultrasonic sensors can be used, which can be disposed distributed on the front bumper and/or on the rear bumper of the motor vehicle. Each ultrasonic sensor then has its own detection range, and the individual detection ranges can lie next to each other and optionally also overlap. However, the invention is not restricted to an ultrasonic sensor. Other sensors, which are different from the camera, can also be employed. In particular, the at least one sensor is one which operates according to the echo propagation time method, thus a distance sensor in which distances are measured by measuring the propagation time of the transmitted signal.
The invention also relates to a driver assistance device for a motor vehicle, which is formed for performing a method according to the invention.
A motor vehicle according to the invention has a driver assistance device according to the invention.
The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the driver assistance device according to the invention as well as to the motor vehicle according to the invention.
Further features of the invention are apparent from the claims, the figures and the description of figures. All of the features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or else alone.
The invention is now explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.
A motor vehicle 1 illustrated in the figure has a driver assistance device 2, which comprises a plurality of ultrasonic sensors 3 to 6 disposed in the rear region of the motor vehicle 1.
Each ultrasonic sensor 3 to 6 has a detection range 7, 8, 9, 10, in which the respective ultrasonic sensor 3 to 6 can measure distances. If, for example, a vehicle-external object is in the detection range 7 of the ultrasonic sensor 3, the ultrasonic sensor 3 can detect the distance of this object from the motor vehicle 1. The detection ranges 7 to 10 lie closely next to each other and immediately adjoin each other. The detection ranges 7 to 10 thus cover a relatively large environmental region behind the motor vehicle 1, with the individual detection ranges 7 to 10 each representing a partial segment of the environmental region behind the motor vehicle 1. The respective detection ranges 7 to 10 are relatively narrow segments, which lie next to each other in the vehicle transverse direction and are elongated in the vehicle longitudinal direction.
In addition, the driver assistance device 2 has a camera 11, which is disposed in the rear region of the motor vehicle 1 similar to the ultrasonic sensors 3 to 6 and images an environmental region 12 behind the motor vehicle 1. The environmental region 12 imaged with the camera 11 also includes the detection ranges 7 to 10 of the ultrasonic sensors 3 to 6 such that the detection ranges 7 to 10 are within the imaged environmental region 12.
The camera 11 is a video camera, which is able to provide a plurality of frames per second and thus a temporal sequence of images. The camera 11 has a relatively large capturing angle or aperture angle, which can even be in a range of values from 120° to 190°. This angle is bounded by two lines 13, 14 in the figure.
Both the ultrasonic sensors 3 to 6 and the camera 11 are electrically connected to an electronic computing device not illustrated in more detail in the figures, which can for example include a digital signal processor and a memory. Thus, the computing device receives the sensor data of the ultrasonic sensors 3 to 6 on the one hand and also the images—thus the image data—of the camera 11 on the other hand.
An exemplary image 15 of the camera 11, in which the environmental region 12 is depicted, is illustrated in the figures. In the image 15, an object 16 located in the environmental region 12 behind the motor vehicle 1 is depicted.
Solely based on the sensor data of the ultrasonic sensors 3 to 6, a first bounding box 19, which surrounds at least one region of the object 16, can thus be generated in the image 15, as illustrated in the figures.
In this case, the object identification is effected solely based on the sensor data of the ultrasonic sensors 3 to 6; no special image processing of the image 15 is required in order to generate the first bounding box 19.
This type of object identification, which is performed solely based on the sensor data of the ultrasonic sensors 3 to 6, is for example provided if the computing device is not capable of identifying the object 16 in the image 15 based on image processing of the image data alone, due to the small distance of the object 16. If the optical object identification does not provide any results, the object 16 in the image 15 is thus detected, as described above, solely based on the sensor data of the ultrasonic sensors 3 to 6, as shown in the figures.
By contrast, if the detection of the object 16 based on the camera data is possible, the image processing algorithms known from the prior art, which serve for detecting the object 16 in the image 15, can also be used. Such detection algorithms also provide a bounding box 20 (second bounding box), as illustrated in the figures.
If the object identification based on the sensor data of the ultrasonic sensors 3 to 6 is not possible, the detection of the object 16 is effected solely based on the image 15, thus solely based on the camera data. The result of this object identification is illustrated in the figures.
It can also occur that the object 16 in the image 15 can be identified both based on the sensor data of the ultrasonic sensors 3 to 6 and based on the image data of the camera 11. In this case, as illustrated in the figures, the two bounding boxes 19, 20 are merged into a common bounding box 21, which adopts the width of the second bounding box 20 and the position of the lower edge of the first bounding box 19.
As already explained, different situations can occur: the object 16 can be identified solely based on the sensor data of the ultrasonic sensors 3 to 6, solely based on the image data of the camera 11, or based on both. Usually, which situation applies depends on the distance at which the object 16 is located from the motor vehicle 1: in a near range of up to about 0.5 m, the identification is effected solely based on the sensor data; at distances greater than about 2.2 m, solely based on the image data; in the region in between, both the sensor data and the image data are used.
Thus, at the end, an image 15 with a bounding box 19, 20 or 21 is available. This image 15 can now be displayed on a display. Additionally or alternatively, this image 15 can also be further processed in order to be able to provide further functionalities in the motor vehicle 1, namely for example the functionality of warning the driver.
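Putting the pieces together, one processing cycle of this embodiment could look roughly like the following sketch. The functions detect_object_in_image, draw_box and show_on_display are placeholders for components the description does not detail, and the earlier sketches box_from_sensor and identify_object are reused.

```python
def detect_object_in_image(image):
    """Placeholder for a camera-based detector (e.g. optical flow); returns a
    (left, top, right, bottom) box or None when no object can be identified."""
    return None

def draw_box(image, box):
    """Placeholder: overlay the bounding box on the image."""
    pass

def show_on_display(image):
    """Placeholder: show the image on the display and/or forward it, e.g. to a
    collision warning system."""
    pass

def process_cycle(image, ultrasonic_readings):
    """One cycle: identify the object and output the image with its bounding box.

    ultrasonic_readings maps a sensor id to the measured distance in metres,
    or to None when that sensor did not detect anything.
    """
    camera_box = detect_object_in_image(image)

    # Take the nearest ultrasonic echo, if any, and the sensor that produced it.
    sensor_box, distance_m = None, float("inf")
    for sensor_id, d in ultrasonic_readings.items():
        if d is not None and d < distance_m:
            sensor_box, distance_m = box_from_sensor(sensor_id, d), d

    box = identify_object(distance_m, sensor_box, camera_box)
    if box is not None:
        draw_box(image, box)
    show_on_display(image)
    return box
```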
Number | Date | Country | Kind
--- | --- | --- | ---
10 2012 001 554.2 | Jan 2012 | DE | national

Filing Document | Filing Date | Country | Kind | 371c Date
--- | --- | --- | --- | ---
PCT/EP2013/050847 | 1/17/2013 | WO | 00 | 7/7/2014