The invention relates to a method for detecting raindrops on a windscreen of a vehicle, in which at least one image is captured by a camera. Moreover, the invention relates to a camera assembly for detecting raindrops on a windscreen of a vehicle.
For motor vehicles, several driving assistance systems are known which use images captured by a single camera or by several cameras. The images obtained can be processed to allow a display on screens, for example at the dashboard, or they may be projected on the windscreen, in particular to alert the driver in case of danger or simply to improve his visibility. The images can also be utilized to detect raindrops or fog on the windscreen of the vehicle. Such raindrop or fog detection can participate in the automatic triggering of functional units of the vehicle. For example, the driver can be alerted, a braking assistance system can be activated, windscreen wipers can be turned on and/or headlights can be switched on if rain is detected.
U.S. Pat. No. 7,247,838 B2 describes a rain detection device comprising a camera and an image processor, wherein filters are used to divide an image processing area of an image captured by the camera into two parts. The upper two thirds of the screen are dedicated to an adaptive front lighting system and the lower third to raindrop detection. Thus, the same camera can be used for different functions.
Detecting raindrops by image processing requires a considerable amount of computation time. This makes it difficult to design a compact camera with the required processing means embedded.
It is therefore the object of the present invention to create a method and a camera assembly for detecting raindrops on a windscreen of a vehicle, which require less computing time.
This object is met by a method with the features of claim 1 and by a camera assembly with the features of claim 10. Advantageous embodiments with convenient further developments of the invention are indicated in the dependent claims.
According to the invention, in a method for detecting raindrops on a windscreen of a vehicle, in which at least one image is captured by a camera, at least one reference object is identified in a first image captured by the camera. The at least one identified reference object is at least partially superimposed to at least one object extracted from a second image captured by the camera. Raindrop detection is then performed within the second image. As an already identified object is superimposed to an object extracted from the second image, there is no need to identify this object in the second image. On the contrary, objects in the second image to which identified objects of the first image have been superimposed are rejected, and no identification effort has to be undertaken for them. This considerably reduces the computing time required to correctly detect raindrops on the windscreen. Also, the eliminated or rejected objects do not cause any false drop detection. In order to superimpose the reference object to a corresponding object extracted from the second image, similarities in size and/or shape may be considered.
Superimposing an identified object to an extracted object in the second image can be readily performed by superimposing at least one reference point from the first image to a reference point in the second image. There does not necessarily need to be complete congruence between the identified object in the first image and the extracted object in the second image. Tolerances may be accepted as long as there is at least a partial match between the identified object and the extracted object to which the identified object is superimposed.
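The following minimal sketch illustrates how such a reference-point based superimposition could look; the object representation (a set of pixel coordinates plus a centroid used as reference point) and the tolerance values are assumptions made for this illustration only, not details taken from the description.

```python
import numpy as np

def superimpose_reference(reference_obj, candidates, max_offset_px=10.0, min_overlap=0.5):
    """Try to superimpose an identified reference object from the first image
    onto one of the objects extracted from the second image; only a partial
    match is required."""
    ref_points = np.asarray(reference_obj["points"], dtype=float)
    ref_centroid = np.asarray(reference_obj["centroid"], dtype=float)
    best_match, best_overlap = None, 0.0
    for cand in candidates:
        # Superimpose the reference point of the first image onto the
        # reference point of the candidate object in the second image.
        offset = np.asarray(cand["centroid"], dtype=float) - ref_centroid
        if np.linalg.norm(offset) > max_offset_px:
            continue
        shifted = set(map(tuple, np.round(ref_points + offset).astype(int)))
        cand_pts = set(map(tuple, np.asarray(cand["points"]).astype(int)))
        # Tolerances are accepted: a partial overlap is enough for a match.
        overlap = len(shifted & cand_pts) / max(len(cand_pts), 1)
        if overlap >= min_overlap and overlap > best_overlap:
            best_match, best_overlap = cand, overlap
    return best_match
```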
A reference object is different from a raindrop; it can in particular be a road marking, a tree beside the road, a curb stone or the like.
In an advantageous embodiment of the invention, raindrop detection is only performed for objects extracted from the second image, which are different from the at least one object to which the identified object is superimposed. This considerably reduces the complexity of raindrop detection in the second image.
In a further advantageous embodiment of the invention, the at least one superimposed object is utilized to delimit a region within the second image to at least one side, wherein the raindrop detection is only performed for objects extracted from that region which are different from the region's limits. With a region smaller than the second image, raindrop detection among the objects extracted from that region requires considerably less processing time than an identification of raindrops within the entire second image.
The at least one superimposed reference object can comprise a substantially linear element. This makes it particularly easy to delimit a region within the second image by the superimposed reference object. Also objects matching the reference objects can thus be readily found in the second image.
The at least one superimposed reference object may comprise in particular a lane marking and/or a road side and/or a road barrier and/or a road curb. Such objects are readily identified within the first image by image processing performed within the context of lane assist driving assistance systems. Also, it can be assumed that there are objects in the second image with the same function for road traffic. Consequently, superimposing such reference objects to objects in the second image can easily be performed based on the objects' similarity. Especially if such linear objects are already identified within another function performed by the camera, it is very useful to utilize the results within the raindrop detection process. Furthermore, eliminating objects which are outside an area corresponding to a driving lane delimited by lane markings drastically reduces the complexity of the identification process. This is due to the fact that objects outside the region delimited by the lane markings are particularly numerous and variously shaped. On the contrary, the driving lane itself is quite homogeneous.
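Purely as an illustration, the following sketch keeps only those extracted objects whose centroid lies between a left and a right lane marking superimposed into the second image and which are not markings themselves; the linear parameterisation x = m·y + b and the field names are assumptions of this example.

```python
def objects_inside_lane(extracted_objects, left_line, right_line):
    """Keep only those extracted objects that lie inside the driving-lane
    region delimited by two superimposed, substantially linear reference
    objects; objects matching the markings themselves are rejected."""
    kept = []
    for obj in extracted_objects:
        x, y = obj["centroid"]
        x_left = left_line[0] * y + left_line[1]     # x = m*y + b, left marking
        x_right = right_line[0] * y + right_line[1]  # right marking
        if x_left < x < x_right and not obj.get("is_lane_marking", False):
            kept.append(obj)                         # candidate for raindrop detection
    return kept
```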
In another preferred embodiment of the invention the first image and the second image are image areas of one image captured by a bifocal camera. Thus the two image areas are images captured simultaneously, and a reference object identified in the first image area can very easily be superimposed to a corresponding object extracted from the second image area.
It has further turned out to be an advantage if the first image is focused at a greater distance from the camera than the second image. This allows reliable raindrop detection to be performed within the second image, while other functions related to driving assistance systems may be performed by processing the first image.
It is particularly useful, if the first image is focused at infinity and the second image is focused on the windscreen. Then for each function, i.e. raindrop detection within the second image and line recognition in the first image, appropriate images or image areas are captured by the camera.
When objects extracted from the second image are classified in order to identify raindrops, a number of classifying descriptors can be utilized for reliable raindrop detection. These objects are different from the objects extracted from the second image, to which the reference object has been superimposed.
Finally, it has turned out to be advantageous if a supervised learning machine is utilized to identify raindrops among objects extracted from the second image. Such a supervised learning machine, for example a support vector machine, is particularly powerful in identifying raindrops. This can be performed by assigning a score or a confidence level to each extracted object, wherein the score or confidence level is indicative of a probability that the extracted object is a raindrop.
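A possible realisation of such a scoring with a support vector machine is sketched below using scikit-learn, which the text does not prescribe; the descriptor vectors and training labels are assumed to be available from the preceding extraction and classification steps.

```python
from sklearn import svm

def train_drop_classifier(descriptors, labels):
    """Train a support vector machine on descriptor vectors of labelled
    training objects (1 = raindrop, 0 = non-drop)."""
    classifier = svm.SVC(kernel="rbf", probability=True)
    classifier.fit(descriptors, labels)
    return classifier

def score_objects(classifier, descriptors):
    """Assign each extracted object a confidence level that it is a raindrop."""
    # predict_proba returns one row [P(non-drop), P(raindrop)] per object.
    return classifier.predict_proba(descriptors)[:, 1]
```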
The camera assembly according to the invention, which is configured for detecting raindrops on a windscreen of a vehicle, comprises a camera for capturing at least one image. It further comprises processing means configured to identify at least one reference object in a first image captured by the camera, to superimpose the at least one identified reference object at least partially to at least one object extracted from a second image captured by the camera, and to perform raindrop detection within the second image. Such a camera assembly is able to perform raindrop detection within a particularly short computing time without requiring excessively powerful processing means. This allows the camera assembly to be particularly compact, which makes it easy to install in the cabin of the vehicle.
The preferred embodiments presented with respect to the method for detecting raindrops and the advantages thereof correspondingly apply to the camera assembly according to the invention and vice versa.
All of the features and feature combinations mentioned in the description above, as well as the features and feature combinations mentioned below in the description of the figures and/or shown in the figures alone, are usable not only in the respectively specified combination, but also in other combinations or else alone without departing from the scope of the invention.
Further advantages, features and details of the invention are apparent from the claims, the following description of preferred embodiments as well as from the drawings. Therein show:
The camera 12 is a bifocal camera which is focused both on the windscreen of the vehicle and at infinity. The camera 12, which may include a CMOS or a CCD image sensor, is configured to view the windscreen of the vehicle and is installed inside a cabin of the vehicle. The windscreen can be wiped with the aid of wiper blades in case the camera assembly 10 detects raindrops on the windscreen. The camera 12 captures images of the windscreen, and through image processing it is determined whether objects on the windscreen are raindrops or not.
For the detection of raindrops on the windscreen the bifocal camera 12 captures an image 14, wherein a lower part 16 or lower image area is focused on the windscreen (see
In step S14 objects are extracted from the lower part 16 of the image 14. In a next step the extracted objects are classified in order to identify raindrops. In this step S16 a confidence level or score is computed for each extracted object, and the confidence level or score is assigned to the object. In a next step S18 objects are selected as raindrops if their score or confidence level is high enough. After determining whether extracted objects are classified as raindrops or non-drops, the quantity of water on the windscreen is estimated in a step S20. According to the quantity of water on the windscreen, an appropriate action is triggered. For instance the windscreen wipers wipe the windscreen in an appropriate manner to remove the raindrops, headlights are switched on, a braking assistance system is activated, or the driver is alerted that rainy conditions are present.
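Purely as an illustration of the sequence of steps S16 to S20, the following sketch scores the objects extracted in step S14, selects drops by a threshold and derives a simple area-based water quantity; the helper names, the threshold value and the area-based estimate are assumptions of this sketch, not values from the description.

```python
def detect_raindrops(extracted_objects, score_fn, score_threshold=0.8, trigger_fn=None):
    """Illustrative sequence of steps S16 to S20."""
    # S16: compute and assign a confidence level / score to each extracted object.
    scored = [(obj, score_fn(obj)) for obj in extracted_objects]
    # S18: select as raindrops the objects whose score is high enough.
    raindrops = [obj for obj, score in scored if score >= score_threshold]
    # S20: estimate the quantity of water on the windscreen, here simply as the
    # summed pixel area of the detected drops, and trigger an appropriate action.
    water_quantity = sum(obj["area_px"] for obj in raindrops)
    if trigger_fn is not None:
        trigger_fn(water_quantity)   # e.g. start wipers, switch on headlights
    return raindrops, water_quantity
```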
This upper part 18 of the image 14 is processed within a lane assist driving assistance system. The image processing of the upper part 18 of the image 14 may also be utilized within a speed limit driving assistance system, additionally or alternatively to driving lane departure functions. Consequently, in a step S24 objects such as lines 20 which delimit a driving lane 22 of a road are identified in the upper part 18 of the image 14. For the lower part 16 of the image 14, the image pre-processing step S12 and the object extraction step S14 (see
In order to delimit the region 24, the lines 20 identified in the upper part 18 of the image 14 are transferred into the lower part 16 of the image 14. As it can be assumed that the lines 20 bordering the driving lane 22 also exist in the lower part 16 of the image 14, the lines 20, or at least part of the lines 20, are superimposed to objects extracted within the lower part 16 of the image 14. These extracted objects in the lower part 16 of the image 14 therefore do not need to be classified or further analyzed, as it is known from the image processing of the upper part 18 that these objects are lane markings which continue in the lower part 16 of the image 14.
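One conceivable way to transfer the lines 20 into the lower part 16 and to reject the objects they superimpose is sketched below; extrapolating each line as x = m·y + b across the border between the two image parts and the pixel tolerance are assumptions made for this illustration.

```python
def reject_lane_marking_objects(lower_objects, upper_lines, tolerance_px=8):
    """Reject objects extracted in the lower image part onto which a line
    identified in the upper image part can be superimposed."""
    remaining = []
    for obj in lower_objects:
        x, y = obj["centroid"]              # centroid in full-image coordinates
        on_a_line = False
        for m, b in upper_lines:            # lines fitted in the upper part
            if abs(x - (m * y + b)) <= tolerance_px:
                on_a_line = True            # the identified line continues here
                break
        if not on_a_line:
            remaining.append(obj)           # only these objects are classified further
    return remaining
```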
The rejection of objects from further processing drastically diminishes the number of objects that need to be labelled in further steps of image processing. Also, the rejected objects do not lead to any false drop detection in the lower part 16 of the image 14. Furthermore, by limiting the region 24 in the lower part 16 of the image 14, fewer objects need to be classified within the lower part 16. For example, the lines 20 on the road itself, the road sides, wheels of vehicles driving close by and other objects outside the region 24 do not need to be classified.
Consequently, in step S28 labels are established for the objects inside the region 24 only. This classification or labelling of the objects within the region 24 is based on a set of descriptors which may describe object shape, intensity, texture and/or context. This classification is the main computing effort within the detection of raindrops. Only the objects inside the region 24, defined by its left and right bordering lines, are analyzed, and the objects corresponding to the superimposed lines 20 are rejected. Thus pre-selecting the region 24 results in fewer objects to be processed.
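The descriptor set itself is not fixed by the description; the sketch below computes a few illustrative shape, intensity and texture descriptors for one extracted object, assuming a grey-level patch and a binary mask of the object are available.

```python
import numpy as np

def compute_descriptors(patch, mask):
    """Compute a small, illustrative descriptor vector for one extracted object.
    'patch' is the grey-level sub-image around the object and 'mask' a boolean
    array marking its pixels."""
    ys, xs = np.nonzero(mask)
    area = mask.sum()
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    aspect_ratio = width / height                       # shape
    fill_ratio = area / (width * height)                # shape / compactness
    mean_intensity = patch[mask].mean()                 # intensity
    intensity_std = patch[mask].std()                   # intensity spread
    # Very simple texture measure: mean absolute horizontal gradient.
    gradient = np.abs(np.diff(patch.astype(float), axis=1)).mean()
    return np.array([aspect_ratio, fill_ratio, mean_intensity,
                     intensity_std, gradient])
```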
As this processing is performed for only a limited number of objects, namely the objects within the region 24, the processing time can be reduced for a given processor 26 of the camera assembly 10 (see
In a next step S30 a selection is performed based on the utilized descriptors. This selection, i.e. the recognition of real drops that need to be distinguished from non-drop objects, is preferably performed by a supervised learning machine such as a support vector machine. Utilizing the characteristics of an object within the region 24 leads to the detection of raindrops 28 within the region 24 (see
The selection process in step S30 results in a list of potential raindrops, wherein a confidence score is indicated for each of the potential raindrops. Thus, in a step S32 objects having a score or confidence level above a threshold value are retained as raindrops 28. With this result the quantity of water is estimated based on the number and the surface of these raindrops 28 within the analyzed area of the image.
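A simple, purely illustrative estimate of the water quantity could relate the summed surface of the retained raindrops 28 to the analyzed area; the coverage thresholds and the returned categories below are assumptions, not values from the description.

```python
def estimate_water_quantity(raindrops, region_area_px):
    """Relate the summed surface of the retained drops 28 to the analyzed area."""
    covered = sum(drop["area_px"] for drop in raindrops)
    coverage = covered / region_area_px
    if coverage > 0.05:
        return coverage, "heavy rain"    # e.g. continuous fast wiping
    if coverage > 0.01:
        return coverage, "light rain"    # e.g. intermittent wiping
    return coverage, "dry"               # no action triggered
```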
By utilizing the output of the image processing of the upper part 18 of the image 14, which is performed for a driving assistance system such as lane departure warning, the detection of raindrops 28 within the region 24 enables a performance enhancement of the camera assembly 10. The reduction of complexity is not only achieved by delimiting the region 24, but also by rejecting objects identified as the lines 20 and other details.
In yet another image 42 captured by the camera 12, objects like wheels 44 of a truck 46, a motorway barrier 48 and the like are eliminated before the raindrop detection analysis of the lower part of the image 42. To achieve this, the continuous line on one side of a driving lane 22 and the discontinuous line on the other side of the driving lane 22 are superimposed to corresponding sections 50 of the lines in the lower part of the image 42. By eliminating a number of objects in the lower part of the image 42, the complexity of the classification of objects is reduced and the computation can be performed more quickly.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---
PCT/EP2011/004506 | 9/7/2011 | WO | 00 | 6/27/2014 |