The present invention relates to an image processing method, in particular for detecting emergency vehicles.
Nowadays, it is known practice to equip a motor vehicle with a driving assistance system, commonly called ADAS (“Advanced Driver Assistance System”). Such a system comprises, as is known, an imaging device such as a camera mounted on the vehicle, which makes it possible to generate a series of images representing the environment of the vehicle. For example, a camera mounted at the rear of the vehicle makes it possible to film the environment behind the vehicle, and in particular following vehicles. These images are then used by a processing unit for the purpose of assisting the driver, for example by detecting obstacles (pedestrians, stopped vehicles, objects on the road, etc.), or else by estimating the time to collision with these obstacles. The information given by the images acquired by the camera therefore has to be reliable and relevant enough to allow the system to assist the driver of the vehicle.
In particular, the majority of international legislation stipulates that a driver must not block the passage of priority vehicles in operation (also called emergency vehicles, such as fire engines, ambulances, police vehicles, etc.) and must make it easier for them to travel. It is thus appropriate for ADAS systems to be able to recognize such priority vehicles, all the more so when their lights (flashing lights) are activated, so as not to obstruct their call-out.
In current ADAS systems comprising a camera filming the front or rear of the vehicle, priority vehicles are detected in the same way as other standard (non-priority) vehicles. These systems generally implement a combination of machine learning approaches with geometric perception approaches. The images from these cameras are processed so as to extract bounding boxes around any type of vehicle (passenger cars, trucks, buses, motorcycles, etc.), including emergency vehicles, meaning that existing ADAS systems do not make it possible to reliably distinguish between a priority following vehicle and a standard following vehicle.
In addition, these systems are subject to problems of partial or total, temporary concealment in the images acquired by the camera, linked to the fact that a priority vehicle is not required to comply with conventional traffic rules and is allowed to zigzag between lanes, reduce safety distances or travel between two lanes, meaning that existing systems are not adapted to such behaviors and circumstances.
Again, it is difficult to detect priority vehicles or emergency vehicles because there is a large variety of types of priority vehicles. Indeed, these vehicles are characterized by their flashing lights, which are either LED-based or bulb-based, which may be either fixed or rotating, which are of different colors and have variable arrangements on the vehicle. For example, some vehicles are equipped with a single flashing light, others are equipped with pairs of flashing lights, yet others are equipped with bars comprising more than two flashing lights, etc. This variability problem makes it all the more difficult for existing ADAS systems to reliably detect priority vehicles.
Detection is made even more difficult in scenes that also contain the front headlights and tail lights of other vehicles, as well as all the other lights present in the environment behind the vehicle, all of which increase the difficulty of detecting the flashing lights of priority vehicles.
An aspect of the present invention therefore proposes an image processing method for quickly and reliably detecting priority vehicles regardless of the type of priority vehicle and regardless of the conditions in which it is traveling, in particular by detecting the lights (flashing lights) of these vehicles.
According to an aspect of the invention, this is achieved by virtue of a method for processing a video stream of images captured by at least one color camera on board a motor vehicle, said images being used by a computer on board said vehicle to detect a priority vehicle located in the environment of the vehicle, the at least one camera being oriented toward the rear of the vehicle, said method being characterized in that it comprises the following steps:
The method according to an aspect of the invention thus makes it possible to reliably detect the flashing lights of an emergency vehicle regardless of the brightness and weather conditions, and to do so up to a distance of 150 meters.
According to one exemplary embodiment, in the segmentation step, predefined segmentation thresholds are used so as to segment the luminous zones according to four categories:
According to one embodiment, after the segmentation step, the method furthermore comprises what is called a post-segmentation filtering step, making it possible to filter the results from the segmentation step, this post-segmentation step being carried out according to predetermined criteria regarding position and/or size and/or color and/or intensity. This filtering step makes it possible to reduce false detections.
According to one exemplary embodiment, the post-segmentation step comprises a dimensional filtering sub-step in which luminous zones located in parts of the image that are far from a horizon line and from a vanishing point and have a size less than a predetermined dimensional threshold are filtered. This step makes it possible to eliminate candidates corresponding to objects perceived by the camera that correspond to measurement noise.
According to one exemplary embodiment, the post-segmentation step comprises a sub-step of filtering luminous zones having a size greater than a predetermined dimensional threshold and a luminous intensity less than a predetermined luminous intensity threshold. This step makes it possible to eliminate candidates that, although they are close to the vehicle, do not have the luminous intensity required to be flashing lights.
According to one exemplary embodiment, the post-segmentation step comprises a positional filtering sub-step in which luminous zones positioned below a horizon line defined on the image of the image sequence are filtered. This step makes it possible to eliminate candidates corresponding to headlights of a following vehicle.
According to one embodiment, the post-segmentation step comprises, for a segmented luminous zone, a sub-step of performing oriented chromatic thresholding-based filtering. This specific filtering makes it possible to filter the colors more precisely. For example, for the color blue, it makes it possible to filter a large number of false positives among luminous zones classified as blue, which arise because the white light emitted by the headlights of following vehicles may be perceived as blue by the camera.
According to one embodiment, the method furthermore comprises a second segmentation step, at the end of the tracking step, for each segmented luminous zone for which no association was found.
According to one exemplary embodiment, the second segmentation step comprises:
This step makes it possible to confirm the luminous zones segmented (detected) in the segmentation step. Indeed, this last check makes it possible to ensure that the false detection was actually a false detection, and that it was not a headlight of a following vehicle, for example.
According to one exemplary embodiment, in the frequency analysis step, a flashing frequency of each segmented luminous zone is compared with a first frequency threshold and with a second frequency threshold greater than the first frequency threshold, both thresholds being predetermined, a segmented luminous zone being filtered if:
According to one exemplary embodiment, the first frequency threshold is equal to 1 Hz and the second frequency threshold is equal to 5 Hz.
According to one exemplary embodiment, the method furthermore comprises a step of performing directional analysis of each segmented luminous zone, making it possible to determine a displacement of said segmented luminous zone.
According to one exemplary embodiment, a segmented luminous zone is filtered if the displacement direction obtained in the directional analysis step makes it possible to conclude as to:
An aspect of the invention also relates to a computer program product comprising instructions for implementing a method comprising:
An aspect of the invention also relates to a vehicle, comprising at least one color camera oriented toward the rear of the vehicle and able to acquire a video stream of images of an environment behind the vehicle and at least one computer, the computer being configured to implement:
Other features, details and advantages will become apparent from reading the following detailed description and from examining the appended drawings, in which:
A priority vehicle is characterized by luminous spots 5, 6, also called flashing lights, emitting blue, red or orange light. These colors are used by priority vehicles in all countries. The priority vehicles may comprise one or more flashing lights, the arrangements of which may vary according to the number of flashing lights with which they are equipped (either a single flashing light, or a pair of separate flashing lights that are spaced apart from one another, or a plurality of flashing lights that are aligned and close to one another), and according to the location of these flashing lights on the body of the priority vehicle (on the roof of the priority vehicle, on the front bumper of the priority vehicle, etc.). There are also many kinds of flashing lights: they may be LED-based or bulb-based.
Flashing lights of priority vehicles are also defined by their flashing nature, alternating between phases of being on and phases of being off.
The method according to an aspect of the invention will now be described with reference to
The method according to an aspect of the invention comprises a step 100 of acquiring an image sequence, comprising for example a first image I1, a second image I2, following the first image I1, and a third image I3 following the second image I2. Such images I1, I2 and I3 are shown in
The method according to an aspect of the invention comprises a step 200 of performing thresholding-based colorimetric segmentation.
This segmentation step 200 makes it possible, using predefined segmentation thresholds, to detect and segment luminous zones ZLi in each image of the selection of images. This colorimetric segmentation is carried out according to four color categories:
The color violet is used in particular to detect certain specific flashing lights with a different chromaticity. For example, bulb-based flashing lights, perceived as being blue to the naked eye, may be perceived as being violet by cameras.
To adapt to the significant variability of flashing lights of priority vehicles, the thresholds used in the segmentation step 200 are extended with respect to thresholds conventionally used for the recognition of traffic lights, for example. These segmentation thresholds are predefined, for each color, for saturation, for luminous intensity, and for chrominance.
The segmentation step 200 gives the position in the image, the intensity and the color of the segmented luminous zones ZLi of the emergency vehicle, when these are on.
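Purely by way of illustration, a minimal sketch of such a thresholding-based colorimetric segmentation is given below. The color space, the threshold values and the function and field names are assumptions made for this sketch and are not taken from the description:

```python
import numpy as np
from scipy import ndimage

# Illustrative per-color thresholds in a YUV-like space: (U_min, U_max, V_min, V_max).
# The actual values used in step 200 are calibrated per camera and are not given here.
COLOR_THRESHOLDS = {
    "blue":   (150, 240, 0,   110),
    "red":    (0,   120, 160, 255),
    "orange": (0,   130, 140, 200),
    "violet": (140, 220, 130, 200),
}
MIN_INTENSITY = 180    # assumed luminance (Y) threshold for a lit flashing light
MIN_SATURATION = 40    # assumed chroma distance from the achromatic point (128, 128)

def segment_luminous_zones(yuv_image):
    """Return candidate luminous zones ZLi as (color, bounding box, mean intensity)."""
    y, u, v = yuv_image[..., 0], yuv_image[..., 1], yuv_image[..., 2]
    saturation = np.hypot(u.astype(float) - 128, v.astype(float) - 128)
    zones = []
    for color, (u_min, u_max, v_min, v_max) in COLOR_THRESHOLDS.items():
        mask = ((y >= MIN_INTENSITY) & (saturation >= MIN_SATURATION) &
                (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max))
        labels, _ = ndimage.label(mask)              # connected components = luminous zones
        for slc in ndimage.find_objects(labels):
            zones.append({"color": color,
                          "bbox": (slc[1].start, slc[0].start, slc[1].stop, slc[0].stop),
                          "intensity": float(y[slc].mean())})
    return zones
```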
As shown in
The extension of the threshold values used for the segmentation step makes it possible to adapt to the variability of flashing lights of priority vehicles in order to be sensitive to a greater range of color shades. However, this extension creates significant noise in the segmentation step 200.
To reduce the number of potential candidates for the detection of flashing lights of priority vehicles, in other words to reduce the number of false positives, the method according to an aspect of the invention comprises a post-segmentation step 210. This post-segmentation step 210 makes it possible to perform filtering based on predetermined criteria regarding position of the luminous zones ZLi in the image under consideration, and/or size of the luminous zones ZLi and/or color of the luminous zones ZLi and/or intensity of the luminous zones ZLi.
With reference to
Moreover, as mentioned previously, luminous zones corresponding to distant lights are located close to the horizon line H and to the vanishing point F of the image I1. Luminous zones located too far from this horizon line H and from this vanishing point F, in particular on the lateral edges of the image I1, are thereby also filtered when they have a size less than the predetermined dimensional threshold. This is therefore the case for the luminous zone ZL4 illustrated in
The post-segmentation step 210 comprises a sub-step 212 of filtering luminous zones having a luminous intensity less than a predetermined luminous intensity threshold, for example a luminous intensity of less than 1000 lux, when these luminous zones ZLi have a size greater than a predetermined threshold, for example a size greater than 40 pixels. Indeed, although the luminous zones filtered in sub-step 212 have a size in the image corresponding to a light close to the vehicle 1 in the rear scene filmed by the camera 2, they do not have the luminous intensity needed to be candidates of interest for being flashing lights of priority vehicles. A segmented luminous zone ZLi of low intensity but close enough to the vehicle 1 to have what is called a large size in the image may for example correspond to a simple reflection of the sun's rays from a support.
The post-segmentation step 210 furthermore comprises a positional filtering sub-step 213 in which luminous zones positioned below the horizon line H are filtered. This is therefore the case for the luminous zones ZL2 and ZL3 illustrated in
The post-segmentation step 210 may also comprise a sub-step 214 of filtering conflicting luminous zones. At least two luminous zones are conflicting when they are close, intersect or if one is contained within the other. When such a conflict is observed, only one of these two luminous zones is retained, the other then being filtered. The luminous zone out of the two conflicting zones that is to be eliminated is determined, in a manner known per se, according to predetermined criteria regarding brightness, size and color of said zone.
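By way of illustration only, the dimensional, intensity and positional filtering sub-steps 211 to 213 could be combined as in the sketch below; the numerical thresholds, the representation of the horizon line and vanishing point, and the field names are assumptions of this sketch:

```python
def post_segmentation_filter(zones, horizon_y, vanishing_x,
                             min_size_px=4, max_dist_px=300,
                             large_size_px=40, min_intensity=1000):
    """Keep only luminous zones plausible as flashing lights (sub-steps 211 to 213).
    The intensity scale and threshold are illustrative (the description cites 1000 lux)."""
    kept = []
    for z in zones:
        x0, y0, x1, y1 = z["bbox"]
        size = (x1 - x0) * (y1 - y0)
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        far_away = abs(cy - horizon_y) > max_dist_px or abs(cx - vanishing_x) > max_dist_px
        # 211: small zones far from the horizon line / vanishing point are measurement noise
        if size < min_size_px and far_away:
            continue
        # 212: large (hence close) zones that are not bright enough are not flashing lights
        if size > large_size_px and z["intensity"] < min_intensity:
            continue
        # 213: zones below the horizon line (larger row index in image coordinates)
        # correspond to headlights of following vehicles
        if cy > horizon_y:
            continue
        kept.append(z)
    return kept
```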
The post-segmentation step 210 comprises, for a segmented luminous zone, a sub-step 215 of performing oriented chromatic thresholding-based filtering.
With reference to
This definition of colors by minimum and maximum values on the U and V axes creates rectangular color blocks that are not representative of reality and therefore not suitable. It is therefore necessary to carry out chromatic filtering for each color. Oriented chromatic thresholding-based filtering is tantamount to adjusting, for each color, the minimum and maximum values on the U and V axes of the color space of the color blocks. The risk of false detections due to the proximity of the color blocks (proximity of the U and V values between the colors in the color space (U,V)) is thus reduced. For example, it is sought, with this oriented chromatic filter, to filter colors of blue B that are too close to the color violet V, colors of red R that are too close to the color orange O, and so on.
This oriented chromatic filtering makes it possible to refine the definition of the colors and, therefore, to filter a large number of false positives in the detection of luminous zones.
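One possible reading of this oriented chromatic thresholding, given purely as a sketch, is to replace the axis-aligned rectangular color blocks with acceptance bands oriented along a reference direction per color in the (U, V) plane; the directions and margins below are illustrative assumptions, not values taken from the description:

```python
import numpy as np

# Illustrative per-color reference directions in the (U, V) plane and band half-widths;
# the actual orientations and margins used in sub-step 215 are not specified here.
COLOR_AXES = {
    "blue":   (np.array([ 0.9, -0.45]), 25.0),
    "red":    (np.array([-0.3,  0.95]), 25.0),
    "orange": (np.array([-0.5,  0.87]), 20.0),
    "violet": (np.array([ 0.6,  0.80]), 20.0),
}

def passes_oriented_chromatic_filter(mean_u, mean_v, color):
    """Accept a zone only if its mean chrominance lies close enough to the oriented color axis."""
    axis, margin = COLOR_AXES[color]
    axis = axis / np.linalg.norm(axis)
    p = np.array([mean_u - 128.0, mean_v - 128.0])    # chrominance relative to the achromatic point
    along = float(p @ axis)                           # component along the color direction
    across = float(np.linalg.norm(p - along * axis))  # distance to the oriented axis
    # Unlike a rectangular threshold, the acceptance region is a band oriented along the
    # color direction, which separates e.g. blue from violet, or red from orange, more finely.
    return along > 0 and across <= margin
```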
Thus, after implementing the abovementioned sub-steps of the post-segmentation step 210, all that are retained as potential candidates for being flashing lights of priority vehicles are luminous zones ZLi that are:
It should be noted that not all of the sub-steps are necessarily implemented in order to carry out the method according to an aspect of the invention; only some of them, alone or in combination, might be retained, on a case-by-case basis, depending on the complexity of the processed image.
Step 300 is a step of tracking each luminous zone ZLi detected in each image. In a manner known to those skilled in the art, an expected position of the luminous zones ZLi segmented in the segmentation step 200 is computed by the computer 3 and is used to ensure that a light detected in an image In indeed corresponds to one and the same segmented luminous zone in a previous image In−1, which might have moved. The expected position of the luminous zones is determined using a prediction. The expected position of the segmented luminous zone in the current image In is computed based on the position of the luminous zone in the previous image In−1 plus a vector corresponding to the displacement of the luminous zone between the image In−2 and the image In−1, taking into account the displacement of the vehicle 1.
Moreover, as will be explained below, priority vehicles are in particular characterized by the flashing frequency of their flashing lights. Now, in order to be able to estimate the flashing frequency of the flashing lights, it is necessary to be able to estimate the evolution of their brightness (alternation of lighting phases and non-lighting phases) over time. The segmentation step 200 gives the position in the image, the intensity and the color of these segmented luminous zones ZLi only when these correspond to phases in which the flashing lights are on. The tracking step 300 makes it possible to associate the flashing lights from one image to another, and also to extrapolate their positions when they are in a non-lighting phase (off).
This tracking step 300, which is known from the prior art, has thresholds adapted to the flashing nature of the flashing lights and makes it possible in particular to associate the luminous zones of the flashing lights from one image to another image, and also to extrapolate their positions when the flashing lights are in an off phase (corresponding to an absence of a corresponding luminous zone in the image).
Each luminous zone ZLi segmented in the segmentation step 200 is associated, in a manner known per se, with a prediction luminous zone ZPi of the same color.
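Purely as an illustration of the prediction and association principle described above, a minimal sketch is given below; the maximum association distance, the ego-motion compensation and the data representation are assumptions of this sketch:

```python
def predict_position(pos_prev, pos_prev2, ego_shift=(0.0, 0.0)):
    """Predict the position of a luminous zone in the current image I_n from its positions
    in I_(n-1) and I_(n-2), compensated by the displacement of the ego vehicle."""
    vx = pos_prev[0] - pos_prev2[0]
    vy = pos_prev[1] - pos_prev2[1]
    return (pos_prev[0] + vx - ego_shift[0], pos_prev[1] + vy - ego_shift[1])

def associate(segmented_zones, predicted_zones, max_dist=30.0):
    """Greedy association of segmented zones with prediction zones ZPi of the same color."""
    associations, used = {}, set()
    for i, z in enumerate(segmented_zones):
        best, best_d = None, max_dist
        for j, p in enumerate(predicted_zones):
            if j in used or p["color"] != z["color"]:
                continue
            d = ((z["center"][0] - p["center"][0]) ** 2 +
                 (z["center"][1] - p["center"][1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            associations[i] = best
            used.add(best)
    return associations  # zones absent from this mapping feed the second segmentation step 310
```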
For each segmented luminous zone ZLi for which no association was found with a prediction luminous zone ZPi in the tracking step 300, a second segmentation step 310 is implemented, at the end of the tracking step 300.
This second segmentation step 310 comprises a first sub-step 311 in which the segmentation thresholds are widened (in other words, the segmentation thresholds are defined so as to be less strict, less filtering), and the segmentation step 200 and the tracking step 300 are repeated for each image of the image sequence, with these new widened segmentation thresholds. This step makes it possible to detect a segmented luminous zone ZLi in a prediction luminous zone ZPi.
If, at the end of this first sub-step 311, still no association is found between the processed segmented luminous zone ZLi and a prediction luminous zone ZPi, a second sub-step 312 is implemented, in which the segmentation thresholds are modified so as to correspond to the color white. Indeed, this last check makes it possible to ensure that the false detection was actually a false detection, and that it was not a headlight of a following vehicle.
This second sub-step 312 makes it possible in particular to detect headlights of following vehicles whose white light may contain the color blue, for example.
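A minimal sketch of this second segmentation step 310 for one unassociated zone is given below, assuming the re-segmented zones (widened thresholds, then thresholds set to the color white) have already been computed; the helper and field names are illustrative:

```python
def boxes_overlap(a, b):
    """True if two bounding boxes (x0, y0, x1, y1) intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def confirm_unmatched_zone(unmatched_bbox, widened_zones, white_zones):
    """Sketch of step 310 for one unassociated zone: widened_zones come from sub-step 311
    (widened segmentation thresholds), white_zones from sub-step 312 (white thresholds)."""
    if any(boxes_overlap(unmatched_bbox, z["bbox"]) for z in widened_zones):
        return "association recovered"              # the flashing light had simply been missed
    if any(boxes_overlap(unmatched_bbox, z["bbox"]) for z in white_zones):
        return "headlight of a following vehicle"   # confirmed false detection
    return "no confirmation"
```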
The method according to an aspect of the invention then comprises a step 400 of performing colorimetric classification of each luminous zone ZLi.
This classification step 400 makes it possible to select the luminous zones ZLi resulting from the segmentation step 200. For each of the colors, a classifier is trained (in a prior, what is called offline, training step) to discriminate positive data (representative of flashing lights to be detected) from negative data (representative of all the noise resulting from the segmentation step 200 that does not correspond to flashing lights and that it is therefore desirable not to detect, such as front headlights or tail lights of vehicles, reflections from the sun, traffic lights, etc.).
If, in the classification step, a segmented luminous zone ZLi is not able to be classified (recognized) by the classifier, then this luminous zone is filtered.
On the other hand, if a luminous zone ZLi is recognized by the classifier, it is retained as being a serious candidate for being a flashing light. At the end of the classification step 400, a list of candidate luminous zones ZCi is obtained, these candidate luminous zones ZCi being characterized by the following parameters:
The flashing status is obtained by detecting the flashing of the flashing lights. This detection consists in:
The confidence index ICC is obtained from the information relating to flashing (flashing state) and to positive classification by the classifier in step 400. According to one embodiment, after the classification step 400, the confidence index of each segmented luminous zone ZLi is updated.
If the classification is positive, the classification confidence index ICC for an image at a time t is updated with respect to a classification confidence index ICC for an image at a time t−1 using the following formula:
Icc(t)=Icc(t−1)+FA [Math 1]
where FA is a predetermined increase factor.
If the classification is negative, the classification confidence index ICC for an image at a time t is updated with respect to a classification confidence index ICC for an image at a time t−1 using the following formula:
Icc(t)=Icc(t−1)−FR [Math 2]
where FR is a predetermined reduction factor.
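By way of illustration, the update of the classification confidence index ICC according to [Math 1] and [Math 2] could be sketched as follows; the values of the factors FA and FR and the optional clamping are assumptions of this sketch:

```python
def update_classification_confidence(icc_prev, classified_positive,
                                     increase_factor=0.2, reduction_factor=0.1):
    """Update of the classification confidence index ICC.
    increase_factor plays the role of FA, reduction_factor the role of FR; the numerical
    values are illustrative, the description only states that they are predetermined."""
    if classified_positive:
        icc = icc_prev + increase_factor      # Icc(t) = Icc(t-1) + FA   [Math 1]
    else:
        icc = icc_prev - reduction_factor     # Icc(t) = Icc(t-1) - FR   [Math 2]
    return max(0.0, min(1.0, icc))            # clamping to [0, 1], an assumption of this sketch
```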
The position and color information is for its part given by the segmentation step 200.
At the end of the classification step 400, in order to determine whether a candidate luminous zone ZCi is likely to be a flashing light of an emergency vehicle, the method comprises a step of performing frequency analysis in order to compute and threshold the flashing frequency of the segmented luminous zone ZLi and a step of computing a time integration of the classifier response. These steps are detailed below.
In a step 500, performing frequency analysis of each segmented luminous zone ZLi makes it possible to determine a flashing or non-flashing nature of the segmented luminous zone ZLi.
Advantageously, prior to this step 500, inconsistencies that the segmented luminous zones ZLi could exhibit are corrected. In particular, this correction is carried out on the color, the size or else the intensity of the segmented luminous zones ZLi. If an excessively great color fluctuation is detected, for example if the segmented luminous zone ZLi changes from red to orange from one image to another, then said segmented luminous zone ZLi is filtered. Likewise, if the size of the segmented luminous zone ZLi varies too much from one image to another (a variation by a factor greater than two, for example), then said segmented luminous zone ZLi is filtered.
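Purely as an illustration of this consistency check, a minimal sketch is given below; the factor of two is the one mentioned above, while the data representation is an assumption of this sketch:

```python
def is_consistent(zone_track, max_size_ratio=2.0):
    """Reject a tracked zone that changes color, or whose size varies by more than the
    given factor, between two consecutive images (consistency check prior to step 500)."""
    for prev, curr in zip(zone_track, zone_track[1:]):
        if prev["color"] != curr["color"]:
            return False
        big, small = max(prev["size"], curr["size"]), min(prev["size"], curr["size"])
        if small > 0 and big / small > max_size_ratio:
            return False
    return True
```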
Based on the detection of the on and off phases of the flashing light, which makes it possible to determine the flashing, a fast Fourier transform (FFT), in a manner known per se, makes it possible to determine the frequency of this flashing.
In the frequency analysis step 500, this flashing frequency of each segmented luminous zone ZLi is compared with a first frequency threshold SF1 and with a second frequency threshold SF2 greater than the first frequency threshold SF1, both thresholds being predetermined. If the flashing frequency is less than the first frequency threshold SF1, then the segmented luminous zone ZLi is considered to not be flashing and therefore to not be a flashing light, and is filtered. If the flashing frequency is greater than the second frequency threshold SF2, then the segmented luminous zone ZLi is also considered to not be a flashing light, and is filtered.
This frequency analysis of the flashing makes it possible to filter segmented luminous zones ZLi for which it is certain that they are constant or that they are flashing too slowly, or on the contrary, that they are flashing too fast, to be flashing lights of priority vehicles. Thus, all that are retained as being candidates of interest are segmented luminous zones ZLi having a flashing frequency between the frequency thresholds SF1 and SF2.
According to one exemplary embodiment, the first frequency threshold SF1 is equal to 1 Hz and the second frequency threshold SF2 is equal to 5 Hz. Moreover, according to one example, the flashing lights fitted to police vehicles and emergency vehicles have a frequency typically between 60 and 240 FPM. FPM (abbreviation for flashes per minute) is a unit of measurement used to quantify the flashing frequency of a flashing light, corresponding to the number of cycles that occur in one minute. A value measured in FPM may be converted to hertz by dividing it by 60. In other words, for such priority vehicles, the flashing frequency is between 1 Hz and 4 Hz.
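By way of illustration, assuming the on/off state of a tracked zone is available over the image sequence at a known frame rate, the frequency estimation and thresholding of step 500 could be sketched as follows; the function names and the guard on short signals are assumptions of this sketch:

```python
import numpy as np

def flashing_frequency(on_off_signal, frame_rate_hz):
    """Estimate the flashing frequency (Hz) of a luminous zone from its on/off states
    over the image sequence, using a fast Fourier transform as in step 500."""
    signal = np.asarray(on_off_signal, dtype=float)
    if signal.size < 4:
        return 0.0                                     # not enough samples to estimate a frequency
    signal = signal - signal.mean()                    # remove the constant (non-flashing) component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / frame_rate_hz)
    return float(freqs[np.argmax(spectrum[1:]) + 1])   # dominant non-zero frequency

def is_plausible_flashing_light(frequency_hz, sf1=1.0, sf2=5.0):
    """Keep only zones whose flashing frequency lies between SF1 and SF2."""
    return sf1 <= frequency_hz <= sf2
```

For example, a zone that is lit in every image of the sequence yields a near-zero dominant frequency and is therefore filtered as a constant light.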
These frequency threshold values SF1 and SF2 make it possible in particular:
This step makes it possible to improve the detection performance (true positives and false positives) in terms of detecting flashing lights of emergency vehicles by fusing the information relating to the segmented luminous zones ZLi. Indeed, by studying the segmented luminous zones ZLi as a whole (in all images of the image sequence) and not individually, it is possible to reduce the false positive rate while maintaining a satisfactory detection rate.
In this step of the method, a set of segmented luminous zones ZLi are detected in the image of the image sequence. These detected segmented luminous zones ZLi are stored in a memory of the computer 3 in the form of a list comprising an identifier associated with each segmented luminous zone ZLi and also parameters of these segmented luminous zones ZLi, such as:
In this step, all of the detected and retained segmented luminous zones ZLi are considered to be potential flashing lights of emergency vehicles. In order to determine the presence or the absence of flashing lights of emergency vehicles in the scene (environment behind the vehicle corresponding to the image sequence acquired by the camera 2), the method finally comprises a step 700 of analyzing the scene, consisting in computing an overall confidence index ICG for each image of the image sequence, for each of the colors red, orange, blue and violet. According to one exemplary embodiment, the overall confidence indices may be grouped by color. For example, the confidence index for the color violet is integrated with the confidence index for the color blue.
For this purpose, an instantaneous confidence index ICI is computed in each image of the image sequence and for each of the colors red, orange, blue and violet.
The instantaneous confidence is computed based on the parameters of the segmented luminous zones ZLi of the current image of the image sequence. Segmented luminous zones ZLi with a flashing state and a sufficient classification confidence index ICC are first taken into account.
To determine the segmented luminous zones ZLi having a classification confidence index ICC sufficient to be taken into account, a state machine in the form of hysteresis is used, as illustrated in
The state of the luminous zone ZLi is initialized (Ei) in the state “OFF”:
Ici(Red(x2))=ΣIcc(Red(x2)) [Math 3]
This instantaneous confidence is then filtered over time in order to obtain an overall confidence index for each image of the image sequence and for each of the colors red, orange, blue and violet, using the following formula:
ICG(t)=(1−α)*ICG(t−1)+α*Ici [Math 4]
The higher the value of the coefficient α, the greater the weight of the instantaneous confidence index Ici in the computing of the overall confidence index ICG.
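A minimal sketch of formulas [Math 3] and [Math 4] is given below; the field names and the criterion used to retain a zone (flashing state and sufficient ICC) are assumptions of this sketch:

```python
def instantaneous_confidence(zones, color):
    """[Math 3]: sum of the classification confidence indices ICC of the retained
    (flashing, sufficiently confident) zones of the given color in the current image."""
    return sum(z["icc"] for z in zones
               if z["color"] == color and z["flashing"] and z["icc_sufficient"])

def update_overall_confidence(icg_prev, ici, alpha):
    """[Math 4]: temporal filtering of the instantaneous confidence index ICI into the
    overall confidence index ICG; alpha weights the current image against the history."""
    return (1.0 - alpha) * icg_prev + alpha * ici
```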
According to one exemplary embodiment, the coefficient α varies according to the parameters of the segmented luminous zones ZLi. For example, the coefficient α is:
According to one exemplary embodiment, the coefficient α varies as a function of the position of the segmented luminous zones ZLi in the image. In particular, the coefficient α is:
According to one exemplary embodiment, the coefficient α varies as a function of the relative position of the segmented luminous zones ZLi with respect to one another in the image. In particular, the coefficient α is increased if multiple segmented luminous zones ZLi are aligned on one and the same line L.
The variations in the coefficient α make it possible to adjust the sensitivity of the method and thus to detect the events more or less quickly depending on the parameters of the segmented luminous zones ZLi.
It is also possible to assign a weight to each segmented luminous zone ZLi of the current image of the image sequence that depends on the other parameters, for example:
This weight makes it possible to speed up the increase in the overall confidence index ICG when multiple segmented luminous zones ZLi have strongly correlated positions, brightnesses and colors. Slowing down the increase in the overall confidence index ICG when the lights are of small size and of lower intensity makes it possible to reduce the false positive rate, although this leads to slower detection of distant emergency vehicles, this being acceptable.
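Purely as an illustration of such a weighting, a minimal sketch is given below; the weighting factors and the size and intensity thresholds are assumptions of this sketch:

```python
def zone_weight(zone, aligned_count, small_size_px=10, low_intensity=0.3):
    """Illustrative weight applied to a segmented zone when accumulating the instantaneous
    confidence: increased when several correlated (aligned) zones are present, reduced for
    small, dim zones so that the overall confidence index ICG rises more slowly for them."""
    weight = 1.0
    if aligned_count >= 2:            # several zones aligned on one and the same line (e.g. a light bar)
        weight *= 1.5
    if zone["size"] < small_size_px and zone["intensity"] < low_intensity:
        weight *= 0.5                 # distant, weak lights contribute less, reducing false positives
    return weight
```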
Step 700 makes it possible to substantially reduce the detected false positive rate while still maintaining satisfactory detection rates for flashing lights of emergency vehicles.
At the end of step 700, it is possible, for the computer 3, to indicate the presence of an emergency vehicle in the environment behind the vehicle 1, corresponding to the image sequence acquired by the camera 2, and in particular to declare that a luminous zone ZLi is a flashing light of an emergency vehicle, for example using a hysteresis threshold that is known per se.
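Purely by way of illustration, such a hysteresis-based declaration on the overall confidence index ICG could be sketched as follows; the threshold values and state names are assumptions of this sketch:

```python
def update_declaration_state(state, icg, on_threshold=0.8, off_threshold=0.4):
    """Hysteresis on the overall confidence index ICG: switch to 'ON' (emergency vehicle
    declared) only above a high threshold, and back to 'OFF' only below a lower one, so
    that the declaration does not oscillate when ICG hovers around a single threshold."""
    if state == "OFF" and icg >= on_threshold:
        return "ON"
    if state == "ON" and icg <= off_threshold:
        return "OFF"
    return state
```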
A state machine in the form of hysteresis is illustrated in
The state of the luminous zone ZLi is initialized (Ei) in the state “OFF”:
The vehicle (in the case of an autonomous vehicle), or the driver of the vehicle 1 otherwise, is then able to take the necessary measures to facilitate and not hinder the movement of said emergency vehicle.
This application is the U.S. National Phase Application of PCT International Application No. PCT/EP2021/078342, filed Oct. 13, 2021, which claims priority to French Patent Application No. 2010472, filed Oct. 13, 2020, the contents of such applications being incorporated by reference herein.