The present invention relates to a method for detecting an obstacle in the area surrounding a motor vehicle.
It also relates to a device for implementing the method.
The invention applies in particular to the field of motor vehicles.
In the field of motor vehicles, a known prior-art method for detecting an obstacle in the area surrounding a motor vehicle comprises the following steps:
One drawback of this prior art is that such detection is difficult to apply to the detection and classification of a pedestrian.
The present invention aims to provide a method for detecting an obstacle in the area surrounding a motor vehicle, which makes it possible to detect not only vehicles but also pedestrians accurately.
According to a first object of the invention, this aim is achieved by a method for detecting an obstacle in the area surrounding a motor vehicle, characterised in that it comprises the following steps:
As will be seen in detail below, combining detection by shape recognition with detection by movement makes it possible to detect and reliably locate an obstacle, and applying to these detections probabilities of belonging to a category, by means of confidence indices, makes it possible to reinforce pedestrian detection.
According to non-limiting embodiments, the detection method may further comprise one or more of the following additional characteristics:
Detection by active sensors (distance sensors) makes it possible to refine the localisation of detected obstacles in an acquired image of the vehicle's environment.
This makes it possible to detect obstacles according to the detection distances at which they are situated in an image. The result of applying the classifiers makes it possible to determine whether an obstacle is situated in a label, and thus to detect it. In the present application, "label" means the detection zone in the image. This detection zone has a certain size and shape; it may of course be given different shapes. The system according to the present invention performs obstacle detection within the limits of this label. According to one variant embodiment, the label represents the obstacle to be detected in that it corresponds approximately to the surface that this type of obstacle will occupy in the image. For example, to detect a pedestrian, one can take a rectangle whose long side is vertical: any pedestrian whose image falls within this rectangle will be detected.
This makes it possible to obtain moving obstacles that do not belong to the background.
This makes it possible to associate with the classified region of interest a confidence index representing the certainty of belonging to a class.
The use of histograms is simple and quick to implement.
The use of a probability map is simple and quick to implement. It makes it possible to accumulate the probabilities over regions likely to represent a pedestrian.
According to a second object of the invention, it relates to a device for detecting an obstacle in the area surrounding a motor vehicle, characterised in that it is able to:
According to a non-limiting embodiment, the detection device is further able to:
According to a third object of the invention, it relates to a computer program product comprising one or more sequences of instructions executable by an information processing unit, the execution of these sequences of instructions enabling the method to be implemented according to any one of the preceding characteristics.
The invention and its various applications will be better understood on reading the following description and examining the accompanying figures.
These are given purely by way of indication and in no way limit the invention.
In all the figures, the common elements bear the same reference numbers.
The method for detecting an obstacle in the area surrounding a motor vehicle according to the invention is described in a first non-limiting embodiment in
It will be noted that the term "motor vehicle" means any type of motorised vehicle.
First Embodiment
According to this first embodiment, the detection method comprises the following steps, as illustrated in
In a non-limiting embodiment, the detection method also comprises an additional step of applying a change of perspective to an acquired image (step CORR(I)).
In a non-limiting embodiment, the detection method also comprises a step of acquiring a sequence SQ of images I. It will be noted that this step may be carried out upstream by another method.
For the remainder of the description, in the non-limiting embodiment of the method described, the method includes these additional steps of image acquisition and change of perspective.
The steps of the method are described in detail below.
In a first step 1), a sequence SQ of images I of the environment E of a vehicle V is acquired.
The acquisition is carried out by means of a camera CAM.
Since image acquisition methods are known to the person skilled in the art, they are not described here.
In a second step 2), a change of perspective is applied to an acquired image I. This makes it possible to compensate for the distortions introduced by the camera CAM, thereby restoring the obstacles O situated at the edge of image I. As illustrated in the diagrammatic example in
In a third step 3), at least one first region of interest ROI1 is defined by carrying out a first detection of an obstacle O by shape recognition in an acquired image I of this environment E.
In one embodiment, the first detection by shape recognition uses a method well known to the person skilled in the art called "AdaBoost", described in the document entitled "An Introduction to Boosting and Leveraging" by Ron Meir and Gunnar Rätsch (Department of Electrical Engineering, Technion, Haifa 32000, Israel / Research School of Information Science & Engineering, The Australian National University, Canberra, ACT 0200, Australia).
This method is based on:
A weight is associated with each strong classifier CLs; it represents a rate of correct detections of an obstacle, relative to the given series of labels VIGN, for several weak classifiers.
A weak classifier CLw consists of a unitary test based on comparing one pixel with another pixel in the same label. A weight is likewise associated with it.
The genetic algorithm makes it possible to compute combinations of weak classifiers CLw which, applied to the reference series of labels VIGN, achieve the correct-detection rate associated with the strong classifiers CLs.
It will be noted that this learning step may be carried out upstream of the detection method described.
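By way of a non-limiting illustration, the weak/strong classifier scheme described above may be sketched as follows; the pixel pairs, weights and label size are assumptions chosen for the example, not values trained by the genetic algorithm of the invention.

```python
import numpy as np

# A weak classifier CLw: a unitary test comparing one pixel of the label
# with another pixel of the same label.
def weak_classifier(label, p1, p2):
    # Returns +1 (obstacle) if pixel p1 is brighter than pixel p2, else -1.
    return 1 if label[p1] > label[p2] else -1

# A strong classifier CLs: a weighted vote of several weak classifiers.
def strong_classifier(label, weak_tests, weights):
    score = sum(w * weak_classifier(label, p1, p2)
                for (p1, p2), w in zip(weak_tests, weights))
    return 1 if score >= 0 else -1

# Illustrative 4x4 label and hand-picked tests (assumptions, not trained).
rng = np.random.default_rng(0)
label = rng.integers(0, 256, size=(4, 4))
weak_tests = [((0, 0), (3, 3)), ((1, 2), (2, 1)), ((0, 3), (3, 0))]
weights = [0.5, 0.3, 0.2]  # one weight per weak classifier

decision = strong_classifier(label, weak_tests, weights)
```

In a real system the pixel pairs and weights would be selected by the learning step described above so as to reach the target correct-detection rate on the reference labels.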
In practice, a reduction scale SR is applied, resulting in a sub-sampled image Isr, as illustrated in
In a non-limiting embodiment, a different reduction scale SR is used at each iteration, as illustrated in
Thus, in non-limiting examples, in
It will be noted that a reduction scale SR is chosen according to the distance at which one wishes to detect an obstacle O in an acquired image I or reduced image Isr.
The non-limiting examples given above make it possible to detect an obstacle O between 0 and 9 metres from the vehicle V considered.
Thus, during the scan, for each position POS of the label in an image Isr, the following sub-steps are carried out, as illustrated in
Sub-steps i) to iii) are repeated for each position POS of a label VIGN in the image Isr.
A set of memorised positions POS is thus obtained, where applicable, for the sub-sampled image Isr.
Thus, a set of memorised positions POS of a label VIGN is obtained in each acquired or sub-sampled image. Each memorised position POS of a label VIGN thus represents a first region of interest ROI1.
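The multi-scale scan described above may be sketched, in a non-limiting illustrative example, as follows; the reduction scales, label dimensions and the mean-intensity test standing in for the classifier are assumptions for the example only.

```python
import numpy as np

def subsample(image, sr):
    # Reduce the image by an integer reduction scale SR (simple decimation).
    return image[::sr, ::sr]

def scan_positions(image, vign_h, vign_w, step=1):
    # Yield every position POS of the VIGN label inside the image.
    h, w = image.shape
    for y in range(0, h - vign_h + 1, step):
        for x in range(0, w - vign_w + 1, step):
            yield (y, x)

# Illustrative classifier stand-in: memorise POS when the mean intensity
# inside the label exceeds a threshold (a real system applies CLs here).
def detect(image, vign_h=8, vign_w=4, thresh=200):
    positions = []
    for (y, x) in scan_positions(image, vign_h, vign_w):
        if image[y:y + vign_h, x:x + vign_w].mean() > thresh:
            positions.append((y, x))
    return positions

img = np.zeros((48, 64))
img[10:30, 20:30] = 255.0        # a bright obstacle-like blob
rois = [detect(subsample(img, sr)) for sr in (1, 2, 4)]  # ROI1 per scale
```

At the coarsest scale the blob no longer fills the fixed-size label, which mirrors the text: each reduction scale is tuned to obstacles at a given distance.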
In
It will be noted that the higher the value of a reduction scale SR, the closer to vehicle V the detected obstacles, in this case pedestrians, are.
Thus, for the first sub-sampled image Isr1, it is the distant obstacles (which fit in the label VIGN) that will be detected, whereas in images Isr5 and Isr6 it is the nearby obstacles (which fit in the label VIGN) that will be detected. In the non-limiting example of
It will be noted that in another embodiment, the sub-sampling sub-step may be alternated with the scanning and analysis sub-step.
In a non-limiting embodiment, the first detection by shape recognition comprises a sub-step of determining a scanning zone Zb in a sub-sampled image Isr. This sub-step is also applied to each of the sub-sampled images Isr2 to Isr6. It makes it possible to reduce the processing time, since one thus avoids scanning places in an image I where one knows that no obstacle O can be detected in a label VIGN because:
Thus, in a non-limiting embodiment, the scanning zone Zb comprises:
As can be seen in
In non-limiting examples:
In a fourth step 4), at least one second region of interest ROI2 is defined by carrying out a second detection of an obstacle O by detecting movement relative to vehicle V over a sequence of acquired images I of this environment E.
First Embodiment
In a first non-limiting embodiment, the second detection (step DET_MVT1(Iref, Ires, Smax, H) illustrated in
In a non-limiting embodiment, this step uses a method well known to the person skilled in the art called the "running average method", described in the document G. Christogiannopoulos, P. B. Birch, R. C. D. Young, C. R. Chatwin, "Segmentation of moving objects from cluttered background scenes using a running average model", SPIE Journal, vol. 5822, pp. 13-20, 2005.
Thus, as illustrated in the non-limiting example of
Iref = α*Ic + (1−α)*Iref.
where α is a learning rate.
In a non-limiting example, α = 0.05.
This learning rate means that 5% of the new image Ic and 95% of the previous image Iref are retained.
In other words, the background evolves according to the movement of the objects (including obstacles) in the image.
It will be noted that the first background image Iref is the first acquired image I in the acquired sequence SQ.
After obtaining the background image Iref, the current image Ic is subtracted from this background image Iref, yielding a resulting image Ires.
One thus has Ires = |Ic − Iref|.
This step is carried out on the set of acquired images I of the image sequence SQ.
In a non-limiting variant, the intensity of each pixel of the resulting image Ires is compared with the threshold Smax. If the intensity is above the threshold Smax, this means that there is movement in the resulting image Ires.
In a non-limiting example, in order to define the intensity threshold Sbr:
In a non-limiting example, the percentage is 20%.
Thus, by binarising the resulting image Ires, the noise BR is removed and second regions of interest ROI2 representative of a moving obstacle O are revealed.
A moving obstacle O is thus distinguished from noise.
It will be noted that the noise BR may be, for example, the shadow on the ground of a tree moving in the wind, or a change of light intensity in the image due to moving clouds.
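The running-average update and the binarisation described above may be sketched, in a non-limiting illustrative example, as follows; the movement threshold value is an assumption for the example (the invention derives its threshold from a noise percentage).

```python
import numpy as np

ALPHA = 0.05   # learning rate: keep 5% of the new image, 95% of the background
SMAX = 30.0    # illustrative movement threshold (assumption)

def update_background(iref, ic, alpha=ALPHA):
    # Iref = alpha * Ic + (1 - alpha) * Iref
    return alpha * ic + (1.0 - alpha) * iref

def moving_regions(ic, iref, smax=SMAX):
    # Ires = |Ic - Iref|, then binarise against the threshold
    ires = np.abs(ic - iref)
    return ires > smax

# Synthetic sequence: static background, then a blob appears in frame 2.
frames = [np.zeros((20, 20)) for _ in range(3)]
frames[2][5:10, 5:10] = 255.0

iref = frames[0]                 # first background image = first acquired image
masks = []
for ic in frames[1:]:
    masks.append(moving_regions(ic, iref))
    iref = update_background(iref, ic)
```

Pixels of the binarised mask that are set correspond to the second regions of interest ROI2 before grouping.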
In a non-limiting embodiment, the second detection DET_MVT1 also comprises the following steps:
a) Grouping the regions of interest ROI2 of each resulting image Ires that overlap one another. Regions of interest ROI2 that overlap actually represent the same obstacle O.
b) Defining rectangles around the second regions of interest ROI2 thus obtained, these rectangles henceforth defining the second regions of interest ROI2.
In a non-limiting embodiment, the rectangle is determined by taking the minimum and maximum extremities of a region of interest ROI2.
In a non-limiting embodiment, regions of interest ROI2 situated above a line characteristic of the horizon HZ (illustrated in
Second Embodiment
In a second non-limiting embodiment, as illustrated in
These sub-steps are described below.
A) Determining points of interest PtI.
As can be seen in the non-limiting diagrammatic example of
B) Tracking the points of interest PtI.
By tracking the points of interest PtI from an image I into a next image I+1, the displacement vectors Vmi of these points of interest PtI between image I and the next image I+1 are deduced.
In a non-limiting embodiment, a method well known to the person skilled in the art, called the "Lucas-Kanade method", is used, described in the document B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision", IJCAI '81, pp. 674-679.
As can be seen in the non-limiting diagrammatic example of
It will be noted that sub-steps A) and B) are described in greater detail in the document "The Computation of Optical Flow", S. S. Beauchemin and J. L. Barron, University of Western Ontario, ACM Computing Surveys, Vol. 27, No. 3, September 1995. Since this "optical flow" method is well known to the person skilled in the art, it is not described in greater detail here.
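A minimal single-window sketch of the Lucas-Kanade estimation of a displacement vector Vmi is given below for illustration; it is a non-limiting toy version (no image pyramid, no iteration), and the synthetic image pair and window size are assumptions for the example.

```python
import numpy as np

def lucas_kanade(i0, i1, y, x, win=7):
    # Estimate the displacement vector Vmi of the point of interest (y, x)
    # between image I (i0) and the next image I+1 (i1), over a small window,
    # by least-squares on the brightness-constancy equation Ix*u + Iy*v = -It.
    h = win // 2
    ix = np.gradient(i0, axis=1)
    iy = np.gradient(i0, axis=0)
    it = i1 - i0
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([ix[sl].ravel(), iy[sl].ravel()], axis=1)
    b = -it[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v  # displacement in x and in y

# Synthetic pair: a smooth Gaussian blob translated by 1 pixel in x.
yy, xx = np.mgrid[0:32, 0:32]
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 3.0 ** 2))
i0, i1 = blob(15, 16), blob(16, 16)
u, v = lucas_kanade(i0, i1, 16, 15)
```

The estimate recovers approximately one pixel of horizontal displacement; a practical implementation would use the pyramidal, iterative form of the method.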
C) Grouping into second regions of interest ROI2.
The points of interest PtI having identical displacement vectors Vmi are thus grouped, that is:
To this end, in a non-limiting embodiment, a method well known to the person skilled in the art called "labelling" is used.
From these groupings, second regions of interest ROI2 are determined. To this end, the end points of the grouped displacement vectors are taken.
In the diagrammatic example of
D) Determining the displacement Vmv of vehicle V.
To determine the displacement of vehicle V, odometric values (wheel speed, rotation speed about the vertical axis) supplied by the vehicle V are used.
The displacement of vehicle V is represented by the displacement of the camera CAM fixed to vehicle V according to
In a non-limiting embodiment, the following equation, which represents the displacement of the camera CAM fixed to vehicle V, is used:
where
Ωx: pitch angle of the vehicle;
Ωy: roll angle of the vehicle;
Ωz: yaw angle (rotation about the vertical axis) of the vehicle;
F: focal length of the camera CAM;
Ty: vertical translation of vehicle V between an image I and a next image I+1;
Tz: longitudinal translation of vehicle V between an image I and a next image I+1;
Tx: lateral translation of vehicle V between an image I and a next image I+1;
Xn, Yn: the camera CAM coordinate frame;
Zn: the distance of an obstacle O from the camera;
xn+1−xn: the difference in abscissa position in the image of an obstacle O between an image I and a next image I+1; and
yn+1−yn: the difference in ordinate position in the image of an obstacle O between an image I and a next image I+1.
In a non-limiting embodiment, this is a situation where the speed of the vehicle is below or equal to a threshold representing a typical parking situation of the vehicle. In a non-limiting example, the threshold is 20 km/h.
This then gives: Ωx = 0 and Ωy = 0.
Furthermore, in a non-limiting embodiment, only the displacement of the vehicle as a function of the steering-wheel angle is considered.
Thus, Ty = 0.
In a non-limiting embodiment, the distance Zn is calculated from the width of a second region of interest ROI2 calculated previously in step C), by making an a priori hypothesis on the width of obstacle O.
The new position of an obstacle in image I at instant T+1 (xn+1, yn+1) can thus be predicted from the displacement of the vehicle and the obstacle's position at instant T (xn, yn), thereby determining the predicted displacement Vmp of obstacle O induced by the displacement Vmv of the vehicle. It is assumed here that the obstacle is stationary.
E) Discriminating among the second regions of interest ROI2.
To this end, the two calculated displacements Vmi and Vmp are compared.
If Vmi = Vmp, one concludes that the second region of interest ROI2 with which the displacement vector Vmi is associated is stationary, i.e. it does not move relative to vehicle V. In this case, the second region of interest ROI2 representing an obstacle O is not taken into account.
If Vmi ≠ Vmp, one concludes that the second region of interest ROI2 is mobile, i.e. it moves relative to vehicle V. In this case, this second region of interest ROI2 representing an obstacle O is taken into account.
In the diagrammatic example of
The region of interest ROI22 corresponding to obstacle O2 was not retained. Indeed, obstacle O2 is a pedestrian who is stationary relative to the vehicle; his observed displacement was due solely to the displacement of the vehicle.
In a non-limiting embodiment, a confidence index IC2 is associated with the second regions of interest ROI2 that are taken into account.
In a non-limiting variant, IC2 = |(Vmi − Vmp)/Vmp|.
Thus, the higher this ratio, the more likely it is that obstacle O is a mobile obstacle.
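Sub-step E) may be sketched as follows, in a non-limiting illustrative example; the vectors are treated component-wise and the formula for IC2 is read as a vector-norm version of the scalar ratio given in the text, which is an assumption of this sketch.

```python
# Compare the measured displacement Vmi with the predicted displacement Vmp
# induced by the vehicle's own movement Vmv.
def discriminate(vmi, vmp, tol=1e-6):
    # Returns (mobile?, confidence index IC2).
    dx = vmi[0] - vmp[0]
    dy = vmi[1] - vmp[1]
    norm_vmp = (vmp[0] ** 2 + vmp[1] ** 2) ** 0.5
    diff = (dx ** 2 + dy ** 2) ** 0.5
    if diff <= tol:
        return False, 0.0          # ROI2 stationary relative to vehicle V
    ic2 = diff / norm_vmp          # IC2 = |Vmi - Vmp| / |Vmp|
    return True, ic2

# A pedestrian crossing (Vmi differs from Vmp) versus a stationary pedestrian
# whose apparent motion is entirely due to the vehicle (Vmi == Vmp).
mobile1, ic2_1 = discriminate((4.0, 0.0), (2.0, 0.0))
mobile2, ic2_2 = discriminate((2.0, 0.0), (2.0, 0.0))
```

The first region is retained with IC2 = 1.0; the second, like ROI22 in the example of the figure, is discarded.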
Third Embodiment
The second regions of interest ROI2 are defined by carrying out a second detection of an obstacle by movement detection according to both the first embodiment and the second embodiment described above, as illustrated in
In a fifth step 5), the detected obstacle O is classified with a first confidence index IC1 and a second confidence index IC2 respectively, applied to the first regions of interest ROI1 and the second regions of interest ROI2, in relation to given characteristics C.
It will be recalled that the regions of interest ROI1, ROI2 are the regions determined during steps 3 and 4.
It will be noted, however, that in the embodiment illustrated in
It will be remarked that, prior to this classification step, two types of population, one of which represents a pedestrian, are determined as follows. It will be noted that this determination is generally carried out upstream of the detection method described.
From M reference labels VIGNref, some of which include an obstacle such as a pedestrian and some of which include no obstacle, reference histograms HISTREF are constructed from the orientation contours detected in these labels (sub-step CONST_HISTREF(VIGNref)).
Thus, in a non-limiting embodiment, the given characteristics are histograms of oriented gradients. In a non-limiting example, nine orientations are used (corresponding to nine directions over 360°). This makes it possible to obtain a good compromise between computation time and classification quality.
At each pixel of a contour of a reference label VIGNref, an orientation is calculated and it is determined to which of the nine orientations OR it belongs.
The norms NORM of the orientations are accumulated over the set of contour pixels of a reference label VIGNref. A reference histogram is thus obtained, as illustrated in
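The nine-orientation histogram construction described above may be sketched, in a non-limiting illustrative example, as follows; the test label (a simple vertical edge) is an assumption for the example.

```python
import numpy as np

N_BINS = 9  # nine orientations over 360 degrees, as in the text

def orientation_histogram(label):
    # At each contour pixel of the label, compute the gradient orientation,
    # assign it to one of the nine orientations OR, and accumulate the
    # gradient norms NORM into the histogram.
    gy, gx = np.gradient(label.astype(float))
    norm = np.hypot(gx, gy)
    angle = np.mod(np.degrees(np.arctan2(gy, gx)), 360.0)
    bins = (angle // (360.0 / N_BINS)).astype(int) % N_BINS
    hist = np.zeros(N_BINS)
    mask = norm > 0  # only pixels lying on a contour contribute
    np.add.at(hist, bins[mask], norm[mask])
    return hist

# Illustrative label: a vertical edge yields purely horizontal gradients,
# so all the accumulated energy falls into a single orientation bin.
label = np.zeros((16, 16))
label[:, 8:] = 255.0
hist = orientation_histogram(label)
```

The resulting nine-component vector is the "histogram" vector HIST that is later compared with the decision boundary.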
M reference histograms are thus obtained, which may be divided into two types of population (a population with a pedestrian and a population without), as illustrated as a 2D diagram (DI1, DI2) in
In order to separate these two populations, a learning algorithm determines a decision boundary.
In a non-limiting embodiment, this boundary is constructed by a wide-margin separator, a method known to the person skilled in the art as the SVM ("Support-Vector Machine") method, published by Kluwer Academic Publishers, Boston, and written by Corinna Cortes and Vladimir Vapnik. The boundary may, without limitation, be a polynomial function (for example
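A non-limiting illustrative sketch of learning such a decision boundary is given below; a simple sub-gradient-trained linear separator stands in for a full SVM solver, and the two synthetic 2D populations and all hyper-parameters are assumptions for the example.

```python
import numpy as np

def train_linear_separator(X, y, lam=0.01, epochs=200, lr=0.1):
    # Minimal linear wide-margin separator trained by sub-gradient descent
    # on the hinge loss (a stand-in for an SVM solver).
    # Labels y are +1 (pedestrian) / -1 (no pedestrian).
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # sample inside the margin
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:
                w -= lr * lam * w
    return w, b

def decision(w, b, x):
    # Signed score relative to the decision boundary DG: far from the
    # boundary = confident membership, near the boundary = ambiguous.
    return x @ w + b

# Two separable "histogram" populations in 2D (DI1, DI2), as in the figure.
rng = np.random.default_rng(1)
pop1 = rng.normal([3.0, 3.0], 0.3, size=(20, 2))   # with pedestrian
pop2 = rng.normal([0.0, 0.0], 0.3, size=(20, 2))   # without pedestrian
X = np.vstack([pop1, pop2])
y = np.array([1.0] * 20 + [-1.0] * 20)
w, b = train_linear_separator(X, y)
```

The magnitude of the decision score can then be mapped to the confidence indices IC1, IC2 described below: scores near zero correspond to ambiguous membership.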
The classification step comprises the following sub-steps.
To this end, the "histogram" vectors obtained are then compared with the decision boundary DG. The further one is from the boundary, the higher the likelihood of belonging to a population; conversely, the closer one is to the boundary, the more ambiguous the membership of a population.
The first and second confidence indices IC1, IC2 are thus defined, applied respectively to the first regions of interest ROI1 and the second regions of interest ROI2, and thus respectively to the constructed "histogram" vectors HIST.
Thus, the closer a "histogram" vector HIST is to the boundary DG, the closer the associated confidence index IC is to 0.5, for example (in the case where the value of a confidence index lies between 0 and 1).
Conversely, the further a "histogram" vector HIST is from the boundary DG within region POP1, the higher the confidence index IC1 of belonging to population POP1, and the lower the confidence index IC2 of belonging to population POP2.
In the non-limiting example taken in
An obstacle O is thus classified with the confidence indices IC1, IC2 applied respectively to the first and second regions of interest ROI1, ROI2 in relation to the histogram vectors HIST, the classification making it possible to determine the category to which the obstacle belongs, in this case pedestrian or not.
In a sixth step 6), the classification of the detected obstacle O is validated in relation to these confidence indices IC1, IC2 and in relation to these regions of interest ROI1, ROI2, as illustrated in
It will be recalled that the regions of interest ROI1, ROI2 are the regions determined during steps 3 and 4, and the confidence indices IC1, IC2 are the confidence indices determined during step 4 (arising directly from movement detection according to the second embodiment DET_MVT2(Vmi, Vmv)) and step 5 (arising from the classification).
In a non-limiting embodiment, the validation comprises the following sub-steps, as illustrated in
a) Constructing a probability map P_MAP corresponding to an image I in which each classified region of interest ROI1, ROI2 is represented by a probability distribution (sub-step CONST_P_MAP(IC1, IC2, Dim) illustrated in
To this end, a probability map P_MAP based on a set of accumulated Gaussian functions G is established, in which the Gaussian functions G are constructed from:
Thus, expressing a Gaussian function G in mathematical terms gives:
where e is Euler's number.
The Gaussian function G is represented graphically as a symmetrical bell-shaped curve.
One thus has:
The probability map P_MAP thus comprises several Gaussian functions G, some of which may or may not overlap, as illustrated in a non-limiting example in
b) Accumulating those probability distributions which overlap in the probability map P_MAP in order to obtain at least one local maximum (sub-step ADD_G(P_MAP, ICF) illustrated in
To this end, the Gaussian functions G of the probability map P_MAP that overlap are accumulated.
Several local maxima are thus obtained, giving several resulting confidence indices ICF. The local maximum makes it possible to obtain the most likely localisation of an obstacle O which is a pedestrian.
As illustrated in the non-limiting diagrammatic example of
It will be noted that in a non-limiting embodiment, a resulting confidence index ICF is capped at 1.
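Sub-steps a) and b) may be sketched, in a non-limiting illustrative example, as follows; the map size, Gaussian centres, widths and confidence indices are assumptions for the example.

```python
import numpy as np

def add_gaussian(p_map, cy, cx, ic, sigma):
    # Add a 2D Gaussian function G centred on a classified region of
    # interest, scaled by its confidence index IC.
    h, w = p_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    p_map += ic * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2)
                         / (2 * sigma ** 2))

# Two overlapping detections (ROI1 and ROI2 on the same pedestrian) and
# one isolated detection.
p_map = np.zeros((40, 40))
add_gaussian(p_map, 20, 20, 0.6, 3.0)   # ROI1, IC1 = 0.6
add_gaussian(p_map, 21, 21, 0.5, 3.0)   # ROI2, IC2 = 0.5, overlaps ROI1
add_gaussian(p_map, 5, 35, 0.3, 3.0)    # isolated detection

p_map = np.minimum(p_map, 1.0)          # resulting index ICF capped at 1
icf = p_map.max()                        # local maximum = resulting ICF
peak = np.unravel_index(p_map.argmax(), p_map.shape)
```

The accumulation of the two overlapping Gaussians exceeds either individual confidence index, reinforcing the pedestrian hypothesis at that location, while the isolated detection remains below the cap.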
c) Validating the region of interest ROI1, ROI2 that is closest to each local maximum of the probability map P_MAP (sub-step VALID_CLASS(ROI1, ROI2, ICF) illustrated in
In a non-limiting variant, for each local maximum, the region of interest ROI1, ROI2 (whose confidence index was used for the local maximum) whose peak is situated closest to this local maximum is chosen, and the resulting confidence index ICF is attributed to its associated confidence index IC1, IC2. This variant makes it possible to refer to an already existing region of interest ROI1, ROI2, as determined in the previous steps, and to remain accurate in localising an obstacle O (an already existing region of interest being centred on an obstacle O).
Thus, in the example explained in the diagram of
In another non-limiting variant, the confidence index ICF resulting from the accumulation of the Gaussian functions could be retained. In that case, the chosen region of interest ROI would be a region of interest ROI centred on this local maximum.
In a non-limiting embodiment, the validation step also comprises an additional sub-step of:
d) comparing the local maxima of the overlapping Gaussian functions with a detection threshold SG (sub-step COMP(G, SG) illustrated in
If each local maximum is below this threshold, the resulting confidence index ICF is deemed to be zero. It is thus considered that no pedestrian is detected, and that the Gaussian functions correspond to noise or a false detection. In this case, no region of interest ROI that served for the accumulation is retained.
Thus, the validation of the classification makes it possible to select validated regions of interest, taken from among the first and second regions of interest ROI1, ROI2 arising from the classification step, each of which represents a pedestrian.
Second Embodiment
According to this second embodiment, in addition to the steps described in the first embodiment, the detection method also comprises the additional steps illustrated in
In a non-limiting embodiment, the detection method also comprises an additional step of confirming the detection of an obstacle over a sequence of acquired images (step TRACK(POS)). This makes it possible to confirm the presence of a validated region of interest and to smooth its position over a whole sequence SQ of images I.
For the remainder of the description, according to this second non-limiting embodiment, the method includes this additional step.
The steps are described in detail below.
In a seventh step 7), a third detection of an obstacle O is carried out by sensor(s) with a detection range below a first threshold S1, resulting in a determined position POS.
First Embodiment
In a first non-limiting embodiment illustrated in
Second Embodiment
In a second non-limiting embodiment illustrated in
Third Embodiment
In a third non-limiting embodiment illustrated in
In a non-limiting embodiment, the grouping is carried out by comparing the sensor distances Duls and Drad obtained. Each distance Duls is compared with each distance Drad. If the distance difference Diff1 obtained from the comparison is below a determined threshold S4, it is considered that the same obstacle O has been detected by both types of sensor ULS and RAD. In a non-limiting example, the threshold S4 is 50 cm.
In this case (Diff1 <= S4), in a non-limiting embodiment, only the position POS detected by the radar sensor RAD is retained (detection by the latter being in general more accurate than detection by an ultrasound sensor ULS).
Otherwise (Diff1 > S4), it is considered that the detections do not correspond to the same obstacle O, and they are retained insofar as they could not be grouped with other detections.
A list of detected obstacles O is thus obtained, from which the double detections have been deleted.
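The grouping of ultrasound and radar detections described above may be sketched, in a non-limiting illustrative example, as follows; the distances are assumptions for the example.

```python
S4 = 0.5  # grouping threshold in metres (50 cm, as in the text)

def merge_detections(duls_list, drad_list, s4=S4):
    # Compare each ultrasound distance Duls with each radar distance Drad;
    # when Diff1 <= S4 the two detections are the same obstacle O, and only
    # the (more accurate) radar position is retained.
    merged = list(drad_list)              # radar detections always kept
    for duls in duls_list:
        if all(abs(duls - drad) > s4 for drad in drad_list):
            merged.append(duls)           # no radar match: keep ultrasound
    return sorted(merged)

# Obstacle at ~2 m seen by both sensors; obstacle at 6 m seen only by
# ultrasound (for instance inside the radar dead zone Zm).
obstacles = merge_detections(duls_list=[2.1, 6.0], drad_list=[2.3])
```

The double detection at roughly 2 m is collapsed to the radar distance, while the ultrasound-only detection survives.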
It will be noted that detecting obstacles using both ultrasound sensors ULS and radar sensors RAD gives very wide detection coverage. In fact, as can be seen in
Carrying out the detections by means of both types of sensor ULS and RAD makes it possible to cover the dead zone Zm, as illustrated in
Furthermore, it will be noted that carrying out detection by means of a camera (whether by shape recognition or movement detection) combined with detection by sensors makes it possible to locate the detected obstacles more precisely in the vehicle frame, whether combined with detection by ultrasound sensors ULS alone (as illustrated in
In an eighth step 8), the defined position POS is projected into a reference frame COORef.
In a non-limiting embodiment, the reference frame COORef is the image frame XI; YI. This makes it possible to minimise the impact of detection errors when calculating the distance of an obstacle O in the image, as opposed to a solution in which a vehicle frame would be taken into account.
In a non-limiting example, it will be noted that in order to project the position POS into the image frame XI; YI, it is sufficient to know the correspondence between the position in the image frame and the position in the vehicle frame Xv, Yv.
In a non-limiting example, the projection is carried out according to a projection matrix MP as follows.
Projection matrix MP:
First projections PJ1 are thus obtained, as illustrated diagrammatically in
It will be noted that the projection PJ1 of a position POS of an obstacle O determined by an ultrasound sensor ULS gives a rectangle. In the example illustrated in
Furthermore, the projection PJ1 of a position POS of an obstacle O determined by a radar sensor RAD gives a point. In the example illustrated in
The same applies in the case where both types of sensor (radar and ultrasound) are used.
In both cases (ultrasound sensors or radar sensors), this step of projecting the position POS also comprises a sub-step of defining, from a projection PJ1, an associated projection zone PJ1p.
In a non-limiting embodiment, the projection zone PJ1p is centred widthwise on the projection PJ1 and the base of the projection zone is tangent to the projection point PJ1 (in the case of a point), as illustrated in
In a non-limiting example, the dimensions of a projection zone are taken equal to 1.7 m in height by 0.7 m in width. The dimensions of this projection zone PJ1p are thus determined so that they correspond to those of a pedestrian.
Thus, in the example illustrated in
Thus, in the example illustrated in
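A non-limiting illustrative sketch of this projection is given below. The source does not give the projection matrix MP, so a simple pinhole ground-plane model stands in for it; the focal length, principal point and camera height are assumptions of this sketch, not values from the invention. Only the 1.7 m x 0.7 m pedestrian-sized projection zone follows the text.

```python
F = 500.0              # assumed focal length, in pixels
CX, CY = 320.0, 240.0  # assumed principal point
CAM_HEIGHT = 1.2       # assumed camera height above the ground, in metres

def project_to_image(xv, yv):
    # Vehicle frame (Xv forward, Yv lateral) -> image frame (XI; YI),
    # for a ground point at distance xv ahead and lateral offset yv.
    xi = CX + F * yv / xv
    yi = CY + F * CAM_HEIGHT / xv
    return xi, yi

def projection_zone(xi, yi, dist, height=1.7, width=0.7):
    # Zone PJ1p: pedestrian-sized (1.7 m x 0.7 m), centred in width on the
    # projection PJ1, its base tangent to the projection point.
    w_px = F * width / dist
    h_px = F * height / dist
    return (xi - w_px / 2, yi - h_px, xi + w_px / 2, yi)  # (x0, y0, x1, y1)

xi, yi = project_to_image(5.0, 0.0)       # obstacle 5 m straight ahead
zone = projection_zone(xi, yi, 5.0)
```

At 5 m, the pedestrian-sized zone is 70 x 170 pixels under these assumed intrinsics; a real system would use the calibrated matrix MP instead.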
In a ninth step 9), the validated regions of interest ROI are projected into this reference frame COORef.
Second projections PJ2 are thus obtained, as illustrated diagrammatically in
In a tenth step 10), the two projections PJ1, PJ2 obtained are aligned, and the determined position POS is attributed to the classified obstacle O in accordance with the alignment.
In a non-limiting embodiment, the alignment is a comparison between the two projections PJ1, PJ2, carried out according to the following criteria:
It will be noted that the distance of a projection PJ1 is the distance Duls or Drad given by the sensor CAPT.
Furthermore, the distance of a projection PJ2 is the distance detected in an image I of a region of interest ROI, recalculated in the vehicle frame V by the projection matrix MP.
In non-limiting examples:
It will be recalled that the projection PJ1 of a detection by sensors is represented by the projection zone PJ1p described previously. Thus, in practice, the comparison is carried out between a projection zone PJ1p and a projection PJ2.
Thus, where all these criteria are fulfilled, the alignment between the two projections PJ1, PJ2 is deemed positive. Otherwise, each projection PJ1, PJ2 is retained until an alignment is found with another projection PJ2, PJ1 respectively.
If no alignment is found, the alignment is deemed negative.
In the diagrammatic example of
In the diagrammatic example of
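The alignment test and the confidence-index boost may be sketched, in a non-limiting illustrative example, as follows; the exact alignment criteria are not reproduced in this text, so box overlap plus a distance tolerance are assumptions of this sketch, as are all numeric values except the IC update formula.

```python
def zones_align(pj1p, pj2, dist1, dist2, max_dist_diff=1.0):
    # Alignment between a sensor projection zone PJ1p and a region-of-
    # interest projection PJ2 (both (x0, y0, x1, y1) boxes): the boxes
    # must overlap and the sensor and image distances must agree.
    overlap_x = min(pj1p[2], pj2[2]) - max(pj1p[0], pj2[0])
    overlap_y = min(pj1p[3], pj2[3]) - max(pj1p[1], pj2[1])
    boxes_overlap = overlap_x > 0 and overlap_y > 0
    return boxes_overlap and abs(dist1 - dist2) <= max_dist_diff

# A matching pair (positive alignment) and a distant pair (negative).
ok = zones_align((100, 50, 170, 220), (110, 60, 180, 230), 5.0, 5.4)
ko = zones_align((100, 50, 170, 220), (400, 60, 470, 230), 5.0, 5.4)

if ok:
    ic = 0.6
    ic = ic + (1 - ic) / 2   # confidence boost after a positive alignment
```

After a positive alignment the confidence index rises halfway towards 1, as in the update IC = IC + (1 − IC)/2 given below.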
Thus, once the alignment is positive, one deduces from this that the corresponding obstacle O is a pedestrian and in one mode of completion without limitation, one attributes to it:
Furthermore, in one mode of completion without limitation, one increases its associated index of confidence IC. In one example without limitation, the new index of confidence IC=IC+(1−IC)/2.
In another mode of completion, one can associate with it:
One will note however that the POS position detected by the sensors is more accurate than the estimated position, and that the region of interest ROI is likewise more accurate than the defined projection zone.
If no alignment is found for a projection PJ1 or PJ2 then:
In one example without limitation, the threshold of confidence Sc=0.7.
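The update of the index of confidence IC after a positive alignment (IC = IC + (1 − IC)/2, given above as an example) and the comparison against the threshold of confidence Sc may be sketched as follows; the function names are illustrative.

```python
# Sketch of the confidence-index reinforcement after a positive alignment
# and of the retention test against the threshold Sc = 0.7 given as an
# example in the description.

CONFIDENCE_THRESHOLD = 0.7  # Sc in the description

def reinforce(ic):
    """Move the index of confidence IC halfway towards 1: IC + (1 - IC) / 2."""
    return ic + (1.0 - ic) / 2.0

def is_confident(ic, threshold=CONFIDENCE_THRESHOLD):
    """Return True when the index of confidence reaches the threshold Sc."""
    return ic >= threshold
```

One will note that repeated positive alignments drive IC towards 1 (for example 0.5 → 0.75 → 0.875), so that an obstacle confirmed several times quickly exceeds the threshold Sc.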
In an eleventh stage 11), one carries out a tracking of the validated regions of interest over a sequence SQ of acquired images.
In one mode of completion without limitation, this stage uses a method well known to the person skilled in the art, called ESM (“Efficient Second-order Minimisation”), developed by INRIA and described in the document “S. Benhimane, E. Malis, Real-time image-based tracking of planes using efficient second-order minimisation, IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 2004”.
This method is based on a search for the same pattern in a sequence SQ of acquired images I, more particularly between a current image and a reference image, and on the repetition of this pattern in a certain number of images I of the sequence SQ. This avoids losing the detection of an obstacle O when it is not detected in one image I of a sequence SQ although it was detected in the other images I.
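The ESM method itself performs a second-order minimisation over the transformation of a planar template; as a much simpler stand-in that merely illustrates the idea of re-finding the same pattern between a reference image and a current image, one may sketch a brute-force sum-of-absolute-differences search (this is not the ESM algorithm, and all names below are illustrative).

```python
# Illustrative stand-in for pattern search between images: brute-force
# sum-of-absolute-differences (SAD) matching of a small greyscale template
# inside a greyscale image, both represented as lists of lists of ints.

def find_pattern(image, template):
    """Return (row, col) of the position of template in image with the
    lowest SAD score, i.e. the best match."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(th) for j in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos
```

Repeating such a search over the images of the sequence SQ allows a region of interest detected in one image to be recovered in the following images.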
Thus, the procedure of the invention described makes it possible to reliably detect obstacles O, whether or not they are pedestrians, based not only on detection by shape recognition, but also on detection by movement recognition and, if applicable, on detection by sensors.
The procedure of the invention is commissioned by a DISP device of detection of an obstacle O within an environment E of a motor vehicle, this device being represented in diagram form in
This DISP device is integrated in the motor vehicle V.
This DISP device is able to:
It involves a control unit UC able to carry out the stages above.
In one mode of completion without limitation, the DISP detection device is also able to:
In modes of completion without limitations, the DISP detection device is, moreover, able to:
In one mode of completion without limitation, the DISP device involves a set of control units UC including at least one control unit able to carry out the stages described above. In one variant of completion without limitation, the set involves several control units UC1, UC2, UC3. Thus, in variants of completion without limitation, the control units UC may be distributed among the CAM camera, the projectors PJ, the sensors ULS, RAD, or even a vehicle computer ECU.
In the example without limitation of
In one mode of completion without limitation, the CAM camera is of VGA or WVGA type and makes it possible to acquire images of respective sizes 640×480 pixels and 752×480 pixels. In one example without limitation, the opening angle φ is 130°. Of course, other types of cameras with other characteristics may be used.
One will note that the above-mentioned detection procedure may be commissioned by means of a micro-programmed “software” device, hard-wired logic and/or electronic “hardware” components.
Thus, the DISP detection device may involve one or more computer programme products PG including one or more sequences of instructions executable by an information processing unit such as a microprocessor, or a processing unit of a microcontroller, an ASIC, a computer etc., the execution of these sequences of instructions allowing the described procedure to be commissioned.
Such a computer programme PG may be recorded in ROM type non-volatile recordable memory or in EEPROM or FLASH type non-volatile re-recordable memory. This computer programme PG may be recorded in memory in the factory or loaded into memory or remotely loaded into memory. The sequences of instructions may be sequences of machine instructions, or sequences of a command language interpreted by the processing unit at the time of their execution.
In the example without limitation of
Of course, the invention is not limited to the modes of completion and examples described above.
Thus, once the detection of a pedestrian has been validated, one can arrange an alert system which makes it possible to alert the driver of vehicle V that a pedestrian is situated close to the vehicle and enables him to brake, for example. One can also provide an automatic braking system following such a detection.
Thus, the detection procedure may be used for detection behind and/or in front of the motor vehicle V.
Thus, the invention particularly presents the following advantages:
Number | Date | Country | Kind |
---|---|---|---|
0954633 | Jul 2009 | FR | national |