The invention relates to the angle calibration of the orientation of a camera on board a vehicle, in particular a motor vehicle.
It applies in particular to the initial calibration of a computer-aided driving system, whether the latter is installed at the factory as "original equipment", with no specific calibration, or fitted as "aftermarket equipment" by a user on his vehicle. Another example of a situation in which a calibration of the camera orientation is required is the use of a smartphone application that uses the image taken by the camera of the smartphone, mounted on a support fixed to the dashboard or the windscreen of the vehicle.
Many computer-aided driving systems are known that use the image taken by a camera fixed in the vehicle and "looking at" the road. Algorithms make it possible to perform various functions such as run-off-road detection, estimation of the distance to other vehicles, anti-collision alerting, obstacle detection, etc.
For all these applications, it is necessary to accurately know the position and orientation of the camera with respect to the vehicle.
In all these situations, the mounting of the camera may be very approximate, which requires significant corrections to be applied to the signal delivered by the camera before the latter can be suitably analysed. It may also be necessary to carry out a subsequent readjustment of the calibration parameters, for example to compensate for deformations that the installation is liable to undergo.
It is difficult to install a camera at an accurate position, due to the diversity of vehicle models and of technical constraints. While it is relatively simple to determine the position of the camera in the vehicle in terms of translational degrees of freedom (by measuring the distance between the focal point of the camera and the centre of the vehicle or the level of the ground), it is far more difficult to accurately measure the orientation of the camera with respect to that of the vehicle in terms of rotational degrees of freedom, i.e. to determine the angle deviation between the optical axis of the camera and the longitudinal axis of the vehicle.
The object of the invention is to propose a method of automatic calibration of the orientation of a camera on board a vehicle, for a camera whose intrinsic parameters (focal length, resolution, distortion) are known but whose orientation relative to the vehicle is not.
As will be seen, this method requires no intervention of the driver and is implemented directly from the image of the scene taken by the camera while the vehicle is driven normally. It will also be seen that this method applies equally to a camera located at the front of the vehicle or at the rear thereof, for example a rear-view camera or a camera for estimating the distance to the vehicle following the equipped vehicle.
The angle calibration consists in evaluating a rotation (which can be expressed as a matrix) between a coordinate system linked to the camera and a coordinate system linked to the vehicle.
This rotation may be decomposed into three elementary rotations defined by pitch, roll and yaw angles, respectively. A full estimation of the rotation between the camera and the vehicle hence requires determining these three angles.
Many angle calibration methods have already been proposed.
Among these methods, a number of them make it possible to fully estimate the rotation between the camera and the vehicle, but require human intervention and/or a preliminary step, for example due to the use of a specific grid fixed to the system, of chequered targets laid on the ground or on a wall, or of markers fixed to the hood of the vehicle and visible to the camera.
The other methods, which use no external element and require no human intervention, have until now allowed only a partial calibration, because they cannot estimate all the rotational degrees of freedom (pitch, roll, yaw) together.
In particular, many methods have been proposed to estimate the pitch and yaw rotations, in particular by analysing the position of the horizon and/or of the vanishing point in the image of the scene taken by the camera, as described for example in FR 2 874 300 A1.
But these techniques can automatically estimate only the pitch and yaw angles between the camera and the vehicle, not the roll angle. Other methods, such as those proposed for example in US 2011/0228181 A1 and US 2009/0290032 A1, are based on the identification of points of interest ("feature areas") located on the ground, which are ordinary points but must imperatively be found again from one image to the next. It is hence necessary to have a continuous sequence of images, and to implement a complex algorithm able to reliably identify the "same" point of interest in two close images, so as to follow the displacements of this point from one image to the immediately following one.
Techniques involving pairs of cameras, which produce a stereoscopic representation of the scene, will only be mentioned in passing. It is far simpler to calibrate the position of a pair of cameras, but the techniques involved are not transposable to a single camera, as in the present case.
The object of the present invention is to solve the above-mentioned problems by proposing a technique of full angle calibration (i.e. according to the three pitch, roll and yaw degrees of freedom) of the orientation of a (single) camera on board a vehicle, in a fully automatic manner during a phase of driving on a road, with no intervention of the user and no use of any marker or target in a preliminary phase.
Another object of the invention is to provide such a calibration technique that requires neither a continuous sequence of images nor a complex algorithm for detecting and tracking points of interest across two close images.
The basic idea of the invention consists in taking, while the vehicle is running, images of the road markings delimiting the circulation lanes of the road, and in fully estimating, through an iterative process, the orientation of the camera with respect to the vehicle, based on the position of two lanes located side by side in the image, for example the width of the central lane (the one on which the vehicle is running) and the width of the lane located to the left and/or to the right of it.
In particular, as will be seen hereinafter, the "edge" detection is to be understood as a detection of the lines formed by the road borders and/or by the circulation lane separators (continuous or dashed white lines); such detection can be made by a very simple algorithm that does not need to identify, recognize and track the "same" points on the ground from one image to the next. Once this line-recognition step has been performed, very few calculations are required for the calibration, and no additional points of interest are necessary.
Another advantage is that the calibration according to the invention may be obtained even without a continuous sequence of images or a complex algorithm for detecting and tracking points of interest in two close images. The invention may in particular be implemented based on images that are not consecutive, for example images recorded at different locations.
Moreover, the invention does not require accurate knowledge of the height of the camera or of the width of the lanes. Only approximate values are used to initialise these quantities, which are then estimated by the algorithm.
More precisely, the invention proposes a method of the general type disclosed in the above-mentioned FR 2 874 300 A1, i.e. comprising, during a phase of displacement of the vehicle on a road, the steps of:
According to a characteristic feature of the invention, this method further comprises the following successive steps:
Very advantageously, step f) of estimating the roll angle comprises the following successive steps:
In a preferred embodiment, step f2) of calculating the width of each of the circulation lanes and the roll angle for each couple of circulation lanes comprises:
Estimating the width of each of the circulation lanes can in particular comprise applying the relation:
w̃ = Hv cos(α) (a2 − a1)
At step f2), estimating the roll angle can comprise applying the relation:
An exemplary embodiment of the invention will now be described, with reference to the appended drawings in which the same references denote identical or functionally similar elements throughout the figures.
Three coordinate systems are used: a coordinate system linked to the camera, with centre C; a coordinate system (Xv, Yv, Zv) linked to the vehicle, with centre O; and a global coordinate system (Xg, Yg, Zg) linked to the ground.
The objective of the calibration is to determine the rotation that exists between the vehicle coordinate system and the camera coordinate system.
Let us consider an ordinary point, with Pv its coordinates in the vehicle coordinate system and Pc its coordinates in the camera coordinate system. We have:

Pv = R Pc + T
where T is the translation between the centres O and C of the two coordinate systems.
By definition of the vehicle coordinate system, we have:
T = [0, −Hv, 0]
where Hv is the distance from the camera to the point O in the vehicle coordinate system (height).
It may also be written:
R = RotX × RotY × RotZ
with:
where ax, ay and az correspond to the pitch, yaw and roll angles, respectively.
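By way of illustration, the composition of these three elementary rotations can be sketched as follows; the right-handed sign conventions used here are an assumption, since the text does not give the explicit matrices.

```python
import numpy as np

def rot_xyz(ax, ay, az):
    """Compose R = RotX(ax) x RotY(ay) x RotZ(az) from the pitch (ax),
    yaw (ay) and roll (az) angles. The standard right-handed elementary
    rotations are assumed; the text does not specify the sign conventions."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    rot_x = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])   # pitch, about X
    rot_y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw, about Y
    rot_z = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])   # roll, about Z
    return rot_x @ rot_y @ rot_z
```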
More generally, the lane edges obtained on an image will be denoted by Dk, and the list of the edges obtained on a sequence of consecutive images by {(Dk)i}.
The estimation of the rotation will be denoted by Re. Based on this estimation, the position of the edges in the image of a virtual camera in rotation Re with respect to the camera can be easily calculated. These edges, or corrected edges, will be denoted by D̂k. The list of the corrected edges for a sequence of images will be denoted by {(D̂k)i}.
The extraction of the edges, i.e. the determination of the parameters defining the straight lines Dk in the image, is performed by a conventional image-analysis technique, for example by detecting boundaries (transforming the image into a table of binary values to which a gradient-calculation technique is applied), then executing a Hough transform, a well-known technique (whose principle is set out in particular in U.S. Pat. No. 3,069,654 A) that allows straight lines in a digitized image to be detected and characterized very rapidly.
In the case of images exhibiting distortion (i.e. where the traces of the edges would not be rectilinear in the raw image taken by the camera), this distortion is corrected before the detection of the edges, the latter being assumed to be rectilinear.
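As an illustration, a minimal OpenCV sketch of this front end (distortion correction, boundary detection, then Hough transform) is given below; the camera matrix K, the distortion coefficients dist and all thresholds are assumed values, and the conversion to the u = a·v + b line parameterization follows the notation used further on.

```python
import cv2
import numpy as np

def extract_edges(image, K, dist, min_len=80):
    """Return the detected lane edges of one image as (a, b) pairs for
    lines u = a*v + b. K, dist and the thresholds are assumed values."""
    undistorted = cv2.undistort(image, K, dist)   # edges assumed rectilinear after this
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    binary = cv2.Canny(gray, 50, 150)             # gradient-based boundary detection
    segments = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=60,
                               minLineLength=min_len, maxLineGap=20)
    edges = []
    for x1, y1, x2, y2 in (s[0] for s in (segments if segments is not None else [])):
        if abs(y2 - y1) < 1e-6:
            continue                              # discard near-horizontal segments
        a = (x2 - x1) / (y2 - y1)                 # u = a*v + b, with u = x, v = y
        edges.append((a, x1 - a * y1))
    return edges
```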
The mode of operation of the calibration method of the invention, which characterizes the angle position of the camera with respect to the vehicle, i.e. determines the respective above-defined pitch, yaw and roll angles ax, ay and az, will now be described with reference to the general diagram of the appended drawings.
Due to its non-linear character, the method of the invention implements an iterative algorithm, which can be iterated either on a single sequence of images or on several sequences of images, the sequence being changed at each iteration. This second possibility limits the impact of measurement errors affecting a part of the video: a first estimation of the pitch, roll and yaw angles is obtained, and this estimation is then refined as time goes by and the various subsequent sequences are processed.
It will also be noted that the method of the invention allows an initial calibration as well as a later recalibration, by fine correction of the mechanical deformations liable to appear during use of the camera, for example a displacement of the latter on its support due to vibrations, temperature variations, etc.
It will finally be noted that the method of the invention can be applied to any type of camera installed in a vehicle and observing the road, whether a camera located at the front of the car or a rear-view camera, provided that the imaged scene comprises at least two parallel circulation lanes of constant width.
The calibration essentially comprises the following steps:
These steps are iterated until the corrective angles estimated by each module are negligible (test of block 22).
More precisely, an edge-detection processing (block 10) is applied to an input sequence of images, which provides a list of edges {(Dk)i}.
This list of edges is corrected (block 12) by application of a rotation Re, which is the estimate, at the current iteration, of the desired rotation. Initially, this estimate Re is initialised to the identity matrix Id, as schematized by the switch 14.
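One possible implementation of this correction (block 12) is sketched below: under a pure rotation Re of the camera, image points map through the homography H = K Re K⁻¹, where K is the intrinsic matrix, and a line (in homogeneous coordinates) maps through the inverse transpose of H. This is the standard result for rotating cameras, used here as an assumed implementation of the block.

```python
import numpy as np

def correct_edge(a, b, Re, K):
    """Map a detected edge u = a*v + b to the edge seen by a virtual camera
    rotated by Re, via the homography H = K Re inv(K); lines transform by
    the inverse transpose of H. The direction of Re is a convention and may
    need to be inverted depending on how Re is defined."""
    H = K @ Re @ np.linalg.inv(K)
    l = np.array([1.0, -a, -b])             # u - a*v - b = 0, i.e. l . (u, v, 1) = 0
    l2 = np.linalg.inv(H).T @ l             # corrected line, homogeneous coordinates
    return -l2[1] / l2[0], -l2[2] / l2[0]   # back to u = a*v + b form
```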
Based on the list of corrected edges {(D̂k)i}, the algorithm estimates (block 16) the residual pitch and yaw angles ãx, ãy, which constitute the error signal of the iterative algorithm according to these two components.
The estimation operated by block 16 is advantageously performed by the well-known vanishing-point method: essentially, the method considers the vanishing point in the image, i.e. the intersection of all the edges Dk of the road in the image. Let (uVP, vVP) be the coordinates of this vanishing point in the image and f the focal length of the camera. If the vehicle is perfectly aligned with the road, the global coordinate system (Xg, Yg, Zg) linked to the ground and the coordinate system (Xv, Yv, Zv) linked to the vehicle are identical, and relations directly linking (uVP, vVP) to the pitch and yaw are obtained, whatever the roll angle az. The pitch angle ax and the yaw angle ay can hence be obtained.
In practice, the vehicle is never perfectly aligned with the axis of the road, so that the previous equations are no longer exact. To compensate for this, a sequence of images is used, long enough for the vehicle to be considered as generally aligned with the road over the duration of this sequence; the point of intersection of the edges is calculated for each image, then the average of all the so-calculated intersection points is determined, so as to define the vanishing point whose coordinates then allow the pitch and the yaw to be estimated.
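A minimal sketch of this estimation (block 16) is given below: the least-squares intersection of the corrected edges is computed per image, the intersections are averaged, and the pitch and yaw are deduced from the mean vanishing point. The closing relations ax = arctan(vVP/f) and ay = arctan(uVP/f), with coordinates measured from the principal point, are the standard pinhole relations and are an assumption here, the text not reproducing the exact equations.

```python
import numpy as np

def estimate_pitch_yaw(edges_per_image, f):
    """edges_per_image: one list of (a, b) line parameters (u = a*v + b)
    per image. Returns estimates of the pitch ax and yaw ay (radians)."""
    intersections = []
    for edges in edges_per_image:
        if len(edges) < 2:
            continue                                  # need >= 2 edges to intersect
        A = np.array([[1.0, -a] for a, _ in edges])   # each edge: u - a*v = b
        rhs = np.array([b for _, b in edges])
        (u, v), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        intersections.append((u, v))
    u_vp, v_vp = np.mean(np.array(intersections), axis=0)   # mean vanishing point
    return np.arctan2(v_vp, f), np.arctan2(u_vp, f)         # ax (pitch), ay (yaw)
```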
The following step (block 18) consists in updating the rotation matrix Re, by applying to the value Re of the previous iteration a compensation rotation by the residual pitch and yaw values determined by block 16:

Re ← Rot(ãx, ãy) × Re
The so-corrected rotation matrix Re is then applied to a module (block 20) for estimating the residual roll ãz, which implements an algorithm that is itself iterative and whose details will be set out hereinafter.
The residual pitch, yaw and roll values ãx, ãy and ãz, which constitute the error signal of the loop, are tested (block 22) and, if required, the algorithm is iterated with a new update Re ← Rot(ãz) × Re (block 24) to apply a roll correction (the pitch and yaw having already been corrected at block 18). The resulting rotation matrix Re constitutes the new input value of the algorithm at block 12 for the following iteration.
The process then continues until the residual pitch, yaw and roll angles are considered negligible (test of block 22). When this is the case, the algorithm has converged, and the final estimated value Re may be delivered as output, after application of a last update Re ← Rot(ãz) × Re (block 26, identical to block 24) for a roll correction by the last value ãz determined previously.
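Putting the blocks together, the outer loop can be sketched as follows. The processing functions are passed in as parameters (possible contents for them are sketched elsewhere in this description), rot_xyz is the composition function sketched above, and the convergence threshold eps is an assumed value.

```python
import numpy as np

def calibrate(images, detect_edges, correct_edges,
              estimate_pitch_yaw, estimate_residual_roll,
              rot_xyz, eps=1e-4, max_iter=50):
    """Iterative estimation of the camera-to-vehicle rotation Re
    (blocks 10 to 26), with the processing modules injected as callables."""
    edges = detect_edges(images)                     # block 10: {(Dk)i}
    Re = np.eye(3)                                   # initialisation, switch 14
    for _ in range(max_iter):
        corrected = correct_edges(edges, Re)         # block 12: corrected edges
        ax, ay = estimate_pitch_yaw(corrected)       # block 16: residual pitch/yaw
        Re = rot_xyz(ax, ay, 0.0) @ Re               # block 18: Re <- Rot(ax, ay) x Re
        az = estimate_residual_roll(correct_edges(edges, Re))   # block 20
        Re = rot_xyz(0.0, 0.0, az) @ Re              # blocks 24/26: roll correction
        if max(abs(ax), abs(ay), abs(az)) < eps:     # test of block 22: converged?
            break
    return Re
```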
The method is based on the following principle: for a given image, the width of the lanes visible in the image (central lane, and the left and/or right lanes that have been detected) is estimated; the roll can then be estimated using the difference between the estimated widths and the real width of the lanes.
The rotation due to the roll will be denoted Rz and the estimation of the reference lane width, which is not known, will be denoted We.
In order to converge towards a correct estimation of the roll and of the real width, the algorithm performs several iterations. At each iteration, the previous estimate Rz is used to correct the edges, then the residual roll angle δaz is estimated.
The inputs of the algorithm are:
The following steps are iterated until the residual roll δaz is low enough (test of block 38):
As regards more precisely the step of estimating the reference width (block 36): to estimate the real width of the lanes, the whole sequence of images is used, over which the widths (per image) of the right, left and central lanes are calculated. The average width is then calculated per type of lane, and the minimal average obtained is used: using this minimal value limits the risk of having a negative factor under the root.
Over the sequence of images, the module does not always detect several lanes. When only the central lane is detected in an image, the estimate of its width is nevertheless used to estimate the reference width, but it is not possible to calculate an estimate of the roll for this image.
When calculating the average of the per-image rolls, it is verified that there have been enough images for which several lanes have been detected.
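A sketch of this reference-width estimation and of the plausibility check on the per-image roll values (block 36) is given below; the threshold n_min is an assumed value, not one given in the text.

```python
import numpy as np

def reference_width(widths_per_lane):
    """widths_per_lane: dict mapping 'left' / 'central' / 'right' to the
    list of per-image width estimates for that lane (side lanes may be
    missing on some images). Returns We as the minimum of the per-lane
    mean widths, which limits the risk of a negative factor under the
    root in the roll formula."""
    means = [np.mean(w) for w in widths_per_lane.values() if len(w) > 0]
    return float(min(means))

def average_roll(per_image_rolls, n_min=10):
    """Average the per-image roll estimates, first checking that enough
    images had several detected lanes (and hence yielded a roll value)."""
    if len(per_image_rolls) < n_min:
        raise ValueError("too few multi-lane images to average the roll")
    return float(np.mean(per_image_rolls))
```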
As the iterations go along, the estimation of the real width of the lanes converges (test of block 38).
The algorithm then ends with the extraction (block 42) of the value of the angle az such that Rz = Rot(az), which measures the magnitude of the roll correction.
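Under the convention that Rz is a rotation about the camera's optical (Z) axis, consistent with the rot_xyz sketch above, this extraction reduces to a two-line function; the convention itself is an assumption.

```python
import numpy as np

def roll_angle(Rz):
    """Extract az such that Rz = Rot(az) (block 42), Rz being assumed to
    be a rotation about the Z (optical) axis."""
    return float(np.arctan2(Rz[1, 0], Rz[0, 0]))
```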
For a given image, the estimation is performed as follows:
If three lanes are detected, the width of each lane w̃g, w̃m, w̃d and two roll-correction angles azg and azd are estimated. If only two lanes are detected, a single roll-correction angle is estimated. If only the central lane is detected, its width is estimated, but there is no roll estimation.
More precisely, the estimation of the width of a lane (blocks 44, 46) uses the equations of the straight lines of the lane edges, given for each image by the edge-detection module 10. A lane is defined by two edges, on the left and on the right, with the following equations:
Dleft: u = a1 v + b1
Dright: u = a2 v + b2
Based on these edges, the width may be estimated by:
w̃ = Hv cos(α) (a2 − a1)

where α represents the angle between the vehicle and the road and Hv the height of the camera.
If the height Hv of the camera with respect to the ground is known, this value may usefully be used to calculate the roll angle; otherwise, it is possible to take an arbitrary value, for example Hv = 1 m, which will later be eliminated during the iterations of the algorithm. Indeed, the estimates of the width of each lane (w̃g, w̃m, w̃d) are proportional to Hv.
But, as We is estimated based on these measurements, We is also proportional to Hv, and az hence does not depend on Hv. The only impact of Hv is on the initialisation of We: if Hv is unknown, it suffices to give it a plausible value, consistent with the initialisation of We, so that it is not necessary to know the real height of the camera.
As b1 = b2 = f tan(α), it is possible to estimate α for each image by α = arctan(b1/f).
The previous formula for estimating the lane width is valid only if the edges are perfectly corrected. When a partial rotation remains, only an estimate of the width is obtained; but, thanks to the iterations, this estimate will converge towards the correct result.
This calculation is valid for the central lane as well as for the right and left lanes.
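The two relations above can be collected into a short sketch of the width estimation (blocks 44, 46); averaging b1 and b2 before computing α is an assumption made here to damp detection noise, and Hv = 1 m is the arbitrary default discussed above.

```python
import math

def lane_width(a1, b1, a2, b2, f, Hv=1.0):
    """Estimate a lane width from its left edge u = a1*v + b1 and right
    edge u = a2*v + b2: alpha = arctan(b/f) (from b1 = b2 = f*tan(alpha)),
    then w~ = Hv * cos(alpha) * (a2 - a1). Hv may be an arbitrary
    plausible value, since it cancels out in the roll estimate."""
    alpha = math.atan2(0.5 * (b1 + b2), f)   # angle between vehicle and road
    return Hv * math.cos(alpha) * (a2 - a1)
```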
As regards the estimation of the residual roll for each couple of lanes, the detection of two parallel lanes is available, for example the central lane and the left lane. The widths w̃g and w̃m of these lanes having been estimated on each image, and W representing the reference width of the lanes, it can be demonstrated that the roll az can be calculated by application of the following expression:
This relation is a first-order approximation, valid when az is small. Hv is the height of the camera with respect to the ground, and α, which represents the angle between the vehicle and the road, is estimated based on the equations of the lane edges.
The same formula is applied for the couple formed by the right and central lanes.
Foreign application priority data: FR 1362390, December 2013 (national).