This application is a National Stage of International patent application PCT/EP2010/069697, filed on Dec. 15, 2010, which claims priority to foreign French patent application No. FR 0906095, filed on Dec. 16, 2009, the disclosures of which are incorporated by reference in their entirety.
The present invention relates to the geo-referencing of an area by means of an optronics system of decametric performance class, more particularly in strongly oblique exposure conditions.
The following applies:
εP=εh/tan θ=(r/h)εh≈εh/θ
An error εh of 20 m for example, for an altitude greater than that of the object by 20 kft, induces, for the positioning of a point P situated at 30 km, an error εP of 120 m.
The following applies:
εP = r·εθ/sin θ = r·εθ·(h² + r²)^(1/2)/h = r²·εθ·(1 + (h/r)²)^(1/2)/h ≈ r²·εθ/h
In the same conditions as in the preceding example, an error εθ of 1 mrad, for example, which corresponds to a very favorable case, induces an error εP of 150 m in the positioning of the point P.
These errors may be aggregated to ultimately introduce an error of approximately 270 m for the positioning of the point P.
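By way of illustration, a minimal sketch of these two sensitivity formulas; the exact numerical values depend on the geometry assumed (slant versus ground range, exact altitude difference), so they may differ somewhat from the figures quoted above:

```python
import math

def ground_error_from_height_error(eps_h, h, r):
    """epsilon_P = eps_h / tan(theta) = (r / h) * eps_h, flat-ground approximation."""
    return eps_h * r / h

def ground_error_from_angle_error(eps_theta, h, r):
    """epsilon_P = r * eps_theta / sin(theta) ~ r**2 * eps_theta / h for small theta."""
    slant = math.hypot(h, r)
    return r * eps_theta * slant / h

# Illustrative values close to those of the text (20 kft ~ 6096 m, 30 km, 20 m, 1 mrad)
h, r = 6096.0, 30e3
print(ground_error_from_height_error(20.0, h, r))   # contribution of the height error
print(ground_error_from_angle_error(1e-3, h, r))    # contribution of the angular error
```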
Such errors are nevertheless also found in low-altitude aeroterrestrial optronics systems (helicopters, mini UAVs) or in systems fixed, for example, at the top of a vehicle mast or a ship's mast. However, most of these systems have to acquire and characterize the positioning of objects moving at great distance with decametric performance.
The influence of measurement errors on the geo-locating error of a point P has just been illustrated. Geo-referencing consists in geo-locating all the points of an imaged area, not just a single point.
To be able to have a geo-referencing of decametric class in the abovementioned conditions, it is usual practice to use post-processing operations on the ground, generally performed by a specialist operator who realigns the acquired images by means of geographic references (or landmarks, preferably with worldwide coverage or at least coverage sufficient for the expressed requirements). However, these landmarks, generally taken from exposures close to the vertical, are difficult to pair automatically with the strongly oblique image and are subject to the aging of the information.
One of the difficulties is to achieve this performance rapidly and at any point of the globe without having to use true world-coverage information, which is difficult to pair with the image straight away, although not subject to the aging of the information.
One solution, implemented in a related field for establishing digital elevation models (DEM), consists in performing, using an airborne laser, a number of beam distance and direction measurements in favorable acquisition conditions. In practice, this application is performed with low exposure constraints, making it possible to acquire the information in conditions which are close to the vertical and at a fairly low flight altitude, so that the orientation errors are not too detrimental to direct location performance. The systems used do not generally offer any associated imaging and, when they do have it, the two systems are not coupled. The aim of these airborne laser plotting techniques is solely to use distance measurements in order to reconstruct DEMs, and the coupling with an image is not provided in the applications encountered, which are all far removed from the acquisition conditions of strongly oblique aims. Moreover, these approaches lend themselves rather to the updating of the information produced and to the control of stereo-plotting sites, which involve producing, in ground stations and under the control of an operator, digital terrain models (DTM) and ortho-images of the imaged areas.
Another solution commonly used to produce ortho-images and DTMs, in order to ultimately produce geographic maps and vector databases (DB), uses aero-triangulation techniques based on acquisitions of optical or radar images from aircraft or from satellites.
The sensors on satellites are commonly used to cover large areas based on image acquisition and position and attitude measurements; moreover, this is done on the scale of a territory or more. The internal consistency of the optical images, which can be checked by observing the forms of the objects of the scene after rectification, is obtained when the images are produced by means of a matrix detector; otherwise it is ensured by using the trajectory and/or scanning techniques, while ensuring an overlap and a contiguous reconstruction of the information, which can then be oriented as a whole based on a few landmarks in order to correct the remaining orientation biases. These spatio-triangulation techniques also apply when the acquisitions are produced from observation or remote-sensing satellites.
The aero-triangulation applications correspond to acquisitions with wide measurement bases (a base being the displacement of the detector between two images) and therefore to a relatively low acquisition rate (of the order of 0.1 Hz), compared to strategic applications (some tens of Hz) reaching an acquisition rate of approximately 10,000 images in 3 minutes.
Here again, the images are processed and used in a ground station under the control of an operator. In his or her work producing information, he or she also has:
The enhancement of the geo-referencing of the images by an operator on the ground constitutes a process that is effective with regard to the result but restrictive with respect to the implementation time, the need for reference geographic data, and the correlation work and time involved, all the more so when the information associated with the image to be geo-referenced is of lesser quality.
The aero-triangulation work determines the absolute orientation of all the images. This makes it possible, if necessary, to assemble them into a single image (or block of images), to correct the result by inputting homologous points and/or landmark points, and to provide a manual or visual performance check. However, the need for an operator in the loop to check the quality of assembly of the images and of the geo-referencing of an area covered by a number of images is incompatible with the conditions of use of applications that notably require a much shorter implementation time, close to real time.
In addition to this performance problem linked to the exposure conditions (or CDPV), there is the need to have:
The coverage of large areas is ensured by displacing the detector or/and by using larger detectors or/and greater fields.
The coverage of a large area by a satellite means is facilitated by its displacement in its orbit and a good relative quality between the exposure parameters because:
For the terrestrial applications, the displacement of the detector is not always possible and its size is sometimes limited with regard to the areas to be acquired. The coverage of large areas by an aeroterrestrial means is more difficult since:
The production of large detectors, with materials of well-controlled quality, first of all favored linear-array detectors. However, the difficulty of knowing the pointing finely over time (between the image directions corresponding to the successive directions of the array) degrades the internal consistency of the image (which allows control of its geometry) and therefore one of the strong characteristics of optronics. Moreover, the integration time has to be reduced in order to adapt to the image-motion effects linked to the displacement of the detector relative to the imaged area.
The possibility of using larger fields to cover large areas runs counter to the GSD requirement for a given acquisition distance range. To remedy this constraint, rapid scanning-based acquisition modes such as frame-step (or step-stare) are used, and the number of detectors on one and the same platform is increased.
For the military applications, large quantities of well-resolved images have to be able to be geo-referenced rapidly.
The aim of the invention is to overcome these drawbacks (long implementation times, the need for an operator on the ground and for external reference data, insufficient resolution of the scene in the image) while observing the constraints of decametric-class geo-referencing and adapting the ground surface area imaged to the requirement, in conditions of strongly oblique exposure and significant acquisition range.
The geo-referencing method according to the invention is based on the provision of two types of information that have strong accuracy and precision:
An algorithmic processing operation computes the condition parameters of the exposures of each image based on the preceding information.
Thus, a few accurate distance measurements and precise angular deviations (using the internal consistency information of the optronics images and/or the precision of the inertial measurements) allow for better geo-referencing of the area, assuming ground that has little unevenness or by having a DTM. The quality of the geo-referencing of the imaged area then benefits globally from the accuracy of the distance measurements and locally from the geometrical consistency imparted by the relative quality of the optronics angular measurements.
According to the error budget produced, the consistency of the respective quality of the information used will be noted: each contribution is of the order of a meter (a few pixels of 10 μrad size at 30 km, for 20 kft of altitude, represent a distance of 1.5 m).
More specifically, the subject of the invention is a method for geo-referencing an area by means of an imaging optronics system which comprises a step of acquiring M successive images by means of a detector, the imaged area being distributed between these M images, with M≧1. It is mainly characterized in that it also comprises the steps:
This method allows for the geo-locating of an entire imaged area, not limited to a single point of the scene:
According to one embodiment of the invention with M≧3, the M images of the area are acquired in succession; these images present areas of overlap two by two and the method comprises a step of extracting homologous primitives in the areas of overlap of these M images and a step of mapping the images two by two on the basis of these homologous primitives.
According to a particular implementation of the preceding embodiment, when P=K, the range-found points are respectively at the center of each of the images.
Preferably, when the optronics system is fixed, the parameters describing the positioning (xe, ye, ze) are estimated only once.
When the optronics system comprises positioning means and it moves on a known trajectory, the positionings xe, ye, ze can be estimated on the basis of the successive position measurements and a model of the trajectory.
When the optronics system accesses (or includes) measurement means indicating its positioning, its speed and its acceleration, its trajectory is modeled in parametric form. The positionings xe, ye, ze are then estimated for the positions at the times corresponding to those of the acquisitions (images and range findings).
According to a first variant, when there are a number of range-found points in one and the same image, the distance measurements are acquired simultaneously for these points.
According to another variant, when there are a number of range-found points in one and the same image, the distance measurements are acquired in succession for these points, the time to acquire each distance being less than the ratio of the time to acquire this image to the number of these points in the image.
Also the subject of the invention is a geo-referencing optronics system which comprises a detector having an optical axis (COA), means for positioning this detector, means for measuring the attitude of the detector, a range finder harmonized with the COA of the detector and a processing unit linked to the abovementioned elements, and capable of implementing the method when P=K.
According to one feature of the invention, the range finder emitting a laser beam is equipped with means for splitting or deflecting the emitted laser beam and for analyzing the signals received in order to determine the time of flight (ToF) and the orientation of the beam relative to the image, by means of a processing operation suitable for implementing the method as described.
Other features and advantages of the invention will become apparent on reading the following detailed description, given as a nonlimiting example and with reference to the appended drawings in which:
From one figure to another, the same elements are identified by the same references.
The geo-referencing error of an area is conditioned by the quality of six external parameters, also called exposure parameters, indicated in
The position of the detector (or camera station) is preferably expressed in a Cartesian geographic coordinate system:
The 6 exposure parameters (xm, ym, zm, φm, θm, ψm) are determined with a quality that is conditioned by that of the measuring instruments and of the associated processing units.
The calibration of the parameters internal to the optronics system (focal length and optical distortion of the imaging device, image principal point, etc.) is assumed to be done elsewhere. However, the method also makes it possible to estimate these parameters and to determine more accurately, for example, the particular values that the parameters of an optical distortion model assume in the operating conditions of the sensor (temperature and mechanical stresses).
The method for geo-referencing an area is performed by means of an imaging optronics system 100 shown in
Generally, the method comprises the following steps:
Generally, the estimation of the parameters characterizing the CP of each of the K images is performed using:
Four uses are described below which use a number of distance measurements over an imaged area in order to enhance the knowledge concerning the camera parameters (CP) describing the exposure station and image attitude, provided by the measurements, or even to determine these parameters without angular measurements.
Application (1): enhancement of attitude and height with 3 distances. A contiguous image area is used which has 3 distance measurements in order to explicitly determine the value of the 2 angles (ψ0 and θ0) characterizing the orientation of the COA (excluding the last rotation φ0 about the COA) and the height z0 of the sensor (see
It is important to recall that these 3 parameters comprise the two parameters for which the measurement errors have the most critical influence on the geo-referencing of the image in strongly oblique sight (see
This application constitutes both a didactic illustration and a presentation of a basic concept of the process.
Application (2): densification of the earth's surface over the imaged area. A redundancy of the distance and image measurements in relation to the number of parameters to be estimated is used. Beyond the 3 minimum distances necessary, each new distance provides a relevant measurement concerning the distance to the scene at the point targeted and therefore, for a position of the sensor and an attitude of the image that are well known, relevant information with a view to positioning the point targeted on the ground. Determining the altitude and the position of scattered points densifies the initial knowledge of the ground model. The process proposes to take into account all the measurements in full in a joint estimation of the exposure parameters and of the scene parameters. However, and notably in order to clarify its meaning, it is also possible to use some of the measurements to estimate the CPs and the rest to know the altitude at the places on the ground that have been the subject of a measurement.
Application (3): aero-lateration with a set of images and of distance measurements. Use is made of a set of overlapping images, observations of homologous primitives between images, distance measurements on the images, a scene model and approximate measurements making it possible to initialize the exposure parameters of the images in order to enhance the exposure parameters of each image and those describing the scene model. The application is essentially focused on the estimation of the external parameters but the internal parameters consisting of the focus, the coordinates of the image principal point (IPP) and the description of the optical distortion can also be estimated in the context of this application. The redundancy of the distance measurements on the image is used in order to densify the scene model and, in this way, enhance the mappings between the corresponding features (CF) and the positioning of the extractions in the iterations of the estimation. This application presents the implementation of the process in the context of an operational application in its most general dimension.
Application (4): use of landmarks. At least 3 distance measurements are used on points of an imaged area that is paired with a geo-referenced reference image datum in order to explicitly compute the external exposure parameters and enhance the knowledge thereof.
The text below gives a few details for the implementation of these applications.
Application (1): Enhancement of the CPs with 3 Distances
For a perspective exposure, the collinearity equations which link the image and terrain coordinates make it possible to write the location function associating a ground point "G" of coordinates (xk, yk, zk) with an image pixel as:
The above expression contains the following:
The rotation matrix R characterizing the attitude of the image is written in its minimal representation as a function of the 3 Euler angles (ψ0, θ0, φ0):
or, in more developed form:
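As a purely illustrative sketch of this location function and of the rotation R, assuming an intrinsic Z-Y-X Euler sequence, a pinhole model with boresight along −z and a camera-to-world rotation (the patent's exact developed expressions and sign conventions may differ):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_matrix(psi, theta, phi):
    """Attitude matrix R built from the 3 Euler angles (assumed Z-Y-X sequence)."""
    return Rotation.from_euler("ZYX", [psi, theta, phi]).as_matrix()

def locate_on_ground(p, q, f, sensor_pos, psi, theta, phi, ground_z=0.0):
    """Location function: intersect the line of sight of pixel (p, q) with a
    horizontal plane of altitude ground_z (equation (1), without distortion)."""
    u = rotation_matrix(psi, theta, phi) @ np.array([p, q, -f])  # line of sight in world axes (assumed camera -> world)
    mu = (ground_z - sensor_pos[2]) / u[2]   # collinearity factor, positive for a point in front of the sensor
    return np.asarray(sensor_pos) + mu * u

# Illustrative oblique sight from 6096 m of altitude
G = locate_on_ground(120.0, -80.0, 2000.0, [0.0, 0.0, 6096.0], psi=0.3, theta=1.2, phi=0.0)
```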
The equation (1) is given without distortion on the image. In practice, this is evaluated by means of a grid or a parametric model linking the position of an ideal pixel (without distortion) of image coordinates (p, q) to the position of the real pixel (with distortion) of coordinates (p′,q′) in the image. The main effect of the distortion is to introduce a radial deformation on the perfect pixel coordinates (p,q) by transforming them into (p′,q′) according to the following form:
p′=pc+L(r)(p−pc)
q′=qc+L(r)(q−qc)
r=√((p−pc)²+(q−qc)²)
in which the pixel of coordinates (pc, qc) corresponds to the center of the distortion, also called principal point of symmetry (PPS).
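A minimal sketch of this radial model, with L(r) truncated to two even-power coefficients (k1 and k2 are illustrative names, anticipating the Taylor development defined just below) and the approximate inversion:

```python
def distort(p, q, pc, qc, k1, k2):
    """Ideal pixel (p, q) -> real pixel (p', q'), with L(r) = 1 + k1*r^2 + k2*r^4 (truncated development)."""
    r2 = (p - pc) ** 2 + (q - qc) ** 2
    L = 1.0 + k1 * r2 + k2 * r2 ** 2
    return pc + L * (p - pc), qc + L * (q - qc)

def undistort(pp, qq, pc, qc, k1, k2, n_iter=3):
    """Approximate inverse: the corrections being small, a few fixed-point iterations
    starting from the real pixel are sufficient."""
    p, q = pp, qq
    for _ in range(n_iter):
        r2 = (p - pc) ** 2 + (q - qc) ** 2
        L = 1.0 + k1 * r2 + k2 * r2 ** 2
        p, q = pc + (pp - pc) / L, qc + (qq - qc) / L
    return p, q
```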
The function L(r) is defined for r>0 and L(0)=1 with an approximation by Taylor development to the order N:
By taking into account the fact that the distortion corrections remain small compared to the size of the images, the above equations are inverted by being written:
Thus, it will be noted that taking the distortion into account amounts, in this type of modeling, to enlarging the linear estimation of parameters. This step therefore represents fairly low complexity, which will make it possible:
Out of the following 3 observable quantities:
For N range-found image points, it is possible to write the following 2N relationships based on the location function of the sensor (equation 1):
The equation for measuring the distance between the position of the sensor and a particular point on the ground is also expressed as:
dk=√((xk−x0)²+(yk−y0)²+(zk−z0)²)+νk (equation 2)
By replacing the latter in the preceding expressions and by disregarding the measurement noise ν, the following observation expression is obtained by making use of the properties of orthogonality of rotation matrix “R”:
We note that the quantities R3n involve only the angles φ and θ, which reflects the non-observability of ψ (angle of rotation about the vertical axis). In this expression, the sign depends on the collinearity factor μ, which can be constructed as being >0 (by placing the image opposite or in front of the optical center, as a negative snapshot). This operation is made possible by the central symmetry about the IPP, by inverting the 3 axes of coordinates (p, q, f) of the image coordinate system.
By using mk to designate the quantity which depends only on the measurements (image coordinates of the point k, distance to the ground associated with the direction of the point k and height of the sensor zk) and the information or the assumption concerning the ground height (z0):
Expressions are obtained for the two angles, when the 3 image points (pk, qk) are not aligned. Without describing the writing details thereof, the computations make it possible to express:
It will be noted that the circularity of these expressions as a function of the measurements (1, 2, 3) reflects the interchangeability of the role of the 3 points (P1, P2, P3).
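Although the explicit expressions are not written out here, the same result can be sketched numerically: with three range-found pixels on ground of known height, a small least-squares solve recovers two attitude angles and the sensor height (the Euler convention, the focal length and all numerical values below are illustrative assumptions; per the non-observability remark above, the rotation about the vertical axis is held fixed since it does not affect these height constraints):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

F = 2000.0  # focal length in pixels (illustrative)

def los_world(p, q, theta, phi, psi=0.0):
    """Unit line-of-sight direction in world axes for pixel (p, q) (assumed Z-Y-X sequence, boresight along -z)."""
    u = Rotation.from_euler("ZYX", [psi, theta, phi]).as_matrix() @ np.array([p, q, -F])
    return u / np.linalg.norm(u)

def height_residuals(x, pixels, dists, ground_z):
    """x = (theta, phi, z0): each range-found point must land on the known ground height."""
    theta, phi, z0 = x
    return [z0 + d * los_world(p, q, theta, phi)[2] - zk
            for (p, q), d, zk in zip(pixels, dists, ground_z)]

# Synthetic truth used to fabricate three consistent (pixel, distance) measurements on flat ground
theta_t, phi_t, z0_t = 1.25, 0.03, 6096.0
pixels = [(-300.0, 150.0), (250.0, -100.0), (50.0, 300.0)]
dists = [-z0_t / los_world(p, q, theta_t, phi_t)[2] for p, q in pixels]
ground_z = [0.0, 0.0, 0.0]

sol = least_squares(height_residuals, [1.0, 0.0, 5500.0], args=(pixels, dists, ground_z))
# sol.x is expected to come back close to (theta_t, phi_t, z0_t)
```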
Moreover, it will be noted that:
Application (2): Modeling and Densification of the Earth's Surface on the Imaged Area
For the processing of the terrain, different uses can be proposed depending on whether the ground has to be:
To use an available model, the proposed approach consists in modeling the surface of the scene in parametric form and in using polynomial-based functions to develop it. For this, a discretization of the area of interest is adopted in the form of a regular grid. The height information situated on the nodes (i,j) of the grid is used to determine the parameters “hij” of a model of the form:
This development of the altitude (or height) uses basis functions "cij" in polynomial form in the powers of x and y according to different representations, such as:
in which U and V are fixed matrices, of respective dimensions N×P and Q×M. The matrix H of dimension (P×Q) represents the matrix of the coefficients which are estimated. For N=M=4 and on a support of the grid normalized to [−1,1]², the matrix U=Vᵀ is written, for example:
In practice, one of the preceding two developments can be used to express z in one and the same form of linear function of the coefficients hij which are known according to the information available a priori.
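A minimal sketch of such a development, here with plain monomials up to degree 3 on the normalized support [−1,1]² (the particular basis matrix U = Vᵀ is only one possible choice), together with a least-squares fit of the coefficients hij to known heights:

```python
import numpy as np

def basis(x):
    """Monomial basis (1, x, x^2, x^3); any other polynomial basis with U = V^T could be substituted."""
    return np.array([1.0, x, x ** 2, x ** 3])

def surface_z(x, y, H):
    """z(x, y) = u(x)^T H v(y) on the normalized support [-1, 1]^2."""
    return basis(x) @ H @ basis(y)

def fit_surface(xs, ys, zs):
    """Fit the 4x4 coefficient matrix H to height samples (e.g. grid nodes) by linear least squares."""
    A = np.array([np.outer(basis(x), basis(y)).ravel() for x, y in zip(xs, ys)])
    h, *_ = np.linalg.lstsq(A, np.asarray(zs, dtype=float), rcond=None)
    return h.reshape(4, 4)

# Usage: H = fit_surface(xs, ys, zs); z = surface_z(0.2, -0.5, H)
```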
To densify an existing model, use is made once again of the preceding modeling (equation 8) and of available initial information on the terrain. It is proposed to densify the original spatial information known on the nodes of a regular grid, the pitch of which corresponds, for example, to a level-1 DTM standard (pitch of approximately 90 meters at the equator). This DTM makes it possible to initialize triangular cells as indicated in
With a number of distance measurements greater than 3, the complementary information can be used to densify the scene model on the points where the distance measurements are performed. Since the projections of the distances to the ground are not distributed according to a regular grid, the densification obtained leads to an enrichment of a DTM in triangular form, without that posing a problem as to its future use (
To estimate a model, a hypothesis or an existing “rough” model on the area, or an estimation obtained on the nth iteration, is used as a starting point. A process for estimating or making consistent all the available information such as that described in the application 3 is used to find a correction to be applied to the altitude in the following form:
in which the hij items are the altitudes determined on the nodes of the grid in a preceding iteration of the estimation process.
We note that the observation equation (equation 3) remains valid if the altitude zk is modeled by a more general form.
Thus, the altitude development coefficients can be grouped together with the rotation elements to resolve the linear system:
AX=B+ν
in which ν is a term of the first order accounting for the measurement errors.
In the absence of initial altitude information, it is also possible to estimate a ground model. The simplest approach consists in modeling the surface as a mean plane on the scale of the area covered by the images. The altitude then changes over a plane of space as:
z=h00+h10x+h01y
In this expression, h00 represents the altitude at the origin whereas h10 and h01 respectively represent the slopes in the two directions x and y. With this planar model, the following will, for example, be written:
With 6 distance measurements, the quantity X is determined accurately (to within the measurement noise ν), which gives both:
With more measurements, the preceding system is estimated:
Thus, the proposed process is capable of densifying or enhancing an initial approximate ground model of the scene, or of constructing it ab initio.
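A sketch of the planar construction ab initio: once the range-found pixels have been projected to ground points with the current exposure parameters, (h00, h10, h01) follow from a linear least-squares adjustment (the numerical points below are purely illustrative):

```python
import numpy as np

def fit_plane(ground_pts):
    """Least-squares fit of z = h00 + h10*x + h01*y to the range-found ground points.
    Three non-aligned points determine the plane exactly; more points are adjusted."""
    pts = np.asarray(ground_pts, dtype=float)
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (h00, h10, h01)

# Ground points obtained by projecting the range-found pixels with the current CPs (illustrative values)
h00, h10, h01 = fit_plane([(100.0, 50.0, 12.0), (900.0, 80.0, 15.5),
                           (400.0, 700.0, 9.8), (650.0, 420.0, 13.1)])
```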
The processes of densification or estimation of the model representing the surface of the ground have various advantages since they notably allow for better performance levels for:
Application (3): Aero-Lateration
We propose this name to recall the fact that the proposed process constitutes an extension of the conventional aero-triangulation process, which proceeds without using distance measurements. For this application, it is noted that:
For this application, there are:
In the preceding expression, the quantity zki is evaluated according to the scene model:
Having an image and observation set, the exposure parameters of the set of images are then obtained by minimizing the quadratic sum “J” of the different preceding residues, or the expression:
Θ=min J
J=JM+JΘ+JT
Without seeking to resolve the system specifically, since it depends on the number of images, of link primitives and on the occurrence of each primitive over the images, the structure of the equations to be resolved leads to a matrix linear system. The size of the system, which depends on the number of images, of CFs and on the number of distances, is fairly large, but the system is also very sparse because the CFs couple only a small number of images. The matrix of the system becomes all the sparser as the number of appearances of each primitive on more than two images decreases and the number of images increases.
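A schematic sketch of the joint criterion J = JM + JΘ + JT written for a generic least-squares solver (the parameterization, the flat-ground tie points, the pinhole projection and the weights are illustrative assumptions, not the patent's exact formulation):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

F = 2000.0  # focal length in pixels (illustrative)

def project(cp, g):
    """Pinhole projection of ground point g for exposure parameters cp = (x, y, z, psi, theta, phi)."""
    R = Rotation.from_euler("ZYX", cp[3:]).as_matrix()   # camera -> world (assumed)
    c = R.T @ (g - cp[:3])                                # back to camera axes
    return -F * c[:2] / c[2]

def residuals(x, n_img, n_pts, px_obs, cp_meas, ranges, s_px, s_cp, s_d):
    """Stack the three residue families of J = JM + JTheta + JT."""
    cps = x[:6 * n_img].reshape(n_img, 6)
    pts = np.column_stack([x[6 * n_img:].reshape(n_pts, 2), np.zeros(n_pts)])  # flat ground assumed
    res = []
    for i in range(n_img):                                # JM: image residues of the homologous primitives
        for j in range(n_pts):
            res.extend((project(cps[i], pts[j]) - px_obs[i][j]) / s_px)
    res.extend(((cps - cp_meas) / s_cp).ravel())          # JTheta: attachment to the measured CPs
    for i, j, d in ranges:                                # JT: range-finder distances
        res.append((np.linalg.norm(pts[j] - cps[i, :3]) - d) / s_d)
    return np.array(res)

# x0 stacks the measured CPs and rough tie-point planimetric positions; the normal matrix
# is sparse because each primitive couples only the few images that see it:
# sol = least_squares(residuals, x0, args=(n_img, n_pts, px_obs, cp_meas, ranges, 1.0, s_cp, 1.0))
```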
It will be noted that another possible approach consists in proceeding in two steps:
Application (4): CP by Landmark Point Range-Finding
This application makes it possible to compute the exposure parameters by range-finding landmark points and enhancing the values of the parameters obtained from approximate measurements. The procedure is qualified as an active plotting procedure. For this application, the optronics system accesses geographic information enabling it, by appropriate processing, to pair this information which may be reduced to a minimum of 3 landmark points. It is then possible to estimate all of the 6 external parameters by range-finding objects of known coordinates (landmark points).
The approximate measurements of the position of the sensor and of its image attitude make it possible in particular to map the range-found points in the image to a reference image datum in order to have coordinates for these terrain points. In detail, this approach uses the measurements (xm, ym, zm, ψm, θm, φm), supplying CPs which make it possible to geo-reference the image approximately and to assign geographic coordinates to the primitives extracted, by means of the location function; the parameter vector Θm is initialized according to the measurements. This geo-referencing of the primitives makes it possible to pair them by an automatic procedure with the landmarks of the reference datum and thus to correct the geographic coordinates which had initially been assigned to them from the measurements of the CPs. The primitives can then be considered to be landmarks Gke for which the coordinates on the ground (xk, yk, zk) have the quality of the reference data (class of a few meters), for which the image coordinates (pk, qk) are known from the processing of the signal with an extraction quality of the order of a pixel, and for which the distance to the scene (dk) is measured with the range finder with a metric quality.
This approach differs from the preceding one by the fact that there is reference information in the scene (the geographic coordinates of the points on the ground Gke).
For the problem of conventional plotting, which relies in principle on image coordinates (and therefore directions) and landmark points to estimate all of the 6 parameters (including the position of the sensor and the attitude of the image), a generalization is proposed by adding the distance measurements to the traditional information. This complementary provision of information presents two advantages:
This active plotting technique proceeds as follows:
Thus, by using reference image data, the proposed method can enhance and simplify the procedure for estimating the parameters (xe, ye, ze, ψe, θe, φe) by automatically pairing the image acquired with the range-found points with the reference datum.
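A sketch of this generalized resection: the six external parameters are adjusted so that each landmark reprojects onto its extracted pixel and lies at the measured distance (same assumed conventions as the previous sketches; the weights are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

F = 2000.0  # focal length in pixels (illustrative)

def resection_residuals(x, landmarks, px, dists, s_px=1.0, s_d=1.0):
    """x = (x0, y0, z0, psi, theta, phi); each landmark Gk of known ground coordinates must
    reproject onto its extracted pixel (pk, qk) and lie at the measured distance dk."""
    pos, R = np.asarray(x[:3]), Rotation.from_euler("ZYX", x[3:]).as_matrix()
    res = []
    for g, (p, q), d in zip(landmarks, px, dists):
        c = R.T @ (np.asarray(g) - pos)            # landmark expressed in camera axes
        res.append((-F * c[0] / c[2] - p) / s_px)
        res.append((-F * c[1] / c[2] - q) / s_px)
        res.append((np.linalg.norm(np.asarray(g) - pos) - d) / s_d)
    return np.array(res)

# Three landmarks already give 9 equations for the 6 unknowns; the measured CPs
# (xm, ym, zm, psi_m, theta_m, phi_m) serve as the starting point x0:
# sol = least_squares(resection_residuals, x0, args=(landmarks, px, dists))
```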
For the different applications presented, one major benefit of the method according to the invention lies in the enhancement of the knowledge of the depression θe and the flight height ze, which are the 2 parameters that weigh most heavily in the geo-referencing error budget, as described in the preamble with relation to
The swing, or rotation φe of the detector about the COA also offers a benefit since this quantity is not always measured, nor even specified in terms of performance, according to the mechanization retained for the orientation of the head of the optronics sensor.
Another benefit of the approach lies in the consistency of the information used both on the complementarity of the nature of the errors (distance accuracy and angular precision) and on the order of performance close to a meter.
The acquisitions of the positioning and of the attitude measurements are not generally performed at the same times, or even at the image acquisition times. In this case, a step for synchronizing these acquisitions of measurements on the image acquisition times must be provided. These positionings and attitudes are also synchronized with the distance measurements.
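A minimal sketch of such a synchronization step, assuming timestamped measurements and a Z-Y-X Euler convention for the attitudes (both assumptions):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def synchronize(meas_times, positions, euler_angles, image_times):
    """Resample position (linear interpolation) and attitude (spherical interpolation)
    measurements at the image acquisition times, which must lie inside the measurement interval."""
    positions = np.asarray(positions, dtype=float)
    pos_sync = np.column_stack([np.interp(image_times, meas_times, positions[:, k])
                                for k in range(3)])
    slerp = Slerp(meas_times, Rotation.from_euler("ZYX", euler_angles))
    att_sync = slerp(image_times).as_euler("ZYX")
    return pos_sync, att_sync
```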
According to a first embodiment of the method, described in relation to
As illustrated in
According to one alternative, the range finder is equipped with rapid beam-deflection means, this deflection being such that the time to acquire each distance is less than or equal to the ratio of the time to acquire this image to the number of range findings P. The most conventional laser beam-deflection techniques use prisms or rotating mirrors. It is also possible to use devices with orientable reflection or refraction elements, with birefringent deflectors and with interferences. More recent techniques use acousto-optic components or MEMS devices to deflect the laser beam in different directions. A fiber-optic system may also be used effectively to produce this function by delivering the laser beams to the places that are preferred in terms of relative distribution in the image. This delivery may be simultaneous over the different fibers or sequential, so as to supply all the available laser power in each fiber. Under this principle, each fiber may be duplicated to analyze the signal on reception. The function may also be produced with a denser spatial sampling by directly using a matrix detector simultaneously performing the imaging function and the range-finding function.
Whatever the approach used, it makes it possible to have a large number of range findings which enhances performance, through the redundancy of the distance measurements, and allows for their consistency checking in order to detect artefacts (of multiple-path type).
According to another alternative, the divergence of the laser of the range finder covers all of the image and the reception of the range finder is matrix-based.
Generally, the number of range-found points is the same for each image, but not necessarily.
The more range-found points there are in an image, the more efficient and robust is the result obtained.
According to a second embodiment of the method, described in relation to
It will be noted that it is possible to form one large image from M images, but this is not essential in estimating the parameters xe, ye, ze, φe, θe, ψe of each image.
When there is more than one image, it is essential for the images to overlap at least two by two in order for them to be able to be mapped together. The overlap may vary within a wide range, from a minimum of the order of 10% to almost 100% (a value of 60% corresponds to the conventional aero-triangulation conditions for civilian applications in which the acquisitions are vertical). Two successive images are mapped together by using homologous primitives (representing the same details of the scene) belonging to the overlaps of the images. These primitives may be represented by points or segments or, more generally, by forms parametrically describing the contours corresponding to elements visible in the scene. These form descriptors are obtained by an automatic processing operation suitable for extracting radiometric information characteristic of a point-like, linear or even surface geometric form, such as an intersection (point-like), a road portion (linear) or the contour of a field (surface). These homologous primitives are independent of the range-found points.
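As one possible realization of the point-primitive case (the patent does not impose a particular extractor, so the ORB detector used here is simply an assumption), a sketch of extracting and pairing homologous points in two overlapping grayscale images:

```python
import cv2

def homologous_points(img_a, img_b, max_pairs=200):
    """Extract and pair point primitives in the overlap of two successive images (grayscale arrays)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:max_pairs]
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches]
```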
In
In
In some cases, these M successive images are acquired from a fixed optronics system, that is to say, one whose position x0, y0, z0 does not change during the acquisitions. These images are, for example, acquired by detector scanning or by rotation of the platform. In this configuration, the geographic position of the sensor does not vary in the different acquisitions; its position xe, ye, ze can therefore be estimated just once. On the other hand, the attitudes change from one image to another and the orientations φe, θe, ψe will therefore be estimated M times. Finally, when the system is fixed, 3M+3 external parameters characterizing the CPs are estimated.
The approach based on detector scanning presents the advantage of having a geo-referenced area of the desired size, greater than just the instantaneous field of the detector, conditioned by the time to acquire and the scanning speed. This capability makes it possible notably to perform the acquisition by prioritizing the use of a small field (NFOV, standing for Narrow Field Of View) rather than a large field (WFOV standing for Wide FOV) in order to have a better GSD on the information generally situated at a great distance. It can be implemented without any additional hardware cost, on a system that already has a range finder and a means for scanning the COA. This makes it possible to consider the upgrading of existing sensors.
When, furthermore, a number of distances are plotted on each of the images, the advantages of the approach based on scanning are combined with enhanced performance because of the redundancy of the distance measurements in the process of estimating the exposure parameters and the overlap area.
In other cases, these M successive images are acquired from an optronics system placed on a moving platform. In this case, its position changes over time. There will therefore generally be M estimations of position xe, ye, ze and of attitude φe, θe, ψe: M positioning and attitude parameters will be acquired (xm, ym, zm, φm, θm, ψm) then estimated (xe, ye, ze, φe, θe, ψe), or 6M parameters.
In this case also, the images can be acquired by scanning.
When the platform describes a known trajectory defined by a parametric model, the number of parameters to be estimated can be reduced by a modeling of the trajectory including N parameters.
The trajectory model makes it possible to constrain the change of position of the sensor (within a range in which the trend of the parameters is compatible with the kinematic capabilities of the system) and to have position values outside of the measurement times by filtering or interpolating the information. The trajectory model gives the position of the platform with, for example, the conventional polynomial expression of the following form in which t0 is the origin or reference time and OM(n) is the n-th derivative of the position at the time t0:
Since the acquisition interval corresponds to a short time period, a 2nd-order development will generally be sufficient to account for any maneuver of the platform. Otherwise, if a polynomial of higher degree in time has to be used, preference will be given to a development of the trajectory in the form of a spline curve in order to avoid the unrealistic oscillations that might appear with the preceding polynomial method. To illustrate the reduction of complexity which results from the modeling, it is sufficient to indicate that a modeling limited to acceleration comprises 9 parameters to be estimated, whereas the number of position components generated in 1 second at an image rate of 50 Hz amounts to 150.
In the polynomial approach, the coefficients of the development can be obtained from a measurement of the kinematic characteristics at the instant t0 whereas, for both approaches, the coefficients can be estimated from a number of measurements (position, speed) by an adjustment of least-squares type. This procedure is elementary since the model is linear as a function of the values of the three position, speed and acceleration components.
Thus, the establishment of the trajectory model is based either on a minimum of one set (time, position, speed) or on a number of sets and a least-squares estimation procedure. The resulting development makes it possible to determine the position of the platform (and therefore indirectly of the sensor) at the time of the measurements of the sensor in order to have synchronous information. If necessary, the same type of operation can be performed to synchronize the image measurements and the range findings.
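A sketch of this second-order development and of its least-squares adjustment from a few (time, position, speed) sets; function names are illustrative:

```python
import numpy as np

def fit_trajectory(times, positions, velocities, t0):
    """Estimate P0, V0, A0 of OM(t) = P0 + V0*(t-t0) + A0*(t-t0)^2/2 (9 parameters)
    from position and velocity measurements, by linear least squares."""
    rows, obs = [], []
    for t, p in zip(times, positions):
        dt = t - t0
        rows.append(np.kron(np.array([1.0, dt, dt ** 2 / 2.0]), np.eye(3)))  # position rows
        obs.append(p)
    for t, v in zip(times, velocities):
        dt = t - t0
        rows.append(np.kron(np.array([0.0, 1.0, dt]), np.eye(3)))            # velocity rows
        obs.append(v)
    A = np.vstack(rows)
    b = np.concatenate([np.asarray(o, dtype=float) for o in obs])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:6], x[6:9]   # P0, V0, A0

def position_at(t, t0, P0, V0, A0):
    """Evaluate the modeled position at an arbitrary time (e.g. an image acquisition time)."""
    dt = t - t0
    return P0 + V0 * dt + A0 * dt ** 2 / 2.0
```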
The trajectory parameters can be estimated in a way that is:
According to a third embodiment of the method, described in relation to
It will also be noted that it is possible to have P>K. Some images may not have any range-found point, since they have homologous primitives with other images which themselves have range-found points.
The method according to the invention is triggered to acquire the environment of a particular geographic position. From the measured position of the sensor, an approximate orientation for the collection is deduced. The computation of the angular directions to be applied to the COA is then performed to take account of:
Since the orientation measurements have a better short term accuracy (between close images), and the number of images to be produced in azimuth is generally greater than that in bearing to acquire an area with similar dimensions, a greater overlap in bearing than in azimuth will preferably be chosen and a scanning will be performed first in azimuth then in bearing. The angular displacement deviation of the line of sight will then be greater between 2 bearing values than between 2 azimuth values.
The method according to the invention makes it possible to better determine the parameters which substantially affect the geo-referencing performance for strongly oblique exposures and in particular in aeroterrestrial operations comprising: