This application is the national phase under 35 U.S.C. §371 of PCT/SE2009/000497 filed 20 Nov. 2009.
The present invention relates to a method of estimating the absolute orientation of a vehicle by means of a camera and a GPS. In this connection, GPS stands for Global Positioning System.
In the following, the expression absolute orientation refers to how the vehicle is oriented with respect to a system of axes fixed relative to the ground.
In many application fields it is important to obtain an accurate estimation of the absolute orientation of a vehicle. One such application field is the production of maps, where images can be taken from airborne vehicles such as unmanned aerial vehicles, often abbreviated UAVs. Other application fields are as components in navigation equipment for aeroplanes, helicopters and UAVs, or as parts of missiles such as cruise missiles.
A number of different ways to estimate the absolute orientation are known. One way is to observe landmarks and compare them with a map. It is also possible to combine an inertial measurement unit, IMU, which can comprise accelerometers and gyros, with a magnetometer. Another way is to combine an IMU with a GPS. Examples of rather complex navigation methods using camera, GPS and IMU are known from WO 2005100915 A1, U.S. Pat. No. 5,894,323 A and WO 9735166 A1. Another example is disclosed in US 20060018642, which discloses an infrared camera and GPS used in combination with laser distance measurement and a magnetic compass. The methods exemplified by these references require many components and involve complex processing of the obtained information.
Accordingly, the methods mentioned have different advantages and drawbacks, differing in the circumstances under which they can be used, the accuracy of the results obtained, the time delay introduced, and so on.
The object of the invention is to provide a method of estimating absolute orientation that requires few components, in principle only a GPS and a camera, so that costs can be kept down, and that in addition can be combined with existing navigation methods.
The object of the invention is achieved by a method characterized in that the absolute orientation is obtained by the use of the absolute position from the GPS and the relative motion derived from concurrent images taken above essentially horizontal ground.
Essential to the invention is that the vehicle travels above essentially horizontal ground, enabling estimation of the absolute orientation using only a camera and a GPS.
According to a preferred method, three concurrent images are used to obtain the absolute orientation. This is the minimum number of images needed to estimate the absolute orientation while at the same time offering an accurate estimation.
According to a still preferred method, the following steps are carried out:
Principles for the identification of key points have long been known; see, for example, the article by David G. Lowe, Computer Science Department, University of British Columbia, Vancouver, B.C., Canada: “Distinctive Image Features from Scale-Invariant Keypoints”, pp. 1-28, Jan. 5, 2004, and in particular paragraph 2 on pages 2 to 5.
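By way of illustration only, such a key point identification step could be realized with an off-the-shelf detector such as SIFT, the method described in the Lowe article. The following minimal Python sketch uses the OpenCV bindings, which are not part of the disclosure; the image file names are hypothetical placeholders for three concurrent images from the camera.

```python
# Illustrative sketch: SIFT key point detection with OpenCV.
# The file names are hypothetical; any three concurrent images taken
# above essentially horizontal ground would do.
import cv2

images = [cv2.imread(name, cv2.IMREAD_GRAYSCALE)
          for name in ("img0.png", "img1.png", "img2.png")]

sift = cv2.SIFT_create()
# For each image: a list of cv2.KeyPoint objects and a matrix of
# 128-dimensional descriptors used later for matching between images.
features = [sift.detectAndCompute(img, None) for img in images]
```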
According to a further development of the still preferred method, the matching of the key points between the selected images comprises the following steps:
In this connection it can be noticed that such a type of matching is per se previously known, for example from the book by Richard Hartley, Australian National University, Canberra, Australia, and Andrew Zisserman, University of Oxford, UK: “Multiple View Geometry in Computer Vision”, second edition, pp. 364-390, Cambridge University Press, 2000, 2003. However, according to the invention the matching is applied in a particular application field, namely the estimation of the absolute orientation of a vehicle. There is no indication to use the matching in such an application field or in closely related application fields.
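As a sketch of what such a pairwise matching step could look like in practice, the following Python function matches the SIFT descriptors from the previous sketch using a brute-force matcher and Lowe's ratio test; this is a generic illustration, not the specific matching steps of the claimed method, and the ratio threshold of 0.75 is an assumption.

```python
# Generic descriptor matching with Lowe's ratio test; an illustrative
# sketch, not the particular matching procedure claimed by the invention.
import cv2

def match_pair(desc_a, desc_b, ratio=0.75):
    """Return index pairs (i, j) of putative key point matches."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    # Keep a match only if it is clearly better than the runner-up.
    return [(m.queryIdx, m.trainIdx)
            for m, n in knn if m.distance < ratio * n.distance]
```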
The invention will now be described in more detail with reference to the accompanying drawings in which:
In
The method according to the invention is now described with reference to the flow chart of
Based upon images 6 delivered from the camera 5, a key point calculation is carried out in a block 7. This can be done according to known calculation principles, such as those of the key point reference given above or other similar key point calculation principles. The general principle is to identify key points in the images having particular features. Preferably, three images taken at a suitable distance from each other are chosen for this key point process, and key points found in all three images are selected. Block 8 comprises the selected key points supplied by block 7, to be projected down onto a horizontal ground plane in a block 9. This block 9 is also supplied with position estimates from a block 10, which is informed by a GPS 11 of the absolute positions, x, y and height, of the images involved in the key point calculations. The position of an image is defined as the position in space where the centre of the camera was located when the image was taken.
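Continuing the sketches above, one possible way to select the key points found in all three images and to pair each image with its GPS position is shown below; the chaining of pairwise matches, the variable names and the GPS values are all illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch: keep only key points traceable through all three
# images, and record the GPS camera position (x, y, height) per image.
# `features` and match_pair() come from the previous sketches.
import numpy as np

(kp0, d0), (kp1, d1), (kp2, d2) = features
m01 = dict(match_pair(d0, d1))          # image 0 -> image 1
m12 = dict(match_pair(d1, d2))          # image 1 -> image 2

# A track is a key point seen in images 0, 1 and 2.
tracks = [(i, m01[i], m12[m01[i]]) for i in m01 if m01[i] in m12]

# Pixel coordinates of every common key point, one (N, 2) array per image.
pixels = [np.array([kps[t[v]].pt for t in tracks])
          for v, kps in enumerate((kp0, kp1, kp2))]

# GPS positions of the camera centre at the moments of exposure
# (hypothetical values; in practice from the synchronized GPS receiver).
centers = [np.array([0.0, 0.0, 100.0]),
           np.array([15.0, 0.0, 100.0]),
           np.array([30.0, 0.0, 100.0])]
```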
In block 9 each selected key point is projected down onto a horizontal plane. Each image thereby defines a position in the horizontal plane for each key point. The position of each key point in space is determined in block 9 by calculating the geometrical centre of gravity of the corresponding projections, the number of which equals the number of images selected. The mutual positions of the key points and their positions relative to the image positions are now determined. An error value for each key point and image is obtained as the distance between the back-projection of the geometric centre of gravity and the respective key point in the image. The rotation of the images and the height coordinate of the horizontal plane are determined by minimizing these distances, block 12, in cooperation with block 9. The minimizing process outputs orientation estimates, block 13, which, in the case of key points corresponding to objects essentially on horizontal ground, define a calculated absolute orientation of the images; and when the camera is fixedly mounted on or in the vehicle, the absolute orientation of the vehicle is thereby also determined.
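To make the geometry concrete, the projection, centre-of-gravity and minimization loop of blocks 9, 12 and 13 can be cast as a least-squares problem, as in the sketch below. The pinhole camera model with known intrinsic matrix K, the rotation-vector parametrization, the nadir-looking initial guess and the use of scipy.optimize.least_squares are all assumptions added for illustration; the patent does not prescribe a particular minimizer.

```python
# Illustrative least-squares sketch of blocks 9, 12 and 13. Assumes a
# calibrated pinhole camera (intrinsic matrix K) and the `pixels` and
# `centers` variables from the previous sketch.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def plane_points(params, K_inv, centers, pixels, n_views):
    """Project each observed pixel onto the plane z = h and return the
    per-key-point geometrical centre of gravity of the intersections."""
    h = params[-1]
    pts = []
    for i in range(n_views):
        R = Rotation.from_rotvec(params[3 * i:3 * i + 3]).as_matrix()
        rays = (R.T @ (K_inv @ np.c_[pixels[i],
                                     np.ones(len(pixels[i]))].T)).T
        t = (h - centers[i][2]) / rays[:, 2]       # ray/plane intersection
        pts.append(centers[i] + t[:, None] * rays)
    return np.mean(pts, axis=0)                    # centre of gravity


def residuals(params, K, K_inv, centers, pixels, n_views):
    """Pixel distance between each key point and the back-projection of
    its centre of gravity: the error value the method minimizes."""
    X = plane_points(params, K_inv, centers, pixels, n_views)
    res = []
    for i in range(n_views):
        R = Rotation.from_rotvec(params[3 * i:3 * i + 3]).as_matrix()
        cam = (R @ (X - centers[i]).T).T @ K.T
        res.append(cam[:, :2] / cam[:, 2:] - pixels[i])
    return np.concatenate(res).ravel()


def estimate_orientation(K, centers, pixels):
    """Estimate one rotation per image plus the plane height."""
    n_views = len(centers)
    x0 = np.zeros(3 * n_views + 1)                 # rotations + height
    x0[0:3 * n_views:3] = np.pi                    # nadir-looking guess
    sol = least_squares(residuals, x0,
                        args=(K, np.linalg.inv(K), centers, pixels, n_views))
    rots = [Rotation.from_rotvec(sol.x[3 * i:3 * i + 3])
            for i in range(n_views)]
    return rots, sol.x[-1]
```

With a calibrated intrinsic matrix K, a call such as rots, h = estimate_orientation(K, centers, pixels) would return one estimated absolute orientation per image; with a fixedly mounted camera these are also the orientations of the vehicle at the three exposure times.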
Accordingly, the proposed method takes advantage of the vehicle travelling above horizontal ground. Furthermore, the method presupposes that the vehicle is equipped with a fixedly mounted, calibrated camera synchronized with a GPS receiver, so that each image taken can be provided with the GPS position of the camera at the moment when the photo or image is taken.
The invention is not limited to the method exemplified above but may be modified within the scope of the attached claims.
| Filing Document | Filing Date | Country | Kind | 371c Date |
|---|---|---|---|---|
| PCT/SE2009/000497 | 11/20/2009 | WO | 00 | 9/19/2012 |
| Publishing Document | Publishing Date | Country | Kind |
|---|---|---|---|
| WO2011/062525 | 5/26/2011 | WO | A |
| Number | Name | Date | Kind |
|---|---|---|---|
| 5647015 | Choate et al. | Jul 1997 | A |
| 5894323 | Kain et al. | Apr 1999 | A |
| 20060018642 | Chaplin | Jan 2006 | A1 |
| 20080059065 | Strelow et al. | Mar 2008 | A1 |
| 20080109184 | Aratani et al. | May 2008 | A1 |
| Number | Date | Country |
|---|---|---|
| 0431191 | Jun 1991 | EP |
| 1898181 | Mar 2008 | EP |
| 2436740 | Oct 2007 | GB |
| WO-9735166 | Sep 1997 | WO |
| WO-2005100915 | Oct 2005 | WO |
| WO-2006002322 | Jan 2006 | WO |
| WO-2008024772 | Feb 2008 | WO |
| Entry |
|---|
| Davison et al., “MonoSLAM: Real-Time Single Camera SLAM”, Jun. 2007, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, No. 6, p. 1052-1067. |
| Supplementary European Search Report—Apr. 5, 2013—Issued in Connection With Counterpart European Patent Application No. 09851510.9. |
| PCT/ISA/210—International Search Report—Jun. 30, 2010. |
| PCT/ISA/237—Written Opinion of the International Searching Authority—Jun. 30, 2010. |
| David G. Lowe; Computer Science Department, University of British Columbia, Vancouver, B.C., Canada; “Distinctive Image Features from Scale-Invariant Keypoints”; pp. 1-28. |
| Richard Hartley et al.; Australian National University, Canberra, Australia; “Multiple View Geometry in Computer Vision”, second edition, pp. 364-390, Cambridge University Press, 2000, 2003. |
| D. Katzourakis et al.; “Vision Aided Navigation for Unmanned Helicopters” 17th Mediterranean Conference on Control and Automation, Makedonia Place, Thessaloniki, Greece, Jun. 24-26, 2009. |
| T. Templeton et al.; “Autonomous Vision-based Landing and Terrain Mapping Using an MPC-controlled Unmanned Rotorcraft”; 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, Apr. 10-14, 2007. |
| Number | Date | Country |
|---|---|---|
| 20130004086 A1 | Jan 2013 | US |