Embodiments of the present invention relate generally to methods and instruments for determining tilt angle and tilt direction using image processing techniques. The methods and instruments may be used in survey applications such as determining locations of points.
Controlling tilt of a survey instrument is a major activity for a surveyor. A great deal of time and effort is devoted to ensuring that a survey instrument is leveled. Conventional methods of leveling a survey instrument involve aligning the survey instrument with a local gravity vector using a bubble level. Typical survey procedures involve leveling a survey instrument before performing measurements so that data is nearly free of tilt errors.
Today's survey instruments often comprise an optical system and a Global Navigation Satellite System (GNSS), otherwise referred to as a Global Positioning System (GPS). A conventional GNSS survey instrument typically includes a location measurement device coupled to an end of a surveyor's pole, whereas a conventional optical survey instrument (e.g., an optical total station) typically uses a tripod support system. The GNSS type of survey instrument is used to determine locations of points of interest that are typically located on the ground when many data points are desired, owing to its ease of portability. A bottom or tip of the surveyor's pole is placed at the point of interest, held in a vertical position as indicated by the bubble level, and a location measurement is obtained. Leveling ensures that a measurement center (e.g., the antenna phase center) of the location measurement device is as close to directly above the point of interest as possible. This is important because error is introduced if the measurement center is not directly above the point of interest. For example, a surveyor's pole that is two meters long and is tilted two degrees from vertical can result in approximately seven centimeters of measurement error. That is, the measurement center of the location measurement device may be as much as seven centimeters to one side of the point of interest.
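The magnitude of this error can be sketched with a short calculation (a minimal illustration, assuming a rigid pole pivoting about its tip with the measurement center at the top of the pole):

```python
import math

def horizontal_offset(pole_length_m: float, tilt_deg: float) -> float:
    """Horizontal displacement of the measurement center relative to the
    pole tip, for a rigid pole tilted tilt_deg from vertical."""
    return pole_length_m * math.sin(math.radians(tilt_deg))

# A two-meter pole tilted two degrees from vertical.
offset_cm = horizontal_offset(2.0, 2.0) * 100.0
print(round(offset_cm, 1))  # approximately 7.0 cm
```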
If the time required to level a survey instrument could be reduced or eliminated, a surveyor could be more productive by taking more measurements during a given time period. Thus, improved methods and instruments are continually desired to reduce the time and effort required to level a survey instrument. This applies to GNSS survey instruments, optical survey instruments, handheld survey instruments, and any other type of survey instrument that utilizes leveling processes.
Embodiments of the present invention provide improved methods and instruments for determining tilt angle and tilt direction of a survey instrument using image processing techniques. The tilt angle and tilt direction may be used in survey applications to determine locations of points. For example, in some embodiments a pose of an imaging device that is coupled to a surveying instrument is determined using matchmove image processing techniques. As used herein, matchmove refers broadly to software applications that can be used to extract information (such as camera pose) from one or more images. The pose of the imaging device can be used to determine a tilt angle and a tilt direction of the survey instrument. Further, the survey instrument may include a location measurement device, and a measured location may be used with the tilt angle and tilt direction to determine a location of a point of interest (e.g., a point at a tip of a surveyor's pole, a point identified using a laser pointer of a handheld survey instrument, or the like). Details of these and other embodiments are described below.
In accordance with an embodiment of the present invention, a survey instrument includes a support pole having a first end and a second end and a GNSS receiver coupled to the first end of the support pole and having a known spatial relationship with the second end of the support pole. The GNSS receiver may be configured to determine a location of an antenna phase center of the GNSS receiver in a reference frame. The survey instrument also includes an imaging device coupled to the support pole. The imaging device may be configured to obtain image information. The survey instrument also includes a processor in electrical communication with the GNSS receiver and the imaging device. The processor may be configured to receive the location of the antenna phase center from the GNSS receiver, receive the image information from the imaging device, determine a pose of the imaging device using the image information, and determine a tilt angle of the support pole and a tilt direction of the support pole in the reference frame. The tilt angle and the tilt direction of the support pole may be determined using the pose of the imaging device. The processor may also be configured to determine a location of the second end of the support pole in the reference frame. The location may be determined using the location of the antenna phase center of the GNSS receiver and the tilt angle and the tilt direction of the support pole.
In an embodiment, the image information includes a first image captured at a first location with the support pole substantially parallel to a local gravity vector and a second image captured at a second location. The pose of the imaging device may be determined based on features in the first image compared to the features in the second image.
In another embodiment, the tilt angle of the support pole is determined with reference to a local gravity vector.
In another embodiment, the image information includes a plurality of images each capturing features that are common with another one of the plurality of images. At least one of the plurality of images may be captured while the support pole is substantially parallel to a local gravity vector. The processor may be configured to process the image information using correspondences between the plurality of images.
In another embodiment, the processor is configured to process the image information using a feature-identification process. The image information may include at least one image capturing features having known locations in the reference frame.
In another embodiment, the imaging device and the antenna phase center of the GNSS receiver are arranged in a known spatial relationship.
In yet another embodiment, an entrance pupil of the imaging device is substantially coaxial with the antenna phase center of the GNSS receiver and the second end of the support pole.
In accordance with another embodiment of the present invention, a survey instrument includes a surveying device configured to perform survey measurements and an imaging device coupled to the surveying device and having a known spatial relationship with the surveying device. The imaging device may be configured to obtain image information. The survey instrument also includes a processor in electrical communication with the imaging device. The processor may be configured to receive the image information from the imaging device, process the image information to determine a pose of the imaging device, and to determine a tilt angle of the survey instrument and a tilt direction of the survey instrument in a reference frame. The tilt angle and tilt direction of the survey instrument may be determined using the pose of the imaging device.
In an embodiment, the image information includes at least one image and the pose of the imaging device is determined based on features in the image having known locations in the reference frame.
In another embodiment, the processor is disposed within a handheld device that is separate from the surveying device and the imaging device. The image information may be received from the imaging device using a wireless communications link.
In another embodiment, the surveying device comprises a GNSS receiver and the survey measurements performed by the surveying device include location measurements.
In another embodiment, the survey instrument also includes a support pole. A first end of the support pole may be coupled to the surveying device and a second end of the support pole may be configured to be placed at a point of interest. In some embodiments, the imaging device may be coupled to the support pole and have a known spatial relationship with the second end of the support pole. The processor may be further configured to process the image information to determine a location of the imaging device in the reference frame, and configured to determine a location of the second end of the support pole in the reference frame using the location of the imaging device, the tilt angle and the tilt direction of the survey instrument, and the known spatial relationship between the imaging device and the second end of the support pole. In other embodiments, the surveying device may be coupled to a first end of the support pole and comprise a GNSS receiver configured to determine location information. The GNSS receiver may have a known spatial relationship with the second end of the support pole. The processor may be further configured to determine a location of the second end of the support pole using the location information from the GNSS receiver, the tilt angle and the tilt direction of the survey instrument, and the known spatial relationship between the GNSS receiver and the second end of the support pole.
In another embodiment, the imaging device and the surveying device are arranged in a known spatial relationship.
In another embodiment, the survey measurements performed by the surveying device include location measurements corresponding to a measurement center of the surveying device. An entrance pupil of the imaging device may be substantially coaxial with the measurement center of the surveying device and first and second ends of the support pole.
In another embodiment, the surveying device comprises an optical survey instrument configured to perform optical survey measurements and a distance measuring device configured to determine a distance to a point of interest. The survey instrument may further comprise a tripod support coupled to the surveying device and the imaging device. The surveying device may comprise a GNSS receiver configured to determine location information, and the processor may be further configured to determine a location of the point of interest using the location information, the optical survey measurements, the distance, and the tilt angle and the tilt direction of the survey instrument.
In yet another embodiment, the surveying device and the imaging device are integrated within a handheld device. The handheld device may comprise a laser pointer for aligning the handheld device with a point of interest and a distance measurement device for determining a distance to the point of interest. The survey measurements performed by the surveying device may include location information. The processor may be further configured to determine a location of the point of interest using the location information, the tilt angle and the tilt direction of the survey instrument, and the distance to the point of interest. In some embodiments, the distance measurement device determines the distance to the point of interest using sonic measurements. In other embodiments, the distance measurement device is an electronic distance measurement (EDM) device. In yet other embodiments, the distance measurement device uses the laser to determine the distance to the point of interest.
In accordance with another embodiment of the present invention, a method of determining a location of a point of interest using a survey instrument includes receiving a location of an antenna phase center from a GNSS receiver, receiving image information from an imaging device, determining a pose of the imaging device using the image information, and determining a tilt angle of the survey instrument and a tilt direction of the survey instrument in a reference frame. The tilt angle and the tilt direction of the survey instrument may be determined using the pose of the imaging device. The method also includes determining the location of the point of interest using the location of the antenna phase center and the tilt angle and the tilt direction of the survey instrument.
In an embodiment, the method also includes providing a user indication to obtain additional image information. The user indication may be activated based on at least one of (1) a distance from a reference measurement station, (2) a number of correspondences between images compared to a threshold, or (3) an error in the pose of the imaging device compared to a threshold.
In accordance with yet another embodiment of the present invention, a method of determining a tilt angle and a tilt direction of a survey instrument includes aligning the survey instrument with a local gravity vector at a first station and acquiring a first image at the first station using an imaging device. The first image may capture a plurality of features. The method also includes positioning the survey instrument at a second station different from the first station and acquiring a second image at the second station using the imaging device. The second image may capture at least a portion of the plurality of features captured in the first image. The method also includes processing the first image and the second image to determine a pose of the imaging device at the second station and determining the tilt angle and the tilt direction of the survey instrument at the second station. The tilt angle and the tilt direction may be determined using the pose of the imaging device.
In an embodiment, the method also includes determining a location of the survey instrument in the reference frame at the second station and determining a location of a point of interest using the tilt angle and the tilt direction of the survey instrument at the second station and the location of the survey instrument at the second station.
In another embodiment, the method also includes providing a user indication that the tilt angle at the second station is greater than a threshold tilt angle.
Numerous benefits are achieved using embodiments of the present invention over conventional techniques. For example, some embodiments provide methods for determining a tilt angle and tilt direction of a survey instrument using a pose of an imaging device. Since pose can be determined, survey measurements can be performed using an un-leveled survey instrument. Performing measurements using an un-leveled survey instrument can increase measurement efficiency by reducing the time and effort that is normally required to level the survey instrument. The increased efficiency can reduce measurement time and lower measurement costs. Depending on the embodiment, one or more of these benefits may exist. These and other benefits are described throughout the specification and more particularly below.
Embodiments of the present invention provide improved methods and instruments for determining tilt angle and tilt direction using image processing techniques. As used herein, tilt angle refers to an angle between a real-world vertical axis (e.g., a local gravity vector) and a vertical axis of an imaging device. Tilt direction refers to orientation (or rotation about the vertical axis) relative to a reference such as true north, magnetic north, or any other reference. The image processing techniques may involve using one or more matchmove techniques to determine a pose of an imaging device. The pose may include a location and rotation of the imaging device relative to a reference. In some embodiments, the reference is provided by features in an image where the features are at known locations in a reference frame. In other embodiments, the reference is provided by correspondences between features in images where at least one of the images is acquired with the imaging device in a known (e.g., leveled) position. The pose of the imaging device can be used to determine a tilt angle and tilt direction of the survey instrument to which the imaging device is coupled. The tilt angle and tilt direction can be used with a measured location of a survey instrument to determine a location of a point of interest. As an example, in some embodiments the survey instrument may include a support pole (e.g., a surveyor's pole), and the tilt angle and tilt direction may be used to determine the location of a point at a tip of the support pole.
As used herein, pose refers to exterior (or extrinsic) orientation of an imaging device. This is the orientation of the imaging device with respect to surrounding objects in a field of view. The orientation is generally defined by a rotation matrix R and a translation vector V that relate a coordinate system of the imaging device with a real-world coordinate system. The process of determining pose may be referred to as extrinsic calibration. This is in contrast to intrinsic calibration, which may be used to determine internal parameters such as focal length, image aspect ratio, effective number of pixels, principal point, and the like.
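As a minimal sketch of this convention, a point expressed in imaging-device coordinates can be mapped into world coordinates as p_world = R·p_cam + V (an illustration only; sign and axis conventions vary between matchmove packages):

```python
import numpy as np

def make_rotation_z(angle_rad: float) -> np.ndarray:
    """Rotation about the vertical (Z) axis, one possible factor of a pose matrix R."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def camera_to_world(point_cam: np.ndarray, R: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Apply the pose (R, V) to map camera-frame coordinates to world coordinates."""
    return R @ point_cam + V

R = make_rotation_z(np.radians(90.0))
V = np.array([10.0, 0.0, 1.8])   # camera positioned 1.8 m above the world origin plane
p = camera_to_world(np.array([1.0, 0.0, 0.0]), R, V)
```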
Determining Pose Using Features at Known Locations
The pose of the camera 104 in this example can be determined using image information obtained using the camera 104 and the locations of the reference points 114 in the reference frame. The image information is obtained by capturing one or more images of the cube 112 that include the reference points 114. The locations of the reference points 114 may be determined using the survey instrument 102 and conventional survey measurements (e.g., by measuring the locations of the reference points 114). Alternatively, the locations of the reference points 114 may be provided (e.g., determined previously). The locations include coordinates of one or more of the reference points 114 in the reference frame (or in any other reference frame with a known relationship so that coordinates can be transformed into a common reference frame).
Rather than providing the locations of all of the reference points 114, information about alignment of the cube 112 in the reference frame may be known along with a spatial relationship between the reference points 114. For example, the cube 112 may be aligned ‘right and regular’ with a local gravity vector and distances between the reference points 114 (e.g., in x,y,z coordinates) provided. In this case, the location of only one of the reference points 114 is needed to determine the locations of the other reference points 114.
Using the image information and the locations of the reference points 114, the pose of the camera 104 can be determined using known matchmove techniques. In this embodiment, the matchmove techniques utilize feature-identification processes to identify the reference points 114 in the one or more images obtained using the camera 104. The cube 112 may include a survey reflector with a target to improve the feature-identification processes. The pose of the camera 104 relative to the reference points 114 is determined based on the positions of the reference points 114 in the one or more images and the known locations of the reference points 114 in the reference frame. Most matchmove techniques can determine the pose of the camera 104 from a single image that includes at least four of the reference points 114.
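The geometric core of this step can be illustrated with a simplified sketch: given the reference points expressed both in the camera frame and in the world reference frame, the rotation relating the two frames can be recovered by a least-squares fit (the Kabsch algorithm). This is an illustration only; it assumes camera-frame coordinates of the points are already available, whereas actual matchmove tools solve the full perspective problem directly from pixel coordinates:

```python
import numpy as np

def fit_rotation(points_cam: np.ndarray, points_world: np.ndarray) -> np.ndarray:
    """Least-squares rotation aligning camera-frame points to world-frame points
    (Kabsch algorithm). Both arrays are N x 3 with matching rows."""
    pc = points_cam - points_cam.mean(axis=0)
    pw = points_world - points_world.mean(axis=0)
    H = pc.T @ pw
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T

# Four reference points (e.g., corners of a cube) in the world frame.
world = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
# The same points as seen by a camera rotated 30 degrees about the Z axis.
angle = np.radians(30.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
cam = world @ Rz.T            # each row: cam_i = Rz @ world_i
R = fit_rotation(cam, world)  # recovers the inverse rotation Rz.T
```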
Examples of matchmove software applications that may be used with some embodiments include Voodoo Camera Tracker by Digilab, 3D-Equalizer by Science-D-Visions, Boujou by 2d3, MatchMover by Autodesk, PFTrack by The Pixel Farm, SynthEyes by Andersson Technologies, and VooCAT by Scenespector Systems.
Following is a list of references that provide additional details on various matchmove techniques. Each of these references is incorporated herein by reference in its entirety.
Some of the matchmove techniques referenced above can determine pose of the camera 104 in real-time, while others post-process data stored in memory. While accuracy varies depending on the specific technique, many techniques report sub-pixel accuracy.
Determining Pose Using Correspondences Between Images
In this example it is not necessary that the features 318a, 318b, 318c, 318d be at known locations in a reference frame to determine the pose of the camera 304. Instead, in this example the survey instrument 302 is in a leveled position. The survey instrument 302 may be leveled according to known methods that may involve use of a bubble level indicating when the survey instrument is level to within an accuracy of the bubble level. In a leveled position, an image is captured using the camera 304. The image includes some of the features 318a, 318b, 318c, 318d surrounding the survey instrument 302. These features will serve as a reference against which at least a portion of the same features in other images can be compared and a change in pose determined.
Leveled in this example refers to alignment with a reference (e.g., vertical alignment with a local gravity vector). While any reference frame may be used with embodiments of the invention, the reference frame in this example is a real-world coordinate system like that of
Using the image obtained with the survey instrument 302 at the initial position 316 and the image obtained with the survey instrument 302 at the point of interest 310, the pose of the camera at the point of interest 310 can be determined using known matchmove techniques. In this embodiment, the matchmove techniques utilize correspondences between the images. The pose is determined with reference to the pose at the initial position 316 in a leveled position.
Most of the same matchmove software applications and references listed above can also be used to determine pose using correspondences between images. U.S. patent application Ser. No. 13/167,733, filed Jun. 24, 2011, provides additional details on matchmove techniques using correspondences between images. The pose of the camera is determined based on the positions of the features 318a, 318b, 318c, 318d in the image acquired with the survey instrument 302 in a known position (leveled) and the positions of the features 318a, 318b, 318c, 318d in the image acquired with the survey instrument 302 in an unknown position (un-leveled).
Positions of the features 318a, 318b, 318c, 318d will be different in the two images. This is illustrated from a camera point of view in
Most matchmove techniques are configured to identify and track features in images. These features often include arbitrary points located at arbitrary locations. Often many hundreds of such features can be identified and tracked between images. Many matchmove techniques can use planar structures (e.g., the ground or building facades), surfaces, edges, corners of objects, and the like to determine pose based on correspondences between just one or two such features. Some matchmove techniques automatically detect features in images, analyze correspondences, eliminate outliers, and incrementally estimate and refine camera parameters. Matchmove software applications often utilize a menu format and may provide location and orientation information using menus such as ‘View→Camera Parameters’ or the like.
It should be appreciated that while only two images are used in the examples shown in
The number of correspondences required between any two images depends on the particular matchmove technique. In an embodiment, the number of correspondences required may be used to implement a warning signal (e.g., visual and/or audible cue) should the number of correspondences approach or drop below a required threshold. For example, if more than half of the initial reference points are lost after moving from an initial point of interest to a new point of interest, an indication may be provided to a user via audible or visual signaling on a data collector or controller. Upon receiving such a warning, an orientation of the survey instrument may be adjusted to capture an image with more features that are common with other images. Alternatively, the survey instrument may be re-leveled and a new reference established. In other embodiments, other indications of loss of overall accuracy may be used to trigger a warning signal, such as reduced accuracy of camera pose as determined by internal matchmove camera metrics or a distance from a reference measurement station.
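The threshold check described above can be sketched as follows (a minimal illustration; the one-half default mirrors the example of losing more than half of the initial reference points, and the function name is hypothetical):

```python
def correspondence_warning(num_matched: int, num_reference: int,
                           min_fraction: float = 0.5) -> bool:
    """Return True when the share of initial reference features still matched
    in the current image drops below min_fraction (e.g., more than half lost),
    signaling that a user warning should be raised."""
    if num_reference <= 0:
        return True
    return (num_matched / num_reference) < min_fraction

# 180 of 400 initial reference features matched: fewer than half remain.
warn = correspondence_warning(180, 400)
```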
Determining Pose Using Features at Known Locations & Correspondences Between Images
In some embodiments, combinations of the above methods may be used to determine pose of a camera. For example, the pose may be determined from an image having features at known locations as described above. Using this pose as a reference, one or more additional images may be captured and changes in pose determined using correspondences between the images as also described above. In these embodiments, it is not necessary to level the survey instrument to obtain a reference image. Instead, the image with features at known locations may be used as the reference image.
Using Pose to Determine Tilt Angle and Tilt Direction
Following is an example of some of the steps and calculations that may be used to determine tilt angle and tilt direction of a survey instrument using a pose of a camera in accordance with an embodiment of the invention. It should be appreciated that the steps and calculations described herein are exemplary and that one of ordinary skill in the art would recognize many variations, modifications, and alternatives in light of the present disclosure.
Matchmove software applications typically output pose data in a CAHV format (a convention commonly used in machine vision), whereas the reference (or coordinate) frame of interest for survey applications is spherical. Spherical coordinates provide tilt angle (generally referred to as theta) relative to a vertical axis such as a gravitational vector and tilt direction (generally referred to as phi) relative to some reference such as true or magnetic north. The tilt angle and tilt direction determine a vector r emanating from a zero reference point and extending to an imaginary point on a sphere. This is shown in
In the CAHV format, C provides a distance from a feature in a field of view to a perspective center (or entrance pupil) of an imaging device. The perspective center is generally on an axis passing through a lens of the imaging device. As shown in
A coordinate transform may be used to convert the data from the CAHV format to an intermediate X′,Y′,Z′ camera reference frame such as that illustrated in
A second coordinate transform using known techniques may be used to convert the data from the X′,Y′,Z′ camera reference frame to a real-world (e.g., GNSS or GPS) coordinate frame. In the real-world coordinate frame, Z extends in a vertical direction parallel to a gravity vector, and X and Y extend along a horizontal plane. This is shown in
The data may be converted from the real-world coordinate frame to spherical coordinates using known conversions. The vector r may be determined using the equation:
r = [X^2 + Y^2 + Z^2]^(1/2)  Equation (1)
As illustrated in
Using real-world coordinates X,Y,Z, the tilt angle and tilt direction of the survey instrument can be determined using the equations:
Tilt Angle (theta) = arccos(Z/r)  Equation (2)
Tilt Direction (phi) = arctan(Y/X)  Equation (3)
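Equations (1)-(3) can be implemented directly (a minimal sketch; here X, Y, Z are the real-world-frame components of the vector r described above, and atan2 is used in place of arctan(Y/X) to keep the tilt direction quadrant-safe):

```python
import math

def tilt_from_vector(X: float, Y: float, Z: float):
    """Convert a real-world-frame vector to (r, tilt angle theta, tilt
    direction phi) per Equations (1)-(3)."""
    r = math.sqrt(X**2 + Y**2 + Z**2)   # Equation (1)
    theta = math.acos(Z / r)            # Equation (2)
    phi = math.atan2(Y, X)              # Equation (3), quadrant-safe variant
    return r, theta, phi

# A 2 m pole tilted slightly toward the +X direction.
r, theta, phi = tilt_from_vector(0.0698, 0.0, 1.9988)
print(round(math.degrees(theta), 2))  # approximately 2 degrees of tilt
```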
Determining Location of Point of Interest
A location of a point of interest can be determined using a location of a survey instrument and a tilt angle and tilt direction of the survey instrument. Referring to the example of
The following equations can be used to determine the X and Y components of the ground error and the Z component of the height error:
X1 = r*sin(theta)*cos(phi)  Equation (4)
Y1 = r*sin(theta)*sin(phi)  Equation (5)
Z1 = r*cos(theta)  Equation (6)
Where r is the distance from the antenna phase center to a tip of the support pole using the survey instrument shown in the example of
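These error components, and the resulting tip location, can be implemented as a short sketch (an illustration only; the convention assumed here is that (X1, Y1, Z1) is the vector from the pole tip up to the antenna phase center, so the tip location follows by subtraction):

```python
import math

def tip_location(apc: tuple, r: float, theta: float, phi: float) -> tuple:
    """Location of the pole tip given the antenna phase center (APC) location,
    pole length r, tilt angle theta, and tilt direction phi, per
    Equations (4)-(6). Assumes (X1, Y1, Z1) points from tip to APC."""
    X1 = r * math.sin(theta) * math.cos(phi)   # Equation (4): ground error, X
    Y1 = r * math.sin(theta) * math.sin(phi)   # Equation (5): ground error, Y
    Z1 = r * math.cos(theta)                   # Equation (6): height component
    return (apc[0] - X1, apc[1] - Y1, apc[2] - Z1)

# 2 m pole tilted 2 degrees toward +X (phi = 0), APC at a measured GNSS fix.
tip = tip_location((100.0, 200.0, 50.0), 2.0, math.radians(2.0), 0.0)
```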
While methods have been described for performing survey measurements using an un-leveled survey instrument, it should be appreciated that measurement error will increase with larger tilt angles. Thus, in some embodiments the survey instrument may be configured to provide a warning (e.g., an audio or visual cue) if the tilt exceeds a specified angle. In such a situation, a surveyor may repeat a measurement with the survey instrument in a more vertical position.
Example Calculation of Location of Point of Interest
Following is an example calculation of the location of the point of interest 110 shown in
Survey Instrument Configuration
The GNSS receiver 1006 and the support pole 1024 are arranged such that the antenna phase center 1028 of the GNSS receiver 1006 and a tip 1026 of the support pole 1024 have a known spatial relationship (e.g., both aligned with a vertical axis and having a known length d between them). The camera 1004 may also be arranged such that the same vertical axis passes through the entrance pupil of the camera 1004. In some embodiments, a distance between the antenna phase center 1028 of the GNSS receiver 1006 and the entrance pupil of the camera 1004 may also be known.
The controller 1020 may include memory for storing the information received from the GNSS receiver 1006 and the camera 1004. Computer code may also be stored in the memory with instructions that are executable by the processor to determine the pose of the camera 1004, the tilt angle and tilt direction of the survey instrument 1002, and the location of the tip 1026 of the support pole 1024.
It should be appreciated that the processor and memory are not limited to any particular type. The processor may include one or more general purpose microprocessors or application specific integrated circuits (ASICs), and at least part of the instructions may be embodied in software, firmware, and/or hardware. The memory may include an operating system and one or more software applications for performing the tasks described above in accordance with embodiments of the invention. The memory may include any type of non-transitory media including magnetic storage media, optical storage media, flash memory, and the like.
A pose of the handheld device 1140 (or the imaging device) may be determined using known matchmove techniques as described above. For example, the pose may be determined using features in an image where the features are at known locations in a reference frame. The pose may also be determined using correspondences between images where at least one of the images is acquired with the imaging device in a known (e.g., leveled) position. A tilt angle (theta) and a tilt direction (phi) may then be determined using known transformations and Equations (1)-(3) as described above. Equations (4)-(6) may be used to determine X and Y components of a ground error and a Z component of a height error.
A pose of the camera 1104 may be determined using known matchmove techniques as described above. For example, the pose may be determined using features in an image where the features are at known locations in a reference frame. The pose may also be determined using correspondences between images where at least one of the images is acquired with the imaging device in a known (e.g., leveled) position. A tilt angle (theta) and a tilt direction (phi) may then be determined using known transformations and Equations (1)-(3) as described above. Equations (4)-(6) may be used to determine X and Y components of a ground error and a Z component of a height error in a manner similar to that illustrated in
Determining Tilt Angle and Tilt Direction Using Image Processing
The method also includes determining a pose of the imaging device using the image information (1206). The pose is determined using known matchmove techniques. The method also includes determining a tilt angle and a tilt direction of a survey instrument, where the tilt angle and tilt direction are determined using the pose of the imaging device (1208). The tilt angle and tilt direction can be determined from the pose using coordinate transforms. The method also includes determining a location of the point of interest using the location of the antenna phase center and the tilt angle and the tilt direction of the survey instrument (1210).
It should be appreciated that the specific steps illustrated in
It should be appreciated that some embodiments of the present invention may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a computer-readable medium such as a storage medium. Processors may be adapted to perform the necessary tasks. The term “computer-readable medium” includes, but is not limited to, portable or fixed storage devices, optical storage devices, wireless channels, SIM cards, other smart cards, and various other non-transitory mediums capable of storing, containing, or carrying instructions or data.
While the present invention has been described in terms of specific embodiments, it should be apparent to those skilled in the art that the scope of the present invention is not limited to the embodiments described herein. For example, features of one or more embodiments of the invention may be combined with one or more features of other embodiments without departing from the scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Thus, the scope of the present invention should be determined not with reference to the above description, but should be determined with reference to the appended claims along with their full scope of equivalents.
The present application is a continuation-in-part of U.S. patent application Ser. No. 13/167,733, filed Jun. 24, 2011, the disclosure of which is incorporated herein by reference in its entirety for all purposes.
U.S. Patent Documents

| Number | Name | Date | Kind |
|---|---|---|---|
| 5642285 | Woo | Jun 1997 | A |
| 6147598 | Murphy | Nov 2000 | A |
| 6282362 | Murphy | Aug 2001 | B1 |
| 7248285 | Needham | Jul 2007 | B2 |
| 7339611 | Marold | Mar 2008 | B2 |
| 7526384 | MacIntosh et al. | Apr 2009 | B2 |
| 7541974 | Scherzinger | Jun 2009 | B2 |
| 7619561 | Scherzinger | Nov 2009 | B2 |
| 7650013 | Dietsch et al. | Jan 2010 | B2 |
| 7697127 | Vogel | Apr 2010 | B2 |
| 7719467 | Norda | May 2010 | B2 |
| 8351686 | Graesser | Jan 2013 | B2 |
| 20030083804 | Pilley et al. | May 2003 | A1 |
| 20040168148 | Goncalves et al. | Aug 2004 | A1 |
| 20050125142 | Yamane | Jun 2005 | A1 |
| 20050209815 | Russon et al. | Sep 2005 | A1 |
| 20060125691 | Menache et al. | Jun 2006 | A1 |
| 20080285805 | Luinge et al. | Nov 2008 | A1 |
| 20090024325 | Scherzinger | Jan 2009 | A1 |
| 20090093959 | Scherzinger et al. | Apr 2009 | A1 |
| 20090262974 | Lithopoulos | Oct 2009 | A1 |
| 20100063733 | Yunck | Mar 2010 | A1 |
| 20100141759 | Scherzinger | Jun 2010 | A1 |
| 20100172546 | Sharp | Jul 2010 | A1 |
| 20100174507 | Vogel | Jul 2010 | A1 |
| 20110007939 | Teng et al. | Jan 2011 | A1 |
| 20120163656 | Wang et al. | Jun 2012 | A1 |
Foreign Patent Documents

| Number | Date | Country |
|---|---|---|
| 1931945 | Jun 2008 | EP |
| 1936323 | Jun 2008 | EP |
| 1944572 | Jul 2008 | EP |
| 2240740 | Oct 2010 | EP |
| 2009100773 | Aug 2009 | WO |
| 2009100774 | Aug 2009 | WO |
| 2009103342 | Aug 2009 | WO |
| 2009106141 | Sep 2009 | WO |
| 2010080950 | Jul 2010 | WO |
Other Publications

| Entry |
|---|
| Chapman et al., “Monocular SLAM—Alternative Navigation for GPS-Denied Areas,” GPS World, Sep. 2008, pp. 42-49. |
| Kohler et al., “TrackSense: Infrastructure Free Precise Indoor Positioning Using Projected Patterns,” in A. LaMarca et al. (Eds.), Pervasive 2007, LNCS 4480, pp. 334-350, Springer-Verlag Berlin Heidelberg. |
| Lemaire et al., “Vision-Based SLAM: Stereo and Monocular Approaches,” International Journal of Computer Vision 74(3), pp. 343-364, Springer Science+Business Media, LLC (2007). |
| Notice of Allowance of Feb. 6, 2014 for U.S. Appl. No. 13/167,733, 6 pages. |
| Non-Final Office Action of Oct. 8, 2013 for U.S. Appl. No. 13/167,733, 15 pages. |
Related Publications

| Number | Date | Country |
|---|---|---|
| 20120330601 A1 | Dec 2012 | US |

Related U.S. Application Data

| | Number | Date | Country |
|---|---|---|---|
| Parent | 13167733 | Jun 2011 | US |
| Child | 13397445 | | US |