The present disclosure relates generally to systems and methods for calibrating an optical system and, more particularly, to systems and methods for calibrating an optical system on a movable object, such as an unmanned aerial vehicle.
In the fields of computer vision and machine vision, information about objects in three-dimensional space may be collected using imaging equipment, including one or more digital cameras. The collected information, which may be in the form of digital images or digital videos (“image data”), may then be analyzed to identify objects in the images or videos and determine their locations in two-dimensional or three-dimensional coordinate systems. The image data and determined locations of the identified objects may then be used by humans or computerized control systems for controlling devices or machinery to accomplish various scientific, industrial, artistic, or leisure activities. The image data and determined locations of the identified objects may also or alternatively be used in conjunction with image processing or modeling techniques to generate new images or models of the scene captured in the image data and/or to track objects in the images.
In some situations, the imaging equipment can become misaligned with respect to a calibration position, which can adversely affect image analysis and processing, feature tracking, and/or other functions of the imaging system. For example, during operation, imaging systems can sustain physical impacts, undergo thermal expansion or contraction, and/or experience other disturbances resulting in changes to the physical posture of one or more imaging devices associated with the system. Thus, the imaging system must be periodically recalibrated to restore the accuracy of its functions.
While the effects of misalignment can be experienced by any single camera in an imaging system, this problem can also have particular effects on multi-camera systems, such as stereo imaging systems. Stereo imagery is one technique used in the fields of computer vision and machine vision to view or understand the location of an object in three-dimensional space. In stereo imagery, multiple two-dimensional images are captured using one or more imaging devices (such as digital cameras or video cameras), and data from the images are manipulated using mathematical algorithms and models to generate three-dimensional data and images. This method often requires an understanding of the relative physical posture of the multiple imaging devices (e.g., their translational and/or rotational displacements with respect to each other), which may require the system to be periodically calibrated when the posture of one or more imaging devices changes.
Known calibration techniques are labor intensive, complex, and require the digital imaging system to be taken out of service. For example, some calibration techniques require multiple images to be taken of specialized patterns projected on a screen or plate from multiple different angles and locations. This requires the digital imaging system to be taken out of service and brought to a location where these calibration aids can be properly used. Furthermore, the position of the digital imaging system during calibration (e.g., the angles and distances of the imaging devices with respect to the specialized patterns) must be carefully set by the calibrating technician. Thus, if any of the calibration configurations are inaccurate, the calibration may not be effective and must be performed again.
There is a need for improved systems and methods for calibrating optical systems, such as digital imaging systems on movable objects, to effectively and efficiently overcome the above-mentioned problems.
In one embodiment, the present disclosure relates to a method of calibrating an imaging system. The method may include capturing images using at least one imaging device, identifying feature points in the images, identifying calibration points from among the feature points, and determining the posture of the at least one imaging device or a different imaging device based on the positions of the calibration points in the images.
In another embodiment, the present disclosure relates to a system for calibrating a digital imaging system. The system may include a memory having instructions stored therein, and an electronic control unit having a processor configured to execute the instructions. The electronic control unit may be configured to execute the instructions to capture images using at least one imaging device, identify feature points in the images, identify calibration points from among the feature points, and determine a posture of the at least one imaging device or a different imaging device based on positions of the calibration points in the images.
In yet another embodiment, the present disclosure relates to a non-transitory computer-readable medium storing instructions that, when executed, cause a computer to perform a method of calibrating an imaging system. The method may include capturing images using at least one imaging device, identifying feature points in the images, identifying calibration points from among the feature points, and determining a posture of the at least one imaging device or a different imaging device based on positions of the calibration points in the images.
In yet another embodiment, the present disclosure relates to an unmanned aerial vehicle (UAV). The UAV may include a propulsion device, an imaging device, a memory storing instructions, and an electronic control unit in communication with the propulsion device and the memory. The electronic control unit may include a processor configured to execute the instructions to capture images using at least one imaging device, identify feature points in the images, identify calibration points from among the feature points, and determine a posture of the at least one imaging device or a different imaging device based on positions of the calibration points in the images.
The following detailed descriptions refer to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope is defined by the appended claims.
Movable object 10 may include a housing 11, one or more propulsion assemblies 12, and a payload 14, such as a camera or video system. In some embodiments, as shown in
Propulsion assemblies 12 may be positioned at various locations (for example, top, sides, front, rear, and/or bottom of movable object 10) for propelling and steering movable object 10. Although only four exemplary propulsion assemblies 12 are shown in
Propulsion assemblies 12 may be configured to propel movable object 10 in one or more vertical and horizontal directions and to allow movable object 10 to rotate about one or more axes. That is, propulsion assemblies 12 may be configured to provide lift and/or thrust for creating and maintaining translational and rotational movements of movable object 10. For instance, propulsion assemblies 12 may be configured to enable movable object 10 to achieve and maintain desired altitudes, provide thrust for movement in all directions, and provide for steering of movable object 10. In some embodiments, propulsion assemblies 12 may enable movable object 10 to perform vertical takeoffs and landings (i.e., takeoff and landing without horizontal thrust). In other embodiments, movable object 10 may require constant minimum horizontal thrust to achieve and sustain flight. Propulsion assemblies 12 may be configured to enable movement of movable object 10 along and/or about multiple axes.
Payload 14 may include at least one sensory device 22, such as the exemplary sensory device 22 shown in
Carrier 16 may include one or more devices configured to hold the payload 14 and/or allow the payload 14 to be adjusted (e.g., rotated) with respect to movable object 10. For example, carrier 16 may be a gimbal. Carrier 16 may be configured to allow payload 14 to be rotated about one or more axes, as described below. In some embodiments, carrier 16 may be configured to allow 360° of rotation about each axis to allow for greater control of the perspective of the payload 14. In other embodiments, carrier 16 may limit the range of rotation of payload 14 to less than 360° (e.g., ≤270°, ≤210°, ≤180°, ≤120°, ≤90°, ≤45°, ≤30°, ≤15°, etc.) about one or more of its axes.
Imaging devices 18 and 22 may include devices capable of capturing image data. For example, imaging devices 18 and 22 may include digital photographic cameras (“digital cameras”), digital video cameras, or digital cameras capable of capturing still photographic image data (e.g., still images) and video image data (e.g., video streams, moving visual media, etc.). Imaging devices 18 may be fixed such that their fields of view are non-adjustable, or alternatively may be configured to be adjustable with respect to housing 11 so as to have adjustable fields of view. Imaging device 22 may be adjustable via carrier 16 or may alternatively be fixed directly to housing 11 (or a different component of movable object 10). Imaging devices 18 and 22 may have known focal length values (e.g., fixed or adjustable for zooming capability), distortion parameters, and scale factors, which also may be determined empirically through known methods. Imaging devices 18 may be separated by a fixed distance (which may be known as a “baseline”), which may be a known value or determined empirically.
Movable object 10 may also include a control system for controlling various functions of movable object 10 and its components.
Electronic control unit 26 may be a commercially available or proprietary electronic control unit that includes data storage and processing capabilities. For example, electronic control unit 26 may include memory 28 and processor 30. In some embodiments, electronic control unit 26 may comprise memory and a processor packaged together as a unit or included as separate components.
Memory 28 may be or include non-transitory computer-readable media and can include one or more memory units of non-transitory computer-readable media. Non-transitory computer-readable media of memory 28 may be or include any type of disk including floppy disks, hard disks, optical discs, DVDs, CD-ROMs, microdrive, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory integrated circuits), or any type of media or device suitable for storing instructions and/or data. Memory units may include permanent and/or removable portions of non-transitory computer-readable media (e.g., removable media or external storage, such as an SD card, RAM, etc.).
Information and data may be communicated to and stored in non-transitory computer-readable media of memory 28. Non-transitory computer-readable media associated with memory 28 may also be configured to store logic, code and/or program instructions executable by processor 30 to perform any of the illustrative embodiments described herein. For example, non-transitory computer-readable media associated with memory 28 may be configured to store computer-readable instructions that, when executed by processor 30, cause the processor to perform a method comprising one or more steps. The method performed by processor 30 based on the instructions stored in the non-transitory computer-readable media of memory 28 may involve processing inputs, such as inputs of data or information stored in the non-transitory computer-readable media of memory 28, inputs received from another device, or inputs received from any component of or connected to control system 24. In some embodiments, the non-transitory computer-readable media can be used to store the processing results produced by processor 30.
Processor 30 may include one or more processors and may embody a programmable processor (e.g., a central processing unit (CPU)). Processor 30 may be operatively coupled to memory 28 or another memory device configured to store programs or instructions executable by processor 30 for performing one or more method steps. It is noted that method steps described herein may be embodied by one or more instructions and data stored in memory 28 that cause the method steps to be carried out when processed by processor 30.
In some embodiments, processor 30 may include and/or alternatively may be operatively coupled to one or more control modules, such as a calibration module 36 in the illustrative embodiment of
Positioning device 32 may be a device for determining a position of an object. For example, positioning device 32 may be a component configured to operate in a positioning system, such as a global positioning system (GPS), global navigation satellite system (GNSS), Galileo, Beidou, GLONASS, geo-augmented navigation (GAGAN), satellite-based augmentation system (SBAS), real time kinematics (RTK), or another type of system. Positioning device 32 may be a transmitter, receiver, or transceiver. Positioning device 32 may be used to determine a location in two-dimensional or three-dimensional space with respect to a known coordinate system (which may be translated into another coordinate system).
Sensors 34 may include a device for determining changes in posture and/or location of movable object 10. For example, sensors 34 may include a gyroscope, a motion sensor, an inertial sensor (e.g., an IMU sensor), an optical or vision-based sensory system, etc. Sensors 34 may include one or more sensors of a certain type and/or may include multiple sensors of different types. Sensors 34 may enable the detection of movement in one or more dimensions, including rotational and translational movements. For example, sensors 34 may be configured to detect movement around roll, pitch, and/or yaw axes and/or along one or more axes of translation.
The components of electronic control unit 26 can be arranged in any suitable configuration. For example, one or more of the components of the electronic control unit 26 can be located on movable object 10, carrier 16, payload 14, imaging devices 18 and/or 22, or an additional external device in communication with one or more of the above. In some embodiments, one or more processors or memory devices can be situated at different locations, such as on the movable object 10, carrier 16, payload 14, imaging devices 18 and/or 22, or an additional external device in communication with one or more of the above, or suitable combinations thereof, such that any suitable aspect of the processing and/or memory functions performed by the system can occur at one or more of the aforementioned locations.
Imaging devices 18 and/or 22 may be used to capture images in real space, and the images may be displayed, for example, on a display device 38. Display device 38 may be an electronic display device capable of displaying digital images, such as digital images and videos captured by imaging devices 18 and 22. Display device 38 may be, for example, a light emitting diode (LED) screen, liquid crystal display (LCD) screen, a cathode ray tube (CRT), or another type of monitor. In some embodiments, display device 38 may be mounted to a user input device (“input device”) 40 used to operate or control movable object 10. In other embodiments, display device 38 may be a separate device in communication with imaging devices 18 and/or 22 via a wired or wireless connection. In some embodiments, display device 38 may be associated with or connected to a mobile electronic device (e.g., a cellular phone, smart phone, personal digital assistant, etc.), a tablet, a personal computer (PC), or other type of computing device (i.e., a compatible device with sufficient computational capability).
Control system 24 may be configured to detect, identify, and/or track features in images captured by imaging devices 18 and 22. Features in an image may refer to physical features of subject matter reflected in the image. For example, features may include lines, curves, corners, edges, interest points, ridges, line intersections, contrasts between colors, shades, object boundaries, blobs, high/low texture, and/or other characteristics of an image. Features may also include objects, such as any physical object identifiable in an image. Features in an image may be represented by one or more pixels arranged to resemble visible characteristics when viewed. Features may be detected by analyzing pixels using feature detection methods. For example, feature detection may be accomplished using methods or operators such as Gaussian techniques (e.g., Laplacian of Gaussian, Difference of Gaussians, etc.), features from accelerated segment test (FAST), determinant of Hessian, Sobel, Shi-Tomasi, and others. Other known methods or operators not listed here may also be used. Such methods and operators may be familiar in the fields of computer vision and machine learning. Detected features may also be identified as particular features or extracted using feature identification techniques. Feature identification or extraction may be accomplished using the Hough transform, template matching, blob extraction, thresholding, and/or other known techniques. Such techniques may be familiar in the fields of computer vision and machine learning. Feature tracking may be accomplished using such techniques as the Kanade-Lucas-Tomasi (KLT) feature tracker and/or other known tracking techniques.
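By way of non-limiting illustration only, feature detection of the kind described above might be implemented with an off-the-shelf computer-vision library such as OpenCV. The following Python sketch uses the Shi-Tomasi operator; the function name, parameter values, and data layout are illustrative assumptions rather than a required implementation.

```python
# Illustrative sketch: detecting candidate feature points with the
# Shi-Tomasi ("good features to track") operator in OpenCV.
import cv2
import numpy as np

def detect_feature_points(image_bgr, max_points=500):
    """Return an (N, 2) array of (u, v) feature-point coordinates."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                      qualityLevel=0.01, minDistance=8)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    return corners.reshape(-1, 2)
```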
For example,
The location of coordinate points in the image coordinate system (such as coordinate points (u1, v1) and (u2, v2) in the example of
An exemplary model for converting two-dimensional coordinates to three-dimensional coordinates is shown below:
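One standard form of such a model, which relates two-dimensional image coordinates to three-dimensional world coordinates (with s an overall projective scale factor), is:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \qquad (1) $$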
where u and v are coordinates in the two-dimensional image coordinate system; x, y, and z are coordinates in the three-dimensional world coordinate system; K is a calibration matrix; R is a rotation matrix; and T is a translation matrix.
An exemplary calibration matrix K is shown below:
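In one common form, consistent with the parameter definitions that follow:

$$ K = \begin{bmatrix} \alpha_x & \gamma & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (2) $$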
where αx is equal to fmx (where f is the focal length of an imaging device and mx is a scale factor); αy is equal to fmy (where f is the focal length of an imaging device and my is a scale factor); γ is a distortion parameter; and u0 and v0 are coordinates for the optical center point in the image coordinate system. The parameters in calibration matrix K may be known parameters (e.g., known to be associated with an imaging device) or may be determined empirically. The rotation matrix R and translation matrix T may be determined empirically using a calibration process.
Consistent with embodiments of the present disclosure, calibration of an imaging system may include, for example, determining the relative positions of two cameras in a binocular system (such as imaging devices 18), or between any two cameras. Calibration may also include determining the posture (such as tilt) of a camera with respect to the ground coordinate system or world coordinate system.
Step 502 may include capturing two or more images of substantially the same view by two separate imaging devices separated by a distance or by a single imaging device from two different points in space. For example, referring to
Alternatively, two or more images may be captured sequentially by a single imaging device (such as one of imaging devices 18 or imaging device 22) from different locations as movable object 10 moves in space. For example, a first image may be captured using an imaging device with movable object 10 at a first location, and a second image may be captured using the same imaging device with movable object 10 at a different location.
As an example,
Once two or more images are captured, in Step 504, feature points are identified in the captured images. As explained above, feature points may be the points in images at which features are located. Features may include lines, curves, corners, edges, interest points, ridges, line intersections, blobs, contrasts between colors, shades, object boundaries, high/low texture, and/or other characteristics of an image. Features may also include or correspond to objects, such as any physical object identifiable in an image. Features in an image may be represented by one or more pixels arranged to resemble visible characteristics when viewed. Features may be detected by analyzing pixels using feature detection methods. For example, feature detection may be accomplished using methods or operators such as Gaussian techniques (e.g., Laplacian of Gaussian, Difference of Gaussians, etc.), features from accelerated segment test (FAST), determinant of Hessian, Sobel, Shi-Tomasi, and/or others. Other known methods or operators not listed here may also be used. Such methods and operators may be familiar in the fields of computer vision and machine learning. Detected features may also be identified as particular features or extracted using feature identification techniques. Feature identification or extraction may be accomplished using the Hough transform, template matching, blob extraction, thresholding, and/or other known techniques. Such techniques may be familiar in the fields of computer vision and machine learning. Feature tracking may be accomplished using such techniques as the Kanade-Lucas-Tomasi (KLT) feature tracker and/or other known tracking techniques. Features identified in each image may also be correlated across images using known techniques, for example, scale-invariant feature transform (SIFT), oriented FAST and rotated BRIEF (ORB), or FAST combined with BRIEF descriptors, as in the illustrative sketch below.
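As a non-limiting illustration, correlation of feature points between two captured images might be implemented with ORB descriptors and brute-force matching; the function name, parameter values, and the assumption that descriptors are found in both images are illustrative choices, not a required implementation.

```python
# Illustrative sketch: matching feature points between two grayscale images
# using ORB descriptors and a Hamming-distance brute-force matcher.
import cv2
import numpy as np

def match_features(gray_first, gray_second, max_features=500):
    """Return matched (u, v) coordinate pairs (pts_first, pts_second)."""
    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(gray_first, None)
    kp2, des2 = orb.detectAndCompute(gray_second, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts1, pts2
```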
Referring again to the example of
Step 506 may include identifying calibration points from among the feature points identified in the images. As mentioned above, calibration may be performed to understand the posture of each imaging device. Thus, understanding the rotational and translational (e.g., linear) positions of the imaging devices may be desired. One technique for understanding the rotational and translational positions of an imaging device is to calculate rotational and translational factors based on the two-dimensional locations of features in captured images. Rotational factors may be determined based on identifying feature points with little or no difference in translational location between two images (with respect to the image coordinate system), while translational factors may be determined based on identifying feature points with varying translational locations. In other words, feature points for determining rotational factors may be feature points corresponding to the same feature (i.e., the same physical feature in real space) that appears to be at the same two-dimensional location in the image coordinate system between images. And feature points for determining translational factors may be feature points corresponding to the same feature (i.e., the same physical feature in real space) that appears to be at different two-dimensional locations in the image coordinate system between images.
Two images taken of the same view, either simultaneously by two cameras of a binocular system or by a single camera from two different locations, provide a stereoscopic view. As is commonly known, the same object may appear in different positions in two stereo images. The difference between the locations of the same object or feature in the two images is referred to as “disparity,” a term understood in the fields of image processing, computer vision, and machine vision.
Disparity may be minimal (e.g., 0) for feature points that may be referred to as “far points,” i.e., feature points far enough away from an imaging device (such as points on the skyline) that the features appear not to move between the images. Features with noticeable disparity, i.e., features that appear to move between the images even though they may not have actually moved in real space, may be near enough to the imaging device(s) and are referred to as “near points.” For a given feature, disparity is inversely related to the distance between that feature and the location(s) at which the images are taken, and increases with the distance between those locations (e.g., the distance between two imaging devices or the distance between two points from which images are taken using the same imaging device).
It is possible a non-far point may have a disparity of 0 or near 0 between two images because of the collective rotational and translational displacement between the two images. Feature points identified as potential or candidate far points may be identified based on disparity and confirmed as far points based on subsequent disparity determinations. For instance, where a feature point is identified as a far point based on disparity, subsequent movement of movable object 10 may change the point of view of imaging devices 18 and/or 22 such that the disparity of the identified feature point may be greater than 0 (or beyond a threshold) in a subsequent comparison and disparity determination. In such a case, the candidate feature point may not be an actual far point and may be discarded for purposes of calibration and determining posture. Thus, consistent with embodiments of the present disclosure, more than two images may be captured in Step 502, and disparity calculated for the feature points between the multiple images, to improve accuracy of identification of far points. If the disparity of a candidate far point does not change (or does not change substantially) over time, there is a higher probability that the candidate far point is a true far point that can be used for calibration and posture determination.
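The multi-image screening of candidate far points described above might be sketched, under illustrative assumptions about how disparities are stored and what threshold is appropriate, as follows:

```python
# Illustrative sketch: a feature point is kept as a candidate far point only if
# its disparity stays below a threshold across several image pairs over time.
import numpy as np

def confirm_far_points(disparities_per_pair, disparity_threshold=1.0):
    """disparities_per_pair: array of shape (num_pairs, num_points) holding the
    per-pair disparity (in pixels) of each tracked feature point.

    Returns a boolean mask of points whose disparity never exceeds the
    threshold, i.e., candidate far points that remain consistent over time."""
    d = np.asarray(disparities_per_pair, dtype=np.float64)
    return np.all(np.abs(d) <= disparity_threshold, axis=0)

# Example: three image pairs, four tracked points.
d = np.array([[0.2, 5.1, 0.4, 0.1],
              [0.3, 4.8, 2.5, 0.2],
              [0.1, 5.5, 0.3, 0.0]])
print(confirm_far_points(d))   # -> [ True False False  True]
```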
As discussed above, the multiple images may be obtained by the imaging devices over a period of time as movable object 10 moves. As shown in
Consistent with embodiments of the present disclosure, identification of calibration points (Step 506) may include further analysis of the feature points identified as far points based on comparison of the images. In particular, a calculation may be performed to determine the real space distance of the feature point from the imaging device, and if the distance is greater than a threshold, the feature point is deemed a far point.
To determine the distance from a feature point to an imaging device in the system, the two-dimensional image coordinates of a feature point can be converted to the three-dimensional world coordinate system, which allows the unknown distance to be determined. The position of a feature point in the world coordinate system may be represented by the term Pw, which may be determined using the following expression:
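One form of this determination, consistent with the terms defined below, is a least-squares minimization over the n captured images:

$$ P_w = \arg\min_{P_w} \sum_{i=1}^{n} \left\| \begin{bmatrix} u_i \\ v_i \end{bmatrix} - h\!\left( K \left( R_i P_w + T_i \right) \right) \right\|^2 \qquad (3) $$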
The operation or calculation in expression (3) is performed on feature points across multiple images. The number of images may be represented by the term n.
In expression (3), (ui, vi) represents the two-dimensional coordinates of the feature point in the i-th image, and Ri and Ti represent the rotation matrix and translation matrix for the i-th image. The rotation matrix Ri may be determined based on rotational information collected by a sensor capable of measuring rotational parameters, such as sensor 34 (e.g., an IMU sensor, gyroscope, or other type of sensor). The translation matrix Ti may be determined using a sensor or system capable of determining a change in translational or linear position, such as positioning device 32 (e.g., GPS or other type of system). The parameters in calibration matrix K (in expression (2)) may be known parameters (e.g., known to be associated with an imaging device) or may be determined empirically. The projection function h maps a three-dimensional point to its two-dimensional image location as follows:

$$ h\!\left( \begin{bmatrix} x \\ y \\ z \end{bmatrix} \right) = \begin{bmatrix} x/z \\ y/z \end{bmatrix} \qquad (4) $$

where (x, y, z) are the three-dimensional coordinates of a point in space and h((x, y, z)) is the projected two-dimensional location, in normalized homogeneous form, of that 3-D point on the image, i.e., the two-dimensional coordinates of the point from the perspective of the imaging device.
Pw is the coordinate value of each feature point identified in the first image of the multiple images n, wherein the value includes three dimensions, one of them corresponding to the distance between the imaging device and the position of the feature point (e.g., the distance in real space). By solving for the Pw value that minimizes expression (3), the distance from an imaging device to each feature point can be determined, which can help determine whether a feature point is a suitable calibration point. For example, the coordinate dimension corresponding to the distance from the feature point to the imaging device in Pw can be compared to predetermined threshold values for identifying far points and near points. For instance, the distance value in Pw can be compared to a first threshold value, and if the distance value is greater than or equal to the first threshold value, the feature point corresponding to Pw may be a suitable far point. The distance value in Pw can also be compared to a second threshold (which may be the same or different from the first threshold), and if the distance value is less than the second threshold, the feature point corresponding to Pw may be a suitable near point. The threshold values may be determined empirically or theoretically. That is, the threshold comparisons may help determine in a physical sense whether the candidate near points and candidate far points are actually near enough to, or far enough away from, the imaging devices to constitute valid feature points for calibrating the imaging system.
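As a non-limiting sketch of the distance-based classification described above, triangulation of tracked feature points followed by threshold comparisons might look as follows; the projection-matrix construction, threshold values, and function names are illustrative assumptions rather than a required implementation.

```python
# Illustrative sketch: estimate the real-space distance of feature points by
# triangulation between two known camera poses, then classify far/near points.
import cv2
import numpy as np

def classify_points(K, R1, T1, R2, T2, pts1, pts2,
                    far_thresh=50.0, near_thresh=10.0):
    """pts1, pts2: (N, 2) pixel coordinates of the same features in two images
    whose poses (R1, T1) and (R2, T2) are known, e.g., from IMU and positioning
    data. Returns (depths, is_far, is_near) relative to the first camera."""
    P1 = K @ np.hstack([R1, T1.reshape(3, 1)])   # 3x4 projection matrix, image 1
    P2 = K @ np.hstack([R2, T2.reshape(3, 1)])   # 3x4 projection matrix, image 2
    Xh = cv2.triangulatePoints(P1, P2, pts1.T.astype(np.float64),
                               pts2.T.astype(np.float64))   # 4xN homogeneous
    Xw = (Xh[:3] / Xh[3]).T                      # (N, 3) world coordinates Pw
    # Distance from the first camera center C1 = -R1^T T1.
    depths = np.linalg.norm(Xw + (R1.T @ T1), axis=1)
    return depths, depths >= far_thresh, depths < near_thresh
```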
With suitable far points and near points selected using the methods described above, rotation and translation matrices (i.e., calibrated matrices) may be determined to reflect the current posture of, and relationship between, two cameras (such as in a binocular system or between two cameras mounted on movable object 10). As noted above, the identified far points may be calibration points for determining the rotation matrix, and the identified near points may be calibration points for determining the translation matrix. To determine the rotation and translation matrices, a set of images may be captured (e.g., at least a pair of images taken in accordance with the methods described above) that include the calibration points identified (e.g., the near point(s) and far point(s) identified in accordance with the methods described above). Images captured in Step 502 may be used for this purpose as well. For a set of calibration points (e.g., numbered 1 through n for convenience), the locations of the calibration points in the two-dimensional image coordinate system in each image in a pair of images can be used to determine a rotation matrix R or a translation matrix T.
For example, a rotation matrix R characterizing the relative rotational displacement between the left and right imaging devices 18 may be determined using the following expression:
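One possible formulation uses the far points identified above: for a far point, the effect of translation is negligible, so corresponding image points in the left and right images are related by the rotation alone, giving:

$$ \min_{R} \sum_{i=1}^{n} \left\| \begin{bmatrix} u_{ri} \\ v_{ri} \end{bmatrix} - h\!\left( K_r \, R \, K_l^{-1} \begin{bmatrix} u_{li} \\ v_{li} \\ 1 \end{bmatrix} \right) \right\|^2 \qquad (5) $$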
where uli represents the u coordinate of the ith calibration point in a left image captured by the left imaging device; vli represents the v coordinate of the ith calibration point in the left image; uri represents the u coordinate of the ith calibration point in a right image captured by the right imaging device; and vri represents the v coordinate of the ith calibration point in the right image. Kl and Kr represent the calibration matrices of the left and right imaging devices, respectively (and may be the same where the same imaging device was used to capture both images). By solving for the R value that minimizes the expression above, a matrix can be determined that accounts for the relative rotational posture of the left and right imaging devices.
Likewise, a translation matrix T characterizing the relative translational displacement between the left and right imaging devices 18 may be determined using the following expression:
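One possible formulation uses the near points and the epipolar constraint familiar from stereo geometry, where [T]× denotes the skew-symmetric matrix of T and the scale of T is typically fixed separately (e.g., from a known baseline):

$$ \min_{T} \sum_{i=1}^{n} \left( \begin{bmatrix} u_{ri} & v_{ri} & 1 \end{bmatrix} K_r^{-\top} \, [T]_{\times} \, R \, K_l^{-1} \begin{bmatrix} u_{li} \\ v_{li} \\ 1 \end{bmatrix} \right)^{2} \qquad (6) $$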
In Expression (6), R may be the rotation matrix determined through Expression (5) above, or may be determined based on data collected from sensors capable of identifying rotational displacements, such as sensor 34. By solving for the T value that minimizes the expression above, a matrix can be determined that accounts for the relative translational posture of the left and right imaging devices.
In a multi-camera system, the above method may be applied to determine the relative positions (both rotational and translational) between any two cameras, using images captured by the two cameras simultaneously or when the cameras are not in motion.
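As a non-limiting illustration of recovering the relative rotation and translation between two cameras from matched calibration points, OpenCV's essential-matrix routines might be used as sketched below; the recovered translation direction is determined only up to scale, and the function and variable names are illustrative choices.

```python
# Illustrative sketch: relative pose between two cameras from matched points,
# using the essential matrix with RANSAC outlier rejection.
import cv2
import numpy as np

def relative_pose(K, pts_left, pts_right):
    """pts_left, pts_right: (N, 2) matched pixel coordinates (N >= 5).

    Returns (R, t): rotation matrix and unit-norm translation direction from
    the left camera to the right camera. The translation scale must be fixed
    separately, e.g., from a known baseline between imaging devices 18."""
    pl = pts_left.astype(np.float64)
    pr = pts_right.astype(np.float64)
    E, inliers = cv2.findEssentialMat(pl, pr, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pl, pr, K, mask=inliers)
    return R, t
```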
In some embodiments, an angular displacement, e.g., tilt, of an imaging device can be determined by identifying a line in a captured image and comparing the identified line to a reference line. For example, an imaging device can sometimes become angularly displaced or tilted with respect to a scene to be captured, which may be the result of misalignment of the imaging device on the movable object. To correct for such tilt, the angle of tilt can be determined by comparing a line in a tilted image with a reference line so the image can be processed to account for the tilt. For example, an image can be captured using an imaging device in a manner described above. In some embodiments, an image gathered in a step of process 500 may be used, and in other embodiments, a separate image may be captured. Feature points may then be identified in the image using a known technique in the manner described above, such as Gaussian techniques (e.g., Laplacian of Gaussian, Difference of Gaussians, etc.), features from accelerated segment test (FAST), determinant of Hessian, the Sobel operator, Shi-Tomasi, and/or others. Feature points identified in steps of process 500 may be used, or alternatively feature points may be identified in a separate process. Feature points of interest for this operation may be feature points on or near line-like features visible in the image. That is, for purposes of comparing to a reference line, features of interest may be skylines, the horizon, or other types of line-like features that can be discerned from an image and may be approximately horizontal with respect to the world coordinate system.
For example, as shown in
Multiple feature points 50 may be identified on or near the line-like feature. A straight line 52 may be fit to the identified feature points using a suitable technique. For example, the method of least squares or the random sample consensus (RANSAC) method may be used to fit a line to the identified feature points. The fit line 52 may represent or correspond to the skyline, horizon, or other discernible feature in the image. A reference line 54 may also be identified in the image. In some embodiments, the reference line 54 may be defined with respect to an axis of the image coordinate system (e.g., the line 54 may be parallel to an axis of the image coordinate system). An angular offset θ between the fit line 52 and the reference line 54 may be determined using the following expression:
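One form of this determination, consistent with the terms defined below, is:

$$ \theta = \arctan\!\left( \frac{\Delta v}{\Delta u} \right) \qquad (7) $$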
where Δv is a displacement along the v axis of the image coordinate system between the fit line 52 and the reference line 54, and Δu is a displacement along the u axis of the image coordinate system from the intersection of the fit line 52 and the reference line 54. The angle θ may be indicative of an angular displacement of an imaging device with respect to “horizontal” in the world coordinate system when the line 52 is presumed to be horizontal (or an acceptable approximation of horizontal) in the world coordinate system.
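A minimal sketch of this tilt determination, assuming the feature points on the line-like feature are already available as (u, v) coordinates and using a least-squares fit (RANSAC could be substituted for robustness to outliers), might look as follows:

```python
# Illustrative sketch: fit a straight line to horizon-like feature points and
# measure its angle against a reference line parallel to the u axis.
import numpy as np

def tilt_angle_degrees(points_uv):
    """points_uv: (N, 2) array of (u, v) image coordinates on the line-like feature."""
    u, v = points_uv[:, 0], points_uv[:, 1]
    slope, _intercept = np.polyfit(u, v, deg=1)   # fit v = slope * u + intercept
    return np.degrees(np.arctan(slope))           # theta = arctan(delta_v / delta_u)

# Example: points along a line tilted by roughly 5 degrees.
pts = np.array([[0, 100.0], [100, 108.7], [200, 117.5], [300, 126.3]])
print(round(tilt_angle_degrees(pts), 1))   # -> 5.0
```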
It is contemplated that the exemplary comparisons described in the disclosed embodiments may be performed in equivalent ways, such as, for example, replacing “greater than or equal to” comparisons with “greater than,” or vice versa, depending on the predetermined threshold values being used. Further, it will also be understood that the exemplary threshold values in the disclosed embodiments may be modified, for example, by replacing any of the exemplary zero (0) values with other reference or threshold values.
It will be further apparent to those skilled in the art that various other modifications and variations can be made to the disclosed methods and systems. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed methods and systems. For example, while the disclosed embodiments are described with reference to an exemplary movable object 10, those skilled in the art will appreciate that the disclosure may be applicable to other movable objects. It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.
This application is a continuation of International Application No. PCT/CN2018/073866, filed Jan. 23, 2018, the entire content of which is incorporated herein by reference.