A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The disclosed embodiments relate generally to mobile platform operations and more particularly, but not exclusively, to controlling a mobile platform during operation.
Autonomous or self-guided obstacle detection and avoidance is an important feature of mobile platforms. Many existing techniques for obstacle detection have drawbacks in functionality and cost. Some techniques can detect distance information without detecting directional information. For example, ultrasound is relatively inexpensive and can be used in outdoor imaging applications, since ambient light does not interfere with ultrasound sensing. However, single-element ultrasound sensors, which transmit and detect ultrasound signals using a single ultrasound transducer, can detect distance but not direction. Although arrayed ultrasound technology can be used to retrieve some directional information, the high cost of arrayed ultrasound sensors can be prohibitive for many applications.
In view of the foregoing, there is a need for systems and methods for mobile platform obstacle detection that overcome the lack of directional information in existing techniques.
In accordance with a first aspect disclosed herein, there is set forth a method for controlling a mobile platform, comprising:
In accordance with another aspect disclosed herein, there is set forth a system for controlling a mobile platform, comprising:
In accordance with another aspect disclosed herein, there is set forth a mobile platform, comprising:
In accordance with another aspect disclosed herein, there is set forth a computer readable storage medium, comprising:
It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are generally represented by like reference numerals for illustrative purposes throughout the figures. It also should be noted that the figures are only intended to facilitate the description of the embodiments. The figures do not illustrate every aspect of the described embodiments and do not limit the scope of the present disclosure.
The present disclosure sets forth systems and methods for controlling mobile platforms that enable obtaining directional information about objects in an environment of the mobile platform, overcoming disadvantages of existing techniques.
Turning now to
The exemplary mobile platform 10 is shown in
In
Turning now to
An object 50 is shown in
di = √((xi − x0)² + (yi − y0)² + (zi − z0)²)   Equation (1)
Stated somewhat differently, each distance di is a function of the corresponding coordinates p⃗i of the mobile platform 10 and the coordinates p⃗o of the object 50. In some embodiments, the coordinates p⃗i and the corresponding distances di can be obtained at various positions 30. The unknown coordinates p⃗o of the object 50 are three independent variables (for example, the variables x0, y0, z0 in Cartesian coordinates) that can be estimated based on the known quantities (xi, yi, zi) and di at the positions 30. The estimation can be performed, for example, by solving a system of linear equations in the three variables x0, y0, z0, wherein the number of equations N is the number of positions 30 of the mobile platform 10. In some embodiments, distances di can be measured at three positions 30 of the mobile platform 10 to determine the coordinates p⃗o of the object 50. In some embodiments, distances di can be measured at more than three positions 30 of the mobile platform 10 to determine the coordinates p⃗o of the object 50.
In some embodiments, one or more distances di measured at certain positions 30 of the mobile platform 10 can be excluded when estimating the position of the object 50. For example, distances di measured at certain positions 30 can produce degenerate solutions of the coordinates p⃗o of the object 50 and therefore do not increase accuracy. Distance measurements taken at positions 30 that produce degenerate solutions can be excluded. In some embodiments, to improve accuracy, non-collinear positions 30 of the mobile platform 10 can be used to estimate the position of the object 50.
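By way of non-limiting illustration, the non-collinearity condition above can be checked numerically: positions 30 are collinear exactly when the displacements from the first position have matrix rank at most one. The helper below is a sketch; its name and tolerance are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def is_degenerate(positions, tol=1e-9):
    """Return True when measurement positions are collinear (or too few).

    Collinear positions of the mobile platform yield degenerate
    solutions for the object coordinates, so measurements taken at
    such positions can be excluded from the estimation.
    """
    p = np.asarray(positions, dtype=float)
    if len(p) < 3:
        # Fewer than three positions cannot fix three unknowns.
        return True
    # Displacements relative to the first position; rank <= 1 means
    # all positions lie on a single line.
    d = p[1:] - p[0]
    return np.linalg.matrix_rank(d, tol=tol) <= 1
```

A caller could filter its batch of positions with this check before solving for p⃗o.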
Turning now to
At 302, a position of the object 50 is determined based on the measuring of the distances di at 301. The unknown coordinates p⃗o of the object 50 can be determined based on the distances di measured at each of the positions 30 of the mobile platform 10. In some embodiments, the coordinates p⃗o of the object 50 can be determined based on the positions 30 of the mobile platform 10 and the measured distances di. The coordinates p⃗i of each position 30 of the mobile platform 10 can be obtained in any suitable manner. In some embodiments, the coordinates p⃗i of each position 30 can be obtained using a global positioning system (GPS), whereby the coordinates p⃗i are tracked by an external source (such as a GPS satellite). In such cases, the coordinates p⃗i of each position 30 can be provided by the GPS as global coordinates that are expressed in relation to a fixed point on the Earth. The position of the object 50 that is determined using global coordinates p⃗i of each position 30 can also be given as global coordinates p⃗o of the object 50. Alternatively and/or additionally, the coordinates p⃗i of each position 30 can be obtained using an inertial measurement unit (IMU) that is situated aboard the mobile platform 10, which can use accelerometers and/or gyroscopes to track the positions 30 of the mobile platform 10. In such cases, the coordinates p⃗i of each position 30 can be provided by the IMU as relative coordinates that are expressed in relation to a local position (such as an initial position of the mobile platform 10, or another suitable reference point). The position of the object 50 that is determined using relative coordinates p⃗i of each position 30 can also be given as relative coordinates p⃗o of the object 50.
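The IMU-based relative tracking described above can be illustrated with a minimal dead-reckoning sketch that doubly integrates accelerometer samples with semi-implicit Euler steps. The function and its parameters are hypothetical illustrations; a practical IMU pipeline would also fuse gyroscope data and correct for drift.

```python
import numpy as np

def dead_reckon(accels, dt, p0=None, v0=None):
    """Track relative coordinates by integrating accelerations twice.

    Positions are expressed relative to the initial position p0,
    matching the IMU-based relative coordinates described above.
    Returns the trajectory of positions, one row per sample.
    """
    p = np.zeros(3) if p0 is None else np.asarray(p0, dtype=float)
    v = np.zeros(3) if v0 is None else np.asarray(v0, dtype=float)
    trajectory = []
    for a in np.asarray(accels, dtype=float):
        v = v + a * dt          # velocity from acceleration
        p = p + v * dt          # position from velocity
        trajectory.append(p.copy())
    return np.array(trajectory)
```

For constant acceleration a over time t, the result approximates the analytic displacement a·t²/2, with discretization error shrinking as dt decreases.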
In some embodiments, the coordinates p⃗o of the object 50 can be determined using an optimization technique based on the coordinates p⃗i of each position 30 of the mobile platform 10 and the measured distances di. Various optimization techniques are suitable for the present systems and methods, including but not limited to linear optimization techniques and/or nonlinear optimization techniques. Additional optimization techniques that are suitable for the present systems and methods include, for example, least square optimization, Kalman filtering, combinatorial optimization, stochastic optimization, linear programming, nonlinear programming, dynamic programming, gradient descent, genetic algorithms, hill climbing, simulated annealing, and the like.
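As one non-limiting illustration of a nonlinear optimization technique from the list above, the coordinates p⃗o can be estimated by minimizing the squared range residuals with Gauss-Newton iterations. This is a sketch under stated assumptions: the function name, initial guess, and plain (undamped) Gauss-Newton scheme are illustrative choices, and a production implementation would add damping and divergence checks.

```python
import numpy as np

def locate_object(platform_positions, distances,
                  initial_guess=(1.0, 1.0, 1.0), iters=50):
    """Nonlinear least-square estimate of the object coordinates p_o.

    Minimizes sum_i (||p_o - p_i|| - d_i)^2 over the measured
    positions p_i and distances d_i via Gauss-Newton steps.
    """
    p = np.asarray(platform_positions, dtype=float)   # (N, 3) positions p_i
    d = np.asarray(distances, dtype=float)            # (N,) measured d_i
    x = np.asarray(initial_guess, dtype=float)        # current estimate of p_o
    for _ in range(iters):
        diff = x - p
        ranges = np.linalg.norm(diff, axis=1)
        residuals = ranges - d
        J = diff / ranges[:, None]                    # Jacobian of ranges w.r.t. p_o
        step = np.linalg.lstsq(J, -residuals, rcond=None)[0]
        x = x + step
        if np.linalg.norm(step) < 1e-12:
            break
    return x
```

With exact distances and well-spread (non-degenerate) positions, the iteration converges to the true coordinates in a handful of steps.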
In some embodiments, the object 50 can be a stationary object (for example, a building). Accordingly, the coordinates p⃗o of the object 50 can remain fixed as the distances di are measured at each of the positions 30 of the mobile platform 10. In some embodiments, the object 50 is mobile, but has a fixed position during a time interval during which distances di to the object 50 are measured at several positions 30 of the mobile platform 10. For example, a bird near the mobile platform 10 can generally move about, but can remain still for several measurements during a particular time interval, and therefore a position of the bird during that time interval can be found. In still other embodiments, the object 50 is a mobile object but moves slowly relative to the mobile platform 10 (for example, a moving boat). Accordingly, several distances di can be measured during a time interval during which the object 50 is approximately stationary relative to the mobile platform 10.
In some embodiments, the measured distances di can be filtered to suppress measurement noise. Noise in the measured distances di can arise, for example, when the mobile platform 10 detects stray particles in the surroundings of the object 50 being tracked. Noise can further arise from a lack of precision in the sensors 11 (shown in
In some embodiments, the position of the object 50 can be periodically determined. An exemplary method 400 for periodically determining the position of the object 50 using batched distance measurements is shown in
In some embodiments, a least square method can be used to determine and/or update the position of the object 50 based on a plurality of measured distances di. An exemplary least square method is illustrated as follows with respect to an object 50 having coordinates p⃗o = (x0, y0, z0) and positions 30 of the mobile platform 10 having respective coordinates p⃗i = (xi, yi, zi). The relationship between these coordinates and the measured distances di can be represented as in Equation (2) below (shown for i = 1 and 2):
(x1 − x0)² + (y1 − y0)² + (z1 − z0)² − d1² + λ((x2 − x0)² + (y2 − y0)² + (z2 − z0)² − d2²) = 0   Equation (2)
The parameter λ can be set to −1 to remove quadratic terms, from which Equation (3) can be obtained:
2(x2 − x1)x0 + 2(y2 − y1)y0 + 2(z2 − z1)z0 = (d1² − d2²) − (x1² − x2²) − (y1² − y2²) − (z1² − z2²)   Equation (3)
Equation (3) is a linear equation that can be represented in matrix format as follows:
A·p⃗0 = B   Equation (4)
wherein
A = [2(x2 − x1), 2(y2 − y1), 2(z2 − z1)], p⃗0 = [x0, y0, z0]ᵀ, B = [(d1² − d2²) − (x1² − x2²) − (y1² − y2²) − (z1² − z2²)]
Where a number N of positions 30 is measured, and N is greater than three, the matrices A and B can be represented as follows:
A least square method can be used to find an optimal solution to Equation (4). For example, an objective function for optimization can be set as follows:
F(p⃗0) = (B − A·p⃗0)ᵀ·(B − A·p⃗0)   Equation (7)
The objective function can be minimized by taking a derivative of the objective function and setting the derivative to zero, as follows:
wherein Ai represents the ith column of matrix A. Solving yields the following result:
p⃗0 = (AᵀA)⁻¹AᵀB   Equation (9)
Accordingly, the coordinates p⃗o = (x0, y0, z0) of the object 50 can be obtained based on the matrices A and B, which can include known parameters. The above example is intended to illustrate only one embodiment of a least square method for determining the position of the object 50, and is not meant to be limiting.
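A minimal sketch of the closed-form least square method above follows. It assumes that consecutive positions 30 are paired to build the rows of A and B (the pairing scheme and function name are illustrative choices, not mandated by the disclosure), and uses a numerically stable least-square solver in place of forming (AᵀA)⁻¹AᵀB explicitly.

```python
import numpy as np

def locate_object_linear(platform_positions, distances):
    """Closed-form least-square estimate of the object coordinates,
    following Equations (2)-(9): differencing pairs of range equations
    (the lambda = -1 step) removes the quadratic terms, leaving the
    linear system A p_o = B."""
    p = np.asarray(platform_positions, dtype=float)   # (N, 3) positions p_i
    d = np.asarray(distances, dtype=float)            # (N,) measured d_i
    # Row j compares position j with position j+1.
    A = 2.0 * (p[1:] - p[:-1])
    B = (d[:-1] ** 2 - d[1:] ** 2) - np.sum(p[:-1] ** 2 - p[1:] ** 2, axis=1)
    # Equation (9), computed via lstsq for numerical stability.
    return np.linalg.lstsq(A, B, rcond=None)[0]
```

With N positions this yields N − 1 linear equations, so at least four non-coplanar positions are generally needed for a unique three-dimensional solution.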
In some embodiments, the position of the object 50 can be determined in real time. An exemplary method 500 is shown in
In some embodiments, the position of the object 50 can be determined based on a distance d measured at a new position 30 and distances di measured at one or more former positions 30 of the mobile platform 10. Stated somewhat differently, the position of the object 50 can be updated based on the new position 30 while taking into account a historical trajectory of the mobile platform 10 relative to the object 50. In some embodiments, the position of the object 50 can be updated using a filtering method. An exemplary filtering method is a Kalman filter, which uses a probabilistic model to estimate parameters of interest based on observed variables that are indirect, inaccurate, and/or uncertain. The Kalman filter is recursive, as the parameters can be updated in real time as new observations are made. The Kalman filter can further model noise (such as Gaussian noise) and thereby suppress that noise in estimating the parameters of interest. The Kalman filter can be used in the present systems and methods for real time determination of the position of the object 50.
An exemplary Kalman filter is illustrated below that includes as a state variable the coordinates p⃗o of the object 50. The exemplary Kalman filter further includes a state variable P that models error of the coordinates p⃗o. The Kalman filter is shown as taking place in two steps: a “predict” step and an “update” step. The “predict” and “update” steps can be repeated in multiple iterations (denoted with index k) until an optimal solution for the updated position of the object 50 is attained. For example, the “predict” step can take place as follows:
ᵏp⃗0⁻ = ᵏ⁻¹p⃗0   Equation (10)

ᵏP⁻ = ᵏ⁻¹P + Q   Equation (11)
wherein ᵏ⁻¹p⃗0 is the a posteriori state estimate from iteration k−1, ᵏp⃗0⁻ is the a priori state estimate for iteration k, ᵏ⁻¹P is the a posteriori error estimate from iteration k−1, and ᵏP⁻ is the a priori error estimate for iteration k. The variable Q represents noise that is added to the error P at each iteration. Q can be set to various values to model different types of noise. Exemplary values of Q can be, without limitation, 0, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, or more. In some embodiments, the variable Q can be set to various values without affecting optimization of the position of the object 50. In some embodiments, Q can be set to 1. In other embodiments, Q can be set to 0 (in other words, assuming no noise).
The “update” step of the Kalman filter can take place, for example, as follows. First, a matrix ᵏH can be found that reflects a change in the state variable from one iteration of the Kalman filter to another. For example, the matrix H at state k can be found as:
wherein p⃗ represents coordinates of the new position 30 of the mobile platform 10, r⃗ = p⃗ − p⃗0 represents a displacement between the new position 30 and the position of the object 50, and d represents the distance between the new position 30 and the position of the object 50. A Kalman gain matrix K can be found as follows:
ᵏK = ᵏP⁻·ᵏHᵀ·(ᵏH·ᵏP⁻·ᵏHᵀ + R)⁻¹   Equation (13)
wherein R represents sensor noise and can be set to various values depending on the particular hardware used. R can be set to, for example, R = 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10. The “update” step can then proceed as follows to obtain a posteriori state and error estimates for iteration k:
ᵏp⃗0 = ᵏp⃗0⁻ + ᵏK·(d − ᵏH·ᵏp⃗0⁻)   Equation (14)
ᵏP = (I − ᵏK·ᵏH)·ᵏP⁻   Equation (15)
wherein ᵏp⃗0 represents the a posteriori state estimate for iteration k, and ᵏP represents the a posteriori error estimate for iteration k. The “predict” and “update” steps can be repeated as needed to optimize the coordinates p⃗o of the object 50.
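One predict/update cycle in the spirit of Equations (10)-(15) can be sketched as below. This is a hypothetical range-only extended-filter variant: it uses the standard innovation d − ‖p⃗ − ᵏp⃗0⁻‖ and takes ᵏH as the Jacobian of the predicted range with respect to the object coordinates, which is an assumption consistent with, but not literally identical to, the linearized form shown above; the scalar Q and R defaults are likewise illustrative.

```python
import numpy as np

def ekf_update(p_est, P, platform_pos, d_measured, Q=0.01, R=0.01):
    """One predict/update cycle of a range-only Kalman filter.

    p_est is the current estimate of the object coordinates, P its
    3x3 error covariance, platform_pos the new platform position,
    and d_measured the distance measured there.
    """
    # Predict (Equations (10)-(11)): the object is modeled as
    # stationary, so the state carries over and only the error grows.
    p_pred = p_est
    P_pred = P + Q * np.eye(3)

    # Linearize the range measurement about the predicted state:
    # H is the Jacobian of the predicted range w.r.t. the coordinates.
    r = p_pred - platform_pos
    d_pred = np.linalg.norm(r)
    H = (r / d_pred).reshape(1, 3)

    # Kalman gain (Equation (13)); S is the 1x1 innovation covariance.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T / S

    # Update (Equations (14)-(15)) with innovation d - ||p - p_o||.
    p_new = p_pred + (K * (d_measured - d_pred)).ravel()
    P_new = (np.eye(3) - K @ H) @ P_pred
    return p_new, P_new
```

Repeated over successive, non-degenerate platform positions, the estimate can converge toward the object coordinates while the error P shrinks.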
The above example is intended to illustrate only one embodiment of a filtering method for determining the coordinates p⃗o of the object 50 and is not meant to be limiting. Furthermore, multiple methods can be used to determine the coordinates p⃗o of the object 50. Using one method (for example, a least square method) does not preclude using another method (for example, a Kalman filter). For example, in some embodiments, both a batch method (shown above with respect to
Turning now to
The mobile platform control system 100 can include one or more processors 120. Without limitation, each processor 120 can include one or more general purpose microprocessors (for example, single or multi-core processors), application-specific integrated circuits, application-specific instruction-set processors, graphics processing units, physics processing units, digital signal processing units, coprocessors, network processing units, audio processing units, encryption processing units, and the like. The processors 120 can be configured to perform any of the methods described herein, including but not limited to, a variety of operations relating to obstacle detection and avoidance. In some embodiments, the processors 120 can include specialized hardware for processing specific operations relating to obstacle detection and avoidance—for example, processing distance data collected from the sensors 11, determining a position of an object 50 (shown in
The mobile platform control system 100 can include one or more additional hardware components (not shown), as desired. Exemplary additional hardware components include, but are not limited to, a memory (or computer readable storage medium) 130, which can include instructions for carrying out any of the methods described herein. Suitable memories 130 include, for example, a random access memory (RAM), static RAM, dynamic RAM, read-only memory (ROM), programmable ROM, erasable programmable ROM, electrically erasable programmable ROM, flash memory, a secure digital (SD) card, and the like. The mobile platform control system 100 can further include one or more input/output interfaces (for example, universal serial bus (USB), digital visual interface (DVI), display port, serial ATA (SATA), IEEE 1394 interface (also known as FireWire), serial, video graphics array (VGA), super video graphics array (SVGA), small computer system interface (SCSI), high-definition multimedia interface (HDMI), audio ports, and/or proprietary input/output interfaces). One or more input/output devices 140 (for example, buttons, a keyboard, keypad, trackball, displays, and a monitor) can also be included in the mobile platform control system 100, as desired.
In some embodiments, one or more components of the mobile platform 10 (for example, a sensor 11 shown in
Various methods of distance measurement can be used in the present systems and methods. Although exemplary distance measurement techniques are described below with respect to
The light can be emitted from the light source 710 and toward an object 50. Light reflected from the object 50 can be detected by the time-of-flight sensor 700 using one or more photosensors 720 that can sense the reflected light and convert the sensed light into electronic signals. Each photosensor 720 of the time-of-flight sensor 700 can be, for example, a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) imaging device, an N-type metal-oxide-semiconductor (NMOS) imaging device, or a hybrid/variant thereof. The photosensors can be arranged in a two-dimensional array (not shown), each photosensor capturing one pixel of image information, collectively enabling construction of an image depth map (not shown). In some embodiments, the time-of-flight sensor 700 has a quarter video graphics array (QVGA) or higher resolution, for example, a resolution of at least 0.05 Megapixels, 0.1 Megapixels, 0.5 Megapixels, 1 Megapixel, 2 Megapixels, 5 Megapixels, 10 Megapixels, 20 Megapixels, 50 Megapixels, 100 Megapixels, or an even greater number of pixels. The time-of-flight sensor 700 can advantageously be configured to differentiate between reflected light (signal) and ambient light (noise). Once the reflected light is sensed, the distance d to the object 50 can be measured according to the time-of-flight of the light signal (for example, using a phase-shift method).
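The phase-shift method mentioned above can be illustrated as follows: with light amplitude-modulated at frequency f, a measured phase shift φ of the reflected signal corresponds to a round-trip time of φ/(2πf), so the one-way distance is d = c·φ/(4πf). The helper below is a sketch; the 20 MHz example frequency is an illustrative assumption, not a value from the disclosure.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Distance from the phase shift of amplitude-modulated light.

    The light travels to the object and back, so the round trip takes
    phase_shift / (2 * pi * f) seconds and the one-way distance is
    d = c * phase_shift / (4 * pi * f). The result is unambiguous
    only for distances below c / (2 * f).
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)
```

For example, a phase shift of π at a 20 MHz modulation frequency corresponds to a distance of roughly 3.75 m.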
In some embodiments, the distance d can be measured using ultrasound. Turning now to
In some embodiments, multiple sensors 11 and/or multiple types of sensors 11 can be used to determine a distance d to an object 50. Individual sensing techniques can have drawbacks that are compensated for by other sensing techniques. For example, ultrasound can typically have limited detection range (commonly less than five meters) and limited sensitivity to small objects. Time-of-flight sensing has longer range but may be limited by interference from strong ambient lighting. Accordingly, ultrasound and time-of-flight sensing can advantageously be used in conjunction to determine the distance d. In some embodiments, the sensors 11 can be physically discrete devices for ease of replacement and modularity. In other embodiments, the sensors 11 can be integrated partially or fully into a single device, and can share overlapping physical components such as housings, microchips, photosensors, detectors, communications ports, and the like.
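The complementary use of ultrasound and time-of-flight sensing described above can be sketched as a simple fusion rule. This is a hypothetical helper: the five-meter threshold, the averaging policy, and the use of None for a missing reading are illustrative assumptions rather than anything specified in the disclosure.

```python
def fuse_distance(ultrasound_d, tof_d, ultrasound_max=5.0):
    """Combine one ultrasound and one time-of-flight reading into a
    single distance estimate. None models a sensor that returned no
    echo or signal."""
    if ultrasound_d is not None and ultrasound_d <= ultrasound_max:
        if tof_d is not None:
            # Both sensors see a short-range target: average them.
            return 0.5 * (ultrasound_d + tof_d)
        # Time-of-flight reading lost (e.g. strong ambient light).
        return ultrasound_d
    # Target beyond ultrasound range (or no echo): use time-of-flight.
    return tof_d
```

A richer implementation could weight the readings by each sensor's noise model instead of averaging.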
The mobile platform control system 100 can be situated relative to the mobile platform 10 in any convenient fashion. In some embodiments, the mobile platform control system 100 can be mounted aboard the mobile platform 10 as illustrated in
In some embodiments, the mobile platform control system 100 can be configured to autonomously control the mobile platform 10 to avoid the object 50 during travel or other operations. As shown in
Turning now to
As shown in
The disclosed embodiments are susceptible to various modifications and alternative forms, and specific examples thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the disclosed embodiments are not to be limited to the particular forms or methods disclosed, but to the contrary, the disclosed embodiments are to cover all modifications, equivalents, and alternatives.
This application is a continuation of U.S. application Ser. No. 16/003,276, filed on Jun. 8, 2018, which is a continuation of International Application No. PCT/CN2015/097066, filed on Dec. 10, 2015, the entire contents of both of which are incorporated herein by reference.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 16003276 | Jun 2018 | US |
| Child | 17153176 | | US |
| Parent | PCT/CN2015/097066 | Dec 2015 | US |
| Child | 16003276 | | US |