SYSTEMS AND METHODS FOR AUTONOMOUS VISION-GUIDED OBJECT COLLECTION FROM WATER SURFACES WITH A CUSTOMIZED MULTIROTOR

Abstract
Various embodiments of a vision-guided unmanned aerial vehicle (UAV) system to identify and collect foreign objects from the surface of a body of water are disclosed herein. A vision system and methodology have been developed to reduce reflections and glare from a water surface to better identify an object for removal. A linear polarization filter and a specularity-removal algorithm are used to eliminate excessive reflection and glare. A contour-based detection algorithm is implemented for detecting the targeted objects on the water surface. Further, the system includes a boundary layer sliding mode control (BLSMC) methodology to minimize position and velocity errors between the UAV and the object in the presence of modeling and parameter uncertainties due to variation in a moving water surface.
Description
FIELD

The present disclosure generally relates to unmanned aerial vehicles (UAVs), and in particular, to a system and associated method for autonomous vision-guided object collection using a multirotor.


BACKGROUND

The advent of unmanned aerial vehicles (UAVs) has opened up numerous opportunities for executing perilous and mission-critical tasks such as search and rescue, exploration in radioactive and hazardous environments, inspection of vertical structures, and water sample collection. UAVs can fly over dams and canals for water sample collection and inspection, potentially avoiding physical labor for these hazardous tasks. For instance, canals in Arizona have played a critical role in irrigation, power generation, and daily human use. These canals require frequent monitoring and inspection. Currently, they are periodically drained to collect trash items, which is a time-consuming and expensive process.


Autonomous object collection from water surfaces using UAVs poses challenges in 1) aerial manipulation and object collection and 2) landing on a moving object. The field of aerial grasping and manipulation has witnessed great progress recently. A multirotor equipped with a 7-degree-of-freedom (DOF) manipulator was proposed for aerial manipulation tasks. The 7-DOF manipulator is fitted with a camera and is shown to track an object. Pounds et al. employed a helicopter for aerial grasping and discussed the effects of ground wash. To ensure repeatable grasping performance, precise motion control would be required to ensure the object lies within the grasper's workspace, especially when the quadrotor is close to the object. Additionally, adding a robotic manipulator would increase the aircraft gross weight (AGW) and reduce the overall flight time and effective payload. In previous work on autonomous aerial grasping, a hexarotor was integrated with a three-finger soft grasper, which was made of silicone with pneumatically-controlled channels. Experimental results demonstrated that off-centered and irregularly-shaped objects were grasped successfully by the soft grasper. In another reference, a deformable quadrotor was proposed where the whole body was deformed to grasp objects. An origami-inspired foldable arm was proposed in one reference which could be utilized for performing different tasks in confined spaces. Although the robotic arm is extremely lightweight, the arm has one degree of freedom and can pick up objects from a limited range of directions. Despite considerable work in this field, aerial grasping of objects on water surfaces poses additional challenges such as i) random motion of floating objects due to unpredictable current flow, and ii) partially submerged objects.


Considerable research has been conducted on enabling a multirotor to autonomously land on a moving target. In one reference, a minimum-energy based trajectory planning method was proposed to land on a moving target with a constant velocity. Lee et al. proposed a line-of-sight based trajectory tracking control for quadrotor landing on a moving vehicle in outdoor conditions. In this work, the velocity commands were generated based on the relative distance between the current and target positions. Multiple outdoor tests were performed in another reference to autonomously land a UAV on a moving target using model predictive control (MPC). In another reference, a small quadrotor was demonstrated to track a moving spherical ball, and the quadrotor's planned trajectory was updated using a receding horizon trajectory planner. However, there are several significant differences between landing on a moving target on the ground and landing on a floating object on a water surface. Generally, moving targets are equipped with distinctive markers, which reduces the complexity of tracking, and the dynamics of the moving target are deterministic. On the other hand, tracking floating objects on a water surface is challenging due to reflection and glare from the water surface. Moreover, the motion of a floating object in close proximity to a multirotor's propeller outwash is complex and random. Therefore, a robust control technique is required to handle modeling uncertainties for reliably tracking and landing on the floating object.


It is with these observations in mind, among others, that various aspects of the present disclosure were conceived and developed.





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1A-1D are a series of views showing a multirotor detecting an object, descending above the object, landing on the water surface over the object, and picking up the object;



FIG. 2A is an isometric view showing the multirotor of FIG. 1A;



FIG. 2B is a top view showing the multirotor of FIG. 1A;



FIG. 2C is a first side view showing the multirotor of FIG. 1A with a net opened;



FIG. 2D is a second side view showing the multirotor of FIG. 1A with a net closed;



FIG. 2E is an enlarged view showing a plurality of sensors and a servo motor of the multirotor of FIG. 1A;



FIG. 3 is a diagram showing a controller of an aerial manipulation system of the multirotor of FIG. 1A;



FIGS. 4A-4D are a series of views showing reflection elimination and object detection as performed by the multirotor of FIG. 1A;



FIG. 5 is an illustration showing an operation principle of an aerial manipulation system of the multirotor of FIG. 1A;



FIGS. 6A-6F are a series of views showing a comparison of states for multirotor of FIG. 1A with respect to a control input;



FIGS. 7A-7F are a series of views showing results from a successful flight test of the multirotor of FIG. 1A;



FIGS. 8A-8F are a series of views showing results from a failed flight test of the multirotor of FIG. 1A;



FIGS. 9A-9F are a series of views showing results from a successful flight test of the multirotor of FIG. 1A with a dark shade object on a cloudy day;



FIGS. 10A-10F are a series of views showing results for state and control trajectories for a conventional sliding mode controller (SMC) of the multirotor of FIG. 1A; and



FIG. 11 is a simplified diagram showing an example computing device for implementation of the present system.





Corresponding reference characters indicate corresponding elements among the views of the drawings. The headings used in the figures do not limit the scope of the claims.


DETAILED DESCRIPTION

Various embodiments of a system for autonomous detection and collection of objects floating on a water surface are disclosed herein. In particular, autonomous detection and collection is performed by a multirotor that implements a boundary layer sliding mode control (BLSMC) method with a dynamic sliding manifold. The dynamic sliding manifold is designed to eliminate the reaching phase of sliding mode control. The multirotor features a robust vision-based controller in conjunction with an integrated net system. During experimental evaluation, the multirotor collected floating objects of different shapes and sizes with a 91.6% success rate. The main contributions of the present disclosure are summarized as follows:

  • A net-based multirotor system with flotation is proposed for object collection from water surfaces.
  • A computationally efficient contour-based algorithm with specularity removal is developed for detecting objects on water surfaces under different illumination conditions. A comparison with different detectors is provided to show the advantages of the algorithm.
  • A BLSMC approach, with a dynamic sliding surface, is developed. A constrained MPC is designed for determining the optimal sliding surface and the controller demonstrates reliable object tracking in the presence of modeling uncertainties while enabling the system to collect objects under different weather conditions.


II. Multirotor Setup

Referring to FIGS. 1A-3, components of the system 100 including a multirotor 101 for object detection and collection are described. The system 100 demonstrates complete autonomy by utilizing a plurality of onboard sensors 150 for detection, tracking and collection of floating or partially submerged objects on water surfaces. In some embodiments, the system 100 can also be utilized for submersible deployment and collection.


A. Aerial Platform and Actuation System

In one embodiment, the multirotor 101 includes a base frame 120. In some embodiments, the base frame 120 is a co-axial octorotor frame having an enhanced in-air stability, a compact design, and high thrust-to-weight ratio compared to a conventional quadrotor. The base frame 120 is coupled to a landing assembly 103 as shown in FIGS. 2A-2E. In one embodiment, the multirotor 101 has a flight time of 12 minutes with a 500-gram payload. A capacity of an onboard battery (not shown) is 11,000 mAh. One embodiment of the multirotor 101, excluding the payload, weighs 2,352 grams.


The base frame 120 provides a plurality of arms 121, each respective arm 121 including at least one propeller 122 of a plurality of propellers 122 rotatable by an associated propeller motor 124 of a plurality of propeller motors 124. The plurality of propeller motors 124 are controlled at least in part by a flight controller system 300 that provides an optimal control output to be applied to each respective propeller motor 124 of the plurality of propeller motors 124 based on a detected position of an object 10 and a set of positional and attitudinal properties of the multirotor 101 such that the multirotor 101 captures the object 10 upon landing on a landing surface.


B. Flotation and Integrated Net System

With continued reference to FIGS. 2A-2E, the multirotor 101 is equipped with the landing assembly 103 that includes a landing structure 130 that frames a capture void 132 for collection of an object 10. As shown, the capture void 132 defines a first terminus 134 and a second terminus 135 located opposite from the first terminus 134. In some embodiments, the landing structure 130 includes a buoyant structure, which can include a first buoyant sub-structure 131A and a second buoyant sub-structure 131B to land and float on water. A primary factor in determining the dimensions of the buoyant sub-structures 131A and 131B was the total weight of the vehicle, as the length and radius of the buoyant sub-structures 131A and 131B determine the generated buoyant force. In one embodiment, the buoyant sub-structures 131A and 131B are of a polyethylene material and provide 41.2 N of buoyant force, whereas the weight of the proposed aerial system is 23.07 N. In one embodiment, the landing assembly 103 including buoyant sub-structures 131A and 131B weighs only 120 grams in total. The buoyant sub-structures 131A and 131B are each positioned 40 cm away from the central body to prevent toppling on water surfaces. It should be noted that while the landing structure 130 shown includes two separable buoyant sub-structures 131A and 131B, the landing structure 130 can optionally be circular or square-shaped and can be present along an entire perimeter of the capture void 132.


The multirotor 101 also includes an active net mechanism 104 for collection of the object within the capture void 132, which has a larger workspace compared to a grasper, as shown in FIGS. 2A-2E. The active net mechanism 104 can include a net 142 that defines a first portion 144 and a second portion 145, the first portion 144 being affixed to the landing structure 130 directly at the first terminus 134 of the capture void 132. The second portion 145 of the net 142 is affixed to a moveable rod 148 that is moveable by a servo arm 147. The net 142 can be formed of a durable polypropylene material, as polypropylene is lightweight and provides a large lifting capacity without breaking the net 142. The net mechanism 104 also includes a high-torque servo motor 146 that actuates the servo arm 147, which is attached perpendicularly to the center of the moveable rod 148, as shown. When the servo motor 146 is deactivated, the servo arm 147 and moveable rod 148 push the second portion 145 of the net 142 towards the first portion 144 of the net 142 and the first terminus 134 of the capture void 132 to “open” the net 142 such that the capture void 132 is open, as shown in FIG. 2C. When the servo motor 146 is activated, the servo arm 147 and moveable rod 148 push the second portion 145 of the net 142 towards the second terminus 135 of the capture void 132 to span across the capture void 132 when picking up an object, as shown in FIGS. 2B and 2D. FIGS. 2C and 2D respectively show the net 142 in open and closed positions.


C. Sensor and Computation Subsystem

Referring to FIGS. 2A, 2E and 3, the system 100 is equipped with a plurality of sensors 150 that are collectively operable to estimate a set of positional and attitudinal properties of the multirotor 101, including an inertial measurement unit (IMU) for attitude estimation. Onboard differential pressure sensors and GPS 154 are used for position estimation. A LiDAR 152, which in some embodiments is a single-shot Teraranger Evo (Terabee, France), is located on a bottom of the multirotor 101 and is used for altitude estimation above the water surface for water landing. The multirotor 101 is further equipped with an image capture device 153, which in some embodiments is an oCam-1CGN-U (Withrobot, Rep. of Korea) global shutter monocular camera, for object detection and tracking without motion blur. The image capture device 153 outputs 640×480 images at 80 fps. A low-level flight control unit 190, PIXHAWK, interfaces with sensors through UART and I2C communication and provides signals to each propeller motor 124 and the servo motor 146 based on the desired torques outputted by the flight controller system 300. A high-level computer 180 including a processor 182 in communication with a memory 184 is further included, and in some embodiments is an Intel UpBoard, which interfaces with the image capture device 153 and runs various detection, tracking and vision-based control functions collectively performed by an object detection system 200 and the flight controller system 300 of FIG. 3.


The different software blocks are demonstrated in FIG. 3, including the object detection system 200 that identifies a position of the object 10 within a frame of a video feed captured by image capture device 153 and the flight controller system 300 that provides an optimal control output to be applied to each respective propeller motor 124 of the plurality of propeller motors 124 based on the position of the object 10 and a set of positional and attitudinal properties of the multirotor 101 such that the multirotor 101 encapsulates the object 10 within the capture void 132 upon landing on a landing surface (as demonstrated in FIG. 5). Object detection system 200 can include an object detection sub-system 210 that extracts visual features from the video feed images provided by image capture device 153, and an object inertial pose estimation sub-system 250 that estimates a position and velocity of the object 10 relative to the multirotor 101 based on the position of the object 10 as identified within a frame of the video feed. The vision-based control block 310 receives an estimated position and velocity of the object 10 from object inertial pose estimation sub-system 250 and also receives an estimated position and velocity of the multirotor 101 from a multirotor 6-D pose estimation sub-system 290 that interfaces with the plurality of sensors 150. The vision-based control block 310 of flight controller system 300 determines an optimal control output to be applied to each respective propeller motor 124 including inertial frame thrusts. Eventually, the generated inertial frame thrusts are scaled and sent to a multirotor attitude control sub-system 350 of flight controller system 300 that generates desired attitude setpoints and resultant generated torques for application to the propeller motors 124.


The object detection system 200 detects the object 10 which drifts due to the propeller outwash. The position of the object 10 is estimated by object inertial pose estimation sub-system 250 of object detection system 200. The flight controller system 300 implements a controller scheme discussed in section IV. The inertial thrusts obtained at vision-based control block 310 according to Eq. 3 below are sent to the multirotor attitude control sub-system 350 of flight controller system 300 which generates desired attitude setpoints and resultant generated torques. Eventually, the optimal control output including generated torques are applied to each respective propeller motor 124 of the plurality of propeller motors 124 of the multirotor 101 through the low-level flight control unit 190.


III. Object Detection

The goal of the object detection system 200 is to detect objects on water surfaces. The object detection system 200 uses a combination of a linear polarization filter 156 placed in front of a lens of the image capture device 153 and a reflection removal methodology to suppress reflections, and employs an edge detector followed by contour extraction for detection.


A. Challenges

Detection and tracking of objects on water surfaces pose a multitude of challenges. A major issue with outdoor testing in sunny environments is the glare and reflection of light from the water surface and objects. Some of these reflections are intense enough to completely overwhelm the object in the scene. An initial approach was to remove these reflections using background subtraction techniques such as a Gaussian mixture model or a per-pixel Bayesian mixture model. However, these algorithms assume still backgrounds and are ineffective when applied to backgrounds that are in motion. Another challenge is the change in the aspect ratio of the object as the multirotor lands on it. This necessitates an object detection system 200 that is not affected by scale changes. Changing illumination conditions and partial occlusion of the object due to drift pose additional challenges.


B. Strategy

The object detection system 200 is represented in FIGS. 4A-4D and the pseudo-code is given by Algorithm 1:












Algorithm 1: Object detection algorithm

 1: while read polarized camera frame do
 2:   if Object detected == false then
 3:     f ← getframe
 4:     Canny_edge_detector(f)
 5:     contours(i) = FindContours(f)
 6:     Obj = max_area{contours(i)}
 7:     if no contour jumps in a window of 10 frames then
 8:       Object detected ← true
 9:     else
10:       Object detected ← false
11:     end if
12:   end if
13:   if Object detected == true then
14:     f ← getframe
15:     Imin(p) = min{fR(p), fG(p), fB(p)}
16:     T = μv + η·σv
17:     τ(p) = T if Imin(p) > T, else Imin(p)
18:     β̂(p) = 0 if p is within the bounding box, else Imin(p) − τ(p)
19:     fsf(p) = merge{fR(p) − β̂(p), fG(p) − β̂(p), fB(p) − β̂(p)}
20:     Canny_edge_detector(fsf)
21:     contours(i) = FindContours(fsf)
22:     Obj = max_area{contours(i)}
23:     if Object not detected for more than 10 frames then
24:       Object detected ← false
25:     end if
26:   end if
27: end while









In the first phase, a Canny edge detector is applied to the polarized video feed to extract a plurality of closed contours from the frame. A contour search method is employed to find the largest closed contour. However, due to the presence of reflections on the water surface, the largest closed contour can either be an object or a reflection. Since the reflections are volatile, their contours are not consistently placed in the same region. If the largest closed contour is consistently placed in the same region for a minimum threshold quantity of consecutive frames (in some embodiments, the minimum threshold quantity is at least 10 consecutive frames), it indicates the presence of an object in the scene, as shown in FIG. 4B. Once the initial closed contour of the object is obtained, current bounding box coordinates indicative of the position of the object are further obtained for the initial closed contour. To ensure reliable object detection, reflections are removed from a subsequent frame by considering the bounding box coordinates of the object in the current frame. By doing this, the object detection system 200 guarantees that the reflection removal algorithm removes the specular component of the image without affecting the features of the object.
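By way of illustration, a minimal sketch of this first phase using OpenCV is given below. It is only a sketch: it assumes an OpenCV 4.x video source already fitted with the polarizing filter, and the Canny thresholds, the 30-pixel jump tolerance, and the helper names (largest_contour_bbox, detect_object) are illustrative choices rather than the onboard implementation.

import cv2
import numpy as np

def largest_contour_bbox(frame, canny_lo=50, canny_hi=150):
    """Return the bounding box (x, y, w, h) of the largest closed contour, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

def detect_object(cap, window=10, max_jump_px=30):
    """Declare a detection once the largest contour stays in place for `window` frames."""
    prev_center, stable = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            return None
        bbox = largest_contour_bbox(frame)
        if bbox is None:
            prev_center, stable = None, 0
            continue
        x, y, w, h = bbox
        center = np.array([x + w / 2.0, y + h / 2.0])
        if prev_center is not None and np.linalg.norm(center - prev_center) < max_jump_px:
            stable += 1      # contour stayed in the same region (likely the object)
        else:
            stable = 0       # contour jumped between frames (likely a reflection)
        prev_center = center
        if stable >= window:
            return bbox      # consistent placement over `window` consecutive frames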


In the second phase, the specularity removal starts by computing the minimum intensity value Imin(p) across the RGB channels for each pixel in the frame. An intensity threshold value T=μv+η·σv is then calculated to distinguish highlighted pixels. μv and σv are the mean and standard deviation of the minimum values Imin over all the pixels. η is a measure of the specular degree of an image and is generally set to 0.5. Based on the calculated intensity threshold T, an offset τ is then computed to determine which pixels have to be changed to suppress reflections. The specular component {circumflex over (β)}(p) of the frame is computed by subtracting the offset from Imin. For any pixel inside the bounding box, {circumflex over (β)}(p) is set to 0 to preserve the features of the object to be detected. Finally, the specular component is subtracted from the frame to obtain the specular-free image without removing reflections from the area where the object is located. One can clearly see the reflections removed without the object being affected by comparing FIGS. 4A and 4C. Since reflections have been eliminated in the rest of the frame, the region where the object lies is highlighted when compared with the rest of the image. The contours are extracted again using the Canny edge detector and the object is detected, as seen in FIG. 4C. The updated bounding box coordinates of the object are utilized for specularity removal from the next frame, and the process iterates. If no contours are detected for a pre-determined time window, the object detection system 200 reverts to the maximum-area contour searching.
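The specularity-removal step just described can be sketched in a few lines of NumPy. This is a minimal sketch: η = 0.5 follows the text above, while the function name and the uint8/BGR input format are assumptions of the example rather than details of the disclosed system.

import numpy as np

def remove_specularity(frame_bgr, bbox, eta=0.5):
    """Suppress specular highlights everywhere except inside the object's bounding box.

    frame_bgr: HxWx3 uint8 image; bbox: (x, y, w, h) of the detected object.
    """
    f = frame_bgr.astype(np.float32)
    i_min = f.min(axis=2)                   # Imin(p): per-pixel minimum over the color channels
    mu, sigma = i_min.mean(), i_min.std()
    T = mu + eta * sigma                    # intensity threshold for highlighted pixels
    tau = np.where(i_min > T, T, i_min)     # offset tau(p)
    beta = i_min - tau                      # specular component beta_hat(p)
    x, y, w, h = bbox
    beta[y:y + h, x:x + w] = 0.0            # preserve the object's features inside the box
    return np.clip(f - beta[..., None], 0, 255).astype(np.uint8)   # specular-free frame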


C. Performance Comparison

The performance comparison was conducted on an Intel Core i7-10750H CPU system with 10 GB RAM and 4 cores. One-minute test videos were recorded at 30 fps for the five objects listed in Table I, in cloudy/partly cloudy conditions with the polarizing filter 156 in front of the lens of the image capture device 153. The gain and exposure values were 15 and 45, respectively. The object detection system 200 was compared with deep-learning-based detectors, YOLOv3 and MobileNet SSDv2, which were pre-trained on the COCO dataset. The metrics used for the evaluation are the processed frames per second (FPS), CPU usage, precision, and recall. The latter two are defined as:







\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN},




where TP, FP, and FN are the numbers of True Positives, False Positives, and False Negatives detected by the tracker, respectively.
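As a small worked example of these definitions (with hypothetical counts, not values taken from Table I):

def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts for a single 30 fps, 1-minute test video (1800 frames):
p, r = precision_recall(tp=1700, fp=50, fn=50)   # -> (0.971..., 0.971...)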









TABLE I
Performance comparisons of different trackers.

                  Average FPS         Total CPU usage in %     Precision & Recall
Objects          Ours  YOLO  SSD     Ours    YOLO    SSD      Ours   YOLO   SSD
White Bottle      21    2.8   11     37.5    86.25   75       0.96   0.97   0.57
Dark carton       21    2.8   11     37.5    86.25   75       0.91   0.96   0.52
Dark can          21    2.8   11     37.5    86.25   75       0.78   0.90   0.22
Silver can        21    2.8   11     37.5    86.25   75       0.91   0.90   0.86
Juice carton      21    2.8   11     37.5    86.25   75       0.97   0.96   0.80









For training the deep-learning models, 2000 images were collected and labeled. Since the objective of the object detection system 200 is to detect any object, all the images were labeled under a single class. Synthetic data was generated by performing various operations such as rotation, shearing, and varying brightness intensity on the training dataset. The brightness of the images was varied to capture different illumination conditions. The addition of the synthetic data increased the size of the training dataset to 6000 images. Both YOLO and SSD were trained until their average losses stabilized at 0.13 and 3.1, respectively. The models were then deployed on the collected test videos to examine their FPS and CPU usage. The average CPU consumption for the object detection system 200 is 37.5%, and its average frame rate is 21 FPS. The YOLO model utilizes 86.25% of the total CPU and its average frame rate is 2.8 FPS, which is not suitable for real-time testing. SSD had a better average frame rate of 11 FPS when compared to YOLO, but it also had a high CPU consumption of 75% that made it unsuitable for this purpose.


The ground truth generation and performance evaluation were conducted in MATLAB. This ground truth was only utilized for calculating precision and recall, and was not used in field experiments. Precision and recall are equal because the test videos and the ground truth video had an equal number of frames, which results in an equal number of false positives and false negatives. The YOLO model has the highest precision and recall for most of the objects. The object detection system 200 shows comparable performance to YOLO. SSD has the lowest precision and recall due to its inability to detect objects for a large number of frames.


IV. System Modeling and Control

The dynamics of the flight controller system 300 with respect to the multirotor 101 and the floating object 10 are outlined in this section. The propeller outwash of the multirotor 101 moves the floating object 10 around, and the flight controller system 300 is designed for the multirotor 101 to track and land on the object, particularly such that the multirotor 101 encapsulates the object 10 within the capture void 132 upon landing on a landing surface. The flight controller system 300 determines the optimal control output to be applied to each respective propeller motor 124 of the plurality of propeller motors 124 based on the set of positional and attitudinal properties of the multirotor 101, the position of the object 10 and a velocity of the object 10.


A. Multirotor and Object Dynamics

A set of dynamics models of the multirotor 101 and object 10 are studied in detail as reliable 2-D tracking of the object is necessary for successful object collection. The following assumptions were made for dynamic modeling of the multirotor 101 and the object 10:

  • 1) The translational kinematics of the multirotor 101 are simplified by assuming that the drag acting on the body is negligible.
  • 2) The pattern of the propeller outwash is radially outwards from the position of the multirotor.
  • 3) Water currents and the object's vertical dynamics are negligible.
  • 4) The direction of the force due to propeller outwash is along the projection, on the water surface, of the vector from the center of the multirotor 101 to that of the object 10. It can be seen in FIG. 5 that the vertical downwash transitions to a radial outwash upon interaction with the water surface.


As illustrated in FIG. 5, the propeller outwash drifts the object 10 and the force generated due to the airflow governs the dynamics of the object 10. The set of dynamics models describe a relationship between the set of positional and attitudinal properties of the multirotor 101, the position of the object 10 and a velocity of the object 10, including a relationship between one or more airflow forces generated by the plurality of propellers 122 of the multirotor 101 and the position of the object 10 and the velocity of the object 10. In particular, 3-D translational dynamics of the multirotor 101 and the object 10 are given by the following equations:





\dot{x}_q = v_{xq}, \quad \dot{y}_q = v_{yq}, \quad \dot{v}_{xq} = u_x, \quad \dot{v}_{yq} = u_y,
\dot{z}_q = v_{zq}, \quad \dot{v}_{zq} = g - u_z, \quad \dot{x}_o = v_{xo}, \quad \dot{y}_o = v_{yo},
m_o \dot{v}_{xo} = -b\,(v_{xo})^2\,\mathrm{sgn}(v_{xo}) + F_x, \quad F_x = F\cos\delta,
m_o \dot{v}_{yo} = -b\,(v_{yo})^2\,\mathrm{sgn}(v_{yo}) + F_y, \quad F_y = F\sin\delta,
F_x = F_{emp}\cos\delta + \Delta F_x, \quad |\Delta F_x| \le \beta_x, \quad \beta_x \ge 0,
F_y = F_{emp}\sin\delta + \Delta F_y, \quad |\Delta F_y| \le \beta_y, \quad \beta_y \ge 0,
F_{emp} = k_1 |v_{air}|^2 \hat{v}_{air}, \quad \delta = \tan^{-1}\big((y_o - y_q)/(x_o - x_q)\big)    (1)


where g = 9.81 m/s², (u_x, u_y) ∈ ℝ² is the control input to the system, (x_q, y_q, z_q) ∈ ℝ³ is the position of the multirotor 101, (v_{xq}, v_{yq}, v_{zq}) ∈ ℝ³ is the velocity of the multirotor 101, (x_o, y_o, z_o) ∈ ℝ³ is the position of the object 10, and (v_{xo}, v_{yo}, v_{zo}) ∈ ℝ³ is the velocity of the object 10, all in the North-East-Down (NED) frame. The mass of the object 10 is defined as m_o ∈ ℝ, and F ∈ ℝ² is the planar force experienced by the object 10 due to propeller outwash, which represents the coupling dynamics between the multirotor 101 and the object 10. k_1 ∈ ℝ is a coefficient that depends on the area of the object 10 and the density of the surrounding fluid. F_emp ∈ ℝ² is the empirical formulation of F. (β_x, β_y) ∈ ℝ² represent the bounds on modeling uncertainties. The damping coefficient is b ∈ ℝ, and v_air ∈ ℝ² is the airflow velocity surrounding the object due to the propeller outwash.
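For illustration, a minimal Euler-integration sketch of the object's planar dynamics in Eq. (1) is given below. The parameter values (m_o, b, k_1, v_air) are placeholders rather than identified quantities, and atan2 is used as a quadrant-aware form of the arctangent in the definition of δ.

import numpy as np

def object_dynamics_step(state, quad_pos, dt, m_o=0.05, b=0.02, k1=1e-3, v_air=5.0):
    """One Euler step of the object's planar dynamics from Eq. (1).

    state = (x_o, y_o, vx_o, vy_o); quad_pos = (x_q, y_q); parameters are placeholders.
    """
    x_o, y_o, vx_o, vy_o = state
    x_q, y_q = quad_pos
    delta = np.arctan2(y_o - y_q, x_o - x_q)       # outwash direction, Eq. (1)
    F_emp = k1 * v_air**2                          # empirical outwash force magnitude
    ax = (-b * vx_o**2 * np.sign(vx_o) + F_emp * np.cos(delta)) / m_o
    ay = (-b * vy_o**2 * np.sign(vy_o) + F_emp * np.sin(delta)) / m_o
    return np.array([x_o + vx_o * dt, y_o + vy_o * dt,
                     vx_o + ax * dt, vy_o + ay * dt])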


B. Controller Design

One objective of the flight controller system 300 is to reduce position and velocity errors between the multirotor 101 and object 10 in the presence of modeling and parameter uncertainties. A boundary layer sliding mode control (BLSMC) with Constrained Linear Model Predictive Control (MPC) approach is disclosed herein that determines the optimal control output to be applied to each respective propeller motor 124 of the plurality of propeller motors 124 based on the set of positional and attitudinal properties of the multirotor 101, the position of the object 10, and the velocity of the object 10. The BLSMC strategy makes the flight controller system 300 robust against modeling uncertainties. A dynamic sliding manifold is used to eliminate the reaching phase for BLSMC to ensure robustness from the beginning. Furthermore, the constrained MPC is designed considering the closed loop error dynamics of the flight controller system 300 and predicts the future position and velocity errors over a prediction horizon and finds an optimal control input to drive the position and velocity errors to zero. For the inertial Z direction, a PD velocity controller is designed to descend with a predefined constant velocity. For the 3-D rotational dynamics of the multirotor 101, an attitude controller is implemented. For object collection, the multirotor 101 has a predefined yaw setpoint as it can collect objects with any heading.
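The vertical-axis PD velocity controller mentioned above can be illustrated as follows. This is only a sketch under the model v̇_{zq} = g − u_z from Eq. (1); the gains, descent rate, and class name are placeholder choices, not the tuned values used on the vehicle.

G = 9.81  # gravitational acceleration, m/s^2

class PDDescent:
    """Illustrative PD velocity controller for the inertial Z (NED, down-positive) axis."""

    def __init__(self, kp=1.5, kd=0.2, vz_des=0.3):
        self.kp, self.kd, self.vz_des = kp, kd, vz_des   # vz_des > 0 commands a descent
        self.prev_err = 0.0

    def uz(self, vz, dt):
        """Return the commanded vertical thrust term u_z in v_z_dot = g - u_z."""
        err = self.vz_des - vz
        d_err = (err - self.prev_err) / dt
        self.prev_err = err
        # Descending too slowly (err > 0) lowers u_z below g, accelerating the descent.
        return G - self.kp * err - self.kd * d_err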


1) Boundary Layer Sliding Mode Control (BLSMC)

A BLSMC approach of the flight controller system 300 is disclosed to alleviate the chattering of control signals that is commonly seen in conventional sliding mode control methods. As the designed control inputs are the target thrusts to be applied to the propeller motors 124 in the inertial X and Y directions, chattering thrust signals are detrimental because they cause the multirotor 101 to constantly roll and pitch back and forth. As a result, the measurements from the image capture device 153 of the position and velocity of the object 10 can be adversely affected. To design the BLSMC, dynamic sliding manifolds are defined as follows:






s_x = (v_{xo} - v_{xq}) + \lambda_x (x_o - x_q) + \phi_x(t)
s_y = (v_{yo} - v_{yq}) + \lambda_y (y_o - y_q) + \phi_y(t)    (2)


It can be noted that s_x(0) = 0 and s_y(0) = 0 if φ_x(0) = −ė_x(0) − λ_x e_x(0) and φ_y(0) = −ė_y(0) − λ_y e_y(0). This eliminates the reaching phase, and the sliding mode control system is kept inside the boundary layer from the beginning. Thus, the control inputs that keep the sliding mode control system in the boundary layer are designed as:











u_x = \frac{-b}{m_o}(v_{xo})^2\,\mathrm{sgn}(v_{xo}) + \frac{F_{emp}\cos\delta}{m_o} + \lambda_x (v_{xo} - v_{xq}) + \dot{\phi}_x + (\eta_x + \beta_x)\,\mathrm{sat}\!\left(\frac{s_x}{\epsilon_x}\right),    (3)

u_y = \frac{-b}{m_o}(v_{yo})^2\,\mathrm{sgn}(v_{yo}) + \frac{F_{emp}\sin\delta}{m_o} + \lambda_y (v_{yo} - v_{yq}) + \dot{\phi}_y + (\eta_y + \beta_y)\,\mathrm{sat}\!\left(\frac{s_y}{\epsilon_y}\right)







where sat(·) is the saturation function. The BLSMC is designed with the objective of keeping the sliding mode control system in the boundary layer and making it insensitive to modeling and parameter uncertainties. The next objective is to design φ_x and φ_y such that the position and velocity errors are regulated to the origin in an optimal manner.
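A minimal sketch of the x-axis control law in Eqs. (2)-(3) is shown below. The gains, uncertainty bound, object parameters, and F_emp value are placeholders, and the saturation is implemented with a simple clip.

import numpy as np

def blsmc_ux(e_x, de_x, v_xo, phi_x, dphi_x, delta,
             m_o=0.05, b=0.02, F_emp=0.1,
             lambda_x=1.0, eta_x=0.5, beta_x=0.2, eps_x=0.1):
    """Boundary layer sliding mode control input u_x from Eqs. (2)-(3).

    e_x = x_o - x_q and de_x = v_xo - v_xq; all parameter defaults are placeholders.
    """
    s_x = de_x + lambda_x * e_x + phi_x                 # dynamic sliding manifold, Eq. (2)
    sat = np.clip(s_x / eps_x, -1.0, 1.0)               # boundary layer saturation
    return ((-b / m_o) * v_xo**2 * np.sign(v_xo)
            + (F_emp * np.cos(delta)) / m_o
            + lambda_x * de_x + dphi_x
            + (eta_x + beta_x) * sat)                   # Eq. (3)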


Formulation for F_emp: F_emp is a function of v_air, which is determined empirically by collecting windspeed data using an anemometer. The windspeed due to propeller outwash is collected at horizontal distances d from the multirotor every 0.5 m until d = 3 m. For every distance d, the height h of the multirotor above the surface is also varied from 0.5 m to 2 m. Finally, v_air is obtained as a function of d and h. In field experiments, the sum of the first two terms in the controller is constrained within a bound to prevent sudden and aggressive maneuvers.
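A sketch of how the empirical map v_air(d, h) could be tabulated and interpolated is given below. The grid spacing follows the description above, but the tabulated speeds and the exponential decay used to fill them are purely hypothetical placeholders for the measured anemometer data.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

d_grid = np.arange(0.0, 3.5, 0.5)      # horizontal distance d from the multirotor [m]
h_grid = np.arange(0.5, 2.5, 0.5)      # multirotor height h above the surface [m]
# Placeholder outwash speeds (m/s); the real table comes from anemometer measurements.
v_air_table = 8.0 * np.exp(-0.5 * d_grid[:, None]) * np.exp(-0.3 * (h_grid[None, :] - 0.5))

v_air_fn = RegularGridInterpolator((d_grid, h_grid), v_air_table,
                                   bounds_error=False, fill_value=0.0)

def f_emp(d, h, k1=1e-3):
    """Empirical outwash force magnitude F_emp = k1 * |v_air|^2 (Eq. 1); k1 is a placeholder."""
    v_air = float(v_air_fn([[d, h]])[0])
    return k1 * v_air**2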


2) Constrained Linear Model Predictive Control (MPC)

A constrained linear MPC approach is proposed to design φ̇_x and φ̇_y. For the sake of brevity, only the design of φ̇_x is shown; φ̇_y is designed in the same way. Based on u_x in (3), the closed-loop error (e_x = x_o − x_q) dynamics is:











\ddot{e}_x = \dot{v}_{xo} - \dot{v}_{xq} = -\lambda_x \dot{e}_x - \dot{\phi}_x - (\eta_x + \beta_x)\,\mathrm{sat}\!\left(\frac{s_x}{\epsilon_x}\right)    (4)







When the sliding mode control system is within the boundary layer, \mathrm{sat}(s_x/\epsilon_x) = s_x/\epsilon_x. Then combining (4) and (2) gives:














\ddot{e}_x = -\lambda_x \dot{e}_x - \dot{\phi}_x - (\eta_x + \beta_x)\left(\frac{s_x}{\epsilon_x}\right)
           = -\left(\lambda_x + \frac{\zeta_x}{\epsilon_x}\right)\dot{e}_x - \frac{\zeta_x \lambda_x}{\epsilon_x}\,e_x - \dot{\phi}_x - \frac{\zeta_x}{\epsilon_x}\,\phi_x    (5)







where ζ_x = η_x + β_x. The closed-loop error dynamics then becomes:










\begin{bmatrix} \dot{e}_x \\ \ddot{e}_x \\ \dot{\phi}_x \end{bmatrix} =
\begin{bmatrix} 0 & 1 & 0 \\ -\dfrac{\zeta_x \lambda_x}{\epsilon_x} & -\lambda_x - \dfrac{\zeta_x}{\epsilon_x} & -\dfrac{\zeta_x}{\epsilon_x} \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} e_x \\ \dot{e}_x \\ \phi_x \end{bmatrix} +
\begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix} w    (6)







where w = φ̇_x. The continuous-time system is discretized and the cost function for the linear MPC is defined as follows:









J = \min_{U} \; E_N^T P E_N + \sum_{i=0}^{N-1}\left(E_i^T Q E_i + w_i^T R\, w_i\right)    (7)







where E_i = [e_x(i)  ė_x(i)  φ_x(i)]^T and N is the prediction horizon. The cost function (7) can be re-written as the following quadratic programming problem:










J = \min_{U} \; U^T\, 2\left(\tilde{R} + \tilde{S}^T \tilde{Q} \tilde{S}\right) U + x^T\, 2\, \tilde{T}^T \tilde{Q} \tilde{S}\, U,    (8)
\text{s.t.} \quad U_{min} \le U \le U_{max}





An optimal control sequence, U*, is generated by solving (8), and w = φ̇_x = U*(0). The matrices are defined as follows:










U = \mathrm{col}(w_0, \ldots, w_{N-1}), \quad \tilde{R} = \mathrm{diag}(R, \ldots, R),
\tilde{Q} = \mathrm{diag}(Q_1, \ldots, Q_{N-1}, P), \quad \tilde{T} = \mathrm{col}(A, \ldots, A^N),
\tilde{S} = \begin{bmatrix} B & 0 & \cdots & 0 & 0 \\ AB & B & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ A^{N-1}B & A^{N-2}B & \cdots & AB & B \end{bmatrix},
A = \begin{bmatrix} 1 & dt & 0 \\ -\dfrac{\zeta_x \lambda_x}{\epsilon_x}\, dt & 1 - \lambda_x\, dt - \dfrac{\zeta_x}{\epsilon_x}\, dt & -\dfrac{\zeta_x}{\epsilon_x}\, dt \\ 0 & 0 & 1 \end{bmatrix}, \quad
B = \begin{bmatrix} 0 \\ -dt \\ dt \end{bmatrix}






where dt is the sampling time. This constrained MPC was implemented using the qpOASES library.
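The sketch below illustrates one way to assemble and solve this box-constrained QP in Python. It mirrors Eqs. (6)-(8) but substitutes a generic bounded solver (SciPy's L-BFGS-B) for qpOASES, and the default gains, horizon, and weights are illustrative only.

import numpy as np
from scipy.optimize import minimize

def build_prediction_matrices(A, B, N):
    """Stack S_tilde (input-to-state map) and T_tilde (free response) over the horizon N."""
    n, m = B.shape
    S = np.zeros((N * n, N * m))
    T = np.zeros((N * n, n))
    for i in range(N):
        T[i * n:(i + 1) * n, :] = np.linalg.matrix_power(A, i + 1)
        for j in range(i + 1):
            S[i * n:(i + 1) * n, j * m:(j + 1) * m] = np.linalg.matrix_power(A, i - j) @ B
    return S, T

def mpc_phi_dot(E0, lam_x=1.0, zeta_x=2.0, eps_x=0.1, dt=0.02, N=10,
                Q=np.eye(3), P=250.0 * np.eye(3), R=0.1, u_lim=10.0):
    """Solve the constrained QP of Eqs. (7)-(8) and return w = phi_dot_x = U*(0)."""
    A = np.array([[1.0, dt, 0.0],
                  [-(zeta_x * lam_x / eps_x) * dt,
                   1.0 - lam_x * dt - (zeta_x / eps_x) * dt,
                   -(zeta_x / eps_x) * dt],
                  [0.0, 0.0, 1.0]])                     # discretized Eq. (6)
    B = np.array([[0.0], [-dt], [dt]])
    S, T = build_prediction_matrices(A, B, N)
    Qt = np.kron(np.eye(N), Q)
    Qt[-3:, -3:] = P                                    # terminal weight P on E_N
    Rt = R * np.eye(N)
    H = 2.0 * (Rt + S.T @ Qt @ S)                       # quadratic term of Eq. (8)
    f = 2.0 * S.T @ Qt @ T @ E0                         # linear term of Eq. (8)
    cost = lambda U: 0.5 * U @ H @ U + f @ U
    res = minimize(cost, np.zeros(N), jac=lambda U: H @ U + f,
                   method="L-BFGS-B", bounds=[(-u_lim, u_lim)] * N)
    return res.x[0]                                     # first element of U*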


C. Simulation Results

The performance of the flight controller system 300 is compared with that of a conventional sliding mode controller (SMC) in simulation. Due to space limitations, comparisons are provided only along the x-axis in FIGS. 6A-6F. In this simulation, the following gains were used: ζ_x = 5, λ_x = 5, ε_x = 0.1, R = 0.1, P = 250·I_3, and N = 10. The lower and upper bounds U_min and U_max are −10 m/s² and 10 m/s². The initial positions are x_q(0) = 2 and x_o(0) = 0. The initial velocities are zero. It can be noted that the multirotor can track the position and velocity of the object with both approaches. However, with the conventional SMC, both positions continue to grow unbounded with time, whereas with the proposed approach both positions remain bounded after they converge, as shown in the first subplot. This can be attributed to the high terminal weight P imposed on the terminal state. Moreover, chattering occurs in the SMC control signals, whereas there is no chattering in the control input of the flight controller system 300 using BLSMC+MPC, due to the boundary layer implementation.
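For reference, a self-contained 1-D closed-loop sketch of the BLSMC tracking loop using the simulation gains quoted above is shown below. A simple exponential decay of φ stands in for the MPC-designed φ̇, the object parameters are placeholders, and the outwash force is switched off, so the numbers it produces are illustrative rather than a reproduction of FIGS. 6A-6F.

import numpy as np

m_o, b, F_emp, delta = 0.05, 0.02, 0.0, 0.0       # placeholder object parameters
lam, eta, beta, eps = 5.0, 0.5, 0.2, 0.1          # lambda_x, eta_x, beta_x, epsilon_x
k_phi, dt, T_end = 2.0, 0.01, 8.0                 # k_phi: decay rate standing in for the MPC

x_q, v_q = 2.0, 0.0                               # multirotor state, x_q(0) = 2
x_o, v_o = 0.0, 0.0                               # object state, x_o(0) = 0
phi = -(v_o - v_q) - lam * (x_o - x_q)            # phi_x(0) eliminates the reaching phase

for _ in range(int(T_end / dt)):
    e, de = x_o - x_q, v_o - v_q
    s = de + lam * e + phi                        # dynamic sliding manifold, Eq. (2)
    dphi = -k_phi * phi                           # stand-in for the MPC output w = phi_dot
    u = ((-b / m_o) * v_o**2 * np.sign(v_o) + (F_emp * np.cos(delta)) / m_o
         + lam * de + dphi + (eta + beta) * np.clip(s / eps, -1.0, 1.0))   # Eq. (3)
    a_o = (-b * v_o**2 * np.sign(v_o) + F_emp * np.cos(delta)) / m_o       # Eq. (1)
    x_q, v_q = x_q + v_q * dt, v_q + u * dt       # multirotor x-dynamics
    x_o, v_o = x_o + v_o * dt, v_o + a_o * dt     # object x-dynamics
    phi += dphi * dt

print(f"final tracking error e_x = {x_o - x_q:.4f} m")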


V. Field Experiments

The efficacy of the present system 100 is validated through a series of field experiments. The experimental setup, results and discussions on the flight tests are presented in this section.


A. Experimental Setup

The outdoor field experiments were conducted in a lake park in Gilbert, Arizona (lat: 33.3589, lon: −111.7685). The experiments were conducted during September and October 2020, and the weather conditions were mostly sunny with some clouds in late afternoons. The wind speeds varied between 0 and 5 mph with sporadic wind gusts. Additionally, multiple experimental trials were conducted with a dark aluminum can (cylindrical, 200 g) on a cloudy day to demonstrate the system's potential. Due to very limited instances of cloudy days in Arizona, object collection experiments could not be conducted with a dark carton (cuboidal, 300 g).









TABLE II
Experimental results for 22 successful trials and 2 failed attempts

                         Success          1st failure    2nd failure
Final distance (m)       0.15 ± 0.06      0.38           0.85
Landing duration (s)     7.41 ± 1.10      7.26           6.43
Used capacity (mAh)      80.50 ± 11.63    76.5           72









B. Experimental Results and Discussion

The aerial manipulation system achieved 22 successful attempts and 2 failed attempts. Due to the unavailability of ground truth data in the outdoor scenario, the error between the final positions of the multirotor and the object was utilized to analyze the performance of the system. The origin for all the experiments was set when the GPS lock was acquired. For the sake of brevity, the results of the experimental trials are summarized in Table II, including the landing time, the battery capacity consumed for performing the landing, and the norm of the error between the final positions of the floating object and the multirotor. Due to space constraints, one successful trial and one failed trial are described thoroughly. ζ_x = ζ_y = 2, λ_x = λ_y = 1.0, and η_x = η_y = 0.5 were used for all the experiments.


The experimental trials demonstrated a high success rate for object collection using the net mechanism 104 and the object detection system 200. Table II shows that the battery capacity consumed during autonomous landing is 80.50 mAh, which is 14% of 575 mAh (the average battery consumption during one trial). The two failed attempts happened in the late afternoon; one with the can and the other with the bottle. One successful attempt and one failed attempt are demonstrated in FIGS. 7A-7F and 8A-8F, respectively. From FIGS. 7A-7F, it can be noted that the multirotor 101 reliably tracks the position of the object along the X and Y axes. The aerial system starts the descent from a height of 1.8 meters above the water surface. The LiDAR is operative above 0.5 meters, so once the multirotor is within 0.5 meters of the water surface, the multirotor 101 is programmed to descend at a constant, low velocity for about 1 second without LiDAR data, after which it drops on the water surface. The rationale behind descending in this manner below 0.5 meters is to ensure that the multirotor 101 continues to track the object when it is in proximity to the water surface without causing a water splash.


Additionally, if the multirotor 101 drops on the object from 0.5 meters, the water splash can be detrimental to the onboard electrical components. In this successful experiment, the total time taken, from object detection to autonomously landing on it, is 6.294 seconds, whereas the average time for the 22 successful trials is 7.41 seconds. FIGS. 8A-8F illustrate one failed attempt. Similar to a successful attempt, the multirotor 101 reliably tracks the position and velocity of the object along the X and Y axes until 5.407 seconds. At that time, the multirotor 101 is within 0.5 meters of the water surface. Right at this time, the object goes out of the frame along the multirotor 101 X-axis. As a result, the multirotor 101 pitches forward, in an attempt to track the object, while continuing to descend. Despite the pitching maneuver, the object lands outside the workspace of the net 142 due to erratic motion caused by turbulent water flow. The final distance between the positions of the object and the multirotor 101 is 0.85 meters, which is outside the capture void 132. Furthermore, both failures occurred in the late afternoon when collecting objects with a cylindrical surface, which can be attributed to partial object visibility. Some potential methods to further improve object collection include the use of a gimbaled camera to provide a flexible field of view as the multirotor 101 gets close to the water surface. Experimental trials were also conducted on a cloudy day with a dark aluminum can. FIGS. 9A-9F demonstrate one successful attempt for dark-shaded object collection on a cloudy day. The multirotor 101 successfully lands on the object, and the final distance between the positions of the object and the multirotor 101 is 0.12 m. For comparison, a standard SMC was implemented and the results are shown in FIGS. 10A-10F. The flight test for the standard SMC was conducted with ζ_x = ζ_y = 1 and λ_x = λ_y = 0.5, as the flight controller system 300 was extremely aggressive with ζ_x = ζ_y = 2 and λ_x = λ_y = 1.0. During this flight test, the final distance between the object and the multirotor 101 was 0.39 meters and the object was outside the net's workspace. The failure can be attributed to the jitters in the control inputs.


Computing System


FIG. 11 is a schematic block diagram of an example device 400 that may be used with one or more embodiments described herein, e.g., as a component of system 100 and/or as high-level computing device 180 shown in FIGS. 2A, 2B and 3.


Device 400 comprises one or more network interfaces 410 (e.g., wired, wireless, PLC, etc.), at least one processor 420, and a memory 440 interconnected by a system bus 450, as well as a power supply 460 (e.g., battery, plug-in, etc.).


Network interface(s) 410 include the mechanical, electrical, and signaling circuitry for communicating data over the communication links coupled to a communication network. Network interfaces 410 are configured to transmit and/or receive data using a variety of different communication protocols. As illustrated, the box representing network interfaces 410 is shown for simplicity, and it is appreciated that such interfaces may represent different types of network connections such as wireless and wired (physical) connections. Network interfaces 410 are shown separately from power supply 460, however it is appreciated that the interfaces that support PLC protocols may communicate through power supply 460 and/or may be an integral component coupled to power supply 460.


Memory 440 includes a plurality of storage locations that are addressable by processor 420 and network interfaces 410 for storing software programs and data structures associated with the embodiments described herein. In some embodiments, device 400 may have limited memory or no memory (e.g., no memory for storage other than for programs/processes operating on the device and associated caches).


Processor 420 comprises hardware elements or logic adapted to execute the software programs (e.g., instructions) and manipulate data structures 445. An operating system 442, portions of which are typically resident in memory 440 and executed by the processor, functionally organizes device 400 by, inter alia, invoking operations in support of software processes and/or services executing on the device. These software processes and/or services may include Object Detection and Flight Control processes/services 490 described herein with reference to Object Detection System 200 and Flight Controller System 300. Note that while Object Detection and Flight Control processes/services 490 is illustrated in centralized memory 440, alternative embodiments provide for the process to be operated within the network interfaces 410, such as a component of a MAC layer, and/or as part of a distributed computing network environment.


It will be apparent to those skilled in the art that other processor and memory types, including various computer-readable media, may be used to store and execute program instructions pertaining to the techniques described herein. Also, while the description illustrates various processes, it is expressly contemplated that various processes may be embodied as modules or engines configured to operate in accordance with the techniques herein (e.g., according to the functionality of a similar process). In this context, the terms module and engine may be interchangeable. In general, the term module or engine refers to a model or an organization of interrelated software components/functions. Further, while the Object Detection and Flight Control processes/services 490 is shown as a standalone process, those skilled in the art will appreciate that this process may be executed as a routine or module within other processes.


It should be understood from the foregoing that, while particular embodiments have been illustrated and described, various modifications can be made thereto without departing from the spirit and scope of the invention as will be apparent to those skilled in the art. Such changes and modifications are within the scope and teachings of this invention as defined in the claims appended hereto.

Claims
  • 1. A multirotor device, comprising: a base frame defining a plurality of arms, each arm including at least one propeller of a plurality of propellers rotatable by an associated propeller motor of a plurality of propeller motors;a landing assembly located underneath the base frame and including a landing structure framing a capture void, wherein the capture void is configured to encapsulate an object and wherein the capture void defines a first terminus and a second terminus located opposite from the first terminus;an object detection system being operable to estimate a position of the object; anda flight controller system including: a plurality of sensors collectively operable to estimate a set of positional and attitudinal properties of the multirotor; anda processor in communication with a memory, the memory including instructions, which, when executed, cause the processor to: determine an optimal control output to be applied to each respective propeller motor of the plurality of propeller motors based on the position of the object and the set of positional and attitudinal properties of the multirotor such that the multirotor encapsulates the object within the capture void upon landing on a landing surface; andapply the optimal control output to each respective propeller motor of the plurality of propeller motors.
  • 2. The multirotor device of claim 1, further comprising: a net mechanism including: a net defining a first portion and a second portion, wherein the first portion of the net is affixed to the landing structure at the first terminus of the capture void; anda moveable rod affixed to the second portion of the net and configured for positioning between the first terminus of the capture void and the second terminus of the capture void;wherein the net spans across the capture void when the moveable rod is positioned at the second terminus of the capture void.
  • 3. The multirotor device of claim 2, wherein the moveable rod is moveable by a servo arm operable to actuate the moveable rod between the first terminus of the capture void and the second terminus of the capture void.
  • 4. The multirotor device of claim 1, wherein the flight controller system incorporates a set of dynamics models that describe a relationship between the set of positional and attitudinal properties of the multirotor, the position of the object and a velocity of the object.
  • 5. The multirotor device of claim 4, wherein the set of dynamics models includes a model descriptive of a relationship between one or more airflow forces generated by the plurality of propellers of the multirotor and the position of the object and the velocity of the object.
  • 6. The multirotor device of claim 1, wherein the flight controller system employs a sliding mode control system to determine the optimal control output based on the set of positional and attitudinal properties of the multirotor, the position of the object and a velocity of the object, the sliding mode control system including a dynamic sliding manifold operation based on constrained linear model predictive control that minimizes a position error between a position of the object and a position of the multirotor and minimizes a velocity error between a velocity of the object and a velocity of the multirotor and keeps the sliding mode control system within a boundary layer of the sliding mode control system.
  • 7. The multirotor device of claim 1, wherein the object detection system includes an image capture device in communication with the processor and the memory, the memory including instructions, which, when executed, cause the processor to: receive a video feed including a frame indicative of the object from the image capture device;extract a plurality of closed contours from the frame;identify a largest closed contour of the plurality of closed contours placed within a region for a minimum threshold quantity of consecutive frames, wherein a position of the largest closed contour within the frame is indicative of the position of the object; andapply a specularity removal operation to the frame.
  • 8. The multirotor device of claim 7, wherein the memory of the object detection system further includes instructions, which, when executed, cause the processor to: determine a minimum intensity value of a plurality of minimum intensity values for each pixel within the frame;determine an intensity threshold value to distinguish one or more highlighted pixels within the frame using a mean value and a standard deviation value of the plurality of minimum intensity values;determine an offset value based on the intensity threshold value, the offset value being used to indicate one or more pixels that need to be modified to suppress a reflection within the frame;determine a specular component of each pixel within the frame by subtracting the offset value from the intensity threshold value for each pixel outside of a bounding box indicative of the position of the object; andsubtract the specular component from each respective pixel outside of the bounding box indicative of the position of the object for a subsequent frame of the plurality of frames.
  • 9. The multirotor device of claim 7, wherein the memory of the object detection system further includes instructions, which, when executed, cause the processor to: estimate a velocity of the object based on one or more positions of the object across a plurality of frames.
  • 10. The multirotor device of claim 1, wherein the set of positional and attitudinal properties of the multirotor include an attitude, a position, a velocity, and an altitude of the multirotor relative to the landing surface.
  • 11. The multirotor device of claim 1, wherein the landing structure is a buoyant structure.
  • 12. The multirotor device of claim 11, wherein the buoyant structure includes a first buoyant sub-structure and a second buoyant sub-structure.
  • 13. A method, comprising: receiving, at a processor in communication with a memory, a video feed including a frame indicative of an object from an image capture device;extracting, at the processor, a plurality of closed contours from the frame;identifying, at the processor, a largest closed contour of the plurality of closed contours placed within a region for a minimum threshold quantity of consecutive frames, wherein a position of the largest closed contour within the frame is indicative of the position of the object; andapplying a specularity removal operation to the frame.
  • 14. The method of claim 13, further comprising: determining, at the processor, a minimum intensity value of a plurality of minimum intensity values for each pixel within the frame;determining, at the processor, an intensity threshold value to distinguish one or more highlighted pixels within the frame using a mean value and a standard deviation value of the plurality of minimum intensity values;determining, at the processor, an offset value based on the intensity threshold value, the offset value being used to indicate one or more pixels that need to be modified to suppress a reflection within the frame;determining, at the processor, a specular component of each pixel within the frame by subtracting the offset value from the intensity threshold value for each pixel outside of a bounding box indicative of the position of the object; andsubtracting, at the processor, the specular component from each respective pixel outside of the bounding box indicative of the position of the object for a subsequent frame of the plurality of frames.
  • 15. The method of claim 14, further comprising: estimating, at the processor, a velocity of the object based on one or more positions of the object across a plurality of frames.
  • 16. The method of claim 13, further comprising: determining, by the processor, an optimal control output to be applied to a respective propeller motor of a plurality of propeller motors of a multirotor based on the position of the object and a set of positional and attitudinal properties of the multirotor such that the multirotor encapsulates the object within a capture void of the multirotor upon landing on a landing surface; andapplying the optimal control output to each respective propeller motor of the plurality of propeller motors.
  • 17. The method of claim 13, further comprising: actuating a net mechanism of the multirotor such that a net of the multirotor spans across the capture void to capture the object within the net.
  • 18. The method of claim 13, further comprising: applying a dynamic sliding manifold operation within a sliding mode control system based on constrained linear model predictive control that minimizes a position error between a position of the object and a position of the multirotor and minimizes a velocity error between a velocity of the object and a velocity of the multirotor and keeps the sliding mode control system within a boundary layer of the sliding mode control system.
  • 19. A method, comprising: estimating, by a processor in communication with a memory, a position of an object relative to a multirotor;determining, by the processor, an optimal control output to be applied to a respective propeller motor of a plurality of propeller motors of the multirotor based on the position of the object and a set of positional and attitudinal properties of the multirotor such that the multirotor encapsulates the object within a capture void of the multirotor upon landing on a landing surface; andapplying the optimal control output to each respective propeller motor of the plurality of propeller motors.
  • 20. The method of claim 19, further comprising: applying a dynamic sliding manifold operation within a sliding mode control system based on constrained linear model predictive control that minimizes a position error between a position of the object and a position of the multirotor and minimizes a velocity error between a velocity of the object and a velocity of the multirotor and keeps the sliding mode control system within a boundary layer of the sliding mode control system.
  • 21. The method of claim 19, further comprising: receiving, at the processor, a video feed including a frame indicative of an object from an image capture device;extracting, at the processor, a plurality of closed contours from the frame;identifying, at the processor, a largest closed contour of the plurality of closed contours placed within a region for a minimum threshold quantity of consecutive frames, wherein a position of the largest closed contour within the frame is indicative of the position of the object; andapplying a specularity removal operation to the frame.
  • 22. The method of claim 19, further comprising: estimating, at the processor, a velocity of the object based on one or more positions of the object across a plurality of frames.
  • 23. The method of claim 19, further comprising: actuating a net mechanism of the multirotor such that a net of the multirotor spans across the capture void to capture the object within the net.
CROSS-REFERENCE TO RELATED APPLICATIONS

This is a non-provisional application that claims benefit to U.S. Provisional Patent Application Ser. No. 63/178,645 filed 23 Apr. 2021, which is herein incorporated by reference in its entirety.

Provisional Applications (1)
Number Date Country
63178645 Apr 2021 US