The present technology relates generally to the mitigation of threats from unmanned aircraft systems and, more specifically, to interdiction systems using one or more of radar, fixed cameras, and interceptor aircraft to mitigate such threats.
Small unmanned aircraft systems (“sUAS”), such as radio-controlled drones or quadcopters, can pose a serious threat to civil aviation traffic and airspaces, ground installations, other high-value assets, and large crowds. These sUAS can be easily obtained by recreational hobbyists and by those who seek to operate them for malicious purposes. The effective guidance and control capabilities of commercially available sUAS, as well as their autonomous flight control features, make these devices especially dangerous as standoff threats. Weapons or other dangerous instruments can be attached to an sUAS, further increasing the threat posed to sensitive locations. Swarm attacks involving multiple simultaneous sUAS threats are especially worrisome and present unique challenges.
Attempts to counter the threat posed by autonomous sUAS using radio frequency techniques, for instance by detecting sUAS control signals, co-opting sUAS wireless communication links, or disrupting GPS signals by spoofing, can be ineffective in some situations. Recent improvements to the signal security of commercial GPS systems, such as adding digital signatures to GPS civil navigation messages, have made spoofing increasingly difficult. Similarly, approaches based on radio frequency disruption or control are becoming increasingly ineffective as attackers become more sophisticated, and these radio frequency approaches can be ineffective against fully-autonomous sUAS. Accordingly, a need exists for an interdiction system to counter the threat posed by sUAS.
Systems and methods are provided for the interdiction of sUAS, such as drones, and other threats, and can provide greater efficacy than spoofing approaches. In one aspect, there is a method for drone interdiction. The method can include detecting, using one or more radars, a target aircraft within a surveillance zone. The method can include generating first one or more interceptor aircraft commands to direct an interceptor aircraft to the target aircraft, based on data from the one or more radars. The method can include commanding the interceptor aircraft according to the first one or more interceptor aircraft commands. The method can include acquiring a target image using a camera mounted on the interceptor aircraft. The method can include generating, in response to determining the target aircraft is in the target image, second one or more interceptor aircraft commands to direct the interceptor aircraft to the target aircraft, based on at least one of the target image, a fixed camera target image from one or more fixed cameras, and the data from the one or more radars. The method can include commanding the interceptor aircraft according to the second one or more interceptor aircraft commands.
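The following Python sketch illustrates one way the two-phase guidance described in this aspect could be structured: radar-derived commands until the target appears in the on-board camera image, then commands refined with image data. It is a minimal sketch; all names, ranges, and the simple pursuit law are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch only; names, ranges, and the pursuit law are assumptions.
import numpy as np

CAMERA_RANGE = 75.0    # m, assumed on-board camera detection range
CAPTURE_RANGE = 5.0    # m, assumed immobilization range
DT = 0.1               # s, assumed command update interval

def pursuit_command(interceptor_pos, target_pos, speed=18.0):
    """Fly directly at the current target estimate at a fixed speed."""
    direction = target_pos - interceptor_pos
    return speed * direction / np.linalg.norm(direction)

target = np.array([400.0, 300.0, 60.0])   # position from radar detection
target_vel = np.array([-8.0, -6.0, 0.0])
interceptor = np.array([0.0, 0.0, 0.0])

phase = "radar"
while np.linalg.norm(target - interceptor) > CAPTURE_RANGE:
    if phase == "radar" and np.linalg.norm(target - interceptor) < CAMERA_RANGE:
        phase = "camera"                  # target acquired in the target image
    # First commands use radar data only; second commands can also use the
    # on-board target image and any fixed camera target image.
    interceptor += pursuit_command(interceptor, target) * DT
    target += target_vel * DT

print("intercepted near", interceptor.round(1), "using", phase, "guidance")
```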
In some embodiments, the method can include tracking the target aircraft based on the fixed camera target image, a fixed camera system model, and the data from the one or more radars. In some embodiments, the method can include determining that the target aircraft is a threat. In some embodiments, the method can include determining that the target aircraft is a threat by analyzing the fixed camera target image. In some embodiments, the method can include commanding the interceptor aircraft to an interceptor aircraft base station in response to determining the target aircraft is not a threat.
In some embodiments, the method can include immobilizing, by the interceptor aircraft, the target aircraft. In some embodiments, the interceptor aircraft can use a net assembly to immobilize the target aircraft. In some embodiments, the interceptor aircraft can use a net gun to immobilize the target aircraft.
In another aspect, there is a method for drone interdiction. The method can include detecting, using one or more radars, a target aircraft within a surveillance zone. The method can include generating first one or more interceptor aircraft commands to direct an interceptor aircraft to the target aircraft, based on data from the one or more radars. The method can include commanding the interceptor aircraft according to the first one or more interceptor aircraft commands. The method can include acquiring a target image using a camera mounted on the interceptor aircraft. The method can include determining, in response to determining the target aircraft is in the target image, an interception location, based on at least one of the target image, a fixed camera target image from one or more fixed cameras, and the data from the one or more radars. The method can include generating second one or more interceptor aircraft commands to direct the interceptor aircraft to the target aircraft, based on the interception location. The method can include commanding the interceptor aircraft according to the second one or more interceptor aircraft commands.
In some embodiments, determining the interception location in response to determining the target aircraft is in the target image, based on at least one of the target image, the fixed camera target image from the one or more fixed cameras, and the data from the one or more radars, can include generating a first track state, based on the target image. In some embodiments, the determining can include generating a first track score, based on the first track state. In some embodiments, the determining can include generating a second track state, based on one or more of the fixed camera target image from the one or more fixed cameras and the data from the one or more radars. In some embodiments, the determining can include generating a second track score, based on the second track state. In some embodiments, the determining can include selecting the first track state or the second track state by comparing the first track score and the second track score.
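A minimal Python sketch of this track-selection step follows. The scoring rule shown (a log-likelihood of the residual, normalized by covariance volume, in the spirit of the track scoring discussed later in this description) and all numeric values are assumptions, not the specification's exact formula.

```python
# Hypothetical track selection: score each track and keep the better one.
import numpy as np

def track_score(residual, residual_cov):
    """Log-likelihood-style score: a smaller normalized residual and a
    tighter covariance yield a higher score (illustrative rule)."""
    d2 = residual @ np.linalg.solve(residual_cov, residual)  # Mahalanobis^2
    return -0.5 * np.log(np.linalg.det(residual_cov)) - 0.5 * d2

measurement = np.array([118.5, 82.0, 39.0])

state_image = np.array([120.0, 80.0, 40.0])   # first track state (target image)
cov_image = np.diag([4.0, 4.0, 9.0])
state_other = np.array([118.0, 83.0, 38.0])   # second track state (fixed camera/radar)
cov_other = np.diag([1.0, 1.0, 2.0])

score_image = track_score(measurement - state_image, cov_image)   # first track score
score_other = track_score(measurement - state_other, cov_other)   # second track score
selected = state_image if score_image > score_other else state_other
```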
In another aspect, there is a method for drone interdiction. The method can include detecting a target aircraft, based on data from one or more of one or more radars, a fixed camera image from one or more fixed cameras, and an interceptor aircraft image from a camera mounted to an interceptor aircraft. The method can include generating an interception location where the interceptor aircraft and the target aircraft are expected to meet. The method can include directing, based on the interception location, the interceptor aircraft to the interception location to immobilize the target aircraft. In some embodiments, directing, based on the interception location, the interceptor aircraft to the interception location to immobilize the target aircraft can include capturing the target aircraft using a hanging net. In some embodiments, directing, based on the interception location, the interceptor aircraft to the interception location to immobilize the target aircraft can include firing a net gun at the target to capture the target aircraft.
Other aspects and advantages of the present technology will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating the principles of the technology by way of example only.
The foregoing and other objects, features, and advantages of the present technology, as well as the technology itself, will be more fully understood from the following description of various embodiments, when read together with the accompanying drawings.
The interdiction systems and methods described herein can offer improved real-time monitoring and interception capabilities over other methods of sUAS interdiction. The use of multiple modes of detection, including, for example, distributed radar, fixed camera sensors, and distributed interceptor aircraft, can provide a flexible and rapid sUAS mitigation response to reliably identify, capture, and defeat sUAS and swarm threats. The interdiction systems and methods described herein can lead to faster response and verification times, as well as reduce the time between when an sUAS threat is detected and when it is immobilized. Immobilizing or capturing sUAS threats intact can advantageously improve the likelihood that the sUAS operator can be identified and held accountable.
Surveillance zone 120 represents the area that is monitored and protected from the threat of incoming sUAS, like drones 122A-122C and 124A-124C. In some embodiments, the coverage of Doppler radars 110A-110C can define surveillance zone 120. In other embodiments, each of Doppler radars 110A-110C can individually define its own surveillance zone. The number of Doppler radars used in an interdiction system can be based on the desired size of the surveillance zone as well as the coverage area of each individual Doppler radar. Longer range detection can be achieved by increasing transmit power of radars 110A-110C, or by increasing antenna gain of radars 110A-110C. Doppler radars 110A-110C can be, for example, Laufer Wind MD-12 Doppler radars produced by Laufer Wind of New York, N.Y., though other types of radars can be used in accordance with the technology. Radars of this type can detect and track small targets with a radar cross section (“RCS”) of less than 0.03 square meters, for example, birds and small sUAS, to ranges up to four kilometers.
Fixed camera 130 monitors surveillance zone 120. Fixed camera 130 can be disposed at a fixed location in surveillance zone 120 and can be capable of tilting and panning via a gimbal. In some embodiments, fixed camera 130 can be a plurality of fixed cameras distributed along the perimeter of surveillance zone 120. In some embodiments, the coverage range of fixed camera 130 can define surveillance zone 120. Fixed camera 130 can be mounted above the ground, for example, 20 feet above the ground or at a height sufficient to surmount nearby obstacles such as trees and buildings. In some embodiments, multiple fixed cameras can be used to increase coverage area or to improve the resolution of captured images by utilizing the camera nearest to a target. Fixed camera 130 can acquire images of targets and track targets at a shorter range than radars 110A-110C, for example, a range of less than 500 meters. In some embodiments, fixed camera 130 is cued or pointed in the direction of a target by central controller 115 based on track data from radars 110A-110C. Fixed camera 130 can transmit video and/or image data to central controller 115. Images from fixed camera 130 can be used to discriminate between targets that present a threat, such as drones 122A-122C and 124A-124C, and targets that do not present a threat, such as birds or random ground clutter. In some embodiments, a user can verify a threat based on a video or still image feed from fixed camera 130. In other embodiments, central controller 115 can automatically analyze images captured by fixed camera 130 to verify whether an object is a threat. In some embodiments, images captured by fixed camera 130 can be analyzed by a processor collocated with fixed camera 130. Fixed camera 130 can capture video or images in the visible spectrum, in the infrared (IR) spectrum, or both. In some embodiments, the field of view (“FOV”) of fixed camera 130 can be selected based on the angular accuracy of radars 110A-110C and the optics of fixed camera 130. For example, where the angular resolution of the radar is +/−0.2 degrees, the minimum fixed camera FOV can be 4 degrees. Fixed camera 130 can have a zoom lens, which can have a wider FOV at a lower zoom, as compared with a higher zoom. In some embodiments, fixed camera 130 can require an FOV that is, for example, five times larger than the angular resolution of radars 110A-110C.
Interceptor aircraft 140A-140B are distributed in surveillance zone 120. Interceptor aircraft 140A-140B can be, for example, quadcopter drones. Interceptor aircraft 140A-140B can be approximately 105 centimeters square by 30 centimeters high. The dimensions of interceptor aircraft 140A-140B can vary depending on the application, including, for example, using a smaller interceptor aircraft where a smaller surveillance zone is desired or a larger interceptor aircraft where a larger surveillance zone is required. Interceptor aircraft 140A-140B can be, for example, a quadcopter or an octocopter, such as the DJI S1000+. In some embodiments, there can be only one interceptor aircraft per surveillance zone 120. In other embodiments, two or more interceptor aircraft can be used, depending on the size of the surveillance zone and the desired response time of the interceptor aircraft. In some embodiments, each interceptor aircraft can intercept its own target, such as in the event of a swarm attack with multiple target sUAS. Interceptor aircraft 140A-140B can be distributed at regular intervals throughout surveillance zone 120 to minimize the time between when a target, such as drones 122A-122C and 124A-124C, is detected by interdiction system 100 and when it is intercepted and immobilized by one or more of interceptor aircraft 140A-140B.
Interceptor aircraft 140A-140B can transmit video or image data or other information detected about its operational state, including, for example, pitch, yaw, or rotor power, via a wireless connection to central controller 115. Interceptor aircraft 140A-140B can include inertial measurement units (“IMUs”) that include sensors to measure various parameters of the flight of interceptor aircraft 140A-140B. For example, the IMU can include rate gyros and accelerometers for measuring the acceleration of interceptor aircraft 140A-140B and angular rates (e.g., roll, pitch, and yaw) of interceptor aircraft 140A-140B. Interceptor aircraft 140A-140B can receive command data via a wireless connection from central controller 115. In some embodiments, a user can manually override control of interceptor aircraft 140A-140B by central controller 115 and pilot interceptor aircraft 140A-140B via manual input.
Interceptor aircraft 140A-140B include an on-board camera capable of capturing video or images in the visible spectrum, IR spectrum, or both. In some embodiments, the on-board camera can have a range of up to 75 meters and can have six-times zoom capability. The on-board camera can have an FOV capable of compensating for any angular accuracy deficiencies of radars 110A-110C, for example, a 4 degree FOV. The on-board camera can have a zoom lens, which can have a wider FOV at a lower zoom, as compared with a higher zoom. In some embodiments, interceptor aircraft 140A-140B can use an on-board camera to verify whether an object, such as drones 122A-122C and 124A-124C, presents a threat. Central controller 115 can determine whether objects detected by the cameras mounted to interceptor aircraft 140A-140B pose a threat, such as drones 122A-122C and 124A-124C, or whether they do not, such as birds or miscellaneous ground clutter. Where interdiction system 100 determines that the tracked object shown in the image acquired by cameras mounted to interceptor aircraft 140A-140B is not a threat, it can command interceptor aircraft 140A-140B to return to an interceptor aircraft base station.
In some embodiments, interceptor aircraft 140A-140B can be capable of a maximum flight speed of 30-45 miles per hour. Interceptor aircraft 140A-140B can be located at interceptor aircraft base stations, not depicted, when not in use. In some embodiments, interceptor aircraft 140A-140B are capable of a flight time of fifteen minutes or longer before requiring charging. In some embodiments, interceptor aircraft 140A-140B are capable of carrying a payload of up to six kilograms.
In some embodiments, interceptor aircraft 140A-140B include hanging nets to intercept, disrupt the flight of, and/or capture sUAS targets, such as drones 122A-122C and 124A-124C. In other embodiments, interceptor aircraft 140A-140B include mounted net guns that can be fired at an sUAS target. The net gun can include a net gun housing and a net propulsion barrel that cooperate to propel a net toward a target with the aid of, for example, a high-pressure carbon dioxide canister. The net gun can fire, for example, a square net that is eight feet by eight feet by two inches in dimension. The net gun can propel a net at a nominal velocity of, for example, thirty feet per second, with a range of thirty feet. Net guns can be advantageous because they minimize drag and energy dissipation of interceptor aircraft 140A-140B during flight. In some embodiments, central controller 115 can control the firing of the net gun. In some embodiments, a computer on interceptor aircraft 140A-140B can control the firing of the net gun.
Central controller 115 of interdiction system 100 can transmit and receive data from each of the components of interdiction system 100, including, for example, Doppler radars 110A-110C, fixed camera 130, and interceptor aircraft 140A-140B. Central controller 115 connects with these components through network 150, which can be, for example, an encrypted managed-UDP (user datagram protocol) wide area network. In some embodiments, central controller 115 is connected to stationary components of interdiction system 100 by a wired connection, for example, 10/100 or Gigabit Ethernet connections. Central controller 115 can be connected to interceptor aircraft 140A-140B through a wireless connection. The wireless connection can be established by RF receivers 155A-155B connected to central controller 115 that interface with a radio modem, for example, a 900 MHz radio modem, on each of interceptor aircraft 140A-140B. In other embodiments, central controller 115 can be connected to all components through wireless connections. In some embodiments, central controller 115 can monitor the health of one or more of radars 110A-110C, fixed camera 130, and interceptor aircraft 140A-140B. In some embodiments, central controller 115 can be a rack-mounted computer. In other embodiments, central controller 115 can be a ruggedized unit for outdoor operation. Central controller 115 can be capable of simultaneously tracking more than thirty targets in surveillance zone 120.
Mission manager 215 includes radar control and track fusion module 230. Radar control and track fusion module 230 provides control parameters to radars 110A-110C of interdiction system 100. Radar control and track fusion module 230 can use software available from Laufer Wind. Radar control and track fusion module 230 fuses data from radars 110A-110C to increase the size of surveillance zone 120. Radar control and track fusion module 230 can determine which of radars 110A-110C provides the highest likelihood of accurately locating a target, for example, by using predictor/corrector filtering, such as alpha/beta filtering or Kalman filtering, to correct for inaccuracies. In some embodiments, radar control and track fusion module 230 can fuse data by generating a track score for each tracked target. The track score can be based on certain attributes of the tracked target, such as the strength of the signal return or the time last detected, to resolve the most accurate track for the target. In some embodiments, radar control and track fusion module 230 can fuse data from radars 110A-110C or additional radars to extend the size of surveillance zone 120 of interdiction system 100.
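As a concrete illustration of the predictor/corrector filtering named above, the following Python sketch applies a one-dimensional alpha/beta filter to noisy radar range measurements. The gains, scan interval, and scenario are assumptions, not parameters of module 230.

```python
# Hypothetical 1-D alpha/beta (predictor/corrector) filter over radar ranges.
import random

def alpha_beta_track(measurements, dt, alpha=0.85, beta=0.3):
    x, v = measurements[0], 0.0            # initial position and velocity
    track = []
    for z in measurements[1:]:
        x_pred = x + v * dt                # predictor: propagate the state
        r = z - x_pred                     # residual against new measurement
        x = x_pred + alpha * r             # corrector: update position
        v = v + (beta / dt) * r            # corrector: update velocity
        track.append((x, v))
    return track

random.seed(1)
truth = [4000.0 - 15.0 * 3.3 * k for k in range(20)]   # ~0.3 Hz scan cycle
noisy = [z + random.gauss(0.0, 5.0) for z in truth]
smoothed = alpha_beta_track(noisy, dt=3.3)
```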
Mission manager 215 includes camera control and video capture module 240. Camera control and video capture module 240 provides control, for example, directional control, for fixed camera 130 and the cameras mounted to interceptor aircraft 140A-140B. Camera control and video capture module 240 can provide this control based on data from those cameras and/or data from radars 110A-110C. Camera control and video capture module 240 can also control parameters of video capture performed by fixed camera 130 and the cameras mounted to interceptor aircraft 140A-140B. For example, camera control and video capture module 240 can set the frame rate for video capture, which can be, for example, 30 Hz or 60 Hz.
Mission manager 215 includes interceptor aircraft control module 250. Interceptor aircraft control module 250 can use data from radar control and track fusion module 230 and/or camera control and video capture module 240 to generate commands and control interceptor aircraft 140A-140B to intercept and immobilize a target, such as drones 122A-122C and 124A-124C. In some embodiments, interceptor aircraft control module 250 can use software such as Dronecode APM Planner or the DJI Guidance SDK to facilitate control of interceptor aircraft 140A-140B.
When interdiction method 300 detects an object in step 320, the interdiction system pilots the interceptor aircraft toward the target in step 330. Central controller 115 can use a radar tracker, in conjunction with an interception model, to determine a location where the interceptor aircraft can intercept the target. Central controller 115 can generate commands to pilot the interceptor aircraft to an expected interception location, as described in greater detail below.
In some embodiments, central controller 115 can use a video or image acquired from a fixed camera, such as fixed camera 130, to verify whether a detected object is a threat, for example by comparing a threat profile against the image detected by the fixed camera. If the object is determined not to be a threat, interdiction method 300 is stopped. In some embodiments, verification of whether an object is a threat can be performed by central controller 115 according to signals from radars 110A-110C, or from cameras mounted to interceptor aircraft 140A-140B. In other embodiments, a user monitoring interdiction method 300 can manually override the controls of interceptor aircraft 140A-140B by an interface through central controller 115, for example through GUI 210.
Central controller 115 uses interception module 420 to generate an interception location 425 based on an interceptor aircraft dynamics model and a radar target model. The interceptor aircraft dynamics model can be an equation that accounts for attributes of the interceptor aircraft, such as interceptor aircraft 140A-140B, including flight characteristics, capabilities, and aerodynamic properties, such as the effects of control surfaces, rates of differential motor torques, and/or the collective motor torques. The interceptor aircraft dynamics model can also incorporate aircraft measured state 435, which can include data about the current flight conditions of the interceptor aircraft, measured by IMU 430 of the interceptor aircraft. The radar target model can be an equation that accounts for the known or expected attributes of the detected object, which can be, for example, drones 122A-122C or drones 124A-124C. The radar target model can incorporate radar target state estimate 415 from radar tracker module 410. Central controller 115 can use interception module 420 to determine the location at which the interceptor aircraft dynamics model and the target model predict the target and the interceptor aircraft will intersect, which is output as interception location 425. In some embodiments, the interceptor aircraft dynamics model can update at a rate of 30 Hz or a rate of 60 Hz, based on the refresh rate of IMU 430 generating aircraft measured state 435. In some embodiments, the target model can update at a rate of 0.3 Hz, based on the radar scan cycle.
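A much-simplified, hypothetical version of this prediction is sketched below in Python: the radar target model is reduced to straight-line flight at constant velocity, and the interceptor aircraft dynamics model is reduced to a constant top speed, so the earliest meeting time solves a quadratic. Both reductions are illustrative assumptions.

```python
# Hypothetical interception-point solver under constant-velocity assumptions.
import numpy as np

def interception_location(p_target, v_target, p_interceptor, speed):
    """Solve |p_target + v_target*t - p_interceptor| = speed*t for the
    earliest t > 0 and return the predicted meeting point."""
    d = p_target - p_interceptor
    a = v_target @ v_target - speed ** 2
    b = 2.0 * (d @ v_target)
    c = d @ d
    if abs(a) < 1e-9:                       # equal speeds: linear equation
        t = -c / b if b < 0 else None
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None                     # interceptor cannot catch target
        roots = ((-b - disc ** 0.5) / (2 * a), (-b + disc ** 0.5) / (2 * a))
        positive = [t for t in roots if t > 0]
        t = min(positive) if positive else None
    return None if t is None else p_target + v_target * t

loc = interception_location(np.array([500.0, 400.0, 60.0]),
                            np.array([-12.0, -9.0, 0.0]),
                            np.array([0.0, 0.0, 0.0]), speed=18.0)
```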
Central controller 115 uses autopilot command module 440 to generate autopilot aircraft commands 445 based on interception location 425 and the interceptor aircraft dynamics model. Central controller 115 uses autopilot command module 440 to solve for the set of autopilot aircraft commands 445 that cause the interceptor aircraft to fly to interception location 425. The set of autopilot aircraft commands 445 can include, for example, a yaw command, a pitch command, and/or motor speed commands. In some embodiments, autopilot command module 440 can solve for the set of autopilot aircraft commands 445 to pilot an interceptor aircraft to interception location 425 using a matrix-type approach to determine all of the commands collectively. In other embodiments, autopilot command module 440 can calculate the command for each axis separately. In some embodiments, the interceptor aircraft dynamics model can be a collection of models, where each model accounts for differences based upon certain flight conditions. For example, there may be different interceptor aircraft dynamics models for when an interceptor aircraft is flying at a comparatively higher speed, such as the interceptor aircraft's maximum speed, and for when it is flying at a comparatively lower speed.
An updated interception location 425 can be generated each time radar target state estimate 415 is updated, for example, at the refresh rate of radars 110A-110C and/or as quickly as the refresh rate of the aircraft measured state 435 information provided by IMU 430 about the flight of the interceptor aircraft. Inner stability loop 450 can facilitate the interceptor aircraft maintaining level flight. Inner stability loop 450 can be performed by a computer on-board the interceptor aircraft. IMU 430 of the interceptor aircraft generates aircraft measured state 435 based on information detected about the interceptor aircraft's current flight conditions. Inner stability loop 450 uses stability filter 460 to generate stability aircraft commands 465 that are intended to correct for disturbances encountered by the interceptor aircraft during flight, for example, impact by small objects, wind disturbances, or any other irregularities. Stability filter 460 can include, for example, a rate feedback or lagged rate feedback filter, which can calculate stability aircraft commands 465 on a per-axis basis according to aircraft measured state 435 and an interceptor aircraft dynamics model. In other embodiments, stability filter 460 can be a model-following filter. Stability filter 460 outputs stability aircraft commands 465, for example, a yaw command, pitch command, or rotor power command, to maintain the interceptor aircraft in an upright position. Inner stability loop 450 uses command summer 470, which sums autopilot aircraft commands 445 and stability aircraft commands 465 to generate aircraft control commands 475. Aircraft control commands 475 are used by interceptor aircraft flight controller 480 to pilot the interceptor aircraft toward interception location 425, for example, as in step 330.
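The sketch below gives a hypothetical single-axis version of the loop just described: a lagged rate feedback stability filter opposes the measured angular rate, and a command summer adds the result to the autopilot command. The gains, lag time constant, and 60 Hz step are assumptions.

```python
# Hypothetical single-axis lagged-rate-feedback stability filter and summer.
class LaggedRateFeedback:
    """Lagged rate feedback: low-pass the measured rate, then oppose it."""
    def __init__(self, gain=0.4, tau=0.2, dt=1.0 / 60.0):
        self.gain = gain
        self.alpha = dt / tau        # first-order lag coefficient (dt < tau)
        self.rate = 0.0

    def update(self, measured_rate):
        self.rate += self.alpha * (measured_rate - self.rate)
        return -self.gain * self.rate          # stability aircraft command

def command_summer(autopilot_cmd, stability_cmd):
    """Sum autopilot and stability commands into an aircraft control command."""
    return autopilot_cmd + stability_cmd

stab = LaggedRateFeedback()
# A gust induces a +0.3 rad/s roll rate while the autopilot requests +0.1:
control_cmd = command_summer(0.1, stab.update(0.3))
```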
Fixed camera tracking module 550 is used by central controller 115 to generate gimbal inputs 555. Central controller 115 calculates gimbal inputs 555 with fixed camera tracking module 550 based on a video target model and a fixed camera system model. The video target model can be an equation that accounts for the known or expected attributes, such as size or flight characteristics, of the detected object, incorporating video target state estimate 515. The fixed camera system model can be an equation that accounts for the dynamics of the fixed camera system. For example, the fixed camera system model can reflect servo dynamics and/or inertia of the gimbal of the fixed camera system and can reflect structural dynamics of a fixed support structure on which the gimbal camera is mounted. The fixed camera system can use fixed camera measurement unit 560 to measure current conditions of the fixed camera system and generate fixed camera system measured state 565. The fixed camera system model incorporates fixed camera system measured state 565, describing, for example, current position, current orientation, and current motion of the fixed camera system. Central controller 115 can use fixed camera tracking module 550 to determine gimbal inputs 555 to cause fixed camera 530 to point at a tracked object. For example, fixed camera tracking module 550 can use Newton's Method or the Broyden-Fletcher-Goldfarb-Shanno (“BFGS”) algorithm, or a similar method, to determine the set of gimbal inputs 555 based on the fixed camera system model and the video target model. Fixed camera controller 570 uses gimbal inputs 555 to cause fixed camera 530 to pan, tilt, or zoom to track the detected object. In some embodiments, video tracker module 510 can use fixed camera system measured state 565 to improve the performance of tracking a detected object.
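As a hypothetical illustration of computing gimbal inputs by numerical optimization, the Python sketch below uses the BFGS method (via SciPy) to choose pan and tilt angles that align an assumed camera boresight model with the estimated target position. The geometry and cost function are simplifying assumptions.

```python
# Hypothetical gimbal pointing solved with BFGS (scipy.optimize.minimize).
import numpy as np
from scipy.optimize import minimize

target = np.array([300.0, 200.0, 80.0])     # video target state estimate
camera_pos = np.array([0.0, 0.0, 6.0])      # fixed camera mounted ~6 m up

def boresight(pan, tilt):
    """Unit line-of-sight vector for pan/tilt angles in radians."""
    return np.array([np.cos(tilt) * np.cos(pan),
                     np.cos(tilt) * np.sin(pan),
                     np.sin(tilt)])

def pointing_error(angles):
    to_target = target - camera_pos
    to_target = to_target / np.linalg.norm(to_target)
    return float(np.sum((boresight(*angles) - to_target) ** 2))

result = minimize(pointing_error, x0=np.zeros(2), method="BFGS")
pan_cmd, tilt_cmd = result.x                # gimbal inputs
```

For this static geometry a closed-form arctangent solution exists; an optimizer of this kind becomes useful when the cost also reflects gimbal servo dynamics and support-structure motion, as the fixed camera system model described above does.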
Central controller 115 uses interceptor aircraft optimizer module 580 to generate aircraft control commands 475. Interceptor aircraft optimizer module 580 calculates aircraft control commands 475 based on the video target model and an interceptor aircraft dynamics model. The interceptor aircraft dynamics model can incorporate aircraft measured state 435 from IMU 430. The video target model can be an equation that accounts for the known or expected attributes, such as size or flight characteristics, of the detected object and incorporates video target state estimate 515. Central controller 115 can use interceptor aircraft optimizer module 580 to predict the interception location where the interceptor aircraft will meet the target. In some embodiments, the interceptor aircraft dynamics model and the video target model are solved as a system of linear equations by interceptor aircraft optimizer module 580 to establish an interception location where the paths of the interceptor aircraft and the detected target can be expected to intersect. In other embodiments, interceptor aircraft optimizer module 580 can use Newton's Method or the Broyden-Fletcher-Goldfarb-Shanno (“BFGS”) algorithm, or a similar method, to determine the set of aircraft control commands 475 that can be used to pilot the interceptor aircraft toward a target. In some embodiments, video tracker module 510 can use aircraft measured state 435 to improve the performance of tracking a detected object. In some embodiments, interceptor aircraft optimizer module 580 can use video target state estimate 515 to determine whether a detected object, such as drones 122A-122C or drones 124A-124C, is in range of a net gun mounted to the interceptor aircraft and/or whether the interceptor aircraft is pointed at the target. Interceptor aircraft optimizer module 580 can generate a command to cause the interceptor aircraft to fire the net gun to immobilize the target. In some embodiments, interceptor aircraft optimizer module 580 can be a part of central controller 115, for example, as part of interceptor aircraft control module 250. In some embodiments, interceptor aircraft optimizer module 580 can be a part of a computer on-board the interceptor aircraft.
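The following hypothetical Python fragment shows the linear-equation variant in miniature: with both aircraft modeled as constant-velocity (linear) systems over an assumed horizon T, the interceptor velocity needed to meet the target is a single linear solve, and a range test approximates the net-gun firing decision. The horizon and the use of a nine-meter (roughly thirty-foot) range are assumptions for illustration.

```python
# Hypothetical linear-model interception and net-gun range check.
import numpy as np

def velocity_to_intercept(p_target, v_target, p_interceptor, T):
    """Meet condition p_i + v_i*T = p_t + v_t*T, solved as a linear system."""
    A = T * np.eye(3)
    b = p_target + v_target * T - p_interceptor
    return np.linalg.solve(A, b)            # commanded interceptor velocity

p_t = np.array([350.0, 250.0, 60.0])
v_t = np.array([-10.0, -8.0, 0.0])
p_i = np.array([0.0, 0.0, 30.0])

v_cmd = velocity_to_intercept(p_t, v_t, p_i, T=20.0)
in_net_gun_range = np.linalg.norm(p_t - p_i) < 9.0   # ~30 ft net gun range
```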
Tracks can be fused in an asynchronous manner whenever a track update is received from either radar tracker module 410 or video tracker module 510. At each update, the track scores of the two can be compared by sensor fusion module 610, and the track with the better score is used to update the fused track. Track updates with measurements are filtered using a Kalman filter by sensor fusion module 610 to generate the improved target state estimate 615. The track scores can be normalized using the measurement and attribute covariances. Since the tracks update asynchronously, the normalization factor in the track score (the inverse of the square root of the determinant of the measurement and attribute covariance matrix) can be predicted up to the current time using the same kinematic model and process noise used in the Kalman filter. The track score update can be described by the equation provided by Blackman and Popoli.
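A representative form of that update, reconstructed here from the surrounding definitions rather than quoted from that reference (the detection probability $P_D$ and the false-track density $\beta_{FT}$ are assumptions of this reconstruction), is

$$\Delta L = \ln\!\left(\frac{P_D}{(2\pi)^{M/2}\,\beta_{FT}\,\sqrt{|S|}}\right) - \frac{d^{2}}{2} + \ln p(y_s),$$

where $M$ is the measurement dimension, $|S|$ is the determinant of the combined measurement and attribute covariance, $d$ is the Mahalanobis distance, and $p(y_s)$ is the likelihood of the observed signal attributes.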
For a non-updating track, $S$, $d$, and $p(y_s)$ are decayed by the time increment:

$$S_{\text{decayed}} = H\left(F P F^{T} + G Q G^{T}\right)H^{T} + R,$$
where $H$ is the measurement matrix, $F$ is the kinematic matrix (which includes the time increment), $P$ is the covariance matrix of the measurements, $G$ is the state transition matrix, $Q$ is the process noise, and $R$ is the measurement variances. Similarly, the covariance of the residual for the signal attributes is decayed using a model of their kinematics, transitions, and covariances. $d$ is the Mahalanobis distance, which also incorporates $S_{\text{decayed}}$. In this way, the track scores can be normalized for their disparate attributes as well as their asynchronous updates, and at every update the better-scored track can be determined by sensor fusion module 610 and used to generate improved target state estimate 615.
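To make the decay step concrete, the following hypothetical Python fragment evaluates $S_{\text{decayed}}$ and the resulting normalization factor for a simple one-dimensional position/velocity track; all matrix values are illustrative assumptions.

```python
# Hypothetical evaluation of S_decayed for a 1-D position/velocity track.
import numpy as np

dt = 3.3                                    # time since last update (s)
F = np.array([[1.0, dt], [0.0, 1.0]])       # kinematic matrix with time increment
P = np.diag([25.0, 4.0])                    # covariance of the measurements
G = np.array([[0.5 * dt ** 2], [dt]])       # process-noise input matrix
Q = np.array([[0.1]])                       # process noise
H = np.array([[1.0, 0.0]])                  # measurement matrix (position only)
R = np.array([[9.0]])                       # measurement variances

S_decayed = H @ (F @ P @ F.T + G @ Q @ G.T) @ H.T + R
normalization = 1.0 / np.sqrt(np.linalg.det(S_decayed))  # track-score factor
```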
Once improved target state estimate 615 is generated by sensor fusion module 610, fixed camera tracking module 550 and interceptor aircraft optimizer module 580 can use improved target state estimate 615 in the same manner as described above with respect to target state estimate 515.
On-board video tracker 710 also generates y-error from centroid 742, which reflects the distance along the y-axis, or vertical axis, that a detected object is from the center of a frame of on-board video 525. Camera tilt filter 744 generates camera tilt command 746 by determining the tilt distance or tilt angle that would be required to change the direction of on-board camera 520 to point at the detected object. The direction of on-board camera 520 is controlled according to camera tilt command 746. Collective power filter 760 includes a model correlating the amount of tilt dictated by camera tilt command 746 with the amount of interceptor aircraft collective rotor power (which can dictate the height or altitude of the interceptor aircraft) that would be required to place the tracked object at the centroid of a frame of on-board video 525 when on-board camera 520 is not tilted, that is, at zero degrees of tilt from its central position. Collective power summer 768 sums camera tilt collective power command 762 and autopilot collective power command 764 to generate approach collective power command 772. Inner stability loop 450 can use approach collective power command 772 to pilot the interceptor aircraft in the direction of the tracked object. Autopilot collective power command 764 can be, for example, a portion of autopilot aircraft commands 445 corresponding to a single axis. In some embodiments, such as where an interceptor aircraft is using a passive net hanging from the interceptor aircraft to immobilize a target, central controller 115 can modify approach collective power command 772 such that the interceptor aircraft will be just above the tracked target so that it can interdict and immobilize the target. In some embodiments, flight commands according to on-board camera 520 are not incorporated until a target is in range, which can be controlled, for example, by a switch that controls whether camera pan yaw command 722 or camera tilt collective power command 762 reaches yaw summer 728 or collective power summer 768. The switch can be controlled by central controller 115 or by a computer on-board the interceptor aircraft.
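A hypothetical Python sketch of this vertical channel follows: the y-error from the centroid becomes a tilt command, the tilt is mapped to a collective power correction, and the summer combines it with the autopilot's collective command. Frame size, gains, and the linear mapping are assumptions.

```python
# Hypothetical final-approach vertical channel for the interceptor aircraft.
FRAME_HEIGHT = 480.0    # px, assumed on-board video frame height
TILT_GAIN = 0.002       # rad of camera tilt per pixel of y-error
POWER_PER_RAD = 0.15    # collective power per radian of commanded tilt

def camera_tilt_command(y_centroid):
    """Tilt needed to re-center the target vertically in the frame."""
    y_error = y_centroid - FRAME_HEIGHT / 2.0     # y-error from centroid
    return TILT_GAIN * y_error                    # camera tilt command

def approach_collective_power(y_centroid, autopilot_collective):
    """Sum the tilt-derived power correction with the autopilot command."""
    tilt_power = POWER_PER_RAD * camera_tilt_command(y_centroid)
    return autopilot_collective + tilt_power      # approach collective command

cmd = approach_collective_power(y_centroid=300.0, autopilot_collective=0.55)
```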
The above-described techniques can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The implementation can be as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a non-transitory machine-readable storage device, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
Method steps can be performed by one or more programmable processors executing a computer program to perform functions of the technology by operating on input data and generating output. Method steps can also be performed by, and apparatus can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Modules can refer to portions of the computer program and/or the processor/special circuitry that implements that functionality.
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor receives instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer also includes, or is operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. Data transmission and instructions can also occur over a communications network. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, the above-described techniques can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user, and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer (e.g., interact with a user interface element). Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
The above-described techniques can be implemented in a distributed computing system that includes a back-end component, e.g., a data server, and/or a middleware component, e.g., an application server, and/or a front-end component, e.g., a client computer having a graphical user interface and/or a Web browser through which a user can interact with an example implementation, or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet, and include both wired and wireless networks.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The technology has been described in terms of particular embodiments. The alternatives described herein are examples for illustration only and are not intended to limit the technology in any way. The steps of the technology can be performed in a different order and still achieve desirable results. Other embodiments are within the scope of the following claims.
This application claims the benefit of U.S. Patent Application No. 62/211,319, filed on Aug. 28, 2015, and titled “Drone Interdiction System,” and U.S. Patent Application No. 62/352,728, filed on Jun. 21, 2016, and titled “Mitigation of Small Unmanned Aircraft Systems (sUAS) Threats;” the entire contents of each are incorporated herein by reference.