This invention relates to determining the location of submerged vehicles, and more particularly to one-way transmission of time-synchronized signals and real-time processing to facilitate low-cost self-navigation within a liquid body.
Precise locational information is critical for practically any autonomous vehicle, robot, or other object. Location-determining solutions for terrestrial vehicles exist, including GPS positioning. However, GPS signals do not penetrate into liquid bodies (e.g. the ocean), and submerged vehicles must rely on other methods to determine their precise location. Underwater positioning systems have been developed, but these have major drawbacks, relying either on multiple stationary acoustic beacons or on large, power-hungry devices.
U.S. Pat. No. 5,894,450 describes an underwater location solution utilizing Long Baseline (LBL) acoustic positioning systems first developed in the 1960s. LBL-based systems require multiple moored transmitters, each emitting signals into the environment, in order for submerged vehicles to receive the signals and triangulate their position. LBL transmitters are fixed in a single location, and any transmitter movement would render the system inoperable. Also, the reliance on multiple transmitters increases the acoustic noise of the system, making signal measurement and calculation by the receiving vehicle error-prone and decreasing the fidelity of the system. Furthermore, LBL systems typically require two-way travel-time (TWTT) ranging.
Underwater and oceanographic activities usually take place in large and complex liquid environments. Typical marine environments are the open ocean, littoral (near shore), reefs, bays, island areas, straits, seas, gulfs, shipping lanes, harbors, canals, reservoirs, lakes, rivers and even liquid handling plants. The local terrain in any of these environments can be very complex, and any vehicle operating in such an environment must know its location in order to effectively carry out its desired operation. Current underwater localization methods are costly, large and power hungry (i.e. requiring significant power supplies), thus increasing the size and expense of an underwater vehicle incorporating such a system. Therefore, there is a need for a low-cost, electrically efficient submersible positioning system. Such a system is described herein. Such a low-cost system would increase the use of autonomous underwater vehicles (AUVs) in many underwater activities, including surveying, patrolling, search and rescue, environmental monitoring, and scientific exploration, sampling, and measuring.
Furthermore, in order to provide a complete picture or report of the desired underwater activity, it is essential for an AUV to cover all or most of a region of interest. Due to size, weight and cost constraints for small AUVs, and the physical complexity and characteristics of the underwater environment, a single AUV can only record a limited amount of data with its on-board instrumentation, requiring more time and movements to cover the desired area. One way to expand the volume of coverage is to use more than one AUV, often in a pre-selected formation, where each AUV covers a small portion of the overall coverage area surveyed by the AUV formation.
AUV formations or networks greatly expand the activities AUVs can efficiently and effectively perform. However, there are significant practical challenges to the use of AUV formations in submerged environments. Larger submersible vehicles such as submarines and large AUVs have specialized, high-quality navigation systems, but larger vehicles are costly, not well adapted for use in formations, and require substantial support capabilities between missions.
Small, inexpensive AUVs, while well adapted for use in formations, cannot utilize the large, heavy, costly and power-hungry navigation systems of larger AUVs. Therefore, small AUVs experience significant navigation errors, which accumulate rapidly, on the order of tens of meters per minute. These navigation errors present a significant hurdle for the use of small AUVs in formation or networked activities.
Recently, Fischell et al. proposed a technique in “Relative Acoustic Navigation for Sensing with Low-Cost AUVs”, IEEE ICRA 2016 Workshop on Marine Robot Localization and Navigation, and in “Design of General Autonomy Payload for Low-Cost AUV R&D” (Viquez et al., IEEE Autonomous Underwater Vehicles, pp. 151-155, 2016), both incorporated by reference herein. That technique utilized a pulse-per-second signal from a GPS-synchronized CSAC (chip scale atomic clock) transmitter. Fischell et al. disclose a fixed-source transmitter, but suggest that a mobile source may be possible. However, their method only achieved low-quality results with unacceptably large errors in estimated range and bearing to a fixed acoustic source, and was only possible in a non-real-time manner. A mobile source utilizing this method would have had even greater range and bearing error.
It is therefore desirable to enhance accurate navigation for AUVs by utilizing an innovative strategy of precision signal sensing, timing and processing to overcome the aforementioned navigation, sensing and control issues.
An object of the present invention is to enable low-cost yet accurate localization by one or more secondary vehicles within a liquid body relative to a single source of time-synchronized acoustic signals.
Another object of the present invention is to enable use of only a single primary acoustic sound system, either fixed or on a motile primary vehicle, to reduce secondary vehicle complexity, weight and power consumption.
A still further object of the present invention is to enable scalable secondary vehicle numbers without requiring time or frequency sharing of the transmitted signal.
This invention features a localization system for at least one vehicle within a liquid body, the localization system including a primary system, also referred to herein as a source system and/or a master system, having a first positioning module configured to determine a location of the primary system. The primary system further includes a signal generation and timing unit that generates periodic timed primary signals, and a submergible transmitter to transmit the primary signals, such as acoustic signals, through the liquid body. The localization system further includes at least one secondary system that is carried by the vehicle and includes at least two receivers to receive the primary signals and a controller that (i) maintains time-synchronization with the primary system, (ii) develops a range estimate signal from measurements of received signals from each receiver and (iii) develops an azimuth-inclination estimation of likeliest angle-of-arrival of the primary signals. The controller utilizes a plurality of coordinate frames, such as particle filter coordinate frames, to provide an estimate of vehicle location.
In certain embodiments, the estimate of vehicle location is relative to the location of the primary system. In some embodiments, the controller applies a beamformer to a first plurality of look-angles and the received primary signal, each look-angle representing a combination of azimuth and inclination vectors, generating a first plurality of corresponding outputs, each output having a power, and the controller selects the output with the maximum power, representing approximately the azimuth and inclination between the primary system and secondary system. In a number of embodiments, the first plurality of look-angles is constrained to a second plurality of look-angles by the controller applying a particle filter, said second plurality having a smaller number of look-angles than the number of look-angles in the first plurality. In some embodiments, the controller further comprises a spatial filter stored in a computer-readable storage medium, the spatial filter comprising phase-shifts associated with a regular grid of a third plurality of look-angles. The secondary system further comprises a second positioning module, configured to determine the location of the secondary system, and the controller re-initiates the first plurality of look-angles upon said second positioning module determining the location of the secondary system. The first plurality of look-angles is converted to a fourth plurality of look-angles based on a motion model, the motion model estimating vehicle speed and yaw. The fourth plurality of look-angles is constrained to a fifth plurality of look angles by the controller applying a particle filter, said fifth plurality having a smaller number of look-angles than the number of look-angles in the fourth plurality.
In certain embodiments, the primary signals are generated to further include information encoding at least one of primary system location, and at least one command to the at least one secondary system. The signal generation and timing unit further generates periodic secondary signals which comprise information encoding at least one of primary system location, and at least one command to the at least one secondary system. In one embodiment, the location system further includes at least a second primary system which comprises a third positioning module configured to determine a location of said second primary system, a second signal generation and timing unit that generates periodic timed second primary signals, and a second submersible transmitter to transmit the second primary signals through the liquid body. The primary signals have a first waveform and the second primary signals have a second waveform, and the at least two receivers receive said second primary signals. The at least one secondary vehicle is configured to achieve self-localization to within one meter accuracy.
In one embodiment, phased-array beamforming is conducted by iterating various azimuth-inclination look-angles, using array geometry to apply time-delay phase shifts to the received signals, and summing the time-delayed signals to determine a maximum response where the receiver/hydrophone signals are in phase and add constructively. In certain embodiments, phase shifts associated with each look-angle are precomputed and stored for use by the controller/beamforming and filtering module. In one embodiment, frequency domain operations are limited to a range of interest (e.g. 6-10 kHz). Certain embodiments generate a heatmap of azimuth-inclination combinations.
In some embodiments, the particle filter coordinate frames utilized by the controller include at least two of a body-fixed frame, a vehicle-carried frame, and a local-level frame. In some embodiments, the controller transforms local-level particles to obtain vehicle-carried particles, and calculates particle magnitudes to obtain range-line particles. In certain embodiments, the controller re-initializes particles when a locational fix (e.g. GPS signal) is received. In some embodiments, a location prediction step includes sorting a vehicle-carried particle set and a range-line particle set in ascending order according to their weights.
In a number of embodiments, the controller further includes a factor graph smoothing module to optimize location estimations over a trajectory of the vehicle. In some embodiments, the vehicle further includes a propulsion system and a power source.
This invention also features a localization system for a plurality of vehicles within a liquid body. A primary source system includes a positioning module, a signal generation and timing unit that generates periodic timed primary signals, and a submergible transmitter to transmit the primary signals through the liquid body. Each vehicle includes (i) at least two receivers such as a hydrophone array to receive the primary signals, (ii) a data acquisition module that maintains time-synchronization with the primary source system to trigger periodic recordings from the receivers, (iii) a beamforming and matched filtering module that develops a range estimate signal from measurements of received primary signals from each receiver and develops an azimuth-inclination estimation of likeliest angle-of-arrival of the primary signals, and (iv) a particle filtering module utilizing a plurality of coordinate frames to provide an estimate of vehicle location.
This invention further features a method for locating at least one submersible vehicle, including selecting at least one primary system including a first positioning module, a signal generation and timing unit configured to generate periodic timed primary signals, and a submersible transmitter to transmit the primary signals. The method further includes selecting the at least one submersible vehicle to carry at least one secondary system including at least two receivers to receive the primary signals, and a controller. The location of the at least one primary system is obtained utilizing the first positioning module. At least one primary signal is sent from the at least one primary system, and time-synchronization is maintained between the at least one primary system and at least one secondary system. The at least one primary signal is received utilizing the at least two receivers of the at least one secondary system. An azimuth-inclination estimation of likeliest angle-of-arrival of the received primary signal is developed, and a plurality of coordinate frames are utilized to estimate range and secondary location relative to the at least one primary system.
This invention may also be expressed as a method performed by a vehicle having at least one computer processor executing computer program instructions stored on at least one non-transitory computer-readable medium to accomplish the method as described herein. This invention may be further expressed as a vehicle having a non-transitory computer-readable medium storing computer program instructions to accomplish the method as described herein. This invention may be still further expressed as a system including at least one vehicle having at least one computer processor, at least one non-transitory computer-readable medium operatively connected to the computer processor, and memory accessible by the processor. Responsive to execution of program instructions accessible to the at least one processor, the at least one processor is configured to accomplish the method described herein.
In what follows, preferred embodiments of the invention are explained in more detail with reference to the drawings, in which:
The term “AUV” is intended to include Autonomous Underwater Vehicles.
The term “ROV” is intended to include Remotely Operated Vehicles.
The term “UxV” is intended to include UAVs (Underwater Autonomous Vehicles or AUVs) and URVs (Underwater Remotely-operated Vehicles or ROVs).
The term “attitude” is intended to be given its ordinary meaning for navigation as a vertical angle from a horizontal plane to a reference node, also referred to as a reference point, a primary, a beacon, or an acoustic sound system according to the present invention.
The term “azimuth” is intended to be given its ordinary meaning for navigation as a horizontal angle measured clockwise from a reference node.
The terms “primary,” “acoustic source system” and “beacon” are intended to include any suitable system capable of producing an output that can be transmitted through a fluid to one or more secondary vehicles. One currently preferred acoustic source system comprises a Lubell amplifier and underwater speaker.
The term “beamforming” refers herein to a processing technique commonly used in sensor or receiver arrays for directional signal transmission or reception. Received signals from an array are combined, and the combination leads to interference between signals. When the signals are combined at the look-angle that represents the direction of the transmitter, the signals at that particular look-angle experience constructive interference, resulting in a larger output, as illustrated in
The term “coordinate frame” as used herein refers to a system that uses numbers (i.e. coordinates) to uniquely determine a position in 3D space, with an origin set at a given point, usually the primary or a secondary. Each coordinate frame as disclosed herein has dimensions set relative to an origin, for example the origin being the secondary and the dimensions being forward of the secondary, port side of the secondary and above (towards the liquid body surface SS) the secondary, abbreviated Forward-port-Above, or bff.
The term “look-angle” as used herein refers to a combination of an azimuth vector and an inclination vector. The azimuth vector is the direction from a primary to a given secondary, expressed as the angular distance from the North point in a coordinate frame to the point at which a vertical circle passing through the primary intersects the horizon. The inclination vector is the secondary's tilt, i.e. the difference between a reference plane and the primary plane or axis.
The term “DVL” is intended to include Doppler Velocity Logs.
The term “AHRS” is intended to include Attitude and Heading Reference Systems.
The term “inertial navigation system” or “INS” is intended to include a navigation system that uses a computer, motion sensors, rotational sensors, and in some cases, magnetic sensors along with dead reckoning to continuously calculate the position, orientation and speed of a moving object without external references or input. Any suitable sensors can be used, including gyroscopes for object rotation, accelerometers for motion, and magnetometers for magnetics.
A “liquid body” refers to any structure, feature, or geographical formation capable of holding or retaining a liquid and the liquid contained therein. Examples of liquid bodies include but are not limited to, an ocean, bay, lake, river, reservoir, tank or pipe. The liquid can be any liquid, including water, saltwater, oil, liquefied gas, ethanol, wastewater, or the like. In this disclosure, the three-dimensional area of a liquid body may be referred to as a “medium,” a “liquid medium” or “multi-dimensional space”. The liquid bodies utilized by the invention are generally of a size and structure capable of simultaneously accommodating, in addition to the liquid, a primary and at least one secondary vehicle.
The term “locational fix” refers to the process by which a component or vehicle determines its location. Most often a locational fix is obtained by receiving a valid GPS position, with corresponding longitude and latitude coordinates (i.e. absolute location). A locational fix may be represented by positional information obtained by any means commonly known in the art, including laser positioning, DVL-INS systems, and reference mapping.
The term “inverted USBL” indicates that an acoustic array is carried by the secondary vehicle instead of the primary acoustic source system.
The term “formation” includes any two or more vehicles or objects that move in relation with each other.
The term “submerged formation” refers to any formation of vehicles capable of diving into a suitable liquid, that is, to travel beneath the surface of a liquid body.
The term “self-localize” refers to the ability of a mobile object or vehicle to accurately determine its position in a medium in respect to a primary vehicle, without the aid of large, power hungry DVL, INS, AHRS, or other such systems.
The term “regular grid” refers to any three-dimensional grid of look-angles, most often representing the surface of a sphere. The look-angles of the regular grid are selected and placed into an equally spaced set, separated by a regular distance between grid lines, expressed in arc degrees for spherical regular grids.
The term “offline” as used herein refers to a digital computational task that is performed at a time later than when the signals or information were originally generated or received. Most often, offline refers to calculations performed later and on a different computing system than the system that originally received and stored the information. Offline calculations include the near-instantaneous relaying of information from a first system to a second system, the second system performing the calculations and relaying the output back to the first system.
The term “low power” in this disclosure refers to the total power consumption of a mobile object without power-hungry systems like a DVL, AHRS and the like. Low power systems typically each require approximately 20 watts or less. Low power further refers to power consumption rates such that mission times are of sufficient length, and do not require vehicles with large power sources.
As referred in this disclosure, a “vehicle” is any controllable object that can physically move through the desired medium or liquid, including floating on top of the medium, or navigating through the medium (i.e. submerged). The vehicle can be any appropriate object, as commonly known in the art, including but not limited to a ship, boat, barge, or other human-occupied vehicle, AUV, ROV, UUV, submarine, or other submerged craft.
A “submerged vehicle” refers to any motile vehicle, vessel or device capable of being introduced into and operating within or on the liquid, or liquid body. Many submerged vehicles are commonly referred to as underwater vehicles, although they may operate in other liquids besides water. In this disclosure, a submerged vehicle includes, but is not limited to AUVs, drones, remotely operated vehicles (ROVs), unmanned underwater vehicles (UUVs), submarines manned or unmanned, amphibious vessels and the like.
The system and methods described herein will now be described in more detail, with references to illustrative embodiments. The described features, advantages, and characteristics of the present invention may be combined in any suitable combination, in one or more than one embodiment. One skilled in the relevant art will be aware that this invention may be practiced with or without one or more of the detailed features or advantages present in any embodiment described herein. In some instances, additional innovative features or advantages are recognized in particular embodiments that are not present in others. These illustrated embodiments are for the purpose of describing the inventive system, features, and methodologies and are not to be understood to be limiting in any way.
The inventive system described herein pertains to submerged relative localization of a submerged vehicle, and formations of vehicles in a liquid body. Precise determination of a submerged vehicle's location is very important for many different underwater operations. When submerged vehicles operate as a group or team to complete a single mission, location information is even more important. In some constructions, submerged formations comprise at least two vehicles, either (i) a fixed primary or master signalling system, also referred to as simply the primary, and at least one secondary vehicle, or (ii) a motile primary vehicle and at least one secondary vehicle. In general, the primary obtains positional information by a method commonly known in the art, and transmits its positional information via a time-coordinated, time-stamped pulsed signal S (e.g. acoustic pulse). The secondary vehicle passively receives the signal pulse S with multiple receivers, analyzes the signal, and determines its own location relative to the primary.
This invention may be accomplished by a number of systems. In one currently preferred construction, a designated primary comprises a positioning module, a controller, a signal generation and timing unit, and a transmission mechanism. The system further comprises at least one secondary having a receiver mechanism, a controller, and a data acquisition module. Optionally, the secondary vehicle may further comprise an inertial measurement mechanism, a compass mechanism, and a depth sensing mechanism.
This invention includes innovative use of (a) one-way travel time (OWTT) of an inverted ultrashort baseline (iUSBL) signal from primary to secondary, (b) passive reception of said OWTT signal by the secondary (piUSBL), and (c) precise position and orientation calculation via precise time synchronization, closely coupled particle filtering and beamforming.
The inventive submerged localization system described herein offers several important improvements over existing operations. The improvements include: 1) a single transmitter, which is capable of being mobile when carried by a motile primary; 2) low-power-consumption passive localization via OWTT, filtering and beamforming to accomplish ranging, azimuth and inclination to the primary; and 3) scalable secondary vehicle formations with non-interfering secondary receivers.
The primary 110, as illustrated in
Use of the innovative submerged formation system 100 according to the present invention is illustrated in
The primary 110 further comprises a positioning mechanism 112 to accurately determine the position of the primary, a transmitter 116 that emits, or transmits a signal S into the liquid body LB and a controller 113. The positioning module 112 obtains positional information, often absolute position (e.g. GPS latitude and longitude). The controller 113 is connected to the position mechanism 112 and processes the positional information, and in turn programmatically generates a suitable signal S. The controller 113 instructs the transmitter 116 how and when to produce signal S. Furthermore, the timing systems of the primary and secondary are synchronized. Each mechanism in the master vehicle will be discussed in detail below.
In order for the at least one secondary 150 to precisely determine its location relative to the primary 110, a signal S must be sent through the liquid body LB. The signal may be any suitable signal as known in the art, including acoustic, optical, and radio frequency, or other electromagnetic signals. In some embodiments, the primary 110 comprises more than one transmitter, each transmitter emitting a different signal mode (e.g. optical and acoustic signals).
The signal S typically comprises a pre-defined structure and timing. Known timing and time synchronization between primary and secondary allows for precise estimation of range, azimuth and inclination by the secondary. The exact onset of the signal must be timed precisely, or the time-sensitive information contained within, and conveyed by, the signal would introduce significant errors, rendering the system inoperative. In one embodiment, the inventive system utilizes oscilloscope calibration and a hard-coded delay cycle to ensure a precise delay between the pulse per second (PPS) rising edge and the onset of the signal. In currently preferred embodiments, less than 1 millisecond, less than 2 milliseconds, less than 3 milliseconds, less than 4 milliseconds, or less than 5 milliseconds of jitter (defined as slight irregular variation) is maintained between the PPS rising edge and signal onset.
Utilizing an Arduino microcontroller is preferred to ensure extremely consistent (≤1 ms jitter) periodic transmission of the acoustic signal. The customizable nature of the primary and the signal generation according to the present invention enables broadcasting a variety of different signals as needed. In the currently preferred embodiment, this is enabled by a wideband (200-23,000 Hz) frequency response of the transmitter (e.g. a Lubell underwater speaker). The underwater speaker and amplifier dominate the total cost of the primary, which could be reduced through the careful design and construction of a narrowband alternative. Furthermore, the primary is configured to customize the timing of the signal S, outputting the signal (e.g. a GPS up-chirp signal) at the start of every GPS second.
Known signal structure allows for proper identification of the signal from background noise by the secondary 150, as well as proper classification of the sections and properties of the signal S. Examples of acoustic signals S are illustrated in
Additional information encoded in the signal S may be any desired information, most often an updated primary location (e.g. for a moving primary vehicle), or new movement or measurement commands for the secondary vehicles, or for a subset of a plurality of secondary vehicles. In addition, the timing of the signals S is adaptable. Most often the signals are sent every second; in some cases the signals are sent every 0.1 seconds, 0.5 seconds, 2 seconds, 5 seconds, or 10 seconds. Simple encoded commands may comprise instructions governing the secondary vehicles' behaviour. For example, “follow”, “approach”, “circle”, “surface”, and/or “execute pre-loaded program” (obtain sample, image, etc.) commands can be incorporated into the acoustic signal without affecting secondary localization. One technique for imparting more information into audio signals is described by Kira Coley in “The Dawn of High-Speed Underwater Communications”, Ocean News & Tech, October 2017, pp. 32-35. Other techniques and equipment for underwater acoustic communications are disclosed by Zhou et al. in U.S. Pat. No. 7,859,944; both publications are incorporated herein by reference.
In the currently preferred embodiment, the signal S comprises an acoustic signal. The signal must be at least partially defined and known to the primary and secondary vehicles before deployment, such that the secondary vehicles correctly classify a recording as a received signal RS. In one embodiment, depicted in
The present invention provides for one or more receiving systems 151 (e.g. a payload for a small AUV), often incorporated into a submersible vehicle 150 (e.g. a small AUV), comprising a receiver mechanism 152, to receive signals from the primary vehicle, and a controller 156 for maintaining time-synchronization, and calculating range, bearing, azimuth-inclination estimation, and, ultimately, secondary vehicle location.
In one preferred embodiment, the secondary vehicle 150, as illustrated in
In one currently preferred embodiment, the secondary vehicle 150 comprises an AUV, for example the Bluefin SandShark AUV, commercially available from General Dynamics. The payload 151 comprises approximately the front two-thirds of the vehicle 150 pictured in
In another preferred embodiment, the secondary vehicle comprises a Hydromea Vertex AUV with an attached payload. The payload comprises a receiver mechanism 152, a low-grade IMU, a bearing mechanism (e.g. a compass), a controller, a timing mechanism and an optional depth sensing mechanism. Acoustic receiver equipment for underwater acoustic communications can be utilized as disclosed by Zhou et al. in U.S. Pat. No. 7,859,944, for example. The components of the secondary vehicles and the payload 151 will be discussed in more detail below.
As illustrated in
Phased-array beamforming, described in more detail below, involves iterating over a large set of look-angles in a three-dimensional space, depicted in
In the currently preferred embodiment, the secondary 150 utilizes SMC beamforming 400 to select a sub-set of look-angles to determine secondary relative location. SMC beamforming is an approach that closely couples 440 raw signal processing 402 (including matched filtering 404 and beamforming 406) with particle filtering 420 in real-time. This close coupling is referred to herein as the beamformer plus particle filter 440 (B+PF) approach. SMC beamforming 400 results in an algorithm whose computational complexity scales with the number of particles rather than the set size, while allowing for the seamless integration of motion model odometry 422 with instantaneous received signal measurements RS.
As illustrated in
A particle filter is uniquely suited for angular estimation with beamforming for the following reasons: firstly, the response of an array is generally multimodal in nature (e.g. domains 550, 560 and 570), and this multimodality is often made worse by signal effects such as multipath and environmental noise; secondly, spherical coordinates mean that the particles remain within a small, closed domain; and thirdly, the particles themselves represent the look-angles at which beamformer spatial filtering is performed, greatly reducing computational complexity while improving estimation accuracy.
Reference Frames and Transformations. The innovative SMC Beamformer uses three reference frames, each frame using a right-hand coordinate system. The Forward-Port-Above body-fixed frame (bff), in which beamforming is performed, see Eqs. 19-24; the vehicle-carried East-North-Up frame (vcf), whose origin is fixed to the secondary's center of gravity; and the vehicle-carried East-North-Up local-level frame (llf), whose origin is also fixed to the secondary's center of gravity, but in which both range and angle to the primary 110 are combined to estimate relative primary position, and in which the filter motion update step is applied. A set of N particles with associated weights $w_i^{llf}$ are stored in the llf and used to model the relative position of the primary, with their states given by Equation 1.
$\vec{s}_i^{\,llf}(t) = [\,x_i^{llf}(t)\;\; y_i^{llf}(t)\;\; z_i^{llf}(t)\,]^T$   (Eq. 1)
Transforming between this set of combined range-angle llf particles, $\vec{s}_i^{\,llf}(t)$, and the separated range and angle particles in the range domain and the vcf requires the beamformer to perform a duplication of the set through a transformation denoted by $R_{llf}^{vcf}$ (Eqs. 2-4).
The corresponding weights of these two sets are denoted as $w_i^{bff}$ and $w_i^{r}$, and they are set equal to $w_i^{llf}$ during the transform of Equation 5:
$(w_i^{bff},\, w_i^{r}) = w_i^{llf}$   (Eq. 5)
The range-only particles $r_i$ are now properly in the range domain for fusion with range measurements. However, the angle-only vcf particles $\vec{s}^{\,vcf}$ must still be transformed into the domain in which beamforming occurs, which is the bff. Here, it becomes clear why the weights of the angle-only particles are denoted by $w_i^{bff}$: the vcf is only used as an intermediary frame between the llf and the bff. Transforming from the vcf to the bff is denoted by $R_{vcf}^{bff}$,
$\vec{s}_i^{\,bff}(t) = [\,x_i^{bff}(t)\;\; y_i^{bff}(t)\;\; z_i^{bff}(t)\,]^T = R_{vcf}^{bff} \cdot \vec{s}^{\,vcf}(t)$   (Eq. 6)
$R_{vcf}^{bff} = R_z(\alpha)\, R_y(\beta)\, R_x(\gamma)$   (Eq. 7)
$R_{bff}^{vcf} = (R_{vcf}^{bff})^T$   (Eq. 8)
The inverse transformation from the separate range-only and angle-only particle sets back into the combined range-angle set $\vec{s}_i^{\,llf}(t)$ has a caveat: rather than multiplying each angle-only particle $\vec{s}^{\,vcf}(t)$ and its associated weight $w_i^{bff}$ by every range-only particle $r_i$ and weight $w_i^{r}$ (which would result in $N^2$ particles in $\vec{s}_i^{\,llf}(t)$), the present invention instead sorts $\vec{s}^{\,vcf}(t)$ and $r(t)$ in ascending order according to their respective weights $w_i^{bff}$ and $w_i^{r}$, and then applies the inverse transform $R_{vcf}^{llf}$ (Eq. 9) to the sorted sets element-wise.
Weighted angle particles in the body-fixed frame 1412 are shown in
These equations allow the estimation to maintain a constant number of N particles efficiently during duplication and recombination, and they perform well in practice. The reference frames used by the sequential SMC beamformer and the transformations between each are shown in
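As a concrete illustration, the following is a minimal numpy sketch of the particle-set duplication and recombination described above (Eqs. 1 and 5-9). The frame names, the magnitude/direction split, the Euler rotation order of Eq. 7 and the sort-by-weight pairing follow the text; the array shapes, the interpretation of (α, β, γ) as yaw, pitch and roll, and the way recombined weights are formed are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

def rot_zyx(alpha, beta, gamma):
    """R_vcf^bff = Rz(alpha) Ry(beta) Rx(gamma) (Eq. 7); angles assumed yaw, pitch, roll."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])
    return Rz @ Ry @ Rx

def split_llf(s_llf, w_llf, alpha, beta, gamma):
    """Duplicate combined range-angle llf particles into range-only and bff angle-only sets."""
    r = np.linalg.norm(s_llf, axis=1)              # particle magnitudes -> range-line set
    s_vcf = s_llf / r[:, None]                     # unit directions in the vehicle-carried frame
    s_bff = s_vcf @ rot_zyx(alpha, beta, gamma).T  # Eq. 6 applied to row vectors
    return r, w_llf.copy(), s_bff, w_llf.copy()    # Eq. 5: both sets inherit the llf weights

def recombine_llf(r, w_r, s_vcf, w_bff):
    """Pair the weight-sorted sets element-wise (avoiding N^2 growth) and rebuild llf particles."""
    ir, ia = np.argsort(w_r), np.argsort(w_bff)
    s_llf = r[ir, None] * s_vcf[ia]                # sorted ranges scale sorted unit directions
    w_llf = 0.5 * (w_r[ir] + w_bff[ia])            # recombined weights (Eq. 9 not reproduced; assumption)
    return s_llf, w_llf / w_llf.sum()
```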
Initialization. In order to improve computation time, the SMC beamformer first precomputes and stores in memory the spatial filter 408 H[ω; θ, ϕ] containing phase-shifts associated with a regular grid of look-angles, with the grid parameters selected as a trade-off between memory consumption and angular resolution. This grid is later used as a look-up-table for phase-shifts nearest to a given look-angle.
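The following sketch illustrates, under stated assumptions, how such a look-up table of phase shifts might be precomputed for a regular azimuth-inclination grid. The 22 by 360 grid size, the 6-10 kHz band of interest and the four-element tetrahedral array come from this disclosure; the specific element coordinates, the number of frequency bins and the storage layout are assumptions chosen only for illustration.

```python
import numpy as np

C = 1480.0                                             # approx. sound speed in freshwater (m/s)
# Illustrative tetrahedral hydrophone positions in the bff (metres); not the actual geometry.
P = 0.05 * np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)

freqs = np.linspace(6e3, 10e3, 64)                     # band of interest (bin count assumed)
incl = np.deg2rad(np.linspace(0.0, 180.0, 22))         # 22 inclination angles
azim = np.deg2rad(np.arange(360.0))                    # 360 azimuth angles

# Unit look-angle vectors u(theta, phi) for every grid point: shape (22, 360, 3).
u = np.stack([np.sin(incl)[:, None] * np.cos(azim)[None, :],
              np.sin(incl)[:, None] * np.sin(azim)[None, :],
              np.cos(incl)[:, None] * np.ones_like(azim)[None, :]], axis=-1)

tau = np.einsum('ej,abj->abe', P, u) / C               # geometric delay per element (Eq. 19 form)
# Opposing phase shifts per frequency; H is indexed by [inclination, azimuth, element, frequency]
# and is stored once for later nearest-look-angle look-ups (sign convention assumed).
H = np.exp(1j * 2.0 * np.pi * freqs[None, None, None, :] * tau[..., None])
```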
Initialization of the filter depends on whether the primary vehicle is fixed or mobile. In both cases, all of the llf particles, $\vec{s}_i^{\,llf}(t)$, are initialized with equal weights, $w_i^{llf} = 1/N$, when the secondary vehicle is on the liquid body surface SS, in the vehicle-carried East-North-Up (ENU) frame whose origin is fixed and centered on the secondary vehicle. In addition, the particles are reinitialized whenever the secondary vehicle surfaces.
If the primary is fixed at a known position, the llf particles are initialized around the primary's relative position using knowledge of the secondary's GPS position and noise. If the primary is mobile, the llf particles are instead initialized randomly with a uniform distribution within a spherical volume centered around the secondary and extending to the system's maximum range. One embodiment's range is calculated in Equation 10. The filter makes the strong assumption that the primary 110 is within the system's range when the secondary is deployed.
The second step of the SMC beamforming is performed upon secondary movement, referred to herein as the motion update 422. Once the secondary vehicle dives and loses GPS reception, the particles are updated with a simple motion model that uses estimates of the secondary's speed-over-ground (Vg) and yaw (α), provided to the payload 151 by the secondary vehicle 150 (e.g. a SandShark AUV). Because the present invention does not require a DVL, and most embodiments will lack such a system, speed-over-ground can be calculated using the secondary's propulsion mechanism (e.g. propeller speed, or RPM), scaled by some calibrated factor C and compensated by pitch (β), as demonstrated in Eq. 11. Because the filter is estimating the relative position of the primary in the secondary vehicle-centered llf, the direction opposite to the secondary vehicle's direction of travel is used to propagate the llf particles $\vec{s}_i^{\,llf}(t)$, as shown in Equation 12, where Δz is the secondary vehicle's change in depth from its pressure sensor, and Gaussian noise has been added to vehicle speed and yaw. The final term, N(0, σv
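A minimal sketch of this motion-update step follows, under stated assumptions: speed-over-ground is approximated from propeller RPM via a calibration factor and pitch compensation (the spirit of Eq. 11), and the llf particles are propagated opposite to the secondary's displacement with Gaussian noise on speed and yaw (the spirit of Eq. 12). The yaw convention (clockwise from North in an ENU frame), the time step and the noise magnitudes are illustrative.

```python
import numpy as np

def motion_update(s_llf, rpm, yaw, pitch, dz, dt=1.0, cal=1e-3,
                  sigma_v=0.1, sigma_yaw=np.deg2rad(2.0), rng=np.random.default_rng()):
    """Propagate llf particles (N x 3, East-North-Up) between acoustic receptions."""
    n = len(s_llf)
    v = cal * rpm * np.cos(pitch) + rng.normal(0.0, sigma_v, n)   # noisy speed-over-ground
    psi = yaw + rng.normal(0.0, sigma_yaw, n)                     # noisy heading
    s = s_llf.copy()
    # The primary is tracked relative to the secondary, so the particles move opposite
    # to the secondary's own displacement over the interval dt.
    s[:, 0] -= v * dt * np.sin(psi)                               # East component
    s[:, 1] -= v * dt * np.cos(psi)                               # North component
    s[:, 2] -= dz                                                 # depth change from pressure sensor
    return s
```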
Measurement Updates. Whenever the secondary system 151 receives a valid signal measurement RS, the $\vec{s}_i^{\,llf}(t)$ local-level frame particle set is first transformed into duplicate sets of particles in the range domain and the body-fixed frame, using Equations 2, 3, 4, 5, 6 and 7. This transformation gives a range-only particle set, $r(t)$, and an angle-only particle set, $\vec{s}_i^{\,bff}(t)$, along with their associated weights $w_i^{r}$ and $w_i^{bff}$.
The weights $w_i^{r}$ of the particles $r_i$ in the range domain are updated by multiplying against the range signal output from the matched filtering of Equations 15 and 16. Systematic resampling is then used to resample and reweight this set of range-only particles.
The particles $\vec{s}_i^{\,bff}$ in the bff now represent look-angles at which to perform beamforming. The standard Cartesian-to-spherical transform provides the azimuths (ϕ) and inclinations (θ) at which to apply the set of beamforming equations, shown in Equations 19-24. Upon completion of beamforming at these look-angles, the weights $w_i^{bff}$ of these angle-only particles are updated by multiplying against their corresponding beamformer output power, and systematic resampling is performed to obtain an updated particle set. The transform $R_{bff}^{vcf}$ then places these particles into the vehicle-carried frame.
All that remains to be done is to transform these updated range-only and angle-only particle sets back into the local-level frame, which is done using the recombination Equations 8 and 9. The filter loop then repeats.
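A compact sketch of this measurement-update step is given below, under stated assumptions. The weight multiplications against the matched-filter range signal and the beamformer output power, and the use of systematic resampling, follow the text; the helper names, the interpolation of the range signal at particle ranges and the callable used to evaluate beamformer power are illustrative.

```python
import numpy as np

def systematic_resample(particles, weights, rng=np.random.default_rng()):
    """Standard systematic resampling: one stratified draw per particle."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights / weights.sum())
    idx = np.searchsorted(cumulative, positions)
    return particles[idx], np.full(n, 1.0 / n)

def update_range(r, w_r, ranges, range_signal):
    """Multiply range-particle weights by the matched-filter range estimate at each particle."""
    w = w_r * np.interp(r, ranges, range_signal)
    return systematic_resample(r, w)

def update_angle(s_bff, w_bff, beam_power):
    """beam_power(azimuth, inclination) is assumed to return the beamformer output power."""
    az = np.arctan2(s_bff[:, 1], s_bff[:, 0])
    inc = np.arccos(s_bff[:, 2] / np.linalg.norm(s_bff, axis=1))
    w = w_bff * np.array([beam_power(a, i) for a, i in zip(az, inc)])
    return systematic_resample(s_bff, w)
```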
Likelihood Estimation. To estimate the relative position of the primary, the weighted mean of the particles $\vec{s}_i^{\,llf}$ in the local-level frame is used. When the primary is fixed at a known position, absolute localization (e.g. latitude and longitude coordinates) is possible by negating the relative primary position and offsetting the result by the absolute coordinates of the primary. For a moving primary, the relative position enables the secondary to operate in a primary-relative coordinate frame.
The localization performance of the SMC beamformer is comparable to the two-stage conventional B+PF approach described elsewhere herein. The SMC beamformer computational gain can be naively estimated as follows: assuming that the conventional beamformer processing time of a single look-angle is n milliseconds (ms), and the filter loop time of a single particle is m ms, given N look-angles and M particles, the total processing time for the two-stage B+PF approach is approximated in Equation 13. Conversely, for the SMC beamformer, the processing time is approximated in Equation 14.
$(N \cdot n) + (M \cdot m)$   (Eq. 13)
$M \cdot (n + m) = (M \cdot n) + (M \cdot m)$   (Eq. 14)
Given the desired angular resolution of the beamformer, it is often the case that M<<N, and therefore the net reduction in processing time is about ((N−M)·n) ms. Filter performance in terms of both accuracy and computational efficiency is now directly proportional to the single parameter of the number of particles M.
The SMC beamformer allows for fine tuning of the system according to the computational power of a specific embodiment. Firstly, by increasing the resolution of the grid of look-angles, the look-up-table consumes more memory, but improves precision by more accurately representing the true beamformer power output for a given particle. Secondly, accuracy is improved by increasing the number of particles, but at the cost of greater computation time. In one example of the present invention, a regular grid of 22 inclination and 360 azimuth angles is used, resulting in 519 MB of memory usage, or about half the memory available on a Raspberry Pi 3 controller. 1,500 particles are used to achieve sub-second rates, fast enough for real-time performance.
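For illustration only, using the example figures above: the 22 by 360 regular grid contains N = 7,920 look-angles, while M = 1,500 particles. With per-look-angle beamforming time n and per-particle filter time m, the two-stage approach of Equation 13 costs roughly 7,920·n + 1,500·m per update, whereas the SMC beamformer of Equation 14 costs roughly 1,500·(n + m), a saving on the order of (7,920 − 1,500)·n = 6,420·n.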
The secondary vehicle estimates range to the primary through the use of matched filtering 404, as illustrated in
In one construction, matched filtering is performed with measurements from each of the four hydrophones to obtain a range estimate signal 506. This is in essence a convolution between hydrophone i's received signal, $x_i[n]$, and a replica of the signal (e.g. an up-chirp) $h[n]$, shown in Equation 15.
$y_i[n] = \sum_{k=0}^{N-1} h[n-k]\, x_i[k]$   (Eq. 15)
The output $y_i[n]$ reaches a maximum at the sample number at which $x_i[n]$ most closely resembles the broadcast signal template replica $h[n]$. The standard deviation of this sample number across the four elements can be used as a measure of the validity of the array measurement, shown in Equation 16.
When σ < 5, we deem the measurement valid. Invalid measurements occur when the signal S is not received by all receivers 153, which occurs primarily due to self-occlusion, in which the body of the secondary obstructs the transmitted signal S. The matched filter outputs from all receivers are combined and smoothed using a moving average, shown in Equations 17 and 18, to generate the range estimate.
Finally, sample numbers n are converted to ranges by subtracting the characterized systemic delay and scaling by the ratio of the approximate sound velocity in the liquid body (here, freshwater) to the sampling rate: $r = (c/F_s)\, n = (1480/37500)\, n$. This range estimate signal y[n] is normalized and passed on to the particle filter.
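A hedged numpy sketch of this ranging chain follows: each hydrophone recording is matched-filtered against the known replica (the correlation form of Eq. 15), cross-element peak consistency is checked against the σ < 5 sample criterion, the outputs are combined and smoothed, and sample indices are converted to metres using the 1480 m/s and 37.5 kS/s figures above. The way the four outputs are combined (sum of magnitudes) and the smoothing window length are assumptions, as Equations 17 and 18 are not reproduced here.

```python
import numpy as np

FS = 37_500.0     # sampling rate (samples/s)
C = 1_480.0       # approximate sound speed in freshwater (m/s)

def matched_filter_range(recordings, replica, sys_delay_samples=0, win=25):
    """recordings: (n_hydrophones, n_samples); replica: the known broadcast signal h[n]."""
    outputs = np.array([np.correlate(x, replica, mode='full')[len(replica) - 1:]
                        for x in recordings])                        # per-element matched filter
    peaks = outputs.argmax(axis=1)
    if np.std(peaks) >= 5:                                           # Eq. 16 validity test
        return None                                                  # likely self-occlusion
    combined = np.abs(outputs).sum(axis=0)                           # combine elements (assumed form)
    smoothed = np.convolve(combined, np.ones(win) / win, 'same')     # moving-average smoothing
    y = smoothed / smoothed.max()                                    # normalized range estimate signal
    ranges = (np.arange(len(y)) - sys_delay_samples) * C / FS        # sample index -> metres
    return ranges, y
```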
Phased-array beamforming is also performed using the raw measurements, giving an azimuth-inclination heatmap estimating the angle-of-arrival relative to a frame of reference for the secondary vehicle such as the body-fixed frame of a secondary. Assuming a planar incident sound wave and isotropic receiver response, beamforming is done by iterating through various azimuth-inclination combinations (look-angles), using the array geometry to apply time delays (phase shifts) to the received signals RS, and summing these time-delayed signals. The look-angle with the maximum response is the likeliest angle-of-arrival of the incident wave, which occurs when the signals are in phase and add constructively.
In other words, the aim is to steer the beamformer over a set of candidate look-angles, and to select the look-angle that results in the maximum beamformer output power; this occurs when the look-angle points in the direction of the incoming plane wave and the phase-shifted signals add constructively, as shown in
The time delay $\tau_i$ of a plane wave arriving from a specific look-angle at each element can be calculated as shown in Equation 19, where $\vec{u}$ is shown in Equation 20.
where $\vec{p}_i$ is the position of receiver 153 (e.g. a hydrophone), ϕ and θ are the look-angle azimuth and inclination, and c is the speed of sound in the liquid body. This time delay is constant for a given look-angle, but corresponds to phase shifts in the frequency domain that increase with frequency, as shown in Equation 21. Applying the time delay from Equation 19 to the signal $f_i$ received by receiver i (e.g. a hydrophone) is equivalent to applying phase-shifts in the frequency domain (Eq. 21).
where $f_i$ is the plane wave signal incident on receiver i, and $\vec{\omega}$ is the vector of frequencies output by the n-point DFT. Beamforming negates these geometrically-induced time delays by using the spatial filter 408 $H[\omega; \theta, \phi]$ for each element, which applies opposing phase-shifts and sums over all elements in Equations 22 and 23. The beamformer frequency-averaged output power is then calculated by Equation 24.
The previously described matched filter 404 is applied to the received signals RS to enhance detection, and the Chirp Z-transform is used rather than a full DFT to reduce n without loss of resolution in the frequency range of interest. When a look-angle is pointing toward the primary, the output power is large. By steering (i.e. iterating) the look-angle over a set of candidates and searching for the largest response, the likeliest angle to the primary 110 can be found, providing an instantaneous estimate of the azimuth and inclination between the secondary and primary, as summarized in Equation 25. Steering look-angles involves iterating through a set of candidate look-angles, analyzing the response (output) level of each, and comparing the current response to the maximum output found in the set so far.
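The following is a minimal frequency-domain delay-and-sum sketch in the spirit of Equations 19-24, assuming a planar incident wave and isotropic elements. It uses a plain FFT restricted to the band of interest for simplicity, whereas the disclosure prefers the Chirp Z-transform; the array geometry, band edges and sampling rate are as assumed or stated elsewhere in this description.

```python
import numpy as np

C = 1_480.0        # approximate sound speed (m/s)
FS = 37_500.0      # sampling rate (samples/s)

def beam_power(recordings, positions, azimuth, inclination, band=(6e3, 10e3)):
    """recordings: (n_elements, n_samples); positions: (n_elements, 3) in the bff (metres)."""
    u = np.array([np.sin(inclination) * np.cos(azimuth),
                  np.sin(inclination) * np.sin(azimuth),
                  np.cos(inclination)])                           # unit look-angle vector (Eq. 20)
    tau = positions @ u / C                                       # per-element geometric delay (Eq. 19)
    spectra = np.fft.rfft(recordings, axis=1)
    freqs = np.fft.rfftfreq(recordings.shape[1], d=1.0 / FS)
    sel = (freqs >= band[0]) & (freqs <= band[1])                 # limit to the band of interest
    # Opposing phase shifts negate the geometric delays; summing over elements gives the
    # steered output (Eqs. 21-23). The sign convention matches the delay definition assumed here.
    steered = spectra[:, sel] * np.exp(2j * np.pi * freqs[sel][None, :] * tau[:, None])
    return np.mean(np.abs(steered.sum(axis=0)) ** 2)              # frequency-averaged power (Eq. 24)

# Steering (Eq. 25): evaluate beam_power over candidate look-angles and keep the maximum.
```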
Unlike previous methods, which utilize a two-stage offline (e.g. non-real-time and not onboard the secondary vehicle) process of conventional beamformer processing followed by sampling of the beamformer output by a particle filter to fuse signal measurements and odometry, the present invention discloses a method of closely coupling beamforming with particle filtering (the SMC beamformer). This tight integration is key in enabling closed-loop, online secondary navigation. Online navigation is performed in real-time on the secondary vehicle itself, and not on another, remote platform/computer.
In some embodiments, alternative beamforming algorithms are performed, as now described. The likely angle-of-arrival (i.e. look-angle) can be found by iterating through a set of look-angles to find the maximal output, which occurs when the look-angle is equal to the true angle-of-arrival of the incident wave. The iteration over a number of look-angles produces an azimuth-inclination heatmap, g[ϕ, θ], as described in Equations 26 and 27.
This heatmap is passed on to the particle filter when 265°≤ϕ*≤95° and 45°≤θ*≤135° to prevent error caused by self-occlusion.
Matched filtering and beamforming provide instantaneous and noisy estimates of range and angle-of-arrival respectively. In addition, underwater acoustic propagation frequently exhibits undesirable properties such as multi-path and reflections, resulting in outliers and measurement distributions that are non-Gaussian. Consider the three valid matched filtering outputs in
The particle filter of the present invention makes use of three main coordinate frames: the Forward-Port-Above body-fixed frame (bff), in which acoustic angle-of-arrival measurements are made (Eqs. 5-9); the vehicle-carried East-North-Up frame (vcf), whose origin is fixed to the center of gravity of the AUV; and the local-level East-North-Up frame (llf), whose origin we define to be the primary position and within which the secondary vehicle navigates. At time instant t, the filter models the azimuth-inclination to the beacon using a set of N particles and associated weights $w_i^{x}$ which reside on the unit ball in the vehicle-carried frame. A second set of N particles and weights $w_i^{r}$ residing on a 0-300 m range-line (rl) are maintained to estimate range to the beacon using matched filtering measurements (Eqs. 15, 17, 18). Two particle sets are used due to the independent nature of range and azimuth-inclination measurements. The state vectors of the particles in each set are given by Equation 28:
$\vec{s}_i^{\,x}(t) = [\,x_i^{vcf}(t),\; y_i^{vcf}(t),\; z_i^{vcf}(t)\,]^T \quad\text{and}\quad \vec{s}_i^{\,r}(t) = [\,r_i^{rl}(t)\,]$   (Eq. 28)
Typically, the particle filter is initialized using the secondary's positioning module (e.g. GPS measurements) when the secondary is on the liquid body surface SS awaiting deployment. Secondary GPS position is transformed into the local-level frame by subtracting the primary location, and particles are initialized in this frame centered around the positioning module position. These particles are then transformed to the secondary vehicle-carried frame 504 and range-line 508 for state vector storage, as given in Equations 29 through 32.
where $(x_{GPS}, y_{GPS})$ is the local-level frame GPS position and $\sigma_{GPS}$ is the standard deviation of GPS measurement noise (or other positioning module locational information). The transform from the local-level to the vehicle-carried frame 504 and range-line 508 is denoted by $R_{llf}^{vcf}$.
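A brief sketch of this initialization step is shown below, with heavy caveats: Equations 29-32 are not reproduced, so the depth seed, the value of σGPS and the sign convention used to turn the secondary's llf offset into a unit-ball direction toward the primary are all illustrative assumptions.

```python
import numpy as np

def init_particles(gps_xy_llf, n=500, sigma_gps=3.0, rng=np.random.default_rng()):
    """gps_xy_llf: secondary GPS fix minus primary location, i.e. its llf (x, y) offset."""
    xy = gps_xy_llf + rng.normal(0.0, sigma_gps, (n, 2))      # scatter around the GPS fix
    offsets = np.hstack([xy, np.zeros((n, 1))])               # surface deployment: depth ~ 0
    r = np.linalg.norm(offsets, axis=1)                       # range-line particle set (0-300 m)
    # Direction from secondary to primary taken as the negated, normalized offset
    # (sign convention assumed); stored on the unit ball in the vehicle-carried frame.
    s_vcf = -offsets / np.maximum(r[:, None], 1e-9)
    w = np.full(n, 1.0 / n)                                   # equal initial weights
    return s_vcf, r, w.copy(), w.copy()
```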
1) Prediction: In the prediction step the two particle sets are sorted in ascending order according to their weights. The particles are then transformed to the local-level frame by combining both sets (essentially by element-wise multiplication), a transform denoted by $R_{vcf}^{llf}$ (Eq. 34 and
Storing the particles in the vehicle-carried rather than the body-fixed frame allows us to exclude attitude propagation in the update step. This reduces computation, since attitude updates occur much faster than acoustic measurements.
2) Update: Whenever a valid azimuth-inclination heatmap or range estimate signal 506 is received, weights are updated and the particles resampled. For the range particle set, particle weights are multiplied by the value of the range estimate signal corresponding to their associated ranges, and resampling and weight normalization are done using systematic resampling. For the azimuth-inclination particle set, the particles are first transformed into the body-fixed frame using vehicle pitch ($\theta^{vcf}$), roll ($\psi^{vcf}$) and yaw ($\phi^{vcf}$), a transformation denoted as $R_{vcf}^{bff}$,
let $\vec{U}_i^{\,\Phi}(t) = [\,\phi_i^{bff}(t),\; \theta_i^{bff}(t)\,]^T$   (Eq. 38)
$\vec{U}_i^{\,\Phi}(t) = R_{\Phi}\big( (R_z(\phi^{vcf})\, R_y(\theta^{vcf})\, R_x(\psi^{vcf}))^T\, \vec{s}_i^{\,x}(t) \big)$   (Eq. 39)
In the body-fixed frame the azimuth-inclination particles are represented using spherical coordinates 502; their weights are multiplied by the corresponding azimuth-inclination heatmap values, and resampling and weight normalization are performed using systematic resampling. Finally, the particles are transformed back into the vehicle-carried frame using the inverse rotation matrices and the standard spherical-to-Cartesian transform; this process is denoted as $R_{bff}^{vcf}$.
3) Estimation: Estimation is performed by calculating the weighted means of both the range and azimuth-inclination particle sets in the body-fixed frame. Transformation of the range and azimuth-inclination means into the local-level frame 510 provides an estimate of the secondary position (and trajectory 512 in this construction) in the local-level frame. In addition, the means transformed into the vehicle-carried frame are used during factor graph smoothing. The particle filter output is set to a definable particle number. In one embodiment, the output of the particle filter is set to 500 particles; visualizations of the particles in each coordinate frame can be seen in
Factor Graph Smoothing: Although the particle filter provides an estimate of the secondary vehicle's location, it does so by recursively marginalizing out all previous measurements, resulting in a trajectory that often contains discontinuities. A factor graph smoothing algorithm can improve this by utilizing all particle filter measurements to optimize over the full secondary trajectory 512. This approach results in a smoother and more consistent trajectory, while still retaining the robustness against acoustic outliers provided by the particle filter.
$\vec{x}_i = [\,x_i,\; y_i,\; \phi_i\,]^T$   (Eq. 40)
In this approach, the secondary vehicle's pose, as shown in Equation 40, is estimated in the local-level frame using a factor graph smoothing framework to represent the collection of poses over the entire trajectory. Each node $\vec{x}_i$ in the graph corresponds to the pose estimate at time i, and is linked to the preceding and subsequent pose nodes by odometry constraints calculated using the motion model, as depicted in
$\vec{b} = [\,x_b,\; y_b\,]^T$   (Eq. 41)
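To make the idea concrete, the following is a hedged, generic smoothing sketch rather than the patented formulation: the trajectory is optimized jointly against odometry increments from the motion model and the particle filter's per-ping position estimates in the beacon-centered local-level frame, using a general nonlinear least-squares solver in place of a dedicated factor-graph library. It estimates position only (Eq. 40 also includes heading), and the sigma weights are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def smooth_trajectory(odometry, pf_positions, sigma_odo=0.5, sigma_pf=2.0):
    """odometry: (T-1, 2) per-step displacements from the motion model, in the llf;
       pf_positions: (T, 2) particle-filter estimates of secondary position in the llf."""
    T = len(pf_positions)
    x0 = pf_positions.ravel()                                    # initial guess for the T poses

    def residuals(x):
        poses = x.reshape(T, 2)
        r_odo = (poses[1:] - poses[:-1] - odometry) / sigma_odo  # odometry factors between nodes
        r_pf = (poses - pf_positions) / sigma_pf                 # per-ping measurement factors
        return np.concatenate([r_odo.ravel(), r_pf.ravel()])

    return least_squares(residuals, x0).x.reshape(T, 2)          # smoothed llf trajectory
```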
The primary vehicle 110 comprises a positioning mechanism 112, a transmitter 116, and a controller 113. The positioning module 112 obtains positional information relating to the location of the primary vehicle. The positioning module may comprise any suitable positioning system known in the art, for example a GPS receiver. Most often the output of the positioning module is absolute positional information, represented by latitude and longitude coordinates. In one embodiment, the positioning module comprises a DVL-INS system to determine its underwater position and to generate positional information. The generated positional information is transferred to the controller for the proper generation of a signal S.
The controller 156 comprises a digital controlling device, performing all common information receiving, relaying and transmitting of commands between electrical components in the payload 151 and in the secondary vehicle 150. Often the controller comprises a single-board controller, for example a Raspberry Pi computer. In other embodiments, the controller 156 comprises an interconnected Arduino Uno microcontroller with a Wave Shield for audio transmission. In some embodiments, the controller comprises more than one physical structure, separated by control over different components; for example, the controller 156 may further comprise a DAQ and amplifier 170 and a battery and power board 172.
The positioning module 112 comprises any suitable device that can precisely determine the position of the primary vehicle. In some embodiments, the positioning module 112 comprises a GPS receiver unit. In other embodiments, the positioning module 112 comprises a
DVL-aided INS or DVL-aided IMU (inertial measurement unit). The positioning module 112 in the currently preferred embodiment comprises a Garmin 18xLvC GPS unit, from which the rising edge of the pulse-per-second (PPS) signal is used to trigger playback of a pre-recorded 20 msec, 7-9 kHz linear up-chirp signal output by the Wave Shield.
The primary vehicle's signal generation and timing unit 114 in turn transfers locational information in the precisely time-synced pulse to the secondary vehicle. In some embodiments, the timing aspect of the signal generation and timing unit originates from the localization mechanism in the form of the GPS PPS signal. In other embodiments the unit comprises a precise timing device, as described in more detail for the secondary vehicle, below. In one construction, the PPS signal from this GPS receiver is input into a digital pin on the Arduino Uno equipped with an Adafruit Wave Shield, which allows it to precisely detect the onset of each second; this in turn triggers the Wave Shield to output a user-defined acoustic signal stored on a SD card.
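For illustration, a short sketch of generating such a 20 ms, 7-9 kHz linear up-chirp is shown below; the same waveform could serve both as the file stored for playback and as the matched-filter replica h[n] on the secondary. The 37.5 kS/s rate matches the secondary's DAQ described below, and the window taper is an assumption, not a requirement of the disclosure.

```python
import numpy as np
from scipy.signal import chirp

FS = 37_500.0                                     # sampling rate matching the secondary's DAQ
t = np.arange(0.0, 0.020, 1.0 / FS)               # 20 ms of samples
replica = chirp(t, f0=7_000.0, t1=t[-1], f1=9_000.0, method='linear')
replica *= np.hanning(len(replica))               # gentle taper (illustrative, not required)
```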
In some embodiments, the primary transmitter 116 further comprises an acoustic modem, as described by Zhou et al. in U.S. Pat. No. 7,859,944. Such embodiments allow for at least the one-way transfer of information from the primary 110 to one or more secondary vehicles 150. In such embodiments, the signal S is still produced as described above to provide secondary vehicles 150 with properly timed locational information, however additional information can be encoded into the acoustic signal and relayed passively to the secondary vehicle(s), or sent in a separate signal. In some embodiments, the acoustic modem is used to send the same information to all secondary vehicles present. In other embodiments, the acoustic modem uses time slots to encode information for specific secondary vehicles. The duration and number of dedicated time slots are limited such that the inventive system's precise synchronization and localization are not impacted, when incorporated into the positioning signal S. In further embodiments the secondary vehicle 150 further comprises an acoustic modem enabling two-way communications between primary and secondary.
The transmission mechanism 116 comprises any transmitting system suitable for submersion and the production of precisely timed signals. In the currently preferred embodiment, the transmission mechanism 116 comprises a Lubell UW30 amplifier and an LL916C underwater speaker. In other embodiments, the transmission mechanism 116 comprises an optical transmitter, a radio-frequency transmitter, or a combination of suitable modalities. Fan et al. in U.S. Patent Publication No. 2016/0127042 describe in more detail examples of combinatorial transmitters suitable for a transmission mechanism in the present invention.
The present invention provides for at least one submersible object, referred to as a secondary, to receive signals S from the primary to establish positional information while submerged. The secondary comprises a payload 151. The payload 151 comprises at least a receiving mechanism 152 and a controller 156. The receiver 152 most often comprises a plurality of individual receivers 153, most preferably at least two receivers, each configured to receive signals S sent by the primary. In the currently preferred embodiment, the receiver mechanism comprises a hydrophone array with at least two hydrophones, at least three hydrophones, or, preferably, at least four hydrophones spaced in a tetrahedral array. In some embodiments, the receiver mechanism is mounted on the nose of the payload (
The secondary payload 151 also comprises a timing function. In some embodiments, the timing functions, including time-keeping and time-synchronization (to the primary), are incorporated into the controller 156. In other embodiments, the timing functions are controlled by a separate digital device, referred to as a timing unit 154. In one embodiment, a Measurement Computing USB-1608FS-Plus digital acquisition device 162 (DAQ) is provided to perform timing functions in conjunction with an SA.45 chip-scale atomic clock 164 (CSAC), as well as analog-to-digital conversion for the CSAC and other payload analog devices. In the currently preferred embodiment, the DAQ 162 is triggered to record 8000 samples every second at a sampling rate of 37.5 kS/s. This digital signal is processed by the controller 156 (e.g. a Raspberry Pi 3 embedded computer) using an online navigation algorithm as described in more detail elsewhere herein. In other embodiments, the timing unit comprises a Jackson Labs GPS-disciplined oscillator containing an SA.45 CSAC, providing the payload 151 with a highly precise GPS-synchronized PPS signal. The CSAC is synchronized to the GPS PPS signal before deployment and maintains time-synchronization while the secondary vehicle is submerged.
The CSAC triggers the DAQ at the onset of each second, allowing the receiver mechanism 152 to record signal measurements RS in sync with primary broadcasts, effectively enabling OWTT ranging to the primary (e.g. an acoustic beacon). The timing unit provides a time base that typically drifts by less than 0.5 ms/day, but it makes up a disproportionate amount of the cost of the receiving system.
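Because each recording window begins at the instant of the primary's broadcast, the one-way travel time can be read directly from the matched-filter peak within that window. The following Python sketch illustrates this ranging step under stated assumptions (a single channel, a nominal sound speed of 1480 m/s, and a plain cross-correlation matched filter); it is a conceptual example, not the exact on-board implementation.

```python
# Illustrative OWTT ranging from a PPS-synchronized recording window. The
# matched filter here is a plain cross-correlation against the known chirp
# replica; the nominal sound speed and the optional systemic-delay calibration
# term are assumptions.
import numpy as np
from scipy.signal import correlate

FS = 37500            # sampling rate (S/s), per the embodiment above
SOUND_SPEED = 1480.0  # assumed nominal sound speed (m/s)

def owtt_range(recording, replica, systemic_delay=0.0):
    """Estimate one-way travel-time range from a single hydrophone channel.

    `recording` starts at the top of the second (the instant the primary
    transmits), so the lag of the matched-filter peak equals the acoustic
    travel time, minus any calibrated delay in the source/receiver chain.
    """
    mf = correlate(recording, replica, mode='valid')
    peak_lag = int(np.argmax(np.abs(mf)))         # sample index of best match
    travel_time = peak_lag / FS - systemic_delay  # seconds
    return SOUND_SPEED * travel_time              # meters
```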
In other embodiments, alternative time-syncing methods or mechanisms can be used. A carefully selected microcontroller-compensated crystal oscillator may provide a less expensive time base that drifts by only milliseconds per day or less. In embodiments where the secondary vehicle 150 is tethered, the GPS PPS signal can be directly relayed from the surface to achieve a significant reduction in cost. In other embodiments, the primary vehicle's motor or other sound-producing system produces a reproducible acoustic waveguide invariant, and that invariant is used to determine range. Harms et al. (2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), p. 4001-4004), incorporated by reference in full, describe a method for ranging an acoustic source (i.e. a primary) based on analyzing fading characteristics of different acoustic tonal components of the waveguide invariant parameter beta. In some embodiments of the present invention, the Harms method is modified by using a standardized acoustic source on the primary vehicle to determine secondary positional information. The secondary vehicle 150 times the receipt of the different tonal components emitted from the primary vehicle 110 and calculates range based on the separation of the tonal components, as described above.
The secondary vehicle 150 often further comprises common components made available to the payload 151 through a typical digital interface 168, including a MEMS IMU with magnetometer, depth/pressure sensors, a GPS unit, propellers, and control fins. Navigation data, including vehicle attitude and speed, is pre-filtered by the vehicle 150 from IMU and propeller RPM information. The controller 156 uses this filtered data to command vehicle 150 depth, heading, and speed. The payload 151 most often also comprises a power source 166, comprising any suitable battery or electrical generator as known in the art.
Referring to
Both matched filtering and phased-array beamforming were performed on the Raspberry Pi 3 in real-time at approximately 1.25 Hz with 4050 look-angles (15 inclination and 270 azimuth equally-spaced angles). This data was recorded by the payload along with pre-filtered navigation data from the vehicle, which was received by the payload at a rate of about 10 Hz. The payload and AUV system clocks were synchronized using an NTP server running on the payload.
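For illustration only, a conventional delay-and-sum beamformer over such an azimuth/inclination grid can be sketched as follows. The element geometry, the sample alignment by array rotation, and the brute-force search are simplifying assumptions made for clarity; the on-board implementation that achieved the roughly 1.25 Hz rate noted above is more heavily optimized.

```python
# Illustrative delay-and-sum beamformer over an azimuth/inclination grid.
# Element positions, sample alignment by np.roll (which wraps at the edges),
# and the brute-force search are simplifying assumptions; the on-board
# implementation is more heavily optimized.
import numpy as np

SOUND_SPEED = 1480.0   # assumed nominal sound speed (m/s)
FS = 37500             # sampling rate (S/s), per the embodiment above

def beamform(channel_data, element_xyz, azimuths_deg, inclinations_deg):
    """Return the (azimuth, inclination) pair, in degrees, with maximum beam power.

    channel_data : (n_elements, n_samples) matched-filtered hydrophone data
    element_xyz  : (n_elements, 3) hydrophone positions in meters
    """
    n_elem, _ = channel_data.shape
    best_az, best_inc, best_power = None, None, -np.inf
    for az in np.deg2rad(azimuths_deg):
        for inc in np.deg2rad(inclinations_deg):
            # Unit vector from the array toward the candidate source direction.
            d = np.array([np.cos(inc) * np.cos(az),
                          np.cos(inc) * np.sin(az),
                          np.sin(inc)])
            # A wavefront from direction d reaches element p earlier by (p . d)/c,
            # so delay each channel by that amount (in samples) to align them.
            shifts = np.round((element_xyz @ d) / SOUND_SPEED * FS).astype(int)
            summed = sum(np.roll(channel_data[ch], shifts[ch]) for ch in range(n_elem))
            power = np.max(summed ** 2)
            if power > best_power:
                best_az, best_inc, best_power = np.rad2deg(az), np.rad2deg(inc), power
    return best_az, best_inc
```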
Particle filtering and factor graph smoothing were performed offline. 500 particles were used for both the azimuth-inclination and range set. iSAM2 was used to build and solve the factor graph with vehicle poses added at a rate of 5 Hz and using ranges and azimuths output by the particle filter. A new graph was initialized each time the AUV received a GPS fix, allowing us to monitor the difference in estimated and true position during the underwater to surface transition.
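As a conceptual companion to the description above, the following Python sketch shows a minimal sequential Monte-Carlo (particle) filter that fuses an acoustic range measurement with simple motion odometry. The two-dimensional state, noise values, and Gaussian range likelihood are illustrative assumptions; the sketch does not reproduce the azimuth-inclination/range filter or the iSAM2 smoother used in the experiments.

```python
# Minimal particle filter fusing acoustic range measurements with odometry.
# The relative-position state, noise levels, and likelihood model are
# illustrative assumptions only.
import numpy as np

N_PARTICLES = 500       # matches the particle count used in the experiments
RANGE_SIGMA = 3.0       # assumed range measurement noise (m)
PROCESS_SIGMA = 0.5     # assumed per-step motion noise (m)

rng = np.random.default_rng(0)
# Particle state: relative beacon position (x, y) in a local level frame.
particles = rng.uniform(-100.0, 100.0, size=(N_PARTICLES, 2))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def predict(odometry_dxdy):
    """Propagate particles: when the vehicle moves, the beacon's relative
    position shifts by the opposite amount, plus diffusion noise."""
    global particles
    particles = particles - np.asarray(odometry_dxdy)
    particles += rng.normal(0.0, PROCESS_SIGMA, particles.shape)

def update(measured_range):
    """Reweight particles by the range likelihood, then resample systematically."""
    global particles, weights
    predicted = np.linalg.norm(particles, axis=1)
    weights = weights * np.exp(-0.5 * ((measured_range - predicted) / RANGE_SIGMA) ** 2)
    weights = (weights + 1e-300) / (weights + 1e-300).sum()
    # Systematic resampling.
    u = (np.arange(N_PARTICLES) + rng.random()) / N_PARTICLES
    idx = np.minimum(np.searchsorted(np.cumsum(weights), u), N_PARTICLES - 1)
    particles = particles[idx]
    weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def estimate():
    """Weighted mean of the particle set: the relative beacon position estimate."""
    return np.average(particles, axis=0, weights=weights)
```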
The AUV was deployed for two runs (designated run 1 and run 2), with the vehicle surfacing for three GPS fixes during the first and four GPS fixes during the second. For the two runs we perform a qualitative comparison between the trajectories 714 resulting from vehicle dead reckoning, particle filtering, and factor graph smoothing. We also use a simple metric to assess the inter-GPS-fix navigation performance of the three methods over both runs: during the underwater to surface transition a discontinuity in position occurs when the AUV gets a GPS fix, which is caused by localization error during underwater navigation; smaller jumps would indicate better performance. Unfortunately, this metric is subject to GPS positional error.
Plots of azimuth,
The plots of
Qualitative examination of the trajectories indicates that the dead-reckoned estimates are the least self-consistent, with large discontinuities when the secondary vehicle surfaces for a fix. The particle filter trajectories are better in this respect, but they suffer from non-continuity caused by the incorporation of the latest observations into the filter's recursive estimate. The trajectories resulting from iSAM2, on the other hand, are the most self-consistent and maintain a smooth, continuous trajectory between GPS fixes; this is a result of optimizing over the entire vehicle history, incorporating all acoustic measurements. These observations are supported by the jump distances in Table 1 above, showing that the iSAM2 approach has both the smallest average discontinuity and the lowest standard deviation.
Besides the inherent positional uncertainty associated with GPS measurements, possible sources for the observed differences between GPS and our localization approach include motion due to river currents, which cannot be accounted for; inaccurate characterization of the systemic delay in the source/receiver system; inaccurate measurement of the acoustic array element positions; as well as jitter in the onset of the acoustic beacon signal—a maximal 2 ms delay in the onset of the beacon transmission at a sound speed of 1480 m/s corresponds to a 2.96 m error, which is on the same order of magnitude as the differences observed.
To demonstrate single-beacon piUSBL absolute navigation, we carried out two additional closed-loop deployments (designated run 3 and run 4) of our SandShark AUV on a portion of the Charles River adjacent to the MIT sailing pavilion, illustrated in
Since GPS is unavailable underwater, we also deployed two commercial Hydroid LBL transponders 811a, 811b fastened to the pavilion at a depth of approximately 1 m, with the first transponder at position (52.8, 23.8) m and the second at (−55.6, −25.6) m relative to our custom acoustic beacon.
The SandShark payload is equipped with a WHOI micromodem that is not used for any purpose other than to query the LBL transponders at a rate of 0.2 Hz. This allows us to compare our solutions to the range values output by this independent system, providing a means for quantifying navigation accuracy. Note that the LBL system itself is subject to acoustic effects that result in range outliers. In order to remove these outliers, a simple constant-velocity filter is employed; essentially, if the difference between subsequent LBL ranges exceeds the distance the vehicle could cover at maximum speed in that time interval, then that LBL measurement is discarded. This ensures that physically impossible LBL ranges are pruned from each dataset, but even so, some outliers remain.
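A minimal Python sketch of this pruning rule follows; the maximum-speed value used here is an assumption for illustration.

```python
# Sketch of the constant-velocity outlier filter described above: an LBL range
# is discarded if the implied range rate exceeds what the vehicle's maximum
# speed allows. The maximum-speed value is an assumption for illustration.
MAX_SPEED = 2.0  # assumed vehicle top speed (m/s)

def prune_lbl_outliers(times, ranges, max_speed=MAX_SPEED):
    """Return the (time, range) pairs whose range change is physically achievable."""
    kept = [(times[0], ranges[0])]
    for t, r in zip(times[1:], ranges[1:]):
        t_prev, r_prev = kept[-1]
        if abs(r - r_prev) <= max_speed * (t - t_prev):
            kept.append((t, r))
        # Otherwise the jump is physically impossible; drop the measurement.
    return kept
```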
Results.
Ranges from these trajectories to the two commercial LBL transponders are plotted in
Although the LBL system is queried by the vehicle at a rate of 0.2 Hz, only about 32% of those queries were met with a valid response (detected with power above a certain threshold), indicating the acoustically challenging nature of the river. As a result, an even smaller percentage (~10%) of LBL ranges occur concurrently for both beacons; without concurrent ranges, LBL-based localization is not possible, which is why we have opted to compare piUSBL to LBL range measurements directly, rather than compare piUSBL and LBL position estimates. Outliers in the LBL ranges are apparent (e.g. in run 3 at 650-760 s, when the vehicle is stationary while receiving GPS), but this data still allows us to validate the navigational ability of our system.
Taking the absolute difference between the ranges output by the trajectory estimates and the raw ranges from the LBL system allows us to plot error statistics with respect to LBL, as shown in
MOVING BEACON RELATIVE NAVIGATION. Field Experiments. To validate the feasibility of the relative navigation operating paradigm, we performed an initial test using our SandShark AUV (i.e. the secondary) in Ashumet Pond in Falmouth, Mass. In this run, our custom acoustic beacon (i.e. the primary) was manually towed by an inflatable kayak, and was set to broadcast a 20 ms, 7-9 kHz LFM up-chirp (
Unfortunately, in this case no LBL system was deployed, and so only the internal odometry of the AUV was available to estimate absolute AUV position. To verify that the vehicle was indeed homing in on the beacon, a forward-pointing GoPro camera was mounted to the payload, allowing us to visually confirm the beacon during flybys.
Results.
Alternative Systems: This invention has been presented above as a system to localize a small, low-cost AUV using a single primary source, either fixed or mobile. One construction uses OWTT of a known signal emitted by the source to estimate range, and an AUV-mounted array to estimate angle to the primary source using matched filtering and beamforming. These measurements are fused with an AUV motion model using a particle filter, then smoothed with a factor-graph-based algorithm to provide a high-performance AUV localization estimate, without the use of conventional sensors such as a DVL or high-grade INS. It is acoustically passive on the AUV, reducing power use and cost, and enabling multiple AUVs to localize using a single beacon.
Alternatives according to the present invention include deploying two primary source beacons with different chirp signals to remove the vehicle's dependence on a magnetometer for yaw (a dependence that limits deployment to areas devoid of large magnetic anomalies), and implementing online versions of particle filtering and factor graph smoothing to perform closed-loop navigation with our factor graph estimate.
As described above, one or more secondary vehicles, such as AUVs, use pitch-roll-heading combined with matched filtering and beamforming to calculate range, azimuth, and inclination to a single fixed beacon, thereby resolving an instantaneous estimate of position. In some constructions, systems and methods according to the present invention enable real-time, on-board, consistent estimation of primary position by closely coupling phased-array beamforming and particle filtering. In addition to improved localization and computational performance, such an extended system enables a novel operating paradigm for AUVs: navigation relative to a non-stationary beacon whose absolute position is opaque to the vehicle. Since a major limitation of USBL approaches is the decrease in positional accuracy with increasing range due to angular error, this paradigm enables the AUV to bound its positional error by continuously operating in close proximity to a mobile beacon, facilitating AUV deployments over large spatial length scales.
As described above, systems according to the present invention provide an approach that enables a miniature, very low-cost autonomous underwater vehicle (AUV) to self-localize and navigate without the use of large, expensive, and power-hungry conventional AUV navigational sensors, such as a Doppler velocity log (DVL) or a high-grade attitude and heading reference system (AHRS). Our system has two defining characteristics: it uses a single primary beacon to periodically broadcast a known signal into a liquid body, which greatly improves usability and reduces system cost; and it uses a vehicle-mounted USBL array to passively detect and process the broadcast signal to generate an estimate of primary beacon position, easily permitting the system to scale to a large number of vehicles. Our approach uses matched filtering to estimate one-way travel-time range to the primary beacon, and a beamformer spatial filter to estimate azimuth and inclination between the vehicle and the primary beacon. Closed-loop AUV navigation using an inexpensive embedded computer is achieved through the close coupling of beamforming and sequential Monte-Carlo filtering, allowing the vehicle to fuse signal measurements with motion-model odometry in a computationally efficient manner, resulting in the online generation of consistent and accurate estimates of relative beacon position. We have experimentally shown the ability of our system to accurately perform closed-loop, absolute navigation in the case where the beacon is fixed at a known position, verifying our results against an independent commercial LBL positioning system. In addition, preliminary results of the vehicle navigating relative to a moving beacon have demonstrated the feasibility of this operating paradigm, opening up the future possibility of multi-AUV deployments over large spatial length scales.
The low-cost, low-power nature of this system makes it ideally suited to a variety of applications. Besides enabling multiple miniature, low-cost AUVs to self-localize, this approach can be used on conventional, high-cost AUVs or gliders to enable long-duration deployments, or under-ice navigation; it is ideal as a navigational aid for vehicles in the emerging consumer remotely-operated vehicle (ROV) space, since its cost can be further reduced by time-synchronization over the tether; and it can enable novel AUV operating schemes, such as coordinated surveys with multiple vehicles, or multi-AUV formations for environmental acoustic monitoring.
In yet another system according to the present invention, a second primary acoustic beacon is utilized, which may improve localization accuracy at a modest downgrade in ease-of-use. The use of two primary beacons will also enable the estimation of heading without a magnetometer, which is especially useful for low-cost AUVs and ROVs that experience magnetic interference in structured environments.
The objective of this example is a secondary vehicle command and control methodology that is easy to maintain as secondary vehicle formations scale up in number, while providing accurate acoustic navigation for a new generation of miniature, low-cost AUVs that lack high-fidelity navigational sensors (i.e. a DVL-aided INS). This method was demonstrated in field trials in which three SandShark AUVs were placed in the water and were commanded to different patterns based on the broadcast acoustic waveform and position of a single beacon (i.e. primary) in the Charles River.
This operational paradigm makes multi-vehicle operations easier on the operator. Each AUV has a unique identifier assigned automatically on launch, which determines parameterized offsets in x (Δx), y (Δy), depth (Δz), range (r), and heading (θ) retrieved from a pre-defined look-up table, as sketched below. The desired vehicle state in each operational mode is then determined by the estimated relative position of the primary, the autonomous secondary vehicle behavior assigned to the mode, and the set of retrieved offset parameters. Since depth is also configurable with offsets, vehicles may be stacked in depth using these behaviors.
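Purely by way of illustration, such a look-up table and the resulting desired-state computation could take the following form in Python; the vehicle identifiers and numeric offsets below are hypothetical, while the behavior names follow the modes described in this example.

```python
# Hypothetical per-vehicle offset look-up table and desired-state computation.
# The vehicle identifiers and numeric offsets are illustrative only.
import math

OFFSET_TABLE = {
    # id: x/y/depth offsets (m), loiter range (m), heading/bearing offset (deg)
    1: {"dx": 0.0,  "dy": 10.0,  "dz": 1.0, "r": 15.0, "theta": 0.0},
    2: {"dx": 0.0,  "dy": -10.0, "dz": 2.0, "r": 15.0, "theta": 120.0},
    3: {"dx": 10.0, "dy": 0.0,   "dz": 3.0, "r": 15.0, "theta": 240.0},
}

def desired_state(vehicle_id, beacon_rel_xy, mode):
    """Combine the estimated relative beacon position with this vehicle's
    parameterized offsets to produce a desired (x, y, depth) target."""
    p = OFFSET_TABLE[vehicle_id]
    bx, by = beacon_rel_xy
    if mode == "Relative Line":
        # Track a line displaced from the beacon by the (dx, dy) offsets.
        return (bx + p["dx"], by + p["dy"], p["dz"])
    if mode == "Relative Loiter":
        # Loiter on a circle of radius r about the beacon, offset by theta.
        return (bx + p["r"] * math.cos(math.radians(p["theta"])),
                by + p["r"] * math.sin(math.radians(p["theta"])),
                p["dz"])
    # 'Default' and other modes: hold station relative to the beacon.
    return (bx, by, p["dz"])
```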
This methodology can be extended for use with a transmitter carried by an intelligent primary vehicle, such as a conventional AUV outfitted with high-fidelity navigational sensors or an autonomous surface vehicle (ASV), resulting in a deployment paradigm that enables the operational command and control of AUV groups autonomously or remotely. The autonomous behaviors commanded by the dial in these experiments were ‘Default’, ‘Relative Loiter’, ‘Relative Line’, ‘Return and Surface’, and ‘Abort’. An illustration of how these behaviors are configured using the parameterized offsets in the x-y plane is shown in
As illustrated in
With these operational modes, multiple vehicles may be commanded to collect data using different behaviors relative to an operator with minimal configuration in the field. The primary power of this approach is scalability: any number of vehicles can be added, each with different parameterized offsets specified in the look-up table to perform mission-specific sampling. By recording source position and logging the relative position of the primary estimated by each vehicle, we can accurately estimate the trajectories of all AUVs in an absolute (global) frame of reference in post-processing, either on deck when all AUVs have returned and their data has been downloaded, or after the fact. An advantage of this technique is ease of configuration and intuitive operation: the user need only specify offset parameters for each behavior per vehicle in a single look-up table to obtain easily understood primary-centric multi-AUV operations.
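As a purely illustrative sketch of this post-processing step, each AUV's absolute trajectory can be recovered by subtracting its logged estimate of the beacon's relative position from the beacon's logged absolute position in a common local frame; the nearest-in-time alignment of the two logs used here is an assumption.

```python
# Illustrative post-processing georeferencing: recover absolute AUV positions
# from the beacon's absolute track and the AUV's logged relative estimates.
# The nearest-in-time alignment between the two logs is an assumption.
import numpy as np

def georeference(beacon_log, vehicle_log):
    """
    beacon_log  : list of (t, x_abs, y_abs) beacon positions (local east/north frame)
    vehicle_log : list of (t, dx, dy) relative beacon positions estimated on the AUV
    returns     : list of (t, x_abs, y_abs) absolute AUV positions
    """
    b_t = np.array([b[0] for b in beacon_log])
    b_xy = np.array([b[1:] for b in beacon_log])
    out = []
    for t, dx, dy in vehicle_log:
        i = int(np.argmin(np.abs(b_t - t)))    # nearest beacon fix in time
        bx, by = b_xy[i]
        out.append((t, bx - dx, by - dy))      # AUV = beacon minus relative offset
    return out
```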
An example application is the oceanographic sampling of fronts. An operator could deploy many AUVs in a single area, at which point they command a ‘Relative Line’ behavior with vehicle tracks crossing the front. When the operator determines that the front has moved, they change modes so that the vehicles enter the ‘Relative Loiter’ behavior to follow and circle the beacon, move the vessel to the new front location, and then switch the mode back to the ‘Relative Line’ behavior. Upon mission completion, ‘Return and Surface’ brings all vehicles back to the operator. If the beacon is housed on an ASV or a conventional AUV outfitted with a DVL-aided INS, the operator can command the beacon remotely via a single acoustic or radio modem installed on this intelligent primary vehicle. Collected data can be globally geo-referenced by using the beacon position from GPS (in the ASV case) or the DVL-aided INS (for a primary AUV) to correct AUV fleet trajectories in post-processing.
Experiments were conducted to demonstrate these principles in the Charles River in Cambridge, Mass. A beacon was used to command three submerged AUVs in-situ and in real-time based on the broadcast waveform. Three Bluefin SandShark AUVs, named Platypus, Quokka, and Wombat, were used in these experiments. The experiments used a custom acoustic beacon that consisted of an acoustic source box with a corresponding underwater speaker (collectively the primary).
SandShark Autonomous Underwater Vehicle. Production-model Bluefin SandShark AUVs from General Dynamics were used for testing acoustics and autonomy in experiments. Unlike conventional AUVs, which typically navigate using an expensive DVL-aided INS, the SandShark AUV is a miniature, low-cost alternative that navigates by default via dead-reckoning using propeller speed and vehicle attitude from a MEMS IMU; as such, its positional error without external acoustic aiding accumulates at a rate of about 3 m/min, unless it is on the surface where it receives GPS. The manufacturer provides a tail section with thruster and control fins, including the sensors (IMU and GPS) and actuators required for basic vehicle control. Users can then add a payload that interfaces to the tail via a cable that includes power and Ethernet.
Vehicle Payload and Configuration. The payloads added to the SandShark vehicles include the piUSBL receiver. This system consists of an external hydrophone array and a dry bottle containing a DAQ, timing, and autonomy system. The data measured by the pyramidal array (i.e. receivers 152) is collected using a Measurement Computing USB-1608FS-Plus DAQ. A Microsemi CSAC provides a PPS timing signal that triggers the DAQ to record data to the computer in sync with the acoustic transmission by the primary. Collected data is processed on the computer to identify the broadcast waveform and to estimate range and bearing to the primary's acoustic transmitter. The payloads also include an NBOSI temperature/salinity sensor to be used in future oceanographic sensing missions. All data logging, signal processing, and MOOS-IvP autonomy is performed in real-time onboard the secondary AUV. All behavior configurations are tested prior to deployment in a simulation that includes AUV dynamics, to ensure expected behavior.
Data from the DAQ is processed on the computer to estimate range and bearing to the acoustic beacon as well as to identify the waveform. The PPS signal triggers data collection such that the start of each data sequence corresponds to the transmitter firing. The “most likely” waveform is determined by comparing the maximum of the matched-filter output for each possible waveform. Beamforming and matched filtering are then performed based on the most likely waveform and coupled into particle filtering to estimate range r and bearing γ to the acoustic source from the vehicle. This is fused with vehicle heading h to estimate the relative location of the acoustic source, δx, δy.
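For clarity, the final conversion from (r, γ, h) to (δx, δy) can be written as the short sketch below; the angle and frame conventions (radians, with heading and bearing measured in the same rotational sense) are assumptions for illustration.

```python
# Minimal sketch of the final fusion step: converting the filtered range r and
# body-frame bearing gamma into a relative source position (dx, dy) in a local
# level frame using vehicle heading h. Angle and frame conventions are assumed.
import math

def relative_source_position(r, gamma, h):
    """r in meters; gamma = bearing to the source in the vehicle frame (rad);
    h = vehicle heading in the local frame (rad)."""
    angle = h + gamma              # bearing to the source in the local frame
    return r * math.cos(angle), r * math.sin(angle)
```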
In another currently preferred embodiment, a system comprising two primary acoustic sources improves the self-localization accuracy of secondary vehicles to within one meter. In systems utilizing multiple primary vehicles, each primary transmits a different waveform such that each primary's transmissions can be received by a secondary vehicle and properly timed and interpreted. The inventive localization system, described in detail herein as applied to a single primary vehicle, is also applicable to embodiments with multiple beacons. In multiple-acoustic-source embodiments, the controller of the secondary vehicle further determines the relative location to all primaries and the likelihood of errors, and determines the best location fit in light of all primary vehicles.
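One hypothetical way such a best fit could be formed is to weight each single-primary relative fix by an associated error estimate (for example, the spread of its particle set) and take the weighted average. The sketch below is offered only as an illustrative assumption, not as the specified algorithm.

```python
# Hedged sketch: combine relative fixes from multiple primaries into one
# best-fit estimate via inverse-variance weighting. The weighting scheme is an
# assumption, not the specified algorithm.
import numpy as np

def best_fit(estimates, variances):
    """
    estimates : list of (dx, dy) relative fixes, one per primary
    variances : list of scalar error estimates (e.g. particle spread) per fix
    """
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    return tuple(np.average(est, axis=0, weights=w))
```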
The piOWTT system with multiple primary vehicles is advantageous over currently available LBL systems in that it requires as few as two acoustic sources (where LBL systems require an array of four sources), and it enables real-time, on-board processing, allowing for accurate, constantly updated location information on the secondary vehicle.
Although specific features of the present invention are shown in some drawings and not in others, this is for convenience only, as each feature may be combined with any or all of the other features in accordance with the invention. While there have been shown, described, and pointed out fundamental novel features of the invention as applied to a preferred embodiment thereof, it will be understood that various omissions, substitutions, and changes in the form and details of the devices illustrated, and in their operation, may be made by those skilled in the art without departing from the spirit and scope of the invention. For example, it is expressly intended that all combinations of those elements and/or steps that perform substantially the same function, in substantially the same way, to achieve the same results be within the scope of the invention. Substitutions of elements from one described embodiment to another are also fully intended and contemplated. It is also to be understood that the drawings are not necessarily drawn to scale, but that they are merely conceptual in nature.
It is to be understood that the foregoing embodiments are provided as illustrative only, and do not limit or define the scope of the invention. Various other embodiments, including but not limited to the following, are also within the scope of the claims. For example, elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions. Any of the functions disclosed herein may be implemented using means for performing those functions. Such means include, but are not limited to, any of the components disclosed herein, such as the computer-related components described below.
The techniques described above may be implemented, for example, in hardware, one or more computer programs tangibly stored on one or more computer-readable media, firmware, or any combination thereof. The techniques described above may be implemented in one or more computer programs executing on, or executable by, a programmable computer including any combination of any number of the following: a processor, a storage medium readable and/or writable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), an input device, and an output device. The input device and/or the output device form a user interface in some embodiments. Program code may be applied to input entered using the input device to perform the functions described and to generate output using the output device.
Embodiments of the present invention include features which are only possible and/or feasible to implement with the use of one or more computers, computer processors, and/or other elements of a computer system. Such features are either impossible or impractical to implement mentally and/or manually. For example, embodiments of the present invention automatically (i) maintain time-synchronization with the primary system, (ii) develop a range estimate signal from measurements of received signals from the at least two receivers and (iii) develop an azimuth-inclination estimation of likeliest angle-of-arrival of the primary signals, wherein the controller utilizes a plurality of coordinate frames to provide an estimate of secondary system location. Such features can only be performed by computers and other machines and cannot be performed manually or mentally by humans.
Any claims herein which affirmatively require a computer, a processor, a memory, or similar computer-related elements, are intended to require such elements, and should not be interpreted as if such elements are not present in or required by such claims. Such claims are not intended, and should not be interpreted, to cover methods and/or systems which lack the recited computer-related elements. For example, any method claim herein which recites that the claimed method is performed by a computer, a controller, a processor, a memory, and/or similar computer-related element, is intended to, and should only be interpreted to, encompass methods which are performed by the recited computer-related element(s). Such a method claim should not be interpreted, for example, to encompass a method that is performed mentally or by hand (e.g., using pencil and paper). Similarly, any product claim herein which recites that the claimed product includes a computer, a processor, a memory, and/or similar computer-related element, is intended to, and should only be interpreted to, encompass products which include the recited computer-related element(s). Such a product claim should not be interpreted, for example, to encompass a product that does not include the recited computer-related element(s).
Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language. The programming language may, for example, be a compiled or interpreted programming language.
Each such computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor. Method steps of the invention may be performed by one or more computer processors executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, the processor receives (reads) instructions and data from a memory (such as a read-only memory and/or a random access memory) and writes (stores) instructions and data to the memory. Storage devices suitable for tangibly embodying computer program instructions and data include, for example, all forms of non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROMs. Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays).
A computer can generally also receive (read) programs and data from, and write (store) programs and data to, a non-transitory computer-readable storage medium such as an internal disk (not shown) or a removable disk or flash memory. These elements will also be found in a conventional desktop or workstation computer as well as other computers suitable for executing computer programs implementing the methods described herein, which may be used in conjunction with any digital print engine or marking engine, display monitor, or other raster output device capable of producing color or gray scale pixels on paper, film, display screen, or other output medium or other type of user interface. Any data disclosed herein may be implemented, for example, in one or more data structures tangibly stored on a non-transitory computer-readable medium. Embodiments of the invention may store such data in such data structure(s) and read such data from such data structure(s).
It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto. Other embodiments will occur to those skilled in the art and are within the following claims.
This application claims priority to U.S. Provisional Application No. 62/612,520, filed Dec. 31, 2017, the contents of which are hereby incorporated by reference as if set forth herein in their entirety.
This invention was made with U.S. Government support under N00014-16-1-2081 awarded by the Office of Naval Research. The U.S. Government has certain rights in this invention.