Method of recording sonar data

Information

  • Patent Application
  • 20210141072
  • Publication Number
    20210141072
  • Date Filed
    January 28, 2020
  • Date Published
    May 13, 2021
Abstract
A sonar system comprising a sonar transmitter, a very large two dimensional array sonar receiver, and a beamformer section transmits a series of sonar pings into an insonified volume of fluid at a rate greater than 5 pings per second, receives sonar signals reflected and scattered from objects in the insonified volume, and beamforms the reflected signals to provide a video presentation and/or to store the beamformed data for later use. The parameters controlling the sonar system are changed so that the beamformer section treats the data from the receiver section with more than one set of parameters per ping and/or per neighboring pings. The stream of data is treated either in parallel or in series by different beamforming methods, so that at least one beam from the beamformer has more than one value.
Description
FIELD OF THE INVENTION

The field of the invention is the field of generating and receiving sonar pulses and of visualizing and/or using data from sonar signals scattered from objects immersed in a fluid.


OBJECTS OF THE INVENTION

It is an object of the invention to improve visualization using sonar imaging. It is an object of the invention to measure and record the positions, orientations, and images of submerged objects. It is an object of the invention to improve the resolution of sonar images. It is an object of the invention to present sonar video images at increased video rates. It is an object of the invention to rapidly change the sonar image resolution between at least 2 pings of a series of pings. It is an object of the invention to rapidly change the direction of the field of view of sonar images between at least 2 pings of a series of pings.


SUMMARY OF THE INVENTION

A series of sonar pings is sent into an insonified volume of water and reflected or scattered from submerged object(s) in the insonified volume of water. One or more large sonar receiver arrays of sonar detectors are used to produce and analyze sonar data, producing 3 dimensional sonar data describing the submerged object(s) for each ping. One or more parameters controlling the sonar imaging system are changed between pings and/or within a single ping in the series of pings. The resulting changed data are stored and/or combined together to produce an enhanced video presentation of the submerged objects at an enhanced video frame rate of at least 5 frames per second. More than one of the parameters used to control the sonar imaging system is used to produce different 3D images from the same ping in a time less than the time between two pings.







DETAILED DESCRIPTION OF THE INVENTION

It has long been known that data presented in visual form is much better understood by humans than data presented in the form of tables, charts, text, etc. However, even data presented visually as bar graphs, line graphs, maps, or topographic maps requires experience and training to interpret. Humans can, however, immediately recognize and understand patterns in visual images which would be difficult for even the best and fastest computers to pick out. Much effort has thus been spent in turning data into images.


In particular, images which are generated from data which are not related to light are often difficult to produce and often require skill to interpret. One such type of data is sonar data, wherein a sonar signal pulse is sent out from a sonar generator into a volume of sea water or fresh water of a lake or river, and reflected sound energy from objects in the insonified volume is measured by a sonar receiver.


The field of underwater sonar imaging is different from the fields of medical ultrasonic imaging and imaging of underground rock formations because there are far fewer sonar reflecting surfaces in the underwater insonified volume. Persons skilled in the medical and geological arts would not normally follow the art of sonar imaging of such sparse targets. The system of the invention has vessel apparatus of the invention on the surface of a body of water, which we will call part of a sea. The water rests on a seabed. It is understood that any fluid that supports sound waves may be investigated by the methods of the present invention. The apparatus generally comprises a sonar ping transmitter (or generator) and a sonar receiver, but the sonar transmitter and receiver may be separated for special operations. Various sections of the apparatus are each controlled by controllers which determine parameters required for optimum operation of the entire system. In the present specification, a parameter is a specific value to be used which can be changed rapidly between pings. The parameters may be grouped in sets, and the set can be switched, either by hand or automatically according to a criterion. The decision to switch parameters may be made by an operator or made automatically based on information gained from prior pings sent out by the sonar transmitter or from the current ping. The sonar transmitter sends out pulses of sound waves which propagate into the water in an approximately cone shaped beam. The pulses strike objects in the water such as stones on the seabed, an underwater vessel, a swimming diver, or a sea wall. The underwater vessel may either be manned or be a remotely operated vehicle (ROV). Objects underwater that have a different density than the sea water reflect the pulses as generally expanding waves back toward the apparatus of the invention.


The term “insonified volume” is known to one of skill in the art and is defined herein as a volume of fluid through which sound waves are directed. In the present invention, the sonar signal pulse of sound waves is called and defined herein as a ping, which is sent out from one or more sonar ping generators or transmitters, each of which insonifies a roughly conical volume of fluid. A sonar ping generator is controlled by a ping generator controller according to a set of ping generator parameters. Ping generator parameters comprise ping sonar frequency, ping sonar frequency variation during the ping pulse, ping rate, ping pulse length, ping power, ping energy, ping direction with respect to a ping generator axis, and 2 ping angles which determine a field of view of the objects. A ping generator of the invention preferably has a fixed surface of material which is part of a sphere, but may be shaped differently. A ping generator cross section has piezoelectric elements sandwiched between electrically conducting materials. The piezoelectric elements have electrically insulating material separating each element from the other elements. One electrically conducting material is preferably a solid sheet of material which is grounded and is in contact with the seawater; it is thin enough that ultrasonic pressure waves can easily pass through it, but thick enough that water does not leak through it and get into the interior of the ping generator. The other end of each of the piezoelectric elements is energized by applying an ultrasonic frequency voltage to electrodes which are separated electrically from each other and which energize groups of piezoelectric elements to vibrate with the same phase and frequency. Energizing different segments changes the beam pattern of the outgoing sonar waves. The full beam has a divergence of 50 degrees and the restricted beam has a divergence of 25 degrees.
By energizing appropriate combinations of electrodes, the beam may be sent out up, down, left, or right.


Ping generators of the prior art could send out a series of pings with a constant ping frequency during the ping. Ping frequencies varying in time during the sent out ping are also known in the prior art. Changing the ping frequency pattern, duration, power, direction, and other ping parameters rapidly and/or automatically between pings in a series has not heretofore been proposed. One method of the invention anticipates that the system itself analyzes the results from a prior ping automatically to determine the system parameters needed for the next ping, and sends the commands to the various system controllers in time to change the parameters for the next ping. When operating in a wide angle mode at a particular angle and range, for example, a new object anywhere in the field of view can signal the system controllers to send the next outgoing ping in the direction of the object, decrease the field of view around the new object, increase the number of pings per second according to a criterion based on the distance to the object, set the ping power to optimize conditions for the range of the object, etc. Most preferably, the system can be set to automatically change any or all system parameters to optimize the system either for anticipated changes or in reaction to unanticipated changes in the environment.


In a particularly preferred embodiment, the controller system may be set to change the sent out frequency alternately between a higher and a lower frequency from ping to ping. The resulting images alternate between a higher resolution and smaller field of view for the higher frequency, and a lower resolution and a larger field of view for the lower frequency. The alternate images may then be stitched after the receiver stage to provide a video stream at half the frame rate available with unchanged parameters, but with higher central resolution and a wider field of view, or at the same frame rate by stitching neighboring images.
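The ping-to-ping alternation described above can be sketched as a simple parameter scheduler. The parameter names and the frequency and field-of-view values below are illustrative assumptions, not values taken from the specification:

```python
# Hypothetical sketch of ping-to-ping parameter switching: the controller
# alternates between a high-frequency (narrow, higher-resolution) set and a
# low-frequency (wide, longer-range) set on successive pings.
# All names and numbers here are illustrative assumptions.

HIGH_FREQ = {"freq_hz": 630_000, "field_of_view_deg": 25}
LOW_FREQ = {"freq_hz": 315_000, "field_of_view_deg": 50}

def parameter_set_for_ping(ping_index):
    """Return the transmit parameter set to use for a given ping number."""
    return HIGH_FREQ if ping_index % 2 == 0 else LOW_FREQ

# Even pings use the narrow high-resolution beam, odd pings the wide beam;
# neighboring frames can later be stitched into one composite image.
schedule = [parameter_set_for_ping(i)["freq_hz"] for i in range(4)]
```

Stitching the alternating frames then trades frame rate against combined resolution and field of view, as described above.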


The sonar receiver of the invention is a large array of pressure measuring elements. The sonar receiver is controlled by a sonar receiver controller according to a set of sonar receiver parameters. The array is preferably arranged as a planar array because it is simpler to construct, but may be shaped in any convenient form, such as a concave or convex spherical form, for different applications. The array preferably has 24 times 24 sonar detecting elements, more preferably 48 times 48 elements, even more preferably 64 times 64 elements, or most preferably 128 times 128 elements. A square array of elements is preferred, but the array may be a rectangular array or a hexagonal array or any other convenient shape. The detector elements are generally constructed by sandwiching a piezoelectric material between two electrically conducting materials as shown for the sonar transmitter, but with an electrical connection to each element in the array. When a reflected sonar ping reaches a sonar detecting element, the element is compressed and decompressed at the sonar ping frequency, and produces a nanovolt analog signal between the electrically conducting materials. The nanovolt signals are amplified and digitally sampled at a sonar receiver sampling rate controlled by the sonar receiver controller, and the resulting digital signal is compared to a signal related to the sent out ping signals to measure the phase and amplitude of the incoming sonar signals for each receiver element. The amplification or gain for the incoming sonar signals is controlled by the sonar receiver controller. If the sonar ping frequency is changed rapidly between pings, the sampling rate may also be changed to reflect the changed ping frequency. The incoming sonar ping is divided into consecutive slices of time, where the slice time is related to the slice length by the speed of sound in the water. A slice time parameter is set by the sonar receiver controller. 
For example, pings arriving from more distant objects can have wider slices than ping reflections from closer objects. Each slice contains a number of sonar wavelengths as the pulse travels through the water. The sonar receiver preferably has sonar receiver parameters, controlled by the sonar receiver controller, such as programmable phase delays between the detector elements; the digital sampling times may be varied to achieve the same result. The sonar receiver may have parameters controlled by the sonar receiver controller which can be set to change the amplification or gain of the nanovolt electrical signals during the incoming reflected ping signals. Prior art time varying gain (TVG) systems have used preplanned amplification ramps to correct for attenuation in the water column. This gain is applied based on range (distance from the transmitter), but the gain profile does not change from ping to ping. Generally, the attenuation of the ultrasonic waves is higher for higher ping frequencies. The prior art changed the amplification factor by a preplanned schedule to even out the signals between the received first slice and the last slice of a ping. Prior TVG did not allow for the increased absorption by soft mud on the seafloor, for example. Since mud absorbs sound waves, the reflected sound waves are less intense as soon as the reflected slice reaches the mud. The TVG is changed on the next ping to boost the signals that are reflected or scattered by the mud. In the same way, the TVG is changed to boost or reduce the gain for slices that more strongly reflect or scatter, as from a hard, highly reflecting object like a sea wall.
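As a rough illustration, a per-slice TVG profile that can be rebuilt between pings might ramp gain with range and add a boost beyond the slice where a prior ping revealed soft mud. The function name, gain constants, and boost value below are assumptions for illustration only:

```python
# Illustrative time-varying gain (TVG) sketch: a per-slice gain profile in dB
# that can be recomputed between pings. A simple attenuation-compensating ramp
# is boosted from a given slice index onward (e.g. where a prior ping showed
# absorbing mud). All names and constants are illustrative assumptions.

def tvg_profile(num_slices, base_gain_db_per_slice=0.5,
                boost_from_slice=None, boost_db=6.0):
    """Return the gain in dB for each range slice of the incoming ping."""
    profile = []
    for s in range(num_slices):
        gain = s * base_gain_db_per_slice   # ramp compensating water attenuation
        if boost_from_slice is not None and s >= boost_from_slice:
            gain += boost_db                # extra gain past the mud line
        profile.append(gain)
    return profile

flat = tvg_profile(8)                       # prior-art style fixed ramp
mud = tvg_profile(8, boost_from_slice=4)    # next-ping profile boosting mud returns
```

The same mechanism can subtract gain for slices dominated by a strong reflector such as a sea wall.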


A phase and amplitude of the pressure wave arriving at the sonar receiver is preferably assigned to each detector element for each incoming slice, and a phase map may be generated for each incoming slice. A phase map is like a topographical map showing lines of equal phase on the surface of the detector array. Applying additional gain control can be incorporated with phase filtering.
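One conventional way to obtain a per-element phase and amplitude is quadrature correlation of the element's samples against the transmitted ping frequency; repeating this over the array yields the phase map. The sketch below is a minimal single-element version with an assumed sampling scheme, not the specification's actual receiver circuitry:

```python
# Sketch of per-element phase/amplitude extraction by quadrature correlation
# against a reference at the ping frequency. Names and the sampling scheme
# are illustrative assumptions.
import math

def phase_and_amplitude(samples, freq_hz, sample_rate_hz):
    """Correlate one element's samples with the reference ping frequency."""
    i_sum = q_sum = 0.0
    for n, v in enumerate(samples):
        t = n / sample_rate_hz
        i_sum += v * math.cos(2 * math.pi * freq_hz * t)  # in-phase component
        q_sum += v * math.sin(2 * math.pi * freq_hz * t)  # quadrature component
    amp = 2 * math.hypot(i_sum, q_sum) / len(samples)
    phase = math.atan2(q_sum, i_sum)
    return phase, amp
```

Collecting the phase over all elements of the array for one slice gives the topographical-style phase map described above.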


Phase map and data cleanup and noise reduction may optionally be done in the sonar receiver or in a beamformer section. The phase map and/or the digital stream of data from the detectors is passed to the beamformer section, where the data are analyzed to determine the ranges and characteristics of the objects in the insonified volume.


The range of the object is determined by the speed of sound in the water and the time between the outgoing ping and the reflected ping received at the receiver. The data are most preferably investigated using a spherical coordinate system with its origin at the center of the detector array, a range variable, and two angle variables defined with respect to the normal to the detector array surface. The beamformer section is controlled by a beamformer controller using a set of beamformer parameters. The space that the receiver considers is divided into a series of volume elements radiating from the detector array, called beams. The center of each volume element of a beam has the same two angular coordinates, and each volume element may have the same thickness as a slice. The beam volume elements may also preferably have a thickness proportional to their range from the detector, or any other characteristic chosen by the beamformer controller. The range resolution is given by the slice thickness.
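The relation between slice timing and range can be made concrete with a short sketch. The nominal sound speed and function names are assumptions, not values from the specification:

```python
# Minimal sketch relating two-way travel time to range and to the slice
# index of an echo. The sound speed and names are illustrative assumptions.

SPEED_OF_SOUND = 1500.0  # m/s, a nominal seawater value

def slice_range(round_trip_seconds):
    """Range of a reflector from the two-way (out-and-back) travel time."""
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

def slice_index(round_trip_seconds, slice_time_seconds):
    """Which consecutive time slice a given echo falls into."""
    return int(round_trip_seconds // slice_time_seconds)
```

Each beam is then addressed by its two angular coordinates plus this range, giving the spherical coordinate system described above.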


The beamformer controller controls the volume of space “seen” by the detector array and used to collect data. For example, if the sonar transmitter sends out a narrow or a broad beam, or changes the direction of the sent out beam, the beamformer may also change the system to only look at the insonified volume. Thus, the system of the invention preferably changes two or more of the system parameters between pings to improve the results. Some of the parameters controlled by the beamformer controller are:

    • Field-of-view
    • Minimum and maximum beamformed ranges
    • Beam detection mode (such as first above threshold (FAT), maximum amplitude (MAX), or many other modes known in the art)
    • Range resolution
    • Minimum signal level included in image
    • Image dynamic range
    • Array weighting function (used to modify the beamforming profile)
    • Applying additional gain post beamforming (this can be incorporated with Thresholding).


The incoming digital data stream from each sonar detector of the receiver array has typically been multiplied by a TVG function. A triangular weighting function ensures that the edges of the slices have little intensity, to reduce digital noise in the signal. The TVG signal is set to zero to remove data that is collected from too near to and too far away from the detector, and to increase or decrease the signal depending on the situation.


In the prior art, the data have been filtered according to a criterion, and just one volume element for each beam was selected to have a value. For example, if the data were treated to accept the first signal in a beam arriving at the detector with an amplitude above a defined threshold (FAT), the three dimensional point cloud used to generate an image for the ping would be much different from a point cloud generated by picking the maximum signal (MAX). In the FAT case, the image would be, for example, of fish swimming through the insonified volume, while the image in the MAX case would be the image of the sea bottom. In the prior art, only one range in each beam would show at most one value or point, and all the other ranges of a single beam would be assigned a zero.
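The FAT and MAX detection modes applied to the same beam can be sketched as follows; the function names and the sample amplitudes are illustrative assumptions:

```python
# Sketch of two beam-detection modes applied to one beam's per-slice
# amplitudes: first-above-threshold (FAT) keeps the first echo exceeding a
# threshold (e.g. a fish in mid-water), while MAX keeps the strongest echo
# (e.g. the sea bottom). Names and values are illustrative assumptions.

def fat_pick(amplitudes, threshold):
    """Index of the first slice whose amplitude exceeds threshold, or None."""
    for i, a in enumerate(amplitudes):
        if a > threshold:
            return i
    return None

def max_pick(amplitudes):
    """Index of the strongest slice in the beam."""
    return max(range(len(amplitudes)), key=lambda i: amplitudes[i])

beam = [0.0, 0.3, 0.1, 0.9, 0.2]  # amplitude per range slice (illustrative)
fat = fat_pick(beam, 0.25)         # nearer, weaker echo
strongest = max_pick(beam)         # dominant bottom return
```

Running both picks on the same ping yields two different point clouds from one set of raw data, which is exactly the multi-valued-beam idea developed below.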


In the present invention, the data stream is analyzed by completing two or more beamformer processing procedures in the time between two pings, either in parallel or in series. In a video presentation, the prior art showed a time series of 3D images to introduce a fourth dimension, time, into the presentation of data. By introducing values into more than one volume element per ping, we introduce a 5th dimension into the presentation. We can “see” behind objects, for example, and “through” objects and “around” objects to get much more information. We can use various data treatments to improve the video image stream. In the same way, other ways of analyzing the data stream can be used to provide cleaner images, higher resolution images, expanded range images, etc. These different imaging tasks can be performed on a single ping. The different images may be combined into a single image in a video presentation, or presented in more than one video at a frame rate the same as the ping rate.


If we are surveying a sea wall, we beamform the data before the wall differently from the data at the wall: the sea bottom is oblique to the beams (low backscatter) and soft (low intensity signals returned), while the harbour wall is orthogonal to the beams (high backscatter) and hard (high intensity signals returned). If we know where a sea wall is from a chart, the beamformer can use GPS or camera data to work out which ranges are before the wall and which are after, and change the TVG in the middle of the returned ping.


If we know the sea depth, we can specify two planes, a SeaSurfacePlane and a SeaBottomPlane; only data between the planes will be processed and sent from the sonar head to the top end.
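This depth gating can be sketched as a simple filter on beamformed points; the plane depths and names below are illustrative assumptions:

```python
# Sketch of gating between two horizontal planes: only points whose depth
# lies between an assumed SeaSurfacePlane and SeaBottomPlane are kept and
# passed up from the sonar head. Plane depths are illustrative assumptions.

SEA_SURFACE_DEPTH = 0.0   # metres
SEA_BOTTOM_DEPTH = 30.0   # metres, e.g. taken from a chart

def gate_points(points):
    """Keep only (x, y, depth) points lying between the two planes."""
    return [p for p in points
            if SEA_SURFACE_DEPTH <= p[2] <= SEA_BOTTOM_DEPTH]

cloud = [(1.0, 2.0, -0.5), (0.0, 0.0, 12.0), (3.0, 1.0, 45.0)]
kept = gate_points(cloud)  # only the mid-water point survives
```

Discarding out-of-gate points at the head reduces the data volume sent to the top end, in line with the transmission limits discussed below.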


A large amount of the data generated per second by prior art sonar systems has traditionally been discarded because of data transmission and/or storage limits. The present invention allows a higher percentage of the original data generated to be stored for later analysis. The present invention makes use of a prior invention (U.S. application Ser. No. 15/908,395, filed Feb. 28, 2018) assigned to the assignee of the present invention. In that invention, the raw data is digitized not by analogue to digital circuitry, but by comparator technology, which drastically reduces the equipment cost for the large sonar arrays used. The amount of raw data sent from the receiver to the beamformer is drastically reduced, allowing the beamformer to produce more than a single beamformer parameter set of data per ping.


The method of the invention starts the process of sending out a ping by setting all system parameters for all system controllers. Either all parameters are the same as for the last ping, or they are changed automatically by signals from stages of the previous ping. Commands are sent to the ping transmitter, which sends data to the receiver controller to set parameters for the receiver and start the receiver. The receiver receives analog signals, samples the voltages from each element, and transmits data to the beamformer controller, which sends data and instructions to the beamformer section.


The beamformer analyzes the data and decides whether the next ping should change settings, and if so sends signals to the appropriate controller to change the settings for the next ping. The beamformer also decides, either on the basis of incoming ping data or on previous instructions, whether to perform a single type or multiple types of analysis of the incoming ping data. For example, the beamformer could analyze the data using both the FAT and MAX analyses, and present both images either separately or combined, so that there will be some beams having more than one value per beam. The reduced data is stored, or raw data or image data is sent for further processing into a video presentation at a rate greater than a preferred rate of 5 frames per second. A frame rate of 10 frames per second is more preferred, and a frame rate of 20 frames per second is most preferred.


Sonar imaging started with scanning an object to produce a sequence of images which could be used to produce a three dimensional (3D) shape. The limitations of this approach are the inability to see any moving objects and the dependency on a stable platform to perform the imaging. The Echoscope® was the world's first 3D sonar system that allowed moving objects in the water column to be viewed in real time, making it the first truly four-dimensional sonar system. 4D volumetric images represent a true volume of spatial data collected and processed at the same instant. Sequential 4D volumetric images represent a time sequence of the scene showing moving objects within the volumetric image.


The Echoscope's® video quality imaging has continued to lead the field for over two decades. Coda Octopus are now achieving another world first: bringing to the market the world's first 5D and 6D Sonars.


5D images are 4D images with multiple slices of depth data, similar to a medical CT scan. The 5D images contain more depth information, detail, and resolution of each target, and sequential 5D images over time show higher resolution moving targets.


The 6D Parallel Intelligent Processing Engine (PIPE) allows multiple parallel 5D images to be generated with different imaging and sonar parameters. This allows different processing to be performed on the raw sonar data in parallel to extract more specific results without compromise.


The original Echoscope® system, first released in 2004, revolutionised 3D sonar by simultaneously beamforming a grid of over 16,000 beams, allowing a full 3D depth image to be generated in under 1/10th of a second. This rapid processing allowed the system to deliver the Echoscope's trademark real-time 3D output, generating video quality, 3D views of moving objects in the water column. The ability to present these 3D maps in real time means that the existing Echoscope is already a 4D system, and it is this fourth dimension of time that continues to differentiate the Echoscope from its competitors.


Coda Octopus have continued to push the technological boundaries, and are releasing a series of new 5G and 6G sonars that are set to dramatically extend the capability of the Echoscope®. At the heart of this new system is a state-of-the-art processor that allows the sonar data to be handled orders of magnitude faster, and with much greater flexibility, or stored for off-line processing. The biggest change facilitated by this processor is the ability to beamform the entire duration of each sonar ping to give full time series (FTS) data on all beams. Rather than just returning at most a single one dimensional range point for each beam (to create an image with a maximum of 16,384 points), the new system returns a fully populated volume of over 1.6 million beamformed data points, while still operating at over 20 pings per second!


The ability to return multiple data points on every beam takes the data to the next generation and presents a wealth of new opportunities for analysing the sonar data. The biggest initial advantage is that the 5G system generates much fuller, and more detailed images when the points are rendered in 3D, as the beamformer can potentially see around smaller objects in the near-field. The system also returns multiple range points for beams striking flat surfaces at high incidence angles, meaning that the seafloor is much better resolved in the far field of the volume image.


The state-of-the-art processor has also allowed the sensitivity of the beamformer to be increased, as its floating-point operation allows for a much greater dynamic range in the data. This is a significant advantage in many acoustically challenging applications and environments. The combination of having multiple range points recorded for each beam and the increase in sensitivity means that the far-field can be much more clearly and densely resolved in the output images.


The major increase in the quality and volume of data generated by the 5G system means that new types of data processing are possible, and new, useful information can be extracted. The challenge with large datasets, however, is that they can be slow and cumbersome to analyse. To combat this, Coda Octopus have developed PIPE®: the Parallel Intelligent Processing Engine. This tool adopts novel parallel processing methods to perform multiple, simultaneous analyses of the large 5G dataset, delivering a range of useful outputs in real time. This ability to produce multiple, concurrent 5G datasets takes the new system to its sixth data generation (6G).


The development of PIPE® is not just restricted to the data processing side of the system, with hardware updates also being implemented to maximise the functionality of this new tool. Different 5G data outputs might require different signals to be transmitted from the sonar, or might need different signal amplification and filtering operations to be applied. For example, one task might need a high-resolution, narrow field of view, while another could require a low-frequency, long range signal with a wide field of view. PIPE allows these different 5G datasets to be processed concurrently by switching between many different sets of sonar operating parameters, with this switching occurring from ping to ping at 20 Hz. It is possible, for example, to generate four completely different 5G sonar images separated by less than 0.05 sec, with the composite 6G image being fully updated 5 times per second.
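The arithmetic of this cycling can be sketched as a round-robin over parameter sets; the set names below are illustrative assumptions, while the 20 Hz ping rate and four-set example come from the text above:

```python
# Sketch of round-robin parameter-set cycling: with four hypothetical
# parameter sets cycled at 20 pings per second, each set recurs every
# fourth ping, so the composite image refreshes at 5 Hz, as described
# in the text. Set names are illustrative assumptions.

PARAM_SETS = ["high_res_narrow", "low_freq_wide",
              "forward_navigation", "downward_inspection"]
PING_RATE_HZ = 20.0

def set_for_ping(ping_index):
    """Parameter set used for a given ping in the round-robin cycle."""
    return PARAM_SETS[ping_index % len(PARAM_SETS)]

composite_rate_hz = PING_RATE_HZ / len(PARAM_SETS)  # full composite refresh rate
```

Each of the four interleaved image streams is thus separated by 1/20 s from its neighbours while its own stream updates at the composite rate.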


To understand the full potential of this new technology, consider a pipe inspection operation being conducted with an ROV. The ROV pilot requires a longer range, forward looking view to allow both navigation and obstacle avoidance. There could then be an engineer inspecting the condition of the pipe itself, who requires a high resolution, downward looking image to be able to detect damage or corrosion on the pipe. The 6G PIPE system is capable of generating both these images simultaneously in real time, meaning that the engineers are able to make instant decisions, such as whether to slow down to inspect a particular section of pipe in more detail.


Since the raw data from the survey is also being stored, it is possible to go back through the data in post-processing and apply different image processing methods to highlight different information. This does not provide quite the same flexibility as the real-time 6G processing, as the transmit and receive parameters are fixed. There is still significant value, however, in having access to the measured raw data rather than a processed image that has already removed a large proportion of the original information.


In the case study presented above, all the different presentations of the data were being viewed by human analysts, but this doesn't have to be the case: the new 5D/6D data makes the latest generation Echoscope very well suited to deployment on a fully autonomous vehicle. As an example, the system could be operated to simultaneously provide a far-field obstacle avoidance view, and a high-resolution seabed view for detailed autonomous navigation. The raw data could then be stored for subsequent human post-processing and analysis once the AUV is returned to the surface.


The Echoscope 5G/6G system is the sonar for the information age. It uses the very latest hardware and software to open up a range of new possibilities for visualising and analysing the underwater environment. The 5G/6G system is also ideally placed to satisfy the future needs of the growing fleet of autonomous vessels in the world's oceans, lakes and rivers. It therefore looks likely that the new generation of Coda Octopus 5G/6G Echoscopes will continue to lead the field, as their 4G predecessors have done before them.


Obviously, many modifications and variations of the present invention are possible in light of the above teachings. It is therefore to be understood that, within the scope of the appended claims, the invention may be practiced otherwise than as specifically described.

Claims
  • 1. A method of recording sonar data measured by a sonar system having sonar system parameters, comprising:
    a) transmitting a first set of sonar pings into a first volume of sonar signal transmitting material, the sonar pings transmitted from a sonar ping transmitting device, wherein the sonar ping transmitting device is controlled by a first set of ping transmitting parameters chosen from a predetermined list of ping transmitting parameters;
    b) receiving sonar acoustical signals reflected or scattered from objects in the first volume of sonar signal transmitting material, wherein the received acoustical sonar signals are received by a sonar receiving device array controlled by a first set of receiver parameters selected from a predetermined list of receiver parameters to convert the received sonar acoustical signals into digital data signals which are transmitted to a sonar beamforming device and/or a digital processing device for further processing of the digital data signals;
    c) beamforming and/or further processing the received sonar signals, wherein the step of sonar beamforming is performed by a sonar beamforming device controlled by a first set of beamforming parameters chosen from a predetermined list of beamforming parameters, and wherein the digital processing device for further processing is controlled by parameters chosen from a predetermined list of processing device parameters; then
    d) changing at least one sonar system parameter to provide at least two significantly different three dimensional (3D) sonar data sets for each ping of the first set of pings, the at least two 3D sonar data sets describing the objects reflecting or scattering the transmitted sonar signals, wherein a sonar system parameter is defined as any parameter chosen from the predetermined lists of ping transmitting parameters, receiver parameters, beamforming parameters, and processing device parameters.
  • 2. The method of claim 1, wherein the at least two significantly different sonar data sets are used to provide at least two significantly different beamformed sonar images.
  • 3. The method of claim 2, wherein step d) further comprises changing at least one of the sonar beamforming device parameters for each ping of the first set of pings during the step of beamforming.
  • 4. The method of claim 1, wherein step d) further comprises: changing at least one of the sonar receiving device parameters during the step of receiving for each ping of the first set of pings.
  • 5. The method of claim 4, wherein two different gain profiles are used to provide two significantly different data sets for each ping of the first set of pings.
  • 6. The method of claim 1, wherein the sonar system parameters are set to provide sonar data to reconstruct a consolidated image from the requested eyepoints of more than one user.
  • 7. The method of claim 1, wherein the image resolution is changed during the sonar signal receiving step from higher resolution for the first arriving ping reflection to lower resolution for the later arriving ping reflections.
  • 8. The method of claim 1, wherein features identified in two or more data sets for each ping are matched with corresponding features identified in at least one further data set to provide a mosaic database.
  • 9. The method of claim 8, wherein the at least one further data set for a ping is produced from preceding and/or succeeding pings, and wherein the sonar system parameter changed is a digital processing device parameter, wherein the digital processing device parameter changed is a sidelobe clipping parameter and/or a thresholding parameter.
  • 10. The method of claim 9, wherein the at least one digital processing device parameter changed is chosen to reduce unwanted acoustic artefacts.
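The sidelobe clipping and thresholding parameters of claims 9-10 can be illustrated with a toy example. This sketch is not part of the claims; the dB-below-peak threshold convention and all numeric values are assumptions made for the example.

```python
# Illustrative sketch only (not part of the claims): the same beamformed
# intensity volume processed with two different thresholding parameters,
# yielding two data sets per ping (claims 9-10). Values are invented.
import numpy as np

def threshold_clip(intensity: np.ndarray, floor_db: float) -> np.ndarray:
    """Zero out voxels more than `floor_db` below the per-ping peak intensity."""
    floor = intensity.max() * 10.0 ** (-floor_db / 10.0)
    out = intensity.copy()
    out[out < floor] = 0.0
    return out

volume = np.array([[1.0, 0.05, 0.001],
                   [0.2, 0.5,  0.0005]])       # toy beamformed intensities

loose = threshold_clip(volume, floor_db=40.0)  # keeps weak scatterers
tight = threshold_clip(volume, floor_db=10.0)  # suppresses sidelobe-level returns
```

Monitoring the loosely thresholded data set alongside the tightly thresholded one (as in claim 14) lets an operator confirm that no wanted objects are being clipped away with the acoustic artefacts.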
  • 11. The method of claim 1, wherein one of the at least two significantly different sonar data sets for each ping of the first set of pings is a full time-series 3D volume data set or a partial time-series 3D volume data set.
  • 12. The method of claim 11, wherein two of the at least two significantly different sonar data sets are a full time-series 3D volume data set and a partial time-series 3D volume data set.
  • 13. The method of claim 1, wherein the at least one digital processing device parameter is changed from a first sidelobe filter to a second sidelobe filter.
  • 14. The method of claim 13, wherein the sonar data set produced using the first sidelobe filter is monitored in real-time to ensure no wanted objects are being removed.
  • 16. The method of claim 1, wherein features identified in the two or more data sets are matched with a Simultaneous Localization and Mapping (SLAM) technique.
  • 17. The method of claim 1, wherein an absolutely positioned mosaic is created.
  • 18. The method of claim 17, wherein the two or more data sets for each ping are matched with models.
  • 19. The method of claim 18, wherein the models represent known physical entities.
  • 20. The method of claim 18, wherein the models may be moving and/or updated in real-time.
  • 22. The method of claim 18, wherein the models are volume 3D binned data.
  • 23. The method of claim 1, wherein the two or more data sets for each ping are compared with at least one previously generated data set.
  • 24. The method of claim 23, wherein the previously generated data set may be from a previous survey of the same physical location.
  • 25. The method of claim 24, wherein the comparison is used to determine dredging progress and/or to check for scouring around structures in areas with high underwater currents and/or to check for changes in infrastructure.
  • 26. The method of claim 23, wherein the two or more data sets for each ping are matched with data from previously surveyed areas to determine dredging progress and/or to check for scouring around structures in areas with high underwater currents and/or to check for changes in infrastructure such as quay wall damage and/or explosive devices placed underwater.
  • 27. The method of claim 1, wherein the range of at least one of the at least two significantly different data sets is divided into a number of sections, and the range of zero or one object for each section is recorded and/or shown as a partial time-series.
  • 28. The method of claim 27, wherein at least one of at least two significantly different beamformed sonar images is a custom view.
  • 29. The method of claim 28, wherein the custom view is a cross section or a plan view.
  • 30. The method of claim 1, wherein the first set of sonar pings forms part of a series of sets of sonar pings transmitted at a rate of at least 5 pings per second.
  • 31. The method of claim 30, wherein the at least two significantly different sonar data sets representative of the first set of sonar pings are displayed as at least one video stream at a frame rate of at least 5 frames per second.
  • 32. The method of claim 30, wherein two sets of sonar data are recorded and/or shown.
  • 33. The method of claim 1, wherein at least two significantly different sonar data sets for each ping of the first set of pings are used to simultaneously provide a far-field obstacle avoidance view and a high-resolution seabed view.
  • 34. The method of claim 33, wherein the far-field obstacle avoidance view and the high-resolution seabed view are used in autonomous navigation.
  • 35. The method of claim 1, wherein at least one of the at least two significantly different sonar data sets for each ping comprises raw data.
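The overall method of claim 1, and the simultaneous far-field/seabed views of claim 33, can be sketched outside the formal claim language as a single ping processed with two parameter sets. This illustrative code is not part of the claims; the `BeamformParams` fields, the stub `beamform` function, and all ranges and resolutions are hypothetical stand-ins for a real beamformer.

```python
# Illustrative sketch only (not part of the claims): the same ping's
# receiver data processed with two parameter sets to yield two
# significantly different 3D data sets -- a coarse long-range obstacle
# avoidance view and a fine short-range seabed view (claims 1, 33).
from dataclasses import dataclass

@dataclass(frozen=True)
class BeamformParams:
    max_range_m: float       # how far out to form beams
    voxel_size_m: float      # output range-bin resolution

def beamform(ping_data: list, p: BeamformParams) -> dict:
    """Stand-in for a real beamformer: returns a stub 3D data set record."""
    n_bins = int(p.max_range_m / p.voxel_size_m)
    return {"params": p, "n_range_bins": n_bins, "data": ping_data[:n_bins]}

ping_data = [0.0] * 4096                     # placeholder receiver samples

far_field = BeamformParams(max_range_m=200.0, voxel_size_m=1.0)   # obstacle avoidance
seabed    = BeamformParams(max_range_m=20.0,  voxel_size_m=0.05)  # high-resolution seabed

# Same ping, two parameter sets, two data sets -- within one ping interval.
views = [beamform(ping_data, p) for p in (far_field, seabed)]
```

At a ping rate above 5 pings per second, producing both views within one ping interval would support the claimed enhanced video frame rate and, per claim 34, could feed both an obstacle avoidance display and a seabed survey display for autonomous navigation.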
RELATED PATENTS AND APPLICATIONS

The following US patents and US patent applications are related to the present application: U.S. Pat. No. 6,438,071 issued to Hansen, et al. on Aug. 20, 2002; U.S. Pat. No. 7,466,628 issued to Hansen on Dec. 16, 2008; U.S. Pat. No. 7,489,592 issued Feb. 10, 2009 to Hansen; U.S. Pat. No. 8,059,486 issued to Sloss on Nov. 15, 2011; U.S. Pat. No. 7,898,902 issued to Sloss on Mar. 1, 2011; U.S. Pat. No. 8,854,920 issued to Sloss on Oct. 7, 2014; and U.S. Pat. No. 9,019,795 issued to Sloss on Apr. 28, 2015. U.S. patent applications Ser. Nos. 14/927,748 and 14/927,730 filed on Oct. 30, 2015; 15/978,386 filed on May 14, 2018; 15/908,395 filed on Feb. 28, 2018; 15/953,423 filed on Apr. 14, 2018; 16/693,684 filed on Nov. 11, 2019; 62/931,956 and 62/932,734 filed on Nov. 7, 2019; 16/362,255 filed on Mar. 22, 2019; 62/818,682 filed on Mar. 14, 2019; and 16/727,198 filed on Dec. 26, 2019 are also related to the present application. The above identified patents and patent applications are assigned to the assignee of the present invention and are incorporated herein by reference in their entirety including incorporated material.

Provisional Applications (2)
Number Date Country
62932734 Nov 2019 US
62931956 Nov 2019 US