The field of the invention is the field of generating and receiving sonar pulses and of visualizing and/or using data from sonar signals scattered from objects immersed in a fluid.
It is an object of the invention to improve visualization using sonar imaging. It is an object of the invention to measure and record the positions, orientations, and images of submerged objects. It is an object of the invention to improve the resolution of sonar images. It is an object of the invention to present sonar video images at increased video rates. It is an object of the invention to rapidly change the sonar image resolution between at least 2 pings of a series of pings. It is an object of the invention to rapidly change the direction of the field of view of sonar images between at least 2 pings of a series of pings.
A series of sonar pings is sent into an insonified volume of water and reflected or scattered from submerged object(s) in the insonified volume. One or more large sonar receiver arrays of sonar detectors are used to produce and analyze sonar data and to produce three dimensional images of the submerged object(s) for each ping. One or more parameters controlling the sonar imaging system are changed between pings to change the series of images. The resulting changed images are combined to produce an enhanced video presentation of the submerged objects at an enhanced video frame rate of at least 5 frames per second. More than one of the parameters used to control the sonar imaging system may be used to produce different 3D images from the same ping in less than the time between two pings.
It has long been known that data presented in visual form is much better understood by humans than data presented in the form of tables, charts, text, etc. However, even data presented visually as bar graphs, line graphs, maps, or topographic maps requires experience and training to interpret. Humans can, however, immediately recognize and understand patterns in visual images which would be difficult for even the best and fastest computers to pick out. Much effort has thus been spent on turning data into images.
In particular, images which are generated from data which are not related to light are often difficult to produce and often require skill to interpret. One such type of data is sonar data, wherein a sonar signal pulse is sent out from a sonar generator into a volume of sea water or fresh water of a lake or river, and reflected sound energy from objects in the insonified volume is measured by a sonar receiver.
The field of underwater sonar imaging is different from the fields of medical ultrasonic imaging and imaging of underground rock formations because there are far fewer sonar reflecting surfaces in the underwater insonified volume. Persons skilled in the medical and geological arts would not normally follow the art of sonar imaging of such sparse targets.
The term “insonified volume” is known to one of skill in the art and is defined herein as a volume of fluid through which sound waves are directed. In the present invention, the sonar signal pulse of sound waves is called, and defined herein as, a ping, which is sent out from one or more sonar ping generators or transmitters, each of which insonifies a roughly conical volume of fluid. A sonar ping generator is controlled by a ping generator controller according to a set of ping generator parameters. Ping generator parameters comprise ping sonar frequency, ping sonar frequency variation during the ping pulse, ping rate, ping pulse length, ping power, ping energy, ping direction with respect to a ping generator axis, and 2 ping angles which determine a field of view of the objects. A ping generator preferably has a fixed surface of material 22 which is part of a sphere, but may be shaped differently. Preferred ping generators of the invention are sketched in the accompanying figures.
Ping generators of the prior art could send out a series of pings with a constant ping frequency during each ping. Ping frequencies varying in time during the sent out ping are also known in the prior art. Changing the ping frequency pattern, duration, power, direction, and other ping parameters rapidly and/or automatically between pings in a series has not heretofore been proposed. One method of the invention anticipates that the system itself can automatically analyze the results from a prior ping to determine the system parameters needed for the next ping, and can send the commands to the various system controllers in time to change the parameters for the next ping. When operating in a wide angle mode at a particular angle and range, for example, a new object anywhere in the field of view can signal the system controllers to send the next outgoing ping in the direction of the object, decrease the field of view around the new object, increase the number of pings per second according to a criterion based on the distance to the object, set the ping power to optimize conditions for the range of the object, etc. Most preferably, the system can be set to automatically change any or all system parameters to optimize the system either for anticipated changes or in reaction to unanticipated changes in the environment.
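The adaptive loop just described can be illustrated with a short sketch. This is not the claimed implementation; the names PingParams and next_ping_params, the detection tuple, and the scaling rules are purely illustrative assumptions. The structure shows how a detection in one ping's image can set the steering, field of view, ping rate, and power of the next ping before it is sent.

```python
# Minimal sketch, assuming a detection step elsewhere returns either None
# or an (azimuth_deg, elevation_deg, range_m) tuple for a new object.

from dataclasses import dataclass, replace

SOUND_SPEED = 1500.0  # m/s, nominal speed of sound in sea water


@dataclass
class PingParams:
    frequency_hz: float        # ping sonar frequency
    field_of_view_deg: float   # full cone angle of the insonified volume
    steer_deg: tuple           # ping direction relative to the generator axis
    pings_per_second: float
    power_w: float


def next_ping_params(params: PingParams, new_object) -> PingParams:
    """Derive the next ping's parameters from an object found in this ping."""
    if new_object is None:
        return params  # nothing new: keep the wide-angle search settings
    az, el, rng = new_object
    # The ping rate is limited by the two-way travel time to the object.
    max_rate = SOUND_SPEED / (2.0 * rng)
    return replace(
        params,
        steer_deg=(az, el),                      # steer at the object
        field_of_view_deg=max(params.field_of_view_deg / 4.0, 10.0),
        pings_per_second=min(max_rate, 4.0 * params.pings_per_second),
        power_w=params.power_w * (rng / 50.0) ** 2,  # crude range scaling
    )
```

The key design point, per the text above, is that next_ping_params runs in the interval between two pings, so the commands reach the controllers before the next ping is sent.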
In a particularly preferred embodiment, the controller system may be set to alternate the sent out frequency between a higher and a lower frequency. The resulting images alternate between a higher resolution with a smaller field of view for the higher frequency, and a lower resolution with a larger field of view for the lower frequency. The alternate images may then be stitched together after the receiver stage to provide a video stream either at half the frame rate available with unchanged parameters, but with higher central resolution and a wider field of view, or at the same frame rate by stitching neighboring images.
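A minimal sketch of this alternating scheme is shown below. The frequencies are assumed example values, acquire_ping is a stand-in for the transmit-and-image chain, and stitch is a placeholder for the registration and merging step; the point is that once one image of each frequency exists, the stitched stream can emit one frame per ping.

```python
# Illustrative only: odd pings use a low frequency (wide field of view,
# coarse resolution), even pings a high frequency (narrow field of view,
# fine resolution); each new image is stitched with its latest neighbor.

LOW_HZ, HIGH_HZ = 375_000.0, 750_000.0  # example frequencies only


def stitch(wide_image, narrow_image):
    # Placeholder combination: a real system would register the narrow
    # high-resolution image inside the wide image before merging.
    return {"wide": wide_image, "narrow": narrow_image}


def video_stream(acquire_ping):
    # acquire_ping(freq) is assumed to send one ping at freq and return
    # the resulting 3D image.
    last = {}
    n = 0
    while True:
        freq = LOW_HZ if n % 2 == 0 else HIGH_HZ
        last[freq] = acquire_ping(freq)
        if len(last) == 2:
            # One stitched frame per ping, so the combined stream keeps
            # the full ping rate described in the text.
            yield stitch(last[LOW_HZ], last[HIGH_HZ])
        n += 1
```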
Intelligent steering of the high-resolution, focused field of view onto a specific target of interest means that this technology is not necessarily limited to short range applications. If only one of the four steered pings, for example, needs to be continuously updated to generate real-time images, then the range limit could be significantly extended. The intelligent focusing may be implemented in a mode whereby a low-frequency, low-resolution ping with a large field of view is used to locate the target of interest. The subsequent high-frequency, high-resolution ping may then be directed to look specifically at the region of interest without having to physically steer the sonar head.
In this particularly preferred embodiment, additional intelligent and predictive processing and inter-frame alignment may be used to account for and track motion and moving objects. The priority of frame processing may be adapted to allow focus on, and a higher refresh rate for, images including the primary target, for example with the field of view centered on a primary target, or on moving objects, which require that the images representing the portion of the field of view containing the moving object be updated more frequently.
The sonar receiver of the invention is a large array of pressure measuring elements. The sonar receiver is controlled by a sonar receiver controller according to a set of sonar receiver parameters. The array is preferably arranged as a planar array as shown in the accompanying figures.
A phase and an amplitude of the pressure wave arriving at the sonar receiver are preferably assigned to each detector element for each incoming slice, and a phase map may be generated for each slice. A phase map is like a topographical map showing lines of equal phase across the surface of the detector array.
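One conventional way such a per-element phase and amplitude could be computed is quadrature demodulation at the known ping frequency; the sketch below is offered only as an illustration of that standard technique, assuming each element delivers a short burst of digitized samples for the slice.

```python
# Sketch: one complex number per element; its angle is the phase map
# entry and its magnitude the amplitude map entry for the slice.

import numpy as np


def phase_map(slice_samples, ping_freq_hz, sample_rate_hz):
    """slice_samples: (rows, cols, n_samples) pressure samples, one slice."""
    n = slice_samples.shape[-1]
    t = np.arange(n) / sample_rate_hz
    # Complex reference oscillator at the ping frequency.
    ref = np.exp(-2j * np.pi * ping_freq_hz * t)
    iq = (slice_samples * ref).mean(axis=-1)  # one complex value per element
    return np.angle(iq), np.abs(iq)           # phase map, amplitude map


# Example: a 48x48 element array sampled at 4x a 375 kHz ping frequency.
samples = np.random.randn(48, 48, 64)
phases, amps = phase_map(samples, 375e3, 1.5e6)
```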
Additional gain control may be incorporated with phase filtering.
Phase map and data cleanup and noise reduction may be done optionally in the sonar receiver or in a beamformer section. The phase map and/or the digital stream of data from the detector are passed to the beamformer section, where the data are analyzed to determine the ranges and characteristics of the objects in the insonified volume.
The range of an object is determined by the speed of sound in the water and the time between the outgoing ping and the reflected ping received at the receiver. The data are most preferably investigated by using a spherical coordinate system with origin at the center of the detector array, a range variable, and two angle variables defined with respect to the normal to the detector array surface. The beamformer section is controlled by a beamformer controller using a set of beamformer parameters. The space that the receiver considers is divided into a series of volume elements radiating from the detector array and called beams. The center of each volume element of a beam has the same two angular coordinates, and each volume element may have the same thickness as a slice. The beam volume elements may also preferably have thicknesses proportional to their range from the detector, or any other characteristic parameters chosen by the beamformer controller. The range resolution is given by the slice thickness.
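As a worked example of this range and slice bookkeeping, with a nominal 1500 m/s sound speed and illustrative numbers:

```python
# Sketch of the two-way range calculation and constant-thickness slice
# indexing described above; values are illustrative only.

SOUND_SPEED = 1500.0  # m/s


def echo_range(seconds_since_ping):
    # Two-way travel: the echo covers the range twice.
    return SOUND_SPEED * seconds_since_ping / 2.0


def slice_index(rng_m, slice_thickness_m):
    # Which constant-thickness volume element (slice) of a beam a return
    # at range rng_m falls into; slice thickness sets range resolution.
    return int(rng_m / slice_thickness_m)


# e.g. an echo 40 ms after the ping, with 2.5 cm slices:
r = echo_range(0.040)        # 30.0 m
k = slice_index(r, 0.025)    # slice 1200
```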
The beamformer controller controls the volume of space “seen” by the detector array and used to collect data. For example, if the sonar transmitter sends out a narrow or a broad beam, or changes the direction of the sent out beam, the beamformer may also change the system to look only at the insonified volume. Thus, the system of the invention preferably changes two or more of the system parameters between the same two pings to improve the results. Some of the parameters controlled by the beamformer controller are described below.
The incoming digital data stream from each sonar detector of the receiver array has typically been multiplied by a TVG (time varied gain) function. A triangular data function ensures that the edges of the slices have little intensity, to reduce digital noise in the signal. The TVG signal is set to zero to remove data collected from too near to and too far away from the detector, and to increase or decrease the signal depending on the situation.
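A hedged sketch of such a TVG weighting for one detector's sample stream follows. The spherical-spreading compensation used here is a common convention, not necessarily the one used by the invention; the range limits and sample rate are assumed example values.

```python
# Sketch: zero gain for returns too near or too far, and a spreading-loss
# correction in between, applied sample-by-sample to one element's stream.

import numpy as np


def tvg(num_samples, sample_rate_hz, r_min_m, r_max_m, c=1500.0):
    t = np.arange(num_samples) / sample_rate_hz
    rng = c * t / 2.0                      # two-way travel to range
    gain = np.zeros(num_samples)
    inside = (rng >= r_min_m) & (rng <= r_max_m)
    gain[inside] = rng[inside] ** 2        # compensate spherical spreading
    return gain


stream = np.random.randn(4096)             # one element's raw samples
weighted = stream * tvg(4096, 100e3, r_min_m=2.0, r_max_m=25.0)
```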
In the prior art, the data have been filtered according to a criterion, and just one volume element for each beam was selected to have a value. For example, if the data were treated to accept the first signal in a beam arriving at the detector with an amplitude above a defined threshold (FAT), the three dimensional point cloud used to generate an image for the ping would be much different from a point cloud generated by picking the value of the maximum signal in the beam (MAX). In the FAT case, the image could be, for example, of fish swimming through the insonified volume, while the image in the MAX case would be the image of the sea bottom. In the prior art, each beam would show at most one value or point at a single range, and all the other ranges of the beam would be assigned a zero.
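The two selection criteria can be illustrated on a toy range profile; fat() and maxsel() below are illustrative reconstructions of the FAT and MAX rules named above, each reducing a beam to at most one (range index, value) pair.

```python
# Sketch of the prior-art per-beam selectors: FAT keeps the first return
# above a threshold, MAX keeps the strongest return.

import numpy as np


def fat(beam, threshold):
    idx = int(np.argmax(beam > threshold))       # first index over threshold
    return (idx, float(beam[idx])) if beam[idx] > threshold else None


def maxsel(beam):
    idx = int(np.argmax(beam))
    return (idx, float(beam[idx]))


beam = np.array([0.1, 0.3, 2.1, 0.4, 5.0, 0.2])  # toy range profile
print(fat(beam, 1.0))   # (2, 2.1): e.g. a fish in front of the bottom
print(maxsel(beam))     # (4, 5.0): e.g. the sea bottom itself
```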
In the present invention, the data stream is analyzed by completing two or more beamformer processing procedures in the time between two pings, either in parallel or in series. In a video presentation, the prior art showed a time series of 3D images, introducing a fourth dimension, time, into the presentation of data. By introducing values into more than one volume element per beam for each ping, we introduce a fifth dimension into the presentation. We can, for example, “see” behind objects, “through” objects, and “around” objects to obtain much more information. We can use various data treatments to improve the video image stream. In the same way, other ways of analyzing the data stream can be used to provide cleaner images, higher resolution images, expanded range images, etc. These different imaging tasks can all be performed on a single ping. The different images may be combined into a single image in a video presentation, or presented in more than one video stream at a frame rate equal to the ping rate.
If we are surveying a seawall, we beamform the data before the wall (the sea bottom, which is oblique to the beams, has low backscatter, and is soft, returning low intensity signals) differently from the data at the harbour wall (which is orthogonal to the beams, has high backscatter, and is hard, returning high intensity signals). If we know where the seawall is from a chart, the beamformer can use GPS or camera data to work out which ranges lie before the wall and which lie after it, and change the TVG in the middle of the returned ping.
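As an illustration of changing the TVG in the middle of the returned ping, the sketch below applies a soft-target gain before the wall and a hard-target gain beyond it; the wall range, taken here as coming from the chart and GPS fix, and the gain values are purely assumed.

```python
# Sketch: switch the gain at the known wall range so that the soft,
# oblique sea bottom and the hard, orthogonal harbour wall are weighted
# differently within one returned ping.

import numpy as np


def split_tvg(ranges_m, wall_range_m, soft_gain=4.0, hard_gain=0.5):
    return np.where(ranges_m < wall_range_m, soft_gain, hard_gain)


ranges = np.linspace(0.0, 60.0, 2400)     # ranges of the sample stream
g = split_tvg(ranges, wall_range_m=42.0)  # 42 m from chart + GPS (example)
```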
If we know the sea depth, we can specify two planes, SeaSurfacePlane and SeaBottomPlane; only data between the planes will be processed and sent from the head to the top end.
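A minimal sketch of this two-plane gate, assuming each plane is given in Hessian normal form (a unit normal plus an offset), so that a point p lies between the planes when both signed distances are nonnegative:

```python
# Sketch: keep only points between SeaSurfacePlane and SeaBottomPlane
# before sending data from the head to the top end.

import numpy as np


def between_planes(points, n1, d1, n2, d2):
    keep = (points @ n1 + d1 >= 0) & (points @ n2 + d2 >= 0)
    return points[keep]


pts = np.random.uniform(-10, 10, size=(1000, 3))
surface = (np.array([0.0, 0.0, -1.0]), -1.0)  # keeps z <= -1 m (below surface)
bottom = (np.array([0.0, 0.0, 1.0]), 9.0)     # keeps z >= -9 m (above bottom)
kept = between_planes(pts, *surface, *bottom)
```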
A large amount of data generated per second by prior art sonar systems has traditionally been discarded because of data transmission and/or storage limits. The present invention allows a higher percentage of the original data generated to be stored for later analysis.
The beamformer analyses the data and decides whether the next ping should change settings; if so, it sends signals to the appropriate controller to change the settings for the next ping. The beamformer analyses the data in step 67 and decides, either on the basis of the incoming ping data or on previous instructions, whether to perform a single type or multiple types of analysis of the incoming ping data. For example, the beamformer could analyze the data using both the FAT and the MAX analysis, and present both images either separately or combined, so that some beams have more than one value per beam. The reduced data is sent from step 67 to step 68, which stores or sends raw data or image data for further processing into a video presentation at a rate greater than 5 frames per second.
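A sketch of this dual analysis is given below, reusing the illustrative fat() and maxsel() selectors from the earlier sketch; the combined output keeps more than one value per beam wherever the two criteria disagree, which is the multi-value-per-beam behavior described above.

```python
# Sketch: run both FAT and MAX on one ping's beam data in the time
# between two pings, and emit separate or combined per-beam results.
# fat() and maxsel() are the illustrative selectors sketched earlier.

def process_ping(beams, threshold, combine=True):
    """beams: mapping of beam id -> range profile (1D array)."""
    fat_cloud = {b: fat(profile, threshold) for b, profile in beams.items()}
    max_cloud = {b: maxsel(profile) for b, profile in beams.items()}
    if combine:
        # More than one value per beam: keep both returns where they
        # differ, dropping beams where FAT found nothing (None).
        return {b: {fat_cloud[b], max_cloud[b]} - {None} for b in beams}
    return fat_cloud, max_cloud
```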
Obviously, many modifications and variations of the present invention are possible in light of the above teachings. It is therefore to be understood that, within the scope of the appended claims, the invention may be practiced otherwise than as specifically described.
The following US patents and US patent applications are related to the present application: U.S. Pat. No. 6,438,071 issued to Hansen, et al. on Aug. 20, 2002; U.S. Pat. No. 7,466,628 issued to Hansen on Dec. 16, 2008; U.S. Pat. No. 7,489,592 issued to Hansen on Feb. 10, 2009; U.S. Pat. No. 8,059,486 issued to Sloss on Nov. 15, 2011; U.S. Pat. No. 7,898,902 issued to Sloss on Mar. 1, 2011; U.S. Pat. No. 8,854,920 issued to Sloss on Oct. 7, 2014; U.S. Pat. No. 9,019,795 issued to Sloss on Apr. 28, 2015; U.S. patent application Ser. Nos. 14/927,748 and 14/927,730 filed on Oct. 30, 2015; Ser. No. 15/978,386 filed on May 14, 2018; Ser. No. 15/908,395 filed on Feb. 28, 2018; Ser. No. 15/953,423 filed on Apr. 14, 2018; Ser. No. 16/693,684 filed on Nov. 11, 2019; Ser. Nos. 62/931,956 and 62/932,734 filed on Nov. 7, 2019; Ser. No. 16/362,255 filed on Mar. 22, 2019; and Ser. No. 62/818,682 filed on Mar. 14, 2019. The above identified patents and patent applications are assigned to the assignee of the present invention and are incorporated herein by reference in their entirety, including incorporated material.
Number      Date       Country
62/932,734  Nov. 2019  US
62/931,956  Nov. 2019  US