The present invention relates generally to sonar, and more particularly to a wideband sonar receiver and to sonar signal processing algorithms.
Producing frequency-modulated sonar systems for commercial applications such as fish finding routinely faces cost pressures. Thus, advanced signal processing techniques must be implemented in an efficient, cost-effective manner. For example, it is desirable to offer relatively high power, such as 1 kW over a wideband frequency range, such as 25 kHz to 255 kHz. The lower frequencies are desirable for deep water operation whereas the higher frequencies offer better resolution in shallow water operation.
The received signal power varies greatly depending upon the depth of operation for a sonar system. The reflected sonar pulse is relatively strong from shallow targets. In contrast, the reflected sonar pulse is relatively weak from deeper targets due to the greater range that the deep water reflected pulse must travel. Thus, a sonar receiver capable of operating in both shallow and deep water must accommodate a wide dynamic range in received pulse power, such as 120 dB.
An analog-to-digital converter (ADC) requires 20 bits of resolution to directly capture such a large dynamic range. In that regard, 20 bits of resolution means that the ADC is capable of distinguishing over one million different amplitude levels. Such a high-resolution ADC is costly and thus inappropriate for commercial operation.
The receiver costs are exacerbated for a wideband sonar system. But wideband operation is desirable in that pulse compression techniques such as a chirp pulse provide enhanced range resolution. In that regard, range resolution in sonar systems is a function of the effective pulse length. The shorter the effective pulse, the greater the range resolution. But sonar performance is also dependent upon the achievable signal-to-noise ratio (SNR) for the received sonar pulses. In general, the greater the energy for the transmitted pulses, the greater the SNR for the resulting received pulses. Achieving higher SNR and shorter pulse lengths are thus at odds with one another: for a sonar system with a given transmit power, the SNR is reduced as the pulse length is reduced. Pulse compression techniques enable sonar systems to achieve finer range resolution without sacrificing SNR. To achieve this goal, the pulses may be frequency modulated across a relatively long pulse extent or length. For example,
In a pulse compression sonar system, the sonar receiver correlates a replica pulse 105 with the received pulse 100. The resulting detection peak 110 is much narrower than the original pulse length, thus representing the pulse compression effect. In a chirp embodiment, the effective compressed pulse length T′ (as defined by the 3 dB width for detection peak 110) equals 1/Δf, where Δf is the frequency difference modulated across pulse 100. So the effective pulse is narrowed but the SNR still corresponds to the original pulse width. Thus pulse compression methods are a popular technique to achieve greater range resolution.
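The pulse compression principle may be illustrated with the following sketch. All numeric parameters (sample rate, pulse length, frequency sweep) are assumed example values rather than values from the disclosure; the sketch simply confirms that correlating a chirp against its replica yields a detection peak whose 3 dB width is on the order of 1/Δf.

```python
import numpy as np

fs = 1.0e6                       # sample rate, Hz (assumed)
T = 1.0e-3                       # transmitted pulse length, s (assumed)
delta_f = 50e3                   # frequency sweep across the pulse, Hz (assumed)
t = np.arange(int(T * fs)) / fs

# Complex (analytic) chirp sweeping -delta_f/2 .. +delta_f/2 so the correlation
# magnitude gives a clean envelope.
k = delta_f / T
chirp = np.exp(1j * 2 * np.pi * (-0.5 * delta_f * t + 0.5 * k * t ** 2))

# Correlate the "received" pulse against its replica (pulse compression).
compressed = np.correlate(chirp, chirp, mode="full")
envelope = np.abs(compressed) / np.abs(compressed).max()

# The 3 dB width of the detection peak is on the order of 1/delta_f = 20 us,
# versus the original 1 ms pulse: a compression factor of roughly T * delta_f.
width_3db_s = np.count_nonzero(envelope >= 10 ** (-3.0 / 20.0)) / fs
print(f"compressed 3 dB width ~ {width_3db_s * 1e6:.0f} us, 1/delta_f = {1e6 / delta_f:.0f} us")
```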
Pulse 100 is unshaped in that it has a constant amplitude across all the frequencies. The correlation of an unshaped pulse with its replica in the sonar receiver produces relatively high amplitude range sidelobes as shown in
Achieving efficient pulse compression yet also having good SNR is not the only challenge for sonar systems. For example, fish-finding sonar systems must fight a variety of interferences such as background noise or signals from other sonar systems. These interferences complicate the task of distinguishing bottom echoes and mask the desired fish detection. To address interferences such as clutter due to water quality, suspended particles such as zooplankton, and thermocline detection, a standard processing scheme employs time averaging of the detected signal. However, time averaging often has very limited effectiveness against these problems.
Accordingly, there is a need in the art for improved sonar systems that offer frequency agile performance and relatively high power at low cost. In addition, there is a need in the art for improved sonar systems that offer pulse compression and sidelobe suppression at low cost. Finally, there is a need in the art for improved sonar processing techniques.
In accordance with a first aspect of the disclosure, a wideband sonar receiver is provided that includes: a selectable bandpass filter adapted to filter a received sonar signal to produce a filtered signal; an analog-to-digital converter for converting a version of the filtered signal to provide digitized samples; a digital basebanding and decimation stage adapted to baseband and decimate the digitized samples to produce baseband samples of the received sonar signal; and a correlator adapted to correlate the baseband samples with baseband replica samples to provide a correlated signal.
In accordance with a second aspect of the disclosure, a method of processing a received wideband sonar signal is provided that includes: selecting from a plurality of filter bands based upon a center frequency for the received wideband sonar signal; filtering the received wideband sonar signal using the selected filter band; and applying a time-varying gain to a resulting filtered signal.
In accordance with a third aspect of the disclosure, a sonar system for shaping a received sonar chirp pulse according to a shaped replica pulse is provided that includes: a digital signal processor (DSP) configured to divide a frequency-domain version of the received sonar chirp pulse by a frequency-domain version of the shaped replica pulse (FSRP) to provide a shaping filter response and to multiply the shaping filter response by a conjugate of the FSRP to provide a combined correlation and shaping response, the DSP being further configured to multiply the frequency-domain version with the correlation and shaping response to produce a correlated and shaped signal.
In accordance with a fourth aspect of the disclosure, a method of rejecting sonar interference is provided that includes: detecting echoes corresponding to a series of transmitted sonar pulses, each detected echo being represented by a series of time samples; for each detected echo, comparing a series of time samples to corresponding time samples in a preceding echo and a subsequent echo to determine whether the compared time samples exceed the corresponding time samples by a set limit; and if the compared time samples exceed the corresponding time samples in the preceding and subsequent echoes by the set limit, replacing the compared time samples with alternative sample values.
a is a time-domain representation of a series of echo signals from various depths in accordance with an embodiment.
b is a time-domain representation of the echo signals of
A low-cost wideband sonar system is disclosed that achieves high sensitivity as well as sidelobe suppression. In addition, a variety of advanced sonar signal processing algorithms are disclosed that may be advantageously implemented on such a low-cost yet high performance sonar system. The sonar system will be discussed first followed by a discussion of the sonar signal processing algorithms.
A low-cost wideband sonar receiver is disclosed, in accordance with an embodiment, that achieves high sensitivity at low cost. In reference to the drawings, an example sonar receiver 300 (
In one embodiment, selectable bandpass filter 115 is configured to select from 3 available bandpass channels as shown in
In one embodiment, bandpass filter 115 comprises three selectable Chebyshev filters to provide a maximum roll-off close to the band edges to obtain the best rejection of interfering signals close to the selected bandpass. For example,
The output of bandpass filter 115 is processed by a variable gain amplifier 120. The variable gain amplifier applies an increasing gain as a function of time from a given sonar ping. This gain resets for the next ping. Thus, reflected pulses from relatively shallow targets receive lower gains whereas reflected pulses from relatively deeper targets receive higher gains from variable gain amplifier 120. This variable gain thus relaxes the resolution requirements for a downstream analog-to-digital converter (ADC) 125 in that the dynamic range between deep reflections and shallow reflections is flattened because weak signals from deep targets are given higher gain as compared to the gain applied to relatively stronger signals from shallower targets.
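The following sketch illustrates the general idea of such a time-varying gain schedule. The 40 log R spreading-plus-absorption form, the absorption coefficient, and the gain clamp are assumed examples for illustration only, not necessarily the curve applied by variable gain amplifier 120.

```python
import numpy as np

def tvg_gain_db(t_since_ping_s, sound_speed=1500.0, alpha_db_per_m=0.01, max_gain_db=90.0):
    """Gain in dB as a function of time since the transmit ping (assumed model)."""
    one_way_range_m = 0.5 * sound_speed * np.asarray(t_since_ping_s)
    # Two-way spherical spreading (40 log R) plus two-way absorption loss.
    gain_db = 40.0 * np.log10(np.maximum(one_way_range_m, 1.0)) + 2.0 * alpha_db_per_m * one_way_range_m
    return np.minimum(gain_db, max_gain_db)   # clamp so the amplifier remains realizable

# A shallow return (20 ms, about 15 m) gets far less gain than a deep return
# (1.3 s, about 975 m), flattening the dynamic range that ADC 125 must capture.
print(tvg_gain_db([0.02, 1.3]))
```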
An anti-alias filter 130 receives the output from variable gain amplifier 120. Despite the selection by bandpass filter 115, which as discussed further below will reject signals that are beyond the Nyquist rate of ADC 125, there is still a likelihood of interfering signals from, for example, adjacent electronic components of the sonar system incorporating receiver 300. Anti-alias filter 130 thus provides additional protection against such unwanted signals.
ADC 125 digitizes the output signal from anti-alias filter 130. This digitization can exploit subharmonic sampling such that a clock rate 135 for the sampling is 4/N times the operating center frequency, where N is an odd integer. In this fashion, the design of ADC 125 may be relaxed in that it can operate at lower sample rates. However, although subharmonic sampling enables a lower clock rate, there is no escaping the Nyquist rate limitations.
In that regard, the spectral content of a signal having bandwidth greater than B/2 cannot be captured without aliasing unless it is sampled at a rate of B or greater. Thus, suppose ADC 125 is clocked at 4/5*Fc, where Fc is the center frequency of the signal of interest. Under the Nyquist rate law, all signals having a bandwidth greater than 2/5*Fc will be aliased. Thus, a sampling rate of 4/5*Fc can only sample signals of bandwidth 2/5*Fc or less such that the maximum center frequency for the signal being sampled is 1/5*Fc. Signals having greater bandwidth will be aliased. Thus, subharmonic sampling is suitable for single frequency sonar systems but may not be suitable for wideband systems employing pulse compression. Moreover, subharmonic sampling may reduce the achievable signal-to-noise ratio as compared to higher sampling rates.
To provide the maximum bandwidth capability, ADC 125 may be clocked so as to oversample such as at a rate of 4*Fc. This faster rate makes it possible to employ, in one embodiment, a Sigma-Delta-type ADC. Such an ADC has the advantage of providing a digital low pass filter with an extremely fast roll-off rate at the Nyquist frequency, thus reducing the complexity and therefore expense of anti-alias filter 130. In addition, a Sigma-Delta-type ADC achieves higher resolution with relatively lower cost components as compared to conventional ADC architectures. In this fashion, a relatively high dynamic range can be exploited such as a 16-bit resolution for ADC 125. To minimize noise from clock jitter, the transmit and receive processing may be synchronized in the sonar system including receiver 300. In addition, clock 135 may be derived using a direct divide down of an FPGA crystal frequency used in the transmitter (e.g., within a sonar transceiver).
Digital portion 310 of receiver 300 processes the digitized samples from ADC 125. In one embodiment, this digital processing includes basebanding, filtering, and decimation. Basebanding allows the sample rate for ADC 125 to be relaxed, which subsequently reduces the processing load for digital portion 310. In other applications, basebanding is often performed in the analog domain but the processing speed is adequate to perform basebanding in the digital domain for receiver 300. Such digital basebanding in receiver 300 is advantageous because shifting down to baseband in the analog domain often suffers from noise and errors due to DC offsets in the analog circuitry. In contrast, such errors are avoided in receiver 300. Moreover, an analog local oscillator can introduce phase errors between the complex components of its signal whereas receiver 300 will have relatively perfectly synchronized local oscillator components. In addition, analog errors occur due to drift in component values with time or environment whereas receiver 300 is immune to such analog basebanding drift errors.
The basebanding processing principle is as follows: consider a receive signal s(t) and a complex local oscillator signal r(t), multiply these together to obtain a modulated signal q(t), and filter this to remove the high frequency content and obtain the basebanded signal. A mathematical expression for basebanding follows, with s(t)=A sin(a) and r(t)=cos(b)+j sin(b), where a is the receive signal phase and b is the local oscillator phase. The multiplication of s(t) with r(t) gives:
q(t)=A[sin(a)cos(b)+j sin(a)sin(b)]
By trigonometric identities, q(t) is also given as
q(t)=(A/2)[sin(a+b)+sin(a−b)]+j(A/2)[cos(a−b)−cos(a+b)]
After filtering, q(t) becomes
q(t)=(A/2)[sin(a−b)+j cos(a−b)]
which is the complex basebanded signal. For a chirp signal centered around the carrier frequency, the phase is given by
a=2π(fc+(fr·t/(2T))−(fr/2))t
where fc is the center frequency, fr is the bandwidth, and T is the burst length. If the local oscillator phase b is given by
b=2π·fc·t
the factor (a−b) becomes
(a−b)=2π((fr·t/(2T))−(fr/2))t
which is a chirp signal centered on dc. It will be appreciated, however, that the digital basebanding disclosed herein may be applied to other types of broadband signals. Moreover, the signal need not be centered about the local oscillator frequency and the sampling rate may be adjustable.
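The digital basebanding described above may be sketched as follows, using assumed example values for fc, fr, T, and the sample rate, and a crude windowed-sinc low-pass filter standing in for the filtering stage.

```python
import numpy as np

fs = 1.0e6                      # ADC sample rate, Hz (assumed)
fc = 200e3                      # chirp center frequency, Hz (assumed)
fr = 50e3                       # chirp bandwidth, Hz (assumed)
T = 1.0e-3                      # burst length, s (assumed)
t = np.arange(int(T * fs)) / fs

# Received chirp with phase a = 2*pi*(fc + fr*t/(2T) - fr/2)*t, as in the text.
a = 2 * np.pi * (fc + fr * t / (2 * T) - fr / 2) * t
s = np.sin(a)

# Complex local oscillator with phase b = 2*pi*fc*t.
lo = np.cos(2 * np.pi * fc * t) + 1j * np.sin(2 * np.pi * fc * t)

# Mix, then low-pass filter to keep only the difference-frequency terms.
q = s * lo
taps = np.sinc(np.arange(-64, 65) * (2 * fr / fs)) * np.hamming(129)   # crude LPF (assumed)
baseband = np.convolve(q, taps / taps.sum(), mode="same")

# The result is a chirp whose spectrum is centered on dc rather than on fc.
freqs = np.fft.fftfreq(t.size, 1 / fs)
spec = np.abs(np.fft.fft(baseband)) ** 2
centroid = np.sum(freqs * spec) / np.sum(spec)
print(f"baseband spectral centroid ~ {centroid:.0f} Hz (signal was centered at {fc / 1e3:.0f} kHz)")
```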
A cosine component 140 and a sine component 145 (
A low pass filter 155 selects for the difference components produced by multipliers 150 to complete the basebanding process. In one embodiment, low pass filter 155 comprises a finite impulse response (FIR) filter. An FIR filter is advantageous in that it employs only real coefficients. Thus, the complex multiplications within filter 155 can reduce to just two multiplications rather than four: a complex number (a+jb) times a real coefficient c reduces to ac+jbc. In addition, the FIR coefficients may be selected to be symmetric so that the number of coefficients required to be stored is halved. In turn, the number of FIR multiplications is also halved. The output of filter 155 may be represented as fir in such an embodiment:
The cos component 140 and sin component 145 alternate so that the resultant I and Q signals also alternate as real only or imaginary only. This alternation of the I and Q components can be exploited to further reduce filter 155 complexity.
Filter 155 also decimates to considerably reduce the number of processes needed to filter the received digital samples from ADC 125. Thus, the number of samples fed to filter 155 may be linked to the desired decimation rate to directly reduce the filter processes according to the decimation rate. Note that filter coefficients and the baseband coefficients can also be combined according to the decimation level to remove a stage in the filter processing (the decimation level should be an integer multiple of 4).
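One way to picture the combined filtering and decimation, along with the savings from symmetric real coefficients, is the sketch below. The taps and decimation factor are assumed example values, not the coefficients supplied to filter 155.

```python
import numpy as np

def fir_decimate(x, taps, decim):
    """Filter complex samples x with symmetric real taps, computing only every decim-th output."""
    assert np.allclose(taps, taps[::-1]), "taps assumed symmetric"
    n_taps = len(taps)
    half = n_taps // 2
    out = []
    # Only visit the output instants we actually keep (decimation built into the filter).
    for n in range(n_taps - 1, len(x), decim):
        window = x[n - n_taps + 1 : n + 1][::-1]
        # Fold the symmetric halves: (x[n-k] + x[n-(N-1-k)]) * taps[k] halves the multiplies,
        # and each multiply is complex-times-real (two real multiplications, not four).
        acc = np.sum((window[:half] + window[::-1][:half]) * taps[:half])
        if n_taps % 2:
            acc += window[half] * taps[half]
        out.append(acc)
    return np.array(out)

taps = np.hamming(33)                                # symmetric real low-pass taps (assumed)
taps /= taps.sum()
x = np.exp(1j * 2 * np.pi * 0.01 * np.arange(256))   # example complex baseband input
y = fir_decimate(x, taps, decim=4)
print(len(x), "->", len(y), "samples after decimate-by-4 filtering")
```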
If the sampling rate is 4*fc, adjacent samples are in quadrature so that processing can proceed with only the real samples, obtaining the quadrature data by selecting an adjacent sample (which is equivalent to shifting the current sample by 90 degrees). However, such a simplification in processing loses 3 dB in signal-to-noise ratio. To avoid that loss, receiver 300 may be implemented with full complex processing. In addition, the processing may be performed entirely in integer form to reduce the processing load. However, such a simplification will also reduce the signal-to-noise ratio. To maintain simplicity but get better signal-to-noise performance, the filter processing may be performed using integer math but with scaling: at each processing stage, the integer values are allowed to grow but are then scaled. Such a scaling technique retains the improved dynamic range of receiver 300 but reduces the bit size to a more manageable level.
As discussed earlier, digital basebanding avoids the error sources associated with analog domain processing. It is highly unlikely for basebanding in the digital domain to introduce spurious signals. In contrast, for basebanding in the analog domain, spurious signals can readily result from nonlinearities, mismatching of frequencies, or component drift. However, digital basebanding can suffer from dc offset when values are truncated, which is necessary if a value exceeds the available bit size. Such truncation introduces a bias that acts effectively as a dc bias and increases noise accordingly. Such truncation will arise at each scaling stage in the integer math implementation discussed above. A solution to this issue is to add 1 to the value just prior to the last shift operation. This effectively adds one-half to the resultant value, which restores the bias level thus removing the dc component. To implement filter 155, an FPGA may be used to accommodate the use of programmable filter coefficients and scaling. Digital component 310 thus comprises an FPGA in one embodiment. A controlling CPU (not illustrated) can thus supply the FPGA with relevant coefficients depending upon the frequency band of operation.
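The truncation-bias fix may be illustrated as follows. The bit widths and sample values are arbitrary examples; adding half of the discarded weight before the final shift (equivalent to adding 1 just before a final one-bit shift) is shown to remove the downward bias that plain truncation introduces.

```python
def scale_truncate(acc: int, shift: int) -> int:
    """Plain truncating shift: introduces a systematic downward (dc) bias."""
    return acc >> shift

def scale_rounded(acc: int, shift: int) -> int:
    """Add half of the discarded weight before the shift, removing the bias (round-to-nearest)."""
    return (acc + (1 << (shift - 1))) >> shift

# Over many samples, the truncated average drifts low while the rounded one does not.
samples = [3 * n + 1 for n in range(1000)]       # arbitrary integer accumulator values
shift = 6                                        # scale down by 2**6 (assumed bit growth)
trunc_mean = sum(scale_truncate(s, shift) for s in samples) / 1000
round_mean = sum(scale_rounded(s, shift) for s in samples) / 1000
true_mean = sum(s / 2 ** shift for s in samples) / 1000
print(f"true {true_mean:.3f}  truncated {trunc_mean:.3f}  rounded {round_mean:.3f}")
```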
A complex correlator 160 correlates the basebanded and decimated chirp signal from filter 155. Complex correlator 160 may comprise a digital signal processor (DSP). The longer the correlation length for complex correlator 160 (and hence the longer the burst length for the associated sonar system), the better the noise rejection. In contrast, a shorter burst length (and thus a shorter correlation length of complex correlator 160) has the desirable effect of increasing the range resolution for single frequency operation and also reducing the dead time during the transmit burst. Shallow water operation will require shorter burst lengths whereas deeper water operation requires longer burst lengths. To accommodate these conflicting demands, complex correlator 160 may be a variable length complex correlator responsive to a commanded correlation length 165. The variable correlation length corresponds to the variable burst length. The programming of coefficients in a variable complex correlator 160 accommodates the varying burst lengths and frequencies. Since the correlation occurs at baseband, the chirp signal may be implemented to be symmetric about its center as shown in the correlation of
where cor represents the correlation results. But such symmetry requires that all signals be placed symmetrically around the center frequency, which may not be possible for certain transducer capabilities.
Another method of reducing the correlation overhead is to employ FFT techniques. In that regard, correlation is equivalent to conjugate multiplication in the frequency domain. Thus, converting the basebanded and decimated signal from filter 155 using an FFT reduces the processing to a single set of multiplications rather than a sliding multiply and add as required for a time domain correlation. An IFFT is then applied to the resulting product to obtain the correlation results. An overlap add FFT method can considerably reduce the processing load, but at the expense of requiring more memory resources to store the intermediate stages and at the expense of added complexity. In one embodiment, complex correlator 160 is implemented by programming an FPGA with sufficient memory resources to store the coefficients for the replica signal. The complex correlation can then be implemented directly in the time domain without the need for external memory and associated interface code. For example, if an FPGA implements the digital basebanding process, the same FPGA can implement the complex correlation. In that regard, digital portion 310 may comprise an FPGA as discussed earlier.
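A minimal sketch of the frequency-domain correlation follows. The toy signal and replica are assumed values, and the simple zero-padding shown here stands in for the overlap-add bookkeeping mentioned above.

```python
import numpy as np

def correlate_fft(signal, replica):
    """Linear correlation of a complex baseband signal with a replica via FFT."""
    n = len(signal) + len(replica) - 1          # zero-pad to the full linear length
    n_fft = 1 << (n - 1).bit_length()           # next power of two for FFT efficiency
    S = np.fft.fft(signal, n_fft)
    R = np.fft.fft(replica, n_fft)
    # Correlation in time is a conjugate multiplication in the frequency domain.
    return np.fft.ifft(S * np.conj(R), n_fft)[:n]

rng = np.random.default_rng(0)
replica = np.exp(1j * np.pi * 0.001 * np.arange(200) ** 2)   # toy baseband chirp replica
signal = np.concatenate([np.zeros(300), replica, np.zeros(300)]) + 0.1 * rng.standard_normal(800)
peak = np.argmax(np.abs(correlate_fft(signal, replica)))
print("detection peak at sample", peak)         # expect a peak at the embedded offset (300)
```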
With the complex correlation completed, the power of the complex correlation results may be extracted in a complex-to-magnitude stage 170. A logarithm stage 175 may then take the log of the powers to produce a detected echo signal 180. A processor (not illustrated) processes the detected echo signals with, for example, the sonar processing algorithms discussed below. The resulting processed signal may then be displayed such as shown in
It is conventional to transmit shaped pulses to achieve better sidelobe suppression. But shaping the pulses lowers the transmitted signal power and thus lowers the SNR for the received pulses. A shaping filter that advantageously achieves shaping upon receipt of unshaped pulses will now be discussed.
The shaping filter is generated in the digital domain to support a high pole count by the use of finite impulse response (FIR) filters (resulting in a pole count that would be impossible to reproduce in the analog domain). Moreover, the digital implementation of the shaping filter enables an on-the-fly calculation of the filter coefficients. Although the shaping filter can be implemented in the time domain, the following discussion will address an embodiment in which the shaping filter is combined with the correlation coefficients in the frequency domain such that the shaping filter is applied to the correlation coefficients. The resulting combined coefficients may then be applied to the incoming received pulses. Such a combination of the shaping filter and the correlation coefficients reduces the computation load in that the combined coefficients only need to be calculated once for a particular range setting.
An ideal shaping filter will have an almost infinite extent in the time domain to achieve a perfect match, which is of course impractical to implement. The combined coefficients can be limited to a more realizable extent in time by applying a smooth transition from the length of the original correlation coefficients out to some percentage of this length. For example, in one embodiment, the combined coefficients may be limited to be no more than three times the extent of the original coefficients.
A derivation for the combined coefficients is as follows. In this analysis, lower case indicates a time domain representation whereas upper case denotes a frequency domain representation. In addition, a convolution is represented by “x,” a complex conjugation by “*,” a Fourier transform by “F,” and an inverse transform by “IF.” A received time domain pulse is represented by “s” so that its frequency domain representation is thus given by “S.” Similarly, a corresponding time-domain replica pulse is represented by “r” so that its frequency domain representation is given by “R.”
As discussed above, the pulse compression correlation is typically performed in the frequency domain as implemented using a Fourier transform:
F(sxr*)=F(s)·F(r*)=S·R*=C (1)
where C is the correlation result and “·” represents multiplication. The received signal s contains bursts with varying frequency content that require amplitude shaping prior to replica correlation to reduce the resulting range sidelobes. Advantageously, the following filter coefficients shape the signal burst as a function of the frequency content and position in the burst. The content of the transmitted pulse s is of course known as the sonar system is generating it. Similarly, the replica r is also known as this is the desired transmitted signal with the applied amplitude function (e.g., Hamming). Finally, the desired correlation result C is also known as this is the autocorrelation of the replica r. Equation (1) can thus be replaced as follows:
C=R·R* (2)
A shaping filter T that can adapt the received signal s to make it resemble the replica r is as follows:
R=T·S (3)
Equation (3) can be rewritten to solve for T as follows:
T=R/S (4)
From equations (2) and (3) it follows that:
C=T·S·R* (5)
Given the algebraic associativity and commutativity of the correlation process, the filter T and replica R may be combined into a “super replica” denoted as U:
U=T·R* (6)
The correlation process C thus becomes
C=U·S (8)
And a time domain version of the super replica filter is
u=IF(U) (9)
The above filtering process can be summarized according to the following steps:
1) Convert the replica and the signal into the frequency domain.
2) In the frequency domain, divide the replica by the signal to obtain the shaping filter T.
3) Multiply the shaping filter T by the replica conjugate R* to obtain the super replica U.
4) In the frequency domain, multiply the super filter with the signal or convert the super filter to the time domain and convolve the super filter with the time domain received signal.
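A minimal sketch of these steps follows, using assumed toy signals. The regularization of the division in step 2 and the absence of any time-domain limiting or tapering of the super filter are simplifications for illustration, not the method of the disclosure.

```python
import numpy as np

def super_replica(signal, replica, n_fft):
    """Frequency-domain super replica U = T * conj(R), with T = R / S (regularized)."""
    S = np.fft.fft(signal, n_fft)
    R = np.fft.fft(replica, n_fft)
    floor = 1e-3 * np.abs(S).max()
    S_safe = np.where(np.abs(S) < floor, floor, S)   # crude regularization (assumed)
    T = R / S_safe                                   # shaping filter, equation (4)
    return T * np.conj(R)                            # combined response, equation (6)

n = 256
t = np.arange(n)
bw = 0.25                                            # fractional bandwidth, cycles/sample (assumed)
chirp = np.exp(1j * 2 * np.pi * (-0.5 * bw * t + bw * t ** 2 / (2 * n)))   # unshaped baseband chirp
replica = chirp * np.hamming(n)                      # shaped replica (Hamming, assumed)

n_fft = 1024
U = super_replica(chirp, replica, n_fft)
received = np.concatenate([np.zeros(200), chirp, np.zeros(200)])   # echo of the unshaped burst
C = np.fft.ifft(np.fft.fft(received, n_fft) * U)     # correlated and shaped output, equation (8)
print("detection peak at sample", int(np.argmax(np.abs(C))))       # expect ~200
```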
To further decrease the computation burden, fast Fourier transform (FFT) techniques may be used to calculate the super filter U. Note that the extent of the super filter in the time domain should be chosen to allow the replica size to increase by an amount considered necessary to support the length of the shaping filter—the replica will increase by the length of the super filter. The time extent of the super filter depends upon a tradeoff between processing load and the desire to achieve the best match to the received signal. In one embodiment, the super filter has an arbitrary maximum size of three times the replica length to achieve a compromise of achieving the best match without excessively overloading the processing. It will be appreciated, however, that achieving a perfect match would require in extreme cases a considerably longer super filter size.
In that regard, the worst case (longest super filter) occurs between an unshaped (rectangular) signal and a harshly shaped replica such as a Blackman-Harris function. Conversely, the best case in terms of super filter time length demands occurs for a correctly shaped signal that already matches its replica and thus would not need shaping. In one embodiment, the length of the super filter is a function of the mismatch between signal and replica to thereby automatically reduce the processing load. Should the super filter be arbitrarily limited in time such as no greater than three times the length of the replica, it is also desirable to apply a smooth transition function at the ends of the time domain super filter to prevent issues with discontinuities at the super filter.
The advantageous sidelobe suppression of the shaping filter techniques discussed herein may be better appreciated with reference to the following example correlation results between an unshaped (rectangular) signal shown in
Similarly, the sidelobe extent can be reduced by reducing the length of the super filter to twice the extent of the replica as shown for correlation 800 in
Should the sonar transmitter be capable of some shaping, performance can be significantly enhanced. For example,
The super filter is responsive to the frequency content of the transmitted signal so its effectiveness is a function of the bandwidth, with best results at higher bandwidths and no effect for narrowband signals. But this diminishing performance at lower bandwidths is inconsequential in that narrowband signals do not have range sidelobes because the detection peak extends across the full extent of the correlation for such signals. Thus, there is no need to reduce sidelobes for narrowband signals so no improvement in that regard is necessary.
This bandwidth dependence for the super filter performance is shown in
The filtering discussed herein improves the signal-to-noise ratio (SNR) for a sonar system that would normally use a shaped transmit signal, because the effective signal energy level is increased: since the transmitter is not shaping the transmitted signal, the transmitted energy is not subject to a reduction due to amplitude shaping. The achieved increase in SNR is a function of the desired range sidelobe levels, i.e., the shaping function imposed upon correlation of the received signal. For a Kaiser-Bessel shaping to achieve −60 dB range sidelobes, the filtering discussed herein increases SNR by 1 to 2 dB. Because SNR is measured on a logarithmic scale, such an increase is actually quite significant in that an increase of just 3 dB represents a doubling of the transmit power. Accordingly, the filtering techniques disclosed herein are quite significant and advantageous.
Another advantage of the disclosed filtering techniques is with regard to the detection of low-level targets in conjunction with a very high-level return from adjacent structure such as a hard bottom surface. The detection of low-level targets requires a large amount of gain but the resulting reflections from the large interfering target would saturate the sonar receiver. The large target increases the range sidelobe levels such that the signal no longer retains its shaping and tends towards the unshaped signal case, as shown for correlation 1305 in
Should the unshaped signal become saturated in the sonar receiver, the shape and phase are virtually unaffected because the result is a rectangular pulse representation of the sinusoidal signal burst. The rectangular pulse representation is the original signal (the fundamental) with gradually decreasing odd harmonics. The super filter combination will eliminate the harmonics and act on the fundamental in the desired fashion to produce correlation 1300. As compared to correlation 1305, correlation 1300 has significantly reduced sidelobe levels that enable the detection of low-level targets.
Referring back to
The frequency agile and low-sidelobe sonar system discussed herein may advantageously apply the following algorithms. However, it will be appreciated that these algorithms may be practiced in conventional sonar systems as well.
These various processes may be better understood with regard to a resulting sonar display prior to these enhancements and after application of the post-processing techniques.
The reduction in sonar interference and background noise and clutter is also illustrated in
The specific processing techniques will now be addressed in more detail, starting with a discussion of the sonar interference rejection process.
Interference from other sonar systems will be due to pickup, either electrically or acoustically, of a transmit burst at the front end of the receiving sonar system. The sonar interference signals will have a large amplitude, a fixed burst length, and appear randomly throughout the acquisition. Detection and elimination can therefore be conducted by statistical comparison throughout an acquisition and between adjacent acquisitions to identify sonar interference-like signals.
Recognition of signals as interference is based on the length of time (number of samples) that a large signal level is present in a particular acquisition and the persistency over a number of acquisitions; if the signal does not appear in the same part of subsequent acquisitions then it is assumed to be interference. The detection process may allow for varying lengths of interference time to accommodate varying lengths of bursts from the interfering sonar. The sonar interference rejection algorithm disclosed herein therefore includes parameters for duration within a ping and for various levels of intensity.
A flow chart for the sonar interference rejection algorithm is shown in
Such an anomalous “dash” in the echo trace for ping 2 is detected as follows. In a step 505, the samples in the middle echo trace (e.g., ping 2 of
A step 520 determines if there are time samples remaining in the middle buffer. If so, the algorithm loops back to step 505. It will be appreciated that the “set limit” variable of step 510 may be varied depending upon a particular design goal. With the algorithm set to its highest level of sensitivity, step 515 requires only one sample to be classified as interference. In contrast, with the algorithm set to its lowest level of sensitivity, step 515 requires eleven consecutive time samples in the middle trace to exceed the corresponding time samples in the adjacent echo traces by the set limit in order to classify that region as sonar interference.
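Because the flow chart itself is shown in a figure, the following sketch only approximates the detection loop of steps 505 through 520: each sample of the middle ping is compared against the corresponding samples of the adjacent pings, and a sufficiently long run of samples exceeding both by the set limit is flagged as interference. The set limit and minimum run length are assumed, tunable values.

```python
import numpy as np

def detect_interference(prev_ping, mid_ping, next_ping, set_limit_db=12.0, min_run=3):
    """Return a boolean mask of samples in mid_ping classified as sonar interference."""
    mid = np.asarray(mid_ping, dtype=float)
    exceeds = (mid > np.asarray(prev_ping) + set_limit_db) & \
              (mid > np.asarray(next_ping) + set_limit_db)
    mask = np.zeros(mid.size, dtype=bool)
    run_start = None
    for i, flag in enumerate(np.append(exceeds, False)):   # sentinel closes a trailing run
        if flag and run_start is None:
            run_start = i
        elif not flag and run_start is not None:
            if i - run_start >= min_run:                    # long enough to be a burst
                mask[run_start:i] = True
            run_start = None
    return mask

# Example: a short high-level "dash" appears only in the middle ping.
prev_p = np.full(20, 40.0); next_p = np.full(20, 41.0)
mid_p = np.full(20, 42.0); mid_p[8:12] = 70.0
print(np.nonzero(detect_interference(prev_p, mid_p, next_p))[0])    # -> [ 8  9 10 11]
```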
Once detected, the interference must be rejected in such a way that is not conspicuous on the display. This is achieved by replacing the data with samples from adjacent acquisitions, which is successful if the signals in the area of interference do not change significantly between acquisitions. This is the case for data that lies in mid water, or for larger targets such as the bottom return. The replacement algorithm, which exchanges the samples in the acquisition corresponding to the interference with samples from adjacent acquisitions, is illustrated in
Steps 710, 715, and 720 test for whether a given replacement alternative is being asserted. Should step 710 be true, the current sample is replaced with the minimum of the same time samples in the adjacent buffers in a step 725. Similarly, if step 715 is true, the current sample is replaced with an average of the same time samples in the adjacent buffers in a step 730. Finally, if step 720 is true, the current sample is replaced with a mean of the same time samples in the adjacent buffers in a step 735. A step 740 tests for whether all samples have been checked. If there are remaining samples, the replacement algorithm continues by repeating step 705.
The select min method performed in step 725 has a bias towards lower levels and works well with areas in the acquisitions corresponding to background noise. Conversely, the select average method performed in step 730 tends to smooth out the data so it works better in situations with gradual changes in the data, such as fish-like targets and large singular objects. The select mean method performed in step 735 (mean in this method is defined as halfway between the maximum and minimum values) will have less smoothing than the average method and is better suited to fast changing environments.
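The three replacement choices may be sketched as follows. With only two adjacent buffers the average and the min/max midpoint coincide, so the sketch accepts an arbitrary list of adjacent buffers (an assumption) to keep the three methods distinct.

```python
import numpy as np

def replace_interference(neighbors, mid_ping, mask, method="min"):
    """Replace masked samples of mid_ping using the same time samples in adjacent buffers."""
    stack = np.vstack([np.asarray(n, dtype=float) for n in neighbors])
    out = np.asarray(mid_ping, dtype=float).copy()
    if method == "min":          # biased low: suits background-noise regions
        repl = stack.min(axis=0)
    elif method == "average":    # smoothing: suits gradually changing targets
        repl = stack.mean(axis=0)
    elif method == "mean":       # halfway between the minimum and maximum values
        repl = 0.5 * (stack.min(axis=0) + stack.max(axis=0))
    else:
        raise ValueError(f"unknown method: {method}")
    out[mask] = repl[mask]
    return out

prev_p = np.array([40.0, 40, 40, 40, 40, 40])
next_p = np.array([42.0, 42, 42, 42, 42, 42])
mid_p = np.array([41.0, 41, 75, 75, 41, 41])      # samples 2-3 flagged as interference
mask = np.zeros(6, dtype=bool); mask[2:4] = True
print(replace_interference([prev_p, next_p], mid_p, mask, method="average"))
```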
This algorithm is aimed at identifying fish-like targets, which appear as arches on the display as can be seen in the zoomed data in
The process of detection and discrimination against non-fish targets can be achieved by statistical comparison between adjacent acquisitions to identify the characteristic trends as described above. Therefore the recognition of signals as targets is based on the persistency over a number of acquisitions, length of time present in a particular acquisition and trend in time between adjacent acquisitions. The algorithm process is as follows and is shown in the flow chart in
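Because the flow chart is shown in a figure, the following sketch only illustrates the stated criteria (duration within a ping, persistence across pings, and a bounded depth trend between pings); the thresholds are arbitrary examples and the sketch is not the disclosed algorithm.

```python
import numpy as np

def run_centers(ping, level_db=55.0, min_len=2):
    """Centers of runs of at least min_len samples above level_db (duration criterion)."""
    above = np.append(np.asarray(ping) > level_db, False)
    centers, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                centers.append((start + i - 1) / 2.0)
            start = None
    return centers

def fish_like(track, min_pings=4, max_step=3.0):
    """Persistence across pings with a smooth, bounded depth trend (arch-like)."""
    return bool(len(track) >= min_pings and np.all(np.abs(np.diff(track)) <= max_step))

ping = [40, 41, 60, 62, 61, 42, 40, 40]                    # one acquisition, levels in dB (assumed)
print(run_centers(ping))                                   # -> [3.0]: one candidate near sample 3
print(fish_like([31.0, 30.0, 29.5, 29.5, 30.0, 31.0]))     # candidate depths over six pings -> True
```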
There are numerous sources of noise and clutter in sonar systems, which manifest as false targets and speckling of the sonar display. The electrical noise will be a combination of general background and circuit noise along with pickup from boat engines. Clutter will be from multiple reflections within the water such as from particulates or bubbles or from surface and bottom boundaries. The net result reduces the ability to discern desirable targets such as fish and structure.
A considerable amount of noise rejection will be achieved through the front end echo detection process, but it is desirable to further enhance the sonar display to reduce unwanted signals. Standard techniques use filtering methods which invariably trade target detail and clarity against noise and clutter levels. Although these methods are retained within the noise rejection algorithm, a much better result can be achieved by first applying the algorithms described herein, which identify desirable features and thus make it possible to enhance targets and deemphasize noise.
A filtering technique employed after applying the detection algorithms is a novel method of noise and clutter reduction that is conducted using a statistical analysis of adjacent samples within an acquisition and with adjacent samples from subsequent acquisitions. This algorithm combines the detection of noise and clutter and is shown in
Should a sample be deemed as noise or clutter, it is replaced according to the algorithm shown in
Bait schools are schools of fish that attract larger predator fish such as marlin or tuna. The fisherman will position the boat over a school in order to catch the predator fish, which results in the school remaining stationary below the boat and within the sonar beam. For a tightly packed school, which invariably occurs when predators are patrolling, the echo returns for the individual fish merge creating the impression of a large solid target. With this sitting in the beam at a fairly constant depth, the sounder module will misinterpret the school as bottom and report an incorrect depth. The rejection algorithm must avoid this misinterpretation.
The bait school rejection algorithm relies on statistical analysis of the acquired detection data over a number of pings and is included in the bottom detection algorithm as described further below. The main requirement is that the bait school is more ‘broken up’ than the bottom return, which is the case for widely spaced fish or when employing pulse compression techniques, which provides better resolution and can distinguish between individual fish within the school. The statistics in this case will favor the true bottom return over the fish school.
Thermoclines are distinct layers within the water that have temperatures that change more rapidly with depth than water above and below. They are generally created by flow and eddies or by the action of the sun on the upper layers. The acoustic properties of the water are affected by temperature so when layers are created like this the acoustic signal can be reflected at the boundary. As this layer remains with the motion of the boat, the sounder module may misinterpret the thermocline layer as a bottom. In addition, the boundary layers tend to attract zooplankton which again acts as reflectors and diffusers, thus reducing the acoustic signal reaching the bottom and increasing the reflected signal from the boundary.
An automated algorithm to reject thermoclines may analyze the apparent structure of the layer to determine how likely it is that the echoes are due to thermoclines rather than true bottom returns. This can be enhanced by considering later returns in the acquisition, which will either be second echoes (multipath from surface and bottom), further thermoclines, or the actual bottom return. Sometimes this is not physically possible due to the strength of the return from the layer(s) and the corresponding reduction of strength of the bottom echoes, so it is necessary to provide some form of user intervention to identify the true bottom return.
The automated thermocline rejection algorithm relies on statistical analysis of the acquired detection data over a number of pings and is included in the bottom detection algorithm as described further below. The main requirement is that the thermocline is more ‘broken up’ than the bottom return, which is the case for weak thermocline layers or when employing pulse compression techniques, which provides better resolution and can distinguish between multiple returns. The statistics in this case will favor the true bottom return over the thermocline.
A manual override option allows the user to select a section of the depth that can be eliminated from the bottom detection algorithm, thus removing the possibility of incorrect detection. For safety reasons the algorithm continues to analyze the data in this section and provides the shallowest depth of potential bottom along with the bottom return depth considered to be the true bottom. The sonar system depth warnings are triggered by the shallowest depth, thus avoiding the potential of grounding the boat in the case where the user has incorrectly eliminated the true bottom return from the depth detection algorithm.
The bottom detection algorithm should distinguish between returns that are from the true bottom and those that exhibit similar characteristics, such as thermoclines or bait schools. The tracking algorithm must maintain lock onto the bottom return once acquired and support fast changing gradients. The bottom detection algorithm searches the acquired detection data for each ping to find returns that match a statistical representation of bottom-like returns. The persistence over a set number of samples of a level that exceeds a set threshold is considered a potential bottom. If the potential bottom persists over a number of pings within a set tolerance of samples, the potential bottom confidence is promoted; otherwise the confidence is demoted. Lists of potential bottoms are maintained in a hierarchical manner such that the bottom with the most confidence is stored at the top. Once the probability of the topmost element has reached a threshold, it is designated as the bottom depth, which is then published onto the system.
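The confidence-ranked candidate list may be sketched as follows. The promotion and demotion amounts, the tolerance, and the lock threshold are assumed example values, and the statistical test that produces the per-ping potential bottoms is not shown.

```python
def update_bottom_candidates(candidates, detected_depths, tolerance=2.0,
                             promote=1, demote=1, lock_confidence=5):
    """candidates: dict {depth: confidence}. detected_depths: potential bottoms for this ping."""
    matched = set()
    for depth in list(candidates):
        # Promote a candidate if a detection this ping falls within tolerance of it.
        hits = [d for d in detected_depths if abs(d - depth) <= tolerance]
        if hits:
            candidates[depth] += promote
            matched.update(hits)
        else:
            candidates[depth] -= demote
            if candidates[depth] <= 0:
                del candidates[depth]
    for d in detected_depths:
        if d not in matched:
            candidates[d] = 1                    # start tracking a new potential bottom
    if not candidates:
        return None
    best_depth, best_conf = max(candidates.items(), key=lambda kv: kv[1])
    return best_depth if best_conf >= lock_confidence else None   # publish only once confident

# Example: a steady return near 120 m wins over an intermittent thermocline-like return.
cands = {}
for depths in ([60.0, 120.0], [120.5], [119.5], [60.5, 120.0], [120.2], [120.1]):
    locked = update_bottom_candidates(cands, depths)
print("published bottom depth:", locked)
```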
Additional enhancements include the provision of a low pass filter applied to the acquisition prior to bottom detection to reject any fast changing phenomena such as bait schools or clutter; the bottom return generally persists over a number of samples so will not get rejected. This can be improved further by exploiting the fine range resolution of pulse compression processing, which effectively increases the rate of change of semi-transparent layers such as thermoclines and low-density bait schools, thereby increasing the effectiveness of the filter. Another enhancement involves statistical analysis of the change of bottom depth to track the bottom over varied gradients, where the trajectory of the bottom is considered so that fast changing depths are maintained while sudden changes of depth, due to incorrect locking onto a thermocline or bait school, are rejected. Yet another enhancement involves the use of chart data when available to limit the bottom detect algorithm to a set tolerance around the chart depth to reduce misinterpreting thermoclines or bait schools as the bottom. This improves the stability and accuracy of the bottom lock.
Bottom haze describes a mechanism whereby the noise level increases with depth when approaching the bottom. This can be seen in window 401 in
The color gain and noise floor curves are processes applied to the data before display. The noise floor curve is generated from the statistics of the data to determine the level of noise and is subtracted from the data to reduce the displayed noise. The color gain curve is generated from the statistics of the data to determine the maximum levels, which are then applied to autoscale the display.
The haze rejection algorithm manipulates these curves to reduce the interference without detriment to the target definition. The principle is to take the standard curves and lift the noise floor by applying a gradient based on the distance from the bottom depth and compensating for this change by adjusting the color gain to maintain the high signal levels. The noise floor thus increasingly removes more of the lower level signals, removing the haze, and the color gain holds the targets at the same intensity. This can be seen by comparing window 401 to window 406 and also window 410 to window 415 in
The haze rejection algorithm applies a linear (in dB) gradient to the noise floor starting at zero adjustment at the surface with a gradually increasing level toward the bottom; this can be linked to the TVG curve shape. If a bottom detection is available and pulse compression is being used, then an additional adjustment is applied to the data immediately above the bottom. This takes the form of a curve (in dB) with zero additional adjustment at a distance from the bottom equal to twice the transmitted burst length, increasing to a maximum at the bottom depth.
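The noise floor manipulation may be sketched as follows. The maximum gradient, the size of the additional near-bottom adjustment, and its quadratic shape are assumed example values, and the compensating color gain adjustment is omitted for brevity.

```python
import numpy as np

def haze_noise_floor_adjust(depths_m, bottom_depth_m, burst_len_m,
                            max_gradient_db=12.0, max_extra_db=6.0, pulse_compression=True):
    """Additional dB added to the noise floor at each depth sample (assumed parameterization)."""
    d = np.asarray(depths_m, dtype=float)
    if bottom_depth_m is None:                       # no bottom lock: leave the curve alone
        return np.zeros_like(d)
    # Linear (in dB) gradient: zero at the surface, max_gradient_db at the bottom depth.
    adjust = max_gradient_db * np.clip(d / bottom_depth_m, 0.0, 1.0)
    if pulse_compression:
        # Extra curve starting two transmitted burst lengths above the bottom.
        start = bottom_depth_m - 2.0 * burst_len_m
        frac = np.clip((d - start) / (2.0 * burst_len_m), 0.0, 1.0)
        adjust += max_extra_db * frac ** 2           # assumed curve shape
    return adjust

depths = np.linspace(0.0, 100.0, 6)                  # 0, 20, ... 100 m; bottom at 100 m (assumed)
print(haze_noise_floor_adjust(depths, bottom_depth_m=100.0, burst_len_m=7.5))
```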
In many bottom sounder sonar systems the algorithm to automatically detect the bottom must start its search at some arbitrary depth and gradually home in on the actual depth, changing the amount of data required dependent on the depth. This process can take some time, particularly as the maximum depth capability of modern sounders has increased to depths in excess of 10,000 ft, where return times (and hence acquisition times) can be in the region of 4 seconds. The starting point is always a compromise between finding a shallow bottom quickly, where the starting point would need to be shallow, while pinging deep enough to ensure that a deeper bottom depth is insonified so that thermoclines are not mistaken as the bottom. The default is usually to err on the side of caution and ping to deep water first, which will increase the time to achieve a lock.
The fast bottom detect algorithm is a combination of successive approximation fast pings along with the ability to access network information such as chart bottom depth. If the chart depth is available then the algorithm will start at a depth slightly greater and apply the normal bottom detection algorithm. If a previous known depth is available within a set time period, then this will be the starting point. Alternatively the algorithm will conduct a successive approximation approach starting at an arbitrary depth until bottom lock is obtained. The user has the ability to override the process by manually selecting the depth range, which will force the algorithm to consider the selected depth to analyze for potential bottoms. This is particularly useful if the bottom detection has inadvertently selected an incorrect shallow depth as the bottom and has set the range to suit this depth, in which case the real bottom will not be insonified and therefore will not be discovered.
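The starting-depth selection may be sketched as follows, following the priority order given above (manual override, chart depth, recent known depth, then successive approximation). The margin, the recency window, and the approximation schedule are assumed example values.

```python
import time

def fast_bottom_start_depth(chart_depth_m=None, last_depth_m=None, last_depth_time=None,
                            user_depth_m=None, max_age_s=60.0, margin=1.2,
                            approx_steps=(50.0, 200.0, 1000.0, 3000.0)):
    """Choose the depth at which the normal bottom detection algorithm begins searching."""
    if user_depth_m is not None:                      # manual override always wins
        return user_depth_m
    if chart_depth_m is not None:                     # start slightly deeper than the chart depth
        return margin * chart_depth_m
    if last_depth_m is not None and last_depth_time is not None \
            and (time.time() - last_depth_time) <= max_age_s:
        return margin * last_depth_m                  # a recent bottom lock is a good starting point
    return approx_steps[0]                            # else begin successive approximation shallow

print(fast_bottom_start_depth(chart_depth_m=85.0))    # -> 102.0 (chart depth plus margin)
```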
As those of some skill in this art will by now appreciate and depending on the particular application at hand, many modifications, substitutions and variations can be made in and to the materials, apparatus, configurations and methods of use of the devices of the present disclosure without departing from the spirit and scope thereof. In light of this, the scope of the present disclosure should not be limited to that of the particular embodiments illustrated and described herein, as they are merely by way of some examples thereof, but rather, should be fully commensurate with that of the claims appended hereafter and their functional equivalents.
This patent application is a continuation of International Patent Application No. PCT/US2012/062315 filed Oct. 26, 2012, which claims priority to and the benefit of U.S. Provisional Patent Application No. 61/551,875 filed Oct. 26, 2011, U.S. Provisional Patent Application No. 61/551,890 filed Oct. 26, 2011, and U.S. Provisional Patent Application No. 61/607,435 filed Mar. 6, 2012. The contents of all of the above-noted applications are hereby incorporated by reference in their entirety.