This document pertains generally, but not by way of limitation, to non-destructive evaluation, and more particularly, to apparatus and techniques for providing acoustic inspection, such as using a full matrix capture (FMC) acquisition or other matrix acquisition approach where acquired A-scan data is compressed.
Various inspection techniques can be used to image or otherwise analyze structures without damaging such structures. For example, x-ray inspection, eddy current inspection, or acoustic (e.g., ultrasonic) inspection can be used to obtain data for imaging of features on or within a test specimen. For example, acoustic imaging can be performed using an array of ultrasound transducer elements, such as to image a region of interest within a test specimen. Different imaging modes can be used to present received acoustic signals that have been scattered or reflected by structures on or within the test specimen.
Acoustic testing, such as ultrasound-based inspection, can include focusing or beam-forming techniques to aid in construction of data plots or images representing a region of interest within the test specimen. Use of an array of ultrasound transducer elements can include use of a phased-array beamforming approach and can be referred to as Phased Array Ultrasound Testing (PAUT). For example, a delay-and-sum beamforming technique can be used such as including coherently summing time-domain representations of received acoustic signals from respective transducer elements or apertures. In another approach, a Total Focusing Method (TFM) technique can be used where one or more elements in an array (or apertures defined by such elements) are used to transmit an acoustic pulse and other elements are used to receive scattered or reflected acoustic energy, and a matrix is constructed of time-series (e.g., A-Scan) representations corresponding to a sequence of transmit-receive cycles in which the transmissions are occurring from different elements (or corresponding apertures) in the array. Such a TFM approach where A-scan data is obtained for each element in an array (or each defined aperture) can be referred to as a “full matrix capture” (FMC) technique.
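By way of a non-limiting illustration of TFM processing applied to an FMC data set, the following sketch forms an image by summing, for each image point, the A-scan samples corresponding to the combined transmit and receive times of flight. The array geometry, acoustic velocity, sample rate, and single-medium direct-path propagation assumed below are illustrative assumptions rather than requirements of the approaches described herein.

```python
# Minimal TFM sketch over an FMC data set fmc[tx, rx, sample]: for each image
# point, sum A-scan samples at the transmit-plus-receive time of flight.
# Geometry, velocity, and sample rate are assumed values for illustration.
import numpy as np

def tfm_image(fmc, element_x, grid_x, grid_z, velocity=5900.0, fs=100e6):
    """fmc: (n_elem, n_elem, n_samples) FMC A-scan matrix.
    element_x: x-positions (metres) of the array elements, located at z = 0.
    grid_x, grid_z: 1-D image grid coordinates (metres)."""
    n_elem, _, n_samples = fmc.shape
    rx_idx = np.arange(n_elem)
    image = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # One-way time of flight from each element to the image point.
            tof = np.sqrt((element_x - x) ** 2 + z ** 2) / velocity
            total = 0.0
            for tx in range(n_elem):
                # Sample index = (transmit + receive) time of flight * sample rate.
                idx = np.round((tof[tx] + tof) * fs).astype(int)
                valid = idx < n_samples
                total += fmc[tx, rx_idx[valid], idx[valid]].sum()
            image[iz, ix] = np.abs(total)
    return image
```

In this delay-and-sum sense, each image point coherently accumulates a contribution from every transmit-receive element pair represented in the full matrix.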
Capturing time-series A-scan data either for PAUT or TFM applications can involve generating considerable volumes of data. For example, digitization of A-scan time-series data can be performed locally by a test instrument having an analog front-end and analog-to-digital converter physically cabled to a transducer probe assembly. A corresponding digitized amplitude resolution (e.g., 8-bit or 12-bit resolution) and time resolution (e.g., corresponding to a sample rate in excess of tens or hundreds of megasamples per second) can result in gigabits of time-series data for each received A-scan record for later processing, particularly if such A-scan records are stored as full-bandwidth and full-resolution analytic representations.
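As a rough, non-limiting illustration of the data volumes involved, the following back-of-envelope sketch uses assumed parameter values (element count, sample rate, record duration, and stored sample width) that are not specified by this document.

```python
# Illustrative (assumed) parameters for a single FMC acquisition "frame."
elements = 64                     # 64 x 64 transmit/receive pairs (FMC)
a_scans = elements * elements
sample_rate = 100e6               # 100 megasamples per second
record_duration = 50e-6           # 50 microsecond A-scan record
bits_per_sample = 16              # e.g., 12-bit samples stored in 16-bit words

samples_per_a_scan = int(sample_rate * record_duration)         # 5000
bits_per_frame = a_scans * samples_per_a_scan * bits_per_sample
print(bits_per_frame / 1e9, "gigabits per FMC frame")           # ~0.33 Gbit
# An analytic (real plus imaginary) representation roughly doubles this, and
# repeated acquisitions along a scan axis multiply it further.
```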
Accordingly, the present inventors have recognized, among other things, that a technique can be used to reduce the size of a data set associated with storage or transmission of acoustic imaging data by selectively retaining information within a bandwidth of an acoustic probe signal chain and discarding data outside of such bandwidth. The present inventors have also recognized that use of a reduced sample rate can be sufficient to convey such information, such as by down-sampling an originally-acquired time-series. Acoustic inspection productivity can be enhanced using techniques described herein to perform such selective reduction of acquired acoustic data volume, such as data corresponding to elementary A-scan or other time-series representations of received acoustic echo data. In the approach described herein, time-series data can be decimated for efficient storage or transmission. A representation of the time-series data can be reconstructed, such as by using a Fourier transform-based up-sampling technique or a convolutional interpolation filter, as illustrative examples. The techniques described herein can be used for a variety of different acoustic measurement techniques that involve acquisition of time-series data (e.g., A-Scan data). Such techniques include Full Matrix Capture (FMC) applications, plane wave imaging (PWI), or PAUT, as illustrative examples.
In an example, a machine-implemented method for processing compressed acoustic inspection data can include receiving down-sampled digital representations of acquired acoustic echo data corresponding to respective received acoustic echo signals, the respective received acoustic echo signals corresponding to transducer apertures of a multi-element electroacoustic transducer array used for an acoustic inspection operation, up-sampling the down-sampled digital representations using at least one of an interpolation technique or a frequency-domain up-sampling technique, to generate up-sampled time-series representations of respective acoustic echo signals, and processing the up-sampled time-series representations of the respective acoustic echo signals to generate a visual representation of a result of the acoustic inspection operation. Generally, the down-sampled digital representations comprise a lesser volume of data than the up-sampled representations.
In an example, a system for processing compressed acoustic inspection data can include a first processing facility comprising at least one first processor circuit and at least one first memory circuit, along with a first communication circuit communicatively coupled with the first processing facility. The at least one first memory circuit comprises instructions that, when executed by the at least one first processor circuit, cause the system to receive, using the first communication circuit, down-sampled digital representations of acquired acoustic echo data corresponding to respective received acoustic echo signals, the respective received acoustic echo signals corresponding to transducer apertures of a multi-element electroacoustic transducer array used for an acoustic inspection operation, up-sample the down-sampled digital representations using at least one of an interpolation technique or a frequency-domain up-sampling technique, to generate up-sampled time-series representations of respective acoustic echo signals, and process the up-sampled time-series representations of the respective acoustic echo signals to generate a visual representation of a result of the acoustic inspection operation. Generally, as in the example above, the down-sampled digital representations comprise a lesser volume of data than the up-sampled representations.
In an example, the system can include a second processing facility comprising at least one second processor circuit and at least one second memory circuit, along with a second communication circuit communicatively coupled with the second processing facility and communicatively coupled with the first communication circuit. The at least one second memory circuit comprises instructions that, when executed by the at least one second processor circuit, cause the system to digitize acoustic echo data acquired by the multi-element electroacoustic transducer array using an analog front-end circuit coupled with the multi-element electroacoustic transducer array, decimate the digitized acoustic echo data to establish the down-sampled digital representations of acquired acoustic echo data, and transmit, using the second communication circuit, the down-sampled digital representations to the first communication circuit.
In an example, a system for processing compressed acoustic inspection data can include a means for digitizing acoustic echo data acquired by a multi-element electroacoustic transducer array, a means for decimating the digitized acoustic echo data to establish down-sampled digital representations of acquired acoustic echo data, a means for receiving the down-sampled digital representations of acquired acoustic echo data corresponding to respective received acoustic echo signals, the respective received acoustic echo signals corresponding to transducer apertures of the multi-element electroacoustic transducer array, a means for up-sampling the down-sampled digital representations using at least one of an interpolation technique or a frequency-domain up-sampling technique, to generate up-sampled time-series representations of respective acoustic echo signals, and a means for processing the up-sampled time-series representations of the respective acoustic echo signals to generate a visual representation of a result of an acoustic inspection operation, where the down-sampled digital representations comprise a lesser volume of data than the up-sampled representations.
This summary is intended to provide an overview of subject matter of the present patent application. It is not intended to provide an exclusive or exhaustive explanation of the invention. The detailed description is included to provide further information about the present patent application.
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
Acoustic inspection productivity can be enhanced using techniques to perform compression of acquired acoustic data, such as data corresponding to elementary A-scan or other time-series representations of received acoustic echo data, as mentioned above. In various approaches as described herein, time-series data can be decimated for efficient storage or transmission. The decimation process can be performed in a manner that preserves a frequency spectrum (and related information) of interest without loss within a specified probe signal chain bandwidth but can be used to discard information extending beyond such a specified bandwidth. The present inventors have recognized, among other things, that the representation of the time-series data can be reconstructed from the decimated data set, such as by using a frequency domain technique (e.g., a Fourier transform-based up-sampling technique) or a convolutional interpolation filter, as illustrative examples. Use of such a frequency domain technique or convolutional interpolation filter can allow a reconstructed time-series representation to fully represent the information within the specified probe bandwidth from the originally acquired time-series. The techniques described herein can be used for a variety of different acoustic measurement techniques that involve acquisition of time-series data (e.g., A-Scan data). Transmission of compressed acoustic echo data can allow greater inspection productivity, such as by facilitating processing of such acoustic echo data at a location different from the acquisition or probe location, or even by supporting provisioning of processing and related imaging as a service using a remote server or cloud-based approach.
A modular probe assembly 150 configuration can be used, such as to allow a test instrument 140 to be used with various different probe assemblies 150. Generally, the transducer array 152 includes piezoelectric transducers, such as can be acoustically coupled to a target 158 (e.g., a test specimen or “object-under-test”) through a coupling medium 156. The coupling medium can include a fluid or gel or a solid membrane (e.g., an elastomer or other polymer material), or a combination of fluid, gel, or solid structures. For example, an acoustic transducer assembly can include a transducer array coupled to a wedge structure comprising a rigid thermoset polymer having known acoustic propagation characteristics (for example, Rexolite® available from C-Lec Plastics Inc.), and water can be injected between the wedge and the structure under test as a coupling medium 156 during testing, or testing can be conducted with an interface between the probe assembly 150 and the target 158 otherwise immersed in a coupling medium.
The test instrument 140 can include digital and analog circuitry, such as a front-end circuit 122 including one or more transmit signal chains, receive signal chains, or switching circuitry (e.g., transmit/receive switching circuitry). The transmit signal chain can include amplifier and filter circuitry, such as to provide transmit pulses for delivery through an interconnect 130 to a probe assembly 150 for insonification of the target 158, such as to image or otherwise detect a flaw 160 on or within the target 158 structure by receiving scattered or reflected acoustic energy elicited in response to the insonification.
The receive signal chain of the front-end circuit 122 can include one or more filters or amplifier circuits, along with an analog-to-digital conversion facility, such as to digitize echo signals received using the probe assembly 150. Digitization can be performed coherently, such as to provide multiple channels of digitized data aligned or referenced to each other in time or phase. The front-end circuit can be coupled to and controlled by one or more processor circuits, such as a processor circuit 102 included as a portion of the test instrument 140. The processor circuit can be coupled to a memory circuit, such as to execute instructions that cause the test instrument 140 to perform one or more of acoustic transmission, acoustic acquisition, processing, or storage of data relating to an acoustic inspection, or to otherwise perform techniques as shown and described herein. The test instrument 140 can be communicatively coupled to other portions of the system 100, such as using a wired or wireless communication interface 120.
For example, performance of one or more techniques as shown and described herein can be accomplished on-board the test instrument 140 or using other processing or storage facilities such as using a compute facility 108 or a general-purpose computing device such as a laptop 132, tablet, smart-phone, desktop computer, or the like. For example, processing tasks that would be undesirably slow if performed on-board the test instrument 140 or beyond the capabilities of the test instrument 140 can be performed remotely (e.g., on a separate system), such as in response to a request from the test instrument 140. Similarly, storage of imaging data or intermediate data such as A-scan matrices of time-series data or other representations of such data, for example, can be accomplished using remote facilities communicatively coupled to the test instrument 140. The test instrument can include a display 110, such as for presentation of configuration information or results, and an input device 112 such as including one or more of a keyboard, trackball, function keys or soft keys, mouse-interface, touch-screen, stylus, or the like, for receiving operator commands, configuration information, or responses to queries.
Such acquisition can include some degree of processing before storage or transmission. For example, phased-array ultrasound testing (PAUT), a virtual source aperture (VSA) technique, or plane wave imaging (PWI) can be performed by aggregating received echo signals corresponding to a specified group of transmission events. Generally, FMC-based acquisition, PAUT, PWI, VSA, or a sparse matrix capture (SMC) technique can be performed, and the down-sampling and up-sampling techniques described herein are generally applicable for processing of time-series data acquired using any of the various modalities at 214. A specified (e.g., “standardized”) acquisition matrix format can be established as mentioned at 216, such as having dimensionality determined by the acquisition modality.
Acquired acoustic echo time-series data can be stored as a compressed representation in the acquisition matrix at 216 and transmitted to another processing facility (such as locally or remotely situated with respect to the acquisition probe), and one or more techniques can be performed at 218 to provide a visual representation for a user. Such techniques can include beamforming using PAUT or a Total Focusing Method (TFM) or using another technique. As mentioned elsewhere herein, such time-series data corresponding to acoustic echo signals can be stored, transmitted, compressed, and decompressed using real-valued signal data or using an analytic representation comprising a real-valued representation and an imaginary-valued representation (such as generated using a Hilbert transform or using other techniques as described elsewhere herein).
Generally, as mentioned above, the multi-element array 150 can include or can be electrically coupled with an analog front end, such as including an analog-to-digital converter ADC 323. The probe (or another probe in a pitch/catch scheme) can generate an acoustic pulse from a specified transmit aperture (e.g., a single transducer or a specified group of transducers), having a central acoustic frequency, “Fc,” and bandwidth, “BW.” Resulting acoustic echo signals from each transmit event can be digitized using the ADC 323 (or an array of such ADC 323 channels, such as corresponding to each element in the multi-element array). Respective acoustic echo signals (such as A-scan time-series representations) can be filtered digitally using a discrete-time filter 324, such as a low-pass or bandpass filter, and at 326, the respective acoustic echo signals can be decimated. The filter 324 can be used to suppress higher-frequency components, such as by having a cut-off frequency corresponding to the Nyquist frequency of the down-sampled sample rate, to avoid aliasing artifacts.
Decimation generally refers to dropping samples from a time-series according to a specified decimation level or ratio. For example, a 1:7 decimation ratio or decimation level of “7” implies that only one out of every seven samples will be retained from the acquired time series, and the remaining samples are dropped. Accordingly, “decimation” does not literally require a 1:10 ratio, and merely refers to down-sampling the time-series to achieve a longer sample interval, and a correspondingly lesser time-series record size in terms of data storage, assuming that the amplitude resolution remains the same. The down-sampled digital representations of the acquired acoustic echo data can be transmitted to the processing unit 308, such as for up-sampling at 334. In one approach, the down-sampled digital representations can be zero-padded in the time domain such as by inserting zero-valued samples between non-zero amplitude samples in the decimated time-series, where the zero-valued samples have a desired shorter sample interval corresponding to an up-sampling target sample rate. Zero-padding in the time-domain, without more, may result in missing peak information or other features corresponding to discarded signal components beyond the cutoff frequency of the filter 324 (e.g., aliasing artifacts).
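As a non-limiting sketch of the acquisition-side processing described above (digital filtering at 324 followed by decimation at 326), the following example assumes a 100 megasample-per-second acquisition rate, a decimation level of seven, and a simple FIR anti-aliasing filter; these parameter values are illustrative assumptions.

```python
# Sketch of anti-alias filtering followed by decimation of an A-scan record.
# Sample rates, decimation level, filter length, and pulse parameters are
# assumptions chosen for illustration only.
import numpy as np
from scipy import signal

fs_acquired = 100e6      # assumed ADC sample rate, 100 MS/s
decimation = 7           # assumed decimation level "7" (1:7 ratio)
fs_decimated = fs_acquired / decimation

def compress_a_scan(a_scan):
    """Low-pass filter below the decimated Nyquist frequency, then retain
    one out of every `decimation` samples and drop the rest."""
    # Anti-aliasing low-pass filter with cut-off near fs_decimated / 2,
    # as described for the discrete-time filter 324.
    taps = signal.firwin(numtaps=63, cutoff=0.9 * (fs_decimated / 2), fs=fs_acquired)
    filtered = signal.lfilter(taps, 1.0, a_scan)
    return filtered[::decimation]            # decimation at 326

# Example: a synthetic 5 MHz echo pulse sampled at 100 MS/s.
t = np.arange(0, 10e-6, 1 / fs_acquired)
a_scan = np.exp(-((t - 5e-6) ** 2) / (0.5e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
compressed = compress_a_scan(a_scan)
print(len(a_scan), "->", len(compressed))    # roughly 7x fewer samples
```

The retained record is roughly one seventh the size of the original, while the filter suppresses components above the new Nyquist frequency so that the retained samples still represent the information within the probe bandwidth.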
The present inventors have recognized that various approaches can be used to improve the quality of an up-sampled time-series representation. As discussed further below, the up-sampling at 334 can include use of a convolutional interpolation filter, such as a polynomial interpolation filter, or a frequency-domain based technique. As an illustration, the present inventors have recognized, among other things, that use of a polynomial interpolation filter or frequency-domain based technique can help suppress or eliminate a loss of amplitude stability that may otherwise occur due to loss of peak information in the decimation at 326.
The frequency-domain up-sampling technique can include performing a discrete Fourier transform (DFT) or computational equivalent (e.g., a Fast Fourier Transform (FFT)) on a respective down-sampled time-series representation. In the frequency domain, additional zero-valued frequency bins can be added extending beyond the Nyquist frequency of the transformed time-series data (where this Nyquist frequency corresponds to the lower, decimated sample rate). After such zero-padding in the frequency domain, an inverse transform (e.g., an iDFT or iFFT) can be performed, to provide a corresponding up-sampled time-series.
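A minimal sketch of such frequency-domain up-sampling, assuming an up-sampling factor of seven (matching the illustrative decimation level above), is shown below.

```python
# Sketch of up-sampling by zero padding in the frequency domain: transform the
# decimated record, append zero-valued bins beyond its Nyquist frequency, and
# inverse-transform at the longer record length. The factor is an assumption.
import numpy as np

def upsample_fft(x, factor=7):
    n = len(x)
    n_up = n * factor
    X = np.fft.rfft(x)                        # spectrum at the lower sample rate
    X_padded = np.zeros(n_up // 2 + 1, dtype=complex)
    X_padded[:len(X)] = X                     # added bins beyond old Nyquist stay zero
    # Scale by the up-sampling factor so time-domain amplitudes are preserved
    # through the inverse transform's 1/N normalization.
    return np.fft.irfft(X_padded, n=n_up) * factor
```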
The up-sampling at 334 can also include use of a Hilbert transform operator in the frequency domain, such as applied before zero padding in the frequency domain. In this manner, the up-sampling workflow can include transforming the down-sampled (e.g., decimated) time-series data into the frequency domain, then applying a Hilbert transform or other multiplicative operator to generate a real-valued spectrum and an imaginary-valued spectrum (or a complex-valued spectrum including real and imaginary-valued signal components corresponding to each frequency bin). The application of the Hilbert transform before zero padding can provide enhanced computational efficiency in at least two respects. First, the size (and corresponding data footprint) of the frequency domain representation of the transformed time-series data can be smaller before zero padding, and the application of a Hilbert transform (e.g., multiplicatively) is performed on fewer data values as compared to a record that is zero padded in the frequency domain. Zero padding in the frequency domain can be performed after the Hilbert transform operation is performed. Second, the real and imaginary components provided at 336A and 336B can be established contemporaneously when the zero-padded frequency domain representation is inverted to provide the up-sampled time-series representation. In this manner, an extra operation of generating the real and imaginary components at 336A and 336B is avoided.
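The following sketch illustrates this ordering: the Hilbert-transform operator is applied multiplicatively to the shorter, not-yet-padded spectrum, zero padding in the frequency domain follows, and a single inverse transform then yields the real and imaginary components contemporaneously. The record length and up-sampling factor are illustrative assumptions.

```python
# Sketch of applying a Hilbert-transform operator in the frequency domain
# before zero padding, so one inverse FFT yields both quadrature components
# of the up-sampled analytic representation. The factor is an assumption.
import numpy as np

def upsample_analytic(x, factor=7):
    n = len(x)
    X = np.fft.fft(x)
    # Hilbert-transform operator applied multiplicatively on the short,
    # not-yet-padded spectrum: keep DC (and Nyquist), double positive
    # frequencies, zero the negative frequencies.
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    X_analytic = X * h
    # Zero padding in the frequency domain after the Hilbert operation.
    n_up = n * factor
    X_padded = np.zeros(n_up, dtype=complex)
    X_padded[:n] = X_analytic
    z = np.fft.ifft(X_padded) * factor        # complex, up-sampled time series
    return z    # z.real and z.imag correspond to the components at 336A and 336B
```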
In addition, or alternatively, a time-domain technique can be used for such up-sampling and recovery of peak information at 334. For example, the down-sampled time-series representations can be zero-padded in the time domain as mentioned above, and a convolutional (e.g., digital) filter can be applied to the down-sampled time-series representations, wherein the filter time steps correspond to the sample interval of the up-sampled data (e.g., at the up-sampled, higher, sample rate). The convolutional filter can include an impulse response, h(t), defined by a symmetric piece-wise set of polynomial expressions (e.g., a symmetric polynomial interpolation filter).
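A minimal sketch of such a time-domain approach is shown below; because the specific polynomial expressions of the filter h(t) are not reproduced in this text, a well-known symmetric piece-wise cubic interpolation kernel is used as an assumed stand-in, not as the particular filter contemplated here.

```python
# Sketch of time-domain up-sampling: zero-stuff the decimated record, then
# convolve with a symmetric piece-wise polynomial kernel sampled at the
# up-sampled interval. The cubic (Keys) kernel is an assumed stand-in.
import numpy as np

def keys_cubic(t, a=-0.5):
    """Symmetric piece-wise cubic interpolation kernel (assumed stand-in)."""
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t <= 1
    m2 = (t > 1) & (t < 2)
    out[m1] = (a + 2) * t[m1] ** 3 - (a + 3) * t[m1] ** 2 + 1
    out[m2] = a * t[m2] ** 3 - 5 * a * t[m2] ** 2 + 8 * a * t[m2] - 4 * a
    return out

def upsample_convolutional(x, factor=7):
    # Zero padding in the time domain: insert (factor - 1) zero-valued samples
    # between the retained samples of the decimated record.
    stuffed = np.zeros(len(x) * factor)
    stuffed[::factor] = x
    # Kernel evaluated at filter time steps matching the up-sampled interval.
    t = np.arange(-2 * factor, 2 * factor + 1) / factor
    h = keys_cubic(t)
    return np.convolve(stuffed, h, mode="same")
```

Because the kernel equals one at zero lag and zero at integer lags, the retained samples pass through unchanged while the inserted samples are interpolated.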
In the processing unit, up-sampling can be performed as described above, such as using a frequency-domain technique or a convolutional interpolation filter.
Alternatively, or in addition, a convolutional filter (e.g., a discrete-time piece-wise polynomial interpolation filter or other filter) can be applied to the down-sampled digital representations received at 920, in the time-domain, such as applied to a zero-padded representation of the down-sampled data as discussed above. At 945, the up-sampled time-series representations can be processed (e.g., coherently summed), such as to generate a visual representation of a result of an acoustic inspection operation. Such a visual representation can include a magnitude or intensity plot associated with TFM beamforming, as illustrative examples. Other imaging modalities can be used, as discussed above.
The technique 900 can include digitizing acoustic echo data acquired by a multi-element electroacoustic transducer array at 905, decimating the digitized acoustic echo data to establish the down-sampled representations at 910, and transmission of the down-sampled digital representations at 915, such as using a communication interface (e.g., a communication circuit) such as a network interface.
Specific examples of main memory 1004 include Random Access Memory (RAM) and semiconductor memory devices, which may include storage locations in semiconductors such as registers. Specific examples of static memory 1006 include non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; RAM; or optical media such as CD-ROM and DVD-ROM disks.
The machine 1000 may further include a display device 1010, an input device 1012 (e.g., a keyboard), and a user interface (UI) navigation device 1014 (e.g., a mouse). In an example, the display device 1010, input device 1012 and UI navigation device 1014 may be a touch-screen display. The machine 1000 may include a mass storage device 1016 (e.g., drive unit), a signal generation device 1018 (e.g., a speaker), a network interface device 1020, and one or more sensors 1030, such as a global positioning system (GPS) sensor, compass, accelerometer, or some other sensor. The machine 1000 may include an output controller 1028, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
The mass storage device 1016 may include a machine readable medium 1022 on which is stored one or more sets of data structures or instructions 1024 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 1024 may also reside, completely or at least partially, within the main memory 1004, within static memory 1006, or within the hardware processor 1002 during execution thereof by the machine 1000. In an example, one or any combination of the hardware processor 1002, the main memory 1004, the static memory 1006, or the mass storage device 1016 comprises a machine readable medium.
Specific examples of machine readable media include one or more of non-volatile memory, such as semiconductor memory devices (e.g., EPROM or EEPROM) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; RAM; or optical media such as CD-ROM and DVD-ROM disks. While the machine readable medium 1022 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) configured to store the one or more instructions 1024.
An apparatus of the machine 1000 includes one or more of a hardware processor 1002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 1004 and a static memory 1006, sensors 1030, network interface device 1020, antennas 1032, a display device 1010, an input device 1012, a UI navigation device 1014, a mass storage device 1016, instructions 1024, a signal generation device 1018, or an output controller 1028. The apparatus may be configured to perform one or more of the methods or operations disclosed herein.
The term “machine readable medium” includes, for example, any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 1000 and that cause the machine 1000 to perform any one or more of the techniques of the present disclosure or causes another apparatus or system to perform any one or more of the techniques, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples include solid-state memories, optical media, or magnetic media. Specific examples of machine readable media include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; Random Access Memory (RAM); or optical media such as CD-ROM and DVD-ROM disks. In some examples, machine readable media includes non-transitory machine-readable media. In some examples, machine readable media includes machine readable media that is not a transitory propagating signal.
The instructions 1024 may be transmitted or received, for example, over a communications network 1026 using a transmission medium via the network interface device 1020 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®), IEEE 802.15.4 family of standards, a Long Term Evolution (LTE) 4G or 5G family of standards, a Universal Mobile Telecommunications System (UMTS) family of standards, peer-to-peer (P2P) networks, satellite communication networks, among others.
In an example, the network interface device 1020 includes one or more physical jacks (e.g., Ethernet, coaxial, or other interconnection) or one or more antennas to access the communications network 1026. In an example, the network interface device 1020 includes one or more antennas 1032 to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. In some examples, the network interface device 1020 wirelessly communicates using Multiple User MIMO techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 1000, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to generally as “examples.” Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
In the event of inconsistent usages between this document and any documents so incorporated by reference, the usage in this document controls.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc., are used merely as labels, and are not intended to impose numerical requirements on their objects.
Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Such instructions can be read and executed by one or more processors to enable performance of operations comprising a method, for example. The instructions are in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. Further, in an example, the code can be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
This patent application claims the benefit of priority of Lepage et al., U.S. Provisional Patent Application Ser. No. 63/216,829, titled “ACOUSTIC ACQUISITION MATRIX CAPTURE DATA COMPRESSION,” filed on Jun. 30, 2021 (Attorney Docket No. 6409.210PRV), which is hereby incorporated by reference herein in its entirety.