The present description relates generally to digital audio coding.
Digital audio encoding often includes transforming frames of time-domain audio samples into a block of frequency-domain samples, and then quantizing the frequency domain samples.
In frequency domain coding, transients often result in perceptible quantization noise due to lack of temporal masking. For example, a percussive sound followed by silence, or silence followed by the onset of a voice, results in transients that frequency domain coding does not code well. When frequency modeling is applied to such transients in bandwidth-constrained coding applications, frequency models often move signal energy to portions of an audio signal that should be silent, which a human listener can perceive as distortion. These artifacts often are characterized as “pre-echo” artifacts.
To mitigate such artifacts, two techniques are popular in audio coding. First, an audio coder that performs its frequency transforms on frames of audio content may employ shorter transform windows when transients occur than when transients are not present. Second, an audio coder may employ temporal noise shaping (TNS). Both techniques, however, increase the number of bits used to code audio content, which may make them inapplicable for bandwidth-constrained coding applications.
Certain features of the present disclosure are set forth in the appended claims. However, for the purpose of explanation, several implementations of the present disclosure are set forth in the following figures.
The detailed description set forth below is intended as a description of various configurations of the present disclosure and is not intended to represent the only configurations in which the present disclosure can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the present disclosure. However, the present disclosure is not limited to the specific details set forth herein and can be practiced using one or more other implementations. In one or more implementations, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the present disclosure.
Improved techniques for coding an audio signal with a transient audio sound include parsing a frame of predetermined length of audio samples into a series of windows of a smaller size, and transforming the windows of time-domain samples into a series of windows of frequency-domain samples. In a first aspect, the frequency-domain samples may be organized according to an alignment pattern, and the frequency domain samples may be coded with respect to an envelope of the organized frequency-domain samples. In a second aspect, coding of the frequency-domain samples may include vector quantization of vectors formed of frequency-domain samples selected from across the frame.
In some implementations of improved audio transient coding, a frame of audio samples may be coded with respect to an envelope of the frequency-domain samples arranged according to an alignment pattern that traverses multiple windows of the frame. The alignment pattern may include placing a lowest-frequency coefficient of one window adjacent to a lowest-frequency coefficient of its neighboring window and also placing a highest-frequency coefficient of another window adjacent to a highest-frequency coefficient of its neighboring window. A first version of the alignment pattern may include sequentially concatenating the windows of frequency coefficients, where the frequency coefficients are ordered, such as by sorting, within each window according to their corresponding frequency, and a direction of the ordering reverses between neighboring windows in the concatenation. For example, a first window may sort frequencies from low to high, the next window may sort from high to low, then low to high, and so on with alternating sort order. A second version of the alignment pattern may include sorting the frequency coefficients by frequency across an entire frame. For example, the lowest-frequency coefficient from all windows in the frame may be grouped and followed by the second-lowest-frequency coefficient from all windows. In an aspect, the first version of the alignment pattern may be used for frames with a strong transient, while the second version of the alignment pattern may be used for frames with a weaker transient. In another aspect, estimation of an envelope of frequency coefficients organized according to such alignment patterns may be improved, such as by modeling the envelope with a linear prediction.
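For illustration, the two alignment patterns described above may be sketched as follows, assuming equal-length windows whose coefficients are already sorted low to high; the function names are hypothetical and not part of the disclosure:

```python
def alignment_pattern_1(windows):
    """First pattern: concatenate windows, reversing the coefficient
    order in every other window so like frequencies meet at window
    boundaries."""
    out = []
    for i, w in enumerate(windows):
        out.extend(w if i % 2 == 0 else reversed(w))
    return out

def alignment_pattern_2(windows):
    """Second pattern: sort across the frame by grouping the k-th
    (lowest-first) coefficient of every window together."""
    return [w[k] for k in range(len(windows[0])) for w in windows]
```

With two three-coefficient windows, the first pattern yields a sequence whose boundary neighbors share like frequencies, and the second interleaves the windows bin by bin.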
In other implementations of improved audio transient coding, the frequency coefficients may be coded with vector quantization, where the vectors are formed by selecting frequency coefficients from scattered locations across the frame. In a first example, a vector may be formed of non-neighboring or disjoint frequencies of a single window. In a second example, a vector may be formed of frequencies from a plurality of different windows in a frame. The formed vectors may be quantized by selecting an entry in a codebook for each vector that minimizes a measure of distortion, and in some aspects, the distortion may be weighted based on human-perceptual weighting of the frequencies in the vector. In an aspect, the vector quantization may be conjugate vector quantization according to a conjugate vector codebook.
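A minimal sketch of the two vector-formation strategies, assuming coefficients held in plain Python lists (the function names are illustrative only, not from the disclosure):

```python
def vectors_across_windows(windows):
    """One vector per frequency bin: element j of vector k is the
    k-th coefficient of window j, gathering across the frame."""
    return [[w[k] for w in windows] for k in range(len(windows[0]))]

def vectors_within_window(window, dim):
    """Vectors of non-neighboring (disjoint) frequencies formed by
    striding through a single window's coefficients."""
    stride = len(window) // dim
    return [window[off::stride] for off in range(stride)]
```

Either strategy spreads a transient's energy across several vectors rather than concentrating it in one.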
In an aspect, communication channel 110 may include transmitters and receivers and a transmission medium between the transmitters and receivers across which the encoded audio is communicated. In other aspects, the communications channel may include computer storage upon which the encoded audio is stored for communication between the audio encoder 102 and audio decoder 104. In some implementations, both an audio encoder and decoder will be implemented on the same side of communication channel 110, for example to enable two-way (duplex) audio communication.
The transient detector 210 may determine, from an analysis of frame content, whether the frame's content indicates presence of a transient or not. Based on the determination, the transient detector 210 may issue control commands to the other units of the system 200. In a first aspect, the transient detector 210 may issue a control signal to the transform unit 220 that determines a size of a window used by the transform unit 220 as it applies its transform to the input frame of source audio. In another aspect, the transient detector 210 may issue a control signal to the envelope processor 230 that determines an alignment pattern used by the envelope processor 230 as it processes transform coefficients output from the transform unit 220. In a further aspect, the transient detector 210 may issue a control signal to the quantizer 240 that defines windows from which the quantizer 240 extracts processed coefficients for quantization. The transient detector 210 also may output its control signals to the syntax unit 250, which may provide representations of those control signals in coded audio data that is output to an audio decoder 102 (
The transform unit 220 may process audio samples within an input frame and transform them into an alternative domain for processing. Typically, input samples represent the source audio on a time domain basis. The transform unit 220 may convert the time-domain samples of the input frame into a frequency domain representation, for example, as a set of frequency coefficients. As part of this operation, the transform unit 220 may perform a frequency analysis of the samples in the input frame and derive a frequency-based representation of those samples. For example, an overlapped or Modified Discrete Cosine Transform (“MDCT”) may be applied to the input frame, or a pseudo-quadrature mirror filter (“PQMF”) may be used. In some examples, the transform unit 220 may perform frequency-domain processing based on basis functions that are derived based on a non-uniform frequency scale, e.g., warped frequency transforms such as warped MDCT or warped DCT or a custom non-uniform frequency scale derived from a machine learning (ML) system that minimizes a multi-resolution short-term Fourier transform (STFT) loss function. The multi-resolution STFT loss function in an ML system may estimate a series of error values in a test system relative to a reference system across multiple time-frequency resolutions.
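A direct (unoptimized) form of the forward MDCT mentioned above may be sketched as follows; practical coders use windowed, lapped, FFT-based implementations, so this is illustrative only:

```python
import math

def mdct(x):
    """Direct-form MDCT: 2N time-domain samples -> N frequency
    coefficients, X[k] = sum_n x[n] * cos(pi/N * (n + 1/2 + N/2) * (k + 1/2))."""
    N = len(x) // 2
    return [
        sum(x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
            for n in range(2 * N))
        for k in range(N)
    ]
```

Note the 2:1 lapping: each 2N-sample input block produces only N coefficients, with overlap between successive blocks resolved at reconstruction time.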
The transform unit 220 may apply its transform at a window size as determined by a control signal output by the transient detector 210. When the transient detector 210 determines, for example, that the input frame does not possess a transient, the transform unit 220 may perform its transform operation across an entire frame. When the transient detector 210 determines that the input frame possesses a transient, the transform unit 220 may partition the input frame into a plurality of smaller units, called “windows” for convenience, and apply its transform separately on each window. The number of windows may be determined from a binary decision output from the transient detector 210, indicating whether a transient is determined to be present or not, which causes the transform unit 220 to perform its transform either on an entire frame or on a predetermined number of windows (say, 8 windows). Alternatively, the control signal from the transient detector 210 may identify a number of windows into which a source frame may be partitioned, such as 1, 2, 4, or 8 windows.
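The frame partitioning controlled by the transient decision may be sketched with a hypothetical helper, assuming the window count divides the frame length evenly:

```python
def partition_frame(frame, num_windows):
    """Split a frame into num_windows equal windows; num_windows = 1
    leaves the frame whole (the no-transient case)."""
    if len(frame) % num_windows:
        raise ValueError("frame length must divide evenly into windows")
    w = len(frame) // num_windows
    return [frame[i * w:(i + 1) * w] for i in range(num_windows)]
```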
The envelope processor 230 may process transform domain coefficients from the transform unit 220 to decorrelate them. Oftentimes, envelope processing involves processing to normalize transform domain coefficient values and to reduce (or eliminate) structure that may be present in the transform domain coefficients that are input to the envelope processor 230. The envelope processor 230 may output the processed coefficients to the quantizer 240 and the envelope representation data to the syntax unit 250. The envelope representation data may identify envelope processing parameter(s) that are applied by the envelope processor 230, which an audio decoder 102 (
Operations of the envelope processor 230 may be controlled at least in part by a control signal output from the transient detector 210. When the transient detector 210 determines, for example, that the input frame does not possess a transient, the envelope processor 230 may represent the spectral envelope according to an alignment pattern that extends across the entire frame. When the transient detector 210 determines, however, that the input frame possesses a transient, the envelope processor 230 may represent the spectral envelope according to an alignment pattern that arranges transform coefficients from each window in an efficient representation.
The quantizer 240, as its name implies, may apply quantization operations to normalized coefficients output from the envelope processor 230. The quantizer 240 may include, for example, a scalar quantizer, and/or may include a vector quantizer operating according to a vector quantization codebook. In the case of vector quantization, vectors may be derived from the normalized coefficients received from the envelope processor 230, which may be selected on a predetermined basis, and the vectors may be normalized and then applied to one or more predetermined codebooks to identify a closest-matching codebook entry to the normalized vector. The codebook entry may be output from the quantizer 240 as a representation of the selected coefficients. The quantizer 240 may repeat the operation for a predetermined number of frame coefficients. In an aspect, the vector quantization may be conjugate vector quantization according to conjugate vector codebooks.
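The closest-matching codebook search may be sketched as a nearest-neighbor search in squared error; names are illustrative, and a real coder would normalize the vector before the search:

```python
def quantize_vector(vec, codebook):
    """Return the index of the codebook entry closest to vec in
    squared error; the index stands in for the coefficients."""
    def sq_err(entry):
        return sum((a - b) ** 2 for a, b in zip(entry, vec))
    return min(range(len(codebook)), key=lambda i: sq_err(codebook[i]))
```

Only the index is transmitted; the decoder recovers the approximation by looking up the same codebook entry.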
Operations of the quantizer 240 may be controlled at least in part by a control signal output from the transient detector 210. In one aspect, in response to a determination from the transient detector 210 that a transient is detected, the quantizer 240 may alter its selection of coefficients for formation of vectors. For example, the quantizer 240 may select coefficients for each vector according to a selection pattern that ensures coefficients will be selected from two or more windows that are generated by the transform unit 220. Alternatively, the quantizer 240 may select coefficients for each vector according to a disperse selection pattern.
The syntax unit 250 may generate a coded audio signal from the data provided to it from the transient detector 210, the envelope processor 230, and the quantizer 240. For example, the syntax unit 250 may receive data from the transient detector 210 representing a determination whether a transient is present in the source audio frame. The syntax unit 250 may receive data representing the spectral envelope derived by the envelope processor 230. The syntax unit 250 also may receive vectors generated by the quantizer 240 from the normalized coefficients. The syntax unit 250 may integrate the received data into a coded audio signal to be sent to the audio decoder 104 (
The syntax unit 310 may parse individual data elements from the coded audio provided by the audio coding system 200 (
The dequantizer 320 may invert coding operations applied by the quantizer 240 (
The envelope processor 330 may invert coding operations applied by the envelope processor 230 (
The inverse transform unit 340 may invert transform processes applied by the transform unit 220. Recovered transform coefficients received from the envelope processor 330 may be transformed from the transform domain into time-domain samples of recovered audio. For example, frequency coefficients, which may be generated by an MDCT process, may be converted from the frequency domain to the time domain. The inverse transform conversion process may operate according to window sizes as determined by a control signal from the controller 350 and, ultimately, the transient detector 210 (
In another aspect, a frame that is determined by the transient detector 210 not to contain a transient may be coded and decoded by alternate processes (not shown). Thus, frames that do not contain transients may bypass the coding and decoding elements illustrated in
The systems 200 and 300 illustrated in
In optional aspects of method 400, a transient may be detected in the frame (box 402). When a transient is not detected, the frame may be encoded with an alternate technique (box 404). In an aspect, frequency-domain coefficients of a frame may be organized by an alternating sort order of frequencies within windows (box 412), such as with the first alignment pattern described above. In another aspect, frequency domain coefficients of a frame may be organized by sorting frequencies across windows of the frame (box 414), such as with the second alignment pattern described above.
In some implementations, encoding frequency-domain coefficients with respect to an envelope (box 418) may include normalizing the coefficients based on the estimated envelope (box 410), removing residual structure from the normalized coefficients (box 422), and vector quantizing vectors of the normalized coefficients (box 424). In an aspect, the vectors may be formed from disjoint frequencies within a window of the frame (box 426) or may be formed of coefficients from across windows of the frame (box 428).
In some optional aspects, a decoder may parse indications of decoding control data from the encoded bitstream, and subsequent decoding operations may be controlled by the decoding control data. For example, a decoder may parse an indication of a transient in an audio frame (516). When a transient is indicated for a frame (518), decoding continues with box 502; otherwise, when a transient is not indicated for the frame, the frame is decoded with an alternate technique (520). In another example, a decoder may parse an indication of residual structure in a frame of frequency coefficients. A decoder may apply the residual structure to the frequency coefficients (526) prior to de-normalization in box 508.
In other optional aspects, when frequency coefficients were quantized with a vector quantizer, decoding the coefficients (504) may include decoding indices of the vectors (504), assigning the coefficients from a vector to disjoint frequencies within a window (522), and/or assigning the coefficients from a vector to frequencies of different windows across a frame (524). In another optional aspect, de-normalizing coefficients (508) may include scaling coefficients of a frame with each coefficient's corresponding envelope value (510).
The example of
In an aspect, the system 200 may vary selection of envelope alignment patterns according to strength of transient determinations made by the transient detector 210 (
In an aspect, the envelope processor 230 (
As discussed, an envelope processor 230 may perform coefficient normalization using a spectral envelope that is derived for a frame.
In an aspect, quantizer 1000 may select an index to represent a vector based on a perceptual weighting of frequencies of the coefficients forming the vector. For example, comparator 1030 may determine a distortion for a candidate vector by subtracting the element values in the candidate vector from corresponding element values in a vector to be quantized from vector assembly processor 1010. The resulting differences between vector elements may be weighted based on a perceptual value of the corresponding frequency represented by the coefficient vector elements in the vector to be quantized. The weighted differences between vector elements may then be combined as a distortion measure for the candidate vector, such as with a mean-squared-error (MSE) or mean-absolute-error metric. The distortion measures for each corresponding candidate vector may then be compared to select an index to represent the vector being quantized.
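The perceptually weighted selection described above may be sketched as follows, using a weighted squared-error metric; the function names and weights are hypothetical:

```python
def weighted_distortion(candidate, target, weights):
    """Perceptually weighted squared error between a candidate
    codebook vector and the vector being quantized."""
    return sum(w * (t - c) ** 2
               for w, t, c in zip(weights, target, candidate))

def select_index(target, codebook, weights):
    """Index of the codebook entry with the lowest weighted distortion."""
    return min(range(len(codebook)),
               key=lambda i: weighted_distortion(codebook[i], target, weights))
```

Note that the weighting can change which entry wins: a large weight on one frequency penalizes errors there, steering the search toward candidates that match the perceptually important elements.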
The quantizer 1000 may repeat its operation on a plurality of input vectors selected from frame coefficients until a predetermined number of vectors have been generated from the frame or until the frame coefficients are exhausted.
As discussed, the vector assembly processor 1010 may extract coefficients from a frame of normalized coefficients as determined by a sampling pattern. In an aspect, the sampling pattern may be provided to the quantizer 1000 from a transient detector 210 (
In a second aspect, quantizer 1000 input vectors may be formed by collecting frequency-domain coefficients from a dispersed set of frequencies within a single window such as 1110.1. In this second example, the sampling pattern may select non-neighboring frequency coefficient positions from among the coefficients within the single window 1110.1.
In operation, source audio may be provided as a time-domain signal of audio samples. Source audio typically is organized into “frames,” units of a predetermined number of samples such as 1,024 samples. When the audio is represented with a fixed sampling rate (e.g., 48 kHz or 48,000 samples/second), each frame represents the source audio's content over a predetermined temporal duration.
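As a worked example of the duration implied by these figures:

```python
frame_samples = 1024
sample_rate = 48_000               # samples/second
frame_ms = 1000.0 * frame_samples / sample_rate
# a 1,024-sample frame at 48 kHz spans about 21.33 ms of audio
```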
The transient detector 1204 may detect, from content of a frame, whether a transient sound occurs during the frame. In an aspect, transient detector 1204 may determine a strength of a transient within the frame or a probability that a transient exists in the frame. Transient detector 1204 may provide an indication of a transient, for example, as a Boolean value (whether or not a transient was detected in the frame), as a strength of a detected transient in the frame, or as a probability that a transient exists in the frame. When the transient detector 1204 determines that a transient has occurred, it may generate a control signal to the windowed transformer.
The windowed transformer 1202 may partition an input frame into a plurality of windows when it receives a control signal from the transient detector 1204 indicating presence of a transient in the frame. The windowed transformer 1202 also may transform time-domain audio samples within each window into a set of frequency-domain coefficients, for example, using a MDCT.
In an aspect, the number of windows generated by the windowed transformer 1202 may vary based on the content provided by the transient detector 1204 control signal. For example, if no transient is detected, windowed transformer 1202 may avoid partitioning; the windowed transformer 1202 may transform the frame of source audio with a single MDCT as a unit. Alternatively, if a transient is detected by transient detector 1204, windowed transformer 1202 may separately transform windows of the frame with a window width less than the frame length. The MDCT may generate sets of frequency-domain coefficients, one set per partitioning window, representing the audio content contained within the respective partitioning window.
The spectrum reorganizer 1206 may sort the frequency-domain coefficients of each frame according to an alignment pattern. Reorganization in an alignment pattern may improve efficiency of envelope representations performed by later stages of the system 100. The alignment pattern may sort the frequency coefficients according to their corresponding frequency.
A first alignment pattern, as described above, may include sequentially concatenating the windows of frequency coefficients, where the frequency coefficients are sorted within each window according to their corresponding frequency, and the order of the sort reverses between neighboring windows in the concatenation. Frequency coefficients within each window will contain coefficient values for each of a number of frequencies between a DC frequency and a maximum frequency generated by the windowed transformer 1202; the first alignment pattern may relocate like-kind coefficients adjacent to each other at the boundaries between adjacent windows (e.g., a DC coefficient of one window will be placed adjacent to a DC coefficient of another window, and a highest-frequency coefficient of a window will be placed adjacent to a highest-frequency coefficient of a neighboring window) when considered along a scan direction of an envelope representation. An example of organizing according to the first alignment pattern is described below with reference to
The envelope estimator 1212 may estimate an envelope for a whole frame of frequency-domain coefficients organized according to an alignment pattern. The envelope estimator 1212 may generate output data, shown as a frequency envelope indication, which is placed into the coded audio bitstream and transmitted to the audio decoder 104 (
The envelope normalizer 1210 may generate a normalized representation of the frequency-domain coefficients based on the frequency envelope indication generated by the envelope estimator 1212. For example, the envelope normalizer 1210 may divide each frequency coefficient by that coefficient's corresponding value in the frequency envelope indication.
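The per-coefficient division may be sketched as follows; the eps guard is an assumption for illustration, not from the disclosure:

```python
def normalize(coeffs, envelope, eps=1e-9):
    """Divide each coefficient by its corresponding envelope value;
    eps guards against division by zero in silent regions."""
    return [c / (e + eps) for c, e in zip(coeffs, envelope)]
```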
The residual structure estimator 1216 may identify any residual structure remaining in the sequence of frequency-domain coefficients after normalizing them based on the envelope, and the residual structure remover 1214 may remove the identified residual structure from the normalized frequency-domain coefficients. For example, residual structure may be modeled as periodic characteristics remaining in the values along the normalized coefficients. The residual structure estimator 1216 may estimate parameters of the periodic characteristics model and provide an indication of the residual structure as the estimated parameters of the periodic characteristics model.
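One simple instance of such a periodic-characteristics model, assuming a known period, estimates the repeating pattern as a per-phase mean over the repetitions and subtracts it (an illustrative sketch, not the disclosed estimator):

```python
def remove_periodic_structure(coeffs, period):
    """Estimate a repeating pattern of the given period as the mean
    over repetitions, then subtract it from the coefficients."""
    reps = len(coeffs) // period
    pattern = [sum(coeffs[r * period + p] for r in range(reps)) / reps
               for p in range(period)]
    residual = [c - pattern[i % period]
                for i, c in enumerate(coeffs[:reps * period])]
    return residual, pattern
```

The estimated pattern would be signaled to the decoder, which re-applies it after de-quantization.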
In an aspect, the normalizer 1210 may act as a first stage of de-correlating the sequence of frequency-domain coefficients, and then structure remover 1214 may act as a second stage of de-correlating the sequence of frequency-domain coefficients. Some quantizers, including vector quantizers, may operate more effectively when the sequential inputs to the quantizer are de-correlated from each other.
The quantizer 1220 may quantize de-correlated frequency-domain coefficients. In some implementations, quantizer 1220 may include a vector quantizer 1222 that quantizes vectors of de-correlated frequency coefficients according to a vector codebook 1218 to produce an index of a codeword in the codebook for each vector input to quantizer 1220. In an aspect, quantizer 1220 may combine multiple types of quantizers. For example, quantizer 1220 may use a (uniform or non-uniform) scalar quantizer to quantize lower-frequency coefficients and also use a vector quantizer to quantize higher-frequency coefficients. In another aspect, codebook 1218 may include multiple codebooks, such as conjugate vector codebooks.
In aspects, the vectors may be formed by collecting disjoint frequency coefficients from the sequence of frequency samples of the frame. In a first example, the vectors may be formed by collecting frequency-domain coefficients from a plurality of windows into each vector. In a second example, vectors may be formed by collecting frequency-domain coefficients from a dispersed set of frequencies within a single window. In this second example, the vectors may include only non-neighboring frequencies.
The syntax generator 1224 may integrate the input data received from other processing elements in the system 1200 into a coded bitstream to send to the audio decoder 104. For example, the syntax generator 1224 may receive frame codebook indices from the quantizer 1220, indications of a detected transient from the transient detector 1204, a frequency envelope indication from the envelope estimator 1212, and a residual structure estimation from the residual structure estimator 1216. The syntax generator 1224 may integrate these data elements into a coded representation of the frame according to a syntax of a coding protocol utilized between the audio encoder 102 and the audio decoder 104 (
In operation, the decoding system 1300 may invert many of the operations of encoding system 1200 (
The bus 1410 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computing device 1400. In one or more implementations, the bus 1410 communicatively connects the one or more processing unit(s) 1414 with the ROM 1412, the system memory 1404, and the permanent storage device 1402. From these various memory units, the one or more processing unit(s) 1414 retrieves instructions to execute and data to process in order to execute the processes of the subject disclosure. The one or more processing unit(s) 1414 can be a single processor or a multi-core processor in different implementations.
The ROM 1412 stores static data and instructions that are needed by the one or more processing unit(s) 1414 and other modules of the computing device 1400. The permanent storage device 1402, on the other hand, may be a read-and-write memory device. The permanent storage device 1402 may be a non-volatile memory unit that stores instructions and data even when the computing device 1400 is off. In one or more implementations, a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) may be used as the permanent storage device 1402.
In one or more implementations, a removable storage device (such as a floppy disk, flash drive, and its corresponding disk drive) may be used as the permanent storage device 1402. Like the permanent storage device 1402, the system memory 1404 may be a read-and-write memory device. However, unlike the permanent storage device 1402, the system memory 1404 may be a volatile read-and-write memory, such as random-access memory. The system memory 1404 may store any of the instructions and data that one or more processing unit(s) 1414 may need at runtime. In one or more implementations, the processes of the subject disclosure are stored in the system memory 1404, the permanent storage device 1402, and/or the ROM 1412. From these various memory units, the one or more processing unit(s) 1414 retrieves instructions to execute and data to process in order to execute the processes of one or more implementations.
The bus 1410 also connects to the input and output device interfaces 1406 and 1408. The input device interface 1406 enables a user to communicate information and select commands to the computing device 1400. Input devices that may be used with the input device interface 1406 may include, for example, alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output device interface 1408 may enable, for example, the display of images generated by computing device 1400. Output devices that may be used with the output device interface 1408 may include, for example, printers and display devices, such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible display, a flat panel display, a solid-state display, a projector, or any other device for outputting information.
One or more implementations may include devices that function as both input and output devices, such as a touchscreen. In these implementations, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Finally, as shown in
Implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) encoding one or more instructions. The tangible computer-readable storage medium also can be non-transitory in nature.
The computer-readable storage medium can be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device, including any processing electronics and/or processing circuitry capable of executing instructions. For example, without limitation, the computer-readable medium can include any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM, and TTRAM. The computer-readable medium also can include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG, and Millipede memory.
Further, the computer-readable storage medium can include any non-semiconductor memory, such as optical disk storage, magnetic disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium can be directly coupled to a computing device, while in other implementations, the tangible computer-readable storage medium can be indirectly coupled to a computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.
Instructions can be directly executable or can be used to develop executable instructions. For example, instructions can be realized as executable or non-executable machine code or as instructions in a high-level language that can be compiled to produce executable or non-executable machine code. Further, instructions also can be realized as or can include data. Computer-executable instructions also can be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, etc. As recognized by those of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions can vary significantly without varying the underlying logic, function, processing, and output.
While the above discussion primarily refers to microprocessors or multi-core processors that execute software, one or more implementations are performed by one or more integrated circuits, such as ASICs or FPGAs. In one or more implementations, such integrated circuits execute instructions that are stored on the circuit itself.
Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., arranged in a different order, or partitioned in a different way) all without departing from the scope of the present disclosure.
It is understood that any specific order or hierarchy of blocks in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that not all illustrated blocks be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components (e.g., computer program products) and systems can generally be integrated together in a single software product or packaged into multiple software products.
As used in this specification and any claims of this application, the terms “base station,” “receiver,” “computer,” “server,” “processor,” and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of this specification, the term “display” or “displaying” means displaying on an electronic device.
As used herein, the phrase “at least one of” preceding a series of items, with the term “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase “at least one of” does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The predicate words “configured to,” “operable to,” and “programmed to” do not imply any particular tangible or intangible modification of a subject, but, rather, are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.
Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the present disclosure, the disclosure, other variations thereof and the like are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the present disclosure or that such disclosure applies to all configurations of the present disclosure. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to the other foregoing phrases.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” or as an “example” is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, to the extent that the term “include,” “have,” or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term “comprise” as “comprise” is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.”
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.
This application claims the benefit of priority to U.S. Provisional Patent Application Ser. No. 63/505,838, filed Jun. 2, 2023, entitled “Efficient Coding Of Transients In Transform-Domain,” the disclosure of which is incorporated herein by reference in its entirety.
Number | Date | Country
---|---|---
63505838 | Jun 2023 | US