This disclosure relates to coding of audio signals (e.g., speech coding).
The linear prediction (LP) analysis-synthesis framework has been successful for speech coding because it fits well with the source-system paradigm for speech synthesis. In particular, the slowly time-varying spectral characteristics of the upper vocal tract are modeled by an all-pole filter, while the prediction residual captures the voiced, unvoiced, or mixed excitation behavior of the vocal cords. The prediction residual from the LP analysis is modeled and encoded using a closed-loop analysis-by-synthesis process.
In analysis-by-synthesis code-excited linear prediction (CELP) systems, the excitation sequence that results in the lowest perceptually weighted mean-square error (MSE) between the input and reconstructed speech is selected. The perceptual weighting filter shapes the prediction error such that quantization noise is masked by the high-energy formants: it de-emphasizes the error energy in the formant regions, based on the fact that in those regions quantization noise is partially masked by speech. In CELP coding, the excitation signal is generated from two codebooks, namely, the adaptive codebook (ACB) and the fixed codebook (FCB). The ACB vector represents a delayed (i.e., by a closed-loop pitch value) segment of the past excitation signal and contributes to the periodic component of the overall excitation. After the periodic contribution in the overall excitation is captured, a fixed codebook search is performed. The FCB excitation vector partly represents the remaining aperiodic component in the excitation signal and is constructed using an algebraic codebook of interleaved, unitary pulses. In speech coding, pitch- and formant-sharpening techniques provide significant improvement to the speech reconstruction quality, particularly at lower bit rates.
Formant sharpening may contribute significant quality gains in clean speech; however, in the presence of noise and at low signal-to-noise ratios (SNRs), the quality gains are less pronounced. This may be due partly to inaccurate estimation of the formant sharpening filter and partly to certain limitations of the source-system speech model, which must additionally account for noise. In some cases, the degradation in speech quality is more noticeable in the presence of bandwidth extension, where a transformed, formant-sharpened low-band excitation is used in the high-band synthesis. In particular, certain components (e.g., the fixed codebook contribution) of the low-band excitation may undergo pitch- and/or formant-sharpening to improve the perceptual quality of low-band synthesis. Using the pitch- and/or formant-sharpened excitation from the low band for high-band synthesis may be more likely to cause audible artifacts than to improve the overall speech reconstruction quality.
Unless expressly limited by its context, the term “signal” is used herein to indicate any of its ordinary meanings, including a state of a memory location (or set of memory locations) as expressed on a wire, bus, or other transmission medium. Unless expressly limited by its context, the term “generating” is used herein to indicate any of its ordinary meanings, such as computing or otherwise producing. Unless expressly limited by its context, the term “calculating” is used herein to indicate any of its ordinary meanings, such as computing, evaluating, smoothing, and/or selecting from a plurality of values. Unless expressly limited by its context, the term “obtaining” is used to indicate any of its ordinary meanings, such as calculating, deriving, receiving (e.g., from an external device), and/or retrieving (e.g., from an array of storage elements). Unless expressly limited by its context, the term “selecting” is used to indicate any of its ordinary meanings, such as identifying, indicating, applying, and/or using at least one, and fewer than all, of a set of two or more. Unless expressly limited by its context, the term “determining” is used to indicate any of its ordinary meanings, such as deciding, establishing, concluding, calculating, selecting, and/or evaluating. Where the term “comprising” is used in the present description and claims, it does not exclude other elements or operations. The term “based on” (as in “A is based on B”) is used to indicate any of its ordinary meanings, including the cases (i) “derived from” (e.g., “B is a precursor of A”), (ii) “based on at least” (e.g., “A is based on at least B”) and, if appropriate in the particular context, (iii) “equal to” (e.g., “A is equal to B”). Similarly, the term “in response to” is used to indicate any of its ordinary meanings, including “in response to at least.”
Unless otherwise indicated, the term “series” is used to indicate a sequence of two or more items. The term “logarithm” is used to indicate the base-ten logarithm, although extensions of such an operation to other bases are within the scope of this disclosure. The term “frequency component” is used to indicate one among a set of frequencies or frequency bands of a signal, such as a sample of a frequency-domain representation of the signal (e.g., as produced by a fast Fourier transform or MDCT) or a subband of the signal (e.g., a Bark scale or mel scale subband).
Unless indicated otherwise, any disclosure of an operation of an apparatus having a particular feature is also expressly intended to disclose a method having an analogous feature (and vice versa), and any disclosure of an operation of an apparatus according to a particular configuration is also expressly intended to disclose a method according to an analogous configuration (and vice versa). The term “configuration” may be used in reference to a method, apparatus, and/or system as indicated by its particular context. The terms “method,” “process,” “procedure,” and “technique” are used generically and interchangeably unless otherwise indicated by the particular context. A “task” having multiple subtasks is also a method. The terms “apparatus” and “device” are also used generically and interchangeably unless otherwise indicated by the particular context. The terms “element” and “module” are typically used to indicate a portion of a greater configuration. Unless expressly limited by its context, the term “system” is used herein to indicate any of its ordinary meanings, including “a group of elements that interact to serve a common purpose.” The term “plurality” means “two or more.” Any incorporation by reference of a portion of a document shall also be understood to incorporate definitions of terms or variables that are referenced within the portion, where such definitions appear elsewhere in the document, as well as any figures referenced in the incorporated portion.
The terms “coder,” “codec,” and “coding system” are used interchangeably to denote a system that includes at least one encoder configured to receive and encode frames of an audio signal (possibly after one or more pre-processing operations, such as a perceptual weighting and/or other filtering operation) and a corresponding decoder configured to produce decoded representations of the frames. Such an encoder and decoder are typically deployed at opposite terminals of a communications link. In order to support a full-duplex communication, instances of both of the encoder and the decoder are typically deployed at each end of such a link.
Unless otherwise indicated, the terms “vocoder,” “audio coder,” and “speech coder” refer to the combination of an audio encoder and a corresponding audio decoder. Unless otherwise indicated, the term “coding” indicates transfer of an audio signal via a codec, including encoding and subsequent decoding. Unless otherwise indicated, the term “transmitting” indicates propagating (e.g., a signal) into a transmission channel.
A coding scheme as described herein may be applied to code any audio signal (e.g., including non-speech audio). Alternatively, it may be desirable to use such a coding scheme only for speech. In such a case, the coding scheme may be used with a classification scheme that determines the type of content of each frame of the audio signal and selects a suitable coding scheme.
A coding scheme as described herein may be used as a primary codec or as a layer or stage in a multi-layer or multi-stage codec. In one such example, such a coding scheme is used to code a portion of the frequency content of an audio signal (e.g., a lowband or a highband), and another coding scheme is used to code another portion of the frequency content of the signal.
The linear prediction (LP) analysis-synthesis framework has been successful for speech coding because it fits well with the source-system paradigm for speech synthesis. In particular, the slowly time-varying spectral characteristics of the upper vocal tract are modeled by an all-pole filter, while the prediction residual captures the voiced, unvoiced, or mixed excitation behavior of the vocal cords.
It may be desirable to use a closed-loop analysis-by-synthesis process to model and encode the prediction residual from the LP analysis. In an analysis-by-synthesis code-excited LP (CELP) system, the excitation sequence that results in the lowest perceptually weighted mean-square error (MSE) between the input and reconstructed speech is selected.
It may be expected that speech energy in the formant regions will partially mask noise that may otherwise occur in those regions. Consequently, it may be desirable to implement an LP coder to include a perceptual weighting filter (PWF) to shape the prediction error such that noise due to quantization error may be masked by the high-energy formants.
A PWF W(z) that de-emphasizes energy of the prediction error in the formant regions (e.g., such that the error outside of those regions may be modeled more accurately) may be implemented according to an expression such as

W(z) = (1 + Σ_{i=1}^{L} γ1,i αi z^{−i}) / (1 + Σ_{i=1}^{L} γ2,i αi z^{−i}),   (1a)

or, using a single pair of weight values,

W(z) = A(z/γ1) / A(z/γ2),   (1b)
where γ1 and γ2 are weights whose values satisfy the relation 0 < γ2 < γ1 < 1, αi are the LP coefficients of the analysis filter A(z), and L is the order of the filter. Typically, the value of feedforward weight γ1 is equal to or greater than 0.9 (e.g., in the range of from 0.94 to 0.98), and the value of feedback weight γ2 varies between 0.4 and 0.7. As shown in expression (1a), the values of γ1 and γ2 may differ for different filter coefficients αi, or the same values of γ1 and γ2 may be used for all i, 1 ≤ i ≤ L. The values of γ1 and γ2 may be selected, for example, according to the tilt (or flatness) characteristics associated with the LPC spectral envelope. In one example, the spectral tilt is indicated by the first reflection coefficient. A particular example in which W(z) is implemented according to expression (1b) with the values {γ1, γ2} = {0.92, 0.68} is described in sections 4.3 and 5.3 of Technical Specification (TS) 26.190 v11.0.0 (AMR-WB speech codec, September 2012, Third Generation Partnership Project (3GPP), Valbonne, FR).
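For illustration, expression (1b) may be realized in software as follows. This non-limiting Python sketch (the helper name weight_poly is hypothetical) assumes the convention A(z) = 1 + Σ αi z^{−i}, so that A(z/γ) is obtained by scaling the coefficient of z^{−i} by γ^i:

    import numpy as np
    from scipy.signal import lfilter

    def weight_poly(a, gamma):
        # a = [1, alpha_1, ..., alpha_L] holds the coefficients of A(z);
        # A(z/gamma) scales the coefficient of z^-i by gamma**i.
        a = np.asarray(a, dtype=float)
        return a * gamma ** np.arange(len(a))

    def perceptual_weighting(x, a, gamma1=0.92, gamma2=0.68):
        # W(z) = A(z/gamma1) / A(z/gamma2), per expression (1b).
        return lfilter(weight_poly(a, gamma1), weight_poly(a, gamma2), x)

Because 0 < γ2 < γ1 < 1, the numerator zeros of W(z) remain closer to the original formant resonances than the denominator poles, so |W| dips near the formant frequencies, which produces the desired de-emphasis of the error energy in those regions.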
In CELP coding, the excitation signal e(n) is generated from two codebooks, namely, the adaptive codebook (ACB) and the fixed codebook (FCB). The excitation signal e(n) may be generated according to an expression such as
e(n) = gp v(n) + gc c(n),   (2)
where n is a sample index, gp and gc are the ACB and FCB gains, and v(n) and c(n) are the ACB and FCB vectors, respectively. The ACB vector v(n) represents a delayed segment of the past excitation signal (i.e., delayed by a pitch value, such as a closed-loop pitch value) and contributes to the periodic component of the overall excitation. The FCB excitation vector c(n) partly represents a remaining aperiodic component in the excitation signal. In one example, the vector c(n) is constructed using an algebraic codebook of interleaved, unitary pulses. The FCB vector c(n) may be obtained by performing a fixed codebook search after the periodic contribution in the overall excitation is captured in gpv(n).
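For illustration, the construction of expression (2) for one subframe with an integer pitch delay may be sketched as follows (a non-limiting Python example; the function name and the handling of lags shorter than the subframe are illustrative assumptions):

    import numpy as np

    def build_excitation(past_exc, pitch_lag, gp, c, gc):
        # e(n) = gp*v(n) + gc*c(n), where v(n) is the past excitation
        # delayed by pitch_lag samples (expression (2)).
        v = np.empty(len(c))
        for n in range(len(c)):
            if n < pitch_lag:
                v[n] = past_exc[len(past_exc) - pitch_lag + n]
            else:
                v[n] = v[n - pitch_lag]  # lag < subframe length: reuse
        e = gp * v + gc * c
        return e, np.concatenate([past_exc, e])  # e extends the ACB memory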
Methods, systems, and apparatus as described herein may be configured to process the audio signal as a series of segments. Typical segment lengths range from about five or ten milliseconds to about forty or fifty milliseconds, and the segments may be overlapping (e.g., with adjacent segments overlapping by 25% or 50%) or nonoverlapping. In one particular example, the audio signal is divided into a series of nonoverlapping segments or “frames”, each having a length of ten milliseconds. In another particular example, each frame has a length of twenty milliseconds. Examples of sampling rates for the audio signal include (without limitation) eight, twelve, sixteen, 32, 44.1, 48, and 192 kilohertz. It may be desirable for such a method, system, or apparatus to update the LP analysis on a subframe basis (e.g., with each frame being divided into two, three, or four subframes of approximately equal size). Additionally or alternatively, it may be desirable for such a method, system, or apparatus to produce the excitation signal on a subframe basis.
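Such segmentation may be expressed, for example, as follows (a minimal Python sketch; the twenty-millisecond frame and the division into four subframes are just one of the configurations mentioned above):

    import numpy as np

    def frames(x, fs=16000, frame_ms=20, n_subframes=4):
        # Yield each nonoverlapping frame together with its subframes
        # (e.g., 320 samples per frame and 80 per subframe at 16 kHz).
        flen = fs * frame_ms // 1000
        for start in range(0, len(x) - flen + 1, flen):
            frame = x[start:start + flen]
            yield frame, np.split(frame, n_subframes)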
It may be desirable to employ pitch-sharpening and/or formant-sharpening techniques, which can provide significant improvement to the speech reconstruction quality, particularly at low bit rates. Such techniques may be implemented by first applying the pitch-sharpening and formant-sharpening to the impulse response of the weighted synthesis filter (e.g., the impulse response of W(z)×1/Â(z), where 1/Â(z) denotes the quantized synthesis filter) before the FCB search, and then applying the sharpening to the estimated FCB vector c(n), as described below.
1) It may be expected that the ACB vector v(n) does not capture all of the pitch energy in the signal s(n), and that the FCB search will be performed according to a remainder that includes some of the pitch energy. Consequently, it may be desirable to use the current pitch estimate (e.g., the closed-loop pitch value) to sharpen a corresponding component in the FCB vector. Pitch sharpening may be performed using a transfer function such as the following:

H1(z) = 1 / (1 − β z^{−τ}),   (3)
where β is a pitch-sharpening gain (e.g., a fixed value such as 0.85) and τ is based on a current pitch estimate (e.g., τ is the closed-loop pitch value rounded to the nearest integer value). The estimated FCB vector c(n) is filtered using such a pitch pre-filter H1(z). The filter H1(z) is also applied to the impulse response of the weighted synthesis filter (e.g., to the impulse response of W(z)/Â(z)) prior to FCB estimation. In another example, the filter H1(z) is based on the adaptive codebook gain gp, such as in the following:

H1(z) = 1 / (1 − gp z^{−τ})
(e.g., as described in section 4.12.4.14 of Third Generation Partnership Project 2 (3GPP2) document C.S0014-E v1.0, December 2011, Arlington, Va.), where the value of gp (0 ≤ gp ≤ 1) may be bounded by the values [0.2, 0.9].
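For one subframe, such a pre-filter reduces to a single feedback tap at the pitch lag. A minimal Python sketch follows (assuming an integer lag; beta may be the fixed gain of expression (3) or the bounded ACB gain gp):

    import numpy as np

    def pitch_sharpen(c, lag, beta=0.85):
        # Applies H1(z) = 1 / (1 - beta * z^-lag) to the FCB vector c(n).
        # When beta is the ACB gain gp, it is first clipped to the bounds
        # noted above (e.g., [0.2, 0.9]).
        y = np.asarray(c, dtype=float).copy()
        for n in range(lag, len(y)):
            y[n] += beta * y[n - lag]
        return y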
2) It may also be expected that the FCB search will be performed according to a remainder that includes more energy in the formant regions, rather than being entirely noise-like. Formant sharpening (FS) may be performed using a perceptual weighting filter that is similar to the filter W(z) as described above. In this case, however, the values of the weights satisfy the relation 0 < γ1 < γ2 < 1. In one such example, the values γ1 = 0.75 for the feedforward weight and γ2 = 0.9 for the feedback weight are used:

H2(z) = A(z/γ1) / A(z/γ2) = A(z/0.75) / A(z/0.9).   (4)
Unlike the PWF W(z) in Eq. (1) that performs the de-emphasis to hide the quantization noise in the formants, an FS filter H2(z) as shown in Eq. (4) emphasizes the formant regions associated with the FCB excitation. The estimated FCB vector c(n) is filtered using such an FS filter H2(z). The filter H2(z) is also applied to the impulse response of the weighted synthesis filter (e.g., to the impulse response of W(z)/Â(z)) prior to FCB estimation.
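For illustration, applying the FS filter of Eq. (4) to the FCB vector may be sketched as follows (a self-contained, non-limiting Python example using the same coefficient-scaling identity as the earlier weighting-filter sketch):

    import numpy as np
    from scipy.signal import lfilter

    def formant_sharpen(c, a, gamma1=0.75, gamma2=0.9):
        # H2(z) = A(z/gamma1) / A(z/gamma2): because gamma1 < gamma2 here,
        # the filter emphasizes (rather than de-emphasizes) the formant
        # regions associated with the FCB excitation.
        a = np.asarray(a, dtype=float)
        g = np.arange(len(a))
        return lfilter(a * gamma1 ** g, a * gamma2 ** g, c)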
The improvements in speech reconstruction quality that may be obtained by using pitch and formant sharpening may depend directly on the underlying speech signal model and on the accuracy of the estimates of the closed-loop pitch τ and the LP analysis filter A(z). Based on several large-scale listening tests, it has been experimentally verified that formant sharpening can contribute significant quality gains in clean speech. In the presence of noise, however, some degradation has been observed consistently. Degradation caused by formant sharpening may be due to inaccurate estimation of the FS filter and/or to limitations in the source-system speech modeling, which must additionally account for noise.
A bandwidth extension technique may be used to increase the bandwidth of a decoded narrowband speech signal (having a bandwidth of, for example, from 0, 50, 100, 200, 300 or 350 Hertz to 3, 3.2, 3.4, 3.5, 4, 6.4, or 8 kHz) into a highband (e.g., up to 7, 8, 12, 14, 16, or 20 kHz) by spectrally extending the narrowband LPC filter coefficients to obtain highband LPC filter coefficients (alternatively, by including highband LPC filter coefficients in the encoded signal) and by spectrally extending the narrowband excitation signal (e.g., using a nonlinear function, such as absolute value or squaring) to obtain a highband excitation signal. Unfortunately, degradation caused by formant sharpening may be more severe in the presence of bandwidth extension where such a transformed lowband excitation is used in highband synthesis.
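A highly simplified sketch of such a nonlinear extension step is shown below, assuming a lowband excitation at 8 kHz extended toward 16 kHz; the function name, the choice of absolute value as the nonlinearity, and the fourth-order highpass are illustrative assumptions only:

    import numpy as np
    from scipy.signal import resample_poly, butter, lfilter

    def highband_excitation(lowband_exc):
        # Upsample by two (e.g., 8 kHz -> 16 kHz), apply a memoryless
        # nonlinearity to spread energy into the upper band, then keep
        # only the highband portion of the result.
        up = resample_poly(lowband_exc, 2, 1)
        nl = np.abs(up)
        nl -= nl.mean()                          # remove DC introduced by abs()
        b, a = butter(4, 0.5, btype='highpass')  # cutoff at half Nyquist
        return lfilter(b, a, nl)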
It may be desirable to preserve the quality improvements due to FS in both clean speech and noisy speech. An approach to adaptively vary the formant-sharpening (FS) factor is described herein. In particular, quality improvements were noted when using a less aggressive emphasis factor γ2 for the formant sharpening in the presence of noise.
In an example embodiment, method M100 includes a task T100 that calculates an average (e.g., long-term) signal-to-noise ratio of the audio signal, a task T200 that calculates a formant-sharpening factor based on the calculated SNR, and a task T300 that applies a filter based on the calculated factor to a codebook vector that is based on information from the audio signal.
Task T100 may be implemented to perform noise estimation and to calculate a long-term SNR. For example, task T100 may be implemented to track long-term noise estimates during inactive segments of the audio signal and to compute long-term signal energies during active segments of the audio signal. Whether a segment (e.g., a frame) of the audio signal is active or inactive may be indicated by another module of an encoder, such as a voice activity detector. Task T100 may then use the temporally smoothed noise and signal energy estimates to compute the long-term SNR.
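One possible realization of such tracking is sketched below (non-limiting; the smoothing constant and the dictionary-based state are illustrative assumptions, and is_active would be supplied by the voice activity detector):

    import numpy as np

    def update_long_term_snr(frame, is_active, state, alpha=0.99):
        # One-pole (exponential) smoothing of the signal energy during
        # active frames and of the noise energy during inactive frames;
        # returns the long-term SNR estimate in dB.
        energy = float(np.dot(frame, frame)) + 1e-12  # guard against log(0)
        key = 'signal' if is_active else 'noise'
        state[key] = alpha * state.get(key, energy) + (1.0 - alpha) * energy
        return 10.0 * np.log10(state.get('signal', energy) /
                               state.get('noise', energy))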
Task T200 may be implemented to adaptively vary the formant-sharpening factor over time. For example, task T200 may be implemented to use the estimated long-term SNR from the current frame to adaptively vary the formant-sharpening factor for the next frame.
Task T200 may also be implemented to use a different mapping of the γ2 value versus long-term SNR. Such a mapping may be piecewise linear, with one, two, or more additional inflection points and different slopes between adjacent inflection points. The slope of such a mapping may be steeper at lower SNRs and shallower at higher SNRs.
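Such a mapping may be implemented, for example, by interpolating over a small table of inflection points. The breakpoint values in this Python sketch are hypothetical placeholders, chosen only to be steeper at low SNR and shallower at high SNR:

    import numpy as np

    SNR_DB = [0.0, 15.0, 30.0]   # hypothetical inflection points (dB)
    GAMMA2 = [0.60, 0.80, 0.90]  # corresponding FS factor values

    def fs_factor(long_term_snr_db):
        # np.interp clamps outside the table, so the factor saturates
        # at 0.9 for high SNRs and at 0.6 for very low SNRs.
        return float(np.interp(long_term_snr_db, SNR_DB, GAMMA2))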
Task T300 applies a formant-sharpening filter to the FCB excitation, using the FS factor produced by task T200. The formant-sharpening filter H2(z) may be implemented, for example, according to an expression such as the following:

H2(z) = A(z/γ1) / A(z/γ2),   (5)

where γ1 is fixed (e.g., at 0.75) and γ2 is the adaptive FS factor produced by task T200.
Note that for clean speech and at high SNRs, the value of γ2 produced by such a mapping is close to 0.9, so that the formant sharpening remains as aggressive as in the fixed case of expression (4); as the SNR decreases, a smaller and less aggressive value of γ2 is used.
In bandwidth extension, using a formant-sharpened lowband excitation for highband synthesis may result in artifacts. An implementation of method M100 as described herein may be used to vary the FS factor such that the impact on the highband is kept negligible. Alternatively, a formant-sharpening contribution to the highband excitation may be disabled (e.g., by using the pre-sharpening version of the FCB vector in the highband excitation generation, or by disabling formant sharpening for the excitation generation in both of the narrowband and the highband). Such a method may be performed within, for example, a portable communications device, such as a cellular telephone.
Apparatus MF100 includes means F100 for calculating an average signal-to-noise ratio of an audio signal (e.g., as described herein with reference to task T100). Apparatus MF100 also includes means F200 for calculating a formant sharpening factor based on the calculated average SNR (e.g., as described herein with reference to task T200). Apparatus MF100 also includes means F300 for applying a filter that is based on the calculated FS factor to an FCB vector that is based on information from the audio signal (e.g., as described herein with reference to task T300). Such an apparatus may be implemented within, for example, an encoder of a portable communications device, such as a cellular telephone.
The decoder may be implemented to apply the filters H1(z) and H2(z) to the FCB vector as well. In one such example, the encoder is implemented to transmit the calculated FS factor to the decoder as a parameter of the encoded frame. This implementation may be used to control the extent of formant sharpening in the decoded signal. In another such example, the decoder is implemented to generate the filters H1(z) and H2(z) based on a long-term SNR estimate that may be locally generated (e.g., using a procedure such as that described herein with reference to task T100).
Transmitting terminal 102 includes an audio encoder AE10, and receiving terminal 104 includes an audio decoder AD10. Audio encoder AE10, which may be used to compress audio information (e.g., speech) from a first user interface UI10 (e.g., a microphone and audio front-end) by extracting values of parameters according to a model of human speech generation, may be implemented to perform a method as described herein. A channel encoder CE10 assembles the parameter values into packets, and a transmitter TX10 transmits the packets including these parameter values over network NW10, which may include a packet-based network, such as the Internet or a corporate intranet, via transmission channel TC10. Transmission channel TC10 may be a wired and/or wireless transmission channel and may be considered to extend to an entry point of network NW10 (e.g., a base station controller), to another entity within network NW10 (e.g., a channel quality analyzer), and/or to a receiver RX10 of receiving terminal 104, depending upon how and where the quality of the channel is determined.
A receiver RX10 of receiving terminal 104 is used to receive the packets from network NW10 via a transmission channel. A channel decoder CD10 decodes the packets to obtain the parameter values, and an audio decoder AD10 synthesizes the audio information using the parameter values from the packets (e.g., according to a method as described herein). The synthesized audio (e.g., speech) is provided to a second user interface UI20 (e.g., an audio output stage and loudspeaker) on the receiving terminal 104. Although not shown, various signal processing functions may be performed in channel encoder CE10 and channel decoder CD10 (e.g., convolutional coding including cyclic redundancy check (CRC) functions, interleaving) and in transmitter TX10 and receiver RX10 (e.g., digital modulation and corresponding demodulation, spread spectrum processing, analog-to-digital and digital-to-analog conversion).
Each party to a communication may transmit as well as receive, and each terminal may include instances of audio encoder AE10 and decoder AD10. The audio encoder and decoder may be separate devices or integrated into a single device known as a “voice coder” or “vocoder.”
In at least one configuration of transmitting terminal 102, an audio signal (e.g., speech) may be input from first user interface UI10 to audio encoder AE10 in frames, with each frame further partitioned into sub-frames. Such arbitrary frame boundaries may be used where some block processing is performed. However, such partitioning of the audio samples into frames (and sub-frames) may be omitted if continuous processing rather than block processing is implemented. In the described examples, each packet transmitted across network NW10 may include one or more frames depending on the specific application and the overall design constraints.
Audio encoder AE10 may be a variable-rate or single-fixed-rate encoder. A variable-rate encoder may dynamically switch between multiple encoder modes (e.g., different fixed rates) from frame to frame, depending on the audio content (e.g., depending on whether speech is present and/or what type of speech is present). Audio decoder AD10 may also dynamically switch between corresponding decoder modes from frame to frame in a corresponding manner. A particular mode may be chosen for each frame to achieve the lowest bit rate available while maintaining acceptable signal reproduction quality at receiving terminal 104.
Audio encoder AE10 typically processes the input signal as a series of nonoverlapping segments in time or “frames,” with a new encoded frame being calculated for each frame. The frame period is generally a period over which the signal may be expected to be locally stationary; common examples include twenty milliseconds (equivalent to 320 samples at a sampling rate of 16 kHz, 256 samples at a sampling rate of 12.8 kHz, or 160 samples at a sampling rate of eight kHz) and ten milliseconds. It is also possible to implement audio encoder AE10 to process the input signal as a series of overlapping frames.
Frame encoder FE10 is implemented to perform a codebook-based scheme (e.g., code-excited linear prediction or CELP) according to a source-filter model that encodes each frame of the input audio signal as (A) a set of parameters that describe a filter and (B) an excitation signal that will be used at the decoder to drive the described filter to produce a synthesized reproduction of the audio frame. The spectral envelope of a speech signal is typically characterized by peaks, called formants, that represent resonances of the vocal tract (e.g., the throat and mouth). Most speech coders encode at least this coarse spectral structure as a set of parameters, such as filter coefficients. The remaining residual signal may be modeled as a source (e.g., as produced by the vocal cords) that drives the filter to produce the speech signal and typically is characterized by its intensity and pitch.
Particular examples of encoding schemes that may be used by frame encoder FE10 to produce the encoded frames EF include, without limitation, G.726, G.728, G.729A, AMR, AMR-WB, AMR-WB+ (e.g., as described in 3GPP TS 26.290 v11.0.0, September 2012 (available from ETSI)), VMR-WB (e.g., as described in the Third Generation Partnership Project 2 (3GPP2) document C.S0052-A v1.0, April 2005 (available online at www-dot-3gpp2-dot-org)), the Enhanced Variable Rate Codec (EVRC, as described in the 3GPP2 document C.S0014-E v1.0, December 2011 (available online at www-dot-3gpp2-dot-org)), the Selectable Mode Vocoder speech codec (as described in the 3GPP2 document C.S0030-0, v3.0, January 2004 (available online at www-dot-3gpp2-dot-org)), and the Enhanced Voice Service codec (EVS, e.g., as described in 3GPP TR 22.813 v10.0.0 (March 2010), available from ETSI).
Linear prediction coding (LPC) analysis module LA10 encodes the spectral envelope of each core audio frame as a set of linear prediction (LP) coefficients (e.g., coefficients of the all-pole filter 1/A(z) as described above). In one example, LPC analysis module LA10 is configured to calculate a set of sixteen LP filter coefficients to characterize the formant structure of each 20-millisecond frame. Analysis module LA10 may be implemented, for example, as described in section 5.2 of 3GPP TS 26.190 v11.0.0.
Analysis module LA10 may be configured to analyze the samples of each frame directly, or the samples may be weighted first according to a windowing function (e.g., a Hamming window). The analysis may also be performed over a window that is larger than the frame, such as a 30-millisecond window. This window may be symmetric (e.g., 5-20-5, such that it includes the five milliseconds immediately before and after the 20-millisecond frame) or asymmetric (e.g., 10-20, such that it includes the last ten milliseconds of the preceding frame). An LPC analysis module is typically configured to calculate the LP filter coefficients using a Levinson-Durbin recursion or the Leroux-Gueguen algorithm. Although LPC encoding is well suited to speech, it may also be used to encode generic audio signals (e.g., including non-speech, such as music). In another implementation, the analysis module may be configured to calculate a set of cepstral coefficients for each frame instead of a set of LP filter coefficients.
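For reference, the autocorrelation method with the Levinson-Durbin recursion may be sketched as follows (a non-limiting Python example; the window and the order of sixteen are illustrative defaults, and the sign convention matches A(z) = 1 + Σ αi z^{−i}):

    import numpy as np

    def levinson_durbin(r, order):
        # Solve the normal equations given autocorrelations r[0..order];
        # returns the coefficients [1, a1, ..., aL] of A(z) and the final
        # prediction-error energy.
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i-1:0:-1])
            k = -acc / err                      # reflection coefficient
            a[1:i] = a[1:i] + k * a[i-1:0:-1]   # symmetric update
            a[i] = k
            err *= 1.0 - k * k
        return a, err

    def lp_analysis(frame, order=16):
        x = frame * np.hamming(len(frame))      # analysis window
        full = np.correlate(x, x, mode='full')
        r = full[len(x) - 1 : len(x) + order]   # r[0..order]
        return levinson_durbin(r, order)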
Linear prediction filter coefficients are typically difficult to quantize efficiently and are usually mapped into another representation, such as line spectral pairs (LSPs) or line spectral frequencies (LSFs), or immittance spectral pairs (ISPs) or immittance spectral frequencies (ISFs), for quantization and/or entropy encoding. In one example, analysis module LA10 transforms the set of LP filter coefficients into a corresponding set of ISFs. Other one-to-one representations of LP filter coefficients include parcor coefficients and log-area-ratio values. Typically a transform between a set of LP filter coefficients and a corresponding set of LSFs, LSPs, ISFs, or ISPs is reversible, but embodiments also include implementations of analysis module LA10 in which the transform is not reversible without error.
Analysis module LA10 is configured to quantize the set of ISFs (or LSFs or other coefficient representation), and frame encoder FE20 is configured to output the result of this quantization as LPC index XL. Such a quantizer typically includes a vector quantizer that encodes the input vector as an index to a corresponding vector entry in a table or codebook. Module LA10 is also configured to provide the quantized coefficients α̂i for calculation of the weighted synthesis filter as described herein (e.g., by ACB search module AS10).
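The vector-quantization step may be illustrated by a simple nearest-neighbor search (a sketch only; deployed codecs typically use structured codebooks, e.g., split or multi-stage, rather than the exhaustive search shown):

    import numpy as np

    def vq_encode(x, codebook):
        # codebook is a 2-D array with one candidate vector per row;
        # return the index of the row closest to x in squared error,
        # together with the quantized vector itself.
        dist = np.sum((codebook - x) ** 2, axis=1)
        idx = int(np.argmin(dist))
        return idx, codebook[idx]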
Frame encoder FE20 also includes an optional open-loop pitch search module OL10 that may be used to simplify pitch analysis and reduce the scope of the closed-loop pitch search in adaptive codebook search module AS10. Module OL10 may be implemented to filter the input signal through a weighting filter that is based on the unquantized LP filter coefficients, to decimate the weighted signal by two, and to produce a pitch estimate once or twice per frame (depending on the current rate). Module OL10 may be implemented, for example, as described in section 5.4 of 3GPP TS 26.190 v11.0.0.
Adaptive codebook (ACB) search module AS10 is configured to search the adaptive codebook (based on the past excitation and also called the “pitch codebook”) to produce the delay and gain of the pitch filter. Module AS10 may be implemented to perform a closed-loop pitch search around the open-loop pitch estimates on a subframe basis on a target signal (as obtained, e.g., by filtering the LP residual through a weighted synthesis filter based on the quantized and unquantized LP filter coefficients), and then to compute the adaptive codevector by interpolating the past excitation at the indicated fractional pitch lag and to compute the ACB gain. Module AS10 may also be implemented to use the LP residual to extend the past excitation buffer, to simplify the closed-loop pitch search (especially for delays less than the subframe size of, e.g., 40 or 64 samples). Module AS10 may be implemented to produce an ACB gain gp (e.g., for each subframe) and a quantized index that indicates the pitch delay of the first subframe (or the pitch delays of the first and third subframes, depending on the current rate) and relative pitch delays of the other subframes. Module AS10 may be implemented, for example, as described in section 5.7 of 3GPP TS 26.190 v11.0.0.
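The interpolation of the past excitation at a fractional lag may be sketched as follows; linear interpolation stands in here for the longer FIR interpolation filters used in deployed codecs, and the sketch assumes the lag is at least the subframe length (shorter lags require extending the buffer, e.g., with the LP residual, as noted above):

    import numpy as np

    def adaptive_codevector(exc_hist, lag, subframe_len):
        # v(n) = u(n - lag), read from the past-excitation buffer, with
        # the fractional part of the lag realized by linear interpolation.
        i0 = int(np.floor(lag))
        frac = lag - i0
        assert subframe_len <= i0 < len(exc_hist)  # see note above
        base = len(exc_hist) - i0
        v = np.empty(subframe_len)
        for n in range(subframe_len):
            v[n] = (1.0 - frac) * exc_hist[base + n] \
                   + frac * exc_hist[base + n - 1]
        return v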
Fixed codebook (FCB) search module FS10 is configured to produce an index that indicates a vector of the fixed codebook (also called “innovation codebook,” “innovative codebook,” “stochastic codebook,” or “algebraic codebook”), which represents the portion of the excitation that is not modeled by the adaptive codevector. Module FS10 may be implemented to produce the codebook index as a codeword that contains all of the information needed to reproduce the FCB vector c(n) (e.g., that represents the pulse positions and signs), such that no codebook storage is needed. Module FS10 may be implemented, for example, according to the algebraic codebook search described in section 5.8 of 3GPP TS 26.190 v11.0.0.
Gain vector quantization module GV10 is configured to quantize the FCB and ACB gains, which may include gains for each subframe. Module GV10 may be implemented, for example, as described in section 5.9 of 3GPP TS 26.190 v11.0.0.
Chip/chipset CS10 includes a receiver (e.g., RX10), which is configured to receive a radio-frequency (RF) communications signal and to decode and reproduce an audio signal encoded within the RF signal, and a transmitter (e.g., TX10), which is configured to transmit an RF communications signal that describes an encoded audio signal (e.g., as produced using method M100). Such a device may be configured to transmit and receive voice communications data wirelessly via any one or more of the codecs referenced herein.
Device D10 is configured to receive and transmit the RF communications signals via an antenna C30. Device D10 may also include a diplexer and one or more power amplifiers in the path to antenna C30. Chip/chipset CS10 is also configured to receive user input via keypad C10 and to display information via display C20. In this example, device D10 also includes one or more antennas C40 to support Global Positioning System (GPS) location services and/or short-range communications with an external device such as a wireless (e.g., Bluetooth™) headset. In another example, such a communications device is itself a Bluetooth™ headset and lacks keypad C10, display C20, and antenna C30.
Communications device D10 may be embodied in a variety of communications devices, including smartphones and laptop and tablet computers.
Wireless device 1102 includes a processor 1104 which controls operation of the device. Processor 1104 may also be referred to as a central processing unit (CPU). Memory 1106, which may include both read-only memory (ROM) and random access memory (RAM), provides instructions and data to processor 1104. A portion of memory 1106 may also include non-volatile random access memory (NVRAM). Processor 1104 typically performs logical and arithmetic operations based on program instructions stored within memory 1106. The instructions in memory 1106 may be executable to implement the method or methods as described herein.
Wireless device 1102 includes a housing 1108 that may include a transmitter 1110 and a receiver 1112 to allow transmission and reception of data between wireless device 1102 and a remote location. Transmitter 1110 and receiver 1112 may be combined into a transceiver 1114. An antenna 1116 may be attached to the housing 1108 and electrically coupled to the transceiver 1114. Wireless device 1102 may also include (not shown) multiple transmitters, multiple receivers, multiple transceivers and/or multiple antennas.
In this example, wireless device 1102 also includes a signal detector 1118 that may be used to detect and quantify the level of signals received by transceiver 1114. Signal detector 1118 may detect such signals as total energy, pilot energy per pseudonoise (PN) chips, power spectral density, and other signals. Wireless device 1102 also includes a digital signal processor (DSP) 1120 for use in processing signals.
The various components of wireless device 1102 are coupled together by a bus system 1122, which may include a power bus, a control signal bus, and a status signal bus in addition to a data bus. For the sake of clarity, the various busses are represented collectively as bus system 1122.
The methods and apparatus disclosed herein may be applied generally in any transceiving and/or audio sensing application, especially mobile or otherwise portable instances of such applications. For example, the range of configurations disclosed herein includes communications devices that reside in a wireless telephony communication system configured to employ a code-division multiple-access (CDMA) over-the-air interface. Nevertheless, it would be understood by those skilled in the art that a method and apparatus having features as described herein may reside in any of the various communication systems employing a wide range of technologies known to those of skill in the art, such as systems employing Voice over IP (VoIP) over wired and/or wireless (e.g., CDMA, TDMA, FDMA, and/or TD-SCDMA) transmission channels.
It is expressly contemplated and hereby disclosed that communications devices disclosed herein may be adapted for use in networks that are packet-switched (for example, wired and/or wireless networks arranged to carry audio transmissions according to protocols such as VoIP) and/or circuit-switched. It is also expressly contemplated and hereby disclosed that communications devices disclosed herein may be adapted for use in narrowband coding systems (e.g., systems that encode an audio frequency range of about four or five kilohertz) and/or for use in wideband coding systems (e.g., systems that encode audio frequencies greater than five kilohertz), including whole-band wideband coding systems and split-band wideband coding systems.
The presentation of the described configurations is provided to enable any person skilled in the art to make or use the methods and other structures disclosed herein. The flowcharts, block diagrams, and other structures shown and described herein are examples only, and other variants of these structures are also within the scope of the disclosure. Various modifications to these configurations are possible, and the generic principles presented herein may be applied to other configurations as well. Thus, the present disclosure is not intended to be limited to the configurations shown above but rather is to be accorded the widest scope consistent with the principles and novel features disclosed in any fashion herein, including in the attached claims as filed, which form a part of the original disclosure.
Those of skill in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, and symbols that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Important design requirements for implementation of a configuration as disclosed herein may include minimizing processing delay and/or computational complexity (typically measured in millions of instructions per second or MIPS), especially for computation-intensive applications, such as playback of compressed audio or audiovisual information (e.g., a file or stream encoded according to a compression format, such as one of the examples identified herein) or applications for wideband communications (e.g., voice communications at sampling rates higher than eight kilohertz, such as 12, 16, 32, 44.1, 48, or 192 kHz).
An apparatus as disclosed herein (e.g., apparatus A100, A200, MF100, MF200) may be implemented in any combination of hardware with software, and/or with firmware, that is deemed suitable for the intended application. For example, the elements of such an apparatus may be fabricated as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Any two or more, or even all, of these elements may be implemented within the same array or arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips).
One or more elements of the various implementations of the apparatus disclosed herein (e.g., apparatus A100, A200, MF100, MF200) may be implemented in whole or in part as one or more sets of instructions arranged to execute on one or more fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, digital signal processors, FPGAs (field-programmable gate arrays), ASSPs (application-specific standard products), and ASICs (application-specific integrated circuits). Any of the various elements of an implementation of an apparatus as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions, also called “processors”), and any two or more, or even all, of these elements may be implemented within the same such computer or computers.
A processor or other means for processing as disclosed herein may be fabricated as one or more electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips). Examples of such arrays include fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, DSPs, FPGAs, ASSPs, and ASICs. A processor or other means for processing as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions) or other processors. It is possible for a processor as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to a procedure of an implementation of method M100, such as a task relating to another operation of a device or system in which the processor is embedded (e.g., an audio sensing device). It is also possible for part of a method as disclosed herein to be performed by a processor of the audio sensing device and for another part of the method to be performed under the control of one or more other processors.
Those of skill will appreciate that the various illustrative modules, logical blocks, circuits, and tests and other operations described in connection with the configurations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Such modules, logical blocks, circuits, and operations may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an ASIC or ASSP, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to produce the configuration as disclosed herein. For example, such a configuration may be implemented at least in part as a hard-wired circuit, as a circuit configuration fabricated into an application-specific integrated circuit, or as a firmware program loaded into non-volatile storage or a software program loaded from or into a data storage medium as machine-readable code, such code being instructions executable by an array of logic elements such as a general purpose processor or other digital signal processing unit. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A software module may reside in a non-transitory storage medium such as RAM (random-access memory), ROM (read-only memory), nonvolatile RAM (NVRAM) such as flash RAM, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, hard disk, a removable disk, or a CD-ROM; or in any other form of storage medium known in the art. An illustrative storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
It is noted that the various methods disclosed herein (e.g., implementations of method M100 or M200) may be performed by an array of logic elements such as a processor, and that the various elements of an apparatus as described herein may be implemented as modules designed to execute on such an array. As used herein, the term “module” or “sub-module” can refer to any method, apparatus, device, unit or computer-readable data storage medium that includes computer instructions (e.g., logical expressions) in software, hardware or firmware form. It is to be understood that multiple modules or systems can be combined into one module or system and one module or system can be separated into multiple modules or systems to perform the same functions. When implemented in software or other computer-executable instructions, the elements of a process are essentially the code segments to perform the related tasks, such as with routines, programs, objects, components, data structures, and the like. The term “software” should be understood to include source code, assembly language code, machine code, binary code, firmware, macrocode, microcode, any one or more sets or sequences of instructions executable by an array of logic elements, and any combination of such examples. The program or code segments can be stored in a processor readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link.
The implementations of methods, schemes, and techniques disclosed herein may also be tangibly embodied (for example, in tangible, computer-readable features of one or more computer-readable storage media as listed herein) as one or more sets of instructions executable by a machine including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). The term “computer-readable medium” may include any medium that can store or transfer information, including volatile, nonvolatile, removable, and non-removable storage media. Examples of a computer-readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette or other magnetic storage, a CD-ROM/DVD or other optical storage, a hard disk or any other medium which can be used to store the desired information, a fiber optic medium, a radio frequency (RF) link, or any other medium which can be used to carry the desired information and can be accessed. The computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic, RF links, etc. The code segments may be downloaded via computer networks such as the Internet or an intranet. In any case, the scope of the present disclosure should not be construed as limited by such embodiments.
Each of the tasks of the methods described herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. In a typical application of an implementation of a method as disclosed herein, an array of logic elements (e.g., logic gates) is configured to perform one, more than one, or even all of the various tasks of the method. One or more (possibly all) of the tasks may also be implemented as code (e.g., one or more sets of instructions), embodied in a computer program product (e.g., one or more data storage media such as disks, flash or other nonvolatile memory cards, semiconductor memory chips, etc.), that is readable and/or executable by a machine (e.g., a computer) including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). The tasks of an implementation of a method as disclosed herein may also be performed by more than one such array or machine. In these or other implementations, the tasks may be performed within a device for wireless communications such as a cellular telephone or other device having such communications capability. Such a device may be configured to communicate with circuit-switched and/or packet-switched networks (e.g., using one or more protocols such as VoIP). For example, such a device may include RF circuitry configured to receive and/or transmit encoded frames.
It is expressly disclosed that the various methods disclosed herein may be performed by a portable communications device such as a handset, headset, or portable digital assistant (PDA), and that the various apparatus described herein may be included within such a device. A typical real-time (e.g., online) application is a telephone conversation conducted using such a mobile device.
In one or more exemplary embodiments, the operations described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, such operations may be stored on or transmitted over a computer-readable medium as one or more instructions or code. The term “computer-readable media” includes both computer-readable storage media and communication (e.g., transmission) media. By way of example, and not limitation, computer-readable storage media can comprise an array of storage elements, such as semiconductor memory (which may include without limitation dynamic or static RAM, ROM, EEPROM, and/or flash RAM), or ferroelectric, magnetoresistive, ovonic, polymeric, or phase-change memory; CD-ROM or other optical disk storage; and/or magnetic disk storage or other magnetic storage devices. Such storage media may store information in the form of instructions or data structures that can be accessed by a computer. Communication media can comprise any medium that can be used to carry desired program code in the form of instructions or data structures and that can be accessed by a computer, including any medium that facilitates transfer of a computer program from one place to another. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, and/or microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technology such as infrared, radio, and/or microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray Disc™ (Blu-Ray Disc Association, Universal City, Calif.), where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
An acoustic signal processing apparatus as described herein may be incorporated into an electronic device that accepts speech input in order to control certain operations, or that may otherwise benefit from separation of desired noises from background noises, such as communications devices. Many applications may benefit from enhancing or separating clear desired sound from background sounds originating from multiple directions. Such applications may include human-machine interfaces in electronic or computing devices which incorporate capabilities such as voice recognition and detection, speech enhancement and separation, voice-activated control, and the like. It may be desirable to implement such an acoustic signal processing apparatus to be suitable in devices that only provide limited processing capabilities.
The elements of the various implementations of the modules, elements, and devices described herein may be fabricated as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or gates. One or more elements of the various implementations of the apparatus described herein may also be implemented in whole or in part as one or more sets of instructions arranged to execute on one or more fixed or programmable arrays of logic elements such as microprocessors, embedded processors, IP cores, digital signal processors, FPGAs, ASSPs, and ASICs.
It is possible for one or more elements of an implementation of an apparatus as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to an operation of the apparatus, such as a task relating to another operation of a device or system in which the apparatus is embedded. It is also possible for one or more elements of an implementation of such an apparatus to have structure in common (e.g., a processor used to execute portions of code corresponding to different elements at different times, a set of instructions executed to perform tasks corresponding to different elements at different times, or an arrangement of electronic and/or optical devices performing operations for different elements at different times).
The present application claims priority from and is a continuation of U.S. patent application Ser. No. 14/026,765, filed on Sep. 13, 2013, which claims the benefit of and priority from U.S. Provisional Patent Application No. 61/758,152, filed on Jan. 29, 2013, the contents of each of which are expressly incorporated herein by reference in their entirety.