This invention relates to signal processing.
Voice communications over the public switched telephone network (PSTN) have traditionally been limited in bandwidth to the frequency range of 300-3400 Hz. New networks for voice communications, such as cellular telephony and voice over IP (Internet Protocol, VoIP), may not have the same bandwidth limits, and it may be desirable to transmit and receive voice communications that include a wideband frequency range over such networks. For example, it may be desirable to support an audio frequency range that extends down to 50 Hz and/or up to 7 or 8 kHz. It may also be desirable to support other applications, such as high-quality audio or audio/video conferencing, that may have audio speech content in ranges outside the traditional PSTN limits.
Extension of the range supported by a speech coder into higher frequencies may improve intelligibility. For example, the information that differentiates fricatives such as ‘s’ and ‘f’ is largely in the high frequencies. Highband extension may also improve other qualities of speech, such as presence. For example, even a voiced vowel may have spectral energy far above the PSTN limit.
One approach to wideband speech coding involves scaling a narrowband speech coding technique (e.g., one configured to encode the range of 0-4 kHz) to cover the wideband spectrum. For example, a speech signal may be sampled at a higher rate to include components at high frequencies, and a narrowband coding technique may be reconfigured to use more filter coefficients to represent this wideband signal. Narrowband coding techniques such as CELP (codebook excited linear prediction) are computationally intensive, however, and a wideband CELP coder may consume too many processing cycles to be practical for many mobile and other embedded applications. Encoding the entire spectrum of a wideband signal to a desired quality using such a technique may also lead to an unacceptably large increase in bandwidth. Moreover, transcoding of such an encoded signal would be required before even its narrowband portion could be transmitted into and/or decoded by a system that only supports narrowband coding.
Another approach to wideband speech coding involves extrapolating the highband spectral envelope from the encoded narrowband spectral envelope. While such an approach may be implemented without any increase in bandwidth and without a need for transcoding, the coarse spectral envelope or formant structure of the highband portion of a speech signal generally cannot be predicted accurately from the spectral envelope of the narrowband portion.
It may be desirable to implement wideband speech coding such that at least the narrowband portion of the encoded signal may be sent through a narrowband channel (such as a PSTN channel) without transcoding or other significant modification. Efficiency of the wideband coding extension may also be desirable, for example, to avoid a significant reduction in the number of users that may be serviced in applications such as wireless cellular telephony and broadcasting over wired and wireless channels.
In one embodiment, a method of signal processing includes calculating an envelope of a first signal that is based on a low-frequency portion of a speech signal, calculating an envelope of a second signal that is based on a high-frequency portion of the speech signal, and calculating a plurality of gain factor values according to a time-varying relation between the envelopes of the first and second signals. The method includes attenuating, based on a variation over time of a relation between the envelopes of the first and second signals, at least one of the plurality of gain factor values.
In another embodiment, an apparatus includes a first envelope calculator configured and arranged to calculate an envelope of a first signal that is based on a low-frequency portion of a speech signal, and a second envelope calculator configured and arranged to calculate an envelope of a second signal that is based on a high-frequency portion of the speech signal. The apparatus includes a factor calculator configured and arranged to calculate a plurality of gain factor values according to a time-varying relation between the envelopes of the first and second signals, and a gain factor attenuator configured and arranged to attenuate at least one of the plurality of gain factor values based on a variation over time of a relation between the envelopes of the first and second signals.
In another embodiment, a method of signal processing includes generating a highband excitation signal. In this method, generating a highband excitation signal includes spectrally extending a signal based on a lowband excitation signal. The method includes synthesizing, based on the highband excitation signal, a highband speech signal. The method includes attenuating at least one of a first plurality of gain factor values according to at least one distance among the first plurality of gain factor values and, based on a second plurality of gain factor values resulting from the attenuating, modifying a time-domain envelope of a signal that is based on the lowband excitation signal.
In another embodiment, an apparatus includes a highband excitation generator configured to generate a highband excitation signal based on a lowband excitation signal, a synthesis filter configured and arranged to produce a synthesized highband speech signal based on the highband excitation signal, and a gain factor attenuator configured and arranged to attenuate at least one of a first plurality of gain factor values according to at least one distance among the first plurality of gain factor values. The apparatus includes a gain control element configured and arranged to modify, based on a second plurality of gain factor values including the at least one attenuated gain factor value, a time-domain envelope of a signal that is based on the lowband excitation signal.
a shows a block diagram of a wideband speech encoder A100 according to an embodiment.
b shows a block diagram of an implementation A102 of wideband speech encoder A100.
a shows a block diagram of a wideband speech decoder B100 according to an embodiment.
b shows a block diagram of an implementation B102 of wideband speech decoder B100.
a shows a block diagram of an implementation A112 of filter bank A110.
b shows a block diagram of an implementation B122 of filter bank B120.
a shows bandwidth coverage of the low and high bands for one example of filter bank A110.
b shows bandwidth coverage of the low and high bands for another example of filter bank A110.
c shows a block diagram of an implementation A114 of filter bank A112.
d shows a block diagram of an implementation B124 of filter bank B122.
a shows an example of a plot of log amplitude vs. frequency for a speech signal.
b shows a block diagram of a basic linear prediction coding system.
a shows an example of a plot of log amplitude vs. frequency for a residual signal for voiced speech.
b shows an example of a plot of log amplitude vs. time for a residual signal for voiced speech.
a shows plots of signal spectra at various points in one example of a spectral extension operation.
b shows plots of signal spectra at various points in another example of a spectral extension operation.
a shows a diagram of a windowing function.
b shows an application of a windowing function as shown in
a shows a schematic diagram of an implementation D122 of delay line D120.
b shows a schematic diagram of an implementation D124 of delay line D120.
a shows a flowchart for a method M200 according to an embodiment.
b shows a flowchart for an implementation M210 of method M200.
a shows a block diagram of an implementation A232 of highband gain factor calculator A230.
b shows a block diagram of an arrangement including highband gain factor calculator A232.
FIGS. 36a and 36b show plots of examples of mappings from calculated variation value to attenuation factor value.
FIGS. 43a and 43b show plots of examples of mappings from magnitudes of a calculated variation value to smoothing factor value.
a shows one example of a one-dimensional signal, and
c shows an example of the signal of
d shows an example of the signal of
a shows a flowchart of a method QM10 according to an embodiment.
b shows a flowchart of a method QM20 according to an embodiment.
In the figures and accompanying description, the same reference labels refer to the same or analogous elements or signals.
Embodiments as described herein include systems, methods, and apparatus that may be configured to provide an extension to a narrowband speech coder to support transmission and/or storage of wideband speech signals at a bandwidth increase of only about 800 to 1000 bps (bits per second). Potential advantages of such implementations include embedded coding to support compatibility with narrowband systems, relatively easy allocation and reallocation of bits between the narrowband and highband coding channels, avoiding a computationally intensive wideband synthesis operation, and maintaining a low sampling rate for signals to be processed by computationally intensive waveform coding routines.
Unless expressly limited by its context, the term “calculating” is used herein to indicate any of its ordinary meanings, such as computing, generating, and selecting from a list of values. Where the term “comprising” is used in the present description and claims, it does not exclude other elements or operations. The term “A is based on B” is used to indicate any of its ordinary meanings, including the cases (i) “A is equal to B” and (ii) “A is based on at least B.” The term “Internet Protocol” includes version 4, as described in IETF (Internet Engineering Task Force) RFC (Request for Comments) 791, and subsequent versions such as version 6.
a shows a block diagram of a wideband speech encoder A100 according to an embodiment. Filter bank A110 is configured to filter a wideband speech signal S10 to produce a narrowband signal S20 and a highband signal S30. Narrowband encoder A120 is configured to encode narrowband signal S20 to produce narrowband (NB) filter parameters S40 and an encoded narrowband excitation signal S50. As described in further detail herein, narrowband encoder A120 is typically configured to produce narrowband filter parameters S40 and encoded narrowband excitation signal S50 as codebook indices or in another quantized form. Highband encoder A200 is configured to encode highband signal S30 according to information in encoded narrowband excitation signal S50 to produce highband coding parameters S60. As described in further detail herein, highband encoder A200 is typically configured to produce highband coding parameters S60 as codebook indices or in another quantized form. One particular example of wideband speech encoder A100 is configured to encode wideband speech signal S10 at a rate of about 8.55 kbps (kilobits per second), with about 7.55 kbps being used for narrowband filter parameters S40 and encoded narrowband excitation signal S50, and about 1 kbps being used for highband coding parameters S60.
It may be desired to combine the encoded narrowband and highband signals into a single bitstream. For example, it may be desired to multiplex the encoded signals together for transmission (e.g., over a wired, optical, or wireless transmission channel), or for storage, as an encoded wideband speech signal.
An apparatus including encoder A102 may also include circuitry configured to transmit multiplexed signal S70 into a transmission channel such as a wired, optical, or wireless channel. Such an apparatus may also be configured to perform one or more channel encoding operations on the signal, such as error correction encoding (e.g., rate-compatible convolutional encoding) and/or error detection encoding (e.g., cyclic redundancy encoding), and/or one or more layers of network protocol encoding (e.g., Ethernet, TCP/IP, cdma2000).
It may be desirable for multiplexer A130 to be configured to embed the encoded narrowband signal (including narrowband filter parameters S40 and encoded narrowband excitation signal S50) as a separable substream of multiplexed signal S70, such that the encoded narrowband signal may be recovered and decoded independently of another portion of multiplexed signal S70 such as a highband and/or lowband signal. For example, multiplexed signal S70 may be arranged such that the encoded narrowband signal may be recovered by stripping away the highband coding parameters S60. One potential advantage of such a feature is to avoid the need for transcoding the encoded wideband signal before passing it to a system that supports decoding of the narrowband signal but does not support decoding of the highband portion.
a is a block diagram of a wideband speech decoder B100 according to an embodiment. Narrowband decoder B110 is configured to decode narrowband filter parameters S40 and encoded narrowband excitation signal S50 to produce a narrowband signal S90. Highband decoder B200 is configured to decode highband coding parameters S60 according to a narrowband excitation signal S80, based on encoded narrowband excitation signal S50, to produce a highband signal S100. In this example, narrowband decoder B110 is configured to provide narrowband excitation signal S80 to highband decoder B200. Filter bank B120 is configured to combine narrowband signal S90 and highband signal S100 to produce a wideband speech signal S110.
b is a block diagram of an implementation B102 of wideband speech decoder B100 that includes a demultiplexer B130 configured to produce encoded signals S40, S50, and S60 from multiplexed signal S70. An apparatus including decoder B102 may include circuitry configured to receive multiplexed signal S70 from a transmission channel such as a wired, optical, or wireless channel. Such an apparatus may also be configured to perform one or more channel decoding operations on the signal, such as error correction decoding (e.g., rate-compatible convolutional decoding) and/or error detection decoding (e.g., cyclic redundancy decoding), and/or one or more layers of network protocol decoding (e.g., Ethernet, TCP/IP, cdma2000).
Filter bank A110 is configured to filter an input signal according to a split-band scheme to produce a low-frequency subband and a high-frequency subband. Depending on the design criteria for the particular application, the output subbands may have equal or unequal bandwidths and may be overlapping or nonoverlapping. A configuration of filter bank A110 that produces more than two subbands is also possible. For example, such a filter bank may be configured to produce one or more lowband signals that include components in a frequency range below that of narrowband signal S20 (such as the range of 50-300 Hz). It is also possible for such a filter bank to be configured to produce one or more additional highband signals that include components in a frequency range above that of highband signal S30 (such as a range of 14-20, 16-20, or 16-32 kHz). In such case, wideband speech encoder A100 may be implemented to encode this signal or signals separately, and multiplexer A130 may be configured to include the additional encoded signal or signals in multiplexed signal S70 (e.g., as a separable portion).
a shows a block diagram of an implementation A112 of filter bank A110 that is configured to produce two subband signals having reduced sampling rates. Filter bank A110 is arranged to receive a wideband speech signal S10 having a high-frequency (or highband) portion and a low-frequency (or lowband) portion. Filter bank A112 includes a lowband processing path configured to receive wideband speech signal S10 and to produce narrowband speech signal S20, and a highband processing path configured to receive wideband speech signal S10 and to produce highband speech signal S30. Lowpass filter 110 filters wideband speech signal S10 to pass a selected low-frequency subband, and highpass filter 130 filters wideband speech signal S10 to pass a selected high-frequency subband. Because both subband signals have narrower bandwidths than wideband speech signal S10, their sampling rates can be reduced to some extent without loss of information. Downsampler 120 reduces the sampling rate of the lowpass signal according to a desired decimation factor (e.g., by removing samples of the signal and/or replacing samples with average values), and downsampler 140 likewise reduces the sampling rate of the highpass signal according to another desired decimation factor.
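As an illustrative sketch of the split-band analysis just described (not a definitive implementation), the following Python fragment lowpass- and highpass-filters a wideband signal and decimates each branch; the FIR design, tap count, cutoff, and decimation factor are assumptions for illustration only.

```python
import numpy as np
from scipy import signal

def analysis_filter_bank(s10, fs=16000, cutoff=4000.0, decim=2, numtaps=101):
    # Sketch of filter bank A112: assumed FIR designs and a decimation
    # factor of 2 per branch; actual implementations may differ.
    nyq = fs / 2.0
    lp = signal.firwin(numtaps, cutoff / nyq)                   # lowpass filter 110
    hp = signal.firwin(numtaps, cutoff / nyq, pass_zero=False)  # highpass filter 130
    s20 = signal.lfilter(lp, 1.0, s10)[::decim]                 # downsampler 120
    s30 = signal.lfilter(hp, 1.0, s10)[::decim]                 # downsampler 140
    return s20, s30
```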
b shows a block diagram of a corresponding implementation B122 of filter bank B120. Upsampler 150 increases the sampling rate of narrowband signal S90 (e.g., by zero-stuffing and/or by duplicating samples), and lowpass filter 160 filters the upsampled signal to pass only a lowband portion (e.g., to prevent aliasing). Likewise, upsampler 170 increases the sampling rate of highband signal S100 and highpass filter 180 filters the upsampled signal to pass only a highband portion. The two passband signals are then summed to form wideband speech signal S110. In some implementations of decoder B100, filter bank B120 is configured to produce a weighted sum of the two passband signals according to one or more weights received and/or calculated by highband decoder B200. A configuration of filter bank B120 that combines more than two passband signals is also contemplated.
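A corresponding synthesis sketch under the same assumed parameters: each subband is zero-stuffed, filtered to suppress images, and the two passband signals are summed.

```python
import numpy as np
from scipy import signal

def synthesis_filter_bank(s90, s100, fs=16000, cutoff=4000.0, up=2, numtaps=101):
    # Sketch of filter bank B122: zero-stuffing upsamplers 150/170 followed
    # by image-suppression filters 160/180 and a sum of the passband signals.
    nyq = fs / 2.0
    lp = signal.firwin(numtaps, cutoff / nyq)
    hp = signal.firwin(numtaps, cutoff / nyq, pass_zero=False)
    lo = np.zeros(len(s90) * up)
    lo[::up] = s90                              # zero-stuffing (upsampler 150)
    hi = np.zeros(len(s100) * up)
    hi[::up] = s100                             # zero-stuffing (upsampler 170)
    s_lo = up * signal.lfilter(lp, 1.0, lo)     # gain `up` restores the level
    s_hi = up * signal.lfilter(hp, 1.0, hi)
    return s_lo + s_hi                          # wideband speech signal S110
```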
Each of the filters 110, 130, 160, 180 may be implemented as a finite-impulse-response (FIR) filter or as an infinite-impulse-response (IIR) filter. The frequency responses of encoder filters 110 and 130 may have symmetric or dissimilarly shaped transition regions between stopband and passband. Likewise, the frequency responses of decoder filters 160 and 180 may have symmetric or dissimilarly shaped transition regions between stopband and passband. It may be desirable but is not strictly necessary for lowpass filter 110 to have the same response as lowpass filter 160, and for highpass filter 130 to have the same response as highpass filter 180. In one example, the two filter pairs 110, 130 and 160, 180 are quadrature mirror filter (QMF) banks, with filter pair 110, 130 having the same coefficients as filter pair 160, 180.
In a typical example, lowpass filter 110 has a passband that includes the limited PSTN range of 300-3400 Hz (e.g., the band from 0 to 4 kHz).
In the example of
In the alternative example of
In a typical handset for telephonic communication, one or more of the transducers (i.e., the microphone and the earpiece or loudspeaker) lacks an appreciable response over the frequency range of 7-8 kHz. In the example of
In some implementations, providing an overlap between subbands as in the example of
Overlapping of subbands allows a smooth blending of lowband and highband that may lead to fewer audible artifacts, reduced aliasing, and/or a less noticeable transition from one band to the other. Moreover, the coding efficiency of narrowband encoder A120 (for example, a waveform coder) may drop with increasing frequency. For example, coding quality of the narrowband coder may be reduced at low bit rates, especially in the presence of background noise. In such cases, providing an overlap of the subbands may increase the quality of reproduced frequency components in the overlapped region.
Such smooth blending may be especially desirable for an implementation in which narrowband encoder A120 and highband encoder A200 operate according to different coding methodologies. For example, different coding techniques may produce signals that sound quite different. A coder that encodes a spectral envelope in the form of codebook indices may produce a signal having a different sound than a coder that encodes the amplitude spectrum instead. A time-domain coder (e.g., a pulse-code-modulation or PCM coder) may produce a signal having a different sound than a frequency-domain coder. A coder that encodes a signal with a representation of the spectral envelope and the corresponding residual signal may produce a signal having a different sound than a coder that encodes a signal with only a representation of the spectral envelope. A coder that encodes a signal as a representation of its waveform may produce an output having a different sound than that from a sinusoidal coder. In such cases, using filters having sharp transition regions to define nonoverlapping subbands may lead to an abrupt and perceptually noticeable transition between the subbands in the synthesized wideband signal.
Although QMF filter banks having complementary overlapping frequency responses are often used in subband techniques, such filters are unsuitable for at least some of the wideband coding implementations described herein. A QMF filter bank at the encoder is configured to create a significant degree of aliasing that is canceled in the corresponding QMF filter bank at the decoder. Such an arrangement may not be appropriate for an application in which the signal incurs a significant amount of distortion between the filter banks, as the distortion may reduce the effectiveness of the alias cancellation property. For example, applications described herein include coding implementations configured to operate at very low bit rates. As a consequence of the very low bit rate, the decoded signal is likely to appear significantly distorted as compared to the original signal, such that use of QMF filter banks may lead to uncanceled aliasing. Applications that use QMF filter banks typically have higher bit rates (e.g., over 12 kbps for AMR, and 64 kbps for G.722).
Additionally, a coder may be configured to produce a synthesized signal that is perceptually similar to the original signal but which actually differs significantly from the original signal. For example, a coder that derives the highband excitation from the narrowband residual as described herein may produce such a signal, as the actual highband residual may be completely absent from the decoded signal. Use of QMF filter banks in such applications may lead to a significant degree of distortion caused by uncanceled aliasing.
The amount of distortion caused by QMF aliasing may be reduced if the affected subband is narrow, as the effect of the aliasing is limited to a bandwidth equal to the width of the subband. For examples as described herein in which each subband includes about half of the wideband bandwidth, however, distortion caused by uncanceled aliasing could affect a significant part of the signal. The quality of the signal may also be affected by the location of the frequency band over which the uncanceled aliasing occurs. For example, distortion created near the center of a wideband speech signal (e.g., between 3 and 4 kHz) may be much more objectionable than distortion that occurs near an edge of the signal (e.g., above 6 kHz).
While the responses of the filters of a QMF filter bank are strictly related to one another, the lowband and highband paths of filter banks A110 and B120 may be configured to have spectra that are completely unrelated apart from the overlapping of the two subbands. We define the overlap of the two subbands as the distance from the point at which the frequency response of the highband filter drops to −20 dB up to the point at which the frequency response of the lowband filter drops to −20 dB. In various examples of filter bank A110 and/or B120, this overlap ranges from around 200 Hz to around 1 kHz. The range of about 400 to about 600 Hz may represent a desirable tradeoff between coding efficiency and perceptual smoothness. In one particular example as mentioned above, the overlap is around 500 Hz.
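The overlap as defined here can be measured directly from the filter responses. A minimal sketch, assuming the filters are given as FIR tap arrays:

```python
import numpy as np
from scipy import signal

def subband_overlap_hz(lp_taps, hp_taps, fs=16000, thresh_db=-20.0):
    # Distance from the frequency at which the highband response rises
    # past -20 dB to the frequency at which the lowband response drops
    # below -20 dB, per the definition above.
    w, h_lp = signal.freqz(lp_taps, worN=8192, fs=fs)
    _, h_hp = signal.freqz(hp_taps, worN=8192, fs=fs)
    lp_db = 20 * np.log10(np.abs(h_lp) + 1e-12)
    hp_db = 20 * np.log10(np.abs(h_hp) + 1e-12)
    f_lp_edge = w[np.where(lp_db >= thresh_db)[0][-1]]  # lowpass -20 dB point
    f_hp_edge = w[np.where(hp_db >= thresh_db)[0][0]]   # highband -20 dB point
    return f_lp_edge - f_hp_edge
```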
It may be desirable to implement filter bank A112 and/or B122 to perform operations as illustrated in
It is noted that as a consequence of the spectral reversal operation, the spectrum of highband signal S30 is reversed. Subsequent operations in the encoder and corresponding decoder may be configured accordingly. For example, highband excitation generator A300 as described herein may be configured to produce a highband excitation signal S120 that also has a spectrally reversed form.
d shows a block diagram of an implementation B124 of filter bank B122 that performs a functional equivalent of upsampling and highpass filtering operations using a series of interpolation, resampling, and other operations. Filter bank B124 includes a spectral reversal operation in the highband that reverses a similar operation as performed, for example, in a filter bank of the encoder such as filter bank A114. In this particular example, filter bank B124 also includes notch filters in the lowband and highband that attenuate a component of the signal at 7100 Hz, although such filters are optional and need not be included. The Patent Application "SYSTEMS, METHODS, AND APPARATUS FOR SPEECH SIGNAL FILTERING," filed herewith and now published as U.S. Pub. No. 2007/0088558, includes additional description and figures relating to responses of elements of particular implementations of filter banks A110 and B120, and this material is hereby incorporated by reference.
Narrowband encoder A120 is implemented according to a source-filter model that encodes the input speech signal as (A) a set of parameters that describe a filter and (B) an excitation signal that drives the described filter to produce a synthesized reproduction of the input speech signal.
b shows an example of a basic source-filter arrangement as applied to coding of the spectral envelope of narrowband signal S20. An analysis module calculates a set of parameters that characterize a filter corresponding to the speech sound over a period of time (typically 20 msec). A whitening filter (also called an analysis or prediction error filter) configured according to those filter parameters removes the spectral envelope to spectrally flatten the signal. The resulting whitened signal (also called a residual) has less energy and thus less variance and is easier to encode than the original speech signal. Errors resulting from coding of the residual signal may also be spread more evenly over the spectrum. The filter parameters and residual are typically quantized for efficient transmission over the channel. At the decoder, a synthesis filter configured according to the filter parameters is excited by a signal based on the residual to produce a synthesized version of the original speech sound. The synthesis filter is typically configured to have a transfer function that is the inverse of the transfer function of the whitening filter.
The analysis module may be configured to analyze the samples of each frame directly, or the samples may be weighted first according to a windowing function (for example, a Hamming window). The analysis may also be performed over a window that is larger than the frame, such as a 30-msec window. This window may be symmetric (e.g. 5-20-5, such that it includes the 5 milliseconds immediately before and after the 20-millisecond frame) or asymmetric (e.g. 10-20, such that it includes the last 10 milliseconds of the preceding frame). An LPC analysis module is typically configured to calculate the LP filter coefficients using a Levinson-Durbin recursion or the Leroux-Gueguen algorithm. In another implementation, the analysis module may be configured to calculate a set of cepstral coefficients for each frame instead of a set of LP filter coefficients.
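As a sketch of such an analysis module, the following is a textbook Levinson-Durbin recursion (windowing and framing are assumed to have been done by the caller; the order of 10 is illustrative):

```python
import numpy as np

def lpc_levinson_durbin(frame, order=10):
    # Autocorrelation sequence r[k], k = 0..order.
    r = np.array([np.dot(frame[: len(frame) - k], frame[k:])
                  for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        k = -acc / err                            # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[:i][::-1] # update predictor coefficients
        err *= 1.0 - k * k                        # prediction-error energy
    return a, err
    # Whitening (prediction error) filter: residual = lfilter(a, 1.0, frame)
```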
The output rate of encoder A120 may be reduced significantly, with relatively little effect on reproduction quality, by quantizing the filter parameters. Linear prediction filter coefficients are difficult to quantize efficiently and are usually mapped into another representation, such as line spectral pairs (LSPs) or line spectral frequencies (LSFs), for quantization and/or entropy encoding. In the example of
Quantizer 230 is configured to quantize the set of narrowband LSFs (or other coefficient representation), and narrowband encoder A122 is configured to output the result of this quantization as the narrowband filter parameters S40. Such a quantizer typically includes a vector quantizer that encodes the input vector as an index to a corresponding vector entry in a table or codebook.
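Such a vector quantizer might be sketched as a nearest-neighbor codebook search (the codebook contents are assumed to have been trained offline):

```python
import numpy as np

def vq_encode(v, codebook):
    # Return the index of the codebook row closest to input vector v.
    return int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))

def vq_decode(index, codebook):
    # The decoder recovers the quantized vector by table lookup.
    return codebook[index]
```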
As seen in
It is desirable for narrowband encoder A120 to generate the encoded narrowband excitation signal according to the same filter parameter values that will be available to the corresponding narrowband decoder. In this manner, the resulting encoded narrowband excitation signal may already account to some extent for nonidealities in those parameter values, such as quantization error. Accordingly, it is desirable to configure the whitening filter using the same coefficient values that will be available at the decoder. In the basic example of encoder A122 as shown in
Some implementations of narrowband encoder A120 are configured to calculate encoded narrowband excitation signal S50 by identifying one among a set of codebook vectors that best matches the residual signal. It is noted, however, that narrowband encoder A120 may also be implemented to calculate a quantized representation of the residual signal without actually generating the residual signal. For example, narrowband encoder A120 may be configured to use a number of codebook vectors to generate corresponding synthesized signals (e.g., according to a current set of filter parameters), and to select the codebook vector associated with the generated signal that best matches the original narrowband signal S20 in a perceptually weighted domain.
The system of narrowband encoder A122 and narrowband decoder B112 is a basic example of an analysis-by-synthesis speech codec. Codebook excited linear prediction (CELP) coding is one popular family of analysis-by-synthesis coding, and implementations of such coders may perform waveform encoding of the residual, including such operations as selection of entries from fixed and adaptive codebooks, error minimization operations, and/or perceptual weighting operations. Other implementations of analysis-by-synthesis coding include mixed excitation linear prediction (MELP), algebraic CELP (ACELP), relaxation CELP (RCELP), regular pulse excitation (RPE), multi-pulse CELP (MPE), and vector-sum excited linear prediction (VSELP) coding. Related coding methods include multi-band excitation (MBE) and prototype waveform interpolation (PWI) coding. Examples of standardized analysis-by-synthesis speech codecs include the ETSI (European Telecommunications Standards Institute)-GSM full rate codec (GSM 06.10), which uses residual excited linear prediction (RELP); the GSM enhanced full rate codec (ETSI-GSM 06.60); the ITU (International Telecommunication Union) standard 11.8 kb/s G.729 Annex E coder; the IS (Interim Standard)-641 codecs for IS-136 (a time-division multiple access scheme); the GSM adaptive multirate (GSM-AMR) codecs; and the 4GV™ (Fourth-Generation Vocoder™) codec (QUALCOMM Incorporated, San Diego, Calif.). Narrowband encoder A120 and corresponding decoder B110 may be implemented according to any of these technologies, or any other speech coding technology (whether known or to be developed) that represents a speech signal as (A) a set of parameters that describe a filter and (B) an excitation signal used to drive the described filter to reproduce the speech signal.
Even after the whitening filter has removed the coarse spectral envelope from narrowband signal S20, a considerable amount of fine harmonic structure may remain, especially for voiced speech.
Coding efficiency and/or speech quality may be increased by using one or more parameter values to encode characteristics of the pitch structure. One important characteristic of the pitch structure is the frequency of the first harmonic (also called the fundamental frequency), which is typically in the range of 60 to 400 Hz. This characteristic is typically encoded as the inverse of the fundamental frequency, also called the pitch lag. The pitch lag indicates the number of samples in one pitch period and may be encoded as one or more codebook indices. Speech signals from male speakers tend to have larger pitch lags than speech signals from female speakers.
Another signal characteristic relating to the pitch structure is periodicity, which indicates the strength of the harmonic structure or, in other words, the degree to which the signal is harmonic or nonharmonic. Two typical indicators of periodicity are zero crossings and normalized autocorrelation functions (NACFs). Periodicity may also be indicated by the pitch gain, which is commonly encoded as a codebook gain (e.g., a quantized adaptive codebook gain).
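A minimal sketch of pitch-lag estimation together with an NACF periodicity measure; the lag search bounds follow the 60-400 Hz fundamental range mentioned above, while the rest is an illustrative assumption:

```python
import numpy as np

def pitch_lag_and_nacf(frame, fs=8000, f_lo=60.0, f_hi=400.0):
    lag_min = int(fs / f_hi)                  # shortest pitch period in samples
    lag_max = min(int(fs / f_lo), len(frame) - 1)
    best_lag, best_nacf = lag_min, 0.0
    for lag in range(lag_min, lag_max + 1):
        a, b = frame[:-lag], frame[lag:]
        denom = np.sqrt(np.dot(a, a) * np.dot(b, b)) + 1e-12
        nacf = np.dot(a, b) / denom           # normalized autocorrelation
        if nacf > best_nacf:
            best_lag, best_nacf = lag, nacf
    return best_lag, best_nacf
```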
Narrowband encoder A120 may include one or more modules configured to encode the long-term harmonic structure of narrowband signal S20. As shown in
An implementation of narrowband decoder B110 according to a paradigm as shown in
In an implementation of wideband speech encoder A100 according to a paradigm as shown in
In addition to parameters that characterize the short-term and/or long-term structure of narrowband signal S20, narrowband encoder A120 may produce parameter values that relate to other characteristics of narrowband signal S20. These values, which may be suitably quantized for output by wideband speech encoder A100, may be included among the narrowband filter parameters S40 or outputted separately. Highband encoder A200 may also be configured to calculate highband coding parameters S60 according to one or more of these additional parameters (e.g., after dequantization). At wideband speech decoder B100, highband decoder B200 may be configured to receive the parameter values via narrowband decoder B110 (e.g., after dequantization). Alternatively, highband decoder B200 may be configured to receive (and possibly to dequantize) the parameter values directly.
In one example of additional narrowband coding parameters, narrowband encoder A120 produces values for spectral tilt and speech mode parameters for each frame. Spectral tilt relates to the shape of the spectral envelope over the passband and is typically represented by the quantized first reflection coefficient. For most voiced sounds, the spectral energy decreases with increasing frequency, such that the first reflection coefficient is negative and may approach −1. Most unvoiced sounds have a spectrum that is either flat, such that the first reflection coefficient is close to zero, or has more energy at high frequencies, such that the first reflection coefficient is positive and may approach +1.
Speech mode (also called voicing mode) indicates whether the current frame represents voiced or unvoiced speech. This parameter may have a binary value based on one or more measures of periodicity (e.g., zero crossings, NACFs, pitch gain) and/or voice activity for the frame, such as a relation between such a measure and a threshold value. In other implementations, the speech mode parameter has one or more other states to indicate modes such as silence or background noise, or a transition between silence and voiced speech.
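These two parameters might be sketched as follows. The sign convention k1 = -r(1)/r(0) for the first reflection coefficient is an assumption chosen so that voiced frames (energy falling with frequency) yield a value approaching -1, as described above; the NACF threshold is likewise illustrative:

```python
import numpy as np

def spectral_tilt_and_mode(frame, nacf, nacf_thresh=0.5):
    r0 = np.dot(frame, frame) + 1e-12
    r1 = np.dot(frame[:-1], frame[1:])
    tilt = -r1 / r0                    # first reflection coefficient (assumed sign convention)
    voiced = int(nacf > nacf_thresh)   # binary speech mode from a periodicity measure
    return tilt, voiced
```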
Highband encoder A200 is configured to encode highband signal S30 according to a source-filter model, with the excitation for this filter being based on the encoded narrowband excitation signal.
Quantizer 420 is configured to quantize the set of highband LSFs (or other coefficient representation, such as ISPs), and highband encoder A202 is configured to output the result of this quantization as the highband filter parameters S60a. Such a quantizer typically includes a vector quantizer that encodes the input vector as an index to a corresponding vector entry in a table or codebook.
Highband encoder A202 also includes a synthesis filter A220 configured to produce a synthesized highband signal S130 according to highband excitation signal S120 and the encoded spectral envelope (e.g., the set of LP filter coefficients) produced by analysis module A210. Synthesis filter A220 is typically implemented as an IIR filter, although FIR implementations may also be used. In a particular example, synthesis filter A220 is implemented as a sixth-order linear autoregressive filter.
Highband gain factor calculator A230 calculates one or more differences between the levels of the original highband signal S30 and synthesized highband signal S130 to specify a gain envelope for the frame. Quantizer 430, which may be implemented as a vector quantizer that encodes the input vector as an index to a corresponding vector entry in a table or codebook, quantizes the value or values specifying the gain envelope, and highband encoder A202 is configured to output the result of this quantization as highband gain factors S60b.
In an implementation as shown in
In one particular example, analysis module A210 and highband gain calculator A230 output a set of six LSFs and a set of five gain values per frame, respectively, such that a wideband extension of the narrowband signal S20 may be achieved with only eleven additional values per frame. The ear tends to be less sensitive to frequency errors at high frequencies, such that highband coding at a low LPC order may produce a signal having a comparable perceptual quality to narrowband coding at a higher LPC order. A typical implementation of highband encoder A200 may be configured to output 8 to 12 bits per frame for high-quality reconstruction of the spectral envelope and another 8 to 12 bits per frame for high-quality reconstruction of the temporal envelope. In another particular example, analysis module A210 outputs a set of eight LSFs per frame.
Some implementations of highband encoder A200 are configured to produce highband excitation signal S120 by generating a random noise signal having highband frequency components and amplitude-modulating the noise signal according to the time-domain envelope of narrowband signal S20, narrowband excitation signal S80, or highband signal S30. While such a noise-based method may produce adequate results for unvoiced sounds, it may not be desirable for voiced sounds, whose residuals are usually harmonic and consequently have some periodic structure.
Highband excitation generator A300 is configured to generate highband excitation signal S120 by extending the spectrum of narrowband excitation signal S80 into the highband frequency range.
In one example, spectrum extender A400 is configured to perform a spectral folding operation (also called mirroring) on narrowband excitation signal S80 to produce harmonically extended signal S160. Spectral folding may be performed by zero-stuffing excitation signal S80 and then applying a highpass filter to retain the alias. In another example, spectrum extender A400 is configured to produce harmonically extended signal S160 by spectrally translating narrowband excitation signal S80 into the highband (e.g., via upsampling followed by multiplication with a constant-frequency cosine signal).
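Both operations can be sketched briefly; the filter and resampling parameters are assumptions:

```python
import numpy as np
from scipy import signal

def spectral_fold(s80, up=2, numtaps=101):
    # Zero-stuffing creates a mirror image of the spectrum; a highpass
    # filter then retains that alias as the extended signal.
    x = np.zeros(len(s80) * up)
    x[::up] = s80
    hp = signal.firwin(numtaps, 0.5, pass_zero=False)  # cutoff at half Nyquist
    return signal.lfilter(hp, 1.0, x)

def spectral_translate(s80, up=2, f_shift=4000.0, fs_out=16000):
    # Upsample, then shift the spectrum up by multiplying with a
    # constant-frequency cosine.
    x = signal.resample_poly(s80, up, 1)
    n = np.arange(len(x))
    return x * np.cos(2.0 * np.pi * f_shift * n / fs_out)
```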
Spectral folding and translation methods may produce spectrally extended signals whose harmonic structure is discontinuous with the original harmonic structure of narrowband excitation signal S80 in phase and/or frequency. For example, such methods may produce signals having peaks that are not generally located at multiples of the fundamental frequency, which may cause tinny-sounding artifacts in the reconstructed speech signal. These methods also tend to produce high-frequency harmonics that have unnaturally strong tonal characteristics. Moreover, because a PSTN signal may be sampled at 8 kHz but bandlimited to no more than 3400 Hz, the upper spectrum of narrowband excitation signal S80 may contain little or no energy, such that an extended signal generated according to a spectral folding or spectral translation operation may have a spectral hole above 3400 Hz.
Other methods of generating harmonically extended signal S160 include identifying one or more fundamental frequencies of narrowband excitation signal S80 and generating harmonic tones according to that information. For example, the harmonic structure of an excitation signal may be characterized by the fundamental frequency together with amplitude and phase information. Another implementation of highband excitation generator A300 generates a harmonically extended signal S160 based on the fundamental frequency and amplitude (as indicated, for example, by the pitch lag and pitch gain). Unless the harmonically extended signal is phase-coherent with narrowband excitation signal S80, however, the quality of the resulting decoded speech may not be acceptable.
A nonlinear function may be used to create a highband excitation signal that is phase-coherent with the narrowband excitation and preserves the harmonic structure without phase discontinuity. A nonlinear function may also provide an increased noise level between high-frequency harmonics, which tends to sound more natural than the tonal high-frequency harmonics produced by methods such as spectral folding and spectral translation. Typical memoryless nonlinear functions that may be applied by various implementations of spectrum extender A400 include the absolute value function (also called fullwave rectification), halfwave rectification, squaring, cubing, and clipping. Other implementations of spectrum extender A400 may be configured to apply a nonlinear function having memory.
Downsampler 530 is configured to downsample the spectrally extended result of applying the nonlinear function. It may be desirable for downsampler 530 to perform a bandpass filtering operation to select a desired frequency band of the spectrally extended signal before reducing the sampling rate (for example, to reduce or avoid aliasing or corruption by an unwanted image). It may also be desirable for downsampler 530 to reduce the sampling rate in more than one stage.
a is a diagram that shows the signal spectra at various points in one example of a spectral extension operation, where the frequency scale is the same across the various plots. Plot (a) shows the spectrum of one example of narrowband excitation signal S80. Plot (b) shows the spectrum after signal S80 has been upsampled by a factor of eight. Plot (c) shows an example of the extended spectrum after application of a nonlinear function. Plot (d) shows the spectrum after lowpass filtering. In this example, the passband extends to the upper frequency limit of highband signal S30 (e.g., 7 kHz or 8 kHz).
Plot (e) shows the spectrum after a first stage of downsampling, in which the sampling rate is reduced by a factor of four to obtain a wideband signal. Plot (f) shows the spectrum after a highpass filtering operation to select the highband portion of the extended signal, and plot (g) shows the spectrum after a second stage of downsampling, in which the sampling rate is reduced by a factor of two. In one particular example, downsampler 530 performs the highpass filtering and second stage of downsampling by passing the wideband signal through highpass filter 130 and downsampler 140 of filter bank A112 (or other structures or routines having the same response) to produce a spectrally extended signal having the frequency range and sampling rate of highband signal S30.
As may be seen in plot (g), downsampling of the highpass signal shown in plot (f) causes a reversal of its spectrum. In this example, downsampler 530 is also configured to perform a spectral flipping operation on the signal. Plot (h) shows a result of applying the spectral flipping operation, which may be performed by multiplying the signal with the function e^(jnπ) or, equivalently, the sequence (−1)^n, whose values alternate between +1 and −1. Such an operation is equivalent to shifting the digital spectrum of the signal in the frequency domain by a distance of π. It is noted that the same result may also be obtained by applying the downsampling and spectral flipping operations in a different order. The operations of upsampling and/or downsampling may also be configured to include resampling to obtain a spectrally extended signal having the sampling rate of highband signal S30 (e.g., 7 kHz).
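The sequence of plots (a)-(h) might be sketched end to end as follows, using fullwave rectification as the nonlinear function; the 8x/4x/2x factors follow the example above, while the filter designs are assumptions:

```python
import numpy as np
from scipy import signal

def extend_spectrum(s80):
    x = signal.resample_poly(s80, 8, 1)                    # plot (b): upsample by 8
    x = np.abs(x)                                          # plot (c): nonlinear function
    x = signal.lfilter(signal.firwin(101, 0.25), 1.0, x)   # plot (d): lowpass to band edge
    x = signal.resample_poly(x, 1, 4)                      # plot (e): downsample by 4
    hp = signal.firwin(101, 0.5, pass_zero=False)
    x = signal.lfilter(hp, 1.0, x)                         # plot (f): select highband
    x = x[::2]                                             # plot (g): downsample by 2
    n = np.arange(len(x))                                  #   (spectrum reverses)
    return x * (-1.0) ** n                                 # plot (h): spectral flip
```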
As noted above, filter banks A110 and B120 may be implemented such that one or both of the narrowband and highband signals S20, S30 has a spectrally reversed form at the output of filter bank A110, is encoded and decoded in the spectrally reversed form, and is spectrally reversed again at filter bank B120 before being output in wideband speech signal S110. In such case, of course, a spectral flipping operation as shown in
The various tasks of upsampling and downsampling of a spectral extension operation as performed by spectrum extender A402 may be configured and arranged in many different ways. For example,
Plot (d) shows the spectrum after a spectral reversal operation. Plot (e) shows the spectrum after a single stage of downsampling, in which the sampling rate is reduced by a factor of two to obtain the desired spectrally extended signal. In this example, the signal is in spectrally reversed form and may be used in an implementation of highband encoder A200 that processes highband signal S30 in such a form.
The spectrally extended signal produced by nonlinear function calculator 520 is likely to have a pronounced dropoff in amplitude as frequency increases. Spectrum extender A402 includes a spectral flattener 540 configured to perform a whitening operation on the downsampled signal. Spectral flattener 540 may be configured to perform a fixed whitening operation or to perform an adaptive whitening operation. In a particular example of adaptive whitening, spectral flattener 540 includes an LPC analysis module configured to calculate a set of four filter coefficients from the downsampled signal and a fourth-order analysis filter configured to whiten the signal according to those coefficients. Other implementations of spectrum extender A400 include configurations in which spectral flattener 540 operates on the spectrally extended signal before downsampler 530.
Highband excitation generator A300 may be implemented to output harmonically extended signal S160 as highband excitation signal S120. In some cases, however, using only a harmonically extended signal as the highband excitation may result in audible artifacts. The harmonic structure of speech is generally less pronounced in the highband than in the low band, and using too much harmonic structure in the highband excitation signal can result in a buzzy sound. This artifact may be especially noticeable in speech signals from female speakers.
Embodiments include implementations of highband excitation generator A300 that are configured to mix harmonically extended signal S160 with a noise signal. As shown in
Before being mixed with harmonically extended signal S160, the random noise signal produced by noise generator 480 may be amplitude-modulated to have a time-domain envelope that approximates the energy distribution over time of narrowband signal S20, highband signal S30, narrowband excitation signal S80, or harmonically extended signal S160. As shown in
In an implementation A304 of highband excitation generator A302, as shown in the block diagram of
Envelope calculator 460 may be configured to perform an envelope calculation as a task that includes a series of subtasks.
y(n) = a x(n) + (1 − a) y(n − 1),   (1)
where x is the filter input, y is the filter output, n is a time-domain index, and a is a smoothing coefficient having a value between 0.5 and 1. The value of the smoothing coefficient a may be fixed or, in an alternative implementation, may be adaptive according to an indication of noise in the input signal, such that a is closer to 1 in the absence of noise and closer to 0.5 in the presence of noise. Subtask T130 applies a square root function to each sample of the smoothed sequence to produce the time-domain envelope.
Such an implementation of envelope calculator 460 may be configured to perform the various subtasks of task T100 in serial and/or parallel fashion. In further implementations of task T100, subtask T110 may be preceded by a bandpass operation configured to select a desired frequency portion of the signal whose envelope is to be modeled, such as the range of 3-4 kHz.
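A sketch of this envelope calculation and the modulation that follows it, using expression (1) with an illustrative value of the smoothing coefficient a:

```python
import numpy as np
from scipy import signal

def time_domain_envelope(x, a=0.9):
    # Square each sample, smooth with y(n) = a*x(n) + (1-a)*y(n-1)
    # (expression (1)), then take the square root (subtask T130).
    smoothed = signal.lfilter([a], [1.0, -(1.0 - a)], x * x)
    return np.sqrt(smoothed)

def modulated_noise(envelope, seed=0):
    # Amplitude-modulate a random noise signal by the calculated envelope
    # to produce a modulated noise signal such as S170.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(len(envelope)) * envelope
```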
Combiner 490 is configured to mix harmonically extended signal S160 and modulated noise signal S170 to produce highband excitation signal S120. Implementations of combiner 490 may be configured, for example, to calculate highband excitation signal S120 as a sum of harmonically extended signal S160 and modulated noise signal S170. Such an implementation of combiner 490 may be configured to calculate highband excitation signal S120 as a weighted sum by applying a weighting factor to harmonically extended signal S160 and/or to modulated noise signal S170 before the summation. Each such weighting factor may be calculated according to one or more criteria and may be a fixed value or, alternatively, an adaptive value that is calculated on a frame-by-frame or subframe-by-subframe basis.
Weighting factor calculator 550 may be configured to calculate weighting factors S180 and S190 according to a desired ratio of harmonic content to noise content in highband excitation signal S120. For example, it may be desirable for combiner 492 to produce highband excitation signal S120 to have a ratio of harmonic energy to noise energy similar to that of highband signal S30. In some implementations of weighting factor calculator 550, weighting factors S180, S190 are calculated according to one or more parameters relating to a periodicity of narrowband signal S20 or of the narrowband residual signal, such as pitch gain and/or speech mode. Such an implementation of weighting factor calculator 550 may be configured to assign a value to harmonic weighting factor S180 that is proportional to the pitch gain, for example, and/or to assign a higher value to noise weighting factor S190 for unvoiced speech signals than for voiced speech signals.
In other implementations, weighting factor calculator 550 is configured to calculate values for harmonic weighting factor S180 and/or noise weighting factor S190 according to a measure of periodicity of highband signal S30. In one such example, weighting factor calculator 550 calculates harmonic weighting factor S180 as the maximum value of the autocorrelation coefficient of highband signal S30 for the current frame or subframe, where the autocorrelation is performed over a search range that includes a delay of one pitch lag and does not include a delay of zero samples.
In a second stage, a delayed frame is constructed by applying the corresponding identified delay to each subframe, concatenating the resulting subframes to construct an optimally delayed frame, and calculating harmonic weighting factor S180 as the correlation coefficient between the original frame and the optimally delayed frame. In a further alternative, weighting factor calculator 550 calculates harmonic weighting factor S180 as an average of the maximum autocorrelation coefficients obtained in the first stage for each subframe. Implementations of weighting factor calculator 550 may also be configured to scale the correlation coefficient, and/or to combine it with another value, to calculate the value for harmonic weighting factor S180.
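A sketch of the autocorrelation-based weighting described above, for a single frame or subframe; the half-width of the search range around one pitch lag is an assumption:

```python
import numpy as np

def harmonic_weighting_factor(hb_frame, pitch_lag, half_width=8):
    # Maximum normalized autocorrelation over a search range that includes
    # a delay of one pitch lag and excludes a delay of zero samples.
    best = 0.0
    for lag in range(max(1, pitch_lag - half_width),
                     min(pitch_lag + half_width, len(hb_frame) - 1) + 1):
        a, b = hb_frame[:-lag], hb_frame[lag:]
        denom = np.sqrt(np.dot(a, a) * np.dot(b, b)) + 1e-12
        best = max(best, np.dot(a, b) / denom)
    return best
```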
It may be desirable for weighting factor calculator 550 to calculate a measure of periodicity of highband signal S30 only in cases where a presence of periodicity in the frame is otherwise indicated. For example, weighting factor calculator 550 may be configured to calculate a measure of periodicity of highband signal S30 according to a relation between another indicator of periodicity of the current frame, such as pitch gain, and a threshold value. In one example, weighting factor calculator 550 is configured to perform an autocorrelation operation on highband signal S30 only if the frame's pitch gain (e.g., the adaptive codebook gain of the narrowband residual) has a value of more than 0.5 (alternatively, at least 0.5). In another example, weighting factor calculator 550 is configured to perform an autocorrelation operation on highband signal S30 only for frames having particular states of speech mode (e.g., only for voiced signals). In such cases, weighting factor calculator 550 may be configured to assign a default weighting factor for frames having other states of speech mode and/or lesser values of pitch gain.
Embodiments include further implementations of weighting factor calculator 550 that are configured to calculate weighting factors according to characteristics other than or in addition to periodicity. For example, such an implementation may be configured to assign a higher value to noise gain factor S190 for speech signals having a large pitch lag than for speech signals having a small pitch lag. Another such implementation of weighting factor calculator 550 is configured to determine a measure of harmonicity of wideband speech signal S10, or of highband signal S30, according to a measure of the energy of the signal at multiples of the fundamental frequency relative to the energy of the signal at other frequency components.
Some implementations of wideband speech encoder A100 are configured to output an indication of periodicity or harmonicity (e.g. a one-bit flag indicating whether the frame is harmonic or nonharmonic) based on the pitch gain and/or another measure of periodicity or harmonicity as described herein. In one example, a corresponding wideband speech decoder B100 uses this indication to configure an operation such as weighting factor calculation. In another example, such an indication is used at the encoder and/or decoder in calculating a value for a speech mode parameter.
It may be desirable for highband excitation generator A302 to generate highband excitation signal S120 such that the energy of the excitation signal is substantially unaffected by the particular values of weighting factors S180 and S190. In such case, weighting factor calculator 550 may be configured to calculate a value for harmonic weighting factor S180 or for noise weighting factor S190 (or to receive such a value from storage or another element of highband encoder A200) and to derive a value for the other weighting factor according to an expression such as
(W_harmonic)^2 + (W_noise)^2 = 1,   (2)
where W_harmonic denotes harmonic weighting factor S180 and W_noise denotes noise weighting factor S190. Alternatively, weighting factor calculator 550 may be configured to select, according to a value of a periodicity measure for the current frame or subframe, a corresponding one among a plurality of pairs of weighting factors S180, S190, where the pairs are precalculated to satisfy a constant-energy ratio such as expression (2). For an implementation of weighting factor calculator 550 in which expression (2) is observed, typical values for harmonic weighting factor S180 range from about 0.7 to about 1.0, and typical values for noise weighting factor S190 range from about 0.1 to about 0.7. Other implementations of weighting factor calculator 550 may be configured to operate according to a version of expression (2) that is modified according to a desired baseline weighting between harmonically extended signal S160 and modulated noise signal S170.
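Deriving one factor from the other under expression (2) might look like this; tying W_harmonic to the pitch gain and the clipping range are assumptions consistent with the typical value ranges stated above:

```python
import numpy as np

def weighting_factors(pitch_gain):
    w_harmonic = float(np.clip(pitch_gain, 0.7, 1.0))   # factor S180 (assumed mapping)
    w_noise = float(np.sqrt(1.0 - w_harmonic ** 2))     # factor S190, from expression (2)
    return w_harmonic, w_noise

# Usage: S120 = w_harmonic * S160 + w_noise * S170 (a weighted sum as in combiner 490).
```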
Artifacts may occur in a synthesized speech signal when a sparse codebook (one whose entries are mostly zero values) has been used to calculate the quantized representation of the residual. Codebook sparseness occurs especially when the narrowband signal is encoded at a low bit rate. Artifacts caused by codebook sparseness are typically quasi-periodic in time and occur mostly above 3 kHz. Because the human ear has better time resolution at higher frequencies, these artifacts may be more noticeable in the highband.
Embodiments include implementations of highband excitation generator A300 that are configured to perform anti-sparseness filtering.
Anti-sparseness filter 600 may be configured to alter the phase of its input signal. For example, it may be desirable for anti-sparseness filter 600 to be configured and arranged such that the phase of highband excitation signal S120 is randomized, or otherwise more evenly distributed, over time. It may also be desirable for the response of anti-sparseness filter 600 to be spectrally flat, such that the magnitude spectrum of the filtered signal is not appreciably changed. In one example, anti-sparseness filter 600 is implemented as an all-pass filter having a transfer function according to the following expression:
One effect of such a filter may be to spread out the energy of the input signal so that it is no longer concentrated in only a few samples.
Artifacts caused by codebook sparseness are usually more noticeable for noise-like signals, where the residual includes less pitch information, and also for speech in background noise. Sparseness typically causes fewer artifacts in cases where the excitation has long-term structure, and indeed phase modification may cause noisiness in voiced signals. Thus it may be desirable to configure anti-sparseness filter 600 to filter unvoiced signals and to pass at least some voiced signals without alteration. Unvoiced signals are characterized by a low pitch gain (e.g. quantized narrowband adaptive codebook gain) and a spectral tilt (e.g. quantized first reflection coefficient) that is close to zero or positive, indicating a spectral envelope that is flat or tilted upward with increasing frequency. Typical implementations of anti-sparseness filter 600 are configured to filter unvoiced sounds (e.g., as indicated by the value of the spectral tilt), to filter voiced sounds when the pitch gain is below a threshold value (alternatively, not greater than the threshold value), and otherwise to pass the signal without alteration.
Further implementations of anti-sparseness filter 600 include two or more filters that are configured to have different maximum phase modification angles (e.g., up to 180 degrees). In such case, anti-sparseness filter 600 may be configured to select among these component filters according to a value of the pitch gain (e.g., the quantized adaptive codebook or LTP gain), such that a greater maximum phase modification angle is used for frames having lower pitch gain values. An implementation of anti-sparseness filter 600 may also include different component filters that are configured to modify the phase over more or less of the frequency spectrum, such that a filter configured to modify the phase over a wider frequency range of the input signal is used for frames having lower pitch gain values.
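The following sketch illustrates, under stated assumptions, how such an anti-sparseness arrangement might be realized. Because the transfer function expression is not reproduced above, the sketch substitutes a generic all-pass section H(z) = (c + z^−N)/(1 + c·z^−N), whose flat magnitude response leaves the spectrum unchanged while altering the phase; the threshold and coefficient values are hypothetical.

```python
import numpy as np
from scipy.signal import lfilter

def allpass(x, c=0.6, N=8):
    """Generic all-pass section H(z) = (c + z**-N) / (1 + c*z**-N).
    Numerator and denominator are mirror images, so the magnitude
    response is flat and only the phase is modified."""
    b = np.zeros(N + 1); b[0] = c; b[-1] = 1.0
    a = np.zeros(N + 1); a[0] = 1.0; a[-1] = c
    return lfilter(b, a, x)

def anti_sparseness(excitation, pitch_gain, spectral_tilt,
                    tilt_thresh=0.0, gain_thresh=0.5):
    """Filter unvoiced frames, filter weakly voiced frames with a
    milder phase modification, and pass strongly voiced frames
    without alteration. Threshold values are hypothetical."""
    if spectral_tilt >= tilt_thresh:       # flat or upward tilt: unvoiced
        return allpass(excitation)
    if pitch_gain < gain_thresh:           # voiced but low pitch gain
        return allpass(excitation, c=0.4)  # milder phase modification
    return excitation                      # strongly voiced: unaltered
```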
For accurate reproduction of the encoded speech signal, it may be desirable for the ratio between the levels of the highband and narrowband portions of the synthesized wideband speech signal S100 to be similar to that in the original wideband speech signal S10. In addition to a spectral envelope as represented by highband coding parameters S60a, highband encoder A200 may be configured to characterize highband signal S30 by specifying a temporal or gain envelope. As shown in
The temporal envelopes of narrowband excitation signal S80 and highband signal S30 are likely to be similar. Therefore, encoding a gain envelope that is based on a relation between highband signal S30 and narrowband excitation signal S80 (or a signal derived therefrom, such as highband excitation signal S120 or synthesized highband signal S130) will generally be more efficient than encoding a gain envelope based only on highband signal S30. In a typical implementation, highband encoder A202 is configured to output a quantized index of eight to twelve bits that specifies five gain factors for each frame.
Highband gain factor calculator A230 may be configured to perform gain factor calculation as a task that includes one or more series of subtasks.
It may be desirable for highband gain factor calculator A230 to be configured to calculate the subframe energies according to a windowing function.
It may be desirable to apply a windowing function that overlaps adjacent subframes. For example, a windowing function that produces gain factors which may be applied in an overlap-add fashion may help to reduce or avoid discontinuity between subframes. In one example, highband gain factor calculator A230 is configured to apply a trapezoidal windowing function as shown in
Without limitation, the following values are presented as examples for particular implementations. A 20-msec frame is assumed for these cases, although any other duration may be used. For a highband signal sampled at 7 kHz, each frame has 140 samples. If such a frame is divided into five subframes of equal length, each subframe will have 28 samples, and the window as shown in
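As a minimal sketch of the gain factor calculation described above (assuming the 20-msec frame, 7-kHz sampling rate, and five 28-sample subframes of this example), the following code applies an overlapping trapezoidal window and computes one gain factor per subframe as the square root of a windowed energy ratio. The ramp length of the window is a hypothetical value, since the figure specifying the actual window is not reproduced here.

```python
import numpy as np

FRAME = 140               # 20-msec frame at 7 kHz
NSUB = 5                  # subframes per frame
SUBFRAME = FRAME // NSUB  # 28 samples per subframe
RAMP = 8                  # hypothetical ramp length

def trap_window(ramp=RAMP, flat=SUBFRAME - RAMP):
    """Trapezoidal window with complementary ramps, so that windows
    hopped by SUBFRAME samples overlap-add to unity."""
    up = (np.arange(ramp) + 0.5) / ramp
    return np.concatenate([up, np.ones(flat), up[::-1]])

def _windowed_energy(x, start, w):
    seg = x[start:start + len(w)]
    if len(seg) < len(w):                  # pad at the signal edge
        seg = np.pad(seg, (0, len(w) - len(seg)))
    return np.sum((seg * w) ** 2)

def subframe_gains(highband, reference):
    """One gain factor per subframe: the square root of the ratio of
    windowed subframe energies of the highband signal and a reference
    signal (e.g., the synthesized highband signal)."""
    w = trap_window()
    return np.array([
        np.sqrt(_windowed_energy(highband, i * SUBFRAME, w) /
                max(_windowed_energy(reference, i * SUBFRAME, w), 1e-12))
        for i in range(NSUB)
    ])
```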
Inverse quantizer 560 is configured to dequantize highband filter parameters S60a (in this example, to a set of LSFs), and LSF-to-LP filter coefficient transform 570 is configured to transform the LSFs into a set of filter coefficients (for example, as described above with reference to inverse quantizer 240 and transform 250 of narrowband encoder A122). In other implementations, as mentioned above, different coefficient sets (e.g., cepstral coefficients) and/or coefficient representations (e.g., ISPs) may be used. Highband synthesis filter B204 is configured to produce a synthesized highband signal according to highband excitation signal S120 and the set of filter coefficients. For a system in which the highband encoder includes a synthesis filter (e.g., as in the example of encoder A202 described above), it may be desirable to implement highband synthesis filter B204 to have the same response (e.g., the same transfer function) as that synthesis filter.
Highband decoder B202 also includes an inverse quantizer 580 configured to dequantize highband gain factors S60b, and a gain control element 590 (e.g., a multiplier or amplifier) configured and arranged to apply the dequantized gain factors to the synthesized highband signal to produce highband signal S100. For a case in which the gain envelope of a frame is specified by more than one gain factor, gain control element 590 may include logic configured to apply the gain factors to the respective subframes, possibly according to a windowing function that may be the same as, or different from, the windowing function applied by a gain calculator (e.g., highband gain calculator A230) of the corresponding highband encoder. In other implementations of highband decoder B202, gain control element 590 is similarly configured but is arranged instead to apply the dequantized gain factors to narrowband excitation signal S80 or to highband excitation signal S120.
As mentioned above, it may be desirable to obtain the same state in the highband encoder and highband decoder (e.g., by using dequantized values during encoding). Thus it may be desirable in a coding system according to such an implementation to ensure the same state for corresponding noise generators in highband excitation generators A300 and B300. For example, highband excitation generators A300 and B300 of such an implementation may be configured such that the state of the noise generator is a deterministic function of information already coded within the same frame (e.g., narrowband filter parameters S40 or a portion thereof and/or encoded narrowband excitation signal S50 or a portion thereof).
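A minimal sketch of such deterministic noise generator seeding appears below. Seeding via a CRC-32 over the frame's coded narrowband payload is an illustrative choice and not taken from the description; the point is only that encoder and decoder derive the same state from the same coded information.

```python
import zlib
import numpy as np

def frame_noise(coded_payload, n_samples):
    """Noise generator whose state is a deterministic function of
    information already coded for the frame, so that encoder and
    decoder produce identical noise. The CRC-32 seeding scheme is a
    hypothetical example."""
    seed = zlib.crc32(bytes(coded_payload))  # same bits -> same seed
    rng = np.random.default_rng(seed)
    return rng.standard_normal(n_samples)

# Encoder and decoder both call frame_noise() with the frame's coded
# bits and therefore obtain bit-identical noise for the frame.
noise = frame_noise(b"\x3a\x91\x00\x7f", 140)
```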
One or more of the quantizers of the elements described herein (e.g., quantizer 230, 420, or 430) may be configured to perform classified vector quantization. For example, such a quantizer may be configured to select one of a set of codebooks based on information that has already been coded within the same frame in the narrowband channel and/or in the highband channel. Such a technique typically provides increased coding efficiency at the expense of additional codebook storage.
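For illustration, a classified vector quantization step might be sketched as follows, with the class index assumed to be derived from information already coded for the frame; the codebook contents and classification rule are placeholders.

```python
import numpy as np

def classified_vq(x, codebooks, class_index):
    """Select a codebook according to already-coded information (here
    an integer class index assumed to be derived from a coded
    narrowband and/or highband parameter), then perform a nearest-
    neighbor search within that codebook only."""
    cb = codebooks[class_index]               # shape (K, dim)
    dists = np.sum((cb - x) ** 2, axis=1)     # squared Euclidean errors
    idx = int(np.argmin(dists))
    return idx, cb[idx]
```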
As discussed above with reference to, e.g.,
The pitch structure of an actual residual signal may not match the periodicity model exactly. For example, the residual signal may include small jitters in the regularity of the locations of the pitch pulses, such that the distances between successive pitch pulses in a frame are not exactly equal and the structure is not quite regular. These irregularities tend to reduce coding efficiency.
Some implementations of narrowband encoder A120 are configured to perform a regularization of the pitch structure by applying an adaptive time warping to the residual before or during quantization, or by otherwise including an adaptive time warping in the encoded excitation signal. For example, such an encoder may be configured to select or otherwise calculate a degree of warping in time (e.g., according to one or more perceptual weighting and/or error minimization criteria) such that the resulting excitation signal optimally fits the model of long-term periodicity. Regularization of pitch structure is performed by a subset of CELP encoders called Relaxation Code Excited Linear Prediction (RCELP) encoders.
An RCELP encoder is typically configured to perform the time warping as an adaptive time shift. This time shift may range from a negative delay of a few milliseconds to a positive delay of a few milliseconds, and it is usually varied smoothly to avoid audible discontinuities. In some implementations, such an encoder is configured to apply the regularization in a piecewise fashion, wherein each frame or subframe is warped by a corresponding fixed time shift. In other implementations, the encoder is configured to apply the regularization as a continuous warping function, such that a frame or subframe is warped according to a pitch contour (also called a pitch trajectory). In some cases (e.g., as described in U.S. Pat. Appl. Publ. 2004/0098255), the encoder is configured to include a time warping in the encoded excitation signal by applying the shift to a perceptually weighted input signal that is used to calculate the encoded excitation signal.
The encoder calculates an encoded excitation signal that is regularized and quantized, and the decoder dequantizes the encoded excitation signal to obtain an excitation signal that is used to synthesize the decoded speech signal. The decoded output signal thus exhibits the same varying delay that was included in the encoded excitation signal by the regularization. Typically, no information specifying the regularization amounts is transmitted to the decoder.
Regularization tends to make the residual signal easier to encode, which improves the coding gain from the long-term predictor and thus boosts overall coding efficiency, generally without generating artifacts. It may be desirable to perform regularization only on frames that are voiced. For example, narrowband encoder A124 may be configured to shift only those frames or subframes having a long-term structure, such as voiced signals. It may even be desirable to perform regularization only on subframes that include pitch pulse energy. Various implementations of RCELP coding are described in U.S. Pat. No. 5,704,003 (Kleijn et al.) and U.S. Pat. No. 6,879,955 (Rao) and in U.S. Pat. Appl. Publ. 2004/0098255 (Kovesi et al.). Existing implementations of RCELP coders include the Enhanced Variable Rate Codec (EVRC), as described in Telecommunications Industry Association (TIA) IS-127, and the Third Generation Partnership Project 2 (3GPP2) Selectable Mode Vocoder (SMV).
Unfortunately, regularization may cause problems for a wideband speech coder in which the highband excitation is derived from the encoded narrowband excitation signal (such as a system including wideband speech encoder A100 and wideband speech decoder B100). Due to its derivation from a time-warped signal, the highband excitation signal will generally have a time profile that is different from that of the original highband speech signal. In other words, the highband excitation signal will no longer be synchronous with the original highband speech signal.
A misalignment in time between the warped highband excitation signal and the original highband speech signal may cause several problems. For example, the warped highband excitation signal may no longer provide a suitable source excitation for a synthesis filter that is configured according to the filter parameters extracted from the original highband speech signal. As a result, the synthesized highband signal may contain audible artifacts that reduce the perceived quality of the decoded wideband speech signal.
The misalignment in time may also cause inefficiencies in gain envelope encoding. As mentioned above, a correlation is likely to exist between the temporal envelopes of narrowband excitation signal S80 and highband signal S30. By encoding the gain envelope of the highband signal according to a relation between these two temporal envelopes, an increase in coding efficiency may be realized as compared to encoding the gain envelope directly. When the encoded narrowband excitation signal is regularized, however, this correlation may be weakened. The misalignment in time between narrowband excitation signal S80 and highband signal S30 may cause fluctuations to appear in highband gain factors S60b, and coding efficiency may drop.
Embodiments include methods of wideband speech encoding that perform time warping of a highband speech signal according to a time warping included in a corresponding encoded narrowband excitation signal. Potential advantages of such methods include improving the quality of a decoded wideband speech signal and/or improving the efficiency of coding a highband gain envelope.
Narrowband encoder A124 is also configured to output a regularization data signal SD10 that specifies the degree of time warping applied. For various cases in which narrowband encoder A124 is configured to apply a fixed time shift to each frame or subframe, regularization data signal SD10 may include a series of values indicating each time shift amount as an integer or non-integer value in terms of samples, milliseconds, or some other time increment. For a case in which narrowband encoder A124 is configured to otherwise modify the time scale of a frame or other sequence of samples (e.g., by compressing one portion and expanding another portion), regularization information signal SD10 may include a corresponding description of the modification, such as a set of function parameters. In one particular example, narrowband encoder A124 is configured to divide a frame into three subframes and to calculate a fixed time shift for each subframe, such that regularization data signal SD10 indicates three time shift amounts for each regularized frame of the encoded narrowband signal.
Wideband speech encoder AD10 includes a delay line D120 configured to advance or retard portions of highband speech signal S30, according to delay amounts indicated by an input signal, to produce time-warped highband speech signal S30a. In the example shown in
Further implementations of highband encoder A200 may be configured to perform spectral analysis (e.g., LPC analysis) of the unwarped highband speech signal S30 and to perform time warping of highband speech signal S30 before calculation of highband gain parameters S60b. Such an encoder may include, for example, an implementation of delay line D120 arranged to perform the time warping. In such cases, however, highband filter parameters S60a based on the analysis of unwarped signal S30 may describe a spectral envelope that is misaligned in time with highband excitation signal S120.
Delay line D120 may be configured according to any combination of logic elements and storage elements suitable for applying the desired time warping operations to highband speech signal S30. For example, delay line D120 may be configured to read highband speech signal S30 from a buffer according to the desired time shifts.
Delay line D122 is configured to output the time-warped highband signal S30a from an offset location OL of shift register SR1. The position of offset location OL varies about a reference position (zero time shift) according to the current time shift as indicated by, for example, regularization data signal SD10. Delay line D122 may be configured to support equal advance and retard limits or, alternatively, one limit larger than the other such that a greater shift may be performed in one direction than in the other.
A regularization time shift having a magnitude of more than a few milliseconds may cause audible artifacts in the decoded signal. Typically the magnitude of a regularization time shift as performed by a narrowband encoder A124 will not exceed a few milliseconds, such that the time shifts indicated by regularization data signal SD10 will be limited. However, it may be desired in such cases for delay line D122 to be configured to impose a maximum limit on time shifts in the positive and/or negative direction (for example, to observe a tighter limit than that imposed by the narrowband encoder).
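A minimal sketch of such a delay line, reading a frame from a buffer at an offset location that is clamped to advance and retard limits, is shown below; the limit values are hypothetical.

```python
import numpy as np

def read_delayed(buffer, frame_start, frame_len, shift,
                 max_advance=64, max_retard=64):
    """Read one frame from `buffer` at a time-shifted offset location.
    Negative `shift` advances, positive retards; the shift is clamped
    to (hypothetical) limits, which may be set tighter than the limits
    observed by the narrowband encoder, and the two limits need not be
    equal."""
    s = int(np.clip(shift, -max_advance, max_retard))
    start = max(frame_start + s, 0)
    return buffer[start:start + frame_len]
```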
b shows a schematic diagram of an implementation D124 of delay line D122 that includes a shift window SW. In this example, the position of offset location OL is limited by the shift window SW. Although
In other implementations, delay line D120 is configured to write highband speech signal S30 to a buffer according to the desired time shifts.
In the particular example shown in
In the example of
It may be desirable for delay line D120 to apply a time warping that is based on, but is not identical to, the warping specified by regularization data signal SD10.
The time shift applied by the narrowband encoder may be expected to evolve smoothly over time. Therefore, it is typically sufficient to compute the average narrowband time shift applied to the subframes during a frame of speech, and to shift a corresponding frame of highband speech signal S30 according to this average. In one such example, delay value mapper D110 is configured to calculate an average of the subframe delay values for each frame, and delay line D120 is configured to apply the calculated average to a corresponding frame of highband signal S30. In other examples, an average over a shorter period (such as two subframes, or half of a frame) or a longer period (such as two frames) may be calculated and applied. In a case where the average is a non-integer value of samples, delay value mapper D110 may be configured to round the value to an integer number of samples before outputting it to delay line D120.
Narrowband encoder A124 may be configured to include a regularization time shift of a non-integer number of samples in the encoded narrowband excitation signal. In such a case, it may be desirable for delay value mapper D110 to be configured to round the narrowband time shift to an integer number of samples and for delay line D120 to apply the rounded time shift to highband speech signal S30.
In some implementations of wideband speech encoder AD10, the sampling rates of narrowband speech signal S20 and highband speech signal S30 may differ. In such cases, delay value mapper D110 may be configured to adjust time shift amounts indicated in regularization data signal SD10 to account for a difference between the sampling rates of narrowband speech signal S20 (or narrowband excitation signal S80) and highband speech signal S30. For example, delay value mapper D110 may be configured to scale the time shift amounts according to a ratio of the sampling rates. In one particular example as mentioned above, narrowband speech signal S20 is sampled at 8 kHz, and highband speech signal S30 is sampled at 7 kHz. In this case, delay value mapper D110 is configured to multiply each shift amount by ⅞. Implementations of delay value mapper D110 may also be configured to perform such a scaling operation together with an integer-rounding and/or a time shift averaging operation as described herein.
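The averaging, rate-scaling, and rounding operations of delay value mapper D110 described above may be sketched as follows, assuming the 8-kHz/7-kHz example.

```python
import numpy as np

def map_delay(subframe_shifts, nb_rate=8000, hb_rate=7000):
    """Average the narrowband subframe shifts for the frame, rescale
    by the ratio of highband to narrowband sampling rates (7/8 in the
    example above), and round to an integer number of highband
    samples."""
    avg = float(np.mean(subframe_shifts))   # average over the frame
    scaled = avg * (hb_rate / nb_rate)      # account for rate mismatch
    return int(round(scaled))               # integer sample shift

# Example: subframe shifts of 6, 8, and 7 narrowband samples map to
# round(7 * 7/8) = 6 highband samples.
print(map_delay([6, 8, 7]))
```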
In further implementations, delay line D120 is configured to otherwise modify the time scale of a frame or other sequence of samples (e.g., by compressing one portion and expanding another portion). For example, narrowband encoder A124 may be configured to perform the regularization according to a function such as a pitch contour or trajectory. In such case, regularization data signal SD10 may include a corresponding description of the function, such as a set of parameters, and delay line D120 may include logic configured to warp frames or subframes of highband speech signal S30 according to the function. In other implementations, delay value mapper D110 is configured to average, scale, and/or round the function before it is applied to highband speech signal S30 by delay line D120. For example, delay value mapper D110 may be configured to calculate one or more delay values according to the function, each delay value indicating a number of samples, which are then applied by delay line D120 to time warp one or more corresponding frames or subframes of highband speech signal S30.
Task TD300 generates a highband excitation signal based on a narrowband excitation signal. In this case, the narrowband excitation signal is based on the encoded narrowband excitation signal. According to at least the highband excitation signal, task TD400 encodes the highband speech signal into at least a plurality of highband filter parameters. For example, task TD400 may be configured to encode the highband speech signal into a plurality of quantized LSFs. Task TD500 applies a time shift to the highband speech signal that is based on information relating to a time warping included in the encoded narrowband excitation signal.
Task TD400 may be configured to perform a spectral analysis (such as an LPC analysis) on the highband speech signal, and/or to calculate a gain envelope of the highband speech signal. In such cases, task TD500 may be configured to apply the time shift to the highband speech signal prior to the analysis and/or the gain envelope calculation.
Other implementations of wideband speech encoder A100 are configured to reverse a time warping of highband excitation signal S120 caused by a time warping included in the encoded narrowband excitation signal. For example, highband excitation generator A300 may be implemented to include an implementation of delay line D120 that is configured to receive regularization data signal SD10 or mapped delay values SD10a, and to apply a corresponding reverse time shift to narrowband excitation signal S80, and/or to a subsequent signal based on it such as harmonically extended signal S160 or highband excitation signal S120.
Further wideband speech encoder implementations may be configured to encode narrowband speech signal S20 and highband speech signal S30 independently from one another, such that highband speech signal S30 is encoded as a representation of a highband spectral envelope and a highband excitation signal. Such an implementation may be configured to perform time warping of the highband residual signal, or to otherwise include a time warping in an encoded highband excitation signal, according to information relating to a time warping included in the encoded narrowband excitation signal. For example, the highband encoder may include an implementation of delay line D120 and/or delay value mapper D110 as described herein that are configured to apply a time warping to the highband residual signal. Potential advantages of such an operation include more efficient encoding of the highband residual signal and a better match between the synthesized narrowband and highband speech signals.
As noted above, highband encoder A202 may include a highband gain factor calculator A230 that is configured to calculate a series of gain factors according to a time-varying relation between highband signal S30 and a signal based on narrowband signal S20 (such as narrowband excitation signal S80, highband excitation signal S120, or synthesized highband signal S130).
a shows a block diagram of an implementation A232 of highband gain factor calculator A230. Highband gain factor calculator A232 includes an implementation G10a of envelope calculator G10 that is arranged to calculate an envelope of a first signal, and an implementation G10b of envelope calculator G10 that is arranged to calculate an envelope of a second signal. Envelope calculators G10a and G10b may be identical or may be instances of different implementations of envelope calculator G10. In some cases, envelope calculators G10a and G10b may be implemented as the same structure configured to process different signals at different times.
Envelope calculators G10a and G10b may each be configured to calculate an amplitude envelope (e.g., according to an absolute value function) or an energy envelope (e.g., according to a squaring function). Typically, each envelope calculator G10a, G10b is configured to calculate an envelope that is subsampled with respect to the input signal (e.g., an envelope having one value for each frame or subframe of the input signal). As described above with reference to, e.g.,
Factor calculator G20 is configured to calculate a series of gain factors according to a time-varying relation between the two envelopes over time. In one example as described above, factor calculator G20 calculates each gain factor as the square root of the ratio of the envelopes over a corresponding subframe. Alternatively, factor calculator G20 may be configured to calculate each gain factor based on a distance between the envelopes, such as a difference or a signed squared difference between the envelopes during a corresponding subframe. It may be desirable to configure factor calculator G20 to output the calculated values of the gain factors in a decibel or other logarithmically scaled form.
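A minimal sketch of factor calculator G20 operating on two subsampled energy envelopes, with optional logarithmic output, might look as follows; the epsilon guard against division by zero is an implementation detail introduced here.

```python
import numpy as np

def gain_factors(env_high, env_ref, in_db=False):
    """One gain factor per subframe from two subsampled energy
    envelopes: the square root of the envelope ratio, optionally
    converted to dB as suggested above."""
    eps = 1e-12
    g = np.sqrt(np.asarray(env_high, dtype=float) /
                np.maximum(np.asarray(env_ref, dtype=float), eps))
    return 20.0 * np.log10(np.maximum(g, eps)) if in_db else g
```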
b shows a block diagram of a generalized arrangement including highband gain factor calculator A232 in which envelope calculator G10a is arranged to calculate an envelope of a signal based on narrowband signal S20, envelope calculator G10b is arranged to calculate an envelope of highband signal S30, and factor calculator G20 is configured to output highband gain factors S60b (e.g., to a quantizer). In this example, envelope calculator G10a is arranged to calculate an envelope of a signal received from intermediate processing P1, which may include structures as described herein that are configured to perform calculation of narrowband excitation signal S80, generation of highband excitation signal S120, and/or synthesis of highband signal S130. For convenience, the description below assumes that envelope calculator G10a is arranged to calculate an envelope of synthesized highband signal S130, although implementations in which envelope calculator G10a is arranged to calculate an envelope of narrowband excitation signal S80 or highband excitation signal S120 instead are expressly contemplated and hereby disclosed.
A degree of similarity between highband signal S30 and synthesized highband signal S130 may indicate how well the decoded highband signal S100 will resemble highband signal S30. Specifically, a similarity between temporal envelopes of highband signal S30 and synthesized highband signal S130 may indicate that decoded highband signal S100 can be expected to have a good sound quality and be perceptually similar to highband signal S30.
It may be expected that the shapes of the envelopes of narrowband excitation signal S80 and highband signal S30 will be similar over time and, consequently, that relatively little variation will occur among highband gain factors S60b. In fact, a large variation over time in a relation between the envelopes (e.g., a large variation in a ratio or distance between the envelopes), or a large variation over time among the gain factors based on the envelopes, may be taken as an indication that synthesized highband signal S130 is very different from highband signal S30. For example, such a variation may indicate that highband excitation signal S120 is a poor match for the actual highband residual signal over that time period. In any case, a large variation over time in a relation between the envelopes or among the gain factors may indicate that the decoded highband signal S100 will sound unacceptably different from highband signal S30.
It may be desirable to detect a significant change over time in a relation between the temporal envelope of synthesized highband signal S130 and the temporal envelope of highband signal S30 (such as a ratio or distance between the envelopes) and accordingly to reduce the level of the highband gain factors S60b corresponding to that period. Further implementations of highband encoder A202 are configured to attenuate the highband gain factors S60b according to a variation over time in a relation between the envelopes and/or a variation among the gain factors over time.
Gain factor attenuator G32 includes a factor calculator G50 configured to select or otherwise calculate attenuation factor values according to the calculated variations. Gain factor attenuator G32 also includes a combiner, such as a multiplier or adder, that is configured to apply the attenuation factors to highband gain factors S60-1 to obtain highband gain factors S60-2, which may then be quantized for storage or transmission. For a case in which variation calculator G40 is configured to produce a respective value of the calculated variation for each pair of envelope values (e.g., as the squared difference between the current distance between the envelopes and the previous or subsequent distance), the gain control element may be configured to apply a respective attenuation factor to each gain factor. For a case in which variation calculator G40 is configured to produce one value of the calculated variation for each set of pairs of envelope values (e.g., one calculated variation for the pairs of envelope values of the current frame), the gain control element may be configured to apply the same attenuation factor to more than one corresponding gain factor, such as to each gain factor of the corresponding frame. In a typical example, the values of the attenuation factors may range from a minimum magnitude of zero dB to a maximum magnitude of 6 dB (or, alternatively, from a factor of 1 to a factor of 0.25), although any other desired range may be used. It is noted that attenuation factor values expressed in dB form may have positive values, such that an attenuation operation may include subtracting the attenuation factor value from a respective gain factor, or negative values, such that an attenuation operation may include adding the attenuation factor value to a respective gain factor.
Factor calculator G50 may be configured to select one among a set of discrete attenuation factor values. For example, factor calculator G50 may be configured to select a corresponding attenuation factor value according to a relation between the calculated variation and one or more threshold values.
Alternatively, factor calculator G50 may be configured to calculate the attenuation factor value as a function of the calculated variation.
It may be desirable to implement gain factor attenuation in a manner that limits discontinuity in the resulting gain envelope. In some implementations, factor calculator G50 is configured to limit the degree to which the attenuation factor value may change at one time (e.g., from one frame or subframe to the next). For an incremental mapping as shown in
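For illustration, the following sketch combines a threshold-based selection of discrete attenuation values with a limit on the change of the attenuation factor from one step to the next; all threshold, step, and limit values are hypothetical placeholders within the 0-6 dB range mentioned above.

```python
import numpy as np

def attenuation_db(variation, prev_db,
                   thresholds=(0.1, 0.2, 0.4),
                   steps_db=(0.0, 2.0, 4.0, 6.0),
                   max_change_db=2.0):
    """Select a discrete attenuation value (in dB) according to where
    the calculated variation falls relative to one or more thresholds,
    then limit how far the value may move from its previous value in
    one step. All numeric values are hypothetical placeholders."""
    target = steps_db[int(np.searchsorted(thresholds, variation))]
    return float(np.clip(target, prev_db - max_change_db,
                         prev_db + max_change_db))
```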
A degree of variation over time in a relation between the envelope of highband signal S30 and the envelope of synthesized highband signal S130 may also be indicated by fluctuations among the values of highband gain factors S60b. A lack of variation among the gain factors over time may indicate that the signals have similar envelopes, with similar fluctuations of level over time. A large variation among the gain factors over time may indicate a significant difference between the envelopes of the two signals and, accordingly, a poor expected quality of the corresponding decoded highband signal S100. Further implementations of highband encoder A202 are configured to attenuate highband gain factors S60b according to a degree of fluctuation among the gain factors.
In one particular example as shown in
Gain factor attenuator G34 includes an instance of factor calculator G50 as described above that is configured to select or otherwise calculate attenuation factors according to the calculated variations. In one example, factor calculator G50 is configured to calculate an attenuation factor value f_a according to an expression such as the following:
f_a = 0.8 + 0.5v,
where v is the calculated variation produced by variation calculator G60. In this example, it may be desired to scale or otherwise limit the value of v to be not greater than 0.4, such that the value of f_a will not exceed unity. It may also be desirable to logarithmically scale the value of f_a (e.g., to obtain a value expressed in dB).
Gain factor attenuator G34 also includes a combiner, such as a multiplier or adder, that is configured to apply the attenuation factors to highband gain factors S60-1 to obtain highband gain factors S60-2, which may then be quantized for storage or transmission. For a case in which variation calculator G60 is configured to produce a respective value of the calculated variation for each gain factor (e.g., based on the squared difference between the gain factor and the previous or subsequent gain factor), the gain control element may be configured to apply a respective attenuation factor to each gain factor. For a case in which variation calculator G60 is configured to produce one value of the calculated variation for each set of gain factors (e.g., one calculated variation for the current frame), the gain control element may be configured to apply the same attenuation factor to more than one corresponding gain factor, such as to each gain factor of the corresponding frame. In a typical example, the values of the attenuation factors may range from a minimum magnitude of zero dB to a maximum magnitude of 6 dB (or, alternatively, from a factor of 1 to a factor of 0.25, or from a factor of 1 to a factor of 0), although any other desired range may be used. It is noted that attenuation factor values expressed in dB form may have positive values, such that an attenuation operation may include subtracting the attenuation factor value from a respective gain factor, or negative values, such that an attenuation operation may include adding the attenuation factor value to a respective gain factor.
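A minimal sketch of this example, applying the factor f_a = 0.8 + 0.5v (with v limited to 0.4) uniformly to the gain factors of a frame, is given below.

```python
import numpy as np

def attenuate_gains(gains, variation):
    """Apply the example attenuation factor f_a = 0.8 + 0.5*v, with v
    limited so that f_a does not exceed unity, using the same factor
    for every gain factor of the frame."""
    v = min(float(variation), 0.4)   # keep f_a <= 1.0
    f_a = 0.8 + 0.5 * v
    return np.asarray(gains, dtype=float) * f_a
```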
It is noted again that while the description above assumes that envelope calculator G10a is configured to calculate an envelope of synthesized highband signal S130, arrangements in which envelope calculator G10a is configured to calculate an envelope of narrowband excitation signal S80 or highband excitation signal S120 instead are hereby expressly contemplated and disclosed.
In other implementations, attenuation of the highband gain factors S60b (e.g. after dequantization) is performed by an implementation of highband decoder B200 according to a variation among the gain factors as calculated at the decoder. For example,
As discussed above, relatively large variations in the gain factors may indicate a mismatch between the narrowband and highband residual signals. However, variations may occur among the gain factors due to other reasons as well. For example, calculation of gain factor values may be performed on a subframe-by-subframe basis, rather than sample-by-sample. Even in a case where an overlapping windowing function is used, the reduced sampling rate of the gain envelope may lead to a perceptually noticeable fluctuation in level between adjacent subframes. Other inaccuracies in estimating the gain factors may also contribute to excessive level fluctuations in decoded highband signal S100. Although such gain factor variations may be smaller in magnitude than a variation which triggers gain factor attenuation as described above, they may nevertheless cause an objectionable noisy and distorted quality in the decoded signal.
It may be desirable to perform a smoothing of highband gain factors S60b.
y(n)=βy(n−1)+(1−β)x(n), (4)
where x indicates the input value, y indicates the output value, n indicates a time index, and β indicates a smoothing factor F10. If the value of the smoothing factor β is zero, then no smoothing occurs. If the value of the smoothing factor β is at a maximum, then a maximum degree of smoothing occurs. Gain factor smoother G82 may be configured to use any desired value of smoothing factor F10 between 0 and 1, although it may be preferred to use a value between 0 and 0.5 instead, such that a maximally smoothed value includes equal contributions from the current and previous smoothed values.
It is noted that expression (4) may be expressed and implemented equivalently as
y(n)=(1−λ)y(n−1)+λx(n), (4b)
where if the value of the smoothing factor λ is one, then no smoothing occurs, while if the value of the smoothing factor λ is at a minimum, then a maximum degree of smoothing occurs. It is contemplated and hereby disclosed that this principle applies to the other implementations of gain factor smoother G82 as described herein, as well as to other IIR and/or FIR implementations of gain factor smoother G80.
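A direct implementation of the smoother of expression (4) might be sketched as follows; the default value of β is illustrative, chosen from the preferred range of 0 to 0.5 noted above.

```python
import numpy as np

def smooth_gains(x, beta=0.35):
    """First-order IIR smoother of expression (4):
    y(n) = beta*y(n-1) + (1 - beta)*x(n).
    A value of beta between 0 and 0.5 keeps at least an equal
    contribution from the current input, per the text; the default
    here is illustrative."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    prev = x[0] if len(x) else 0.0   # initialize from the first value
    for n, xn in enumerate(x):
        prev = beta * prev + (1.0 - beta) * xn
        y[n] = prev
    return y
```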
Gain factor smoother G82 may be configured to apply a smoothing factor F10 that has a fixed value. Alternatively, it may be desirable to perform an adaptive smoothing of the gain factors rather than a fixed smoothing. For example, it may be desirable to preserve larger variations among the gain factors, which may indicate perceptually significant features of the gain envelope. Smoothing of such variations may itself lead to artifacts in the decoded signal, such as smearing of the gain envelope.
In a further implementation, gain factor smoother G80 is configured to perform a smoothing operation that is adaptive according to a magnitude of a calculated variation among the gain factors. For example, such an implementation of gain factor smoother G80 may be configured to perform less smoothing (e.g., to use a lower smoothing factor value) when a distance between current and previous estimated gain factors is relatively large.
Factor calculator F40 may be configured to select one among a set of discrete smoothing factor values. For example, factor calculator F40 may be configured to select a corresponding smoothing factor value according to a relation between the magnitude of the calculated variation and one or more threshold values.
Alternatively, factor calculator F40 may be configured to calculate the smoothing factor value as a function of the magnitude of the calculated variation.
In one example, factor calculator F40 is configured to calculate a value v_s of smoothing factor F12 according to an expression such as the following:
where the value of d_a is based on a magnitude of the difference between the current and previous gain factor values. For example, the value of d_a may be calculated as the absolute value, or as the square, of the difference between the current and previous gain factor values.
In a further implementation, a value of d_a is calculated as described above from gain factor values before input to attenuator G30, and the resulting smoothing factor is applied to the gain factor values after output from attenuator G30. In such case, for example, a value based on an average or sum of the values of v_s over a frame may be used as the input to factor calculator G50 in gain factor attenuator G34, and variation calculator G60 may be omitted. In a further arrangement, the value of d_a is calculated as an average or sum of the absolute values or squares of differences between adjacent gain factor values for a frame (possibly including a preceding and/or subsequent gain factor value) before input to gain factor attenuator G34, such that the value of v_s is updated once per frame and is also provided as the input to factor calculator G50. It is noted that in at least the latter example, the value of the input to factor calculator G50 is limited to not greater than 0.4.
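The computation of d_a described in the latter arrangement, together with the stated limit of 0.4 on the value passed to factor calculator G50, might be sketched as follows. The expression mapping d_a to the smoothing factor v_s is not reproduced above, so this sketch deliberately stops at producing the limited input value.

```python
import numpy as np

def g50_input(gains):
    """Compute d_a as an average of squared differences between
    adjacent gain factor values of a frame, and limit the value
    provided to factor calculator G50 to not greater than 0.4, as
    stated above. The elided expression for v_s is not reconstructed
    here."""
    g = np.asarray(gains, dtype=float)
    d_a = float(np.mean(np.diff(g) ** 2))
    return min(d_a, 0.4)
```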
Other implementations of gain factor smoother G80 may be configured to perform smoothing operations that are based on additional previous smoothed gain factor values. Such implementations may have more than one smoothing factor (e.g., filter coefficient), which may be adaptively varied together and/or independently. Gain factor smoother G80 may even be implemented to perform smoothing operations that are also based on future gain factor values, although such implementations may introduce additional latency.
For an implementation that includes both gain factor attenuation and gain factor smoothing operations, it may be desirable to perform the attenuation first, so that the smoothing operation does not interfere with determination of the attenuation criteria.
An adaptive smoothing operation as described herein may also be applied to other stages of the gain factor calculation. For example, further implementations of highband encoder A200 include adaptive smoothing of one or more of the envelopes, and/or adaptive smoothing of attenuation factors that are calculated on a per-subframe or per-frame basis.
Gain smoothing may have advantages in other arrangements as well. For example,
Quantization of the gain factors introduces a random error that is usually uncorrelated from one frame to the next. This error may cause the quantized gain factors to be less smooth than the unquantized gain factors and may reduce the perceptual quality of the decoded signal. Independent quantization of gain factors (or gain factor vectors) generally increases the amount of spectral fluctuation from frame to frame compared to the unquantized gain factors (or gain factor vectors), and these gain fluctuations may cause the decoded signal to sound unnatural.
A quantizer is typically configured to map an input value to one of a set of discrete output values. A limited number of output values are available, such that a range of input values is mapped to a single output value. Quantization increases coding efficiency because an index that indicates the corresponding output value may be transmitted in fewer bits than the original input value.
The quantizer could equally well be a vector quantizer, and gain factors are typically quantized using a vector quantizer.
If the input signal is very smooth, the quantized output may nevertheless be much less smooth, since the output can change only in steps no smaller than the minimum distance between values in the output space of the quantizer.
In a method according to one embodiment, a series of gain factors is calculated for each frame (or other block) of speech in the encoder, and the series is vector quantized for efficient transmission to the decoder. After quantization, the quantization error (defined as the difference between quantized and unquantized parameter vector) is stored. The quantization error of frame N−1 is reduced by a weighting factor and added to the parameter vector of frame N, before quantizing the parameter vector of frame N. It may be desirable for the value of the weighting factor to be smaller when the difference between current and previous estimated gain envelopes is relatively large.
In a method according to one embodiment, the gain factor quantization error vector is computed for each frame and multiplied by a weighting factor b having a value less than 1.0. Before quantization, the scaled quantization error for the previous frame is added to the gain factor vector (input value V10). A quantization operation of such a method may be described by an expression such as the following:
y(n)=Q(s(n)+b[y(n−1)−s(n−1)]),
where s(n) is the smoothed gain factor vector pertaining to frame n, y(n) is the quantized gain factor vector pertaining to frame n, Q(•) is a nearest-neighbor quantization operation, and b is the weighting factor.
An implementation 435 of quantizer 430 is configured to produce a quantized output value V30 of a smoothed value V20 of an input value V10 (e.g., a gain factor vector), where the smoothed value V20 is based on a weighting factor b V40 and a quantization error of a previous output value V30a. Such a quantizer may be applied to reduce gain fluctuations without additional delay.
c shows an example of a (dequantized) sequence of output values V30a as produced by quantizer 435a in response to the input signal of
It may be desirable to use a recursive function to calculate the feedback amount. For example, the quantization error may be calculated with respect to the current input value rather than with respect to the current smoothed value. Such a method may be described by an expression such as the following:
y(n)=Q[s(n)], s(n)=x(n)+b[y(n−1)−s(n−1)],
where x(n) is the input gain factor vector pertaining to frame n.
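Both quantization expressions above may be sketched as follows over a toy nearest-neighbor codebook; the codebook and the value of b are illustrative. With recursive=False the feedback error is taken relative to the previous input vector, as in the first expression; with recursive=True it is taken relative to the previous internal value s(n−1), as in the recursive form.

```python
import numpy as np

def nearest(codebook, v):
    """Nearest-neighbor quantization Q(.) over a toy codebook."""
    return codebook[int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))]

def quantize_with_feedback(frames, codebook, b=0.5, recursive=True):
    """Add the scaled quantization error of frame n-1 to frame n
    before quantizing, per the expressions above."""
    dim = codebook.shape[1]
    y_prev = np.zeros(dim)   # previous quantized vector y(n-1)
    s_prev = np.zeros(dim)   # previous reference for the error term
    out = []
    for x in np.asarray(frames, dtype=float):
        s = x + b * (y_prev - s_prev)
        y = nearest(codebook, s)
        out.append(y)
        y_prev, s_prev = y, (s if recursive else x)
    return np.array(out)
```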
d shows an example of a (dequantized) sequence of output values V30b as produced by quantizer 435b in response to the input signal of
It is noted that embodiments as shown herein may be implemented by replacing or augmenting an existing quantizer Q10 according to an arrangement as shown in
In one example, the value of weighting factor b is fixed at a desired value between 0 and 1. Alternatively, it may be desired to configure quantizer 435 to adjust the value of the weighting factor b dynamically. For example, it may be desired for quantizer 435 to be configured to adjust the value of the weighting factor b depending on a degree of fluctuation already present in the unquantized gain factors or gain factor vectors. When the difference between the current and previous gain factors or gain factor vectors is large, the value of weighting factor b is close to zero and almost no noise shaping results. When the current gain factor or vector differs little from the previous one, the value of weighting factor b is close to 1.0. In such manner, transitions in the gain envelope over time (e.g., attenuations applied by an implementation of gain factor attenuator G30) may be retained, minimizing smearing when the gain envelope is changing, while fluctuations may be reduced when the gain envelope is relatively constant from one frame or subframe to the next.
As shown in
The value of weighting factor b may be adapted according to a distance between consecutive gain factors or gain factor vectors, and any of various distances may be used. The Euclidean norm is typically used, but others which may be used include the Manhattan distance (1-norm), Chebyshev distance (infinity norm), Mahalanobis distance, and Hamming distance.
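For illustration, one hypothetical mapping that makes b large when consecutive gain factor vectors are close and small when they differ greatly (here using the Euclidean norm and an exponential decay of assumed scale) is sketched below.

```python
import numpy as np

def weighting_factor_b(curr, prev, scale=4.0, b_max=1.0):
    """Make b close to b_max when consecutive gain factor vectors
    differ little and close to zero when they differ greatly, so that
    genuine transitions in the gain envelope are not smeared. The
    Euclidean norm and the exponential mapping with `scale` are
    illustrative choices."""
    d = float(np.linalg.norm(np.asarray(curr, dtype=float) -
                             np.asarray(prev, dtype=float)))
    return b_max * float(np.exp(-scale * d))
```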
It may be appreciated from
a shows a flowchart of a method of signal processing QM10 according to an embodiment. Task QT10 calculates first and second gain factor vectors, which may correspond to adjacent frames of a speech signal. Task QT20 generates a first quantized vector by quantizing a third vector that is based on at least a portion of the first vector. Task QT30 calculates a quantization error of the first quantized vector. For example, task QT30 may be configured to calculate a difference between the first quantized vector and the third vector. Task QT40 calculates a fourth vector based on the quantization error. For example, task QT40 may be configured to calculate the fourth vector as the sum of a scaled version of the quantization error and at least a portion of the second vector. Task QT50 quantizes the fourth vector.
b shows a flowchart of a method of signal processing QM20 according to an embodiment. Task QT10 calculates first and second gain factors, which may correspond to adjacent frames or subframes of a speech signal. Task QT20 generates a first quantized gain factor by quantizing a third value that is based on the first gain factor. Task QT30 calculates a quantization error of the first quantized gain factor. For example, task QT30 may be configured to calculate a difference between the first quantized gain factor and the third value. Task QT40 calculates a filtered gain factor based on the quantization error. For example, task QT40 may be configured to calculate the filtered gain factor as the sum of a scaled version of the quantization error and the second gain factor. Task QT50 quantizes the filtered gain factor.
As mentioned above, embodiments as described herein include implementations that may be used to perform embedded coding, supporting compatibility with narrowband systems and avoiding a need for transcoding. Support for highband coding may also serve to differentiate on a cost basis between chips, chipsets, devices, and/or networks having wideband support with backward compatibility, and those having narrowband support only. Support for highband coding as described herein may also be used in conjunction with a technique for supporting lowband coding, and a system, method, or apparatus according to such an embodiment may support coding of frequency components from, for example, about 50 or 100 Hz up to about 7 or 8 kHz.
As mentioned above, adding highband support to a speech coder may improve intelligibility, especially regarding differentiation of fricatives. Although such differentiation may usually be derived by a human listener from the particular context, highband support may serve as an enabling feature in speech recognition and other machine interpretation applications, such as systems for automated voice menu navigation and/or automatic call processing.
An apparatus according to an embodiment may be embedded into a portable device for wireless communications such as a cellular telephone or personal digital assistant (PDA). Alternatively, such an apparatus may be included in another communications device such as a VoIP handset, a personal computer configured to support VoIP communications, or a network device configured to route telephonic or VoIP communications. For example, an apparatus according to an embodiment may be implemented in a chip or chipset for a communications device. Depending upon the particular application, such a device may also include such features as analog-to-digital and/or digital-to-analog conversion of a speech signal, circuitry for performing amplification and/or other signal processing operations on a speech signal, and/or radio-frequency circuitry for transmission and/or reception of the coded speech signal.
It is explicitly contemplated and disclosed that embodiments may include and/or be used with any one or more of the other features disclosed in the U.S. Provisional Pat. Appl. No. 60/673,965 and/or in the U.S. patent application Ser. No. 11/397,432, of which this application claims benefit. It is also explicitly contemplated and disclosed that embodiments may include and/or be used with any one or more of the other features disclosed in U.S. Provisional Pat. Appl. No. 60/667,901 and/or any of the related Patent Applications identified above (now U.S. Pub. Nos. 2006/0282263, 2007/0088558, 2007/0088541, 2006/0277042, 2007/0088542, 2006/0277038, 2006/0271356, and 2008/0126086). Such features include removal of high-energy bursts of short duration that occur in the highband and are substantially absent from the narrowband. Such features include fixed or adaptive smoothing of coefficient representations such as lowband and/or highband LSFs (for example, by using a structure as shown in
The foregoing presentation of the described embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments are possible, and the generic principles presented herein may be applied to other embodiments as well. For example, an embodiment may be implemented in part or in whole as a hard-wired circuit, as a circuit configuration fabricated into an application-specific integrated circuit, or as a firmware program loaded into non-volatile storage or a software program loaded from or into a data storage medium (e.g., a non-transitory processor-readable medium) as machine-readable code, such code being instructions executable by an array of logic elements such as a microprocessor or other digital signal processing unit. The non-transitory processor-readable medium may be an array of storage elements such as semiconductor memory (which may include without limitation dynamic or static RAM (random-access memory), ROM (read-only memory), and/or flash RAM), or ferroelectric, magnetoresistive, ovonic, polymeric, or phase-change memory; or a disk medium such as a magnetic or optical disk. The term “software” should be understood to include source code, assembly language code, machine code, binary code, firmware, macrocode, microcode, any one or more sets or sequences of instructions executable by an array of logic elements, and any combination of such examples.
The various elements of implementations of highband excitation generators A300 and B300, highband encoder A200, highband decoder B200, wideband speech encoder A100, and wideband speech decoder B100 may be implemented as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset, although other arrangements without such limitation are also contemplated. One or more elements of such an apparatus may be implemented in whole or in part as one or more sets of instructions arranged to execute on one or more fixed or programmable arrays of logic elements (e.g., transistors, gates) such as microprocessors, embedded processors, IP cores, digital signal processors, FPGAs (field-programmable gate arrays), ASSPs (application-specific standard products), and ASICs (application-specific integrated circuits). It is also possible for one or more such elements to have structure in common (e.g., a processor used to execute portions of code corresponding to different elements at different times, a set of instructions executed to perform tasks corresponding to different elements at different times, or an arrangement of electronic and/or optical devices performing operations for different elements at different times). Moreover, it is possible for one or more such elements to be used to perform tasks or execute other sets of instructions that are not directly related to an operation of the apparatus, such as a task relating to another operation of a device or system in which the apparatus is embedded.
a shows a flowchart of a method M200 of generating a highband excitation signal according to an embodiment. Task Y100 calculates a harmonically extended signal by applying a nonlinear function to a narrowband excitation signal derived from a narrowband portion of a speech signal. Task Y200 mixes the harmonically extended signal with a modulated noise signal to generate a highband excitation signal.
Embodiments also include additional methods of speech coding, encoding, and decoding as are expressly disclosed herein, e.g., by descriptions of structural embodiments configured to perform such methods. Each of these methods may also be tangibly embodied (for example, in one or more data storage media as listed above) as one or more sets of instructions readable and/or executable by a machine including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). Thus, the present invention is not intended to be limited to the embodiments shown above but rather is to be accorded the widest scope consistent with the principles and novel features disclosed in any fashion herein, including in the attached claims as filed, which form a part of the original disclosure.
This application claims benefit of U.S. Provisional Pat. Appl. No. 60/673,965, entitled “PARAMETER CODING IN A HIGH-BAND SPEECH CODER,” filed Apr. 22, 2005. This application is also a continuation-in-part of and claims benefit of U.S. patent application Ser. No. 11/397,432, entitled “SYSTEMS, METHODS, AND APPARATUS FOR SPEECH SIGNAL FILTERING,” filed Apr. 3, 2006. This application is also related to the following Patent Applications filed Apr. 3, 2006: U.S. patent application Ser. No. 11/397,794, entitled “SYSTEMS, METHODS, AND APPARATUS FOR WIDEBAND SPEECH CODING”; U.S. patent application Ser. No. 11/397,870, entitled “SYSTEMS, METHODS, AND APPARATUS FOR HIGHBAND EXCITATION GENERATION”; U.S. patent application Ser. No. 11/397,505, entitled “SYSTEMS, METHODS, AND APPARATUS FOR ANTISPARSENESS FILTERING”; U.S. patent application Ser. No. 11/397,871, entitled “SYSTEMS, METHODS, AND APPARATUS FOR GAIN CODING”; U.S. patent application Ser. No. 11/397,433, entitled “SYSTEMS, METHODS, AND APPARATUS FOR HIGHBAND BURST SUPPRESSION”; U.S. patent application Ser. No. 11/397,370, entitled “SYSTEMS, METHODS, AND APPARATUS FOR HIGHBAND TIME WARPING”; and U.S. patent application Ser. No. 11/397,872, entitled “SYSTEMS, METHODS, AND APPARATUS FOR QUANTIZATION OF SPECTRAL ENVELOPE REPRESENTATION”. This application is also related to the following Patent Application filed herewith: U.S. patent application Ser. No. 11/408,390, entitled “SYSTEMS, METHODS, AND APPARATUS FOR GAIN FACTOR SMOOTHING”.
Number | Name | Date | Kind |
---|---|---|---
3158693 | Flanagan et al. | Nov 1964 | A |
3855414 | Alleva et al. | Dec 1974 | A |
3855416 | Fuller | Dec 1974 | A |
4616659 | Prezas et al. | Oct 1986 | A |
4630305 | Borth et al. | Dec 1986 | A |
4696041 | Sakata | Sep 1987 | A |
4747143 | Kroeger et al. | May 1988 | A |
4805193 | McLaughlin et al. | Feb 1989 | A |
4852179 | Fette | Jul 1989 | A |
4862168 | Beard | Aug 1989 | A |
5077798 | Ichikawa et al. | Dec 1991 | A |
5086475 | Kutaragi et al. | Feb 1992 | A |
5119424 | Asakawa et al. | Jun 1992 | A |
5285520 | Matsumoto et al. | Feb 1994 | A |
5455888 | Iyengar et al. | Oct 1995 | A |
5581652 | Abe et al. | Dec 1996 | A |
5684920 | Iwakami et al. | Nov 1997 | A |
5689615 | Benyassine et al. | Nov 1997 | A |
5694426 | McCree et al. | Dec 1997 | A |
5699477 | McCree | Dec 1997 | A |
5699485 | Shoham | Dec 1997 | A |
5704003 | Kleijn et al. | Dec 1997 | A |
5706395 | Arslan | Jan 1998 | A |
5727085 | Toyama et al. | Mar 1998 | A |
5737716 | Bergstrom et al. | Apr 1998 | A |
5757938 | Akagiri et al. | May 1998 | A |
5774842 | Nishio et al. | Jun 1998 | A |
5797118 | Saito | Aug 1998 | A |
5890126 | Lindemann | Mar 1999 | A |
5966689 | McCree | Oct 1999 | A |
5978759 | Tsushima et al. | Nov 1999 | A |
6009395 | Lai | Dec 1999 | A |
6014619 | Wuppermann et al. | Jan 2000 | A |
6029125 | Hagen et al. | Feb 2000 | A |
6041297 | Goldberg | Mar 2000 | A |
6097824 | Lindemann et al. | Aug 2000 | A |
6122384 | Mauro | Sep 2000 | A |
6134520 | Ravishankar | Oct 2000 | A |
6144936 | Jarvinen et al. | Nov 2000 | A |
6223151 | Kleijn et al. | Apr 2001 | B1 |
6263307 | Arslan | Jul 2001 | B1 |
6301556 | Hagen et al. | Oct 2001 | B1 |
6330534 | Yasunaga et al. | Dec 2001 | B1 |
6330535 | Yasunaga et al. | Dec 2001 | B1 |
6353808 | Matsumoto et al. | Mar 2002 | B1 |
6385261 | Tsuji et al. | May 2002 | B1 |
6385573 | Gao et al. | May 2002 | B1 |
6449590 | Gao | Sep 2002 | B1 |
6523003 | Chandran et al. | Feb 2003 | B1 |
6564187 | Kikumoto et al. | May 2003 | B1 |
6675144 | Tucker et al. | Jan 2004 | B1 |
6678654 | Zinser, Jr. et al. | Jan 2004 | B2 |
6680972 | Liljeryd et al. | Jan 2004 | B1 |
6681204 | Matsumoto et al. | Jan 2004 | B2 |
6704702 | Oshiriki | Mar 2004 | B2 |
6704711 | Gustafsson et al. | Mar 2004 | B2 |
6711538 | Omori et al. | Mar 2004 | B1 |
6715125 | Juang | Mar 2004 | B1 |
6732070 | Rotola-Pukkila et al. | May 2004 | B1 |
6735567 | Gao et al. | May 2004 | B2 |
6751587 | Thyssen et al. | Jun 2004 | B2 |
6757395 | Fang et al. | Jun 2004 | B1 |
6757654 | Westerlund et al. | Jun 2004 | B1 |
6772114 | Sluijter et al. | Aug 2004 | B1 |
6826526 | Norimatsu et al. | Nov 2004 | B1 |
6879955 | Rao | Apr 2005 | B2 |
6889185 | McCree | May 2005 | B1 |
6895375 | Malah | May 2005 | B2 |
6925116 | Liljeryd et al. | Aug 2005 | B2 |
6988066 | Malah | Jan 2006 | B2 |
7003451 | Kjorling et al. | Feb 2006 | B2 |
7016831 | Suzuki et al. | Mar 2006 | B2 |
7031912 | Yajima et al. | Apr 2006 | B2 |
7050972 | Henn et al. | May 2006 | B2 |
7088779 | Aarts | Aug 2006 | B2 |
7136810 | Paksoy | Nov 2006 | B2 |
7149683 | Jelinek | Dec 2006 | B2 |
7155384 | Banba | Dec 2006 | B2 |
7167828 | Ehara | Jan 2007 | B2 |
7174135 | Sluijter et al. | Feb 2007 | B2 |
7191123 | Bessette et al. | Mar 2007 | B1 |
7222069 | Suzuki et al. | May 2007 | B2 |
7228272 | Rao | Jun 2007 | B2 |
7260523 | Paksoy et al. | Aug 2007 | B2 |
7330814 | McCree | Feb 2008 | B2 |
7346499 | Chennoukh et al. | Mar 2008 | B2 |
7359854 | Nilsson et al. | Apr 2008 | B2 |
7376554 | Ojala et al. | May 2008 | B2 |
7386444 | Stachurski | Jun 2008 | B2 |
7392179 | Yasunaga et al. | Jun 2008 | B2 |
7428490 | Xu et al. | Sep 2008 | B2 |
7596492 | Sung et al. | Sep 2009 | B2 |
7613603 | Yamashita | Nov 2009 | B2 |
8364494 | Vos et al. | Jan 2013 | B2 |
20020052738 | Paksoy et al. | May 2002 | A1 |
20020087308 | Ozawa | Jul 2002 | A1 |
20030036905 | Toguri et al. | Feb 2003 | A1 |
20030093279 | Malah et al. | May 2003 | A1 |
20030154074 | Kikuiri et al. | Aug 2003 | A1 |
20040019492 | Tucker et al. | Jan 2004 | A1 |
20040098255 | Kovesi et al. | May 2004 | A1 |
20040101038 | Etter | May 2004 | A1 |
20040128126 | Nam et al. | Jul 2004 | A1 |
20040153313 | Aubauer et al. | Aug 2004 | A1 |
20040181398 | Sung et al. | Sep 2004 | A1 |
20040204935 | Anandakumar et al. | Oct 2004 | A1 |
20050004793 | Ojala et al. | Jan 2005 | A1 |
20050071153 | Tammi et al. | Mar 2005 | A1 |
20050143980 | Huang | Jun 2005 | A1 |
20050143989 | Jelinek | Jun 2005 | A1 |
20050149339 | Tanaka et al. | Jul 2005 | A1 |
20050251387 | Jelinek | Nov 2005 | A1 |
20060206319 | Taleb | Sep 2006 | A1 |
20060206334 | Kapoor et al. | Sep 2006 | A1 |
20060271356 | Vos et al. | Nov 2006 | A1 |
20060277038 | Vos et al. | Dec 2006 | A1 |
20060277039 | Vos et al. | Dec 2006 | A1 |
20060277042 | Vos et al. | Dec 2006 | A1 |
20060282262 | Vos et al. | Dec 2006 | A1 |
20060282263 | Vos et al. | Dec 2006 | A1 |
20070088541 | Vos et al. | Apr 2007 | A1 |
20070088542 | Vos et al. | Apr 2007 | A1 |
20070088558 | Vos et al. | Apr 2007 | A1 |
20080126086 | Vos et al. | May 2008 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
2429832 | Jun 2002 | CA |
1397064 | Feb 2003 | CN |
0732687 | Sep 1996 | EP |
0770990 | May 1997 | EP |
1008984 | Jun 2000 | EP |
1089258 | Apr 2001 | EP |
1126620 | Aug 2001 | EP |
1164579 | Dec 2001 | EP |
1300833 | Apr 2003 | EP |
1498873 | Jan 2005 | EP |
2244100 | Sep 1990 | JP |
08180582 | Jul 1996 | JP |
8248997 | Sep 1996 | JP |
8305396 | Nov 1996 | JP |
9101798 | Apr 1997 | JP |
2000206989 | Jul 2000 | JP |
2001100773 | Apr 2001 | JP |
2001237708 | Aug 2001 | JP |
2001337700 | Dec 2001 | JP |
2002268698 | Sep 2002 | JP |
2003243990 | Aug 2003 | JP |
2003526123 | Sep 2003 | JP |
2004126011 | Apr 2004 | JP |
2005345707 | Dec 2005 | JP |
2073913 | Feb 1997 | RU |
2131169 | May 1999 | RU |
2233010 | Jul 2004 | RU |
525147 | Mar 2003 | TW |
526468 | Apr 2003 | TW |
9848541 | Oct 1998 | WO |
9912155 | Mar 1999 | WO |
0156021 | Aug 2001 | WO |
02052738 | Jul 2002 | WO |
02086867 | Oct 2002 | WO |
03021993 | Mar 2003 | WO |
03044777 | May 2003 | WO |
Other Publications

Entry |
---|
Makinen J et al: “The Effect of Source Based Rate Adaptation Extension in AMR-WB Speech Codec” Speech Coding, 2002, IEEE Workshop Proceedings. Oct. 6-9, 2002, Piscataway, NJ, USA, IEEE, Oct. 6, 2002, pp. 153-155. |
Bruno Bessette et al: “The Adaptive Multirate Wideband Speech Codec (AMR-WB)” IEEE Transactions on Speech and Audio Processing, IEEE Service Center, New York, NY, US, vol. 10, No. 8, Nov. 2002, pp. 622, 623, and 630. |
Nilsson M et al: “Avoiding Over-Estimation in Bandwidth Extension of Telephony Speech” 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. (ICASSP). Salt Lake City, UT, May 7-11, 2001, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New York, NY: IEEE, US, vol. 1 of 6, May 7, 2001, pp. 869-872. |
Harma, A. et al. A comparison of warped and conventional linear predictive coding. 11 pp. Last accessed Dec. 15, 2006 at http://www.acoustics.hut.fi/~aqi/wwwPhD/P8.PDF. (IEEE Trans. Speech Audio Proc., vol. 9, No. 5, Jul. 2001, pp. 579-588.) |
Jelinek, M. et al. Noise reduction method for wideband speech coding. Euro. Sig. Proc. Conf., Vienna, Austria, Sep. 2004, pp. 1959-1962. |
Qian, Y. et al. Classified Highband Excitation for Bandwidth Extension of Telephony Signals. Proc. Euro. Sig. Proc. Conf., Antalya, Turkey, Sep. 2005. 4 pp. |
McCree, A., “A 14 kb/s Wideband Speech Coder With a Parametric Highband Model,” Int. Conf. on Acoustic Speech and Signal Processing, Turkey, 2000, pp. 1153-1156. |
3rd Generation Partnership Project 2 (“3GPP2”). Enhanced Variable Rate Codec, Speech Service Options 3, 68, and 70 for Wideband Spread Spectrum Digital Systems, 3GPP2 C.S0014-C, ver. 1.0, Jan. 2007. |
Budagavi, M. et al.: “Speech Coding in Mobile Radio Communications,” Proc. IEEE, vol. 86, No. 7, Jul. 1998, pp. 1402-1412. |
Chu, W. et al. Optimization of window and LSF interpolation factor for the ITU-T G.729 speech coding standard, 4 pp. (Eurospeech 2003, Geneva, pp. 1061-1064.) |
So, S. Efficient Block Quantisation for Image and Speech Coding. Ph.D. thesis, Griffith Univ., Brisbane, AU, Mar. 2005. Cover and chs. 5 and 6 (pp. 195-293). |
Dattorro, J. et al: “Error Spectrum Shaping and Vector Quantization” (online), Oct. 1997, XP002307027, Stanford University. Retrieved from the Internet: <http://www.stanford.edu/~dattorro/proj392c.pdf> [retrieved on Jun. 23, 2006]. |
Digital Radio Mondiale (DRM): System Specification; ETSI ES 201 980. ETSI Standards, European Telecommunications Standards Institute, Sophia-Antipolis, FR, vol. BC, No. V1.2.2, Apr. 2003, XP014004528, ISSN: 0000-0001, pp. 1-188. |
Doser, A., et al., Time Frequency Techniques for Signal Feature Detection. IEEE, XP010374021, Oct. 24, 1999, pp. 452-456, vol. 1, Thirty-Third Asilomar Conference. |
Drygajlo, A. Speech Coding Techniques and Standards. Last accessed Dec. 15, 2006 at http://scgwww.epfl.ch/courses/Traitement_de_la_parole-2004-2005-pdf/12-codage%20Ppur-Drygajlo-Chapter-4-3.pdf. 23 pp. (chapter of Speech and Language Engineering.) |
Epps, J. “Wideband Extension of Narrowband Speech for Enhancement and Coding.” Ph.D. thesis, Univ. of New South Wales, Sep. 2000. Cover, chs. 4-6 (pp. 66-121), and ch. 7 (pp. 122-129). |
European Telecommunications Standards Institute (ETSI) 3rd Generation Partnership Project (3GPP). Digital cellular telecommunications system (Phase 2+), Enhanced Full Rate (EFR) speech transcoding, GSM 06.60, ver. 8.0.1, Release 1999. |
European Telecommunications Standards Institute (ETSI) 3rd Generation Partnership Project (3GPP). Digital cellular telecommunications system (Phase 2+), Full rate speech, Transcoding, GSM 06.10, ver. 8.1.1, Release 1999. |
Guibe, G. et al. Speech Spectral Quantizers for Wideband Speech Coding, 11 pp. Last accessed Dec. 14, 2006 at http://eprints.ecs.soton.ac.uk/6376/01/1178_pap.pdf (Euro. Trans. on Telecom., 12(6), pp. 535-545, 2001.) |
Guleryuz, O. et al.: “On the DPCM Compression of Gaussian Autoregressive Sequences,” 33 pages. Last accessed Dec. 14, 2006 at http://eeweb.poly.edu/~onur/publish/dpcm.pdf. |
Hagen, R. et al., “Removal of Sparse-Excitation Artifacts in CELP,” Proc. ICASSP, May 1998, vol. 1, pp. 145-148, XP010279147. |
Hsi-Wen Nein et al: “Incorporating Error Shaping Technique into LSF Vector Quantization,” IEEE Transactions on Speech and Audio Processing, IEEE Service Center, vol. 9, No. 2, Feb. 2001, XP011054076, ISSN: 1063-6676. |
International Telecommunications Union (“ITU-T”), Series G: Transmission Systems and Media, Digital Systems and Networks, Digital transmission systems—Terminal equipments—Coding of analogue signals by methods other than PCM: Coding of speech at 8 kbit/s using conjugate-structure algebraic-code-excited linear prediction (CS-ACELP), Annex E: 11.8 kbit/s CS-ACELP speech coding algorithm, Sep. 1998. |
Kim, A. et al.: Improving the rate-distortion performance of DPCM. Proc. 7th ISSPA, Paris, FR, Jul. 2003, pp. 97-100. |
Kleijn, W. Bastiaan, et al., “The RCELP Speech-Coding Algorithm,” European Transactions on Telecommunications and Related Technologies, Sep.-Oct. 1994, pp. 39-48, vol. 5, No. 5, Milano, IT, XP000470678. |
Knagenhjelm, P. H. and Kleijn, W. B., “Spectral dynamics is more important than spectral distortion,” Proc. IEEE Int. Conf. on Acoustic Speech and Signal Processing, 1995, pp. 732-735. |
Koishida, K. et al., A 16-kbit/s bandwidth scalable audio coder based on the G.729 standard. Proc. ICASSP, Istanbul, Jun. 2000, 4 pp. (vol. 2, pp. 1149-1152). |
Lahouti, F. et al. Single and Double Frame Coding of Speech LPC Parameters Using a Lattice-based Quantization Scheme. Tech. Rep. UW-E&CE#2004-10, Univ. of Waterloo, ON, Apr. 2004, 22 pp. |
Lahouti, F. et al. Single and Double Frame Coding of Speech LPC Parameters Using a Lattice-based Quantization Scheme. IEEE Trans. Audio, Speech, and Lang. Proc., 9 pp. (Preprint of vol. 14, No. 5, Sep. 2006, pp. 1624-1632.) |
Makhoul, J. and Berouti, M., “High Frequency Regeneration in Speech Coding Systems,” Proc. IEEE Int. Conf. on Acoustic Speech and Signal Processing, Washington, 1979, pp. 428-431. |
Massimo Gregorio Muzzi, Amélioration d'un codeur paramétrique [Improvement of a parametric coder]. Internship report (rapport de stage), XP002388943, Jul. 2003, pp. 1-76. |
McCree, A. et al. A 1.7 kb/s MELP coder with improved analysis and quantization. 4 pp. (Proc. ICASSP, Seattle, WA, May 1998, pp. 593-596.) |
McCree, Alan, et al., An Embedded Adaptive Multi-Rate Wideband Speech Coder, IEEE International Conference on Acoustics, Speech, and Signal Processing, May 7-11, 2001, pp. 761-764, vol. 1 of 6. |
Nilsson, M., Andersen, S.V., Kleijn, W.B., “Gaussian Mixture Model based Mutual Information Estimation between Frequency Bands in Speech,” Proc. IEEE Int. Conf. on Acoustic Speech and Signal Processing, Florida, 2002, pp. 525-528. |
Noise shaping (Wikipedia entry), 3 pages. Last accessed Dec. 15, 2006 at http://en.wikipedia.org/wiki/Noise_shaping. |
Nomura, T., et al., “A bitrate and bandwidth scalable CELP coder.” Acoustics, Speech and Signal Processing, May 1998. vol. 1, pp. 341-344, XP010279059. |
Norden, F. et al.: “A speech spectrum distortion measure with interframe memory.” 4 pages (Proc. ICASSP, Salt Lake City, UT, May 2001, vol. 2). |
P.P. Vaidyanathan, Multirate Digital Filters, Filter Banks, Polyphase Networks, and Applications: A Tutorial, Proceedings of the IEEE, XP 000125845. Jan. 1990, pp. 56-93, vol. 78, No. 1. |
Pereira, W. et al. Improved spectral tracking using interpolated linear prediction parameters. Proc. ICASSP, Orlando, FL, May 2002, pp. I-261-I-264. |
Postel, Jon, ed., Internet protocol, Request for Comments (Standard) RFC 791, Internet Engineering Task Force, Sep. 1981. (Obsoletes RFC 760), URL: http://www.ietf.org/rfc/rfc791.txt. |
Ramachandran, R. et al. Pitch Prediction Filters in Speech Coding. IEEE Trans. Acoustics, Speech, and Sig. Proc., vol. 37, No. 4, Apr. 1989, pp. 467-478. |
Roy, G. Low-rate analysis-by-synthesis wideband speech coding. MS thesis, McGill Univ., Montreal, QC, Aug. 1990. Cover, ch. 3 (pp. 19-38), and ch. 6 (pp. 87-91). |
Samuelsson, J. et al. Controlling Spectral Dynamics in LPC Quantization for Perceptual Enhancement. 5 pp. (Proc. 31st Asilomar Conf. Sig. Syst. Comp., 1997, pp. 1066-1070.) |
Tammi, Mikko, et al., “Coding Distortion Caused by a Phase Difference Between the LP Filter and its Residual,” IEEE, 1999, pp. 102-104, XP10345571A. |
The CCITT G.722 Wideband Speech Coding Standard. 3 pp. Last accessed Dec. 15, 2006 at http://www.umiacs.umd.edu/users/desin/Speech/mode3.html. |
3GPP TS 26.090 v2.0.0, Mandatory Speech Codec speech processing functions, Jun. 1999. Cover, p. 7, section 6 (pp. 37-41), and figure 4 (p. 49). |
Universal Mobile Telecommunications System (UMTS); audio codec processing functions; Extended Adaptive Multi-Rate Wideband (AMR-WB+) codec; Transcoding functions (3GPP TS 26.290 version 6.2.0 Release 6); ETSI TS 126 290, ETSI Standards, European Telecommunications Standards Institute, vol. 3-SA4, No. V6.2.0, Mar. 2005, pp. 1-86. |
Valin, J.-M., Lefebvre, R., “Bandwidth Extension of Narrowband Speech for Low Bit-Rate Wideband Coding,” Proc. IEEE Speech Coding Workshop (SCW), 2000, pp. 130-132. |
Wideband Speech Coding Standards and Applications. VoiceAge Whitepaper. 17 pp. Last accessed Dec. 15, 2006 at http://www.voiceage.com/media/WidebandSpeech.pdf. |
International Search Report—PCT/US2006/014992, International Search Authority—European Patent Office—Jan. 22, 2007. |
International Preliminary Report on Patentability—PCT/US2006/014992, International Search Authority—The International Bureau of WIPO—Geneva, Switzerland—Oct. 23, 2007. |
Written Opinion—PCT/US2006/014992, International Search Authority—European Patent Office—Jan. 22, 2007. |
Cabral, “Evaluation of Methods for Excitation Regeneration in Bandwidth Extension of Speech,” Master's thesis, KTH, Sweden, Mar. 27, 2003. |
Hsu, “Robust bandwidth extension of narrowband speech”, McGill University, Canada, Nov. 2004. |
Nomura et al., “A bitrate and bandwidth scalable CELP coder,” Proceedings of the 1998 IEEE ICASSP, vol. 1, pp. 341-344, May 12, 1998. |
Vaseghi, “Advanced Digital Signal Processing and Noise Reduction”, ch. 13, Published by John Wiley and Sons Ltd., 2000. |
Anonymous: “Noise Shaping,” Wikipedia, Dec. 5, 2004, XP002387163, Retrieved Online: <http://www.wikipedia.org/>. |
“Signal Processing Toolbox: For Use with MATLAB User's Guide,” ver. 4.2, Published by the Math Works Inc., Jan. 1999. |
Kim, J., “Filter Bank Design and Subband Coding,” (Project 1 Report), University of Maryland, Retrieved Online: <http://www.ece.umd.edu/class/enee624.S2003/ENEE624jusub.pdf>, pp. 1-26, published Mar. 31, 2003. |
Related Publications

Number | Date | Country |
---|---|---|
20060282262 A1 | Dec 2006 | US |

Provisional Applications

Number | Date | Country |
---|---|---|
60673965 | Apr 2005 | US |

Continuation in Parts

 | Number | Date | Country |
---|---|---|---|
Parent | 11397432 | Apr 2006 | US |
Child | 11408511 | | US |