The present invention relates to encoders for encoding an audio signal, in particular a speech related audio signal. The present invention also relates to decoders and methods for decoding an encoded audio signal. The present invention further relates to encoded audio signals and to advanced unvoiced speech coding at low bitrates.
At low bitrates, speech coding can benefit from special handling of the unvoiced frames in order to maintain the speech quality while reducing the bitrate. Unvoiced frames can be perceptually modeled as a random excitation which is shaped both in the frequency and the time domain. As the waveform and the excitation look and sound almost the same as Gaussian white noise, their waveform coding can be relaxed and replaced by a synthetically generated white noise. The coding then consists of coding the time and frequency domain shapes of the signal.
wherein w is lower than 1. The gain parameter gn is computed such that the energy of the synthesized signal matches the energy of the original signal in the perceptual domain according to:
where sw(n) and nw(n) are the input signal and the generated noise, respectively, filtered by the perceptual filter fw(n). The gain gn is computed for each subframe of size Ls. For example, an audio signal may be divided into frames with a length of 20 ms. Each frame may be subdivided into subframes, for example into four subframes, each having a length of 5 ms.
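The per-subframe energy matching described above can be sketched as follows. The square-root energy-ratio form of the gain is an assumption based on the description (the excerpt omits the formula itself), and the 4-subframe split is taken from the 20 ms / 5 ms example:

```python
import numpy as np

def subframe_gain(sw, nw):
    # Gain gn matching the energy of the perceptually filtered noise nw
    # to the energy of the perceptually filtered input sw (one subframe).
    return np.sqrt(np.sum(sw ** 2) / np.sum(nw ** 2))

def frame_gains(sw, nw, num_subframes=4):
    # One gain per subframe of size Ls, e.g. a 20 ms frame split into
    # four 5 ms subframes.
    ls = len(sw) // num_subframes
    return [subframe_gain(sw[i * ls:(i + 1) * ls],
                          nw[i * ls:(i + 1) * ls])
            for i in range(num_subframes)]
```

Scaling the noise by the returned gain makes its energy equal to that of the input within each subframe.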
The code excited linear prediction (CELP) coding scheme is widely used in speech communications and is a very efficient way of coding speech. It gives a more natural speech quality than parametric coding but it also requires higher bitrates. CELP synthesizes an audio signal by feeding the sum of two excitations to a linear predictive filter, called the LPC synthesis filter, which may have the form 1/A(z). One excitation comes from the decoded past and is called the adaptive codebook. The other contribution comes from an innovative codebook populated with fixed codes. However, at low bitrates the innovative codebook is not sufficiently populated to efficiently model the fine structure of the speech or the noise-like excitation of unvoiced frames. Therefore, the perceptual quality is degraded, and the unvoiced frames in particular then sound crispy and unnatural.
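The CELP synthesis just described, the sum of the adaptive and innovative contributions fed through 1/A(z), can be sketched as follows; the gain values and filter coefficients are illustrative, not taken from any standard:

```python
import numpy as np

def celp_excitation(adaptive, innovative, g_p, g_c):
    # Sum of the adaptive (decoded past) and innovative (fixed code)
    # codebook contributions, each scaled by its gain.
    return g_p * np.asarray(adaptive) + g_c * np.asarray(innovative)

def lpc_synthesis(excitation, a):
    # All-pole filtering through 1/A(z), with
    # A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p  (a[0] == 1).
    out = np.zeros(len(excitation))
    for n in range(len(excitation)):
        acc = excitation[n]
        for k in range(1, len(a)):
            if n - k >= 0:
                acc -= a[k] * out[n - k]
        out[n] = acc
    return out
```

For example, an impulse excitation through A(z) = 1 − 0.5z⁻¹ yields the decaying response 1, 0.5, 0.25, …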
For mitigating the coding artifacts at low bitrates, different solutions have already been proposed. In G.718 [1] and in [2], the codes of the innovative codebook are adaptively and spectrally shaped by enhancing the spectral regions corresponding to the formants of the current frame. The formant positions and shapes can be deduced directly from the LPC coefficients, which are already available at both the encoder and the decoder side. The formant enhancement of codes c(n) is done by a simple filtering according to:
c(n)*fe(n)
wherein * denotes the convolution operator and wherein fe(n) is the impulse response of the filter of transfer function:
where w1 and w2 are the two weighting constants emphasizing more or less the formant structure of the transfer function Ffe(z). The resulting shaped codes inherit a characteristic of the speech signal, and the synthesized signal sounds cleaner.
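The excerpt omits the formula for Ffe(z). A common choice in CELP coders such as G.718 is the weighted ratio A(z/w1)/A(z/w2), which is assumed in the sketch below; the w1, w2 values are illustrative:

```python
import numpy as np

def formant_enhance(codes, a, w1=0.75, w2=0.9):
    # Filters the codes with fe(n), the impulse response of
    # Ffe(z) = A(z/w1) / A(z/w2) -- an assumed, commonly used
    # formant-enhancement form; w1, w2 are illustrative weights.
    p = len(a) - 1
    num = [a[k] * w1 ** k for k in range(p + 1)]  # coefficients of A(z/w1)
    den = [a[k] * w2 ** k for k in range(p + 1)]  # coefficients of A(z/w2)
    out = np.zeros(len(codes))
    for n in range(len(codes)):
        acc = sum(num[k] * codes[n - k] for k in range(p + 1) if n - k >= 0)
        acc -= sum(den[k] * out[n - k] for k in range(1, p + 1) if n - k >= 0)
        out[n] = acc
    return out
```

With w1 = w2 the numerator and denominator cancel and the codes pass through unchanged; spreading the weights apart emphasizes the formant regions of A(z).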
In CELP it is also usual to add a spectral tilt to the codes of the innovative codebook at the decoder. This is done by filtering the codes with the following filter:
Ft(z) = 1 − βz⁻¹
The factor β is usually related to the voicing of the previous frame and varies with it. The voicing can be estimated from the energy contribution of the adaptive codebook. If the previous frame is voiced, it is expected that the current frame will also be voiced and that the codes should have more energy in the low frequencies, i.e., should show a negative tilt. Conversely, the added spectral tilt will be positive for unvoiced frames, and more energy will be distributed towards the high frequencies.
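Applying the tilt filter Ft(z) = 1 − βz⁻¹ amounts to a first-order difference of the codes; a minimal sketch (the mapping from voicing to a concrete β value is not specified in the excerpt and is left to the caller):

```python
def apply_tilt(codes, beta):
    # Filter the codes with Ft(z) = 1 - beta * z^-1; the sign of beta,
    # derived from the voicing of the previous frame, sets the direction
    # of the spectral tilt.
    out = [codes[0]]
    for n in range(1, len(codes)):
        out.append(codes[n] - beta * codes[n - 1])
    return out
```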
The use of spectral shaping for speech enhancement and noise reduction at the output of the decoder is a usual practice. The so-called formant enhancement post-filtering consists of an adaptive post-filter whose coefficients are derived from the LPC parameters of the decoder. The post-filter looks similar to the one (fe(n)) used for shaping the innovative excitation in certain CELP coders as discussed above. However, in that case the post-filtering is only applied at the end of the decoding process and not at the encoder side.
In conventional CELP (CELP = code-book excited linear prediction), the frequency shape is modeled by the LP (linear prediction) synthesis filter, while the time domain shape can be approximated by the excitation gain sent for every subframe, although the long-term prediction (LTP) and the innovative codebook are usually not suited for modeling the noise-like excitation of unvoiced frames. CELP needs a relatively high bitrate to reach a good quality for unvoiced speech.
A voiced or unvoiced characterization may be used to segment speech into portions and to associate each of them with a different source model of speech. The source models, as they are used in the CELP speech coding scheme, rely on an adaptive harmonic excitation simulating the air flow coming out of the glottis and a resonant filter modeling the vocal tract excited by the produced air flow. Such models may provide good results for phonemes like vowels, but may result in incorrect modeling for speech portions that are not generated by the glottis, in particular when the vocal cords are not vibrating, such as the unvoiced phonemes “s” or “f”.
On the other hand, parametric speech coders, also called vocoders, adopt a single source model for unvoiced frames. They can reach very low bitrates while achieving a so-called synthetic quality that is not as natural as the quality delivered by CELP coding schemes at much higher rates.
Thus, there is a need for enhancing audio signals.
An object of the present invention is to increase the sound quality at low bitrates and/or to reduce the bitrates needed for good sound quality.
According to an embodiment, an encoder for encoding an audio signal may have: an analyzer configured for deriving prediction coefficients and a residual signal from a frame of the audio signal; a formant information calculator configured for calculating a speech related spectral shaping information from the prediction coefficients; a gain parameter calculator configured for calculating a gain parameter from an unvoiced residual signal and the spectral shaping information; and a bitstream former configured for forming an output signal based on an information related to a voiced signal frame, the gain parameter or a quantized gain parameter and the prediction coefficients.
According to another embodiment, a decoder for decoding a received signal having information related to prediction coefficients may have: a formant information calculator configured for calculating a speech related spectral shaping information from the prediction coefficients; a noise generator configured for generating a decoding noise-like signal; a shaper configured for shaping a spectrum of the decoding noise-like signal or an amplified representation thereof using the spectral shaping information to obtain a shaped decoding noise-like signal; and a synthesizer configured for synthesizing a synthesized signal from the amplified shaped decoding noise-like signal and the prediction coefficients.
Another embodiment may have an encoded audio signal having prediction coefficient information for a voiced frame and an unvoiced frame, a further information related to the voiced signal frame and an information related to a gain parameter or a quantized gain parameter for the unvoiced frame.
According to another embodiment, a method for encoding an audio signal may have the steps of: deriving prediction coefficients and a residual signal from an audio signal frame; calculating a speech related spectral shaping information from the prediction coefficients; calculating a gain parameter from an unvoiced residual signal and the spectral shaping information; and forming an output signal based on an information related to a voiced signal frame, the gain parameter or a quantized gain parameter and the prediction coefficients.
According to another embodiment, a method for decoding a received audio signal having an information related to prediction coefficients and a gain parameter may have the steps of: calculating a speech related spectral shaping information from the prediction coefficients; generating a decoding noise-like signal; shaping a spectrum of the decoding noise-like signal or an amplified representation thereof using the spectral shaping information to obtain a shaped decoding noise-like signal; and synthesizing a synthesized signal from the amplified shaped decoding noise-like signal and the prediction coefficients.
Another embodiment may have a non-transitory digital storage medium having stored thereon a computer program for performing a method for encoding an audio signal, the method having the steps of: deriving prediction coefficients and a residual signal from an audio signal frame; calculating a speech related spectral shaping information from the prediction coefficients; calculating a gain parameter from an unvoiced residual signal and the spectral shaping information; and forming an output signal based on an information related to a voiced signal frame, the gain parameter or a quantized gain parameter and the prediction coefficients, when said computer program is run by a computer.
Another embodiment may have a non-transitory digital storage medium having stored thereon a computer program for performing a method for decoding a received audio signal having an information related to prediction coefficients and a gain parameter, the method having the steps of: calculating a speech related spectral shaping information from the prediction coefficients; generating a decoding noise-like signal; shaping a spectrum of the decoding noise-like signal or an amplified representation thereof using the spectral shaping information to acquire a shaped decoding noise-like signal; and synthesizing a synthesized signal from the amplified shaped decoding noise-like signal and the prediction coefficients, when said computer program is run by a computer.
The inventors found out that, in a first aspect, the quality of a decoded audio signal related to an unvoiced frame of the audio signal may be increased, i.e., enhanced, by determining a speech related shaping information such that a gain parameter information for the amplification of signals may be derived from the speech related shaping information. Furthermore, the speech related shaping information may be used for spectrally shaping a decoded signal. Frequency regions of higher importance for speech, e.g., low frequencies below 4 kHz, may thus be processed such that they comprise fewer errors.
The inventors further found out that, in a second aspect, the sound quality of the synthesized signal may be increased, i.e., enhanced, by generating a first excitation signal from a deterministic codebook for a frame or subframe (portion) of a synthesized signal, by generating a second excitation signal from a noise-like signal for the frame or subframe of the synthesized signal, and by combining the first excitation signal and the second excitation signal to generate a combined excitation signal. Especially for portions of an audio signal comprising a speech signal with background noise, the sound quality may be improved by adding noise-like signals. A gain parameter for optionally amplifying the first excitation signal may be determined at the encoder, and an information related thereto may be transmitted with the encoded audio signal.
Alternatively or in addition, the enhancement of the synthesized audio signal may be at least partially exploited for reducing the bitrate for encoding the audio signal.
An encoder according to the first aspect comprises an analyzer configured for deriving prediction coefficients and a residual signal from a frame of the audio signal. The encoder further comprises a formant information calculator configured for calculating a speech related spectral shaping information from the prediction coefficients. The encoder further comprises a gain parameter calculator configured for calculating a gain parameter from an unvoiced residual signal and the spectral shaping information and a bitstream former configured for forming an output signal based on an information related to a voiced signal frame, the gain parameter or a quantized gain parameter and the prediction coefficients.
Further embodiments of the first aspect provide an encoded audio signal comprising a prediction coefficient information for a voiced frame and an unvoiced frame of the audio signal, a further information related to the voiced signal frame and a gain parameter or a quantized gain parameter for the unvoiced frame. This allows for efficiently transmitting speech related information to enable a decoding of the encoded audio signal to obtain a synthesized (restored) signal with a high audio quality.
Further embodiments of the first aspect provide a decoder for decoding a received signal comprising prediction coefficients. The decoder comprises a formant information calculator, a noise generator, a shaper and a synthesizer. The formant information calculator is configured for calculating a speech related spectral shaping information from the prediction coefficients. The noise generator is configured for generating a decoding noise-like signal. The shaper is configured for shaping a spectrum of the decoding noise-like signal or an amplified representation thereof using the spectral shaping information to obtain a shaped decoding noise-like signal. The synthesizer is configured for synthesizing a synthesized signal from the amplified shaped decoding noise-like signal and the prediction coefficients.
Further embodiments of the first aspect relate to a method for encoding an audio signal, a method for decoding a received audio signal and to a computer program.
Embodiments of the second aspect provide an encoder for encoding an audio signal. The encoder comprises an analyzer configured for deriving prediction coefficients and a residual signal from an unvoiced frame of the audio signal. The encoder further comprises a gain parameter calculator configured for calculating a first gain parameter information for defining a first excitation signal related to a deterministic codebook and for calculating a second gain parameter information for defining a second excitation signal related to a noise-like signal for the unvoiced frame. The encoder further comprises a bitstream former configured for forming an output signal based on an information related to a voiced signal frame, the first gain parameter information and the second gain parameter information.
Further embodiments of the second aspect provide a decoder for decoding a received audio signal comprising an information related to prediction coefficients. The decoder comprises a first signal generator configured for generating a first excitation signal from a deterministic codebook for a portion of a synthesized signal. The decoder further comprises a second signal generator configured for generating a second excitation signal from a noise-like signal for the portion of the synthesized signal. The decoder further comprises a combiner and a synthesizer, wherein the combiner is configured for combining the first excitation signal and the second excitation signal for generating a combined excitation signal for the portion of the synthesized signal. The synthesizer is configured for synthesizing the portion of the synthesized signal from the combined excitation signal and the prediction coefficients.
Further embodiments of the second aspect provide an encoded audio signal comprising an information related to prediction coefficients, an information related to a deterministic codebook, an information related to a first gain parameter and a second gain parameter and an information related to a voiced and unvoiced signal frame.
Further embodiments of the second aspect provide methods for encoding an audio signal and for decoding a received audio signal, respectively, and a computer program.
Subsequently, embodiments of the present invention are described with respect to the accompanying drawings, in which:
Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals even if occurring in different figures.
In the following description, a plurality of details is set forth to provide a more thorough explanation of embodiments of the present invention. However, it will be apparent to those skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well known structures and devices are shown in block diagram form rather than in detail in order to avoid obscuring embodiments of the present invention. In addition, features of the different embodiments described hereinafter may be combined with each other, unless specifically noted otherwise.
In the following, reference will be made to modifying an audio signal. An audio signal may be modified by amplifying and/or attenuating portions of the audio signal. A portion of the audio signal may be, for example a sequence of the audio signal in the time domain and/or a spectrum thereof in the frequency domain. With respect to the frequency domain, the spectrum may be modified by amplifying or attenuating spectral values arranged in or at frequencies or frequency ranges. Modification of the spectrum of the audio signal may comprise a sequence of operations such as an amplification and/or attenuation of a first frequency or frequency range and afterwards an amplification and/or an attenuation of a second frequency or frequency range. The modifications in the frequency domain may be represented as a calculation, e.g. a multiplication, division, summation or the like, of spectral values and gain values and/or attenuation values. Modifications may be performed sequentially such as first multiplying spectral values with a first multiplication value and then with a second multiplication value. Multiplication with the second multiplication value and then with the first multiplication value may allow for receiving an identical or almost identical result. Also, the first multiplication value and the second multiplication value may first be combined and then applied in terms of a combined multiplication value to the spectral values while receiving the same or a comparable result of the operation. Thus, modification steps configured to form or modify a spectrum of the audio signal described below are not limited to the described order but may also be executed in a changed order whilst receiving the same result and/or effect.
The encoder 100 comprises an analyzer 120 configured for deriving prediction coefficients (LPC=linear prediction coefficients) 122 and a residual signal 124 from a frame of the audio signal. The frame builder 110 or the analyzer 120 is configured to determine a representation of the audio signal 102 in the frequency domain. Alternatively, the audio signal 102 may be a representation in the frequency domain already.
The prediction coefficients 122 may be, for example, linear prediction coefficients. Alternatively, non-linear prediction may also be applied such that the predictor 120 is configured to determine non-linear prediction coefficients. An advantage of linear prediction is the reduced computational effort for determining the prediction coefficients.
The encoder 100 comprises a voiced/unvoiced decider 130 configured for determining whether the residual signal 124 was determined from an unvoiced audio frame. The decider 130 is configured for providing the residual signal to a voiced frame coder 140 if the residual signal 124 was determined from a voiced signal frame, and for providing the residual signal to a gain parameter calculator 150 if the residual signal 124 was determined from an unvoiced audio frame. For determining whether the residual signal 124 was determined from a voiced or an unvoiced signal frame, the decider 130 may use different approaches such as an autocorrelation of samples of the residual signal. A method for deciding whether a signal frame was voiced or unvoiced is provided, for example, in the ITU (International Telecommunication Union)-T (Telecommunication Standardization Sector) standard G.718. A high amount of energy arranged at low frequencies may indicate a voiced portion of the signal. In contrast, an unvoiced signal may result in high amounts of energy at high frequencies. The encoder 100 comprises a formant information calculator 160 configured for calculating a speech related spectral shaping information from the prediction coefficients 122.
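The energy-distribution observation above can be turned into a crude decision heuristic. The sketch below is illustrative only, not the G.718 classifier; the sampling rate, split frequency and threshold are assumptions:

```python
import numpy as np

def is_unvoiced(frame, sr=16000, split_hz=2000, thresh=1.0):
    # Crude voiced/unvoiced heuristic (illustrative, not the G.718 method):
    # compare the energy below and above split_hz; unvoiced frames tend
    # to carry more energy at high frequencies.
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    low = spec[freqs < split_hz].sum()
    high = spec[freqs >= split_hz].sum()
    return high > thresh * low
```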
The speech related spectral shaping information may consider formant information, for example, by determining frequencies or frequency ranges of the processed audio frame that comprise a higher amount of energy than their neighborhood. The spectral shaping information is able to segment the magnitude spectrum of the speech into formant, i.e., peak, and non-formant, i.e., valley, frequency regions. The formant regions of the spectrum can, for example, be derived by using the immittance spectral frequencies (ISF) or line spectral frequencies (LSF) representation of the prediction coefficients 122. Indeed, the ISF or LSF represent the frequencies for which the synthesis filter using the prediction coefficients 122 resonates.
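The resonant frequencies can be obtained from the roots of the sum and difference polynomials formed from A(z). The sketch below is the textbook LSF construction, shown for illustration; it is not necessarily the exact representation used in the embodiment:

```python
import numpy as np

def lsf_from_lpc(a):
    # a: prediction filter coefficients [1, a1, ..., ap] of A(z).
    a = np.asarray(a, dtype=float)
    # Sum (P) and difference (Q) polynomials of order p + 1:
    # P(z) = A(z) + z^-(p+1) A(z^-1),  Q(z) = A(z) - z^-(p+1) A(z^-1).
    P = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    Q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    lsf = []
    for poly in (P, Q):
        roots = np.roots(poly)
        # Roots lie on the unit circle; their angles in (0, pi) are the LSFs.
        ang = np.angle(roots[np.imag(roots) >= 0.0])
        lsf.extend(ang[(ang > 0.0) & (ang < np.pi)])
    return np.sort(np.array(lsf))
```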
The speech related spectral shaping information 162 and the unvoiced residuals are forwarded to the gain parameter calculator 150, which is configured to calculate a gain parameter gn from the unvoiced residual signal and the spectral shaping information 162. The gain parameter gn may be a scalar value or a plurality thereof, i.e., the gain parameter may comprise a plurality of values related to an amplification or attenuation of spectral values in a plurality of frequency ranges of a spectrum of the signal to be amplified or attenuated. A decoder may be configured to apply the gain parameter gn to information of a received encoded audio signal such that portions of the received encoded audio signal are amplified or attenuated based on the gain parameter during decoding. The gain parameter calculator 150 may be configured to determine the gain parameter gn by one or more mathematical expressions or determination rules resulting in a continuous value. Operations performed digitally, for example by means of a processor, expressing the result in a variable with a limited number of bits, may result in a quantized gain ĝn. Alternatively, the result may further be quantized according to a quantization scheme such that a quantized gain information is obtained. The encoder 100 may therefore comprise a quantizer 170. The quantizer 170 may be configured to quantize the determined gain gn to the nearest digital value supported by digital operations of the encoder 100. Alternatively, the quantizer 170 may be configured to apply a quantization function (linear or non-linear) to an already digitalized and therefore quantized gain factor gn. A non-linear quantization function may consider, for example, the logarithmic characteristics of human hearing, which is highly sensitive at low sound pressure levels and less sensitive at high sound pressure levels.
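A non-linear quantizer of the kind just mentioned can be sketched with levels spaced uniformly in the log domain, matching the roughly logarithmic loudness sensitivity of human hearing; the bit width and the gain range below are illustrative assumptions:

```python
import numpy as np

def quantize_gain(g, bits=5, g_min=0.02, g_max=5.0):
    # Logarithmic gain quantization sketch: 2**bits levels spaced
    # uniformly in log(g); all constants are illustrative.
    levels = 2 ** bits
    g = float(np.clip(g, g_min, g_max))
    step = (np.log(g_max) - np.log(g_min)) / (levels - 1)
    index = int(round((np.log(g) - np.log(g_min)) / step))
    g_hat = float(np.exp(np.log(g_min) + index * step))
    return index, g_hat  # transmitted index and reconstructed gain
```

Only the index needs to be transmitted; the decoder reconstructs ĝn from it with the same table.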
The encoder 100 further comprises an information deriving unit 180 configured for deriving a prediction coefficient related information 182 from the prediction coefficients 122. Prediction coefficients such as the linear prediction coefficients used for exciting innovative codebooks comprise a low robustness against distortions or errors. Therefore, for example, it is known to convert linear prediction coefficients to immittance spectral frequencies (ISF) and/or to derive line spectral pairs (LSP) and to transmit an information related thereto with the encoded audio signal. LSP and/or ISF information comprises a higher robustness against distortions in the transmission media, for example transmission or calculation errors. The information deriving unit 180 may further comprise a quantizer configured to provide a quantized information with respect to the LSP and/or the ISF.
Alternatively, the information deriving unit may be configured to forward the prediction coefficients 122. Alternatively, the encoder 100 may be realized without the information deriving unit 180. Alternatively, the quantizer may be a functional block of the gain parameter calculator 150 or of the bitstream former 190 such that the bitstream former 190 is configured to receive the gain parameter gn and to derive the quantized gain ĝn based thereon. Alternatively, when the gain parameter gn is already quantized, the encoder 100 may be realized without the quantizer 170.
The encoder 100 comprises a bitstream former 190 configured to receive the voiced information 142 related to a voiced frame of the encoded audio signal provided by the voiced frame coder 140, to receive the quantized gain ĝn and the prediction coefficient related information 182, and to form an output signal 192 based thereon.
The encoder 100 may be part of a voice encoding apparatus such as a stationary or mobile telephone or an apparatus comprising a microphone for transmission of audio signals such as a computer, a tablet PC or the like. The output signal 192 or a signal derived thereof may be transmitted, for example via mobile communications (wireless) or via wired communications such as a network signal.
An advantage of the encoder 100 is that the output signal 192 comprises information derived from a spectral shaping information converted to the quantized gain ĝn. Therefore, decoding of the output signal 192 may allow for obtaining further speech related information and therefore for decoding the signal such that the obtained decoded signal comprises a high quality with respect to the perceived quality of the speech.
The decoder 200 comprises a bitstream deformer (demultiplexer; DE-MUX) 210 for receiving the input signal 202. The bitstream deformer 210 is configured to provide the prediction coefficients 122, the quantized gain ĝn and the voiced information 142. For obtaining the prediction coefficients 122, the bitstream deformer 210 may comprise an inverse information deriving unit performing an operation inverse to that of the information deriving unit 180. Alternatively, the decoder 200 may comprise an inverse information deriving unit (not shown) configured for executing the inverse operation with respect to the information deriving unit 180. In other words, the prediction coefficients are decoded, i.e., restored.
The decoder 200 comprises a formant information calculator 220 configured for calculating a speech related spectral shaping information from the prediction coefficients 122 as it was described for the formant information calculator 160. The formant information calculator 220 is configured to provide speech related spectral shaping information 222. Alternatively, the input signal 202 may also comprise the speech related spectral shaping information 222, wherein transmission of the prediction coefficients or information related thereto such as, for example quantized LSF and/or ISF instead of the speech related spectral shaping information 222 allows for a lower bitrate of the input signal 202.
The decoder 200 comprises a random noise generator 240 configured for generating a noise-like signal, which may simply be denoted as a noise signal. The random noise generator 240 may be configured to reproduce a noise signal that was obtained, for example, when measuring and storing a noise signal. A noise signal may be measured and recorded, for example, by generating thermal noise at a resistance or another electrical component and by storing the recorded data in a memory. The random noise generator 240 is configured to provide the noise(-like) signal n(n).
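Besides replaying stored samples, a reproducible noise-like signal can be realized with a seeded pseudo-random generator, so that the same sequence is produced on every run; a minimal sketch (the seed value is illustrative):

```python
import numpy as np

def make_noise_generator(seed=12345):
    # Returns a function producing successive blocks of a deterministic
    # noise-like signal n(n); the same seed reproduces the same sequence.
    rng = np.random.default_rng(seed)
    return lambda n: rng.standard_normal(n)
```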
The decoder 200 comprises a shaper 250 comprising a shaping processor 252 and a variable amplifier 254. The shaper 250 is configured for spectrally shaping a spectrum of the noise signal n(n). The shaping processor 252 is configured for receiving the speech related spectral shaping information and for shaping the spectrum of the noise signal n(n), for example by multiplying spectral values of the spectrum of the noise signal n(n) with values of the spectral shaping information. The operation may also be performed in the time domain by convolving the noise signal n(n) with a filter given by the spectral shaping information. The shaping processor 252 is configured for providing a shaped noise signal 256, a spectrum thereof respectively, to the variable amplifier 254. The variable amplifier 254 is configured for receiving the gain parameter gn and for amplifying the spectrum of the shaped noise signal 256 to obtain an amplified shaped noise signal 258. The amplifier may be configured to multiply the spectral values of the shaped noise signal 256 with values of the gain parameter gn. As stated above, the shaper 250 may also be implemented such that the variable amplifier 254 is configured to receive the noise signal n(n) and to provide an amplified noise signal to the shaping processor 252 configured for shaping the amplified noise signal. Alternatively, the shaping processor 252 may be configured to receive the speech related spectral shaping information 222 and the gain parameter gn and to apply both pieces of information sequentially, one after the other, to the noise signal n(n), or to combine both pieces of information, e.g., by multiplication or other calculations, and to apply a combined parameter to the noise signal n(n).
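Because both the shaping and the gain act multiplicatively on the spectral values, the two orders of operation and the combined parameter all give the same result; a small numeric check of this interchangeability:

```python
import numpy as np

def shape_then_amplify(noise_spec, shaping, gain):
    # Shape first, then amplify the shaped spectrum.
    return gain * (noise_spec * shaping)

def amplify_then_shape(noise_spec, shaping, gain):
    # Amplify first, then shape the amplified spectrum.
    return shaping * (gain * noise_spec)

def apply_combined(noise_spec, shaping, gain):
    # Combine both pieces of information into one parameter first.
    return noise_spec * (gain * shaping)
```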
The noise-like signal n(n), or the amplified version thereof, shaped with the speech related spectral shaping information allows the decoded audio signal 282 to comprise a more speech related (natural) sound quality. This allows for obtaining high quality audio signals and/or for reducing the bitrate at the encoder side while maintaining or only slightly reducing the quality of the output signal 282 at the decoder.
The decoder 200 comprises a synthesizer 260 configured for receiving the prediction coefficients 122 and the amplified shaped noise signal 258 and for synthesizing a synthesized signal 262 from the amplified shaped noise-like signal 258 and the prediction coefficients 122. The synthesizer 260 may comprise a filter and may be configured for adapting the filter with the prediction coefficients. The synthesizer may be configured to filter the amplified shaped noise-like signal 258 with the filter. The filter may be implemented as software or as a hardware structure and may comprise an infinite impulse response (IIR) or a finite impulse response (FIR) structure.
The synthesized signal corresponds to an unvoiced decoded frame of an output signal 282 of the decoder 200. The output signal 282 comprises a sequence of frames that may be converted to a continuous audio signal.
The bitstream deformer 210 is configured for separating the voiced information signal 142 from the input signal 202 and providing it. The decoder 200 comprises a voiced frame decoder 270 configured for providing a voiced frame based on the voiced information 142. The voiced frame decoder (voiced frame processor) is configured to determine a voiced signal 272 based on the voiced information 142. The voiced signal 272 may correspond to the voiced audio frame and/or the voiced residual of the encoder 100.
The decoder 200 comprises a combiner 280 configured for combining the unvoiced decoded frame 262 and the voiced frame 272 to obtain the decoded audio signal 282.
Alternatively, the shaper 250 may be realized without an amplifier such that the shaper 250 is configured for shaping the spectrum of the noise-like signal n(n) without further amplifying the obtained signal. This may allow for a reduced amount of information transmitted by the input signal 202 and therefore for a reduced bitrate or a shorter duration of a sequence of the input signal 202. Alternatively, or in addition, the decoder 200 may be configured to only decode unvoiced frames or to process voiced and unvoiced frames both by spectrally shaping the noise signal n(n) and by synthesizing the synthesized signal 262 for voiced and unvoiced frames. This may allow for implementing the decoder 200 without the voiced frame decoder 270 and/or without the combiner 280 and thus leads to a reduced complexity of the decoder 200.
The output signal 192 and/or the input signal 202 comprise information related to the prediction coefficients 122, information indicating whether the processed frame is voiced or unvoiced, such as a flag, and further information related to a voiced signal frame, such as a coded voiced signal. The output signal 192 and/or the input signal 202 further comprise a gain parameter or a quantized gain parameter for the unvoiced frame such that the unvoiced frame may be decoded based on the prediction coefficients 122 and the gain parameter gn or ĝn, respectively.
The gain parameter calculator 350 is configured for providing a gain parameter gn as it was described above. The gain parameter calculator 350 comprises a random noise generator 350a for generating an encoding noise-like signal 350b. The gain calculator 350 further comprises a shaper 350c having a shaping processor 350d and a variable amplifier 350e. The shaping processor 350d is configured for receiving the speech related shaping information 162 and the noise-like signal 350b, and to shape a spectrum of the noise-like signal 350b with the speech related spectral shaping information 162 as it was described for the shaper 250. The variable amplifier 350e is configured for amplifying a shaped noise-like signal 350f with a gain parameter gn(temp) which is a temporary gain parameter received from a controller 350k. The variable amplifier 350e is further configured for providing an amplified shaped noise-like signal 350g as it was described for the amplified noise-like signal 258. As it was described for the shaper 250, an order of shaping and amplifying the noise-like signal may be combined or changed when compared to
The gain parameter calculator 350 comprises a comparer 350h configured for comparing the unvoiced residual provided by the decider 130 and the amplified shaped noise-like signal 350g. The comparer is configured to obtain a measure for a likeness of the unvoiced residual and the amplified shaped noise-like signal 350g. For example, the comparer 350h may be configured for determining a cross-correlation of both signals. Alternatively, or in addition, the comparer 350h may be configured for comparing spectral values of both signals at some or all frequency bins. The comparer 350h is further configured to obtain a comparison result 350i.
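One way the comparer's likeness measure could be realized is a normalized cross-correlation; the sketch below is an illustration under that assumption (the comparer 350h could equally compare spectral values at selected frequency bins):

```python
import math

def likeness(a, b):
    """Normalized cross-correlation in [-1, 1] as one possible measure
    for the likeness of two signals of equal length; 1.0 means the
    signals are proportional, 0.0 means they are orthogonal."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den if den > 0.0 else 0.0
```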
The gain parameter calculator 350 comprises the controller 350k configured for determining the gain parameter gn(temp) based on the comparison result 350i. For example, when the comparison result 350i indicates that the amplified shaped noise-like signal comprises an amplitude or magnitude that is lower than a corresponding amplitude or magnitude of the unvoiced residual, the controller may be configured to increase one or more values of the gain parameter gn(temp) for some or all of the frequencies of the amplified noise-like signal 350g. Alternatively, or in addition, the controller may be configured to reduce one or more values of the gain parameter gn(temp) when the comparison result 350i indicates that the amplified shaped noise-like signal comprises a too high magnitude or amplitude, i.e., that the amplified shaped noise-like signal is too loud. The random noise generator 350a, the shaper 350c, the comparer 350h and the controller 350k may be configured to implement a closed-loop optimization for determining the gain parameter gn(temp). When the measure for the likeness of the unvoiced residual to the amplified shaped noise-like signal 350g, for example, expressed as a difference between both signals, indicates that the likeness is above a threshold value, the controller 350k is configured to provide the determined gain parameter gn. A quantizer 370 is configured to quantize the gain parameter gn to obtain the quantized gain parameter ĝn.
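The closed-loop optimization described above can be sketched as follows; the step schedule, iteration count, and energy-based comparison are assumptions for illustration, not the patent's control rule:

```python
def closed_loop_gain(target, shaped_noise, iterations=40):
    """Closed-loop sketch: repeatedly compare the energy of the
    amplified shaped noise with the energy of the target residual and
    nudge the temporary gain g_n(temp) up or down until they match."""
    def energy(x):
        return sum(v * v for v in x)
    g, step = 1.0, 0.5
    target_e = energy(target)
    for _ in range(iterations):
        e = energy([g * v for v in shaped_noise])
        if e < target_e:
            g *= 1.0 + step       # too quiet: raise the gain
        elif e > target_e:
            g /= 1.0 + step       # too loud: lower the gain
        else:
            break
        step *= 0.8               # refine the step each pass
    return g
```

For a target twice as loud as the shaped noise, the loop settles near a gain of 2, matching the energies of both signals.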
The random noise generator 350a may be configured to deliver a Gaussian-like noise. The random noise generator 350a may be configured for running (calling) a random generator with a number of n uniform distributions between a lower limit (minimum value), such as −1, and an upper limit (maximum value), such as +1. For example, the random noise generator 350a is configured for calling the random generator three times. As digitally implemented random noise generators may output pseudo-random values, adding or superimposing a plurality or a multitude of pseudo-random functions may allow for obtaining a sufficiently randomly distributed function. This procedure follows the Central Limit Theorem. The random noise generator 350a may be configured to call the random generator at least two, three or more times.
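A possible reading of the described summation, written in Python rather than pseudo-code (the helper name is an assumption):

```python
import random

def gaussian_like_noise(length, calls=3):
    """Sketch of the described generator: each output sample is the sum
    of several uniform draws in [-1, 1]; by the Central Limit Theorem
    the sum approaches a Gaussian-like distribution."""
    return [sum(random.uniform(-1.0, 1.0) for _ in range(calls))
            for _ in range(length)]
```

With three calls, each sample is bounded in [−3, 3] and the distribution is already visibly bell-shaped.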
Alternatively, the random noise generator 350a may generate the noise-like signal from a memory as it was described for the random noise generator 240. Alternatively, the random noise generator 350a may comprise, for example, an electrical resistance or other means for generating a noise signal by executing a code or by measuring physical effects such as thermal noise.
The shaping processor 350d may be configured to add a formantic structure and a tilt to the noise-like signal 350b by filtering the noise-like signal 350b with fe(n) as stated above. The tilt may be added by filtering the signal with a filter t(n) comprising a transfer function based on:
Ft(z)=1−β·z⁻¹
wherein the factor β may be deduced from the voicing of the previous subframe:
β=0.25·(1+voicing)
wherein the voicing may be determined from the contributions of the adaptive codebook (AC) and the innovative codebook (IC).
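The tilt filtering just described can be sketched as follows (a minimal illustration; the function and signal names are assumptions, and voicing is assumed to lie in [−1, 1]):

```python
def apply_tilt(signal, voicing):
    """Apply the tilt filter F_t(z) = 1 - beta*z^-1 with
    beta = 0.25 * (1 + voicing), i.e. y[n] = x[n] - beta * x[n-1]."""
    beta = 0.25 * (1.0 + voicing)
    out = []
    prev = 0.0
    for x in signal:
        out.append(x - beta * prev)
        prev = x
    return out
```

For fully voiced content (voicing = 1) the factor β reaches 0.5 and the filter strongly attenuates low frequencies; for voicing = −1 it is transparent.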
The gain parameter gn or the quantized gain parameter ĝn, respectively, allows for providing additional information that may reduce an error or a mismatch between the encoded signal and the corresponding signal decoded at a decoder such as the decoder 200.
With respect to the determination rule
the parameter w1 may comprise a positive non-zero value of at most 1.0, advantageously of at least 0.7 and at most 0.8 and more advantageously comprise a value of 0.75. The parameter w2 may comprise a positive non-zero scalar value of at most 1.0, advantageously of at least 0.8 and at most 0.93 and more advantageously comprise a value of 0.9. The parameter w2 is advantageously greater than w1.
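Weighting factors such as w1 and w2 are commonly applied to a prediction polynomial in the bandwidth-expanded form A(z/w), whose coefficients are a_k·w^k. The sketch below shows that weighting as an illustration under that assumption; the patent's exact determination rule is not reproduced here:

```python
def weight_lpc(coeffs, w):
    """Bandwidth-expand LPC coefficients: A(z/w) has coefficients
    a_k * w^k for k = 1..p. Factors such as w1 = 0.75 and w2 = 0.9
    yield a milder (flatter) version of the spectral envelope."""
    return [a * (w ** (k + 1)) for k, a in enumerate(coeffs)]
```

The closer w is to 1, the closer the weighted polynomial stays to A(z); smaller w flattens the envelope, which is why w2 > w1 makes A(z/w1)/A(z/w2)-style filters emphasize formant regions only moderately.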
Basically, the gain parameter calculator 350′ implements at least partially a decoder by synthesizing the synthesized signal 350l′. When compared to the encoder 300 comprising the comparer 350h configured for comparing the unvoiced residual and the amplified shaped noise-like signal, the encoder 400 comprises the comparer 350h′, which is configured to compare the (probably complete) audio frame and the synthesized signal.
This may allow for a higher precision, as the frames of the signal and not only parameters thereof are compared to each other. The higher precision may entail an increased computational effort, as the audio frame 122 and the synthesized signal 350l′ may comprise a higher complexity when compared to the residual signal and to the amplified shaped noise-like information, such that comparing both signals is also more complex. In addition, the synthesis has to be calculated, necessitating computational effort by the synthesizer 350m′.
The gain parameter calculator 350′ comprises a memory 350n′ configured for recording an encoding information comprising the encoding gain parameter gn or a quantized version ĝn thereof. This allows the controller 350k to obtain the stored gain value when processing a subsequent audio frame. For example, the controller may be configured to determine a first (set of) value(s), i.e., a first instance of the gain factor gn(temp), based on or equal to the value of gn for the previous audio frame.
The gain parameter calculator 550 comprises the noise generator 350a configured for providing the noise(-like) signal n(n) and an amplifier 550g configured for amplifying the noise signal n(n) based on the noise gain parameter gn to obtain an amplified noise signal 550h. The gain parameter calculator comprises a combiner 550i configured for combining the amplified shaped code signal 550f and the amplified noise signal 550h to obtain a combined excitation signal 550k. The combiner 550i may be configured, for example, for spectrally adding or multiplying spectral values of the amplified shaped code signal and the amplified noise signal 550f and 550h. Alternatively, the combiner 550i may be configured to convolute both signals 550f and 550h.
As described above for the shaper 350c, the shaper 550b may be implemented such that first the code signal c(n) is amplified by the variable amplifier 550e and afterwards shaped by the shaping processor 550d. Alternatively, the shaping information 550c for the code signal c(n) may be combined with the code gain parameter information gc such that a combined information is applied to the code signal c(n).
The gain parameter calculator 550 comprises a comparer 550l configured for comparing the combined excitation signal 550k and the unvoiced residual signal obtained from the voiced/unvoiced decider 130. The comparer 550l may be the comparer 350h and is configured for providing a comparison result, i.e., a measure 550m for a likeness of the combined excitation signal 550k and the unvoiced residual signal. The code gain calculator comprises a controller 550n configured for controlling the code gain parameter information gc and the noise gain parameter information gn. The code gain parameter information gc and the noise gain parameter information gn may comprise a plurality or a multitude of scalar or imaginary values that may be related to a frequency range of the noise signal n(n) or a signal derived thereof, or to a spectrum of the code signal c(n) or a signal derived thereof.
Alternatively, the gain parameter calculator 550 may be implemented without the shaping processor 550d. Alternatively, the shaping processor 550d may be configured to shape the noise signal n(n) and to provide a shaped noise signal to the variable amplifier 550g.
Thus, by controlling both gain parameter informations gc and gn, a likeness of the combined excitation signal 550k when compared to the unvoiced residual may be increased such that a decoder receiving information related to the code gain parameter information gc and the noise gain parameter information gn may reproduce an audio signal which comprises a good sound quality. The controller 550n is configured to provide an output signal 550o comprising information related to the code gain parameter information gc and the noise gain parameter information gn. For example, the signal 550o may comprise both gain parameter informations gn and gc as scalar or quantized values or as values derived thereof, for example, coded values.
Alternatively, the encoder 600 may be configured to comprise one quantizer configured for quantizing the code gain parameter information gc and the noise gain parameter gn for obtaining the quantized parameter information ĝc and ĝn. Both gain parameter information may be quantized, for example, sequentially.
The formant information calculator 160 is configured to calculate the speech related spectral shaping information 550c from the prediction coefficients 122.
The controller 550n may be configured for determining the gain parameter information gc and gn for each subframe of a processed audio frame. The controller may be configured to determine, i.e., to calculate, the gain parameter information gc and gn based on the details set forth below.
First, the average energy of the subframe may be computed on the original short-term prediction residual signal available during the LPC analysis, i.e., on the unvoiced residual signal. The energy is averaged over the four subframes of the current frame in the logarithmic domain by:
wherein Lsf is the size of a subframe in samples. In this case, the frame is divided into four subframes. The averaged energy may then be coded on a number of bits, for example three, four or five, by using a previously trained stochastic codebook. The stochastic codebook may comprise a number of entries (size) according to the number of different values that may be represented by the number of bits, e.g., a size of 8 for 3 bits, a size of 16 for 4 bits or a size of 32 for 5 bits. A quantized gain may be determined from the selected codeword of the codebook. For each subframe the two gains gc and gn are computed. The gain of code gc may be computed, for example, based on:
where cw(n) is, for example, the fixed innovation selected from the fixed codebook comprised by the signal generator 550a filtered by the perceptual weighted filter. The expression xw(n) corresponds to the conventional perceptual target excitation computed in CELP encoders. The code gain information gc may then be normalized for obtaining a normalized gain gnc based on:
The normalized gain gnc may be quantized, for example by the quantizer 170-1. Quantization may be performed according to a linear or logarithmic scale. A logarithmic scale may comprise a scale of size of 4, 5 or more bits. For example, the logarithmic scale comprises a size of 5 bits. Quantization may be performed based on:
Indexnc=⌊20·log10((gnc+20)/1.25)+0.5⌋
wherein Indexnc may be limited between 0 and 31, if the logarithmic scale comprises 5 bits. The Indexnc may be the quantized gain parameter information. The quantized gain of code ĝc, may then be expressed based on:
The gain of code may be computed in order to minimize the mean squared error (MSE)
wherein Lsf corresponds to the size of a subframe in samples, as defined above.
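Minimizing the MSE between the perceptual target xw(n) and the scaled filtered code gc·cw(n) has the standard least-squares solution gc = ⟨xw, cw⟩ / ⟨cw, cw⟩; the sketch below illustrates that conventional CELP computation (function and variable names are assumptions):

```python
def optimal_code_gain(target, code):
    """Least-squares code gain: minimizes sum_n (x_w[n] - g*c_w[n])^2
    over one subframe, giving g_c = <x_w, c_w> / <c_w, c_w>."""
    num = sum(x * c for x, c in zip(target, code))
    den = sum(c * c for c in code)
    return num / den if den > 0.0 else 0.0
```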
The noise gain parameter information may be determined in terms of energy mismatch by minimizing an error based on
The variable k is an attenuation factor that may be varied dependent on the prediction coefficients, wherein the prediction coefficients may allow for determining whether the speech comprises a low portion of background noise or even no background noise (clean speech). Alternatively, the signal may also be determined as being noisy speech, for example when the audio signal or a frame thereof comprises changes between unvoiced and non-unvoiced frames. The variable k may be set to a value of at least 0.85, of at least 0.95 or even to a value of 1 for clean speech, where a high dynamic of energy is perceptually important. The variable k may be set to a value of at least 0.6 and at most 0.9, advantageously to a value of at least 0.7 and at most 0.85 and more advantageously to a value of 0.8 for noisy speech, where the noise excitation is made more conservative for avoiding fluctuations in the output energy between unvoiced and non-unvoiced frames. The error (energy mismatch) may be computed for each of the quantized gain candidates ĝn. A frame divided into four subframes may result in four quantized gain candidates ĝn. The candidate which minimizes the error may be output by the controller. The quantized gain of noise (noise gain parameter information) may be computed based on:
wherein Indexn is limited between 0 and 3 according to the four candidates. A resulting combined excitation signal, such as the excitation signal 550k or 550k′ may be obtained based on:
e(n)=ĝc·c(n)+ĝn·n(n)
wherein e(n) is the combined excitation signal 550k or 550k′.
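The sample-wise combination of the amplified code and noise contributions can be sketched directly (function name is an assumption):

```python
def combined_excitation(code, noise, g_c, g_n):
    """Form the combined excitation e(n) = g_c*c(n) + g_n*n(n) from the
    code signal, the noise signal and their respective gains."""
    return [g_c * c + g_n * v for c, v in zip(code, noise)]
```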
An encoder 600 or a modified encoder 600 comprising the gain parameter calculator 550 or 550′ may allow for an unvoiced coding based on a CELP coding scheme. The CELP coding scheme may be modified based on the following exemplary details for handling unvoiced frames:
The controller 810 comprises an analysis block configured for obtaining prediction coefficients as it is described for the analyzer 320 to obtain the prediction coefficients 122. The controller further comprises a synthesis filter 840 for filtering the combined excitation signal 550k, wherein the synthesis filter 840 is adapted by the prediction coefficients 122. A further comparer may be configured to compare the input signal s(n) and the synthesized signal ŝ(n), e.g., the decoded (restored) audio signal. Further, a memory 350n is arranged, wherein the controller 810 is configured to store the predicted signal and/or the prediction coefficients in the memory. A signal generator 850 is configured to provide an adaptive excitation signal based on the stored predictions in the memory 350n, allowing for an enhanced adaptive excitation based on a former combined excitation signal.
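The adaptive excitation mentioned above reads past excitation at the pitch lag; a toy sketch with an integer lag only (real codecs use fractional lags and interpolation; all names are assumptions):

```python
def adaptive_excitation(past_excitation, pitch_lag, length):
    """Adaptive-codebook contribution: repeat the last pitch_lag samples
    of the stored (past) combined excitation periodically."""
    start = len(past_excitation) - pitch_lag
    return [past_excitation[start + (n % pitch_lag)] for n in range(length)]
```

Storing the enhanced (shaped) excitation in this memory is what lets subsequent non-unvoiced frames benefit from the shaping, as noted later in the text.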
The audio signal sequence 282 may comprise a good quality and a high likeness when compared to an encoded input signal.
Further embodiments provide decoders enhancing the decoder 1000 by shaping and/or amplifying the code-generated (code excited) excitation signal 1012 and/or the noise-like signal 1022. Thus, the decoder 1000 may comprise a shaping processor and/or a variable amplifier arranged between the signal generator 1010 and the combiner 1050, between the noise generator 1020 and the combiner 1050, respectively. The input signal 1002 may comprise information related to the code gain parameter information gc and/or the noise gain parameter information, wherein the decoder may be configured to adapt an amplifier for amplifying the code generated excitation signal 1012 or a shaped version thereof by using the code gain parameter information gc. Alternatively, or in addition, the decoder 1000 may be configured to adapt, i.e., to control an amplifier for amplifying the noise-like signal 1022 or a shaped version thereof with an amplifier by using the noise gain parameter information.
Alternatively, the decoder 1000 may comprise a shaper 1070 configured for shaping the code excited excitation signal 1012 and/or a shaper 1080 configured for shaping the noise-like signal 1022 as indicated by the dotted lines. The shapers 1070 and/or 1080 may receive the gain parameters gc and/or gn and/or speech related shaping information. The shapers 1070 and/or 1080 may be formed as described for the above described shapers 250, 350c and/or 550b.
The decoder 1000 may comprise a formant information calculator 1090 to provide a speech related shaping information 1092 for the shapers 1070 and/or 1080 as it was described for the formant information calculator 160. The formant information calculator 1090 may be configured to provide different speech related shaping information (1092a; 1092b) to the shapers 1070 and/or 1080.
Although
In a step 1430 of method 1400 an output signal is formed based on an information related to a voiced signal frame, the first gain parameter information and the second gain parameter information.
In other words, aspects of the present invention propose a new way of coding the unvoiced frames by spectrally shaping a randomly generated Gaussian noise, adding to it a formantic structure and a spectral tilt. The spectral shaping is done in the excitation domain before exciting the synthesis filter. As a consequence, the shaped excitation will be updated in the memory of the long-term prediction for generating subsequent adaptive codebooks.
The subsequent frames, which are not unvoiced, will also benefit from the spectral shaping. Unlike the formant enhancement in the post-filtering, the proposed noise shaping is performed at both encoder and decoder sides.
Such an excitation can be used directly in a parametric coding scheme for targeting very low bitrates. However, we propose also to associate such an excitation in combination with a conventional innovative codebook within a CELP coding scheme.
For both methods, we propose a new gain coding that is especially efficient for both clean speech and speech with background noise. We propose mechanisms to get as close as possible to the original energy while at the same time avoiding too harsh transitions with non-unvoiced frames as well as unwanted instabilities due to the gain quantization.
The first aspect targets unvoiced coding at rates of 2.8 and 4 kilobits per second (kbps). The unvoiced frames are first detected. This can be done by a usual speech classification as in Variable Rate Multimode Wideband (VMR-WB), as known from [3].
There are two main advantages in doing the spectral shaping at this stage. First, the spectral shaping is taken into account for the gain calculation of the excitation. As the gain computation is the only non-blind module during the excitation generation, it is a great advantage to have it at the end of the chain after the shaping. Secondly, it allows saving the enhanced excitation in the memory of the long-term prediction (LTP). The enhancement will then also serve subsequent non-unvoiced frames.
Although the quantizers 170, 170-1 and 170-2 were described as being configured for obtaining the quantized parameters ĝc and ĝn, the quantized parameters may be provided as an information related thereto, e.g., an index or an identifier of an entry of a database, the entry comprising the quantized gain parameters ĝc and ĝn.
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
The inventive encoded audio signal can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods may be performed by any hardware apparatus.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which will be apparent to others skilled in the art and which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
13189392 | Oct 2013 | EP | regional |
14178788 | Jul 2014 | EP | regional |
This application is a continuation of U.S. application Ser. No. 16/504,891, filed Jul. 8, 2019, which is a continuation of U.S. patent application Ser. No. 15/131,681, filed Apr. 18, 2016, now issued as U.S. Pat. No. 10,373,625, which is a continuation of copending International Application No. PCT/EP2014/071767, filed Oct. 10, 2014, which is incorporated herein by reference in its entirety, and additionally claims priority from European Application No. 13189392.7, filed Oct. 18, 2013, and from European Application No. 14178788.7, filed Jul. 28, 2014, which are also incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
5444816 | Adoul | Aug 1995 | A |
5528727 | Wang | Jun 1996 | A |
5732389 | Kroon | Mar 1998 | A |
5864797 | Fujimoto | Jan 1999 | A |
5926788 | Nishiguchi | Jul 1999 | A |
5946651 | Jarvinen | Aug 1999 | A |
6003001 | Maeda | Dec 1999 | A |
6067511 | Grabb | May 2000 | A |
6131084 | Hardwick | Oct 2000 | A |
6192335 | Ekudden et al. | Feb 2001 | B1 |
6230124 | Maeda | May 2001 | B1 |
6311154 | Gersho et al. | Oct 2001 | B1 |
6415252 | Peng et al. | Jul 2002 | B1 |
6611800 | Nishiguchi | Aug 2003 | B1 |
6615169 | Ojala | Sep 2003 | B1 |
7778827 | Jelinek et al. | Aug 2010 | B2 |
8144804 | Chinen et al. | Mar 2012 | B2 |
8712766 | Ashley | Apr 2014 | B2 |
10304470 | Fuchs | May 2019 | B2 |
10373625 | Fuchs | Aug 2019 | B2 |
10607619 | Fuchs | Mar 2020 | B2 |
10909997 | Fuchs | Feb 2021 | B2 |
20020091514 | Fuchigami | Jul 2002 | A1 |
20030074192 | Choi | Apr 2003 | A1 |
20040148162 | Fingscheidt et al. | Jul 2004 | A1 |
20050010402 | Sung | Jan 2005 | A1 |
20060173677 | Sato et al. | Aug 2006 | A1 |
20060206317 | Morii | Sep 2006 | A1 |
20090222273 | Massaloux et al. | Sep 2009 | A1 |
20100174538 | Vos | Jul 2010 | A1 |
20100262420 | Herre et al. | Oct 2010 | A1 |
20110200198 | Grill et al. | Aug 2011 | A1 |
20120209599 | Malenovsky et al. | Aug 2012 | A1 |
20160232908 | Fuchs | Aug 2016 | A1 |
20160232909 | Fuchs | Aug 2016 | A1 |
20190228787 | Fuchs | Jul 2019 | A1 |
20190333529 | Fuchs | Oct 2019 | A1 |
20200219521 | Fuchs | Jul 2020 | A1 |
20210098010 | Fuchs | Apr 2021 | A1 |
Number | Date | Country |
---|---|---|
098072 | Apr 2016 | AR |
1188957 | Jul 1998 | CN |
1272939 | Nov 2000 | CN |
1328683 | Dec 2001 | CN |
1338096 | Feb 2002 | CN |
1440126 | Sep 2003 | CN |
1795495 | Jun 2006 | CN |
101401153 | Apr 2009 | CN |
102124517 | Jul 2011 | CN |
102341848 | Feb 2012 | CN |
967594 | Dec 1999 | EP |
H06202697 | Jul 1994 | JP |
2001051699 | Feb 2001 | JP |
2010055002 | Mar 2010 | JP |
2011518345 | Jun 2011 | JP |
5686369 | Jan 2015 | JP |
2015515644 | May 2015 | JP |
1020010033539 | Apr 2001 | KR |
2223555 | Feb 2004 | RU |
2255380 | Jun 2005 | RU |
2316059 | Jan 2008 | RU |
2008146294 | May 2010 | RU |
2400832 | Sep 2010 | RU |
2012130472 | Sep 2013 | RU |
0025305 | May 2000 | WO |
2013127364 | Sep 2013 | WO |
Entry |
---|
“Frame Error Robust Narrow-Band and Wideband Embedded Variable Bit-Rate Coding of Speech and Audio from 8-32 kbit/s”, ITU-T, G.718, Series G: Transmission System and Media, Digital Systems and Networks, Recommendation ITU-T G.718, Telecommunication Standardization Sector of ITU, Jun. 2008, 257 pages., Jun. 2008, pp. 1-257. |
Gerson, I. A. , et al., “Vector Sum Excited Linear Prediction (VSELP)”, Advances in Speech Coding. Vancouver, Sep. 5-8, 1989 [Proceedings of the Workshop on Speech Coding for Telecommunications], Boston, Kluwer, US, pp. 69-79. |
Jelinek, Milan , et al., “Wideband Speech Coding Advances in VMR-WB Standard”, IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, No. 4, pp. 1167-1179. |
Moreau, N., et al., “Successive Orthogonalizations in the Multistage CELP Coder”, Speech Processing 1, San Francisco, Mar. 23-26, 1992 [Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP)] New York, IEEE, US, vol. 1, pp. 61-64. |
Quackenbush , “A 7 kHz bandwidth, 32 kbps speech coder for ISDN”, 1991 International Conference on Acoustics, Speech, and Signal Processing. (Abstract), Apr. 1991, pp. 1-4. |
Taddei , et al., “Efficient coding of transitional speech segments in celp”, Speech Coding, 2002, IEEE Workshop Proceedings. Oct. 6-9, 2002, Piscataway, NJ, USA, IEEE, Oct. 6, 2002 (Oct. 6, 2002), pp. 14-16, XP010647197, Oct. 6, 2002. |
Taumi, S. , et al., “13kbps Low-Delay Error- Robust Speech Coding for GSM EFR”, Speech Coding for Telecommunications, 1995. Proceedings., 1995 IEEE Workshop, pp. 61-62. |
Thyssen, J. , et al., “A Candidate for the ITU-T 4 kbit/s Speech Coding Standard”, 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. (ICASSP)Salt Lake City, Utah May 7-11, 2001, pp. 681-684. |
Zhang, L., et al., “A CELP Variable Rate Speech Codec With Low Average Rate”, 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing. Munich, Apr. 21-24, 1997; Los Alamitos, IEEE Comp. Soc. Press, Apr. 21, 1997, pp. 735-738, XP000822552. |
Number | Date | Country | |
---|---|---|---|
20210098010 A1 | Apr 2021 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16504891 | Jul 2019 | US |
Child | 17121179 | US | |
Parent | 15131681 | Apr 2016 | US |
Child | 16504891 | US | |
Parent | PCT/EP2014/071767 | Oct 2014 | US |
Child | 15131681 | US |