The present invention relates generally to audio transform coding, and, in particular embodiments, to a system and method for spectrum harmonic/noise sharpness control.
In modern audio/speech signal compression technology, the concept of BandWidth Extension (BWE) is widely used. Similar or identical technology is sometimes also called High Band Extension (HBE), SubBand Replica (SBR), or Spectral Band Replication (SBR). Although the names differ, they all share the same basic idea of encoding/decoding some frequency sub-bands (usually high bands) with a small bit rate budget (or even with a zero bit rate budget), or at a significantly lower bit rate than normal encoding/decoding approaches. Low bit rate coding sometimes causes low quality. If a few bits can improve the quality, it is worth spending those few bits.
The frequency domain can be defined as the FFT-transformed domain; it can also be the Modified Discrete Cosine Transform (MDCT) domain. A well-known BWE can be found in the standard ITU-T G.729.1, in which the algorithm is named Time Domain Bandwidth Extension (TDBWE).
ITU-T G.729.1 is also called the G.729EV coder, which is an 8-32 kbit/s scalable wideband (50 Hz-7,000 Hz) extension of ITU-T Rec. G.729. By default, the encoder input and decoder output are sampled at 16,000 Hz. The bitstream produced by the encoder is scalable and consists of 12 embedded layers, which will be referred to as Layers 1 to 12. Layer 1 is the core layer corresponding to a bit rate of 8 kbit/s. This layer is compliant with the G.729 bitstream, which makes G.729EV interoperable with G.729. Layer 2 is a narrowband enhancement layer adding 4 kbit/s, while Layers 3 to 12 are wideband enhancement layers adding 20 kbit/s in steps of 2 kbit/s.
The G.729EV coder is designed to operate with a digital signal sampled at 16,000 Hz followed by a conversion to 16-bit linear PCM before the converted signal is inputted to the encoder. However, the 8,000 Hz input sampling frequency is also supported. Similarly, the format of the decoder output is 16-bit linear PCM with a sampling frequency of 8,000 or 16,000 Hz. Other input/output characteristics are converted to 16-bit linear PCM with 8,000 or 16,000 Hz sampling before encoding, or from 16-bit linear PCM to the appropriate format after decoding. The bitstream from the encoder to the decoder is defined within this Recommendation.
The G.729EV coder is built upon a three-stage structure: embedded Code-Excited Linear-Prediction (CELP) coding, Time-Domain Bandwidth Extension (TDBWE), and predictive transform coding that is also referred to as Time-Domain Aliasing Cancellation (TDAC). The embedded CELP stage generates Layers 1 and 2, which yield a narrowband synthesis (50 Hz-4,000 Hz) at 8 kbit/s and 12 kbit/s. The TDBWE stage generates Layer 3 and allows producing a wideband output (50 Hz-7,000 Hz) at 14 kbit/s. The TDAC stage operates in the MDCT domain and generates Layers 4 to 12 to improve quality from 14 kbit/s to 32 kbit/s. TDAC coding represents the weighted CELP coding error signal in the 50 Hz-4,000 Hz band and the input signal in the 4,000 Hz-7,000 Hz band.
The G.729EV coder operates on 20 ms frames. However, the embedded CELP coding stage operates on 10 ms frames, such as G.729 frames. As a result, two 10 ms CELP frames are processed per 20 ms frame. In the following, to be consistent with the context of ITU-T Rec. G.729, the 20 ms frames used by G.729EV will be referred to as superframes, whereas the 10 ms frames and the 5 ms subframes involved in the CELP processing will be called frames and subframes, respectively.
The TDBWE encoder is illustrated in
The maximum of the window is centered on the second 10 ms frame of the current superframe. The window is constructed such that the frequency envelope computation has a lookahead of 16 samples (2 ms) and a lookback of 32 samples (4 ms). The windowed signal is transformed by FFT; the even-numbered bins of the full-length 128-point FFT are computed using a polyphase structure. Finally, the frequency envelope parameter set is calculated as logarithmic weighted sub-band energies for 12 evenly spaced, equally wide, overlapping sub-bands in the FFT domain.
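The following sketch illustrates, in C, how such a set of logarithmic sub-band energies could be computed from the FFT bin energies. The 50% band overlap, the triangular weighting, and the floor constant are illustrative assumptions, not the exact sub-band layout or weighting of G.729.1.

```c
/* Sketch of the frequency-envelope computation: logarithmic, weighted
 * sub-band energies for 12 overlapping sub-bands.  The 50% overlap, the
 * triangular weighting and the floor constant are illustrative assumptions;
 * bin_energy[k] = |X(k)|^2 of the windowed high-band signal. */
#include <math.h>

#define NUM_ENV_BANDS 12

void frequency_envelope(const float *bin_energy, int num_bins,
                        float env[NUM_ENV_BANDS])
{
    int band_width = 2 * num_bins / (NUM_ENV_BANDS + 1);  /* assumed 50% overlap */
    int hop = band_width / 2;

    for (int j = 0; j < NUM_ENV_BANDS; j++) {
        int start = j * hop;
        float acc = 0.0f;
        for (int n = 0; n < band_width && start + n < num_bins; n++) {
            /* triangular weight across the band (assumed shape) */
            float w = 1.0f - fabsf((n + 0.5f) * 2.0f / band_width - 1.0f);
            acc += w * bin_energy[start + n];
        }
        env[j] = 0.5f * log2f(acc + 1e-10f);               /* logarithmic energy */
    }
}
```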
The TDBWE excitation signal 201, exc(n), is generated on a 5 ms subframe basis from parameters that are transmitted in Layers 1 and 2 of the bitstream. Specifically, the following parameters are used: the integer pitch lag T0=int(T1) or int(T2) depending on the subframe, the fractional pitch lag frac, the energy Ec of the fixed codebook contributions, and the energy Ep of the adaptive codebook contribution. Energy Ec is mathematically expressed as
while energy Ep is expressed as
A detailed description can be found in the ITU G.729.1 Recommendation.
The parameters of the excitation generation are computed every 5 ms subframe. The excitation signal generation consists of the following steps:
estimation of two gains gv and guv for the voiced and unvoiced contributions to the final excitation signal exc(n);
pitch lag post-processing;
generation of the voiced contribution;
generation of the unvoiced contribution; and
low-pass filtering.
In G.729.1, TDBWE is used to code the wideband signal from 4 kHz to 7 kHz. The narrowband (NB) signal from 0 to 4 kHz is coded with the G.729 CELP coder, in which the excitation consists of an adaptive codebook contribution and a fixed codebook contribution. The adaptive codebook contribution models the periodicity of voiced speech, while the fixed codebook contributes the unpredictable portion. The ratio ξ of the energies of the adaptive and fixed codebook excitations (including the enhancement codebook) is computed for each subframe as:
In order to reduce this ratio ξ in case of unvoiced sounds, a “Wiener filter” characteristic is applied:
This leads to more consistent unvoiced sounds. The gains for the voiced and unvoiced contributions of exc(n) are determined using the following procedure. An intermediate voiced gain g′v is calculated by:
which is slightly smoothed to obtain the final voiced gain gv:
where g′v,old is the value of g′v of the preceding subframe.
To satisfy the constraint gv²+guv²=1, the unvoiced gain is represented as:

guv = √(1−gv²). (5)
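The gain relationship can be illustrated with a short C sketch. Only the constraint of equation (5) is taken literally; the simple averaging used here for the smoothing of equation (4) and the clamping of the gain are assumptions, since the exact formulas are not reproduced above.

```c
/* Sketch of the voiced/unvoiced gain relationship.  Only the constraint
 * gv^2 + guv^2 = 1 (equation (5)) is taken literally; the averaging used
 * here for the smoothing of equation (4) and the clamping are assumptions. */
#include <math.h>

typedef struct { float gv_old; } gain_state;   /* g'v of the preceding subframe */

void tdbwe_gains(float gv_intermediate, gain_state *st, float *gv, float *guv)
{
    /* assumed smoothing with the previous subframe value, cf. equation (4) */
    *gv = 0.5f * (gv_intermediate + st->gv_old);
    st->gv_old = gv_intermediate;

    if (*gv > 1.0f) *gv = 1.0f;                /* keep the square root real */
    *guv = sqrtf(1.0f - (*gv) * (*gv));        /* equation (5) */
}
```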
The generation of a consistent pitch structure within the excitation signal exc(n) requires a good estimate of the fundamental pitch lag t0 of the speech production process. Within Layer 1 of the bitstream, the integer and fractional pitch lag values T0 and frac are available for the four 5 ms subframes of the current superframe. For each subframe, the estimation of t0 is based on these parameters.
The aim of the G.729 encoder-side pitch search procedure is to find the pitch lag that minimizes the power of the LTP residual signal. That is, the LTP pitch lag is not necessarily identical to t0, which is a requirement for the concise reproduction of voiced speech components. The most typical deviations are pitch-doubling and pitch-halving errors, i.e., the frequency corresponding to the LTP lag is half or double the original fundamental speech frequency. Pitch-doubling (or tripling, etc.) errors in particular should be avoided. Thus, the following post-processing of the LTP lag information is used. First, the LTP pitch lag for an oversampled time scale is reconstructed from T0 and frac, taking a bandwidth expansion factor of 2 into account:
tLTP = 2·(3·T0+frac). (6)
The (integer) factor between the currently observed LTP lag tLTP and the post-processed pitch lag of the preceding subframe tpost,old (see Equation 9) is calculated as:
If the factor f falls into the range 2, . . . , 4, a relative error is evaluated as:
If the magnitude of this relative error is below a threshold ε=0.1, it is assumed that the current LTP lag is the result of a beginning pitch-doubling (-tripling, etc.) error phase. Thus, the pitch lag is corrected by dividing by the integer factor f, thereby producing a continuous pitch lag behavior with respect to the previous pitch lags:
which is further smoothed as:
Note that this moving average leads to a virtual precision enhancement from a resolution of ⅓ to ⅙ of a sample. Finally, the post-processed pitch lag tp is decomposed into integer and fractional parts:
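A C sketch of this pitch lag post-processing is given below. Equation (6) is used as stated; the rounding used to obtain the integer factor f, the form of the relative error, the moving-average weights, and the unit handling in the final decomposition are assumptions, since those equation bodies are not reproduced above.

```c
/* Sketch of the LTP lag post-processing.  Equation (6) is used as stated;
 * the rounding for the integer factor f, the relative-error measure, the
 * moving-average weights and the unit conventions below are assumptions. */
#include <math.h>

typedef struct { float t_post_old; } pitch_state;   /* previous post-processed lag */

void pitch_postproc(int T0, int frac, pitch_state *st, int *t0_int, int *t0_frac)
{
    /* equation (6): LTP lag on an oversampled (1/3-sample) grid,
     * including the bandwidth expansion factor of 2 */
    float t_ltp = 2.0f * (3.0f * T0 + frac);

    if (st->t_post_old <= 0.0f)                       /* first call: no history yet */
        st->t_post_old = t_ltp;

    /* integer factor against the previous post-processed lag (assumed rounding) */
    int f = (int)floorf(t_ltp / st->t_post_old + 0.5f);
    if (f >= 2 && f <= 4) {
        /* relative error of the multiple (assumed form) */
        float err = (t_ltp - f * st->t_post_old) / (f * st->t_post_old);
        if (fabsf(err) < 0.1f)
            t_ltp /= (float)f;     /* undo a beginning pitch-doubling/-tripling */
    }

    /* moving-average smoothing: raises the precision from 1/3 to 1/6 sample
     * (the equal weights are an assumption) */
    float t_post = 0.5f * (t_ltp + st->t_post_old);
    st->t_post_old = t_post;

    /* decompose into integer and fractional (sixths of a sample) parts;
     * the conversion from the 1/3-sample grid to samples is assumed */
    float lag = t_post / 3.0f;
    *t0_int  = (int)lag;
    *t0_frac = (int)floorf(6.0f * (lag - *t0_int) + 0.5f);
    if (*t0_frac == 6) { *t0_frac = 0; (*t0_int)++; }
}
```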
The voiced components 206, sexc,v(n), of the TDBWE excitation signal are represented as shaped and weighted glottal pulses. The voiced components 206 sexc,v(n) are thus produced by overlap-add of single pulse contributions:
where nPulse,int[p] is a pulse position and PnPulse,frac[p](n) is the associated prototype pulse shape. Here p is the pulse counter, i.e., nPulse,int[p] is the (integer) position of the current pulse and nPulse,int[p-1] is the (integer) position of the previous pulse.
The fractional part of the pulse position may be expressed as:
The fractional part of the pulse position serves as an index for the pulse shape selection. The prototype pulse shapes Pi(n) with i=0, . . . , 5 and n=0, . . . , 56 are taken from a lookup table as plotted in
The gain factor gPulse[p] for the individual pulses is derived from the voiced gain parameter gv and from the pitch lag parameters:
gPulse[p] = (2·even(nPulse,int[p])−1)·gv·√(6·t0,int+t0,frac). (15)
This ensures that increasing pulse spacing does not result in a decrease of the contained energy. The function even( ) returns 1 if its argument is an even integer and 0 otherwise.
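The overlap-add synthesis of the voiced contribution can be sketched as follows. The per-pulse gain follows equation (15); the six 57-tap prototype pulse shapes and the pulse positions are assumed to be supplied by the caller, since the lookup table and the position recursion are not reproduced above.

```c
/* Sketch of the voiced contribution: overlap-add of shaped, weighted glottal
 * pulses.  The per-pulse gain follows equation (15); the prototype shapes
 * pulse_shape[i][n] (i = 0..5, n = 0..56) and the pulse positions are
 * assumed to be supplied by the caller. */
#include <math.h>

#define PULSE_LEN 57

static int even(int x) { return (x % 2) == 0; }   /* 1 for even arguments, else 0 */

void voiced_contribution(float *exc_v, int frame_len,
                         const int *pos_int,    /* nPulse,int[p]            */
                         const int *pos_frac,   /* fractional part, 0..5    */
                         int num_pulses,
                         const float pulse_shape[6][PULSE_LEN],
                         float gv, int t0_int, int t0_frac)
{
    for (int n = 0; n < frame_len; n++)
        exc_v[n] = 0.0f;

    for (int p = 0; p < num_pulses; p++) {
        /* equation (15): sign alternates with the parity of the pulse position;
         * the magnitude grows with the pitch lag so that wider pulse spacing
         * does not lose energy */
        float g = (2.0f * even(pos_int[p]) - 1.0f)
                  * gv * sqrtf(6.0f * t0_int + t0_frac);

        const float *shape = pulse_shape[pos_frac[p]];
        for (int n = 0; n < PULSE_LEN; n++) {
            int idx = pos_int[p] + n;
            if (idx >= 0 && idx < frame_len)
                exc_v[idx] += g * shape[n];       /* overlap-add */
        }
    }
}
```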
The unvoiced contribution 207, sexc,uv(n), is produced using the scaled output of a white noise generator:
sexc,uv(n) = guv·random(n), n=0, . . . , 39. (16)
Given the voiced and unvoiced contributions sexc,v(n) and sexc,uv(n), the final excitation signal 202, sHBexc(n), is obtained by low-pass filtering of exc(n)=sexc,v(n)+sexc,uv(n).
The low-pass filter has a cut-off frequency of 3,000 Hz and its implementation is identical with the pre-processing low-pass filter for the high band signal.
The shaping of the time envelope of the excitation signal sHBexc(n) utilizes the decoded time envelope parameters T̂env(i) with i=0, . . . , 15 to obtain a signal 203, ŝHBT(n), with a time envelope that is nearly identical to the time envelope of the encoder-side HB signal sHB(n). This is achieved by a simple scalar multiplication of a gain function gT(n) with the excitation signal sHBexc(n). In order to determine the gain function gT(n), the excitation signal sHBexc(n) is segmented and analyzed in the same manner as described for the parameter extraction in the encoder. The obtained analysis results from sHBexc(n) are, again, time envelope parameters T̃env(i) with i=0, . . . , 15; they describe the observed time envelope of sHBexc(n). Then, a preliminary gain factor is calculated for each signal segment with index i=0, . . . , 15 by comparing T̂env(i) with T̃env(i). These gain factors are interpolated using a "flat-top" Hanning window, and this interpolation procedure finally yields the desired gain function.
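A C sketch of this time envelope shaping step follows. The envelope parameters are treated here as per-segment RMS values and the interpolation is a simple linear blend between segment centres; both are stand-ins for the logarithmic parameter domain and the flat-top Hanning interpolation of the Recommendation, and only the gT(n) scaling scheme follows the text.

```c
/* Sketch of the time-envelope shaping.  Per-segment RMS envelope values and
 * a linear blend between segment centres stand in for the logarithmic
 * parameters and the flat-top Hanning interpolation.  16 segments of
 * 1.25 ms are assumed to be 20 samples each. */
#include <math.h>

#define SEGMENTS 16
#define SEG_LEN  20

void shape_time_envelope(float *sig, int len,
                         const float t_env_dec[SEGMENTS],   /* decoded  T^env(i) */
                         const float t_env_obs[SEGMENTS])   /* observed T~env(i) */
{
    float seg_gain[SEGMENTS];
    for (int i = 0; i < SEGMENTS; i++)
        seg_gain[i] = t_env_dec[i] / (t_env_obs[i] + 1e-10f);

    for (int n = 0; n < len && n < SEGMENTS * SEG_LEN; n++) {
        float pos = (n + 0.5f) / SEG_LEN - 0.5f;   /* position between segment centres */
        int   i   = (int)floorf(pos);
        float a   = pos - i;
        float g;
        if (i < 0)                  g = seg_gain[0];
        else if (i >= SEGMENTS - 1) g = seg_gain[SEGMENTS - 1];
        else                        g = (1.0f - a) * seg_gain[i] + a * seg_gain[i + 1];
        sig[n] *= g;                               /* gT(n) * sHBexc(n) */
    }
}
```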
The decoded frequency envelope parameters F̂env(j) with j=0, . . . , 11 are representative of the second 10 ms frame within the 20 ms superframe. The first 10 ms frame is covered by parameter interpolation between the current parameter set and the parameter set from the preceding superframe. The signal 203, ŝHBT(n), is analyzed twice per superframe. This is done for the first (l=1) and for the second (l=2) 10 ms frame within the current superframe and yields two observed frequency envelope parameter sets F̃env,l(j) with j=0, . . . , 11 and frame index l=1, 2. A correction gain factor per sub-band is then determined for the first frame and for the second frame by comparing the decoded frequency envelope parameters F̂env(j) with the observed frequency envelope parameter sets F̃env,l(j). These gains control the channels of a filterbank equalizer. The filterbank equalizer is designed such that its individual channels match the sub-band division; it is defined by its filter impulse responses and a complementary high-pass contribution.
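The per-sub-band correction gains can be sketched as follows. The sketch assumes linear-magnitude envelope parameters and a simple average for the first-frame interpolation, and it does not reproduce the filterbank equalizer itself.

```c
/* Sketch of the per-sub-band correction gains that drive the filterbank
 * equalizer.  Linear-magnitude envelope parameters and a simple average for
 * the first-frame interpolation are assumed; the equalizer filters
 * themselves are not reproduced. */
#define NUM_SUBBANDS 12

void frequency_correction_gains(const float f_env_dec[NUM_SUBBANDS],      /* current  F^env(j)   */
                                const float f_env_dec_prev[NUM_SUBBANDS], /* preceding superframe */
                                const float f_env_obs[2][NUM_SUBBANDS],   /* F~env,l(j), l = 1,2  */
                                float gain[2][NUM_SUBBANDS])
{
    for (int j = 0; j < NUM_SUBBANDS; j++) {
        /* first 10 ms frame: interpolated parameter set (assumed average) */
        float f1 = 0.5f * (f_env_dec_prev[j] + f_env_dec[j]);
        gain[0][j] = f1           / (f_env_obs[0][j] + 1e-10f);
        /* second 10 ms frame: the decoded parameters apply directly */
        gain[1][j] = f_env_dec[j] / (f_env_obs[1][j] + 1e-10f);
    }
}
```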
The signal 204, ŝHBF(n), is obtained by shaping both the desired time and frequency envelopes onto the excitation signal sHBexc(n) (generated from parameters estimated in the lower band by the CELP decoder). There is in general no coupling between this excitation and the related envelope shapes T̂env(i) and F̂env(j). As a result, some clicks may occur in the signal ŝHBF(n). To attenuate these artifacts, an adaptive amplitude compression is applied to ŝHBF(n). Each sample of ŝHBF(n) in the i-th 1.25 ms segment is compared to the decoded time envelope T̂env(i), and the amplitude of ŝHBF(n) is compressed in order to attenuate large deviations from this envelope. The signal after this post-processing is denoted 205, ŝHBbwe(n).
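A minimal C sketch of such an adaptive amplitude compression is shown below. The hard limiting at an assumed multiple of the decoded segment envelope is only an illustration of the general behaviour described above, not the exact compression characteristic.

```c
/* Sketch of the adaptive amplitude compression: samples whose magnitude
 * exceeds an assumed multiple of the decoded segment envelope are limited
 * to that level.  The exact compression characteristic is not reproduced;
 * sig is assumed to hold SEGMENTS * SEG_LEN samples. */
#define SEGMENTS 16
#define SEG_LEN  20

void amplitude_compression(float *sig, const float t_env_dec[SEGMENTS],
                           float max_ratio /* e.g. 2.0f, an assumption */)
{
    for (int i = 0; i < SEGMENTS; i++) {
        float limit = max_ratio * t_env_dec[i];
        for (int k = 0; k < SEG_LEN; k++) {
            float *x = &sig[i * SEG_LEN + k];
            if (*x >  limit) *x =  limit;    /* attenuate large positive deviations */
            if (*x < -limit) *x = -limit;    /* attenuate large negative deviations */
        }
    }
}
```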
Embodiments of the present invention are generally in the field of speech/audio transform coding. In particular, embodiments of the present invention relate to the field of low bit rate speech/audio transform coding, and are specifically related to applications in which the ITU-T G.729.1 and/or G.718 super-wideband extensions are involved.
One embodiment of the invention discloses a method of controlling the spectral harmonic/noise sharpness of decoded subbands. A spectral sharpness parameter representing the spectral harmonic/noise sharpness of each subband is estimated at the encoder side. The spectral sharpness parameter(s) are quantized and the quantized sharpness parameter(s) are transmitted from the encoder to a decoder. The spectral sharpness parameter of each decoded subband is estimated at the decoder side. The corresponding transmitted sharpness parameter(s) from the encoder are compared with the corresponding spectral sharpness parameter(s) measured at the decoder, and a main sharpness control parameter is formed for each decoded subband. The main sharpness control parameter for each decoded subband is analyzed, and the decoded spectral subband is made sharper if it is judged not sharp enough. In addition, or alternatively, the decoded spectral subband is made flatter or noisier if it is judged not flat or noisy enough. The energy level of each modified subband is normalized to keep the energy level almost unchanged.
In one example, the spectral sharpness parameter representing the spectral harmonic/noise sharpness of each subband is estimated by calculating the ratio between the average magnitude and the maximum magnitude, or the ratio between the average energy level and the maximum energy level. If a plurality of spectral sharpness parameters are estimated on a plurality of subbands, the spectral sharpness parameter estimated from the sharpest spectral subband can be chosen to represent the spectral sharpness of the plurality of subbands when the number of bits available to transmit the spectral sharpness information is limited.
In another example, each main sharpness control parameter for each decoded subband is formed by analyzing the difference between the corresponding transmitted spectral sharpness parameter(s) and the corresponding spectral sharpness parameter(s) measured from the decoded subbands. Each main sharpness control parameter for each decoded subband can be smoothed across the current subbands and/or between consecutive frames.
In another example, making the decoded spectral subband sharper is realized by reducing the energy of the frequency coefficients between the harmonic peaks, increasing the energy of the harmonic peaks, and/or reducing the noise component.
In another example, making the decoded spectral subband flatter or noisier is realized by increasing the energy of the frequency coefficients between the harmonic peaks, reducing the energy of the harmonic peaks, and/or increasing the noise component.
In another embodiment, a method of controlling the spectral harmonic/noise sharpness of decoded subbands is disclosed. The spectral sharpness parameter of each decoded subband is estimated at the decoder side. A main sharpness control parameter is formed for each decoded subband. The main sharpness control parameter for each decoded subband is analyzed, and the decoded spectral subband is made sharper if it is judged not sharp enough. The energy level of each modified subband is normalized to keep the energy level almost unchanged.
In one example, each main sharpness control parameter for each decoded subband is formed by smoothing the spectral sharpness parameters of the decoded subbands between the current subbands and/or between consecutive frames.
In another example, a decoded subband showing a sharper spectrum is sharpened further, relative to the other decoded subbands showing less sharp spectra, based on a comparison of the main sharpness control parameters of the decoded subbands.
A method of influencing the bit allocation to different subbands is disclosed in another embodiment. The spectral sharpness parameter of each subband is estimated. The values of the spectral sharpness parameters from the different subbands are compared. According to this comparison, the allocation of more bits or extra bits is favored for coding a subband that shows a sharper spectrum over another subband that shows a less sharp or flatter spectrum.
In one example, when the sharper subbands get more bits, the flatter subbands get fewer bits if the total bit budget is fixed. The importance order of the subbands is determined according to both the spectral sharpness distribution and the energy level distribution of the subbands.
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawing, in which:
The making and using of the presently preferred embodiments are discussed in detail below. It should be appreciated, however, that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to make and use the invention, and do not limit the scope of the invention.
Low bit rate coding sometimes causes low quality. One typical low bit rate transform coding method is the BWE algorithm; another example of low bit rate transform coding is to generate the spectrum subbands of the high band through limited intra-frame frequency prediction from the low band to the high band. Because of the low bit rate, the fine spectral structure is often not precise enough. With a generated fine spectral structure, or a spectrum coded at a low bit rate, there often exists the problem of incorrect spectral harmonic/noise sharpness, which means the spectrum could be over-harmonic (over-sharp) or over-noisy (over-flat). Embodiments of the present invention utilize efficient methods to control the spectral harmonic/noise sharpness. A harmonic/noise sharpness measure is introduced, which is not simply based on signal periodicity. Measuring the spectral sharpness can also be used to influence the bit allocation for different subbands.
BandWidth Extension (BWE) has been widely used. Similar or identical technology is sometimes referred to as High Band Extension (HBE), SubBand Replica (SBR), or Spectral Band Replication (SBR). They all share the same basic idea of encoding/decoding some frequency sub-bands (usually high bands) with a small bit rate budget (or even with a zero bit rate budget), or at a significantly lower bit rate than normal encoding/decoding approaches.
BWE is often used to encode and decode some perceptually critical information within a bit budget while generating other information with a very limited bit budget or without spending any bits. It usually comprises frequency envelope coding, temporal envelope coding (optional), and spectral fine structure generation. The spectral fine structure is often generated without spending any bit budget or by using a small number of bits. The corresponding time domain signal of the spectral fine structure, after the spectral envelope has been removed, is usually called the excitation. A precise description of the spectral fine structure would need a lot of bits, which is not realistic for any BWE algorithm. A realistic approach is to artificially generate the spectral fine structure; that is, the spectral fine structure is copied from other bands, generated mathematically according to limited available parameters, or predicted from other bands with a very small number of bits.
Because of the low bit rate, not only is the spectral fine structure generated by BWE not precise enough, but a spectrum coded at a low bit rate can also be perceptually imprecise, for example a spectrum coded with the limited intra-frame frequency prediction approach. With a generated spectral fine structure, or a spectrum coded at a low bit rate, there often exists the problem of incorrect spectral harmonic/noise sharpness, which means the spectrum could be over-harmonic (over-sharp) or over-noisy (over-flat).
Embodiments of this invention propose an efficient method to control the spectral harmonic/noise sharpness. A harmonic/noise sharpness measure is introduced, which is not simply based on signal periodicity. The spectral sharpness measure can also be used to influence the bit allocation for different subbands. In particular, the embodiments can be advantageously used when the ITU-T G.729.1/G.718 codecs form the core layers of a scalable super-wideband codec.
In the conventional G.729.1 TDBWE, the harmonic/noise sharpness is basically controlled by the gains gv and guv, which are expressed in equations (4) and (5). The root control of the gains comes from the energy Ep of the adaptive codebook contribution (also called the pitch predictive contribution or Long-Term Prediction contribution) as seen in equation (1). Energy Ep is calculated from the CELP parameters, which are used to encode the low band (narrow band), so that gv strongly depends on the periodicity of the low band signal within the defined pitch range. When gv is relatively high, the spectrum of the generated excitation shows stronger harmonics (sharper spectral peaks); otherwise, a noisier and/or less harmonic (flatter) spectrum is observed. This harmonic/noise sharpness control has two potential problems:
The spectrum examples shown in
Exemplary embodiments can improve the harmonic/noise sharpness control for spectral subbands decoded at low bit rates. An exemplary embodiment includes the following points:
An exemplary embodiment based on the above-described points is provided as follows. At the encoder side, the high band [7 kHz, 14 kHz] of the original signal is divided into 4 subbands in the MDCT domain, where each subband contains 70 coefficients. In each subband of 70 coefficients, one spectral sharpness parameter is estimated in the first half subband (35 coefficients) and another in the second half subband (35 coefficients), respectively, according to equation (17). The smaller of these two sharpness values, denoted shp_enc, is chosen to represent the spectral sharpness of the corresponding subband of 70 coefficients. One bit is used to tell the decoder whether this sharpness value is smaller than 0.18 (shp_enc<0.18) or not.
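Since equation (17) is not reproduced here, the following C sketch assumes, consistent with the earlier description, that the sharpness of a half subband is the ratio of the average coefficient magnitude to the maximum coefficient magnitude, so that small values indicate a sharp (harmonic) spectrum.

```c
/* Sketch of the encoder-side sharpness estimation of this embodiment.  The
 * sharpness of a half subband is taken as the ratio of the average MDCT
 * coefficient magnitude to the maximum magnitude (small value = sharp),
 * standing in for equation (17), which is not reproduced in this text. */
#include <math.h>

/* sharpness of one (half) subband of `len` MDCT coefficients */
float spectral_sharpness(const float *coef, int len)
{
    float sum = 0.0f, max = 0.0f;
    for (int k = 0; k < len; k++) {
        float m = fabsf(coef[k]);
        sum += m;
        if (m > max) max = m;
    }
    return (sum / len) / (max + 1e-10f);
}

/* one subband of 70 coefficients -> shp_enc and the 1-bit flag */
int encode_sharpness_flag(const float *subband70, float *shp_enc)
{
    float s1 = spectral_sharpness(subband70,      35);   /* first half subband  */
    float s2 = spectral_sharpness(subband70 + 35, 35);   /* second half subband */
    *shp_enc = (s1 < s2) ? s1 : s2;           /* the smaller (sharper) value */
    return (*shp_enc < 0.18f) ? 1 : 0;        /* the single bit sent to the decoder */
}
```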
At the decoder side, there are also 8 half subbands, each having 35 coefficients, for a total of 8×35=280 coefficients representing the high band [7 kHz, 14 kHz]. The spectral sharpness parameters of the generated or decoded subbands are estimated for each half subband of 35 coefficients in the same way as at the encoder, with equation (17). Let shp_dec denote the sharpness value estimated for each half subband of 35 coefficients at the decoder side. A primary sharpness control value, denoted Sharp_c, is first evaluated in terms of the difference between shp_enc and shp_dec in the following way:
Then, the values of Sharp_c from the first half subband to the last half subband are smoothed to obtain a smoothed value, Sharp_c_sm, for each half subband. The value of Sharp_c_sm is further smoothed between consecutive frames to obtain the main sharpness control parameter Sharp_main, which has the dominant influence on the spectral sharpness control. When Sharp_main is large enough, the corresponding half subband spectrum is made sharper; the greater Sharp_main is, the sharper the spectrum should be. Conversely, when Sharp_main is small enough, the corresponding half subband spectrum is made flatter or noisier; the smaller Sharp_main is, the flatter or noisier the spectrum should be. Finally, the energy after the spectral modification may be normalized to the original energy, i.e., the energy before the spectral modification.
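The decoder-side control flow can be sketched as follows. The mapping from shp_enc and shp_dec to Sharp_c, the first-order smoothing coefficients, the decision thresholds, and the use of a dequantized shp_enc reference are all illustrative assumptions, since the corresponding formulas are not spelled out above. The sketch reuses the spectral_sharpness() helper from the encoder sketch and the make_sharper()/make_flatter() helpers sketched after the following paragraphs; only the overall structure (estimate, compare, smooth across half subbands and frames, modify, renormalize the energy) follows the description.

```c
/* Sketch of the decoder-side sharpness control flow.  Sharp_c, the 0.5/0.5
 * smoothers and the +/-0.05 thresholds are assumptions; spectral_sharpness(),
 * make_sharper() and make_flatter() are the helpers sketched nearby. */
#include <math.h>

#define HALF_SUBBANDS 8
#define HALF_LEN      35

float spectral_sharpness(const float *coef, int len);     /* encoder sketch above   */
void  make_sharper(float *coef, int len, float amount);   /* sketched further below */
void  make_flatter(float *coef, int len, float amount);   /* sketched further below */

void sharpness_control(float mdct_hb[HALF_SUBBANDS * HALF_LEN],
                       const float shp_enc[HALF_SUBBANDS],   /* dequantized reference  */
                       float sharp_main_prev[HALF_SUBBANDS]) /* state from last frame  */
{
    float sharp_c_sm = 0.0f;
    for (int b = 0; b < HALF_SUBBANDS; b++) {
        float *sub = &mdct_hb[b * HALF_LEN];
        float shp_dec = spectral_sharpness(sub, HALF_LEN);

        float sharp_c = shp_dec - shp_enc[b];             /* primary control value  */
        sharp_c_sm = 0.5f * sharp_c_sm + 0.5f * sharp_c;  /* smooth across subbands */
        float sharp_main = 0.5f * sharp_main_prev[b] + 0.5f * sharp_c_sm; /* across frames */
        sharp_main_prev[b] = sharp_main;

        float e0 = 0.0f, e1 = 0.0f;
        for (int k = 0; k < HALF_LEN; k++) e0 += sub[k] * sub[k];

        if (sharp_main > 0.05f)                 /* decoded band too flat: sharpen it  */
            make_sharper(sub, HALF_LEN, sharp_main);
        else if (sharp_main < -0.05f)           /* decoded band too sharp: flatten it */
            make_flatter(sub, HALF_LEN, -sharp_main);

        /* keep the subband energy level almost unchanged */
        for (int k = 0; k < HALF_LEN; k++) e1 += sub[k] * sub[k];
        float norm = sqrtf(e0 / (e1 + 1e-10f));
        for (int k = 0; k < HALF_LEN; k++) sub[k] *= norm;
    }
}
```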
From the above description, a method of controlling spectral harmonic/noise sharpness of decoded subbands is provided. The method comprises the steps of: estimating spectral sharpness parameter representing spectral harmonic/noise sharpness of each subband at encoder side; quantizing spectral sharpness parameter(s) and transmitting quantized parameter(s) from encoder to decoder; estimating spectral sharpness parameter of each decoded subband at decoder side; comparing the corresponding transmitted sharpness parameter(s) from encoder with the corresponding spectral sharpness parameter(s) measured at decoder and forming main sharpness control parameter for each decoded subband; analyzing main sharpness control parameter for each decoded subband and making decoded spectral subband sharper if judged not sharp enough; making decoded spectral subband flatter or noisier if judged not flat or noisy enough; and normalizing the energy level of each modified subband to keep the energy level almost unchanged.
As already described, the spectral sharpness parameter representing the spectral harmonic/noise sharpness of each subband is estimated by calculating the ratio of the average magnitude to the maximum magnitude, or the ratio of the average energy level to the maximum energy level. If a plurality of spectral sharpness parameters are estimated on a plurality of subbands, the spectral sharpness parameter estimated from the sharpest spectral subband can be chosen to represent the spectral sharpness of the plurality of subbands when the number of bits available to transmit the spectral sharpness information is limited. Each main sharpness control parameter for each decoded subband is formed by analyzing the difference between the corresponding transmitted spectral sharpness parameter(s) and the corresponding spectral sharpness parameter(s) measured from the decoded subbands. Each main sharpness control parameter for each decoded subband can be smoothed across the current subbands and/or between consecutive frames.
Making a decoded spectral subband sharper is realized by reducing the energy levels of frequency coefficients between harmonic peaks, increasing the energy levels of harmonic peaks, and/or reducing the noise component. Making decoded spectral subband flatter or noisier is realized by increasing the energy levels of frequency coefficients between harmonic peaks, reducing the energy levels of harmonic peaks, and/or increasing the noise component.
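One possible realization of these modifications is sketched below: coefficients whose magnitudes lie below the subband average are treated as lying "between the harmonic peaks" and are attenuated to sharpen the subband, while the peak coefficients are attenuated to flatten it. The peak test and the scaling strengths are assumptions; energy renormalization is left to the caller, as in the previous sketch.

```c
/* Sketch of one way to realize the sharpening/flattening described above.
 * The peak test against the average magnitude and the scaling strengths are
 * assumptions; energy renormalization is done by the caller. */
#include <math.h>

static float avg_magnitude(const float *coef, int len)
{
    float sum = 0.0f;
    for (int k = 0; k < len; k++) sum += fabsf(coef[k]);
    return sum / len;
}

void make_sharper(float *coef, int len, float amount)
{
    float avg = avg_magnitude(coef, len);
    float att = 1.0f / (1.0f + 4.0f * amount);   /* attenuation between the peaks */
    for (int k = 0; k < len; k++)
        if (fabsf(coef[k]) < avg)                /* "between the harmonic peaks"  */
            coef[k] *= att;
}

void make_flatter(float *coef, int len, float amount)
{
    float avg = avg_magnitude(coef, len);
    float att = 1.0f / (1.0f + 4.0f * amount);   /* attenuation of the peaks */
    for (int k = 0; k < len; k++)
        if (fabsf(coef[k]) >= avg)               /* reduce the harmonic peaks */
            coef[k] *= att;
}
```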
Additional embodiments will now be described.
If the decoded subbands already have reasonably good quality, the reference spectral sharpness information does not necessarily need to be transmitted from the encoder to the decoder. The spectral sharpness of the decoded subbands may still be improved by applying a post spectral sharpness control. The post spectral sharpness control is also based on the measured spectral sharpness parameter, as defined in equation (17), for each subband, instead of on a periodicity measure. The measured spectral sharpness parameter can be smoothed across the current subbands and/or between consecutive frames to form the main sharpness control parameter for each decoded subband. If the main sharpness control parameter indicates that a subband is a sharp subband, it can be made sharper in the way described in the previous paragraphs. In other words, the sharper the decoded subband already is, the sharper it is made. This idea is somewhat similar to the pitch post-processing concept used for the CELP codec in G.729.1, in which the decoded periodic signal is made more periodic.
From the above description, a method of controlling the spectral harmonic/noise sharpness of decoded subbands is provided. The method comprises the steps of estimating the spectral sharpness parameter of each decoded subband at the decoder side; forming the main sharpness control parameter for each decoded subband; analyzing the main sharpness control parameter for each decoded subband and making the decoded spectral subband sharper if it is determined to be not sharp enough; and normalizing the energy level of each modified subband to keep the energy level almost unchanged. Each main sharpness control parameter for each decoded subband is formed by smoothing the measured spectral sharpness parameters of the decoded subbands across the current subbands and/or between consecutive frames. A decoded subband showing a sharper spectrum is made sharper than other decoded subbands, based on a comparison of the main sharpness control parameters of the decoded subbands.
Spectral sharpness related embodiments will now be described.
In the above-described embodiments, the spectral sharpness is controlled by modifying the related subbands at the decoder side. It is known that a harmonic subband is perceptually more important than a noisy subband if they have similar energy levels. Perceptual quality can therefore be improved by allocating more bits to code harmonic subbands rather than noisy subbands. The spectral sharpness measure of a subband can help to tell whether the corresponding subband is harmonic-like or noise-like. This embodiment includes the following points:
From the above description, a method of influencing the bit allocation to different subbands is provided. The method comprises the steps of estimating the spectral sharpness parameter of each subband; comparing the values of the spectral sharpness parameters from the different subbands; and, according to this comparison, favoring the allocation of more bits or extra bits for coding a subband that shows a sharper spectrum over other subbands showing less sharp or flatter spectra. If the total bit budget is fixed and the sharper subbands get more bits, the flatter subbands must get fewer bits. The bit allocation to different subbands is usually based on the importance order of the related subbands, instead of relying only on the spectral energy level distribution. The importance order may be determined according to both the spectral sharpness distribution and the spectral energy level distribution of the related subbands.
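A C sketch of such a sharpness-aware bit allocation follows. The importance score combining energy and sharpness and the proportional split of the budget are illustrative assumptions; the description above only requires that, for a fixed budget, subbands with sharper spectra are favored over flatter ones.

```c
/* Sketch of a sharpness-aware bit allocation.  The score that combines
 * energy and sharpness and the proportional split are assumptions; sharper
 * subbands (smaller sharpness values) end up with larger shares. */
#include <math.h>

void allocate_bits(const float *energy, const float *sharpness,
                   int num_subbands, int total_bits, int *bits)
{
    float score[32];                        /* assumes num_subbands <= 32 */
    float score_sum = 0.0f;

    for (int b = 0; b < num_subbands; b++) {
        /* higher energy and lower (sharper) sharpness raise the importance */
        score[b] = logf(energy[b] + 1e-10f) - logf(sharpness[b] + 1e-10f);
        if (score[b] < 0.0f) score[b] = 0.0f;
        score_sum += score[b];
    }

    int used = 0;
    for (int b = 0; b < num_subbands; b++) {
        bits[b] = (score_sum > 0.0f)
                  ? (int)(total_bits * score[b] / score_sum)
                  : total_bits / num_subbands;
        used += bits[b];
    }
    bits[0] += total_bits - used;           /* assign rounding leftovers somewhere */
}
```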
Audio access device 6 uses microphone 12 to convert sound, such as music or a person's voice, into analog audio input signal 28. Microphone interface 16 converts analog audio input signal 28 into digital audio signal 32 for input into encoder 22 of CODEC 20. Encoder 22 produces encoded audio signal TX for transmission to network 36 via network interface 26 according to embodiments of the present invention. Decoder 24 within CODEC 20 receives encoded audio signal RX from network 36 via network interface 26, and converts encoded audio signal RX into digital audio signal 34. Speaker interface 18 converts digital audio signal 34 into audio signal 30 suitable for driving loudspeaker 14.
In embodiments of the present invention, where audio access device 6 is a VOIP device, some or all of the components within audio access device 6 are implemented within a handset. In some embodiments, however, microphone 12 and loudspeaker 14 are separate units, and microphone interface 16, speaker interface 18, CODEC 20 and network interface 26 are implemented within a personal computer. CODEC 20 can be implemented in either software running on a computer or a dedicated processor, or by dedicated hardware, for example, on an application specific integrated circuit (ASIC). Microphone interface 16 is implemented by an analog-to-digital (A/D) converter, as well as other interface circuitry located within the handset and/or within the computer. Likewise, speaker interface 18 is implemented by a digital-to-analog converter and other interface circuitry located within the handset and/or within the computer. In further embodiments, audio access device 6 can be implemented and partitioned in other ways known in the art.
In embodiments of the present invention where audio access device 6 is a cellular or mobile telephone, the elements within audio access device 6 are implemented within a cellular handset. CODEC 20 is implemented by software running on a processor within the handset or by dedicated hardware. In further embodiments of the present invention, audio access device may be implemented in other devices such as peer-to-peer wireline and wireless digital communication systems, such as intercoms, and radio handsets. In applications such as consumer audio devices, audio access device may contain a CODEC with only encoder 22 or decoder 24, for example, in a digital microphone system or music playback device. In other embodiments of the present invention, CODEC 20 can be used without microphone 12 and speaker 14, for example, in cellular base stations that access the PSTN.
The above description contains specific information pertaining to the spectral sharpness control. However, one skilled in the art will recognize that the present invention may be practiced in conjunction with various encoding/decoding algorithms different from those specifically discussed in the present application. Moreover, some of the specific details, which are within the knowledge of a person of ordinary skill in the art, are not discussed to avoid obscuring the present invention.
The drawings in the present application and their accompanying detailed description are directed to merely example embodiments of the invention. To maintain brevity, other embodiments of the invention which use the principles of the present invention are not specifically described in the present application and are not specifically illustrated by the present drawings.
While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments.
This patent application claims priority to U.S. Provisional Application No. 61/094,883, filed on Sep. 6, 2008, and entitled “Spectrum Harmonic/Noise Sharpness Control,” which application is incorporated herein by reference.