The present invention relates to the field of the coding/decoding and the processing of audio frequency signals (such as speech, music or other such signals) for their transmission or their storage.
More particularly, the invention relates to a frequency band extension method and device in a decoder or a processor producing an audio frequency signal enhancement.
Numerous techniques exist for compressing (with loss) an audio frequency signal such as speech or music.
The conventional coding methods for conversational applications are generally classified as waveform coding (PCM for “Pulse Code Modulation”, ADPCM for “Adaptive Differential Pulse Code Modulation”, transform coding, etc.), parametric coding (LPC for “Linear Predictive Coding”, sinusoidal coding, etc.) and parametric hybrid coding with a quantization of the parameters by “analysis by synthesis”, of which CELP (“Code Excited Linear Prediction”) coding is the best known example.
For non-conversational applications, the prior art for (mono) audio signal coding consists of perceptual coding by transform or in sub-bands, with a parametric coding of the high frequencies by band replication (SBR for Spectral Band Replication).
A review of the conventional speech and audio coding methods can be found in the works by W. B. Kleijn and K. K. Paliwal (eds.), Speech Coding and Synthesis, Elsevier, 1995; M. Bosi, R. E. Goldberg, Introduction to Digital Audio Coding and Standards, Springer 2002; J. Benesty, M. M. Sondhi, Y. Huang (eds.), Handbook of Speech Processing, Springer 2008.
The focus here is more particularly on the 3GPP standardized AMR-WB (“Adaptive Multi-Rate Wideband”) codec (coder and decoder), which operates at an input/output frequency of 16 kHz and in which the signal is divided into two sub-bands, the low band (0-6.4 kHz) which is sampled at 12.8 kHz and coded by CELP model and the high band (6.4-7 kHz) which is reconstructed parametrically by “band extension” (or BWE, for “Bandwidth Extension”) with or without additional information depending on the mode of the current frame. It can be noted here that the limitation of the coded band of the AMR-WB codec at 7 kHz is essentially linked to the fact that the frequency response in transmission of the wideband terminals was approximated at the time of standardization (ETSI/3GPP then ITU-T) according to the frequency mask defined in the standard ITU-T P.341 and more specifically by using a so-called “P341” filter defined in the standard ITU-T G.191 which cuts the frequencies above 7 kHz (this filter observes the mask defined in P.341). However, in theory, it is well known that a signal sampled at 16 kHz can have a defined audio band from 0 to 8000 Hz; the AMR-WB codec therefore introduces a limitation of the high band by comparison with the theoretical bandwidth of 8 kHz.
The 3GPP AMR-WB speech codec was standardized in 2001 mainly for circuit-switched (CS) telephony applications on GSM (2G) and UMTS (3G). This same codec was also standardized in 2003 by the ITU-T in the form of recommendation G.722.2 “Wideband coding of speech at around 16 kbit/s using Adaptive Multi-Rate Wideband (AMR-WB)”.
It comprises nine bit rates, called modes, from 6.6 to 23.85 kbit/s, and comprises discontinuous transmission mechanisms (DTX, for “Discontinuous Transmission”) with voice activity detection (VAD) and comfort noise generation (CNG) from silence description frames (SID, for “Silence Insertion Descriptor”), and lost frame correction mechanisms (FEC for “Frame Erasure Concealment”, sometimes called PLC, for “Packet Loss Concealment”).
The details of the AMR-WB coding and decoding algorithm are not repeated here;
a detailed description of this codec can be found in the 3GPP specifications (TS 26.190, 26.191, 26.192, 26.193, 26.194, 26.204) and in ITU-T-G.722.2 (and the corresponding annexes and appendix) and in the article by B. Bessette et al. entitled “The adaptive multirate wideband speech codec (AMR-WB)”, IEEE Transactions on Speech and Audio Processing, vol. 10, no. 8, 2002, pp. 620-636 and the source codes of the associated 3GPP and ITU-T standards.
The principle of band extension in the AMR-WB codec is fairly rudimentary. Indeed, the high band (6.4-7 kHz) is generated by shaping a white noise through a time (applied in the form of gains per sub-frame) and frequency (by the application of a linear prediction synthesis filter or LPC, for “Linear Predictive Coding”) envelope. This band extension technique is illustrated in
A white noise uHB1(n), n = 0, …, 79, is generated at 16 kHz for each 5 ms sub-frame by a linear congruential generator (block 100). This noise uHB1(n) is shaped in time by the application of gains for each sub-frame; this operation is broken down into two processing steps (blocks 102, 106 or 109):
ĝHB = wSP gSP + (1 − wSP) gBG
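By way of illustration, the noise generation and sub-frame gain shaping described above can be sketched as follows. The function name, the linear congruential generator constants and the sample values are illustrative only and are not those of the AMR-WB reference code:

```python
import numpy as np

def shaped_noise_subframe(g_sp, g_bg, w_sp, n=80, seed=12345):
    """Illustrative sketch: white noise for one 5 ms sub-frame (80 samples
    at 16 kHz) generated by a linear congruential generator, then scaled by
    the gain gHB = wSP*gSP + (1 - wSP)*gBG of the equation above.
    LCG constants are illustrative, not those of the reference code."""
    state = seed
    noise = np.empty(n)
    for i in range(n):
        # linear congruential generator, values mapped to [-1, 1)
        state = (1664525 * state + 1013904223) % (1 << 32)
        noise[i] = state / (1 << 31) - 1.0
    g_hb = w_sp * g_sp + (1.0 - w_sp) * g_bg  # estimated sub-frame gain
    return g_hb * noise
```

With wSP = 1 the shaping reduces to gSP alone, and doubling the gain doubles the output, which shows that the time envelope is applied purely multiplicatively per sub-frame.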
At 23.85 kbit/s, a correction information item is transmitted by the AMR-WB coder and decoded (blocks 107, 108) in order to refine the gain estimated for each sub-frame (4 bits every 5 ms, or 0.8 kbit/s).
The artificial excitation uHB(n) is thereafter filtered (block 111) by an LPC synthesis filter with transfer function 1/AHB(z) and operating at the sampling frequency of 16 kHz. The construction of this filter depends on the bit rate of the current frame:
1/AHB(z)=1/Âest(z/γ)
1/AHB(z)=1/Â(z/γ)
A number of drawbacks in the band extension technique of the AMR-WB codec can be identified:
The AMR-WB decoding algorithm has been improved partly with the development of the scalable ITU-T G.718 codec which was standardized in 2008.
The ITU-T G.718 standard comprises a so-called interoperable mode, for which the core coding is compatible with the G.722.2 (AMR-WB) coding at 12.65 kbit/s; furthermore, the G.718 decoder has the particular feature of being able to decode an AMR-WB/G.722.2 bit stream at all the possible bit rates of the AMR-WB codec (from 6.6 to 23.85 kbit/s).
The G.718 interoperable decoder in low delay mode (G.718-LD) is illustrated in
However, the band extension in the AMR-WB and/or G.718 (interoperable mode) codecs is still limited on a number of aspects.
In particular, the synthesis of high frequencies by shaped white noise (by a temporal approach of LPC source-filter type) is a very limited model of the signal in the band of the frequencies higher than 6.4 kHz.
Only the 6.4-7 kHz band is re-synthesized artificially, whereas in practice a wider band (up to 8 kHz) is theoretically possible at the sampling frequency of 16 kHz, which can potentially enhance the quality of the signals, if they are not pre-processed by a filter of P.341 type (50-7000 Hz) as defined in the Software Tool Library (standard G.191) of the ITU-T.
A need therefore exists to improve the band extension in a codec of AMR-WB type or an interoperable version of this codec or more generally to improve the band extension of an audio signal, in particular so as to improve the frequency content of the band extension.
An exemplary embodiment of the present disclosure relates to a method for extending the frequency band of an audio frequency signal during a decoding or improvement process, comprising a step of obtaining the signal decoded in a first frequency band termed the low band. The method is such that it comprises the following steps:
It will be noted that subsequently “band extension” will be taken in the broad sense and will include not only the case of the extension of a sub-band at high frequencies but also the case of a replacement of sub-bands that are set to zero (of “noise filling” type in transform coding).
Thus, at one and the same time by taking into account tonal components and an ambience signal extracted from the signal arising from the decoding of the low band, it is possible to perform the band extension with a signal model suited to the true nature of the signal in contradistinction to the use of artificial noise. The quality of the band extension is thus improved and in particular for certain types of signals such as music signals.
Indeed, the signal decoded in the low band comprises a part corresponding to the sound ambience which can be transposed into high frequency in such a way that a mixing of the harmonic components and of the existing ambience makes it possible to ensure a coherent reconstructed high band.
It will be noted that, even if the invention is motivated by the enhancement of the quality of the band extension in the context of the interoperable AMR-WB coding, the different embodiments apply to the more general case of the band extension of an audio signal, particularly in an enhancement device performing an analysis of the audio signal to extract the parameters necessary for the band extension.
The different particular embodiments mentioned below can be added independently or in combination with one another to the steps of the extension method defined above.
In one embodiment, the band extension is performed in the domain of the excitation and the decoded low band signal is a low band decoded excitation signal.
The advantage of this embodiment is that a transformation without windowing (or, equivalently, with an implicit rectangular window of the length of the frame) is possible in the domain of the excitation. In this case, no artifacts (block effects) are audible.
In a first embodiment, the extraction of the tonal components and of the ambience signal is performed according to the following steps:
This embodiment allows precise detection of the tonal components.
In a second embodiment, of low complexity, the extraction of the tonal components and of the ambience signal is performed according to the following steps:
In one embodiment of the combining step, a control factor for the energy level used for the adaptive mixing is computed as a function of the total energy of the decoded or decoded and extended low band signal and of the tonal components.
The application of this control factor allows the combining step to adapt to the characteristics of the signal so as to optimize the relative proportion of ambience signal in the mixture. The energy level is thus controlled so as to avoid audible artifacts.
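A minimal sketch of such an adaptive mixing is given below. The exact form of the control factor is not specified at this point of the description; the formula used here (a factor derived from the ratio of tonal energy to total energy, followed by a renormalization to the total input energy) is an assumption made purely for illustration:

```python
import numpy as np

def adaptive_mix(tonal, ambience, alpha=0.5):
    """Hypothetical illustration of the combining step: the ambience is
    weighted by a control factor derived from the tonal/total energy
    ratio, and the mixture is renormalized to the total input energy.
    The factor and alpha are illustrative assumptions, not the codec's."""
    e_tonal = np.sum(tonal ** 2)
    e_total = e_tonal + np.sum(ambience ** 2)
    # more ambience is injected when the signal is less tonal
    gamma = alpha * (1.0 - e_tonal / max(e_total, 1e-12))
    mixed = tonal + gamma * ambience
    # energy level control: keep the total energy of the inputs
    e_mix = np.sum(mixed ** 2)
    return mixed * np.sqrt(e_total / max(e_mix, 1e-12))
```

The renormalization step is what keeps the energy level controlled so that the relative proportion of ambience can vary without producing audible level jumps.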
In a preferred embodiment, the decoded low band signal undergoes a step of transform or filter bank-based sub-band decomposition, the extracting and combining steps then being performed in the frequency or sub-band domain.
The implementation of the band extension in the frequency domain makes it possible to obtain a fineness of frequency analysis which is not available with a temporal approach, and makes it possible also to have a frequency resolution that is sufficient to detect the tonal components.
In a detailed embodiment, the decoded and extended low band signal is obtained according to the following equation:
with k the index of the sample, U(k) the spectrum of the signal obtained after a transform step, UHB1(k) the spectrum of the extended signal, and start_band a predefined variable.
Thus, this function comprises a resampling of the signal by adding samples to the spectrum of this signal. Other ways of extending the signal are however possible, for example by frequency translation in a sub-band processing.
The present invention also envisages a device for extending the frequency band of an audio frequency signal, the signal having been decoded in a first frequency band termed the low band. The device is such that it comprises:
a module for extracting tonal components and an ambience signal on the basis of a signal arising from the decoded low band signal;
a module for combining the tonal components and the ambience signal by adaptive mixing using energy level control factors to obtain an audio signal, termed the combined signal;
a module for extending onto at least one second frequency band higher than the first frequency band and implemented on the low band decoded signal before the extraction module or on the combined signal after the combining module.
This device exhibits the same advantages as the method described previously, which it implements.
The invention targets a decoder comprising a device as described.
It targets a computer program comprising code instructions for the implementation of the steps of the band extension method as described, when these instructions are executed by a processor.
Finally, the invention relates to a processor-readable storage medium, which may or may not be incorporated in the band extension device and may be removable, storing a computer program implementing a band extension method as described previously.
Other features and advantages of the invention will become more clearly apparent on reading the following description, given purely as a non-limiting example and with reference to the attached drawings, in which:
Unlike the AMR-WB decoding which operates with an output sampling frequency of 16 kHz and the G.718 decoder which operates at 8 or 16 kHz, a decoder is considered here which can operate with an output (synthesis) signal at the frequency fs=8, 16, 32 or 48 kHz. Note that it is assumed here that the coding has been performed according to the AMR-WB algorithm with an internal frequency of 12.8 kHz for the low band CELP coding and at 23.85 kbit/s a sub-frame gain coding at the frequency of 16 kHz, but interoperable variants of the AMR-WB coder are also possible; although the invention is described here at the decoding level, it is assumed here that the coding can also operate with an input signal at the frequency fs=8, 16, 32 or 48 kHz and appropriate resampling operations, outside the scope of the invention, are implemented on coding as a function of the value of fs. It may be noted that when fs=8 kHz at the decoder, in the case of a decoding that is compatible with AMR-WB, it is not necessary to extend the 0-6.4 kHz low band, since the reconstructed audio band at the frequency fs is limited to 0-4000 Hz.
In
The decoding according to
u′(n) = ĝp v(n) + ĝc c(n), n = 0, …, 63
In variants which can be implemented for the invention, the post-processing operations applied to the excitation can be modified (for example, the phase dispersion can be enhanced) or these post-processing operations can be extended (for example, a reduction of the cross-harmonics noise can be implemented), without affecting the nature of the band extension. We do not describe here the case of the decoding of the low band when the current frame is lost (bfi=1) which is informative in the 3GPP AMR-WB standard; in general, whether dealing with the AMR-WB decoder or a general decoder relying on the source-filter model, one is typically involved with best estimating the LPC excitation and the coefficients of the LPC synthesis filter so as to reconstruct the lost signal while retaining the source-filter model. When bfi=1 it is considered here that the band extension (block 309) can operate as in the case bfi=0 and a bitrate<23.85 kbit/s; thus, the description of the invention will subsequently assume, without loss of generality, that bfi=0.
It can be noted that the use of blocks 306, 308, 314 is optional.
It will also be noted that the decoding of the low band described above assumes a so-called “active” current frame with a bit rate between 6.6 and 23.85 kbit/s. In fact, when the DTX mode is activated, certain frames can be coded as “inactive” and in this case it is possible to either transmit a silence descriptor (on 35 bits) or transmit nothing. In particular, it is recalled that the SID frame of the AMR-WB coder describes several parameters: ISF parameters averaged over 8 frames, mean energy over 8 frames, “dithering flag” for the reconstruction of non-stationary noise. In all cases, in the decoder, there is the same decoding model as for an active frame, with a reconstruction of the excitation and of an LPC filter for the current frame, which makes it possible to apply the invention even to inactive frames. The same observation applies for the decoding of “lost frames” (or FEC, PLC) in which the LPC model is applied.
This exemplary decoder operates in the domain of the excitation and therefore comprises a step of decoding the low band excitation signal. The band extension device and the band extension method within the meaning of the invention can, however, also operate in a domain other than that of the excitation, and in particular on the low band decoded signal directly or on a signal weighted by a perceptual filter.
Unlike the AMR-WB or G.718 decoding, the decoder described makes it possible to extend the decoded low band (50-6400 Hz taking into account the 50 Hz high-pass filtering on the decoder, 0-6400 Hz in the general case) to an extended band, the width of which varies, ranging approximately from 50-6900 Hz to 50-7700 Hz depending on the mode implemented in the current frame. It is thus possible to refer to a first frequency band of 0 to 6400 Hz and to a second frequency band of 6400 to 8000 Hz. In reality, in the favored embodiment, the excitation for the high frequencies is generated in the frequency domain in a band from 5000 to 8000 Hz, to allow a band-pass filtering from 6000 to 6900 or 7700 Hz whose slope is not too steep in the rejected upper band.
The high-band synthesis part is produced in the block 309 representing the band extension device according to the invention and which is detailed in
In order to align the decoded low and high bands, a delay (block 310) is introduced to synchronize the outputs of the blocks 306 and 309 and the high band synthesized at 16 kHz is resampled from 16 kHz to the frequency fs (output of block 311). The value of the delay T will have to be adapted for the other cases (fs=32, 48 kHz) as a function of the processing operations implemented. It will be recalled that when fs=8 kHz, it is not necessary to apply the blocks 309 to 311 because the band of the signal at the output of the decoder is limited to 0-4000 Hz.
It will be noted that the extension method of the invention implemented in the block 309 according to the first embodiment preferentially does not introduce any additional delay relative to the low band reconstructed at 12.8 kHz; however, in variants of the invention (for example by using a time/frequency transformation with overlap), a delay will be able to be introduced. Thus, generally, the value of T in the block 310 will have to be adjusted according to the specific implementation. For example, in the case where the post-processing of the low frequencies (block 306) is not used, the delay to be introduced for fs=16 kHz may be fixed at T=15.
The low and high bands are then combined (added) in the block 312 and the synthesis obtained is post-processed by 50 Hz high-pass filtering (of IIR type) of order 2, the coefficients of which depend on the frequency fs (block 313) and output post-processing with optional application of the “noise gate” in a manner similar to G.718 (block 314).
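The 50 Hz high-pass post-filtering of block 313 can be sketched as follows. A second-order Butterworth design is used here purely for illustration; the standard specifies its own fixed IIR coefficients for each output frequency fs:

```python
import numpy as np
from scipy.signal import butter, lfilter

def highpass_50hz(x, fs=16000):
    """Sketch of the output 50 Hz high-pass post-filter (order-2 IIR whose
    coefficients depend on fs). Butterworth coefficients are an
    illustrative assumption, not those of the standard."""
    b, a = butter(2, 50.0 / (fs / 2.0), btype="highpass")
    return lfilter(b, a, x)
```

Applied to a constant (DC) input, the filter output decays towards zero, which is the intended behavior of removing sub-50 Hz content from the combined synthesis.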
The band extension device according to the invention, illustrated by the block 309 according to the embodiment of the decoder of
This extension device can also be independent of the decoder and can implement the method described in
This device receives as input a signal decoded in a first frequency band termed the low band u(n) which can be in the domain of the excitation or in that of the signal. In the embodiment described here, a step of sub-band decomposition (E401b) by time frequency transform or filter bank is applied to the low band decoded signal to obtain the spectrum of the low band decoded signal U(k) for an implementation in the frequency domain.
A step E401a of extending the low band decoded signal in a second frequency band higher than the first frequency band, so as to obtain an extended low band decoded signal UHB1(k), can be performed on this low band decoded signal before or after the analysis step (decomposition into sub-bands). This extension step can comprise at one and the same time a resampling step and an extension step or simply a step of frequency translation or transposition as a function of the signal obtained at input. It will be noted that in variants, step E401a will be able to be performed at the end of the processing described in
This step is detailed subsequently in the embodiment described with reference to
A step E402 of extracting an ambience signal (UHBA(k)) and tonal components (y(k)) is performed on the basis of the decoded low band signal (U(k)) or decoded and extended low band signal (UHB1(k)). The ambience is defined here as the residual signal which is obtained by deleting the main (or dominant) harmonics (or tonal components) from the existing signal.
In most broadband signals (sampled at 16 kHz), the high band (>6 kHz) contains ambience information which is in general similar to that present in the low band.
The step of extracting the tonal components and the ambience signal comprises for example the following steps:
detection of the dominant tonal components of the decoded (or decoded and extended) low band signal, in the frequency domain; and
computation of a residual signal by extraction of the dominant tonal components to obtain the ambience signal.
This step can also be obtained by:
obtaining of the ambience signal by computing a mean of the decoded (or decoded and extended) low band signal; and
obtaining of the tonal components by subtracting the computed ambience signal, from the decoded or decoded and extended low band signal.
The tonal components and the ambience signal are thereafter combined in an adaptive manner with the aid of energy level control factors in step E403 to obtain a so-called combined signal (UHB2(k)). The extension step E401a can then be implemented if it has not already been performed on the decoded low band signal.
Thus, the combining of these two types of signals makes it possible to obtain a combined signal with characteristics that are more suitable for certain types of signals such as musical signals and richer in frequency content and in the extended frequency band corresponding to the whole frequency band including the first and the second frequency band.
The band extension according to the method improves the quality for signals of this type with respect to the extension described in the AMR-WB standard.
Using a combination of ambience signal and of tonal components makes it possible to enrich this extension signal so as to render it closer to the characteristics of the true signal and not of an artificial signal.
This combining step will be detailed subsequently with reference to
A synthesis step, which corresponds to the analysis at E401b, is performed at E404b to restore the signal to the time domain.
In an optional manner, a step of energy level adjustment of the high band signal can be performed at E404a, before and/or after the synthesis step, by applying a gain and/or by appropriate filtering. This step will be explained in greater detail in the embodiment described in
In an exemplary embodiment, the band extension device 500 is now described with reference to
Thus, the processing block 510 receives a decoded low band signal (u(n)). In a particular embodiment, the band extension uses the decoded excitation at 12.8 kHz (exc2 or u(n)) as output by the block 302 of
This signal is decomposed into frequency sub-bands by the sub-band decomposition module 510 (which implements step E401b of
In a particular embodiment, a transform of DCT-IV (for “Discrete Cosine Transform”—type IV) type (block 510) is applied to the current frame of 20 ms (256 samples), without windowing, which amounts to directly transforming u(n) with n = 0, …, 255 according to the following formula:

U(k) = √(2/N) Σn=0, …, N−1 u(n) cos[(π/N)(n + 1/2)(k + 1/2)]

in which N=256 and k = 0, …, 255
A transformation without windowing (or equivalently with an implicit rectangular window of the length of the frame) is possible when the processing is performed in the excitation domain, and not the signal domain. In this case no artifact (block effects) is audible, thereby constituting a significant advantage of this embodiment of the invention.
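The DCT-IV analysis and its exact inverse can be sketched as follows. The reference implementation uses the FFT-based “Evolved DCT” mentioned below; scipy's `dct(type=4)` computes the same transform up to a normalization convention, and is used here only as an illustration:

```python
import numpy as np
from scipy.fft import dct, idct

# Sketch: DCT-IV analysis of one 20 ms frame (256 samples at 12.8 kHz)
# without windowing (implicit rectangular window), and its inverse.
# The frame content is a stand-in for the decoded excitation exc2.
u = np.random.default_rng(0).standard_normal(256)
U = dct(u, type=4, norm="ortho")        # spectrum U(k), k = 0..255
u_back = idct(U, type=4, norm="ortho")  # exact reconstruction
```

With the orthonormal convention, forward and inverse DCT-IV are identical transforms and reconstruction is exact, which is why no windowing or overlap-add is needed in the excitation domain.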
In this embodiment, the DCT-IV transformation is implemented by FFT according to the so-called “Evolved DCT (EDCT)” algorithm described in the article by D. M. Zhang, H. T. Li, A Low Complexity Transform—Evolved DCT, IEEE 14th International Conference on Computational Science and Engineering (CSE), August 2011, pp. 144-149, and implemented in the standards ITU-T G.718 Annex B and G.729.1 Annex E.
In variants of the invention, and without loss of generality, the DCT-IV transformation will be able to be replaced by other short-term time-frequency transformations of the same length and in the excitation domain or in the signal domain, such as an FFT (for “Fast Fourier Transform”) or a DCT-II (Discrete Cosine Transform-type II). Alternatively, it will be possible to replace the DCT-IV on the frame by a transformation with overlap-addition and windowing of length greater than the length of the current frame, for example by using an MDCT (for “Modified Discrete Cosine Transform”). In this case, the delay T in the block 310 of
In another embodiment, the sub-band decomposition is performed by applying a real or complex filter bank, for example of PQMF (Pseudo-QMF) type. For certain filter banks, for each sub-band in a given frame, one obtains not a spectral value but a series of temporal values associated with the sub-band; in this case, the embodiment favored in the invention can be applied by carrying out for example a transform of each sub-band and by computing the ambience signal in the domain of the absolute values, the tonal components still being obtained by differencing between the signal (in absolute value) and the ambience signal. In the case of a complex filter bank, the complex modulus of the samples will replace the absolute value.
In other embodiments, the invention will be applied in a system using two sub-bands, the low band being analyzed by transform or by filter bank.
In the case of a DCT, the DCT spectrum, U(k), of 256 samples covering the band 0-6400 Hz (at 12.8 kHz), is thereafter extended (block 511) into a spectrum of 320 samples covering the band 0-8000 Hz (at 16 kHz) in the following form:

UHB1(k) = 0 for k = 0, …, 199

UHB1(k) = U(k) for k = 200, …, 239

UHB1(k) = U(k − 240 + start_band) for k = 240, …, 319

in which it is preferentially taken that start_band=160.
The block 511 implements step E401a of
In the frequency band corresponding to the samples ranging from indices 200 to 239, the original spectrum is retained, to be able to apply thereto a progressive attenuation response of the high-pass filter in this frequency band and also to not introduce audible defects in the step of addition of the low-frequency synthesis to the high-frequency synthesis.
It will be noted that, in this embodiment, the generation of the oversampled and extended spectrum is performed in a frequency band ranging from 5 to 8 kHz therefore including a second frequency band (6.4-8 kHz) above the first frequency band (0-6.4 kHz).
Thus, the extension of the decoded low band signal is performed at least on the second frequency band but also on a part of the first frequency band.
Obviously, the values defining these frequency bands can be different depending on the decoder or the processing device in which the invention is applied.
Furthermore, the block 511 performs an implicit high-pass filtering in the 0-5000 Hz band since the first 200 samples of UHB1(k) are set to zero; as explained later, this high-pass filtering may also be complemented by a progressive attenuation of the spectral values of indices k = 200, …, 255 in the 5000-6400 Hz band; this progressive attenuation is implemented in the block 501 but could be performed separately outside of the block 501. Equivalently, and in variants of the invention, the high-pass filtering (coefficients of index k = 0, …, 199 set to zero, coefficients of index k = 200, …, 255 attenuated in the transformed domain) will be able to be performed in a single step.
In this exemplary embodiment and according to the definition of UHB1(k), it will be noted that the 5000-6000 Hz band of UHB1(k) (which corresponds to the indices k = 200, …, 239) is copied from the 5000-6000 Hz band of U(k). This approach makes it possible to retain the original spectrum in this band and avoids introducing distortions in the 5000-6000 Hz band upon the addition of the HF synthesis with the LF synthesis—in particular the phase of the signal (implicitly represented in the DCT-IV domain) in this band is preserved.
The 6000-8000 Hz band of UHB1(k) is here defined by copying the 4000-6000 Hz band of U(k) since the value of start_band is preferentially set at 160.
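The extension performed by block 511 can be sketched as follows (the function name is illustrative; the index ranges follow the description above):

```python
import numpy as np

def extend_spectrum(U, start_band=160):
    """Sketch of block 511: extend a 256-sample DCT spectrum (0-6400 Hz at
    12.8 kHz) into 320 samples (0-8000 Hz at 16 kHz).
    - indices 0-199 (0-5000 Hz): set to zero (implicit high-pass)
    - indices 200-239 (5000-6000 Hz): original spectrum retained
    - indices 240-319 (6000-8000 Hz): copied from U(start_band) onward,
      i.e. from the 4000-6000 Hz band when start_band = 160."""
    U_hb1 = np.zeros(320)
    U_hb1[200:240] = U[200:240]
    U_hb1[240:320] = U[start_band:start_band + 80]
    return U_hb1
```

With start_band = 160 the copied region is exactly the 4000-6000 Hz band of U(k), as stated above; an adaptive start_band simply shifts the source region of the copy.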
In a variant of the embodiment, the value of start_band will be able to be made adaptive around the value of 160, without modifying the nature of the invention. The details of the adaptation of the start_band value are not described here because they go beyond the framework of the invention without changing its scope.
In most broadband signals (sampled at 16 kHz), the high band (>6 kHz) contains ambience information which is naturally similar to that present in the low band. The ambience is defined here as the residual signal which is obtained by deleting the main (or dominant) harmonics from the existing signal. The harmonicity level in the 6000-8000 Hz band is generally correlated with that of the lower frequency bands.
This decoded and extended low band signal is provided as input to the extension device 500 and in particular as input to the module 512. Thus the block 512 for extracting tonal components and an ambience signal implements step E402 of
In a particular embodiment, the extraction of the tonal components and of the ambience signal (in the 6000-8000 Hz band) is performed according to the following operations:
where ε=0.1 (this value may be different, it is fixed here by way of example).
For i = 0, …, L−1, this mean level is obtained through the following equation:

lev(i) = (1/(fn(i) − fb(i) + 1)) Σj=fb(i), …, fn(i) |UHB1(j + 240)|
This corresponds to the mean level (in absolute value) and therefore represents a sort of envelope of the spectrum. In this embodiment, L=80 represents the length of the spectrum, and the index i from 0 to L−1 corresponds to the indices i+240 from 240 to 319, i.e. the spectrum from 6 to 8 kHz.
In general, fb(i) = i−7 and fn(i) = i+7; however, the first and last 7 indices (i = 0, …, 6 and i = L−7, …, L−1) require special processing and, without loss of generality, we then define:

fb(i) = 0 and fn(i) = i+7 for i = 0, …, 6

fb(i) = i−7 and fn(i) = L−1 for i = L−7, …, L−1
In variants of the invention, the mean of |UHB1(j+240)|, j = fb(i), …, fn(i), may be replaced with a median value over the same set of values, i.e. lev(i) = medianj=fb(i), …, fn(i)(|UHB1(j+240)|). This variant has the defect of being more complex (in terms of number of computations) than a sliding mean. In other variants, a non-uniform weighting may be applied to the averaged terms, or the median filtering may be replaced for example with other nonlinear filters of “stack filters” type.
The residual signal is also computed:
y(i) = |UHB1(i+240)| − lev(i), i = 0, …, L−1
which corresponds (approximately) to the tonal components if the value y(i) at a given spectral line i is positive (y(i)>0).
This computation therefore involves an implicit detection of the tonal components: the tonal parts are detected with the aid of the intermediate term y(i), which represents the deviation from an adaptive threshold, the detection condition being y(i) > 0. In variants of the invention, this condition may be changed, for example by defining an adaptive threshold dependent on the local envelope of the signal, or in the form y(i) > lev(i) + x dB where x has a predefined value (for example x = 10 dB).
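The sliding-mean envelope lev(i) and the residual y(i) described above can be sketched as follows, with the edge handling of fb(i) and fn(i) given earlier:

```python
import numpy as np

def ambience_and_tonal(U_hb1, L=80):
    """Sketch of the tonal/ambience separation on the 6-8 kHz part of the
    extended spectrum (indices 240..240+L-1): sliding mean of the amplitude
    spectrum over a window of 7 bins on each side (clipped at the edges),
    then the residual y(i) whose positive values flag tonal components."""
    mag = np.abs(U_hb1[240:240 + L])
    lev = np.empty(L)
    for i in range(L):
        fb = max(i - 7, 0)          # fb(i) = 0 for i = 0..6
        fn = min(i + 7, L - 1)      # fn(i) = L-1 for i = L-7..L-1
        lev[i] = mag[fb:fn + 1].mean()
    y = mag - lev                   # y(i) > 0 -> tonal component at line i
    return lev, y
```

On a flat spectrum the residual is identically zero, confirming that y(i) measures only the deviation of a spectral line from its local mean level.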
The energy of the dominant tonal parts is defined by the following equation:
Other schemes for extracting the ambience signal can of course be envisaged. For example, this ambience signal can be extracted from a low-frequency signal or optionally another frequency band (or several frequency bands).
The detection of the tonal spikes or components may be done differently.
The extraction of this ambience signal could also be done on the decoded but not extended excitation, that is to say before the spectral extension or translation step, that is to say for example on a portion of the low-frequency signal rather than directly on the high-frequency signal.
In a variant embodiment, the extraction of the tonal components and of the ambience signal is performed in a different order and according to the following steps:
This variant can for example be carried out in the following manner: A spike (or tonal component) is detected at a spectral line of index i in the spectrum of amplitude |UHB1(i+240)| if the following criterion is satisfied:
|UHB1(i+240)|>|UHB1(i+240−1)| and |UHB1(i+240)|>|UHB1(i+240+1)|,
for i=0, …, L−1. As soon as a spike is detected at the spectral line of index i, a sinusoidal model is applied so as to estimate the amplitude, frequency and optionally phase parameters of a tonal component associated with this spike. The details of this estimation are not presented here, but the frequency estimation can typically call upon a parabolic interpolation over 3 points so as to locate the maximum of the parabola approximating the 3 amplitude values |UHB1(i+240)| (expressed in dB), the amplitude estimate being obtained by way of this same interpolation. As the transform domain used here (DCT-IV) does not make it possible to obtain the phase directly, it is possible, in one embodiment, to neglect this term, but in variants a quadrature transform of DST type may be applied to estimate a phase term. The initial value of y(i) is set to zero for i=0, …, L−1. Once the sinusoidal parameters (frequency, amplitude and optionally phase) of each tonal component have been estimated, the term y(i) is computed as the sum of predefined prototypes (spectra) of pure sinusoids transformed into the DCT-IV domain (or another domain if some other sub-band decomposition is used) according to the estimated sinusoidal parameters. Finally, an absolute value is applied to the terms y(i) so that they lie in the domain of the amplitude spectrum.
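The spike test and the 3-point parabolic interpolation in dB can be sketched as follows (illustrative Python; the phase term is neglected here, as the DCT-IV embodiment permits):

```python
def detect_tonal_peaks(mag_db):
    """Spike detection |U(i)| > |U(i-1)| and |U(i)| > |U(i+1)| on the
    dB amplitude spectrum, refined by 3-point parabolic interpolation
    to estimate the fractional peak position and its amplitude."""
    peaks = []
    for i in range(1, len(mag_db) - 1):
        a, b, c = mag_db[i - 1], mag_db[i], mag_db[i + 1]
        if b > a and b > c:
            p = 0.5 * (a - c) / (a - 2.0 * b + c)  # offset of the parabola maximum
            amp = b - 0.25 * (a - c) * p           # interpolated amplitude (dB)
            peaks.append((i + p, amp))
    return peaks
```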
Other schemes for determining the tonal components are possible; for example, it would also be possible to compute an envelope env(i) of the signal by spline interpolation of the local maximum values (detected spikes) of |UHB1(i+240)|, to lower this envelope by a certain level in dB, to detect the tonal components as the spikes which exceed this lowered envelope, and to define y(i) as
y(i)=max(|UHB1(i+240)|−env(i),0)
In this variant the ambience is therefore obtained through the equation:
lev(i)=|UHB1(i+240)|−y(i), i=0, …, L−1
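This envelope-based variant can be sketched as follows (illustrative Python; plain linear interpolation stands in for the spline of the text, and the 10 dB drop is an example value):

```python
import numpy as np

def envelope_variant(mag, drop_db=10.0):
    """Interpolate between the detected local maxima, lower the
    envelope by drop_db, then y(i) = max(|U(i)| - env(i), 0) and
    lev(i) = |U(i)| - y(i)."""
    idx = np.array([i for i in range(1, len(mag) - 1)
                    if mag[i] > mag[i - 1] and mag[i] > mag[i + 1]])
    env = mag.copy() if idx.size == 0 else np.interp(
        np.arange(len(mag)), idx, mag[idx])
    env = env * 10.0 ** (-drop_db / 20.0)   # lower the envelope by drop_db dB
    y = np.maximum(mag - env, 0.0)
    return y, mag - y
```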
In other variants of the invention, the absolute value of the spectral values will be replaced for example by the square of the spectral values, without changing the principle of the invention; in this case a square root will be necessary in order to return to the signal domain, this being more complex to carry out.
The combining module 513 performs a combining step by adaptive mixing of the ambience signal and of the tonal components. Accordingly, an ambience level control factor Γ is defined by the following equation:
β being a factor, an exemplary computation of which is given hereinbelow.
To obtain the extended signal, we first obtain the combined signal in absolute values for i=0, …, L−1:
to which are applied the signs of UHB1(k):
y″(i)=sgn(UHB1(i+240))·y′(i)
where the function sgn(·) gives the sign:
By definition the factor Γ is >1. The tonal components, detected spectral line by spectral line by the condition y(i)>0, are reduced by the factor Γ; the mean level is amplified by the factor 1/Γ.
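Reapplying the signs of the original DCT coefficients to the combined magnitude spectrum can be sketched as follows (the convention sgn(0)=+1 is an assumption, the definition of sgn(·) not being reproduced here):

```python
import numpy as np

def apply_signs(u_hb1_band, y_prime):
    """y''(i) = sgn(UHB1(i+240)) * y'(i): the combined magnitudes
    y'(i) inherit the signs of the extended coefficients."""
    s = np.where(np.asarray(u_hb1_band) >= 0.0, 1.0, -1.0)
    return s * np.asarray(y_prime)
```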
In the adaptive mixing block 513, a control factor for the energy level is computed as a function of the total energy of the decoded (or decoded and extended) low band signal and of the tonal components.
In a preferred embodiment of the adaptive mixing, the energy adjustment is performed in the following manner:
UHB2(k)=fac·y″(k−240), k=240, …, 319
UHB2(k) being the band extension combined signal.
The adjustment factor is defined by the following equation:
where γ makes it possible to avoid over-estimation of the energy. In an exemplary embodiment, β is computed so as to retain the same level of ambience signal with respect to the energy of the tonal components in the consecutive bands of the signal. The energy of the tonal components is computed in three bands: 2000-4000 Hz, 4000-6000 Hz and 6000-8000 Hz, with
in which
where N(k1,k2) is the set of the indices k for which the coefficient of index k is classified as being associated with the tonal components. This set may for example be obtained by detecting the local spikes in U′(k) satisfying |U′(k)|>lev(k), where lev(k) is computed as the mean level of the spectrum, spectral line by spectral line. It may be noted that other schemes for computing the energy of the tonal components are possible, for example by taking the median value of the spectrum over the band considered. β is fixed in such a way that the ratio between the energy of the tonal components in the 4-6 kHz and 6-8 kHz bands is the same as that between the 2-4 kHz and 4-6 kHz bands:
where
and max(·,·) is the function which gives the maximum of its two arguments.
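The exact equations for β are not reproduced above, but one plausible reading of the stated constraint (the 4-6/6-8 kHz tonal-energy ratio matching the 2-4/4-6 kHz ratio, with max(·,·) guarding the divisions) can be sketched as follows; the formula and the clipping are assumptions, not the equation of the embodiment:

```python
def mixing_factor_beta(e_2_4, e_4_6, e_6_8, eps=1e-12):
    """Illustrative sketch: the desired tonal energy in 6-8 kHz is
    the one that continues the 2-4 kHz -> 4-6 kHz decay, i.e.
    e_4_6**2 / e_2_4; beta compares it to the actual tonal energy
    e_6_8 and is limited to [0, 1]."""
    target = e_4_6 * e_4_6 / max(e_2_4, eps)
    beta = target / max(e_6_8, eps)
    return min(max(beta, 0.0), 1.0)
```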
In variants of the invention, the computation of β may be replaced with other schemes. For example, in a variant, it will be possible to extract (compute) various parameters (or "features") characterizing the low band signal, including a "tilt" parameter similar to that computed in the AMR-WB codec, and the factor β will be estimated as a function of a linear regression on the basis of these various parameters, its value being limited to between 0 and 1. The linear regression can, for example, be estimated in a supervised manner, the factor β being estimated given the original high band in a training base. It will be noted that the way in which β is computed does not limit the nature of the invention.
Thereafter, the parameter β can be used to compute γ by taking account of the fact that a signal with an ambience signal added in a given band is in general perceived as stronger than a harmonic signal with the same energy in the same band. If we define α to be the quantity of ambience signal added to the harmonic signal:
α=√(1−β)
it will be possible to compute γ as a decreasing function of α, for example γ=b−a√α, with b=1.1, a=1.2 and γ limited to between 0.3 and 1. Here again, other definitions of α and γ are possible within the framework of the invention.
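With the example values of the text, this computation of γ from β can be sketched as:

```python
import math

def gamma_from_beta(beta, a=1.2, b=1.1):
    """alpha = sqrt(1 - beta) is the quantity of ambience signal
    added; gamma = b - a*sqrt(alpha) is a decreasing function of
    alpha, limited to [0.3, 1], with the example values a = 1.2
    and b = 1.1 given in the text."""
    alpha = math.sqrt(1.0 - beta)
    return min(max(b - a * math.sqrt(alpha), 0.3), 1.0)
```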
At the output of the band extension device 500, the block 501, in a particular embodiment, optionally carries out a dual operation: the application of a bandpass filter frequency response and a de-emphasis (or deaccentuation) filtering in the frequency domain.
In a variant of the invention, the de-emphasis filtering can be performed in the time domain, after the block 502, or even before the block 510; however, in this case, the bandpass filtering performed in the block 501 may leave certain low-frequency components of very low level which are amplified by the de-emphasis, which can modify, in a slightly perceptible manner, the decoded low band. For this reason, it is preferred here to perform the de-emphasis in the frequency domain. In the preferred embodiment, the coefficients of index k=0, …, 199 are set to zero, so the de-emphasis is limited to the higher coefficients.
The excitation is first de-emphasized according to the following equation:
in which Gdeemph(k) is the frequency response of the filter 1/(1−0.68z−1) over a restricted discrete frequency band. Taking into account the discrete (odd) frequencies of the DCT-IV, Gdeemph(k) is defined here as:
in which
In the case where a transformation other than DCT-IV is used, the definition of θk will be able to be adjusted (for example for even frequencies).
It should be noted that the de-emphasis is applied in two phases: for k=200, …, 255, corresponding to the 5000-6400 Hz frequency band, the response of 1/(1−0.68z−1) is applied as at 12.8 kHz; and for k=256, …, 319, corresponding to the 6400-8000 Hz frequency band, the response is extended here to a constant value over the 6.4-8 kHz band.
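As an illustrative sketch of this two-phase de-emphasis (the exact θk expression, given by an equation not reproduced here, is assumed to be the odd DCT-IV frequency π(k+1/2)/256):

```python
import numpy as np

def deemphasis_response(mu=0.68):
    """Gdeemph(k) for k = 0..319: unity below k = 200 (those
    coefficients are zeroed anyway), the magnitude response of
    1/(1 - 0.68 z^-1) at theta_k = pi*(k + 1/2)/256 for k = 200..255
    (5000-6400 Hz), then a constant extension over k = 256..319
    (6400-8000 Hz)."""
    g = np.ones(320)
    k = np.arange(200, 256)
    theta = np.pi * (k + 0.5) / 256.0
    g[200:256] = 1.0 / np.abs(1.0 - mu * np.exp(-1j * theta))
    g[256:] = g[255]          # constant value in the 6.4-8 kHz band
    return g
```

Note that the values near the top of the band are close to 0.6, which is consistent with the reduced-complexity variant described below.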
It can be noted that, in the AMR-WB codec, the HF synthesis is not de-emphasized. In the embodiment presented here, the high-frequency signal is on the contrary de-emphasized so as to restore it to a domain consistent with the low-frequency signal (0-6.4 kHz) which exits the block 305 of
In a variant of the embodiment, in order to reduce the complexity, it will be possible to set Gdeemph(k) to a constant value independent of k, by taking for example Gdeemph(k)=0.6, which corresponds approximately to the average value of Gdeemph(k) for k=200, …, 319 under the conditions of the embodiment described above.
In another variant of the embodiment of the decoder, the de-emphasis will be able to be carried out in an equivalent manner in the time domain after inverse DCT.
In addition to the de-emphasis, a bandpass filtering is applied with two separate parts: one high-pass and fixed, the other low-pass and adaptive (a function of the bit rate).
This filtering is performed in the frequency domain.
In the preferred embodiment, the low-pass filter partial response is computed in the frequency domain as follows:
in which Nlp=60 at 6.6 kbit/s, 40 at 8.85 kbit/s, and 20 at bit rates above 8.85 kbit/s.
Then, a bandpass filter is applied in the form:
The definition of Ghp(k), k=0, …, 55, is given, for example, in Table 1 below.
It will be noted that, in variants of the invention, the values of Ghp(k) will be able to be modified while keeping a progressive attenuation. Similarly, the low-pass filtering with variable bandwidth, Glp(k), will be able to be adjusted with values or a frequency support that are different, without changing the principle of this filtering step.
It will also be noted that the bandpass filtering will be able to be adapted by defining a single filtering step combining the high-pass and low-pass filtering.
In another embodiment, the bandpass filtering will be able to be performed in an equivalent manner in the time domain (as in the block 112 of
The inverse transform block 502 performs an inverse DCT on 320 samples to recover the high-frequency signal sampled at 16 kHz. Its implementation is identical to that of the block 510, because the DCT-IV is orthonormal, except that the length of the transform is 320 instead of 256, and the following is obtained:
where N16k=320 and k=0, …, 319.
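Because the orthonormal DCT-IV is its own inverse, a single routine serves for both the forward transform of block 510 (N=256) and the inverse transform of block 502 (N=320). An illustrative sketch (a naive O(N²) matrix form, not the fast implementation a codec would use):

```python
import numpy as np

def dct_iv(x):
    """Orthonormal DCT-IV of length N = len(x); applying it twice
    returns the original signal, since the basis matrix is both
    symmetric and orthonormal."""
    n = len(x)
    k = np.arange(n)
    basis = np.sqrt(2.0 / n) * np.cos(np.pi / n * np.outer(k + 0.5, k + 0.5))
    return basis @ x
```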
In the case where the block 510 is not a DCT, but some other transformation or decomposition into sub-bands, the block 502 carries out the synthesis corresponding to the analysis carried out in the block 510.
The signal sampled at 16 kHz is thereafter, in an optional manner, scaled by gains defined per sub-frame of 80 samples (block 504).
In a preferred embodiment, a gain gHB1(m) is first computed (block 503) per sub-frame by ratios of energy of the sub-frames such that, in each sub-frame of index m=0, 1, 2 or 3 of the current frame:
in which
with ε=0.01. The gain per sub-frame gHB1(m) can be written in the form:
which shows that, in the signal uHB, the same ratio between energy per sub-frame and energy per frame is assured as in the signal u(n).
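The gHB1(m) equations are not reproduced above; the following sketch (an assumed square-root energy-ratio form consistent with the stated property, with both signals split into 4 sub-frames) illustrates the idea:

```python
import numpy as np

def subframe_gains(u, u_hb, eps=0.01):
    """Per-sub-frame gain gHB1(m), m = 0..3, chosen so that uHB keeps
    the same sub-frame/frame energy ratio as the low-band excitation
    u(n); eps = 0.01 guards against silent frames."""
    nu, nh = len(u) // 4, len(u_hb) // 4
    e_u = eps + np.sum(u * u)          # frame energy of u(n)
    e_hb = eps + np.sum(u_hb * u_hb)   # frame energy of uHB(n)
    g = np.empty(4)
    for m in range(4):
        r_u = (eps + np.sum(u[nu * m:nu * (m + 1)] ** 2)) / e_u
        r_hb = (eps + np.sum(u_hb[nh * m:nh * (m + 1)] ** 2)) / e_hb
        g[m] = np.sqrt(r_u / r_hb)
    return g
```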
The block 504 performs the scaling of the combined signal (included in step E404a of
uHB′(n)=gHB1(m)·uHB(n), n=80m, …, 80(m+1)−1
It will be noted that the implementation of the block 503 differs from that of the block 101 of
Thus, this scaling step makes it possible to retain, in the high band, the ratio of energy between the sub-frame and the frame in the same way as in the low band.
In an optional manner, the block 506 thereafter performs the scaling of the signal (included in step E404a of
uHB″(n)=gHB2(m)·uHB′(n), n=80m, …, 80(m+1)−1
where the gain gHB2(m) is obtained from the block 505 by executing the blocks 103, 104 and 105 of the AMR-WB codec (the input of the block 103 being the excitation decoded in low band, u(n)). The blocks 505 and 506 are useful for adjusting the level of the LPC synthesis filter (block 507), here as a function of the tilt of the signal. Other schemes for computing the gain gHB2(m) are possible without changing the nature of the invention.
Finally, the signal, uHB′(n) or uHB″(n), is filtered by the filtering module 507 which can be embodied here by taking as transfer function 1/Â(z/γ), where γ=0.9 at 6.6 kbit/s and γ=0.6 at the other bit rates, thereby limiting the order of the filter to order 16. In a variant, this filtering will be able to be performed in the same way as is described for the block 111 of
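The weighted synthesis filter 1/Â(z/γ) amounts to scaling each LPC coefficient â(i) by γ^i before the all-pole recursion. A minimal sketch (direct-form recursion with illustrative coefficients, not the block 507 implementation):

```python
import numpy as np

def lpc_synthesis_weighted(x, a, gamma):
    """All-pole filtering by 1/A(z/gamma): the coefficients a[i]
    (with a[0] = 1) are bandwidth-expanded as a[i] * gamma**i, then
    y[n] = x[n] - sum_i aw[i] * y[n-i]."""
    aw = np.asarray(a, dtype=float) * gamma ** np.arange(len(a))
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = x[n]
        for i in range(1, len(aw)):
            if n - i >= 0:
                acc -= aw[i] * y[n - i]
        y[n] = acc
    return y
```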
In variant embodiments of the invention, the coding of the low band (0-6.4 kHz) will be able to be replaced by a CELP coder other than that used in AMR-WB, such as, for example, the CELP coder in G.718 at 8 kbit/s. With no loss of generality, other wide-band coders or coders operating at frequencies above 16 kHz, in which the coding of the low band operates with an internal frequency at 12.8 kHz, could be used. Moreover, the invention can obviously be adapted to sampling frequencies other than 12.8 kHz, when a low-frequency coder operates with a sampling frequency lower than that of the original or reconstructed signal. When the low-band decoding does not use linear prediction, there is no excitation signal to be extended, in which case it will be possible to perform an LPC analysis of the signal reconstructed in the current frame and an LPC excitation will be computed so as to be able to apply the invention.
Finally, in another variant of the invention, the excitation or the low band signal (u(n)) is resampled, for example by linear interpolation or cubic “spline” interpolation, from 12.8 to 16 kHz before transformation (for example DCT-IV) of length 320. This variant has the defect of being more complex, since the transform (DCT-IV) of the excitation or of the signal is then computed over a greater length and the resampling is not performed in the transform domain.
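The linear-interpolation variant of this 12.8 kHz to 16 kHz resampling (ratio 4:5) can be sketched as follows; a cubic spline would replace np.interp in the other variant mentioned:

```python
import numpy as np

def resample_12k8_to_16k(x):
    """Resample a 12.8 kHz signal to 16 kHz by linear interpolation:
    5 output samples for every 4 input samples."""
    n_out = len(x) * 5 // 4
    t_in = np.arange(len(x)) / 12800.0    # input sampling instants
    t_out = np.arange(n_out) / 16000.0    # output sampling instants
    return np.interp(t_out, t_in, x)
```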
Furthermore, in variants of the invention, all the computations necessary for the estimation of the gains (GHBN, gHB1(m), gHB2(m), gHBN, . . . ) will be able to be performed in a logarithmic domain.
This type of device comprises a processor PROC cooperating with a memory block BM comprising a storage and/or working memory MEM.
Such a device comprises an input module E able to receive a decoded or extracted audio signal in a first frequency band, termed the low band, restored to the frequency domain (U(k)). It comprises an output module S able to transmit the extension signal in a second frequency band (UHB2(k)), for example to a filtering module 501 of
The memory block can advantageously comprise a computer program comprising code instructions for the implementation of the steps of the band extension method within the meaning of the invention, when these instructions are executed by the processor PROC, and in particular the steps of extracting (E402) tonal components and an ambience signal from a signal arising from the decoded low band signal (U(k)), of combining (E403) the tonal components (y(k)) and the ambience signal (UHBA(k)) by adaptive mixing using energy level control factors to obtain an audio signal, termed the combined signal (UHB2(k)), of extending (E401a) over at least one second frequency band higher than the first frequency band the low band decoded signal before the extraction step or the combined signal after the combining step.
Typically, the description of
The memory MEM stores, generally, all the data necessary for the implementation of the method.
In one possible embodiment, the device thus described can also comprise low-band decoding functions and other processing functions described for example in
Although the present disclosure has been described with reference to one or more examples, workers skilled in the art will recognize that changes may be made in form and detail without departing from the scope of the disclosure and/or the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
1450969 | Feb 2014 | FR | national |
This Application is a Divisional Application of U.S. Ser. No. 15/117,100, filed Aug. 5, 2016, which is a 371 National Stage Application of International Application No. PCT/FR2015/050257, filed Feb. 4, 2015, the content of which is incorporated herein by reference in its entirety, and published as WO 2015/118260 on Aug. 13, 2015, not in English, which also claims priority of French Application No. 1450969, filed Feb. 7, 2014.
Number | Date | Country | |
---|---|---|---|
Parent | 15117100 | Aug 2016 | US |
Child | 16011153 | US |