Disclosed are embodiments related to comfort noise (CN) generation.
Although the capacity in telecommunication networks is continuously increasing, it is still of great interest to limit the required bandwidth per communication channel. In mobile networks, less transmission bandwidth for each call means that the mobile network can service a larger number of users in parallel. Lowering the transmission bandwidth also yields lower power consumption in both the mobile device and the base station. This translates to energy and cost saving for the mobile operator, while the end user will experience prolonged battery life and increased talk-time.
One such method for reducing the transmitted bandwidth in speech communication is to exploit the natural pauses in the speech. In most conversations only one talker is active at a time; thus the speech pauses in one direction will typically occupy more than half of the signal. One way to use this property of a typical conversation to decrease the transmission bandwidth is to employ a Discontinuous Transmission (DTX) scheme, where the active signal coding is discontinued during speech pauses. DTX schemes are standardized for all 3GPP mobile telephony standards, i.e. 2G, 3G and VoLTE. DTX is also commonly used in Voice over IP systems.
During speech pauses it is common to transmit a very low bit rate encoding of the background noise to allow for a Comfort Noise Generator (CNG) in the receiving end to fill the pauses with a background noise having similar characteristics as the original noise. The CNG makes the sound more natural since the background noise is maintained and not switched on and off with the speech. Complete silence in the inactive segments (i.e. speech pauses) is perceived as annoying and often leads to the misconception that the call has been disconnected.
A DTX scheme further relies on a Voice Activity Detector (VAD), which indicates to the system whether to use the active signal encoding methods or the low rate background noise encoding in active and inactive segments, respectively. The system may be generalized to discriminate between other source types by using a (Generic) Sound Activity Detector (GSAD or SAD), which not only discriminates speech from background noise but also may detect music or other signal types which are deemed relevant.
Communication services may be further enhanced by supporting stereo or multichannel audio transmission. In these cases, a DTX/CNG system also needs to consider the spatial characteristics of the signal in order to provide a pleasant sounding comfort noise.
A common CN generation method, e.g. used in all 3GPP speech codecs, is to transmit information on the energy and spectral shape of the background noise in the speech pauses. This can be done using a significantly smaller number of bits than the regular coding of speech segments. At the receiver side the CN is generated by creating a pseudo-random signal and then shaping the spectrum of the signal with a filter based on information received from the transmitting side. The signal generation and spectral shaping can be done in the time or the frequency domain.
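As an illustrative sketch of this principle (not the actual 3GPP algorithm, which uses an LPC synthesis filter), comfort noise can be produced by shaping a pseudo-random excitation with a simple one-pole filter; the `gain` and `alpha` parameters below are hypothetical stand-ins for the energy and spectral-shape information carried in a SID frame:

```python
import random

def generate_comfort_noise(num_samples, gain, alpha, seed=0):
    """Sketch of time-domain CN generation: shape pseudo-random noise
    with a one-pole filter. 'gain' and 'alpha' are hypothetical stand-ins
    for the energy and spectral-shape information received in a SID frame."""
    rng = random.Random(seed)      # pseudo-random excitation source
    y_prev = 0.0
    out = []
    for _ in range(num_samples):
        e = rng.uniform(-1.0, 1.0)               # white-noise excitation sample
        y = alpha * y_prev + (1.0 - alpha) * e   # simple spectral shaping
        out.append(gain * y)                     # scale to the signaled energy
        y_prev = y
    return out

noise = generate_comfort_noise(320, gain=0.01, alpha=0.9)  # one 20 ms frame at 16 kHz
```

Seeding the generator makes the sketch deterministic; a real decoder would instead run its own pseudo-random generator continuously across frames.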
In a typical DTX system, the capacity gain comes from the fact that the CN is encoded with fewer bits than the regular encoding. Part of this saving in bits comes from the fact that the CN parameters are normally sent less frequently than the regular coding parameters. This normally works well since the background noise character is not changing as fast as e.g. a speech signal. The encoded CN parameters are often referred to as a “SID frame” where SID stands for Silence Descriptor.
A typical case is that the CN parameters are sent every 8th speech encoder frame (one speech encoder frame is typically 20 ms) and these are then used in the receiver until the next set of CN parameters is received (see
In the first frame in a new inactive segment (i.e. directly after a speech burst), it may not be possible to use an average taken over several frames. Some codecs, like the 3GPP EVS codec, are using a so-called hangover period preceding inactive segments. In this hangover period, the signal is classified as inactive but active coding is still used for up to 8 frames before inactive encoding starts. One reason for this is to allow averaging of the CN parameters during this period (see
An issue with the above solution is that the first CN parameter set cannot always be sampled over several speech encoder frames but will instead be sampled over fewer frames, or even a single frame. This can lead to a situation where an inactive segment starts with a CN that is different in the beginning and then changes and stabilizes when the transmission of the averaged parameters commences. This may be perceived as annoying by the listener, especially if it occurs frequently.
In embodiments of the present invention, a CN parameter is typically determined based on signal characteristics over the period between two consecutive CN parameter transmissions while in an inactive segment. The first frame in each inactive segment is however treated differently: here the CN parameter is based on signal characteristics of the first frame of inactive coding, typically a first SID frame, and any hangover frames, and also on signal characteristics of the last-sent SID frame and any inactive frames after it at the end of the previous inactive segment. Weighting factors are applied such that the weight for the data from the previous inactive segment decreases as a function of the length of the active segment in-between. The older the previous data is, the less weight it gets.
Embodiments of the present invention improve the stability of CN generated in a decoder, while being agile enough to follow changes in the input signal.
According to a first aspect, a method for generating a comfort noise (CN) parameter is provided. The method includes receiving an audio input; detecting, with a Voice Activity Detector (VAD), a current inactive segment in the audio input; as a result of detecting, with the VAD, the current inactive segment in the audio input, calculating a CN parameter CNused; and providing the CN parameter CNused to a decoder. The CN parameter CNused is calculated based at least in part on the current inactive segment and a previous inactive segment.
In some embodiments, calculating the CN parameter includes calculating
CNused = ƒ(Tactive, Tcurr, Tprev, CNcurr, CNprev),
where CNcurr refers to a CN parameter from the current inactive segment; CNprev refers to a CN parameter from the previous inactive segment; Tprev refers to a time-interval parameter related to CNprev; Tcurr refers to a time-interval parameter related to CNcurr; and Tactive refers to a time-interval parameter of the active segment between the previous inactive segment and the current inactive segment.
In some embodiments, the function ƒ(⋅) is defined as a weighted sum of functions g1(⋅) and g2 (⋅) such that the CN parameter CNused is given by:
CNused = W1(Tactive, Tcurr, Tprev) * g1(CNcurr, Tcurr) + W2(Tactive, Tcurr, Tprev) * g2(CNprev, Tprev)
where W1(⋅) and W2(⋅) are weighting functions. In some embodiments, W1(⋅) and W2(⋅) sum to unity such that W2(Tactive,Tcurr,Tprev)=1−W1(Tactive,Tcurr,Tprev). In some embodiments, the function g1(⋅) represents an average over the time period Tcurr and the function g2(⋅) represents an average over the time period Tprev. In some embodiments, the weighting functions W1(⋅) and W2(⋅) are functions of Tactive alone, such that W1(Tactive,Tcurr,Tprev)=W1(Tactive) and W2(Tactive,Tcurr,Tprev)=W2(Tactive). In some embodiments, 0<W1(⋅)≤1 and 0<1−W2(⋅)≤1, and as the time Tactive approaches infinity, W1(⋅) converges to 1 and W2(⋅) converges to 0 in the limit.
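A minimal sketch of such a weighted combination, assuming a hypothetical weighting function W1(Tactive) = 1 − 0.5·exp(−Tactive/τ) (the constant τ and the 0.5 floor are illustrative choices, not taken from the embodiments) that stays in (0, 1] and converges to 1 as Tactive grows:

```python
import math

TAU = 2.0  # hypothetical time constant (seconds) controlling how fast
           # the previous segment's influence decays

def w1(t_active):
    """Hypothetical weighting: approaches 1 as the active segment grows,
    so older CN data gets ever less weight; W2 = 1 - W1."""
    return 1.0 - 0.5 * math.exp(-t_active / TAU)

def cn_used(cn_curr_frames, cn_prev_frames, t_active):
    """Combine the frame averages of the current and previous inactive
    segments with weights that depend only on Tactive."""
    g1 = sum(cn_curr_frames) / len(cn_curr_frames)   # average over Tcurr
    g2 = sum(cn_prev_frames) / len(cn_prev_frames)   # average over Tprev
    W1 = w1(t_active)
    return W1 * g1 + (1.0 - W1) * g2

# Short pause between talk spurts: the previous noise estimate still matters.
short = cn_used([0.30, 0.32], [0.20, 0.22, 0.21], t_active=0.5)
# Long talk spurt: the result is dominated by the current segment's average.
long_ = cn_used([0.30, 0.32], [0.20, 0.22, 0.21], t_active=60.0)
```

With a long intervening active segment the output converges to the current-segment average (here 0.31), while after a short pause it sits between the old and new averages.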
In some embodiments, the function ƒ(⋅) is defined such that the CN parameter CNused is given by

CNused = W1(Tactive) * (1/Ncurr) * Σi=1..Ncurr CNcurr(i) + W2(Tactive) * (1/Nprev) * Σj=1..Nprev CNprev(j),
where Ncurr represents the number of frames corresponding to the time-interval parameter Tcurr and Nprev represents the number of frames corresponding to the time-interval parameter Tprev; and where W1(Tactive) and W2(Tactive) are weighting functions.
According to a second aspect, a method for generating a comfort noise (CN) side-gain parameter is provided. The method includes receiving an audio input, wherein the audio input comprises multiple channels; detecting, with a Voice Activity Detector (VAD), a current inactive segment in the audio input; as a result of detecting, with the VAD, the current inactive segment in the audio input, calculating a CN side-gain parameter SG(b) for a frequency band b; and providing the CN side-gain parameter SG(b) to a decoder. The CN side-gain parameter SG(b) is calculated based at least in part on the current inactive segment and a previous inactive segment.
In some embodiments, calculating the CN side-gain parameter SG(b) for a frequency band b includes calculating

SG(b) = W(nF) * (1/Ncurr) * Σi=1..Ncurr SGcurr(b,i) + (1 − W(nF)) * (1/Nprev) * Σj=1..Nprev SGprev(b,j),
where SGcurr(b,i) represents a side gain value for frequency band b and frame i in the current inactive segment; SGprev(b,j) represents a side gain value for frequency band b and frame j in the previous inactive segment; Ncurr represents the number of frames in the sum from the current inactive segment; Nprev represents the number of frames in the sum from the previous inactive segment; W(k) represents a weighting function; and nF represents the number of frames in the active segment between the current inactive segment and the previous inactive segment, corresponding to Tactive.
In some embodiments, W(k) is given by
According to a third aspect, a method for generating comfort noise (CN) is provided. The method includes receiving a CN parameter CNused generated according to any one of the embodiments of the first aspect, and generating comfort noise based on the CN parameter CNused.
According to a fourth aspect, a method for generating comfort noise (CN) is provided. The method includes receiving a CN side-gain parameter SG(b) for a frequency band b generated according to any one of the embodiments of the second aspect, and generating comfort noise based on the CN parameter SG(b).
According to a fifth aspect, a node for generating a comfort noise (CN) parameter is provided. The node includes a receiving unit configured to receive an audio input; a detecting unit configured to detect, with a Voice Activity Detector (VAD), a current inactive segment in the audio input; a calculating unit configured to calculate, as a result of detecting, with the VAD, the current inactive segment in the audio input, a CN parameter CNused; and a providing unit configured to provide the CN parameter CNused to a decoder. The CN parameter CNused is calculated by the calculating unit based at least in part on the current inactive segment and a previous inactive segment.
In some embodiments, the calculating unit is further configured to calculate the CN parameter CNused by calculating
CNused = ƒ(Tactive, Tcurr, Tprev, CNcurr, CNprev),
where CNcurr refers to a CN parameter from the current inactive segment; CNprev refers to a CN parameter from the previous inactive segment; Tprev refers to a time-interval parameter related to CNprev; Tcurr refers to a time-interval parameter related to CNcurr; and Tactive refers to a time-interval parameter of the active segment between the previous inactive segment and the current inactive segment.
According to a sixth aspect, a node for generating a comfort noise (CN) side-gain parameter is provided. The node includes a receiving unit configured to receive an audio input, wherein the audio input comprises multiple channels; a detecting unit configured to detect, with a Voice Activity Detector (VAD), a current inactive segment in the audio input; a calculating unit configured to calculate, as a result of detecting, with the VAD, the current inactive segment in the audio input, a CN side-gain parameter SG(b) for a frequency band b; and a providing unit configured to provide the CN side-gain parameter SG(b) to a decoder. The CN side-gain parameter SG(b) is calculated based at least in part on the current inactive segment and a previous inactive segment.
In some embodiments, the calculating unit is further configured to calculate the CN side-gain parameter SG(b) for a frequency band b, by calculating
where SGcurr(b,i) represents a side gain value for frequency band b and frame i in the current inactive segment; SGprev(b,j) represents a side gain value for frequency band b and frame j in the previous inactive segment; Ncurr represents the number of frames in the sum from the current inactive segment; Nprev represents the number of frames in the sum from the previous inactive segment; W(k) represents a weighting function; and nF represents the number of frames in the active segment between the current inactive segment and the previous inactive segment, corresponding to Tactive.
According to a seventh aspect, a node for generating comfort noise (CN) is provided. The node includes a receiving unit configured to receive a CN parameter CNused generated according to any one of the embodiments of the first aspect; and a generating unit configured to generate comfort noise based on the CN parameter CNused.
According to an eighth aspect, a node for generating comfort noise (CN) is provided. The node includes a receiving unit configured to receive a CN side-gain parameter SG(b) for a frequency band b generated according to any one of the embodiments of the second aspect; and a generating unit configured to generate comfort noise based on the CN parameter SG(b).
According to a ninth aspect, a computer program is provided, comprising instructions which, when executed by processing circuitry of a node, cause the node to perform the method of any one of the embodiments of the first and second aspects.
According to a tenth aspect, a carrier is provided, containing the computer program of any of the embodiments of the ninth aspect, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, and a computer readable storage medium.
The accompanying drawings, which are incorporated herein and form part of the specification, illustrate various embodiments.
In many cases, e.g. a person standing still with his mobile telephone, the background noise characteristics will be stable over time. In these cases it will work well to use the CN parameters from the previous inactive segment as a starting point in the current inactive segment, instead of relying on a more unstable sample taken in a shorter period of time in the beginning of the current inactive segment.
There are, however, cases where background noise conditions may change over time. The user can move from one location to another, e.g. from a silent office out to a noisy street. There might also be things in the environment that change even if the telephone user is not moving, e.g. a bus driving by on the street. This means that it might not always work well to base the CN parameters on signal characteristics from the previous inactive segment.
Embodiments of the present invention aim to adaptively balance the above-mentioned aspects for an improved DTX system with CNG. In embodiments, a comfort noise parameter CNused may be determined as follows based on a function ƒ(⋅):
CNused = ƒ(Tactive, Tcurr, Tprev, CNcurr, CNprev)
In the equation above, CNcurr refers to a CN parameter from the current inactive segment; CNprev refers to a CN parameter from the previous inactive segment; Tprev refers to a time-interval parameter related to CNprev; Tcurr refers to a time-interval parameter related to CNcurr; and Tactive refers to a time-interval parameter of the active segment between the previous inactive segment and the current inactive segment.
In one embodiment, the function ƒ(⋅) is defined as a weighted sum of functions g1(⋅) and g2(⋅) of CNcurr and CNprev, i.e.
CNused = W1(Tactive, Tcurr, Tprev) * g1(CNcurr, Tcurr) + W2(Tactive, Tcurr, Tprev) * g2(CNprev, Tprev)
where W1(⋅) and W2(⋅) are weighting functions.
The functions g1(⋅) and g2(⋅) may, for example, in an embodiment, be averages over the time periods Tcurr and Tprev, respectively. In embodiments, typically ΣWi=1.
In some embodiments, the weighting between previous and current CN parameter averages may be based only on the length of the active segment, i.e. on Tactive. For example, the following equation may be used:
In the equation above, Ncurr represents the number of frames corresponding to the time-interval parameter Tcurr; Nprev represents the number of frames corresponding to the time-interval parameter Tprev; and W1(Tactive) and W2(Tactive) are weighting functions.
An averaging of the parameter CN is done by using both an average taken from the current inactive segment and an average taken from the previous segment. These two values are then combined with weighting factors based on a weighting function that depends, in some embodiments, on the length of the active segment between the current and the previous inactive segment such that less weight is put on the previous average if the active segment is long and more weight if it is short.
In another embodiment, the weights are additionally adapted based on Tprev and Tcurr. This may, for example, mean that a larger weight is given to the previous CN parameters because the Tcurr period is too short to give a stable estimate of the long-term signal characteristics that can be represented by the CNG system. An example of an equation corresponding to this embodiment follows:
In the equation above, the additional variables referenced have the following meanings:
An established method for encoding a multi-channel (e.g. stereo) signal is to create a mix-down (or downmix) signal of the input signals, e.g. a mono signal in the case of stereo input, and to determine additional parameters that are encoded and transmitted with the encoded downmix signal to be utilized for an up-mix at the decoder. In the stereo DTX case, a mono signal may be encoded and generated as CN, and stereo parameters will then be used to create a stereo signal from the mono CN signal. The stereo parameters typically control the stereo image in terms of, e.g., sound source localization and stereo width.
In the case of a non-fixed stereo microphone, e.g. a mobile telephone or a headset connected to the mobile phone, the variation in the stereo parameters may be faster than the variation in the mono CN parameters.
To illustrate this with an example: turning your head 90 degrees can be done very fast but moving from one type of background noise environment to another will take a longer time. The stereo image will in many cases be continuously changing since it is hard to keep your mobile telephone or headset in the same position for any longer period of time. Because of this, embodiments of the present invention can be especially important for stereo parameters.
One example of a stereo parameter is the side gain SG. A stereo signal can be split into a mix-down signal DMX and a side signal S:
DMX(t)=L(t)+R(t)
S(t)=L(t)−R(t)
where L(t) and R(t) refer, respectively, to the Left and Right audio signals. The corresponding up-mix would then be:

L(t)=(DMX(t)+S(t))/2
R(t)=(DMX(t)−S(t))/2
In order to save bits for transmission of an encoded stereo signal, some components Ŝ(t) of the side signal S might be predicted from the DMX signal by utilizing a side gain parameter SG according to:
Ŝ(t)=SG·DMX(t)
A minimized prediction error E(t)=(Ŝ(t)−S(t))² can be obtained by:

SG=<DMX,S>/<DMX,DMX>
where <⋅,⋅> denotes an inner product between the signals (typically frames thereof).
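The least-squares estimate SG = <DMX,S>/<DMX,DMX> (the closed-form minimizer of the summed squared prediction error over a frame) can be sketched as follows; the frame values below are illustrative only:

```python
def side_gain(left, right):
    """Estimate the side gain SG that best predicts the side signal
    S = L - R from the downmix DMX = L + R in the least-squares sense:
    SG = <DMX, S> / <DMX, DMX> (assumes a non-silent downmix frame)."""
    dmx = [l + r for l, r in zip(left, right)]
    s = [l - r for l, r in zip(left, right)]
    num = sum(d * x for d, x in zip(dmx, s))   # <DMX, S>
    den = sum(d * d for d in dmx)              # <DMX, DMX>
    return num / den

# A frame where the right channel is a scaled copy of the left (R = 0.5 * L):
L = [0.5, -0.2, 0.1, 0.4]
R = [0.25, -0.1, 0.05, 0.2]
sg = side_gain(L, R)
```

When one channel is a scaled copy of the other, the prediction Ŝ = SG·DMX reproduces the side signal exactly; for general signals SG captures only the component of S that is correlated with DMX.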
Side gains may be determined in broad-band from time domain signals, or in frequency sub-bands obtained from downmix and side signals represented in a transform domain, e.g. the Discrete Fourier Transform (DFT) or Modified Discrete Cosine Transform (MDCT) domains, or by some other filterbank representation. If a side gain in the first frame of CNG would be significantly based on a previous inactive segment, and differ significantly from the following frames, the stereo image would change drastically in the beginning of an inactive segment compared to the slower pace during the rest of the inactive segment. This would be perceived as annoying by the listener, especially if it is repeated every time a new inactive segment (i.e. speech pause) starts.
The following formula shows one example of how embodiments of the present invention can be used to obtain CN side-gain parameters from frequency-divided side gain parameters:

SG(b) = W(nF) * (1/Ncurr) * Σi=1..Ncurr SGcurr(b,i) + (1 − W(nF)) * (1/Nprev) * Σj=1..Nprev SGprev(b,j)
In the equation above, SGcurr(b,i) represents a side gain value for frequency band b and frame i in the current inactive segment; SGprev(b,j) represents a side gain value for frequency band b and frame j in the previous inactive segment; Ncurr represents the number of frames in the sum from the current inactive segment; Nprev represents the number of frames in the sum from the previous inactive segment; W(k) represents a weighting function; and nF represents the number of frames in the active segment between the current inactive segment and the previous inactive segment, corresponding to Tactive.
Note that Ncurr and Nprev can differ from each other and from time to time. Nprev will, in addition to the frames of the last transmitted CN parameters, also include the inactive frames (so-called no-data frames) between the last CN parameter transmission and the first active frames. An active frame can of course occur anytime, so this number will vary. Ncurr will include the number of frames in the hangover period plus the first inactive frame, which may also vary if the length of the hangover period is adaptive. Ncurr may not only include consecutive hangover frames, but may in general represent the number of frames included in the determination of the current CN parameters.
Note that changing the number of frames used in the average is just one way of changing the length of the time-interval on which the parameters are calculated. There are also other ways of changing the length of time-interval on which a parameter is based upon. For example, related to CN generation, the frame length in Linear Predictive Coding (LPC) analysis could also be changed.
The method includes receiving an audio input (step 702). The method further includes detecting, with a Voice Activity Detector (VAD), a current inactive segment in the audio input (step 704). The method further includes, as a result of detecting, with the VAD, the current inactive segment in the audio input, calculating a CN parameter CNused (step 706). The method further includes providing the CN parameter CNused to a decoder (step 708). The CN parameter CNused is calculated based at least in part on the current inactive segment and a previous inactive segment (step 710).
In some embodiments, calculating the CN parameter CNused includes calculating CNused=ƒ(Tactive,Tcurr,Tprev,CNcurr,CNprev), where CNcurr refers to a CN parameter from a current inactive segment; CNprev refers to a CN parameter from a previous inactive segment; Tprev refers to a time-interval parameter related to CNprev; Tcurr refers to a time-interval parameter related to CNcurr; and Tactive refers to a time-interval parameter of an active segment between the previous inactive segment and the current inactive segment.
In some embodiments, the function ƒ(⋅) is defined as a weighted sum of functions g1(⋅) and g2 (⋅) such that the CN parameter CNused is given by:
CNused = W1(Tactive, Tcurr, Tprev) * g1(CNcurr, Tcurr) + W2(Tactive, Tcurr, Tprev) * g2(CNprev, Tprev)
where W1(⋅) and W2(⋅) are weighting functions. In some embodiments, W1(⋅) and W2(⋅) sum to unity such that W2(Tactive,Tcurr,Tprev)=1−W1(Tactive,Tcurr,Tprev). In some embodiments, the function g1(⋅) represents an average over the time period Tcurr and the function g2(⋅) represents an average over the time period Tprev. In some embodiments, the weighting functions W1(⋅) and W2(⋅) are functions of Tactive alone, such that W1(Tactive,Tcurr,Tprev)=W1(Tactive) and W2(Tactive,Tcurr,Tprev)=W2(Tactive). In some embodiments,

g1(CNcurr, Tcurr) = (1/Ncurr) * Σi=1..Ncurr CNcurr(i) and g2(CNprev, Tprev) = (1/Nprev) * Σj=1..Nprev CNprev(j),
where Ncurr represents the number of frames corresponding to the time-interval parameter Tcurr and Nprev represents the number of frames corresponding to the time-interval parameter Tprev.
In some embodiments, 0<W1(⋅)≤1 and 0<1−W2(⋅)≤1, and as the time Tactive approaches infinity, W1(⋅) converges to 1 and W2(⋅) converges to 0 in the limit. In embodiments, the function ƒ(⋅) is defined such that the CN parameter CNused is given by

CNused = W1(Tactive) * (1/Ncurr) * Σi=1..Ncurr CNcurr(i) + W2(Tactive) * (1/Nprev) * Σj=1..Nprev CNprev(j),
where Ncurr represents the number of frames corresponding to the time-interval parameter Tcurr and Nprev represents the number of frames corresponding to the time-interval parameter Tprev; and where W1(Tactive) and W2(Tactive) are weighting functions.
In some embodiments, calculating the CN side-gain parameter SG(b) for a frequency band b includes calculating

SG(b) = W(nF) * (1/Ncurr) * Σi=1..Ncurr SGcurr(b,i) + (1 − W(nF)) * (1/Nprev) * Σj=1..Nprev SGprev(b,j),
where SGcurr(b,i) represents a side gain value for frequency band b and frame i in the current inactive segment; SGprev(b,j) represents a side gain value for frequency band b and frame j in the previous inactive segment; Ncurr represents the number of frames in the sum from the current inactive segment; Nprev represents the number of frames in the sum from the previous inactive segment; W(k) represents a weighting function; and nF represents the number of frames in the active segment between the current inactive segment and the previous inactive segment, corresponding to Tactive.
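A sketch of how such a per-band combination could look, combining the average side gain of the current segment's frames with that of the previous segment according to a weight that depends on the number of intervening active frames; the weighting function `w_example` below is a hypothetical stand-in, since the exact form of W(k) is not reproduced here:

```python
def cn_side_gain(band_curr, band_prev, n_f, w):
    """Sketch of a per-band CN side gain: weight the current segment's
    average side gain against the previous segment's, where n_f is the
    number of active frames in between and w() is a weighting function."""
    avg_curr = sum(band_curr) / len(band_curr)   # average over Ncurr frames
    avg_prev = sum(band_prev) / len(band_prev)   # average over Nprev frames
    W = w(n_f)
    return W * avg_curr + (1.0 - W) * avg_prev

# Hypothetical weighting: ramps from 0.5 toward 1 as active frames accumulate.
w_example = lambda k: min(1.0, 0.5 + k / 200.0)

# Side gains for one band: 2 frames from the current segment, 2 from the previous.
sg_b = cn_side_gain([0.10, 0.12], [0.30, 0.28], n_f=40, w=w_example)
```

After a long active segment (large nF) the previous segment's contribution vanishes and the result equals the current-segment average, which matches the intended behavior of down-weighting old data.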
In some embodiments, W(k) is given by
The node 1002 includes a receiving unit 1004 configured to receive an audio input; a detecting unit 1006 configured to detect, with a Voice Activity Detector (VAD), a current inactive segment in the audio input; a calculating unit 1008 configured to calculate, as a result of detecting, with the VAD, the current inactive segment in the audio input, a CN parameter CNused; and a providing unit 1010 configured to provide the CN parameter CNused to a decoder. The CN parameter CNused is calculated by the calculating unit based at least in part on the current inactive segment and a previous inactive segment.
While various embodiments of the present disclosure are described herein, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.
Additionally, while the processes described above and illustrated in the drawings are shown as a sequence of steps, this was done solely for the sake of illustration. Accordingly, it is contemplated that some steps may be added, some steps may be omitted, the order of the steps may be re-arranged, and some steps may be performed in parallel.
Related Application Data
Provisional Application No. 62691069, Jun. 2018, US.
Parent Application No. 17256073, Dec. 2020, US; Child Application No. 18307319, US.