The present disclosure relates to sound coding, in particular but not exclusively to classification of uncorrelated stereo content, cross-talk detection, and stereo mode selection in, for example, a multi-channel sound codec capable of producing a good sound quality in a complex audio scene at low bit-rate and low delay.
In the present disclosure and the appended claims:
Historically, conversational telephony has been implemented with handsets having only one transducer to output sound only to one of the user's ears. In the last decade, users have started to use their portable handset in conjunction with a headphone to receive the sound over their two ears mainly to listen to music but also, sometimes, to listen to speech. Nevertheless, when a portable handset is used to transmit and receive conversational speech, the content is still mono but presented to the user's two ears when a headphone is used.
With the newest 3GPP speech coding standard, EVS (Enhanced Voice Services) as described in Reference [1] of which the full content is incorporated herein by reference, the quality of the coded sound, for example speech and/or audio, that is transmitted and received through a portable handset has been significantly improved. The next natural step is to transmit stereo information such that the receiver gets as close as possible to a real life audio scene that is captured at the other end of the communication link.
In audio codecs, for example as described in Reference [2], of which the full content is incorporated herein by reference, the transmission of stereo information is common practice.
For conversational speech codecs, a mono signal is the norm. When a stereo sound signal is transmitted, the bitrate is often doubled since both the left and right channels of the stereo sound signal are coded using a mono codec. This works well in most scenarios, but presents the drawbacks of doubling the bitrate and failing to exploit any potential redundancy between the two channels (the left and right channels of the stereo sound signal). Furthermore, to keep the overall bitrate at a reasonable level, a very low bitrate for each of the left and right channels is used, thus affecting the overall sound quality. To reduce the bitrate, efficient stereo coding techniques have been developed and used. As non-limitative examples, two stereo coding techniques that can be efficiently used at low bitrates are discussed in the following paragraphs.
A first stereo coding technique is called parametric stereo. Parametric stereo encodes two inputs (left and right channels) as mono signals using a common mono codec plus a certain amount of stereo side information (corresponding to stereo parameters) which represents a stereo image. The two input left and right channels are down-mixed into a mono signal and the stereo parameters are then computed. This is usually performed in frequency domain (FD), for example in the Discrete Fourier Transform (DFT) domain. The stereo parameters are related to so-called binaural or inter-channel cues. The binaural cues (see for example Reference [3], of which the full content is incorporated herein by reference) comprise Interaural Level Difference (ILD), Interaural Time Difference (ITD) and Interaural Correlation (IC). Depending on the sound signal characteristics, stereo scene configuration, etc., some or all binaural cues are coded and transmitted to the decoder. Information about what binaural cues are coded and transmitted is sent as signaling information, which is usually part of the stereo side information. Also, a given binaural cue can be quantized using different coding techniques which results in a variable number of bits being used. Then, in addition to the quantized binaural cues, the stereo side information may contain, usually at medium and higher bitrates, a quantized residual signal that results from the down-mixing. The residual signal can be coded using an entropy coding technique, e.g. an arithmetic encoder. In the remainder of the present disclosure, parametric stereo will be referred to as “DFT stereo” since the parametric stereo encoding technology usually operates in frequency domain and the present disclosure will describe a non-restrictive embodiment using DFT.
Another stereo coding technique is a technique operating in time-domain. This stereo coding technique mixes the two inputs (left and right channels) into so-called primary and secondary channels. For example, following the method as described in Reference [4], of which the full content is incorporated herein by reference, time-domain mixing can be based on a mixing ratio, which determines respective contributions of the two inputs (left and right channels) upon production of the primary and secondary channels. The mixing ratio is derived from several metrics, for example normalized correlations of the two inputs (left and right channels) with respect to a mono signal or a long-term correlation difference between the two inputs (left and right channels). The primary channel can be coded by a common mono codec while the secondary channel can be coded by a lower bitrate codec. Coding of the secondary channel may exploit coherence between the primary and secondary channels and might re-use some parameters from the primary channel. In certain sounds where the left and right channels exhibit little correlation, it is better to encode the left channel and the right channel of the stereo input signal in time domain either separately or with minimum inter-channel parametrization. Such approach in the encoder is a special case of time domain TD stereo and will be called “LRTD stereo” throughout the present disclosure.
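Purely by way of illustration, the following Python sketch shows how a time-domain down-mix into primary and secondary channels could be driven by a mixing ratio. The mixing formula and the derivation of the ratio from normalized correlations with a passive mono signal are simplified assumptions made for this sketch; the actual derivation in Reference [4] is more elaborate (smoothed, bounded, and based on long-term correlations).

import numpy as np

def td_stereo_mix(left: np.ndarray, right: np.ndarray):
    """Illustrative time-domain down-mix into primary/secondary channels.

    The mixing ratio beta below is a simplified stand-in derived from the
    normalized correlations of the two channels with a passive mono signal;
    the actual derivation in Reference [4] is smoothed and long-term based.
    """
    eps = 1e-12
    mono = 0.5 * (left + right)
    # Normalized correlation of each channel with the mono signal
    c_l = np.dot(left, mono) / (np.linalg.norm(left) * np.linalg.norm(mono) + eps)
    c_r = np.dot(right, mono) / (np.linalg.norm(right) * np.linalg.norm(mono) + eps)
    # Mixing ratio in [0, 1]; 0.5 when both channels contribute equally
    beta = float(np.clip(c_l / (c_l + c_r + eps), 0.0, 1.0))
    primary = beta * left + (1.0 - beta) * right
    secondary = beta * left - (1.0 - beta) * right
    return primary, secondary, beta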
Further, in recent years, the generation, recording, representation, coding, transmission, and reproduction of audio have been moving towards an enhanced, interactive and immersive experience for the listener. The immersive experience can be described, for example, as a state of being deeply engaged or involved in a sound scene while sounds are coming from all directions. In immersive audio (also called 3D (Three-Dimensional) audio), the sound image is reproduced in all three dimensions around the listener, taking into consideration a wide range of sound characteristics like timbre, directivity, reverberation, transparency and accuracy of (auditory) spaciousness. Immersive audio is produced for a particular sound playback or reproduction system such as a loudspeaker-based system, an integrated reproduction system (sound bar) or headphones. Then, interactivity of a sound reproduction system may include, for example, an ability to adjust sound levels, change positions of sounds, or select different languages for the reproduction.
There exist three fundamental approaches to achieve an immersive experience.
A first approach to achieve an immersive experience is a channel-based audio approach using multiple spaced microphones to capture sounds from different directions, wherein one microphone corresponds to one audio channel in a specific loudspeaker layout. Each recorded channel is then supplied to a loudspeaker in a given location. Examples of channel-based audio approaches are, for example, stereo, 5.1 surround, 5.1+4, etc.
A second approach to achieve an immersive experience is a scene-based audio approach which represents a desired sound field over a localized space as a function of time by a combination of dimensional components. The sound signals representing the scene-based audio are independent of the positions of the audio sources while the sound field is transformed to a chosen layout of loudspeakers at the renderer. An example of scene-based audio is ambisonics.
The third approach to achieve an immersive experience is an object-based audio approach which represents an auditory scene as a set of individual audio elements (for example singer, drums, guitar, etc.) accompanied by information such as their position, so they can be rendered by a sound reproduction system at their intended locations. This gives the object-based audio approach a great flexibility and interactivity because each object is kept discrete and can be individually manipulated.
Each of the above described audio approaches to achieve an immersive experience presents pros and cons. It is thus common that, instead of only one audio approach, several audio approaches are combined in a complex audio system to create an immersive auditory scene. An example can be an audio system that combines scene-based or channel-based audio with object-based audio, for example ambisonics with a few discrete audio objects.
In recent years, 3GPP (3rd Generation Partnership Project) started working on developing a 3D (Three-Dimensional) sound codec for immersive services called IVAS (Immersive Voice and Audio Services), based on the EVS codec (see Reference [5] of which the full content is incorporated herein by reference).
The DFT stereo mode is efficient for coding single-talk utterances. In the case of two or more speakers, it is difficult for the parametric stereo technology to fully describe the spatial properties of the scene. This problem is especially evident when two talkers are talking simultaneously (cross-talk scenario) and when the signals in the left channel and the right channel of the stereo input signal are weakly correlated or completely uncorrelated. In that situation, it is better to encode the left channel and the right channel of the stereo input signal in time domain either separately or with minimum inter-channel parametrization using the LRTD stereo mode. As the scene captured in the stereo input signal evolves, it is desirable to switch between the DFT stereo mode and the LRTD stereo mode based on stereo scene classification.
According to a first aspect, the present disclosure relates to a method for classifying uncorrelated stereo content in a stereo sound signal including a left channel and a right channel in response to features extracted from the stereo sound signal including the left and right channels, comprising: calculating a score representative of uncorrelated stereo content in the stereo sound signal in response to the extracted features; and in response to the score, switching between a first class indicative of one of uncorrelated and correlated stereo content in the stereo sound signal and a second class indicative of the other of the uncorrelated and correlated stereo content.
According to a second aspect, the present disclosure provides a classifier of uncorrelated stereo content in a stereo sound signal including a left channel and a right channel in response to features extracted from the stereo sound signal including the left and right channels, comprising: a calculator of a score representative of uncorrelated stereo content in the stereo sound signal in response to the extracted features; and a class switching mechanism responsive to the score for switching between a first class indicative of one of uncorrelated and correlated stereo content in the stereo sound signal and a second class indicative of the other of the uncorrelated and correlated stereo content.
The present disclosure is also concerned with a method for detecting cross-talk in a stereo sound signal including a left channel and a right channel in response to features extracted from the stereo sound signal including the left and right channels, comprising: calculating a score representative of cross-talk in the stereo sound signal in response to the extracted features; calculating auxiliary parameters for use in detecting cross-talk in the stereo sound signal; and in response to the cross-talk score and the auxiliary parameters, switching between a first class indicative of a presence of cross-talk in the stereo sound signal and a second class indicative of an absence of cross-talk in the stereo sound signal.
According to a further aspect, the present disclosure provides a detector of cross-talk in a stereo sound signal including a left channel and a right channel in response to features extracted from the stereo sound signal including the left and right channels, comprising: a calculator of a score representative of cross-talk in the stereo sound signal in response to the extracted features; a calculator of auxiliary parameters for use in detecting cross-talk in the stereo sound signal; and a class switching mechanism responsive to the cross-talk score and the auxiliary parameters for switching between a first class indicative of a presence of cross-talk in the stereo sound signal and a second class indicative of an absence of cross-talk in the stereo sound signal.
The present disclosure is also concerned with a method for selecting one of a first stereo mode and a second stereo mode for coding a stereo sound signal including a left channel and a right channel, comprising: producing a first output indicative of a presence or absence of uncorrelated stereo content in the stereo sound signal; producing a second output indicative of a presence or absence of cross-talk in the stereo sound signal; calculating auxiliary parameters for use in selecting the stereo mode for coding a stereo sound signal; and selecting the stereo mode for coding a stereo sound signal in response to the first output, the second output and the auxiliary parameters.
According to a still further aspect, the present disclosure provides a device for selecting one of a first stereo mode and a second stereo mode for coding a stereo sound signal including a left channel and a right channel, comprising: a classifier for producing a first output indicative of a presence or absence of uncorrelated stereo content in the stereo sound signal; a detector for producing a second output indicative of a presence or absence of cross-talk in the stereo sound signal; an analysis processor for calculating auxiliary parameters for use in selecting the stereo mode for coding a stereo sound signal; and a stereo mode selector for selecting the stereo mode for coding a stereo sound signal in response to the first output, the second output and the auxiliary parameters.
The foregoing and other objects, advantages and features of the uncorrelated stereo content classifier and classifying method, the cross-talk detector and detecting method, and the stereo mode selecting device and method will become more apparent upon reading of the following non-restrictive description of illustrative embodiments thereof, given by way of example only with reference to the accompanying drawings.
In the appended drawings:
The present disclosure describes the classification of uncorrelated stereo content (hereinafter “UNCLR classification”) and the cross-talk detection (hereinafter “XTALK detection”) in an input stereo sound signal. The present disclosure also describes the stereo mode selection, for example an automatic LRTD/DFT stereo mode selection.
In particular,
The UNCLR classification and the XTALK detection form two independent technologies. However, they are based on a same statistical model and share some features and parameters. Also, both the UNCLR classification and the XTALK detection are designed and trained individually for the LRTD stereo mode and the DFT stereo mode. In the present disclosure, the LRTD stereo mode is given as a non-limitative example of time-domain stereo mode and the DFT stereo mode is given as a non-limitative example of frequency-domain stereo mode. It is within the scope of the present disclosure to implement other time-domain and frequency-domain stereo modes.
The UNCLR classification analyzes features extracted from the left and right channels of the stereo sound signal 190 and detects a weak or zero correlation between the left and right channels. The XTALK detection, on the other hand, detects the presence of two speakers speaking at the same time in a stereo scene. For example, both the UNCLR classification and the XTALK detection provide binary outputs. These binary outputs are combined together in a stereo mode selection logic. As a non-limitative general rule, the stereo mode selection selects the LRTD stereo mode when the UNCLR classification and the XTALK detection indicate the presence of two speakers standing on opposite sides of a capturing device (for example a microphone). This situation usually results in weak correlation between the left channel and the right channel of the stereo sound signal 190. The selection of the LRTD stereo mode or the DFT stereo mode is performed on a frame-by-frame basis (as is well known in the art, the stereo sound signal 190 is sampled at a given sampling rate and processed in groups of these samples called “frames”, each divided into a number of “sub-frames”). Also, the stereo mode selection logic is designed to avoid frequent switching between the LRTD and DFT stereo modes and stereo mode switching within signal segments that are perceptually important.
Non-limitative, illustrative embodiments of the UNCLR classification, the XTALK detection, and the stereo mode selection will be described in the present disclosure, by way of example only, with reference to an IVAS coding framework referred to as IVAS codec (or IVAS sound codec). However, it is within the scope of the present disclosure to incorporate such classification, detection and selection in any other sound codec.
The UNCLR classification is based on the Logistic Regression (LogReg) model as described for example in Reference [9], of which the full content is incorporated herein by reference. The LogReg model is trained individually for the LRTD stereo mode and for the DFT stereo mode. The training is done using a large database of features extracted from the stereo sound signal coding device 100 (stereo codec). Similarly, the XTALK detection is based on the LogReg model which is trained individually for the LRTD stereo mode and for the DFT stereo mode. The features used in the XTALK detection are different from the features used in the UNCLR classification. However, certain features are shared by both technologies.
The features used in the UNCLR classification and the features used in the XTALK detection are extracted from the following operations:
The method 150 for coding the stereo sound signal comprises an operation (not shown) of extraction of the above-mentioned features. To perform the operation of feature extraction, the device 100 for coding a stereo sound signal comprises a feature extractor (not shown).
The operation (not shown) of feature extraction comprises an operation 151 of inter-channel correlation analysis for the LRTD stereo mode and an operation 152 of inter-channel correlation analysis for the DFT stereo mode. To perform operations 151 and 152, the feature extractor (not shown) comprises an analyzer 101 of inter-channel correlation and an analyzer 102 of inter-channel correlation, respectively. Operations 151 and 152 as well as analyzers 101 and 102 are similar and will be described concurrently.
The analyzer 101/102 receives as input the left channel and right channel of a current stereo sound signal frame. The left and right channels are first down-sampled to 8 kHz. Let, for example, the down-sampled left and right channels be denoted as:
X_L(n), X_R(n), n = 0, …, N−1   (1)
where n is a sample index in the current frame and N=160 is a length of the current frame (length of 160 samples). The down-sampled left and right channels are used to calculate an inter-channel correlation function. First, an absolute energy of the left channel and the right channel is calculated using, for example, the following relations:
The analyzer 101/102 calculates the numerator of the inter-channel correlation function from the dot product between the left channel and the right channel over a range of lags <−40,40>. For negative lags, the dot product between the left channel and the right channel is calculated, for example, using the following relation:
and, for positive lags, the dot product is given, for example, by the following relation:
The analyzer 101/102 then calculates the inter-channel correlation function using, for example, the following relation:
where the superscript [−1] denotes reference to the previous frame. A passive mono signal is calculated by taking the average of the left and right channels:
A side signal is calculated as a difference between the left and the right channels using, as a non-limitative example, the following relation:
Finally, it is also useful to define the per-sample product of the left and right channels as:
X_P(n) = X_L(n) · X_R(n), n = 0, …, N−1   (8)
The analyzer 101/102 comprises an Infinite Impulse Response (IIR) filter (not shown) for smoothing the inter-channel correlation function using, for example, the following relation:
R_LT^[n](k) = α_ICA · R_LT^[n−1](k) + (1 − α_ICA) · R^[n](k), k = −40, …, 40   (9)
where the superscript [n] denotes the current frame, superscript [n−1] denotes the previous frame, and αICA is a smoothing factor.
The smoothing factor αICA is set adaptively within the Inter-Channel Correlation Analysis (ICA) module (Reference [1]) of the stereo sound signal coding device 100 (stereo codec). The inter-channel correlation function is then weighted at locations in the region of the predicted peak. The mechanism for peak finding and local windowing is implemented within the ICA module and will not be described in this document; see Reference [1] for additional information about the ICA module. Let us denote the inter-channel correlation function after ICA weighting as Rw(k) with k∈<−40,40>.
The position of the maximum of the inter-channel correlation function is an important indicator of the direction from which the dominant sound is coming to the capturing point, and is used as a feature by the UNCLR classification and the XTALK detection in the LRTD stereo mode. The analyzer 101/102 calculates the maximum of the inter-channel correlation function, also used as a feature by the XTALK detection in the LRTD stereo mode, using, for example, the following relation:
and the position of this maximum using, as a non-limitative embodiment, the following relation:
When the maximum Rmax of the inter-channel correlation function is negative it is set to 0. The difference between the maximum value Rmax in the current frame and the previous frame is calculated, for example, as:
d_Rmax = R_max − R_max^[−1]   (12)
where the superscript [−1] denotes reference to the previous frame.
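A minimal Python sketch of this inter-channel correlation analysis is given below, assuming energy normalization of the dot products and a constant smoothing factor; the adaptive smoothing and the ICA peak weighting of Reference [1] are omitted.

import numpy as np

def interchannel_correlation(xl, xr, r_lt_prev, alpha_ica=0.8, max_lag=40):
    """Sketch of the inter-channel correlation analysis (relations (1)-(12)).

    xl, xr    : left/right channels down-sampled to 8 kHz (N = 160 samples)
    r_lt_prev : smoothed correlation function of the previous frame (81 values)
    alpha_ica : smoothing factor (constant here; adaptive in the ICA module)
    """
    n = len(xl)
    eps = 1e-12
    e_l = np.dot(xl, xl)                      # absolute channel energies
    e_r = np.dot(xr, xr)
    norm = np.sqrt(e_l * e_r) + eps           # assumed normalization
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(len(lags))
    for i, k in enumerate(lags):
        if k < 0:                             # dot product for negative lags
            r[i] = np.dot(xl[:n + k], xr[-k:]) / norm
        else:                                 # dot product for positive lags
            r[i] = np.dot(xl[k:], xr[:n - k]) / norm
    r_lt = alpha_ica * r_lt_prev + (1.0 - alpha_ica) * r   # IIR smoothing (9)
    i_max = int(np.argmax(r_lt))
    r_max = max(float(r_lt[i_max]), 0.0)      # a negative maximum is set to 0
    k_max = int(lags[i_max])                  # position of the maximum
    return r_lt, r_max, k_max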
The position of the maximum of the inter-channel correlation function determines which channel becomes the “reference” channel (REF) and which becomes the “target” channel (TAR) in the ICA module. If the position kmax≥0, the left channel (L) is the reference channel (REF) and the right channel (R) is the target channel (TAR). If kmax<0, the right channel (R) is the reference channel (REF) and the left channel (L) is the target channel (TAR). The target channel (TAR) is then shifted to compensate for its delay with respect to the reference channel (REF). The number of samples used to shift the target channel (TAR) can, for example, be set directly to |kmax|. However, to eliminate artifacts resulting from abrupt changes in position kmax between consecutive frames, the number of samples used to shift the target channel (TAR) may be smoothed with a suitable filter within the ICA module.
Let the number of samples used to shift the target channel (TAR) be denoted as kshift, where kshift>0. Let the reference channel signal be denoted Xref(n) and the target channel signal be denoted Xtar(n). The instantaneous target gain reflects the ratio of energies between the reference channel (REF) and the shifted target channel (TAR). The instantaneous target gain can be calculated, for example, using the following relation:
where N is the frame length. The instantaneous target gain is used as a feature by the UNCLR classification in the LRTD stereo mode.
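The reference/target selection, the delay compensation and the instantaneous target gain can be sketched as follows; the square-root energy-ratio form of the gain and the direct use of |kmax| as the shift are assumptions consistent with the description above.

import numpy as np

def instantaneous_target_gain(xl, xr, k_max):
    """Sketch of reference/target selection and instantaneous target gain.

    For k_max >= 0 the left channel is the reference (REF) and the right
    channel is the target (TAR); otherwise the roles are swapped. The gain
    is assumed to be the square root of the REF/shifted-TAR energy ratio,
    and k_shift is taken directly as |k_max| (smoothed in the ICA module).
    """
    x_ref, x_tar = (xl, xr) if k_max >= 0 else (xr, xl)
    k_shift = abs(k_max)
    x_tar_shifted = x_tar[k_shift:]           # compensate the TAR delay
    x_ref_trim = x_ref[:len(x_tar_shifted)]
    e_ref = np.dot(x_ref_trim, x_ref_trim)
    e_tar = np.dot(x_tar_shifted, x_tar_shifted) + 1e-12
    return float(np.sqrt(e_ref / e_tar))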
The analyzer 101/102 derives a first series of features used in the UNCLR classification and the XTALK detection directly from the inter-channel analysis. The value of the inter-channel correlation function at zero lag, R(0), is used as a feature on its own by the UNCLR classification and the XTALK detection in the LRTD stereo mode. By computing the logarithm of the absolute value of C(0) another feature used by the UNCLR classification and the XTALK detection in the LRTD stereo mode is obtained, as follows:
The ratio of energies of the side signal and the mono signal is also used as a feature by the UNCLR classification and the XTALK detection in the LRTD stereo mode. This ratio is calculated using, for example, the following relation:
The ratio of energies of relation (15) is smoothed over time for example as follows:
where c_hang is a counter of VAD (Voice Activity Detection) hangover frames, which is calculated as part of the VAD module (see for example Reference [1]) of the stereo sound signal coding device 100 (stereo codec). The smoothed ratio of relation (16) is used as a feature by the XTALK detection in the LRTD stereo mode.
The analyzer 101/102 derives the following dot products between the left channel and the mono signal and between the right channel and the mono signal. First, the dot product between the left channel and the mono signal is expressed for example as:
and the dot product between the right channel and the mono signal for example as:
Both dot products are positive with a lower bound of 0. A metric based on the difference of the maximum and the minimum of these two dot products is used as a feature by the UNCLR classification and the XTALK detection in the LRTD stereo mode. It can be calculated using the following relation:
d_mmLR = max[C_LM, C_RM] − min[C_LM, C_RM]   (19)
A similar metric, used as a standalone feature by the UNCLR classification and the XTALK detection in the LRTD stereo mode, is based directly on the difference between the two dot products, both in the linear and in the logarithmic domain, calculated using, for example, the following relations:
Δ_LRM = C_LM − C_RM
d_LRM = log10 |C_LM − C_RM|   (20)
A final feature used by the UNCLR classification and the XTALK detection in the LRTD stereo mode is calculated as part of the inter-channel correlation analysis operation 151/152 and reflects the evolution of the inter-channel correlation function. It may be calculated as follows:
where the superscript [−2] denotes reference to the second frame preceding the current frame.
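The first series of features can be summarized in the following sketch, where C(0) is assumed to be the raw (un-normalized) dot product of the two channels at zero lag and the 0.5 scaling of the side signal is an assumption; the smoothing of relation (16) and the correlation-evolution feature are omitted.

import numpy as np

def lrtd_correlation_features(xl, xr, r0):
    """Sketch of the first series of LRTD features (relations (15)-(20)).

    xl, xr : down-sampled left/right channels of the current frame
    r0     : normalized inter-channel correlation at zero lag, R(0)
    C(0) is assumed to be the raw dot product of the two channels, and the
    0.5 scaling of the side signal is an assumption.
    """
    eps = 1e-12
    mono = 0.5 * (xl + xr)                       # passive mono signal
    side = 0.5 * (xl - xr)                       # side signal
    c0 = np.dot(xl, xr)                          # dot product at zero lag
    f_log_c0 = np.log10(abs(c0) + eps)           # feature: log10|C(0)|
    r_sm = np.dot(side, side) / (np.dot(mono, mono) + eps)  # energy ratio (15)
    c_lm = max(np.dot(xl, mono), 0.0)            # dot products (17), (18)
    c_rm = max(np.dot(xr, mono), 0.0)
    d_mm_lr = max(c_lm, c_rm) - min(c_lm, c_rm)  # feature (19)
    delta_lrm = c_lm - c_rm                      # features (20)
    d_lrm = np.log10(abs(c_lm - c_rm) + eps)
    return r0, f_log_c0, r_sm, d_mm_lr, delta_lrm, d_lrm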
In the LRTD stereo mode, there is no mono down-mixing and both the left and right channels of the input stereo sound signal 190 are analyzed in respective time-domain pre-processing operations to extract features, i.e. operation 153 for time-domain pre-processing the left channel and operation 154 for time-domain pre-processing the right channel of the stereo sound signal 190. To perform operations 153 and 154, the feature extractor (not shown) comprises respective time-domain pre-processors 103 and 104 as shown in
The time-domain pre-processing operation 153/154 performs a number of sub-operations to produce certain parameters that are used as extracted features for conducting UNCLR classification and XTALK detection. Such sub-operations may include:
The time-domain pre-processor 103/104 performs the linear prediction analysis using the Levinson-Durbin algorithm. The output of the Levinson-Durbin algorithm is a set of linear prediction coefficients (LPCs). The Levinson-Durbin algorithm is an iterative method and the total number of iterations in the Levinson-Durbin algorithm may be denoted as M. In each ith iteration, where i = 1, …, M, a residual error energy e_LPC^[i−1] is calculated. In the present disclosure, as a non-limitative illustrative implementation, it is assumed that the Levinson-Durbin algorithm is run with M=16 iterations. The difference in residual error energy between the left channel and the right channel of the input stereo sound signal 190 is used as a feature for the XTALK detection in the LRTD stereo mode. The difference in residual error energy may be calculated as follows:
d_LPC13 = |e_LPC,L^[13] − e_LPC,R^[13]|   (22)
where the subscripts L and R have been added to denote the left channel and the right channel of the input stereo sound signal 190, respectively. In this non-limitative embodiment, the feature (difference dLPC13) is calculated using the residual energy from the 14th iteration instead of the last iteration as it was found experimentally that this iteration has the highest discriminative potential for the UNCLR classification. More information about the Levinson-Durbin algorithm and details about residual error energy calculation can be found, for example, in Reference [1].
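A self-contained sketch of this computation is given below; the autocorrelation windowing (plain rectangular here) and the indexing of the residual energies are assumptions, with e[13] standing for the value described above.

import numpy as np

def levinson_residual_energies(x, order=16):
    """Levinson-Durbin recursion on the autocorrelation of x, returning the
    residual error energy after each iteration (e[0] = zero-lag value)."""
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = np.zeros(order + 1)
    e[0] = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / (e[i - 1] + 1e-12)             # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]  # coefficient update
        a[i] = k
        e[i] = e[i - 1] * (1.0 - k * k)           # residual error energy
    return e

def d_lpc13(xl, xr):
    """Feature (22): absolute difference of the per-channel residual error
    energies at the index described in the text (assumed to be e[13])."""
    e_l = levinson_residual_energies(xl)
    e_r = levinson_residual_energies(xr)
    return abs(float(e_l[13]) - float(e_r[13]))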
The LPC coefficients estimated with the Levinson-Durbin algorithm are converted into Line Spectral Frequencies, LSF(i), i=0, . . . , M−1. The sum of the LSF values can serve as an estimate of a gravity point of the envelope of the input stereo sound signal 190. The difference between the sum of the LSF values in the left channel and in the right channel contains information about the similarity of the two channels. For that reason, this difference is used as a feature in the XTALK detection in the LRTD stereo mode. The difference between the sum of the LSF values in the left channel and in the right channel may be calculated using the following relation:
Additional information about the above mentioned LPC to LSF conversion can be found, for example, in Reference [1].
The time-domain pre-processor 103/104 performs the open-loop pitch estimation and uses an autocorrelation function from which a left channel (L)/right channel (R) open-loop pitch difference is calculated. The left channel (L)/right channel (R) open-loop pitch difference may be calculated using the following relation:
where T[k] is the open-loop pitch estimate in the kth segment of the current frame. In the present disclosure it is assumed, as a non-limitative illustrative example, that the open-loop pitch analysis is performed in three adjacent half frames (segments), indexed k=1, 2, 3, where two segments are located in the current frame and one segment is located in the second half of the previous frame. It is possible to use a different number of segments as well as different segment lengths and overlaps. Additional information about the open-loop pitch estimation can be found, for example, in Reference [1].
The difference between the maximum autocorrelation values (voicing) of the left and right channels (determined by the above-mentioned autocorrelation function) of the input stereo sound signal 190 is also used as a feature by the XTALK detection in the LRTD stereo mode. The difference between the maximum autocorrelation values of the left and right channels may be calculated using the following relation:
where v[k] represents the maximum autocorrelation value of the left (L) and right (R) channels in the kth half-frame.
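Since relations (24) and (25) are simple sums of per-segment differences, they can be sketched as follows with a basic normalized-autocorrelation pitch estimator; the EVS open-loop pitch tracker of Reference [1] is considerably more elaborate, so this is only an approximation, and the summed-absolute-difference forms of the two features are assumptions.

import numpy as np

def ol_pitch_and_voicing(x, t_min=20, t_max=160):
    """Simple normalized-autocorrelation open-loop pitch estimate and
    voicing (maximum normalized autocorrelation) for one segment."""
    e0 = np.dot(x, x) + 1e-12
    best_t, best_v = t_min, -1.0
    for t in range(t_min, min(t_max, len(x) - 1)):
        num = np.dot(x[t:], x[:len(x) - t])
        den = np.sqrt(e0 * (np.dot(x[t:], x[t:]) + 1e-12))
        v = num / den
        if v > best_v:
            best_t, best_v = t, v
    return best_t, best_v

def pitch_voicing_diffs(segments_l, segments_r):
    """Assumed forms of relations (24) and (25): L/R differences of pitch
    and voicing accumulated over the analysis segments (half-frames)."""
    d_pitch, d_voicing = 0.0, 0.0
    for xl, xr in zip(segments_l, segments_r):
        t_l, v_l = ol_pitch_and_voicing(xl)
        t_r, v_r = ol_pitch_and_voicing(xr)
        d_pitch += abs(t_l - t_r)
        d_voicing += abs(v_l - v_r)
    return d_pitch, d_voicing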
The background noise estimation is part of the Voice Activity Detection (VAD) algorithm (see Reference [1]). Specifically, the background noise estimation uses an active/inactive signal detector (not shown) relying on a set of features, some of which are used by the UNCLR classification and the XTALK detection. For example, the active/inactive signal detector (not shown) produces a non-stationarity parameter, fsta, of the left channel (L) and the right channel (R) as a measure of spectral stability. A difference in non-stationarity between the left channel and the right channel of the input stereo sound signal 190 is used as a feature by the XTALK detection in the LRTD stereo mode. The difference in non-stationarity between the left (L) and right (R) channels may be calculated using the following relation:
d_sta = |f_sta,L − f_sta,R|   (26)
The active/inactive signal detector (not shown) relies on the harmonic analysis which contains a correlation map parameter Cmap. The correlation map is a measure of tonal stability of the input stereo sound signal 190 and it is used by the UNCLR classification and the XTALK detection. A difference between the correlation maps of the left (L) and right (R) channels is used as a feature by the XTALK detection in the LRTD stereo mode and is calculated using, for example, the following relation:
d_cmap = |C_map,L − C_map,R|   (27)
Finally, the active/inactive signal detector (not shown) takes regular measurements of spectral diversity and noise characteristics in each frame. These two parameters are also used as features by the UNCLR classification and the XTALK detection in the LRTD stereo mode. Specifically, (a) a difference in spectral diversity between the left channel (L) and the right channel (R) may be calculated as follows:
d_sdiv = |log(S_div,L) − log(S_div,R)|   (28)
where Sdiv represents the measure of spectral diversity in the current frame, and (b) a difference of noise characteristics between the left channel (L) and the right channel (R) may be calculated as follows
d_nchar = |log(n_char,L) − log(n_char,R)|   (29)
where nchar represents the measurement of noise characteristics in the current frame. Reference can be made to [1] for details about the calculation of correlation map, non-stationarity, spectral diversity and noise characteristics parameters.
The ACELP (Algebraic Code-Excited Linear Prediction) core encoder, which is part of the stereo sound signal coding device 100, comprises specific settings for encoding unvoiced sounds as described in Reference [1]. The use of these settings is conditioned by multiple factors, including a measure of sudden energy increase in short segments inside the current frame. The settings for unvoiced sound coding in the ACELP core encoder are only applied when there is no sudden energy increase inside the current frame. By comparing the measures of sudden energy increase in the left channel and in the right channel it is possible to localize the starting position of a cross-talk segment. The sudden energy increase can be calculated similarly to the Ed parameter as described in the 3GPP EVS codec (Reference [1]). The difference in sudden energy increase of the left channel (L) and the right channel (R) may be calculated using the following relation:
d_dE = log(E_d,L) − log(E_d,R)   (30)
where the subscripts L and R have been added to denote the left channel and the right channel of the input stereo sound signal 190, respectively.
The time-domain pre-processor 103/104 and pre-processing operation 153/154 use an FEC (Frame Erasure Concealment) classification module containing the state machine of the FEC technology. An FEC class in each frame is selected among predefined classes based on a function of merit. The difference between FEC classes selected in the current frame for the left channel (L) and the right channel (R) is used as a feature by the XTALK detection in the LRTD stereo mode. However, for the purposes of such classification and detection, the FEC class may be restricted as follows:
where tclass is the selected FEC class in the current frame. Thus, the FEC class is restricted to VOICED and UNVOICED only. The difference between the classes in the left channel (L) and the right channel (R) may be calculated as follows:
d_class = |t_class,L − t_class,R|   (32)
Reference may be made to [1] for additional details about the FEC classification.
The time-domain pre-processor 103/104 and pre-processing operation 153/154 implement a speech/music classification and the corresponding speech/music classifier. This speech/music classification makes a binary decision in each frame according to a power spectrum divergence and a power spectrum stability. A difference in power spectrum divergence between the left channel (L) and the right channel (R) is calculated, for example, using the following relation:
d_Pdiff = |P_diff,L − P_diff,R|   (33)
where Pdiff represents power spectral divergence in the left channel (L) and the right channel (R) in the current frame, and a difference in power spectrum stability between the left channel (L) and the right channel (R) is calculated, for example, using the following relation
d_Psta = |P_sta,L − P_sta,R|   (34)
where Psta represents power spectrum stability in the left channel (L) and the right channel (R) in the current frame.
Reference [1] describes details about the power spectrum divergence and power spectrum stability calculated within the speech/music classification.
The method 150 for coding the stereo sound signal 190 comprises an operation 155 of calculating a Fast Fourier Transform (FFT) of the left channel (L) and the right channel (R). To perform the operation 155, the device 100 for coding the stereo sound signal 190 comprises a FFT transform calculator 105.
The operation (not shown) of feature extraction comprises an operation 156 of calculating DFT stereo parameters. To perform operation 156, the feature extractor (not shown) comprises a calculator 106 of DFT stereo parameters.
In the DFT stereo mode, the transform calculator 105 converts the left channel (L) and the right channel (R) of the input stereo sound signal 190 to frequency domain by means of the FFT transform.
Let the complex spectrum of the left channel (L) be denoted as S̃_L(k) and the complex spectrum of the right channel (R) as S̃_R(k), with k = 0, …, N_FFT−1 being the index of frequency bins and N_FFT the length of the FFT transform. For example, when the sampling rate of the input stereo sound signal is 32 kHz, the calculator 106 of DFT stereo parameters calculates the complex spectra over a window of 40 ms, resulting in N_FFT = 1280 samples. The complex cross-channel spectrum may then be calculated using, as a non-limitative embodiment, the following relation:
X_LR(k) = S̃_L(k) · S̃_R*(k), k = 0, …, N_FFT−1   (35)
with the star superscript indicating complex conjugate. The complex cross-channel spectrum can be decomposed into the real part and the imaginary part using the following relation:
Re(X_LR(k)) = Re(S̃_L(k))·Re(S̃_R(k)) + Im(S̃_L(k))·Im(S̃_R(k)), k = 0, …, N_FFT−1
Im(X_LR(k)) = Im(S̃_L(k))·Re(S̃_R(k)) − Re(S̃_L(k))·Im(S̃_R(k)), k = 0, …, N_FFT−1   (36)
Using the real and imaginary parts decomposition, it is possible to express an absolute magnitude of the complex cross-channel spectrum as:
|X_LR(k)| = √(Re(X_LR(k))² + Im(X_LR(k))²), k = 0, …, N_FFT−1   (37)
By summing the absolute magnitudes of the complex cross-channel spectrum over the frequency bins using the following relation, the calculator 106 of DFT stereo parameters obtains an overall absolute magnitude of the complex cross-channel spectrum:
The energy spectrum of the left channel (L) and the energy spectrum of the right channel (R) can be expressed as:
E_L(k) = Re(S̃_L(k))² + Im(S̃_L(k))², k = 0, …, N_FFT−1
E_R(k) = Re(S̃_R(k))² + Im(S̃_R(k))², k = 0, …, N_FFT−1   (39)
By summing the energy spectra of the left channel (L) and the energy spectra of the right channel (R) over the frequency bins using the following relations, the total energies of the left channel (L) and the right channel (R) can be obtained:
The UNCLR classification and the XTALK detection in the DFT stereo mode use the overall absolute magnitude of the complex cross-channel spectrum as one of their features, not in the direct form defined above but rather in energy-normalized form and in the logarithmic domain, as expressed using, for example, the following relation:
It is possible for the calculator 106 of DFT stereo parameters to calculate a mono down-mix energy using, for example, the following relation:
E_M = E_L + E_R + 2|X_LR|   (42)
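Relations (35) to (42) can be sketched compactly with numpy; the exact normalization used for the log-domain feature of relation (41) is not reproduced in the text, so the form below (cross-spectrum magnitude normalized by the geometric mean of the channel energies) is an assumption.

import numpy as np

def dft_cross_channel_features(xl, xr):
    """Sketch of the DFT-domain cross-channel analysis (relations (35)-(42)).

    xl, xr : windowed left/right frames (40 ms, N_FFT = 1280 at 32 kHz in
    the codec). The normalization of the log-domain feature g_xlr is an
    assumed form of relation (41).
    """
    eps = 1e-12
    s_l = np.fft.fft(xl)
    s_r = np.fft.fft(xr)
    x_lr = s_l * np.conj(s_r)                 # cross-channel spectrum (35)
    mag_lr = float(np.sum(np.abs(x_lr)))      # overall absolute magnitude (38)
    e_l = float(np.sum(np.abs(s_l) ** 2))     # total channel energies (39)-(40)
    e_r = float(np.sum(np.abs(s_r) ** 2))
    e_m = e_l + e_r + 2.0 * mag_lr            # mono down-mix energy (42)
    g_xlr = float(np.log(mag_lr / (np.sqrt(e_l * e_r) + eps) + eps))  # assumed (41)
    return mag_lr, e_l, e_r, e_m, g_xlr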
An Inter-channel Level Difference (ILD) is a feature used by the UNCLR classification and the XTALK detection in the DFT stereo mode as it contains information about the angle from which the main sound is coming. For the purposes of the UNCLR classification and the XTALK detection, the Inter-channel Level Difference (ILD) can be expressed in the form of a gain factor. The calculator 106 of DFT stereo parameters calculates the Inter-channel Level Difference (ILD) gain using, for example, the following relation:
An Inter-channel Phase Difference (IPD) contains information from which the listeners can deduce the direction of the incoming sound signal. The calculator 106 of DFT stereo parameters calculates the Inter-channel Phase Difference (IPD) using, for example, the following relation:
A differential value of the Inter-channel Phase Difference (IPD) with respect to the previous frame is calculated using, for example, the following relation:
d_IPD = |IPD^[n] − IPD^[n−1]|   (46)
where the superscript n is used to denote the current frame and the superscript n−1 is used to denote the previous frame. Finally, it is possible for the calculator 106 to calculate an IPD gain as a ratio between a phase-aligned (IPD=0) down-mix energy (numerator of relation (47)) and the mono down-mix energy EM:
The IPD gain gIPD_lin is restricted to the interval <0,1>. In case the value exceeds the upper threshold of 1.0, then the value of the IPD gain from the previous frame is substituted therefor. The UNCLR classification and the XTALK detection in the DFT stereo mode use the IPD gain in the logarithmic domain as a feature. The calculator 106 determines the IPD gain in the logarithmic domain using, for example, the following relation:
g_IPD = log(1 − g_IPD_lin)   (48)
The Inter-channel Phase Difference (IPD) can also be expressed in the form of an angle used as a feature by the UNCLR classification and the XTALK detection in the DFT stereo mode and calculated, for example, as follows:
A side channel can be calculated as a difference between the left channel (L) and the right channel (R). It is possible to express a gain of the side channel by calculating the ratio of the absolute value of the energy of this difference (EL−ER) with respect to the mono down-mix energy EM, using the following relation:
The higher the gain gside, the bigger the difference between the energies of the left channel (L) and the right channel (R). The gain gside of the side channel is restricted to the interval <0.01, 0.99>. Values outside of this range are limited.
The phase difference between the left channel (L) and the right channel (R) of the input stereo sound signal 190 can also be analyzed from a prediction gain calculated using, for example, the following relation:
g_pred_lin = (1 − g_side)·E_L + (1 + g_side)·E_R − 2|X_LR|   (51)
where the value of the prediction gain gpred_lin is restricted to the interval <0, ∞>, i.e. to positive values. The above expression of gpred_lin captures a difference between the cross-channel spectrum (XLR) energy and the mono down-mix energy EM=EL+ER+2|XLR|. The calculator 106 converts gpred_lin into the logarithmic domain using, for example, relation (52) for use as a feature by the UNCLR classification and the XTALK detection in the DFT stereo mode:
g_pred = log(g_pred_lin + 1)   (52)
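The side-channel and prediction gains can be sketched as follows; the form |EL − ER|/EM of relation (50) is inferred from the description above, while relation (51) is applied as stated.

import numpy as np

def side_and_prediction_gains(e_l, e_r, mag_lr):
    """Side-channel and prediction gains (relations (50)-(52)).

    The form |E_L - E_R| / E_M of the side gain is an inference from the
    text; the clamping intervals are as stated in the disclosure.
    """
    e_m = e_l + e_r + 2.0 * mag_lr                    # mono down-mix energy (42)
    g_side = abs(e_l - e_r) / (e_m + 1e-12)           # assumed form of (50)
    g_side = min(max(g_side, 0.01), 0.99)             # limited to <0.01, 0.99>
    g_pred_lin = (1.0 - g_side) * e_l + (1.0 + g_side) * e_r - 2.0 * mag_lr
    g_pred_lin = max(g_pred_lin, 0.0)                 # restricted to <0, inf>
    g_pred = float(np.log(g_pred_lin + 1.0))          # log-domain feature (52)
    return g_side, g_pred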
The calculator 106 also uses the per-bin channel energies of relation (39) to calculate a mean energy of Inter-Channel Coherence (ICC) forming a cue for determining a difference between the left channel (L) and the right channel (R) not captured by the Inter-channel Time Difference (ITD), to be described hereinafter, and the Inter-channel Phase Difference (IPD). First, the calculator 106 calculates an overall energy of the cross-channel spectrum using, for example, the following relation:
E_X = Re(X_LR)² + Im(X_LR)²   (53)
To express the mean energy of the Inter-Channel Coherence (ICC) it is useful to calculate the following parameter:
φ_tot = √((E_L − E_R)(E_L − E_R) + 4E_X)   (54)
Then, the mean energy of the Inter-Channel Coherence (ICC) is used as a feature by the UNCLR classification and the XTALK detection in the DFT stereo mode and can be expressed as
The value of the mean energy Ecoh is set to 0 if the inner term is less than 1.0. Another possible interpretation of the Inter-Channel Coherence (ICC) is a side-to-mono energy ratio calculated as
Finally, the calculator 106 determines a ratio rpp of maximum and minimum intra-channel amplitude products used in the UNCLR classification and the XTALK detection. This feature, used as a feature by the UNCLR classification and the XTALK detection in the DFT stereo mode, is calculated, for example, using the following relation:
where the intra-channel amplitude products are defined as follows:
A parameter used in stereo signal reproduction is the Inter-channel Time Difference (ITD). In the DFT stereo mode, the calculator 106 of DFT stereo parameters estimates the Inter-channel Time Difference (ITD) from the Generalized Cross-channel Correlation function with Phase Difference (GCC-PHAT). The Inter-channel Time Difference (ITD) corresponds to a Time Delay of Arrival (TDOA) estimation. The GCC-PHAT function is a robust method for estimating the Inter-channel Time Difference (ITD) on reverberated signals. The GCC-PHAT is calculated, for example, using the following relation:
wherein IFFT stands for Inverse Fast Fourier Transform.
The Inter-channel Time Difference (ITD) is then estimated from the GCC-PHAT function using, for example, the following relation:
where d is a time lag in samples corresponding to a time delay in the range from −5 ms to +5 ms. The maximum value of the GCC-PHAT function corresponding to dITD is used as a feature by the UNCLR classification and the XTALK detection in the DFT stereo mode and can be retrieved using the following relation:
In single-talk scenarios there is usually a single dominant peak in the GCC-PHAT function corresponding to the Inter-channel Time Difference (ITD). However, in cross-talk situations with two talkers located on opposite sides of a capturing microphone, there are usually two dominant peaks located apart from each other.
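A minimal sketch of the GCC-PHAT computation and first-peak search follows, assuming a 32 kHz sampling rate and a ±5 ms lag range; windowing and any spectral weighting used by the codec are omitted.

import numpy as np

def gcc_phat_itd(xl, xr, fs=32000, max_delay_ms=5.0):
    """ITD estimation from the GCC-PHAT function (relations (59)-(61)).

    Returns the GCC-PHAT values over the lag range, the lag axis, the lag
    d_ITD of the dominant peak (in samples) and its amplitude G_ITD.
    """
    s_l = np.fft.fft(xl)
    s_r = np.fft.fft(xr)
    x_lr = s_l * np.conj(s_r)
    # Phase transform: normalize the cross spectrum by its magnitude
    gcc = np.real(np.fft.ifft(x_lr / (np.abs(x_lr) + 1e-12)))
    max_lag = int(fs * max_delay_ms / 1000.0)     # 160 samples at 32 kHz
    lags = np.arange(-max_lag, max_lag + 1)
    vals = np.concatenate((gcc[-max_lag:], gcc[:max_lag + 1]))
    i_max = int(np.argmax(vals))
    return vals, lags, int(lags[i_max]), float(vals[i_max])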
The amplitude of the first peak, GITD, is calculated using relation (61) and its position, dITD, is calculated using relation (60). The amplitude of the second peak is localized by searching for the second maximum value of the GCC-PHAT function in an inverse direction with respect to the first peak. More specifically, the direction sITD of searching of the second peak is determined by the sign of the position dITD of the first peak:
s_ITD = sgn(d_ITD)   (62)
where sgn(.) is the sign function.
The calculator 106 of DFT stereo parameters can then retrieve the second maximum value of the GCC-PHAT function in the direction sITD (second highest peak) using, for example, the following relation:
As a non-limitative embodiment, a threshold thrst=8 ensures that the second peak of the GCC-PHAT function is searched at a distance of at least 8 samples from the beginning (dITD=0). As far as the detection of cross-talk (XTALK) is concerned, this means that any potential secondary talker in the scene will have to be present at least a certain minimum distance apart both from the first “dominant” talker and from the middle point (d=0).
The position of the second highest peak of the GCC-PHAT function is calculated using relation (63) by replacing the max(.) function with arg max(.) function. The position of the second highest peak of the GCC-PHAT function will be denoted as dITD2.
The relationship between the amplitudes of the first peak and the second highest peak of the GCC-PHAT function is used as a feature by the XTALK detection in the DFT stereo mode and can be evaluated using the following ratio:
The ratio rGITD12 has a high discrimination potential but, in order to use it as a feature, the XTALK detection eliminates occasional false alarms resulting from a limited time resolution applied during frequency transformation in the DFT stereo mode. This can be done by multiplying the value of the ratio rGITD12 in the current frame with the value of the same ratio from the previous frame using, for example, the following relation:
r_GITD12 ← r_GITD12(n) · r_GITD12(n−1)   (65)
where the index n has been added to denote the current frame and the index n−1 to denote the previous frame. For simplicity the parameter name, rGITD12, is reused to identify the output parameter.
The amplitude of the second highest peak alone constitutes an indicator of the strength of the secondary talker in the scene. Similarly to the ratio rGITD12, occasional random “spikes” of the value GITD2 are reduced using, for example, the following relation (66) to obtain another feature used by the XTALK detection in the DFT stereo mode:
m_ITD2 = G_ITD2(n) · G_ITD2(n−1)   (66)
Another feature used in the XTALK detection in the DFT stereo mode is the difference of the position dITD2(n) of the second highest peak in the current frame with respect to the previous frame, calculated using, for example, the following relation:
Δ_ITD2 = |d_ITD2(n) − d_ITD2(n−1)|   (67)
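The second-peak analysis of relations (62) to (67) can be sketched as follows, assuming the GCC-PHAT values and lag axis from the previous sketch are available; the exact search window and the handling of dITD = 0 are assumptions, and the previous-frame values are kept in a small state dictionary for the inter-frame products.

import numpy as np

def second_peak_features(vals, lags, d_itd, g_itd, prev, thr_st=8):
    """Second-peak analysis of the GCC-PHAT function (relations (62)-(67)).

    vals, lags   : GCC-PHAT values and lag axis from the previous sketch
    d_itd, g_itd : position and amplitude of the dominant (first) peak
    prev         : dict with 'r', 'g_itd2', 'd_itd2' from the previous frame
    """
    s_itd = 1 if d_itd >= 0 else -1                   # search direction (62)
    # Search on the opposite side of the first peak, at least thr_st
    # samples away from the middle point d = 0 (assumed window)
    mask = lags <= -thr_st if s_itd > 0 else lags >= thr_st
    i2 = int(np.argmax(vals[mask]))
    g_itd2 = float(vals[mask][i2])                    # second peak amplitude (63)
    d_itd2 = int(lags[mask][i2])                      # second peak position
    r = g_itd2 / (g_itd + 1e-12)                      # amplitude ratio (64)
    r_gitd12 = r * prev['r']                          # inter-frame product (65)
    m_itd2 = g_itd2 * prev['g_itd2']                  # spike reduction (66)
    delta_itd2 = abs(d_itd2 - prev['d_itd2'])         # position difference (67)
    state = {'r': r, 'g_itd2': g_itd2, 'd_itd2': d_itd2}
    return r_gitd12, m_itd2, delta_itd2, state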
In the DFT stereo mode, the method 150 for coding the stereo sound signal comprises an operation 157 of down-mixing the left channel (L) and the right channel (R) of the stereo sound signal 190 and an operation 158 of calculating an IFFT transform of the down-mixed signals. To perform the operations 157 and 158, the device 100 for coding the stereo sound signal 190 comprises a down-mixer 107 and an IFFT transform calculator 108.
The down-mixer 107 down-mixes the left channel (L) and the right channel (R) of the stereo sound signal into a mono channel (M) and a side channel (S), as described, for example, in Reference [6], of which the full content is incorporated herein by reference.
The IFFT transform calculator 108 then calculates an IFFT transform of the down-mixed mono channel (M) from the down-mixer 107 for producing a time-domain mono channel (M) to be processed in the TD pre-processor 109. The IFFT transform used in calculator 108 is the inverse of the FFT transform used in calculator 105.
In the DFT stereo mode, the operation (not shown) of feature extraction comprises a TD pre-processing operation 159 for extracting features used in the UNCLR classification and the XTALK detection. To perform operation 159, the feature extractor (not shown) comprises the TD pre-processor 109 responsive to the mono channel (M).
The UNCLR classification and the XTALK detection use a Voice Activity Detection (VAD) algorithm. In the LRTD stereo mode, the VAD algorithm is run separately on the left channel (L) and the right channel (R). In the DFT stereo mode, the VAD algorithm is run on the down-mixed mono channel (M). The output of the VAD algorithm is a binary flag fVAD. The VAD flag fVAD is not suitable for the UNCLR classification and the XTALK detection as it is too conservative and has a long hysteresis. This prevents fast switching between the LRTD stereo mode and the DFT stereo mode for example at the end of talk spurts or during short pauses in the middle of an utterance. Also, the VAD flag fVAD is sensitive to small changes in the input stereo sound signal 190. This leads to false alarms in cross-talk detection and incorrect selection of the stereo mode. Therefore, the UNCLR classification and the XTALK detection use an alternative measure of voice activity detection which is based on variations of the relative frame energy. Reference is made to [1] for details about the VAD algorithm.
6.1.1 Relative Frame Energy
The UNCLR classification and the XTALK detection use the absolute energy of the left channel (L) EL and the absolute energy of the right channel (R) ER obtained using relation (2). The maximum average energy of the input stereo sound signal can be calculated in the logarithmic domain using, for example, the following relation:
where the index n has been added to denote the current frame and N=160 is the length of the current frame (length of 160 samples). The value of the maximum average energy in the logarithmic domain Eave(n) is limited to the interval <0;∞>.
A relative frame energy of the input stereo sound signal can then be calculated by mapping the maximum average energy Eave(n) linearly in the interval <0; 0.9>, using, for example, the following relation:
where Eup(n) denotes an upper bound of the relative frame energy Erl(n), Edn(n) denotes a lower bound of the relative frame energy Erl(n), and the index n denotes the current frame.
The bounds of the relative frame energy Erl(n) are updated in each frame based on a noise updating counter aEn(n), which is part of the noise estimation module of the TD pre-processors 103, 104 and 109. Reference is made to [1] for additional information about this counter. The purpose of the counter aEn(n) is to signal that the background noise level in each channel in the current frame can be updated. This situation happens when the value of the counter aEn(n) is zero. As a non-limitative example, the counter aEn(n) in each channel is initialized to 6 and incremented or decremented in every frame with a lower threshold of 0 and an upper threshold of 6.
In the case of LRTD stereo mode, noise estimation is performed on the left channel (L) and the right channel (R) independently. Let us denote the two noise updating counters as aEn,L(n) and aEn,R(n) for the left channel (L) and the right channel (R), respectively. The two counters can then be combined into a single binary parameter with the following relation:
In the case of the DFT stereo mode, noise estimation is performed on the down-mixed mono channel (M). Let us denote the noise update counter in the mono channel as aEn,M(n). The binary output parameter is calculated with the following relation:
The UNCLR classification and the XTALK detection use the binary parameter fEn(n) to enable updating of the lower bound Edn(n) or the upper bound Eup(n) of the relative frame energy Erl(n). When the parameter fEn(n) is equal to zero the lower bound Edn(n) is updated. When the parameter fEn(n) is equal to 1 the upper bound Eup(n) is updated.
The upper bound Eup(n) of the relative frame energy Erl(n) is updated in frames where the parameter fEn(n) is equal to 1 using, for example, the following relation:
where the index n represents the current frame and the index n−1 represents the previous frame.
The first and second lines in relation (71) represent a slower update and a faster update, respectively. Thus, using relation (71) the upper bound Eup(n) is updated more rapidly when the energy increases.
The lower bound Edn(n) of the relative frame energy Erl(n) is updated in frames where the parameter fEn(n) is equal to 0 using, for example, the following relation:
E
dn(n)=0.9Edn(n−1)+0.1Eave(n) (72)
with a lower threshold of 30.0. If the value of the upper bound Eup(n) gets too close to the lower bound Edn(n), it is modified, as an example, as follows:
E_up(n) = E_dn(n) + 20.0, if E_up(n) < E_dn(n) + 20.0   (73)
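The bound-updating logic can be sketched as follows; the two update rates for the upper bound stand in for the two lines of relation (71), which is not reproduced in the text, and are therefore assumptions, while relations (72) and (73) are applied as stated.

def update_energy_bounds(e_ave, e_up, e_dn, f_en):
    """Update of the relative-frame-energy bounds (relations (71)-(73)).

    e_ave : maximum average energy of the current frame (log domain)
    e_up, e_dn : upper/lower bounds from the previous frame
    f_en  : binary noise-update parameter of relation (69)/(70)
    """
    if f_en == 1:                                 # update the upper bound (71)
        if e_ave > e_up:
            e_up = 0.9 * e_up + 0.1 * e_ave       # faster update on rising energy
        else:
            e_up = 0.99 * e_up + 0.01 * e_ave     # slower update otherwise
    else:                                         # update the lower bound (72)
        e_dn = max(0.9 * e_dn + 0.1 * e_ave, 30.0)
    if e_up < e_dn + 20.0:                        # keep the bounds apart (73)
        e_up = e_dn + 20.0
    return e_up, e_dn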
6.1.2 Alternative VAD Flag Estimation
The UNCLR classification and the XTALK detection use the variation of the relative frame energy Erl(n), whose bounds are updated in relations (71) to (73), as a basis for calculating an alternative VAD flag. Let the alternative VAD flag in the current frame be denoted as fxVAD(n). The alternative VAD flag fxVAD(n) is calculated by combining the VAD flags generated in the noise estimation module of the TD pre-processor 103/104 in the case of the LRTD stereo mode, or the VAD flag fVAD generated in the TD pre-processor 109 in the case of the DFT stereo mode, with an auxiliary binary parameter fErl(n) reflecting the variations of the relative frame energy Erl(n).
First, the relative frame energy Erl(n) is averaged over a segment of 10 previous frames using, for example, the following relation:
where p is the index of the average. The auxiliary binary parameter is set, for example, according to the following logic:
In the LRTD stereo mode, the alternative VAD flag fxVAD(n) is calculated by means of a logical combination of the VAD flag in the left channel (L), fVAD,L(n), the VAD flag in the right channel (R), fVAD,R(n), and the auxiliary binary parameter fErl(n) using, for example, the following relation:
f_xVAD(n) = (f_VAD,L(n) OR f_VAD,R(n)) AND f_Erl(n)   (76)
In the DFT stereo mode, the alternative VAD flag fxVAD(n) is calculated by means of a logical combination of the VAD flag in the down-mixed mono channel (M), fVAD,M(n), and the auxiliary binary parameter fErl(n), using, for example, the following relation:
f_xVAD(n) = f_VAD,M(n) AND f_Erl(n)   (77)
In the DFT stereo mode, it is also convenient to calculate a discrete parameter reflecting low level of the down-mixed mono channel (M). Such parameter, called stereo silence flag, can be calculated, for example, by comparing the average level of the active signal to a certain predefined threshold. As an example, the long-term active speech level,
The stereo silence flag can then be calculated using the following relation:
where EM(n) is the absolute energy of the down-mixed mono channel (M) in the current frame. The stereo silence flag fsil(n) is limited to the interval <0,∞>.
The UNCLR classification in the LRTD stereo mode and the DFT stereo mode is based on the Logistic Regression (LogReg) model (see Reference [9]). The LogReg model is trained individually for the LRTD stereo mode and the DFT stereo mode on a large labeled database consisting of correlated and uncorrelated stereo signal samples. The uncorrelated stereo training samples are created artificially, by combining randomly selected mono samples. The following stereo scenes may be simulated with such an artificial mix of mono samples:
In a non-limitative implementation, the mono samples are selected from the AT&T mono clean speech database sampled at 16 kHz. Only active segments are extracted from the mono samples using any convenient VAD algorithm, for example the VAD algorithm of the 3GPP EVS codec as described in Reference [1]. The total size of the stereo training database with uncorrelated content is approximately 240 MB. No level adjustment is applied on the mono signals before they are combined to form the stereo sound signal. Level adjustment is applied only after this process. The level of each stereo sample is normalized to −26 dBov based on passive mono down-mix. Thus, the inter-channel level difference is unchanged and remains the main factor determining the position of the dominant speaker in the stereo scene.
The correlated stereo training samples are obtained from various real recordings of stereo sound signals. The total size of the training database with correlated stereo content is approximately 220 MB. The correlated stereo training samples contain, in a non-limitative implementation, samples from the following scenes illustrated in
Let the total size of the training database be denoted as:
NT = NUNC + NCORR   (79)
where NUNC is the size of the set of uncorrelated stereo training samples and NCORR the size of the set of correlated stereo training samples. The labels are assigned manually using, for example, the following simple rule:
where ΩUNC is the entire feature set of the uncorrelated training database and ΩCORR is the entire feature set of the correlated training database. In this illustrative, non-restrictive implementation, the inactive frames (VAD=0) are discarded from the training database.
Each frame in the uncorrelated training database is labeled “1” and each frame in the correlated training database is labeled “0”. Inactive frames for which VAD=0 are ignored during the training process.
In the LRTD stereo mode, the method 150 for coding the stereo sound signal 190 comprises an operation 161 of classification of uncorrelated stereo content (UNCLR). To perform operation 161, the device 100 for coding the stereo sound signal 190 comprises an UNCLR classifier 111.
The operation 161 of UNCLR classification in the LRTD stereo mode is based on the Logistic Regression (LogReg) model. The following features extracted by running the device 100 for coding the stereo sound signal (stereo codec) on both the uncorrelated stereo and correlated stereo training databases are used in the UNCLR classification operation 161:
In total, the UNCLR classifier 111 uses a number F=8 of features.
Before the training process, the UNCLR classifier 111 comprises a normalizer (not shown) performing a sub-operation (not shown) of normalizing the set of features by removing its mean and scaling it to unit variance. The normalizer (not shown) uses, for that purpose, for example the following relation:
where fi,raw denotes the ith feature of the set, fi denotes the normalized ith feature,
The LogReg model used by the UNCLR classifier 111 takes the real-valued features as an input vector and makes a prediction as to the probability of the input belonging to an uncorrelated class (class 0), indicative of uncorrelated stereo content (UNCLR). For that purpose, the UNCLR classifier 111 comprises a score calculator (not shown) performing a sub-operation (not shown) of calculating a score representative of uncorrelated stereo contents in the input stereo sound signal 190. The score calculator (not shown) computes the output of the LogReg model, which is real-valued, in the form of a linear regression of the extracted features which can be expressed using the following relation:
yp = b0 + b1·f1 + . . . + bF·fF   (82)
where bi denotes coefficients of the LogReg model, and fi denotes the individual features. The real-valued output yp is then transformed into a probability using, for example, the following logistic function:
The probability p(class=0) takes a real value between 0 and 1. Intuitively, probabilities closer to 1 mean that the current frame is highly likely to contain uncorrelated stereo content.
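As a non-limitative illustration, and assuming the standard sigmoid for the logistic function of Relation (83) (the relation itself is not reproduced above), the scoring of Relations (82) and (83) can be sketched in Python as follows (names illustrative):

```python
import math

def logreg_probability(features, b):
    """LogReg output yp (Relation (82)) and p(class=0) (Relation (83)).

    b[0] is the bias b0; b[1:] are the per-feature coefficients b1..bF.
    The standard sigmoid is assumed for the logistic function.
    """
    # Relation (82): linear regression of the extracted features.
    y_p = b[0] + sum(b_i * f_i for b_i, f_i in zip(b[1:], features))
    # Relation (83), assumed form: logistic transform into a probability.
    p_class0 = 1.0 / (1.0 + math.exp(-y_p))
    return y_p, p_class0
```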
The objective of the learning process is to find the best values for the coefficients bi, i=1, . . . , F based on the training data. The coefficients are found iteratively by minimizing the difference between the predicted output, p(class=0), and the true output, y, on the training database. The UNCLR classifier 111 in the LRTD stereo mode is trained using the Stochastic Gradient Descent (SGD) iterative method as described, for example, in Reference [10], of which the full content is incorporated herein by reference.
By comparing the probabilistic output p(class=0) with a fixed threshold, for example 0.5, it is possible to make a binary classification. However, for the purpose of the UNCLR classification in the LRTD stereo mode, the probabilistic output p(class=0) is not used. Instead, the raw output of the LogReg model, yp, is processed further as shown below.
The score calculator (not shown) of the UNCLR classifier 111 first normalizes the raw output of the LogReg model, yp, using, for example, the function as shown in
The normalization function of
7.1.1 LogReg Output Weighting Based on Relative Frame Energy
The score calculator (not shown) of the UNCLR classifier 111 then weights the normalized output of the LogReg model ypn(n) with the relative frame energy using, for example, the following relation:
scrUNCLR(n) = ypn(n)·Erl(n)   (85)
where Erl(n) is the relative frame energy described by Relation (69). The normalized weighted output scrUNCLR(n) of the LogReg model is called the above-mentioned “score” representative of uncorrelated stereo contents in the input stereo sound signal 190.
7.1.2 Rising Edge Detection
The score scrUNCLR(n) still cannot be used directly by the UNCLR classifier 111 for UNCLR classification as it contains occasional short-term “peaks” resulting from the imperfect statistical model. These peaks can be filtered out by a simple averaging filter such as a first-order IIR filter. Unfortunately, the application of such an averaging filter usually results in smearing of the rising edges representing transitions between stereo correlated and uncorrelated content in the input stereo sound signal 190. To preserve the rising edges, the smoothing process (application of the averaging IIR filter) is reduced or even stopped when a rising edge is detected in the input stereo sound signal 190. The detection of rising edges in the input stereo sound signal 190 is done by analyzing the evolution of the relative frame energy Erl(n).
The rising edges of the relative frame energy Erl(n) are found by filtering the relative frame energy with a cascade of P=20 identical first-order Resistor-Capacitor (RC) filters, each having, for example, the following form:
The constants a0, a1 and b1 are chosen such that
Thus, a single parameter τedge is used to control the time constant of each RC filter. Experimentally, it was found that good results are achieved with τedge=0.3. The filtering of the relative frame energy Erl(n) with the cascade of P=20 RC filters can be performed as follows:
where the superscript p=0, 1, . . . , P−1 has been added to denote the stage in the RC filter cascade. The output of the cascade of RC filters is equal to the output from the last stage, i.e.
Ef(n) = Ef[P−1](n) = Ef[19](n)   (89)
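As a non-limitative illustration, the cascade of Relations (86) to (89) can be sketched in Python; since the a0, a1 and b1 coefficients of Relation (86) are not reproduced above, a simple one-pole low-pass update with a smoothing factor standing in for τedge is assumed, and all names are illustrative:

```python
def rc_cascade(E_rl, stages, alpha=0.3):
    """Filter Erl(n) through P identical first-order RC stages.

    Sketch of Relations (86)-(89). 'stages' holds one state value per
    stage (P = 20 in the text) and is updated in place; the returned
    value is Ef(n), the output of the last stage (Relation (89)). The
    one-pole update below is an assumed stand-in for the a0/a1/b1 form
    of Relation (86), with alpha standing in for tau_edge.
    """
    x = E_rl
    for p in range(len(stages)):
        stages[p] = (1.0 - alpha) * stages[p] + alpha * x
        x = stages[p]          # stage p feeds stage p + 1 (Relation (88))
    return x

# Usage: states = [0.0] * 20, then E_f = rc_cascade(E_rl, states) per frame.
```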
The reason for using a cascade of first-order RC filters instead of a single higher-order RC filter is to reduce the computational complexity. The cascade of multiple first-order RC filters acts as a low-pass filter with a relatively sharp step response. When used on the relative frame energy Erl(n), it tends to smear out occasional short-term spikes while preserving slower but important transitions such as onsets and offsets. The rising edges of the relative frame energy Erl(n) can be quantified by calculating the difference between the relative frame energy and the filtered output using, for example, the following relation:
fedge(n) = 0.95 − 0.05·(Erl(n) − Ef(n))   (90)
The term fedge(n) is limited to the interval <0.9; 0.95>. The score calculator (not shown) of the UNCLR classifier 111 smoothes the normalized weighted output scrUNCLR(n) of the LogReg model with an IIR filter, using fedge(n) as the forgetting factor, to produce a normalized, weighted and smoothed score (output of the LogReg model). For example, the following relation may be used:
wscrUNCLR(n) = fedge(n)·wscrUNCLR(n−1) + (1 − fedge(n))·scrUNCLR(n)   (91)
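A minimal Python sketch of the edge-aware smoothing of Relations (90) and (91); names are illustrative:

```python
def smooth_unclr_score(scr, wscr_prev, E_rl, E_f):
    """Relations (90)-(91): edge-aware IIR smoothing of the UNCLR score."""
    # Relation (90): forgetting factor, limited to <0.9; 0.95>; a rising
    # edge (Erl(n) above the filtered energy Ef(n)) lowers the factor and
    # thus weakens the smoothing.
    f_edge = min(max(0.95 - 0.05 * (E_rl - E_f), 0.9), 0.95)
    # Relation (91): first-order IIR smoothing of the score.
    return f_edge * wscr_prev + (1.0 - f_edge) * scr
```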
In the DFT stereo mode, the method 150 for coding the stereo sound signal 190 comprises an operation 163 of classification of uncorrelated stereo content (UNCLR). To perform operation 163, the device 100 for coding the stereo sound signal 190 comprises an UNCLR classifier 113.
The UNCLR classification in the DFT stereo mode is done similarly as the UNCLR classification in the LRTD stereo mode as described above. Specifically, the UNCLR classification in the DFT stereo mode is also based on the Logistic Regression (LogReg) model. For simplicity, the symbols/names denoting certain parameters and the associated mathematical symbols from the UNCLR classification in the LRTD stereo mode are also used for the DFT stereo mode. Subscripts are added to avoid ambiguity when making reference to the same parameter from multiple sections simultaneously.
The following features extracted by running the device 100 for coding the stereo sound signal (stereo codec) on both the stereo uncorrelated and stereo correlated training databases are used by the UNCLR classifier 113 for UNCLR classification in the DFT stereo mode:
In total, the UNCLR classifier 113 uses a number F=8 of features.
Before the training process, the UNCLR classifier 113 comprises a normalizer (not shown) performing a sub-operation (not shown) of normalizing the set of features by removing its mean and scaling it to unit variance. The normalizer (not shown) uses, for that purpose, for example the following relation:
where fi,raw denotes the ith feature of the set,
The LogReg model used in the DFT stereo mode is similar to the LogReg model used in the LRTD stereo mode. The output of the LogReg model, yp, is described by Relation (82) and the probability that the current frame has uncorrelated stereo content (class=0) is given by Relation (83). The classifier training process and the procedure to find the optimal decision threshold are described herein above. Again, for that purpose, the UNCLR classifier 113 comprises a score calculator (not shown) performing a sub-operation (not shown) of calculating a score representative of uncorrelated stereo contents in the input stereo sound signal 190.
The score calculator (not shown) of the UNCLR classifier 113 first normalizes the raw output of the LogReg model, yp, similarly as in the LRTD stereo mode and according to the function as illustrated
7.2.1 LogReg Output Weighting Based on Relative Frame Energy
The score calculator (not shown) of the UNCLR classifier 113 then weights the normalized output of the LogReg model, ypn(n), with the relative frame energy Erl(n) using, for example, the following relation:
scrUNCLR(n) = ypn(n)·Erl(n)   (94)
where Erl(n) is the relative frame energy described by Relation (69).
The weighted normalized output of the LogReg model is called the “score” and it represents the same quantity as in the LRTD stereo mode described above. In the DFT stereo mode, the score scrUNCLR(n) is reset to 0 when the alternative VAD flag, fxVAD(n) (Relation (77)), is set to 0. This is expressed by the following relation:
scrUNCLR(n) = 0, if fxVAD(n) = 0   (95)
7.2.2 Rising Edge Detection in DFT Stereo Mode
The score calculator (not shown) of the UNCLR classifier 113 finally smoothes the score scrUNCLR(n) in the DFT stereo mode with an IIR filter using the rising edge detection mechanism described above in the UNCLR classification in the LRTD stereo mode. For that purpose, the UNCLR classifier 113 uses the relation:
wscrUNCLR(n) = fedge(n)·wscrUNCLR(n−1) + (1 − fedge(n))·scrUNCLR(n)   (96)
which is the same as Relation (91).
The final output of the UNCLR classifier 111/113 is a binary state. Let cUNCLR(n) denote the binary state of the UNCLR classifier 111/113. The binary state cUNCLR(n) has a value “1” to indicate an uncorrelated stereo content class or a value “0” to indicate a correlated stereo content class. The binary state at the output of the UNCLR classifier 111/113 is a state variable, initialized to “0”. The state of the UNCLR classifier 111/113 changes from the current class to the other class in frames where certain conditions are met.
The mechanism used in the UNCLR classifier 111/113 for switching between the stereo content classes is depicted in
Referring to
In the same manner, referring to
Finally, the variable cntsw(n) in the current frame is updated (608) and the procedure is repeated for the next frame (609).
The variable cntsw(n) is a counter of frames of the UNCLR classifier 111/113 in which it is possible to switch between LRTD and DFT stereo modes. This counter is initialized to zero and is updated (608) in each frame using, for example, the following logic:
The counter cntsw(n) has an upper limit of 100. The variable ctype indicates the type of the current frame in the device 100 for coding a stereo sound signal. The frame type is usually determined in the pre-processing operation of the device 100 for coding a stereo sound signal (stereo sound codec), specifically in pre-processor(s) 103/104/109. The type of the current frame is usually selected based on the following characteristics of the input stereo sound signal 190:
As a non-limitative example, the frame type from the 3GPP EVS codec as described in Reference [1] can be used in the UNCLR classifier 111/113 as the parameter ctype of Relation (97). The frame type in the 3GPP EVS codec is selected from the following set of classes:
ctype ∈ {INACTIVE, UNVOICED, VOICED, GENERIC, TRANSITION, AUDIO}
The parameter VAD0 in Relation (97) is the VAD flag without any hangover addition. The VAD flag without hangover addition is often calculated in the pre-processing operation of the device 100 for coding a stereo sound signal (stereo sound codec), specifically in TD pre-processor(s) 103/104/109. As a non-limitative example, the VAD flag without hangover addition from the 3GPP EVS codec as described in Reference [1] may be used in the UNCLR classifier 111/113 as the parameter VAD0.
The output binary state cUNCLR(n) of the UNCLR classifier 111/113 can be altered if the type of the current frame is GENERIC, UNVOICED or INACTIVE or if the VAD flag without hangover addition indicates inactivity (VAD0=0) in the input stereo sound signal. Such frames are generally suitable for switching between the LRTD and DFT stereo modes as they are located either in stable segments or in segments with perceptually low impact on the quality. An objective is to minimize the risk of switching artifacts.
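As a non-limitative illustration, this gating rule can be sketched in Python (the string encoding of ctype and the function name are assumptions):

```python
def switching_allowed(c_type, vad0):
    """True in frames where the UNCLR state cUNCLR(n) may be altered."""
    # GENERIC, UNVOICED and INACTIVE frames, or frames where the VAD flag
    # without hangover addition indicates inactivity (VAD0 = 0), are
    # considered safe for switching between the LRTD and DFT stereo modes.
    return c_type in ("GENERIC", "UNVOICED", "INACTIVE") or vad0 == 0
```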
The XTALK detection is based on the LogReg model trained individually for the LRTD stereo mode and for the DFT stereo mode. Both statistical models are trained on features collected from a large database of real stereo recordings and artificially-prepared stereo samples. In the training database, each frame is labeled either as single-talk or cross-talk. The labeling is done manually in the case of real stereo recordings, or semi-automatically in the case of artificially-prepared samples. The manual labeling is done by identifying short compact segments with cross-talk characteristics. The semi-automatic labeling is done using VAD outputs from the mono signals before their mixing into stereo signals. Details are provided at the end of the present section 8.
In the non-limitative example of implementation described in the present disclosure, the real stereo recordings are sampled at 32 kHz. The total size of these real stereo recordings is approximately 263 MB, corresponding to approximately 30 minutes. The artificially-prepared stereo samples are created by mixing randomly selected speakers from a mono clean speech database using the ITU-T G.191 reverberation tool. The artificially-prepared stereo samples are prepared by simulating the conditions in a large conference room with an AB microphones set-up as illustrated in
Two types of room are considered, echoic (LEAB) and anechoic (LAAB). Referring to
The randomly selected mono samples for speakers S1 and S2 are then convolved with the Room Impulse Responses (RIRs) corresponding to a given speaker/microphone position, thereby simulating a real AB microphone capture. Contributions from both speakers S1 and S2 in each microphone M1 and M2 are added together. A randomly selected offset in the range of 4-4.5 seconds is added to one of the speaker samples before convolution. This ensures that there is always some period of single-talk speech followed by a short period of cross-talk speech and another period of single-talk speech in all training sentences. After RIR convolution and mixing, the samples are again normalized to −26 dBov, this time applied to the passive mono down-mix.
The labels are created semi-automatically using a conventional VAD algorithm, for example the VAD algorithm of the 3GPP EVS codec as described in Reference [1]. The VAD algorithm is applied on the first speaker (S1) file and the second speaker (S2) file individually. Both binary VAD decisions are then combined by means of a logical “AND”. This results in the label file. The segments where the combined output is equal to “1” determine the cross-talk segments. This is illustrated in
The training set is unbalanced. The proportion of cross-talk frames to single-talk frames is approximately 1 to 5, i.e. only about 21% of the training data belong to the cross-talk class. This is compensated during the LogReg training process by applying class weights as described in Reference [6] of which the full content is incorporated herein by reference.
The training samples are concatenated and used as an input to the device 100 for coding a stereo sound signal (stereo sound codec). The features are collected individually in separate files during the encoding process for each 20 ms frame. This constitutes the training feature set. Let the total number of frames in the training feature set be denoted, for example, as:
NT = NXTALK + NNORMAL   (98)
where NXTALK is the total number of cross-talk frames and NNORMAL the total number of single-talk frames.
Also, let the corresponding binary label be denoted, for example, as:
where ΩXTALK is the superset of all cross-talk frames and ΩNORMAL is the superset of all single-talk frames. The inactive frames (VAD=0) are removed from the training database.
In the LRTD stereo mode, the method 150 for coding the stereo sound signal comprises an operation 160 of detecting cross-talk (XTALK). To perform operation 160, the device 100 for coding the stereo sound signal comprises a XTALK detector 110.
The operation 160 of detecting cross-talk (XTALK) in LRTD stereo mode is done similarly to the UNCLR classification in the LRTD stereo mode described above. The XTALK detector 110 is based on the Logistic Regression (LogReg) model. For simplicity the names of parameters and the associated mathematical symbols from the UNCLR classification are used also in this section. Subscripts are added to symbols to avoid ambiguity when referring to the same parameter name from different sections.
The following features are used by the XTALK detector 110:
Accordingly, the XTALK detector 110 uses a total number F=17 of features.
Before the training process, the XTALK detector 110 comprises a normalizer (not shown) performing a sub-operation (not shown) of normalizing the set of 17 features fi by removing its mean and scaling it to unit variance. The normalizer (not shown) uses, for example, the following relation:
where fi,raw denotes the ith feature of the set,
The output yp of the LogReg model is described by Relation (82) and the probability p(class=0) that the current frame belongs to the cross-talk segment class (class 0) is given by Relation (83). The details of the training process and the procedure to find the optimal decision threshold are provided above in the description of the UNCLR classification in the LRTD stereo mode. As described above, for that purpose, the XTALK detector 110 comprises a score calculator (not shown) performing a sub-operation (not shown) of calculating a score representative of cross-talk in the input stereo sound signal 190.
The score calculator (not shown) of the XTALK detector 110 normalizes the raw output of the LogReg model, yp, with the function shown, for example, in
The normalized output of the LogReg model, ypn(n), is set to 0 if the previous frame was encoded with the DFT stereo mode and the current frame is encoded with the LRTD stereo mode. This procedure prevents switching artifacts.
8.1.1 LogReg Output Weighting Based on Relative Frame Energy
The score calculator (not shown) of the XTALK detector 110 weights the normalized output of the LogReg model, ypn(n), based on the relative frame energy Erl(n). The weighting scheme applied in the XTALK detector 110 in the LRTD stereo mode is similar to the weighting scheme applied in the UNCLR classifier 111 in the LRTD stereo mode, as described herein above. The main difference is that the relative frame energy Erl(n) is not used directly as a multiplicative factor as in Relation (85). Instead, the score calculator (not shown) of the XTALK detector 110 linearly maps the relative frame energy Erl(n) into the interval <0; 0.95> with inverse proportion. This mapping can be done, for example, using the following relation:
wrelE(n) = −2.375·Erl(n) + 2.1375   (102)
Thus, in frames with higher relative energy the weight will be close to 0 whereas in frames with low energy the weight will be close to 0.95. The score calculator (not shown) of the XTALK detector 110 then uses the weight wrelE(n) to filter the normalized output of the LogReg model, ypn(n), using for example the following relation:
scrXTALK(n) = wrelE(n)·scrXTALK(n−1) + (1 − wrelE(n))·ypn(n)   (103)
where the index n denotes the current frame and n−1 the previous frame.
The normalized weighted output scrXTALK(n) from the XTALK detector 110 is called the “XTALK score” representative of cross-talk in the input stereo sound signal 190.
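A minimal Python sketch of Relations (102) and (103); the clipping of the weight to the interval <0; 0.95> is assumed from the mapping described above, and the names are illustrative:

```python
def weighted_xtalk_score(y_pn, scr_prev, E_rl):
    """Relations (102)-(103): energy-weighted filtering of the output."""
    # Relation (102): inverse linear mapping of the relative frame energy,
    # assumed to be clipped to the interval <0; 0.95>.
    w = min(max(-2.375 * E_rl + 2.1375, 0.0), 0.95)
    # Relation (103): low-energy frames lean on the previous score,
    # high-energy frames follow the current LogReg output.
    return w * scr_prev + (1.0 - w) * y_pn
```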
8.1.2 Rising Edge Detection
In a similar fashion as in the UNCLR classification in the LRTD stereo mode, the score calculator (not shown) of the XTALK detector 110 smoothes the normalized weighted output scrXTALK(n) of the LogReg model. The reason is to smear out occasional short-term “peaks” and “dips” that would otherwise result in false alarms or errors. The smoothing is designed to preserve rising edges of the LogReg output as these rising edges might represent important transitions between the cross-talk and single-talk segments in the input stereo sound signal 190. The mechanism for detection of rising edges in the XTALK detector 110 in LRTD stereo mode is different from the mechanism of detection of rising edges described above in relation to the UNCLR classification in the LRTD stereo mode.
In the XTALK detector 110, the rising edge detection algorithm analyzes the LogReg output values from previous frames and compares them against a set of pre-calculated “ideal” rising edges with different slopes. The “ideal” rising edges are represented as linear functions of the frame index n.
For each “ideal” rising edge, the rising edge detection algorithm calculates the mean square error between the dotted line and the XTALK score scrXTALK(n). The output of the rising edge detection algorithm is the minimum mean square error among the tested “ideal” rising edges. The linear functions represented by the dotted lines are pre-calculated based on pre-defined thresholds for the minimum and the maximum value, scrmin and scrmax respectively. This is shown in
The rising edge detection is performed by the XTALK detector 110 only in frames meeting the following criterion:
where K=4 is the maximum length of the tested rising edges.
Let the output value of the rising edge detection algorithm be denoted ε0_1. The usage of the “0_1” subscript underlines the fact that the output value of the rising edge detection is limited in the interval <0; 1>. For frames not meeting the criterion in Relation (104), the output value of the rising edge detection is directly set to 0, i.e.
ε0_1 = 0   (105)
The set of linear functions representing the tested “ideal” rising edges can be mathematically expressed with the following relation:
where the index l denotes the length of the tested rising edge and n−k is the frame index. The slope of each linear function is determined by three parameters, the length of the tested rising edge l, the minimum threshold scrmin, and the maximum threshold scrmax. For the purposes of the XTALK detector 110 in the LRTD stereo mode the thresholds are set to scrmax=1.0 and scrmin=−0.2. The values of these thresholds were found experimentally.
For each length of the tested rising edge, the rising edge detection algorithm calculates the mean square error between the linear function t (Relation (106)) and the XTALK score scrXTALK, using for example the following relation:
where ε0 is the initial error given by:
ε0 = [scrXTALK(n) − scrmax]²   (108)
The minimum mean square error is calculated by the XTALK detector 110 using:
The lower the minimum mean square error, the stronger the detected rising edge. In a non-limitative implementation, if the minimum mean square error is higher than 0.3, then the output of the rising edge detection is set to 0, i.e.:
ε0_1 = 0, if εmin > 0.3   (110)
and the rising edge detection algorithm quits. In all other cases, the minimum mean square error may be mapped linearly in the interval <0; 1> using, for example, the following relation:
ε0_1 = 1 − 2.5·εmin   (111)
In the above example, the output of the rising edge detection decreases linearly as the minimum mean square error increases.
The XTALK detector 110 normalizes the output of the rising edge detection in the interval <0.5; 0.9> to yield an edge sharpness parameter calculated using, for example, the following relation:
fedge(n) = 0.9 − 0.4·ε0_1   (112)
with 0.5 and 0.9 used as a lower limit and an upper limit, respectively.
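As a non-limitative illustration, the rising edge matching of Relations (106) to (112) can be sketched in Python. Since Relations (106), (107) and (109) are not reproduced above, the exact shape of the “ideal” edges and the MSE normalization are assumptions; a straight ramp from scrmin to scrmax ending at the current frame is used, and all names are illustrative:

```python
def edge_sharpness(scr_hist, scr_min=-0.2, scr_max=1.0, K=4):
    """Relations (106)-(112): edge sharpness fedge(n) from the XTALK score.

    scr_hist[k] is scrXTALK(n-k), with scr_hist[0] the current frame, so
    at least K + 1 history values are needed (the criterion of Relation
    (104), not reproduced in the text, gates which frames are tested).
    """
    eps_min = float("inf")
    for l in range(1, K + 1):                 # tested edge lengths
        err = (scr_hist[0] - scr_max) ** 2    # Relation (108): initial error
        for k in range(1, l + 1):
            # Assumed ideal edge: linear ramp ending at scr_max in frame n.
            t = scr_max - (scr_max - scr_min) * k / l
            err += (scr_hist[k] - t) ** 2
        eps_min = min(eps_min, err / (l + 1))  # assumed MSE normalization
    # Relations (110)-(111): map the minimum MSE into <0; 1>.
    eps_0_1 = 0.0 if eps_min > 0.3 else 1.0 - 2.5 * eps_min
    # Relation (112): edge sharpness, limited to <0.5; 0.9>.
    return min(max(0.9 - 0.4 * eps_0_1, 0.5), 0.9)
```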
Finally, the score calculator (not shown) of the XTALK detector 110 smoothes the normalized weighted output of the LogReg model, scrXTALK(n), by means of an IIR filter of the XTALK detector 110, with fedge(n) used as the forgetting factor. Such smoothing uses, for example, the following relation:
wscrXTALK(n) = fedge(n)·wscrXTALK(n−1) + (1 − fedge(n))·scrXTALK(n)   (113)
The smoothed output wscrXTALK(n) (XTALK score) is reset to 0 in frames where the alternative VAD flag calculated in Relation (77) is zero. That is:
wscrXTALK(n) = 0, if fxVAD(n) = 0   (114)
In the DFT stereo mode, the method 150 for coding the stereo sound signal 190 comprises an operation 162 of detecting cross-talk (XTALK). To perform operation 162, the device 100 for coding the stereo sound signal 190 comprises a XTALK detector 112.
The XTALK detection in the DFT stereo mode is done similarly as the XTALK detection in the LRTD stereo mode. The Logistic Regression (LogReg) model is used for binary classification of the input feature vector. For simplicity, the names of certain parameters and their associated mathematical symbols from the XTALK detection in the LRTD stereo mode are used also in this section. Subscripts are added to avoid ambiguity when referencing the same parameter from two sections simultaneously.
The following features are extracted from the device 100 for coding the stereo sound signal 190 by running the DFT stereo mode on both the single-talk and cross-talk training databases:
In total, the XTALK detector 112 uses a number F=11 of features.
Before the training process, the XTALK detector 112 comprises a normalizer (not shown) performing a sub-operation (not shown) of normalizing the set of extracted features by removing its global mean and scaling it to unit variance using, for example, the following relation:
where fi,raw denotes the ith feature of the set, fi denotes the normalized ith feature,
The output of the LogReg model is fully described by Relation (82) and the probability that the current frame belongs to the cross-talk segment class (class 0) is given by Relation (83). The details of the training process and the procedure to find the optimal decision threshold are provided above in the section on UNCLR classification in the LRTD stereo mode. Again, for that purpose, the XTALK detector 112 comprises a score calculator (not shown) performing a sub-operation (not shown) of calculating a score representative of XTALK detection in the input stereo sound signal 190.
The score calculator (not shown) of the XTALK detector 112 normalizes the raw output of the LogReg model, yp, using the function shown in
scrXTALK(n) = ypn(n)   (116)
The XTALK score scrXTALK(n) is reset to 0 when the alternative VAD flag fxVAD(n) is set to 0. That can be expressed as follows:
scrXTALK(n) = 0, if fxVAD(n) = 0   (117)
8.2.1 Rising Edge Detection
As in the case of the XTALK detection in the LRTD stereo mode, the score calculator (not shown) of the XTALK detector 112 smoothes the XTALK score scrXTALK(n) to remove short-term peaks. Such smoothing is performed by means of IIR filtering using the rising edge detection mechanism as described in relation to the XTALK detector 110 in the LRTD stereo mode. The XTALK score scrXTALK(n) is smoothed with an IIR filter using for example the following relation:
wscrXTALK(n) = fedge(n)·wscrXTALK(n−1) + (1 − fedge(n))·scrXTALK(n)   (118)
where fedge(n) is the edge sharpness parameter calculated in Relation (112).
The final output of the XTALK detector 110/112 is binary. Let cXTALK(n) denote the output of the XTALK detector 110/112 with “1” representing the cross-talk class and “0” representing the single-talk class. The output cXTALK(n) can also be seen as a state variable. It is initialized to 0. The state variable is changed from the current class to the other only in frames where certain conditions are met. The mechanism for cross-talk class switching is similar to the mechanism of class switching on uncorrelated stereo content which has been described in detail above in Section 7.3. However, there are differences for both the LRTD stereo mode and the DFT stereo mode. These differences will be discussed herein after.
In the LRTD stereo mode, the XTALK detector 110 uses the cross-talk switching mechanism as shown in
Finally, the counter cntsw(n) in the current frame n is updated (1107) and the procedure is repeated for the next frame (1108).
The counter cntsw(n) is common to the UNCLR classifier 111 and the XTALK detector 110 and is defined in Relation (97). A positive value of the counter cntsw(n) indicates that switching of the state variable cXTALK(n) (output cXTALK(n) of the XTALK detector 110) is allowed. As can be seen in
In the DFT stereo mode, the XTALK detector 112 comprises an auxiliary parameters calculator (not shown) performing a sub-operation (not shown) of calculating the following auxiliary parameters. Specifically, the cross-talk switching mechanism uses the output wscrXTALK(n) of the XTALK detector 112, and the following auxiliary parameters:
In the DFT stereo mode, the XTALK detector 112 uses the cross-talk switching mechanism as shown in
Finally, the counter cntsw(n) in the current frame n is updated (1220) and the procedure is repeated for the next frame (1221).
The variable cntsw(n) is the counter of frames where it is possible to switch between the LRTD and the DFT stereo modes. This counter cntsw(n) is common to the UNCLR classifier 113 and the XTALK detector 112. The counter cntsw(n) is initialized to zero and updated in each frame according to Relation (97).
The method 150 for coding the stereo sound signal 190 comprises an operation 164 of selecting the LRTD or DFT stereo mode. To perform operation 164, the device 100 for coding the stereo sound signal 190 comprises a LRTD/DFT stereo mode selector 114 receiving, delayed by one frame (191), the XTALK decision from the XTALK detector 110, the UNCLR decision from the UNCLR classifier 111, the XTALK decision from the XTALK detector 112, and the UNCLR decision from the UNCLR classifier 113.
The LRTD/DFT stereo mode selector 114 selects the LRTD or DFT stereo mode based on the binary output cUNCLR(n) of the UNCLR classifier 111/113 and the binary output cXTALK(n) of the XTALK detector 110/112. The LRTD/DFT stereo mode selector 114 also takes into account some auxiliary parameters. These parameters are used mainly to prevent stereo mode switching in perceptually sensitive segments or to prevent frequent switching in segments where both the UNCLR classifier 111/113 and the XTALK detector 110/112 do not provide accurate outputs.
The operation 164 of selecting the LRTD or DFT stereo mode is performed before down-mixing and encoding of the input stereo sound signal 190. As a consequence, the operation 164 uses the outputs from the UNCLR classifier 111/113 and the XTALK detector 110/112 from the previous frame, as shown at 191 in
As will be described in the following description, the DFT/LRTD stereo mode selection mechanism used in operation 164 comprises the following sub-operations:
The DFT stereo mode is the preferred mode for encoding single-talk speech with high inter-channel correlation between the left (L) and right (R) channel of the input stereo sound signal 190.
The LRTD/DFT stereo mode selector 114 starts the initial selection of the stereo mode by determining whether the previous, processed frame was “likely a speech frame”. This can be done, for example, by examining the log-likelihood ratio between the “speech” class and the “music” class. The log-likelihood ratio is defined as the difference between the log-likelihood of the input stereo sound signal frame being generated by a “music” source and the log-likelihood of the input stereo sound signal frame being generated by a “speech” source. The following relation may be used to calculate the log-likelihood ratio:
dLSM(n) = LM(n) − LS(n)   (119)
where LS(n) is the log-likelihood of the “speech” class and LM(n) the log-likelihood of the “music” class.
As an example, a Gaussian Mixture Model (GMM) from the 3GPP EVS codec as described in Reference [7], of which the full content is incorporated herein by reference, can be used for estimating the log-likelihood of the “speech” class, LS(n), and the log-likelihood of the “music” class, LM(n). Other methods of speech/music classification can also be used to calculate the log-likelihood ratio (differential score) dLSM(n).
The log-likelihood ratio dLSM(n) is smoothed with two IIR filters with different forgetting factors using, for example, the following relation:
wdLSM(1)(n) = 0.97·wdLSM(1)(n−1) + 0.03·dLSM(n−1)
wdLSM(2)(n) = 0.995·wdLSM(2)(n−1) + 0.005·dLSM(n−1)   (120)
where the superscripts (1) and (2) indicate the first and the second IIR filter, respectively.
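As a non-limitative illustration, Relation (120) amounts to two one-pole IIR smoothers driven by the previous frame's log-likelihood ratio; the following Python sketch shows the update (function and variable names are illustrative):

```python
def update_speech_music_smoothing(wdl1, wdl2, dLSM_prev):
    """Relation (120): two IIR smoothers of the log-likelihood ratio."""
    wdl1 = 0.97 * wdl1 + 0.03 * dLSM_prev     # first filter, superscript (1)
    wdl2 = 0.995 * wdl2 + 0.005 * dLSM_prev   # second filter, superscript (2)
    return wdl1, wdl2
```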
The smoothed values wdLSM(1)(n) and wdLSM(2)(n) are then compared with predefined thresholds and a new binary flag, fSM(n), is set to 1 if the following combined condition, for example, is met:
The flag fSM(n)=1 is an indicator that the previous frame was likely a speech frame. The threshold of 1.0 has been found experimentally.
The initial DFT/LRTD stereo mode selection mechanism then sets a new binary flag, fUX(n), to 1 if the binary output cUNCLR(n−1) of the UNCLR classifier 111/113 or the binary output cXTALK(n−1) of the XTALK detector 110/112, in the previous frame n−1, are set to 1, and if the previous frame was likely a speech frame. This can be expressed by the following relation:
Let MSMODE(n)∈(LRTD,DFT) be a discrete variable denoting the selected stereo mode in the current frame n. The stereo mode is initialized in each frame with the value from the previous frame n−1, i.e.:
MSMODE(n) = MSMODE(n−1)   (123)
If the flag fUX(n) is set to 1, then the LRTD stereo mode is selected for encoding in the current frame n. This can be expressed as follows:
MSMODE(n) ← LRTD, if fUX(n) = 1   (124)
If the flag fUX(n) is set to 0 in the current frame n and the stereo mode in the previous frame n−1 was the LRTD stereo mode, then an auxiliary stereo mode switching flag fTDM(n−1), to be described herein after, from an LRTD energy analysis processor 1301 of the LRTD/DFT stereo mode selector 114 is analyzed to select the stereo mode in the current frame n, using, for example, the following relation:
The auxiliary stereo mode switching flag fTDM(n) is updated in every frame in the LRTD stereo mode only. The updating of the parameter fTDM(n) is described in the following description.
As shown in
If the flag fUX(n) is set to 0 in the current frame n and the stereo mode in the previous frame n−1 was the DFT stereo mode, no stereo mode switching is performed and the DFT stereo mode is selected in the current frame n as well.
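The three cases above may be sketched in Python as follows; since the relation that tests the auxiliary flag fTDM(n−1) is not reproduced above, the fallback to the DFT stereo mode when that flag is 0 is an assumption consistent with the description, and all names are illustrative:

```python
def initial_stereo_mode(prev_mode, f_UX, f_TDM_prev):
    """Initial DFT/LRTD stereo mode selection, Relations (123)-(124)."""
    mode = prev_mode                 # Relation (123): start from frame n-1
    if f_UX == 1:
        mode = "LRTD"                # Relation (124)
    elif prev_mode == "LRTD" and f_TDM_prev == 0:
        mode = "DFT"                 # assumed use of the auxiliary flag
    # If the previous mode was DFT and fUX(n) = 0, the DFT mode is kept.
    return mode
```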
The XTALK detector 110 in the LRTD mode has been described in the foregoing description. As can be seen from
If the LRTD/DFT stereo mode selector 114 selected the LRTD stereo mode in the previous frame n−1 and the initial stereo mode selection selected the LRTD mode in the current frame n and if, at the same time, the binary output cXTALK(n−1) of the XTALK detector 110 was 1, then the stereo mode may be changed from the LRTD to the DFT stereo mode. The latter change is allowed, for example, when the below-listed conditions are fulfilled:
The set of conditions defined above contains references to the clas and brate parameters. The brate parameter is a high-level constant containing the total bitrate used by the device 100 for coding a stereo sound signal (stereo codec). It is set during the initialization of the stereo codec and kept unchanged during the encoding process.
The clas parameter is a discrete variable containing the information about the frame type. The clas parameter is usually estimated as part of the signal pre-processing of the stereo codec. As a non-limitative example, the clas parameter from the Frame Erasure Concealment (FEC) module of the 3GPP EVS codec as described in Reference [1] can be used in the DFT/LRTD stereo mode selection mechanism. The clas parameter from the FEC module of the 3GPP EVS codec is selected with the frame erasure concealment and decoder recovery strategy in mind. The clas parameter is selected from the following pre-defined set of classes:
It is within the scope of the present disclosure to implement the DFT/LRTD stereo mode selection mechanism with other means of frame type classification.
In the set of conditions (126) defined above, the condition
refers to the clas parameter calculated during pre-processing of the down-mixed mono (M) channel when the device 100 for coding a stereo sound signal runs in the DFT stereo mode.
In case the device 100 for coding a stereo sound signal is in the LRTD stereo mode, the condition shall be replaced with:
where the indices “L” and “R” refer to clas parameter calculated in the preprocessing module of the left (L) channel and the right (R) channel, respectively.
The parameters cLRTD(n) and cDFT(n) are the counters of LRTD and DFT frames, respectively. These counters are updated in every frame as part of the LRTD energy analysis processor 1301. The updating of the two counters cLRTD(n) and cDFT(n) is described in detail in the next section.
When the device 100 for coding a stereo sound signal is run in the LRTD stereo mode, the LRTD/DFT stereo mode selector 114 calculates or updates several auxiliary parameters to improve the stability of the DFT/LRTD stereo mode selection mechanism.
For certain special types of frames, the LRTD stereo mode runs in the so-called “TD sub-mode”. The TD sub-mode is usually applied for short transition periods before switching from the LRTD stereo mode to the DFT stereo mode. Whether or not the LRTD stereo mode will run in the TD sub-mode is indicated by a binary sub-mode flag mTD(n). The binary flag mTD(n) is one of the auxiliary parameters and may be initialized in each frame as follows:
mTD(n) = fTDM(n−1)   (127)
where fTDM(n) is the above mentioned auxiliary switching flag described later on in this section.
The binary sub-mode flag mTD(n) is reset to 0 or 1 in frames where fUX(n)=1. The condition for resetting mTD(n) is defined, for example, as follows:
If fUX(n)=0, the binary sub-mode flag mTD(n) is not changed.
The LRTD energy analysis processor 1301 comprises the above-mentioned two counters, cLRTD(n) and cDFT(n). The counter cLRTD(n) is one of the auxiliary parameters and counts the number of consecutive LRTD frames. This counter is set to 0 in every frame where the DFT stereo mode has been selected in the device 100 for coding a stereo sound signal and is incremented by 1 in every frame where LRTD stereo mode has been selected. This can be expressed as follows:
Essentially, the counter cLRTD(n) contains the number of frames since the last DFT->LRTD switching point. The counter cLRTD(n) is limited by a threshold of 100. The counter cDFT(n) counts the number of consecutive DFT frames. The counter cDFT(n) is one of the auxiliary parameters and is set to 0 in every frame where LRTD stereo mode has been selected in the device 100 for coding a stereo sound signal and is incremented by 1 in every frame where the DFT stereo mode has been selected. This can be expressed as follows:
Essentially, the counter cDFT(n) contains the number of frames since the last LRTD->DFT switching point. The counter cDFT(n) is limited by a threshold of 100.
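As a non-limitative illustration, the two counter updates can be sketched in Python as follows (names illustrative; the cap of 100 follows the text):

```python
def update_mode_counters(c_LRTD, c_DFT, mode):
    """Consecutive-frame counters cLRTD(n) and cDFT(n), both capped at 100."""
    if mode == "LRTD":
        c_LRTD = min(c_LRTD + 1, 100)  # frames since the last DFT->LRTD switch
        c_DFT = 0
    else:
        c_DFT = min(c_DFT + 1, 100)    # frames since the last LRTD->DFT switch
        c_LRTD = 0
    return c_LRTD, c_DFT
```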
The last auxiliary parameter calculated in the LRTD energy analysis processor 1301 is the auxiliary stereo mode switching flag fTDM(n). This parameter is initialized, in every frame, with the binary flag fUX(n) as follows:
fTDM(n) = fUX(n)   (131)
The auxiliary stereo mode switching flag fTDM(n) is set to 0 when the left (L) and right (R) channels of the input stereo sound signal 190 are out-of-phase (OOP). An exemplary method for OOP detection can be found, for example, in Reference [8] of which the full content is incorporated herein by reference. When an OOP situation is detected, a binary flag s2m is set to 1 in the current frame n; otherwise, it is set to zero. The auxiliary stereo mode switching flag fTDM(n) in the LRTD stereo mode is set to zero when the binary flag s2m is set to 1. This can be expressed with Relation (132):
fTDM(n) ← 0, if s2m(n) = 1   (132)
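A minimal Python sketch of the initialization of Relations (131) and (132); the further reset conditions discussed below are not reproduced above and are therefore not covered here, and all names are illustrative:

```python
def init_ftdm(f_UX, s2m):
    """Relations (131)-(132): initialize the auxiliary switching flag."""
    f_TDM = f_UX          # Relation (131): start from the fUX(n) flag
    if s2m == 1:
        f_TDM = 0         # Relation (132): out-of-phase (OOP) channels
    return f_TDM
```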
If the binary flag s2m(n) is set to zero, then the auxiliary switching flag fTDM(n) can be reset to zero based, for example, on the following sets of conditions:
Of course, the DFT/LRTD stereo mode switching mechanism can be implemented with other methods for OOP detection.
The auxiliary stereo mode switching flag fTDM(n) can also be reset to 0 based on the following sets of conditions:
In the two sets of conditions as defined above, the condition
clas(n−1) = UNVOICED_CLAS
refers to the clas parameter calculated during pre-processing of the down-mixed mono (M) channel when the device 100 for coding a stereo sound signal runs in the DFT stereo mode.
In case the device 100 for coding a stereo sound signal is in the LRTD stereo mode, the condition shall be replaced with:
clasL(n−1) = UNVOICED_CLAS AND clasR(n−1) = UNVOICED_CLAS
where the indices “L” and “R” refer to clas parameter calculated during preprocessing of the left (L) channel and the right (R) channel, respectively.
The method 150 for coding a stereo sound signal comprises an operation 115 of core encoding the left channel (L) of the stereo sound signal 190 in the LRTD stereo mode, an operation 116 of core encoding the right channel (R) of the stereo sound signal 190 in the LRTD stereo mode, and an operation 117 of core encoding the down-mixed mono (M) channel of the stereo sound signal 190 in the DFT stereo mode.
To perform operation 115, the device 100 for coding a stereo sound signal comprises a core encoder 115, for example a mono core encoder. To perform operation 116, the device 100 comprises a core encoder 116, for example a mono core encoder. Finally, to perform operation 117, the device 100 for coding a stereo sound signal comprises a core encoder 117 capable of operating in the DFT stereo mode to code the down-mixed mono (M) channel of the stereo sound signal 190.
It is believed to be within the knowledge of one of ordinary skill in the art to select appropriate core encoders 115, 116 and 117. Accordingly, these encoders will not be further described in the present disclosure.
The device 100 for coding a stereo sound signal may be implemented as a part of a mobile terminal, as a part of a portable media player, or in any similar device. The device 100 (identified as 1400 in
The input 1402 is configured to receive the input stereo sound signal 190 of
The processor 1406 is operatively connected to the input 1402, to the output 1404, and to the memory 1408. The processor 1406 is realized as one or more processors for executing code instructions in support of the functions of the various components of the device 100 for coding a stereo sound signal as illustrated in
The memory 1408 may comprise a non-transient memory for storing code instructions executable by the processor(s) 1406, specifically, a processor-readable memory comprising/storing non-transitory instructions that, when executed, cause a processor(s) to implement the operations and components of the method 150 and device 100 for coding a stereo sound signal as described in the present disclosure. The memory 1408 may also comprise a random access memory or buffer(s) to store intermediate processing data from the various functions performed by the processor(s) 1406.
Those of ordinary skill in the art will realize that the description of the device 100 and method 150 for coding a stereo sound signal is illustrative only and is not intended to be in any way limiting. Other embodiments will readily suggest themselves to such persons with ordinary skill in the art having the benefit of the present disclosure. Furthermore, the disclosed device 100 and method 150 for coding a stereo sound signal may be customized to offer valuable solutions to existing needs and problems of encoding and decoding sound.
In the interest of clarity, not all of the routine features of the implementations of the device 100 and method 150 for coding a stereo sound signal are shown and described. It will, of course, be appreciated that in the development of any such actual implementation of the device 100 and method 150 for coding a stereo sound signal, numerous implementation-specific decisions may need to be made in order to achieve the developer's specific goals, such as compliance with application-, system-, network- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. Moreover, it will be appreciated that a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of engineering for those of ordinary skill in the field of sound processing having the benefit of the present disclosure.
In accordance with the present disclosure, the components/processors/modules, processing operations, and/or data structures described herein may be implemented using various types of operating systems, computing platforms, network devices, computer programs, and/or general purpose machines. In addition, those of ordinary skill in the art will recognize that devices of a less general purpose nature, such as hardwired devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like, may also be used. Where a method comprising a series of operations and sub-operations is implemented by a processor, computer or a machine and those operations and sub-operations may be stored as a series of non-transitory code instructions readable by the processor, computer or machine, they may be stored on a tangible and/or non-transient medium.
The device 100 and method 150 for coding a stereo sound signal as described herein may use software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein.
In the device 100 and method 150 for coding a stereo sound signal as described herein, the various operations and sub-operations may be performed in various orders and some of the operations and sub-operations may be optional.
Although the present disclosure has been described hereinabove by way of non-restrictive, illustrative embodiments thereof, these embodiments may be modified at will within the scope of the appended claims without departing from the spirit and nature of the present disclosure.
The present disclosure mentions the following references, of which the full content is incorporated herein by reference: