The present invention is directed at a method of determining a perceptual impact of an amount of echo or reverberation in a degraded audio signal on a perceived quality thereof, wherein the degraded audio signal is received from an audio transmission system, wherein the degraded audio signal is obtained by conveying through said audio transmission system a reference audio signal such as to provide said degraded audio signal, as well as at a computer program product therefor.
During the past decades, objective speech quality measurement methods have been developed and deployed using a perceptual measurement approach. In this approach, a perception-based algorithm simulates the behaviour of a subject that rates the quality of an audio fragment in a listening test. For speech quality one mostly uses the so-called absolute category rating listening test, where subjects judge the quality of a degraded speech fragment without having access to the clean reference speech fragment. Listening tests carried out within the International Telecommunication Union (ITU) mostly use an absolute category rating (ACR) 5-point opinion scale, which is consequently also used in the objective speech quality measurement methods that were standardized by the ITU, the Perceptual Speech Quality Measure (PSQM (ITU-T Rec. P.861, 1996)), and its follow-up Perceptual Evaluation of Speech Quality (PESQ (ITU-T Rec. P.862, 2000)). The focus of these measurement standards is on narrowband speech quality (audio bandwidth 100-3500 Hz), although a wideband extension (50-7000 Hz) was devised in 2005. PESQ provides very good correlations with subjective listening tests on narrowband speech data and acceptable correlations for wideband data.
As new wideband voice services are being rolled out by the telecommunication industry, the need emerged for an advanced measurement standard of verified performance, capable of handling higher audio bandwidths. Therefore ITU-T (ITU-Telecom sector) Study Group 12 initiated the standardization of a new speech quality assessment algorithm as a technology update of PESQ. The new, third generation, measurement standard, POLQA (Perceptual Objective Listening Quality Assessment), overcomes shortcomings of the PESQ P.862 standard such as incorrect assessment of the impact of linear frequency response distortions, time stretching/compression as found in Voice-over-IP, certain types of codec distortions and reverberations.
POLQA (P.863) provides a number of improvements over the former quality assessment algorithms PSQM (P.861) and PESQ (P.862), and the present versions of POLQA also address a number of improvements such as correct assessment of the impact of linear frequency response distortions, time stretching/compression as found in Voice-over-IP, certain types of codec distortions, reverberations and the impact of playback level.
One of the factors that affects the perceived speech and sound quality is the presence of echoes and reverberations in an audio signal, the latter being superpositions of echoes. A determination of an amount of reverberation or echo may for example be achieved by performing an autocorrelation of a digitized audio signal to estimate an energy time curve. When both the reference and degraded signals are available, as in the case of POLQA, the energy time curve can be determined from the estimated transfer function of the system under test. This latter approach is used in POLQA; however, the accuracy of the obtained estimate is affected by the length of the signal, and the presence of some types of noise, pulses or time shift distortions, resulting in inaccuracy of the determination of the perceptual impact of the amount of reverberation on the perceived audio quality.
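By way of illustration only, the following sketch (in Python with NumPy; the function name and the normalization choice are assumptions made for the example) indicates how an energy time curve may be roughly estimated from the autocorrelation of a digitized audio signal when only a single signal is available:

```python
import numpy as np

def etc_from_autocorrelation(signal: np.ndarray) -> np.ndarray:
    """Rough energy time curve estimate from the autocorrelation of a signal.

    Illustrative sketch only: the absolute autocorrelation is used as a stand-in
    for the envelope of the impulse response, normalized to the zero-lag peak.
    """
    # Full autocorrelation, keeping only the non-negative lags.
    acf = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    acf = np.abs(acf)
    # Normalize so that the zero-lag (direct sound) value equals 1.
    return acf / acf[0] if acf[0] > 0 else acf
```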
It is an object of the present invention to obviate the abovementioned disadvantages and to provide a method for accurately estimating the perceptual impact of reverberation in an audio signal on the perceived quality of that audio signal.
To this end, there is provided herewith a method of determining a perceptual impact of an amount of echo or reverberation in a degraded audio signal on a perceived quality thereof, wherein the degraded audio signal is received from an audio transmission system, wherein the degraded audio signal is obtained by conveying through said audio transmission system a reference audio signal such as to provide said degraded audio signal, the method comprising the steps of: obtaining, by a controller, at least one degraded digital audio sample from the degraded audio signal and at least one reference digital audio sample from the reference audio signal; determining, by the controller, based on the at least one degraded audio sample and the at least one reference audio sample, a local impulse response signal; determining, by the controller, an energy time curve based on the impulse response signal, wherein the energy time curve is proportional to a square root of an absolute value of the impulse response signal; and identifying one or more peaks in the energy time curve, the one or more peaks in time occurring at a delay in the energy time curve after an onset of the energy time curve based on the impulse response, and determining an estimate of the amount of echo or reverberation based on an amount of energy in the one or more peaks; wherein the step of obtaining the at least one degraded digital audio sample comprises a step of sampling the degraded audio signal in a time domain fraction, the sampling including performing a windowing operation on the degraded audio signal by multiplying the degraded audio signal with a window function such as to yield the degraded digital audio sample; and wherein the step of obtaining the at least one reference digital audio sample comprises a step of sampling the reference audio signal in a time domain fraction, the sampling including performing a windowing operation on the reference audio signal by multiplying the reference audio signal with the window function such as to yield the reference digital audio sample; wherein the window function, used for obtaining the at least one reference digital audio sample and the at least one degraded digital audio sample, has a non-zero value in the time domain fraction to be sampled and a zero value outside said time domain fraction.
The present invention is based on the insight that many disturbances in the signal have an influence on the correct determination or estimation of the perceptual impact of the amount of reverberation. These disturbances include different types of noise, different types of pulse distortions and different types of time shift distortions, some of which on an overall or global level impair the determination of the amount of reverberation, and some of which are mainly detrimental or present on a local level. The invention, by performing the windowing of the degraded signal and the reference signal prior to determining the amount of reverberation, makes it possible to overcome this problem. For example, a set of perceptual reverberation impact parameters may be calculated from a frame or from a sequential set of frames that may make up an audio sample (by windowing) of the degraded and reference audio signal. Firstly, the use of windowing enables the calculation of local estimates of reverberation, which may be taken into account in the final reverb estimation. Secondly, the use of windowing enables local compensation and local optimization of processing parameters. The latter may even be done dependent on the duration of the time domain fraction of the sample, or its relative location within the complete signal (or part concerned). Hence, due to the windowing operation, the method of the present invention provides a more accurate estimate of the amount of reverberation or echo. This may be applied in many different kinds of sound processing and evaluation methods. However, it has significant relevance in the assessment of quality or intelligibility of degraded speech signals, such as with the POLQA methods described hereinabove, which application therefore provides a preferred embodiment of the method.
The step of obtaining the at least one digital audio sample preferably includes obtaining a plurality of digital audio samples from the audio signal, by sampling the audio signal in the time domain fraction using the step of performing the windowing operation described above. The time domain fractions of at least two sequential digital audio samples of the plurality of digital audio samples may in this case be overlapping. For example, an overlap between said at least two sequential digital audio samples is within a range of 10% to 90% overlap between the time domain fractions, preferably within a range of 25% to 75% overlap, more preferably within a range of 40% to 60% overlap, for example 50% overlap. This may be dependent on a type of window function applied, for example as part of an optimization.
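By way of example only, such overlapping windowed sampling may be sketched as follows (Python/NumPy; a Hamming window and a 50% overlap are chosen here merely as one of the possibilities mentioned above):

```python
import numpy as np

def windowed_samples(signal: np.ndarray, sample_len: int, overlap: float = 0.5):
    """Cut a signal into overlapping digital audio samples, each multiplied by a window.

    `sample_len` is the length of the time domain fraction in samples and `overlap`
    the fractional overlap between successive samples (e.g. 0.5 for 50% overlap).
    """
    window = np.hamming(sample_len)           # non-zero inside the time domain fraction only
    hop = max(1, int(sample_len * (1.0 - overlap)))
    samples = []
    for start in range(0, len(signal) - sample_len + 1, hop):
        samples.append(signal[start:start + sample_len] * window)
    return samples
```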
The window function may, in some embodiments, be at least one of a group comprising: a Hamming window, a von Hann window, a Tukey window, a cosine window, a rectangular window, a B-spline window, a triangular window, a Bartlett window, a Parzen window, a Welch window, an nth power-of-cosine window wherein n>1, a Kaiser window, a Nuttall window, a Blackman window, a Blackman-Harris window, a Blackman-Nuttall window, or a Flattop window. The invention is not limited to a particular type of window function, and may be applied using different window functions than the ones mentioned here. Even new, optimized window functions may be developed that may be of use in the method of the present invention, without departing from the inventive concept of the invention.
For determining the estimate of the amount of reverberation, in some embodiments, the invention may include a weighing of the amount of energy in each peak of the energy time curve, based on the magnitude of each peak and/or its (relative) delay position on the time axis. This is based on the insight that the peak with the largest magnitude typically has a significant impact on the perceived level of reverberation and how it may hamper intelligibility or quality of speech or sound.
In some preferred embodiments, the method additionally comprises the steps of: obtaining, by the controller, a digital signal representing at least a part of the audio signal and having a duration longer than the time domain fraction of the at least one digital audio sample; performing, by the controller, an autocorrelation operation on the digital signal such as to yield an overall impulse response signal; determining, by the controller, an overall energy time curve based on the overall impulse response signal, wherein the energy time curve is proportional to a square root of the overall impulse response signal; and identifying one or more further peaks in the energy time curve, the one or more further peaks in time occurring at a delay in the energy time curve after an onset of the energy time curve based on the overall impulse response signal, and determining a further estimate of the amount of echo or reverberation based on an amount of energy in the one or more further peaks.
The above described preferred embodiments provide a way of correctly including and compensating for both local and global disturbances, i.e. disturbances that have a local impact on the level of reverberation and disturbances that impair the estimate on a more global, overall level of the sound signal (or signal part). Furthermore, as in the above locally applied reverb estimation methods, the step of determining the further estimate of the amount of reverberation on a global or overall level may likewise include a weighing of the amount of energy in each peak based on the magnitude of each peak.
In other or further embodiments, the method may further comprise at least one of the steps of: calculating, by the controller, a partial reverb indicator value based on the estimated amount of echo or reverberation; calculating, by the controller, a global reverb indicator value based on the further estimated amount of echo or reverberation; or calculating, by the controller, a final reverb indicator value based on the estimate and the further estimate of the amount of echo or reverberation.
Furthermore, in the abovementioned methods, the step of determining the (local or global) impulse response signal based on the audio samples or, where so stated, the digital signals, comprises the steps of: converting, by the controller, the audio samples or the digital signals from a time domain into a frequency domain by applying a Fourier transform to the audio samples or digital signals; determining, by the controller, a transfer function from a power spectrum signal from the audio samples or the digital signals in the frequency domain; and converting, by the controller, the power spectrum signal from the frequency domain into the time domain such as to yield the local impulse response signal or the global impulse response signal.
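A minimal sketch of this route from a pair of time-aligned reference and degraded samples to a local impulse response is given below (Python/NumPy; the regularized spectral division used here to estimate the transfer function is an assumption made for the example and not the normative computation):

```python
import numpy as np

def local_impulse_response(ref: np.ndarray, deg: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Estimate an impulse response from time-aligned reference and degraded samples.

    The samples are converted to the frequency domain, a transfer function is estimated
    from the spectra, and the result is converted back to the time domain.
    """
    n = len(ref)
    ref_f = np.fft.rfft(ref, n)
    deg_f = np.fft.rfft(deg, n)
    # Regularized spectral division as a simple transfer function estimate (assumption).
    h_f = deg_f * np.conj(ref_f) / (np.abs(ref_f) ** 2 + eps)
    return np.fft.irfft(h_f, n)
```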
In preferred embodiments, the invention provides a method of evaluating quality or intelligibility of a degraded speech signal received from an audio transmission system, by conveying through said audio transmission system a reference speech signal such as to provide said degraded speech signal, wherein the method comprises:—sampling said reference speech signal into a plurality of reference signal frames, sampling said degraded speech signal into a plurality of degraded signal frames, and forming frame pairs by associating said reference signal frames and said degraded signal frames with each other;—providing for each frame pair a difference function representing a difference between said degraded signal frame and said associated reference signal frame;—compensating said difference function for one or more disturbance types such as to provide for each frame pair a disturbance density function which is adapted to a human auditory perception model;—deriving from said disturbance density functions of a plurality of frame pairs an overall quality parameter, said quality parameter being at least indicative of said quality or intelligibility of said degraded speech signal; wherein the method further comprises the steps of:—determining an amount of reverberation in at least one of the degraded speech signal and the reference speech signal, wherein said amount of reverberation is determined by applying a method as described in accordance with any of the embodiments above.
In the above described class of embodiments, the method in accordance with the present invention has been applied in a method for determining the quality or intelligibility of a degraded speech signal. The method of determining an estimate of an amount of reverberation, in accordance with the invention, is in particular useful in this method of evaluating quality or intelligibility due to the fact that the presence of reverberation significantly influences the perceived quality or intelligibility.
In some of the above embodiments, the step of obtaining, by the controller, the at least one digital audio sample may be performed by forming the audio sample from a plurality of consecutive signal frames, the signal frames including one or more of the degraded signal frames or one or more of the reference signal frames. For example, the number of signal frames to be included in the plurality of signal frames may be dependent on the duration of the time domain fraction of the at least one digital audio sample, wherein the duration is larger than 0.3 seconds, preferably between 0.4 seconds and 5.0 seconds, such as at least one of: 0.5 seconds, 1.0 seconds, 1.5 seconds, 2.0 seconds, 2.5 seconds, 3.0 seconds, 3.5 seconds, 4.0 seconds, 4.5 seconds, or 5.0 seconds. In some applications, such as for example POLQA, a single frame would typically be too short to be significant for determining an amount of reverb, but audio signal fractions that are shorter than one second may be long enough to be analyzed for providing a local estimation of the amount of reverberation.
Therefore, in some embodiments, a first estimate of the amount of reverberation is obtained by performing a local estimation using digital audio samples of e.g. 0.5 seconds, wherein one or more second estimates are obtained for each of a plurality of digital audio samples formed of a plurality of consecutive signal frames providing a longer duration audio signal, and wherein a reverb indicator value is calculated based on the first estimate and at least one of the second estimates.
In some embodiments, for each frame pair, the step of compensating is performed by setting the determined amount of reverberation in the at least one of the degraded speech signal and the reference speech signal as one of said one or more disturbance types, and compensating each frame pair for the amount of reverberation associated with the respective frame pair based on said forming of the digital audio sample. Here the reverberation estimates may be taken into account on a local level, associated with the frame pairs. These are the frame pairs of those frames that make up the degraded signal samples.
In some embodiments, the method further comprises, prior to the step of determining the impulse response signal, a step of noise suppression, the noise suppression comprising the steps of: performing a first scaling of at least one of the degraded speech signal or the reference speech signal such as to obtain a similar average volume; processing the degraded speech signal for removing local signal peaks therefrom; and performing a second scaling of at least one of the degraded speech signal or the reference speech signal such as to obtain a similar average volume.
Furthermore, in the above, for assessment of the quality or intelligibility of speech or sound signals, the method may well be limited to a lower frequency range, i.e. a range of interest that is relevant to the speech or sound signal. For example, the method may be performed on the audio signal within a predetermined frequency range, such as a frequency range below a threshold frequency or a frequency range corresponding with speech signals, for example a frequency range below 5 kHz, preferably a frequency range between 200 Hz and 4 kHz for speech signals, or frequencies up to 20 kHz for other sound signals.
The invention will further be elucidated by description of some specific embodiments thereof, making reference to the attached drawings. The detailed description provides examples of possible implementations of the invention, but is not to be regarded as describing the only embodiments falling under the scope. The scope of the invention is defined in the claims, and the description is to be regarded as illustrative without being restrictive on the invention. In the drawings:
POLQA Perceptual Model
The basic approach of POLQA (ITU-T rec. P.863) is the same as used in PESQ (ITU-T rec. P.862), i.e. a reference input and degraded output speech signal are mapped onto an internal representation using a model of human perception. The difference between the two internal representations is used by a cognitive model to predict the perceived speech quality of the degraded signal. An important new idea implemented in POLQA is the idealisation approach which removes low levels of noise in the reference input signal and optimizes the timbre. Further major changes in the perceptual model include the modelling of the impact of playback level on the perceived quality and a major split in the processing of low and high levels of distortion.
An overview of the perceptual model used in POLQA is given in
For degraded output signals with frequency domain warping 49, an align algorithm 52 is used, as given in
POLQA starts with the calculation of some basic constant settings after which the pitch power densities (power as a function of time and frequency) of reference and degraded are derived from the time and frequency aligned time signals. From the pitch power densities the internal representations of reference and degraded are derived in a number of steps. Furthermore these densities are also used to derive 40 the first three POLQA quality indicators for frequency response distortions 41 (FREQ), additive noise 42 (NOISE) and room reverberations 43 (REVERB). These three quality indicators 41, 42 and 43 are calculated separately from the main disturbance indicator in order to allow a balanced impact analysis over a large range of different distortion types. These indicators can also be used for a more detailed analysis of the type of degradations that were found in the speech signal using a degradation decomposition approach.
As stated, four different variants of the internal representations of reference and degraded are calculated in 7, 8, 9 and 10; two variants focussed on the disturbances for normal and big distortions, and two focussed on the added disturbances for normal and big distortions. These four different variants 7, 8, 9 and 10 are the inputs to the calculation of the final disturbance densities.
The internal representations of the reference 3 are referred to as ideal representations because low levels of noise in the reference are removed (step 33) and timbre distortions as found in the degraded signal that may have resulted from a non-optimal timbre of the original reference recordings are partially compensated for (step 35).
The four different variants of the ideal and degraded internal representations calculated using operators 7, 8, 9 and 10 are used to calculate two final disturbance densities 142 and 143, one representing the final disturbance 142 as a function of time and frequency focussed on the overall degradation and one representing the final disturbance 143 as a function of time and frequency but focussed on the processing of added degradation.
Pre-Computation of Constant Settings
FFT Window Size Depending on the Sample Frequency
POLQA operates on three different sample rates, 8, 16, and 48 kHz, for which the window size W is set to respectively 256, 512 and 2048 samples in order to match the time analysis window of the human auditory system. The overlap between successive frames is 50% using a Hann window. The power spectra—the sum of the squared real and squared imaginary parts of the complex FFT components—are stored in separate real valued arrays for both the reference and the degraded signal. Phase information within a single frame is discarded in POLQA and all calculations are based on the power representations only.
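As an illustration of the above (a sketch only, not the normative POLQA implementation), the window sizes and the power spectrum computation may be expressed as follows:

```python
import numpy as np

# Window size W per sample rate, matching the time analysis window of the auditory system.
WINDOW_SIZE = {8000: 256, 16000: 512, 48000: 2048}

def power_spectra(signal: np.ndarray, fs: int) -> np.ndarray:
    """Power spectra of 50% overlapping Hann-windowed frames; phase is discarded."""
    w = WINDOW_SIZE[fs]
    hann = np.hanning(w)
    hop = w // 2                                     # 50% overlap between successive frames
    frames = []
    for start in range(0, len(signal) - w + 1, hop):
        spectrum = np.fft.rfft(signal[start:start + w] * hann)
        # Sum of squared real and squared imaginary parts of the complex FFT components.
        frames.append(spectrum.real ** 2 + spectrum.imag ** 2)
    return np.array(frames)
```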
Start Stop Point Calculation
In subjective tests, noise will usually start before the beginning of the speech activity in the reference signal. However, one can expect that leading steady state noise in a subjective test decreases the impact of steady state noise while in objective measurements that take into account leading noise it will increase the impact; therefore it is expected that omission of leading and trailing noises is the correct perceptual approach. Therefore, after having verified the expectation in the available training data, the start and stop points used in the POLQA processing are calculated from the beginning and end of the reference file. The sum of five successive absolute sample values (using the normal 16-bit PCM range of −32,000 to +32,000) must exceed 500 from the beginning and end of the original speech file in order for that position to be designated as the start or end. The interval between this start and end is defined as the active processing interval. Distortions outside this interval are ignored in the POLQA processing.
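An illustrative sketch of this start/stop point calculation (Python/NumPy; the handling of a file without any detected activity is an assumption made for the example):

```python
import numpy as np

def active_interval(samples: np.ndarray, threshold: float = 500.0, run: int = 5):
    """Start and stop of the active processing interval of a 16-bit PCM reference signal.

    The start (stop) is the first position from the beginning (end) of the file where
    the sum of `run` successive absolute sample values exceeds `threshold`.
    """
    abs_sums = np.convolve(np.abs(samples), np.ones(run), mode="valid")
    above = np.nonzero(abs_sums > threshold)[0]
    if above.size == 0:
        return 0, len(samples)        # no activity found; process the whole file (assumption)
    start = int(above[0])
    stop = int(above[-1]) + run       # include the last qualifying run of samples
    return start, stop
```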
The Power and Loudness Scaling Factor SP and SL
For calibration of the FFT time to frequency transformation a sine wave with a frequency of 1000 Hz and an amplitude of 40 dB SPL is generated, using a reference signal X(t) calibration towards 73 dB SPL. This sine wave is transformed to the frequency domain using a windowed FFT in steps 18 and 49 with a length determined by the sampling frequency for X(t) and Y(t) respectively. After converting the frequency axis to the Bark scale in 21 and 54 the peak amplitude of the resulting pitch power density is then normalized to a power value of 10^4 by multiplication with a power scaling factor SP 20 and 55 for X(t) and Y(t) respectively.
The same 40 dB SPL reference tone is used to calibrate the psychoacoustic (Sone) loudness scale. After warping the intensity axis to a loudness scale using Zwicker's law the integral of the loudness density over the Bark frequency scale is normalized in 30 and 58 to 1 Sone using the loudness scaling factor SL 31 and 59 for X(t) and Y(t) respectively.
Scaling and Calculation of the Pitch Power Densities
The degraded signal Y(t) 5 is multiplied 46 by the calibration factor C 47, that takes care of the mapping from dB overload in the digital domain to dB SPL in the acoustic domain, and then transformed 49 to the time-frequency domain with 50% overlapping FFT frames. The reference signal X(t) 3 is scaled 17 towards a predefined fixed optimal level of about 73 dB SPL equivalent before it is transformed 18 to the time-frequency domain. This calibration procedure is fundamentally different from the one used in PESQ where both the degraded and reference are scaled towards a predefined fixed optimal level. PESQ presupposes that all play-out is carried out at the same optimal playback level, while in the POLQA subjective tests levels between −20 dB and +6 dB relative to the optimal level are used. In the POLQA perceptual model one can thus not use a scaling towards a predefined fixed optimal level.
After the level scaling the reference and degraded signal are transformed 18, 49 to the time-frequency domain using the windowed FFT approach. For files where the frequency axis of the degraded signal is warped when compared to the reference signal a dewarping in the frequency domain is carried out on the FFT frames. In the first step of this dewarping both the reference and degraded FFT power spectra are preprocessed to reduce the influence of both very narrow frequency response distortions, as well as overall spectral shape differences on the following calculations. The preprocessing 77 may consist of smoothing, compressing and flattening the power spectrum. The smoothing operation is performed using a sliding window average in 78 of the powers over the FFT bands, while the compression is done by simply taking the logarithm 79 of the smoothed power in each band. The overall shape of the power spectrum is further flattened by performing sliding window normalization in 80 of the smoothed log powers over the FFT bands. Next the pitches of the current reference and degraded frame are computed using a stochastic subharmonic pitch algorithm. The ratio 74 of the reference pitch to the degraded pitch is then used to determine (in step 84) a range of possible warping factors. If possible, this search range is extended by using the pitch ratios for the preceding and following frame pair.
The frequency align algorithm then iterates through the search range and warps 85 the degraded power spectrum with the warping factor of the current iteration, and processes 88 the warped power spectrum using the preprocessing 77 described above. The correlation of the processed reference and processed warped degraded spectrum is then computed (in step 89) for bins below 1500 Hz. After complete iteration through the search range, the “best” (i.e. that resulted in the highest correlation) warping factor is retrieved in step 90. The correlation of the processed reference and best warped degraded spectra is then compared against the correlation of the original processed reference and degraded spectra. The “best” warping factor is then kept 97 if the correlation increases by a set threshold. If necessary, the warping factor is limited in 98 by a maximum relative change to the warping factor determined for the previous frame pair.
After the dewarping that may be necessary for aligning the frequency axis of reference and degraded, the frequency scale in Hz is warped in steps 21 and 54 towards the pitch scale in Bark reflecting that at low frequencies, the human hearing system has a finer frequency resolution than at high frequencies. This is implemented by binning FFT bands and summing the corresponding powers of the FFT bands with a normalization of the summed parts. The warping function that maps the frequency scale in Hertz to the pitch scale in Bark approximates the values given in the literature for this purpose, and known to the skilled reader. The resulting reference and degraded signals are known as the pitch power densities PPX(f)n (not indicated in
Computation of the Speech Active, Silent and Super Silent Frames (Step 25)
POLQA operates on three classes of frames, which are distinguished in step 25:
Calculation of the Frequency, Noise and Reverb Indicators
The global impact of frequency response distortions, noise and room reverberations is separately quantified in step 40. For the impact of overall global frequency response distortions, an indicator 41 is calculated from the average spectra of reference and degraded signals. In order to make the estimate of the impact for frequency response distortions independent of additive noise, the average noise spectrum density of the degraded over the silent frames of the reference signal is subtracted from the pitch loudness density of the degraded signal. The resulting pitch loudness density of the degraded and the pitch loudness density of the reference are then averaged in each Bark band over all speech active frames for the reference and degraded file. The difference in pitch loudness density between these two densities is then integrated over the pitch to derive the indicator 41 for quantifying the impact of frequency response distortions (FREQ).
For the impact of additive noise, an indicator 42 may be calculated from the average spectrum of the degraded signal over the silent frames of the reference signal. The difference between the average pitch loudness density of the degraded over the silent frames and a zero reference pitch loudness density determines a noise loudness density function that quantifies the impact of additive noise. This noise loudness density function is then integrated over the pitch to derive an average noise impact indicator 42 (NOISE). This indicator 42 is calculated from an ideal silence, so that a transparent chain that is measured using a noisy reference signal will thus not provide the maximum MOS score in the final POLQA end-to-end speech quality measurement.
For the impact of room reverberations, the energy over time function (ETC) is calculated from the reference and degraded time series. The ETC represents the envelope of the impulse response h(t) of the system H(f), which is defined as Ya(f)=H(f)·X(f), where Ya(f) is the spectrum of a level aligned representation of the degraded signal and X(f) the spectrum of the reference signal. The level alignment (noise suppression) is carried out to suppress global and local gain differences between the reference and degraded signal. This is carried out by a first step of scaling, e.g. of the degraded speech signal (or the reference signal or both), followed by a smoothing step in which peaks or spikes in the degraded signal are removed or suppressed. Thereafter, a second scaling step is performed to level the volumes in both signals, in order to finalize the level alignment. The impulse response h(t) is calculated from H(f) using the inverse discrete Fourier transform. The ETC is calculated from the absolute values of h(t) through normalization and clipping.
An example of a windowing operation on a speech signal using Hamming windows is schematically illustrated in
In accordance with some embodiments of the present invention, the reverb indicator 43 to be calculated may be based on both the global or overall reference and degraded speech signals 3 and 5, as well as a plurality of local samples 309 and 310 thereof. To calculate a global estimate, the global or overall reference and degraded speech signals 3 and 5 may be considered as a whole or may be divided into long duration signal parts (e.g. of any suitable duration, such as >5 seconds, or >10 seconds). The short local samples 309 and 310 may be obtained by performing windowing operations 320a and 320b on the reference and degraded speech signals 3 and 5 or their long duration signal parts, or by integration or combining of a plurality of signal frames from the reference signal X(t) 3 and degraded signal Y(t) 5. For example, the short local samples 309 and 310 may include sound fractions having a duration (herein occasionally referred to as time domain fraction 305) of for example 0.5 or 1.0 seconds. Smaller fractions may provide too little information on reverberation. The short duration local fractions 309 and 310 that are obtained using the windowing operation 320 (i.e. 320a and 320b) have for example been obtained by applying Hamming windows 300 that have a 50% overlap with each other. The short duration local samples 309 and 310 are formed by multiplying the reference and degraded speech signals 3 and 5 with the windowing function 300 applied (e.g. a Hamming window function). For an optimal determination of the local reverberation indicators, a weighting factor may be used that gives a lower weight to degraded samples earlier in the window if the speech in the corresponding reference samples is below a threshold, indicating a perceptually silent interval. This weighing is performed in 321a and 321b. Thereafter, a fast Fourier transform (FFT) is performed in steps 322a and 322b on the samples 309 and 310. The global reference and degraded speech signals 3 and 5 are processed by performing in steps 340a and 340b a fast Fourier transform (FFT) on the reference and degraded digital signals 3 and 5. The FFT in steps 322a/b and 340a/b may be performed over a part of the frequency range (e.g. below 5 kHz or between 200 Hz and 4 kHz) that contains the speech signal contributions.
In steps 324 and 342, the transfer functions H(f) are calculated from the transformed signals in the frequency domain. The impulse response signals, in steps 326 and 344, are obtained by inverse FFT, from which the ETCs can be calculated in steps 328 and 346. The ETC is determined in steps 328 and 346 on both these long duration signal parts (or the whole reference and degraded signals) 3 and 5 and on the short duration local samples 309 and 310 in the manner described above. In each of the ETCs, one or more peaks are identified in steps 330 and 348, which peaks in time occur delayed after an onset of the energy time curve based on the impulse response. For example, the three largest peaks may be determined occurring at least 60 milliseconds after the onset of the curve. The energy in these peaks is determined, and used in combination with their delay position on the time axis to calculate the local and global reverb indicators in steps 332 and 350. For both the local samples and the global parts, a partial and global reverb indicator may be calculated in steps 332 and 350, which may be combined in step 360 to yield a good estimation of the reverb indicator 43 to be used.
Based on the ETCs of the global parts and local samples, multiple reflections may be searched in each ETC in steps 330 and 348. In a first step the loudest reflection is calculated by simply determining the maximum value of the ETC curve after the direct sound. In the POLQA model direct sound is defined as all sounds that arrive within 60 ms. Next, the second loudest reflection is determined over the interval without the direct sound and without taking into account reflections that arrive within 100 ms from the loudest reflection. Then the third loudest reflection is determined over the interval without the direct sound and without taking into account reflections that arrive within 100 ms from the loudest and second loudest reflections. The energies and delays of the three loudest reflections are then combined to form the partial and global reverb indicator values, which may thereafter be combined into a single reverb indicator 43 (REVERB).
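The reflection search described above may be sketched as follows (Python/NumPy; the representation of the ETC as an array sampled at rate `fs` and the zeroing-out of exclusion zones are assumptions made for the example):

```python
import numpy as np

def loudest_reflections(etc: np.ndarray, fs: int, n_peaks: int = 3,
                        direct_ms: float = 60.0, exclude_ms: float = 100.0):
    """Delays and energies of the loudest reflections in an energy time curve (ETC).

    Sounds arriving within `direct_ms` of the onset count as direct sound and are
    skipped; around each found reflection an exclusion zone of `exclude_ms` is applied
    before searching for the next loudest reflection.
    """
    curve = etc.astype(float).copy()
    curve[: int(direct_ms * fs / 1000)] = 0.0            # remove the direct sound
    exclude = int(exclude_ms * fs / 1000)
    reflections = []
    for _ in range(n_peaks):
        idx = int(np.argmax(curve))
        if curve[idx] <= 0.0:
            break                                         # no further reflections found
        reflections.append((idx / fs, float(curve[idx])))  # (delay in seconds, energy)
        lo, hi = max(0, idx - exclude), min(len(curve), idx + exclude)
        curve[lo:hi] = 0.0                                # exclusion zone around this peak
    return reflections
```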
Optionally, in the calculation of the reverb indicator 43, only reverb estimates that are within a single standard deviation from the average of the partial reverb estimates may be taken into account. These may then be weighted in a particular manner. In a computer program product developed for implementing the method described herein, this may for example be implemented as follows:
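The actual program code is not reproduced here; the following is merely an illustrative sketch (Python/NumPy) of such an implementation, in which the one-standard-deviation selection follows the description above while the final combination with the global estimate, via the assumed parameter `local_weight`, is only one example of weighting in a particular manner:

```python
import numpy as np

def combine_partial_reverb(partial_estimates, global_estimate, local_weight=0.5):
    """Combine partial (local) reverb estimates with a global reverb estimate.

    Only partial estimates within a single standard deviation from their average are
    kept; the surviving estimates are averaged and then mixed with the global estimate
    using the assumed weighting factor `local_weight`.
    """
    est = np.asarray(partial_estimates, dtype=float)
    mean, std = est.mean(), est.std()
    kept = est[np.abs(est - mean) <= std]
    local = kept.mean() if kept.size else mean
    return local_weight * local + (1.0 - local_weight) * global_estimate
```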
As an alternative to the above, the reverb indicator may also be estimated based on the short duration local samples only, which already provides an improvement over conventional ways of estimating an amount of reverberation in a signal.
Global and Local Scaling of the Reference Signal Towards the Degraded Signal (step 26)
The reference signal is now in accordance with step 17 at the internal ideal level, i.e. about 73 dB SPL equivalent, while the degraded signal is represented at a level that coincides with the playback level as a result of 46. Before a comparison is made between the reference and degraded signal the global level difference is compensated in step 26. Furthermore small changes in local level are partially compensated to account for the fact that small enough level variations are not noticeable to subjects in a listening-only situation. The global level equalization 26 is carried out on the basis of the average power of reference and degraded signal using the frequency components between 400 and 3500 Hz. The reference signal is globally scaled towards the degraded signal and the impact of the global playback level difference is thus maintained at this stage of processing. Similarly, for slowly varying gain distortions a local scaling is carried out for level changes up to about 3 dB using the full bandwidth of both the reference and degraded speech file.
Partial Compensation of the Original Pitch Power Density for Linear Frequency Response Distortions (step 27)
In order to correctly model the impact of linear frequency response distortions, induced by filtering in the system under test, a partial compensation approach is used in step 27. To model the imperceptibility of moderate linear frequency response distortions in the subjective tests, the reference signal is partially filtered with the transfer characteristics of the system under test. This is carried out by calculating the average power spectrum of the original and degraded pitch power densities over all speech active frames. Per Bark bin, a partial compensation factor is calculated 27 from the ratio of the degraded spectrum to the original spectrum.
Modelling of Masking Effects, Calculation of the Pitch Loudness Density Excitation
Masking is modelled in steps 30 and 58 by calculating a smeared representation of the pitch power densities. Both time and frequency domain smearing are taken into account in accordance with the principles illustrated in
with SL the loudness scaling factor, P0(f) the absolute hearing threshold, fB and Pfn a frequency and level dependent correction defined by:
fB = −0.03*f + 1.06 for f < 2.0 Bark
fB = 1.0 for 2.0 ≤ f ≤ 22 Bark
fB = −0.2*(f − 22.0) + 1.0 for f > 22.0 Bark
Pfn = (PPX(f)n + 600)^0.008
with f representing the frequency in Bark, PPX(f)n the pitch power density in frequency time cell f, n. The resulting two dimensional arrays LX(f)n and LY(f)n are called pitch loudness densities, at the output of step 30 for the reference signal X(t) and step 58 for the degraded signal Y(t) respectively.
Global Low Level Noise Suppression in Reference and Degraded Signals
Low levels of noise in the reference signal, which are not affected by the system under test (e.g., a transparent system), will be attributed to the system under test by subjects due to the absolute category rating test procedure. These low levels of noise thus have to be suppressed in the calculation of the internal representation of the reference signal. This "idealization process" is carried out in step 33 by calculating the average steady state noise loudness density of the reference signal LX(f)n over the super silent frames as a function of pitch. This average noise loudness density is then partially subtracted from all pitch loudness density frames of the reference signal. The result is an idealized internal representation of the reference signal, at the output of step 33.
Steady state noise that is audible in the degraded signal has a lower impact than non-steady state noise. This holds for all levels of noise and the impact of this effect can be modelled by partially removing steady state noise from the degraded signal. This is carried out in step 60 by calculating the average steady state noise loudness density of the degraded signal LY(f)n over the frames for which the corresponding frame of the reference signal is classified as super silent, as a function of pitch. This average noise loudness density is then partially subtracted from all pitch loudness density frames of the degraded signal. The partial compensation uses a different strategy for low and high levels of noise. For low levels of noise the compensation is only marginal while the suppression that is used becomes more aggressive for loud additive noise. The result is an internal representation 61 of the degraded signal with an additive noise that is adapted to the subjective impact as observed in listening tests using an idealized noise free representation of the reference signal.
In step 33 above, in addition to performing the global low level noise suppression, also the LOUDNESS indicator 32 is determined for each of the reference signal frames. The LOUDNESS indicator or LOUDNESS value may be used to determine a loudness dependent weighting factor for weighing specific types of distortions. The weighing itself may be implemented in steps 125 and 125′ for the four representations of distortions provided by operators 7, 8, 9 and 10, upon providing the final disturbance densities 142 and 143.
Here, the loudness level indicator has been determined in step 33, but one may appreciate that the loudness level indicator may be determined for each reference signal frame in another part of the method. In step 33 determining the loudness level indicator is possible due to the fact that the average steady state noise loudness density is already determined for the reference signal LX(f)n over the super silent frames, which is then used in the construction of the noise free reference signal for all reference frames. However, although it is possible to implement this in step 33, it is not the most preferred manner of implementation.
Alternatively, the loudness level indicator (LOUDNESS) may be taken from the reference signal in an additional step following step 35. This additional step is also indicated in
Local Scaling of the Distorted Pitch Loudness Density for Time-Varying Gain Between Degraded and Reference Signal (steps 34 and 63)
Slow variations in gain are inaudible and small changes are already compensated for in the calculation of the reference signal representation. The remaining compensation necessary before the correct internal representation can be calculated is carried out in two steps; first the reference is compensated in step 34 for signal levels where the degraded signal loudness is less than the reference signal loudness, and second the degraded is compensated in step 63 for signal levels where the reference signal loudness is less than the degraded signal loudness.
The first compensation 34 scales the reference signal towards a lower level for parts of the signal where the degraded shows a severe loss of signal such as in time clipping situations. The scaling is such that the remaining difference between reference and degraded represents the impact of time clips on the local perceived speech quality. Parts where the reference signal loudness is less than the degraded signal loudness are not compensated and thus additive noise and loud clicks are not compensated in this first step.
The second compensation 63 scales the degraded signal towards a lower level for parts of the signal where the degraded signal shows clicks and for parts of the signal where there is noise in the silent intervals. The scaling is such that the remaining difference between reference and degraded represents the impact of clicks and slowly changing additive noise on the local perceived speech quality. While clicks are compensated in both the silent and speech active parts, the noise is compensated only in the silent parts.
Partial Compensation of the Original Pitch Loudness Density for Linear Frequency Response Distortions (step 35)
Imperceptible linear frequency response distortions were already compensated by partially filtering the reference signal in the pitch power density domain in step 27. In order to further correct for the fact that linear distortions are less objectionable than non-linear distortions, the reference signal is now partially filtered in step 35 in the pitch loudness domain. This is carried out by calculating the average loudness spectrum of the original and degraded pitch loudness densities over all speech active frames. Per Bark bin, a partial compensation factor is calculated from the ratio of the degraded loudness spectrum to the original loudness spectrum. This partial compensation factor is used to filter the reference signal with a smoothed, lower amplitude version of the frequency response of the system under test. After this filtering, the difference between the reference and degraded pitch loudness densities that result from linear frequency response distortions is diminished to a level that represents the impact of linear frequency response distortions on the perceived speech quality.
Final Scaling and Noise Suppression of the Pitch Loudness Densities
Up to this point, all calculations on the signals are carried out on the playback level as used in the subjective experiment. For low playback levels, this will result in a low difference between reference and degraded pitch loudness densities and in general in a far too optimistic estimation of the listening speech quality. In order to compensate for this effect the degraded signal is now scaled towards a "virtual" fixed internal level in step 64. After this scaling, the reference signal is scaled in step 36 towards the degraded signal level and both the reference and degraded signal are now ready for a final noise suppression operation in 37 and 65 respectively. This noise suppression takes care of the last parts of the steady state noise levels in the loudness domain that still have too great an impact on the speech quality calculation. The resulting signals 13 and 14 are now in the perceptual relevant internal representation domain and from the ideal pitch-loudness-time LX ideal(f)n 13 and degraded pitch-loudness-time LY deg(f)n 14 functions the disturbance densities 142 and 143 can be calculated. Four different variants of the ideal and degraded pitch-loudness-time functions are calculated in 7, 8, 9 and 10, two variants (7 and 8) focussed on the disturbances for normal and big distortions, and two (9 and 10) focussed on the added disturbances for normal and big distortions.
Calculation of the Final Disturbance Densities
Two different flavours of the disturbance densities 142 and 143 are calculated. The first one, the normal disturbance density, is derived in 7 and 8 from the difference between the ideal pitch-loudness-time LX ideal(f)n and degraded pitch-loudness-time function LY deg(f)n. The second one is derived in 9 and 10 from the ideal pitch-loudness-time and the degraded pitch-loudness-time function using versions that are optimized with regard to introduced degradations and is called added disturbance. In this added disturbance calculation, signal parts where the degraded power density is larger than the reference power density are weighted with a factor dependent on the power ratio in each pitch-time cell, the asymmetry factor.
In order to be able to deal with a large range of distortions two different versions of the processing are carried out, one focussed on small to medium distortions based on 7 and 9 and one focussed on medium to big distortions based on 8 and 10. The switching between the two is carried out on the basis of a first estimation from the disturbance focussed on small to medium level of distortions. This processing approach leads to the necessity of calculating four different ideal pitch-loudness-time functions and four different degraded pitch-loudness-time functions in order to be able to calculate a single disturbance and a single added disturbance function (see
Severe deviations of the optimal listening level are quantified in 127 and 127′ by an indicator directly derived from the signal level of the degraded signal. This global indicator (LEVEL) is also used in the calculation of the MOS-LQO.
Severe distortions introduced by frame repeats are quantified 128 and 128′ by an indicator derived from a comparison of the correlation of consecutive frames of the reference signal with the correlation of consecutive frames of the degraded signal.
Severe deviations from the optimal "ideal" timbre of the degraded signal are quantified 129 and 129′ by an indicator derived from the difference in loudness between an upper frequency band and a lower frequency band. A timbre indicator is calculated from the difference in loudness in the Bark bands between 2 and 12 Bark in the low frequency part and 7 and 17 Bark in the upper range (i.e. using a 5 Bark overlap) of the degraded signal, which "punishes" any severe imbalance irrespective of the fact that this could be the result of an incorrect voice timbre of the reference speech file. Compensations are carried out per frame and on a global level. This compensation calculates the power in the lower and upper Bark bands (below 12 and above 7 Bark, i.e. using a 5 Bark overlap) of the degraded signal and "punishes" any severe imbalance irrespective of the fact that this could be the result of an incorrect voice timbre of the reference speech file. Note that a transparent chain using poorly recorded reference signals, containing too much noise and/or an incorrect voice timbre, will thus not provide the maximum MOS score in a POLQA end-to-end speech quality measurement. This compensation also has an impact when measuring the quality of devices which are transparent. When reference signals are used that show a significant deviation from the optimal "ideal" timbre the system under test will be judged as non-transparent even if the system does not introduce any degradation into the reference signal.
The impact of severe peaks in the disturbance is quantified in 130 and 130′ in the FLATNESS indicator which is also used in the calculation of the MOS-LQO.
Severe noise level variations which focus the attention of subjects towards the noise are quantified in 131 and 131′ by a noise contrast indicator derived from the degraded signal frames for which the corresponding reference signal frames are silent.
In steps 133 and 133′, a weighting operation is performed for weighing disturbances dependent on whether or not they coincide with the actual spoken voice. In order to assess the quality or intelligibility of the degraded signal, disturbances which are perceived during silent periods are not considered to be as detrimental as disturbances which are perceived during actual spoken voice. Therefore, based on the LOUDNESS indicator determined in step 33 (or alternatively step 35′) from the reference signal, a weighting value is determined for weighing any disturbances. The weighting value is used for weighing the difference function (i.e. disturbances) for incorporating the impact of the disturbances on the quality or intelligibility of the degraded speech signal into the evaluation. In particular, since the weighting value is determined based on the LOUDNESS indicator, the weighting value may be represented by a loudness dependent function. The loudness dependent weighting value may be determined by comparing the loudness value to a threshold. If the loudness indicator exceeds the threshold the perceived disturbances are fully taken into consideration when performing the evaluation. On the other hand, if the loudness value is smaller than the threshold, the weighting value is made dependent on the loudness level indicator; i.e. in the present example the weighting value is equal to the loudness level indicator (in the regime where LOUDNESS is below the threshold). The advantage is that for weak parts of the speech signal, e.g. at the ends of spoken words just before a pause or silence, disturbances are taken partially into account as being detrimental to the quality or intelligibility. As an example, one may appreciate that a certain amount of noise perceived while the letter 't' at the end of a word is spoken may cause a listener to perceive it as the letter 's'. This could be detrimental to the quality or intelligibility. On the other hand, the skilled person may appreciate that it is also possible to simply disregard any noise during silence or pauses, by turning the weighting value to zero when the loudness value is below the above mentioned threshold.
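By way of illustration, the loudness dependent weighting value described above may be sketched as follows (plain Python; both the threshold behaviour and the optional hard-gating variant are taken from the description above, while the function name is chosen for the example only):

```python
def loudness_weight(loudness: float, threshold: float, hard_gate: bool = False) -> float:
    """Loudness dependent weighting value for disturbances.

    Above the threshold, disturbances are fully taken into consideration (weight 1.0).
    Below the threshold the weight equals the loudness level indicator itself, so that
    disturbances in weak speech parts count only partially; alternatively, with
    `hard_gate` set, disturbances during silence or pauses are disregarded entirely.
    """
    if loudness >= threshold:
        return 1.0
    return 0.0 if hard_gate else loudness
```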
Proceeding again with
Finally the disturbance and added disturbance densities are clipped in 137 and 137′ to a maximum level and the variance of the disturbance 138 and 138′ and the impact of jumps 140 and 140′ in the loudness of the reference signal are used to compensate for specific time structures of the disturbances.
This yields the final disturbance density D(f)n 142 for regular disturbance and the final disturbance density DA(f)n 143 for added disturbance.
Aggregation of the Disturbance over Pitch, Spurts and Time, Mapping to Intermediate MOS Score
The final disturbance D(f)n 142 and added disturbance DA(f)n densities 143 are integrated per frame over the pitch axis resulting in two different disturbances per frame, one derived from the disturbance and one derived from the added disturbance, using an L1 integration 153 and 159 (see
with Wf a series of constants proportional to the width of the Bark bins.
Next these two disturbances per frame are averaged over a concatenation of six consecutive speech frames, defined as a speech spurt, with an L4 155 and an L1 160 weighing for the disturbance and for the added disturbance, respectively.
Finally a disturbance and an added disturbance are calculated per file from an L2 156 and 161 averaging over time:
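The exact aggregation formulas and constants are not reproduced here; purely as an illustration of what the L1, L4 and L2 weighings denote, a weighted L1 integration over the pitch axis and a generic Lp average over N values take the form (with Wf the Bark bin width constants mentioned above):

```latex
D_n = \sum_{f} W_f \,\bigl|D(f)_n\bigr|
\qquad\text{and}\qquad
L_p = \left( \frac{1}{N} \sum_{n=1}^{N} \lvert d_n \rvert^{p} \right)^{1/p}
```

with p = 4 (disturbance) or p = 1 (added disturbance) over the six-frame speech spurts, and p = 2 for the averaging over time per file.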
The added disturbance is compensated in step 161 for loud reverberations and loud additive noise using the REVERB 43 and NOISE 42 indicators. The two disturbances are then combined 170 with the frequency indicator 41 (FREQ) to derive an internal indicator that is linearized with a third order regression polynomial to get a MOS like intermediate indicator 171.
Computation of the Final POLQA MOS-LQO
The raw POLQA score is derived from the MOS like intermediate indicator using four different compensations all in step 175:
The training of this mapping is carried out on a large set of degradations, including degradations that were not part of the POLQA benchmark. These raw MOS scores 176 are for the major part already linearized by the third order polynomial mapping used in the calculation of the MOS like intermediate indicator 171.
Finally the raw POLQA MOS scores 176 are mapped in 180 towards the MOS-LQO scores 181 using a third order polynomial that is optimized for the 62 databases as were available in the final stage of the POLQA standardization. In narrowband mode the maximum POLQA MOS-LQO score is 4.5 while in super-wideband mode this point lies at 4.75. An important consequence of the idealization process is that under some circumstances, when the reference signal contains noise or when the voice timbre is severely distorted, a transparent chain will not provide the maximum MOS score of 4.5 in narrowband mode or 4.75 in super-wideband mode.
The consonant-vowel-consonant compensation, in accordance with the present invention, may be implemented as follows. In
The consonant-vowel-consonant compensation continues as indicated in
Steps 224 and 225 yield the reference signal frames that correspond to the active speech and soft speech parts, respectively the active speech reference signal part frames 234 and the soft speech reference signal parts frames 235. These frames are provided to step 260 to be discussed below.
Completely analogously to the calculation of the relevant signal parts of the reference signal, the degraded signal frames 240 are also first analysed, in step 242, for calculating the signal power in the desired frequency domain. For the degraded signal frames, it will be advantageous to calculate the signal power within a frequency range including the spoken voice frequency range and the frequency range wherein most of the audible noise is present, for example the frequency range between 300 Hz and 8000 Hz.
From the calculated signal powers in step 242, the relevant frames are selected, i.e. the frames that are associated with the relevant reference frames. Selection takes place in steps 244 and 245. In step 245, for each degraded signal frame it is determined whether or not it is time aligned with a reference signal frame that is selected in step 225 as a soft speech reference signal frame. If the degraded frame is time aligned with a soft speech reference signal frame, the degraded frame is identified as a soft speech degraded signal frame, and the calculated signal power will be used in the calculation in step 260. Otherwise, the frame is discarded as soft speech degraded signal frame for calculation of the compensation factor in step 247. In step 244, for each degraded signal frame it is determined whether or not it is time aligned with a reference signal frame that is selected in step 224 as an active speech reference signal frame. If the degraded frame is time aligned with an active speech reference signal frame, the degraded frame is identified as an active speech degraded signal frame, and the calculated signal power will be used in the calculation in step 260. Otherwise, the frame is discarded as active speech degraded signal frame for calculation of the compensation factor in step 247. This yields the soft speech degraded signal parts frames 254 and the active speech degraded signal parts frames 255 which are provided to step 260.
Step 260 receives as input the active speech reference signal parts frames 234, the soft speech reference signal part frames 235, the soft speech degraded signal parts frames 254 and the active speech degraded signal parts frames 255. In step 260, the signal powers for these frames are processed such as to determine the average signal power for the active speech and soft speech reference signal parts and for the active speech and soft speech degraded signal parts, and from this (also in step 260) the consonant-vowel-consonant signal-to-noise ratio compensation parameter (CVCSNR_factor) is calculated as follows:
The parameters Δ1 and Δ2 are constant values that are used to adapt the behavior of the model to the behavior of subjects. The other parameters in this formula are as follows: Pactive, ref, average is the average active speech reference signal part signal power. The parameter Psoft, ref, average is the average soft speech reference signal part signal power. The parameter Pactive, degraded, average is the average active speech degraded signal part signal power, and the parameter Psoft, degraded, average is the average soft speech degraded signal part signal power. At the output of step 260 there is provided the consonant-vowel-consonant signal-to-noise ratio compensation parameter CVCSNR_factor.
The CVCSNR_factor is compared to a threshold value, in the present example 0.75, in step 262. If the CVCSNR_factor is larger than this threshold, the compensation factor in step 265 will be determined as being equal to 1.0 (no compensation takes place). In case the CVCSNR_factor is smaller than the threshold (here 0.75), the compensation factor is in step 267 calculated as follows: the compensation factor = (CVCSNR_factor + 0.25)^(1/2) (note that the value 0.25 is taken to be equal to 1.0 − 0.75, wherein 0.75 is the threshold used for comparing the CVCSNR_factor). The compensation factor 270 thus provided is used in step 182 of
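For illustration, the thresholding and compensation factor calculation described above may be written as follows (plain Python; the function name is chosen for the example only):

```python
def cvc_compensation_factor(cvcsnr_factor: float, threshold: float = 0.75) -> float:
    """Consonant-vowel-consonant compensation factor derived from the CVCSNR_factor.

    Above the threshold no compensation takes place (factor 1.0); below it the factor
    equals (CVCSNR_factor + (1.0 - threshold)) ** 0.5, i.e. (CVCSNR_factor + 0.25) ** 0.5
    for the example threshold of 0.75.
    """
    if cvcsnr_factor > threshold:
        return 1.0
    return (cvcsnr_factor + (1.0 - threshold)) ** 0.5
```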
The present invention has been described in terms of some specific embodiments thereof. It will be appreciated that the embodiments shown in the drawings and described herein are intended for illustrative purposes only and are not by any manner or means intended to be restrictive on the invention. It is believed that the operation and construction of the present invention will be apparent from the foregoing description and drawings appended thereto. It will be clear to the skilled person that the invention is not limited to any embodiment herein described and that modifications are possible which should be considered within the scope of the appended claims. Also kinematic inversions are considered inherently disclosed and to be within the scope of the invention. Moreover, any of the components and elements of the various embodiments disclosed may be combined or may be incorporated in other embodiments where considered necessary, desired or preferred, without departing from the scope of the invention as defined in the claims.
In the claims, any reference signs shall not be construed as limiting the claim. The terms 'comprising' and 'including' when used in this description or the appended claims should not be construed in an exclusive or exhaustive sense but rather in an inclusive sense. Thus the expression 'comprising' as used herein does not exclude the presence of other elements or steps in addition to those listed in any claim. Furthermore, the words 'a' and 'an' shall not be construed as limited to 'only one', but instead are used to mean 'at least one', and do not exclude a plurality. Features that are not specifically or explicitly described or claimed may be additionally included in the structure of the invention within its scope. Expressions such as: "means for . . . " should be read as: "component configured for . . . " or "member constructed to . . . " and should be construed to include equivalents for the structures disclosed. The use of expressions like: "critical", "preferred", "especially preferred" etc. is not intended to limit the invention. Additions, deletions, and modifications within the purview of the skilled person may generally be made without departing from the spirit and scope of the invention, as is determined by the claims. The invention may be practiced otherwise than as specifically described herein, and is only limited by the appended claims.
Number | Date | Country | Kind
---|---|---|---
20186733.0 | Jul 2020 | EP | regional

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/NL2021/050460 | 7/19/2021 | WO |