The present application is related to audio signal processing and, particularly, to audio processing usable in artificial reverberators.
The determination of a measure for a perceived level of reverberation is, for example, desired for applications where an artificial reverberation processor is operated in an automated way and needs to adapt its parameters to the input signal such that the perceived level of the reverberation matches a target value. It is noted that the term reverberance, while alluding to the same theme, does not appear to have a commonly accepted definition, which makes it difficult to use as a quantitative measure in a listening test and prediction scenario.
Artificial reverberation processors are often implemented as linear time-invariant systems and operated in a send-return signal path, as depicted in
These parameters have an impact on the resulting audio signal in terms of perceived level, distance, room size, coloration and sound quality. Furthermore, the perceived characteristics of the reverberation depend on the temporal and spectral characteristics of the input signal [1]. Focusing on a very important sensation, namely loudness, it can be observed that the loudness of the perceived reverberation is monotonically related to the non-stationarity of the input signal. Intuitively speaking, an audio signal with large variations in its envelope excites the reverberation at high levels and allows it to become audible at lower levels. In a typical scenario where the long-term DRR expressed in decibels is positive, the direct signal can mask the reverberation signal almost completely at time instances where its energy envelope increases. On the other hand, whenever the signal ends, the previously excited reverberation tail becomes apparent in gaps exceeding a minimum duration determined by the slope of the post-masking (at maximum 200 ms) and the integration time of the auditory system (at maximum 200 ms for moderate levels).
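The gap mechanism described above can be sketched numerically: a short-term energy envelope of the direct signal is computed, and stretches where it stays below a threshold for longer than the roughly 200 ms window are the places where a previously excited reverberation tail can become audible. The frame size, threshold and function name below are illustrative assumptions, not part of the described system.

```python
import numpy as np

def find_reverb_exposure_gaps(x, fs, frame_ms=10, gap_ms=200, thresh_db=-30):
    """Locate gaps in the direct signal's energy envelope long enough
    (> ~200 ms) for an excited reverberation tail to become audible.
    Illustrative sketch; frame size and threshold are assumed values."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(x) // frame
    # short-term RMS envelope in dB relative to the loudest frame
    env = np.array([np.sqrt(np.mean(x[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    env_db = 20 * np.log10(env / (env.max() + 1e-12) + 1e-12)
    quiet = env_db < thresh_db
    min_frames = int(gap_ms / frame_ms)
    gaps, run = [], 0
    for i, q in enumerate(quiet):
        run = run + 1 if q else 0
        if run == min_frames:              # gap just became long enough
            gaps.append((i - min_frames + 1) * frame_ms / 1000)
    return gaps  # start times (s) of gaps exceeding the minimum duration
```

With a signal that is active for 0.5 s and then silent, the function reports a single gap starting at 0.5 s.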
To illustrate this,
Although such observations have been made many times [4, 5, 6], it is still worth emphasizing them because it illustrates qualitatively why models of partial loudness can be applied in the context of this work. In fact, it has been pointed out that the perception of reverberation arises from stream segregation processes in the auditory system [4, 5, 6] and is influenced by the partial masking of the reverberation due to the direct sound.
The considerations above motivate the use of loudness models. Related investigations were performed by Lee et al. and focus on the prediction of the subjective decay rate of RIRs when listening to them directly [7] and on the effect of the playback level on reverberance [8]. A predictor for reverberance using loudness-based early decay times is proposed in [9]. In contrast to this work, the prediction methods proposed here process the direct signal and the reverberation signal with a computational model of partial loudness (and with simplified versions of it in the quest for low-complexity implementations) and thereby consider the influence of the input (direct) signal on the sensation. Recently, Tsilfidis and Mourjopoulos [10] investigated the use of a loudness model for the suppression of the late reverberation in single-channel recordings. An estimate of the direct signal is computed from the reverberant input signal using a spectral subtraction method, and a reverberation masking index is derived by means of a computational auditory masking model, which controls the reverberation processing.
It is a feature of multi-channel synthesizers and other devices to add reverberation in order to make the sound better from a perceptual point of view. On the other hand, the generated reverberation is an artificial signal which, when added to the signal at too low a level, is barely audible and, when added at too high a level, leads to an unnatural and unpleasant-sounding final mixed signal. What makes things even worse is that, as discussed in the context of
An additional problem related to reverberation is that the reverberated signal is intended for the ear of an entity or individual, such as a human being, and the final goal of generating a mix signal having a direct signal component and a reverberation signal component is that the entity perceives this mix signal or “reverberated signal” as sounding good or natural. However, the auditory perception mechanism, or the mechanism by which sound is actually perceived by an individual, is strongly non-linear, not only with respect to the bands in which human hearing works, but also with respect to the processing of signals within the bands. Additionally, it is known that the human perception of sound is not so much directed by the sound pressure level, which can be calculated by, for example, squaring digital samples, but is more controlled by a sense of loudness. Additionally, for mix signals which include a direct signal component and a reverberation signal component, the sensation of the loudness of the reverberation component depends not only on the kind of direct signal component, but also on the level or loudness of the direct signal component.
Therefore, there exists a need for determining a measure for a perceived level of reverberation in a signal consisting of a direct signal component and a reverberation signal component in order to cope with the above problems related to the auditory perception mechanism of an entity.
According to an embodiment, an apparatus for determining a measure for a perceived level of reverberation in a mix signal having a direct signal component and a reverberation signal component may have a loudness model processor having a perceptual filter stage for filtering the dry signal component, the reverberation signal component or the mix signal, wherein the perceptual filter stage is configured for modeling an auditory perception mechanism of an entity to acquire a filtered direct signal, a filtered reverberation signal or a filtered mix signal; a loudness estimator for estimating a first loudness measure using the filtered direct signal and for estimating a second loudness measure using the filtered reverberation signal or the filtered mix signal, where the filtered mix signal is derived from a superposition of the direct signal component and the reverberation signal component; and a combiner for combining the first and the second loudness measures to acquire a measure for the perceived level of reverberation.
According to another embodiment, a method of determining a measure for a perceived level of reverberation in a mix signal having a direct signal component and a reverberation signal component may have the steps of filtering the dry signal component, the reverberation signal component or the mix signal, wherein the filtering is performed using a perceptual filter stage being configured for modeling an auditory perception mechanism of an entity to acquire a filtered direct signal, a filtered reverberation signal or a filtered mix signal; estimating a first loudness measure using the filtered direct signal; estimating a second loudness measure using the filtered reverberation signal or the filtered mix signal, where the filtered mix signal is derived from a superposition of the direct signal component and the reverberation signal component; and combining the first and the second loudness measures to acquire a measure for the perceived level of reverberation.
According to another embodiment, an audio processor for generating a reverberated signal from a direct signal component may have a reverberator for reverberating the direct signal component to acquire a reverberated signal component; an apparatus for determining a measure for a perceived level of reverberation in the reverberated signal having the direct signal component and the reverberated signal component which may have a loudness model processor having a perceptual filter stage for filtering the dry signal component, the reverberation signal component or the mix signal, wherein the perceptual filter stage is configured for modeling an auditory perception mechanism of an entity to acquire a filtered direct signal, a filtered reverberation signal or a filtered mix signal; a loudness estimator for estimating a first loudness measure using the filtered direct signal and for estimating a second loudness measure using the filtered reverberation signal or the filtered mix signal, where the filtered mix signal is derived from a superposition of the direct signal component and the reverberation signal component; and a combiner for combining the first and the second loudness measures to acquire a measure for the perceived level of reverberation; a controller for receiving the perceived level generated by the apparatus for determining a measure of a perceived level of reverberation, and for generating a control signal in accordance with the perceived level and a target value; a manipulator for manipulating the dry signal component or the reverberation signal component in accordance with the control signal; and a combiner for combining the manipulated dry signal component and the manipulated reverberation signal component, or for combining the dry signal component and the manipulated reverberation signal component, or for combining the manipulated dry signal component and the reverberation signal component to acquire the mix signal.
According to another embodiment, a method of processing an audio signal for generating a reverberated signal from a direct signal component may have the steps of reverberating the direct signal component to acquire a reverberated signal component; a method of determining a measure for a perceived level of reverberation in the reverberated signal having the direct signal component and the reverberated signal component which may have the steps of filtering the dry signal component, the reverberation signal component or the mix signal, wherein the filtering is performed using a perceptual filter stage being configured for modeling an auditory perception mechanism of an entity to acquire a filtered direct signal, a filtered reverberation signal or a filtered mix signal; estimating a first loudness measure using the filtered direct signal; estimating a second loudness measure using the filtered reverberation signal or the filtered mix signal, where the filtered mix signal is derived from a superposition of the direct signal component and the reverberation signal component; and combining the first and the second loudness measures to acquire a measure for the perceived level of reverberation; receiving the perceived level generated by the method for determining a measure of a perceived level of reverberation; generating a control signal in accordance with the perceived level and a target value; manipulating the dry signal component or the reverberation signal component in accordance with the control signal; and combining the manipulated dry signal component and the manipulated reverberation signal component, or combining the dry signal component and the manipulated reverberation signal component, or combining the manipulated dry signal component and the reverberation signal component to acquire the mix signal.
According to another embodiment, a computer program may have a program code for performing, when running on a computer, the method of determining a measure for a perceived level of reverberation in a mix signal having a direct signal component and a reverberation signal component which may have the steps of filtering the dry signal component, the reverberation signal component or the mix signal, wherein the filtering is performed using a perceptual filter stage being configured for modeling an auditory perception mechanism of an entity to acquire a filtered direct signal, a filtered reverberation signal or a filtered mix signal; estimating a first loudness measure using the filtered direct signal; estimating a second loudness measure using the filtered reverberation signal or the filtered mix signal, where the filtered mix signal is derived from a superposition of the direct signal component and the reverberation signal component; and combining the first and the second loudness measures to acquire a measure for the perceived level of reverberation.
According to another embodiment, a computer program may have a program code for performing, when running on a computer, the method of processing an audio signal for generating a reverberated signal from a direct signal component which may have the steps of reverberating the direct signal component to acquire a reverberated signal component; a method of determining a measure for a perceived level of reverberation in the reverberated signal having the direct signal component and the reverberated signal component which may have the steps of filtering the dry signal component, the reverberation signal component or the mix signal, wherein the filtering is performed using a perceptual filter stage being configured for modeling an auditory perception mechanism of an entity to acquire a filtered direct signal, a filtered reverberation signal or a filtered mix signal; estimating a first loudness measure using the filtered direct signal; estimating a second loudness measure using the filtered reverberation signal or the filtered mix signal, where the filtered mix signal is derived from a superposition of the direct signal component and the reverberation signal component; and combining the first and the second loudness measures to acquire a measure for the perceived level of reverberation; receiving the perceived level generated by the method for determining a measure of a perceived level of reverberation; generating a control signal in accordance with the perceived level and a target value; manipulating the dry signal component or the reverberation signal component in accordance with the control signal; and combining the manipulated dry signal component and the manipulated reverberation signal component, or combining the dry signal component and the manipulated reverberation signal component, or combining the manipulated dry signal component and the reverberation signal component to acquire the mix signal.
The present invention is based on the finding that the measure for a perceived level of reverberation in a signal is determined by a loudness model processor comprising a perceptual filter stage for filtering a direct signal component, a reverberation signal component or a mix signal component using a perceptual filter in order to model an auditory perception mechanism of an entity. Based on the perceptually filtered signals, a loudness estimator estimates a first loudness measure using the filtered direct signal and a second loudness measure using the filtered reverberation signal or the filtered mix signal. Then, a combiner combines the first measure and the second measure to obtain a measure for the perceived level of reverberation. Particularly, combining the two different loudness measures, advantageously by calculating a difference, provides a quantitative value or a measure of how strong the sensation of the reverberation is compared to the sensation of the direct signal or the mix signal.
For calculating the loudness measures, the absolute loudness measures can be used and, particularly, the absolute loudness measures of the direct signal, the mix signal or the reverberation signal. Alternatively, the partial loudness can also be calculated, where the first loudness measure is determined by using the direct signal as the stimulus and the reverberation signal as noise in the loudness model, and the second loudness measure is calculated by using the reverberation signal as the stimulus and the direct signal as the noise. Particularly, by combining these two measures in the combiner, a useful measure for a perceived level of reverberation is obtained. It has been found by the inventors that such a useful measure cannot be determined by generating a single loudness measure alone, for example using the direct signal alone, the mix signal alone or the reverberation signal alone. Instead, due to the inter-dependencies in human hearing, by combining measures which are derived differently from these three signals, the perceived level of reverberation in a signal can be determined or modeled with a high degree of accuracy.
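As a rough sketch of this structure, the following outlines the two-run use of a loudness model and the difference combiner. The perceptual filter and the partial-loudness estimate are crude placeholders (an energy ratio with a compressive exponent), standing in for the full loudness model described later; all function names and constants here are assumptions for illustration only.

```python
import numpy as np

def perceptual_filter(sig):
    # Placeholder for the perceptual filter stage: a real implementation
    # would model the ear transfer function and excitation patterns.
    # Here the signal is passed through unchanged (assumption).
    return sig

def partial_loudness(stimulus, noise):
    # Crude stand-in (assumption) for a partial-loudness estimate: the
    # stimulus energy relative to the total energy, compressed with an
    # exponent of 0.3, roughly mimicking loudness growth.
    e_sig = np.mean(stimulus ** 2)
    e_noise = np.mean(noise ** 2)
    return (e_sig / (e_sig + e_noise + 1e-12)) ** 0.3

def perceived_reverberation_measure(direct, reverb):
    """First run: direct as stimulus, reverberation as noise.
    Second run: reverberation as stimulus, direct as noise.
    The combiner takes the difference of the two loudness measures."""
    d = perceptual_filter(direct)
    r = perceptual_filter(reverb)
    n_x = partial_loudness(d, r)   # first loudness measure
    n_r = partial_loudness(r, d)   # second loudness measure
    return n_r - n_x               # measure for the perceived level
```

For a loud direct signal with a quiet reverberation the measure is negative; for identical components it is zero, reflecting equal sensations.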
Advantageously, the loudness model processor provides a time/frequency conversion and accounts for the ear transfer function together with the excitation patterns actually occurring in human hearing and modeled by hearing models.
In an embodiment, the measure for the perceived level of reverberation is forwarded to a predictor which actually provides the perceived level of reverberation on a useful scale such as the Sone scale. This predictor is advantageously trained by listening test data, and the predictor parameters for a linear predictor comprise a constant term and a scaling factor. The constant term advantageously depends on the characteristic of the actually used reverberation filter and, in one embodiment, on the reverberation filter characteristic parameter T60, which can be given for straightforward, well-known reverberation filters used in artificial reverberators. Even when this characteristic is not known, however, for example when the reverberation signal component is not separately available but has been separated from the mix signal before processing in the inventive apparatus, an estimate for the constant term can be derived.
Subsequently, embodiments of the present invention are described with respect to the accompanying drawings, in which:
The perceived level of reverberation depends on both the input audio signal and the impulse response. Embodiments of the invention aim at quantifying this observation and predicting the perceived level of late reverberation based on separate signal paths of direct and reverberant signals, as they appear in digital audio effects. An approach to the problem is developed and subsequently extended by considering the impact of the reverberation time on the prediction result. This leads to a linear regression model with two input variables which is able to predict the perceived level with high accuracy, as shown on experimental data derived from listening tests. Variations of this model with different degrees of sophistication and computational complexity are compared regarding their accuracy. Applications include the control of digital audio effects for automatic mixing of audio signals.
Embodiments of the present invention are not only useful for predicting the perceived level of reverberation in speech and music when the direct signal and the reverberation impulse response (RIR) are separately available. In other embodiments, in which a reverberated signal occurs, the present invention can be applied as well. In this instance, however, a direct/ambience or direct/reverberation separator would be included to separate the direct signal component and the reverberated signal component from the mix signal. Such an audio processor would then be useful to change the direct/reverberation ratio in this signal in order to generate a better sounding reverberated signal or better sounding mix signal.
Particularly, the perceptual filter stage is configured for filtering the direct signal component, the reverberation signal component or the mix signal component, wherein the perceptual filter stage is configured for modeling an auditory perception mechanism of an entity such as a human being to obtain a filtered direct signal, a filtered reverberation signal or a filtered mix signal. Depending on the implementation, the perceptual filter stage may comprise two filters operating in parallel or can comprise a storage and a single filter since one and the same filter can actually be used for filtering each of the three signals, i.e., the reverberation signal, the mix signal and the direct signal. In this context, however, it is to be noted that, although
The loudness calculator 104b or loudness estimator is configured for estimating the first loudness measure using the filtered direct signal and for estimating the second loudness measure using the filtered reverberation signal or the filtered mix signal, where the mix signal is derived from a superposition of the direct signal component and the reverberation signal component.
However, other computationally efficient embodiments additionally exist which are indicated at lines 2, 3, and 4 in
In a further embodiment, the loudness model processor 104 is operating in the frequency domain as discussed in more detail in
Subsequently, the loudness model illustrated in
The implementation of the loudness model in
This section describes the implementation of a model of partial loudness, the listening test data that was used as ground truth for the computational prediction of the perceived level of reverberation, and a proposed prediction method which is based on the partial loudness model.
The loudness model computes the partial loudness Nx,n[k] of a signal x[k] when presented simultaneously with a masking signal n[k]
N_x,n[k] = f(x[k], n[k]).  (1)
Although early models have dealt with the perception of loudness in steady background noise, some work exists on loudness perception in backgrounds of co-modulated random noise [14], complex environmental sounds [12], and music signals [15].
The model used in this work is similar to the models in [11, 12], which themselves drew on earlier research by Fletcher, Munson, Stevens, and Zwicker, with some modifications as described in the following. A block diagram of the loudness model is shown in
The specific partial loudness, i.e., the partial loudness evoked in each of the auditory filter bands, is computed from the excitation levels of the signal of interest (the stimulus) and the interfering noise according to Equations (17)-(20) in [11], illustrated in
Particularly,
N′_TOT = C{[(E_SIG + E_NOISE)G + A]^a − A^a}
It is assumed that the listener can partition the specific loudness at a given center frequency between the specific loudness of the signal and that of the noise, but in a way that conserves the total specific loudness:
N′_TOT = N′_SIG + N′_NOISE.
This assumption is consistent with most experiments measuring partial masking, in which the listener first hears the noise alone and then the noise plus signal. The specific loudness of the noise alone, assuming that it is above threshold, is
N′_NOISE = C[(E_NOISE·G + A)^a − A^a].
Hence, if the specific loudness of the signal were derived simply by subtracting the specific loudness of the noise from the total specific loudness, the result would be
N′_SIG = C{[(E_SIG + E_NOISE)G + A]^a − A^a} − C[(E_NOISE·G + A)^a − A^a]
In practice, the way that specific loudness is partitioned between signal and noise appears to vary depending on the relative excitation of the signal and the noise.
Four situations are considered that indicate how specific loudness is assigned at different signal levels. Let E_THRN denote the peak excitation evoked by a sinusoidal signal when it is at its masked threshold in the background noise. First, when E_SIG is well below E_THRN, all the specific loudness is assigned to the noise, and the partial specific loudness of the signal approaches zero. Second, when E_NOISE is well below E_THRQ, the partial specific loudness approaches the value it would have for a signal in quiet. Third, when the signal is at its masked threshold, with excitation E_THRN, it is assumed that the partial specific loudness is equal to the value that would occur for a signal at the absolute threshold. Finally, when a signal centered in narrow-band noise is well above its masked threshold, the loudness of the signal approaches its unmasked value; therefore, the partial specific loudness of the signal also approaches its unmasked value.
Consider the implications of these various boundary conditions. At masked threshold, the specific loudness equals that for a signal at threshold in quiet. This specific loudness is less than would be predicted from the above equation, presumably because some of the specific loudness of the signal is assigned to the noise. In order to obtain the correct specific loudness for the signal, it is assumed that the specific loudness assigned to the noise is increased by the factor B, where
Applying this factor to the second term in the above equation for N′_SIG gives
N′_SIG = C{[(E_SIG + E_NOISE)G + A]^a − A^a} − C{[(E_THRN + E_NOISE)G + A]^a − (E_THRQ·G + A)^a}.
It is assumed that when the signal is at masked threshold, its peak excitation E_THRN is equal to K·E_NOISE + E_THRQ, where K is the signal-to-noise ratio at the output of the auditory filter needed for threshold at higher masker levels. Recent estimates of K, obtained for masking experiments using notched noise, suggest that K increases markedly at very low frequencies, becoming greater than unity. In the reference, the value of K is estimated as a function of frequency. The value decreases from high levels at low frequencies to constant low levels at higher frequencies. Unfortunately, there are no estimates of K for center frequencies below 100 Hz, so values for frequencies from 50 to 100 Hz have to be extrapolated. Substituting for E_THRN in the above equation results in:
N′_SIG = C{[(E_SIG + E_NOISE)G + A]^a − A^a} − C{[(E_NOISE(1 + K) + E_THRQ)G + A]^a − (E_THRQ·G + A)^a}
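A minimal sketch of the last equation, for the regime where the signal is above its masked threshold, could look as follows. The constants C, G, A, a, K and E_THRQ are frequency-dependent parameters of the cited loudness model; the values below are single-band placeholders (assumptions), not calibrated numbers.

```python
def partial_specific_loudness(e_sig, e_noise,
                              C=0.047, G=1.0, A=4.72, a=0.2,
                              K=0.3, e_thrq=2.31):
    """Specific partial loudness of a signal in a masking noise,
    following N'_SIG = C{[(E_SIG+E_NOISE)G+A]^a - A^a}
                     - C{[(E_NOISE(1+K)+E_THRQ)G+A]^a - (E_THRQ G+A)^a}.
    Parameter values are illustrative placeholders for one band."""
    n_tot = C * (((e_sig + e_noise) * G + A) ** a - A ** a)
    n_noise_term = C * (((e_noise * (1 + K) + e_thrq) * G + A) ** a
                        - (e_thrq * G + A) ** a)
    return n_tot - n_noise_term
```

Note that for E_NOISE = 0 the subtracted term vanishes and the expression reduces to the unmasked specific loudness C[(E_SIG·G + A)^a − A^a], consistent with the boundary conditions above; adding noise reduces the partial loudness of the signal.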
When E_SIG = E_THRN, this equation specifies the peak specific loudness for a signal at the absolute threshold in quiet.
When the signal is well above its masked threshold, that is, when E_SIG >> E_THRN, the specific loudness of the signal approaches the value that it would have when no background noise is present. This means that the specific loudness assigned to the noise becomes vanishingly small. To accommodate this, the above equation is modified by introducing an extra term which depends on the ratio E_THRN/E_SIG. This term decreases as E_SIG is increased above the value corresponding to masked threshold. Hence, the above equation becomes equation 17 on
This is the final equation for N′_SIG in the case when E_SIG > E_THRN and E_SIG + E_NOISE ≤ 10^10. The exponent 0.3 in the final term was chosen empirically so as to give a good fit to data on the loudness of a tone in noise as a function of the signal-to-noise ratio.
Subsequently, the situation is considered where E_SIG < E_THRN. In the limiting case where E_SIG is just below E_THRN, the specific loudness would approach the value given in Equation 17 in
The equations for partial loudness described so far apply when E_SIG + E_NOISE < 10^10. By applying the same reasoning as used for the derivation of equation (17) of
The following points are to be noted. This standard model is applied for the present invention where, in a first run, SIG corresponds to, for example, the direct signal as the “stimulus” and NOISE corresponds to, for example, the reverberation signal or the mix signal as the “noise”. In the second run, as discussed in the context of the first embodiment in
In order to assess the suitability of the described loudness model for the task of predicting the perceived level of the late reverberation, a corpus of ground truth generated from listener responses is advantageous. To this end, data from an investigation featuring several listening tests [13] is used here and briefly summarized in the following. Each listening test consisted of multiple graphical user interface screens which presented mixtures of different direct signals with different conditions of artificial reverberation. The listeners were asked to rate the perceived amount of reverberation on a scale from 0 to 100 points. In addition, two anchor signals were presented at 10 points and at 90 points. The anchor signals were created from the same direct signal with different conditions of reverberation.
The direct signals used for creating the test items were monophonic recordings of speech, individual instruments and music of different genres with a length of about 4 seconds each. The majority of the items originated from anechoic recordings but also commercial recordings with a small amount of original reverberation were used.
The RIRs represent late reverberation and were generated using exponentially decaying white noise with frequency dependent decay rates. The decay rates are chosen such that the reverberation time decreases from low to high frequencies, starting at a base reverberation time T60. Early reflections were neglected in this work. The reverberation signal r[k] and the direct signal x[k] were scaled and added such that the ratio of their average loudness measure according to ITU-R BS.1770 [16] matches a desired DRR and such that all test signal mixtures have equal long-term loudness. All participants in the tests were working in the field of audio and had experience with subjective listening tests.
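The scaling step can be sketched as follows, with plain signal energy standing in for the ITU-R BS.1770 loudness measure (an assumption made to keep the example short; the function name is likewise hypothetical):

```python
import numpy as np

def scale_reverb_to_drr(direct, reverb, drr_db):
    """Scale the reverberation signal so that the direct-to-reverberant
    ratio of the two components matches drr_db.  Mean energy is used as
    a stand-in for the BS.1770 loudness measure (assumption)."""
    e_d = np.mean(direct ** 2)
    e_r = np.mean(reverb ** 2)
    # gain g bringing the ratio e_d / (g^2 e_r) to 10^(drr_db/10)
    g = np.sqrt(e_d / (e_r * 10 ** (drr_db / 10)))
    return g * reverb
```

After scaling, the measured DRR of the two components equals the requested value; an additional common gain would then be applied to equalize the long-term loudness across test items.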
The ground truth data used for the training and the verification/testing of the prediction method were taken from two listening tests and are denoted by A and B, respectively.
The data set A consisted of ratings of 14 listeners for 54 signals. The listeners repeated the test once and the mean rating was obtained from all of the 28 ratings for each item. The 54 signals were generated by combining 6 different direct signals and 9 stereophonic reverberation conditions, with T60 ∈ {1, 1.6, 2.4} s and DRR ∈ {3, 7.5, 12} dB, and no pre-delay.
The data in B were obtained from ratings of 14 listeners for 60 signals. The signals were generated using 15 direct signals and 36 reverberation conditions. The reverberation conditions sampled four parameters, namely T60, DRR, pre-delay, and ICC. For each direct signal 4 RIRs were chosen such that two had no pre-delay and two had a short pre-delay of 50 ms, and two were monophonic and two were stereophonic.
Subsequently, further features of an embodiment of the combiner 110 in
The basic input feature for the prediction method is computed from the difference of the partial loudness N_r,x[k] of the reverberation signal r[k] (with the direct signal x[k] being the interferer) and the loudness N_x,r[k] of x[k] (where r[k] is the interferer), according to Equation (2).
ΔN_r,x[k] = N_r,x[k] − N_x,r[k]  (2)
The rationale behind Equation (2) is that the difference ΔN_r,x[k] is a measure of how strong the sensation of the reverberation is compared to the sensation of the direct signal. Taking the difference was also found to make the prediction result approximately invariant with respect to the playback level. The playback level has an impact on the investigated sensation [17, 8], but to a more subtle extent than reflected by the increase of the partial loudness N_r,x with increasing playback level. Typically, musical recordings sound more reverberant at moderate to high levels (starting at about 75-80 dB SPL) than at about 12 to 20 dB lower levels. This effect is especially obvious in cases where the DRR is positive, which is valid “for nearly all recorded music” [18], but not in all cases for concert music where “listeners are often well beyond the critical distance” [6].
The decrease of the perceived level of the reverberation with decreasing playback level is best explained by the fact that the dynamic range of reverberation is smaller than that of the direct sounds (or, a time-frequency representation of reverberation is more dense whereas a time-frequency representation of direct sounds is more sparse [19]). In such a scenario, the reverberation signal is more likely to fall below the threshold of hearing than the direct sounds do.
Although equation (2) describes, as the combination operation, a difference between the two loudness measures Nr,x[k] and Nx,r[k], other combinations can be performed as well such as multiplications, divisions or even additions. In any case, it is sufficient that the two alternatives indicated by the two loudness measures are combined in order to have influences of both alternatives in the result. However, the experiments have shown that the difference results in the best values from the model, i.e. in the results of the model which fit with the listening tests to a good extent, so that the difference is the advantageous way of combining.
Subsequently, details of the predictor 114 illustrated in
The prediction methods described in the following are linear and use a least squares fit for the computation of the model coefficients. The simple structure of the predictor is advantageous in situations where the size of the data sets for training and testing the predictor is limited, which could lead to overfitting of the model when using regression methods with more degrees of freedom, e.g. neural networks. The baseline predictor {circumflex over (R)}b is derived by the linear regression according to Equation (3) with coefficients ai, with K being the length of the signal in frames,
{circumflex over (R)}b=a0+a1·(1/K)Σk ΔNr,x[k] (3)
where the sum is taken over all K frames.
The model has only one independent variable, i.e. the mean of ΔNr,x[k]. To track changes and to enable real-time processing, the computation of the mean can be approximated using a leaky integrator. The model parameters derived when using data set A for the training are a0=48.2 and a1=14.0, where a0 equals the mean rating over all listeners and items.
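A sketch of the baseline predictor, including the leaky-integrator approximation of the mean for real-time operation, might look as follows. The function name, the per-frame output and the smoothing constant alpha are illustrative assumptions; the default coefficients a0=48.2 and a1=14.0 are the values given above for data set A.

```python
def baseline_predictor(delta_n, a0=48.2, a1=14.0, alpha=None):
    """Baseline prediction R_b = a0 + a1 * mean(delta_n).

    delta_n: per-frame partial loudness difference values.
    With alpha set (0 < alpha < 1), the arithmetic mean is replaced by a
    leaky integrator m[k] = alpha*m[k-1] + (1-alpha)*delta_n[k], so the
    prediction can be updated frame by frame in real time; in that case
    a list with one prediction per frame is returned.
    """
    if alpha is None:
        # Offline case: average over the whole signal (K frames).
        m = sum(delta_n) / len(delta_n)
        return a0 + a1 * m
    m = 0.0
    predictions = []
    for d in delta_n:
        m = alpha * m + (1.0 - alpha) * d
        predictions.append(a0 + a1 * m)
    return predictions
```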
The model parameters derived from the data set A are a0=48.2, a1=12.9, a2=10.2. The results are shown in
Alternatively, the averaging can be performed over more or fewer blocks, as long as at least two blocks are averaged. Due to the nature of the linear regression, the best results may be obtained when averaging over the whole music piece up to a certain frame. For real-time applications, however, it is advantageous to reduce the number of frames over which the average is taken, depending on the actual application.
In the following, the models are evaluated using the correlation coefficient r, the mean absolute error (MAE) and the root mean squared error (RMSE) between the mean listener ratings and the predicted sensation. The experiments are performed as two-fold cross-validation, i.e. the predictor is trained with data set A and tested with data set B, and the experiment is repeated with B for training and A for testing. The evaluation metrics obtained from both runs are averaged, separately for the training and the testing.
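The three evaluation metrics can be sketched as follows; this is a straightforward implementation of r, MAE and RMSE between the mean listener ratings and the predicted values (the function name is illustrative):

```python
import math

def evaluation_metrics(ratings, predictions):
    """Correlation coefficient r, mean absolute error (MAE) and root
    mean squared error (RMSE) between mean listener ratings and the
    predicted sensation."""
    n = len(ratings)
    mean_r = sum(ratings) / n
    mean_p = sum(predictions) / n
    cov = sum((a - mean_r) * (b - mean_p)
              for a, b in zip(ratings, predictions))
    var_r = sum((a - mean_r) ** 2 for a in ratings)
    var_p = sum((b - mean_p) ** 2 for b in predictions)
    r = cov / math.sqrt(var_r * var_p)
    mae = sum(abs(a - b) for a, b in zip(ratings, predictions)) / n
    rmse = math.sqrt(sum((a - b) ** 2
                         for a, b in zip(ratings, predictions)) / n)
    return r, mae, rmse
```

For the two-fold cross-validation, these metrics are computed once per run (A trains, B tests; then B trains, A tests) and averaged.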
The results are shown in Table 1 for the prediction models {circumflex over (R)}b and {circumflex over (R)}e. The predictor {circumflex over (R)}e yields accurate results with an RMSE of 10.6 points. The average of the standard deviation of the individual listener ratings per item are given as a measure for the dispersion from the mean (of the ratings of all listeners per item) as
The accuracies of the predictions for the data sets differ slightly, e.g. for {circumflex over (R)}e both MAE and RMSE are approximately one point below the mean value (as listed in the table) when testing with data set A and one point above average when testing with data set B. The fact that the evaluation metrics for training and test are comparable indicates that overfitting of the predictor has been avoided.
In order to facilitate an economic implementation of such prediction models, the following experiments investigate how the use of loudness features with less computational complexity influences the precision of the prediction result. The experiments focus on replacing the partial loudness computation by estimates of the total loudness and on simplified implementations of the excitation pattern.
Instead of using the partial loudness difference ΔNr,x[k], three differences of total loudness estimates are examined, with the loudness of the direct signal Nx[k], the loudness of the reverberation Nr[k], and the loudness of the mixture signal Nm[k], as shown in Equations (5)-(7), respectively.
ΔNm-x[k]=Nm[k]−Nx[k] (5)
Equation (5) is based on the assumption that the perceived level of the reverberation signal can be expressed as the difference (increase) in overall loudness which is caused by adding the reverb to the dry signal.
Following a similar rationale as for the partial loudness difference in Equation (2), loudness features using the differences of the total loudness of the reverberation signal and of the mixture signal or the direct signal, respectively, are defined in Equations (6) and (7). The measure for predicting the sensation is derived as the loudness of the reverberation signal when listened to separately, with subtractive terms for modelling the partial masking and for normalization with respect to the playback level, derived from the mixture signal or the direct signal, respectively.
ΔNr-m[k]=Nr[k]−Nm[k] (6)
ΔNr-x[k]=Nr[k]−Nx[k] (7)
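The three total-loudness features of Equations (5)-(7) can be sketched per frame as follows (function and variable names are illustrative):

```python
def total_loudness_features(n_x, n_r, n_m):
    """Per-frame total-loudness differences of Equations (5)-(7).

    n_x: total loudness of the direct signal, one value per frame.
    n_r: total loudness of the reverberation signal.
    n_m: total loudness of the mixture signal.
    """
    delta_mx = [m - x for m, x in zip(n_m, n_x)]  # Eq. (5): N_m - N_x
    delta_rm = [r - m for r, m in zip(n_r, n_m)]  # Eq. (6): N_r - N_m
    delta_rx = [r - x for r, x in zip(n_r, n_x)]  # Eq. (7): N_r - N_x
    return delta_mx, delta_rm, delta_rx
```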
Table 2 shows the results obtained with the features based on the total loudness and reveals that in fact two of them, ΔNm-x[k] and ΔNr-x[k], yield predictions with nearly the same accuracy as {circumflex over (R)}e. But as shown in Table 2, even ΔNr-m[k] provides useful results.
Finally, in an additional experiment, the influence of the implementation of the spreading function is investigated. This is of particular significance for many application scenarios, because the use of level-dependent excitation patterns demands implementations of high computational complexity. Experiments with a processing similar to that for {circumflex over (R)}e, but using one loudness model without spreading and one loudness model with a level-invariant spreading function, led to the results shown in Table 2. The influence of the spreading appears to be negligible.
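A level-invariant spreading function can be sketched as a fixed convolution of the excitation pattern across frequency bands, which replaces the level-dependent spreading of the full loudness model at much lower cost. The kernel values below are illustrative placeholders, not taken from the original text:

```python
def spread_excitation(excitation, kernel=(0.1, 0.8, 0.1)):
    """Level-invariant spreading: smear the excitation pattern across
    neighbouring frequency bands with a fixed kernel, independent of
    the signal level. Bands outside the pattern are treated as zero.
    """
    n = len(excitation)
    half = len(kernel) // 2
    out = [0.0] * n
    for i in range(n):
        for j, w in enumerate(kernel):
            k = i + j - half  # neighbouring band index
            if 0 <= k < n:
                out[i] += w * excitation[k]
    return out
```

Because the kernel is level-invariant, it can be precomputed once, whereas level-dependent spreading has to be re-evaluated for every frame and band.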
Therefore, equations (5), (6) and (7) which indicate embodiments 2, 3, 4 of
Subsequently, an application of the inventive determination of a measure for the perceived level of reverberation is discussed in the context of
This gain value is input into a manipulator 805 which is configured for manipulating, in this embodiment, the reverberation signal component 806 output by the reverberator 801. As illustrated
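In a send-return topology, the manipulation reduces to scaling the reverberation signal component by the gain and adding the result to the dry signal. A minimal sketch (the function and argument names are illustrative, not from the original text):

```python
def apply_reverb_gain(dry, wet, gain):
    """Scale the reverberation component 'wet' (e.g. the output of the
    reverberator) by the gain delivered by the prediction stage and
    mix it with the dry signal, sample by sample."""
    return [d + gain * w for d, w in zip(dry, wet)]
```

In a running system, the gain would be updated over time so that the predicted perceived level of the reverberation tracks the target value.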
The present invention provides a simple and robust prediction of the perceived level of reverberation and, specifically, of late reverberation in speech and music using loudness models of varying computational complexity. The prediction models have been trained and evaluated using subjective data derived from three listening tests. As a starting point, the use of a partial loudness model has led to a prediction model with high accuracy when the T60 of the RIR 606 of
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
Some embodiments according to the invention comprise a non-transitory or tangible data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are performed by any hardware apparatus.
The above described embodiments are merely illustrative for the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the impending patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
111 71 488 | Jun 2011 | EP | regional |
This application is a continuation of copending International Application No. PCT/EP2012/053193, filed Feb. 24, 2012, which is incorporated herein by reference in its entirety, and additionally claims priority from U.S. application Ser. No. 61/448,444, filed Mar. 2, 2011 and European Application No. 11171488.7, filed Jun. 27, 2011, all of which are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
20050100171 | Reilly | May 2005 | A1 |
20070253564 | Katayama | Nov 2007 | A1 |
20070256544 | Yamada et al. | Nov 2007 | A1 |
20080069366 | Soulodre | Mar 2008 | A1 |
20080232603 | Soulodre | Sep 2008 | A1 |
20080267413 | Faller | Oct 2008 | A1 |
20110164756 | Baumgarte et al. | Jul 2011 | A1 |
20120057710 | Disch et al. | Mar 2012 | A1 |
Number | Date | Country |
---|---|---|
101341793 | Jan 2009 | CN |
1565036 | Aug 2005 | EP |
2154911 | Feb 2010 | EP |
2007271686 | Oct 2007 | JP |
2330390 | Jul 2008 | RU |
WO-2006022248 | Mar 2006 | WO |
2009039897 | Apr 2009 | WO |
2010070016 | Jun 2010 | WO |
Entry |
---|
“Algorithms to measure audio programme loudness and true-peak audio level”, International Telecommunication Union, Radiocommunication Study Groups; Document 6C/TEMP/219(Rev.2); Revision 1 to Document 6/272-E, Nov. 18, 2010, Aug. 2006, 8 pages. |
Bradter, C. et al., “Loudness calculation for individual acoustical objects within complex temporally variable sounds”, AES Convention Paper 7494, Presented at the 124th Convention, Amsterdam, The Netherlands, May 17-20, 2008, 12 pages. |
Czyzewski, et al., “A Method of Artificial Reverberation Quality Testing”, Journal of the Audio Engineering Society, vol. 38, No. 3, Mar. 1990, pp. 129-141. |
Gardner, W. et al., “Reverberation Level Matching Experiments”, In the Proceedings of the Sabine Centennial Symposium, Acoust. Soc. of Am., Jun. 1994, 15 pages. |
Glasberg, B. et al., “Development and Evaluation of a Model for Predicting the Audibility of Time-Varying Sounds in the Presence of Background Sounds”, J. Audio Eng. Soc., vol. 53, No. 10, Oct. 2005, pp. 906-918. |
Griesinger, D. , “Further Investigation Into the Loudness of Running Reverberation”, Proceedings of the Institute of Acoustics (UK) Conference, May 1995, 9 pages. |
Griesinger, D. , “How Loud is My Reverberation”, Presented at the 98th Convention, Paris, France, Feb. 25-28, 1995, 13 pages. |
Griesinger, D., “The importance of the direct to reverberant ratio in the perception of distance, localization, clarity, and envelopment”, AES Convention Paper 7724, Presented at the 126th Convention, May 7-10, 2009, 13 pages. |
Hase, S. et al., “Reverberance of an Existing Hall in Relation to Both Subsequent Reverberation Time and SPL”, Journal of Sound and Vibration, vol. 232, No. 1, Apr. 2000, pp. 149-155. |
Lee, D. et al., “Effect of listening level and background noise on the subjective decay rate of room impulse responses: Using time-varying loudness to model reverberance”, Applied Acoustics, vol. 71, May 20, 2010, pp. 801-811. |
Lee, D. et al., “Equal reverberance matching of music”, Proceedings of Acoustics 2009, Adelaide Australia, pp. 23-25, 2009, 6 pages. |
Lee, D. , “Equal reverberance matching of running musical stimuli having various reverberation times and SPLs”, Proceedings of the 20th Int'l Congress on Acoustics, Sydney, Australia, Aug. 23-27, 2010, 5 pages. |
Moore, B. et al., “A Model for the Prediction of Thresholds, Loudness, and Partial Loudness”, Journal of the Audio Engineering Society, vol. 45, No. 4, Apr. 1997, pp. 224-232. |
Moorer, J. , “About This Reverberation Business”, Computer Music Journal, vol. 3, No. 2, Jun. 1979, pp. 13-28. |
Paulus, J. , “Perceived Level of Late Reverberation in Speech and Music”, AES Convention Paper, Presented at the 130th Convention, London, UK, May 13-16, 2011, 12 pages. |
Scharf, B. , “Fundamentals of Auditory Masking”, Audiology, vol. 10; presented at the Round Table on Auditory Masking at the Tenth International Congress of Audiology in Dallas, Texas., Oct. 14, 1970, pp. 30-40. |
Tsilfidis, et al., “Blind single-channel suppression of late reverberation based on perceptual reverberation modeling”, J. Acoust. Soc. Am., vol. 129 (3), Mar. 2011, pp. 1439-1451. |
Uhle, C. et al., “Ambience Separation from Mono recordings using Non-negative Matrix Factorization”, AES 30th Int'l Conference, Saariselka, Finland, Mar. 15-17, 2007, 8 pages. |
Verhey, J. et al., “Einfluss der Zeitstruktur des Hintergrundes auf die Tonhaltigkeit und Lautheit des tonalen Vordergrundes”, In Proceedings of DAGA 2010, Berlin, Germany, Mar. 15-18, 2010, pp. 595-596. |
Number | Date | Country | |
---|---|---|---|
20140072126 A1 | Mar 2014 | US |
Number | Date | Country | |
---|---|---|---|
61448444 | Mar 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2012/053193 | Feb 2012 | US |
Child | 14016066 | US |