The invention relates to audio signal processing in general and to improving clarity of dialog and narrative in surround entertainment audio in particular.
Unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
Modern entertainment audio with multiple, simultaneous channels of audio (surround sound) provides audiences with immersive, realistic sound environments of immense entertainment value. In such environments many sound elements such as dialog, music, and effects are presented simultaneously and compete for the listener's attention. For some members of the audience—especially those with diminished auditory sensory abilities or slowed cognitive processing—dialog and narrative may be hard to understand during parts of the program where loud competing sound elements are present. During those passages these listeners would benefit if the level of the competing sounds were lowered.
The recognition that music and effects can overpower dialog is not new and several methods to remedy the situation have been suggested. However, as will be outlined next, the suggested methods are either incompatible with current broadcast practice, exert an unnecessarily high toll on the overall entertainment experience, or do both.
It is a commonly adhered-to convention in the production of surround audio for film and television to place the majority of dialog and narrative into only one channel (the center channel, also referred to as the speech channel). Music, ambiance sounds, and sound effects are typically mixed into both the speech channel and all remaining channels (e.g., Left [L], Right [R], Left Surround [Ls], and Right Surround [Rs], also referred to as the non-speech channels). As a result, the speech channel carries the majority of speech and a significant amount of the non-speech audio contained in the audio program, whereas the non-speech channels carry predominantly non-speech audio, but may also carry a small amount of speech. One simple approach to aiding the perception of dialog and narrative in these conventional mixes is to permanently reduce the level of all non-speech channels relative to the level of the speech channel, for example by 6 dB. This approach is simple and effective and is practiced today (e.g., SRS [Sound Retrieval System] Dialog Clarity or modified downmix equations in surround decoders). However, it suffers from at least one drawback: the constant attenuation of the non-speech channels may lower the level of quiet ambiance sounds that do not interfere with speech reception to the point where they can no longer be heard. By attenuating non-interfering ambiance sounds, the aesthetic balance of the program is altered without any attendant benefit for speech understanding.
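The fixed-attenuation approach described above can be sketched in a few lines. This is a minimal illustration only: the block-based channel dictionary, the function name, and the choice of "C" as the center-channel key are assumptions; the 6 dB figure follows the text.

```python
import numpy as np

def attenuate_non_speech(channels, attenuation_db=6.0):
    """Apply a fixed attenuation to every non-speech channel.

    `channels` maps channel names to sample arrays; the center ("C")
    channel is treated as the speech channel and passed through.
    """
    gain = 10.0 ** (-attenuation_db / 20.0)  # 6 dB -> ~0.501 linear
    out = {}
    for name, samples in channels.items():
        out[name] = samples if name == "C" else samples * gain
    return out
```

Because the gain is constant, quiet ambiance in the non-speech channels is attenuated just as much as loud effects, which is exactly the drawback noted above.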
An alternative solution is described in a series of patents (U.S. Pat. No. 7,266,501, U.S. Pat. No. 6,772,127, U.S. Pat. No. 6,912,501, and U.S. Pat. No. 6,650,755) by Vaudrey and Saunders. As understood, their approach involves modifying content production and distribution. According to that arrangement, the consumer receives two separate audio signals. The first of these signals comprises the "Primary Content" audio. In many cases this signal will be dominated by speech but, if the content producer desires, may contain other signal types as well. The second signal comprises the "Secondary Content" audio, which is composed of all the remaining sound elements. The user is given control over the relative levels of these two signals, either by manually adjusting the level of each signal or by automatically maintaining a user-selected power ratio. Although this arrangement can limit the unnecessary attenuation of non-interfering ambiance sounds, its widespread deployment is hindered by its incompatibility with established production and distribution methods.
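Automatically maintaining a user-selected power ratio between the two signals, as described above, could be sketched as follows. The function name, the 12 dB default, and the block-wise power estimate are illustrative assumptions, not the patented arrangement.

```python
import numpy as np

def scale_secondary(primary, secondary, target_ratio_db=12.0, eps=1e-12):
    """Scale the secondary signal so that the primary-to-secondary
    power ratio equals the user-selected target (in dB)."""
    p_pri = np.mean(primary ** 2)
    p_sec = np.mean(secondary ** 2) + eps   # guard against silence
    target_lin = 10.0 ** (target_ratio_db / 10.0)
    gain = np.sqrt(p_pri / (p_sec * target_lin))
    return secondary * gain
```

With equal-power inputs and a 20 dB target, the secondary signal is scaled so that its power is one hundredth of the primary's.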
Another example of a method to manage the relative levels of speech and non-speech audio has been proposed by Bennett in U.S. Application Publication No. 20070027682.
All the examples of the background art share the limitation of not providing any means for minimizing the effect the dialog enhancement has on the listening experience intended by the content creator, among other deficiencies. It is therefore the object of the present invention to provide a means of limiting the level of non-speech audio channels in a conventionally mixed multi-channel entertainment program so that speech remains comprehensible while also maintaining the audibility of the non-speech audio components.
Thus, there is a need for improved ways of maintaining speech audibility. The present invention solves these and other problems by providing an apparatus and method of improving speech audibility in a multi-channel audio signal.
Embodiments of the present invention improve speech audibility. In one embodiment the present invention includes a method of improving audibility of speech in a multi-channel audio signal. The method includes comparing a first characteristic and a second characteristic of the multi-channel audio signal to generate an attenuation factor. The first characteristic corresponds to a first channel of the multi-channel audio signal that contains speech and non-speech audio, and the second characteristic corresponds to a second channel of the multi-channel audio signal that contains predominantly non-speech audio. The method further includes adjusting the attenuation factor according to a speech likelihood value to generate an adjusted attenuation factor. The method further includes attenuating the second channel using the adjusted attenuation factor.
A first aspect of the invention is based on the observation that the speech channel of a typical entertainment program carries a non-speech signal for a substantial portion of the program duration. Consequently, according to this first aspect of the invention, masking of speech audio by non-speech audio may be controlled by (a) determining the attenuation of the signal in a non-speech channel that is necessary to keep the ratio of the signal power in the non-speech channel to the signal power in the speech channel from exceeding a predetermined threshold, (b) scaling the attenuation by a factor that is monotonically related to the likelihood of the signal in the speech channel being speech, and (c) applying the scaled attenuation.
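Steps (a) through (c) can be illustrated with a short sketch. This is a minimal block-based interpretation; the short-time power estimates, the 0 dB default threshold, and the function signature are assumptions.

```python
import numpy as np

def non_speech_gain(speech, non_speech, threshold_db=0.0,
                    speech_likelihood=1.0, eps=1e-12):
    """Linear gain for one block of a non-speech channel.

    (a) find the attenuation (dB) that keeps the non-speech-to-speech
        power ratio at or below `threshold_db`;
    (b) scale that attenuation by the speech likelihood (0..1);
    (c) return the corresponding linear gain (<= 1).
    """
    p_speech = np.mean(speech ** 2) + eps
    p_other = np.mean(non_speech ** 2) + eps
    ratio_db = 10.0 * np.log10(p_other / p_speech)
    atten_db = max(0.0, ratio_db - threshold_db)  # (a) no boost, only cut
    atten_db *= speech_likelihood                 # (b) back off when not speech
    return 10.0 ** (-atten_db / 20.0)             # (c) dB -> linear gain
```

When the speech-likelihood factor is zero, the attenuation collapses to 0 dB and the non-speech channel passes unmodified, which is the behavior the first aspect calls for during music-only or effects-only passages.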
A second aspect of the invention is based on the observation that the ratio between the power of the speech signal and the power of the masking signal is a poor predictor of speech intelligibility. Consequently, according to this second aspect of the invention, the attenuation of the signal in the non-speech channel that is necessary to maintain a predetermined level of intelligibility is calculated by predicting the intelligibility of the speech signal in the presence of the non-speech signals with a psycho-acoustically based intelligibility prediction model.
A third aspect of the invention is based on the observations that, if attenuation is allowed to vary across frequency, (a) a given level of intelligibility can be achieved with a variety of attenuation patterns, and (b) different attenuation patterns can yield different levels of loudness or salience of the non-speech audio. Consequently, according to this third aspect of the invention, masking of speech audio by non-speech audio is controlled by finding the attenuation pattern that maximizes loudness or some other measure of salience of the non-speech audio under the constraint that a predetermined level of predicted speech intelligibility is achieved.
The embodiments of the present invention may be performed as a method or process. The methods may be implemented by electronic circuitry, as hardware or software or a combination thereof. The circuitry used to implement the process may be dedicated circuitry (that performs only a specific task) or general circuitry (that is programmed to perform one or more specific tasks).
The following detailed description and accompanying drawings provide a better understanding of the nature and advantages of the present invention.
Described herein are techniques for maintaining speech audibility. In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.
Various methods and processes are described below. That they are described in a certain order is mainly for ease of presentation. It is to be understood that particular steps may be performed in other orders or in parallel as desired according to various implementations. When a particular step must precede or follow another, such will be pointed out specifically when not evident from the context.
The principle of the first aspect of the invention is illustrated in FIG. 1.
Because there is a unique relation between a measure expressed on a logarithmic scale (dB) and that same measure expressed on a linear scale, a circuit that is equivalent to that of FIG. 1 may be constructed in which the computations are performed on linear representations of the signal powers.
One noteworthy feature of the first aspect of the invention is to scale the gain thus derived by a value monotonically related to the likelihood of the signal in the speech channel in fact being speech. Still referring to FIG. 1, this scaling is controlled by a speech likelihood value (113).
Those skilled in the art will easily recognize how the arrangement can be extended to any number of input channels.
The principle of the second aspect of the invention is illustrated in FIG. 2.
The power spectra are fed into comparison circuit 204. The purpose of this block is to determine the attenuation to be applied to each non-speech channel to ensure that the signal in the non-speech channel does not reduce the intelligibility of the signal in the speech channel below a predetermined criterion. This functionality is achieved by employing an intelligibility prediction circuit (205 and 206) that predicts speech intelligibility from the power spectra of the speech signal (201) and non-speech signals (202 and 203). The intelligibility prediction circuits 205 and 206 may implement a suitable intelligibility prediction model according to design choices and tradeoffs. Examples are the Speech Intelligibility Index as specified in ANSI S3.5-1997 ("Methods for Calculation of the Speech Intelligibility Index") and the Speech Recognition Sensitivity model of Muesch and Buus ("Using statistical decision theory to predict speech intelligibility. I. Model structure," Journal of the Acoustical Society of America, 2001, Vol. 109, pp. 2896-2909). It is clear that the output of the intelligibility prediction model has no meaning when the signal in the speech channel is something other than speech. Despite this, in what follows the output of the intelligibility prediction model will be referred to as the predicted speech intelligibility. This potential error is accounted for in subsequent processing by scaling the gain values output from the comparison circuit 204 with a parameter that is related to the likelihood of the signal being speech (113).
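A heavily simplified, SII-flavored prediction might look like the following sketch. It borrows only the broad shape of ANSI S3.5 (per-band SNR clipped to ±15 dB, mapped to a 0–1 audibility value, combined with band-importance weights); the standard's specific band definitions, importance functions, and level corrections are omitted, and the weights here are illustrative.

```python
import numpy as np

def predict_intelligibility(speech_band_power, noise_band_power,
                            band_importance, eps=1e-12):
    """Simplified SII-style prediction.

    Per-band SNR is clipped to +/-15 dB, mapped linearly to a 0..1
    audibility value, and weighted by band importance (weights should
    sum to 1, so the result also lies in 0..1).
    """
    snr_db = 10.0 * np.log10((speech_band_power + eps) /
                             (noise_band_power + eps))
    audibility = np.clip((snr_db + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(band_importance * audibility))
```

As the text notes, the number this returns is only meaningful while the speech channel actually carries speech.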
The intelligibility prediction models have in common that they predict either increased or unchanged speech intelligibility as the result of lowering the level of the non-speech signal. Continuing on in the process flow of FIG. 2, the comparison circuit 204 exploits this monotonic relation by reducing the gains applied to the non-speech channels until the predicted intelligibility meets the predetermined criterion.
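One plausible realization of such a monotonic search, offered as an illustration rather than the circuit's actual design: lower the non-speech band powers in fixed dB steps until a supplied prediction function reaches the criterion. The step size, the attenuation cap, and the generic `predict` callable are all assumptions.

```python
def attenuation_for_criterion(predict, speech_pow, noise_pow,
                              criterion=0.5, step_db=1.0, max_atten_db=18.0):
    """Smallest attenuation (dB, in `step_db` increments) of the
    non-speech band powers for which `predict` reaches `criterion`.

    `predict(speech_pow, noise_pow)` must not decrease as the noise
    power is lowered (the monotonicity noted in the text).
    """
    atten_db = 0.0
    while atten_db < max_atten_db:
        scale = 10.0 ** (-atten_db / 10.0)  # power-domain scaling
        if predict(speech_pow, [p * scale for p in noise_pow]) >= criterion:
            break
        atten_db += step_db
    return atten_db
```

Because the prediction can only improve as the noise is lowered, the first attenuation that satisfies the criterion is also the smallest, so the non-speech audio is disturbed no more than necessary.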
Continuing on in the process flow of FIG. 2, the gains derived by the comparison circuit 204 are scaled by the speech likelihood parameter (113) and applied to the non-speech channels.
The principle of the third aspect of the invention is illustrated in FIG. 3.
Describing now the side-branch path of the process of FIG. 3, the optimization circuits (307, 308) determine the gain to be applied to each frequency band of the non-speech channels.
Depending on the computational resources available and the constraints imposed, the form and complexity of the optimization circuits (307, 308) may vary greatly. According to one embodiment an iterative, multidimensional constrained optimization of N free parameters is used. Each parameter represents the gain applied to one of the frequency bands of the non-speech channel. Standard techniques, such as following the steepest gradient in the N-dimensional search space may be applied to find the maximum. In another embodiment, a computationally less demanding approach constrains the gain-vs.-frequency functions to be members of a small set of possible gain-vs.-frequency functions, such as a set of different spectral gradients or shelf filters. With this additional constraint the optimization problem can be reduced to a small number of one-dimensional optimizations. In yet another embodiment an exhaustive search is made over a very small set of possible gain functions. This latter approach might be particularly desirable in real-time applications where a constant computational load and search speed are desired.
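The exhaustive-search variant can be sketched as below. The candidate gain vectors, the generic intelligibility `predict` callable, and the compressed power sum standing in for a proper loudness model are all illustrative assumptions.

```python
import numpy as np

def best_gain_vector(speech_pow, noise_pow, predict, criterion, candidates):
    """Exhaustive search over a small set of per-band gain vectors.

    Among candidates (linear power gains, one value per band) whose
    attenuated non-speech spectrum still satisfies the intelligibility
    criterion, return the one that leaves the non-speech audio loudest.
    Loudness is approximated by a sum of band powers compressed with an
    exponent of 0.3 -- a crude stand-in for a loudness model.
    """
    best, best_loudness = None, -np.inf
    for g in candidates:
        shaped = noise_pow * g
        if predict(speech_pow, shaped) < criterion:
            continue                      # intelligibility constraint not met
        loudness = np.sum(shaped ** 0.3)  # salience of the remaining audio
        if loudness > best_loudness:
            best, best_loudness = g, loudness
    return best
```

Because the candidate set is fixed and small, the computational load and search time are constant, which matches the real-time motivation given above.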
Those skilled in the art will easily recognize additional constraints that might be imposed on the optimization according to additional embodiments of the present invention. One example is restricting the loudness of the modified non-speech channel to be not larger than the loudness before modification. Another example is imposing a limit on the gain differences between adjacent frequency bands in order to limit the potential for temporal aliasing in the reconstruction filter bank (313, 314) or to reduce the possibility for objectionable timbre modifications. Desirable constraints depend both on the technical implementation of the filter bank and on the chosen tradeoff between intelligibility improvement and timbre modification. For clarity of illustration, these constraints are omitted from FIG. 3.
Continuing on in the process flow of FIG. 3, the optimized gains are applied to the frequency bands of the non-speech channels, and the modified signals are reassembled by the reconstruction filter bank (313, 314).
In the above description, the terms “speech” (or speech audio or speech channel or speech signal) and “non-speech” (or non-speech audio or non-speech channel or non-speech signal) are used. A skilled artisan will recognize that these terms are used more to differentiate from each other and less to be absolute descriptors of the content of the channels. For example, in a restaurant scene in a film, the speech channel may predominantly contain the dialogue at one table and the non-speech channels may contain the dialogue at other tables (hence, both contain “speech” as a layperson uses the term). Yet it is the dialogue at other tables that certain embodiments of the present invention are directed toward attenuating.
Implementation
The invention may be implemented in hardware or software, or a combination of both (e.g., programmable logic arrays). Unless otherwise specified, the algorithms included as part of the invention are not inherently related to any particular computer or other apparatus. In particular, various general-purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct more specialized apparatus (e.g., integrated circuits) to perform the required method steps. Thus, the invention may be implemented in one or more computer programs executing on one or more programmable computer systems each comprising at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device or port, and at least one output device or port. Program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion.
Each such program may be implemented in any desired computer language (including machine, assembly, or high level procedural, logical, or object oriented programming languages) to communicate with a computer system. In any case, the language may be a compiled or interpreted language.
Each such computer program is preferably stored on or downloaded to a storage media or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer system to perform the procedures described herein. The inventive system may also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein.
The above description illustrates various embodiments of the present invention along with examples of how aspects of the present invention may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present invention as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents will be evident to those skilled in the art and may be employed without departing from the spirit and scope of the invention as defined by the claims.
This application claims the benefit of priority of U.S. Provisional Patent Application No. 61/046,271, filed Apr. 18, 2008, hereby incorporated by reference in its entirety.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US2009/040900 | 4/17/2009 | WO | 00 | 10/15/2010 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2010/011377 | 1/28/2010 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5046097 | Lowe | Sep 1991 | A |
5105462 | Lowe | Apr 1992 | A |
5208860 | Lowe | May 1993 | A |
5212733 | DeVitt | May 1993 | A |
5375188 | Serikawa | Dec 1994 | A |
5956674 | Smyth et al. | Sep 1999 | A |
6311155 | Vaudrey | Oct 2001 | B1 |
6442278 | Vaudrey | Aug 2002 | B1 |
6487535 | Smyth et al. | Nov 2002 | B1 |
6650755 | Vaudrey et al. | Nov 2003 | B2 |
6697491 | Griesinger | Feb 2004 | B1 |
6772127 | Saunders | Aug 2004 | B2 |
6912501 | Vaudrey | Jun 2005 | B2 |
6914988 | Irwan et al. | Jul 2005 | B2 |
7050966 | Schneider et al. | May 2006 | B2 |
7076071 | Katz | Jul 2006 | B2 |
7107211 | Griesinger | Sep 2006 | B2 |
7251337 | Jacobs | Jul 2007 | B2 |
7260231 | Wedge | Aug 2007 | B1 |
7261182 | Zainea | Aug 2007 | B2 |
7266501 | Saunders | Sep 2007 | B2 |
7376558 | Gemello et al. | May 2008 | B2 |
7551745 | Gundry | Jun 2009 | B2 |
8144881 | Crockett et al. | Mar 2012 | B2 |
8194889 | Seefeldt | Jun 2012 | B2 |
8199933 | Seefeldt | Jun 2012 | B2 |
20020013698 | Vaudrey | Jan 2002 | A1 |
20030002683 | Vaudrey et al. | Jan 2003 | A1 |
20030044032 | Irwan | Mar 2003 | A1 |
20030112088 | Bizjak | Jun 2003 | A1 |
20040042626 | Balan | Mar 2004 | A1 |
20040213420 | Gundry et al. | Oct 2004 | A1 |
20050071028 | Yuen | Mar 2005 | A1 |
20050117762 | Sakurai | Jun 2005 | A1 |
20050232445 | Vaudrey | Oct 2005 | A1 |
20070027682 | Bennett | Feb 2007 | A1 |
20070076902 | Master | Apr 2007 | A1 |
20100121634 | Muesch | May 2010 | A1 |
20110054887 | Muesch | Mar 2011 | A1 |
20110150233 | Gautama | Jun 2011 | A1 |
Number | Date | Country |
---|---|---|
101151659 | Mar 2008 | CN |
0517233 | Dec 1992 | EP |
0637011 | Feb 1995 | EP |
0645756 | Mar 1995 | EP |
2003-084790 | Sep 2003 | JP |
2006-072130 | Mar 2006 | JP |
2163032 | Feb 2001 | RU |
9912386 | Mar 1999 | WO |
03022003 | Mar 2003 | WO |
03028407 | Apr 2003 | WO |
2007120453 | Oct 2007 | WO |
2008032209 | Mar 2008 | WO |
2008031611 | Mar 2008 | WO |
Entry |
---|
Shirley, et al., “Measurement of speech intelligibility in noise: A comparison of a stereo image source and a central loudspeaker source”, Audio Engineering Society, Convention Paper 6372, presented at the 118th Convention, May 28-31, 2005 in Barcelona, Spain, pp. 1-6. |
Vinton, et al., “Automated Speech/Other Discrimination for Loudness Monitoring”, Audio Engineering Society, Convention Paper 6437, presented at the 118th Convention, May 28-31, 2005 in Barcelona, Spain; pp. 1-11. |
Goodwin, et al., “A Dynamic Programming Approach to Audio Segmentation and Speech/Music Discrimination”, International Conference on Acoustics on May 17-21, 2004, Fairmont Queen Elizabeth Hotel, Montreal, Quebec, Canada; vol. 4 of 5, pp. IV-309-IV-312. |
Avendano, et al., “Ambience Extraction and Synthesis From Stereo Signals for Multi-Channel Audio Up-Mix”, Acoustics, Speech, and Signal Processing, 2002, vol. 2, pp. 1957-1960. |
Pollack, et al., “Stereophonic Listening and Speech Intelligibility against Voice Babble”, The Journal of the Acoustical Society of America, vol. 30, No. 2, Feb. 1958, pp. 131-133. |
Number | Date | Country | |
---|---|---|---|
20110054887 A1 | Mar 2011 | US |
Number | Date | Country | |
---|---|---|---|
61046271 | Apr 2008 | US |