The present disclosure relates generally to audio presentation and, in particular, to distortion reduction during audio playback.
Many audio playback systems contain amplifiers and speakers with limited output capabilities. Mobile phones and tablets are two extreme examples, where the design is rigidly constrained by the dimensions and power requirements of the device. In such systems it is common for the audio to distort as the playback level is increased, and the characteristics of this distortion are often frequency dependent. It is therefore common practice to apply multi-band compression to the audio signal prior to playback to reduce distortion and to maximize the playback level of the device. A distortion threshold is specified for each frequency band of the signal, and a compressor applies an independent gain to each band to ensure that the signal level in each band does not exceed the corresponding distortion threshold. A problem with such a compressor is that the gains required for distortion reduction are content dependent. The thresholds set to eliminate perceived distortion for a narrowband signal are often more restrictive than what is required for broadband signals, since a broadband signal may significantly mask some of the distortion it induces, whereas a narrowband signal is much less effective at masking its induced distortion. To address this problem, the applicant previously proposed a multi-band compressor augmented with a distortion audibility model that produces an audibility measure, which is then used to dynamically modify the thresholds of the compressor so as to achieve maximum playback level with minimal perceived distortion, as illustrated in the accompanying figures.
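By way of illustration only, the following Python sketch shows the baseline arrangement described above: a fixed distortion threshold per band and an independent limiting gain per band. The FFT-based band split, the block size, and the threshold values are assumptions made for the sketch, not details taken from this disclosure.

```python
# Minimal sketch of the baseline multi-band compressor described above:
# fixed per-band distortion thresholds, independent per-band limiting gains.
# The band split, block size and threshold values are illustrative assumptions.
import numpy as np

def band_split(x, num_bands=4):
    """Split a signal block into band components with a crude FFT brick-wall
    filterbank (a stand-in for a real filterbank)."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), num_bands + 1, dtype=int)
    bands = []
    for b in range(num_bands):
        Xb = np.zeros_like(X)
        Xb[edges[b]:edges[b + 1]] = X[edges[b]:edges[b + 1]]
        bands.append(np.fft.irfft(Xb, n=len(x)))
    return bands

def fixed_threshold_compressor(x, thresholds_db):
    """Apply an independent gain to each band so that its level does not
    exceed the corresponding fixed distortion threshold."""
    bands = band_split(x, num_bands=len(thresholds_db))
    out = np.zeros_like(x)
    for xb, L_db in zip(bands, thresholds_db):
        level_db = 10 * np.log10(np.mean(xb ** 2) + 1e-12)  # band level in dB
        gain_db = min(0.0, L_db - level_db)                  # attenuate only
        out += xb * 10 ** (gain_db / 20)
    return out

# Example: a 1 kHz tone processed with illustrative per-band thresholds.
fs = 16000
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(1024) / fs)
y = fixed_threshold_compressor(x, thresholds_db=[-20.0, -15.0, -12.0, -10.0])
```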
The present application introduces a scene switch analyzer that determines whether a scene switch has occurred in the input audio signal and uses this determination to guide the distortion audibility model. The scene switch analyzer ensures that rapid changes of the compressor thresholds happen only at the moments when the scene switches, giving a more natural listening experience. Generally, a scene switch occurs when a passage of content consists of narrowband signals and the following passage consists of broadband signals, or vice versa. For example, if vocals come in after a piano solo, this is considered a scene switch, and the compressor thresholds may change rapidly as the distortion audibility measure changes. A scene switch also occurs when one piece of content consists of narrowband signals and the next piece of content in the playlist consists of broadband signals, or vice versa; for example, a low-quality narrowband user-generated content (UGC) clip followed by professional broadband content.
Hence, when there is no scene switch in the input audio signal, slow smoothing of the dynamic compressor thresholds is applied so that they change slowly. This can be achieved by using a large attack time constant and/or release time constant in the one-pole smoother used for the smoothing. When a scene switch is detected, fast smoothing is applied to allow a rapid change of the compressor thresholds, by using a smaller attack time constant and/or release time constant in the smoother.
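As a concrete illustration of this smoothing behaviour, the sketch below applies a fast-attack/slow-release one-pole smoother to a threshold track and switches between slow and fast time constants when a scene switch is flagged. The coefficient formula alpha = exp(-1 / (tau * rate)), the example time constants, and the convention that an "attack" corresponds to a dropping threshold are assumptions made for illustration.

```python
# Sketch: one-pole fast-attack/slow-release smoothing of a compressor threshold,
# with slow time constants normally and fast time constants at a scene switch.
# The coefficient formula and the example time constants are assumptions.
import math

def one_pole_coeff(time_constant_s, rate_hz):
    """Convert a time constant into a one-pole smoothing coefficient."""
    return math.exp(-1.0 / (time_constant_s * rate_hz))

def smooth_thresholds(d, rate_hz, scene_switch,
                      attack_s_slow=0.5, release_s_slow=2.0,
                      attack_s_fast=0.05, release_s_fast=0.2):
    """d: unsmoothed per-band thresholds over time (one band, in dB).
       scene_switch: booleans, True where a scene switch is detected."""
    D, prev = [], d[0]
    for dn, switch in zip(d, scene_switch):
        a_att = one_pole_coeff(attack_s_fast if switch else attack_s_slow, rate_hz)
        a_rel = one_pole_coeff(release_s_fast if switch else release_s_slow, rate_hz)
        alpha = a_att if dn < prev else a_rel   # assumed: attack = threshold dropping
        prev = alpha * prev + (1 - alpha) * dn
        D.append(prev)
    return D

# Example: the threshold drop at index 1 is tracked quickly because a switch is flagged.
D = smooth_thresholds(d=[-10.0, -30.0, -30.0, -30.0], rate_hz=100,
                      scene_switch=[False, True, True, False])
```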
In some implementations, a scene switch analyzer receives an input audio signal having a plurality of frequency band components. The scene switch analyzer determines whether a scene switch has occurred in the input audio signal, and the frequency band components of the input audio signal are processed accordingly. In response to determining that a scene switch has not occurred, a distortion audibility model applies slow smoothing to the compressor thresholds of the frequency band components. In response to determining that a scene switch has occurred, the distortion audibility model applies fast smoothing, or no smoothing, to the compressor thresholds of the frequency band components.
In some implementations, the scene switch includes a switch between a broadband signal and a narrowband signal, or vice versa. For example, the broadband signal may correspond to a vocal sound or professional movie content, and the narrowband signal may correspond to an instrumental sound, e.g., a piano sound, or to low-quality narrowband UGC content.
In some implementations, determining whether a scene switch has occurred in the input audio signal is based on all frequency band components of the input audio signal. For example, the determination is based on a time-varying estimate of the centroid of the signal power spectrum, or on an estimate of the cutoff band of the signal power spectrum, where the signal power spectrum is estimated by smoothing each frequency band component signal. Specifically, the scene switch analyzer computes the time-varying estimate of the signal power spectrum centroid by performing operations including estimating the signal power spectrum by smoothing each frequency band component signal and determining the centroid of the signal power spectrum from the estimated signal power spectrum. Determining whether the scene switch has occurred in the input audio signal can then include the following operations: smoothing the centroid; determining a difference between the centroid and the smoothed centroid; and determining whether the scene switch has occurred based on whether the difference satisfies a threshold. Similarly, the scene switch analyzer can compute the estimate of the cutoff band of the signal power spectrum by performing operations including estimating the signal power spectrum by smoothing each frequency band component signal and determining the cutoff band of the signal power spectrum from the estimated signal power spectrum. Determining whether the scene switch has occurred in the input audio signal can then include the following operations: smoothing the cutoff band; determining a difference between the cutoff band and the smoothed cutoff band; and determining whether the scene switch has occurred based on whether the difference satisfies a threshold.
In some implementations, after determining whether the scene switch has occurred, the scene switch analyzer provides one or more control signals to the distortion audibility model to guide the smoothing of the compressor thresholds of the frequency band components of the input audio signal. In addition, in some implementations, the one or more control signals guide the change of the time constants, including the attack time constant and/or the release time constant. In some implementations, the one or more control signals are produced by a function mapped to the range [0, 1], which can be a step function or a sigmoid function.
In some implementations, a scene switch analyzer for determining whether a scene switch has occurred in the input audio signal includes one or more computing devices operable to cause some or all of the operations described above to be performed.
In some implementations, a computer-readable medium stores instructions executable by one or more processors to cause some or all of the operations described above to be performed.
The included figures are for illustrative purposes and serve only to provide examples of possible operations for the disclosed methods, systems, and computer-readable media. These figures in no way limit any changes in form and detail that may be made by one skilled in the art without departing from the spirit and scope of the disclosed implementations.
As mentioned above, a multi-band compressor augmented with a distortion audibility model is currently used to produce an audibility measure, which is then used to dynamically modify the thresholds of the compressor so as to achieve maximum playback level with minimal perceived distortion. A plurality of dynamic (time-varying) thresholds is determined from the plurality of frequency band components, wherein each time-varying threshold corresponds to a respective frequency band component. The compressor then performs a compression operation on each frequency band component using the corresponding time-varying threshold to produce a gain for each frequency band component. However, the problem with such a distortion-audibility-model-augmented compressor is that, when it is applied to mobile devices whose dimensions are rigidly limited, the perceived distortion for a narrowband signal is harder to eliminate, so the threshold set for narrowband signals is often much lower than that required for broadband signals. As a result, a small change in the distortion audibility measure can cause a large threshold change, resulting in a considerable change in output volume. When such a rapid and pronounced change occurs at an unexpected moment, it has a negative impact on the listening experience.
To address this problem, the present application discloses techniques that incorporate a scene switch analyzer configured to guide a distortion audibility model in smoothing the dynamic (time-varying) thresholds, which can then be applied by a multi-band compressor. Some examples of methods, systems, and computer-readable media implementing these techniques for dynamically adjusting the thresholds of a compressor responsive to an input audio signal are disclosed as follows.
The input audio signal x[n] is first split into a plurality of frequency band components x_1[n]–x_B[n] by convolving it with a bank of band filters h_b[n], as represented in Equation (1):

x_b[n] = h_b[n] * x[n],  b = 1 … B   (1)
The frequency band components x_1[n]–x_B[n] are fed into a scene switch analyzer (SSA) 108, which produces one or more control signals C_k[n], as represented in Equation (2):
C_k[n] = SSA({x_i[n] | i = 1 … B})   (2)
Next, the one or more control signals C_k[n] are fed into a distortion audibility model (DAM) 112, which is guided to compute each time-varying threshold D_b[n] from all of the frequency band components x_1[n]–x_B[n] and the fixed thresholds L_b across bands b = 1 … B, as represented in Equation (3):
D_b[n] = DAM({x_i[n], L_i, C_k[n] | i = 1 … B})   (3)
In some implementations, the scene switch analyzer 108 creates only one control signal to guide the computation of all of the time-varying thresholds D_b[n] for all frequency band components x_1[n]–x_B[n]. In other implementations, the scene switch analyzer 108 creates a plurality of control signals for this purpose; for example, the number of control signals may correspond to the number of frequency band components. Next, each frequency band component is passed into a compression function (CF) 116 along with the limit thresholds D_b[n] to create the time-varying gains g_b[n], as represented in Equation (4):
g_b[n] = CF(x_b[n], D_b[n])   (4)
Finally, the processed output signal y[n] is computed by summing delayed versions of all of the frequency band components x_1[n]–x_B[n], each multiplied by its corresponding gain g_1[n]–g_B[n].
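The overall flow of Equations (1) through (4) can be sketched as follows. The bodies of scene_switch_analyzer and distortion_audibility_model below are toy stand-ins, not the actual scene switch analyzer 108 or distortion audibility model 112 of this disclosure; only the way the pieces are connected follows the description above.

```python
# Illustrative sketch of the flow of Equations (1)-(4); the SSA and DAM bodies
# are simplified stand-ins, not the analyzer or model of this disclosure.
import numpy as np

def scene_switch_analyzer(bands, state):
    """Toy stand-in for SSA 108 (Eq. (2)): flag a switch when the share of
    energy in the upper half of the bands jumps relative to its running value."""
    hi = sum(np.mean(b ** 2) for b in bands[len(bands) // 2:])
    tot = sum(np.mean(b ** 2) for b in bands) + 1e-12
    ratio = hi / tot
    smoothed = state.get("ratio", ratio)
    state["ratio"] = 0.95 * smoothed + 0.05 * ratio
    return 1.0 if abs(ratio - smoothed) > 0.2 else 0.0

def distortion_audibility_model(bands, fixed_db, c, state):
    """Toy stand-in for DAM 112 (Eq. (3)): relax the fixed thresholds L_b when
    the content is broadband, smoothing slowly unless c flags a scene switch."""
    active = sum(np.mean(b ** 2) > 1e-6 for b in bands) / len(bands)
    target = [L + 6.0 * active for L in fixed_db]     # unsmoothed d_b[n]
    prev = state.get("D", target)
    alpha = 0.5 if c >= 0.5 else 0.99                 # fast vs. slow smoothing
    D = [alpha * p + (1 - alpha) * t for p, t in zip(prev, target)]
    state["D"] = D
    return D

def compression_gain(xb, Db_db):
    """CF 116 (Eq. (4)): limit the band level to the time-varying threshold."""
    level_db = 10 * np.log10(np.mean(xb ** 2) + 1e-12)
    return 10 ** (min(0.0, Db_db - level_db) / 20)

def process_block(x, filters, fixed_thresholds_db, state, delay=0):
    bands = [np.convolve(x, h)[:len(x)] for h in filters]                 # Eq. (1)
    c = scene_switch_analyzer(bands, state)                               # Eq. (2)
    D = distortion_audibility_model(bands, fixed_thresholds_db, c, state) # Eq. (3)
    y = np.zeros_like(x)
    for xb, Db in zip(bands, D):
        y += np.roll(xb, delay) * compression_gain(xb, Db)  # Eq. (4) and summation
    return y

# Example: one block through two crude bands (low-pass average, first difference).
fs = 16000
x = 0.5 * np.sin(2 * np.pi * 3000 * np.arange(1024) / fs)
filters = [np.ones(8) / 8, np.array([1.0, -1.0])]
y = process_block(x, filters, fixed_thresholds_db=[-20.0, -15.0], state={})
```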
Therefore, rather than the thresholds being decided solely by the DAM, the SSA also takes the frequency band components x_1[n]–x_B[n] and, based on its analysis, issues one or more control signals C_k[n] that control how the DAM smooths D_b[n]. For example, in a prior compressor the attack and release time constants of a typical fast-attack/slow-release one-pole smoother applied to D_b[n] would be fixed; here, C_k[n] guides the change of the time constants, giving smaller time constants during a scene switch to allow rapid changes, and larger time constants when there is no scene switch to smooth out fluctuations.
where α_A is the attack time constant and α_R is the release time constant of a fast-attack/slow-release one-pole smoother. The signal power spectrum s_b[n] estimated in this way is then expressed in dB, as represented in Equation (7):
S_b[n] = 10 log10(s_b[n])   (7)
Next, at 308, the centroid of the signal power spectrum C[n] is determined from the estimated signal power spectrum, as represented in Equation (8):
where f_b is the center frequency of band b and, preferably, the fixed offset of 130 dB is chosen so that all potentially audible signals, generally those louder than −130 dB, are counted in the signal power spectrum. Then, at 312, the centroid of the signal power spectrum is also smoothed with a fast-attack/slow-release one-pole smoother to obtain the smoothed centroid C_s[n], as represented in Equation (9):
Next, at 316, the difference between the centroid C[n] and the smoothed centroid C_s[n] is determined and then compared with a threshold (preferably a threshold of 500 Hz, which is effective for indicating the occurrence of a scene switch) to produce one or more control signals C_k[n], which can be mapped to the range [0, 1], as represented in Equation (10):
C_k[n] = f(C[n] − C_s[n])   (10)
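A compact sketch of this centroid-based detection might look as follows. The attack/release coefficients, the use of the 130 dB offset as a non-negative weight in the centroid, and the step-function mapping are assumptions consistent with the description above rather than reproductions of Equations (5) through (10).

```python
# Sketch of centroid-based scene-switch detection. The +130 dB offset weighting,
# the coefficient values and the step mapping are assumptions, not Eqs. (5)-(10).
import numpy as np

class CentroidSceneSwitchDetector:
    def __init__(self, center_freqs_hz, alpha_attack=0.7, alpha_release=0.995,
                 switch_threshold_hz=500.0):
        self.f = np.asarray(center_freqs_hz, dtype=float)
        self.aA, self.aR = alpha_attack, alpha_release
        self.th = switch_threshold_hz
        self.s = np.zeros(len(self.f))   # smoothed per-band power s_b[n]
        self.Cs = None                   # smoothed centroid C_s[n]

    def _smooth(self, prev, new, rising):
        a = self.aA if rising else self.aR
        return a * prev + (1 - a) * new

    def update(self, band_powers):
        p = np.asarray(band_powers, dtype=float)
        # Fast-attack/slow-release smoothing of each band power (s_b[n]).
        self.s = np.where(p > self.s,
                          self._smooth(self.s, p, True),
                          self._smooth(self.s, p, False))
        S_db = 10 * np.log10(self.s + 1e-20)                 # Eq. (7)
        w = np.maximum(S_db + 130.0, 0.0)                    # assumed +130 dB offset weight
        C = float(np.sum(self.f * w) / (np.sum(w) + 1e-12))  # centroid C[n], Eq. (8)
        if self.Cs is None:
            self.Cs = C
        self.Cs = self._smooth(self.Cs, C, C > self.Cs)      # smoothed centroid, Eq. (9)
        # Mapping f taken here as a step function of the deviation (Eq. (10)).
        return 1.0 if abs(C - self.Cs) > self.th else 0.0

# Example usage with illustrative band center frequencies and band powers.
det = CentroidSceneSwitchDetector(center_freqs_hz=[125.0, 500.0, 2000.0, 8000.0])
c = det.update(band_powers=[1e-3, 5e-4, 1e-6, 1e-8])
```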
At 320, C_k[n] guides the change of the time constants, such as the attack time constant α_A, as represented in Equation (11):
α_A = C_k[n] α_A,fast + (1 − C_k[n]) α_A,slow   (11)
where α_A,fast and α_A,slow can be set to a plurality of different values; for example, they can be set to slightly different values, or to the same value, for each band. Preferably, α_A,fast is set to one half of α_A,slow, or even smaller, to create a potentially more natural listening experience during a dramatic scene switch.
Next, at 324, the time constants, such as the attack time constant α_A of Equation (11), are applied to guide the smoothing of D_b[n], as represented in Equations (12) and (13), respectively:
where d_b[n] is the unsmoothed per-band limit threshold generated in the DAM. In some implementations, Equation (12) represents the regular fast-attack/slow-release smoothing of D_b[n]. In addition, if the most rapid changes are needed, α_A and α_A,fast can even be set to zero; in this case, the DAM is guided to apply no smoothing when a scene switch is detected during an attack of d_b[n], as represented in Equation (13).
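The following sketch shows one way Equations (11) through (13) could be realized for a single band: the control signal C_k[n] blends the fast and slow attack coefficients, and setting α_A,fast to zero makes the attack update pass the unsmoothed threshold straight through. The one-pole update form and the attack-direction convention are assumptions, since Equations (12) and (13) are not reproduced here.

```python
# Sketch of Eqs. (11)-(13): C_k[n] blends fast and slow attack coefficients;
# alpha_a_fast = 0 yields no smoothing during an attack at a scene switch.
# The update form and the "attack = threshold dropping" convention are assumptions.
def smooth_limit_threshold(d, c, alpha_a_slow=0.99, alpha_a_fast=0.0, alpha_r=0.999):
    """d: unsmoothed per-band limit thresholds d_b[n] from the DAM (one band).
       c: control signal C_k[n] values in [0, 1]."""
    D, prev = [], d[0]
    for dn, cn in zip(d, c):
        # Eq. (11): interpolate the attack coefficient under control of C_k[n].
        alpha_a = cn * alpha_a_fast + (1 - cn) * alpha_a_slow
        alpha = alpha_a if dn < prev else alpha_r
        # With alpha_a_fast = 0 and c = 1, the attack update collapses to
        # D_b[n] = d_b[n], i.e. no smoothing at a detected scene switch.
        prev = alpha * prev + (1 - alpha) * dn
        D.append(prev)
    return D

# Example: a sudden threshold drop passes through unsmoothed where c = 1.
D = smooth_limit_threshold(d=[-10.0, -30.0, -30.0], c=[0.0, 1.0, 0.0])
```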
In addition to or instead of utilizing the centroid, the cutoff band of the signal power spectrum can be utilized to detect a scene switch. In that case, at 404, the signal power spectrum is again estimated by smoothing each frequency band component signal, and the cutoff band of the signal power spectrum is then determined from the estimated signal power spectrum.
Then, at 412, the cutoff band of the signal power spectrum is also smoothed with a fast-attack/slow-release one-pole smoother to obtain the smoothed cutoff band b_cutoff[n], in the same manner as Equation (9). Next, at 416, the difference between the cutoff band and the smoothed cutoff band is determined and then compared with a threshold to produce one or more control signals C_k[n], in the same manner as Equation (10). At 420, C_k[n] guides the change of the time constants, in the same manner as Equation (11). Next, at 424, the time constants are applied to guide the smoothing of D_b[n], in the same manner as Equations (12) and (13).
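A sketch of the cutoff-band variant is given below. The disclosure does not define how the cutoff band is computed, so the definition used here (the highest band whose smoothed power exceeds an assumed −130 dB audibility floor), along with the smoothing coefficients and the band-difference threshold, are illustrative assumptions.

```python
# Sketch of the cutoff-band variant of scene-switch detection. The cutoff-band
# definition, smoothing coefficients and band-difference threshold are assumptions.
import numpy as np

def cutoff_band(smoothed_band_powers, floor_db=-130.0):
    """Index of the highest band whose smoothed power exceeds the audibility floor."""
    S_db = 10 * np.log10(np.asarray(smoothed_band_powers, dtype=float) + 1e-20)
    audible = np.nonzero(S_db > floor_db)[0]
    return int(audible[-1]) if audible.size else 0

def cutoff_switch_control(b, b_smoothed_prev, band_threshold=2,
                          alpha_attack=0.7, alpha_release=0.995):
    """Smooth the cutoff band (412), compare it with its smoothed version (416),
       and map the difference to a control signal in [0, 1] with a step function."""
    alpha = alpha_attack if b > b_smoothed_prev else alpha_release
    b_smoothed = alpha * b_smoothed_prev + (1 - alpha) * b
    c = 1.0 if abs(b - b_smoothed) > band_threshold else 0.0
    return c, b_smoothed

# Example: the cutoff jumps from band 2 to band 7 when broadband content starts.
b = cutoff_band([1e-3, 1e-4, 1e-5, 0, 0, 0, 0, 0])        # -> 2
c, b_s = cutoff_switch_control(b=7, b_smoothed_prev=2.0)   # -> c = 1.0
```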
In one preferable embodiment, the mapping function f of Equation (10) is a step function, where x_Th is the threshold. In another preferable embodiment, the mapping function f is a sigmoid function, where x_Th is the threshold and a is a scale factor.
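The two mapping choices can be sketched as follows; the exact functional forms (a unit step at x_Th, and a logistic sigmoid with scale factor a centered at x_Th) are assumptions, since the corresponding equations are not reproduced above.

```python
# Sketch of the two mappings f producing C_k[n] in [0, 1]. The exact forms
# (unit step at x_Th, logistic sigmoid of scale a centered at x_Th) are assumptions.
import math

def step_mapping(x, x_th):
    """Hard decision: the control signal jumps to 1 once the deviation exceeds x_Th."""
    return 1.0 if x > x_th else 0.0

def sigmoid_mapping(x, x_th, a):
    """Soft decision: gradually blends slow and fast time constants around x_Th."""
    return 1.0 / (1.0 + math.exp(-a * (x - x_th)))

# Example with the 500 Hz centroid-deviation threshold mentioned above.
c_hard = step_mapping(650.0, x_th=500.0)              # -> 1.0
c_soft = sigmoid_mapping(650.0, x_th=500.0, a=0.01)   # -> about 0.82
```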
Instead of guiding the attack time constant, one or more control signals C_k[n] can alternatively be created to guide other parameters, such as the release time constant α_R, by following the general steps from 304/404 to 320/420 described above. Some of the parameters used in steps 304/404 to 320/420 can also be changed, such as changing the smoothing scheme (the time constants used) for the signal power spectrum S_b[n] at 312/412, or changing the mapping function at 316/416.
The techniques of the scene switch analyzer described herein can be implemented by one or more computing devices. For example, a controller of a special-purpose computing device may be hard-wired to perform the disclosed operations or cause such operations to be performed, and may include digital electronic circuitry such as one or more application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs) persistently programmed to perform the operations or cause them to be performed. In some implementations, custom hard-wired logic, ASICs, and/or FPGAs with custom programming are combined to accomplish the techniques.
In some other implementations, a general-purpose computing device could include a controller incorporating a central processing unit (CPU) programmed to cause one or more of the disclosed operations to be performed pursuant to program instructions in firmware, memory, other storage, or a combination thereof.
The term “computer-readable storage medium” as used herein refers to any medium that stores instructions and/or data that cause a computer or other type of machine to operate in a specific fashion. Any of the models, analyzers, and operations described herein may be implemented as, or caused to be implemented by, software code executable by a processor of a controller using any suitable computer language. The software code may be stored as a series of instructions on a computer-readable medium for storage. Examples of suitable computer-readable storage media include random access memory (RAM), read-only memory (ROM), magnetic media, optical media, solid state drives, flash memory, and any other memory chip or cartridge. The computer-readable storage medium may be any combination of such storage devices. Any such computer-readable storage medium may reside on or within a single computing device or an entire computer system, and may be among other computer-readable storage media within a system or network.
While the subject matter of this application has been particularly shown and described with reference to implementations thereof, it will be understood by those skilled in the art that changes in the form and details of the disclosed implementations may be made without departing from the spirit or scope of this disclosure. Examples of some of these implementations are illustrated in the accompanying drawings, and specific details are set forth in order to provide a thorough understanding thereof. It should be noted that implementations may be practiced without some or all of these specific details. In addition, well-known features may not have been described in detail to promote clarity. Finally, although advantages have been discussed herein with reference to some implementations, it will be understood that the scope should not be limited by reference to such advantages. Rather, the scope should be determined with reference to the appended claims.
Various aspects of the present invention may be appreciated from the following enumerated example embodiments (EEEs):
EEE1. A method of dynamically adjusting thresholds of a compressor responsive to an input audio signal, the method comprising:
receiving, by a scene switch analyzer, an input audio signal having a plurality of frequency band components;
determining, by the scene switch analyzer, whether a scene switch has occurred in the input audio signal; and
processing the frequency band components of the input audio signal, including:
in response to determining that a scene switch has not occurred, applying, by a distortion audibility model, slow smoothing to compressor thresholds of the frequency band components; and
in response to determining that a scene switch has occurred, applying, by the distortion audibility model, fast smoothing or no smoothing to the compressor thresholds of the frequency band components.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
PCT/CN2018/108287 | Sep 2018 | WO | international
19155298 | Feb 2019 | EP | regional
This application claims the benefit of priority to International Patent Application No. PCT/CN2018/108287, filed 28 Sep. 2018; U.S. Provisional Patent Application No. 62/798,149, filed 29 Jan. 2019; and European Patent Application No. 19155298.3, filed 4 Feb. 2019, all of which are hereby incorporated by reference in their entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2019/053142 | 9/26/2019 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2020/069120 | 4/2/2020 | WO | A |
Prior Publication Data

Number | Date | Country
---|---|---
20210343308 A1 | Nov 2021 | US
Related U.S. Application Data

Number | Date | Country
---|---|---
62798149 | Jan 2019 | US