This disclosure relates generally to audio signal processing.
Unlike in professional scenarios, background noise is a potential problem in user-generated audio content (UGC), due to the limitations of the equipment used and the uncontrolled acoustic environment in which the recordings take place. Such background noise, besides being annoying, might be made even louder by processing tools that apply a significant amount of dynamic range compression and equalization to the audio content. Noise reduction is therefore a key element of the audio processing chain. Noise reduction relies on a successful measurement of a noise floor, which may be obtained by analyzing the power spectrum of a fragment of the recording that contains only background noise. Such a fragment could be identified manually by the user, found automatically, or obtained by asking performers/speakers to be quiet during the first few seconds of the recording. There are, however, scenarios where a fragment of audio content containing only noise is not available.
Existing approaches based on finding a quiet fragment of audio (either manually or automatically) fail in the case where no such fragment exists, for example because the signal is present at different times at different frequencies. Other approaches are based on fitting the audio frequency spectrum with a smooth curve that passes through the minima; such methods usually discard the narrow-band tonal components of the noise, such as electric hum. Other methods, based on computing the distribution of levels at each frequency and selecting a low percentile of the distribution (e.g., the 10th percentile) as noise, are not robust to, for example, fade-in and fade-out of the signal. Finally, other methods rely on assumptions about the nature of the signal (e.g., assuming the signal is speech) and therefore do not generalize to all types of audio signals.
Implementations are disclosed for noise floor estimation and noise reduction.
In an embodiment, a method comprises: obtaining an audio signal; dividing the audio signal into a plurality of buffers; determining time-frequency samples for each buffer of the audio signal; for each buffer and for each frequency, determining a median and a measure of an amount of variation of energy based on the samples in the buffer and samples in neighboring buffers that together span a specified time range of the audio signal; combining the median and the measure of the amount of variation of energy into a cost function; for each frequency: determining a signal energy of a particular buffer of the audio signal that corresponds to a minimum value of the cost function; selecting the signal energy as the estimated noise floor of the audio signal; and reducing, using the estimated noise floor, noise in the audio signal.
In an embodiment, a mean is determined instead of the median.
In an embodiment, the measure of the amount of variation and median or mean are scaled between 0.0 and 1.0.
In an embodiment, the combination of the amount of variation and mean or median is the sum of their values plus an inverse of the sum of their product and 1.
In an embodiment, the combination of the amount of variation and the median or mean is the sum of their square values.
In an embodiment, the combination of the amount of variation and median or mean is the sum of the square of the median or mean and a sigmoid of a variance of the energy.
In an embodiment, the combination of the amount of variation and median or mean is the sum of the median or mean and a sigmoid of the variance.
In an embodiment, the amount of variation is replaced with a difference between a maximum value of the energy across the buffers in the specified time range and a minimum value of the energy across the buffers in the specified time range.
In an embodiment, the chunks of the audio signal on which the median or mean and variance are computed comprise at least one buffer whose overall signal energy is below a predefined threshold, and the at least one buffer is not used in estimating the noise floor of the audio signal.
In an embodiment, the predefined threshold is determined relative to a maximum level of the audio signal.
In an embodiment, the predefined threshold is determined relative to an average level of the audio signal.
In an embodiment, the method further comprises: analyzing, using the one or more processors, a distribution of chunks of the audio signal from which the noise floor is estimated at each frequency; selecting a chunk k and a frequency f; and replacing an estimated noise at the frequency f with a value computed from chunk k if the resulting increase in cost is smaller than a second predefined threshold.
In an embodiment, the method further comprises determining a confidence value from a value of the amount of variation of energy at the selected buffer.
In an embodiment, the confidence value is smoothed across frequency.
In an embodiment, reducing noise in the audio signal further comprises applying a gain reduction at each frequency that is reduced as a function of the confidence value at the frequency.
In an embodiment, the method further comprises: selecting, using the one or more processors, a frequency f1; computing, using the one or more processors, averages of discrete derivatives of the frequency spectrum in blocks of predefined size for all intervals of a predetermined size above the selected frequency f1; selecting, using the one or more processors, a block with a largest negative derivative as a cut-off frequency fc, if such negative value is smaller than a predefined value; and replacing, using the one or more processors, values of the frequency spectrum above the cut-off frequency with an average of the frequency spectrum in a frequency band of predefined length having an upper boundary that is adjacent to the cut-off frequency.
In an embodiment, the cost function increases for increasing median or mean and increases for an increasing measure of the amount of variation of energy.
In an embodiment, the cost function is non-linear.
In an embodiment, the cost function is symmetric in the measure of the amount of variation of energy and mean or median.
In an embodiment, the cost function is asymmetric, and the measure of the amount of variation of energy is weighted less than the mean or median when the measure of the amount of variation of energy is smaller than a predefined threshold.
In an embodiment, a system comprises: one or more processors; and a non-transitory computer-readable medium storing instructions that, upon execution by the one or more processors, cause the one or more processors to perform operations of any one of the methods described above.
In an embodiment, a non-transitory, computer-readable medium stores instructions that, upon execution by one or more processors, cause the one or more processors to perform operations of any one of the methods described above.
Other implementations disclosed herein are directed to a system, apparatus and computer-readable medium. The details of the disclosed implementations are set forth in the accompanying drawings and the description below. Other features, objects and advantages are apparent from the description, drawings and claims.
Particular implementations disclosed herein provide one or more of the following advantages. In cases where a reliable estimate of a noise floor of an audio signal is not available (e.g., a fragment containing only background noise), the disclosed system and method can be used to estimate the noise floor. Unlike existing solutions, the disclosed system and method do not discard narrow-band tonal components of the audio signal (e.g., electric hum) and are robust to, for example, fade-in and fade-out of the audio signal. Also, no assumptions about the nature of the audio signal are needed, allowing the disclosed system and method to be applied to all types of audio signals.
In the drawings, specific arrangements or orderings of schematic elements, such as those representing devices, units, instruction blocks and data elements, are shown for ease of description. However, it should be understood by those skilled in the art that the specific ordering or arrangement of the schematic elements in the drawings is not meant to imply that a particular order or sequence of processing, or separation of processes, is required. Further, the inclusion of a schematic element in a drawing is not meant to imply that such element is required in all embodiments or that the features represented by such element may not be included in or combined with other elements in some implementations.
Further, in the drawings, where connecting elements, such as solid or dashed lines or arrows, are used to illustrate a connection, relationship, or association between or among two or more other schematic elements, the absence of any such connecting elements is not meant to imply that no connection, relationship, or association can exist. In other words, some connections, relationships, or associations between elements are not shown in the drawings so as not to obscure the disclosure. In addition, for ease of illustration, a single connecting element is used to represent multiple connections, relationships or associations between elements. For example, where a connecting element represents a communication of signals, data, or instructions, it should be understood by those skilled in the art that such element represents one or multiple signal paths, as may be needed, to effect the communication.
The same reference symbol used in various drawings indicates like elements.
In the following detailed description, numerous specific details are set forth to provide a thorough understanding of the various described embodiments. It will be apparent to one of ordinary skill in the art that the various described implementations may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. Several features are described hereafter that can each be used independently of one another or with any combination of other features.
As used herein, the term “includes” and its variants are to be read as open-ended terms that mean “includes, but is not limited to.” The term “or” is to be read as “and/or” unless the context clearly indicates otherwise. The term “based on” is to be read as “based at least in part on.” The terms “one example implementation” and “an example implementation” are to be read as “at least one example implementation.” The term “another implementation” is to be read as “at least one other implementation.” The terms “determined,” “determines,” or “determining” are to be read as obtaining, receiving, computing, calculating, estimating, predicting or deriving. In addition, in the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
The disclosed embodiments find, for every frequency of an audio signal (e.g., an audio file or stream), a fragment of the audio recording where the energy is smaller than in other fragments of the audio recording, and the variance of the energy is reasonably small within such fragment. The energy of such fragment at the frequency of interest is taken as the level of the steady noise at this frequency. At each frequency, the choice of a suitable fragment is framed as a minimization problem, where fragments with low energy and low variance are favored, thus finding the best compromise between the two independent variables. If, at a certain frequency, the level identified as the noise floor corresponds to a relatively high variance, a small confidence is associated with such frequency. The value of the confidence is used to inform a subsequent noise reduction unit so the gain attenuation applied to suppress the noise is reduced according to the confidence value, allowing a conservative approach where potentially inaccurate noise estimation does not negatively impact the quality of the output of the noise reduction. In cases where the noise floor has a large drop at high frequencies (e.g., typically due to band limiting in lossy codecs), the value of the estimated noise before the falloff is held until the end of the spectrum to avoid reduction of attenuation gains due to their smoothing across frequency around the falloff region.
In an embodiment, an input audio signal x(t) (e.g., an audio file or stream) is divided into a plurality of buffers 102 by dividing unit 108, each buffer comprising N samples (e.g., 4096 samples) with Y percentage overlap with adjacent buffers (e.g., 50% overlap) at Z kHz sampling rate (e.g., 48 kHz). Spectrum generating unit 101 applies a frequency transformation to the contents of the plurality of buffers 102 to obtain the time-frequency representation X(n, f) comprising buffers of M frequency bins (e.g., 4096 samples) at Z kHz sampling rate (e.g., 48 kHz). For example, 4096 samples, 50% overlap and a 48 kHz sampling rate results in a frequency resolution of about 12 Hz for each buffer. In some embodiments, the frequency transformation is a short-time Fourier transform (STFT), which outputs time-frequency data (e.g., time-frequency tiles).
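A minimal sketch of the buffering and time-frequency transformation described above is shown below. The Hann window and one-sided FFT are assumptions; the buffer length, overlap and sampling rate follow the example values (4096 samples, 50% overlap, 48 kHz) and are not the only possible choices.

```python
# Sketch: divide an audio signal into overlapping buffers and compute the
# time-frequency samples X(n, f) with a one-sided FFT per buffer.
import numpy as np

def stft_buffers(x, n_fft=4096, overlap=0.5):
    """Return complex time-frequency samples X[n, f] for overlapping buffers."""
    hop = int(n_fft * (1.0 - overlap))
    window = np.hanning(n_fft)
    n_buffers = 1 + max(0, (len(x) - n_fft) // hop)
    X = np.empty((n_buffers, n_fft // 2 + 1), dtype=complex)
    for n in range(n_buffers):
        frame = x[n * hop : n * hop + n_fft] * window
        X[n] = np.fft.rfft(frame)  # one-sided spectrum of buffer n
    return X

# Example: 10 s of white noise at 48 kHz gives roughly 12 Hz resolution per bin.
fs = 48000
x = np.random.randn(10 * fs)
X = stft_buffers(x)
print(X.shape)  # (number of buffers, n_fft // 2 + 1 frequency bins)
```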
For each buffer i, RMS calculator 103 computes the RMS level for the buffer in the time domain and defines a silence threshold relative to a maximum RMS (e.g., −80 dB below the maximum RMS). The silence threshold is computed by analyzing the entire audio signal, and is therefore limited to an “offline” use case. Alternatively, the silence threshold is defined as a fixed number (e.g., −100 dBFS), or a fixed number that depends on the bit-depth of the input audio file/stream (e.g. −90 dBFS for 16-bit signals, and −140 dBFS for 24-bit signals). Silent buffers are those buffers that have an RMS level below the silence threshold.
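The silence-threshold logic can be sketched as follows; the framing parameters match the previous sketch, and the −80 dB offset relative to the maximum buffer RMS is the example value given above.

```python
# Sketch: per-buffer RMS in the time domain and a silence threshold set
# relative to the maximum RMS over the whole signal (offline use case).
import numpy as np

def silent_buffer_mask(x, n_fft=4096, hop=2048, rel_threshold_db=-80.0):
    """Return a boolean mask marking buffers whose RMS lies below the threshold."""
    n_buffers = 1 + max(0, (len(x) - n_fft) // hop)
    rms_db = np.empty(n_buffers)
    for n in range(n_buffers):
        frame = x[n * hop : n * hop + n_fft]
        rms_db[n] = 20.0 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-10)
    threshold_db = rms_db.max() + rel_threshold_db  # e.g., 80 dB below the max RMS
    return rms_db < threshold_db
```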
For each frequency f and each buffer i, statistical analysis unit 104 computes a median and a measure of an amount of variation (e.g., standard deviation, variance, range (max-min), interquartile range) of the energy of samples in j buffers, where the j buffers belong to a chunk of the audio signal x(t) (e.g., 1 second of audio) centered around the buffer i. Equations [1] and [2] describe the operations of statistical analysis unit 104 using a median μ and standard deviation σ of the energy of samples in j buffers, as follows:
μ(i, f)=median(20*Log(|Xi(j, f)|)), [1]
σ(i, f)=std(20*Log(|Xi(j, f)|)). [2]
Chunks of the audio signal containing one or more silent buffers (as determined by the silence threshold) are not used in the calculation of median and standard deviation. In some embodiments, the median can be replaced by the mean to reduce computational costs.
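Equations [1] and [2], together with the exclusion of chunks containing silent buffers, can be sketched as follows. The 1-second chunk length is the example value from above; the handling of chunks near the start and end of the file is an assumption.

```python
# Sketch of Equations [1] and [2]: per-buffer, per-frequency median and
# standard deviation of the log energy over a chunk of neighboring buffers.
import numpy as np

def chunk_statistics(X, silent, fs=48000, hop=2048, chunk_s=1.0):
    """Return mu[i, f], sigma[i, f] and a mask of buffers with usable chunks."""
    level_db = 20.0 * np.log10(np.abs(X) + 1e-12)        # 20*Log(|X(j, f)|)
    half = max(1, int(round(chunk_s * fs / hop)) // 2)   # neighbors on each side
    n_buffers, n_bins = X.shape
    mu = np.full((n_buffers, n_bins), np.inf)
    sigma = np.full((n_buffers, n_bins), np.inf)
    valid = np.zeros(n_buffers, dtype=bool)
    for i in range(n_buffers):
        lo, hi = max(0, i - half), min(n_buffers, i + half + 1)
        if silent[lo:hi].any():
            continue                                      # skip chunks containing silence
        valid[i] = True
        mu[i] = np.median(level_db[lo:hi], axis=0)        # Equation [1]
        sigma[i] = np.std(level_db[lo:hi], axis=0)        # Equation [2]
    return mu, sigma, valid
```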
Once the buffer k(f) corresponding to argmin_i{J(i, f)} is determined, the noise floor of the audio file/stream is set equal to the median/mean of the buffer k at frequency f:
noise(f)=μ(k(f), f). [4]
The chunk of audio corresponding to the buffer k, which comprises the buffer k and some of its neighboring buffers, is referred to as the selected chunk at frequency f.
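A sketch of the selection step is shown below. Because the exact form of the cost function J(i, f) admits several variants (discussed further below), a plain sum of the rescaled median and deviation is assumed here purely for illustration; the final assignment follows Equation [4].

```python
# Sketch: for every frequency, pick the buffer that minimizes the cost J(i, f)
# and take its median level as the noise floor (Equation [4]).
import numpy as np

def estimate_noise_floor(mu, sigma, valid):
    """Return noise(f) and the selected buffer index k(f) for every bin."""
    def rescale(v):
        vmin, vmax = v[valid].min(axis=0), v[valid].max(axis=0)
        return np.clip((v - vmin) / (vmax - vmin + 1e-12), 0.0, 1.0)

    cost = rescale(mu) + rescale(sigma)       # assumed form of J(i, f)
    cost[~valid] = np.inf                     # never select chunks with silence
    k = np.argmin(cost, axis=0)               # k(f) = argmin_i J(i, f)
    noise = mu[k, np.arange(mu.shape[1])]     # Equation [4]: noise(f) = mu(k(f), f)
    return noise, k
```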
Note that rescaling μ and σ a posteriori requires obtaining their values for the whole audio file. If noise estimation is to be done online, while the file is being recorded or processed, the rescaling can be done by introducing fixed ranges [μmin, μmax] and [σmin, σmax] for both variables based on previous empirical observations, so that the rescaled variables become:
μ(i, f)=0, if μ(i, f)≤μmin, [5]
μ(i, f)=(μ(i, f)−μmin)/(μmax−μmin), if μmin<μ(i, f)<μmax, [6]
μ(i, f)=1, if μ(i, f)≥μmax. [7]
The rescaling of σ can be done in a similar manner using Equations [5]-[7], and substituting μ with σ.
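The piecewise-linear rescaling of Equations [5]-[7] amounts to a single clip-and-scale operation applicable to both μ and σ, sketched below; the bound values in the usage example are placeholders, not values from this disclosure.

```python
# Sketch of the piecewise-linear rescaling of Equations [5]-[7], usable online
# with fixed empirical bounds instead of the observed minimum and maximum.
import numpy as np

def rescale_fixed(v, v_min, v_max):
    """Clip and scale v to [0, 1] using fixed bounds (Equations [5]-[7])."""
    return np.clip((v - v_min) / (v_max - v_min), 0.0, 1.0)

# Usage: the same function serves for both mu and sigma.
mu_scaled = rescale_fixed(np.array([-95.0, -60.0, -20.0]), v_min=-100.0, v_max=0.0)
print(mu_scaled)  # approximately [0.05 0.4 0.8]
```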
In some embodiments, the following changes to the cost function are considered (still assuming μ and σ are rescaled to [0, 1], either a posteriori based on their max and min values, or online based on guessed max and min values). The cost function can be expressed with quadratic terms:
J(i, f)=μ2(i, f)+σ2(i, f). [8]
The respective role and importance of μ and σ can be changed, thus breaking the symmetry of the cost function. One approach is to transform σ so that it gives a small cost when it is below a certain threshold and a high cost above it, with a smooth transition in between. This formulation favors buffers with small values of σ when minimizing J(i, f). A possible implementation uses the sigmoid function shown in Equation [9]:
where α=10 is a good example scale factor for the sigmoid function.
In some embodiments, the quadratic term μ2(i, f) can be replaced with a linear term μ(i, f) to give less weight to chunks with small level, thus avoiding potential underestimations.
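The cost-function variants discussed above can be sketched as follows, assuming μ and σ have already been rescaled to [0, 1]. The logistic form and the 0.5 midpoint of the sigmoid are assumptions; only the scale factor α=10 is taken from the example above.

```python
# Sketch of cost-function variants on rescaled mu and sigma in [0, 1].
import numpy as np

def cost_quadratic(mu, sigma):
    return mu ** 2 + sigma ** 2                            # Equation [8]

def cost_sigmoid(mu, sigma, alpha=10.0, midpoint=0.5):
    # Small cost for sigma below the midpoint, high cost above, smooth in between.
    return mu ** 2 + 1.0 / (1.0 + np.exp(-alpha * (sigma - midpoint)))

def cost_linear_mu(mu, sigma, alpha=10.0, midpoint=0.5):
    # Linear mu term gives less weight to low-level chunks, avoiding underestimation.
    return mu + 1.0 / (1.0 + np.exp(-alpha * (sigma - midpoint)))
```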
It can be beneficial to favor noise estimates of neighboring frequencies being selected from the same chunk of audio, to avoid occasional underestimated outliers in an otherwise very smooth noise curve. One embodiment achieves this by examining the distribution of selected chunks k(f) across frequencies, for example by visualizing the histogram of the position of the selected chunks in the audio file. If a large cluster is found on a certain chunk {tilde over (k)} and only a few occasional outliers, it can be assumed that the chunk {tilde over (k)} is mostly background noise, and the estimation of the outlier frequencies can be forced onto the same chunk. For a frequency where the corresponding chunk is k(f)≠{tilde over (k)}, the cost J({tilde over (k)}, f) can be computed and noise(f)=μ(k, f) replaced with noise(f)=μ({tilde over (k)}, f), if the cost increase is smaller than a certain threshold: J({tilde over (k)}, f)−J(k, f)<JTh. A slight variant of this rule is choosing the noise estimate corresponding to the smallest cost in a range of nk buffers around {tilde over (k)}, as long as the cost difference is smaller than JTh.
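A sketch of this chunk-consistency refinement is shown below; the threshold value j_th is a placeholder.

```python
# Sketch: find the chunk selected most often across frequencies and move
# outlier frequencies onto it when the cost increase stays below a threshold.
import numpy as np

def snap_to_dominant_chunk(noise, k, mu, cost, j_th=0.1):
    """Re-estimate outlier frequencies on the dominant chunk k_tilde."""
    k_tilde = np.bincount(k).argmax()                  # most frequently selected chunk
    freqs = np.arange(len(k))
    increase = cost[k_tilde, freqs] - cost[k, freqs]   # J(k_tilde, f) - J(k(f), f)
    move = (k != k_tilde) & (increase < j_th)
    noise = noise.copy()
    noise[move] = mu[k_tilde, freqs[move]]             # noise(f) = mu(k_tilde, f)
    return noise
```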
In an embodiment, optional smoothing unit 106 applies smoothing to the estimated noise floor to avoid fluctuations that are due to estimating adjacent bins from different chunks of the audio signal. Smoothing unit 106 replaces each value of noise(f) with the average of the values in a band around f. The shape of such bands can be rectangular, triangular, etc. In some embodiments, smooth functions reaching values of 0 at the band boundaries can be used. For perceptual reasons, the width of the band grows exponentially with frequency and corresponds to a constant fraction of an octave. In some embodiments, the constant fraction is 1/100, which is a very narrow bandwidth that preserves sufficient resolution for accurate measurement of noise components.
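The fractional-octave smoothing can be sketched as a rectangular band average around each bin; the 1/100-octave fraction is the example value from above, and leaving the DC bin untouched is an assumption.

```python
# Sketch of fractional-octave smoothing of the estimated noise curve.
import numpy as np

def smooth_fractional_octave(noise, fs=48000, n_fft=4096, fraction=1.0 / 100.0):
    smoothed = noise.copy()
    freqs = np.arange(len(noise)) * fs / n_fft
    for b in range(1, len(noise)):
        f_lo = freqs[b] * 2.0 ** (-fraction / 2.0)
        f_hi = freqs[b] * 2.0 ** (+fraction / 2.0)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        smoothed[b] = noise[band].mean()               # rectangular band average
    return smoothed
```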
A confidence value c(f) representing how reliable the estimation is can be obtained from the value of σ(k), by associating a small confidence with frequencies having a high variance and vice versa:
Example values, empirically determined, are σH=14 and σL=7.5. The confidence can be used to inform noise reduction unit 107 about the accuracy of the noise floor estimation, therefore improving noise reduction to avoid undesired artifacts in frequencies where the estimation is not deemed accurate.
When σ lies between σL and σH, the confidence takes intermediate values in accordance with Equation [11], and when σ is greater than σH the confidence is 0, in accordance with Equation [10].
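A sketch of the confidence mapping is shown below. Confidence 0 above σH follows the description above; the saturation at 1 below σL and the linear interpolation in between are assumptions, since Equations [10]-[12] are not reproduced here. The bounds are the example values given above.

```python
# Sketch of the confidence mapping from the deviation at the selected buffer.
import numpy as np

def confidence(sigma_k, sigma_l=7.5, sigma_h=14.0):
    """Map sigma at the selected buffer to a confidence c(f) in [0, 1]."""
    c = (sigma_h - sigma_k) / (sigma_h - sigma_l)   # assumed linear interpolation
    return np.clip(c, 0.0, 1.0)                     # 1 below sigma_l, 0 above sigma_h
```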
In an embodiment, noise reduction unit 107 is a frequency-band-based or FFT-based expander. At any given frame, frequency bins whose energy is close to the estimated noise floor are attenuated with a gain somewhat proportional to their proximity to the noise floor. In some embodiments, the gain attenuation G(n, f) is determined by L(n, f) using a curve similar to the one shown in
Specifically, let N(f) be the energy level of the noise in dB, and let S(n, f) be the energy level of the audio content at frame n and frequency f. In some embodiments, a threshold Th in decibels is defined, and the amount of level above the threshold is computed as:
L(n, f)=10 Log(S(n, f))−(N(f)+Th). [13]
Referring to
G(i, f)=c(f)G(i, f). [14]
In some embodiments, the confidence can also be smoothed by smoothing unit 105, thus ensuring a continuous transition between full noise reduction in bands with high confidence, and no noise reduction in bands with low confidence.
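A sketch of the expander gains of Equations [13] and [14] is shown below, interpreting G as a gain in dB so that multiplying by c(f) reduces the attenuation applied at low-confidence frequencies, consistent with the description above. The threshold, maximum attenuation and the simple attenuation curve are assumptions standing in for the curve referenced in the figure.

```python
# Sketch of expander-style noise reduction per Equations [13] and [14].
import numpy as np

def expander_gains_db(S, N_db, c, th_db=6.0, max_att_db=18.0):
    """Return per-bin gains G(n, f) in dB, with attenuation scaled by c(f)."""
    L = 10.0 * np.log10(S + 1e-20) - (N_db + th_db)    # Equation [13]
    G_db = -np.clip(-L, 0.0, max_att_db)               # 0 dB well above the noise floor
    return c * G_db                                    # Equation [14]: scale attenuation by confidence
```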
In cases where the noise floor has a large drop at high frequencies (e.g., typically due to band limiting in lossy codecs) as shown in
In some embodiments, the frequency of the falloff is determined by: 1) choosing a first frequency f1 above which a cutoff frequency fc is to be estimated, as shown in
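A sketch of the falloff detection and hold described above is shown below. The block size, derivative threshold and hold-band width are placeholder values; averaging over a band just below the cut-off to obtain the held level follows the replacement step described above.

```python
# Sketch: locate the block above f1 with the steepest negative average
# derivative of the noise curve and, if it is steep enough, hold the
# pre-falloff level to the end of the spectrum.
import numpy as np

def hold_noise_above_falloff(noise_db, fs=48000, n_fft=4096, f1=10000.0,
                             block_bins=16, slope_th_db=-3.0, hold_bins=32):
    freqs = np.arange(len(noise_db)) * fs / n_fft
    start = np.searchsorted(freqs, f1)
    deriv = np.diff(noise_db)                          # discrete derivative per bin
    best_slope, cut = 0.0, None
    for b in range(start, len(deriv) - block_bins, block_bins):
        slope = deriv[b : b + block_bins].mean()       # average derivative per block
        if slope < best_slope:
            best_slope, cut = slope, b
    if cut is None or best_slope >= slope_th_db:
        return noise_db                                # no clear falloff detected
    held = noise_db.copy()
    held[cut:] = noise_db[max(0, cut - hold_bins):cut].mean()  # hold pre-falloff level
    return held
```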
Process 800 begins by obtaining, using one or more processors, an audio signal (e.g., file, stream) (801), dividing the audio signal into a plurality of buffers (802), generating time-frequency samples for each buffer of the audio signal (803), as described in reference to
Process 800 continues by, for each buffer and for each frequency, determining a median (or mean) and a standard deviation of energy based on the energy in the samples in the buffer and samples in neighboring buffers that together span a specified time range of the audio signal (804), and combining the median and standard deviation into a cost function (805), as described in reference to
Process 800 continues by, for each frequency, estimating a noise floor of the audio signal as the signal energy of a particular buffer of the audio signal corresponding to a minimum value of the cost function (806), and reducing, using the estimated noise floor, noise in the audio signal (807), as described in reference to
As shown, the system 900 includes a central processing unit (CPU) 901 which is capable of performing various processes in accordance with a program stored in, for example, a read only memory (ROM) 902 or a program loaded from, for example, a storage unit 908 to a random access memory (RAM) 903. In the RAM 903, the data required when the CPU 901 performs the various processes is also stored, as required. The CPU 901, the ROM 902 and the RAM 903 are connected to one another via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
The following components are connected to the I/O interface 905: an input unit 906, that may include a keyboard, a mouse, or the like; an output unit 907 that may include a display such as a liquid crystal display (LCD) and one or more speakers; the storage unit 908 including a hard disk, or another suitable storage device; and a communication unit 909 including a network interface card such as a network card (e.g., wired or wireless).
In some implementations, the input unit 906 includes one or more microphones in different positions (depending on the host device) enabling capture of audio signals in various formats (e.g., mono, stereo, spatial, immersive, and other suitable formats).
In some implementations, the output unit 907 includes systems with various numbers of speakers. As illustrated in
The communication unit 909 is configured to communicate with other devices (e.g., via a network). A drive 910 is also connected to the I/O interface 905, as required. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, a flash drive or another suitable removable medium is mounted on the drive 910, so that a computer program read therefrom is installed into the storage unit 908, as required. A person skilled in the art would understand that although the system 900 is described as including the above-described components, in real applications, it is possible to add, remove, and/or replace some of these components, and all such modifications or alterations fall within the scope of the present disclosure.
In accordance with example embodiments of the present disclosure, the processes described above may be implemented as computer software programs or on a computer-readable storage medium. For example, embodiments of the present disclosure include a computer program product including a computer program tangibly embodied on a machine readable medium, the computer program including program code for performing methods. In such embodiments, the computer program may be downloaded and mounted from the network via the communication unit 909, and/or installed from the removable medium 911, as shown in
Generally, various example embodiments of the present disclosure may be implemented in hardware or special purpose circuits (e.g., control circuitry), software, logic or any combination thereof. For example, the units discussed above can be executed by control circuitry (e.g., a CPU in combination with other components of
Additionally, various blocks shown in the flowcharts may be viewed as method steps, and/or as operations that result from operation of computer program code, and/or as a plurality of coupled logic circuit elements constructed to carry out the associated function(s). For example, embodiments of the present disclosure include a computer program product including a computer program tangibly embodied on a machine readable medium, the computer program containing program codes configured to carry out the methods as described above.
In the context of the disclosure, a machine readable medium may be any tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. A machine readable medium may be non-transitory and may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Computer program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These computer program codes may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus that has control circuitry, such that the program codes, when executed by the processor of the computer or other programmable data processing apparatus, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer or entirely on the remote computer or server or distributed over one or more remote computers and/or servers.
While this document contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can, in some cases, be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.
This application claims priority to the following applications: ES application P202030040 (reference: D19149ES), filed 21 Jan. 2020; U.S. provisional application 63/000,223 (reference: D19149USP1), filed 26 Mar. 2020; and U.S. provisional application 63/117,313 (reference: D19149USP2), filed 23 Nov. 2020, each of which is hereby incorporated by reference.