This application, and the innovations and related subject matter disclosed herein (collectively referred to as the “disclosure”), generally concern systems for detecting and removing unwanted noise in an observed signal, and associated techniques. More particularly but not exclusively, disclosed systems and associated techniques can detect undesirable audio noise in an observed audio signal and remove the unwanted noise in an imperceptible or suitably imperceptible manner. As but one example, disclosed systems and techniques can detect and remove unwanted “clicks” arising from manual activation of an actuator (e.g., one or more keyboard strokes, or mouse clicks) or emitted by a speaker transducer to mimic activation of such an actuator. Some disclosed systems are suitable for removing unwanted noise from a recorded signal, a live signal (e.g., telephony, video and/or audio simulcast of a live event), or both. Disclosed systems and techniques can be suitable for removing unwanted noise from signals other than audio signals, as well.
By way of illustration, clicking a button or a mouse might occur when a user records a video or attends a telephone conference. Such interactions can leave an audible “click” or other undesirable artifact in the audio of the video or telephone conference. Such artifacts can be subtle (e.g., have a low artifact-signal-to-desired-signal ratio), yet perceptible, in a forgiving listening environment.
Solving such a problem involves two different aspects: (1) target-signal detection; and (2) target-signal removal. Detection of a target signal, sometimes referred to in the art as “signal localization,” addresses two primary issues: (1) whether a target signal is present; and (2) if so, when it occurred. With a known target signal and only additive white noise, a matched filter is optimal and can efficiently be computed for all partitions using known FFT techniques. The matched filter can be used to remove the target signal.
However, previously known detectors, e.g., based on matched filters, generally are unsuitable for use in real-world applications where target signals are unknown and can vary. For example, the presence of a noise (or “target”) signal within an observed signal cannot be guaranteed. Moreover, a noise signal can vary among different frequencies, and a target signal can emphasize one or more frequency bands. Still further, some target signals have a primary component and one or more secondary components.
Thus, a need remains for computationally efficient systems and associated techniques to detect unwanted noise signals in real-world applications, where the presence or absence of a target signal is not known, and where target signals can vary. As well, a need remains for computationally efficient systems and techniques to remove unwanted noise from an observed signal in a manner that suitably obscures the removal processing from a user's perception. Ideally, such systems and techniques will be suitable for removing a variety of classes of target signals (e.g., mouse clicks, keyboard clicks, hands clapping) from a variety of classes of observed signals (e.g., speech, music, environmental background sounds, street noise, café noise, and combinations thereof).
The innovations disclosed herein overcome many problems in the prior art and address one or more of the aforementioned or other needs. In some respects, the innovations disclosed herein generally concern systems and associated techniques for detecting and removing unwanted noise in an observed signal, and more particularly, but not exclusively, for detecting undesirable audio noise in an observed or recorded audio signal, and removing the unwanted noise in an imperceptible manner. For example, disclosed systems and techniques can be used to detect and remove unwanted “clicks” arising from manual activation of an actuator (e.g., one or more keyboard strokes, or mouse clicks), and some disclosed systems are suitable for use with recorded audio, live audio (e.g., telephony, video and/or audio simulcast of a live event), or both.
Disclosed approaches for removing unwanted noise can supplant the impaired portion of the observed signal with an estimate of a corresponding portion of a desired signal. Some embodiments include one or more of the following three innovative aspects: (1) detection of an unwanted noise (or a target) signal within an observed signal (e.g., a combination of the target signal, for example a “click”, and a desired signal, for example speech, music, or other environmental sounds); (2) removal of the unwanted noise from the observed signal; and (3) filling of a gap in the observed signal generated by removal of the unwanted noise from the observed signal. Other embodiments directly overwrite the impaired portion of the signal with the estimate of the desired signal.
Related aspects also are described. For example, disclosed noise detection and/or removal methods can include converting an incoming acoustic signal to a corresponding electrical signal (or other representative signal). As well, the corresponding electrical signal (or other representative signal) can be converted (e.g., sampled) into a machine-readable form. The corresponding electrical signal and/or other representation of the incoming acoustic signal can be corrected or otherwise processed to remove and/or replace a segment corresponding to the impairment in the observed signal. And, a corrected signal can be converted to a human-perceivable form, and/or to a modulated signal form conveyed over a communication connection.
Although references are made herein to an observed signal, impairments thereto, and a corresponding correction to the observed signal, those of ordinary skill in the art will understand and appreciate from the context of those references that they can include corresponding electrical or other representations of such signals (e.g., sampled streams) that are machine-readable.
In some methods, each of a plurality of regions of an observed signal can be assessed to determine whether the respective region includes a component of an unwanted target signal. Each region can span a selected number of samples of the observed signal, and the selected number of samples in each region can be substantially less than a total number of samples of the observed signal. The unwanted target signal can include one or more of a stationary signal, a non-stationary signal, and a colored signal. As well, an observed signal can be stationary, non-stationary, and/or colored. Thus, a model of the observed signal can include a model trained to detect a target signal within one or more of a stationary, a non-stationary, and/or a colored signal. In response to determining one of the regions contains a component of the target signal, the observed signal can be searched within the respective region and over a selected number of samples adjacent the respective region for one or more other components of the unwanted target signal. A removal region of the observed signal corresponding to each detected component of the target signal can be identified. Each detected component of the observed signal corresponding to each respective removal region can be supplanted, and a corrected signal can be formed by replacing each portion of the observed signal in the removal region with an estimate of a corresponding portion of a desired (or intended) signal. The estimate of the corresponding portion of the desired signal can be based on the observed signal in a region adjacent the respective removal region.
In some instances, the observed signal is an audio signal and the unwanted target signal is an unwanted audio signal. As an example, the unwanted audio signal can be an audio signal generated by activation of a mechanical actuator.
Some disclosed methods and systems transform the corrected signal into a human-perceivable form, and/or into a modulated signal conveyed over a communication connection.
The region adjacent the respective removal region from which the estimate of the desired signal is based can be a first region. The estimate of the desired signal can also be based on the observed signal in a second region adjacent the respective removal region.
In some examples, the act of assessing each of the plurality of regions of the observed signal can include estimating a variance of the observed signal within each respective region. In turn, the act of estimating the variance of the observed signal can include computing a mask-weighted average of the square of the value of the observed signal for each of one or more samples based on a pair of sliding masks centered on the respective sample. In other examples, the act of assessing each of the plurality of regions of the observed signal can include computing an estimate of a maximum likelihood that the respective region contains a component of a target signal.
The assessment of each of the plurality of regions of the observed signal can include an assessment of a plurality of frequency bands within each region. Such an assessment can determine whether the respective region includes a component of the unwanted target signal within one or more of the frequency bands.
In other assessments, at least the portion of the observed signal within each respective region can be whitened, and a variance of the whitened signal within the respective region can be estimated.
In other examples, the assessment of each of the plurality of regions can include tuning a plurality of model parameters against one or more representative unwanted signals, one or more classes of environmental signals, and combinations thereof.
As well, the assessment can include receiving prior information regarding a presence of the unwanted signal, a location of the unwanted signal within the observed signal, or both. For example, the prior information can include a probability distribution function describing a probability that the unwanted signal is present at a given location in the observed signal given a notification of an earlier event.
Also disclosed are tangible, non-transitory computer-readable media including computer executable instructions that, when executed, cause a computing environment to implement one or more methods disclosed herein. Digital signal processors (DSPs) suitable for implementing such instructions are also disclosed. Such DSPs can be implemented in software, firmware, or hardware.
The foregoing and other features and advantages will become more apparent from the following detailed description, which proceeds with reference to the accompanying drawings.
Unless specified otherwise, the accompanying drawings illustrate aspects of the innovations described herein. Referring to the drawings, wherein like numerals refer to like parts throughout the several views and this specification, several embodiments of presently disclosed principles are illustrated by way of example, and not by way of limitation.
The following describes various innovative principles related to noise-detection and noise-removal systems and related techniques by way of reference to specific system embodiments. For example, certain aspects of disclosed subject matter pertain to systems and techniques for detecting unwanted noise in an observed signal, and more particularly but not exclusively to systems and techniques for correcting an observed signal including non-stationary and/or colored noise. Embodiments of such systems described in context of specific acoustic scenes (e.g., human speech, music, vehicle traffic, animal activity) are but particular examples of contemplated detection, removal, and correction systems, and examples of noise described in context of specific sources or types (e.g., “clicks” generated from manual activation of an actuator) are but particular examples of environmental signals and noise signals, and are chosen as being convenient illustrative examples of disclosed principles. Nonetheless, one or more of the disclosed principles can be incorporated in various other noise detection, removal, and correction systems to achieve any of a variety of corresponding system characteristics.
Thus, noise detection, removal, and correction systems (and associated techniques) having attributes that are different from those specific examples discussed herein can embody one or more presently disclosed innovative principles, and can be used in applications not described herein in detail, for example, in telephony or other communications systems, in telemetry systems, in sonar and/or radar systems, etc. Accordingly, such alternative embodiments can also fall within the scope of this disclosure.
This disclosure concerns methods for detecting and/or removing an unwanted target signal from an observed signal.
The system 3 includes a signal acquisition engine 100 configured to observe a given, e.g., audio, signal 1, 2. The system 3 also includes a noise-detection-and-removal engine 200 configured to detect and remove unwanted components in the observed signal. In some examples, the engine 200 also includes a gap-filler configured to estimate a desired portion of the observed signal in regions that were removed by the engine 200. The illustrated system also includes a clean-signal engine 300 configured to further process the observed signal after the unwanted components are removed and the resulting gaps filled with an estimate of the desired portion of the observed signal. Although such an estimate might, and often does, differ from the original desired portion of the observed signal, estimates derived using approaches herein are perceptually equivalent, or acceptable perceptual equivalents, to the original, unimpaired version of a desired signal. Such perceptual equivalence, and acceptable levels of perceptual equivalence, are discussed more fully below in relation to user tests.
Disclosed approaches for removing unwanted noise, as in the engine 200, can include one or more of the three following innovative aspects: (1) detection of an unwanted noise (or a target signal) within an observed signal (e.g., a combination of the target signal, like a “click”, and a desired signal, like speech, music, or other environmental sounds); (2) removal of the unwanted noise from the observed signal; and (3) filling of a gap in the observed signal generated by removal of the unwanted noise from the observed signal. Unlike conventional systems, e.g., based on matched filtering, disclosed noise detection and/or removal systems can detect and/or remove an impairment signal in the presence of non-stationary, colored noise.
Some disclosed systems can be trained with clean representations of different classes of target signals 11 (
The block diagram in
As shown in
The system shown in
The incoming signal is sometimes referred to herein as an “observed signal.” Ideally, the “clean” signal contains all of the desired aspects of the observed signal and none of the target signal. In practice, the “clean” signal loses a small measure of the desired aspects of the observed signal and, at least in some instances, retains at least an artifact of the target signal. Some disclosed approaches eliminate or at least render imperceptible such artifacts in many contexts.
Referring still to
Referring again to
Once the region(s) of the observed signal for removal are defined (e.g., regardless of whether the removal region was adapted to avoid a transient or remained unchanged), the engine 270 can supplant the portions of the observed signal dominated by or otherwise tainted by the unwanted target signal with an estimate of the desired signal within the removal region, and output a “clean” signal.
Related aspects also are disclosed. For example, a corrected (or “clean”) signal can be converted to a human-perceivable form, and/or to a modulated signal form conveyed over a communication connection. Also disclosed are machine-readable media containing instructions that, when executed, cause a processor of, e.g., a computing environment, to perform disclosed methods. Such instructions can be embedded in software, firmware, or hardware. In addition, disclosed methods and techniques can be carried out in a variety of forms of signal processor, again, in software, firmware, or hardware.
Additional details of disclosed noise-detection-and-removal systems and associated techniques and methods follow.
As used herein, the phrase “acoustic transducer” means an acoustic-to-electric transducer or sensor that converts an incident acoustic signal, or sound, into a corresponding electrical signal representative of the incident acoustic signal. Although a single microphone is depicted in
The audio acquisition module 100 can also include a signal conditioner to filter or otherwise condition the acquired representation of the incident acoustic signal. For example, after recording and before presenting a representation of the acoustic signal to the noise-detection-and-removal engine 200, characteristics of the representation of the incident acoustic signal can be manipulated. Such manipulation can be applied to the representation of the observed acoustic signal (sometimes referred to in the art as a “stream”) by one or more echo cancelers, echo-suppressors, noise-suppressors, de-reverberation techniques, linear-filters (EQs), and combinations thereof. As but one example, an equalizer can equalize the stream, e.g., to provide a uniform frequency response, as between about 150 Hz and about 8,000 Hz.
The output from the audio acquisition module 100 (i.e., the observed signal) can be conveyed to the noise-detection-and-removal engine 200.
Referring now to
Detection of a target signal, sometimes referred to in the art as “signal localization,” addresses two primary issues: (1) whether a target signal is present; and (2) if so, when it occurred. With a known target signal and only additive white noise, a matched filter is optimal and can efficiently be computed for all partitions using known FFT techniques.
However, in the real world, presence of a target signal within an observed signal cannot be guaranteed, though prior information about presence and location (e.g., time) of a target signal might be available. For example, as noted in the brief discussion of
In general, though, target signals are unknown and can vary in time and among frequency bands. As well, environmental noise typically is neither stationary nor white. Thus, a matched filter is not typically optimal, and in some instances is unsuitable, for detecting target signals in real-world scenarios.
Disclosed detectors account for colored and non-stationary observed signals by training a likelihood model over various observed signals (e.g., so-called “signal plus noise”). Such training can include stationary white noise, non-stationary white noise (plus noise estimation), and noise with stationary coloration. As discussed more fully below, using FFT techniques, disclosed solutions can have complexity on the order of $N \log N$, where $N$ represents the number of partitions in an observed signal, $y_{0:N-1}$. A prototype signal $s_{0:N-1}$ can be defined, and unwanted target signals can be assumed to have $L$ partitions, where $L$ is substantially less than $N$. Accordingly, a subspace constraint and prior information can be imposed:
$$s = \Phi S, \qquad \Phi \in \mathbb{R}^{N \times J} \ \text{(orthonormal basis)}$$

$$S \sim \mathcal{N}(\mu_S, \Sigma_S)$$

The parameters $\Phi$, $\mu_S$, $\Sigma_S$ can be learned from clean examples of the prototype signal. With a circular shift of the prototype, a value of the signal at a selected partition, $n$, can be determined:

$$s_n = P_n s = [P_n \Phi] S$$

$$s_n = \Phi_n S, \qquad \Phi_n \triangleq P_n \Phi$$
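By way of illustration only, the subspace parameters might be learned from clean recordings of the prototype target with a truncated singular value decomposition, which is one convenient way to obtain an orthonormal basis consistent with the model above. The function and variable names below are hypothetical, and PCA via SVD is only one possible choice; this is a sketch, not the claimed implementation.

```python
import numpy as np

def learn_prototype_subspace(clean_examples, J):
    """Learn (Phi, mu_S, Sigma_S) from clean examples of the prototype target signal.

    clean_examples: array of shape (num_examples, N), one clean target per row.
    J: subspace dimension (assumed substantially less than N).
    Returns Phi (N x J, orthonormal columns), mu_S (length J), Sigma_S (J x J).
    """
    X = np.asarray(clean_examples, dtype=float)
    # Truncated SVD of the example matrix: the top-J right singular vectors span
    # the clean targets and form the orthonormal basis Phi.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Phi = Vt[:J].T                        # N x J
    S = X @ Phi                           # subspace coordinates of each example
    mu_S = S.mean(axis=0)                 # S ~ N(mu_S, Sigma_S)
    Sigma_S = np.cov(S, rowvar=False)
    return Phi, mu_S, Sigma_S
```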
Hypotheses regarding the presence of a target signal, and associated cost functions, can be defined. In the following, the term “signal” refers to a target or impairment signal, rather than a desired signal.
Next, the expected cost C(m,n) can be minimized over H and y, with the closed-form equation:
Recognizing that, under Bayes' rule, the posterior probability is proportional to the prior probability times the likelihood,

$$P(H = n \mid y) \propto P(H = n)\, P(y \mid H = n),$$

the posterior $P(H = n \mid y)$ can be computed over $n$ provided that the prior probability $P(H = n)$ and the likelihood $P(y \mid H = n)$ are available, as from, for example, training data based on button notifications and accuracy models. Otherwise, the prior can be assumed to be flat, or constant, in the absence of specific information. The likelihood can be thought of as a “shifted signal plus noise” model, and the hypothesis values can be as follows:
In the context of actuation of a mechanical actuator, the prior can be a log-normal model, and a probability of a false alarm, $P(H = N)$, can be fixed (e.g., at a value of 0.001, or some other tuned value), as generally indicated in
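As a minimal sketch of how the prior and likelihood might be combined, assume the log-likelihoods for all shift hypotheses and for the no-signal hypothesis have already been evaluated, that the prior over shifts is otherwise flat, and that $P(H = N)$ is pinned to a tuned value as described above. The names below are illustrative.

```python
import numpy as np

def posterior_over_shifts(log_likelihood, p_no_signal=1e-3):
    """Combine a prior over hypotheses with per-hypothesis log-likelihoods.

    log_likelihood: length-(N+1) array; entries 0..N-1 hold log P(y | H = n)
    (target present at shift n) and entry N holds log P(y | H = N) (target absent).
    p_no_signal: fixed prior P(H = N), e.g., 0.001; the remaining mass is spread
    flat over the N shift hypotheses, per the discussion above.
    """
    N = len(log_likelihood) - 1
    log_prior = np.full(N + 1, np.log((1.0 - p_no_signal) / N))
    log_prior[N] = np.log(p_no_signal)
    log_post = log_prior + np.asarray(log_likelihood, dtype=float)
    log_post -= log_post.max()            # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()              # P(H = n | y), n = 0..N
```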
For stationary white noise, the likelihood of a target signal being present can be modeled as

$$P(y \mid H = n) = \mathcal{N}\!\left(\Phi_n \mu_S,\ \Phi_n \Sigma_S \Phi_n^T + \sigma_y^2 I_N\right) \quad (1)$$

and the likelihood of a target signal being absent can be modeled as

$$P(y \mid H = N) = \mathcal{N}\!\left(0,\ \sigma_y^2 I_N\right)$$

The noise variance $\sigma_y^2$ can be estimated in regions immediately before and after, e.g., at partitions 0 and $N-1$. The complexity of the foregoing, if directly evaluated, is on the order of $N^{3.373}$, though the complexity can be reduced to the order of $N \log N$ using an FFT approach. The following can be evaluated for all partitions $n$:

$$(y - \Phi_n \mu_S)^T \left(\sigma_y^2 I_N + \Phi_n \Sigma_S \Phi_n^T\right)^{-1} (y - \Phi_n \mu_S) \quad (2)$$
The Matrix Inversion Lemma can reduce the $N \times N$ matrices to $J \times J$ matrices:

$$\left(\sigma_y^2 I_N + \Phi_n \Sigma_S \Phi_n^T\right)^{-1} = \sigma_y^{-2}\left(I_N - \sigma_y^{-2}\, \Phi_n \Omega_S^{-1} \Phi_n^T\right), \qquad \Omega_S \triangleq \Sigma_S^{-1} + \sigma_y^{-2} I_J$$

Inverting $\Omega_S$ has a complexity on the order of $J^3$, and Equation (2) can reduce to $A + B$, where

$$A \triangleq \sigma_y^{-2}\left(y^T y + \mu_S^T \mu_S\right) - \sigma_y^{-4}\, \mu_S^T \Omega_S^{-1} \mu_S$$

$$B \triangleq -\sigma_y^{-2}\, 2 \mu_S^T Y_n - \sigma_y^{-4}\left(2 \mu_S - Y_n\right)^T \Omega_S^{-1} Y_n$$

All $Y_n$ can be computed with complexity on the order of $N \log N$ via FFT.
The input signal y can be filtered (circularly) by each of the reversed basis vectors
If the impairment signal s is completely known, there is only one basis vector (the matched filter:
When the prior is flat, the peak of the matched filter output can be taken, as the noise variance is less important or not important at all. However, when the prior is not flat, noise variance estimation can become more significant.
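The core FFT step can be sketched as follows: computing $Y_n = \Phi_n^T y$ for every circular shift $n$ amounts to circularly correlating $y$ with each basis vector, which costs on the order of $J$ FFTs rather than $N$ matrix products. This is an illustrative sketch only; the function name and data layout are assumptions.

```python
import numpy as np

def all_shift_projections(y, Phi):
    """Compute Y[n, j] = (Phi_n^T y)_j for every circular shift n via FFTs.

    y: observed signal of length N.  Phi: N x J basis of the prototype subspace.
    Each column is handled with one circular cross-correlation, so the total
    cost is on the order of J FFTs, i.e., O(J N log N) overall.
    """
    y = np.asarray(y, dtype=float)
    N, J = Phi.shape
    Yf = np.fft.rfft(y)
    out = np.empty((N, J))
    for j in range(J):
        # circular cross-correlation of y with the j-th basis vector
        out[:, j] = np.fft.irfft(np.conj(np.fft.rfft(Phi[:, j])) * Yf, n=N)
    return out
```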
1. Non-Stationarity
In the case of non-stationary white noise, the noise can have a different variance with each sample:

$$P(y \mid s, H = n) = \mathcal{N}(s_n, \Sigma_y), \qquad n \in 0{:}N-1$$

where

$$\Sigma_y = \operatorname{diag}\!\left(\sigma_{y,0}^2, \sigma_{y,1}^2, \ldots, \sigma_{y,N-1}^2\right)$$
The likelihood for non-stationary white noise can be modeled as follows:

Signal present:

$$P(y \mid H = n) = \mathcal{N}\!\left(\Phi_n \mu_S,\ \Phi_n \Sigma_S \Phi_n^T + \Sigma_y\right)$$

Signal absent:

$$P(y \mid H = N) = \mathcal{N}\!\left(0, \Sigma_y\right) \quad (3)$$
To simplify Equation (5), the following is useful
Thus, after substantial computations, e.g., Schur complements, the Matrix Inversion Lemma, etc., $A$ and $B$ can be expressed in terms of scalar quantities, $J \times J$ matrices $\Omega_{s,n}^{-1}$, $\psi_{s,n}$, and a $J \times 1$ vector $\zeta_{s,n}$, as follows:

$$A = N \log 2\pi + \log\lvert\Sigma_y\rvert + \log\lvert\Sigma_s\rvert + \log\lvert\Omega_{s,n}\rvert$$

$$B = y^T \Sigma_y^{-1} y - 2 \mu_s^T \zeta_{s,n} + \mu_s^T \psi_{s,n} \mu_s - \zeta_{s,n}^T \Omega_{s,n}^{-1} \zeta_{s,n} + 2 \mu_s^T \psi_{s,n} \Omega_{s,n}^{-1} \zeta_{s,n} - \mu_s^T \psi_{s,n} \Omega_{s,n}^{-1} \psi_{s,n} \mu_s$$

Defining the following intermediate quantities,

$$\psi_{s,n} \triangleq \Phi_n^T \Sigma_y^{-1} \Phi_n$$

$$\Omega_{s,n} \triangleq \Sigma_s^{-1} + \psi_{s,n}$$

$$\zeta_{s,n} \triangleq \Phi_n^T \Sigma_y^{-1} y \quad (6)$$

direct evaluation of the foregoing via Equation (6) can have a complexity for all $n$ on the order of $N^2$, whereas using on the order of $J^2$ FFTs, the complexity can be reduced to the order of $N \log N$.
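For reference, the quantities in Equation (6) can be evaluated directly for a single shift $n$ as sketched below, with the per-sample noise variances on the diagonal of $\Sigma_y$; repeating this for every $n$ is the direct route whose cost the FFT-based formulation reduces. Names and the shift convention of the helper are assumptions for illustration.

```python
import numpy as np

def equation6_terms(y, Phi, Sigma_s_inv, sigma2_y, n):
    """Directly evaluate psi_{s,n}, Omega_{s,n}, zeta_{s,n} (Equation (6)) for one shift n.

    y: observation (length N); Phi: N x J subspace basis; Sigma_s_inv: J x J inverse
    prior covariance; sigma2_y: length-N vector of per-sample noise variances
    (the diagonal of Sigma_y).
    """
    Phi_n = np.roll(Phi, n, axis=0)             # P_n Phi: circularly shifted basis
    w = 1.0 / np.asarray(sigma2_y, dtype=float) # diagonal of Sigma_y^{-1}
    psi = Phi_n.T @ (w[:, None] * Phi_n)        # J x J
    Omega = Sigma_s_inv + psi                   # J x J
    zeta = Phi_n.T @ (w * y)                    # length-J vector
    return psi, Omega, zeta
```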
Assuming a width $L$ of an undesired target (sometimes referred to as an “impairment”) signal is substantially less than the number of partitions $N$, the variance $\sigma_{y,n}^2$ of non-stationary white noise can be estimated as a mask-weighted average of $y_n^2$ in relation to two sliding masks arranged as in
Stated differently, disclosed systems estimate a region where a target signal occurs. Such a system can assume a target signal is short in duration relative to an observed, time-varying signal. The system can estimate noise variance over a moving window and assume that a target signal is centered within the window.
As but one example for making such an estimate, two sliding masks can be used, with an inner mask having a temporal width selected to correspond to a width of a given target signal, and an outer mask can have a selected look-ahead and look-back width relative to the inner mask. The inner mask can be centered within the outer mask. The estimated noise variance can be a mask-weighted average of a square of the observed signal.
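A minimal sketch of that sliding-mask estimate follows, assuming the inner mask width matches the expected target duration and the outer mask adds symmetric look-back and look-ahead context; the window widths and edge handling are illustrative choices rather than prescribed values.

```python
import numpy as np

def sliding_mask_noise_variance(y, inner_width, outer_width):
    """Estimate the per-sample noise variance sigma_{y,n}^2 as a mask-weighted
    average of y^2: an outer window centered on sample n supplies context, and an
    inner window (assumed to contain the target) is excluded from the average.
    """
    y2 = np.asarray(y, dtype=float) ** 2
    # centered moving sums of y^2 over the outer and inner masks
    outer_sum = np.convolve(y2, np.ones(outer_width), mode="same")
    inner_sum = np.convolve(y2, np.ones(inner_width), mode="same")
    n_context = max(outer_width - inner_width, 1)  # samples actually averaged
    return (outer_sum - inner_sum) / n_context     # length-N variance estimate
```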
Alternatively, an expectation maximization approach can be used to formalize the sliding mask computations, but the computational overhead increases.
In any event, disclosed target signal detectors can assess each of a plurality of regions of an observed signal to determine whether the respective region includes a component of an unwanted target signal. Each region spans a selected number of samples of the observed signal, and the selected number of samples in each region is substantially less than a total number of samples of the observed signal. Such approaches are suitable for a variety of unwanted target signals, including a stationary signal, a non-stationary signal, and a colored signal.
2. Detection in “Colored” Noise: A “Whitening” Approach
Noise can vary among different frequencies, and a target signal can emphasize one or more frequency bands. General noise detectors can incorporate a so-called multiband detector. For example, each band can have a corresponding set of subspaces. Under such approaches, model complexity can increase, requiring additional data for training. As well, additional computational cost can be incurred, but some disclosed systems assess a plurality of frequency bands within each region to determine whether the respective region includes a component of the unwanted target signal within one or more of the frequency bands.
Nonetheless, with many signals (less true for music and speech), the degree of noise coloration can be approximately constant. That assumption can be better suited for signals with lower frequency resolutions, and arbitrary impulse-like excitations are still possible. A noise coloration model can be employed:
Although the model is circulant, pad regions and Burg's method can be used to estimate the $w_k$ and $e_n$.
Disclosed detectors can transform observed signals to “whiten” them. After whitening, the detector can apply non-stationary signal detection to an observed signal as described above.
For example, the likelihood model can include a change of variables relative to the stationary white noise model (e.g., y becomes e; constant Jacobian).
can be simplified using

$$\Phi_n \triangleq P_n \Phi$$

and, since $W$ and $P_n$ are circulant, multiplication can be interchanged:

$$W \Phi_n = P_n (W \Phi)$$

Although the columns of $W \Phi_n$ are not orthonormal, Gram-Schmidt can be applied:

$$W \Phi = \Phi' V, \qquad \Phi' \in \mathbb{R}^{N \times J}, \quad V \in \mathbb{R}^{J \times J}$$

Defining

$$\Phi'_n \triangleq P_n \Phi', \qquad \mu'_s \triangleq V \mu_s, \qquad \Sigma'_s \triangleq V \Sigma_s V^T,$$

it follows that:

$$P(e \mid H = n) = \mathcal{N}\!\left(\Phi'_n \mu'_s,\ \Phi'_n \Sigma'_s \Phi_n'^T + \Sigma_e\right) \quad (7)$$

which reduces the problem to that of non-stationary white noise:

$$\zeta'_{s,n} \triangleq \Phi_n'^T \Sigma_e^{-1} e$$

$$\psi'_{s,n} \triangleq \Phi_n'^T \Sigma_e^{-1} \Phi'_n$$

$$\Omega'_{s,n} \triangleq \Sigma_s'^{-1} + \psi'_{s,n}$$
Thus, after whitening of the colored signal, noise detection as described above in connection with the non-stationary white noise can proceed.
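The whitening and re-orthonormalization above might be sketched as follows, with the coloration filter $A(z)$ supplied by the caller (e.g., fit with Burg's method over pad regions as described above). Linear filtering stands in for the circulant $W$ of the model in this sketch, and all names are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def whiten_observation_and_basis(y, Phi, mu_s, Sigma_s, ar_coeffs):
    """Whiten a colored observation and re-orthonormalize the target subspace.

    ar_coeffs: whitening filter coefficients [1, a_1, ..., a_p].
    Returns the whitened observation e, the re-orthonormalized basis Phi', and
    the transformed prior parameters mu'_s and Sigma'_s.
    """
    e = lfilter(ar_coeffs, [1.0], y)                     # whitened observation e_n
    WPhi = np.column_stack(
        [lfilter(ar_coeffs, [1.0], Phi[:, j]) for j in range(Phi.shape[1])]
    )
    Phi_prime, V = np.linalg.qr(WPhi)                    # Gram-Schmidt: W Phi = Phi' V
    mu_s_prime = V @ mu_s                                # mu'_s = V mu_s
    Sigma_s_prime = V @ Sigma_s @ V.T                    # Sigma'_s = V Sigma_s V^T
    return e, Phi_prime, mu_s_prime, Sigma_s_prime
```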
3. Training
Systems as disclosed herein can be trained using a database of button click sounds (or any other template for a target signal) recorded over a domain of interest. That template can then be recorded in combination with a variety of different environments (e.g., speech, automobile traffic, road noise, music, etc.). Disclosed systems then can be trained to adapt to detect and localize the target signal when in the presence of arbitrary, non-stationary signals/noises (e.g., music, etc.). Such training can include tuning a plurality of model parameters against one or more representative unwanted signals, one or more classes of environmental signals, and combinations thereof.
For example, in a working embodiment, a noise detector was trained to detect unwanted audible sounds. To train the detector, raw audio (e.g., without processing) of several unwanted noise signals (e.g., slow, fast, and rapid “clicks”, button taps, screen taps, and even rubbing of hands against an electronic device) was acquired in connection with different devices and stored. For example, two minutes of unperturbed, unwanted noise signals were obtained with minimal or no other audible noise. As well, samples of several classes of desired signals (e.g., music, speech, environmental sounds, or textures, including traffic audio, café audio) were recorded with a similar raw device configuration.
Referring now to
In some instances, a frame 30 containing the impairment signal 31 can be removed (e.g., deleted) from the observed signal and the resulting empty frame (e.g.,
For clarity in describing available techniques to develop the estimate, the remainder of this description proceeds by way of reference to a two-step approach—removal followed by gap-filling. Nonetheless, those of ordinary skill in the art will appreciate that described techniques to develop the estimate can be employed in removal by directly overwriting a frame of the observed signal with the estimate. The frame 30 containing the impaired segment 31 is sometimes referred to as a “removal region,” despite that the impaired segment 31 can be removed and the resulting gap filled, or that the impaired segment 31 can be directly overwritten.
1. Overview
Several approaches are available to estimate a portion of a desired signal to supplant the impaired portion of the observed signal within the frame 30. For example, one or both of segments 21a, 25a of the observed signal in the respective frames 20, 24 adjacent the removal region 30 can be extended into or across the frame 30, as generally depicted in
The extended segments 21b, 25b, if both are generated, can be combined to form the estimated segment 34 of the desired signal within the frame 30. Since those extensions 21b, 25b likely will differ and thus not identically overlap with each other, the extensions can be cross-faded with each other using known techniques. The cross-faded segment 34 (
The segments 21a, 25a can be extended using a variety of techniques. For example, a time-scale of the segments 21a, 25a can be modified to extend the respective segments of the observed signal into or across the removal region 30. As an alternative, the observed signal can be extended by an autoregressive modeling approach, with or without adapting a width of the removal region 30 and/or the adjacent regions 20, 24, e.g., to account for one or more characteristics (e.g., transients) of the observed signal.
Autoregressive (AR) modeling is a method that is commonly used in audio processing, especially with speech, for determining a spectral shape of a signal. AR modeling can be a suitable approach insofar as it can capture spectral content of a signal while allowing an extension of the signal to maintain the spectral shape 32, 33 (
In one approach, AR coefficients for both a forward extension 21b of the segment 21a and a backward extension 25b of the segment 25a can be determined using Burg's method (as opposed to, for example, the Yule-Walker equations):
$$A(z) = 1 - \sum_{k=1}^{p} a(k)\, z^{-k}$$
The original signal can be inverse filtered to obtain an excitation signal:
$$E(z) = A(z)\, X(z)$$
and the front and rear regions of the observed signal can be extended by combining the excitation signal with the AR coefficients corresponding to the respective front and rear regions. For example, the well-known computational tool Matlab has a function filtic( ) that returns initial conditions of a filter, which allows extension of the front and rear regions of the observed signal. The extensions 21b and 25b can then be cross-faded with each other.
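The forward/backward extension and cross-fade described above might be sketched as follows. Burg's recursion is written out directly rather than relying on a toolbox; the zero-excitation recursion used for the extension, the linear cross-fade, the AR order, and all names are illustrative assumptions rather than the claimed implementation.

```python
import numpy as np

def burg_ar(x, order):
    """Estimate AR polynomial coefficients a = [1, a_1, ..., a_p] via Burg's method.
    (With the sign convention A(z) = 1 - sum a(k) z^-k used above, a(k) = -a[k].)"""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    f, b = x.copy(), x.copy()                  # forward / backward prediction errors
    for _ in range(order):
        fp, bp = f[1:], b[:-1]
        k = -2.0 * np.dot(fp, bp) / (np.dot(fp, fp) + np.dot(bp, bp))
        a_pad = np.concatenate([a, [0.0]])
        a = a_pad + k * a_pad[::-1]            # Levinson-style coefficient update
        f, b = fp + k * bp, bp + k * fp        # update prediction errors
    return a

def ar_extend(segment, order, n_extend):
    """Extend a training segment past its end by recursing the AR model with
    zero excitation (one simple choice for a sketch)."""
    a = burg_ar(segment, order)
    buf = list(segment[-order:])
    for _ in range(n_extend):
        buf.append(-np.dot(a[1:], buf[-1:-order - 1:-1]))  # y[n] = -sum a_k y[n-k]
    return np.asarray(buf[order:])

def fill_gap(front_segment, rear_segment, gap_len, order=50):
    """Fill a removal region: extend the front segment forward and the (time-reversed)
    rear segment backward, then cross-fade the two extensions across the gap."""
    fwd = ar_extend(front_segment, order, gap_len)
    bwd = ar_extend(np.asarray(rear_segment)[::-1], order, gap_len)[::-1]
    w = np.linspace(1.0, 0.0, gap_len)         # linear cross-fade weights
    return w * fwd + (1.0 - w) * bwd
```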
Line Spectral Pairs Polynomials can extend the excitation signal across the removal region. For example, after estimating the AR coefficients, two polynomials P and Q can be generated by reversing the order of the AR coefficients, shifting them by one, and adding them back:
$$P(z) = A(z) + z^{-(p+1)} A(z^{-1})$$

$$Q(z) = A(z) - z^{-(p+1)} A(z^{-1})$$

To make use of the Line Spectral Pairs, a function $D$ can be defined as a weighted combination:

$$D(z, \eta) = \eta\, P(z) + (1 - \eta)\, Q(z)$$
For example, D equals A, the AR polynomial, when η equals 0.5. The Line Spectral Pairs Polynomial can be used to extend the excitation signal, as depicted in
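In coefficient form, the P, Q, and weighted D polynomials above can be assembled as in the following sketch, where the AR coefficient vector is assumed to be [1, a_1, ..., a_p] from the fit above and the function names are hypothetical.

```python
import numpy as np

def lsp_polynomials(a):
    """Form P(z) = A(z) + z^-(p+1) A(z^-1) and Q(z) = A(z) - z^-(p+1) A(z^-1)
    from AR coefficients a = [1, a_1, ..., a_p]; outputs are coefficients of
    z^0 ... z^-(p+1)."""
    a = np.asarray(a, dtype=float)
    az = np.concatenate([a, [0.0]])            # A(z), padded to degree p+1
    ar = np.concatenate([[0.0], a[::-1]])      # z^-(p+1) A(z^-1)
    return az + ar, az - ar

def weighted_lsp(a, eta):
    """D(z) = eta * P(z) + (1 - eta) * Q(z); reduces to A(z) when eta = 0.5."""
    P, Q = lsp_polynomials(a)
    return eta * P + (1.0 - eta) * Q
```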
2. Estimating a Desired Signal with Adjacent Transients
Standard autoregressive models work well when the observed signal is stationary in the look-back region 24 and in the look-ahead region 20 relative to the removal region 30. However, when an observed signal 41, 42, 51, 45 contains a transient 45 in either region 40, 44, as in FIG. 13A, conventional autoregressive models can extend the transient 45 into the gap 50 and accentuate the transient, introducing an undesirable artifact 52 into the processed signal, as shown in
To account for transients in the segments of the observed signal falling in the regions 40, 44 adjacent the removal region 50, a width of the adjacent training regions 40, 44 can be adjusted, or “adapted,” to avoid the transient portions 45. Further, the weighted line spectral pairs can control an excitation level.
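One way such adaptation might look is to truncate the training region whenever a frame's short-term energy jumps well above the region's typical level. This is purely illustrative: the energy measure, frame size, and threshold below are assumptions, not values taken from this disclosure.

```python
import numpy as np

def adapt_training_region(x, start, stop, frame_len=256, ratio_threshold=4.0):
    """Trim a training region [start, stop) so it stops before a transient.

    Frames whose short-term energy exceeds ratio_threshold times the median
    frame energy are treated as transient-like; the region is truncated at the
    first such frame.  Frame length and threshold are illustrative choices.
    """
    x = np.asarray(x, dtype=float)
    starts = range(start, stop - frame_len + 1, frame_len)
    energies = np.array([np.mean(x[i:i + frame_len] ** 2) for i in starts])
    if energies.size == 0:
        return start, stop
    median_energy = np.median(energies) + 1e-12
    for k, e in enumerate(energies):
        if e / median_energy > ratio_threshold:     # transient-like burst detected
            return start, start + k * frame_len     # keep only the frames before it
    return start, stop
```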
In an attempt to avoid such artifacts, several measures of the observed signal in the adjacent regions 40, 44 can be considered, as in
As shown in
3. Band-Wise Gap Filling
In some instances, a component of the unwanted target signal within the removal region includes content of the observed signal within a selected frequency band. Such content of the observed signal within the selected frequency band can be supplanted on a band-by-band basis, as by replacing a portion of the observed signal with an estimate of content of the desired signal within the selected frequency band. As above, such an estimate can be a perceptual equivalent, or an acceptable perceptual equivalent, to the original, unimpaired version of a desired signal.
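A band-by-band replacement might be sketched as below. The filter order, the zero-phase filtering, and the assumption that a full-length estimate of the desired signal is already available are all illustrative choices, and the function name is hypothetical.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def replace_band(observed, desired_estimate, start, stop, band_hz, fs):
    """Supplant only the selected frequency band within the removal region [start, stop).

    Out-of-band content of the observed signal is kept; in-band content is taken
    from an estimate of the desired signal (assumed here to be a full-length
    signal aligned with the observation).  band_hz = (low_hz, high_hz).
    """
    sos = butter(4, band_hz, btype="bandpass", fs=fs, output="sos")
    in_band_estimate = sosfiltfilt(sos, desired_estimate)
    out_of_band_observed = observed - sosfiltfilt(sos, observed)
    corrected = np.asarray(observed, dtype=float).copy()
    corrected[start:stop] = out_of_band_observed[start:stop] + in_band_estimate[start:stop]
    return corrected
```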
1. Overview
As depicted in
2. Detection
Accordingly, disclosed detectors can be trained to look ahead or behind in relation to a detected primary target 12, 14. A window size of the look-ahead/look-behind region can be adapted during training of the detector according to the characteristics of the target signal(s).
Referring now to
With such secondary component detectors, secondary targets 64, 65 that would otherwise remain or appear in the processed signal as an artifact can be identified and supplanted. Secondary components can result from, for example, initial contact between a user's finger and an actuator before the actuation that gives rise to a primary component, as well as from release of an actuator and other mechanical actions. If the gap-filling techniques described herein thus far are applied to observed signals containing such secondary components, the secondary components can be unintentionally reproduced and/or accentuated.
3. Removal and Gap-Filling
Under one approach, the secondary components 64, 65 of a target signal can be supplanted in conjunction with supplanting nearby primary components 63. Accordingly, one or more narrower removal regions within the observed signal can be defined to, initially, correspond to each of the one or more other components 64, 65 of the unwanted target signal, as generally depicted in
Primary and secondary target signal components can be grouped together if they are found to be within a selected time (e.g., about 100 ms, such as, for example, between about 80 ms and about 120 ms, with between 90 ms and 110 ms being but one particular example) of each other, as with the secondary components shown in the frame 60.
However, if adjacent removal regions 64 are too close together, e.g., separated by less than about 5 ms of observed signal 61, such as, for example, between about 3 ms and about 5 ms apart, insufficient observed signal can be available for training the extensions used to supplant the secondary components of the target signal. Consequently, the adjacent removal regions 64 can be merged into a single removal region 64′ (
After merging, the remaining frames 62, 64′ and 65 containing components of the target signal can be ordered from smallest to largest, as in
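The grouping, merging, and ordering logic described above might be sketched as follows. The millisecond threshold echoes the ranges given above; converting it to samples, the data layout, and the function name are assumptions.

```python
def merge_and_order_regions(regions, fs, min_gap_ms=5.0):
    """Merge removal regions separated by too little observed signal, then order them.

    regions: time-sorted list of (start, stop) sample indices for primary and
    secondary target components already grouped within the ~100 ms window.
    Neighbors separated by less than min_gap_ms of clean signal are merged,
    since the gap between them would be too short to train an extension on;
    the merged regions are then returned smallest-first for processing.
    """
    min_gap = int(min_gap_ms * fs / 1000.0)
    merged = [list(regions[0])]
    for start, stop in regions[1:]:
        if start - merged[-1][1] < min_gap:
            merged[-1][1] = max(merged[-1][1], stop)   # merge into previous region
        else:
            merged.append([start, stop])
    return sorted((tuple(r) for r in merged), key=lambda r: r[1] - r[0])
```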
A working embodiment of disclosed systems was developed and several user trials were performed to assess perceptual quality of disclosed approaches. A listening environment matching that of a good speaker system was set up with levels set to about 10 dB higher than THX® reference; −26 dB full scale mapped to an 89 dB sound pressure level (e.g., a loud listening level). Eight subjects were asked to rate perceived sound quality of a variety of audio clips. During the test, users heard a clean audio clip without a click and audio clips with the click removed using various embodiments of disclosed approaches. The order of clip playback was randomized so the user didn't know which clip was the original.
Then, users were asked to rate the quality of the audio clip with the click removed on a scale from 5 to 1, as follows:
For comparison, the test was performed with a multi-band approach, a naïve AR with 50 coefficients, a naïve AR with 1000 coefficients, and time-scale modification. Results are shown in
In all cases, disclosed approaches scored a 5 (e.g., were perceptual equivalents to the original, unimpaired signal) for over 90% of the cases run, as shown in
As shown in
The computing environment 400 includes at least one central processing unit 410 and memory 420. In
A computing environment may have additional features. For example, the computing environment 400 includes storage 440, one or more input devices 450, one or more output devices 460, and one or more communication connections 470. An interconnection mechanism (not shown) such as a bus, a controller, or a network, interconnects the components of the computing environment 400. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 400, and coordinates activities of the components of the computing environment 400.
The store 440 may be removable or non-removable, and can include selected forms of machine-readable media. In general, machine-readable media include magnetic disks, magnetic tapes or cassettes, non-volatile solid-state memory, CD-ROMs, CD-RWs, DVDs, optical data storage devices, carrier waves, or any other machine-readable medium which can be used to store information and which can be accessed within the computing environment 400. The storage 440 stores instructions for the software 480, which can implement technologies described herein.
The store 440 can also be distributed over a network so that software instructions are stored and executed in a distributed fashion. In other embodiments, some of these operations might be performed by specific hardware components that contain hardwired logic. Those operations might alternatively be performed by any combination of programmed data processing components and fixed hardwired circuit components.
The input device(s) 450 may be a touch input device, such as a keyboard, keypad, mouse, pen, touchscreen, touch pad, or trackball, a voice input device, a scanning device, or another device, that provides input to the computing environment 400. For audio, the input device(s) 450 may include a microphone or other transducer (e.g., a sound card or similar device that accepts audio input in analog or digital form), or a computer-readable media reader that provides audio samples to the computing environment 400.
The output device(s) 460 may be a display, printer, speaker transducer, DVD-writer, or another device that provides output from the computing environment 400.
The communication connection(s) 470 enable communication over a communication medium (e.g., a connecting network) to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed graphics information, processed signal information (including processed audio signals), or other data in a modulated data signal.
Thus, disclosed computing environments are suitable for transforming a signal corrected as disclosed herein into a human-perceivable form. As well, or alternatively, disclosed computing environments are suitable for transforming a signal corrected as disclosed herein into a modulated signal and conveying the modulated signal over a communication connection.
Machine-readable media are any available media that can be accessed within a computing environment 400. By way of example, and not limitation, with the computing environment 400, machine-readable media include memory 420, storage 440, communication media (not shown), and combinations of any of the above. Tangible machine-readable (or computer-readable) media exclude transitory signals.
The examples described above generally concern apparatus, methods, and related systems for removing unwanted noise from observed signals, and more particularly but not exclusively to audio noise in observed audio signals. Nonetheless, embodiments other than those described above in detail are contemplated based on the principles disclosed herein, together with any attendant changes in configurations of the respective apparatus described herein. For example, disclosed systems can be used to process real-time signals being transmitted, as in a telephony application (subject to latency considerations on different computational platforms). Other disclosed systems can be used to process recordings of observed signals. And, disclosed principles are not limited to audio signals, but are generally applicable to other types of signals susceptible to unwanted noise.
Directions and other relative references (e.g., up, down, top, bottom, left, right, rearward, forward, etc.) may be used to facilitate discussion of the drawings and principles herein, but are not intended to be limiting. For example, certain terms may be used such as “up,” “down,” “upper,” “lower,” “horizontal,” “vertical,” “left,” “right,” and the like. Such terms are used, where applicable, to provide some clarity of description when dealing with relative relationships, particularly with respect to the illustrated embodiments. Such terms are not, however, intended to imply absolute relationships, positions, and/or orientations. For example, with respect to an object, an “upper” surface can become a “lower” surface simply by turning the object over. Nevertheless, it is still the same surface and the object remains the same. As used herein, “and/or” means “and” or “or”, as well as “and” and “or.” Moreover, all patent and non-patent literature cited herein is hereby incorporated by reference in its entirety for all purposes.
The principles described above in connection with any particular example can be combined with the principles described in connection with another example described herein. Accordingly, this detailed description shall not be construed in a limiting sense, and following a review of this disclosure, those of ordinary skill in the art will appreciate the wide variety of signal processing techniques that can be devised using the various concepts described herein.
Moreover, those of ordinary skill in the art will appreciate that the exemplary embodiments disclosed herein can be adapted to various configurations and/or uses without departing from the disclosed principles. Applying the principles disclosed herein, it is possible to provide a wide variety of systems adapted to remove impairments from observed signals. For example, modules identified as constituting a portion of a given computational engine in the above description or in the drawings can be omitted altogether or implemented as a portion of a different computational engine without departing from some disclosed principles.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosed innovations. Various modifications to those embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of this disclosure. Thus, the claimed inventions are not intended to be limited to the embodiments shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular, such as by use of the article “a” or “an” is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. All structural and functional equivalents to the features and method acts of the various embodiments described throughout the disclosure that are known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the features described and claimed herein. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 USC 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or “step for”.
Thus, in view of the many possible embodiments to which the disclosed principles can be applied, we reserve the right to claim any and all combinations of features and technologies described herein as understood by a person of ordinary skill in the art, including, for example, all that comes within the scope and spirit of the following claims.
This application claims benefit of and priority to U.S. Provisional Patent Application No. 62/348,662, filed on Jun. 10, 2016, which application is hereby incorporated by reference in its entirety for all purposes.