An aspect of the disclosure here relates to digital audio signal processing techniques that reduce the effort required for a person to have a conversation with another person in a noisy ambient sound environment. Other aspects are also described and claimed.
Having a conversation with someone who is nearby but in a noisy environment, such as in a restaurant, bar, airplane, or a bus, takes effort as it is difficult to hear and understand the other person. A solution that may reduce this effort is to wear headphones that passively isolate the wearer from the noisy environment but also actively reproduce the other person's voice through the headphone's speakers in a so-called transparency function. Such selective reproduction of the ambient sound environment may be achieved by applying beamforming signal processing to the output of a microphone array in the headphones, which focuses sound pickup in the direction of the other talker (and at the same time de-emphasizes or suppresses the pickup of other sounds in the environment.) Such headphones may also have an acoustic noise cancellation, ANC, mode of operation in which a quiet listening experience is created for the wearer by electronically cancelling any undesired ambient sounds that are still being heard by the wearer (due to having leaked past the passive sound isolation of the headphones.)
An aspect of the disclosure here is a digital audio signal processing technique that helps reduce speech listening effort by a headphone wearer in a noisy environment by suppressing the background noise of the ambient sound environment. Several external microphone signals are passed through a transparency digital filter path that drives a speaker of the headphone. In a sidechain process, different sounds, e.g., the wearer's voice and another talker's voice, are separated into respective frequency domain filter definitions (or frequency domain masks) on a per audio frame basis, and those frequency domain filters are then processed to update, on a per audio frame basis, the low latency time domain digital filters of a first transparency path that is filtering the external microphone signals before driving a headphone speaker. While doing so, the process can also independently raise and lower each of the separate sounds depending on the wearer's context or use case, based on output from various sensors (including the external microphone signals), to improve the wearer's listening experience.
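Only as a rough illustration of this per-frame sidechain flow (the disclosure defines the actual processing in the figures and claims), the following Python sketch assumes a hypothetical separator that returns two frequency domain masks per frame and converts their combination into short time domain coefficients for the low latency transparency filter; the names FRAME, FIR_TAPS, separator, and transparency_filter are illustrative assumptions, not taken from the disclosure.

    # Hedged sketch: per-frame sidechain that turns frequency domain masks
    # into time domain coefficients for a low latency transparency filter.
    # "separator" and "transparency_filter" are hypothetical placeholders.
    import numpy as np

    FRAME = 256          # assumed audio frame length (samples)
    FIR_TAPS = 32        # assumed low latency time domain filter length

    def sidechain_update(mic_frames, separator, transparency_filter):
        # mic_frames: list of per-microphone time domain frames (np.ndarray)
        spectra = [np.fft.rfft(f, FRAME) for f in mic_frames]
        # hypothetical separator returns one mask per separated sound source
        mask_own, mask_talker = separator(spectra)             # FRAME//2 + 1 bins each
        combined = np.clip(mask_own + mask_talker, 0.0, 1.0)   # keep both voices
        # frequency-sampling design: convert the combined mask to a short FIR
        impulse = np.fft.irfft(combined, FRAME)
        impulse = np.roll(impulse, FIR_TAPS // 2)[:FIR_TAPS]   # truncate, re-center
        impulse *= np.hanning(FIR_TAPS)                        # taper to reduce ripple
        transparency_filter.set_coefficients(impulse)          # update per audio frame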
For the wearer who has an audiogram (hearing test results having non-zero dB Hearing Loss values) stored in their headphone or in a companion device such as a smartphone, there is a second transparency mode of operation in which a personalized enhancement path (instead of the transparency path) is active. The personalized enhancement path also contains time domain digital filters that are configured to suppress background noise, but the path uses high latency digital filters (their latency is higher than that of the transparency path filters) and also provides some amplification of the reproduced ambient sounds, based on the audiogram, which compensates for the frequency dependent hearing sensitivity of the headphone wearer.
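The disclosure does not specify a particular prescription formula for the audiogram-based amplification. Purely as an illustration, a per-band insertion gain could be derived from the stored dB HL values with a simple half-gain heuristic; the band centers, example audiogram values, and gain cap below are all hypothetical.

    # Hedged sketch: derive per-band amplification from an audiogram using a
    # simple half-gain heuristic; NOT the disclosure's actual prescription rule.
    import numpy as np

    band_centers_hz = np.array([250, 500, 1000, 2000, 4000, 8000])   # assumed bands
    audiogram_dbhl  = np.array([  5,  10,   15,   30,   45,   50])   # example values

    def personalized_gain_db(dbhl, max_gain_db=30.0):
        gain = 0.5 * np.maximum(dbhl, 0.0)       # half-gain rule (assumption)
        return np.minimum(gain, max_gain_db)     # cap to avoid over-amplification

    def gain_per_bin(fft_bins_hz, dbhl=audiogram_dbhl, centers=band_centers_hz):
        # interpolate the band gains onto the FFT bin grid used by the
        # high latency personalized enhancement path
        return np.interp(fft_bins_hz, centers, personalized_gain_db(dbhl))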
Such headphones may also have feedforward and feedback acoustic noise cancellation, ANC, paths that may be activated simultaneously with either the transparency path or the personalized enhancement path, or separately to produce a quiet listening experience, to further tailor the headphone wearer's listening experience to different usage scenarios.
In another aspect, data (control values, not audio signals) that is relevant to improving the transparency, personalized enhancement, or ANC experience is shared between primary and secondary headphones, or between a headphone and a companion device of the wearer such as a smartphone, via wireless communications links to the headphones.
In another aspect, multiple transparency modes of operation are described with varying speech boosting contributions by target voice and wearer voice processing (TVWVP).
The above summary does not include an exhaustive list of all aspects of the present disclosure. It is contemplated that the disclosure includes all systems and methods that can be practiced from all suitable combinations of the various aspects summarized above, as well as those disclosed in the Detailed Description below and particularly pointed out in the Claims section. Such combinations may have advantages not specifically recited in the above summary.
Several aspects of the disclosure here are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” aspect in this disclosure are not necessarily to the same aspect, and they mean at least one. Also, in the interest of conciseness and reducing the total number of figures, a given figure may be used to illustrate the features of more than one aspect of the disclosure, and not all elements in the figure may be required for a given aspect.
Several aspects of the disclosure with reference to the appended drawings are now explained. Whenever the shapes, relative positions and other aspects of the parts described are not explicitly defined, the scope of the invention is not limited only to the parts shown, which are meant merely for the purpose of illustration. Also, while numerous details are set forth, it is understood that some aspects of the disclosure may be practiced without these details. In other instances, well-known circuits, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
The headphone 1 is part of an audio system that has a digital audio processor 5, two or more external microphones 2, 3, at least one internal microphone (not shown in the figure), and a headphone speaker 4, all of which may be integrated within the housing of the headphone 1. The internal microphone may be one that is arranged and configured to directly receive the sound reproduced by the speaker 4 and is sometimes referred to as an error microphone. The external microphone 2 is arranged and configured to receive ambient sound directly (or is open to the ambient directly) and is sometimes referred to as a reference microphone. The external microphone 3 is arranged and configured to be more responsive than the external microphone 2 when picking up the sound of the wearer voice and is sometimes referred to as a voice microphone due to being located closer to the wearer's mouth than the external microphone 2.
The audio system actively reproduces the other talker's speech (that has been picked up by the external microphones 2, 3) through the headphone speaker 4 while the processor 5 is suppressing the background noise, in a so-called transparency function. The transparency function may be implemented separately in each of the primary headphone 1a and the secondary headphone 1b, using much the same methodology described below. The primary headphone 1a may be in wireless data communication with the secondary headphone 1b, for purposes of sharing control data as described further below in certain aspects of the disclosure here. Also, one or both of the headphones may be in wireless communication with a companion device (e.g., a smartphone) of the wearer, for purposes of, for example, receiving from the companion device a user content playback signal (e.g., a downlink call signal, a media player signal), sending an external microphone signal to the companion device as an uplink call signal, or receiving control data from the companion device that configures the transparency function as it is being performed in the headphone 1.
Referring now to
Still referring to
In both the first and second transparency modes, a separator is configuring the digital filter coefficients of whichever path (either the low latency path or the high latency path) is active. The separator does so by processing the external microphone signals, e.g., using a machine learning, ML, model, to produce, in parallel, i) a number of instances, over time, of a first frequency domain filter (or frequency domain mask) that represents a first sound source in the ambient sound environment, and ii) a number of instances of a second frequency domain filter (or frequency domain mask) that represents a second sound source in the ambient sound environment. The separator uses these first and second frequency domain filters to update or configure the digital filter coefficients of the low latency path or the high latency path that is driving the speaker 4 (depending on which transparency mode the processor is operating in.) The separator has a latency that is longer than that of the high latency digital filter path.
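The disclosure leaves the separator's ML model unspecified. Purely as an illustrative stand-in, a crude non-ML separator could exploit the level difference between the voice microphone (closer to the wearer's mouth) and the reference microphone to form the two per-frame frequency domain masks; this placeholder is not the disclosure's separator.

    # Hedged sketch: a crude stand-in for the separator, producing two
    # frequency domain masks per frame from the level difference between the
    # voice microphone and the reference microphone. The real separator is an
    # ML model; this is only an illustrative placeholder.
    import numpy as np

    def crude_separator(ref_spectrum, voice_spectrum, eps=1e-12):
        ref_pow   = np.abs(ref_spectrum) ** 2
        voice_pow = np.abs(voice_spectrum) ** 2
        # bins where the voice mic dominates are attributed to the wearer's voice
        own_mask = voice_pow / (voice_pow + ref_pow + eps)
        # remaining energy is attributed to the second source (e.g., another talker)
        talker_mask = 1.0 - own_mask
        return own_mask, talker_mask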
In both the first transparency mode and the second transparency mode, the speaker 4 is reproducing the first and second sound sources with the benefit of the separator suppressing background noise of the ambient sound environment. In the case of
Next, a multi-channel speech enhancer (or multichannel voice enhancer) produces the following two frequency domain filters, in response to receiving one or more of the plurality of external microphone signals (in this example, at least one produced by the external microphone 2, which is a so-called reference microphone), the first and second frequency domain filters, and a frequency domain noise estimate produced by a one channel or two channel noise estimator (not shown) whose input includes one or more of the external microphone signals: i) an upward compression filter, when the processor 5 is operating in the second transparency mode of operation, and ii) a noise suppression filter, in both the first and second transparency modes of operation. In one aspect, the multi-channel speech enhancer does so based only on statistical signal processing algorithms, but in other versions the enhancer may be ML-model based. The upward compression filter controls how much the wearer's voice is attenuated relative to the target voice; it is computed based on the wearer's audiogram, and as such its use avoids over-amplification of the wearer's voice when the processor is in the second transparency mode of operation (where the audiogram contains non-zero dBHL values that boost gain in certain frequency bins.) The other output of the speech enhancer, namely the noise suppression filter, is generated in both the first and second transparency modes of operation and could be designed to perform beamforming, to for example suppress sound sources that are in an undesired direction.
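The disclosure does not give a formula for the noise suppression filter. One conventional statistical choice, shown below only as an assumption and not necessarily what the enhancer uses, is a Wiener-style gain computed per frequency bin from the reference microphone spectrum and the noise estimate.

    # Hedged sketch: a Wiener-style per-bin noise suppression gain, shown only
    # as one conventional statistical option (not necessarily the enhancer's).
    import numpy as np

    def noise_suppression_filter(ref_spectrum, noise_psd, floor_db=-18.0, eps=1e-12):
        mixture_psd = np.abs(ref_spectrum) ** 2
        snr = np.maximum(mixture_psd - noise_psd, 0.0) / (noise_psd + eps)
        gain = snr / (1.0 + snr)                 # Wiener gain per frequency bin
        floor = 10.0 ** (floor_db / 20.0)        # limit the maximum attenuation
        return np.maximum(gain, floor)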
The separator also has a wind detector that, responsive to the external microphone signals, produces a wind detection frequency domain filter which controls how much wind noise is to be attenuated. The wind detector may be active in both the first and second transparency modes of operation.
The wind detection frequency domain filter produced by the wind detector, the upward compression filter, and the noise suppression filter produced by the multichannel speech enhancer are then processed by a transparency controller to update, on a per audio frame basis, the digital filter coefficients of the low latency digital filter path (transparency) and the high latency digital filter path (personalized enhancement.) The transparency controller does so by combining its various input frequency domain filters into a time domain filter definition for each of the digital filters in the respective paths, as follows:
Referring to
The wireless data may be used by the separator to adjust binaural cues that the headset wearer experiences when hearing the second sound source (e.g., another talker's voice) that is being reproduced through both the speaker 4 of the primary headphone 1a and through the speaker 4 of the secondary headphone 1b. In one example, referring to
In another aspect of the wireless data sharing between the primary and secondary headphones, each of the headphones has an instance of a voice activity detector (VAD) that operates on one or more local microphone signals (from microphones that are local to, e.g., integrated in the respective headphone, which may include the external microphone 2 and the external microphone 3) and perhaps also on a bone conduction sensor signal (e.g., from an accelerometer in the respective headphone.) Selected output values of the VAD, e.g., as a time sequence of binary values being speech vs. non-speech in each frequency bin, are transmitted over the air to the other headphone. The separator in the other headphone receives this wireless data and processes it, e.g., using the ML model described above, to produce, in parallel, its first frequency domain filter (or frequency domain mask) that represents the first sound source in the ambient sound environment, and its second frequency domain filter (or frequency domain mask) that represents the second sound source in the ambient sound environment. In other words, the ML model that produces the first and second frequency domain filters in the secondary headphone is being assisted by a VAD in the primary headphone.
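The VAD's internals are not detailed in this aspect. As a hypothetical illustration only, a per-bin energy threshold against a slowly tracked noise floor could yield the binary speech/non-speech values per frequency bin that are packed and sent over the air to the other headphone.

    # Hedged sketch: a hypothetical per-frequency-bin VAD whose binary output
    # could be packed and shared with the other headphone over the wireless link.
    import numpy as np

    class PerBinVAD:
        def __init__(self, num_bins, margin_db=6.0, alpha=0.95):
            self.noise_floor = np.full(num_bins, 1e-8)
            self.margin = 10.0 ** (margin_db / 10.0)
            self.alpha = alpha     # smoothing for the tracked noise floor

        def step(self, spectrum):
            power = np.abs(spectrum) ** 2
            speech = power > self.margin * self.noise_floor   # binary per bin
            # update the noise floor only in bins judged to be non-speech
            self.noise_floor = np.where(
                speech, self.noise_floor,
                self.alpha * self.noise_floor + (1.0 - self.alpha) * power)
            return speech.astype(np.uint8)   # compact values to transmit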
In yet another aspect of the disclosure here, the headphone 1 also has an acoustic noise cancellation, ANC, subsystem whose components include, as seen in
The feedback ANC digital filter path, through which the internal microphone signal of the primary headphone is filtered to produce an anti-noise signal that drives the speaker 4, serves to make the listening experience more pleasant in several modes of operation of the processor 5. The feedback ANC filter path may be active in both the first and second transparency modes of operation described above.
The feedforward ANC digital filter path, through which one or more of the external microphone signals (at least the reference microphone signal) are filtered to produce an anti-noise signal that drives the speaker 4, may be active in a so-called full ANC mode of operation. In the full ANC mode of operation, the feedforward ANC filter path is active but the transparency path filters (in the low latency path) and the personalized enhancement path filters (in the high latency path) are inactive. This results in the anti-noise signal creating a quiet listening experience for the wearer by electronically cancelling any ambient sounds that are still being heard by the wearer (due to having leaked past the passive sound isolation of the headphones.) In addition, the feedforward ANC digital filter path may also be active (to produce anti-noise) in both the first and second transparency modes of operation, when they react to reduce the severity of, for example, an undesirably loud ambient sound that the wearer would otherwise hear more strongly.
In another aspect, illustrated using the example diagram and curve in
In one aspect, the weight A may be a gain vector whose gain values can be set on a per frequency bin basis. In another aspect, the weight A is a scalar or wideband value. Within a given mode of operation, the weight A may be varied as a function of changes in the wearer's current context or use case, e.g., the wearer moves from a loud ambient environment to a quiet ambient environment, which may be determined by computing an estimate of the current ambient noise (the undesired sound in the ambient environment of the headphone), for example as a sound pressure level, SPL.
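As a sketch of how the weight A might be applied, the snippet below scales the transparency signal by A before mixing with the anti-noise signal; the SPL-to-A mapping and its thresholds are illustrative assumptions, not values taken from the disclosure.

    # Hedged sketch: mix the transparency signal, scaled by weight A, with the
    # anti-noise signal; the SPL-to-A mapping is an illustrative assumption.
    import numpy as np

    def weight_from_spl(ambient_spl_db, quiet_db=50.0, loud_db=80.0):
        # more transparency in quiet environments, less in loud ones (assumed policy)
        a = (loud_db - ambient_spl_db) / (loud_db - quiet_db)
        return float(np.clip(a, 0.0, 1.0))       # scalar (wideband) weight A

    def speaker_drive(transparency_frame, antinoise_frame, weight_a):
        # weight_a may instead be a per-frequency-bin vector applied in that domain
        return weight_a * transparency_frame + antinoise_frame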
The anti-noise and transparency signals may be produced by respective signal processing paths such as described above in connection with
The processor 5 enters the first configuration 21 in response to the estimated ambient acoustic noise being lower than a first threshold 31—see
In one aspect, the processor 5 performs the TVWVP in accordance with the techniques described above in connection
The TVWVP may perform the following process to compute a speech boost gain vector, Gb, which defines a gain boost value, such as between 0 and 1, for each detected frequency bin of interest, e.g., the ones that a detector indicates are likely to contain speech of the target voice of a person near the wearer or of the wearer voice (own voice); an illustrative sketch of this computation is given after the listing below:
deltaG = 20 log10(g_ssl) − 20 log10(g_f);
r_b (as a value between 0 and 1) is determined using, for example, a linear mapping from deltaG;
Gb = own_voice_presence_probability × r_b × boost gain for own voice (a function of the ambient acoustic noise level or the gain A) + (1 − own_voice_presence_probability) × r_b × boost gain for target voice (a function of the ambient acoustic noise level).
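The sketch below follows the listed steps; g_ssl, g_f, the range of the linear mapping for r_b, and the two boost-gain inputs are illustrative placeholders whose exact definitions come from elsewhere in the disclosure.

    # Hedged sketch of the listed Gb computation; g_ssl, g_f, the mapping range
    # for r_b, and the two boost-gain inputs are illustrative placeholders.
    import numpy as np

    def speech_boost_gain(g_ssl, g_f, own_voice_prob,
                          own_boost, target_boost,
                          delta_lo_db=0.0, delta_hi_db=20.0):
        # per-bin level difference, in dB, between the two gain estimates
        delta_g = 20.0 * np.log10(g_ssl) - 20.0 * np.log10(g_f)
        # linear mapping of deltaG onto r_b in [0, 1] (assumed mapping range)
        r_b = np.clip((delta_g - delta_lo_db) / (delta_hi_db - delta_lo_db), 0.0, 1.0)
        # blend own-voice and target-voice boost gains by the own-voice probability
        gb = (own_voice_prob * r_b * own_boost
              + (1.0 - own_voice_prob) * r_b * target_boost)
        return gb   # later combined per bin with the flat transparency gain Gt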
In one instance, an initial transparency gain vector Gt is computed with the goal of resulting in a flat gain frequency response experienced in the wearer's ear canal (e.g., as an attenuated version of the ambient sound environment) when both the ANC and transparency functions are active. The goal of Gt resulting in a flat frequency response may be achieved by appropriately setting the weight A. Gt is then combined, on a per frequency bin basis, with the speech boost gain vector, Gb, to obtain the output vector G. Combining Gb with the intentionally flat Gt will result in "gain bumps" that are in response to having detected the target voice. As a result of this TVWVP, the wearer will better hear the speech of the nearby person despite the ambient noise.
The processor 5 enters the second configuration 22 in response to the estimated ambient acoustic noise being higher than the first threshold 31. When the processor 5 is in the second configuration 22, the speech boosting effect of the TVWVP is deliberately reduced by the processor 5, e.g., the gain values in Gb are made smaller or even zero. This is because the TVWVP may not be as effective in making the speech of the nearby person (the target voice) more intelligible, in conditions where the ambient noise levels are high. Instead, the processor 5, in the second configuration 22, configures its transparency path to use sound pickup beamforming to help isolate the target voice. The beamforming is applied to the audio signals from at least two of the external microphones 2, 3 (e.g., one or more of several reference microphones plus the voice microphone), to produce the input audio signal of the audio filter in the transparency path. In this manner, sound coming from the direction of the target voice may be spatially favored in contrast to sound coming from undesired sources in other directions, while avoiding any potential artifacts that may be caused by the TVWVP.
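The beamforming method is not specified in this aspect. As one common illustration only, a delay-and-sum beamformer over two external microphone signals might look like the following, with the steering delay toward the target voice treated as a hypothetical input.

    # Hedged sketch: a two-microphone delay-and-sum beamformer toward the target
    # voice direction; the steering delay (in samples) is a hypothetical input.
    import numpy as np

    def delay_and_sum(ref_frame, voice_frame, steer_delay_samples):
        n = len(ref_frame)
        freqs = np.fft.rfftfreq(n)                  # normalized frequency (cycles/sample)
        ref_spec   = np.fft.rfft(ref_frame)
        voice_spec = np.fft.rfft(voice_frame)
        # apply a fractional steering delay to the second microphone in the
        # frequency domain, then average the two aligned channels
        steered = voice_spec * np.exp(-2j * np.pi * freqs * steer_delay_samples)
        out_spec = 0.5 * (ref_spec + steered)
        return np.fft.irfft(out_spec, n)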
While certain aspects have been described and shown in the accompanying drawings, it is to be understood that such are merely illustrative of and not restrictive on the broad invention, and that the invention is not limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those of ordinary skill in the art. For instance, while the gradual change in the TVWVP contribution is shown in
This nonprovisional patent application claims the benefit of the earlier filing date of U.S. provisional application No. 63/357,475 filed Jun. 30, 2022.