1. Field
This disclosure relates to processing of speech signals.
2. Background
Many activities that were previously performed in quiet office or home environments are being performed today in acoustically variable situations like a car, a street, or a café. For example, a person may desire to communicate with another person using a voice communication channel. The channel may be provided, for example, by a mobile wireless handset or headset, a walkie-talkie, a two-way radio, a car-kit, or another communications device. Consequently, a substantial amount of voice communication is taking place using mobile devices (e.g., smartphones, handsets, and/or headsets) in environments where users are surrounded by other people, with the kind of noise content that is typically encountered where people tend to gather. Such noise tends to distract or annoy a user at the far end of a telephone conversation. Moreover, many standard automated business transactions (e.g., account balance or stock quote checks) employ voice recognition based data inquiry, and the accuracy of these systems may be significantly impeded by interfering noise.
For applications in which communication occurs in noisy environments, it may be desirable to separate a desired speech signal from background noise. Noise may be defined as the combination of all signals interfering with or otherwise degrading the desired signal. Background noise may include numerous noise signals generated within the acoustic environment, such as background conversations of other people, as well as reflections and reverberation generated from the desired signal and/or any of the other signals. Unless the desired speech signal is separated from the background noise, it may be difficult to make reliable and efficient use of it. In one particular example, a speech signal is generated in a noisy environment, and speech processing methods are used to separate the speech signal from the environmental noise.
Noise encountered in a mobile environment may include a variety of different components, such as competing talkers, music, babble, street noise, and/or airport noise. As the signature of such noise is typically nonstationary and close to the user's own frequency signature, the noise may be hard to model using traditional single microphone or fixed beamforming type methods. Single microphone noise reduction techniques typically require significant parameter tuning to achieve optimal performance. For example, a suitable noise reference may not be directly available in such cases, and it may be necessary to derive a noise reference indirectly. Therefore multiple microphone based advanced signal processing may be desirable to support the use of mobile devices for voice communications in noisy environments.
A method of processing an audio signal according to a general configuration includes determining, for each of a first plurality of consecutive segments of the audio signal, that voice activity is present in the segment. This method also includes determining, for each of a second plurality of consecutive segments of the audio signal that occurs immediately after the first plurality of consecutive segments in the audio signal, that voice activity is not present in the segment. This method also includes detecting that a transition in a voice activity state of the audio signal occurs during one among the second plurality of consecutive segments that is not the first segment to occur among the second plurality, and producing a voice activity detection signal that has, for each segment in the first plurality and for each segment in the second plurality, a corresponding value that indicates one among activity and lack of activity. In this method, for each of the first plurality of consecutive segments, the corresponding value of the voice activity detection signal indicates activity. In this method, for each of the second plurality of consecutive segments that occurs before the segment in which the detected transition occurs, and based on said determining, for at least one segment of the first plurality, that voice activity is present in the segment, the corresponding value of the voice activity detection signal indicates activity, and for each of the second plurality of consecutive segments that occurs after the segment in which the detected transition occurs, and in response to said detecting that a transition in the speech activity state of the audio signal occurs, the corresponding value of the voice activity detection signal indicates a lack of activity. Computer-readable media having tangible structures that store machine-executable instructions that when executed by one or more processors cause the one or more processors to perform such a method are also disclosed.
An apparatus for processing an audio signal according to another general configuration includes means for determining, for each of a first plurality of consecutive segments of the audio signal, that voice activity is present in the segment. This apparatus also includes means for determining, for each of a second plurality of consecutive segments of the audio signal that occurs immediately after the first plurality of consecutive segments in the audio signal, that voice activity is not present in the segment. This apparatus also includes means for detecting that a transition in a voice activity state of the audio signal occurs during one among the second plurality of consecutive segments, and means for producing a voice activity detection signal that has, for each segment in the first plurality and for each segment in the second plurality, a corresponding value that indicates one among activity and lack of activity. In this apparatus, for each of the first plurality of consecutive segments, the corresponding value of the voice activity detection signal indicates activity. In this apparatus, for each of the second plurality of consecutive segments that occurs before the segment in which the detected transition occurs, and based on said determining, for at least one segment of the first plurality, that voice activity is present in the segment, the corresponding value of the voice activity detection signal indicates activity. In this apparatus, for each of the second plurality of consecutive segments that occurs after the segment in which the detected transition occurs, and in response to said detecting that a transition in the speech activity state of the audio signal occurs, the corresponding value of the voice activity detection signal indicates a lack of activity.
An apparatus for processing an audio signal according to another configuration includes a first voice activity detector configured to determine, for each of a first plurality of consecutive segments of the audio signal, that voice activity is present in the segment. The first voice activity detector is also configured to determine, for each of a second plurality of consecutive segments of the audio signal that occurs immediately after the first plurality of consecutive segments in the audio signal, that voice activity is not present in the segment. This apparatus also includes a second voice activity detector configured to detect that a transition in a voice activity state of the audio signal occurs during one among the second plurality of consecutive segments; and a signal generator configured to produce a voice activity detection signal that has, for each segment in the first plurality and for each segment in the second plurality, a corresponding value that indicates one among activity and lack of activity. In this apparatus, for each of the first plurality of consecutive segments, the corresponding value of the voice activity detection signal indicates activity. In this apparatus, for each of the second plurality of consecutive segments that occurs before the segment in which the detected transition occurs, and based on said determining, for at least one segment of the first plurality, that voice activity is present in the segment, the corresponding value of the voice activity detection signal indicates activity. In this apparatus, for each of the second plurality of consecutive segments that occurs after the segment in which the detected transition occurs, and in response to said detecting that a transition in the speech activity state of the audio signal occurs, the corresponding value of the voice activity detection signal indicates a lack of activity.
In a speech processing application (e.g., a voice communications application, such as telephony), it may be desirable to perform accurate detection of segments of an audio signal that carry speech information. Such voice activity detection (VAD) may be important, for example, in preserving the speech information. Speech coders (also called coder-decoders (codecs) or vocoders) are typically configured to allocate more bits to encode segments that are identified as speech than to encode segments that are identified as noise, such that a misidentification of a segment carrying speech information may reduce the quality of that information in the decoded segment. In another example, a noise reduction system may aggressively attenuate low-energy unvoiced speech segments if a voice activity detection stage fails to identify these segments as speech.
Recent interest in wideband (WB) and super-wideband (SWB) codecs places emphasis on preserving high-frequency speech information, which may be important for high-quality speech as well as intelligibility. Consonants typically have energy that is generally consistent in time across a high-frequency range (e.g., from four to eight kilohertz). Although the high-frequency energy of a consonant is typically low compared to the low-frequency energy of a vowel, the level of environmental noise is usually lower in the high frequencies.
It may be desirable to perform detection of speech onsets and/or offsets based on the principle that a coherent and detectable energy change occurs over multiple frequencies at the onset and offset of speech. Such an energy change may be detected, for example, by computing first-order time derivatives of energy (i.e., rate of change of energy over time) over frequency components in a desired frequency range (e.g., a high-frequency range, such as from four to eight kHz). By comparing the amplitudes of these derivatives to threshold values, one can compute an activation indication for each frequency bin and combine (e.g., average) the activation indications over the frequency range for each time interval (e.g., for each 10-msec frame) to obtain a VAD statistic. In such case, a speech onset may be indicated when a large number of frequency bands show a sharp increase in energy that is coherent in time, and a speech offset may be indicated when a large number of frequency bands show a sharp decrease in energy that is coherent in time. Such a statistic is referred to herein as “high-frequency speech continuity.”
Unless expressly limited by its context, the term “signal” is used herein to indicate any of its ordinary meanings, including a state of a memory location (or set of memory locations) as expressed on a wire, bus, or other transmission medium. Unless expressly limited by its context, the term “generating” is used herein to indicate any of its ordinary meanings, such as computing or otherwise producing. Unless expressly limited by its context, the term “calculating” is used herein to indicate any of its ordinary meanings, such as computing, evaluating, smoothing, and/or selecting from a plurality of values. Unless expressly limited by its context, the term “obtaining” is used to indicate any of its ordinary meanings, such as calculating, deriving, receiving (e.g., from an external device), and/or retrieving (e.g., from an array of storage elements). Unless expressly limited by its context, the term “selecting” is used to indicate any of its ordinary meanings, such as identifying, indicating, applying, and/or using at least one, and fewer than all, of a set of two or more. Where the term “comprising” is used in the present description and claims, it does not exclude other elements or operations. The term “based on” (as in “A is based on B”) is used to indicate any of its ordinary meanings, including the cases (i) “derived from” (e.g., “B is a precursor of A”), (ii) “based on at least” (e.g., “A is based on at least B”) and, if appropriate in the particular context, (iii) “equal to” (e.g., “A is equal to B” or “A is the same as B”). Similarly, the term “in response to” is used to indicate any of its ordinary meanings, including “in response to at least.”
References to a “location” of a microphone of a multi-microphone audio sensing device indicate the location of the center of an acoustically sensitive face of the microphone, unless otherwise indicated by the context. The term “channel” is used at times to indicate a signal path and at other times to indicate a signal carried by such a path, according to the particular context. Unless otherwise indicated, the term “series” is used to indicate a sequence of two or more items. The term “logarithm” is used to indicate the base-ten logarithm, although extensions of such an operation to other bases are within the scope of this disclosure. The term “frequency component” is used to indicate one among a set of frequencies or frequency bands of a signal, such as a sample (or “bin”) of a frequency-domain representation of the signal (e.g., as produced by a fast Fourier transform) or a subband of the signal (e.g., a Bark scale or mel scale subband).
Unless indicated otherwise, any disclosure of an operation of an apparatus having a particular feature is also expressly intended to disclose a method having an analogous feature (and vice versa), and any disclosure of an operation of an apparatus according to a particular configuration is also expressly intended to disclose a method according to an analogous configuration (and vice versa). The term “configuration” may be used in reference to a method, apparatus, and/or system as indicated by its particular context. The terms “method,” “process,” “procedure,” and “technique” are used generically and interchangeably unless otherwise indicated by the particular context. The terms “apparatus” and “device” are also used generically and interchangeably unless otherwise indicated by the particular context. The terms “element” and “module” are typically used to indicate a portion of a greater configuration. Unless expressly limited by its context, the term “system” is used herein to indicate any of its ordinary meanings, including “a group of elements that interact to serve a common purpose.” Any incorporation by reference of a portion of a document shall also be understood to incorporate definitions of terms or variables that are referenced within the portion, where such definitions appear elsewhere in the document, as well as any figures referenced in the incorporated portion.
The near-field may be defined as that region of space which is less than one wavelength away from a sound receiver (e.g., a microphone or array of microphones). Under this definition, the distance to the boundary of the region varies inversely with frequency. At frequencies of two hundred, seven hundred, and two thousand hertz, for example, the distance to a one-wavelength boundary is about 170, 49, and 17 centimeters, respectively. It may be useful instead to consider the near-field/far-field boundary to be at a particular distance from the microphone or array (e.g., fifty centimeters from the microphone or from a microphone of the array or from the centroid of the array, or one meter or 1.5 meters from the microphone or from a microphone of the array or from the centroid of the array).
Unless the context indicates otherwise, the term “offset” is used herein as an antonym of the term “onset.”
Task T200 calculates a value of the energy E(k,n) (also called “power” or “intensity”) for each frequency component k of segment n over a desired frequency range.
In an alternative implementation, method M100 is configured to receive the audio signal as a plurality of time-domain subband signals (e.g., from a filter bank). In such case, task T200 may be configured to calculate the energy based on a sum of the squares of the time-domain sample values of the corresponding subband (e.g., as the sum, or as the sum normalized by the number of samples (e.g., average squared value)). A subband scheme may also be used in a frequency-domain implementation of task T200 (e.g., by calculating a value of the energy for each subband as the average energy, or as the square of the average magnitude, of the frequency bins in the subband k). In any of these time-domain and frequency-domain cases, the subband division scheme may be uniform, such that each subband has substantially the same width (e.g., within about ten percent). Alternatively, the subband division scheme may be nonuniform, such as a transcendental scheme (e.g., a scheme based on the Bark scale) or a logarithmic scheme (e.g., a scheme based on the Mel scale). In one such example, the edges of a set of seven Bark scale subbands correspond to the frequencies 20, 300, 630, 1080, 1720, 2700, 4400, and 7700 Hz. Such an arrangement of subbands may be used in a wideband speech processing system that has a sampling rate of 16 kHz. In other examples of such a division scheme, the lower subband is omitted to obtain a six-subband arrangement and/or the high-frequency limit is increased from 7700 Hz to 8000 Hz. Another example of a nonuniform subband division scheme is the four-band quasi-Bark scheme 300-510 Hz, 510-920 Hz, 920-1480 Hz, and 1480-4000 Hz. Such an arrangement of subbands may be used in a narrowband speech processing system that has a sampling rate of 8 kHz.
It may be desirable for task T200 to calculate the value of the energy as a temporally smoothed value. For example, task T200 may be configured to calculate the energy according to an expression such as E(k,n)=βEu(k,n)+(1−β)E(k,n−1), where Eu(k,n) is an unsmoothed value of the energy calculated as described above; E(k,n) and E(k,n−1) are the current and previous smoothed values, respectively; and β is a smoothing factor. The value of smoothing factor β may range from 0 (maximum smoothing, no updating) to 1 (no smoothing), and typical values for smoothing factor β (which may be different for onset detection than for offset detection) include 0.05, 0.1, 0.2, 0.25, and 0.3.
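By way of illustration only, the following Python sketch shows one possible realization of task T200 as described above, computing a smoothed per-bin energy from the FFT of a frame. The function name, FFT size, and default smoothing factor are assumptions chosen for illustration rather than values prescribed by this description.

```python
import numpy as np

def frame_energy(frame, prev_energy=None, beta=0.2, fft_size=256):
    """Per-bin smoothed energy E(k, n) for one segment (task T200 sketch).

    frame       : time-domain samples of the current segment
    prev_energy : smoothed energies E(k, n-1) from the previous segment, or None
    beta        : smoothing factor (1 = no smoothing, 0 = no updating)
    """
    spectrum = np.fft.rfft(frame, n=fft_size)
    # Unsmoothed energy: squared magnitude of each frequency bin.
    e_unsmoothed = np.abs(spectrum) ** 2
    if prev_energy is None:
        return e_unsmoothed
    # First-order recursive smoothing: E(k,n) = beta*Eu(k,n) + (1-beta)*E(k,n-1).
    return beta * e_unsmoothed + (1.0 - beta) * prev_energy
```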
It may be desirable for the desired frequency range to extend above 2000 Hz. Alternatively or additionally, it may be desirable for the desired frequency range to include at least part of the top half of the frequency range of the audio signal (e.g., at least part of the range of from 2000 to 4000 Hz for an audio signal sampled at eight kHz, or at least part of the range of from 4000 to 8000 Hz for an audio signal sampled at sixteen kHz). In one example, task T200 is configured to calculate energy values over the range of from four to eight kilohertz. In another example, task T200 is configured to calculate energy values over the range of from 500 Hz to eight kHz.
Task T300 calculates a time derivative of energy for each frequency component of the segment. In one example, task T300 is configured to calculate the time derivative of energy as an energy difference ΔE(k,n) for each frequency component k of each frame n [e.g., according to an expression such as ΔE(k,n)=E(k,n)−E(k,n−1)].
It may be desirable for task T300 to calculate ΔE(k,n) as a temporally smoothed value. For example, task T300 may be configured to calculate the time derivative of energy according to an expression such as ΔE(k,n)=α[E(k,n)−E(k,n−1)]+(1−α)[ΔE(k,n−1)], where α is a smoothing factor. Such temporal smoothing may help to increase reliability of the onset and/or offset detection (e.g., by deemphasizing noisy artifacts). The value of smoothing factor α may range from 0 (maximum smoothing, no updating) to 1 (no smoothing), and typical values for smoothing factor α include 0.05, 0.1, 0.2, 0.25, and 0.3. For onset detection, it may be desirable to use little or no smoothing (e.g., to allow a quick response). It may be desirable to vary the value of smoothing factor α and/or β, for onset and/or for offset, based on an onset detection result.
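A corresponding sketch of task T300 is shown below, again for illustration only; the function name and default value of the smoothing factor are assumptions, with α = 1 corresponding to the little-or-no-smoothing case suggested for onset detection.

```python
import numpy as np

def energy_derivative(energy, prev_energy, prev_delta=None, alpha=1.0):
    """Smoothed time derivative of energy, dE(k, n) (task T300 sketch).

    energy, prev_energy : smoothed energies E(k, n) and E(k, n-1)
    prev_delta          : dE(k, n-1) from the previous segment, or None
    alpha               : smoothing factor (1 = no smoothing, e.g. for onsets)
    """
    raw_delta = np.asarray(energy) - np.asarray(prev_energy)
    if prev_delta is None or alpha >= 1.0:
        return raw_delta
    # dE(k,n) = alpha*[E(k,n) - E(k,n-1)] + (1-alpha)*dE(k,n-1)
    return alpha * raw_delta + (1.0 - alpha) * np.asarray(prev_delta)
```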
Task T400 produces an activity indication A(k,n) for each frequency component of the segment. Task T400 may be configured to calculate A(k,n) as a binary value, for example, by comparing ΔE(k,n) to an activation threshold.
It may be desirable for the activation threshold to have a positive value Tact-on for detection of speech onsets. In one such example, task T400 is configured to calculate an onset activation parameter Aon(k,n) according to an expression such as Aon(k,n)=1 if ΔE(k,n) is greater than Tact-on, and Aon(k,n)=0 otherwise.
It may be desirable for the activation threshold to have a negative value Tact-off for detection of speech offsets. In one such example, task T400 is configured to calculate an offset activation parameter Aoff(k,n) according to an expression such as Aoff(k,n)=1 if ΔE(k,n) is less than Tact-off, and Aoff(k,n)=0 otherwise.
In another such example, task T400 is configured to calculate Aoff(k,n) according to an expression such as Aoff(k,n)=−1 if ΔE(k,n) is less than Tact-off, and Aoff(k,n)=0 otherwise, such that the value of Aoff(k,n) may be negative.
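The following sketch illustrates one way task T400 might be realized for onsets and offsets; the function names and the optional signed offset indication are illustrative assumptions, and the threshold values are left as parameters to be tuned.

```python
import numpy as np

def onset_activation(delta_e, t_act_on):
    """A_on(k, n): 1 where the energy derivative exceeds the positive onset
    activation threshold, 0 elsewhere (task T400 sketch)."""
    return (np.asarray(delta_e) > t_act_on).astype(int)

def offset_activation(delta_e, t_act_off, signed=False):
    """A_off(k, n) for a negative offset activation threshold.

    With signed=True the indication takes the value -1 instead of 1, so that
    the combined segment indication for an offset is negative (see task T600).
    """
    active = (np.asarray(delta_e) < t_act_off).astype(int)
    return -active if signed else active
```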
Task T500 combines the activity indications for segment n to produce a segment activity indication S(n). In one example, task T500 is configured to calculate S(n) as the sum of the values A(k,n) for the segment. In another example, task T500 is configured to calculate S(n) as a normalized sum (e.g., the mean) of the values A(k,n) for the segment.
Task T600 compares the value of the combined activity indication S(n) to a transition detection threshold value Ttx. In one example, task T600 indicates the presence of a transition in voice activity state if S(n) is greater than (alternatively, not less than) Ttx. For a case in which the values of A(k,n) [e.g., of Aoff(k,n)] may be negative, as in the example above, task T600 may be configured to indicate the presence of a transition in voice activity state if S(n) is less than (alternatively, not greater than) the transition detection threshold value Ttx.
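A minimal sketch of tasks T500 and T600, assuming the normalized-sum (mean) form of the segment activity indication, is shown below; the function names and the handling of the signed-offset case are illustrative.

```python
import numpy as np

def segment_activity(activations):
    """S(n): normalized sum (mean) of the per-bin activity indications (task T500)."""
    return float(np.mean(activations))

def transition_detected(s_n, t_tx, negative=False):
    """Task T600 sketch: compare S(n) to the transition detection threshold T_tx.

    negative=True handles the signed-offset case, in which a transition is
    indicated when S(n) falls below the (negative) threshold.
    """
    return (s_n < t_tx) if negative else (s_n > t_tx)
```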
It may be desirable for a system (e.g., a portable audio sensing device) to perform an instance of method M100 that is configured to detect onsets and another instance of method M100 that is configured to detect offsets, with each instance of method M100 typically having different respective threshold values. Alternatively, it may be desirable for such a system to perform an implementation of method M100 which combines the instances.
It may be desirable to combine onset and offset indications as described above into a single metric. Such a combined onset/offset score may be used to support accurate tracking of speech activity (e.g., changes in near-end speech energy) over time, even in different noise environments and sound pressure levels. Use of a combined onset/offset score mechanism may also result in easier tuning of an onset/offset VAD.
A combined onset/offset score Son-off(n) may be calculated using values of segment activity indication S(n) as calculated for each segment by respective onset and offset instances of task T500 as described above.
A non-speech sound impulse, such as a slammed door, a dropped plate, or a hand clap, may also create responses that show consistent power changes over a range of frequencies.
Non-speech impulsive activations are likely to be consistent over a wider range of frequencies than a speech onset or offset, which typically exhibits a change in energy with respect to time that is continuous only over a range of about four to eight kHz. Consequently, a non-speech impulsive event is likely to cause a combined activity indication (e.g., S(n)) to have a value that is too high to be due to speech. Method M100 may be implemented to exploit this property to distinguish non-speech impulsive events from voice activity state transitions.
Non-speech impulsive noise may also be distinguished from speech by the speed of the onset. For example, the energy of a speech onset or offset in a frequency component tends to change more slowly over time than energy due to a non-speech impulsive event, and method M100 may be implemented to exploit this property (e.g., additionally or in the alternative to over-activation as described above) to distinguish non-speech impulsive events from voice activity state transitions.
Instance T410 of task T400 is arranged to calculate an impulsive activation value Aimp-d2(k,n) for each frequency component of segment n. Task T410 may be configured to calculate Aimp-d2(k,n) as a binary value, for example, by comparing Δ2E(k,n) to an impulsive activation threshold. In one such example, task T410 is configured to calculate an impulsive activation parameter Aimp-d2(k,n) according to an expression such as Aimp-d2(k,n)=1 if Δ2E(k,n) is greater than the impulsive activation threshold, and Aimp-d2(k,n)=0 otherwise.
Instance T510 of task T500 combines the impulsive activity indications for segment n to produce a segment impulsive activity indication Simp-d2(n). In one example, task T510 is configured to calculate Simp-d2(n) as the sum of the values Aimp-d2(k,n) for the segment. In another example, task T510 is configured to calculate Simp-d2(n) as the normalized sum (e.g., the mean) of the values Aimp-d2(k,n) for the segment.
Instance T620 of task T600 compares the value of the segment impulsive activity indication Simp-d2(n) to an impulse detection threshold value Timp-d2 and indicates detection of an impulsive event if Simp-d2(n) is greater than (alternatively, not less than) Timp-d2.
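By way of illustration, the following sketch combines tasks T410, T510, and T620; the definition of the second derivative as a difference of consecutive first derivatives, the function name, and the threshold parameters are assumptions for illustration.

```python
import numpy as np

def impulse_detection(delta_e, prev_delta_e, t_imp_act, t_imp_d2):
    """Sketch of tasks T410/T510/T620: flag a non-speech impulsive event.

    delta_e, prev_delta_e : per-bin energy derivatives dE(k, n) and dE(k, n-1)
    t_imp_act             : per-bin threshold on the second derivative of energy
    t_imp_d2              : segment-level impulse detection threshold
    """
    # Second time derivative of energy for each bin (one possible definition).
    d2_e = np.asarray(delta_e) - np.asarray(prev_delta_e)
    # T410: per-bin impulsive activation indications.
    a_imp = (d2_e > t_imp_act).astype(int)
    # T510: combine into a segment impulsive activity indication (mean).
    s_imp = float(np.mean(a_imp))
    # T620: indicate an impulsive event if the combined indication is too high.
    return s_imp > t_imp_d2
```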
Indication of speech onsets and/or offsets (or a combined onset/offset score) as produced by an implementation of method M100 as described herein may be used to improve the accuracy of a VAD stage and/or to quickly track energy changes in time. For example, a VAD stage may be configured to combine an indication of presence or absence of a transition in voice activity state, as produced by an implementation of method M100, with an indication as produced by one or more other VAD techniques (e.g., using AND or OR logic) to produce a voice activity detection signal.
Examples of other VAD techniques whose results may be combined with those of an implementation of method M100 include techniques that are configured to classify a segment as active (e.g., speech) or inactive (e.g., noise) based on one or more factors such as frame energy, signal-to-noise ratio, periodicity, autocorrelation of speech and/or residual (e.g., linear prediction coding residual), zero crossing rate, and/or first reflection coefficient. Such classification may include comparing a value or magnitude of such a factor to a threshold value and/or comparing the magnitude of a change in such a factor to a threshold value. Alternatively or additionally, such classification may include comparing a value or magnitude of such a factor, such as energy, or the magnitude of a change in such a factor, in one frequency band to a like value in another frequency band. It may be desirable to implement such a VAD technique to perform voice activity detection based on multiple criteria (e.g., energy, zero-crossing rate, etc.) and/or a memory of recent VAD decisions. One example of a voice activity detection operation whose results may be combined with those of an implementation of method M100 includes comparing highband and lowband energies of the segment to respective thresholds as described, for example, in section 4.7 (pp. 4-48 to 4-55) of the 3GPP2 document C.S0014-D, v3.0, entitled “Enhanced Variable Rate Codec, Speech Service Options 3, 68, 70, and 73 for Wideband Spread Spectrum Digital Systems,” October 2010 (available online at www-dot-3gpp-dot-org). Other examples include comparing a ratio of frame energy to average energy and/or a ratio of lowband energy to highband energy.
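For illustration only, the sketch below shows a simple single-channel detector of the general kind listed above, combining a frame-to-average energy ratio with a zero-crossing-rate check; it is not the cited 3GPP2 C.S0014-D algorithm, and the threshold values are placeholders.

```python
import numpy as np

def simple_frame_vad(frame, avg_energy, energy_ratio_thresh=2.0, zcr_thresh=0.25):
    """Illustrative single-channel VAD: a frame is classified as (voiced) speech
    when its energy is well above the running average energy and its
    zero-crossing rate is low."""
    frame = np.asarray(frame, dtype=float)
    energy = np.mean(frame ** 2)
    # Fraction of adjacent sample pairs whose signs differ.
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    energetic = energy > energy_ratio_thresh * max(avg_energy, 1e-12)
    return bool(energetic and zcr < zcr_thresh)
```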
A multichannel signal (e.g., a dual-channel or stereophonic signal), in which each channel is based on a signal produced by a corresponding one of an array of microphones, typically contains information regarding source direction and/or proximity that may be used for voice activity detection. Such a multichannel VAD operation may be based on direction of arrival (DOA), for example, by distinguishing segments that contain directional sound arriving from a particular directional range (e.g., the direction of a desired sound source, such as the user's mouth) from segments that contain diffuse sound or directional sound arriving from other directions.
One class of DOA-based VAD operations is based on the phase difference, for each frequency component of the segment in a desired frequency range, between the frequency component in each of two channels of the multichannel signal. Such a VAD operation may be configured to indicate voice detection when the relation between phase difference and frequency is consistent (i.e., when phase difference varies linearly with frequency) over a wide frequency range, such as 500-2000 Hz. Such a phase-based VAD operation, which is described in more detail below, is similar to method M100 in that presence of a point source is indicated by consistency of an indicator over multiple frequencies. Another class of DOA-based VAD operations is based on a time delay between an instance of a signal in each channel (e.g., as determined by cross-correlating the channels in the time domain).
Another example of a multichannel VAD operation is based on a difference between levels (also called gains) of channels of the multichannel signal. A gain-based VAD operation may be configured to indicate voice detection, for example, when the ratio of the energies of two channels exceeds a threshold value (indicating that the signal is arriving from a near-field source and from a desired one of the axis directions of the microphone array). Such a detector may be configured to operate on the signal in the frequency domain (e.g., over one or more particular frequency ranges) or in the time domain.
It may be desirable to combine onset/offset detection results (e.g., as produced by an implementation of method M100 or apparatus A100 or MF100) with results from one or more VAD operations that are based on differences between channels of a multichannel signal. For example, detection of speech onsets and/or offsets as described herein may be used to identify speech segments that are left undetected by gain-based and/or phase-based VADs. The incorporation of onset and/or offset statistics into a VAD decision may also support the use of a reduced hangover period for single- and/or multichannel (e.g., gain-based or phase-based) VADs.
Multichannel voice activity detectors that are based on inter-channel gain differences and single-channel (e.g., energy-based) voice activity detectors typically rely on information from a wide frequency range (e.g., a 0-4 kHz, 500-4000 Hz, 0-8 kHz, or 500-8000 Hz range). Multichannel voice activity detectors that are based on direction of arrival (DOA) typically rely on information from a low-frequency range (e.g., a 500-2000 Hz or 500-2500 Hz range). Given that voiced speech usually has significant energy content in these ranges, such detectors may generally be configured to reliably indicate segments of voiced speech.
Segments of unvoiced speech, however, typically have low energy, especially as compared to the energy of a vowel in the low-frequency range. These segments, which may include unvoiced consonants and unvoiced portions of voiced consonants, also tend to lack important information in the 500-2000 Hz range. Consequently, a voice activity detector may fail to indicate these segments as speech, which may lead to coding inefficiencies and/or loss of speech information (e.g., through inappropriate coding and/or overly aggressive noise reduction).
It may be desirable to obtain an integrated VAD stage by combining a speech detection scheme that is based on detection of speech onsets and/or offsets as indicated by spectrogram cross-frequency continuity (e.g., an implementation of method M100) with detection schemes that are based on other features, such as inter-channel gain differences and/or coherence of inter-channel phase differences. For example, it may be desirable to complement a gain-based and/or phase-based VAD framework with an implementation of method M100 that is configured to track speech onset and/or offset events, which primarily occur in the high frequencies. The individual features of such a combined classifier may complement each other, as onset/offset detection tends to be sensitive to different speech characteristics in different frequency ranges as compared to gain-based and phase-based VADs. The combination of a 500-2000 Hz phase-sensitive VAD and a 4000-8000 Hz high-frequency speech onset/offset detector, for example, allows preservation of low-energy speech features (e.g., at consonant-rich beginnings of words) as well as high-energy speech features. It may be desirable to design a combined detector to provide a continuous detection indication from an onset to the corresponding offset.
In order to effectively preserve low-energy speech components that occur at the ends of voiced segments, it may be desirable for a voice activity detector, such as a gain-based or phase-based multichannel voice activity detector or an energy-based single-channel voice activity detector, to include an inertial mechanism. One example of such a mechanism is logic that is configured to inhibit the detector from switching its output from active to inactive until the detector continues to detect inactivity over a hangover period of several consecutive frames (e.g., two, three, four, five, ten, or twenty frames). For example, such hangover logic may be configured to cause the VAD to continue to identify segments as speech for some period after the most recent detection.
It may be desirable for the hangover period to be long enough to capture any undetected speech segments. For example, it may be desirable for a gain-based or phase-based voice activity detector to include a hangover period of about two hundred milliseconds (e.g., about twenty frames) to cover speech segments that were missed due to low energy or to lack of information in the relevant frequency range. If the undetected speech ends before the hangover period expires, however, or if no low-energy speech component is actually present, the hangover logic may cause the VAD to pass noise during the remainder of the hangover period.
Speech offset detection may be used to reduce the length of VAD hangover periods at the ends of words. As noted above, it may be desirable to provide a voice activity detector with hangover logic. In such case, it may be desirable to combine such a detector with a speech offset detector in an arrangement to effectively terminate the hangover period in response to an offset detection (e.g., by resetting the hangover logic or otherwise controlling the combined detection result). Such an arrangement may be configured to support a continuous detection result until the corresponding offset may be detected. In a particular example, a combined VAD includes a gain and/or phase VAD with hangover logic (e.g., having a nominal 200-msec period) and an offset VAD that is arranged to cause the combined detector to stop indicating speech as soon as the end of the offset is detected. In such manner, an adaptive hangover may be obtained.
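A minimal sketch of such hangover logic with an adaptive hangover is shown below; the class name, the default hangover length of twenty frames (about 200 msec), and the update interface are illustrative assumptions.

```python
class HangoverVAD:
    """Sketch of hangover logic with an adaptive hangover: the raw detector
    decision is held active for up to `hangover_frames` frames, but an offset
    indication (e.g., from an offset instance of method M100) terminates the
    hangover immediately."""

    def __init__(self, hangover_frames=20):
        self.hangover_frames = hangover_frames
        self.counter = 0

    def update(self, raw_active, offset_detected=False):
        if raw_active:
            self.counter = self.hangover_frames   # (re)arm the hangover
            return True
        if offset_detected:
            self.counter = 0                      # adaptive hangover: cut it short
            return False
        if self.counter > 0:
            self.counter -= 1                     # keep indicating speech
            return True
        return False
```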
Task TM302 may be configured such that the detected transition is the start of an offset or, alternatively, the end of an offset.
Apparatus A210 also includes an implementation A110 of apparatus A100 as described herein that is configured to receive one channel (e.g., the primary channel) of the multichannel signal and to produce a corresponding onset indication TI10a and a corresponding offset indication TI10b. In one particular example, indications TI10a and TI10b are based on differences in the frequency range of 510 Hz to eight kHz. (It is expressly noted that in general, a speech onset and/or offset detector arranged to adapt a hangover period of a multichannel detector may operate on a channel that is different from the channels received by the multichannel detector.) In a particular example, onset indication TI10a and offset indication TI10b are based on energy differences in the frequency range of from 500 to 8000 Hz. Apparatus A210 also includes an implementation SG12 of signal generator SG10 that is configured to receive the VAD signals V10 and V20 and the transition indications TI10a and TI10b and to produce a corresponding combined VAD signal V30.
Additionally or in the alternative to adaptive hangover control, onset and/or offset detection may be used to vary a gain of another VAD signal, such as gain difference VAD signal V10 and/or phase difference VAD signal V20. For example, the VAD statistic may be multiplied (before thresholding) by a factor greater than one, in response to onset and/or offset indication. In one such example, a phase-based VAD statistic (e.g., a coherency measure) is multiplied by a factor ph_mult>1, and a gain-based VAD statistic (e.g., a difference between channel levels) is multiplied by a factor pd_mult>1, if onset detection or offset detection is indicated for the segment. Examples of values for ph_mult include 2, 3, 3.5, 3.8, 4, and 4.5. Examples of values for pd_mult include 1.2, 1.5, 1.7, and 2.0. Alternatively, one or more such statistics may be attenuated (e.g., multiplied by a factor less than one), in response to a lack of onset and/or offset detection in the segment. In general, any method of biasing the statistic in response to onset and/or offset detection state may be used (e.g., adding a positive bias value in response to detection or a negative bias value in response to lack of detection, raising or lowering a threshold value for the test statistic according to the onset and/or offset detection, and/or otherwise modifying a relation between the test statistic and the corresponding threshold).
It may be desirable to perform such multiplication on VAD statistics that have been normalized (e.g., as described with reference to expressions (N1)-(N4) below) and/or to adjust the threshold value for the VAD statistic when such biasing is selected. It is also noted that a different instance of method M100 may be used to generate onset and/or offset indications for such purpose than the instance used to generate onset and/or offset indications for combination into combined VAD signal V30. For example, a gain control instance of method M100 may use a different threshold value in task T600 (e.g., 0.01 or 0.02 for onset; 0.05, 0.07, 0.09, or 1.0 for offset) than a VAD instance of method M100.
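The following sketch illustrates the gain-biasing described above, assuming the phase-based and gain-based statistics have already been normalized; the function name is an assumption, and the default multiplier values are drawn from the example values listed.

```python
def bias_vad_statistics(phase_stat, gain_stat, transition_detected,
                        ph_mult=4.0, pd_mult=1.5):
    """Boost the phase-based and gain-based VAD statistics (before
    thresholding) when an onset or offset is indicated for the segment."""
    if transition_detected:
        return phase_stat * ph_mult, gain_stat * pd_mult
    return phase_stat, gain_stat
```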
Another VAD strategy that may be combined (e.g., by signal generator SG10) with those described herein is a single-channel VAD signal, which may be based on a ratio of frame energy to average energy and/or on lowband and highband energies. It may be desirable to bias such a single-channel VAD detector toward a high false alarm rate. Another VAD strategy that may be combined with those described herein is a multichannel VAD signal based on inter-channel gain difference in a low-frequency range (e.g., below 900 Hz or below 500 Hz). Such a detector may be expected to accurately detect voiced segments with a low rate of false alarms.
Combining results from different VAD techniques may also be used to decrease sensitivity of the VAD system to microphone placement. When a phone is held down (e.g., away from the user's mouth), for example, both phase-based and gain-based voice activity detectors may fail. In such case, it may be desirable for the combined detector to rely more heavily on onset and/or offset detection. An integrated VAD system may also be combined with pitch tracking.
Although gain-based and phase-based voice activity detectors may suffer when SNR is very low, noise is not usually a problem in high frequencies, such that an onset/offset detector may be configured to include a hangover interval (and/or a temporal smoothing operation) that may be increased when SNR is low (e.g., to compensate for the disabling of other detectors). A detector based on speech onset/offset statistics may also be used to allow more precise speech/noise segmentation by filling in the gaps between decaying and increasing gain/phase-based VAD statistics, thus enabling hangover periods for those detectors to be reduced.
An inertial approach such as hangover logic is not effective on its own for preserving the beginnings of utterances with words rich in consonants, such as “the”. A speech onset statistic may be used to detect speech onsets at word beginnings that are missed by one or more other detectors. Such an arrangement may include temporal smoothing and/or a hangover period to extend the onset transition indication until another detector may be triggered.
For most cases in which onset and/or offset detection is used in a multichannel context, it may be sufficient to perform such detection on the channel that corresponds to the microphone that is positioned closest to the user's mouth or is otherwise positioned to receive the user's voice most directly (also called the “close-talking” or “primary” microphone). In some cases, however, it may be desirable to perform onset and/or offset detection on more than one microphone, such as on both microphones in a dual-channel implementation (e.g., for a use scenario in which the phone is rotated to point away from the user's mouth).
High-frequency information may be important for speech intelligibility. Because air acts like a lowpass filter to the sounds that travel through it, the amount of high-frequency information that is picked up by a microphone will typically decrease as the distance between the sound source and the microphone increases. Similarly, low-energy speech tends to become buried in background noise as the distance between the desired speaker and the microphone increases. However, an indicator of energy activations that are coherent over a high-frequency range, as described herein with reference to method M100, may be used to track near-field speech even in the presence of noise that may obscure low-frequency speech characteristics, as this high-frequency feature may still be detectable in the recorded spectrum.
It may be desirable to use the results of a voice activity detection (VAD) operation for noise reduction and/or suppression. In one such example, a VAD signal is applied as a gain control on one or more of the channels (e.g., to attenuate noise frequency components and/or segments). In another such example, a VAD signal is applied to calculate (e.g., update) a noise estimate for a noise reduction operation (e.g., using frequency components or segments that have been classified by the VAD operation as noise) on at least one channel of the multichannel signal that is based on the updated noise estimate. Examples of such a noise reduction operation include a spectral subtraction operation and a Wiener filtering operation. Further examples of post-processing operations (e.g., residual noise suppression, noise estimate combination) that may be used with the VAD strategies disclosed herein are described in U.S. Pat. Appl. No. 61/406,382 (Shin et al., filed Oct. 25, 2010).
The acoustic noise in a typical environment may include babble noise, airport noise, street noise, voices of competing talkers, and/or sounds from interfering sources (e.g., a TV set or radio). Consequently, such noise is typically nonstationary and may have an average spectrum that is close to that of the user's own voice. A noise power reference signal as computed from a single microphone signal is usually only an approximate stationary noise estimate. Moreover, such computation generally entails a noise power estimation delay, such that corresponding adjustments of subband gains can only be performed after a significant delay. It may be desirable to obtain a reliable and contemporaneous estimate of the environmental noise.
Examples of noise estimates include a single-channel long-term estimate, based on a single-channel VAD, and a noise reference as produced by a multichannel BSS filter. A single-channel noise reference may be calculated by using (dual-channel) information from the proximity detection operation to classify components and/or segments of a primary microphone channel. Such a noise estimate may be available much more quickly than other approaches, as it does not require a long-term estimate. This single-channel noise reference can also capture nonstationary noise, unlike the long-term-estimate-based approach, which is typically unable to support removal of nonstationary noise. Such a method may provide a fast, accurate, and nonstationary noise reference. The noise reference may be smoothed (e.g., using a first-order smoother, possibly on each frequency component). The use of proximity detection may enable a device using such a method to reject nearby transients, such as the sound of a car passing into the forward lobe of the directional masking function.
A VAD indication as described herein may be used to support calculation of a noise reference signal. When the VAD indication indicates that a frame is noise, for example, the frame may be used to update the noise reference signal (e.g., a spectral profile of the noise component of the primary microphone channel). Such updating may be performed in a frequency domain, for example, by temporally smoothing the frequency component values (e.g., by updating the previous value of each component with the value of the corresponding component of the current noise estimate). In one example, a Wiener filter uses the noise reference signal to perform a noise reduction operation on the primary microphone channel. In another example, a spectral subtraction operation uses the noise reference signal to perform a noise reduction operation on the primary microphone channel (e.g., by subtracting the noise spectrum from the primary microphone channel). When the VAD indication indicates that a frame is not noise, the frame may be used to update a spectral profile of the signal component of the primary microphone channel, which profile may also be used by the Wiener filter to perform the noise reduction operation. The resulting operation may be considered to be a quasi-single-channel noise reduction algorithm that makes use of a dual-channel VAD operation.
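As an illustrative sketch of the noise-reference update and of a simple spectral subtraction using that reference, the fragment below operates on FFT-domain frames; the function names, smoothing factor, and spectral floor are assumptions and are not values prescribed by this description.

```python
import numpy as np

def update_noise_reference(noise_ref, frame_spectrum, is_noise, smooth=0.1):
    """Update a spectral noise reference on frames the VAD labels as noise,
    using first-order smoothing of each frequency component."""
    mag2 = np.abs(frame_spectrum) ** 2
    if is_noise:
        return (1.0 - smooth) * noise_ref + smooth * mag2
    return noise_ref

def spectral_subtraction(frame_spectrum, noise_ref, floor=0.05):
    """Simple power-domain spectral subtraction using the noise reference;
    `floor` limits attenuation to reduce musical-noise artifacts."""
    mag2 = np.abs(frame_spectrum) ** 2
    clean_mag2 = np.maximum(mag2 - noise_ref, floor * mag2)
    gain = np.sqrt(clean_mag2 / np.maximum(mag2, 1e-12))
    return gain * frame_spectrum
```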
An adaptive hangover as described above may be useful in a vocoder context to provide more accurate distinction between speech segments and noise while maintaining a continuous detection result during an interval of speech. In another context, however, it may be desirable to allow a more rapid transition of the VAD result (e.g., to eliminate hangovers) even if such action causes the VAD result to change state within the same interval of speech. In a noise reduction context, for example, it may be desirable to calculate a noise estimate, based on segments that the voice activity detector identifies as noise, and to use the calculated noise estimate to perform a noise reduction operation (e.g., a Wiener filtering or other spectral subtraction operation) on the speech signal. In such case, it may be desirable to configure the detector to obtain a more accurate segmentation (e.g., on a frame-by-frame basis), even if such tuning causes the VAD signal to change state while the user is talking.
An implementation of method M100 may be configured, whether alone or in combination with one or more other VAD techniques, to produce a binary detection result for each segment of the signal (e.g., high or “1” for voice, and low or “0” otherwise). Alternatively, an implementation of method M100 may be configured, whether alone or in combination with one or more other VAD techniques, to produce more than one detection result for each segment. For example, detection of speech onsets and/or offsets may be used to obtain a time-frequency VAD technique that individually characterizes different frequency subbands of the segment, based on the onset and/or offset continuity across that band. In such case, any of the subband division schemes mentioned above (e.g., uniform, Bark scale, Mel scale) may be used, and instances of tasks T500 and T600 may be performed for each subband. For a nonuniform subband division scheme, it may be desirable for each subband instance of task T500 to normalize (e.g., average) the number of activations for the corresponding subband such that, for example, each subband instance of task T600 may use the same threshold (e.g., 0.7 for onset, −0.15 for offset).
Such a subband VAD technique may indicate, for example, that a given segment carries speech in the 500-1000 Hz band, noise in the 1000-1200 Hz band, and speech in the 1200-2000 Hz band. Such results may be applied to increase coding efficiency and/or noise reduction performance. It may also be desirable for such a subband VAD technique to use independent hangover logic (and possibly different hangover intervals) in each of the various subbands. In a subband VAD technique, adaptation of a hangover period as described herein may be performed independently in each of the various subbands. A subband implementation of a combined VAD technique may include combining subband results for each individual detector or, alternatively, may include combining subband results from fewer than all detectors (possibly only one) with segment-level results from the other detectors.
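For illustration, a per-subband combination of the kind described above might be sketched as follows; the band edges and FFT bin spacing are inputs, the function name is an assumption, and the default threshold reflects the onset example (0.7) given above.

```python
import numpy as np

def subband_vad(activations, band_edges_hz, bin_hz, onset_thresh=0.7):
    """Time-frequency (subband) VAD sketch: average the per-bin activations
    within each subband and compare each average to a common threshold."""
    decisions = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        k_lo, k_hi = int(lo / bin_hz), int(hi / bin_hz)
        band = activations[k_lo:k_hi]
        score = float(np.mean(band)) if len(band) else 0.0
        decisions.append(score > onset_thresh)
    return decisions
```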
In one example of a phase-based VAD, a directional masking function is applied at each frequency component to determine whether the phase difference at that frequency corresponds to a direction that is within a desired range, and a coherency measure is calculated according to the results of such masking over the frequency range under test and compared to a threshold to obtain a binary VAD indication. Such an approach may include converting the phase difference at each frequency to a frequency-independent indicator of direction, such as direction of arrival or time difference of arrival (e.g., such that a single directional masking function may be used at all frequencies). Alternatively, such an approach may include applying a different respective masking function to the phase difference observed at each frequency.
In another example of a phase-based VAD, a coherency measure is calculated based on the shape of distribution of the directions of arrival of the individual frequency components in the frequency range under test (e.g., how tightly the individual DOAs are grouped together). In either case, it may be desirable to calculate the coherency measure in a phase VAD based only on frequencies that are multiples of a current pitch estimate.
For each frequency component to be examined, for example, the phase-based detector may be configured to estimate the phase as the inverse tangent (also called the arctangent) of the ratio of the imaginary term of the corresponding FFT coefficient to the real term of the FFT coefficient.
It may be desirable to configure a phase-based voice activity detector to determine directional coherence between channels of each pair over a wideband range of frequencies. Such a wideband range may extend, for example, from a low frequency bound of zero, fifty, one hundred, or two hundred Hz to a high frequency bound of three, 3.5, or four kHz (or even higher, such as up to seven or eight kHz or more). However, it may be unnecessary for the detector to calculate phase differences across the entire bandwidth of the signal. For many bands in such a wideband range, for example, phase estimation may be impractical or unnecessary. The practical evaluation of phase relationships of a received waveform at very low frequencies typically requires correspondingly large spacings between the transducers. Consequently, the maximum available spacing between microphones may establish a low frequency bound. On the other end, the distance between microphones should not exceed half of the minimum wavelength in order to avoid spatial aliasing. An eight-kilohertz sampling rate, for example, gives a bandwidth from zero to four kilohertz. The wavelength of a four-kHz signal is about 8.5 centimeters, so in this case, the spacing between adjacent microphones should not exceed about four centimeters. The microphone channels may be lowpass filtered in order to remove frequencies that might give rise to spatial aliasing.
It may be desirable to target specific frequency components, or a specific frequency range, across which a speech signal (or other desired signal) may be expected to be directionally coherent. It may be expected that background noise, such as directional noise (e.g., from sources such as automobiles) and/or diffuse noise, will not be directionally coherent over the same range. Speech tends to have low power in the range from four to eight kilohertz, so it may be desirable to forego phase estimation over at least this range. For example, it may be desirable to perform phase estimation and determine directional coherency over a range of from about seven hundred hertz to about two kilohertz.
Accordingly, it may be desirable to configure the detector to calculate phase estimates for fewer than all of the frequency components (e.g., for fewer than all of the frequency samples of an FFT). In one example, the detector calculates phase estimates for the frequency range of 700 Hz to 2000 Hz. For a 128-point FFT of a four-kilohertz-bandwidth signal, the range of 700 to 2000 Hz corresponds roughly to the twenty-three frequency samples from the tenth sample through the thirty-second sample. It may also be desirable to configure the detector to consider only phase differences for frequency components which correspond to multiples of a current pitch estimate for the signal.
A phase-based detector may be configured to evaluate a directional coherence of the channel pair, based on information from the calculated phase differences. The “directional coherence” of a multichannel signal is defined as the degree to which the various frequency components of the signal arrive from the same direction. For an ideally directionally coherent channel pair, the value of Δφ/f is equal to a constant k for all frequencies, where the value of k is related to the direction of arrival θ and the time delay of arrival τ. The directional coherence of a multichannel signal may be quantified, for example, by rating the estimated direction of arrival for each frequency component (which may also be indicated by a ratio of phase difference and frequency or by a time delay of arrival) according to how well it agrees with a particular direction (e.g., as indicated by a directional masking function), and then combining the rating results for the various frequency components to obtain a coherency measure for the signal.
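A sketch of one such phase-difference coherency computation is shown below, assuming a two-microphone FFT-domain front end; the microphone spacing, allowed angular range, frequency range, and coherency threshold are illustrative parameters rather than values taken from this description.

```python
import numpy as np

def phase_coherence_vad(spec_ch1, spec_ch2, bin_hz, f_lo=700.0, f_hi=2000.0,
                        mic_spacing_m=0.04, max_doa_deg=30.0, coherence_thresh=0.6):
    """Phase-based VAD sketch: per-bin phase differences are converted to a
    frequency-independent direction indicator, masked against an allowed
    angular range about the array axis, and the pass fraction is used as a
    coherency measure."""
    c = 343.0                                            # speed of sound, m/s
    k_lo, k_hi = int(f_lo / bin_hz), int(f_hi / bin_hz)
    bins = np.arange(k_lo, k_hi)
    freqs = bins * bin_hz
    # Phase of each bin is the arctangent of the imaginary over the real part.
    phase_diff = np.angle(spec_ch1[bins]) - np.angle(spec_ch2[bins])
    phase_diff = np.angle(np.exp(1j * phase_diff))       # wrap to [-pi, pi]
    # Frequency-independent indicator: time difference of arrival per bin.
    tdoa = phase_diff / (2.0 * np.pi * freqs)
    cos_theta = np.clip(tdoa * c / mic_spacing_m, -1.0, 1.0)
    doa_deg = np.degrees(np.arccos(cos_theta))
    passes = doa_deg <= max_doa_deg                      # directional mask
    coherency = float(np.mean(passes))
    return coherency > coherence_thresh
```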
It may be desirable to produce the coherency measure as a temporally smoothed value (e.g., to calculate the coherency measure using a temporal smoothing function). The contrast of a coherency measure may be expressed as the value of a relation (e.g., the difference or the ratio) between the current value of the coherency measure and an average value of the coherency measure over time (e.g., the mean, mode, or median over the most recent ten, twenty, fifty, or one hundred frames). The average value of a coherency measure may be calculated using a temporal smoothing function. Phase-based VAD techniques, including calculation and application of a measure of directional coherence, are also described in, e.g., U.S. Publ. Pat. Appls. Nos. 2010/0323652 A1 and 2011/038489 A1 (Visser et al.).
A gain-based VAD technique may be configured to indicate presence or absence of voice activity in a segment based on differences between corresponding values of a gain measure for each channel. Examples of such a gain measure (which may be calculated in the time domain or in the frequency domain) include total magnitude, average magnitude, RMS amplitude, median magnitude, peak magnitude, total energy, and average energy. It may be desirable to configure the detector to perform a temporal smoothing operation on the gain measures and/or on the calculated differences. As noted above, a gain-based VAD technique may be configured to produce a segment-level result (e.g., over a desired frequency range) or, alternatively, results for each of a plurality of subbands of each segment.
Gain differences between channels may be used for proximity detection, which may support more aggressive near-field/far-field discrimination, such as better frontal noise suppression (e.g., suppression of an interfering speaker in front of the user). Depending on the distance between microphones, a gain difference between balanced microphone channels will typically occur only if the source is within fifty centimeters or one meter.
A gain-based VAD technique may be configured to detect that a segment is from a desired source (e.g., to indicate detection of voice activity) when a difference between the gains of the channels is greater than a threshold value. The threshold value may be determined heuristically, and it may be desirable to use different threshold values depending on one or more factors such as signal-to-noise ratio (SNR), noise floor, etc. (e.g., to use a higher threshold value when the SNR is low). Gain-based VAD techniques are also described in, e.g., U.S. Publ. Pat. Appl. No. 2010/0323652 A1 (Visser et al.).
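As a non-limiting sketch of such a detector (in Python; the threshold value and the use of average energy in decibels as the gain measure are illustrative assumptions):

    import numpy as np

    def gain_vad(frame_primary, frame_secondary, threshold_db=6.0, eps=1e-12):
        # Gain measure per channel: average energy expressed in decibels
        # (one of several gain measures mentioned above).
        g1 = 10.0 * np.log10(np.mean(np.square(frame_primary)) + eps)
        g2 = 10.0 * np.log10(np.mean(np.square(frame_secondary)) + eps)
        # Indicate voice activity (a near-field desired source) when the
        # inter-channel gain difference exceeds the threshold; in practice
        # the threshold may be chosen heuristically and varied with SNR,
        # noise floor, etc.
        return (g1 - g2) > threshold_db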
It is also noted that one or more of the individual detectors in a combined detector may be configured to produce results on a different time scale than another of the individual detectors. For example, a gain-based, phase-based, or onset-offset detector may be configured to produce a VAD indication for each segment of length n, to be combined with results from a gain-based, phase-based, or onset-offset detector that is configured to produce a VAD indication for each segment of length m, when n is less than m.
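For example, results from two such detectors might be aligned and combined as in the following sketch (the AND-combination policy shown is an assumption; other combining rules may be used):

    def combine_time_scales(fine_decisions, coarse_decision):
        # fine_decisions: VAD indications for the segments of length n that
        # fall within one segment of length m; coarse_decision: the indication
        # for that length-m segment. Each fine result is gated by the coarse
        # result.
        return [fine and coarse_decision for fine in fine_decisions]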
Voice activity detection (VAD), which discriminates speech-active frames from speech-inactive frames, is an important part of speech enhancement and speech coding. As noted above, examples of single-channel VADs include SNR-based ones, likelihood ratio-based ones, and speech onset/offset-based ones, and examples of dual-channel VAD techniques include phase-difference-based ones and gain-difference-based (also called proximity-based) ones. Although dual-channel VADs are in general more accurate than single-channel techniques, they are typically highly dependent on the microphone gain mismatch and/or the angle at which the user is holding the phone.
It is not uncommon for a user of a portable audio sensing device (e.g., a headset or handset) to use the device in an orientation with respect to the user's mouth (also called a holding position or holding angle) that is not optimal and/or to vary the holding angle during use of the device. Such variation in holding angle may adversely affect the performance of a VAD stage.
One approach to dealing with a variable holding angle is to detect the holding angle (for example, using direction of arrival (DoA) estimation, which may be based on phase difference or time-difference-of-arrival (TDOA), and/or gain difference between microphones). Another approach to dealing with a variable holding angle that may be used alternatively or additionally is to normalize the VAD test statistics. Such an approach may be implemented to have the effect of making the VAD threshold a function of statistics that are related to the holding angle, without explicitly estimating the holding angle.
For online processing, a minimum statistics-based approach may be utilized. Normalization of the VAD test statistics based on maximum and minimum statistics tracking is proposed to maximize discrimination power even for situations in which the holding angle varies and the gain responses of the microphones are not well-matched.
The minimum-statistics algorithm, previously used for noise power spectrum estimation, is applied here for tracking of the minimum and maximum smoothed test statistics. For maximum test-statistic tracking, the same algorithm is used with (20 − st) as its input; that is, the maximum tracking may be derived from the minimum-statistic tracking method by subtracting the test statistic from a reference point (e.g., 20 dB) before tracking. The test statistics may then be warped so that the minimum smoothed statistic value maps to zero and the maximum smoothed statistic value maps to one, as follows:

st′ = (st − smin)/(sMAX − smin) ≷ ξ, (N1)

where st denotes the input test statistic, st′ denotes the normalized test statistic, smin denotes the tracked minimum smoothed test statistic, sMAX denotes the tracked maximum smoothed test statistic, ξ denotes the original (fixed) threshold, and a value of st′ greater than ξ indicates the presence of voice activity. It is noted that the normalized test statistic st′ may have a value outside of the [0, 1] range due to the smoothing.
It is expressly contemplated and hereby disclosed that the decision rule shown in expression (N1) may be implemented equivalently using the unnormalized test statistic st with an adaptive threshold as follows:
st ≷ ξ′ = (sMAX − smin)ξ + smin, (N2)
where (sMAX − smin)ξ + smin denotes an adaptive threshold ξ′ that is equivalent to using the fixed threshold ξ with the normalized test statistic st′.
Although a phase-difference-based VAD is typically immune to differences in the gain responses of the microphones, a gain-difference-based VAD is typically highly sensitive to such a mismatch. A potential additional benefit of this scheme is that the normalized test statistic st′ is independent of microphone gain calibration. For example, if the gain response of the secondary microphone is 1 dB higher than normal, then the current test statistic st, as well as the maximum statistic sMAX and the minimum statistic smin, will be 1 dB lower. Therefore, the normalized test statistic st′ will be the same.
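The following sketch illustrates one possible realization of the normalization of expression (N1) and the equivalent adaptive threshold of expression (N2) (the simple asymmetric trackers shown stand in for the minimum-statistics algorithm and are an assumption, as are the class name and parameter values):

    class NormalizedVad:
        def __init__(self, xi=0.5, leak=0.001):
            self.xi = xi        # original (fixed) threshold for the normalized statistic
            self.leak = leak    # slow-adaptation factor (assumed value)
            self.s_min = None   # tracked minimum smoothed test statistic
            self.s_max = None   # tracked maximum smoothed test statistic

        def update(self, s_t):
            if self.s_min is None:
                self.s_min = self.s_max = float(s_t)
            # The minimum follows decreases immediately and drifts upward
            # slowly; the maximum follows increases immediately and decays
            # downward slowly.
            self.s_min = min(s_t, (1.0 - self.leak) * self.s_min + self.leak * s_t)
            self.s_max = max(s_t, (1.0 - self.leak) * self.s_max + self.leak * s_t)

            span = max(self.s_max - self.s_min, 1e-12)
            s_norm = (s_t - self.s_min) / span              # expression (N1)
            # Equivalently, the unnormalized statistic s_t could be compared
            # against the adaptive threshold of expression (N2):
            # span * xi + s_min.
            return s_norm > self.xi

Because a constant gain offset shifts s_t, s_min, and s_max equally in this sketch, the normalized value is unaffected by such an offset, consistent with the discussion above.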
One issue with the normalization in expression (N1) is that, although the whole distribution is well normalized, the variance of the normalized score over noise-only intervals increases relatively for cases in which the range of the unnormalized test statistic is narrow. To limit this effect, the degree of normalization may be controlled with a parameter α, for example as

st′ = (st − smin)/(sMAX − smin)^(1−α) ≷ ξ, (N3)

or, equivalently,
st ≷ ξ′ = (sMAX − smin)^(1−α)ξ + smin, (N4)
where 0≦α≦1 is a parameter controlling a trade-off between normalizing the score and inhibiting an increase in the variance of the noise statistics. It is noted that the normalized statistic in expression (N3) is also independent of microphone gain variation, since sMAX−smin will be independent of microphone gains.
A value of α=0 will lead to the full normalization of expression (N1), while a value of α=1 will lead to an unnormalized test statistic (shifted by smin).
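A corresponding sketch of the softened normalization of expression (N3) (the function name and the default value of α are assumptions):

    def soft_normalize(s_t, s_min, s_max, alpha=0.5):
        # alpha = 0 reproduces the full normalization of expression (N1);
        # alpha = 1 leaves the statistic unnormalized except for a shift by s_min.
        return (s_t - s_min) / max(s_max - s_min, 1e-12) ** (1.0 - alpha)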
A test statistic based on the number of frequency bands that are activated (i.e., that show a sharp increase or decrease in energy) may also be normalized in this manner (e.g., as in expression (N1) or (N3) above). Alternatively, a threshold value corresponding to the number of activated frequency bands may be adapted (e.g., as in expression (N2) or (N4) above).
Additionally or alternatively, the normalization techniques described with reference to expressions (N1)-(N4) may also be used with one or more other VAD statistics (e.g., a low-frequency proximity VAD, onset and/or offset detection). It may be desirable, for example, to configure task T300 to normalize ΔE(k,n) using such techniques. Normalization may increase robustness of onset/offset detection to signal level and noise nonstationarity.
For onset/offset detection, it may be desirable to track the maximum and minimum of the square of ΔE(k,n) (e.g., to track only positive values). It may also be desirable to track the maximum as the square of a clipped value of ΔE(k,n) (e.g., as the square of max[0, ΔE(k,n)] for onset and the square of min[0, ΔE(k,n)] for offset). While negative values of ΔE(k,n) for onset and positive values of ΔE(k,n) for offset may be useful for tracking noise fluctuation in minimum statistic tracking, they may be less useful in maximum statistic tracking. It may be expected that the maximum of onset/offset statistics will decrease slowly and rise rapidly.
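The clipping described above might be realized as in the following sketch (the function name is an assumption; delta_e stands for the per-band energy-difference value ΔE(k,n) referenced in the text):

    def onset_offset_statistics(delta_e):
        # Keep only energy increases for the onset statistic and only
        # decreases for the offset statistic before squaring, for use in
        # maximum-statistic tracking.
        onset_stat = max(0.0, delta_e) ** 2     # square of max[0, deltaE(k, n)]
        offset_stat = min(0.0, delta_e) ** 2    # square of min[0, deltaE(k, n)]
        return onset_stat, offset_stat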
In general, the onset and/or offset and combined VAD strategies described herein (e.g., as in the various implementations of methods M100 and M200) may be implemented using one or more portable audio sensing devices that each has an array R100 of two or more microphones configured to receive acoustic signals. Examples of a portable audio sensing device that may be constructed to include such an array and to be used with such a VAD strategy for audio recording and/or voice communications applications include a telephone handset (e.g., a cellular telephone handset); a wired or wireless headset (e.g., a Bluetooth headset); a handheld audio and/or video recorder; a personal media player configured to record audio and/or video content; a personal digital assistant (PDA) or other handheld computing device; and a notebook computer, laptop computer, netbook computer, tablet computer, or other portable computing device. Other examples of audio sensing devices that may be constructed to include instances of array R100 and to be used with such a VAD strategy include set-top boxes and audio- and/or video-conferencing devices.
Each microphone of array R100 may have a response that is omnidirectional, bidirectional, or unidirectional (e.g., cardioid). The various types of microphones that may be used in array R100 include (without limitation) piezoelectric microphones, dynamic microphones, and electret microphones. In a device for portable voice communications, such as a handset or headset, the center-to-center spacing between adjacent microphones of array R100 is typically in the range of from about 1.5 cm to about 4.5 cm, although a larger spacing (e.g., up to 10 or 15 cm) is also possible in a device such as a handset or smartphone, and even larger spacings (e.g., up to 20, 25 or 30 cm or more) are possible in a device such as a tablet computer. In a hearing aid, the center-to-center spacing between adjacent microphones of array R100 may be as little as about 4 or 5 mm. The microphones of array R100 may be arranged along a line or, alternatively, such that their centers lie at the vertices of a two-dimensional (e.g., triangular) or three-dimensional shape. In general, however, the microphones of array R100 may be disposed in any configuration deemed suitable for the particular application.
During the operation of a multi-microphone audio sensing device as described herein, array R100 produces a multichannel signal in which each channel is based on the response of a corresponding one of the microphones to the acoustic environment. One microphone may receive a particular sound more directly than another microphone, such that the corresponding channels differ from one another to provide collectively a more complete representation of the acoustic environment than can be captured using a single microphone.
It may be desirable for array R100 to perform one or more processing operations on the signals produced by the microphones to produce multichannel signal S10.
It may be desirable for array R100 to produce the multichannel signal as a digital signal, that is to say, as a sequence of samples. Array R210, for example, includes analog-to-digital converters (ADCs) C10a and C10b that are each arranged to sample the corresponding analog channel. Typical sampling rates for acoustic applications include 8 kHz, 12 kHz, 16 kHz, and other frequencies in the range of from about 8 to about 16 kHz, although sampling rates as high as about 44 or 192 kHz may also be used. In this particular example, array R210 also includes digital preprocessing stages P20a and P20b that are each configured to perform one or more preprocessing operations (e.g., echo cancellation, noise reduction, and/or spectral shaping) on the corresponding digitized channel.
It is expressly noted that the microphones of array R100 may be implemented more generally as transducers sensitive to radiations or emissions other than sound. In one such example, the microphones of array R100 are implemented as ultrasonic transducers (e.g., transducers sensitive to acoustic frequencies greater than fifteen, twenty, twenty-five, thirty, forty, or fifty kilohertz or more).
Device D20 is configured to receive and transmit the RF communications signals via an antenna C30. Device D20 may also include a diplexer and one or more power amplifiers in the path to antenna C30. Chip/chipset CS10 is also configured to receive user input via keypad C10 and to display information via display C20. In this example, device D20 also includes one or more antennas C40 to support Global Positioning System (GPS) location services and/or short-range communications with an external device such as a wireless (e.g., Bluetooth™) headset. In another example, such a communications device is itself a Bluetooth headset and lacks keypad C10, display C20, and antenna C30.
Typically each microphone of array R100 is mounted within the device behind one or more small holes in the housing that serve as an acoustic port.
A headset may also include a securing device, such as ear hook Z30, which is typically detachable from the headset. An external ear hook may be reversible, for example, to allow the user to configure the headset for use on either ear. Alternatively, the earphone of a headset may be designed as an internal securing device (e.g., an earplug) which may include a removable earpiece to allow different users to use an earpiece of different size (e.g., diameter) for better fit to the outer portion of the particular user's ear canal.
In an example of a four-microphone instance of array R100, the microphones are arranged in a roughly tetrahedral configuration such that one microphone is positioned behind (e.g., about one centimeter behind) a triangle whose vertices are defined by the positions of the other three microphones, which are spaced about three centimeters apart. Potential applications for such an array include a handset operating in a speakerphone mode, for which the expected distance between the speaker's mouth and the array is about twenty to thirty centimeters.
Another example of a four-microphone instance of array R100 for a handset application includes three microphones at the front face of the handset (e.g., near the 1, 7, and 9 positions of the keypad) and one microphone at the back face (e.g., behind the 7 or 9 position of the keypad).
Additional placement examples for a portable audio sensing device having one or more microphones to be used with a switching strategy as disclosed herein include but are not limited to the following: visor or brim of a cap or hat; lapel, breast pocket, shoulder, upper arm (i.e., between shoulder and elbow), lower arm (i.e., between elbow and wrist), wristband or wristwatch. One or more microphones used in the strategy may reside on a handheld device such as a camera or camcorder.
The class of portable computing devices currently includes devices having names such as laptop computers, notebook computers, netbook computers, ultra-portable computers, tablet computers, mobile Internet devices, smartbooks, or smartphones. One type of such device has a slate or slab configuration as described above and may also include a slide-out keyboard.
Applications of a VAD strategy as disclosed herein are not limited to portable audio sensing devices.
It is expressly disclosed that the applicability of the systems, methods, and apparatus disclosed herein includes, and is not limited to, the particular examples described herein.
It is expressly contemplated and hereby disclosed that communications devices disclosed herein may be adapted for use in networks that are packet-switched (for example, wired and/or wireless networks arranged to carry audio transmissions according to protocols such as VoIP) and/or circuit-switched. It is also expressly contemplated and hereby disclosed that communications devices disclosed herein may be adapted for use in narrowband coding systems (e.g., systems that encode an audio frequency range of about four or five kilohertz) and/or for use in wideband coding systems (e.g., systems that encode audio frequencies greater than five kilohertz), including whole-band wideband coding systems and split-band wideband coding systems.
The foregoing presentation of the described configurations is provided to enable any person skilled in the art to make or use the methods and other structures disclosed herein. The flowcharts, block diagrams, and other structures shown and described herein are examples only, and other variants of these structures are also within the scope of the disclosure. Various modifications to these configurations are possible, and the generic principles presented herein may be applied to other configurations as well. Thus, the present disclosure is not intended to be limited to the configurations shown above but rather is to be accorded the widest scope consistent with the principles and novel features disclosed in any fashion herein, including in the attached claims as filed, which form a part of the original disclosure.
Those of skill in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, and symbols that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Important design requirements for implementation of a configuration as disclosed herein may include minimizing processing delay and/or computational complexity (typically measured in millions of instructions per second or MIPS), especially for computation-intensive applications, such as applications for voice communications at sampling rates higher than eight kilohertz (e.g., 12, 16, or 44 kHz).
Goals of a multi-microphone processing system as described herein may include achieving ten to twelve dB in overall noise reduction, preserving voice level and color during movement of a desired speaker, obtaining a perception that the noise has been moved into the background instead of an aggressive noise removal, dereverberation of speech, and/or enabling the option of post-processing (e.g., spectral masking and/or another spectral modification operation based on a noise estimate, such as spectral subtraction or Wiener filtering) for more aggressive noise reduction.
The various elements of an implementation of an apparatus as disclosed herein (e.g., apparatus A100, MF100, A110, A120, A200, A205, A210, and/or MF200) may be embodied in any hardware structure, or any combination of hardware with software and/or firmware, that is deemed suitable for the intended application. For example, such elements may be fabricated as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Any two or more, or even all, of these elements may be implemented within the same array or arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips).
One or more elements of the various implementations of the apparatus disclosed herein (e.g., apparatus A100, MF100, A110, A120, A200, A205, A210, and/or MF200) may also be implemented in part as one or more sets of instructions arranged to execute on one or more fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, digital signal processors, FPGAs (field-programmable gate arrays), ASSPs (application-specific standard products), and ASICs (application-specific integrated circuits). Any of the various elements of an implementation of an apparatus as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions, also called “processors”), and any two or more, or even all, of these elements may be implemented within the same such computer or computers.
A processor or other means for processing as disclosed herein may be fabricated as one or more electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips). Examples of such arrays include fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, DSPs, FPGAs, ASSPs, and ASICs. A processor or other means for processing as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions) or other processors. It is possible for a processor as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to a procedure of selecting a subset of channels of a multichannel signal, such as a task relating to another operation of a device or system in which the processor is embedded (e.g., an audio sensing device). It is also possible for part of a method as disclosed herein to be performed by a processor of the audio sensing device (e.g., task T200) and for another part of the method to be performed under the control of one or more other processors (e.g., task T600).
Those of skill in the art will appreciate that the various illustrative modules, logical blocks, circuits, and tests and other operations described in connection with the configurations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Such modules, logical blocks, circuits, and operations may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an ASIC or ASSP, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to produce the configuration as disclosed herein. For example, such a configuration may be implemented at least in part as a hard-wired circuit, as a circuit configuration fabricated into an application-specific integrated circuit, or as a firmware program loaded into non-volatile storage or a software program loaded from or into a data storage medium as machine-readable code, such code being instructions executable by an array of logic elements such as a general purpose processor or other digital signal processing unit. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A software module may reside in a non-transitory storage medium such as RAM (random-access memory), ROM (read-only memory), nonvolatile RAM (NVRAM) such as flash RAM, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, hard disk, a removable disk, or a CD-ROM; or in any other form of storage medium known in the art. An illustrative storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
It is noted that the various methods disclosed herein (e.g., method M100, M110, M120, M130, M132, M140, M142, and/or M200) may be performed by an array of logic elements such as a processor, and that the various elements of an apparatus as described herein may be implemented in part as modules designed to execute on such an array. As used herein, the term “module” or “sub-module” can refer to any method, apparatus, device, unit or computer-readable data storage medium that includes computer instructions (e.g., logical expressions) in software, hardware or firmware form. It is to be understood that multiple modules or systems can be combined into one module or system and one module or system can be separated into multiple modules or systems to perform the same functions. When implemented in software or other computer-executable instructions, the elements of a process are essentially the code segments to perform the related tasks, such as with routines, programs, objects, components, data structures, and the like. The term “software” should be understood to include source code, assembly language code, machine code, binary code, firmware, macrocode, microcode, any one or more sets or sequences of instructions executable by an array of logic elements, and any combination of such examples. The program or code segments can be stored in a processor-readable storage medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link.
The implementations of methods, schemes, and techniques disclosed herein may also be tangibly embodied (for example, in tangible, computer-readable features of one or more computer-readable storage media as listed herein) as one or more sets of instructions executable by a machine including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). The term “computer-readable medium” may include any medium that can store or transfer information, including volatile, nonvolatile, removable, and non-removable storage media. Examples of a computer-readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette or other magnetic storage, a CD-ROM/DVD or other optical storage, a hard disk, a fiber optic medium, a radio frequency (RF) link, or any other medium which can be used to store the desired information and which can be accessed. The computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic, RF links, etc. The code segments may be downloaded via computer networks such as the Internet or an intranet. In any case, the scope of the present disclosure should not be construed as limited by such embodiments.
Each of the tasks of the methods described herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. In a typical application of an implementation of a method as disclosed herein, an array of logic elements (e.g., logic gates) is configured to perform one, more than one, or even all of the various tasks of the method. One or more (possibly all) of the tasks may also be implemented as code (e.g., one or more sets of instructions), embodied in a computer program product (e.g., one or more data storage media, such as disks, flash or other nonvolatile memory cards, semiconductor memory chips, etc.), that is readable and/or executable by a machine (e.g., a computer) including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). The tasks of an implementation of a method as disclosed herein may also be performed by more than one such array or machine. In these or other implementations, the tasks may be performed within a device for wireless communications such as a cellular telephone or other device having such communications capability. Such a device may be configured to communicate with circuit-switched and/or packet-switched networks (e.g., using one or more protocols such as VoIP). For example, such a device may include RF circuitry configured to receive and/or transmit encoded frames.
It is expressly disclosed that the various methods disclosed herein may be performed by a portable communications device (e.g., a handset, headset, or portable digital assistant (PDA)), and that the various apparatus described herein may be included within such a device. A typical real-time (e.g., online) application is a telephone conversation conducted using such a mobile device.
In one or more exemplary embodiments, the operations described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, such operations may be stored on or transmitted over a computer-readable medium as one or more instructions or code. The term “computer-readable media” includes both computer-readable storage media and communication (e.g., transmission) media. By way of example, and not limitation, computer-readable storage media can comprise an array of storage elements, such as semiconductor memory (which may include without limitation dynamic or static RAM, ROM, EEPROM, and/or flash RAM), or ferroelectric, magnetoresistive, ovonic, polymeric, or phase-change memory; CD-ROM or other optical disk storage; and/or magnetic disk storage or other magnetic storage devices. Such storage media may store information in the form of instructions or data structures that can be accessed by a computer. Communication media can comprise any medium that can be used to carry desired program code in the form of instructions or data structures and that can be accessed by a computer, including any medium that facilitates transfer of a computer program from one place to another. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, and/or microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technology such as infrared, radio, and/or microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray Disc™ (Blu-Ray Disc Association, Universal City, Calif.), where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
An acoustic signal processing apparatus as described herein may be incorporated into an electronic device that accepts speech input in order to control certain operations, or that may otherwise benefit from separation of desired sounds from background noises, such as a communications device. Many applications may benefit from enhancing or separating clear desired sound from background sounds originating from multiple directions. Such applications may include human-machine interfaces in electronic or computing devices which incorporate capabilities such as voice recognition and detection, speech enhancement and separation, voice-activated control, and the like. It may be desirable to implement such an acoustic signal processing apparatus to be suitable in devices that only provide limited processing capabilities.
The elements of the various implementations of the modules, elements, and devices described herein may be fabricated as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or gates. One or more elements of the various implementations of the apparatus described herein may also be implemented in whole or in part as one or more sets of instructions arranged to execute on one or more fixed or programmable arrays of logic elements such as microprocessors, embedded processors, IP cores, digital signal processors, FPGAs, ASSPs, and ASICs.
It is possible for one or more elements of an implementation of an apparatus as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to an operation of the apparatus, such as a task relating to another operation of a device or system in which the apparatus is embedded. It is also possible for one or more elements of an implementation of such an apparatus to have structure in common (e.g., a processor used to execute portions of code corresponding to different elements at different times, a set of instructions executed to perform tasks corresponding to different elements at different times, or an arrangement of electronic and/or optical devices performing operations for different elements at different times).
The present application for patent claims priority to Provisional Application No. 61/327,009, Attorney Docket No. 100839P1, entitled “SYSTEMS, METHODS, AND APPARATUS FOR SPEECH FEATURE DETECTION,” filed Apr. 22, 2010, and assigned to the assignee hereof.