The present invention relates to a microphone apparatus and more specifically to a microphone apparatus with a beamformer that provides a directional audio output by combining microphone signals from multiple microphones. The present invention also relates to a headset with such a microphone apparatus. The invention may e.g. be used to enhance speech quality and intelligibility in headsets and other audio devices.
In the prior art, it is known to filter and combine signals from two or more spatially separated microphones to obtain a directional microphone signal. This form of signal processing is generally known as beamforming. The quality of beamformed microphone signals depends on the individual microphones having equal sensitivity characteristics across the relevant frequency range, which, however, is challenged by finite production tolerances and variations in the aging of components. The prior art therefore comprises various techniques for calibrating microphones or otherwise handling deviating microphone characteristics in beamformers.
Also, adaptive alignment of the beam of a beamformer to varying locations of a target sound source is known in the art. An example of an adaptive beamformer is the so-called “General Sidelobe Canceller” or GSC. The GSC separates the adaptive beamformer into two main processing paths. The first of these implements a standard fixed beamformer, with constraints on the desired signal. The second path implements an adaptive beamformer, which provides a set of filters that adaptively minimize the power in the output. The desired signal is eliminated from the second path by a blocking matrix, ensuring that it is the noise power that is minimized. The output of the second path (the noise) is subtracted from the output of the fixed beamformer to provide the desired signal with less noise. The GSC is an example of a so-called “Linearly Constrained Minimum Variance” or LCMV beamformer. Use of the GSC requires that the direction to the desired source is known.
Furthermore, a general problem for many adaptive beamformer algorithms is the determination of when the microphone input signals comprise the desired signal.
European Patent Application EP 18205678.8 discloses a microphone apparatus with a main beamformer operating on input audio signals from a first and a second microphone unit. The microphone apparatus comprises a suppression beamformer operating on the same two input audio signals to provide a suppression beamformer signal and a suppression filter controller that controls the suppression beamformer to minimize the suppression beamformer signal. The microphone apparatus further comprises a candidate beamformer operating on the same two input audio signals to provide a candidate beamformer signal and a candidate filter controller that controls the candidate beamformer to have a transfer function equaling the complex conjugate of a transfer function of the suppression beamformer. The microphone apparatus controls a transfer function of the main beamformer to converge towards the transfer function of the candidate beamformer in dependence on determined voice activity in the candidate beamformer signal. The disclosure does, however, only mention beamformers operating on input audio signals from two microphone units.
There is thus still a need for improvement.
It is an object of the present invention to provide an improved microphone apparatus without some disadvantages of prior art apparatuses. It is a further object of the present invention to provide an improved headset without some disadvantages of prior art headsets.
These and other objects of the invention are achieved by the invention defined in the independent claims and further explained in the following description. Further objects of the invention are achieved by embodiments defined in the dependent claims and in the detailed description of the invention.
Within this document, the singular forms “a”, “an”, and “the” specify the presence of a respective entity, such as a feature, an operation, an element or a component, but do not preclude the presence or addition of further entities. Likewise, the words “have”, “include” and “comprise” specify the presence of respective entities, but do not preclude the presence or addition of further entities. The term “and/or” specifies the presence of one or more of the associated entities. The steps or operations of any method disclosed herein need not be performed in the exact order disclosed, unless expressly stated so.
The invention will be explained in more detail below together with preferred embodiments and with reference to the drawings in which:
The figures are schematic and simplified for clarity, and they just show details essential to understanding the invention, while other details may be left out. Where practical, like reference numerals and/or labels are used for identical or corresponding parts.
The headset 1 shown in
In the following, the location of the user's mouth 7, i.e. the source of the voice sound V, relative to the sound inlets 8, 9, 10 may be referred to as “speaker location”. The headset 1 may preferably be designed such that when the headset is worn in the intended wearing position, a first one of the first and second sound inlets 8, 9 is closer to the user's mouth 7 than the respective other sound inlet 8, 9. The headset 1 may preferably comprise a microphone apparatus as described in the following. Other types of headsets may also comprise such a microphone apparatus, e.g. a headset as shown but with only one earphone 2, 3, a headset with the microphone arm 5 extending from the right-hand side earphone 2, a headset with wearing components other than a headband, such as e.g. a neck band, an ear hook or the like, or a headset without a microphone arm 5; in the latter case, the first and second sound inlets 8, 9 may be arranged e.g. at an earphone 2, 3 or on respective earphones 2, 3 of a headset. The third sound inlet 10 may alternatively be arranged otherwise, e.g. at the right-hand side earphone 2 or at the microphone arm 5. The third sound inlet 10 may e.g. be arranged to pick up sound near or in the concha and/or the ear canal of the user's ear.
The polar diagram 20 shown in
The microphone apparatus 30 shown in
The first microphone unit 11 provides a first input audio signal X in dependence on sound received at a first sound inlet 8, the second microphone unit 12 provides a second input audio signal Y in dependence on sound received at a second sound inlet 9 spatially separated from the first sound inlet 8, and the third microphone unit 13 provides a third input audio signal Q in dependence on sound received at a third sound inlet 10 spatially separated from the first sound inlet 8 and the second sound inlet 9. Where the microphone apparatus 30 is comprised by a small device, like a stand-alone microphone, a microphone arm 5 or an earphone 2, 3, the spatial separation between the sound inlets 8, 9, 10 is normally chosen within the range 5-30 mm, but larger or smaller spacing may be used.
The microphone apparatus 30 may preferably be designed to nudge or urge a user 6 to arrange the microphone apparatus 30 in a position with the first sound inlet 8 closer to the user's mouth 7 than the second sound inlet 9. Where the microphone apparatus 30 is comprised by a headset 1 with a microphone arm 5 extending from an earphone 3, the first and second sound inlets 8, 9 may thus e.g. be located at the microphone arm 5 with the first sound inlet 8 arranged further away from the earphone 3 than the second sound inlet 9.
The first, the second and the third microphone unit 11, 12, 13 constitute a main microphone array 14, with an output in the form of a vector. The main microphone array 14 thus provides as output a main input vector MM=(X, Y, Q) comprising as components the first, the second and the third input audio signal X, Y, Q.
The main beamformer 31 determines the main output audio signal SM as already known in the technical field of filter-sum beamformers. The main beamformer 31 applies a first main weight function BMX to the first input audio signal X to provide a first main weighted signal BMXX, applies a second main weight function BMY to the second input audio signal Y to provide a second main weighted signal BMYY, and applies a third main weight function BMQ to the third input audio signal Q to provide a third main weighted signal BMQQ, wherein the first, the second and the third main weight function BMX, BMY, BMQ differ from each other. The main beamformer 31 provides the main output audio signal SM by summing the first, the second and the third main weighted signal BMXX, BMYY, BMQQ.
The main beamformer 31 may perform the above beamformer computations in different ways and still arrive at the same result. In the present context, the action of applying a specific weight vector to a specific input vector shall be defined to include all computation algorithms and/or structures that yield the same result as performing element-by-element multiplication of the two vectors and summation of the multiplication results as described above. The main beamformer 31 thus provides the main output audio signal SM as a beamformed signal by applying a main weight vector BM=(BMX, BMY, BMQ) comprising as components the first, the second and the third main weight function BMX, BMY, BMQ to the main input vector MM.
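Purely for illustration, the weight-vector formulation above may be sketched in the frequency domain as follows; the array shapes, the random test data and the function name are assumptions made for this sketch and are not part of the disclosure:

```python
import numpy as np

def apply_weight_vector(M, B):
    # Element-wise multiply the input vector (signals x frequency bins) by
    # the weight vector and sum across the signal axis, yielding the
    # beamformed output bins.
    return np.sum(M * B, axis=0)

rng = np.random.default_rng(0)
n_bins = 8
# Main input vector M_M = (X, Y, Q) and main weight vector B_M, per bin.
M_M = rng.standard_normal((3, n_bins)) + 1j * rng.standard_normal((3, n_bins))
B_M = rng.standard_normal((3, n_bins)) + 1j * rng.standard_normal((3, n_bins))

S_M = apply_weight_vector(M_M, B_M)
# Same result as forming the three weighted signals and summing them:
assert np.allclose(S_M, B_M[0]*M_M[0] + B_M[1]*M_M[1] + B_M[2]*M_M[2])
```

The assertion confirms the equivalence stated above: applying the weight vector to the input vector yields the same result as summing the individually weighted signals.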
In the present context, a weight vector is an ordered set of weight functions, wherein the weight functions are ordered by the components of the input vector to which they apply, and wherein a weight function is a frequency-dependent transfer function. A weight function is normally a complex transfer function, and the weight functions of a weight vector normally differ from each other. Note, however, that a weight vector may be normalized so that one of its weight functions equals the unity function.
The main beamformer controller 32 repeatedly determines a main steering vector dM=(dMX, dMY, dMQ) and adaptively determines the main weight vector BM in dependence on the main steering vector dM and the main input vector MM to increase the relative amount of voice sound V from the user 6 in the main output audio signal SM, wherein the main steering vector dM indicates a desired, preferably undistorted, response of the main beamformer 31. The steering vector dM thus has a respective component dMX, dMY, dMQ for each of the components X, Y, Q of the main input vector MM. The steering vector dM is an ordered set of weight functions, wherein the weight functions are ordered by the components of the input vector to which they apply, and wherein a weight function is a frequency-dependent transfer function. A weight function is normally a complex transfer function, and the weight functions of the steering vector dM normally differ from each other.
The main beamformer controller 32 preferably operates according to the widely used Minimum Variance Distortionless Response (MVDR) beamformer algorithm. The MVDR beamformer algorithm is an adaptive beamforming algorithm whose goal is to minimize the variance of the beamformer output signal while maintaining an undistorted response towards a desired signal, i.e. the voice sound V. If the desired signal and the undesired noise are uncorrelated, then the variance of the beamformer output signal equals the sum of the variances of the desired signal and the noise. The MVDR beamformer algorithm seeks to minimize this sum, thereby reducing the effect of the noise, preferably by estimating a noise covariance matrix for the main input vector MM and using the estimated noise covariance matrix in the computation of the components BMX, BMY, BMQ of the main weight vector BM as well known in the art.
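The closed-form MVDR weight computation referred to above may be sketched per frequency bin as follows. This is the textbook solution w = R⁻¹d / (dᴴR⁻¹d); the toy steering vector and the identity noise covariance are illustrative assumptions only:

```python
import numpy as np

def mvdr_weights(R_noise, d):
    # Minimize the output noise power w^H R w subject to the distortionless
    # constraint w^H d = 1, using the closed-form MVDR solution.
    Rinv_d = np.linalg.solve(R_noise, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy example: three microphones, identity noise covariance.
d = np.array([1.0, 0.5, 0.25], dtype=complex)   # hypothetical steering vector
R = np.eye(3, dtype=complex)
w = mvdr_weights(R, d)
assert np.isclose(w.conj() @ d, 1.0)  # undistorted response towards d
```

The assertion verifies the distortionless constraint: sound arriving with the steering-vector response passes through the beamformer with unit gain, while uncorrelated noise power is minimized.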
The MVDR beamformer algorithm takes as inputs the steering vector dM and an estimated noise covariance matrix for the main input vector MM. The steering vector dM defines the desired response of the main beamformer 31. In the present context, the desired signal is the voice sound V, and the desired response thus equals the response of the main beamformer 31 when the main input vector MM only contains voice sound V of the user 6. The steering vector dM may thus easily be computed from the main input vector MM when it only contains voice sound V of the user 6. It is, however, difficult to determine when the main input vector MM only contains voice sound V of the user 6, and accurate determination of the steering vector dM is thus also difficult. Errors in the steering vector dM may cause the main beamformer 31 to distort the voice sound V in the main output audio signal SM, particularly if the errors represent deviations in the sensitivity of the microphone units 11, 12, 13 or in the locations of the sound inlets 8, 9, 10.
In the prior art, it is known to analyse the main output audio signal SM to detect voice sound V and to estimate the steering vector dM in dependence on the detected voice sound V. It is also known to detect voice sound V by computing the correlation between the main output audio signal SM and a microphone signal known to include mainly voice sound V. Both methods do, however, introduce an inherent instability and/or inaccuracy caused by the steering vector dM being, at least partly, circularly dependent on itself.
To mitigate the above-mentioned problems of MVDR and similar beamformers, the main beamformer controller 32 determines the steering vector dM in dependence on an auxiliary weight vector BF=(BFX, BFY) determined for the auxiliary beamformer 33 by the auxiliary beamformer controller 34. This may enable the main beamformer controller 32 to utilize further information derived independently of the steering vector dM and may thus improve stability and/or accuracy of the estimation of the steering vector dM, and may further reduce the computation load for the main beamformer controller 32. Furthermore, the auxiliary beamformer 33 preferably operates on a proper subset of the input audio signals X, Y, Q on which the main beamformer 31 operates, which may cause the auxiliary beamformer 33 to have fewer degrees of freedom than the main beamformer 31. This may further cause the auxiliary beamformer controller 34 to have an easier task in accurately determining the auxiliary weight vector BF than the main beamformer controller 32 has in accurately determining the steering vector dM. The main beamformer controller 32 may determine the steering vector dM in dependence on the auxiliary weight vector BF only during start-up of the beamformer, e.g. until the main weight vector BM has stabilized, which may easily be detected by the main beamformer controller 32 in known ways. If the main beamformer controller 32 subsequently detects disturbances, it may return to determining the steering vector dM in dependence on the auxiliary weight vector BF.
The auxiliary beamformer 33 applies a first auxiliary weight function BFX to the first input audio signal X to provide a first auxiliary weighted signal BFXX, applies a second auxiliary weight function BFY to the second input audio signal Y to provide a second auxiliary weighted signal BFYY, and provides an auxiliary beamformer signal SF by summing the first and the second auxiliary weighted signal BFXX, BFYY. The auxiliary beamformer 33 thus provides the auxiliary beamformer signal SF as a beamformed signal by applying the auxiliary weight vector BF comprising as components the first and the second auxiliary weight function BFX, BFY to an auxiliary input vector MA=(X, Y) comprising as components the first and the second input audio signal X, Y. The first and the second microphone unit 11, 12 thus constitute an auxiliary microphone array 15 that provides the auxiliary input vector MA=(X, Y) comprising as components the first and the second input audio signal X, Y. The auxiliary microphone array 15 preferably comprises a proper subset of the microphone units 11, 12, 13 of the main microphone array 14, meaning that the main microphone array 14 comprises at least one microphone unit 11, 12, 13 that is not comprised by the auxiliary microphone array 15. Correspondingly, the auxiliary input vector MA is preferably a proper subvector of the main input vector MM. The auxiliary beamformer controller 34 adaptively determines the auxiliary weight vector BF to increase the relative amount of voice sound V from the user 6 in the auxiliary beamformer signal SF.
The auxiliary voice detector 35 preferably applies a predefined voice measure function A to the auxiliary beamformer signal SF to determine an auxiliary voice measure VF of voice sound V in the auxiliary beamformer signal SF, wherein the voice measure function A is chosen to correlate with voice sound V in its input signal SF, and the auxiliary beamformer controller 34 may preferably determine the auxiliary weight vector BF in dependence on the auxiliary voice measure VF. The voice measure function A and the auxiliary voice measure VF are preferably frequency-dependent functions.
In some embodiments, the main beamformer controller 32 may determine the steering vector component dMX for the first input audio signal X to be equal to, or converge towards being equal to, the first auxiliary weight function BFX and determine the steering vector component dMY for the second input audio signal Y to be equal to, or converge towards being equal to, the second auxiliary weight function BFY. To complete the steering vector dM, the main beamformer controller 32 then only needs to determine the steering vector component dMQ for the third input audio signal Q. The main beamformer controller 32 may determine the steering vector component dMQ for the third input audio signal Q based on the main output audio signal SM as known in the prior art.
Alternatively, or additionally, the main beamformer controller 32 may determine the steering vector dM in dependence on the auxiliary voice measure VF. The auxiliary voice detector 35 may derive a user-voice activity signal VAD from the auxiliary voice measure VF such that the user-voice activity signal VAD indicates voice activity when the main input vector MM only, or mainly, contains voice sound V of the user 6, and the main beamformer controller 32 may determine one or more components dMX, dMY, dMQ of the steering vector dM from values of the main input vector MM collected during periods wherein the user-voice activity signal VAD indicates voice activity. The main beamformer controller 32 may further restrict modification of the steering vector dM to periods wherein the user-voice activity signal VAD indicates voice activity. The user-voice activity signal VAD may be a frequency-dependent function, and the main beamformer controller 32 may determine the steering vector dM in dependence on the auxiliary voice measure VF only for frequency bands or frequency bins wherein the user-voice activity signal VAD indicates voice activity and/or restrict other voice-based modification of the steering vector dM to such frequency bands or frequency bins. For other frequency bands or frequency bins, the main beamformer controller 32 may determine the steering vector dM based on the main output audio signal SM as known in the prior art.
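One possible way to collect steering-vector information only during voice-active periods, as described above, is sketched below. The sketch estimates a relative transfer function with respect to a reference microphone from voice-gated cross-spectra; the function name, array shapes and toy data are assumptions for illustration only:

```python
import numpy as np

def gated_steering_estimate(frames, vad, ref=0):
    # frames: (n_frames, n_mics, n_bins) complex spectra of the input vector.
    # vad: (n_frames, n_bins) boolean user-voice activity, per bin.
    # Accumulate cross-spectra with the reference mic only where VAD is set.
    n_frames, n_mics, n_bins = frames.shape
    num = np.zeros((n_mics, n_bins), complex)
    den = np.zeros(n_bins)
    for t in range(n_frames):
        a = vad[t]                                   # voice-active bins only
        num[:, a] += frames[t][:, a] * frames[t][ref, a].conj()
        den[a] += np.abs(frames[t][ref, a]) ** 2
    return num / np.maximum(den, 1e-12)

# Toy check: mic 1 is a fixed complex multiple h of mic 0 during voice frames,
# so the gated estimate should recover (1, h).
h = 0.8 + 0.4j
X0 = np.array([[1.0 + 1j, 2.0 - 1j]])               # (1 frame, 2 bins)
frames = np.stack([X0, h * X0], axis=1)             # (1, 2 mics, 2 bins)
vad = np.ones((1, 2), dtype=bool)
d = gated_steering_estimate(frames, vad)
assert np.allclose(d[0], 1.0) and np.allclose(d[1], h)
```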
The main beamformer controller 32 may further determine the main weight vector BM in dependence on the auxiliary voice measure VF. The auxiliary voice detector 35 may derive a no-user-voice activity signal NVAD from the auxiliary voice measure VF such that the no-user-voice activity signal NVAD indicates the absence of voice activity when the main input vector MM does not, or almost does not, contain voice sound V of the user 6, and the main beamformer controller 32 may determine the main weight vector BM in dependence on values of the main input vector MM collected during periods wherein the no-user-voice activity signal NVAD indicates the absence of voice activity. The main beamformer controller 32 may further restrict noise-based modification of the main weight vector BM to periods wherein the no-user-voice activity signal NVAD indicates the absence of voice activity. The no-user-voice activity signal NVAD may be a frequency-dependent function, and the main beamformer controller 32 may determine the main weight vector BM based on noise estimates only for frequency bands or frequency bins wherein the no-user-voice activity signal NVAD indicates the absence of voice activity and/or restrict noise-based modification of the main weight vector BM to such frequency bands or frequency bins.
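The noise-gated updating described above may, for instance, take the form of a recursive noise-covariance estimate that is only advanced in frequency bins where the no-user-voice signal indicates absence of voice. The following is a minimal sketch; the smoothing constant and names are assumptions:

```python
import numpy as np

def gated_noise_cov_update(R, frame, nvad, alpha=0.95):
    # frame: (n_mics, n_bins) spectrum; nvad: (n_bins,) True where the
    # no-user-voice signal indicates absence of voice. Bins with voice
    # activity keep their previous covariance estimate unchanged.
    outer = frame[:, None, :] * frame[None, :, :].conj()    # (m, m, bins)
    return np.where(nvad[None, None, :], alpha * R + (1 - alpha) * outer, R)

rng = np.random.default_rng(2)
frame = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
R = np.zeros((2, 2, 3), complex)
nvad = np.array([True, False, True])                 # bin 1 contains voice
R2 = gated_noise_cov_update(R, frame, nvad)
outer = frame[:, None, :] * frame[None, :, :].conj()
assert np.allclose(R2[:, :, 1], 0)                   # voice bin untouched
assert np.allclose(R2[:, :, 0], 0.05 * outer[:, :, 0])
```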
In some embodiments, the main beamformer controller 32 may determine the steering vector dM to be congruent with, or converge towards being congruent with, the auxiliary weight vector BF. In the present context, two vectors are considered congruent if and only if one of them can be obtained by a linear scaling of the respective other one, wherein linear scaling encompasses scaling by any factor or frequency-dependent function, which may be real or complex, including the factor one as well as factors and functions with negative values, and wherein components that are only present in one of the vectors are disregarded. In the embodiment shown, the steering vector dM is thus considered congruent with the auxiliary weight vector BF if and only if the steering vector component dMX for the first input audio signal X can be obtained by a linear scaling of the weight function BFX for the first input audio signal X and the steering vector component dMY for the second input audio signal Y can be obtained by a linear scaling of the weight function BFY for the second input audio signal Y using one and the same scaling factor or function. The main beamformer controller 32 may e.g. determine the steering vector dM based on the main output audio signal SM as known in the prior art and by applying the congruence constraint in the determination.
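For a single frequency bin, the congruence test defined above reduces to checking whether one complex vector is a complex-scalar multiple of the other. A minimal sketch (the function name, tolerance and test vectors are assumptions; zero-vector edge cases are handled crudely):

```python
import numpy as np

def congruent(u, v, tol=1e-9):
    # True iff v can be obtained from u by one complex scaling factor,
    # matching the per-bin congruence definition used above.
    i = np.argmax(np.abs(u))           # largest component of u
    if abs(u[i]) < tol:
        return np.allclose(v, 0, atol=tol)
    s = v[i] / u[i]                    # candidate scaling factor
    return np.allclose(v, s * u, atol=tol)

u = np.array([1 + 0j, 2 - 1j])
assert congruent(u, (0.5 - 0.25j) * u)            # scaled copy: congruent
assert not congruent(u, np.array([1 + 0j, 3 + 0j]))  # different shape: not
```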
The auxiliary beamformer controller 34 may determine the auxiliary weight vector BF based on any of the many known methods for determining an optimum two-microphone beamformer. Preferably, however, the auxiliary beamformer controller 34 determines the auxiliary weight vector BF by means of the auxiliary controller 40 described in the following.
The auxiliary controller 40 shown in
The auxiliary filter F is a linear filter with an auxiliary transfer function HF. The auxiliary filter F provides an auxiliary filtered signal FY in dependence on the second input audio signal Y, and the auxiliary mixer JF is a linear mixer that provides the auxiliary beamformer signal SF as a beamformed signal in dependence on the first input audio signal X and the auxiliary filtered audio signal FY. The auxiliary filter F and the auxiliary mixer JF thus cooperatively constitute the linear auxiliary beamformer 33 as generally known in the art.
The null filter Z is a linear filter with a null transfer function HZ. The null filter Z provides a null filtered signal ZY in dependence on the second input audio signal Y, and the null mixer JZ is a linear mixer that provides the null beamformer signal SZ as a beamformed signal in dependence on the first input audio signal X and the null filtered signal ZY. The null filter Z and the null mixer JZ thus cooperatively constitute the linear null beamformer 41 as generally known in the art.
The candidate filter W is a linear filter with a candidate transfer function HW. The candidate filter W provides a candidate filtered signal WY in dependence on the second input audio signal Y, and the candidate mixer JW is a linear mixer that provides the candidate beamformer signal SW as a beamformed signal in dependence on the first input audio signal X and the candidate filtered signal WY. The candidate filter W and the candidate mixer JW thus cooperatively constitute the linear candidate beamformer 44 as generally known in the art.
Depending on the intended use of the microphone apparatus 30, the first microphone unit 11 and the second microphone unit 12 may each comprise an omnidirectional microphone, in which case each of the auxiliary beamformer 33, the null beamformer 41 and the candidate beamformer 44 will cause their respective output signal SF, SZ, SW to have a first-order directional characteristic, such as e.g. a forward cardioid 24, a rearward cardioid 25, a supercardioid, a hypercardioid, a bidirectional characteristic or any of the other well-known first-order directional characteristics. A directional characteristic is normally used to suppress unwanted sound, i.e. noise, in order to enhance desired sound, such as voice sound V from a user 6 of a device 1, 30. The directional characteristic of a beamformed signal typically depends on the frequency of the signal.
Generally, when two beamformers operating on the same input vector have identical shape of their directional characteristics, then their weight vectors are congruent. If they are both implemented as equally configured single-filter beamformers operating on the same two microphone input signals, then the transfer functions of their filters will be equal.
In the following, it is assumed that each of the auxiliary mixer JF, the null mixer JZ and the candidate mixer JW simply subtracts respectively the auxiliary filtered signal FY, the null filtered signal ZY and the candidate filtered signal WY from the first input audio signal X to obtain respectively the auxiliary beamformer signal SF, the null beamformer signal SZ and the candidate beamformer signal SW. This corresponds to applying respectively the auxiliary weight vector BF, a null weight vector BZ and a candidate weight vector BW to the auxiliary input vector MA, wherein the auxiliary weight vector components (BFX, BFY) equal (1, −HF), the null weight vector components (BZX, BZY) equal (1, −HZ) and the candidate weight vector components (BWX, BWY) equal (1, −HW). In some embodiments, one or more of the mixers JF, JZ, JW may be configured to apply other or further linear operations, such as e.g. scaling, inversion and/or summing instead of subtraction, and in such embodiments, the respective weight vectors BF, BZ, BW may differ from the ones shown here, but will still be congruent with them. In this case, the respective transfer functions HF, HZ, HW of the beamformer filters will also be congruent with the ones shown here, meaning that the respective transfer function HF, HZ, HW can be obtained by a linear scaling of the one shown here, wherein linear scaling encompasses scaling by any non-frequency-dependent factor, which may be real or complex, including the factor one and factors with negative values. Also, two filters are considered congruent if and only if their transfer functions are congruent.
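The single-filter, subtract-type beamformer assumed above can be sketched per frequency bin as follows; the sketch confirms that subtracting the filtered second signal from the first is the same as applying the weight vector (1, −H) to (X, Y). The names and random test data are assumptions:

```python
import numpy as np

def single_filter_beamformer(X, Y, H):
    # Mixer subtracts the filtered second signal from the first signal,
    # i.e. applies the weight vector (1, -H) to the input vector (X, Y).
    return X - H * Y

rng = np.random.default_rng(1)
X = rng.standard_normal(4) + 1j * rng.standard_normal(4)
Y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
H = rng.standard_normal(4) + 1j * rng.standard_normal(4)

S = single_filter_beamformer(X, Y, H)
B = np.stack([np.ones(4), -H])                  # weight vector (1, -H)
assert np.allclose(S, (B * np.stack([X, Y])).sum(axis=0))
```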
The auxiliary beamformer controller 34 adaptively determines the auxiliary transfer function HF of the auxiliary filter F to increase the relative amount of voice sound V in the auxiliary beamformer signal SF. The auxiliary beamformer controller 34 preferably does this based on information derived from the first input audio signal X and the second input audio signal Y as described in the following. This adaptation of the auxiliary transfer function HF changes the directional characteristic of the auxiliary beamformer signal SF.
In a first step, the null beamformer controller 42 determines the null transfer function HZ of the null filter Z to minimize the null beamformer signal SZ. The prior art knows many algorithms for achieving such minimization, and the null beamformer controller 42 may in principle apply any such algorithm. A preferred embodiment of the null beamformer controller 42 is described further below. When the auxiliary input vector MA only or mainly comprises voice sound V from the user, or when the noise comprised by the auxiliary input vector MA is steady and spatially omnidirectional, then the minimization will cause the voice sound V to be decreased or suppressed in the null beamformer signal SZ. The null beamformer controller 42 thus adaptively determines the null weight vector BZ to decrease or minimize the relative amount of voice sound V from the user 6 in the null beamformer signal SZ.
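As noted, any standard minimization algorithm may be used here. For illustration only, an NLMS-style per-bin update rule that minimizes |SZ|² = |X − HZ·Y|² is sketched below (the step size, excitation and names are assumptions, not the claimed embodiment):

```python
import numpy as np

def nlms_null_update(H_Z, X, Y, mu=0.5, eps=1e-12):
    # One normalized-LMS step per frequency bin, driving the null beamformer
    # output S_Z = X - H_Z * Y towards minimum power.
    S_Z = X - H_Z * Y
    return H_Z + mu * S_Z * Y.conj() / (np.abs(Y) ** 2 + eps), S_Z

# Voice-only case: X = c * Y, so the optimum null filter is H_Z = c.
c = 0.8 - 0.3j                              # true relative transfer function
Y = np.array([1.0 + 0.5j])                  # steady excitation in one bin
X = c * Y
H = np.zeros(1, complex)
for _ in range(200):
    H, S_Z = nlms_null_update(H, X, Y)
assert np.allclose(H, c, atol=1e-6)         # H_Z converged to the optimum
```

At convergence the null beamformer output SZ is (near) zero for the voice component, as described above.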
In an ideal case with the first and second audio input signals X, Y having equal delays relative to the sound at the respective sound inlets 8, 9, with steady broad-spectrum voice sound V arriving from the far-field and exactly (and only) from the forward direction 22 and with steady and spatially omnidirectional noise, then the minimization by the null beamformer controller 42 would cause the null beamformer signal SZ to have a rearward cardioid directional characteristic 25 with a null in the forward direction 22, thus suppressing the voice sound V completely, even in the case where the first and the second microphone units 11, 12 have different sensitivities.
In a second step, the candidate beamformer controller 45 determines the candidate transfer function HW of the candidate filter W to equal the complex conjugate of the null transfer function HZ of the null filter Z. The candidate beamformer controller 45 thus determines the candidate weight vector BW to be equal to the complex conjugate of the null weight vector BZ. However, it suffices that the candidate beamformer controller 45 determines the candidate weight vector BW to be congruent with the complex conjugate of the null weight vector BZ.
In the ideal case mentioned above, determining the candidate weight vector BW to be congruent with the complex conjugate of the null weight vector BZ will cause the candidate beamformer signal SW to have the same shape of its directional characteristic as the null beamformer signal SZ would have with swapped locations of the first and second sound inlets 8, 9, i.e. a forward cardioid 24, which effectively amounts to spatially flipping the rearward cardioid 25 with respect to the forward and rearward directions 22, 23. In the ideal case, the forward cardioid 24 is indeed the optimum directional characteristic for increasing or maximizing the relative amount of voice sound V in the candidate beamformer signal SW. The requirement of complex conjugate congruence ensures that the flipping of the directional characteristic works independently of differences in the sensitivities of the first and the second microphone units 11, 12. For voice sound V arriving from the near-field, the directional characteristics obtained are not ideal cardioids, but the flipping by complex conjugation still works to maximize the voice sound V in the candidate beamformer signal SW. Determining the candidate weight vector BW to be congruent with the complex conjugate of the null weight vector BZ is an optimum solution. In some embodiments, however, it may suffice to determine the candidate weight vector BW to define a non-optimum candidate beamformer 44. For instance, the candidate beamformer controller 45 may estimate a null direction indicating the direction of the null of the directional characteristic 25 of the null beamformer 41 in dependence on the null weight vector BZ, and then determine the candidate weight vector BW to define a cardioid directional characteristic for the candidate beamformer 44 with a null oriented more or less opposite to the estimated null direction, such as e.g. in a direction at least 160° away from the estimated null direction.
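The flipping effect of the complex conjugation can be verified numerically for the ideal far-field case: a frontal source delays the rear microphone, a rear source delays the front microphone, and conjugating the filter that nulls the one nulls the other. The bin value and delay below are illustrative assumptions:

```python
import numpy as np

# For a far-field source in front, the rear-mic bin is a delayed copy of the
# front-mic bin: Y = exp(-1j * w * tau) * X for one frequency bin.
w_tau = 0.7                                # omega * tau for the chosen bin
X = 1.0 + 0.0j                             # front-mic bin value
Y = np.exp(-1j * w_tau) * X                # rear mic: delayed copy

H_Z = np.exp(1j * w_tau)                   # null filter cancelling the front
assert np.isclose(X - H_Z * Y, 0)          # S_Z = 0 for voice from the front

H_W = np.conj(H_Z)                         # candidate filter = conjugate
# A rear source delays the FRONT mic instead: X_r = exp(-1j*w*tau) * Y_r.
Y_r = 1.0 + 0.0j
X_r = np.exp(-1j * w_tau) * Y_r
assert np.isclose(X_r - H_W * Y_r, 0)      # candidate nulls the rear source
```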
In a third step, the auxiliary beamformer controller 34 estimates the performance of the candidate beamformer 44, estimates whether it performs better than the current auxiliary beamformer 33, and in that case, updates the auxiliary transfer function HF to equal the candidate transfer function HW. The auxiliary beamformer controller 34 thus adaptively determines the auxiliary weight vector BF to be equal to, or just be congruent with, the candidate weight vector BW. The auxiliary beamformer controller 34 may alternatively adaptively determine the auxiliary weight vector BF to converge towards being equal to, or just congruent with, the candidate weight vector BW. For the performance estimation, the candidate voice detector 46 applies the predefined measure function A to determine a candidate voice measure VW of voice sound V in the candidate beamformer signal SW. The auxiliary beamformer controller 34 thus adaptively determines the auxiliary weight vector BF in dependence on the candidate voice measure VW.
The auxiliary beamformer controller 34 may e.g. compare the candidate voice measure VW to the auxiliary voice measure VF and update the auxiliary weight vector BF when the candidate voice measure VW exceeds the auxiliary voice measure VF. Alternatively, or additionally, the auxiliary beamformer controller 34 may compare the candidate voice measure VW to a voice measure threshold, update the auxiliary weight vector BF when the candidate voice measure VW exceeds the voice measure threshold and then also update the voice measure threshold to equal the candidate voice measure VW.
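The two update criteria just described can be sketched as a single decision function. The names follow the text; combining both criteria in one conjunction, and returning the adopted measure as the new threshold, are illustrative choices.

```python
def maybe_update(BF, VF, BW, VW, threshold):
    """Sketch of the update rule above: adopt the candidate weight vector BW
    when its voice measure VW beats both the current auxiliary voice measure
    VF and the running voice measure threshold; otherwise keep BF.
    Returns the (possibly updated) weight vector and threshold."""
    if VW > VF and VW > threshold:
        return BW, VW          # adopt candidate and raise the threshold
    return BF, threshold       # keep the current auxiliary beamformer
```

A controller would call this once per adaptation cycle, feeding back the returned weight vector and threshold into the next cycle.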
For the performance estimation, the residual voice detector 43 may further apply the predefined measure function A to determine a residual voice measure VZ of voice sound V in the null beamformer signal SZ. The auxiliary beamformer controller 34 may adaptively determine the auxiliary weight vector BF in dependence on the candidate voice measure VW and the residual voice measure VZ.
The voice measure function A may be chosen as a function that simply correlates positively with an energy level or an amplitude of the signal to which it is applied. The output of the voice measure function A may thus e.g. equal an averaged energy level or an averaged amplitude of its input signal. In environments with high noise levels, however, more sophisticated voice measure functions A may be better suited, and a variety of such functions exists in the prior art, e.g. functions that also take frequency distribution into account.
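The simplest variant of the measure function A mentioned above, an averaged energy level, can be written in a few lines. This is a sketch of only that simplest choice; as noted, noisier environments call for more sophisticated measures that also weight the frequency distribution.

```python
import numpy as np

def voice_measure(frame):
    """Simplest form of the measure function A described above: the mean
    energy of a signal frame. Correlates positively with signal level and
    hence, in quiet conditions, with voice sound in the beamformer signal."""
    frame = np.asarray(frame, dtype=float)
    return float(np.mean(frame ** 2))
```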
Preferably, the auxiliary beamformer controller 34 determines a candidate beamformer score EW in dependence on the candidate voice measure VW and preferably further on the residual voice measure VZ. The auxiliary beamformer controller 34 may thus use the candidate beamformer score EW as an indication of the performance of the candidate beamformer 44. The auxiliary beamformer controller 34 may e.g. determine the candidate beamformer score EW as a positive monotonic function of the candidate voice measure VW alone, as a difference between the candidate voice measure VW and the residual voice measure VZ, or more preferably, as a ratio of the candidate voice measure VW to the residual voice measure VZ. In the latter case, the voice measure function A is preferably chosen as a non-zero function to avoid division errors. Using both the candidate voice measure VW and the residual voice measure VZ for determining the candidate beamformer score EW may help to ensure that the candidate beamformer score EW stays low when adverse conditions for adapting the auxiliary beamformer prevail, such as e.g. in situations with no speech and loud noise. The voice measure function A should be chosen to correlate positively with voice sound V in the respective beamformer signal SF, SW, SZ, and the above suggested computations of the candidate beamformer score EW should then also correlate positively with the performance of the candidate beamformer 44.
To increase the stability of the beamformer adaptation, the auxiliary beamformer controller 34 preferably determines the candidate beamformer score EW in dependence on averaged versions of the candidate voice measure VW and/or the residual voice measure VZ. The auxiliary beamformer controller 34 may e.g. determine the candidate beamformer score EW as a positive monotonic function of a sum of N consecutive values of the candidate voice measure VW, as a difference between a sum of N consecutive values of the candidate voice measure VW and a sum of N consecutive values of the residual voice measure VZ, or more preferably, as a ratio of a sum of N consecutive values of the candidate voice measure VW to a sum of N consecutive values of the residual voice measure VZ, where N is a predetermined positive integer number, e.g. a number in the range from 2 to 100.
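The preferred averaged-ratio variant above can be sketched as a small running accumulator. The names EW, VW, VZ follow the text; the sliding-window implementation with `deque` and the epsilon guard are illustrative choices.

```python
from collections import deque

class CandidateScore:
    """Running candidate beamformer score EW computed as the ratio of the
    sum of the last N candidate voice measures VW to the sum of the last N
    residual voice measures VZ (the averaged-ratio variant described above)."""

    def __init__(self, n=10, eps=1e-12):
        self.vw = deque(maxlen=n)   # last N candidate voice measures
        self.vz = deque(maxlen=n)   # last N residual voice measures
        self.eps = eps              # guards against division errors

    def update(self, vw, vz):
        """Add one pair of measures and return the current score EW."""
        self.vw.append(vw)
        self.vz.append(vz)
        return sum(self.vw) / (sum(self.vz) + self.eps)
```

The window length `n` plays the role of N; values in the stated range of 2 to 100 trade responsiveness against stability.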
The auxiliary voice detector 35 may determine an auxiliary beamformer score EF according to any of the principles described above for determining the candidate beamformer score EW, however using the auxiliary voice measure VF as input instead of the candidate voice measure VW. The auxiliary voice detector 35 may further determine a suppression beamformer signal by applying a suppression weight vector to the auxiliary input vector MA, wherein the suppression weight vector is equal to, or is congruent with, the complex conjugate of the auxiliary weight vector BF, determine a suppression voice measure by applying the voice measure function A to the suppression beamformer signal, and use the suppression voice measure instead of the null voice measure VZ as input for determining the auxiliary beamformer score EF. The auxiliary beamformer score EF may be a frequency-dependent function. The auxiliary beamformer score EF may thus reflect or represent the candidate beamformer score EW, however based on the “best” version of the candidate beamformer 44 as represented by the auxiliary beamformer 33.
The auxiliary beamformer controller 34 preferably determines the auxiliary weight vector BF in dependence on the candidate beamformer score EW exceeding the auxiliary beamformer score EF and/or a beamformer-update threshold EB, and preferably also increases the beamformer-update threshold EB in dependence on the candidate beamformer score EW. For instance, when determining that the candidate beamformer score EW exceeds the auxiliary beamformer score EF and/or the beamformer-update threshold EB, the auxiliary beamformer controller 34 may update the auxiliary filter F to equal, or be congruent with, the candidate filter W and may at the same time set the beamformer-update threshold EB equal to the determined candidate beamformer score EW. In order to accomplish a smooth transition, the auxiliary beamformer controller 34 may instead control the auxiliary transfer function HF of the auxiliary filter F to slowly converge towards being equal to, or just congruent with, the candidate transfer function HW of the candidate filter W. The auxiliary beamformer controller 34 may e.g. control the auxiliary transfer function HF of the auxiliary filter F to equal a weighted sum of the candidate transfer function HW of the candidate filter W and the current auxiliary transfer function HF of the auxiliary filter F. The auxiliary beamformer controller 34 may preferably further determine a reliability score R and determine the weights applied in the computation of the weighted sum based on the determined reliability score R, such that beamformer adaptation is faster when the reliability score R is high and vice versa. The auxiliary beamformer controller 34 may preferably determine the reliability score R in dependence on detecting adverse conditions for the beamformer adaptation, such that the reliability score R reflects the suitability of the acoustic environment for the adaptation. Examples of adverse conditions include highly tonal sounds, i.e. 
a concentration of signal energy in only a few frequency bands, very high values of the determined candidate beamformer score EW, wind noise and other conditions that indicate unusual acoustic environments. The auxiliary beamformer 33 is thus repeatedly updated to reflect or equal the “best” version of the candidate beamformer 44. The residual voice measure VZ, the candidate beamformer score EW and/or the beamformer-update threshold EB may be frequency-dependent functions, and the auxiliary beamformer controller 34 may update the auxiliary weight vector BF only for frequency bands or frequency bins wherein the candidate beamformer score EW exceeds the auxiliary beamformer score EF and/or the beamformer-update threshold EB.
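The reliability-weighted smooth transition described above amounts to a convex combination of the two transfer functions. The names HF, HW and R follow the text; mapping the reliability score R directly to the mixing weight is an illustrative assumption.

```python
import numpy as np

def smooth_update(HF, HW, R):
    """Move the auxiliary transfer function HF toward the candidate transfer
    function HW by a weighted sum whose weight grows with the reliability
    score R (clipped to [0, 1]): high R gives fast adaptation, low R slow."""
    alpha = float(np.clip(R, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(HF) + alpha * np.asarray(HW)
```

With R = 0 the auxiliary filter is left unchanged; with R = 1 it jumps directly to the candidate, reproducing the hard update as a limiting case.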
The auxiliary beamformer controller 34 preferably lowers the beamformer-update threshold EB in dependence on a trigger condition, such as e.g. power-on of the microphone apparatus 30, timer events, user input, absence of user voice V etc., in order to prevent the auxiliary filter F from remaining in an adverse state, e.g. after a change of the speaker location 7. The auxiliary beamformer controller 34 may e.g. reset the beamformer-update threshold EB to zero or a predefined low value at power-on or when detecting that the user presses a reset-button or manipulates the microphone arm 5, and/or e.g. regularly lower the beamformer-update threshold EB by a small amount, e.g. every five minutes. The auxiliary beamformer controller 34 may preferably further reset the auxiliary filter F to a precomputed transfer function HFQ when lowering the beamformer-update threshold EB, such that the microphone apparatus 30 learns the optimum directional characteristic anew from a suitable starting point each time. The precomputed transfer function HFQ may be predefined when designing or producing the microphone apparatus 30. Additionally, or alternatively, the precomputed transfer function HFQ may be computed from an average of transfer functions HF of the auxiliary filter F encountered during use of the microphone apparatus 30 and further be stored in a memory for reuse as precomputed transfer function HFQ after powering on the microphone apparatus 30, such that the microphone apparatus 30 normally starts up with a suitable starting point for learning the optimum directional characteristic.
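The threshold housekeeping just described, raised on adoption, decayed on timer events, reset on triggers, can be sketched as a small state holder. The decay step and reset value are illustrative assumptions; only the role of EB is taken from the text.

```python
class UpdateThreshold:
    """Sketch of the beamformer-update threshold EB lifecycle: raised when a
    candidate beamformer is adopted, lowered by a small amount on periodic
    timer events, and reset on trigger conditions such as power-on or a
    reset-button press."""

    def __init__(self, decay=0.05, reset_value=0.0):
        self.eb = reset_value
        self.decay = decay              # amount removed per timer event
        self.reset_value = reset_value  # floor / trigger reset value

    def on_update(self, score):
        """Raise the threshold to the adopted candidate beamformer score."""
        self.eb = score

    def on_timer(self):
        """Periodic decay, never dropping below the reset value."""
        self.eb = max(self.reset_value, self.eb - self.decay)

    def on_trigger(self):
        """Trigger condition, e.g. power-on or microphone-arm manipulation."""
        self.eb = self.reset_value
```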
The auxiliary voice detector 35 may derive the user-voice activity signal VAD from the auxiliary beamformer score EF or the candidate beamformer score EW as an indication of when the user 6 is speaking, and may further use the user-voice activity signal VAD for other signal processing, such as e.g. a squelch function or a subsequent noise reduction filter. Preferably, the auxiliary beamformer controller 34 provides the user-voice activity signal VAD in dependence on the auxiliary beamformer score EF or the candidate beamformer score EW exceeding a user-voice threshold EV. Preferably, the auxiliary voice detector 35 further provides a no-user-voice activity signal NVAD in dependence on the auxiliary beamformer score EF or the candidate beamformer score EW not exceeding a no-user-voice threshold EN, which is lower than the user-voice threshold EV. Using the auxiliary beamformer score EF or the candidate beamformer score EW for determination of a user-voice activity signal VAD and/or a no-user-voice activity signal NVAD may ensure improved stability of the signaling of user-voice activity, since the criterion used is in principle the same as the criterion for controlling the auxiliary beamformer. The user-voice threshold EV, the user-voice activity signal VAD, the no-user-voice threshold EN and/or the no-user-voice activity signal NVAD may be frequency-dependent functions.
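The two-threshold signaling above forms a hysteresis band: VAD asserts above the user-voice threshold EV, NVAD asserts at or below the lower no-user-voice threshold EN, and scores between the two assert neither. The threshold values in this sketch are illustrative assumptions.

```python
def voice_activity(score, EV=2.0, EN=1.2):
    """Derive the user-voice activity signal VAD and the no-user-voice
    activity signal NVAD from a beamformer score, using a user-voice
    threshold EV and a lower no-user-voice threshold EN as described above.
    Scores between EN and EV assert neither signal."""
    vad = score > EV
    nvad = not (score > EN)
    return vad, nvad
```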
In some embodiments, the candidate beamformer score EW may be determined from an averaged signal, and in that case, the auxiliary voice detector 35 preferably determines the user-voice activity signal VAD and/or the no-user-voice activity signal NVAD from the auxiliary beamformer score EF to obtain faster signaling of user-voice activity.
Each of the first, second and third microphone units 11, 12, 13 may preferably be configured as shown in
In addition to facilitating filter computation and signal processing in general, spectral transformation of the microphone signals SA provides an inherent signal delay to the input audio signals X, Y, Q that allows the beamformer weight functions and the linear filters F, Z, W to implement negative delays and thereby enable free orientation of the microphone apparatus 30 with respect to the location of the user's mouth 7. However, where desired, one or more of the beamformer controllers 32, 34, 42, 45 may be constrained to limit the range of directional characteristics. For instance, the null beamformer controller 42 may be constrained to ensure that any null in the directional characteristic of the null beamformer signal SZ falls within the half space defined by the forward direction 22. Many algorithms for implementing such constraints are known in the prior art.
The null beamformer controller 42 may preferably determine the null transfer function HZ based on accumulated power spectra derived from the first input audio signal X and the second input audio signal Y. This allows for applying well-known and effective algorithms, such as the finite impulse response (FIR) Wiener filter computation, to minimize the null beamformer signal SZ. If the null mixer JZ is implemented as a subtractor, then the null beamformer signal SZ will be minimized when the null filtered signal ZY equals the first input audio signal X. FIR Wiener filter computation was designed for solving exactly this type of problem, i.e. for estimating a filter that for a given input signal provides a filtered signal that equals a given target signal. If the mixer JZ is implemented as a subtractor, then the first input audio signal X and the second input audio signal Y can be used respectively as target signal and input signal to a FIR Wiener filter computation that then estimates the wanted null filter Z.
As shown in
The filter estimator FE preferably controls the null transfer function HZ using a FIR Wiener filter computation based on the first auto-power spectrum, the second auto-power spectrum and the first cross-power spectrum. Note that there are different ways to perform the Wiener filter computation and that they may be based on different sets of power spectra, however, all such sets are based, either directly or indirectly, on the first input audio signal X and the second input audio signal Y.
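One common way to perform such a Wiener filter computation, per frequency bin from averaged power spectra, is sketched below. The signal and spectrum names (X, Y, PYY, PXY, HZ) follow the text; averaging over spectral frames and the epsilon regularization are illustrative assumptions, and this frequency-domain form is only one of the variants the text allows.

```python
import numpy as np

def wiener_null_filter(X, Y, eps=1e-12):
    """Per-bin Wiener estimate of the null transfer function HZ: given
    spectral frames of the first input audio signal X (target) and the
    second input audio signal Y (input), accumulate the second auto-power
    spectrum PYY and the first cross-power spectrum PXY, and return the HZ
    that makes the null filtered signal Z*Y approximate X, minimizing the
    subtractor output X - Z*Y."""
    X = np.asarray(X)                          # shape (frames, bins)
    Y = np.asarray(Y)
    PYY = np.mean(np.abs(Y) ** 2, axis=0)      # second auto-power spectrum
    PXY = np.mean(X * np.conj(Y), axis=0)      # first cross-power spectrum
    return PXY / (PYY + eps)                   # HZ, one value per bin
```

When X is an exact filtered copy of Y, the estimate recovers the filter; with uncorrelated noise added, the averaging over frames suppresses its influence on HZ.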
Depending on the implementation of the null beamformer controller 42 and the null filter Z, the null beamformer controller 42 does not necessarily need to estimate the null transfer function HZ itself. For instance, if the null filter Z is a time-domain FIR filter, then the null beamformer controller 42 may instead estimate a set of filter coefficients that may cause the null filter Z to effectively apply the null transfer function HZ.
It will usually be intended that the auxiliary beamformer signal SF provided by the auxiliary beamformer 33 shall contain intelligible speech, and in this case the auxiliary beamformer 33 preferably operates on input audio signals X, Y which are not or only moderately averaged or otherwise low-pass filtered. Conversely, since the main purpose of the null beamformer signal SZ and the candidate beamformer signal SW may be to allow adaptation of the auxiliary beamformer 33, the null beamformer 41 and the candidate beamformer 44 may preferably operate on averaged signals, e.g. in order to reduce computation load. Furthermore, a better adaptation to speech signal variations may be achieved by estimating the null filter Z and the candidate filter W based on averaged versions of the input audio signals X, Y.
Since each of the first auto-power spectrum PXX, the second auto-power spectrum PYY and the cross-power spectrum PXY may in principle be considered an average of the respective spectral signal X, Y, these power spectra may also be used for determining the candidate voice measure VW and/or the residual voice measure VZ. Correspondingly, the null filter Z may preferably take the second auto-power spectrum PYY as input and thus provide the null filtered signal ZY as an inherently averaged signal, the null mixer JZ may take the first auto-power spectrum PXX and the inherently averaged null filtered signal ZY as inputs and thus provide the null beamformer signal SZ as an inherently averaged signal, and the residual voice detector 43 may take the inherently averaged null beamformer signal SZ as an input and thus provide the residual voice measure VZ as an inherently averaged signal.
Similarly, the candidate filter W may preferably take the second auto-power spectrum PYY as input and thus provide the candidate filtered signal WY as an inherently averaged signal, the candidate mixer JW may take the first auto-power spectrum PXX and the inherently averaged candidate filtered signal WY as inputs and thus provide the candidate beamformer signal SW as an inherently averaged signal, and the candidate voice detector 46 may take the inherently averaged candidate beamformer signal SW as an input and thus provide the candidate voice measure VW as an inherently averaged signal.
The first auto-power accumulator PAX, the second auto-power accumulator PAY and the cross-power accumulator CPA preferably accumulate the respective power spectra over time periods of 50-500 ms, more preferably between 150 and 250 ms, to enable reliable and stable determination of the voice measures VW, VZ.
The candidate beamformer controller 45 may preferably determine the candidate transfer function HW by computing the complex conjugate of the null transfer function HZ. For a filter in the binned frequency domain, complex conjugation may be accomplished by complex conjugation of the filter coefficient for each frequency bin. In the case that the configuration of the candidate mixer JW differs from the configuration of the null mixer JZ, then the candidate beamformer controller 45 may further apply a linear scaling to ensure correct functioning of the candidate beamformer 44. The candidate beamformer controller 45 may generally determine the candidate weight vector BW as the complex conjugate of the null weight vector BZ.
In the case that the auxiliary filter F, the null filter Z and the candidate filter W are implemented as FIR time-domain filters, then the null transfer function HZ may not be explicitly available in the microphone apparatus 30, and then the candidate beamformer controller 45 may compute the candidate filter W as a copy of the null filter Z, however with reversed order of filter coefficients and with reversed delay. Since negative delays cannot be implemented in the time domain, reversing the delay of the resulting candidate filter W may require that an adequate delay has been added to the signal used as X input to the candidate mixer JW. In any case, one or both of the first and second microphone units 11, 12 may comprise a delay unit (not shown) in addition to—or instead of—the spectral transformer FT in order to delay the respective input audio signal X, Y.
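The time-domain equivalence just described, conjugation in frequency corresponds to reversing the coefficient order of a real FIR filter, up to a linear-phase delay term, can be verified numerically. The example coefficients below are illustrative; the relation itself holds for any real coefficient vector.

```python
import numpy as np

def conjugate_fir(z_coeffs):
    """Time-domain counterpart of per-bin complex conjugation: for a FIR
    filter with real coefficients, conjugating the frequency response is
    equivalent to reversing the coefficient order, with the reversed delay
    compensated externally as noted above."""
    return z_coeffs[::-1]

# Check the equivalence on the frequency responses of an example filter:
z = np.array([0.5, 0.3, -0.2])       # illustrative null filter coefficients
Hz = np.fft.fft(z, 8)                # response of the original filter
Hw = np.fft.fft(conjugate_fir(z), 8) # response of the reversed filter
```

`Hw` equals the complex conjugate of `Hz` multiplied by a pure delay term exp(-jω(N-1)), which is the "reversed delay" the text says must be absorbed by delaying the X input to the candidate mixer JW.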
In the case that the first and second audio input signals X, Y have different delays relative to the sound at the respective sound inlets 8, 9, then the flipping of the directional characteristic will typically produce a directional characteristic of the candidate beamformer 44 with a different type of shape than the directional characteristic of the null beamformer 41. Depending on the delay difference, the flipping may e.g. produce a forward hypercardioid characteristic from a rearward cardioid 25. This effect may be utilized to adapt the candidate beamformer 44 to specific usage scenarios, e.g. specific spatial noise distributions and/or specific relative speaker locations 7. The auxiliary beamformer controller 34 and/or the candidate beamformer controller 45 may be adapted to control a delay provided by one or more of the spectral transformers FT and/or the delay units, e.g. in dependence on a device setting, on user input and/or on results of further signal processing.
In some embodiments, like e.g. in the headset 1 shown in
In embodiments with main microphone arrays 14 having three or more, such as e.g. four, five, six, seven, eight or even more microphone units 11, 12, 13 with sound inlets 8, 9, 10 that are not all arranged on the straight line 21, the microphone apparatus 30 may comprise multiple auxiliary controllers 40, such as e.g. two, three, four or even more, and the main beamformer controller 32 may determine the steering vector dM in dependence on two or more auxiliary weight vectors BF determined for respective auxiliary beamformers 33 of the multiple auxiliary controllers 40. In such embodiments, the microphone apparatus 30 should generally be designed such that if any two auxiliary beamformers 33 operate on microphone inputs X, Y, Q from microphone units 11, 12, 13 with sound inlets 8, 9, 10 that are not arranged on one and the same straight line 21, then these auxiliary beamformers 33 should not share any of their microphone inputs X, Y, Q. Otherwise, the main beamformer controller 32 may fail to accurately determine the steering vector dM. This may e.g. apply to main microphone arrays 14 having microphone units 11, 12, 13 with sound inlets 8, 9, 10 on both earphones 2, 3 of a headset 1.
The auxiliary beamformer 33 will normally perform better when the auxiliary microphone array 15 is oriented such that the straight line 21 extends approximately in the direction of the user's mouth 7. The microphone apparatus 30 should thus preferably be designed to nudge or urge a user 6 to arrange the auxiliary microphone array 15 accordingly, e.g. like in the headset 1 shown in
Although the examples disclosed herein are based on a main beamformer 31 configured as a MVDR beamformer, the principles of the present disclosure may be adapted to other adaptive beamformer types that require a steering vector, a user-voice activity signal VAD and/or a no-user-voice activity signal NVAD for proper operation.
Functional blocks of digital circuits may be implemented in hardware, firmware or software, or any combination hereof. Digital circuits may perform the functions of multiple functional blocks in parallel and/or in interleaved sequence, and functional blocks may be distributed in any suitable way among multiple hardware units, such as e.g. signal processors, microcontrollers and other integrated circuits.
The detailed description given herein and the specific examples indicating preferred embodiments of the invention are intended to enable a person skilled in the art to practice the invention and should thus be regarded mainly as an illustration of the invention. The person skilled in the art will be able to readily contemplate further applications of the present invention as well as advantageous changes and modifications from this description without deviating from the scope of the invention. Any such changes or modifications mentioned herein are meant to be non-limiting for the scope of the invention.
The invention is not limited to the embodiments disclosed herein, and the invention may be embodied in other ways within the subject-matter defined in the following claims. As an example, features of the described embodiments may be combined arbitrarily, e.g. in order to adapt devices according to the invention to specific requirements.
Any reference numerals and labels in the claims are intended to be non-limiting for the scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
18215941.8 | Dec 2018 | EP | regional |