DEVICE AND METHOD FOR DIRECTION DEPENDENT SPATIAL NOISE REDUCTION

Abstract
A device and a method reduce direction dependent spatial noise. The device includes a plurality of microphones for measuring an acoustic input signal from an acoustic source. The plurality of microphones form at least one monaural pair and at least one binaural pair. Directional signal processing circuitry is provided for obtaining, from the input signal, at least one monaural directional signal and at least one binaural directional signal. A target signal level estimator estimates a target signal level by combining at least one of the monaural directional signals and at least one of the binaural directional signals, which at least one monaural directional signal and at least one binaural directional signal mutually have a maximum response in a direction of the acoustic source. A noise signal level estimator estimates a noise signal level by combining at least one of the monaural directional signals and at least one of the binaural directional signals, which at least one monaural directional signal and at least one binaural directional signal mutually have a minimum sensitivity in the direction of the acoustic source.
Description

The present invention relates to direction dependent spatial noise reduction, for example, for use in binaural hearing aids.


For non-stationary signals such as speech in a complex hearing environment with multiple speakers, directional signal processing is vital to improve speech intelligibility by enhancing the desired signal. For example, traditional hearing aids utilize simple differential microphones to focus on targets in front of or behind the user. In many hearing situations, however, the desired speaker azimuth deviates from these predefined directions. Therefore, directional signal processing in which the focus direction is steerable would be more effective at enhancing the desired source.


Recently approaches for binaural beamforming have been presented. In

    • T. Rohdenburg, V. Hohmann, B. Kollmeier, “Robustness Analysis of Binaural Hearing Aid Beamformer Algorithms by Means of Objective Perceptual Quality Measures,” in 2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pp.315-318, October 2007


      a binaural beamformer was designed using a configuration with two 3-channel hearing aids. The beamformer constraints were set based on the desired look direction to achieve a steerable beam with the use of three microphones in each hearing aid, which is impractical in state-of-the-art hearing aids. The system performance was shown to be dependent on the propagation model used in formulating the steering vector. Binaural multi-channel Wiener filtering (MWF) was used in
    • S. Doclo, M. Moonen, T. Van den Bogaert, J. Wouters, “Reduced-Bandwidth and Distributed MWF-Based Noise Reduction Algorithms for Binaural Hearing Aids,” IEEE Transactions on Audio, Speech, and Language Processing, vol.17, no.1, pp.38-51, January 2009


      to obtain a steerable beam by estimating the statistics of the speech signal in each hearing aid. MWF is computationally expensive and the results presented were achieved using a perfect VAD (voice activity detection) to estimate the noise while assuming the noise to be stationary during speech activity. Another technique for forming one spatial null in a desired direction has been shown in
    • M. Ihle, “Differential Microphone Arrays for Spectral Subtraction”, in Intl Workshop on Acoustic Echo and Noise Control (IWAENC 2003), September 2003


      but is sensitive to the microphone array geometry and therefore not applicable to a hearing aid setup.


The object of the present invention is to provide a device and method for direction dependent spatial noise reduction that can be used to focus the angle of maximum sensitivity to a target acoustic source at any given azimuth, i.e., also to directions other than 0° (i.e., directly in front of the user) or 180° (i.e., directly behind the user).


The above object is achieved by the method according to claim 1 and the device according to claim 8.


The underlying idea of the present invention lies in the manner in which the estimates of the target signal level and the noise signal level are obtained, so as to focus on a desired acoustic source at any arbitrary direction. The target signal power estimate is obtained by combination of at least two directional outputs, one monaural and one binaural, which mutually have maximum response in the direction of the signal. The noise signal power estimate is obtained by measuring the maximum power of at least two directional signals, one monaural and one binaural, which mutually have minimum sensitivity in the direction of the desired source. An essential feature of the present invention thus lies in the combination of monaural and binaural directional signals for the estimation of the target and noise signal levels.


In one embodiment, to obtain the desired target signal level in the direction of the acoustic signal source, the proposed method further comprises estimating the target signal level by selecting the minimum of the at least one monaural directional signal and the at least one binaural directional signal, which mutually have a maximum response in a direction of the acoustic source.


In one embodiment, to steer the beam in the direction of the acoustic source, the proposed method further comprises estimating the noise signal level by selecting the maximum of the at least one monaural directional signal and the at least one binaural directional signal, which mutually have a minimum sensitivity in the direction of the acoustic source.


In an alternate embodiment, the proposed method further comprises estimating the noise signal level by calculating the sum of the at least one monaural directional signal and the at least one binaural directional signal, which mutually have a minimum sensitivity in the direction of the acoustic source.


In a further embodiment, the proposed method further comprises calculating, from the estimated target signal level and the estimated noise signal level, a Wiener filter amplification gain using the formula:





amplification gain=target signal level/[noise signal level+target signal level].


Applying the above gain to the input signal produces an enhanced signal output that has reduced noise in the direction of the acoustic source.
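As a small illustration of the above formula (a minimal Python sketch with arbitrary example levels, not part of the claimed method), the gain tends towards 1 where the target level dominates and towards 0 where the noise level dominates:

    def amplification_gain(target_level, noise_level):
        # Wiener amplification gain = target / (noise + target), as stated above.
        return target_level / (noise_level + target_level)

    # The gain approaches 1 when the target dominates and 0 when the noise dominates:
    for ratio in (10.0, 1.0, 0.1):                     # example target-to-noise level ratios
        print(ratio, amplification_gain(ratio, 1.0))   # 0.909..., 0.5, 0.0909...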


In a contemplated embodiment, since the response of directional signal processing circuitry is a function of acoustic frequency, the acoustic input signal is separated into multiple frequency bands and the above-described method is used separately for multiple of said multiple frequency bands.


In various different embodiments, for said signal levels one or multiple of the following units are used: power, energy, amplitude, smoothed amplitude, averaged amplitude, absolute level.





The present invention is further described hereinafter with reference to illustrated embodiments shown in the accompanying drawings, in which:



FIG. 1 illustrates a binaural hearing aid set up with wireless link, where embodiments of the present invention may be applicable,



FIG. 2 is a block diagram illustrating a first order differential microphone array circuitry,



FIG. 3 is a block diagram illustrating an adaptive differential microphone array circuitry,



FIG. 4 is a block diagram of a side-look steering system,



FIG. 5 is a schematic diagram illustrating a steerable binaural beamformer in accordance with the present invention,



FIGS. 6A-6D illustrate differential microphone array outputs for monaural and binaural cases. FIG. 6A shows the output when side_select=1. FIG. 6B shows the output when side_select=0.



FIG. 6C shows the output when plane_select=1. FIG. 6D shows the output when plane_select=0.



FIG. 7 is a block diagram of a device for direction dependent spatial noise reduction according to one embodiment of the present invention,



FIG. 8A illustrates an example of how the target signal level can be estimated,



FIG. 8B illustrates an example of how the noise signal level can be estimated, and



FIGS. 9A-9D illustrate steered beam patterns formed for various test cases. FIG. 9A illustrates the pattern for a beam steered to left side at 250 Hz. FIG. 9B illustrates the pattern for a beam steered to left side at 2 kHz. FIG. 9C illustrates the pattern for a beam steered to 45° at 250 Hz. FIG. 9D illustrates the pattern for a beam steered to 45° at 500 Hz





Embodiments of the present invention discussed herein below provide a device and a method for direction dependent spatial noise reduction, which may be used in a binaural hearing aid set up 1 as illustrated in FIG. 1. The set up 1 includes a right hearing aid comprising a first pair of monaural microphones 2, 3 and a left hearing aid comprising a second pair of monaural microphones 4, 5. The right and left hearing aids are fitted into the respective right and left ears of a user 6. The monaural microphones in each hearing aid are separated by a distance l1, which may, for example, be approximately equal to 10 mm due to size constraints. The right and left hearing aids are separated by a distance l2 and are connected by a bi-directional audio link 8, which is typically a wireless link. To minimize power consumption, only one microphone signal may be transmitted from one hearing aid to the other. In this example, the front microphones 2 and 4 of the right and left hearing aids respectively form a binaural pair, exchanging their signals over the audio link 8. In FIG. 1, xR1[n] and xR2[n] represent the omni-directional signals (at sample index n) measured by the front microphone 2 and the back microphone 3 of the right hearing aid, respectively, while xL1[n] and xL2[n] represent the omni-directional signals measured by the front microphone 4 and the back microphone 5 of the left hearing aid, respectively. The signals xR1[n] and xL1[n] thus correspond to the signals transmitted from the front microphones 2 and 4 of the right and left hearing aids, respectively.


The monaural microphone pairs 2, 3 and 4, 5 each provide directional sensitivity to target acoustic sources located directly in front of or behind the user 6. With the help of the binaural microphones 2 and 4, side-look beam steering is realized, which provides directional sensitivity to target acoustic sources located to the sides (left or right) of the user 6. The idea behind the present invention is to provide direction dependent spatial noise reduction that can be used to focus the angle of maximum sensitivity of the hearing aids to a target acoustic source 7 at any given azimuth θsteer, including angles other than 0°/180° (front and back directions) and 90°/270° (right and left sides).


Before discussing the embodiments of the proposed invention, the following sections explain how monaural directional sensitivity (for the front and back directions) and binaural side-look steering (for the left and right sides) are achieved.


Directional sensitivity is achieved by directional signal processing circuitry, which generally includes differential microphone arrays (DMA). A typical first order DMA circuitry 22 is explained with reference to FIG. 2. Such first order DMA circuitry 22 is generally used in traditional hearing aids that include two omni-directional microphones 23 and 24 separated by a distance l (approx. 10 mm) to generate a directional response. This directional response is independent of frequency as long as the assumption of small spacing l relative to the acoustic wavelength λ holds. In this example, the microphone 23 is considered to be on the focus side while the microphone 24 is considered to be on the interferer side. The DMA 22 includes time delay circuitry 25 for delaying the response of the microphone 24 on the interferer side by a time interval T. At the node 26, the delayed response of the microphone 24 is subtracted from the response of the microphone 23 to yield a directional output signal y[n]. For a signal x[n] impinging on the first order DMA 22 at an angle θ, under farfield conditions, the magnitude of the frequency and angle dependent response of the DMA 22 is given by:












|H(Ω, θ)| = |1 − exp(−jΩ(T + (l/c)·cos θ))|   (1)







where c is the speed of sound.
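A minimal numerical sketch of equation (1) is given below (Python/NumPy); the 10 mm spacing and the convention T = l/c are taken from the text above, while all other values are merely illustrative:

    import numpy as np

    def dma_magnitude_response(f, theta, l=0.01, T=None, c=343.0):
        # Equation (1): |H(Omega, theta)| of a first-order DMA with spacing l
        # and internal delay T (defaulted here to l/c, as used in hearing aids).
        if T is None:
            T = l / c
        omega = 2.0 * np.pi * f
        return np.abs(1.0 - np.exp(-1j * omega * (T + (l / c) * np.cos(theta))))

    print(dma_magnitude_response(1000.0, np.pi))  # ~0: null towards the rear (theta = 180 deg)
    print(dma_magnitude_response(1000.0, 0.0))    # largest response towards the front (theta = 0 deg)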


The delay T may be adjusted to cancel a signal from a certain direction to obtain the desired directivity response. In hearing aids, this delay T is fixed to the acoustic travel time l/c across the microphone spacing, and the desired directivity response is instead achieved using a back-to-back cardioid system as shown in the adaptive differential microphone array (ADMA) 27 in FIG. 3. As shown, the ADMA circuitry 27 includes time delay circuitry 30 and 31 for delaying the responses from the microphones 28 and 29 that are spaced apart by a distance l. CF is the cardioid beamformer output obtained from the node 33 that attenuates signals from the interferer direction, and CR is the anti-cardioid (backward facing cardioid) beamformer output obtained from the node 32 which attenuates signals from the focus direction. The anti-cardioid beamformer output CR is multiplied by a gain β and subtracted from the cardioid beamformer output CF at the node 35, such that the array output y[n] is given by:






y[n] = CF − β·CR   (2)


For y[n] from equation (2), the signal from 0° is not attenuated and a single spatial notch is formed in the direction θ1 for a value of β given by:










θ1 = arccos[(β − 1)/(β + 1)]   (3)







In the ADMA for hearing aids, the parameter β is adapted to steer the notch to the direction θ1 of a noise source so as to optimize the directivity index. This is performed by minimizing the MSE of the output signal y[n]. Using a gradient descent technique to follow the negative gradient of the MSE cost function, the parameter β is adapted according to equation (4), expressed as:










β[n+1] = β[n] − μ·(δ/δβ)·ε(y²[n])   (4)
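A minimal sketch of this adaptation (sample-by-sample gradient descent on y²[n], in Python/NumPy) is shown below; the step size, the initial value and the clipping of β to [0, 1] are illustrative choices and are not prescribed by the text:

    import numpy as np

    def adapt_beta(c_f, c_r, mu=0.01, beta=0.0):
        # c_f, c_r: forward cardioid and anti-cardioid sample streams (cf. FIG. 3).
        y = np.zeros_like(c_f)
        for n in range(len(c_f)):
            y[n] = c_f[n] - beta * c_r[n]        # array output, equation (2)
            beta += 2.0 * mu * y[n] * c_r[n]     # -mu * d/d(beta) of y^2[n], equation (4)
            beta = min(max(beta, 0.0), 1.0)      # keep the notch in the rear half-plane (assumption)
        return y, beta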







In hearing situations when a desired acoustic source is on one side of the user, side-look beam steering is realized using binaural hearing aids with a bidirectional audio link. It is known that at high frequencies, the Interaural Level Difference (ILD) between the signals measured at both sides of the head is significant due to the head-shadowing effect. The ILD increases with frequency. This head-shadow effect may be exploited in the design of the binaural Wiener filter for the higher frequencies. At lower frequencies, the acoustic wavelength λ is large with respect to the head diameter. Therefore, there is minimal change between the sound pressure levels at both sides of the head, and the Interaural Time Difference (ITD) is found to be the more significant acoustic cue. At lower frequencies, a binaural first-order DMA is designed to create the side-look. Therefore, the problem of side-look steering may be decomposed into two smaller problems, with a binaural DMA for the lower frequencies and a binaural Wiener filter approach for the higher frequencies, as illustrated by the side-look steering system 36 in FIG. 4. Herein, the noisy input signal x[n] is given by:






x[n]=s[n]+d[n]  (4)


where s[n] is the target signal from direction θs ∈ [−90°, 90°], which corresponds to the focus side, and d[n] is the noise signal incident from direction θd (where θd=−θs), which corresponds to the interferer side.


The input signal x[n] is decomposed into frequency sub-bands by an analysis filter-bank 37. The decomposed sub-band signals are separately processed by high frequency-band directional signal processing module 38 and low frequency-band directional signal processing module 39, the former incorporating a Wiener filter and the latter incorporating DMA circuitry. Finally, a synthesis filter-bank 40 reconstructs an output signal ŝ[n] that is steered in the direction θs of the focus side.
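The following sketch illustrates this split in Python (an STFT used as the analysis/synthesis filter-bank, a simple binaural delay-and-subtract DMA standing in for module 39 and the binaural Wiener gain of equation (9) below standing in for module 38); the 1 kHz split, the smoothing-free power estimates and the fixed look to the left side are simplifying assumptions for illustration only:

    import numpy as np
    from scipy.signal import stft, istft

    def side_look_left(x_left, x_right, fs, l2=0.17, c=343.0, split_hz=1000.0):
        f, _, XL = stft(x_left, fs)
        _, _, XR = stft(x_right, fs)
        Y = np.empty_like(XL)
        low = f <= split_hz
        # Low band (module 39, simplified): binaural delay-and-subtract DMA, delay l2/c, focus left
        phase = np.exp(-2j * np.pi * f[low][:, None] * (l2 / c))
        Y[low, :] = XL[low, :] - phase * XR[low, :]
        # High band (module 38): binaural Wiener gain from the two front-microphone powers
        PL, PR = np.abs(XL[~low, :]) ** 2, np.abs(XR[~low, :]) ** 2
        Y[~low, :] = (PL / (PL + PR + 1e-12)) * XL[~low, :]
        _, s_hat = istft(Y, fs)
        return s_hat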


At the high frequency-band directional signal processing module 38, the head shadowing effect is exploited in the design of a binaural system to perform the side-look at higher frequencies (for example for frequencies greater than 1 kHz). The signal from the interferer side is attenuated across the head at these higher frequencies and the analysis of the proposed system is given below.


Considering a scenario where a target signal s[n] arrives from the left side (−90°) of the hearing aid user and an interferer signal d[n] is on the right side (90°), from FIG. 1, the signal xL1[n] recorded at the front left microphone and the signal xR1[n] recorded at the front right microphone are given by:






xL1[n] = s[n] + hL1[n]*d[n]   (5)

xR1[n] = hR1[n]*s[n] + d[n]   (6)


where hL1[n] is the transfer function from the front right microphone to the left front microphone and hR1[n] is the transfer function from the front left microphone to the front right microphone. Transformation of equations (5) and (6) into the frequency domain gives:






XL1(Ω) = S(Ω) + HL1(Ω)·D(Ω)   (7)

XR1(Ω) = HR1(Ω)·S(Ω) + D(Ω)   (8)


Let the short-time spectral power of a signal Xa(Ω) be denoted as ΦXa(Ω). Since the left side is the focus side and the right side is the interferer side, a classical Wiener filter can be derived as:










W(Ω) = ΦXL1(Ω) / [ΦXL1(Ω) + ΦXR1(Ω)]   (9)







For analysis purposes, it is assumed that ΦHL1(Ω) = ΦHR1(Ω) = α(Ω), where α(Ω) is the frequency dependent attenuation corresponding to the transfer function from one hearing aid to the other across the head. Therefore, equation (9) can be simplified to:










W(Ω) = [ΦS(Ω) + α(Ω)·ΦD(Ω)] / [(1 + α(Ω))·(ΦS(Ω) + ΦD(Ω))]   (10)







As explained earlier, at higher frequencies the ILD attenuation α(Ω)→0 due to the head-shadowing effect and equation (10) tends to a traditional Wiener filter. At lower frequencies, the attenuation α(Ω)→1 and the Wiener filter gain W(Ω)→0.5. The output filtered signal at each side of the head is obtained by applying the gain W(Ω) to the omni-directional signals at the front microphones on both hearing aid sides. If X is defined as the vector [XL1(Ω) XR1(Ω)] and the output from both hearing aids is denoted as Y=[YL1(Ω) YR1(Ω)], then Y is given by:






Y=W(Ω)X   (11)


Thus, the spatial impression cues from the focused and interferer sides are preserved since the gain is applied to the original microphone signals on either side of the head.
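The limiting behaviour of equation (10) can be checked numerically; the short Python sketch below uses arbitrary example powers and is not tied to any particular frequency band:

    def head_shadow_wiener(phi_s, phi_d, alpha):
        # Equation (10): binaural Wiener gain with head-shadow attenuation alpha.
        return (phi_s + alpha * phi_d) / ((1.0 + alpha) * (phi_s + phi_d))

    print(head_shadow_wiener(1.0, 3.0, alpha=0.01))  # high frequency: ~0.255, close to the classical Wiener gain 0.25
    print(head_shadow_wiener(1.0, 3.0, alpha=1.0))   # low frequency: 0.5, i.e. no directional discrimination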


At lower frequencies, the distance l2 across the head between the two hearing aids is small compared to the signal's wavelength. Therefore, spatial aliasing effects are not significant. Assuming l2=17 cm, the maximum acoustic frequency to avoid spatial aliasing is approximately 1 kHz.
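The 1 kHz figure follows from the half-wavelength criterion for the binaural microphone pair (assuming a speed of sound of about 343 m/s):

    c, l2 = 343.0, 0.17            # speed of sound [m/s], inter-aid distance [m]
    f_max = c / (2.0 * l2)         # half-wavelength (anti-aliasing) limit for the binaural pair
    print(round(f_max))            # ~1009 Hz, i.e. roughly 1 kHz as stated above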


Referring back to FIG. 4, the low frequency-band directional signal processing module 39 incorporates a first-order ADMA across the head, wherein the left side is the focused side of the user and the right side is the interferer side. An ADMA of the type illustrated in FIG. 3 is accordingly designed so as to perform directional signal processing to steer to the side of interest. Thus, in this case, a binaural first order ADMA is implemented along the microphone sensor axis pointing to −90° across the head. Two back-to-back cardioids are thus resolved by setting the delay to l2/c, where c is the speed of sound. The array output is a scalar combination of a forward facing cardioid CF[n] (pointing to −90°) and a backward facing cardioid CR[n] (pointing to 90°) as expressed in equation (2) above.


Thus, it is seen that beam steering to 0° and 180° may be achieved using the basic first order DMA and ADMA illustrated in FIGS. 2 and 3, while beam steering to 90° and 270° may be achieved by the system illustrated in FIG. 4, incorporating a first order DMA for low frequency-band directional signal processing and a Wiener filter for high frequency-band directional signal processing.


Embodiments of the present invention provide a steerable system to achieve specific look directions θd,n where:





θd,n = n·45°, ∀ n = 0, . . . , 7   (12)


To that end, a parametric model is proposed for focusing the beam to a subset of the angles θd,n, namely θsteer ∈ {45°, 135°, 225°, 315°}. This model may be used to derive an estimate of the desired signal and an estimate of the interfering signal for enhancing the noisy input signal.


The desired signal incident from the angle θsteer and the interfering signal are estimated by a combination of directional signal outputs. The directional signals used in this estimation are derived as shown in FIG. 5. In FIG. 5, the inputs XL1(Ω) and XL2(Ω) correspond to omni-directional signals measured by the front and back microphones respectively of the left hearing aid 46. The inputs XR1(Ω) and XR2(Ω) correspond to omni-directional signals measured by the front and back microphones respectively of the right hearing aid 47. The binaural DMA 42 and the monaural DMA 43 correspond to the left hearing aid 46 while the binaural DMA 44 and the monaural DMA 45 correspond to the right hearing aid 47. The outputs CFb(Ω) and CRb(Ω) result from the binaural first order DMAs 42 and 44 and respectively denote the forward facing and backward facing cardioids. The outputs CFm(Ω) and CRm(Ω) result from the monaural first order DMAs 43 and 45 and follow the same naming convention as in the binaural case.


A first parameter “side_select” selects which microphone signal from the binaural DMA is delayed and subtracted and is therefore used to select the direction to which CFb(Ω) and CRb(Ω) point. When “side_select” is set to one, CFb(Ω) points to the right at 90° and CRb(Ω) points to the left at 270° (or −90°) as indicated in FIG. 6A. Conversely, when “side_select” is set to zero, CFb(Ω) points to the left at 270° (or −90°) and CRb(Ω) points to the right at 90° as indicated in FIG. 6B. A second parameter “plane_select” selects which microphone signal from the monaural DMA is delayed and subtracted. When “plane_select” is set to one, CFm(Ω) points to the front plane at 0° and CRm(Ω) points to the back plane at 180° as indicated in FIG. 6C. Conversely, when “plane_select” is set to zero, CFm(Ω) points to the back plane at 180° and CRm(Ω) points to the front plane at 0° as indicated in FIG. 6D.
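A minimal frequency-domain sketch of this selection logic is given below (Python/NumPy); the helper names, the use of ideal phase delays and the fixed microphone spacings are illustrative assumptions and are not taken from the text:

    import numpy as np

    def cardioid_pair(x_a, x_b, f, d, c=343.0):
        # First-order DMA for one STFT frame: x_a, x_b are 1-D omni spectra over the bins f;
        # delay one signal by d/c and subtract.
        phase = np.exp(-2j * np.pi * f * (d / c))
        return x_a - phase * x_b, x_b - phase * x_a   # (null towards b, null towards a)

    def steering_cardioids(x_l1, x_r1, x_r2, f, l1=0.01, l2=0.17, side_select=1, plane_select=1):
        # side_select swaps which binaural signal is delayed/subtracted (FIGS. 6A/6B),
        # plane_select does the same for the monaural pair (FIGS. 6C/6D).
        a, b = (x_r1, x_l1) if side_select == 1 else (x_l1, x_r1)
        c_fb, c_rb = cardioid_pair(a, b, f, l2)
        a, b = (x_r1, x_r2) if plane_select == 1 else (x_r2, x_r1)
        c_fm, c_rm = cardioid_pair(a, b, f, l1)
        return c_fm, c_rm, c_fb, c_rb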


A method is now illustrated below for calculating a target signal level and a noise signal level, in accordance with the present invention, in the case when a desired acoustic source is at an azimuth θsteer of 45°. Since the direction θsteer of the desired signal is known, an estimate of the target signal level is obtained by combining the monaural and binaural directional outputs which mutually have maximum response in the direction of the acoustic source. In this example (for θsteer=45°), the parameters “side_select” and “plane_select” are both set to 1 to give the binaural and monaural cardioids and anti-cardioids indicated in FIGS. 6A and 6C, respectively. Based on equation (2), a first monaural directional signal is calculated which is defined by a hypercardioid Y1, and a first binaural directional signal is calculated which is defined by a hypercardioid Y2. Further, signals Y3 and Y4 are obtained that create notches at 90°/270° and 0°/180°. Y1, Y2, Y3 and Y4 are represented as:










[Y1 Y2 Y3 Y4]T = [CFm CFb CFm CFb]T − βhyp·[CRm CRb CRm/βhyp CRb/βhyp]T   (13)







where βhyp is set to a value to create the desired hypercardioid. Equation (13) can be rewritten as:






Y = CF,1 − βhyp·CR,1   (14)

where Y=[Y1 Y2 Y3 Y4]T, CF,1=[CFm CFb CFm CFb]T and CR,1=[CRm CRb CRm/βhyp CRb/βhyp]T.
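A compact sketch of equations (13)/(14) in Python follows; the inputs are the per-bin cardioid outputs of FIG. 5, and the value of βhyp is only an example (β = 0.5 places the notches of Y1 and Y2 near 109.5°, the classical hypercardioid):

    import numpy as np

    def target_branch(c_fm, c_rm, c_fb, c_rb, beta_hyp=0.5):
        # Equations (13)/(14): Y = C_F1 - beta_hyp * C_R1 with
        # C_F1 = [CFm CFb CFm CFb] and C_R1 = [CRm CRb CRm/beta CRb/beta].
        c_f1 = np.stack([c_fm, c_fb, c_fm, c_fb])
        c_r1 = np.stack([c_rm, c_rb, c_rm / beta_hyp, c_rb / beta_hyp])
        return c_f1 - beta_hyp * c_r1                 # rows: Y1, Y2, Y3, Y4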


An estimate of the target signal level can be obtained by selecting the minimum of the directional signals Y1, Y2, Y3 and Y4, which mutually have maximum response in the direction of the acoustic source. In an exemplary embodiment, for the signal level, the unit used is power. In this case, an estimate of the short time target signal power Φ̂S is obtained by measuring the minimum short time power of the four signal components in Y, as given by:





Φ̂S = min(ΦY)   (15)


The estimate of the noise signal level is obtained by combining a second monaural directional signal N1 and a second binaural directional signal N2, which have a null placed at the direction of the acoustic source, i.e., which have minimum sensitivity in the direction of the acoustic source. Using the same parametric values of “side_select” and “plane_select”, N1 and N2 are calculated as:






N = CR,2 − βsteer·CF,2   (16)

where CR,2=[CRm CRb]T, CF,2=[CFm CFb]T, N=[N1 N2]T and βsteer is set to place a null at the direction of the acoustic source.


In this example, the estimated noise signal level is obtained by selecting the maximum of the directional signals N1 and N2. As before, for the signal level, the unit used is power. Thus, in this case, an estimate of the short time noise signal power Φ̂D is obtained by measuring the maximum short time power of the two noise components in N, and is given by:





Φ̂D = max(ΦN)   (17)


Based on the estimated target signal level Φ̂S and noise signal level Φ̂D, a Wiener filter gain W(Ω) is obtained from:










W(Ω) = Φ̂S / (Φ̂S + Φ̂D)   (18)







An enhanced desired signal is obtained by filtering the locally available omni-directional signal using the gain calculated in equation (18). Other directions can be steered to by varying “side_select” and “plane_select”.
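Putting equations (15), (17) and (18) together, a minimal per-band sketch in Python could look as follows; instantaneous squared magnitudes stand in for the short-time powers, which in practice would be temporally smoothed:

    import numpy as np

    def direction_dependent_gain(y_signals, n_signals, eps=1e-12):
        # y_signals: [Y1, Y2, ...] with maximum response towards the source,
        # n_signals: [N1, N2, ...] with a null towards the source (per frequency bin).
        phi_s = np.min(np.abs(np.asarray(y_signals)) ** 2, axis=0)   # equation (15)
        phi_d = np.max(np.abs(np.asarray(n_signals)) ** 2, axis=0)   # equation (17)
        return phi_s / (phi_s + phi_d + eps)                         # equation (18)

    # Usage per band: enhanced = direction_dependent_gain([Y1, Y2, Y3, Y4], [N1, N2]) * X_R1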



FIG. 7 shows a block diagram of a device 70 that accomplishes the method described above to provide direction dependent spatial noise reduction that can be used to focus the angle of maximum sensitivity to a target acoustic source at an azimuth θsteer. The device 70, in this example, is incorporated within the circuitry of the left and right hearing aids shown in FIG. 1. Referring to FIG. 7, the microphones 2 and 3 mutually form a monaural pair while the microphones 2 and 4 mutually form a binaural pair. The input omni-directional signals measured by the microphones 2, 3 and 4 are xR1[n], xR2[n] and xL1[n], which are processed in the frequency domain. It is also assumed that the azimuth θsteer in this example is 45°.


From the input omni-directional signals measured by the microphones, monaural and binaural directional signals are obtained by directional signal processing circuitry. The directional signal processing circuitry comprises first and second monaural DMA circuitries 71 and 72 and first and second binaural DMA circuitries 73 and 74. The first monaural DMA circuitry 71 uses the signals XR1[n] and XR2[n] measured by the monaural microphones 2 and 3 to calculate, therefrom, a first monaural directional signal Y1 having maximum response in the direction of the desired acoustic source, based on the value of θsteer. The first binaural DMA circuitry 73 uses the signals XR1[n] and XL1[n] measured by the binaural microphones 2 and 4 to calculate, therefrom, a first binaural directional signal Y2 having maximum response in the direction of the desired acoustic source, based on the value of θsteer. The directional signals Y1 and Y2 are calculated based on equation (14).


The second monaural DMA circuitry 72 uses the signals XR1[n] and XR2[n] to calculate therefrom a second monaural directional signal N1 having minimum sensitivity in the direction of the acoustic source, based on the value of θsteer. The second binaural DMA circuitry 74 uses the signals XR1[n] and XL1[n] to calculate therefrom a second binaural directional signal N2 having minimum sensitivity in the direction of the acoustic source, based on the value of θsteer. The directional signals N1 and N2 are calculated based on equation (16).


In the illustrated embodiment, the directional signals Y1, Y2, N1 and N2 are calculated in the frequency domain.


The target signal level and the noise signal level are obtained by combining the above-described monaural and binaural directional signals. As shown, a target signal level estimator 76 estimates a target signal level Φ̂S by combining the monaural directional signal Y1 and the binaural directional signal Y2, which mutually have a maximum response in the direction of the acoustic source. In one embodiment, the estimated target signal level Φ̂S is obtained by selecting the minimum of the monaural and binaural signals Y1 and Y2. The estimated target signal level Φ̂S may be calculated, for example, as the minimum of the short time powers of the signals Y1 and Y2. However, the estimated target signal level may also be calculated as the minimum of any of the following units of the signals Y1 and Y2, namely, energy, amplitude, smoothed amplitude, averaged amplitude and absolute level. A noise signal level estimator 75 estimates a noise signal level Φ̂D by combining the monaural directional signal N1 and the binaural directional signal N2, which mutually have a minimum sensitivity in the direction of the acoustic source. The estimated noise signal level Φ̂D may be obtained, for example, by selecting the maximum of the monaural directional signal N1 and the binaural directional signal N2. Alternately, the estimated noise signal level Φ̂D may be obtained by calculating the sum of the monaural directional signal N1 and the binaural directional signal N2. As in the case of the target signal level, for calculating the estimated noise signal level Φ̂D, one or multiple of the following units are used, namely, power, energy, amplitude, smoothed amplitude, averaged amplitude, absolute level.


Using the estimated target signal level Φ̂S and the estimated noise signal level Φ̂D, a gain calculator 77 calculates a Wiener filter gain W using equation (18). A gain multiplier 78 filters the locally available omni-directional signal by applying the calculated gain W to obtain the enhanced desired signal output F that has reduced noise and increased target signal sensitivity in the direction of the acoustic source. Since, in this example, the focus direction (45°) is towards the front and the right side, the desired signal output F is obtained by applying the Wiener filter gain W to the omni-directional signal XR1[n] measured by the front microphone 2 of the right hearing aid. Since the response of the directional signal processing circuitry is a function of acoustic frequency, the acoustic input signal is typically separated into multiple frequency bands and the above-described technique is applied separately to each of these multiple frequency bands.



FIG. 8A shows an example of how the target signal level can be estimated. The monaural signal is shown as the solid line 85 and the binaural signal is shown as the dotted line 84. As the target signal level, the minimum of the monaural signal and the binaural signal could be used. Using this criterion, for spatial directions from ˜345°-195° the monaural signal is the minimum, from ˜195°-255° the binaural signal is the minimum, etc. FIG. 8B shows an example of how the noise signal level can be estimated. The monaural signal is shown as the solid line 87 and the binaural signal is shown as the dotted line 86. As the noise signal level, the maximum of the monaural signal and the binaural signal could be used. Using this criterion, for spatial directions from ˜100°-180° the monaural signal is the maximum, from ˜180°-20° the binaural signal is the maximum, etc.


The performance of the proposed side-look beamformer and the proposed steerable beamformer were evaluated by examining the output directivity patterns. A binaural hearing aid system was set up as illustrated in FIG. 1 with two “Behind the Ear” (BTE) hearing aids, one on each ear, and with only one signal being transmitted from one ear to the other. The measured microphone signals were recorded on a KEMAR dummy head and the beam patterns were obtained by radiating a source signal from different directions at a constant distance.


The binaural side-look steering beamformer was decomposed into two subsystems to independently process the low frequencies (≦1 kHz) and the high frequencies (>1 kHz). In this scenario, the desired source was located on the left side of the hearing aid user at −90° (=270° on the plots) and the interferer on the right side of the user at 90°. The effectiveness of these two systems is demonstrated with representative directivity plots illustrated in FIGS. 9A and 9B. FIG. 9A shows the directivity plots obtained at 250 Hz (low frequency) wherein the plot 91 (thick line) represents the right ear signal and the plot 92 (thin line) represents the left ear signal. FIG. 9B shows the directivity plots obtained at 2 kHz (high frequency), wherein the plot 93 (thick line) represents the right ear signal and the plot 94 (thin line) represents the left ear signal. In both FIGS. 9A and 9B, the responses from both ears are shown together to illustrate the desired preservation of the spatial cues. It can be seen that the attenuation is more significant on the interfering signal impinging on the right side of the hearing aid user. Similar frequency responses may be obtained across all frequencies for focusing on desired signals located either at the left (270°) or the right (90°) of the hearing aid user.


The performance of the steerable beamformer is demonstrated for the scenario described referring to FIG. 7, where the desired acoustic source is at azimuth θsteer of 45°. Since a null is placed at 45°, as per equation (3), βsteer can be calculated by:










θsteer = arccos[(βsteer − 1)/(βsteer + 1)]   (19)

βsteer = (2 − √2)/(2 + √2)   (20)
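Under an idealized free-field cardioid model (not the measured BTE responses used in the evaluation), the value of equation (20) can be sanity-checked numerically in Python; this only verifies the null placement of the noise branch of equation (16):

    import numpy as np

    theta = np.radians(np.arange(0.0, 360.0, 0.1))
    c_f = 0.5 * (1.0 + np.cos(theta))                      # ideal forward-facing cardioid
    c_r = 0.5 * (1.0 - np.cos(theta))                      # ideal backward-facing cardioid
    beta_steer = (2.0 - np.sqrt(2.0)) / (2.0 + np.sqrt(2.0))
    n = np.abs(c_r - beta_steer * c_f)                     # noise branch, equation (16)
    print(np.degrees(theta[np.argmin(n)]))                 # ~45 deg: the null lies at theta_steer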







From equations (15) and (17), estimates of the signal power Φ̂S and the noise power Φ̂D were obtained. FIG. 9C shows the polar plot of the beam pattern of the proposed system steered to 45° at 250 Hz, wherein the plot 101 (thick line) represents the right ear signal and the plot 102 (thin line) represents the left ear signal. FIG. 9D shows the polar plot of the beam pattern of the proposed system steered to 45° at 500 Hz, wherein the plot 103 (thick line) represents the right ear signal and the plot 104 (thin line) represents the left ear signal. As required, the maximum gain is in the direction of θsteer. Since the simulations were performed using actual recorded signals, the steering of the beam can be adjusted to the direction θsteer by fine-tuning the ideal value of βsteer from equation (20) for real implementations.


While this invention has been described in detail with reference to certain preferred embodiments, it should be appreciated that the present invention is not limited to those precise embodiments. Rather, in view of the present disclosure, which describes the current best mode for practicing the invention, many modifications and variations would present themselves to those of skill in the art without departing from the scope and spirit of this invention. The scope of the invention is, therefore, indicated by the following claims rather than by the foregoing description. All changes, modifications, and variations coming within the meaning and range of equivalency of the claims are to be considered within their scope.

Claims
  • 1-16. (canceled)
  • 17. A method for direction dependent spatial noise reduction, which comprises the following steps: measuring an acoustic input signal from an acoustic source; obtaining, from the acoustic input signal, at least one monaural directional signal and at least one binaural directional signal; estimating a target signal level by combining the at least one monaural directional signal and the at least one binaural directional signal, the at least one monaural directional signal and the at least one binaural directional signal mutually have a maximum response in a direction of the acoustic source; and estimating a noise signal level by combining the at least one monaural directional signal and the at least one binaural directional signal, the at least one monaural directional signal and the at least one binaural directional signal mutually have a minimum sensitivity in a direction of the acoustic source.
  • 18. The method according to claim 17, which further comprises estimating the target signal level by selecting a minimum of the at least one monaural directional signal and the at least one binaural directional signal, which mutually have the maximum response in the direction of the acoustic source.
  • 19. The method according to claim 17, which further comprises estimating the noise signal level by selecting a maximum of the at least one monaural directional signal and the at least one binaural directional signal, which mutually have the minimum sensitivity in the direction of said acoustic source.
  • 20. The method according to claim 17, which further comprises estimating the noise signal level by calculating a sum of the at least one monaural directional signal and the at least one binaural directional signal, which mutually have the minimum sensitivity in the direction of the acoustic source.
  • 21. The method according to claim 17, which further comprises calculating, from the target signal level estimated and the noise signal level estimated, a Wiener filter amplification gain using the formula: amplification gain=target signal level/[noise signal level+target signal level].
  • 22. The method according to claim 17, which further comprises separating the acoustic input signal into multiple frequency bands and the method is used separately for a multiple of the multiple frequency bands.
  • 23. The method according to claim 17, which further comprises selecting the target signal level and the noise signal level from the group of power signals, energy signals, amplitude levels, smoothed amplitude levels, averaged amplitude levels, and absolute levels.
  • 24. A device for direction dependent spatial noise reduction, comprising: a plurality of microphones for measuring an acoustic input signal from an acoustic source, said plurality of microphones forming at least one monaural pair and at least one binaural pair; directional signal processing circuitry for obtaining, from the acoustic input signal, at least one monaural directional signal and at least one binaural directional signal; a target signal level estimator for estimating a target signal level by combining the at least one monaural directional signal and the at least one binaural directional signal, the at least one monaural directional signal and the at least one binaural directional signal mutually have a maximum response in a direction of the acoustic source; and a noise signal level estimator for estimating a noise signal level by combining the at least one monaural directional signal and the at least one binaural directional signal, the at least one monaural directional signal and the at least one binaural directional signal mutually have a minimum sensitivity in the direction of the acoustic source.
  • 25. The device according to claim 24, wherein said target signal level estimator is configured for estimating the target signal level by selecting a minimum of the at least one monaural directional signal and the at least one binaural directional signal, which mutually have the maximum response in the direction of the acoustic source.
  • 26. The device according to claim 24, wherein said noise signal level estimator is configured for estimating the noise signal level by selecting a maximum of the at least one monaural directional signal and the at least one binaural directional signal, which mutually have the minimum sensitivity in the direction of the acoustic source.
  • 27. The device according to claim 24, wherein said noise signal level estimator is configured for estimating the noise signal level by calculating a sum of the at least one monaural directional signal and the at least one binaural directional signal, which mutually have the minimum sensitivity in the direction of the acoustic source.
  • 28. The device according to claim 24, further comprising a signal amplifier for amplifying the acoustic input signal based on a Wiener filter amplification gain calculated using the formula: amplification gain=target signal level/[noise signal level+target signal level].
  • 29. The device according to claim 24, wherein the noise signal level and the target signal level are selected from the group consisting of power signals, energy signals, amplitude signals, smoothed amplitude signals, averaged amplitude signals, and absolute levels.
  • 30. The device according to claim 24, further comprising means for separating the acoustic input signal into multiple frequency bands, wherein the target signal level and the noise signal level are calculated separately for a multiple of the multiple frequency bands.
  • 31. The device according to claim 24, wherein said directional signal processing circuitry further comprises: a monaural differential microphone array circuitry for obtaining the at least one monaural directional signal; and a binaural differential microphone array circuitry for obtaining the at least one binaural directional signal.
  • 32. The device according to claim 30, wherein said directional signal processing circuitry further comprises a binaural Wiener filter circuitry for obtaining the at least one binaural directional signal, for frequency bands above a threshold value, said binaural Wiener filter circuitry having an amplification gain that is calculated on a basis of signal attenuation corresponding to a transfer function between said binaural pair of microphones.
Priority Claims (1)
Number: 10154098.7   Date: Feb 2010   Country: EP   Kind: regional
PCT Information
Filing Document: PCT/EP2010/065801   Filing Date: 10/20/2010   Country: WO   Kind: 00   371c Date: 11/5/2012