Generally, a hearing aid system according to the invention is understood as meaning any device which provides an output signal that can be perceived as an acoustic signal by a user, or contributes to providing such an output signal, and which has means which are customized to compensate for an individual hearing loss of the user, or contribute to compensating for the hearing loss of the user. These are, in particular, hearing aids which can be worn on the body or by the ear, in particular on or in the ear, and which can be fully or partially implanted. However, some devices whose main aim is not to compensate for a hearing loss may also be regarded as hearing aid systems, for example consumer electronic devices (televisions, hi-fi systems, mobile phones, MP3 players etc.), provided they have measures for compensating for an individual hearing loss.
Within the present context a traditional hearing aid can be understood as a small, battery-powered, microelectronic device designed to be worn behind or in the human ear by a hearing-impaired user. Prior to use, the hearing aid is adjusted by a hearing aid fitter according to a prescription. The prescription is based on a hearing test of the performance of the hearing-impaired user's unaided hearing, resulting in a so-called audiogram. The prescription is developed to reach a setting where the hearing aid will alleviate a hearing loss by amplifying sound at frequencies in those parts of the audible frequency range where the user suffers a hearing deficit. A hearing aid comprises one or more microphones, a battery, a microelectronic circuit comprising a signal processor, and an acoustic output transducer. The signal processor is preferably a digital signal processor. The hearing aid is enclosed in a casing suitable for fitting behind or in a human ear.
Within the present context a hearing aid system may comprise a single hearing aid (a so called monaural hearing aid system) or comprise two hearing aids, one for each ear of the hearing aid user (a so called binaural hearing aid system). Furthermore, the hearing aid system may comprise an external device, such as a smart phone having software applications adapted to interact with other devices of the hearing aid system. Thus, within the present context the term “hearing aid system device” may denote a hearing aid or an external device.
The mechanical design has developed into a number of general categories. As the name suggests, Behind-The-Ear (BTE) hearing aids are worn behind the ear. To be more precise, an electronics unit comprising a housing containing the major electronics parts thereof is worn behind the ear. An earpiece for emitting sound to the hearing aid user is worn in the ear, e.g. in the concha or the ear canal. In a traditional BTE hearing aid, a sound tube is used to convey sound from the output transducer, which in hearing aid terminology is normally referred to as the receiver, located in the housing of the electronics unit and to the ear canal. In some modern types of hearing aids, a conducting member comprising electrical conductors conveys an electric signal from the housing and to a receiver placed in the earpiece in the ear. Such hearing aids are commonly referred to as Receiver-In-The-Ear (RITE) hearing aids. In a specific type of RITE hearing aids the receiver is placed inside the ear canal. This category is sometimes referred to as Receiver-In-Canal (RIC) hearing aids.
In-The-Ear (ITE) hearing aids are designed for arrangement in the ear, normally in the funnel-shaped outer part of the ear canal. In a specific type of ITE hearing aids the hearing aid is placed substantially inside the ear canal. This category is sometimes referred to as Completely-In-Canal (CIC) hearing aids. This type of hearing aid requires an especially compact design in order to allow it to be arranged in the ear canal, while accommodating the components necessary for operation of the hearing aid.
Hearing loss of a hearing impaired person is quite often frequency dependent. This means that the hearing loss of the person varies depending on the frequency. Therefore, when compensating for hearing losses, it can be advantageous to utilize frequency-dependent amplification. Hearing aids therefore often split an input sound signal received by an input transducer of the hearing aid into various frequency intervals, also called frequency bands, which are processed independently. In this way, it is possible to adjust the input sound signal of each frequency band individually to account for the hearing loss in the respective frequency bands. The frequency dependent adjustment is normally done by implementing a band split filter and compressors for each of the frequency bands, so-called band split compressors, which may be combined into a multi-band compressor. In this way, it is possible to adjust the gain individually in each frequency band depending on the hearing loss as well as the input level of the input sound signal in a specific frequency range. For example, a band split compressor may provide a higher gain for a soft sound than for a loud sound in its frequency band.
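The static input/output behaviour of one such band split compressor can be sketched as follows; the threshold, compression ratio and maximum gain values are purely illustrative assumptions, not the settings of any particular fitting:

```python
def band_compressor_gain_db(level_db, threshold_db=50.0, ratio=2.0, max_gain_db=30.0):
    """Static gain curve of one band-split compressor (hypothetical
    parameter values): full gain below the threshold, compressed above it."""
    if level_db <= threshold_db:
        return max_gain_db
    # above threshold, every `ratio` dB of input growth yields 1 dB of output growth
    return max_gain_db - (level_db - threshold_db) * (1.0 - 1.0 / ratio)

# in this band, a soft sound receives more gain than a loud sound
soft_gain = band_compressor_gain_db(40.0)
loud_gain = band_compressor_gain_db(80.0)
```

With these assumed values a 40 dB input receives the full 30 dB of gain while an 80 dB input receives only 15 dB, which is exactly the "higher gain for a soft sound than for a loud sound" behaviour described above.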
The filter banks used in such multi-band compressors are well known within the art of hearing aids but are nevertheless based on a number of tradeoffs. Most of these tradeoffs deal with the frequency resolution as will be further described below.
There are some very clear advantages of having a high-resolution filter bank. The higher the frequency resolution, the better individual periodic components can be distinguished from each other. This gives a much finer signal analysis and enables more advanced signal processing. Especially noise reduction and speech enhancement schemes may benefit from a higher frequency resolution.
However, a filter bank with a high frequency resolution generally introduces a correspondingly long delay, which for most people will have a detrimental effect on the perceived sound quality.
It has therefore been suggested to reduce the delay incurred by filter banks, such as Discrete Fourier Transform (DFT), Finite Impulse Response (FIR) or Infinite Impulse Response (IIR) filter banks, by applying a broadband time-varying filter with a response that corresponds to the frequency dependent target gains that were otherwise to be applied to the frequency bands provided by the filter banks. A broadband time-varying filter, such as the one discussed above, will also inherently introduce a delay but this delay is generally significantly shorter than the delay introduced by filter banks.
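The size of the delay saving can be made concrete with a small calculation; the sample rate and filter length below are assumptions chosen only for illustration:

```python
fs = 32_000   # assumed sample rate in Hz
n_taps = 256  # assumed length of a linear-phase analysis filter

# a linear-phase FIR filter delays the signal by (n_taps - 1) / 2 samples
linear_phase_delay_ms = ((n_taps - 1) / 2) / fs * 1000.0

# roughly 4 ms here, whereas a broadband time-varying filter with a short
# (in particular minimum-phase) impulse response can stay well below 1 ms
```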
However, this solution still requires that the frequency dependent gains are calculated in an analysis part of the system, and in case the analysis part comprises filter banks, then the determined frequency dependent gains will be correspondingly delayed relative to the signal that the gains are to be applied to using the time-varying filter, but this is generally considered a minor issue because the frequency dependent gains for most situations need not change very fast.
It has furthermore been suggested in the art to minimize the delay introduced by the time-varying filter by implementing the time-varying filter as minimum-phase.
In the present context only monaural beam forming (as opposed to binaural beam forming) will be considered unless specifically noted otherwise. This type of beam forming applies more than one microphone in a hearing aid and represents a type of noise reduction. Generally, it provides the most significant improvement of speech intelligibility among all types of noise reduction. Additionally, beam forming can help restore pinna cues (i.e. spatial cues) lost by behind-the-ear hearing aids, which are essential for the wearer's spatial perception, especially in order to avoid front-back confusion.
However, a beam former needs to meet some rather strict requirements in order to be suitable for implementation in a low delay system. Among these are a maximum allowed delay in the range of 0.1 milliseconds (including microphone matching), which means that classic beam former designs are not an option. Especially, multiband and binaural beam formers introduce much larger delays.
The document U.S. Pat. No. 5,473,701 describes an adaptive microphone array, in particular a combination of an omnidirectional sensor and a dipole sensor to form an adaptive first order differential microphone array. However, the document is silent with respect to means for providing low delay processing.
It is therefore a feature of the present invention to provide an improved hearing aid with beamforming and low delay signal processing.
It is another feature of the present invention to provide an improved method of operating a hearing aid.
The invention, in a first aspect, provides a hearing aid comprising at least two microphones, a signal processor, a combiner, a minimum phase filter and an electrical-acoustical output receiver, wherein the hearing aid is adapted to:
The invention, in a second aspect, provides a method of operating a binaural hearing aid, comprising the steps of:
Further advantageous features are defined in the dependent claims.
Still other features of the present invention will become apparent to those skilled in the art from the following description wherein the invention will be explained in more detail.
By way of example, there is shown and described a preferred embodiment of this invention. As will be realized, the invention is capable of other embodiments, and its several details are capable of modification in various, obvious aspects, all without departing from the invention. Accordingly, the drawings and descriptions will be regarded as illustrative in nature and not as restrictive. In the drawings:
In the present context the term signal processing is to be understood as any type of hearing aid related signal processing that includes at least: noise reduction (including beam forming), speech enhancement and hearing compensation.
In the present context the term omnidirectional signal is to be understood as a signal that represents a situation where the relative sensitivity of the signal, with respect to impinging sound from all directions from 0° to 360° is the same.
As opposed hereto, the term directional signal represents all other situations. Thus, signals representing a situation where said relative sensitivity has e.g. a sub-cardioid shape, a cardioid shape, a super-cardioid shape, a hyper-cardioid shape or a bidirectional shape may in the following all be denoted a directional signal.
In the present context the term microphone signals may also be used to denote a microphone signal whereto an artificial delay has been applied.
However, if two (or more) microphone signals are added (with an inherent sound transmission time delay between them due to the spacing between the microphones), in order to provide an average of the at least two signals, then this average will still be considered an omnidirectional signal in the present context. This is likewise so if an artificial delay is added to at least one of said at least two microphone signals in order to avoid a dip in the frontal sensitivity in the high frequency range of the resulting omnidirectional signal due to destructive interference between at least two of the microphone signals.
Reference is first made to
In the hearing aid 100 of
Reference is now made to
The hearing aid 200 of
According to the embodiment of
According to the embodiment of
For the purpose of beamforming, the output signals from the microphones 201-a and 201-b (which in the following may be denoted microphone signals) are preferably matched in-situ (i.e. adaptive matching carried out during normal operation) (not illustrated in
The two microphone signals are branched, and each microphone signal is hereby provided both to the input of one of the delay units 202-a, 202-b, which provide a fractional time delay to the respective microphone signal, and to one of the combiners 203-a and 203-b. The delays influence the directional characteristics of the signals that are provided as output from the combiners 203-a and 203-b, which signals in the following may be denoted the omnidirectional signal and the directional signal respectively. It is noted that the impact from the selected delay on the directional characteristic depends on the distance between the two microphones 201-a, 201-b.
According to the present embodiment both the front and the rear microphone signals are delayed.
The delaying of the front microphone signal in delay unit 202-b is done in order to avoid that the sensitivity of the omnidirectional signal, for at least some impinging sound directions, decreases in parts of the higher frequency range (due to destructive interference). Furthermore, careful selection of the delay applied to the front microphone signal can provide that, in addition to alleviating the sensitivity loss for high frequency sounds impinging from the front hemisphere, the sensitivity for high frequency sounds impinging from the back hemisphere is attenuated. Such a difference in front-back sensitivity is very advantageous because this type of spatial cue can be used to avoid the so-called front-back confusion that may result for users with hearing aids that are not able to take advantage of the natural pinna effect. Finally, it is generally advantageous to be able to provide an omnidirectional signal capable of suppressing high frequency sound from the back hemisphere.
According to the present embodiment the delay applied by the delay unit 202-b corresponds to approximately ⅔ of the time required for sound to travel the distance between the front and rear microphones (which in the following may also be denoted the acoustic microphone distance), which amounts to approximately 0.03 milliseconds for a distance of 1.5 cm, and according to variations the delay may be in the range between say 0.01 and 0.05 milliseconds dependent on the distance between the front and rear microphone. In other words, the delay applied by the delay unit 202-b may be anything between zero and the full acoustic microphone distance.
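The figures above can be verified with a small calculation; the speed of sound used is an assumed room-temperature value:

```python
SPEED_OF_SOUND = 343.0  # m/s, an assumed room-temperature value

def acoustic_mic_delay_s(spacing_m):
    """Time for sound to travel the distance between the two microphones."""
    return spacing_m / SPEED_OF_SOUND

full_delay = acoustic_mic_delay_s(0.015)  # 1.5 cm spacing: about 44 microseconds
t_front = (2.0 / 3.0) * full_delay        # about 0.03 ms, as stated above
```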
The delaying of the rear microphone signal in delay unit 202-a is done in order to ensure that the directional signal output from the combiner 203-a has a desired directional pattern (e.g. with respect to avoiding front-back confusion), such as a hyper-cardioid instead of e.g. a bidirectional shape. This is required because the broadband mixing carried out by the combiners 204 and 206 and the multiplier 205 only allows an effective mixing of the omnidirectional and directional signals in a relatively narrow frequency range; consequently, outside this narrow range either the omnidirectional or the directional signal, as output from the combiners 203-b and 203-a respectively, will dominate, and these signals therefore need to have desirable shapes even without an effective mixing.
According to the present embodiment the delay applied by the delay unit 202-a also corresponds to approximately ⅔ of the time required for sound to travel the distance between the front and rear microphones, which amounts to approximately 0.03 milliseconds for a distance of 1.5 cm and which provides a hyper-cardioid. However, according to variations the delay may be in the range between say no delay and up to 0.05 milliseconds dependent on the distance between the front and the rear microphone. In other words, the delay applied by the delay unit 202-a may also be anything between zero and the full acoustic microphone distance.
Thus according to the present embodiment the omnidirectional signal (which in the following may be abbreviated “omni”) is provided as the output signal from the combiner 203-b by adding the signal from the rear microphone (201-a) to the signal from the front microphone (201-b). In the following these two signals may be denoted x_rear and x_front respectively. Hereby the omnidirectional signal can be expressed as given below in equation (1):
omni = x_front(t + T_front) + x_rear(t)  (1)
wherein T_front represents the delay introduced by the delay unit 202-b.
In a similar manner the directional signal (which in the following may be abbreviated “dir”) is provided as the output signal from the combiner 203-a by subtracting the signal from the rear microphone (201-a) from the signal from the front microphone (201-b). Hereby the directional signal can be expressed as given below in equation (2):
dir = x_front(t) − x_rear(t + T_rear)  (2)
wherein T_rear represents the delay introduced by the delay unit 202-a.
In continuation of the above, an intermediate beamformed signal (which in the following may be abbreviated iBF or simply be denoted the beamformed signal) is finally provided by linearly combining the omnidirectional signal and the directional signal as given below in equation (3):
iBF = γ·omni + (1 − γ)·dir  (3)
wherein γ (gamma) is an adaptive parameter, whose selected value controls the shape of the directional pattern for the beamformed signal. More specifically the selected value of γ is used to control whether the hearing aid output signal, in a given frequency range, is primarily omnidirectional or primarily directional and as such may also be used to fade between these omnidirectional and directional characteristics as a function of frequency.
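Equations (1) to (3) can be sketched as follows; the whole-sample delays and the function names are illustrative simplifications, since the embodiment described above uses fractional delays:

```python
import numpy as np

def delay(x, n):
    """Delay a signal by n whole samples (a real hearing aid would use a
    fractional delay; whole samples keep the sketch simple)."""
    out = np.zeros_like(x)
    out[n:] = x[:len(x) - n]
    return out

def beamform(x_front, x_rear, d_front, d_rear, gamma):
    """Equations (1)-(3): omni, directional, and their broadband mix."""
    omni = delay(x_front, d_front) + x_rear       # eq. (1)
    dir_ = x_front - delay(x_rear, d_rear)        # eq. (2)
    return gamma * omni + (1.0 - gamma) * dir_    # eq. (3)
```

With gamma equal to one the output reduces to the omnidirectional signal, and with gamma equal to zero it reduces to the directional signal, matching the two endpoints discussed below.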
The value of the gamma parameter is determined by the digital signal processor (DSP) 209.
According to the present embodiment the gamma parameter is restricted to be within the range of one and zero, but in variations other ranges may be considered. If a gamma value of one is selected (i.e. the first extreme or first endpoint of the range of gamma values) then a hearing aid output signal that has an omnidirectional characteristic for all frequencies of the audible spectrum is provided. On the other hand, if a gamma value of zero is selected (i.e. the second extreme or second endpoint of the range of gamma values) a hearing aid output signal that has a directional characteristic for all frequencies of the audible spectrum is provided.
According to specific variations of the present embodiment, the gamma parameter is restricted to be within a range of say 1 and 0.001 or within a range of say 1 and 0.03. One advantage of these more narrow ranges is that the amplification of microphone noise in the very low frequency range is attenuated because the omnidirectional signal will dominate the directional signal in this very low frequency range. However, according to yet other alternative embodiments the relative weighting of the omnidirectional and directional signals may be carried out in other ways than the one given in equation (3), as will be obvious for the skilled person.
For gamma values between the above mentioned extreme values (which in the following may also be denoted endpoints) a mix of the omnidirectional and directional signals will result, where the amount of mixing will vary across frequency due to the difference in frequency response between the omnidirectional signal and the directional signal. This constitutes a specific advantage of the present invention because it provides a frequency dependent beamforming that only requires broadband mixing, as controlled by the broadband gamma parameter, and without requiring filter banks (which introduce a significant delay) in the signal path.
Furthermore, by refraining from low frequency gain restoration of the directional signal before the combiner 206, the resulting processing delay may be reduced compared to the situation with low frequency gain restoration of the directional signal, which requires that a delay is added to the omnidirectional signal in order to maintain the phase relationship between the two signals.
The intermediate beamformed signal at the output of combiner 206 is provided to the minimum phase filter 207. The filter coefficients for the operation of the minimum phase filter 207 are provided by digital signal processor (DSP) 209.
According to the present embodiment the DSP 209 analyses the microphone signals provided by the two microphones 201-a and 201-b in order to provide a target gain that is adapted to at least one of suppressing noise, customizing the sound to a user preference and alleviating a hearing deficit of an individual wearing the hearing aid system. However according to other embodiments, the DSP 209 may additionally or alternatively analyse other signals such as the omnidirectional signal from the combiner 203-b, the directional signal from the combiner 203-a and the intermediate beamformed signal from the combiner 206.
The hearing aid 200 illustrated in
The low delay beam former of the present invention differs from prior art beam formers, such as the one given in
According to the present embodiment the calculation of the resulting hearing aid gain, to be applied by the minimum phase filter 207, will take into account that a low frequency boost, i.e. an additional amplification of the lower frequencies, is generally required, because the beamforming involves a directional signal that is formed by subtracting one microphone signal from the other; a consequence hereof is that the directional signal exhibits a decrease in magnitude with decreasing frequency.
Thus the approach of the present invention is to combine a low frequency boost gain and the frequency dependent target gain in order to provide the resulting hearing aid gain to be applied by the minimum phase filter 207, which is positioned downstream of the combiner 206. This is an efficient approach, that avoids unnecessary gain adjustments compared to an approach of the prior art where a low frequency boost gain is initially applied to the directional signal and then subsequently (after the beamforming) a high frequency boost gain is applied (for the majority of persons suffering from a high frequency hearing loss).
It is a particular insight of the inventors that this approach according to the invention is particularly advantageous for the plurality of hearing aid users who suffer from a larger hearing loss in the high frequency range than in the low frequency range and therefore need a gain with a relatively strong frequency dependence, which translates into a correspondingly high group delay when such a frequency dependent gain is to be applied by a broadband filter, such as e.g. the minimum phase filter 207 of the present invention. Thus, by combining the low frequency boost gain and the target gain, the resulting hearing aid gain to be applied by the minimum phase filter 207 will, for the majority of persons suffering from a high frequency hearing loss, have a relatively weaker frequency dependence, which translates into a shorter minimum phase filter 207 and hereby also a lower group delay.
It is a further specific advantage of the present invention that a lower group delay leads to fewer sound artefacts arising from e.g. mixing of hearing aid sound and directly transmitted ambient sound in the ear canal, due to the so-called comb filter effect. However, it is noted that the bone conducted sound from the user's own voice may also interfere with the directly transmitted ambient sound and with the hearing aid sound, hereby also creating a comb-like filter effect.
In this context it is noted that low delay systems are generally especially advantageous for hearing aids with so called open fittings, i.e. hearing aids where sound can enter the ear directly despite the presence of a hearing aid in the ear canal, as one example a hearing aid with a large vent may be denoted a hearing aid with an open fitting. A significant issue with open fittings is the comb filter effect, i.e. destructive interference between direct sound entering the ear (e.g. through the vent) and the sound processed (and hereby delayed) and subsequently provided by the hearing aid. The characteristics of this destructive interference is dependent on the delays and gains introduced by the hearing aid sound processing and may generally be relieved by low delay processing.
Additionally, hearing aids with open fittings are not really suited to provide a significant gain in the low frequency range because a significant part of the low frequency sound provided by the hearing aid disappears into the environment through e.g. the vent. Therefore, open fittings are primarily useful to compensate high frequency hearing losses. This concurs with the low delay hearing aids described above, which are especially suited to compensate hearing loss in the high-frequency range.
Additionally, the difference in frequency response between the omnidirectional and the directional signals is used to provide a beamformed signal with omnidirectional characteristics at low frequencies while having directional characteristics at higher frequencies, and for fading between the two characteristics as a function of frequency, i.e. determining the frequency ranges where either of the two characteristics are dominating by varying the value of the broadband (i.e. frequency independent) gamma parameter.
Thus, the inventors have found that the frequency independent gamma may advantageously be adapted in order to provide e.g. suppression of noise while also providing spatial cues, such as pinna cues. According to an embodiment this may be carried out using at least one of an energy minimization and sound scene classification, but other methods may also be used.
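One way to picture such an adaptation of gamma is output-energy minimization over a grid of candidate values. This is a deliberately naive sketch; a real system would use a proper adaptive update rule, and the restricted gamma range is an assumption borrowed from the embodiment above:

```python
import numpy as np

def adapt_gamma(omni, dir_, grid=None):
    """Pick the broadband gamma minimizing the beamformed output energy.
    A grid search is an illustrative stand-in for whichever adaptive
    update a product would actually use."""
    if grid is None:
        grid = np.linspace(0.03, 1.0, 98)  # assumed restricted range
    energies = [np.mean((g * omni + (1.0 - g) * dir_) ** 2) for g in grid]
    return float(grid[int(np.argmin(energies))])
```

When the directional branch carries less energy (e.g. noise arriving from directions it suppresses), the energy criterion pushes gamma toward the directional endpoint, and vice versa.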
Reference is now given to
In a first step 301, a first signal is provided by adding a first and a second microphone signal provided from the first and the second microphones (201-a, 201-b) respectively whereby an omnidirectional signal is provided.
However, according to a specific variation only a single microphone signal is used to provide said first (omnidirectional) signal and according to an even more specific variation this variation is selected in case wind noise is detected.
In a second step 302, a second signal, that is different from the first signal, is provided by combining a third and a fourth microphone signal provided from the first and the second microphones (201-a, 201-b) respectively. According to the present embodiment the second signal is provided by subtracting one of the microphone signals from the other microphone signal whereby a directional signal is provided.
According to a variation broadband matching of the microphone signals from the microphones 201-a and 201-b is carried out.
According to various embodiments, application of a delay to at least one of said microphone signals is carried out, whereby the omnidirectional and bidirectional signals may be replaced by other types of directional signals such as the various forms of cardioids.
It is noted that dependent on whether a delay is applied to any of said first, second, third and fourth microphone signals, then some of these signals may be identical.
In a third step 303, an intermediate beamformed signal is provided by linearly combining said first and second signals using a frequency independent (i.e. broadband) adaptive parameter (gamma) to weight said first and said second signal.
In a fourth step 304, a desired target gain is determined in order to provide at least one of: alleviating a hearing deficit of a user, suppressing noise and customizing the sound to at least one of a user preference and a sound environment.
In a fifth step 305 a resulting hearing aid gain is determined in order to be applied to the intermediate beamformed signal based on the desired target gain and based on the value of the adaptive parameter. Hereby the impact from the selected value of the adaptive parameter (gamma), on the frequency response of the intermediate beamformed signal, is also compensated.
In a sixth step 306, the minimum phase filter 207 is synthesized in order to apply said resulting hearing aid gain to said intermediate beamformed signal in order to provide a hearing aid output signal that has been processed with the desired target gain.
Finally, in a seventh step 307, the output signal from the minimum phase filter 207 is provided to the electrical-acoustical output receiver 208 wherefrom the output signal is provided as sound.
Methods for synthesizing filter coefficients for a digital filter in order to adapt the digital filter to be of minimum phase and to provide a frequency dependent target gain |H(ω)| are known in the prior art. However, reference is now made to
In a first step, at least one input signal is analysed in order to provide a frequency dependent target gain |H(ω)|.
In a second step, the real cepstrum cx(n) of the desired frequency dependent target gain |H(ω)| is obtained by taking the inverse Fourier transformation (processing block 402) of the logarithm (processing block 401) of the frequency dependent target gain |H(ω)|. Generally, the relation between the complex cepstrum x(n) and the filter transfer function H(ω) is given by:

x(n) = F⁻¹[log(H(ω))]  (4)

and consequently the real cepstrum cx(n) is given by:

cx(n) = F⁻¹[log(|H(ω)|)].  (5)
In a third step a window function is applied by processing block 403 to the real cepstrum of the frequency dependent target gain |H(ω)|, whereby the complex cepstrum xmin(n) representing the desired minimum phase filter impulse response is provided:

xmin(n) = Imin(n)·cx(n)  (6)
Thus the window function Imin is the unique function that can reconstruct the minimum phase complex cepstrum from the real cepstrum representing the frequency dependent target gain.
The discrete and finite window function Imin is given as:

Imin(n) = δ(n) + δ(n − N/2) + 2·1[1 ≤ n ≤ N/2 − 1], for 0 ≤ n ≤ N − 1  (7)

i.e. Imin(n) equals one for n = 0 and for n = N/2, two for 0 < n < N/2, and zero for N/2 < n ≤ N − 1, wherein N is the length of the inverse Fourier transform used to provide the real cepstrum, N/2 is the Nyquist frequency, δ(n) is the Kronecker delta function, 1[·] is the indicator function and n is the cepstrum variable.
In a fourth step carried out by the processing block 404 a Fourier transformation is applied to the provided complex cepstrum xmin(n) representing the desired minimum phase filter impulse response and hereby providing a logarithmic filter transfer function that is minimum phase.
In a fifth step carried out by the processing block 405 a filter transfer function Hmin(ω) that is minimum phase is provided by applying a complex exponential function to the provided logarithmic filter transfer function.
In a sixth step carried out by the processing block 406 an inverse Fourier transformation is applied to the filter transfer function that is minimum phase and hereby the desired minimum phase filter impulse response hmin(n) is provided, whereby the filter coefficients that will make the digital filter minimum phase and provide the desired frequency dependent target can be determined.
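The six steps above (processing blocks 401 to 406) can be sketched end to end as follows; this is a minimal NumPy illustration, and the function name and choice of FFT length are assumptions:

```python
import numpy as np

def minimum_phase_fir(target_mag, n_taps=None):
    """Cepstrum-based synthesis of a minimum-phase impulse response from a
    sampled target magnitude |H(w)|: a length-N array on the full FFT grid,
    symmetric about the Nyquist bin and strictly positive."""
    N = len(target_mag)
    c = np.fft.ifft(np.log(target_mag)).real  # real cepstrum, eq. (5)
    w = np.zeros(N)                           # cepstral window, eq. (7)
    w[0] = 1.0
    w[N // 2] = 1.0
    w[1:N // 2] = 2.0
    x_min = w * c                             # minimum-phase cepstrum, eq. (6)
    H_min = np.exp(np.fft.fft(x_min))         # blocks 404-405
    h_min = np.fft.ifft(H_min).real           # block 406
    return h_min if n_taps is None else h_min[:n_taps]
```

On the FFT sampling grid the magnitude of the untruncated synthesized response matches the target, while all of the phase lag is the minimum-phase lag implied by that magnitude.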
In summary
It is generally noted that even though many features of the present invention are disclosed in embodiments comprising other features, this does not imply that these features by necessity need to be combined.
As one obvious example, the various beamformer configurations and corresponding methods are independent of the specific method for synthesizing filter coefficients for a digital filter in order to adapt the digital filter to be of minimum phase and to provide a desired frequency dependent target gain. As one alternative, the synthesis may be carried out based on Hilbert transforms.
As another example, the various beamformer configurations and corresponding methods may or may not comprise the feature of delaying at least one of the (at least) two microphone signals used for the beamforming. Furthermore, the delay may or may not be a fractional delay (i.e. the delay may also be equal to an integer of the sampling period).
Likewise, the various beamformer configurations and corresponding methods are generally independent of the specific type of directional signal that is used as input to the adaptive weighting of the directional signal and an omnidirectional signal. Thus the directional signal may be a bidirectional signal or may be a hyper-cardioid, just to mention two examples.
The various beamformer configurations and corresponding methods are also independent of whether the omnidirectional signal is derived from one or two (or more) microphone signals, and of whether at least one of said microphone signals has been delayed. One specific advantage of using only one microphone for the omnidirectional signal is that it enables a simple manner of providing improved wind noise suppression, by selecting the microphone that is less impacted by the wind noise.
Likewise said adaptive weighting may be carried out in a variety of different ways all of which will be obvious for a person skilled in the art. One obvious variation of the present embodiment is to carry out the adaptive weighting of the omnidirectional signal and the directional signal as given below:
iBF = omni + γ·dir  (8)
and according to another variation the gamma parameter may be implemented as a frequency dependent filter, which will add a delay that for some situations may be acceptable.
Additionally, the beamforming need not be based on a combination of an omnidirectional and a directional signal. As one alternative, two opposing cardioids (i.e. cardioids pointing in opposite directions) may be used, despite this solution generally being considered less advantageous, because the lack of difference in frequency response makes it difficult to provide low delay beamforming (i.e. broadband beamforming, since filter bank based beamforming introduces unacceptably high delays) wherein the beamformed signal has omnidirectional characteristics in the low frequency range and directional characteristics in the high frequency range.
However, it is noted that beamforming based on a combination of an omnidirectional and a directional signal may also include e.g. a system based on two opposing cardioids and an omnidirectional signal, wherein the two opposing cardioids are used to enable an adaptive control of the directional signal, such that a plurality of directional signal forms may be selected dependent on e.g. the present sound environment. Furthermore, it is noted that such a system may still be implemented using only 2 microphones.
Reference is therefore now given to
The hearing aid 500 comprises an additional combiner 501 that provides the same functionality as the combiner 203-a except in that combiner 501 provides a first directional signal with a different orientation than a second directional signal provided by the combiner 203-a. Additionally the hearing aid 500 comprises a multiplier 502 that enables weighting of the first directional signal provided by the combiner 501 relative to the second directional signal provided by the combiner 203-a based on an adaptive parameter β that is controlled by the DSP 209. Finally, the hearing aid 500 comprises a combiner 503 that combines the first directional signal and the second directional signal and hereby provides a third directional signal that subsequently is combined with an omni-signal as already discussed with reference to
Hereby the third directional signal as output from the combiner 503 can be expressed as given below in equation (9):
dir = (x_front(t) − x_rear(t + T_rear)) − β·(x_rear(t) − x_front(t + T_rear))  (9)
wherein x_front(t) and x_rear(t) represent the microphone signals from the microphones 201-b and 201-a respectively, and wherein T_rear represents the delay, corresponding to the acoustic microphone distance, which is applied by the delay units 202-a and 202-b.
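Equation (9) can be sketched with whole-sample delays; this is an illustrative simplification of the fractional delay T_rear, and the function names are invented for the sketch:

```python
import numpy as np

def delay(x, n):
    """Whole-sample delay standing in for the fractional delay T_rear."""
    out = np.zeros_like(x)
    out[n:] = x[:len(x) - n]
    return out

def third_directional(x_front, x_rear, d, beta):
    """Equation (9): a forward cardioid-like signal minus beta times the
    opposing cardioid, giving an adaptively steerable directional signal."""
    forward = x_front - delay(x_rear, d)   # first directional signal
    backward = x_rear - delay(x_front, d)  # opposing directional signal
    return forward - beta * backward
```

Setting beta to zero recovers the directional signal of equation (2), while nonzero beta steers the combined pattern by weighting in the opposing cardioid.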
According to variations of the
According to further variations of the
Finally, it is noted that according to variations of the
Other modifications and variations of the structures and procedures will be evident to those skilled in the art.
Priority claim: PA201901425, Dec 2019, DK (national).

International filing: PCT/EP2020/084651, filed 12/4/2020 (WO).