This application is the US National Stage of International Application No. PCT/EP2007/060652, filed Oct. 8, 2007 and claims the benefit thereof. The International Application claims the benefits of German application No. 10 2006 047 987.4 filed Oct. 10, 2006, both of the applications are incorporated by reference herein in their entirety.
The invention relates to a method for operating a hearing aid consisting of a single hearing device or two hearing devices. The invention also relates to a corresponding hearing aid or hearing device.
When one is listening to someone or something, disturbing noise or unwanted acoustic signals are present everywhere and interfere with the other person's voice or with a wanted acoustic signal. People with a hearing impairment are especially susceptible to such noise interference. Background conversations, acoustic disturbance from digital devices (cell phones), traffic or other environmental noise can make it very difficult for a hearing-impaired person to understand the speaker they want to listen to. Reducing the noise level in an acoustic signal, combined with automatic focusing on a wanted acoustic signal component, can significantly improve the efficiency of an electronic speech processor of the type used in modern hearing aids.
Hearing aids employing digital signal processing have recently been introduced. They contain one or more microphones, A/D converters, digital signal processors, and loudspeakers. The digital signal processors usually subdivide the incoming signals into a plurality of frequency bands. Within each of these bands, signal amplification and processing can be individually matched to the requirements of a particular hearing aid wearer in order to improve the intelligibility of a particular component. Also available in connection with digital signal processing are algorithms for minimizing feedback and interference noise, although these have significant disadvantages. A disadvantage of the algorithms currently employed for minimizing interference noise is, for example, that they achieve only limited improvement in hearing-aid acoustics when speech and background noise lie within the same frequency region, because they are incapable of distinguishing between spoken language and background noise (see also EP 1 017 253 A2).
Extracting one or more wanted acoustic signals from a plurality of overlapping acoustic signals is one of the most frequently occurring problems in acoustic signal processing. It is also known as the “cocktail party problem”, wherein all manner of different sounds such as music and conversations merge into an indefinable acoustic backdrop. Nevertheless, people generally do not find it difficult to hold a conversation in such a situation. It is therefore desirable for hearing aid wearers to be able to converse in just such situations in the same way as people without a hearing impairment.
In acoustic signal processing there exist spatial (e.g. directional microphone, beam forming), statistical (e.g. blind source separation), and hybrid methods which, by means of algorithms and otherwise, are able to separate out one or more sound sources from a plurality of simultaneously active sound sources. For example, by means of statistical signal processing of at least two microphone signals, blind source separation enables source signals to be separated without prior knowledge of their geometric arrangement. When applied to hearing aids, that method has advantages over conventional approaches involving a directional microphone. Using a BSS (Blind Source Separation) method of this kind it is inherently possible, with n microphones, to separate up to n sources, i.e. to generate n output signals.
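By way of illustration only (not part of the claimed subject matter), the statistical separation principle can be sketched as follows. The sketch assumes an instantaneous (non-convolutive) two-channel mixture and a FastICA-style iteration with a cubic nonlinearity; real hearing-aid acoustics involve convolutive mixing and would need a correspondingly more elaborate BSS algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two statistically independent, non-Gaussian "source" signals.
n = 20000
sources = np.vstack([rng.uniform(-1.0, 1.0, n),   # sub-Gaussian source
                     rng.laplace(size=n)])        # super-Gaussian source

# Unknown instantaneous mixing: each "microphone" hears a weighted sum.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
mixed = A @ sources

def blind_source_separation(x, iters=200, seed=1):
    """Minimal FastICA-style sketch (instantaneous mixing only):
    whiten the mixtures, then find an orthogonal unmixing matrix by
    maximizing non-Gaussianity with a cubic nonlinearity."""
    x = x - x.mean(axis=1, keepdims=True)
    # Whitening via eigendecomposition of the covariance matrix.
    d, e = np.linalg.eigh(np.cov(x))
    z = e @ np.diag(d ** -0.5) @ e.T @ x
    # Symmetric FastICA iteration.
    w = np.linalg.qr(np.random.default_rng(seed).standard_normal((2, 2)))[0]
    for _ in range(iters):
        wz = w @ z
        # Fixed-point update E[z g(w^T z)] - E[g'(w^T z)] w with g(u) = u^3.
        w_new = (wz ** 3) @ z.T / z.shape[1] - 3.0 * w
        u, _, vt = np.linalg.svd(w_new)   # symmetric decorrelation
        w = u @ vt
    return w @ z

recovered = blind_source_separation(mixed)
```

The separated outputs match the original sources only up to permutation and scaling, which is exactly why a subsequent selection stage, as discussed below for hearing aids, is needed.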
Known from the relevant literature are blind source separation methods wherein sound sources are separated by analyzing at least two microphone signals. A method and corresponding device of this kind are known from EP 1 017 253 A2, the scope of whose disclosure is expressly to be included in the present specification. Corresponding points of linkage between the invention and EP 1 017 253 A2 are indicated mainly at the end of the present specification.
In a specific application for blind source separation in hearing aids, this requires communication between two hearing devices (analysis of at least two microphone signals (right/left)) and preferably binaural evaluation of the signals of the two hearing devices which is preferably performed wirelessly. Alternative couplings of the two hearing devices are also possible in such an application. Binaural evaluation of this kind with stereo signals being provided for a hearing aid wearer is taught in EP 1 655 998 A2, the scope of whose disclosure is likewise to be included in the present specification. Corresponding points of linkage between the invention and EP 1 655 998 A2 are indicated at the end of the present specification.
Directional microphone control in the context of blind source separation is subject to ambiguity once a plurality of competing wanted sources, e.g. speakers, are simultaneously present. While blind source separation basically allows the different sources to be separated, provided they are spatially separate, the potential benefit of a directional microphone is reduced by said ambiguity problems, although a directional microphone can be of great benefit in improving speech intelligibility specifically in such scenarios.
The hearing aid or more specifically the mathematical algorithms for blind source separation is/are basically faced with the dilemma of having to decide which of the signals produced by blind source separation can be most advantageously forwarded to the algorithm user, i.e. the hearing aid wearer. This is basically an unresolvable problem for the hearing aid because the choice of wanted acoustic source will depend directly on the hearing aid wearer's momentary intention and hence cannot be available to a selection algorithm as an input variable. The choice made by said algorithm must accordingly be based on assumptions about the listener's likely intention.
The prior art is based on the assumption that the hearing aid wearer prefers an acoustic signal from a 0° direction, i.e. from the direction in which the hearing aid wearer is looking. This is realistic insofar as, in an acoustically difficult situation, the hearing aid wearer would look at his/her current interlocutor to obtain further cues (e.g. lip movements) for increasing said interlocutor's speech intelligibility. This means that the hearing aid wearer is compelled to look at his/her interlocutor so that the directional microphone will produce increased speech intelligibility. This is annoying particularly when the hearing aid wearer wants to converse with just one person, i.e. is not involved in communicating with a plurality of speakers, and does not always wish/have to look at his/her interlocutor.
However, the conventional assumption that the hearing aid wearer's wanted acoustic source is in his/her 0° viewing direction is incorrect for many cases; namely, for example, for the case that the hearing aid wearer is standing or sitting next to his/her interlocutor and other people, e.g. at the same table, are holding a shared conversation with him/her. With a preset acoustic source in 0° viewing direction, the hearing aid wearer would constantly have to turn his/her head from side to side in order to follow his/her conversation partners.
Furthermore, there is to date no known technical method for making a “correct” choice of acoustic source, or more specifically one preferred by the hearing aid wearer, after source separation has taken place.
On the assumption that, in a communication situation, e.g. sitting at a table, a person in a 0° viewing direction of a hearing aid wearer is not continually the preferred acoustic source, a more flexible acoustic signal selection method can be formulated that is not limited by a geometric acoustic source distribution. An object of the invention is therefore to specify an improved method for operating a hearing aid, and an improved hearing aid. In particular, it is an object of the invention to determine which output signal resulting from source separation, in particular blind source separation, is acoustically fed to the hearing aid wearer. It is therefore an object of the invention to discover which source is, with a high degree of probability, a preferred acoustic source for the hearing aid wearer.
A choice of wanted acoustic source is inventively made such that the wanted speaker, i.e. the wanted acoustic source, is always the one whose distance from a microphone (system) of the hearing aid is preferably the shortest of all the distances of the detected speakers, i.e. acoustic sources. This also inventively applies to a plurality of speakers or acoustic sources whose distances from the microphone (system) are short compared to other speakers or acoustic sources.
A method for operating a hearing aid is inventively provided wherein, for tracking and selectively amplifying an acoustic source, a signal processing section of the hearing aid determines a distance from the acoustic source to the hearing aid wearer for preferably all the electrical acoustic signals available to said hearing aid wearer and assigns it to the corresponding acoustic signal. The acoustic source or sources with short or the shortest distances with respect to the hearing aid wearer are tracked by the signal processing section and particularly taken into account in the hearing aid's acoustic output signal.
In addition, a hearing aid is inventively provided wherein a distance of an acoustic source from the hearing aid wearer can be determined by an acoustic module (signal processing section) of the hearing aid and can then be assigned to electrical acoustic signals. The acoustic module then selects at least one electrical acoustic signal, said signal representing a short spatial distance from the assigned acoustic source to the hearing aid wearer. This electrical acoustic signal can be taken into account in particular in the hearing aid's output sound.
The electrical acoustic signals are analyzed by the hearing aid in particular for features which—individually or in combination—are indicative of the distance from the acoustic source to the microphone (system) or the hearing aid wearer. This preferably takes place after applying a blind source separation algorithm.
It is inventively possible, depending on the number of microphones in the hearing aid, to select one or more (speech) acoustic sources present in the ambient sound and emphasize it/them in the hearing aid's output sound, it being possible to flexibly adjust a volume of the acoustic source or sources in the hearing aid's output sound.
In a preferred embodiment of the invention, the signal processing section has an unmixer module that preferably operates as a blind source separation device for separating the acoustic sources within the ambient sound. The signal processing section also has a post-processor module which, when an acoustic source is detected in the vicinity (local acoustic source), sets up a corresponding “local source” operating mode in the hearing aid. The signal processing section can also have a pre-processor module—the electrical output signals of which are the unmixer module's electrical input signals—which standardizes and conditions electrical acoustic signals originating from microphones of the hearing aid. In respect of the pre-processor module and unmixer module, reference is made to EP 1 017 253 A2 paragraphs [0008] to [0023].
In a preferred embodiment of the invention, the hearing aid, or more specifically the signal processing section or the post-processor module, performs distance analysis of the electrical acoustic signals such that, for each of the electrical acoustic signals, a distance of the corresponding acoustic source from the hearing aid is determined. Mainly the electrical acoustic signal or signals with a short source distance are then output by the signal processing section, or more specifically the post-processor module, to a receiver, or more specifically loudspeaker, of the hearing aid, which converts the electrical acoustic signals into audible sound.
Preferred acoustic sources are speech or more specifically speaker sources, the probability of automatically selecting the “correct” speech or more specifically speaker source, i.e. the one currently wanted by the hearing aid wearer, being increased—at least for many conversation situations—by selecting the speaker with the shortest horizontal distance from the hearing aid wearer's ear.
According to the invention, the electrical acoustic signals to be processed in the hearing aid, in particular the electrical acoustic signals separated by source separation, are examined for information contained therein that is indicative of a distance of the acoustic source from the hearing aid wearer. A differentiation can be made here between a horizontal distance and a vertical distance, an excessively large vertical distance indicating a non-preferred source. The items of distance information contained in an individual electrical acoustic signal are processed individually, in combination, or in their totality such that a spatial distance of the acoustic source represented thereby can be determined.
In a preferred embodiment of the invention it is advantageous if the corresponding electrical acoustic signal is examined to ascertain whether it contains spoken language, it being particularly advantageous here if it is a known speaker, i.e. a speaker known to the hearing aid, the speech profile of which has been stored with corresponding parameters inside the hearing aid.
Additional preferred embodiments of the invention will emerge from the other dependent claims.
The invention will now be explained in greater detail with the aid of exemplary embodiments and with reference to the accompanying drawings in which:
Within the scope of the invention (
The following description also discusses “tracking” of an electrical acoustic signal by a hearing aid wearer's hearing aid. This is to be understood as a selection, made by the hearing aid, or more specifically by a signal processing section of the hearing aid, or more specifically by a post-processor module of the signal processing section, of one or more electrical speech signals that are electrically or electronically singled out by the hearing aid from the other acoustic sources in the ambient sound and reproduced in an amplified manner compared to those other acoustic sources, i.e. in a manner experienced as louder by the hearing aid wearer. Preferably, no account is taken by the hearing aid of a position of the hearing aid wearer in space, in particular a position of the hearing aid in space, i.e. a direction in which the hearing aid wearer is looking, while the electrical acoustic signal is being tracked.
In the prior art, the electrical acoustic signals 202, 212 are mainly conditioned in three stages. In a first stage, the electrical acoustic signals 202, 212 are pre-processed in a pre-processor module 310 to improve the directional characteristic, starting with standardizing the original signals (equalizing the signal strength). In a second stage, blind source separation takes place in a BSS module 320, the output signals of the pre-processor module 310 undergoing an unmixing process. The output signals of the BSS module 320 are then post-processed in a post-processor module 330 in order to generate a desired electrical output signal 332, which serves as an input signal for a receiver 400, or more specifically a loudspeaker 400, of the hearing aid 1, so that a sound generated thereby is delivered to the hearing aid wearer. According to EP 1 017 253 A2, the first and third stages, i.e. the pre-processor module 310 and the post-processor module 330, are optional.
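Purely as an illustration (not part of the claimed subject matter), the three-stage structure described above can be sketched as a chain of functions. The unmixing stage is a placeholder here; a real BSS module 320 would output one channel per separated acoustic source.

```python
import numpy as np

def pre_process(mic_signals):
    """Stage 1 (optional, cf. pre-processor module 310): standardize
    the raw microphone signals so all channels have equal strength."""
    centered = mic_signals - mic_signals.mean(axis=1, keepdims=True)
    return centered / centered.std(axis=1, keepdims=True)

def unmix(signals):
    """Stage 2 (cf. BSS module 320): blind source separation.
    Placeholder only; a real BSS algorithm would return one output
    channel per separated acoustic source."""
    return signals

def post_process(separated, select):
    """Stage 3 (optional, cf. post-processor module 330): decide which
    separated signal is fed to the receiver as the output signal."""
    return separated[select]

t = np.linspace(0.0, 1.0, 8000)
mics = np.vstack([np.sin(2 * np.pi * 440 * t),
                  0.3 * np.sin(2 * np.pi * 110 * t)])
output = post_process(unmix(pre_process(mics)), select=0)
```

The selection index in stage 3 is precisely what the distance analysis of the invention is intended to supply.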
It shall be assumed in the following that there are two mutually independent acoustic sources 102, 104, i.e. signal sources 102, 104, in the ambient sound 100. One of said acoustic sources 102 is a speech source 102 disposed close to the hearing aid wearer, also referred to as a local acoustic source 102. The other acoustic source 104 shall in this example likewise be a speech source 104, but one that is further away from the hearing aid wearer than the speech source 102. The speech source 102 is to be selected and tracked by the hearing aid 1 or more specifically the signal processing section 300 and is to be a main acoustic component of the receiver 400 so that an output sound 402 of the loudspeaker 400 mainly contains said signal (102).
The two microphones 200, 210 of the hearing aid 1 each pick up a mixture of the two acoustic signals 102, 104—indicated by the dotted arrow (representing the preferred acoustic signal 102) and by the continuous arrow (representing the non-preferred acoustic signal 104)—and deliver them either to the pre-processor module 310 or immediately to the BSS module 320 as electrical input signals. The two microphones 200, 210 can be arranged in any manner. They can be located in a single hearing device 1 of the hearing aid 1 or distributed over both hearing devices 1. It is also possible, for instance, to provide one or both microphones 200, 210 outside the hearing aid 1, e.g. on a collar or in a pin, so long as it is still possible to communicate with the hearing aid 1. This also means that the electrical input signals of the BSS module 320 do not necessarily have to originate from a single hearing device 1 of the hearing aid 1. It is, of course, possible to implement more than two microphones 200, 210 for a hearing aid 1. A hearing aid 1 consisting of two hearing devices 1 preferably has a total of four or six microphones.
The pre-processor module 310 conditions the data for the BSS module 320 which, depending on its capability, for its part forms two separate output signals from its two, in each case mixed input signals, each of said output signals representing one of the two acoustic signals 102, 104. The two separate output signals of the BSS module 320 are input signals for the post-processor module 330 in which it is then decided which of the two acoustic signals 102, 104 will be fed out to the loudspeaker 400 as an electrical output signal 332.
For this purpose (see also the figures), the electrical microphone signals x1(t), x2(t), xn(t) are input signals for the BSS module 320, which separates the acoustic signals respectively contained in the electrical microphone signals x1(t), x2(t), xn(t) according to acoustic sources s1(t), s2(t), sn(t) and feeds them out as electrical output signals s′1(t), s′2(t), s′n(t) to the post-processor module 330.
It shall be assumed in the following that there are two speech sources s1(t), sn(t) in the vicinity of the hearing aid wearer, so that there is a high degree of probability that the hearing aid wearer is in a conversation situation with said two speech sources s1(t), sn(t). This is also indicated in the figures.
Contained in the electrical acoustic signals s′1(t), s′2(t), s′n(t) generated by the BSS module 320, which correspond to the speech or more specifically acoustic sources s1(t), s2(t), sn(t), is distance information y1(t), y2(t), yn(t) that indicates how far the respective speech source s1(t), s2(t), sn(t) is from the hearing aid 1 or more specifically the hearing aid wearer. This information is read out in the form of a distance analysis in the post-processor module 330, which assigns distance information y1(t), y2(t), yn(t) of the acoustic source s1(t), s2(t), sn(t) to each electrical speech signal s′1(t), s′2(t), s′n(t) and then selects the electrical acoustic signal or signals s′1(t), s′n(t) for which it is probable, on the basis of the distance information, that the hearing aid wearer is in conversation with the corresponding speech sources s1(t), sn(t). This is also illustrated in the figures.
The post-processor module 330 now delivers the two electrical acoustic signals s′1(t), s′n(t) to the loudspeaker 400 in an amplified manner. It is also conceivable, for example, for the acoustic source s2(t) to be a noise source and therefore to be ignored by the post-processor module 330, this being ascertainable by a corresponding module or more specifically a corresponding device in the post-processor module 330.
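As an illustrative sketch of this selection step (not the claimed implementation), the “local source” decision can be expressed as a thresholded filter with a nearest-source fallback. The threshold value and the unit of metres are assumptions introduced for the example.

```python
def select_local_sources(signals, distances, max_distance=1.5):
    """Sketch of the 'local source' decision: keep every separated
    signal whose estimated source distance (metres, an assumed unit)
    lies below an illustrative threshold; if none qualifies, fall back
    to the single nearest source."""
    local = [s for s, d in zip(signals, distances) if d <= max_distance]
    if not local:
        nearest = min(range(len(distances)), key=distances.__getitem__)
        local = [signals[nearest]]
    return local
```

With estimated distances of 0.8 m and 1.2 m for two conversation partners and 4 m for a background talker, the first two separated signals would be kept and amplified, matching the scenario described above.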
There are a large number of possibilities for ascertaining how far an acoustic source 102, 104; s1(t), s2(t), sn(t) is away from the hearing aid 1 or more specifically the hearing aid wearer, namely by evaluating the electrical representatives 322, 324; s′1(t), s′2(t), s′n(t) of the acoustic sources 102, 104; s1(t), s2(t), sn(t) accordingly.
For example, a ratio of a direct sound component to an echo component of the corresponding acoustic source 102, 104; s1(t), s2(t), sn(t) or more specifically the corresponding electrical signal 322, 324; s′1(t), s′2(t), s′n(t) can give an indication of the distance between the acoustic source 102, 104; s1(t), s2(t), sn(t) and the hearing aid wearer. That is to say, in the individual case, the larger the ratio, the closer the acoustic source 102, 104; s1(t), s2(t), sn(t) is to the hearing aid wearer. For this purpose, additional states which precede the decision as to local acoustic source 102; s1(t), sn(t) or other acoustic source 104; s2(t) can be analyzed within the source separation process. This is indicated by the dashed arrow from the BSS module 320 to the distance analysis section of the post-processor module 330.
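A minimal sketch of this criterion (illustrative only): given a measured or estimated impulse response, split it shortly after the main peak and compare direct-sound energy with echo energy. The ~5 ms split point is an assumed convention, not taken from the patent.

```python
import numpy as np

def direct_to_reverb_ratio(impulse_response, fs, direct_ms=5.0):
    """Sketch: ratio of direct-sound energy to echo energy, split at
    an assumed ~5 ms after the main peak of the impulse response."""
    h = np.asarray(impulse_response, dtype=float)
    cut = int(np.argmax(np.abs(h))) + int(fs * direct_ms / 1000.0)
    direct = np.sum(h[:cut] ** 2)
    reverb = np.sum(h[cut:] ** 2)
    return direct / max(reverb, 1e-12)

# Synthetic example: the near source has a strong direct path relative
# to the (identical) reverberant tail of the room.
fs = 16000
rng = np.random.default_rng(0)
tail = 0.02 * np.exp(-np.arange(fs // 2) / 4000.0) * rng.standard_normal(fs // 2)
near = tail.copy(); near[0] += 1.0   # strong direct component
far = tail.copy(); far[0] += 0.2     # weak direct component
```

The larger the ratio, the closer the source, in line with the criterion described above.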
In addition, a level criterion can indicate how far an acoustic source 102, 104; s1(t), s2(t), sn(t) is away from the hearing aid 1, i.e. the louder an acoustic source 102, 104; s1(t), s2(t), sn(t), the greater the probability that it is near the microphones 200, 210 of the hearing aid 1.
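The level criterion can be sketched as a simple RMS ranking (illustrative only; a loud but distant source would of course fool this cue on its own, which is why the criteria are meant to be combined).

```python
import numpy as np

def rank_by_level(separated):
    """Level-criterion sketch: order separated signals by RMS level,
    loudest first, as a rough proximity proxy (free-field 1/r
    attenuation assumed)."""
    rms = [float(np.sqrt(np.mean(np.square(s)))) for s in separated]
    return sorted(range(len(separated)), key=lambda i: -rms[i])

t = np.linspace(0.0, 1.0, 4000)
near_sig = 1.0 * np.sin(2 * np.pi * 200 * t)   # presumed nearby talker
far_sig = 0.2 * np.sin(2 * np.pi * 300 * t)    # presumed distant talker
order = rank_by_level([far_sig, near_sig])
```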
In addition, inferences can be drawn about the distance of an acoustic source 102, 104; s1(t), s2(t), sn(t) on the basis of a head shadow effect. This is due to differences in sound incident on the left and right ear or more specifically a left and right hearing device 1 of the hearing aid 1.
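The head-shadow cue can be sketched as an interaural level difference (ILD) between the two hearing-device channels; the broadband dB measure used here is an assumption introduced for the example.

```python
import numpy as np

def interaural_level_difference_db(left, right):
    """Head-shadow sketch: broadband level difference (dB) between the
    left and right hearing-device signals; a large |ILD| suggests a
    lateral source close to one ear."""
    def rms(x):
        return np.sqrt(np.mean(np.square(x))) + 1e-12
    return 20.0 * np.log10(rms(left) / rms(right))

t = np.linspace(0.0, 0.5, 8000)
left = 1.0 * np.sin(2 * np.pi * 500 * t)    # source near the left ear
right = 0.5 * np.sin(2 * np.pi * 500 * t)   # shadowed right ear
ild = interaural_level_difference_db(left, right)
```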
Source “punctiformity” likewise contains distance information. There exist methods allowing inferences to be drawn as to how “punctiform” (in contrast to “diffuse”) the respective acoustic source 102, 104; s1(t), s2(t), sn(t) is. It generally holds true that the more punctiform the acoustic source, the closer it is to the microphone system of the hearing aid 1.
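One possible punctiformity measure, offered purely as an assumed illustration, is the magnitude of the zero-lag normalized cross-correlation between two microphone channels: a point-like source drives both channels coherently, whereas a diffuse field does not.

```python
import numpy as np

def punctiformity(ch_a, ch_b):
    """Sketch: zero-lag normalized cross-correlation magnitude between
    two microphone channels as a crude punctiform-vs-diffuse score
    (near 1: point-like source, near 0: diffuse field). The measure
    itself is an assumption, not taken from the patent."""
    a = ch_a - np.mean(ch_a)
    b = ch_b - np.mean(ch_b)
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(abs(np.dot(a, b)) / denom)

rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 300 * np.linspace(0.0, 1.0, 16000))
point_like = punctiformity(tone, 0.8 * tone)          # one source, both mics
diffuse = punctiformity(rng.standard_normal(16000),   # independent noise
                        rng.standard_normal(16000))
```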
In addition, indications of a distance of the respective acoustic source 102, 104; s1(t), s2(t), sn(t) from the hearing aid 1 can be determined via time-related signal features. In other words, from the shape of the time signal, e.g. the edge steepness of an envelope curve, inferences can be drawn as to the distance away of the corresponding acoustic source 102, 104; s1(t), s2(t), sn(t).
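The edge-steepness cue can be sketched as follows (illustrative only): rectify the signal, average it in short blocks, and take the steepest rise of the resulting envelope. The 10 ms block length is an assumed parameter.

```python
import numpy as np

def envelope_steepness(signal, fs, win_ms=10.0):
    """Sketch: steepest rise of a rectified, block-averaged envelope.
    Onsets of nearby sources are smeared less by reverberation, so a
    steeper envelope edge hints at a shorter distance."""
    win = max(1, int(fs * win_ms / 1000.0))
    usable = len(signal) // win * win
    env = np.abs(np.asarray(signal, dtype=float))[:usable]
    env = env.reshape(-1, win).mean(axis=1)
    return float(np.max(np.diff(env), initial=0.0))

fs = 16000
sharp = np.concatenate([np.zeros(1600), np.ones(1600)])   # abrupt onset
smeared = np.concatenate([np.zeros(800),
                          np.linspace(0.0, 1.0, 1600),    # reverberant-style ramp
                          np.ones(800)])
```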
Moreover, it is self-evidently also possible, by means of a plurality of microphones 200, 210, to determine the distance of the hearing aid wearer from an acoustic source 102, 104; s1(t), s2(t), sn(t) e.g. by triangulation.
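As an assumed illustration of such triangulation: with three microphones at known positions, time differences of arrival (TDOA) relative to one reference microphone constrain the source position, which a brute-force least-squares search can then recover. The geometry, grid extent, and noiseless TDOAs below are all assumptions introduced for the example; in practice the TDOAs would be estimated by cross-correlating the channels.

```python
import numpy as np

# Hypothetical 2-D geometry (metres): three microphones, e.g. two in
# one hearing device and one in the other, and a talker at ~1 m.
mics = np.array([[0.00, 0.00],
                 [0.16, 0.00],
                 [0.00, 0.16]])
true_source = np.array([0.9, 0.5])
c = 343.0  # speed of sound, m/s

# "Measured" time differences of arrival relative to microphone 0.
dists = np.linalg.norm(mics - true_source, axis=1)
tdoa = (dists - dists[0]) / c

def locate(tdoa, mics, c=343.0, extent=1.5, step=0.02):
    """Triangulation sketch: brute-force grid search for the position
    whose predicted TDOAs best match the measured ones."""
    grid = np.arange(-extent, extent, step)
    best, best_err = None, np.inf
    for x in grid:
        for y in grid:
            p = np.array([x, y])
            d = np.linalg.norm(mics - p, axis=1)
            err = np.sum(((d - d[0]) / c - tdoa) ** 2)
            if err < best_err:
                best, best_err = p, err
    return best

estimate = locate(tdoa, mics)
source_distance = np.linalg.norm(estimate - mics[0])
```

With the small microphone aperture of a hearing aid, range estimates of this kind are ill-conditioned in practice, which is consistent with the patent's approach of combining several distance cues.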
In the second embodiment of the invention, it is self-evidently also possible to reproduce a single speech acoustic source or three or more speech acoustic sources s1(t), sn(t) in an amplified manner.
According to the invention, distance analysis can always be running in the background in the post-processor module 330 in the hearing aid 1 and be initiated when a suitable electrical speech signal 322; s′1(t), s′n(t) occurs. It is also possible for the inventive distance analysis to be invoked by the hearing aid wearer, i.e. establishment of “local source” mode of the hearing aid 1 can be initiated by an input device that can be called up or actuated by the hearing aid wearer. Here, the input device can be a control on the hearing aid 1 and/or a control on a remote control of the hearing aid 1, e.g. a button or switch (not shown in the Fig.). It is also possible for the input device to be implemented as a voice control unit with an assigned speaker recognition module which can be matched to the hearing aid wearer's voice, the input device being implemented at least partly in the hearing aid 1 and/or at least partly in a remote control of the hearing aid 1.
Moreover, it is possible by means of the hearing aid 1 to obtain additional information as to which of the electrical speech signals 322; s′1(t), s′n(t) are preferably reproduced to the hearing aid wearer as output sound 402, s″(t). This can be the angle of incidence of the corresponding acoustic source 102, 104; s1(t), s2(t), sn(t) on the hearing aid 1, particular angles of incidence being preferred. For example, the 0 to ±10° viewing direction (interlocutor sitting directly opposite) and/or a ±70 to ±100° lateral direction (interlocutor right/left) and/or a ±20 to ±45° viewing direction (interlocutor sitting obliquely opposite) of the hearing aid wearer may be preferred. It is also possible to weight the electrical speech signals 322; s′1(t), s′n(t) according to whether one of the electrical speech signals 322; s′1(t), s′n(t) is a predominant and/or a comparatively loud electrical speech signal 322; s′1(t), s′n(t) and/or contains (a known) spoken language.
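The angular preference described above can be sketched as a simple weighting function over the named sectors; the numeric weight values are illustrative assumptions, not values from the patent.

```python
def angle_weight(angle_deg):
    """Sketch of the angle-of-incidence weighting: the preferred
    sectors (0 to ±10°, ±20 to ±45°, ±70 to ±100°) receive higher
    weights; the numeric weights are illustrative assumptions."""
    a = abs(angle_deg)
    if a <= 10:
        return 1.0        # interlocutor directly opposite
    if 20 <= a <= 45:
        return 0.8        # interlocutor obliquely opposite
    if 70 <= a <= 100:
        return 0.7        # interlocutor right/left of the wearer
    return 0.3            # non-preferred direction
```

Such a weight could be multiplied with the distance-based score of each separated signal before the final selection in the post-processor module 330.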
According to the invention it is not necessary for distance analysis of the electrical acoustic signals 322; 324; s′1(t), s′2(t), s′n(t) to be performed inside the post-processor module 330. It is likewise possible, e.g. for reasons of speed, for distance analysis to be carried out by another module of the hearing aid 1 and only the selecting of the electrical acoustic signal(s) 322, 324; s′1(t), s′2(t), s′n(t) with the shortest distance information to be left to the post-processor module 330. For such an embodiment of the invention, said other module of the hearing aid 1 shall by definition be incorporated in the post-processor module 330, i.e. in an embodiment of this kind the post-processor module 330 contains this other module.
The present specification relates inter alia to a post-processor module 20 as in EP 1 017 253 A2 (the reference numerals are those given in EP 1 017 253 A2), in which module one or more speakers/acoustic sources is/are selected for an electrical output signal of the post-processor module 20 by means of distance analysis and reproduced therein in at least amplified form, as to which see also paragraph [0025] in EP 1 017 253 A2. In the invention, the pre-processor module and the BSS module can also be structured in the same way as the pre-processor 16 and the unmixer 18 in EP 1 017 253 A2, as to which see in particular paragraphs [0008] to [0024] in EP 1 017 253 A2.
The invention also links to EP 1 655 998 A2 in order to provide stereo speech signals or rather enable a hearing aid wearer to be supplied with speech in a binaural acoustic manner, the invention (notation according to EP 1 655 998 A2) preferably being connected downstream of the output signals z1, z2 for the right(k) and left(k) respectively of a second filter device in EP 1 655 998 A2 (see FIGS. 2 and 3) for accentuating/amplifying the corresponding acoustic source. In addition, it is also possible to apply the invention in the case of EP 1 655 998 A2 to the effect that it will come into play after the blind source separation disclosed therein and ahead of the second filter device, i.e. selection of a signal y1(k), y2(k) inventively taking place (see FIG. 3 in EP 1 655 998 A2).
Number | Date | Country | Kind
---|---|---|---
10 2006 047 987 | Oct 2006 | DE | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/EP2007/060652 | Oct. 8, 2007 | WO | 00 | Apr. 7, 2009

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2008/043731 | Apr. 17, 2008 | WO | A

Number | Name | Date | Kind
---|---|---|---
6430528 | Jourjine et al. | Aug 2002 | B1
6947570 | Maisano | Sep 2005 | B2
20050265563 | Maisano | Dec 2005 | A1
20070257840 | Wang et al. | Nov 2007 | A1

Number | Date | Country
---|---|---
1017253 | Jul 2000 | EP
1463378 | Sep 2004 | EP
1655998 | May 2006 | EP
1670285 | Jun 2006 | EP
9033329 | Feb 1997 | JP
2000066698 | Mar 2000 | JP
WO 0187011 | Nov 2001 | WO
2008043731 | Apr 2008 | WO

Number | Date | Country
---|---|---
20100034406 A1 | Feb 2010 | US