Hearing aids are customized for the user's specific type of hearing loss and are typically programmed to optimize each user's audible range and speech intelligibility. Many different prescription models may be used for this purpose (H. Dillon, Hearing Aids, Sydney: Boomerang Press, 2001), the most common ones being based on hearing thresholds and discomfort levels. Each prescription method is based on a different set of assumptions and operates differently to find the optimum gain-frequency response of the device for a given user's hearing profile. In practice, the optimum gain response depends on many other factors, such as the type of environment, the listening situation and the personal preferences of the user. The optimum adjustment of other components of the hearing aid, such as noise reduction algorithms and directional microphones, also depends on the environment, the specific listening situation and user preferences. It is therefore not possible to optimize the listening experience for all environments using a fixed set of parameters for the hearing aid. It is widely agreed that a hearing aid that changes its algorithm or features for different environments would significantly increase the user's satisfaction (D. Fabry and P. Stypulkowski, Evaluation of Fitting Procedures for Multiple-memory Programmable Hearing Aids, paper presented at the annual meeting of the American Academy of Audiology, 1992). Currently, this adaptability typically requires user interaction, namely the manual switching of listening modes.
Known classification systems and methods for hearing aids are based on a set of fixed acoustical situations (“classes”) that are described by the values of certain features and detected by a classification unit. The detected classes 10, 11 and 12 are mapped to respective parameter settings 13, 14 and 15 in the hearing aid, which may likewise be fixed (see the drawings).
New hearing aids are now being developed with automatic environmental classification systems, which are designed to detect the current environment and adjust the hearing aid parameters accordingly. This type of classification typically uses supervised learning with predefined classes guiding the learning process, since environments can often be categorized by their nature (speech, noise, music, etc.). A drawback is that the classes must be specified a priori and may or may not be relevant to the particular user. There is also little scope for adapting the system or the class set after training, or for different individuals.
EP-A-1 395 080 discloses a method for setting filters for audio processing (beam forming) wherein a clustering algorithm is used to distinguish acoustic scenarios (different noise situations). The acoustic scenario clustering unit monitors the acoustic scenario. As soon as it changes and a new acoustic scenario is detected, a learning phase is initiated and the new scenario is determined with the help of clustering training.
EP-A-1 670 285 shows a method to adjust parameters of a transfer function of a hearing aid having a feature extractor and a classifier.
EP-A-1 404 152 discloses a hearing aid device that adapts itself to the hearing aid user by means of a continuous weighting function that passes through various data points which respectively represent individual weightings of predetermined acoustic situations. New classes are added, but unused ones are not deleted.
It is an object to provide a hearing aid system and method that does not rely on unchanging, fixed classes and that can learn the characteristics and preferences of a specific user.
A method is provided for operating a hearing aid in a hearing aid system wherein the hearing aid learns continuously for the particular user. A sound environment classification system is provided for tracking and defining sound environment classes relevant to the user. In an ongoing learning process, the classes are redefined based on new environments to which the hearing aid is subjected by the user.
For the purposes of promoting an understanding of the principles of the invention, reference will now be made to the preferred embodiment/best mode illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, and that such alterations and further modifications in the illustrated device, and such further applications of the principles of the invention as illustrated therein, as would normally occur to one skilled in the art to which the invention relates are included.
An adaptive environmental classification system is provided in which classes can be split and merged based on changes in the environment that the hearing aid encounters. This results in the creation of classes specifically relevant to the user. This process continues to develop during the use of the hearing aid and therefore adapts to evolving needs of the user.
Overall System
Buffer
The buffer 23 comprises an array that stores past feature vectors. Typically, the buffer 23 can be 15-60 seconds long, depending on the rate at which the adaptive classifier 22 needs to be updated. This allows the adaptation of the classifier 22 to run at a much slower rate than the ongoing classification of input feature vectors. The buffer processing stage 23A calculates a single feature vector to represent all of the buffered data, allowing a more accurate assessment of the acoustical characteristics of the current environment for the purpose of adapting the classifier 22.
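The buffer and its processing stage can be sketched as follows. This is a minimal illustration, not the claimed implementation: the class name `FeatureBuffer`, the frame rate, the feature dimension and the use of the mean as the single representative vector are all assumptions made for the example.

```python
from collections import deque

import numpy as np


class FeatureBuffer:
    """Fixed-length store of past feature vectors (e.g. 15-60 s of frames)."""

    def __init__(self, seconds=30, frame_rate=10):
        # Oldest frames drop out automatically once the buffer is full,
        # so the buffer always covers the most recent time window.
        self.frames = deque(maxlen=seconds * frame_rate)

    def push(self, feature_vector):
        """Append one newly classified feature vector."""
        self.frames.append(np.asarray(feature_vector, dtype=float))

    def summary(self):
        """Single feature vector representing all buffered data (here: the mean).

        The classifier adaptation reads this summary at a much slower
        rate than the frame-by-frame classification runs.
        """
        return np.mean(self.frames, axis=0)
```

Because adaptation only consumes `summary()`, it can run orders of magnitude less often than the per-frame classification, as the passage above describes.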
Adaptive Classifier
The adaptive classification system is divided into two phases. The first phase, the initial classification system, is the starting point for the adaptive classification system when the hearing aid is first used. The initial classification system organizes the environments into four classes: speech, speech in noise, noise, and music. This will allow the user to take home a working automatic classification hearing aid. Since the system is being trained to recognize specific initial classes, a supervised learning algorithm is appropriate.
The second phase is the adaptive learning phase which begins as soon as the user turns the hearing aid on following the fitting process, and modifies the initial classification system to adapt to the user-specific environments. The algorithm continuously monitors changes in the feature vectors. As the user enters new and different environments the algorithm continuously checks to determine if a class should split and/or if two classes should merge together. In the case where a new cluster of feature vectors is detected and the algorithm decides to split, an unsupervised learning algorithm is used since there is no a priori knowledge about the new class.
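The first (supervised) phase described above can be sketched with a simple nearest-centroid classifier trained on labeled examples of the four initial classes. The function names and the centroid-based representation are assumptions for illustration only; any supervised learner fits the description.

```python
import numpy as np

# The four predefined initial classes from the fitting process.
CLASSES = ["speech", "speech in noise", "noise", "music"]


def train_initial(labeled_examples):
    """Supervised phase: compute one centroid per predefined class.

    `labeled_examples` is a list of (feature_vector, class_name) pairs
    covering all four initial classes.
    """
    return {
        c: np.mean([x for x, y in labeled_examples if y == c], axis=0)
        for c in CLASSES
    }


def classify(centroids, feature_vector):
    """Assign a feature vector to the nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(feature_vector - centroids[c]))
```

From this starting point, the second (unsupervised) phase then monitors incoming feature vectors and splits or merges these classes as new environments appear.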
Test Results
The following example illustrates the general behavior of the adaptive classifier and the process of splitting and merging environment classes. The initial classifier is trained with two ideal classes, meaning the classes have very well-defined clusters in the feature space, as seen in the drawings.
Splitting
While introducing the test data, a split criterion is continuously monitored and checked until enough data lies outside of the cluster area. This sets a flag that then triggers the algorithm to split the class 27 or 28 (see the drawings).
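One simple form of such a split criterion is a check on the fraction of recent buffered vectors that fall outside the current cluster area. The particular criterion below (Euclidean distance against a cluster radius, with a fraction threshold) is only an assumed example; the document does not specify the exact test.

```python
import numpy as np


def split_flag(recent_points, centroid, radius, min_fraction=0.3):
    """Set the split flag when enough recent data lies outside the cluster.

    recent_points : recent feature vectors from the buffer
    centroid      : center of the existing class's cluster
    radius        : extent of the cluster area (assumed spherical here)
    min_fraction  : fraction of outliers needed to trigger a split
    """
    distances = np.linalg.norm(np.asarray(recent_points) - centroid, axis=1)
    return np.mean(distances > radius) >= min_fraction
```

Once the flag is set, the unsupervised learning step forms a new class from the outlying data, as described above.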
Merging
The fourth cluster is then detected and the splitting process occurs, as shown in the drawings.
According to the preferred embodiment, a system is provided that does not have pre-defined fixed classes but is able—by using a common clustering algorithm that is running in the background—to find classes for itself and is also able to modify, delete and merge existing ones dependent on the acoustical environment the hearing aid user is in.
All features used for classification form an n-dimensional feature space; all parameters used to configure the hearing aid form an m-dimensional parameter space; n and m are not necessarily equal.
Starting with one or more pre-defined classes and one or more corresponding parameter sets that are activated according to the occurrence of the classes, the system and method continuously analyzes the distribution of feature values in the feature space (using common clustering algorithms known from the literature) and modifies the borders of the classes accordingly, so that, preferably, one cluster always represents one class. If two distinct clusters are detected within one existing class, the class will be split into two new classes. If one cluster covers two existing classes, the two classes will be merged into one new class. There may be an upper limit for the total number of classes, so that whenever a new class is built, two old ones have to be merged.
At the same time, the parameter settings, representing possible user input, are clustered, and a mapping to the current clusters in feature space is calculated according to which parameter setting is used in which acoustic surrounding: one cluster in parameter space can belong to one or more clusters in feature space, for the case that the same setting is chosen for different environments.
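This many-to-one relation between feature-space clusters and parameter-space clusters can be represented as a simple lookup, sketched below. The class and program names are invented for the example; only the structure (several environment classes sharing one parameter setting) is taken from the text.

```python
# Mapping from feature-space classes (environments) to parameter-space
# clusters (settings). Two environments may share one setting when the
# user chooses the same configuration in both.
mapping = {
    "speech": "quiet_program",
    "music": "music_program",
    "speech in noise": "noisy_program",
    "noise": "noisy_program",  # same setting reused for a second class
}


def setting_for(mapping, detected_class):
    """Look up the parameter setting for the currently detected class."""
    return mapping[detected_class]
```

As classes split or merge, only this mapping has to be updated; the parameter-space clusters themselves remain driven by the user's observed preferences.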
The result of this system and method is a dynamic mapping between dynamically changing clusters 25 in feature space (depending on the individual acoustic surroundings) and corresponding clusters 26 in parameter space (depending on the individual user's preferences). This is illustrated in the drawings.
A new adaptive classification system is provided for hearing aids which allows the device to track and define environmental classes relevant to each user. Once this is accomplished the hearing aid may then learn the user preferences (volume control, directional microphone, noise reduction, spectral balance, etc.) for each individual class.
While a preferred embodiment has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only the preferred embodiment has been shown and described, and that all changes and modifications that come within the spirit of the invention, now or in the future, are desired to be protected.
Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/EP2008/057919 | 6/23/2008 | WO | 00 | 5/14/2010

Number | Date | Country
---|---|---
60936616 | Jun 2007 | US