HEARING DEVICE AND METHOD FOR OPERATING A HEARING DEVICE

Information

  • Patent Application
  • Publication Number
    20070269053
  • Date Filed
    September 07, 2006
  • Date Published
    November 22, 2007
Abstract
The method for operating a hearing device having an adjustable transfer function comprising M sub-functions, wherein M is an integer with M≧1, and wherein said transfer function describes how input audio signals generated by an input transducer unit of said hearing device relate to output audio signals to be fed to an output transducer unit of said hearing device, comprises the steps of: deriving said input audio signals from a current acoustic environment; and for each of said M sub-functions: deriving, on the basis of said input audio signals and for each class of N classes each of which describes a predetermined acoustic environment, a class similarity factor indicative of the similarity of said current acoustic environment with the predetermined acoustic environment described by the respective class, wherein N is an integer with N≧2; deriving from N predetermined base parameter sets assigned to the respective sub-function and in dependence of said class similarity factors an activity parameter set for the respective sub-function, wherein each of said N base parameter sets assigned to the respective sub-function is assigned to a different class of said N classes; adjusting the respective sub-function by means of said activity parameter set.
Description

BRIEF DESCRIPTION OF THE DRAWINGS

Below, the invention is described in more detail by means of examples and the included drawings. The figures show schematically:



FIG. 1 a diagrammatical illustration of a hearing device;



FIG. 2 a diagrammatical illustration of a hearing device;



FIG. 3 an illustration of an activity parameter and a corresponding time-averaged activity parameter as a function of time;



FIG. 4 an exemplary embodiment of an averaging unit.





The reference symbols used in the figures and their meaning are summarized in the list of reference symbols. Generally, identical or functionally identical parts are given the same or similar reference symbols. The described embodiments are meant as examples and shall not confine the invention.


DETAILED DESCRIPTION OF THE INVENTION


FIG. 1 shows a diagrammatical illustration of a hearing device 1, which comprises an input transducer unit 2, e.g., a microphone or an arrangement of microphones, for transducing sound from the current (actual) acoustic environment into input audio signals S1, wherein audio signals are electrical signals, of analogue and/or digital type, which represent sound. The input audio signals S1 are fed to a signal processing unit 3 for processing according to a transfer function G, which can be adapted to the needs of a user of the hearing device in dependence of said current acoustic environment. The transfer function G is or comprises at least one sub-function. In FIG. 1, the transfer function G is or comprises only one sub-function g1, which is realized in a signal processing circuit 3/1. Said signal processing circuit 3/1 may, e.g., provide for beam forming or for noise suppression or for another part of the transfer function G.


From the input audio signals S1, the signal processing unit 3 derives output audio signals S2, which are fed to an output transducer unit 5, e.g., a loudspeaker. The output transducer unit 5 transduces the output audio signals S2 into signals to be perceived by the user of the hearing device, e.g., into acoustic sound, as indicated in FIG. 1.


An automatic adaptation of the transfer function G to said current acoustic environment is accomplished in the following manner:


The input audio signals S1 are fed to a classifier unit 4, in which said current acoustic environment is classified, wherein any known classification method can in principle be used. I.e., the current acoustic environment, represented by the input audio signals S1, is compared to N predetermined acoustic environments, each described by one class of a set of N predefined classes C1 . . . CN.


A set of N class similarity factors p1 . . . pN is output, wherein each of the class similarity factors p1 . . . pN is indicative of the similarity of said current acoustic environment with the respective predetermined acoustic environment of classes C1 . . . CN or, put in other words, of the likeness (resemblance) of said current acoustic environment and the respective predetermined acoustic environment, or, expressed differently, of the degree of correspondence between said current acoustic environment and the respective predetermined acoustic environment.


The classification may be accomplished in various ways known in the art. E.g., as indicated in FIG. 1, the input audio signals S1 may be fed to a feature extractor FE, in which a set of (technical, auditory or other) features is extracted from the input audio signals S1. That set of features is analyzed and classified in a classifier C, which also provides for further processing in order to derive said class similarity factors p1 . . . pN.


Today, N may typically be N=2, N=3, N=4, N=5 or possibly larger. Typical classes may be “speech”, “speech in noise”, “noise”, “music” or others. Typical features are, e.g., spectral shape, harmonic structure, coherent frequency and/or amplitude modulations, signal-to-noise ratio, spectral center of gravity, spatial distribution of sound sources and many more.


The automatic adaptation of the transfer function G is on the one hand based on said class similarity factors p1 . . . pN and on the other hand based on base parameter sets. Said base parameter sets are predetermined, and their respective values are usually obtained during a fitting procedure and/or may be at least partly pre-defined in the hearing device 1.


For each sub-function (in FIG. 1, there is only one sub-function g1 shown), one base parameter set B1/1, . . . , B1/N is provided per class, B1/1 for class C1, B1/2 for class C2, . . . and B1/N for class CN. I.e., for each class C1 . . . CN and each sub-function, there is one base parameter set. Each base parameter set comprises data (typically one number or several numbers), which optimally adjust the respective sub-function to the user's needs and preferences in the respective pre-defined acoustic environment.


In order to adapt the transfer function G, and in particular each sub-function, to a current acoustic environment, for each sub-function, the base parameter sets are mixed in dependence of their class similarity factors p1 . . . pN. In the embodiment of FIG. 1, this is accomplished by multiplying each base parameter set B1/1, . . . , B1/N with a respective class weight factor P1 . . . PN and summing up the accordingly weighted base parameter sets B1/1, . . . , B1/N in a processing unit 8. Said multiplication and summing up of base parameter sets is done separately for each parameter of a base parameter set.


Said class weight factors P1 . . . PN are derived from said class similarity factors p1 . . . pN. In the example of FIG. 1, the class weight factors P1 . . . PN are obtained by adding to each class similarity factor p1 . . . pN an individual class offset o1 . . . oN and multiplying the result (class-wise) by an individual class factor f1 . . . fN. An optional normalization of the class weight factors P1 . . . PN is not shown in FIG. 1. This enables an adaptation of the mixing and, accordingly, of the whole automatic adaptation behaviour to the preferences of the user.


The processing unit 8 outputs an activity parameter set a1 (generally: one for each sub-function), which is fed to the transfer function G, or, more precisely, to the respective sub-function. Accordingly, the transfer function G is adapted to the current acoustic environment in a fashion based on the predetermined base parameter sets.
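The derivation of class weight factors (P_i = (p_i + o_i) × f_i, as described for FIG. 1) and the subsequent parameter-wise mixing in processing unit 8 can be sketched in a few lines of Python. This is an illustrative sketch only, with hypothetical neutral tuning values; the patent does not prescribe any concrete implementation:

```python
# Sketch of the weighting and mixing performed in processing unit 8 (FIG. 1).
# All numeric values below are illustrative assumptions, not fitting results.

def class_weights(p, o, f):
    """Class weight factors P_i = (p_i + o_i) * f_i, with optional normalization."""
    P = [(pi + oi) * fi for pi, oi, fi in zip(p, o, f)]
    s = sum(P)
    return [Pi / s for Pi in P]  # normalization (optional in the patent)

def mix(base_sets, P):
    """Activity parameter set: parameter-wise weighted sum of the base parameter sets."""
    return [sum(Pi * b[k] for Pi, b in zip(P, base_sets))
            for k in range(len(base_sets[0]))]

# Neutral tuning (o = 0, f = 1), two classes, one-parameter sub-function,
# matching the beamformer example: B1/1 = [0.0], B1/2 = [1.0].
P = class_weights([0.4, 0.6], [0.0, 0.0], [1.0, 1.0])
a1 = mix([[0.0], [1.0]], P)
print(a1)  # [0.6] -> moderate beamformer activity
```

With neutral offsets and factors, the weights reduce to the similarity factors themselves, so the activity parameter is simply 0.4×0 + 0.6×1 = 0.6.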


A simple example:


M=1, g1: beamformer; N=2, C1: music, C2: speech in noise. The corresponding base parameter sets B1/1, B1/2 do not have to be derived in a fitting procedure, but can be pre-programmed by the hearing device manufacturer: B1/1=0, B1/2=1, which means that no beam forming (zero activity of g1) shall be used when the user wants to listen to music, and full beam forming (full activity of g1) shall be used when the user wants to understand a speaker in a noisy place. Zero beam forming activity will usually mean that an omnidirectional polar pattern of the input transducer unit 2 shall be used, and full beam forming activity will typically mean that a high sensitivity towards the front direction (along the user's nose) shall be used, with little sensitivity for sound from other directions.


When the user is in an acoustic environment with p1=99% and p2=1%, i.e., the classification result implies that the current acoustic environment is practically pure music, the beam former (realized by sub-function g1) is run with (at least approximately) B1/1, i.e., at practically zero activity (o1=o2=0, f1=f2=1 implied).


When the user is in an acoustic environment with p1=1% and p2=99%, i.e., the classification result implies that the current acoustic environment is practically pure speech-in-noise, the beam former (realized by sub-function g1) is run with (at least approximately) B1/2, i.e., with practically full activity (o1=o2=0, f1=f2=1 implied).


When, however, the user is in an acoustic environment with p1=40% and p2=60% (e.g., in a restaurant situation with background music), i.e., the classification result implies that the current acoustic environment has aspects of music and somewhat stronger aspects of speech-in-noise, the beam former (realized by sub-function g1) is run with 0.4×B1/1+0.6×B1/2, i.e., with moderate activity (o1=o2=0, f1=f2=1 implied). The beam former may provide for a medium emphasis of sound from the front hemisphere and only little suppression of sound from elsewhere.


Of course, instead of the simple linear mixing of the base parameter sets discussed above by way of example, more sophisticated (non-linear) ways of mixing the base parameter sets may also be applied.


If it is particularly important to the user to understand speech in noisy surroundings, whereas he is not particularly fond of music, this individual preference may be taken into account by using something like o1=0, o2=0.3 and/or f1=0.8, f2=1.5, or the like.
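The effect of such preference values can be checked with a short arithmetic sketch; the numbers are the hypothetical ones from the example above (p1=40% music, p2=60% speech-in-noise, o2=0.3, f1=0.8, f2=1.5):

```python
# Hypothetical preference tuning for the beamformer example:
p1, p2 = 0.4, 0.6
P1 = (p1 + 0.0) * 0.8   # music weight, damped by f1
P2 = (p2 + 0.3) * 1.5   # speech-in-noise weight, boosted by o2 and f2
s = P1 + P2             # optional normalization
P1, P2 = P1 / s, P2 / s
activity = P1 * 0.0 + P2 * 1.0   # B1/1 = 0 (no beam forming), B1/2 = 1 (full)
print(round(activity, 3))        # ~0.808: the beamformer is biased towards speech
```

Compared with the neutral result of 0.6, the user's preference for speech intelligibility pushes the beamformer considerably closer to full activity in the same acoustic environment.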


Another simple example:


M=1, g1: gain model (amplification characteristic); N=2, C1: music, C2: speech. The corresponding base parameter sets B1/1, B1/2 will usually be derived in a fitting procedure and indicate the amplification to be used in dependence of incoming signal power; characterized, e.g., in terms of decibel values characterizing the incoming signal power and compression values characterizing the steepness of increase of the output signal with increasing incoming signal power. E.g., B1/1=(50 dB, 2.5; 90 dB, 0.8; 110 dB, 0.3; 0), indicating expansion below 50 dB, light compression up to 90 dB, strong compression up to 110 dB and limiting (infinite compression) above that. For speech, on the other hand, other values may be used, e.g., B1/2=(30 dB, 2.5; 80 dB, 0.4; 105 dB, 0.2; 0), indicating expansion below 30 dB, medium compression up to 80 dB, strong compression up to 105 dB and limiting above that. These rather arbitrarily chosen numbers for the base parameter sets shall merely indicate one possible way of forming base parameter sets. Usually, gain models are furthermore frequency-dependent, so that the base parameter sets will, in addition, comprise frequency values and, accordingly, even more decibel values and compression values (for the various frequency ranges).


When the user is in an acoustic environment with p1=99% and p2=1%, i.e., the classification result implies that the current acoustic environment is practically pure music, the gain model (realized by sub-function g1) is run with (at least approximately) B1/1 (o1=o2=0, f1=f2=1 implied).


When the user is in an acoustic environment with p1=1% and p2=99%, i.e., the classification result implies that the current acoustic environment is practically pure speech, the gain model (g1) is run with (at least approximately) B1/2 (o1=o2=0, f1=f2=1 implied).


When, however, the user is in an acoustic environment with p1=40% and p2=60% (e.g., in a conversation situation with background music), i.e., the classification result implies that the current acoustic environment has aspects of music and somewhat stronger aspects of speech, the gain model (g1) is run with 0.4×B1/1+0.6×B1/2 (o1=o2=0, f1=f2=1 implied). I.e., the gain model is a linear combination of the gain model for music and the gain model for speech, obtained in processing unit 8. The activity parameter set a1 may be identical with this linear combination. Such an activity parameter set a1 is, of course, no longer just a simple strength value or an activity setting. Such an activity parameter set a1 can already be, without further processing, the set of parameters used in the corresponding sub-function.
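The parameter-wise mixing of the two gain-model base parameter sets can be sketched as follows; the parameter tuples are the illustrative ones from the example above (knee levels in dB and compression values, flattened into lists):

```python
# Parameter-wise linear combination of the two hypothetical gain-model
# base parameter sets from the example above:
B_music  = [50.0, 2.5, 90.0, 0.8, 110.0, 0.3, 0.0]   # B1/1
B_speech = [30.0, 2.5, 80.0, 0.4, 105.0, 0.2, 0.0]   # B1/2
P = [0.4, 0.6]  # normalized class weight factors (music, speech)

a1 = [P[0] * m + P[1] * s for m, s in zip(B_music, B_speech)]
print(a1)  # first knee level: 0.4*50 + 0.6*30 = 38 dB
```

Each entry of the resulting activity parameter set is simply the weighted average of the corresponding entries of the two base parameter sets, so the mixed gain model lies between the music and speech settings.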


Of course, instead of the simple linear mixing of the base parameter sets discussed above by way of example, more sophisticated (non-linear) ways of mixing the base parameter sets may also be applied.


Said class similarity factors p1, p2 can be obtained, e.g., in the following manner (in classifier unit 4):


In the feature extractor FE, a number of features are extracted from the input audio signals S1, e.g., rather technical characteristics like the signal power between 200 Hz and 600 Hz relative to the overall signal power and the harmonicity of the signal, or auditory-based characteristics like common build-up and decay processes and coherent amplitude modulations. Each examined feature provides for at least one value in a feature vector. For one specific current acoustic environment (represented by the input audio signals S1), the feature vector might be (3.0; 2.6; 4.1); note that there will typically be between 5 and 10 or even more features and vector components. There is one feature vector for each predetermined acoustic environment, e.g., (5.3; 1.8; 3.6) for class C1 and (1.2; 3.1; 3.9) for class C2. The class similarity factors p1, p2 are a measure for the inverse distance between the feature vector of the current acoustic environment and the feature vector of class C1 and class C2, respectively. I.e., p1, p2 are measures for the closeness of the feature vector of the current acoustic environment to the feature vectors of class C1 and class C2, respectively. A measure for said distance can be obtained, e.g., as the Euclidean distance between the vectors, or by means of multivariate variance analysis. For example, the inverse of the square root of the sum of the squares of the differences between the components of the vectors can be used, i.e.,










p1 = 1/sqrt{(3.0 − 5.3)^2 + (2.6 − 1.8)^2 + (4.1 − 3.6)^2} = 1/sqrt(6.18) = 0.402

and

p2 = 1/sqrt{(3.0 − 1.2)^2 + (2.6 − 3.1)^2 + (4.1 − 3.9)^2} = 1/sqrt(3.53) = 0.532.

In this case, the current acoustic environment is more similar to class C2 than to class C1, since p1<p2.


Of course, normalization of each feature vector component (corresponding to a specific feature), e.g., to a range from 0 to 1, and/or a normalization during the determination of p1, p2 is advisable, and it is also possible to weight different features differently when determining p1, p2. A suitable normalization makes it possible to generate class similarity factors which lie between 0 and 1 and can therefore be expressed in percent (%), wherein the higher (and closer to 100%) the corresponding class similarity factor is, the greater the likeness of the current acoustic environment with the predetermined acoustic environment. The p1, p2 values in the two simple examples above were assumed to be class similarity factors normalized in such a way.
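The inverse-distance computation above, including a normalization that makes the factors readable as percentages, can be sketched as follows (the feature vectors are the illustrative ones from the example; a real classifier would use many more components):

```python
import math

# Inverse-Euclidean-distance similarity with the example feature vectors:
current = [3.0, 2.6, 4.1]
class_1 = [5.3, 1.8, 3.6]   # prototype feature vector for class C1
class_2 = [1.2, 3.1, 3.9]   # prototype feature vector for class C2

def similarity(x, proto):
    """Inverse Euclidean distance between a feature vector and a class prototype."""
    return 1.0 / math.sqrt(sum((a - b) ** 2 for a, b in zip(x, proto)))

p1 = similarity(current, class_1)   # ~0.402
p2 = similarity(current, class_2)   # ~0.532

# Normalizing so the factors sum to 1 lets them be read as percentages:
s = p1 + p2
print(round(100 * p1 / s), round(100 * p2 / s))
```

Note that the raw inverse distance diverges when the current feature vector coincides with a prototype; a practical implementation would bound it or use a different closeness measure.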



FIG. 2 shows a diagrammatical illustration of a hearing device 1, which is similar to the hearing device 1 of FIG. 1; the underlying principle is basically the same as in FIG. 1. However, this hearing device 1 additionally comprises an averaging unit 9, and at least two sub-functions g1 . . . gM are drawn. Furthermore, the class similarity factors are processed by a processing circuit 6, which outputs the class weight factors P1 . . . PN. The processing circuit 6 may perform various calculations and, in particular, take care of individual adaptations as provided by f1 . . . fN and o1 . . . oN (see FIG. 1).


The averaging unit 9 outputs time-averaged activity parameter sets a1* . . . aM*, which are used for steering the sub-functions g1 . . . gM. The advantages of this will become clear in the following.


The above-described mixing of base parameter sets already provides for a significant improvement over prior art hearing devices, which can only run one of a number of predetermined hearing programs at a time, wherein these hearing programs correspond to base parameter sets optimized for a corresponding predefined class. The resulting switching between the predetermined hearing programs in such prior art hearing devices can be annoying to the user, in particular if similarity values for competing classes are about equal to each other (e.g., about 50% for each of two classes). In that case, frequent switching between hearing programs may occur. By means of the above-described mixing of base parameter sets, in contrast, (quasi-)continuous adaptations of the transfer function G are possible (without switching), and smooth and agreeable changes will take place in most situations.


There are, nevertheless, situations in which undesirable, noticeable changes in the transfer function G might still occur despite the mixing of base parameter sets. E.g., in a car, the classification may change within seconds from nearly 100% speech (conversation at a red light) to nearly 100% noise (acceleration) to nearly 100% music (car radio) to nearly 100% speech-in-noise (car radio speaker at medium or high speeds). Too fast an adaptation of the transfer function may, in such a case, be undesirable.


A preferable behaviour of the adaptation of the transfer function G shall, as far as possible, fulfill the following points:


1. Upon a changing acoustic situation, the hearing device shall change its signal processing sufficiently fast, but as inconspicuously to the user as possible. This should provide for optimum performance during most of the time.


2. In a constantly strongly changing situation, however, the user shall not be annoyed by the partly significant changes in signal processing, which would be needed for a full adaptation to different acoustic environments.

These features can be accomplished, at least in part, by means of the following behaviour:


a. In a constantly strongly changing situation, the partly significant changes in signal processing, which would be needed for a full adaptation to different sound classes, shall be averaged out, in order to achieve a more constant (more stable) signal processing.


b. When (after strong changes) an acoustic situation is (again) practically stable (for a certain span of time), the signal processing shall slowly fade towards the appropriate parameter set values (activity parameter sets) for this situation.


c. Only when class similarity factors have remained relatively stable for a sufficiently long time (i.e., upon detection of a rather constant acoustic situation for a certain span of time) shall the hearing device (again) react quickly to a detected significant change in the acoustic environment.



FIG. 3 is a schematic illustration of an activity parameter a1 and a corresponding time-averaged activity parameter a1* as a function of time t, which shall illustrate the above-described behaviour, wherein, for reasons of simplicity, only one parameter of an activity parameter set (or an activity parameter set comprising only one parameter) is assumed. When fast, great changes happen to a1, a1* will not fully follow a1. Later, when changes in a1 become weaker, a1* slowly drifts towards a1. Finally, after quite a while of approximately constant conditions, a rapid, strong change in a1 will be followed by a1* rather quickly and in full.


Such a behaviour can be readily implemented in the form of software or otherwise. One exemplary implementation is shown in FIG. 4. The averaging unit 9 receives a1(t) and outputs a1*(t). The averaging time τ, during which a1(t)-values are averaged, is controlled in dependence of past a1(t)-values.


a1(t) is fed to a differentiator 91, which outputs a value representative of the derivative of a1(t), i.e., a measure for the changes in a1(t). Therefrom, the absolute value is taken (reference 92), which is then integrated (summed up) in a leaky integrator 93. The leakage factor α determines how long after a series of fast input changes the circuit will again react quickly to a fast change of the input.


Accordingly, a measure for the magnitude of the changes during the past time is obtained. The corresponding value can be multiplied by a base time constant t0 for adjustment. The value so obtained is used as the time constant τ for an averager 90, which averages a1(t) over a time span τ and outputs the so-derived a1*(t).
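One way the averaging unit 9 of FIG. 4 might be sketched in discrete-time software is shown below. The chain differentiator, absolute value, leaky integrator, adaptive time constant, averager follows the description above; the concrete constants (t0, α, window bound) are illustrative assumptions, since the patent fixes no values:

```python
from collections import deque

class AveragingUnit:
    """Sketch of averaging unit 9 (FIG. 4): differentiator -> absolute value
    -> leaky integrator -> time constant tau -> moving average.
    t0 (base time constant, in samples) and alpha (leakage factor) are
    illustrative assumptions."""

    def __init__(self, t0=4.0, alpha=0.95, max_window=256):
        self.t0, self.alpha = t0, alpha
        self.change = 0.0          # leaky integral of |d a1 / dt| (elements 91-93)
        self.prev = None
        self.history = deque(maxlen=max_window)

    def step(self, a1):
        if self.prev is None:
            self.prev = a1
        # differentiator (91), absolute value (92), leaky integrator (93):
        self.change = self.alpha * self.change + abs(a1 - self.prev)
        self.prev = a1
        self.history.append(a1)
        # averaging time tau = t0 * change, bounded to the available history:
        tau = max(1, min(len(self.history), round(self.t0 * self.change)))
        window = list(self.history)[-tau:]
        return sum(window) / len(window)   # averager (90)

unit = AveragingUnit()
for _ in range(50):
    out = unit.step(1.0)   # stable input: tau shrinks to 1, output follows fully
print(out)  # 1.0
```

With a stable input the change measure decays, τ shrinks, and the unit tracks a1 immediately; under rapid alternation the change measure grows, τ grows, and the output is held near the average of the recent extremes, which is exactly the behaviour described for FIG. 3.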


Using an averager with different attack and release time constants (not shown) allows the output of the averaging unit to settle towards a predetermined percentage of the dynamic range spanned by the many fast changes, when such fast changes occur. Only when the input to the averaging unit settles will the output of the averaging unit slowly follow.
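A one-pole smoother with distinct attack and release coefficients illustrates this settling behaviour; the coefficient values are assumptions for the sketch:

```python
def smooth(x, y, attack=0.5, release=0.05):
    """One-pole averager: fast coefficient on rising input (attack),
    slow coefficient on falling input (release). Values are illustrative."""
    coeff = attack if x > y else release
    return y + coeff * (x - y)

# A rapidly alternating input settles near the upper part of its dynamic range,
# because rises are tracked quickly while falls are only slowly released:
y = 0.0
for i in range(100):
    y = smooth(float(i % 2), y)
print(round(y, 2))  # settles above 0.9 for these coefficients
```

The ratio of attack to release coefficient sets the "predetermined percentage" of the dynamic range at which the output settles during sustained fast changes.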


Both the averaging in the averaging unit 9 and the processing in the processing unit 8 may be adjusted individually for different parameters of an activity parameter set and/or for parameter sets of different sub-functions.


E.g., for sub-functions which tend to strongly annoy the user when subject to rapid changes, greater time constants for averaging may be chosen (e.g., via t0), whereas a more rapid following of a1*(t) to a1(t) may be chosen for sub-functions that cause less irritation when changed. In the case of an averager with different attack and release time constants (not shown), different ratios of attack time constants to release time constants may be chosen for different sub-functions.


As has already been stated above, it is possible to have just one single parameter as a1 for a sub-function. That parameter can be considered the “strength” or the “activity” of the sub-function.


It is to be noted that a time-averaging like the one described above may be used not only for activity parameters (or, more particularly, for each value or number of an activity parameter set), but also, in general, for smoothing any other adjustments of a transfer function G. It is applicable to any (dynamically and/or continuously) adjustable processing algorithm.


It is furthermore to be noted that the various units and parts in the figures are merely logic units. They may be implemented in various ways, e.g., all in one processor chip or distributed over a number of processors, in one or several pieces of software, and so on.












List of Reference Symbols

1    hearing device
2    input transducer unit, microphone unit, microphone
3    signal processing unit, transmission unit
3/1 . . . 3/M    signal processing circuits
4    classifier unit
5    output transducer unit, loudspeaker
6    processing circuit
7    base parameter storage unit
8    processing unit
9    averaging unit
90   averager
91   differentiator
92   calculating the absolute value
93   leaky integrator
a1 . . . aM    activity parameter sets
a1* . . . aM*    time-averaged activity parameter sets
B1/1 . . . BM/N    base parameter sets
C    classifier
C1 . . . CN    classes
FE    feature extractor
f1 . . . fN    individual class factors
G    transfer function
g1 . . . gM    sub-functions
M    number of sub-functions
N    number of classes
o1 . . . oN    individual class offsets
p1 . . . pN    class similarity factors
P1 . . . PN    class weight factors
S1    input audio signals
S2    output audio signals
t    time
t0   base time constant
α    leakage factor
τ    time constant for averaging, averaging time








Claims
  • 1. Method for operating a hearing device having an adjustable transfer function comprising M sub-functions, wherein M is an integer with M≧1, and wherein said transfer function describes how input audio signals generated by an input transducer unit of said hearing device relate to output audio signals to be fed to an output transducer unit of said hearing device, said method comprising the steps of deriving said input audio signals from a current acoustic environment; and, for each of said M sub-functions: deriving, on the basis of said input audio signals and for each class of N classes each of which describes a predetermined acoustic environment, a class similarity factor indicative of the similarity of said current acoustic environment with the predetermined acoustic environment described by the respective class, wherein N is an integer with N≧2; deriving from N predetermined base parameter sets assigned to the respective sub-function and in dependence of said class similarity factors an activity parameter set for the respective sub-function, wherein each of said N base parameter sets assigned to the respective sub-function is assigned to a different class of said N classes; adjusting the respective sub-function by means of said activity parameter set.
  • 2. Method according to claim 1, with M≧2.
  • 3. Method according to claim 1, wherein the base parameter sets are chosen such that using each of the M base parameter sets assigned to one specific class of said N classes for adjusting the sub-function to which the respective base parameter set is assigned provides for optimized output audio signals, when said current acoustic environment is identical with the predetermined acoustic environment described by that specific class.
  • 4. Method according to claim 1, wherein each of said activity parameter sets comprises a multitude of values, in particular a multitude of numbers.
  • 5. Method according to claim 1, wherein each of said activity parameter sets is a single value, in particular, a single number.
  • 6. Method according to claim 1, comprising the step of deriving, for each of said N classes, a class weight factor from the corresponding class similarity factor;
  • 7. Method according to claim 6, wherein, for at least one of said N classes, said deriving of said class weight factor comprises multiplication with an individual class factor and/or addition of an individual class offset.
  • 8. Method according to claim 1, wherein, for at least one of said M sub-functions, a time-averaged activity parameter set is used for adjusting the respective at least one of said M sub-functions.
  • 9. Method according to claim 8, further comprising the step of choosing an averaging time for said time-averaging in dependence of past changes in the respective activity parameter set.
  • 10. Method according to claim 9, further comprising the steps of decreasing said averaging time when said past changes in the respective activity parameter set decrease; and increasing said averaging time when said past changes in the respective activity parameter set increase.
  • 11. Method according to claim 1, wherein at least one of the group comprising beam forming, noise cancelling, feedback cancelling, dynamics processing, filtering is realized by means of at least one of said M sub-functions.
  • 12. Hearing device comprising an input transducer unit for deriving input audio signals from a current acoustic environment; an output transducer unit for receiving output audio signals; a signal processing unit for deriving said output audio signals from said input audio signals by processing said input audio signals according to an adjustable transfer function, which adjustable transfer function describes how said input audio signals relate to said output audio signals and comprises M sub-functions, wherein M is an integer with M≧1; a classifier unit for deriving, on the basis of said input audio signals and for each class of N classes each of which describes a predetermined acoustic environment, a class similarity factor indicative of the similarity of said current acoustic environment with the predetermined acoustic environment described by the respective class, wherein N is an integer with N≧2; a base parameter storage unit storing, for each of said M sub-functions, N predetermined base parameter sets each assigned to a different class of said N classes; a processing unit operationally connected to said base parameter storage unit and adapted to deriving an activity parameter set for each of said M sub-functions, wherein each of said activity parameter sets is derived in dependence of said class similarity factors from the base parameter sets assigned to the respective sub-function;
  • 13. Device according to claim 12, with M≧2.
  • 14. Device according to claim 12, wherein, for each of said N classes, the M base parameter sets assigned to one specific class of said N classes are chosen such that optimized output audio signals are generated when said M base parameter sets are each used for adjusting that sub-function to which the respective base parameter set is assigned and when said current acoustic environment is identical with the predetermined acoustic environment described by said specific class.
  • 15. Device according to claim 12, wherein each of said activity parameter sets comprises a multitude of values, in particular a multitude of numbers.
  • 16. Device according to claim 12, wherein each of said activity parameter sets is a single value, in particular, a single number.
  • 17. Device according to claim 12, wherein said processing unit comprises an averaging unit for deriving, for each of at least one of said M sub-functions, a time-averaged activity parameter set, and wherein said at least one of said M sub-functions is adjusted by means of the respective time-averaged activity parameter set.
  • 18. Hearing device comprising means for deriving input audio signals from a current acoustic environment; means for processing said input audio signals according to an adjustable transfer function, which transfer function comprises M sub-functions, wherein M is an integer with M≧1; means for deriving, on the basis of said input audio signals and for each class of N classes each of which describes a predetermined acoustic environment, a class similarity factor indicative of the similarity of said current acoustic environment with the predetermined acoustic environment described by the respective class, wherein N is an integer with N≧2; means for deriving an activity parameter set for each of said M sub-functions, wherein each of said activity parameter sets is derived in dependence of said class similarity factors from N base parameter sets assigned to the respective sub-function, wherein each of said N base parameter sets assigned to the respective sub-function is assigned to a different class of said N classes;
  • 19. Hearing system comprising a hearing device according to one of claims 12 to 18.
  • 20. Method for manufacturing an audible signal by means of a hearing device having an adjustable transfer function comprising M sub-functions, wherein M is an integer with M≧1, and wherein said transfer function describes how input audio signals generated by an input transducer unit of said hearing device relate to output audio signals to be fed to an output transducer unit of said hearing device, said method comprising the step of deriving said input audio signals from a current acoustic environment; and
Provisional Applications (1)
Number Date Country
60747330 May 2006 US