1. Field of the Invention
The present invention relates to a method of operating a hearing aid. More specifically, the invention relates to a method of operating a hearing aid wherein speech intelligibility for the user is optimized. Further, the present invention relates to a hearing aid adapted to provide improved speech intelligibility.
In the context of the present disclosure, a hearing aid should be understood as a small, microelectronic device designed to be worn behind or in a human ear of a hearing-impaired user. A hearing aid system may be monaural and comprise only one hearing aid or be binaural and comprise two hearing aids. Prior to use, the hearing aid is adjusted by a hearing aid fitter according to a prescription. The prescription is based on a hearing test, resulting in a so-called audiogram, of the performance of the hearing-impaired user's unaided hearing. The prescription is developed to reach a setting where the hearing aid will alleviate a hearing loss by amplifying sound at frequencies in those parts of the audible frequency range where the user suffers a hearing deficit. A hearing aid comprises one or more microphones, a microelectronic circuit comprising a signal processor, and an acoustic output transducer (which may also be denoted a hearing aid receiver). The signal processor is preferably a digital signal processor. The hearing aid is enclosed in a casing suitable for fitting behind or in a human ear.
The mechanical design has developed into a number of general categories. As the name suggests, Behind-The-Ear (BTE) hearing aids are worn behind the ear. To be more precise, an electronics unit comprising a housing containing the major electronics parts thereof is worn behind the ear. An earpiece for emitting sound to the hearing aid user is worn in the ear, e.g. in the concha or the ear canal. In a traditional BTE hearing aid, a sound tube is used to convey sound from the output transducer, which in hearing aid terminology is normally referred to as the receiver, located in the housing of the electronics unit and to the ear canal. In some modern types of hearing aids a conducting member comprising electrical conductors conveys an electric signal from the housing and to a receiver placed in the earpiece in the ear. Such hearing aids are commonly referred to as Receiver-In-The-Ear (RITE) hearing aids. In a specific type of RITE hearing aids the receiver is placed inside the ear canal. This category is sometimes referred to as Receiver-In-Canal (RIC) hearing aids.
In-The-Ear (ITE) hearing aids are designed for arrangement in the ear, normally in the funnel-shaped outer part of the ear canal. In a specific type of ITE hearing aids the hearing aid is placed substantially inside the ear canal. This category is sometimes referred to as Completely-In-Canal (CIC) hearing aids. This type of hearing aid requires an especially compact design in order to allow it to be arranged in the ear canal, while accommodating the components necessary for operation of the hearing aid.
Prior to use, the hearing aid must be fitted to the individual user. The fitting procedure basically comprises adapting a transfer function dependent on level and frequency to best compensate the user's hearing loss according to the particular circumstances such as the user's hearing impairment and the specific hearing aid selected. The selected settings of the parameters governing the transfer function are stored in the hearing aid. The settings can later be changed through a repetition of the fitting procedure, e.g. to account for a change in impairment. In case of multi-program hearing aids, the adaptation procedure may be carried out once for each program, selecting settings dedicated to take specific sound environments into account.
According to the state of the art, hearing aids process sound in a number of frequency bands with facilities for specifying gain levels according to some predefined input/gain-curves in the respective bands.
The level-dependent transfer function is adapted for compressing the signal in order to control the dynamic range of the output of the hearing aid. The compression can be regarded as an automatic adjustment of the gain levels for the purpose of improving the listening comfort of the user of the hearing aid, and the compression may therefore be denoted Automatic Gain Control (AGC). The AGC also provides the gain values required for alleviating the hearing loss of the person using the hearing aid.
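A level-dependent input/gain curve of this kind can be illustrated with a minimal sketch; the threshold, compression ratio and maximum gain below are hypothetical illustration values, not figures taken from the disclosure:

```python
def compressor_gain_db(input_level_db, threshold_db=50.0, ratio=2.0, max_gain_db=30.0):
    """Simple AGC input/gain curve: full prescribed gain below the
    compression threshold; above it the gain is reduced so the output
    level grows only 1/ratio dB per input dB (floored at 0 dB gain)."""
    if input_level_db <= threshold_db:
        return max_gain_db
    reduction = (input_level_db - threshold_db) * (1.0 - 1.0 / ratio)
    return max(max_gain_db - reduction, 0.0)
```

With a 2:1 ratio, a 70 dB input receives 10 dB less gain than a 50 dB input, which is how the compression limits the dynamic range of the output.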
2. The Prior Art
Compression may be implemented in the way described in the international application WO-A1-9934642.
Advanced hearing aids may further comprise anti-feedback routines for continuously monitoring input signals and output signals in respective frequency bands for the purpose of continuously controlling acoustic feedback instability through providing cancellation signals and through lowering of the gain settings in the respective bands when necessary.
However, in all these “predefined” gain adjustment methods, the gain levels are modified according to functions that have been predefined during the programming/fitting of the hearing aid to reflect requirements for generalized situations.
Recently it has been suggested to use models for the prediction of the intelligibility of speech after transmission through a linear system. The best-known of these models are the "articulation index" (AI), the "speech intelligibility index" (SII) and the "speech transmission index" (STI), but other indices exist.
Determinations of speech intelligibility have been used to assess the quality of speech signals in telephone lines, see e.g. H. Fletcher and R. H. Galt “The perception of speech and its relation to telephony,” J. Acoust. Soc. Am. 22, 89-151 (1950).
The SII is always a number between 0 (speech is not intelligible at all) and 1 (speech is fully intelligible). The SII is, in fact, an objective measure of a system's ability to convey speech intelligibly, and hence of how likely the listener is to understand what is being said.
The ANSI S3.5-1997 standard provides methods for the calculation of the speech intelligibility index, SII. The SII makes it possible to predict the intelligible amount of the transmitted speech information, and thus, the speech intelligibility in a linear transmission system. The SII is a function of the system's transfer function and of the acoustic input, i.e. indirectly of the speech spectrum at the output of the system. Furthermore, it is possible to take the effects of a masking noise into account in the SII.
The ANSI S3.5-1997 (Revised 2007) standard is based on hearing thresholds for normal hearing persons. However, Annex A of the standard discloses a modification of the speech level distortion factor with an additional loss factor that is the part of the equivalent hearing threshold level due to the presence of a conductive hearing loss.
Various procedures have been proposed for correcting the SII protocol to include the so called supra-threshold deficits, but in the ANSI S3.5-1997 (Revised 2007) standard only the effect of an elevated hearing threshold level is included.
EP-B1-1522206 discloses a hearing aid and a method of operating a hearing aid wherein speech intelligibility is improved based on frequency band gain adjustments based on real-time determinations of speech intelligibility and loudness, and which is suitable for implementation in a processor in a hearing aid.
This type of hearing aid and operation method requires the capability of increasing or decreasing the gain independently in the different bands depending on the current sound situation. For bands with high noise levels, for example, it may be advantageous to decrease the gain, while an increase of gain can be advantageous in bands with low noise levels, in order to maximize the SII. However, such a simple strategy will not always be an optimal solution, as the SII also takes inter-band interactions, such as mutual masking, into account. A precise calculation of the SII is therefore necessary.
This type of hearing aid and methods of enhancing speech are advantageous, but are still based on standard assumptions concerning a user's hearing loss, which means that the hearing aids and the corresponding methods, apart from the measured hearing loss threshold, cannot be individualized to the user.
It is therefore a feature of the invention to provide a method of operating a hearing aid wherein improved speech enhancement is achieved.
It is also a feature of the invention to provide a method of operating a hearing aid with improved means for individualization of the methods to the specific user.
It is a further feature of the invention to provide a hearing aid comprising means for enhancing listening comfort and means for optimizing speech intelligibility in real time.
The invention in a first aspect provides a method of processing a signal in a hearing aid, the method comprising the steps of receiving an input signal from an acoustical-electrical transducer; splitting the input signal into a multitude N of hearing aid frequency bands using a first filterbank; estimating ambient speech level and noise level in a multitude of said hearing aid frequency bands and applying a hearing aid gain to said speech level and noise level in said multitude of hearing aid frequency bands, whereby estimated speech and noise spectra are provided; estimating hearing loss levels in said multitude of said hearing aid frequency bands hereby providing an estimated hearing loss spectrum; providing excitation values for the speech and noise in said multitude of hearing aid frequency bands, by using an auditory model of the cochlea for a hearing impaired person, and the estimated speech, noise and hearing loss spectra; using said excitation values to calculate a speech intelligibility measure; and optimizing said speech intelligibility measure by iteratively varying the applied gain in the hearing aid frequency bands.
This provides a method of operating a hearing aid that provides improved speech intelligibility and listening comfort.
The invention in a second aspect provides a hearing aid system comprising frequency splitting means adapted for splitting an input signal into a multitude of frequency bands; estimating means adapted for estimating speech levels, noise levels and hearing loss levels in said frequency bands; an auditory model of the cochlea adapted for providing excitation values for speech and noise in said frequency bands; speech intelligibility estimation means adapted for calculating a speech intelligibility measure based on said excitation values; and hearing aid gain optimization means adapted for optimizing said speech intelligibility measure by varying the gain in the hearing aid frequency bands.
This provides a hearing aid with improved means for optimizing speech intelligibility.
Further advantageous features appear from the dependent claims.
Still other features of the present invention will become apparent to those skilled in the art from the following description wherein the invention will be explained in greater detail.
By way of example, there is shown and described a preferred embodiment of this invention. As will be realized, the invention is capable of other different embodiments, and its several details are capable of modification in various, obvious aspects all without departing from the invention. Accordingly, the drawings and descriptions will be regarded as illustrative in nature and not as restrictive. In the drawings:
The inventors have found that an improved method for speech enhancement in a hearing aid can be obtained by replacing estimates of sound pressure levels in the ambience with excitation pattern values that represent how the sounds are perceived by the hearing aid user.
Surprisingly, the inventors have found that the complex and non-linear excitation pattern models can be implemented in a way that makes such a model suitable for use in a hearing aid.
Further, the inventors have found that the implementation of an excitation pattern model can reduce the complexity required for estimating speech intelligibility. One particularly important advantage is that the calculation of the equivalent masking spectrum level becomes unnecessary, since it is an implicit part of the excitation pattern model, and the same holds true for the estimation of the slope of the upward spread of masking.
The inventors have further demonstrated how supra-threshold deficits, in particular the reduced frequency selectivity from hearing loss, can be included in a model for estimating speech intelligibility, and wherein said model is suitable for implementation in a hearing aid.
Additionally the inventors have shown that the use of an excitation pattern model allows the speech enhancement method to be individualized in a manner that was not possible before. In particular it has been demonstrated that consequences of inner and outer hair cell loss can be accounted for.
Finally the inventors have shown that the combined impact from inner and outer hair cell loss can be modeled in a simple manner.
Reference is first made to
The hearing aid 50 comprises a microphone 1 connected to a block splitting means 2, which further connects to a filter block 3. The block splitting means 2 may apply an ordinary, temporal, optionally weighted windowing function, and the filter block 3 may preferably comprise a predefined set of low pass, band pass and high pass filters defining the hearing aid frequency bands.
The total output from the filter block 3 is fed to a multiplication point 10, and the output from the separate bands 1, 2, . . . M in filter block 3 are fed to respective inputs of a speech and noise estimator 4. The outputs from the separate filter bands are shown in
The speech and noise estimator 4 also provides input to the AGC means 5 wherefrom the required gains G0,f for alleviating the hearing loss of the hearing aid user, in the various frequency bands, are determined.
The speech and noise estimator 4 may be implemented as a percentile estimator. A percentile is, by definition, the value below which a given percentage of the observations fall. The output values from the percentile estimator each correspond to an estimate of the level below which the signal level lies for a certain percentage of the time during which the signal level is estimated. The vectors preferably correspond to a 10% percentile (the noise, N) and a 90% percentile (the speech, S), respectively, but other percentile figures can be used. In practice, this means that the noise level vector N comprises the signal levels below which the frequency band signal levels lie during 10% of the time, and the speech level vector S comprises the signal levels below which the frequency band signal levels lie during 90% of the time. The speech and noise estimator 4 thus implements a very efficient way of estimating, for each block, the frequency band levels of noise as well as the frequency band levels of speech.
A percentile estimator may be implemented e.g. as the kind presented in the U.S. Pat. No. 5,687,241.
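As a hedged illustration (not the patented estimator itself), the 10%/90% percentile estimation over a history of per-band block levels might be sketched as follows; a real hearing aid would track the percentiles recursively rather than storing a block history:

```python
import numpy as np

def estimate_speech_and_noise(band_levels_db, noise_pct=10, speech_pct=90):
    """Estimate the noise vector N and speech vector S from per-band block
    levels of shape (num_blocks, num_bands). This batch form only
    illustrates the idea behind the percentile estimator."""
    levels = np.asarray(band_levels_db, dtype=float)
    noise = np.percentile(levels, noise_pct, axis=0)    # N: level not exceeded 10% of the time
    speech = np.percentile(levels, speech_pct, axis=0)  # S: level not exceeded 90% of the time
    return speech, noise
```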
In variations of the embodiment of
The output of multiplication point 10 is further connected to a loudspeaker 12 via a block overlap means 11. The speech and noise estimator 4 is connected to a speech optimization unit 8 and Automatic Gain Control (AGC) means 5 by two multi-band signal paths carrying respectively the estimated signal S and the estimated noise N.
The block overlap means 11 may be implemented as a band interleaving function and a regeneration function for recreating an optimized signal suitable for reproduction. The block overlap means 11 forms the final, speech-optimized signal block and presents this to the loudspeaker 12.
The AGC means provides the required gains G0,f for alleviating the hearing loss of the hearing aid user, in the various hearing aid frequency bands. The AGC means 5 is connected to one input of a summation point 9, feeding it with a first set of gain values G0,f, for each hearing aid frequency band, based on the compressor characteristics and the specific hearing loss of the hearing aid user. In variations of the embodiment of
Furthermore the gain values G0,f are fed to the speech optimization unit 8 in order to calculate the speech intelligibility value.
The AGC means 5 may be implemented as a multiband compressor, for instance of the kind described in WO-A1-2007/025569.
After optimizing the speech intelligibility, preferably by means of an iterative algorithm shown below with reference to
In variations of the embodiment of
Reference is now given to
If the new SI value is larger than the initial value SI0, the routine continues in step 106, where G′0,f is set to G′f. Otherwise, the routine continues in step 107, where the new gain value G′f is set to G′0,f minus the incremental gain value ΔGf.
The routine then continues in step 111 by examining the hearing aid frequency band number f to see if the highest number of frequency bands fmax has been reached.
If, however, the new SI value calculated in step 104 is smaller than the initial value SI0, then the new gain value G′f is set to G′0,f minus the gain value increment ΔGf in step 107. The proposed speech intelligibility value SI is then calculated again for the new gain value G′f in step 108.
The proposed speech intelligibility SI is again compared to the initial value SI0 in step 109. If the new value SI is larger than the initial value SI0, the routine continues in step 110, where G′0,f is set to G′f.
If neither an increased nor a decreased gain value ΔGf results in an increased SI, the initial gain value G′0,f is preserved for the hearing aid frequency band f. The routine continues in step 111 by examining the band number f to see if the highest number of frequency bands fmax has been reached. If this is not the case, the routine continues via step 113, incrementing the number of the frequency band f subject to optimization by one. Otherwise, the routine continues in step 112 by comparing the new SI vector with the old vector SI0 to determine if the difference between them is smaller than a tolerance value ε.
If any of the f values of SI calculated in each band in either step 104 or step 108 are substantially different from SI0, i.e. the vectors differ by more than the tolerance value ε, the routine proceeds towards step 115, where the iteration counter k is compared to a maximum iteration number kmax.
If k is smaller than kmax, the routine continues in step 114, by defining a new gain increment ΔG by multiplying the current gain increment with a factor 1/d, where d is a positive number greater than 1, and incrementing the iteration counter k. The routine then continues by iteratively calculating all fmax frequency bands again in step 101, starting over with the first frequency band f=1. If k is larger than kmax, the new, individual gain values are transferred to the transfer function of the signal processor in step 116 and terminates the optimization routine in step 117. This is also the case if the SI did not increase by more than the tolerance value ε in any band (step 112). Then the need for further optimization no longer exists.
In essence, the algorithm traverses the fmax-dimensional vector space of fmax hearing aid frequency band gain values iteratively, optimizing the gain values for each frequency band with respect to the largest SI value. Practical values for the tolerance ε and the factor d in this example are 0.005 and 2, respectively. The number of frequency bands fmax may be set to 12 or 15 frequency bands. A convenient starting point for ΔG is 10 dB. Simulated tests have shown that the algorithm usually converges after four to six iterations, i.e. a point is reached where the difference between the old SI0 vector and the new SI vector becomes negligible and execution of subsequent iterative steps may be terminated. Thus, this algorithm is very effective in terms of processing requirements and speed of convergence.
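The band-by-band search can be sketched as below; `si` stands for any callable returning the speech intelligibility measure for a gain vector, and testing convergence on the improvement within one pass is a simplification of the flow chart, which is not reproduced here:

```python
def optimize_gains(si, gains, delta0=10.0, d=2.0, eps=0.005, k_max=6):
    """Coordinate search maximizing si(gains): in each band, try raising
    and then lowering the gain by the current increment, keeping whichever
    step improves SI; after each full pass the increment is divided by d."""
    gains = list(gains)
    delta = delta0
    best = si(gains)
    for _ in range(k_max):
        pass_start = best
        for f in range(len(gains)):
            for step in (delta, -delta):
                trial = list(gains)
                trial[f] += step
                val = si(trial)
                if val > best:          # keep the improving step for this band;
                    gains, best = trial, val
                    break               # otherwise the initial gain is preserved
        if best - pass_start < eps:     # negligible improvement: converged
            break
        delta /= d                      # refine the gain increment by the factor 1/d
    return gains
```

With ΔG starting at 10 dB and d = 2, such a search typically needs only a few passes, consistent with the four to six iterations reported above.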
According to a variation of the invention, the optimised gain vector can be determined using an estimation of the gradient of a speech intelligibility measure as a function of the gain vector.
According to yet another variation of the invention, the optimised gain vector can be determined as disclosed in EP-B1-1522206 in
Reference is now given to
The flow chart comprises a start point block 200 connected to a subsequent block 201, where an initial hearing aid frequency band number f=1, an initial iteration number m=1, an SII gain vector G′ and a penalty gain vector Gpen are set. The elements of the gain vectors G′f and Gpen,f represent the gain values corresponding to each of the hearing aid frequency bands f.
The estimated speech vector S, the estimated noise vector N and the gain values G0,f, that are required for the calculation of the gradient of the speech intelligibility measure and the penalty gain vector Gpen, are initialized once and kept constant throughout the optimization of the SII gain vector G′.
The values of the penalty gains are selected from the range between zero and −18 dB. Further details concerning one example of how to provide the penalty gain vector can be found in the patent application PCT/EP2011/073746, filed 22 Dec. 2011, published as WO-A1-2013091702, particularly from page 14, line 16 to page 16, line 2.
In the following step 202, the gradient of the speech intelligibility measure in the point G′f is determined. In the following the gradient in the point G′f may also be denoted a gradient element or a partial derivative of the gradient.
After step 202, the gradient of the speech intelligibility measure is modified in step 203 by adding a term comprising the difference between the penalty gain value Gpen,f and the gain value G′f multiplied by a proportionality constant K.
In step 204 the sign of the modified gradient is determined. If the new modified gradient is positive the algorithm continues in step 205, where a new gain value G′f is set to the current gain value G′f plus a gain value increment Gm,f. Otherwise, the routine continues in step 206, where the new gain value G′f is set to the current gain value G′f minus the gain value increment Gm,f. The gain value increment Gm,f may be a constant or it may vary as a function of both iteration number m and/or frequency band number f.
The algorithm then continues in step 207 by examining the frequency band number f to see if the highest number of frequency bands fmax has been reached. If this is not the case the frequency band number f is updated by one in step 209, and the algorithm proceeds to step 202.
According to a variation of the current embodiment the gain value increment Gm depends on the iteration number m such that the magnitude of the gain value increment decreases with increasing iteration number.
When the highest number of frequency bands fmax has been reached, the algorithm continues in step 208 by examining the iteration number m to see if the highest iteration number of mmax has been reached. If this is not the case the iteration number m is updated by one, the frequency band number f is reset to one in step 210, and the algorithm proceeds to step 202.
The inventors have found that when the highest number of iterations mmax has been reached the need for further optimization no longer exists, and the resulting speech-optimized gain value vector G′ is transferred to the transfer function of the signal processor in step 211 and the optimization routine is terminated.
In essence, the algorithm traverses the fmax-dimensional vector space of fmax frequency band gain values iteratively, optimizing the gain values G′f for each frequency band with respect to both speech intelligibility and listening comfort.
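A sketch of this gradient-based pass is given below, assuming that `grad_si(g, f)` supplies the partial derivative of the SI measure for band f (the actual analytical expression is not reproduced here) and that the gain increment decreases as 1/m, which is only one example of a shrinking step size:

```python
def optimize_gains_gradient(grad_si, g_init, g_pen, K=0.1, m_max=8):
    """Sign-of-modified-gradient search: for every band f the SI gradient
    is augmented with a penalty term K*(g_pen[f] - g[f]) pulling the gain
    toward the comfort (penalty) gain, and the gain moves one step up or
    down according to the sign of the modified gradient."""
    g = list(g_init)
    for m in range(m_max):
        step = 1.0 / (m + 1)   # gain increment shrinking with iteration number m
        for f in range(len(g)):
            modified = grad_si(g, f) + K * (g_pen[f] - g[f])
            g[f] += step if modified > 0 else -step
    return g
```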
The gradient of the speech intelligibility measure may be derived using an analytical expression which is the preferred option, but it may also be calculated based on results of empirical studies.
Reference is now given to
The SI algorithm initializes in step 401, and in step 402 the SI algorithm determines the number of frequency bands fmax and the center frequencies CF of the frequency bands.
According to the present embodiment only 15 frequency bands are used and the following center frequencies have been selected (all measured in Hz): 128, 220, 348, 489, 634, 796, 1002, 1264, 1594, 2006, 2530, 3213, 4155, 5688, 8720.
Thus the inventors have surprisingly found that only a limited number of frequency bands, above 10 and preferably between 12 and 18, is required to obtain a sufficiently precise model of the excitation pattern for a hearing-impaired user. Based on this, the inventors have shown that hearing aid frequency bands that are already available in many modern hearing aids can be used to model the excitation patterns.
In step 403 an estimate of a noise signal level and a speech signal level is determined for a multitude of frequency bands, hereby providing an assumed noise vector and an assumed speech vector.
In step 404 the insertion gain to be applied by the hearing aid, in said multitude of frequency bands, is applied to the assumed noise and speech vectors, hereby providing processed noise and speech vectors.
In step 405 the acoustical effect of the middle ear on the transmission of sound from the eardrum to the cochlea (the inner ear) is taken into account using a transfer function, which is specified in ANSI S3.4-2007. The end result of this step is a specification of the spectrum of the estimated sound levels applied to the cochlea.
According to an advantageous variation the middle ear transfer function can be determined based on air-bone gap audiometry for the individual hearing aid user, whereby a more precise and individualized estimation of the middle ear transfer function can be obtained.
In step 406 the processed noise and speech vectors are filtered in a corresponding set of wideband filters, wherein each of said wideband filters Ww is defined by the equations:
wherein f is the sound frequency, CF is the center frequency of the wideband filter and tl(CF) and tu(CF) are parameters describing the shape of the filter for frequencies below and above the center frequency CF, respectively.
In step 407 the excitation Ew(CF) at the output of a wideband filter with center frequency CF given an input with power spectrum X(f) is given by:
Ew(CF)=∫X(f)·Ww(f,CF)df
According to the present embodiment the power spectrum X(f) is obtained based on the estimated noise or speech levels in the hearing aid frequency bands.
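The excitation integral of step 407 can be evaluated numerically; the trapezoidal rule below is only one possible discretization, and the filter weights would come from the wideband filter equations of step 406:

```python
def excitation(freqs_hz, power_spectrum, filter_weights):
    """Ew(CF) = integral of X(f) * Ww(f, CF) df, evaluated with the
    trapezoidal rule; `filter_weights` holds Ww(f, CF) sampled at the
    same frequencies as `power_spectrum`."""
    y = [x * w for x, w in zip(power_spectrum, filter_weights)]
    return sum(0.5 * (y[i] + y[i + 1]) * (freqs_hz[i + 1] - freqs_hz[i])
               for i in range(len(y) - 1))
```

The narrowband excitation En(CF) of step 409 has the same form, with the narrowband weights Wn in place of Ww.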
In step 408 the processed noise and speech vectors are filtered in a corresponding set of narrowband filters, wherein each of said narrowband filters Wn is defined by the equations:
wherein pl(CF) and pu(CF) are parameters describing the shape of the filters for frequencies below and above the center frequency CF, respectively, and wherein G(CF) represents a linear gain that is controlled by the output from a wideband filter as specified in the following.
In step 409 the excitation En at the output of a narrowband filter given an input with power spectrum X(f) is given by:
En(CF)=∫X(f)·Wn(f,CF)df
The excitation Ew(CF) at the output of a wideband filter Ww(CF) is used to control the corresponding linear gain G(CF) of a narrowband filter, given in the equations of step 408, according to the formulas:
wherein GdB,Max(CF) is the maximum gain, in dB, of the narrowband filter having the center frequency CF. GdB,Max(CF) is determined based on the Outer Hair Cell loss (OHCL):
GdB,Max(CF)=GdB,Max,normal(CF)−OHCLdB(CF)
wherein GdB,Max,normal(CF) represents the maximum gain of the narrowband filter for a normal-hearing person. This corresponds to the gain of the narrowband filter at very low input levels. When the input level is increased, the gain of the narrowband filter GdB(CF) is reduced as given by the formulas above. This in turn leads to reduced frequency selectivity and reduced compressive nonlinearity. When OHCLdB(CF)=0 dB there is no outer hair cell loss.
In step 410 the excitation at the output of the narrowband and wideband filters are summed, hereby providing the summed excitations EdB(CF).
In step 411 the summed excitations EdB(CF) are modified to include the effect of Inner Hair Cell Loss (IHCL), hereby providing the resultant excitation EdB(CF) given by:
EdB(CF)=EdB(CF)−IHCLdB(CF)
The resultant excitation will in the following be denoted EPnoise if derived from a noise spectrum, and EPspeech if derived from a speech spectrum.
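The two hair-cell-loss corrections (steps 409 and 411) reduce to simple dB subtractions, sketched here for a single center frequency; the numeric values in the test are hypothetical illustration values:

```python
def narrowband_max_gain_db(g_max_normal_db, ohcl_db):
    """GdB,Max(CF) = GdB,Max,normal(CF) - OHCLdB(CF): outer hair cell loss
    lowers the maximum gain of the narrowband (cochlear) filter."""
    return g_max_normal_db - ohcl_db

def resultant_excitation_db(summed_excitation_db, ihcl_db):
    """EdB(CF) = EdB(CF) - IHCLdB(CF): inner hair cell loss attenuates the
    summed narrowband and wideband excitation."""
    return summed_excitation_db - ihcl_db
```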
By incorporating a model of the hair cell loss proportion (i.e. outer hair cell loss relative to inner hair cell loss) as a function of the hearing loss threshold, the inventors have demonstrated that speech intelligibility estimation, relying on inner and outer hair cell losses, can be provided based only on a measurement of the hearing loss threshold.
According to the present embodiment, the proportion of inner and outer hair cell loss is estimated based on the following table:
However, in variations the inner hair cell loss and outer hair cell loss may also be determined using well known measurement techniques.
In step 412 a self-speech-masking (SSM) spectrum is estimated based on calculated resultant excitation spectrums derived from processed noise and speech spectra according to the formula:
SSM(CF)=k1·(EPspeech(CF−1)+EPspeech(CF+1))+EPnoise(CF)
where k1 is a constant that is set to 1, but which, according to variations, may be selected from the range between zero and one.
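The self-speech-masking estimate of step 412 can be sketched as below; handling of the first and last bands (where CF−1 or CF+1 does not exist) is not specified in the text, so treating the missing neighbour as contributing nothing is an assumption:

```python
def self_speech_masking(ep_speech, ep_noise, k1=1.0):
    """SSM(CF) = k1*(EPspeech(CF-1) + EPspeech(CF+1)) + EPnoise(CF),
    computed over all bands; a missing neighbour band at the edges is
    assumed to contribute zero."""
    n = len(ep_speech)
    ssm = []
    for i in range(n):
        lower = ep_speech[i - 1] if i > 0 else 0.0
        upper = ep_speech[i + 1] if i < n - 1 else 0.0
        ssm.append(k1 * (lower + upper) + ep_noise[i])
    return ssm
```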
In step 413 a measure D(CF) corresponding to an Equivalent Disturbance Level as defined in ANSI S3.5-1997 is derived as the larger of the hearing loss spectrum and the self-speech-masking spectrum SSM(CF).
In step 414 a speech level distortion factor L(CF) is calculated as:
L(CF)=1−(EPspeech(CF)−U(CF)−k4)/k5
The standard speech spectrum level at normal vocal effort, U(CF), can be obtained from Table 1 of ANSI S3.5-1997. The inventors have found that k4 can be set to 7 and k5 to 40; however, an appropriate value of k4 can also be selected from the range between 1 and 30, and a value for k5 from the range between 1 and 60.
The band audibility A is calculated in step 415 as:
A(CF)=L(CF)·K(CF)·I(CF)
The temporary variable K(CF), which may be denoted audible speech, is calculated according to the formula:
K(CF)=(EPspeech(CF)−D(CF)+k2)/k3
wherein k2 is set to 15 and k3 is set to 30, wherein, according to variations, k2 may be in the range between 1 and 30 and k3 in the range between 1 and 60, and wherein I(CF) is the band importance function that is used to weight the audibility with respect to speech frequencies.
The total speech intelligibility index SII is calculated in step 416 as the sum of the band audibilities in each of the hearing aid frequency bands.
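Steps 414 to 416 combine into a short calculation; the clamping of L(CF) and K(CF) to the interval [0, 1] is an assumption carried over from the conventions of the SII standard, since the text gives only the raw formulas:

```python
def band_audibility_sii(ep_speech, d, u, importance, k2=15.0, k3=30.0, k4=7.0, k5=40.0):
    """Per-band audibility A(CF) = L(CF)*K(CF)*I(CF) and total SII as the
    sum over bands; d is the equivalent disturbance level D(CF), u the
    standard speech spectrum level U(CF), importance the band importance
    function I(CF)."""
    sii = 0.0
    for ep, dv, uv, i in zip(ep_speech, d, u, importance):
        L = 1.0 - (ep - uv - k4) / k5   # speech level distortion factor
        K = (ep - dv + k2) / k3         # audible speech
        L = min(max(L, 0.0), 1.0)       # assumed clamping, per SII convention
        K = min(max(K, 0.0), 1.0)
        sii += L * K * i
    return sii
```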
The present application is a continuation-in-part of application PCT/EP2012/076565, filed on Dec. 21, 2012, in Europe, and published as WO 2014094865 A1.
U.S. Patent Documents:
5,687,241 A, Ludvigsen, Nov. 1997
7,328,151 B2, Muesch, Feb. 2008
2002/0057808 A1, Goldstein, May 2002
2009/0304215 A1, Hansen, Dec. 2009
2010/0250242 A1, Li, Sep. 2010
Foreign Patent Documents:
EP 1522206, Oct. 2007
WO 99/34642, Jul. 1999
WO 2004/008801, Jan. 2004
WO 2007/025569, Mar. 2007
WO 2012/076045, Jun. 2012
WO 2013/091702, Jun. 2013
Other Publications:
International Search Report for PCT/EP2012/076565, dated Apr. 24, 2013.
H. Fletcher and R. H. Galt, "The perception of speech and its relation to telephony", J. Acoust. Soc. Am. 22, pp. 89-151 (1950).
ANSI S3.5-1997, American National Standard, Methods for Calculation of the Speech Intelligibility Index, May 18, 2007.
Publication: US 2015/0281857 A1, Oct. 2015.
Related Application Data: Parent PCT/EP2012/076565, filed Dec. 2012; Child U.S. Ser. No. 14/739,372.