This application relates generally to hearing assistance systems and in particular to a method and apparatus for detecting user activities from within a hearing aid using sensors employing micro electro-mechanical structures (MEMS).
For hearing aid users, certain physical activities induce low-frequency vibrations that excite the hearing aid microphone in such a way that the low frequencies are amplified by the signal processing circuitry, thereby causing excessive buildup of unnatural sound pressure within the residual ear-canal air volume. The hearing aid industry has adopted the term “ampclusion” for these phenomena, as noted in “Ampclusion Management 101: Understanding Variables”, The Hearing Review, pp. 22-32, August (2002) and “Ampclusion Management 102: A 5-step Protocol”, The Hearing Review, pp. 34-43, September (2002), both authored by F. Kuk and C. Ludvigsen. In general, ampclusion can be caused by such activities as chewing or heavy footfall motion during walking or running. These activities induce structural vibrations within the user's body that are strong enough to be sensed by a MEMS accelerometer that is properly positioned within the earmold of a hearing assistance device. Another user activity that can excite such a MEMS accelerometer is simple speech, particularly the vowel sounds [i] as in piece and [u] as in rule, as enunciated according to the International Phonetic Alphabet. Yet another activity that can be sensed by a MEMS accelerometer is automobile motion or acceleration, which is commonly perceived as excessive rumble by passengers wearing hearing aids. Automobile motion differs from the previously-mentioned activities in that its effect, i.e., the rumble, is generally produced by acoustical energy propagating from the engine of the automobile to the microphone of the hearing aid. The output signal(s) of a MEMS accelerometer can be processed such that the device can detect automobile motion or acceleration relative to gravity. One additional user activity, not related to ampclusion, that can be detected by a MEMS accelerometer is head tilt.
Finally, it should be noted that a MEMS gyrator or a MEMS microphone can be used instead of a MEMS accelerometer to detect all of the above-referenced user activities. It is understood that a MEMS acoustical microphone may be modified to function as a mechanical or vibration sensor. For example, in one embodiment the acoustical inlet of the MEMS microphone is sealed. Other techniques for modifying an acoustical microphone may be employed without departing from the scope of the present subject matter. In addition to the translational acceleration estimates provided by a MEMS accelerometer, a MEMS gyrator provides three additional rotational acceleration estimates.
Thus, there is a need in the art for a detection scheme that can reliably identify user activities and trigger the signal processing algorithms and circuitry to process, filter, and equalize the signal so as to mitigate the undesired effects of ampclusion and other user activities. In all of the activities described in the previous paragraph, the MEMS device acts as a detection trigger to alert the hearing aid's signal processing algorithm to specific user activities, thereby allowing the algorithm to filter and equalize its frequency response according to each activity. Such a detection scheme should be computationally efficient, consume low power, require small physical space, and be readily reproducible for cost-effective production assembly.
The above-mentioned problems and others not expressly discussed herein are addressed by the present subject matter and will be understood by reading and studying this specification. The present system provides methods and apparatus to detect various motion events that affect audio signal processing and to apply appropriate filters to compensate audio processing related to the detected motion events. In one embodiment an apparatus is provided with a micro electro-mechanical structure (MEMS) to sense motion and a processor to compare the sensed motion to signature motion events and provide further processing to adjust filters to compensate for audio effects resulting from the detected motion events.
This Summary is an overview of some of the teachings of the present application and not intended to be an exclusive or exhaustive treatment of the present subject matter. Further details about the present subject matter are found in the detailed description and appended claims. The scope of the present invention is defined by the appended claims and their legal equivalents.
Various embodiments are illustrated by way of example in the figures of the accompanying drawings. Such embodiments are demonstrative and not intended to be exhaustive or exclusive embodiments of the present subject matter.
The following detailed description of the present invention refers to subject matter in the accompanying drawings which show, by way of illustration, specific aspects and embodiments in which the present subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present subject matter. References to “an”, “one”, or “various” embodiments in this disclosure are not necessarily to the same embodiment, and such references contemplate more than one embodiment. The following detailed description is demonstrative and therefore not exhaustive, and the scope of the present subject matter is defined by the appended claims and their legal equivalents.
There are many benefits in using the output(s) of a properly-positioned MEMS accelerometer as the detection sensor for user activities. Consider, for example, that the sensor output is not degraded by acoustically-induced ambient noise; the user activity is detected via a structural path within the user's body. Detection and identification of a specific event typically occurs within approximately 2 msec from the beginning of the event. For speech detection, a quick 2 msec detection is particularly advantageous. If, for example, a hearing aid microphone is used as the speech detection sensor, a (≈0.8 msec) time delay would exist due to acoustical propagation from the user's vocal cords to the user's hearing aid microphone, thereby intrinsically slowing any speech detection sensing. This 0.8 msec latency is effectively eliminated by the structural detection of a MEMS accelerometer sensor in an earmold. Considering that a DSP circuit delay for a typical hearing aid is ≈5 msec, and that a MEMS sensor positively detects speech within 2 msec from the beginning of the event, the algorithm is allowed ≈3 msec to implement an appropriate filter for the desired frequency response in the ear canal. These filters can be, but are not limited to, low order high-pass filters to mitigate the user's perception of rumble and boominess.
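As one illustration of the kind of low order high-pass filter mentioned above, the following sketch implements a first-order IIR high-pass that could be switched in once a trigger fires. The cutoff frequency and the 12.8 kHz sampling rate are illustrative assumptions, not values mandated by this specification.

```python
import math

def highpass_coeff(fc_hz, fs_hz):
    """Coefficient for a first-order IIR high-pass:
    y[k] = a * (y[k-1] + x[k] - x[k-1])."""
    rc = 1.0 / (2.0 * math.pi * fc_hz)  # analog RC time constant
    dt = 1.0 / fs_hz                     # sample period
    return rc / (rc + dt)

def highpass(x, fc_hz=200.0, fs_hz=12800.0):
    """Apply the first-order high-pass to a list of samples,
    attenuating the low-frequency rumble/boominess content."""
    a = highpass_coeff(fc_hz, fs_hz)
    y = [x[0]]
    for k in range(1, len(x)):
        y.append(a * (y[-1] + x[k] - x[k - 1]))
    return y
```

A first-order section like this is attractive here because it needs only one multiply and two adds per sample, consistent with the low-power, low-complexity goals stated above.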
The most general detection of a user's activities can be accomplished by digitizing and comparing the amplitude of the output signal(s) of the MEMS accelerometer to some predetermined threshold. If the threshold is exceeded, the user is engaged in some activity causing higher acceleration as compared to a quiescent state. Using this approach, however, the sensor cannot distinguish between a targeted, desired activity and any other general motion, thereby producing “false triggers” for the desired activity. A more useful approach is to compare the digitized signal(s) to stored signature(s) that characterize each of the user events, and to compute a (squared) correlation coefficient between the real-time signal and the stored signals. When the squared correlation coefficient exceeds a predetermined threshold, the hearing aid filtering algorithms are alerted to a specific user activity, and the appropriate equalization of the frequency response is implemented. The squared correlation coefficient γ² is defined as:

$$\gamma^{2}(x)=\frac{\left[\sum_{s=1}^{n}\left(f_{1}(x-n+s)-\bar{f}_{1}\right)\left(f_{2}(s)-\bar{f}_{2}\right)\right]^{2}}{\sum_{s=1}^{n}\left(f_{1}(x-n+s)-\bar{f}_{1}\right)^{2}\,\sum_{s=1}^{n}\left(f_{2}(s)-\bar{f}_{2}\right)^{2}}$$
where x is the sample index for the incoming data, f1 is the last n samples of incoming data, f2 is the n-length signature to be recognized, and s is indexed from 1 to n. Vector arguments with overstrikes are taken as the mean value of the array, i.e.,

$$\bar{f}=\frac{1}{n}\sum_{s=1}^{n}f(s)$$
There are many benefits in using the squared correlation coefficient as the detection statistic for user activities. Empirical data indicate that merely 2 msec of digitized information (an n value of 24 samples at a sampling rate of 12.8 kHz) are needed to sufficiently capture the types of user activities described previously in this discussion. Thus, five signatures having 24 samples at 8 bits per sample require merely 960 bits of storage memory within the hearing aid. It should be noted that the cross correlation computation is immune to amplitude disparity between the incoming data f1 and the stored signature f2. In addition, it is computed completely in the time domain using basic {+ − × ÷} operators, without the need for the computationally-expensive butterfly networks of an FFT. Empirical data also indicate that the detection threshold is the same for all activities, thereby reducing detection complexity.
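The time-domain detector described above can be sketched as follows. The window length n = 24 matches the 2 msec window at 12.8 kHz noted in the text; the 0.8 threshold is an illustrative assumption, since the specification leaves the threshold to be tuned per user during fitting.

```python
def gamma_squared(f1, f2):
    """Squared correlation coefficient between two equal-length frames,
    using only {+ - * /} operators, entirely in the time domain."""
    n = len(f1)
    m1 = sum(f1) / n                      # mean of incoming frame
    m2 = sum(f2) / n                      # mean of stored signature
    num = sum((a - m1) * (b - m2) for a, b in zip(f1, f2))
    d1 = sum((a - m1) ** 2 for a in f1)
    d2 = sum((b - m2) ** 2 for b in f2)
    if d1 == 0.0 or d2 == 0.0:
        return 0.0                        # flat frame: no correlation
    return (num * num) / (d1 * d2)

def detect(recent_samples, signature, threshold=0.8):
    """Compare the last n samples against a stored n-length signature."""
    n = len(signature)
    return gamma_squared(recent_samples[-n:], signature) >= threshold
```

Note that scaling or offsetting the incoming frame leaves γ² unchanged, which illustrates the amplitude immunity claimed above.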
Although a single MEMS sensor is used, the sensing of various user activities is typically exclusive, and separate signal processing schemes can be implemented to correct the frequency response of each activity. The types of user activities that can be characterized include speech, chewing, footfall, head tilt, and automobile acceleration or deceleration. Speech vowels of [i] as in piece and [u] as in rule typically trigger a distinctive sinusoidal acceleration at their fundamental formant region of a (few) hundred hertz, depending on gender and individual physiology. Chewing typically triggers a very low frequency (<10 Hz) acceleration with a unique time signature. Although chewing of crunchy objects can induce some higher frequency content that is superimposed on top of the low frequency information, empirical data have indicated that it has negligible effect on detection precision. Footfall too is characterized by low frequency content, but with a time signature distinctly different from chewing. Head tilt can be detected by low-pass filtering and differentiating the output signals from a multi-axis MEMS accelerometer.
The MEMS accelerometer can be designed to detect any or all of the three translational acceleration components of a rectangular coordinate system. Typically, a dedicated micro-sensor is used in a 3-axis MEMS accelerometer to detect both the x and y components of acceleration, and a different micro-sensor is used to detect the z component. In our application, a 3-axis accelerometer in the earmold could be oriented such that the relative z component is approximately parallel to the central axis of the ear canal, and the x and y components define a plane that is approximately perpendicular to the surface of the earmold in the immediate vicinity of the ear canal tip. Alternatively, the MEMS accelerometer could be oriented such that the x and y components define any relative plane that is tangent to the surface of the earmold in the immediate vicinity of the side of the ear canal, and the z component points perpendicularly inward towards the interior of the earmold. Although specific orientations have been described herein, it will be appreciated by those of ordinary skill in the art that other orientations are possible without departing from the scope of the present subject matter. In each of these orientations, a calibration procedure can be performed in-situ during the hearing aid fitting process. For example, the user could be instructed during the fitting/calibration process to do the following: 1) chew a nut, 2) chew a soft sandwich, 3) speak the phrase: “teeny weeny blue zucchini”, and 4) walk a known distance briskly. These events are digitized and stored for analysis, either on board the hearing aid itself or on the fitting computer following some data transfer process. An algorithm clips and conditions the important events, and these clipped events are stored in the hearing aid as “target” events. The MEMS detection algorithm is then engaged and the four activities described above are repeated by the user.
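The fitting-time capture step above could be sketched as follows. The energy-based clipping rule and the activity names are illustrative assumptions; the specification leaves the exact clipping and conditioning algorithm open.

```python
def clip_event(samples, n=24):
    """Clip a digitized recording to the n-sample window of highest
    energy, taken here as the 'important event' to store as a target."""
    best_start, best_energy = 0, -1.0
    for start in range(len(samples) - n + 1):
        e = sum(v * v for v in samples[start:start + n])
        if e > best_energy:
            best_start, best_energy = start, e
    return samples[best_start:best_start + n]

def build_signature_store(recordings):
    """recordings: dict mapping an activity name (e.g. 'chew_nut',
    'speech', 'walk') to its digitized accelerometer samples.
    Returns the per-activity target signatures to store on the aid."""
    return {name: clip_event(data) for name, data in recordings.items()}
```

At 24 samples and 8 bits per sample, a store of five such signatures occupies the 960 bits of memory computed earlier in this discussion.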
Detection thresholds for the squared correlation coefficient and ampclusion filtering characteristics are adjusted until positive identification and perceived sound quality are acceptable to the user. The adjusted thresholds for each individual user will depend on the orientation of the MEMS accelerometer, the number of active axes in the MEMS accelerometer, and the relative strength of signal to noise. For the walking task, the accelerometer can be calibrated as a pedometer, and the hearing aid can be used to inform the user of accomplished walking distance status. In addition, head tilt could be calibrated by asking the user to do the following from a standing or sitting position looking straight ahead: 1) rotate the head slowly to the left or right, and 2) rotate the head such that the user's eyes are pointing directly upwards. These events are digitized as done previously, and the accelerometer output is filtered, conditioned, and differentiated appropriately to give an estimate of head tilt in units of mV output per degree of head tilt, or some equivalent. This information could be used to adjust head related transfer functions, or as an alert to notify that the user has fallen or is falling asleep.
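One way the head tilt estimate above could be conditioned is sketched below: low-pass filter each axis of a multi-axis accelerometer to isolate the quasi-static gravity component, then take the angle between the ear-canal z axis and gravity. The single-pole smoother and the atan2 formulation are assumptions for illustration; the specification leaves the exact filtering and differentiation to the fitting process.

```python
import math

def lowpass(samples, alpha=0.05):
    """Single-pole IIR smoother that isolates the quasi-static
    (gravity) component of one accelerometer axis."""
    y = samples[0]
    out = []
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out

def tilt_degrees(ax, ay, az):
    """Per-sample angle, in degrees, between the z axis (nominally
    aligned with the ear canal) and the sensed gravity vector."""
    gz = lowpass(az)
    gxy = [math.hypot(x, y) for x, y in zip(lowpass(ax), lowpass(ay))]
    return [math.degrees(math.atan2(r, z)) for r, z in zip(gxy, gz)]
```

A per-degree sensitivity (e.g., mV per degree) could then be obtained by regressing this angle estimate against the raw output during the two calibration motions described above.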
It is understood that a MEMS accelerometer or gyrator can be employed in either a custom earmold in various embodiments, or a standard earmold in various embodiments. Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that other embodiments are possible without departing from the scope of the present subject matter.
The ITE device 100 of the embodiment illustrated in
The embodiment of
The embodiment of
The embodiment of
The middle panel of
The present subject matter relates to a MEMS accelerometer; however, it is understood that other accelerometer designs and MEMS sensors may be substituted for the MEMS accelerometer.
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Ser. No. 60/973,399, filed on Sep. 18, 2007, which is incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4598585 | Boxenhorn et al. | Jul 1986 | A |
5091952 | Williamson et al. | Feb 1992 | A |
5390254 | Adelman | Feb 1995 | A |
5692059 | Kruger | Nov 1997 | A |
5796848 | Martin | Aug 1998 | A |
6310556 | Green et al. | Oct 2001 | B1 |
6330339 | Ishige et al. | Dec 2001 | B1 |
6631197 | Taenzer | Oct 2003 | B1 |
7209569 | Boesen | Apr 2007 | B2 |
7289639 | Abel et al. | Oct 2007 | B2 |
7433484 | Asseily et al. | Oct 2008 | B2 |
7778434 | Juneau et al. | Aug 2010 | B2 |
7983435 | Moses | Jul 2011 | B2 |
8005247 | Westerkull | Aug 2011 | B2 |
20010007050 | Adelman | Jul 2001 | A1 |
20060029246 | Boesen | Feb 2006 | A1 |
20060159297 | Wirola et al. | Jul 2006 | A1 |
20060280320 | Song et al. | Dec 2006 | A1 |
20070036348 | Orr | Feb 2007 | A1 |
20070053536 | Westerkull | Mar 2007 | A1 |
20070167671 | Miller, III | Jul 2007 | A1 |
20080205679 | Darbut et al. | Aug 2008 | A1 |
20100172523 | Burns et al. | Jul 2010 | A1 |
20100172529 | Burns et al. | Jul 2010 | A1 |
Number | Date | Country |
---|---|---|
1063837 | Dec 2000 | EP |
2040490 | Nov 2012 | EP |
WO-0057616 | Sep 2000 | WO |
WO-2004057909 | Jul 2004 | WO |
WO-2004092746 | Oct 2004 | WO |
WO-2006076531 | Jul 2006 | WO |
Entry |
---|
“European Application Serial No. 08253052.8, Extended European Search Report mailed May 6, 2010”, 6 pgs. |
“European Application Serial No. 08253052.8, Response filed Dec. 1, 2010”, 14 pgs. |
Kuk, F., et al., “Ampclusion Management 101: Understanding Variables”, The Hearing Review, (Aug. 2002), 6 pages. |
Kuk, F., et al., “Ampclusion Management 102: A 5-Step Protocol”, The Hearing Review, (Sep. 2002), 6 pages. |
“U.S. Appl. No. 12/649,618, Non Final Office Action mailed Nov. 14, 2011”, 7 pgs. |
“U.S. Appl. No. 12/649,634, Non Final Office Action mailed Dec. 15, 2011”, 10 pgs. |
“U.S. Appl. No. 12/649,618, Advisory Action mailed Oct. 2, 2012”, 3 pgs. |
“U.S. Appl. No. 12/649,618, Final Office Action mailed Jun. 14, 2012”, 12 pgs. |
“U.S. Appl. No. 12/649,618, Final Office Action mailed Aug. 15, 2013”, 17 pgs. |
“U.S. Appl. No. 12/649,618, Non Final Office Action mailed Dec. 14, 2012”, 16 pgs. |
“U.S. Appl. No. 12/649,618, Response filed Apr. 16, 2012 to Non Final Office Action mailed Nov. 14, 2011”, 6 pgs. |
“U.S. Appl. No. 12/649,618, Response filed May 14, 2013 to Non Final Office Action mailed Dec. 14, 2012”, 9 pgs. |
“U.S. Appl. No. 12/649,618, Response filed Sep. 13, 2012 to Final Office Action mailed Jun. 14, 2012”, 10 pgs. |
“U.S. Appl. No. 12/649,634, Advisory Action mailed Sep. 26, 2012”, 3 pgs. |
“U.S. Appl. No. 12/649,634, Final Office Action mailed Jun. 15, 2012”, 13 pgs. |
“U.S. Appl. No. 12/649,634, Final Office Action mailed Aug. 15, 2013”, 18 pgs. |
“U.S. Appl. No. 12/649,634, Non Final Office Action mailed Dec. 14, 2012”, 18 pgs. |
“U.S. Appl. No. 12/649,634, Response filed Apr. 16, 2012 to Non Final Office Action mailed Dec. 15, 2011”, 8 pgs. |
“U.S. Appl. No. 12/649,634, Response filed May 14, 2013 to Non Final Office Action mailed Dec. 14, 2012”, 9 pgs. |
“U.S. Appl. No. 12/649,634, Response filed Sep. 17, 2012 to Final Office Action mailed Jun. 15, 2012”, 11 pgs. |
U.S. Appl. No. 12/649,618 , Response filed Oct. 15, 2013 to Final Office Action mailed Aug. 15, 2013, 11 pages. |
U.S. Appl. No. 12/649,618, Advisory Action mailed Oct. 28, 2013, 3 pages. |
U.S. Appl. No. 12/649,618, Examiner Interview Summary mailed Dec. 27, 2013, 3 pages. |
U.S. Appl. No. 12/649,634 , Response filed Oct. 15, 2013 to Final Office Action mailed Aug. 15, 2013, 11 pages. |
U.S. Appl. No. 12/649,634, Advisory Action mailed Oct. 28, 2013, 3 pages. |
U.S. Appl. No. 12/649,634, Examiner Interview Summary mailed Dec. 27, 2013, 3 pages. |
European Application Serial No. [Pending], Notice of Opposition mailed Aug. 6, 2013, 48 pages. |
European Application Serial No. 12191166.3, Partial European Search Report mailed Oct. 9, 2013, 5 pages. |
Number | Date | Country | |
---|---|---|---|
20090097683 A1 | Apr 2009 | US |
Number | Date | Country | |
---|---|---|---|
60973399 | Sep 2007 | US |