This document relates generally to hearing assistance systems and more particularly to methods and apparatus for hearing assistance in multiple-talker settings.
Modern hearing assistance devices, such as hearing aids, are electronic instruments worn in or around the ear that compensate for hearing losses of hearing-impaired people by specially amplifying sound. Hearing-impaired people encounter great difficulty with speech communication in multi-talker settings, particularly when attention needs to be divided between multiple talkers.
Current hearing assistance technology employs single-microphone noise reduction algorithms in order to increase perceived sound quality. This may also reduce listening effort in complex environments. However, current noise reduction algorithms do not increase speech intelligibility in multiple-talker settings. In contrast, use of static directionality systems such as microphone arrays or directional microphones in hearing aids can increase speech intelligibility by passing signals from the direction of a target talker, typically assumed to be located in front, and attenuating signals from other directions. Recently, adaptive directional systems have also been employed that adaptively follow a target with changing direction.
Directional systems only increase speech intelligibility when the direction of a target talker, or the talker of interest to the listener, relative to the listener's head remains constant in front of the listener or can be identified unambiguously. However, in many real-world situations, this is not the case. In a dinner conversation, for example, where speech from multiple concurrent talkers can reach the ear from different directions at similar sound levels, identifying the desired target location is a difficult problem. Active user feedback via a remote control may help in static scenarios where the spatial configuration does not change. However, user feedback would not be practical in situations where targets can change dynamically, such as two or more alternating talkers in a conversation.
Accordingly, there is a need in the art for improved systems and methods for enhancing speech intelligibility and reducing listening effort in multi-talker settings.
Disclosed herein, among other things, are systems and methods for hearing assistance in multiple-talker settings. One aspect of the present subject matter includes a method of operating a hearing assistance device for a user in an environment. A parameter is sensed relating to facing orientation, location, and/or talking activity of a talker in communication within the environment. In various embodiments, facing orientation, location, and talking activity of the talker is estimated based on the sensed parameter. A hearing assistance device parameter is adjusted based on the estimated facing orientation, location, and talking activity of the talker, according to various embodiments.
One aspect of the present subject matter includes a hearing assistance system including a hearing assistance device for a user in an environment. The system includes a sensor configured to sense a parameter related to facing orientation, location, and/or talking activity of a talker in communication within the environment. An estimation unit is configured to estimate facing orientation, location, and talking activity of the talker based on the sensed parameter. According to various embodiments, the system also includes a processor configured to adjust a hearing assistance device parameter based on the estimated facing orientation, location, and talking activity of the talker.
This Summary is an overview of some of the teachings of the present application and not intended to be an exclusive or exhaustive treatment of the present subject matter. Further details about the present subject matter are found in the detailed description and appended claims. The scope of the present invention is defined by the appended claims and their legal equivalents.
The following detailed description of the present subject matter refers to subject matter in the accompanying drawings which show, by way of illustration, specific aspects and embodiments in which the present subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present subject matter. References to “an”, “one”, or “various” embodiments in this disclosure are not necessarily to the same embodiment, and such references contemplate more than one embodiment. The following detailed description is demonstrative and not to be taken in a limiting sense. The scope of the present subject matter is defined by the appended claims, along with the full scope of legal equivalents to which such claims are entitled.
The present detailed description will discuss hearing assistance devices using the example of hearing aids. Hearing aids are only one type of hearing assistance device; other hearing assistance devices include, but are not limited to, those described elsewhere in this document. It is understood that their use in the description is intended to demonstrate the present subject matter, but not in a limiting, exclusive, or exhaustive sense.
Hearing-impaired people encounter great difficulty with speech communication in multi-talker settings, particularly when attention needs to be divided between multiple talkers. Current hearing assistance technology employs single-microphone noise reduction algorithms in order to increase perceived sound quality. This may also reduce listening effort in complex environments. However, current noise reduction algorithms do not increase speech intelligibility in multiple-talker settings. In contrast, use of static directionality systems such as microphone arrays or directional microphones in hearing aids can increase speech intelligibility by passing signals from the direction of a target talker, typically assumed to be located in front, and attenuating signals from other directions. Recently, adaptive directional systems have also been employed that adaptively follow a target with changing direction or changing targets. Directional systems only increase speech intelligibility when the direction of a target talker, or the talker of interest to the listener, relative to the listener's head remains constant in front of the listener or can be identified unambiguously. However, in many real-world situations, this is not the case. In a dinner conversation, for example, where speech from multiple concurrent talkers can reach the ear from different directions at similar sound levels, identifying the desired target location is a difficult problem. Active user feedback via a remote control may help in static scenarios where the spatial configuration does not change. However, user feedback will not be feasible in situations where targets can change dynamically, such as two or more alternating talkers in a conversation.
The present subject matter uses knowledge of real-time talker facing orientation in an acoustic scene to aid and assist listeners in multi-talker listening. Adding knowledge of facing orientation turns hearing assistance devices into intelligent agents. The intelligence derives from the fact that talkers and receivers face each other in most scenarios of human communication. One aspect of the present subject matter includes a hearing assistance system including a hearing assistance device for a user in an environment. The system includes a sensor configured to sense a parameter related to facing orientation of a talker in communication within the environment. An estimation unit is configured to estimate facing orientation of the talker based on the sensed parameter. According to various embodiments, the system also includes a processor configured to adjust a hearing assistance device parameter based on the estimated facing orientation of the talker. In various embodiments, a sensor is configured to sense a parameter related to a location of the talker, the estimation unit is configured to estimate the location of the talker based on the sensed parameter, and the processor is configured to adjust a hearing assistance device parameter based on the estimated location of the talker. In various embodiments, a sensor is configured to sense a parameter related to talking activity of the talker, the estimation unit is configured to estimate the talking activity of the talker based on the sensed parameter, and the processor is configured to adjust a hearing assistance device parameter based on the estimated talking activity of the talker. One or more of location and talking activity of the talker can be sensed, estimated and used by the system in addition to facing orientation, in various embodiments.
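The sense-estimate-adjust flow described above can be illustrated with a minimal sketch. All names here (`SensorReading`, `estimate_talker_state`, `adjust_directional_gain`) and the threshold values are hypothetical assumptions for illustration, not part of the disclosed system; in particular, the toy estimator simply treats a higher received level as evidence that the talker is facing the user (direct sound arrives at a higher level than sound radiated away from the listener).

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    angle_deg: float      # raw bearing of acoustic energy, degrees
    level_db: float       # sound level from that direction
    distance_m: float     # estimated range to the source (unused in this toy)

@dataclass
class TalkerState:
    location_deg: float   # estimated location (azimuth) of the talker
    facing_user: bool     # estimated facing orientation
    talking: bool         # estimated talking activity

def estimate_talker_state(reading: SensorReading,
                          facing_threshold_db: float = 55.0,
                          activity_threshold_db: float = 40.0) -> TalkerState:
    """Toy estimator: a talker facing the user arrives at a higher level,
    and any level above a floor counts as talking activity."""
    return TalkerState(
        location_deg=reading.angle_deg,
        facing_user=reading.level_db >= facing_threshold_db,
        talking=reading.level_db >= activity_threshold_db,
    )

def adjust_directional_gain(state: TalkerState, num_bands: int = 8) -> list:
    """Toy device-parameter adjustment: boost gain toward a facing, active
    talker; otherwise leave the per-band gains flat."""
    boost = 6.0 if (state.talking and state.facing_user) else 0.0
    return [boost] * num_bands
```

A deployed system would of course derive facing orientation from richer cues than level alone; the sketch only shows how the three estimates feed a parameter adjustment.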
The real-time estimates of talker locations, talker facing orientations, and/or talker activity provide the input to a decision module 104. The decision module 104 analyzes the configuration of talker locations, facing orientations, and talker activity in real-time and outputs a marker signal, which indicates the single most promising target listening direction. If no such target is determined, an idle marker is returned. In various embodiments, the marker tracks the most promising listening direction and activates an acoustic pointer that is perceived in this desired target direction. The marker is configured to control adaptive directionality and/or binary masking to enhance target intelligibility, in various embodiments.
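One way the decision rule above could be sketched: among talkers that are active and facing the user, pick the single most promising direction; if no such talker exists, return an idle marker. The `Talker` tuple, the `IDLE` sentinel, and the loudest-talker criterion are illustrative assumptions for this sketch, not disclosed details.

```python
from typing import List, NamedTuple, Optional

IDLE = None  # idle marker: no promising target direction found

class Talker(NamedTuple):
    azimuth_deg: float   # estimated location relative to the user's head
    facing_user: bool    # estimated facing orientation
    level_db: float      # estimated talking activity (level while speaking)

def decide_marker(talkers: List[Talker],
                  activity_floor_db: float = 40.0) -> Optional[float]:
    """Return the azimuth of the most promising target, or IDLE."""
    candidates = [t for t in talkers
                  if t.facing_user and t.level_db >= activity_floor_db]
    if not candidates:
        return IDLE
    # "Most promising" here = loudest facing talker; a real decision module
    # could also weight recency, location, and conversational turn-taking.
    return max(candidates, key=lambda t: t.level_db).azimuth_deg
```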
In one embodiment, the decision module performs a slow (i.e., on the order of minutes) cluster analysis on the talker locations. The subsequent processing then takes into account only those talkers that belong to the same cluster as the user, in various embodiments. For example, this can be a group of people sitting with the user around a table in a restaurant or a group sitting in a circle.
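The cluster analysis could take a form like the following sketch, where talker positions (x, y in meters) accumulated over minutes are grouped by single-linkage agglomeration and only talkers in the user's own cluster are retained. The function names and the 1.5 m linkage radius are assumptions for illustration.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def cluster(points: List[Point], radius_m: float = 1.5) -> List[int]:
    """Single-linkage clustering via union-find: points within radius_m of
    each other end up in the same cluster. Returns a root label per point."""
    labels = list(range(len(points)))   # each point starts as its own cluster

    def find(i: int) -> int:
        while labels[i] != i:           # follow parents to the root,
            labels[i] = labels[labels[i]]  # compressing the path as we go
            i = labels[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= radius_m:
                labels[find(i)] = find(j)   # merge the two clusters
    return [find(i) for i in range(len(points))]

def users_cluster(user: Point, talkers: List[Point],
                  radius_m: float = 1.5) -> List[int]:
    """Indices of the talkers that fall in the same cluster as the user."""
    labels = cluster([user] + talkers, radius_m)
    return [i for i in range(len(talkers)) if labels[i + 1] == labels[0]]
```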
When a talker 204 in the user's cluster faces the user 202 and speaks, the marker 210 is pointed at this talker 204 independent of the user's facing direction, as shown in the illustrated embodiment.
Next, the marker signal 210 is passed on to a sound processing unit 106. In various embodiments, the sound processing unit 106 executes one or more of the following processing steps: (1) When the marker signal changes its direction (with the exception of continuous rotations, which are due to rotations of the user's head) or when it changes from the idle to the active state, the sound processing unit synthesizes a short notification signal, such as a tonal beep or a short burst of broadband noise, that is localized in the direction of the marker. This is achieved by convolution with the appropriate head-related transfer function. Thus, the user's attention is drawn to the target direction. Note that such a notification signal is not to be used in situations where user head turns are penalized, such as while driving an automobile. (2) When the marker signal is active, the sound processing unit 106 acts as an adaptive directional system that amplifies the target sound in the direction of the marker relative to the sounds from other directions. (3) When the marker signal is active, the sound processing unit 106 employs binary masking to enhance sounds in the direction of the marker and attenuate all other sounds.
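The binary-masking step can be sketched as follows: given a per-band estimate of arrival direction, bands arriving from near the marker direction are passed and all others are attenuated. The angular tolerance and attenuation values are assumptions for this sketch, not disclosed parameters.

```python
from typing import List

def binary_mask(band_gains_db: List[float],
                band_directions_deg: List[float],
                marker_deg: float,
                tolerance_deg: float = 20.0,
                attenuation_db: float = -30.0) -> List[float]:
    """Apply a direction-based binary mask to per-band gains (in dB)."""
    out = []
    for gain, direction in zip(band_gains_db, band_directions_deg):
        # angular difference wrapped into [-180, 180)
        diff = (direction - marker_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= tolerance_deg:
            out.append(gain)                    # pass bands from the target
        else:
            out.append(gain + attenuation_db)   # attenuate all other bands
    return out
```

In practice the per-band direction estimates would come from inter-microphone comparisons (e.g., level or phase differences between the devices at the two ears); the sketch only shows the masking decision itself.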
The present subject matter aids communication in challenging environments in intelligent ways. It improves the communication experience for both users and talkers, for the latter by reducing the need to repeat themselves.
Various embodiments of the present subject matter support wireless communications with a hearing assistance device. In various embodiments the wireless communications can include standard or nonstandard communications. Some examples of standard wireless communications include link protocols including, but not limited to, Bluetooth™, IEEE 802.11 (wireless LANs), 802.15 (WPANs), 802.16 (WiMAX), cellular protocols including, but not limited to CDMA and GSM, ZigBee, and ultra-wideband (UWB) technologies. Such protocols support radio frequency communications and some support infrared communications. Although the present system is demonstrated as a radio system, it is possible that other forms of wireless communications can be used such as ultrasonic, optical, infrared, and others. It is understood that the standards which can be used include past and present standards. It is also contemplated that future versions of these standards and new future standards may be employed without departing from the scope of the present subject matter.
The wireless communications support a connection from other devices. Such connections include, but are not limited to, one or more mono or stereo connections or digital connections having link protocols including, but not limited to 802.3 (Ethernet), 802.4, 802.5, USB, SPI, PCM, ATM, Fibre-channel, Firewire or 1394, InfiniBand, or a native streaming interface. In various embodiments, such connections include all past and present link protocols. It is also contemplated that future versions of these protocols and new future standards may be employed without departing from the scope of the present subject matter.
It is understood that variations in communications protocols, antenna configurations, and combinations of components may be employed without departing from the scope of the present subject matter. Hearing assistance devices typically include an enclosure or housing, a microphone, hearing assistance device electronics including processing electronics, and a speaker or receiver. It is understood that in various embodiments the microphone and/or the receiver is optional. Antenna configurations may vary and may be included within an enclosure for the electronics or be external to an enclosure for the electronics. Thus, the examples set forth herein are intended to be demonstrative and not a limiting or exhaustive depiction of variations.
It is further understood that any hearing assistance device may be used without departing from the scope of the present subject matter, and that the devices depicted in the figures are intended to demonstrate the subject matter, but not in a limiting, exhaustive, or exclusive sense. It is also understood that the present subject matter can be used with a device designed for use in the right ear or the left ear or both ears of the user.
It is understood that the hearing aids referenced in this patent application include a processor. The processor may be a digital signal processor (DSP), microprocessor, microcontroller, other digital logic, or combinations thereof. The processing of signals referenced in this application can be performed using the processor. Processing may be done in the digital domain, the analog domain, or combinations thereof. Processing may be done using subband processing techniques. Processing may be done with frequency domain or time domain approaches. Some processing may involve both frequency and time domain aspects. For brevity, in some examples drawings may omit certain blocks that perform frequency synthesis, frequency analysis, analog-to-digital conversion, digital-to-analog conversion, amplification, audio decoding, and certain types of filtering and processing. In various embodiments the processor is adapted to perform instructions stored in memory which may or may not be explicitly shown. Various types of memory may be used, including volatile and nonvolatile forms of memory. In various embodiments, instructions are performed by the processor to perform a number of signal processing tasks. In such embodiments, analog components are in communication with the processor to perform signal tasks, such as microphone reception or receiver sound transduction (i.e., in applications where such transducers are used). In various embodiments, different realizations of the block diagrams, circuits, and processes set forth herein may occur without departing from the scope of the present subject matter.
The present subject matter is demonstrated for hearing assistance devices, including hearing aids, including but not limited to, behind-the-ear (BTE), in-the-ear (ITE), in-the-canal (ITC), receiver-in-canal (RIC), completely-in-the-canal (CIC) or invisible-in-canal (IIC) type hearing aids. It is understood that behind-the-ear type hearing aids may include devices that reside substantially behind the ear or over the ear. Such devices may include hearing aids with receivers associated with the electronics portion of the behind-the-ear device, or hearing aids of the type having receivers in the ear canal of the user, including but not limited to receiver-in-canal (RIC) or receiver-in-the-ear (RITE) designs. The present subject matter can also be used in hearing assistance devices generally, such as cochlear implant type hearing devices and such as deep insertion devices having a transducer, such as a receiver or microphone, whether custom fitted, standard, open fitted or occlusive fitted. It is understood that other hearing assistance devices not expressly stated herein may be used in conjunction with the present subject matter.
This application is intended to cover adaptations or variations of the present subject matter. It is to be understood that the above description is intended to be illustrative, and not restrictive. The scope of the present subject matter should be determined with reference to the appended claims, along with the full scope of legal equivalents to which such claims are entitled.
This application is a continuation of U.S. patent application Ser. No. 13/939,004, filed Jul. 10, 2013, now issued as U.S. Pat. No. 9,124,990 on Sep. 1, 2015, which application is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
6154552 | Koroljow et al. | Nov 2000 | A |
6243476 | Gardner | Jun 2001 | B1 |
6961439 | Ballas | Nov 2005 | B2 |
7853030 | Grasbon et al. | Dec 2010 | B2 |
8170247 | Nishizaki | May 2012 | B2 |
9124990 | Strelcyk et al. | Sep 2015 | B2 |
20030099370 | Moore | May 2003 | A1 |
20050141731 | Hamalainen | Jun 2005 | A1 |
20100074460 | Marzetta | Mar 2010 | A1 |
20110091056 | Nishizaki et al. | Apr 2011 | A1 |
20120020503 | Endo et al. | Jan 2012 | A1 |
20120128186 | Endo | May 2012 | A1 |
20130329923 | Bouse | Dec 2013 | A1 |
20150016644 | Strelcyk et al. | Jan 2015 | A1 |
Entry |
---|
“U.S. Appl. No. 13/939,004, Advisory Action mailed Mar. 27, 2015”, 2 pgs. |
“U.S. Appl. No. 13/939,004, Final Office Action mailed Jan. 13, 2015”, 12 pgs. |
“U.S. Appl. No. 13/939,004, Non Final Office Action mailed Aug. 13, 2014”, 11 pgs. |
“U.S. Appl. No. 13/939,004, Notice of Allowance mailed Apr. 28, 2015”, 6 pgs. |
“U.S. Appl. No. 13/939,004, Response filed Mar. 12, 2015 to Final Office Action mailed Jan. 13, 2015”, 8 pgs. |
“U.S. Appl. No. 13/939,004, Response filed Nov. 13, 2014 to Non Final Office Action mailed Aug. 13, 2014”, 8 pgs. |
Boldt, Jesper Bunsow, “Estimation of the Ideal Binary Mask Using Directional Systems”, Proceedings of the 11th International Workshop on Acoustic Echo and Noise Control, Seattle, WA, (2008), 4 pgs. |
Nakano, Alberto Yoshihiro, et al., “Auditory perception versus automatic estimation of location and orientation of an acoustic source in a real environment”, Acoust. Sci. & Tech. 31(5), (2010), 309-319. |
Number | Date | Country |
---|---|---|
20150373465 A1 | Dec 2015 | US |
 | Number | Date | Country |
---|---|---|---|
Parent | 13939004 | Jul 2013 | US |
Child | 14841315 | | US |