The present subject matter relates generally to hearing assistance systems, and in particular to methods and apparatus for programming hearing assistance devices using initial settings that are adjusted to more optimal settings over a period of time.
A hearing assistance device, such as a hearing aid, may include a signal processor in communication with a microphone and receiver. Sound signals detected by the microphone and/or otherwise communicated to the hearing assistance device are processed by the signal processor to be heard by a listener. Modern hearing assistance devices may include programmable devices that have settings based on the hearing and needs of each individual listener such as a hearing aid wearer.
Wearers of hearing aids undergo a process called “fitting” to adjust the hearing aid to their particular hearing and use. In such fitting sessions a wearer may select one setting over another. Hearing aid settings may be optimized for a wearer through a process of patient interview and device adjustment. Multiple iterations of such interview and adjustment may be needed before sound quality as perceived by the wearer becomes satisfactory. This may require multiple visits to an audiologist's office. Thus, there is a need for a more efficient process for fitting the hearing aid for the wearer.
The following detailed description of the present subject matter refers to subject matter in the accompanying drawings which show, by way of illustration, specific aspects and embodiments in which the present subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present subject matter. References to “an”, “one”, or “various” embodiments in this disclosure are not necessarily to the same embodiment, and such references contemplate more than one embodiment. The following detailed description is demonstrative and not to be taken in a limiting sense. The scope of the present subject matter is defined by the appended claims, along with the full scope of legal equivalents to which such claims are entitled.
The present disclosure relates to a hearing assistance system for delivering sounds to a listener that provides for subjective, listener-driven programming of a hearing assistance device, such as a hearing aid, using a mobile device. Hearing assistance devices, such as hearing aids, typically include an enclosure or housing, a microphone, hearing assistance device electronics including processing electronics, and a speaker or receiver. In various designs, the speaker or receiver of a hearing assistance device is placed substantially in or near the ear canal of a wearer such that amplified sound waves may be directed towards an ear drum of the wearer. In various designs, the receiver may include a tubular structure that directs sound from the speaker to the ear drum.
Hearing professionals desire the ability to program hearing aids with less gain initially in order to let the patient adapt to wearing a hearing aid, thereby improving the experience of the patient while becoming acclimated to the hearing aid. The patient is typically instructed to come back to the professional so that the professional may manually increase the gain in steps over time. An example technique for hearing aid adjustment is discussed in U.S. Pat. No. 7,206,424, which is incorporated by reference herein in its entirety.
Generally, hearing aid fitting software may allow a hearing professional to configure the desired final hearing aid settings for the patient and an initial starting point for any variety of settings (gain, compression, noise management, and any other setting a hearing professional might manipulate). The professional can also set a time frame across which adaptation is to occur. The individualized settings and the time frame for adaptation may all be configured in the hearing aid firmware and read out by a specific mobile application. The mobile application may be installed on a remote mobile device, such as a smart phone or other wireless remote control device, and may communicate with the hearing aid firmware via a wireless connection.
In an embodiment, the hearing assistance system includes a hearing aid device and a remote mobile device. The hearing aid device is configured to receive real time data, for example absolute time values, ambient noise values, or updated configuration data from the remote mobile device via a wireless connection. The wireless connection may be established over any appropriate wireless frequency or protocol (e.g., 2.4 GHz, 900 MHz, Wi-Fi, Bluetooth, etc.), or combination thereof. The hearing aid device may be configured to send information, for example volume settings, battery life or configuration data to the remote mobile device via the wireless connection. A user interface on the remote mobile device may display and provide a patient with information about the settings, performance, battery life or other information with respect to the hearing aid device.
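For illustration only, the following sketch shows one way the data exchanged over such a wireless connection might be represented in software. The message types, field names, and JSON encoding are assumptions made for this example and do not reflect any particular device or protocol.

```python
# Illustrative sketch only: message types, field names, and the JSON encoding
# are hypothetical and are not taken from any actual hearing aid protocol.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class HearingAidStatus:
    """Information the hearing aid may report to the remote mobile device."""
    volume_setting: int        # current volume step
    battery_percent: float     # remaining battery life
    active_preset: int         # index of the parameter set currently in use

@dataclass
class ConfigUpdate:
    """Real time data the mobile device may send back to the hearing aid."""
    absolute_time: float       # real time clock value (seconds since epoch)
    ambient_noise_db: float    # estimated ambient noise level
    preset_index: int          # parameter set the device should apply

def encode(message) -> bytes:
    """Serialize a message for transmission over the wireless connection."""
    return json.dumps(asdict(message)).encode("utf-8")

# Example exchange: the aid reports its status, the phone replies with an update.
status = HearingAidStatus(volume_setting=4, battery_percent=82.5, active_preset=1)
update = ConfigUpdate(absolute_time=time.time(), ambient_noise_db=62.0, preset_index=2)
print(encode(status))
print(encode(update))
```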
In an embodiment, a hearing assistance system may combine hearing aid firmware in a hearing aid that is capable of connecting wirelessly to a mobile device, and a mobile software application on the mobile device that includes fitting or acclimation software, to create a method for the professional to prescribe starting settings and targeted settings for an initial hearing aid fitting. In an example, a real time clock in the mobile device may be utilized to coordinate automatic changes made to the hearing aid by the mobile software application. The changes to the hearing aid may be communicated over a wireless link between the mobile device and the hearing aid.
In an embodiment, a user may launch a mobile application on a remote device that is configured to communicate with a hearing aid device of the user. Upon establishing a communication session with the hearing aid device, the application may receive and store information from the hearing aid device such as the initial starting point settings, final user settings, and a time frame for adaptation configured by the hearing professional. The mobile application may utilize the information to set up automatic gradual changes to the hearing aid to move from starting to final user settings over that desired time frame. In an example, the mobile application may have advantages over the hearing aid, including the presence of a real time clock to make these adjustments over a defined period of time unique to the individual. The mobile application may have an additional advantage of having access to greater computing resources (e.g., processing power or battery power) than the hearing aid, which may provide for a benefit of increasing the battery life of a battery in the hearing aid by offloading processing tasks from the hearing aid.
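A minimal sketch of such time-based acclimatization follows, assuming a simple linear ramp between the starting and final settings; the parameter names, the linear schedule, and the thirty-day period in the usage example are illustrative assumptions rather than prescribed values.

```python
# Minimal sketch of time-based acclimatization assuming a linear ramp; the
# parameter names and the 30-day period are illustrative assumptions.
import time

def interpolate_settings(start, final, fit_time, adapt_days, now=None):
    """Blend the starting and final settings according to elapsed time.

    start, final : dicts of per-parameter values chosen by the professional
    fit_time     : real time clock value (seconds) when the fitting occurred
    adapt_days   : length of the prescribed adaptation period in days
    """
    now = time.time() if now is None else now
    elapsed_days = (now - fit_time) / 86400.0
    fraction = min(max(elapsed_days / adapt_days, 0.0), 1.0)
    return {key: start[key] + fraction * (final[key] - start[key]) for key in final}

# Example: the wearer is halfway through a 30-day adaptation period.
start = {"gain_low_db": 5.0, "gain_high_db": 10.0, "noise_reduction_db": 8.0}
final = {"gain_low_db": 12.0, "gain_high_db": 25.0, "noise_reduction_db": 4.0}
fitted_at = time.time() - 15 * 86400
print(interpolate_settings(start, final, fitted_at, adapt_days=30))
```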
In an example, an automatic adaptation scheme for hearing aid fittings, implemented through a system that combines fitting software with a mobile software application, may also provide the professional with the ability to remotely receive notifications about the extent of the user's adaptation to the hearing aid, or to control or adjust the time frame or other settings of the hearing aid. The professional may have the ability to provide any mix of settings for a starting point and a desired fitting point to help the patient adapt more easily. This helps the patient ease into their hearing aid fitting without returning to the professional. Additional benefits include easier adaptation, earlier satisfaction, and lower return rates. The proliferation of smartphones and tablet computers across the world may also help to create a demand for the convergence of hearing aid technologies with smartphone applications.
In an embodiment, a method for fitting a hearing assistance device for a listener is provided. A plurality of presets including predetermined settings for a plurality of parameters of a signal processing algorithm may be included in a hearing aid and a hearing aid application stored on a mobile device. The hearing aid application provides for a calculation of when to transition between a pair of presets of the plurality of presets so as to improve the performance of the hearing aid as perceived by the listener. An input sound signal is processed to produce an output sound signal to be delivered to the listener by executing the signal processing algorithm at the hearing aid using the selected values of the plurality of parameters established by the hearing aid application on the mobile device. The mobile device may receive inputs from one or more sensors or modules (e.g., microphones, timers, clocks, GPS receivers, radio receivers, or other devices) to determine when to transition between presets.
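One hypothetical way the application might decide when to transition between a pair of presets, using clock and microphone inputs, is sketched below; the thresholds, field names, and decision rule are assumptions made for illustration and are not drawn from the embodiments above.

```python
# Hypothetical decision rule combining clock and microphone inputs; the
# thresholds and field names are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    elapsed_wear_hours: float   # accumulated wear time from the device timer
    ambient_noise_db: float     # level estimated from the mobile device microphone

def select_preset(snapshot, comfort_preset, target_preset,
                  min_wear_hours=40.0, quiet_threshold_db=65.0):
    """Stay on the comfort preset until enough wear time has accumulated,
    and defer the transition while the environment is noisy."""
    if snapshot.elapsed_wear_hours < min_wear_hours:
        return comfort_preset
    if snapshot.ambient_noise_db > quiet_threshold_db:
        return comfort_preset
    return target_preset

# Example: sufficient wear time and a quiet room, so the target preset is chosen.
print(select_preset(SensorSnapshot(52.0, 58.0), comfort_preset=0, target_preset=1))
```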
Other embodiments may incorporate an input transducer that produces a digital output directly. The device's signal processing circuitry 100 processes the digitized input signal into an output signal in a manner that compensates for the patient's hearing deficit. The output signal is then passed to an audio amplifier 150 that drives an output transducer 160 for converting the output signal into an audio output, such as a speaker within an earphone.
In the example illustrated in
The signal processing modules 120, 130, and 135 may represent specific code executed by the controller or may represent additional hardware components. The filtering and amplifying module 120 amplifies the input signal in a frequency specific manner as defined by one or more signal processing parameters specified by the controller. As described above, the patient's hearing deficit is compensated by selectively amplifying those frequencies at which the patient has a below normal hearing threshold. Other signal processing functions may also be performed in particular embodiments. The example illustrated in
The signal processing circuitry 100 may be implemented in a variety of different ways, such as with an integrated digital signal processor or with a mixture of discrete analog and digital components. For example, the signal processing may be performed by a mixture of analog and digital components having inputs that are controllable by the controller that define how the input signal is processed, or the signal processing functions may be implemented solely as code executed by the controller. The terms “controller,” “module,” or “circuitry” as used herein should therefore be taken to encompass either discrete circuit elements or a processor executing programmed instructions contained in a processor-readable storage medium.
The programmable controller specifies one or more signal processing parameters to the filtering and amplifying module and/or other signal processing modules that determine the manner in which the input signal is converted into the output signal. The one or more signal processing parameters that define a particular mode of operation are referred to herein as a signal processing parameter set. A signal processing parameter set thus defines at least one operative characteristic of the hearing aid's signal processing circuit. A particular signal processing parameter set may, for example, define the frequency response of the filtering and amplifying circuit and define the manner in which amplification is performed by the device. In a hearing aid with more sophisticated signal processing capabilities, such as for noise reduction or processing multi-channel inputs, the parameter set may also define the manner in which those functions are performed.
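For illustration, a signal processing parameter set might be represented in software along the following lines; the band layout, field names, and example values are assumptions made for this sketch rather than an actual device format.

```python
# Sketch of a signal processing parameter set; the band layout, field names,
# and example values are assumptions and not an actual device format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BandParameters:
    center_hz: float            # band center frequency
    gain_db: float              # frequency specific amplification gain
    compression_ratio: float    # compression applied within the band

@dataclass
class ParameterSet:
    """Defines one operative mode of the signal processing circuit."""
    bands: List[BandParameters] = field(default_factory=list)
    noise_reduction_db: float = 0.0
    multichannel_policy: str = "independent"   # placeholder label only

optimal = ParameterSet(
    bands=[BandParameters(500, 8.0, 1.5),
           BandParameters(2000, 20.0, 2.0),
           BandParameters(4000, 28.0, 2.5)],
    noise_reduction_db=6.0,
)
print(len(optimal.bands), "bands configured")
```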
As noted above, a hearing aid programmed with a parameter set that provides optimal compensation may not be initially well tolerated by the patient. In order to provide for a gradual adjustment period, the controller is programmed to select a parameter set from a group of such sets in a defined sequence such that the hearing aid progressively adjusts from a sub-optimal to an optimal level of compensation delivered to the patient. In order to define the group of parameter sets, the patient is tested to determine an optimal signal processing parameter set that compensates for the patient's hearing deficit. From that information, a sub-optimal parameter set that is initially more comfortable for the patient can also be determined, as can a group of such sets that gradually increase the degree of compensation.
The controller of the hearing aid may then be programmed to select a signal processing parameter set for use by the signal processing circuitry by sequencing through the group of signal processing parameter sets over time so that the patient's hearing is gradually compensated at increasingly optimal levels until the optimal signal processing parameter set is reached. For example, each parameter set may include one or more frequency response parameters that define the amplification gain of the signal processing circuit at a particular frequency. The controller of the hearing aid may be configured to transition between the group of signal processing parameter sets in response to receiving a specific command from a remote device via a communication interface, or in response to receiving time data from the remote device via the communication interface. For example, the specific command may indicate that the wearer of the hearing aid has entered a noisy environment (e.g., a loud restaurant) and a signal processing parameter set with a higher level of noise reduction should be implemented by the controller.
In an example, the overall gain of the hearing aid may be gradually increased with each successively selected signal processing parameter set. If the patient has a high frequency hearing deficit, the group of parameter sets may be defined so that sequencing through them results in a gradual increase in the high frequency gain of the hearing aid. Conversely, if the patient has a low frequency hearing deficit, the hearing aid may be programmed to gradually increase the low frequency gain with each successively selected parameter set. In this manner, the patient is allowed to adapt to the previously unheard sounds through the automatic operation of the hearing aid. Other features implemented by the hearing aid in delivering optimal compensation may also be automatically adjusted toward the optimal level with successively selected parameter sets such as compression parameters that define the amplification gain of the signal processing circuit at a particular input signal level, parameters defining frequency specific compression, noise reduction parameters, and parameters related to multi-channel processing.
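The following sketch illustrates, under assumed values, how a group of parameter sets might be generated so that the high-frequency gain ramps from a sub-optimal starting level to the optimal level; the number of steps and the gain values are hypothetical.

```python
# Illustrative generation of the sequence of high-frequency gains used at
# each adaptation step; the step count and gain values are hypothetical.
def build_gain_sequence(start_gain_db, optimal_gain_db, steps):
    """Return the gain applied at each step, ending at the optimal value."""
    if steps < 2:
        return [optimal_gain_db]
    increment = (optimal_gain_db - start_gain_db) / (steps - 1)
    return [start_gain_db + i * increment for i in range(steps)]

# Example: ramp the 4 kHz gain from 15 dB to 30 dB across six weekly steps.
print(build_gain_sequence(15.0, 30.0, steps=6))
```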
For example, a care provider may be able to receive a notification if a patient wearing the hearing aid device 202 has not turned the hearing aid on during a specified period of time. The notification may be generated by an application on the mobile device 204 in response to a failure to communicate with the hearing aid device 202 for a predetermined number of hours or days. In another example, a hearing professional may interact with the personal computer 210 to request data from the hearing aid device 202 in response to a query or complaint by the wearer of the hearing aid device 202. An application on the mobile device 204 may retrieve from the hearing aid device 202, or an internal memory in the mobile device 204, any data corresponding to the performance of the hearing aid device 202 or configuration settings that have been in use by the hearing aid device 202.
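A minimal sketch of such a non-use notification check follows; the forty-eight-hour threshold, function name, and notification text are assumptions made for illustration.

```python
# Sketch of the non-use notification described above; the 48-hour threshold
# and the notification text are assumptions made for illustration.
import time
from typing import Optional

def check_for_inactivity(last_contact_time, threshold_hours=48.0, now=None) -> Optional[str]:
    """Return a care-provider notification if the mobile application has not
    communicated with the hearing aid within the threshold period."""
    now = time.time() if now is None else now
    hours_silent = (now - last_contact_time) / 3600.0
    if hours_silent >= threshold_hours:
        return ("Hearing aid not detected for %.0f hours; "
                "the wearer may not be using the device." % hours_silent)
    return None

# Example: the last successful communication was three days ago.
print(check_for_inactivity(time.time() - 3 * 86400))
```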
At 302, a device may operate with an initial parameter configuration. For example, a hearing aid device may be configured with an initial factory setting that provides a minimum of sound amplification and maximum noise reduction, or a hearing professional may establish a set of initial parameters based on one or more tests performed on a specific patient who will be fitted with the device.
At 304, the device may establish communication with a wireless device. The wireless device may be a mobile device, such as a smart phone or personal data assistant, as depicted in
At 306, the device may receive data from the wireless device. The data may include, for example, configuration parameters, time data, sensor data, or any other information that may be utilized by the device to change or improve the operation of the device.
At 308, the device may provide device information to the wireless device. The device information may include, for example: total operating time, battery life, current configuration settings, a count of power cycles, an amount of elapsed time since power-on, or any other device specific data.
At 310, the device may update the device's configuration (e.g., parameters, software, firmware, etc.) based on the data received from the wireless device. For example, the data may include an upgrade to firmware in the device, new configuration settings, or time data that may trigger the device to transition from a first set of parameters to a second set of parameters. A condensed sketch of these operations is shown below.
Though arranged serially in the example of
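For illustration, the operations described above may be condensed into the following sketch; the classes and method names are hypothetical stand-ins for firmware and radio components and are not an actual implementation.

```python
# Condensed, self-contained sketch of the operations above; the classes are
# hypothetical stand-ins for firmware and radio components, not real APIs.
class StubRadio:
    """Pretend wireless link that delivers one configuration update."""
    def connect(self):
        return self
    def receive(self):
        return {"gain_high_db": 22.0}           # data sent by the mobile device
    def send(self, report):
        print("sent to mobile device:", report)

class StubDevice:
    initial_parameters = {"gain_high_db": 15.0}
    def __init__(self):
        self.parameters = {}
    def apply_parameters(self, params):
        self.parameters = dict(params)
    def status_report(self):
        return {"battery_percent": 80.0, "parameters": self.parameters}

def run_update_cycle(device, radio):
    device.apply_parameters(device.initial_parameters)  # 302: initial configuration
    session = radio.connect()                           # 304: establish communication
    incoming = session.receive()                        # 306: receive data
    session.send(device.status_report())                # 308: provide device information
    if incoming:                                        # 310: update configuration
        device.apply_parameters(incoming)

run_update_cycle(StubDevice(), StubRadio())
```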
Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules are tangible entities capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside (1) on a non-transitory machine-readable medium or (2) in a transmission signal. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
Accordingly, the term “module” is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
Machine (e.g., computer system) 400 may include a hardware processor 402 (e.g., a processing unit, a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 404, and a static memory 406, some or all of which may communicate with each other via a link 408 (e.g., a bus, link, interconnect, or the like). The machine 400 may further include a display device 410, an input device 412 (e.g., a keyboard), and a user interface (UI) navigation device 414 (e.g., a mouse). In an example, the display device 410, input device 412, and UI navigation device 414 may be a touch screen display. The machine 400 may additionally include a mass storage (e.g., drive unit) 416, a signal generation device 418 (e.g., a speaker), a network interface device 420, and one or more sensors 421, such as a global positioning system (GPS) sensor, camera, video recorder, compass, accelerometer, or other sensor. The machine 400 may include an output controller 428, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR)) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
The mass storage 416 may include a machine-readable medium 422 on which is stored one or more sets of data structures or instructions 424 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 424 may also reside, completely or at least partially, within the main memory 404, within static memory 406, or within the hardware processor 402 during execution thereof by the machine 400. In an example, one or any combination of the hardware processor 402, the main memory 404, the static memory 406, or the mass storage 416 may constitute machine readable media.
While the machine-readable medium 422 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that are configured to store the one or more instructions 424.
The term “machine-readable medium” may include any tangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 400 and that cause the machine 400 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples may include solid-state memories, and optical and magnetic media. Specific examples of machine-readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 424 may further be transmitted or received over a communications network 426 using a transmission medium via the network interface device 420 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), peer-to-peer (P2P) networks, among others. In an example, the network interface device 420 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 426. In an example, the network interface device 420 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 400, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
Various embodiments of the present subject matter may be utilized in conjunction with a hearing assistance device that supports wireless communications from other devices. It is further understood that many hearing assistance devices may be used without departing from the scope of the present subject matter and that the devices depicted in the figures are intended to demonstrate the subject matter, but not in a limited, exhaustive, or exclusive sense. It is also understood that the present subject matter can be used with a device designed for use in the right ear or the left ear or both ears of the wearer.
The present subject matter is demonstrated for hearing assistance devices, including hearing aids, including but not limited to, behind-the-ear (BTE), in-the-ear (ITE), in-the-canal (ITC), receiver-in-canal (RIC), or completely-in-the-canal (CIC) type hearing aids. It is understood that behind-the-ear type hearing aids may include devices that reside substantially behind the ear or over the ear. Such devices may include hearing aids with receivers associated with the electronics portion of the behind-the-ear device, or hearing aids of the type having receivers in the ear canal of the user, including but not limited to receiver-in-canal (RIC) or receiver-in-the-ear (RITE) designs. It is understood that other hearing assistance devices not expressly stated herein may be used in conjunction with the present subject matter.
It is understood that digital hearing aids referenced in this patent application include a processor. In digital hearing aids with a processor programmed to provide corrections to hearing impairments, programmable gains are employed to tailor the hearing aid output to a wearer's particular hearing impairment. The processor may be a digital signal processor (DSP), microprocessor, microcontroller, other digital logic, or combinations thereof. The processing of signals referenced in this application can be performed using the processor. Processing may be done in the digital domain, the analog domain, or combinations thereof. Processing may be done using subband processing techniques. Processing may be done with frequency domain or time domain approaches. Some processing may involve both frequency and time domain aspects. For brevity, in some examples drawings may omit certain blocks that perform frequency synthesis, frequency analysis, analog-to-digital conversion, digital-to-analog conversion, amplification, and certain types of filtering and processing. In various embodiments the processor is adapted to perform instructions stored in memory which may or may not be explicitly shown. Various types of memory may be used, including volatile and nonvolatile forms of memory. In various embodiments, instructions are performed by the processor to perform a number of signal processing tasks. In such embodiments, analog components are in communication with the processor to perform signal tasks, such as microphone reception, or receiver sound embodiments (i.e., in applications where such transducers are used). In various embodiments, different realizations of the block diagrams, circuits, and processes set forth herein may occur without departing from the scope of the present subject matter.
In various embodiments the hearing assistance device may include additional electronics, such as wireless communications electronics that can include support for standard or nonstandard communications. Some examples of standard wireless communications include link protocols including, but not limited to, Bluetooth™, IEEE 802.11 (wireless LANs), 802.15 (WPANs), 802.16 (WiMAX), cellular protocols including, but not limited to, CDMA and GSM, ZigBee, and ultra-wideband (UWB) technologies. Such protocols support radio frequency communications and some support infrared communications. In various embodiments it is possible that other forms of wireless communications can be used such as ultrasonic, optical, and others.
Various configurations of wireless electronics and antennas may be employed. It is understood that variations in communications protocols, antenna configurations, and combinations of components may be employed without departing from the scope of the present subject matter. Hearing assistance devices typically include an enclosure or housing, a microphone, hearing assistance device electronics including processing electronics, and a speaker or receiver. It is understood that in various embodiments the microphone is optional. Thus, the examples set forth herein are intended to be demonstrative and not a limiting or exhaustive depiction of variations.
This application is intended to cover adaptations or variations of the present subject matter. It is to be understood that the above description is intended to be illustrative, and not restrictive. The scope of the present subject matter should be determined with reference to the appended claims, along with the full scope of legal equivalents to which such claims are entitled.