The disclosed technology generally relates to a hearing device and volume control of the hearing device. More specifically, the disclosed technology relates to a hearing device configured to provide volume control service to simple and rich client devices, where simple devices have limited volume control and rich devices have more complex volume control.
Hearing devices provide audio or audio signals to a user wearing the hearing devices. Some example hearing devices include hearing aids, headphones, earphones, assistive listening devices, cochlear devices paired with a cochlear implant, or any combination thereof. Hearing devices include both prescription devices and non-prescription devices configured to be worn on or near a human head.
Hearing device users prefer devices that adjust to everyday listening situations. Specifically, hearing device users prefer devices that can be adapted to a busy coffee shop, a windy park, a quiet home, a phone call in a loud room, listening to music, or a conversation in a loud room. Generally, hearing device users can adjust volume settings directly on the hearing device by moving or adjusting a button, toggle, dial, or switch. Hearing device users can adjust the volume settings to better hear or experience sound.
When a hearing device outputs audio or audio signals, it can provide a balance of ambient sound and external sound. Ambient sound refers to sound that was received or generated locally at the hearing device by a microphone of the hearing device. For example, ambient sound can be wind noise picked up by a hearing device microphone. External sound refers to sound or sound signals received from another device at the hearing device. For example, a mobile phone can transmit audio signals for a phone call to a hearing device, where the hearing device user is using the hearing device to listen to the audio of the phone call, which is considered the external sound.
When a hearing device outputs an audio signal, it can change the volume or amplification of the signal, where the signal includes both external sound and ambient sound. For example, the hearing device can increase the amplification of a combined external sound signal and ambient sound signal. If an output signal includes both external sound and ambient sound, a hearing device user would interpret increasing the volume as everything being louder (e.g., for a windy phone call, the wind noise and the phone call audio would both get louder). Alternatively, if volume or amplification is decreased, a hearing device user would interpret decreasing the volume as everything being softer (e.g., for a windy phone call, the wind noise and the phone call audio signal would both be softer).
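The behavior described above can be illustrated with a minimal sketch, assuming a simple linear-gain model in which each output sample is the sum of an ambient sample and an external sample scaled by one master gain; the function name and sample values are illustrative assumptions, not part of the disclosure.

```python
def apply_master_volume(ambient, external, gain):
    """Scale a combined output signal; ambient and external sound change together."""
    return [(a + e) * gain for a, e in zip(ambient, external)]

ambient = [1, 2, 1]   # e.g., wind noise picked up by the hearing device microphone
external = [3, 4, 3]  # e.g., phone-call audio streamed from a mobile phone

louder = apply_master_volume(ambient, external, gain=2)    # everything gets louder
softer = apply_master_volume(ambient, external, gain=0.5)  # everything gets softer
```

Because a single gain multiplies the combined signal, the wind noise and the phone-call audio rise and fall together, which matches how a user would interpret a master volume change.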
Providing an output signal with a volume that is comfortable for the user can be difficult given the variables and constraints of external devices and hearing devices. Accordingly, there exists a need to address the above-mentioned problems and provide additional benefits.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features of the claimed subject matter. The disclosed technology includes a method (e.g., a computer-implemented method) and a hearing device configured to implement the method. The method can include establishing a wireless communication connection between a hearing device and a client device; providing volume control service for the hearing device to the client device; determining, at the hearing device, whether the client device is implementing rich or simple volume control based on communication with the client device, wherein the rich volume control is associated with an ability of the client device to provide an ambient sound level and an external sound level associated with volume of a hearing device output signal, and wherein the simple volume control is associated with an ability of the client device to adjust only a master volume level associated with the volume of the hearing device output signal; in response to determining the client device is implementing the rich volume control, modifying only the master volume at the hearing device based on a master volume level provided by the client device; or in response to determining the client device is implementing the simple volume control, modifying a balance of ambient sound and external sound for the hearing device output signal based at least partially on the master volume level provided by the client device.
In some implementations, determining whether the client device is implementing the rich or the simple volume control further comprises determining that the client device is implementing the rich volume control based on determining that the client device has registered for notification of volume state changes for the hearing device, read volume state settings for the hearing device, and/or registered for notification of the ambient sound level and external sound level. Also, determining that the client device is implementing the simple volume control can be based on determining that the client device has not registered for the notification of volume state changes for the hearing device, has not read the volume state settings for the hearing device, and/or has not registered for notification of the ambient sound level and external sound level.
In some implementations, a rich client device may have separate controls to adjust a level of a tinnitus-masking signal (e.g., as generated by a hearing device), whereas a simple client device may have just a single knob. In a configuration where the hearing device is rendering both the tinnitus masking signal and the ambient signal, the hearing device can map the control of a simple client (e.g., one dimension of control) to increase ambient sound or increase tinnitus masking. In contrast, a rich client's actions would have the hearing device just apply what the rich client has requested with respect to tinnitus masking signals and volume settings.
The method can be implemented by the processor of the hearing device or the method can be stored in the memory of the hearing device.
The drawings are not to scale. Some components or operations may be separated into different blocks or combined into a single block for the purposes of discussion of some of the disclosed technology. Moreover, while the technology is amenable to various modifications and alternative forms, specific implementations have been shown by way of example in the drawings and are described in detail below. The intention, however, is not to limit the technology to the selected implementations described. On the contrary, the technology is intended to cover all modifications, equivalents, and alternatives falling within the scope of the technology as defined by the appended claims.
The disclosed technology relates to a hearing device that can determine whether a client device is implementing rich or simple volume control. If the client device is implementing rich volume control, the hearing device can only locally adjust master volume control (e.g., amplification) of a hearing device output signal (e.g., based on input from a button on local hearing device). In contrast, if the client device is implementing simple volume control, the hearing device can locally adjust the master volume, ambient sound level, and external sound level of the hearing device output signal. More generally, a rich client device knows what to do with respect to volume control, e.g., the hearing device does volume adjustment as requested by the rich client (e.g., exactly the same settings of the rich client). The simple client is less sophisticated in that it can act on master volume only. Therefore, the hearing device interprets the master volume from a simple client device as preferring more or less external signal and/or preferring more or less ambient signal.
An ambient sound level refers to a level, e.g., between 1 and 10, where 1 refers to 0% or no ambient sound and 10 refers to 100% or maximum ambient sound (e.g., the user can only hear the ambient sound signal). An external sound level refers to a level, e.g., between 1 and 10, where 1 refers to 0% or no external sound and 10 refers to 100% or maximum external sound (e.g., the user can only hear the external sound signal). Other numerical values for levels can be used (e.g., 1-100, etc.).
A balance refers to the level of the external sound versus the level of the ambient sound or vice versa. The hearing device can output sound with different balances of ambient sound level and external sound level. For example, the hearing device can output sound in a 50/50 balance, where 50% of the sound output is ambient sound and 50% is external sound. The hearing device can then amplify the output signal, e.g., amplify a signal that has 50% external sound and 50% ambient sound, which causes the user to hear both sounds louder. As another example, the hearing device can output sound with a 60/40 balance or 40/60 balance, where 60% of the sound output is ambient sound and 40% is external sound, or 40% of the sound output is ambient sound and 60% is external sound. In the latter example, the hearing device output signal would have a higher signal-to-noise ratio (SNR) for the external signal. Having a higher SNR enables the hearing device user to hear the external signal more clearly even though the signal was not amplified more. Rather, it is relatively easier to hear the external sound when there is less ambient sound.
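The balance described above can be sketched as a weighted mix of the two components; the percentage-based split, sample values, and function name are illustrative assumptions rather than the disclosed implementation.

```python
def mix_with_balance(ambient_sample, external_sample, ambient_pct):
    """Mix one output sample using ambient_pct% ambient sound and the rest external."""
    a = ambient_pct / 100.0
    return ambient_sample * a + external_sample * (1.0 - a)

# 50/50 balance: equal parts ambient and external sound
even = mix_with_balance(0.5, 1.0, ambient_pct=50)

# 40/60 balance: less ambient sound raises the effective SNR of the
# external sound without amplifying the external signal itself
favor_external = mix_with_balance(0.5, 1.0, ambient_pct=40)
```

Shifting the balance toward external sound makes the external signal easier to hear relative to the ambient sound, consistent with the SNR observation above.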
In communicating between a wireless communication device and a hearing device regarding volume control, a hearing device can be considered a server because it provides Generic Attribute Profile (GATT) services to client devices (e.g., one or more client devices). Specifically, the hearing device can provide control of its volume control to client devices such that the client devices can adjust volume settings of the hearing device. Volume control generally includes the settings, programming, and/or hardware that a hearing device uses to adjust the volume of its output signal. With GATT services, a hearing device can provide notification of its volume states or changes of its volume state to client devices.
In some implementations, a rich client device may have separate controls to adjust a level of a tinnitus-masking signal (e.g., as generated by a hearing aid), whereas a simple client device may have just a single knob. Here, a hearing device would detect the rich client device as being explicitly interested (e.g., by reading or registering for tinnitus or volume settings notifications) in the level of the tinnitus masking signal. In a configuration where the hearing device is rendering both the tinnitus masking signal and the ambient signal, the hearing device can map the control of a simple client (e.g., one dimension of control) to increase ambient sound or increase tinnitus masking. In contrast, a rich client's actions would have the hearing device just apply what the rich client has requested with respect to tinnitus masking signals and volume settings.
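The two behaviors above can be sketched as follows; the 0-10 level scale, the function names, and the dictionary-based settings are assumptions introduced for illustration only.

```python
def map_simple_knob(knob_delta, ambient_level, masking_level):
    """Map a simple client's one-dimensional knob change onto both rendered signals."""
    # A simple client expresses only "more" or "less"; the hearing device
    # interprets that as raising or lowering both levels, clamped to 0-10.
    ambient_level = min(10, max(0, ambient_level + knob_delta))
    masking_level = min(10, max(0, masking_level + knob_delta))
    return ambient_level, masking_level

def apply_rich_request(requested_levels, current_levels):
    """A rich client's request is applied literally, with no reinterpretation."""
    return dict(current_levels, **requested_levels)

# Simple client: one knob click up raises both ambient and masking levels
map_simple_knob(1, ambient_level=5, masking_level=5)

# Rich client: only the explicitly requested level changes
apply_rich_request({"masking": 7}, {"ambient": 5, "masking": 5})
```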
The disclosed technology has the advantage that volume settings can be improved (e.g., optimized) for a hearing device user. For example, if a hearing device user has a simple device that does not offer rich volume control, the hearing device can receive an external signal from that device and handle the rich volume control at the hearing device without feedback or information from the simple device; that is, the hearing device can convert the simple client's volume control actions to ambient/balance control locally on the hearing device. Alternatively, if the hearing device user has a rich device that is connected to the hearing device and offers rich volume control, the hearing device does not need to further modify the volume settings received from the external device. Rather, the hearing device only needs to increase or decrease (e.g., amplify) the master volume; with a rich volume control client device, the hearing device can take the client's request literally, applying individual changes to ambient sound level, external sound level, and total amplification as requested by the rich volume control client.
In communication environment 100, the hearing device 103 can be considered a server because it provides a volume control service to wireless communications devices 102 as client devices. A client device can be any of the wireless communication devices 102. For example, a wireless communication device 102 can be a mobile phone and it can connect with the hearing device 103 via a wireless communication protocol, and then it can use that wireless communication protocol to transmit an external signal to the hearing device. The wireless communication device 102, as a client, can request to receive updates regarding the states of the volume control of the hearing device 103. The wireless communication device 102 can also provide an external sound level, ambient sound level, and/or master volume setting for the hearing device. The hearing device 103 can use that received information in providing its output signal as further described in
A wireless communication protocol can include Bluetooth Basic Rate/Enhanced Data Rate™, Bluetooth Low Energy™, a proprietary communication protocol (e.g., a binaural communication protocol between hearing aids), ZigBee™, Wi-Fi™, or an Institute of Electrical and Electronics Engineers (IEEE) wireless communication standard. As part of using a protocol, the hearing device 103 and the wireless communication device 102 may perform steps of authentication and establishing a wireless communication connection (e.g., completing a pairing process for Bluetooth Low Energy™).
The wireless communication devices 102 are computing devices that are configured to wirelessly communicate. Wireless communication includes wirelessly transmitting information, wirelessly receiving information, or both. The wireless communication devices 102 shown in
Also, the wireless communication devices 102 can have microphones to receive or generate a sound, and this sound can be transmitted to the hearing device 103. The wireless communication device 102 can generate an audio signal in other ways, e.g., providing an audio signal or sound from memory. Audio signals transmitted from the wireless communication device 102 to the hearing device are considered external sound signals or external signals because the hearing device did not generate the signal; rather, the hearing device received it from an external device. An external device is any device that is not the hearing device and is located external to the hearing device.
The hearing devices 103 are devices that provide audio to a user wearing the hearing devices. Some example hearing devices include hearing aids, headphones, earphones, assistive listening devices, or any combination thereof. Hearing devices include both prescription devices and non-prescription devices configured to be worn on or near a human head. As an example of a hearing device, a hearing aid is a device that provides amplification, attenuation, or frequency modification of audio signals to compensate for hearing loss or attenuation functionalities; some example hearing aids include a Behind-the-Ear (BTE), Receiver-in-the-Canal (RIC), In-the-Ear (ITE), Completely-in-the-Canal (CIC), Invisible-in-the-Canal (IIC) hearing aids or a cochlear implant (where a cochlear implant includes a device part and an implant part).
The hearing devices 103 are configured to binaurally communicate or bimodally communicate. The binaural communication can include a hearing device 103 transmitting information to or receiving information from another hearing device 103. Information can include volume control, signal processing information (e.g., noise reduction, wind canceling, directionality such as beam forming information), or compression information to modify sound fidelity or resolution. Binaural communication can be bidirectional (e.g., between hearing devices) or unidirectional (e.g., one hearing device receiving or streaming information from another hearing device). Bimodal communication is like binaural communication, but bimodal communication includes a cochlear device communicating with a hearing aid.
The network 105 is a communication network. The network 105 enables the hearing devices 103 or the wireless communication devices 102 to communicate with a network or other devices. The network 105 can be a Wi-Fi™ network, a wired network, or a network implementing any of the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards. The network 105 can be a single network, multiple networks, or multiple heterogeneous networks, such as one or more border networks, voice networks, broadband networks, service provider networks, Internet Service Provider (ISP) networks, and/or Public Switched Telephone Networks (PSTNs), interconnected via gateways operable to facilitate communications between and among the various networks. In some implementations, the network 105 can include communication networks such as a Global System for Mobile (GSM) mobile communications network, a code/time division multiple access (CDMA/TDMA) mobile communications network, a 3rd, 4th or 5th generation (3G/4G/5G) mobile communications network (e.g., General Packet Radio Service (GPRS)).
The software 215 performs certain methods or functions for the hearing device 103 and can include components, subcomponents, or other logical entities that assist with or enable the performance of these methods or functions. Although a single memory 205 is shown in
The GATT 220 generally establishes common operations and a framework for data transported and stored in an attribute protocol. The GATT 220 includes the hierarchy of services, characteristics and attributes used in the attribute server (e.g., volume attributes and service). The GATT provides interfaces for discovering, reading, writing, and indicating of service characteristics and attributes. GATT is used on Bluetooth Low Energy (LE) devices for LE profile service discovery. More information regarding GATT can be found in the Bluetooth Core Specification 5.2, which has an adoption date of Dec. 31, 2019 and is available at https://www.bluetooth.com/specifications/bluetooth-core-specification/, all of which is incorporated herein by reference.
Also, the GATT 220 can provide volume service to other devices (e.g., client devices). Volume service can include providing states of volume controls or settings of the hearing device and/or providing notification of changes to the states or settings of volume for the hearing device. Specifically, if a hearing device establishes a wireless connection with another device (e.g., via Bluetooth Low Energy), the other device can access the GATT 220 of the hearing device and the GATT 220 can provide information about the hearing device, including volume information and/or settings.
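The volume service behavior described above can be modeled schematically in plain Python rather than with a real Bluetooth stack; the class name, state keys, and callback mechanism are assumptions for illustration, not the GATT 220 implementation.

```python
class VolumeService:
    """Schematic hearing-device-side service exposing volume state to clients."""

    def __init__(self):
        # Volume states a connected client device can read
        self.state = {"master": 5, "ambient": 5, "external": 5}
        self.subscribers = []  # clients registered for state-change notifications

    def read_state(self):
        """A connected client reads the current volume settings."""
        return dict(self.state)

    def subscribe(self, callback):
        """A client registers for notification of volume state changes."""
        self.subscribers.append(callback)

    def write(self, key, value):
        """Update a volume state and notify every registered client."""
        self.state[key] = value
        for notify in self.subscribers:
            notify(key, value)
```

In a real device, reads, subscriptions, and notifications would be carried out through GATT characteristic read, write, and notify procedures over the Bluetooth Low Energy connection.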
The volume determiner 225 determines a volume setting or parameter for an output signal of the hearing device. The volume determiner 225 can receive volume information from the GATT 220, from a wireless communication device, or another input from the hearing device user. The volume determiner 225 can receive ambient sound level and external sound level information from a wireless communication device and use this information to set the volume or levels of an output signal for the hearing device 103.
In some implementations, the volume determiner 225 can receive volume control signals or volume settings from a remote control or mobile application. The hearing device may also receive external sound signals from a wireless communication or multiple wireless communication devices. In some implementations, the wireless communication device and the remote control device are different devices such that the user can control volume levels with one device and receive an external sound signal from another device. The volume determiner 225 can determine how to balance the volume control of the hearing device based on these received signals from external devices, programming, and/or settings of the hearing device (e.g., input from the hearing device user directly on the hearing device via a slider, dial, button).
The processor 230 can include special-purpose hardware such as application specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), programmable circuitry (e.g., one or more microprocessors or microcontrollers), appropriately programmed with software and/or computer code, or a combination of special purpose hardware and programmable circuitry. The hearing device 103 can have a separate DSP to process audio signals. Yet, in some implementations, the processor 230 can be combined with the DSP in a single unit, wherein the processor 230 can process audio signals. Also, in some implementations, the hearing device 103 can have multiple processors, where the multiple processors can be physically coupled to the hearing device 103 and configured to communicate with each other.
The battery 235 can be a rechargeable battery (e.g., lithium ion battery) or a non-rechargeable battery (e.g., Zinc-Air) and the battery 235 can provide electrical power to the hearing device 103 or its components. Because some rechargeable batteries are composed of different material compared to non-rechargeable batteries, some rechargeable batteries have different magnetic or electrical properties compared to non-rechargeable batteries.
The transceiver 240 communicates with the antenna 245 to transmit or receive information. The antenna 245 is configured to operate in unlicensed bands such as the Industrial, Scientific, and Medical (ISM) band using a frequency of 2.4 GHz. The antenna 245 can also be configured to operate in other frequency bands such as 5 GHz, 5 MHz, 10 MHz, or other unlicensed or licensed bands.
The sensor 250 can be a pressure sensor, an optical sensor, a temperature sensor, capacitive sensor (e.g., for touch detection), mechanical sensor (e.g., for touch detection), a magnetic sensor (e.g., proximity detection), an accelerometer, or other sensor configured to fit in or around a hearing device.
The transducer 255 is a component that converts energy from one form to another. A transducer 255 can be a speaker, actuator, coil, or other component configured to convert energy from one form to another. For example, the transducer 255 can be a coil for a cochlear device that converts electrical signals or energy into magnetic signals or energy (or vice versa).
The microphone 260 is configured to capture sound and provide an audio signal of the captured sound to the processor 230. The processor 230 can modify the sound (e.g., in a digital signal processor (DSP)) and provide the modified sound to a user of the hearing device 103. Although a single microphone 260 is shown in
At the establish wireless connection operation 305, a hearing device and a wireless communication device establish a wireless communication connection (e.g., a server hearing device connects to a client device such as a remote control, audio player, TV streamer, or mobile phone). The wireless connection can be based on Bluetooth Low Energy™. Establishing a wireless connection can include the hearing device and the wireless communication device looking for each other within a range (e.g., the range of Bluetooth), the two devices finding each other (or one device finding the other device), pairing (e.g., prompting for passkey, exchanging passkey, sharing passkey, and verifying passkey is correct), and then communicating using a secure Bluetooth connection. Although Bluetooth™ is one possible wireless connection type, other wireless communication connections or protocols can be used to establish the wireless connection.
At determine operation 310, the hearing device determines whether the wireless communication device (e.g., client device) is implementing a rich or simple volume control. The rich volume control is associated with an ability of the wireless communication device (e.g., client device) to provide an ambient sound level and an external sound level associated with volume of a hearing device output signal. For example, the rich volume control can be associated with a smart phone that allows a hearing device user to adjust both an ambient sound level of the hearing device and an external sound level of an external signal at the hearing device (e.g., levels 1-5, where 1 is low and 5 is high). The wireless communication device can adjust these levels automatically based on settings or programming. Alternatively or additionally, the wireless communication device can adjust the ambient sound level and/or external sound level based on input from the hearing device user via a user interface (e.g., moving a dial, moving a slider, or manually inputting a level).
The hearing device can determine that the client device is implementing rich volume control based on determining that the client device has registered for notification of volume state changes for the hearing device, read volume state settings for the hearing device, and/or registered for notification of the ambient sound level and external sound level. Alternatively, determining that the client device is implementing the simple volume control can be based on determining that the client device has not registered for the notification of volume state changes for the hearing device, has not read the volume state settings for the hearing device, and/or has not registered for notification of the ambient sound level and external sound level. For example, after the wireless communication device and the hearing device have wirelessly connected (operation 305), the hearing device can receive a request from the wireless communication device that it wants to receive notification of any state changes in the volume settings of the hearing device. As shown in
The simple volume control is associated with an ability of the wireless communication device (e.g., client device) to adjust only a master volume level associated with the volume of the hearing device output signal. The hearing device can determine that the wireless communication device is implementing simple volume control based on determining that the client device has not registered for the notification of volume state changes for the hearing device or has not read the volume state settings for the hearing device. More specifically, if the wireless communication device is just sharing master volume settings and not reading, accessing, or otherwise using specific volume settings related to ambient and/or external sound levels, it is presumed that the wireless communication device is implementing a simple volume control that generally only relates to the master volume control (e.g., output level or amplification of signal output at hearing device).
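The inference described in the two paragraphs above can be sketched as a simple predicate; the flag names are illustrative assumptions standing in for the GATT registration and read activity the hearing device observes.

```python
def classify_client(registered_for_volume_notifications,
                    read_volume_state,
                    registered_for_level_notifications):
    """Infer rich vs. simple volume control from a client's observed activity.

    A client that registers for volume state-change notifications, reads the
    volume state settings, or registers for ambient/external level
    notifications is presumed rich; otherwise it is presumed simple.
    """
    if (registered_for_volume_notifications
            or read_volume_state
            or registered_for_level_notifications):
        return "rich"
    return "simple"

classify_client(True, False, False)    # a phone that registered for notifications
classify_client(False, False, False)   # a client sharing only master volume
```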
At adjust volume control operation 315, the hearing device adjusts the output signal of the hearing device based on the volume control information determined from operation 310. Adjusting the output signal can include modifying the ambient sound level, the external sound level, and/or the master volume level (e.g., amplification of the master volume). For example, if the hearing device determines that the wireless communication device is simple, the hearing device can decrease the ambient sound level from 5 (or 50%) to 4 (or 40%) and increase the external sound level from 5 (e.g., 50%) to 6 (e.g., 60%) in response to determining that the hearing device user wants the external sound to be louder or easier to understand.
As another example, if the hearing device determines that the wireless communication device is rich, it can receive the ambient sound level and external sound level from the wireless communication device, and modify only the master volume of an output signal for the hearing device. The master volume generally controls the amplification of the output signal such that amplifying makes it louder (both ambient sound and external sound).
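The two examples above can be combined into one hedged sketch of operation 315; the dictionary-based state, the 0-10 level scale, and the one-step balance shift are assumptions chosen to mirror the 5/5-to-4/6 example, not the disclosed implementation.

```python
def adjust_volume(client_kind, state, request):
    """Adjust hearing device output levels based on the client's control type."""
    state = dict(state)  # do not mutate the caller's state
    if client_kind == "simple":
        # A master-volume increase from a simple client is interpreted as
        # wanting the external sound to be easier to hear: shift the balance.
        if request["master"] > state["master"]:
            state["ambient"] -= 1
            state["external"] += 1
        state["master"] = request["master"]
    else:
        # A rich client's request is taken literally; the hearing device
        # applies the provided ambient, external, and master levels as-is.
        state.update(request)
    return state

state = {"master": 5, "ambient": 5, "external": 5}
adjust_volume("simple", state, {"master": 6})  # balance shifts toward external
adjust_volume("rich", state, {"master": 6, "ambient": 3, "external": 7})
```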
Aspects and implementations of the process 300 of the disclosure have been disclosed in the general context of various steps and operations. A variety of these steps and operations may be performed by hardware components or may be embodied in computer-executable instructions, which may be used to cause a general-purpose or special-purpose processor (e.g., in a computer, server, or other computing device) programmed with the instructions to perform the steps or operations. For example, the steps or operations may be performed by a combination of hardware, software, and/or firmware, such as with a wireless communication device or a hearing device. The computer-executable instructions can be stored on a non-transitory computer-readable medium, which, when executed by a processor of a hearing device, causes the hearing device to perform the process 300.
At the top of
As shown on the right side of
The phrases “in some implementations,” “according to some implementations,” “in the implementations shown,” “in other implementations,” and the like generally mean that a feature, structure, or characteristic following the phrase is included in at least one implementation of the disclosure and may be included in more than one implementation. In addition, such phrases do not necessarily refer to the same implementations or to different implementations.
The techniques introduced here can be embodied as special-purpose hardware (e.g., circuitry), as programmable circuitry appropriately programmed with software or firmware, or as a combination of special-purpose and programmable circuitry. Hence, implementations may include a machine-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, ROMs, random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing electronic instructions. In some implementations, the machine-readable medium is a non-transitory computer-readable medium, wherein non-transitory excludes a propagating signal.
The above detailed description of examples of the disclosure is not intended to be exhaustive or to limit the disclosure to the precise form disclosed above. While specific examples for the disclosure are described above for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in an order, alternative implementations may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, or modified to provide alternatives or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed or implemented in parallel, or may be performed at different times. Further, any specific numbers noted herein are only examples; alternative implementations may employ differing values or ranges.
As used herein, the word “or” refers to any possible permutation of a set of items. For example, the phrase “A, B, or C” refers to at least one of A, B, C, or any combination thereof, such as any of: A; B; C; A and B; A and C; B and C; A, B, and C; or multiple of any item such as A and A; B, B, and C; A, A, B, C, and C; etc. As another example, “A or B” can be only A, only B, or A and B.