This document relates generally to audio device systems and more particularly to systems and methods for wireless communication between users of ear-wearable devices.
Audio devices can be used to provide audible output to a user based on received wireless signals. Examples of audio devices include speakers and ear-wearable devices, also referred to herein as hearing devices. Examples of hearing devices include hearing assistance devices or hearing instruments, including both prescriptive devices and non-prescriptive devices. Specific examples of hearing devices include, but are not limited to, hearing aids, headphones, and earbuds.
Hearing aids are used to assist patients suffering from hearing loss by transmitting amplified sounds to their ear canals. In one example, a hearing aid is worn in and/or around a patient's ear. Hearing aids may include processors and electronics that improve the listening experience for a specific wearer or in a specific acoustic environment. Hearing and understanding speech in a noisy environment can be challenging, especially for a hearing-impaired person.
Thus, there is a need in the art for improved systems and methods for improving understanding of conversations with others in background noise.
Disclosed herein, among other things, are systems and methods for a conversation bridge between ear-wearable devices. A method includes receiving, at a central device, a first wireless signal from one or more first hearing devices configured to be worn by a first user via a first wireless connection, a second wireless signal from one or more second hearing devices configured to be worn by a second user via a second wireless connection, and a third wireless signal from one or more third hearing devices configured to be worn by a third user via a third wireless connection. The central device transmits, to the one or more first hearing devices, the one or more second hearing devices, and the one or more third hearing devices, one or more audio packets based on the received first, second and third wireless signals. The central device may be a dedicated standalone device, or any of the hearing devices may act as the central device. In another example, the hearing devices broadcast their audio without a central device. Any number of hearing devices or pairs of hearing devices may be used in the present system, and the audio packets may be broadcast or unicast and may include mixed or multiplexed audio from all devices or from a selected subset of devices, in various examples.
This Summary is an overview of some of the teachings of the present application and not intended to be an exclusive or exhaustive treatment of the present subject matter. Further details about the present subject matter are found in the detailed description and appended claims.
Various embodiments are illustrated by way of example in the figures of the accompanying drawings. Such embodiments are demonstrative and not intended to be exhaustive or exclusive embodiments of the present subject matter.
The following detailed description of the present subject matter refers to subject matter in the accompanying drawings which show, by way of illustration, specific aspects and embodiments in which the present subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present subject matter. References to “an”, “one”, or “various” embodiments in this disclosure are not necessarily to the same embodiment, and such references contemplate more than one embodiment, including combinations of such embodiments. The following detailed description is demonstrative and not to be taken in a limiting sense. The scope of the present subject matter is defined by the appended claims, along with the full scope of legal equivalents to which such claims are entitled.
The present detailed description will discuss audio devices such as hearing devices and speakers. The description refers to hearing devices generally, which include earbuds, headsets, headphones, and hearing assistance devices, using hearing aids as an example. Hearing devices other than those expressly listed in this document may also be used. It is understood that their use in the description is intended to demonstrate the present subject matter, not in a limiting, exclusive, or exhaustive sense.
Hearing aid users, depending on the severity of their hearing loss, may have trouble communicating with others nearby in the face of background noise. Therefore, there is a need for efficient wireless communication between hearing aid wearers as well as others who may want to participate in conversations where noise is prevalent. Modern ear worn devices contain wireless radios capable of two-way and broadcast communication over such protocols as Bluetooth and Bluetooth Low Energy (BLE). Such devices may use protocols allowing for unicast transmission and reception as well as broadcast transmission and reception.
The present subject matter provides wireless network topologies that allow for ear worn devices to communicate with one another for the purpose of improving conversations with others in background noise. Disclosed herein are various embodiments allowing such communication between participants. Thus, the present subject matter provides for improved speech understanding between participating users.
The present subject matter provides for streaming audio picked up by ear worn microphones built into earbuds or hearing aids, which can be shared with others when conditions warrant, such as in high-noise environments or when acoustic-only audio can be improved through wireless communication. In various examples, each peripheral device or group of devices incorporates a microphone to pick up the wearer's voice, and sends the corresponding audio over a wireless uplink to a central device that coordinates the wireless scheduling of packet-based communication to one or more other peripheral devices or groups of devices. Each peripheral device is equipped with a receiver or speaker to render the audio received in the downlinks, in various examples.
In one example, each of the peripheral devices may send unicast audio to a central intermediary device (or conversation bridge) or to a hearing device acting as a central device (such as in a Bluetooth piconet), and the central device can then mix and/or multiplex the audio from multiple sources and send the audio to the peripheral devices either via a broadcast mode or via a unicast mode used by the participants. To avoid echo in the unicast mode, each participant's own voice could be removed from the audio sent to that participant as shown in
In one example, the audio may be decoded from each uplink channel by the central device, mixed together, and then re-encoded and sent out as a wireless packet to each participant, as shown in
In
In one example, the central device 19 is configured to send an audio packet 2+3 in unicast mode to the one or more first hearing devices 11 that includes the audio received from the one or more second hearing devices 12 and the audio received from the one or more third hearing devices 13. The one or more first hearing devices 11 do not receive back the audio they transmitted to the central device 19, thus avoiding echo or overlap. In a similar manner, the central device 19 is configured to send an audio packet 1+3 to the one or more second hearing devices 12 that includes the audio received from the one or more first hearing devices 11 and the audio received from the one or more third hearing devices 13. Likewise, the central device 19 is configured to send an audio packet 1+2 to the one or more third hearing devices 13 that includes the audio received from the one or more first hearing devices 11 and the audio received from the one or more second hearing devices 12.
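For illustration, the following is a minimal sketch, not the actual implementation, of how a central device could build such per-participant unicast packets from decoded uplink frames; the helper names and the use of NumPy arrays for PCM frames are assumptions made for the example.

```python
# Minimal mix-minus sketch (hypothetical helper names): the central device sums the
# decoded uplink audio from every participant, then subtracts each participant's own
# contribution before re-encoding the downlink, so nobody hears their own voice back.
import numpy as np

def build_downlinks(uplinks: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """uplinks maps a device ID to one frame of decoded PCM samples."""
    full_mix = sum(uplinks.values())              # e.g., the combined "1+2+3" signal
    downlinks = {}
    for device_id, own_audio in uplinks.items():
        # Remove the recipient's own voice from the mix (e.g., "2+3" for device 11).
        downlinks[device_id] = full_mix - own_audio
    return downlinks

# Example with three participants and one 10 ms frame at 16 kHz (160 samples each).
frames = {dev: np.random.randn(160).astype(np.float32) for dev in ("dev11", "dev12", "dev13")}
per_device_mixes = build_downlinks(frames)
```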
In some examples, one or more of the first wireless connection, the second wireless connection or the third wireless connection include a Bluetooth® connection or a Bluetooth® Low Energy (BLE) connection. In the depicted embodiment, transmitting to the one or more first hearing devices, the one or more second hearing devices and the one or more third hearing devices includes sending multiple unicast transmissions. In some examples, the multiple unicast transmissions include audio packets generated by mixing the first wireless signal, the second wireless signal and the third wireless signal, and then removing the first wireless signal from the mixed wireless signals to prevent an echo. As shown with respect to
In another example, a wired, wireless, or built-in microphone may be included in any of the various examples herein, whereby a non-hearing-impaired person may wear such a microphone or a microphone in combination with the central device 19. In some examples, ambient noise may be cancelled out by eliminating highly correlated sound from all of the microphones while enhancing uncorrelated sound, such as a wearer's voice, from each microphone. Additional noise cancelling techniques may be used given the number of microphones employed in these topologies, in various examples. While three peripheral devices are shown, the present subject matter can be expanded to any number of devices limited only by the bandwidth and power resources of the network, in various examples. The signal length used is compatible with that used for conversations between participants, in various examples.
In one example, the central device 19 is configured to send an audio packet 2+3+4 in unicast mode to the one or more first hearing devices 11 that includes the audio from the microphone 14, the audio received from the one or more second hearing devices 12, and the audio received from the one or more third hearing devices 13. Thus, the one or more first hearing devices 11 do not receive back the audio they transmitted to the central device 19, avoiding echo or overlap. In a similar manner, the central device 19 is configured to send an audio packet 1+3+4 to the one or more second hearing devices 12 that includes the audio received from the microphone 14, the audio received from the one or more first hearing devices 11 and the audio received from the one or more third hearing devices 13. Likewise, the central device 19 is configured to send an audio packet 1+2+4 to the one or more third hearing devices 13 that includes the audio received from the microphone 14, the audio received from the one or more first hearing devices 11 and the audio received from the one or more second hearing devices 12.
According to various examples, the central device 19 may operate in a unicast-broadcast mode in which the central device can receive the audio channels from each device over a unicast uplink and set up multiple broadcast isochronous streams (BISes) within one broadcast isochronous group (BIG), from which each peripheral device can pick up the other audio streams as shown in
In one example, the central device 19 is configured to send an audio packet 1+2+3 in broadcast mode to all of the devices (here the one or more first hearing devices 11, the one or more second hearing devices 12, and the one or more third hearing devices 13). The audio packet 1+2+3 includes the audio received from the one or more first hearing devices 11, the audio received from the one or more second hearing devices 12 and the audio received from the one or more third hearing devices 13, in one example. In various examples, all audio picked up by the microphones of the devices may be unicast to the central device 19, including but not limited to own voice signals from the users of the devices. The central device 19 may control which microphones are open on some devices, and may use auto mixing to cancel correlated signals (ambient noise) based on input signal levels and pass through the uncorrelated signals (most likely speech). For example, the central device may have a microphone, listen for ambient sounds, and cancel these sounds. The central device 19 may be a standalone device with noise cancelling and dedicated processing, in some examples. In some examples, multiple separate radios may be used in a modular approach instead of one radio with central timing. In various examples, the central device 19 may send output to any number of Bluetooth devices via a single broadcast for use as a conference speaker panel mixer.
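As a rough illustration of the auto-mixing idea described above, the sketch below subtracts a cross-channel average (a crude stand-in for the correlated ambient component) from each uplink channel; the helper name, the fixed strength factor, and the use of a simple average are assumptions, not the actual algorithm.

```python
# Crude common-mode rejection sketch: ambient noise that reaches every microphone at a
# similar level dominates the cross-channel average and is subtracted, while each
# wearer's own voice, largely uncorrelated across distant microphones, is preserved.
import numpy as np

def suppress_common_ambient(channels: np.ndarray, strength: float = 0.8) -> np.ndarray:
    """channels has shape (num_mics, num_samples): one frame per uplink microphone."""
    common = channels.mean(axis=0, keepdims=True)   # rough estimate of correlated ambient noise
    return channels - strength * common             # keep the uncorrelated residue (most likely speech)

frame = np.random.randn(3, 160).astype(np.float32)  # three uplink channels, 10 ms at 16 kHz
speech_emphasized = suppress_common_ambient(frame)
```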
In another example, one of the audio or peripheral devices acts as the central bridge device and sets up bi-directional streams with the other devices as shown in
In one example, transmitting to the one or more second hearing devices 12 and the one or more third hearing devices 13 includes sending multiple unicast transmissions. In unicast mode, the one or more first hearing devices 11 act as the central device and are configured to send an audio packet 1+3 to the one or more second hearing devices 12 that includes the audio from the one or more first hearing devices 11 and the audio received from the one or more third hearing devices 13. Likewise, the one or more first hearing devices 11 are configured to send an audio packet 1+2 to the one or more third hearing devices 13 that includes the audio received from the one or more second hearing devices 12 and the audio from the one or more first hearing devices 11.
In another example, device 11 may function as the central device while broadcasting encoded streams as BISes in a BIG, such as shown in
In this example, transmitting to the one or more second hearing devices 12 and the one or more third hearing devices 13 includes sending a broadcast transmission. In broadcast mode, the one or more first hearing devices 11 act as the central device and are configured to send an audio packet 1+2+3 to the one or more second hearing devices 12 and the one or more third hearing devices 13 that includes the audio from the one or more first hearing devices 11, the audio received from the one or more second hearing devices 12, and the audio received from the one or more third hearing devices 13.
In another example, the system does not rely on a bridge device or one device acting as bridge. In this topology, depicted in
For example, the one or more first hearing devices 11 broadcasts an audio packet 1 including acoustic signals received by the microphone of the one or more first hearing devices 11, and receives broadcasted signals 2+3 from the other devices. In addition, the one or more second hearing devices 12 broadcasts an audio packet 2 including acoustic signals received by the microphone of the one or more second hearing devices 12, and receives broadcasted signals 1+3 from the other devices. Also, the one or more third hearing devices 13 broadcasts an audio packet 3 including acoustic signals received by the microphone of the one or more third hearing devices 13, and receives a broadcasted signal 1+2 from the other devices.
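The following is a minimal sketch of this bridge-free topology; the class and method names are hypothetical and the packet transport itself is abstracted away.

```python
# Each device broadcasts its own microphone frame and locally mixes the frames it
# receives from the other participants; its own frame never enters its playback mix.
import numpy as np

class BroadcastingDevice:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.received: dict[str, np.ndarray] = {}

    def outgoing_packet(self, mic_frame: np.ndarray) -> tuple[str, np.ndarray]:
        # e.g., the one or more first hearing devices 11 broadcast "audio packet 1"
        return (self.device_id, mic_frame)

    def on_packet(self, sender_id: str, frame: np.ndarray) -> None:
        if sender_id != self.device_id:              # ignore our own broadcast
            self.received[sender_id] = frame

    def playback_frame(self, num_samples: int = 160) -> np.ndarray:
        if not self.received:
            return np.zeros(num_samples, dtype=np.float32)
        return np.sum(list(self.received.values()), axis=0)   # e.g., "2+3" on device 11
```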
Additionally or alternatively, each of the one or more first hearing devices 11, the one or more second hearing devices 12, and the one or more third hearing devices 13 is configured to obtain an encryption code to enable reception and decryption of the broadcasted signals. In various examples, the encryption code is configured to be transmitted using a paired connection, a QR code, or using near field communication between at least two of the one or more first hearing devices 11, the one or more second hearing devices 12, and the one or more third hearing devices 13.
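Purely as a hedged illustration of sharing a broadcast encryption code: Bluetooth LE Audio defines its own key derivation and packet framing, so the sketch below only shows the general shape, using AES-CCM from the Python "cryptography" package as a stand-in and example values throughout.

```python
# A receiver that has obtained the shared code (via a paired link, QR code, or NFC)
# can decrypt overheard packets; all values below are illustrative, not spec-defined.
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

broadcast_code = bytes.fromhex("00112233445566778899aabbccddeeff")  # 16-byte shared code (example)
cipher = AESCCM(broadcast_code)

nonce = b"\x00" * 13                      # per-packet nonce; real systems derive this per packet
audio_payload = b"encoded audio frame"
encrypted = cipher.encrypt(nonce, audio_payload, None)

# A device that scanned the same QR code (for example) holds the same key and can decrypt.
decrypted = AESCCM(broadcast_code).decrypt(nonce, encrypted, None)
assert decrypted == audio_payload
```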
In the previous examples, the depicted embodiments show three devices plus a central device or three devices where any one device may act as a central device. However, the present methodology can be expanded to include any number of participants and devices, or may be reduced to allow only 2-way communication between two devices. In addition, each of the depicted devices (11, 12, 13, 19) may be a single device or it may be part of a coordinated set of devices such as two hearing aids or earbuds. Thus, the only limit on the possible topologies is the occupied bandwidth, in various examples. For illustrative purposes,
In one example, the central device 19 is configured to send an audio packet 1+2+3 in broadcast mode to all of the sets of devices (here the first set of ear-wearable devices 61, the second set of ear-wearable devices 62 and the third set of ear-wearable devices 63). The audio packet 1+2+3 includes the audio received from the first set of ear-wearable devices 61, the second set of ear-wearable devices 62 and the third set of ear-wearable devices 63, in one example. As stated above, any of the examples provided herein may be used with single devices or sets of devices. In one example, the pair of devices provides for stereo signals, and the left or right device may control wireless communication for the pair.
In still other examples, the central device 19 may include wired microphone inputs or may include built-in microphones. Such microphones may be directional or configured in a directional array so that the signal-to-noise ratio can be improved by directing the microphones or steering the array of microphones at various speakers. The present system may thus be used by persons wearing headphones or earbuds as well as by persons who are not wearing such devices but who can be given a wireless or wired microphone to allow the hearing-impaired person or persons to hear the speakers (using the microphone) more easily in the face of noise. In still other examples, the central device 19 may be a body-worn device, such as on a neck loop with a Bluetooth radio, and such a device may include a telecoil transmitter to convey the audio to a person wearing a hearing aid or hearing aids equipped with a telecoil.
Various examples of the present subject matter use own voice detection of the ear-wearable device, and cancel out other ambient noise (e.g., background noise) using binaural noise reduction before streaming the voice signals via a wireless connection to the other user's ear-wearable device. Additionally or alternatively, the present system provides for multiple modes of bidirectional streaming communication, including for example a mode that allows a user to continue to hear ambient sounds as well as incoming streaming of voice communications from the other user or users.
In various examples, at least one of the hearing devices includes a connection to a smartphone application. The smartphone application is configured to be used to pair or unpair the hearing devices, in some examples. In some examples, at least one of the hearing devices includes a voice control configured to be used to pair or unpair the hearing devices. In some examples, the one or more first hearing devices and the one or more second hearing devices are configured to share audio information to enhance performance of one or more of speech intelligibility or noise reduction. In various examples, at least one of the hearing devices is a hearing assistance device, such as a hearing aid.
In various examples, the present subject matter provides for audio devices to be able to discover broadcasts from other audio devices. The present subject matter includes a method of measuring proximity between devices, conveys sufficient information unique to finding the broadcaster, and may include the broadcaster's broadcast code when applicable, in various examples. In some examples, the broadcast code and other information is consistent with information that may be conveyed using out of band mechanisms such as QR codes or RFID (radio frequency identification) tags for sharing this information among participants.
In one example, the present subject matter provides a method of measuring proximity between participating devices. In some examples, participating devices may include a proximity sensor (or receiver) that has calibrated its RF (radio frequency) path and written its RF path loss, which includes antenna gain, to its controller using an HCI_LE_Write_RF_Path_Compensation command. A peripheral device may calibrate its RF path loss, including its antenna gain, to its controller using the same HCI command, in various examples. In some examples, an advertising or broadcasting device may include its calibrated transmission (TX) power in its advertisement (AUX_ADV_IND). In one example, the total path loss may be calculated from the difference between the TX power and the received signal strength indicator (RSSI), and the path loss may be compared against a threshold that is programmable based on the use case. Various examples may use other mechanisms, such as channel sounding, for determining proximity measurements.
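A short sketch of the path-loss proximity check follows; the 60 dB default threshold and the "accept if below the threshold" decision are illustrative assumptions, since the text notes the threshold is programmable per use case.

```python
# Path loss is estimated as the advertised (calibrated) TX power minus the measured RSSI,
# then compared against a programmable threshold to decide whether a device is nearby.
def is_in_proximity(advertised_tx_power_dbm: float,
                    measured_rssi_dbm: float,
                    path_loss_threshold_db: float = 60.0) -> bool:
    path_loss_db = advertised_tx_power_dbm - measured_rssi_dbm
    return path_loss_db <= path_loss_threshold_db

# Example: 0 dBm advertised TX power and -55 dBm measured RSSI give 55 dB of path loss.
print(is_in_proximity(0.0, -55.0))   # True with the default 60 dB threshold
```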
In various examples, the present subject matter provides for meeting applicable broadcast source requirements. For a public broadcast source to be able to utilize the function of a remote Broadcast Discovery Assistant (BDA), it is bonded with the receiving device. In some examples, the source may include a broadcast audio profile (BAP) broadcast assistant role which allows it to provide information directly to the BDA. Furthermore, the source may use its public address in its broadcast announcements, in various examples. The BDA may be bonded with the broadcast sender, in various examples, and may include a Broadcast Audio Scan service so that the broadcast sender can add a source using the procedures in BAP section 6.5.4 for adding sources. The BDA may include a service characteristic for setting the RF path loss threshold, used for estimating proximity, and may scan at a sufficient duty cycle to find acceptors sending general announcements within 1-2 seconds, in an example. The BDA may connect and write one or more characteristics necessary for the broadcast receiver to find and decode the broadcast, in various examples.
In various examples, for a broadcast receiver to make use of the broadcast discovery service, a broadcast receiver sends general connectable advertisements, allows temporary ad hoc connections, and enables a few non-secure writeable characteristics. The format for communications is consistent and includes data such as Broadcast Name, Broadcast ID, and Broadcast Code (if encrypted), in various examples.
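A small sketch of the information such a receiver might expose through its writeable characteristics is shown below; the class and field names are illustrative rather than the literal GATT characteristic definitions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BroadcastDiscoveryInfo:
    broadcast_name: str                        # human-readable Broadcast Name
    broadcast_id: int                          # 24-bit Broadcast ID from the announcement
    broadcast_code: Optional[bytes] = None     # present only when the broadcast is encrypted

info = BroadcastDiscoveryInfo(
    broadcast_name="Conversation Bridge",
    broadcast_id=0x123456,
    broadcast_code=bytes.fromhex("00112233445566778899aabbccddeeff"),
)
```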
Additionally or alternatively, the method includes generating the one or more audio packets by mixing the first wireless signal, the second wireless signal and the third wireless signal. In an example, generating the one or more audio packets includes generating a first audio packet to be transmitted to the one or more first hearing devices by removing the first wireless signal from the mixed wireless signals to prevent an echo. Additionally or alternatively, the method includes generating the one or more audio packets by multiplexing the first wireless signal, the second wireless signal and the third wireless signal.
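To contrast mixing with multiplexing, the sketch below keeps each participant's encoded frame separate inside one packet using a simple length-prefixed layout; the layout is an illustrative assumption, not a Bluetooth packet format.

```python
# Multiplexing keeps channels separate so a receiving device can decode, render,
# or discard each source individually instead of receiving a single summed signal.
import struct

def multiplex(frames: dict[int, bytes]) -> bytes:
    packet = b""
    for source_id, frame in frames.items():
        packet += struct.pack("<BH", source_id, len(frame)) + frame   # id, length, payload
    return packet

def demultiplex(packet: bytes) -> dict[int, bytes]:
    frames, offset = {}, 0
    while offset < len(packet):
        source_id, length = struct.unpack_from("<BH", packet, offset)
        offset += 3
        frames[source_id] = packet[offset:offset + length]
        offset += length
    return frames

muxed = multiplex({1: b"frame-from-dev11", 2: b"frame-from-dev12", 3: b"frame-from-dev13"})
assert demultiplex(muxed)[2] == b"frame-from-dev12"
```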
Additionally or alternatively, the method also includes the hearing devices suppressing ambient noise in an environment of the user to improve audibility. The method may include the hearing devices reducing volume on an existing incoming audio stream to improve audibility, in some examples. In various examples, the method includes the hearing devices using machine learning to detect the acoustic own voice signals and to suppress non-speech sounds. The method may include the second hearing device detecting a position of the first user to obtain a direction of incoming communication and providing a directional component to the output signal for the second user based on the direction of incoming communication, in an embodiment. Various types of wireless connections may be used, including but not limited to Bluetooth® (such as Bluetooth® 5.2, for example) or Bluetooth® Low Energy (BLE) connections. Additionally or alternatively, the wireless connection provides for use of isochronous channels. For example, Bluetooth® 5.2 permits one device to stream to multiple devices over isochronous channels.
Examples, as described herein, may include, or may operate by, logic or a number of components, or mechanisms. Circuit sets are a collection of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuit set membership may be flexible over time and underlying hardware variability. Circuit sets include members that may, alone or in combination, perform specified operations when operating. In an example, hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation. Accordingly, the computer readable medium is communicatively coupled to the other components of the circuit set member when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuit set. For example, under operation, execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time.
Machine (e.g., computer system) 400 may include a hardware processor 402 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 404 and a static memory 406, some or all of which may communicate with each other via an interlink (e.g., bus) 408. The machine 400 may further include a display unit 410, an alphanumeric input device 412 (e.g., a keyboard), and a user interface (UI) navigation device 414 (e.g., a mouse). In an example, the display unit 410, input device 412 and UI navigation device 414 may be a touch screen display. The machine 400 may additionally include a storage device (e.g., drive unit) 416, one or more input audio signal transducers 418 (e.g., microphone), a network interface device 420, and one or more output audio signal transducers 421 (e.g., speaker). The machine 400 may include an output controller 432, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
The storage device 416 may include a machine readable medium 422 on which is stored one or more sets of data structures or instructions 424 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 424 may also reside, completely or at least partially, within the main memory 404, within static memory 406, or within the hardware processor 402 during execution thereof by the machine 400. In an example, one or any combination of the hardware processor 402, the main memory 404, the static memory 406, or the storage device 416 may constitute machine readable media.
While the machine readable medium 422 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 424.
The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 400 and that cause the machine 400 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples may include solid-state memories, and optical and magnetic media. In an example, a massed machine-readable medium comprises a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine-readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 424 may further be transmitted or received over a communications network 426 using a transmission medium via the network interface device 420 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device 420 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 426. In an example, the network interface device 420 may include a plurality of antennas to communicate wirelessly using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 400, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
Various examples of the present subject matter support wireless communications with a hearing device. In various examples the wireless communications may include standard or nonstandard communications. Some examples of standard wireless communications include link protocols including, but not limited to, Bluetooth™, Bluetooth™ Low Energy (BLE), IEEE 802.11 (wireless LANs), 802.15 (WPANs), 802.16 (WiMAX), cellular protocols including, but not limited to, CDMA and GSM, ZigBee, and ultra-wideband (UWB) technologies. Such protocols support radio frequency communications, and some support infrared communications, while others support near-field magnetic induction (NFMI). Although the present system is demonstrated as a radio system, it is possible that other forms of wireless communications may be used, such as ultrasonic, optical, infrared, and others. It is understood that the standards which may be used include past and present standards. It is also contemplated that future versions of these standards and new future standards may be employed without departing from the scope of the present subject matter.
The wireless communications support a connection from other devices. Such connections include, but are not limited to, one or more mono or stereo connections or digital connections having link protocols including, but not limited to 802.3 (Ethernet), 802.4, 802.5, USB, SPI, PCM, ATM, Fibre-channel, Firewire or 1394, InfiniBand, or a native streaming interface. In various examples, such connections include all past and present link protocols. It is also contemplated that future versions of these protocols and new future standards may be employed without departing from the scope of the present subject matter.
Hearing assistance devices typically include at least one enclosure or housing, a microphone, hearing assistance device electronics including processing electronics, and a speaker or “receiver.” Hearing assistance devices may include a power source, such as a battery. In various examples, the battery is rechargeable. In various examples multiple energy sources are employed. It is understood that in various examples the microphone is optional. It is understood that in various examples the receiver is optional. It is understood that variations in communications protocols, antenna configurations, and combinations of components may be employed without departing from the scope of the present subject matter. Antenna configurations may vary and may be included within an enclosure for the electronics or be external to an enclosure for the electronics. Thus, the examples set forth herein are intended to be demonstrative and not a limiting or exhaustive depiction of variations.
It is understood that digital hearing assistance devices include a processor. In digital hearing assistance devices with a processor, programmable gains may be employed to adjust the hearing assistance device output to a wearer's particular hearing impairment. The processor may be a digital signal processor (DSP), microprocessor, microcontroller, other digital logic, or combinations thereof. The processing may be done by a single processor, or may be distributed over different devices. The processing of signals referenced in this application may be performed using the processor or over different devices. Processing may be done in the digital domain, the analog domain, or combinations thereof. Processing may be done using subband processing techniques. Processing may be done using frequency domain or time domain approaches. Some processing may involve both frequency and time domain aspects. For brevity, in some examples drawings may omit certain blocks that perform frequency synthesis, frequency analysis, analog-to-digital conversion, digital-to-analog conversion, amplification, buffering, and certain types of filtering and processing. In various examples of the present subject matter the processor is adapted to perform instructions stored in one or more memories, which may or may not be explicitly shown. Various types of memory may be used, including volatile and nonvolatile forms of memory. In various examples, the processor or other processing devices execute instructions to perform a number of signal processing tasks. Such examples may include analog components in communication with the processor to perform signal processing tasks, such as sound reception by a microphone, or playing of sound using a receiver (i.e., in applications where such transducers are used). In various examples of the present subject matter, different realizations of the block diagrams, circuits, and processes set forth herein may be created by one of skill in the art without departing from the scope of the present subject matter.
It is further understood that different hearing devices may embody the present subject matter without departing from the scope of the present disclosure. The devices depicted in the figures are intended to demonstrate the subject matter, but not necessarily in a limited, exhaustive, or exclusive sense. It is also understood that the present subject matter may be used with a device designed for use in the right ear or the left ear or both ears of the wearer.
The present subject matter is demonstrated for hearing devices, including hearing assistance devices, including but not limited to, behind-the-ear (BTE), in-the-ear (ITE), in-the-canal (ITC), receiver-in-canal (RIC), invisible-in-canal (IIC) or completely-in-the-canal (CIC) type hearing assistance devices. It is understood that behind-the-ear type hearing assistance devices may include devices that reside substantially behind the ear or over the ear. Such devices may include hearing assistance devices with receivers associated with the electronics portion of the behind-the-ear device, or hearing assistance devices of the type having receivers in the ear canal of the user, including but not limited to receiver-in-canal (RIC) or receiver-in-the-ear (RITE) designs. The present subject matter may also be used in hearing assistance devices generally, such as cochlear implant type hearing devices. The present subject matter may also be used in deep insertion devices having a transducer, such as a receiver or microphone. The present subject matter may be used in bone conduction hearing devices, in some examples. The present subject matter may be used in devices whether such devices are standard or custom fit and whether they provide an open or an occlusive design. It is understood that other hearing devices not expressly stated herein may be used in conjunction with the present subject matter.
Example 1 is a system including one or more first hearing devices configured to be worn on or in an ear of a first user, one or more second hearing devices configured to be worn on or in an ear of a second user, and a central device configured for wireless communication with the one or more first hearing devices and the one or more second hearing devices, wherein the central device includes one or more processors programmed to: receive a first wireless signal from the one or more first hearing devices via a first wireless connection, receive a second wireless signal from the one or more second hearing devices via a second wireless connection, and transmit, to the one or more first hearing devices and the one or more second hearing devices, one or more audio packets based on the received first wireless signal and the received second wireless signal.
In Example 2, the subject matter of Example 1 optionally includes wherein one or more of the first wireless connection or the second wireless connection include a Bluetooth® connection.
In Example 3, the subject matter of Example 1 optionally includes wherein one or more of the first wireless connection or the second wireless connection include a Bluetooth® Low Energy (BLE) connection.
In Example 4, the subject matter of Example 1 optionally includes wherein transmitting to the one or more first hearing devices and the one or more second hearing devices includes sending a broadcast transmission.
In Example 5, the subject matter of Example 1 optionally includes wherein transmitting to the one or more first hearing devices and the one or more second hearing devices includes sending multiple unicast transmissions.
In Example 6, the subject matter of Example 1 optionally includes wherein the one or more processors are further programmed to generate the one or more audio packets by mixing the first wireless signal and the second wireless signal.
In Example 7, the subject matter of Example 6 optionally includes wherein generating the one or more audio packets includes generating a first audio packet to be transmitted to the one or more first hearing devices by removing the first wireless signal from the mixed wireless signals to prevent an echo.
In Example 8, the subject matter of Example 1 optionally includes wherein the one or more processors are further programmed to generate the one or more audio packets by multiplexing the first wireless signal and the second wireless signal.
In Example 9, the subject matter of Example 1 optionally includes wherein the central device further includes a wired or wireless microphone.
In Example 10, the subject matter of Example 1 optionally further includes one or more third hearing devices configured to be worn on or in an ear of a third user, wherein the one or more processors are further programmed to receive a third wireless signal from the one or more third hearing devices via a third wireless connection and transmit, to the one or more first hearing devices, the one or more second hearing devices, and the one or more third hearing devices, one or more audio packets based on the received first wireless signal, the received second wireless signal and the received third wireless signal.
In Example 11, the subject matter of Example 10 optionally further includes one or more fourth hearing devices configured to be worn on or in an ear of a fourth user, wherein the one or more processors are further programmed to receive a fourth wireless signal from the one or more fourth hearing devices via a fourth wireless connection and transmit, to the one or more first hearing devices, the one or more second hearing devices, the one or more third hearing devices, and the one or more fourth hearing devices, one or more audio packets based on the received first wireless signal, the received second wireless signal, the received third wireless signal and the received fourth wireless signal.
Example 12 is a system including one or more first hearing devices configured to be worn on or in an ear of a first user, one or more second hearing devices configured to be worn on or in an ear of a second user, and one or more third hearing devices configured to be worn on or in an ear of a third user, wherein the one or more first hearing devices is configured as a central device for wireless communication with the one or more second hearing devices and the one or more third hearing devices, wherein the central device includes one or more processors programmed to receive a second wireless signal from the one or more second hearing devices via a second wireless connection, receive a third wireless signal from the one or more third hearing devices via a third wireless connection, and transmit, to the one or more second hearing devices and the one or more third hearing devices, one or more audio packets based on the received second wireless signal and the received third wireless signal.
In Example 13, the subject matter of Example 12 optionally includes wherein transmitting to the one or more second hearing devices and the one or more third hearing devices includes sending a broadcast transmission.
In Example 14, the subject matter of Example 12 optionally includes wherein transmitting to the one or more second hearing devices and the one or more third hearing devices includes sending multiple unicast transmissions.
Example 15 is a system including one or more first hearing devices configured to be worn on or in an ear of a first user, and one or more second hearing devices configured to be worn on or in an ear of a second user, wherein each device of the one or more first hearing devices and the one or more second hearing devices is configured with one or more processors programmed to: broadcast an audio packet to other devices of the one or more first hearing devices and the one or more second hearing devices, including acoustic signals received by a microphone of the each device, and receive a broadcasted signal from the other devices including acoustic signals received by microphones of the other devices.
In Example 16, the subject matter of Example 15 optionally includes wherein each of the one or more first hearing devices and the one or more second hearing devices is configured to obtain an encryption code to enable reception and decryption of the broadcasted signal.
In Example 17, the subject matter of Example 16 optionally includes wherein the encryption code is configured to be transmitted using a paired connection, a QR code, or using near field communication between the one or more first hearing devices and the one or more second hearing devices.
In Example 18, the subject matter of Example 15 optionally further includes one or more third hearing devices configured to be worn on or in an ear of a third user, wherein each device of the one or more first hearing devices, the one or more second hearing devices, and the one or more third hearing devices is configured with one or more processors programmed to: broadcast an audio packet to other devices of the one or more first hearing devices, the one or more second hearing devices, and the one or more third hearing devices, including acoustic signals received by a microphone of the each device, and receive a broadcasted signal from the other devices including acoustic signals received by microphones of the other devices.
Example 19 is a method including receiving, at a central wireless communication device, a first wireless signal from one or more first hearing devices configured to be worn on or in an ear of a first user via a first wireless connection, receiving, at the central device, a second wireless signal from one or more second hearing devices configured to be worn on or in an ear of a second user via a second wireless connection, and transmitting, by the central device, to the one or more first hearing devices and the one or more second hearing devices, one or more audio packets based on the received first wireless signal and the received second wireless signal.
In Example 20, the subject matter of Example 19 optionally further includes generating, by the central device, the one or more audio packets by mixing the first wireless signal and the second wireless signal.
In Example 21, the subject matter of Example 19 optionally further includes generating, by the central device, the one or more audio packets by multiplexing the first wireless signal and the second wireless signal.
In Example 22, the subject matter of Example 19 optionally further includes receiving, at the central device, a third wireless signal from one or more third hearing devices configured to be worn on or in an ear of a third user via a third wireless connection, and transmitting, by the central device, to the one or more first hearing devices, the one or more second hearing devices, and the one or more third hearing devices, one or more audio packets based on the received first wireless signal, the received second wireless signal and the received third wireless signal.
In Example 23, the subject matter of Example 22 optionally further includes receiving, at the central device, a fourth wireless signal from one or more fourth hearing devices configured to be worn on or in an ear of a fourth user via a fourth wireless connection, and transmitting, by the central device, to the one or more first hearing devices, the one or more second hearing devices, the one or more third hearing devices, and the one or more fourth hearing devices, one or more audio packets based on the received first wireless signal, the received second wireless signal, the received third wireless signal and the received fourth wireless signal.
Example 24 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-23.
Example 25 is an apparatus comprising means to implement any of Examples 1-20.
Example 26 is a system to implement any of Examples 1-23.
Example 27 is a method to implement any of Examples 1-23.
This application is intended to cover adaptations or variations of the present subject matter. It is to be understood that the above description is intended to be illustrative, and not restrictive. The scope of the present subject matter should be determined with reference to the appended claims, along with the full scope of legal equivalents to which such claims are entitled.
The present application claims the benefit under 35 U.S.C. § 119 (e) of U.S. Provisional Patent Application 63/507,241, filed Jun. 9, 2023, the disclosure of which is hereby incorporated by reference herein in its entirety.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 63507241 | Jun 2023 | US |
| Child | 18664918 | | US |