The present invention relates generally to adaptive noise reduction in wearable or implantable systems.
Medical devices have provided a wide range of therapeutic benefits to recipients over recent decades. Medical devices can include internal or implantable components/devices, external or wearable components/devices, or combinations thereof (e.g., a device having an external component communicating with an implantable component). Medical devices, such as traditional hearing aids, partially or fully-implantable hearing prostheses (e.g., bone conduction devices, mechanical stimulators, cochlear implants, etc.), pacemakers, defibrillators, functional electrical stimulation devices, and other medical devices, have been successful in performing lifesaving and/or lifestyle enhancement functions and/or recipient monitoring for a number of years.
The types of medical devices and the ranges of functions performed thereby have increased over the years. For example, many medical devices, sometimes referred to as “implantable medical devices,” now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a recipient. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.
In one aspect, a method is provided. The method comprises: capturing sound signals at a hearing device configured to be worn by a user and at one or more remote devices in wireless communication with the hearing device; determining, based on the sound signals, one or more noises present in an ambient environment of the hearing device; determining at least one user-preferred noise from the one or more noises for suppression; and suppressing the at least one user-preferred noise within the sound signals to generate noise-suppressed sound signals.
In another aspect, one or more non-transitory computer readable storage media are provided. The one or more non-transitory computer readable storage media comprise instructions that, when executed by a processor of a hearing device, cause the processor to: receive, at the hearing device configured to be worn by a user, noise model parameters from at least one external device in wireless communication with the hearing device, wherein the noise model parameters represent noise detected by the at least one external device; determine, based on sound signals received at the hearing device and the noise model parameters, one or more noises present in an ambient environment of the hearing device; determine at least one user-preferred noise from the one or more noises for suppression; suppress the at least one user-preferred noise within the sound signals to generate noise-suppressed sound signals; and process the noise-suppressed sound signals for generation of stimulation signals for delivery to the user.
In another aspect, a method is provided. The method comprises: capturing environmental signals at an implantable medical device system; determining, based on the environmental signals, one or more noises present in an ambient environment of a user of the implantable medical device system; determining at least one user-preferred noise from the one or more noises; attenuating the at least one user-preferred noise within the environmental signals to generate noise-reduced environmental signals; and generating, based on the noise-reduced environmental signals, one or more stimulation signals for delivery to the user of the implantable medical device system.
In another aspect, a system is provided. The system comprises: a user device configured to be worn by a user, the user device comprising one or more sensors configured to capture environmental signals; one or more remote devices in wireless communication with the user device, wherein the one or more remote devices each include at least one sensor configured to capture environmental signals; and one or more processors configured to: determine, based on the environmental signals, one or more noises present in an ambient environment of the user device, determine at least one user-preferred noise from the one or more noises for suppression, and suppress the at least one user-preferred noise within the environmental signals to generate noise-suppressed environmental signals.
Embodiments of the present invention are described herein in conjunction with the accompanying drawings, in which:
Presented herein are techniques for enabling a user of a wearable or implantable device to define noise sources for suppression/attenuation in an ambient environment. In particular, a plurality of devices within the ambient environment form a wearable or implantable system. The plurality of devices capture environmental signals (e.g., sound signals, light signals, etc.) from the ambient environment and the system determines, from the environmental signals, one or more noises (e.g., noise sources, noise types, etc.) present in an ambient environment. The system is configured to determine at least one user-preferred noise from the one or more noises for suppression (attenuation) and, accordingly, suppress the at least one user-preferred noise within the environmental signals to generate noise-suppressed environmental signals. In certain examples, the system generates stimulation signals from the noise-suppressed environmental signals and the system delivers the stimulation signals to a user.
More specifically, hearing and other types of devices can only do so much with their limited inputs (e.g., limited processing power and memory) to eliminate the background noise mixed with the target signal (signal of interest). Conventional approaches generally apply a one-size-fits-all scheme to cancel/suppress the noise in the background. However, presented herein are techniques that make use of multiple microphones provided by network-connected devices in order to provide an improved noise reduction/suppression system. In particular, and as described further below, the techniques presented herein create a profile of the existing types of background noise (noise) in the ambient environment, estimate/build a likelihood metric system, learn to prioritize mitigating different noise types depending on the user's preference, and pass the noise parameters from the analysis model to the hearing device for use in its noise cancellation algorithm.
The proposed system is an adaptive system that can identify, prioritize, and suppress the background noise(s) that are relevant to a specific user. Beyond reducing background noise in general terms, the proposed system can adaptively prioritize, suppress, and update the model for real-time noise reduction so as to reduce the noise(s) that are relevant to the user. In certain aspects, user input is used to select which components of the background noise should be attenuated, with a learning aspect proposed.
Merely for ease of description, the techniques presented herein are primarily described with reference to a specific implantable medical device system, namely a cochlear implant system, and with reference to a specific type of environmental signals, namely sound signals. However, it is to be appreciated that the techniques presented herein may also be partially or fully implemented by other types of devices or systems with other types of environmental signals. For example, the techniques presented herein may be implemented by other hearing devices, personal sound amplification products (PSAPs), or hearing device systems that include one or more other types of hearing devices, such as hearing aids, middle ear auditory prostheses, bone conduction devices, direct acoustic stimulators, electro-acoustic prostheses, auditory brain stimulators, cochlear implants, combinations or variations thereof, etc. The techniques presented herein may also be implemented by dedicated tinnitus therapy devices and tinnitus therapy device systems. In further embodiments, the techniques presented herein may also be implemented by, or used in conjunction with, vestibular devices (e.g., vestibular implants), visual devices (e.g., bionic eyes), sensors, pacemakers, drug delivery systems, defibrillators, functional electrical stimulation devices, catheters, seizure devices (e.g., devices for monitoring and/or treating epileptic events), sleep apnea devices, electroporation devices, wearable devices, etc.
As noted, cochlear implant system 102 includes an external component 104 that is configured to be directly or indirectly attached to the body of the recipient and an implantable component 112 configured to be implanted in the recipient. In the examples of
In the example of
It is to be appreciated that the OTE sound processing unit 106 is merely illustrative of the external component that could operate with implantable component 112. For example, in alternative examples, the external component may comprise a behind-the-ear (BTE) sound processing unit or a micro-BTE sound processing unit and a separate external coil assembly. In general, a BTE sound processing unit comprises a housing that is shaped to be worn on the outer ear of the recipient and is connected to the separate external coil assembly via a cable, where the external coil assembly is configured to be magnetically and inductively coupled to the implantable coil 114. It is also to be appreciated that alternative external components could be located in the recipient's ear canal, worn on the body, etc.
Also shown in
The processing module 109 may comprise, for example, one or more processors, and a memory device (memory) that includes the user-preferred noise suppression logic 131. The memory device may comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. The one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the user-preferred noise suppression logic 131 stored in the memory device.
The wearable device 103 and the sound processing unit 106 (or the cochlear implant 112) wirelessly communicate via a communication link 127. The communication link 127 may comprise, for example, a short-range communication link, such as Bluetooth link, Bluetooth Low Energy (BLE) link, a proprietary link, etc.
As noted above, the cochlear implant system 102 includes the sound processing unit 106 and the cochlear implant 112. However, as described further below, the cochlear implant 112 can operate independently from the sound processing unit 106, for at least a period, to stimulate the recipient. For example, the cochlear implant 112 can operate in a first general mode, sometimes referred to as an “external hearing mode,” in which the sound processing unit 106 captures sound signals which are then used as the basis for delivering stimulation signals to the recipient. The cochlear implant 112 can also operate in a second general mode, sometimes referred to as an “invisible hearing” mode, in which the sound processing unit 106 is unable to provide sound signals to the cochlear implant 112 (e.g., the sound processing unit 106 is not present, the sound processing unit 106 is powered-off, the sound processing unit 106 is malfunctioning, etc.). As such, in the invisible hearing mode, the cochlear implant 112 captures sound signals itself via implantable sound sensors and then uses those sound signals as the basis for delivering stimulation signals to the recipient. Further details regarding operation of the cochlear implant 112 in the external hearing mode are provided below, followed by details regarding operation of the cochlear implant 112 in the invisible hearing mode. It is to be appreciated that reference to the external hearing mode and the invisible hearing mode is merely illustrative and that the cochlear implant 112 could also operate in alternative modes.
In
The processing module 119 may comprise, for example, one or more processors, and a memory device (memory) that includes the user-preferred noise suppression logic 131. The memory device may comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. The one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the user-preferred noise suppression logic 131 stored in the memory device.
The external device 110 and the sound processing unit 106 (or the cochlear implant 112) wirelessly communicate via a communication link 126. The communication link 126 may comprise, for example, a short-range communication link, such as Bluetooth link, Bluetooth Low Energy (BLE) link, a proprietary link, etc.
The OTE sound processing unit 106 comprises one or more input devices that are configured to receive input signals (e.g., sound or data signals). The one or more input devices include one or more sound input devices 118 (e.g., one or more external microphones, audio input ports, telecoils, etc.), one or more auxiliary input devices 128 (e.g., audio ports, such as a Direct Audio Input (DAI), data ports, such as a Universal Serial Bus (USB) port, cable port, etc.), and a wireless module (e.g., transmitter, receiver, and/or transceiver) 120 (e.g., for communication with the external device 110). However, it is to be appreciated that the one or more input devices may include additional types of input devices and/or fewer input devices (e.g., the wireless module 120 and/or the one or more auxiliary input devices 128 could be omitted).
The OTE sound processing unit 106 also comprises the external coil 108, a charging coil 130, a closely-coupled transmitter/receiver 122, sometimes referred to as a radio-frequency (RF) transceiver 122, at least one rechargeable battery 132, and an external sound processing module 124. The external sound processing module 124 may comprise, for example, one or more processors, and a memory device (memory) that includes user-preferred noise suppression logic 131.
The memory device may comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. The one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the user-preferred noise suppression logic 131 stored in the memory device.
The implantable component 112 comprises an implant body (main module) 134, a lead region 136, and the intra-cochlear stimulating assembly 116, all configured to be implanted under the skin/tissue (tissue) 115 of the recipient. The implant body 134 generally comprises a hermetically-sealed housing 138 in which RF interface circuitry 140 and a stimulator unit 142 are disposed. The implant body 134 also includes the internal/implantable coil 114 that is generally external to the housing 138, but which is connected to the RF interface circuitry 140 via a hermetic feedthrough (not shown in
As noted, stimulating assembly 116 is configured to be at least partially implanted in the recipient's cochlea. Stimulating assembly 116 includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 144 that collectively form a contact or electrode array 146 for delivery of electrical stimulation (current) to the recipient's cochlea.
Stimulating assembly 116 extends through an opening in the recipient's cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to stimulator unit 142 via lead region 136 and a hermetic feedthrough (not shown in
As noted, the cochlear implant system 102 includes the external coil 108 and the implantable coil 114. The external magnet 152 is fixed relative to the external coil 108 and the implantable magnet 152 is fixed relative to the implantable coil 114. The magnets fixed relative to the external coil 108 and the implantable coil 114 facilitate the operational alignment of the external coil 108 with the implantable coil 114. This operational alignment of the coils enables the external component 104 to transmit data and power to the implantable component 112 via a closely-coupled wireless link 148 formed between the external coil 108 and the implantable coil 114. In certain examples, the closely-coupled wireless link 148 is a radio frequency (RF) link. However, various other types of energy transfer, such as infrared (IR), electromagnetic, capacitive and inductive transfer, may be used to transfer the power and/or data from an external component to an implantable component and, as such,
As noted above, sound processing unit 106 includes the external sound processing module 124. The external sound processing module 124 is configured to convert received input signals (received at one or more of the input devices) into output signals for use in stimulating a first ear of a recipient (i.e., the external sound processing module 124 is configured to perform sound processing on input signals received at the sound processing unit 106). Stated differently, the one or more processors in the external sound processing module 124 are configured to execute sound processing logic in memory to convert the received input signals into output signals that represent electrical stimulation for delivery to the recipient. The input signals can comprise signals received at the external component 104 (e.g., received at sound input devices 118), signals received at the wearable device 103, and/or signals received at the external device 110.
As noted,
Returning to the specific example of
As detailed above, in the external hearing mode the cochlear implant 112 receives processed sound signals from the sound processing unit 106. However, in the invisible hearing mode, the cochlear implant 112 is configured to capture and process sound signals for use in electrically stimulating the recipient's auditory nerve cells. In particular, as shown in
In the invisible hearing mode, the implantable sound sensors 160 are configured to detect/capture signals (e.g., acoustic sound signals, vibrations, etc.), which are provided to the implantable sound processing module 158. The implantable sound processing module 158 is configured to convert received input signals (received at one or more of the implantable sound sensors 160) into output signals for use in stimulating the first ear of a recipient (i.e., the processing module 158 is configured to perform sound processing operations). Stated differently, the one or more processors in implantable sound processing module 158 are configured to execute sound processing logic in memory to convert the received input signals into output signals 156 that are provided to the stimulator unit 142. The stimulator unit 142 is configured to utilize the output signals 156 to generate electrical stimulation signals (e.g., current signals) for delivery to the recipient's cochlea, thereby bypassing the absent or defective hair cells that normally transduce acoustic vibrations into neural activity.
It is to be appreciated that the above description of the so-called external hearing mode and the so-called invisible hearing mode are merely illustrative and that the cochlear implant system 102 could operate differently in different embodiments. For example, in one alternative implementation of the external hearing mode, the cochlear implant 112 could use signals captured by the sound input devices 118 and the implantable sound sensors 160 in generating stimulation signals for delivery to the recipient.
Different technologies exist to extract the vital information out of the sound signals for processing the speech information to help a user better perceive the audio in the ambient environment. In conventional systems, a hearing device is only able to process based on the sound signals received at its own microphones. In general, a hearing device can only do so much based on its limited inputs (e.g., limited processing power and memory) to eliminate the background noise mixed with the signal of interest.
With the growth of the Internet of Things and wireless connectivity between electronic devices (e.g., Bluetooth Low Energy), devices can communicate with each other within a network (e.g., a body area network). Increasingly, these devices include microphones or other input sensors that can capture information about the ambient environment of a hearing device. For instance, it is increasingly common for mobile phones and wearable devices to include one or more input sensors (e.g., microphones). As such, presented herein are techniques that leverage the input sensors of other devices in a process that determines the presence and type of background noise around the hearing device. Depending on the determined type of noise, the additional information contributed by the other supporting device(s) can be used to, for example, construct an adaptive masking scheme to filter out user-preferred noise.
In general,
More specifically, as shown in
More specifically, presented herein is a user-preferred noise suppression system that is configured to use environmental signals, such as sound signals, captured from multiple sources (e.g., the sound signals 121(A), 121(B), and 121(C)) to generate a profile of the noise present in the ambient environment (e.g., representing the nature/attributes of the detected noise in the ambient environment 123). The user-preferred noise suppression system is configured to determine at least one user-preferred noise for suppression.
In certain embodiments, the system can classify/categorize the noise into different “noise categories.” The noise categories can be, for example, different types of noise, different noise sources, or different shared sound attributes (e.g., high frequency, low frequency, etc.). The system can allow a user to select, in real-time, specific noises, or noise categories, for suppression/attenuation (e.g., cancellation). In certain embodiments, the system is configured to learn and automatically feed back particular data to an adaptive masking system to suppress or cancel certain user-preferred noises (e.g., noise patterns, etc.).
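The categorization described above can be sketched as follows. This is a minimal, illustrative sketch only: the attribute names, frequency thresholds, and category labels are assumptions for the example, not the classification scheme actually used by the system.

```python
# Hypothetical sketch: grouping detected noises into coarse categories
# from shared sound attributes (thresholds and labels are illustrative).

def categorize_noise(centroid_hz, is_periodic):
    """Assign a noise to a coarse category based on two attributes:
    spectral centroid (Hz) and whether the noise repeats over time."""
    if centroid_hz < 300:
        band = "low-frequency"
    elif centroid_hz < 2000:
        band = "mid-frequency"
    else:
        band = "high-frequency"
    kind = "periodic" if is_periodic else "intermittent"
    return f"{band}/{kind}"

# Example: two noises detected in the ambient environment.
noises = [
    {"label": "air conditioner", "centroid_hz": 120, "is_periodic": True},
    {"label": "keyboard clicks", "centroid_hz": 3500, "is_periodic": False},
]
categories = {n["label"]: categorize_noise(n["centroid_hz"], n["is_periodic"])
              for n in noises}
```

In this sketch the air conditioner falls into a "low-frequency/periodic" category while the keyboard clicks fall into "high-frequency/intermittent"; a user could then select either category, rather than each individual noise, for suppression.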
Turning to the example of
As noted, these modules 363, 364, 366, and 368 can each be implemented by different components of a wearable or implantable system. For example, certain modules can be implemented by components of a wearable device (e.g., sound processing unit 106 in
Returning to the example of
The noise source profile module 364 is configured to create a “noise model” or “noise profile” representing the existing types/sources of background noise in the ambient environment. For instance, the noise model can include the fundamental and/or harmonics of the noises, the approximate frequency range of the noise, the repeatability/periodicity of the noise, the amplitude/duration of the noise, etc. In general, the noise models are parameters that describe the noise so that the entire signal captured by the external device 110 and/or remote device 103 need not be streamed to the sound processing unit 106. The parameters could be, for example, filter coefficients for use by the sound processing unit 106.
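One way such noise-model parameters might be estimated is sketched below. This is a simplified, assumed implementation (RMS amplitude plus an autocorrelation-based fundamental estimate), offered only to make the idea of "parameters instead of raw audio" concrete; the actual parameterization used by the noise source profile module is not specified here.

```python
import math

# Illustrative sketch: summarize a captured noise segment as a small
# set of model parameters so the raw audio need not be streamed.

def noise_model_params(samples, rate_hz):
    """Return compact parameters describing a noise segment:
    RMS amplitude and an autocorrelation-based fundamental estimate."""
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    # Search lags corresponding to roughly 50 Hz - 2 kHz fundamentals
    # for the strongest self-similarity (periodicity).
    best_lag, best_corr = 0, 0.0
    for lag in range(rate_hz // 2000 or 1, rate_hz // 50):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    fundamental = rate_hz / best_lag if best_lag else 0.0
    return {"rms": rms, "fundamental_hz": fundamental}

# A 100 Hz tone sampled at 8 kHz: the estimator should recover ~100 Hz.
rate = 8000
tone = [math.sin(2 * math.pi * 100 * t / rate) for t in range(4000)]
params = noise_model_params(tone, rate)
```

A device would transmit only the resulting dictionary (a few numbers) over the wireless link, rather than the thousands of raw samples it was derived from.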
The user preference module 366 is configured to determine, using the sound signals, the noise profile, and/or other information, at least one user-preferred noise, which is present in the ambient environment 123, for suppression/attenuation. That is, the user preference module 366 is configured to determine which of the noises (e.g., noise sources, noise types, noise attributes, etc.) present in the sound signals 121(A)-121(C) are preferred, by the user, for suppression. The user preference module 366 could be implemented, for example, at the external device 110, remote device 103, and/or the sound processing unit 106.
In certain embodiments, the user preference module 366 is configured to determine the user-preferred noise source for suppression based on a user input. For example, the system 102 (e.g., external device 110) can be configured to provide the user with an indication of the one or more noises present in an ambient environment 123 (e.g., determined from the noise profile module 364). The system can then receive (e.g., from the user, a caregiver, etc.) a selection of the at least one user-preferred noise to suppress (e.g., a user input identifying one of the noise categories for suppression).
In one such example, the system provides the recipient with a list of determined noise categories (e.g., as shown below in
In other embodiments, the user preference module 366 is configured to automatically determine the user-preferred noise source for suppression without a user input. In such embodiments, the user preference module 366 can be implemented as, for example, a machine-learning system. In such embodiments, the machine-learning system is configured to determine which of the one or more noise sources should be suppressed to provide the user with an optimal listening experience. This determination can be made based on a number of different factors, but is generally based on machine-learned preferences of the user and attributes of the sound signals themselves.
In certain embodiments, the selection of noises for cancellation by the user can form part of a training process for the machine-learning system. That is, in certain embodiments, the system initially relies on user inputs to determine which noises to suppress. Over time, the system can use machine-learning to progress to, for example, providing the user with a recommendation of a noise to suppress and, eventually, automatically selecting a noise to suppress. The user can also selectively activate/deactivate the user-preferred noise suppression system 362, override a selection made by the user-preferred noise suppression system 362, etc.
Also shown is a noise suppression prioritization module 374 that is configured to learn to prioritize suppression of different noises (e.g., different noise types) by incorporating the attributes of the noise, the likelihood metric 371, and other factors. That is, the noise suppression prioritization module 374 is a machine-learning algorithm that is configured to learn the attributes of noise types that the user prefers to suppress. For example, the noise suppression prioritization module 374 can learn to prioritize suppression of different noises based on the physiological and/or cognitive state of the user or on objective measures (e.g., electrically evoked compound action potential (ECAP) measurements, electrocochleography (ECoG) measurements, higher evoked potentials measured from the brainstem and auditory cortex, measurements of the electrical properties of the cochlea and the electrode array, other electrophysiological measurements, etc.). In certain embodiments, the noise suppression prioritization module 374 is configured to learn the attributes of noise types that the user prefers to suppress through audio processing mechanisms (e.g., learning common characteristics shared by the majority of noise types, such as being low frequency, impulsive, continuous, or intermittent). In certain embodiments, the noise suppression prioritization module 374 is configured to learn these attributes through subjective measures. Subjective measures can be considered, for example, relative to that particular individual (e.g., a machine-learning model operating behind the scenes could learn the reactions of that individual when he/she is exposed to different types of sounds, and any sound to which the individual responds in an unpleasant manner could be considered noise, i.e., an unwanted sound).
In another example, the subjective measures can be based on a larger portion of the population (e.g., if over 70% of users would respond to a given sound in a negative way, that sound source could be considered a noise source).
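The prioritization behavior described above can be sketched in simplified form: per-category suppression weights are reinforced by the user's past selections, and noises outside the user's audible range are dropped to the bottom of the list, as discussed. The class, field names, and count-based learning rule here are illustrative assumptions; a deployed module would likely use a trained model rather than simple counts.

```python
from collections import defaultdict

# Hypothetical sketch of suppression prioritization: learn per-category
# preference weights from past user selections, then rank the noises
# currently detected in the ambient environment.

class SuppressionPrioritizer:
    def __init__(self):
        self.weights = defaultdict(float)  # category -> learned preference

    def record_selection(self, category):
        """The user chose this category for suppression; reinforce it."""
        self.weights[category] += 1.0

    def prioritize(self, detected, hearing_range_hz=(20, 20000)):
        """Rank detected noises by learned preference; noises the user
        cannot hear are pushed to the bottom, freeing resources for
        other dominant background noises."""
        lo, hi = hearing_range_hz
        def score(noise):
            audible = lo <= noise["centroid_hz"] <= hi
            return (audible, self.weights[noise["category"]])
        return sorted(detected, key=score, reverse=True)

p = SuppressionPrioritizer()
p.record_selection("low-frequency/periodic")  # past user choice
detected = [
    {"category": "high-frequency/intermittent", "centroid_hz": 3500},
    {"category": "low-frequency/periodic", "centroid_hz": 120},
    {"category": "ultrasonic", "centroid_hz": 25000},  # inaudible to user
]
ranked = p.prioritize(detected, hearing_range_hz=(100, 8000))
```

Here the previously selected low-frequency category ranks first, while the noise outside the user's assumed audible range falls to the bottom of the priority list.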
Regardless of the user preference module 366 implementation (e.g., manual selection, recommendation, or automated selection), an indication of the selected at least one user-preferred noise to suppress can be provided to the noise suppression module 368. The noise suppression module 368 uses this information to generate noise-suppressed sound signals 373 (e.g., signals in which the at least one user-preferred noise source has been cancelled, reduced, attenuated, or otherwise suppressed).
With specific reference to the example of
As noted, the wearable device 103 comprises at least one microphone 105 that is configured to capture sound signals 121(A) from the ambient environment 123. Similarly, the external device 110 comprises at least one microphone 113 configured to capture sound signals 121(C) from the ambient environment 123. In certain examples, the microphones 105, 113, as well as the microphones 118 of the sound processing unit 106, form the noise capture module 363 of
The wearable device 103 and the external device 110 are configured to process the respective sound signals 121(A) and 121(C) received thereby and, in certain embodiments, are configured to construct the noise model (noise profile) of the noise present in the ambient environment 123. In certain examples, the sound processing unit 106 is also configured to generate a noise model from the sound signals 121(B) received at the microphones 118. That is, in certain embodiments, the wearable device 103, the external device 110, and the sound processing unit 106 each implement aspects of the noise source profile module 364, described above. Since, as noted, the sound signals 121(A), 121(B), and 121(C) have different attributes of the noise and other sound sources present in the ambient environment, the noise models generated by the wearable device 103, the external device 110, and the sound processing unit 106 may differ in certain respects. In
The user-preferred noise suppression system 362 is advantageous in that it is not a one-size-fits-all approach. Instead, it is an adaptive system that applies a customized noise masking scheme. In particular, from the system's perspective a signal may be classified as a noise signal, but different users can have different levels of acceptance and/or influencing factors and, as such, their acceptance of, or problems with, the same type of noise could differ. In certain embodiments, the proposed system also takes into account the individual's level of acceptance when prioritizing the types of noise appearing in the user-specific profile. For instance, a given person may be more sensitive to one type of noise than to others and may try to turn away upon hearing such noise. On the other hand, a person may not be able to hear a particular frequency because of medical conditions, aging, noise exposure, etc. Thus, if the background noise happens to occur at this frequency and/or range, the system may drop this noise to the bottom of the priority list (after having matched it with the user's body condition), freeing up system resources to handle other dominant background noises.
In certain examples, the wearable device 103 and the external device 110 send, in real-time, the parameters of their respective noise models to the sound processing unit 106, which can then use these noise models (potentially with its own noise model) to reduce the incident input noise via noise cancellation/suppression techniques. An example of this noise cancellation could be an Active Noise Cancellation (ANC) system, where the noise model parameters are used to regenerate the noise signal in the sound processing unit 106, and the regenerated signal is then subtracted from the input signal to reduce the noise component, using standard ANC techniques (such as a Kalman filter).
More specifically,
As noted, the techniques presented herein enable the suppression of user-preferred noises (e.g., noise sources, noise types, etc.) present in the ambient environment 123. As such, as shown in
As noted above, in certain examples, the sound processing unit 106 is configured to provide active noise cancellation of user-preferred noises present in the ambient environment 123. Active noise cancellation is based on the presence of at least two input signals, where one input signal is considered to include predominantly noise and the other signal(s) is considered to include both target signal and noise (target signal + noise). In a simple form, active noise cancellation generates a noise-reduced output by summing together the target signal + noise input and an inverted version of the noise input. In practice, an adaptive algorithm, such as a Kalman filter, is used to determine the output (i.e., to filter the noise to be subtracted from the input so as to better handle variations in levels, frequencies, etc.). In the present application, the input microphone(s) on the sound processing unit 106 receive the target signal and noise, while the microphones of the wearable device 103 and/or external device 110 receive predominantly noise, and so can be used to subtract the noise from the input.
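The two-input cancellation described above can be sketched in code. This is a minimal illustration only: it uses a normalized LMS adaptive filter as the adaptive algorithm (the text names a Kalman filter as one standard option), and the signals, tap count, and step size are all assumptions made for the sketch:

```python
# Minimal two-microphone adaptive noise cancellation sketch using a
# normalized LMS filter. All parameters (tap count, step size) and
# signals are illustrative assumptions, not from the disclosure.
import math

def nlms_cancel(primary, reference, n_taps=8, mu=0.1, eps=1e-8):
    """Adaptively filter the noise-only reference and subtract the
    estimate from the primary (target + noise) input; the running
    error is the noise-reduced output."""
    w = [0.0] * n_taps                    # adaptive filter weights
    buf = [0.0] * n_taps                  # most recent reference samples
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]              # shift newest sample in
        y = sum(wi * bi for wi, bi in zip(w, buf))  # noise estimate
        e = d - y                         # error = cleaned output sample
        norm = sum(bi * bi for bi in buf) + eps
        # Normalized LMS weight update driven by the residual error.
        w = [wi + (mu * e / norm) * bi for wi, bi in zip(w, buf)]
        out.append(e)
    return out

# Toy demonstration: a low-frequency tone (target) corrupted by an
# interfering tone that the reference microphone observes directly,
# standing in for the wearable/external device's noise-dominated input.
n = 4000
tone = [math.sin(0.3 * t) for t in range(n)]        # target signal
noise = [0.8 * math.sin(1.1 * t) for t in range(n)]  # ambient noise
primary = [s + v for s, v in zip(tone, noise)]       # signal + noise mic
cleaned = nlms_cancel(primary, noise)                # noise-only reference
```

After the filter converges, the output approximates the target tone with the interfering component largely removed, illustrating why a signal-plus-noise input and a predominantly-noise input together suffice for cancellation.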
As noted above, certain aspects presented herein use a machine-learning device, referred to as a noise suppression prioritization module, to determine which noises should be suppressed in the ambient environment of a user (e.g., identifying the noises that are preferred by the user for suppression in the environment and proactively suppressing those noises). The noise suppression prioritization module is a functional block (e.g., one or more processors operating based on code, algorithm(s), etc.) that is trained, through a machine-learning process, to select a noise for suppression, while accounting for the user's preferences and attributes of the ambient environment.
As shown, the noise suppression prioritization module 774 includes a state observing unit (state unit) 782, a label data unit 784, and a learning unit 786. As described below, the noise suppression prioritization module 774 is configured to generate data 775 representing the user-preferred noise for suppression. Stated differently, the noise suppression prioritization module 774 is configured to determine a noise source present in the ambient environment that, according to the user's preferences, should be suppressed.
In the example of
In general, the preferred noise source for suppression is subjective to the user and does not follow a linear function of the state data 779. That is, the user-preferred noise source for suppression cannot be predicted for different users based on the state data alone. Therefore, the label data unit 784 also provides the learning unit 786 with label data, represented by arrow 785, which captures the subjective experience/preferences of the user and is highly user specific. Stated differently, the label data unit 784 collects subjective user inputs indicating the user's preferred noise sources for cancellation, which are represented in the label data 785. Through machine-learning techniques, the learning unit 786 correlates the state data 779 and the label data 785 over time to develop the ability to automatically select a user-preferred noise source for suppression, given the specific attributes of the ambient environment and the user's subjective preferences. Stated differently, the learning unit 786 develops the ability to identify the noises that are preferred by the user for suppression in the environment and to proactively suppress those noises. As a result, the noise suppression is specifically tailored to the noise attributes that are most problematic for the specific user.
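The correlation performed by the learning unit 786 can be sketched as pairing observed state vectors with the user's labeled choices and then predicting from the most similar past state. The disclosure does not specify the learning algorithm; a 1-nearest-neighbor rule stands in for it here, and the feature names and example values are illustrative assumptions:

```python
# Hypothetical sketch of the learning unit: correlate state data
# (ambient-environment features) with label data (the user's subjective
# suppression choices), then predict the user-preferred noise source
# for a new environment. 1-nearest-neighbor is a stand-in for the
# unspecified machine-learning process; features are assumptions.

def train(state_rows, labels):
    """Pair each observed state vector with the user's labeled choice."""
    return list(zip(state_rows, labels))

def predict(model, state):
    """Return the label of the closest previously observed state."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda pair: dist(pair[0], state))[1]

# Assumed state features: (noise level dB, dominant frequency kHz, SNR dB)
states = [(70.0, 0.1, 5.0), (55.0, 2.0, 10.0), (80.0, 0.05, 2.0)]
labels = ["traffic", "speech babble", "machinery"]
model = train(states, labels)
print(predict(model, (75.0, 0.08, 3.0)))  # → machinery
```

A deployed system would accumulate many such (state, label) pairs over time, which is what allows the prediction to become specific to one user's subjective preferences rather than a population average.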
As previously described, the techniques of the present disclosure can be applied to other medical devices, such as neurostimulators, cardiac pacemakers, cardiac defibrillators, sleep apnea management stimulators, seizure therapy stimulators, tinnitus management stimulators, and vestibular stimulation devices, as well as other medical devices that deliver stimulation to tissue. Further, the technology described herein can also be applied to consumer devices. For example, beyond hearing, the techniques presented herein could be applied to retinal prostheses, where the “noise” refers to the content of visible signals (e.g., color level, brightness, etc.), rather than sound signals. That is, in these examples, the ‘noise’ would be related to the content of the light (for example), where different vision-impaired users may be sensitive to different kinds of light.
The processing module 919 may comprise, for example, one or more processors, and a memory device (memory) that includes the user-preferred noise suppression logic 931. The memory device may comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. The one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the user-preferred noise suppression logic 931 stored in the memory device.
The external device 910 and the retinal prosthesis 900 wirelessly communicate via a communication link 926. The communication link 926 may comprise, for example, a short-range communication link, such as a Bluetooth link, a Bluetooth Low Energy (BLE) link, a proprietary link, etc.
The retinal prosthesis 900 comprises an implanted processing module 925 and a retinal prosthesis sensor-stimulator 990 positioned proximate the retina of a recipient. In an example, sensory inputs (e.g., photons entering the eye) are absorbed by a microelectronic array of the sensor-stimulator 990 that is hybridized to a glass piece 992 including, for example, an embedded array of microwires. The glass can have a curved surface that conforms to the inner radius of the retina. The sensor-stimulator 990 can include a microelectronic imaging device that can be made of thin silicon containing integrated circuitry that converts the incident photons to an electronic charge.
The processing module 925 includes a wireless module 920, user-preferred noise suppression logic 931, and an image processor 923 that is in signal communication with the sensor-stimulator 990 via, for example, a lead 988 which extends through surgical incision 989 formed in the eye wall. In other examples, processing module 925 is in wireless communication with the sensor-stimulator 990. The image processor 923 processes the input into the sensor-stimulator 990, and provides control signals back to the sensor-stimulator 990 so the device can provide an output to the optic nerve. That said, in an alternate example, the processing is executed by a component proximate to, or integrated with, the sensor-stimulator 990. The electric charge resulting from the conversion of the incident photons is converted to a proportional amount of electronic current which is input to a nearby retinal cell layer. The cells fire and a signal is sent to the optic nerve, thus inducing a sight perception.
The processing module 925 can be implanted in the recipient and function by communicating with the external device 910, such as a behind-the-ear unit, a pair of eyeglasses, etc. The external device 910 can include an external light/image capture device (e.g., located in/on a behind-the-ear device or a pair of glasses, etc.), while, as noted above, in some examples, the sensor-stimulator 990 captures light/images, which sensor-stimulator is implanted in the recipient.
As noted, the external device 910 and the retinal prosthesis 900 include user-preferred noise suppression logic 931. Similar to the above embodiments, the user-preferred noise suppression logic 931 represents a user-preferred noise suppression system that is configured to use light signals captured from multiple sources (e.g., by external device 910 and retinal prosthesis 900) to generate a profile of the light noise sources present in the ambient environment. Using the profile, the user-preferred noise suppression system can determine the nature of the detected background noise(s) in the ambient environment. The user-preferred noise suppression system can then, for example, allow a user to select specific noise sources for suppression or cancellation, learn and automatically feed back particular data to an adaptive masking system to suppress or cancel certain noise patterns, etc. (e.g., filter out user-preferred light noise).
As should be appreciated, while particular uses of the technology have been illustrated and discussed above, the disclosed technology can be used with a variety of devices in accordance with many examples of the technology. The above discussion is not meant to suggest that the disclosed technology is only suitable for implementation within systems akin to that illustrated in the figures. In general, additional configurations can be used to practice the processes and systems herein and/or some aspects described can be excluded without departing from the processes and systems disclosed herein.
This disclosure described some aspects of the present technology with reference to the accompanying drawings, in which only some of the possible aspects were shown. Other aspects can, however, be embodied in many different forms and should not be construed as limited to the aspects set forth herein. Rather, these aspects were provided so that this disclosure would be thorough and complete and would fully convey the scope of the possible aspects to those skilled in the art.
As should be appreciated, the various aspects (e.g., portions, components, etc.) described with respect to the figures herein are not intended to limit the systems and processes to the particular aspects described. Accordingly, additional configurations can be used to practice the methods and systems herein and/or some aspects described can be excluded without departing from the methods and systems disclosed herein.
According to certain aspects, systems and non-transitory computer readable storage media are provided. The systems are configured with hardware configured to execute operations analogous to the methods of the present disclosure. The one or more non-transitory computer readable storage media comprise instructions that, when executed by one or more processors, cause the one or more processors to execute operations analogous to the methods of the present disclosure.
Similarly, where steps of a process are disclosed, those steps are described for purposes of illustrating the present methods and systems and are not intended to limit the disclosure to a particular sequence of steps. For example, the steps can be performed in differing order, two or more steps can be performed concurrently, additional steps can be performed, and disclosed steps can be excluded without departing from the present disclosure. Further, the disclosed processes can be repeated.
Although specific aspects were described herein, the scope of the technology is not limited to those specific aspects. One skilled in the art will recognize other aspects or improvements that are within the scope of the present technology. Therefore, the specific structure, acts, or media are disclosed only as illustrative aspects. The scope of the technology is defined by the following claims and any equivalents therein.
It is also to be appreciated that the embodiments presented herein are not mutually exclusive and that the various embodiments may be combined with one another in any of a number of different manners.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IB2022/062392 | 12/16/2022 | WO |
Number | Date | Country
---|---|---
63294955 | Dec 2021 | US