The present invention relates generally to the regulation of filters associated with implantable sensors.
Medical devices have provided a wide range of therapeutic benefits to recipients over recent decades. Medical devices can include internal or implantable components/devices, external or wearable components/devices, or combinations thereof (e.g., a device having an external component communicating with an implantable component). Medical devices, such as traditional hearing aids, partially or fully-implantable hearing prostheses (e.g., bone conduction devices, mechanical stimulators, cochlear implants, etc.), pacemakers, defibrillators, functional electrical stimulation devices, and other medical devices, have been successful in performing lifesaving and/or lifestyle enhancement functions and/or recipient monitoring for a number of years.
The types of medical devices and the ranges of functions performed thereby have increased over the years. For example, many medical devices, sometimes referred to as “implantable medical devices,” now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a recipient. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.
In one aspect, a method is provided. The method comprises: detecting signals with an implantable microphone configured to be implanted in a recipient; detecting signals with an implantable vibration sensor configured to be implanted in the recipient; adaptively equalizing a response of the implantable microphone to a response of an external microphone, where the adaptation is controlled by a coherence between the signals detected by the implantable microphone and signals detected by the external microphone; and adaptively filtering vibration signals from the signals detected by the implantable microphone, where the adaptation is controlled by a coherence between the signals detected by the implantable microphone and the signals detected by the implantable vibration sensor.
In another aspect, an apparatus is provided. The apparatus comprises: a first implantable sensor configured to capture signals comprising acoustic sounds and vibration signals; a second implantable sensor configured to capture signals comprising at least vibration signals; an implantable sound processing module configured to filter the vibration signals from the signals captured by the first implantable sensor, based on a coherence between the signals captured by the first implantable sensor and the signals captured by the second implantable sensor, to generate output signals; and an implantable stimulator unit configured to generate, based on the output signals, stimulation signals for delivery to a recipient of the apparatus to evoke perception by the recipient of the acoustic sounds.
In another aspect, one or more non-transitory computer readable storage media are provided. The one or more non-transitory computer readable storage media comprise instructions that, when executed by at least one processor, are operable to: obtain signals detected by an implantable microphone configured to be implanted in a recipient; obtain signals detected by an implantable vibration sensor configured to be implanted in the recipient; and adaptively equalize a response of the implantable microphone to a response of an external microphone based on a coherence between the signals detected by the implantable microphone and the signals detected by the external microphone.

In another aspect, one or more non-transitory computer readable storage media are provided. The one or more non-transitory computer readable storage media comprise instructions that, when executed by at least one processor, are operable to: obtain signals detected by an implantable microphone configured to be implanted in a recipient, wherein the signals detected by the implantable microphone comprise acoustic sounds and vibration signals; obtain signals detected by an implantable vibration sensor configured to be implanted in the recipient; and filter the vibration signals from the signals detected by the implantable microphone based on a coherence between the signals detected by the implantable microphone and the signals detected by the implantable vibration sensor.
Embodiments of the present invention are described herein in conjunction with the accompanying drawings, in which:
Presented herein are techniques for regulation/adjustment of filters associated with implantable sensors of an implantable medical device. In particular, the implantable medical device is configured to detect/capture signals with a first implantable sensor configured to be implanted in a recipient and to capture signals with a second implantable sensor configured to be implanted in the recipient. The implantable medical device is configured to adaptively equalize a response of the first implantable sensor to a response of a similar external sensor, wherein adaptation control of the equalization is based on a coherence between the signals detected by the first implantable sensor and the signals detected by the external sensor indicating the presence of acoustic signals. In addition, the implantable medical device is configured to adaptively filter vibration signals, including body noises, from the signals detected by the first implantable sensor, wherein adaptation control of the filter is based on a coherence between the signals detected by the first implantable sensor and the signals detected by the second implantable sensor indicating the presence of vibration.
Merely for ease of description, the techniques presented herein are primarily described with reference to a specific implantable medical device system, namely a cochlear implant system. However, it is to be appreciated that the techniques presented herein may also be implemented by other types of implantable medical devices. For example, the techniques presented herein may be implemented by other auditory prosthesis systems that include one or more other types of auditory prostheses, such as middle ear auditory prostheses, bone conduction devices, direct acoustic stimulators, electro-acoustic prostheses, auditory brain stimulators, etc. The techniques presented herein may also be used with tinnitus therapy devices, vestibular devices (e.g., vestibular implants), visual devices (i.e., bionic eyes), sensors, pacemakers, drug delivery systems, defibrillators, functional electrical stimulation devices, catheters, seizure devices (e.g., devices for monitoring and/or treating epileptic events), sleep apnea devices, electroporation devices, etc.
As noted, cochlear implant system 102 includes an external component 104 that is configured to be directly or indirectly attached to the body of the recipient and an implantable component 112 configured to be implanted in the recipient. In the examples of
In the example of
It is to be appreciated that the OTE sound processing unit 106 is merely illustrative of the external devices that could operate with implantable component 112. For example, in alternative examples, the external component may comprise a behind-the-ear (BTE) sound processing unit or a micro-BTE sound processing unit and a separate external coil assembly. In general, a BTE sound processing unit comprises a housing that is shaped to be worn on the outer ear of the recipient and is connected to the separate external coil assembly via a cable, where the external coil assembly is configured to be magnetically and inductively coupled to the implantable coil 114. It is also to be appreciated that alternative external components could be located in the recipient's ear canal, worn on the body, etc.
As noted above, the cochlear implant system 102 includes the sound processing unit 106 and the cochlear implant 112. However, as described further below, the cochlear implant 112 can operate with the sound processing unit 106 to stimulate the recipient, or the cochlear implant 112 can operate independently from the sound processing unit 106, for at least a period, to stimulate the recipient. For example, the cochlear implant 112 can operate in a first general mode, sometimes referred to as an “external hearing mode,” in which the sound processing unit 106 captures sound signals which are then used as the basis for delivering stimulation signals to the recipient. The cochlear implant 112 can also operate in a second general mode, sometimes referred to as an “invisible hearing” mode, in which the sound processing unit 106 is unable to provide sound signals to the cochlear implant 112 (e.g., the sound processing unit 106 is not present, the sound processing unit 106 is powered-off, the sound processing unit 106 is malfunctioning, etc.). As such, in the invisible hearing mode, the cochlear implant 112 captures sound signals itself via implantable sound sensors and then uses those sound signals as the basis for delivering stimulation signals to the recipient. Further details regarding operation of the cochlear implant 112 in the external hearing mode are provided below, followed by details regarding operation of the cochlear implant 112 in the invisible hearing mode. It is to be appreciated that reference to the external hearing mode and the invisible hearing mode is merely illustrative and that the cochlear implant 112 could also operate in alternative modes.
Referring first to the external hearing mode,
The OTE sound processing unit 106 also comprises the external coil 108, a charging coil 121, a closely-coupled transmitter/receiver 122, sometimes referred to as radio-frequency (RF) transceiver 122, at least one rechargeable battery 123, and an external sound processing module 124. The external sound processing module 124 may comprise, for example, one or more processors and a memory device (memory) that includes sound processing logic. The memory device may comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. The one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the sound processing logic stored in the memory device.
The implantable component 112 comprises an implant body (main module) 134, a lead region 136, and the intra-cochlear stimulating assembly 116, all configured to be implanted under the skin/tissue (tissue) 115 of the recipient. The implant body 134 generally comprises a hermetically-sealed housing 138 in which RF interface circuitry 140 and a stimulator unit 142 are disposed. The implant body 134 also includes the internal/implantable coil 114 that is generally external to the housing 138, but which is connected to the RF interface circuitry 140 via a hermetic feedthrough (not shown in
As noted, stimulating assembly 116 is configured to be at least partially implanted in the recipient's cochlea. Stimulating assembly 116 includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 144 that collectively form a contact or electrode array 146 for delivery of electrical stimulation (current) to the recipient's cochlea.
Stimulating assembly 116 extends through an opening in the recipient's cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to stimulator unit 142 via lead region 136 and a hermetic feedthrough (not shown in
As noted, the cochlear implant system 102 includes the external coil 108 and the implantable coil 114. The external magnet 152 is fixed relative to the external coil 108 and the implantable magnet 152 is fixed relative to the implantable coil 114. The magnets fixed relative to the external coil 108 and the implantable coil 114 facilitate the operational alignment of the external coil 108 with the implantable coil 114. This operational alignment of the coils enables the external component 104 to transmit data and power to the implantable component 112 via a closely-coupled wireless link 131 formed between the external coil 108 and the implantable coil 114. In certain examples, the closely-coupled wireless link 131 is a radio frequency (RF) link. However, various other types of energy transfer, such as infrared (IR), electromagnetic, capacitive and inductive transfer, may be used to transfer the power and/or data from an external component to an implantable component and, as such,
As noted above, sound processing unit 106 includes the external sound processing module 124. The external sound processing module 124 is configured to convert received input signals (received at one or more of the input devices 113) into output signals for use in stimulating a first ear of a recipient (i.e., the external sound processing module 124 is configured to perform sound processing on input signals received at the sound processing unit 106). Stated differently, the one or more processors in the external sound processing module 124 are configured to execute sound processing logic in memory to convert the received input signals into output signals that represent electrical stimulation for delivery to the recipient.
As noted,
Returning to the specific example of
As detailed above, in the external hearing mode the cochlear implant 112 receives processed sound signals from the sound processing unit 106. However, in the invisible hearing mode, the cochlear implant 112 is configured to capture and process sound signals for use in electrically stimulating the recipient's auditory nerve cells. In particular, as shown in
In the invisible hearing mode, the implantable sensors 153 are configured to detect/capture signals (e.g., acoustic sound signals, vibrations, etc.), which are provided to the implantable sound processing module 158. The implantable sound processing module 158 is configured to convert received input signals (received at one or more of the implantable sensors 153) into output signals for use in stimulating the first ear of a recipient (i.e., the processing module 158 is configured to perform sound processing operations). Stated differently, the one or more processors in implantable sound processing module 158 are configured to execute sound processing logic in memory to convert the received input signals into output signals 155 that are provided to the stimulator unit 142. The stimulator unit 142 is configured to utilize the output signals 155 to generate electrical stimulation signals (e.g., current signals) for delivery to the recipient's cochlea, thereby bypassing the absent or defective hair cells that normally transduce acoustic vibrations into neural activity.
It is to be appreciated that the above description of the so-called external hearing mode and the so-called invisible hearing mode are merely illustrative and that the cochlear implant system 102 could operate differently in different embodiments. For example, in one alternative implementation of the external hearing mode, the cochlear implant 112 could use signals captured by the sound input devices 118 and the implantable sensors 153 in generating stimulation signals for delivery to the recipient.
As noted above, the cochlear implant 112 comprises implantable sensors 153. In certain embodiments, the implantable sensors 153 comprise at least two sensors 156 and 160, where at least one of the sensors is designed to be more sensitive to bone-transmitted vibrations than it is to acoustic (air-borne) sound waves. In the illustrative embodiment of
The implantable microphone 156 and the accelerometer 160 can each be disposed in, or electrically connected to, the implant body 134. In operation, the implantable microphone 156 and the accelerometer 160 each detect input signals and convert the detected input signals into electrical signals. The input signals detected by the implantable microphone 156 and the accelerometer 160 can each include external acoustic sounds and/or vibration signals, including body noises.
Implantable sensors, when positioned in the body of a recipient, generally have a different response to input signals than an external sensor positioned outside of the body of the recipient. For example, an implanted subcutaneous microphone (e.g., an implantable microphone implanted in the body of the recipient) may have a different frequency response than a substantially similar microphone positioned outside of the recipient. In particular, the sensitivity to acoustic inputs at each frequency of an implanted subcutaneous microphone is influenced by many factors that are difficult to predict and/or measure, such as skin flap thickness, coupling to underlying bone, implant location in the head, etc. As such, there is a need to “equalize” the sensitivity to acoustic inputs (at each frequency) of an implantable sound sensor (e.g., implanted subcutaneous microphone) to that of a similar external sound sensor (e.g., external microphone). As used herein, “equalizing” the sensitivity to acoustic inputs of an implantable sound sensor refers to application of an equalization filter to the signals detected by the implantable sound sensor, where the equalization filter coefficients are selected so as to make the sensitivity to acoustic inputs at each frequency of the implantable sound signals substantially the same as the sensitivity to acoustic inputs at each frequency of the sound signals generated by the similar external sound sensor.
As described below, presented herein are techniques to equalize the sensitivity to acoustic inputs of an implantable sound sensor to that of a similar external sound sensor based on a coherence between the signals captured by the implantable sound sensor and the signals captured by the external sound sensor. In accordance with certain embodiments presented herein, the filter coefficients are dynamically (e.g., adaptively) updated during normal use (e.g., based on signals received during normal operation of the cochlear implant 112) and the coherence is used to control the rate of change or speed at which the filter coefficients are dynamically updated.
As noted, implantable sensors, when positioned in the body of the recipient, are subject to unwanted vibration, sometimes referred to as “body noises” (e.g., they will capture vibrations of the body). Although some implantable sensors are intended to capture such vibration signals, other sensors, such as implantable microphones, are not. In general, the unwanted vibrations (body noises) can be attenuated through use of a second transducer designed as a pure vibration sensor and a filtering process to remove vibration-based bone conducted sounds. This filtering process is sometimes referred to herein as a vibration/body noise filter or body noise canceller. Stated differently, for implantable sensors that are not intended to capture vibration signals, a body noise cancellation filter (body noise canceller) is applied to the implantable sound signals in order to substantially remove the vibration signals, which include body noises. In the case of an implantable microphone, the removal of the vibration signals leaves substantially only the desired acoustic sound signals.
In addition to body-generated vibration, the use of a contralateral hearing aid with sufficiently high amplification can induce vibration in the head. During the measurement of acoustic sensitivity, this acoustically induced vibration can dominate the response of the microphone at some frequencies, such that the vibration induced response is larger than the direct acoustic response. In normal operation of the device, the body noise canceller successfully removes the induced vibration component, leaving only the direct acoustic component. Therefore, it is good practice to equalize the response after the body noise canceller has removed the induced vibration component from the signal.
As such, also presented herein are techniques that calibrate a body noise filter during normal use (e.g., based on signals received during normal operation of the cochlear implant 112). The techniques presented herein calibrate (update) the body noise filter coefficients based on a coherence between the signals detected by the implantable sound sensor and the signals generated by the implantable vibration sensor. Specifically, the favorable conditions for updating the body noise filter are when only or primarily vibration signals (body noise) are present in the signals detected by the implantable sound sensor and the signals detected by the implantable vibration sensor, as indicated by the coherence between the detected signals. In accordance with certain embodiments presented herein, the coherence between the signals detected by the implantable sound sensor and the signals detected by the implantable vibration sensor is used to control the rate of change or speed at which the body noise filter coefficients are dynamically updated.

As described further below, the cochlear implant 112 is also configured to automatically detect favorable acoustic conditions under which the equalization filter coefficients can be updated. Specifically, the favorable conditions for updating the equalization filter are when only or primarily acoustic signals are present in the signals detected by the implantable sound sensor and the signals detected by the external sound sensor, as indicated by the coherence between the detected signals. In accordance with certain embodiments presented herein, the coherence between the signals detected by the implantable sound sensor and the signals detected by the external sound sensor is used to control the rate of change or speed at which the equalization filter coefficients are dynamically updated.

For both the body noise filter and the equalization filter, the detection mechanism uses magnitude squared coherence as the principal indicator of favorable conditions. This means that the system will update the coefficients of the body noise filter and/or the coefficients of the equalization filter when the corresponding magnitude squared coherence (e.g., the coherence between the signals detected by the implantable sound sensor and the signals detected by the external sound sensor for the equalization filter, or the coherence between the signals detected by the implantable sound sensor and the signals detected by the implantable vibration sensor for the body noise filter) is acceptable during general use of the device.
As shown in
The implantable sound processing module 258 generally comprises two primary filtering sub-modules. The first filtering sub-module in the implantable sound processing module 258 is referred to as the body noise canceller 262. The body noise canceller 262 is used for attenuation of vibration components (body noises) in the input signals Y(f) detected by the implantable microphone. In the example of
The second filtering sub-module in the implantable sound processing module 258 is referred to as the equalization module 264. The equalization module 264 is used to equalize the magnitude response of the internal microphone 256 to that of the external microphone 218 based on a coherence between the signals X(f) detected by the external microphone 218 and the signals Y(f) detected by the implantable microphone 256. The equalization module 264 comprises an equalization gain calculation block 270, variable smoothing block 272, and an equalization filter 274. Further details regarding operation of the equalization module 264 are provided below.
As described further below, the techniques presented herein utilize a measure referred to as the “magnitude squared coherence” to control the adaptation of the body noise canceller 262 and the equalization module 264 so that they adapt under favorable acoustic/vibration conditions. In general, magnitude squared coherence provides a frequency domain measure of how well two signals are correlated with one another. As detailed below, the magnitude squared coherence is calculated from the time-averaged auto-power and cross-power spectrums of the two signals, where the power spectrums are smoothed over time before the coherence is calculated. The coherence at each frequency is a value between zero (0) and one (1), where one represents high coherence and zero represents no coherence, as would be the case with two uncorrelated noise signals.
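By way of illustration only, the following sketch computes a per-frequency-bin magnitude squared coherence from exponentially time-smoothed auto-power and cross-power spectrums, in the manner described above. It is not taken from the original disclosure; the function name, the dictionary-based state, and the choice of smoothing coefficient are assumptions.

```python
import numpy as np

def update_coherence(X, Y, state, alpha=0.01):
    """One frame of magnitude squared coherence, per frequency bin.

    X, Y  : complex FFT frames of the two signals (same length)
    state : dict holding the time-smoothed auto/cross power spectrums
    alpha : smoothing coefficient (small alpha -> long averaging time)

    Returns per-bin coherence values in [0, 1].
    """
    # Instantaneous auto-power and cross-power spectrums (cf. Equations 3-5).
    Pxx = np.conj(X) * X
    Pyy = np.conj(Y) * Y
    Pxy = np.conj(X) * Y

    # First order recursive smoothing over time (cf. Equations 6-8).
    state["Pxx"] = alpha * Pxx + (1 - alpha) * state.get("Pxx", Pxx)
    state["Pyy"] = alpha * Pyy + (1 - alpha) * state.get("Pyy", Pyy)
    state["Pxy"] = alpha * Pxy + (1 - alpha) * state.get("Pxy", Pxy)

    # Magnitude squared coherence: 0 = uncorrelated, 1 = fully coherent.
    eps = 1e-12
    coherence = (np.abs(state["Pxy"]) ** 2) / (np.real(state["Pxx"] * state["Pyy"]) + eps)
    return np.clip(np.real(coherence), 0.0, 1.0)
```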
The calculated coherence value is used to control the update of the body noise canceller 262 and the equalization module 264, more specifically the speed/rate at which the equalization filter coefficients and the body noise filter coefficients are dynamically updated (e.g., adapted). When the coherence is high, the corresponding filter coefficients are updated more quickly. However, when the coherence is low, the corresponding filter coefficients are updated more slowly. The smoothing can be chosen to provide quite slow update for general use, over minutes, hours, or even days. The smoothing can be adjusted to update faster under conditions where the acoustic environment is favorable, such as the user initiating an equalization measurement, or the clinician making a measurement in the clinic.
Shown in
As noted, the equalization module 264 first includes the equalization gain calculation block 270. The equalization gain calculation block 270 is configured to determine, in real-time, equalization filter coefficients 265 (eqGains) from the signals X(f) and the signals Y(f). The equalization module 264 further includes the variable smoothing block 272 that stores previously determined equalization filter coefficients.
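As one hedged illustration of what the equalization gain calculation block 270 might compute (the specific formula below is an assumption and is not stated in the text), the real-time gains could be taken as the per-bin ratio of the external to implantable microphone magnitudes:

```python
import numpy as np

def eq_gains_realtime(X, Y, eps=1e-12):
    """Instantaneous equalization gains, one real value per frequency bin.

    A plausible realization of block 270: the gain that would scale the
    implantable microphone magnitude |Y(f)| to match the external
    microphone magnitude |X(f)| at each frequency.
    """
    return np.abs(X) / (np.abs(Y) + eps)
```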
The variable smoothing block 272 receives the real-time equalization filter coefficients 265 from the equalization gain calculation block 270, as well as the coherence signal CMM generated from the signals X(f) detected by the external microphone 218 and the signals Y(f) detected by the implantable microphone 256. At the variable smoothing block 272, the coherence signal CMM is used to control how the previously determined equalization filter coefficients stored in the variable smoothing block 272 are dynamically updated to match (i.e., adjusted towards) the real-time determined equalization filter coefficients 265. In other words, in certain embodiments, the variable smoothing block 272 is a first order smoothing block in which a sequence of estimates is smoothed over time, where the coherence (CMM) controls the smoothing time (e.g., CMM controls the rate of change of the previously determined equalization filter coefficients).

When the coherence signal CMM is relatively high, the previously determined equalization filter coefficients are updated more quickly to move towards the real-time determined equalization filter coefficients 265. However, when the coherence signal CMM is low, the previously determined equalization filter coefficients are updated more slowly. Stated differently, if the coherence is high, the smoothing coefficient is increased to enable faster updating, while when the coherence is low, the smoothing coefficient is made very small to prevent or limit the updating. The outputs of the variable smoothing block 272 are the updated equalization filter coefficients (eqGains_) 273 that are used by the equalization filter 274 to equalize (filter) the implantable microphone signal Y(f).

The coherence signal CMM can be used in a number of different manners to control the rate of change of the previously determined equalization filter coefficients. As noted, the coherence is a value between 0 and 1, and the rate of change of the coefficients can be an adjustable value (e.g., a sliding-scale value), with a progressively higher rate of change as the coherence approaches a value of 1 and a progressively lower rate of change as the coherence approaches a value of 0.

In certain embodiments, one or more thresholds may be introduced to limit a rate of change of the equalization filter coefficients for a given coherence. For example, if the coherence signal CMM is below a predetermined threshold, the variable smoothing block 272 can be configured to prevent updating of the previously determined equalization filter coefficients.
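A minimal sketch of the variable smoothing behavior described above, assuming a first order update whose per-frame rate is scaled by the coherence CMM and optionally gated by a threshold; the function name, parameter values, and exact scaling are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def variable_smoothing(eq_gains_rt, eq_gains_prev, coherence, beta=0.05, threshold=0.0):
    """Coherence-controlled update of the stored equalization gains.

    eq_gains_rt   : real-time gains from the equalization gain calculation block
    eq_gains_prev : previously determined (stored) gains
    coherence     : per-bin coherence CMM in [0, 1]
    beta          : maximum per-frame update rate (small -> slow drift over time)
    threshold     : optional floor below which no update is allowed

    High coherence -> the stored gains move quickly toward the real-time estimate;
    low coherence (or a missing external signal) -> the stored gains barely change.
    """
    rate = beta * np.where(coherence >= threshold, coherence, 0.0)
    return rate * eq_gains_rt + (1.0 - rate) * eq_gains_prev
```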
As noted, the coherence signal CMM is generated from the signals X(f) detected by the external microphone 218 and the signals Y(f) detected by the implantable microphone 256. However, also as noted above, a cochlear implant, such as cochlear implant 112 or another implantable component, can operate for periods of time without an external component and, as such, without receiving signals X(f) detected by the external microphone 218. In such circumstances, the coherence signal CMM would be 0 or a very low value and, accordingly, the previously determined equalization filter coefficients are not updated.
In summary of the above, the equalization module 264 operates to determine equalization filter coefficients (equalization gains) that, when applied to the signals Y(f) detected by the implantable microphone 256, equalize the sensitivity to acoustic inputs to that of the external microphone 218. As noted, the variable smoothing block 272 is introduced to control the update speed at which the equalization filter coefficients are adapted, where the update is controlled by the coherence CMM. When the coherence CMM is low, or the signals from the external microphone 218 are missing, the eqGains are not updated. However, when the coherence CMM is high, the eqGains are updated more quickly (e.g., faster rate of change is allowed).
It is noted that the eqGains calculation uses the output of the BNC pre-filter 266, and not the output of the BNC filter 268. In this way, the BNC filter 268 can operate normally, providing body noise reduction for the recipient, while the calibration blocks can continue to operate, using a stable yet substantially body-noise-free signal.
As noted above, the body noise canceller 262 includes the BNC pre-filter 266 and the BNC filter 268. In certain embodiments, the BNC filter 268 is an adaptive normalized least mean squares (NLMS) filter in the frequency domain. The speed of adaptation of the BNC filter 268 is controlled by the parameter setting of the regulation block 267. In an alternative embodiment, the speed of adaptation of the BNC filter 268 could also be regulated based on the coherence CMA.
The BNC pre-filter 266 is, in certain embodiments, an adaptive NLMS filter, with a similar structure to that of the BNC filter 268. However, the BNC pre-filter 266 operates as a calibration filter that is essentially fixed and updated only when the conditions are favorable. That is, the coherence signal CMA determined by the body noise coherence block 278 controls the update speed at which the filter coefficients/gains of the BNC pre-filter 266 are adapted. When the coherence signal CMA is low, the filter coefficients/gains of the BNC pre-filter are not updated or are updated more slowly. However, when the coherence signal CMA is high, the filter coefficients/gains of the BNC pre-filter are updated more quickly (e.g., faster rate of change is allowed).
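The following sketch shows one frame of a coherence-gated, per-bin frequency-domain NLMS update of the general kind described for the BNC pre-filter 266. The variable names and the exact manner in which the coherence CMA scales the step size are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def bnc_prefilter_update(W, Y, Z, coherence, mu=0.1, eps=1e-12):
    """One coherence-gated NLMS update of per-bin BNC pre-filter coefficients.

    W         : complex per-bin filter coefficients (microphone/accelerometer transfer estimate)
    Y         : implantable microphone frame (contains acoustics plus vibration)
    Z         : implantable accelerometer frame (vibration reference)
    coherence : per-bin coherence CMA between Y and Z, in [0, 1]
    mu        : base NLMS step size

    Returns the updated coefficients and the vibration-reduced output frame.
    """
    # Error after removing the estimated vibration component from the microphone signal.
    E = Y - W * Z

    # Normalized LMS step, scaled per bin by the coherence so the filter
    # adapts quickly only when vibration dominates both sensors.
    step = mu * coherence / (np.abs(Z) ** 2 + eps)
    W_new = W + step * np.conj(Z) * E
    return W_new, E
```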
As noted, the body noise reduction is split into two parts, the BNC pre-filter 266 and the BNC filter 268. In the example arrangement of
In
In certain embodiments, the BNC pre-filter 266 is a complex transfer function and the coefficients of the BNC pre-filter 266 include both magnitude/amplitude and phase components that are each updated based on the coherence signal CMA. That is, both a magnitude and phase of the coefficients of the BNC pre-filter 266 (first body noise cancellation filter) are adapted based on the coherence between the signals Y(f) detected by the implantable microphone 256 and the signals Z(f) detected by the implantable accelerometer 260. In contrast, the coefficients of the equalization filter can include only the magnitude/amplitude components that are updated based on the coherence signal CMM. That is, only a magnitude of the coefficients of the equalization filter are updated based on the coherence between the signals X(f) detected by the external microphone 218 and the signals Y(f) detected by the implantable microphone 256.
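As a rough illustration of this distinction (a hypothetical helper building on the sketches above), the complex pre-filter coefficients are applied with both magnitude and phase, while the equalization gains are a purely real, magnitude-only scaling:

```python
import numpy as np

def apply_filters(Y, Z, W_bnc, eq_gains):
    """Apply the two filters to one frame of implantable microphone data.

    W_bnc    : complex BNC pre-filter coefficients (magnitude and phase adapted)
    eq_gains : real-valued equalization gains (magnitude only)
    """
    vibration_reduced = Y - W_bnc * Z          # complex subtraction uses the phase
    equalized = eq_gains * vibration_reduced   # real gain scales magnitude only
    return equalized
```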
As noted above, the BNC filter coefficients are updated when vibration input is dominant, while the equalization filter coefficients are updated when an acoustic input is dominant. As a result, the two coherence measures CMM and CMA could be used to inhibit one another. Therefore, in certain examples, only one of the two filters is updated at a given time. This mix may vary across frequency. For example, in one illustrative arrangement, the following rules could be applied:
As noted, the techniques presented herein utilize the magnitude squared coherence to control the rate of change of the filter coefficients for each of the equalization module 264 and the BNC pre-filter 266. The magnitude squared coherence is calculated as shown below in Equation 1.
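One plausible form for Equation 1, consistent with the time-smoothed auto-power and cross-power spectrums described herein (the exact form is an assumption; P̄ denotes a time-smoothed spectrum), is the standard magnitude squared coherence:

C[k,n] = |P̄_{xy}[k,n]|² / ( P̄_{xx}[k,n] P̄_{yy}[k,n] )   Equation 1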
The coherence can be optionally thresholded to completely prevent adaptation when the coherence is low, as shown below in Equation 2.
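One plausible form for Equation 2, where T denotes the predetermined threshold (the exact form is an assumption), is:

C′[k,n] = C[k,n] when C[k,n] ≥ T, and C′[k,n] = 0 otherwise   Equation 2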
In contrast to
As noted,
More specifically,
In the embodiment of
As shown in
The implantable sound processing module 558 generally comprises two primary filtering modules. The first filtering module in the implantable sound processing module 558 is referred to as the body noise canceller 562 (
As described above, the techniques presented herein utilize the magnitude squared coherence to control the adaptation of the body noise canceller 562 and the equalization module 564 so that they adapt under favorable acoustic/vibration conditions, more specifically the speed/rate at which the coefficients of the two filters adapt/update. When the coherence is high, the corresponding filter coefficients are updated more quickly. However, when the coherence is low, the corresponding filter coefficients are updated more slowly. The smoothing can be chosen to provide quite slow update for general use, over minutes, hours, or even days. The smoothing can be adjusted to update faster under conditions where the acoustic environment is favorable, such as the user initiating an equalization measurement, or the clinician making a measurement in the clinic.
Shown in
In
As noted,
In particular, at block 577, the power and cross-power spectrums of Y(k,n) and Z(k,n) are calculated. In general, the auto-power spectrums are calculated as shown in Equations 3 and 4, where * indicates complex conjugate.
P_{xx}[k,n] = X^{*}[k,n] X[k,n]   Equation 3

P_{yy}[k,n] = Y^{*}[k,n] Y[k,n]   Equation 4

And the cross-power spectrum is calculated as shown below in Equation 5.

P_{xy}[k,n] = X^{*}[k,n] Y[k,n]   Equation 5
As shown below in Equations 6 through 8, the power spectrums are then smoothed over time using first order recursive averaging, where P̄ denotes the time-smoothed spectrum and α is the smoothing coefficient.

P̄_{xx}[k,n] = α P_{xx}[k,n] + (1 − α) P̄_{xx}[k,n−1]   Equation 6

P̄_{yy}[k,n] = α P_{yy}[k,n] + (1 − α) P̄_{yy}[k,n−1]   Equation 7

P̄_{xy}[k,n] = α P_{xy}[k,n] + (1 − α) P̄_{xy}[k,n−1]   Equation 8
In
In
In certain embodiments, the coherence can be optionally thresholded to completely prevent adaptation when the coherence is low, as shown below in Equation 11.
Finally, the transfer function is smoothed using first order IIR filter 579, where the smoothing coefficient β is scaled by the coherence such that the filter is updated only when the coherence is high, as shown below in Equation 12. The smoothing coefficient β is chosen so that the filter is updated quite slowly, over minutes, hours, or even days. The smoothing can be adjusted to update faster under conditions where the acoustic environment is favorable, such as the user initiating a measurement, or the clinician initiating a measurement in the clinic.
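One plausible form for Equation 12, with H the instantaneous transfer function estimate, H̄ the stored (smoothed) filter, and βC[k,n] the coherence-scaled smoothing coefficient (the exact arrangement is an assumption), is:

H̄[k,n] = β C[k,n] H[k,n] + (1 − β C[k,n]) H̄[k,n−1]   Equation 12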
As detailed above, the techniques presented herein are generally directed to setting the coefficients (gains) of two filters, namely the body noise filter and the microphone equalization filter. The techniques presented herein are configured to determine favorable conditions for setting each of the filters based on a coherence between the relevant signals, which allows the body noise filter and the microphone equalization filter to dynamically update on the fly during normal operation. The techniques presented herein may provide an automatic and reliable microphone equalization procedure that requires no intervention from the user or clinician. Options are provided to allow a semi-automatic measurement under loosely controlled acoustic conditions, such as providing a stimulus from a smart phone, and to revert to a fully controlled acoustic measurement under calibrated conditions in the sound booth, as is the current clinical practice.
As noted elsewhere herein, embodiments presented herein have been primarily described with reference to an example auditory prosthesis system, namely a cochlear implant system. However, as noted above, it is to be appreciated that the techniques presented herein may be implemented by a variety of other types of implantable medical devices (or systems that include other types of implantable medical devices). For example, the techniques presented herein may be implemented by other auditory prostheses, such as acoustic hearing aids, middle ear auditory prostheses, bone conduction devices, direct acoustic stimulators, electro-acoustic prostheses, other electrically stimulating auditory prostheses (e.g., auditory brain stimulators), etc. The techniques presented herein may also be implemented by tinnitus therapy devices, vestibular devices (e.g., vestibular implants), visual devices (i.e., bionic eyes), sensors, pacemakers, drug delivery systems, defibrillators, functional electrical stimulation devices, catheters, seizure devices (e.g., devices for monitoring and/or treating epileptic events), sleep apnea devices, electroporation devices, etc.
It is to be appreciated that the embodiments presented herein are not mutually exclusive and that the various embodiments may be combined with one another in any of a number of different manners.
The invention described and claimed herein is not to be limited in scope by the specific preferred embodiments herein disclosed, since these embodiments are intended as illustrations, and not limitations, of several aspects of the invention. Any equivalent embodiments are intended to be within the scope of this invention. Indeed, various modifications of the invention in addition to those shown and described herein will become apparent to those skilled in the art from the foregoing description. Such modifications are also intended to fall within the scope of the appended claims.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/IB2021/059454 | 10/14/2021 | WO |
Number | Date | Country | |
---|---|---|---|
63113415 | Nov 2020 | US |