The present invention relates generally to audio training in auditory prosthesis systems.
Hearing loss is a type of sensory impairment that is generally of two types, namely conductive and/or sensorineural. Conductive hearing loss occurs when the normal mechanical pathways of the outer and/or middle ear are impeded, for example, by damage to the ossicular chain or ear canal. Sensorineural hearing loss occurs when there is damage to the inner ear, or to the nerve pathways from the inner ear to the brain.
Individuals who suffer from conductive hearing loss typically have some form of residual hearing because the hair cells in the cochlea are undamaged. As such, individuals suffering from conductive hearing loss typically receive an auditory prosthesis that generates motion of the cochlea fluid. Such auditory prostheses include, for example, acoustic hearing aids, bone conduction devices, and direct acoustic stimulators.
In many people who are profoundly deaf, however, the reason for their deafness is sensorineural hearing loss. Those suffering from some forms of sensorineural hearing loss are unable to derive suitable benefit from auditory prostheses that generate mechanical motion of the cochlea fluid. Such individuals can benefit from implantable auditory prostheses that stimulate nerve cells of the recipient's auditory system in other ways (e.g., electrical, optical and the like). Cochlear implants are often proposed when the sensorineural hearing loss is due to the absence or destruction of the cochlea hair cells, which transduce acoustic signals into nerve impulses. An auditory brainstem stimulator is another type of stimulating auditory prosthesis that might also be proposed when a recipient experiences sensorineural hearing loss due to damage to the auditory nerve.
In one aspect, a method is provided. The method comprises: recording segments of sound signals received at an auditory prosthesis system, wherein the auditory prosthesis system comprises an auditory prosthesis configured to be at least partially implanted in a recipient; detecting one or more sound identification trigger conditions associated with at least one of the segments of sound signals; determining an identity of one or more sounds present in the at least one of the segments of sound signals; and providing the identity of the one or more sounds present in the at least one of the segments of sound signals to the recipient of the auditory prosthesis.
In another aspect, a method is provided. The method comprises: receiving sounds via one or more sound inputs of an auditory prosthesis; generating, based on one or more of the sounds, stimulation signals for delivery to a recipient of the auditory prosthesis to evoke perception of the one or more sounds; determining sound identity information associated with the one or more sounds; and providing the recipient with at least one of an audible or visible descriptor of the sound identity information.
In another aspect, a system is provided. The system comprises: one or more microphones configured to receive sounds; one or more memory devices configured to store instructions for an audio training program; and one or more processors configured to execute the instructions for the audio training program to: determine sound identity information associated with the one or more sounds; and provide the recipient with at least one of an audible or visible representation of the sound identity information.
Embodiments of the present invention are described herein in conjunction with the accompanying drawings, in which:
In a fully functional human ear, the outer ear (auricle) collects sound signals/waves which are channeled into and through the ear canal. Disposed across the distal end of the ear canal is the tympanic membrane (ear drum) which vibrates in response to the sound waves. This vibration is coupled to an opening in the cochlea, known as the oval window, through the bones of the middle ear. The bones of the middle ear serve to filter and amplify the sound waves, which in turn cause the oval window to articulate (vibrate) (e.g., the oval window vibrates in response to vibration of the tympanic membrane). This vibration of the oval window sets up waves of fluid motion of the perilymph within the cochlea. Such fluid motion, in turn, activates thousands of tiny hair cells inside the cochlea. Activation of the hair cells causes the generation of appropriate nerve impulses, which are transferred through the spiral ganglion cells and the auditory nerve to the brain, where they are perceived as sound.
As noted above, sensorineural hearing loss may be due to the absence or destruction of the hair cells in the cochlea. Therefore, individuals with this type of sensorineural hearing loss are often implanted with a cochlear implant or another electrically-stimulating auditory/hearing prosthesis (e.g., electroacoustic hearing prosthesis, etc.) that operates by converting at least a portion of received sound signals into electrical stimulation signals (current signals) for delivery to a recipient's auditory system, thereby bypassing the missing or damaged hair cells of the cochlea.
Due to the use of electrical stimulation and the bypassing of the hair cells in the cochlea (referred to herein as “electrical hearing” or an “electrical pathway”), new recipients of electrically-stimulating auditory prostheses often have difficulty understanding certain (possibly many) sounds. For a recipient who had hearing capabilities before implantation, in particular, sounds that were previously perceived and interpreted as commonplace (e.g., a coffee machine, a bubbling brook, the bark of a dog, etc.) can be misunderstood and confusing when first heard through the electrical pathway.
As a result of the difficulties associated with electrical hearing, electrically-stimulating auditory prosthesis recipients typically undergo extensive habilitation (e.g., intervention for recipients who have never heard before) or rehabilitation (e.g., intervention for recipients who are learning to hear again). For ease of description, “habilitation” and “rehabilitation” are collectively and generally referred to herein as “rehabilitation,” which, again as used herein, refers to a process during which a recipient learns to properly understand/perceive sound signals (sounds) heard via his/her auditory prosthesis.
In conventional arrangements, rehabilitation often occurs within a clinical environment using complex equipment and techniques implemented by trained audiologists/clinicians. However, recipients often do not visit clinics on a regular basis due to, for example, costs, lack of insurance coverage, low availability of trained audiologists, such as in rural areas, etc. Therefore, the need to visit a clinic for all rehabilitation activities may not only be cost prohibitive for certain recipients, but may also require the recipient to live with improper sound perceptions (possibly unknowingly) for significant periods of time.
Accordingly, presented herein are audio training techniques that facilitate the rehabilitation of a recipient of an auditory prosthesis. In certain embodiments, the audio training techniques presented herein may include real time training aspects in which the recipient's surrounding (ambient) auditory environment, including the sounds present therein, is analyzed in real time. The recipient can then be provided with a real time identity (e.g., audible or visible representation/description) of the sounds present in the auditory environment. The identity of the sounds can be provided to the recipient automatically and/or in response to recipient queries. In further embodiments, the audio training techniques presented herein may include non-real time training aspects in which the identities of sounds present in the recipient's auditory environment, along with additional information (e.g., the sounds, sound characteristics, etc.), are logged and used for offline rehabilitation exercises.
Merely for ease of description, the techniques presented herein are primarily described with reference to one illustrative auditory prosthesis, namely a cochlear implant. However, it is to be appreciated that the techniques presented herein may also be used with a variety of other types of auditory prostheses, such as electro-acoustic hearing prostheses, auditory brainstem implants, bimodal auditory prostheses, bilateral auditory prostheses, acoustic hearing aids, bone conduction devices, middle ear auditory prostheses, direct acoustic stimulators, etc. As such, description of the invention with reference to a cochlear implant should not be interpreted as a limitation of the scope of the techniques presented herein.
In this example, the external component 108 comprises a behind-the-ear (BTE) sound processing unit 110, such as a mini or micro-BTE, and an external coil 112. However, it is to be appreciated that this arrangement is merely illustrative and that embodiments presented herein may be implemented with other external component arrangements. For example, in one alternative embodiment, the external component 108 may comprise an off-the-ear (OTE) sound processing unit in which the external coil, microphones, and other elements are integrated into a single housing/unit configured to be worn on the head of the recipient.
In the example of
As shown in
As noted, the cochlear implant system 100 includes an external device 106, further details of which are shown in
The cochlear implant 104 comprises an implant body 114, a lead region 116, and an elongate intra-cochlear stimulating assembly 118. Elongate stimulating assembly 118 is configured to be at least partially implanted in the cochlea of a recipient and includes a plurality of intra-cochlear stimulating contacts 128. The stimulating contacts 128 collectively form a contact array 126 and may comprise electrical contacts and/or optical contacts. Stimulating assembly 118 extends through an opening in the cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to the stimulator unit in implant body 114 via lead region 116 that extends through the recipient's mastoid bone.
Cochlear implant 104 also comprises an internal RF coil 120, a magnet fixed relative to the internal coil, a stimulator unit, and a closely coupled wireless transceiver positioned in the implant body 114. The magnets adjacent to external coil 112 and in the cochlear implant 104 facilitate the operational alignment of the external coil 112 with the internal coil 120 in the implant body. The operational alignment of the coils 112 and 120 enables the internal coil 120 to transcutaneously receive power and data (e.g., the control signals generated based on the sound signals 121) from the external coil 112 over the closely-coupled RF link 130. The external and internal coils 112 and 120 are typically wire antenna coils.
As noted above,
In the example of
In the example of
As shown, external device 106 first comprises an antenna 136 and a telecommunications interface 138 that are configured for communication on a telecommunications network. The telecommunications network over which the antenna 136 and the telecommunications interface 138 communicate may be, for example, a Global System for Mobile Communications (GSM) network, a code division multiple access (CDMA) network, a time division multiple access (TDMA) network, or another kind of network.
External device 106 also includes a wireless local area network interface 140 and a short-range wireless interface/transceiver 142 (e.g., an infrared (IR) or Bluetooth® transceiver). Bluetooth® is a registered trademark owned by the Bluetooth® SIG. The wireless local area network interface 140 allows the external device 106 to connect to the Internet, while the short-range wireless transceiver 142 enables the external device 106 to wirelessly communicate (i.e., directly receive and transmit data to/from another device via a wireless connection), such as over a 2.4 Gigahertz (GHz) link. As described further below, the short-range wireless transceiver 142 is used to wirelessly connect the external device 106 to sound processing unit 110. It is to be appreciated that any other interfaces now known or later developed including, but not limited to, Institute of Electrical and Electronics Engineers (IEEE) 802.11, IEEE 802.16 (WiMAX), fixed line, Long Term Evolution (LTE), etc., may also or alternatively form part of the external device 106.
In the example of
The display screen 150 is an output device, such as a liquid crystal display (LCD), for presentation of visual information to the cochlear implant recipient. The user interface 156 may take many different forms and may include, for example, a keypad, keyboard, mouse, touchscreen, etc. In certain examples, the display screen 150 and user interface 156 may be integrated with one another (e.g., in a touchscreen arrangement in which an input device is layered on the top of an electronic visual display).
Memory device 160 may comprise any one or more of ROM, RAM, magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. The one or more processors 158 are, for example, microprocessors or microcontrollers that execute instructions for the audio streaming application 162 stored in memory device 160.
The closely coupled wireless transceiver 178 is configured to transcutaneously transmit power and/or data to, and/or receive data from, cochlear implant 104 via the closely coupled RF link 130 (
Memory device 184 may comprise any one or more of ROM, RAM, magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. The one or more processors 172 may be one or more microprocessors or microcontrollers that execute instructions for the audio capture logic 186 and the sound processing logic 190 stored in memory device 184.
When executed, the sound processing logic 190 causes the processor 172 to convert sound signals received via, for example, the one or more sound input elements 111 into coded control signals that represent stimulation signals for delivery to the recipient to evoke perception of the sound signals. The control signals are sent/transmitted over the closely coupled RF link 130 to implantable component 104. As noted, the implantable component 104 is configured to use the control signals to generate stimulation signals (e.g., current signals) for delivery to the recipient's cochlea (not shown) via the contact array 126.
As noted, the sound processing unit 110 includes audio capture logic 186, the external device 106 comprises audio streaming logic 162, and the remote computing system 122 includes audio analysis logic 129. Collectively, audio capture logic 186, audio streaming logic 162, and audio analysis logic 129 form an “audio training program” that, as described in greater detail below, can be used for rehabilitation of the recipient of cochlear implant 101. That is, audio capture logic 186, audio streaming logic 162, and audio analysis logic 129 are distributed logic/software components of a program that is configured to perform the techniques presented herein. Merely for ease of illustration, the following description makes reference to the audio training program, the audio capture logic 186, the audio streaming logic 162, and/or the audio analysis logic 129 as performing various operations/functions. Additionally, the following description makes reference to the sound processing unit 110, external device 106, and/or the remote computing system 122 performing various operations. It is to be appreciated that such references refer to the one or more processors 172, 158, and 125 executing associated software instructions to perform the various operations.
In general, the audio training program is configured to monitor the recipient's ambient/surrounding auditory environment (i.e., the current or real-time sound environment experienced by the recipient) and to analyze the sounds present therein. Upon detection of certain sound identification trigger conditions, the audio training program is configured to identify the sounds present within the ambient auditory environment and to provide the recipient with an audible or visible descriptor of the sound identities.
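By way of a purely illustrative, non-limiting example, the following Python sketch outlines this monitor/trigger/identify/describe flow. All function names (capture_segment, trigger_detected, identify_sounds, describe) are hypothetical placeholders and do not correspond to any specific component described herein:

```python
# Minimal sketch of the high-level flow of the audio training program,
# assuming the four helper functions are supplied by the surrounding system.
def audio_training_step(capture_segment, trigger_detected, identify_sounds, describe):
    segment = capture_segment()      # one short recording of the ambient auditory environment
    if trigger_detected(segment):    # e.g., a recipient query or a predetermined trigger sound
        for identity in identify_sounds(segment):
            describe(identity)       # audible or visible descriptor provided to the recipient
```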
More specifically, as noted above and as shown in
The external device 106 is configured to temporarily store/save the recorded sound segments 191 (e.g., in the one or more buffers 163) received from sound processing unit 110. For example, the external device 106 may store recorded sound segments 191 received from the sound processing unit 110 within a previous time period (e.g., store recorded sound segments 191 received within the last one minute, received within the last three minutes, received within the last five minutes, etc.). At 494 of
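Purely as an illustration of this time-bounded buffering, the following hypothetical Python sketch keeps only the segments received within a configurable window (the class and field names are placeholders, not elements of the described system):

```python
import time
from collections import deque

class SegmentStore:
    """Hypothetical rolling store of recorded sound segments (e.g., the last five minutes)."""

    def __init__(self, window_seconds=300):
        self.window_seconds = window_seconds
        self._segments = deque()  # (timestamp, samples) pairs, oldest first

    def add(self, samples, timestamp=None):
        """Store a newly received segment and evict anything outside the window."""
        timestamp = time.time() if timestamp is None else timestamp
        self._segments.append((timestamp, samples))
        while self._segments and timestamp - self._segments[0][0] > self.window_seconds:
            self._segments.popleft()

    def recent(self):
        """Return the segments still inside the time window, oldest first."""
        return list(self._segments)
```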
As described further below, sound identification trigger conditions 495 in accordance with embodiments presented herein can take a number of different forms. In certain embodiments, the one or more sound identification trigger conditions 495 may comprise inputs received from the recipient (e.g., a touch input received via the user interface 156 of the external device 106, a verbal or voice input/command received from the recipient and detected at the sound inputs 111 of sound processing unit 110 and/or detected at microphone 146 of the external device 106, etc.). In other embodiments, the one or more sound identification trigger conditions may comprise the detection of certain (e.g., predetermined) trigger sounds, such as predetermined trigger sounds that are known to confuse new recipients. These specific sound identification trigger conditions are illustrative and further details regarding potential sound identification trigger conditions are provided below.
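A minimal, hypothetical sketch of checking for these two families of trigger conditions (recipient-initiated queries and predetermined trigger sounds) is shown below; the event kinds and the matcher function are assumptions for illustration only:

```python
from collections import namedtuple

UiEvent = namedtuple("UiEvent", "kind")  # hypothetical user-interface event

def trigger_detected(ui_events, segment, matches_trigger_sound):
    """Return True if any sound identification trigger condition is present."""
    # Recipient-initiated query (e.g., a touch input or a voice command).
    if any(event.kind in ("touch_query", "voice_query") for event in ui_events):
        return True
    # Predetermined trigger sound (e.g., a sound known to confuse new recipients).
    return matches_trigger_sound(segment)
```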
Returning to
However, if one or more sound identification trigger conditions 495 are detected by the external device 106, then the method 492 includes two branches. In particular, as shown by arrow 496, method 492 first returns to 493 where the sound processing unit 110 continues to record sound signals and send recorded sound signal segments 191 to the external device 106. However, while the sound processing unit 110 continues to record sound signals, the external device 106 sends at least one of the one or more recorded sound segments 191 stored at external device 106 to the remote computing system 122 via the network connections 117.
The remote computing system 122 is configured to at least temporarily store/save the recorded sound segments 191 (e.g., in the buffers 132). At 497, the remote computing system 122 (e.g., audio analysis logic 129) is configured to analyze the one or more recorded sound segments 191 to identify the sounds present in the recorded sound segments. In general, the audio analysis logic 129 includes or uses a type of decision structure (e.g., machine learning algorithm, decision tree, and/or other structures that operate based on individual extracted characteristics from the recorded sound signals) to “classify” the sounds present within the one or more recorded sound segments 191 into different categories. In general, the classification made by the audio analysis logic 129 generates a “sound identity classification” or, more simply, a “sound identity” for the one or more sounds. As used herein, the “sound identity” of a sound is some form of description of the sound, rather than the sound itself. The sound identity (i.e., the sound description) may describe one or more of a source of the sound (e.g., dog bark, cat meow, car horn, truck engine, etc.), content of the sound (e.g., content of the speech), a type or category of the sound (e.g., language spoken, type of motor, type of noise, type of accent, etc.), characteristics of the sound, the identity of a speaker, and/or other information allowing the recipient to differentiate the sound from other sounds, including speech and non-speech identity information. However, as described further below, the sound identity classification(s) made by the audio analysis logic 129 can take a number of different forms and can adapt/change over time.
As described further below, the audio analysis logic 129 may be executed in a number of different manners to classify the sounds present in the recorded sound segments 191 received from external device 106 (i.e., to generate a sound identity). However, in general, the audio analysis logic 129 is configured to extract sound features from the recorded sound segments 191 (i.e., from the sounds present therein). The extracted features may include, for example, time information, signal levels, frequency, measures regarding the static and/or dynamic nature of the signals, timbre, harmonics, repeatability or the repeat pattern of a sound within a duration, etc. The audio analysis logic 129 is then configured to perform a multi-dimensional classification analysis of the features extracted from the recorded sound signal segment. As a result of these operations, the audio analysis logic 129 outputs “sound identity information,” which includes at least the sound identity classifications for the one or more sounds present in the recorded sound segments 191. The sound identity information is then sent to the external device 106 via the network connections 117.
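For illustration only, the following hypothetical Python sketch shows one way features could be extracted from a recorded segment and passed to a fitted decision structure (e.g., a decision tree or other machine-learning classifier exposing a predict() method); the specific features and names are assumptions, not the actual implementation of the audio analysis logic 129:

```python
import numpy as np

def extract_features(samples, sample_rate=16000):
    """Hypothetical feature extraction: level, spectral, and temporal measures."""
    samples = np.asarray(samples, dtype=float)
    rms = np.sqrt(np.mean(samples ** 2))                              # signal level
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)  # rough spectral/timbre measure
    zero_cross_rate = np.mean(np.abs(np.diff(np.sign(samples)))) / 2  # rough temporal-dynamics measure
    return np.array([rms, centroid, zero_cross_rate])

def classify_segment(samples, decision_structure, labels):
    """Map a recorded segment to a sound identity string (e.g., "dog bark")."""
    features = extract_features(samples).reshape(1, -1)
    class_index = int(decision_structure.predict(features)[0])
    return labels[class_index]
```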
It is to be appreciated that the one or more recorded sound segments 191 classified by the audio analysis logic 129 can include multiple sounds that could be identified, possibly in the presence of background noise. When multiple sounds are present, the audio analysis logic 129 may be configured to identify all of the sounds or only a subset of the sounds. For example, the audio analysis logic 129 can be configured to correlate, in time, a recipient query (i.e., a sound identification trigger condition) with the timing at which sounds in the recorded sound segments 191 are delivered to the recipient. In such examples, the audio analysis logic 129 could identify only those sounds that are delivered to the recipient substantially simultaneously/concurrently with, or within a predetermined time period before, detection of the recipient query.
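A minimal illustration of this time correlation, assuming hypothetical timestamps for when each identified sound was delivered to the recipient, might look as follows:

```python
def sounds_near_query(detected_sounds, query_time, lookback_seconds=10.0):
    """Hypothetical selection of sounds delivered concurrently with, or within a
    predetermined period before, a recipient query.

    detected_sounds: iterable of (delivery_timestamp, identity) pairs
    query_time:      timestamp at which the recipient query was detected
    """
    return [identity for timestamp, identity in detected_sounds
            if query_time - lookback_seconds <= timestamp <= query_time]
```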
As noted above, the one or more recorded sound segments 191 may include background noise. In certain embodiments, the audio analysis logic 129 may be configured to cancel the background noise before generating the sound identity classification(s) (i.e., before analyzing the one or more recorded sound segments with the decision structure(s)). In other embodiments, the audio analysis logic 129 may be configured to identify that the one or more recorded sound segments 191 include background noise and/or to classify/identify the type of background noise.
As noted above, the audio analysis logic 129 is configured to generate the sound identity classification(s) by analyzing features extracted from the recorded sound signals (e.g., analyzing sound features with the decision structure(s)). In accordance with certain embodiments, the audio analysis logic 129 may use “contextual data” to make the sound identity classifications. In certain examples, the contextual data, which may be part of the data sent to the remote computing system 122 by external device 106, may include geographic or location information (e.g., Global Positioning System (GPS) coordinates, Wi-Fi location information), image data (e.g., images captured by the one or more cameras 145 of the external device 106), etc. For example, the location information may indicate that the recipient is at a zoo, beach, etc., which in turn can be used by the audio analysis logic 129 (i.e., in the classification analysis) to improve (e.g., make more accurate) or to speed up the generation of the sound identity classifications. In another example, the audio analysis logic 129 may receive an image of one or more objects or persons in the recipient's auditory environment. In such examples, classification of the objects or persons in the image(s) may be used in making the sound identity classifications, thereby potentially improving the accuracy of the sound identity classifications.
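One simple, purely hypothetical way contextual data could bias the classification is to re-weight classifier scores with context-dependent priors before selecting the final sound identity; the numbers and labels below are illustrative assumptions:

```python
def apply_context_prior(class_scores, context_priors):
    """Re-weight raw classifier scores with contextual priors and pick the best identity.

    class_scores:   dict mapping sound identity -> raw classifier score
    context_priors: dict mapping sound identity -> multiplicative weight for the current
                    context (identities not listed keep a weight of 1.0)
    """
    weighted = {identity: score * context_priors.get(identity, 1.0)
                for identity, score in class_scores.items()}
    return max(weighted, key=weighted.get)

# Example: GPS indicating a zoo makes animal sounds more plausible than machinery.
best_identity = apply_context_prior({"lion roar": 0.40, "truck engine": 0.45},
                                    {"lion roar": 2.0, "truck engine": 0.5})
# best_identity == "lion roar"
```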
Again returning to
In summary,
In the illustrative example of
For ease of illustration, method 492 of
For example, in certain embodiments, the audio training program may be fully implemented at an auditory prosthesis, such as cochlear implant 101. In such embodiments, the auditory prosthesis is configured to: (1) capture and record sound signals, (2) detect the occurrence of one or more sound identification trigger conditions, (3) analyze the recorded sound signals to determine sound identity information for the sounds present therein, and (4) provide the sound identity information to the recipient. That is, in such embodiments, the auditory prosthesis integrates certain functionality of each of the audio capture logic 186, the audio streaming logic 162, and the audio analysis logic 129, as described above.
In other embodiments, the external device may be omitted and the audio training program may be implemented at an auditory prosthesis and a remote computing system. In such embodiments, the auditory prosthesis is configured to: (1) capture and record sound signals, (2) detect the occurrence of one or more sound identification trigger conditions, and (3) send recorded sound segments to the remote computing system. In these embodiments, the remote computing system is configured to analyze the recorded sound signals to determine sound identity information for the sounds present therein and then provide the sound identity information to the auditory prosthesis. The auditory prosthesis is then further configured to provide the sound identity information to the recipient. That is, in such embodiments, the auditory prosthesis integrates certain functionality of each of the audio capture logic 186 and the audio streaming logic 162, as described above, while the audio analysis logic 129 is implemented at the remote computing system.
Provided below are a few example use cases illustrating operation of an audio training program in accordance with certain techniques presented herein. Merely for ease of illustration, these examples will be described with reference to the example arrangement of
In particular, in a first example shown in
In the example of
In another example, the recipient of cochlear implant 101 may be rehabilitating at home and may begin to perceive new sounds as her hearing progresses/improves. For example, she may begin to newly hear/perceive a “humming” sound in her home. As such, in this example the recipient uses the user interface 156 of external device 106 to enter a request for an identification of the sounds in the surrounding environment (e.g., a button press, a touch input at a touchscreen, etc.). In this example, the request entered by the recipient via user interface 156 is a sound identification trigger condition that causes the audio training program to identify the sounds present in the recipient's auditory environment and then provide the recipient with those sound identifications, including an identification of the source of the “humming” sound (e.g., “You are hearing the humming of a refrigerator.”).
In yet another example, the recipient of cochlear implant 101 may put some food in a microwave, but she may not perceive the “beep” sound when the food is ready (e.g., the “beep” will sound different to her post-implantation than the equivalent sound did prior to implantation). In such examples, the audio training program could automatically detect the “beep” sound and provide the recipient with an alert message via the external device 106 and/or the cochlear implant 100 informing the recipient that the food is ready (e.g., an audible or visible “Your food is ready” message).
In the above example, the “beep” is a sound identification trigger condition that can be automatically detected by the audio training program through monitoring of the auditory environment for predetermined trigger words, sounds, sound characteristics, etc. In such examples, the recorded sound segments may be streamed continuously to the cloud, with sound identifications likewise being streamed back to the external device 106. The audio training program can then automatically trigger the alert message to the recipient.
It is to be appreciated that similar techniques (i.e., continuous streaming to the cloud) may be used to automatically detect other sounds and to trigger automatic sound identifications. For example, the audio training program may be configured to automatically detect and identify other ordinary everyday sounds (e.g., ‘door closing’, ‘door opening’, ‘toilet flushing’, etc.) that the recipient has difficulty associating with specific events. In the same or other embodiments, the audio training program may be configured to automatically detect and identify certain danger sounds (e.g., smoke/fire alarm, angry dog, etc.) and/or sounds with certain characteristics (e.g., the siren of emergency services, such as an ambulance, fire, or police vehicle, an approaching thunderstorm, a jet aircraft flying in the sky, the sound of an ice-cream van/truck, etc.).
In accordance with the techniques presented herein, the recipient, clinician, or other user may have flexibility as to how to use the audio training program. For example, a user may configure the audio training program to provide sound identifications automatically based on predetermined criteria and/or to provide sound identifications on demand (e.g., in response to user queries).
In the above examples, the recipient is generally provided with an audible or visible descriptor associated with the identity of the sounds within the auditory environment. It is to be appreciated that, in accordance with certain embodiments presented herein, the identity of the sounds may be accompanied by information identifying a location/direction associated with the one or more sounds. In such embodiments, the location information, sometimes referred to as a location description, indicates the location(s) of the source(s) of the sounds relative to the recipient. For example, if multiple microphones are present (e.g., two microphones at the sound processing unit, microphones on both the sound processing unit and the external device, etc.), the audio training program could indicate not just the sound but the direction of the sound. In certain such examples, the information provided to the recipient includes both identity and location information in an audible form (e.g., “A door to your left is opening”). In other such examples, the identity and location information could be provided to the recipient in a visible form (e.g., the user interface 156 displays a “door” symbol/representation, along with an arrow indicating the direction of the opening door). In still other such examples, the identity information could be provided to the recipient in an audible form (e.g., “A door is opening”), while the location information is provided in a visible form (with an arrow at the user interface 156 indicating the direction of the opening door). It is to be appreciated that other techniques for providing the identity and location information could also be used in different embodiments presented herein.
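As a purely illustrative sketch of how a two-microphone arrangement could yield such a direction estimate, the following hypothetical Python example uses the time difference of arrival between the microphones (the spacing, sign convention, and names are assumptions, and this is not presented as the actual direction-finding method of the described system):

```python
import numpy as np

def estimate_direction(mic_a, mic_b, sample_rate=16000,
                       mic_spacing_m=0.15, speed_of_sound=343.0):
    """Hypothetical left/right estimate from the time difference of arrival (TDOA)."""
    a = np.asarray(mic_a, dtype=float)
    b = np.asarray(mic_b, dtype=float)
    corr = np.correlate(a, b, mode="full")
    lag = int(np.argmax(corr)) - (len(b) - 1)       # sample offset between the two channels
    tdoa = lag / sample_rate
    # Clamp to the physically possible range before solving for the bearing angle.
    ratio = np.clip(tdoa * speed_of_sound / mic_spacing_m, -1.0, 1.0)
    angle_deg = float(np.degrees(np.arcsin(ratio)))
    # The left/right sign convention depends on which microphone is which.
    return ("left" if angle_deg < 0 else "right"), angle_deg
```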
In certain examples, the external device 106 and/or the sound processing unit 110 can provide the identifications intermingled with replays of the sounds. For example, when providing the recipient with identity information obtained from recorded sound signals, the sound processing unit 110 could generate control signals that cause implantable component 104 to stimulate the recipient in a manner that causes the recipient to perceive: “You are hearing a bubbling brook [replay of recorded bubbling brook sound], a dog barking [replay of recorded barking dog sound], and a bird chirping [replay of chirping bird sound].” Alternatively, the external device 106 could generate a sequence of text and/or images that conveys similar information to the recipient.
As noted above, the sound identity information provided to the external device 106, which is then provided to the recipient, includes the sound identity classifications for the one or more sounds present in the recorded sound segments 191. In accordance with certain embodiments presented herein, the sound identity classifications and, more generally, the sound identity information generated by the audio analysis logic 129 and provided to the recipient, can change/adapt over time. That is, the audio training program may implement an adaptive learning process that, over time, increases the amount of identity information provided to the recipient (e.g., the classifications made by the audio analysis logic 129 change over time to adapt the information that can be provided to the recipient).
More specifically, when the recipient's cochlear implant 101 is first activated/switched on, she may have difficulty understanding many sounds. As such, the audio training program may initially only provide the recipient with basic identity information (e.g., “You are hearing a dog barking,” “You are hearing a motor vehicle,” etc.). However, the ability to discriminate between different sounds (e.g., different breeds of dogs, different accents, different types of vehicular sounds, etc.) can be important for proper sound perception and learning. Therefore, in accordance with certain embodiments presented herein, as the recipient's perception improves the audio training program may adapt, in terms of specificity, the identity information provided to the recipient. Additionally, as the recipient's perception improves, the audio training program may adapt the types or amount of descriptive information provided to the recipient. To facilitate understanding of these embodiments, several example adaptions that may be implemented by the audio training program are provided below.
In one example, the recipient initially has trouble understanding the sound of a dog barking. As such, the initial identity information provided to the recipient may indicate: “You are hearing a dog barking.” Over time, the recipient's perception improves and the audio training program increases the specificity of the information provided to the recipient. In particular, after a first level of adaption, when a dog bark is detected the identity information provided to the recipient may indicate: “You are hearing a large dog barking.” As the recipient's perception further improves, the audio training program again increases the specificity of the information provided to the recipient. In particular, after a second level of adaption, when a dog bark is detected the identity information provided to the recipient may indicate: “You are hearing a German shepherd barking.”
In another example, the recipient initially has trouble understanding certain speakers. As such, the initial identity information provided to the recipient may indicate: “You are hearing a speaker with a foreign accent.” Over time, and after a first level of adaption, when a foreign accent is detected the identity information provided to the recipient may indicate: “You are hearing a speaker with a Chinese accent.” As the recipient's perception further improves, the audio training program again increases the specificity of the information provided to the recipient. In particular, after a second level of adaption, when a foreign accent is detected the identity information provided to the recipient may indicate: “You are hearing a child speaking with a Chinese accent.”
In another example, the recipient initially has trouble perceiving vehicular noises. As such, the initial identity information provided to the recipient may indicate: “You are hearing a motor vehicle.” Over time, and after a first level of adaption, when a motor vehicle is detected the identity information provided to the recipient may indicate: “You are hearing a truck engine.” As the recipient's perception further improves, the audio training program again increases the specificity of the information provided to the recipient. In particular, after a second level of adaption, when a motor vehicle is detected the identity information provided to the recipient may indicate: “You are hearing a diesel truck engine.”
As noted, in general, the adaptions to the sound identity information would occur as the recipient's perception improves. The audio training program may determine when to make the adaptions (e.g., increase the amount of information provided to the recipient) in a number of different manners. In certain examples, the recipient, clinician, or other user may manually initiate the adaption changes. In other examples, the audio training program may initiate the adaptions after certain time periods (e.g., increase the amount of information provided after two weeks with the implant, increase the amount of information provided again after four weeks with the implant, and so on). In still other embodiments, the audio training program can monitor the recipient's queries for information (e.g., in terms of the number of queries initiated, the sounds associated with the queries, etc.), and use this information to initiate the adaptions.
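A minimal, hypothetical sketch of the adaption behavior described in the preceding examples is shown below; the hierarchy contents, level thresholds, and names are illustrative assumptions rather than the actual adaption logic:

```python
# Hypothetical specificity hierarchy: index 0 is the most general description.
SOUND_HIERARCHY = {
    "dog_bark": ["a dog barking", "a large dog barking", "a German shepherd barking"],
    "motor":    ["a motor vehicle", "a truck engine", "a diesel truck engine"],
}

def adaption_level(weeks_since_activation, thresholds=(2, 4)):
    """Time-based adaption: one additional level of specificity per threshold passed."""
    return sum(weeks_since_activation >= t for t in thresholds)

def describe(sound_key, level):
    """Return the descriptor for the current adaption level, capped at the most specific."""
    descriptions = SOUND_HIERARCHY[sound_key]
    return "You are hearing " + descriptions[min(level, len(descriptions) - 1)] + "."

# Example: five weeks after activation, a detected dog bark yields the most specific label.
print(describe("dog_bark", adaption_level(5)))  # "You are hearing a German shepherd barking."
```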
As noted above, the audio training techniques presented herein may also include non-real time training aspects. Further details regarding example non-real time training aspects are provided below, again with reference to the arrangement of
In certain examples, the audio training program is configured to store/log, over time, sounds that are detected in the recipient's auditory environment. The audio training program (e.g., the external device 106, remote computing system 122, etc.) can log sounds in response to the detection of one or more “sound logging” trigger conditions. As used herein, a sound logging trigger condition is a detectable event, condition, or action indicating that at least the identity of the sounds in one or more of the recorded sound segments 191 should be logged for the recipient.
As described further below, sound logging conditions in accordance with embodiments presented herein can take a number of different forms. In certain embodiments, the one or more sound logging trigger conditions may be the same as certain sound identification trigger conditions 495, described above. That is, the sound logging trigger conditions may comprise inputs received from the recipient (e.g., a touch input received via the user interface 156 of the external device 106, a verbal or voice input received from the recipient and detected at the microphone 146 of the external device 106, etc.). In other words, in certain embodiments, the sound logging occurs when the recipient asks the audio training program to identify a sound. It is to be appreciated that these specific sound logging trigger conditions are illustrative.
When a sound logging condition is detected, the audio training program is configured to store the identity of the sounds present in the one or more of the recorded sound segments 191 that are associated with a sound logging condition. As used herein, a recorded sound segment 191 is associated with a sound logging condition when it is received around the same time as a sound logging condition is detected (e.g., immediately prior to the detection of a sound logging condition). Over time, the audio training program generates/populates an “identified sound database” (i.e., the log of the sound identifications/classifications over time).
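For illustration only, a hypothetical sketch of such a log (an append-only record per identified sound, populating the identified sound database over time) might look as follows; the file format and field names are assumptions:

```python
import json
import time

def log_identified_sound(db_path, identity, segment_timestamp, extra=None):
    """Append one record of an identified sound to a simple line-delimited JSON log."""
    record = {
        "identity": identity,                     # the sound identity classification
        "segment_timestamp": segment_timestamp,   # when the associated segment was recorded
        "logged_at": time.time(),                 # when the logging condition was detected
        "extra": extra or {},                     # optional additional sound information
    }
    with open(db_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```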
In the example of
As noted above, the sound logging may occur when the recipient asks the audio training program to identify sounds (e.g., the sound logging occurs in response to the detection of a recipient-initiated sound identification trigger condition). Therefore, the identified sound database 131 represents the identity of the sounds that the recipient had difficulty understanding/perceiving in the auditory environment. As such, as the identified sound database 131 is populated, the database may be analyzed to generate a profile of, for example, identified sounds, sound characteristics, sound combinations, etc. that the recipient is repeatedly or continually having trouble perceiving correctly. The identified sounds, sound characteristics, sound combinations, etc. that the recipient is repeatedly or continually having difficulty perceiving correctly are collectively and generally referred to as “difficult sound information.”
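As a purely illustrative companion to the logging sketch above, the following hypothetical analysis flags identities that appear repeatedly in the log as difficult sound information (the record format and threshold are assumptions):

```python
import json
from collections import Counter

def difficult_sound_profile(db_path, min_occurrences=3):
    """Return identities the recipient has repeatedly queried, with their counts."""
    counts = Counter()
    with open(db_path, encoding="utf-8") as f:
        for line in f:
            counts[json.loads(line)["identity"]] += 1
    return {identity: n for identity, n in counts.items() if n >= min_occurrences}
```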
As noted above, the difficult sound information includes the identities of the sounds present in the one or more of the recorded sound segments 191 that are associated with a sound logging condition. In certain embodiments, the difficult sound information may include additional information related to the sounds (i.e., information other than the identities of the sounds). This additional sound information may include the identified sounds (e.g., a recording segment of the sound(s) that triggered the logging), time information (e.g., time stamps) that indicate, for example, a time-of-day (ToD) and/or date when a sound was detected, signal levels, frequency, measures regarding the static and/or dynamic nature of the signals, a classification of the type of sound environment in which the sound was detected (e.g., a “speech,” “speech-in-noise,” “quiet” environment, etc.).
As described further below, the difficult sound information stored in sound identity database 131 can be used in a number of different manners for rehabilitation of the recipient. In certain embodiments, the difficult sound information can be analyzed and used to suggest changes/adjustments to the operational settings of the cochlear implant 101. In such embodiments, the analysis of the difficult sound information stored in sound identity database 131 can indicate that the recipient is having trouble understanding certain sounds. Therefore, the audio training program can recommend (e.g., to the recipient, caregiver, clinician, etc.) setting changes to the cochlear implant 101 or, in certain examples, automatically institute changes to the settings of cochlear implant 101.
In similar manners, the difficult sound information stored in sound identity database 131 can be used in a clinical setting to make adjustments/changes to the operational settings of the cochlear implant 101. In such embodiments, a clinician may have access to the difficult sound information stored in sound identity database 131 and determine one or more sound perception trends that can be corrected/remediated through setting changes.
In certain embodiments, the difficult sound information stored in sound identity database 131 can be used to generate rehabilitation exercises for the recipient. In such embodiments, the analysis of the difficult sound information stored in sound identity database 131 can indicate that the recipient is having trouble understanding certain sounds. As such, the audio training program may be configured to implement a process in which the cochlear implant 101 delivers a sound (e.g., recorded sound segment) to the recipient, along with a visible or audible identification of the sound (e.g., the delivered sound is preceded or followed by an audible identification of the sound, an image of the sound source is displayed at the external device 106 while the sound is delivered to the recipient, etc.).
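One minimal, hypothetical way to drive such an exercise from the difficult sound information is sketched below; the playback and display functions are placeholders for the delivery mechanisms described above:

```python
def run_rehab_exercise(difficult_sounds, play_segment, show_descriptor):
    """Replay each difficult sound alongside an identification of that sound."""
    for identity, recorded_segment in difficult_sounds:
        show_descriptor(f"You are hearing {identity}.")  # visible or audible identification
        play_segment(recorded_segment)                   # deliver the recorded sound to the recipient
```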
The rehabilitation can be static and/or dynamic. In certain arrangements, the system can use the types of queries and/or the frequency of similar queries raised by the user, along with some background data gathering, to suggest that the user go to a place or venue (e.g., a café) to experience certain sound identities in person (e.g., a person who does not know what an ice-cream van sounds like may be instructed to go to a public park). For example, based on a specific query, the system would deliver a recorded sound along with a visible identification to the user. At the same time, the system would save that query and wait to create an opportunity for the user to experience the sound identity in person at a subsequent time. Based on real time data feeds (e.g., a community WhatsApp group), the system may determine that an ice-cream van will be, or is, at a nearby park for a festival. As such, the system would create a live rehabilitation exercise by recommending that the person go to the park to hear the ice-cream van in reality.
In certain examples, the rehabilitation exercises may be performed “offline,” meaning at times that are convenient for the recipient and enable the recipient to more quickly learn to perceive difficult sounds. The recipient of cochlear implant 101 could initiate the rehabilitation exercises, for example, from the user interface 156 of the external device 106.
Although the above examples illustrate the performance of the rehabilitation exercises in response to difficult sound information, it is to be appreciated that the audio training techniques presented herein may also facilitate targeted or real time training. In certain embodiments, a recipient may desire to quickly perceive one or more predetermined sounds. In such examples, the predetermined sounds may be used to trigger real time rehabilitation training (i.e., rehabilitation training that occurs immediately following the detection of the predetermined sounds).
For example, a recipient may want to quickly learn to distinguish the sound of a dog barking from other sounds. Therefore, in such an example, each time that the audio training program detects a dog barking (at least initially), the audio training program can provide an indication to the recipient noting that the sound she just heard was a “dog barking.”
It is to be appreciated that the embodiments presented herein are not mutually exclusive.
The invention described and claimed herein is not to be limited in scope by the specific preferred embodiments herein disclosed, since these embodiments are intended as illustrations, and not limitations, of several aspects of the invention. Any equivalent embodiments are intended to be within the scope of this invention. Indeed, various modifications of the invention in addition to those shown and described herein will become apparent to those skilled in the art from the foregoing description. Such modifications are also intended to fall within the scope of the appended claims.
Related application data: 62876825, Jul 2019, US; Parent 17625017, Jan 2022, US; Child 18533575, US.