The present invention relates to methods and apparatus for processing and/or enhancing audio signals for transmitting these signals as vibrations through teeth or bone structures in and/or around a mouth. More particularly, the present invention relates to methods and apparatus for receiving audio signals and processing them to enhance their quality and/or to emulate various auditory features for transmitting these signals via sound conduction through teeth or bone structures in and/or around the mouth such that the transmitted signals correlate to auditory signals received by a user.
Hearing loss affects over 31 million people in the United States (about 13% of the population). As a chronic condition, the incidence of hearing impairment rivals that of heart disease and, like heart disease, the incidence of hearing impairment increases sharply with age.
While the vast majority of those with hearing loss can be helped by a well-fitted, high quality hearing device, only 22% of the total hearing impaired population own hearing devices. Current products and distribution methods are not able to satisfy or reach over 20 million persons with hearing impairment in the U.S. alone.
Hearing loss adversely affects a person's quality of life and psychological well-being. Individuals with hearing impairment often withdraw from social interactions to avoid frustrations resulting from inability to understand conversations. Recent studies have shown that hearing impairment causes increased stress levels, reduced self-confidence, reduced sociability and reduced effectiveness in the workplace.
The human ear generally comprises three regions: the outer ear, the middle ear, and the inner ear. The outer ear generally comprises the external auricle and the ear canal, which is a tubular pathway through which sound reaches the middle ear. The outer ear is separated from the middle ear by the tympanic membrane (eardrum). The middle ear generally comprises three small bones, known as the ossicles, which form a mechanical conductor from the tympanic membrane to the inner ear. Finally, the inner ear includes the cochlea, which is a fluid-filled structure that contains a large number of delicate sensory hair cells that are connected to the auditory nerve.
Hearing loss can be classified as conductive, sensorineural, or a combination of both. Conductive hearing impairment typically results from diseases or disorders that limit the transmission of sound through the middle ear. Most conductive impairments can be treated medically or surgically. Purely conductive hearing loss represents a relatively small portion of the total hearing impaired population (estimated at less than 5% of the total hearing impaired population).
Sensorineural hearing losses occur mostly in the inner ear and account for the vast majority of hearing impairment (estimated at 90-95% of the total hearing impaired population). Sensorineural hearing impairment (sometimes called “nerve loss”) is largely caused by damage to the sensory hair cells inside the cochlea. Sensorineural hearing impairment occurs naturally as a result of aging or prolonged exposure to loud music and noise. This type of hearing loss cannot be reversed nor can it be medically or surgically treated; however, the use of properly fitted hearing devices can improve the individual's quality of life.
Conventional hearing devices are the most common devices used to treat mild to severe sensorineural hearing impairment. These are acoustic devices that amplify sound to the tympanic membrane. These devices are individually customizable to the patient's physical and acoustical characteristics over four to six separate visits to an audiologist or hearing instrument specialist. Such devices generally comprise a microphone, amplifier, battery, and speaker. Recently, hearing device manufacturers have increased the sophistication of sound processing, often using digital technology, to provide features such as programmability and multi-band compression. Although these devices have been miniaturized and are less obtrusive, they are still visible and have major acoustic limitations.
Industry research has shown that the primary obstacles for not purchasing a hearing device generally include: a) the stigma associated with wearing a hearing device; b) dissenting attitudes on the part of the medical profession, particularly ENT physicians; c) product value issues related to perceived performance problems; d) general lack of information and education at the consumer and physician level; and e) negative word-of-mouth from dissatisfied users.
Other devices such as cochlear implants have been developed for people who have severe to profound hearing loss and are essentially deaf (approximately 2% of the total hearing impaired population). The electrode of a cochlear implant is inserted into the inner ear in an invasive and non-reversible surgery. The implant electrically stimulates the auditory nerve through an electrode array that provides audible cues to the user, which are not usually interpreted by the brain as normal sound. Users generally require intensive and extended counseling and training following surgery to achieve the expected benefit.
Other devices such as electronic middle ear implants are generally placed surgically within the middle ear of the hearing impaired and are used together with an externally worn component.
The manufacture, fitting and dispensing of hearing devices remain an arcane and inefficient process. Most hearing devices are custom manufactured, fabricated by the manufacturer to fit the ear of each prospective purchaser. An impression of the ear canal is taken by the dispenser (either an audiologist or licensed hearing instrument specialist) and mailed to the manufacturer for interpretation and fabrication of the custom molded rigid plastic casing. Hand-wired electronics and transducers (microphone and speaker) are then placed inside the casing, and the final product is shipped back to the dispensing professional after some period of time, typically one to two weeks.
The time cycle for dispensing a hearing device, from the first diagnostic session to the final fine-tuning session, typically spans several weeks, such as six to eight weeks, and involves multiple visits with the dispenser.
Moreover, typical hearing aid devices fail to eliminate background noises or fail to distinguish between background noise and desired sounds. Accordingly, there exists a need for methods and apparatus for receiving audio signals and processing them to enhance their quality and/or to emulate various auditory features for transmitting these signals via sound conduction through teeth or bone structures in and/or around the mouth for facilitating the treatment of hearing loss in patients.
An electronic and transducer device may be attached, adhered, or otherwise embedded into or upon a removable dental or oral appliance to form a hearing aid assembly. Such a removable oral appliance may be a custom-made device fabricated from a thermal forming process utilizing a replicate model of a dental structure obtained by conventional dental impression methods. The electronic and transducer assembly may receive incoming sounds either directly or through a receiver to process and amplify the signals and transmit the processed sounds via a vibrating transducer element coupled to a tooth or other bone structure, such as the maxillary, mandibular, or palatine bone structure.
The assembly for transmitting vibrations via at least one tooth may generally comprise a housing having a shape which is conformable to at least a portion of the at least one tooth, and an actuatable transducer disposed within or upon the housing and in vibratory communication with a surface of the at least one tooth. Moreover, the transducer itself may be a separate assembly from the electronics and may be positioned along another surface of the tooth, such as the occlusal surface, or even attached to an implanted post or screw embedded into the underlying bone.
In receiving and processing the various audio signals typically received by a user, various configurations of the oral appliance and processing of the received audio signals may be utilized to enhance and/or optimize the conducted vibrations which are transmitted to the user. For instance, in configurations where one or more microphones are positioned within the user's mouth, filtering features such as Acoustic Echo Cancellation (AEC) may be optionally utilized to eliminate or mitigate undesired sounds received by the microphones. In such a configuration, at least two intra-buccal microphones may be utilized to separate out desired sounds (e.g., sounds received from outside the body such as speech, music, etc.) from undesirable sounds (e.g., sounds resulting from chewing, swallowing, breathing, self-speech, teeth grinding, etc.).
If these undesirable sounds are not filtered or cancelled, they may be amplified along with the desired audio signals making for potentially unintelligible audio quality for the user. Additionally, desired audio sounds may be generally received at relatively lower sound pressure levels because such signals are more likely to be generated at a distance from the user and may have to pass through the cheek of the user while the undesired sounds are more likely to be generated locally within the oral cavity of the user. Samples of the undesired sounds may be compared against desired sounds to eliminate or mitigate the undesired sounds prior to actuating the one or more transducers to vibrate only the resulting desired sounds to the user.
Independent from or in combination with acoustic echo cancellation, another processing feature for the oral appliance may include use of a multiband actuation system to facilitate the efficiency with which audio signals may be conducted to the user. Rather than utilizing a single transducer to cover the entire range of the frequency spectrum (e.g., 200 Hz to 10,000 Hz), one variation may utilize two or more transducers where each transducer is utilized to deliver sounds within certain frequencies. For instance, a first transducer may be utilized to deliver sounds in the 200 Hz to 2000 Hz frequency range and a second transducer may be used to deliver sounds in the 2000 Hz to 10,000 Hz frequency range. Alternatively, these frequency ranges may be discrete or overlapping. As individual transducers may be configured to handle only a subset of the frequency spectrum, the transducers may be more efficient in their design.
Yet another process which may utilize the multiple transducers may include the utilization of directionality via the conducted vibrations to emulate the directional perception of audio signals received by the user. In one example for providing the perception of directionality with an oral appliance, two or more transducers may be positioned apart from one another along respective retaining portions. One transducer may be actuated corresponding to an audio signal while the other transducer may be actuated corresponding to the same audio signal but with a phase and/or amplitude and/or delay difference intentionally induced corresponding to a direction emulated for the user. Generally, upon receiving a directional audio signal and depending upon the direction to be emulated and the separation between the respective transducers, a particular phase and/or gain and/or delay change to the audio signal may be applied to the respective transducer while leaving the other transducer to receive the audio signal unchanged.
Another feature which may utilize the oral appliance and processing capabilities may include the ability to vibrationally conduct ancillary audio signals to the user, e.g., the oral appliance may be configured to wirelessly receive and conduct signals from secondary audio sources to the user. Examples may include the transmission of an alarm signal which only the user may hear or music conducted to the user in public locations, etc. The user may thus enjoy privacy in receiving these ancillary signals while also being able to listen and/or converse in an environment where a primary audio signal is desired.
An electronic and transducer device may be attached, adhered, or otherwise embedded into or upon a removable oral appliance or other oral device to form a hearing aid assembly. Such an oral appliance may be a custom-made device fabricated from a thermal forming process utilizing a replicate model of a dental structure obtained by conventional dental impression methods. The electronic and transducer assembly may receive incoming sounds either directly or through a receiver to process and amplify the signals and transmit the processed sounds via a vibrating transducer element coupled to a tooth or other bone structure, such as the maxillary, mandibular, or palatine bone structure.
As shown in
Generally, the volume of electronics and/or transducer assembly 16 may be minimized so as to be unobtrusive and comfortable to the user when placed in the mouth. Although the size may be varied, a volume of assembly 16 may be less than 800 cubic millimeters. This volume is, of course, illustrative and not limiting, as the size and volume of assembly 16 may be varied accordingly between different users.
Moreover, removable oral appliance 18 may be fabricated from various polymeric or a combination of polymeric and metallic materials using any number of methods, such as computer-aided machining processes using computer numerical control (CNC) systems or three-dimensional printing processes, e.g., stereolithography apparatus (SLA), selective laser sintering (SLS), and/or other similar processes utilizing three-dimensional geometry of the patient's dentition, which may be obtained via any number of techniques. Such techniques may include use of scanned dentition using intra-oral scanners such as laser, white light, ultrasound, mechanical three-dimensional touch scanners, magnetic resonance imaging (MRI), computed tomography (CT), other optical methods, etc.
In forming the removable oral appliance 18, the appliance 18 may be optionally formed such that it is molded to fit over the dentition and at least a portion of the adjacent gingival tissue to inhibit the entry of food, fluids, and other debris into the oral appliance 18 and between the transducer assembly and tooth surface. Moreover, the greater surface area of the oral appliance 18 may facilitate the placement and configuration of the assembly 16 onto the appliance 18.
Additionally, the removable oral appliance 18 may be optionally fabricated to have a shrinkage factor such that when placed onto the dentition, oral appliance 18 may be configured to securely grab onto the tooth or teeth as the appliance 18 may have a resulting size slightly smaller than the scanned tooth or teeth upon which the appliance 18 was formed. The fitting may result in a secure interference fit between the appliance 18 and underlying dentition.
In one variation, with assembly 14 positioned upon the teeth, as shown in
The transmitter assembly 22, as described in further detail below, may contain a microphone assembly as well as a transmitter assembly and may be configured in any number of shapes and forms to be worn by the user, such as a watch, necklace, lapel, phone, belt-mounted device, etc.
With respect to microphone 30, any of a variety of microphone systems may be utilized. For instance, microphone 30 may be a digital, analog, and/or directional type microphone. Such various types of microphones may be interchangeably configured to be utilized with the assembly, if so desired. Moreover, various configurations and methods for utilizing multiple microphones within the user's mouth may also be utilized, as further described below.
Power supply 36 may be connected to each of the components in transmitter assembly 22 to provide power thereto. The transmitter signals 24 may be in any wireless form utilizing, e.g., radio frequency, ultrasound, microwave, Bluetooth® (BLUETOOTH SIG, INC., Bellevue, Wash.), etc. for transmission to assembly 16. Assembly 22 may also optionally include one or more input controls 28 that a user may manipulate to adjust various acoustic parameters of the electronics and/or transducer assembly 16, such as acoustic focusing, volume control, filtration, muting, frequency optimization, sound adjustments, tone adjustments, etc.
The signals transmitted 24 by transmitter 34 may be received by electronics and/or transducer assembly 16 via receiver 38, which may be connected to an internal processor for additional processing of the received signals. The received signals may be communicated to transducer 40, which may vibrate correspondingly against a surface of the tooth to conduct the vibratory signals through the tooth and bone and subsequently to the middle ear to facilitate hearing of the user. Transducer 40 may be configured as any number of different vibratory mechanisms. For instance, in one variation, transducer 40 may be an electromagnetically actuated transducer. In other variations, transducer 40 may be in the form of a piezoelectric crystal having a range of vibratory frequencies, e.g., between 250 and 4,000 Hz.
Power supply 42 may also be included with assembly 16 to provide power to the receiver, transducer, and/or processor, if also included. Although power supply 42 may be a simple battery, replaceable or permanent, other variations may include a power supply 42 which is charged by inductance via an external charger. Additionally, power supply 42 may alternatively be charged via direct coupling to an alternating current (AC) or direct current (DC) source. Other variations may include a power supply 42 which is charged via a mechanical mechanism, such as an internal pendulum or slidable electrical inductance charger as known in the art, which is actuated via, e.g., motions of the jaw and/or other movement, translating the mechanical motion into stored electrical energy for charging power supply 42.
In another variation of assembly 16, rather than utilizing an extra-buccal transmitter, hearing aid assembly 50 may be configured as an independent assembly contained entirely within the user's mouth, as shown in
In order to transmit the vibrations corresponding to the received auditory signals efficiently and with minimal loss to the tooth or teeth, secure mechanical contact between the transducer and the tooth is ideally maintained to ensure efficient vibratory communication. Accordingly, any number of mechanisms may be utilized to maintain this vibratory communication.
For any of the variations described above, they may be utilized as a single device or in combination with any other variation herein, as practicable, to achieve the desired hearing level in the user. Moreover, more than one oral appliance device and electronics and/or transducer assemblies may be utilized at any one time. For example,
Moreover, each of the different transducers 60, 62, 64, 66 can also be programmed to vibrate in a manner which indicates the directionality of sound received by the microphone worn by the user. For example, different transducers positioned at different locations within the user's mouth can vibrate in a specified manner by providing sound or vibrational cues to inform the user of the direction from which a sound was detected relative to an orientation of the user, as described in further detail below. For instance, a first transducer located, e.g., on a user's left tooth, can be programmed to vibrate for sound detected originating from the user's left side. Similarly, a second transducer located, e.g., on a user's right tooth, can be programmed to vibrate for sound detected originating from the user's right side. Other variations and cues may be utilized as these examples are intended to be illustrative of potential variations.
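By way of a non-limiting illustration, such left/right cue routing may be sketched in code as follows; the function and signal names are hypothetical and assume a signed azimuth convention (negative values toward the user's left):

```python
def directional_cue(azimuth_deg, cue_signal):
    """Route a cue to the transducer on the side from which the sound was
    detected (negative azimuth = user's left). Returns the drive signals
    as the pair (left_transducer, right_transducer)."""
    silence = [0.0] * len(cue_signal)
    if azimuth_deg < 0:
        return cue_signal, silence  # vibrate the left-tooth transducer
    return silence, cue_signal      # vibrate the right-tooth transducer
```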
In yet another variation for separating the microphone from the transducer assembly,
In utilizing multiple transducers and/or processing units, several features may be incorporated with the oral appliance(s) to effect any number of enhancements to the quality of the conducted vibratory signals and/or to emulate various perceptual features such that the signals transmitted via sound conduction through teeth or bone structures in and/or around the mouth correlate to the auditory signals received by the user.
As illustrated in
Moreover, the one or more transducers 114, 116, 118 may be positioned along respective retaining portions 21, 23 and configured to emulate directionality of audio signals received by the user to provide a sense of direction with respect to conducted audio signals. Additionally, one or more processors 120, 124 may also be provided along one or both retaining portions 21, 23 to process received audio signals, e.g., to translate the audio signals into vibrations suitable for conduction to the user, as well as to provide for other functional features. Furthermore, an optional processor 122 may also be provided along one or both retaining portions 21, 23 for interfacing and/or receiving wireless signals from other external devices such as an input control, as described above, or other wireless devices.
In configurations particularly where the one or more microphones are positioned within the user's mouth, filtering features such as Acoustic Echo Cancellation (AEC) may be optionally utilized to eliminate or mitigate undesired sounds received by the microphones. AEC algorithms are well established and are typically used to anticipate the signal which may re-enter the transmission path from the microphone and to cancel it out by digitally sampling an initial received signal to form a reference signal. Generally, the received signal is produced by the transducer, and any reverberant signal which may be picked up again by the microphone is digitally sampled to form an echo signal. The reference and echo signals may be compared such that the two signals are summed ideally at 180° out of phase to result in a null signal, thereby cancelling the echo.
In the variation shown in
Samples of the undesired sounds may be compared against desired sounds to eliminate or mitigate the undesired sounds prior to actuating the one or more transducers to vibrate only the resulting desired sounds to the user. In this example, first microphone 110 may be positioned along a buccal surface of the retaining portion 23 to receive desired sounds while second microphone 112 may be positioned along a lingual surface of retaining portion 21 to receive the undesirable sound signals. Processor 120 may be positioned along either retaining portion 21 or 23, in this case along a lingual surface of retaining portion 21, and may be in wired or wireless communication with the microphones 110, 112.
Although audio signals may be attenuated by passing through the cheek of the user, especially when the mouth is closed, first microphone 110 may still receive the desired audio signals for processing by processor 120, which may also amplify the received audio signals. As illustrated schematically in
The desired audio signals may be transmitted via wired or wireless communication along a receive path 142 where the signal 144 may be sampled and received by AEC processor 120. A portion of the far end speech 140 may be transmitted to one or more transducers 114, where the desired audio signals may initially be conducted via vibration 146 through the user's bones. Any resulting echo or reverberations 148 from the transmitted vibration 146 may be detected by second microphone 112 along with any other undesirable noises or audio signals 150, as mentioned above. The undesired signals 148, 150 detected by second microphone 112 or the sampled signal 144 received by AEC processor 120 may be processed and shifted out of phase, e.g., ideally 180° out of phase, such that the summation 154 of the two signals results in a cancellation of any echo 148 and/or other undesired sounds 150.
The resulting summed audio signal may be redirected through an adaptive filter 156 and re-summed 154 to further clarify the audio signal until the desired audio signal is passed along to the one or more transducers 114, where the filtered signal 162, free or relatively free from the undesired sounds, may be conducted 160 to the user. Although two microphones 110, 112 are described in this example, an array of additional microphones may be utilized throughout the oral cavity of the user. Alternatively, as mentioned above, one or more microphones may also be positioned or worn by the user outside the mouth, such as in a bracelet, necklace, etc., and used alone or in combination with the one or more intra-buccal microphones. Furthermore, although three transducers 114, 116, 118 are illustrated, other variations may utilize a single transducer or more than three transducers positioned throughout the user's oral cavity, if so desired.
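As a non-limiting sketch, the echo-cancellation loop described above may be approximated with a normalized least-mean-squares (NLMS) adaptive filter, one common realization of such an adaptive canceller; the algorithm choice, function name, and parameter values here are illustrative assumptions rather than a required implementation. In the terms used above, `reference` corresponds roughly to the sampled signal 144 and `mic` to the signal detected by second microphone 112.

```python
import numpy as np

def nlms_echo_cancel(reference, mic, filter_len=128, mu=0.5, eps=1e-8):
    """Suppress the echo of `reference` (the signal driving the transducer)
    picked up in `mic` (an intra-buccal microphone). Returns the residual,
    which approximates the desired sound with the echo removed."""
    w = np.zeros(filter_len)               # adaptive estimate of the echo path
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        # Most recent filter_len reference samples, newest first
        x = reference[max(0, n - filter_len + 1):n + 1][::-1]
        x = np.pad(x, (0, filter_len - len(x)))
        echo_est = w @ x                   # estimated echo at sample n
        e = mic[n] - echo_est              # subtraction ~ summing 180° out of phase
        w += (mu / (eps + x @ x)) * e * x  # NLMS tap update
        out[n] = e
    return out
```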
Independent from or in combination with acoustic echo cancellation, another processing feature for the oral appliance may include use of a multiband actuation system to facilitate the efficiency with which audio signals may be conducted to the user. Rather than utilizing a single transducer to cover the entire range of the frequency spectrum (e.g., 200 Hz to 10,000 Hz), one variation may utilize two or more transducers where each transducer is utilized to deliver sounds within certain frequencies. For instance, a first transducer may be utilized to deliver sounds in the 200 Hz to 2000 Hz frequency range and a second transducer may be used to deliver sounds in the 2000 Hz to 10,000 Hz frequency range. Alternatively, these frequency ranges may be discrete or overlapping. As individual transducers may be configured to handle only a subset of the frequency spectrum, the transducers may be more efficient in their design.
Additionally, for certain applications where high fidelity signals need not be transmitted to the user, individual higher frequency transducers may be shut off to conserve power. In yet another alternative, certain transducers may be omitted entirely, particularly transducers configured for lower frequency vibrations.
As illustrated in
One or both processors 120 and/or 124, which are in communication with the one or more transducers (in this example transducers 114, 116, 118), may be programmed to treat the audio signals for each particular frequency range similarly or differently. For instance, processors 120 and/or 124 may apply a higher gain level to the signals from one band with respect to another band. Additionally, one or more of the transducers 114, 116, 118 may be configured differently to optimally transmit vibrations within their respective frequency ranges. In one variation, one or more of the transducers 114, 116, 118 may be varied in size or in shape to effectuate an optimal configuration for transmission within their respective frequencies.
As mentioned above, the one or more of transducers 114, 116, 118 may also be powered on or off by the processor to save on power consumption in certain listening applications. As an example, higher frequency transducers 114, 118 may be shut off when higher frequency signals are not utilized such as when the user is driving. In other examples, the user may activate all transducers 114, 116, 118 such as when the user is listening to music. In yet another variation, higher frequency transducers 114, 118 may also be configured to deliver high volume audio signals, such as for alarms, compared to lower frequency transducers 116. Thus, the perception of a louder sound may be achieved just by actuation of the higher frequency transducers 114, 118 without having to actuate any lower frequency transducers 116.
An example of how audio signals received by a user may be split into sub-frequency ranges for actuation by corresponding lower or higher frequency transducers is schematically illustrated in
Each respective filtered signal 178, 180 may be passed on to a respective processor 182, 184 to further process each band's signal according to an algorithm to achieve any desired output per transducer. Thus, processor 182 may process the signal 178 to create the output signal 194 to vibrate the lower frequency transducer 116 accordingly while the processor 184 may process the signal 180 to create the output signal 196 to vibrate the higher frequency transducers 114 and/or 118 accordingly. An optional controller 186 may receive control data 188 from user input controls, as described above, for optionally sending signals 190, 192 to respective processors 182, 184 to switch each respective processor on or off and/or to append ancillary data and/or control information for the subsequent transducers.
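A non-limiting sketch of this band-splitting stage is shown below, assuming a 2,000 Hz crossover consistent with the example ranges above; the filter type, filter order, per-band gains, and the on/off flag are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_for_transducers(audio, fs, crossover_hz=2000.0, order=4,
                          low_gain=1.0, high_gain=1.0, high_enabled=True):
    """Split `audio` into a low band (e.g., for a 200 Hz-2 kHz transducer)
    and a high band (e.g., for 2-10 kHz transducers), apply a per-band
    gain, and allow the high band to be shut off to conserve power."""
    sos_lo = butter(order, crossover_hz, btype="lowpass", fs=fs, output="sos")
    sos_hi = butter(order, crossover_hz, btype="highpass", fs=fs, output="sos")
    low = low_gain * sosfilt(sos_lo, audio)
    if not high_enabled:                   # e.g., shut off to save power
        return low, np.zeros_like(low)
    return low, high_gain * sosfilt(sos_hi, audio)
```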
In addition to or independent from either acoustic echo cancellation and/or multiband actuation of transducers, yet another process which may utilize the multiple transducers may include the utilization of directionality via the conducted vibrations to emulate the directional perception of audio signals received by the user. Generally, human hearing is able to distinguish the direction of a sound wave by perceiving differences in sound pressure levels between the two cochleae. In one example for providing the perception of directionality with an oral appliance, two or more transducers, such as transducers 114, 118, may be positioned apart from one another along respective retaining portions 21, 23, as shown in
One transducer may be actuated corresponding to an audio signal while the other transducer is actuated corresponding to the same audio signal but with a phase and/or amplitude and/or delay difference intentionally induced corresponding to a direction emulated for the user. Generally, upon receiving a directional audio signal and depending upon the direction to be emulated and the separation between the respective transducers, a particular phase and/or gain and/or delay change to the audio signal may be applied to the respective transducer while leaving the other transducer to receive the audio signal unchanged.
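One possible, non-limiting realization is sketched below: the near-side transducer is driven with the audio unchanged while the far-side transducer receives a delayed and attenuated copy. The maximum delay (about 0.7 ms, on the order of the interaural time difference for airborne sound) and the attenuation are placeholder assumptions; values suited to bone conduction would be determined per user.

```python
import numpy as np

def emulate_direction(signal, fs, azimuth_deg,
                      max_delay_s=0.0007, max_atten_db=6.0):
    """Return (left_drive, right_drive) for two tooth-mounted transducers
    so that `signal` is perceived as arriving from `azimuth_deg`
    (negative = user's left, positive = user's right)."""
    frac = np.sin(np.radians(azimuth_deg))        # -1 hard left .. +1 hard right
    n = int(round(abs(frac) * max_delay_s * fs))  # far-side delay in samples
    g = 10.0 ** (-abs(frac) * max_atten_db / 20)  # far-side attenuation
    near = np.concatenate([signal, np.zeros(n)])  # unchanged (padded to length)
    far = g * np.concatenate([np.zeros(n), signal])
    if frac >= 0:   # source to the right: the left transducer is the far side
        return far, near
    return near, far
```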
As illustrated in the schematic illustration of
With the estimated direction of arrival of the detected sound 200 determined, the data may be modified for phase and/or amplitude and/or delay adjustments 204 as well as for orientation compensation 208, if necessary, based on additional information received from the microphones 110, 112 and the relative orientation of the transducers 114, 116, 118, as described in further detail below. The process of adjusting for phase and/or amplitude and/or delay 204 may involve calculating one phase adjustment for one of the transducers. This may simply involve an algorithm where, given a desired direction to be emulated, a table of values may correlate a set of given phase and/or amplitude and/or delay values for adjusting one or more of the transducers. Because the adjustment values may depend on several different factors, e.g., speed of sound conductance through a user's skull, distance between transducers, etc., each particular user may have a specific table of values. Alternatively, standard set values may be determined for groups of users having similar anatomical features, such as jaw size among other variations, and requirements. In other variations, rather than utilizing a table of values in adjusting for phase and/or amplitude and/or delay 204, set formulas or algorithms may be programmed in processor 120 and/or 124 to determine phase and/or amplitude and/or delay adjustment values. Use of an algorithm entails continuous calculation of any adjustment which may be needed or desired, whereas use of a table of values simply requires storage in memory.
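A table-of-values approach of the kind described might be sketched as follows, where a hypothetical per-user calibration table maps an emulated direction to far-side phase, gain, and delay adjustments and linearly interpolates between calibrated angles; the numbers shown are placeholders, not measured values:

```python
# Hypothetical per-user calibration: emulated azimuth (degrees) ->
# (phase shift in radians, gain in dB, delay in ms) applied to the
# far-side transducer. Entries would be fit to the user's anatomy.
USER_TABLE = {
    -90: (3.14, -6.0, 0.70),
    -45: (1.57, -3.0, 0.35),
      0: (0.00,  0.0, 0.00),
     45: (1.57, -3.0, 0.35),
     90: (3.14, -6.0, 0.70),
}

def lookup_adjustment(azimuth_deg):
    """Clamp to the calibrated range, then linearly interpolate between
    the two nearest table entries."""
    angles = sorted(USER_TABLE)
    a = min(max(azimuth_deg, angles[0]), angles[-1])
    lo = max(x for x in angles if x <= a)
    hi = min(x for x in angles if x >= a)
    if lo == hi:
        return USER_TABLE[lo]
    t = (a - lo) / (hi - lo)
    return tuple((1 - t) * p + t * q
                 for p, q in zip(USER_TABLE[lo], USER_TABLE[hi]))
```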
Once any adjustments in phase and/or amplitude and/or delay 204 are determined and with the reproduced signals 202 processed from the microphones 110, 112, these signals may then be processed to calculate any final phase and/or amplitude and/or delay adjustments 206 and these final signals may be applied to the transducers 114, 116, 118, as illustrated, to emulate the directionality of received audio signals to the user. A detailed schematic illustration of the final phase and/or amplitude and/or delay adjustments 206 is illustrated in
As mentioned above, compensating 208 for an orientation of the transducers relative to one another as well as relative to an orientation of the user may be taken into account in calculating any adjustments to phase and/or amplitude and/or delay of the signals applied to the transducers. For example, the direction 230 perpendicular to a line 224 connecting the microphones 226, 228 (intra-buccal and/or extra-buccal) may define a zero degree direction of the microphones. A zero degree direction of the user's head may be indicated by the direction 222, which may be illustrated as in
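Reduced to code, this compensation is a change of reference frame; the non-limiting sketch below converts a direction of arrival measured against the microphones' zero-degree axis (direction 230) into the head's frame (direction 222), assuming the angular offset between the two axes is known or sensed:

```python
def compensate_orientation(doa_mic_deg, mic_to_head_offset_deg):
    """Convert a direction of arrival from the microphone-array frame into
    the user's head frame, wrapping the result into [-180, 180)."""
    return (doa_mic_deg + mic_to_head_offset_deg + 180.0) % 360.0 - 180.0
```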
In addition to or independent from any of the processes described above, another feature which may utilize the oral appliance and processing capabilities may include the ability to vibrationally conduct ancillary audio signals to the user, e.g., the oral appliance may be configured to wirelessly receive and conduct signals from secondary audio sources to the user. Examples may include the transmission of an alarm signal which only the user may hear or music conducted to the user in public locations, etc. The user may thus enjoy privacy in receiving these ancillary signals while also being able to listen and/or converse in an environment where a primary audio signal is desired.
The audio receiver processor 230 may communicate wirelessly or via wire with the audio application processor 232. During one example of use, a primary audio signal 240 (e.g., conversational speech) along with one or more ancillary audio signals 236 (e.g., alarms, music players, cell phones, PDAs, etc.) may be received by the one or more microphones of a receiver unit 250 of audio receiver processor 230. The primary signal 252 and ancillary signals 254 may be transmitted electrically to a multiplexer 256 which may combine the various signals 252, 254 in view of optional methods, controls and/or priority data 262 received from a user control 264, as described above. Parameters such as prioritization of the signals as well as volume, timers, etc., may be set by the user control 264. The multiplexed signal 258 having the combined audio signals may then be transmitted to processor 260, which may transmit the multiplexed signal 266 to the audio application processor 232, as illustrated in
As described above, the various audio signals 236, 240 may be combined and multiplexed in various forms 258 for transmission to the user 242. For example, one variation for multiplexing the audio signals via multiplexer 256 may entail combining the audio signals such that the primary 240 and ancillary 236 signals are transmitted by the transducers in parallel where all audio signals are conducted concurrently to the user, as illustrated in
Alternatively, the multiplexed signal 258 may be transmitted such that the primary 240 and ancillary 236 signals are transmitted in series, as graphically illustrated in
In yet another example, the transmitted signals may be conducted to the user in a hybrid form combining the parallel and serial methods described above and as graphically illustrated in
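The parallel, serial, and hybrid schemes may be sketched, in a non-limiting way, as a simple block mixer; the `mode` values, activity threshold, and ducking gain below are illustrative assumptions standing in for the priority data set via user control 264:

```python
import numpy as np

def multiplex(primary, ancillary, mode="parallel", duck_gain=0.3, thresh=1e-4):
    """Combine equal-length primary and ancillary audio blocks according
    to the selected priority scheme and return the signal to conduct."""
    if mode == "parallel":                # both conducted concurrently
        return primary + ancillary
    active = np.abs(ancillary) > thresh   # crude ancillary-activity detection
    if mode == "serial":                  # ancillary interrupts the primary
        return np.where(active, ancillary, primary)
    if mode == "hybrid":                  # primary ducked while ancillary active
        return np.where(active, duck_gain, 1.0) * primary + ancillary
    raise ValueError(f"unknown mode: {mode}")
```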
The applications of the devices and methods discussed above are not limited to the treatment of hearing loss but may include any number of further treatment applications. Moreover, such devices and methods may be applied to other treatment sites within the body. Modification of the above-described assemblies and methods for carrying out the invention, combinations between different variations as practicable, and variations of aspects of the invention that are obvious to those of skill in the art are intended to be within the scope of the claims.
This application is a continuation of U.S. patent application Ser. No. 14/261,759 filed Apr. 25, 2014, which is a continuation of U.S. patent application Ser. No. 12/840,213 filed Jul. 20, 2010 (now U.S. Pat. No. 8,712,077 issued Apr. 29, 2014), which is a continuation of U.S. patent application Ser. No. 11/672,250 filed Feb. 7, 2007 (now U.S. Pat. No. 7,844,070 issued Nov. 30, 2010), which claims the benefit of priority to U.S. Provisional Patent Application Nos. 60/809,244 filed May 30, 2006 and 60/820,223 filed Jul. 24, 2006, each of which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
2045404 | Nicholides | Jun 1936 | A |
2161169 | Jefferis, Jr. | Jun 1939 | A |
2230397 | Abraham | Feb 1941 | A |
2239550 | Cubert | Apr 1941 | A |
2242118 | Fischer | May 1941 | A |
2318872 | Madiera | May 1943 | A |
2848811 | Wagner | Aug 1958 | A |
2908974 | Stifter | Oct 1959 | A |
2977425 | Cole | Mar 1961 | A |
2995633 | Puharich et al. | Aug 1961 | A |
3156787 | Puharich et al. | Nov 1964 | A |
3170993 | Puharich et al. | Feb 1965 | A |
3267931 | Puharich et al. | Aug 1966 | A |
3325743 | Blum | Jun 1967 | A |
3712962 | Epley | Jan 1973 | A |
3787641 | Santori | Jan 1974 | A |
3894196 | Briskey | Jul 1975 | A |
3985977 | Beaty et al. | Oct 1976 | A |
4025732 | Traunmuller | May 1977 | A |
4133975 | Barker, III | Jan 1979 | A |
4150262 | Ono | Apr 1979 | A |
4382780 | Kurz | May 1983 | A |
4443668 | Warren | Apr 1984 | A |
4478224 | Bailey | Oct 1984 | A |
4498461 | Hakansson | Feb 1985 | A |
4511330 | Smiley et al. | Apr 1985 | A |
4591668 | Iwata | May 1986 | A |
4612915 | Hough et al. | Sep 1986 | A |
4629424 | Lauks et al. | Dec 1986 | A |
4642769 | Petrofsky | Feb 1987 | A |
4729366 | Schaefer | Mar 1988 | A |
4736268 | Kipnis | Apr 1988 | A |
4791673 | Schreiber | Dec 1988 | A |
4817044 | Ogren | Mar 1989 | A |
4827525 | Hotvet et al. | May 1989 | A |
4832033 | Maher et al. | May 1989 | A |
4867682 | Hammesfahr et al. | Sep 1989 | A |
4904233 | Haakansson et al. | Feb 1990 | A |
4920984 | Furumichi et al. | May 1990 | A |
4962559 | Schuman | Oct 1990 | A |
4977623 | Demarco | Dec 1990 | A |
4982434 | Lenhardt et al. | Jan 1991 | A |
5012520 | Steeger | Apr 1991 | A |
5026278 | Oxman et al. | Jun 1991 | A |
5033999 | Mersky | Jul 1991 | A |
5047994 | Lenhardt et al. | Sep 1991 | A |
5060526 | Barth et al. | Oct 1991 | A |
5082007 | Adell | Jan 1992 | A |
5165131 | Staar | Nov 1992 | A |
5194003 | Garay | Mar 1993 | A |
5212476 | Maloney | May 1993 | A |
5233987 | Fabian et al. | Aug 1993 | A |
5277694 | Leysieffer et al. | Jan 1994 | A |
5323468 | Bottesch | Jun 1994 | A |
5325436 | Soli et al. | Jun 1994 | A |
5326349 | Baraff | Jul 1994 | A |
5354326 | Comben et al. | Oct 1994 | A |
5372142 | Madsen et al. | Dec 1994 | A |
5402496 | Soli et al. | Mar 1995 | A |
5403262 | Gooch | Apr 1995 | A |
5447489 | Issalene | Sep 1995 | A |
5455842 | Mersky et al. | Oct 1995 | A |
5460593 | Mersky et al. | Oct 1995 | A |
5477489 | Wiedmann | Dec 1995 | A |
5485851 | Erickson | Jan 1996 | A |
5487012 | Topholm et al. | Jan 1996 | A |
5506095 | Callne | Apr 1996 | A |
5523745 | Fortune et al. | Jun 1996 | A |
5546459 | Sih et al. | Aug 1996 | A |
5558618 | Maniglia | Sep 1996 | A |
5565759 | Dunstan | Oct 1996 | A |
5579284 | May | Nov 1996 | A |
5586562 | Matz | Dec 1996 | A |
5616027 | Jacobs et al. | Apr 1997 | A |
5624376 | Ball et al. | Apr 1997 | A |
5659156 | Mauney et al. | Aug 1997 | A |
5661813 | Shimauchi et al. | Aug 1997 | A |
5668883 | Abe et al. | Sep 1997 | A |
5673328 | Wandl et al. | Sep 1997 | A |
5680028 | McEachern | Oct 1997 | A |
5701348 | Shennib et al. | Dec 1997 | A |
5706251 | May | Jan 1998 | A |
5730151 | Summer et al. | Mar 1998 | A |
5735790 | Hakansson et al. | Apr 1998 | A |
5760692 | Block | Jun 1998 | A |
5772575 | Lesinski et al. | Jun 1998 | A |
5788656 | Mino | Aug 1998 | A |
5793875 | Lehr et al. | Aug 1998 | A |
5795287 | Ball et al. | Aug 1998 | A |
5800336 | Ball et al. | Sep 1998 | A |
5812496 | Peck | Sep 1998 | A |
5828765 | Gable | Oct 1998 | A |
5844996 | Enzmann et al. | Dec 1998 | A |
5864481 | Gross et al. | Jan 1999 | A |
5889871 | Downs, Jr. | Mar 1999 | A |
5899847 | Adams et al. | May 1999 | A |
5902167 | Filo et al. | May 1999 | A |
5914701 | Gershenfeld et al. | Jun 1999 | A |
5930202 | Duckworth et al. | Jul 1999 | A |
5961443 | Rastatter et al. | Oct 1999 | A |
5980246 | Ramsay et al. | Nov 1999 | A |
5984681 | Huang | Nov 1999 | A |
6029558 | Stevens et al. | Feb 2000 | A |
6047074 | Zoels et al. | Apr 2000 | A |
6057668 | Chao | May 2000 | A |
6068589 | Neukermans | May 2000 | A |
6068590 | Brisken | May 2000 | A |
6072884 | Kates | Jun 2000 | A |
6072885 | Stockham, Jr. et al. | Jun 2000 | A |
6075557 | Holliman et al. | Jun 2000 | A |
6086662 | Brodkin et al. | Jul 2000 | A |
6089864 | Buckner et al. | Jul 2000 | A |
6115477 | Filo et al. | Sep 2000 | A |
6116983 | Long et al. | Sep 2000 | A |
6118882 | Haynes | Sep 2000 | A |
6171229 | Kroll et al. | Jan 2001 | B1 |
6174278 | Jaeger et al. | Jan 2001 | B1 |
6184651 | Fernandez et al. | Feb 2001 | B1 |
6200133 | Kittelsen | Mar 2001 | B1 |
6223018 | Fukumoto et al. | Apr 2001 | B1 |
6239705 | Glen | May 2001 | B1 |
6261237 | Swanson et al. | Jul 2001 | B1 |
6333269 | Naito et al. | Dec 2001 | B2 |
6371758 | Kittelsen | Apr 2002 | B1 |
6377693 | Lippa et al. | Apr 2002 | B1 |
6390971 | Adams et al. | May 2002 | B1 |
6394969 | Lenhardt | May 2002 | B1 |
6447294 | Price | Sep 2002 | B1 |
6504942 | Hong et al. | Jan 2003 | B1 |
6516228 | Berrang et al. | Feb 2003 | B1 |
6533747 | Polaschegg et al. | Mar 2003 | B1 |
6538558 | Sakazume et al. | Mar 2003 | B2 |
6551761 | Hall-Goulle et al. | Apr 2003 | B1 |
6554761 | Puria et al. | Apr 2003 | B1 |
6585637 | Brillhart et al. | Jul 2003 | B2 |
6613001 | Dworkin | Sep 2003 | B1 |
6626822 | Jaeger et al. | Sep 2003 | B1 |
6629922 | Puria et al. | Oct 2003 | B1 |
6631197 | Taenzer | Oct 2003 | B1 |
6633747 | Reiss | Oct 2003 | B1 |
6658124 | Meadows | Dec 2003 | B1 |
6682472 | Davis | Jan 2004 | B1 |
6694035 | Teicher et al. | Feb 2004 | B1 |
6754472 | Williams et al. | Jun 2004 | B1 |
6756901 | Campman | Jun 2004 | B2 |
6778674 | Panasik et al. | Aug 2004 | B1 |
6826284 | Benesty et al. | Nov 2004 | B1 |
6849536 | Lee et al. | Feb 2005 | B2 |
6885753 | Bank | Apr 2005 | B2 |
6917688 | Yu et al. | Jul 2005 | B2 |
6937736 | Toda | Aug 2005 | B2 |
6937769 | Onno | Aug 2005 | B2 |
6941952 | Rush, III | Sep 2005 | B1 |
6954668 | Cuozzo | Oct 2005 | B1 |
6985599 | Asnes | Jan 2006 | B2 |
7003099 | Zhang et al. | Feb 2006 | B1 |
7010139 | Smeehuyzen | Mar 2006 | B1 |
7033313 | Lupin et al. | Apr 2006 | B2 |
7035415 | Belt et al. | Apr 2006 | B2 |
7065223 | Westerkull | Jun 2006 | B2 |
7074222 | Westerkull | Jul 2006 | B2 |
7076077 | Atsumi et al. | Jul 2006 | B2 |
7099822 | Zangi | Aug 2006 | B2 |
7162420 | Zangi et al. | Jan 2007 | B2 |
7164948 | Struble et al. | Jan 2007 | B2 |
7171003 | Venkatesh et al. | Jan 2007 | B1 |
7171008 | Elko | Jan 2007 | B2 |
7174022 | Zhang et al. | Feb 2007 | B1 |
7174026 | Niederdränk | Feb 2007 | B2 |
7190995 | Chervin et al. | Mar 2007 | B2 |
7198596 | Westerkull | Apr 2007 | B2 |
7206423 | Feng et al. | Apr 2007 | B1 |
7246058 | Burnett | Jul 2007 | B2 |
7246619 | Truschel et al. | Jul 2007 | B2 |
7258533 | Tanner et al. | Aug 2007 | B2 |
7269266 | Anjanappa et al. | Sep 2007 | B2 |
7271569 | Oglesbee | Sep 2007 | B2 |
7281924 | Ellison | Oct 2007 | B2 |
7298857 | Shennib et al. | Nov 2007 | B2 |
7310427 | Retchin et al. | Dec 2007 | B2 |
7329226 | Ni et al. | Feb 2008 | B1 |
7331349 | Brady et al. | Feb 2008 | B2 |
7333624 | Husung | Feb 2008 | B2 |
7361216 | Kangas et al. | Apr 2008 | B2 |
7409070 | Pitulia | Aug 2008 | B2 |
7433484 | Asseily et al. | Oct 2008 | B2 |
7436974 | Harper | Oct 2008 | B2 |
7463929 | Simmons | Dec 2008 | B2 |
7486798 | Anjanappa | Feb 2009 | B2 |
7512448 | Malick et al. | Mar 2009 | B2 |
7512720 | Schultz et al. | Mar 2009 | B2 |
7520851 | Davis et al. | Apr 2009 | B2 |
7522738 | Miller, III | Apr 2009 | B2 |
7522740 | Julstrom et al. | Apr 2009 | B2 |
7610919 | Utley et al. | Nov 2009 | B2 |
7629897 | Koljonen | Dec 2009 | B2 |
7664277 | Abolfathi et al. | Feb 2010 | B2 |
7680284 | Lee | Mar 2010 | B2 |
7682303 | Abolfathi | Mar 2010 | B2 |
7724911 | Menzel et al. | May 2010 | B2 |
7787946 | Stahmann et al. | Aug 2010 | B2 |
7796769 | Abolfathi | Sep 2010 | B2 |
7801319 | Abolfathi | Sep 2010 | B2 |
7806831 | Lavie et al. | Oct 2010 | B2 |
7844064 | Abolfathi et al. | Nov 2010 | B2 |
7844070 | Abolfathi | Nov 2010 | B2 |
7845041 | Gatzemeyer et al. | Dec 2010 | B2 |
7853030 | Grasbon et al. | Dec 2010 | B2 |
7854698 | Abolfathi | Dec 2010 | B2 |
7876906 | Abolfathi | Jan 2011 | B2 |
7945068 | Abolfathi et al. | May 2011 | B2 |
7974845 | Spiridigliozzi et al. | Jul 2011 | B2 |
8023676 | Abolfathi et al. | Sep 2011 | B2 |
8043091 | Schmitt | Oct 2011 | B2 |
8150075 | Abolfathi et al. | Apr 2012 | B2 |
8160279 | Abolfathi | Apr 2012 | B2 |
8170242 | Menzel et al. | May 2012 | B2 |
8177705 | Abolfathi | May 2012 | B2 |
8189838 | Rich | May 2012 | B1 |
8189840 | Guenther | May 2012 | B2 |
8224013 | Abolfathi et al. | Jul 2012 | B2 |
8233654 | Abolfathi | Jul 2012 | B2 |
8254611 | Abolfathi et al. | Aug 2012 | B2 |
8270637 | Abolfathi | Sep 2012 | B2 |
8270638 | Abolfathi et al. | Sep 2012 | B2 |
8291912 | Abolfathi et al. | Oct 2012 | B2 |
8295506 | Kassayan et al. | Oct 2012 | B2 |
8333203 | Spiridigliozzi et al. | Dec 2012 | B2 |
8358792 | Menzel et al. | Jan 2013 | B2 |
8433080 | Rader et al. | Apr 2013 | B2 |
8433082 | Abolfathi | Apr 2013 | B2 |
8433083 | Abolfathi et al. | Apr 2013 | B2 |
8503930 | Kassayan | Aug 2013 | B2 |
8577066 | Abolfathi | Nov 2013 | B2 |
8585575 | Abolfathi | Nov 2013 | B2 |
8588447 | Abolfathi et al. | Nov 2013 | B2 |
8649535 | Menzel et al. | Feb 2014 | B2 |
8649536 | Kassayan et al. | Feb 2014 | B2 |
8649543 | Abolfathi et al. | Feb 2014 | B2 |
8660278 | Abolfathi et al. | Feb 2014 | B2 |
8712077 | Abolfathi | Apr 2014 | B2 |
8712078 | Abolfathi | Apr 2014 | B2 |
8795172 | Abolfathi et al. | Aug 2014 | B2 |
8867994 | Kassayan et al. | Oct 2014 | B2 |
9049527 | Andersson | Jun 2015 | B2 |
9113262 | Abolfathi et al. | Aug 2015 | B2 |
9143873 | Abolfathi | Sep 2015 | B2 |
9185485 | Abolfathi | Nov 2015 | B2 |
9247332 | Kassayan et al. | Jan 2016 | B2 |
9398370 | Abolfathi | Jul 2016 | B2 |
9615182 | Abolfathi et al. | Apr 2017 | B2 |
9736602 | Menzel et al. | Aug 2017 | B2 |
9781525 | Abolfathi | Oct 2017 | B2 |
9781526 | Abolfathi | Oct 2017 | B2 |
9826324 | Abolfathi | Nov 2017 | B2 |
9900714 | Abolfathi | Feb 2018 | B2 |
9906878 | Abolfathi et al. | Feb 2018 | B2 |
10109289 | Kassayan et al. | Oct 2018 | B2 |
10194255 | Menzel et al. | Jan 2019 | B2 |
20010003788 | Ball et al. | Jun 2001 | A1 |
20010033669 | Bank et al. | Oct 2001 | A1 |
20010051776 | Lenhardt | Dec 2001 | A1 |
20020026091 | Leysieffer | Feb 2002 | A1 |
20020039427 | Whitwell et al. | Apr 2002 | A1 |
20020049479 | Pitts | Apr 2002 | A1 |
20020071581 | Leysieffer et al. | Jun 2002 | A1 |
20020077831 | Numa | Jun 2002 | A1 |
20020122563 | Schumaier | Sep 2002 | A1 |
20020173697 | Lenhardt | Nov 2002 | A1 |
20030004403 | Drinan et al. | Feb 2003 | A1 |
20030048915 | Bank | Mar 2003 | A1 |
20030059078 | Downs et al. | Mar 2003 | A1 |
20030091200 | Pompei | May 2003 | A1 |
20030114899 | Woods et al. | Jun 2003 | A1 |
20030199956 | Struble et al. | Oct 2003 | A1 |
20030212319 | Magill | Nov 2003 | A1 |
20040006283 | Harrison et al. | Jan 2004 | A1 |
20040015058 | Bessen et al. | Jan 2004 | A1 |
20040057591 | Beck et al. | Mar 2004 | A1 |
20040063073 | Kajimoto et al. | Apr 2004 | A1 |
20040127812 | Micheyl et al. | Jul 2004 | A1 |
20040131200 | Davis | Jul 2004 | A1 |
20040138723 | Malick et al. | Jul 2004 | A1 |
20040141624 | Davis et al. | Jul 2004 | A1 |
20040196998 | Noble | Oct 2004 | A1 |
20040202339 | O'Brien, Jr. et al. | Oct 2004 | A1 |
20040202344 | Anjanappa et al. | Oct 2004 | A1 |
20040214130 | Fischer et al. | Oct 2004 | A1 |
20040214614 | Aman | Oct 2004 | A1 |
20040234080 | Hernandez et al. | Nov 2004 | A1 |
20040243481 | Bradbury et al. | Dec 2004 | A1 |
20040247143 | Lantrua et al. | Dec 2004 | A1 |
20040254668 | Jang et al. | Dec 2004 | A1 |
20050020873 | Berrang et al. | Jan 2005 | A1 |
20050037312 | Uchida | Feb 2005 | A1 |
20050067816 | Buckman | Mar 2005 | A1 |
20050070782 | Brodkin | Mar 2005 | A1 |
20050088435 | Geng | Apr 2005 | A1 |
20050090864 | Pines et al. | Apr 2005 | A1 |
20050113633 | Blau et al. | May 2005 | A1 |
20050115561 | Stahmann et al. | Jun 2005 | A1 |
20050129257 | Tamura | Jun 2005 | A1 |
20050137447 | Bernhard | Jun 2005 | A1 |
20050189910 | Hui | Sep 2005 | A1 |
20050196008 | Anjanappa et al. | Sep 2005 | A1 |
20050201574 | Lenhardt | Sep 2005 | A1 |
20050241646 | Sotos et al. | Nov 2005 | A1 |
20050271999 | Fishburne | Dec 2005 | A1 |
20050273170 | Navarro et al. | Dec 2005 | A1 |
20060008106 | Harper | Jan 2006 | A1 |
20060025648 | Lupin et al. | Feb 2006 | A1 |
20060056649 | Schumaier | Mar 2006 | A1 |
20060064037 | Shalon et al. | Mar 2006 | A1 |
20060079291 | Granovetter et al. | Apr 2006 | A1 |
20060155346 | Miller | Jul 2006 | A1 |
20060166157 | Rahman et al. | Jul 2006 | A1 |
20060167335 | Park et al. | Jul 2006 | A1 |
20060207611 | Anonsen | Sep 2006 | A1 |
20060230108 | Tatsuta et al. | Oct 2006 | A1 |
20060239468 | Desloge | Oct 2006 | A1 |
20060253005 | Drinan et al. | Nov 2006 | A1 |
20060270467 | Song et al. | Nov 2006 | A1 |
20060275739 | Ray | Dec 2006 | A1 |
20060277664 | Akhtar | Dec 2006 | A1 |
20070010704 | Pitulia | Jan 2007 | A1 |
20070035917 | Hotelling et al. | Feb 2007 | A1 |
20070036370 | Granovetter et al. | Feb 2007 | A1 |
20070041595 | Carazo et al. | Feb 2007 | A1 |
20070050061 | Klein et al. | Mar 2007 | A1 |
20070093733 | Choy | Apr 2007 | A1 |
20070105072 | Koljonen | May 2007 | A1 |
20070127755 | Bauman | Jun 2007 | A1 |
20070142072 | Lassally | Jun 2007 | A1 |
20070144396 | Hamel et al. | Jun 2007 | A1 |
20070183613 | Juneau et al. | Aug 2007 | A1 |
20070208542 | Vock et al. | Sep 2007 | A1 |
20070223735 | LoPresti et al. | Sep 2007 | A1 |
20070230713 | Davis | Oct 2007 | A1 |
20070230736 | Boesen | Oct 2007 | A1 |
20070239294 | Brueckner et al. | Oct 2007 | A1 |
20070242835 | Davis | Oct 2007 | A1 |
20070249889 | Hanson et al. | Oct 2007 | A1 |
20070258609 | Steinbuss | Nov 2007 | A1 |
20070265533 | Tran | Nov 2007 | A1 |
20070276270 | Tran | Nov 2007 | A1 |
20070280491 | Abolfathi | Dec 2007 | A1 |
20070280492 | Abolfathi | Dec 2007 | A1 |
20070280493 | Abolfathi | Dec 2007 | A1 |
20070280495 | Abolfathi | Dec 2007 | A1 |
20070286440 | Abolfathi et al. | Dec 2007 | A1 |
20070291972 | Abolfathi et al. | Dec 2007 | A1 |
20080019542 | Menzel et al. | Jan 2008 | A1 |
20080019557 | Bevirt et al. | Jan 2008 | A1 |
20080021327 | El-Bialy et al. | Jan 2008 | A1 |
20080045161 | Lee et al. | Feb 2008 | A1 |
20080064993 | Abolfathi et al. | Mar 2008 | A1 |
20080070181 | Abolfathi et al. | Mar 2008 | A1 |
20080109972 | Mah et al. | May 2008 | A1 |
20080144876 | Reining et al. | Jun 2008 | A1 |
20080146890 | LeBoeuf | Jun 2008 | A1 |
20080159559 | Akagi et al. | Jul 2008 | A1 |
20080165996 | Saito et al. | Jul 2008 | A1 |
20080205678 | Boguslavskij et al. | Aug 2008 | A1 |
20080227047 | Lowe et al. | Sep 2008 | A1 |
20080304677 | Abolfathi et al. | Dec 2008 | A1 |
20090014012 | Sanders | Jan 2009 | A1 |
20090022294 | Goldstein et al. | Jan 2009 | A1 |
20090022351 | Wieland et al. | Jan 2009 | A1 |
20090028352 | Petroff | Jan 2009 | A1 |
20090030529 | Berrang et al. | Jan 2009 | A1 |
20090043149 | Abel | Feb 2009 | A1 |
20090052698 | Rader et al. | Feb 2009 | A1 |
20090052702 | Murphy et al. | Feb 2009 | A1 |
20090088598 | Abolfathi | Apr 2009 | A1 |
20090097684 | Abolfathi et al. | Apr 2009 | A1 |
20090097685 | Menzel et al. | Apr 2009 | A1 |
20090099408 | Abolfathi et al. | Apr 2009 | A1 |
20090105523 | Kassayan et al. | Apr 2009 | A1 |
20090147976 | Abolfathi | Jun 2009 | A1 |
20090149722 | Abolfathi et al. | Jun 2009 | A1 |
20090175478 | Nakajima et al. | Jul 2009 | A1 |
20090180652 | Davis et al. | Jul 2009 | A1 |
20090208031 | Abolfathi | Aug 2009 | A1 |
20090210231 | Spiridigliozzi et al. | Aug 2009 | A1 |
20090220115 | Lantrua | Sep 2009 | A1 |
20090220921 | Abolfathi et al. | Sep 2009 | A1 |
20090226011 | Abolfathi et al. | Sep 2009 | A1 |
20090226017 | Abolfathi et al. | Sep 2009 | A1 |
20090226020 | Abolfathi et al. | Sep 2009 | A1 |
20090268932 | Abolfathi et al. | Oct 2009 | A1 |
20090270032 | Kassayan | Oct 2009 | A1 |
20090270673 | Abolfathi et al. | Oct 2009 | A1 |
20090274325 | Abolfathi | Nov 2009 | A1 |
20090281433 | Saadat et al. | Nov 2009 | A1 |
20100006111 | Spiridigliozzi et al. | Jan 2010 | A1 |
20100014689 | Kassayan et al. | Jan 2010 | A1 |
20100098269 | Abolfathi et al. | Apr 2010 | A1 |
20100098270 | Abolfathi et al. | Apr 2010 | A1 |
20100185046 | Abolfathi | Jul 2010 | A1 |
20100189288 | Menzel et al. | Jul 2010 | A1 |
20100194333 | Kassayan et al. | Aug 2010 | A1 |
20100220883 | Menzel et al. | Sep 2010 | A1 |
20100290647 | Abolfathi et al. | Nov 2010 | A1 |
20100312568 | Abolfathi | Dec 2010 | A1 |
20100322449 | Abolfathi | Dec 2010 | A1 |
20110002492 | Abolfathi et al. | Jan 2011 | A1 |
20110007920 | Abolfathi et al. | Jan 2011 | A1 |
20110026740 | Abolfathi | Feb 2011 | A1 |
20110061647 | Stahmann et al. | Mar 2011 | A1 |
20110081031 | Abolfathi | Apr 2011 | A1 |
20110116659 | Abolfathi | May 2011 | A1 |
20110245584 | Abolfathi | Oct 2011 | A1 |
20110280416 | Abolfathi et al. | Nov 2011 | A1 |
20110319021 | Proulx et al. | Dec 2011 | A1 |
20120022389 | Sanders | Jan 2012 | A1 |
20120116779 | Spiridigliozzi et al. | May 2012 | A1 |
20120142270 | Abolfathi et al. | Jun 2012 | A1 |
20120165597 | Proulx et al. | Jun 2012 | A1 |
20120259158 | Abolfathi | Oct 2012 | A1 |
20120296154 | Abolfathi | Nov 2012 | A1 |
20120321109 | Abolfathi et al. | Dec 2012 | A1 |
20120321113 | Abolfathi | Dec 2012 | A1 |
20130003996 | Menzel et al. | Jan 2013 | A1 |
20130003997 | Kassayan et al. | Jan 2013 | A1 |
20130006043 | Abolfathi et al. | Jan 2013 | A1 |
20130010987 | Abolfathi et al. | Jan 2013 | A1 |
20130034238 | Abolfathi | Feb 2013 | A1 |
20130044903 | Abolfathi et al. | Feb 2013 | A1 |
20130109932 | Saadat et al. | May 2013 | A1 |
20130236035 | Abolfathi | Sep 2013 | A1 |
20130236043 | Abolfathi et al. | Sep 2013 | A1 |
20130306230 | Abolfathi et al. | Nov 2013 | A1 |
20130324043 | Kassayan | Dec 2013 | A1 |
20140081091 | Abolfathi et al. | Mar 2014 | A1 |
20140169592 | Menzel et al. | Jun 2014 | A1 |
20140177879 | Abolfathi et al. | Jun 2014 | A1 |
20140177880 | Kassayan et al. | Jun 2014 | A1 |
20140270268 | Abolfathi et al. | Sep 2014 | A1 |
20140275733 | Abolfathi | Sep 2014 | A1 |
20140296618 | Abolfathi | Oct 2014 | A1 |
20140321667 | Abolfathi | Oct 2014 | A1 |
20140321674 | Abolfathi | Oct 2014 | A1 |
20140349597 | Abolfathi et al. | Nov 2014 | A1 |
20150358723 | Abolfathi et al. | Dec 2015 | A1 |
20160134980 | Abolfathi | May 2016 | A1 |
20160217804 | Kassayan et al. | Jul 2016 | A1 |
20160323679 | Abolfathi | Nov 2016 | A1 |
20170171675 | Abolfathi et al. | Jun 2017 | A1 |
20170265011 | Abolfathi | Sep 2017 | A1 |
20170311100 | Menzel et al. | Oct 2017 | A1 |
20170347210 | Abolfathi | Nov 2017 | A1 |
20180176701 | Abolfathi | Jun 2018 | A1 |
20180176703 | Abolfathi et al. | Jun 2018 | A1 |
20190035417 | Kassayan et al. | Jan 2019 | A1 |
20190158967 | Menzel et al. | May 2019 | A1 |
Number | Date | Country |
---|---|---|
1425264 | Jun 2003 | CN |
101919261 | Dec 2010 | CN |
30 30 132 | Mar 1982 | DE |
102005012975 | Aug 2006 | DE |
102007053985 | May 2009 | DE |
102009015145 | Oct 2010 | DE |
0106846 | May 1984 | EP |
0715838 | Jun 1996 | EP |
0824889 | Feb 1998 | EP |
1559370 | Aug 2005 | EP |
1753919 | May 2007 | EP |
1841284 | Oct 2007 | EP |
2091129 | Aug 2009 | EP |
1066299 | Apr 1967 | GB |
2318872 | May 1998 | GB |
2467053 | Jul 2010 | GB |
52-022403 | Feb 1977 | JP |
53-006097 | Jan 1978 | JP |
58-026490 | Mar 1981 | JP |
58-502178 | Dec 1983 | JP |
62-159099 | Oct 1987 | JP |
07-210176 | Aug 1995 | JP |
10-126893 | May 1998 | JP |
07-213538 | Aug 1998 | JP |
2000-175280 | Jun 2000 | JP |
2003-070752 | Mar 2003 | JP |
2003-310561 | Nov 2003 | JP |
2004-000719 | Jan 2004 | JP |
2004-167120 | Jun 2004 | JP |
2004-205839 | Jul 2004 | JP |
2005-224599 | Aug 2005 | JP |
2005-278765 | Oct 2005 | JP |
2006-181257 | Jul 2006 | JP |
2006-217088 | Aug 2006 | JP |
2007028248 | Feb 2007 | JP |
2007028610 | Feb 2007 | JP |
2007044284 | Feb 2007 | JP |
2007049599 | Feb 2007 | JP |
2007049658 | Feb 2007 | JP |
2007-079386 | Mar 2007 | JP |
5170405 | Mar 2013 | JP |
2013-103900 | May 2013 | JP |
200610422 | Mar 2006 | TW |
WO 1983002047 | Jun 1983 | WO |
WO 1991002678 | Mar 1991 | WO |
WO 1995008398 | Mar 1995 | WO |
WO 1995019678 | Jul 1995 | WO |
WO 1996000051 | Jan 1996 | WO |
WO 1996021335 | Jul 1996 | WO |
WO 1996041498 | Dec 1996 | WO |
WO 1999031933 | Jun 1999 | WO |
WO 2000056120 | Sep 2000 | WO |
WO 2001072084 | Sep 2001 | WO |
WO 2001095666 | Dec 2001 | WO |
WO 2002009622 | Feb 2002 | WO |
WO 2002024126 | Mar 2002 | WO |
WO 2002071798 | Sep 2002 | WO |
WO 2003001845 | Jan 2003 | WO |
WO 2004045242 | May 2004 | WO |
WO 2004093493 | Oct 2004 | WO |
WO 2004105650 | Dec 2004 | WO |
WO 2005000391 | Jan 2005 | WO |
WO 2005023129 | Mar 2005 | WO |
WO 2005037153 | Apr 2005 | WO |
WO 2005039433 | May 2005 | WO |
WO 2005053533 | Jun 2005 | WO |
WO 2006044161 | Apr 2006 | WO |
WO 2006055884 | May 2006 | WO |
WO 2006088410 | Aug 2006 | WO |
WO 2006130909 | Dec 2006 | WO |
WO 2007043055 | Apr 2007 | WO |
WO 2007052251 | May 2007 | WO |
WO 2007059185 | May 2007 | WO |
WO 2007140367 | Dec 2007 | WO |
WO 2007140368 | Dec 2007 | WO |
WO 2007140373 | Dec 2007 | WO |
WO 2007143453 | Dec 2007 | WO |
WO 2008024794 | Feb 2008 | WO |
WO 2008030725 | Mar 2008 | WO |
WO 2009014812 | Jan 2009 | WO |
WO 2009025917 | Feb 2009 | WO |
WO 2009045598 | Apr 2009 | WO |
WO 2009066296 | May 2009 | WO |
WO 2009073852 | Jun 2009 | WO |
WO 2009076528 | Jun 2009 | WO |
WO 2009102889 | Aug 2009 | WO |
WO 2009111404 | Sep 2009 | WO |
WO 2009111566 | Sep 2009 | WO |
WO 2009131755 | Oct 2009 | WO |
WO 2009131756 | Oct 2009 | WO |
WO 2009135107 | Nov 2009 | WO |
WO 2009137520 | Nov 2009 | WO |
WO 2009151790 | Dec 2009 | WO |
WO 2010005913 | Jan 2010 | WO |
WO 2010009018 | Jan 2010 | WO |
WO 2010045497 | Apr 2010 | WO |
WO 2010085455 | Jul 2010 | WO |
WO 2010090998 | Aug 2010 | WO |
WO 2010132399 | Nov 2010 | WO |
WO 2011008623 | Jan 2011 | WO |
WO 2011041078 | Apr 2011 | WO |
WO 2011150394 | Dec 2011 | WO |
WO 2012018400 | Feb 2012 | WO |
Entry |
---|
“Special Forces Smart Noise Cancellation Ear Buds with Built-In GPS,” http://www.gizmag.com/special-forces-smart-noise-cancellation-ear-buds-with-built-in-gps/9428/, 2 pages, 2008. |
Altmann, et al. Foresighting the new technology waves—Expert Group. In: State of the Art Reviews and Related Papers—Center on Nanotechnology and Society. 2004 Conference. Published Jun. 14, 2004, p. 1-291, Available at http://www.nano-and-society.org/NELSI/documents/ECreviewsandpapers061404.pdf. Accessed Jan. 11, 2009. |
Berard, G., “Hearing Equals Behavior” [summary], 1993, http://www.bixby.org/faq/tinnitus/treatment.html. |
Bozkaya, D. et al., “Mechanics of the Tapered Interference Fit in Dental implants.” published Oct. 2002 [online], retrieved Oct. 14, 2010, http://www1.coe.neu.edu/˜smuftu/Papers/paper-interference-fit-elsevier-2.pdf. |
Broyhill, D., “Battlefield Medical Information System—Telemedicine,” A research paper presented to the U.S. Army Command and General Staff College in partial fulfillment of the requirement for A462 Combat Health Support Seminar, 12 pages, 2003. |
Dental Cements—Premarket Notification, U.S. Department of Health and Human Services Food and Drug Administration Center for Devices and Radiological Health, pp. 1-10, Aug. 18, 1998. |
Henry, et al. “Comparison of Custom Sounds for Achieving Tinnitus Relief,” J Am Acad Audiol,15:585-598, 2004. |
Jastreboff, Pawel, J., “Phantom auditory perception (tinnitus): mechanisms of generation and perception,” Neuroscience Research, 221-254, 1990, Elsevier Scientific Publishers Ireland, Ltd. |
Robb, “Tinnitus Device Directory Part I.” Tinnitus Today, p. 22, Jun. 2003. |
Song, S. et al., “A 0.2-mW 2-Mb/s Digital Transceiver Based on Wideband Signaling for Human Body Communications,” IEEE J Solid-State Cir, 42(9), 2021-2033, Sep. 2007. |
Stuart, A., et al., “Investigations of the Impact of Altered Auditory Feedback In-The-Ear Devices on the Speech of People Who Stutter: Initial Fitting and 4-Month Follow-Up,” Int J Lang Commun Disord, 39(1), Jan. 2004, [abstract only]. |
Wen, Y. et al, “Online Prediction of Battery Lifetime for Embedded and Mobile Devices,” Special Issue on Embedded Systems: Springer-Verlag Heidelberg Lecture Notes in Computer Science, V3164/2004, 15 pages, Dec. 2004. |
U.S. Appl. No. 11/754,823, filed May 29, 2007. |
U.S. Appl. No. 12/333,259, filed Dec. 11, 2008. |
U.S. Appl. No. 13/551,158, filed Jul. 17, 2012. |
U.S. Appl. No. 14/056,821, filed Oct. 17, 2013. |
U.S. Appl. No. 14/828,372, filed Aug. 17, 2015. |
U.S. Appl. No. 15/438,403, filed Feb. 21, 2017. |
U.S. Appl. No. 11/754,833, filed May 29, 2007. |
U.S. Appl. No. 12/702,629, filed Feb. 9, 2010. |
U.S. Appl. No. 11/672,239, filed Feb. 7, 2007. |
U.S. Appl. No. 12/848,758, filed Aug. 2, 2010. |
U.S. Appl. No. 11/672,250, filed Feb. 7, 2007. |
U.S. Appl. No. 12/840,213, filed Jul. 20, 2010. |
U.S. Appl. No. 14/261,759, filed Apr. 25, 2014. |
U.S. Appl. No. 11/672,264, filed Feb. 7, 2007. |
U.S. Appl. No. 13/011,762, filed Jan. 21, 2011. |
U.S. Appl. No. 11/672,271, filed Feb. 7, 2007. |
U.S. Appl. No. 12/862,933, filed Aug. 25, 2010. |
U.S. Appl. No. 13/526,923, filed Jun. 19, 2012. |
U.S. Appl. No. 14/936,548, filed Nov. 9, 2015. |
U.S. Appl. No. 11/741,648, filed Apr. 27, 2007. |
U.S. Appl. No. 12/333,272, filed Dec. 11, 2008. |
U.S. Appl. No. 12/646,789, filed Dec. 23, 2009. |
U.S. Appl. No. 12/779,396, filed May 13, 2010. |
U.S. Appl. No. 13/615,067, filed Sep. 13, 2012. |
U.S. Appl. No. 14/176,617, filed Feb. 10, 2014. |
U.S. Appl. No. 15/644,538, filed Jul. 7, 2017. |
Holgers, et al., “Sound stimulation via bone conduction for tinnitus relief: a pilot study,” International Journal of Audiology, 41(5), pp. 293-300, Jul. 31, 2002. |
Number | Date | Country | |
---|---|---|---|
20170311102 A1 | Oct 2017 | US |
Number | Date | Country | |
---|---|---|---|
60809244 | May 2006 | US | |
60820223 | Jul 2006 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14261759 | Apr 2014 | US |
Child | 15646892 | US | |
Parent | 12840213 | Jul 2010 | US |
Child | 14261759 | US | |
Parent | 11672250 | Feb 2007 | US |
Child | 12840213 | US |