The present invention relates to methods and apparatus for transmitting vibrations through teeth or bone structures in and/or around a mouth.
The human ear can generally be classified into three regions: the outer ear, the
middle ear, and the inner ear. The outer ear generally comprises the external auricle and the ear canal, which is a tubular pathway through which sound reaches the middle ear. The outer ear is separated from the middle ear by the tympanic membrane (eardrum). The middle ear generally comprises three small bones, known as the ossicles, which form a mechanical conductor from the tympanic membrane to the inner ear. Finally, the inner ear includes the cochlea, which is a fluid-filled structure that contains a large number of delicate sensory hair cells that are connected to the auditory nerve.
The action of speaking uses the lungs, vocal cords, reverberation in the bones of the skull, and facial muscles to generate the acoustic signal that is released out of the mouth and nose. The speaker hears this sound in two ways. The first, called “air conduction hearing,” is initiated by the vibration of the eardrum, which in turn transmits the signal to the middle ear (ossicles) and then to the inner ear (cochlea), generating signals in the auditory nerve that the brain finally decodes as sound. The second way of hearing, “bone conduction hearing,” occurs when the sound vibrations are transmitted directly from the jaw/skull to the inner ear, thus bypassing the outer and middle ears. As a consequence of this bone conduction hearing effect, we are able to hear our own voice even when we plug our ear canals completely, because the action of speaking sets up vibration in the bones of the body, especially the skull. Although the perceived quality of sound generated by bone conduction is not on par with sound from air conduction, the bone conducted signals carry information that is more than adequate to reproduce spoken information.
As noted in U.S. Patent Publication No. 2004/0202344, there are several microphones available in the market that use bone conduction and are worn externally, making indirect contact with bone at places such as the scalp, ear canal, mastoid bone (behind the ear), throat, cheek bone, and temples. They all have to account for the loss of information due to the presence of skin between the bone and the sensor. For example, the Temco voiceducer mounts in the ear and on the scalp, whereas the Radioear Bone Conduction Headset mounts on the cheek and jaw bone. Similarly, throat-mounted bone conduction microphones have been developed. A microphone mounting for a person's throat includes a plate with an opening that is shaped and arranged so that it holds a microphone secured in said opening, with the microphone contacting the person's throat and using bone conduction. Bone conduction microphones worn in the ear canal pick up the vibration signals from the external ear canal. The microphones mounted on the scalp, jaw, and cheek bones pick up the vibration of the skull at the respective places. Although the above-referred devices have been successfully marketed, there are many drawbacks. First, since skin is present between the sensor and the bones, the signal is attenuated and may be contaminated by noise signals. To overcome this limitation, many such devices require some form of pressure to be applied to the sensor to create good contact between the bone and the sensor. This pressure results in discomfort for the wearer of the microphone. Furthermore, such devices can lead to ear infection (in the case of ear microphones) and headache (in the case of scalp and jaw bone microphones) for some users.
There are several intra-oral bone conduction microphones that have been reported. In one known case, the microphone is made of a magnetostrictive material that is held between the upper and lower jaw, with the user applying a compressive force on the sensor. The teeth vibration is picked up by the sensor and converted to an electrical signal. The whole sensor forms part of a scuba diver's mouthpiece.
U.S. Patent Publication No. 2004/0202344 discloses a tooth microphone apparatus worn in a human mouth that includes a sound transducer element in contact with at least one tooth in the mouth. The transducer produces an electrical signal in response to speech, and the electrical signal from the sound transducer is transmitted to an external apparatus. The sound transducer can be a MEMS accelerometer, and the MEMS accelerometer can be coupled to a signal conditioning circuit for signal conditioning. The signal conditioning circuit can be further coupled to a transmitter. The transmitter can be an RF transmitter of any type, an optical transmitter, or any other type of transmitter, such as a Bluetooth device or a device that transmits into a Wi-Fi network.
In a first aspect, systems and methods for transmitting an audio signal through a bone of a user include receiving an audio signal from a first microphone positioned at the entrance to or in a first ear canal, and vibrating a first transducer to audibly transmit the audio signal through the bone.
In a second aspect, a hearing device includes a first microphone positioned at the entrance to or in a first ear canal, and a first transducer coupled to the first microphone, the first transducer vibrating in accordance with signals from the first microphone to audibly transmit an audio signal through the bone.
In another aspect, a bone conduction hearing aid device includes dual, externally located microphones that are placed at the entrance to or in the ear canals and an oral appliance containing dual transducers in communication with each other.
In yet another aspect, a bone conduction hearing aid device includes dual externally located microphones that are placed at the entrance to or in the ear canals and an oral appliance containing dual transducers in communication with each other. The device allows the user to enjoy the most natural sound input due to the location of the microphones, which takes advantage of the pinna for optimal sound localization (and directionality) when the sounds are transmitted to the cochlea using a straight signal and a “phase-shifted” signal to convey directionality to the patient.
In yet another aspect, a bone conduction hearing aid device includes dual externally located microphones that are placed at the entrance to or in the ear canals; the microphones are coupled to circuitry such as a signal processor, a power supply, a transmitter, and an antenna positioned in independent housings located behind, on, or within the fold of each of the ears (the pinna). The acoustic signals received by the microphones are amplified and/or processed by the signal processor, and the processed signal is wirelessly coupled to an oral appliance containing one or dual transducers which are electronically coupled within the oral appliance.
Implementations of the above aspects may include one or more of the following. Circuitry coupled to the microphone, such as a signal processor, a power supply, a transmitter, and an antenna, can be positioned in a housing. The circuitry can be located in the housing either behind an ear or within one or more folds of a pinna. A second microphone can be positioned in or at the entrance of a second ear canal. The microphones receive sound signals from the first and second ears and are wirelessly coupled with, and vibrate, the first and second transducers, respectively. Since sound is directional in nature, the sound sensed by the microphone at the first ear may be higher in level and arrive first in time at the first microphone. Natural head shadowing and the time of flight of sound spanning the distance between the first microphone at the first ear and the second microphone at the second ear may cause the sound signal received at the second microphone to be lower in volume and delayed by a few milliseconds compared to the sound sensed by the first microphone. In the case of a dual-transducer oral appliance, the first transducer receives a high sound level from the circuitry associated with the first microphone, and the second transducer receives a lower and slightly delayed sound level from the circuitry associated with the second microphone; this results in an amplitude difference and a phase-shifted signal at the second transducer. The first transducer receives a high sound level and the second transducer receives a low, phase-shifted sound, wherein the high and phase-shifted low sounds add in the cochleae to provide the user with a perception of directionality.
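The level difference and delay between the two ear positions can be sketched in a few lines. In this illustrative model (the specification gives no numeric values; the 0.6 ms delay and 6 dB attenuation here are assumptions), the far-ear channel is derived from the near-ear channel by delaying and attenuating it:

```python
import numpy as np

def shadowed_channel(signal, fs, delay_ms=0.6, attenuation_db=6.0):
    """Model the far-ear microphone signal as a delayed, attenuated copy
    of the near-ear signal, mimicking head shadowing and time of flight.
    delay_ms and attenuation_db are illustrative assumptions."""
    delay_samples = int(round(delay_ms * 1e-3 * fs))
    gain = 10.0 ** (-attenuation_db / 20.0)  # dB -> linear attenuation
    delayed = np.concatenate([np.zeros(delay_samples), signal]) * gain
    return delayed[: len(signal)]  # keep the original length

fs = 16000
t = np.arange(fs) / fs
near = np.sin(2 * np.pi * 440 * t)   # near-ear microphone signal
far = shadowed_channel(near, fs)     # lower in level, slightly delayed
```

A dual-transducer appliance driven by `near` and `far` would reproduce the amplitude difference and phase shift described above at the two cochleae.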
The device can include a circuit coupled to the first microphone to filter the audio signal into at least a first frequency range and a second frequency range, wherein the first transducer transmits the first frequency range through the bone of a user; a second microphone positioned at the entrance to or in a second ear canal; a circuit coupled to the second microphone to adjust the audio signal with the second frequency range; and a second transducer to transmit the second frequency range through the bone of the user. The circuit coupled to the second microphone may include an additional phase-shifting circuit to increase or decrease the audio signal level difference and/or the magnitude of the time delay (phase shift) of the second audio signal with respect to the first audio signal, enhancing the perception of directionality beyond that provided by the natural attenuation and time delay caused by head shadowing and the physical separation of the microphones.
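The specification does not prescribe a filter design for separating the two frequency ranges. As a minimal sketch, a first-order complementary crossover (with an assumed, illustrative crossover frequency of 1 kHz) could split one microphone channel into the low and high bands routed to the two transducers:

```python
import numpy as np

def split_bands(audio, fs, crossover_hz=1000.0):
    """Split a microphone signal into the low and high frequency ranges
    routed to the first and second transducers. A first-order
    complementary crossover is used purely for illustration; the
    specification does not prescribe a particular filter design."""
    rc = 1.0 / (2.0 * np.pi * crossover_hz)
    alpha = (1.0 / fs) / (rc + 1.0 / fs)
    low = np.empty_like(audio)
    acc = 0.0
    for n, x in enumerate(audio):
        acc += alpha * (x - acc)   # one-pole low-pass state update
        low[n] = acc
    return low, audio - low        # high band is the exact complement

fs = 16000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 6000 * t)
low_band, high_band = split_bands(audio, fs)
```

Because the high band is computed as the complement of the low band, the two transducer signals sum back to the original channel, which keeps the split lossless.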
An electronic and transducer device may be attached, adhered, or otherwise embedded into or upon a removable dental or oral appliance to form a hearing aid assembly or attached directly to the tooth or upper or lower jaw bone. Such a removable oral appliance may be a custom-made device fabricated from a thermal forming process utilizing a replicate model of a dental structure obtained by conventional dental impression methods. The electronic and transducer assembly may receive incoming sounds either directly or through a receiver to process and amplify the signals and transmit the processed sounds via a vibrating transducer element coupled to a tooth or other bone structure, such as the maxillary, mandibular, or palatine bone structure.
The assembly for transmitting vibrations via at least one tooth may generally comprise, in one variation, a housing having a shape which is conformable to at least a portion of the at least one tooth, and an actuatable transducer disposed within or upon the housing and in vibratory communication with a surface of the at least one tooth. Moreover, the transducer itself may be a separate assembly from the electronics and may be positioned along another surface of the tooth.
In other variations utilizing multiple components, generally a first component may be attached to the tooth or teeth using permanent or semi-permanent adhesives while a second removable component may be attached, adhered, or otherwise affixed to the first component. Examples of adhesives for attaching the first component to the tooth or teeth may include cements and epoxies intended to be applied and/or removed by a healthcare provider. Examples of typical dental cements include, but are not limited to, zinc oxide eugenol, zinc phosphate, zinc silico-phosphate, zinc-polyacrylate, zinc-polycarboxylate, glass ionomer, resin-based, silicate-based cements, etc.
The first component can contain any, all, or none of the mechanisms and/or electronics (e.g., actuators, processors, receivers, etc.) while the second component, which can be attached to the first component, can also contain any combination of the mechanisms and/or electronics, such as the battery. These two components may be temporarily coupled utilizing a variety of mechanisms, e.g., electromagnetic, mechanical attachment, chemical attachment, or a combination of any or all of these coupling mechanisms.
In one example, an electronics and/or transducer assembly may define a channel or groove along a surface for engaging a corresponding dental anchor or bracket which may comprise a light-curable acrylate-based composite material adhered directly to the tooth surface or a metallic bracket (e.g., stainless steel, Nickel-Titanium, Nickel, ceramics, composites, etc.) attached either directly to the tooth or integrated as part of an oral appliance. The dental anchor may be configured in a shape which corresponds to a shape of channel or groove such that the two may be interfitted in a mating engagement. In this manner, the transducer may vibrate directly against the dental anchor which may then transmit these signals directly into the tooth. Sealing the electronics and/or transducer assembly may facilitate the manufacturing of such devices by utilizing a single size for the electronics encasement which may mount onto a custom-fit retainer or bracket.
In yet another variation, a bracket may be ferromagnetic or electromagnetic and removably coupled via magnetic attraction to the housing which may also contain a complementary magnetic component for coupling to the magnetic component. The magnetic portion of the bracket may be confined or the entire bracket may be magnetic. One or more alignment members or arms defined along the bracket may facilitate the alignment of the bracket with the housing by aligning with an alignment step.
Alternative brackets may be configured into a cylindrical configuration sufficiently sized to fit comfortably within the user's mouth. For instance, suitable dimensions for such a bracket may range from 5 to 10 mm in diameter and 10 to 15 mm in length. Alternatively, the bracket may be variously shaped, e.g., ovoid, cubicle, etc. An electronics and/or transducer assembly having an outer surface configured with screw threading may be screwed into the bracket by rotating the assembly into the bracket to achieve a secure attachment for vibrational coupling.
Other variations utilizing a bracket may define a receiving channel into which the electronics and/or transducer assembly may be positioned and secured via a retaining tab. Yet other variations may utilize a protruding stop member for securing the two components to one another or other mechanical mechanisms for coupling.
Aside from mechanical coupling mechanisms, chemical attachment may also be utilized. The electronics and/or transducer assembly may be adhered to the bracket via a non-permanent adhesive, e.g., eugenol and non-eugenol cements. Examples of eugenol temporary cements include, but are not limited to, zinc oxide eugenol commercially available from Temrex (Freeport, N.Y.) and TempoCem® available from Zenith Dental (Englewood, N.J.). Other examples of non-eugenol temporary cements include, but are not limited to, commercially available cements such as PROVISCELL™ (Septodont, Inc., Ontario, Canada) as well as NOMIX™ (Centrix, Inc., Shelton, Conn.).
Advantages of the system may include one or more of the following. The system allows the user to enjoy the most natural sound input due to the location of the microphones, which takes advantage of the pinna for optimal sound localization (and directionality) when the sounds are transmitted to the cochlea using a straight signal and a “phase-shifted” signal to convey directionality to the patient. An additional advantage is conveyed by the physical separation of the microphones: a first microphone at the first ear and a second microphone at the second ear sense sound level and phase differences with respect to the directional source of the sound, and the difference in these signals is conditioned and transmitted to dual bone conduction transducers, which deliver these differences through bone conduction to the two cochleae of the appliance wearer. High quality sound input is captured by placing the microphones within or at the entrance of the ear canal, which allows the patient to use the sound reflectivity of the pinna as well as improved sound directionality due to the microphone placement. The arrangement avoids the need to separate the microphone and speaker, as required in air conduction hearing aids to reduce the chance of feedback, and allows placement of the microphone to take advantage of the sound reflectivity of the pinna. The system also allows for better sound directionality because the two bone conduction transducers are in electrical contact with each other. With the signals processed before being sent to the transducers, and with the transducers able to communicate with each other, the system provides the best sound localization possible by ensuring that the sound level and phase shift sensed by the two separate microphones are preserved in the delivery of sound via the bone conduction transducers contained within the oral appliance.
The system also provides a compact, comfortable, economical, and practical way of exploiting tooth and bone vibration to configure a wireless intra-oral microphone.
Another aspect of the invention that is advantageous to the wearer is the housing for the microphone, which locates and temporarily fixates the microphone within the ear canal. The housing contains at least one, and possibly multiple, openings to enable sound passage from the outside through the housing to the tympanic membrane. These openings allow passage of at least low frequency sounds, and possibly high frequency sounds, so that the wearer can perceive adequately loud sounds that are within their unassisted auditory range, even when those sounds are not amplified by the complete system. In addition, when a wearer of this device speaks, bone conduction carries sound from the mouth to the inner and middle ears, vibrating the tympanic membrane. If the ear canal were completely occluded by the housing containing the microphone, the wearer would perceive the sound of their own voice as louder than normal, an effect known as occlusion. The openings in the housing allow the sound radiating from the tympanic membrane to pass through the housing unimpeded, reducing the occlusion effect. Because the amplified transducer of this hearing system is located in an oral appliance, and not in the ear canal as is typical of certain classes of acoustic hearing aids, the openings in this housing do not interfere with the delivery of amplified sounds, and the feedback that occurs between a speaker located in the same ear canal as a microphone in an acoustic hearing aid is commensurately reduced.
Each transmitter 5 transmits information to a receiver 8 that activates a transducer 9 that is powered by a battery 10. Each side of the head can have one set of receiver 8, transducer 9 and battery 10. This embodiment provides a bone conduction hearing aid device with dual externally located microphones that are placed at the entrance to or in the ear canals and an oral appliance containing dual transducers in communication with each other. The device will allow the user to enjoy the most natural sound input due to the location of the microphone which takes advantage of the pinna for optimal sound localization (and directionality).
In another embodiment, the microphones 7 receive sound signals from both sides of the head and process those signals to send a signal to the transducer on the side of the head where the sound is perceived by the microphone 7 to be at a higher sound level. A phase-shifted signal is sent to the transducer 9 on the opposite side of the head. These sounds then “add” in the cochlea where the sound is louder and “cancel” in the opposite cochlea, providing the user with the perception of the directionality of the sound.
In yet another embodiment, the microphone 7 at the first ear receives sound signals from the first side of the head and processes those signals to send a signal to the transducer 9 on that same, or first, side of the oral appliance. A second microphone 7 at the second ear receives a sound signal that is lower in amplitude and delayed with respect to the sound sensed by the first microphone due to head shadowing and the physical separation of the microphones 7, and sends a corresponding signal to the second transducer 9 on the second side of the oral appliance. The sound signals from the transducers 9 are perceived by each cochlea on each side of the head as being different in amplitude and phase, which results in the perception of directionality by the user.
As best shown in
The microphone 7 shown schematically in
Due to head shadowing and the physical separation of the microphones, the signal will naturally differ in level and phase as it arrives at the two different microphones. The system takes advantage of this effect. Further, in one embodiment, a signal processing circuit can be used to amplify these differences to enhance the perception of directionality.
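The difference-amplifying step just described can be sketched as applying an extra attenuation and an extra sub-millisecond delay to the far-ear channel before it drives the second transducer. The 3 dB and 0.3 ms amounts below are illustrative assumptions, not values from the specification:

```python
import numpy as np

def enhance_directionality(near, far, fs, extra_db=3.0, extra_delay_ms=0.3):
    """Exaggerate the natural interaural differences: the far-ear channel
    receives an additional attenuation step and an additional short delay
    before driving the second transducer. Both amounts are illustrative."""
    gain = 10.0 ** (-extra_db / 20.0)               # extra level difference
    shift = int(round(extra_delay_ms * 1e-3 * fs))  # extra delay in samples
    enhanced = np.concatenate([np.zeros(shift), far]) * gain
    return near, enhanced[: len(far)]               # keep channel lengths equal

fs = 16000
t = np.arange(fs) / fs
near = np.sin(2 * np.pi * 500 * t)   # louder, earlier near-ear signal
far = 0.7 * near                     # naturally shadowed far-ear signal
near_out, far_out = enhance_directionality(near, far, fs)
```

The enhanced pair widens the level and phase gap the cochleae perceive, strengthening the directionality cue beyond what head shadowing alone provides.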
The brain sums the different perceptions at the two cochleae. In other words, one cochlea receives a high sound, and the other cochlea receives a lower sound slightly delayed compared to the first signal. The system preserves this interaural level difference and phase shift: it delivers the first signal to the first cochlea through the transducer nearest that cochlea, and delivers the second signal to the second cochlea through the transducer nearest it. The brain sums the information, allowing the user to perceive, for example, that the left side received a higher signal first compared to the right side, which the brain interprets as a directionality cue.
With respect to microphone 30, a variety of microphone systems may be utilized. For instance, microphone 30 may be a digital, analog, piezoelectric, and/or directional type microphone. Such various types of microphones may be interchangeably configured for use with the assembly, if so desired.
Power supply 42 may be connected to each of the components, such as processor 32 and transducer 40, to provide power thereto. The control or other sound-generated signals received by antenna 34 may be in any wireless form utilizing, e.g., radio frequency, ultrasound, microwave, Bluetooth®, among others, for transmission to assembly 16. The external remote control 36 may be utilized such that a user may manipulate it to adjust various acoustic parameters of the electronics and/or transducer assembly 16, such as acoustic focusing, volume control, filtration, muting, frequency optimization, sound adjustments, and tone adjustments, for example.
The signals transmitted may be received by electronics and/or transducer assembly 16 via a receiver, which may be connected to an internal processor for additional processing of the received signals. The received signals may be communicated to transducer 40, which may vibrate correspondingly against a surface of the tooth to conduct the vibratory signals through the tooth and bone and subsequently to the middle ear to facilitate hearing of the user. Transducer 40 may be configured as any number of different vibratory mechanisms. For instance, in one variation, transducer 40 may be an electromagnetically actuated transducer. In other variations, transducer 40 may be in the form of a piezoelectric crystal having a range of vibratory frequencies, e.g., between 250 and 20,000 Hz.
Although power supply 42 may be a simple battery, replaceable or permanent, other variations may include a power supply 42 which is charged by inductance via an external charger. Additionally, power supply 42 may alternatively be charged via direct coupling 48 to an alternating current (AC) or direct current (DC) source. Other variations may include a power supply 42 which is charged via a mechanical mechanism 46, such as an internal pendulum or slidable electrical inductance charger as known in the art, which is actuated via, e.g., motions of the jaw and/or movement for translating the mechanical motion into stored electrical energy for charging power supply 42.
In one variation, with assembly 14 positioned upon the teeth, as shown in
The transmitter assembly 22, as described in further detail below, may contain a microphone assembly as well as a transmitter assembly and may be configured in any number of shapes and forms worn by the user, such as a watch, necklace, lapel, phone, belt-mounted device, etc.
In such a variation, as illustrated schematically in
In another variation, a hearing aid assembly may be embedded into or configured as a custom made dental implant 54 (e.g., a permanent crown) that may be secured onto an implant post 50 previously implanted into the bone 52, e.g., jaw bone, of a patient, as shown in
In yet another variation, the electronics and transducer assembly 16 may be bonded or otherwise adhered directly to the surface of one or more teeth 12 rather than embedded in or attached to a separate housing, as shown in
In yet other variations, vibrations may be transmitted directly into the underlying bone or tissue structures rather than transmitting directly through the tooth or teeth of the user. An oral appliance can be positioned upon the user's tooth, in this example upon a molar located along the upper row of teeth. The electronics and/or transducer assembly can be located along the buccal surface of the tooth. Rather than utilizing a transducer in contact with the tooth surface, a conduction transmission member, such as a rigid or solid metallic member, may be coupled to the transducer in assembly and extend from oral appliance to a post or screw which is implanted directly into the underlying bone, such as the maxillary bone. As the distal end of transmission member is coupled directly to post or screw, the vibrations generated by the transducer may be transmitted through transmission member and directly into a post or screw, which in turn transmits the vibrations directly into and through the bone for transmission to the user's inner ear.
The above system allows the patient to take advantage of the highest quality sound input by placing the microphone(s) within or at the entrance of the ear canal, which allows the patient to use the sound reflectivity of the pinna as well as improved sound directionality due to the microphone placement. Most other hearing aid devices require a separation of the microphone and speaker in order to reduce the chance of feedback. As such, most hearing aid devices (specifically, open-fit BTEs) place the microphone at the top of the ear and behind it, which does not take advantage of the sound reflectivity of the pinna. The system also allows for better sound directionality because the two bone conduction transducers are in electrical contact with each other. With the signals processed before being sent to the transducers, and with the transducers able to communicate with each other, the best sound localization is possible with this device.
Further examples of these algorithms are shown and described in detail in U.S. patent application Ser. Nos. 11/672,239; 11/672,250; 11/672,264; and 11/672,271 all filed Feb. 7, 2007 and each of which is incorporated herein by reference in its entirety.
As one of average skill in the art will appreciate, the communication devices described above may be implemented using one or more integrated circuits. For example, a host device may be implemented on one integrated circuit, the baseband processing module may be implemented on a second integrated circuit, and the remaining components of the radio, less the antennas, may be implemented on a third integrated circuit. As an alternate example, the radio may be implemented on a single integrated circuit. As yet another example, the processing module of the host device and the baseband processing module may be a common processing device implemented on a single integrated circuit.
“Computer readable media” can be any available media that can be accessed by client/server devices. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by client/server devices. Communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
All references including patent applications and publications cited herein are incorporated herein by reference in their entirety and for all purposes to the same extent as if each individual publication or patent or patent application was specifically and individually indicated to be incorporated by reference in its entirety for all purposes. Many modifications and variations of this invention can be made without departing from its spirit and scope, as will be apparent to those skilled in the art.
The specific embodiments described herein are offered by way of example only. The applications of the devices and methods discussed above are not limited to the treatment of hearing loss but may include any number of further treatment applications. Moreover, such devices and methods may be applied to other treatment sites within the body. Modification of the above-described assemblies and methods for carrying out the invention, combinations between different variations as practicable, and variations of aspects of the invention that are obvious to those of skill in the art are intended to be within the scope of the claims.
7331349 | Brady et al. | Feb 2008 | B2 |
7333624 | Husung | Feb 2008 | B2 |
7361216 | Kangas et al. | Apr 2008 | B2 |
7409070 | Pitulia | Aug 2008 | B2 |
7486798 | Anjanappa et al. | Feb 2009 | B2 |
7520851 | Davis et al. | Apr 2009 | B2 |
7522738 | Miller, III | Apr 2009 | B2 |
7522740 | Julstrom et al. | Apr 2009 | B2 |
20010003788 | Ball et al. | Jun 2001 | A1 |
20010051776 | Lenhardt | Dec 2001 | A1 |
20020026091 | Leysieffer | Feb 2002 | A1 |
20020071581 | Leysieffer et al. | Jun 2002 | A1 |
20020077831 | Numa | Jun 2002 | A1 |
20020122563 | Schumaier | Sep 2002 | A1 |
20020173697 | Lenhardt | Nov 2002 | A1 |
20030059078 | Downs et al. | Mar 2003 | A1 |
20030091200 | Pompei | May 2003 | A1 |
20030212319 | Magill | Nov 2003 | A1 |
20040057591 | Beck et al. | Mar 2004 | A1 |
20040131200 | Davis | Jul 2004 | A1 |
20040141624 | Davis et al. | Jul 2004 | A1 |
20040202339 | O'Brien, Jr. et al. | Oct 2004 | A1 |
20040202344 | Anjanappa et al. | Oct 2004 | A1 |
20040243481 | Bradbury et al. | Dec 2004 | A1 |
20040247143 | Lantrua et al. | Dec 2004 | A1 |
20050037312 | Uchida | Feb 2005 | A1 |
20050067816 | Buckman | Mar 2005 | A1 |
20050070782 | Brodkin | Mar 2005 | A1 |
20050129257 | Tamura | Jun 2005 | A1 |
20050196008 | Anjanappa et al. | Sep 2005 | A1 |
20050241646 | Sotos et al. | Nov 2005 | A1 |
20060008106 | Harper | Jan 2006 | A1 |
20060025648 | Lupin et al. | Feb 2006 | A1 |
20060064037 | Shalon et al. | Mar 2006 | A1 |
20060167335 | Park et al. | Jul 2006 | A1 |
20060270467 | Song et al. | Nov 2006 | A1 |
20060275739 | Ray | Dec 2006 | A1 |
20070010704 | Pitulia | Jan 2007 | A1 |
20070036370 | Granovetter et al. | Feb 2007 | A1 |
20070041595 | Carazo et al. | Feb 2007 | A1 |
20070142072 | Lassally | Jun 2007 | A1 |
20070230713 | Davis | Oct 2007 | A1 |
20070242835 | Davis | Oct 2007 | A1 |
20070265533 | Tran | Nov 2007 | A1 |
20070276270 | Tran | Nov 2007 | A1 |
20070280491 | Abolfathi | Dec 2007 | A1 |
20070280492 | Abolfathi | Dec 2007 | A1 |
20070280493 | Abolfathi | Dec 2007 | A1 |
20070280495 | Abolfathi | Dec 2007 | A1 |
20070286440 | Abolfathi et al. | Dec 2007 | A1 |
20070291972 | Abolfathi et al. | Dec 2007 | A1 |
20080019542 | Menzel et al. | Jan 2008 | A1 |
20080019557 | Bevirt et al. | Jan 2008 | A1 |
20080021327 | El-Bialy et al. | Jan 2008 | A1 |
20080064993 | Abolfathi et al. | Mar 2008 | A1 |
20080070181 | Abolfathi et al. | Mar 2008 | A1 |
20080304677 | Abolfathi et al. | Dec 2008 | A1 |
20090028352 | Petroff | Jan 2009 | A1 |
20090088598 | Abolfathi | Apr 2009 | A1 |
20090097684 | Abolfathi et al. | Apr 2009 | A1 |
20090097685 | Menzel et al. | Apr 2009 | A1 |
20090099408 | Abolfathi et al. | Apr 2009 | A1 |
20090105523 | Kassayan et al. | Apr 2009 | A1 |
20090147976 | Abolfathi | Jun 2009 | A1 |
20090149722 | Abolfathi et al. | Jun 2009 | A1 |
20090180652 | Davis et al. | Jul 2009 | A1 |
Number | Date | Country |
---|---|---|
0715838 | Jun 1996 | EP |
0741940 | Nov 1996 | EP |
0824889 | Feb 1998 | EP |
1299052 | Feb 2002 | EP |
1633284 | Dec 2004 | EP |
1691686 | Aug 2006 | EP |
1718255 | Nov 2006 | EP |
1783919 | May 2007 | EP |
2007028248 | Feb 2007 | JP |
2007028610 | Feb 2007 | JP |
2007044284 | Feb 2007 | JP |
2007049599 | Feb 2007 | JP |
2007049658 | Feb 2007 | JP |
WO 8302047 | Jun 1983 | WO |
WO 9102678 | Mar 1991 | WO |
WO 9519678 | Jul 1995 | WO |
WO 9621335 | Jul 1996 | WO |
WO 0209622 | Feb 2002 | WO |
WO 2004045242 | May 2004 | WO |
WO 2004105650 | Dec 2004 | WO |
WO 2005000391 | Jan 2005 | WO |
WO 2005037153 | Apr 2005 | WO |
WO 2005053533 | Jun 2005 | WO |
WO 2006088410 | Aug 2006 | WO |
WO 2006130909 | Dec 2006 | WO |
WO 2007043055 | Apr 2007 | WO |
WO 2007052251 | May 2007 | WO |
WO 2007059185 | May 2007 | WO |
WO 2007140367 | Dec 2007 | WO |
WO 2007140368 | Dec 2007 | WO |
WO 2007140373 | Dec 2007 | WO |
WO 2007143453 | Dec 2007 | WO |
WO 2008024794 | Feb 2008 | WO |
WO 2008030725 | Mar 2008 | WO |
WO 2009014812 | Jan 2009 | WO |
WO 2009025917 | Feb 2009 | WO |
WO 2009066296 | May 2009 | WO |
Entry |
---|
“Special Forces Smart Noise Cancellation Ear Buds with Built-In GPS,” http://www.gizmag.com/special-forces-smart-noise-cancellation-ear-buds-with-built-in-gps/9428/, 2 pages, 2008. |
Altmann, et al., "Foresighting the New Technology Waves," Expert Group, in: State of the Art Reviews and Related Papers, Center on Nanotechnology and Society, 2004 Conference, published Jun. 14, 2004, pp. 1-291. Available at http://www.nano-and-society.org. |
Berard, G., “Hearing Equals Behavior” [summary], 1993, http://www.bixby.org/faq/tinnitus/treatment.html. |
Broyhill, D., “Battlefield Medical Information System—Telemedicine,” A research paper presented to the U.S. Army Command and General Staff College in partial Fulfillment of the requirement for A462 Combat Health Support Seminar, 12 pages, 2003. |
Dental Cements—Premarket Notification, U.S. Department of Health and Human Services Food and Drug Administration Center for Devices and Radiological Health, pp. 1-10, Aug. 18, 1998. |
Henry, et al., "Comparison of Custom Sounds for Achieving Tinnitus Relief," J Am Acad Audiol, 15:585-598, 2004. |
Jastreboff, Pawel J., "Phantom Auditory Perception (Tinnitus): Mechanisms of Generation and Perception," Neuroscience Research, 221-254, 1990, Elsevier Scientific Publishers Ireland, Ltd. |
Robb, “Tinnitus Device Directory Part I,” Tinnitus Today, p. 22, Jun. 2003. |
Song, S. et al., “A 0.2-mW 2-Mb/s Digital Transceiver Based on Wideband Signaling for Human Body Communications,” IEEE J Solid-State Cir, 42(9), 2021-2033, Sep. 2007. |
Stuart, A., et al., “Investigations of the Impact of Altered Auditory Feedback In-The-Ear Devices on the Speech of People Who Stutter: Initial Fitting and 4-Month Follow-Up,” Int J Lang Commun Disord, 39(1), Jan. 2004, [abstract only]. |
U.S. Appl. No. 11/672,264, filed Feb. 7, 2007 in the name of Abolfathi, Non-Final Rejection mailed Apr. 28, 2009. |
U.S. Appl. No. 11/672,264, filed Feb. 7, 2007 in the name of Abolfathi, Non-Final Rejection mailed Aug. 6, 2008. |
U.S. Appl. No. 11/672,239, filed Feb. 7, 2007 in the name of Abolfathi, Non-final Office Action mailed Jun. 18, 2009. |
U.S. Appl. No. 11/672,239, filed Feb. 7, 2007 in the name of Abolfathi, Non-final Office Action mailed Nov. 13, 2008. |
U.S. Appl. No. 11/672,250, filed Feb. 7, 2007 in the name of Abolfathi, Non-final Office Action mailed Apr. 21, 2009. |
U.S. Appl. No. 11/672,250, filed Feb. 7, 2007 in the name of Abolfathi, Non-final Office Action mailed Aug. 8, 2008. |
U.S. Appl. No. 11/672,271, filed Feb. 7, 2007 in the name of Abolfathi, Final Office Action mailed May 18, 2009. |
U.S. Appl. No. 11/672,271, filed Feb. 7, 2007 in the name of Abolfathi, Non-final Office Action mailed Aug. 20, 2008. |
U.S. Appl. No. 11/741,648, filed Apr. 27, 2007 in the name of Menzel et al., Final Office Action mailed May 18, 2009. |
U.S. Appl. No. 11/741,648, filed Apr. 27, 2007 in the name of Menzel et al., Non-final Office Action mailed Sep. 4, 2008. |
U.S. Appl. No. 11/754,823, filed May 29, 2007 in the name of Abolfathi et al., Final Office Action mailed May 12, 2009. |
U.S. Appl. No. 11/754,823, filed May 29, 2007 in the name of Abolfathi et al., Non-final Office Action mailed Aug. 14, 2008. |
U.S. Appl. No. 11/754,833, filed May 29, 2007 in the name of Abolfathi et al., Final Office Action mailed May 14, 2009. |
U.S. Appl. No. 11/754,833, filed May 29, 2007 in the name of Abolfathi et al., Non-final Office Action mailed Aug. 6, 2008. |
U.S. Appl. No. 11/866,345, filed May 29, 2007 in the name of Abolfathi et al., Final Office Action mailed Apr. 15, 2009. |
U.S. Appl. No. 11/866,345, filed May 29, 2007 in the name of Abolfathi et al., Non-final Office Action mailed Mar. 19, 2008. |
Wen, Y. et al, “Online Prediction of Battery Lifetime for Embedded and Mobile Devices,” Special Issue on Embedded Systems: Springer-Verlag Heidelberg Lecture Notes in Computer Science, V3164/2004, 15 pages, Dec. 2004. |
Number | Date | Country | |
---|---|---|---|
20090052698 A1 | Feb 2009 | US |