Hearing loss affects over 31 million people in the United States. As a chronic condition, the incidence of hearing impairment rivals that of heart disease and, like heart disease, the incidence of hearing impairment increases sharply with age.
Hearing loss can also be classified in terms of being conductive, sensorineural, or a combination of both. Conductive hearing impairment typically results from diseases or disorders that limit the transmission of sound through the middle ear. Most conductive impairments can be treated medically or surgically. Purely conductive hearing loss represents a relatively small portion of the total hearing impaired population.
Sensorineural hearing losses occur mostly in the inner ear and account for the vast majority of hearing impairment (estimated at 90-95% of the total hearing impaired population). Sensorineural hearing impairment (sometimes called “nerve loss”) is largely caused by damage to the sensory hair cells inside the cochlea. Sensorineural hearing impairment occurs naturally as a result of aging or prolonged exposure to loud music and noise. This type of hearing loss cannot be reversed nor can it be medically or surgically treated; however, the use of properly fitted hearing devices can improve the individual's quality of life.
Conventional hearing devices are the most common devices used to treat mild to severe sensorineural hearing impairment. These are acoustic devices that amplify sound to the tympanic membrane. These devices are individually customized to the patient's physical and acoustical characteristics over four to six separate visits to an audiologist or hearing instrument specialist. Such devices generally comprise a microphone, amplifier, battery, and speaker. Recently, hearing device manufacturers have increased the sophistication of sound processing, often using digital technology, to provide features such as programmability and multi-band compression. Although these devices have been miniaturized and are less obtrusive, they are still visible and have major acoustic limitations.
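Dynamic-range compression, mentioned above, reduces the gain applied to loud sounds so that amplified quiet sounds remain comfortable. A minimal single-band sketch follows; the threshold and ratio values are illustrative assumptions, not parameters of any particular device (real multi-band devices first split the signal into frequency bands and compress each band separately):

```python
import numpy as np

def compress_band(x, threshold_db=-30.0, ratio=3.0):
    """Static dynamic-range compression for one band: samples whose
    level exceeds the threshold receive progressively less gain.
    Threshold and ratio are illustrative, not from the source text."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)      # instantaneous level
    over = np.maximum(level_db - threshold_db, 0)  # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)          # gain reduction above threshold
    return x * 10 ** (gain_db / 20)
```

A sample at 0 dBFS (amplitude 1.0) is 30 dB over the threshold and is attenuated by 20 dB to 0.1, while a sample well below the threshold passes through unchanged.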
In a parallel trend, the advent of music players and cell phones has driven the demand for small and portable headphones that can reproduce sound with high fidelity so that the user can listen to the sound without disturbing people who are nearby. These headphones typically use small speakers that can render the sound. With cell phones, there is a need to capture the user's voice with a microphone and relay the voice over the cellular network so that the parties can engage in a conversation even though they are separated by great distances. Microphones are transducers just like speakers. They change sound waves into electrical signals, while speakers change electrical signals into sound waves. When a headphone is equipped with a small microphone, it is called a headset.
A headset may be used in conjunction with a telephone device for several reasons. With a headset, the user is relieved of the need to hold the phone and thus keeps his or her hands free to perform other functions. Headsets also position the earphone and microphone portions of a telephone close to the user's head to provide clearer reception and transmission of audio signals with less interference from background noise. Headsets may be used with telephones, computers, cellular telephones, and other devices.
The wireless industry has launched several after-market products to free the user from holding the phone while making phone calls. For example, various headsets are manufactured with an earpiece connected to a microphone, and most of these headsets or hands-free kits are compatible with any phone brand or model. One such headset can be plugged into the phone and comprises a microphone connected via wires to the headset so that the microphone, when in position, can appropriately capture the voice of the user. Other headsets are built with a Bluetooth chip, or other wireless means, so that the voice conversation can be wirelessly diverted from the phone to the earpiece of the headset. The Bluetooth radio chip acts as a connector between the headset and a Bluetooth chip of the cell phone.
The ability to correctly identify voiced and unvoiced speech is critical to many speech applications including speech recognition, speaker verification, noise suppression, and many others. In a typical acoustic application, speech from a human speaker is captured and transmitted to a receiver in a different location. In the speaker's environment there may exist one or more noise sources that pollute the speech signal, or the signal of interest, with unwanted acoustic noise. This makes it difficult or impossible for the receiver, whether human or machine, to understand the user's speech.
United States Patent Application Publication No. 20080019557 describes a headset which includes a metal or metallic housing to which various accessory components can be attached. These components can include an ear loop, a necklace for holding the headset while it is not worn on the ear, an external mount, and other components. The components include a magnet which facilitates mounting to the headset. The components are not restricted to a particular attachment point, which enhances the ability of the user to adjust the geometry for better fit.
With conventional headsets, people nearby can notice when the user is wearing the headset. U.S. Pat. No. 7,076,077 discloses a bone conduction headset which is inconspicuous in appearance during wearing. The bone conduction headset includes a band running around a back part of the user's head; a fastening portion formed in each of opposite end portions of the band; a bone conduction speaker provided with a knob which is engaged with the fastening portion; and, an ear engagement portion, which runs over the bone conduction speaker during wearing of the headset to reach and engage with the user's ear. An extension of either the fastening portion in the band or a casing of the bone conduction speaker may be formed into the ear engagement portion.
U.S. Pat. No. 7,246,058 discloses a system for detecting voiced and unvoiced speech in acoustic signals having varying levels of background noise. The systems receive acoustic signals at two microphones, and generate difference parameters between the acoustic signals received at each of the two microphones. The difference parameters are representative of the relative difference in signal gain between portions of the received acoustic signals. The systems identify information of the acoustic signals as unvoiced speech when the difference parameters exceed a first threshold, and identify information of the acoustic signals as voiced speech when the difference parameters exceed a second threshold. Further, embodiments of the systems include non-acoustic sensors that receive physiological information to aid in identifying voiced speech.
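The dual-threshold scheme described in the '058 patent can be sketched as follows. The frame length, the frame-energy-ratio statistic, and the threshold values here are illustrative assumptions for a sketch, not parameters disclosed in the patent:

```python
import numpy as np

def classify_frames(mic1, mic2, frame_len=160, t_unvoiced=1.2, t_voiced=2.5):
    """Label each frame as noise, unvoiced, or voiced using the relative
    gain difference between two microphone signals.

    mic1 is assumed closer to the speaker than mic2, so speech raises
    the energy ratio e1/e2. Thresholds are illustrative only.
    """
    labels = []
    n_frames = min(len(mic1), len(mic2)) // frame_len
    for i in range(n_frames):
        s = slice(i * frame_len, (i + 1) * frame_len)
        e1 = np.sum(mic1[s] ** 2) + 1e-12   # frame energy at mic 1
        e2 = np.sum(mic2[s] ** 2) + 1e-12   # frame energy at mic 2
        diff = e1 / e2                      # gain-difference parameter
        if diff > t_voiced:                 # second (higher) threshold
            labels.append("voiced")
        elif diff > t_unvoiced:             # first (lower) threshold
            labels.append("unvoiced")
        else:
            labels.append("noise")
    return labels
```

A loud frame at the near microphone classifies as voiced, while a frame with equal energy at both microphones classifies as noise.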
In one aspect, an intra-oral hearing appliance includes an actuator to provide bone conduction sound transmission; a transceiver coupled to the actuator to cause the actuator to generate sound; and a first chamber containing the actuator and the transceiver, said first chamber adapted to be coupled to one or more teeth.
Implementations of the above aspect may include one or more of the following.
An actuator driver or amplifier can be connected to the actuator. A second chamber can be used to house a power source to drive the actuator and the transceiver. A bridge can connect the first and second chambers. The bridge can have electrical cabling or an antenna embedded in the bridge. The bridge can be a wired frame, a polymeric material, or a combination of polymeric material and a wired frame. A mass can be connected to the actuator. The mass can be a weight such as tungsten or a suitable module with a mass such as a battery or an electronics module. The actuator can be a piezoelectric transducer. The configuration of the actuator can be a rectangular or cantilever beam bender configuration. One or more ceramic or alumina stands can connect the actuator to other components. A compressible material can surround the actuator. A non-compressible material can cover the actuator and the compressible material. A rechargeable power source can power the transceiver and the actuator. An inductive charger can recharge the power source. The chamber can be a custom oral device. A pre-built housing can be provided for the mass. The pre-built housing can have an arm and one or more bottom contacts, the arm and the contacts adapted to bias a mass against a tooth. A microphone can be connected to the transceiver, the microphone being positioned intraorally or extraorally. A data storage device can be embedded in the appliance. A first microphone can pick up body conduction sound, a second microphone can pick up ambient sound, and a noise canceller can be used to subtract ambient sound from the body conduction sound. The actuator transmits sound through a tooth, a maxillary bone, a mandibular bone, or a palatine bone. A linking unit can provide sound to the transceiver, the linking unit adapted to communicate with an external sound source. The transceiver can be a wired transceiver or a wireless transceiver.
Advantages of preferred embodiments may include one or more of the following. The bone conduction headset is easy to wear and take off, and is inconspicuous in appearance while worn. The device can be operated without nearby people noticing that the user is wearing the headset. Compared to headphones, the device avoids covering the ears of the listener. This is important if (a) the listener needs to keep the ears unobstructed to hear other sounds in the environment, or (b) the listener needs to plug the ears to prevent hearing damage from loud sounds in the environment. The system is a multi-purpose communication platform that is rugged, wireless, and secure. The device can be used in extreme environments such as very dusty, dirty, or wet environments. The system provides quality, hands-free, yet inconspicuous communication capability for field personnel. The system addresses hearing loss that can adversely affect a person's quality of life and psychological well-being. Mitigating such hearing impairment reduces stress levels, increases self-confidence, increases sociability, and increases effectiveness in the workplace.
An exemplary removable wireless dental hearing appliance is shown in
The power chamber 401 provides energy for electronics in an actuation chamber 407. Mechanically, the chambers 401 and 407 are connected by a bridge 405. Inside the bridge 405 are cables that supply power to the actuation chamber 407. Other devices such as antenna wires can be embedded in the bridge 405. The chambers 401, 407 and the bridge 405 are made from human compatible elastomeric materials commonly used in dental retainers, among others.
Turning now to the actuation chamber 407, an actuator 408 is positioned near the patient's teeth. The actuator 408 is driven by an electronic driver 409. A wireless transceiver 450 provides sound information to the electronic driver 409 so that the driver 409 can actuate the actuator 408 to cause sound to be generated and conducted to the patient's ear through bone conduction in one embodiment. For example, the electronic and actuator assembly may receive incoming sounds either directly or through a receiver to process and amplify the signals and transmit the processed sounds via a vibrating transducer element coupled to a tooth or other bone structure, such as the maxillary, mandibular, or palatine bone structure. Other sound transmission techniques in addition to bone conduction can be used and are contemplated by the inventors.
Correspondingly, in the actuation chamber 407, the actuator 408 in turn is made up of a piezoelectric actuator 408B that moves a mass 408A. The driver 409 and wireless transceiver circuitry are provided on a circuit board 420A.
In one embodiment where the unit is used as a hearing aid, a microphone can provide sound input that is amplified by the amplifier or driver 438. In another embodiment, the system can receive signals from a linking unit such as a Bluetooth transceiver that allows the appliance to play sound generated by a portable appliance or a sound source such as a music player, a hands-free communication device or a cellular telephone, for example. Alternatively, the sound source can be a computer, a one-way communication device, a two-way communication device, or a wireless hands-free communication device.
In one embodiment, the actuator 454 is a piezoelectric transducer made with PZT. PZT-based compounds (lead zirconate titanate, Pb[ZrxTi1−x]O3, 0&lt;x&lt;1) are ceramic perovskite materials. Being piezoelectric, the material develops a voltage difference across two of its faces when compressed (useful for sensor applications) and physically changes shape when an external electric field is applied (useful for actuators and the like). The material is also ferroelectric, which means it has a spontaneous electric polarization (electric dipole) that can be reversed in the presence of an electric field. The material features an extremely large dielectric constant at the morphotropic phase boundary (MPB) near x=0.52. These properties make PZT-based compounds among the most prominent and useful electroceramics.
The actuator 454 is also connected to a mass 458 through a mass arm 456. In one embodiment, the actuator 454 uses PZT in a rectangular beam bender configuration. The mass 458 can be a tungsten material or any suitable weight such as the battery or control electronics, among others. The support arms or links 452A-452B as well as the mass arm 456 are preferably made from ceramic or alumina, which enables acoustic or sound energy to be efficiently transmitted from the actuator 454.
As shown in the two insets, the actuator 454 can be commanded to contract or expand, resulting in movements with upward arch shapes or downward arch shapes. The actuator 454 and its associated components are encapsulated in a compressible material 460 such as silicone to allow actuator movement. In one embodiment, the top of the appliance is provided with an acrylic encapsulated protection layer 462 providing a fixed platform that directs energy generated by the actuator 454 toward the teeth while the compressible material 460 provides room for movement by the actuator 454.
The appliance can be a custom oral device. The sound source unit can contain a short-range transceiver that is protocol compatible with the linking unit. For example, the sound source can have a Bluetooth transceiver that communicates with the Bluetooth transceiver linking unit in the appliance. The appliance can then receive the data transmitted over the Bluetooth protocol and drive a bone conduction transducer to render or transmit sound to the user.
The appliance can have a microphone embedded therein. The microphone can be an intraoral microphone or an extraoral microphone. For cellular telephones and other telephones, a second microphone can be used to cancel environmental noise and transmit a user's voice to the telephone. A noise canceller receives signals from the microphones and cancels ambient noise to provide a clean sound capture.
The appliance can have another microphone to pick up ambient sound. The microphone can be an intraoral microphone or an extraoral microphone. In one embodiment, the microphone cancels environmental noise and transmits the user's voice to the remote station. This embodiment provides the ability to cancel environmental noises while transmitting the subject's own voice to the actuator 432. Because the microphone is in a fixed location (compared to ordinary wireless communication devices) and very close to the user's own voice, the system can handle the environmental noise reduction that is important when working in high-noise areas.
The system couples microphones and voicing activity sensors to a signal processor. The processor executes a detection algorithm, and a denoising code to minimize background acoustic noise. Two microphones can be used, with one microphone being the bone conduction microphone and one which is considered the “signal” microphone. The second microphone captures air noise or ambient noise, whose signal is filtered and subtracted from the signal in the first microphone. In one embodiment, the system runs an array algorithm for speech detection that uses the difference in frequency content between two microphones to calculate a relationship between the signals of the two microphones. As known in the art and discussed in U.S. Pat. No. 7,246,058, the content of which is incorporated by reference, this embodiment can cancel noise without requiring a specific orientation of the array with respect to the signal.
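One common way to realize the filter-and-subtract step described above is a least-mean-squares (LMS) adaptive filter; this is a standard technique sketch, not necessarily the algorithm of the referenced patent. The tap count, step size, and the assumption of sample-aligned NumPy arrays are all illustrative:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.05):
    """Adaptively filter the ambient-noise reference microphone signal
    and subtract the noise estimate from the primary (bone conduction)
    microphone signal. Tap count and step size are illustrative."""
    w = np.zeros(n_taps)                           # adaptive filter weights
    out = np.array(primary, dtype=float)
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # newest reference sample first
        y = w @ x                                  # estimate of noise leaking into mic 1
        e = primary[n] - y                         # cleaned output sample
        w += mu * e * x                            # LMS weight update
        out[n] = e
    return out
```

When the primary signal is purely leaked noise, the filter converges and the residual energy falls far below the input energy.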
In another embodiment, the appliance can be attached, adhered, or otherwise embedded into or upon a removable oral appliance or other oral device to form a medical tag containing patient identifiable information. Such an oral appliance may be a custom-made device fabricated from a thermal forming process utilizing a replicate model of a dental structure obtained by conventional dental impression methods. The electronic and transducer assembly may receive incoming sounds either directly or through a receiver to process and amplify the signals and transmit the processed sounds via a vibrating transducer element coupled to a tooth or other bone structure, such as the maxillary, mandibular, or palatine bone structure.
In yet another embodiment, microphones can be placed at each of the user's ears to provide noise cancellation, optimal sound localization and directionality. The microphones can be placed inside or outside the ears. For example, the microphones can be placed either at the opening of or directly within the user's ear canals. Each of the systems includes a battery, a signal processor, and a transmitter, all of which can be positioned in a housing that clips onto the ear and rests behind the ear between the pinna and the skull, or alternatively can be positioned in the ear's concha. The transmitter is connected to a wire/antenna that in turn is connected to the microphone. Each transmitter transmits information to a receiver that activates a transducer that is powered by a battery. Each side of the head can have one set of receiver, transducer, and battery. This embodiment provides a bone conduction hearing aid device with dual externally located microphones that are placed at the entrance to or in the ear canals and an oral appliance containing dual transducers in communication with each other. The device allows the user to enjoy the most natural sound input due to the location of the microphone, which takes advantage of the pinna for optimal sound localization (and directionality).
In another embodiment, the microphones receive sound signals from both sides of the head and process those signals to send a signal to the transducer on the side of the head where the sound is perceived by the microphone to be at a higher sound level. A phase-shifted signal is sent to the transducer on the opposite side of the head. These sounds will then “add” in the cochlea where the sound is louder and “cancel” in the opposite cochlea, providing the user with the perception of directionality of the sound.
In yet another embodiment, the microphone at the first ear receives sound signals from the first side of the head and processes those signals to send a signal to the transducer on that same or first side of the oral appliance. A second microphone at the second ear receives a sound signal that is lower in amplitude and delayed with respect to the sound sensed by the first microphone, due to head shadowing and the physical separation of the microphones, and sends a corresponding signal to the second transducer on the second side of the oral appliance. The sound signals from the transducers will be perceived by each cochlea on each side of the head as being different in amplitude and phase, which will result in the perception of directionality by the user.
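The head-shadow level difference and the arrival delay exploited in these embodiments can be approximated as follows. The spherical-head delay model, shadow gain, head width, and sample rate are illustrative assumptions for a sketch, not values from the source text:

```python
import numpy as np

def binaural_route(signal, azimuth_deg, fs=8000,
                   head_width_m=0.18, c=343.0, shadow_gain=0.6):
    """Produce near-side and far-side transducer feeds that mimic the
    interaural time difference (delay) and head shadowing (attenuation)
    for a sound arriving from the given azimuth. All constants are
    illustrative placeholders."""
    theta = np.radians(azimuth_deg)
    itd = (head_width_m / c) * abs(np.sin(theta))  # interaural time difference, s
    delay = int(round(itd * fs))                   # delay in whole samples
    near = signal.copy()                           # louder, earlier side
    far = np.concatenate([np.zeros(delay),
                          signal[:len(signal) - delay]]) * shadow_gain
    return near, far
```

For a source at 90 degrees with the defaults, the far-side feed is delayed by 4 samples and attenuated to 60 percent of the near-side amplitude.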
In one embodiment where the microphone is mounted in the user's ear canal, components such as the battery, the signal processor, and the transmitter can either be located behind the ear or within the folds of the pinna. The human auricle is an almost rudimentary, usually immobile shell that lies close to the side of the head with a thin plate of yellow fibrocartilage covered by closely adherent skin. The cartilage is molded into clearly defined hollows, ridges, and furrows that form an irregular, shallow funnel. The deepest depression, which leads directly to the external auditory canal, or acoustic meatus, is called the concha. It is partly covered by two small projections, the tonguelike tragus in front and the antitragus behind. Above the tragus a prominent ridge, the helix, arises from the floor of the concha and continues as the incurved rim of the upper portion of the auricle. An inner, concentric ridge, the antihelix, surrounds the concha and is separated from the helix by a furrow, the scapha, also called the fossa of the helix. The lobule, the fleshy lower part of the auricle, is the only area of the outer ear that contains no cartilage. The auricle also has several small rudimentary muscles, which fasten it to the skull and scalp. In most individuals these muscles do not function, although some persons can voluntarily activate them to produce limited movements. The external auditory canal is a slightly curved tube that extends inward from the floor of the concha and ends blindly at the tympanic membrane. In its outer third the wall of the canal consists of cartilage; in its inner two-thirds, of bone. The anthelix (antihelix) is a folded “Y” shaped part of the ear. The antitragus is the lower cartilaginous edge of the conchal bowl just above the fleshy lobule of the ear. The microphone is connected with the transmitter through the wire and antenna. 
The placement of the microphone inside the ear canal provides the user with the most natural sound input due to the location of the microphone which takes advantage of the pinna for optimal sound localization (and directionality) when the sounds are transmitted to the cochlea using a straight signal and “phase-shifted” signal to apply directionality to the patient. High quality sound input is captured by placing the microphones within or at the entrance of the ear canal which would allow the patient to use the sound reflectivity of the pinna as well as improved sound directionality due to the microphone placement. The arrangement avoids the need to separate the microphone and speaker to reduce the chance of feedback and allows placement of the microphone to take advantage of the sound reflectivity of the pinna. The system also allows for better sound directionality due to the two bone conduction transducers being in electrical contact with each other. With the processing of the signals prior to being sent to the transducers and the transducers able to communicate with each other, the system provides the best sound localization possible.
The appliance can include a data storage device such as a solid state memory or a flash storage device. The content of the data storage device can be encrypted for security. The linking unit can transmit encrypted data for secure transmission if desired.
The appliance may be fabricated from various polymeric or a combination of polymeric and metallic materials using any number of methods, such as computer-aided machining processes using computer numerical control (CNC) systems or three-dimensional printing processes, e.g., stereolithography apparatus (SLA), selective laser sintering (SLS), and/or other similar processes utilizing three-dimensional geometry of the patient's dentition, which may be obtained via any number of techniques. Such techniques may include use of scanned dentition using intra-oral scanners such as laser, white light, ultrasound, mechanical three-dimensional touch scanners, magnetic resonance imaging (MRI), computed tomography (CT), other optical methods, etc.
In forming the removable oral appliance, the appliance may be optionally formed such that it is molded to fit over the dentition and at least a portion of the adjacent gingival tissue to inhibit the entry of food, fluids, and other debris into the oral appliance and between the transducer assembly and tooth surface. Moreover, the greater surface area of the oral appliance may facilitate the placement and configuration of the assembly onto the appliance.
Additionally, the removable oral appliance may be optionally fabricated to have a shrinkage factor such that when placed onto the dentition, oral appliance may be configured to securely grab onto the tooth or teeth as the appliance may have a resulting size slightly smaller than the scanned tooth or teeth upon which the appliance was formed. The fitting may result in a secure interference fit between the appliance and underlying dentition.
In one variation, an extra-buccal transmitter assembly located outside the patient's mouth may be utilized to receive auditory signals for processing and transmission via a wireless signal to the electronics and/or transducer assembly positioned within the patient's mouth, which may then process and transmit the processed auditory signals via vibratory conductance to the underlying tooth and consequently to the patient's inner ear. The transmitter assembly, as described in further detail below, may contain a microphone assembly as well as a transmitter assembly and may be configured in any number of shapes and forms worn by the user, such as a watch, necklace, lapel, phone, belt-mounted device, etc.
With respect to microphone 30, a variety of microphone systems may be utilized. For instance, microphone 30 may be a digital, analog, and/or directional type microphone. Such various types of microphones may be interchangeably configured to be utilized with the assembly, if so desired.
Power supply 36 may be connected to each of the components in transmitter assembly 22 to provide power thereto. The transmitter signals 24 may be in any wireless form utilizing, e.g., radio frequency, ultrasound, microwave, Bluetooth® (BLUETOOTH SIG, INC., Bellevue, Wash.), etc. for transmission to assembly 16. Assembly 22 may also optionally include one or more input controls 28 that a user may manipulate to adjust various acoustic parameters of the electronics and/or transducer assembly 16, such as acoustic focusing, volume control, filtration, muting, frequency optimization, sound adjustments, and tone adjustments, etc.
The signals transmitted 24 by transmitter 34 may be received by electronics and/or transducer assembly 16 via receiver 38, which may be connected to an internal processor for additional processing of the received signals. The received signals may be communicated to transducer 40, which may vibrate correspondingly against a surface of the tooth to conduct the vibratory signals through the tooth and bone and subsequently to the middle ear to facilitate hearing of the user. Transducer 40 may be configured as any number of different vibratory mechanisms. For instance, in one variation, transducer 40 may be an electromagnetically actuated transducer. In other variations, transducer 40 may be in the form of a piezoelectric crystal having a range of vibratory frequencies, e.g., between 250 and 4000 Hz.
Power supply 42 may also be included with assembly 16 to provide power to the receiver, transducer, and/or processor, if also included. Although power supply 42 may be a simple battery, replaceable or permanent, other variations may include a power supply 42 which is charged by inductance via an external charger. Additionally, power supply 42 may alternatively be charged via direct coupling to an alternating current (AC) or direct current (DC) source. Other variations may include a power supply 42 which is charged via a mechanical mechanism, such as an internal pendulum or slidable electrical inductance charger as known in the art, which is actuated via, e.g., motions of the jaw, translating the mechanical motion into stored electrical energy for charging power supply 42.
In another variation of assembly 16, rather than utilizing an extra-buccal transmitter, two-way communication assembly 50 may be configured as an independent assembly contained entirely within the user's mouth, as shown in
In order to transmit the vibrations corresponding to the received auditory signals efficiently and with minimal loss to the tooth or teeth, secure mechanical contact between the transducer and the tooth is ideally maintained to ensure efficient vibratory communication. Accordingly, any number of mechanisms may be utilized to maintain this vibratory communication.
Aside from an adhesive film, another alternative may utilize an expandable or swellable member to ensure a secure mechanical contact of the transducer against the tooth. As shown in
Another variation is shown in
In yet another variation, the electronics may be contained as a separate assembly 90 which is encapsulated within housing 62 and the transducer 92 may be maintained separately from assembly 90 but also within housing 62. As shown in
In other variations as shown in
In yet another variation shown in
Another variation for a mechanical mechanism is illustrated in
In yet another variation, the electronics 150 and the transducer 152 may be separated from one another such that electronics 150 remain disposed within housing 62 but transducer 152, connected via wire 154, is located beneath dental oral appliance 60 along an occlusal surface of the tooth, as shown in
In the variation of
In yet another variation, an electronics and/or transducer assembly 170 may define a channel or groove 172 along a surface for engaging a corresponding dental anchor 174, as shown in
In yet another variation,
Similarly, as shown in
In yet other variations, vibrations may be transmitted directly into the underlying bone or tissue structures rather than transmitting directly through the tooth or teeth of the user. As shown in
In yet another variation, rather than utilizing a post or screw drilled into the underlying bone itself, a transducer may be attached, coupled, or otherwise adhered directly to the gingival tissue surface adjacent to the teeth. As shown in
For any of the variations described above, they may be utilized as a single device or in combination with any other variation herein, as practicable, to achieve the desired hearing level in the user. Moreover, more than one oral appliance device and electronics and/or transducer assemblies may be utilized at any one time. For example,
Moreover, each of the different transducers 270, 272, 274, 276 can also be programmed to vibrate in a manner which indicates the directionality of sound received by the microphone worn by the user. For example, different transducers positioned at different locations within the user's mouth can vibrate in a specified manner by providing sound or vibrational cues to inform the user which direction a sound was detected relative to an orientation of the user. For instance, a first transducer located, e.g., on a user's left tooth, can be programmed to vibrate for sound detected originating from the user's left side. Similarly, a second transducer located, e.g., on a user's right tooth, can be programmed to vibrate for sound detected originating from the user's right side. Other variations and cues may be utilized as these examples are intended to be illustrative of potential variations.
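The left/right cue routing described above can be sketched as a simple mapping from detected azimuth to per-transducer pulse counts. The transducer names, the 15-degree frontal dead zone, and the pulse patterns are hypothetical choices for illustration:

```python
def cue_transducers(azimuth_deg):
    """Map a detected sound direction (negative = user's left,
    positive = user's right) to vibration cues per transducer.
    Names, thresholds, and pulse counts are hypothetical."""
    cues = {"left_tooth": 0, "right_tooth": 0}
    if azimuth_deg < -15:            # sound from the user's left
        cues["left_tooth"] = 2       # e.g., two short vibration pulses
    elif azimuth_deg > 15:           # sound from the user's right
        cues["right_tooth"] = 2
    else:                            # roughly ahead: pulse both once
        cues["left_tooth"] = cues["right_tooth"] = 1
    return cues
```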
In variations where the one or more microphones are positioned in intra-buccal locations, the microphone may be integrated directly into the electronics and/or transducer assembly, as described above. However, in an additional variation, the microphone unit may be positioned at a distance from the transducer assemblies to minimize feedback. In one example, similar to a variation shown above, microphone unit 282 may be separated from electronics and/or transducer assembly 280, as shown in
Although the variation illustrates the microphone unit 282 placed adjacent to the gingival tissue 268, unit 282 may be positioned upon another tooth or another location within the mouth. For instance,
In yet another variation for separating the microphone from the transducer assembly,
The applications of the devices and methods discussed above are not limited to the treatment of hearing loss but may include any number of further treatment applications. Moreover, such devices and methods may be applied to other treatment sites within the body. Modification of the above-described assemblies and methods for carrying out the invention, combinations between different variations as practicable, and variations of aspects of the invention that are obvious to those of skill in the art are intended to be within the scope of the claims.
This application is a continuation of U.S. application Ser. No. 12/333,279 filed Dec. 11, 2008 which is a continuation of U.S. application Ser. No. 12/042,186 filed Mar. 4, 2008, now abandoned, both of which are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
2045404 | Nicholides | Jun 1936 | A |
2161169 | Jefferis | Jun 1939 | A |
2239550 | Cubert | Apr 1941 | A |
2318872 | Madiera | May 1943 | A |
2977425 | Cole | Mar 1961 | A |
2995633 | Puharich et al. | Aug 1961 | A |
3156787 | Puharich et al. | Nov 1964 | A |
3170993 | Puharich et al. | Feb 1965 | A |
3267931 | Puharich et al. | Aug 1966 | A |
3325743 | Blum | Jun 1967 | A |
3787641 | Santori | Jan 1974 | A |
3894196 | Briskey | Jul 1975 | A |
3985977 | Beaty et al. | Oct 1976 | A |
4025732 | Traunmuller | May 1977 | A |
4150262 | Ono | Apr 1979 | A |
4498461 | Hakansson | Feb 1985 | A |
4591668 | Iwata | May 1986 | A |
4612915 | Hough et al. | Sep 1986 | A |
4642769 | Petrofsky | Feb 1987 | A |
4738268 | Kipnis | Apr 1988 | A |
4817044 | Ogren | Mar 1989 | A |
4832033 | Maher et al. | May 1989 | A |
4920984 | Furumichi et al. | May 1990 | A |
4982434 | Lenhardt et al. | Jan 1991 | A |
5012520 | Steeger | Apr 1991 | A |
5033999 | Mersky | Jul 1991 | A |
5047994 | Lenhardt et al. | Sep 1991 | A |
5060526 | Barth et al. | Oct 1991 | A |
5082007 | Adell | Jan 1992 | A |
5233987 | Fabian et al. | Aug 1993 | A |
5323468 | Bottesch | Jun 1994 | A |
5325436 | Soli et al. | Jun 1994 | A |
5372142 | Madsen et al. | Dec 1994 | A |
5402496 | Soli et al. | Mar 1995 | A |
5403262 | Gooch | Apr 1995 | A |
5447489 | Issalene et al. | Sep 1995 | A |
5455842 | Merskey et al. | Oct 1995 | A |
5460593 | Mersky et al. | Oct 1995 | A |
5546459 | Sih et al. | Aug 1996 | A |
5558618 | Maniglia | Sep 1996 | A |
5565759 | Dunstan | Oct 1996 | A |
5616027 | Jacobs et al. | Apr 1997 | A |
5624376 | Ball et al. | Apr 1997 | A |
5661813 | Shimauchi et al. | Aug 1997 | A |
5706251 | May | Jan 1998 | A |
5760692 | Block | Jun 1998 | A |
5800336 | Ball et al. | Sep 1998 | A |
5812496 | Peck | Sep 1998 | A |
5828765 | Gable | Oct 1998 | A |
5902167 | Filo et al. | May 1999 | A |
5914701 | Gersheneld et al. | Jun 1999 | A |
5961443 | Rastatter et al. | Oct 1999 | A |
5984681 | Huang | Nov 1999 | A |
6029558 | Stevens et al. | Feb 2000 | A |
6047074 | Zoels et al. | Apr 2000 | A |
6068590 | Brisken | May 2000 | A |
6072884 | Kates | Jun 2000 | A |
6072885 | Stockham, Jr. et al. | Jun 2000 | A |
6075557 | Holliman et al. | Jun 2000 | A |
6115477 | Filo et al. | Sep 2000 | A |
6118882 | Haynes | Sep 2000 | A |
6171229 | Kroll et al. | Jan 2001 | B1 |
6223018 | Fukumoto et al. | Apr 2001 | B1 |
6239705 | Glen | May 2001 | B1 |
6333269 | Naito et al. | Dec 2001 | B2 |
6371758 | Kittelsen | Apr 2002 | B1 |
6377693 | Lippa et al. | Apr 2002 | B1 |
6394969 | Lenhardt | May 2002 | B1 |
6504942 | Hong et al. | Jan 2003 | B1 |
6538558 | Sakazume et al. | Mar 2003 | B2 |
6585637 | Brillhart et al. | Jul 2003 | B2 |
6631197 | Taenzer | Oct 2003 | B1 |
6633747 | Reiss | Oct 2003 | B1 |
6682472 | Davis | Jan 2004 | B1 |
6754472 | Williams et al. | Jun 2004 | B1 |
6778674 | Panasik et al. | Aug 2004 | B1 |
6826284 | Benesty et al. | Nov 2004 | B1 |
6885753 | Bank | Apr 2005 | B2 |
6917688 | Yu et al. | Jul 2005 | B2 |
6941952 | Rush, III | Sep 2005 | B1 |
6954668 | Cuozzo | Oct 2005 | B1 |
6985599 | Asnes | Jan 2006 | B2 |
7003099 | Zhang et al. | Feb 2006 | B1 |
7033313 | Lupin et al. | Apr 2006 | B2 |
7035415 | Belt et al. | Apr 2006 | B2 |
7074222 | Westerkull | Jul 2006 | B2 |
7076077 | Atsumi et al. | Jul 2006 | B2 |
7099822 | Zangi | Aug 2006 | B2 |
7162420 | Zangi et al. | Jan 2007 | B2 |
7171003 | Venkatesh et al. | Jan 2007 | B1 |
7171008 | Elko | Jan 2007 | B2 |
7174022 | Zhang et al. | Feb 2007 | B1 |
7206423 | Feng et al. | Apr 2007 | B1 |
7246058 | Burnett | Jul 2007 | B2 |
7258533 | Tanner et al. | Aug 2007 | B2 |
7269266 | Anjanappa et al. | Sep 2007 | B2 |
7271569 | Oglesbee | Sep 2007 | B2 |
7310427 | Retchin et al. | Dec 2007 | B2 |
7329226 | Ni et al. | Feb 2008 | B1 |
7331349 | Brady et al. | Feb 2008 | B2 |
7333624 | Husung | Feb 2008 | B2 |
7361216 | Kangas et al. | Apr 2008 | B2 |
7409070 | Pitulia | Aug 2008 | B2 |
7486798 | Anjanappa et al. | Feb 2009 | B2 |
7520851 | Davis et al. | Apr 2009 | B2 |
7522738 | Miller, III | Apr 2009 | B2 |
7522740 | Julstrom et al. | Apr 2009 | B2 |
7945068 | Abolfathi et al. | May 2011 | B2 |
20010003788 | Ball et al. | Jun 2001 | A1 |
20010051776 | Lenhardt | Dec 2001 | A1 |
20020026091 | Leysieffer | Feb 2002 | A1 |
20020071581 | Leysieffer et al. | Jun 2002 | A1 |
20020077831 | Numa | Jun 2002 | A1 |
20020122563 | Schumaier | Sep 2002 | A1 |
20020173697 | Lenhardt | Nov 2002 | A1 |
20030059078 | Downs et al. | Mar 2003 | A1 |
20030091200 | Pompei | May 2003 | A1 |
20030212319 | Magill | Nov 2003 | A1 |
20040057591 | Beck et al. | Mar 2004 | A1 |
20040131200 | Davis | Jul 2004 | A1 |
20040141624 | Davis et al. | Jul 2004 | A1 |
20040202339 | O'Brien, Jr. et al. | Oct 2004 | A1 |
20040202344 | Anjanappa et al. | Oct 2004 | A1 |
20040243481 | Bradbury et al. | Dec 2004 | A1 |
20040247143 | Lantrua et al. | Dec 2004 | A1 |
20050037312 | Uchida | Feb 2005 | A1 |
20050067816 | Buckman | Mar 2005 | A1 |
20050070782 | Brodkin | Mar 2005 | A1 |
20050129257 | Tamura | Jun 2005 | A1 |
20050196008 | Anjanappa et al. | Sep 2005 | A1 |
20050241646 | Sotos et al. | Nov 2005 | A1 |
20060008106 | Harper | Jan 2006 | A1 |
20060025648 | Lupin et al. | Feb 2006 | A1 |
20060064037 | Shalon et al. | Mar 2006 | A1 |
20060167335 | Park et al. | Jul 2006 | A1 |
20060207611 | Anonsen | Sep 2006 | A1 |
20060270467 | Song et al. | Nov 2006 | A1 |
20060275739 | Ray | Dec 2006 | A1 |
20070010704 | Pitulia | Jan 2007 | A1 |
20070036370 | Granovetter et al. | Feb 2007 | A1 |
20070041595 | Carazo et al. | Feb 2007 | A1 |
20070142072 | Lassally | Jun 2007 | A1 |
20070183613 | Juneau et al. | Aug 2007 | A1 |
20070230713 | Davis | Oct 2007 | A1 |
20070242835 | Davis | Oct 2007 | A1 |
20070265533 | Tran | Nov 2007 | A1 |
20070276270 | Tran | Nov 2007 | A1 |
20070280491 | Abolfathi | Dec 2007 | A1 |
20070280492 | Abolfathi | Dec 2007 | A1 |
20070280493 | Abolfathi | Dec 2007 | A1 |
20070280495 | Abolfathi | Dec 2007 | A1 |
20070286440 | Abolfathi et al. | Dec 2007 | A1 |
20070291972 | Abolfathi et al. | Dec 2007 | A1 |
20080019542 | Menzel et al. | Jan 2008 | A1 |
20080019557 | Bevirt et al. | Jan 2008 | A1 |
20080021327 | El-Bialy et al. | Jan 2008 | A1 |
20080064993 | Abolfathi et al. | Mar 2008 | A1 |
20080070181 | Abolfathi et al. | Mar 2008 | A1 |
20080304677 | Abolfathi et al. | Dec 2008 | A1 |
20090028352 | Petroff | Jan 2009 | A1 |
20090052698 | Rader et al. | Feb 2009 | A1 |
20090088598 | Abolfathi | Apr 2009 | A1 |
20090097684 | Abolfathi et al. | Apr 2009 | A1 |
20090097685 | Menzel et al. | Apr 2009 | A1 |
20090099408 | Abolfathi et al. | Apr 2009 | A1 |
20090105523 | Kassayan et al. | Apr 2009 | A1 |
20090147976 | Abolfathi | Jun 2009 | A1 |
20090149722 | Abolfathi et al. | Jun 2009 | A1 |
20090180652 | Davis et al. | Jul 2009 | A1 |
20090220921 | Abolfathi et al. | Sep 2009 | A1 |
20090226011 | Abolfathi et al. | Sep 2009 | A1 |
20090226017 | Abolfathi et al. | Sep 2009 | A1 |
Number | Date | Country |
---|---|---|
0715838 | Jun 1996 | EP |
0741940 | Nov 1996 | EP |
0824889 | Feb 1998 | EP |
1299052 | Feb 2002 | EP |
1633284 | Dec 2004 | EP |
1691686 | Aug 2006 | EP |
1718255 | Nov 2006 | EP |
1783919 | May 2007 | EP |
2467053 | Jul 2010 | GB |
2007028248 | Feb 2007 | JP |
2007028610 | Feb 2007 | JP |
2007044284 | Feb 2007 | JP |
2007049599 | Feb 2007 | JP |
2007049658 | Feb 2007 | JP |
WO 8302047 | Jun 1983 | WO |
WO 9102678 | Mar 1991 | WO |
WO 9519678 | Jul 1995 | WO |
WO 9621335 | Jul 1996 | WO |
WO 0209622 | Feb 2002 | WO |
WO 2004045242 | May 2004 | WO |
WO 2004105650 | Dec 2004 | WO |
WO 2005000391 | Jan 2005 | WO |
WO 2005037153 | Apr 2005 | WO |
WO 2005053533 | Jun 2005 | WO |
WO 2006088410 | Aug 2006 | WO |
WO 2006130909 | Dec 2006 | WO |
WO 2007043055 | Apr 2007 | WO |
WO 2007052251 | May 2007 | WO |
WO 2007059185 | May 2007 | WO |
WO 2007140367 | Dec 2007 | WO |
WO 2007140368 | Dec 2007 | WO |
WO 2007140373 | Dec 2007 | WO |
WO 2007143453 | Dec 2007 | WO |
WO 2008024794 | Feb 2008 | WO |
WO 2008030725 | Mar 2008 | WO |
WO 2009014812 | Jan 2009 | WO |
WO 2009025917 | Feb 2009 | WO |
WO 2009066296 | May 2009 | WO |
WO 2009102889 | Aug 2009 | WO |
WO 2009111404 | Sep 2009 | WO |
WO 2009111566 | Sep 2009 | WO |
WO 2010085455 | Jul 2010 | WO |
Entry |
---|
“Special Forces Smart Noise Cancellation Ear Buds with Built-In GPS,” http://www.gizmag.com/special-forces-smart-noise-cancellation-ear-buds-with-built-in-gps/9428/, 2 pages, 2008. |
Altmann, et al. Foresighting the new technology waves—Exper Group. In: State of the Art Reviews and Related Papers—Center on Nanotechnology and Society. 2004 Conference. Published Jun. 14, 2004. p. 1-291. Available at http://www.nano-and-society.org. |
Berard, G., “Hearing Equals Behavior” [summary], 1993, http://www.bixby.org/faq/tinnitus/treatment.html. |
British Patent Application No. 1000894.4 filed Jan. 20, 2010 in the name of Abolfathi et al., Search Report mailed Mar. 26, 2010. |
Broyhill, D., “Battlefield Medical Information System—Telemedicine,” A research paper presented to the U.S. Army Command and General Staff College in partial Fulfillment of the requirement for A462 Combat Health Support Seminar, 12 pages, 2003. |
Dental Cements—Premarket Notification, U.S. Department of Health and Human Services Food and Drug Administration Center for Devices and Radiological Health, pp. 1-10, Aug. 18, 1998. |
Henry, et al. "Comparison of Custom Sounds for Achieving Tinnitus Relief," J Am Acad Audiol, 15:585-598, 2004. |
Jastreboff, Pawel, J., “Phantom auditory perception (tinnitus): mechanisms of generation and perception,” Neuroscience Research, 221-254, 1990, Elsevier Scientific Publishers Ireland, Ltd. |
PCT Patent Application No. PCT/US2009/036038 filed Mar. 4, 2009 in the name of Abolfathi et al., International Search Report and Written Opinion mailed Apr. 20, 2009. |
PCT Patent Application No. PCT/US2010/021427 filed Jan. 19, 2010 in the name of Abolfathi et al., International Search Report and Written Opinion mailed Mar. 23, 2010. |
Robb, “Tinnitus Device Directory Part I,” Tinnitus Today, p. 22, Jun. 2003. |
Song, S. et al., “A 0.2-mW 2-Mb/s Digital Transceiver Based on Wideband Signaling for Human Body Communications,” IEEE J Solid-State Cir, 42(9), 2021-2033, Sep. 2007. |
Stuart, A., et al., "Investigations of the Impact of Altered Auditory Feedback In-The-Ear Devices on the Speech of People Who Stutter: Initial Fitting and 4-Month Follow-Up," Int J Lang Commun Disord, 39(1), Jan. 2004, [abstract only]. |
Wen, Y. et al, “Online Prediction of Battery Lifetime for Embedded and Mobile Devices,” Special Issue on Embedded Systems: Springer-Verlag Heidelberg Lecture Notes in Computer Science, V3164/2004, 15 pages, Dec. 2004. |
U.S. Appl. No. 11/754,823, filed May 29, 2007 in the name of Abolfathi et al., Final Office Action mailed May 12, 2009. |
U.S. Appl. No. 11/754,823, filed May 29, 2007 in the name of Abolfathi et al., Non-final Office Action mailed Aug. 14, 2008. |
U.S. Appl. No. 11/754,833, filed May 29, 2007 in the name of Abolfathi et al., Final Office Action mailed May 14, 2009. |
U.S. Appl. No. 11/754,833, filed May 29, 2007 in the name of Abolfathi et al., Non-final Office Action mailed Aug. 6, 2008. |
U.S. Appl. No. 11/672,239, filed Feb. 7, 2007 in the name of Abolfathi, Non-final Office Action mailed Jun. 18, 2009. |
U.S. Appl. No. 11/672,239, filed Feb. 7, 2007 in the name of Abolfathi, Non-final Office Action mailed Nov. 13, 2008. |
U.S. Appl. No. 11/672,250, filed Feb. 7, 2007 in the name of Abolfathi, Non-final Office Action mailed Apr. 21, 2009. |
U.S. Appl. No. 11/672,250, filed Feb. 7, 2007 in the name of Abolfathi, Non-final Office Action mailed Aug. 8, 2008. |
U.S. Appl. No. 11/672,264, filed Feb. 7, 2007 in the name of Abolfathi, Non-Final Rejection mailed Apr. 28, 2009. |
U.S. Appl. No. 11/672,264, filed Feb. 7, 2007 in the name of Abolfathi, Non-Final Rejection mailed Aug. 6, 2008. |
U.S. Appl. No. 11/672,271, filed Feb. 7, 2007 in the name of Abolfathi, Final Office Action mailed May 18, 2009. |
U.S. Appl. No. 11/672,271, filed Feb. 7, 2007 in the name of Abolfathi, Non-final Office Action mailed Aug. 20, 2008. |
U.S. Appl. No. 11/741,648, filed Apr. 27, 2007 in the name of Menzel et al., Final Office Action mailed May 18, 2009. |
U.S. Appl. No. 11/741,648, filed Apr. 27, 2007 in the name of Menzel et al., Non-final Office Action mailed Sep. 4, 2008. |
U.S. Appl. No. 11/866,345, filed May 29, 2007 in the name of Abolfathi et al., Final Office Action mailed Apr. 15, 2009. |
U.S. Appl. No. 11/866,345, filed May 29, 2007 in the name of Abolfathi et al., Non-final Office Action mailed Mar. 19, 2008. |
U.S. Appl. No. 12/042,186, filed Mar. 4, 2008 in the name of Abolfathi et al., non-final Office Action mailed Sep. 29, 2010. |
U.S. Appl. No. 12/333,279, filed Dec. 11, 2008 in the name of Abolfathi et al., final Office Action mailed Jan. 10, 2011. |
U.S. Appl. No. 12/333,279, filed Dec. 11, 2008 in the name of Abolfathi et al., non-final Office Action mailed Sep. 27, 2010. |
U.S. Appl. No. 12/333,279, filed Dec. 11, 2008 in the name of Abolfathi et al., Notice of Allowance mailed Mar. 29, 2011. |
Number | Date | Country | |
---|---|---|---|
20110280416 A1 | Nov 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12333279 | Dec 2008 | US |
Child | 13108372 | US | |
Parent | 12042186 | Mar 2008 | US |
Child | 12333279 | US |