Multifunction system and method for integrated hearing and communication with noise cancellation and feedback management

Information

  • Patent Grant
  • Patent Number
    11,483,665
  • Date Filed
    Thursday, October 22, 2020
  • Date Issued
    Tuesday, October 25, 2022
Abstract
Systems, devices, and methods for communication include an ear canal microphone configured for placement in the ear canal to detect high frequency sound localization cues. An external microphone positioned away from the ear canal can detect low frequency sound, such that feedback can be substantially reduced. The canal microphone and the external microphone are coupled to a transducer, such that the user perceives sound from the external microphone and the canal microphone with high frequency localization cues and decreased feedback. Wireless circuitry can be configured to connect to many devices with a wireless protocol, such that the user can receive and transmit audio signals. A bone conduction sensor can detect near-end speech of the user for transmission with the wireless circuitry in a noisy environment. Noise cancellation of background sounds near the user can be provided.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention

The present invention is related to systems, devices and methods for communication.


People like to communicate with others. Hearing and speaking are forms of communication that many people use and enjoy. Many devices have been proposed that improve communication including the telephone and hearing aids.


Hearing impaired subjects need hearing aids to verbally communicate with those around them. Open canal hearing aids have proven to be successful in the marketplace because of increased comfort. Another reason why they are popular is reduced occlusion, the tunnel-like hearing effect that is problematic for most hearing aid users. Another common complaint is feedback and whistling from the hearing aid. Increasingly, hearing impaired subjects also make use of audio entertainment and communication devices. Often the use of these devices interferes with the use of hearing aids, and the devices are cumbersome to use together. Another problem is the use of entertainment and communication systems in noisy environments, which requires active noise cancellation. There is a need to integrate open canal hearing aids with audio entertainment and communication systems and still allow their use in noisy places. For improved comfort, it is desirable to use these modalities in an open ear canal configuration.


Several approaches to improved hearing employ feedback suppression and noise cancellation. Although sometimes effective, current methods and devices for feedback suppression and noise cancellation may not be effective in at least some instances. For example, when an acoustic hearing aid with a speaker positioned in the ear canal is used to amplify sound, placement of a microphone in the ear canal can result in feedback when the ear canal is open, even when feedback and noise cancellation are used.


One promising approach to improving hearing with an ear canal microphone has been to use a direct-drive transducer coupled to the middle ear, rather than an acoustic transducer, such that feedback is significantly reduced and often limited to a narrow range of frequencies. The EARLENS™ transducer as described by Perkins et al. (U.S. Pat. No. 5,259,032; US20060023908; US20070100197) and many other transducers that directly couple to the middle ear, such as described by Puria et al. (U.S. Pat. No. 6,629,922), may have significant advantages due to reduced feedback that is limited to a narrow frequency range. The EARLENS™ system may use an electromagnetic coil placed inside the ear canal to drive the middle ear, for example with the EARLENS™ transducer magnet positioned on the eardrum. A microphone can be placed inside the ear canal and integrated in a wide-bandwidth system to provide pinna-diffraction cues. The pinna diffraction cues allow the user to localize sound and thus hear better in multi-talker situations, when combined with the wide-bandwidth system. Although effective in reducing feedback, these systems may result in feedback in at least some instances, for example with an open ear canal that transmits sound to a canal microphone with high gain for the hearing impaired.


Although at least some implantable hearing aid systems may result in decreased feedback, surgical implantation can be complex and expensive and may subject the user to risk of surgical complications and pain, such that surgical implantation is not a viable option for many users.


In at least some instances, known hearing aids may not be fully integrated with telecommunications and audio systems, such that the user may use more devices than would be ideal. Also, current combinations of devices may be less than ideal, such that the user may not receive the full benefit of hearing with multiple devices. For example, known hands free wireless BLUETOOTH™ devices, such as the JAWBONE™, may not work well with hearing aid devices, as the hands free device is often placed over the ear. Also, such devices may not have sound configured for optimal hearing by the user as with hearing aid devices. Similarly, a user of a hearing aid device may have difficulty using direct audio from a device such as a headphone jack for listening to a movie on a flight, an iPod or the like. In many instances, the result is that the combination of known hearing devices with communication and audio systems can be less than ideal.


The known telecommunication and audio systems may have at least some shortcomings, even when used alone, that may make at least some of these systems less than ideal in at least some instances. For example, many known noise cancellation systems use headphones that can be bulky. Further, at least some of the known wireless headsets for telecommunications can be somewhat obtrusive and visible, such that it would be helpful if the visibility and size could be minimized.


In light of the above, it would be desirable to provide an improved system for communication that overcomes at least some of the above shortcomings. It would be particularly desirable if such a communication system could be used without surgery to provide: high frequency localization cues, open ear canal hearing with minimal feedback, hearing aid functionality with amplified sensation level, wide bandwidth sound with frequencies from about 0.1 to 10 kHz, noise cancellation, reduced feedback, and communication with a mobile device or audio entertainment system.


2. Description of the Background Art

The following U.S. patents and publications may be relevant to the present application: U.S. Pat. Nos. 5,117,461; 5,259,032; 5,402,496; 5,425,104; 5,740,258; 5,940,519; 6,068,589; 6,222,927; 6,629,922; 6,445,799; 6,668,062; 6,801,629; 6,888,949; 6,978,159; 7,043,037; 7,203,331; 2002/20172350; 2006/0023908; 2006/0251278; 2007/0100197; Carlile and Schonstein (2006) “Frequency bandwidth and multi-talker environments,” Audio Engineering Society Convention, Paris, France 118:353-63; Killion, M. C. and Christensen, L. (1998) “The case of the missing dots: AI and SNR loss,” Hear Jour 51(5):32-47; Moore and Tan (2003) “Perceived naturalness of spectrally distorted speech and music,” J Acoust Soc Am 114(1):408-19; Puria (2003) “Measurements of human middle ear forward and reverse acoustics: implications for otoacoustic emissions,” J Acoust Soc Am 113(5):2773-89.


BRIEF SUMMARY OF THE INVENTION

Embodiments of the present invention provide improved systems, devices and methods for communication. Although specific reference is made to communication with a hearing aid, the systems, methods and devices, as described herein, can be used in many applications where sound is used for communication. At least some of the embodiments can provide, without surgery, at least one of: hearing aid functionality; an open ear canal; an ear canal microphone; wide bandwidth, for example with frequencies from about 0.1 to about 10 kHz; noise cancellation; reduced feedback; communication with a mobile device; or communication with an audio entertainment system. The ear canal microphone can be configured for placement to detect high frequency sound localization cues, for example within the ear canal or outside the ear canal within about 5 mm of the ear canal opening, so as to detect high frequency sound comprising localization cues from the pinna of the ear. The high frequency sound detected with the ear canal microphone may comprise sound frequencies above resonance frequencies of the ear canal, for example resonance frequencies from about 2 to about 3 kHz. An external microphone can be positioned away from the ear canal to detect low frequency sound at or below the resonance frequencies of the ear canal, such that feedback can be substantially reduced, even minimized or avoided. The canal microphone and the external microphone can be coupled to at least one output transducer, such that the user perceives sound from the external microphone and the canal microphone with high frequency localization cues and decreased feedback. Wireless circuitry can be configured to connect to many devices with a wireless protocol, such that the user can receive and transmit audio signals. A bone conduction sensor can detect near-end speech of the user for transmission with the wireless circuitry, for example in a noisy environment with a piezoelectric positioner configured for placement in the ear canal. Noise cancellation of background sounds near the user can improve the user's hearing of desired sounds, for example noise cancellation of background sounds detected with the external microphone.


In a first aspect, embodiments of the present invention provide a communication device for use with an ear of a user. A first input transducer is configured for placement at least one of inside an ear canal or near an opening of the ear canal. A second input transducer is configured for placement outside the ear canal. At least one output transducer is configured for placement inside the ear canal of the user. The at least one output transducer is coupled to the first input transducer and the second input transducer to transmit sound from the first input transducer and the second input transducer to the user.


In many embodiments, the first input transducer comprises at least one of a first microphone configured to detect sound from air or a first acoustic sensor configured to detect vibration from tissue. The second input transducer comprises at least one of a second microphone configured to detect sound from air or a second acoustic sensor configured to detect vibration from tissue. The first input transducer may comprise a microphone configured to detect high frequency localization cues, and the at least one output transducer is acoustically coupled to the first input transducer when the at least one output transducer is positioned in the ear canal. The second input transducer can be positioned away from the ear canal opening to minimize feedback when the first input transducer detects the high frequency localization cues.


In many embodiments, the first input transducer is configured to detect high frequency sound comprising spatial localization cues when placed inside the ear canal or near the ear canal opening and transmit the high frequency localization cues to the user. The high frequency localization cues may comprise frequencies above about 4 kHz. The first input transducer can be coupled to the at least one output transducer to transmit high frequencies above at least about 4 kHz to the user with a first gain and to transmit low frequencies below about 3 kHz with a second gain. The first gain can be greater than the second gain so as to minimize feedback from the transducer to the first input transducer. The first input transducer can be configured to detect at least one of a sound diffraction cue from a pinna of the ear of the user or a head shadow cue from a head of the user when the first input transducer is positioned at least one of inside the ear canal or near the opening of the ear canal.


In many embodiments, the first input transducer is coupled to the at least one output transducer to vibrate an eardrum of the ear in response to high frequency sound localization cues above a resonance frequency of the ear canal. The second input transducer is coupled to the at least one output transducer to vibrate the eardrum in response to sound frequencies at or below the resonance frequency of the ear canal. The resonance frequency of the ear canal may comprise frequencies within a range from about 2 to 3 kHz.


In many embodiments, the first input transducer is coupled to the at least one output transducer to vibrate the eardrum with a resonance gain for first sound frequencies corresponding to the resonance frequencies of the ear canal and a cue gain for sound localization cues comprising frequencies above the resonance frequencies of the ear canal, and wherein the cue gain is greater than the resonance gain to minimize feedback.


In many embodiments, the first input transducer is coupled to the at least one output transducer to vibrate the eardrum with a first gain for first sound frequencies corresponding to the resonance frequencies of the ear canal. The second input transducer is coupled to the at least one output transducer to vibrate the eardrum with a second gain for the sound frequencies corresponding to the resonance frequencies of the ear canal, and the first gain is less than the second gain to minimize feedback.


In many embodiments, the second input transducer is configured to detect low frequency sound without high frequency localization cues from a pinna of the ear when placed outside the ear canal to minimize feedback from the transducer. The low frequency sound may comprise frequencies below about 3 kHz.


In many embodiments, the device comprises circuitry coupled to the first input transducer, the second input transducer and the at least one output transducer, and the circuitry is coupled to the first input transducer and the at least one output transducer to transmit high frequency sound comprising frequencies above about 4 kHz from the first input transducer to the user. The circuitry can be coupled to the second input transducer and the at least one output transducer to transmit low frequency sound comprising frequencies below about 4 kHz from the second input transducer to the user. The circuitry may comprise at least one of a sound processor or an amplifier coupled to the first input transducer, the second input transducer and the at least one output transducer to transmit high frequencies from the first input transducer and low frequencies from the second input transducer to the user so as to minimize feedback.
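A minimal sketch of this frequency-split routing follows, assuming an illustrative 4 kHz crossover, a 32 kHz sample rate, and fourth-order Butterworth filters (none of these values are specified in the text); the canal-microphone path carries the high band with its localization cues and the external-microphone path carries the low band to limit feedback.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 32_000           # sample rate in Hz (assumed)
CROSSOVER_HZ = 4_000  # canal microphone carries content above this, external microphone below

b_hp, a_hp = butter(4, CROSSOVER_HZ / (FS / 2), btype="high")  # canal-microphone path
b_lp, a_lp = butter(4, CROSSOVER_HZ / (FS / 2), btype="low")   # external-microphone path

def route_to_transducer(canal_mic: np.ndarray, external_mic: np.ndarray,
                        canal_gain: float = 1.0, external_gain: float = 1.0) -> np.ndarray:
    """Combine the two microphone signals into one output-transducer drive signal."""
    high_band = canal_gain * lfilter(b_hp, a_hp, canal_mic)       # high frequency localization cues
    low_band = external_gain * lfilter(b_lp, a_lp, external_mic)  # low frequencies, reduced feedback
    return high_band + low_band

# Example with synthetic tones: a 6 kHz cue on the canal mic, 500 Hz energy on the external mic.
t = np.arange(FS) / FS
drive = route_to_transducer(np.sin(2 * np.pi * 6_000 * t), np.sin(2 * np.pi * 500 * t))
```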


In many embodiments, the at least one output transducer comprises a first transducer and a second transducer, in which the first transducer is coupled to the first input transducer to transmit high frequency sound and the second transducer is coupled to the second input transducer to transmit low frequency sound.


In many embodiments, the first input transducer is coupled to the at least one output transducer to transmit first frequencies to the user with a first gain and the second input transducer is coupled to the at least one output transducer to transmit second frequencies to the user with a second gain.


In many embodiments, the at least one output transducer comprises at least one of an acoustic speaker configured for placement inside the ear canal, a magnet supported with a support configured for placement on an eardrum of the user, an optical transducer supported with a support configured for placement on the eardrum of the user, a magnet configured for placement in a middle ear of the user, and an optical transducer configured for placement in the middle ear of the user. The at least one output transducer may comprise the magnet supported with the support configured for placement on an eardrum of the user, and the at least one output transducer may further comprise at least one coil configured for placement in the ear canal to couple to the magnet to transmit sound to the user. The at least one coil may comprise a first coil and a second coil, in which the first coil is coupled to the first input transducer and configured to transmit first frequencies from the first input transducer to the magnet, and in which the second coil is coupled to the second input transducer and configured to transmit second frequencies from the second input transducer to the magnet. The at least one output transducer may comprise the optical transducer supported with the support configured for placement on the eardrum of the user, and the optical transducer may further comprise a photodetector coupled to at least one of a coil or a piezoelectric transducer supported with the support and configured to vibrate the eardrum.


In many embodiments, the first input transducer is configured to generate a first audio signal and the second input transducer is configured to generate a second audio signal and wherein the at least one output transducer is configured to vibrate with a first gain in response to the first audio signal and a second gain in response to the second audio signal to minimize feedback.


In many embodiments, the device further comprises wireless communication circuitry configured to transmit near-end speech from the user to a far-end person when the user speaks. The wireless communication circuitry can be configured to transmit the near-end sound from at least one of the first input transducer or the second input transducer. The wireless communication circuitry can be configured to transmit the near-end sound from the second input transducer. A third input transducer can be coupled to the wireless communication circuitry, in which the third input transducer is configured to couple to tissue of the user and transmit near-end speech from the user to the far-end person in response to bone conduction vibration when the user speaks.


In many embodiments, the device further comprises a second device for use with a second contralateral ear of the user. The second device comprises a third input transducer configured for placement inside a second ear canal or near an opening of the second ear canal to detect second high frequency localization cues. A fourth input transducer is configured for placement outside the second ear canal. A second at least one output transducer is configured for placement inside the second ear canal, and the second at least one output transducer is acoustically coupled to the third input transducer when the second at least one output transducer is positioned in the second ear canal. The fourth input transducer is positioned away from the second ear canal opening to minimize feedback when the third input transducer detects the second high frequency localization cues. The combination of the first and second input transducers on an ipsilateral ear and the third and fourth input transducers on a contralateral ear can lead to improved binaural hearing.


In another aspect, embodiments of the present invention provide a communication device for use with an ear of a user. The device comprises a first at least one input transducer configured to detect sound. A second input transducer is configured to detect tissue vibration when the user speaks. Wireless communication circuitry is coupled to the second input transducer and configured to transmit near-end speech from the user to a far-end person when the user speaks. At least one output transducer is configured for placement inside an ear canal of the user, in which the at least one output transducer is coupled to the first input transducer to transmit sound from the first input transducer to the user.


In many embodiments, the first at least one input transducer comprises a microphone configured for placement at least one of inside an ear canal or near an opening of the ear canal to detect high frequency localization cues. Alternatively or in combination, the first at least one input transducer may comprise a microphone configured for placement outside the ear canal to detect low frequency speech and minimize feedback from the at least one output transducer.


In many embodiments, the second input transducer comprises at least one of an optical vibrometer or a laser vibrometer configured to generate a signal in response to vibration of the eardrum when the user speaks.


In many embodiments, the second input transducer comprises a bone conduction sensor configured to couple to a skin of the user to detect tissue vibration when the user speaks. The bone conduction sensor can be configured for placement within the ear canal.


In many embodiments, the device further comprises an elongate support configured to extend from the opening toward the eardrum to deliver energy to the at least one output transducer, and a positioner coupled to the elongate support. The positioner can be sized to fit in the ear canal and position the elongate support within the ear canal, and the positioner may comprise the bone conduction sensor. The bone conduction sensor may comprise a piezoelectric transducer configured to couple to the ear canal to detect bone vibration when the user speaks.


In many embodiments, the at least one output transducer comprises a support configured for placement on an eardrum of the user.


In many embodiments, the wireless communication circuitry is configured to receive sound from at least one of a cellular telephone, a hands free wireless device of an automobile, a paired short range wireless connectivity system, a wireless communication network, or a WiFi network.


In many embodiments, the wireless communication circuitry is coupled to the at least one output transducer to transmit far-end sound to the user from a far-end person in response to speech from the far-end person.


In another aspect, embodiments of the present invention provide an audio listening system for use with an ear of a user. The system comprises a canal microphone configured for placement in an ear canal of the user, and an external microphone configured for placement external to the ear canal. A transducer is coupled to the canal microphone and the external microphone. The transducer is configured for placement inside the ear canal on an eardrum of the user to vibrate the eardrum and transmit sound to the user in response to the canal microphone and the external microphone.


In many embodiments, the transducer comprises a magnet and a support configured for placement on the eardrum to vibrate the eardrum in response to a wide bandwidth signal comprising frequencies from about 0.1 kHz to about 10 kHz.


In many embodiments, the system further comprises a sound processor coupled to the canal microphone and configured to receive an input from the canal microphone. The sound processor is configured to vibrate the eardrum in response to the input from the canal microphone. The sound processor can be configured to minimize feedback from the transducer.


In many embodiments, the sound processor is coupled to the external microphone and configured to vibrate the eardrum in response to an input from the external microphone.


In many embodiments, the sound processor is configured to cancel feedback from the transducer to the canal microphone with a feedback transfer function.


In many embodiments, the sound processor is coupled to the external microphone and configured to cancel noise in response to input from the external microphone. The external microphone can be configured to measure external sound pressure, and the sound processor can be configured to minimize vibration of the eardrum in response to the external sound pressure measured with the external microphone. The sound processor can be configured to measure feedback from the transducer to the canal microphone, and the processor can be configured to minimize vibration of the eardrum in response to the feedback.


In many embodiments, the external microphone is configured to measure external sound pressure and the canal microphone is configured to measure canal sound pressure, and the sound processor is configured to determine a feedback transfer function in response to the canal sound pressure and the external sound pressure.


In many embodiments, the system further comprises an external input for listening.


In many embodiments, the external input comprises an analog input configured to receive an analog audio signal from an external device.


In many embodiments, the system further comprises a bone vibration sensor to detect near-end speech of the user.


In many embodiments, the system further comprises wireless communication circuitry coupled to the transducer and configured to vibrate the transducer in response to far-end speech.


In many embodiments, the system further comprises a sound processor coupled to the wireless communication circuitry and wherein the sound processor is configured to process the far-end speech to generate processed far-end speech, and the processor is configured to vibrate the transducer in response to the processed far-end speech.


In many embodiments, wireless communication circuitry is configured to receive far-end speech from a communication channel of a mobile phone.


In many embodiments, the wireless communication circuitry is configured to transmit near-end speech of the user to a far-end person.


In many embodiments, the system further comprises a mixer configured to mix a signal from the canal microphone and a signal from the external microphone to generate a mixed signal comprising near-end speech, and the wireless communication circuitry is configured to transmit the mixed signal comprising the near-end speech to a far-end person.


In many embodiments, the sound processor is configured to provide mixed near-end speech to the user.


In many embodiments, the system is configured to transmit near-end speech from a noisy environment to a far-end person.


In many embodiments, the system further comprises a bone vibration sensor configured to detect near-end speech, the bone vibration sensor coupled to the wireless communication circuitry, and wherein the wireless communication circuitry is configured to transmit the near-end speech to the far-end person in response to bone vibration when the user speaks.


In another aspect, embodiments of the present invention provide a method of transmitting sound to an ear of a user. High frequency sound comprising high frequency localization cues is detected with a first microphone placed at least one of inside an ear canal or near an opening of the ear canal. A second microphone is placed external to the ear canal. At least one output transducer is placed inside the ear canal of the user. The at least one output transducer is coupled to the first microphone and the second microphone and transmits sound from the first microphone and the second microphone to the user.


In another aspect, embodiments of the present invention provide a device to detect sound from an ear canal of a user. The device comprises a piezo electric transducer configured for placement in the ear canal of the user.


In many embodiments, the piezoelectric transducer comprises at least one elongate structure configured to extend at least partially across the ear canal from a first side of the ear canal to a second side of the ear canal to detect sound when the user speaks, in which the first side of the ear canal can be opposite the second side. The at least one elongate structure may comprise a plurality of elongate structures configured to extend at least partially across the long dimension of the ear canal, and a gap may extend at least partially between the plurality of elongate structures to minimize occlusion when the piezoelectric transducer is placed in the canal.


In many embodiments, the device further comprises a positioner coupled to the transducer, in which the positioner is configured to contact the ear canal and support the piezoelectric transducer in the ear canal to detect vibration when the user speaks. At least one of the positioner or the piezoelectric transducer can be configured to define at least one aperture to minimize occlusion when the user speaks.


In many embodiments, the positioner comprises an outer portion configured to extend circumferentially around the piezoelectric transducer to contact the ear canal with an outer perimeter of the outer portion when the positioner is positioned in the ear canal.


In many embodiments, the device further comprises an elongate support comprising an elongate energy transmission structure, the elongate energy transmission structure passing through at least one of the piezo electric transducer or the positioner to transmit an audio signal to the eardrum of the user, the elongate energy transmission structure comprising at least one of an optical fiber to transmit light energy or a wire configured to transmit electrical energy.


In many embodiments, the piezoelectric transducer comprises at least one of a ring piezoelectric transducer, a bender piezoelectric transducer, a bimorph bender piezoelectric transducer, a piezoelectric multi-morph transducer, a stacked piezoelectric transducer with a mechanical multiplier, a ring piezoelectric transducer with a mechanical multiplier, or a disk piezoelectric transducer.


In another aspect, embodiments of the present invention provide an audio listening system having multiple functionalities. The system comprises a body configured for positioning in an open ear canal, and the functionalities include a wide-bandwidth hearing aid, a microphone within the body, a noise suppression system, a feedback cancellation system, a mobile phone communication system, and an audio entertainment system.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1A shows (1) a wide bandwidth EARLENS™ hearing aid of the prior art suitable for use with a mode of the system as in FIG. 1 with an ear canal microphone for sound localization;



FIG. 2A shows (2) a hearing aid mode of the system as in FIGS. 1 and 1A with feedback cancellation;



FIG. 3A shows (3) a hearing aid mode of the system as in FIGS. 1 and 1A operating with noise cancellation;



FIG. 4A shows (4) the system as in FIG. 1 where the audio input is from an RF receiver, for example a BLUETOOTH™ device connected to the far-end speech of the communication channel of a mobile phone.



FIG. 5A shows (5) the system as in FIGS. 1 and 4A configured to transmit the near-end speech, in which the speech can be a mix of the signal generated by the external microphone and the ear canal microphone from sensors including a small vibration sensor;



FIG. 6A shows the system as in FIGS. 1, 1A, 4A and 5A configured to transduce and transmit the near-end speech, from a noisy environment, to the far-end listener;



FIG. 7A shows a piezoelectric positioner configured for placement in the ear canal to detect near-end speech, according to embodiments of the present invention;



FIG. 7B shows a positioner as in FIG. 7A in detail, according to embodiments of the present invention;



FIG. 8A shows an elongate support with a pair of positioners adapted to contact the ear canal, and in which at least one of the positioners comprises a piezoelectric positioner configured to detect near end speech of the user, according to embodiments of the present invention;



FIG. 8B shows an elongate support as in FIG. 8A attached to two positioners placed in an ear canal, according to embodiments of the present invention;



FIG. 8B-1 shows an elongate support configured to position a distal end of the elongate support with at least one positioner placed in an ear canal, according to embodiments of the present invention;



FIG. 8C shows a positioner adapted for placement near the opening to the ear canal, according to embodiments of the present invention;



FIG. 8D shows a positioner adapted for placement near the coil assembly, according to embodiments of the present invention;



FIG. 9 illustrates a body comprising the canal microphone installed in the ear canal and coupled to a BTE unit comprising the external microphone, according to embodiments of the present invention;



FIG. 10A shows feedback pressure at the canal microphone and feedback pressure at the external microphone for a transducer coupled to the middle ear, according to embodiments of the present invention;



FIG. 10B shows gain versus frequency at the output transducer for sound input to canal microphone and sound input to the external microphone to detect high frequency localization cues and minimize feedback, according to embodiments of the present invention;



FIG. 10C shows a canal microphone with high pass filter circuitry and an external microphone with low pass filter circuitry, both coupled to a transducer to provide gain in response to frequency as in FIG. 10B;


FIG. 10D1 shows a canal microphone coupled to a first transducer and an external microphone coupled to a second transducer to provide gain in response to frequency as in FIG. 10B;


FIG. 10D2 shows the canal microphone coupled to a first transducer comprising a first coil wrapped around a core and the external microphone coupled to a second transducer comprising a second coil wrapped around the core, as in FIG. 10D1;



FIG. 11A shows an elongate support comprising a plurality of optical fibers configured to transmit light and receive light to measure displacement of the eardrum, according to embodiments of the present invention;



FIG. 11B shows a positioner for use with an elongate support as in FIG. 11A and adapted for placement near the opening to the ear canal, according to embodiments of the present invention; and



FIG. 11C shows a positioner adapted for placement near a distal end of the elongate support as in FIG. 11A, according to embodiments of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

Embodiments of the present invention provide a multifunction audio system integrated with a communication system, noise cancellation, feedback management, and non-surgical transduction. A multifunction hearing aid integrated with a communication system, noise cancellation, and feedback management with an open ear canal is described, which provides many benefits to the user.



FIGS. 1A to 6A illustrate different functionalities embodied in the integrated system. The present multifunction hearing aid provides wide bandwidth and sound localization capabilities, as well as communication and noise-suppression capabilities. The configurations for system 10 include configurations for multiple sensor inputs and direct drive of the middle ear.



FIG. 1 shows a hearing aid system 10 integrated with a communication sub-system, a noise suppression sub-system and a feedback-suppression sub-system. System 10 is configured to receive sound input from an acoustic environment. System 10 comprises a canal microphone CM configured to receive input from the acoustic environment, and an external microphone configured to receive input from the acoustic environment. When the canal microphone is placed in the ear canal, the canal microphone can receive high frequency localization cues, similar to natural hearing, that help the user localize sound. System 10 includes a direct audio input, for example an analog audio input from a jack, such that the user can listen to sound from the direct audio input. System 10 also includes wireless circuitry, for example known short range wireless radio circuitry configured to connect with the BLUETOOTH™ short range wireless connectivity standard. The wireless circuitry can receive input wirelessly, such as input from a phone, input from a stereo, and combinations thereof. The wireless circuitry is also coupled to the external microphone EM and bone vibration circuitry, to detect near-end speech when the user speaks. The bone vibration circuitry may comprise known circuitry to detect near-end speech, for example known JAWBONE™ circuitry that is coupled to the skin of the user to detect bone vibration in response to near-end speech. Near-end speech can also be transmitted to the middle ear and cochlea, for example with acoustic bone conduction, such that the user can hear him or herself speak.
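The routing just described can be summarized with a small sketch; the class and signal names below are illustrative placeholders rather than identifiers from the patent, and the processing is reduced to an equal-weight mix where a real sound processor would apply per-band gain, feedback cancellation, and noise cancellation.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import numpy as np

@dataclass
class System10Routing:
    # Receive-side inputs mixed by the sound processor for the output transducer.
    receive_inputs: Dict[str, np.ndarray] = field(default_factory=dict)
    # Transmit-side inputs mixed for the wireless link (near-end speech).
    transmit_inputs: Dict[str, np.ndarray] = field(default_factory=dict)

    def sound_processor(self, frames: List[np.ndarray]) -> np.ndarray:
        # Placeholder: equal-weight mix of the input frames.
        return np.mean(frames, axis=0)

    def output_transducer_drive(self) -> np.ndarray:
        return self.sound_processor(list(self.receive_inputs.values()))

    def wireless_transmit_frame(self) -> np.ndarray:
        return self.sound_processor(list(self.transmit_inputs.values()))

n = 1024
routing = System10Routing(
    receive_inputs={"canal_mic": np.zeros(n), "external_mic": np.zeros(n),
                    "direct_audio": np.zeros(n), "wireless_rx": np.zeros(n)},
    transmit_inputs={"external_mic": np.zeros(n), "bone_vibration": np.zeros(n)},
)
drive = routing.output_transducer_drive()
near_end_frame = routing.wireless_transmit_frame()
```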


System 10 comprises a sound processor. The sound processor is coupled to the canal microphone CM to receive input from the canal microphone. The sound processor is coupled to the external microphone EM to receive sound input from the external microphone. An amplifier can be coupled to the external microphone EM and the sound processor so as to amplify sound from the external microphone to the sound processor. The sound processor is also coupled to the direct audio input. The sound processor is coupled to an output transducer configured to vibrate the middle ear. The output transducer may be coupled to an amplifier. Vibration of the middle ear can induce the stapes of the ear to vibrate such that the user perceives sound. The output transducer may comprise, for example, the EARLENS™ transducer described by Perkins et al. in the following U.S. patents and application publications: U.S. Pat. No. 5,259,032; 2006/0023908; 2007/0100197, the full disclosures of which are incorporated herein by reference and may include subject matter suitable for combination in accordance with some embodiments of the present invention. The EARLENS™ transducer may have significant advantages due to reduced feedback that can be limited to a narrow frequency range. The output transducer may comprise an output transducer directly coupled to the middle ear, so as to reduce feedback. For example, the EARLENS™ transducer can be coupled to the middle ear so as to vibrate the middle ear such that the user perceives sound. The output transducer of the EARLENS™ can comprise, for example, a coil and core coupled to a magnet. When current is passed through the coil, a magnetic field is generated, which vibrates the magnet of the EARLENS™ supported on the eardrum such that the user perceives sound. Alternatively or in combination, the output transducer may comprise other types of transducers, for example many of the optical transducers or transducer systems described herein.


System 10 is configured for an open ear canal, such that there is a direct acoustic path from the acoustic environment to the eardrum of the user. The direct acoustic path can be helpful to minimize occlusion of the ear canal, which can result in the user perceiving his or her own voice with a hollow sound when the user speaks. With the open canal configuration, a feedback path can exist from the eardrum to the canal microphone, for example the EL Feedback Acoustic Pathway. Although use of a direct drive transducer such as the coil and magnet of the EARLENS™ system can substantially minimize feedback, it can be beneficial to minimize feedback with additional structures and configurations of system 10.



FIG. 1A shows (1) a wide bandwidth EARLENS™ hearing aid of the prior art suitable for use with a mode of the system as in FIG. 1 with ear canal microphone CM for sound localization. The canal microphone CM is coupled to sound processor SP. Sound processor SP is coupled to an output amplifier, which amplifier is coupled to a coil to drive the magnet of the EARLENS™ EL.



FIG. 2A shows (2) a hearing aid mode of the system as in FIGS. 1 and 1A with a feedback cancellation mode. A free field sound pressure PFF may comprise a desired signal. The desired signal comprising the free field sound pressure is incident on the external microphone and on the pinna of the ear. The free field sound is diffracted by the pinna of the ear and transformed to form sound with high frequency localization cues at canal microphone CM. As the canal microphone is placed in the ear canal along the sound path between the free field and the eardrum, the canal transfer function HC may comprise a first component HC1 and a second component HC2, in which HC1 corresponds to sound travel between the free field and the canal microphone and HC2 corresponds to sound travel between the canal microphone and the eardrum.
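As a small numerical illustration of this decomposition, the composite free-field-to-eardrum response can be formed as the per-frequency product of HC1 and HC2; the example responses below are synthetic placeholders, not measured data.

```python
import numpy as np

freqs_hz = np.linspace(100, 10_000, 512)
# Placeholder magnitude responses (not measured data):
hc1 = 1.0 + 0.5 * np.exp(-((freqs_hz - 5_000) / 2_000) ** 2)  # free field to canal microphone, with pinna diffraction
hc2 = 1.0 + 1.0 * np.exp(-((freqs_hz - 2_700) / 600) ** 2)    # canal microphone to eardrum, canal resonance near 2-3 kHz
hc = hc1 * hc2                                                # composite free-field-to-eardrum response HC
```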


As noted above, acoustic feedback can travel from the EARLENS™ EL to the canal microphone CM. The acoustic feedback travels along the acoustic feedback path to the canal microphone CM, such that a feedback sound pressure PFB is incident on canal microphone CM. The canal microphone CM senses sound pressure from the desired signal PCM and the feedback sound pressure PFB. The feedback sound pressure PFB can be canceled by generating an error signal EFB. A feedback transfer function HFB is shown from the output of the sound processor to the input to the sound processor, and an error signal e is shown as input to the sound processor. Sound processor SP may comprise a signal generator SG. HFB can be estimated by generating a wide band signal with signal generator SG and nulling out the error signal e. HFB can be used to generate an error signal EFB with known signal processing techniques for feedback cancellation. The feedback suppression may comprise or be combined with known feedback suppression methods, and the noise cancellation may comprise or be combined with known noise cancellation methods.
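One way to realize this estimation, sketched below under assumptions, is to fit an adaptive FIR filter from the processor output (here the wide-band probe from signal generator SG) to the canal-microphone signal so that the error e is nulled; the filter length, step size, and synthetic feedback path are illustrative, and the LMS update stands in for whatever known feedback-cancellation technique is used.

```python
import numpy as np

rng = np.random.default_rng(0)
N, TAPS, MU = 32_000, 64, 5e-4           # samples, FIR length, LMS step size (assumed)

# Synthetic "true" feedback path HFB from transducer drive to the canal
# microphone; unknown to the processor in practice.
h_fb_true = np.zeros(TAPS)
h_fb_true[[5, 9, 14]] = [0.20, -0.10, 0.05]

probe = rng.standard_normal(N)                  # wide-band signal from signal generator SG
p_fb_at_cm = np.convolve(probe, h_fb_true)[:N]  # feedback pressure PFB seen at CM

h_fb_est = np.zeros(TAPS)   # adaptive estimate of HFB
x_buf = np.zeros(TAPS)      # recent processor-output samples
for n in range(N):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = probe[n]
    e = p_fb_at_cm[n] - h_fb_est @ x_buf   # error signal e; nulling it fits HFB
    h_fb_est += MU * e * x_buf             # LMS update

# h_fb_est now approximates h_fb_true; in operation the estimate filters the
# processor output and the result is subtracted from the canal-microphone signal.
```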



FIG. 3A shows (3) a hearing aid mode of the system as in FIGS. 1 and 1A operating with a noise cancellation mode. The external microphone EM is coupled to the sound processor SP through an amplifier AMP. The canal microphone CM is coupled to the sound processor SP. External microphone EM is configured to detect sound from free field sound pressure PFF. Canal microphone CM is configured to detect sound from canal sound pressure PCM. The sound pressure PFF travels through the ear canal and arrives at the tympanic membrane to generate a pressure at the tympanic membrane PTM2. The free field sound pressure PFF travels through the ear canal in response to an ear canal transfer function HC to generate a pressure at the tympanic membrane PTM1. The system is configured to minimize V0, corresponding to vibration of the eardrum due to PFF. The output transducer is configured to vibrate with −PTM1 such that V0, corresponding to vibration of the eardrum, is minimized, and thus PFB at the canal microphone may also be minimized. The transfer function of the ear canal HC1 can be determined in response to PCM and PFF, for example in response to the ratio of PCM to PFF with the equation HC1=PCM/PFF.


The sound processor can be configured to pass an output current IC through the coil which minimizes motion of the eardrum. The current through the coil for a desired PTM2 can be determined with the following equation and approximation:

IC = PTM1/PTM2 = (PTM1/PEFF) mA

where PEFF comprises the effective pressure at the tympanic membrane per milliamp of coil current, measured on an individual subject.
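The cancellation arithmetic can be illustrated with a brief numerical sketch; the pressures, the total transfer function value, and PEFF below are made-up example numbers for a single frequency, not measured data.

```python
import numpy as np

# Example single-frequency magnitudes (Pa); all values assumed for illustration.
p_ff = 0.02        # free field sound pressure PFF
p_cm = 0.024       # pressure at the canal microphone PCM
hc1 = p_cm / p_ff  # ear canal transfer function HC1 = PCM / PFF

hc = 1.8           # assumed overall free-field-to-eardrum transfer function HC
p_tm1 = hc * p_ff  # pressure at the tympanic membrane PTM1 due to PFF

p_eff = 0.05             # effective TM pressure per milliamp PEFF (Pa/mA, assumed)
i_c_ma = p_tm1 / p_eff   # coil current IC in mA, driven in anti-phase to cancel PTM1
print(f"HC1 = {hc1:.2f}, PTM1 = {p_tm1:.3f} Pa, IC = {i_c_ma:.2f} mA")
```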


The ear canal transfer function HC may comprise a first ear canal transfer function HC1 and a second ear canal transfer function HC2. As the canal microphone CM is placed in the ear canal, the second ear canal transfer function HC2 may correspond to a distance along the ear canal from ear canal microphone CM to the eardrum. The first ear canal transfer function HC1 may correspond to a portion of the ear canal from the ear canal microphone CM to the opening of the ear canal. The first ear canal transfer function may also comprise a pinna transfer function, such that the first ear canal transfer function HC1 corresponds to the ear canal sound pressure PCM at the canal microphone in response to the free field sound pressure PFF after the free field sound pressure has been diffracted by the pinna so as to provide sound localization cues near the entrance to the ear canal.


The above described noise cancellation and feedback suppression can be combined in many ways. For example, the noise cancellation can be used with the direct audio input during a flight while the user listens to a movie: the surrounding noise of the flight can be cancelled with the noise cancellation from the external microphone, and the sound processor can be configured to transmit the direct audio to the transducer, for example adjusted to the user's hearing profile, such that the user can hear the sound from the movie clearly.
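A hedged sketch of this combination follows: the direct audio is shaped by a per-band gain profile standing in for the user's hearing profile, and the external-microphone signal is inverted and added as a simple stand-in for the noise-cancellation term; the band edges, gains, and unity inversion are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 32_000
BANDS_HZ = [(100, 1_000), (1_000, 4_000), (4_000, 10_000)]  # assumed band edges
PROFILE_DB = [5.0, 12.0, 20.0]                              # assumed per-band gains

def shape_direct_audio(direct_audio: np.ndarray) -> np.ndarray:
    """Apply a per-band gain profile to the direct audio input."""
    shaped = np.zeros_like(direct_audio)
    for (lo, hi), gain_db in zip(BANDS_HZ, PROFILE_DB):
        sos = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="bandpass", output="sos")
        shaped += (10 ** (gain_db / 20)) * sosfilt(sos, direct_audio)
    return shaped

def transducer_drive(direct_audio: np.ndarray, external_mic: np.ndarray) -> np.ndarray:
    # Anti-phase ambient term; a real system would apply the ear canal transfer
    # function HC rather than a unity inversion of the external-microphone signal.
    return shape_direct_audio(direct_audio) - external_mic

t = np.arange(0, 0.1, 1 / FS)
drive = transducer_drive(np.sin(2 * np.pi * 440 * t), 0.1 * np.sin(2 * np.pi * 120 * t))
```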



FIG. 4A shows (4) the system as in FIG. 1 where the audio input is from an RF receiver, for example a BLUETOOTH™ device connected to the far-end speech of the communication channel of a mobile phone. The mobile system may comprise a mobile phone system, for example a far end mobile phone system. The system 10 may comprise a listen mode to listen to an external input. The external input in the listen mode may comprise at least one of a) the direct audio input signal or b) far-end speech from the mobile system.



FIG. 5A shows (5) the system as in FIGS. 1, 1A and 4A configured to transmit the near-end speech with an acoustic mode. The acoustic signal may comprise near-end speech detected with a microphone, for example. The near-end speech can be a mix of the signals generated by the external microphone and the canal microphone. The external microphone EM is coupled to a mixer. The canal microphone may also be coupled to the mixer. The mixer is coupled to the wireless circuitry to transmit the near-end speech to the far-end. The user is able to hear both near-end speech and far-end speech.
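A minimal sketch of this mixing follows; the fixed weights, frame length, and placeholder transmit function are illustrative assumptions rather than details from the patent.

```python
import numpy as np

def mix_near_end(external_mic: np.ndarray, canal_mic: np.ndarray,
                 w_external: float = 0.7, w_canal: float = 0.3) -> np.ndarray:
    """Combine the two microphone signals into one near-end speech frame."""
    mixed = w_external * external_mic + w_canal * canal_mic
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 1.0 else mixed  # simple clip protection

def wireless_transmit(frame: np.ndarray) -> None:
    """Placeholder for handing the mixed near-end speech to the wireless circuitry."""
    pass

frame = mix_near_end(np.zeros(160), np.zeros(160))
wireless_transmit(frame)
```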



FIG. 6A shows the system as in FIGS. 1, 1A, 4A and 5A configured to transduce and transmit the near-end speech from a noisy environment to the far-end listener. System 10 comprises a near-end speech transmission mode configured for vibration and acoustic detection of near-end speech. The acoustic detection comprises the canal microphone CM and the external microphone EM mixed with the mixer and coupled to the wireless circuitry. The near-end speech also induces vibrations in the user's bone, for example the user's skull, that can be detected with a vibration sensor. The vibration sensor may comprise a commercially available vibration sensor such as components of the JAWBONE™. The skull vibration sensor is coupled to the wireless circuitry. The near-end sound vibration detected from the bone conduction vibration sensor is combined with the near-end sound from at least one of the canal microphone CM or the external microphone EM and transmitted to the far-end user of the mobile system.
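One plausible way to combine these sensors, sketched below under assumptions, is to take the low band from the bone-conduction sensor, which is largely immune to airborne noise but band-limited, and the high band from the mixed microphone signal; the 1 kHz split and filter choices are not taken from the patent.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS, SPLIT_HZ = 16_000, 1_000   # sample rate and crossover (assumed)
b_lp, a_lp = butter(4, SPLIT_HZ / (FS / 2), btype="low")
b_hp, a_hp = butter(4, SPLIT_HZ / (FS / 2), btype="high")

def near_end_speech(bone_sensor: np.ndarray, mic_mix: np.ndarray) -> np.ndarray:
    """Combine bone-conducted and acoustic near-end speech for wireless transmission."""
    return lfilter(b_lp, a_lp, bone_sensor) + lfilter(b_hp, a_hp, mic_mix)

tx_frame = near_end_speech(np.zeros(160), np.zeros(160))
```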



FIG. 7A shows a piezoelectric positioner 710 configured to detect near-end speech of the user. Piezoelectric positioner 710 can be attached to an elongate support near a transducer, in which the piezoelectric positioner is adapted to contact the ear canal near the transducer and support the transducer. Piezoelectric positioner 710 may comprise a piezoelectric ring 720 configured to detect near-end speech of the user in response to bone vibration when the user speaks. The piezoelectric ring 720 can generate an electrical signal in response to bone vibration transmitted through the skin of the ear canal. Piezoelectric positioner 710 comprises a wide support attached to elongate support 750 near coil assembly 740. Piezoelectric positioner 710 can be used to center the coil in the canal to avoid contact with skin 765, and also to maintain a fixed distance between coil assembly 740 and magnet 728. Piezoelectric positioner 710 is adapted for direct contact with skin 765 of the ear canal. For example, piezoelectric positioner 710 includes a width that is approximately the same size as the cross sectional width of the ear canal where the piezoelectric positioner contacts skin 765. Also, the width of piezoelectric positioner 710 is typically greater than a cross-sectional width of coil assembly 740 so that the piezoelectric positioner can suspend coil assembly 740 in the ear canal to avoid contact between coil assembly 740 and skin 765 of the ear canal.


The piezo electric positioner may comprise many known piezoelectric materials, for example at least one of Polyvinylidene Fluoride (PVDF), PVF, or lead zirconate titanate (PZT).


System 10 may comprise a behind the ear unit, for example BTE unit 700, connected to elongate support 750. The BTE unit 700 may comprise many of the components described above, for example the wireless circuitry, the sound processor, the mixer and a power storage device. The BTE unit 700 may comprise an external microphone 748. A canal microphone 744 can be coupled to the elongate support 750 at a location 746 along elongate support 750 so as to position the canal microphone at least one of inside the ear canal or near the ear canal opening to detect high frequency sound localization cues in response to sound diffraction from the pinna. The canal microphone and the external microphone may also detect head shadowing, for example with frequencies at which the head of the user may cast an acoustic shadow on microphone 744 and microphone 748.


Positioner 710 is adapted for comfort during insertion into the user's ear and thereafter. Piezoelectric positioner 710 is tapered proximally (and laterally) toward the ear canal opening to facilitate insertion into the ear of the user. Also, piezoelectric positioner 710 has a thickness transverse to its width that is sufficiently thin to permit piezoelectric positioner 710 to flex while the support is inserted into position in the ear canal. However, in some embodiments the piezoelectric positioner has a width that approximates the width of the typical ear canal and a thickness that extends along the ear canal about the same distance as coil assembly 740 extends along the ear canal. Thus, as shown in FIG. 7A, piezoelectric positioner 710 has a thickness no more than the length of coil assembly 740 along the ear canal.


Positioner 710 permits sound waves to pass and can be used to provide an open canal hearing aid design. Piezoelectric positioner 710 comprises several spokes and openings formed therein. In an alternate embodiment, piezoelectric positioner 710 comprises a soft “flower”-like arrangement. Piezoelectric positioner 710 is designed to allow acoustic energy to pass, thereby leaving the ear canal mostly open.



FIG. 7B shows a piezoelectric positioner 710 as in FIG. 7A in detail, according to embodiments of the present invention. Spokes 712 and piezoelectric ring 720 define apertures 714. Apertures 714 are shaped to permit acoustic energy to pass. In an alternate embodiment, the rim is elliptical to better match the shape of the ear canal defined by skin 765. Also, the rim can be removed so that spokes 712 engage the skin in a “flower petal” like arrangement. Although four spokes are shown, any number of spokes can be used. Also, the apertures can be any shape, for example circular, elliptical, square or rectangular.



FIG. 8A shows an elongate support with a pair of positioners adapted to contact the ear canal, and in which at least one of the positioners comprises a piezoelectric positioner configured to detect near end speech of the user, according to embodiments of the present invention. An elongate support 810 extends to a coil assembly 819. Coil assembly 819 comprises a coil 816, a core 817 and a biocompatible material 818. Elongate support 810 includes a wire 812 and a wire 814 electrically connected to coil 816. Coil 816 can include any of the coil configurations as described above. Wire 812 and wire 814 are shown as a twisted pair, although other configurations can be used as described above. Elongate support 810 comprises biocompatible material 818 formed over wire 812 and wire 814. Biocompatible material 818 covers coil 816 and core 817 as described above.


Wire 812 and wire 814 are resilient members and are sized and comprise material selected to elastically flex in response to small deflections and provide support to coil assembly 819. Wire 812 and wire 814 are also sized and comprise material selected to deform in response to large deflections so that elongate support 810 can be deformed to a desired shape that matches the ear canal. Wire 812 and wire 814 comprise metal and are adapted to conduct heat from coil assembly 819. Wire 812 and wire 814 are soldered to coil 816 and can comprise a different gauge of wire from the wire of the coil, in particular a gauge with a range from about 26 to about 36 that is smaller than the gauge of the coil to provide resilient support and heat conduction. Additional heat conducting materials can be used to conduct and transport heat from coil assembly 819, for example shielding positioned around wire 812 and wire 814. Elongate support 810 and wire 812 and wire 814 extend toward the driver unit and are adapted to conduct heat out of the ear canal.



FIG. 8B shows an elongate support as in FIG. 8A attached to two piezoelectric positioners placed in an ear canal, according to embodiments of the present invention. A first piezoelectric positioner 830 is attached to elongate support 810 near coil assembly 819. First piezoelectric positioner 830 engages the skin of the ear canal to support coil assembly 819 and avoid skin contact with the coil assembly. A second piezoelectric positioner 840 is attached to elongate support 810 near ear canal opening 17. In some embodiments, microphone 820 may be positioned slightly outside the ear canal and near the canal opening so as to detect high frequency localization cues, for example within about 7 mm of the canal opening. Second piezoelectric positioner 840 is sized to contact the skin of the ear canal near opening 17 to support elongate support 810. A canal microphone 820 is attached to elongate support 810 near ear canal opening 17 to detect high frequency sound localization cues. The piezoelectric positioners and elongate support are sized and shaped so that the supports substantially avoid contact with the ear between the microphone and the coil assembly. A twisted pair of wires 822 extends from canal microphone 820 to the driver unit and transmits an electronic auditory signal to the driver unit. Alternatively, other modes of signal transmission, as described below with reference to FIG. 8B-1, may be used. Although canal microphone 820 is shown lateral to piezoelectric positioner 840, microphone 820 can be positioned medial to piezoelectric positioner 840. Elongate support 810 is resilient and deformable as described above. Although elongate support 810, piezoelectric positioner 830 and piezoelectric positioner 840 are shown as separate structures, the support can be formed from a single piece of material, for example a single piece of material formed with a mold. In some embodiments, elongate support 810, piezoelectric positioner 830 and piezoelectric positioner 840 are each formed as separate pieces and assembled. For example, the piezoelectric positioners can be formed with holes adapted to receive the elongate support so that the piezoelectric positioners can be slid into position on the elongate support.



FIG. 8C shows a piezoelectric positioner adapted for placement near the opening to the ear canal according to embodiments of the present invention. Piezoelectric positioner 840 includes piezoelectric flanges 842 that extend radially outward to engage the skin of the ear canal. Flanges 842 are formed from a flexible material. Openings 844 are defined by piezoelectric flanges 842. Openings 844 permit sound waves to pass piezoelectric positioner 840 while the piezoelectric positioner is positioned in the ear canal, so that the sound waves are transmitted to the tympanic membrane. Although piezoelectric flanges 842 define an outer boundary of support 840 with an elliptical shape, piezoelectric flanges 842 can comprise an outer boundary with any shape, for example circular. In some embodiments, the piezoelectric positioner has an outer boundary defined by the shape of the individual user's ear canal, for example embodiments where piezoelectric positioner 840 is made from a mold of the user's ear. Elongate support 810 extends transversely through piezoelectric positioner 840.



FIG. 8D shows a piezoelectric positioner adapted for placement near the coil assembly, according to embodiments of the present invention. Piezoelectric positioner 830 includes piezoelectric flanges 832 that extend radially outward to engage the skin of the ear canal. Flanges 832 are formed from a flexible piezoelectric material, for example a bimorph material. Openings 834 are defined by piezoelectric flanges 832. Openings 834 permit sound waves to pass piezoelectric positioner 830 while the piezoelectric positioner is positioned in the ear canal, so that the sound waves are transmitted to the tympanic membrane. Although piezoelectric flanges 832 define an outer boundary of support 830 with an elliptical shape, piezoelectric flanges 832 can comprise an outer boundary with any shape, for example circular. In some embodiments, the piezoelectric positioner has an outer boundary defined by the shape of the individual user's ear canal, for example embodiments where piezoelectric positioner 830 is made from a mold of the user's ear. Elongate support 810 extends transversely through piezoelectric positioner 830.


Although an electromagnetic transducer comprising coil 819 is shown positioned on the end of elongate support 810, the piezoelectric positioner and elongate support can be used with many types of transducers positioned at many locations, for example optical electromagnetic transducers positioned outside the ear canal and coupled to the support to deliver optical energy along the support, for example through at least one optical fiber. The at least one optical fiber may comprise a single optical fiber or a plurality of two or more optical fibers of the support. The plurality of optical fibers may comprise a parallel configuration of optical fibers configured to transmit at least two channels in parallel along the support toward the eardrum of the user.



FIG. 8B-1 shows an elongate support configured to position a distal end of the elongate support with at least one piezoelectric positioner placed in an ear canal. Elongate support 810 and at least one piezoelectric positioner, for example at least one of piezoelectric positioner 830 or piezoelectric positioner 840, or both, are configured to position support 810 in the ear canal with the electromagnetic energy transducer positioned outside the ear canal, and the microphone positioned at least one of in the ear canal or near the ear canal opening so as to detect high frequency spatial localization cues, as described above. For example, the output energy transducer, or emitter, may comprise a light source configured to emit electromagnetic energy comprising optical frequencies, and the light source can be positioned outside the ear canal, for example in a BTE unit. The light source may comprise at least one of an LED or a laser diode, for example. The light source, also referred to as an emitter, can emit visible light, or infrared light, or a combination thereof. Light circuitry may comprise the light source and can be coupled to the output of the sound processor to emit a light signal to an output transducer placed on the eardrum so as to vibrate the eardrum such that the user perceives sound. The light source can be coupled to the distal end of the support 810 with a waveguide, such as an optical fiber with a distal end of the optical fiber 810D comprising a distal end of the support. The optical energy delivery transducer can be coupled to the proximal portion of the elongate support to transmit optical energy to the distal end. The piezoelectric positioner can be adapted to position the distal end of the support near an eardrum when the proximal portion is placed at a location near an ear canal opening. The intermediate portion of elongate support 810 can be sized to minimize contact with a canal of the ear between the proximal portion and the distal end.


The at least one piezoelectric positioner, for example piezoelectric positioner 830, can improve optical coupling between the light source and a device positioned on the eardrum, so as to increase the efficiency of light energy transfer from the output energy transducer, or emitter, to an optical device positioned on the eardrum, for example by improving alignment of the distal end 810D of the support that emits light with a transducer positioned at least one of on the eardrum or inside the middle ear, for example positioned on an ossicle of the middle ear. The device positioned on the eardrum may comprise an optical transducer assembly OTA. The optical transducer assembly OTA may comprise a support configured for placement on the eardrum, for example molded to the eardrum and similar to the support used with transducer EL. The optical transducer assembly OTA may comprise an optical transducer configured to vibrate in response to transmitted light λT. The transmitted light λT may comprise many wavelengths of light, for example at least one of visible light or infrared light, or a combination thereof. The optical transducer assembly OTA vibrates on the eardrum in response to transmitted light λT. The at least one piezoelectric positioner and elongate support 810 comprising an optical fiber can be combined with many known optical transducer and hearing devices, for example as described in U.S. 2006/0189841, entitled “Systems and Methods for Photo-Mechanical Hearing Transduction”; and U.S. Pat. No. 7,289,639, entitled “Hearing Implant”, the full disclosures of which are incorporated herein by reference and may include subject matter suitable for combination in accordance with some embodiments of the present invention. The piezoelectric positioner and elongate support may also be combined with photo-electro-mechanical transducers positioned on the ear drum with a support, as described in U.S. Pat. Ser. Nos. 61/073,271 and 61/073,281, both filed on Jun. 17, 2008, the full disclosures of which are incorporated herein by reference and may include subject matter suitable for combination in accordance with some embodiments of the present invention.


In specific embodiments, elongate support 810 may comprise an optical fiber coupled to piezoelectric positioner 830 to align the distal end of the optical fiber with an output transducer assembly supported on the eardrum. The output transducer assembly may comprise a photodiode configured to receive light transmitted from the distal end of support 810 and supported with support component 30 placed on the eardrum, as described above. The output transducer assembly can be separated from the distal end of the optical fiber, and the proximal end of the optical fiber can be positioned in the BTE unit and coupled to the light source. The output transducer assembly can be similar to the output transducer assembly described in U.S. 2006/0189841, with piezoelectric positioner 830 used to align the optical fiber with the output transducer assembly, and the BTE unit may comprise a housing with the light source positioned therein.



FIG. 9 illustrates a body 910 comprising the canal microphone installed in the ear canal and coupled to a BTE unit comprising the external microphone, according to embodiments of system 10. The body 910 comprises the transmitter installed in the ear canal and coupled to the BTE unit. The transducer comprises the EARLENS™ installed on the tympanic membrane. The transmitter assembly 960 is shown with shell 966 cross-sectioned. The body 910 comprising shell 966 is shown installed in a right ear canal and oriented with respect to the transducer EL. The transducer assembly EL is positioned against the tympanic membrane, or eardrum, at umbo area 912. The transducer may also be placed on other acoustic members of the middle ear, including locations on the malleus, incus, and stapes. When placed in the umbo area 912 of the eardrum, the transducer EL will be naturally tilted with respect to the ear canal. The degree of tilt will vary from individual to individual, but is typically about a 60-degree angle with respect to the ear canal. Many of the components of the shell and transducer can be similar to those described in U.S. Pub. No. 2006/0023908, the full disclosure of which has been previously incorporated herein by reference and may include subject matter suitable for combination in accordance with some embodiments of the present invention.


A first microphone for high frequency sound localization, for example canal microphone 974, is positioned inside the ear canal to detect high frequency localization cues. A BTE unit is coupled to the body 910. The BTE unit has a second microphone, for example an external microphone positioned on the BTE unit to receive external sounds. The external microphone can be used to detect low frequencies and combined with the high frequency microphone input to minimize feedback when high frequency sound is detected with the high frequency microphone, for example canal microphone 974. A bone vibration sensor 920 is supported with shell 966 to detect bone conduction vibration when the user speaks. An outer surface of bone vibration sensor 920 can be disposed along an outer surface of shell 966 so as to contact tissue of the ear canal, for example substantially flush with the outer surface of shell 966 near the sensor, so as to minimize tissue irritation. Bone vibration sensor 920 may also extend through an outer surface of shell 966 to contact the tissue of the ear canal. Additional components of system 10, such as the wireless communication circuitry and the direct audio input, as described above, can be located in the BTE unit. The sound processor may be located in many places, for example in the BTE unit or within the ear canal.


The transmitter assembly 960 has shell 966 configured to mate with the characteristics of the individual's ear canal wall. Shell 966 can preferably be matched to fit snugly in the individual's ear canal so that the transmitter assembly 960 may repeatedly be inserted into or removed from the ear canal and still be properly aligned when re-inserted in the individual's ear. Shell 966 can also be configured to support coil 964 and core 962 such that the tip of core 962 is positioned at a proper distance and orientation in relation to the transducer 926 when the transmitter assembly is properly installed in the ear canal. The core 962 generally comprises ferrite, but may be any material with high magnetic permeability.


In many embodiments, coil 964 is wrapped around the circumference of the core 962 along part or all of the length of the core. Generally, the coil has a sufficient number of turns to optimally drive an electromagnetic field toward the transducer. The number of turns may vary depending on the diameter of the coil, the diameter of the core, the length of the core, and the overall acceptable diameter of the coil and core assembly based on the size of the individual's ear canal. Generally, the force applied by the magnetic field on the magnet, and therefore the efficiency of the system, will increase with an increase in the diameter of the core. These parameters will be constrained, however, by the anatomical limitations of the individual's ear. The coil 964 may be wrapped around only a portion of the length of the core, allowing the tip of the core to extend further into the ear canal.
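For a rough sense of how these geometric trade-offs interact, the following minimal sketch estimates the on-axis flux density at the center of an ideal finite solenoid; the turn count, drive current, dimensions, and relative permeability below are illustrative assumptions only and are not values from the embodiments described herein.

import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def solenoid_center_field(turns, current_a, length_m, diameter_m, mu_r=1.0):
    # On-axis flux density at the center of a finite solenoid:
    # B = mu0 * mu_r * N * I / sqrt(L^2 + D^2).
    # Treating the core crudely as a uniform multiplier mu_r; the effective
    # permeability of an open ferrite rod is much lower than the material value.
    return MU0 * mu_r * turns * current_a / math.sqrt(length_m**2 + diameter_m**2)

# Hypothetical ear-canal-scale coil: 200 turns, 5 mA drive, 4 mm long,
# 1.5 mm diameter, with an assumed effective relative permeability of 100.
print(solenoid_center_field(200, 5e-3, 4e-3, 1.5e-3, mu_r=100.0))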


One method for matching the shell 966 to the internal dimensions of the ear canal is to make an impression of the ear canal cavity, including the tympanic membrane. A positive investment is then made from the negative impression. The outer surface of the shell is then formed from the positive investment, which replicates the external surface of the impression. The coil 964 and core 962 assembly can then be positioned and mounted in the shell 966 according to the desired orientation with respect to the projected placement of the transducer 926, which may be determined from the positive investment of the ear canal and tympanic membrane. Other methods of matching the shell to the ear canal of the user, such as imaging of the user, may be used.


Transmitter assembly 960 may also comprise a digital signal processing (DSP) unit 972, microphone 974, and battery 978 that are supported with body 910 and disposed inside shell 966. A BTE unit may also be coupled to the transmitter assembly, and at least some of the components, such as the DSP unit, can be located in the BTE unit. The proximal end of the shell 966 has a faceplate 980 that can be temporarily removed to provide access to the open chamber 986 of the shell 966 and the transmitter assembly components contained therein. For example, the faceplate 980 may be removed to switch out battery 978 or adjust the position or orientation of core 962. Faceplate 980 may also have a microphone port 982 to allow sound to be directed to microphone 974. Pull line 984 may also be incorporated into the shell 966 or faceplate 980 so that the transmitter assembly can be readily removed from the ear canal. In some embodiments, the external microphone may be positioned outside the ear near a distal end of pull line 984, such that the external microphone is sufficiently far from the ear canal opening so as to minimize feedback from the external microphone.


In operation, ambient sound entering the pinna, or auricle, and ear canal is captured by the microphone 974, which converts sound waves into analog electrical signals for processing by the DSP unit 972. The DSP unit 972 may be coupled to an input amplifier to amplify the signal and convert the analog signal to a digital signal with an analog-to-digital converter commonly used in the art. The digital signal can then be processed by any number of known digital signal processors. The processing may consist of any combination of multi-band compression, noise suppression and noise reduction algorithms. The digitally processed signal is then converted back to an analog signal with a digital-to-analog converter. The analog signal is shaped and amplified and sent to the coil 964, which generates a modulated electromagnetic field containing audio information representative of the audio signal and, along with the core 962, directs the electromagnetic field toward the magnet of the transducer EL. The magnet of transducer EL vibrates in response to the electromagnetic field, thereby vibrating the middle-ear acoustic member to which it is coupled, for example the tympanic membrane, or, for example, the malleus 18 in FIGS. 3A and 3B of U.S. 2006/0023908, the full disclosure of which has been previously incorporated herein by reference.
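The signal path described above, from microphone capture through analog-to-digital conversion, digital processing such as multi-band compression, digital-to-analog conversion, and shaping of the drive for coil 964, can be sketched in simplified form as follows; the frame-based FFT band split, band edges, threshold, compression ratio, and sample rate are assumptions chosen for illustration and do not represent the processing of any particular embodiment.

import numpy as np

def multiband_compress(x, fs, bands=((0, 1000), (1000, 4000), (4000, 10000)),
                       threshold_db=-40.0, ratio=3.0):
    # Very simplified stand-in for multi-band compression: split the spectrum into
    # bands, reduce the level of any band above the threshold, and resynthesize.
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    out = X.copy()
    for lo, hi in bands:
        idx = (freqs >= lo) & (freqs < hi)
        if not np.any(idx):
            continue
        band = X[idx]
        level_db = 20.0 * np.log10(np.sqrt(np.mean(np.abs(band) ** 2)) + 1e-12)
        if level_db > threshold_db:
            gain_db = (threshold_db - level_db) * (1.0 - 1.0 / ratio)
            out[idx] = band * 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(out, n=len(x))

def process_block(mic_samples, fs=16000, coil_gain=0.5):
    # Sketch of the described path: digitized microphone samples in, processed
    # samples out, then scaled and clipped as the drive signal toward coil 964.
    x = np.asarray(mic_samples, dtype=float)   # stand-in for the A/D converter output
    y = multiband_compress(x, fs)              # stand-in for the DSP stage
    return np.clip(coil_gain * y, -1.0, 1.0)   # shaped/amplified coil drive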


In many embodiments, faceplate 980 also has an acoustic opening 970 to allow ambient sound to enter the open chamber 986 of the shell. This allows ambient sound to travel through the open chamber 986 along the internal compartment of the transmitter assembly and through one or more openings 968 at the distal end of the shell 966. Thus, ambient sound waves may reach the eardrum and separately impart vibration on the eardrum. This open-channel design provides a number of substantial benefits. First, the open channel minimizes the occlusive effect prevalent in many acoustic hearing systems that block the ear canal. Second, the natural ambient sound entering the ear canal allows the electromagnetically driven effective sound level output to be limited or cut off at a much lower level than with a design that blocks the ear canal.


With the two microphone embodiments, for example the external microphone and canal microphone as described herein, acoustic hearing aids can realize at least some improvement in sound localization, because the decrease in feedback achieved with the two microphones can allow at least some sound localization. For example, a first microphone to detect high frequencies can be positioned near the ear canal, for example outside the ear canal and within about 5 mm of the ear canal opening, to detect high frequency sound localization cues. A second microphone to detect low frequencies can be positioned away from the ear canal opening, for example at least about 10 mm, or even 20 mm, from the ear canal opening, to detect low frequencies and minimize feedback from the acoustic speaker positioned in the ear canal.


In some embodiments, the BTE components can be placed in body 910, except for the external microphone, such that the body 910 comprises the wireless circuitry, sound processor, battery and other components. The external microphone may extend from the body 910 and/or faceplate 980, for example similar to pull line 984 and at least about 10 mm from faceplate 980, so as to minimize feedback.



FIG. 10A shows feedback pressure at the canal microphone and feedback pressure at the external microphone versus frequency for an output transducer configured to vibrate the eardrum and produce the sensation of sound. The output transducer can be directly coupled to an ear structure such as an ossicle of the middle ear or to another structure such as the eardrum, for example with the EARLENS™ transducer EL. The feedback pressure PFB(Canal, EL) for the canal microphone with the EARLENS™ transducer EL is shown from about 0.1 kHz (100 Hz) to about 10 kHz, and can extend to about 20 kHz at the upper limit of human hearing. The feedback pressure can be expressed as a ratio in dB of sound pressure at the canal microphone to sound pressure at the eardrum. The feedback pressure PFB(External, EL) is also shown for the external microphone with transducer EL and can be expressed as a ratio of sound pressure at the external microphone to sound pressure at the eardrum. The feedback pressure at the canal microphone is greater than the feedback pressure at the external microphone. The feedback pressure is generated when a transducer, for example a magnet, supported on the eardrum is vibrated. Although feedback with this approach can be minimal, the direct vibration of the eardrum can generate at least some sound that is transmitted outward along the canal toward the canal microphone near the ear canal opening. The canal microphone feedback pressure PFB(Canal, EL) comprises a peak around 2-3 kHz and decreases above about 3 kHz. The peak around 2-3 kHz corresponds to the resonance of the ear canal. Although another sub-peak may exist between 5 and 10 kHz for the canal microphone feedback pressure PFB(Canal, EL), this peak has much lower amplitude than the global peak at 2-3 kHz. As the external microphone is farther from the eardrum than the canal microphone, the feedback pressure PFB(External, EL) for the external microphone is lower than the feedback pressure PFB(Canal, EL) for the canal microphone. The external microphone feedback pressure may also comprise a peak around 2-3 kHz that corresponds to the resonance of the ear canal and is much lower in amplitude than the feedback pressure of the canal microphone, as the external microphone is farther from the ear canal. As the high frequency localization cues can be encoded in sound frequencies above about 3 kHz, the gain of the canal microphone and external microphone can be configured to detect high frequency localization cues and minimize feedback.
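The feedback pressure ratio described above can be computed as 20 times the base-10 logarithm of the sound pressure at the microphone divided by the sound pressure at the eardrum; the brief sketch below uses purely illustrative pressure values, not measured data.

import math

def feedback_pressure_db(p_mic_pa, p_eardrum_pa):
    # Ratio, in dB, of sound pressure at a microphone to sound pressure at the eardrum.
    return 20.0 * math.log10(p_mic_pa / p_eardrum_pa)

# Hypothetical values only: the canal microphone, being closer to the eardrum, sees a
# larger fraction of the eardrum-generated pressure than the external microphone, so
# its feedback pressure ratio is higher (less negative).
print(feedback_pressure_db(0.02, 1.0))    # hypothetical canal microphone, about -34 dB
print(feedback_pressure_db(0.002, 1.0))   # hypothetical external microphone, about -54 dB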


The canal microphone and external microphone may be used with many known transducers to provide at least some high frequency localization cues with an open ear canal, for example surgically implanted output transducers and hearing aids with acoustic speakers. For example, the canal microphone feedback pressure PFB(Canal, Acoustic), when an acoustic speaker transducer is placed near the eardrum, shows a resonance similar to transducer EL and has a peak near 2-3 kHz. The external microphone feedback pressure PFB(External, Acoustic) is lower than the canal microphone feedback pressure PFB(Canal, Acoustic) at all frequencies, such that the external microphone can be used to detect sound comprising frequencies at or below the resonance frequencies of the ear canal, and the canal microphone may be used to detect high frequency localization cues at frequencies above the resonance frequencies of the ear canal. Although the canal microphone feedback pressure PFB(Canal, Acoustic) is greater for the acoustic speaker output transducer than the canal microphone feedback pressure PFB(Canal, EL) for the EARLENS™ transducer EL, the acoustic speaker may deliver at least some high frequency sound localization cues when the external microphone is used to amplify frequencies at or below the resonance frequencies of the ear canal.



FIG. 10B shows gain versus frequency at the output transducer for sound input to the canal microphone and sound input to the external microphone to detect high frequency localization cues and minimize feedback. As noted above, the high frequency localization cues of sound can be encoded in frequencies above about 3 kHz. These spatial localization cues can include at least one of head shadowing or diffraction of sound by the pinna of the ear. Hearing system 10 may comprise a binaural hearing system with a first device in a first ear canal and a second device in the ear canal of a second, contralateral ear, in which the second device is similar to the first device. To detect head shadowing, a microphone can be positioned such that the head of the user casts an acoustic shadow on the input microphone, for example with the microphone placed on a first side of the user's head opposite a second side of the user's head such that the second side faces the sound source. To detect high frequency localization cues from sound diffraction of the pinna of the user, the input microphone can be positioned in the ear canal, or external to the ear canal within about 5 mm of the entrance of the ear canal, or therebetween, such that the pinna of the ear diffracts sound waves incident on the microphone. This placement of the microphone can provide high frequency localization cues, and can also provide head shadowing of the microphone. The pinna diffraction cues that provide high frequency localization of sound can be present with monaural hearing. The gain for sound input to the external microphone for low frequencies below about 3 kHz is greater than the gain for the canal microphone. This can result in decreased feedback, as the canal microphone has decreased gain as compared to the external microphone. The gain for sound input to the canal microphone for high frequencies above about 3 kHz is greater than the gain for the external microphone, such that the user can detect high frequency localization cues above 3 kHz, for example above 4 kHz, while feedback is minimized.
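One simple way to express the gain allocation described above, with the external microphone dominating below about 3 kHz and the canal microphone dominating above it so that localization cues are preserved, is a frequency-keyed lookup; the crossover frequency and the gain values in the sketch below are hypothetical placeholders rather than values taken from FIG. 10B.

def microphone_gains_db(freq_hz, crossover_hz=3000.0):
    # Returns (external_gain_db, canal_gain_db) for a given frequency. Below the
    # crossover the external microphone dominates to limit feedback; above it the
    # canal microphone dominates so high frequency localization cues are preserved.
    # Crossover frequency and gain values are hypothetical placeholders.
    if freq_hz < crossover_hz:
        return 20.0, 0.0
    return 0.0, 20.0

for f in (500, 2000, 4000, 8000):
    external_db, canal_db = microphone_gains_db(f)
    print(f, external_db, canal_db)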


The gain profiles relate sound input at the microphone to sound output from the output transducer to the user, such that the gain profiles for each of the canal microphone and external microphone can be achieved in many ways with many configurations of at least one of the microphone, the circuitry and the transducer. The gain profile for sound input to the external microphone may comprise low pass components configured with at least one of a low pass microphone, low pass circuitry, or a low pass transducer. The gain profile for sound input to the canal microphone may comprise high pass components configured with at least one of a high pass microphone, high pass circuitry, or a high pass transducer. The circuitry may comprise the sound processor comprising a tangible medium configured to high pass filter the sound input from the canal microphone and low pass filter the sound input from the external microphone.



FIG. 10C shows a canal microphone with high pass filter circuitry and an external microphone with low pass filter circuitry, both coupled to a transducer to provide gain in response to frequency as in FIG. 10B. Canal microphone CM is coupled to high pass filter circuitry HPF. The high pass filter circuitry may comprise known high pass filters and is coupled to a gain block, GAIN1, which may comprise at least one of an amplifier AMP1 or a known sound processor configured to process the output of the high pass filter. External microphone EM is coupled to low pass filter circuitry LPF. The low pass filter circuitry may comprise known low pass filters and is coupled to a gain block, GAIN2, which may comprise at least one of an amplifier AMP2 or a known sound processor configured to process the output of the low pass filter. The outputs can be combined at the transducer, and the transducer configured to vibrate the eardrum, for example directly. In some embodiments, the output of the canal microphone and the output of the external microphone can be input separately to one sound processor and combined, which sound processor may then comprise an output adapted for the transducer.
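A minimal sketch of the arrangement of FIG. 10C follows, assuming second-order Butterworth filters, a 24 kHz sample rate, and a 3 kHz crossover near the ear canal resonance; these are assumptions for illustration, and the filter paths and gain factors are simplified stand-ins for the HPF, LPF, and gain blocks described above.

import numpy as np
from scipy.signal import butter, lfilter

FS = 24000            # sample rate in Hz (assumed)
CROSSOVER_HZ = 3000   # near the ear canal resonance (assumed crossover)

# Canal microphone path: high pass filter (the "HPF" block), then its gain block.
b_hp, a_hp = butter(2, CROSSOVER_HZ / (FS / 2), btype="high")
# External microphone path: low pass filter (the "LPF" block), then its gain block.
b_lp, a_lp = butter(2, CROSSOVER_HZ / (FS / 2), btype="low")

def combine_for_transducer(canal_mic, external_mic, gain_canal=1.0, gain_external=1.0):
    # High pass the canal microphone, low pass the external microphone, scale each
    # path by its gain, and sum the two paths as the drive signal for the transducer.
    hp = gain_canal * lfilter(b_hp, a_hp, np.asarray(canal_mic, dtype=float))
    lp = gain_external * lfilter(b_lp, a_lp, np.asarray(external_mic, dtype=float))
    return hp + lp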


FIG. 10D1 shows a canal microphone coupled to a first transducer TRANSDUCER1 and an external microphone coupled to a second transducer TRANSDUCER2 to provide gain in response to frequency as in FIG. 10B. The first transducer may comprise output characteristics with a high frequency peak, for example around 8-10 kHz, such that high frequencies are passed with greater energy. The second transducer may comprise a low frequency peak, for example around 1 kHz, such that low frequencies are passed with greater energy. The input of the first transducer may be coupled to the output of a first sound processor and a first amplifier as described above. The input of the second transducer may be coupled to the output of a second sound processor and a second amplifier. Further improvement in the output profile for the canal microphone can be obtained with a high pass filter coupled to the canal microphone. A low pass filter can also be coupled to the external microphone. In some embodiments, the output of the canal microphone and the output of the external microphone can be input separately to one sound processor and combined, which sound processor may then comprise a separate output adapted for each transducer.


FIG. 10D2 shows the canal microphone coupled to a first transducer comprising a first coil wrapped around a core, and the external microphone coupled to a second transducer comprising a second coil wrapped around the core, as in FIG. 10D1. A first coil COIL1 is wrapped around the core and comprises a first number of turns. A second coil COIL2 is wrapped around the core and comprises a second number of turns. The number of turns for each coil can be optimized to produce a first output peak for the first transducer and a second output peak for the second transducer, with the second output peak at a frequency below the frequency of the first output peak. Although coils are shown, many transducers can be used, such as piezoelectric and photostrictive materials, for example as described above. The first transducer may comprise at least a portion of the second transducer, such that the first transducer at least partially overlaps with the second transducer, for example with a common magnet supported on the eardrum.
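As a loose illustration of how turn count can shift an output peak, the sketch below treats each coil drive as an idealized series RLC circuit in which inductance scales with the square of the number of turns and the capacitance is an assumed fixed value; the actual output peaks of the transducers are electromechanical and are not modeled here, so the parameter values and resulting numbers are illustrative only.

import math

def coil_resonance_hz(n_turns, inductance_per_turn_sq_h=1e-8, capacitance_f=1e-6):
    # Idealized series-RLC view: inductance scales as the square of the turn count,
    # and the electrical resonance is f0 = 1 / (2*pi*sqrt(L*C)). Both parameters are
    # assumed values chosen only to place the peaks in the audio band.
    inductance_h = inductance_per_turn_sq_h * n_turns ** 2
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

print(coil_resonance_hz(200))   # fewer turns: higher idealized peak, roughly 8 kHz
print(coil_resonance_hz(800))   # more turns: lower idealized peak, roughly 2 kHz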


The first input transducer, for example the canal microphone, and second input transducer, for example the external microphone, can be arranged in many ways to detect sound localization cues and minimize feedback. These arrangements can be obtained with at least one of a first input transducer gain, a second input transducer gain, high pass filter circuitry for the first input transducer, low pass filter circuitry for the second input transducer, sound processor digital filters or output characteristics of the at least one output transducer.


The canal microphone may comprise a first input transducer coupled to at least one output transducer to vibrate an eardrum of the ear in response to high frequency sound localization cues above the resonance frequencies of the ear canal, for example resonance frequencies from about 2 kHz to about 3 kHz. The external microphone may comprise a second input transducer coupled to the at least one output transducer to vibrate the eardrum in response to sound frequencies at or below the resonance frequency of the ear canal. The resonance frequency of the ear canal may comprise frequencies within a range from about 2 to 3 kHz, as noted above.


The first input transducer can be coupled to at least one output transducer to vibrate the eardrum with a first gain for first sound frequencies corresponding to the resonance frequencies of the ear canal. The second input transducer can be coupled to the at least one output transducer to vibrate the eardrum with a second gain for the sound frequencies corresponding to the resonance frequencies of the ear canal, in which the first gain is less than the second gain to minimize feedback.


The first input transducer can be coupled to the at least one output transducer to vibrate the eardrum with a resonance gain for first sound frequencies corresponding to the resonance frequencies of the ear canal and a cue gain for sound localization cues comprising frequencies above the resonance frequencies of the ear canal. The cue gain can be greater than the resonance gain to minimize feedback and allow the user to perceive the sound localization cues.



FIG. 11A shows an elongate support 1110 comprising a plurality of optical fibers 1110P configured to transmit light and receive light to measure displacement of the eardrum. The plurality of optical fibers 1110P comprises at least a first optical fiber 1110A and a second optical fiber 1110B. First optical fiber 1110A is configured to transmit light from a source. Light circuitry comprises the light source and can be configured to emit light energy such that the user perceives sound. The optical transducer assembly OTA can be configured for placement on an outer surface of the eardrum, as described above.


The displacement of the eardrum and optical transducer assembly can be measured with a second input transducer which comprises at least one of an optical vibrometer, a laser vibrometer, a laser Doppler vibrometer, or an interferometer configured to generate a signal in response to vibration of the eardrum. A portion of the transmitted light λT can be reflected from the eardrum and the optical transducer assembly OTA and comprises reflected light λR. The reflected light enters second optical fiber 1110B and is received by an optical detector coupled to a distal end of the second optical fiber 1110B, for example a laser vibrometer detector coupled to detector circuitry to measure vibration of the eardrum. The plurality of optical fibers may comprise a third optical fiber for transmission of light from a laser of the laser vibrometer toward the eardrum. For example, a laser source comprising laser circuitry can be coupled to the proximal end of the support to transmit light toward the ear to measure eardrum displacement. The optical transducer assembly may comprise a reflective surface to reflect light from the laser used for the laser vibrometer, and the optical wavelengths used to induce vibration of the eardrum can be separate from the optical wavelengths used to measure vibration of the eardrum. The optical detection of vibration of the eardrum can be used for near-end speech measurement, similar to the piezoelectric transducer described above. The optical detection of vibration of the eardrum can also be used for noise cancellation, such that vibration of the eardrum is minimized in response to the optical signal reflected from at least one of the eardrum or the optical transducer assembly.
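The passage above does not specify a particular cancellation algorithm; as one hedged illustration, a standard least-mean-squares (LMS) adaptive filter could use the external microphone as the noise reference and the optically measured eardrum vibration as the error signal to be minimized. The sketch below omits the secondary-path (filtered-x) modeling that a practical active-cancellation loop would require, and the tap count and step size are arbitrary.

import numpy as np

def lms_anti_noise(reference, eardrum_vibration, n_taps=32, mu=0.005):
    # Standard LMS update, shown offline for clarity. The adaptive filter output is
    # the anti-noise drive for the output transducer; the error is the residual
    # eardrum vibration reported by the optical vibrometer. In a real loop the error
    # depends on the drive through the transducer/eardrum path (filtered-x LMS),
    # which is not modeled here.
    w = np.zeros(n_taps)                 # adaptive filter weights
    buf = np.zeros(n_taps)               # delay line of recent reference samples
    drive = np.zeros(len(reference))
    for n in range(len(reference)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        drive[n] = -np.dot(w, buf)       # anti-noise sent toward the eardrum
        err = eardrum_vibration[n]       # residual vibration to be minimized
        w += mu * err * buf              # LMS step using the measured residual
    return drive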


Elongate support 1110 and at least one positioner, for example at least one of positioner 1130 or positioner 1140, or both, can be configured to position support 1110 in the ear canal with the electromagnetic energy transducer positioned outside the ear canal, and the microphone positioned at least one of in the ear canal or near the ear canal opening so as to detect high frequency spatial localization cues, as described above. For example, the output energy transducer, or emitter, may comprise a light source configured to emit electromagnetic energy comprising optical frequencies, and the light source can be positioned outside the ear canal, for example in a BTE unit. The light source may comprise at least one of an LED or a laser diode, for example. The light source, also referred to as an emitter, can emit visible light, or infrared light, or a combination thereof. The light source can be coupled to the distal end of the support with a waveguide, such as an optical fiber with a distal end of the optical fiber 1110D comprising a distal end of the support. The optical energy delivery transducer can be coupled to the proximal portion of the elongate support to transmit optical energy to the distal end. The positioner can be adapted to position the distal end of the support near an eardrum when the proximal portion is placed at a location near an ear canal opening. The intermediate portion of elongate support 1110 can be sized to minimize contact with a canal of the ear between the proximal portion and the distal end.


The at least one positioner, for example positioner 1130, can improve optical coupling between the light source and a device positioned on the eardrum, so as to increase the efficiency of light energy transfer from the output energy transducer, or emitter, to an optical device positioned on the eardrum, for example by improving alignment of the distal end 1110D of the support that emits light with a transducer positioned at least one of on the eardrum or in the middle ear. The at least one positioner and elongate support 1110 comprising an optical fiber can be combined with many known optical transducer and hearing devices, for example as described in U.S. application Ser. No. 11/248,459, entitled “Systems and Methods for Photo-Mechanical Hearing Transduction”, the full disclosure of which has been previously incorporated herein by reference, and U.S. Pat. No. 7,289,639, entitled “Hearing Implant”, the full disclosure of which is incorporated herein by reference. The positioner and elongate support may also be combined with photo-electro-mechanical transducers positioned on the ear drum with a support, as described in U.S. Pat. Ser. Nos. 61/073,271 and 61/073,281, both filed on Jun. 17, 2008, the full disclosures of which have been previously incorporated herein by reference.


In specific embodiments, elongate support 1110 may comprise an optical fiber coupled to positioner 1130 to align the distal end of the optical fiber with an output transducer assembly supported on the eardrum. The output transducer assembly may comprise a photodiode configured to receive light transmitted from the distal end of support 1110 and supported with support component 30 placed on the eardrum, as described above. The output transducer assembly can be separated from the distal end of the optical fiber, and the proximal end of the optical fiber can be positioned in the BTE unit and coupled to the light source. The output transducer assembly can be similar to the output transducer assembly described in U.S. 2006/0189841, with positioner 1130 used to align the optical fiber with the output transducer assembly, and the BTE unit may comprise a housing with the light source positioned therein.



FIG. 11B shows a positioner for use with an elongate support as in FIG. 11A and adapted for placement near the opening to the ear canal. Positioner 1140 includes flanges 1142 that extend radially outward to engage the skin of the ear canal. Flanges 1142 are formed from a flexible material. Openings 1144 are defined by flanges 1142. Openings 1144 permit sound waves to pass positioner 1140 while the positioner is positioned in the ear canal, so that the sound waves are transmitted to the tympanic membrane. Although flanges 1142 define an outer boundary of support 1140 with an elliptical shape, flanges 1142 can comprise an outer boundary with any shape, for example circular. In some embodiments, the positioner has an outer boundary defined by the shape of the individual user's ear canal, for example embodiments where positioner 1140 is made from a mold of the user's ear. Elongate support 1110 extends transversely through positioner 1140.



FIG. 11C shows a positioner adapted for placement near a distal end of the elongate support as in FIG. 11A. Positioner 1130 includes flanges 1132 that extend radially outward to engage the skin of the ear canal. Flanges 1132 are formed from a flexible material. Openings 1134 are defined by flanges 1132. Openings 1134 permit sound waves to pass positioner 1130 while the positioner is positioned in the ear canal, so that the sound waves are transmitted to the tympanic membrane. Although flanges 1132 define an outer boundary of support 1130 with an elliptical shape, flanges 1132 can comprise an outer boundary with any shape, for example circular. In some embodiments, the positioner has an outer boundary defined by the shape of the individual user's ear canal, for example embodiments where positioner 1130 is made from a mold of the user's ear. Elongate support 1110 extends transversely through positioner 1130.


Although an electromagnetic transducer comprising coil 1119 is shown positioned on the end of elongate support 1110, the positioner and elongate support can be used with many types of transducers positioned at many locations, for example optical electromagnetic transducers positioned outside the ear canal and coupled to the support to deliver optical energy along the support, for example through at least one optical fiber. The at least one optical fiber may comprise a single optical fiber or a plurality of two or more optical fibers of the support. The plurality of optical fibers may comprise a parallel configuration of optical fibers configured to transmit at least two channels in parallel along the support toward the eardrum of the user.


While the exemplary embodiments have been described above in some detail for clarity of understanding and by way of example, a variety of additional modifications, adaptations, and changes may be clear to those of skill in the art. Hence, the scope of the present invention is limited solely by the appended claims.

Claims
  • 1. A method of transmitting information through an audio listening system to an ear of a user, wherein the system comprises:
    an external microphone configured for placement external to the ear canal to measure external sound pressure;
    a ring piezoelectric transducer configured for placement inside the ear canal on an eardrum of the user to vibrate the eardrum and transmit sound to the user in response to the external microphone, wherein the transducer comprises an output transducer, the output transducer being configured to vibrate the eardrum;
    a sound processor configured with active noise cancellation to cause the transducer to adjust vibration of the eardrum to minimize or cancel an external sound perceived by the user based on the external sound pressure measured by the external microphone; and
    a coil wrapped around a core coupled to an output of the sound processor and configured to emit a magnetic field to the transducer to vibrate the transducer when the transducer is positioned on the eardrum of the user, wherein the magnetic field comprises a combination of the external sound perceived by the user based on the external sound pressure measured by the external microphone and a direct audio signal; and
    the method comprising the steps of:
    receiving sound through the external microphone;
    transmitting the received sound to the user by vibrating the eardrum of the user;
    adjusting the vibration of the eardrum to minimize or cancel the transmitted sound based on an external sound pressure measured by the external microphone.
CROSS REFERENCE TO RELATED APPLICATIONS DATA

The present application is a continuation of U.S. patent application Ser. No. 16/682,329, filed Nov. 13, 2019, now U.S. Pat. No. 10,863,286; which is a continuation of U.S. patent application Ser. No. 16/173,869, filed Oct. 29, 2018, now U.S. Pat. No. 10,516,950; which is a continuation of U.S. patent application Ser. No. 15/804,995, filed Nov. 6, 2017, now U.S. Pat. No. 10,154,352; which is a continuation of U.S. patent application Ser. No. 14/949,495, filed Nov. 23, 2015; which is a continuation of U.S. patent application Ser. No. 13/768,825, filed Feb. 15, 2013, now U.S. Pat. No. 9,226,083; which is a divisional of U.S. patent application Ser. No. 12/251,200, filed Oct. 14, 2008, now U.S. Pat. No. 8,401,212; which claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 60/979,645, filed Oct. 12, 2007; the full disclosures of which are incorporated herein by reference in their entirety.

US Referenced Citations (690)
Number Name Date Kind
2763334 Starkey Sep 1956 A
3209082 McCarrell et al. Sep 1965 A
3229049 Goldberg Jan 1966 A
3440314 Eldon Apr 1969 A
3449768 Doyle et al. Jun 1969 A
3526949 Frank Sep 1970 A
3549818 Justin Dec 1970 A
3585416 Howard Jun 1971 A
3594514 Robert Jul 1971 A
3710399 Hurst Jan 1973 A
3712962 Epley Jan 1973 A
3764748 Branch et al. Oct 1973 A
3808179 Gaylord Apr 1974 A
3870832 Fredrickson Mar 1975 A
3882285 Nunley et al. May 1975 A
3965430 Brandt Jun 1976 A
3985977 Beaty et al. Oct 1976 A
4002897 Kleinman et al. Jan 1977 A
4031318 Pitre Jun 1977 A
4061972 Burgess Dec 1977 A
4075042 Das Feb 1978 A
4098277 Mendell Jul 1978 A
4109116 Victoreen Aug 1978 A
4120570 Gaylord Oct 1978 A
4207441 Ricard et al. Jun 1980 A
4248899 Lyon et al. Feb 1981 A
4252440 Fedors et al. Feb 1981 A
4281419 Treace Aug 1981 A
4303772 Novicky Dec 1981 A
4319359 Wolf Mar 1982 A
4334315 Ono et al. Jun 1982 A
4334321 Edelman Jun 1982 A
4338929 Lundin et al. Jul 1982 A
4339954 Anson et al. Jul 1982 A
4357497 Hochmair et al. Nov 1982 A
4375016 Harada Feb 1983 A
4380689 Giannetti Apr 1983 A
4428377 Zollner et al. Jan 1984 A
4524294 Brody Jun 1985 A
4540761 Kawamura et al. Sep 1985 A
4556122 Goode Dec 1985 A
4592087 Killion May 1986 A
4606329 Hough Aug 1986 A
4611598 Hortmann et al. Sep 1986 A
4628907 Epley Dec 1986 A
4641377 Rush et al. Feb 1987 A
4652414 Schlaegel Mar 1987 A
4654554 Kishi Mar 1987 A
4689819 Killion Aug 1987 A
4696287 Hortmann et al. Sep 1987 A
4729366 Schaefer Mar 1988 A
4741339 Harrison et al. May 1988 A
4742499 Butler May 1988 A
4756312 Epley Jul 1988 A
4759070 Voroba et al. Jul 1988 A
4766607 Feldman Aug 1988 A
4774933 Hough et al. Oct 1988 A
4776322 Hough et al. Oct 1988 A
4782818 Mori Nov 1988 A
4800884 Heide et al. Jan 1989 A
4800982 Carlson Jan 1989 A
4817607 Tatge Apr 1989 A
4840178 Heide et al. Jun 1989 A
4845755 Busch et al. Jul 1989 A
4865035 Mori Sep 1989 A
4870688 Voroba et al. Sep 1989 A
4918745 Hutchison Apr 1990 A
4932405 Peeters et al. Jun 1990 A
4936305 Ashtiani et al. Jun 1990 A
4944301 Widin et al. Jul 1990 A
4948855 Novicky Aug 1990 A
4957478 Maniglia et al. Sep 1990 A
4963963 Dorman Oct 1990 A
4982434 Lenhardt et al. Jan 1991 A
4999819 Newnham et al. Mar 1991 A
5003608 Carlson Mar 1991 A
5012520 Steeger Apr 1991 A
5015224 Maniglia May 1991 A
5015225 Hough et al. May 1991 A
5031219 Ward et al. Jul 1991 A
5061282 Jacobs Oct 1991 A
5066091 Stoy et al. Nov 1991 A
5068902 Ward Nov 1991 A
5094108 Kim et al. Mar 1992 A
5117461 Moseley May 1992 A
5142186 Cross et al. Aug 1992 A
5163957 Sade et al. Nov 1992 A
5167235 Seacord et al. Dec 1992 A
5201007 Ward et al. Apr 1993 A
5220612 Tibbetts et al. Jun 1993 A
5259032 Perkins et al. Nov 1993 A
5272757 Scofield et al. Dec 1993 A
5276910 Buchele Jan 1994 A
5277694 Leysieffer et al. Jan 1994 A
5282858 Bisch et al. Feb 1994 A
5296797 Bartlett Mar 1994 A
5298692 Ikeda et al. Mar 1994 A
5338287 Miller et al. Aug 1994 A
5360388 Spindel et al. Nov 1994 A
5378933 Pfannenmueller et al. Jan 1995 A
5402496 Soli et al. Mar 1995 A
5411467 Hortmann et al. May 1995 A
5424698 Dydyk et al. Jun 1995 A
5425104 Shennib Jun 1995 A
5440082 Claes Aug 1995 A
5440237 Brown et al. Aug 1995 A
5455994 Termeer et al. Oct 1995 A
5456654 Ball Oct 1995 A
5531787 Lesinski et al. Jul 1996 A
5531954 Heide et al. Jul 1996 A
5535282 Luca Jul 1996 A
5554096 Ball Sep 1996 A
5558618 Maniglia Sep 1996 A
5571148 Loeb et al. Nov 1996 A
5572594 Devoe et al. Nov 1996 A
5606621 Reiter et al. Feb 1997 A
5624376 Ball et al. Apr 1997 A
5654530 Sauer et al. Aug 1997 A
5692059 Kruger Nov 1997 A
5699809 Combs et al. Dec 1997 A
5701348 Shennib et al. Dec 1997 A
5707338 Adams et al. Jan 1998 A
5715321 Andrea et al. Feb 1998 A
5721783 Anderson Feb 1998 A
5722411 Suzuki et al. Mar 1998 A
5729077 Newnham et al. Mar 1998 A
5740258 Goodwin-Johansson Apr 1998 A
5742692 Garcia et al. Apr 1998 A
5749912 Zhang et al. May 1998 A
5762583 Adams et al. Jun 1998 A
5772575 Lesinski et al. Jun 1998 A
5774259 Saitoh et al. Jun 1998 A
5782744 Money Jul 1998 A
5788711 Lehner et al. Aug 1998 A
5795287 Ball et al. Aug 1998 A
5797834 Goode Aug 1998 A
5800336 Ball et al. Sep 1998 A
5804109 Perkins Sep 1998 A
5804907 Park et al. Sep 1998 A
5814095 Mueller et al. Sep 1998 A
5824022 Zilberman et al. Oct 1998 A
5825122 Givargizov et al. Oct 1998 A
5836863 Bushek et al. Nov 1998 A
5842967 Kroll Dec 1998 A
5851199 Peerless et al. Dec 1998 A
5857958 Ball et al. Jan 1999 A
5859916 Ball et al. Jan 1999 A
5868682 Combs et al. Feb 1999 A
5879283 Adams et al. Mar 1999 A
5888187 Jaeger et al. Mar 1999 A
5897486 Ball et al. Apr 1999 A
5899847 Adams et al. May 1999 A
5900274 Chatterjee et al. May 1999 A
5906635 Maniglia May 1999 A
5913815 Ball et al. Jun 1999 A
5922017 Bredberg et al. Jul 1999 A
5922077 Espy et al. Jul 1999 A
5935170 Haakansson et al. Aug 1999 A
5940519 Kuo Aug 1999 A
5949895 Ball et al. Sep 1999 A
5951601 Lesinski et al. Sep 1999 A
5984859 Lesinski Nov 1999 A
5987146 Pluvinage et al. Nov 1999 A
6001129 Bushek et al. Dec 1999 A
6005955 Kroll et al. Dec 1999 A
6011984 Van et al. Jan 2000 A
6024717 Ball et al. Feb 2000 A
6038480 Hrdlicka et al. Mar 2000 A
6045528 Arenberg et al. Apr 2000 A
6050933 Bushek et al. Apr 2000 A
6067474 Schulman et al. May 2000 A
6068589 Neukermans May 2000 A
6068590 Brisken May 2000 A
6072884 Kates Jun 2000 A
6084975 Perkins Jul 2000 A
6093144 Jaeger et al. Jul 2000 A
6135612 Clore Oct 2000 A
6137889 Shennib et al. Oct 2000 A
6139488 Ball Oct 2000 A
6153966 Neukermans Nov 2000 A
6168948 Anderson et al. Jan 2001 B1
6174278 Jaeger et al. Jan 2001 B1
6175637 Fujihira et al. Jan 2001 B1
6181801 Puthuff et al. Jan 2001 B1
6190305 Ball et al. Feb 2001 B1
6190306 Kennedy Feb 2001 B1
6208445 Reime Mar 2001 B1
6216040 Harrison Apr 2001 B1
6217508 Ball et al. Apr 2001 B1
6219427 Kates et al. Apr 2001 B1
6222302 Imada et al. Apr 2001 B1
6222927 Feng et al. Apr 2001 B1
6240192 Brennan et al. May 2001 B1
6241767 Stennert et al. Jun 2001 B1
6259951 Kuzma et al. Jul 2001 B1
6261224 Adams et al. Jul 2001 B1
6264603 Kennedy Jul 2001 B1
6277148 Dormer Aug 2001 B1
6312959 Datskos Nov 2001 B1
6339648 McIntosh et al. Jan 2002 B1
6342035 Kroll et al. Jan 2002 B1
6354990 Juneau et al. Mar 2002 B1
6359993 Brimhall Mar 2002 B2
6366863 Bye et al. Apr 2002 B1
6374143 Berrang et al. Apr 2002 B1
6385363 Rajic et al. May 2002 B1
6387039 Moses May 2002 B1
6390971 Adams et al. May 2002 B1
6393130 Stonikas et al. May 2002 B1
6422991 Jaeger Jul 2002 B1
6432248 Popp et al. Aug 2002 B1
6434246 Kates et al. Aug 2002 B1
6434247 Kates et al. Aug 2002 B1
6436028 Dormer Aug 2002 B1
6438244 Juneau et al. Aug 2002 B1
6445799 Taenzer et al. Sep 2002 B1
6473512 Juneau et al. Oct 2002 B1
6475134 Ball et al. Nov 2002 B1
6491622 Kasic, II et al. Dec 2002 B1
6491644 Vujanic et al. Dec 2002 B1
6491722 Kroll et al. Dec 2002 B1
6493453 Glendon Dec 2002 B1
6493454 Loi et al. Dec 2002 B1
6498858 Kates Dec 2002 B2
6507758 Greenberg et al. Jan 2003 B1
6519376 Biagi et al. Feb 2003 B2
6523985 Hamanaka et al. Feb 2003 B2
6536530 Schultz et al. Mar 2003 B2
6537200 Leysieffer et al. Mar 2003 B2
6547715 Mueller et al. Apr 2003 B1
6549633 Westermann Apr 2003 B1
6549635 Gebert Apr 2003 B1
6554761 Puria et al. Apr 2003 B1
6575894 Leysieffer et al. Jun 2003 B2
6592513 Kroll et al. Jul 2003 B1
6603860 Taenzer et al. Aug 2003 B1
6620110 Schmid Sep 2003 B2
6626822 Jaeger et al. Sep 2003 B1
6629922 Puria et al. Oct 2003 B1
6631196 Taenzer et al. Oct 2003 B1
6643378 Schumaier Nov 2003 B2
6663575 Leysieffer Dec 2003 B2
6668062 Luo et al. Dec 2003 B1
6676592 Ball et al. Jan 2004 B2
6681022 Puthuff et al. Jan 2004 B1
6695943 Juneau et al. Feb 2004 B2
6697674 Leysieffer Feb 2004 B2
6724902 Shennib et al. Apr 2004 B1
6726618 Miller Apr 2004 B2
6726718 Carlyle et al. Apr 2004 B1
6727789 Tibbetts et al. Apr 2004 B2
6728024 Ribak Apr 2004 B2
6735318 Cho May 2004 B2
6754358 Boesen et al. Jun 2004 B1
6754359 Svean et al. Jun 2004 B1
6754537 Harrison et al. Jun 2004 B1
6785394 Olsen et al. Aug 2004 B1
6792114 Kates et al. Sep 2004 B1
6801629 Brimhall et al. Oct 2004 B2
6829363 Sacha Dec 2004 B2
6831986 Kates Dec 2004 B2
6837857 Stirnemann Jan 2005 B2
6842647 Griffith et al. Jan 2005 B1
6888949 Vanden et al. May 2005 B1
6900926 Ribak May 2005 B2
6912289 Vonlanthen et al. Jun 2005 B2
6920340 Laderman Jul 2005 B2
6931231 Griffin Aug 2005 B1
6940988 Shennib et al. Sep 2005 B1
6940989 Shennib et al. Sep 2005 B1
6942989 Felkner et al. Sep 2005 B2
D512979 Corcoran et al. Dec 2005 S
6975402 Bisson et al. Dec 2005 B2
6978159 Feng et al. Dec 2005 B2
7020297 Fang et al. Mar 2006 B2
7024010 Saunders et al. Apr 2006 B2
7043037 Lichtblau et al. May 2006 B2
7050675 Zhou et al. May 2006 B2
7050876 Fu et al. May 2006 B1
7057256 Mazur et al. Jun 2006 B2
7058182 Kates Jun 2006 B2
7058188 Allred Jun 2006 B1
7072475 Denap et al. Jul 2006 B1
7076076 Bauman Jul 2006 B2
7095981 Voroba et al. Aug 2006 B1
7167572 Harrison et al. Jan 2007 B1
7174026 Niederdrank et al. Feb 2007 B2
7179238 Hissong Feb 2007 B2
7181034 Armstrong Feb 2007 B2
7203331 Boesen Apr 2007 B2
7239069 Cho Jul 2007 B2
7245732 Jorgensen et al. Jul 2007 B2
7255457 Ducharme et al. Aug 2007 B2
7266208 Charvin et al. Sep 2007 B2
7289639 Abel et al. Oct 2007 B2
7313245 Shennib Dec 2007 B1
7315211 Lee et al. Jan 2008 B1
7322930 Jaeger et al. Jan 2008 B2
7349741 Maltan et al. Mar 2008 B2
7354792 Mazur et al. Apr 2008 B2
7376563 Leysieffer et al. May 2008 B2
7390689 Mazur et al. Jun 2008 B2
7394909 Widmer et al. Jul 2008 B1
7421087 Perkins et al. Sep 2008 B2
7424122 Ryan Sep 2008 B2
7444877 Li et al. Nov 2008 B2
7547275 Cho et al. Jun 2009 B2
7630646 Anderson et al. Dec 2009 B2
7645877 Gmeiner et al. Jan 2010 B2
7668325 Puria et al. Feb 2010 B2
7747295 Choi Jun 2010 B2
7778434 Juneau et al. Aug 2010 B2
7809150 Natarajan et al. Oct 2010 B2
7822215 Carazo et al. Oct 2010 B2
7826632 Von Buol et al. Nov 2010 B2
7853033 Maltan et al. Dec 2010 B2
7867160 Pluvinage et al. Jan 2011 B2
7883535 Cantin et al. Feb 2011 B2
7885359 Meltzer Feb 2011 B2
7955249 Perkins et al. Jun 2011 B2
7983435 Moses Jul 2011 B2
8090134 Takigawa et al. Jan 2012 B2
8099169 Karunasiri Jan 2012 B1
8116494 Rass Feb 2012 B2
8128551 Jolly Mar 2012 B2
8157730 LeBoeuf et al. Apr 2012 B2
8197461 Arenberg et al. Jun 2012 B1
8204786 LeBoeuf et al. Jun 2012 B2
8233651 Haller Jul 2012 B1
8251903 LeBoeuf et al. Aug 2012 B2
8284970 Sacha Oct 2012 B2
8295505 Weinans et al. Oct 2012 B2
8295523 Fay et al. Oct 2012 B2
8320601 Takigawa et al. Nov 2012 B2
8320982 LeBoeuf et al. Nov 2012 B2
8340310 Ambrose et al. Dec 2012 B2
8340335 Shennib Dec 2012 B1
8391527 Feucht et al. Mar 2013 B2
8396235 Gebhardt et al. Mar 2013 B2
8396239 Fay et al. Mar 2013 B2
8401212 Puria et al. Mar 2013 B2
8401214 Perkins et al. Mar 2013 B2
8506473 Puria Aug 2013 B2
8512242 LeBoeuf et al. Aug 2013 B2
8526651 Lafort et al. Sep 2013 B2
8526652 Ambrose et al. Sep 2013 B2
8526971 Giniger et al. Sep 2013 B2
8545383 Wenzel et al. Oct 2013 B2
8600089 Wenzel et al. Dec 2013 B2
8647270 LeBoeuf et al. Feb 2014 B2
8652040 LeBoeuf et al. Feb 2014 B2
8684922 Tran Apr 2014 B2
8696054 Crum Apr 2014 B2
8696541 Pluvinage et al. Apr 2014 B2
8700111 LeBoeuf et al. Apr 2014 B2
8702607 LeBoeuf et al. Apr 2014 B2
8715152 Puria et al. May 2014 B2
8715153 Puria et al. May 2014 B2
8715154 Perkins et al. May 2014 B2
8761423 Wagner et al. Jun 2014 B2
8787609 Perkins et al. Jul 2014 B2
8788002 LeBoeuf et al. Jul 2014 B2
8817998 Inoue Aug 2014 B2
8824715 Fay et al. Sep 2014 B2
8837758 Knudsen Sep 2014 B2
8845705 Perkins et al. Sep 2014 B2
8855323 Kroman Oct 2014 B2
8858419 Puria et al. Oct 2014 B2
8885860 Djalilian et al. Nov 2014 B2
8886269 LeBoeuf et al. Nov 2014 B2
8888701 LeBoeuf et al. Nov 2014 B2
8923941 LeBoeuf et al. Dec 2014 B2
8929965 LeBoeuf et al. Jan 2015 B2
8929966 LeBoeuf et al. Jan 2015 B2
8934952 LeBoeuf et al. Jan 2015 B2
8942776 LeBoeuf et al. Jan 2015 B2
8961415 LeBoeuf et al. Feb 2015 B2
8986187 Perkins et al. Mar 2015 B2
8989830 LeBoeuf et al. Mar 2015 B2
9044180 LeBoeuf et al. Jun 2015 B2
9049528 Fay et al. Jun 2015 B2
9055379 Puria et al. Jun 2015 B2
9131312 LeBoeuf et al. Sep 2015 B2
9154891 Puria et al. Oct 2015 B2
9211069 Larsen et al. Dec 2015 B2
9226083 Puria et al. Dec 2015 B2
9277335 Perkins et al. Mar 2016 B2
9289135 LeBoeuf et al. Mar 2016 B2
9289175 LeBoeuf et al. Mar 2016 B2
9301696 LeBoeuf et al. Apr 2016 B2
9314167 LeBoeuf et al. Apr 2016 B2
9392377 Olsen et al. Jul 2016 B2
9427191 LeBoeuf Aug 2016 B2
9497556 Kaltenbacher et al. Nov 2016 B2
9521962 LeBoeuf Dec 2016 B2
9524092 Ren et al. Dec 2016 B2
9538921 LeBoeuf et al. Jan 2017 B2
9544700 Puria et al. Jan 2017 B2
9564862 Hoyerby Feb 2017 B2
9591409 Puria et al. Mar 2017 B2
9749758 Puria et al. Aug 2017 B2
9750462 LeBoeuf et al. Sep 2017 B2
9788785 LeBoeuf Oct 2017 B2
9788794 LeBoeuf et al. Oct 2017 B2
9794653 Aumer et al. Oct 2017 B2
9794688 You Oct 2017 B2
9801552 Romesburg Oct 2017 B2
9808204 LeBoeuf et al. Nov 2017 B2
9924276 Wenzel Mar 2018 B2
9930458 Freed et al. Mar 2018 B2
9949035 Rucker et al. Apr 2018 B2
9949039 Perkins et al. Apr 2018 B2
9949045 Kure et al. Apr 2018 B2
9961454 Puria et al. May 2018 B2
9964672 Phair et al. May 2018 B2
10003888 Stephanou et al. Jun 2018 B2
10034103 Puria et al. Jul 2018 B2
10143592 Goldstein Dec 2018 B2
10154352 Perkins et al. Dec 2018 B2
10178483 Teran et al. Jan 2019 B2
10206045 Kaltenbacher et al. Feb 2019 B2
10237663 Puria et al. Mar 2019 B2
10284964 Olsen et al. May 2019 B2
10286215 Perkins et al. May 2019 B2
10292601 Perkins et al. May 2019 B2
10306381 Sandhu et al. May 2019 B2
10492010 Rucker et al. Nov 2019 B2
10511913 Puria et al. Dec 2019 B2
10516946 Puria et al. Dec 2019 B2
10516949 Puria et al. Dec 2019 B2
10516950 Perkins et al. Dec 2019 B2
10516951 Wenzel Dec 2019 B2
10531206 Freed et al. Jan 2020 B2
10555100 Perkins et al. Feb 2020 B2
10609492 Olsen et al. Mar 2020 B2
10743110 Puria et al. Aug 2020 B2
10779094 Rucker et al. Sep 2020 B2
10863286 Perkins et al. Dec 2020 B2
11057714 Puria et al. Jul 2021 B2
11058305 Perkins et al. Jul 2021 B2
11070927 Rucker et al. Jul 2021 B2
11102594 Shaquer et al. Aug 2021 B2
11153697 Olsen et al. Oct 2021 B2
11166114 Perkins et al. Nov 2021 B2
11212626 Larkin et al. Dec 2021 B2
11252516 Wenzel Feb 2022 B2
11259129 Freed et al. Feb 2022 B2
11310605 Puria et al. Apr 2022 B2
11317224 Puria Apr 2022 B2
11337012 Atamaniuk et al. May 2022 B2
11350226 Sandhu et al. May 2022 B2
20010003788 Ball et al. Jun 2001 A1
20010007050 Adelman Jul 2001 A1
20010024507 Boesen Sep 2001 A1
20010027342 Dormer Oct 2001 A1
20010029313 Kennedy Oct 2001 A1
20010053871 Zilberman et al. Dec 2001 A1
20020025055 Stonikas et al. Feb 2002 A1
20020035309 Leysieffer Mar 2002 A1
20020048374 Soli et al. Apr 2002 A1
20020085728 Shennib et al. Jul 2002 A1
20020086715 Sahagen Jul 2002 A1
20020172350 Edwards et al. Nov 2002 A1
20020183587 Dormer Dec 2002 A1
20030021903 Shlenker et al. Jan 2003 A1
20030055311 Neukermans et al. Mar 2003 A1
20030064746 Rader et al. Apr 2003 A1
20030081803 Petilli et al. May 2003 A1
20030097178 Roberson et al. May 2003 A1
20030125602 Sokolich et al. Jul 2003 A1
20030142841 Wiegand Jul 2003 A1
20030208099 Ball Nov 2003 A1
20030208888 Fearing et al. Nov 2003 A1
20040093040 Boylston et al. May 2004 A1
20040121291 Knapp et al. Jun 2004 A1
20040158157 Jensen et al. Aug 2004 A1
20040165742 Shennib et al. Aug 2004 A1
20040166495 Greinwald, Jr. et al. Aug 2004 A1
20040167377 Schafer et al. Aug 2004 A1
20040190734 Kates Sep 2004 A1
20040202339 O'Brien, Jr. et al. Oct 2004 A1
20040202340 Armstrong et al. Oct 2004 A1
20040208333 Cheung et al. Oct 2004 A1
20040234089 Rembrand et al. Nov 2004 A1
20040234092 Wada et al. Nov 2004 A1
20040236416 Falotico Nov 2004 A1
20040240691 Grafenberg Dec 2004 A1
20050018859 Buchholz Jan 2005 A1
20050020873 Berrang et al. Jan 2005 A1
20050036639 Bachler et al. Feb 2005 A1
20050038498 Dubrow et al. Feb 2005 A1
20050088435 Geng Apr 2005 A1
20050101830 Easter et al. May 2005 A1
20050111683 Chabries et al. May 2005 A1
20050117765 Meyer et al. Jun 2005 A1
20050190939 Fretz Sep 2005 A1
20050196005 Shennib et al. Sep 2005 A1
20050222823 Brumback et al. Oct 2005 A1
20050226446 Luo et al. Oct 2005 A1
20050267549 Della et al. Dec 2005 A1
20050271870 Jackson Dec 2005 A1
20050288739 Hassler et al. Dec 2005 A1
20060058573 Neisz et al. Mar 2006 A1
20060062420 Araki Mar 2006 A1
20060074159 Lu et al. Apr 2006 A1
20060075175 Jensen et al. Apr 2006 A1
20060161227 Walsh et al. Jul 2006 A1
20060161255 Zarowski et al. Jul 2006 A1
20060177079 Baekgaard et al. Aug 2006 A1
20060177082 Solomito et al. Aug 2006 A1
20060183965 Kasic, II et al. Aug 2006 A1
20060231914 Carey, III et al. Oct 2006 A1
20060233398 Husung Oct 2006 A1
20060237126 Guffrey et al. Oct 2006 A1
20060247735 Honert et al. Nov 2006 A1
20060256989 Olsen et al. Nov 2006 A1
20060278245 Gan Dec 2006 A1
20070030990 Fischer Feb 2007 A1
20070036377 Stirnemann Feb 2007 A1
20070076913 Schanz Apr 2007 A1
20070083078 Easter et al. Apr 2007 A1
20070100197 Perkins et al. May 2007 A1
20070127748 Carlile et al. Jun 2007 A1
20070127752 Armstrong Jun 2007 A1
20070127766 Combest Jun 2007 A1
20070135870 Shanks et al. Jun 2007 A1
20070161848 Dalton et al. Jul 2007 A1
20070191673 Ball et al. Aug 2007 A1
20070201713 Fang et al. Aug 2007 A1
20070206825 Thomasson Sep 2007 A1
20070223755 Salvetti et al. Sep 2007 A1
20070225776 Fritsch et al. Sep 2007 A1
20070236704 Carr et al. Oct 2007 A1
20070250119 Tyler et al. Oct 2007 A1
20070251082 Milojevic et al. Nov 2007 A1
20070258507 Lee et al. Nov 2007 A1
20070286429 Grafenberg et al. Dec 2007 A1
20080021518 Hochmair et al. Jan 2008 A1
20080051623 Schneider et al. Feb 2008 A1
20080054509 Berman et al. Mar 2008 A1
20080063228 Mejia et al. Mar 2008 A1
20080063231 Juneau et al. Mar 2008 A1
20080077198 Webb et al. Mar 2008 A1
20080089292 Kitazoe et al. Apr 2008 A1
20080107292 Kornagel May 2008 A1
20080123866 Rule et al. May 2008 A1
20080130927 Theverapperuma et al. Jun 2008 A1
20080188707 Bernard et al. Aug 2008 A1
20080298600 Poe et al. Dec 2008 A1
20080300703 Widmer et al. Dec 2008 A1
20090016553 Ho et al. Jan 2009 A1
20090023976 Cho et al. Jan 2009 A1
20090043149 Abel et al. Feb 2009 A1
20090076581 Gibson Mar 2009 A1
20090131742 Cho et al. May 2009 A1
20090141919 Spitaels et al. Jun 2009 A1
20090149697 Steinhardt et al. Jun 2009 A1
20090157143 Edler et al. Jun 2009 A1
20090175474 Salvetti et al. Jul 2009 A1
20090246627 Park Oct 2009 A1
20090253951 Ball et al. Oct 2009 A1
20090262966 Vestergaard et al. Oct 2009 A1
20090281367 Cho et al. Nov 2009 A1
20090310805 Petroff Dec 2009 A1
20090316922 Merks et al. Dec 2009 A1
20100036488 De, Jr. et al. Feb 2010 A1
20100085176 Flick Apr 2010 A1
20100103404 Remke et al. Apr 2010 A1
20100114190 Bendett et al. May 2010 A1
20100145135 Ball et al. Jun 2010 A1
20100171369 Baarman et al. Jul 2010 A1
20100172507 Merks Jul 2010 A1
20100177918 Keady et al. Jul 2010 A1
20100222639 Purcell et al. Sep 2010 A1
20100260364 Merks Oct 2010 A1
20100272299 Van Schuylenbergh et al. Oct 2010 A1
20100290653 Wiggins et al. Nov 2010 A1
20100322452 Ladabaum et al. Dec 2010 A1
20110062793 Azancot et al. Mar 2011 A1
20110069852 Arndt et al. Mar 2011 A1
20110084654 Julstrom et al. Apr 2011 A1
20110112462 Parker et al. May 2011 A1
20110116666 Dittberner et al. May 2011 A1
20110125222 Perkins et al. May 2011 A1
20110130622 Ilberg et al. Jun 2011 A1
20110144414 Spearman et al. Jun 2011 A1
20110152602 Perkins et al. Jun 2011 A1
20110164771 Jensen et al. Jul 2011 A1
20110196460 Weiss Aug 2011 A1
20110221391 Won et al. Sep 2011 A1
20110249845 Kates Oct 2011 A1
20110249847 Salvetti et al. Oct 2011 A1
20110257290 Zeller et al. Oct 2011 A1
20110258839 Probst Oct 2011 A1
20110271965 Parkins et al. Nov 2011 A1
20120008807 Gran Jan 2012 A1
20120038881 Amirparviz et al. Feb 2012 A1
20120039493 Rucker et al. Feb 2012 A1
20120092461 Fisker et al. Apr 2012 A1
20120114157 Arndt et al. May 2012 A1
20120140967 Aubert et al. Jun 2012 A1
20120217087 Ambrose et al. Aug 2012 A1
20120236524 Pugh et al. Sep 2012 A1
20120263339 Funahashi Oct 2012 A1
20130004004 Zhao et al. Jan 2013 A1
20130034258 Lin Feb 2013 A1
20130083938 Bakalos et al. Apr 2013 A1
20130089227 Kates Apr 2013 A1
20130195300 Larsen et al. Aug 2013 A1
20130230204 Monahan et al. Sep 2013 A1
20130303835 Koskowich Nov 2013 A1
20130308782 Dittberner et al. Nov 2013 A1
20130308807 Burns Nov 2013 A1
20130343584 Bennett et al. Dec 2013 A1
20130343585 Bennett et al. Dec 2013 A1
20130343587 Naylor et al. Dec 2013 A1
20140084698 Asanuma et al. Mar 2014 A1
20140107423 Yaacobi Apr 2014 A1
20140153761 Shennib et al. Jun 2014 A1
20140169603 Sacha et al. Jun 2014 A1
20140177863 Parkins Jun 2014 A1
20140194891 Shahoian Jul 2014 A1
20140254856 Blick et al. Sep 2014 A1
20140286514 Pluvinage et al. Sep 2014 A1
20140288356 Van Vlem Sep 2014 A1
20140288358 Puria et al. Sep 2014 A1
20140296620 Puria et al. Oct 2014 A1
20140321657 Stirnemann Oct 2014 A1
20140379874 Starr et al. Dec 2014 A1
20150021568 Gong et al. Jan 2015 A1
20150049889 Bern Feb 2015 A1
20150117689 Bergs et al. Apr 2015 A1
20150124985 Kim et al. May 2015 A1
20150201269 Dahl Jul 2015 A1
20150222978 Murozaki Aug 2015 A1
20150245131 Facteau et al. Aug 2015 A1
20150358743 Killion Dec 2015 A1
20160008176 Goldstein Jan 2016 A1
20160064814 Jang et al. Mar 2016 A1
20160087687 Kesler et al. Mar 2016 A1
20160094043 Hao et al. Mar 2016 A1
20160277854 Puria et al. Sep 2016 A1
20160309265 Pluvinage et al. Oct 2016 A1
20160309266 Olsen et al. Oct 2016 A1
20160330555 Vonlanthen et al. Nov 2016 A1
20170040012 Goldstein Feb 2017 A1
20170095202 Facteau et al. Apr 2017 A1
20170180888 Andersson et al. Jun 2017 A1
20170195806 Atamaniuk et al. Jul 2017 A1
20170257710 Parker Sep 2017 A1
20180077503 Shaquer et al. Mar 2018 A1
20180077504 Shaquer et al. Mar 2018 A1
20180213331 Rucker et al. Jul 2018 A1
20180262846 Perkins et al. Sep 2018 A1
20180317026 Puria Nov 2018 A1
20180376255 Parker Dec 2018 A1
20190158961 Puria et al. May 2019 A1
20190166438 Perkins et al. May 2019 A1
20190230449 Puria Jul 2019 A1
20190239005 Sandhu et al. Aug 2019 A1
20190253811 Unno et al. Aug 2019 A1
20190253815 Atamaniuk et al. Aug 2019 A1
20190269336 Perkins et al. Sep 2019 A1
20200037082 Perkins et al. Jan 2020 A1
20200068323 Perkins et al. Feb 2020 A1
20200084551 Puria et al. Mar 2020 A1
20200092662 Wenzel Mar 2020 A1
20200092664 Freed et al. Mar 2020 A1
20200128338 Shaquer et al. Apr 2020 A1
20200186941 Olsen et al. Jun 2020 A1
20200186942 Flaherty et al. Jun 2020 A1
20200304927 Shaquer et al. Sep 2020 A1
20200336843 Lee et al. Oct 2020 A1
20200374639 Rucker et al. Nov 2020 A1
20200396551 Dy et al. Dec 2020 A1
20210029451 Fitz et al. Jan 2021 A1
20210029474 Larkin et al. Jan 2021 A1
20210186343 Perkins et al. Jun 2021 A1
20210266686 Puria et al. Aug 2021 A1
20210306777 Rucker et al. Sep 2021 A1
20210314712 Shaquer et al. Oct 2021 A1
20210392449 Flaherty et al. Dec 2021 A1
20210400405 Perkins et al. Dec 2021 A1
20220007114 Perkins et al. Jan 2022 A1
20220007115 Perkins et al. Jan 2022 A1
20220007118 Rucker et al. Jan 2022 A1
20220007120 Olsen et al. Jan 2022 A1
20220046366 Larkin et al. Feb 2022 A1
20220086572 Flaherty et al. Mar 2022 A1
20220150650 Rucker May 2022 A1
Foreign Referenced Citations (116)
Number Date Country
2004301961 Feb 2005 AU
2242545 Sep 2009 CA
1176731 Mar 1998 CN
101459868 Jun 2009 CN
101489171 Jul 2009 CN
102301747 Dec 2011 CN
105491496 Apr 2016 CN
2044870 Mar 1972 DE
3243850 May 1984 DE
3508830 Sep 1986 DE
102013114771 Jun 2015 DE
0092822 Nov 1983 EP
0242038 Oct 1987 EP
0291325 Nov 1988 EP
0296092 Dec 1988 EP
0242038 May 1989 EP
0296092 Aug 1989 EP
0352954 Jan 1990 EP
0291325 Jun 1990 EP
0352954 Aug 1991 EP
1035753 Sep 2000 EP
1435757 Jul 2004 EP
1845919 Oct 2007 EP
1955407 Aug 2008 EP
1845919 Sep 2010 EP
2272520 Jan 2011 EP
2301262 Mar 2011 EP
2752030 Jul 2014 EP
3101519 Dec 2016 EP
2425502 Jan 2017 EP
2907294 May 2017 EP
3183814 Jun 2017 EP
3094067 Oct 2017 EP
3006079 Mar 2019 EP
2455820 Nov 1980 FR
2085694 Apr 1982 GB
S60154800 Aug 1985 JP
S621726 Jan 1987 JP
S6443252 Feb 1989 JP
H09327098 Dec 1997 JP
2000504913 Apr 2000 JP
2004187953 Jul 2004 JP
2004193908 Jul 2004 JP
2005516505 Jun 2005 JP
2006060833 Mar 2006 JP
100624445 Sep 2006 KR
WO-9209181 May 1992 WO
WO-9501678 Jan 1995 WO
WO-9621334 Jul 1996 WO
WO-9736457 Oct 1997 WO
WO-9745074 Dec 1997 WO
WO-9806236 Feb 1998 WO
WO-9903146 Jan 1999 WO
WO-9915111 Apr 1999 WO
WO-0022875 Apr 2000 WO
WO-0022875 Jul 2000 WO
WO-0150815 Jul 2001 WO
WO-0158206 Aug 2001 WO
WO-0176059 Oct 2001 WO
WO-0158206 Feb 2002 WO
WO-0239874 May 2002 WO
WO-0239874 Feb 2003 WO
WO-03030772 Apr 2003 WO
WO-03063542 Jul 2003 WO
WO-03063542 Jan 2004 WO
WO-2004010733 Jan 2004 WO
WO-2005015952 Feb 2005 WO
WO-2005107320 Nov 2005 WO
WO-2006014915 Feb 2006 WO
WO-2006037156 Apr 2006 WO
WO-2006039146 Apr 2006 WO
WO-2006042298 Apr 2006 WO
WO-2006071210 Jul 2006 WO
WO-2006075169 Jul 2006 WO
WO-2006075175 Jul 2006 WO
WO-2006118819 Nov 2006 WO
WO-2006042298 Dec 2006 WO
WO-2007023164 Mar 2007 WO
WO-2009046329 Apr 2009 WO
WO-2009047370 Apr 2009 WO
WO-2009049320 Apr 2009 WO
WO-2009056167 May 2009 WO
WO-2009062142 May 2009 WO
WO-2009047370 Jul 2009 WO
WO-2009125903 Oct 2009 WO
WO-2009145842 Dec 2009 WO
WO-2009146151 Dec 2009 WO
WO-2009155358 Dec 2009 WO
WO-2009155361 Dec 2009 WO
WO-2009155385 Dec 2009 WO
WO-2010033932 Mar 2010 WO
WO-2010033933 Mar 2010 WO
WO-2010077781 Jul 2010 WO
WO-2010147935 Dec 2010 WO
WO-2010148345 Dec 2010 WO
WO-2011005500 Jan 2011 WO
WO-2012088187 Jun 2012 WO
WO-2012149970 Nov 2012 WO
WO-2013016336 Jan 2013 WO
WO-2016011044 Jan 2016 WO
WO-2016045709 Mar 2016 WO
WO-2016146487 Sep 2016 WO
WO-2017045700 Mar 2017 WO
WO-2017059218 Apr 2017 WO
WO-2017059240 Apr 2017 WO
WO-2017116791 Jul 2017 WO
WO-2017116865 Jul 2017 WO
WO-2018048794 Mar 2018 WO
WO-2018081121 May 2018 WO
WO-2018093733 May 2018 WO
WO-2019055308 Mar 2019 WO
WO-2019173470 Sep 2019 WO
WO-2019199680 Oct 2019 WO
WO-2019199683 Oct 2019 WO
WO-2020176086 Sep 2020 WO
WO-2021003087 Jan 2021 WO
Non-Patent Literature Citations (160)
Entry
Folkeard, et al. Detection, Speech Recognition, Loudness, and Preference Outcomes With a Direct Drive Hearing Aid: Effects of Bandwidth. Trends Hear. Jan.-Dec. 2021; 25: 1-17. doi: 10.1177/2331216521999139.
Knight, D. Diode detectors for RF measurement. Paper. Jan. 1, 2016. pp. 1-16. [Retrieved online] (retrieved Feb. 11, 2020) abstract, p. 1; section 1, p. 6; section 1.3, p. 9; section 3 voltage-doubler rectifier, p. 21; section 5, p. 27. URL: g3ynh.info/circuits/Diode_det.pdf.
Notice of Allowance dated Jul. 24, 2020 for U.S. Appl. No. 16/682,329.
Notice of Allowance dated Aug. 14, 2019 for U.S. Appl. No. 16/173,869.
Notice of Allowance dated Oct. 25, 2019 for U.S. Appl. No. 16/173,869.
Office action dated Jan. 24, 2020 for U.S. Appl. No. 16/682,329.
Asbeck, et al. Scaling Hard Vertical Surfaces with Compliant Microspine Arrays, The International Journal of Robotics Research 2006; 25; 1165-79.
Atasoy. [Paper] Opto-acoustic Imaging, for BYM504E Biomedical Imaging Systems class at ITU. Downloaded from the Internet: www2.itu.edu.tr/~cilesiz/courses/BYM504-2005-OA504041413.pdf. 14 pages.
Athanassiou, et al. Laser controlled photomechanical actuation of photochromic polymers Microsystems. Rev. Adv. Mater. Sci. 2003; 5:245-251.
Autumn, et al. Dynamics of geckos running vertically, The Journal of Experimental Biology 209, 260-272, (2006).
Autumn, et al., Evidence for van der Waals adhesion in gecko setae, www.pnas.orgycgiydoiyl0.1073ypnas.192252799 (2002).
Ayatollahi, et al. Design and Modeling of Micromachined Condenser MEMS Loudspeaker using Permanent Magnet Neodymium-Iron-Boron (Nd-Fe-B). IEEE International Conference on Semiconductor Electronics, 2006. ICSE '06, Oct. 29, 2006-Dec. 1, 2006; 160-166.
Baer, et al. Effects of Low Pass Filtering on the Intelligibility of Speech in Noise for People With and Without Dead Regions at High Frequencies. J. Acoust. Soc. Am. 112(3), pt. 1, (Sep. 2002), pp. 1133-1144.
Best, et al. The influence of high frequencies on speech localization. Abstract 981 (Feb. 24, 2003) from www.aro.org/abstracts/abstracts.html.
Birch, et al. Microengineered systems for the hearing impaired. IEE Colloquium on Medical Applications of Microengineering, Jan. 31, 1996; pp. 2/1-2/5.
Boedts. Tympanic epithelial migration, Clinical Otolaryngology 1978, 3, 249-253.
Burkhard, et al. Anthropometric Manikin for Acoustic Research. J. Acoust. Soc. Am., vol. 58, No. 1, (Jul. 1975), pp. 214-222.
Camacho-Lopez, et al. Fast Liquid Crystal Elastomer Swims Into the Dark, Electronic Liquid Crystal Communications. Nov. 26, 2003; 9 pages total.
Carlile, et al. Frequency bandwidth and multi-talker environments. Audio Engineering Society Convention 120. Audio Engineering Society, May 20-23, 2006. Paris, France. 118: 8 pages.
Carlile, et al. Spatialisation of talkers and the segregation of concurrent speech. Abstract 1264 (Feb. 24, 2004) from www.aro.org/abstracts/abstracts.html.
Cheng, et al. A Silicon Microspeaker for Hearing Instruments. Journal of Micromechanics and Microengineering 2004; 14(7):859-866.
Dictionary.com's (via American Heritage Medical Dictionary) online dictionary definition of ‘percutaneous’. Accessed on Jun. 3, 2013. 2 pages.
Merriam-Webster's online dictionary definition of ‘percutaneous’. Accessed on Jun. 3, 2013. 3 pages.
Datskos, et al. Photoinduced and thermal stress in silicon microcantilevers. Applied Physics Letters. Oct. 19, 1998; 73(16):2319-2321.
Decraemer, et al. A method for determining three-dimensional vibration in the ear. Hearing Res., 77:19-37 (1994).
Dundas et al. The Earlens Light-Driven Hearing Aid: Top 10 questions and answers. Hearing Review. 2018;25(2):36-39.
Ear. Downloaded from the Internet. Accessed Jun. 17, 2008. 4 pages. URL: http://wwwmgs.bionet.nsc.ru/mgs/gnw/trrd/thesaurus/Se/ear.html.
Edinger, J.R. High-Quality Audio Amplifier With Automatic Bias Control. Audio Engineering; Jun. 1947; pp. 7-9.
European search report and opinion dated Sep. 25, 2013 for EP Application No. 08837672.8.
Fay. Cat eardrum mechanics. Ph.D. thesis. Dissertation submitted to Department of Aeronautics and Astronautics. Stanford University. May 2001; 210 pages total.
Fay, et al. Cat eardrum response mechanics. Mechanics and Computation Division. Department of Mechanical Engineering. Stanford University. 2002; 10 pages total.
Fay, et al. Preliminary evaluation of a light-based contact hearing device for the hearing impaired. Otol Neurotol. Jul. 2013;34(5):912-21. doi: 10.1097/MAO.0b013e31827de4b1.
Fay, et al. The discordant eardrum, PNAS, Dec. 26, 2006, vol. 103, No. 52, p. 19743-19748.
Fletcher. Effects of Distortion on the Individual Speech Sounds. Chapter 18, ASA Edition of Speech and Hearing in Communication, Acoust Soc.of Am. (republished in 1995) pp. 415-423.
Freyman, et al. Spatial Release from Informational Masking in Speech Recognition. J. Acoust. Soc. Am., vol. 109, No. 5, pt. 1, (May 2001); 2112-2122.
Freyman, et al. The Role of Perceived Spatial Separation in the Unmasking of Speech. J. Acoust. Soc. Am., vol. 106, No. 6, (Dec. 1999); 3578-3588.
Fritsch, et al. EarLens transducer behavior in high-field strength MRI scanners. Otolaryngol Head Neck Surg. Mar. 2009;140(3):426-8. doi: 10.1016/j.otohns.2008.10.016.
Galbraith et al. A wide-band efficient inductive transdermal power and data link with coupling insensitive gain IEEE Trans Biomed Eng. Apr. 1987;34(4):265-75.
Gantz, et al. Broad Spectrum Amplification with a Light Driven Hearing System. Combined Otolaryngology Spring Meetings, 2016 (Chicago).
Gantz, et al. Light Driven Hearing System: A Multi-Center Clinical Study. Association for Research in Otolaryngology Annual Meeting, 2016 (San Diego).
Gantz, et al. Light-Driven Contact Hearing Aid for Broad Spectrum Amplification: Safety and Effectiveness Pivotal Study. Otology & Neurotology Journal, 2016 (in review).
Gantz, et al. Light-Driven Contact Hearing Aid for Broad-Spectrum Amplification: Safety and Effectiveness Pivotal Study. Otology & Neurotology. Copyright 2016. 7 pages.
Ge, et al., Carbon nanotube-based synthetic gecko tapes, p. 10792-10795, PNAS, Jun. 26, 2007, vol. 104, No. 26.
Gennum. GA3280 Preliminary Data Sheet: VoyageurTD Open Platform DSP System for Ultra Low Power Audio Processing. Oct. 2006; 17 pages. Downloaded from the Internet: http://www.sounddesigntechnologies.com/products/pdf/37601DOC.pdf.
Gobin, et al. Comments on the physical basis of the active materials concept. Proc. SPIE 2003; 4512:84-92.
Gorb, et al. Structural Design and Biomechanics of Friction-Based Releasable Attachment Devices in Insects. IntegrComp Biol. Dec. 2002. 42(6):1127-1139. doi: 10.1093/icb/42.6.1127.
Hakansson, et al. Percutaneous vs. transcutaneous transducers for hearing by direct bone conduction (Abstract). Otolaryngol Head Neck Surg. Apr. 1990;102(4):339-44.
Hato, et al. Three-dimensional stapes footplate motion in human temporal bones. Audiol. Neurootol., 8:140-152 (Jan. 30, 2003).
Hofman, et al. Relearning Sound Localization With New Ears. Nature Neuroscience, vol. 1, No. 5, (Sep. 1998); 417-421.
International search report and written opinion dated Dec. 24, 2008 for PCT/US2008/079868.
Izzo, et al. Laser Stimulation of Auditory Neurons: Effect of Shorter Pulse Duration and Penetration Depth. Biophys J. Apr. 15, 2008;94(8):3159-3166.
Izzo, et al. Laser Stimulation of the Auditory Nerve. Lasers Surg Med. Sep. 2006;38(8):745-753.
Izzo, et al. Selectivity of Neural Stimulation In the Auditory System: A Comparison of Optic and Electric Stimuli. J Biomed Opt. Mar.-Apr. 2007;12(2):021008.
Jackson, et al. Multiphoton and Transmission Electron Microscopy of Collagen in Ex Vivo Tympanic Membranes. Ninth Annual Symposium on Biomedical Computation at Stanford (BCATS). BCATS 2008 Abstract Book. Poster 18:56. Oct. 2008. URL: http://www.stanford.edu/˜puria1/BCATS08.html.
Jian, et al. A 0.6 V, 1.66 mW energy harvester and audio driver for tympanic membrane transducer with wirelessly optical signal and power transfer. In Circuits and Systems (ISCAS), 2014 IEEE International Symposium on, Jun. 1, 2014; pp. 874-877. IEEE.
Jin, et al. Speech Localization. J. Audio Eng. Soc. convention paper, presented at the AES 112th Convention, Munich, Germany, May 10-13, 2002, 13 pages total.
Khaleghi, et al. Attenuating the ear canal feedback pressure of a laser-driven hearing aid. J Acoust Soc Am. Mar. 2017;141(3):1683.
Khaleghi, et al. Attenuating the feedback pressure of a light-activated hearing device to allow microphone placement at the ear canal entrance. IHCON 2016, International Hearing Aid Research Conference, Tahoe City, CA, Aug. 2016.
Khaleghi, et al. Characterization of Ear-Canal Feedback Pressure due to Umbo-Drive Forces: Finite-Element vs. Circuit Models. ARO Midwinter Meeting 2016, (San Diego).
Khaleghi, et al. Mechano-Electro-Magnetic Finite Element Model of a Balanced Armature Transducer for a Contact Hearing Aid. Proc. MoH 2017, Mechanics of Hearing workshop, Brock University, Jun. 2017.
Khaleghi, et al. Multiphysics Finite Element Model of a Balanced Armature Transducer used in a Contact Hearing Device. ARO 2017, 40th ARO MidWinter Meeting, Baltimore, MD, Feb. 2017.
Kiessling, et al. Occlusion Effect of Earmolds with Different Venting Systems. J Am Acad Audiol. Apr. 2005;16(4):237-49.
Killion, et al. The case of the missing dots: AI and SNR loss. The Hearing Journal, 1998. 51(5), 32-47.
Killion. Myths About Hearing in Noise and Directional Microphones. The Hearing Review. Feb. 2004; 11(2):14, 16, 18, 19, 72 & 73.
Killion. SNR loss: I can hear what people say but I can't understand them. The Hearing Review, 1997; 4(12):8-14.
Lee, et al. A Novel Opto-Electromagnetic Actuator Coupled to the tympanic Membrane. J Biomech. Dec. 5, 2008;41(16):3515-8. Epub Nov. 7, 2008.
Lee, et al. The optimal magnetic force for a novel actuator coupled to the tympanic membrane: a finite element analysis. Biomedical engineering: applications, basis and communications. 2007; 19(3):171-177.
Levy, et al. Characterization of the available feedback gain margin at two device microphone locations, in the fossa triangularis and Behind the Ear, for the light-based contact hearing device. Acoustical Society of America (ASA) meeting, 2013 (San Francisco).
Levy, et al. Extended High-Frequency Bandwidth Improves Speech Reception in the Presence of Spatially Separated Masking Speech. Ear Hear. Sep.-Oct. 2015;36(5):e214-24. doi: 10.1097/AUD.0000000000000161.
Levy et al. Light-driven contact hearing aid: a removable direct-drive hearing device option for mild to severe sensorineural hearing impairment. Conference on Implantable Auditory Prostheses, Tahoe City, CA, Jul. 2017. 4 pages.
Lezal. Chalcogenide glasses—survey and progress. Journal of Optoelectronics and Advanced Materials. Mar. 2003; 5(1):23-34.
Mah. Fundamentals of photovoltaic materials. National Solar Power Research Institute. Dec. 21, 1998, 3-9.
Makino, et al. Epithelial migration in the healing process of tympanic membrane perforations. Eur Arch Otorhinolaryngol. 1990; 247: 352-355.
Makino, et al., Epithelial migration on the tympanic membrane and external canal, Arch Otorhinolaryngol (1986) 243:39-42.
Markoff. Intuition + Money: An Aha Moment. New York Times Oct. 11, 2008, page BU4, 3 pages total.
Martin, et al. Utility of Monaural Spectral Cues is Enhanced in the Presence of Cues to Sound-Source Lateral Angle. JARO. 2004; 5:80-89.
McElveen et al. Overcoming High-Frequency Limitations of Air Conduction Hearing Devices Using a LIGHT-DRIVEN Contact Hearing Aid. Poster presentation at The Triological Society, 120th Annual Meeting at COSM, Apr. 28, 2017; San Diego, CA.
Michaels, et al., Auditory epithelial migration on the human tympanic membrane: II. The existence of two discrete migratory pathways and their embryologic correlates. Am J Anat. Nov. 1990. 189(3):189-200. DOI: 10.1002/aja.1001890302.
Moore, et al. Perceived naturalness of spectrally distorted speech and music. J Acoust Soc Am. Jul. 2003;114(1):408-19.
Moore, et al. Spectro-temporal characteristics of speech at high frequencies, and the potential for restoration of audibility to people with mild-to-moderate hearing loss. Ear Hear. Dec. 2008;29(6):907-22. doi: 10.1097/AUD.0b013e3181824616.
Moore. Loudness perception and intensity resolution. Cochlear Hearing Loss, Chapter 4, pp. 90-115, Whurr Publishers Ltd., London (1998).
Murphy, et al. Adhesion and anisotropic friction enhancements of angled heterogeneous micro-fiber arrays with spherical and spatula tips. Journal of Adhesion Science and Technology. vol. 21. No. 12-13. Aug. 2007. pp. 1281-1296. DOI: 10.1163/156856107782328380.
Murugasu, et al. Malleus-to-footplate versus malleus-to-stapes-head ossicular reconstruction prostheses: temporal bone pressure gain measurements and clinical audiological data. Otol Neurotol. Jul. 2005;26(4):572-82. DOI: 10.1097/01.mao.0000178151.44505.1b.
Musicant, et al. Direction-dependent spectral properties of cat external ear: new data and cross-species comparisons. J Acoust Soc Am. Feb. 1990. 87(2):757-781. DOI: 10.1121/1.399545.
National Semiconductor. LM4673 Boomer: Filterless, 2.65W, Mono, Class D Audio Power Amplifier. Nov. 1, 2007. 24 pages. [Data Sheet] downloaded from the Internet: URL: http://www.national.com/ds/LM/LM4673.pdf.
Nishihara, et al. Effect of changes in mass on middle ear function. Otolaryngol Head Neck Surg. Nov. 1993;109(5):889-910.
Notice of allowance dated May 1, 2015 for U.S. Appl. No. 13/768,825.
Notice of Allowance dated Jul. 30, 2018 for U.S. Appl. No. 15/804,995.
Notice of allowance dated Aug. 25, 2015 for U.S. Appl. No. 13/768,825.
Notice of allowance dated Nov. 27, 2012 for U.S. Appl. No. 12/251,200.
O'Connor, et al. Middle ear Cavity and Ear Canal Pressure-Driven Stapes Velocity Responses in Human Cadaveric Temporal Bones. J Acoust Soc Am. Sep. 2006;120(3):1517-28.
Office Action dated May 8, 2017 for U.S. Appl. No. 14/949,495.
Office action dated May 17, 2012 for U.S. Appl. No. 12/251,200.
Office action dated Jul. 17, 2014 for U.S. Appl. No. 13/768,825.
Office Action dated Sep. 2, 2016 for U.S. Appl. No. 14/949,495.
Office action dated Nov. 14, 2011 for U.S. Appl. No. 12/251,200.
Office Action dated Dec. 27, 2017 for U.S. Appl. No. 15/804,995.
Office action dated Dec. 31, 2014 for U.S. Appl. No. 13/768,825.
Park, et al. Design and analysis of a microelectromagnetic vibration transducer used as an implantable middle ear hearing aid. J. Micromech. Microeng. vol. 12 (2002), pp. 505-511.
Perkins, et al. Light-based Contact Hearing Device: Characterization of available Feedback Gain Margin at two device microphone locations. Presented at AAO-HNSF Annual Meeting, 2013 (Vancouver).
Perkins, et al. The EarLens Photonic Transducer: Extended bandwidth. Presented at AAO-HNSF Annual Meeting, 2011 (San Francisco).
Perkins, et al. The EarLens System: New sound transduction methods. Hear Res. Feb. 2, 2010; 10 pages total.
Perkins, R. Earlens tympanic contact transducer: a new method of sound transduction to the human ear. Otolaryngol Head Neck Surg. Jun. 1996;114(6):720-8.
Poosanaas, et al. Influence of sample thickness on the performance of photostrictive ceramics, J. App. Phys. Aug. 1, 1998; 84(3):1508-1512.
Puria et al. A gear in the middle ear. ARO Denver CO, 2007b.
Puria, et al. Cues above 4 kilohertz can improve spatially separated speech recognition. The Journal of the Acoustical Society of America, 2011, 129, 2384.
Puria, et al. Extending bandwidth above 4 kHz improves speech understanding in the presence of masking speech. Association for Research in Otolaryngology Annual Meeting, 2012 (San Diego).
Puria, et al. Extending bandwidth provides the brain what it needs to improve hearing in noise. First international conference on cognitive hearing science for communication, 2011 (Linkoping, Sweden).
Puria, et al. Hearing Restoration: Improved Multi-talker Speech Understanding. 5th International Symposium on Middle Ear Mechanics In Research and Otology (MEMRO), Jun. 2009 (Stanford University).
Puria, et al. Imaging, Physiology and Biomechanics of the middle ear: Towards understanding the functional consequences of anatomy. Stanford Mechanics and Computation Symposium, 2005, ed Fong J.
Puria, et al. Malleus-to-footplate ossicular reconstruction prosthesis positioning: cochleovestibular pressure optimization. Otol Neurotol. May 2005; 26(3):368-379. DOI: 10.1097/01.mao.0000169788.07460.4a.
Puria, et al. Measurements and model of the cat middle ear: Evidence of tympanic membrane acoustic delay. J. Acoust. Soc. Am., 104(6):3463-3481 (Dec. 1998).
Puria, et al., Mechano-Acoustical Transformations in A. Basbaum et al., eds., The Senses: A Comprehensive Reference, v3, p. 165-201, Academic Press (2008).
Puria, et al. Middle Ear Morphometry From Cadaveric Temporal Bone MicroCT Imaging. Proceedings of the 4th International Symposium, Zurich, Switzerland, Jul. 27-30, 2006, Middle Ear Mechanics In Research and Otology, pp. 260-269.
Puria, et al. Sound-Pressure Measurements In The Cochlear Vestibule of Human-Cadaver Ears. Journal of the Acoustical Society of America. 1997; 101 (5-1): 2754-2770.
Puria, et al. Temporal-Bone Measurements of the Maximum Equivalent Pressure Output and Maximum Stable Gain of a Light-Driven Hearing System That Mechanically Stimulates the Umbo. Otol Neurotol. Feb. 2016;37(2):160-6. doi: 10.1097/MAO.0000000000000941.
Puria, et al. The EarLens Photonic Hearing Aid. Association for Research in Otolaryngology Annual Meeting, 2012 (San Diego).
Puria, et al. The Effects of bandwidth and microphone location on understanding of masked speech by normal-hearing and hearing-impaired listeners. International Conference for Hearing Aid Research (IHCON) meeting, 2012 (Tahoe City).
Puria, et al. Tympanic-membrane and malleus-incus-complex co-adaptations for high-frequency hearing in mammals. Hear Res. May 2010;263(1-2):183-90. doi: 10.1016/j.heares.2009.10.013. Epub Oct. 28, 2009.
Puria. Measurements of human middle ear forward and reverse acoustics: implications for otoacoustic emissions. J Acoust Soc Am. May 2003;113(5):2773-89.
Puria, S. Middle Ear Hearing Devices. Chapter 10. Part of the series Springer Handbook of Auditory Research pp. 273-308. Date: Feb. 9, 2013.
Qu, et al. Carbon nanotube arrays with strong shear binding-on and easy normal lifting-off. Science. Oct. 10, 2008. 322(5899):238-242. doi: 10.1126/science.1159503.
Robles, et al. Mechanics of the mammalian cochlea. Physiol Rev. Jul. 2001;81(3):1305-52.
Roush. SiOnyx Brings “Black Silicon” into the Light; Material Could Upend Solar, Imaging Industries. Xconomy, Oct. 12, 2008, retrieved from the Internet: www.xconomy.com/boston/2008/10/12/sionyx-brings-black-silicon-into-the-light-material-could-upend-solar-imaging-industries 4 pages total.
Rubinstein. How cochlear implants encode speech. Curr Opin Otolaryngol Head Neck Surg. Oct. 2004. 12(5):444-448. DOI: 10.1097/01.moo.0000134452.24819.c0.
School of Physics Sydney, Australia. Acoustic Compliance, Inertance and Impedance. 1-6. (2018). http://www.animations.physics.unsw.edu.au/jw/compliance-inertance-impedance.htm.
Sekaric, et al. Nanomechanical resonant structures as tunable passive modulators. Applied Physics Letters. May 2002. 80(19):3617-3619. DOI: 10.1063/1.1479209.
Shaw. Transformation of Sound Pressure Level From the Free Field to the Eardrum in the Horizontal Plane. J. Acoust. Soc. Am., vol. 56, No. 6, (Dec. 1974), 1848-1861.
Shih, et al. Shape and displacement control of beams with various boundary conditions via photostrictive optical actuators. Proc. IMECE. Nov. 2003; 1-10.
Smith. The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Publishing. 1997. Chapter 22. pp. 351-372.
Song, et al. The development of a non-surgical direct drive hearing device with a wireless actuator coupled to the tympanic membrane. Applied Acoustics. Dec. 31, 2013;74(12):1511-8.
Sound Design Technologies. Voyager TD Open Platform DSP System for Ultra Low Power Audio Processing—GA3280 Data Sheet. Oct. 2007. 15 pages. Retrieved from the Internet: http://www.sounddes.com/pdf/37601 DOC.pdf.
Spolenak, et al. Effects of contact shape on the scaling of biological attachments. Proc. R. Soc. A. 2005;461:305-319.
Stenfelt, et al. Bone-Conducted Sound: Physiological and Clinical Aspects. Otology & Neurotology, Nov. 2005; 26 (6):1245-1261.
Struck, et al. Comparison of Real-world Bandwidth in Hearing Aids vs Earlens Light-driven Hearing Aid System. The Hearing Review. TechTopic: EarLens. Hearingreview.com. Mar. 14, 2017. pp. 24-28.
Stuchlik, et al. Micro-Nano Actuators Driven by Polarized Light. IEEE Proc. Sci. Meas. Techn. Mar. 2004; 151(2):131-136.
Suski, et al. Optically activated ZnO/SiO2/Si cantilever beams. Sensors and Actuators A: Physical. Sep. 1990. 24(3): 221-225. https://doi.org/10.1016/0924-4247(90)80062-A.
Takagi, et al. Mechanochemical Synthesis of Piezoelectric PLZT Powder. KONA. 2003; 51(21):234-241.
Thakoor, et al. Optical microactuation in piezoceramics. Proc. SPIE. Jul. 1998; 3328:376-391.
Thompson. Tutorial on microphone technologies for directional hearing aids. Hearing Journal. Nov. 2003; 56(11):14-16, 18, 20-21.
Tzou, et al. Smart Materials, Precision Sensors/Actuators, Smart Structures, and Structronic Systems. Mechanics of Advanced Materials and Structures. 2004; 11:367-393.
Uchino, et al. Photostrictive actuators. Ferroelectrics. 2001; 258:147-158.
U.S. Appl. No. 16/173,869 Office Action dated Jan. 10, 2019.
Vickers, et al. Effects of Low-Pass Filtering on the Intelligibility of Speech in Quiet for People With and Without Dead Regions at High Frequencies. J. Acoust. Soc. Am. Aug. 2001; 110(2):1164-1175.
Vinge. Wireless Energy Transfer by Resonant Inductive Coupling. Master of Science Thesis. Chalmers University of Technology. 1-83 (2015).
Vinikman-Pinhasi, et al. Piezoelectric and Piezooptic Effects in Porous Silicon. Applied Physics Letters, Mar. 2006; 88(11): 111905-1-111905-2. DOI: 10.1063/1.2186395.
Wang, et al. Preliminary Assessment of Remote Photoelectric Excitation of an Actuator for a Hearing Implant. Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China. Sep. 1-4, 2005; 6233-6234.
Web Books Publishing, “The Ear,” accessed online Jan. 22, 2013, available online Nov. 2, 2007 at http://www.web-books.com/eLibrary/Medicine/Physiology/Ear/Ear.htm.
Wiener, et al. On the Sound Pressure Transformation by the Head and Auditory Meatus of the Cat. Acta Otolaryngol. Mar. 1966; 61(3):255-269.
Wightman, et al. Monaural Sound Localization Revisited. J Acoust Soc Am. Feb. 1997;101(2):1050-1063.
Wiki. Sliding Bias Variant 1, Dynamic Hearing (2015).
Wikipedia. Headphones. Downloaded from the Internet. Accessed Oct. 27, 2008. 7 pages. URL: http://en.wikipedia.org/wiki/Headphones.
Wikipedia. Inductive Coupling. 1-2 (Jan. 11, 2018). https://en.wikipedia.org/wiki/Inductive_coupling.
Wikipedia. Pulse-density Modulation. 1-4 (Apr. 6, 2017). https://en.wikipedia.org/wiki/Pulse-density_modulation.
Wikipedia. Resonant Inductive Coupling. 1-11 (Jan. 12, 2018). https://en.wikipedia.org/wiki/Resonant_inductive_coupling#cite_note-13.
Yao, et al. Adhesion and sliding response of a biologically inspired fibrillar surface: experimental observations, J. R. Soc. Interface (2008) 5, 723-733 doi:10.1098/rsif.2007.1225 Published online Oct. 30, 2007.
Yao, et al. Maximum strength for intermolecular adhesion of nanospheres at an optimal size. J R Soc Interface. Nov. 6, 2008;5(28):1363-70. doi: 10.1098/rsif.2008.0066.
Yi, et al. Piezoelectric Microspeaker with Compressive Nitride Diaphragm. The Fifteenth IEEE International Conference on Micro Electro Mechanical Systems, 2002; 260-263.
Yu, et al. Photomechanics: Directed bending of a polymer film by light. Nature. Sep. 11, 2003;425(6954):145. DOI: 10.1038/425145a.
Co-pending U.S. Appl. No. 17/356,217, inventors Imatani; Kyle et al., filed Jun. 23, 2021.
Related Publications (1)
Number Date Country
20210274293 A1 Sep 2021 US
Provisional Applications (1)
Number Date Country
60979645 Oct 2007 US
Divisions (1)
Number Date Country
Parent 12251200 Oct 2008 US
Child 13768825 US
Continuations (5)
Number Date Country
Parent 16682329 Nov 2019 US
Child 17077808 US
Parent 16173869 Oct 2018 US
Child 16682329 US
Parent 15804995 Nov 2017 US
Child 16173869 US
Parent 14949495 Nov 2015 US
Child 15804995 US
Parent 13768825 Feb 2013 US
Child 14949495 US