This application is being co-filed on the same day, Feb. 14, 2014, with “Eye Glasses With Microphone Array” by Dashen Fan, Attorney Docket No. 0717.2220-001. This application is being co-filed on the same day, Feb. 14, 2014, with “Eyewear Spectacle With Audio Speaker In The Temple” by Kenny W. Y. Chow, et al., Attorney Docket No. 0717.2229-001. This application is being co-filed on the same day, Feb. 14, 2014, with “Noise Cancelling Microphone Apparatus” by Dashen Fan, Attorney Docket No. 0717.2216-001.
The entire teachings of the above applications are incorporated herein by reference.
Traditionally, earphones have been used to present acoustic sounds to an individual when privacy is desired or when it is desired not to disturb others. Examples of traditional earphone devices include over-the-head headphones having an ear cup speaker (e.g., Beats® by Dr. Dre headphones), ear bud style earphones (e.g., Apple iPod® earphones and Bluetooth® headsets), and bone-conducting speakers (e.g., Google Glass). Another known way to achieve the desired privacy, or peace and quiet for others, is by using directional multi-speaker beam-forming. Hearing aids are also well known, but are not conventionally used to present acoustic sounds to an individual who is not hearing-impaired. One example is the open ear mini-Behind-the-Ear (BTE) device with Receiver-In-The-Aid (RITA). Such a hearing aid typically includes a clear “hook” that acts as an acoustic duct tube to channel sound from the audio speaker (also referred to as a receiver in telephony applications) to the inner ear of a user and that also acts as the mechanical support by which the user wears the hearing aid, the speaker being housed in the behind-the-ear portion of the hearing aid body. However, the aforementioned techniques all have drawbacks; namely, they are bulky, cumbersome, unreliable, or immature.
Therefore, a need exists for earphones that overcome or minimize the above-referenced problems.
The present invention relates in general to eyewear, and more particularly to eyewear devices and corresponding methods for presenting sound to a user of the eyewear.
In one embodiment, an eyewear sound induction ear speaker device of the invention includes an eyewear frame, a speaker including an audio channel integrated with the eyewear frame, and an acoustic duct coupled to the speaker and arranged to channel sound emitted by the speaker to an ear of the user wearing the eyewear frame.
In another embodiment, the invention is an eyewear sound induction ear speaker device that includes means for receiving an audio sound, means for processing and amplifying the audio sound, and means for channeling the amplified and processed audio sound to an ear of a user wearing an eyewear frame.
In still another embodiment, the invention is a method of providing sound for eyewear, including the steps of receiving a processed electrical audio signal at a speaker integrated with an eyewear frame, wherein the speaker includes an audio channel. The speaker is induced to produce acoustic sound at the audio channel, and the acoustic sound is channeled through an acoustic duct to be presented to a user wearing the eyewear frame.
In yet another embodiment, the invention is a method of channeling sound from an eyewear device that includes the steps of receiving an electrical audio signal from an electrical audio source at a speaker integrated with the eyewear device, inducing audible sound from the electrical audio signal at the speaker, and channeling the audible sound to an ear of a user of the eyewear using an audio duct, the audio duct not blocking the ear canal of the ear.
The present invention has many advantages. For example, the eyewear spectacle of the invention is relatively compact, unobtrusive, and durable. Further, the device and method can be integrated with noise cancellation apparatus and methods that are also, optionally, components of the eyewear itself. In one embodiment, the noise cancellation apparatus, including microphones, electrical circuitry, and software, can be integrated with and, optionally, on board the eyewear worn by the user. In another embodiment, microphones mounted on board the eyewear can be integrated with the speakers and with circuitry, such as a computer, receiver, or transmitter, to thereby process signals received from an external source or the microphones, or to process and transmit signals from the microphones, and to selectively transmit those signals, whether processed or unprocessed, to the user of the eyewear through the speakers mounted in the eyewear. For example, human-machine interaction through the use of a speech recognition user interface is becoming increasingly popular. To facilitate such human-machine interaction, accurate recognition of speech is useful. It is also useful for the machine to present information to the user through spoken words, for example by reading text aloud to the user. Such machine output facilitates hands-free activities of a user, which are increasingly popular. Users also do not have to hold a speaker or device in place, nor do they need electronics behind the ear or an ear bud blocking the ear. There are also no flimsy wires, and users do not have to tolerate the skin contact or pressure associated with bone conduction speakers.
The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
The terms “speaker” and “audio speaker” are used interchangeably throughout the present application and refer to a speaker, or receiver, that is small relative to the size of a human ear, is narrow band (e.g., voice-band, for example 300 Hz-20 kHz), and converts electrical signals at audio frequencies into acoustic signals.
In one embodiment of the present invention, shown in
Acoustic duct 18 can be made from a pliable material and further arranged such that acoustic duct 18 does not block the ear canal of the user 20. Acoustic duct 18 can include point 22 and be horn-shaped, as shown in
As shown in
Receiver 28 is operatively coupled to first speaker 16, either alone or in conjunction with second audio speaker 24. Receiver 28 can be a wired or a wireless receiver, and can receive an electrical audio signal from any electrical audio source. For example, the receiver can be operatively coupled to a 3.5 mm audio jack, a Bluetooth® wireless radio, a memory storage device, or other such source. The wireless receiver can include an audio codec, a digital signal processor (DSP), and amplifiers; the audio codec can be coupled to the audio speaker and to at least one microphone. The microphone can be an analog microphone coupled to an analog-to-digital (A/D) converter, which can in turn be coupled to the DSP. The microphone can be a micro-electro-mechanical system (MEMS) microphone. Further example embodiments can include a digital microphone, such as a digital MEMS microphone, coupled to an all-digital voice processing chip, obviating the need for a codec altogether. The speaker can be driven by a digital-to-analog (D/A) driver, or can be driven by a pulse width modulation (PWM) digital signal.
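As one illustration of the last point, driving the speaker with a pulse width modulation (PWM) digital signal amounts to mapping each audio sample onto a duty-cycle code that a PWM peripheral reloads every sample period. The following sketch is illustrative only and is not part of the application; the function name and the 8-bit PWM resolution are hypothetical choices.

```python
import numpy as np

def drive_speaker_pwm(samples, pwm_resolution=256):
    """Map signed 16-bit audio samples to PWM duty-cycle codes."""
    samples = np.asarray(samples, dtype=np.int64)
    # Shift from [-32768, 32767] to [0, 65535], then quantize to the PWM range.
    duty = ((samples + 32768) * pwm_resolution) // 65536
    return np.clip(duty, 0, pwm_resolution - 1).astype(np.uint16)

# Usage: a 1 kHz tone sampled at 16 kHz, converted to 8-bit duty-cycle codes.
t = np.arange(0, 0.01, 1 / 16000)
tone = (0.5 * 32767 * np.sin(2 * np.pi * 1000 * t)).astype(np.int16)
print(drive_speaker_pwm(tone)[:8])
```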
Alternatively, or in addition to receiver 28, eyewear 10 of the invention can include a transmitter, whereby sounds captured electronically by microphones of the eyewear are processed for transmission to an external receiver or to at least one of audio speakers 16 and 24.
An example method of the present invention includes channeling sound from an eyewear device. The method includes receiving an electrical audio signal from an electrical audio source at a speaker integrated with an eyewear device, inducing audio sound from the electrical audio signal at the speaker, and channeling the audio sound to an ear of a user of the eyewear using an audio duct, the audio duct not blocking an ear canal of the ear. The electrical audio signal can be supplied from any electrical audio source, for example, a 3.5 mm audio jack, a Bluetooth® wireless radio, or a media storage device, such as a hard disk or solid-state memory device.
A corresponding example method of providing sound for eyewear 10 can include: receiving and electrically processing sound at at least one of speakers 16 and 24, integrated with eyewear frame 14, speakers 16 and 24 including audio channels; inducing the electrically processed sound at audio speakers 16 and 24 integrated within eyewear frame 14 to produce acoustic processed sound; and channeling the acoustic processed sound through acoustic ducts 18, 30 to be presented to a user wearing eyewear 10.
Example methods can further include arranging at least one of acoustic ducts 18, 30 such that at least one of acoustic ducts 18, 30 does not block the ear canal of the user, acoustic ducts 18, 30 being comprised of a pliable, non-load-bearing material.
The processing can include preamplifying sound received from a wired or wireless receiver 28 using a pre-amplifier (not shown), further processing the amplified sound using a digital signal processor (not shown), converting the further processed sound into an analog signal, and postamplifying the analog signal to produce the electrically processed sound. The processing can further include processing sound in a second audio channel, inducing the electrically processed sound of the second audio channel at second audio speaker 24 integrated with eyewear frame 14 to produce stereo acoustic processed sound and, channeling the second acoustic processed sound through second acoustic duct 30 to present stereo sound to a user wearing eyewear frame 14.
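As a minimal sketch of the processing chain described above (pre-amplification, digital signal processing, and post-amplification, applied per channel for stereo output), the following code is illustrative only; the filter coefficients and gain values are hypothetical and are not taken from the application.

```python
import numpy as np
from scipy.signal import lfilter

def process_channel(raw, pre_gain=20.0, post_gain=2.0):
    """Pre-amplify, filter in a simple DSP stage, then post-amplify."""
    x = raw * pre_gain                        # pre-amplifier
    b, a = [0.25], [1.0, -0.75]               # first-order smoothing filter as the DSP stage
    y = lfilter(b, a, x)
    return np.clip(y * post_gain, -1.0, 1.0)  # post-amplifier with limiting

# Stereo: each channel is processed independently, one per speaker and acoustic duct.
left, right = 0.01 * np.random.default_rng(0).standard_normal((2, 480))
stereo = np.stack([process_channel(left), process_channel(right)])
```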
In a further example embodiment, an eyewear sound induction ear speaker device can include a means for receiving an audio sound, a means for amplifying and processing the audio sound, and a means for channeling the audio sound to an ear of a user wearing an eyewear frame.
Horn-shaped acoustic ducts 18, 30 will amplify the sound coming out of respective speakers 16, 24, thereby bringing the sound to the user's ear and increasing the effective sound volume. The larger acoustic port can be oval shaped and thinner in the thickness dimension of the acoustic duct wall, to fit the ear better or to act as a clamp-shaped position holder.
In an embodiment of the present invention, if a pressure-gradient microphone is employed, each microphone is within a rubber boot that extends the acoustic ports on the front and the back side of the microphone with acoustic ducts. At the end of the rubber boot, the new acoustic port is aligned with the opening in the tube, and the empty space is filled with wind-screen material. If two omni-directional microphones are employed in place of one pressure-gradient microphone, then the acoustic port of each microphone is aligned with the opening.
In an embodiment, a long boom dual-microphone headset can look like a conventional close-talk boom microphone, but has a larger boom with two microphones in parallel. An end microphone of the boom is placed in front of the user's mouth. The close-talk long boom dual-microphone design targets heavy-noise usage in military, aviation, and industrial settings and has unparalleled noise cancellation performance. For example, one main microphone can be positioned directly in front of the mouth. A second microphone can be positioned at the side of the mouth. The two microphones can be identical, with identical casings. The two microphones can be placed in parallel, perpendicular to the boom. Each microphone has front and back openings. DSP circuitry can be in the housing between the two microphones.
The microphone is housed in a rubber or silicone holder (e.g., the rubber boot) with an air duct extending to the acoustic ports as needed. The housing keeps the microphone in an air-tight container and provides shock absorption. The microphone front and back ports are covered with a wind-screen layer, made of woven fabric layers or wind-screen foam material, to reduce wind noise. The outlet holes on the microphone plastic housing can be covered with a water-resistant thin film material or a special water-resistant coating.
In another embodiment, a conference gooseneck microphone can provide noise cancellation. In a large conference hall, echoes can be a problem for sound recording. Echoes recorded by a microphone can cause howling. Severe echo prevents the user from turning up the speaker volume and limits audibility. Conference halls and conference rooms can be decorated with expensive sound-absorbing materials on their walls to reduce echo, achieve higher speaker volume, and provide an even distribution of the sound field across the entire audience. Electronic echo cancellation equipment is used to reduce echo and increase speaker volume, but such equipment is expensive, can be difficult to set up, and often requires an acoustic expert.
In an embodiment, a dual-microphone noise cancellation conference microphone can provide an inexpensive, easy to implement solution to the problem of echo in a conference hall or conference room. The dual-microphone system described above can be placed in a desktop gooseneck microphone. Each microphone in the tube is a pressure-gradient bi-directional, uni-directional, or super-directional microphone.
In a head mounted computer, a user can desire a noise-canceling close-talk microphone without a boom microphone in front of his or her mouth. The microphone in front of the user's mouth can be viewed as annoying. In addition, moisture from the user's mouth can condense on the surface of the Electret Condenser Microphone (ECM) membrane, which after long usage can deteriorate microphone sensitivity.
In an embodiment, a short tube boom headset can solve these problems by shortening the boom, moving the ECM away from the user's mouth, and using a rubber boot to extend the acoustic port of the noise-canceling microphone. This can extend the effective close-talk range of the ECM while maintaining the noise-canceling ECM property for far away noises. In addition, the boom tube can be lined with wind-screen foam material. This solution further allows the headset computer to be suitable for enterprise call center, industrial, and general mobile usage. In an embodiment with identical dual microphones within the tube boom, the respective rubber boots of each microphone can also be identical.
In an embodiment, the short tube boom headset can be a wired or wireless headset. The headset includes the short microphone (e.g., an ECM) tube boom. The tube boom can extend from the housing of the headset along the user's cheek, where the tube boom is either straight or curved. The tube boom can extend the length of the cheek to the side of the user's mouth, for instance. The tube boom can include a single noise-cancelling microphone on its inside.
The tube boom can further include a dual microphone inside of the tube. A dual microphone can be more effective in cancelling out non-stationary noise, human noise, music, and high frequency noises. A dual microphone can be more suitable for mobile communication, speech recognition, or a Bluetooth headset. The two microphones can be identical; however, a person of ordinary skill in the art can also design a tube boom having microphones of different models.
In an embodiment having dual microphones, the two microphones, enclosed in their respective rubber boots, are placed in series along the inside of the tube.
The tube can have a cylindrical shape, although other shapes are possible (e.g., a rectangular prism, etc.). The short tube boom can have two openings, one at the tip and a second at the back. The tube surface can be covered with a pattern of one or more holes or slits to allow sound to reach the microphone inside the tube boom. In another embodiment, the short tube boom can have three openings, one at the tip, another in the middle, and another at the back. The openings can be equally spaced; however, a person of ordinary skill in the art can design other spacings.
The microphone in the tube boom is a bi-directional noise-cancelling microphone having pressure-gradient microphone elements. The microphone can be enclosed in a rubber boot extending the acoustic ports on the front and the back side of the microphone with acoustic ducts. Inside the boot, the microphone element is sealed in the air-tight rubber boot.
Within the tube, the microphone with the rubber boot is placed along the inside of the tube. An acoustic port at the tube tip aligns with the boom opening at the tip, and an acoustic port at the tube back aligns with the boom opening at the back. The rubber boot can be offset from the tube ends to allow for spacing between the tube ends and the rubber boot. The spacing allows breathing room and room to place a wind-screen of appropriate thickness. The rubber boot and inner wall of the tube remain air-tight, however. A wind-screen foam material (e.g., wind guard sleeves over the rubber boot) fills the air duct and the open space between the acoustic port and the tube interior/opening.
Referring back to
A microphone 504 is arranged to be placed between the two halves of the rubber boot 502a-b. The microphone 504 and rubber boot 502a-b are sized such that the microphone 504 fits in a cavity within the halves of the rubber boot 502a-b. The microphone is coupled with a wire 506 that extends out of the rubber boot 502a-b and can be connected to, for instance, the noise cancellation circuit described above.
If position 4 604d has a microphone, it is employed within a pendant.
The microphones can also be employed at other combinations of positions 604a-e, or at positions not shown in
The noise cancellation circuit 701 includes four functional blocks, all of which are electronically linked, either wirelessly or by hardwire: a beam-forming (BF) module 702, a Desired Voice Activity Detection (VAD) module 708, an adaptive noise cancellation (ANC) module 704, and a single channel noise reduction (NR) module 706. Two signals 710 and 712 are fed into the BF module 702, which generates a main signal 730 and a reference signal 732 for the ANC module 704. A closer (i.e., relatively close to the desired sound) microphone signal 710 is collected from a microphone closer to the user's mouth, and a further (i.e., relatively distant from the desired sound) microphone signal 712 is collected from a microphone further from the user's mouth. The BF module 702 also generates a main signal 720 and a reference signal 722 for the desired VAD module 708. The main signal 720 and reference signal 722 can, in certain embodiments, be different from the main signal 730 and reference signal 732 generated for the ANC module 704.
The ANC module 704 processes the main signal 730 and the reference signal 732 to cancel noise and outputs a noise cancelled signal 742 to the single channel NR module 706. The single channel NR module 706 post-processes the noise cancelled signal 742 from the ANC module 704 to remove any further residual noise. Meanwhile, the VAD module 708 derives, from the main signal 720 and reference signal 722, a desired voice activity detection (DVAD) signal 740 that indicates the presence or absence of speech in the main signal 720 and reference signal 722. The DVAD signal 740, derived from the output of the BF module 702, can then be used to control the ANC module 704 and the NR module 706. The DVAD signal 740 indicates to the ANC module 704 and the single channel NR module 706 which sections of the signal have voice data to analyze, which can increase processing efficiency by allowing sections of the signal without voice data to be ignored. The desired speech signal 744 is generated by the single channel NR module 706.
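The application does not specify the internal algorithm of the ANC module; by way of illustration only, the sketch below uses a normalized least mean squares (NLMS) adaptive filter, one common choice for a two-input noise canceller of this kind. It subtracts an adaptively filtered version of the reference signal 732 from the main signal 730 and freezes adaptation while the DVAD signal 740 flags desired speech, so the filter learns only from noise. The function name, filter length, and step size are hypothetical.

```python
import numpy as np

def anc_nlms(main, ref, dvad, taps=32, mu=0.5, eps=1e-8):
    """Two-input adaptive noise canceller (NLMS), gated per sample by a VAD flag."""
    w = np.zeros(taps)                      # adaptive filter modeling the noise path
    out = np.zeros_like(main, dtype=float)
    for n in range(taps, len(main)):
        x = ref[n - taps:n][::-1]           # recent reference (noise) samples
        e = main[n] - w @ x                 # noise cancelled sample (signal 742)
        out[n] = e
        if not dvad[n]:                     # adapt only where no desired speech is flagged
            w += mu * e * x / (x @ x + eps)
    return out
```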
In an embodiment, the BF module 702, ANC module 704, single channel NR module 706, and desired VAD module 708 employ linear processing (e.g., linear filters). A linear system (which employs linear processing) satisfies the properties of superposition and scaling, or homogeneity. The property of superposition means that the response of the system to a sum of inputs equals the sum of its responses to each input taken separately. For example, a function F(x) satisfies superposition if:
F(x1+x2+ . . . )=F(x1)+F(x2)+ . . .
A system satisfies the property of scaling, or homogeneity of degree one, if the output scales proportionally to the input. For example, a function F(x) satisfies the property of scaling or homogeneity if, for a scalar α:
F(αx)=αF(x)
In contrast, a non-linear function does not satisfy both of these conditions.
Prior noise cancellation systems employ non-linear processing. By using linear processing, increasing the input changes the output proportionally. However, in non-linear processing, increasing the input changes the output non-proportionally. Using linear processing provides an advantage for speech recognition by improving feature extraction. Speech recognition algorithms are developed based on noiseless voice recorded in a quiet environment with no distortion. A linear noise cancellation algorithm does not introduce nonlinear distortion to the noise cancelled speech. Speech recognition can deal with linear distortion of speech, but not non-linear distortion of speech. A linear noise cancellation algorithm is “transparent” to the speech recognition engine. Training speech recognition on all the variations of nonlinearly distorted speech is impossible. Non-linear distortion can disrupt the feature extraction necessary for speech recognition.
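The distinction can be checked numerically. In the following illustrative sketch (not part of the application), an arbitrary FIR filter stands in for linear processing and a tanh soft-clipper stands in for non-linear processing; the filter satisfies superposition and scaling, while the clipper satisfies neither.

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(256), rng.standard_normal(256)
h = np.array([0.5, 0.3, 0.2])                 # arbitrary FIR filter (linear processing)

def fir(x):
    return np.convolve(x, h)

def clip(x):
    return np.tanh(x)                          # soft clipping (non-linear processing)

print(np.allclose(fir(x1 + x2), fir(x1) + fir(x2)))      # True: superposition holds
print(np.allclose(fir(3.0 * x1), 3.0 * fir(x1)))          # True: scaling holds
print(np.allclose(clip(x1 + x2), clip(x1) + clip(x2)))    # False: non-linear
```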
An example of a linear system is a Wiener filter, which is a linear single channel noise removal filter. The Wiener filter produces an estimate of a desired or target random process by linear time-invariant filtering of an observed noisy process, assuming known stationary signal and noise spectra and additive noise. The Wiener filter minimizes the mean square error between the estimated random process and the desired process.
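A minimal frequency-domain sketch of this idea follows; it is illustrative only and assumes the noise power spectrum is known (for example, estimated during pauses flagged by the DVAD signal). The function name and parameter choices are hypothetical.

```python
import numpy as np

def wiener_frame(noisy_frame, noise_psd, eps=1e-12):
    """Apply a per-bin Wiener-style gain G = S / (S + N) to one frame."""
    spectrum = np.fft.rfft(noisy_frame)
    noisy_psd = np.abs(spectrum) ** 2
    speech_psd = np.maximum(noisy_psd - noise_psd, 0.0)   # crude estimate of the clean spectrum S
    gain = speech_psd / (speech_psd + noise_psd + eps)     # Wiener gain per frequency bin
    return np.fft.irfft(gain * spectrum, n=len(noisy_frame))

# Usage: suppress white noise added to a 200 Hz tone in a 512-sample frame.
n, fs = 512, 8000
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 200 * t)
noise = 0.3 * np.random.default_rng(1).standard_normal(n)
noise_psd = np.abs(np.fft.rfft(noise)) ** 2                # "known" noise spectrum
enhanced = wiener_frame(clean + noise, noise_psd)
```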
A further microphone signal 812 is input to a frequency response matching filter 804. The frequency response matching filter 804 adjusts the gain and phase of, and shapes the frequency response of, the further microphone signal 812. For example, the frequency response matching filter 804 can adjust the signal for the distance between the two microphones, such that an outputted reference signal 832, representative of the further microphone signal 812, can be processed with the main signal 830, representative of the closer microphone signal 810. The main signal 830 and reference signal 832 are sent to the ANC module.
A closer microphone signal 810 is outputted to the ANC module as a main signal 830. The closer microphone signal 810 is also inputted to a low-pass filter 806. The reference signal 832 is inputted to a low-pass filter 808 to create a reference signal 822 sent to the Desired VAD module. The low-pass filters 806 and 808 adjust the signals for a “close talk case” by, for example, having a gradual roll-off from 2 kHz to 4 kHz, in one embodiment. Other frequencies can be used, however, for different designs and distances of the microphones to the user's mouth.
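A minimal sketch of such a beam-forming front end is shown below; it is illustrative only and not the application's implementation. The frequency response matching filter is reduced to a fixed gain and an integer-sample delay, and a second-order Butterworth low-pass stands in for filters 806 and 808; all coefficients are hypothetical.

```python
import numpy as np
from scipy.signal import butter, lfilter

def beamforming_front_end(closer_mic, further_mic, fs=8000):
    """Produce ANC and VAD branch signals in the spirit of the BF module above."""
    # Frequency response matching stand-in: align level and arrival time of the far mic.
    matched = 1.2 * np.roll(further_mic, 2)
    main_anc, ref_anc = closer_mic, matched            # main 830 and reference 832 for the ANC module
    b, a = butter(2, 2000.0 / (fs / 2.0))              # low-pass with roll-off near 2 kHz
    main_vad = lfilter(b, a, closer_mic)                # low-passed main branch for the VAD module
    ref_vad = lfilter(b, a, matched)                    # low-passed reference branch (822)
    return main_anc, ref_anc, main_vad, ref_vad
```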
The ANC module 1004 produces a noise cancelled signal 1042 to a Single Channel Noise Reduction (NR) module 1006, similar to the ANC module 1004 of
Likewise, the second microphone 1108 is connected to a gain module 1116 and a delay module 1118, which is outputted to a combiner 1120. The third microphone 1110 is connected directly to the combiner 1120. The combiner 1120 subtracts the two provided signals to cancel noise, which creates the right signal 1120.
Likewise, the third microphone 1260 is connected to a gain module 1276 and a delay module 1278, which is outputted to a combiner 1280. The fourth microphone 1262 is connected directly to the combiner 1280. The combiner 1280 subtracts the two provided signals to cancel noise, which creates the right signal 1284.
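Both arrangements reduce to the same gain/delay/subtract pattern, sketched below for illustration only; the gain and delay values are hypothetical, and the function name is not from the application. One microphone signal is scaled and delayed, then subtracted from the other in the combiner, attenuating far-field noise that reaches both microphones similarly while preserving close-talk speech.

```python
import numpy as np

def gain_delay_subtract(adjusted_mic, direct_mic, gain=0.95, delay=1):
    """Combiner output: direct microphone minus the gained and delayed microphone."""
    shifted = gain * np.roll(adjusted_mic, delay)   # gain module followed by delay module
    shifted[:delay] = 0.0                           # discard samples wrapped by the shift
    return direct_mic - shifted                     # combiner subtracts to cancel noise
```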
Further example embodiments of the present invention may be configured using a computer program product; for example, controls may be programmed in software for implementing example embodiments of the present invention. Further example embodiments of the present invention may include a non-transitory computer readable medium containing instructions that may be executed by a processor, and, when executed, cause the processor to complete methods described herein. It should be understood that elements of the block and flow diagrams described herein may be implemented in software, hardware, firmware, or other similar implementation determined in the future. In addition, the elements of the block and flow diagrams described herein may be combined or divided in any manner in software, hardware, or firmware. If implemented in software, the software may be written in any language that can support the example embodiments disclosed herein. The software may be stored in any form of computer readable medium, such as random access memory (RAM), read only memory (ROM), compact disk read only memory (CD-ROM), and so forth. In operation, a general purpose or application specific processor loads and executes software in a manner well understood in the art. It should be understood further that the block and flow diagrams may include more or fewer elements, be arranged or oriented differently, or be represented differently. It should be understood that implementation may dictate the block, flow, and/or network diagrams and the number of block and flow diagrams illustrating the execution of embodiments of the invention.
The relevant teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application also claims the benefit of U.S. Provisional Application No. 61/839,227, filed on Jun. 25, 2013. This application claims the benefit of U.S. Provisional Application No. 61/780,108, filed on Mar. 13, 2013. This application also claims the benefit of U.S. Provisional Application No. 61/839,211, filed on Jun. 25, 2013. This application also claims the benefit of U.S. Provisional Application No. 61/912,844, filed on Dec. 6, 2013.