1. Field
The present disclosure relates to audio products and, more specifically but not exclusively, to earphones and headphones having built-in microphones, such as those used for telephony applications.
2. Description of the Related Art
This section introduces aspects that may help facilitate a better understanding of the disclosure. Accordingly, the statements of this section are to be read in this light and are not to be understood as admissions about what is prior art or what is not prior art.
Cellular smartphones provide a wide variety of audio functionality to users, including telephony, voice-memo recording, audio-video recording, and a voice interface to automated speech-recognition engines used in dictation and automated voice assistants. Almost all smartphones can be used with earphones, also known as in-ear headphones, earbuds, or headsets.
Many earphone designs to date contain a single omnidirectional microphone embedded in one of the wires that connect to the left and right earphone transducers. For example, in the earphone designs by Apple, Beats, and Bose, to name only three, the microphone is located in the control capsule that provides the user with a set of audio controls. On most earphones, the control capsule is located on either the left or the right earphone wire and is placed along the wire so that, when the earphones are worn normally, the microphone is in line with the user's mouth. Such configurations limit the transmission quality of the user's speech as well as the accuracy of speech-recognition engines when used with automated voice assistants such as Apple's Siri and Google's Google Now.
In one embodiment, the present invention is an accessory system for a telephone. The accessory system comprises (a) at least one earphone configured to receive from the telephone incoming audio signals for rendering by the at least one earphone and (b) at least one microphone array comprising a plurality of microphones used to generate outgoing audio signals for (i) processing by a signal processor and (ii) transmission by the telephone.
Embodiments of the present invention are illustrated by way of example and are not limited by the accompanying figures, in which like references indicate similar or identical elements. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.
Detailed illustrative embodiments of the present invention are disclosed herein. However, specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments of the present invention. Embodiments of the present invention may be embodied in many alternative forms and should not be construed as limited to only the embodiments set forth herein. Further, the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention.
As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “has,” “having,” “includes,” and/or “including” specify the presence of stated features, steps, or components but do not preclude the presence or addition of one or more other features, steps, or components.
This disclosure relates to an earphone design that incorporates microphone arrays in the earphone wires, in the earphones themselves, or both, to enhance the transmission quality of the user's speech, thereby improving the quality of telephone calls and the accuracy of speech-recognition engines when used with automated voice assistants such as Apple's Siri. The earphone design described here incorporates a microphone array in one or both of the left and right wires leading to the earphone's loudspeaker transducers. When two microphone arrays are deployed, each array can be used separately, or the two arrays can be mechanically configured to form a longer, more effective array for enhanced transmit-speech quality, particularly in noisy environments. The two arrays can also be arranged for stereo recording, such as might be used in making a video recording of a musical performance.
Compared to a single omnidirectional microphone, a microphone array has two or more omnidirectional (or directional) microphone elements arranged in a line, a two-dimensional grid, or a three-dimensional frame or solid. The nominal inter-microphone spacing may be chosen, based on the physics of sound propagation, such that the mathematical combination of the signals from all of the microphones produces an acoustic sensitivity that is greater than, and more directional than, that of a single omnidirectional or directional element. The resulting sensitivity beampattern can be designed to have maximum sensitivity in the direction of the user's mouth while minimizing sound contributions from all other directions. In this way, a microphone array can greatly enhance the quality of the user's speech by increasing the speech signal-to-noise ratio beyond that achievable with a single omnidirectional or directional microphone element.
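By way of illustration only, the following sketch shows one common way such a combination could be performed: a delay-and-sum beamformer that time-aligns the microphone signals for a chosen look direction before averaging them. The array geometry, sampling rate, and steering direction used in the example are assumptions made purely for this sketch and are not taken from any particular embodiment.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def delay_and_sum(mic_signals, mic_positions, steer_direction, fs):
    """Combine microphone signals into one directional output.

    mic_signals     : array of shape (num_mics, num_samples)
    mic_positions   : array of shape (num_mics, 3), positions in meters
    steer_direction : unit vector pointing toward the desired source
                      (e.g., the user's mouth)
    fs              : sampling rate in Hz
    """
    num_mics, num_samples = mic_signals.shape

    # Compensating delay for each microphone so that sound arriving from
    # the steering direction adds coherently after alignment.
    delays = mic_positions @ steer_direction / SPEED_OF_SOUND  # seconds
    delays -= delays.min()  # make all delays non-negative

    # Apply the (fractional-sample) delays in the frequency domain.
    spectra = np.fft.rfft(mic_signals, axis=1)
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)
    phase_shifts = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    aligned = np.fft.irfft(spectra * phase_shifts, n=num_samples, axis=1)

    # Averaging the aligned signals reinforces sound from the steering
    # direction and attenuates sound arriving from other directions.
    return aligned.mean(axis=0)

# Example: a hypothetical 4-element line array with 15 mm spacing,
# steered along the array axis (endfire) toward the user's mouth.
fs = 16000
positions = np.array([[i * 0.015, 0.0, 0.0] for i in range(4)])
signals = np.random.randn(4, fs)  # stand-in for captured audio
output = delay_and_sum(signals, positions, np.array([1.0, 0.0, 0.0]), fs)
```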
The mathematics and engineering of microphone arrays constitute a mature discipline. See, e.g., S. L. Gay and J. Benesty, eds., Acoustic Signal Processing for Telecommunications, Kluwer Academic Publishers, Boston, 2000; M. Brandstein and D. Ward, eds., Microphone Arrays—Signal Processing Techniques and Applications, Springer-Verlag, Berlin, 2001; and J. Benesty, J. Chen, and Y. Huang, eds., Microphone Array Signal Processing, Springer-Verlag, Berlin, 2008, the teachings of all of which are incorporated by reference herein in their entirety. This disclosure focuses on the mechanical features of an earphone accessory containing one or more microphone arrays that can be configured for different speech-transmission and sound-recording use scenarios.
For all use modes, the mathematical combination of the microphone signals in one or both arrays to generate a directional beampattern could take place centrally, with one processing unit having access to all of the microphone signals. Typically, but not necessarily, this central unit would be the smartphone (not shown). Alternatively, the beamforming could be implemented locally at the arrays by special-purpose processors (e.g., digital signal processors) that are either (i) embedded with the array capsules themselves in a shared housing or (ii) separate from the array capsules in their own housing and, in either case, not shown in the figures.
Depending on the particular implementation, the arrays 104l/r and the earbuds 106l/r could each communicate with the smartphone via either a wired link or a wireless link, and the arrays need not use the same type of link as the earbuds. If the arrays communicate wirelessly, they may be located anywhere, including remotely from the earbuds and any earbud wires, such as, for example, clipped to the user's clothing.
In the single-sided mode of
Any of a number of methods could be used to automatically determine which of the two earphones 106l and 106r is inserted in the user's ear and, therefore, which of the two arrays to use for speech transmission. For example, each earphone could contain a microphone-like transducer that detects the acoustic coupling from the earphone's loudspeaker. This coupling is greater when the earphone is inserted in the user's ear than when it is not, because the ear canal acts as a closed acoustic resonator. Alternatively, any number of voice-activity-detection methods could be employed to determine which of the two arrays, left or right, provides the higher speech-activity measure when the user is talking; that array is then selected for speech transmission. In yet another alternative, accelerometers could be embedded in the earphones, the earphone wires, and/or the earphone control box. The direction of gravity reported by the accelerometers could be used to determine the orientation of the earbuds relative to the wires and arrays and, from that, which earbud and corresponding array are predominantly parallel to a normal to the Earth's surface (indicating a higher likelihood of being worn than the other earbud). Yet another possibility is to use the phase delay (or, equivalently, the group delay) from the acoustic and vibration sources (the talker's mouth, a vibration-sensor signal, and/or the earbud loudspeaker signal) to triangulate the positions of the microphones that make up the array and thereby compute the array geometry and orientation. Other techniques are also possible.
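As one concrete illustration of the voice-activity-based selection mentioned above, the sketch below picks whichever array's output (e.g., a beamformed signal or a single microphone's signal) carries more energy in the telephony speech band while the user is talking. The band edges and the selection rule are illustrative assumptions rather than features of any specific embodiment.

```python
import numpy as np

def speech_band_energy(signal, fs, low_hz=300.0, high_hz=3400.0):
    """Rough speech-activity measure: energy in the telephony speech band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[band].sum()

def select_array(left_output, right_output, fs):
    """Return 'left' or 'right' depending on which array output shows
    the higher speech-activity measure while the user is talking."""
    left_score = speech_band_energy(left_output, fs)
    right_score = speech_band_energy(right_output, fs)
    return "left" if left_score >= right_score else "right"
```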
In the two-sided mode of
To determine whether the earphones are in two-sided mode or one-sided mode, the acoustic coupling described above could be thresholded for both arrays. If the acoustic coupling for both arrays is greater than a specified threshold level, then the earphones can be assumed to be in two-sided mode. If the acoustic coupling for only one array is greater than the specified threshold level, then the earphones can be assumed to be in one-sided mode, with the earphone having the greater acoustic coupling treated as the inserted earphone and the other earphone disabled.
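The mode decision just described reduces to a small amount of threshold logic, sketched below. The threshold value and the units of the coupling measure are placeholders for whatever acoustic-coupling measurement a given implementation provides, and the additional "not-worn" outcome is an assumption added only for completeness.

```python
def determine_mode(left_coupling, right_coupling, threshold=0.5):
    """Decide the operating mode from per-earphone acoustic-coupling measures.

    Returns (mode, active_side); active_side is None except in one-sided
    mode. The threshold and the coupling units are illustrative only.
    """
    left_in = left_coupling > threshold
    right_in = right_coupling > threshold
    if left_in and right_in:
        return "two-sided", None
    if left_in or right_in:
        # The earphone with the greater coupling is treated as inserted.
        side = "left" if left_coupling > right_coupling else "right"
        return "one-sided", side
    # Neither earphone appears to be inserted (case not described in the
    # text; included here as an assumption for completeness).
    return "not-worn", None
```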
In the enhanced directivity mode of
The two arrays could be connected by embedding magnets in the capsule ends, or a physical connector of any of various types could be used. In addition to the magnets or physical connectors, the ends of the capsules could contain electrical connections that allow the electrical signals from the two arrays to be combined; this might be necessary, for example, if the processing is implemented locally at the arrays. The electrical connections would typically consist of power, ground, and a microphone-signal bus.
For example, using the implementation of
The user could use an application on the smartphone to select the enhanced directivity mode, or the act of connecting the two arrays together could automatically signal the smartphone, or processors embedded in the array capsules, to configure the beamforming algorithm for enhanced directivity.
In the stereo recording mode of
Alternative embodiments may also be able to support one or more surround-sound or other multichannel (e.g., 5.1) recording modes. For instance, with two arrays arranged in a “V”, “T”, or “+” configuration, it is possible to form multiple beams pointing in directions that would be appropriate for surround-sound recording and playback. Because 5.1 surround playback systems have defined preferred loudspeaker locations, it would be desirable to point multiple beams in those preferred directions to achieve good spatial playback on a 5.1 system. In certain embodiments, the audio signals generated by all of the different microphones in the arrays could be individually recorded in memory (e.g., in the smartphone) for later postprocessing, giving the user control over the beampatterns that best suit the desired spatial-playback characteristics.
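As a rough illustration of this idea, the sketch below steers one beam per 5.1 channel, reusing the delay_and_sum function from the earlier sketch. The channel azimuths follow a common ITU-style 5.1 layout (center at 0°, front left/right near ±30°, surrounds near ±110°); the layout, the geometry, and the reuse of that function are assumptions made for the example.

```python
import numpy as np

# Nominal 5.1 channel azimuths in degrees (approximate ITU-style layout).
# The LFE channel carries no directional information and is omitted here.
CHANNEL_AZIMUTHS = {
    "C": 0.0,
    "L": 30.0,
    "R": -30.0,
    "Ls": 110.0,
    "Rs": -110.0,
}

def surround_beams(mic_signals, mic_positions, fs):
    """Form one beam per 5.1 channel by steering the delay_and_sum
    beamformer (defined in the earlier sketch) toward each channel's
    azimuth in the horizontal plane."""
    outputs = {}
    for name, azimuth in CHANNEL_AZIMUTHS.items():
        theta = np.deg2rad(azimuth)
        direction = np.array([np.cos(theta), np.sin(theta), 0.0])
        outputs[name] = delay_and_sum(mic_signals, mic_positions,
                                      direction, fs)
    return outputs
```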
For best results, it would be desirable to know the actual microphone-array geometry that the user has selected. One way to achieve this is to detect the magnetic field strength at various points along the arrays and use this information to determine the overall geometry formed by the two or more arrays. The magnets could also be placed so as to constrain the array geometry to a few available geometries; one or more magnetic sensors configured to detect the residual magnetic fields, or one or more other strategically placed pressure sensors, could then be used to detect which discrete geometry was selected.
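One simple way to realize this discrete-geometry detection is a nearest-match comparison against sensor readings calibrated once for each allowed geometry, as sketched below. The geometry names and reference values are hypothetical and stand in for whatever measurements a real sensor arrangement would produce.

```python
import numpy as np

# Hypothetical calibrated magnetic-sensor readings (arbitrary units) for
# each geometry that the magnets allow, measured once during design.
REFERENCE_READINGS = {
    "end-to-end line": np.array([0.9, 0.1, 0.1]),
    "V-shape": np.array([0.5, 0.5, 0.1]),
    "plus-shape": np.array([0.4, 0.4, 0.4]),
}

def detect_geometry(sensor_readings):
    """Return the allowed geometry whose calibrated reference readings
    are closest (in Euclidean distance) to the measured readings."""
    readings = np.asarray(sensor_readings, dtype=float)
    return min(REFERENCE_READINGS,
               key=lambda name: np.linalg.norm(readings - REFERENCE_READINGS[name]))
```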
Another possibility for detecting the array geometry and/or orientation is to use an earbud's loudspeaker to output a known acoustic test signal and to use some or all of the resulting microphone signals to determine the array geometry and orientation. Processing could compute the phase-delay and/or amplitude differences of the acoustic test signal traveling from the loudspeaker to the different microphones and derive the array geometry and orientation from those differences. The acoustic test signal could be purposefully designed to be inaudible to humans, and, in some implementations, the inaudible signal could be dynamically designed to be masked by ambient sound. Note that, as used in this specification, the term “inaudible” covers sounds that are outside the human hearing range in terms of signal frequency as well as sounds that are otherwise undetectable, such as sounds that would be masked to humans by background noise in the listening environment.
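As a sketch of this acoustic-probing idea, the code below estimates the relative arrival delay of a known test signal at each microphone by cross-correlation. Converting these delays (and any amplitude differences) into a full geometry and orientation estimate would require an additional solving step (e.g., multilateration) that is not shown; the function and its arguments are assumptions made for illustration.

```python
import numpy as np

def relative_delays(test_signal, mic_signals, fs):
    """Estimate each microphone's arrival delay (in seconds) of a known
    test signal, relative to the earliest-arriving microphone.

    test_signal : 1-D array, the reference waveform played by the earbud
    mic_signals : array of shape (num_mics, num_samples), captured audio
    """
    delays = []
    for captured in mic_signals:
        # Cross-correlate the captured signal with the known test signal;
        # the lag of the correlation peak is the arrival delay in samples.
        corr = np.correlate(captured, test_signal, mode="full")
        lag = np.argmax(corr) - (len(test_signal) - 1)
        delays.append(lag / fs)
    delays = np.array(delays)
    return delays - delays.min()
```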
In the conference mode of
Geometries other than the X- or “+”-shaped and the L- or V-shaped geometries of
The enhanced directivity mode of
A special case of the single-sided mode occurs when the user holds the array corresponding to the earphone that is not inserted into or otherwise connected to the ear close to his/her mouth. A speech-activity-based metric would then select the better of the two arrays, which is not necessarily the array on the side whose earphone is on the ear. This mode is essentially a hybrid of the single-sided and two-sided modes.
Although the disclosure has been described in the context of smartphone accessory systems having two earphones, in alternative embodiments, an accessory system might have only one earphone.
This application claims the benefit of the filing date of U.S. provisional application no. 61/708,826, filed on Oct. 2, 2012, the teachings of which are incorporated herein by reference in their entirety.