Field
This disclosure generally relates to audio communications, and more particularly, to wireless headsets.
Background
Wired and wireless headsets are known. Conventional wired headsets include a wire running between an audio source and either one or two earpieces that are intended to fit on or within a user's ears. In many cases, wireless headsets are simply replacements for wired headsets. In such circumstances, a wireless headset substitutes a wireless link, usually a radio frequency (RF) or infrared (IR) channel, for the wire running between the headset and audio source. Wireless headsets are used to provide a greater degree of user freedom, as the user is no longer tethered to the audio source by a wire. It is known for both wired and wireless headsets to be used with audio sources such as communication devices, e.g., cordless telephones, mobile radios, personal digital assistants (PDAs), cellular subscriber units and the like, as well as other source devices, such as MP3 players, stereo systems, radios, video games, personal computers, laptop computers and the like.
Known wireless headsets communicate with audio sources using RF or IR wireless technology. Such wireless headset communications have been extended to personal wireless networks, such as the one defined by the Bluetooth Specification available at www.bluetooth.com. The Bluetooth Specification provides specific guidelines for providing wireless headset functionality. In particular, the Bluetooth Specification provides a Headset Profile that defines the requirements for Bluetooth devices necessary to support the Headset use case. Once configured, the headset can function as a device's audio input and/or output. Thus, a particularly popular use of Bluetooth networks is to provide wireless headset connectivity for cellular telephones and PDAs. In addition, the Bluetooth Specification also provides the Advanced Audio Distribution Profile (A2DP) that defines protocols and procedures for wirelessly distributing high-quality stereo or mono audio over a Bluetooth network. The purpose of this Profile is to connect to MP3 music players such as the Zune, iPod, and the like.
Although wireless headsets are an improvement over wired headsets in some circumstances, there are still opportunities to further improve wireless headsets.
Known wireless headsets do not support simultaneous, direct connections to two or more separate source devices. Thus, for users who have two or more separate audio source devices, it is not currently possible to simultaneously listen to the different devices using known headsets. For example, presently available wireless headsets cannot independently output simultaneous voice calls and playback audio, e.g., a user cannot hear an incoming cellular phone voice-call while playing music from an MP3 player. The ability to simultaneously hear audio from different sources greatly improves the usability of a wireless headset because, among other things, it allows a user to be conveniently notified of events, such as incoming voice-calls during music playback from his/her MP3 player.
Disclosed herein is a new and improved wireless headset design that supports simultaneous connections to two or more audio sources and that can concurrently output audio from the different sources. The audio may include voice-calls and audio playback, e.g., playback of recorded or streaming music.
According to one aspect of the design, a wireless headset includes a first transceiver configured to receive a first audio input from a first source, a second transceiver configured to receive a second audio input from a second source, and an audio mixer configured to combine the first and second audio inputs into output audio.
According to another aspect of the design, a method for outputting audio at a wireless headset includes receiving, at the wireless headset, first and second audio inputs from different sources and mixing the first and second audio inputs into output audio.
According to another aspect of the design, an apparatus includes means for receiving at a wireless headset a first audio input from a first source, means for receiving at the wireless headset a second audio input from a second source, means for mixing the first and second audio inputs into output audio, and means for outputting the output audio from the wireless headset.
According to a further aspect of the design, a computer-readable medium, embodying a set of instructions executable by one or more processors, includes code for receiving a first audio input from a first source, code for receiving a second audio input from a second source, code for mixing the first and second audio inputs into output audio, and code for outputting the output audio from a wireless headset.
Other aspects, features, processes and advantages of the wireless headset design will be or will become apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional features, aspects, processes and advantages be included within this description and be protected by the accompanying claims.
It is to be understood that the drawings are solely for purpose of illustration. Furthermore, the components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the wireless headset design and its various aspects. In the figures, like reference numerals designate corresponding parts throughout the different views.
The following detailed description, which references and incorporates the drawings, describes and illustrates one or more specific embodiments. These embodiments, offered not to limit but only to exemplify and teach, are shown and described in sufficient detail to enable those skilled in the art to practice what is claimed. Thus, for the sake of brevity, the description may omit certain information known to those of skill in the art.
The word “exemplary” is used throughout this disclosure to mean “serving as an example, instance, or illustration.” Any embodiment or feature described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or features.
Turning now to the drawings, and in particular to
The audio signals transmitted to and from the headset 102 can represent any form of discernible sound, including but not limited to voice and monaural or stereo audio. The audio signals transmitted between the audio sources and the headset 102 over the wireless channels can represent digitized audio sampled at the industry-standard rate of 44.1 kHz. Other standard rates, such as 8 kHz, 16 kHz, and 48 kHz, as well as other rates, may also be used.
The wireless headset 102 communicates with the audio sources via plural wireless channels, e.g., radio frequency (RF) or infrared channels. In the exemplary system 100, the MP3 player 104 plays back music, which is transmitted as wireless signals by way of a first wireless channel 108 to the headset 102 where it can be rendered and heard by a user. The signals on the first wireless channel 108 may represent stereo or monaural audio. The cellular phone 106 can place and receive voice calls over a cellular network. The cellular phone 106 transmits and receives voice-call information, including voice itself, to and from the headset 102 as wireless signals over a second wireless channel 110.
The exemplary wireless headset 102 includes two earpieces 103 and at least one support, such as a headband 105, for allowing the headset 102 to be comfortably worn by a user. The wireless headset 102 is configured to simultaneously receive audio information over both the first and second wireless channels 108, 110 and to mix the received audio information so that it can be combined and output together at the earpieces 103, thus allowing the user to simultaneously hear audio from both sources. In known Bluetooth headsets, only one Bluetooth transceiver is present. This transceiver can typically be “paired” with up to four different devices. However, only one paired device at a time can exchange information with the headset transceiver. Thus, with a conventional Bluetooth headset, a user can listen to only one audio source at a time. In contrast to conventional Bluetooth headsets, the wireless headset 102 includes two or more wireless transceivers. Each transceiver may be paired with a different source device, for example, one with the phone 106 and another with the MP3 player 104. The audio from the sources is mixed within the headset 102. The mixed audio output from the source devices is then output from speakers in the headset 102.
To control multiple source devices, the headset 102 may include a user interface to select the device to be controlled.
To support multiple transceivers on the headset 102, an audio mixer 206 (
Although illustrated with the headband 105, the headset 102 and earpieces 103 can have any suitable physical shape and size adapted to securely fit the earpieces 103 over or into a user's ears. The headband 105 may be optionally omitted from the headset 102. For example, the earpieces 103 can be conventional hook-shaped earpieces for attaching behind a user's earlobe and over or into the user's ear canal. In addition, although the headset 102 is illustrated as having two earpieces 103, the headset 102 may alternatively include only a single earpiece.
The headset 102 also includes a controller 226 coupled to a memory 227, a left-channel audio processing circuit 210, a left-channel digital-to-analog converter (DAC) 212, a left-channel high-impedance headphone (HPH) amplifier (Amp) 214, a left-channel earphone speaker 216, a right-channel audio processing circuit 218, a right-channel DAC 220, a right-channel HPH amp 222, and a right-channel earphone speaker 224.
The headset 102 may also include an optional microphone (MIC) 228 configured to produce a third audio stream that is preprocessed by microphone preprocessor 230 and then provided to one of the transceivers 202, 204, e.g., the second transceiver 204, where it is further processed and then passed to the audio mixer 206. When the microphone 228 is included in the headset 102, the audio mixer 206 is configured to combine the first, second and third audio streams into the output audio.
The microphone 228 is any suitable microphone device for converting sound into electronic signals.
The microphone preprocessor 230 is configured to process electronic signals received from the microphone 228. The microphone preprocessor 230 may include an analog-to-digital converter (ADC) and a noise reduction and echo cancellation circuit (NREC). The ADC converts analog signals from the microphone into digital signals that are then processed by the NREC. The NREC is employed to reduce undesirable audio artifacts for communications and voice control applications. The microphone preprocessor 230 may be implemented using commercially-available hardware, software, firmware, or any suitable combination thereof.
The controller 226 controls the overall operation of the headset 102 and certain components contained therein. The controller 226 can be any suitable control device for causing the headset 102 to perform its functions and processes as described herein. For example, the controller 226 can be a processor for executing programming instructions stored in the memory 227, e.g., a microprocessor, such as an ARM 7, or a digital signal processor (DSP), or it can be implemented as one or more application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), discrete logic, software, hardware, firmware or any suitable combination thereof.
The memory 227 is any suitable memory device for storing programming instructions and data executed and used by the controller 226.
The wireless interfaces 202, 204 each provide two-way wireless communications with the first and second audio sources 104, 106, respectively. Preferably, each wireless interface 202, 204 includes a commercially-available Bluetooth module that provides at least a Bluetooth core system consisting of a Bluetooth RF transceiver, baseband processor, protocol stack, as well as hardware and software interfaces for connecting the module to the controller 226 and audio mixer 206. Although any suitable wireless technology can be employed with the headset 102, the first and second transceivers 203, 205 as illustrated in
Digitized audio streams are output from the first and second wireless interfaces 202, 204 and received by the audio mixer 206. The format of the digitized audio streams may be any suitable format, and thus, the audio streams may, in some circumstances, be raw audio samples, such as pulse code modulation (PCM) samples, or in other circumstances, digitally encoded and/or compressed audio, such as MP3 audio. The controller 226 may be configured to detect the incoming audio stream formats from each wireless interface 202, 204 and then configure the audio mixer 206, audio processing circuits 210, 218 and other components, as necessary, to process and/or decode the incoming audio streams in a manner so that the streams can be appropriately mixed and output through speakers 216, 224 to be meaningfully heard by a user. Encoded and/or compressed audio is typically decoded and/or decompressed prior to being passed to the audio mixer 206.
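The format handling described above can be sketched as a simple dispatch that normalizes each incoming stream to raw PCM before it reaches the mixer. This is an illustrative model only; the format names and decoder hooks are hypothetical and do not represent an actual Bluetooth stack API.

```python
def to_pcm(stream_format, payload, decoders):
    """Return raw PCM samples for a stream, decoding if necessary.

    stream_format -- a hypothetical format tag, e.g. "pcm" or "mp3"
    payload       -- the stream data as received from a wireless interface
    decoders      -- mapping of format tag to a decode function
    """
    if stream_format == "pcm":
        return payload  # already raw samples; pass straight to the mixer
    try:
        decode = decoders[stream_format]  # e.g. an SBC or MP3 decoder
    except KeyError:
        raise ValueError(f"no decoder configured for {stream_format!r}")
    return decode(payload)
```

In this sketch, the controller's role corresponds to populating the `decoders` table after detecting each stream's format, so that every stream handed to the mixer is uniform raw PCM.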
In the exemplary headset configurations shown in
The audio mixer 206 mixes the incoming audio streams from the wireless interfaces 202, 204 to produce mixed audio signals, in this case, left-channel and right-channel mixed digitized audio streams. The audio mixer 206 includes a matrix element 208 configured to weight each of the first and second audio streams, as well as a third, microphone audio stream, if present, thereby producing weighted audio signals. The matrix element 208 may also be configured to sum the weighted audio signals to produce one or more output streams.
The matrix element 208 may include one or more digital weighted sum circuits and its operation can be represented mathematically using matrix algebra. The matrix element output may be represented by the vector Y, its input by the vector X and the weighting coefficients by a matrix M, and thus, the operation of the matrix element 208 is described using matrix algebra as Y=MX.
In the exemplary headset circuits shown in
X = [x1 x2 x3 x4]T (1)
where: x1=left-channel stereo audio input; x2=right-channel stereo audio input; x3=voice audio input; x4=microphone audio input.
The inputs, x1, x2, x3, x4, to the matrix element 208 may be digital data representing a predefined duration of input audio.
The matrix element 208 has two outputs: left-channel speaker and right-channel speaker, represented by the vector shown in Equation 2.
Y = [y1 y2]T (2)
where: y1=left channel audio output; y2=right channel audio output.
The outputs, y1, y2, of the matrix element 208 may be digital data representing a predefined duration of audio.
The coefficient matrix M may be represented by a 2×4 matrix:
M = [a1 b1 c1 d1; a2 b2 c2 d2] (3)
where the elements of M are pre-selected variable values or constants.
Thus, the matrix element output, Y=MX, can be written as the system of equations:
y1=a1x1+b1x2+c1x3+d1x4
y2=a2x1+b2x2+c2x3+d2x4 (4)
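The weighted-sum operation Y=MX of Equation 4 can be sketched in a few lines of Python. This is an illustrative model of the matrix element's arithmetic, not headset firmware; the function name is chosen here for clarity.

```python
def matrix_mix(M, x):
    """Weighted-sum mixing Y = M X.

    M -- 2x4 list of coefficient rows [[a1, b1, c1, d1], [a2, b2, c2, d2]]
    x -- input vector [x1, x2, x3, x4] of audio samples
    Returns the output vector [y1, y2].
    """
    return [sum(coeff * sample for coeff, sample in zip(row, x)) for row in M]

# Stereo pass-through configuration (y1 = x1, y2 = x2):
M_stereo = [[1, 0, 0, 0],
            [0, 1, 0, 0]]
print(matrix_mix(M_stereo, [0.2, -0.4, 0.6, 0.1]))  # [0.2, -0.4]
```

Each call models the mixing of one set of concurrent samples; in the headset this operation would repeat for every sample (or block of samples) in the audio streams.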
The audio mixer 206 may be programmably configured to select different weighting coefficient matrix configurations, and therefore, different mixings of the incoming audio streams. The streams can be combined such that the audio mixer output includes only the first audio stream. The streams can alternatively be combined to include only the second audio stream in the output audio, or to include a mixture of both the first and second audio streams in the output audio.
For example, to configure the headset 102 to play stereo audio only, the matrix M of weighting coefficients may be set to:
M = [1 0 0 0; 0 1 0 0] (5)
Thus, applying the matrix of Equation 5 into Equation 4, the operation and outputs of the matrix element 208 are described as shown below in Equation 6:
y1=x1
y2=x2 (6)
To configure the headset 102 to play voice only, evenly distributed in both earpiece speakers 216, 224, the matrix M of weighting coefficients may be set to:
M = [0 0 0.5 0; 0 0 0.5 0] (7)
Thus, applying the matrix of Equation 7 into Equation 4, the operation and outputs of the matrix element 208 are described as shown below in Equation 8:
y1=0.5x3
y2=0.5x3 (8)
To configure the headset 102 to play stereo audio combined with voice, evenly distributed in both earpiece speakers 216, 224, the matrix M of weighting coefficients may be set to:
M = [1 0 0.5 0; 0 1 0.5 0] (9)
Thus, applying the matrix of Equation 9 into Equation 4, the operation and outputs of the matrix element 208 are described as shown below in Equation 10:
y1=x1+0.5x3
y2=x2+0.5x3 (10)
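The three coefficient configurations of Equations 5, 7 and 9 can be modeled as selectable presets, as the following illustrative Python sketch shows. The mode names are hypothetical; only the coefficient values come from the equations above.

```python
# Preset coefficient matrices keyed by an illustrative mode name.
# Inputs: x1, x2 = left/right stereo; x3 = voice; x4 = microphone.
PRESETS = {
    "stereo_only":  [[1, 0, 0.0, 0], [0, 1, 0.0, 0]],  # Equation 5
    "voice_only":   [[0, 0, 0.5, 0], [0, 0, 0.5, 0]],  # Equation 7
    "stereo_voice": [[1, 0, 0.5, 0], [0, 1, 0.5, 0]],  # Equation 9
}

def mix(mode, x):
    """Apply the selected preset matrix to input vector x, returning [y1, y2]."""
    return [sum(c * s for c, s in zip(row, x)) for row in PRESETS[mode]]

x = [0.2, -0.2, 1.0, 0.0]      # stereo left/right, voice, microphone samples
print(mix("voice_only", x))    # voice at half gain in both ears: [0.5, 0.5]
print(mix("stereo_voice", x))  # stereo plus half-gain voice overlay
```

Selecting a different preset reconfigures the mix without any change to the audio path itself, which models how the controller 226 could reprogram the matrix element 208.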
Additionally, the elements of the matrix M can be time-varying to produce advanced effects, such as fade-in, fade-out or the like. The matrix M elements can be stored as data sets in the memory 227, and can be configured by the controller 226. The matrix M elements can also apply gains to the audio inputs, and the gains may also be made time-varying by changing the value(s) of one or more of the matrix elements over time.
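A fade-in of the kind just described can be sketched by ramping the stereo gain elements of M over successive frames. This is a minimal illustrative model; the ramp shape, frame count, and function names are assumptions, not taken from the design itself.

```python
def fade_in_gain(frame_index, fade_frames):
    """Linear gain ramp from 0 to 1, clamped to 1.0 once the fade completes."""
    return min(1.0, frame_index / fade_frames)

def mix_frame(frame_index, x, fade_frames=100):
    """Mix one frame with time-varying stereo gain and fixed 0.5 voice gain."""
    g = fade_in_gain(frame_index, fade_frames)
    M = [[g, 0, 0.5, 0],   # a1 and b2 ramp over time; c1, c2 stay at 0.5
         [0, g, 0.5, 0]]
    return [sum(c * s for c, s in zip(row, x)) for row in M]
```

A fade-out would ramp the same elements in the opposite direction; in the headset, the successive coefficient values could be read from data sets stored in the memory 227.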
The functions of the audio mixer 206 and matrix element 208 may be implemented using any suitable analog and/or digital circuitry. For example, in the digital domain, the audio mixer 206 and matrix element 208 may be implemented in software executable by a processor, e.g., a microprocessor, such as an ARM7, or a digital signal processor (DSP), or they may be implemented as one or more application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), discrete logic, software, hardware, firmware or any suitable combination thereof.
The mixed digitized audio streams output by the audio mixer 206 are provided to the left-channel and right-channel audio processing circuits 210, 218.
The left-channel audio processing circuit 210 receives the mixed digitized audio stream from the left channel output of the audio mixer 206. The audio processing circuit 210 includes digital circuitry to process the mixed digitized audio signals in the digital domain. For example, the left-channel mixed digitized audio stream may be truncated one or more times, filtered one or more times, amplified one or more times, and upsampled one or more times by the audio processing circuit 210. Filtering may include low pass filtering, high pass filtering, and/or passing the stream through filters characterized by other kinds of filter functions. Amplification in the digital domain may include the use of a programmable gain amplifier (PGA).
The right-channel audio processing circuit 218 receives the mixed digitized audio stream from the right channel output of the audio mixer 206. The audio processing circuit 218 includes digital circuitry to process the right-channel mixed digitized audio signals in the digital domain. For example, the right-channel mixed digitized audio stream may be truncated one or more times, filtered one or more times, amplified one or more times, and upsampled one or more times by the audio processing circuit 218. Filtering may include low pass filtering, high pass filtering, and/or passing the stream through filters characterized by other kinds of filter functions. Amplification in the digital domain may include the use of a programmable gain amplifier (PGA).
The left-channel and right-channel audio processing circuits 210, 218 may be implemented using commercially-available, off-the-shelf components. Additionally, the audio processing circuits 210, 218 may be combined into a single, multiplexed processing path that handles both left and right audio channels. Also, some or all of the functions of the audio processing circuits 210, 218 may be implemented as software executable on a processor.
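One of the digital-domain steps mentioned above, programmable-gain amplification, can be modeled as a gain followed by saturation to the sample range. The following is a minimal sketch assuming signed 16-bit samples; actual PGA hardware behavior may differ.

```python
def pga(samples, gain):
    """Apply a programmable gain and clip to the 16-bit range [-32768, 32767]."""
    out = []
    for s in samples:
        v = int(round(s * gain))
        # Saturate rather than wrap, so loud passages distort gracefully.
        out.append(max(-32768, min(32767, v)))
    return out
```

The clipping step models the saturation that keeps an over-driven signal within the DAC's input range rather than wrapping around.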
The left-channel DAC 212 converts left-channel mixed digitized audio output from the left-channel audio processing circuit 210 into a left-channel analog audio signal. The left channel analog audio signal is then amplified by the audio amplifier 214 to drive the left speaker 216.
The right-channel DAC 220 converts right-channel mixed digitized audio output from the right-channel audio processing circuit 218 into a right-channel analog audio signal. The right-channel analog audio signal is then amplified by the audio amplifier 222 to drive the right speaker 224.
One of ordinary skill in the art will understand that additional analog audio processing circuitry (not shown), beyond the audio amplifiers 214, 222, may be included in the headset 102.
The left and right headset speakers 216, 224 are any suitable audio transducer for converting the electronic signals output from the amplifiers 214, 222, respectively, into sound.
To save power, the controller 226 can switch off certain audio paths within the headset 102 when they are not in use. For example, if voice is not being received at the headset 102 and only stereo audio is being received, the controller 226 can temporarily switch off the second wireless interface 204 and microphone preprocessor 230.
An alternative arrangement of the headset components is to have the first transceiver's output sent to the second transceiver 205, before or after the matrix element 208. This would allow music from an audio source connected to the first wireless interface 202 to be sent to a remote station or second source communicating with the headset 102 via the second wireless interface 204.
In an alternative implementation (not shown), the memory 227, wireless interfaces 202 and 204, as well as the first and second transceivers 203, 205 may also be included in the processor 211.
Other implementations of the headset circuitry are possible.
In block 302, audio from a first audio source, e.g., the MP3 player 104, is received by the headset 102 over the first wireless channel 108. The audio may include Bluetooth streaming audio resulting from a connection established between the MP3 player 104 and the headset 102, as described in the A2DP specification. After the Bluetooth streaming audio connection is established, audio packets are transmitted from the first audio source to the headset 102. Generally, the audio packets include digitized audio that is encoded using a negotiated codec standard. Each audio packet represents a predetermined duration of sound, e.g., 20 milliseconds, that is to be output at the headset 102. The audio packets can be formatted according to the A2DP profile, including one or more frames of encoded audio. The audio can be encoded using any suitable audio codec, including but not limited to SBC, MPEG-1 audio, and MPEG-2 audio.
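The relationship between packet duration and sample count can be checked with simple arithmetic: at the 44.1 kHz rate mentioned earlier, a 20-millisecond packet carries 0.020 s × 44100 samples/s = 882 samples per channel. A small illustrative helper:

```python
def samples_per_packet(sample_rate_hz, packet_ms):
    """Number of audio samples per channel carried by one packet."""
    return int(sample_rate_hz * packet_ms / 1000)

print(samples_per_packet(44100, 20))  # 882 samples at 44.1 kHz
print(samples_per_packet(8000, 20))   # 160 samples at the 8 kHz voice rate
```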
In block 304, audio from a second audio source, e.g., the cellular phone 106, is received by the headset 102 over the second wireless channel 110. The audio from the second source may be in a different format from the audio from the first source. If so, the controller 226 can perform any necessary decoding and/or additional processing to render the audio streams so that they can be compatibly mixed by the audio mixer 206.
Next, in block 306, audio streams from the two sources are mixed together into an output audio stream. The audio mixer 206 and matrix element 208 can perform this step. The functions of these components are discussed above in connection with
In block 308, the mixed audio is processed by the audio processing circuits 210, 218, DACs 212, 220 and output through the headphone speakers 216, 224 of the wireless headset 102.
Although specific implementations of headset circuits have been described above, the functions of the headset circuitry and its components, as well as the method steps described herein, may be implemented in any suitable combination of hardware, software, and/or firmware, where such software and/or firmware is executable by one or more digital circuits, such as microprocessors, DSPs, embedded controllers, or intellectual property (IP) cores. If implemented in software, the functions may be stored on or transmitted as instructions or code on one or more computer-readable media. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above should also be included within the scope of computer-readable media.
Other embodiments and modifications will occur readily to those of ordinary skill in the art in view of these teachings. Therefore, the following claims are intended to cover all such embodiments and modifications when viewed in conjunction with the above specification and accompanying drawings.
Number | Name | Date | Kind |
---|---|---|---|
6006115 | Wingate | Dec 1999 | A |
6662022 | Kanamori et al. | Dec 2003 | B1 |
6782106 | Kong et al. | Aug 2004 | B1 |
6954652 | Sakanashi | Oct 2005 | B1 |
8155335 | Rutschman | Apr 2012 | B2 |
20030161292 | Silvester | Aug 2003 | A1 |
20050096766 | Nishioka et al. | May 2005 | A1 |
20050202857 | Seshadri et al. | Sep 2005 | A1 |
20060153007 | Chester | Jul 2006 | A1 |
20060166715 | Van Engelen et al. | Jul 2006 | A1 |
20060166716 | Seshadri et al. | Jul 2006 | A1 |
20060262938 | Gauger, Jr. et al. | Nov 2006 | A1 |
20070002955 | Fechtel et al. | Jan 2007 | A1 |
20070038442 | Visser et al. | Feb 2007 | A1 |
20070042762 | Guccione | Feb 2007 | A1 |
20070129104 | Sano et al. | Jun 2007 | A1 |
20070149261 | Huddart | Jul 2007 | A1 |
20080161066 | Reda et al. | Jul 2008 | A1 |
20080161067 | Reda et al. | Jul 2008 | A1 |
Number | Date | Country |
---|---|---|
1612205 | May 2005 | CN |
5276593 | Oct 1993 | JP |
2001313582 | Nov 2001 | JP |
2005295253 | Oct 2005 | JP |
2007142684 | Jun 2007 | JP |
WO0184727 | Nov 2001 | WO |
WO2009097009 | Aug 2009 | WO |
Entry |
---|
International Search Report and Written Opinion PCT/US2009/063270; International Searching Authority; dated Feb. 9, 2010. |
Taiwan Search Report—TW098138713—TIPO—Jan. 31, 2013. |
Number | Date | Country | |
---|---|---|---|
20100150383 A1 | Jun 2010 | US |