The present disclosure is generally related to audio processing.
Recent wireless video transmission standards such as WirelessHD allow mobile devices such as tablets and smartphones to transmit rich multimedia from a user's hand to audio/video (A/V) resources in a room, such as a big screen and surround speakers. Current challenges include providing a satisfactory presentation of multimedia to interested users without interfering with the enjoyment of others.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
Disclosed herein are certain embodiments of a personal audio beamforming system and method that apply adaptive loudspeaker beamforming to focus audio energy coming from multiple loudspeakers such that the audio is perceived loudest at the location of a user and quieter elsewhere in a room. In one embodiment, a personal audio beamforming system may use adaptive loudspeaker beamforming in conjunction with a mobile sensing microphone residing in a mobile device, such as a smartphone, tablet, laptop, among other mobile devices with wireless communication capabilities.
For instance, tablets and smartphones typically have a microphone and audio signal processing capabilities. In one embodiment, an adaptive filtering algorithm (e.g., least mean squares (LMS), recursive least squares (RLS), etc.) may be implemented in the mobile device to control the matrixing of multiple-channel audio being transmitted over a WirelessHD, or similar, transmission channel. In one embodiment, an adaptive feedback control loop may continually balance the phasing of the channels such that the audio amplitude sensed at the microphone input of the mobile device is optimized (e.g., maximized) while nulls or lower-amplitude audio are created elsewhere in the room.
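As a rough illustration of this adaptive approach (a sketch, not the disclosed implementation), the following runs a gradient-style update on complex per-channel weights so that the amplitude of the combined signal at a microphone is maximized. The narrowband single-tone model, the channel vector `h`, and all variable names are assumptions for illustration; a practical system would estimate the update from the sensed microphone signal rather than from a known channel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Narrowband illustration: h[k] is the complex gain/phase from speaker k
# to the microphone of the user's mobile device. In practice h is unknown
# and the adjustment is driven by the sensed microphone signal; h is used
# directly here only for clarity.
n_speakers = 4
h = np.exp(1j * rng.uniform(0, 2 * np.pi, n_speakers))

# One complex weight per audio channel, with fixed total transmit power.
w = np.ones(n_speakers, dtype=complex) / np.sqrt(n_speakers)

mu = 0.1  # adaptation step size
for _ in range(200):
    y = w @ h                      # signal arriving at the microphone
    w = w + mu * np.conj(h) * y    # gradient ascent on |y|^2
    w = w / np.linalg.norm(w)      # hold transmit power constant

# Once the channel phases are aligned, the microphone amplitude approaches
# the coherent sum sqrt(n_speakers) = 2.0 instead of an incoherent ~1.
print(round(abs(w @ h), 2))  # → 2.0
```

An LMS- or RLS-style adaptive filter, as mentioned above, plays the analogous role of steering the multiple-channel matrixing toward the sensed optimum.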
Benefits that may inure through the use of one or more embodiments of a personal audio beamforming system include isolation of at least some of the audio from others in the room (e.g., preventing or mitigating disturbance of others in the room by the user's audio). In addition, or alternatively in some embodiments, a personal audio beamforming system may permit multiple users in a room to share loudspeaker resources and to hear their individual audio sources with reduced crosstalk. Also, in some embodiments, power savings may be realized through implementation of a personal audio beamforming system, since power is focused primarily in the desired direction rather than in undesired directions.
In contrast, existing systems may have a one-time set-up that optimizes the beam for a fixed listening position, without further modification once initiated. Such limited adaptability may result in user dissatisfaction. In one or more embodiments of a personal audio beamforming system, the beam is continually adapted based on the signal characteristics as the position of the mobile device moves, and in turn, the audio amplitude is optimized at the device of the user.
Having summarized certain features of an embodiment of a personal audio beamforming system, reference will now be made in detail to the description of the disclosure as illustrated in the drawings. While the disclosure will be described in connection with these drawings, there is no intent to limit it to the embodiment or embodiments disclosed herein. Further, although the description identifies or describes specifics of one or more embodiments, such specifics are not necessarily part of every embodiment, nor are all various stated advantages necessarily associated with a single embodiment or all embodiments. On the contrary, the intent is to cover all alternatives, modifications and equivalents included within the spirit and scope of the disclosure as defined by the appended claims. Further, it should be appreciated in the context of the present disclosure that the claims are not necessarily limited to the particular embodiments set out in the description.
Referring to
In one example operation, the mobile device 106 may be equipped with a wireless HDMI interface to project multimedia such as audio and/or video (e.g., received wirelessly or over a wired connection from a media source) to the media device 112. The media device 112 is equipped to process the signal, play back the video (e.g., on a display device, such as a computer monitor, television, or other electronic appliance display screen), and play back the audio via the speakers 114. The microphone of the mobile device 106 is equipped to detect the audio from the speakers 114. The mobile device 106 may be equipped with feedback control logic, which extracts and/or computes signal statistics or parameters (e.g., amplitude, phase, etc.) from the microphone signal and makes adjustments to the decoded source audio. The adjustments cause the audio emanating from the speakers 114 to interact constructively, destructively, or in some combination of both at the input to the microphone, such that the microphone receives the audio at or proximal to a defined target level (e.g., the highest or an optimized audio amplitude) regardless of the location of the mobile device 106 in the room 110. In other words, as the user 102 traverses the room 110, the feedback control logic (whether embodied in the mobile device 106 or the media device 112) continually adjusts the decoded source audio to target a desired (e.g., optimal, maximum, etc.) amplitude at the input to the microphone of the mobile device 106.
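The feedback loop described above, which observes only the amplitude at the microphone, can be sketched as a simple measure-perturb-keep loop. Everything below (the coordinate-wise phase search, the step size, the simulated room response) is an illustrative assumption, not the disclosed control algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def mic_amplitude(phases, h):
    """Audio amplitude sensed at the device microphone for the given
    per-channel phase offsets (narrowband model)."""
    return abs(np.sum(np.exp(1j * phases) * h))

# Hypothetical room response from four speakers to the user's location.
h = np.exp(1j * rng.uniform(0, 2 * np.pi, 4))

phases = np.zeros(4)
step = 0.3
for _ in range(40):          # repeated sweeps emulate continual adaptation
    for k in range(4):
        for delta in (step, -step):
            trial = phases.copy()
            trial[k] += delta   # perturb one channel's phase
            if mic_amplitude(trial, h) > mic_amplitude(phases, h):
                phases = trial  # keep only changes that raise the level

# The loop drives the sensed amplitude toward the coherent maximum of 4.0,
# well above the unadapted (incoherent) starting level.
print(mic_amplitude(phases, h) > 3.5)  # → True
```

If the user moves, the room response `h` changes, and the same loop simply keeps running, which is the "continually adjusts" behavior described above.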
In some embodiments, the mobile device 108 may also have a microphone to cause a nulling or attenuation of the audio to ensure the user 104 is not disturbed (or not significantly disturbed) by the audio the user 102 is enjoying. For instance, in one example operation, the mobile device 108 may indicate (e.g., as prompted by input by the user 104) to the mobile device 106 whether or not the user 104 is interested in audio content destined for the user 102. The mobile device 108 may transmit to the mobile device 106 statistics about the signal (and/or transmit the signal or a variation thereof) received by the microphone of the mobile device 108 to appropriately direct the control logic of the personal audio beamforming system (e.g., of the mobile device 106) to achieve the stated goals (e.g., boost the signal when the user 104 is interested in the audio, or null the signal when disinterested). Assume the user 104 is not interested in the content (desired by the user of the mobile device 106) being received by the mobile device 108. In such a circumstance, the mobile device 108 may try to distinguish the portion of the received signal amplitude contributed by the unwanted content sourced by the mobile device 106. If the mobile device 108 is not transmitting audio, then such a circumstance represents a simple case of the reception of unwanted audio. However, if the mobile device 108 is transmitting its own audio content, then in one embodiment, the mobile device 108 may estimate the expected audio signal envelope by analyzing its own content transmission and subtract that envelope (corresponding to the desired audio content) from the envelope of the signal detected by its microphone (which includes the desired audio as well as the unwanted audio from the mobile device 106). Based on the residual envelope, the mobile device 108 may estimate the crosstalk signal strength.
In other words, the mobile device 108 may determine how much unwanted signal power is received by subtracting off the desired content to be heard. The mobile device 108 may signal information corresponding to the unwanted signal power to the mobile device 106, enabling the mobile device 106 to de-emphasize the spectrum corresponding to the unwanted audio signal power and thereby achieve a nulling of the unwanted content at the microphone of the mobile device 108. Other mechanisms to remove the unwanted signal contribution are contemplated to be within the scope of the disclosure.
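A minimal sketch of the envelope-subtraction idea, under assumed signals: the device subtracts the short-time power envelope of its own (known) content from the envelope measured at its microphone and treats the residual as an estimate of the unwanted crosstalk power. The tone frequencies, amplitudes, and frame size here are all hypothetical:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs

# Hypothetical signals: the device's own (wanted) audio and the
# crosstalk arriving from another user's stream.
wanted = 0.8 * np.sin(2 * np.pi * 440 * t)
crosstalk = 0.3 * np.sin(2 * np.pi * 1000 * t)
mic = wanted + crosstalk  # what the microphone actually picks up

def envelope_power(x, frame=256):
    """Mean power per short frame: a crude amplitude envelope."""
    frames = x[: len(x) // frame * frame].reshape(-1, frame)
    return (frames ** 2).mean(axis=1)

# Subtract the expected envelope of the device's own content from the
# measured envelope; the residual estimates the unwanted signal power.
residual = envelope_power(mic) - envelope_power(wanted)
est_crosstalk_rms = np.sqrt(residual.clip(min=0).mean())

# True crosstalk RMS is 0.3 / sqrt(2) ≈ 0.21.
print(round(est_crosstalk_rms, 2))
```

The estimated crosstalk strength is the quantity the mobile device 108 could report back to the mobile device 106 to drive the de-emphasis described above.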
In some embodiments, source audio reception and processing (e.g., decode, encode, etc.) may be handled at the media device 112, where the mobile device 106 handles microphone input and feedback adjustments. In some embodiments, the mobile device 106 may only handle the microphone reception and communicate parameters of the signal (and/or the signal) to the media device 112 for further processing. Other variations are contemplated to be within the scope of the disclosure.
In some embodiments, the personal audio beamforming system may comprise all of the components shown in
Having described an example environment in which certain embodiments of a personal audio beamforming system may be employed, attention is directed now to
The audio processing logic 206 may include decoding and encoding functionality. For instance, the audio processing logic 206 decodes the sourced audio, providing the decoded audio to the feedback control logic 204. The feedback control logic 204 processes (e.g., modifies the amplitude and/or phase delay of) the decoded audio and provides the processed audio over plural channels. Audio encoding functionality of the audio processing logic 206 encodes the adjusted audio and provides a modified audio bitstream to the transmission interface logic 208. The transmission interface logic 208 may be embodied as a wireless audio transmitter (or transceiver in some embodiments) equipped with one or more antennas to wirelessly communicate the modified audio bitstream to the receive interface 210. In some embodiments, the transmission interface logic 208 may be a wired connection, such as where a mobile device (e.g., mobile device 106) is plugged into a media device 112 (
The receive interface logic 210 is configured to receive the transmitted (e.g., whether over a wired or wireless connection) modified audio bitstream (or some signal version thereof). The receive interface logic 210 may be embodied as a wireless audio receiver or a connection (e.g., for wired communication), depending on the manner of communication. The receive interface logic 210 is configured to provide the processed, modified audio bitstream to the audio processing/amplification logic 212, which may include audio decoding functionality, digital to analog converters (DACs), amplifiers, among other components well-known to one having ordinary skill in the art. The audio processing/amplification logic 212 processes the decoded audio having modified parameters and drives the plural speakers 214, enabling the audio to be output. The microphone 216 is configured to receive the audio emanating from the speakers 214 and provide a corresponding signal to the feedback control logic 204. The feedback control logic 204 may determine the signal parameters from the signal provided by the microphone 216 and direct filtering operations in the audio processing logic 206 that adjust the amplitude, phase, and/or frequency response of the decoded source audio. The adjustments may be continuous, or almost continuous (e.g., aperiodic depending on conditions of the signal, or periodic, or both).
It should be appreciated within the context of the present disclosure that some or all of the functionality of the various logic illustrated in
Turning now to
Turning attention now to the wireless receiver/amplifier 304, the wireless audio receiver 326 includes one or more antennas, such as antenna 324. In some embodiments, the wireless audio receiver 326 (including antenna 324) is similar to the receive interface 210 (
The audio output from the plural speakers 214 is received at the microphone 216. The microphone 216 generates a signal based on the audio waves received from the speakers 214, and provides the signal to an analog to digital converter (ADC) 314. In some embodiments, the signal provided by the microphone 216 may already be digitized (e.g., via ADC functionality in the microphone). The digitized signal from the ADC 314 is provided to the feedback control logic 310, where the signal and/or signal statistics are evaluated and adjustments made as described above.
In some embodiments, the adjustments to the decoded source audio may take into account adjustments for other users in the room. For instance, the feedback control logic 310 may emphasize an audio level for the microphone input of the mobile device 302, while also adjusting the decoded source audio in a manner to de-emphasize (e.g., null out or attenuate) the audio emanating from the speakers 214 for another mobile device, such as mobile device 108 (
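One way to picture this simultaneous emphasize/de-emphasize behavior is a constrained weight choice: maximize the amplitude at the interested user's microphone subject to a null at the uninterested user's microphone. The projection-based solution below is an illustrative narrowband sketch under assumed channel responses, not the disclosed algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4  # number of speakers

# Hypothetical complex channel responses: speakers → device A's microphone
# (interested user) and speakers → device B's microphone (uninterested user).
hA = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
hB = np.exp(1j * rng.uniform(0, 2 * np.pi, n))

# Unconstrained direction that maximizes the amplitude at A.
w = np.conj(hA)

# Project out the component that radiates toward B (null constraint w·hB = 0).
b = np.conj(hB)
w = w - (np.vdot(b, w) / np.vdot(b, b)) * b
w = w / np.linalg.norm(w)   # fixed total transmit power

amp_A = abs(w @ hA)  # emphasized for the interested user
amp_B = abs(w @ hB)  # numerically ~0: nulled for the uninterested user
print(round(amp_A, 2), round(amp_B, 2))
```

The design choice here is the classic trade-off of null-steering: a hard null at B costs some of the coherent gain at A, which is why the amplitude at A is slightly below the unconstrained maximum.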
Explaining further, according to one example operation, assume M=1 (e.g., for an audio voice call), and consider
One or more embodiments of personal audio beamforming systems may be implemented in hardware, software (e.g., including firmware), or a combination thereof. In one or more embodiments, a personal audio beamforming system is implemented with any or a combination of the following technologies, which are all well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), etc. In some embodiments, one or more portions of a personal audio beamforming system may be implemented in software, where the software is stored in a memory that is executed by a suitable instruction execution system.
Referring now to
Referring to
In view of the above description, it should be appreciated that one embodiment of a personal audio beamforming method, shown in
Any process descriptions or blocks in flow diagrams should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
Number | Name | Date | Kind |
---|---|---|---|
6674865 | Venkatesh et al. | Jan 2004 | B1 |
6954538 | Shiraishi | Oct 2005 | B2 |
7117145 | Venkatesh et al. | Oct 2006 | B1 |
7526093 | Devantier et al. | Apr 2009 | B2 |
7577260 | Hooley et al. | Aug 2009 | B1 |
7606377 | Melanson | Oct 2009 | B2 |
7606380 | Melanson | Oct 2009 | B2 |
7676049 | Melanson | Mar 2010 | B2 |
7804972 | Melanson | Sep 2010 | B2 |
7826624 | Oxford | Nov 2010 | B2 |
7970150 | Oxford | Jun 2011 | B2 |
7991167 | Oxford | Aug 2011 | B2 |
8090117 | Cox | Jan 2012 | B2 |
8111830 | Moon et al. | Feb 2012 | B2 |
8160268 | Horbach | Apr 2012 | B2 |
8184180 | Beaucoup | May 2012 | B2 |
8275136 | Niemisto et al. | Sep 2012 | B2 |
20040208324 | Cheung et al. | Oct 2004 | A1 |
20070263845 | Hodges et al. | Nov 2007 | A1 |
20080069378 | Rabinowitz et al. | Mar 2008 | A1 |
20080226087 | Kinghorn | Sep 2008 | A1 |
20090238383 | Meyer et al. | Sep 2009 | A1 |
20090252355 | Mao | Oct 2009 | A1 |
20090316918 | Niemisto et al. | Dec 2009 | A1 |
20100026780 | Tico et al. | Feb 2010 | A1 |
20110002469 | Ojala | Jan 2011 | A1 |
20110038229 | Beaucoup | Feb 2011 | A1 |
20110091055 | Leblanc | Apr 2011 | A1 |
20110096915 | Nemer et al. | Apr 2011 | A1 |
20110129095 | Avendano et al. | Jun 2011 | A1 |
20110164141 | Tico et al. | Jul 2011 | A1 |
20110178798 | Flaks et al. | Jul 2011 | A1 |
20110301730 | Kemp et al. | Dec 2011 | A1 |
20120076306 | Aarts et al. | Mar 2012 | A1 |
20120093344 | Sun et al. | Apr 2012 | A1 |
20120120270 | Li et al. | May 2012 | A1 |
Entry |
---|
Yamaha Sound Bar/Digital Sound Projector, http://usa.yamaha.com/products/audio-visual/hometheater-systems/digital-sound-projector/, Copyright 2012. |
Number | Date | Country | |
---|---|---|---|
20140003622 A1 | Jan 2014 | US |