Due to mobility requirements and dimension restrictions, a mobile device (e.g., a mobile phone, a smart phone, etc.) typically comprises one or more small-size or low-cost loudspeakers. The sound quality of audio and speech signals on mobile devices has therefore been severely limited: compared to non-mobile or high-end loudspeaker systems, such loudspeakers cannot produce sufficient loudness without risking damage to the loudspeaker(s). The widespread popularity of smart phones and of multimedia-intensive mobile applications has triggered demand for better audio quality on mobile devices. Several approaches have been used to achieve better sound quality with sufficient loudness. For example, automatic gain control (AGC) and/or automatic volume control (AVC) have been widely implemented to ease the audio quality problem to some extent on mobile devices.
The small loudspeaker in a mobile device can operate in a linear mode for small signals, but its linearity no longer holds for large signals under high compression. A signal low enough in frequency and/or large enough in level may cause excessive movement of the loudspeaker diaphragm.
Excursion refers to the distance that a diaphragm in a loudspeaker may travel from its resting position. Signals low enough in frequency and/or large enough in level may cause excessive movement of the diaphragm of the loudspeaker in a mobile device. When the loudspeaker is driven by such a high-power signal, the diaphragm movement (i.e., the excursion) may consistently exceed its excursion limit, leading to poor sound and an unpleasant audio experience for the listener. More particularly, in such a case, the voice coil tends to exit the gap, resulting in the coil rubbing and possibly reaching a break-up mode of the voice coil displacement.
Known prior art diaphragm excursion control techniques use a high-pass filter or a notch filter to suppress the low-frequency content around the resonance frequency that may cause excessive diaphragm movement. Due to the lack of low frequencies and the loss of loudness, these approaches often produce an unnatural and tinny sound. Moreover, because the low frequencies in the loudspeaker signal are always filtered out, the unpleasant experience for the listener persists even when the signal is small enough to stay in the loudspeaker's linear range.
An original loudness level of an audio signal (e.g., a speech signal or other input audio signal) is maintained for a mobile device while keeping sound quality as good as possible and protecting the loudspeaker used in the mobile device. More particularly, the loudness of an audio signal may be maximized while the excursion of the diaphragm of the loudspeaker (in a mobile device) is controlled to stay within the allowed range.
In an implementation, the peak excursion is predicted (e.g., estimated) using the input signal and an excursion transfer function. The signal is modified to limit the excursion and to maximize loudness.
In an implementation, in a first operation, to estimate the peak excursion, the input audio signal or speech signal (i.e., the input signal) is filtered with the impulse response (of the excursion transfer function) of the loudspeaker to estimate the peak excursion for the signal. In a second operation, an excursion limiting signal processor receives the input audio signal and the estimated peak excursion, and modifies the input audio signal to maximize the perceived loudness such that the estimated peak excursion of the output signal does not exceed the maximum excursion of the loudspeaker (i.e., the output signal remains in the safe range of the loudspeaker).
In an implementation, the perceived loudness can be incorporated into the signal modification. The signal processing will be excursion limiting while maximizing the perceived loudness. An approximation of a psychoacoustic loudness model (such as Moore's loudness model) can be used. The approximation is based upon the subband energy of each equivalent rectangular bandwidth (ERB) band of the input signal and the specific loudness at each ERB subband.
In an implementation, the excursion limiting signal processing may be implemented in the subband domain instead of the full-band time domain. The subband domain may be effective because the frequency components in signals have different levels of contributions to excursion and perceived loudness. In such a case, excursion prediction may be performed in the frequency domain.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The foregoing summary, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the embodiments, there are shown in the drawings example constructions of the embodiments; however, the embodiments are not limited to the specific methods and instrumentalities disclosed. In the drawings:
The mobile station 105 may be capable of communicating with packet switched networks and circuit switched networks. It is contemplated that the configurations disclosed herein may be adapted for use in networks that are packet switched (for example, wired and/or wireless networks arranged to carry audio transmissions according to protocols such as VoIP) and/or circuit switched. It is also contemplated that the configurations disclosed herein may be adapted for use in narrowband coding systems (e.g., systems that encode an audio frequency range of about four or five kilohertz) and for use in wideband coding systems (e.g., systems that encode audio frequencies greater than five kilohertz), including whole-band wideband coding systems and split-band wideband coding systems. Example combinations include circuit switched air interface and circuit switched core network, circuit switched air interface and packet switched core network, and IP access and packet switched core network, for example.
The mobile station 105 may comprise an excursion predictor 110, an excursion limiting signal processor 120, and a loudspeaker 130. Using techniques described further herein, the excursion predictor 110 may predict the estimated peak excursion of the loudspeaker 130 over a short time interval (e.g., a 20 ms frame), and the excursion limiting signal processor 120 may generate an output signal to be provided to the loudspeaker 130 using the estimated peak excursion. The excursion predictor 110 and the excursion limiting signal processor 120 may be implemented using one or more processors or computing devices such as the computing device 1200 illustrated in
The excursion predictor 110 predicts (e.g., estimates) the peak excursion of the loudspeaker 130 for an input audio signal (which may be a speech signal, for example) using the input audio signal and an excursion transfer function of the loudspeaker 130. More particularly, to estimate the peak excursion, the original audio/speech signal (the input signal) s(t) is filtered with the impulse response h(t) of the excursion transfer function of the loudspeaker to estimate the peak excursion ep for the input audio/speech signal. If the impulse response h(t) is known, the excursion e(t) may be estimated by e(t)=h(t)*s(t), where * denotes a convolution of two sequences.
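By way of illustration only, the convolution-based prediction above may be sketched as follows. The impulse response and signal values are hypothetical placeholders, not parameters from this disclosure:

```python
import numpy as np

def predict_excursion(s, h):
    # e(t) = h(t) * s(t): convolve the input frame with the
    # excursion impulse response, truncated to the frame length.
    return np.convolve(s, h)[:len(s)]

def peak_excursion(e):
    # e_p = max{|e(t)|} over the short time interval (frame).
    return np.max(np.abs(e))

# Toy 3-tap impulse response (hypothetical units) and one short frame.
h = np.array([0.5, 0.3, 0.1])
s = np.array([1.0, -1.0, 0.5, 0.0])
e = predict_excursion(s, h)
ep = peak_excursion(e)
```

In practice h(t) would be measured or derived from the loudspeaker's excursion transfer function rather than chosen by hand.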
The estimated peak excursion ep over a short time interval of the input audio signal is provided to the excursion limiting signal processor 120. The peak excursion ep of the loudspeaker can be determined by ep=max{|e(t)|} over a short time interval of the input audio signal. Using the estimated peak excursion ep and the maximum excursion Xmax of the loudspeaker 130 (e.g., a predetermined characteristic of the loudspeaker 130), the input audio signal is processed (i.e., modified) to determine an output signal {tilde over (s)}(t) that allows the loudspeaker diaphragm to move within the maximum excursion Xmax of the loudspeaker 130. In an implementation, the excursion limiting signal processor 120 maximizes the perceived loudness such that the estimated peak excursion {tilde over (e)}p of the output signal {tilde over (s)}(t) does not exceed the maximum excursion Xmax of the loudspeaker 130. In this manner, the input audio signal is modified to limit the excursion and to maximize the loudness. The output signal will be in the safe range of the loudspeaker 130.
In an implementation, a metric for perceived loudness can be incorporated into the signal modification by the excursion limiting signal processor 120. An approximation of Moore's loudness model (or any psychoacoustic loudness model, depending on the implementation) can be used. As described further herein, the approximation is based upon the subband energy of each equivalent rectangular bandwidth (ERB) band of the input audio signal and the specific loudness at each ERB subband. Thus, in an implementation, signal processing for the excursion limiting signal processor 120 may be implemented in the subband domain instead of the full-band time domain. This subband or frequency domain approach may be effective in calculating perceived loudness and predicting peak excursion, because the frequency components in signals have different levels of contributions to excursion and perceived loudness.
As noted above, the excursion limiting signal processor 120 receives the input audio/speech signal and the estimated peak excursion ep, and modifies the input audio/speech signal to maximize the perceived loudness in such a way that the estimated peak excursion {tilde over (e)}p of output signal {tilde over (s)}(t) does not exceed the maximum excursion Xmax of the loudspeaker 130. In an implementation, the input signal may be segmented into small chunks of data, or frames, before it is processed or modified by the excursion limiting signal processor 120.
In an implementation, because the frequency components in the loudspeaker signal have different levels of contributions to excursion and perceived loudness, subband or frequency domain signal analysis may be used. For example, the input signal may be transformed into psycho-acoustically motivated subband signals, such as critical bands or equivalent rectangular bandwidth (ERB) signals. Then, for each subband signal, its spectral energy may be determined, which may then be used to determine per-band loudness and excursion.
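The per-band spectral energy computation above may be sketched as follows. The band edges here are simple placeholders, not a true ERB filter bank design:

```python
import numpy as np

def subband_energies(frame, band_edges, fs):
    # Transform the frame to the frequency domain and sum the
    # squared bin magnitudes falling inside each band.
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    energies = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        energies.append(np.sum(np.abs(spectrum[mask]) ** 2))
    return np.array(energies)

fs = 8000
t = np.arange(256) / fs
frame = np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone
edges = [0, 300, 600, 1200, 4000]     # placeholder band edges in Hz
E = subband_energies(frame, edges, fs)
```

For the 440 Hz tone, the energy concentrates in the second band (300-600 Hz), as expected.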
In an implementation, to incorporate a perceived loudness criterion in the signal modification, the well-known Moore's loudness model may be adopted. Moore's loudness model in each subband can be described as follows:
Nb=C{(Gb·ESIG(b)+Ab)^αb−(Ab)^αb}
where Nb is the specific loudness at the b-th ERB band, ESIG(b) is the excitation pattern at the b-th ERB band, Gb, Ab, and αb are ERB-band-dependent constants, and C is a predetermined constant. All the parameters used in Moore's loudness model are well known, and a further description is omitted herein for brevity.
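The per-band specific loudness in the form given above may be sketched as follows. The constant values here are illustrative placeholders, not the calibrated parameters of Moore's model:

```python
import numpy as np

def specific_loudness(E_sig, G, A, alpha, C=0.047):
    # N_b = C * ((G_b * E_SIG(b) + A_b)^alpha_b - A_b^alpha_b):
    # a compressive function of the band excitation that is
    # exactly zero when the excitation is zero.
    return C * ((G * E_sig + A) ** alpha - A ** alpha)

# Zero excitation yields zero specific loudness.
N0 = specific_loudness(0.0, G=1.0, A=2.0, alpha=0.2)
# Loudness grows monotonically (but compressively) with excitation.
N_low = specific_loudness(1.0, G=1.0, A=2.0, alpha=0.2)
N_high = specific_loudness(10.0, G=1.0, A=2.0, alpha=0.2)
```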
For each ERB subband, the following operations may be performed. At 330, fixed filters representing transfer functions through the outer and middle ear may be obtained, e.g., retrieved from storage of the mobile station 105. At 340, an excitation pattern may be calculated from the physical spectrum; i.e., a transformation is performed to an excitation pattern. At 350, the excitation pattern is transformed to a specific loudness for each band.
After operations 330-350 have been performed for each subband, a full-band perceived loudness may be determined at 360. The loudness per subband Nb can be directly used for further processing to limit excursion in the subband domain. The specific loudness values (from 350) can be summed across ERB bands to generate the full-band perceptual loudness L as follows: L=ΣbNb. The loudness in either the subband domain or the full-band domain may be measured using the sone unit of measurement; however, any unit of measurement pertaining to loudness may be used.
The computational complexity of Moore's model can be decreased using an approximation.
At 410, an input audio signal s(t) (e.g., a speech signal) is received at the mobile station 105. Similar to 320, at 420, the input audio signal may be transformed into subband signals in an ERB scale using a perceptual filter bank. At 430, for each ERB subband, the subband energy Eb may be calculated. The specific loudness at each ERB subband Nb may be approximated, at 440, based upon Eb and ERB band dependent constants pb and qb as shown in equation (1):
Nb≈pb·(Eb)^qb  (1)
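The power-law approximation based on the band-dependent constants pb and qb may be sketched as follows. The parameter values below are illustrative placeholders only:

```python
import numpy as np

def approx_specific_loudness(E, p, q):
    # Approximate per-band specific loudness as a power law of the
    # subband energy: N_b ~ p_b * E_b ** q_b.
    return p * np.power(E, q)

E = np.array([4.0, 9.0, 16.0])   # per-band energies
p = np.array([0.5, 0.5, 0.5])    # placeholder band constants
q = np.array([0.5, 0.5, 0.5])    # exponent 0.5 -> proportional to sqrt(E)
N = approx_specific_loudness(E, p, q)
L = N.sum()                      # full-band loudness as the sum over bands
```

Relative to the full model, this keeps only a per-band multiplier and exponent, which is what makes the later subband-gain optimization tractable.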
At 640, the maximum diaphragm excursion ep (also referred to as the peak excursion) for each subband may be estimated, for example, by equation (2),
where Hb=max{|H(k)|} over k ε Bb, i.e., the maximum magnitude of the excursion transfer function H(k) within the b-th ERB band Bb.
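A frequency-domain peak-excursion estimate using the per-band maxima Hb may be sketched as follows. This is a hedged illustration with a toy transfer function, not the loudspeaker model of this disclosure; it uses the bound |e(t)| ≤ (1/N)·Σk|H(k)||S(k)|, which the per-band maxima make conservative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
s = rng.standard_normal(N)              # one frame of the input signal
S = np.fft.fft(s)
H = 1.0 / (1.0 + np.arange(N))          # toy excursion transfer function per bin
e = np.fft.ifft(H * S)                  # excursion signal
ep_true = np.max(np.abs(e))             # true peak excursion of the frame

# Per-band bound: H_b = max |H(k)| over the band, applied to the
# band's spectral magnitudes; summing over bands upper-bounds ep_true.
bands = np.array_split(np.arange(N), 8)
ep_bound = sum(np.max(np.abs(H[b])) * np.sum(np.abs(S[b])) for b in bands) / N
```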
Once the approximated terms Nb and ep are determined, signal processing by the excursion limiting signal processor 120 may be performed in the subband domain instead of the full-band time domain. In the subband domain, the frequency components of the input signal have different levels of contributions to excursion and perceived loudness. Optimization in the subband domain can be reduced to the problem of finding a set of optimal subband gains that maximize perceived loudness under the constraint that the excursion remains less than the loudspeaker's maximally allowable limit. In other words, the optimization problem in the subband domain may be rephrased as finding a set of ERB gains {gb} for each subband such that {tilde over (S)}(k)=gbS(k) for k ε Bb maximizes the perceived loudness L≈Σb pb·(gb^2·Eb)^qb while the predicted peak excursion remains within the maximum excursion Xmax of the loudspeaker.
At 810, an input audio signal s(t) (e.g., a speech signal) is received at the mobile station 105. At 820, the input audio signal may be transformed into subband signals in an ERB scale using a perceptual filter bank. At 830, for each ERB subband, the subband energy Eb may be calculated.
At 840, the excursion limiting signal processor may perform loudness and excursion optimization by approximating a loudness model, estimating the peak excursion, and determining a best set of subband gains. The subband signal is then multiplied by each subband gain at 850 to generate a gain-adjusted frequency domain output signal. At 860, an inverse filter bank may transform the frequency domain output signal into a gain-adjusted time domain signal. The signal may then be outputted at 870.
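The gain-multiply-and-invert path of steps 850-860 may be sketched end to end as follows. The frame, bands, and gains are toy placeholders; the key property is that unity gains reproduce the input exactly:

```python
import numpy as np

def apply_subband_gains(frame, gains, bands):
    # Transform (cf. 820), scale each subband's bins by its gain
    # (cf. 850), then inverse transform back to time domain (cf. 860).
    S = np.fft.rfft(frame)
    for g, b in zip(gains, bands):
        S[b] = S[b] * g
    return np.fft.irfft(S, n=len(frame))

frame = np.sin(2 * np.pi * np.arange(128) / 16)  # tone at rfft bin 8
bands = np.array_split(np.arange(65), 4)         # 65 rfft bins for n=128
out_unity = apply_subband_gains(frame, [1.0, 1.0, 1.0, 1.0], bands)
out_scaled = apply_subband_gains(frame, [0.5, 1.0, 1.0, 1.0], bands)
```

Here the tone falls in the first toy band, so attenuating that band alone halves the output.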
Both the loudness model approximation and the peak excursion prediction may be processed for either all subbands or a certain portion of the subbands, depending on the implementation. For example, in an implementation, the loudness model approximation and the excursion prediction may be processed only for the lower frequency regions, or lower subbands, where the typical excursion is much larger than that of the higher frequency regions, or higher subbands. This may reduce the computational complexity of the overall processing, which may be beneficial in reducing the battery consumption of the mobile station 105.
For loudness and excursion optimization, the excursion limiting signal processor may be configured to find an optimal subband energy that satisfies equation (3):
Equation (3) may be rewritten as shown in equation (4) using Lagrange multipliers, a well-known method for finding a maximum or minimum subject to constraints:
In one embodiment, a loudness and excursion optimization technique may find Lagrange multipliers using an iterative optimization method. This method may comprise an initialization step and an m-th iteration step (m≧1). The initialization step may comprise the equations:
The m-th iteration step (m≧1) may comprise the iterative execution of following equations:
The iteration may continue for a fixed number of times or until the parameters converge to within a specified tolerance.
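Since equations (3) and (4) are not reproduced in the text above, the following is only a generic sketch of the shape of such an iterative multiplier search: assume the power-law loudness L=Σb pb·(gb^2·Eb)^qb, a linear per-band excursion model Σb Hb·gb·√Eb, derive each gain from the stationarity condition of the Lagrangian (clamped to gb≤1), and iterate on the multiplier until the excursion constraint is met. Every model form and constant here is an illustrative assumption:

```python
import numpy as np

def gains_for_lambda(lam, p, q, E, H):
    # Stationarity: d/dg [p*(g^2*E)^q] = 2*p*q*E^q*g^(2q-1) = lam*H*sqrt(E).
    g = (lam * H * np.sqrt(E) / (2 * p * q * E ** q)) ** (1.0 / (2 * q - 1))
    return np.minimum(g, 1.0)            # never boost above unity

def excursion(g, E, H):
    # Assumed linear per-band excursion model.
    return np.sum(H * g * np.sqrt(E))

def optimize_gains(p, q, E, H, x_max, iters=60):
    lo, hi = 1e-9, 1e9                   # bracket for the multiplier
    for _ in range(iters):
        lam = np.sqrt(lo * hi)           # bisect in the log domain
        if excursion(gains_for_lambda(lam, p, q, E, H), E, H) > x_max:
            lo = lam                     # too much excursion: larger multiplier
        else:
            hi = lam
    return gains_for_lambda(hi, p, q, E, H)

p = np.array([1.0, 1.0, 1.0])
q = np.array([0.25, 0.25, 0.25])         # compressive loudness exponents
E = np.array([100.0, 50.0, 10.0])        # low bands carry the most energy
H = np.array([1.0, 0.5, 0.1])            # low bands excite the most excursion
g = optimize_gains(p, q, E, H, x_max=5.0)
```

The returned gains meet the excursion limit while attenuating the excursion-heavy low bands most, which matches the qualitative behavior described in the text.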
In an implementation, pre-processing may be performed by the excursion limiting signal processor. When the gain change {gb} becomes too large on particular frequency bands, it may introduce excessive spectral timbre change, causing an unnatural or disturbing sound. Excessive gain change on weak signal frames, such as unvoiced frames, for example, may also produce excessive sound pressure level (SPL) fluctuation, which may negatively impact the overall sound quality.
At 1010, an input audio signal s(n) (e.g., a speech signal) is received at the pre-processor 902 of the mobile station 105. At 1020, pre-processing is performed. The limiter 903 may be configured to limit the portions of the input audio/speech signal having a crest factor greater than a limiting threshold. This limiting operation may be useful to create enough digital headroom before the makeup gain 905 boosts the input audio/speech signal. It is preferable to keep the makeup gain (e.g., 15 dB) lower than the limiting threshold (e.g., 18 dB), though any values may be used depending on the implementation. By using both the limiter 903 and the makeup gain 905, the input audio/speech signal s(n) may be amplified by the makeup gain without generating any saturation distortion.
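The limiter-plus-makeup-gain idea may be sketched as follows. A simple hard clip stands in for a real crest-factor limiter, which is a simplification; the dB values mirror the examples in the text:

```python
import numpy as np

def preprocess(s, limit_db=-18.0, makeup_db=15.0):
    limit = 10 ** (limit_db / 20)        # limiting threshold, ~0.126 full scale
    makeup = 10 ** (makeup_db / 20)      # makeup gain, ~5.62x
    limited = np.clip(s, -limit, limit)  # limiter: create digital headroom
    return limited * makeup              # makeup gain: boost the signal

s = np.array([0.9, -0.5, 0.05, 0.0])
out = preprocess(s)
```

Because the makeup gain (15 dB) is smaller than the limiting headroom (18 dB), the boosted signal peaks at 10^(-3/20) of full scale and cannot saturate.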
The pre-processed signal is then prepared for subsequent processing for excursion control by an excursion limiting signal processor 920 (similar to the excursion limiting signal processor 120 and comprising a loudness and excursion optimizer 925 and inverse fast Fourier transform (IFFT) 927). Prior to sending the signal to the excursion limiting signal processor 920, at 1030, the pre-processed signal is transformed with a fast Fourier transform (FFT) 907, and the output of the FFT is provided to an excursion predictor 910 at 1040 to predict an excursion.
It is determined at 1050 whether the output of the excursion predictor 910 exceeds the maximum excursion of the loudspeaker 130. If so, the constrained optimization is solved at 1060 to find a best set of subband gains (using the loudness and excursion optimizer 925 of the excursion limiting signal processor 920), which are then provided to a multiplier of the excursion limiting signal processor 920 at 1070; otherwise, unity subband gains are provided to the multiplier at 1070.
At 1070, the multiplier receives the unity subband gains or the solved constrained optimization results and multiplies them with the transformed pre-processed signal (the output of 1030). The result is inverse transformed (e.g., using the IFFT 927) to obtain the resulting output signal at 1080. The output signal may then be provided to the loudspeaker 130.
Increasing the input audio/speech signal level at the pre-processor 902 and putting an additional constraint on the ERB gain {gb} at the excursion limiting signal processor 920 may mitigate spectral timbre change and SPL (sound pressure level) fluctuation. It is preferable to keep the ERB gain at no more than unity, gb≦1. The pre-processed signal may be analyzed to predict its excursion, and may subsequently be modified by multiplying by the optimal subband gains only when too much excursion is predicted. For example, when ep≦Xmax, the ERB gain {gb} becomes unity gain, and when ep>Xmax, the ERB gain {gb} typically becomes smaller than unity.
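The unity-gain bypass and the gb≦1 constraint described above may be sketched as follows (the solver is a stand-in placeholder):

```python
def select_gains(ep, x_max, solve_optimization, n_bands):
    # Bypass: when the predicted peak excursion is within the limit,
    # leave the signal untouched with unity subband gains.
    if ep <= x_max:
        return [1.0] * n_bands
    # Otherwise solve for subband gains, never allowing g_b > 1.
    return [min(gb, 1.0) for gb in solve_optimization()]

safe = select_gains(ep=0.3, x_max=0.5,
                    solve_optimization=lambda: [0.9, 1.2], n_bands=2)
loud = select_gains(ep=0.7, x_max=0.5,
                    solve_optimization=lambda: [0.9, 1.2], n_bands=2)
```

The first call predicts a safe excursion and returns unity gains; the second triggers the optimization and clamps any gain above unity.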
With the addition of the new constraint on ERB gain, the optimization problem presented earlier based on Lagrange multiplier may be written as follows:
where μb denotes a Lagrangian multiplier corresponding to the constraint gb≦1.
As used herein, the term “determining” (and grammatical variants thereof) is used in an extremely broad sense. The term “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
The term “signal processing” (and grammatical variants thereof) may refer to the processing and interpretation of signals. Signals of interest may include sound, images, and many others. Processing of such signals may include storage and reconstruction, separation of information from noise, compression, and feature extraction. The term “digital signal processing” may refer to the study of signals in a digital representation and the processing methods of these signals. Digital signal processing is an element of many communications technologies such as mobile stations, non-mobile stations, and the Internet. The algorithms that are utilized for digital signal processing may be performed using specialized computers, which may make use of specialized microprocessors called digital signal processors (sometimes abbreviated as DSPs).
Unless indicated otherwise, any disclosure of an operation of an apparatus having a particular feature is also expressly intended to disclose a method having an analogous feature (and vice versa), and any disclosure of an operation of an apparatus according to a particular configuration is also expressly intended to disclose a method according to an analogous configuration (and vice versa).
Mobile station 1100 is capable of providing bidirectional communication via a receive path and a transmit path. On the receive path, signals transmitted by base stations are received by an antenna 1112 and provided to a receiver (RCVR) 1114. Receiver 1114 conditions and digitizes the received signal and provides samples to a digital section 1120 for further processing. On the transmit path, a transmitter (TMTR) 1116 receives data to be transmitted from digital section 1120, processes and conditions the data, and generates a modulated signal, which is transmitted via antenna 1112 to the base stations. Receiver 1114 and transmitter 1116 may be part of a transceiver that may support CDMA, GSM, etc.
Digital section 1120 includes various processing, interface, and memory units such as, for example, a modem processor 1122, a reduced instruction set computer/digital signal processor (RISC/DSP) 1124, a controller/processor 1126, an internal memory 1128, a generalized audio encoder 1132, a generalized audio decoder 1134, a graphics/display processor 1136, and an external bus interface (EBI) 1138. Modem processor 1122 may perform processing for data transmission and reception, e.g., encoding, modulation, demodulation, and decoding. RISC/DSP 1124 may perform general and specialized processing for mobile station 1100. Controller/processor 1126 may direct the operation of various processing and interface units within digital section 1120. Internal memory 1128 may store data and/or instructions for various units within digital section 1120.
Generalized audio encoder 1132 may perform encoding for input signals from an audio source 1142, a microphone 1143, etc. Generalized audio decoder 1134 may perform decoding for coded audio data and may provide output signals to a speaker/headset 1144. Graphics/display processor 1136 may perform processing for graphics, videos, images, and texts, which may be presented to a display unit 1146. EBI 1138 may facilitate transfer of data between digital section 1120 and a main memory 1148.
Digital section 1120 may be implemented with one or more processors, DSPs, microprocessors, RISCs, etc. Digital section 1120 may also be fabricated on one or more application specific integrated circuits (ASICs) and/or some other type of integrated circuits (ICs).
Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices.
With reference to
Computing device 1200 may have additional features and/or functionality. For example, computing device 1200 may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in
Computing device 1200 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by device 1200 and include both volatile and non-volatile media, and removable and non-removable media. Computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory 1204, removable storage 1208, and non-removable storage 1210 are all examples of computer storage media. Computer storage media include, but are not limited to, RAM, ROM, electrically erasable program read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 1200. Any such computer storage media may be part of computing device 1200.
Computing device 1200 may contain communications connection(s) 1212 that allow the device to communicate with other devices. Computing device 1200 may also have input device(s) 1214 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 1216 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here.
In general, any device described herein may represent various types of devices, such as a wireless or wired phone, a cellular phone, a laptop computer, a wireless multimedia device, a wireless communication PC card, a PDA, an external or internal modem, a device that communicates through a wireless or wired channel, etc. A device may have various names, such as access terminal (AT), access unit, subscriber unit, mobile station, mobile device, mobile unit, mobile phone, mobile, remote station, remote terminal, remote unit, user device, user equipment, handheld device, non-mobile station, non-mobile device, endpoint, etc. Any device described herein may have a memory for storing instructions and data, as well as hardware, software, firmware, or combinations thereof.
The excursion predicting and excursion limiting techniques described herein may be implemented by various means. For example, these techniques may be implemented in hardware, firmware, software, or a combination thereof. Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
For a hardware implementation, the processing units used to perform the techniques may be implemented within one or more ASICs, DSPs, digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, a computer, or a combination thereof.
Thus, the various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
For a firmware and/or software implementation, the techniques may be embodied as instructions on a computer-readable medium, such as random access memory (RAM), read-only memory (ROM), non-volatile RAM, programmable ROM, EEPROM, flash memory, compact disc (CD), a magnetic or optical data storage device, or the like. The instructions may be executable by one or more processors and may cause the processor(s) to perform certain aspects of the functionality described herein.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Although exemplary implementations may refer to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited, but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Such devices might include PCs, network servers, and handheld devices, for example.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
This application claims the benefit under 35 U.S.C. §119(e) of Provisional Patent Application No. 61/432,094, filed on Jan. 12, 2011. This provisional patent application is hereby expressly incorporated by reference herein in its entirety.
U.S. Patent Documents
Number | Name | Date | Kind |
---|---|---|---|
4969195 | Noro | Nov 1990 | A |
5600718 | Dent et al. | Feb 1997 | A |
6639989 | Zacharov et al. | Oct 2003 | B1 |
7274793 | Combest et al. | Sep 2007 | B2 |
7372966 | Bright | May 2008 | B2 |
7894614 | Cordell | Feb 2011 | B2 |
8144881 | Crockett et al. | Mar 2012 | B2 |
8194869 | Mihelich et al. | Jun 2012 | B2 |
8340307 | Barr et al. | Dec 2012 | B2 |
8351621 | Lehnert et al. | Jan 2013 | B2 |
20040001597 | Marton | Jan 2004 | A1 |
20040086140 | Fedigan et al. | May 2004 | A1 |
20040184623 | Johannsen et al. | Sep 2004 | A1 |
20050031131 | Browning et al. | Feb 2005 | A1 |
20060262951 | Jun | Nov 2006 | A1 |
20070140058 | McIntosh et al. | Jun 2007 | A1 |
20080170723 | Sakagami et al. | Jul 2008 | A1 |
20080175397 | Holman | Jul 2008 | A1 |
20090129601 | Ojala et al. | May 2009 | A1 |
20090147963 | Smith | Jun 2009 | A1 |
20100119072 | Ojanpera | May 2010 | A1 |
20100202632 | Seefeldt et al. | Aug 2010 | A1 |
20100290643 | Mihelich et al. | Nov 2010 | A1 |
20110085678 | Pfaffinger | Apr 2011 | A1 |
20110123031 | Ojala | May 2011 | A1 |
20110228945 | Mihelich et al. | Sep 2011 | A1 |
20110311062 | Seefeldt et al. | Dec 2011 | A1 |
20120008799 | Disch et al. | Jan 2012 | A1 |
20120063614 | Crockett et al. | Mar 2012 | A1 |
20120063615 | Crockett et al. | Mar 2012 | A1 |
20120106743 | Seefeldt et al. | May 2012 | A1 |
20120121098 | Gautama | May 2012 | A1 |
20120155659 | Crockett et al. | Jun 2012 | A1 |
20120163629 | Seefeldt | Jun 2012 | A1 |
20120237045 | Mihelich et al. | Sep 2012 | A1 |
20120275606 | Breebaart et al. | Nov 2012 | A1 |
20120288118 | Gautama | Nov 2012 | A1 |
20120300949 | Rauhala et al. | Nov 2012 | A1 |
20120321096 | Crockett et al. | Dec 2012 | A1 |
20130108079 | Sato | May 2013 | A1 |
20130129124 | Westermann et al. | May 2013 | A1 |
Foreign Patent Documents
Number | Date | Country |
---|---|---|
2369852 | Sep 2011 | EP |
Other References
Entry |
---|
A. Harma et al., “Volume Control in Networked Audio Systems”, International Workshop on Acoustic Signal Enhancement (IWAENC 2005), pp. 245-248. |
International Search Report and Written Opinion—PCT/US2012/020672—ISA/EPO—May 16, 2012. |
Related Publications
Number | Date | Country |
---|---|---|
20120179456 A1 | Jul 2012 | US |
Provisional Applications
Number | Date | Country |
---|---|---|
61432094 | Jan 2011 | US |