The present disclosure relates to calibration of an audio signal processing device.
Teleconferencing applications are becoming increasingly popular. Implementing teleconferencing applications on certain devices, such as smart televisions, presents certain challenges. For example, echo in teleconferencing calls can be a problem. An echo cancellation device may be used to model an acoustic room response, estimate an echo, and subtract the estimated echo from a desired signal to transmit an echo free (or echo reduced) signal. When an electronic device used for teleconferencing is coupled to multiple external speakers (e.g., such as a home theater system), multiple correlated acoustic signals may be generated that can be difficult to effectively cancel.
In a particular embodiment, an electronic device, such as a television or other home theater component that is adapted for teleconferencing, includes a calibration module. The calibration module may be operable to determine a direction of arrival of sound from loudspeakers of a home theater system. The electronic device may use beamforming to null signals from particular loudspeakers (e.g., to improve echo cancellation performance). The calibration module may also be configured to estimate acoustic coupling delays. The estimated acoustic coupling delays may be used to update a delay tuning parameter of an audio processing device that includes an echo cancellation device.
In a particular embodiment, a method includes, while operating an audio processing device in a use mode, retrieving first direction of arrival (DOA) data corresponding to a first audio output device from a memory of the audio processing device and generating a first null beam directed toward the first audio output device based on the first DOA data. The method also includes retrieving second DOA data corresponding to a second audio output device from the memory of the audio processing device and generating a second null beam directed toward the second audio output device based on the second DOA data. The first DOA data and the second DOA data were stored in the memory during operation of the audio processing device in a calibration mode.
In another particular embodiment, an apparatus includes an audio processing device. The audio processing device includes a memory to store direction of arrival (DOA) data that is determined while the audio processing device is operating in a calibration mode. The audio processing device also includes a beamforming device. While the audio processing device is operating in a use mode, the beamforming device performs operations including retrieving first DOA data corresponding to a first audio output device from the memory, generating a first null beam directed toward the first audio output device based on the first DOA data, retrieving second DOA data corresponding to a second audio output device from the memory, and generating a second null beam directed toward the second audio output device based on the second DOA data.
In another particular embodiment, a non-transitory computer-readable medium stores instructions that are executable by a processor to cause the processor to perform operations including, while operating an audio processing device in a use mode, retrieving first direction of arrival (DOA) data corresponding to a first audio output device from a memory and generating a first null beam directed toward the first audio output device based on the first DOA data. The operations also include retrieving second DOA data corresponding to a second audio output device from the memory of the audio processing device and generating a second null beam directed toward the second audio output device based on the second DOA data. The first DOA data and the second DOA data were stored in the memory during operation of the audio processing device in a calibration mode.
In another particular embodiment, an apparatus includes means for storing direction of arrival (DOA) data determined while an audio processing device operated in a calibration mode. The apparatus also includes means for generating a null beam based on the DOA data stored at the means for storing DOA data. The means for generating a null beam is configured to, while the audio processing device is operating in a use mode, retrieve first DOA data corresponding to a first audio output device from the means for storing DOA data and generate a first null beam directed toward the first audio output device based on the first DOA data, and retrieve second DOA data corresponding to a second audio output device from the means for storing DOA data and generate a second null beam directed toward the second audio output device based on the second DOA data.
In another particular embodiment, a method of using an audio processing device during a conference call includes delaying, by a delay amount, application of a signal to an echo cancellation device of an audio processing device. The delay amount is determined based on an estimated electric delay between an audio output interface of the audio processing device and a second device of a home theater system. The estimated electric delay is obtained during operation of the audio processing device in a calibration mode.
In another particular embodiment, an apparatus includes means for reducing echo in a second signal based on a first signal. The apparatus also includes means for delaying, by a delay amount, application of the first signal to the means for reducing echo. The delay amount is determined based on an estimated electric delay between an audio output interface of an audio processing device and a second device of a home theater system. The estimated electric delay is obtained during operation of the audio processing device in a calibration mode.
In another particular embodiment, an apparatus includes an audio processing device. The audio processing device includes an audio input interface to receive a first signal. The audio processing device also includes an audio output interface to send the first signal to a second device of a home theater system. The audio processing device further includes an echo cancellation device coupled to the audio output interface and the audio input interface. The echo cancellation device is configured to reduce echo associated with an acoustic signal generated by an acoustic output device of the home theater system and received at an input device coupled to the audio processing device. The audio processing device also includes a delay component coupled between the audio output interface and the echo cancellation device. The delay component is configured to delay, by a delay amount, application of the first signal to the echo cancellation device. The delay amount is determined based on an estimated electric delay between the audio output interface of the audio processing device and the second device of the home theater system. The estimated electric delay is obtained during operation of the audio processing device in a calibration mode.
One particular advantage provided by at least one of the disclosed embodiments is improved performance of home theater equipment for teleconferencing.
Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and the Claims.
The home theater system 100 may include an electronic device 101 (e.g., a television) coupled to an audio receiver 102. For example, the electronic device 101 may be a networking-enabled “smart” television that is capable of communicating local area network (LAN) and/or wide area network (WAN) signals 160. The electronic device 101 may include or be coupled to a microphone array 130 and an audio processing component 140. The audio processing component 140 may be operable to (e.g., configured to) implement an adjustable delay for use in echo cancellation (e.g., during audio and/or video conferencing scenarios), to implement beamforming to reduce echo due to output of particular loudspeakers of the home theater system 100, or both.
The audio receiver 102 may receive audio signals from an audio output of the electronic device 101, process the audio signals, and send signals to each of a plurality of external loudspeakers and/or a subwoofer for output. For example, the audio receiver 102 may receive a composite audio signal from the electronic device 101 via a multimedia interface, such as a high-definition multimedia interface (HDMI). The audio receiver 102 may process the composite audio signal to generate separate audio signals for each loudspeaker and/or subwoofer. In the embodiment of
When the home theater system 100 is set up, each component may be positioned relative to a seating area 120 to facilitate use of the home theater system 100 (e.g., to improve surround-sound performance). Of course, other arrangements of the components of the home theater system 100 are also possible and are within the scope of the present disclosure. When voice input is to be received from the user 122 (e.g., in an audio/video conferencing scenario) at a device in which a microphone and loudspeaker(s) are located close to each other or are incorporated into a single device, a delay between a reference signal (e.g., a far-end audio signal) and a signal received at the microphone (e.g., a near-end audio signal) is typically within an expected echo cancellation range. Thus, an echo cancellation device (e.g., an adaptive filter) receiving the near-end and far-end signals may be capable of performing acoustic echo cancellation. However, in home theater systems, the speaker-microphone distances and the presence of the audio receiver 102 may increase the delay between the near-end and far-end signals to an extent that a conventional adaptive filter can no longer perform acoustic echo cancellation effectively. For example, the adaptive filter may take longer to converge. Echo cancellation is further complicated in the home theater system 100 because the home theater system 100 includes multiple loudspeakers that typically output signals that are correlated.
The audio processing component 140 may be configured to operate in one or more calibration modes to prepare or configure the home theater system 100 of
Additionally or in the alternative, during operation in the calibration mode, the electronic device 101 may determine direction of arrival (DOA) information that is used subsequently for echo cancellation. To illustrate, the electronic device 101 may output an audio pattern (e.g., a calibration signal, such as white noise) for a particular period of time (e.g., five seconds) to the audio receiver 102. The audio receiver 102 may process the audio pattern and provide signals to the loudspeakers 103-109 and the subwoofer 110, one at a time. For example, a first loudspeaker 103 may output the audio pattern while the rest of the loudspeakers 104-109 and the subwoofer 110 are silent. Subsequently, another of the loudspeakers (such as a second loudspeaker 104) may output the audio pattern while the rest of the loudspeakers 103 and 105-109 and the subwoofer 110 are silent. This process may continue until each loudspeaker 103-109 and optionally the subwoofer 110 have output the audio pattern. While a particular loudspeaker or the subwoofer 110 outputs the audio pattern, the microphone array 130 may receive acoustic signals output from the particular loudspeaker or the subwoofer 110. The audio processing component 140 may determine a DOA of the acoustic signals, which corresponds to a direction from the microphone array 130 to the particular loudspeaker. After determining a DOA for each of the loudspeakers 103-109 and the subwoofer 110 (or a subset thereof), an estimated delay value for each of the loudspeakers 103-109 and the subwoofer 110 (or a subset thereof), or both, calibration is complete.
During operation in a non-calibration mode (e.g., a use mode) after calibration is complete, the audio processing component 140 may delay far-end signals provided to an echo cancellation device of the audio processing component 140 based on the delay determined during the calibration mode. Alternatively or in addition, the audio processing component 140 may perform beamforming to null out signals received from particular directions of arrival (DOAs). In a particular embodiment, nulls are generated corresponding to forward facing loudspeakers, such as the loudspeakers 106-109. For example, as illustrated in
When a subsequent configuration change is detected (e.g., a different audio receiver or a different speaker is introduced into the home theater system 100), the calibration mode may be initiated again and one or more new or updated delay values 215, one or more new or updated DOAs, or a combination thereof, may be determined by the audio processing component 140.
During a teleconference call (e.g., in the use mode of operation), the microphone 206 may detect speech output by a user. However, sound output by the speaker 204 may also be received at the microphone 206, causing echo. The audio processing device 202 may include an echo cancellation device 210 (e.g., an adaptive filter, an echo suppressor, or another device or component operable to reduce echo) to process a received audio signal from the audio input interface 230 to reduce echo. Depending on where a user positions the speaker 204 and the microphone 206, the delay between the speaker 204 and the microphone 206 may be too large for the echo cancellation device 210 to effectively reduce the echo (as a result of electrical signal propagation delays, acoustic signal propagation delays, or both). The delay between when the audio processing device 202 outputs a signal via the audio output interface 222 and when the audio processing device 202 receives input including echo at the audio input interface 230 includes acoustic delay (e.g., delay due to propagation of sound waves) and electric delay (e.g., delay due to processing and transmission of the output signal after the output signal leaves the audio processing device 202). The acoustic delay may be related to relative positions and orientation of the speaker 204 and the microphone 206. For example, if the speaker 204 and the microphone 206 are relatively far from each other, the acoustic delay will be longer than if the speaker 204 and the microphone 206 are relatively close to each other. The electric delay is related to lengths of transmission lines between the audio processing device 202, the other components of the home theater system (e.g., the set top box device 224, the television 226, the audio receiver 228), and the speaker 204. The electric delay may also be related to processing delays caused by the other components of the home theater system (e.g., the set top box device 224, the television 226, the audio receiver 228). Thus, for example, the acoustic delay may be changed when the speaker 204 is repositioned; however, the electric delay may not be changed by the repositioning as long as the lengths of the transmission lines are not changed (e.g., if the speaker 204 is repositioned by rotating the speaker 204 or by moving the speaker closer to the audio receiver 228).
In a particular embodiment, the audio processing device 202 includes a tunable delay component 216. A delay processing component 214 may determine one or more delay values 215 that are provided to the tunable delay component 216 to adjust (e.g., tune) a delay in providing an output signal of the audio processing device 202 (e.g., a signal from the audio output interface 222) to the echo cancellation device 210, so that the overall echo cancellation processing of the audio processing device can accommodate the delay. When more than one speaker, more than one microphone, or both, are present, delays between various speaker and microphone pairs may be different. In this case, the tunable delay component 216 may be adjusted to a delay value or delay values that enable the echo cancellation device 210 to reduce echo associated with each speaker and microphone pair. In a particular embodiment, the delay values 215 are indicative of estimated electric delay between the audio output interface 222 of the audio processing device 202 and a second device of a home theater system, such as the set top box 224, the television 226, or the audio receiver 228.
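As a concrete illustration, the following Python sketch shows one way a component like the tunable delay component 216 could be realized as a sample-based ring-buffer delay line; the class name, frame-based interface, and buffer length are illustrative assumptions rather than details from this disclosure.

```python
import numpy as np

class TunableDelay:
    """Hypothetical sketch of a tunable delay line: holds back the far-end
    reference signal by a calibrated number of samples before it reaches
    the echo cancellation device."""

    def __init__(self, max_delay_samples: int):
        self._history = np.zeros(max_delay_samples, dtype=np.float32)
        self.delay = 0  # tuned delay amount, in samples

    def set_delay(self, delay_samples: int) -> None:
        # The delay amount would come from calibration (e.g., delay values 215).
        self.delay = min(delay_samples, len(self._history))

    def process(self, frame: np.ndarray) -> np.ndarray:
        """Return the input frame delayed by self.delay samples."""
        padded = np.concatenate([self._history, frame])
        start = len(padded) - len(frame) - self.delay
        out = padded[start:start + len(frame)]
        self._history = padded[len(padded) - len(self._history):]
        return out.astype(np.float32)
```

For example, with 16 kHz audio, set_delay(3200) would hold the far-end reference back by 200 ms before it is compared against the near-end signal.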
In a particular embodiment, the echo cancellation device 210 includes a plurality of echo cancellation circuits. Each of the plurality of echo cancellation circuits may be configured to reduce echo in a sub-band of a received audio signal. Note that although a received audio signal may be relatively band-limited (e.g., to about 8 kHz, within the human auditory range), the sub-bands are still narrower bands. For example, the audio processing device 202 may include a first sub-band analysis filter 208 coupled to the audio input interface 230. The first sub-band analysis filter 208 may divide the received audio signal into a plurality of sub-bands (e.g., frequency ranges) and provide each sub-band of the received audio signal to a corresponding echo cancellation circuit of the echo cancellation device 210. The audio processing device 202 may also include a second sub-band analysis filter 218 coupled between the audio output interface 222 and the echo cancellation device 210. The second sub-band analysis filter 218 may divide an output signal of the audio processing device 202 (such as the first calibration signal 221 when the audio processing device is in the calibration mode) into the plurality of sub-bands (e.g., frequency ranges) and provide each sub-band of the output signal to a corresponding echo cancellation circuit of the echo cancellation device 210.
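The sub-band split itself can be sketched with a simple FFT-based partition, as below; a deployed analysis filter bank (e.g., a windowed or polyphase design with down-sampling) would differ in detail, so this is only a minimal stand-in.

```python
import numpy as np

def subband_analysis(x: np.ndarray, num_bands: int) -> list:
    """Minimal sketch of a sub-band analysis filter: split signal x into
    num_bands contiguous frequency ranges and return one time-domain
    signal per band. The bands sum back to the original signal."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), num_bands + 1, dtype=int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xb = np.zeros_like(X)
        Xb[lo:hi] = X[lo:hi]  # keep only this band's frequency bins
        bands.append(np.fft.irfft(Xb, n=len(x)))
    return bands
```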
During operation of the system 200 in the calibration mode, a calibration signal generator 220 of the audio processing device 202 may output a first calibration signal 221. The first calibration signal 221 may be sent for a time period (e.g., 5 seconds) to one or more other devices of the system 200 (such as the set top box 224, the television 226, or the audio receiver 228) via the audio output interface 222. The first calibration signal 221 may also be provided to the second sub-band analysis filter 218 to be divided into output sub-bands. In the calibration mode, the tunable delay component 216 is typically not used. That is, the first calibration signal 221 is provided to the second sub-band analysis filter 218 and the echo cancellation device 210 without delay imposed by the tunable delay component 216.
In the calibration mode, an audio output of a component of the system 200 (such as the set top box 224, the television 226, or the audio receiver 228) may be coupled to the audio input interface 230. For example, a speaker wire that is coupled to the speaker 204 during the use mode of operation may be temporarily rerouted to couple to the audio input interface 230 during the calibration mode of operation. Alternately, a dedicated audio output of the component of the system 200 may be coupled to the audio processing device 202 for use during the calibration mode of operation.
A second calibration signal 232 may be received at the audio processing device 202 via the audio input interface 230. The second calibration signal 232 may correspond to the first calibration signal 221 as modified by and/or as delayed by one or more components of the system 200 (such as the set top box 224, the television 226, the audio receiver 228, and transmission lines therebetween). The second calibration signal 232 may be divided into input sub-bands by the first sub-band analysis filter 208. Echo cancellation circuits of the echo cancellation device 210 may process the input sub-bands (based on the second calibration signal 232) and the output sub-bands (based on the first calibration signal 221) to estimate delay associated with each sub-band. Note that using sub-bands of the signals enables the echo cancellation device 210 to converge more quickly than if the full bandwidth signals were used.
In a particular embodiment, a delay estimation module 212 learns (e.g., determines) delays for each sub-band. The delay processing component 214 determines a delay value or delay values 215 that are provided to the tunable delay component 216.
In one arrangement, a single tunable delay component 216 is coupled between the audio output interface 222 and the second sub-band analysis filter 218.
In other embodiments, a plurality of tunable delay components 216 may be provided between the second sub-band analysis filter 218 and the echo cancellation device 210 (rather than, or in addition to, the tunable delay component 216 illustrated in
When the second calibration signal 232 is received, it is passed through the first sub-band analysis filter 208, which filters the signal through a parallel set of M band pass filters 304 to produce M sub-band signals. The signal in each sub-band can be down-sampled, at 305, by a factor of N (N ≤ M).
In a particular embodiment, the echo cancellation device 210 includes an adaptive filter 306 that runs in each of the sub-bands to cancel the echo in the respective sub-band. For example, the adaptive filter 306 in each sub-band may suppress the portion of the second calibration signal 232 that is correlated with the first calibration signal 221. The adaptive filter 306 in each sub-band determines an adaptive filter coefficient related to the echo. A largest amplitude adaptive filter coefficient tap location 309 represents the delay (in samples) between the first calibration signal 221 and the second calibration signal 232. Each sample in a sub-band domain 308 occupies the time duration of N samples in the first calibration signal 221. Thus, the overall delay, in terms of sample value of the first calibration signal 221, is tap location of the largest amplitude adaptive filter coefficient times the down-sampling factor. For example, in
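A minimal sketch of this read-out follows, using a normalized LMS adaptive filter in place of the adaptive filter 306; the filter length, step size, and the synthetic pure-delay echo path are assumptions made for the example.

```python
import numpy as np

def nlms_delay_estimate(far_end, near_end, num_taps=256, downsample=8, mu=0.5):
    """Identify the echo path with an NLMS adaptive filter, then read the
    delay off the largest-magnitude tap: tap index times the sub-band
    down-sampling factor N gives the delay in full-rate samples."""
    w = np.zeros(num_taps)
    x_hist = np.zeros(num_taps)
    for x, d in zip(far_end, near_end):
        x_hist = np.roll(x_hist, 1)
        x_hist[0] = x
        err = d - w @ x_hist
        w += mu * err * x_hist / (x_hist @ x_hist + 1e-8)
    peak_tap = int(np.argmax(np.abs(w)))
    return peak_tap * downsample

# Example: the echo is the far-end signal delayed by 40 sub-band samples,
# so the estimated overall delay is 40 * 8 = 320 full-rate samples.
rng = np.random.default_rng(0)
far = rng.standard_normal(5000)
near = np.concatenate([np.zeros(40), far[:-40]])
print(nlms_delay_estimate(far, near))  # prints 320
```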
The audio processing device 402 includes an audio output interface 422 that is configured to be coupled, via one or more other devices of a home theater system (such as the set top box device 224, the television 226, and the audio receiver 228) to one or more acoustic output devices (such as a speaker 404). For example, the audio output interface 422 may include an audio bus coupled to or terminated by one or more speaker connectors, a multimedia connector (such as a high definition multimedia interface (HDMI) connector), or a combination thereof. Although more than one speaker may be present, the description that follows describes determining a direction of arrival (DOA) for the speaker 404 to simplify the description. Directions of arrival (DOAs) for other speakers may be determined before or after the DOA of the speaker 404 is determined. While the following description describes determining the DOA for the speaker 404 in detail, in a particular embodiment, in the calibration mode, the audio processing device 402 may also determine the delay values 215 that are subsequently used for echo cancellation. For example, the delay values 215 may be determined before the DOA for the speaker 404 is determined or after the DOA for the speaker 404 is determined. The audio processing device 402 may also include an audio input interface 430 that is configured to be coupled to one or more acoustic input devices (such as a microphone array 406). For example, the audio input interface 430 may include an audio bus coupled to or terminated by one or more microphone connectors, a multimedia connector (such as an HDMI connector), or a combination thereof.
In a use mode, the microphone array 406 may be operable to detect speech from a user (such as the user 122 of
In a particular embodiment, the DOA determination device 410 includes a plurality of DOA determination circuits. Each of the plurality of DOA determination circuits may be configured to determine a DOA associated with a particular sub-band. Accordingly, the DOA determination device 410 or the DOA determination circuits, individually or together, may form means for determining a direction of arrival of an acoustic signal received at an audio input array (such as the microphone array 406). Further, the audio input interface 430 may include signal communication circuitry, connectors, amplifiers, other circuits, or a combination thereof that provide means for receiving audio data at the DOA determination device 410 from the microphone array 406.
While an audio signal received at the audio input interface 430 (such as a second calibration signal 432 when the audio processing device is in the calibration mode) may be relatively band-limited (e.g., to about 8 kHz, within the human auditory range), the sub-bands are still narrower bands. For example, the audio processing device 402 may include a first sub-band analysis filter 408 coupled to the audio input interface 430. The first sub-band analysis filter 408 may divide the received audio signal into a plurality of sub-bands (e.g., frequency ranges) and provide each sub-band of the received audio signal to a corresponding DOA determination circuit of the DOA determination device 410. The audio processing device 402 may also include a second sub-band analysis filter 418 coupled between the audio output interface 422 and the DOA determination device 410. The second sub-band analysis filter 418 may divide an output signal of the audio processing device 402 (such as a first calibration signal 421 when the audio processing device is in the calibration mode) into the plurality of sub-bands (e.g., frequency ranges) and provide each sub-band of the output signal to a corresponding DOA determination circuit of the DOA determination device 410.
To illustrate, in the calibration mode, the calibration signal generator 420 may output a calibration signal, such as the first calibration signal 421, for a time period (e.g., 5 seconds) to the speaker 404 via the audio output interface 422. The first calibration signal 421 may also be provided to the second sub-band analysis filter 418 to be divided into output sub-bands. In response to the first calibration signal 421, the speaker 404 may generate an acoustic signal (e.g., acoustic white noise), which may be detected at the microphone array 406. The acoustic signal detected at the microphone array 406 may be modified by a transfer function (associated, for example, with echo paths and near end audio paths) that is related to relative positions of the speaker 404 and the microphone array 406. The second calibration signal 432, corresponding to sound detected at the microphone array 406 while the speaker 404 is outputting the acoustic signal, may be provided by the microphone array 406 to the audio input interface 430. The second calibration signal 432 may be divided into input sub-bands by the first sub-band analysis filter 408. DOA determination circuits of the DOA determination device 410 may process the input sub-bands (based on the second calibration signal 432) and the output sub-bands (based on the first calibration signal 421) to determine a DOA associated with each sub-band. DOA data corresponding to the DOA for each sub-band may be stored at a memory 412. Alternately, or in addition, DOA data that is a function of the DOA for each sub-band (e.g., an average or another function of the sub-band DOAs) may be stored at the memory 412. If the audio processing device 402 is coupled to one or more additional speakers, calibration of the other speakers continues as DOAs for the one or more additional speakers are determined during the calibration mode. Otherwise, the calibration mode may be terminated and the audio processing device 402 may be ready to be operated in a use mode.
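The calibration sequence above might be orchestrated as in the following sketch, where play_and_capture and estimate_doa are hypothetical stand-ins for the device's audio I/O and for the per-sub-band DOA search; neither name comes from this disclosure.

```python
def run_doa_calibration(speaker_ids, play_and_capture, estimate_doa):
    """Hypothetical calibration-mode loop: drive each loudspeaker with the
    calibration pattern one at a time, estimate its DOA from the
    microphone-array capture, and store the result (cf. memory 412)."""
    doa_table = {}
    for spk in speaker_ids:
        mic_frames = play_and_capture(spk, duration_s=5.0)  # e.g., white noise
        doa_table[spk] = estimate_doa(mic_frames)           # DOA in degrees
    return doa_table
```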
In the use mode, a first signal 521 may be received from a far end source 520. For example, the first signal 521 may include audio input received from another party to a teleconference call. The first signal 521 may be provided to the speaker 204 via the audio output interface 222 and one or more other devices of a home theater system (such as the set top box device 224, the television 226, and the audio receiver 228). The speaker 204 may generate an output acoustic signal responsive to the first signal 521. A received acoustic signal at the microphone 206 may include the output acoustic signal as modified by a transfer function as well as other audio (such as speech from a user at the near end). A second signal 532 corresponding to the received acoustic signal may be output by the microphone 206 to the audio input interface 230. Thus, the second signal 532 may include echo from the first signal 521.
In a particular embodiment, the first signal 521 is provided to the tunable delay component 216. The tunable delay component 216 may delay providing the first signal 521 for subsequent processing by a delay amount corresponding to the delay values 215 determined in the calibration mode. In this embodiment, after the delay, the tunable delay component 216 provides the first signal 521 to echo cancellation components to reduce the echo. For example, the first signal 521 may be provided to the second sub-band analysis filter 218 to be divided into output sub-bands, which are provided to the echo cancellation device 210. In this example, the second signal 532 may be provided to the first sub-band analysis filter 208 to be divided into input sub-bands, which are also provided to the echo cancellation device 210. The input sub-bands and output sub-bands are processed to reduce echo and to form echo corrected sub-bands, which may be provided to a sub-band synthesis filter 512 to be joined to form an echo cancelled received signal. In another example, a full bandwidth of the first signal 521 (rather than a set of sub-bands of the first signal 521) may be provided to the echo cancellation device 210. That is, the second sub-band analysis filter 218 may be omitted or bypassed. In this example, a full bandwidth of the second signal 532 may also be provided to the echo cancellation device 210. That is, the first sub-band analysis filter 208 may be omitted or bypassed. Thus, in this example, the echo may be reduced over the full bandwidth (in a frequency domain or an analog domain) rather than by processing a set of sub-bands.
In another embodiment, a plurality of tunable delay components (each with a corresponding delay value) are placed between the second sub-band analysis filter 218 and the echo cancellation device 210. In this embodiment, the first signal 521 is provided to the second sub-band analysis filter 218 to be divided into output sub-bands, which are then delayed by particular amounts by the corresponding tunable delay components before being provided to the echo cancellation device 210.
When echo cancellation is performed on individual sub-bands (rather than on the full bandwidth of the received signal from the audio input interface 230), the audio processing device 202 may include the sub-band synthesis filter 512 to combine the sub-bands to form a full bandwidth echo cancelled received signal. In a particular embodiment, additional echo cancellation and noise suppression may be performed by providing the echo cancelled received signal to a full-band fast Fourier transform (FFT) component 514, a frequency space noise suppression and echo cancellation post-processing component 516, and an inverse FFT component 518 before sending a third signal 519 (e.g., an echo cancelled signal) via an output 530 to the far end source 520. Alternately, or in addition, additional analog domain audio processing may be performed.
In the use mode, a first signal 621 may be received from the far end source 520. For example, the first signal 621 may include audio input received from another party to a teleconference call. Alternately, the first signal 621 may be received from a local audio source (e.g., audio output of a television or of another media device). The first signal 621 may be provided to the speaker 404 via the audio output interface 422 and one or more other devices of a home theater system (such as the set top box device 224, the television 226, and the audio receiver 228). The first signal 621 or another signal may also be provided to one or more additional speakers (not shown in
In a particular embodiment, the first signal 621 is provided to a tunable delay component 216. The tunable delay component 216 may delay providing the first signal 621 for subsequent processing by a delay amount that corresponds to one or more delay values (e.g., the delay values 215 of
The echo cancellation device 610 may include beamforming components 611 and echo processing components 613. In the embodiment illustrated in
The beamforming components 611 are operable to use the direction of arrival (DOA) data from the memory 412 of
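As one way such a null could be realized, the sketch below computes frequency-domain delay-and-subtract weights for a two-microphone array that cancel a source arriving from a stored DOA. This is a generic null-steering construction under a free-field, two-microphone assumption, not necessarily the method used by the beamforming components 611.

```python
import numpy as np

def null_beam_weights(doa_deg, mic_spacing_m, freqs_hz, c=343.0):
    """Weights w = [1, -exp(+j*2*pi*f*tau)] place a spatial null at
    doa_deg: the second channel is phase-aligned to the first for a
    source at that angle, so the subtraction cancels it."""
    tau = mic_spacing_m * np.sin(np.deg2rad(doa_deg)) / c  # inter-mic delay
    w0 = np.ones(len(freqs_hz), dtype=complex)
    w1 = -np.exp(2j * np.pi * np.asarray(freqs_hz) * tau)
    return np.stack([w0, w1])  # shape (2, num_freqs); output = w[0]*X0 + w[1]*X1
```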
In a particular embodiment, the beamforming components 611, an echo cancellation post-processing component 616, another component of the audio processing device 402, or a combination thereof, may be operable to track a user that is providing voice input at the microphone array 406. For example, the beamforming components 611 may include the DOA determination device 410. The DOA determination device 410 may determine a direction of arrival of sounds produced by the user that are received at the microphone array 406. Based on the DOA of the user, the beamforming components 611 may track the user by modifying the audio data of the second signal 632 to focus on audio from the user, as described further with reference to
After echo cancellation is performed on individual sub-bands, the echo cancelled sub-bands may be provided by the echo cancellation device 610 to a sub-band synthesis filter 612 to combine the sub-bands to form a full bandwidth echo cancelled received signal. In a particular embodiment, additional echo cancellation and noise suppression are performed by providing the echo cancelled received signal to a full-band fast Fourier transform (FFT) component 614, a frequency space noise suppression and echo cancellation post-processing component 616, and an inverse FFT component 618 before sending a third signal 619 (e.g., an echo cancelled signal) to the far end source 520 or to other audio processing components (such as mixing or voice recognition processing components). Alternately, or in addition, additional analog domain audio processing 628 may be performed. For example, the noise suppression and echo cancellation post-processing component 616 may be positioned between the echo processing components 613 and the sub-band synthesis filter 612. In this example, no FFT component 614 or inverse FFT component 618 may be used.
The method includes, at 702, starting the audio processing device. The method may also include, at 704, determining whether new audio playback hardware (such as one or more of the set top box device 224, the television 226, and the audio receiver 228, or the speaker 204 of
When new audio playback hardware is detected, the method may include, at 706, running in a first calibration mode. The first calibration mode may be used to determine delay values, such as the delay values 215 of
The method may also include determining whether nullforming (i.e., beamforming to suppress audio data associated with one or more particular audio output devices) is enabled, at 710. When nullforming is not enabled, the method ends, and the audio processing device is ready to run in a use mode, at 718. When nullforming is enabled, the method includes, at 712, determining a direction of arrival (DOA) for each audio output device that is to be nulled. At 714, the DOAs may be stored (e.g., at the memory 412 of
The method includes, at 802, activating a use mode of the audio processing device (e.g., operating the audio processing device in a use mode of operation). The method also includes, at 804, activating echo cancellers, such as echo cancellation circuits of the echo processing component 613 of
The method may include, at 808, determining whether the target DOA coincides with a stored DOA for an audio output device. The stored DOAs may have been determined during operation of the audio processing device in a calibration mode. When the target DOA does not coincide with a stored DOA for any audio output device, the method includes, at 810, generating nulls for one or more audio output devices using the stored DOAs. In a particular embodiment, nulls may be generated for each front facing audio output device, where front facing refers to having a direct acoustic path (as opposed to a reflected acoustic path) from the audio output device to a microphone array. To illustrate, in
The method also includes, at 812, generating a tracking beam for the target DOA. The tracking beam may improve reception and/or processing of audio data associated with acoustic signals from the target DOA, for example, to improve processing of voice input from the user. The method may also include outputting (e.g., sending) a pass indicator for nullforming, at 814. The pass indicator may be provided to the echo cancellers to indicate that a null has been formed in audio data provided to the echo cancellers, where the null corresponds to the DOA of a particular audio output device. When multiple audio output devices are to be nulled, multiple pass indicators may be provided to the echo cancellers, one for each audio output device to be nulled. Alternately, a single pass indicator may be provided to the echo cancellers to indicate that nulls have been formed corresponding to each of the audio output devices to be nulled. The echo cancellers may include linear echo cancellers (e.g., adaptive filters), non-linear echo cancellers (e.g., EC PP), or both. In an embodiment that includes linear echo cancellers, the pass indicator may be used to indicate that echo associated with the particular audio output device has been removed via beamforming; accordingly, no linear echo cancellation of the signal associated with the particular audio output device may be performed by the echo cancellers. The method then proceeds to run a subsequent frame of audio data, at 816.
When the target DOA coincides with a stored DOA for any audio output device, at 808, the method includes, at 820, generating nulls for one or more audio output devices that do not coincide with the target DOA using the stored DOAs. For example, referring to
The method also includes, at 822, generating a tracking beam for the target DOA. The method may also include outputting (e.g., sending) a fail indicator for nullforming for the audio output device with a DOA that coincides with the target DOA, at 824. The fail indicator may be provided to the echo cancellers to indicate that at least one null that was to be formed has not been formed. In an embodiment that includes linear echo cancellers, the fail indicator may be used to indicate that echo associated with the particular audio output device has not been removed via beamforming; accordingly, linear echo cancellation of the signal associated with the particular audio output device may be performed by the echo cancellers. The method then proceeds to run a subsequent frame, at 816.
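A compact sketch of this decision logic (steps 808-824) follows; the 10° coincidence threshold and the dictionary bookkeeping are illustrative assumptions.

```python
def nullforming_decision(target_doa, stored_doas, coincide_deg=10.0):
    """For each stored loudspeaker DOA, null it unless it coincides with
    the talker's target DOA, and report a pass/fail indicator so the echo
    cancellers know whether that loudspeaker's echo was already removed
    spatially (pass) or still needs linear echo cancellation (fail)."""
    nulls, indicators = [], {}
    for speaker_id, doa in stored_doas.items():
        if abs(doa - target_doa) < coincide_deg:
            indicators[speaker_id] = "fail"  # nulling would also null the talker
        else:
            nulls.append(doa)
            indicators[speaker_id] = "pass"
    return nulls, indicators
```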
It is a challenge to provide a method for estimating a three-dimensional direction of arrival (DOA) for each frame of an audio signal for concurrent multiple sound events that is sufficiently robust under background noise and reverberation. Robustness can be improved by increasing the number of reliable frequency bins. It may be desirable for such a method to be suitable for arbitrarily shaped microphone array geometry, such that specific constraints on microphone geometry may be avoided. A pair-wise 1-D approach as described herein can be appropriately incorporated into any geometry.
Such an approach may be implemented to operate without a microphone placement constraint. Such an approach may also be implemented to track sources using available frequency bins up to Nyquist frequency and down to a lower frequency (e.g., by supporting use of a microphone pair having a larger inter-microphone distance). Rather than being limited to a single pair of microphones for tracking, such an approach may be implemented to select a best pair of microphones among all available pairs of microphones. Such an approach may be used to support source tracking even in a far-field scenario, up to a distance of three to five meters or more, and to provide a much higher DOA resolution. Other potential features include obtaining a 2-D representation of an active source. For best results, it may be desirable that each source is a sparse broadband audio source and that each frequency bin is mostly dominated by no more than one source.
For a signal received by a pair of microphones directly from a point source in a particular DOA, the phase delay differs for each frequency component and also depends on the spacing between the microphones. The observed value of the phase delay at a particular frequency bin may be calculated as the inverse tangent of the ratio of the imaginary term of the complex FFT coefficient to the real term of the complex FFT coefficient. The phase delay at frequency f for a far-field point source may be expressed as

$$\Delta\varphi_f = \frac{2\pi f d \sin\theta}{c},$$

where d denotes the distance between the microphones (in m), θ denotes the angle of arrival (in radians) relative to a direction that is orthogonal to the array axis, f denotes frequency (in Hz), and c denotes the speed of sound (in m/s). For the ideal case of a single point source with no reverberation, the ratio of phase delay to frequency, $\Delta\varphi_f / f$, will have the same value, $2\pi d \sin\theta / c$, over all frequencies.
Such an approach may be limited in practice by the spatial aliasing frequency for the microphone pair, which may be defined as the frequency at which the wavelength of the signal is twice the distance d between the microphones. Spatial aliasing causes phase wrapping, which puts an upper limit on the range of frequencies that may be used to provide reliable phase delay measurements for a particular microphone pair.
Instead of phase unwrapping, a proposed approach compares the phase delay as measured (e.g., wrapped) with pre-calculated values of wrapped phase delay for each of an inventory of DOA candidates.
The DOA candidate that is best matched to the observed signal may be identified by calculating, for each DOA candidate $\theta_i$, a corresponding error

$$e_i = \sum_{f \in F} \left( \Delta\varphi_f^{ob} - \Delta\varphi_f^{i} \right)^2$$

of the squared differences between the observed and candidate phase delay values over a desired range or other set F of frequency components. The phase delay values $\Delta\varphi_f^{i}$ for each DOA candidate $\theta_i$ may be calculated and stored before run-time, in accordance with the pre-calculated inventory described above.
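The candidate search might be implemented as in this sketch, which compares the observed wrapped inter-microphone phase per FFT bin against pre-calculated wrapped phase delays for a one-degree candidate inventory; the inventory resolution and sign convention (positive angles delay microphone 1 relative to microphone 0) are illustrative assumptions.

```python
import numpy as np

def doa_by_wrapped_phase(x0, x1, d, fs, c=343.0, n_candidates=181):
    """Pick the DOA candidate theta_i minimizing the summed squared
    difference e_i between observed and candidate wrapped phase delays.
    No phase unwrapping is needed: differences are re-wrapped instead,
    so bins above the spatial aliasing frequency c/(2*d) still contribute."""
    X0, X1 = np.fft.rfft(x0), np.fft.rfft(x1)
    freqs = np.fft.rfftfreq(len(x0), 1.0 / fs)[1:]   # skip the DC bin
    obs = np.angle(X1[1:] * np.conj(X0[1:]))         # observed wrapped phase
    thetas = np.deg2rad(np.linspace(-90.0, 90.0, n_candidates))
    # Pre-calculated candidate phase delays, wrapped to (-pi, pi]:
    cand = -2.0 * np.pi * np.outer(np.sin(thetas), freqs) * d / c
    cand = np.angle(np.exp(1j * cand))
    diff = np.angle(np.exp(1j * (obs - cand)))       # wrapped difference
    e = np.sum(diff**2, axis=1)                      # e_i per candidate
    return np.rad2deg(thetas[np.argmin(e)])
```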
It may be desirable to calculate the error ei across as many frequency bins as possible to increase robustness against noise. For example, it may be desirable for the error calculation to include terms from frequency bins that are beyond the spatial aliasing frequency. In a practical application, the maximum frequency bin may be limited by other factors, which may include available memory, computational complexity, strong reflection by a rigid body at high frequencies, etc.
A speech signal is typically sparse in the time-frequency domain. If the sources are disjoint in the frequency domain, then two sources can be tracked at the same time. If the sources are disjoint in the time domain, then two sources can be tracked at the same frequency. It may be desirable for the array to include a number of microphones that is at least equal to the number of different source directions to be distinguished at any one time. The microphones may be omnidirectional (e.g., as may be typical for a cellular telephone or a dedicated conferencing device) or directional (e.g., as may be typical for a device such as a set-top box).
Such multichannel processing is generally applicable, for example, to source tracking for speakerphone applications. Such a technique may be used to calculate a DOA estimate for a frame of a received multichannel signal. Such an approach may calculate, at each frequency bin, the error for each candidate angle with respect to the observed angle, which is indicated by the phase delay. The target angle at that frequency bin is the candidate having the minimum error. In one example, the error is then summed across the frequency bins to obtain a measure of likelihood for the candidate. In another example, one or more of the most frequently occurring target DOA candidates across all frequency bins is identified as the DOA estimate (or estimates) for a given frame.
Such a method may be applied to obtain instantaneous tracking results (e.g., with a delay of less than one frame). The delay is dependent on the FFT size and the degree of overlap. For example, for a 512-point FFT with a 50% overlap and a sampling frequency of 16 kHz, the resulting 256-sample delay corresponds to sixteen milliseconds. Such a method may be used to support differentiation of source directions typically up to a source-array distance of two to three meters, or even up to five meters.
The error may also be considered as a variance (i.e., the degree to which the individual errors deviate from an expected value). Conversion of the time-domain received signal into the frequency domain (e.g., by applying an FFT) has the effect of averaging the spectrum in each bin. This averaging is even more obvious if a sub-band representation is used (e.g., mel scale or Bark scale). Additionally, it may be desirable to perform time-domain smoothing on the DOA estimates (e.g., by applying a recursive smoother, such as a first-order infinite-impulse-response filter).
It may be desirable to reduce the computational complexity of the error calculation operation (e.g., by using a search strategy, such as a binary tree, and/or applying known information, such as DOA candidate selections from one or more previous frames).
Even though the directional information may be measured in terms of phase delay, it is typically desired to obtain a result that indicates source DOA. Consequently, it may be desirable to calculate the error in terms of DOA rather than in terms of phase delay.
An expression of error $e_i$ in terms of DOA may be derived by assuming that an expression for the observed wrapped phase delay as a function of DOA, such as

$$\Psi_f^{wr}(\theta) = \mathrm{mod}\!\left( -\frac{2\pi f d \sin\theta}{c} + \pi,\; 2\pi \right) - \pi,$$

is equivalent to a corresponding expression for unwrapped phase delay as a function of DOA, such as

$$\Psi_f^{un}(\theta) = -\frac{2\pi f d \sin\theta}{c},$$

except near discontinuities that are due to phase wrapping. The error $e_i$ may then be expressed as

$$e_i = \left\| \Psi_f^{wr}(\theta_{ob}) - \Psi_f^{wr}(\theta_i) \right\|_f^2 \equiv \left\| \Psi_f^{un}(\theta_{ob}) - \Psi_f^{un}(\theta_i) \right\|_f^2,$$

where the difference between the observed and candidate phase delay at frequency f is expressed in terms of DOA as

$$\Psi_f^{un}(\theta_{ob}) - \Psi_f^{un}(\theta_i) = -\frac{2\pi f d}{c}\left( \sin\theta_{ob,f} - \sin\theta_i \right).$$

A Taylor series expansion may be performed to obtain the following first-order approximation:

$$\sin\theta_{ob,f} - \sin\theta_i \approx (\theta_{ob,f} - \theta_i)\cos\theta_i,$$

which is used to obtain an expression of the difference between the DOA $\theta_{ob,f}$ as observed at frequency f and the DOA candidate $\theta_i$:

$$\theta_{ob,f} - \theta_i \approx -\frac{c\left( \Psi_f^{un}(\theta_{ob}) - \Psi_f^{un}(\theta_i) \right)}{2\pi f d \cos\theta_i}.$$

This expression may be used, with the assumed equivalence of observed wrapped phase delay to unwrapped phase delay, to express error $e_i$ in terms of DOA:

$$e_i = \left\| \theta_{ob} - \theta_i \right\|_f^2 \approx \left\| \frac{c\left( \Psi_f^{wr}(\theta_{ob}) - \Psi_f^{wr}(\theta_i) \right)}{2\pi f d \cos\theta_i} \right\|_f^2,$$

where the values of $[\Psi_f^{wr}(\theta_{ob}), \Psi_f^{wr}(\theta_i)]$ are defined as $[\Delta\varphi_f^{ob}, \Delta\varphi_f^{i}]$.

To avoid division by zero at the endfire directions (θ = ±90°), it may be desirable to perform such an expansion using a second-order approximation instead, as in the following:

$$\sin\theta_{ob,f} - \sin\theta_i \approx (\theta_{ob,f} - \theta_i)\cos\theta_i - \frac{(\theta_{ob,f} - \theta_i)^2}{2}\sin\theta_i,$$

which may be solved as a quadratic in $(\theta_{ob,f} - \theta_i)$. As in the first-order example above, this expression may be used, with the assumed equivalence of observed wrapped phase delay to unwrapped phase delay, to express error $e_i$ in terms of DOA as a function of the observed and candidate wrapped phase delay values.
When a likelihood is calculated from such a DOA error, an extremely good match at a particular frequency may cause a corresponding likelihood to dominate all others. To reduce this susceptibility, it may be desirable to include a regularization term λ, as in an expression such as

$$L(i) = \sum_{f \in F} \frac{1}{\left| \theta_{ob,f} - \theta_i \right|^2 + \lambda}.$$
Speech tends to be sparse in both time and frequency, such that a sum over a set of frequencies F may include results from bins that are dominated by noise. It may be desirable to include a bias term β, as in an expression such as

$$L(i) = \sum_{f \in F} \max\!\left( 0,\; \frac{1}{\left| \theta_{ob,f} - \theta_i \right|^2 + \lambda} - \beta \right).$$
The bias term, which may vary over frequency and/or time, may be based on an assumed distribution of the noise (e.g., Gaussian). Additionally or alternatively, the bias term may be based on an initial estimate of the noise (e.g., from a noise-only initial frame). Additionally or alternatively, the bias term may be updated dynamically based on information from noise-only frames, as indicated, for example, by a voice activity detection module.
The frequency-specific likelihood results may be projected onto a (frame, angle) plane to obtain a DOA estimation per frame, such as

$$\hat{\theta}(\mathrm{frame}) = \arg\max_i L(i),$$

with L(i) summed over the set F as in the expressions above. The estimate is robust to noise and reverberation because only target-dominant frequency bins contribute to it. In this summation, terms in which the error is large have values that approach zero and thus become less significant to the estimate. If a directional source is dominant in some frequency bins, the error value at those frequency bins will be nearer to zero for that angle. Also, if another directional source is dominant in other frequency bins, the error value at the other frequency bins will be nearer to zero for the other angle.
The likelihood results may also be projected onto a (frame, frequency) plane to indicate likelihood information per frequency bin, based on directional membership (e.g., for voice activity detection). This likelihood may be used to indicate likelihood of speech activity. Additionally or alternatively, such information may be used, for example, to support time- and/or frequency-selective masking of the received signal by classifying frames and/or frequency components according to their direction of arrival.
An anglogram representation is similar to a spectrogram representation. An anglogram may be obtained by plotting, at each frame, a likelihood of the current DOA candidate at each frequency.
A microphone pair having a large spacing is typically not suitable for high frequencies, because spatial aliasing begins at a low frequency for such a pair. A DOA estimation approach as described herein, however, allows the use of phase delay measurements beyond the frequency at which phase wrapping begins, and even up to the Nyquist frequency (i.e., half of the sampling rate). By relaxing the spatial aliasing constraint, such an approach enables the use of microphone pairs having larger inter-microphone spacings. As an array with a large inter-microphone distance typically provides better directivity at low frequencies than an array with a small inter-microphone distance, use of a larger array typically extends the range of useful phase delay measurements into lower frequencies as well.
The DOA estimation principles described herein may be extended to multiple microphone pairs in a linear array (e.g., as shown in
For a far-field source, the multiple microphone pairs of a linear array will have essentially the same DOA. Accordingly, one option is to estimate the DOA as an average of the DOA estimates from two or more pairs in the array. However, an averaging scheme may be affected by mismatch of even a single one of the pairs, which may reduce DOA estimation accuracy. Alternatively, it may be desirable to select, from among two or more pairs of microphones of the array, the best microphone pair for each frequency (e.g., the pair that gives the minimum error ei at that frequency), such that different microphone pairs may be selected for different frequency bands. At the spatial aliasing frequency of a microphone pair, the error will be large. Consequently, such an approach will tend to automatically avoid a microphone pair when the frequency is close to its wrapping frequency, thus avoiding the related uncertainty in the DOA estimate. For higher-frequency bins, a pair having a shorter distance between the microphones will typically provide a better estimate and may be automatically favored, while for lower-frequency bins, a pair having a larger distance between the microphones will typically provide a better estimate and may be automatically favored. In the four-microphone example shown in
In one example, the best pair for each axis is selected by calculating, for each frequency f, P×I values, where P is the number of pairs, I is the size of the inventory, and each value epi is the squared absolute difference between the observed angle θpf (for pair p and frequency f) and the candidate angle θif. For each frequency f, the pair p that corresponds to the lowest error value epi is selected. This error value also indicates the best DOA candidate θi at frequency f (as shown in
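Given such a tensor of pre-computed errors, the per-frequency selection reduces to an argmin, as in the following sketch (the array shapes are assumptions):

```python
import numpy as np

def best_pair_per_bin(errors):
    """errors has shape (P, I, F): squared DOA error for P microphone
    pairs, I candidate angles, and F frequency bins. For each bin, pick
    the (pair, candidate) with the smallest error; near a pair's spatial
    aliasing frequency its error grows, so that pair is avoided
    automatically."""
    P, I, F = errors.shape
    flat = errors.reshape(P * I, F)
    idx = np.argmin(flat, axis=0)
    best_pair = idx // I        # selected pair per bin, shape (F,)
    best_candidate = idx % I    # selected DOA candidate per bin, shape (F,)
    return best_pair, best_candidate
```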
The signals received by a microphone pair may be processed as described herein to provide an estimated DOA, over a range of up to 180 degrees, with respect to the axis of the microphone pair. The desired angular span and resolution may be arbitrary within that range (e.g. uniform (linear) or nonuniform (nonlinear), limited to selected sectors of interest, etc.). Additionally or alternatively, the desired frequency span and resolution may be arbitrary (e.g. linear, logarithmic, mel-scale, Bark-scale, etc.).
The DOA estimation principles described herein may also be extended to a two-dimensional (2-D) array of microphones. For example, a 2-D array may be used to extend the range of source DOA estimation up to a full 360 degrees (e.g., providing a similar range as in applications such as radar and biomedical scanning). Such an array may be used in a particular embodiment, for example, to support good performance even for arbitrary placement of the telephone relative to one or more sources.
The multiple microphone pairs of a 2-D array typically will not share the same DOA, even for a far-field point source. For example, source height relative to the plane of the array (e.g., in the z-axis) may play an important role in 2-D tracking.
An expression in terms of θ1 and θ2, where θ1 and θ2 are the estimated DOA for pair 1 and pair 2, respectively, may be used to project all pairs of DOAs to a 360° range in the plane in which the three microphones are located. Such projection may be used to enable tracking directions of active speakers over a 360° range around the microphone array, regardless of height difference. Applying such a projection to DOA estimates of (0°, 60°), for example, produces a result which may be mapped to a combined directional estimate (e.g., an azimuth) of 270°.
In a typical use case, the source will be located in a direction that is not projected onto a microphone axis.
For the example of a source located at (x, y, z) = (5, 2, 5) relative to the microphone array, the DOA observed by the x-axis microphone pair MC10-MC20 is

$$\theta_2 = \sin^{-1}\!\left(\frac{5}{\sqrt{54}}\right) \approx 42.9°,$$

and the DOA observed by the y-axis microphone pair MC20-MC30 is

$$\theta_1 = \sin^{-1}\!\left(\frac{2}{\sqrt{54}}\right) \approx 15.8°.$$

Using expression (4), which may take the form

$$\left( \tan^{-1}\frac{\sin\theta_1}{\sin\theta_2},\; \tan^{-1}\frac{\sin\theta_2}{\sin\theta_1} \right),$$

to project these directions into the x-y plane produces the magnitudes (21.8°, 68.2°) of the desired angles relative to the x and y axes, respectively, which corresponds to the given source location (x, y, z) = (5, 2, 5). The signs of the observed angles indicate the x-y quadrant in which the source is located.
In fact, almost 3-D information is given by a 2-D microphone array, except for the up-down confusion. For example, the directions of arrival observed by microphone pairs MC10-MC20 and MC20-MC30 may also be used to estimate the magnitude of the angle of elevation of the source relative to the x-y plane. If d denotes the vector from microphone MC20 to the source, then the lengths of the projections of vector d onto the x-axis, the y-axis, and the x-y plane may be expressed as $d\sin(\theta_2)$, $d\sin(\theta_1)$, and $d\sqrt{\sin^2(\theta_1)+\sin^2(\theta_2)}$, respectively. The magnitude of the angle of elevation may then be estimated as $\hat{\theta}_h = \cos^{-1}\sqrt{\sin^2(\theta_1)+\sin^2(\theta_2)}$.
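The numbers in this example can be checked directly with the projection and elevation expressions above, as in the following sketch:

```python
import numpy as np

# Source at (x, y, z) = (5, 2, 5) relative to microphone MC20.
x, y, z = 5.0, 2.0, 5.0
d = np.sqrt(x**2 + y**2 + z**2)            # |d| = sqrt(54)
theta2 = np.arcsin(x / d)                  # x-axis pair MC10-MC20: ~42.9 deg
theta1 = np.arcsin(y / d)                  # y-axis pair MC20-MC30: ~15.8 deg

# Projection into the x-y plane: angles relative to the x and y axes.
ang_x = np.degrees(np.arctan2(np.sin(theta1), np.sin(theta2)))  # ~21.8
ang_y = np.degrees(np.arctan2(np.sin(theta2), np.sin(theta1)))  # ~68.2

# Magnitude of the elevation angle relative to the x-y plane.
elev = np.degrees(np.arccos(np.sqrt(np.sin(theta1)**2 + np.sin(theta2)**2)))
print(ang_x, ang_y, elev)                  # approx. 21.8, 68.2, 42.9
```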
Although the microphone pairs in the particular examples above have orthogonal axes, the same principles may be extended to microphone pairs whose axes are not orthogonal. For example, consider a pair whose axis 1 is skewed by an angle θ0 relative to the y axis, with axis 2 remaining on the x axis so that x may be observed directly as $x = d\sin\theta_2$. The estimation of y may be performed using the projection $p_1 = (d\sin\theta_1\sin\theta_0,\; d\sin\theta_1\cos\theta_0)$ of vector (x, y) onto axis 1. Observing that the difference between vector (x, y) and vector $p_1$ is orthogonal to $p_1$, y may be calculated as

$$y = \frac{d\sin\theta_1 - x\sin\theta_0}{\cos\theta_0}.$$
The desired angles of arrival in the x-y plane, relative to the orthogonal x and y axes, may then be expressed respectively as

$$\left( \tan^{-1}\frac{y}{x},\; \tan^{-1}\frac{x}{y} \right).$$
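A short sketch of this recovery for a skewed axis follows; the function signature and the convention that θ0 is measured from the y axis simply restate the derivation above.

```python
import numpy as np

def inplane_angles_nonorthogonal(d, theta1, theta2, theta0):
    """Axis 2 lies on the x axis; axis 1 is skewed by theta0 from the y
    axis (all angles in radians). Recover (x, y) from the two observed
    DOAs and return the in-plane angles relative to the x and y axes."""
    x = d * np.sin(theta2)                                   # x-axis observation
    y = (d * np.sin(theta1) - x * np.sin(theta0)) / np.cos(theta0)
    return np.degrees(np.arctan2(y, x)), np.degrees(np.arctan2(x, y))
```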
Extension of DOA estimation to a 2-D array is typically well-suited to and sufficient for certain embodiments. However, further extension to an N-dimensional array is also possible and may be performed in a straightforward manner. For tracking applications in which one target is dominant, it may be desirable to select N pairs for representing N dimensions. Once a 2-D result is obtained with a particular microphone pair, another available pair can be utilized to increase degrees of freedom. For example,
Estimates of DOA error from different dimensions may be used to obtain a combined likelihood estimate, for example, using an expression such as

$$L = \sum_{f \in F} \frac{1}{\max_i \left| \theta_{ob,f,i} - \theta_{0,i} \right|^2 + \lambda},$$

where $\theta_{0,i}$ denotes the DOA candidate selected for pair i. Use of the maximum among the different errors may be desirable to promote selection of an estimate that is close to the cones of confusion of both observations, in preference to an estimate that is close to only one of the cones of confusion and may thus indicate a false peak. Such a combined result may be used to obtain a (frame, angle) plane, as described herein, and/or a (frame, frequency) plot, as described herein.
The DOA estimation principles described herein may be used to support selection among multiple speaking users. For example, the locations of multiple sources may be combined with manual selection of a particular speaking user (e.g., pushing a button to select a corresponding user) or with automatic selection of a particular user (e.g., by speaker recognition). In one such application, an audio processing device (such as the audio processing device of
A source DOA may be easily defined in 1-D, e.g., over the range −90° to +90°. For more than two microphones at arbitrary relative locations, a straightforward extension of the 1-D principles described above may be used, e.g., the pair (θ1, θ2) in the two-pair 2-D case, the triple (θ1, θ2, θ3) in the three-pair 3-D case, and so on.
To apply spatial filtering to such a combination of paired 1-D DOA estimates, a beamformer/null beamformer (BFNF) as shown in
As the approach shown in
where lp indicates the distance between the microphones of pair p, ω indicates the frequency bin number, and fs indicates the sampling frequency.
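The BFNF matrices themselves are not reproduced here, but a minimal single-pair illustration of the underlying idea (a frequency-dependent steering vector per pair, with weights that null an interferer direction) might look like the following sketch; the FFT size, identifier names, and the solve-based weight computation are assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def steering_vector(theta, l_p, w, fs, nfft):
    """Two-element steering vector for one pair: l_p is the inter-microphone
    distance, w the frequency bin number, fs the sampling frequency."""
    f_hz = w * fs / nfft                           # bin center frequency
    tau = l_p * np.cos(theta) / SPEED_OF_SOUND     # inter-microphone delay
    return np.array([1.0, np.exp(-2j * np.pi * f_hz * tau)])

def bfnf_weights(theta_target, theta_null, l_p, w, fs, nfft):
    """Weights giving unit gain toward theta_target and a null toward theta_null."""
    A = np.column_stack([
        steering_vector(theta_target, l_p, w, fs, nfft),
        steering_vector(theta_null, l_p, w, fs, nfft),
    ])
    # Solve A^H w = [1, 0]^T; the system may be ill-conditioned at some
    # frequencies, which would be excluded as noted below.
    return np.linalg.solve(A.conj().T, np.array([1.0, 0.0]))
```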
A pair-wise BFNF (PWBFNF) scheme may be used to suppress the direct path of interferers up to the available degrees of freedom, providing instantaneous suppression without a smooth-trajectory assumption, additional noise-suppression gain using directional masking, and additional noise-suppression gain using bandwidth extension. Single-channel post-processing of the quadrant framework may be used for stationary-noise and noise-reference handling.
It may be desirable to obtain instantaneous suppression while also minimizing artifacts, such as musical noise. It may also be desirable to use the available degrees of freedom of the BFNF maximally. One DOA may be fixed across all frequencies, or a slightly mismatched alignment across frequencies may be permitted. Only the current frame may be used, or a feed-forward network may be implemented. The BFNF may be set for all frequencies in the range up to the Nyquist rate (e.g., except ill-conditioned frequencies). A natural masking approach may be used (e.g., to obtain a smooth, seamless transition of aggressiveness).
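As one hypothetical realization of such a natural masking approach, a raised-cosine roll-off gives a smooth transition of aggressiveness around a look direction (the roll-off shape and width here are assumptions, not taken from the disclosure):

```python
import numpy as np

# Directional masking gain: ~1 within the look direction, rolling off
# smoothly (raised cosine) to 0 outside a chosen width, avoiding the
# abrupt gain changes that produce musical noise.
def directional_mask(delta_theta, width=np.radians(20.0)):
    r = np.clip(np.abs(delta_theta) / width, 0.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * r))
```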
The method includes, at 2502, determining, at an audio input array of a home theater system, a direction of arrival (DOA) of an acoustic signal from a loudspeaker of the home theater system. For example, the audio processing component 140 of the home theater system 100 may determine a DOA to one or more of the loudspeakers 103-109 or the subwoofer 110 by supplying a calibration signal, one-by-one, to each of the loudspeakers 103-109 or the subwoofer 110 and detecting acoustic output at the microphone array 130.
The method may also include, at 2504, applying beamforming parameters to audio data from the audio input array to suppress a portion of the audio data associated with the DOA. For example, the audio processing component 140 may form one or more nulls, such as the nulls 150-156, in the audio data using the determined DOA.
The method includes, at 2602, while operating an audio processing device (e.g., a component of a home theater system) in a calibration mode, receiving audio data at the audio processing device from an audio input array. The audio data may correspond to an acoustic signal received from an audio output device (e.g., a loudspeaker) at two or more elements (e.g., microphones) of the audio input array. For example, when the audio receiver 102 of
The method also includes, at 2604, determining a direction of arrival (DOA) of the acoustic signal at the audio input array based on the audio data. In a particular embodiment, the DOA may be stored in a memory as DOA data, which may be used subsequently in a use mode to suppress audio data associated with the DOA. The method also includes, at 2606, generating a null beam directed toward the audio output device based on the DOA of the acoustic signal.
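A minimal sketch of this calibration-mode flow follows, assuming a simple two-microphone cross-correlation estimator (the disclosure does not fix a particular DOA estimator, and the helper names, spacing, and dictionary "memory" are illustrative):

```python
import numpy as np

SPEED_OF_SOUND, MIC_SPACING = 343.0, 0.05   # m/s, meters (assumed geometry)

def estimate_doa(x1, x2, fs):
    """DOA from the lag of the strongest cross-correlation peak between
    two microphone channels of the audio input array."""
    corr = np.correlate(x1, x2, mode="full")
    lag = np.argmax(corr) - (len(x2) - 1)                    # in samples
    sin_theta = np.clip(lag * SPEED_OF_SOUND / (fs * MIC_SPACING), -1.0, 1.0)
    return float(np.arcsin(sin_theta))

doa_memory = {}  # stands in for the memory of the audio processing device

def calibrate_device(device_id, x1, x2, fs):
    """Store the DOA measured for one audio output device (2604) so that a
    null beam can be directed toward it (2606) and reused in the use mode."""
    doa_memory[device_id] = estimate_doa(x1, x2, fs)
```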
The method 2800 includes initiating a calibration mode of the audio processing device, at 2806. For example, the calibration mode may be initiated in response to receiving user input indicating a configuration change, at 2802, or in response to automatically detecting a configuration change, at 2804. The configuration change may be associated with the home theater system, the audio processing device, an acoustic output device, an input device, or a combination thereof. For example, the configuration change may include coupling a new component to the home theater system or removing a component from the home theater system.
The method 2800 also includes, at 2808, in response to initiation of the calibration mode of the audio processing device, sending a first calibration signal (such as white noise) from an audio output interface of the audio processing device to a component of the home theater system.
The method 2800 also includes, at 2810, receiving a second calibration signal at an audio input interface of the audio processing device. The second calibration signal corresponds to the first calibration signal as modified by a transfer function. For example, a difference between the first calibration signal and the second calibration signal may be indicative of an electrical delay associated with the home theater system or with a portion of the home theater system.
The method 2800 also includes, at 2812, determining an estimated delay associated with the home theater system based on the first calibration signal and the second calibration signal. For example, estimating the delay may include, at 2814, determining a plurality of sub-bands of the first calibration signal, and, at 2816, determining a plurality of corresponding sub-bands of the second calibration signal. Sub-band delays for each of the plurality of sub-bands of the first calibration signal and each of the corresponding sub-bands of the second calibration signal may be determined, at 2818. The estimated delay may be determined based on the sub-band delays. For example, the estimated delay may be determined as an average of the sub-band delays.
The method 2800 may further include, at 2820, adjusting a delay value based on the estimated delay. As explained with reference to
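Steps 2814-2818 might be realized as sketched below; the band edges and the Butterworth band-pass design are assumptions, with only the split/measure/average structure taken from the description:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def estimate_delay(first, second, fs,
                   bands=((200, 800), (800, 2400), (2400, 6000))):
    """Average of per-sub-band delays between the first (sent) and second
    (received) calibration signals, per steps 2814-2818."""
    delays = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        a, b = sosfilt(sos, first), sosfilt(sos, second)
        corr = np.correlate(b, a, mode="full")
        delays.append((np.argmax(corr) - (len(a) - 1)) / fs)  # seconds
    return float(np.mean(delays))                             # step 2812
```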
The method includes sending a calibration signal from an audio processing device to an audio output device, at 2902. An acoustic signal may be generated by the audio output device in response to the calibration signal. For example, the calibration signal may be the first calibration signal 421 of
The method may also include receiving, at the audio processing device, audio data from an audio input array, at 2904. The audio data corresponds to an acoustic signal received from an audio output device at two or more elements of the audio input array. For example, the audio processing device may be a component of a home theater system, such as the home theater system 100 of
The method also includes, at 2906, determining a direction of arrival (DOA) of the acoustic signal at the audio input array based on the audio data. For example, the DOA may be determined as described with reference to
The method may include, at 2912, determining whether the home theater system includes additional loudspeakers. When the home theater system includes additional loudspeakers, the preceding steps may be repeated for a next loudspeaker. When the home theater system does not include additional loudspeakers, the method ends, at 2916, and the audio processing device is ready to enter a use mode (such as the use mode described with reference to
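Putting the steps together, a hypothetical driver for this per-loudspeaker loop might look like the following (reusing the `calibrate_device` helper sketched earlier; `capture_pair` is likewise an assumed helper):

```python
def run_calibration(loudspeaker_ids, mic_array, fs):
    """Send the calibration signal to each loudspeaker in turn (2902),
    capture the acoustic response (2904), and store its DOA (2906)."""
    for device_id in loudspeaker_ids:
        x1, x2 = mic_array.capture_pair(device_id)   # play + record one device
        calibrate_device(device_id, x1, x2, fs)
    # No additional loudspeakers remain (2912): ready for the use mode (2916).
```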
The method includes, at 3002, receiving audio data at the audio processing device. The audio data corresponds to an acoustic signal received from an audio output device at an audio input array. For example, the audio data may be received from the microphone array 406 of
The method may include, at 3004, determining a user DOA, where the user DOA is associated with an acoustic signal (e.g., the user voice input) received at the audio input array from a user. The user DOA may also be referred to herein as a target DOA. The method may include, at 3006, determining target beamforming parameters to track user audio data associated with the user based on the user DOA. For example, the target beamforming parameters may be determined as described with reference to
The method may include, at 3008, determining whether the user DOA is coincident with the DOA of the acoustic signal from the audio output device. For example, in
In response to determining that the user DOA is not coincident with the DOA of the acoustic signal from the audio output device, the method may include, at 3010, applying the beamforming parameters to the audio data to generate modified audio data. In a particular embodiment, the audio data may correspond to acoustic signals received at the audio input array from the audio output device and from one or more additional audio output devices, such as the loudspeakers 103-109 of
The method may also include, at 3012, performing echo cancellation of the modified audio data. For example, the echo processing components 613 of
In response to determining that the user DOA is coincident with the DOA of the acoustic signal from the audio output device, the method may include, at 3016, modifying the beamforming parameters before applying the beamforming parameters to the audio data. The beamforming parameters may be modified such that the modified beamforming parameters do not suppress a first portion of the audio data that is associated with the audio output device. For example, referring to
The method may include, at 3020, performing echo cancellation of the modified audio data. The method may also include, at 3022, sending, to a component of the audio processing device, an indication that the first portion of the audio data has not been suppressed. The indication that the first portion of the audio data has not been suppressed may include the fail indicator of
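The coincidence branch of steps 3008-3022 could be sketched as follows; the 10° threshold and all identifiers are assumptions:

```python
import numpy as np

COINCIDENCE_THRESHOLD = np.radians(10.0)   # assumed tolerance

def select_null_directions(user_doa, doa_memory):
    """Null every stored loudspeaker DOA except those coincident with the
    user DOA (3008/3016); coincident devices are reported (3022) so that
    echo cancellation can handle the unsuppressed portion of the audio data."""
    nulls, unsuppressed = [], []
    for device_id, doa in doa_memory.items():
        if abs(doa - user_doa) < COINCIDENCE_THRESHOLD:
            unsuppressed.append(device_id)   # candidates for the fail indicator
        else:
            nulls.append(doa)
    return nulls, unsuppressed
```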
Accordingly, embodiments disclosed herein enable echo cancellation in circumstances where multiple audio output devices, such as loudspeakers, are sources of echo. Further, the embodiments reduce the computational power used for echo cancellation by using beamforming to suppress audio data associated with one or more of the audio output devices.
Those of skill would appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of non-transitory storage medium. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The ASIC may reside in a computing device or a user terminal (e.g., a mobile phone or a PDA). In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal.
The previous description of the disclosed embodiments is provided to enable a person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments disclosed herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims.
This application claims priority from U.S. Provisional Patent Application No. 61/667,249 filed on Jul. 2, 2012 and entitled “AUDIO SIGNAL PROCESSING DEVICE CALIBRATION,” and claims priority from U.S. Provisional Patent Application No. 61/681,474 filed on Aug. 9, 2012 and entitled “AUDIO SIGNAL PROCESSING DEVICE CALIBRATION,” the contents of each of which are incorporated herein in their entirety.