With the advancement of technology, the use and popularity of electronic devices have increased considerably. Electronic devices are commonly used to process and output audio data.
For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.
Electronic devices may be used to process audio data and generate output audio. For example, a device may receive audio data representing music and may output the music using two or more loudspeakers. To improve a user experience, some devices may include a large number of loudspeakers (e.g., 5 or more), enabling the device to send separate signals to each of the loudspeakers, resulting in a user perceiving a wide virtual sound stage due to separation between the loudspeakers. However, increasing the number of loudspeakers increases a size and cost of the device. To reduce the size and/or cost, some devices may only include 2-3 loudspeakers, and the distance between the loudspeakers may be relatively small. The small spacing between the loudspeakers may result in the user perceiving a small virtual sound stage when the device generates the output audio.
To improve spatial coverage of output audio and improve a user experience, devices, systems, and methods are disclosed that perform upmixing and loudspeaker beamforming. For example, the system can perform upmixing on stereo audio data (e.g., two-channel input signals) to extract a center channel and generate three-channel audio data. The system may then perform loudspeaker beamforming on the three-channel audio data to enable two loudspeakers to generate output audio having three distinct beams. The user may interpret the three distinct beams as originating from three separate locations, resulting in the user perceiving a wide virtual sound stage despite the loudspeakers being spaced close together on the device.
The device 110 may be an electronic device configured to receive, process, and output playback audio received from remote devices. For ease of illustration, some audio data may be referred to as a signal, such as a playback signal x(t), a microphone signal z(t), and/or the like. However, the signals may be comprised of audio data and may be referred to as audio data (e.g., playback audio data x(t), microphone audio data z(t), etc.) without departing from the disclosure. As used herein, audio data (e.g., playback audio data, microphone audio data, or the like) may correspond to a specific range of frequency bands. For example, the playback audio data and/or the microphone audio data may correspond to a human hearing range (e.g., 20 Hz-20 kHz), although the disclosure is not limited thereto.
The device 110 may include two or more microphone(s) 112, although the disclosure is not limited thereto and the device 110 may include additional components without departing from the disclosure. The microphone(s) 112 may be included in a microphone array without departing from the disclosure. For ease of explanation, however, individual microphones included in a microphone array will be referred to as microphone(s) 112.
The device 110 may include two or more loudspeaker(s) 114, although the disclosure is not limited thereto and the device 110 may include additional components without departing from the disclosure. For example, while
The techniques described herein are configured to perform spatial augmentation processing. For example, the device 110 may receive stereo input audio data (e.g., left channel and right channel) and perform upmixing and/or loudspeaker beamforming to widen a virtual sound stage perceived by the user 5. Thus, the device 110 may perform upmixing to extract a center channel from the stereo input audio data and process the center channel separately from the right channel and the left channel. In some examples, the device 110 may apply a first equalization filter to the left/right channels and a second equalization filter to the center channel, although the disclosure is not limited thereto. Additionally or alternatively, the device 110 may perform loudspeaker beamforming by applying directional filters to the left channel, the center channel, and/or the right channel to direct the audio output.
To illustrate an example of loudspeaker beamforming, in some examples the device 110 may process the left channel using first directional filters to generate a left-portion of the left channel and second directional filters to generate a right-portion of the left channel. Similarly, the device 110 may process the right channel using third directional filters to generate a left-portion of the right channel and fourth directional filters to generate a right-portion of the right channel. The device 110 may then combine the left-portion of the left channel and the left-portion of the right channel, and separately combine the right-portion of the left channel and the right-portion of the right channel. As a result of performing loudspeaker beamforming, the device 110 may use two loudspeakers 114 to generate output audio that is associated with three separate directions: a left beam, a center beam, and a right beam. Thus, the user 5 may perceive a wider virtual sound stage and/or distinguish between the beams more clearly than if the device 110 generated the output audio without performing beamforming.
As illustrated in
The device 110 may generate (138) an extracted center channel (e.g., center audio data) using the mapping data. For example, the device 110 may combine the left input channel and the right input channel to generate combined audio data and apply the mapping data to the combined audio data to generate the extracted center channel. The device 110 may generate (140) an extracted left channel by subtracting the extracted center channel from the left input channel and may generate (142) an extracted right channel by subtracting the extracted center channel from the right input channel. Thus, the device 110 may generate the extracted left channel and extracted right channel by removing the extracted center channel from the input stereo audio data. While not illustrated in
After generating the extracted center channel, the extracted left channel, and the extracted right channel, the device 110 may optionally apply (144) directional filters to perform loudspeaker beamforming, may apply (146) equalization filters to perform equalization separately between the left/right channels and the center channel, and may generate (148) output audio. For example, the device 110 may perform loudspeaker beamforming to generate directional output audio that may be perceived by the user 5 as a left beam, a center beam, and a right beam, as will be described in greater detail below with regard to
While the playback audio data x(t) 210 is comprised of a plurality of samples, in some examples the device 110 may group a plurality of samples and process them together. As illustrated in
Additionally or alternatively, the device 110 may convert playback audio data x(n) 212 from the time domain to the frequency domain or subband domain. For example, the device 110 may perform Discrete Fourier Transforms (DFTs) (e.g., Fast Fourier transforms (FFTs), short-time Fourier Transforms (STFTs), and/or the like) to generate playback audio data X(n, k) 214 in the frequency domain or the subband domain. As used herein, a variable X(n, k) corresponds to the frequency-domain signal and identifies an individual frame associated with frame index n and tone index k. As illustrated in
The following high level description of converting from the time domain to the frequency domain refers to playback audio data x(n) 212, which is a time-domain signal corresponding to the audio to output using the loudspeakers 114. As used herein, variable x(n) corresponds to the time-domain signal, whereas variable X(n) corresponds to a frequency-domain signal (e.g., after performing FFT on the playback audio data x(n)).
A Fast Fourier Transform (FFT) is a Fourier-related transform used to determine the sinusoidal frequency and phase content of a signal, and performing FFT produces a one-dimensional vector of complex numbers. This vector can be used to calculate a two-dimensional matrix of frequency magnitude versus frequency. In some examples, the system 100 may perform FFT on individual frames of audio data and generate a one-dimensional and/or a two-dimensional matrix corresponding to the playback audio data X(n). However, the disclosure is not limited thereto and the system 100 may instead perform STFT without departing from the disclosure. A short-time Fourier transform (STFT) is a Fourier-related transform used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time.
Using a Fourier transform, a sound wave such as music or human speech can be broken down into its component “tones” of different frequencies, each tone represented by a sine wave of a different amplitude and phase. Whereas a time-domain sound wave (e.g., a sinusoid) would ordinarily be represented by the amplitude of the wave over time, a frequency domain representation of that same waveform comprises a plurality of discrete amplitude values, where each amplitude value is for a different tone or “bin.” So, for example, if the sound wave consisted solely of a pure sinusoidal 1 kHz tone, then the frequency domain representation would consist of a discrete amplitude spike in the bin containing 1 kHz, with the other bins at zero. In other words, each tone “k” is a frequency index (e.g., frequency bin).
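As a concrete illustration of the bin concept, the following sketch (assuming a numpy environment, with an arbitrarily chosen sampling rate and frame size) generates a pure 1 kHz sinusoid and confirms that its energy falls in the bin containing 1 kHz:

```python
import numpy as np

fs = 16000           # sampling rate in Hz (illustrative assumption)
n_fft = 512          # frame / FFT size (illustrative assumption)
t = np.arange(n_fft) / fs

x = np.sin(2 * np.pi * 1000 * t)          # pure 1 kHz sinusoidal tone

X = np.fft.rfft(x)                        # one complex value per tone index k
bin_width = fs / n_fft                    # 31.25 Hz per bin here
peak_bin = int(np.argmax(np.abs(X)))      # bin with the largest magnitude
print(peak_bin, peak_bin * bin_width)     # -> 32 1000.0
```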
Given a signal x(n), the FFT X(k,n) of x(n) is defined by
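One standard form of this block-wise transform, reconstructed here to be consistent with the index definitions that follow (the exact expression from the original figures is not reproduced), is:

X(k, n) = \sum_{m=0}^{K-1} x(nK + m)\, e^{-j 2\pi k m / K}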
Where k is a frequency index, n is a frame index, and K is an FFT size. Hence, for each block (at frame index n) of K samples, the FFT is performed which produces K complex tones X(k,n) corresponding to frequency index k and frame index n.
The system 100 may include multiple loudspeakers 114, with a first channel (m=0) corresponding to a first loudspeaker 114a, a second channel (m=1) corresponding to a second loudspeaker 114b, and so on until a final channel (M) that corresponds to loudspeaker 114M. As illustrated in
The device 110 illustrated in
Despite the loudspeakers 114 being spaced close together, performing the upmixing and the loudspeaker beamforming may result in the user 5 perceiving a wide virtual sound stage when listening to output audio generated by the device 110. For example, the output audio data 340 may give the perception of spaciousness, such that the user 5 perceives the output audio as having separate beams generated at discrete locations like a traditional stereo system instead of a single source location.
As the output audio data 340 is beamformed using directional filters, the two loudspeakers 114a-114b may generate three separate beams that correspond to the left channel, the center channel, and the right channel. For example,
The room reflection virtual source 350 occurs when the output audio reflects off of an acoustically reflective surface (e.g., wall). For example,
The binaural effect 360 occurs as a side effect of performing beamforming to generate separate beams. As edges of a beam have different pressure as an audio waveform propagates past the user 5, the user 5 may perceive a difference in pressure between the user's left ear and the user's right ear. While the device 110 does not precisely control the binaural effect 360 or target the user 5 (e.g., unlike three-dimensional audio systems), the binaural effect 360 may cause the user 5 to detect an interaural level difference (ILD) and/or interaural phase difference (IPD) between the first pressure detected in the left ear and the second pressure detected in the right ear. The user 5 may interpret the ILD and/or the IPD to determine a directionality of the audio, separating the beams into distinct sound sources. Thus, the binaural effect 360 may result in the user 5 perceiving a wider virtual sound stage as the individual beams are associated with virtual directions instead of the actual location of the device 110.
As illustrated in
The device 110 may then subtract the center channel output data 414 from the left channel input data 402 to generate the left channel output data 412, and may subtract the center channel output data 414 from the right channel input data 404 to generate the right channel output data 416. Thus, the left channel output data 412 may correspond to the left side of the virtual sound stage, without including the center of the virtual sound stage, and the right channel output data 416 may correspond to the right side of the virtual sound stage, without including the center of the virtual sound stage. As part of generating the left channel output data 412, the center channel output data 414, and the right channel output data 416, the device 110 may preserve the original relative phase difference and/or apply additional timing adjustments to synchronize the output audio data. For example, the device 110 may apply a delay filter or other processing so that the output audio data is matched in time and/or phase.
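A minimal sketch of this subtraction and time alignment, assuming numpy and a hypothetical helper with an illustrative integer delay (the device may instead use a fractional delay filter as described later), follows:

```python
import numpy as np

def subtract_center(left_in, right_in, center_out, delay_samples=0):
    """Hypothetical helper: remove the extracted center channel from the left
    and right inputs after aligning them in time with the center path."""
    left_aligned = np.roll(left_in, delay_samples)
    right_aligned = np.roll(right_in, delay_samples)
    left_aligned[:delay_samples] = 0.0      # discard samples wrapped by the roll
    right_aligned[:delay_samples] = 0.0

    # Subtract the center so the side channels carry only non-center content
    return left_aligned - center_out, right_aligned - center_out
```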
In some examples, the device 110 may only perform beamforming for a particular frequency range. For example, the device 110 may perform beamforming up to a fixed frequency cutoff (e.g., 3 kHz, 4 kHz, etc.), relying on a passive directivity associated with the loudspeaker drivers for the higher frequencies. To illustrate an example, the device 110 may perform active beamforming to a first frequency range (e.g., 400 Hz to 3 kHz), rely on the passive directivity associated with the loudspeaker drivers for a second frequency range (e.g., 3 kHz to 16 kHz), and send a third frequency range (e.g., 100 Hz to 400 Hz) to the third loudspeaker 114c (e.g., woofer) to generate omnidirectional sound.
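The crossover itself is not specified in detail; the following sketch shows one plausible way to split a channel into the three frequency ranges, assuming scipy is available and using the illustrative band edges above (the filter order and topology are arbitrary choices):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, fs=44100, low_cut=400.0, high_cut=3000.0):
    """Split one channel into woofer, actively beamformed, and passive bands."""
    sos_low = butter(4, low_cut, btype="lowpass", fs=fs, output="sos")
    sos_mid = butter(4, [low_cut, high_cut], btype="bandpass", fs=fs, output="sos")
    sos_high = butter(4, high_cut, btype="highpass", fs=fs, output="sos")

    woofer_band = sosfilt(sos_low, x)      # sent to the woofer, omnidirectional
    beamformed_band = sosfilt(sos_mid, x)  # actively beamformed
    passive_band = sosfilt(sos_high, x)    # relies on passive driver directivity
    return woofer_band, beamformed_band, passive_band
```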
In the examples illustrated in
In some examples, the device 110 may dynamically change the angle of the loudspeaker drivers based on an environment of the device 110. For example, the device 110 may select the second angle (90°) when an acoustically reflective surface (e.g., wall) is in proximity to the loudspeaker, but may select the first angle (45°) when the device 110 is positioned away from any acoustically reflective surfaces. In some examples, the device 110 may vary the angle of the loudspeaker drivers between the left beam and the right beam. For example, the left beam may be driven at the first angle (45°) due to a lack of acoustically reflective surfaces in a first direction whereas the right beam may be driven at the second angle (90°) due to the presence of an acoustically reflective surface in close proximity to the device 110 in a second direction.
As illustrated in
As illustrated in
To extract the center channel, the device 110 may determine a relative magnitude difference and relative phase difference between the left input data 602 and the right input data 604. As illustrated in
A mapping function component 630 may receive the relative magnitude difference (e.g., magnitude difference data) and the relative phase difference (e.g., phase difference data) and may determine mapping data based on a probability that individual time-frequency units correspond to the center channel. For example, the mapping function component 630 may select spectral content with a relative magnitude difference close to 0 dB and a relative phase difference close to 0 radians, as described in greater detail below with regard to
The mapping function component 630 may generate a spectral mask (e.g., mapping data) with values between 0 and 1, indicating a probability that a time-frequency unit contains center or “mono compatible” content. In some examples, the spectral mask may include continuous values between 0 and 1, enabling the device 110 to generate the center channel (e.g., center audio data) with less distortion. However, the disclosure is not limited thereto, and in other examples the spectral mask may include binary values indicating that a particular time-frequency unit is either associated with the center channel (e.g., value of 1) or not associated with the center channel (e.g., value of 0). For example, the device 110 may compare the probability value to a threshold value, such that probability values above the threshold value are associated with the first value (e.g., 1) and probability values below the threshold value are associated with the second value (e.g., 0), although the disclosure is not limited thereto.
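A minimal sketch of the binary-mask variant, assuming numpy and an illustrative threshold value of 0.5 (the actual threshold is not specified), follows:

```python
import numpy as np

def binarize_mask(mask, threshold=0.5):
    """Convert a continuous spectral mask (values in [0, 1]) into a binary mask:
    time-frequency units above the threshold are treated as center content (1),
    all others as non-center content (0)."""
    return (mask > threshold).astype(mask.dtype)
```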
The mapping function component 630 may output the mapping data to a decimation component 635, which may perform decimation. For example, the decimation component 635 may decimate the mapping data or determine a median using the mapping data and then decimate. The decimation component 635 may perform decimation to process the mapping data to be compatible with a linear filter associated with the fractional delay filter component 640. For example, the decimation component 635 may reduce a size of the mapping data so that it can be combined with the linear filter, although the disclosure is not limited thereto.
As described above, the device 110 may synchronize the channels. For example, the fractional delay filter component 640 may perform phase rotation (e.g., phase rotate by (M−1)/2 samples) and set the Nyquist bin to be real (e.g., rotate a Nyquist curve to the real axis/real part of the transfer function). This effectively removes an imaginary component (e.g., zeroes out the imaginary component) from an input signal. Additionally or alternatively, the synthesized center may be phase-matched with center content in the left and right channels by adding appropriate delay. Thus, the fractional delay filter component 640 may result in the left and right channels being phase matched with the center channel. For example, the fractional delay filter component 640 may apply a linear phase filter with an even number of taps, which may be pre-calculated and stored during testing and/or initialization of the device 110. In some examples, the linear phase filter may have an odd number of taps, in which case performing fractional delay filtering is not necessary. While the fractional delay filter component 640 may match the target response using phase matching, the disclosure is not limited thereto and the fractional delay filter component 640 may synchronize the channels using any techniques known to one of skill in the art without departing from the disclosure. By applying the fractional delay filter component 640 to the center channel and the left/right channels, the device 110 may maintain a linear phase that enables the device 110 to subtract the center channel from the left/right channels.
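A minimal sketch of the phase-rotation step, assuming an M-point FFT and numpy's rfft bin layout (the actual filter coefficients used by the device may differ), follows:

```python
import numpy as np

def fractional_delay_spectrum(n_fft):
    """Phase-rotate by (M-1)/2 samples and force the Nyquist bin to be purely
    real, returning one complex value per rfft bin."""
    delay = (n_fft - 1) / 2.0
    k = np.arange(n_fft // 2 + 1)                     # rfft bin indices
    response = np.exp(-1j * 2 * np.pi * k * delay / n_fft)
    response[-1] = response[-1].real                  # zero the Nyquist imaginary part
    return response
```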
To generate the center channel, the device 110 may use the mapping data in a linear phase Finite Impulse Response (FIR) filter. For example,
The device 110 may perform re-expansion using the expansion component 650 to double the resolution of the combined data (e.g., output of the combiner component 645) so that it can be combined with the combined audio data (e.g., output of the summing component 615). For example, the combined data may have a first resolution (e.g., M) and the combined audio data may have a second resolution (e.g., 2M). Thus, the device 110 may perform re-expansion using the expansion component 650 to generate the center filter data having the second resolution, which can then be combined with the combined audio data using the first combiner component 660a.
To generate the left and right output channels, the device 110 may re-expand the output of the fractional delay filter component 640 using the second expansion component 655 to generate side filter data. For example, the second expansion component 655 may perform re-expansion by applying zero-padding FFT and IFFT processing, although the disclosure is not limited thereto. As described above with regard to the center channel and the expansion component 650, the device 110 may perform re-expansion using the expansion component 655 to double the resolution of the output of the fractional delay filter component 640. For example, the side filter data may have the same resolution as the output of the STFT components 610, enabling the device 110 to combine the side filter data with the output of the STFT components 610.
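One plausible realization of this zero-padding re-expansion, shown here as a numpy sketch under the assumption that the filter is expanded by padding its time-domain form, is:

```python
import numpy as np

def reexpand_filter(filter_spectrum_m):
    """Double the spectral resolution of a length-M frequency response by
    zero-padding its time-domain form and transforming back (sketch only)."""
    m = len(filter_spectrum_m)
    impulse = np.fft.ifft(filter_spectrum_m)                   # M time-domain taps
    padded = np.concatenate([impulse, np.zeros(m, dtype=impulse.dtype)])
    return np.fft.fft(padded)                                  # 2M-point response
```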
To generate the right output channel, a second combiner component 660b may multiply the output from the second STFT component 610b (e.g., right channel in the frequency domain) with the side filter data to generate the synchronized right channel in the frequency domain and a second IFFT component 670b may perform IFFT processing to the synchronized right channel to convert from the frequency domain to the time domain. Finally, a first summing component 675a may subtract the center channel from the synchronized right channel to generate the isolated right channel in the time domain, and a second OLA component 680b may process the isolated right channel using the overlap-add method to generate right output data 684.
To generate the left output channel, a third combiner component 660c may multiply the output from the first STFT component 610a (e.g., left channel in the frequency domain) with the side filter data to generate the synchronized left channel in the frequency domain and a third IFFT component 670c may perform IFFT processing to the synchronized left channel to convert from the frequency domain to the time domain. Finally, a second summing component 675b may subtract the center channel from the synchronized left channel to generate the isolated left channel in the time domain, and a third OLA component 680c may process the isolated left channel using the overlap-add method to generate left output data 686.
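The following per-frame sketch, with hypothetical function and buffer names, ties together the left output path described above (combiner 660c, IFFT 670c, summing component 675b, and OLA 680c); it is an illustration rather than the device's implementation:

```python
import numpy as np

def left_output_frame(left_spectrum, side_filter, center_time, ola_buffer, hop):
    """Hypothetical per-frame sketch of the left output path."""
    synced = left_spectrum * side_filter            # combiner component 660c
    frame = np.fft.ifft(synced).real                # IFFT component 670c
    isolated = frame - center_time                  # summing component 675b
    ola_buffer[:len(isolated)] += isolated          # OLA component 680c (accumulate)
    out = ola_buffer[:hop].copy()                   # emit one hop of output samples
    ola_buffer[:] = np.roll(ola_buffer, -hop)       # advance the overlap-add buffer
    ola_buffer[-hop:] = 0.0
    return out
```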
While not illustrated in
The number of samples (e.g., M) corresponds to a window size (e.g., frequency v. time resolution), such that a larger number of samples corresponds to a smaller frequency range per bin and a smaller number of samples corresponds to a larger frequency range per bin. For example, for a first sampling frequency (44.1 kHz), a first number of samples (e.g., 8192 samples) corresponds to 5.3 Hz per bin, which provides good separation of instruments and voice components of audio data, while a second number of samples (e.g., 1024 samples) corresponds to 43 Hz per bin, which provides poor separation of bass and mid-range instruments represented in the audio data but is effective at reducing transients represented in the audio data.
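These per-bin widths follow directly from dividing the sampling frequency by the number of samples, for example:

```python
fs = 44100           # first sampling frequency (Hz)
print(fs / 8192)     # ≈ 5.38 Hz per bin (fine frequency resolution)
print(fs / 1024)     # ≈ 43.07 Hz per bin (coarser frequency, finer time resolution)
```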
In some examples, the device 110 may dynamically modify the number of samples used to process audio data (e.g., convert from a time domain to a frequency domain, convert from the frequency domain to the time domain, and/or other audio processing) to reduce distortion represented in output audio data and/or other undesirable components of the output audio data. For example, the fractional delay filter component 640 may correspond to a linear phase filter that introduces pre-ringing and/or post-ringing into the output audio data. To reduce and/or prevent the pre-ringing and/or the post-ringing, the device 110 may dynamically select the number of samples to improve a quality of the output audio data. An example of dynamically selecting the number of samples is described below with regard to
While the device 110 may perform additional processing to dynamically select the number of samples, the disclosure is not limited thereto. For example, this additional processing increases a computational complexity and amount of processing associated with generating the output audio data. Instead, in some examples the device 110 may avoid the additional processing by reducing a length of a head and tail of the linear filter (e.g., filter corresponding to the fractional delay filter component 640), which is a compromise between reducing the pre-ringing and the post-ringing effect and reducing a computational complexity associated with generating the output audio data. By reducing the head and tail of the linear filter, the device 110 may generate the output audio data using a fixed number of samples without causing additional distortion (e.g., without the pre-ringing or the post-ringing effect).
As illustrated in
The device 110 may combine the Kaiser window data output by the Kaiser Window component 694 with the output of the IFFT component 690 using a combiner component 696. The output of the combiner component 696 is then input to the first expansion component 650 as described above with regard to
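A minimal sketch of components 690, 694, and 696, assuming scipy's Kaiser window with an arbitrarily chosen beta, follows:

```python
import numpy as np
from scipy.signal.windows import kaiser

def window_filter_taps(filter_spectrum, beta=8.6):
    """Convert the center filter to the time domain, apply a Kaiser window to
    shorten its head and tail, and return the windowed taps (which would then
    feed the expansion component 650)."""
    taps = np.fft.ifft(filter_spectrum).real        # IFFT component 690
    window = kaiser(len(taps), beta)                # Kaiser window component 694
    return taps * window                            # combiner component 696
```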
v_dB = 20 log10|v|, v_rad = ∠v  [3]
and a geometric mean (e.g., soft AND) 724:
where parameters 726 are 0≤λ, α, β<∞.
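As an illustration of the dB and radian quantities in equation [3], the following numpy sketch treats v as the per-bin ratio of the left and right time-frequency units (that definition of v is an assumption here, since the preceding equations are not reproduced):

```python
import numpy as np

def relative_differences(left_tf, right_tf, eps=1e-12):
    """Per time-frequency-unit magnitude difference (dB) and phase difference
    (radians), with v taken here as the ratio of left to right."""
    v = left_tf / (right_tf + eps)
    v_db = 20.0 * np.log10(np.abs(v) + eps)   # relative magnitude difference, v_dB
    v_rad = np.angle(v)                       # relative phase difference, v_rad
    return v_db, v_rad
```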
In some examples, the values for alpha α and beta β may be fixed, and the value of γ may be differentiable with respect to alpha α and beta β. For example, the device 110 may be programmed with specific values for alpha and beta (e.g., α=0.15 and β=4), as illustrated in
As illustrated in
Using the multiple resolutions, the device 110 may perform parallel center extractor processing 820. For example, the device 110 may use the first resolution (e.g., M) to process the input audio data using a first center extractor (M) component 825a, a first delay (0) component 830a, a first Hanned FFT component 835a, and a first spectral dB conversion component 840a, generating first output data (e.g., first center audio data). Similarly, the device 110 may use the second resolution (e.g., M/2) to process the input audio data using a second center extractor (M/2) component 825b, a second delay (3M/4) component 830b, a second Hanned FFT component 835b, and a second spectral dB conversion component 840b, generating second output data (e.g., second center audio data). Finally, the device 110 may use the third resolution (e.g., M/4) to process the input audio data using a third center extractor (M/4) component 825c, a third delay (9M/8) component 830c, a third Hanned FFT component 835c, and a third spectral dB conversion component 840c, generating third output data (e.g., third center audio data).
To select between the three resolutions, the device 110 may include inter-resolution transition logic (pre-ring detection) 850. For example, the device 110 may include a first summing component 845a to determine a first difference between the first output data and the second output data and may process the first difference using a first rectifier max(x, 0) component 855a to generate first rectified data. The device 110 may also include a second summing component 845b to determine a second difference between the second output data and the third output data and may process the second difference using a second rectifier max(x, 0) component 855b to generate second rectified data.
The device 110 may include a dB threshold gamma component 860, which may be used by a first decision component 865 and a second decision component 870 to select a resolution. The dB threshold gamma component 860 may store a threshold value (e.g., gamma), which may be used by the first decision component 865 and/or the second decision component 870. For example, the first decision component 865 may receive the first rectified data and determine whether a first median is greater than the gamma. If true (e.g., Median1>Gamma), the device 110 may select the lowest resolution (e.g., M/2), but if false (e.g., Median1<Gamma), the second decision component 870 may receive the second rectified data and determine whether a second median is less than the gamma. If false (e.g., Median2>Gamma), the device 110 may select the middle resolution (e.g., M), but if true (e.g., Median2<Gamma), the device 110 may select the highest resolution (e.g., 2M). The threshold value may vary without departing from the disclosure, but in some examples the device 110 may store a fixed threshold value selected during testing without departing from the disclosure. Thus, the device 110 may perform pre-ring detection and select a resolution that avoids the pre-ringing.
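A minimal sketch of this decision logic, assuming numpy and that the spectral dB outputs of the three extractors are already available as arrays, follows:

```python
import numpy as np

def select_resolution(out_m, out_m2, out_m4, gamma, m):
    """Compare rectified spectral-dB differences against the threshold gamma
    and pick an FFT size, mirroring the transition logic described above."""
    diff_1 = np.maximum(out_m - out_m2, 0.0)    # rectifier component 855a
    diff_2 = np.maximum(out_m2 - out_m4, 0.0)   # rectifier component 855b

    if np.median(diff_1) > gamma:               # pre-ringing detected
        return m // 2                           # lowest resolution
    if np.median(diff_2) < gamma:               # no pre-ringing detected
        return 2 * m                            # highest resolution
    return m                                    # otherwise hold the middle resolution
```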
As illustrated in
The output of the first beamformer (Left for L) component 910 and the output of the third beamformer (Right for L) component 920 may be combined using a first summing component 950, the output of the second beamformer (Left for R) component 915 and the output of the fourth beamformer (Right for R) component 925 may be combined using a second summing component 955, and the output of the first summing component 950 and the output of the second summing component 955 may be input to a first equalizer (EQ) (side) component 960.
The output of the fifth beamformer (Left for C) component 930 and the output of the sixth beamformer (Right for C) component 935 may be input to a second EQ (Center) component 965. The first EQ (Side) component 960 may apply first equalization settings to both the left channel and the right channel, whereas the second EQ (Center) component 965 may apply second equalization settings to the center channel. However, while
The output of the first EQ (Side) component 960 and the second EQ (Center) component 965 may be combined to generate loudspeaker output signals. For example, the left channel (e.g., output of the first summing component 950, after being processed by the first EQ (Side) component 960) may be combined with the left portion of the center channel (e.g., output of the fifth beamformer (Left for C) component 930, after being processed by the second EQ (Center) component 965) using a third summing component 970 to generate left loudspeaker output 975. Similarly, the right channel (e.g., output of the second summing component 955, after being processed by the first EQ (Side) component 960) may be combined with the right portion of the center channel (e.g., output of the sixth beamformer (Right for C) component 935, after being processed by the second EQ (Center) component 965) using a fourth summing component 980 to generate right loudspeaker output 985. Thus, the left loudspeaker output 975 may be sent to a left loudspeaker 114a and the right loudspeaker output 985 may be sent to a right loudspeaker 114b to generate output audio having three beams.
The beamformer components 910/915/920/925/930/935 may perform loudspeaker beamforming processing using techniques known to one of skill in the art without departing from the disclosure. For example, the beamformer components may apply beamforming filter data (e.g., beamformer coefficients, beamformer values, beamforming filters, etc.) to an input signal to generate an output signal that may be perceived by a user as having directionality/directivity. To illustrate an example, the first beamformer (Left for L) component 910 may apply first beamforming filter data to generate a first portion of the left loudspeaker output 975, the third beamformer (Right for L) component 920 may apply second beamforming filter data to generate a second portion of the left loudspeaker output 975, and the fifth beamformer (Left for C) component 930 may apply third beamforming filter data to generate a third portion of the left loudspeaker output 975, although the disclosure is not limited thereto. Similarly, the second beamformer (Left for R) component 915 may apply fourth beamforming filter data to generate a first portion of the right loudspeaker output 985, the fourth beamformer (Right for R) component 925 may apply fifth beamforming filter data to generate a second portion of the right loudspeaker output 985, and the sixth beamformer (Right for C) component 935 may apply sixth beamforming filter data to generate a third portion of the right loudspeaker output 985, although the disclosure is not limited thereto.
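The following sketch, with a hypothetical dictionary of precalculated filter taps, illustrates applying the six beamforming filters and summing the per-loudspeaker contributions (equalization components 960/965 are omitted for brevity; the actual filters are precomputed as described below):

```python
import numpy as np
from scipy.signal import fftconvolve

def beamform_outputs(left_ch, right_ch, center_ch, filters):
    """Apply the six beamforming filters (components 910-935) and sum the
    contributions for each loudspeaker; `filters` is a hypothetical dict of
    precalculated 1-D tap arrays keyed by (channel, loudspeaker)."""
    left_out = (fftconvolve(left_ch, filters[("L", "left")], mode="same")
                + fftconvolve(right_ch, filters[("R", "left")], mode="same")
                + fftconvolve(center_ch, filters[("C", "left")], mode="same"))
    right_out = (fftconvolve(left_ch, filters[("L", "right")], mode="same")
                 + fftconvolve(right_ch, filters[("R", "right")], mode="same")
                 + fftconvolve(center_ch, filters[("C", "right")], mode="same"))
    return left_out, right_out                  # sent to loudspeakers 114a/114b
```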
The beamforming filter data may be precalculated and stored in the device 110. For example, the device 110 may be preconfigured with beamforming filter data corresponding to each channel (e.g., left, center, right) and each loudspeaker (e.g., left and right). Thus, the device 110 may store beamforming filter data corresponding to six separate beamforming filters to perform loudspeaker beamformer processing as described above. However, the disclosure is not limited thereto and the number of beamforming filters may vary depending on the number of loudspeakers and/or the number of channels without departing from the disclosure.
In some examples, the beamforming filter data may be calculated to maximize acoustic energy within a listening zone and to minimize acoustic energy within a silent area. For example, the system 100 may generate the first beamforming filter data to maximize acoustic energy (e.g., energy values) within a first listening zone corresponding to the left beam illustrated in
Similarly, the equalization components 960/965 may perform equalization processing using techniques known to one of skill in the art without departing from the disclosure. For example, the equalization components may apply equalization filter data (e.g., equalization settings, equalization values, equalization filters, etc.) to an input signal to generate an output signal. The equalization filter data may apply different processing to different frequency ranges, such as emphasizing a lower frequency range (e.g., increasing bass), a middle frequency range (e.g., increasing mid-range), and/or a higher frequency range (e.g., increasing treble).
While
In some examples, the device 110 may include a third loudspeaker 114c (e.g., woofer) configured to generate output audio associated with low frequencies (e.g., under 400 Hz). For example, the device 110 may identify a portion of input audio data below a crossover frequency (e.g., 400 Hz), which was originally associated with the left channel, the right channel, and/or the center channel, and may send the portion of the input audio data to the third loudspeaker 114c. As the device 110 does not apply active beamforming to the portion of the audio data sent to the third loudspeaker 114c, these low frequencies may be omnidirectional.
As illustrated in
While
The device 110 may generate the multiple beam implementation 1010 using the techniques described above in a variety of ways without departing from the disclosure. For example, the device 110 may use a first mapping function (e.g., first values for alpha and beta, corresponding to a first range of magnitude difference values and radian difference values) to generate the center beam and use a second mapping function (e.g., second values for alpha and beta, corresponding to a second range of magnitude difference values and radian difference values) to generate the left-center beam and the right-center beam. However, the disclosure is not limited thereto and the device 110 may generate the output beams using any techniques known to one of skill in the art in light of the techniques described above without departing from the disclosure.
While
While
The device 110 may generate (1116) combined audio data by combining the left channel and the right channel and may generate (1118) an extracted center channel using the mapping data and the combined audio data. For example, the device 110 may apply a fractional delay filter to the mapping data and then multiply this filter data by the combined audio data to generate the center channel.
The device 110 may generate (1120) an extracted left channel by subtracting the extracted center channel from the left channel, and may generate (1122) an extracted right channel by subtracting the extracted center channel from the right channel. Thus, the extracted left channel and the extracted right channel do not include any of the extracted center channel, which helps separate the beams and results in the user 5 perceiving a wide virtual sound stage.
The device 110 may determine (1214) a first difference between the first magnitude and the second magnitude, may determine (1216) a second difference between the second magnitude and the third magnitude, and may determine (1218) whether the median is greater than the gamma for the second difference. If the median is greater than the gamma for the second difference, the device 110 may set (1220) the resolution equal to M/2 (e.g., perform down-resolution by cutting the resolution in half).
If the median is not greater than the gamma, the device 110 may determine (1222) whether the median is greater than the gamma for the first difference. If the median is not greater than the gamma for the first difference, the device 110 may set (1224) the resolution equal to M (e.g., hold the current resolution), whereas if the median is greater than the gamma for the first difference, the device 110 may set (1226) the resolution equal to 2M (e.g., perform up-resolution by doubling the resolution).
Thus, the device 110 may perform center extraction for multiple resolutions in parallel and perform pre-ring detection to select between the multiple resolutions. While not illustrated in
As illustrated in
The device 110 may include one or more controllers/processors 1304, which may each include a central processing unit (CPU) for processing data and computer-readable instructions, and a memory 1306 for storing data and instructions. The memory 1306 may include volatile random access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive (MRAM) and/or other types of memory. The device 110 may also include a data storage component 1308, for storing data and controller/processor-executable instructions (e.g., instructions to perform the algorithm illustrated in
The device 110 includes input/output device interfaces 1302. A variety of components may be connected through the input/output device interfaces 1302. For example, the device 110 may include one or more microphone(s) 112 and/or one or more loudspeaker(s) 114 that connect through the input/output device interfaces 1302, although the disclosure is not limited thereto. Instead, the number of microphone(s) 112 and/or loudspeaker(s) 114 may vary without departing from the disclosure. In some examples, the microphone(s) 112 and/or loudspeaker(s) 114 may be external to the device 110.
The input/output device interfaces 1302 may be configured to operate with network(s) 199, for example a wireless local area network (WLAN) (such as WiFi), Bluetooth, ZigBee and/or wireless networks, such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, etc. The network(s) 199 may include a local or private network or may include a wide network such as the internet. Devices may be connected to the network(s) 199 through either wired or wireless connections.
The input/output device interfaces 1302 may also include an interface for an external peripheral device connection such as universal serial bus (USB), FireWire, Thunderbolt, Ethernet port or other connection protocol that may connect to network(s) 199. The input/output device interfaces 1302 may also include a connection to an antenna (not shown) to connect one or more network(s) 199 via an Ethernet port, a wireless local area network (WLAN) (such as WiFi) radio, Bluetooth, and/or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, etc.
The device 110 may include components that may comprise processor-executable instructions stored in storage 1308 to be executed by controller(s)/processor(s) 1304 (e.g., software, firmware, hardware, or some combination thereof). For example, components of the device 110 may be part of a software application running in the foreground and/or background on the device 110. Some or all of the controllers/components of the device 110 may be executable instructions that may be embedded in hardware or firmware in addition to, or instead of, software. In one embodiment, the device 110 may operate using an Android operating system (such as Android 4.3 Jelly Bean, Android 4.4 KitKat or the like), an Amazon operating system (such as FireOS or the like), or any other suitable operating system.
Executable computer instructions for operating the device 110 and its various components may be executed by the controller(s)/processor(s) 1304, using the memory 1306 as temporary “working” storage at runtime. The executable instructions may be stored in a non-transitory manner in non-volatile memory 1306, storage 1308, or an external device. Alternatively, some or all of the executable instructions may be embedded in hardware or firmware in addition to or instead of software.
The components of the device 110, as illustrated in
The concepts disclosed herein may be applied within a number of different devices and computer systems, including, for example, general-purpose computing systems, server-client computing systems, mainframe computing systems, telephone computing systems, laptop computers, cellular phones, personal digital assistants (PDAs), tablet computers, video capturing devices, video game consoles, speech processing systems, distributed computing environments, etc. Thus, the components and/or processes described above may be combined or rearranged without departing from the scope of the present disclosure. The functionality of any component described above may be allocated among multiple components, or combined with a different component. As discussed above, any or all of the components may be embodied in one or more general-purpose microprocessors, or in one or more special-purpose digital signal processors or other dedicated microprocessing hardware. One or more components may also be embodied in software implemented by a processing unit. Further, one or more of the components may be omitted from the processes entirely.
The above embodiments of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed embodiments may be apparent to those of skill in the art. Persons having ordinary skill in the field of computers and/or digital imaging should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. Moreover, it should be apparent to one skilled in the art, that the disclosure may be practiced without some or all of the specific details and steps disclosed herein.
Embodiments of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk and/or other media.
Embodiments of the present disclosure may be performed in different forms of software, firmware and/or hardware. Further, the teachings of the disclosure may be performed by an application specific integrated circuit (ASIC), field programmable gate array (FPGA), or other component, for example.
Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
Conjunctive language such as the phrase “at least one of X, Y and Z,” unless specifically stated otherwise, is to be understood with the context as used in general to convey that an item, term, etc. may be either X, Y, or Z, or a combination thereof. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present.
As used in this disclosure, the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.