The present invention relates to a method for rendering a stereo signal over a first and a second loudspeaker with respect to a desired direction and to a mobile device for rendering a stereo signal.
In particular, the invention relates to the field of sound reproduction by using loudspeaker systems.
There are many portable devices with two loudspeakers on the market, such as iPod docks or laptops. Tablets and mobile phones with built-in stereo loudspeakers can also be viewed as portable stereo devices. Compared to a conventional stereo system with two discrete loudspeakers, the two loudspeakers of a portable stereo device are located very close to each other. Due to the size of the device, they are usually spaced by only a few centimeters, typically between 10 and 30 cm for mobile devices such as smartphones or tablets. This results in music reproduction that is narrow, almost “mono-like”.
The concept of the Mid/Side loudspeaker has been introduced in (Heegaard, F. D. (1992). “The Reproduction of Sound in Auditory Perspective and a Compatible System of Stereophony”, J. Audio Eng. Soc., 40(10), pp. 802-808). The goal was to reproduce a stereo signal with only a single loudspeaker box. As opposed to playing back left and right signals, a sum signal, i.e. the left signal plus the right signal, and a difference signal, i.e. the left signal minus the right signal, are reproduced with two loudspeakers having different characteristics. The sum signal is played back with a conventional loudspeaker which is omnidirectional at low frequencies and unidirectional at high frequencies. The difference signal is reproduced with a dipole loudspeaker, bi-directionally pointing towards the left and right directions. Perceptually, this results in the listener hearing the sum signal (soloists, main content) from the loudspeaker position. Additionally, there is a spatial effect. The dipole, driven with the difference signal, excites the room with zero sound propagation towards the listener.
In the patent application PCT/CN2011/079806, a method for generating an acoustic signal with an enhanced spatial effect is described. This method uses the same principle of dipole rendering, applied with normal loudspeaker systems. The original stereo signal is played out on the two loudspeakers and the difference signal is additionally played out with a dipole rendering from the same loudspeaker system, i.e. rendered directly on one side and multiplied by −1 on the other side. Such a system, however, requires that the listener be in a central listening position. If the listener is not located exactly in front of the loudspeaker system, the sound impression deteriorates considerably.
It is the object of the invention to provide an improved technique for reproducing a stereo signal.
This object is achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
The invention is based on the finding that changing the rendering of difference and spatial signals reproduced with dipole characteristics according to the position of the listener allows steering the zero sound propagation of the difference/spatial signal towards the listener, thereby improving the sound impression. By applying this technique, the invention does not require that the listener be located in a central listening position.
In order to describe the invention in detail, the following terms, abbreviations and notations will be used:
L: left channel, left path, left path signal component,
R: right channel, right path, right path signal component,
BCC: Binaural Cue Coding,
CLD: Channel Level Difference,
ILD: Inter-channel Level Difference,
ITD: Inter-channel Time Differences,
IPD: Inter-channel Phase Differences,
ICC: Inter-channel Coherence/Cross Correlation,
STFT: Short-Time Fourier Transform,
QMF: Quadrature Mirror Filter.
According to a first aspect, the invention relates to a method for rendering a stereo audio signal over a first loudspeaker and a second loudspeaker with respect to a desired direction, the stereo signal comprising a first audio signal component and a second audio signal component, the method comprising: providing a first rendering signal based on a combination of the first audio signal component and a first difference signal obtained based on a difference between the first audio signal component and the second audio signal component to the first loudspeaker, and providing a second rendering signal based on a combination of the second audio signal component and a second difference signal obtained based on the difference between the first audio signal component and the second audio signal component to the second loudspeaker, such that both difference signals are different with respect to sign and one difference signal is delayed by a delay compared to the other difference signal to define a dipole signal, wherein the delay is adapted according to the desired direction.
The first and second audio signal components may be a first and a second audio channel signal of a conventional stereo signal, or spatial cues and a downmix signal of a parametric stereo signal, e.g. first and second spatial cues for the left and right channel per sub-band. Spatial cues are inter-channel cues. The loudspeakers may be conventional loudspeakers, i.e. no dipole loudspeaker hardware is required.
The method allows providing a stereo rendering with enhanced spatial perception, steering towards a desired direction, e.g. a direction where a listener is positioned, and thus provides an improved technique for reproducing a stereo signal.
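The combination described in the first aspect can be sketched in a minimal, illustrative form (the function and variable names are assumptions for illustration, not part of the claimed method): each output is the respective channel plus a signed copy of the difference signal, where one copy is delayed so that the two difference components form a steerable dipole.

```python
import numpy as np

def render_steered_stereo(left, right, delay_samples):
    """Illustrative sketch: combine each channel with a signed copy of the
    difference signal; one copy is delayed so that the two difference
    components form a steerable dipole (names are assumptions)."""
    diff = left - right
    # delayed copy of the difference signal (delay in whole samples)
    delayed = np.concatenate((np.zeros(delay_samples), diff))[:len(diff)]
    first = left + diff        # first rendering signal: direct difference
    second = right - delayed   # second: inverted and delayed difference
    return first, second
```

For steering to the other side, the delay and inversion would be swapped between the two paths.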
In a first possible implementation form of the method according to the first aspect, the method comprises adapting the delay as a function of an angle defining the desired direction relative to a central position with regard to the two loudspeakers.
The central position denotes a zero degree angle or a central line between the two loudspeakers.
By adapting the delay as a function of the angle with respect to the desired direction, an optimum sound impression can be provided to the listener.
In a second possible implementation form of the method according to the first implementation form of the first aspect, the method comprises adapting the delay as a function of a distance between the loudspeakers.
By adapting the delay as a function of a distance between the loudspeakers, the method can be applied to any kind of mobile device, no matter where and at what distance the loudspeakers are arranged. Even for external loudspeakers, optimum sound quality can be guaranteed to the listener.
In a third possible implementation form of the method according to the first implementation form or according to the second implementation form of the first aspect, the function of the angle is according to: u=cos(π/2+α)/(cos(π/2+α)−1), where α denotes the angle defining the desired direction relative to a central position with regard to the two loudspeakers and u denotes the function of the angle.
Such a function can be efficiently realized by a lookup table storing the function values with respect to the angle. The computational complexity is low.
In a fourth possible implementation form of the method according to the third implementation form of the first aspect, the method comprises adapting the delay according to: τ=ud/(c(1−u)), where τ denotes the delay, d denotes the distance between the loudspeakers, u denotes the function of the angle (α) defining the desired direction relative to a central position with regard to the two loudspeakers and c denotes the speed of sound propagation.
Such a function can be easily computed as the parameters u, d and c can be predetermined and stored in a lookup table for a fixed position of the loudspeakers in the mobile device applying that method. For variable loudspeaker positions, e.g. when using external loudspeakers, the speed of sound c and the distance d between the loudspeakers can be re-computed, and thus the method is flexible with respect to changes of the loudspeaker positions.
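Using the formulas of the third and fourth implementation forms, the computation of u and τ can be sketched as follows (an illustrative implementation; the helper names and the default speed of sound c = 343 m/s are assumptions):

```python
import math

def steering_parameter(alpha):
    """u = cos(pi/2 + alpha) / (cos(pi/2 + alpha) - 1), cf. the
    function of the angle given above."""
    c0 = math.cos(math.pi / 2 + alpha)
    return c0 / (c0 - 1.0)

def dipole_delay(alpha, d, c=343.0):
    """tau = u*d / (c*(1 - u)); d in metres, c an assumed speed of sound."""
    u = steering_parameter(alpha)
    return u * d / (c * (1.0 - u))

# alpha = 0 (listener straight ahead): u = 0, a pure dipole, no delay.
# alpha = pi/2 (listener to the side): u = 0.5, a cardioid-like pattern.
```

For α = 0 the delay vanishes and the rendering reduces to the plain dipole case.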
In a fifth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method comprises adapting the delay such that zero sound of the dipole signal is emitted towards the desired direction.
When zero sound is emitted towards the desired direction, e.g. the direction where the listener is positioned, the spatial impression of the listener is enhanced as the listener hears the sound arriving from two distinct directions.
In a sixth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method comprises delaying and filtering the difference between the first audio signal component and the second audio signal component prior to the combining with the first and second signal components; wherein further the combination of the first audio signal component and the first difference signal comprises an addition of the first audio signal component and the first difference signal, and the combination of the second audio signal component and the second difference signal comprises an addition of the second audio signal component and the second difference signal.
By delaying and filtering the difference signal prior to the combining with the first and second signal components the low-frequency gain loss of the differential sound reproduction can be compensated.
In a seventh possible implementation form of the method according to the sixth implementation form of the first aspect, the filtering comprises using a low-pass filter.
By filtering with a low-pass shelving filter, the spectral shape of reverberation can be mimicked, thereby enhancing the sound impression.
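As one possible way to realize such a filter, a first-order low-shelving design can boost the low frequencies of the difference path; the following textbook bilinear-transform design is a sketch under stated assumptions, not the specific filter of the invention:

```python
import math

def low_shelf_coeffs(fs, f0, gain_db):
    """First-order low-shelf via bilinear transform: gain_db boost below
    f0, unity gain at high frequencies. A generic textbook design, not
    the specific filter of the invention."""
    g = 10.0 ** (gain_db / 20.0)
    w = math.tan(math.pi * f0 / fs)
    b = ((g * w + 1.0) / (w + 1.0), (g * w - 1.0) / (w + 1.0))
    a = (1.0, (w - 1.0) / (w + 1.0))
    return b, a

def apply_filter(x, b, a):
    """Direct-form I first-order IIR: y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]."""
    y, x1, y1 = [], 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 - a[1] * y1
        y.append(yn)
        x1, y1 = xn, yn
    return y
```

By construction the filter has the prescribed gain at DC and unity gain at the Nyquist frequency.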
In an eighth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method comprises obtaining a direction information indicating the desired direction; e.g. by sensing a position of a listener; and adapting the delay based on the direction information.
By sensing a position of a listener for determining the desired direction, the method can be adjusted to the listener position and is flexibly adjustable to a moving listener. Even more than one listener can be detected, and the method can be directed to a desired listener, e.g. a listener in a group of listeners.
In a ninth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the distance between the loudspeakers is within a range of 5 cm to 40 cm.
When the distance between the loudspeakers is within a range of 5 cm to 40 cm, the method is adapted to be applied in standard mobile devices such as mobile phones, smartphones, tablets etc.
In a tenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the angle defining the desired direction relative to a central position with regard to the two loudspeakers is within a range of −90 degrees to +90 degrees.
When the angle is within that range, the dipole rendering can be steered in all possible directions in front of a mobile device applying that method. There are no limitations with respect to the position of the listener.
In an eleventh possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the angle defining the desired direction relative to a central position with regard to the two loudspeakers is outside of a range between −1° and +1°, outside of a range between −5° and +5° or outside of a range between −10° and +10°.
In a twelfth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the stereo signal is available in compressed form as a parametric stereo signal comprising a mono down-mix signal and at least one inter-channel cue, in particular one of an inter-channel level difference, an inter-channel time difference, an inter-channel phase difference and an inter-channel coherence/cross correlation.
The method can be applied to multichannel audio signals and to compressed stereo signals. The method can be embedded in parametric stereo synthesis, thereby decreasing computational complexity.
In a thirteenth possible implementation form of the method according to the twelfth implementation form of the first aspect, the method comprises: determining the difference between the first audio signal component and the second audio signal component in frequency domain on a sub-band basis of the parametric stereo signal; and determining the delay by using a phase shift with respect to the sub-bands of the parametric stereo signal.
The difference corresponds to a difference signal but should not be confused with the first and second difference signals. The parametric stereo signal may comprise only inter-channel (spatial) cues, or both a downmix signal and inter-channel cues.
Implementing the method in frequency sub-bands saves computational complexity. Synergies can be realized with respect to separate computations of frequency synthesis and rendering steering direction.
In a fourteenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the delay is adapted in a preset manner according to the desired direction.
The adapted delay may be either a fixed, preset delay or a flexibly or dynamically adapted delay. A fixed preset delay may correspond to an adaptation to a desired direction different from 0° with regard to the central line between the two loudspeakers.
In a fifteenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method comprises delaying and filtering the difference between the first audio signal component and the second audio signal component prior to the combining with the first and second signal components.
In a sixteenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the combination of the first audio signal component and the first difference signal comprises an addition of the first audio signal component and the first difference signal, and the combination of the second audio signal component and the second difference signal comprises an addition of the second audio signal component and the second difference signal.
In a seventeenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the combination of the first audio signal component and the first difference signal comprises an addition of the first audio signal component and the first difference signal.
In an eighteenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the combination of the second audio signal component and the second difference signal comprises an addition of the second audio signal component and the second difference signal.
According to a second aspect, the invention relates to a mobile device configured for rendering a stereo audio signal over a first loudspeaker and a second loudspeaker with respect to a desired direction, the stereo signal comprising a first audio signal component and a second audio signal component, the mobile device comprising: rendering means configured for providing a first rendering signal based on a combination of the first audio signal component and a first difference signal obtained based on a difference between the first audio signal component and the second audio signal component to the first loudspeaker, and providing a second rendering signal based on a combination of the second audio signal component and a second difference signal obtained based on the difference between the first audio signal component and the second audio signal component to the second loudspeaker, such that both difference signals are different with respect to sign and one difference signal is delayed by a delay compared to the other difference signal to define a dipole signal, wherein the rendering means is configured to adapt the delay according to the desired direction.
The mobile device performs stereo rendering with enhanced spatial perception, steering towards a desired direction, e.g. a direction where a listener is positioned, and thus provides an improved technique for reproducing a stereo signal. The mobile device can also process a parametric representation of a stereo signal, for example a compressed stereo signal or a mono or stereo representation of a multichannel audio signal.
In a first possible implementation form of the mobile device according to the second aspect, the mobile device comprises sensing means, in particular a camera, configured for sensing positioning information of a listener listening to the stereo signal, wherein the rendering means is configured to adapt the delay based on the positioning information.
By sensing positioning information of a listener for determining the desired direction, the mobile device can be adjusted to the listener position and is thus flexibly adjustable to a moving listener. Even more than one listener can be detected and the mobile device can be directed to a desired listener, e.g. a listener in a group of listeners.
In a second possible implementation form of the mobile device according to the second aspect as such or according to the first implementation form of the second aspect, the stereo signal is available in compressed form as a parametric stereo signal comprising a mono down-mix signal and at least one inter-channel cue, in particular one of an inter-channel level difference, an inter-channel time difference, an inter-channel phase difference and an inter-channel coherence/cross correlation.
The mobile device can process multichannel audio signals and compressed stereo signals. The rendering device can be embedded in an entity processing the parametric stereo synthesis, thereby decreasing computational complexity.
In a third possible implementation form of the mobile device according to the second aspect as such or according to any of the preceding implementation forms of the second aspect, the mobile device comprises a first determining entity configured for determining the difference signal in frequency domain on a sub-band basis of the parametric stereo signal; and a second determining entity configured for determining the delay by using a phase shift with respect to the sub-bands of the parametric stereo signal.
Processing frequency sub-bands saves computational complexity. Synergies can be realized with respect to separate computations of frequency synthesis and rendering steering direction.
In a fourth possible implementation form of the mobile device according to the second aspect as such or according to any of the preceding implementation forms of the second aspect, the first loudspeaker and the second loudspeaker are built-in loudspeakers integrated into the mobile device.
According to a third aspect, the invention relates to a method, comprising: receiving a stereo signal having a left and a right channel; reproducing a sum signal directly with a pair of loudspeakers; reproducing left and/or right difference signals between the left and right channel, and optionally also a reverb signal, with the two loudspeakers such that they have a first-order directivity pattern, wherein a directivity pattern of the loudspeakers is controlled such that its zero points towards the most likely listener position.
In a first possible implementation form of the method according to the third aspect, the reproducing the sum signal and the reproducing the left and/or right difference signals are combined in order to compute the stereo signal.
In a second possible implementation form of the method according to the third aspect as such or according to the first implementation form of the third aspect, the method comprises playing out the stereo signal by the loudspeakers.
According to a fourth aspect, the invention relates to a method for rendering a stereo signal comprising a left signal and a right signal over two loudspeakers, the method comprising: rendering the stereo signal directly to the loudspeakers; and adding a rendered difference signal, providing this signal with a different sign and delay to both loudspeakers.
In a first possible implementation form of the method according to the fourth aspect, the left signal is rendered on the left loudspeaker and the right signal is rendered on the right loudspeaker.
In a second possible implementation form of the method according to the fourth aspect as such or according to the first implementation form of the fourth aspect, the method comprises: applying a delay and/or a filter to the difference signal.
In a third possible implementation form of the method according to the fourth aspect as such or according to any of the preceding implementation forms of the fourth aspect, the method comprises: determining the delay as a function of a desired steering direction of the loudspeakers.
In a fourth possible implementation form of the method according to the fourth aspect as such or according to any of the preceding implementation forms of the fourth aspect, the method comprises: obtaining the desired steering direction from sensors of a mobile device.
The methods, systems and devices described herein may be implemented as software in a Digital Signal Processor (DSP), in a micro-controller or in any other side-processor or as hardware circuit within an application specific integrated circuit (ASIC).
The invention can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof, e.g. in available hardware of conventional mobile devices or in new hardware dedicated for processing the methods described herein.
Further embodiments of the invention will be described with respect to the following figures, in which:
As illustrated in
x1(t)=s(t)
x2(t)=−s(t−τ). (1)
The sound field generated by such a pair of point-source modeled loudspeakers 101, 103 in the far-field is
p(r,t) = 2j sin((ω/(2c))(cτ + d cos φ)) · s(t − r/c − τ/2)/r. (2)
At low frequencies, (2) can be approximated by
p(r,t) ≈ j(ω/c)(cτ + d cos φ) · s(t − r/c − τ/2)/r, (3)
wherefrom it can be seen that the ratio u = cτ/(cτ + d) corresponds to a parameter determining the directional response shape
directivity(φ) = u + (1 − u) cos φ. (4)
The parameter d in equations (2) and (3) represents the distance between the loudspeakers 101, 103 as depicted in
The parameter u, which steers a zero towards an angle α ∈ [0, π/2] with respect to a direction 201 of a listener 199, is as follows:
u=cos(π/2+α)/(cos(π/2+α)−1). (5)
As can be seen from
For negative angles α ∈ [−π/2, 0], the delay and the inversion are applied to the other loudspeaker, i.e. the left loudspeaker 103 of
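The steering behaviour of equations (4) and (5) can be verified numerically: with u chosen according to (5), the first-order pattern u + (1 − u) cos φ has its zero at φ = π/2 + α, i.e. towards the listener direction. The following small check is illustrative only:

```python
import math

def directivity(phi, u):
    # eq. (4): u = 0 gives a dipole, u = 0.5 a cardioid
    return u + (1.0 - u) * math.cos(phi)

alpha = math.radians(30)                 # example listener angle
c0 = math.cos(math.pi / 2 + alpha)
u = c0 / (c0 - 1.0)                      # eq. (5)
# the zero of the pattern falls at phi = pi/2 + alpha, towards the listener
assert abs(directivity(math.pi / 2 + alpha, u)) < 1e-9
```

For α = 0 this reduces to u = 0, i.e. the plain dipole with zeros at ±90°.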
The loudspeaker system 300 comprises a left path loudspeaker 301, a right path loudspeaker 303, a left path time delay 307, a right path time delay 305, a left path signal inverter 311, a right path signal inverter 309, a left path switch 315 and a right path switch 313. The loudspeakers 301, 303 are conventional loudspeakers, i.e. no special hardware for implementing dipole loudspeakers is required.
As illustrated in
The loudspeaker system 400 comprises a left path loudspeaker 401, a right path loudspeaker 403, a right path time delay 405, a right path signal inverter 409, a right path summer 413, a left path summer 415, a difference path summer 425, a difference path time delay 423 and a difference path multiplier 421. The loudspeakers 401, 403 are conventional loudspeakers, i.e. no special hardware for implementing dipole loudspeakers is required.
As illustrated in
In an alternative implementation not shown in
In a further implementation, the implementation shown in
The loudspeaker system 400 provides a spatial enhancement with steering towards the listener. The characteristics of such a two-loudspeaker-array enhancer with steering towards the listener direction can be summarized as follows. One loudspeaker pair is used. Because of the small form factor, i.e. only a few centimeters, e.g. 5-40 cm, separating the two loudspeakers, dipole processing of the lower frequencies is not applicable. Instead, filters are used to control this aspect and the dipole processing is applied in the adapted frequency band. For the difference signal, a normal dipole rendering is used if the listener is located straight in front of the array. For other positions of the listener, the rendering direction is adapted by changing the dipole to a tailed cardioid, such that the zero points towards the listener.
The involved signal processing is schematically shown in
The method 500 is configured for rendering a stereo signal over a first and a second loudspeaker with respect to a desired direction. The stereo signal comprises a first signal component L and a second signal component R according to the description of
In an implementation, the method 500 comprises adapting the delay τ as a function of an angle (α) defining the desired direction relative to a central position with regard to the two loudspeakers. In an implementation, the method 500 comprises adapting the delay τ as a function of a distance d between the loudspeakers. In an implementation, the function of the angle α is according to: u=cos(π/2+α)/(cos(π/2+α)−1), where α denotes the angle defining the desired direction relative to a central position with regard to the two loudspeakers and u denotes the function of the angle. In an implementation, the method 500 comprises adapting the delay τ according to: τ=ud/(c(1−u)), where τ denotes the delay, d denotes the distance between the loudspeakers, u denotes the function of the angle α defining the desired direction relative to a central position with regard to the two loudspeakers and c denotes the speed of sound propagation. In an implementation, the method 500 comprises adapting the delay τ such that zero sound of the dipole signal is emitted towards the desired direction. In an implementation, the method 500 comprises delaying and filtering the difference diff between the first audio signal component L and the second audio signal component R prior to the combining with the first L and second R signal components; wherein further the combination of the first audio signal component L and the first difference signal diff_L comprises an addition of the first audio signal component L and the first difference signal diff_L, and the combination of the second audio signal component R and the second difference signal diff_R comprises an addition of the second audio signal component R and the second difference signal diff_R. In an implementation, the filtering comprises using a low-pass filter. In an implementation, the method 500 comprises obtaining direction information indicating the desired direction; e.g. 
by sensing a position of a listener; and adapting the delay τ based on the direction information. In an implementation, the distance between the loudspeakers is within a range of 5 cm to 40 cm. In an implementation, the angle defining the desired direction relative to a central position with regard to the two loudspeakers is within a range of −90 degrees to +90 degrees. In an implementation, the angle α defining the desired direction relative to a central position with regard to the two loudspeakers is outside of a range between −1° and +1°, outside of a range between −5° and +5°, or outside of a range between −10° and +10°. In an implementation, the stereo signal is available in compressed form as a parametric stereo signal comprising a mono down-mix signal and at least one inter-channel cue, in particular one of an inter-channel level difference, an inter-channel time difference, an inter-channel phase difference and an inter-channel coherence/cross correlation. In an implementation, the method 500 comprises determining the difference diff between the first audio signal component L and the second audio signal component R in the frequency domain on a sub-band basis of the parametric stereo signal; and determining the delay τ by using a phase shift with respect to the sub-bands of the parametric stereo signal. In an implementation, the delay τ is adapted in a preset manner according to the desired direction.
The mobile device 800 is configured for rendering a stereo signal over a first loudspeaker 801 and a second loudspeaker 803 with respect to a desired direction 811, where the stereo signal comprises a first signal component L and a second signal component R as described with respect to
The loudspeakers 801, 803 are conventional loudspeakers, i.e. no special hardware for implementing dipole loudspeakers is required.
In an implementation, the input stereo signal 802 is composed of the two channels L and R. In another implementation, the input stereo signal 802 is composed of a parametric representation of the stereo signal, e.g. a compressed stereo signal based on a coding/decoding scheme. In an implementation, this coding/decoding scheme uses a parametric representation of the stereo signal known as “Binaural Cue Coding” (BCC), which is presented in detail in “Parametric Coding of Spatial Audio,” C. Faller, Ph.D. Thesis No. 3062, Ecole Polytechnique Fédérale de Lausanne (EPFL), 2004. In this document, a parametric spatial audio coding scheme is described. This scheme is based on the extraction and the coding of inter-channel cues that are relevant for the perception of the auditory spatial image and the coding of a mono or stereo representation of the multichannel audio signal. The inter-channel cues are Inter-channel Level Difference (ILD), also known as Channel Level Difference (CLD), Inter-channel Time Difference (ITD), which can also be represented with Inter-channel Phase Difference (IPD), and Inter-channel Coherence/Cross Correlation (ICC). The inter-channel cues are generally extracted based on a sub-band representation of the input signal (e.g. using a conventional Short-Time Fourier Transform (STFT) or a Complex-modulated Quadrature Mirror Filter (QMF)). The sub-bands are grouped in parameter bands following a non-uniform frequency resolution which mimics the frequency resolution of the human auditory system. The mono or stereo downmix signal is obtained by matrixing the original multichannel audio signal. This downmix signal is then encoded using conventional state-of-the-art mono or stereo audio coders. In this embodiment, the mono downmix signal is received by the mobile device 800 together with the stereo parameters (CLD, ITD and ICC).
A mono-downmix signal may be a combination of the left and right channel signals. A mono-downmix signal may comprise inter-channel cues for both the left and right channel per sub-band. A mono-downmix signal may also be only the left or right channel signal; in that case, the inter-channel cues may be used only for the other channel per sub-band.
The steering direction rendering is then embedded in the parametric stereo synthesis. Thus, the computation of the difference signal is performed in the frequency domain on a sub-band basis, based on the sub-band stereo synthesis. In an implementation, the delay is easily introduced by using a sub-band phase shift and the filter is advantageously applied using different gains for each sub-band.
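A sub-band phase-shift realization of the delay can be sketched as follows (illustrative only; a uniform complex filterbank with assumed band-centre frequencies is used here, whereas the actual STFT/QMF layout may differ):

```python
import numpy as np

def delay_as_phase_shift(subband_frames, tau, fs, n_bands):
    """Apply a broadband delay tau (seconds) to complex sub-band samples
    by rotating each band k by exp(-j*2*pi*f_k*tau). The uniform band
    centre frequencies f_k assumed here are an illustration; the actual
    STFT/QMF layout may differ."""
    k = np.arange(n_bands)
    f_centres = (k + 0.5) * fs / (2.0 * n_bands)
    phase = np.exp(-1j * 2.0 * np.pi * f_centres * tau)
    return subband_frames * phase   # broadcasts over (frames, bands)
```

Since the rotation has unit magnitude, the per-band gains of the filter can simply be multiplied into the same factor.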
In an implementation, the steering direction control parameter 812 is obtained from an external tracking system or is built into the device. In an implementation, the angle α is a predetermined parameter stored in memory to have a fixed steering direction. In an alternative implementation, the angle α is dynamically adjustable and obtained from a head tracking system or directly controlled by the user with a graphical interface.
In an implementation, the mobile device 800 is a docking station. In an implementation, the loudspeakers are external to the mobile device 800. In an implementation, the mobile device 800 is a smartphone, a tablet or a laptop with built-in loudspeakers.
The loudspeaker system 900 comprises a left path loudspeaker 901, a right path loudspeaker 903, a right path time delay 905, a right path signal inverter 909, a right path summer 913, a left path summer 915, a difference path summer 925, an optional difference path time delay 923, a difference path multiplier 921, a left path downmix multiplier 955 and a right path downmix multiplier 953. The loudspeakers 901, 903 are conventional loudspeakers, i.e. no special hardware for implementing dipole loudspeakers is required.
As illustrated in
In an alternative implementation not shown in
In a further implementation, the implementation shown in
From the foregoing, it will be apparent to those skilled in the art that a variety of methods, systems, computer programs on recording media, and the like, are provided.
The present disclosure also supports a computer program product including computer executable code or computer executable instructions that, when executed, causes at least one computer to execute the performing and computing steps described herein.
Many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the above teachings. Of course, those skilled in the art readily recognize that there are numerous applications of the invention beyond those described herein. While the present invention has been described with reference to one or more particular embodiments, those skilled in the art recognize that many changes may be made thereto without departing from the scope of the present invention. It is therefore to be understood that within the scope of the appended claims and their equivalents, the invention may be practiced otherwise than as specifically described herein.
This application is a continuation of International Application No. PCT/EP2013/052327, filed on Feb. 6, 2013, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5208493 | Lendaro et al. | May 1993 | A |
5995631 | Kamada | Nov 1999 | A |
6507657 | Kamada et al. | Jan 2003 | B1 |
20050152554 | Wu | Jul 2005 | A1 |
20070025555 | Gonai et al. | Feb 2007 | A1 |
Number | Date | Country |
---|---|---|
09168200 | Jun 1997 | JP |
WO 2007004147 | Jan 2007 | WO |
WO 2013040738 | Mar 2013 | WO |
Entry |
---|
Harry F. Olson, “Gradient Microphones”, The Journal of the Acoustical Society of America, vol. 17, No. 3, Jan. 1946, p. 192-198. |
Frank Baumgarte, et al., “Binaural Cue Coding—Part I: Psychoacoustic Fundamentals and Design Principles”, IEEE Transactions on Speech and Audio Processing, vol. 11, No. 6, Nov. 2003, p. 509-519. |
Christof Faller, et al., “Binaural Cue Coding—Part II: Schemes and Applications”, IEEE Transactions on Speech and Audio Processing, vol. 11, No. 6, Nov. 2003, p. 520-531. |
Christof Faller, “Parametric Coding of Spatial Audio”, Thèse No. 3062, École Polytechnique Fédérale de Lausanne (EPFL), 2004, 180 pages. |
Fr. Heegaard, “The Reproduction of Sound in Auditory Perspective and a Compatible System of Stereophony”, J. Audio Eng. Soc., vol. 40, No. 10, Oct. 1992, p. 802-808. |
Number | Date | Country | |
---|---|---|---|
20160037260 A1 | Feb 2016 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2013/052327 | Feb 2013 | US |
Child | 14820143 | US |