The present invention relates to a method of, and apparatus for, planar audio tracking.
Techniques for tracking a source are known from the field of navigation, radar and sonar. One of the simplest source-tracking techniques employs a crossed dipole array—two dipole sensors centered at the same point and oriented at right angles.
Crossed dipoles have been used for radio direction finding since the early days of radio. S. W. Davies, "Bearing Accuracies for Arctan Processing of Crossed Dipole Arrays", Proc. OCEANS 1987, vol. 19, September 1987, pp. 351-356, states that for a crossed dipole array with one dipole oriented towards the north, signals proportional to the sine and cosine of the source bearing are obtained, and an estimate of the source bearing, {circumflex over (φ)}, can be obtained from the arctangent of the ratio of these components. If an additional omnidirectional sensor is located at the centre of the crossed dipole array, its output may be used for synchronous detection of the "sense" or sign of the sine and cosine outputs; this allows a four-quadrant inverse tangent function to be used to obtain unambiguous bearing estimates. The article studies the properties of a bearing estimator based on time-averaged products of the omnidirectional sensor output, uo(t), with the north-south oriented ("cosine") dipole output, uc(t), and the east-west oriented ("sine") dipole output, us(t).
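By way of illustration, the bearing estimator described by Davies can be sketched as follows; the signal model, the sample length and the use of NumPy are assumptions of this sketch, not details taken from the article.

```python
import numpy as np

# Idealized noiseless signal model for a source at bearing phi:
#   u_o(t) = s(t)                 (omnidirectional sensor)
#   u_c(t) = s(t) * cos(phi)      (north-south "cosine" dipole)
#   u_s(t) = s(t) * sin(phi)      (east-west "sine" dipole)
rng = np.random.default_rng(0)
phi = np.deg2rad(200.0)           # true bearing, deliberately in the third quadrant
s = rng.standard_normal(4096)     # broadband source signal
u_o, u_c, u_s = s, s * np.cos(phi), s * np.sin(phi)

# Time-averaged products of the omni output with each dipole output give
# quantities proportional to cos(phi) and sin(phi); the omni reference
# supplies the "sense" (sign), so a four-quadrant arctangent is unambiguous.
c_est = np.mean(u_o * u_c)
s_est = np.mean(u_o * u_s)
phi_hat = np.arctan2(s_est, c_est)
print(np.rad2deg(phi_hat) % 360.0)   # close to 200 degrees
```

A plain arctangent of the ratio s_est/c_est would fold the result into a half-circle; the four-quadrant form recovers the full bearing because both signs are available.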
U.S. Pat. No. 6,774,934 relates to camera positioning means used to point a camera towards a speaking person in a video conferencing system. In order to find the correct direction for the camera, the system must determine the position from which the sound is transmitted. This is done by using at least two microphones receiving the speech signal and measuring the transmission delay between the signals received by the microphones. The delay is determined by first determining the impulse responses (h1) and (h2) and subsequently calculating a cross-correlation function between these impulse responses. The delay value is determined from the main peak in the cross-correlation function. The described system is satisfactory when the microphones are spaced sufficiently far apart that a delay value can be determined.
A drawback of currently known audio tracking techniques is that dominant reflections of the audio source (via walls and tables, for example) negatively influence the result of the audio tracking.
An object of the present invention is to be able to derive a bearing from closely spaced microphones.
According to one aspect of the present invention there is provided a method of planar audio tracking using at least three microphones from which virtual first and second cross-dipole microphones and a virtual monopole microphone are constructed, the method comprising directionally pre-processing signals from the first and second cross-dipole microphones and the monopole microphone, filtering the results of the directional pre-processing of the signals, identifying functions representative of the impulse responses from the desired audio source(s) to the first and second cross-dipole microphones and the monopole microphone, respectively, cross-correlating the functions of the first cross-dipole and monopole microphones and the functions of the second cross-dipole and monopole microphones to produce respective estimates representative of the lag of the most dominant audio source, and using the estimates representative of the lag to determine an angle estimate of the most dominant source.
According to another aspect of the present invention there is provided a planar audio tracking apparatus comprising at least three microphones from which virtual first and second cross-dipole microphones and a virtual monopole microphone are constructed, means for directionally pre-processing signals from the first and second cross-dipole microphones and the monopole microphone, means for filtering the results of the directional pre-processing of the signals and identifying functions representative of the impulse responses from the desired audio source(s) to the first and second cross-dipole microphones and the monopole microphone, respectively, cross-correlating means for cross-correlating the functions of the first cross-dipole and monopole microphones and the functions of the second cross-dipole and monopole microphones to produce respective estimates representative of the lag of the most dominant audio source, and means for using the estimates representative of the lag to determine an angle estimate of the most dominant source.
The present invention will now be described, by way of example, with reference to the accompanying drawings, wherein:
In the drawings the same reference numerals have been used to represent corresponding features.
The normalized (frequency-independent) dipole response is computed as:
where:
and where
with φ the angle of incidence of the sound, ψ the angle of the main lobe of the superdirectional response, Ei the signal picked up by each of the microphones Mi, S the sensitivity of each of the microphones and Ω given by:
with ω the frequency (in radians), d the distance between the microphones and c the speed of sound.
The approximations for Ed(π/4,φ) and Ed(−π/4,φ) are valid for small values of Ω where the distance d is smaller than the wavelength λ of the sound, where:
λ=2πc/ω.
Furthermore, Iideal is an ideal integrator, defined as:
and T is an extra compensation term, defined as:
The integrator is required to remove the jω-dependency in the dipole response.
The normalized monopole response is given by:
The overline indicates that the response has been normalized with a maximum response S (equal to the response of a single sensor).
The technique for audio tracking uses the signals of the two orthogonal dipoles (or crossed dipoles) and the monopole, which are cross-correlated as:
Xc=xcorr[ed(0),em], (11)
and:
Xs=xcorr[ed(π/2),em], (12)
which approximate the cosine and the sine values, respectively, of the audio source angle φ.
An estimate of the angle of the audio source is now computed via the arctangent operation:
The ambiguity of the arctangent can be resolved since the signs of the cosine and the sine estimates are available from equations (11) and (12).
It is noted that for bad signal-to-noise ratios, the estimate of the audio-source angle will be degraded. In the extreme case of purely (2D or 3D) diffuse (that is, isotropic) noise, it can be shown that the cross-dipoles and the monopole are mutually uncorrelated and the values of Xc and Xs are uniformly distributed random variables. As a result, the estimate {circumflex over (φ)} becomes unreliable.
In order to prevent dominant reflections of the audio source from negatively influencing the result of the audio tracking, the crossed-dipole signals and the monopole signal are first applied to a filtered sum beamforming stage (FSB).
Instead of computing the cross-correlation between the signals of the crossed-dipoles and the monopole as in equations (11) and (12), pairs of functions identified by the FSB are cross-correlated:
ψc=xcorr[hd(0),hm], (14)
and
ψs=xcorr[hd(π/2),hm]. (15)
The lag l in ψc(l) and ψs(l) which is representative of the most dominant audio source (other lags are representative of reflections) is found by:
These cross-correlations, evaluated at lag l, approximate the cosine and the sine of the most dominant audio source coming from azimuth angle φ:
ψc(l)≈cos {circumflex over (φ)}, (17)
and
ψs(l)≈sin {circumflex over (φ)} (18)
The angle estimate {circumflex over (φ)} is now computed as:
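As a sketch of the processing just described, the following assumes synthetic impulse responses (a direct path plus one weaker reflection), takes the monopole response as a unit impulse for simplicity, and uses an argmax-of-energy rule to select the dominant lag; these modelling choices are assumptions of the sketch, not details from the specification.

```python
import numpy as np

# Synthetic stand-ins for the identified functions: the monopole response
# h_m is a unit impulse (a simplification), while the dipole responses
# carry the direct path from azimuth 120 degrees at lag 0 plus a weaker
# wall reflection from -40 degrees arriving 7 samples later.
phi_true, phi_refl = np.deg2rad(120.0), np.deg2rad(-40.0)
h_m = np.zeros(16); h_m[0] = 1.0
h_d0, h_d90 = np.zeros(16), np.zeros(16)
h_d0[0], h_d90[0] = np.cos(phi_true), np.sin(phi_true)
h_d0[7], h_d90[7] = 0.4 * np.cos(phi_refl), 0.4 * np.sin(phi_refl)

psi_c = np.correlate(h_d0, h_m, mode="full")   # cf. equation (14)
psi_s = np.correlate(h_d90, h_m, mode="full")  # cf. equation (15)
lag = np.argmax(psi_c ** 2 + psi_s ** 2)       # dominant-source lag (assumed criterion)
phi_hat = np.arctan2(psi_s[lag], psi_c[lag])   # four-quadrant arctangent
print(np.rad2deg(phi_hat))                     # close to 120; the reflection is ignored
```

Because the reflection falls at a different lag, it does not contaminate the cosine/sine pair read off at the dominant lag, which is the point of cross-correlating identified impulse responses instead of the raw microphone signals.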
It is noted that an efficient cross-correlation of two vectors can be implemented via the Fast Fourier Transform.
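A minimal sketch of this remark, assuming NumPy: zero-padded FFTs compute the same full cross-correlation as the direct O(N²) method in O(N log N).

```python
import numpy as np

def xcorr_fft(x, y):
    """Full cross-correlation of x and y via the FFT (assumes len(y) > 1)."""
    n = len(x) + len(y) - 1                   # number of distinct lags
    nfft = 1 << (n - 1).bit_length()          # next power of two, >= n, avoids wrap-around
    X = np.fft.rfft(x, nfft)
    Y = np.fft.rfft(y, nfft)
    c = np.fft.irfft(X * np.conj(Y), nfft)
    # reorder the circular result to lags -(len(y)-1) .. len(x)-1,
    # matching np.correlate(x, y, mode="full")
    return np.concatenate((c[-(len(y) - 1):], c[:len(x)]))

rng = np.random.default_rng(1)
x, y = rng.standard_normal(256), rng.standard_normal(256)
direct = np.correlate(x, y, mode="full")
print(np.allclose(xcorr_fft(x, y), direct))   # True
```

The zero-padding to at least 2N-1 points is what turns the FFT's circular correlation into the desired linear one.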
Referring to the drawings,
A monopole signal is produced by connecting the microphones M1 to M4 to a summing stage 28, the output from which is applied to an attenuating amplifier 30 having a gain of ¼.
A filtered sum beamforming stage (FSB) 32 has inputs 34, 36, 38 for the dipole 90 degree signal, the dipole 0 degree signal and the monopole signal, respectively.
The angle estimate {circumflex over (φ)} is derived in accordance with the method illustrated in the flow chart.
VP=aVin·W1
VQ=bVin·W2
VR=cVin·W3
These signals are applied to a summing stage 82 which produces a combined signal:
Vsum=VP+VQ+VR=aVin·W1+bVin·W2+cVin·W3
Vsum appears on the output 46 and also is applied to three further adjustable filters 90, 92, and 94 which derive filtered combined signals using transfer functions W1*, W2* and W3* which are the complex conjugates of W1, W2 and W3, respectively.
The first filtered combined signal is equal to:
VFC1=(aVin·W1+bVin·W2+cVin·W3)·W1*
The second filtered combined signal is equal to:
VFC2=(aVin·W1+bVin·W2+cVin·W3)·W2*
The third filtered combined signal is equal to:
VFC3=(aVin·W1+bVin·W2+cVin·W3)·W3*
A first difference measure between the signal a·Vin and the first filtered combined signal is determined by a subtractor 90. The output signal of the subtractor 90 can be written as:
VDIFF1=Vin(a−(a·W1+b·W2+c·W3)·W1*)
A second difference measure between the signal b·Vin and the second filtered combined signal is determined by a subtractor 92. The output signal of the subtractor 92 can be written as:
VDIFF2=Vin(b−(a·W1+b·W2+c·W3)·W2*)
A third difference measure between the signal c·Vin and the third filtered combined signal is determined by a subtractor 94. The output signal of the subtractor 94 can be written as:
VDIFF3=Vin(c−(a·W1+b·W2+c·W3)·W3*)
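The three difference measures can be checked numerically by treating a, b, c, W1, W2 and W3 as complex per-frequency transfer factors; the specific values below are arbitrary assumptions made only for the check.

```python
import numpy as np

# Arbitrary complex path gains (a, b, c) and filter responses (W1, W2, W3)
# for a single frequency bin, plus an arbitrary input signal value Vin.
rng = np.random.default_rng(2)
a, b, c = rng.standard_normal(3) + 1j * rng.standard_normal(3)
W1, W2, W3 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
Vin = 1.0 + 0.5j

VP, VQ, VR = a * Vin * W1, b * Vin * W2, c * Vin * W3
Vsum = VP + VQ + VR                                  # combined signal
VFC1, VFC2, VFC3 = (Vsum * np.conj(W1), Vsum * np.conj(W2), Vsum * np.conj(W3))

# Subtracting each filtered combined signal from the corresponding input
# branch reproduces the factored expressions VDIFF1..VDIFF3 above.
VDIFF1 = a * Vin - VFC1
VDIFF2 = b * Vin - VFC2
VDIFF3 = c * Vin - VFC3
S = a * W1 + b * W2 + c * W3
print(np.allclose([VDIFF1, VDIFF2, VDIFF3],
                  [Vin * (a - S * np.conj(W1)),
                   Vin * (b - S * np.conj(W2)),
                   Vin * (c - S * np.conj(W3))]))    # True
```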
The arrangement adjusts the transfer functions W1, W2 and W3 such that the difference measures tend to zero.
In order to facilitate an understanding of the process only two of the difference equations will be considered.
a=(aW1+bW2+cW3)·W1* (A)
b=(a·W1+b·W2+c·W3)·W2* (B)
Eliminating the term (a·W1+b·W2+c·W3) from equations (A) and (B) by dividing (A) by (B) results in:
Conjugating the left-hand side and the right-hand side of (C) and solving for W1 gives:
Substituting (D) into (B) gives the following expression:
Rearranging (E) gives for |W2|2:
The expression for |W1|2 can be found in the same way:
From (F) and (G) it is clear that the value of |W1|2 increases when |a|2 increases (or |b|2 decreases) and that the value of |W2|2 increases when |b|2 increases (or |a|2 decreases). In this way the strongest input signal is emphasized. This is useful for enhancing a speech signal of a speaker over background noise and reverberant components of the speech signal without needing to know the frequency dependence of the paths from the speaker to the microphones, as was needed in prior art arrangements.
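The two-channel fixed point can be illustrated numerically. The closed forms |W1|² = |a|²/(|a|²+|b|²) and |W2|² = |b|²/(|a|²+|b|²), and the candidate solution W1 = a*/√(|a|²+|b|²), W2 = b*/√(|a|²+|b|²), are assumptions of this sketch (equations (F) and (G) are not reproduced in the text); the candidate is therefore verified against equations (A) and (B) directly.

```python
import numpy as np

# Candidate fixed point for the two-channel case (an assumption, checked
# below against equations (A) and (B) with the c-branch ignored):
#   W1 = conj(a) / sqrt(|a|^2 + |b|^2),  W2 = conj(b) / sqrt(|a|^2 + |b|^2)
a = 2.0 + 1.0j                      # strong input branch
b = 0.5 - 0.3j                      # weak input branch
norm = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
W1, W2 = np.conj(a) / norm, np.conj(b) / norm

S = a * W1 + b * W2                 # the common sum term, here real-valued
print(np.isclose(S * np.conj(W1), a))                     # equation (A) holds: True
print(np.isclose(S * np.conj(W2), b))                     # equation (B) holds: True
print(abs(W1) ** 2 > abs(W2) ** 2)                        # stronger branch pronounced: True
print(np.isclose(abs(W1) ** 2, abs(a) ** 2 / norm ** 2))  # |W1|^2 = |a|^2/(|a|^2+|b|^2): True
```

This matches the qualitative statement above: increasing |a|² raises |W1|² and lowers |W2|², so the strongest input dominates the sum.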
Using uni-directional cardioid microphones has the main benefit that the sensitivity to sensor noise and to sensor mismatches is greatly reduced in the construction of the first-order dipole responses.
The responses of the three cardioid microphones are given by Ec0, Ec2π/3 and Ec4π/3. Assuming that there is no uncorrelated sensor noise, the ith cardioid microphone response is ideally given by:
with:
where θ and φ are the standard spherical coordinate angles, that is, elevation and azimuth.
Using
and:
with r the radius of the circle we can write:
From the three cardioid microphones the following monopole and orthogonal dipoles can be constructed as:
For wavelengths larger than the size of the array, the responses of the monopole and the orthogonal dipoles are frequency invariant and ideally equal to:
Em=1 (25)
Ed0(θ,φ)=cos φ sin θ (26)
Edπ/2(θ,φ)=cos(φ−π/2)sin θ (27)
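The construction of equations (25)-(27) can be sketched numerically. The ideal cardioid response E_ci = ½(1 + cos(φ − ψi) sin θ), with ψi the orientation of the ith cardioid, and the combination weights 2/3 and 4/3 are assumptions consistent with that response, not expressions copied from the text.

```python
import numpy as np

# Three cardioids equally spaced on the circle, oriented at 0, 2*pi/3, 4*pi/3.
psi = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])

def cardioids(theta, phi):
    # Assumed ideal cardioid response (small array, no sensor noise).
    return 0.5 * (1.0 + np.cos(phi - psi) * np.sin(theta))

theta, phi = 1.1, 2.3                          # arbitrary test direction
Ec = cardioids(theta, phi)
Em = (2.0 / 3.0) * Ec.sum()                    # monopole, cf. eq. (25)
Ed0 = (4.0 / 3.0) * (Ec * np.cos(psi)).sum()   # dipole, cf. eq. (26)
Ed90 = (4.0 / 3.0) * (Ec * np.sin(psi)).sum()  # dipole, cf. eq. (27)
print(np.allclose([Em, Ed0, Ed90],
                  [1.0,
                   np.cos(phi) * np.sin(theta),
                   np.cos(phi - np.pi / 2) * np.sin(theta)]))  # True
```

The sums over cos ψi and sin ψi vanish for the equally spaced orientations, which is why the monopole term is direction-independent and the two dipole terms come out orthogonal.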
The directivity patterns of these monopole and orthogonal dipoles are shown in
The monopole response is referenced Em and the orthogonal dipole responses are referenced Ed0(θ,φ) and Edπ/2(θ,φ).
In a non-illustrated embodiment the three uni-directional cardioid microphones are arranged unequally spaced on the periphery of a circle, for example with the apices forming a right-angled triangle.
In the present specification and claims the word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. Further, the word “comprising” does not exclude the presence of other elements or steps than those listed.
The use of any reference signs placed between parentheses in the claims shall not be construed as limiting the scope of the claims.
From reading the present disclosure, other modifications will be apparent to persons skilled in the art. Such modifications may involve other features which are already known in the design, manufacture and use of planar audio tracking systems and components therefor and which may be used instead of or in addition to features already described herein.
Number | Date | Country | Kind |
---|---|---|---|
08106035 | Dec 2008 | EP | regional |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/IB2009/055879 | 12/21/2009 | WO | 00 | 6/23/2011 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2010/073212 | 7/1/2010 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
4119942 | Merklinger | Oct 1978 | A |
5664021 | Chu et al. | Sep 1997 | A |
6041127 | Elko | Mar 2000 | A |
6774934 | Belt et al. | Aug 2004 | B1 |
8054990 | Gratke et al. | Nov 2011 | B2 |
20050232440 | Roovers | Oct 2005 | A1 |
Number | Date | Country |
---|---|---|
1395599 | May 1975 | GB |
9746048 | Dec 1997 | WO
Entry |
---|
Davies, S. “Bearing Accuracies for Arctan Processing of Crossed Dipole Arrays”, Proc. OCEANS 1987, pp. 351-356 (1987). |
Cox, H., et al. "Adaptive Cardioid Processing", Conf. Record of the 26th Asilomar Conf. on Signals, Systems and Computers, vol. 2, pp. 1058-1061 (Oct. 1992). |
Chu, P. L., “Superdirective Microphone Array for a Set-top Videoconferencing System”, IEEE Int'l. Conf. on Acoustics, Speech and Signal Processing, vol. 1, pp. 235-238 (1997). |
Abel, J., et al. “Methods for Room Acoustic Analysis Using a Monopole-Dipole Microphone Array”, Proc. InterNoise98, paper 123, 6 pages (1998). |
Maranda, B., “The Statistical Accuracy of an Arctangent Bearing Estimator”, Proc. OCEANS 2003, vol. 4, pp. 2127-2132 (2003). |
Derkx, et al. “Theoretical Analysis of a First-Order Azimuth-Steerable Superdirective Microphone Array”, IEEE Trans. on Audio, Speech, and Language Processing, vol. 17, No. 1, pp. 150-162 (Dec. 2008). |
International Search Report and Written Opinion for International Patent Application No. PCT/IB2009/055879 (Jul. 30, 2010). |
Number | Date | Country | |
---|---|---|---|
20110264249 A1 | Oct 2011 | US |