The present disclosure relates to audio recording and encoding, in particular for virtual reality applications, and especially for virtual reality provided by a small portable device.
Virtual reality (VR) sound recording typically relies on the Ambisonic B-format, which conventionally requires expensive directive microphones. Professional audio microphones exist that either record A-format, to be encoded into Ambisonic B-format, or record Ambisonic B-format directly, for instance Soundfield microphones. More generally speaking, it is technically difficult to arrange omnidirectional microphones on a mobile device to capture sound for VR.
One way to generate Ambisonic B-format signals, given a distribution of omnidirectional microphones, is based on differential microphone arrays, i.e. applying delay-and-add beamforming in order to derive first order virtual microphone (e.g. cardioid) signals as A-format.
The first limitation of this technique results from its spatial aliasing which, by design, reduces the usable bandwidth to frequencies f in the range:

$f < \frac{c}{2\,d_{mic}}$, (1)

where c stands for the speed of sound and $d_{mic}$ for the distance between a pair of two omnidirectional microphones. A second weakness results, for higher order Ambisonic B-format, from the microphone requirement: the required number of microphones and their required positions are no longer suitable for mobile devices.
Another way of generating ambisonic B-format signals from omnidirectional microphones corresponds to sampling the sound field at the recording point in space using a sufficiently dense distribution of microphones. These sampled sound pressure signals are then converted to spherical harmonics, and can be linearly combined to eventually generate B-format signals.
The main limitation of such approaches is the required number of microphones. For consumer applications, with only a few microphones (commonly up to six), linear processing is too limited, leading to signal-to-noise ratio (SNR) issues at low frequencies and aliasing at high frequencies.
Directional Audio Coding (DirAC) is a further method for spatial sound representation, but it does not generate B-format signals. Instead, it reads first order B-format signals, generates a number of related audio parameters (direction of arrival, diffuseness), and adds these to an omnidirectional audio channel. Later, the decoder takes the above information and converts it to a multi-channel audio signal, using amplitude panning for direct sound and de-correlation for diffuse sound.
DirAC is thus a different technique, which takes B-format as input and renders it to its own audio format.
Therefore, the present inventors have recognized a need to provide an audio encoding device and method, which allow for generating ambisonic B-format sound signals, while requiring only a low number of microphones, and achieving a high output sound quality.
Embodiments of the present disclosure provide such audio encoding devices and methods that allow for generating ambisonic B-format sound signals, while requiring only a low number of microphones, and achieve a high output sound quality.
According to a first aspect of the present disclosure, an audio encoding device, for encoding N audio signals, from N microphones, where N≥3, is provided. The device comprises a delay estimator, configured to estimate angles of incidence of direct sound by estimating for each pair of the N audio signals an angle of incidence of direct sound, and a beam deriver, configured to derive A-format direct sound signals from the estimated angles of incidence by deriving from each estimated angle of incidence an A-format direct sound signal, each A-format direct sound signal being a first-order virtual microphone signal, especially a cardioid signal. This allows for determining the A-format direct sound signals with a low hardware effort.
According to an implementation form of the first aspect, the device additionally comprises an encoder, configured to encode the A-format direct sound signals in first-order ambisonic B-format direct sound signals by applying a transformation matrix to the A-format direct sound signals. This allows for generating ambisonic B-format signals using only a very low number of microphones, but still achieving a high output sound quality.
According to an implementation form of the first aspect, N=3. The audio encoding device moreover comprises a short time Fourier transformer, configured to perform a short time Fourier transformation on each of the N audio signals $x_1$, $x_2$, $x_3$, resulting in N short time Fourier transformed audio signals $X_1[k,i]$, $X_2[k,i]$, $X_3[k,i]$. The delay estimator is then configured to determine cross spectra of each pair of short time Fourier transformed audio signals according to:
$X_{12}[k,i]=\alpha_X X_1[k,i]\,X_2^*[k,i]+(1-\alpha_X)\,X_{12}[k-1,i]$,
$X_{13}[k,i]=\alpha_X X_1[k,i]\,X_3^*[k,i]+(1-\alpha_X)\,X_{13}[k-1,i]$,
$X_{23}[k,i]=\alpha_X X_2[k,i]\,X_3^*[k,i]+(1-\alpha_X)\,X_{23}[k-1,i]$,
determine an angle of the complex cross spectrum of each pair of short time Fourier transformed audio signals according to:

$\tilde{\psi}_{12}[k,i]=\arctan\left(\frac{\Im\{X_{12}[k,i]\}}{\Re\{X_{12}[k,i]\}}\right)$,

and analogously $\tilde{\psi}_{13}[k,i]$ and $\tilde{\psi}_{23}[k,i]$,

perform a phase unwrapping of $\tilde{\psi}_{12}$, $\tilde{\psi}_{13}$, $\tilde{\psi}_{23}$, resulting in $\Psi_{12}$, $\Psi_{13}$, $\Psi_{23}$, and estimate the delay in number of samples according to:

$\delta_{12}[k,i]=\frac{N_{STFT}/2+1}{i\pi}\,\psi_{12}[k,i]$,
$\delta_{13}[k,i]=\frac{N_{STFT}/2+1}{i\pi}\,\psi_{13}[k,i]$,
$\delta_{23}[k,i]=\frac{N_{STFT}/2+1}{i\pi}\,\psi_{23}[k,i]$, if $i\le i_{alias}$,

or

$\delta_{12}[k,i]=\frac{N_{STFT}/2+1}{i\pi}\,\Psi_{12}[k,i]$,
$\delta_{13}[k,i]=\frac{N_{STFT}/2+1}{i\pi}\,\Psi_{13}[k,i]$,
$\delta_{23}[k,i]=\frac{N_{STFT}/2+1}{i\pi}\,\Psi_{23}[k,i]$, if $i>i_{alias}$,

estimate the delay in seconds according to:

$\tau_{12}[k,i]=\delta_{12}[k,i]/f_s$, $\tau_{13}[k,i]=\delta_{13}[k,i]/f_s$, $\tau_{23}[k,i]=\delta_{23}[k,i]/f_s$,

and estimate the angles of incidence according to:

$\hat{\theta}_{12}[k,i]=\arccos\left(\frac{c\,\tau_{12}[k,i]}{d_{mic}}\right)$,

and analogously $\hat{\theta}_{13}[k,i]$ and $\hat{\theta}_{23}[k,i]$,
wherein
$x_1$ is a first audio signal of the N audio signals,
$x_2$ is a second audio signal of the N audio signals,
$x_3$ is a third audio signal of the N audio signals,
$X_1$ is a first short time Fourier transformed audio signal,
$X_2$ is a second short time Fourier transformed audio signal,
$X_3$ is a third short time Fourier transformed audio signal,
k is a frame of the short time Fourier transformed audio signal,
i is a frequency bin of the short time Fourier transformed audio signal,
$X_{12}$ is a cross spectrum of the pair $X_1$ and $X_2$,
$X_{13}$ is a cross spectrum of the pair $X_1$ and $X_3$,
$X_{23}$ is a cross spectrum of the pair $X_2$ and $X_3$,
$\alpha_X$ is a forgetting factor,
$X^*$ is the complex conjugate of X,
j is the imaginary unit,
$\tilde{\psi}_{12}$ is an angle of the complex cross spectrum $X_{12}$,
$\tilde{\psi}_{13}$ is an angle of the complex cross spectrum $X_{13}$,
$\tilde{\psi}_{23}$ is an angle of the complex cross spectrum $X_{23}$,
$i_{alias}$ is the frequency bin corresponding to the aliasing frequency,
$f_s$ is a sampling frequency,
$d_{mic}$ is the distance between the microphones, and
c is the speed of sound. This allows for a simple and efficient determining of the delays.
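By way of illustration, the delay estimation of this implementation form can be sketched in Python as follows. This is a minimal sketch, assuming numpy and scipy are available; the helper name, the default parameter values, and the clipping of the unwrapped phase to its physically possible bounds are assumptions made for the sketch, not details taken from the disclosure:

```python
import numpy as np
from scipy.signal import stft

def estimate_pair_angle(x1, x2, fs, d_mic, n_stft=1024, alpha_x=0.1, c=343.0):
    """Estimate per-bin angles of incidence for one microphone pair."""
    _, _, X1 = stft(x1, fs, nperseg=n_stft)   # shape: (bins, frames)
    _, _, X2 = stft(x2, fs, nperseg=n_stft)
    n_bins, n_frames = X1.shape
    i = np.arange(1, n_bins)                  # skip DC to avoid division by zero
    i_alias = int(c / (2 * d_mic) / fs * n_stft)   # bin of the aliasing frequency (1)

    X12 = np.zeros_like(X1)
    theta = np.zeros((n_bins, n_frames))
    for k in range(1, n_frames):
        # recursively smoothed cross spectrum, as in the equations above
        X12[:, k] = alpha_x * X1[:, k] * np.conj(X2[:, k]) + (1 - alpha_x) * X12[:, k - 1]
        psi = np.angle(X12[i, k])             # measured (wrapped) phase
        # above the aliasing bin: unwrap along frequency and clip to the
        # physically possible phase, since |delay| <= d_mic / c
        psi_max = d_mic / c * fs * (i * np.pi) / (n_stft / 2 + 1)
        psi_hat = np.where(i <= i_alias, psi,
                           np.clip(np.unwrap(psi), -psi_max, psi_max))
        delta = (n_stft / 2 + 1) / (i * np.pi) * psi_hat   # delay in samples
        tau = delta / fs                                   # delay in seconds
        theta[i, k] = np.arccos(np.clip(c * tau / d_mic, -1.0, 1.0))
    return theta
```

Running this function for each of the three microphone pairs yields the per-pair angle estimates used by the beam deriver.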
According to a further implementation form of the first aspect, the beam deriver is configured to determine cardioid directional responses according to:
and derive the A-format direct sound signals according to:
$A_{12}[k,i]=D_{12}[k,i]\,X_1[k,i]$,
$A_{13}[k,i]=D_{13}[k,i]\,X_1[k,i]$,
$A_{23}[k,i]=D_{23}[k,i]\,X_1[k,i]$,
wherein
D is a cardioid directional response, and
A is an A-format direct sound signal. This allows for a simple and efficient determining of the beam signals.
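A corresponding sketch of the beam deriver is given below. Since the exact cardioid expression of the disclosure is not reproduced here, a generic first-order cardioid gain (1 + cos(θ − θ_look))/2 with an assumed look direction is used for illustration:

```python
import numpy as np

def derive_a_format(theta12, theta13, theta23, X1, theta_look=np.pi / 2):
    """Derive the three A-format direct sound spectra from per-bin angle maps."""
    def cardioid_gain(theta):
        # assumed first-order cardioid directional response
        return 0.5 * (1.0 + np.cos(theta - theta_look))

    A12 = cardioid_gain(theta12) * X1   # A = D * X1, as in the equations above
    A13 = cardioid_gain(theta13) * X1
    A23 = cardioid_gain(theta23) * X1
    return A12, A13, A23
```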
According to a further implementation form of the first aspect, the encoder is configured to encode the A-format direct sound signals to the first-order ambisonic B-format direct sound signals according to:

$\begin{pmatrix} R_W[k,i] \\ R_X[k,i] \\ R_Y[k,i] \end{pmatrix} = \Gamma^{-1} \begin{pmatrix} A_{12}[k,i] \\ A_{13}[k,i] \\ A_{23}[k,i] \end{pmatrix}$,

wherein
$R_W$ is a first, zero-order ambisonic B-format direct sound signal,
$R_X$ is a first, first-order ambisonic B-format direct sound signal,
$R_Y$ is a second, first-order ambisonic B-format direct sound signal, and
$\Gamma^{-1}$ is the transformation matrix. This allows for a simple and efficient determining of the first-order ambisonic B-format direct sound signals.
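The encoding step can be sketched as follows. The matrix Γ is not given explicitly in this summary; the rows below are assumed from the SN3D first-order responses W = 1, X = cos θ, Y = sin θ, under which a cardioid pointing at θ_n reads 0.5 W + 0.5 cos(θ_n) X + 0.5 sin(θ_n) Y:

```python
import numpy as np

def a_to_b(A, pair_directions):
    """A: stacked A-format spectra, shape (3, bins, frames);
    pair_directions: the three cardioid look directions in radians."""
    # assumed cardioid rows in the (W, X, Y) basis; Gamma^{-1} then maps
    # the three cardioid signals back to first-order B-format
    Gamma = np.array([[0.5, 0.5 * np.cos(t), 0.5 * np.sin(t)]
                      for t in pair_directions])
    Gamma_inv = np.linalg.inv(Gamma)
    R = np.tensordot(Gamma_inv, A, axes=(1, 0))   # channel-wise linear combination
    return R[0], R[1], R[2]                       # R_W, R_X, R_Y
```

For the inversion to be well conditioned, the three look directions must be distinct, e.g. roughly evenly spread over the horizontal plane.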
According to a further implementation form of the first aspect, the device comprises a direction of arrival estimator, configured to estimate a direction of arrival from the first-order ambisonic B-format direct sound signals, and a higher order ambisonic encoder, configured to encode higher order ambisonic B-format direct sound signals, using the first-order ambisonic B-format direct sound signals and the estimated direction of arrival, wherein higher order ambisonic B-format direct sound signals have an order higher than one. Thereby, an efficient encoding of the ambisonic B-format direct sound signal is achieved.
According to a further implementation form of the first aspect, the direction of arrival estimator is configured to estimate the direction of arrival according to:

$\theta_{XY}[k,i]=\arctan\left(\frac{R_Y[k,i]}{R_X[k,i]}\right)$,

wherein
$\theta_{XY}[k,i]$ is a direction of arrival of a direct sound of frame k and frequency bin i. This allows for a simple and efficient determining of the directions of arrival.
According to a further implementation form of the first aspect, the higher order ambisonic B-format direct sound signals comprise second order ambisonic B-format direct sound signals limited to two dimensions, wherein the higher order ambisonic encoder is configured to encode the second order ambisonic B-format direct sound signals according to:
wherein
$R_R$ is a first, second-order ambisonic B-format direct sound signal,
$R_S$ is a second, second-order ambisonic B-format direct sound signal,
$R_T$ is a third, second-order ambisonic B-format direct sound signal,
$R_U$ is a fourth, second-order ambisonic B-format direct sound signal,
$R_V$ is a fifth, second-order ambisonic B-format direct sound signal,
≜ denotes “defined as”,
ϕ is an elevation angle, and
θ is an azimuth angle. This allows for an efficient encoding of the higher order ambisonic B-format signals.
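A sketch of the direction of arrival estimation and the two-dimensional second-order encoding is given below. The atan2 form of the DOA and the SN3D second-order responses at zero elevation (R = −1/2, U = (√3/2) cos 2θ, V = (√3/2) sin 2θ), applied here to the omnidirectional direct sound $R_W$, are assumptions consistent with the SN3D normalization referred to in the description, not formulas quoted from it:

```python
import numpy as np

def second_order_2d(R_W, R_X, R_Y):
    # per-bin DOA from the two first-order dipole channels; the real parts
    # are taken because the spectra are complex-valued
    theta_xy = np.arctan2(np.real(R_Y), np.real(R_X))
    # substitute the estimated DOA into the (assumed) SN3D second-order
    # directional responses, carried by the omnidirectional direct sound
    R_R = -0.5 * R_W
    R_U = (np.sqrt(3) / 2) * np.cos(2 * theta_xy) * R_W
    R_V = (np.sqrt(3) / 2) * np.sin(2 * theta_xy) * R_W
    return theta_xy, R_R, R_U, R_V
```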
According to a further implementation form of the first aspect, the audio encoding device comprises a microphone matcher, configured to perform a matching of the N frequency domain audio signals, resulting in N matched frequency domain audio signals. This allows for further quality increase of the output signals.
According to a further implementation form of the first aspect, the audio encoding device comprises a diffuse sound estimator, configured to estimate a diffuse sound power, and a de-correlation filter bank, configured to perform a de-correlation of the diffuse sound power by generating three orthogonal diffuse sound components from the estimated diffuse sound power. This allows for including diffuse sound in the output signals.
According to a further implementation form of the first aspect, the diffuse sound estimator is configured to estimate the diffuse sound power according to:
wherein
$P_{diff}$ is the diffuse sound power,
E{ } is an expectation value,
$\Phi_{diff}$ is a normalized cross-correlation coefficient between $N_1$ and $N_2$,
$N_1$ is diffuse sound in a first channel, and
$N_2$ is diffuse sound in a second channel. This allows for an especially efficient estimation of the diffuse sound power.
According to a further implementation form of the first aspect, the de-correlation filter bank is configured to perform the de-correlation of the diffuse sound power by generating three orthogonal diffuse sound components from the estimated diffuse sound power:

$\tilde{D}_W[k,i]=DFR_W\,w_u U_1\,P_{2D\text{-}diff}[k,i]$,
$\tilde{D}_X[k,i]=DFR_X\,w_u U_2\,P_{2D\text{-}diff}[k,i]$,
$\tilde{D}_Y[k,i]=DFR_Y\,w_u U_3\,P_{2D\text{-}diff}[k,i]$,
wherein
$\tilde{D}_W[k,i]$ is a first channel diffuse sound component,
$\tilde{D}_X[k,i]$ is a second channel diffuse sound component,
$\tilde{D}_Y[k,i]$ is a third channel diffuse sound component,
$DFR_W$ is a diffuse-field response of the first channel,
$DFR_X$ is a diffuse-field response of the second channel,
$DFR_Y$ is a diffuse-field response of the third channel,
$w_u$ is an exponential window,
$RT_{60}$ is a reverberation time,
$U_1$, $U_2$, $U_3$ form the de-correlation filter bank,
u is a Gaussian noise sequence,
$l_u$ is a given length of the Gaussian noise sequence, and
$P_{2D\text{-}diff}$ is the diffuse noise power. Thereby, an efficient de-correlation of the diffuse sound power is achieved.
According to a further implementation form of the first aspect, the audio encoding device comprises an adder, configured to add, channel-wise, the first-order ambisonic B-format direct sound signals and the higher order ambisonic B-format direct sound signals and/or the diffuse sound signals, resulting in complete ambisonic B-format signals. Thereby, a finished output signal is generated in a simple manner.
According to a second aspect of the present disclosure, an audio recording device comprising N microphones configured to record the N audio signals and an audio encoding device according to the first aspect or any of the implementation forms of the first aspect is provided. This allows for an audio recording and encoding in a single device.
According to a third aspect of the present disclosure, a method for encoding N audio signals, from N microphones, where N≥3 is provided. The method comprises estimating angles of incidence of direct sound by estimating for each pair of the N audio signals an angle of incidence of direct sound, and deriving A-format direct sound signals from the estimated angles of incidence by deriving from each estimated angle of incidence an A-format direct sound signal, each A-format direct sound signal being a first-order virtual microphone signal. This allows for determining the A-format direct sound signals with a low hardware effort.
According to an implementation form of the third aspect, the method additionally comprises encoding the A-format direct sound signals into first-order ambisonic B-format direct sound signals by applying at least one transformation matrix to the A-format direct sound signals. This allows for a simple and efficient determining of the ambisonic B-format direct sound signals.
The method may further comprise extracting higher order ambisonic B-format direct sound signals by estimating the direction of arrival from the first order ambisonic B-format direct sound signals.
According to a fourth aspect of the present disclosure, a computer program with a program code for performing the method according to the third aspect is provided.
A method is provided for parametric encoding of multiple omnidirectional microphone signals into any-order Ambisonic B-format.
The disclosed approach is based on at least three omnidirectional microphones on a mobile device. It first estimates the angles of incidence of direct sound by means of delay estimation between the different microphone pairs. Given the incidence angles of direct sound, it derives beam signals, called the direct sound A-format signals. The direct sound A-format signals are then encoded into first order B-format using a relevant transformation matrix.
For optional higher order B-format, a direction of arrival estimate is derived from the X and Y first order B-format signals. The diffuse, non-directive sound is optionally rendered as multiple orthogonal components, generated using de-correlation filters.
Generally, it has to be noted that all arrangements, devices, elements, units, means and so forth described in the present application could be implemented by software or hardware elements or any kind of combination thereof. Furthermore, the devices may be processors or may comprise processors, wherein the functions of the elements, units and means described in the present application may be implemented in one or more processors. All steps which are performed by the various entities described in the present application, as well as the functionality described to be performed by the various entities, are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of exemplary embodiments, a specific functionality or step to be performed by a general entity is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear to a skilled person that these methods and functionalities can be implemented by software or hardware elements, or any kind of combination thereof.
The present disclosure is in the following explained in detail in relation to embodiments of the present disclosure in reference to the enclosed drawings, in which:
First, we demonstrate the construction and general function of an embodiment of the first aspect and second aspect of the present disclosure along
In
The audio recording device 1 comprises a number of N≥3 microphones 2, which are connected to the audio encoding device 3. The audio encoding device 3 comprises a delay estimator 11, which is connected to the microphones 2. The audio encoding device 3 moreover comprises a beam deriver 12, which is connected to the delay estimator. Furthermore, the audio encoding device 3 comprises an encoder 13, which is connected to the beam deriver 12. Note that the encoder 13 is an optional feature with regard to the first aspect of the present disclosure.
In order to determine ambisonic B-format direct sound signals, the microphones 2 record N≥3 audio signals. In this embodiment, these audio signals are preprocessed by components integrated into the microphones 2. For example, a transformation into the frequency domain is performed. This will be shown in more detail along
In
The direction-of-arrival estimator 20 estimates a direction of arrival from the first-order ambisonic B-format direct sound signals and hands it to the higher order ambisonic encoder 21. The higher order ambisonic encoder 21 encodes higher order ambisonic B-format direct sound signals, using the first-order ambisonic B-format direct sound signals and the estimated direction of arrival as an input. The higher order ambisonic B-format direct sound signals have a higher order than 1.
Moreover, the audio encoding device 3 comprises a microphone matcher 30, which performs a matching of the N frequency domain audio signals output by the short-time Fourier transformers 10a, 10b, 10c, resulting in N matched frequency domain audio signals. Connected to the microphone matcher 30, the audio encoding device 3 moreover comprises a diffuse sound estimator 31, which is configured to estimate a diffuse sound power based upon the N matched frequency domain audio signals. Furthermore, the audio encoding device 3 comprises a de-correlation filter bank 32, which is connected to the diffuse sound estimator 31 and configured to perform a de-correlation of the diffuse sound power by generating three orthogonal diffuse sound components from the estimated diffuse sound power.
Finally, the audio encoding device 3 comprises an adder 40, which adds the first-order B-format direct sound signals provided by the encoder 13, the higher order ambisonic B-format signals provided by the higher order encoder 21 and the diffuse sound components provided by the de-correlation filter bank 32. The sum signal is handed to an inverse short-time Fourier transformer 41, which performs an inverse short-time Fourier transformation to achieve the final ambisonic B-format signals in the time domain.
In the following, along
In
Especially, the propagation of direct sound following a ray from a sound source to a pair of microphones in the free-field is considered in
In
The following algorithm aims at estimating the angle of incidence of direct sound based on the cross-correlation between both recorded microphone signals $x_1$ and $x_2$, and parametrically derives gain filters to generate beams focusing in specific directions.
A phase estimation between both recording microphones is carried out at each time-frequency tile. The time-frequency representations $X_1$ and $X_2$ of the microphone signals are obtained using an $N_{STFT}$-point short-time Fourier transform (STFT). The delay relation between the two microphones can be derived from the cross-spectrum:

$X_{12}[k,i]=\alpha_X X_1[k,i]\,X_2^*[k,i]+(1-\alpha_X)\,X_{12}[k-1,i]$, (2)

where * denotes the complex conjugate operator. And $\alpha_X$ is determined by:
where $T_X$ is a time constant in seconds and $f_s$ is the sampling frequency. The phase response is defined as the angle of the complex cross-spectrum $X_{12}$, derived from the ratio between its imaginary and real parts:

$\tilde{\psi}_{12}[k,i]=\arctan\left(\frac{\Im\{X_{12}[k,i]\}}{\Re\{X_{12}[k,i]\}}\right)$, (4)

where j is the imaginary unit, which satisfies $j^2=-1$.
Unfortunately, analogous to the Nyquist frequency in temporal sampling, a microphone array has a restriction on the minimum spatial sampling rate. Using two microphones, the smallest wavelength of interest is given by:

$\lambda_{alias}=2\,d_{mic}$, (5)

corresponding to a maximum frequency,

$f_{alias}=\frac{c}{2\,d_{mic}}$, (6)

up to which the phase estimation is unambiguous. Above this frequency, the measured phase is still obtained following (4), but with an uncertainty term related to an integer l modulo of 2π:

$\tilde{\psi}_{12}[k,i]=\psi_{12}[k,i]+2\pi\,l[i]$. (7)

Because the maximum travelling time between the two microphones of the array is given by $d_{mic}/c$, the bounds of the integer l are defined by:
A high frequency extension is provided based on equation (8) to constrain an unwrapping algorithm. The unwrapping aims at correcting the phase angle $\tilde{\psi}_{12}[k,i]$ by adding a multiple $l[k,i]$ of 2π when the absolute jump between two consecutive elements, $|\tilde{\psi}_{12}[k,i]-\tilde{\psi}_{12}[k,i-1]|$, is greater than or equal to the jump tolerance of π. The estimated unwrapped phase $\psi_{12}$ is obtained by limiting the multiples l to their physically possible values. Eventually, even if the phase is aliased at high frequency, its slope still follows the same principles as the delay estimation at low frequency. For the purpose of delay estimation, it is then sufficient to integrate the unwrapped phase $\psi_{12}$ over a number of frequency bins in order to derive its slope for later delay estimation:
where $N_{hf}$ stands for the frequency bandwidth over which the phase is integrated.
For each frequency bin i, dividing by the corresponding physical frequency, the delay $\delta_{12}[k,i]$, expressed in number of samples, is obtained from the previously derived phase:

$\delta_{12}[k,i]=\frac{N_{STFT}/2+1}{i\pi}\,\psi_{12}[k,i]$ if $i\le i_{alias}$,

otherwise:

$\delta_{12}[k,i]=\frac{N_{STFT}/2+1}{i\pi}\,\Psi_{12}[k,i]$, (10)

where $i_{alias}$ is the frequency bin corresponding to the aliasing frequency (1). The delay in seconds is:

$\tau_{12}[k,i]=\frac{\delta_{12}[k,i]}{f_s}$. (11)

The derived delay relates directly to the angle of incidence of sound emitted by a sound source, as illustrated in

$\hat{\theta}_{12}[k,i]=\arccos\left(\frac{c\,\tau_{12}[k,i]}{d_{mic}}\right)$, (12)

with $d_{mic}$ the distance between both microphones and c the speed of sound in the air.
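As an illustrative numerical check of (11) and (12), with values assumed purely for illustration: at $f_s = 48$ kHz, a delay of $\delta_{12} = 12$ samples corresponds to $\tau_{12} = 12/48000 = 0.25$ ms; with $d_{mic} = 15$ cm and $c = 343$ m/s this gives

$\hat{\theta}_{12} = \arccos\left(\frac{343 \cdot 0.00025}{0.15}\right) \approx \arccos(0.57) \approx 55°,$

which is physically admissible since $\tau_{12}$ stays below the maximum travelling time $d_{mic}/c \approx 0.44$ ms.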
In free-field conditions, for direct sound, the directional response of a cardioid microphone pointing to the side of the array is built as a function of the estimated angle of incidence:
By applying the gain D to the input spectrum $X_1$, a virtual cardioid signal can be retrieved from the direct sound of the input microphone signals. This corresponds to the function of the beam deriver 12.
In
In
In the following, the conversion from A-format direct sound signals to B-format direct sound signals is shown. This corresponds to the function of the encoder 13.
In the following Table, the Ambisonic B-format channels and their spherical representation D(θ,ϕ) are listed up to third order, normalized with the Schmidt semi-normalization (SN3D), where θ and ϕ are, respectively, the azimuth and elevation angles:
These spherical harmonics form a set of orthogonal basis functions and can be used to describe any function on the surface of a sphere.
Without loss of generality, three microphones (the minimum number) are considered, placed in the horizontal XY-plane, for instance disposed at the edges of a mobile device as illustrated in
The three possible unordered microphone pairs are defined as:

pair$_1$ ≜ mic$_2$ → mic$_1$,
pair$_2$ ≜ mic$_3$ → mic$_2$,
pair$_3$ ≜ mic$_1$ → mic$_3$.

The look direction (θ=0) being defined by the X-axis, their direction vectors are:
The directions for each of the pairs in the horizontal plane are:
And the microphone spacing is:
The gain (13) resulting from the angle of incidence estimation is applied to each pair, leading to cardioid directional responses:

$\forall n\in[1\ldots3]:\quad A_{p_n}[k,i]=D_{p_n}[k,i]\,X_1[k,i]$.

The three resulting cardioids are pointing in the three directions $\theta_{p_n}$.
Assuming that the obtained cardioids are coincident, the corresponding first order Ambisonic B-format signals can be computed by means of a linear combination of the spectra $A_{p_n}$:
The inverse matrix of (18) enables converting the cardioids to Ambisonic B-format:
The first order Ambisonic B-format normalized directional responses $R_W$, $R_X$, and $R_Y$ are shown in
In the following, the determining of higher order ambisonic B-format signals is shown. This corresponds to the function of the direction-of-arrival estimator 20 and the higher order ambisonic encoder 21.
In the previous derivation of the first order ambisonic B-format signals $R_W$, $R_X$, and $R_Y$ for the direct sound, no explicit direction of arrival (DOA) of sound was computed. Instead, the directional responses of the three signals $R_W$, $R_X$, and $R_Y$ have been obtained from the A-format cardioid signals $A_{p_n}$.
In order to obtain the higher order (e.g. second and third order) ambisonic B-format signals, an explicit DOA is derived based on the two first order ambisonic B-format signals $R_X$ and $R_Y$ as:

$\theta_{XY}[k,i]=\arctan\left(\frac{R_Y[k,i]}{R_X[k,i]}\right)$.
Again, assuming three omnidirectional microphones in the horizontal plane (ϕ=0), the channels of interest as defined in the ambisonic definition in the Table are limited to:
The other channels are null since they are modulated by sin ϕ, with ϕ=0. For each of the above listed channels, the directional responses are thus derived by substituting the azimuth angle θ by the estimated DOA $\theta_{XY}$. For instance, considering second order (assuming no elevation, i.e. ϕ=0):
The resulting ambisonic channels $R_R$, $R_U$, $R_V$, $R_L$, $R_M$, $R_P$, and $R_Q$ contain only the direct sound components of the sound field.
Now, the handling of diffuse sound is shown. This corresponds to the diffuse sound estimator 31 and the de-correlation filter bank 32 of
In
In
The previous derivation of the ambisonic B-format signals is only valid under the assumption of direct sound. It does not hold for diffuse sound. In the following, a method for obtaining an equivalent diffuse sound for Ambisonic B-format signals is given. Considering enough time after the direct sound and a number of early reflections, numerous reflections are themselves reflected in the space, creating a diffuse sound field. A diffuse sound field is mathematically understood as independent sounds having the same energy and coming from all directions, as illustrated in
It is assumed that $X_1$ and $X_2$ can be modelled as:

$X_1[k,i]=S[k,i]+N_1[k,i]$,
$X_2[k,i]=a[k,i]\,S[k,i]+N_2[k,i]$, (22)

where $a[k,i]$ is a gain factor, $S[k,i]$ is the direct sound in the left channel, and $N_1[k,i]$ and $N_2[k,i]$ represent diffuse sound. From (22) it follows that:

$E\{X_1X_1^*\}=E\{SS^*\}+E\{N_1N_1^*\}$,
$E\{X_2X_2^*\}=a^2E\{SS^*\}+E\{N_2N_2^*\}$,
$E\{X_1X_2^*\}=aE\{SS^*\}+E\{N_1N_2^*\}$. (23)
It is reasonable to assume that the amount of diffuse sound in both microphone signals is the same, i.e. $E\{N_1N_1^*\}=E\{N_2N_2^*\}=E\{NN^*\}$. Furthermore, the normalized cross-correlation coefficient between $N_1$ and $N_2$ is denoted $\Phi_{diff}$ and can be obtained from Cook's formula:

$\Phi_{diff}[i]=\frac{\sin(2\pi f_i\,d_{mic}/c)}{2\pi f_i\,d_{mic}/c}$. (24)
Eventually, (23) can be re-written as:

$E\{X_1X_1^*\}=E\{SS^*\}+E\{NN^*\}$,
$E\{X_2X_2^*\}=a^2E\{SS^*\}+E\{NN^*\}$,
$E\{X_1X_2^*\}=aE\{SS^*\}+\Phi_{diff}E\{NN^*\}$. (25)

Elimination of $E\{SS^*\}$ and a in (25) yields the quadratic equation:

$A\,E\{NN^*\}^2+B\,E\{NN^*\}+C=0$, (26)

with

$A=1-\Phi_{diff}^2$,
$B=2\,\Phi_{diff}E\{X_1X_2^*\}-E\{X_1X_1^*\}-E\{X_2X_2^*\}$,
$C=E\{X_1X_1^*\}\,E\{X_2X_2^*\}-E\{X_1X_2^*\}^2$. (27)
The power estimate of diffuse sound, denoted $P_{diff}$, is then the physically possible one of the two solutions of (26) (the other solution of (26), yielding a diffuse sound power larger than the microphone signal power, is discarded as physically impossible), i.e.:

$P_{diff}[k,i]=\frac{-B-\sqrt{B^2-4AC}}{2A}$. (28)

Note that the contribution of the direct sound can straightforwardly be computed as:

$P_{dir}[k,i]=P_{X_1}[k,i]-P_{diff}[k,i]$. (29)
This corresponds to the function of the diffuse sound estimator 31.
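A minimal numpy sketch of the estimation (24)-(28) is given below, assuming the expectations are approximated by smoothed auto- and cross-power spectra and that $\Phi_{diff}$ follows Cook's diffuse-field coherence for the given microphone spacing; the function name and the small regularization constant are illustrative:

```python
import numpy as np

def diffuse_power(P_x1, P_x2, P_x12, freqs, d_mic, c=343.0):
    """P_x1, P_x2: auto power spectra; P_x12: real part of the cross spectrum."""
    kd = 2 * np.pi * freqs * d_mic / c
    phi_diff = np.sinc(kd / np.pi)        # numpy sinc(x) = sin(pi*x)/(pi*x), eq. (24)
    A = 1.0 - phi_diff ** 2               # quadratic coefficients of (26)-(27)
    B = 2.0 * phi_diff * P_x12 - P_x1 - P_x2
    C = P_x1 * P_x2 - P_x12 ** 2
    disc = np.maximum(B ** 2 - 4.0 * A * C, 0.0)
    # the physically possible root of (26); the other root would exceed the
    # microphone signal power and is discarded
    return (-B - np.sqrt(disc)) / (2.0 * A + 1e-12)
```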
By definition, the Ambisonic B-format signals are obtained by projecting the sound field onto the spherical harmonics basis defined in the previous table. Mathematically, the projection corresponds to the integration of the sound field signal over the spherical harmonics.
As illustrated in
$D_W \perp D_X \perp D_Y$. (30)
Note that this property does not hold anymore for direct sound, since a sound source emitting from only one direction, projected onto the same basis, will result in a single gain equal to the directional responses at the incidence angle of the sound source, leading to non-orthogonal, or in other terms correlated, components $R_W$, $R_X$, and $R_Y$.
However, here, considering a distribution of three omnidirectional microphones, the single diffuse sound estimate (28) is equivalent for all three microphones (or all three microphone pairs). Therefore, there is no possibility to retrieve the native diffuse sound components of the Ambisonic B-format signals, i.e. $D_W$, $D_X$, and $D_Y$, as they would be obtained separately by projection of the diffuse sound field onto the spherical harmonics basis.
Instead of getting the exact diffuse sound Ambisonic B-format signals, an alternative is to generate three orthogonal diffuse sound components from the single known diffuse sound estimate $P_{diff}$. This way, even if the diffuse sound components do not correspond to the native Ambisonic B-format obtained by projection, the most perceptually important property, orthogonality (enabling localization and spatialization), is preserved. This can be achieved by using de-correlation filters.
The de-correlation filters are derived from a Gaussian noise sequence u of given length $l_u$. A Gram-Schmidt process applied to this sequence leads to $N_u$ orthogonal sequences $U_1, U_2, \ldots, U_{N_u}$.
Given the length $l_u$ of the Gaussian noise sequence u, the de-correlation filters are shaped such that they have an exponential decay over time, similar to reverberation in a room. To do so, the sequences $U_1, U_2, \ldots, U_{N_u}$ are multiplied with an exponential window $w_u$ with a time constant corresponding to the reverberation time $RT_{60}$:
In
The exponential decay of the de-correlation filters, illustrated in
Eventually, the resulting de-correlation filters are modulated by the diffuse-field responses of the ambisonic B-format channels they correspond to. This way, the amount of diffuse sound in each ambisonic B-format channel matches the amount of diffuse sound of a natural B-format recording. The diffuse-field response DFR is the average of the corresponding spherical harmonic directional-response-squared contributions over all directions, i.e.:
In the three-microphone case ($N_u=3$), the resulting de-correlation filters are:

$\tilde{D}_W[k,i]=DFR_W\,w_u U_1\,P_{2D\text{-}diff}[k,i]$,
$\tilde{D}_X[k,i]=DFR_X\,w_u U_2\,P_{2D\text{-}diff}[k,i]$,
$\tilde{D}_Y[k,i]=DFR_Y\,w_u U_3\,P_{2D\text{-}diff}[k,i]$. (33)
This way, since the orthogonality property between all three diffuse sound components is preserved, any further processing using the generated B-format, e.g. conventional ambisonic decoding, will work on the diffuse sound too.
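The construction of the de-correlation filters can be sketched as follows: Gram-Schmidt orthogonalization of Gaussian noise, an exponential window tied to $RT_{60}$, and scaling by the diffuse-field responses. The SN3D diffuse-field responses used here ($DFR_W = 1$ and $DFR_X = DFR_Y = 1/\sqrt{3}$) are assumed for illustration:

```python
import numpy as np

def decorrelation_filters(l_u=2048, fs=48000, rt60=0.5, n_u=3, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n_u, l_u))          # Gaussian noise sequences
    # Gram-Schmidt process -> n_u mutually orthogonal sequences
    for n in range(n_u):
        for m in range(n):
            U[n] -= (U[n] @ U[m]) / (U[m] @ U[m]) * U[m]
        U[n] /= np.linalg.norm(U[n])
    # exponential window: 60 dB decay (amplitude factor 10^-3) after rt60 seconds
    t = np.arange(l_u) / fs
    w_u = 10.0 ** (-3.0 * t / rt60)
    dfr = np.array([1.0, 1.0 / np.sqrt(3.0), 1.0 / np.sqrt(3.0)])  # assumed DFRs
    return dfr[:, None] * (w_u * U)              # one filter per B-format channel
```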
Eventually, both direct and diffuse sound contributions have to be mixed together in order to generate the full Ambisonic B-format. Given the assumed signal model, the direct and diffuse sounds are, by definition, orthogonal too. Thus, the complete Ambisonic B-format signals are obtained using a straightforward addition:
$B_W[k,i]=R_W[k,i]+\tilde{D}_W[k,i]$,
$B_X[k,i]=R_X[k,i]+\tilde{D}_X[k,i]$,
$B_Y[k,i]=R_Y[k,i]+\tilde{D}_Y[k,i]$. (34)
This addition is performed by the adder 40 of
After this addition, only the inverse short-time Fourier transformation by the inverse short-time Fourier transformer 41 is performed in order to achieve the output B-format ambisonic signals.
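A short sketch of this final stage follows, using scipy's istft as a stand-in for the inverse short-time Fourier transformer 41; the dictionary-based interface is an assumption made for the sketch:

```python
import numpy as np
from scipy.signal import istft

def mix_and_synthesize(R, D_tilde, fs, n_stft=1024):
    """R, D_tilde: per-channel spectrograms keyed by 'W', 'X', 'Y'."""
    out = {}
    for ch in ('W', 'X', 'Y'):
        B = R[ch] + D_tilde[ch]                  # channel-wise addition (34)
        _, b_time = istft(B, fs, nperseg=n_stft)
        out[ch] = b_time
    return out
```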
Finally, in
Note that the audio encoding device according to the first aspect of the present disclosure as well as the audio recording device according to the second aspect of the present disclosure relate very closely to the audio encoding method according to the third aspect of the present disclosure. Therefore, the elaborations along
These encoded signals are fully compatible with conventional Ambisonic B-format signals and can thus be used as input for Ambisonic B-format decoding or any other processing. The same principle can be applied to retrieve full higher order Ambisonic B-format signals with both direct and diffuse sound contributions.
Abbreviations and Notations
The present disclosure is not limited to the examples and especially not to a specific number of microphones. The characteristics of the exemplary embodiments can be used in any advantageous combination.
The present disclosure has been described in conjunction with various embodiments herein. However, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless communication systems.
This application is a continuation of International Patent Application No. PCT/EP2018/056411, filed on Mar. 14, 2018, the disclosure of which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---
20150215721 | Sato et al. | Jul 2015 | A1 |
20190200155 | Zhang | Jun 2019 | A1 |
Number | Date | Country |
---|---|---|
104904240 | Sep 2015 | CN |
105378826 | Mar 2016 | CN |
205249484 | May 2016 | CN |
1737271 | Dec 2006 | EP |
2738762 | Jun 2014 | EP |
Miai Hai-ming et al., "Virtual source localization experiment on mixed-order ambisonics reproduction," Technical Acoustics, vol. 36, No. 5 Pt. 2, total 3 pages (Oct. 2017). With an English Abstract.
Benjamin et al., "A Soundfield Microphone Using Tangential Capsules," Audio Engineering Society, Convention Paper 8240, Presented at the 129th Convention, San Francisco, CA, USA, XP040567210, total 12 pages (Nov. 4-7, 2010).
Meyer et al., "A Highly Scalable Spherical Microphone Array Based on an Orthonormal Decomposition of the Soundfield," 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, total 4 pages, Institute of Electrical and Electronics Engineers, New York, New York (Date Added to IEEE Xplore: Apr. 7, 2011).
Farina et al., "Spatial PCM Sampling: A New Method for Sound Recording and Playback," AES 52nd International Conference, Guildford, UK, XP040633139, total 13 pages (Sep. 2-4, 2013).
Zotter, "Analysis and Synthesis of Sound-Radiation with Spherical Arrays," Institute of Electronic Music and Acoustics, University of Music and Performing Arts, Austria, total 192 pages (Sep. 2009).
Merimaa, "Applications of a 3-D Microphone Array," Audio Engineering Society, Convention Paper 5501, Presented at the 112th Convention, total 11 pages, Munich, Germany (May 10-13, 2002).
Brown et al., "Complex Variables and Applications," Eighth Edition, McGraw-Hill Higher Education, total 482 pages (2009).
Delikaris-Manias et al., "Cross Pattern Coherence Algorithm for Spatial Filtering Applications Utilizing Microphone Arrays," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, No. 11, pp. 2356-2367, Institute of Electrical and Electronics Engineers, New York, New York (Nov. 2013).
Pulkki, "Directional audio coding in spatial sound reproduction and stereo upmixing," total 8 pages, AES 28th International Conference, Pitea, Sweden (Jun. 30-Jul. 2, 2006).
Taghizadeh et al., "Enhanced diffuse field model for ad hoc microphone array calibration," Signal Processing 101 (2014), pp. 242-255, Elsevier B.V., total 14 pages (2014).
Olson, "Gradient Microphones," The Journal of the Acoustical Society of America, vol. 17, No. 3, total 7 pages (Jan. 1946).
Tournery et al., "Improved Time Delay Analysis/Synthesis for Parametric Stereo Audio Coding," total 9 pages, Audio Engineering Society, Convention Paper, Presented at the 120th Convention, Paris, France (May 20-23, 2006).
Cook et al., "Measurement of Correlation Coefficients in Reverberant Sound Fields," The Journal of the Acoustical Society of America, vol. 27, No. 6, total 6 pages (Nov. 1955).
Pulkki, "Microphone techniques and directional quality of sound reproduction," total 18 pages, Audio Engineering Society, Convention Paper 5500, Presented at the 112th Convention, Munich, Germany (May 10-13, 2002).
Tylka et al., "On the Calculation of Full and Partial Directivity Indices," total 12 pages, 3D Audio and Applied Acoustics Laboratory, Princeton University, 3D3A Lab Technical Report #1, Nov. 16, 2014, revised Feb. 19, 2016.
Gerzon, "Practical Periphony: The Reproduction of Full-Sphere Sound," In Preprint 65th Conv. Aud. Eng. Soc., total 6 pages (Feb. 1980).
J. Daniel, "Représentation de champs acoustiques, application à la transmission et à la reproduction de scènes sonores complexes dans un contexte multimédia," PhD thesis, Thèse de doctorat de l'Université Paris 6, total 319 pages (2001). With an English Abstract.
C. Schorkhuber et al., "Signal-Dependent Encoding for First-Order Ambisonic Microphones," DAGA 2017 Kiel, total 4 pages (2017).
Farrar, "Soundfield microphone: Design and development of microphone and control unit," total 8 pages, Wireless World (Oct. 1979).
Epain et al., "Spherical Harmonic Signal Covariance and Sound Field Diffuseness," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, No. 10, total 12 pages (Oct. 2016).
Berg, "The Future of Audio Technology - Surround and Beyond, the Proceedings of the AES 28th International Conference," total 9 pages, Pitea, Sweden (Jun. 30-Jul. 2, 2006).
C. T. Molloy, "Calculation of the Directivity Index for Various Types of Radiators," The Journal of the Acoustical Society of America, vol. 20, No. 4, total 20 pages (Jul. 1948).
M. R. Schroeder, "Natural Sounding Artificial Reverberation," Presented at the 13th Annual Meeting, total 18 pages (Oct. 9-13, 1961).
Gerzon, "Periphony: With-Height Sound Reproduction," Presented Mar. 1972 at the 2nd Convention of the Central Europe Section of the Audio Engineering Society, Munich, Germany, Journal of the Audio Engineering Society, total 9 pages.
Pulkki et al., "Directional audio coding - perception-based reproduction of spatial sound," International Workshop on the Principles and Applications of Spatial Hearing, Zao, Miyagi, Japan, total 5 pages (Nov. 11-13, 2009).
M. Bodden, "Modeling human sound-source localization and the cocktail-party-effect," acta acustica 1 (1993) 43-45, total 7 pages (Feb./Apr. 1993).
Gerzon, "Ambisonics in Multichannel Broadcasting and Video," total 13 pages, Presented at the 74th Convention of the Audio Engineering Society, New York, Oct. 8-12, 1983, J. Audio Eng. Soc., vol. 33, No. 11, Nov. 1985.
Benjamin et al., "The Native B-format Microphone: Part I," total 15 pages, Audio Engineering Society, Convention Paper 6621, Presented at the 119th Convention, New York, New York, USA (Oct. 7-10, 2005).
Tournery et al., "Converting Stereo Microphone Signals Directly to MPEG-Surround," Audio Engineering Society, Convention Paper 7982, Presented at the 128th Convention, total 11 pages, London, UK (May 22-25, 2010).
Faller, "Conversion of Two Closely Spaced Omnidirectional Microphone Signals to an XY Stereo Signal," Audio Engineering Society, Convention Paper 8188, Presented at the 129th Convention, total 10 pages, San Francisco, CA, USA (Nov. 4-7, 2010).
Walther et al., "Linear Simulation of Spaced Microphone Arrays Using B-Format Recordings," total 7 pages, Audio Engineering Society, Convention Paper 7987, Presented at the 128th Convention, London, UK (May 22-25, 2010).
Number | Date | Country
---|---|---
20210067868 A1 | Mar 2021 | US |
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/EP2018/056411 | Mar 2018 | US
Child | 17019757 | | US