The present invention relates to audio processing and, in particular, to an apparatus and method for geometry-based spatial audio coding.
Audio processing and, in particular, spatial audio coding are becoming increasingly important. Traditional spatial sound recording aims at capturing a sound field such that at the reproduction side, a listener perceives the sound image as it was at the recording location. Different approaches to spatial sound recording and reproduction techniques are known from the state of the art, which may be based on channel-, object- or parametric representations.
Channel-based representations represent the sound scene by means of N discrete audio signals meant to be played back by N loudspeakers arranged in a known setup, e.g. a 5.1 surround sound setup. The approach for spatial sound recording usually employs spaced, omnidirectional microphones, for example, in AB stereophony, or coincident directional microphones, for example, in intensity stereophony. Alternatively, more sophisticated microphones, such as a B-format microphone, may be employed, for example, in Ambisonics, see:
The desired loudspeaker signals for the known setup are derived directly from the recorded microphone signals and are then transmitted or stored discretely. A more efficient representation is obtained by applying audio coding to the discrete signals, which in some cases codes the information of different channels jointly for increased efficiency, for example in MPEG-Surround for 5.1, see:
A major drawback of these techniques is that the sound scene, once the loudspeaker signals have been computed, cannot be modified.
Object-based representations are, for example, used in Spatial Audio Object Coding (SAOC), see
Object-based representations represent the sound scene with N discrete audio objects. This representation gives high flexibility at the reproduction side, since the sound scene can be manipulated by changing e.g. the position and loudness of each object. While this representation may be readily available from an e.g. multitrack recording, it is very difficult to obtain from a complex sound scene recorded with a few microphones (see, for example, [21]). In fact, the talkers (or other sound emitting objects) have to be first localized and then extracted from the mixture, which might cause artifacts.
Parametric representations often employ spatial microphones to determine one or more audio downmix signals together with spatial side information describing the spatial sound. An example is Directional Audio Coding (DirAC), as discussed in
The term “spatial microphone” refers to any apparatus for the acquisition of spatial sound capable of retrieving direction of arrival of sound (e.g. combination of directional microphones, microphone arrays, etc.).
The term “non-spatial microphone” refers to any apparatus that is not adapted for retrieving direction of arrival of sound, such as a single omnidirectional or directive microphone.
Another example is proposed in:
In DirAC, the spatial cue information comprises the direction of arrival (DOA) of sound and the diffuseness of the sound field computed in a time-frequency domain. For the sound reproduction, the audio playback signals can be derived based on the parametric description. These techniques offer great flexibility at the reproduction side: an arbitrary loudspeaker setup can be employed, the representation is particularly flexible and compact, as it comprises a downmix mono audio signal and side information, and it allows easy modifications of the sound scene, for example, acoustic zooming, directional filtering, scene merging, etc.
However, these techniques are still limited in that the spatial image recorded is always relative to the spatial microphone used. Therefore, the acoustic viewpoint cannot be varied and the listening-position within the sound scene cannot be changed.
A virtual microphone approach is presented in
In
The method presented in
According to an embodiment, an apparatus for generating at least one audio output signal based on an audio data stream having audio data relating to one or more sound sources may have: a receiver for receiving the audio data stream having the audio data, wherein the audio data has for each one of the one or more sound sources one or more sound pressure values, wherein the audio data furthermore has for each one of the one or more sound sources one or more position values indicating a position of one of the sound sources, wherein each one of the one or more position values has at least two coordinate values, and wherein the audio data furthermore has one or more diffuseness-of-sound values for each one of the sound sources; and a synthesis module for generating the at least one audio output signal based on at least one of the one or more sound pressure values of the audio data of the audio data stream, based on at least one of the one or more position values of the audio data of the audio data stream and based on at least one of the one or more diffuseness-of-sound values of the audio data of the audio data stream.
According to another embodiment, an apparatus for generating an audio data stream having sound source data relating to one or more sound sources may have: a determiner for determining the sound source data based on at least one audio input signal recorded by at least one microphone and based on audio side information provided by at least two spatial microphones, the audio side information being spatial side information describing spatial sound; and a data stream generator for generating the audio data stream such that the audio data stream has the sound source data; wherein each one of the at least two spatial microphones is an apparatus for the acquisition of spatial sound capable of retrieving direction of arrival of sound, and wherein the sound source data has one or more sound pressure values for each one of the sound sources, wherein the sound source data furthermore has one or more position values indicating a sound source position for each one of the sound sources.
According to another embodiment, an apparatus for generating a virtual microphone data stream may have: an apparatus for generating an audio output signal of a virtual microphone, and an apparatus mentioned above for generating an audio data stream as the virtual microphone data stream, wherein the audio data stream has audio data, wherein the audio data has for each one of the one or more sound sources one or more position values indicating a sound source position, wherein each one of the one or more position values has at least two coordinate values, wherein the apparatus for generating an audio output signal of a virtual microphone has: a sound events position estimator for estimating a sound source position indicating a position of a sound source in the environment, wherein the sound events position estimator is adapted to estimate the sound source position based on a first direction of arrival of sound emitted by a first real spatial microphone being located at a first real microphone position in the environment, and based on a second direction of arrival of sound emitted by a second real spatial microphone being located at a second real microphone position in the environment; and an information computation module for generating the audio output signal based on a recorded audio input signal being recorded by the first real spatial microphone, based on the first real microphone position and based on a virtual position of the virtual microphone, wherein the first real spatial microphone and the second real spatial microphone are apparatuses for the acquisition of spatial sound capable of retrieving direction of arrival of sound, and wherein the apparatus for generating an audio output signal of a virtual microphone is arranged to provide the audio output signal to the apparatus for generating an audio data stream, and wherein the determiner of the apparatus for generating an audio data stream determines the sound source data based on the audio output signal provided by the apparatus for generating an audio output signal of a virtual microphone, the audio output signal being one of the at least one audio input signal of the apparatus mentioned above for generating an audio data stream.
According to another embodiment, a system may have: an apparatus mentioned above for generating at least one audio output signal, and an apparatus mentioned above for generating an audio data stream.
Another embodiment may have an audio data stream having audio data relating to one or more sound sources, wherein the audio data has for each one of the one or more sound sources one or more sound pressure values, wherein the audio data furthermore has for each one of the one or more sound sources one or more position values indicating a sound source position, wherein each one of the one or more position values has at least two coordinate values, and wherein the audio data furthermore has one or more diffuseness-of-sound values for each one of the one or more sound sources.
According to another embodiment, a method for generating at least one audio output signal based on an audio data stream having audio data relating to one or more sound sources may have the steps of: receiving the audio data stream having the audio data, wherein the audio data has for each one of the one or more sound sources one or more sound pressure values, wherein the audio data furthermore has for each one of the one or more sound sources one or more position values indicating a position of one of the sound sources, wherein each one of the one or more position values has at least two coordinate values, and wherein the audio data furthermore has one or more diffuseness-of-sound values for each one of the sound sources; and generating the at least one audio output signal based on at least one of the one or more sound pressure values of the audio data of the audio data stream, based on at least one of the one or more position values of the audio data of the audio data stream and based on at least one of the one or more diffuseness-of-sound values of the audio data of the audio data stream.
According to another embodiment, a method for generating an audio data stream having sound source data relating to one or more sound sources may have the steps of: determining the sound source data based on at least one audio input signal recorded by at least one microphone and based on audio side information provided by at least two spatial microphones, the audio side information being spatial side information describing spatial sound; and generating the audio data stream such that the audio data stream has the sound source data; wherein each one of the at least two spatial microphones is an apparatus for the acquisition of spatial sound capable of retrieving direction of arrival of sound, and wherein the sound source data has one or more sound pressure values for each one of the sound sources, wherein the sound source data furthermore has one or more position values indicating a sound source position for each one of the sound sources.
According to still another embodiment, a method for generating an audio data stream having audio data relating to one or more sound sources may have the steps of: receiving audio data having at least one sound pressure value for each one of the sound sources, wherein the audio data furthermore has one or more position values indicating a sound source position for each one of the sound sources, and wherein the audio data furthermore has one or more diffuseness-of-sound values for each one of the sound sources; generating the audio data stream such that the audio data stream has the at least one sound pressure value for each one of the sound sources, such that the audio data stream furthermore has the one or more position values indicating a sound source position for each one of the sound sources, and such that the audio data stream furthermore has one or more diffuseness-of-sound values for each one of the sound sources.
Another embodiment may have a computer program for implementing the methods mentioned above when being executed on a computer or a processor.
The audio data may be defined for a time-frequency bin of a plurality of time-frequency bins. Alternatively, the audio data may be defined for a time instant of a plurality of time instants. In some embodiments, one or more pressure values of the audio data may be defined for a time instant of a plurality of time instants, while the corresponding parameters (e.g., the position values) may be defined in a time-frequency domain. This can be readily obtained by transforming the pressure values, otherwise defined in the time-frequency domain, back to the time domain. For each one of the sound sources, at least one pressure value is comprised in the audio data, wherein the at least one pressure value may be a pressure value relating to an emitted sound wave, e.g. originating from the sound source. The pressure value may be a value of an audio signal, for example, a pressure value of an audio output signal generated by an apparatus for generating an audio output signal of a virtual microphone, wherein the virtual microphone is placed at the position of the sound source.
The above-described embodiment makes it possible to compute a sound field representation which is truly independent of the recording position and provides for efficient transmission and storage of a complex sound scene, as well as for easy modifications and an increased flexibility at the reproduction system.
Inter alia, important advantages of this technique are that, at the reproduction side, the listener can freely choose his or her position within the recorded sound scene, use any loudspeaker setup, and additionally manipulate the sound scene based on the geometrical information, e.g. position-based filtering. In other words, with the proposed technique the acoustic viewpoint can be varied and the listening position within the sound scene can be changed.
According to the above-described embodiment, the audio data comprised in the audio data stream comprises one or more pressure values for each one of the sound sources. Thus, the pressure values indicate an audio signal relative to one of the sound sources, e.g. an audio signal originating from the sound source, and not relative to the position of the recording microphones. Similarly, the one or more position values that are comprised in the audio data stream indicate positions of the sound sources and not of the microphones.
By this, a plurality of advantages are realized: For example, a representation of an audio scene is achieved that can be encoded using few bits. If the sound scene only comprises a single sound source in a particular time-frequency bin, only the pressure values of a single audio signal relating to the only sound source have to be encoded together with the position value indicating the position of the sound source. In contrast, traditional methods may have to encode a plurality of pressure values from the plurality of recorded microphone signals to reconstruct an audio scene at a receiver. Moreover, the above-described embodiment allows easy modification of a sound scene on the transmitter side as well as on the receiver side, as will be described below. Thus, scene composition (e.g., deciding the listening position within the sound scene) can also be carried out at the receiver side.
Embodiments employ the concept of modeling a complex sound scene by means of sound sources, for example, point-like sound sources (PLS=point-like sound source), e.g. isotropic point-like sound sources (IPLS), which are active at specific slots in a time-frequency representation, such as the one provided by the Short-Time Fourier Transform (STFT).
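For illustration only, the following sketch shows one possible in-memory layout for such a source-based representation: per time-frequency bin and per sound source, a pressure value, a position value with at least two coordinates, and a diffuseness value are stored. The class and field names are hypothetical and not part of any embodiment.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class SourceData:
    pressure: complex                      # sound pressure value of the source at (k, n)
    position: Tuple[float, float, float]   # position value (here three coordinate values)
    diffuseness: float                     # diffuseness-of-sound value, 0 <= psi <= 1

# One list of sources (layers) per time-frequency bin (k, n).
GacStream = Dict[Tuple[int, int], List[SourceData]]

stream: GacStream = {
    (12, 3): [SourceData(pressure=0.8 + 0.1j,
                         position=(1.0, 2.0, 0.0),
                         diffuseness=0.2)],
}
print(stream[(12, 3)][0].position)
```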
According to an embodiment, the receiver may be adapted to receive the audio data stream comprising the audio data, wherein the audio data furthermore comprises one or more diffuseness values for each one of the sound sources. The synthesis module may be adapted to generate the at least one audio output signal based on at least one of the one or more diffuseness values.
In another embodiment, the receiver may furthermore comprise a modification module for modifying the audio data of the received audio data stream by modifying at least one of the one or more pressure values of the audio data, by modifying at least one of the one or more position values of the audio data or by modifying at least one of the diffuseness values of the audio data. The synthesis module may be adapted to generate the at least one audio output signal based on the at least one pressure value that has been modified, based on the at least one position value that has been modified or based on the at least one diffuseness value that has been modified.
In a further embodiment, each one of the position values of each one of the sound sources may comprise at least two coordinate values. Furthermore, the modification module may be adapted to modify the coordinate values by adding at least one random number to the coordinate values, when the coordinate values indicate that a sound source is located at a position within a predefined area of an environment.
According to another embodiment, each one of the position values of each one of the sound sources may comprise at least two coordinate values. Moreover, the modification module is adapted to modify the coordinate values by applying a deterministic function on the coordinate values, when the coordinate values indicate that a sound source is located at a position within a predefined area of an environment.
In a further embodiment, each one of the position values of each one of the sound sources may comprise at least two coordinate values. Moreover, the modification module may be adapted to modify a selected pressure value of the one or more pressure values of the audio data, relating to the same sound source as the coordinate values, when the coordinate values indicate that a sound source is located at a position within a predefined area of an environment.
According to an embodiment, the synthesis module may comprise a first stage synthesis unit and a second stage synthesis unit. The first stage synthesis unit may be adapted to generate a direct pressure signal comprising direct sound, a diffuse pressure signal comprising diffuse sound and direction of arrival information based on at least one of the one or more pressure values of the audio data of the audio data stream, based on at least one of the one or more position values of the audio data of the audio data stream and based on at least one of the one or more diffuseness values of the audio data of the audio data stream. The second stage synthesis unit may be adapted to generate the at least one audio output signal based on the direct pressure signal, the diffuse pressure signal and the direction of arrival information.
According to an embodiment, an apparatus for generating an audio data stream comprising sound source data relating to one or more sound sources is provided. The apparatus for generating an audio data stream comprises a determiner for determining the sound source data based on at least one audio input signal recorded by at least one microphone and based on audio side information provided by at least two spatial microphones. Furthermore, the apparatus comprises a data stream generator for generating the audio data stream such that the audio data stream comprises the sound source data. The sound source data comprises one or more pressure values for each one of the sound sources. Moreover, the sound source data furthermore comprises one or more position values indicating a sound source position for each one of the sound sources. Furthermore, the sound source data is defined for a time-frequency bin of a plurality of time-frequency bins.
In a further embodiment, the determiner may be adapted to determine the sound source data based on diffuseness information by at least one spatial microphone. The data stream generator may be adapted to generate the audio data stream such that the audio data stream comprises the sound source data. The sound source data furthermore comprises one or more diffuseness values for each one of the sound sources.
In another embodiment, the apparatus for generating an audio data stream may furthermore comprise a modification module for modifying the audio data stream generated by the data stream generator by modifying at least one of the pressure values of the audio data, at least one of the position values of the audio data or at least one of the diffuseness values of the audio data relating to at least one of the sound sources.
According to another embodiment, each one of the position values of each one of the sound sources may comprise at least two coordinate values (e.g., two coordinates of a Cartesian coordinate system, or azimuth and distance, in a polar coordinate system). The modification module may be adapted to modify the coordinate values by adding at least one random number to the coordinate values or by applying a deterministic function on the coordinate values, when the coordinate values indicate that a sound source is located at a position within a predefined area of an environment.
According to a further embodiment, an audio data stream is provided. The audio data stream may comprise audio data relating to one or more sound sources, wherein the audio data comprises one or more pressure values for each one of the sound sources. The audio data may furthermore comprise at least one position value indicating a sound source position for each one of the sound sources. In an embodiment, each one of the at least one position values may comprise at least two coordinate values. The audio data may be defined for a time-frequency bin of a plurality of time-frequency bins.
In another embodiment, the audio data furthermore comprises one or more diffuseness values for each one of the sound sources.
Embodiments of the present invention will be described in the following, in which:
Before providing a detailed description of embodiments of the present invention, an apparatus for generating an audio output signal of a virtual microphone is described to provide background information regarding the concepts of the present invention.
In embodiments, the sound event localization in space, as well as the description of the position of the virtual microphone, may be conducted based on the positions and orientations of the real and virtual spatial microphones in a common coordinate system. This information may be represented by the inputs 121 . . . 12N and input 104 in
The output of the apparatus or a corresponding method may be, when desired, one or more sound signals 105, which may have been picked up by a spatial microphone defined and placed as specified by 104. Moreover, the apparatus (or rather the method) may provide as output corresponding spatial side information 106 which may be estimated by employing the virtual spatial microphone.
In the following, position estimation of a sound events position estimator according to an embodiment is described in more detail.
Depending on the dimensionality of the problem (2D or 3D) and the number of spatial microphones, several solutions for the position estimation are possible.
If two spatial microphones exist in 2D (the simplest possible case), a simple triangulation is possible.
In
The triangulation fails when the two lines 430, 440 are exactly parallel. In real applications, however, this is very unlikely. However, not all triangulation results correspond to a physical or feasible position for the sound event in the considered space. For example, the estimated position of the sound event might be too far away or even outside the assumed space, indicating that probably the DOAs do not correspond to any sound event which can be physically interpreted with the used model. Such results may be caused by sensor noise or excessively strong room reverberation. Therefore, according to an embodiment, such undesired results are flagged such that the information computation module 202 can treat them properly.
Similarly to the 2D case, the triangulation may fail or may yield unfeasible results for certain combinations of directions, which may then also be flagged, e.g. to the information computation module 202 of
If more than two spatial microphones exist, several solutions are possible. For example, the triangulation explained above could be carried out for all pairs of the real spatial microphones (if N=3, 1 with 2, 1 with 3, and 2 with 3). The resulting positions may then be averaged (along x and y, and, if 3D is considered, z).
Alternatively, more complex concepts may be used. For example, probabilistic approaches may be applied as described in
According to an embodiment, the sound field may be analyzed in the time-frequency domain, for example, obtained via a short-time Fourier transform (STFT), in which k and n denote the frequency index and the time index, respectively. The complex pressure Pv(k, n) at an arbitrary position pv for a certain k and n is modeled as a single spherical wave emitted by a narrow-band isotropic point-like source, e.g. by employing the formula:
Pv(k,n)=PIPLS(k,n)·γ(k,pIPLS(k,n),pv), (1)
where PIPLS(k, n) is the signal emitted by the IPLS at its position pIPLS(k, n). The complex factor γ(k, pIPLS, pv) expresses the propagation from pIPLS(k, n) to pv, e.g., it introduces appropriate phase and magnitude modifications. Here, the assumption may be applied that in each time-frequency bin only one IPLS is active. Nevertheless, multiple narrow-band IPLSs located at different positions may also be active at a single time instance.
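As a rough illustration of formula (1), the following sketch evaluates the model with a free-field point-source propagation factor, i.e. a phase rotation combined with a 1/r amplitude decay; the concrete form of γ, the assumed speed of sound and the function names are assumptions made for this example only.

```python
import numpy as np

def gamma(freq_hz, p_ipls, p_v, c=343.0):
    """Assumed free-field propagation factor from the IPLS position p_ipls to p_v."""
    r = np.linalg.norm(np.asarray(p_v, float) - np.asarray(p_ipls, float))  # distance
    wavenumber = 2.0 * np.pi * freq_hz / c
    return np.exp(-1j * wavenumber * r) / r      # phase rotation and 1/r magnitude decay

def pressure_at(p_ipls_signal, freq_hz, p_ipls, p_v):
    """Formula (1): P_v(k, n) = P_IPLS(k, n) * gamma(k, p_IPLS, p_v)."""
    return p_ipls_signal * gamma(freq_hz, p_ipls, p_v)

# Example: complex pressure 1 m and 2 m away from an IPLS emitting at 1 kHz.
print(pressure_at(1.0 + 0.0j, 1000.0, (0.0, 0.0), (1.0, 0.0)))
print(pressure_at(1.0 + 0.0j, 1000.0, (0.0, 0.0), (2.0, 0.0)))
```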
Each IPLS either models direct sound or a distinct room reflection. Its position pIPLS(k, n) may ideally correspond to an actual sound source located inside the room, or a mirror image sound source located outside, respectively. Therefore, the position pIPLS(k, n) may also indicate the position of a sound event.
Please note that the term “real sound sources” denotes the actual sound sources physically existing in the recording environment, such as talkers or musical instruments. In contrast, the terms “sound sources”, “sound events” or “IPLS” refer to effective sound sources, which are active at certain time instants or at certain time-frequency bins, wherein the sound sources may, for example, represent real sound sources or mirror image sources.
Both the actual sound source 153 of
This single-wave model is accurate only for mildly reverberant environments, given that the source signals fulfill the W-disjoint orthogonality (WDO) condition, i.e. the time-frequency overlap is sufficiently small. This is normally true for speech signals, see, for example,
However, the model also provides a good estimate for other environments and is therefore also applicable for those environments.
In the following, the estimation of the positions pIPLS(k, n) according to an embodiment is explained. The position pIPLS(k, n) of an active IPLS in a certain time-frequency bin, and thus the position of a sound event in a time-frequency bin, is estimated via triangulation on the basis of the direction of arrival (DOA) of sound measured in at least two different observation points.
Here, φ1(k, n) represents the azimuth of the DOA estimated at the first microphone array, as depicted in
e1(k,n)=R1·e1POV(k,n),
e2(k,n)=R2·e2POV(k,n), (3)
where R are coordinate transformation matrices, e.g.,
when operating in 2D and c1=[c1,x, c1,y]T. For carrying out the triangulation, the direction vectors d1(k, n) and d2(k, n) may be calculated as:
d1(k,n)=d1(k,n)e1(k,n),
d2(k,n)=d2(k,n)e2(k,n), (5)
where d1(k, n)=∥d1(k, n)∥ and d2(k, n)=∥d2(k, n)∥ are the unknown distances between the IPLS and the two microphone arrays. The following equation
p1+d1(k,n)=p2+d2(k,n) (6)
may be solved for d1(k, n). Finally, the position pIPLS(k, n) of the IPLS is given by
pIPLS(k,n)=d1(k,n)e1(k,n)+p1. (7)
In another embodiment, equation (6) may be solved for d2(k, n) and pIPLS(k, n) is analogously computed employing d2(k, n).
Equation (6) provides a solution when operating in 2D, unless e1(k, n) and e2(k, n) are parallel. However, when using more than two microphone arrays or when operating in 3D, a solution cannot be obtained when the direction vectors d do not intersect. According to an embodiment, in this case, the point which is closest to all direction vectors d may be computed and the result can be used as the position of the IPLS.
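The 2D triangulation of equations (5) to (7) can be sketched as follows; the example assumes that the DOA azimuths are already expressed in the common coordinate system (i.e., after the rotation of equation (3)), and it returns no result when the direction vectors are (almost) parallel or the intersection is not physically feasible, corresponding to the flagging discussed above. All names are illustrative.

```python
import numpy as np

def triangulate_ipls(p1, p2, azimuth1, azimuth2):
    """Solve p1 + d1*e1 = p2 + d2*e2 for d1, d2 and return p_IPLS (formula (7))."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1 = np.array([np.cos(azimuth1), np.sin(azimuth1)])   # DOA unit vector at array 1
    e2 = np.array([np.cos(azimuth2), np.sin(azimuth2)])   # DOA unit vector at array 2
    A = np.column_stack((e1, -e2))                        # d1*e1 - d2*e2 = p2 - p1
    if abs(np.linalg.det(A)) < 1e-9:
        return None                   # lines (almost) parallel: triangulation fails
    d1, d2 = np.linalg.solve(A, p2 - p1)
    if d1 < 0 or d2 < 0:
        return None                   # intersection behind an array: not physically feasible
    return p1 + d1 * e1               # position of the IPLS

# Example: arrays at (0, 0) and (2, 0) with DOAs of 45 and 135 degrees
# intersect at (1, 1).
print(triangulate_ipls((0.0, 0.0), (2.0, 0.0), np.pi / 4, 3 * np.pi / 4))
```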
In an embodiment, all observation points p1, p2, . . . should be located such that the sound emitted by the IPLS falls into the same temporal block n. This requirement may simply be fulfilled when the distance Δ between any two of the observation points is smaller than
Δmax=nFFT(1−R)c/fs,
where nFFT is the STFT window length, 0≤R<1 specifies the overlap between successive time frames, fs is the sampling frequency, and c is the speed of sound. For example, for a 1024-point STFT at 48 kHz with 50% overlap (R=0.5), the maximum spacing between the arrays to fulfill the above requirement is Δ=3.65 m.
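The numerical example can be checked with a short computation, assuming the relation given above and a speed of sound of approximately 343 m/s (the exact value of c used for the quoted figure is not stated, which explains the small rounding difference).

```python
def max_array_spacing(n_fft=1024, overlap=0.5, fs=48000.0, c=343.0):
    # Delta_max = n_fft * (1 - R) * c / fs  (relation assumed above)
    return n_fft * (1.0 - overlap) * c / fs

print(round(max_array_spacing(), 2))   # ~3.66 m, close to the 3.65 m quoted above
```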
In the following, an information computation module 202, e.g. a virtual microphone signal and side information computation module, according to an embodiment is described in more detail.
To compute the audio signal of the virtual microphone, the geometrical information, e.g. the position and orientation of the real spatial microphones 121 . . . 12N, the position, orientation and characteristics of the virtual spatial microphone 104, and the position estimates of the sound events 205 are fed into the information computation module 202, in particular, into the propagation parameters computation module 501 of the propagation compensator 500, into the combination factors computation module 502 of the combiner 510 and into the spectral weights computation unit 503 of the spectral weighting unit 520. The propagation parameters computation module 501, the combination factors computation module 502 and the spectral weights computation unit 503 compute the parameters used in the modification of the audio signals 111 . . . 11N in the propagation compensation module 504, the combination module 505 and the spectral weighting application module 506.
In the information computation module 202, the audio signals 111 . . . 11N may at first be modified to compensate for the effects given by the different propagation lengths between the sound event positions and the real spatial microphones. The signals may then be combined to improve for instance the signal-to-noise ratio (SNR). Finally, the resulting signal may then be spectrally weighted to take the directional pick up pattern of the virtual microphone into account, as well as any distance dependent gain function. These three steps are discussed in more detail below.
Propagation compensation is now explained in more detail. In the upper portion of
The lower portion of
The signals at the two real arrays are comparable only if the relative delay Dt12 between them is small. Otherwise, one of the two signals needs to be temporally realigned to compensate the relative delay Dt12, and possibly, to be scaled to compensate for the different decays.
Compensating the delay between the arrival at the virtual microphone and the arrival at the real microphone arrays (at one of the real spatial microphones) changes the overall delay independently of the localization of the sound event, making such a compensation superfluous for most applications.
Returning to
The propagation compensation module 504 is configured to use this information to modify the audio signals accordingly. If the signals are to be shifted by a small amount of time (compared to the time window of the filter bank), then a simple phase rotation suffices. If the delays are larger, more complicated implementations are necessitated.
The output of the propagation compensation module 504 are the modified audio signals expressed in the original time-frequency domain.
In the following, a particular estimation of propagation compensation for a virtual microphone according to an embodiment will be described with reference to
In the embodiment that is now explained, it is assumed that at least a first recorded audio input signal, e.g. a pressure signal of at least one of the real spatial microphones (e.g. the microphone arrays) is available, for example, the pressure signal of a first real spatial microphone. We will refer to the considered microphone as reference microphone, to its position as reference position pref and to its pressure signal as reference pressure signal Pref(k, n). However, propagation compensation may be conducted not only with respect to one pressure signal, but also with respect to the pressure signals of a plurality or of all of the real spatial microphones.
The relationship between the pressure signal PIPLS(k, n) emitted by the IPLS and a reference pressure signal Pref(k, n) of a reference microphone located in pref can be expressed by formula (9):
Pref(k,n)=PIPLS(k,n)·γ(k,pIPLS,pref), (9)
In general, the complex factor γ(k, pa, pb) expresses the phase rotation and amplitude decay introduced by the propagation of a spherical wave from its origin in pa to pb. However, practical tests indicated that considering only the amplitude decay in γ leads to plausible impressions of the virtual microphone signal with significantly fewer artifacts compared to also considering the phase rotation.
The sound energy which can be measured in a certain point in space depends strongly on the distance r from the sound source, in
Assuming that the first real spatial microphone is the reference microphone, then pref=p1. In
s(k,n)=∥s(k,n)∥=∥p1+d1(k,n)−pv∥. (10)
The sound pressure Pv(k, n) at the position of the virtual microphone is computed by combining formulas (1) and (9), leading to
As mentioned above, in some embodiments, the factors γ may only consider the amplitude decay due to the propagation. Assuming for instance that the sound pressure decreases with 1/r, then
When the model in formula (1) holds, e.g., when only direct sound is present, then formula (12) can accurately reconstruct the magnitude information. However, in case of pure diffuse sound fields, e.g., when the model assumptions are not met, the presented method yields an implicit dereverberation of the signal when moving the virtual microphone away from the positions of the sensor arrays. In fact, as discussed above, in diffuse sound fields, we expect that most IPLS are localized near the two sensor arrays. Thus, when moving the virtual microphone away from these positions, we likely increase the distance s=∥s∥ in
By conducting propagation compensation on the recorded audio input signal (e.g. the pressure signal) of the first real spatial microphone, a first modified audio signal is obtained.
In embodiments, a second modified audio signal may be obtained by conducting propagation compensation on a recorded second audio input signal (second pressure signal) of the second real spatial microphone.
In other embodiments, further audio signals may be obtained by conducting propagation compensation on recorded further audio input signals (further pressure signals) of further real spatial microphones.
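A minimal sketch of the amplitude-only propagation compensation discussed above is given below; it assumes a 1/r decay of the sound pressure, so that the reference pressure signal is rescaled by the ratio of the IPLS-to-reference-microphone distance d1(k, n) and the IPLS-to-virtual-microphone distance s(k, n). Function and variable names are illustrative.

```python
import numpy as np

def compensate_propagation(p_ref, p1, p_ipls, p_v):
    """Return the propagation-compensated pressure at the virtual microphone position."""
    d1 = np.linalg.norm(np.asarray(p_ipls, float) - np.asarray(p1, float))   # IPLS -> reference mic
    s = np.linalg.norm(np.asarray(p_ipls, float) - np.asarray(p_v, float))   # IPLS -> virtual mic
    return (d1 / s) * p_ref        # only the amplitude decay is compensated, no phase rotation

# Example: a virtual microphone twice as far from the IPLS as the reference
# microphone receives half the magnitude.
print(abs(compensate_propagation(1.0 + 0.0j, (0.0, 1.0), (0.0, 0.0), (0.0, 2.0))))
```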
Now, combining in blocks 502 and 505 in
Possible solutions for the combination comprise:
The task of module 502 is, if applicable, to compute parameters for the combining, which is carried out in module 505.
Now, spectral weighting according to embodiments is described in more detail. For this, reference is made to blocks 503 and 506 of
For each time-frequency bin the geometrical reconstruction allows us to easily obtain the DOA relative to the virtual microphone, as shown in
The weight for the time-frequency bin is then computed considering the type of virtual microphone desired.
In case of directional microphones, the spectral weights may be computed according to a predefined pick-up pattern. For example, according to an embodiment, a cardioid microphone may have a pick up pattern defined by the function g(theta),
g(theta)=0.5+0.5 cos(theta),
where theta is the angle between the look direction of the virtual spatial microphone and the DOA of the sound from the point of view of the virtual microphone.
Another possibility is artistic (non-physical) decay functions. In certain applications, it may be desired to suppress sound events far away from the virtual microphone with a factor greater than the one characterizing free-field propagation. For this purpose, some embodiments introduce an additional weighting function which depends on the distance between the virtual microphone and the sound event. In an embodiment, only sound events within a certain distance (e.g. in meters) from the virtual microphone should be picked up.
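For illustration, the following sketch combines the cardioid pick-up pattern g(theta) given above with a simple artistic distance cut-off; the hard cut-off beyond a maximum distance is an assumption chosen only to exemplify a distance-dependent weighting function.

```python
import numpy as np

def spectral_weight(theta, distance, max_distance=3.0):
    g = 0.5 + 0.5 * np.cos(theta)        # cardioid pick-up pattern g(theta) from the text
    if distance > max_distance:          # assumed artistic cut-off beyond max_distance
        g = 0.0
    return g

print(spectral_weight(np.radians(30.0), 2.0))   # ~0.93: nearby event is kept
print(spectral_weight(np.radians(30.0), 5.0))   # 0.0: far-away event is suppressed
```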
With respect to virtual microphone directivity, arbitrary directivity patterns can be applied for the virtual microphone. In doing so, one can for instance separate a source from a complex sound scene.
Since the DOA of the sound can be computed in the position pv of the virtual microphone, namely
where cv is a unit vector describing the orientation of the virtual microphone, arbitrary directivities for the virtual microphone can be realized. For example, assuming that Pv(k,n) indicates the combination signal or the propagation-compensated modified audio signal, then the formula:
P̃v(k,n)=Pv(k,n)[1+cos(φv(k,n))] (14)
calculates the output of a virtual microphone with cardioid directivity. The directional patterns, which can potentially be generated in this way, depend on the accuracy of the position estimation.
In embodiments, one or more real, non-spatial microphones, for example, an omnidirectional microphone or a directional microphone such as a cardioid, are placed in the sound scene in addition to the real spatial microphones to further improve the sound quality of the virtual microphone signals 105 in
In a further embodiment, computation of the spatial side information of the virtual microphone is realized. To compute the spatial side information 106 of the microphone, the information computation module 202 of
The output of the spatial side information computation module 507 is the side information of the virtual microphone 106. This side information can be, for instance, the DOA or the diffuseness of sound for each time-frequency bin (k, n) from the point of view of the virtual microphone. Another possible side information could, for instance, be the active sound intensity vector Ia(k, n) which would have been measured in the position of the virtual microphone. How these parameters can be derived, will now be described.
According to an embodiment, DOA estimation for the virtual spatial microphone is realized. The information computation module 120 is adapted to estimate the direction of arrival at the virtual microphone as spatial side information, based on a position vector of the virtual microphone and based on a position vector of the sound event as illustrated by
h(k,n)=s(k,n)−r(k,n).
The desired DOA a(k, n) can now be computed for each (k, n) for instance via the definition of the dot product of h(k, n) and v(k,n), namely
a(k,n)=arccos(h(k,n)·v(k,n)/(∥h(k,n)∥ ∥v(k,n)∥)).
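A short sketch of this DOA computation is given below; h(k, n) is taken as the vector from the virtual microphone towards the sound event and v(k, n) as the orientation vector of the virtual microphone, and the clipping of the normalized dot product merely guards against numerical round-off. Names are illustrative.

```python
import numpy as np

def doa_at_virtual_microphone(s, r, v):
    """Angle a(k, n) between h(k, n) = s - r and the orientation vector v, in radians."""
    h = np.asarray(s, float) - np.asarray(r, float)       # virtual microphone -> sound event
    v = np.asarray(v, float)
    cos_a = np.dot(h, v) / (np.linalg.norm(h) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# Example: sound event at (1, 1), virtual microphone at the origin looking along x
# gives a DOA of 45 degrees.
print(np.degrees(doa_at_virtual_microphone((1.0, 1.0), (0.0, 0.0), (1.0, 0.0))))
```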
In another embodiment, the information computation module 120 may be adapted to estimate the active sound intensity at the virtual microphone as spatial side information, based on a position vector of the virtual microphone and based on a position vector of the sound event as illustrated by
From the DOA a(k, n) defined above, we can derive the active sound intensity Ia(k, n) at the position of the virtual microphone. For this, it is assumed that the virtual microphone audio signal 105 in
Ia(k,n)=−(½rho)|Pv(k,n)|²·[cos a(k,n), sin a(k,n)]T,
where [ ]T denotes a transposed vector, rho is the air density, and Pv(k, n) is the sound pressure measured by the virtual spatial microphone, e.g., the output 105 of block 506 in
If the active intensity vector is to be computed expressed in the general coordinate system, but still at the position of the virtual microphone, the following formula may be applied:
Ia(k,n)=(½rho)|Pv(k,n)|²h(k,n)/∥h(k,n)∥.
The diffuseness of sound expresses how diffuse the sound field is in a given time-frequency slot (see, for example, [2]). Diffuseness is expressed by a value Ψ, wherein 0≤Ψ≤1. A diffuseness of 1 indicates that the sound field energy is completely diffuse. This information is important e.g. in the reproduction of spatial sound. Traditionally, diffuseness is computed at the specific point in space in which a microphone array is placed.
According to an embodiment, the diffuseness may be computed as an additional parameter to the side information generated for the Virtual Microphone (VM), which can be placed at will at an arbitrary position in the sound scene. By this, an apparatus that also calculates the diffuseness besides the audio signal at a virtual position of a virtual microphone can be seen as a virtual DirAC front-end, as it is possible to produce a DirAC stream, namely an audio signal, direction of arrival, and diffuseness, for an arbitrary point in the sound scene. The DirAC stream may be further processed, stored, transmitted, and played back on an arbitrary multi-loudspeaker setup. In this case, the listener experiences the sound scene as if he or she were in the position specified by the virtual microphone and were looking in the direction determined by its orientation.
A diffuseness computation unit 801 of an embodiment is illustrated in
Let Edir(SM 1) to Edir(SM N) and Ediff(SM 1) to Ediff(SM N) denote the estimates of the energies of direct and diffuse sound for the N spatial microphones computed by energy analysis unit 810. If Pi is the complex pressure signal and Ψi is the diffuseness for the i-th spatial microphone, then the energies may, for example, be computed according to the formulae:
Edir(SMi)=(1−Ψi)·|Pi|²
Ediff(SMi)=Ψi·|Pi|²
The energy of diffuse sound should be equal in all positions; therefore, an estimate of the diffuse sound energy Ediff(VM) at the virtual microphone can be computed simply by averaging Ediff(SM 1) to Ediff(SM N), e.g. in a diffuseness combination unit 820, for example, according to the formula:
Ediff(VM)=(1/N)·(Ediff(SM 1)+ . . . +Ediff(SM N))
A more effective combination of the estimates Ediff(SM 1) to Ediff(SM N) could be carried out by considering the variance of the estimators, for instance, by considering the SNR.
The energy of the direct sound depends on the distance to the source due to the propagation. Therefore, Edir(SM 1) to Edir(SM N) may be modified to take this into account. This may be carried out, e.g., by a direct sound propagation adjustment unit 830. For example, if it is assumed that the energy of the direct sound field decays with 1 over the distance squared, then the estimate for the direct sound at the virtual microphone for the i-th spatial microphone may be calculated according to the formula:
Similarly to the diffuseness combination unit 820, the estimates of the direct sound energy obtained at different spatial microphones can be combined, e.g. by a direct sound combination unit 840. The result is Edir(VM), e.g., the estimate for the direct sound energy at the virtual microphone. The diffuseness at the virtual microphone Ψ(VM) may be computed, for example, by a diffuseness sub-calculator 850, e.g. according to the formula:
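The diffuseness estimation for the virtual microphone described above may be sketched as follows; simple averaging is used for both combination units, the direct energies are rescaled assuming a 1/r² decay, and the final diffuseness is taken as the ratio of diffuse energy to total energy. These concrete choices, as well as all names, are assumptions made for illustration.

```python
import numpy as np

def diffuseness_at_vm(pressures, diffusenesses, dists_sm_to_ipls, dist_vm_to_ipls):
    p = np.asarray(pressures)                # complex pressure P_i per spatial microphone
    psi = np.asarray(diffusenesses, float)   # diffuseness Psi_i per spatial microphone
    e_dir = (1.0 - psi) * np.abs(p) ** 2     # E_dir(SM i)
    e_diff = psi * np.abs(p) ** 2            # E_diff(SM i)
    e_diff_vm = e_diff.mean()                # diffuse energy assumed equal at all positions
    # rescale direct energies for 1/r^2 decay from each array's distance to the VM's distance
    ratios = np.asarray(dists_sm_to_ipls, float) / dist_vm_to_ipls
    e_dir_vm = (e_dir * ratios ** 2).mean()
    return e_diff_vm / (e_dir_vm + e_diff_vm)    # Psi(VM) in [0, 1]

# Example: moving the virtual microphone further from the IPLS than both arrays
# increases the estimated diffuseness.
print(diffuseness_at_vm([1.0 + 0j, 0.9 + 0j], [0.2, 0.3], [1.0, 1.2], 4.0))
```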
As mentioned above, in some cases, the sound events position estimation carried out by a sound events position estimator fails, e.g., in case of a wrong direction of arrival estimation.
Additionally, the reliability of the DOA estimates at the N spatial microphones may be considered. This may be expressed e.g. in terms of the variance of the DOA estimator or the SNR. Such information may be taken into account by the diffuseness sub-calculator 850, so that the VM diffuseness 103 can be artificially increased in case the DOA estimates are unreliable, since unreliable DOA estimates also imply that the position estimates 205 will be unreliable.
The apparatus 150 comprises a receiver 160 for receiving the audio data stream comprising the audio data. The audio data comprises one or more pressure values for each one of the one or more sound sources. Furthermore, the audio data comprises one or more position values indicating a position of one of the sound sources for each one of the sound sources. Moreover, the apparatus comprises a synthesis module 170 for generating the at least one audio output signal based on at least one of the one or more pressure values of the audio data of the audio data stream and based on at least one of the one or more position values of the audio data of the audio data stream. The audio data is defined for a time-frequency bin of a plurality of time-frequency bins. For each one of the sound sources, at least one pressure value is comprised in the audio data, wherein the at least one pressure value may be a pressure value relating to an emitted sound wave, e.g. originating from the sound source. The pressure value may be a value of an audio signal, for example, a pressure value of an audio output signal generated by an apparatus for generating an audio output signal of a virtual microphone, wherein the virtual microphone is placed at the position of the sound source.
Thus,
According to an embodiment, the receiver 160 may be adapted to receive the audio data stream comprising the audio data, wherein the audio data furthermore comprises one or more diffuseness values for each one of the sound sources. The synthesis module 170 may be adapted to generate the at least one audio output signal based on at least one of the one or more diffuseness values.
The audio data stream generated by the apparatus 200 may then be transmitted. Thus, the apparatus 200 may be employed on an analysis/transmitter side. The audio data stream comprises audio data which comprises one or more pressure values and one or more position values for each one of a plurality of sound sources, i.e. each one of the pressure values and the position values relates to a particular sound source of the one or more sound sources of the recorded audio scene. This means that the position values indicate positions of the sound sources and not of the recording microphones.
In a further embodiment, the determiner 210 may be adapted to determine the sound source data based on diffuseness information by at least one spatial microphone. The data stream generator 220 may be adapted to generate the audio data stream such that the audio data stream comprises the sound source data. The sound source data furthermore comprises one or more diffuseness values for each one of the sound sources.
As already stated, k and n denote the frequency and time indices, respectively. If desired and if the analysis allows it, more than one IPLS can be represented at a given time-frequency slot. This is depicted in
In the following, an apparatus for generating an audio data stream according to an embodiment is explained in more detail. As the apparatus of
The analysis module 410 computes the GAC stream from the recordings of the N spatial microphones. Depending on the number M of layers desired (e.g. the number of sound sources for which information shall be comprised in the audio data stream for a particular time-frequency bin), the type and number N of spatial microphones, different methods for the analysis are conceivable. A few examples are given in the following.
As a first example, parameter estimation for one sound source, e.g. one IPLS, per time-frequency slot is considered. In the case of M=1, the GAC stream can be readily obtained with the concepts explained above for the apparatus for generating an audio output signal of a virtual microphone, in that a virtual spatial microphone can be placed in the position of the sound source, e.g. in the position of the IPLS. This allows the pressure signals to be calculated at the position of the IPLS, together with the corresponding position estimates, and possibly the diffuseness. These three parameters are grouped together in a GAC stream and can be further manipulated by module 102 in
For example, the determiner may determine the position of a sound source by employing the concepts proposed for the sound events position estimation of the apparatus for generating an audio output signal of a virtual microphone. Moreover, the determiner may comprise an apparatus for generating an audio output signal and may use the determined position of the sound source as the position of the virtual microphone to calculate the pressure values (e.g. the values of the audio output signal to be generated) and the diffuseness at the position of the sound source.
In particular, the determiner 210, e.g., in
As another example, parameter estimation for 2 sound sources, e.g. 2 IPLS, per time-frequency slot is considered. If the analysis module 410 is to estimate two sound sources per time-frequency bin, then the following concept based on state-of-the-art estimators can be used.
ESPRIT ([26]) can be employed separately at each array to obtain two DOA estimates for each time-frequency bin. Due to a pairing ambiguity, this leads to two possible solutions for the position of the sources. As can be seen from
Ei,j=|Pi,1−Pi,2|+|Pj,1−Pj,2|, (1)
where (i, j)∈{(1, 2), (1′, 2′)} (see
While the modification module 610 of
The modifications of the audio data stream conducted by the modification modules 610, 660 may also be considered as modifications of the sound scene. Thus, the modification modules 610, 660 may also be referred to as sound scene manipulation modules.
The sound field representation provided by the GAC stream allows different kinds of modifications of the audio data stream, i.e. as a consequence, manipulations of the sound scene. Some examples in this context are:
In the following a layer of an audio data stream, e.g. a GAC stream, is assumed to comprise all audio data of one of the sound sources with respect to a particular time-frequency bin.
The demultiplexer 401 is configured to separate the different layers of the M-layer GAC stream and form M single-layer GAC streams. Moreover, the manipulation processor 420 comprises units 402, 403 and 404, which are applied on each of the GAC streams separately. Furthermore, the multiplexer 405 is configured to form the resulting M-layer GAC stream from the manipulated single-layer GAC streams.
Based on the position data from the GAC stream and the knowledge about the position of the real sources (e.g. talkers), the energy can be associated with a certain real source for every time-frequency bin. The pressure values P are then weighted accordingly to modify the loudness of the respective real source (e.g. talker). This necessitates a priori information or an estimate of the location of the real sound sources (e.g. talkers).
In some embodiments, if knowledge about the position of the real sources is available, then based on the position data from the GAC stream, the energy can be associated with a certain real source for every time-frequency bin.
The manipulation of the audio data stream, e.g. the GAC stream can take place at the modification module 630 of the apparatus 600 for generating at least one audio output signal of
For example, the audio data stream, i.e. the GAC stream, can be modified prior to transmission, or before the synthesis after transmission.
Unlike the modification module 630 of
At the transmitter/analysis side, the sound field representation (e.g., the GAC stream) is computed in unit 101 from the inputs 111 to 11N, i.e., the signals recorded with N≥2 spatial microphones, and from the inputs 121 to 12N, i.e., relative position and orientation of the spatial microphones.
The output of unit 101 is the aforementioned sound field representation, which in the following is denoted as Geometry-based spatial Audio Coding (GAC) stream. Similarly to the proposal in
The GAC stream may be further processed in the optional modification module 102, which may also be referred to as a manipulation unit. The modification module 102 allows for a multitude of applications. The GAC stream can then be transmitted or stored. The parametric nature of the GAC stream is highly efficient. At the synthesis/receiver side, one or more optional modification modules (manipulation units) 103 can be employed. The resulting GAC stream enters the synthesis unit 104 which generates the loudspeaker signals. Given the independence of the representation from the recording, the end user at the reproduction side can potentially manipulate the sound scene and decide the listening position and orientation within the sound scene freely.
The modification/manipulation of the audio data stream, e.g. the GAC stream can take place at modification modules 102 and/or 103 in
Examples of different concepts for the manipulation of the GAC stream are described in the following with reference to
1. Volume Expansion
It is assumed that a certain energy in the scene is located within volume V. The volume V may indicate a predefined area of an environment. Θ denotes the set of time-frequency bins (k, n) for which the corresponding sound sources, e.g. IPLS, are localized within the volume V.
If expansion of the volume V to another volume V′ is desired, this can be achieved by adding a random term to the position data in the GAC stream whenever (k, n)∈Θ (evaluated in the decision units 403) and substituting Q(k, n)=[X(k, n), Y(k, n), Z(k, n)]T (the layer index is dropped for simplicity) such that the outputs 431 to 43M of units 404 in
Q(k,n)=[X(k,n)+Φx(k,n);Y(k,n)+Φy(k,n);Z(k,n)+Φz(k,n)]T (2)
where Φx, Φy and Φz are random variables whose range depends on the geometry of the new volume V′ with respect to the original volume V. This concept can for example be employed to make a sound source be perceived as wider. In this example, the original volume V is infinitesimally small, i.e., the sound source, e.g. the IPLS, should be localized at the same point Q(k, n)=[X(k, n), Y(k, n), Z(k, n)]T for all (k, n)∈Θ. This mechanism may be seen as a form of dithering of the position parameter Q(k, n).
According to an embodiment, each one of the position values of each one of the sound sources comprises at least two coordinate values, and the modification module is adapted to modify the coordinate values by adding at least one random number to the coordinate values, when the coordinate values indicate that a sound source is located at a position within a predefined area of an environment.
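A minimal sketch of this position dithering is given below; the uniform distribution and its range are illustrative assumptions standing in for the random variables Φx, Φy and Φz of formula (2), and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def expand_position(q, in_volume, spread=(0.5, 0.5, 0.5)):
    """Dither the position Q(k, n) = [X, Y, Z]^T if the bin belongs to the set Theta."""
    q = np.asarray(q, float)
    if not in_volume(q):                 # (k, n) not in Theta: leave the position untouched
        return q
    phi = rng.uniform(-np.asarray(spread), np.asarray(spread))   # [Phi_x, Phi_y, Phi_z]
    return q + phi                       # formula (2)

# Example: widen a point-like source located at the origin.
print(expand_position((0.0, 0.0, 0.0), in_volume=lambda q: np.linalg.norm(q) < 0.1))
```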
2. Volume Transformation
In addition to the volume expansion, the position data from the GAC stream can be modified to relocate sections of space/volumes within the sound field. In this case as well, the data to be manipulated comprises the spatial coordinates of the localized energy.
V denotes again the volume which shall be relocated, and Θ denotes the set of all time-frequency bins (k, n) for which the energy is localized within the volume V. Again, the volume V may indicate a predefined area of an environment.
Volume relocation may be achieved by modifying the GAC stream, such that for all time-frequency bins (k,n)∈Θ, Q(k,n) are replaced by f(Q(k,n)) at the outputs 431 to 43M of units 404, where f is a function of the spatial coordinates (X, Y, Z), describing the volume manipulation to be performed. The function f might represent a simple linear transformation such as rotation, translation, or any other complex non-linear mapping. This technique can be used for example to move sound sources from one position to another within the sound scene by ensuring that Θ corresponds to the set of time-frequency bins in which the sound sources have been localized within the volume V. The technique allows a variety of other complex manipulations of the entire sound scene, such as scene mirroring, scene rotation, scene enlargement and/or compression etc. For example, by applying an appropriate linear mapping on the volume V, the complementary effect of volume expansion, i.e., volume shrinkage can be achieved. This could e.g. be done by mapping Q(k,n) for (k,n)∈Θ to f(Q(k,n))∈V′, where V′⊂V and V′ comprises a significantly smaller volume than V.
According to an embodiment, the modification module is adapted to modify the coordinate values by applying a deterministic function on the coordinate values, when the coordinate values indicate that a sound source is located at a position within a predefined area of an environment.
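The following sketch illustrates the volume transformation with one concrete, assumed choice of f, namely a rotation about the z axis followed by a translation; any other linear or non-linear mapping could be substituted. All names are illustrative.

```python
import numpy as np

def relocate_position(q, in_volume, angle_rad=np.pi / 2, translation=(1.0, 0.0, 0.0)):
    """Replace Q(k, n) by f(Q(k, n)) for bins whose position lies in the volume V."""
    q = np.asarray(q, float)
    if not in_volume(q):
        return q
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])     # rotation about the z axis
    return rotation @ q + np.asarray(translation, float)

# Example: a source localized at (1, 0, 0) inside V is rotated to (0, 1, 0)
# and then shifted to (1, 1, 0).
print(relocate_position((1.0, 0.0, 0.0), in_volume=lambda q: True))
```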
3. Position-Based Filtering
The geometry-based filtering (or position-based filtering) idea offers a method to enhance or completely/partially remove sections of space/volumes from the sound scene. Compared to the volume expansion and transformation techniques, in this case, however, only the pressure data from the GAC stream is modified by applying appropriate scalar weights.
In the geometry-based filtering, a distinction can be made between the transmitter-side 102 and the receiver-side modification module 103, in that the former one may use the inputs 111 to 11N and 121 to 12N to aid the computation of appropriate filter weights, as depicted in
For all (k, n)∈Θ, the complex pressure P(k, n) in the GAC stream is modified to ηP(k, n) at the outputs of 402, where η is a real weighting factor, for example computed by unit 402. In some embodiments, module 402 can be adapted to compute a weighting factor that also depends on the diffuseness.
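Per time-frequency bin, this modification reduces to a scalar weighting of the pressure values, as sketched below; the particular diffuseness-dependent weighting shown is only one plausible choice and is assumed for illustration.

```python
import numpy as np

def geometry_filter(P, theta_mask, eta, psi=None):
    """Scale the complex pressure P(k, n) by a real weight eta for bins in Theta.

    P    : (K, N) complex pressure values of the GAC stream
    eta  : real weighting factor (scalar or (K, N) array)
    psi  : optional (K, N) diffuseness; if given, diffuse bins are weighted less
           aggressively (this particular dependence is an assumption of the sketch)
    """
    weight = np.where(theta_mask, eta, 1.0)
    if psi is not None:
        weight = np.where(theta_mask, psi + (1.0 - psi) * eta, 1.0)
    return P * weight
```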
The concept of geometry-based filtering can be used in a plurality of applications, such as signal enhancement and source separation. Some of the applications and the necessitated a priori information comprise:
In the following, synthesis modules according to embodiments are described. According to an embodiment, a synthesis module may be adapted to generate at least one audio output signal based on at least one pressure value of audio data of an audio data stream and based on at least one position value of the audio data of the audio data stream. The at least one pressure value may be a pressure value of a pressure signal, e.g. an audio signal.
The principles of operation behind the GAC synthesis are motivated by the assumptions on the perception of spatial sound given in
In particular, the spatial cues necessitated to correctly perceive the spatial image of a sound scene can be obtained by correctly reproducing one direction of arrival of nondiffuse sound for each time-frequency bin. The synthesis is accordingly divided into two stages.
The first stage considers the position and orientation of the listener within the sound scene and determines which of the M IPLS is dominant for each time-frequency bin. Consequently, its pressure signal Pdir and direction of arrival θ can be computed. The remaining sources and diffuse sound are collected in a second pressure signal Pdiff.
The second stage is identical to the second half of the DirAC synthesis described in [27]. The nondiffuse sound is reproduced with a panning mechanism which produces a point-like source, whereas the diffuse sound is reproduced from all loudspeakers after having been decorrelated.
The first stage synthesis unit 501 computes the pressure signals Pdir and Pdiff, which need to be played back differently. In fact, while Pdir comprises sound which has to be played back coherently in space, Pdiff comprises diffuse sound. The third output of the first stage synthesis unit 501 is the Direction Of Arrival (DOA) θ 505 from the point of view of the desired listening position, i.e. a direction of arrival information. Note that the Direction of Arrival (DOA) may be expressed as an azimuthal angle in 2D space, or by an azimuth and elevation angle pair in 3D. Equivalently, a unit norm vector pointed at the DOA may be used. The DOA specifies from which direction (relative to the desired listening position) the signal Pdir should come. The first stage synthesis unit 501 takes the GAC stream as an input, i.e., a parametric representation of the sound field, and computes the aforementioned signals based on the listener position and orientation specified by input 141. In fact, the end user can freely decide the listening position and orientation within the sound scene described by the GAC stream.
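For reference, the two DOA representations can be converted into each other as follows; the axis convention (x pointing forward, z pointing up) is an assumption of this sketch.

```python
import numpy as np

def doa_vector_to_angles(v):
    """Convert a unit-norm DOA vector into an (azimuth, elevation) pair in radians."""
    x, y, z = v
    azimuth = np.arctan2(y, x)
    elevation = np.arctan2(z, np.hypot(x, y))
    return azimuth, elevation

def doa_angles_to_vector(azimuth, elevation):
    """Convert (azimuth, elevation) back into a unit-norm DOA vector."""
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])
```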
The second stage synthesis unit 502 computes the L loudspeaker signals 511 to 51L based on the knowledge of the loudspeaker setup 131. Please recall that unit 502 is identical to the second half of the DirAC synthesis described in [27].
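As a strongly simplified illustration of such a second stage (the actual DirAC synthesis of [27] comprises dedicated panning and decorrelation stages), the direct signal may be panned with direction-dependent gains while the diffuse signal is distributed over all loudspeakers; the cosine-law panning and the omitted decorrelators are simplifications assumed here.

```python
import numpy as np

def render_loudspeakers(P_dir, P_diff, doa, speaker_dirs):
    """Produce L loudspeaker signals from one direct and one diffuse pressure value.

    doa          : unit-norm DOA vector of the direct sound
    speaker_dirs : (L, 3) unit-norm vectors of the loudspeaker directions
    Decorrelation of the diffuse part is omitted in this sketch.
    """
    L = speaker_dirs.shape[0]
    gains = np.clip(speaker_dirs @ doa, 0.0, None)       # simple cosine-law panning
    norm = np.linalg.norm(gains)
    gains = gains / norm if norm > 0 else np.full(L, 1.0 / np.sqrt(L))
    return gains * P_dir + P_diff / np.sqrt(L)            # energy-preserving diffuse spread
```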
The i-th GAC stream comprises a pressure signal Pi, a diffuseness Ψi and a position vector Qi=[Xi, Yi, Zi]T. The pressure signal Pi comprises one or more pressure values. The position vector is a position value. At least one audio output signal is now generated based on these values.
The pressure signals for direct and diffuse sound, Pdir,i and Pdiff,i, are obtained from Pi by applying a proper factor derived from the diffuseness Ψi. The pressure signals comprising direct sound enter a propagation compensation block 602, which computes the delays corresponding to the signal propagation from the sound source position, e.g. the IPLS position, to the position of the listener. In addition, the block also computes the gain factors necessitated for compensating the different magnitude decays. In other embodiments, only the different magnitude decays are compensated, while the delays are not compensated.
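A possible per-layer realization of this split and compensation is sketched below; the square-root split factors, the 1/r decay model and the reference distance are common parametric-audio assumptions rather than requirements of the embodiment.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def split_and_compensate(P_i, psi_i, Q_i, listener_pos, ref_dist=1.0):
    """Split P_i into direct/diffuse parts and compensate the propagation to the listener.

    P_i, psi_i   : complex pressure and diffuseness of layer i for one (k, n) bin
    Q_i          : (3,) position of the IPLS of layer i
    listener_pos : (3,) desired listening position
    """
    P_dir = np.sqrt(1.0 - psi_i) * P_i        # direct part (assumed split rule)
    P_diff = np.sqrt(psi_i) * P_i             # diffuse part

    r = np.linalg.norm(np.asarray(Q_i) - np.asarray(listener_pos))
    # Distance-dependent gain following a 1/r law relative to ref_dist; whether this
    # amplifies or attenuates depends on the reference point assumed for P_i.
    gain = ref_dist / max(r, 1e-3)
    delay = r / SPEED_OF_SOUND                # propagation delay in seconds

    return gain * P_dir, P_diff, delay
```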
The compensated pressure signals, denoted by P̃dir,i, enter block 603, which outputs the index imax of the strongest input.
The main idea behind this mechanism is that, of the M IPLS active in the time-frequency bin under study, only the strongest one (with respect to the listener position) is going to be played back coherently (i.e., as direct sound). Blocks 604 and 605 select from their inputs the one which is defined by imax. Block 607 computes the direction of arrival of the imax-th IPLS with respect to the position and orientation of the listener (input 141). The output of block 604, P̃dir,imax, is thus the direct sound signal of the dominant IPLS.
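Conceptually, this selection amounts, per time-frequency bin, to an argmax over the compensated direct signals together with a DOA computation, as in the following sketch; the DOA is returned as a unit-norm vector pointing from the listener towards the dominant IPLS, and listener orientation handling is omitted.

```python
import numpy as np

def select_dominant(P_dir_comp, Q, listener_pos):
    """Pick the strongest compensated direct signal and its DOA for one (k, n) bin.

    P_dir_comp   : (M,) complex compensated direct pressures, one per layer/IPLS
    Q            : (M, 3) IPLS positions of the M layers
    listener_pos : (3,) desired listening position
    """
    i_max = int(np.argmax(np.abs(P_dir_comp)))       # index of the strongest input
    direction = np.asarray(Q)[i_max] - np.asarray(listener_pos)
    doa = direction / np.linalg.norm(direction)      # unit-norm vector towards the IPLS
    return i_max, P_dir_comp[i_max], doa
```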
The apparatus 960 e.g. in
The apparatus 960 for generating an audio output signal of a virtual microphone is arranged to provide the audio output signal to the apparatus 970 for generating an audio data stream. The apparatus 970 for generating an audio data stream comprises a determiner, for example, the determiner 210 described with respect to
The apparatus 980 for generating a virtual microphone data stream feeds the generated virtual microphone signal into the apparatus 980 for generating at least one audio output signal based on an audio data stream. It should be noted, that the virtual microphone data stream is an audio data stream. The apparatus 980 for generating at least one audio output signal based on an audio data stream generates an audio output signal based on the virtual microphone data stream as audio data stream, for example, as described with respect to the apparatus of
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding unit or item or feature of a corresponding apparatus.
The inventive decomposed signal can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
Some embodiments according to the invention comprise a non-transitory data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods may be performed by any hardware apparatus.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which will be apparent to others skilled in the art and which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
[3] V. Pulkki, “Spatial sound reproduction with directional audio coding,” J. Audio Eng. Soc., vol. 55, no. 6, pp. 503-516, June 2007.
[7] J. Herre, C. Falch, D. Mahne, G. Del Galdo, M. Kallinger, and O. Thiergart, “Interactive teleconferencing combining spatial audio object coding and DirAC technology,” in Audio Engineering Society Convention 128, London, UK, May 2010.
This application is a continuation of copending International Application No. PCT/EP2011/071644, filed Dec. 2, 2011, which is incorporated herein by reference in its entirety, and additionally claims priority from U.S. Application No. 61/419,623, filed Dec. 3, 2010, and from U.S. Application No. 61/420,099, filed Dec. 6, 2010, all of which are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
6072878 | Moorer | Jun 2000 | A |
6600824 | Matsuo | Jul 2003 | B1 |
6618485 | Matsuo | Sep 2003 | B1 |
6904152 | Moorer | Jun 2005 | B1 |
7606373 | Moorer | Oct 2009 | B2 |
8229754 | Ramirez | Jul 2012 | B1 |
8405323 | Finney et al. | Mar 2013 | B2 |
9299353 | Sole | Mar 2016 | B2 |
20010038580 | Jung | Nov 2001 | A1 |
20010055397 | Norris | Dec 2001 | A1 |
20020001389 | Amiri et al. | Jan 2002 | A1 |
20040138873 | Heo et al. | Jul 2004 | A1 |
20040157661 | Ueda et al. | Aug 2004 | A1 |
20040186734 | Heo et al. | Sep 2004 | A1 |
20050141728 | Moorer | Jun 2005 | A1 |
20050281410 | Grosvenor et al. | Dec 2005 | A1 |
20060002566 | Choi et al. | Jan 2006 | A1 |
20060010445 | Peterson et al. | Jan 2006 | A1 |
20060050897 | Asada | Mar 2006 | A1 |
20060171547 | Lokki et al. | Aug 2006 | A1 |
20060269070 | Miura | Nov 2006 | A1 |
20070032894 | Uenishi et al. | Feb 2007 | A1 |
20070203598 | Seo et al. | Aug 2007 | A1 |
20070297616 | Plogsties et al. | Dec 2007 | A1 |
20080004729 | Hiipakka | Jan 2008 | A1 |
20080298597 | Turku | Dec 2008 | A1 |
20080298610 | Virolainen | Dec 2008 | A1 |
20090043591 | Breebaart et al. | Feb 2009 | A1 |
20090051624 | Finney et al. | Feb 2009 | A1 |
20090122994 | Ohta | May 2009 | A1 |
20090129609 | Oh et al. | May 2009 | A1 |
20090147961 | Lee et al. | Jun 2009 | A1 |
20090252356 | Goodwin et al. | Oct 2009 | A1 |
20090264114 | Virolainen | Oct 2009 | A1 |
20100061558 | Faller | Mar 2010 | A1 |
20100114582 | Beack | May 2010 | A1 |
20100169103 | Pulkki et al. | Jul 2010 | A1 |
20100198601 | Mouhssine | Aug 2010 | A1 |
20100208904 | Nakajima et al. | Aug 2010 | A1 |
20110015770 | Seo | Jan 2011 | A1 |
20110216908 | Galdo | Sep 2011 | A1 |
20110222694 | Del Galdo | Sep 2011 | A1 |
20110249821 | Jaillet | Oct 2011 | A1 |
20110313763 | Amada | Dec 2011 | A1 |
20120014535 | Oouchi et al. | Jan 2012 | A1 |
20120020481 | Usami | Jan 2012 | A1 |
20120140947 | Shin | Jun 2012 | A1 |
20130016842 | Schultz-Amling et al. | Jan 2013 | A1 |
Number | Date | Country |
---|---|---|
1452851 | Oct 2003 | CN |
1714600 | Dec 2005 | CN |
101473645 | Jul 2009 | CN |
101485233 | Jul 2009 | CN |
2154910 | Feb 2010 | EP |
2414369 | Nov 2005 | GB |
H01109996 | Apr 1989 | JP |
H04181898 | Jun 1992 | JP |
H1063470 | Mar 1998 | JP |
2001045590 | Feb 2001 | JP |
2002051399 | Feb 2002 | JP |
2004193877 | Jul 2004 | JP |
2004242728 | Sep 2004 | JP |
2006503491 | Jan 2006 | JP |
2008028700 | Feb 2008 | JP |
2008197577 | Aug 2008 | JP |
2008245984 | Oct 2008 | JP |
2009089315 | Apr 2009 | JP |
2009216473 | Sep 2009 | JP |
2009246827 | Oct 2009 | JP |
2009537876 | Oct 2009 | JP |
2010147692 | Jul 2010 | JP |
2010525646 | Jul 2010 | JP |
2010193451 | Sep 2010 | JP |
2010232717 | Oct 2010 | JP |
2315371 | Jan 2008 | RU |
2383939 | Mar 2010 | RU |
2396608 | Aug 2010 | RU |
200701823 | Jan 2007 | TW |
WO-2004077884 | Sep 2004 | WO |
2005098826 | Oct 2005 | WO |
WO-2006006935 | Jan 2006 | WO |
2006072270 | Jul 2006 | WO |
2006105105 | Oct 2006 | WO |
WO-2007025033 | Mar 2007 | WO |
WO-2008128989 | Oct 2008 | WO |
2009046223 | Apr 2009 | WO |
2009089353 | Jul 2009 | WO |
2010017978 | Feb 2010 | WO |
WO-2010028784 | Mar 2010 | WO |
2010122455 | Oct 2010 | WO |
2010128136 | Nov 2010 | WO |
Entry |
---|
Schultz-Amling et al., “Virtual acoustic zoom based on parametric spatial audio representations”, U.S. Appl. No. 61/287,596, Dec. 17, 2009, 11 pages. |
Chien, Jen-Tzung et al., “Car Speech Enhancement Using Microphone Array Beamforming and Post Filters”, Proceedings of the 9th Australian International Conference on Speech Science & Technology; Melbourne, Dec. 2-5, 2002, pp. 568-572. |
Del Galdo, G. et al., “Generating Virtual Microphone Signals Using Geometrical Information Gathered by Distributed Arrays”, IEEE, 2011 Joint Workshop on Hands-free Speech Communications and Microphone Arrays., May 30-Jun. 1, 2011, pp. 185-190. |
Del Galdo et al., “Optimized Parameter Estimation in Directional Audio Coding Using Nested Microphone Arrays”, AES Convention Paper 7911; Presented at the 127th Convention; New York, NY, USA, Oct. 9-12, 2009, 9 pages. |
Engdegard, J. et al., “Spatial Audio Object Coding (SAOC)—The Upcoming MPEG Standard on Parametric Object Based Audio Coding”, Audio Engineering Society Convention Paper, Presented at the 124th Convention, Amsterdam, The Netherlands, May 17-20, 2008, 15 pages. |
Fahy, F.J., “Sound energy and sound intensity”, Chapter 4, Essex: Elsevier Science Publishers Ltd., 1989, pp. 38-88. |
Faller, C. , “Microphone Front-Ends for Spatial Audio Coders”, Audio Engineering Society Convention Paper 7508; Presented at the 125th Convention, San Francisco, CA, USA, Oct. 2-5, 2008, 10 pages. |
Faller, C., “Obtaining a Highly Directive Center Channel from Coincident Stereo Microphone Signals”, AES Convention Paper 7380; Presented at the 124th Convention; Amsterdam, The Netherlands, May 17-20, 2008, 7 pages. |
Furness, R. , “Ambisonics—An Overview”, Minim Electronics Limited, Burnham, Slough,U.K.; AES 8th International Conference; Apr. 1990, pp. 181-190. |
Gallo, Emmanuel et al., “Extracting and Re-Rendering Structured Auditory Scenes from Field Recordings”, AES 30th Int'l Conference; Saariselkä, Finland, Mar. 15-17, 2007, 11 pages. |
Gerzon, M., “Ambisonics in Multichannel Broadcasting and Video”, Journal Audio Engineering Society, vol. 33, No. 11, Nov. 1985, pp. 859-871. |
Herre, J. et al., “Interactive Teleconferencing Combining Spatial Audio Object Coding and DirAC Technology”, AES Convention Paper 8098; Presented at the 128th Convention; London, UK, May 22-25, 2010, 12 pages. |
Herre, J. et al., “MPEG Surround—The ISO/MPEG Standard for Efficient and Compatible Multi-Channel Audio Coding”, Audio Engineering Society Convention Paper, Presented at the 122nd Convention, Vienna, Austria, May 5-8, 2007, 23 pages. |
Kallinger, M. et al. “A Spatial Filtering Approach for Directional Audio Coding”, AES Convention Paper 7653; Presented at the 126th Convention; Munich, Germany, May 7-10, 2009, 10 pages. |
Kallinger, M. et al., “Enhanced Direction Estimation using Microphone Arrays for Directional Audio Coding”, in Hands-Free Speech Communication and Microphone Arrays (HSCMA), May 2008, pp. 45-48. |
Kuntz, A. et al., “Limitations in the Extrapolation of Wave Fields from Circular Measurements”, 15th European Signal Processing Conference (EUSIPCO 2007), Poznan, Poland, Sep. 3-7, 2007, pp. 2331-2335. |
Marro, C. et al., “Analysis of Noise Reduction and Dereverberation Techniques Based on Microphone Arrays With Postfiltering”, IEEE Transactions on Speech and Audio Processing, vol. 6, No. 3, May 1998, pp. 240-259. |
Pulkki, V., “Directional audio coding in spatial sound reproduction and stereo upmixing”, AES 28th International Conference, Piteå, Sweden, Jun. 30-Jul. 2, 2006, pp. 1-8. |
Pulkki, V., “Spatial Sound Reproduction with Directional Audio Coding”, J. Audio Eng. Soc., Helsinki Univ. of Technology, Finland; 55(6), Jun. 2007, pp. 503-516. |
Rickard, S. et al., “On the Approximate W-Disjoint Orthogonality of Speech”, In the International Conference on Acoustics, Speech and Signal Processing, Apr. 2002, vol. 1, pp. I-529-I-532. |
Roy, R. et al. , “Direction-of-Arrival Estimation by Subspace Rotation Methods—ESPRIT”, In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Stanford, CA, USA, Apr. 1986, pp. 2495-2498. |
Roy, R. et al., “ESPRIT—Estimation of Signal Parameters Via Rotational Invariance Techniques”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, No. 7, Jul. 1989, pp. 984-995. |
Schmidt, R. , “Multiple Emitter Location and Signal Parameter Estimation”, IEEE Transactions on Antennas and Propagation, vol. 34, No. 3, Mar. 1986, pp. 276-280. |
Schultz-Amling, R. et al., “Acoustical Zooming Based on a Parametric Sound Field Representation”, AES Convention Paper 8120; Presented at the 128th Convention; London, UK, May 22-25, 2010, 9 pages. |
Schultz-Amling, R. et al., “Planar Microphone Array Processing for the Analysis and Reproduction of Spatial Audio using Directional Audio Coding”, Audio Engineering Society, Convention Paper 7375, Presented at the 124th Convention, Amsterdam, The Netherlands, May 17-20, 2008, 10 pages. |
Simmer, K. U. et al., “Time Delay Compensation for Adaptive Multichannel Speech Enhancement Systems”, Proceedings of ISSSE-92, Paris, Sep. 1-4, 1992, 4 pages. |
Steele, Michael J. , “Optimal Triangulation of Random Samples in the Plane”, The Annals of Probability, vol. 10, No. 3, Aug. 1982, pp. 548-553. |
Vilkamo, J. et al., “Directional Audio Coding: Virtual Microphone-Based Synthesis and Subjective Evaluation”, J. Audio Eng. Soc., vol. 57, No. 9., Sep. 2009, pp. 709-724. |
Walther, A. et al., “Linear Simulation of Spaced Microphone Arrays Using B-Format Recordings”, Audio Engineering Society, Convention Paper 7987, Presented at the 128th Convention, May 22-25, 2010, London, UK, 7 pages. |
Williams, E.G., “Fourier Acoustics: Sound Radiation and Nearfield Acoustical Holography; Chapter 3, The Inverse Problem: Planar Nearfield Acoustical Holography”, Academic Press, Jun. 1999, pp. 89-114. |
Karbasi, Amin et al., “A New DOA Estimation Method Using a Circular Microphone Array”, School of Comp. and commun. Sciences, Ecole Polytechnique Federale de Lausanne CH-1015 Lausanne, Switzerland, 2007, 778-782. |
Number | Date | Country | |
---|---|---|---|
20130268280 A1 | Oct 2013 | US |
Number | Date | Country | |
---|---|---|---|
61419623 | Dec 2010 | US | |
61420099 | Dec 2010 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2011/071644 | Dec 2011 | US |
Child | 13907510 | US |