The field of the invention pertains to the interpolation of a sound (or acoustic) field having been emitted by one or several source(s) and having been captured by a finite set of microphones.
The invention has numerous applications, in particular, but without limitation, in the virtual reality field, for example to enable a listener to move in a sound stage that is rendered to him, or in the analysis of a sound stage, for example to determine the number of sound sources present in the analysed stage, or in the field of rendering a multi-channel scene, for example within an MPEG-H 3D decoder, etc.
In order to interpolate a sound field at a given position of a sound stage, a conventional approach consists in estimating the sound field at the given position using a linear interpolation between the fields as captured and encoded by the different microphones of the stage. The interpolation coefficients are estimated by minimising a cost function.
In such an approach, the known techniques favor a capture of the sound field by so-called ambisonic microphones. More particularly, an ambisonic microphone encodes and outputs the sound field captured thereby in an ambisonic format. The ambisonic format is characterised by components consisting of the projection of the sound field according to different directions. These components are grouped in orders. The zeroth order encodes the instantaneous acoustic pressure captured by the microphone, the first order encodes the three pressure gradients according to the three space axes, etc. As the order increases, the spatial resolution of the representation of the field increases. The ambisonic format in its complete representation, i.e. to the infinite order, allows encoding the field at every point inside the maximum sphere devoid of sound sources and having the physical location of the microphone having performed the capture as its center. In theory, using one single microphone, such an encoding of the sound field allows moving inside the area delimited by the source closest to the microphone, yet without circumventing any of the considered sources.
Thus, such microphones allow representing the sound field in three dimensions through a decomposition of the latter into spherical harmonics. This decomposition is particularly suited to so-called 3DoF (standing for "Degrees of Freedom") navigation, for example a navigation according to the three dimensions. It is actually this format that has been retained for immersive contents on YouTube's virtual reality channel or on Facebook-360.
However, the interpolation methods of the prior art generally assume that there is a pair of microphones at an equal distance from the position of the listener, as in the method disclosed in the conference article of A. Southern, J. Wells and D. Murphy, "Rendering walk-through auralisations using wave-based acoustical models", 17th European Signal Processing Conference, 2009, pp. 715-719. Such a distance equality condition is impossible to guarantee in practice. Moreover, such approaches give interesting results only when the microphone network is dense in the stage, which is rarely the case in practice.
Thus, there is a need for an improved method for interpolating a sound field. In particular, the method should allow estimating the sound field at the interpolation position so that the considered field is coherent with the position of the sound sources. For example, a listener located at the interpolation position should feel as if the interpolated field actually arrives from the direction of the sound source(s) of the sound stage when the considered field is rendered to him (for example, to enable the listener to navigate in the sound stage).
There is also a need for controlling the computing complexity of the interpolation method, for example to enable a real-time implementation on devices with a limited computing capacity (for example, on a mobile terminal, a virtual reality headset, etc.).
In an embodiment of the invention, a method for interpolating a sound field captured by a plurality of N microphones each outputting said encoded sound field in a form comprising at least one captured pressure and an associated pressure gradient vector, is provided. Such a method comprises an interpolation of said sound field at an interpolation position outputting an interpolated encoded sound field as a linear combination of said N encoded sound fields each weighted by a corresponding weighting factor. The method further comprises an estimation of said N weighting factors at least from:
Thus, the invention provides a novel and inventive solution for carrying out an interpolation of a sound field captured by at least two microphones, for example in a stage comprising one or several sound source(s).
More particularly, the proposed method takes advantage of the encoding of the sound field in a form providing access to the pressure gradient vector, in addition to the pressure. In this manner, the pressure gradient vector of the interpolated field remains coherent with that of the sound field as emitted by the source(s) of the stage at the interpolation position. For example, a listener located at the interpolation position and listening to the interpolated field feels as if the field rendered to him is coherent with the sound source(s) (i.e. the field rendered to him actually arrives from the direction of the considered sound source(s)).
Moreover, the use of an estimated power of the sound field at the interpolation position to estimate the weighting factors allows keeping a low computing complexity. For example, this enables a real-time implementation on devices with a limited computing capacity.
According to one embodiment, the estimation implements a resolution of the equation

Σ_i a_i(t) Ŵ_i²(t) x_i(t) = Ŵ_a²(t) x_a(t), with:
For example, the considered equation is solved in the sense of mean squared error minimisation, for example by minimising the cost function ∥Σ_i a_i(t) Ŵ_i²(t) x_i(t) − Ŵ_a²(t) x_a(t)∥². In practice, the solving method (for example, the Simplex algorithm) is selected according to the overdetermined (more equations than microphones) or underdetermined (more microphones than equations) nature of the system.
According to one embodiment, the resolution is performed with the constraint that Σ_i a_i(t) Ŵ_i²(t) = Ŵ_a²(t).
According to one embodiment, the resolution is further performed with the constraint that the N weighting factors ai(t) are positive or zero.
Thus, phase reversals are avoided, thereby leading to improved results. Moreover, solving of the aforementioned equation is accelerated.
According to one embodiment, the estimation also implements a resolution of the equation α Σ_i a_i(t) Ŵ_i²(t) = α Ŵ_a²(t), with α a homogenisation factor.
According to one embodiment, the homogenisation factor α is proportional to the L-2 norm of the vector x_a(t).
According to one embodiment, the estimation comprises:
Thus, using the effective power, the variations of the instantaneous power W_i²(t) are smoothed over time. In this manner, the noise that might affect the weighting factors during their estimation is reduced. Thus, the interpolated sound field is even more stable.
According to one embodiment, the estimate Ŵ_a²(t) of the power of the sound field at the interpolation position is estimated from the instantaneous sound power W_i²(t) captured by the one, among the N microphones, closest to the interpolation position, or from the estimate Ŵ_i²(t) of the instantaneous sound power W_i²(t) captured by the one, among the N microphones, closest to the interpolation position.
According to one embodiment, the estimate Ŵ_a²(t) of the power of the sound field at the interpolation position is estimated from a barycentre of the N instantaneous sound powers W_i²(t) captured by the N microphones, respectively from a barycentre of the N estimates Ŵ_i²(t) of the N instantaneous sound powers W_i²(t) captured by the N microphones. A coefficient weighting the instantaneous sound power W_i²(t), respectively weighting the estimate Ŵ_i²(t) of the instantaneous sound power W_i²(t) captured by the microphone bearing the index i, in the barycentre is inversely proportional to a normalised version of the distance between the position of the microphone bearing the index i outputting the pressure W_i(t) and the said interpolation position. The distance is expressed in the sense of an L-p norm.
Thus, the power of the sound field at the interpolation position is accurately estimated based on the pressures output by the microphones. In particular, when p is selected equal to two, the decay law of the pressure of the sound field is met, leading to good results irrespective of the configuration of the stage.
According to one embodiment, the interpolation method further comprises, prior to the interpolation, a selection of the N microphones among Nt microphones, Nt>N.
Thus, the weighting factors may be obtained through a determined or overdetermined system of equations, thereby allowing avoiding or, at the least, minimising timbre changes, perceptible by the ear, over the interpolated sound field.
According to one embodiment, the N selected microphones are those the closest to the interpolation position among the Nt microphones.
According to one embodiment, the selection comprises:
Thus, the microphones are selected so as to be distributed around the interpolation position.
According to one embodiment, the median vector u_12(t) is expressed as

u_12(t) = (u_i1(t) + u_i2(t)) / ∥u_i1(t) + u_i2(t)∥, with u_i(t) = (x_i(t) − x_a(t)) / ∥x_i(t) − x_a(t)∥,

with x_a(t) the vector representative of the interpolation position, x_i(t) the vector representative of the position of the microphone bearing the index i, and the index of the third microphone minimising the scalar product ⟨u_12(t), u_i(t)⟩ among the Nt indexes of the microphones.
According to one embodiment, the interpolation method further comprises, for a given encoded sound field among the N encoded sound fields output by the N microphones, a transformation of the given encoded sound field by application of a perfect reconstruction filter bank outputting M field frequency components associated with the given encoded sound field, each field frequency component among the M field frequency components being located in a distinct frequency sub-band. The transformation repeated for the N encoded sound fields outputs N corresponding sets of M field frequency components. For a given frequency sub-band among the M frequency sub-bands, the interpolation outputs a field frequency component interpolated at the interpolation position and located within the given frequency sub-band, the interpolated field frequency component being expressed as a linear combination of the N field frequency components, among the N sets, located in the given frequency sub-band. The interpolation repeated for the M frequency sub-bands outputs M interpolated field frequency components at the interpolation position, each interpolated field frequency component among the M interpolated field frequency components being located in a distinct frequency sub-band.
Thus, the results are improved in the case where the sound field is generated by a plurality of sound sources.
According to one embodiment, the interpolation method further comprises an inverse transformation of said transformation. The inverse transformation applied to the M interpolated field frequency components outputs the interpolated encoded sound field at the interpolation position.
According to one embodiment, the perfect reconstruction filter bank belongs to the group comprising:
The invention also relates to a method for rendering a sound field. Such a method comprises:
The invention also relates to a computer program, comprising program code instructions for the implementation of an interpolation or rendering method as described before, according to any one of its different embodiments, when said program is executed by a processor.
In another embodiment of the invention, a device for interpolating a sound field captured by a plurality of N microphones, each outputting the encoded sound field in a form comprising at least one captured pressure and an associated pressure gradient vector, is provided. Such an interpolation device comprises a reprogrammable computing machine or a dedicated computing machine, adapted and configured to implement the steps of the previously-described interpolation method (according to any one of its different embodiments).
Thus, the features and advantages of this device are the same as those of the previously-described interpolation method. Consequently, they are not detailed further.
Other objects, features and advantages of the invention will appear more clearly upon reading the following description, provided merely as an illustrative and non-limiting example, with reference to the figures, among which:
In all figures of the present document, identical elements and steps bear the same reference numeral.
The general principle of the invention is based on the encoding of the sound field, by the microphones capturing the considered sound field, in a form comprising at least one captured pressure and an associated pressure gradient. In this manner, the pressure gradient of the field interpolated through a linear combination of the sound fields encoded by the microphones remains coherent with that of the sound field as emitted by the source(s) of the scene at the interpolation position. Moreover, the method according to the invention bases the estimation of the weighting factors involved in the considered linear combination on an estimate of the power of the sound field at the interpolation position. Thus, a low computing complexity is obtained.
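For illustration only, this general principle may be sketched in Python as follows (a minimal sketch, assuming the encoded fields and the weighting factors a_i(t), obtained as described hereinbelow, are available as arrays; the function name is illustrative):

    import numpy as np

    def interpolate_field(encoded_fields, weights):
        # encoded_fields: array of shape (N, C); row i holds the C components
        # of the field encoded by microphone i at time t (pressure W_i,
        # gradient X_i, Y_i, Z_i, and possibly higher-order components).
        # weights: array of shape (N,) holding the weighting factors a_i(t).
        encoded_fields = np.asarray(encoded_fields, dtype=float)
        weights = np.asarray(weights, dtype=float)
        # linear combination of the N encoded fields, shape (C,)
        return weights @ encoded_fields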
In the following, a particular example of application of the invention to the context of navigation of a listener in a sound stage is considered. Of course, it should be noted that the invention is not limited to this type of application and may advantageously be used in other fields such as the rendering of a multi-channel scene, the compression of a multi-channel scene, etc.
Moreover, in the present application:
As of now, a sound stage 100, wherein a listener 110 moves, and a sound field, having been diffused by sound sources 100s and having been captured by microphones 100m, are presented, with reference to [
More particularly, the listener 110 is provided with a headset equipped with loudspeakers 110hp enabling rendering of the interpolated sound field at the interpolation position occupied thereby. For example, it consists of Hi-Fi headphones, or a virtual reality headset such as Oculus, HTC Vive or Samsung Gear. In this instance, the sound field is interpolated and rendered through the implementation of the rendering method described hereinbelow with reference to [
Moreover, the sound field captured by the microphones 100m is encoded in a form comprising a captured pressure and an associated pressure gradient.
In other non-illustrated embodiments, the sound field captured by the microphones is encoded in a form comprising the captured pressure, the associated pressure gradient vector as well as all or part of the higher order components of the sound field in the ambisonic format.
Back to [
It is shown that this vector is orthogonal to the wavefront and points in the direction of propagation of the sound wave, namely opposite to the position of the emitter source: in this way, it is directly correlated with the perception of the wavefront. This is particularly obvious when considering a field generated by one single far point source s(t) propagating in an anechoic environment. The ambisonics theory states that, for such a plane wave with an incidence (ϑ, φ), where ϑ is the azimuth and φ the elevation, the first-order sound field is given by the following equation (up to the normalisation of the encoding convention):

W(t) = s(t), X(t) = s(t) cos ϑ cos φ, Y(t) = s(t) sin ϑ cos φ, Z(t) = s(t) sin φ.
In this case, the full-band acoustic intensity I⃗(t) is equal, up to a multiplying coefficient, to:

I⃗(t) ∝ −s²(t) (cos ϑ cos φ, sin ϑ cos φ, sin φ)^T.
Hence, we see that it points opposite to the direction of the emitter source, and the direction of arrival (ϑ, φ) of the wavefront may be estimated by the following trigonometric relationships:

ϑ = arctan( −I_y(t) / −I_x(t) ), φ = arctan( −I_z(t) / √(I_x²(t) + I_y²(t)) ),

with I_x(t), I_y(t) and I_z(t) the three components of the intensity vector I⃗(t).
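For illustration, this direction-of-arrival estimation may be sketched in Python as follows (a minimal sketch; the sign and normalisation conventions follow the (W, X, Y, Z) encoding recalled above and are assumptions depending on the ambisonic convention actually used):

    import numpy as np

    def direction_of_arrival(w, x, y, z):
        # w, x, y, z: 1-D arrays holding one frame of the first-order
        # ambisonic components.
        # Full-band intensity, up to a multiplying coefficient; with this
        # encoding convention it points away from the source.
        ix = -np.mean(w * x)
        iy = -np.mean(w * y)
        iz = -np.mean(w * z)
        # Reverse the intensity to point back toward the source.
        azimuth = np.arctan2(-iy, -ix)
        elevation = np.arctan2(-iz, np.hypot(ix, iy))
        return azimuth, elevation  # radians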
As of now, a method for interpolating the sound field captured by the microphones 100m of the stage 100 according to an embodiment of the invention is presented, with reference to [
Such a method comprises a step E200 of selecting N microphones among the Nt microphones of the stage 100. It should be noted that in the embodiment represented in [
More particularly, as discussed hereinbelow in connection with steps E210 and E210a, the method according to the invention implements the resolution of systems of equations (i.e. [Math 4] under different constraint alternatives (i.e. hyperplane and/or positive weighting factors) and [Math 5]). In practice, it turns out that the resolution of the considered systems in the case where they are underdetermined (which case corresponds to the configuration where there are more microphones 100m than equations to be solved) leads to solutions that might favor different sets of microphones over time. While the location of the sources 100s as perceived via the interpolated sound field is still coherent, there are nevertheless timbre changes that are perceptible by the ear. These differences are due: i) to the colouring of the reverberation, which is different from one microphone 100m to another; ii) to the comb filtering induced by the mixture of non-coincident microphones 100m, which filtering has different characteristics from one set of microphones to another.
To avoid such timbre changes, N microphones 100m are selected while always ensuring that the mixture is determined, or even overdetermined. For example, in the case of a 3D interpolation, it is possible to select up to three microphones among the Nt microphones 100m.
In one variant, the N microphones 100m that are the closest to the position to be interpolated are selected. This solution should be preferred when a large number Nt of microphones 100m is present in the stage. However, in some cases, the selection of the N closest microphones 100m could turn out to be "imbalanced" considering the interpolation position with respect to the source 100s and lead to a total reversal of the direction of arrival: this is the case in particular when the source 100s is placed between the microphones 100m and the interpolation position.
To avoid this situation, in another variant, the N microphones are selected so as to be distributed around the interpolation position. For example, we select the two microphones bearing the indexes i1 and i2 that are the closest to the interpolation position among the Nt microphones 100m, and then we look among the remaining microphones for the one that maximises the "enveloping" of the interpolation position. To achieve this, step E200 comprises for example:
For example, the median vector u_12(t) is expressed as:

u_12(t) = (u_i1(t) + u_i2(t)) / ∥u_i1(t) + u_i2(t)∥, with u_i(t) = (x_i(t) − x_a(t)) / ∥x_i(t) − x_a(t)∥,

with:

x_a(t) the vector representative of the interpolation position, and x_i1(t), x_i2(t) the vectors representative of the positions of the microphones bearing the indexes i1 and i2,

the considered vectors being expressed in a given reference frame.

In this case, the index i3 of said third microphone is, for example, an index different from i1 and i2 which minimises the scalar product ⟨u_12(t), u_i(t)⟩ among the Nt indexes of the microphones 100m. Indeed, the considered scalar product varies between −1 and +1, and it is minimum when the vectors u_12(t) and u_i3(t) are opposite to one another, that is to say when the 3 microphones selected among the Nt microphones 100m surround the interpolation position.
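An illustrative Python sketch of this selection (assuming the median-vector expression given above; the function name is illustrative) is:

    import numpy as np

    def select_three_microphones(mic_positions, xa):
        # mic_positions: array of shape (Nt, 3); xa: array of shape (3,).
        mic_positions = np.asarray(mic_positions, dtype=float)
        xa = np.asarray(xa, dtype=float)
        diff = mic_positions - xa                 # vectors from xa to the mics
        dist = np.linalg.norm(diff, axis=1)
        units = diff / dist[:, None]              # unit direction vectors u_i
        i1, i2 = np.argsort(dist)[:2]             # two closest microphones
        u12 = units[i1] + units[i2]
        u12 /= np.linalg.norm(u12)                # median vector u_12
        dots = units @ u12                        # scalar products <u_12, u_i>
        dots[[i1, i2]] = np.inf                   # exclude i1 and i2
        i3 = int(np.argmin(dots))                 # most "opposite" microphone
        return int(i1), int(i2), i3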
In other embodiments that are not illustrated in [
Back to [
Thus, in the embodiment discussed hereinabove with reference to [
with:
In other embodiments that are not illustrated in [
where the dots refer to the higher-order components of the sound field decomposed in the ambisonic format.
Regardless of the embodiment considered for encoding of the sound field, the interpolation method according to the invention applies in the same manner in order to estimate the weighting factors ai(t).
For this purpose, the method of [
More particularly, in the embodiment of [
with:
In this instance, the equation [Math 2] simply reflects the fact that for a plane wave:

B_i(t) ∝ W_i(t) (x_i(t) − x_s(t)) / d(x_i(t), x_s(t)),

with B_i(t) the pressure gradient vector output by the microphone bearing the index i, x_s(t) the vector representative of the position of the active source 100s, and d(x_i(t), x_s(t)) the distance between the considered microphone and the active source.
At first glance, the distance d(x_i(t), x_s(t)) is unknown, but it is possible to observe that, assuming a unique plane wave, the instantaneous acoustic pressure W_i(t) at the microphone 100m bearing the index i is, in turn, inversely proportional to this distance. Thus:

W_i(t) ∝ 1 / d(x_i(t), x_s(t)).
By substituting this relationship in [Math 2], the following proportional relationship is obtained:

B_i(t) ∝ W_i²(t) (x_i(t) − x_s(t)).

By substituting the latter relationship in [Math 1], the following equation is obtained:

Σ_i a_i(t) W_i²(t) (x_i(t) − x_s(t)) = W_a²(t) (x_a(t) − x_s(t)),
with x_a(t) = (x_a(t) y_a(t) z_a(t))^T a vector representative of the interpolation position in the aforementioned reference frame. By reorganizing, we obtain:

[Math 3]: Σ_i a_i(t) W_i²(t) x_i(t) − W_a²(t) x_a(t) = (Σ_i a_i(t) W_i²(t) − W_a²(t)) x_s(t).
In general, the aforementioned different positions (for example, of the active source 100s, of the microphones 100m, of the interpolation position, etc.) vary over time. Thus, in general, the weighting factors a_i(t) are time-dependent. Estimating the weighting factors a_i(t) amounts to solving a system of three linear equations (written hereinabove in the form of one single vector equation in [Math 3]). For the interpolation to remain coherent over time with the interpolation position, which may vary over time (for example, the considered position corresponds to the position of the listener 110 who could move), it is carried out at different time points with a time resolution T_a adapted to the speed of change of the interpolation position. In practice, the refresh frequency f_a = 1/T_a is substantially lower than the sampling frequency f_s of the acoustic signals. For example, an update of the interpolation coefficients a_i(t) every T_a = 100 ms is quite enough.
In [Math 3], the square of the sound pressure at the interpolation position, W_a²(t), also called instantaneous acoustic power (or more simply instantaneous power), is unknown; the same applies to the vector representative of the position, x_s(t), of the active source 100s.
To be able to estimate the weighting factors a_i(t) based on a resolution of [Math 3], an estimate Ŵ_a²(t) of the acoustic power at the interpolation position is obtained, for example.
A first approach consists in approximating the instantaneous acoustic power by the one captured by the microphone 100m that is the closest to the considered interpolation position, i.e.:

Ŵ_a²(t) = W_k²(t), with k = argmin_i ∥x_i(t) − x_a(t)∥.
In practice, the instantaneous acoustic power W_k²(t) may vary quickly over time, which may lead to a noisy estimate of the weighting factors a_i(t) and to an instability of the interpolated stage. Thus, in some variants, the average or effective power captured by the microphone 100m that is the closest to the interpolation position, over a time window around the considered time point, is calculated by averaging the instantaneous power over a frame of T samples:

Ŵ_k²(t) = (1/T) Σ_{τ=0}^{T−1} W_k²(t − τ),
where T corresponds to a duration of a few tens of milliseconds, or equal to the refresh time resolution of the weighting factors ai(t).
In other variants, it is possible to estimate the effective power by autoregressive smoothing in the form:

Ŵ_i²(t) = α_w Ŵ_i²(t − 1) + (1 − α_w) W_i²(t),
where the forgetting factor α_w is determined so as to integrate the power over a few tens of milliseconds. In practice, values from 0.95 to 0.98, for sampling frequencies of the signal ranging from 8 kHz to 48 kHz, achieve a good tradeoff between the robustness of the interpolation and its responsiveness to changes in the position of the source.
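For illustration, both smoothing variants may be sketched in Python as follows (a minimal sketch; the frame length T and the forgetting factor α_w are chosen as indicated above, and the function names are illustrative):

    import numpy as np

    def effective_power_frame(w, T):
        # Average of the instantaneous power over the last T samples of the
        # pressure signal w captured by one microphone.
        w = np.asarray(w, dtype=float)
        return float(np.mean(w[-T:] ** 2))

    def effective_power_ar(prev_estimate, w_t, alpha_w=0.97):
        # One autoregressive smoothing step with forgetting factor alpha_w,
        # applied to the current pressure sample w_t.
        return alpha_w * prev_estimate + (1.0 - alpha_w) * w_t ** 2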
In a second approach, the instantaneous acoustic power W_a²(t) at the interpolation position is estimated as a barycentre of the N estimates Ŵ_i²(t) of the N instantaneous powers W_i²(t) of the N pressures captured by the selected N microphones 100m. Such an approach turns out to be more relevant when the microphones 100m are spaced apart from one another. For example, the barycentric coefficients are determined according to the distance ∥x_i(t) − x_a(t)∥_p, where p is a positive real number and ∥⋅∥_p is the L-p norm, between the interpolation position and the microphone 100m bearing the index i among the N microphones 100m. Thus, according to this second approach:

Ŵ_a²(t) = Σ_i c_i(t) Ŵ_i²(t), with c_i(t) inversely proportional to d̃(x_i(t), x_a(t)) and Σ_i c_i(t) = 1,
where d̃(x_i(t), x_a(t)) is the normalised version of ∥x_i(t) − x_a(t)∥_p such that Σ_i d̃(x_i(t), x_a(t)) = 1. Thus, a coefficient weighting the estimate Ŵ_i²(t) of the instantaneous power W_i²(t) of the pressure captured by the microphone 100m bearing the index i, in the barycentric expression hereinabove, is inversely proportional to a normalised version of the distance, in the sense of an L-p norm, between the position of the microphone bearing the index i outputting the pressure W_i(t) and the interpolation position.
In some alternatives, the instantaneous acoustic power W_a²(t) at the interpolation position is directly estimated as a barycentre of the N instantaneous powers W_i²(t) of the N pressures captured by the N microphones 100m. In practice, this amounts to substituting Ŵ_i²(t) with W_i²(t) in the equation hereinabove.
Moreover, different options for the order p of the norm may be considered. For example, a low value of p tends to average the power over the entire area delimited by the microphones 100m, whereas a high value tends to favour the microphone 100m that is the closest to the interpolation position, the case p = ∞ amounting to estimating Ŵ_a²(t) by the power of the closest microphone 100m. For example, when p is selected equal to two, the decay law of the pressure of the sound field is met, leading to good results regardless of the configuration of the stage.
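An illustrative Python sketch of this barycentric estimate (assuming barycentric coefficients inversely proportional to the normalised L-p distances, as described above, and an interpolation position that does not coincide with a microphone) is:

    import numpy as np

    def barycentric_power(powers, mic_positions, xa, p=2):
        # powers: array of shape (N,) holding the (smoothed) powers of the
        # N microphones; mic_positions: (N, 3); xa: (3,); p: norm order.
        powers = np.asarray(powers, dtype=float)
        d = np.linalg.norm(np.asarray(mic_positions, dtype=float)
                           - np.asarray(xa, dtype=float), ord=p, axis=1)
        d_norm = d / d.sum()            # normalised distances, summing to 1
        coeffs = 1.0 / d_norm           # inversely proportional coefficients
        coeffs /= coeffs.sum()          # barycentric coefficients, summing to 1
        return float(coeffs @ powers)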
Moreover, the estimation of the weighting factors ai(t) based on a resolution of [Math 3] requires addressing the problem of not knowing the vector representative of the position xs(t) of the active source 100s.
In a first variant, the weighting factors a_i(t) are estimated while neglecting the term containing the position of the source that is unknown, i.e. the right-side member in [Math 3]. Moreover, starting from the estimate Ŵ_a²(t) of the power and from the estimates Ŵ_i²(t) of the instantaneous powers W_i²(t) captured by the microphones 100m, such a neglecting of the right-side member of [Math 3] amounts to solving the following system of three linear equations, written herein in the vector form:

[Math 4]: Σ_i a_i(t) Ŵ_i²(t) x_i(t) = Ŵ_a²(t) x_a(t).
Thus, it arises that the weighting factors ai(t) are estimated from:
For example, [Math 4] is solved in the sense of mean squared error minimisation, for example by minimising the cost function ∥Σ_i a_i(t) Ŵ_i²(t) x_i(t) − Ŵ_a²(t) x_a(t)∥². In practice, the solving method (for example, the Simplex algorithm) is selected according to the overdetermined (more equations than microphones) or underdetermined (more microphones than equations) nature of the system.
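For illustration, such a least-squares resolution of [Math 4] may be sketched in Python as follows (a minimal sketch using a generic least-squares solver rather than the Simplex algorithm; the function name is illustrative):

    import numpy as np

    def solve_weights_unconstrained(powers, mic_positions, xa, power_a):
        # powers: estimates of W_i^2(t), shape (N,); mic_positions: (N, 3);
        # xa: (3,); power_a: estimate of W_a^2(t).
        powers = np.asarray(powers, dtype=float)
        X = np.asarray(mic_positions, dtype=float)
        A = (powers[:, None] * X).T        # 3 x N system matrix of [Math 4]
        b = power_a * np.asarray(xa, dtype=float)
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        return a                            # the N weighting factors a_i(t)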
In a second variant, the weighting factors a_i(t) are no longer estimated while neglecting the term containing the unknown position of the source, i.e. the right-side member of [Math 3], but while constraining the search for the coefficients a_i(t) around the hyperplane Σ_i a_i(t) Ŵ_i²(t) = Ŵ_a²(t). Indeed, in the case where Ŵ_a²(t) is a reliable estimate of the actual power W_a²(t), imposing that the coefficients a_i(t) meet "to the best" the relationship Σ_i a_i(t) Ŵ_i²(t) = Ŵ_a²(t) implies that the right-side member in [Math 3] is low, and therefore any solution that solves the system of equations [Math 4] properly rebuilds the pressure gradients.
Thus, in this second variant, the weighting factors a_i(t) are estimated by solving the system [Math 4] with the constraint that Σ_i a_i(t) Ŵ_i²(t) = Ŵ_a²(t). In the considered system, Ŵ_i²(t) and Ŵ_a²(t) are, for example, estimated according to one of the variants provided hereinabove. In practice, solving such a linear system with a linear constraint may be completed by the Simplex algorithm or any other constrained minimisation algorithm.
To accelerate the search, it is possible to add a constraint of positivity of the weighting factors a_i(t). In this case, the weighting factors a_i(t) are estimated by solving the system [Math 4] with the dual constraint that Σ_i a_i(t) Ŵ_i²(t) = Ŵ_a²(t) and that ∀i, a_i(t) ≥ 0. Moreover, the constraint of positivity of the weighting factors a_i(t) allows avoiding phase reversals, thereby leading to better estimation results.
Alternatively, in order to reduce the computing time, another implementation consists in directly integrating the hyperplane constraint Σ_i a_i(t) Ŵ_i²(t) = Ŵ_a²(t) into the system [Math 4], which ultimately amounts to the resolution of the linear system:

[Math 5]:
Σ_i a_i(t) Ŵ_i²(t) x_i(t) = Ŵ_a²(t) x_a(t)
α Σ_i a_i(t) Ŵ_i²(t) = α Ŵ_a²(t).
In this instance, the coefficient α allows homogenising the units of the quantities Ŵ_a²(t) x_a(t) and Ŵ_a²(t). Indeed, the considered quantities are not homogeneous and, depending on the unit selected for the position coordinates (meter, centimeter, . . . ), the solutions will favor either the set of equations Σ_i a_i(t) Ŵ_i²(t) x_i(t) = Ŵ_a²(t) x_a(t), or the hyperplane Σ_i a_i(t) Ŵ_i²(t) = Ŵ_a²(t). In order to make these quantities homogeneous, the coefficient α is, for example, selected equal to the L-2 norm of the vector x_a(t), i.e. α = ∥x_a(t)∥_2, with ∥x_a(t)∥_2 = √(x_a²(t) + y_a²(t) + z_a²(t)).
In practice, it may be interesting to further constrain the interpolation coefficients to meet the hyperplane constraint Σ_i a_i(t) Ŵ_i²(t) = Ŵ_a²(t). This may be obtained by weighting the homogenisation factor α by an amplification factor λ > 1. The results show that an amplification factor λ from 2 to 10 makes the prediction of the pressure gradients more robust.
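For illustration, the resolution of the augmented system [Math 5] under the positivity constraint may be sketched in Python as follows (a minimal sketch using a non-negative least-squares solver as one possible constrained minimisation algorithm; the function name and the default value of λ are illustrative):

    import numpy as np
    from scipy.optimize import nnls

    def solve_weights_constrained(powers, mic_positions, xa, power_a, lam=5.0):
        # powers: estimates of W_i^2(t), shape (N,); mic_positions: (N, 3);
        # xa: (3,); power_a: estimate of W_a^2(t); lam: amplification factor.
        powers = np.asarray(powers, dtype=float)
        X = np.asarray(mic_positions, dtype=float)
        xa = np.asarray(xa, dtype=float)
        alpha = lam * np.linalg.norm(xa)           # homogenisation, amplified
        A = np.vstack([(powers[:, None] * X).T,    # 3 gradient equations
                       alpha * powers[None, :]])   # hyperplane constraint row
        b = np.concatenate([power_a * xa, [alpha * power_a]])
        a, _residual = nnls(A, b)                  # enforces a_i(t) >= 0
        return a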
Thus, we also note in this second variant that the weighting factors ai(t) are estimated from:
Ŵ_a²(t) being actually estimated from the considered quantities as described hereinabove.
As of now, the performances of the method of [
More particularly, the four microphones 300m are disposed at the four corners of a room and the source 300s is disposed at the center of the room. The room has an average reverberation, with a reverberation time or T60 of about 500 ms. The sound field captured by the microphones 300m is encoded in a form comprising a captured pressure and the associated pressure gradient vector.
The results obtained by application of the method of [
The simulations show that this heuristic formula provides better results than the method with fixed weights suggested in the literature.
To measure the performance of the interpolation of the field, we use the intensity vector I⃗(t), which theoretically should point in the direction opposite to the active source 300s. In [
As of now, the performances of the method of [
More particularly, in comparison with the configuration of the stage 300 of [
In [
As of now, another embodiment of the method for interpolating the sound field captured by the microphones 100m of the stage 100 is presented, with reference to [
According to the embodiment of [
However, in other embodiments that are not illustrated in [
Back to [
To avoid this, the embodiment of [
Thus, at a step E500, for a given encoded sound field among the N encoded sound fields output by the selected N microphones 100m, a transformation of the given encoded sound field is performed by application of a time-frequency transformation, such as a Fourier transform, or a perfect or almost perfect reconstruction filter bank, such as quadrature mirror filters or QMF. Such a transformation outputs M field frequency components associated with the given encoded sound field, each field frequency component among the M field frequency components being located within a distinct frequency sub-band.
For example, the encoded field vector, ψi, output by the microphone bearing the index i, i from 1 to N, is segmented into frames bearing the index n, with a size T compatible with the steady state of the sources present in the stage.
ψ_i(n) = [ψ_i(t_n − T + 1), ψ_i(t_n − T + 2), . . . , ψ_i(t_n)].
For example, the frame rate corresponds to the reset rate T_a of the weighting factors a_i(t), i.e.:

t_{n+1} = t_n + E[T_a / T_s],

where T_s = 1/f_s is the sampling period of the signals and E[⋅] refers to the floor function.
Thus, the transformation is applied to each component of the vector ψ_i representing the sound field encoded by the microphone 100m bearing the index i (i.e. it is applied to the captured pressure, to the components of the pressure gradient vector, as well as to the higher-order components present in the encoded sound field, where appropriate), to produce a time-frequency representation. For example, the considered transformation is a direct Fourier transform. In this manner, we obtain for the l-th component ψ_{i,l} of the vector ψ_i:

ψ_{i,l}(n, ω) = Σ_{τ=0}^{T−1} ψ_{i,l}(t_n − T + 1 + τ) e^{−jωτ},
where j = √(−1), and ω is the normalised angular frequency.
In practice, it is possible to select T as a power of two (for example, immediately greater than T_a) and select ω = 2πk/T, 0 ≤ k < T, so as to implement the Fourier transform in the form of a fast Fourier transform.
In this case, the number of frequency components M is equal to the size of the analysis frame T. When T > T_a, it is also possible to apply the zero-padding technique in order to apply the fast Fourier transform. Thus, for a considered frequency sub-band ω (or k in the case of a fast Fourier transform), the vector constituted by all of the components ψ_{i,l}(n, ω) (or ψ_{i,l}(n, k)) for the different l represents the frequency component of the field ψ_i within the considered frequency sub-band ω (or k).
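For illustration, the framing and the direct transform may be sketched in Python as follows (a minimal sketch using a real fast Fourier transform, one possible choice among the transformations mentioned above; the function name is illustrative):

    import numpy as np

    def subband_components(frame):
        # frame: array of shape (C, T) holding the T most recent samples of
        # each of the C encoded components (W, X, Y, Z, ...).
        # Returns an array of shape (C, M) of complex frequency components,
        # one column per frequency sub-band (here M = T // 2 + 1).
        return np.fft.rfft(np.asarray(frame, dtype=float), axis=1)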
Moreover, in other variants, the transformation applied at step E500 is not a Fourier transformation, but an (almost) perfect reconstruction filter bank, for example a filter bank:
Back to [
In this manner, steps E210 and E210a described hereinabove with reference to [
For example, in order to implement the resolution of the systems [Math 4] or [Math 5], the effective power of each frequency sub-band is estimated either by a rolling average:
or by an autoregressive filtering:

Ŵ_i²(n, ω) = α_w Ŵ_i²(n − 1, ω) + (1 − α_w) |W_i(n, ω)|².
Thus, the interpolation repeated for the M frequency sub-bands outputs M interpolated field frequency components at the interpolation position, each interpolated field frequency component among the M interpolated field frequency components being located within a distinct frequency sub-band.
Thus, at a step E510, an inverse transformation of the transformation applied at step E500 is applied to the M interpolated field frequency components outputting the interpolated encoded sound field at the interpolation position.
For example, considering again the example provided hereinabove where the transformation applied at step E500 is a direct Fourier transform, the inverse transformation applied at step E510 is an inverse Fourier transform.
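For illustration, the per-sub-band interpolation and the inverse transform may be sketched in Python as follows (a minimal sketch matching the forward transform sketched above; the function name is illustrative):

    import numpy as np

    def interpolate_subbands(spectra, weights_per_band):
        # spectra: array of shape (N, C, M) of sub-band components for the
        # N microphones; weights_per_band: array of shape (N, M) holding the
        # weighting factors a_i estimated independently in each sub-band.
        # Linear combination in each sub-band: sum_i a_i(k) * psi_i(k).
        interpolated = np.einsum('nm,ncm->cm', weights_per_band, spectra)
        # Inverse of the real FFT used for the direct transform, shape (C, T).
        return np.fft.irfft(interpolated, axis=1)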
As of now, a method for rendering the sound field captured by the microphones 100m of
More particularly, at a step E600, the sound field is captured by the microphones 100m, each microphone among the microphones 100m outputting a corresponding captured sound field.
At a step E610, each of the captured sound fields is encoded in a form comprising the captured pressure and an associated pressure gradient vector.
In other non-illustrated embodiments, the sound field captured by the microphones 100m is encoded in a form comprising the captured pressure, an associated pressure gradient vector as well as all or part of the higher order components of the sound field decomposed in the ambisonic format.
Back to [
At a step E630, the interpolated encoded sound field is compressed, for example by implementing an entropy encoding. Thus, a compressed interpolated encoded sound field is output. For example, the compression step E630 is implemented by the device 700 (described hereinbelow with reference to
Thus, at a step E640, the compressed interpolated encoded sound field output by the device 700 is transmitted to the rendering device 110hp. In other embodiments, the compressed interpolated encoded sound field is transmitted to another device provided with a computing capacity allowing decompressing a compressed content, for example a smartphone, a computer, or any other connected terminal provided with enough computing capacity, in preparation for a subsequent transmission.
Back to [
At a step E660, the interpolated encoded sound field is rendered on the rendering device 110hp.
Thus, when the interpolation position corresponds to the physical position of the listener 110, the latter feels as if the sound field rendered to him is coherent with the sound sources 100s (i.e. the field rendered to him actually arrives from the direction of the sound sources 100s).
In some embodiments that are not illustrated in [
In other embodiments that are not illustrated in [
As of now, an example of a structure of an interpolation device 700 according to an embodiment of the invention is presented, with reference to [
The device 700 comprises a random-access memory 703 (for example a RAM memory), a processing unit 702 equipped for example with a processor, and driven by a computer program stored in a read-only memory 701 (for example a ROM memory or a hard disk). Upon initialisation, the computer program code instructions are loaded for example in the random-access memory 703 before being executed by the processor of the processing unit 702.
This [
In the case where the device 700 is made with a reprogrammable computing machine, the corresponding program (that is to say the sequence of instructions) may be stored in a storage medium, whether removable (such as a floppy disk, a CD-ROM or a DVD-ROM) or not, this storage medium being partially or totally readable by a computer or processor.
Moreover, in some embodiments discussed hereinabove with reference to [
Thus, in some embodiments, the device 700 is included in the rendering device 110hp.
In other embodiments, the device 700 is included in one of the microphones 100m or is duplicated in several ones of the microphones 100m.
Still in other embodiments, the device 700 is included in a piece of equipment remote from the microphones 100m as well as from the rendering device 110hp. For example, the remote equipment is an MPEG-H 3D decoder, a contents server, a computer, etc.