DEVICE AND METHOD FOR CAPTURING AND PROCESSING A THREE-DIMENSIONAL ACOUSTIC FIELD

Abstract
Capturing, encoding and transcoding an acoustic field, such as a three-dimensional acoustic field, comprising a device made up of two microphones, directional analysis and encoding means of said acoustic field, and optionally means for transcoding said acoustic field.
Description
FIELD

The present invention relates to a device and method for capturing, encoding and transcoding an acoustic field, more particularly a three-dimensional acoustic field, comprising a device made up of two non-coinciding microphones, directional analysis and encoding means, and optionally means for transcoding said acoustic field.


BACKGROUND

In the present document, three-dimensional sound capture is defined as the ability to obtain perceptual information on the acoustic field at a measurement point so as to be able to reproduce it in the context of immersive listening. The analyzed data is the acoustic content as well as the location of the sound sources.


There are four major categories of three-dimensional sound capture systems:

  • systems having at least 3 microphone buttons without processing or linear matrix processing for sound capture directly in a multichannel listening format with at least 3 channels or surround matrix with two channels; these look like trees or TV antennas (for example the widely used arrangements such as double-MS, Decca Tree, OCT Surround, IRT Cross or Hamasaki Square);
  • “array” systems with 3 or more microphone buttons, with nonlinear processing making it possible to obtain an intermediate format or a listening format; these, for example RondoMic, Nokia OZO, or the systems called “acoustic cameras”, are polymorphous: linear, square, tree shaped, spherical, cylindrical. One example is provided in US2016088392;
  • systems with spatial harmonics of the first order or a higher order, having at least 4 buttons, making it possible to obtain an A-Format or B-Format intermediate multichannel format with 4 or more channels by matrixing or synthesis of microphone lobes; these systems, for example microphones of the Soundfield, Sennheiser Ambeo or Eigenmike type, generally offer a smaller bulk than the previous systems;
  • binaural recording systems, with two or more buttons, that reproduce the physical phenomena taking place around a listener's head, making it possible to obtain a signal with two channels containing psychoacoustic indicators of periphonic location. These systems have the drawback of not being perfectly suited to the physical characteristics of the end listener (separation of the ears, head shapes, etc.) as well as not allowing the listener's head to rotate without the acoustic field turning accordingly, which greatly limits their use aside from a sound pickup with fixed orientation. These systems include Kemar, Neumann KU-100, 3Dio, FreeSpace artificial heads, and many other models of intra-auricular microphones.


There are also many other devices with two microphones allowing a two-dimensional stereophonic capture and/or reproduction, such as pairs of microphones according to A-B, X-Y, MS or ORTF arrangements, or the dual-channel device described in U.S. Pat. No. 6,430,293. These devices only allow the capture of a two-dimensional projection of the three-dimensional acoustic space in a plane, most often in a half-plane like in the case of U.S. Pat. No. 6,430,293.


Several methods for directional analysis of an acoustic field are known, using various approaches. For example, U.S. Pat. No. 8,170,260 describes a method based on a capture by spherical harmonics with four buttons. The binaural cue coding method described in “Binaural cue coding: a novel and efficient representation of spatial audio” (IEEE: 2002) allows the simultaneous transmission of a monophonic reduction of several sound sources and direction information in time and in frequency, but it is limited to separate monophonic sources whose directions are known a priori, and therefore does not apply to the capture of any acoustic field. “Acoustic intensity in multichannel rendering systems” (AES, 2005) describes an intensimetric analysis method for a planar surround acoustic field, an implementation of which is described in FR2908586. The DirAC method described in “Directional Audio Coding in Spatial Reproduction and Stereo Upmixing” (Audio Engineering Society, 2006) makes it possible, from a B-format field represented by four spherical harmonics, to separate said field into diffuse and non-diffuse parts, to determine the variation over time of the direction of origin of the non-diffuse part, and to re-synthesize the field from a monophonic reduction and direction of origin information, through any diffusion arrangement. The HARPEX method, described in EP2285139, improves the DirAC method by making it possible, for each frequency, to manage two plane waves having different directions. U.S. Pat. No. 6,507,659 proposes a method for capturing a planar surround field using at least three microphone buttons. US20060222187 describes a method for capturing the hemispherical acoustic field using three coinciding microphones, according to a double-MS arrangement. All of these methods have the drawback of requiring a capture of the acoustic field over at least three channels, or four channels for some methods.


Other methods for parametric coding of a three-dimensional field are also known, for example the HOA encoding block present in standard MPEG-H described in “MPEG-H Audio—The New Standard for Universal Spatial/3D Audio Coding” (AES 2014), “Expanded three-channel mid/side coding for three-dimensional multichannel audio systems” (EURASIP 2014), which allows coding on three channels, or “A general compression approach to multi-channel three-dimensional audio” (IEEE 2013), which proposes three-dimensional coding on two stereophonic channels, from a capture with at least four channels.


All of these methods have either the drawback of requiring a capture of the field on at least three channels, or the drawback of a coding and transmission on at least three channels, or both of the aforementioned drawbacks at once.


SUMMARY

The aim of the present invention is to overcome the aforementioned drawbacks in the state of the art.


In the context of the present invention, a new system is described that does not fall within the four categories of three-dimensional field capture methods cited above. The invention uses only two microphone buttons and the appropriate processing means, and makes it possible to obtain a variety of codings of the signals reproducing the two-dimensional (called “surround”) or three-dimensional (called “periphonic”) acoustic field, suitable for immersive listening. The system operates in the frequency domain, analyzing the sound signal over successive time windows. In each time window, for each frequency or each frequency band among a plurality of frequencies, it is assumed that one and only one monochromatic progressive plane wave passes through the device, with a propagation direction, a magnitude and phase, and a frequency centered on the considered frequency band. This reductive approach is sufficient for a subjective capture, i.e., at a given point in space. The device analyzes the characteristics of these waves: spatial origin, magnitude, phase, and allows, from this information, transcoding toward a plurality of spatial encodings of the sound signal. One of the substantial advantages of the present invention is that it facilitates the transmission and storage of a three-dimensional field, since these two operations are systematically done on only two channels. A large majority of the processing and assembly, transmission, compression and storage chains are particularly suitable for a format with two channels, due to the historical preponderance of the stereophonic format.


The present invention therefore has many advantages relative to the state of the art, in that it:

  • allows a surround or 3D capture using only two buttons, therefore with a reduced bulk and cost, and whose device can advantageously be placed on board mobile devices;
  • makes it possible to capture a field using only two channels, and to transcode it a posteriori, in particular binaurally, without the physical characteristics and the orientation of the head being set a priori;
  • is usable on any digital audio recording or editing equipment accepting stereophonic content;
  • makes it possible to apply the processing for detecting directions, magnitude and phase at several levels of the audio content production chain, for example upon capture, but also before or after editing, or during final broadcasting.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a general form of the invention comprising a sound acquisition device (100) according to any one of the implementations of the invention, made up of two microphones M1 (101) and M2 (102) arranged in an acoustic field, said microphones each having an output signal (103, 104). Said output signals are provided to the two inputs (105, 106) of a stage (107) for detecting direction of origin (108, 109) and calculating magnitude (110) and phase (111) of the incident wave. The information 108 to 111 is provided to a stage (112) for transcoding in any standard audio format (113).



FIG. 2 shows the ideal directivity (201) of a cardioid microphone, i.e., the normalized output gain (202) as a function of the angle of incidence (203) of a plane wave. A unitary gain is obtained for a plane wave whose direction of origin coincides with the viewing direction of said microphone, and a nil gain is obtained for a direction of origin opposite the viewing direction.



FIG. 3 shows the ideal directivity (301) of an omnidirectional microphone, i.e., the normalized output gain (302) as a function of the angle of incidence (303) of a plane wave. A unitary gain is obtained for a plane wave irrespective of the direction of origin of the wave.



FIG. 4 illustrates, in the plane, the difference in path between two microphones whose buttons do not coincide. The two microphones M1 (101) and M2 (102) are positioned in space along the X-axis (403) at abscissa xoffset (404) and −xoffset (405). A monochromatic plane wave, several wave fronts (406) of which are shown, has a direction of origin (407) determined by an azimuth and an elevation, and its oriented propagation axis (408) by definition has a direction opposite its direction of origin. The difference in path (409) is then defined as the length of the projection of the vector M1M2 (101, 102) on the oriented propagation axis (408).



FIG. 5 describes the distribution of the path difference measurements as a function of the azimuth (501) (comprised between −π and π) and the elevation (502) (comprised between −π/2 and π/2) of the source of the plane wave. The light areas represent the highest path difference values and the dark areas the lowest path difference values. The path difference values are comprised between −2·xoffset and +2·xoffset. A certain number of isolines (for example (503)) (contours) are drawn.



FIG. 6 shows, in the Cartesian coordinate system (601) with origin 0 and axes X, Y and Z, the coplanar arrangement, according to one implementation of the present invention, of two microphone buttons M1 (602) and M2 (603), whose main axes form an angle alook (605) for the first microphone, and π+alook (606) for the second microphone, with the axis oriented Y (604).



FIG. 7 describes, for a coplanar microphone device according to FIG. 6, the distribution of the panorama measurements as a function of the azimuth (701) (comprised between −π and π) and the elevation (702) (comprised between −π/2 and π/2) of the source of the plane wave. The light areas represent the highest panorama values and the dark areas the lowest panorama values. Like in FIG. 5, a certain number of isolines (contours) are drawn.



FIG. 8 shows the superposition of the isolines of FIG. 5 and FIG. 7, showing the determination uniqueness of the direction of origin for a given hemisphere (sign of the elevation angle), in the context of one of the implementations of the present invention.



FIG. 9 shows, in the Cartesian coordinate system (901) with origin 0 and axes X, Y and Z, the coplanar arrangement, according to one implementation of the present invention, of two microphone buttons with cardioid directivity M1 (902) and M2 (903). The main oriented axis of the first button 902 forms an angle alook (905) with the oriented axis Y (904), and an angle elook (907) with the plane XY. The main oriented axis of the second button 903 forms an angle π+alook (906) with the oriented axis Y (904), and an angle elook (907) with the plane XY.



FIG. 10 illustrates the behavior of the panorama as a function of the azimuth a (1001) and the elevation e (1002) of the source, with alook=π/2 and elook=π/4. The light colors are the highest values. The values are comprised between −1 and 1. These extrema are reached when one of the two functions gM1 or gM2 is canceled out, i.e., when the source is in a direction opposite the orientation of one of the two microphones.



FIG. 11 shows the superposition of the isolines of FIG. 5 and FIG. 10, showing the determination uniqueness of the direction of origin for a given half-space, in the context of one of the implementations of the present invention.



FIG. 12 illustrates the evolution of the value of the sum of the gains as a function of the elevation (1201) of the source, for elook assuming values comprised between 0 (curve 1202) and π/4 (curve 1203).



FIG. 13 shows, in the Cartesian coordinate system (1301) with origin 0 and axes X, Y and Z, the arrangement, according to one implementation of the present invention, of a first microphone button with omnidirectional directivity M1 (1302) and a second microphone button with cardioid directivity M2 (1303), both situated on the axis X (1304), at abscissa xoffset (1305) and −xoffset (1306). The first button 1302 has any orientation. The main oriented axis of the second button 1303 forms an angle alook (1307) with the oriented axis X (1304).



FIG. 14 illustrates the evolution of the ratio of the magnitudes as a function of the azimuth a (1401) and the elevation e (1402) of the source, with alook=π/2. The light colors are the highest values. The values are comprised between 0 and 1. These extrema are reached respectively in the rear and front directions of the cardioid microphone M2.



FIG. 15 illustrates the phase folding phenomenon based on the azimuth a (1501) and the elevation e (1502) of the source, for a monochromatic source with frequency f=20 kHz, with xoffset=2 cm and a speed of sound set at c=340 m/s. This phenomenon appears for any frequency such that the half-wavelength is less than or equal to 2xoffset.



FIG. 16 illustrates a calibration technique based on match learning between the spatial domain and the measurement domain.





DETAILED DESCRIPTION

A direct orthonormal three-dimensional Cartesian coordinate system is used with axes (X, Y, Z) and coordinates (x, y, z).


The azimuth is considered to be the angle in the plane (z=0), from the axis X toward the axis Y (trigonometric direction), in radians. A vector v will have an azimuth coordinate a when the half-plane (x=0, y≥0) having undergone a rotation around the axis Z by an angle a will contain the vector v.


A vector v will have an elevation coordinate e when, in the half-plane (y=0, x≥0) having undergone a rotation around the axis Z, it has an angle e with a non-nil vector of the half-line defined by intersection between the half-plane and the horizontal plane (z=0), positive toward the top.


The unit vector with azimuth and elevation a and e will have, as Cartesian coordinates:









x = cos(a)·cos(e), y = sin(a)·cos(e), z = sin(e)  (1)







The capture of the acoustic field can be done in three dimensions using any implementation of the present invention, and transcoded into a format that preserves the information of the three dimensions (periphonic case). It may also be transcoded into a format that does not keep the third dimension: for example by not taking the elevation into account, or using it as divergence parameter, by causing the divergence to evolve for example as the cosine of the elevation.


Certain implementations of the invention use microphone buttons with cardioid directivity. These are acoustic sensors that measure the pressure gradient and have the particularity of a favored capture direction of the sound and an opposite rejection direction, i.e., in the listening axis of the microphone, the sound is captured with a maximum volume, while behind the microphone, the sound is ideally no longer audible. For such a microphone, the theoretical gain as a function of the non-oriented angle θ between the direction of origin of the sound and the main axis of the microphone is written as follows:






g(θ) = ½(1 + cos(θ))  (2)


According to a vectorial formulation, a cardioid microphone oriented toward the unitary vector vm (i.e., of maximum gain for the waves coming from the direction vm) perceives a sinusoidal progressive plane wave coming from a unitary vector vs with the following theoretical gain:






gm(vs) = ½(1 + vm·vs)  (3)


where vm·vs designates the scalar product of the vectors vm and vs.



FIG. 2 shows the behavior of the gain of a cardioid microphone as a function of the angle presented by the source with respect to the direction toward which the microphone is oriented. The term “cardioid” comes from the shape of this figure.


The commercially available cardioid microphones do not ideally follow the angular gain function; among their defects, one can see that:

  • there is a deviation over the entire function,
  • the gain for θ=π is not canceled out (“wide cardioid” or “subcardioid” case) or is canceled out at angles close to π and assumes negative values (“hypercardioid” case), or it has several lobes,
  • the defects depend on the frequency,
  • the defects vary from one microphone to another, for a same model.


It will be necessary to take some of these defects into account during the implementation of the device.


Some implementations of the invention use a microphone button with omnidirectional directivity. These are acoustic sensors that measure the pressure at a given point in space. Ideally, they do not have a favored orientation, i.e., the gain applied to an acoustic wave is independent of its propagation direction. This behavior is shown in FIG. 3 as a function of the angle of incidence of the wave relative to the main oriented axis of the sensor.


These buttons also have deviations between their theoretical behavior and their actual behavior, namely a directional tendency in the high frequencies caused by an acoustic shadow phenomenon.


According to certain implementations of the invention, it is possible, for each frequency among a plurality of frequencies, to measure the value called panorama. Let there be two microphones M1 and M2 each capturing an acoustic signal. It is considered that these microphones do not introduce a phase shift or phase inversion into the capture of the signals.


In the context of the present invention, the panorama of the two acoustic signals is defined as the ratio of the difference of the magnitudes perceived by the two mics divided by their sum:











PanoramaM1,M2(a,e) = (|s·gM1(a,e)| − |s·gM2(a,e)|) / (|s·gM1(a,e)| + |s·gM2(a,e)|)  (4)







where s is a complex coefficient of magnitude equal to the amplitude of the wave, and phase equal to the phase of the wave, for example at the center of the device. This panorama is therefore independent of the magnitude and the phase of the signal s, and depends solely on its azimuth and its elevation, as well as the orientation of the microphones:











PanoramaM1,M2(a,e) = (gM1(a,e) − gM2(a,e)) / (gM1(a,e) + gM2(a,e))  (5)







It is trivial to show that the panorama thus assumes values in the interval [−1,1]. FIG. 7 illustrates the value of the panorama as a function of the azimuth a and the elevation e of the source, with alook=π/2. The light colors are the highest values. The values are comprised between −1 and 1. These extrema are reached when one of the two functions gM1 or gM2 is canceled out, i.e., when the source is in a direction opposite the orientation of one of the two microphones.
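
A minimal sketch of this measurement, assuming the complex STFT coefficients c1 and c2 of the two microphone channels are available for one frequency band, estimates the panorama of equation (5) directly from the measured magnitudes:

```cpp
#include <complex>
#include <cmath>

// Panorama for one frequency band, computed from the complex STFT
// coefficients c1 and c2 of the two microphone channels (equation 5).
// Returns a value in [-1, 1]; 0 when both magnitudes are equal.
double panorama(std::complex<double> c1, std::complex<double> c2) {
    const double m1 = std::abs(c1);   // magnitude perceived by M1
    const double m2 = std::abs(c2);   // magnitude perceived by M2
    const double sum = m1 + m2;
    if (sum == 0.0) return 0.0;       // silent band: panorama undefined, use 0
    return (m1 - m2) / sum;
}
```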


According to certain implementations of the invention, it is also possible to perform, for each frequency of a plurality of frequencies, the measurement of the difference in acoustic path between the two microphones, as well as the difference in phase between the two microphones.


A device is considered with two microphones M1 and M2, positioned on an axis X with respective coordinates xoffset and −xoffset, having the characteristic of not introducing any phase shift or phase inversion into the acoustic signal that they capture.


The path difference ΔL is the distance between the two planes perpendicular to the propagation axis of the plane wave passing respectively through the position of the microphone M1 and the position of the microphone M2. If the wave comes from a direction defined by the azimuth a and elevation e coordinates, the path difference is expressed:





ΔLM1,M2(a,e) = 2·xoffset·cos(a)·cos(e)  (6)



FIG. 5 shows the path difference as a function of the azimuth a and elevation e of the source. The light shades are the highest values. The values are comprised between −2xoffset and 2xoffset.


The absolute phase difference ΔΦ between the signals captured by the two mics, given a frequency corresponding to a wavelength λ, depends on said path difference:










ΔΦ(a,e,λ) = (2π/λ)·ΔL(a,e)  (7)







Phase differences are measured to within a multiple of 2π, so the normalized relative phase difference Δφ, i.e., brought back to the range ]−π,π], is:





Δφ(a,e,λ) = ΔΦ(a,e,λ) + k·2π  (8)


with k∈Z such that Δφ(a, e, λ)∈]−π,π].
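
A minimal sketch of this normalization (equation 8), assuming the raw difference of the two channels' phases is available in radians:

```cpp
#include <cmath>

// Bring a raw phase difference (radians) back into the range ]-pi, pi]
// by adding the appropriate multiple of 2*pi (equation 8).
double normalizePhase(double dPhi) {
    const double pi = 3.14159265358979323846;
    double wrapped = std::fmod(dPhi, 2.0 * pi);   // now in ]-2pi, 2pi[
    if (wrapped <= -pi) wrapped += 2.0 * pi;
    else if (wrapped > pi) wrapped -= 2.0 * pi;
    return wrapped;
}
```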


Regarding the correspondence between phase difference and acoustic path difference, two cases arise depending on the wavelength and the position deviation of the microphone buttons.


In the first case, the half-wavelength is greater than 2 xoffset, i.e., λ>4 xoffset


There is then equality between the normalized relative phase difference and the phase difference, and there is therefore uniqueness of the correspondence between the normalized relative phase difference and the path difference:










Δφ(a,e,λ) = ΔΦ(a,e,λ)  (9)





therefore











ΔL(a,e) = (λ/(2π))·ΔΦ(a,e,λ)  (10)







In the second case, the half-wavelength is less than or equal to 2 xoffset, i.e., λ≤4 xoffset. The number of wave cycles added to the normalized phase difference can then be bounded. Indeed, the greatest path difference is observed for waves propagating along the axis X, in one direction or the other. Thus, when a phase difference Δφ(a,e, λ) is measured for a wavelength λ and an unknown azimuth a and elevation e source, all of the phase differences











ΔΦ(a,e,λ) = Δφ(a,e,λ) + k·2π,  (11)

with k ∈ Z such that |ΔΦ(a,e,λ)| ≤ (2π/λ)·2·xoffset



















are potentially acceptable, and therefore the path differences obtained by the following formula also are:










ΔL(a,e) = (λ/(2π))·Δφ(a,e,λ) + k·λ,  with k ∈ Z such that |ΔL(a,e)| ≤ 2·xoffset  (12)








FIG. 15 illustrates this phase folding phenomenon based on the azimuth a (1501) and the elevation e (1502) of a monochromatic source with frequency f=20 kHz, with xoffset=2 cm and a speed of sound set at c=340 m/s. The light shades correspond to the values with the highest phase difference. Note that a same phase difference value can be obtained for several values of a and e.


A first preferred implementation of the present invention uses a coplanar arrangement of the two microphones. In this first implementation, shown in FIG. 6, the considered device comprises two microphone buttons (602, 603) with cardioid directivity placed in a space (601) (0, X, Y, Z) as follows:

  • The first microphone M1 (602) is placed in position (xoffset, 0,0) and is preferably oriented in the direction of the axis Y (604) in the positive direction, i.e., a plane wave propagating along the axis Y from the positive coordinates toward the negative coordinates will be perceived by the cardioid button under a maximum gain; its orientation will subsequently be described by the azimuth (605) and elevation (alook, 0) coordinates.
  • The second microphone (603) is placed in position (−xoffset, 0,0) and is preferably oriented in the direction of the axis Y (604) in the negative direction, i.e., a plane wave propagating along the axis Y from the negative coordinates toward the positive coordinates will be perceived by the cardioid button under a maximum gain; its orientation will subsequently be described by the azimuth (606) and elevation (alook+π,0) coordinates.


alook is preferably equal to π/2.


This arrangement of the microphones is shown in FIG. 6 in the Cartesian coordinate system.


To calculate the gains at the output of the microphones, a sinusoidal progressive plane wave is considered coming from the azimuth and elevation coordinates (a,e).


The gain in perception of microphone M1, according to equation (2) and the conversion into Cartesian coordinates, is:






gM1(a,e) = ½[1 + cos(e)·cos(a−alook)]  (13)


The gain in perception of microphone M2 is:






gM2(a,e) = ½[1 − cos(e)·cos(a−alook)]  (14)


The panorama is formulated, with the chosen microphone arrangement:





PanoramaM1,M2(a,e)=cos(a−alook)cos(e)  (15)


And since alook is preferably equal to π/2:





PanoramaM1,M2(a,e)=sin(a)cos(e)  (16)



FIG. 7 illustrates, for a coplanar microphone device according to FIG. 6, the distribution of the panorama measurements as a function of the azimuth (701) (comprised between −π and π) and the elevation (702) (comprised between −π/2 and π/2) of the source of the plane wave. The light areas represent the highest panorama values and the dark areas the lowest panorama values. Like in FIG. 5, a certain number of isolines (contours) are drawn.


The path difference and phase difference are calculated by applying equations 7 to 12, according to the method described above.


With the aim of determining azimuth and elevation of the incident wave, for the moment we will look at the wavelengths λ>4 xoffset, for which it has been demonstrated that there is uniqueness of the correspondence between the path difference and the phase difference.


If one superimposes FIGS. 5 (path difference) and 7 (panorama), as shown in FIG. 8, it appears that given a panorama p∈[−1,1] and a path difference t in the range of values of the function, there is, in each of the hemispheres e≥0 and e≤0, one and only one solution to the system of equations making it possible to obtain the azimuth a and the elevation e of the direction of origin of the wave:









{ PanoramaM1,M2(a,e) = p ; ΔLM1,M2(a,e) = t }  (17)







Thus, if the capture principle is restricted to a hemispherical or surround capture, it is possible to restrict oneself to the interpretation of the direction of origin as that of one of the two hemispheres, in the case at hand and preferably the hemisphere e≥0.


Given, for a sinusoidal progressive plane wave with wavelength λ>4 xoffset, a panorama measurement p and a path difference measurement t, for an oriented microphone device with alook=π/2 and positioned at ±xoffset, the system of equations becomes, by setting out






k = t/(2·xoffset):

{ sin(a)·cos(e) = p ; cos(a)·cos(e) = k }  (18)







where one recognizes the expressions of the Cartesian coordinates x and y of a unit vector with azimuth a and elevation e as spherical coordinates.


If p and t are both nil, in other words if cos(e)=0, then e=±π/2. Since the preferred hemisphere is e≥0, then e=π/2 and a may be chosen arbitrarily, as it does not influence the vector describing the origin of the wave.


If p or t is not nil, in other words cos(e)≠0, considering that sin(a)² + cos(a)² = 1:












p²/cos(e)² + k²/cos(e)² = 1  (19)

cos(e)² = p² + k²  (20)







Since we have chosen the hemisphere e≥0, we obtain:





cos(e) = √(p² + k²)  (21)


and therefore






e = arccos(√(p² + k²))  (22)


For known cos(e)≠0, by reinjecting it into equation 18, we can therefore immediately obtain, according to the system of equations:









{ sin(a) = p/cos(e) ; cos(a) = k/cos(e) }  (23)







from which a = atan2(p, k), where atan2(y, x) is the operator that yields the oriented angle between the vector (1,0)T and the vector (x,y)T; this operator is available as the std::atan2 function in the C++ standard library.
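
A minimal sketch of this resolution, assuming the panorama p and the path difference t have been measured, with alook=π/2 and the preferred hemisphere e≥0 as above:

```cpp
#include <cmath>

struct Direction { double azimuth; double elevation; };

// Recover azimuth and elevation (hemisphere e >= 0) from a panorama p in
// [-1, 1] and a path difference t, for the coplanar cardioid arrangement
// with alook = pi/2 and buttons at +/- xoffset (equations 18 to 23).
Direction solveDirection(double p, double t, double xoffset) {
    const double k = t / (2.0 * xoffset);        // normalized path difference
    // cos(e) = sqrt(p^2 + k^2); clamp in case noise pushes it slightly past 1.
    const double cosE = std::fmin(1.0, std::sqrt(p * p + k * k));
    const double e = std::acos(cosE);            // elevation in [0, pi/2]
    // atan2 also handles p = k = 0 (source at the zenith), returning 0.
    const double a = std::atan2(p, k);
    return Direction{ a, e };
}
```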


A second preferred implementation of the present invention uses a coplanar arrangement of the two microphones. In this second implementation, the device comprises two microphone buttons with cardioid directivity placed in a space (X, Y, Z) as follows:

  • The first microphone M1 is placed in position (xoffset, 0,0) and is oriented according to the azimuth and elevation coordinates (alook, elook).
  • The second microphone is placed in position (−xoffset, 0,0) and is oriented according to the azimuth and elevation coordinates (alook+π, elook).


alook is preferably equal to π/2, and elook is preferably a positive value within the interval [0, π/4]. This arrangement, illustrated by FIG. 9, includes the arrangement previously described when elook=0.


To calculate the output gains of the microphones, a sinusoidal progressive plane wave is considered coming from the azimuth and elevation coordinates (a, e).


The perception gain of the microphone M1, according to equations (2) and (3) and the conversion into Cartesian coordinates, is:






gM1(a,e) = ½[1 + cos(e)·cos(a−alook)·cos(elook) + sin(e)·sin(elook)]  (24)


The gain in perception of the microphone M2 is:






gM2(a,e) = ½[1 − cos(e)·cos(a−alook)·cos(elook) + sin(e)·sin(elook)]  (25)


The panorama is formulated, with the chosen microphone arrangement:











PanoramaM1,M2(a,e) = [cos(e)·cos(a−alook)·cos(elook)] / [1 + sin(e)·sin(elook)]  (26)








FIG. 10 illustrates the behavior of the panorama as a function of the azimuth a and the elevation e of the source, with alook=π/2 and elook=π/4. The light colors are the highest values. The values are comprised between −1 and 1. These extrema are reached when one of the two functions gM1 or gM2 is canceled out, i.e., when the source is in a direction opposite the orientation of one of the two microphones.


The path difference and phase difference are calculated by applying equations 7 to 12, according to the method described above.


With the aim of determining azimuth and elevation of the incident wave, for the moment we will look at the wavelengths λ>4 xoffset, for which there is uniqueness of the correspondence between the path difference and the phase difference.


As illustrated in FIG. 11 for alook=π/2 and elook=π/4, if the panorama and phase difference (i.e., path difference) graphs are superimposed, it appears that given a panorama p∈[−1,1] and a path difference t in the range of the values of the functions, it is possible to split the sphere into two parts:

  • an upper part, which fully contains the hemisphere e≥0, and
  • a lower part contained in the hemisphere e≤0


in each of which one and only one solution is present to the system of equations making it possible to obtain the azimuth a and the elevation e of the direction of origin of the wave:






{ PanoramaM1,M2(a,e) = p ; ΔLM1,M2(a,e) = t }










Thus, if the principle of three-dimensional capture is restricted to a capture over only part of the space, it is possible to restrict oneself to the interpretation of the direction of origin as that of one of the two parts of the sphere, in the case at hand the upper part, covering more than one hemisphere.


Given, for a sinusoidal progressive plane wave with wavelength λ>4 xoffset, a panorama measurement p and a path difference measurement t, for an oriented microphone device with an azimuth alook=π/2 and a configurable elevation elook, and positioned at ±xoffset, the system of equations becomes, by setting out






k = t/(2·xoffset):

{ [sin(a)·cos(e)·cos(elook)] / [1 + sin(e)·sin(elook)] = p ; cos(a)·cos(e) = k }  (27)

or:

{ sin(a) = p·[1 + sin(e)·sin(elook)] / [cos(e)·cos(elook)] ; cos(a) = k/cos(e) }  (28)







Considering that sin(a)² + cos(a)² = 1:












[p·(1 + sin(e)·sin(elook)) / (cos(e)·cos(elook))]² + [k/cos(e)]² = 1  (29)







which can be written as a second-degree polynomial in sin(e):





[p² + (k² − 1)·cos(elook)²] + [2·p²·sin(elook)]·sin(e) + [(p² − 1)·sin(elook)² + 1]·sin(e)² = 0  (30)


which has the following roots, when they exist, i.e., if Δ≥0:










sin(e) = [−p²·sin(elook) ± (√Δ)/2] / [(p² − 1)·sin(elook)² + 1]  (31)







where





Δ = 2·cos(elook)²·[1 − p² − k²·(1 + p²) + (k² − 1)·(p² − 1)·cos(2·elook)]  (32)


The elevation e is thus known: of the two roots, the one yielding the highest elevation is chosen. It is then possible, by reinjecting e into equation 28, to obtain the azimuth a from its sine and cosine.
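
A minimal sketch of this resolution for the tilted arrangement, reusing the Direction struct from the earlier sketch, and assuming the panorama p, the normalized path difference k = t/(2·xoffset) and elook are known, with Δ ≥ 0 and cos(e) ≠ 0 for the reinjection step:

```cpp
#include <cmath>

// Direction of origin for the tilted coplanar arrangement (equations 27 to 32).
// Inputs: panorama p, normalized path difference k = t / (2 * xoffset), and the
// microphone elevation angle elook. Assumes Delta >= 0 and cos(e) != 0.
Direction solveDirectionTilted(double p, double k, double elook) {
    const double sl = std::sin(elook), cl = std::cos(elook);
    // Discriminant of the second-degree polynomial in sin(e) (equation 32).
    const double delta = 2.0 * cl * cl
        * (1.0 - p * p - k * k * (1.0 + p * p)
           + (k * k - 1.0) * (p * p - 1.0) * std::cos(2.0 * elook));
    const double denom = (p * p - 1.0) * sl * sl + 1.0;  // always > 0 for |p| <= 1
    // Of the two roots of equation 31, the "+" root gives the highest elevation.
    double sinE = (-p * p * sl + std::sqrt(std::fmax(0.0, delta)) / 2.0) / denom;
    sinE = std::fmax(-1.0, std::fmin(1.0, sinE));        // guard against noise
    const double e = std::asin(sinE);
    const double cosE = std::cos(e);
    // Reinject e into equation 28 to obtain the sine and cosine of the azimuth.
    const double sinA = p * (1.0 + sinE * sl) / (cosE * cl);
    const double cosA = k / cosE;
    return Direction{ std::atan2(sinA, cosA), e };
}
```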


In the implementations of the invention, a sub-determination of the direction of origin is encountered when the wavelength λ of the plane wave that one wishes to analyze is less than or equal to twice the distance between the microphone buttons, i.e., λ≤4 xoffset.


There may then be an indetermination on the number of periods undergoing propagation between the microphones at a given moment. This is reflected, in the expression of the path difference, by the addition or removal of an integer number of wavelengths before recovering the azimuth and the elevation. The number of intercalary periods is bounded by the maximum possible number of periods given the spacing between the microphones.


A certain number of absolute phase differences are thus obtained. For each of these absolute phase differences, an azimuth and an elevation are determined. Among these multiple azimuth and elevation pairs, the pair closest to the azimuth and elevation of a potential fundamental whose wavelength is greater than 4 xoffset is chosen.
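
A minimal sketch of this selection, reusing the Direction struct and solveDirection() from the earlier sketch, and assuming the direction (aRef, eRef) of a potential fundamental has already been analyzed:

```cpp
#include <cmath>

// Resolve the cycle ambiguity for a short wavelength (lambda <= 4 * xoffset):
// enumerate the admissible path differences t = (lambda/2pi)*dPhi + k*lambda
// with |t| <= 2 * xoffset (equation 12), convert each candidate into a
// direction, and keep the one closest to the direction (aRef, eRef) of a
// potential fundamental.
Direction resolveFolding(double dPhi, double lambda, double xoffset,
                         double p, double aRef, double eRef) {
    const double pi = 3.14159265358979323846;
    Direction best{0.0, 0.0};
    double bestDist = 1e9;
    const double tBase = lambda / (2.0 * pi) * dPhi;
    const int kMax = static_cast<int>(std::ceil(2.0 * xoffset / lambda)) + 1;
    for (int k = -kMax; k <= kMax; ++k) {
        const double t = tBase + k * lambda;
        if (std::fabs(t) > 2.0 * xoffset) continue;   // outside the physical range
        const Direction d = solveDirection(p, t, xoffset);
        // Angular distance between the candidate and the reference direction.
        const double dot =
            std::cos(d.elevation) * std::cos(eRef) * std::cos(d.azimuth - aRef)
            + std::sin(d.elevation) * std::sin(eRef);
        const double dist = std::acos(std::fmax(-1.0, std::fmin(1.0, dot)));
        if (dist < bestDist) { bestDist = dist; best = d; }
    }
    return best;
}
```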


Once the direction of origin of the wave is determined, it is necessary to determine its magnitude. In the case of a pair of microphones whose axis is comprised in the plane XY (i.e., for which the angle elook is nil), the sum of the gains of the two microphones is:






gM1(a,e) + gM2(a,e) = 1  (33)


thus the magnitude of the wave is equal to the sum of the magnitudes of the microphones.


In the case of a pair of microphones for which the angle elook is arbitrary, the sum of the gains of the two microphones is:






gM1(a,e) + gM2(a,e) = 1 + sin(e)·sin(elook)  (34)



FIG. 12 illustrates the value of the sum of the gains as a function of the elevation (1201) of the source, for elook assuming values comprised between 0 (curve 1202) and π/4 (curve 1203).


Several approaches can thus be adopted to obtain the volume of the analyzed signal. The most direct method consists of using the sum of the gains directly. However, the increase in the volume for the waves coming from high elevations is not desirable. The compensation of the gain function consists of summing the volumes measured by each of the mics for the wave being analyzed, then multiplying the result by the inverse of the total gain function for the estimated elevation.


The favored implementation selected in the context of the present invention consists of:

  • preserving the calculated gain for the low or negative estimated elevations, and
  • compensating the gain measurement by multiplying it by the inverse of the function, only for the positive estimated elevations, as in the sketch below.
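
A minimal sketch of this favored magnitude estimate, assuming the per-band magnitudes m1 and m2 of the two channels and the estimated elevation e are available:

```cpp
#include <cmath>

// Favored magnitude estimate: sum of the magnitudes measured by the two
// cardioid microphones, compensated by the inverse of the total gain
// 1 + sin(e) * sin(elook) (equation 34) for positive elevations only.
double waveMagnitude(double m1, double m2, double e, double elook) {
    const double sum = m1 + m2;
    if (e <= 0.0) return sum;                    // keep the calculated gain
    const double totalGain = 1.0 + std::sin(e) * std::sin(elook);
    return sum / totalGain;                      // compensate for high sources
}
```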


Once the direction of origin of the wave and the magnitude of the wave are determined, it is necessary to determine its phase. For the device comprising an omnidirectional button, it suffices to use the phase received by said omnidirectional button. On the contrary, for devices comprising two cardioid buttons, it is necessary to adopt the method below.


For all of the wavelengths λ>4xoffset, it is possible to estimate the phase of the wave at the center of the device, without indetermination: since the phase difference is, in absolute value, less than π, it suffices to add or subtract, respectively, half of the phase difference to or from the signal of the first microphone or the second microphone.


Alternatively, it is possible to use the phase of only one of the two microphones, with the drawback of having a shadow zone behind the selected microphone, which requires attenuating the amplitude of the output signal in the direction opposite that of the selected microphone; this can be problematic in the arrangement with elook=0 if the microphones are not subcardioid, but possible and not very problematic for an arrangement with a higher elook, for example close to π/4.


For the wavelengths λ≤4 xoffset, the phase difference may have been determined to within the cycle by the analysis of the fundamental waves as described above. However, errors may have been committed, and the user may find it preferable to choose the phase captured by one of the two microphones, with the drawback of the “hole.”


A third preferred implementation of the present invention uses a device with two microphone buttons, the first button having omnidirectional behavior and the second button having cardioid behavior:

  • the first microphone M1 is placed in position (xoffset, 0,0). Not being directional, there is no preferred orientation for its main axis;
  • the second microphone M2 is placed in position (−xoffset, 0,0) and is oriented along an azimuth alook=π/2, i.e., in the direction of the positive axis Y, in other words a plane wave propagating along the axis Y from the positive coordinates toward the negative ones will be perceived by the cardioid button under a maximum gain.



FIG. 13 illustrates this microphone arrangement.


For said arrangement, a sinusoidal progressive plane wave is considered coming from the azimuth and elevation coordinates (a, e).


The perception gain of the microphone M1, omnidirectional, is:






gM1(a,e) = 1  (35)


The perception gain of the microphone M2, cardioid, is:






gM2(a,e) = ½(1 + sin(a)·cos(e))  (36)


The concept of magnitude ratio, denoted hereinafter MagnitudeRatio(a, e), replaces the concept of panorama used with the other presented devices.


The magnitude ratio is formulated as follows:











MagnitudeRatioM1,M2(a,e) = gM2(a,e) / gM1(a,e)  (37)

or:

MagnitudeRatioM1,M2(a,e) = ½(1 + sin(a)·cos(e))  (38)








FIG. 14 illustrates the variation of the magnitude ratio as a function of the azimuth a (1401) and the elevation e (1402) of the source, with alook=π/2. The light colors are the highest values. The values are comprised between 0 and 1. These extrema are reached respectively in the rear and front directions of the cardioid microphone M2.


The path difference and phase difference are calculated by applying equations 7 to 12, according to the method described above.


In order to determine the azimuth and the elevation of the source, one can see that the magnitude ratio follows the same formula as the panorama of the planar device with two cardioid microphones, to within an affine transformation. The path difference and the phase difference also follow the same rules; the types and orientations of the microphones do not come into play in these calculations.


It is then possible to base oneself on the same reasoning to arrive at the following result:









{ e = arccos(√(p² + k²)) ; a = atan2(p, k) }  (39)

where k = t/(2·xoffset),




t is the traveled path difference ΔL, which is calculated from the relative phase difference Δφ and the wavelength λ, and p is the equivalent panorama calculated from the magnitude ratio r: p = 2r − 1.


The wavelengths such that λ≤4·xoffset have an undetermined traveled path difference. Their processing is addressed hereinafter.


The magnitude of the signal and its phase are next advantageously chosen like those of the omnidirectional microphone, avoiding shadow zones and phase joining.


The method for processing the captured signal contains a calibration means. Indeed, actual microphones differ from their theoretical model, as already discussed above. Furthermore, obstruction phenomena may occur, since a microphone or its mount may lie on the acoustic path of a wave. Additionally, the actual spacing between the buttons or the propagation speed of the sound wave may differ from their theoretical values or from the values indicated by the user. Moreover, the phase difference at the low frequencies may be so small that a minimal phase defect of the buttons may introduce localization artifacts for them. The lowest frequencies of the audible spectrum do not necessarily require localization, since this localization is not perceived by the listener. During the analysis and transcoding, an arbitrary position may be allocated to them, for example at the azimuth and elevation coordinates (0,0). Furthermore, and depending on the distance between the microphones (xoffset), the phase difference may be too small with respect to phase errors introduced by the capture of the microphones. It is then possible to analyze frequencies below a certain threshold and determine their location based on the location of their harmonics.


For the devices comprising two cardioid buttons (respectively an omnidirectional button and a cardioid button), four calibration methods for the panorama (respectively magnitude ratio) and phase difference responses are presented to offset the deviations between the theoretical model and the concrete device.


In a first method, the panorama (respectively magnitude ratio) and phase difference responses are calibrated using a visualization of the points captured in the azimuth and elevation axes plane and presented to the user. To that end, a real-time graphic visualization of the cloud of dots (azimuth, elevation) generated by the detection of the positions of the different coefficients makes it possible to perform a first calibration of the system. In a calm environment, a spectrally rich acoustic source, such as a speaker broadcasting white noise, a sinesweep signal, or the human mouth emitting the “sh” sound in a prolonged manner is used in different directions with respect to the device. When the source is positioned along the axis X, on one side or the other, the cloud of dots varies when the distance parameter xoffset is modified: it moves away from the ideal direction while forming an arc around the direction when it assumes too low a value, and becomes over-concentrated when it assumes too high a value. When the source is positioned along the axis Y, on one side or the other, the cloud of dots varies when a multiplier parameter of the measured panorama (respectively magnitude ratio) is modified (but bounded and saturated at the interval [−1,1]) (respectively [0,1]): it expands or contracts for an overly low or overly high value. Adequate xoffset and panorama (respectively magnitude ratio) multiplier values make it possible to observe, when the source moves in the horizontal plane around the device, a cloud moving in azimuth in a fluid manner reproducing the movement of the source. This first panorama-phase (respectively magnitude ratio-phase) calibration method can be applied a posteriori on a recorded and non-analyzed signal of the mics during decoding, as long as the test signals have been introduced in recording.


In a second method, the panorama (respectively magnitude ratio) and phase difference responses are calibrated using automated learning of the variation ranges of the phase difference and panorama (respectively magnitude ratio) measurements. To that end, in a calm environment, a spectrally rich acoustic source, such as a speaker broadcasting white noise, a sinesweep signal, or the human mouth emitting the “sh” sound in a prolonged manner is used in different directions with respect to the device. When the source is positioned along the axis X, on one side or the other, a calibration phase is triggered on several successive processing blocks of the short-term Fourier transform (or the equivalent transform used). For each of the directions of origin of the signal (source at the azimuth 0 or π), the phase difference is recorded for each pair of complex coefficients representing a frequency band whose wavelength does not allow sub-determination (λ>4 xoffset), and an average of the phase differences is measured, which makes it possible to obtain a minimum phase difference Δφmin and a maximum phase difference Δφmax corresponding to each frequency band. Once the phase difference is calibrated, the localization algorithm of the sources is modified, in that a phase difference measurement will immediately be altered to be recalibrated in an affine manner from the range of original values to the range of values measured on calibration:










Δφrecalibrated = Δφtheoretical,min + (Δφtheoretical,max − Δφtheoretical,min) · (Δφmeasured − Δφmin) / (Δφmax − Δφmin)  (40)







if Δφmax−Δφmin≠0, otherwise the value is not modified or assumes a nil value. When the source is positioned along the axis Y, on one side or the other, a similar procedure is applied: for each frequency band, on successive processing blocks, the minimum panorama Panoramamin (respectively minimum magnitude ratio) and the maximum panorama Panoramamax (respectively maximum magnitude ratio) are measured and averaged. Once the panorama (respectively magnitude ratio) is calibrated, the localization algorithm of the sources is modified, in that a panorama (respectively magnitude ratio) measurement is immediately altered to be recalibrated in an affine manner from the range of original values to the range of values measured on calibration:










Panoramarecalibrated = Panoramath,min + (Panoramath,max − Panoramath,min) · (Panoramameasured − Panoramamin) / (Panoramamax − Panoramamin)  (41a)

(respectively: MagnitudeRatiorecalibrated = (MagnitudeRatiomeasured − MagnitudeRatiomin) / (MagnitudeRatiomax − MagnitudeRatiomin))  (41b)







where Panoramath,min and Panoramath,max are the theoretical panorama values taken at the azimuths −π/2 and π/2. Then the panorama (respectively MagnitudeRatio) value is saturated to have values only in the interval [−1,1] (respectively [0,1]).
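
A minimal sketch of this affine recalibration followed by saturation (equations 40, 41a and 41b); for the panorama, thMin = −1 and thMax = 1, and for the magnitude ratio, thMin = 0 and thMax = 1, which reduces the formula to the ratio of equation 41b:

```cpp
#include <cmath>

// Affine recalibration of a measurement from its learned range
// [measuredMin, measuredMax] to the theoretical range [thMin, thMax]
// (equations 40, 41a and 41b), followed by saturation to that range.
double recalibrate(double value, double measuredMin, double measuredMax,
                   double thMin, double thMax) {
    if (measuredMax - measuredMin == 0.0) return value;  // degenerate range: keep
    const double mapped = thMin + (thMax - thMin)
        * (value - measuredMin) / (measuredMax - measuredMin);
    return std::fmax(thMin, std::fmin(thMax, mapped));   // saturate
}
```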


In a third method, a calibration of the panorama and phase difference responses consists, for each frequency band, of correcting the obtained spherical coordinates (azimuth, elevation) by using a learned spatial correspondence between a measurement sphere and an actual coordinates sphere. To that end, the user defines calibration points on the coordinates sphere. These points form a triangle mesh of the sphere (techniques used for the “VBAP” 3D audio rendering); the triangle mesh is used to perform interpolations between the points. These points can be predefined, for example the points at the azimuth and elevation coordinates (0, 0), (π/2, 0), (π, 0), (−π/2,0), (0, π/2); an additional point without calibration (because outside the measuring range) makes it possible to complete the comprehensive mesh of the sphere: (0, −π/2). For each calibration point, the user produces, in the direction of the calibration point, in a calm environment, a signal with a rich spectrum, either using white noise broadcast by a speaker, or a sinesweep signal, or by pronouncing the sound “sh.” Over several successive processing windows, the coordinates for each frequency band are averaged and recorded. One thus obtains, for each frequency band, another mesh of the sphere, hereinafter called “learned mesh,” whose measuring points each correspond to a point of the “reference mesh,” that made up of the points at the calibration coordinates. The algorithm is thus modified to use the learning done. For each frequency band, once the azimuth and elevation coordinates are determined by the algorithm, they are modified:

  • the coordinates are analyzed to determine the triangle of the learned mesh to which they belong, using a VBAP technique, as well as the barycentric coefficients applied to the apices of the triangle,
  • the barycentric coefficients are applied to the corresponding triangles in the reference mesh in order to obtain a recalibrated azimuth and elevation.


In a fourth method, a calibration, for each frequency band, makes it possible to correct panorama (or magnitude ratio) measurements as well as the phase difference before determining the direction of origin. This correction is based on a learning phase, by excitation of the system using spectrally rich sounds or a sinesweep, in various directions, for example from the azimuth and elevation coordinates (0, 0), (π/2, 0), (π, 0), (−π/2, 0), (0, π/2). These directions form a partial mesh of the sphere in triangles, like in the VBAP techniques, for example using a Delaunay triangulation. This mesh is replicated, for each frequency band in the two-dimensional space between the measured panorama (or magnitude ratio) (1602, 1604) and phase difference (1601, 1603) values, like in the example illustrated in FIG. 16.


The analysis of the direction of origin of a signal is modified in one of the two following ways:

  • a panorama (or magnitude ratio) as well as phase difference measurement determines a point in the two-dimensional range, which makes it possible to determine, in the two-dimensional mesh of said frequency band, a triangle to which it belongs, or if the point does not belong to any triangle, the closest segment or apex of the mesh is determined. By correspondence of the two-dimensional mesh with the partial mesh of the sphere, an azimuth and elevation are determined, without using the azimuth and elevation determination formulas as a function of the panorama (or magnitude ratio) and phase difference.
  • a panorama (or magnitude ratio) as well as phase difference measurement determines a point in the two-dimensional range, which makes it possible to determine, in the two-dimensional mesh of said frequency band, a triangle to which it belongs, or if the point does not belong to any triangle, the closest segment or apex of the mesh is determined. By correspondence of the measured two-dimensional mesh with the theoretical two-dimensional mesh, a corrected panorama (or magnitude ratio) and phase difference measurement are determined.


Regardless of the implementation chosen for the present invention, and therefore regardless of the physical arrangement of the two microphones, the format of the dual-channel signal obtained by the microphones is in itself a spatial encoding of the audio signal. It may be used and transmitted as is, but will require, at one step of the chain, the appropriate spatial analysis for its use. The spatial analysis and the extraction of the specific characteristics of the signal as presented in the present invention, are, in their digital format (spectral content and its spatial correspondence), another spatial encoding format of the audio signal. In certain implementations of the invention, they make it possible, in any step of the transmission chain of the sound signal, to transcode toward a plurality of other formats, for example and non-limitingly:

  • VBAP 2D or 3D, VBIP 2D or 3D
  • DBAP 2D or 3D, DBIP 2D or 3D
  • Pair-wise panning 2D or layered-2D or 3D
  • First-order spherical harmonics (Ambisonics, FOA) A-Format or B-Format, 2D or 3D, or Higher-order spherical harmonics (Ambisonics, HOA)
  • Binaural
  • Surround matrixed on two channels
  • Any digital format separating the spectral content and the spatial data


The matrixed or spatial harmonic formats are particularly suitable for processing spatial audio, since they allow the manipulation of the sound field while allowing the use of certain traditional tools in the audio industry. Dual-channel formats are, however, those which make it possible to use the existing production chains and their formats most immediately; indeed, in most cases only two audio channels are provided.


The frequency implementation of the method for determining direction of origin, magnitude and phase of the wave is done as follows: the microphones are positioned according to one of the arrangements previously indicated. Alternatively, a recording having used a corresponding arrangement of buttons is used at the input of the algorithm.


The dual-channel signal goes through an analysis of the short-term Fourier transform type, or a similar time-to-frequency transform, such as MDCT/MCLT, complex wavelet transform, complex wavelet packet transform, etc. For each channel each corresponding to one of the two microphones, a complex coefficient vector is obtained corresponding to the frequency content of the signal, magnitude and phase.


The coefficients of the two vectors corresponding to the same frequency band are paired, and each pair of coefficients is analyzed to determine the spatial origin of the sound source for the frequency band in question, namely azimuth and elevation; the complex coefficient corresponding to the sound content of the analyzed plane wave, namely magnitude and phase, is then reconstituted.


Thus obtained for the frequency band are an azimuth value, an elevation value, and a complex coefficient corresponding to the magnitude and the phase of the wave in said frequency band. The signal is then transcoded, from said azimuth, elevation, magnitude and phase values, into a format chosen by the user. Several techniques are presented as examples, but they will appear obvious to a person skilled in the art of sound rendering or sound signal encoding. The overall analysis loop is sketched below.
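
A minimal sketch of this per-band analysis loop, for the coplanar cardioid arrangement and the bands with λ>4 xoffset; it reuses panorama(), normalizePhase(), solveDirection() and the Direction struct from the earlier sketches, and assumes half-spectrum STFT vectors ch1 and ch2 together with the sample rate and FFT size:

```cpp
#include <complex>
#include <vector>
#include <cmath>

struct BandResult { double azimuth, elevation; std::complex<double> content; };

// Per-band directional analysis: pair the STFT coefficients of the two
// channels, measure panorama and phase difference, deduce the direction of
// origin, then reconstitute the wave's complex coefficient (magnitude = sum
// of the two magnitudes, equation 33; phase taken at the center of the device).
std::vector<BandResult> analyze(const std::vector<std::complex<double>>& ch1,
                                const std::vector<std::complex<double>>& ch2,
                                double xoffset, double c,
                                double sampleRate, std::size_t fftSize) {
    const double pi = 3.14159265358979323846;
    std::vector<BandResult> out(ch1.size());
    for (std::size_t i = 1; i < ch1.size(); ++i) {        // skip the DC band
        const double freq = i * sampleRate / fftSize;
        const double lambda = c / freq;                    // wavelength of the band
        const double dPhi = normalizePhase(std::arg(ch1[i]) - std::arg(ch2[i]));
        const double t = lambda / (2.0 * pi) * dPhi;       // path difference, eq. 10
        const double p = panorama(ch1[i], ch2[i]);         // equation 5
        const Direction d = solveDirection(p, t, xoffset); // lambda > 4*xoffset case
        const double mag = std::abs(ch1[i]) + std::abs(ch2[i]);
        const double phase = std::arg(ch2[i]) + dPhi / 2.0; // phase at the center
        out[i] = BandResult{ d.azimuth, d.elevation, std::polar(mag, phase) };
    }
    return out;
}
```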


First-order spherical harmonic transcoding (or first-order ambisonic) can be done in the frequency domain. For each complex coefficient c corresponding to a frequency band, knowing the corresponding azimuth a and elevation e, four complex coefficients w, x, y, z corresponding to the same frequency band can be generated using the following formulas:









{ w = c/√2 ; x = c·cos(a)·cos(e) ; y = c·sin(a)·cos(e) ; z = c·sin(e) }  (42)







The coefficients w, x, y, z obtained for each frequency band are assembled to respectively generate frequency representations W, X, Y and Z of four channels, and the application of the frequency-to-time transform (reverse of that used for the time-to-frequency transform), any clipping, then the overlapping of successive time windows obtained makes it possible to obtain four channels that are a temporal representation in first-order spatial harmonics of the three-dimensional audio signal. A similar approach can be used for transcoding to a format (HOA) of an order greater than or equal to 2, by completing equation (42) with the encoding formulas for the considered order.
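
A minimal sketch of the encoding of equation (42) for one band coefficient:

```cpp
#include <complex>
#include <cmath>

struct FOA { std::complex<double> w, x, y, z; };

// First-order ambisonic (B-format) encoding of one frequency-band
// coefficient c with direction (a, e), following equation 42.
FOA encodeFOA(std::complex<double> c, double a, double e) {
    FOA out;
    out.w = c / std::sqrt(2.0);                    // omnidirectional component
    out.x = c * (std::cos(a) * std::cos(e));
    out.y = c * (std::sin(a) * std::cos(e));
    out.z = c * std::sin(e);
    return out;
}
```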


LRS (left, right, surround) mastered surround encoding on two channels can be done in the frequency domain. Elevation being problematic in the mastered surround case, it is introduced into this example only as an attenuation of the signal, to avoid position discontinuities for a source passing through an elevation of π/2. For each complex coefficient c corresponding to a frequency band, knowing the corresponding azimuth a normalized in ]−π, π] and the elevation e, two complex coefficients l and r corresponding to the same frequency band can be generated using the following formulas:









$$
\begin{cases}
\mathrm{pos} = \dfrac{3}{2}\,a + \dfrac{\pi}{4} \\
l_0 = \cos(e)\,\sin(\mathrm{pos}) \\
r_0 = \cos(e)\,\cos(\mathrm{pos}) \\
s = 0
\end{cases}
\quad \text{if } a \in \left[-\pi/6,\ \pi/6\right]
\tag{43}
$$














$$
\begin{cases}
\mathrm{pos} = \dfrac{3}{5}\,a - \dfrac{\pi}{10} \\
l_0 = \cos(e)\,\cos(\mathrm{pos}) \\
r_0 = 0 \\
s = \cos(e)\,\sin(\mathrm{pos})
\end{cases}
\quad \text{if } a > \pi/6
\tag{44}
$$






$$
\begin{cases}
\mathrm{pos} = -\dfrac{3}{5}\,a - \dfrac{\pi}{10} \\
l_0 = 0 \\
r_0 = \cos(e)\,\cos(\mathrm{pos}) \\
s = \cos(e)\,\sin(\mathrm{pos})
\end{cases}
\quad \text{if } a < -\pi/6
\tag{45}
$$






$$
\begin{cases}
l = l_0 - \dfrac{i}{\sqrt{2}}\,s \\
r = r_0 + \dfrac{i}{\sqrt{2}}\,s
\end{cases}
\tag{46}
$$







where i is the imaginary unit, the complex number whose square is −1. The coefficients l, r obtained for each frequency band are assembled to respectively generate frequency representations L and R of two channels, and the application of the frequency-to-time transform (the inverse of the time-to-frequency transform used), optional windowing, then overlap-adding of the successive time windows obtained makes it possible to obtain two channels that are a mastered stereo representation of the audio signal.
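
The following is a minimal Python sketch of equations (43) to (46). It assumes, by analogy with the ambisonic case of equation (42), that the gains l0, r0 and s multiply the band's complex coefficient c; the azimuth is assumed normalized in ]−π, π].

```python
# Minimal sketch of the LRS mastered encoding of equations (43)-(46).
import numpy as np

def encode_lrs(c, az, el):
    """c, az, el: arrays of shape (bands, frames); az normalized in
    ]-pi, pi]. Returns the complex frequency coefficients l and r."""
    l0, r0, s = np.zeros(c.shape), np.zeros(c.shape), np.zeros(c.shape)
    g = np.cos(el)                        # elevation attenuation

    front = np.abs(az) <= np.pi / 6       # equation (43)
    pos = 1.5 * az[front] + np.pi / 4
    l0[front] = g[front] * np.sin(pos)
    r0[front] = g[front] * np.cos(pos)

    left = az > np.pi / 6                 # equation (44)
    pos = 0.6 * az[left] - np.pi / 10
    l0[left] = g[left] * np.cos(pos)
    s[left] = g[left] * np.sin(pos)

    right = az < -np.pi / 6               # equation (45)
    pos = -0.6 * az[right] - np.pi / 10
    r0[right] = g[right] * np.cos(pos)
    s[right] = g[right] * np.sin(pos)

    # Equation (46): surround component injected in quadrature
    # (the gains are assumed to multiply the band coefficient c).
    l = c * (l0 - 1j / np.sqrt(2.0) * s)
    r = c * (r0 + 1j / np.sqrt(2.0) * s)
    return l, r
```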


A first microphone with directivity D1 and a second microphone with directivity D2 are positioned in separate locations and with separate orientations. The first microphone transforms the acoustic sound waves that it receives into a first electric signal, which is digitized to yield a first digital audio signal. The second microphone transforms the acoustic sound waves that it receives into a second electric signal, which is digitized to yield a second digital audio signal. A directional analysis system performs, for any frequency from among a plurality of frequencies, a phase measurement as well as a panorama measurement for the first and second digital audio signals, and calculates direction of origin information of the acoustic wave therefrom.


A magnitude and phase determination system performs, for any frequency from among a plurality of frequencies, a determination of the magnitude and phase of said acoustic waves.


Optionally, the direction of origin information and the magnitude and phase of the acoustic wave are, for any frequency from among a plurality of frequencies, projected on an audio panoramic law, allowing the acoustic field to be transcoded into a given audio format.
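
As a hypothetical illustration of this projection step, the sketch below applies an arbitrary audio panoramic law to each frequency band; pan_law is any function mapping an (azimuth, elevation) pair to N gains (its name and signature are assumptions, not from the description), and the gains multiply each band's complex coefficient to yield N frequency signals, each of which would then go through the frequency-to-time inverse transform.

```python
# Hypothetical sketch: projection of per-band directions on a panning law.
import numpy as np

def project(c, az, el, pan_law, n_out):
    """c, az, el: arrays of shape (bands, frames). Returns an array of
    shape (n_out, bands, frames): one frequency signal per output channel."""
    out = np.zeros((n_out,) + c.shape, dtype=complex)
    bands, frames = c.shape
    for b in range(bands):
        for t in range(frames):
            gains = np.asarray(pan_law(az[b, t], el[b, t]))  # N gains
            out[:, b, t] = gains * c[b, t]                   # N signals
    return out
```

For instance, pan_law could implement the first-order law of equation (42) with c = 1, which would reproduce the B-format transcoding described above.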


A computer program comprising the computer code implementing the steps and systems of the method for encoding the acoustic field according to any one of claims 1 to 8, said computer program operating on at least one computer or on at least one on-board signal-processing circuit.


A computer program comprising the computer code implementing the steps and systems of the means for encoding the acoustic field and the steps and systems of the means for transcoding the acoustic field according to claim 8, said computer program operating on at least one computer or on at least one on-board signal-processing circuit.


The present invention finds many applications in sound engineering, for example:

  • Three-dimensional (periphonic) sound capture
  • Surround sound capture
  • Capture of three-dimensional impulse responses
  • Capture of surround impulse responses
  • On-board sound capture in mobile equipment such as a smartphone, tablet or video camera, where the orientation of the microphones can be compensated if it is measured using gyroscopes and magnetometers
  • Sound capture applied to robotics
  • Sound capture applied to telecommunications
  • Sound capture applied to telepresence
  • Sound capture similar to wearing binaural buttons, except that it involves directional microphones calibrated with the described calibration methods, so that the obstruction of the head does not disturb the capture

Claims
  • 1-11. (canceled)
  • 12. A method for encoding a three-dimensional acoustic field, comprising: encoding an acoustic field captured by microphone buttons of a capture device that comprises a first microphone button with substantially cardioid directivity and a second microphone button with substantially cardioid directivity, placed in a Cartesian coordinate system XYZ, the first microphone button and the second microphone button not coinciding and being positioned along the axis X on either side of the plane YZ and at equal distances from said plane YZ, the main axes of said buttons being orthogonal to the axis X and coplanar to one another, the main axis oriented toward the front of the first microphone button forming an angle alook with the oriented axis X, the main axis oriented toward the front of the second microphone button forming an angle of π+alook with the oriented axis X, by receiving a first signal from the first microphone button and a second signal from the second microphone button, (a) performing time-to-frequency transforms of the first signal and the second signal to obtain a first frequency signal and a second frequency signal, (b) receiving said first frequency signal and said second frequency signal and performing, for any frequency from among a plurality of frequencies, a panorama measurement, (c) receiving said first frequency signal and said second frequency signal and performing, for any frequency from among a plurality of frequencies, a phase difference measurement, (d) determining the direction of origin by receiving said panorama measurement from (b) and said phase difference measurement from (c), and determining, for any frequency from among a plurality of frequencies, an azimuth angle and an elevation angle of the direction of origin, (e) receiving said azimuth and elevation angles of (d), and determining, for any frequency from among a plurality of frequencies, a magnitude and a phase.
  • 13. The method for encoding a three-dimensional acoustic field according to claim 12, wherein the angle alook is substantially equal to π/2.
  • 14. The method for encoding a three-dimensional acoustic field according to claim 12, further comprising determining the direction of origin making it possible, for any frequency whose wavelength is less than or equal to twice the distance between said microphone buttons from among a plurality of frequencies, to determine the appropriate number of additional phase cycles by analyzing the locations of the frequencies of which this frequency is a harmonic.
  • 15. The method for encoding a three-dimensional acoustic field according to claim 12, further comprising determining the direction of origin making it possible, for any frequency whose wavelength is less than or equal to twice the distance between said microphone buttons from among a plurality of frequencies, to determine the location directly by analysis of the locations of the frequencies of which this frequency is a harmonic.
  • 16. The method for encoding a three-dimensional acoustic field according to claim 12, further comprising transcoding the three-dimensional acoustic field by: calculating audio panoramic gains by receiving the azimuth angle and the elevation angle of the direction of origin for any frequency from among a plurality of frequencies, and projecting said angles according to an audio panoramic law to obtain N panoramic gains, receiving the magnitude of the source, the phase of the source and said N gains for any frequency from among a plurality of frequencies, and grouping together said magnitude and said phase in a complex coefficient, and multiplying said complex coefficient by said gains to obtain N frequency signals, and performing a frequency-to-time inverse transform of said N frequency signals for all of the frequencies, to obtain N projected time signals.
  • 17. A unit comprising a capture device that comprises a first microphone button with substantially cardioid directivity and a second microphone button with substantially cardioid directivity, placed in a Cartesian coordinate system XYZ, the first microphone button and the second microphone button not coinciding and being positioned along the axis X on either side of the plane YZ and at equal distances from said plane YZ, the main axes of said buttons being orthogonal to the axis X and coplanar to one another, the main axis oriented toward the front of the first microphone button forming an angle alook with the oriented axis X, the main axis oriented toward the front of the second microphone button forming an angle of π+alook with the oriented axis X.
  • 18. A method for encoding a three-dimensional acoustic field, comprising: encoding an acoustic field captured by microphone buttons of a capture device that comprises a first microphone button with substantially cardioid directivity and a second microphone button with substantially cardioid directivity, placed in a Cartesian coordinate system XYZ, the first button and the second button not coinciding and being positioned along the axis X on either side of the plane YZ and at equal distances from said plane YZ, the main axes of said buttons being orthogonal to the axis X and coplanar to one another, the main axis oriented toward the front of the first microphone button forming an angle alook with the oriented axis X and an angle elook with the plane XY, the main axis oriented toward the front of the second microphone button forming an angle of π+alook with the oriented axis X and an angle elook with the plane XY, by receiving a first signal from the first microphone button and a second signal from the second microphone button, (a) performing a time-to-frequency transform of the first signal and the second signal to obtain a first frequency signal and a second frequency signal, (b) receiving said first frequency signal and said second frequency signal and performing, for any frequency from among a plurality of frequencies, a panorama measurement, (c) receiving said first frequency signal and said second frequency signal and performing, for any frequency from among a plurality of frequencies, a phase difference measurement, (d) determining the direction of origin by receiving said panorama measurement from (b) and said phase difference measurement from (c), and determining, for any frequency from among a plurality of frequencies, an azimuth angle and an elevation angle of the direction of origin, and (e) receiving said azimuth and elevation angles of (d), and determining, for any frequency from among a plurality of frequencies, a magnitude and a phase.
  • 19. The method for encoding a three-dimensional acoustic field according to claim 18, wherein the angle alook is substantially equal to π/2, and the angle elook is within the interval [0, π/2].
  • 20. The method for encoding a three-dimensional acoustic field according to claim 18, wherein, for any frequency whose wavelength is less than or equal to twice the distance between said microphone buttons from among a plurality of frequencies, the locations of the frequencies of which this frequency is a harmonic are analyzed to determine a number of additional phase cycles.
  • 21. The method for encoding a three-dimensional acoustic field according to claim 18, wherein, for any frequency whose wavelength is less than or equal to twice the distance between said microphone buttons from among a plurality of frequencies, the locations of the frequencies of which this frequency is a harmonic are analyzed to determine the location directly.
  • 22. The method for encoding a three-dimensional acoustic field according to claim 18, further comprising: calculating audio panoramic gains by receiving the azimuth angle and the elevation angle of the direction of origin for any frequency from among a plurality of frequencies, and projecting said angles according to an audio panoramic law to obtain N panoramic gains, receiving the magnitude of the source, the phase of the source and said N gains for any frequency from among a plurality of frequencies, and grouping together said magnitude and said phase in a complex coefficient, and multiplying said complex coefficient by said gains to obtain N frequency signals, and performing a frequency-to-time inverse transform of said N frequency signals for all of the frequencies, to obtain N projected time signals.
  • 23. A method for encoding a three-dimensional acoustic field, comprising: encoding an acoustic field captured by microphone buttons of a capture device that comprises a first microphone button and a second microphone button, placed in a Cartesian coordinate system XYZ, the first button and the second button not coinciding and being positioned along the axis X on either side of the plane YZ and at equal distances from said plane YZ, wherein: the first microphone button has substantially omnidirectional directivity, the second microphone button has substantially cardioid directivity, the first microphone button is situated in the plane XY, and the main axis oriented toward the front of the second microphone button is comprised in the plane XY and forms an angle alook=π/2 with the oriented axis X, by: receiving a first signal from the first microphone button and a second signal from the second microphone button, (a) performing time-to-frequency transforms of the first signal and the second signal to obtain a first frequency signal and a second frequency signal, (b) receiving said first frequency signal and said second frequency signal and performing, for any frequency from among a plurality of frequencies, a panorama measurement, (c) receiving said first frequency signal and said second frequency signal and performing, for any frequency from among a plurality of frequencies, a phase difference measurement, (d) determining the direction of origin by receiving said panorama measurement from (b) and said phase difference measurement from (c), and determining, for any frequency from among a plurality of frequencies, an azimuth angle and an elevation angle of the direction of origin, and (e) receiving said azimuth and elevation angles of (d), and determining, for any frequency from among a plurality of frequencies, a magnitude and a phase.
  • 24. The method for encoding a three-dimensional acoustic field according to claim 23, wherein (d) determining the direction of origin makes it possible, for any frequency whose wavelength is less than or equal to twice the distance between said microphone buttons from among a plurality of frequencies, to determine the appropriate number of additional phase cycles by analyzing the locations of the frequencies of which this frequency is a harmonic.
  • 25. The method for encoding a three-dimensional acoustic field according to claim 23, wherein (d) determining the direction of origin additionally makes it possible, for any frequency whose wavelength is less than or equal to twice the distance between said microphone buttons from among a plurality of frequencies, to determine the location directly by analysis of the locations of the frequencies of which this frequency is a harmonic.
  • 26. The method for encoding a three-dimensional acoustic field according to claim 23, further comprising: calculating audio panoramic gains by receiving the azimuth angle and the elevation angle of the direction of origin for any frequency from among a plurality of frequencies, and projecting said angles according to an audio panoramic law to obtain N panoramic gains, receiving the magnitude of the source, the phase of the source and said N gains for any frequency from among a plurality of frequencies, and grouping together said magnitude and said phase in a complex coefficient, and multiplying said complex coefficient by said gains to obtain N frequency signals, and performing a frequency-to-time inverse transform of said N frequency signals for all of the frequencies, to obtain N projected time signals.
Priority Claims (1)
Number Date Country Kind
2622 Sep. 16, 2016 MC national
Parent Case Info

This patent application is a national phase entry of PCT application PCT/EP2017/025255 filed Sep. 13, 2017, which claims the benefit of the earlier filing date of Monaco patent application 2622, filed Sep. 16, 2016.

PCT Information
Filing Document Filing Date Country Kind
PCT/EP2017/025255 9/13/2017 WO 00