The present disclosure relates to the field of processing sound signals.
More particularly, the present disclosure relates to the field of recording a 360° sound signal.
2. Brief Description of Related Developments
Methods and systems are known in the prior art for broadcasting 360° video signals. There is a need in the prior art to be able to combine sound signals with these 360° video signals.
Until now, 3D audio has been reserved for sound professionals and researchers. The purpose of this technology is to acquire as much spatial information as possible during the recording to then deliver this to the listener and provide a feeling of immersion in the audio scene.
In the video sector, interest is growing in videos filmed at 360° and reproduced using a virtual reality headset for full immersion in the image: the user can turn his/her head and explore the surrounding visual scene. In order to obtain the same level of precision in the sound sector, the most compact solution involves the use of an array of microphones, for example the Eigenmike by mh acoustics, the Soundfield by TSL Products, and the TetraMic by Core Sound. The polyhedral shape of these microphone arrays allows simple formulae to be used to convert the microphone signals into an ambisonics format. The ambisonics format is a group of audio channels resulting from directional encoding of the acoustic field, and contains all of the information required for the spatial reproduction of the sound field. Equipped with between four and thirty-two microphones, these products are expensive and thus reserved for professional use.
Recent research has focused on encoding in ambisonics format on the basis of a reduced number of omnidirectional microphones. The use of a reduced number of such microphones allows costs to be reduced.
By way of example, the publication entitled “A triple microphonic array for surround sound recording” by Rilin CHEN ET AL. discloses an array composed of two omnidirectional microphones whose directivity patterns are virtually modified by applying a delay to one of the signals acquired by the microphones. The resulting signals are then combined to obtain the sound signal in ambisonics format.
One drawback of the method described in this prior art is that the microphone array is placed in a free field. In practice, when an obstacle is placed between the two microphones, diffraction phenomena cause attenuations and phase shifts of the incident wave that differ according to frequency. As a result, applying a delay to the signal received by one of the microphones does not allow for a faithful reproduction of the sound signal received, because the delay applied is the same at all frequencies.
The disclosure aims to overcome the drawbacks of the prior art by proposing a method for processing a sound signal allowing the sound signal to be encoded in ambisonics format on the basis of signals acquired by at least two omnidirectional microphones.
The disclosure relates to a sound signal processing method, comprising the steps of:
According to the disclosure, during the directivity optimisation sub-step, the signals acquired by the N−1 other microphones, each filtered by an FIR filter, are subtracted from each of the signals acquired by the microphones, in order to obtain N enhanced signals.
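By way of a non-limiting illustration, the subtraction scheme of this sub-step can be sketched as follows; the helper name `enhance` and the data layout are hypothetical, and the FIR coefficients are assumed to have been determined beforehand as described later in the disclosure.

```python
import numpy as np

def enhance(signals, fir):
    """Directivity optimisation sketch: from each microphone signal,
    subtract the signals of the N-1 other microphones, each filtered
    by an FIR filter.

    signals : list of N 1-D arrays (one per microphone)
    fir     : fir[i][j] = FIR coefficients applied to signal j before
              it is subtracted from signal i (fir[i][i] is unused)
    Returns the N enhanced signals.
    """
    N = len(signals)
    enhanced = []
    for i in range(N):
        e = np.asarray(signals[i], dtype=float).copy()
        for j in range(N):
            if j != i:
                # keep only the causal part, truncated to the signal length
                e -= np.convolve(signals[j], fir[i][j])[: len(e)]
        enhanced.append(e)
    return enhanced
```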
In one aspect of the disclosure, the N omnidirectional microphones are integrated into a device.
In one aspect of the disclosure, the FIR filter applied during the directivity optimisation sub-step to each acquired signal is equal to the ratio of the Z-transform of the impulse response of the microphone associated with the signal that is the object of the subtraction to the Z-transform of the impulse response of the microphone associated with the signal to be filtered and then subtracted, for an angle of incidence associated with a direction to be deleted.
In one aspect of the disclosure, said microphones are disposed in a circle on a plane, spaced apart by an angle equal to 360°/N.
In one aspect of the disclosure, the method implements four microphones spaced apart by an angle of 90° in the horizontal plane.
In one aspect of the disclosure, the device is a smartphone and the method implements two microphones, each placed on one lateral edge of said smartphone.
In one aspect of the disclosure, at least one Infinite Impulse Response (IIR) filter is applied to each of the enhanced signals during the directivity optimisation sub-step in order to correct the artefacts produced by the filtering operations using FIR filters.
In one aspect of the disclosure, the at least one IIR filter is a “peak” type filter, of which a central frequency fc, a quality factor Q and a gain GdB in decibels can be configured to compensate for the artefacts.
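The disclosure does not fix a particular realisation of the “peak” filter. One common sketch, parameterised by exactly the quantities fc, Q and GdB named above, is the second-order (biquad) peaking equaliser; the coefficient formulas below follow the well-known Audio EQ Cookbook and are given for illustration only, not as the claimed implementation.

```python
import math

def peak_biquad(fc, Q, gain_db, fs):
    """Biquad coefficients (b, a) for a peaking filter with centre
    frequency fc (Hz), quality factor Q and gain in dB, at sample
    rate fs (Hz). Coefficients are normalised so that a[0] == 1."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * Q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]
```

A positive gain_db boosts a band around fc (correcting an attenuated frequency), and a negative gain_db attenuates it.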
In one aspect of the disclosure, the order R of the ambisonics type format is equal to one.
In one aspect of the disclosure, the creation of the output signal in the ambisonics format is carried out by algebraic operations performed on the enhanced signals derived from the directivity optimisation sub-step in order to create the different channels of said ambisonics format.
The disclosure further relates to a sound signal processing system for implementing the method according to the disclosure. The system according to the disclosure includes means for:
According to the disclosure, the sound signal processing system includes means comprising Finite Impulse Response filters for filtering each of the signals acquired by the microphones and subtracting them from each of the other unfiltered original signals in order to obtain N enhanced signals.
The disclosure will be better understood from the following description and the accompanying figures. These are intended for purposes of illustration only and are not intended to limit the scope of the disclosure.
With reference to
In the aspect of the disclosure described hereafter, the acquisition 110 is carried out with a number N of microphones equal to two, and the order R is equal to 1 (the ambisonics format is thus referred to as “B-format”). The channels of the B-format will be denoted in the description below by (W; X; Y; Z) according to usual practice, these channels respectively representing:
Acquisition 110 consists of a recording of the sound signal Sinput. With reference to
In the shown aspect of the disclosure, the device 1 is a smartphone.
The two microphones M1; M2 are considered herein to be disposed along the Y dimension. The reasoning that follows could be conducted in an equivalent manner while considering the two microphones to be disposed along the X dimension (Front-Back) or along the Z dimension (Up-Down), the disclosure not being limited by this choice.
At the end of the acquisition step 110, two sampled digital signals are obtained. yg is used to denote the signal associated with the “Left channel” and recorded by the microphone M1 and yd is used to denote the signal associated with the “Right channel” and recorded by the microphone M2, said signals yg, yd constituting the input signal Sinput.
As shown in
When the acoustic wave 2 has a plurality of frequencies, the delay with which the microphone M2 acquires said acoustic wave depends on the frequency, in particular as a result of the presence of the device 1 between the microphones causing a diffraction phenomenon.
Similarly, each frequency of the acoustic wave is attenuated in a different manner, as a result of the presence of the device 1 on the one hand, and on the other hand as a function of the directivity properties of the microphones M1, M2 dependent on the frequency.
Moreover, since the microphones are both omnidirectional, they both reproduce the entire sound space.
Thereafter, the aim is to differentiate the microphones M1 and M2 by virtually modifying their directivity through processing of the recorded digital signals, so that the modified signals can be combined to create the ambisonics format.
In a directivity optimisation sub-step 121, a filter F21(Z) is applied to the signal yg of the “Left channel”. The filtered signal is then subtracted from the signal yd of the “Right channel” by means of a subtractor.
According to the disclosure, the filter F21(Z) is a Finite Impulse Response (FIR) filter. Such an FIR filter allows each frequency to be handled independently, by modifying the amplitude and the phase of the input signal at each frequency, and thus allows the effects resulting from the presence of the device 1 between the microphones to be compensated.
By denoting as H1(Z, θ) and H2(Z, θ) the respective Z-transforms of the impulse responses of the microphones M1 and M2 when integrated into the device 1, in the direction of incidence given by the angle of incidence θ, the filter F21(Z) is determined by the relation:
The choice of a zero angle of incidence θ when determining the filter F21(Z) allows the sound component originating from the left to be isolated. Thus, after subtracting the signals, an enhanced signal yd* associated with the “Right channel”, from which the sound component originating from the left has been substantially deleted, is obtained.
The directivity of the microphone M2 is thus virtually modified so as to essentially acquire the sounds originating from the right.
The same operation is carried out for the Left channel: a filter F12(Z) is applied to the signal yd of the “Right channel”, and the filtered signal is then subtracted from the signal yg of the “Left channel” by means of a subtractor. The filter F12(Z) is an FIR filter defined by the relation:
The choice of an angle of incidence θ equal to 180° when determining the filter F12(Z) allows the sound component originating from the right to be isolated. Thus, after subtracting the signals, an enhanced signal yg* associated with the “Left channel”, from which the sound component originating from the right has been substantially deleted, is obtained.
The directivity of the microphone M1 is thus virtually modified so as to essentially acquire the sounds originating from the left.
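Once the impulse responses of the two microphones integrated into the device have been measured, a filter such as F21(Z) can be approximated numerically. The following sketch (hypothetical helper; frequency-domain division with a small regularisation term) illustrates the principle for F21(Z), which per the verbal definition above is the ratio H2(Z, 0°)/H1(Z, 0°):

```python
import numpy as np

def fir_from_responses(h_num, h_den, n_taps):
    """Sketch of an FIR approximation of the ratio H_num(Z)/H_den(Z),
    computed by frequency-domain division of two impulse responses.
    For F21(Z), h_num is the impulse response of M2 and h_den that of
    M1, both measured for the direction to be deleted (theta = 0)."""
    n = 2 * max(len(h_num), len(h_den), n_taps)
    H_num = np.fft.rfft(h_num, n)
    H_den = np.fft.rfft(h_den, n)
    F = H_num / (H_den + 1e-9)  # regularised to avoid division by zero
    return np.fft.irfft(F, n)[:n_taps]
```

Applied to the acquired signals, yd minus the F21-filtered yg then substantially cancels the sound component arriving from the deleted direction, frequency by frequency, which a single broadband delay cannot achieve.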
In practice, the filters F21(Z) and F12(Z) have properties of high-pass filters and their application produces artefacts. In particular, the frequency spectrum of the enhanced signals yg*, yd* is attenuated in the low frequencies and altered in the high frequencies.
In order to correct these defects, at least one filter G1(Z), G2(Z) of the Infinite Impulse Response (IIR) filter type is applied to the enhanced signals yg* and yd* respectively.
In order to determine the at least one filter G1(Z), G2(Z) to be applied, a white noise B is filtered by the filters F21(Z), F12(Z) previously determined, as shown in
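This measurement can be sketched as follows (a simplified illustration with a hypothetical helper name: the deviation of the spectrum of FIR-filtered white noise from the flat input spectrum indicates which frequencies the IIR correction must boost or attenuate).

```python
import numpy as np

def spectral_deviation_db(fir, n=4096, seed=0):
    """Filter white noise through an FIR filter and return, per
    frequency bin, the deviation in dB of the output spectrum from
    the input spectrum. Peaks and dips in this curve indicate the
    artefacts to be compensated by the IIR correction filters."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n)
    filtered = np.convolve(noise, fir)[:n]
    in_spec = np.abs(np.fft.rfft(noise)) + 1e-12
    out_spec = np.abs(np.fft.rfft(filtered)) + 1e-12
    return 20 * np.log10(out_spec / in_spec)
```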
In one aspect of the disclosure, the IIR filters are “peak” type filters, of which a central frequency fc, a quality factor Q and a gain GdB in decibels can be configured to correct the artefacts. Thus, an attenuated frequency can be corrected by a positive gain, and an accentuated frequency by a negative gain.
Thus, after filtering by the at least one IIR filter G1(Z), G2(Z), a corrected signal YG, representative of the sounds originating from the left, and a corrected signal YD, representative of the sounds originating from the right, are obtained.
Thereafter, with reference to
In order to obtain the omnidirectional component W of the sound signal, the corrected signals YD, YG are added and the result is normalised by multiplying by a gain KW equal to 0.5:
On the basis of the convention according to which the Y component is positive if the sound essentially originates from the left, the Left-Right sound component is obtained by subtracting the corrected signal YD associated with the “Right channel” from the corrected signal YG associated with the “Left channel”. The result is normalised by multiplying by a factor KY equal to 0.5:
Given that no information is known on the Front-Back and Up-Down components, the X and Z components are set to zero.
At the end of the encoding step 120, data D in B-format is obtained (in the present aspect of the disclosure, the signals W and Y, the other signals X and Z being set to zero):
The corrected signals YG, YD of the Left and Right channels respectively can be reproduced by adding and subtracting the signals W and Y:
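The encoding and reconstruction relations of this two-microphone aspect can be written compactly; the sketch below simply restates, in Python, the operations W = KW(YG + YD), Y = KY(YG − YD) and the inverse sums given above (the function names are illustrative only).

```python
KW = 0.5  # normalisation gain for the omnidirectional component W
KY = 0.5  # normalisation gain for the Left-Right component Y

def encode(Yg, Yd):
    """B-format channels from the corrected Left/Right signals;
    X and Z are set to zero in this two-microphone aspect."""
    W = KW * (Yg + Yd)
    Y = KY * (Yg - Yd)
    return W, Y

def decode(W, Y):
    """Reproduce the corrected Left/Right channels from W and Y."""
    return W + Y, W - Y  # Yg, Yd
```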
The rendering step 130 consists of rendering the sound signal, thanks to a transformation of the data in ambisonics format into binaural channels.
In one method of implementing the disclosure, the data D in ambisonics format is transformed into data in binaural format.
The disclosure is not limited to the aspect of the disclosure described hereinabove. In particular, the number of microphones used can be greater than two.
In one alternative aspect of the disclosure of the method 100 according to the disclosure, four omnidirectional microphones M1, M2, M3, M4 disposed at the periphery of a device 1, acquire an acoustic wave 2 of incidence θ relative to a straight line passing through the microphones M1 and M2, as shown in
The two microphones M1; M2 are considered herein to be disposed along the Y dimension and the two microphones M3, M4 are considered herein to be disposed along the X dimension. The four microphones are disposed in a circle, shown by dash-dot lines in
At the end of the acquisition step 110, four sampled digital signals are obtained. The following denotations are applied:
With reference to
In this aspect of the disclosure, the enhanced signal yg* is obtained by subtracting the signals yd, Xav and Xar respectively filtered by FIR filters F12(Z), F13(Z) and F14(Z) from the signal yg acquired by the microphone M1, which filters are defined by:
where H1(Z, θ), H2(Z, θ), H3(Z, θ), H4(Z, θ) denote the respective Z-transforms of the impulse responses of the microphones M1, M2, M3, M4 when integrated into the device 1, for an angle of incidence θ.
The choice of the angles of incidence 180°, 90°, 270° when determining the filters allows the sound components respectively originating from the right, from the front and from the back to be isolated.
Thus, after subtracting the signals, an enhanced signal yg* associated with the “Left channel” is obtained, from which the sound components originating from the right, from the front and from the back have been substantially deleted.
A filter G3(Z) of the IIR type is then applied to correct the artefacts generated by the filtering operations using FIR filters.
At the end of this step, the corrected signal YG is obtained.
Similar processing operations can be applied to the signals of the Right, Front and Back channels, in order to respectively obtain the corrected signals YD, XAV, XAR.
In order to obtain the omnidirectional component W of the sound signal, the corrected signals YD, YG, XAV, XAR are added and the result is normalised by multiplying by a gain KW equal to one quarter:
On the basis of the convention according to which the Y component is positive if the sound essentially originates from the left, the Left-Right sound component is obtained by subtracting the corrected signal YD associated with the “Right channel” from the corrected signal YG associated with the “Left channel”. The result is normalised by multiplying by the factor KY equal to one half:
On the basis of the convention according to which the X component is positive if the sound essentially originates from the front, the Front-Back sound component is obtained by subtracting the corrected signal XAR associated with the Back channel from the corrected signal XAV associated with the Front channel. The result is normalised by multiplying by the factor Kx equal to one half:
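The component relations of the three preceding paragraphs for the four-microphone aspect can be sketched as follows (illustrative function name; Z is set to zero since no Up-Down information is available with four coplanar microphones):

```python
def encode_four(Yg, Yd, Xav, Xar):
    """First-order components from the four corrected channel signals:
    W omnidirectional (gain 1/4), Y Left-Right and X Front-Back
    (gain 1/2 each), Z set to zero in this four-microphone aspect."""
    W = 0.25 * (Yg + Yd + Xav + Xar)
    Y = 0.5 * (Yg - Yd)
    X = 0.5 * (Xav - Xar)
    Z = 0.0
    return W, X, Y, Z
```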
In one alternative aspect, the method implements six microphones in order to integrate the Z component of the ambisonics format.
In alternative aspects of the disclosure, the order R of the ambisonics format is greater than or equal to 2, and the number of microphones is adapted so as to integrate all of the components of the ambisonics format. For example, for an order R equal to two, eighteen microphones are implemented in order to form the nine components of the corresponding ambisonic format.
The FIR filters applied to the signals acquired are adapted accordingly, in particular the angle of incidence θ considered for each filter is adapted so as to remove, from each of the signals, the sound components originating from unwanted directions in space.
For example, with reference to
In this aspect of the disclosure, the filter applied to the signal recorded by M3 and subtracted from the signal acquired by M1 is given by:
In this manner, after subtracting the filtered signal from the signal acquired by M1, an enhanced signal is obtained from which the sound component in the X′ direction has been deleted.
Thus, an ambisonics format of an order greater than or equal to two can be created by adding, for example, microphones in the directions such that φ=45°, φ=90° or φ=135°.
The present disclosure further relates to a sound signal processing system, comprising means for:
This sound signal processing system comprises at least one computation unit and one memory unit.
The above description of the disclosure is provided for the purposes of illustration only. It does not limit the scope of the disclosure.
Number | Date | Country | Kind |
---|---|---|---|
1757191 | Jul 2017 | FR | national |
This application is a National Stage of International Application No. PCT/EP2018/069402, having an International Filing Date of 17 Jul. 2018, which designated the United States of America, and which International Application was published under PCT Article 21(2) as WO Publication No. 2019/020437 A1, which claims priority from and the benefit of French Patent Application No. 1757191, filed on 28 Jul. 2017, the disclosures of which are incorporated herein by reference in their entireties.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2018/069402 | 7/17/2018 | WO | 00 |