This Application is a Section 371 National Stage Application of International Application No. PCT/FR2018/000139, filed May 24, 2018, the content of which is incorporated herein by reference in its entirety, and published as WO 2018/224739 on Dec. 13, 2018, not in English.
The present invention relates to the field of audio or acoustic signal processing, and more particularly to the processing of real multichannel sound content in order to separate the sound sources.
Separating sources in a multichannel sound signal allows numerous applications. It may be used, for example, to enhance the sound of individual sources, to remove noise from them, to remix a plurality of separated sources, or to locate sources in order to optimize the processing of a voice command.
In a space E in which a number N of sources are transmitting signals si, blindly separating the sources consists, based on a number M of observations from sensors distributed in this space E, in counting the sources and extracting the N signals si. In practice, each observation is obtained using a sensor that records the signal reaching the point in the space where the sensor is situated. The recorded signal then results from the mixture of the signals si and from their propagation in the space E, and is therefore affected by various disturbances specific to the environment that is passed through, such as for example noise, reverberation, interference, etc.
The multichannel capturing of a number N of sound sources si, propagating in free-field conditions and considered to be point sources, is formalized as a matrix operation:

x=A*s
where x is the vector of the M recorded channels, s is the vector of the N sources and A is a matrix called "mixture matrix" of size M×N, containing the contributions of each source to each observation, and the sign * symbolizes linear convolution. Depending on the propagation environment and the format of the antenna, the matrix A may adopt various forms. In the case of a coincident antenna (all of the microphones of the antenna are concentrated at one and the same point in space), in an anechoic environment, A is a simple gain matrix. In the case of a non-coincident antenna, in an anechoic or reverberant environment, the matrix A becomes a filter matrix. In this case, the relationship is generally expressed in the frequency domain x(f)=As(f), where A is expressed as a matrix of complex coefficients.
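By way of illustration, the anechoic coincident case, in which the convolution degenerates to a simple matrix product, may be simulated as follows (a minimal sketch; the source count, channel count and random gain matrix are illustrative assumptions, not values from the invention):

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, T = 2, 4, 48000              # sources, observations, samples (illustrative values)
s = rng.laplace(size=(N, T))       # N independent, non-Gaussian source signals
A = rng.standard_normal((M, N))    # gain-only mixture matrix (coincident antenna, anechoic)

x = A @ s                          # M observations: x = A s (convolution reduces to gains)
```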
If the sound signal is captured in an anechoic environment, and taking the scenario in which the number of sources N is less than the number of observations M, analyzing (i.e. identifying the number of sources and their positions) and breaking down the scene into objects, i.e. the sources, may easily be achieved jointly using an independent component analysis (or “ICA” hereinafter) algorithm. These algorithms make it possible to identify the separation matrix B of dimensions N×M, the pseudo-inverse of A, which makes it possible to deduce the sources from the observations using the following equation:
s=Bx
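As an illustration, such a separation may be sketched with the FastICA implementation of recent scikit-learn versions (an assumption of this sketch: the library is available and N is known; the estimated sources are recovered up to permutation and scaling):

```python
import numpy as np
from sklearn.decomposition import FastICA

# x: (M, T) observation matrix, N: number of sources (both from the previous sketch)
ica = FastICA(n_components=N, whiten="unit-variance", random_state=0)
s_hat = ica.fit_transform(x.T).T   # (N, T) estimated sources, up to scale and permutation
B = ica.components_                # separation matrix B of size N x M
A_hat = ica.mixing_                # estimated mixture matrix, pseudo-inverse of B
```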
The preliminary step of estimating the dimension of the problem, i.e. estimating the size of the separation matrix, that is to say the number of sources N, is conventionally performed by calculating the rank of the covariance matrix Co=E{x·x^T} of the observations, which, in this anechoic case, is equal to the number of sources:
N=rank(Co).
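In practice, with noisy observations the covariance matrix is numerically full-rank, so the rank is typically estimated by counting the dominant eigenvalues; a minimal sketch (the threshold value is an arbitrary assumption):

```python
import numpy as np

Co = (x @ x.T) / x.shape[1]                           # empirical covariance of the M observations
eigvals = np.linalg.eigvalsh(Co)                      # eigenvalues of Co, ascending order
N_est = int(np.sum(eigvals > 1e-3 * eigvals.max()))   # count eigenvalues above a noise floor
```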
With regard to the location of the sources, this may be deduced from the encoding matrix A=B^−1 and from knowledge of the spatial properties of the antenna that is used, in particular the distance between the sensors and their directivities.
Among the best-known ICA algorithms, mention may be made of JADE by J.-F. Cardoso and A. Souloumiac ("Blind beamforming for non-Gaussian signals", IEE Proceedings F—Radar and Signal Processing, volume 140, issue 6, December 1993) or Infomax by Amari et al. ("A new learning algorithm for blind signal separation", Advances in Neural Information Processing Systems, 1996).
In practice, in certain conditions, the separation step s=Bx amounts to beamforming: the combination of the various channels given by the matrix B applies a spatial filter whose directivity imposes unity gain in the direction of the source that it is desired to extract, and zero gain in the direction of the interfering sources. One example of beamforming for extracting three sources positioned respectively at 0°, 90° and −120° azimuth is illustrated in the appended drawings.
In the presence of a mixture of sources captured in real conditions, the room effect will generate what is called a reverberant sound field, denoted xr, that will be added to the direct fields of the sources:
x=As+xr
The total acoustic field may be modeled as the sum of the direct field of the sources of interest, of their first reflections, and of a delayed, diffuse reverberant field.
Thus, when using a blind source separation (SAS) algorithm in a reverberant environment, a separation matrix B of size M×M is obtained, generating M components s̃j, 1≤j≤M, at output rather than the desired N sources, the last M−N components essentially containing reverberant field, using the matrix operation:
s̃=B·x
These additional components pose numerous problems:
for scene analysis: it is not known a priori which components relate to the sources and which components are induced by the room effect.
for separating sources through beamforming: each additional component induces constraints on the directivities that are formed and generally degrades the directivity factor, resulting in an increase in the reverberation level in the extracted signals.
Existing source-counting methods for multichannel content are often based on an assumption of parsimony (sparsity) in the time-frequency domain, that is to say on the fact that, for each time-frequency bin, a single source or a limited number of sources has a non-negligible power contribution. For the majority of these methods, a step of locating the most powerful source is performed for each bin, and the bins are then aggregated (a step called "clustering") in order to reconstruct the total contribution of each source.
The DUET (for “Degenerate Unmixing Estimation Technique”) approach, described for example in the document “Blind separation of disjoint orthogonal signals: Demixing n sources from 2 mixtures.” by the authors A. Jourjine, S. Rickard, and O. Yilmaz, published in 2000 in ICASSP′00, makes it possible to locate and extract N sources in anechoic conditions based on only two non-coincident observations, by assuming that the sources have separate frequency supports, that is to say
Si(f)Sj(f)=0
for all values of f provided that i≠j.
After breaking down the observations into frequency sub-bands, typically via a short-term Fourier transform, an amplitude ai and a delay ti are estimated for each sub-band based on the theoretical mixture equation:

x1(t)=Σi si(t), x2(t)=Σi ai·si(t−ti)
In each frequency band f, the pair (ai, ti) corresponding to the active source i is estimated from the ratio of the two observed spectra:

ai=|X2(f)/X1(f)|, ti=−(1/(2πf))·arg(X2(f)/X1(f))
A representation in space of all of the pairs (ai, ti) is performed in the form of a histogram, the “clustering” is then performed on the histogram by way of a likelihood maximum depending on the position of the bin and on the assumed position of the associated source, assuming a Gaussian distribution of the estimated positions of each bin around the real position of the sources.
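A minimal sketch of this per-bin amplitude/delay estimation is given below (the observation names x1 and x2, the sampling rate fs and the STFT/histogram parameters are assumptions of this sketch; the peaks of the histogram then give the (ai, ti) clusters, one per source):

```python
import numpy as np
from scipy.signal import stft

# x1, x2: two non-coincident observations (1-D arrays), fs: sampling rate (assumed given)
f, t, X1 = stft(x1, fs=fs, nperseg=1024)
_, _, X2 = stft(x2, fs=fs, nperseg=1024)

keep = np.abs(X1) > 1e-6 * np.abs(X1).max()    # discard near-silent bins before dividing
R = X2[keep] / X1[keep]                        # inter-channel ratio per time-frequency bin
F = np.broadcast_to(f[:, None], X1.shape)[keep]

a = np.abs(R)                                           # amplitude estimate per bin
tau = -np.angle(R) / (2 * np.pi * np.maximum(F, f[1]))  # delay estimate (guard against f=0)

hist, a_edges, t_edges = np.histogram2d(a, tau, bins=50)
# the peaks of `hist` correspond to the (a_i, t_i) pairs of the active sources
```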
In practice, the assumption of parsimony of the sources in the time-frequency domain often fails, which constitutes a significant limitation of these source-counting approaches: the direction of arrival estimated for each bin then results from a combination of the contributions of a plurality of sources, and the "clustering" is no longer performed correctly. In addition, when analyzing content captured in real conditions, the presence of reverberation may firstly degrade the location of the sources and secondly lead to an overestimation of the number of real sources, when first reflections reach a power level high enough to be perceived as secondary sources.
The present invention aims to improve the situation. To this end, it proposes a method for processing sound data in order to separate N sound sources of a multichannel sound signal captured in a real environment. The method is such that it comprises the following steps:
source separation processing applied to the captured multichannel signal in order to obtain a set of M sound components, where M≥N;
calculation of a set of what are called bivariate first descriptors, representative of statistical relationships between the components of the pairs of the obtained set of M components, and of a set of what are called univariate second descriptors, representative of encoding characteristics of the components of the obtained set of M components;
classification of the components of the set of M components into two classes of components, a first class of N components called direct components corresponding to the N direct sound sources and a second class of M−N components called reverberant components, depending on a probability of belonging to one of the two classes calculated on the basis of the sets of first and second descriptors.
The various particular embodiments mentioned hereinafter may be added independently or in combination with one another to the steps of the processing method defined above.
In one particular embodiment, calculating a bivariate descriptor comprises calculating a coherence score between two components.
This descriptor calculation makes it possible to ascertain, in a relevant manner, whether a pair of components corresponds to two direct components (2 sources) or whether at least one of the components stems from a reverberant effect.
According to one embodiment, calculating a bivariate descriptor comprises determining a delay between the two components of the pair.
This determination of the delay and of the sign associated with this delay makes it possible to determine, for a pair of components, which component more probably corresponds to the direct signal and which component more probably corresponds to the reverberant signal.
According to one possible implementation of this descriptor calculation, the delay between two components is determined by taking into account the delay that maximizes an intercorrelation function between the two components of the pair.
This method for obtaining the delay provides a reliable determination of the bivariate descriptor.
In one particular embodiment, the determination of the delay between two components of a pair is associated with an indicator of reliability of the sign of the delay, which depends on the coherence between the components of the pair.
In one variant embodiment, the determination of the delay between two components of a pair is associated with an indicator of reliability of the sign of the delay, which depends on the ratio of the maximum of an intercorrelation function for delays of opposing sign.
These reliability indicators make it possible to make more reliable the estimated probability, for a pair of components belonging to different classes, of each component of the pair being the direct component or the reverberant component.
According to one embodiment, the calculation of a univariate descriptor is dependent on matching between mixture coefficients of a mixture matrix estimated on the basis of the source separation step and the encoding features of a plane-wave source.
This descriptor calculation makes it possible, for a single component, to estimate the probability of the component being direct or reverberant.
In one embodiment, the components of the set of M components are classified by taking into account the set of M components and by calculating the most probable combination of the classifications of the M components.
In one possible implementation of this overall approach, the most probable combination is calculated by determining a maximum of the likelihood values expressed as the product of the conditional probabilities associated with the descriptors, for the possible classification combinations of the M components.
In one particular embodiment, a step of preselecting the possible combinations is performed on the basis of just the univariate descriptors before the step of calculating the most probable combination.
This thus reduces the likelihood calculations to be performed on the possible combinations, since this number of combinations is restricted by this preselection step.
In one variant embodiment, a step of preselecting the components is performed on the basis of just the univariate descriptors before the step of calculating the bivariate descriptors.
The number of bivariate descriptors to be calculated is thus restricted, thereby reducing the complexity of the method.
In one exemplary embodiment, the multichannel signal is an ambisonic signal.
This processing method thus described is perfectly applicable to this type of signal.
The invention also relates to a sound data processing device implemented so as to perform separation processing of N sound sources of a multichannel sound signal captured by a plurality of sensors in a real environment. The device is such that it comprises:
a source separation processing module applied to the captured multichannel signal in order to obtain a set of M sound components, where M≥N;
a calculator able to calculate a set of what are called bivariate first descriptors, representative of statistical relationships between the components of the pairs of the obtained set of M components, and a set of what are called univariate second descriptors, representative of encoding characteristics of the components of the obtained set of M components;
a classification module able to classify the components of the set of M components into two classes of components, a first class of N components called direct components corresponding to the N direct sound sources and a second class of M−N components called reverberant components, depending on a probability of belonging to one of the two classes calculated on the basis of the sets of first and second descriptors.
The invention also applies to a computer program containing code instructions for implementing the steps of the processing method as described above when these instructions are executed by a processor and to a storage medium able to be read by a processor and on which there is recorded a computer program comprising code instructions for executing the steps of the processing method as described.
The device, program and storage medium have the same advantages as the method described above that they implement.
Other features and advantages of the invention will become more clearly apparent on reading the following description, given purely by way of nonlimiting example and with reference to the appended drawings, in which:
Thus, starting from a multichannel signal captured by a plurality of sensors placed in a real environment, that is to say a reverberant environment, and delivering a number M of observations from these sensors (x=(x1, . . . , xM)), the method implements a step E310 of blindly separating sound sources (SAS). It is assumed here in this embodiment that the number of observations is equal to or greater than the number of active sources.
Using a blind source separation algorithm applied to the M observations makes it possible, in the case of a reverberant environment, through beamforming, to extract M sound components associated with an estimated mixture matrix A of size M×M, that is to say:
s=Bx where x is the vector of the M observations, B is the separation matrix estimated by blindly separating sources, of dimensions M×M, and s is the vector of the M extracted sound components. These theoretically include N sound sources and M−N residual components corresponding to reverberation.
To obtain the separation matrix B, the blind source separation step may be implemented, for example, using an independent component analysis (or "ICA") algorithm or else a principal component analysis algorithm.
In one exemplary embodiment, ambisonic multichannel signals are of interest.
Ambisonics consists in projecting the acoustic field onto a base of spherical harmonic functions in order to obtain a spatialized representation of the sound scene. The function Ymnσ(θ,ϕ) is the spherical harmonic of order m and of index nσ, dependent on the spherical coordinates (θ, ϕ), defined using the following formula:
where P̃mn(cos ϕ) is a polar function involving the Legendre polynomial:
In practice, real ambisonic encoding is performed based on a network of sensors that are generally distributed over a sphere. The captured signals are combined in order to synthesize ambisonic content the channels of which comply as far as possible with the directivities of the spherical harmonics. The basic principles of ambisonic encoding are described below.
Ambisonic formalism, which was initially limited to representing 1st-order spherical harmonic functions, has since been expanded to higher orders. Ambisonic formalism with a higher number of components is commonly called “higher order ambisonics” (or “HOA” below).
2m+1 spherical harmonic functions correspond to each order m. Thus, content of order m contains a total of (m+1)2 channels (4 channels at the 1st order, 9 channels at the 2nd order, 16 channels at the 3rd order, and so on).
“Ambisonic components” are understood hereinafter to be the ambisonic signal in each ambisonic channel, with reference to the “vector components” in a vector base that would be formed by each spherical harmonic function. Thus, for example, it is possible to count: one component at order m=0, three additional components at order m=1, five additional components at order m=2, and, more generally, 2m+1 additional components at order m.
The ambisonic signals that are captured for these various components are then distributed over a number M of channels that results from the maximum order m that it is intended to capture in the sound scene. For example, if a sound scene is captured using an ambisonic microphone having 20 piezoelectric capsules, the maximum captured ambisonic order is the largest m such that (m+1)²≤20, i.e. m=3: the number of ambisonic components under consideration is 1+3+5+7=16 and the number of channels is M=(m+1)²=16.
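This relationship between the number of capsules and the captured order may be checked in a few lines (a sketch; the capsule count of 20 comes from the example above):

```python
import math

capsules = 20
m_max = math.isqrt(capsules) - 1                    # largest m with (m+1)**2 <= capsules -> 3
M = (m_max + 1) ** 2                                # 16 ambisonic channels
per_order = [2 * k + 1 for k in range(m_max + 1)]   # [1, 3, 5, 7] components per order
assert sum(per_order) == M
```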
Thus, in the exemplary implementation in which the multichannel signal is an ambisonic signal, step E310 receives the signals x=(x1, . . . , xi, . . . , xM), captured by a real microphone in a reverberant environment, in the form of frames of ambisonic sound content on M=(m+1)² channels and containing N sources.
The sources are therefore blindly separated in step E310 as explained above.
This step makes it possible to simultaneously extract M components and the estimated mixture matrix. The components obtained at the output of the source separation step may be classified into two classes of components: a first class of components called direct components corresponding to the direct sound sources and a second class of components called reverberant components corresponding to the reflections of the sources.
In step E320, descriptors of the M components (s1, s2, . . . sM) from the source separation step are calculated, which descriptors will make it possible to associate, with each extracted component, the class that corresponds thereto: direct component or reverberant component.
Two types of descriptors are calculated here: bivariate descriptors that involve pairs of components (sj, sl) and univariate descriptors calculated for a single component sj.
A set of bivariate first descriptors is thus calculated. These descriptors are representative of statistical relationships between the components of the pairs of the obtained set of M components.
Three scenarios may be modeled depending on the respective classes of the two components of a pair: direct/direct, direct/reverberant and reverberant/reverberant.
Specifically, each direct component consists primarily of the direct field of a source, similar to a plane wave, plus a residual reverberation whose power contribution is less than that of the direct field. As the sources are statistically independent by nature, there is therefore a low correlation between the extracted direct components.
By contrast, each reverberant component consists of first reflections, delayed and filtered versions of the direct field or fields, and of a delayed reverberation. The reverberant components thus have a significant correlation with the direct components, and generally a group delay able to be identified in relation to the direct components.
The coherence function γjl²(f) provides information about the existence of a correlation between two signals sj and sl and is expressed using the formula:

γjl²(f)=|Γjl(f)|²/(Γj(f)·Γl(f))
where Γjl(f) is the interspectrum between sj and sl, and Γj(f) and Γl(f) are the respective autospectra of sj and sl.
The coherence is ideally zero when sj and sl are the direct fields of independent sources, but it adopts a high value when sj and sl are two contributions from one and the same source: the direct field and a first reflection, or else two reflections.
Such a coherence function therefore indicates a probability of having two direct components or two contributions from one and the same source (direct/reverberant or first reflection/subsequent reflections).
In practice, the interspectra and autospectra may be calculated by dividing the extracted components into K frames (adjacent or with overlap), by applying a short-term Fourier transform to each frame k of these K frames in order to produce the instantaneous spectra Sj(k, f), and by averaging the observations on the K frames:
Γjl(f)=Ek∈{1 . . . K}{Sj(k,f)·Sl*(k,f)}
The descriptor used for a wideband signal is the average over all of the frequencies of the coherence function between two components, that is to say:
dγ(sj,sl)=Ef{γjl²(f)}
As the coherence is bounded between 0 and 1, the average coherence will also be contained within this interval, tending toward 0 for perfectly independent signals and toward 1 for highly correlated signals.
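This wideband descriptor may be sketched with the Welch-averaged coherence of SciPy, which implements precisely the framed interspectrum/autospectrum estimate described above (the component names, sampling rate and frame length are assumptions of this sketch):

```python
import numpy as np
from scipy.signal import coherence

# s_j, s_l: two extracted components (1-D arrays), fs: sampling rate (assumed given)
f, gamma2 = coherence(s_j, s_l, fs=fs, nperseg=2048)  # gamma_jl^2(f), averaged over frames
d_gamma = float(np.mean(gamma2))                      # wideband descriptor, bounded in [0, 1]
```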
It is noted that, in the first case, the coherence value dγ is less than 0.3, whereas, in the second case, dγ reaches 0.7 in the presence of a single active source. These values readily reflect both the independence of the direct signals and the relationship linking a direct signal and the same reverberant signal in the absence of interference. However, by incorporating a second active source into the initial mixture (case no. 3), the average coherence of the direct/reverberant case drops to 0.55 and is highly dependent on the spectral content and the power level of the various sources. In this case, the competition between the various sources causes the coherence to drop at low frequencies, whereas the values are higher above 5500 Hz due to a lower contribution of the interfering source.
It is therefore noted that determining a probability of belonging to one and the same class or to a different class for a pair of components may depend on the number of sources that are active a priori. For the classification step E340 described below, this parameter may be taken into account in one particular embodiment.
In step E330, the probability densities associated with the descriptors are determined for each class of components. In practice, these probability densities may be estimated beforehand on a database of signals whose components have known classes.
Knowing the theoretical encoding coefficients of a source, where (θ, φ) represent the spherical coordinates, azimuth and elevation, of the source, it is possible to deduce, through simple trigonometric calculations, the position of the extracted component using the following set of equations:
where arctan2 is the two-argument arctangent function that makes it possible to remove the quadrant ambiguity regarding the sign of the arctangent.
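A sketch of this deduction for first-order coefficients is given below; the (W, X, Y, Z) channel ordering and the N3D normalization are assumptions of this sketch, since several ambisonic conventions exist:

```python
import numpy as np

def direction_from_first_order(a):
    """Deduce (azimuth, elevation) from estimated 1st-order mixture coefficients.

    Assumes a = [W, X, Y, Z] with N3D encoding, i.e. approximately
    [1, sqrt(3)cos(theta)cos(phi), sqrt(3)sin(theta)cos(phi), sqrt(3)sin(phi)].
    """
    w, x, y, z = np.asarray(a, dtype=float) / a[0]  # normalize by the omni channel
    theta = np.arctan2(y, x)                        # azimuth; arctan2 removes the sign ambiguity
    phi = np.arctan2(z, np.hypot(x, y))             # elevation
    return theta, phi
```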
Once the signals of the database have been classified, the various descriptors are calculated. A histogram of the values of the descriptor is extracted from the resulting points cloud for a given class, from which a probability density is chosen from among a collection of candidate densities, on the basis of a distance, typically the Kullback-Leibler divergence.
For the example of an ambisonic signal, the probability laws thus obtained are presented for 4-channel (1st-order ambisonics) or 9-channel (2nd-order ambisonics) microphonic capturing, in the case of one or two simultaneously active sources. It is first of all observed that the average coherence dγ adopts significantly lower values for pairs of direct components in comparison with the cases in which at least one of the components is reverberant, and this observation is all the more pronounced the higher the ambisonic order. This is due to improved selectivity of the beamforming when the number of channels is greater, and therefore to improved separation of the extracted components.
It is also observed that, in the presence of two active sources, the coherence estimators degrade, whether these be the direct/reverberant or reverberant/reverberant pairs (the direct/direct pair does not exist in the presence of a single source).
Definitively, it appears that the probability densities depend greatly on the number of sources in the mixture, and on the number of sensors available.
This descriptor is therefore relevant for detecting whether a pair of extracted components corresponds to two direct components (2 true sources) or whether at least one of the two components stems from the room effect.
In one embodiment of the invention, another type of bivariate descriptor is calculated in step E320. This descriptor is either calculated instead of the coherence descriptor described above or in addition thereto.
This descriptor will make it possible to determine, for a (direct/reverberant) pair, which component is more probably the direct signal and which one corresponds to the reverberant signal, based on the simple assumption that the first reflections are delayed and attenuated versions of the direct signal.
This descriptor is based on another statistical relationship between the components: the delay between the two components of the pair. The delay τjl,max is defined as being the delay that maximizes the intercorrelation function rjl(τ)=Et{sj(t)sl(t−τ)} between the components of a pair of components sj and sl:

τjl,max=argmaxτ rjl(τ)
When sj is a direct signal and sl is an associated reflection, the trace of the intercorrelation function will generally result in a negative τjl,max. Thus, if it is known that a pair of direct/reverberant components is present, it is thus theoretically possible to assign the class to each of the components by virtue of the sign of τjl,max.
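The delay may be estimated directly from the sample cross-correlation; a minimal sketch (numpy only; the lag convention below matches the definition rjl(τ)=Et{sj(t)sl(t−τ)} given above):

```python
import numpy as np

def delay_of_max(s_j, s_l):
    """Return the lag tau_max (in samples) maximizing r_jl, plus the full correlation trace."""
    r = np.correlate(s_j, s_l, mode="full")      # r_jl(tau) for all integer lags
    lags = np.arange(-(len(s_l) - 1), len(s_j))  # lag axis matching numpy's 'full' output
    return lags[np.argmax(r)], r, lags
```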
In practice, the estimation of the sign of τjl,max is often highly impacted by noise, and the sign is even sometimes inverted.
For these reasons, it is possible to choose to make the sign of τjl,max used as a descriptor reliable by virtue of a robustness or reliability indicator.
The average coherence between the components makes it possible to evaluate the relevance of the direct/reverberant pair as seen above. If this is high, it may be hoped that the group delay will be a reliable descriptor.
On the other hand, the relative value of the intercorrelation peak at τjl,max with respect to the other values of the intercorrelation function rjl(τ) also provides information about the reliability of the group delay.
In one particular embodiment, a second indicator of reliability of the sign of the delay, called emergence, is defined by calculating the ratio between the absolute value of the intercorrelation at τjl,max and the maximum absolute value of the intercorrelation over the delays τ of a sign opposite that of τjl,max:

e(sj,sl)=|rjl(τjl,max)|/max{|rjl(τ)|, sign(τ)=−sign(τjl,max)}
This ratio, which is called emergence, is an ad hoc criterion whose relevance is proven in practice: it adopts values close to 1 for independent signals, i.e. two direct components, and higher values for correlated signals, such as a direct component and a reverberant component.
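A sketch of this emergence indicator, reusing the correlation trace and lag axis from the previous sketch (the zero-lag special case and the small floor value are assumptions):

```python
import numpy as np

def emergence(r, lags):
    """Ratio of |r| at the best lag to the best |r| among lags of the opposite sign."""
    i_max = np.argmax(r)
    tau_max = lags[i_max]
    if tau_max == 0:                            # no opposite sign: indicator uninformative
        return 1.0
    opposite = np.sign(lags) == -np.sign(tau_max)
    return float(np.abs(r[i_max]) / max(np.abs(r[opposite]).max(), 1e-12))
```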
There is therefore a descriptor dτ that determines, for each assumed direct/reverberant pair, the probability of each component of the pair being the direct component or the reverberant component. This descriptor is dependent on the sign of τmax, on the average coherence between the components and on the emergence of the intercorrelation maximum.
It should be noted that this descriptor is sensitive to noise, and in particular to the presence of a plurality of simultaneous sources.
Using this descriptor, in step E330, a probability of belonging to a first class of direct components or a second class of reverberant components is calculated for a pair of components. For sj identified as being ahead of sl, the probability of sj being direct and sl being reverberant is estimated using a two-dimensional law.
Logically, the probability of sj being reverberant and sl being direct even though sj is in phase advance is then estimated as the complement to 1 of the direct/reverberant case:
p(Cj=r, Cl=d|dτ)=1−p(Cj=d, Cl=r|dτ)
where Cj and Cl are the respective classes of the components sj and sl, Cd being the first class of components, called direct components, corresponding to the N direct sound sources and Cr being the second class of M−N components, called reverberant components.
This descriptor is able to be used only for direct/reverberant pairs. The direct/direct and reverberant/reverberant pairs are not taken into consideration by this descriptor, and they are therefore considered to be equally probable.
The sign of the delay is a reliable indicator when both the coherence and the emergence have medium or high values. A low emergence or a low coherence will make the direct/reverberant or reverberant/direct pairs equally probable.
In step E320, a set of what are called univariate second descriptors, representative of encoding characteristics of the components of the obtained set of M components, is also calculated.
With knowledge of the capturing system that is used, a source coming from a given direction is encoded using mixture coefficients that depend, inter alia, on the directivity of the sensors. If the source is able to be considered as a point and if the wavelengths are long in comparison with the size of the antenna, the source may be modeled as a plane wave. This scenario generally holds in the case of a small ambisonic microphone, provided that the source is far enough away from the microphone (one meter is enough in practice).
For a component sj extracted by SAS, the jth column of the estimated mixture matrix A, obtained by inverting the separation matrix B, will contain the mixture coefficients associated therewith. If this component is direct, that is to say it corresponds to a single source, the mixture coefficients of column Aj will tend towards characteristics of microphonic encoding for a plane wave. In the case of a reverberant component, which is the sum of a plurality of reflections and a diffuse field, the estimated mixture coefficients will be more random and will not correspond to the encoding of a single source with a precise direction of arrival.
It is therefore possible to use the conformity between the estimated mixture coefficients and the theoretical mixture coefficients for a single source in order to estimate a probability of the component being direct or reverberant.
In the case of 1st-order ambisonic microphonic capturing, a plane wave sj of incidence (θj, ϕj) in what is known as the N3D ambisonic format is encoded using the formula:
xj=Ajsj
where the column Aj contains the first-order encoding coefficients of the plane wave, which in the N3D convention are equal to 1 for the omnidirectional channel and to √3 cos θj cos ϕj, √3 sin θj cos ϕj and √3 sin ϕj for the three first-order channels.
Specifically, there are several ambisonic formats, distinguished in particular by the normalization applied to the components of each order. The known N3D format is considered here. The various formats are described for example at the following link: https://en.wikipedia.org/wiki/Ambisonic_data_exchange_format.
It is thus possible to deduce, from the encoding coefficients of a source, a criterion, called plane wave criterion, that illustrates the conformity between the estimated mixture coefficients and the theoretical equation of a single encoded plane wave:
The criterion cop is by definition equal to 1 in the case of a plane wave. In the presence of a correctly identified direct field, the plane wave criterion will remain very close to the value 1. By contrast, in the case of a reverberant component, the multitude of contributions (first reflections and delayed reverberation) with equivalent power levels will generally move the plane wave criterion away from its ideal value.
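The exact expression of the criterion cop is not reproduced here; by way of illustration, one possible form for 1st-order N3D coefficients, which indeed equals 1 for a plane wave, is sketched below (the (W, X, Y, Z) ordering is an assumption of this sketch):

```python
import numpy as np

def plane_wave_criterion(a):
    """One possible c_op for 1st-order N3D coefficients a = [W, X, Y, Z].

    For an N3D-encoded plane wave, X**2 + Y**2 + Z**2 = 3 * W**2, so the ratio is 1.
    """
    w, x, y, z = np.asarray(a, dtype=float)
    return float((x**2 + y**2 + z**2) / (3.0 * w**2))
```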
For this descriptor, as for the others, the associated distribution calculated at E330 has a certain variability, depending in particular on the level of noise present in the extracted components. This noise consists primarily of the residual reverberation and of contributions from the interfering sources that will not have been perfectly canceled out. To refine the analysis, it is therefore possible to choose to estimate the distribution of the descriptors depending on the assumed number of active sources and on the number of capturing channels (for example the ambisonic order).
The distance between the distributions of the two classes allows relatively reliable discrimination between the plane wave components and those that are more diffuse.
The descriptors calculated in step E320 and disclosed here are thus based both on the statistics of the extracted components (average coherence and group delay) and on the estimated mixture matrix (plane wave criterion). These make it possible to determine conditional probabilities of a component belonging to one of the two classes Cd or Cr.
From the calculation of these probabilities, it is then possible, in step E340, to determine a classification of the components of the set of M components into the two classes.
For a component sj, Cj denotes the corresponding class. With regard to classifying the set of M extracted components, “configuration” is the name given to the vector of the classes C of dimension 1×M such that:
C=[C1,C2, . . . , CM] where Cj ∈ {Cd,Cr}
With the knowledge that there are two possible classes for each component, the problem ultimately amounts to choosing from among a total of 2^M potential configurations assumed a priori to be equally probable. To achieve this, the maximum a posteriori rule is applied: with L(Ci) denoting the likelihood of the ith configuration, the configuration that is used will be the one having the maximum likelihood, that is to say:

C=argmaxCi L(Ci), ∀1≤i≤2^M
The chosen approach may be exhaustive and then consist in estimating the likelihood of all of the possible configurations based on the descriptors determined in step E320 and the distributions associated therewith that are calculated in step E330.
According to another approach, the configurations may be preselected in order to reduce the number of configurations to be tested, and therefore the complexity of implementing the solution. This preselection may be performed for example using the plane wave criterion alone, by classifying into the category Cr those components whose criterion cop moves far enough away from 1, the theoretical value for a plane wave.
This preselection makes it possible to reduce the number of configurations to be tested by pre-classifying certain components, excluding the configurations that impose the class Cd on these pre-classified components.
Another possibility for reducing the complexity even further is that of excluding the pre-classified components from the calculation of the bivariate descriptors and from the likelihood calculation, thereby reducing the number of bivariate criteria to be calculated and therefore even further reducing the processing complexity.
A naive Bayesian approach may be used to estimate the likelihood of each configuration using the calculated descriptors. In this type of approach, there is provided a set of descriptors dk for each component sj. For each descriptor, the probability of the component sj belonging to the class Cα (α=d or r) is formulated using Bayes' law:

p(Cj=Cα|dk)=p(dk|Cj=Cα)·p(Cj=Cα)/p(dk)
With the two classes Cr and Cd being assumed to be equally probable, this means that:

p(Cd)=p(Cr)=1/2
We then obtain:

p(α|dk)=p(dk|α)·p(α)/p(dk)
in which the term Cj=Cα is abbreviated to Cα in order to simplify the notation. As what is sought in this case is the likelihood maximum, the denominator of each conditional probability is constant regardless of the configuration that is evaluated. It is therefore possible to simplify the expression thereof:
p(α|dk)∝p(dk|α)
For a bivariate descriptor (such as for example coherence) involving two components sj and sl and their respective assumed classes, the previous expression is expanded:
p(Cj=α, Cl=β|dk)∝p(dk|α,β)
and so on.
The likelihood is expressed as the product of the conditional probabilities associated with each of the K descriptors, if it is assumed that these are independent:

L(C)=p(d|C)∝Πk=1..K p(dk|C)
where d is the vector of the descriptors and C is a vector representing a configuration (that is to say the combination of the assumed classes of the M components), as defined above.
More precisely, a number K1 of univariate descriptors is used for each of the components, whereas a number K2 of bivariate descriptors is used for each pair of components. As the probability laws for the descriptors are established on the basis of the assumed number of sources N and of the number of channels (the index m representing the ambisonic order in the case of capturing of this type), the final expression of the likelihood is then formulated as follows:

L(C)∝Πj=1..M Πk=1..K1 p(dk(sj)|Cj, N, m) · Πj<l Πk=1..K2 p(dk(sj,sl)|Cj, Cl, N, m)
For calculation-based reasons, rather than the likelihood, preference is given to its logarithmic version (log-likelihood):

log L(C)=Σj=1..M Σk=1..K1 log p(dk(sj)|Cj, N, m) + Σj<l Σk=1..K2 log p(dk(sj,sl)|Cj, Cl, N, m)
This equation is the one used definitively to determine the most likely configuration in the Bayesian classifier described here for this embodiment.
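A minimal sketch of this exhaustive maximum-log-likelihood search is given below (the container names and the two-letter class labels are assumptions of this sketch; the conditional log-probabilities are assumed to have been read from the distributions learned in step E330):

```python
import itertools
import numpy as np

def best_configuration(M, log_p_uni, log_p_bi):
    """Exhaustive search of the 2**M configurations maximizing the log-likelihood.

    log_p_uni[j][c]            : summed log p(univariate descriptors of component j | class c)
    log_p_bi[(j, l)][(cj, cl)] : summed log p(bivariate descriptors of pair (j, l) | classes)
    with classes c in {'d', 'r'} (direct / reverberant).
    """
    best, best_ll = None, -np.inf
    for config in itertools.product("dr", repeat=M):
        ll = sum(log_p_uni[j][config[j]] for j in range(M))
        ll += sum(log_p_bi[(j, l)][(config[j], config[l])]
                  for j in range(M) for l in range(j + 1, M))
        if ll > best_ll:
            best, best_ll = config, ll
    return best, best_ll
```

Preselection by the plane wave criterion, as described above, simply amounts to fixing the class of some components before this enumeration, which shrinks the search space accordingly.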
The Bayesian classifier presented here is just one exemplary implementation, and it could be replaced, inter alia, by a support vector machine or a neural network.
Ultimately, the configuration having the maximum likelihood is retained, indicating the direct or reverberant class associated with each of the M components C=(C1, . . . , Ci, . . . , CM).
From this configuration, the N components corresponding to the N active direct sources are therefore deduced.
The processing described here is performed in the time domain, but may also, in one variant embodiment, be applied in a transformed domain.
The method as described above may then be applied in each frequency sub-band of the transformed domain.
Moreover, the useful bandwidth may be reduced depending on the potential imperfections of the capturing system, at high frequencies (presence of spatial aliasing) or at low frequencies (impossible to find the theoretical directivities of the microphonic encoding).
Sensors Ca1 to CaM, shown here in the form of a spherical microphone MIC, make it possible to acquire, in a real and therefore reverberant medium, the M mixture signals x=(x1, . . . , xi, . . . , xM) of a multichannel signal.
Of course, other forms of microphone or sensor may be provided. These sensors may be integrated into the device DIS or else outside the device, the signals resulting therefrom then being transmitted to the processing device, which receives them via its input interface 840. In one variant, these signals may simply be obtained beforehand and imported into the memory of the device DIS.
These M signals are then processed by a processing circuit and computerized means, such as a processor PROC at 860 and a working memory MEM at 870. This memory may contain a computer program containing code instructions for implementing the steps of the processing method as described above.
The device thus contains a source separation processing module 810 applied to the captured multichannel signal in order to obtain a set of M sound components s (s1, . . . , si, . . . , sM), where M≥N. The M components are provided at the input of a calculator 820 able to calculate a set of what are called bivariate first descriptors, representative of statistical relationships between the components of the pairs of the obtained set of M components and a set of what are called univariate second descriptors, representative of encoding characteristics of the components of the obtained set of M components.
These descriptors are used by a classification module 830 or classifier, able to classify components of the set of M components into two classes of components, a first class of N components called direct components corresponding to the N direct sound sources and a second class of M−N components called reverberant components.
For this purpose, the classification module contains a module 831 for calculating a probability of belonging to one of the two classes of the components of the set M, depending on the sets of first and second descriptors.
The classifier uses descriptors linked to the correlation between the components in order to determine which are direct signals (that is to say true sources) and which are reverberation residuals. It also uses descriptors linked to the mixture coefficients estimated by SAS, in order to evaluate the conformity between the theoretical encoding of a single source and the estimated encoding of each component. Some of the descriptors are therefore dependent on a pair of components (for the correlation), and others are dependent on a single component (for the conformity of the estimated microphonic encoding).
A likelihood calculation module 832 makes it possible to determine, in one embodiment, the most probable combination of the classifications of the M components by way of a likelihood value calculation depending on the probabilities calculated at the module 831 and for the possible combinations.
Lastly, the device contains an output interface 850 for delivering the classification information of the components, for example to another processing device, which may use this information to enhance the sound of the discriminated sources, to eliminate noise from them or else to mix a plurality of discriminated sources. Another possible processing operation may also be that of analyzing or locating the sources in order to optimize the processing of a voice command.
Many other applications using the classification information thus determined are then possible.
The device DIS may be integrated into a microphonic antenna in order for example to capture sound scenes or to record a voice command. The device may also be integrated into a communication terminal able to process signals captured by a plurality of sensors integrated into or remote from the terminal.
Although the present disclosure has been described with reference to one or more examples, workers skilled in the art will recognize that changes may be made in form and detail without departing from the scope of the disclosure and/or the appended claims.
Number | Date | Country | Kind
1755183 | Jun. 2017 | FR | national
Filing Document | Filing Date | Country | Kind
PCT/FR2018/000139 | May 24, 2018 | WO | 00
Publishing Document | Publishing Date | Country | Kind
WO2018/224739 | Dec. 13, 2018 | WO | A
Number | Name | Date | Kind
20010037195 | Acero | Nov 2001 | A1 |
20050060142 | Visser | Mar 2005 | A1 |
20060034361 | Choi | Feb 2006 | A1 |
20070260340 | Mao | Nov 2007 | A1 |
20080208538 | Visser | Aug 2008 | A1 |
20080306739 | Nakajima | Dec 2008 | A1 |
20090103738 | Faure | Apr 2009 | A1 |
20090254338 | Chan | Oct 2009 | A1 |
20090292544 | Virette | Nov 2009 | A1 |
20090310444 | Hiroe | Dec 2009 | A1 |
20100111290 | Namba | May 2010 | A1 |
20110015924 | Gunel Hacihabiboglu | Jan 2011 | A1 |
20110058676 | Visser | Mar 2011 | A1 |
20130121495 | Mysore | May 2013 | A1 |
20150117649 | Nesta | Apr 2015 | A1 |
Other Publications
Mathieu Baque et al., "Separation of Direct Sounds from Early Reflections using the Entropy Rate Bound Minimization Algorithm", AES 60th International Conference, Leuven, Belgium, Feb. 3-5, 2016, pp. 1-8.
Zaher El Chami et al., "A New EM Algorithm for Underdetermined Convolutive Blind Source Separation", 17th European Signal Processing Conference, Glasgow, Scotland, Aug. 24-28, 2009, pp. 1457-1461.
Taejin Park et al., "Background Music Separation for Multichannel Audio Based on Inter-channel Level Vector Sum", IEEE ISCE, Audio Research Lab., Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea, 2014, pp. 1-2.
Baque Mathieu et al., "Separation of Direct Sounds from Early Reflections Using the Entropy Rate Bound Minimization Algorithm", 60th International Conference: Dreams (Dereverberation and Reverberation of Audio, Music, and Speech), AES, New York, USA, Jan. 27, 2016, XP040680602.
Jourjine A. et al., "Blind Separation of Disjoint Orthogonal Signals: Demixing N Sources From 2 Mixtures", 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Istanbul, Turkey, Jun. 5-9, 2000, pp. 2985-2988, XP001035813.
International Search Report dated Aug. 8, 2018 for corresponding International Application No. PCT/FR2018/000139, filed May 24, 2018.
Written Opinion of the International Searching Authority dated Aug. 8, 2018 for corresponding International Application No. PCT/FR2018/000139, filed May 24, 2018.
English translation of the Written Opinion of the International Searching Authority dated Aug. 17, 2018 for corresponding International Application No. PCT/FR2018/000139, filed May 24, 2018.
Number | Date | Country
20200152222 A1 | May 2020 | US