Method for the non-linear estimation of a mixture of signals

Information

  • Patent Grant
  • Patent Number
    10,560,073
  • Date Filed
    Wednesday, December 23, 2015
  • Date Issued
    Tuesday, February 11, 2020
Abstract
This method for the non-linear estimation of no more than two mixed signals from separate sources, the time/frequency representation of which shows an unknown non-zero proportion of zero components, using an array made up of P>2 antennas, when the directional vectors U and V of the sources emitting these signals are additionally known or estimated, includes the following steps: a) Calculating the successive discrete Fourier transforms of the signal received by the antennas and sampled to obtain a time-frequency P-vector grid of the signal, each element of the grid being referred to as a box and containing a complex vector X forming a measurement; b) For each box, calculating the conditional expectation estimator of the signal, or of the signals, from the measurement X and an a priori probability density for the signals that is a Gaussian mixture.
Description

The present invention relates to a method for the non-linear estimation of wireless signals from several sources, the time/frequency representation of which shows an unknown non-zero proportion of zero components, using an array made up of P>2 antennas, when the directional vectors U and V of the sources emitting these signals are additionally known or estimated.


It is commonly necessary to estimate wireless signals originating from radars or communication systems, or acoustic signals (audio or sonar), received by a listening system made up of an antenna array.


The received signal results from a temporal and spectral mixture of no more than 2 sources, the directional vectors of which are assumed to be known, since they are estimated beforehand.


The criterion traditionally used is the maximum likelihood (ML), leading to processing by spatial linear filtering, which improves the signal-to-noise ratio by a factor equal to the number of sensors in the single-source case.


In the scenarios being considered (known directional vectors), this processing leads to an unbiased linear estimate with minimal variance for a signal about which no a priori knowledge is available.


Other methods that are more complicated to carry out can be used, such as Capon filtering, when the directional vectors are imperfectly or not known (“Robust Adaptive Beamforming”, eds P. Stoica and J. Li, Wiley, 2006).


None of these methods exploit any a priori on the signal, and in particular none make it possible to correctly delimit the temporal and/or spectral supports of the signal, since a linear (ML) or pseudo-linear (Capon) processing always provides an output signal, even if at the input, the measurement is only made up of noise.


The problem is to access finer knowledge of the signal.


The invention aims to propose a method enabling a finer determination of the components of the signal.


To that end, the invention relates to a method including the following steps:

    • a) Calculating the successive discrete Fourier transforms of the signal received by the antennas and sampled to obtain a time-frequency P-vector grid of the signal; each element of the grid being referred to as a box and containing a complex vector X forming a measurement;
    • b) For each box, calculating the conditional expectation estimator of the signal, or of the signals, from the measurement X and an a priori probability density for the signals that is a Gaussian mixture.


The method thus makes it possible to answer the following questions: for which of the signals present (their number being assumed to be limited to 2 locally), what are the temporal and spectral supports of the supposed signal described by the components obtained using a time/frequency analysis? And, what is the value of each component when it is not zero? The answer to these questions makes it possible to improve the knowledge of the signal.


According to specific embodiments, the method includes one or more of the following features:

    • said method includes a step for estimating parameters necessary to establish the conditional expectation using the method of moments operating on the boxes of a divided window in the time/frequency grid;
    • the calculation of the conditional expectation estimator is approximated by a Conditional Expectation with 4 Linear Filters obtained by a four-hypothesis decision processing pertaining to four Hermitian forms of the measurement X, followed by linear filtering commanded by the result of the decision;
    • the calculation of the Conditional Expectation estimator with 4 Linear Filters is approximated by a Conditional Expectation with Independent Decisions obtained by a two-hypothesis decision processing pertaining to U*X and V*X, followed by linear filtering commanded by the result of the decision;
    • as a function of the result of the decision, the linear filtering processing yielding the Conditional Expectation estimator with Independent Decisions, is either:
      • the estimator of the dual-source maximum likelihood for each source;
      • the single-source maximum likelihood estimator for the first source, 0 for the second source;
      • 0 for the first source, the single-source maximum likelihood estimator for the second source;
      • 0 for each source.
    • the calculation of the Conditional Expectation estimator with Independent Decisions is approximated by a Thresholded Maximum Likelihood obtained by estimating the signal(s) using the maximum likelihood method followed by the comparison of each estimate to a threshold;
    • the or each decision threshold is chosen to respect a so-called false alarm likelihood consisting of declaring the signal to be non-zero when it is zero;
    • said method includes:
      • A first estimate of the signals done using the Conditional Expectation with Independent Decisions or Thresholded Maximum Likelihood method,
      • An estimate of parameters done from the components of the signal obtained in the previous step,
      • A second estimate of the signals done using the Conditional Expectation method or the Conditional Expectation with 4 Linear Filters method, informed of the values of the parameters obtained in the previous step.





The invention will be better understood upon reading the following description, done in reference to the appended drawings:



FIG. 1 is a schematic view illustrating signal sources and an installation for estimating wireless signals coming from these sources according to the invention, provided solely for information and not intended to represent reality;



FIG. 2 is a flowchart of one of the methods as implemented in the invention in the single-source case;



FIG. 3 is a flowchart of one of the methods as implemented in the invention in the dual-source case; and



FIG. 4 is a flowchart of one of the methods as implemented in the invention.





The device 8 for estimating a mixture of signals coming from several sources 12 according to the invention illustrated in FIG. 1 comprises an antenna array made up of a plurality of antenna elements 10 or sensors. Each antenna element is coupled to a receiving channel, in particular to digitize the analog signal received by each sensor.


The invention is suitable both for monopolar antenna arrays and bipolar antenna arrays.


The device further includes computing modules. In different alternative embodiments of the device for estimating signals according to the invention, the computing modules can be arranged with different architectures; in particular, each step of the method can be carried out by a separate module, or on the contrary, all of the steps can be grouped together within a single information processing unit 14.


The computing unit(s) are connected to the sensors by any means suitable for transmitting the received signals.


The computing unit(s) include information storage means, as well as computing means making it possible to carry out the algorithms of FIGS. 2 and 3, depending on whether a mono-source or dual-source scenario exists.


Advantageously, the reception is done on a spatial diversity array (interferometric array) and the demodulation of the signal allowing the “baseband lowering” is done by the same local oscillator for all of the antennas in order to ensure coherence. The received signal is sampled in real or complex (double demodulation and quadrature or any other method) on each channel.


The received signal, filtered in a band typically of several hundred MHz, is modeled by: s0(t)ei2πf0t. This model does not provide a hypothesis on the type of modulation.


This signal is sampled at a rate Te such that 1/Te >> 2 × the band of the wanted signal.


The weighted and overlapping Discrete Fourier Transforms (DFT) of this signal are calculated over NDFT points. The weighting serves to reduce the secondary lobes. Since this weighting causes a variation in the contribution of the data to the DFT (the data at the center of the temporal support of the DFT being assigned a much higher weight than the data on the edges of the support), which can go as far as the loss of short signals, the hypothesis of overlapping temporal supports is used.
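As an illustration, the overlapping, weighted DFTs can be sketched for a single channel as follows (a minimal sketch assuming a Hann weighting, NDFT=256 points and 50% overlap; these are illustrative choices, not values imposed by the method):

```python
import numpy as np

def tf_grid(x, n_dft=256, overlap=0.5):
    """Weighted, overlapping DFTs of one sampled channel.

    The Hann weighting lowers the secondary lobes; the 50% overlap
    compensates the low weight given to samples at the edges of each
    DFT temporal support.
    """
    hop = int(n_dft * (1 - overlap))
    w = np.hanning(n_dft)
    n_frames = 1 + (len(x) - n_dft) // hop
    frames = np.stack([x[i * hop : i * hop + n_dft] for i in range(n_frames)])
    # one row per time slice, one column per frequency box
    return np.fft.fft(frames * w, axis=1)

x = np.exp(2j * np.pi * 0.25 * np.arange(4096))  # tone at normalized frequency 0.25
grid = tf_grid(x)
print(grid.shape)  # (31, 256): 31 time slices, 256 frequency boxes
```

In the device, each of the P channels would be transformed the same way, giving the P-vector per box described below.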


The collected measurements are therefore the results of the DFTs. They constitute a time-frequency grid, the boxes of which are called time-frequency boxes. Each box of the grid contains a complex vector that is the result of the Discrete Fourier Transforms for all of the channels, for a given time interval and frequency interval.


For the frequencies of the signals and the distances involved here, the wave front is considered to be planar. The antennas therefore receive the signal with a phase difference depending on the two angles between the wave plane and the plane of the antennas.


In the mono-source case, the measurements collected on the complete array are therefore written as follows:

Xn=snU+Wn

    • Xn represents the measurements: Xn is a complex column vector with dimension P, where P is the number of channels and n=1, 2, . . . N is a double index (l,c) traveling the space of the times (index of the Fourier transform) and frequencies (channel number for a Fourier transform). More specifically, the index n travels the boxes of a temporal x frequency rectangular window with size N. The indices l and c correspond to the row and column numbers of the boxes of the window.
    • sn is a complex number representing the signal after DFT.
    • Wn is the thermal noise in the time/frequency box with index n. Wn is a column vector with dimension P. Wn is Gaussian over its real and imaginary components, independent from one time/frequency box to the other and independent from one antenna channel to the other. In other words, Wn is spatially, frequentially and temporally white. The standard deviation of the noise, counted on each real or imaginary component in a time/frequency box, is equal to σ. For the hypothesis of independence of the noise from one box to the next to be verified, the overlap of the DFTs is limited to 50%.
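The measurement model Xn=snU+Wn can be simulated directly. The sketch below uses assumed values for P, σ and for the signal (present in a fraction q of the boxes, Gaussian when present, anticipating the mixture prior introduced later in the description):

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 4, 1000           # channels, time/frequency boxes
sigma = 1.0              # noise std per real/imaginary component
q, sigma1 = 0.3, 10.0    # presence proportion and signal std (assumed values)

U = np.ones(P) / np.sqrt(P)  # normalized directional vector (assumed)

# signal: zero in a box with probability 1-q, complex Gaussian otherwise
present = rng.random(N) < q
s = present * (rng.normal(0, sigma1, N) + 1j * rng.normal(0, sigma1, N))

# thermal noise: Gaussian on real/imaginary parts, white across boxes and channels
W = rng.normal(0, sigma, (N, P)) + 1j * rng.normal(0, sigma, (N, P))

X = s[:, None] * U + W       # Xn = sn U + Wn, one row per box
print(X.shape)
```

Each row of X is the complex P-vector contained in one time/frequency box.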


In the case of a monopolar interferometric array, U is written as follows:







U=g(u1, . . . , uP)T,




where g is a complex scalar dependent on the polarization of the incident wave and its arrival direction, and where ui are modulus 1 complex numbers representing the geometric phase shifts associated with the direction of the incident wave. It is possible to choose one of the antennas of the receiver as phase reference.


In the case of a bipolar interferometric array, U is written as follows:

U=hH+vV,


where h and v are complex scalars such that |h|²+|v|²=1 that express the polarization of the incident wave, and where H (resp. V) is the response of the array to a horizontally (resp. vertically) polarized wave. H and V depend only on the direction and frequency of the incident wave.


In all cases, U is of dimension P, where P is the number of antennas used.


It is possible to consider that U is normalized, and that sn carries the power of the signal and the mean gain of the array.


U is kept by the DFT, which is a linear transform, and is found on the signal output by the DFT.


In the general scenario, it is possible to have a mixture of K signals (K optionally being greater than the resolution of the array). The signal is then written:













Xn=Σk=1 . . . K sk(n)Uk+Wn; n=1, 2, . . . N  Equation 1
Expression of a Signal Mixture

The fundamental hypothesis is that when the set of N time/frequency boxes is restricted to a rectangular zone or window with index j, the complexity of the environment is such that in such a window, the mixture of the signals is limited to two signals.


The model then becomes:

Xjn=sj1(n)Uj1+sj2(n)Uj2+Wn  Equation 2
Expression of a Signal Mixture in a Window


Windows of predetermined size are defined to cover the time-frequency grid. All of these windows form a division of the grid. The size of the windows is chosen such that no more than two signals from two sources are present in each window.


The vector U or the vectors U and V designating the unitary directional vector(s) formed by the incident signal(s) relative to the array are next extracted (or estimated), using any suitable known means.


A loop is formed to travel all of the windows defining the division of the grid.


For each window, a step for estimating the signal(s) is carried out using the conditional expectation and approximations that can be made thereto for the specific model of the signals.


The final processing is broken down into:

    • a nonlinear decision step of the situation in each time/frequency box: source 1 present and source 2 present, or: source 1 present and source 2 absent, or: source 1 absent and source 2 present, or source 1 absent and source 2 absent, then
    • a linear filtering step specific to the situation.


      The obtained processing is unbiased, practically optimal within the meaning of the mean quadratic error, and provides a time/frequency delimitation of the support of the signal to be estimated.


The estimate of the signal is done over a window where the situation is mono-source or dual-source.


In a mono-source situation, the vectorial signal measured for the box with index n is

Xn=snU+Wn, n=1, 2, . . .  Equation 3
Expression of the Measured Signal (Mono-Source Scenario)

In a dual-source situation, one has:

Xn=snU+cnV+Wn, n=1, 2, . . .  Equation 4
Expression of the Measured Signal (Dual-Source Scenario)


In the writings of Equation 3 and Equation 4, U and V are unitary directional vectors of a monopolar or bipolar array, sn and cn are complex signals that must be estimated; Wn is the noise, which is spatially white and white in n, Gaussian, centered and with covariance E(WnWn*)=2σ²IP, where IP is the identity matrix of CP (P being the number of receiving channels). * designates the conjugate transpose.


Hereinafter, it will be assumed that U and V are estimated by Û and {circumflex over (V)}, for example using a method of the MUSIC type, and that this estimate having been done over a number of boxes N>>1, it is possible to make the approximation U=Û, V={circumflex over (V)}.


To account for the fact that sn (or cn) can be zero for certain n, without knowing the modulation of the signal in advance, sn (or cn) is modeled as independent samples in n of a random variable whose probability density is a mixture (q, 1−q), q<1, of two centered Gaussians with respective variances 2σj² (j=1 for sn and j=2 for cn) and 2τ², with τ²<<σj².


2σj² is the power of the wanted signal if it is present, when τ² is neglected in the expression (1−q)2τ²+q2σj² of the mean power.


τ is a regularization parameter of the model that makes it possible to apply the Bayes formula for probability laws admitting probability densities relative to the Lebesgue measure; the physical reality, however, is that there is no signal in the situation modeled by the centered Gaussian of variance 2τ². That is why, at the end of the calculations, only the limit of the expressions when τ→0 is retained.


sn and cn are considered to be independent.


In both cases, mono-source or dual-source, the measured signal is written in the single form:

Xn=MSn+Wn, n=1, 2, . . . N  Equation 5
Matrix Form of the Measured Signal

where M=U and Sn=sn in mono-source, and M=(U V), Sn=(sn cn)T in dual-source, with M known and σ2 known.


In the rest of the document, to make the notations lighter, with the understanding that the processing is done on each box with index n independently, the index n is omitted and X denotes the measurement in a time/frequency box, s the mono-source signal and S=(s c)T the dual-source signal in a time/frequency box.


The a priori knowledge of S is given by a probability density. In the mono-source case:











p(s)=q/(2πσ1²)·exp(−|s|²/(2σ1²))+(1−q)/(2πτ²)·exp(−|s|²/(2τ²))  Equation 6
Probability Density of the Signal (Mono-Source Case)

In the dual-source case:











p(S)=Σj qj/(π²·det Cj)·exp(−S*Cj⁻¹S)  Equation 7
Probability Density of the Signal (Dual-Source Case)

where q1=q², q2=q3=q(1−q), q4=(1−q)² if the sources are considered to be independent and equally probable (hereinafter, we generalize to a distribution q1, q2, q3, q4 of the 4 situations not connected by the above expressions),


And












C1=diag(2σ1², 2σ2²), C2=diag(2σ1², 2τ²), C3=diag(2τ², 2σ2²), C4=diag(2τ², 2τ²)  Equation 8
Covariance Matrices of the Signal

are the covariance matrices of S for the four possible cases.


S is estimated using the conditional expectation by using the mono-source and dual-source models (Equation 5, Equation 6 and Equation 5, Equation 7).


The Conditional Expectation (CE) is the estimator Ŝ that minimizes the mean quadratic deviation E(‖S−Ŝ‖²). It is also unbiased, and provides an explicit solution for Ŝ. It is built as follows:


Let X be the measurement; its probability density, which depends on the parameter to be estimated S, is interpreted as the conditional probability density of X knowing S. One therefore has p(X/S) and p(S) derived from the a priori knowledge of S.


Ŝ is given by the explicit formula:











Ŝ=∫domS S·p(S/X)dS  Equation 9
Estimate of S


p(S/X), the conditional probability of S knowing X, is obtained by the Bayes formula.











p(S/X)=p(X/S)·p(S)/p(X)  Equation 10
General Writing of the Conditional Probability of S Knowing X


With

p(X)=∫domS p(X/S)·p(S)dS  Equation 11
General Writing of the Probability Density of X

In the case where X=MS+W and p(S) is Gaussian, centered with covariance C, it is possible to find Ŝ analytically, which is generally not the case.


This is a linear function of X. In fact (in dimension 2):












p(X/S)=1/(π²·4σ⁴)·exp(−‖X−MS‖²/(2σ²))

p(S)=1/(π²·det C)·exp(−S*C⁻¹S)

p(X/S)·p(S)=1/(π⁴·4σ⁴·det C)·exp(−‖X‖²/(2σ²)+X*MS/(2σ²)+S*M*X/(2σ²)−S*M*MS/(2σ²)−S*C⁻¹S)

Let us take

Σ⁻¹=M*M/(2σ²)+C⁻¹

By completing the “square” in S, we have:

p(X/S)·p(S)=(K2/det C)·exp(−(S−ΣM*X/(2σ²))*Σ⁻¹(S−ΣM*X/(2σ²))−‖X‖²/(2σ²)+X*MΣM*X/(4σ⁴))

where K2 is a constant (=1/(π⁴·4σ⁴)) in dimension 2. Σ is a positive definite Hermitian matrix. One deduces from this:













∫domS p(X/S)·p(S)dS=(K2π²·det Σ/det C)·exp(−‖X‖²/(2σ²)+X*MΣM*X/(4σ⁴))  Equation 12
Probability Density of Measurements X in Dual-Source




∫domS S·p(X/S)·p(S)dS=(K2π²·det Σ/det C)·exp(−‖X‖²/(2σ²)+X*MΣM*X/(4σ⁴))·ΣM*X/(2σ²)  Equation 13
Conditional Probability Density of S Knowing X, in Dual-Source

The complete expressions of Equation 12 will be used to find the desired estimator for our problem.


In the case where S is a Gaussian sample, Equation 9, Equation 10, Equation 11, Equation 12 and Equation 13 yield:











Ŝ=ΣM*X/(2σ²)  Equation 14
Estimate of S in Dual-Source (Case Where S is Gaussian)

which can also be written:

Ŝ=(2σ²Σ⁻¹)⁻¹M*X=(M*M+2σ²C⁻¹)⁻¹M*X


One concludes from this that if 2σ²C⁻¹<<M*M, Ŝ is reduced to the maximum likelihood estimator of S using the model of Equation 5 with no a priori knowledge of S: ŜMV=(M*M)⁻¹M*X.


The condition 2σ²C⁻¹<<M*M (as matrices) is also expressed by C>>2σ²(M*M)⁻¹, which means that the a priori on S defined by C does not provide real information on S.
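This limit behavior can be checked numerically. The sketch below, with assumed orthonormal directional vectors and an arbitrarily large prior covariance C, verifies that the Gaussian-prior conditional expectation (M*M+2σ²C⁻¹)⁻¹M*X coincides with the maximum likelihood filter:

```python
import numpy as np

P, sigma = 4, 1.0
U = np.array([1, 1, 1, 1]) / 2.0       # assumed directional vectors, U*V = 0
V = np.array([1, -1, 1, -1]) / 2.0
M = np.column_stack([U, V])            # M = (U V), dual-source model X = M S + W

S_true = np.array([3 + 0j, 1 - 2j])
X = M @ S_true                         # noise-free measurement for the check

def cond_exp_gaussian(X, M, C, sigma):
    # S_hat = (M*M + 2 sigma^2 C^-1)^-1 M* X  (Gaussian prior with covariance C)
    A = M.conj().T @ M + 2 * sigma**2 * np.linalg.inv(C)
    return np.linalg.solve(A, M.conj().T @ X)

S_ml = np.linalg.solve(M.conj().T @ M, M.conj().T @ X)        # maximum likelihood
S_ce = cond_exp_gaussian(X, M, np.diag([1e8, 1e8]), sigma)    # C >> 2 sigma^2 (M*M)^-1
print(np.allclose(S_ce, S_ml))         # a huge C carries no information: CE -> ML
```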


The estimate of the signal in the mono-source case is as indicated below.


In the one-dimensional case for S=s (mono-source), M=U, and therefore M*M=1;


The matrix C is reduced to the constant c;









p(X/S)=1/(2πσ²)·exp(−‖X−Us‖²/(2σ²)); p(s)=1/(πc)·exp(−|s|²/c)

Σ⁻¹=1/(2σ²)+1/c, or Σ=2σ²c/(2σ²+c)

One deduces from this:

p(X/S)·p(S)=(K1/c)·exp(−((2σ²+c)/(2σ²c))·|s−(c/(2σ²+c))·U*X|²−‖X‖²/(2σ²)+(c/(2σ²(2σ²+c)))·|U*X|²)

with K1=1/(2πσ²)


This then yields:














∫doms s·p(X/S)·p(S)ds=(K1/c)·π·(2σ²c/(2σ²+c))·exp(−‖X‖²/(2σ²)+(c/(2σ²(2σ²+c)))·|U*X|²)·(c/(2σ²+c))·U*X  Equation 15
Conditional Probability Density of S Knowing X, in Mono-Source

∫doms p(X/S)·p(S)ds=(K1/c)·π·(2σ²c/(2σ²+c))·exp(−‖X‖²/(2σ²)+(c/(2σ²(2σ²+c)))·|U*X|²)  Equation 16
Probability Density of Measurements in Mono-Source



The conditional expectation is obtained, in the Gaussian case for s, by the quotient of Equation 15 by Equation 16:











ŝ=(c/(2σ²+c))·U*X  Equation 17
Estimate of s in Mono-Source (Case Where s is Gaussian)



If c>>2σ², which expresses that one has no a priori information on s, ŝ is reduced to the maximum likelihood estimator ŝMV=U*X.
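The mono-source estimator of Equation 17 is thus a simple shrinkage of the matched-filter output U*X, as the following sketch with assumed values illustrates:

```python
import numpy as np

def s_hat_mono(X, U, c, sigma):
    # Conditional expectation for a Gaussian prior of variance c (Equation 17):
    # shrinkage of the matched-filter output U*X toward 0
    return (c / (2 * sigma**2 + c)) * (U.conj() @ X)

U = np.array([1, 1j, -1, -1j]) / 2.0       # normalized directional vector (assumed)
X = 5.0 * U                                 # noise-free box containing s = 5

ml = U.conj() @ X                           # maximum likelihood: U*X = 5
weak_prior = s_hat_mono(X, U, c=1e9, sigma=1.0)    # c >> 2 sigma^2: tends to ML
strong_prior = s_hat_mono(X, U, c=2.0, sigma=1.0)  # c = 2 sigma^2: half of ML
print(ml, weak_prior, strong_prior)
```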


The estimate of the signal in the dual-source case is as indicated below.


The conditional expectation estimator is obtained using Equation 15 and Equation 16 for the mixture density of S given by Equation 7.


After simplifying by the common factor

K2π²·exp(−‖X‖²/(2σ²))

in all of the terms in the numerator and the denominator, one obtains:











Ŝ=[Σj qj·(det Σj/det Cj)·exp((1/(4σ⁴))·X*MΣjM*X)·(Σj/(2σ²))·M*X] / [Σj qj·(det Σj/det Cj)·exp((1/(4σ⁴))·X*MΣjM*X)]  Equation 18
Conditional Expectation in Dual-Source

With Σj⁻¹=M*M/(2σ²)+Cj⁻¹, or Σj=2σ²(M*M+2σ²Cj⁻¹)⁻¹


Let us take Γj=2σ²Cj⁻¹ (dimensionless),

Qj=(M*M+2σ²Cj⁻¹)⁻¹=(M*M+Γj)⁻¹=Σj/(2σ²)

We have:











Ŝ=[Σj=1 . . . 4 qj·det Qj·det Γj·exp(X*MQjM*X/(2σ²))·QjM*X] / [Σj=1 . . . 4 qj·det Qj·det Γj·exp(X*MQjM*X/(2σ²))]  Equation 19
Conditional Expectation in Dual-Source


According to Equation 8,








M*M=(1  U*V; V*U  1)

(matrices are written row by row, rows separated by “;”)

Γ1=diag(σ²/σ1², σ²/σ2²), Q1⁻¹=(1+σ²/σ1²  U*V; V*U  1+σ²/σ2²)

Γ2=diag(σ²/σ1², σ²/τ²), Q2⁻¹=(1+σ²/σ1²  U*V; V*U  1+σ²/τ²)

Γ3=diag(σ²/τ², σ²/σ2²), Q3⁻¹=(1+σ²/τ²  U*V; V*U  1+σ²/σ2²)

Γ4=diag(σ²/τ², σ²/τ²), Q4⁻¹=(1+σ²/τ²  U*V; V*U  1+σ²/τ²)

One deduces from this:












Q1=[(1+σ²/σ1²)(1+σ²/σ2²)−|U*V|²]⁻¹·(1+σ²/σ2²  −U*V; −V*U  1+σ²/σ1²)

Q2=[(1+σ²/σ1²)(1+σ²/τ²)−|U*V|²]⁻¹·(1+σ²/τ²  −U*V; −V*U  1+σ²/σ1²)

Q3=[(1+σ²/τ²)(1+σ²/σ2²)−|U*V|²]⁻¹·(1+σ²/σ2²  −U*V; −V*U  1+σ²/τ²)

Q4=[(1+σ²/τ²)²−|U*V|²]⁻¹·(1+σ²/τ²  −U*V; −V*U  1+σ²/τ²)  Equation 20
Qi Matrices


The expression of the estimate of the signal is subject to an approximation to allow it to be estimated as indicated below in the dual-source case.


The products of determinants in Equation 19 are respectively equal to the following expressions, an approximation of which is provided for a good signal-to-noise ratio (σ1²/σ²>>1 and σ2²/σ²>>1) and for τ→0.

det Q1·det Γ1=[(1+σ²/σ1²)(1+σ²/σ2²)−|U*V|²]⁻¹·σ⁴/(σ1²σ2²)≈(1−|U*V|²)⁻¹·σ⁴/(σ1²σ2²)
det Q2·det Γ2=[(1+σ²/σ1²)(1+σ²/τ²)−|U*V|²]⁻¹·σ⁴/(σ1²τ²)≈(τ²/σ²)(1+σ²/σ1²)⁻¹·(σ⁴/(σ1²τ²))≈σ²/σ1²
det Q3·det Γ3=[(1+σ²/τ²)(1+σ²/σ2²)−|U*V|²]⁻¹·σ⁴/(σ2²τ²)≈σ²/σ2²
det Q4·det Γ4=[(1+σ²/τ²)²−|U*V|²]⁻¹·σ⁴/τ⁴≈1  Equation 21
det Qi·det Γi

Likewise, one finds, for Qj when τ→0:


Q1 is unchanged.











Q1≈[(1+σ²/σ1²)(1+σ²/σ2²)−|U*V|²]⁻¹·(1  −U*V; −V*U  1)

Q2≈(1/(1+σ²/σ1²)  0; 0  0)

Q3≈(0  0; 0  1/(1+σ²/σ2²))

Q4≈(0  0; 0  0)  Equation 22
Simplified Form of the Qi Matrices


One can see that the products det Qi det Γi have a finite limit in each of the four situations, as do the matrices Qj, which is a satisfactory behavior.
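The closed form of Equation 20 can be verified against a direct numerical inversion of M*M+Γj. A sketch for Q1, with assumed values for σ, σ1², σ2² and for the scalar product U*V:

```python
import numpy as np

# Assumed values: sigma^2 = 1, sigma1^2 = sigma2^2 = 100, non-orthogonal U and V
sigma2, s1, s2 = 1.0, 100.0, 100.0
uv = 0.3 + 0.1j                       # U*V, scalar product of the directional vectors

MstarM = np.array([[1, uv], [np.conj(uv), 1]])
Gamma1 = np.diag([sigma2 / s1, sigma2 / s2])

Q1_num = np.linalg.inv(MstarM + Gamma1)        # Q1 = (M*M + Gamma1)^-1

# Closed form of Equation 20 for a 2x2 Hermitian matrix
d = (1 + sigma2 / s1) * (1 + sigma2 / s2) - abs(uv) ** 2
Q1_closed = np.array([[1 + sigma2 / s2, -uv],
                      [-np.conj(uv), 1 + sigma2 / s1]]) / d

print(np.allclose(Q1_num, Q1_closed))
```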


One has thus obtained a first expression of the estimator.


In reality, only one of the terms in Equation 19 is preponderant for each box, which leads to a first simplification. The new estimating processing of Ŝ is deduced from this: Ŝ=Qj0M*X, where







j0=Arg Maxj{qj·det Qj·det Γj·exp(X*MQjM*X/(2σ²))}

Which is simplified as:











Ŝ=Qj0M*X

where

j0=Arg Maxj Fj=Arg Maxj{ln(qj·det Qj·det Γj)+X*MQjM*X/(2σ²)}  Equation 23
Estimate of the Signal with Decision Function
M*X is given by:








M*X=(U*X  V*X)T

The det Qj·det Γj are given by Equation 21.


The Qj are given by Equation 22.


The qj are for example given by










qj = q², j=1; q(1−q), j=2, 3; (1−q)², j=4  Equation 24
Expression of the qi Parameters

if the sources are independent and there is equal probability for s≠0 and c≠0.


F(j) is given by:

F(j1)=ln(q1·det Q1·det Γ1)+X*MQ1M*X/(2σ²)=ln(q²(1−|U*V|²)⁻¹·σ⁴/(σ1²σ2²))+X*MQ1M*X/(2σ²)
F(j2)=ln(q2·det Q2·det Γ2)+X*MQ2M*X/(2σ²)=ln(q(1−q)·σ²/σ1²)+X*UU*X/(2σ²)
F(j3)=ln(q3·det Q3·det Γ3)+X*MQ3M*X/(2σ²)=ln(q(1−q)·σ²/σ2²)+X*VV*X/(2σ²)
F(j4)=ln(q4·det Q4·det Γ4)+X*MQ4M*X/(2σ²)=ln((1−q)²)

For j0=1, the maximum likelihood estimator is found. Indeed, in this case,

Γ1=(σ²/σ1²  0; 0  σ²/σ2²)→0

and Q1=(M*M)⁻¹ such that Ŝ=(M*M)⁻¹M*X. One can see that if U*V=0, then the estimates of s and c are completely separate, since then the relationship Ŝ=(M*M)⁻¹M*X is simplified and uncoupled into ŝ=U*X, ĉ=V*X.

For j0=2, ŜT=(U*X, 0) (filtering by the directional vector of source 1).
For j0=3, ŜT=(0, V*X) (filtering by the directional vector of source 2).
For j0=4, ŜT=(0, 0).

Where the symbol T designates the transpose.


One has thus “linearized” the optimal processing, since one has obtained four linear filters, controlled by the decision on the type of situation for each box: both sources are present/source 1 is present/source 2 is present/neither of the sources is present. The obtained estimator is called “Conditional Expectation with 4 Linear Filters”.
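A minimal sketch of this decision-plus-filtering processing for one box, assuming orthogonal directional vectors (U*V=0), the Qj of Equation 20 with τ→0 and the determinant products of Equation 21 (all parameter values are assumed):

```python
import numpy as np

sigma, s1, s2, q = 1.0, 100.0, 100.0, 0.5     # assumed sigma^2, sigma1^2, sigma2^2, q
U = np.array([1, 0, 0, 0], dtype=complex)     # orthogonal directional vectors
V = np.array([0, 1, 0, 0], dtype=complex)
M = np.column_stack([U, V])

# Qj for U*V = 0 (Equation 20, tau -> 0) and det Qj det Gamma_j (Equation 21)
Q = [np.diag([1/(1 + sigma**2/s1), 1/(1 + sigma**2/s2)]),
     np.diag([1/(1 + sigma**2/s1), 0.0]),
     np.diag([0.0, 1/(1 + sigma**2/s2)]),
     np.diag([0.0, 0.0])]
detQdetG = [sigma**4/(s1*s2), sigma**2/s1, sigma**2/s2, 1.0]
qj = [q**2, q*(1-q), q*(1-q), (1-q)**2]

def ce4lf(X):
    MX = M.conj().T @ X                       # (U*X, V*X)
    F = [np.log(qj[j]*detQdetG[j]) + np.real(MX.conj() @ Q[j] @ MX)/(2*sigma**2)
         for j in range(4)]
    j0 = int(np.argmax(F)) + 1                # decision, patent numbering j0 in {1..4}
    return j0, Q[j0-1] @ MX                   # linear filter commanded by the decision

j0, S_hat = ce4lf(10.0 * U)                   # strong source 1 only, no noise
print(j0)                                     # decision "source 1 present, source 2 absent"
```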


In this way, we have simplified the optimal estimator, by breaking it down into two steps:

    • Detection of the situation
    • Application of filtering appropriate to the situation


It is satisfactory to see that the estimator is independent of τ, which is the expected behavior, since τ is not a physical parameter, but an artifice making it possible to model the “absence of signal” situation by a very narrow Gaussian.


This estimator requires the calculation of three quadratic forms and a test. The difficulty remains calculating the unknown parameters qj,j=1 . . . 4, σ12, σ22 (the power of the noise 2σ2 is presumed to be known).


In the specific case where the 2 sources are independent, and have the same power and the same presence rate, the parameters can be estimated by calculating the empirical moments of order 2 and 4.


If we call 2σ′2 the shared value of the variance of the Gaussian representing each source, and q the probability shared by the 2 sources, everything happens as if we were in a mono-source situation, with a single source of variance 2σM′2=2σ′2 and presence probability qM=2q.


σ′M2 and qM are then given by the following equations:






{ (1/N) Σn |Xn|² = 2 qM (σ′M² P + σ²) + 2 (1 − qM) σ²
{ (1/N) Σn |Xn|⁴ = 8 qM (σ′M² P + σ²)² + 8 (1 − qM) σ⁴


In the general case (independent sources), there are 4 parameters of the model: q1, q2, σ12, σ22. One skilled in the art knows how to generalize the method of moments above to higher orders to obtain the estimates of these parameters.
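In the equal-power, equal-rate case above, the two moment equations can be inverted in closed form for (qM, σ′M²). A numpy sketch under these assumptions (the synthetic data model used for the check is illustrative):

```python
import numpy as np

def method_of_moments(x, sigma2, P):
    """Recover (qM, sigmaM2) from the order-2 and order-4 empirical moments
    of the scalar measurements x, with the noise power 2*sigma2 known.
    Inverts:  m2 = 2 qM a + 2 (1-qM) sigma2
              m4 = 8 qM a^2 + 8 (1-qM) sigma2^2,   a = sigmaM2*P + sigma2."""
    m2 = np.mean(np.abs(x) ** 2)
    m4 = np.mean(np.abs(x) ** 4)
    a = (m4 / 8 - sigma2 ** 2) / (m2 / 2 - sigma2) - sigma2  # ratio = a + sigma2
    qM = (m2 / 2 - sigma2) / (a - sigma2)
    sigmaM2 = (a - sigma2) / P
    return qM, sigmaM2

# Synthetic check: qM = 0.4, sigmaM2 = 2.0, sigma2 = 0.5, P = 4
rng = np.random.default_rng(1)
N, P, sigma2, qM, sigmaM2 = 200_000, 4, 0.5, 0.4, 2.0
present = rng.random(N) < qM
var = np.where(present, sigmaM2 * P + sigma2, sigma2)   # half of E|x|^2 per sample
x = np.sqrt(var) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
q_hat, sigmaM2_hat = method_of_moments(x, sigma2, P)
```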


An alternative estimation processing consists of simplifying the previously described decision step, as follows:


The conditional expectation considers all four possible situations:

s≠0,c≠0;s≠0,c=0;s=0,c≠0;s=0,c=0


It is normally necessary to address a decision problem with four hypotheses.


To simplify, we propose to test s≠0 against s=0 independently of c on the one hand, and to test c≠0 against c=0 independently of s on the other hand. We therefore perform two tests with two hypotheses instead of one test with four hypotheses.


These tests will be done from preprocessed measurements U*X, V*X.


Test of s≠0 Against s=0










H1: U*X = s + (U*V)·c + u,  V*X = (V*U)·s + c + v,  s ≠ 0
H0: U*X = (U*V)·c + u,  V*X = c + v,  s = 0  Equation 25
Simplified hypothesis test: the 2 hypotheses for s


where (·) indicates a scalar-by-scalar product


and where u=U*W, v=V*W; (u,v) is therefore Gaussian, centered, with covariance:






E{ (U*W, V*W)T (W*U, W*V) } = 2σ² ( 1, U*V ; V*U, 1 ),

and where c is an unknown parameter.


It is a problem invariant under the group of translations along the vector (U*V 1)T, and a linear hypothesis problem (see “Testing Statistical Hypotheses”, 3rd edition, E. L. Lehmann, J. P. Romano, Springer, 2005). It may be processed by first performing a projection on the orthogonal of (U*V 1)T to eliminate c, then testing the presence of s using a chi2 test. The projection is written:









U*X − (U*V)V*X = { (1 − |U*V|²)s + u − (U*V)v   (H1)
                 { u − (U*V)v                   (H0)

The test to be performed therefore pertains to the measurement |U*X−(U*V)V*X|²:

|U*X−(U*V)V*X|² > or < λ  Equation 26
Simplified hypothesis test on s

Which amounts to the same thing as performing the test:












|ŝMV|² = |U*X − (U*V)V*X|² / (1 − |U*V|²)² > or < λ′,

where ŝMV is the maximum likelihood estimate of s.


Test of c≠0 Against c=0


In the same way for c, M*X is projected on the orthogonal of (1 V*U)T in order to eliminate the terms in s, and the new measurement to be considered is obtained:









−(V*U)U*X + V*X = { (1 − |U*V|²)c + v − (V*U)u   (H1)
                  { v − (V*U)u                   (H0)

One obtains the following test:

|(V*U)U*X−V*X|² > or < μ  Equation 27
Simplified hypothesis test on c

which can be written











|ĉMV|² = |(V*U)U*X − V*X|² / (1 − |U*V|²)² > or < μ′,

where ĉMV is the maximum likelihood estimate of c.


In the dual-source case, the proposed estimator consists of performing the following operations:


As illustrated in FIG. 3, from the calculation of the dual-source maximum likelihood estimator ŜMV=(ŝ,ĉ)=(M*M)−1M*X done in step 220, step 320 thresholds the modulus of each of the components |ŝMV| and |ĉMV|.


Then, depending on the situation, spatial filtering is applied in steps 331 to 334 under the following conditions, which makes it possible to obtain the so-called Conditional Expectation with Independent Decisions (CEID) estimator:

If |ŝMV|>threshold and |ĉMV|>threshold: ŜECDI=(M*M)−1M*X  (step 331)
If |ŝMV|>threshold and |ĉMV|<threshold: ŝECDI=U*X, ĉECDI=0  (step 332)
If |ŝMV|<threshold and |ĉMV|>threshold: ŝECDI=0, ĉECDI=V*X  (step 333)
If |ŝMV|<threshold and |ĉMV|<threshold: ŜECDI=0  (step 334)
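The CEID operations can be sketched as follows (illustrative numpy code; a single common threshold is assumed for both components):

```python
import numpy as np

def ceid(X, U, V, thr):
    """Conditional Expectation with Independent Decisions: threshold the
    moduli of the dual-source ML components, then apply the linear filter
    matching the decided situation (steps 331 to 334)."""
    M = np.column_stack((U, V))
    s_mv, c_mv = np.linalg.solve(M.conj().T @ M, M.conj().T @ X)
    s_on, c_on = abs(s_mv) > thr, abs(c_mv) > thr
    if s_on and c_on:
        return np.array([s_mv, c_mv])              # step 331: dual-source ML
    if s_on:
        return np.array([U.conj() @ X, 0j])        # step 332
    if c_on:
        return np.array([0j, V.conj() @ X])        # step 333
    return np.zeros(2, dtype=complex)              # step 334

# Example: only source 1 effectively present
U = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
S_hat = ceid(np.array([2.0 + 0j, 0.1 + 0j]), U, V, thr=0.5)   # -> (2, 0)
```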


In the mono-source case, the proposed estimator consists of performing the following operations, as illustrated in FIG. 2:


From the calculation of the mono-source maximum likelihood estimator ŝMV=U*X done in step 210, thresholding of the modulus of ŝMV is done in step 350; then, depending on the situation, spatial filtering is applied in step 361 or 362 under the following conditions:

If |ŝMV|>threshold: ŝECDI=U*X  (step 361)
If |ŝMV|<threshold: ŝECDI=0  (step 362)


Advantageously, the threshold for steps 320, 350 is determined as follows:


Pfa refers to the probability of deciding s≠0 when in fact s=0, and Pd is the probability of deciding s≠0 when in fact s≠0.


For example and non-limitingly, it is proposed to use the Neyman-Pearson criterion, which consists of setting the Pfa (for example, a few percent), and in return maximizing Pd, which makes it possible to obtain a threshold on λ′ and μ′. For example and non-limitingly, it is also possible to adjust the value of λ′ (resp. μ′) such that 1−Pd=Pfa around a given SNR.
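As a non-limiting sketch of the Neyman-Pearson setting of λ′: under H0, ŝMV is a centered circular complex Gaussian whose variance, 2σ²/(1−|U*V|²), follows from the covariance of (u,v) given above; the normalized statistic |ŝMV|²/(σ²/(1−|U*V|²)) is then chi-square with 2 degrees of freedom, whose (1−Pfa)-quantile has the closed form −2 ln(Pfa):

```python
import math
import numpy as np

def threshold_for_pfa(pfa, sigma2, uv):
    """Threshold lambda' on |s_hat_MV|^2 giving a false-alarm rate Pfa.
    Under H0, |s_hat_MV|^2 / (sigma2 / (1 - |U*V|^2)) is chi-square with
    2 dof; its (1 - Pfa)-quantile is -2 * ln(Pfa)."""
    scale = sigma2 / (1.0 - abs(uv) ** 2)
    return scale * (-2.0 * math.log(pfa))

# Monte Carlo check of the false-alarm rate under H0 (uv = 0.5, sigma2 = 1)
rng = np.random.default_rng(2)
sigma2, uv, pfa = 1.0, 0.5, 0.05
std = math.sqrt(sigma2 / (1.0 - abs(uv) ** 2))   # per-component std of s_hat_MV
z = std * (rng.standard_normal(200_000) + 1j * rng.standard_normal(200_000))
lam = threshold_for_pfa(pfa, sigma2, uv)
pfa_emp = np.mean(np.abs(z) ** 2 > lam)          # empirical Pfa, close to 0.05
```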


A close alternative to the Maximum Likelihood, called Thresholded Maximum Likelihood (TML), consists of performing the following operations:

    • Calculating the maximum likelihood estimator ŜMV=(M*M)−1M*X
    • Thresholding the modulus of the components of ŜMV
    • Depending on the situation, applying spatial filtering

      If |ŝMV|<threshold: ŝMVS=0, otherwise ŝMVS=ŝMV
      If |ĉMV|<threshold: ĉMVS=0, otherwise ĉMVS=ĉMV
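A minimal numpy sketch of the TML alternative (illustrative only):

```python
import numpy as np

def tml(X, U, V, thr):
    """Thresholded Maximum Likelihood: dual-source ML estimate, then zero
    each component whose modulus falls below the threshold."""
    M = np.column_stack((U, V))
    S = np.linalg.solve(M.conj().T @ M, M.conj().T @ X)
    S[np.abs(S) < thr] = 0                         # independent thresholding
    return S

# With orthogonal steering vectors, (M*M)^-1 M* X reduces to (U*X, V*X)
U = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
S_hat = tml(np.array([3.0 + 0j, 0.1 + 0j]), U, V, thr=0.5)   # -> (3, 0)
```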


Another alternative consists of using one of the two previous estimators to obtain an initialization of the unknown parameters qj,j=1 . . . 4, σ12, σ22, then applying the Conditional Expectation estimator or the Conditional Expectation with 4 Linear Filters estimator.

Claims
  • 1. A method for the non-linear estimation of wireless signals from several sources, the time/frequency representation of which shows an unknown non-zero proportion of zero components, comprising: using a listening system comprising an antenna array having P>2 antennas to receive a signal at each of the antennas that results from no more than two mixed signals from separate sources for which the directional vectors U and V of the sources emitting the no more than two mixed signals are known or estimated;transmitting the signal received at each of the antennas to a computing unit that is communicatively coupled to the antennas;calculating, by the computing unit, successive discrete Fourier transforms of the signals received by the antennas and sampled to obtain a time-frequency P-vector grid of the signals received by the antennas, each element of the time-frequency P-vector grid providing a time-frequency representation of the signals received by the antennas for a respective time interval and a respective frequency interval and being referred to as a box and containing a complex vector X forming a measurement; andfor each box, calculating, from said box, by the computing unit, an estimation of a dual-source signal S=(s, c)T, s being a time-frequency representation of the signals received by the antennas and associated with a first source and c being a time-frequency representation of the signals received by the antennas and associated with a second source and (.)T being the transpose of (.) 
, Ŝ=(ŝ,ĉ)T, based upon an approximation of a conditional expectation estimator of the signals s and c such that the estimation utilizes, as a priori information, the knowledge that the signals s and c have a non-zero proportion of components equal to zero to estimate values of non-zero components of s and c, a probability density of S being modeled as a mixture of centered Gaussians weighted by coefficients q1, q2, q3, q4 representing probabilities of four respective situations in each box, which are: (1) presence of the first and second sources, (2) presence of the first source and absence of the second source, (3) absence of the first source and presence of the second source, and (4) absence of the first and second sources, this approximation being:
  • 2. The method according to claim 1, further comprising estimating parameters (qj,σ12,σ22) necessary to establish the conditional expectation using a method of moments operating on the boxes of a divided window in the time-frequency P-vector grid.
  • 3. The method according to claim 1, wherein the computing unit is included in a radar receiver system.
  • 4. The method according to claim 1, further comprising performing, by the computing unit, identification of the sources as a function of the estimated signals s and c.
  • 5. The method according to claim 1, wherein the conditional expectation estimator is approximated by a processing called “conditional expectation with four linear filters” (CE4LF) including a four-hypothesis decision processing pertaining to four Hermitian forms of the measurement X, followed by linear filtering commanded by a result of the four-hypothesis decision according to equations:
  • 6. The method according to claim 5, wherein the CE4LF is approximated by a processing called “conditional expectation with independent decisions” (CEID) obtained by two statistical tests with two hypotheses followed by linear filtering commanded by a result of the two statistical tests.
  • 7. The method according to claim 6, wherein a decision threshold used in the CEID is chosen with respect to a probability of a false alarm that corresponds to a probability of declaring the signals s and c to be non-zero when it is zero.
  • 8. The method according to claim 6, wherein said method includes: a first estimate of the signals s and c that is obtained using the CEID or a thresholded maximum likelihood method,an estimate of parameters (qj,σ12,σ22) that is obtained from the said first estimate of the signals s and c, anda second estimate of the signals s and c that is obtained using the CEID method or the CE4LF method, informed by the values of the said estimate of parameters (qj,σ12,σ22).
  • 9. The method according to claim 6, wherein the four-hypothesis decision is given by: if |ŝMV|>threshold and |ĉMV|>threshold: the first and second sources are present;if |ŝMV|>threshold and |ĉMV|<threshold: the first source is present and the second source is absent;if |ŝMV|<threshold and |ĉMV|>threshold: the first source is absent and the second source is present; andif |ŝMV|<threshold and |ĉMV|<threshold: the first and second sources are absent; wherein ŜMV=(ŝMV ĉMV)T is a dual-source maximum likelihood estimator ŜMV=(M*M)−1M*X.
  • 10. The method according to claim 9, wherein said method includes: a first estimate of the signals s and c that is obtained using the CEID or a thresholded maximum likelihood method,an estimate of parameters (qj,σ12,σ22) done from the said first estimate of the signals s and c, anda second estimate of the signals s and c that is obtained using the CEID method or the CE4LF method, informed by the values of the said estimate of parameters (qj,σ12,σ22).
  • 11. The method according to claim 10 wherein, as a function of the result of the four-hypothesis decision, linear filtering provides for an “independent decision conditional expectation” estimator, which is ŜECDI=(ŝECDI ĉECDI)T: if the first and second sources are present: the dual-source maximum likelihood estimator for the signals s and c: ŜECDI=(M*M)−1M*X;if the first source is present and the second source is absent: a mono-source maximum likelihood estimator for s, and 0 for c: ŝECDI=U*X, ĉECDI=0;if the first source is absent and the second source is present: 0 for s, and a mono-source maximum likelihood estimator for c: ŝECDI=0, ĉECDI=V*X; andif the first and second sources are absent: 0 for the signal of the first and second sources: ŜECDI=0.
  • 12. The method according to claim 11, wherein said method includes: a first estimate of the signals s and c that is obtained using the CEID or a thresholded maximum likelihood method,an estimate of parameters (qj,σ12,σ22) done from the said first estimate of the signals s, c, anda second estimate of the signals s and c that is obtained using the CEID method or the CE4LF method, informed by the values of the said estimate of parameters (qj,σ12,σ22).
  • 13. The method according to claim 11, wherein, an approximation of the CEID is calculated by a thresholded maximum likelihood ŜMVS=(ŝMVS ĉMVS)T obtained by estimating the signal(s) using the dual-source maximum likelihood estimator followed by a comparison of each estimate to a threshold, according to the equations: if |ŝMV|<threshold: ŝMVS=0, else ŝMVS=ŝMV if |ĉMV|<threshold: ĉMVS=0, else ĉMVS=ĉMV.
Priority Claims (1)
Number Date Country Kind
14 02981 Dec 2014 FR national
PCT Information
Filing Document Filing Date Country Kind
PCT/EP2015/081213 12/23/2015 WO 00
Publishing Document Publishing Date Country Kind
WO2016/102697 6/30/2016 WO A
Non-Patent Literature Citations (11)
Entry
D. Kolossa, et al., Research Article, Independent Component Analysis and Time-Frequency Masking for Speech Recognition in Multitalker Conditions, EURASIP Journal on Audio, Speech, and Music Processing, vol. 2010, Article ID 651420, p. 1-13, 2010 (Year: 2010).
T. Porges, et al., Probability Distribution Mixture Model for Detection of Targets in High-Resolution SAR Images, 2009 International Radar Conference—Surveillance for a Safer World, 2009 (Year: 2009).
O. Yilmaz, Blind Separation of Speech Mixtures via Time-Frequency Masking, IEEE Transactions on Signal Processing, p. 1-15 2002 (Year: 2002).
H. Zayyani, et al., Estimating the Mixing Matrix in Sparse Component Analysis (SCA) using EM Algorithm and Iterative Bayesian Clustering, 16th European Signal Processing Conference 2008 (Year: 2008).
J. Hennessy et al, Computer Architecture, 5th edition, 2012, section 1.3 (Year: 2012).
Master A S: “Bayesian two source modeling for separation of n sources from stereo signals”, Acoustics, Speech, and Signal Processing, 2004. Proceedings.(ICASSP ' 04). IEEE International Conference on Montreal, Quebec, Canada May 17-21, 2004, Piscataway, NJ, USA,IEEE, Piscataway, NJ, USA, vol. 4, May 17, 2004(May 17, 2004), pp. 281-284, XP010718460, DOI: 10.1109/ICASSP.2004.1326818 ISBN: 978-0-7803-8484-2 p. 281-p. 283.
Carlos E et al: “ICA Based Blind Source Separation Applied to Radio Surveillance”, IEICE Transactions on Communications, Communications Society, Tokyo, JP, vol. E86-B, No. 12, Dec. 2003(Dec. 2003), pp. 3491-3497, XP001191555, ISSN: 0916-8516 the whole document.
C. Fevotte et al: “A Bayesian Approach for Blind Separation of Sparse Sources”, IEEE Transactions on Audio, Speech and Language Processing, vol. 14, No. 6, Nov. 2006(Nov. 2006), pp. 2174-2188, XP055224805, New York, NY, USA. ISSN: 1558-7916, DOI: 10.1109/TSA.2005.858523 the whole document.
Zayyani H et al: “Estimating the mixing matrix in Sparse Component Analysis (SCA) using EM algorithm and iterative Bayesian clustering”, 2006 14th European Signal Processing Conference, IEEE, Aug. 25, 2008 (Aug. 25, 2008), pp. 1-5, XP032761251, ISSN: 2219-5491 [retrieved on Apr. 3, 2015] the whole document.
International Search Report dated Feb. 29, 2016 in corresponding International Application No. PCT/EP2015/081213.
Written Opinion dated Feb. 29, 2016 in corresponding International Application No. PCT/EP2015/081213.
Related Publications (1)
Number Date Country
20180026607 A1 Jan 2018 US