The invention relates to an audio processing arrangement comprising a plurality of audio sources for generating input audio signals, a processing circuit for deriving processed audio signals from the input audio signals, a combining circuit for deriving a combined audio signal from the processed audio signals, and a control circuit for controlling the processing circuit in order to maximize a power measure of the combined audio signal, and for limiting a function of gains of the processed audio signals to a predetermined value. The invention also relates to an audio processing method.
Advanced processing of audio signals has become increasingly important in many areas, including telecommunication and content distribution. For example, in some applications, such as teleconferencing, complex processing of inputs from a plurality of microphones has been used to provide a configurable directional sensitivity for the microphone array comprising the microphones. Specifically, the processing of signals from a microphone array can generate an audio beam whose direction can be changed simply by changing the characteristics of the combination of the individual microphone signals.
Typically, beam forming systems are controlled such that the attenuation of interferers is maximized. For example, a beam forming system can be controlled to provide a maximum attenuation (preferably a null) in the direction of a signal received from a main interferer.
A beam forming system which provides particularly advantageous performance in many embodiments is the Filtered-Sum Beamformer (FSB) disclosed in WO 99/27522.
In contrast to many other beam forming systems, the FSB system seeks to maximize the sensitivity of the microphone array towards a desired signal rather than to maximize attenuation towards an interferer. An example of the FSB system is illustrated in
The FSB system seeks to identify characteristics of the acoustic impulse responses from a desired source to an array of microphones, including the direct field and the first reflections. The FSB creates an enhanced output signal, z, by adding the desired parts of the microphone signals coherently: the received signals are filtered in forward matching filters and the filtered outputs are added. Also, the output signal is filtered in backward adaptive filters having filter responses that are the conjugates of the forward filter responses (in the frequency domain, corresponding to time-reversed impulse responses in the time domain). Error signals are generated as the difference between the input signals and the outputs of the backward adaptive filters, and the coefficients of the filters are adapted to minimize the error signals, thereby steering the audio beam towards the dominant signal. The generated error signals can be considered as noise reference signals which are particularly suitable for performing additional noise reduction on the enhanced output signal z.
A particularly important area for audio signal processing is the field of hearing aids. In recent years, hearing aids have increasingly applied complex audio processing algorithms to provide an improved user experience and assistance to the user. For example, audio processing algorithms have been used to provide an improved signal-to-noise ratio between a desired sound source and an interfering sound source, resulting in a clearer and more perceptible signal being provided to the user. In particular, hearing aids have been developed which include more than one microphone, with the audio signals of the microphones being dynamically combined to provide directivity for the microphone arrangement. As another example, noise cancelling systems may be applied to reduce the interference caused by undesired sound sources and background noise.
The FSB system promises to be advantageous for applications such as hearing aids, as it provides efficient beam forming towards a desired signal (rather than being directed at attenuation of interfering signals). This is of particular advantage in hearing aid applications, where it has been found to provide a signal to the user which facilitates and aids the perception of the desired signal. In addition, the FSB system provides a noise reference signal which is particularly suitable for noise reduction/compensation for the generated signal.
However, it has been found that the FSB system has some associated disadvantages when used in applications such as a hearing aid. In particular, it has been found that for small distances between the microphones of the microphone array, the performance of the FSB system degrades. For example, for a typical hearing aid configuration of an end-fire array with two omni-directional microphones with a spacing of 15 mm, the FSB has been found to have suboptimal performance. Indeed, it has been found that in many scenarios the FSB system has not been able to converge towards the desired signal.
Hence, an improved audio beam forming approach would be advantageous, and in particular a beam forming approach with improved suitability for hearing aids, for which the distance between the microphones is rather small.
It is an object of the present invention to provide an enhanced audio processing arrangement which is suitable for low distances between the microphones of the microphone array. The invention is defined by the independent claims. The dependent claims define advantageous embodiments.
This object is achieved according to the present invention in an audio processing arrangement as stated above and characterized in that the audio processing arrangement comprises a pre-processing circuit for deriving pre-processed audio signals from the input audio signals. The pre-processed signals are provided to the processing circuit instead of the input audio signals. The pre-processing circuit is arranged for minimizing a cross-correlation of interferences comprised in the input audio signals.
In an embodiment, the pre-processing circuit guarantees that it is the power of the desired signal in the output signal that is maximized, even in the case where the interference comprised in one input audio signal is correlated with the interference comprised in the other input audio signals. Without the pre-processing circuit, and with the processing circuit and the control circuit using e.g. adaptive filter coefficients that are configured to maximize the desired output power in the combined audio signal, the error signals of the adaptive filters comprised in the processing circuit and the control circuit contain interferences that are correlated with the input of the adaptive filters, in case the interferences in the audio signals are correlated. This results in divergence of the adaptive filter coefficients from the optimal solution. Here, divergence means that maximizing the output power of the combined signal does not result in maximizing the output power of the desired signal.
In an embodiment, the pre-processing performed in the pre-processing circuit ensures that, with e.g. adaptive filter coefficients (as used by the processing circuit and the control circuit) configured to maximize the desired output power in the combined audio signal, the correlation between the interference component in the error signal and the input of the adaptive filter is minimized.
In this way the audio processing arrangement provides a robust performance when applied to microphone arrays with correlated interferences. One example of such a situation is a small microphone array in end-fire configuration in reverberant conditions.
In an embodiment, the pre-processing circuit minimizes a cross-correlation of the interferences by means of multiplication of the input audio signals by an inverse of a regulation matrix. The regulation matrix is a function of a correlation matrix, wherein entries of the correlation matrix are correlation measures between respective pairs of the plurality of interferences contained in the input audio signals.
The divergence of e.g. the adaptive filters comprised in the processing circuit and the control circuit, respectively, from the situation where the adaptive filters are converged to the desired speech signal is caused by correlation of the interferences in the audio signals, in particular by the correlation between the interferences in the error signal of the adaptive filters and the input of the adaptive filters. Here, convergence to the desired signal means that the adaptive filter coefficients are configured to maximize the desired output power in the combined audio signal. Multiplication of the input audio signals by an inverse of the regulation matrix ensures that the correlation between the interferences in the error signal and the input of the adaptive filter is minimized.
In a further embodiment, the regulation matrix is the correlation matrix. Entries of the correlation matrix can be scalars or filters. When the entries are scalars, it is advantageous to treat the problem in the time domain. If the entries are filters, it is advantageous to treat the problem in the frequency domain. In the frequency domain, for each frequency component ω, the correlation matrix Γ(ω) has scalar entries, and thus the scalar case can be applied to each individual frequency component.
In a further embodiment, the regulation matrix is given by:
Γreg(ω)=ηΓ(ω)+(1−η)I
wherein Γreg(ω) is the regulation matrix, Γ(ω) is the correlation matrix, η is a predetermined parameter, I is an identity matrix, and ω is a radial frequency.
The advantage of the above choice of the regulation matrix is that the operation of the audio processing arrangement is made less sensitive to uncorrelated noise such as microphone self-noise.
In a further embodiment, the parameter η is given by:
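For example, a value consistent with the interpretation given in the next paragraph is η=σν²/(σν²+σn²), i.e. the fraction of the total interference power that is correlated.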
wherein σν² is the variance of the correlated interference in the input audio signals (acoustic noise and/or reverberation of the desired speech signal), and σn² is the variance of the uncorrelated electronic noise (white noise, e.g. microphone self-noise) contained in the audio signals.
Γreg(ω) is then equivalent to the data correlation matrix of the combined interference signal, including correlated interferences and non-correlated electronic interferences. With such a definition of the parameter η, the entries of the regulation matrix more precisely reflect the actual correlation between the interferences.
In a further embodiment, the parameter η takes on a predetermined fixed value. With a predetermined fixed value of η it is not necessary to measure the values of σν² and σn²; instead an average value for η can be taken, which still leads to a reduction of the correlation. The advantage of this embodiment is that determining the entries of the regulation matrix is very simple. The parameter η is treated as a design parameter that controls the trade-off between robustness to diffuse noise and amplification of microphone self-noise. A typical value of the parameter η is 0.99.
In a further embodiment, the (p,q) entry of the regulation matrix is given by:
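For example, the entry may be taken as the normalized cross-spectrum (coherence) of the interferences, Γpq(ω)=E{Vp(ω)Vq*(ω)}/√(E{|Vp(ω)|²}E{|Vq(ω)|²}); this particular form is an assumption consistent with the coherence interpretation used further below.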
wherein Vp(ω) is the interference in the input audio signal p, Vq(ω) the interference in the input audio signal q, ω a radial frequency, and E is the expectation operator. The advantage of the above embodiment is that the entries of the regulation matrix are quite accurate.
In a further embodiment, the (p,q) entry of the correlation matrix is given by:
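For example, the well-known coherence model for a diffuse sound field gives Γpq(ω)=sin(ωdpq/c)/(ωdpq/c).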
wherein dpq is a distance between microphones p and q, c is a speed of sound in air, and ω is a radial frequency. The Γ matrix is the data correlation matrix that belongs to a (perfect) diffuse sound field. The diffuse sound field can be either a diffuse noise field, or the field due to reverberation of the desired speech. Especially for the latter it is difficult to measure the data correlation matrix, since the reverberation is connected to the desired (direct) speech, i.e. it is not available during non-speech activity. The above formula provides a good estimate of the coherence function in diffuse noise fields.
In a further embodiment, the processing circuit comprises a plurality of adjustable filters for deriving the processed audio signals from the pre-processed audio signals, and the control circuit comprises a plurality of further adjustable filters having a transfer function being a conjugate of a transfer function of the adjustable filters. The further adjustable filters derive filtered combined audio signals from the combined audio signals. The control circuit limits a function of gains of the processed audio signals to the predetermined value by controlling the transfer functions of the adjustable filters and the further adjustable filters in order to minimize a difference measure between the input audio signals and the filtered combined audio signal corresponding to the input audio signals.
By using adjustable filters as the processing circuit, the quality of the speech signal can be further enhanced. By minimizing a difference measure between each input audio signal and the corresponding filtered combined audio signal, a power measure of the combined audio signal is maximized under the constraint that, per frequency component, a function of the gains of the adjustable filters equals a predetermined constant. In other words, the control circuit implicitly limits a function of the gains such that the power of the interference in the output remains constant. Maximizing the power of the output then results in maximizing the power of the desired signal in the output signal, thus enhancing the Signal-to-Noise ratio in the output signal.
Due to the use of adjustable filters, no adjustable delay elements, such as those used in a delay-and-sum beam former, are required.
In a further embodiment, the audio processing arrangement comprises fixed delay elements to compensate for a delay difference of a common audio signal present in the input audio signals. The audio signal from a sound source might arrive at the audio sources at different times, thereby causing a delay between the input audio signals generated by these audio sources. These differences are compensated by the delay elements.
According to another aspect of the invention there is provided an audio processing method. It should be appreciated that the features, advantages, comments etc described above are equally applicable to this aspect of the invention.
The invention further provides an audio signal processing arrangement, and a hearing aid comprising the audio signal processing arrangement according to the invention.
These and other aspects, features and advantages of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Throughout the figures, same reference numerals indicate similar or corresponding features. Some of the features indicated in the drawings are typically implemented in software, and as such represent software entities, such as software modules or objects.
The following description focuses on embodiments of the invention applicable to a hearing aid and in particular to a hearing aid comprising two audio sources. The audio sources may be microphones. The microphones are preferably omni-directional. However, it will be appreciated that the invention is not limited to this application but may be applied to many other audio applications. In particular, it will be appreciated that the described principles may readily be extended to embodiments based on more than two audio sources.
An output of the first audio source 101, being here a microphone 101, is connected to a first input of the audio processing arrangement 100, and an output of the second audio source, being here a microphone 102, is connected to a second input of the audio processing arrangement 100.
A first input audio signal x1 and a second input audio signal x2:
x1=as+n1,
x2=s+n2,
generated by the audio sources 101 and 102, respectively, are processed by the audio processing arrangement to generate an audio beam form 103. Here, s is a desired sound source (e.g. speech), a, to which we refer as the transfer factor, is a constant, and n1 and n2 are uncorrelated noise interferences. Furthermore it is assumed that:
E{n1²}=E{n2²}=1, and
E{n1n2}=E{n1s}=E{n2s}=0.
This means that n1 and n2 are uncorrelated with each other, have unit variance, and are uncorrelated with the desired sound source s.
The processing circuit 110 comprises a first scaling circuit 111 and a second scaling circuit 112, each scaling circuit scaling its input audio signal with a predetermined scaling factor. The first scaling circuit is using scaling factor f1. The second scaling circuit is using scaling factor f2. The first scaling circuit generates a first processed audio signal. The second scaling circuit generates a second processed audio signal.
The first and second processed signals are then summed in a combining circuit 120 to generate a combined (directional) audio signal 103:
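y=f1x1+f2x2, i.e. the sum of the two processed audio signals.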
Specifically, by modifying the scaling factors of the first and second scaling circuits 111 and 112, the audio beam can be steered in a desired direction.
The scaling factors are updated such that a power estimate for the entire combined audio signal is maximized. The adaptation of the scaling factors is furthermore made under the constraint that the summed energy of the scaling circuits 111 and 112 is kept constant.
The result of the above is that the scaling factors are updated such that a power measure for a desired source component of the combined audio signal is maximized, even though the combined signal contains uncorrelated noise.
In the specific example, the scaling factors of circuits 111 and 112 are not updated directly. Instead, the audio processing arrangement 100 comprises a control circuit 130 which determines the values of the scaling factors to be used by the processing circuit 110. The control circuit comprises further scaling circuits 131 and 132 for scaling the combined audio signal to generate a third processed audio signal and a fourth processed audio signal, respectively.
The third processed audio signal is fed to a first subtraction circuit 133 which generates a first residual signal as the difference between the first input audio signal x1 and the third processed audio signal. The fourth processed audio signal is fed to a second subtraction circuit 134 which generates a second residual signal as the difference between the second input audio signal x2 and the fourth processed audio signal.
In the arrangement, the scaling factors of the further scaling circuits 131 and 132 are adapted by control elements 135 and 136, respectively, in the presence of a dominant signal from the desired sound source, such that the powers of the residual signals are reduced and specifically minimized. Below, the operation of the control circuit is explained in more detail.
The power of the combined audio signal 103 is:
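Py=E{y²}=(af1+f2)²E{s²}+f1²+f2², where E{s²} denotes the power of the desired source (this expression follows from the signal model and assumptions given above).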
When Py is maximized under the constraint f1²+f2²=1, the power of the noise in Py remains constant and the Signal-to-Noise ratio in Py is maximized. The scaling factors can then be calculated theoretically using a Lagrange multiplier method, which yields:
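f1=a/√(1+a²) and f2=1/√(1+a²), i.e. scaling factors proportional to the transfer factors of the desired source in the respective input signals (this follows from maximizing (af1+f2)² under the constraint).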
In practice, however, the scaling factors are preferably obtained using a least-mean-squares (LMS) adaptation scheme, as is done in the control elements 135 and 136; the Lagrange multiplier method as such is only used for the theoretical calculation.
For f1 and f2 chosen as:
the scaling factors are applied in the audio processing arrangement 100 in circuits 111 and 131, and 112 and 132, respectively. In other words, the scaling factor used by the scaling circuit 111 is the same as that used by the further scaling circuit 131. It can be shown that for the first scaling circuit 111 there is no remaining desired sound signal s in its residual signal, and that the cross-correlation between the residual signal and the input of the first scaling circuit 111 is zero, in case:
The combined audio signal fed into the control circuit 130 is expressed as:
y=f1(as+n1)+f2(s+n2).
The first residual signal r1 is then expressed as:
r1=as+n1−f1²(as+n1)−f1f2(s+n2).
With the above choice of f1 and f2, the first residual signal reduces to:
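r1=f2²n1−f1f2n2, since the coefficient of the desired signal, a−f1²a−f1f2, vanishes for this choice of f1 and f2.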
The cross-correlation between y and r1 gives then:
E{yr1}=f1f2²E{n1²}−f1f2²E{n2²}=0.
At equilibrium there is no desired sound signal in the reference signal and E{yr1} due to the noise is zero.
The control elements 135 and 136 are preferably updated according to the expressions:
f1(k+1)=f1(k)+μy(k)r1(k)
and
f2(k+1)=f2(k)+μy(k)r2(k)
respectively, where k is a time index, r2 is the second residual signal, and μ is an adaptation constant. Since E{yr1} due to the noise is zero for this choice of the scaling factors, f1 will remain at equilibrium. The same holds for f2.
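The adaptation described above can be illustrated with a short numerical sketch. The following Python fragment is not part of the original disclosure; the transfer factor a, the source level and the adaptation constant μ are arbitrary illustrative choices. It simulates the model x1=as+n1, x2=s+n2 with uncorrelated unit-variance interferences and applies the update rules of the control elements 135 and 136; the scaling factors converge towards a/√(1+a²) and 1/√(1+a²) as stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
a = 0.8                       # assumed transfer factor (illustrative)
s = rng.normal(0.0, 1.5, N)   # desired source signal
n1 = rng.normal(0.0, 1.0, N)  # uncorrelated unit-variance interferences
n2 = rng.normal(0.0, 1.0, N)

x = np.vstack([a * s + n1, s + n2])   # x1 = a*s + n1, x2 = s + n2

f = np.array([0.5, 0.5])      # initial scaling factors f1, f2
mu = 1e-4                     # adaptation constant

for k in range(N):
    y = f @ x[:, k]           # combined signal y = f1*x1 + f2*x2
    r = x[:, k] - f * y       # residual signals r1, r2
    f = f + mu * y * r        # LMS updates f_i(k+1) = f_i(k) + mu*y(k)*r_i(k)

print("adapted:", f)
print("expected:", np.array([a, 1.0]) / np.sqrt(1.0 + a * a))
```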
The above can easily be generalized to N input audio signals, each having a transfer factor ai with 1≦i≦N. For N scaling circuits comprised in the processing circuit 110, each corresponding to an input audio signal i, the scale factors for each of the scaling circuits can be expressed as:
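With the model xi=ais+ni and uncorrelated unit-variance noises, a consistent generalization is fi=ai/√(a1²+…+aN²), so that the sum of the squared scaling factors again equals one.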
The inventors have realized that the performance of the described audio processing arrangement 100 is significantly degraded in the presence of correlated noise and therefore is unsuitable for many applications where closely spaced microphones are used resulting in increased correlated noise, such as reverberation noise. Specifically, the inventors have realized that the presence of correlated noise may result in the algorithm converging towards suboptimal scaling factors corresponding to suboptimal beam forms/directions or may result in the algorithm not converging. Thus, as realized by the inventors, for an input signal comprising a desired signal component, an uncorrelated noise component and a correlated noise component, the uncorrelated noise component will merely increase the variance of the generated filter coefficient estimates but will not introduce a bias to the estimates whereas the correlated noise will tend to bias the adaptation away from the correct values of the filter coefficients. Specifically, it has been found that for a small microphone array in a reverberant room, the reverberation may completely prevent the beam forming unit 100 from converging towards the correct solution. This is especially the case if the level of the reverberation is equal to, or larger than, the direct sound including early reflections, i.e. if the distance between the source and the microphones exceeds the reverberation radius. Of course, such a situation is typically the case for hearing aid applications wherein the distance between the microphones is low whereas the distance to the desired sound source (e.g. a speaker) is much larger.
The operation of the pre-processing circuit 140 is explained by way of an example. Assume there is a non-zero cross-correlation between n1 and n2:
E{n1n2}=ρ.
The power of the combined audio signal 103 is now:
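Py=(af1+f2)²E{s²}+f1²+f2²+2ρf1f2, i.e. the cross-correlation adds the term 2ρf1f2 to the noise contribution.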
With f1²+f2²=1, it is clear that maximizing Py does not necessarily mean that the Signal-to-Noise ratio is maximized. For ρ much larger than the desired signal power E{s²}, maximizing Py essentially maximizes the term 2ρf1f2, which under this constraint yields f1=f2=1/√2; this is not the correct solution except when a=1.
In the control circuit 130 the adaptation is performed under the constraint f1²+f2²=1, and a problem now arises for the residual r1: at the desired solution, where no desired sound signal remains in r1, the expectation E{yr1} is non-zero whenever a≠1, because of the correlation ρ between the interferences. As a result, given the update rule of the scaling factors used in the control element 135, the desired solution is not an equilibrium and f1 will converge to a different (undesired) solution.
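This can be verified directly: with f1²+f2²=1 the noise component of r1 equals f2²n1−f1f2n2, so that E{yr1}=ρf2(f2²−f1²); inserting f1=a/√(1+a²) and f2=1/√(1+a²) gives E{yr1}=ρ(1−a²)/((1+a²)√(1+a²)), which is zero only for a=1 (or ρ=0).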
It is thus desired to remove the influence of the cross-correlation of the interferences, as it is done in the pre-processing circuit 140. The data correlation matrix for the above example is defined as:
with its inverse being:
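For this example Γ has diagonal entries equal to 1 and off-diagonal entries equal to ρ, and its inverse has diagonal entries 1/(1−ρ²) and off-diagonal entries −ρ/(1−ρ²).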
The pre-processed signals at the output of the pre-processing circuit 140 are then given by:
The combined signal y at the output of the combining circuit 120 is then:
The power of y is then:
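In terms of the quantities introduced above, these expressions may be written as x1p=(x1−ρx2)/(1−ρ²) and x2p=(x2−ρx1)/(1−ρ²) for the pre-processed signals, y=f1x1p+f2x2p for the combined signal, and Py=E{s²}(f1a'+f2b')²+(f1²−2ρf1f2+f2²)/(1−ρ²) for its power, where a'=(a−ρ)/(1−ρ²) and b'=(1−ρa)/(1−ρ²) are the transfer factors of the desired source after pre-processing; the last term is the noise contribution.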
To optimize the Signal-to-Noise ratio a constraint must be applied that keeps the noise contribution in Py independent of f1 and f2, i.e.:
which can be equivalently expressed in matrix notation as
Applying the Lagrange multiplier method results in the following values for f1 and f2:
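Here the constraint keeps the noise contribution constant, (f1²−2ρf1f2+f2²)/(1−ρ²)=1, which in matrix notation reads fᵀΓ⁻¹f=1 with f=[f1; f2]; carrying out the constrained maximization then yields scaling factors proportional to the transfer factors of the desired source, f1=ca and f2=c, with c=√((1−ρ²)/(a²−2aρ+1)) chosen such that the constraint is satisfied.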
The above constraint is implemented in the structure shown in
The desired sound source component in y is:
and in r1 is:
Similarly for the noise component in y:
and in r1:
Correlating the noise components yn and rn, and inserting the obtained f1 and f2, results in:
E{ynrn}=0.
At equilibrium, the influence of the cross-correlation between the interferences is removed due to the pre-processing performed in the pre-processing circuit 140.
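The effect of the pre-processing can likewise be illustrated numerically. The following Python sketch is again not part of the original disclosure (a, ρ, the source level and μ are illustrative assumptions); it repeats the previous simulation with correlated interferences and inserts the multiplication by the inverse of the correlation matrix, as performed by the pre-processing circuit 140, before the scaling circuits, while the residuals are still formed against the unprocessed input signals as in the subtraction circuits 133 and 134. The adapted scaling factors now point in the direction of the transfer factors [a, 1] of the desired source, whereas without the pre-processing the correlated interference would bias them away from this direction.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400_000
a, rho = 0.8, 0.7             # assumed transfer factor and interference correlation
s = rng.normal(0.0, 1.0, N)   # desired source signal

# correlated unit-variance interferences with E{n1*n2} = rho
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
n = L @ rng.normal(0.0, 1.0, (2, N))

x = np.vstack([a * s + n[0], s + n[1]])     # input signals x1, x2

Gamma = np.array([[1.0, rho], [rho, 1.0]])  # regulation matrix (here: the correlation matrix)
xp = np.linalg.inv(Gamma) @ x               # pre-processed signals (circuit 140)

f = np.array([0.5, 0.5])
mu = 1e-4
for k in range(N):
    y = f @ xp[:, k]          # combined signal formed from the pre-processed signals
    r = x[:, k] - f * y       # residuals against the original inputs (circuits 133, 134)
    f = f + mu * y * r        # updates of the control elements 135, 136

print("adapted direction:", f / np.linalg.norm(f))
print("desired direction:", np.array([a, 1.0]) / np.hypot(a, 1.0))
```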
In an embodiment, the pre-processing circuit 140 minimizes a cross-correlation of the interferences by means of multiplication of the input audio signals by an inverse of a regulation matrix. The regulation matrix is a function of a correlation matrix. Entries of the correlation matrix are correlation measures between the interferences in respective pairs of the plurality of audio sources.
Various choices of the regulation matrix can be made as long as the regulation matrix guarantees that the cross-correlation of interferences comprised in the input audio signals is minimized.
Preferably, the regulation matrix is given by
wherein Vp (ω) is the interference in the input audio signal p, Vq (ω) the interference in the input audio signal q, ω a radial frequency, and E is the expectation operator. An example where the regulation matrix can be computed as above is when the interference is from a noise source, and the above matrix can be estimated when the desired sound source is not active. The expectations are calculated by averaging over data samples.
The above approach for computing the regulation matrix is however not possible when the interference is reverberation, as reverberation is present only when the desired source is active and can thus not be measured. In this case, it is possible to make use of a model for the correlation matrix.
In a further embodiment, the regulation matrix is the correlation matrix.
In a further embodiment, the (p,q) entry of the correlation matrix is based on the model for diffuse noise and is given by:
wherein dpq is a distance between microphones p and q, c is a speed of sound in air, and ω is a radial frequency.
If the regulation matrix is the correlation matrix, it de-correlates the correlated interferences, but previously uncorrelated noise (e.g. white noise, sensor noise) then becomes correlated. There is thus a trade-off: correlated interferences can be de-correlated, but at the cost of introducing correlation between previously uncorrelated noise. In a further embodiment, this trade-off can be controlled by choosing the regulation matrix to be:
Γreg(ω)=ηΓ(ω)+(1−η)I
wherein Γreg(ω) is the regulation matrix, Γ(ω) is the correlation matrix, η is a predetermined parameter, and I is an identity matrix.
A more precise way to control the above mentioned trade-off is to adjust η based on the relative powers of the correlated and uncorrelated noises.
In a further embodiment, the parameter η is given by:
wherein σν² is a variance of the interference in the input audio signals, and σn² is the variance of an electronic noise contained in the input audio signals.
In a further embodiment, the parameter η takes on a predetermined fixed value. A preferred value for η is 0.98 or 0.99.
Often the power of the electronic noise σn² is fixed and can be measured. The quantity σν²+σn² can also be measured when the desired source is not active. Once these two quantities are known, the parameter η can be computed.
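By way of illustration, the following Python sketch shows how the regulation matrix of this embodiment may be constructed per frequency bin from the diffuse-field model and the parameter η, and how its inverse, used for the pre-processing, is obtained. The sketch is not part of the original disclosure; the microphone spacing, sampling rate and noise powers are illustrative assumptions.

```python
import numpy as np

fs = 16_000.0                 # sampling rate (assumed)
d = 0.015                     # microphone spacing in metres (15 mm end-fire array)
c = 343.0                     # speed of sound in air, m/s
n_bins = 257                  # e.g. one-sided bins of a 512-point FFT
freqs = np.linspace(0.0, fs / 2.0, n_bins)
omega = 2.0 * np.pi * freqs   # radial frequencies

sigma_v2 = 1.0                # measured power of the correlated interference (assumed)
sigma_n2 = 0.01               # measured power of the microphone self-noise (assumed)
eta = sigma_v2 / (sigma_v2 + sigma_n2)   # or a fixed design value such as 0.98

def diffuse_coherence(w: float, dist: float) -> float:
    """sin(w*d/c)/(w*d/c): coherence of a diffuse field between two microphones."""
    return float(np.sinc(w * dist / (c * np.pi)))   # np.sinc(x) = sin(pi*x)/(pi*x)

Gamma_reg_inv = np.empty((n_bins, 2, 2))
for b, w in enumerate(omega):
    g = diffuse_coherence(w, d)
    Gamma = np.array([[1.0, g], [g, 1.0]])
    Gamma_reg = eta * Gamma + (1.0 - eta) * np.eye(2)   # regulation matrix for this bin
    Gamma_reg_inv[b] = np.linalg.inv(Gamma_reg)

# Per bin, the pre-processed spectra are then Gamma_reg_inv[b] @ [X1(w), X2(w)].
```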
Further the audio processing arrangement 200 comprises fixed delay elements 151 and 152. The output of the first audio source 101 is connected to the input of the first delay element 151. The output of the first delay element 151 is connected to the first input of the subtraction circuit 133. The output of the second audio source 102 is connected to the input of the second delay element 152. The output of the second delay element 152 is connected to the second subtraction circuit 134. The delay elements 151 and 152 make the impulse response of the adjustable filters relatively anti-causal (earlier in time) with respect to the impulse response of the further adjustable filters.
In the case when there are adjustable filters instead of scalar (gain) factors as in the example considered previously, it is advantageous to look at the problem in the frequency domain. Similar to the example considered earlier, one then has in the frequency domain a first input audio signal x1(ω), and a second input audio signal x2(ω) expressed as:
x1(ω)=a(ω)s(ω)+n1(ω),
x2(ω)=s(ω)+n2(ω).
The above system can be treated as a scalar case for each frequency component (ω), and corresponding gain factors f1(ω) and f2(ω) can be derived as in the earlier example. The quantities f1(ω) and f2(ω) correspond to the transfer functions of the adjustable filters.
Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in accordance with the invention. In the claims, the term comprising does not exclude the presence of other elements or steps.
Furthermore, although individually listed, a plurality of circuits, elements or method steps may be implemented by e.g. a single unit or suitably programmed processor. Additionally, although individual features may be included in different claims, these may be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. Also, the inclusion of a feature in one category of claims does not imply a limitation to this category, but rather indicates that the feature is equally applicable to other claim categories as appropriate. Furthermore, the order of features in the claims does not imply any specific order in which the features must be worked, and in particular the order of individual steps in a method claim does not imply that the steps must be performed in this order. Rather, the steps may be performed in any suitable order. In addition, singular references do not exclude a plurality. Thus references to “a”, “an”, “first”, “second” etc do not preclude a plurality. Reference signs in the claims are provided merely as a clarifying example and shall not be construed as limiting the scope of the claims in any way.