This application claims the priority under 35 U.S.C. §119 of European patent application nos. 10012343.9, filed on Sep. 30, 2010, and 10275102.1, filed on Sep. 30, 2010, the contents of which are incorporated by reference herein.
This invention relates to manipulation of a sound scene comprising multiple sound sources. It is particularly relevant in the case of simultaneous recording of audio by multiple microphones.
Most existing sound scene manipulation methods operate in a two-stage fashion: in a first stage, the individual sound sources are extracted from one or more microphone recordings; and in a second stage, the separated sound sources are recombined according to the desired sound scene manipulation. When the manipulation consists of a change in the desired level of the individual sound sources (which is commonly the case), the second stage is trivial, once the first stage has been executed. Indeed, the recombination in the second stage then reduces to a simple linear combination of the separated sound sources obtained from the first stage. Unfortunately, the extraction of the individual sound sources from the recorded microphone signal(s) is a difficult problem, on which a lot of research effort has been spent. Broadly speaking, the state of the art in sound source extraction can be classified into three approaches:
According to an aspect of the present invention there is provided an audio-processing device comprising:
A device according to an embodiment of the invention addresses the problem of sound scene manipulation from a fundamentally different perspective, in that it allows any specified level change to be performed for each of the individual sound source components in the observed mixture(s), without relying on an explicit sound source separation. The disadvantages of the state of the art that are overcome by the device can be explained by considering each of the three approaches already highlighted above:
One application of a method or device according to an embodiment is the enhancement of acoustic signals like speech or music. In this case, the sound scene consists of desired as well as undesired sound sources, and the aim of the sound scene manipulation comprises reducing the level of the undesired sound sources relative to the level of the desired sound sources.
According to another aspect of the invention, there is provided a handheld personal electronic device comprising a plurality of microphones; and the audio processing device referred to above.
The invention is particularly suited to mobile, handheld applications, since it has relatively light computational demands. It may therefore be usable with a mobile device having limited processing resources or may enable power consumption to be reduced.
The mobile or handheld device preferably incorporates a video recording apparatus with a visual zoom capability, and the audio processing device is preferably adapted to modify the desired gain factors in accordance with a configuration of the visual zoom. This enables the device to implement an acoustic zoom function.
The microphones are preferably omni-directional microphones.
The present device may be particularly beneficial in these circumstances, because the source separation problem is inherently more difficult when using omni-directional microphones. If the microphones are uni-directional, there will often be significant selectivity (in terms of signal power) between the sources across the various audio signals, which can make the manipulation task easier. The present device is also able to work with omni-directional microphones, where there is less selectivity in the raw audio signals; it is therefore more flexible. For example, it can exploit spatial selectivity by means of beamforming techniques, but it is not limited to spatial selectivity obtained through the use of uni-directional microphones.
According to a further aspect of the invention, there is provided a method of processing audio signals comprising:
The parameters of the different mixture may be reweighting factors, which relate the levels of the components in the at least one auxiliary signal to their respective levels in the reference audio signal.
The method is particularly relevant to configurations with more than one microphone. Sound from all of the sound sources is detected at each microphone. Therefore, each sound source gives rise to a corresponding component in each audio signal. The number of sources may be less than, equal to, or greater than the number of audio signals (which is equal to the number of microphones). The sum of the number of audio signals and the number of auxiliary signals should be at least equal to the number of sources which it is desired to independently control.
Each auxiliary signal contains a different mixture of the components. That is, the components occur with different amplitude in each of the auxiliary signals (according to the reweighting factors). In other words, the auxiliary signals and the audio signals should be linearly independent; and the sets of reweighting factors which relate the signal components to each auxiliary signal should also be linearly independent of one another.
Explicit source separation is not necessary. Preferably the levels of the source signal components in the auxiliary signals are varied by a power ratio in the range −40 dB to +6 dB, more preferably −30 dB to 0 dB, still more preferably −25 dB to 0 dB, compared to their levels in the reference audio signal(s).
In the step of synthesizing the output signal, a scaling coefficient is preferably applied to the reference audio signal and the result is combined with the scaled auxiliary signals.
The scaled auxiliary signals and/or scaled audio signals may be combined by summing them.
In general, in practice, the scaling coefficients and the desired gain factors will have different values (and may be different in number). They would only be identical if the auxiliary signals were to achieve perfect separation of the sources, which is usually impossible in practice. Each desired gain factor corresponds to the desired volume (amplitude) of a respective one of the sound-sources. On the other hand, the scaling coefficients correspond to the auxiliary signals and/or input audio signals. The number of reweighting factors is equal to the product of the number of signal components and the number of auxiliary signals, since in general each auxiliary signal will comprise a mixture of all of the signal components.
Preferably, the desired gain factors, the reweighting factors and the scaling coefficients are related by a linear system of equations; and the step of calculating the set of scaling coefficients comprises solving the system of equations.
For example, the step of calculating the set of scaling coefficients may comprise: calculating the inverse of a matrix of the reweighting factors; and multiplying the desired gain factors by the result of this inversion calculation.
The reweighting factors may be formed into a matrix and the inverse of this matrix may be calculated explicitly. Alternatively, the inverse may be calculated implicitly by equivalent linear algebraic calculations. The result of the inversion may be expressed as a matrix, though this is not essential.
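As an illustration only, the following minimal numpy sketch shows one way in which this calculation could be organised. The matrix of reweighting factors (one row per sound-source component, one column per signal entering the synthesis, with the reference audio signal contributing a column of ones) and the desired gain factors are hypothetical values, not taken from any embodiment described below.

import numpy as np

# Hypothetical reweighting matrix: one row per sound-source component, one
# column per signal used in the synthesis (reference audio signal first, then
# two auxiliary signals). The reference signal contains every component at its
# original level, hence the leading column of ones.
Gamma = np.array([
    [1.0, 0.9, 0.2],   # component 1 in reference, auxiliary 1, auxiliary 2
    [1.0, 0.3, 0.8],   # component 2
    [1.0, 0.1, 0.1],   # component 3 (for example, diffuse noise)
])

# Desired gain factors, one per sound-source component (illustrative values).
g = np.array([1.0, 0.5, 0.1])

# Solving Gamma @ a = g yields the scaling coefficients a that are applied to
# the reference and auxiliary signals; np.linalg.solve performs the (implicit)
# inversion of the reweighting matrix.
a = np.linalg.solve(Gamma, g)
print(a)

When the system is not square (for example, fewer signals than independently controlled components), np.linalg.lstsq could be used instead of np.linalg.solve to obtain a least-squares set of scaling coefficients.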
The at least one auxiliary signal may be a linear combination of any of: one or more of the audio signals; one or more shifted versions of the audio signals; and one or more filtered versions of the audio signals.
The at least one auxiliary signal may be generated by at least one of: fixed beamforming; adaptive beamforming; and adaptive spectral modification.
Here, fixed beamforming means a spatially selective signal processing operation with a time-invariant spatial response. Adaptive beamforming means a spatially selective signal processing operation with a time-varying spatial response. Adaptive spectral modification means a frequency-selective signal processing operation with a time-varying frequency response, such as the class of methods known in the art as adaptive spectral attenuation or adaptive spectral subtraction. An adaptive spectral modification process typically does not exploit spatial diversity, but only frequency diversity among the signal components.
These are advantageous examples of ways to create the auxiliary signals. Fixed beamforming may be beneficial when there is some prior expectation that one or more of the sound sources is localised and located in a predetermined direction relative to a set of microphones. The fixed beamformer will then modify the power of the corresponding signal component, relative to other components.
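Purely by way of illustration, a simple fixed delay-and-sum beamformer steered towards a predetermined direction might be sketched as follows. The free-field propagation model, the linear array geometry and the parameter values are assumptions of the sketch; the embodiments described later use a superdirective design rather than a delay-and-sum design.

import numpy as np

def delay_and_sum(signals, mic_x, angle_rad, fs, c=343.0):
    # signals:   (N, T) array of microphone signals from a linear array on the x-axis
    # mic_x:     (N,) microphone x-coordinates in metres
    # angle_rad: steering direction measured from the array axis
    n_samples = signals.shape[1]
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # Relative propagation delay of a plane wave arriving from the steering direction.
    delays = mic_x * np.cos(angle_rad) / c
    # Phase rotations that time-align the microphone signals for that direction,
    # followed by averaging across microphones.
    aligned = spectra * np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n_samples)

Sounds arriving from the steered direction add coherently after the alignment, so their power in the output is raised relative to sounds from other directions, which is the reweighting effect referred to above.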
Adaptive beamforming may be beneficial when a localised sound source is expected, but its orientation relative to the microphone(s) is unknown.
Adaptive spectral modification (for example, by attenuation) may be useful when sound sources can be discriminated to some extent by their spectral characteristics. This may be the case for a diffuse noise source, for example.
The methods of generating the auxiliary signal or signals are preferably chosen according to the expected sound environment in a given application. For example, if several sources in known directions are expected, it may be appropriate to use multiple fixed beamformers. If multiple moving sources are expected, multiple adaptive beamformers may be beneficial. In this way—as will be apparent to those skilled in the art—one or more instances of different means of generating the auxiliary signal may be combined, in embodiments.
Optionally, a first auxiliary signal is generated by a first method; a second auxiliary signal is generated by a second, different method; and the second auxiliary signal is generated based on an output of the first method.
For example, the fixed beamforming may be adapted to emphasize sounds originating directly in front of the microphone or microphone array. For example, this may be useful when the microphone is used in conjunction with a camera, because the camera (and therefore the microphone) is likely to be aimed at a subject who is one of the sound sources.
An output of the fixed beamformer may be input to the adaptive beamformer. This may be a noise reference output of the fixed beamformer, wherein the power ratio of a component originating from the fixed direction is reduced relative to other components. It is advantageous to use this signal in the adaptive beamformer, in order to find a (remaining) localised source in an unknown direction, because the burden on the adaptive beamformer to suppress the fixed signals may be reduced.
An output of the adaptive beamformer may be input to the adaptive spectral modification.
Typically, neither of the beamformers nor an adaptive spectral attenuator will be sufficiently selective to separate individual sources from the mixture. In this context, the method of the invention may be seen as a flexible framework for combining weak separators, to allow an arbitrary desired weighting on sound sources. The individual operations of beamforming or spectral modification preferably cause a change in the signal power of individual sound source components in the range −25 dB to 0 dB. This refers to the input/output power ratio of each operation, ignoring cascade effects due to the output of one unit being connected to the input of another.
The method may optionally comprise: synthesizing a first output audio signal by applying scaling coefficients to a first reference audio signal and at least one first auxiliary signal and combining the results; and synthesizing a second output audio signal by applying scaling coefficients to a second, different reference audio signal and at least one second auxiliary signal and combining the results.
This may be particularly useful for generating binaural (for example, stereo) outputs. The at least one first auxiliary signal and at least one second auxiliary signal may be the same or different signals. The two different reference audio signals should be selected from appropriately arranged microphones, for a desired stereo effect.
In a similar way, the method can be extended to synthesize an arbitrarily greater number of outputs, as desired for any particular application.
The sound sources may comprise one or more localised sound sources and a diffuse noise field.
The desired gain factors may be time-varying.
The method is particularly well suited to real-time implementation, which means that the desired gain can be adjusted dynamically. This may be useful for example for dynamically balancing changing sound sources, or for acoustic zooming.
In a sound scene consisting of multiple desired sound sources, one often encounters the problem that the levels of the different sources are not sufficiently balanced in the microphone recordings—for example, if one of the sources is positioned closer to the microphone array than the others. In a static scenario, the sound scene can be balanced using time-invariant gain factors, while in a dynamic scenario (that is, with moving or temporally modulated sound sources) the use of time-varying gain factors is more relevant.
The desired gain factors can be chosen in dependence upon the state of a visual zoom function.
In applications where joint audio and video recordings are made (for example, camcorder or video-phone applications), it may be beneficial to match the auditory and visual cues in the recordings to obtain an easier and/or faster multisensory integration. A key example is the process of manipulating the sound scene such that it properly matches the video zooming operations. For example, when zooming in on a particular subject, the sound level of this subject should increase accordingly while keeping the level of the other sound sources constant. In this case, the desired gain factor corresponding to the sound source in front of the camera will be increased over time, while the other gain factors are time-invariant.
Also provided is a computer program comprising computer program code means adapted to perform all the steps of a method as described above, when said program is run on a computer; and such a computer program embodied on a computer readable medium.
The invention will now be described by way of example with reference to the accompanying drawings, in which:
In the following, a theoretical explanation of a method according to an embodiment will first be given, along with an indication of the conditions under which this theory can be used for sound scene manipulation.
Consider a sound scene consisting of M localized sound sources sm(t), m=1, . . . , M positioned in different directions in three-dimensional space (as characterized by the azimuth-elevation angle pairs (θm,φm), m=1, . . . , M), in addition to a diffuse sound field that cannot be attributed to a single sound source or direction. Further to this, consider a microphone array consisting of N microphones (N≧2) and having an arbitrary three-dimensional geometry. Each of the microphones may have a different frequency- and angle-dependent response, as defined by
An(ω,θ,φ) = an(ω,θ,φ)e−jψn(ω,θ,φ), n = 0, . . . , N−1.  (1)
The acoustic response (including the effect of the direct path time delay as well as reverberation) of a sound source at angle (θ,φ) to each of the microphones is given by
Fn(ω,θ,φ) = fn(ω,θ,φ)e−jξn(ω,θ,φ), n = 0, . . . , N−1.  (2)
For ease of notation, we introduce the joint acoustic and microphone response, defined as
Gn(ω,θ,φ) = An(ω,θ,φ)Fn(ω,θ,φ), n = 0, . . . , N−1.  (3)
Using the above definitions, we can express each of the N audio signals Un(ω) detected at the microphones as a function of the localized sound sources and the diffuse sound field in the frequency domain as follows:
where Un(0)(ω) denotes the diffuse noise component. The above relation can equivalently be written in the time domain as follows,
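The two relations referred to above are not legible in the text; plausible reconstructions, consistent with definitions (1) to (3) and with the later references to equations (4) and (5), are:

U_n(\omega) = \sum_{m=1}^{M} G_n(\omega,\theta_m,\phi_m)\, S_m(\omega) + U_n^{(0)}(\omega), \qquad n = 0,\dots,N-1 \qquad (4)

u_n(t) = \sum_{m=1}^{M} u_n^{(m)}(t) + u_n^{(0)}(t), \qquad n = 0,\dots,N-1 \qquad (5)

where S_m(\omega) denotes the spectrum of the m-th dry sound source signal and u_n^{(m)}(t) denotes the m-th sound source component as received at the n-th microphone, that is, the time-domain counterpart of G_n(\omega,\theta_m,\phi_m)S_m(\omega).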
The aim of the envisaged sound scene manipulation is to produce N manipulated signals, or audio output signals, ζn(t), in which each of the levels of the individual sound source components is changed in a user-specified way as compared to the respective levels in the nth microphone signal. Mathematically, the aim is to produce the signals
where gn(m)(t), m=0, . . . , M denote the user-specified time-varying gain factors for the different sound source components. Hereinafter, these will be referred to as the “desired gain factors”.
Suppose that one could generate M auxiliary signals xn(p)(t), p=1, . . . , M, in which the different sound source components have been arbitrarily reweighted with respect to the corresponding components in the microphone signal un(t), that is,
Here, each of the reweighting factors is by definition equal to the square root of the power ratio of the corresponding sound source components, that is,
The nth manipulated signal (output audio signal) can now be calculated as a weighted sum of the nth microphone signal and the auxiliary signals xn(p)(t), p=1, . . . , M defined above, that is,
By using the relations in equations (5) and (7), the expression for the calculated nth manipulated signal in equation (9) can be shown to be equivalent to the expression for the desired nth manipulated signal in equation (6) if the weights an(p), p=0, . . . , M satisfy the following relationship,
This implies that a unique set of weight trajectories an(p)(t), p=0, . . . , M, ∀t can be calculated that exactly produces the desired sound scene manipulation. Herein, the weight trajectories an(p)(t), p=0, . . . , M, ∀t are also referred to as “scaling coefficients”.
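For reference, equations (6) to (10), which are cited in the surrounding text but are not legible here, can plausibly be reconstructed from the description above as follows (assumed forms, not reproduced verbatim from the original):

\zeta_n(t) = \sum_{m=0}^{M} g_n^{(m)}(t)\, u_n^{(m)}(t) \qquad (6)

x_n^{(p)}(t) = \sum_{m=0}^{M} \gamma_n^{(p,m)}\, u_n^{(m)}(t), \qquad p = 1,\dots,M \qquad (7)

\gamma_n^{(p,m)} = \sqrt{\sigma^2_{x_n^{(p)},m} \,/\, \sigma^2_{u_n^{(m)}}} \qquad (8)

\zeta_n(t) = a_n^{(0)}(t)\, u_n(t) + \sum_{p=1}^{M} a_n^{(p)}(t)\, x_n^{(p)}(t) \qquad (9)

a_n^{(0)}(t) + \sum_{p=1}^{M} \gamma_n^{(p,m)}\, a_n^{(p)}(t) = g_n^{(m)}(t), \qquad m = 0,\dots,M \qquad (10)

where u_n^{(0)}(t) denotes the diffuse noise component, \sigma^2_{x_n^{(p)},m} denotes the power of the m-th component in the p-th auxiliary signal, and \sigma^2_{u_n^{(m)}} denotes its power in the n-th microphone signal. Equation (10) is the linear system, with (M+1)×(M+1) coefficient matrix Γ, whose solution yields the scaling coefficients.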
There are two conditions for exact reproduction of the effect of an arbitrary set of desired gain factors gn(m)(t), according to equation (10):
Loosely speaking, the first condition requires that the microphone signal un(t) and the auxiliary signals xn(p)(t), p=1, . . . , M should be linearly independent (which leads to linearly independent columns in Γ) and requires that the reweighting of the different sound source components in each of the auxiliary signals xn(p)(t), p=1, . . . , M should be linearly independent (which leads to linearly independent rows in Γ). The reweighting factors can be calculated or estimated depending on the embodiment of the invention, as described in greater detail below.
Note that equation (7) above is a model for the auxiliary signals which will usually be satisfied only approximately, in practice. In the embodiments described below, the auxiliary signals will be derived from the various microphone signals. Therefore, they will be composed of filtered versions of the sound source components, instead of the unfiltered (“dry”) sound source components themselves suggested by equation (7).
If the model of equation (7) could be satisfied precisely, exact recovery of a single sound source component would be possible (by choosing the desired gain factors appropriately). In the embodiment to be described below, this would demand the design of ideal beamformers that have a flat frequency response within the bandwidth of the source component of interest, and demand that the diffuse noise has no spectral overlap with the source component of interest. In practice, these restrictions are usually not met, and as a consequence the auxiliary signals will be linear combinations of filtered versions of the original sound source components (with non-uniform frequency response), rather than linear combinations of the original sound source components. This makes the exact recovery of a single sound source component impossible; however, this is a shortcoming of the practical embodiment rather than the theoretical method.
In the following, without loss of generality, an exemplary scenario will be considered in which the sound field in the acoustic environment is assumed to consist of four contributions coming from different azimuthal directions: a front sound source (F), a back sound source (B), a localized interfering sound source (I), and a diffuse noise field (N).
The number of localized interfering sound sources is taken to be one, for the purposes of this explanation. Furthermore, in this example, it is assumed that the capture device is equipped with two or more microphones. Those skilled in the art will appreciate that none of these assumptions should be taken to limit the scope of the invention.
If the nth microphone signal un(t) is decomposed in the time domain as:
un(t) = un(F)(t) + un(B)(t) + un(I)(t) + un(N)(t)
then the corresponding desired output of the algorithm can be written as follows:
ζn(t) = gF(t)un(F)(t) + gB(t)un(B)(t) + gI(t)un(I)(t) + gN(t)un(N)(t)
where gF(t), gB(t), gI(t), and gN(t) denote the desired gain factors for the different sound source components. Note that one is not necessarily interested in calculating N output signals of the algorithm. Typically, the focus is on obtaining a mono or stereo output, which implies that the relation above only needs to be considered for one or two particular values of n, say n1 (and n2).
Nevertheless, all N microphone signals will typically be used to obtain an estimate of the two output signals, ζn1(t) and ζn2(t).
Conventionally, it would be expected that the algorithm needs to perform some kind of source separation to isolate the different sound source components. However, since we are not interested in the separated sound source components, but rather in a mixture in which the levels of these components have been adjusted as compared to the microphone signals, an explicit source separation is not required. Let us denote three auxiliary signals as xn(t), yn(t), and zn(t), in which the different sound source components have been arbitrarily reweighted (by reweighting factors γ) with respect to the corresponding components in the microphone signal un(t), that is:
xn(t) = γx(F)un(F)(t) + γx(B)un(B)(t) + γx(I)un(I)(t) + γx(N)un(N)(t)
yn(t) = γy(F)un(F)(t) + γy(B)un(B)(t) + γy(I)un(I)(t) + γy(N)un(N)(t)
zn(t) = γz(F)un(F)(t) + γz(B)un(B)(t) + γz(I)un(I)(t) + γz(N)un(N)(t)
The output signal of the algorithm can now be calculated as a linear combination of the nth microphone signal and the auxiliary signals xn(t), yn(t), and zn(t) defined above, that is:
ζn(t)=an(0)(t)un(t)+ an(1)(t)xn(t)+an(2)(t)yn(t)+an(3)(t)zn(t).
This corresponds to equation (9) above. The corresponding form of equation (10) is:
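The equation itself is not legible in the source text; a reconstruction consistent with the component decomposition and the auxiliary-signal definitions above (obtained by substituting those definitions into the expression for ζn(t) and equating the coefficient of each component to the corresponding desired gain factor) is:

\begin{bmatrix} g_F(t) \\ g_B(t) \\ g_I(t) \\ g_N(t) \end{bmatrix}
=
\begin{bmatrix}
1 & \gamma_x^{(F)} & \gamma_y^{(F)} & \gamma_z^{(F)} \\
1 & \gamma_x^{(B)} & \gamma_y^{(B)} & \gamma_z^{(B)} \\
1 & \gamma_x^{(I)} & \gamma_y^{(I)} & \gamma_z^{(I)} \\
1 & \gamma_x^{(N)} & \gamma_y^{(N)} & \gamma_z^{(N)}
\end{bmatrix}
\begin{bmatrix} a_n^{(0)}(t) \\ a_n^{(1)}(t) \\ a_n^{(2)}(t) \\ a_n^{(3)}(t) \end{bmatrix}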
This enables the scaling factors, a, to be calculated, provided the re-weighting factors are known. The estimation of the re-weighting factors will be described in greater detail below. Before that, two embodiments of the invention will be described.
Both embodiments have the general structure shown in the block diagram of
In the first embodiment the goal is to obtain a monaural (mono) output signal.
In
The audio synthesis unit 20 is indicated by the dashed box 220. This produces the output signal ζ0(t) as a weighted summation of the auxiliary signals x0, y0, and z0, as well as the reference audio signal u0. The weights are the scaling coefficients, a, derived by the scaling coefficient calculator 30 (not shown in
Note that in the mono output case of
Because of the above discrimination between the primary beamformer output signals x0(t) and y0(t), on the one hand, and the other beamformer output signals xn(t) and yn(t) with n>0, on the other hand, a stereo output signal should preferably not be created by calculating ζ0(t) and ζ1(t) using these auxiliary signals.
Instead, in the second embodiment, the block structure shown in
ζ0(t)=a0(0)(t)u0(t)+a0(1)(t)x0(t)+a0(2)(t)y0(t)+a0(3)(t)z0(t)
ζ1(t)=a1(0)(t)u1(t)+a1(1)(t)x0(t)+a1(2)(t)y0(t)+a1(3)(t)z0(t)
That is, the same set of auxiliary signals is used for generating both stereo outputs, but a different reference audio signal, un(t), is used in each case. This computation is performed by the audio synthesis unit 320 indicated by the dashed box.
In the case that N>2 (that is, when the array consists of more than two microphones), one should select u0(t) and u1(t) to be those two microphone signals that are best suited to deliver a stereo image. As will be apparent to those skilled in the art, this will typically depend on the placement of the microphones.
Note that, due to the particular structure shown in
Meanwhile, the weights for the primary output signal ζ0(t) can be calculated as before, with n=0.
As the equations above show, the scaling coefficient calculator 30 uses knowledge of the reweighting factors γn(p,m) to derive the scaling coefficients, a(t), from the desired gains, g(t). In the presently described embodiments, the reweighting factors are found by using knowledge of the characteristics of the various blocks 210, 212, 214 in the auxiliary signal generator. Preferably, the reweighting factors are determined offline.
Examples of the calculation of the reweighting factors will be described below. These examples rely on a frequency-domain characterisation of the auxiliary signal generator blocks 210, 212, 214.
The input-output relation of the three functional blocks in the block structure can be described in the frequency domain as follows. The fixed beamformer can be specified by an N×N transfer function matrix W1(ω), that is,
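The relation itself is not legible here; a plausible form, consistent with the later use of W1H(ω) and of the noise-reference outputs Xn(ω), is:

X(\omega) = W_1^H(\omega)\, U(\omega), \qquad X(\omega) = [X_0(\omega)\ \dots\ X_{N-1}(\omega)]^T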
and U(ω) is defined as
U(ω)=[U0(ω) . . . UN-1(ω)]T
The adaptive beamformer can be specified by an N×1 transfer function vector W2(ω) that defines the relation between the adaptive beamformer input and its primary output signal:
Y0(ω) = w2H(ω)X(ω)
where
w2(ω) = [w2,(1)(ω) . . . w2,(N)(ω)]T
As explained earlier, the secondary adaptive beamformer output signal should ideally be an estimate of the diffuse noise component in the primary adaptive beamformer output signal. The most straightforward approach is to choose the secondary output signal to be equal to one of the noise references at the output of the fixed beamformer—for example, Y1(ω)=X1(ω). Alternatively, one could attempt to remove the localized interfering sound source component from the secondary adaptive beamformer output signal; however, this approach is not used in the present embodiments. The adaptive spectral attenuation can finally be specified using a scalar transfer function W3(ω), that is,
Z0(ω) = W3(ω)Y0(ω)
Using the above input-output relations, we can derive expressions for the different localized sound source components in the primary auxiliary signals X0(ω), Y0(ω), and Z0(ω) as a function of the corresponding dry sound source signals SF(ω), SB(ω), and SI(ω),
X0(c)(ω) = w1,(:,1)H(ω)G(ω,θc)Sc(ω)
Y0(c)(ω) = w2H(ω)w1H(ω)G(ω,θc)Sc(ω)
Z0(c)(ω) = w3(ω)w2H(ω)w1H(ω)G(ω,θc)Sc(ω)
where c represents the component F, B, or I, and w1,(:,1)(ω) denotes the first column of W1(ω). Similarly, the diffuse noise component in the primary auxiliary signals can be expressed as a function of the diffuse noise components in the microphone signals,
X0(N)(ω) = w1,(:,1)H(ω)U(N)(ω)
Y0(N)(ω) = w2H(ω)w1H(ω)U(N)(ω)
Z0(N)(ω) = w3(ω)w2H(ω)w1H(ω)U(N)(ω)
We will now make the following assumptions, to simplify the calculation of the reweighting factors:
∀ω such that Sc(ω) ≠ 0 and Un(N)(ω) ≠ 0: |Gn(ω,θc)| ≡ |Gn(θc)|, n = 0, . . . , N−1, c = F, B, I;
∀ω such that Sc(ω) ≠ 0: |W3(ω)| ≡ |W3(c)|, c = F, B, I;
∀ω such that Un(N)(ω) ≠ 0: |W3(ω)| ≡ |W3(N)|, n = 0, . . . , N−1;
σun(N)2 ≡ σu0(N)2, n = 0, . . . , N−1.
Under these assumptions, the signal powers of the different sound source components in the microphone and auxiliary signals can be estimated as follows:
σun(c)2 = |Gn(θc)|2σsc2, c = F, B, I
σx0(c)2 = |w1,(:,1)HG(θc)|2σsc2, c = F, B, I
σy0(c)2 = |w2Hw1HG(θc)|2σsc2, c = F, B, I
σz0(c)2 = |W3(c)|2|w2Hw1HG(θc)|2σsc2, c = F, B, I
with corresponding expressions for the diffuse noise powers σx0(N)2, σy0(N)2 and σz0(N)2 following from the diffuse noise relations above,
and consequently, the reweighting factors can be calculated as
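The expressions themselves are not legible here; consistent with the definition of the reweighting factors as square roots of power ratios relative to the reference microphone signal, they would take the form (assumed reconstruction):

\gamma_x^{(c)} = \sigma_{x_0^{(c)}} / \sigma_{u_0^{(c)}}, \qquad \gamma_y^{(c)} = \sigma_{y_0^{(c)}} / \sigma_{u_0^{(c)}}, \qquad \gamma_z^{(c)} = \sigma_{z_0^{(c)}} / \sigma_{u_0^{(c)}}, \qquad c = F, B, I, N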
Finally, note that from a computational point of view, in some applications, it may be undesirable to calculate the reweighting factors online (in real-time) using the preceding formulae. A more efficient approach involves setting the values of the reweighting factors off-line (in advance), making use of the fixed beamformer response (known a priori) and of heuristics about the behaviour of the adaptive beamformer and spectral attenuation response. The values chosen can be approximations of the theoretical values predicted by the equations above. For example, the values may be set heuristically in 5 dB steps. In many applications, the method will be largely insensitive to 5 dB or 10 dB deviations from the precise theoretical values.
The design of the fixed beamformer in an exemplary embodiment will now be described.
As explained previously above, the fixed beamformer creates a primary output signal X0(ω) that spatially enhances the front sound source signal, as well as a number of other output signals Xn(ω), n>0 that serve as “noise references” for the adaptive beamformer. Here, we will first discuss the design of the so-called front source beamformer (FSB), and afterwards we will explain the design of the so-called blocking matrix (BM).
Depending on the kind of spatial enhancement one wants to achieve for the front sound source, different fixed beamformer design methods could be employed for the FSB; for example, an array pattern synthesis approach, or a differential or superdirective design method. These methods themselves are known in the art. In the present embodiment, we will adopt a superdirective (SD) design method, which is recommendable when the aim is to maximize the directivity factor of the microphone array—that is, to maximize the array gain in the presence of a diffuse noise field. The frequency-domain SD design equation for the FSB can be found in S. Doclo and M. Moonen (“Superdirective beamforming robust against microphone mismatch,” IEEE Trans. Audio Speech Lang. Process., vol. 15, no. 2, pp. 617-631, February 2007):
where G(ω, θF) denotes the front sound source steering vector
G(ω,θ)=[G0(ω,θ) . . . GN-1(ω,θ)]T
IN represents the N×N identity matrix, μ is a regularization parameter, and
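The remainder of this definition and the design equation itself are not legible here. The robust superdirective solution given in the cited reference takes the following general form, in which Γdiff(ω) denotes the spatial coherence matrix of a diffuse noise field (an assumed reconstruction, stated here for completeness):

w_{1,(:,1)}(\omega) = \frac{[\Gamma_{\mathrm{diff}}(\omega) + \mu I_N]^{-1}\, G(\omega,\theta_F)}{G^H(\omega,\theta_F)\, [\Gamma_{\mathrm{diff}}(\omega) + \mu I_N]^{-1}\, G(\omega,\theta_F)}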
The directivity factor (DF) and the ratio of the front and back response (FBRR) of the SD beamformer are defined as follows:
Whereas the DF is nearly constant with FSB filter length, the FBRR increases for higher filter lengths and approximately saturates for a length greater than or equal to 128. Note that the frequency-domain SD design is executed at LFSB/2 frequencies that are uniformly distributed in the Nyquist interval, after which the frequency-domain FSB coefficients are transformed to length-LFSB time-domain filters. Experiments have also shown a significant performance gap, in terms of both directivity and FBRR, between the 2-microphone configuration and configurations with more than 2 microphones.
The BM in the fixed beamformer consists of a number of filter-and-sum beamformers that each operate on one particular subset of microphone signals. In this way, a number of noise reference signals is created, in which the power of the desired signal components is maximally reduced relative to the power of these components in the microphone signals. Typically, in an N-microphone configuration, N-1 noise references are created by designing N-1 different filter-and-sum beamformers. However, in some cases it might be preferable to create fewer than N-1 noise references, which then leads to a reduction of the number of input signals xn(t) for the adaptive beamformer. In fact, in this embodiment we employ a BM consisting of only one filter-and-sum beamformer designed using the complete set of available microphone signals. In this way, the number of adaptive filters and hence the computational complexity of the adaptive beamformer can be considerably reduced.
In the context of the BM design, we consider the back sound source (if any) to be an undesired signal (which should be cancelled by the adaptive beamformer); hence the BM design reduces to a front-cancelling beamformer (FCB) design. Again, one of several different fixed beamformer design methods can be employed. In this embodiment, we use an array pattern synthesis method, different from existing methods.
In general, we can specify the frequency-domain FCB design at a set of angles {θ0, . . . , θM-1} by the following linear system of equations:
where Pm(ω), m = 0, . . . , M−1 denotes the desired response at frequency ω and angle θm. The least-squares (LS) optimal solution is then given by
w1,(:,2)(ω) = [ . . . ]
More specifically, to obtain an FCB design we should specify a zero response in the front direction and a non-zero response in any other direction. Preferably the latter direction should be the back direction, to avoid the design actually corresponding to a front-back-cancelling beamformer design. As a consequence, the number of equations in the linear system of equations above is M=2, and the specification angles correspond to θ0=θF and θ1=θB. Finally, the desired response vector is equal to P*(ω)=[0,1]H.
With this design, the back response is indeed close to a unity response for most microphone configurations and filter length values. However, the front source response varies heavily according to the microphone configuration and filter length used. An important observation is that at least one microphone pair in an endfire configuration should preferably be included in the array to obtain a satisfactory power reduction of the front sound source component. Concerning the choice of the BM filter length, experiments show that there is no clear threshold effect—that is, the response in the front direction decreases with a nearly constant slope (provided an endfire microphone pair is included). As a consequence, the BM filter length should preferably be chosen according to the desired front sound source power reduction.
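As an illustration of the pattern-synthesis design described above, the following numpy sketch computes, per frequency, front-cancelling weights with a zero response towards the front and a unity response towards the back. The free-field steering model, the array geometry and the use of a minimum-norm least-squares solution are assumptions of the sketch rather than the exact design procedure of the embodiment.

import numpy as np

def fcb_weights(mic_x, freqs, theta_front=0.0, theta_back=np.pi, c=343.0):
    # Per-frequency weights w solving G^H w = P with P = [0, 1]^T, i.e. a zero
    # response at theta_front and a unity response at theta_back, for a
    # free-field linear array lying on the x-axis.
    weights = []
    P = np.array([0.0 + 0j, 1.0 + 0j])
    for f in freqs:
        d_front = np.exp(-2j * np.pi * f * mic_x * np.cos(theta_front) / c)
        d_back = np.exp(-2j * np.pi * f * mic_x * np.cos(theta_back) / c)
        G = np.column_stack([d_front, d_back])            # N x 2 steering matrix
        # Minimum-norm least-squares solution of the underdetermined system.
        w, *_ = np.linalg.lstsq(G.conj().T, P, rcond=None)
        weights.append(w)
    return np.array(weights)                              # len(freqs) x N

# Example: three microphones spaced 2 cm apart (endfire pair included),
# designed at 256 frequencies up to 8 kHz.
w_fcb = fcb_weights(np.array([0.0, 0.02, 0.04]), np.linspace(100.0, 8000.0, 256))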
The design of the adaptive beamformer in an exemplary embodiment will now be described.
The adaptive beamformer in the block scheme may be implemented using a generalized sidelobe canceller (GSC) algorithm; a multi-channel Wiener filtering (MWF) algorithm; or any other adaptive algorithm. In this embodiment, we employ the speech-distortion-weighted multi-channel Wiener filtering (SDW-MWF) which includes the GSC and MWF as special cases. Details of this method can be found in S. Doclo, A. Spriet, J. Wouters, and M. Moonen (“Frequency-domain criterion for the speech distortion weighted multichannel wiener filter for robust noise reduction,” Speech Commun., vol. 49, no. 7-8, pp. 636-656, July-August 2007, special Issue on Speech Enhancement).
The objective of the SDW-MWF is to jointly minimize the energy of the undesired components (B, I, N) and the distortion of the desired component (F) in the enhanced signal Y0(ω). That is,
resulting in the adaptive beamformer estimate
w2(ω) = [Φx(F)(ω) + μΦx(B,I,N)(ω)]−1Φx(F)(ω)e0
where e0 = [1, 0, . . . , 0]T and the correlation matrices of the desired and undesired components in the adaptive beamformer input signal are defined as
Φx(F)(ω)=E{[X(F)(ω)][X(F)(ω)]H}
Φx(B,I,N)(ω)=E{[X(B)(ω)+X(I)(ω)+X(N)(ω)][X(B)(ω)+X(I)(ω)+X(N)(ω)]H}
The parameter μ can be tuned to trade off energy reduction of the undesired components versus distortion of the desired component. Several recursive implementations of the SDW-MWF filter estimate have been proposed, in which the adaptive SDW-MWF filter update is based on a generalized singular value decomposition (GSVD), a QR decomposition (QRD), a time-domain stochastic gradient method, or a frequency-domain stochastic gradient method. A common feature of these implementations is that the correlation matrices Φx(F)(ω) and Φx(B,I,N)(ω) are explicitly estimated before the SDW-MWF filter estimate is computed.
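A minimal per-frequency-bin sketch of this filter estimate, assuming that the correlation matrices of the desired and undesired components have already been estimated (for example, with the help of a voice activity detector), could be written as follows; the function name, the diagonal-loading term and the default values are illustrative assumptions.

import numpy as np

def sdw_mwf_filter(phi_desired, phi_undesired, mu=1.0, loading=1e-10):
    # phi_desired:   N x N correlation matrix of the desired component (F)
    # phi_undesired: N x N correlation matrix of the undesired components (B, I, N)
    # mu:            trade-off between noise reduction and desired-signal distortion
    n = phi_desired.shape[0]
    e0 = np.zeros(n)
    e0[0] = 1.0                                   # select the reference channel
    # w2 = [Phi_F + mu * Phi_{B,I,N}]^(-1) Phi_F e0, with a small diagonal
    # loading term added for numerical robustness.
    a_mat = phi_desired + mu * phi_undesired + loading * np.eye(n)
    return np.linalg.solve(a_mat, phi_desired @ e0)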
The signal-to-noise ratio (SNR) improvement provided by the SDW-MWF adaptive beamformer has been evaluated in a scenario with two localized sound sources: a front sound source consisting of a male speech signal (θF=0) and a localized interfering sound source consisting of a music signal (θI=90 degrees).
The mean SNR at the microphones is equal to 10 dB. The fixed beamformer is implemented using an SD design for the FSB and a front-cancelling design for the BM, and an evaluation is done both for LFSB=LBM=64 and for LFSB=LBM=128. The adaptation of the SDW-MWF algorithm is based on a stochastic gradient frequency-domain implementation, and is controlled by a perfect (manual) voice activity detection (VAD). Two features of the SDW-MWF have been evaluated, namely:
Note that, in the case where the desired component distortion is not penalized (1/μ=0), the algorithm without a feedforward filter corresponds to the GSC algorithm, while the algorithm with a feedforward filter is not relevant due to an intolerable speech distortion. The evaluation has shown that the GSC algorithm as well as the SDW-MWF algorithm with a small trade-off parameter (1/μ=0.01) are well suited for the reduction of the localized interfering sound source power. Moreover, there appears to be no significant influence of the number of microphones and the FSB and BM filter lengths on the adaptive beamformer performance.
The design of the Adaptive Spectral Attenuation process in an exemplary embodiment will now be described.
The adaptive spectral attenuation block is included in the structure with the aim of reducing the diffuse noise energy in the primary adaptive beamformer output signal. To this end, the short-term magnitude spectra of the reference microphone signal, |U0(ωk, l)|, and the primary and secondary adaptive beamformer output signals, |Y0(ωk, l)| and |Y1(ωk, l)|, are estimated by means of a Discrete Fourier transform (DFT), with k and l denoting the DFT frequency bin and time frame indices. An instantaneous spectral gain function is then calculated as follows,
where the subtraction factor βn ∈ [0,1] determines the amount of spectral attenuation and the regularization factor ε is a small constant which prevents division by zero. Since the secondary adaptive beamformer output signal Y1(ω) is equal to the noise reference X1(ω) at the output of the fixed beamformer, a spectral coherence function C(ωk,l) that relates the magnitude spectra of the diffuse noise components in the primary and secondary fixed beamformer output signals needs to be estimated and taken into account in the equation. The instantaneous gain function of the equation is then lowpass filtered and clipped, before being applied to the speech estimate, that is,
Glp(ωk,l) = (1−α)Glp(ωk,l−1) + αGinst(ωk,l)
G(ωk,l) = max{Glp(ωk,l), ξn}
|Z(ωk,l)| = G(ωk,l)|Y0(ωk,l)|
where α denotes the lowpass filter pole and ξn=1−βn is the clipping level. The enhanced signal magnitude spectrum |Z(ωk,l)| is subsequently transformed back to the time domain by applying an inverse DFT (IDFT), and by using the phase spectrum of the primary adaptive beamformer output signal Y0(ωk,l).
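A simplified single-frame sketch of this attenuation stage is given below. It omits the spectral coherence function C(ωk,l) (which would additionally scale the noise-reference magnitude) and assumes a spectral-subtraction form for the instantaneous gain; the parameter values are illustrative only.

import numpy as np

def spectral_attenuation_frame(Y0_mag, Y1_mag, G_lp_prev, beta=0.8, alpha=0.1, eps=1e-12):
    # Y0_mag:    magnitude spectrum of the primary adaptive beamformer output
    # Y1_mag:    magnitude spectrum of the secondary output (noise reference)
    # G_lp_prev: lowpass-filtered gain from the previous frame
    # Instantaneous gain in spectral-subtraction form, floored at zero.
    G_inst = np.maximum(0.0, 1.0 - beta * Y1_mag / (Y0_mag + eps))
    # First-order recursive lowpass smoothing of the gain over time.
    G_lp = (1.0 - alpha) * G_lp_prev + alpha * G_inst
    # Clipping at xi = 1 - beta, then application to the primary output magnitude.
    G = np.maximum(G_lp, 1.0 - beta)
    Z_mag = G * Y0_mag
    return Z_mag, G_lp

The enhanced magnitude Z_mag would then be combined with the phase spectrum of the primary adaptive beamformer output and transformed back to the time domain by an inverse DFT, as described above.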
An exemplary use of the embodiment in an Acoustic Zoom (AZ) application will now be described.
gI(t)≡1
gN(t)≡1
From preliminary results with the above zoom-in trajectory for the front sound source level, it was noted that a perceptually better trajectory could be designed. More particularly, a faster level increase at the start of the zoom-in operation would be desired, eventually converging to the same final level at close-up. A perceptually more attractive level trajectory was found to be
Concerning the specification of the back sound source gain factor, several possibilities exist. A first possibility is to regard the back sound source as an undesired sound source, in which case its level should remain constant. However, since the back sound source is typically very close to the camera, its level should often be reduced to obtain an acceptable balance between the back sound source and the other sound sources. A second possibility is to have the back sound source gain factor follow the inverse trajectory of the front sound source gain factor, possibly combined with a fixed back sound source level reduction. While such an inverse level trajectory would obviously make sense from a physical point of view, it may be perceived as somewhat too artificial, since the front sound source level change is then supported by visual cues, while the back sound source level change is not.
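Purely as an illustration of how such time-varying gain trajectories might be parameterised (the actual level trajectories used in the experiments are not reproduced in the text above), a front-source gain that rises quickly at the start of the zoom-in and converges to a fixed close-up level could be sketched as follows; the square-root shape and all numerical values are assumptions.

import numpy as np

def front_gain_trajectory(t, t_start, t_end, g_far=1.0, g_close=4.0):
    # Zoom progress clipped to [0, 1]; constant gain before and after the zoom.
    z = np.clip((t - t_start) / (t_end - t_start), 0.0, 1.0)
    # Square-root shape: fast initial increase, converging to the close-up level.
    return g_far + (g_close - g_far) * np.sqrt(z)

# Example timing matching the experiment described below: 5 s far shot,
# 10 s zoom-in, 11 s close-up.
t = np.linspace(0.0, 26.0, 2601)
g_front = front_gain_trajectory(t, t_start=5.0, t_end=15.0)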
Experiments have been performed to demonstrate the performance of the AZ algorithm. In both experiments, the front sound source is a male speech signal corresponding to a camera recording that consists of a far shot phase (5 s), a zoom-in phase (10 s), and a close-up phase (11 s). In addition, the sound field consists of diffuse babble noise and a localized interfering music source at θI=90 deg. In the first simulation, no back sound source is present, while in the second simulation, a female speech signal is present in the back direction (θB=180 deg).
A 3-microphone array was used, employing microphones 1, 3, and 4 as indicated in
In these embodiments the values of the re-weighting factors were determined empirically in advance, rather than at run-time (as described previously above).
As will be apparent to those skilled in the art, the performance of the method depends in part upon the accuracy to which the reweighting factors can be estimated. The greater the accuracy, the better the performance of the manipulation will be.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
For example, it is possible to operate the invention in an embodiment wherein different blocks are used to generate the auxiliary signals. The exemplary blocks described above (fixed or adaptive beamforming, or adaptive spectral modification) can be replaced or supplemented by other methods. Essentially, the auxiliary signal calculation should be such that it exploits the diversity of the individual sound sources in the sound scene. When multiple microphones are used, then exploiting spatial diversity is often the most straightforward option—and this is exploited by the beamformers in the embodiments described above. However, different kinds of diversity could equally be exploited, for example: diversity in the time domain (if not all of the sound sources are concurrently active); diversity in statistics (which could lead to the use of Wiener filtering, independent component analysis, and so on); or diversity in the degree of (non-)stationarity. The optimal choice of auxiliary signal generator will vary according to the application and the characteristics of the audio environment.
The ordering of the blocks described in embodiments herein and shown in the drawings is also not limiting on the scope of the invention. Blocks may be eliminated, re-ordered or duplicated.
Likewise, although the embodiments described herein have concentrated on monaural or stereo implementation, the invention can of course be implemented with a greater number of audio output signals than just one or two. Those skilled in the art will be readily able to generalise from the description above, to provide an arbitrary number of desired outputs. This may be useful, for example, for multi-channel or surround-sound audio applications.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
Number        Date        Country   Kind
10012343.9    Sep 2010    EP        regional
10275102.1    Sep 2010    EP        regional