This application claims benefit of priority to Korean Patent Application No. 10-2023-0055999 filed Apr. 28, 2023, the contents of which are incorporated herein by reference in their entirety.
The present invention relates to a beamforming device.
A sound input signal input through a microphone may include not only a target speech required for speech recognition but also noise that interferes with speech recognition. Various studies are being conducted to improve speech recognition performance by removing the noise from the sound input signal and extracting only the desired target speech.
The present invention provides a beamforming device capable of more accurately extracting a target speech signal from an input signal by estimating a speech existence probability, corresponding to the probability that the target speech signal exists, based on an input vector, and using it to provide a steering vector and a weight vector.
According to an embodiment of the present invention, a beamforming device may include a probability estimation unit, a steering vector unit, and a beamforming unit. The probability estimation unit may estimate a speech existence probability corresponding to a probability that a target speech signal exists based on an input vector. The steering vector unit may provide an estimated steering vector according to the speech existence probability and the input vector. The beamforming unit may calculate a weight vector based on the speech existence probability, the input vector, and the estimated steering vector to provide an output vector.
In an embodiment, the speech existence probability may be determined according to a target speech signal spatial covariance matrix for the target speech signal included in the input vector.
In an embodiment, the target speech signal spatial covariance matrix for the target speech signal included in the input vector may be calculated according to a noise spatial covariance matrix.
In an embodiment, the noise spatial covariance matrix for noise included in the input vector may be calculated according to a noise spatial covariance matrix estimate of the previous frame, i.e., the frame preceding the current frame.
In an embodiment, a noise spatial covariance inverse matrix for the noise included in the input vector may be calculated according to a variance-weighted spatial covariance inverse matrix in the previous frame.
In an embodiment, an estimated time-varying variance included in the noise spatial covariance inverse matrix may be calculated by weighted-averaging a time-varying variance in the previous frame.
In an embodiment, the beamforming device may further include a probability providing unit. The probability providing unit may provide the speech existence probability based on the target speech signal spatial covariance matrix.
In an embodiment, the beamforming device may further include a mask unit. The mask unit may provide a target speech mask according to the speech existence probability.
In an embodiment, the estimated steering vector may be determined according to a re-estimated time-varying variance calculated based on the target speech mask.
In an embodiment, the weight vector may be determined according to the re-estimated time-varying variance calculated based on the target speech mask.
In an embodiment, the variance-weighted spatial covariance inverse matrix may be determined according to the re-estimated time-varying variance calculated based on the target speech mask.
In an embodiment, the time-varying variance may be determined according to power of an output signal calculated based on the target speech mask.
In an embodiment, the beamforming device may further include a determination unit. The determination unit may determine whether a diagonal component of the target speech signal spatial covariance matrix estimate is a negative number.
In an embodiment, when the diagonal component of the target speech signal spatial covariance matrix estimate is negative, the target speech mask of the current frame may be the same as the target speech mask of the previous frame, and the estimated steering vector of the current frame may be the same as the estimated steering vector of the previous frame.
In an embodiment, when the beamforming device operates in a single channel, the input vector may be configured by changing the frame and frequency based on the current frame and a reference frequency.
In an embodiment, the input vector may be composed of a portion of the input vector.
In addition to the technical problems of the present invention described above, other features and advantages of the present invention will be described below, or may be clearly understood by those skilled in the art from such description.
In this specification, in adding reference numerals to the components of the drawings, it should be noted that like reference numerals designate like components even when the components are shown in different drawings.
Meanwhile, the meanings of the terms used in the present specification should be understood as follows.
Singular expressions should be understood as including plural expressions unless the context clearly indicates otherwise, and the scope of rights should not be limited by these terms.
Also, it should be understood that terms such as “include” and “have” do not preclude the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Hereinafter, preferred embodiments of the present invention designed to solve the above problems will be described in detail with reference to the accompanying drawings.
Referring to
In addition, the speech existence probability (SPP) may be defined as the posterior probability of the existence of the target speech signal TSS in the input vector X at time t and frequency f, and may be expressed as [Equation 1] below using Bayes' rule.
Here, pt,f may be the speech existence probability, P(Ht,f(s)|xt,f) may be the posterior probability that the target speech signal exists in the input vector, and Λt,f may be a generalized likelihood ratio. The generalized likelihood ratio may be expressed as [Equation 2] below.
Here, P(Ht,f(n)) may be the prior probability that the target speech signal does not exist and may be set to a constant between 0 and 1, P(xt,f|Ht,f(s)) may be the likelihood when the target speech signal exists in the input vector, and P(xt,f|Ht,f(n)) may be the likelihood when the target speech signal does not exist in the input vector.
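The bodies of [Equation 1] and [Equation 2] are not reproduced in this text. From the definitions above, a plausible reconstruction (an editorial sketch consistent with the stated terms, not the official drawing) is:

```latex
% [Equation 1] (reconstruction): SPP as a posterior via Bayes' rule
p_{t,f} = P\!\left(H^{(s)}_{t,f} \mid \mathbf{x}_{t,f}\right)
        = \frac{\Lambda_{t,f}}{1 + \Lambda_{t,f}}

% [Equation 2] (reconstruction): generalized likelihood ratio
\Lambda_{t,f} = \frac{P\!\left(H^{(s)}_{t,f}\right)\,
                      P\!\left(\mathbf{x}_{t,f} \mid H^{(s)}_{t,f}\right)}
                     {P\!\left(H^{(n)}_{t,f}\right)\,
                      P\!\left(\mathbf{x}_{t,f} \mid H^{(n)}_{t,f}\right)}
```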
According to an embodiment, the speech existence probability SPP may be determined according to a target speech signal spatial covariance matrix TGM for the target speech signal TSS included in the input vector X. Summarizing [Equation 1] above, it may be expressed as [Equation 3] below.
Here, Rt,fn may be a noise spatial covariance matrix, and Rt,fs may be the target speech signal spatial covariance matrix.
According to an embodiment, the target speech signal spatial covariance matrix TGM for the target speech signal TSS included in the input vector X may be calculated according to the noise spatial covariance matrix. For example, the target speech signal spatial covariance matrix TGM for the target speech signal TSS may be expressed as [Equation 4] below:
Here, Rt,fs may be the target speech signal spatial covariance matrix, Rt,fn may be the noise spatial covariance matrix, and Rt,fx may be the spatial covariance matrix for the input vector. The spatial covariance matrix for the input vector X may be expressed as [Equation 5] below.
Here, xt,f may be the input vector, Rt-1,fx may be the spatial covariance matrix for the input vector in the previous frame, Γtx may be a weight for normalizing the spatial covariance matrix for the input vector, and γ may be a forgetting factor. Here, the forgetting factor may be a constant that may have a value between 0 and 1.
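As a concrete illustration of the recursive computation described for the spatial covariance matrix of the input vector, the following Python sketch performs one frame update; the variable names and the exact normalization are assumptions consistent with the terms listed above, not a verbatim transcription of [Equation 5].

```python
import numpy as np

def update_input_scm(R_prev, Gamma_prev, x, gamma=0.95):
    """One recursive update of the input spatial covariance matrix.

    Assumed form (sketch): the previous estimate, weighted by the
    forgetting factor gamma, is combined with the outer product of the
    current input vector and renormalized by the running weight Gamma.
    """
    Gamma = gamma * Gamma_prev + 1.0  # running normalization weight
    R = (gamma * Gamma_prev * R_prev + np.outer(x, x.conj())) / Gamma
    return R, Gamma
```

On the first frame (with a zero initial estimate and zero initial weight), the update reduces to the outer product of the current input vector.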
According to an embodiment, the noise spatial covariance matrix for noise included in the input vector X may be calculated according to the noise spatial covariance matrix estimate of the previous frame, i.e., the frame preceding the current frame. For example, the noise spatial covariance matrix may be expressed as [Equation 9] below.
Here, Rt-1,fn may be the noise spatial covariance matrix estimate of the previous frame, {circumflex over (Γ)}t,fn may be the estimated weight for normalizing the noise spatial covariance matrix, Γt-1,fn may be the weight for normalizing the noise spatial covariance matrix in the previous frame, {circumflex over (λ)}t,f may be the estimated time-varying variance, xt,f may be the input vector, and γ may be the forgetting factor.
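The body of the equation referenced just above is not reproduced in this text. From the terms listed here and from [Equation 6], one plausible reconstruction (an editorial sketch, not the official drawing) of the recursive noise spatial covariance update is:

```latex
% Reconstruction (sketch): recursive noise spatial covariance update
\hat{\mathbf{R}}^{n}_{t,f}
  = \frac{\gamma\,\Gamma^{n}_{t-1,f}\,\mathbf{R}^{n}_{t-1,f}
        + \mathbf{x}_{t,f}\mathbf{x}^{H}_{t,f}\,/\,\hat{\lambda}_{t,f}}
         {\hat{\Gamma}^{n}_{t,f}},
\qquad
\hat{\Gamma}^{n}_{t,f} = \gamma\,\Gamma^{n}_{t-1,f} + \frac{1}{\hat{\lambda}_{t,f}}
```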
According to an embodiment, the noise spatial covariance inverse matrix for the noise included in the input vector X may be calculated according to the variance-weighted spatial covariance inverse matrix in the previous frame. For example, the noise spatial covariance inverse matrix may be expressed as [Equation 5] below.
Here, Ψt-1,f may be the variance-weighted spatial covariance inverse matrix in the previous frame, {circumflex over (λ)}t,f may be the estimated time-varying variance, and γ may be the forgetting factor. {circumflex over (Γ)}t,fn is the estimated weight for normalization of the noise spatial covariance matrix and may be expressed as [Equation 6] below.
{circumflex over (Γ)}t,fn=γΓt-1,fn+1/{circumflex over (λ)}t,f [Equation 6]
Here, Γt-1,fn may be a weight for normalizing the noise spatial covariance inverse matrix in the previous frame, {circumflex over (λ)}t,f may be the estimated time-varying variance, and γ may be the forgetting factor.
According to an embodiment, the estimated time-varying variance included in the noise spatial covariance inverse matrix may be calculated by weighted-averaging the time-varying variance in the previous frame. For example, the estimated time-varying variance may be expressed as [Equation 7] below.
Here, {circumflex over (λ)}t,f may be the estimated time-varying variance, λt-1 may be the time-varying variance in the previous frame, β may be a constant between 0 and 1, and ∈f may be a constant greater than 0. |Ŷt,f|2 may be the power of the estimated output signal, and may be expressed as [Equation 8] below.
Here, wt-1,r may be the weight vector in the previous frame, (⋅)H may be the Hermitian transpose, and Nf may be the number of adjacent frequencies. The number of adjacent frequencies may be a constant greater than zero.
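The computation described for [Equation 7] and [Equation 8] can be sketched in Python as follows; the exact smoothing form (an exponentially weighted average floored at ∈f, with the output power averaged over Nf adjacent frequencies using the previous-frame weight vectors) is an assumption consistent with the terms listed above.

```python
import numpy as np

def estimate_variance(w_prev, x_adj, lam_prev, beta=0.9, eps_f=1e-6):
    """Sketch: estimated time-varying variance from the previous
    variance and the power of the estimated output signal.

    w_prev : (N_f, M) previous-frame weight vectors, one per adjacent frequency
    x_adj  : (N_f, M) input vectors at the N_f adjacent frequencies
    """
    # |Y_hat|^2: output power w^H x per adjacent frequency, then averaged
    Y = np.einsum('fm,fm->f', w_prev.conj(), x_adj)
    power = np.mean(np.abs(Y) ** 2)
    # weighted average with the previous time-varying variance, floored at eps_f
    return max(beta * lam_prev + (1.0 - beta) * power, eps_f)
```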
Referring to
In addition, according to an embodiment, the beamforming device 10 may further include a mask unit 210. The mask unit 210 may provide a target speech mask MSK according to the speech existence probability SPP. For example, when it is unclear whether the target speech signal TSS is present, the speech existence probability SPP may have a value around 0.5. In this case, to extract the frames t and frequencies f where the target speech signal TSS clearly exists, the target speech mask MSK as illustrated in [Equation 9] below may be used.
Here, ηk may be a threshold value, a constant between 0 and 1 (e.g., 0.8), and ∈p may be a lower limit value, a constant between 0 and 1 (e.g., 0.1).
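Since the body of [Equation 9] is not reproduced in this text, the following Python sketch shows one plausible hard-thresholded form of the mask (an assumption consistent with the threshold ηk and lower limit ∈p described above): bins whose speech existence probability clearly exceeds the threshold pass, and the rest are floored at the lower limit.

```python
import numpy as np

def target_speech_mask(spp, eta=0.8, eps_p=0.1):
    """Sketch of a target speech mask: 1.0 where the speech existence
    probability is at least the threshold eta, eps_p elsewhere."""
    spp = np.asarray(spp, dtype=float)
    return np.where(spp >= eta, 1.0, eps_p)
```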
The steering vector unit 200 may provide an estimated steering vector CSV according to the speech existence probability SPP and the input vector X. In one embodiment, the estimated steering vector CSV may be determined according to the re-estimated time-varying variance calculated based on the target speech mask MSK. For example, the re-estimated time-varying variance may be expressed as [Equation 10] below.
Here, {tilde over (λ)}t,f may be the re-estimated time-varying variance, λt-1 may be the time-varying variance in the previous frame, β may be a constant between 0 and 1, and ∈f may be a constant greater than 0. |{tilde over (Y)}t,f|2 may be the power of the re-estimated output signal, and may be expressed as [Equation 11] below.
Here, t,r may be the target speech mask. According to the re-estimated time-varying variance, the noise spatial covariance matrix estimate in the current frame may be expressed according to [Equation 12] below.
Here, Rt,fn may be the noise spatial covariance matrix estimate in the current frame, Rt-1,fn may be the noise spatial covariance matrix estimate in the previous frame, Γt-1,fn may be the weight for normalizing the noise spatial covariance matrix in the previous frame, {tilde over (λ)}t,f may be the re-estimated time-varying variance, xt,f may be the input vector, γ may be the forgetting factor, and Γt,fn may be the weight for normalizing the noise spatial covariance matrix in the current frame. The weight for normalizing the noise spatial covariance matrix in the current frame may be expressed according to [Equation 13] below.
Here, Γt,fn may be the weight for normalizing the noise spatial covariance matrix in the current frame, Γt-1,fn may be the weight for normalizing the noise spatial covariance matrix in the previous frame, and {tilde over (λ)}t,f may be the re-estimated time-varying variance. In addition, the target speech signal spatial covariance matrix estimate TGME may be expressed according to [Equation 14] below.
Here, Rt,fs may be the target speech signal spatial covariance matrix estimate, Rt,fx may be the spatial covariance matrix for the input vector, and Rt,fn may be the noise spatial covariance matrix estimate in the current frame. The estimated steering vector CSV may be calculated based on an eigen vector corresponding to a maximum eigen value of the target speech signal spatial covariance matrix estimate TGME, and may be calculated as [Equation 15] according to a power method.
Here, {tilde over (h)}t,f may be the estimated steering vector of the previous frame,
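The power-method step described above can be sketched in Python as follows; the single-iteration-per-frame form and the normalization are assumptions consistent with the description, not a verbatim transcription of [Equation 15].

```python
import numpy as np

def update_steering_vector(R_s, h_prev):
    """One power-method iteration: multiply the previous steering
    vector estimate by the target speech signal spatial covariance
    matrix estimate and renormalize, driving the estimate toward the
    eigenvector of the maximum eigenvalue."""
    h = R_s @ h_prev
    return h / np.linalg.norm(h)
```

Repeated across frames, the estimate converges to the dominant eigenvector of the target speech signal spatial covariance matrix estimate.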
The beamforming unit 300 may calculate the weight vector based on the speech existence probability SPP, the input vector X, and the estimated steering vector CSV to provide an output vector Y. In one embodiment, the weight vector may be determined according to the re-estimated time-varying variance calculated based on the target speech mask MSK. For example, the weight vector may be expressed as [Equation 16] and [Equation 17] below.
Here, wt,f may be the weight vector, Yt,f may be the output vector, and Ψt,f may be the variance-weighted spatial covariance inverse matrix.
In one embodiment, the variance-weighted spatial covariance inverse matrix may be determined according to the re-estimated time-varying variance calculated based on the target speech mask (MSK). The variance-weighted spatial covariance inverse matrix may be expressed as [Equation 17] below.
Here, {tilde over (λ)}t,f may be the re-estimated time-varying variance.
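Since the bodies of [Equation 16] and [Equation 17] are not reproduced here, the following Python sketch shows a common minimum-variance distortionless-response style weight built from the quantities named above (the exact form is an assumption): the variance-weighted spatial covariance inverse matrix Ψ, the estimated steering vector h, and the output Y=wHx.

```python
import numpy as np

def beamform(Psi, h, x):
    """Sketch: MVDR-style weight vector from the variance-weighted
    spatial covariance inverse matrix Psi and steering vector h,
    normalized so that the target direction is passed undistorted."""
    w = Psi @ h / (h.conj() @ Psi @ h)
    y = w.conj() @ x  # output: w^H x
    return w, y
```

By construction the weight satisfies the distortionless constraint wHh=1 toward the estimated steering direction.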
According to an embodiment, the time-varying variance may be determined according to the power of the output signal calculated based on the target speech mask MSK. For example, the time-varying variance may be expressed as [Equation 18] below.
Here, λt-1 may be the time-varying variance in the previous frame, and |
Here, Yt,f may be the output vector and t,r may be the target speech mask.
Referring to
Referring to
According to an embodiment, the input vector X may be composed of a portion of the input vector X. For example, in the input vector X, only the frame may be configured differently based on the same frequency f, or only the frequency may be configured differently at the same frame t. In addition, as illustrated in
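For illustration of single-channel operation, the following Python sketch stacks spectrogram bins at neighboring frames and frequencies into one input vector in place of multiple microphone channels; the neighborhood shape and variable names are hypothetical choices, not taken from the drawings.

```python
import numpy as np

def build_single_channel_vector(S, t, f, n_frames=2, n_freqs=2):
    """Sketch: form an input vector for bin (t, f) of a single-channel
    STFT spectrogram S of shape (T, F) by stacking the current and
    past n_frames frames at the n_freqs neighboring frequencies."""
    frames = range(max(t - n_frames, 0), t + 1)  # current and past frames
    freqs = range(max(f - n_freqs, 0), min(f + n_freqs + 1, S.shape[1]))
    return np.array([S[i, j] for i in frames for j in freqs])
```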
According to the beamforming device 10 of the present invention, it is possible to more accurately extract the target speech signal TSS from the input signal by estimating the speech existence probability SPP, corresponding to the probability that the target speech signal TSS exists, based on the input vector X, and using it to provide the steering vector and the weight vector.
According to the present invention as described above, there are the following effects.
According to the beamforming device of the present invention, it is possible to more accurately extract the target speech signal from the input signal by estimating the speech existence probability, corresponding to the probability that the target speech signal exists, based on the input vector, and using it to provide the steering vector and the weight vector.
In addition, other features and advantages of the present invention may be newly understood through the embodiments of the present invention.