The present invention relates to a speech processing apparatus, a noise suppression apparatus, a speech processing method, and a recording medium.
Model-based noise suppression techniques for suppressing noise using a speech model which models the features of speech have been developed. A model-based noise suppression method is a method for suppressing noise with high accuracy by referring to speech information of a speech model and is disclosed, for example, in Patent Literature 1, Non-Patent Literature 1, and Non-Patent Literature 2.
For example, Patent Literature 1 discloses a noise suppression system which uses a speech model. The noise suppression system disclosed in Patent Literature 1 obtains temporarily estimated speech in the spectral domain from an input signal and an average noise spectrum and corrects the temporarily estimated speech using a standard pattern. The noise suppression system calculates a noise reduction filter from the corrected temporarily estimated speech and the average noise spectrum and calculates estimated speech from the noise reduction filter and the input signal spectrum.
The model-based noise suppression method disclosed in Non-Patent Literature 1 cannot suppress noise correctly when there is a mismatch between acoustic power of the input signal and acoustic power information of the speech model. Due to this, the technique of Non-Patent Literature 1 is not robust to variation in the acoustic power of the input signal.
On the other hand, a model-based noise suppression method disclosed in Patent Literature 1 and Non-Patent Literature 2 estimates acoustic power from an input signal. Therefore, the model-based noise suppression method disclosed in Patent Literature 1 and Non-Patent Literature 2 is robust to a mismatch between the power of the input signal and the power information of the speech model.
Acoustic power γ estimated from the input signal is represented by Equation (1).
Here, Sin(k) (k=0, . . . , K−1, where k is a frequency bin and K is the Nyquist frequency) is the spectrum of the input signal.
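Equation (1) itself is not reproduced in this text. A plausible form, assuming the conventional definition of spectral power as the mean squared magnitude over all frequency bins, would be:

```latex
\gamma = \frac{1}{K} \sum_{k=0}^{K-1} \left| S_{\mathrm{in}}(k) \right|^{2}
```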
However, in acoustic power estimation using Equation (1), the acoustic power contained in the input signal cannot be estimated correctly when the input signal includes noise or when the noise in the input signal has already been suppressed.
The present invention has been made in view of the above-described issues, and an object thereof is to provide a technique of estimating the acoustic power included in an input signal with high accuracy.
A speech processing apparatus according to one aspect of the present invention includes: expectation value calculation means for calculating, using an input signal spectrum and a speech model that models a feature quantity of speech, a spectrum expectation value which is an expectation value of a spectrum of an acoustic component included in the input signal spectrum; and acoustic power estimation means for estimating an acoustic power of the acoustic component of the input signal spectrum based on the input signal spectrum and the spectrum expectation value.
A noise suppression apparatus according to one aspect of the present invention includes: noise estimation means for calculating estimated noise from an input signal; a speech processing apparatus that estimates an expectation value of a spectrum of an acoustic component included in a spectrum of the input signal and an acoustic power of the acoustic component from the spectrum of the input signal; suppression gain calculation means for calculating a suppression gain using the expectation value of the spectrum of the acoustic component, the acoustic power, and the spectrum of the estimated noise; and noise suppression means for suppressing noise in the input signal using the suppression gain and the spectrum of the input signal, wherein the speech processing apparatus includes: expectation value calculation means for calculating, using the spectrum of the input signal and a speech model that models a feature quantity of speech, an expectation value of the spectrum of the acoustic component; and acoustic power estimation means for estimating the acoustic power based on the spectrum of the input signal and the expectation value of the spectrum of the acoustic component.
A speech processing method according to one aspect of the present invention includes: calculating a spectrum expectation value which is an expectation value of a spectrum of an acoustic component included in an input signal spectrum using the input signal spectrum and a speech model that models a feature quantity of speech; and estimating an acoustic power of the acoustic component of the input signal spectrum based on the input signal spectrum and the spectrum expectation value.
A computer program for realizing the above-described apparatuses or method by a computer and a computer-readable recording medium storing the computer program are also included in the scope of the present invention.
According to the present invention, it is possible to estimate the acoustic power included in an input signal with high accuracy.
Hereinafter, a first example embodiment of the present invention will be described with reference to the drawings.
(Configuration of Speech Processing Apparatus 10)
A spectrum Sin(k) (k=0, . . . , K−1, where k is a frequency bin and K is the Nyquist frequency) calculated from one block of a digital signal is input to the speech processing apparatus 10. Hereinafter, this spectrum Sin(k) will be referred to as an input spectrum or an input signal spectrum. Moreover, the speech processing apparatus 10 outputs the power (acoustic power) γ (scalar quantity) of an acoustic component included in the input spectrum.
(Storage 11)
A speech model that models a feature quantity of speech is stored in the storage 11. Specifically, a Gaussian mixture model (GMM) is stored in the storage 11.
The GMM is trained using, as learning data, feature quantities (in the present example embodiment, M-dimensional vectors, where M is a natural number) extracted from speech data collected in advance. Specifically, the GMM is made up of a plurality of Gaussian distributions, each of which has parameters including a weight, a mean vector, and a variance matrix.
Hereinafter, N is the number of mixtures of the GMM (the number of Gaussian distributions that form the GMM, where N is a natural number), wi is the weight of the i-th Gaussian distribution, μi (∈ R^M, where R^M is the M-dimensional real vector space) is its mean vector, and Σi (∈ R^(M×M)) is its variance matrix, for i=0, . . . , N−1. Hereinafter, the parameters of the i-th Gaussian distribution will be collectively referred to as (wi, μi, Σi).
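As a concrete illustration only (not part of the described apparatus), the GMM parameters introduced above could be held as in the following sketch; the class and field names are assumptions made for this illustration.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GMMSpeechModel:
    """Parameters (w_i, mu_i, Sigma_i) of an N-mixture GMM over M-dimensional features."""
    weights: np.ndarray  # shape (N,), mixture weights w_i summing to 1
    means: np.ndarray    # shape (N, M), mean vectors mu_i
    covs: np.ndarray     # shape (N, M, M), variance (covariance) matrices Sigma_i

    @property
    def num_mixtures(self) -> int:
        return self.weights.shape[0]
```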
The feature quantity of the speech data (hereinafter referred to as learning data) used for learning the GMM is a feature quantity called a mel-spectrum or a mel-cepstrum. However, the feature quantity used in the present example embodiment is not limited to these examples. Moreover, the feature quantity may further include dynamic components such as a first-order dynamic component and a second-order dynamic component.
The speech model stored in the storage 11 may be a hidden Markov model (HMM).
(Expectation Value Calculation Unit 12)
The expectation value calculation unit 12 calculates, using the input spectrum Sin(k) input to the speech processing apparatus 10 and the GMM stored in the storage 11, an expectation value ŜE(k) (hereinafter referred to as a spectrum expectation value) of the spectrum of the acoustic component included in the input spectrum Sin(k). Here, the hat (̂) indicates an estimated value (expectation value). In the present description, the hat symbol is written to the right of the character it modifies, whereas in the equations it is placed above that character.
Specifically, in order to calculate the spectrum expectation value, the expectation value calculation unit 12 first converts the input spectrum Sin(k) to a feature quantity vector sin (∈ R^M) (hereinafter referred to as an input feature quantity). This input feature quantity is of the same type as the feature quantity of the learning data of the GMM. Moreover, the expectation value calculation unit 12 inversely converts the mean vector μi of the GMM to a logarithmic spectrum Sμ,i(k) (k=0, . . . , K−1) (hereinafter referred to as a mean logarithmic spectrum).
The expectation value calculation unit 12 calculates the spectrum expectation value ŜE(k) according to Equation (2) using the calculated input feature quantity sin, the mean logarithmic spectra Sμ,i(k), and the parameters (wi, μi, Σi) of the GMM.
Here, N(x;μ,Σ) can be represented by Equation (3).
Here, m is the number of dimensions of a feature quantity vector.
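Equations (2) and (3) are not reproduced in this text. The following is a minimal sketch of one natural reading, in which the spectrum expectation value is a posterior-weighted combination of the mean logarithmic spectra converted back to the linear domain; the exact form of Equation (2), the feature conversion, and all helper names are assumptions. Equation (3) is taken to be the standard multivariate Gaussian density.

```python
import numpy as np

def gaussian_density(x, mean, cov):
    """Multivariate Gaussian density N(x; mu, Sigma), the standard form assumed for Equation (3)."""
    m = x.shape[0]
    diff = x - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt(((2.0 * np.pi) ** m) * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ inv @ diff) / norm

def spectrum_expectation(s_in_feat, mean_log_spectra, gmm):
    """Posterior-weighted combination of the mean logarithmic spectra (one reading of Equation (2)).

    s_in_feat:        input feature quantity vector (length M) computed from S_in(k)
    mean_log_spectra: array (N, K) of the mean logarithmic spectra S_mu_i(k)
    gmm:              GMMSpeechModel holding (w_i, mu_i, Sigma_i)
    """
    likelihoods = np.array([
        w * gaussian_density(s_in_feat, mu, cov)
        for w, mu, cov in zip(gmm.weights, gmm.means, gmm.covs)
    ])
    posteriors = likelihoods / np.sum(likelihoods)   # posterior weight of each mixture given s_in
    log_expect = posteriors @ mean_log_spectra       # weighted mean in the logarithmic domain
    return np.exp(log_expect)                        # back to the linear spectrum domain
```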
The expectation value calculation unit 12 supplies the calculated spectrum expectation value ŜE(k) to the acoustic power estimation unit 13.
(Acoustic Power Estimation Unit 13)
The acoustic power estimation unit 13 estimates the acoustic power γ of the acoustic component of the input spectrum Sin(k) based on the input spectrum Sin(k) input to the speech processing apparatus 10 and the spectrum expectation value ŜE(k) supplied from the expectation value calculation unit 12. This acoustic power γ is the output of the speech processing apparatus 10.
Specifically, the acoustic power estimation unit 13 sets, as the acoustic power γ, the power of the spectrum expectation value ŜE(k) controlled such that the square error between the spectrum expectation value ŜE(k) and the input spectrum Sin(k) is minimized. The acoustic power estimation unit 13 estimates the acoustic power γ by calculating it according to Equation (4).
Alternatively, the acoustic power estimation unit 13 may calculate the acoustic power γ using Equation (5).
In Equations (4) and (5), η is a coefficient that determines the magnification of the acoustic power, and an experimentally obtained value may be used. Moreover, Ω indicates the set of frequency bins k to be used for the addition, and |Ω| indicates the number of elements of the set Ω. The set Ω is derived using Equation (6).
That is, the set Ω is the set of frequency bins k in which the spectrum expectation value ŜE(k) is equal to or larger than a predetermined value θ. Several variations may be employed in the calculation of θ; these variations are represented by Equations (7) to (9).
Here, the set Ω when Equation (7) is used is the set of frequency bins k in which the spectrum expectation value ŜE(k) takes its maximum. The set Ω when Equation (8) is used is the set of frequency bins in which the spectrum expectation value ŜE(k) exceeds the arithmetic mean of the spectrum expectation values. The set Ω when Equation (9) is used is the set of frequency bins in which the spectrum expectation value ŜE(k) exceeds the geometric mean of the spectrum expectation values.
Here, α in Equations (8) and (9) is a scalar quantity given in advance. The scalar quantity α may be an experimentally derived value. Furthermore, the set Ω may be the top P frequency bins of the spectrum expectation value ŜE(k). The "top P frequency bins of the spectrum expectation value ŜE(k)" mean the P frequency bins obtained when the spectrum expectation values are arranged in descending order.
In Equation (6), the set Ω is calculated by comparing the spectrum expectation value ŜE(k) with θ. However, the set Ω may instead be calculated by comparing θ with a linear combination of the spectrum expectation value ŜE(k) and the input spectrum Sin(k).
In this manner, the acoustic power estimation unit 13 calculates the acoustic power γ from the frequency components k in which the spectrum expectation value ŜE(k), or a combination of the spectrum expectation value ŜE(k) and the input spectrum Sin(k), is equal to or larger than the predetermined value θ. Due to this, the acoustic power estimation unit 13 calculates the acoustic power γ using only the frequency components having the predetermined value θ or more. Therefore, the speech processing apparatus 10 according to the present example embodiment can estimate the acoustic power γ with higher accuracy.
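Equations (4) to (9) are not reproduced in this text. The sketch below illustrates one reading consistent with the description above: the set Ω is selected by a threshold θ (maximum, scaled arithmetic mean, scaled geometric mean, or a top-P rule), and γ is the least-squares scale of ŜE(k) against Sin(k) over Ω. The exact equations, the placement of α, and the normalization are assumptions.

```python
import numpy as np

def select_bins(s_expect, mode="arith_mean", alpha=1.0, top_p=8):
    """Return the set Omega of frequency bins whose expectation value reaches the threshold theta."""
    if mode == "max":            # Equation (7)-like: bins where S_E(k) takes its maximum
        theta = np.max(s_expect)
    elif mode == "arith_mean":   # Equation (8)-like: alpha times the arithmetic mean of S_E(k)
        theta = alpha * np.mean(s_expect)
    elif mode == "geo_mean":     # Equation (9)-like: alpha times the geometric mean of S_E(k)
        theta = alpha * np.exp(np.mean(np.log(s_expect + 1e-12)))
    elif mode == "top_p":        # top-P bins in descending order of S_E(k)
        return np.argsort(s_expect)[::-1][:top_p]
    else:
        raise ValueError(mode)
    return np.flatnonzero(s_expect >= theta)   # Equation (6)-like comparison with theta

def estimate_acoustic_power(s_in, s_expect, omega, eta=1.0):
    """Scale of S_E(k) minimizing the squared error against S_in(k) over Omega (Equation (4)-like)."""
    s_i, s_e = s_in[omega], s_expect[omega]
    gamma = np.dot(s_i, s_e) / np.dot(s_e, s_e)
    # An Equation (5)-like alternative could instead average S_in(k) / S_E(k) over Omega.
    return eta * gamma
```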
The acoustic power estimation unit 13 may calculate a speech-likelihood value of an input spectrum. In this case, the acoustic power estimation unit 13 may further include a calculation unit that calculates the speech-likelihood value. Moreover, the acoustic power estimation unit 13 may change an acoustic power estimation method according to the value calculated by the calculation unit.
For example, the acoustic power estimation unit 13 may change the value η in Equation (4) or (5) according to the speech-likelihood. For example, the acoustic power estimation unit 13 may increase the value η when the input spectrum is likely to be speech and may set the value η to 0 when the input spectrum is not likely to be speech. Moreover, the acoustic power estimation unit 13 may change, according to the speech-likelihood, the predetermined value (threshold) θ or the value α in Equations (8) and (9), which are the equations that determine the threshold θ. That is, the acoustic power estimation unit 13 may change, based on the speech-likelihood of the input spectrum, the predetermined value θ which is compared with the spectrum expectation value ŜE(k) or with the combination of the spectrum expectation value ŜE(k) and the input spectrum Sin(k). For example, the acoustic power estimation unit 13 may set the threshold θ so as to increase the number of elements of Ω when the input spectrum is likely to be speech and may set the threshold θ so as to decrease the number of elements of Ω when the input spectrum is not likely to be speech.
Here, the "speech-likelihood" may be calculated using the input spectrum and the parameters of a speech model and a noise model prepared in advance. For example, when the speech-likelihood index is L, L is calculated using Equation (10).
Here, (wl, μl, Σl) represents the parameters of each Gaussian distribution when the speech model prepared in advance is a GMM, and (wj, μj, Σj) represents the parameters of each Gaussian distribution when the noise model prepared in advance is a GMM. These parameters may be stored in the storage 11. Moreover, sin is the feature quantity vector of the input spectrum.
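Equation (10) is not reproduced in this text. One plausible form, assuming the index L is the ratio of the speech-model likelihood to the noise-model likelihood of the input feature quantity vector, is sketched below; the ratio form itself is an assumption.

```python
def speech_likelihood_index(s_in_feat, speech_gmm, noise_gmm):
    """Index L comparing the speech-model and noise-model likelihoods of s_in."""
    def gmm_likelihood(x, gmm):
        return sum(
            w * gaussian_density(x, mu, cov)   # gaussian_density as in the earlier sketch
            for w, mu, cov in zip(gmm.weights, gmm.means, gmm.covs)
        )
    p_speech = gmm_likelihood(s_in_feat, speech_gmm)
    p_noise = gmm_likelihood(s_in_feat, noise_gmm)
    return p_speech / (p_noise + 1e-30)        # large L: speech-like, small L: noise-like
```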
When the index L indicating the speech-likelihood is large (for example, larger than a predetermined value), the input spectrum is likely to be speech. When the index L is small (for example, smaller than another predetermined value), the input spectrum is not likely to be speech. Therefore, when the input spectrum is likely to be speech (that is, when the value L is large), the acoustic power estimation unit 13 sets the threshold θ to a smaller value so as to increase the number of elements of Ω. Similarly, when the input spectrum is not likely to be speech (that is, when the value L is small), the acoustic power estimation unit 13 sets the threshold θ to a larger value so as to decrease the number of elements of Ω. By setting the value θ in this manner, the acoustic power estimation unit 13 can calculate the acoustic power γ with higher accuracy.
The acoustic power estimation unit 13 may derive the acoustic power according to Equation (11) using the index L of the speech-likelihood.
Here, γ1 and γ2 may be calculated based on Equation (4) or (5) using sets Ω and values η obtained with different values of θ. Moreover, φ1 and φ2 may be experimentally obtained values satisfying φ1>φ2.
The values γ1 and γ2 may be predetermined values (first acoustic power and second acoustic power). Moreover, the acoustic power estimation unit 13 may set the first acoustic power γ1 and/or the second acoustic power γ2 to be γ1>γ2. In this manner, the acoustic power estimation unit 13 can estimate the acoustic power γ of the input spectrum Sin(k) with higher accuracy by setting the acoustic power γ to the second acoustic power γ2 which is the smaller value when the index L indicating the speech-likelihood is small.
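Equation (11) is not reproduced in this text. Given the description of φ1 > φ2 and the first and second acoustic powers, one plausible piecewise form is sketched below as an assumption only.

```python
def acoustic_power_from_index(L, gamma_1, gamma_2, phi_1, phi_2):
    """Piecewise selection of the acoustic power from the speech-likelihood index L.

    Assumes phi_1 > phi_2 and gamma_1 > gamma_2: a large L (speech-like input)
    selects the larger power, a small L selects the smaller one.
    """
    if L >= phi_1:
        return gamma_1
    if L <= phi_2:
        return gamma_2
    # Between the two thresholds, interpolate (one possible choice).
    ratio = (L - phi_2) / (phi_1 - phi_2)
    return gamma_2 + ratio * (gamma_1 - gamma_2)
```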
(Hardware Configuration of Speech Processing Apparatus 10)
Next, a hardware configuration of the speech processing apparatus 10 will be described with reference to
The CPU 1 runs an operating system and controls the speech processing apparatus 10 according to the present example embodiment. Moreover, the CPU 1 reads a program and data from a recording medium attached to a drive device, for example, and writes them to the memory 3.
The CPU 1 functions as a part of the expectation value calculation unit 12 and the acoustic power estimation unit 13 of the present example embodiment, for example, and executes various processes based on the program written to the memory 3, for example.
The storage device 4 is, for example, an optical disc, a flexible disk, a magneto-optical disc, an externally attached hard disk, a semiconductor memory, or the like. The storage device 4 includes a nonvolatile storage medium, and a program is stored in the nonvolatile storage medium. The program may be downloaded, for example, from an external computer (not illustrated) connected to a communication network via the communication I/F 2. The storage device 4 functions as the storage 11 of the present example embodiment, for example.
The input device 5 is implemented by a touch sensor or the like, for example, and is used for inputting operations. The output device 6 is implemented by a display, for example, and is used for checking output.
As described above, the speech processing apparatus 10 according to the present example embodiment is implemented by the hardware configuration illustrated in
(Processing of Speech Processing Apparatus 10)
Next, the flow of the processing of the speech processing apparatus 10 will be described with reference to
As illustrated in the figure, the expectation value calculation unit 12 first calculates the spectrum expectation value ŜE(k) using the input spectrum Sin(k) and the speech model stored in the storage 11 (step S31).
Subsequently, the acoustic power estimation unit 13 calculates the acoustic power γ using the input spectrum Sin(k) and the spectrum expectation value ŜE(k) calculated by the expectation value calculation unit 12 (step S32) and ends the process.
(Effects)
The speech processing apparatus 10 according to the present example embodiment makes it possible to estimate the acoustic power included in the input signal with high accuracy.
This is because the expectation value calculation unit 12 calculates the expectation value (the spectrum expectation value ŜE(k)) of the spectrum of the acoustic component included in the input spectrum Sin(k) using the input spectrum Sin(k) and the speech model (GMM) that models the feature quantity of speech. Moreover, the acoustic power estimation unit 13 estimates the acoustic power γ of the acoustic component of the input spectrum Sin(k) based on the input spectrum Sin(k) and the spectrum expectation value ŜE(k).
In this manner, the acoustic power γ estimated by the acoustic power estimation unit 13 is calculated by referring to the spectrum expectation value ŜE(k) calculated from the speech model and the input spectrum Sin(k). Therefore, even when the input signal includes noise or the noise is suppressed, it is possible to calculate the acoustic power γ with high accuracy. Therefore, the speech processing apparatus 10 according to the present example embodiment can calculate the acoustic power γ of the acoustic component included in the input spectrum Sin(k) with high accuracy.
The acoustic power estimation unit 13 of the speech processing apparatus 10 according to the present example embodiment sets, as the acoustic power γ, the power of the spectrum expectation value ŜE(k) controlled such that the error between the spectrum expectation value ŜE(k) and the input spectrum Sin(k) is minimized in a predetermined band where the influence of noise is small. Due to this, the spectrum expectation value ŜE(k) can be controlled to approach the speech spectrum included in the input spectrum Sin(k). Therefore, the speech processing apparatus 10 according to the present example embodiment can estimate the acoustic power included in the input signal with higher accuracy.
Hereinafter, a second example embodiment of the present invention will be described with reference to the drawings. A noise suppression apparatus according to the second example embodiment is based on the model-based noise suppression method disclosed in Non-Patent Literature 1 and uses the acoustic power calculated as in the first example embodiment for calculating a noise suppression gain. For the sake of convenience, components having the same functions as the components included in the drawings described in the first example embodiment will be denoted by the same reference numerals, and the description thereof will not be repeated.
(Configuration of Noise Suppression Apparatus 20)
(Input Signal Acquisition Unit 21)
The input signal acquisition unit 21 acquires (receives) the digital signal input to the noise suppression apparatus 20. This digital signal is also referred to as an input signal. The input signal acquisition unit 21 slices the acquired digital signal into respective frames corresponding to predetermined unit periods and converts the same to spectra.
Specifically, the input signal acquisition unit 21 converts a t-th frame x(t) (∈ R^T, where T is the number of samples included in the frame) (t is a natural number; hereinafter, t is referred to as a frame period) of the sliced digital signal to a spectrum X(t,k) (k=0, . . . , K−1). Hereinafter, this converted spectrum X(t,k) is referred to as an input signal spectrum.
The input signal acquisition unit 21 supplies the converted input signal spectrum X(t,k) to the noise estimation unit 22, the temporary noise suppression unit 23, and the noise suppression unit 25.
Here, the number of samples T included in a frame will be described. For example, when the digital signal is a 16-bit signal with a sampling frequency of 8000 Hz converted according to linear pulse code modulation (linear PCM), the digital signal consists of 8000 sample values per second. In this case, when one frame length is 25 milliseconds, one frame includes 200 sample values. Therefore, T=200.
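As an illustration of the framing and spectrum conversion described above, the following is a sketch only; the frame shift and window choice are assumptions not specified in the text.

```python
import numpy as np

def frames_to_spectra(signal, frame_len=200, frame_shift=100):
    """Slice a digital signal into frames of T samples and convert each frame to a spectrum.

    With an 8000 Hz linear-PCM signal and 25 ms frames, frame_len = T = 200.
    """
    window = np.hamming(frame_len)
    spectra = []
    for start in range(0, len(signal) - frame_len + 1, frame_shift):
        frame = signal[start:start + frame_len] * window
        spectra.append(np.fft.rfft(frame))   # X(t, k), k = 0, ..., K-1
    return np.array(spectra)
```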
Examples of the digital signal acquired by the input signal acquisition unit 21 include (1) a digital signal supplied from a microphone or the like via an A/D converter, (2) a digital signal read from a hard disk, and (3) a digital signal obtained from a communication packet. However, in the present example embodiment, the digital signal is not limited to these examples. Moreover, the digital signal may be a speech signal recorded under a noisy environment or a speech signal which has already been subjected to a noise suppression process.
(Noise Estimation Unit 22)
The noise estimation unit 22 is means for estimating estimated noise from the input signal spectrum. The noise estimation unit 22 receives the input signal spectrum X(t,k) from the input signal acquisition unit 21. The noise estimation unit 22 estimates (calculates) a spectrum N̂(t,k) (where k=0, . . . , K−1) of a noise component included in the received input signal spectrum X(t,k). The spectrum N̂(t,k) of the estimated noise component (estimated noise) will be referred to as an estimated noise spectrum. The noise estimation unit 22 supplies the estimated noise spectrum N̂(t,k) to the temporary noise suppression unit 23 and the suppression gain calculation unit 24.
In the present example embodiment, the noise estimation unit 22 calculates the estimated noise using the existing weighted noise estimation (WiNE). However, calculation of the estimated noise in the noise estimation unit 22 is not limited to this. The noise estimation unit 22 may calculate the estimated noise using a desired method.
In this way, the noise estimation unit 22 can estimate noise included in the input signal. In the present example embodiment, the estimated noise is also referred to as temporary noise.
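WiNE itself is not detailed in this text. As a stand-in for "a desired method" mentioned above, the following sketch shows a simple recursive-averaging noise estimate updated in noise-like frames; this is not WiNE, only an illustrative substitute, and the decision rule and constants are assumptions.

```python
import numpy as np

def update_noise_estimate(noise_est, x_spectrum, alpha=0.95, floor_ratio=1.5):
    """Recursive averaging: update the noise spectrum only when the frame looks noise-dominated."""
    power = np.abs(x_spectrum)
    if noise_est is None:
        return power.copy()
    # Treat the frame as noise-dominated when its power stays close to the current estimate.
    if np.sum(power) < floor_ratio * np.sum(noise_est):
        return alpha * noise_est + (1.0 - alpha) * power
    return noise_est
```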
(Temporary Noise Suppression Unit 23)
The temporary noise suppression unit 23 is means for generating a noise suppression signal in which temporary noise is suppressed from the input signal using the input signal spectrum and the estimated noise spectrum. Specifically, the temporary noise suppression unit 23 receives the input signal spectrum X(t,k) from the input signal acquisition unit 21. Moreover, the temporary noise suppression unit 23 receives the estimated noise spectrum N̂(t,k) from the noise estimation unit 22. The temporary noise suppression unit 23 removes the estimated noise spectrum N̂(t,k) from the input signal spectrum X(t,k) and calculates a temporary noise suppression spectrum Ŝ(t,k) (where k=0, . . . , K−1). A signal including this temporary noise suppression spectrum Ŝ(t,k) is referred to as a noise suppression signal. This noise suppression signal is referred to as a temporarily estimated speech since the noise suppression signal is a signal obtained by suppressing temporary noise.
The temporary noise suppression unit 23 supplies the calculated temporary noise suppression spectrum Ŝ(t,k) to the speech processing apparatus 10.
In the present example embodiment, the temporary noise suppression unit 23 calculates the temporary noise suppression spectrum Ŝ(t,k) using an existing technique (for example, spectral subtraction (SS), Wiener filter (WF), and the like). However, the present example embodiment is not limited to this. The temporary noise suppression unit 23 may calculate the spectrum of the temporarily estimated speech using a desired method. The noise suppression apparatus 20 may omit the processing of the temporary noise suppression unit 23 when a small amount of noise is included in the input signal or the input signal has already been subjected to noise suppression. In this case, the temporary noise suppression spectrum Ŝ(t,k) is the input signal spectrum X(t,k).
In this manner, the temporary noise suppression unit 23 supplies the temporary noise suppression spectrum Ŝ(t,k), in which the temporary noise has been suppressed, so that the speech processing apparatus 10 can use it as the input spectrum Sin(k). In this way, the speech processing apparatus 10 can estimate the acoustic power with higher accuracy.
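As one example of the existing techniques mentioned above, a minimal spectral-subtraction sketch is shown below; the flooring coefficient is an assumption, and this is only one of the possible methods.

```python
import numpy as np

def temporary_noise_suppression(x_spectrum, noise_spectrum, beta=0.05):
    """Spectral subtraction: remove the estimated noise spectrum from the input spectrum.

    A small fraction of the input magnitude (beta) is kept as a spectral floor so that
    the temporary noise suppression spectrum never becomes negative.
    """
    x_mag = np.abs(x_spectrum)
    s_hat = x_mag - np.abs(noise_spectrum)
    return np.maximum(s_hat, beta * x_mag)   # temporary noise suppression spectrum S_hat(t, k)
```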
(Speech Processing Apparatus 10)
The speech processing apparatus 10 calculates an acoustic power γ(t) from the temporary noise suppression spectrum Ŝ(t,k) supplied by the temporary noise suppression unit 23. The speech processing apparatus 10 supplies the acoustic power γ(t) to the suppression gain calculation unit 24. Moreover, the speech processing apparatus 10 also supplies the spectrum expectation value ŜE(t,k) calculated in the course of calculation of the acoustic power γ(t) to the suppression gain calculation unit 24. The spectrum expectation value ŜE(t,k) is calculated by the expectation value calculation unit 12 as described in the first example embodiment.
Since the speech processing apparatus 10 has been described in the first example embodiment, the specific description thereof will not be provided. However, in the present example embodiment, the input spectrum Sin(k), the spectrum expectation value ŜE(k), and the acoustic power γ are replaced with the temporary noise suppression spectrum Ŝ(t,k), the spectrum expectation value ŜE(t,k), and the acoustic power γ(t).
(Suppression Gain Calculation Unit 24)
The suppression gain calculation unit 24 is means for calculating a suppression gain using the spectrum expectation value ŜE(t,k), the acoustic power γ(t), and the estimated noise spectrum N̂(t,k).
Specifically, the suppression gain calculation unit 24 receives the estimated noise spectrum N̂(t,k) from the noise estimation unit 22. Moreover, the suppression gain calculation unit 24 receives the acoustic power γ(t) and the spectrum expectation value ŜE(t,k) from the speech processing apparatus 10. The suppression gain calculation unit 24 calculates a suppression gain W(t,k) (where k=0, . . . , K−1) according to Equation (12) using the received estimated noise spectrum N̂(t,k), the acoustic power γ(t), and the spectrum expectation value ŜE(t,k).
As described by Equation (12), the numerator on the right side of Equation (12) is the product of the acoustic power γ(t) and the normalized spectrum expectation value obtained by dividing the spectrum expectation value ŜE(t,k) by the sum of the spectrum expectation values ŜE(t,k) over k. Moreover, the denominator on the right side of Equation (12) is the sum of this product and the estimated noise spectrum N̂(t,k). That is, the suppression gain calculation unit 24 calculates, as the suppression gain W(t,k), the ratio of (a) the product of the normalized spectrum expectation value and the acoustic power γ(t) to (b) the sum of the product and the estimated noise spectrum N̂(t,k).
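Equation (12) itself is not reproduced in this text; written out from the description above (numerator: the power-scaled, normalized spectrum expectation value; denominator: that quantity plus the estimated noise), it reads:

```latex
W(t,k) = \frac{\gamma(t)\,\dfrac{\hat{S}_{E}(t,k)}{\sum_{k'} \hat{S}_{E}(t,k')}}
              {\gamma(t)\,\dfrac{\hat{S}_{E}(t,k)}{\sum_{k'} \hat{S}_{E}(t,k')} + \hat{N}(t,k)}
```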
In this manner, when calculating the suppression gain W(t,k), the suppression gain calculation unit 24 uses the acoustic power γ(t) and the spectrum expectation value ŜE(t,k) calculated by the speech processing apparatus 10. This acoustic power γ(t) is calculated by referring to the speech model and the spectrum expectation value ŜE(t,k) calculated from the temporary noise suppression spectrum Ŝ(t,k). Therefore, the suppression gain calculation unit 24 can calculate the suppression gain W(t,k) using the acoustic power γ(t) having high estimation accuracy.
The suppression gain calculation unit 24 supplies the calculated suppression gain W(t,k) to the noise suppression unit 25.
(Noise Suppression Unit 25)
The noise suppression unit 25 is means for suppressing the noise in the input signal using the suppression gain W(t,k) and the input signal spectrum X(t,k). Specifically, the noise suppression unit 25 receives the input signal spectrum X(t,k) from the input signal acquisition unit 21. Moreover, the noise suppression unit 25 receives the suppression gain W(t,k) from the suppression gain calculation unit 24. The noise suppression unit 25 calculates a noise suppression spectrum Y(t,k) (where k=0, . . . , K−1) using the input signal spectrum X(t,k) and the suppression gain W(t,k). The noise suppression unit 25 calculates the noise suppression spectrum Y(t,k) using Equation (13).
Y(t,k)=W(t,k)X(t,k) (13)
The noise suppression spectrum Y(t,k) is a spectrum in which noise included in the input signal spectrum X(t,k) is suppressed from the input signal spectrum X(t,k).
The noise suppression unit 25 converts the calculated noise suppression spectrum Y(t,k) to a feature quantity vector and outputs it to a speech recognition device as the feature quantity vector of the estimated speech. When the output is supplied to a speech reproduction apparatus such as a speaker, the noise suppression unit 25 performs an inverse Fourier transform on the spectrum of the estimated speech obtained from the converted feature quantity vector to obtain a time-domain signal and outputs that signal (digital signal). Hereinafter, the feature quantity vector or the digital signal output by the noise suppression unit 25 is referred to as an output signal.
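A minimal sketch of applying the suppression gain (Equation (13)) and reconstructing a time-domain signal is shown below; the overlap-add details and the absence of a synthesis window are simplifying assumptions.

```python
import numpy as np

def suppress_and_reconstruct(x_spectra, gains, frame_len=200, frame_shift=100):
    """Apply W(t, k) to X(t, k) (Equation (13)) and convert back to a time-domain signal."""
    out = np.zeros(frame_shift * (len(x_spectra) - 1) + frame_len)
    for t, (x, w) in enumerate(zip(x_spectra, gains)):
        y_spec = w * x                               # Y(t, k) = W(t, k) X(t, k)
        y_frame = np.fft.irfft(y_spec, n=frame_len)  # inverse Fourier transform to the time domain
        out[t * frame_shift:t * frame_shift + frame_len] += y_frame
    return out
```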
Since the hardware configuration of the noise suppression apparatus 20 according to the present example embodiment is the same as the hardware configuration of the speech processing apparatus 10 of the first example embodiment illustrated in
(Processing of Noise Suppression Apparatus 20)
Next, the flow of processing of the noise suppression apparatus 20 will be described with reference to
As illustrated in the figure, the input signal acquisition unit 21 first acquires the input signal and converts it to the input signal spectrum X(t,k) (step S51).
Subsequently, the noise estimation unit 22 estimates noise included in the input signal. That is, the noise estimation unit 22 estimates the estimated noise spectrum N̂(t,k) from the input signal spectrum X(t,k) (step S52).
The temporary noise suppression unit 23 suppresses temporary noise in the input signal spectrum X(t,k). That is, the temporary noise suppression unit 23 removes the estimated noise spectrum N̂(t,k) from the input signal spectrum X(t,k) to calculate the temporary noise suppression spectrum Ŝ(t,k) (step S53). As described above, this step may be omitted. In this case, the temporary noise suppression spectrum Ŝ(t,k) is the input signal spectrum X(t,k).
Subsequently, the speech processing apparatus 10 calculates the spectrum expectation value ŜE(t,k) using the temporary noise suppression spectrum Ŝ(t,k) as an input (step S54). The speech processing apparatus 10 calculates the acoustic power γ(t) (step S55). Steps S54 and S55 are the same processes as steps S31 and S32 described in the first example embodiment, respectively.
Subsequently, the suppression gain calculation unit 24 calculates the suppression gain W(t,k) based on the estimated noise spectrum N̂(t,k), the spectrum expectation value ŜE(t,k), and the acoustic power γ(t) (step S56).
The noise suppression unit 25 suppresses noise in the input signal. That is, the noise suppression unit 25 calculates the noise suppression spectrum Y(t,k) by multiplying the suppression gain W(t,k) by the input signal spectrum X(t,k) (step S57).
Lastly, the input signal acquisition unit 21 of the noise suppression apparatus 20 checks whether there is a remaining digital signal to be processed (step S58). When there is a remaining digital signal to be processed (step S58: YES), the process returns to step S51. Otherwise (step S58: NO), the process ends.
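Tying the illustrative sketches above together, the per-frame flow of steps S52 to S57 could look like the following; the function names are the hypothetical ones introduced in the earlier sketches, and the helper to_feature (spectrum-to-feature conversion) and the matching of feature and spectrum dimensions are assumptions.

```python
def process_block(x_spectrum, noise_est, speech_gmm, mean_log_spectra, to_feature):
    """One pass of steps S52-S57 for a single input signal spectrum X(t, k).

    to_feature: hypothetical helper converting a spectrum to the GMM feature quantity vector.
    """
    noise_est = update_noise_estimate(noise_est, x_spectrum)             # step S52
    s_tmp = temporary_noise_suppression(x_spectrum, noise_est)           # step S53
    s_expect = spectrum_expectation(to_feature(s_tmp), mean_log_spectra, speech_gmm)  # step S54
    omega = select_bins(s_expect)                                        # band selection for step S55
    gamma = estimate_acoustic_power(s_tmp, s_expect, omega)              # step S55
    norm = s_expect / s_expect.sum()
    gain = gamma * norm / (gamma * norm + noise_est)                     # step S56, Equation (12)-like
    return gain * x_spectrum, noise_est                                  # step S57, Equation (13)
```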
(Effects)
The speech processing apparatus 10 of the noise suppression apparatus 20 according to the present example embodiment can estimate the acoustic power included in the input signal with higher accuracy similarly to the speech processing apparatus 10 according to the first example embodiment.
The noise suppression apparatus 20 according to the present example embodiment can suppress noise with higher accuracy since the noise included in the input signal is suppressed using the acoustic power having high accuracy.
Next, a third example embodiment of the present invention will be described. In the present example embodiment, a minimal configuration for solving the problems of the present invention will be described.
In the first and second example embodiments, although a configuration in which the storage 11 is included in the speech processing apparatus 10 has been described, the storage 11 may be implemented as an apparatus independent from the speech processing apparatus 10. This configuration will be described with reference to
Since the hardware configuration of the speech processing apparatus 30 according to the present example embodiment is the same as the hardware configuration of the speech processing apparatus 10 according to the first example embodiment illustrated in
The expectation value calculation unit 12 calculates a spectrum expectation value which is an expectation value of the spectrum of an acoustic component included in an input signal spectrum using the input signal spectrum and a speech model that models a feature quantity of speech. This speech model is stored in the storage 11 described in the first and second example embodiments.
The expectation value calculation unit 12 supplies the calculated spectrum expectation value to the acoustic power estimation unit 13.
The acoustic power estimation unit 13 estimates the acoustic power of the acoustic component of the input signal spectrum based on the input signal spectrum and the spectrum expectation value supplied from the expectation value calculation unit 12.
In this manner, in the speech processing apparatus 30 according to the present example embodiment, the acoustic power estimation unit 13 estimates the acoustic power of the acoustic component of the input signal using the input signal spectrum and the spectrum expectation value calculated using the speech model.
Therefore, the speech processing apparatus 30 according to the present example embodiment can estimate the acoustic power included in the input signal with higher accuracy.
The above-described example embodiments are preferred example embodiments according to the present invention, and the scope of the present invention is not limited to the above-described example embodiments only. Those skilled in the art may modify or substitute the above-described example embodiments without departing from the gist of the present invention, and a variety of forms obtained by applying such changes to the example embodiments can be constructed.
For example, the operations of the above-described example embodiments may be executed by hardware or software or both.
When the processes are executed by software, a program may be installed on a general-purpose computer that can execute the processes and the program may be executed by the computer, for example. Moreover, the program may be recorded on a recording medium such as a hard disk, for example.
A portion of or the whole of the example embodiments described above can be described as in the following Supplementary Notes, but is not limited thereto.
(Supplementary Note 1) A speech processing apparatus including: expectation value calculation means for calculating, using an input signal spectrum and a speech model that models a feature quantity of speech, a spectrum expectation value which is an expectation value of a spectrum of an acoustic component included in the input signal spectrum; and acoustic power estimation means for estimating an acoustic power of the acoustic component of the input signal spectrum based on the input signal spectrum and the spectrum expectation value.
(Supplementary Note 2) The speech processing apparatus according to Supplementary Note 1, wherein the acoustic power estimation means estimates the power of the spectrum expectation value controlled to minimize an error between the spectrum expectation value and the input signal spectrum as the acoustic power.
(Supplementary Note 3) The speech processing apparatus according to Supplementary Note 1 or 2, wherein the acoustic power estimation means calculates the acoustic power of a frequency component for which the spectrum expectation value or the spectrum expectation value and a value of the input signal spectrum is a predetermined value or more.
(Supplementary Note 4) The speech processing apparatus according to Supplementary Note 3, wherein the acoustic power estimation means changes the predetermined value to be compared with the spectrum expectation value or the spectrum expectation value and the value of the input signal spectrum based on a speech-likelihood of the input signal spectrum.
(Supplementary Note 5) The speech processing apparatus according to Supplementary Note 4, wherein the acoustic power estimation means sets the predetermined value to a smaller value when an index indicating the speech-likelihood is large and sets the predetermined value to a larger value when the index is small.
(Supplementary Note 6) The speech processing apparatus according to Supplementary Note 4 or 5, wherein the acoustic power estimation means estimates the acoustic power as the power of a predetermined acoustic component having a smaller value when the index indicating the speech-likelihood is small.
(Supplementary Note 7)
The speech processing apparatus according to any one of Supplementary Notes 1 to 6, further including storage means for storing the speech model.
(Supplementary Note 8) A noise suppression apparatus including: noise estimation means for calculating estimated noise from an input signal; a speech processing apparatus that estimates an expectation value of a spectrum of an acoustic component included in a spectrum of the input signal and an acoustic power of the acoustic component from the spectrum of the input signal; suppression gain calculation means for calculating a suppression gain using the expectation value of the spectrum of the acoustic component, the acoustic power, and the spectrum of the estimated noise; and noise suppression means for suppressing noise in the input signal using the suppression gain and the spectrum of the input signal, wherein the speech processing apparatus includes: expectation value calculation means for calculating, using the spectrum of the input signal and a speech model that models a feature quantity of speech, an expectation value of the spectrum of the acoustic component; and acoustic power estimation means for estimating the acoustic power based on the spectrum of the input signal and the expectation value of the spectrum of the acoustic component.
(Supplementary Note 9) The noise suppression apparatus according to Supplementary Note 8, wherein the acoustic power estimation means estimates the power of an expectation value of the spectrum of the acoustic component controlled to minimize an error between the expectation value of the spectrum of the acoustic component and the spectrum of the input signal as the acoustic power.
(Supplementary Note 10) The noise suppression apparatus according to Supplementary Note 8 or 9, wherein the acoustic power estimation means calculates the acoustic power of a frequency component for which the expectation value of the spectrum of the acoustic component or the expectation value of the spectrum of the acoustic component and the value of the spectrum of the input signal is a predetermined value or more.
(Supplementary Note 11) The noise suppression apparatus according to Supplementary Note 10, wherein the acoustic power estimation means changes the predetermined value to be compared with the expectation value of the spectrum of the acoustic component or the expectation value of the spectrum of the acoustic component and the value of the spectrum of the input signal based on a speech-likelihood of the spectrum of the input signal.
(Supplementary Note 12) The noise suppression apparatus according to Supplementary Note 11, wherein the acoustic power estimation means sets the predetermined value to a smaller value when an index indicating the speech-likelihood is large and sets the predetermined value to a larger value when the index is small.
(Supplementary Note 13) The noise suppression apparatus according to Supplementary Note 11 or 12, wherein the acoustic power estimation means estimates the acoustic power as the power of a predetermined acoustic component having a smaller value when the index indicating the speech-likelihood is small.
(Supplementary Note 14) The noise suppression apparatus according to any one of Supplementary Notes 8 to 13, further including storage means for storing the speech model.
(Supplementary Note 15) A noise suppression apparatus including: noise estimation means for calculating estimated noise from an input signal; the speech processing apparatus according to any one of Supplementary Notes 1 to 7; suppression gain calculation means for calculating a suppression gain using an expectation value of the spectrum of an acoustic component included in the spectrum of the input signal, an acoustic power of the acoustic component, and the spectrum of the estimated noise; and noise suppression means for suppressing noise in the input signal using the suppression gain and the spectrum of the input signal.
(Supplementary Note 16) The noise suppression apparatus according to any one of Supplementary Notes 8 to 15, further including temporary noise suppression means for generating a temporary noise suppression signal in which temporary noise is suppressed from the input signal using the input signal and the estimated noise, wherein the speech processing apparatus estimates the expectation value of the spectrum of the acoustic component and the acoustic power using the spectrum of the temporary noise suppression signal as the spectrum of the input signal.
(Supplementary Note 17) The noise suppression apparatus according to any one of Supplementary Notes 8 to 16, wherein the suppression gain calculation means calculates a ratio of a product between the acoustic power and the expectation value of the spectrum of the acoustic component to a sum of the product and the estimated noise as the suppression gain.
(Supplementary Note 18) A speech processing method including: calculating a spectrum expectation value which is an expectation value of a spectrum of an acoustic component included in an input signal spectrum using the input signal spectrum and a speech model that models a feature quantity of speech; and estimating an acoustic power of the acoustic component of the input signal spectrum based on the input signal spectrum and the spectrum expectation value.
(Supplementary Note 19) A noise suppression method including: calculating estimated noise from an input signal; calculating an expectation value of a spectrum of an acoustic component included in a spectrum of the input signal using the spectrum of the input signal and a speech model that models a feature quantity of speech; estimating an acoustic power of the acoustic component based on the spectrum of the input signal and the expectation value of the spectrum of the acoustic component; calculating a suppression gain using the expectation value of the spectrum of the acoustic component, the acoustic power, and the spectrum of the estimated noise; and suppressing noise in the input signal using the suppression gain and the spectrum of the input signal.
(Supplementary Note 20) A program for causing a computer to execute processes of: calculating a spectrum expectation value which is an expectation value of a spectrum of an acoustic component included in an input signal spectrum using the input signal spectrum and a speech model that models a feature quantity of speech; and estimating an acoustic power of the acoustic component of the input signal spectrum based on the input signal spectrum and the spectrum expectation value.
(Supplementary Note 21) A program for causing a computer to execute processes of: calculating estimated noise from an input signal; calculating an expectation value of a spectrum of an acoustic component included in a spectrum of the input signal using the spectrum of the input signal and a speech model that models a feature quantity of speech; estimating an acoustic power of the acoustic component based on the spectrum of the input signal and the expectation value of the spectrum of the acoustic component; calculating a suppression gain using the expectation value of the spectrum of the acoustic component, the acoustic power, and the spectrum of the estimated noise; and suppressing noise in the input signal using the suppression gain and the spectrum of the input signal.
(Supplementary Note 22) A computer-readable recording medium recording the program according to Supplementary Note 20 or 21.
This application claims the priority based on Japanese Patent Application No. 2014-249982 filed on Dec. 10, 2014, the entire disclosure of which is incorporated herein by reference.
Number | Date | Country | Kind
2014-249982 | Dec. 10, 2014 | JP | national

Filing Document | Filing Date | Country | Kind
PCT/JP2015/006120 | 12/8/2015 | WO | 00