ACOUSTIC ECHO CANCELLATION SYSTEM AND ASSOCIATED METHOD

Information

  • Patent Application: 20240046944
  • Publication Number: 20240046944
  • Date Filed: August 07, 2023
  • Date Published: February 08, 2024
Abstract
The acoustic echo cancellation (AEC) system includes a loudspeaker interface coupled to a loudspeaker, a microphone interface coupled to a microphone, and a processor executing a model. The model predicts and generates a spectral magnitude mask (SMM) through a neural network according to a first microphone signal output by the loudspeaker and a second microphone signal output by the microphone, wherein a noisy speech signal is a sum of a clean speech signal and a noise signal; the SMM is a ratio of a spectral magnitude of an estimated speech signal and a spectral magnitude of the noisy speech signal; a true mask is a ratio of a spectral magnitude of the clean speech signal and the spectral magnitude of the noisy speech signal; and the model applies a power function to a loss function of the model according to the true mask and a magnitude of the true mask.
Description
BACKGROUND

The present invention is related to acoustic echo cancellation (AEC), and more particularly, to an AEC system in which a power function is applied to a loss function of a model for compensating speech distortion or suppressing echo on a spectral magnitude mask (SMM).


Acoustic echo often occurs in audio/video calls if a far-end speaker's voice (e.g. a far-end microphone signal) is played by a near-end speaker and is picked up by a near-end microphone (e.g. a near-end microphone signal generated by the near-end microphone may include an echo signal and a clean speech signal). For a conventional AEC system, a model is trained and built through a neural network to predict and generate an SMM according to the far-end microphone signal and the near-end microphone signal. However, problems of speech distortion and echo may occur in the SMM. As a result, a novel AEC system in which a power function is applied to a loss function of a model for compensating speech distortion or suppressing echo on the SMM is urgently needed.


SUMMARY

It is therefore one of the objectives of the present invention to provide an AEC system in which a power function is applied to a loss function of a model for compensating speech distortion or suppressing echo on an SMM and associated method, to address the above-mentioned issues.


According to an embodiment of the present invention, an AEC system is provided. The AEC system comprises a loudspeaker interface, a microphone interface, and a processor executing a model. The loudspeaker interface is coupled to a loudspeaker and is arranged to receive a first microphone signal. The microphone interface is coupled to a microphone and is arranged to receive a second microphone signal output by the microphone, wherein the first microphone signal is not output from the microphone, and an echo signal is transmitted from the loudspeaker to the microphone. The model is arranged to predict and generate a spectral magnitude mask (SMM) through a neural network according to the first microphone signal and the second microphone signal, wherein a noisy speech signal is a sum of a clean speech signal and a noise signal; the SMM is a ratio of a spectral magnitude of an estimated speech signal and a spectral magnitude of the noisy speech signal; a true mask is a ratio of a spectral magnitude of the clean speech signal and the spectral magnitude of the noisy speech signal; and the model is further arranged to apply a power function to a loss function of the model according to the true mask and a magnitude of the true mask.


According to an embodiment of the present invention, an AEC method is provided. The AEC method comprises: receiving a first microphone signal, by a loudspeaker interface coupled to a loudspeaker; receiving a second microphone signal output by a microphone, by a microphone interface, wherein the first microphone signal is not output from the microphone, and an echo signal is transmitted from the loudspeaker to the microphone; and executing a model, wherein the model is arranged to predict and generate a spectral magnitude mask (SMM) through a neural network according to the first microphone signal and the second microphone signal; a noisy speech signal is a sum of a clean speech signal and a noise signal; the SMM is a ratio of a spectral magnitude of an estimated speech signal and a spectral magnitude of the noisy speech signal; a true mask is a ratio of a spectral magnitude of the clean speech signal and the spectral magnitude of the noisy speech signal; and the model is further arranged to apply a power function to a loss function of the model according to the true mask and a magnitude of the true mask.


One of the benefits of the present invention is that, in the AEC system and associated method of the present invention, in response to the true mask being larger than 1, a monotonically increasing power function is applied to a loss function of an AEC model by multiplying the loss function of the AEC model by a speech distortion compensation weight. In this way, the speech distortion on an SMM generated by the AEC model can be compensated. In addition, in response to the true mask being smaller than 1, a monotonically decreasing power function is applied to the loss function of the AEC model by multiplying the loss function of the AEC model by an echo suppression weight. In this way, the echo on the SMM generated by the AEC model can be suppressed.


These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram illustrating an electronic device according to an embodiment of the present invention.



FIG. 2 is a diagram illustrating an acoustic echo cancellation (AEC) system according to an embodiment of the present invention.



FIG. 3 is a diagram illustrating implementation details of the AEC model shown in FIG. 2 according to an embodiment of the present invention.



FIG. 4 is a flow chart of an AEC method according to an embodiment of the present invention.





DETAILED DESCRIPTION

Certain terms are used throughout the following description and claims, which refer to particular components. As one skilled in the art will appreciate, electronic equipment manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not in function. In the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to . . . ”.



FIG. 1 is a diagram illustrating an electronic device 10 according to an embodiment of the present invention. By way of example, but not limitation, the electronic device 10 may be a portable device such as a smartphone or a tablet. The electronic device 10 may include a processor 12 and a storage device 14. The processor 12 may be a single-core processor or a multi-core processor. The storage device 14 is a non-transitory machine-readable medium, and is arranged to store computer program code PROG and a model MD. The processor 12 is equipped with software execution capability. The computer program code PROG may include multiple artificial intelligence (AI)-based algorithms. When loaded and executed by the processor 12, the computer program code PROG instructs the processor 12 to train the model MD according to the AI-based algorithms. The electronic device 10 may be regarded as a computer system using a computer program product that includes a computer-readable medium containing the computer program code PROG. The model in an acoustic echo cancellation (AEC) system as proposed by the present invention may be embodied on the electronic device 10. That is, the model MD may be the AEC model mentioned hereinafter.



FIG. 2 is a diagram illustrating an AEC system 20 according to an embodiment of the present invention. As shown in FIG. 2, the AEC system 20 may include a loudspeaker 200, a microphone 202, and an AEC model 204. The loudspeaker 200 may be arranged to receive and play a far-end microphone signal x(n), and the microphone 202 may be arranged to receive a speech signal s(n) and output a near-end microphone signal y(n), wherein the far-end microphone signal x(n) is not output from the microphone 202, an echo signal d(n) is transmitted from the loudspeaker 200 to the microphone 202, and an external noise signal v(n) may also be received by the microphone 202 (i.e. y(n)=d(n)+s(n)+v(n)).
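
For illustration, the near-end mixture described above can be sketched in Python (a minimal sketch using synthetic NumPy signals; the sample rate, tones, and noise level are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def near_end_mixture(d, s, v):
    """Form the near-end microphone signal y(n) = d(n) + s(n) + v(n).

    d: echo signal transmitted from the loudspeaker to the microphone
    s: near-end speech signal
    v: external noise signal
    All three are assumed to be 1-D arrays of equal length (an
    illustrative simplification; real capture paths need alignment).
    """
    return d + s + v

# Synthetic 1-second example at 16 kHz (all values are placeholders):
fs = 16000
t = np.arange(fs) / fs
s = 0.5 * np.sin(2 * np.pi * 220 * t)   # stand-in for speech s(n)
d = 0.3 * np.sin(2 * np.pi * 440 * t)   # stand-in for echo d(n)
v = 0.01 * np.random.randn(fs)          # stand-in for noise v(n)
y = near_end_mixture(d, s, v)           # near-end microphone signal y(n)
```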


The AEC model 204 may be arranged to receive the far-end microphone signal x(n) and the near-end microphone signal y(n) through a loudspeaker interface and a microphone interface of the AEC system 20, respectively, wherein the loudspeaker interface is coupled to the loudspeaker 200 and the microphone interface is coupled to the microphone 202. The AEC model 204 may be further arranged to predict and generate a spectral magnitude mask (SMM) through a neural network according to the far-end microphone signal x(n) and the near-end microphone signal y(n), for generating an estimated speech signal s′(n). Specifically, please refer to FIG. 3. FIG. 3 is a diagram illustrating implementation details of the AEC model 204 shown in FIG. 2 according to an embodiment of the present invention. As shown in FIG. 3, the AEC model 204 may include multiple segment modules 300 and 302, multiple fast Fourier transform (FFT) modules 304 and 306, multiple instant layer normalization (iLN) modules 308 and 310, a concat module 312, and a separation kernel 314. The AEC model 204 may be arranged to perform short-time Fourier transform (STFT) upon the far-end microphone signal x(n) and the near-end microphone signal y(n), respectively, to generate a first transformed microphone signal X_T and a second transformed microphone signal Y_T. For example, the segment module 300 may be arranged to split the far-end microphone signal x(n) to generate a first segmented microphone signal X_S, and the FFT module 304 may be arranged to perform FFT upon the first segmented microphone signal X_S to generate the first transformed microphone signal X_T. The segment module 302 may be arranged to split the near-end microphone signal y(n) to generate a second segmented microphone signal Y_S, and the FFT module 306 may be arranged to perform FFT upon the second segmented microphone signal Y_S to generate the second transformed microphone signal Y_T.
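
The segment-and-FFT front end described above can be sketched as follows (a NumPy sketch; the frame length, hop size, and Hann window are illustrative assumptions, as the disclosure does not specify the STFT parameters):

```python
import numpy as np

def segment_and_fft(sig, frame_len=512, hop=256):
    """Split a time-domain signal into overlapping windowed frames
    (segment module) and apply an FFT per frame (FFT module).

    Returns a complex array of shape (num_frames, frame_len // 2 + 1),
    i.e. an STFT indexed by frame k and frequency bin l.
    """
    window = np.hanning(frame_len)
    num_frames = 1 + (len(sig) - frame_len) // hop
    frames = np.stack([sig[i * hop : i * hop + frame_len] * window
                       for i in range(num_frames)])
    return np.fft.rfft(frames, axis=-1)

# Placeholder far-end and near-end microphone signals:
x = np.random.randn(16000)   # x(n)
y = np.random.randn(16000)   # y(n)
X_T = segment_and_fft(x)     # first transformed microphone signal X_T
Y_T = segment_and_fft(y)     # second transformed microphone signal Y_T
```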


Afterwards, the AEC model 204 may be arranged to generate the SMM through the separation kernel 314 according to the first transformed microphone signal X_T and the second transformed microphone signal Y_T. In this embodiment, the separation kernel 314 may include multiple long short term memory (LSTM) layers (e.g. 3 LSTM layers 316-320) and a fully-connected layer 322 (labeled as “FC” in FIG. 3) with a sigmoid activation 324. The iLN module 308 may be arranged to normalize the first transformed microphone signal X_T to generate a first normalized microphone signal X_N. The iLN module 310 may be arranged to normalize the second transformed microphone signal Y_T to generate a second normalized microphone signal Y_N. The concat module 312 may be arranged to concatenate the first normalized microphone signal X_N and the second normalized microphone signal Y_N to generate a concatenated result CR. The separation kernel 314 may be arranged to predict and generate two masks (e.g. a real part mask RM and an imaginary part mask IM) according to the concatenated result CR through the LSTM layers 316-320 and the fully-connected layer 322 with the sigmoid activation 324, wherein the real part mask RM corresponds to magnitude information of the far-end microphone signal x(n) and the near-end microphone signal y(n), and the imaginary part mask IM corresponds to phase information of the far-end microphone signal x(n) and the near-end microphone signal y(n). In addition, the real part mask RM is a real part of the SMM, and the imaginary part mask IM is an imaginary part of the SMM (i.e. the SMM includes the real part mask RM and the imaginary part mask IM). In this way, a real part of the estimated speech signal s′(n) can be obtained by multiplying the real part mask RM by a real part of the near-end microphone signal y(n), and an imaginary part of the estimated speech signal s′(n) can be obtained by multiplying the imaginary part mask IM by an imaginary part of the near-end microphone signal y(n).
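
The separation kernel can likewise be sketched in PyTorch (a minimal sketch under stated assumptions: the layer sizes are invented for illustration, per-frame LayerNorm stands in for the iLN modules, and feeding spectral magnitudes into the network is one plausible reading of FIG. 3 rather than a confirmed detail):

```python
import torch
import torch.nn as nn

class SeparationKernelSketch(nn.Module):
    """iLN on each branch, concat, three LSTM layers, and a
    fully-connected layer with sigmoid activation that emits a real
    part mask RM and an imaginary part mask IM."""

    def __init__(self, num_bins=257, hidden=256):
        super().__init__()
        # Per-frame LayerNorm as a stand-in for instant layer normalization.
        self.iln_x = nn.LayerNorm(num_bins)
        self.iln_y = nn.LayerNorm(num_bins)
        self.lstm = nn.LSTM(2 * num_bins, hidden, num_layers=3,
                            batch_first=True)
        self.fc = nn.Linear(hidden, 2 * num_bins)  # RM and IM, stacked

    def forward(self, x_mag, y_mag):
        # x_mag, y_mag: (batch, frames, num_bins) spectral magnitudes.
        cr = torch.cat([self.iln_x(x_mag), self.iln_y(y_mag)], dim=-1)
        h, _ = self.lstm(cr)
        masks = torch.sigmoid(self.fc(h))
        rm, im = masks.chunk(2, dim=-1)  # real / imaginary part masks
        return rm, im

# Applying the masks to a complex near-end STFT Y, per the text above:
#   s_est = rm * Y.real + 1j * (im * Y.imag)
```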


Specifically, for the training of the AEC model 204, a noisy speech signal YSS (which may correspond to the near-end microphone signal y(n)) is a sum of a clean speech signal CSS (which may correspond to the speech signal s(n)) and a noise signal NSS (which may correspond to the external noise signal v(n) and the echo signal d(n)), that is, YSS=CSS+NSS. After the AEC model 204 is trained according to the AI-based algorithms, the SMM may be predicted and generated through the LSTM layers 316-320 and the fully-connected layer 322 with the sigmoid activation 324, wherein the SMM is equal to a ratio of a spectral magnitude S′(k, l) of the estimated speech signal s′(n) and a spectral magnitude Y(k, l) of the noisy speech signal YSS, i.e.

$$\mathrm{SMM} = \frac{|S'(k, l)|}{|Y(k, l)|},$$

where k is a frame index, and l is a frequency bin index.
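
In code, both ratio masks are elementwise divisions over the STFT grid (a NumPy sketch; the epsilon guard against empty bins is an implementation detail added here, not part of the disclosure):

```python
import numpy as np

EPS = 1e-8  # guards against division by zero (illustrative detail)

def spectral_magnitude_mask(S_est, Y):
    """SMM(k, l) = |S'(k, l)| / |Y(k, l)| for frame k and bin l."""
    return np.abs(S_est) / (np.abs(Y) + EPS)

def true_mask(S, Y):
    """True mask(k, l) = |S(k, l)| / |Y(k, l)| using the clean STFT S."""
    return np.abs(S) / (np.abs(Y) + EPS)
```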


In this embodiment, a non-linear function is introduced to a loss function LOSS_F of the AEC model 204 for compensating speech distortion or suppressing echo on the SMM, and a true mask is generated by the AEC model 204 for subsequent operations, wherein the true mask is a ratio of a spectral magnitude S(k, l) of the clean speech signal CSS and the spectral magnitude Y(k, l) of the noisy speech signal YSS, i.e.

$$\mathrm{True\;mask} = \frac{|S(k, l)|}{|Y(k, l)|}.$$

The true mask includes a real part true mask Truere and an imaginary part true mask Trueim, and a magnitude Truemag of the true mask is equal to a square root of a sum of a square of the real part true mask Truere and a square of the imaginary part true mask Trueim, which can be expressed by the following equation:





$$\mathrm{True}_{mag} = \sqrt{\mathrm{True}_{re}^{2} + \mathrm{True}_{im}^{2}}$$
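
This magnitude is a single elementwise operation (NumPy sketch):

```python
import numpy as np

def true_mask_magnitude(true_re, true_im):
    """True_mag = sqrt(True_re^2 + True_im^2), elementwise over (k, l).
    np.hypot(true_re, true_im) is an equivalent, numerically safer form."""
    return np.sqrt(true_re ** 2 + true_im ** 2)
```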


In response to the true mask being larger than 1 (i.e. the spectral magnitude S(k, l) of the clean speech signal CSS is larger than the spectral magnitude Y(k, l) of the noisy speech signal YSS), the AEC model 204 may be arranged to apply a monotonically increasing power function to the loss function LOSS_F of the AEC model 204 by multiplying the loss function LOSS_F of the AEC model 204 by a speech distortion compensation weight weightSDW, for compensating speech distortion on the SMM. That is, after the monotonically increasing power function is applied to the loss function LOSS_F of the AEC model 204, a loss function LSDW of the AEC model 204 can be expressed as: LSDW=weightSDW*LOSS_F. The speech distortion compensation weight weightSDW is equal to the minimum of a bound value BOUND1 and the nth power of the maximum of 1 and the magnitude Truemag of the true mask, which can be expressed by the following equation:





$$weight_{SDW} = \min\left(\left[\max(1, \mathrm{True}_{mag})\right]^{n}, \mathrm{BOUND}_1\right)$$


wherein the bound value BOUND1 is arranged to limit the speech distortion compensation weight weightSDW, and both of the bound value BOUND1 and a value of n are positive numbers greater than 1.
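
A sketch of the speech distortion compensation weight follows (the values n=2 and BOUND1=4 are illustrative assumptions; the disclosure only requires both to be positive numbers greater than 1):

```python
import numpy as np

def speech_distortion_weight(true_mag, n=2.0, bound1=4.0):
    """weight_SDW = min([max(1, True_mag)]^n, BOUND1).

    The weight is 1 wherever the true mask is at most 1 and grows
    monotonically (capped at BOUND1) where the true mask exceeds 1,
    so bins with under-estimated speech are penalized more heavily.
    """
    return np.minimum(np.maximum(1.0, true_mag) ** n, bound1)
```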


In response to the true mask being smaller than 1 (i.e. the spectral magnitude S(k, l) of the clean speech signal CSS is smaller than the spectral magnitude Y(k, l) of the noisy speech signal YSS), the AEC model 204 may be arranged to apply a monotonically decreasing power function to the loss function LOSS_F of the AEC model 204 by multiplying the loss function LOSS_F of the AEC model 204 by an echo suppression weight weightESW, for suppressing echo on the SMM. That is, after the monotonically decreasing power function is applied to the loss function LOSS_F of the AEC model 204, a loss function LESW of the AEC model 204 can be expressed as: LESW=weightESW*LOSS_F. The echo suppression weight weightESW is equal to the minimum of a bound value BOUND2 and the nth power of the minimum of 1 and the magnitude Truemag of the true mask, which can be expressed by the following equation:





$$weight_{ESW} = \min\left(\left[\min(1, \mathrm{True}_{mag})\right]^{n}, \mathrm{BOUND}_2\right)$$


wherein the bound value BOUND2 is arranged to limit the echo suppression weight weightESW; the bound value BOUND2 is a positive number, and a value of n is a negative number, so that the power function is monotonically decreasing (consistent with claims 6 and 16).
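
A matching sketch of the echo suppression weight (with n negative and BOUND2 positive as stated above; the specific values n=-2 and BOUND2=4 are illustrative assumptions):

```python
import numpy as np

def echo_suppression_weight(true_mag, n=-2.0, bound2=4.0):
    """weight_ESW = min([min(1, True_mag)]^n, BOUND2).

    Because n < 0, the power function is monotonically decreasing: the
    smaller the true mask (the more echo dominates a bin), the larger
    the weight, capped at BOUND2. A tiny epsilon avoids 0 raised to a
    negative power (an implementation detail added here).
    """
    eps = 1e-8
    return np.minimum(np.minimum(1.0, true_mag + eps) ** n, bound2)
```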


In this embodiment, the loss function LOSS_F of the AEC model 204 may be a mean square error between the SMM and the true mask. Since the SMM includes the real part mask RM and the imaginary part mask IM, the loss function LOSS_F of the AEC model 204 is a sum of a mean square error between the real part mask RM and a real part of the true mask (labeled as “RT” in the following equation) and a mean square error between the imaginary part mask IM and an imaginary part of the true mask (labeled as “IT” in the following equation), which can be expressed by the following equation:





$$\mathrm{LOSS\_F} = \mathrm{MSE}(RT, RM) + \mathrm{MSE}(IT, IM)$$


but the present invention is not limited thereto. The speech distortion compensation weight weightSDW and the echo suppression weight weightESW are also applicable to other types of the loss function LOSS_F of the AEC model 204.
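
Putting the pieces together, one plausible weighted training loss is sketched below in PyTorch (applying the weight elementwise before the mean is one reasonable reading of "multiplying the loss function by the weight"; the disclosure does not pin down the reduction order):

```python
import torch

def weighted_mask_loss(rm, im, rt, it, weight):
    """LOSS_F = MSE(RT, RM) + MSE(IT, IM), scaled per (k, l) by the
    speech distortion compensation / echo suppression weight.

    rm, im: predicted real / imaginary part masks
    rt, it: real / imaginary parts of the true mask
    weight: per-bin weight built from the true-mask magnitude
    """
    err = (rt - rm) ** 2 + (it - im) ** 2
    return (weight * err).mean()

# Selecting the weight per bin, following the summary above:
#   weight = torch.where(true_mag > 1.0, weight_sdw, weight_esw)
```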



FIG. 4 is a flow chart of an AEC method according to an embodiment of the present invention. Provided that the result is substantially the same, the steps are not required to be executed in the exact order shown in FIG. 4. For example, the AEC method shown in FIG. 4 may be employed by the AEC system 20 shown in FIG. 2.


In Step S400, the far-end microphone signal x(n) is received and played by the loudspeaker 200.


In Step S402, the speech signal s(n) is received and the near-end microphone signal y(n) is output by the microphone 202, wherein the far-end microphone signal x(n) is not output from the microphone 202, the echo signal d(n) is transmitted from the loudspeaker 200 to the microphone 202, and the external noise signal v(n) may also be received by the microphone 202.


In Step S404, the AEC model 204 is executed to predict and generate an SMM through the neural network according to the far-end microphone signal x(n) and the near-end microphone signal y(n), and to apply a power function (e.g. a monotonically increasing power function or a monotonically decreasing power function) to the loss function LOSS_F of the AEC model 204 according to the true mask and the magnitude Truemag of the true mask, for compensating speech distortion or suppressing echo on the SMM.


Since a person skilled in the pertinent art can readily understand details of the steps after reading the above paragraphs directed to the AEC system 20 shown in FIG. 2, further description is omitted here for brevity.


In summary, in the AEC system 20 and associated method of the present invention, in response to the true mask being larger than 1, a monotonically increasing power function is applied to the loss function LOSS_F of the AEC model 204 by multiplying the loss function LOSS_F of the AEC model 204 by the speech distortion compensation weight weightSDW. In this way, the speech distortion on the SMM generated by the AEC model 204 can be compensated. In addition, in response to the true mask being smaller than 1, a monotonically decreasing power function is applied to the loss function LOSS_F of the AEC model 204 by multiplying the loss function LOSS_F of the AEC model 204 by the echo suppression weight weightESW. In this way, the echo on the SMM generated by the AEC model 204 can be suppressed.


Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.

Claims
  • 1. An acoustic echo cancellation (AEC) system, comprising: a loudspeaker interface, coupled to a loudspeaker; a microphone interface, coupled to a microphone; a processor, arranged to execute: a model, arranged to predict and generate a spectral magnitude mask (SMM) through a neural network according to a first microphone signal output by the loudspeaker and a second microphone signal output by the microphone, wherein a noisy speech signal is a sum of a clean speech signal and a noise signal; the SMM is a ratio of a spectral magnitude of an estimated speech signal and a spectral magnitude of the noisy speech signal; a true mask is a ratio of a spectral magnitude of the clean speech signal and the spectral magnitude of the noisy speech signal; and the model is further arranged to apply a power function to a loss function of the model according to the true mask and a magnitude of the true mask.
  • 2. The AEC system of claim 1, wherein the true mask comprises a real part true mask and an imaginary part true mask; and the magnitude of the true mask is equal to a square root of a sum of a square of the real part true mask and a square of the imaginary part true mask.
  • 3. The AEC system of claim 1, wherein in response to the true mask being larger than 1, the model is arranged to apply a monotonically increasing power function to the loss function of the model by multiplying the loss function of the model by a speech distortion compensation weight for compensating speech distortion on the SMM.
  • 4. The AEC system of claim 3, wherein the speech distortion compensation weight is equal to a minimum value between a bound value and a maximum value between 1 and the magnitude of the true mask to a power of n; and the bound value and a value of n are positive numbers.
  • 5. The AEC system of claim 1, wherein in response to the true mask being smaller than 1, the model is arranged to apply a monotonically decreasing power function to the loss function of the model by multiplying the loss function of the model by an echo suppression weight for suppressing echo on the SMM.
  • 6. The AEC system of claim 5, wherein the echo suppression weight is equal to a minimum value between a bound value and a minimum value between 1 and the magnitude of the true mask to a power of n; the bound value is a positive number; and a value of n is a negative number.
  • 7. The AEC system of claim 1, wherein the SMM comprises a real part mask and an imaginary part mask; the real part mask is a real part of the SMM and the imaginary part mask is an imaginary part of the SMM; and the real part mask corresponds to magnitude information of the first microphone signal and the second microphone signal, and the imaginary part mask corresponds to phase information of the first microphone signal and the second microphone signal.
  • 8. The AEC system of claim 7, wherein a real part of the estimated speech signal is obtained by multiplying the real part mask by a real part of the second microphone signal, and an imaginary part of the estimated speech signal is obtained by multiplying the imaginary part mask by an imaginary part of the second microphone signal.
  • 9. The AEC system of claim 1, wherein the model comprises: multiple segment modules, arranged to split the first microphone signal and the second microphone signal, respectively, to generate a first segmented microphone signal and a second segmented microphone signal; multiple fast Fourier transform modules, arranged to perform fast Fourier transform upon the first segmented microphone signal and the second segmented microphone signal, respectively, to generate a first transformed microphone signal and a second transformed microphone signal; multiple instant layer normalization (iLN) modules, arranged to normalize the first transformed microphone signal and the second transformed microphone signal, respectively, to generate a first normalized microphone signal and a second normalized microphone signal; a concat module, arranged to concatenate the first normalized microphone signal and the second normalized microphone signal, to generate a concatenated result; and a separation kernel, arranged to predict and generate the SMM according to the concatenated result, wherein the estimated speech signal is generated according to the SMM.
  • 10. The AEC system of claim 9, wherein the separation kernel comprises multiple long short term memory (LSTM) layers and a fully-connected layer with sigmoid activation, and the SMM is predicted and generated by the multiple LSTM layers and the fully-connected layer with sigmoid activation.
  • 11. An acoustic echo cancellation (AEC) method, comprising: receiving a first microphone signal output by a loudspeaker, by a loudspeaker interface; receiving a second microphone signal output by a microphone, by a microphone interface; executing a model, wherein the model is arranged to predict and generate a spectral magnitude mask (SMM) through a neural network according to the first microphone signal and the second microphone signal; a noisy speech signal is a sum of a clean speech signal and a noise signal; the SMM is a ratio of a spectral magnitude of an estimated speech signal and a spectral magnitude of the noisy speech signal; a true mask is a ratio of a spectral magnitude of the clean speech signal and the spectral magnitude of the noisy speech signal; and the model is further arranged to apply a power function to a loss function of the model according to the true mask and a magnitude of the true mask.
  • 12. The AEC method of claim 11, wherein the true mask comprises a real part true mask and an imaginary part true mask; and the magnitude of the true mask is equal to a square root of a sum of a square of the real part true mask and a square of the imaginary part true mask.
  • 13. The AEC method of claim 11, wherein in response to the true mask being larger than 1, the model is arranged to apply a monotonically increasing power function to the loss function of the model by multiplying the loss function of the model by a speech distortion compensation weight for compensating speech distortion on the SMM.
  • 14. The AEC method of claim 13, wherein the speech distortion compensation weight is equal to a minimum value between a bound value and nth power of a maximum value between 1 and the magnitude of the true mask; and the bound value and a value of n are positive numbers.
  • 15. The AEC method of claim 11, wherein in response to the true mask being smaller than 1, the model is arranged to apply a monotonically decreasing power function to the loss function of the model by multiplying the loss function of the model by an echo suppression weight for suppressing echo on the SMM.
  • 16. The AEC method of claim 15, wherein the echo suppression weight is equal to a minimum value between a bound value and nth power of a minimum value between 1 and the magnitude of the true mask; the bound value is a positive number; and a value of n is a negative number.
  • 17. The AEC method of claim 11, wherein the SMM comprises a real part mask and an imaginary part mask; the real part mask is a real part of the SMM and the imaginary part mask is an imaginary part of the SMM; and the real part mask corresponds to magnitude information of the first microphone signal and the second microphone signal, and the imaginary part mask corresponds to phase information of the first microphone signal and the second microphone signal.
  • 18. The AEC method of claim 17, wherein a real part of the estimated speech signal is obtained by multiplying the real part mask by a real part of the second microphone signal, and an imaginary part of the estimated speech signal is obtained by multiplying the imaginary part mask by an imaginary part of the second microphone signal.
  • 19. The AEC method of claim 11, wherein the model is further arranged to perform steps of: splitting the first microphone signal and the second microphone signal, respectively, to generate a first segmented microphone signal and a second segmented microphone signal; performing fast Fourier transform upon the first segmented microphone signal and the second segmented microphone signal, respectively, to generate a first transformed microphone signal and a second transformed microphone signal; normalizing the first transformed microphone signal and the second transformed microphone signal, respectively, to generate a first normalized microphone signal and a second normalized microphone signal; concatenating the first normalized microphone signal and the second normalized microphone signal, to generate a concatenated result; and predicting and generating the SMM according to the concatenated result, wherein the estimated speech signal is generated according to the SMM.
  • 20. The AEC method of claim 19, wherein the step of predicting and generating the SMM according to the concatenated result comprises: predicting and generating the SMM by multiple long short term memory (LSTM) layers and a fully-connected layer with sigmoid activation.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 63/396,218, filed on Aug. 8, 2022. The content of the application is incorporated herein by reference.

Provisional Applications (1)
Number Date Country
63396218 Aug 2022 US