The present invention relates generally to voice separation. More particularly, the present invention relates to a method and a system for improving voice separation by eliminating overlaps.
Nowadays, voice separation is widely used by ordinary users on many occasions, for example in a car equipped with speech recognition. When more than one person is speaking, or when there is noise in the car, the head unit (host) of the car cannot recognize the driver's speech. Therefore, voice separation is needed to improve speech recognition in this case. There are mainly two well-known types of voice separation methods. One is to build a microphone array to achieve voice enhancement. The other is to use voice separation algorithms, such as frequency-domain independent component analysis (FDICA), the degenerate unmixing estimation technique (DUET), or other extended algorithms. Because the FDICA algorithm for separating speech is more complex, the DUET algorithm is usually chosen to implement the voice separation.
However, in the traditional DUET algorithm, some overlapping time-frequency points may be assigned to either of the separated voices. In this case, one of the separated voices may include another person's voice, with the result that the separated voice is not pure enough.
Therefore, there may be a need to partition these overlapping time-frequency points into a single cluster so that they do not appear in the separated voices, whereby the quality of the separated voices can be improved.
The present invention overcomes some of the drawbacks by providing a method and system to improve voice separation performance by eliminating overlaps.
On one hand, the present invention provides a method for improving voice separation performance by eliminating overlap. The method comprises the operations of: picking up, by at least two microphones, respectively, at least two mixtures including a mixed first sound and second sound; recording and storing, in a sound recording module, the at least two mixtures from the at least two microphones; and analyzing, in an algorithm module, the two mixtures to separate the time-frequency points. In particular, the algorithm module is configured to apply the Degenerate Unmixing Estimation Technique (DUET) algorithm and further performs the operation of eliminating overlapping points from the time-frequency points. The first sound and the second sound are then recovered into the time domain, respectively, from the time-frequency points that remain after the overlapping points have been eliminated. The overlapping points comprise the time-frequency points that belong to neither the first sound nor the second sound. In this way, by using the method provided herein, the first sound is recovered only from the time-frequency points belonging to the first sound, and the second sound is recovered only from the time-frequency points belonging to the second sound.
In particular, in the method provided herein, eliminating the overlapping points comprises determining the overlapping points according to the rule |d1−d2|<d0/4, where d1 is the distance between an overlapping point and a first peak center, d2 is the distance between the overlapping point and a second peak center, and d0 is the distance between the first peak center and the second peak center.
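By way of non-limiting illustration, the rule can be expressed in a few lines of code. The following is a minimal Python sketch, assuming each time-frequency point and each peak center is represented as a coordinate pair in the attenuation-delay plane; the function name and array representation are illustrative only and do not form part of the claimed method.

```python
import numpy as np

def is_overlapping(point, pc1, pc2):
    """Apply the rule |d1 - d2| < d0 / 4: a time-frequency point whose
    distances to the two peak centers differ by less than a quarter of
    the inter-center distance belongs to neither source."""
    d1 = np.linalg.norm(point - pc1)  # distance to the first peak center
    d2 = np.linalg.norm(point - pc2)  # distance to the second peak center
    d0 = np.linalg.norm(pc1 - pc2)    # distance between the peak centers
    return abs(d1 - d2) < d0 / 4.0
```

Intuitively, a point near the midline between the two peak centers satisfies the rule and is partitioned into the third cluster rather than attributed to either source.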
On the other hand, the present invention further provides a system for implementing the method to improve voice separation performance by eliminating overlap. The system comprises: at least two microphones for picking up at least two mixtures including a mixed first sound and second sound; a sound recording module for recording and storing the at least two mixtures from the at least two microphones; and an algorithm module configured to analyze the two mixtures to separate the time-frequency points. In particular, the algorithm module is configured to apply the Degenerate Unmixing Estimation Technique (DUET) algorithm and further performs the operation of eliminating overlapping points from the time-frequency points. The first sound and the second sound are then recovered into the time domain from the time-frequency points belonging only to the first sound or only to the second sound, respectively.
In particular, in the system provided herein, eliminating the overlapping points comprises determining the overlapping points according to the rule |d1−d2|<d0/4, where d1 is the distance between an overlapping point and the first peak center, d2 is the distance between the overlapping point and the second peak center, and d0 is the distance between the first peak center and the second peak center.
The present invention may be better understood from reading the following description of non-limiting embodiments, with reference to the attached drawings. In the figures, like reference numerals designate corresponding parts, wherein:
The detailed description of the embodiments of the present invention is disclosed hereinafter; however, it is understood that the disclosed embodiments are merely exemplary of the invention that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention.
One of the objects of the invention is to provide a method to improve voice separation performance by eliminating overlap.
In one embodiment, in operation 202, the mixed sounds picked up by the two microphones (mic1, mic2) are recorded and stored in the sound recording module.
Next, in operation 203, the algorithm module analyzes the mixtures recorded and stored in operation 202. In the algorithm module, DUET is proposed as the algorithm for speech separation in the embodiment. The DUET algorithm is one of the methods of blind signal separation (BSS), which retrieves source signals from their mixtures without a priori information about the source signals or the mixing process.
The DUET blind source separation method is valid when the sources are W-disjoint orthogonal, that is, when the supports of the windowed Fourier transforms of the signals in the mixture are disjoint. The DUET algorithm can roughly separate any number of sources using only two mixtures. For anechoic mixtures of attenuated and delayed sources, the DUET algorithm allows one to estimate the mixing parameters by clustering relative attenuation-delay pairs extracted from the ratios of the time-frequency representations of the mixtures. The estimates of the mixing parameters are then used to partition the time-frequency representation of one mixture to recover the original sources.
The DUET voice separation algorithm is divided into the following operations:

First, the two mixtures are transformed into the time-frequency domain by the windowed Fourier transform, yielding $\hat{x}_1(\tau,\omega)$ and $\hat{x}_2(\tau,\omega)$. From the ratio of the two representations, a local estimate of the mixing parameters is obtained at every time-frequency point:

$$\tilde{\delta}(\tau,\omega) := -\frac{1}{\omega}\,\angle\!\left(\frac{\hat{x}_2(\tau,\omega)}{\hat{x}_1(\tau,\omega)}\right),$$

which corresponds to the relative delay, and

$$\tilde{\alpha}(\tau,\omega) := \left|\frac{\hat{x}_2(\tau,\omega)}{\hat{x}_1(\tau,\omega)}\right| - \left|\frac{\hat{x}_1(\tau,\omega)}{\hat{x}_2(\tau,\omega)}\right|,$$

which indicates the symmetric attenuation. Next, a two-dimensional weighted histogram is constructed over the attenuation-delay pairs,

$$H(\alpha,\delta) := \iint_{(\tau,\omega)\in I(\alpha,\delta)} \bigl|\hat{x}_1(\tau,\omega)\,\hat{x}_2(\tau,\omega)\bigr|^{p}\,\omega^{q}\,d\tau\,d\omega, \tag{2}$$

where $I(\alpha,\delta)$ is the set of time-frequency points whose estimated pair $(\tilde{\alpha},\tilde{\delta})$ falls near $(\alpha,\delta)$, and $p$ and $q$ are weighting exponents. The two highest peaks of $H(\alpha,\delta)$ give the two peak centers corresponding to the two sources.
So far, the time-frequency representation of each estimated source has been partitioned to one of the two peak centers (Pc_1, Pc_2), and each partition may be converted back into the time domain to obtain the separated sound 1 and sound 2.
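For illustration only, the operations above can be sketched as follows. This is a minimal Python sketch of a standard DUET front end built on SciPy's STFT; the parameter choices (n_fft = 1024, a 50×50 histogram, p = 1, q = 0) and the simplistic peak picker are hypothetical and are not the exact implementation of the claimed method.

```python
import numpy as np
from scipy.signal import stft

def duet_peaks(x1, x2, fs, n_fft=1024):
    """Estimate the two DUET peak centers (Pc_1, Pc_2) in the
    attenuation-delay plane from two mixture channels."""
    _, _, X1 = stft(x1, fs, nperseg=n_fft)
    _, _, X2 = stft(x2, fs, nperseg=n_fft)

    eps = 1e-10
    ratio = (X2 + eps) / (X1 + eps)          # x2_hat / x1_hat at each point
    a = np.abs(ratio)
    alpha = (a - 1.0 / a)[1:]                # symmetric attenuation (skip DC)
    omega = 2.0 * np.pi * np.arange(1, X1.shape[0]) * fs / n_fft
    delta = -np.angle(ratio[1:]) / omega[:, None]   # relative delay
    weight = np.abs(X1[1:]) * np.abs(X2[1:])        # |x1_hat x2_hat|^p, p=1, q=0

    # two-dimensional weighted histogram H(alpha, delta)
    H, a_edges, d_edges = np.histogram2d(
        alpha.ravel(), delta.ravel(), bins=50, weights=weight.ravel())

    # take the two largest bins as peak centers; a real implementation
    # would use a proper 2-D peak finder with smoothing
    idx = np.argsort(H, axis=None)[::-1][:2]
    rows, cols = np.unravel_index(idx, H.shape)
    centers = [np.array([(a_edges[r] + a_edges[r + 1]) / 2.0,
                         (d_edges[c] + d_edges[c + 1]) / 2.0])
               for r, c in zip(rows, cols)]
    return centers, alpha, delta, X1
```

For example, `centers, alpha, delta, X1 = duet_peaks(mix1, mix2, fs=16000)` would yield the two peak centers (Pc_1, Pc_2) used in the following operations, together with the per-point features and one mixture's STFT for later masking.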
However, the recorded source mixtures are usually not W-disjoint orthogonal. In the embodiment, suppose that, for example, only two people are talking at the same time. According to the rule of time-frequency binary mask construction in the DUET algorithm, the time-frequency points are divided into two parts by a mask value of zero or one. Some of the time-frequency points between the two peaks, however, are not W-disjoint orthogonal, and these time-frequency points mix the voices of the two persons (person 1, person 2). In the disclosed embodiment, these time-frequency points are defined as the overlapping points. Because of these overlapping time-frequency points, one of the separated voices may include the other person's voice, which means that the separated sound 1 may also include the sound 2, and results in the separated voice being not pure enough. In fact, the overlapping time-frequency points of the mixed two-person voices belong to neither of the persons. The overlapping points should therefore be categorized into a third category to be eliminated.
To solve the above technical problem, aspects disclosed herein provide, among other things, a method to improve the voice separation performance by eliminating the overlap, in which the overlapping time-frequency points are identified and partitioned into a single separate cluster so that they do not appear in the separated voices. Therefore, the quality of the separated voices can be improved.
In particular, as shown in operation 204, for each time-frequency point (Pt_r) the distance d1 to the first peak center (Pc_1) and the distance d2 to the second peak center (Pc_2) are computed. When |d1−d2|<d0/4, where d0 is the distance between the two peak centers, it can be determined that the time-frequency point (Pt_r) does not belong to either of the two peaks; such a point is an overlapping point and is partitioned into the separate third cluster.
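As a further non-limiting sketch, the determination of operation 204 can be applied to every time-frequency point at once. The following Python sketch assumes the alpha and delta feature arrays and the two peak centers come from a front end such as the sketch above; the names and array shapes are illustrative.

```python
import numpy as np

def partition_with_overlap(alpha, delta, pc1, pc2):
    """Assign each time-frequency point to peak 1, peak 2, or the third
    (overlap) cluster according to the rule |d1 - d2| < d0 / 4."""
    pts = np.stack([alpha, delta], axis=-1)   # (..., 2) feature pairs
    d1 = np.linalg.norm(pts - pc1, axis=-1)   # distance to Pc_1
    d2 = np.linalg.norm(pts - pc2, axis=-1)   # distance to Pc_2
    d0 = np.linalg.norm(pc1 - pc2)            # distance between the centers

    overlap = np.abs(d1 - d2) < d0 / 4.0      # third cluster, to be eliminated
    mask1 = (d1 < d2) & ~overlap              # points kept for sound 1
    mask2 = (d2 <= d1) & ~overlap             # points kept for sound 2
    return mask1, mask2, overlap
```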
Finally, in operation 205, the time-frequency points remaining in each of the two clusters, with the overlapping points eliminated, are converted back into the time domain to recover the separated sound 1 and sound 2, respectively.
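Again as an illustrative sketch only, the retained clusters can be converted back into the time domain with an inverse STFT. This assumes X1 and the masks come from the earlier sketches; since those sketches skip the DC row when estimating the delay, it is padded back with zeros here.

```python
import numpy as np
from scipy.signal import istft

def recover(X1, mask1, mask2, fs, n_fft=1024):
    """Apply the binary masks (overlapping points already removed) to one
    mixture's STFT and convert each cluster back into the time domain."""
    if mask1.shape[0] == X1.shape[0] - 1:
        # the DC row was skipped during delay estimation; pad it with zeros
        zeros = np.zeros((1, mask1.shape[1]), dtype=bool)
        mask1 = np.vstack([zeros, mask1])
        mask2 = np.vstack([zeros, mask2])
    _, s1 = istft(X1 * mask1, fs, nperseg=n_fft)  # separated sound 1
    _, s2 = istft(X1 * mask2, fs, nperseg=n_fft)  # separated sound 2
    return s1, s2
```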
Another object of the disclosed embodiments is to provide a system for improving voice separation performance by eliminating overlaps.
In the illustrated embodiment, the system comprises the at least two microphones (mic1, mic2), the sound recording module, and the algorithm module described above.
As described above, the method and system provided herein eliminate overlaps that exist in the separated voice signals and thus improve the quality of the voice separation. Those skilled in the art will understand that the signals picked up by the microphones in the present invention are not limited to two and can be extended to any number of mixed signals. The algorithm processed in the method and system herein can be performed iteratively.
As used in this application, an element or operation recited in the singular and preceded by the word “a” or “an” should be understood as not excluding plural elements or operations, unless such exclusion is stated. Furthermore, references to “one embodiment” or “one example” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. The terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements or a particular positional order on their objects.
While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention. Additionally, the features of various implementing embodiments may be combined to form further embodiments of the invention.
This application is the U.S. national phase of PCT Application No. PCT/CN2020/076192 filed on Feb. 21, 2020, the disclosure of which is incorporated in its entirety by reference herein.