This application claims the priority of Korean Patent Application No. 10-2007-0103166, filed on Oct. 12, 2007, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
1. Field
One or more embodiments of the present invention relate to a method, medium, and apparatus extracting a target sound from mixed sound, and more particularly, to a method, medium, and apparatus processing mixed sound, which contains various sounds generated by a plurality of sound sources and is input to a portable digital device that can process or capture sounds, such as a cellular phone, a camcorder or a digital recorder, to extract a target sound desired by a user out of the mixed sound.
2. Description of the Related Art
Part of everyday life involves making or receiving phone calls, recording external sounds, and capturing moving images by using portable digital devices. Various digital devices, such as consumer electronics (CE) devices and cellular phones, use a microphone to capture sound. Generally, a microphone array including a plurality of microphones is utilized to implement stereophonic sound which uses two or more channels as contrasted with monophonic sound which uses only a single channel.
A microphone array including a plurality of microphones can acquire not only the sound itself but also additional information regarding the directivity of the sound, such as the direction or position of its source. Directivity is a feature that increases or decreases the sensitivity to a sound signal transmitted from a sound source located in a particular direction by using the differences in the arrival times of the sound signal at the individual microphones of the microphone array. When sound signals are obtained using the microphone array, a sound signal coming from a particular direction can thus be emphasized or suppressed.
As used herein, the term “sound source” denotes a source which radiates sounds, that is, an individual speaker included in a speaker array. In addition, the term “sound field” denotes a virtual region formed by a sound which is radiated from a sound source, that is, a region which sound energy reaches. The term “sound pressure” denotes the power of sound energy which is represented using the physical quantity of pressure.
One or more embodiments of the present invention provide a method, medium, and apparatus extracting a target sound, in which a target sound can be clearly separated from a mixed sound that contains a plurality of sound signals and is input to a microphone array.
Additional aspects and/or advantages will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the invention.
According to an aspect of the present invention, there is provided a method of extracting a target sound. The method includes receiving a mixed signal through a microphone array, generating a first signal whose directivity is emphasized toward a target sound source and a second signal whose directivity toward the target sound source is suppressed based on the mixed signal, and extracting a target sound signal from the first signal by masking an interference sound signal, which is contained in the first signal, based on a ratio of the first signal to the second signal.
According to another aspect of the present invention, there is provided a computer-readable recording medium on which a program for executing the method of extracting a target sound is recorded.
According to another aspect of the present invention, there is provided an apparatus for extracting a target sound. The apparatus includes a microphone array receiving a mixed signal, a beam former generating a first signal whose directivity is emphasized toward a target sound source and a second signal whose directivity toward the target sound source is suppressed based on the mixed signal, and a signal extractor extracting a target sound signal from the first signal by masking an interference sound signal, which is contained in the first signal, based on a ratio of the first signal to the second signal.
These and/or other aspects and advantages will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. In this regard, embodiments of the present invention may be embodied in many different forms and should not be construed as being limited to embodiments set forth herein. Accordingly, embodiments are merely described below, by referring to the figures, to explain aspects of the present invention.
Recording or receiving sounds by using portable digital devices may be performed more often in noisy places, with various ambient interference noises, than in quiet places. When only voice communication was possible using a cellular phone, interference noise input to the microphone of the cellular phone was not a serious problem since the user's mouth was very close to the cellular phone. However, since video and speaker-phone communication are now possible using communication devices, the effect of interference noise on the sound signals generated by a user of the communication device has relatively increased, thereby hindering clear communication. In this regard, a method of extracting a target sound from mixed sound is increasingly required by various sound acquiring devices, such as consumer electronics (CE) devices and cellular phones with built-in microphones.
The target sound source may be determined according to the environment in which various embodiments of the present invention are implemented. Generally, a dominant signal from among a plurality of sound signals contained in a mixed sound signal may be determined to be a target sound source. That is, a sound signal having the highest gain or sound pressure may be determined as a target sound source. Alternatively, the directions or distances of the sound sources 115 and 120 from the microphone array 110 may be taken into consideration to determine a target sound source. That is, a sound source which is located in front of the microphone array 110, or which is located closer to the microphone array 110, is more likely to be a target sound source.
As described above, since a target sound source is determined according to the environment in which various embodiments of the present invention are implemented, it will be understood by those of ordinary skill in the art that various methods other than the above two methods can be used to determine the target sound source.
The configuration of the apparatus of FIG. 2 will now be described in detail.
The microphone array 210 obtains sound signals generated by a plurality of adjacent sound sources in the form of a mixed sound signal. Since the microphone array 210 includes a plurality of microphones, a sound signal generated by each sound source may arrive at each microphone at a different time, depending on the position of the corresponding sound source and the distance between the corresponding sound source and each microphone. It will be assumed that N sound signals X1(t) through XN(t) are received through N microphones of the microphone array 210, respectively.
Based on the sound signals X1(t) through XN(t) received through the microphone array 210, the beam former 220 generates signals whose directivity toward the target sound source is emphasized and signals whose directivity toward the target sound source is suppressed. These signals are generated by an emphasized signal beam former 221 and a suppressed signal beam former 222, respectively.
In order to receive a clear target sound signal that is mixed with background noise, a microphone array having two or more microphones generally functions as a spatial filter: it increases the amplitude of each sound signal received through the microphone array by assigning an appropriate weight to each sound signal, and it spatially reduces noise when the direction of the target sound signal is different from that of an interference noise signal. In this case, the spatial filter is referred to as a beam former. In order to amplify or extract a target sound signal from noise coming from a different direction than the target sound signal, the microphone array pattern and the phase differences between the signals input to the respective microphones must be obtained. This signal information can be obtained using any of a number of conventional beam-forming algorithms.
Major examples of beam-forming algorithms which can be used to amplify or extract a target sound signal include a delay-and-sum algorithm and a filter-and-sum algorithm. In the delay-and-sum algorithm, the position of a sound source is identified based on a relative period of time by which a sound signal generated by the sound source has been delayed before arriving at a microphone. In the filter-and-sum algorithm, output signals are filtered using a spatially linear filter in order to reduce the effects of two or more signals and noise in a sound field formed by sound sources. These beam-forming algorithms are well known to those of ordinary skill in the art to which the embodiment pertains.
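As an illustration of the delay-and-sum idea described above, the following sketch aligns the microphone signals toward an assumed source direction and averages them. The array geometry, steering angle, sample rate, and function names here are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, angle_rad, fs, c=340.0):
    """Steer a linear microphone array by delaying and summing.

    signals:       (n_mics, n_samples) time-domain microphone signals
    mic_positions: (n_mics,) microphone coordinates along the array axis, meters
    angle_rad:     assumed source direction, measured from broadside
    fs:            sampling rate in Hz
    c:             speed of sound in air, about 340 m/s
    """
    n_mics, n_samples = signals.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(n_samples)
    for m in range(n_mics):
        # Far-field arrival-time difference for this microphone.
        tau = mic_positions[m] * np.sin(angle_rad) / c
        # Compensate the delay as a linear phase shift in the frequency domain.
        spectrum = np.fft.rfft(signals[m]) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spectrum, n=n_samples)
    return out / n_mics  # signals from the steered direction add coherently
```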
The emphasized signal beam former 221 illustrated in FIG. 2 will now be described in more detail with reference to FIG. 3.
In FIG. 3, it is assumed that a target sound source is located at a position A in front of the microphone array 310, while other sound sources are located at positions B and C.
When a mixed sound signal is input to the microphone array 310, a sound signal, which is included in the mixed sound signal and transmitted from the position A in front of the microphone array 310, may also be input to the microphone array 310. In this case, the phases and sizes of the sound signals received by the microphones of the microphone array 310 may be almost identical. The adder 320 adds the sound signals, which are received by the microphones of the microphone array 310, respectively, and outputs a sound signal having increased gain and unchanged phase.
On the other hand, when a sound signal transmitted from the position B or C is input to the microphone array 310, it may arrive at each microphone of the microphone array 310 at a different time since each microphone is at a different distance and angle from the sound source located at the position B or C. That is, the sound signal generated by the sound source at the position B or C may arrive at a microphone, which is located closer to the sound source, earlier and may arrive at a microphone, which is located further from the sound source, relatively later.
When the adder 320 adds the sound signals respectively received by the microphones at different times, the sound signals may partially offset each other due to the difference in their arrival times; that is, the gains of the sound signals may be reduced due to the differences between their phases. Even though the phases of the sound signals do not all differ from one another by the same amounts, the gain of the sound signal transmitted from the position B or C is reduced relatively more than that of the sound signal transmitted from the position A. Therefore, as in the present embodiment, the directional sensitivity toward the target sound source in front of the microphone array 310 can be enhanced using the microphone array 310, which includes the microphones spaced at regular intervals, and the adder 320.
Generally, directional control factors, such as the gap between microphones of a microphone array and the delay times of sound signals transmitted to the microphones, are widely used to determine the directional response of the microphone array. The relationship between the directional control factors is defined by Equation 1, for example.

τ=α1·d/c  Equation 1

Here, τ is an adaptive delay which determines the directional response of the microphone array, d is the gap between the microphones, α1 is a control factor introduced to define the relationship between the directional control factors, and c is the velocity of a sound wave in air, that is, about 340 m/sec.
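As a quick numeric illustration, assuming the relationship of Equation 1 as reconstructed above, with an illustrative 2 cm microphone gap and a unit control factor (neither value is taken from the patent):

```python
# Illustrative values only; d and alpha1 are assumptions, not patent values.
d = 0.02        # gap between adjacent microphones, meters (assumed)
alpha1 = 1.0    # directional control factor (assumed)
c = 340.0       # speed of sound in air, m/s
tau = alpha1 * d / c
print(f"adaptive delay tau = {tau * 1e6:.1f} microseconds")  # about 58.8 us
```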
In FIG. 3, the delay unit 330 delays the sound signal X1(t), which is received through one microphone of the microphone array 310, by the adaptive delay of Equation 1, and the subtractor 340 subtracts the delayed signal from a sound signal received through an adjacent microphone.
A sound pressure field of the sound signal X1(t) delayed by the delay unit 330 is defined as a function of each angular frequency of the sound signal X1(t) and the angle at which the sound signal X1(t) from a sound source is incident on the microphone array. The sound pressure field is changed by various factors, such as the gap between the microphones or the incident angle of the sound signal X1(t). Among these factors, the frequency and amplitude of the sound signal X1(t) vary according to the properties of the signal itself, which makes the sound pressure field difficult to control directly. For this reason, it is desirable for the sound pressure field of the sound signal X1(t) to be controlled using the adaptive delay of Equation 1, since that adaptive delay is independent of changes in the frequency or amplitude of the sound signal X1(t).
The LPF 350 ensures that the frequency components contained in the sound pressure field of the sound signal X1(t) remain unchanged, in order to restrain the sound pressure field from being changed by changes in the frequency of the sound signal X1(t). Thus, after the LPF 350 filters the sound signal output from the subtractor 340, the directivity toward the target sound source can be controlled using the adaptive delay of Equation 1, irrespective of the frequency or amplitude of the sound signal. That is, an emphasized sound signal Z(t), whose directivity toward the target sound source is emphasized, may be generated by the target sound-emphasizing beam former of FIG. 3.
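The following is a minimal sketch of the delay-subtract-lowpass structure just described, assuming two microphones. Which signal is delayed, the filter order, and the cutoff frequency are assumptions for illustration, not values from the patent.

```python
import numpy as np
from scipy.signal import butter, lfilter

def emphasize_with_delay_subtract(x1, x2, tau, fs, cutoff_hz=4000.0):
    """Delay x1 by tau seconds, subtract it from x2, then lowpass the result."""
    n = len(x1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Fractional delay applied as a linear phase shift in the frequency domain.
    x1_delayed = np.fft.irfft(
        np.fft.rfft(x1) * np.exp(-2j * np.pi * freqs * tau), n=n
    )
    diff = x2 - x1_delayed
    # Lowpass filtering keeps the response from varying with frequency.
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    return lfilter(b, a, diff)
```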
The target sound-emphasizing beam formers according to two exemplary embodiments of the present invention have been described above with reference to FIG. 3.
A beam former which suppresses directivity toward a target sound source will now be described with reference to FIG. 4. As in FIG. 3, it is assumed that a target sound source is located at a position A in front of the microphone array 410, while other sound sources are located at positions B and C.
A process of suppressing directivity will now be described in more detail. When a mixed sound signal is input to the microphone array 410, a sound signal, which is included in the mixed sound signal and transmitted from the position A in front of the microphone array 410, may also be input to the microphone array 410. In this case, the phases and sizes of the sound signals received by each pair of adjacent microphones among the four microphones of the microphone array 410 may be very similar to each other. That is, the sound signals received through the first and second, the second and third, or the third and fourth microphones may be very similar to each other.
Therefore, if opposite signs are assigned to the sound signals received through each pair of adjacent microphones and the adder 420 then adds the sound signals, the sound signals assigned opposite signs may offset each other. Consequently, the gain or sound pressure of the sound signal from the sound source located at the position A in front of the microphone array 410 is reduced, which, in turn, suppresses directivity toward the target sound source.
On the other hand, when a sound signal generated by the sound source at the position B or C is input to the microphone array 410, each microphone of the microphone array 410 may experience a delay in receiving the sound signal, the duration of which depends on the distance between the sound source and the microphone. That is, the sound signal transmitted from the position B or C arrives at each microphone at a different time. Due to the difference in arrival times, even if opposite signs are assigned to the sound signals received by each pair of adjacent microphones and the sound signals are then added by the adder 420, the sound signals do not greatly offset each other. Therefore, if opposite signs are assigned to the sound signals received by each pair of adjacent microphones of the microphone array 410 and the sound signals are then added by the adder 420, as in the present embodiment, directivity toward the target sound source in front of the microphone array 410 can be suppressed.
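A minimal sketch of this suppression, assuming the four-microphone arrangement described above: adjacent microphone signals are summed with alternating signs, so a wavefront arriving in phase from the front largely cancels. The function name and interface are illustrative assumptions.

```python
import numpy as np

def suppress_front(signals):
    """signals: (n_mics, n_samples) array; returns the sign-alternated sum."""
    n_mics = signals.shape[0]
    signs = np.array([(-1.0) ** m for m in range(n_mics)])  # +1, -1, +1, -1
    # In-phase, equal-amplitude components (a source in front) cancel; delayed
    # components from off-axis sources do not, so they survive the sum.
    return signs @ signals
```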
The present exemplary embodiment is identical to the previous exemplary embodiment illustrated in FIG. 3, except that the sound signals are added after opposite signs are assigned to them, so that directivity toward the target sound source is suppressed rather than emphasized.
The beam formers which emphasize or suppress directivity toward a target sound source according to various embodiments of the present invention have been described above with reference to FIGS. 3 and 4.
The signal extractor 230 may include a time-frequency masking filter (hereinafter, masking filter) 231 and a mixer 232. The signal extractor 230 extracts a target sound signal from the emphasized signal Y(τ) (251) using the masking filter 231, which is set according to a ratio of the amplitude of the emphasized signal Y(τ) (251) to that of the suppressed signal Z(τ) (252) in a time-frequency domain. In this case, the emphasized signal Y(τ) (251) and the suppressed signal Z(τ) (252) are input values. As used herein, the term “masking” refers to a case where one signal suppresses other signals when a number of signals exist at the same time or at adjacent times. Thus, masking is performed based on the expectation that a clearer sound signal will be extracted if the sound signal components can suppress the interference noise components when a sound signal coexists with interference noise.
The masking filter 231 receives the emphasized signal Y(τ) (251) and the suppressed signal Z(τ) (252) and filters them based on a ratio of the amplitude of the emphasized signal Y(τ) (251) to that of the suppressed signal Z(τ) (252) in the time-frequency domain. The mixer 232 mixes the emphasized signal Y(τ) (251) with a signal output from the masking filter 231, thereby extracting a target sound signal O(τ,f) (240) from which interference noise is removed. The filtering process performed by the masking filter 231 of the signal extractor 230 will now be described in more detail with reference to FIG. 5.
The window functions 521 and 522 reconfigure an emphasized signal Y(t) (511) and a suppressed signal Z(t) (512) generated by a beam former (not shown) into individual frames, respectively. In this case, a frame denotes each of a plurality of units into which a sound signal is divided according to time. In addition, a window function denotes a type of filter used to divide a successive sound signal into a plurality of sections, that is, frames, according to time and to process the frames. In digital signal processing, the signal output from a system is represented as the convolution of the input signal with the system response. To limit a given target signal to a finite signal, the target signal is divided into a plurality of individual frames by a window function and processed accordingly. A major example of the window function is a Hamming window, which may be easily understood by those of ordinary skill in the art to which the embodiment pertains.
The emphasized signal Y(t) (511) and the suppressed signal Z(t) (512) reconfigured by the window functions 521 and 522 are transformed into signals in the time-frequency domain by the FFT units 531 and 532 for ease of calculation. Then, an amplitude ratio may be calculated based on the signals in the time-frequency domain as given by Equation 2 below, for example.

α(τ,f)=|Y(τ,f)|/|Z(τ,f)|  Equation 2

Here, τ indicates time, f indicates frequency, and the amplitude ratio α(τ,f) is represented by a ratio of the absolute values of an emphasized signal Y(τ,f) and a suppressed signal Z(τ,f). That is, the amplitude ratio α(τ,f) in Equation 2 denotes a ratio of the emphasized signal to the suppressed signal in each individual frame in the time-frequency domain.
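A short sketch of this computation using an off-the-shelf STFT; the Hamming window matches the window function mentioned above, while the frame length, the small epsilon guarding against division by zero, and the function names are assumptions.

```python
import numpy as np
from scipy.signal import stft

def amplitude_ratio(y, z, fs, nperseg=512, eps=1e-12):
    """Compute alpha(tau, f) = |Y(tau, f)| / |Z(tau, f)| per Equation 2."""
    _, _, Y = stft(y, fs=fs, window="hamming", nperseg=nperseg)
    _, _, Z = stft(z, fs=fs, window="hamming", nperseg=nperseg)
    return np.abs(Y) / (np.abs(Z) + eps)  # eps avoids division by zero
```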
The masking filter-setting unit 550 illustrated in FIG. 5 sets a masking filter based on the amplitude ratio α(τ,f) calculated by the amplitude ratio calculation unit 540. The masking filter may be set in various ways according to embodiments of the present invention.
First, a masking filter may be set using a binary masking filter and a soft masking filter calculated from the binary masking filter. Here, the binary masking filter is a filter which produces only zero and one as output values. The binary masking filter is also referred to as a hard masking filter. On the other hand, the soft masking filter is a filter which is controlled to linearly and gently increase or decrease in response to the variation of binary numbers output from the binary masking filter.
The masking filter-setting unit 550 illustrated in FIG. 5 may set a binary masking filter M(τ,f) by comparing the amplitude ratio α(τ,f) with a masking threshold value 551, as given by Equation 3 below, for example.

M(τ,f)=1, if α(τ,f)≥T(f); M(τ,f)=0, if α(τ,f)<T(f)  Equation 3
Here, T(f) indicates a masking threshold value according to a frequency f of the sound signal. The masking threshold value T(f) is an appropriate value, experimentally obtained according to various embodiments of the present invention, which can be used to determine whether a corresponding frame contains a target signal or interference noise. Since the binary masking filter outputs only the binary values of zero and one, it is referred to as a binary masking filter or a hard masking filter.
In Equation 3, if the amplitude ratio α(τ,f) is greater than or equal to the masking threshold value T(f), that is, if the emphasized signal is greater than the suppressed signal, the binary masking filter is set to one. On the contrary, if the amplitude ratio α(τ,f) is less than the masking threshold value T(f), that is, if the emphasized signal is smaller than the suppressed signal, the binary masking filter is set to zero. Masking in the time-frequency domain requires relatively little computation even when the number of microphones in a microphone array is less than the number of adjacent sound sources, including the target sound source. This is because a number of masking filters equal to the number of sound sources can be generated to perform the masking operation for extracting a target sound, and the number of microphones does not greatly affect the masking operation. Therefore, even when there are a plurality of sound sources, the masking filters can still perform well.
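A minimal sketch of Equation 3. The scalar threshold of 1.0 (emphasized and suppressed amplitudes equal) is an assumed stand-in for the experimentally chosen, frequency-dependent T(f); a per-frequency array would work the same way through broadcasting.

```python
import numpy as np

def binary_mask(alpha, threshold=1.0):
    """Equation 3: pass a time-frequency cell only when alpha >= threshold."""
    return (alpha >= threshold).astype(float)  # 1.0 = target, 0.0 = interference
```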
In the case of the binary masking filter, however, abrupt changes between the output values of zero and one may distort the extracted sound signal, producing an artifact known as musical noise.
Until now, various methods of removing the musical noise have been suggested. A popular example is a Gaussian filter. The Gaussian filter assigns the highest weight to the mean value among the values of a plurality of signal blocks and lower weights to the other values. Thus, the mean value is weighted most by the Gaussian filter, and values further from the mean are weighted less.
Other than the Gaussian filter, various other filters may be used, such as a median filter which selects a median value from values of signal blocks of an equal size in horizontal and vertical directions. These various filters can be easily understood by those of ordinary skill in the art to which the embodiment pertains, and thus a detailed description thereof will be omitted.
Using the above methods, the binary masking filter M(τ,f) may be transformed into a soft masking filter {tilde over (M)}(τ,f), as given by Equation 4 below, for example.
{tilde over (M)}(τ,f)=W(τ,f)M(τ,f) Equation 4
Here, W(τ,f) indicates a Gaussian filter used as a smoothing filter. That is, in Equation 4, the soft masking filter is obtained by multiplying the binary masking filter by the Gaussian filter.
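A sketch of this smoothing step. Here the multiplication by W(τ,f) in Equation 4 is interpreted as applying a Gaussian smoothing filter across neighboring time-frequency cells of the binary mask, which is one common reading; the kernel width is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def soft_mask_from_binary(binary, sigma=1.0):
    """Smooth the hard 0/1 mask over time and frequency to reduce musical noise."""
    return gaussian_filter(binary, sigma=sigma)  # values now vary gently in [0, 1]
```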
The method of setting a soft masking filter by using a binary masking filter has been described above. Next, a method of directly setting a soft masking filter by using the amplitude ratio will be described as another exemplary embodiment of the present invention. In this exemplary embodiment, the masking filter-setting unit 550 does not use a binary masking filter defined by the masking threshold value 551. Instead, the masking filter-setting unit 550 may model a sigmoid function which can directly set the soft masking filter 560 based on the amplitude ratio α(τ,f) calculated by the amplitude ratio calculation unit 540. The sigmoid function is a special function which transforms discontinuous and non-linear input values into continuous values between zero and one; it is a type of transfer function which defines the transformation from input values into output values. The sigmoid function is widely used in neural network theory, where it is difficult to determine an optimum variable and an optimum function due to the many input variables, and the prediction capability of a model is instead enhanced by learning through data accumulation.
In the present exemplary embodiment, the amplitude ratio α(τ,f) is transformed into a value between zero and one by using the sigmoid function, so that the soft masking filter 560 can be set directly without using a binary masking filter, as given by Equation 5 below, for example.

{tilde over (M)}(τ,f)=1/(1+exp(−γ(α(τ,f)−1)))  Equation 5
Here, γ is a variable indicating the inclination of the sigmoid function. It can be understood from Equation 5 that the output of the soft masking filter 560 increases gently from zero to one as the amplitude ratio α(τ,f) increases, and that a larger γ produces a steeper transition.
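A sketch of this direct soft mask. The exact argument of the sigmoid is an assumption: here it is centered where the emphasized and suppressed amplitudes are equal (α = 1), with γ controlling the slope, matching the reconstructed Equation 5 above.

```python
import numpy as np

def sigmoid_mask(alpha, gamma=5.0):
    """Map the amplitude ratio smoothly into (0, 1); gamma sets the slope."""
    return 1.0 / (1.0 + np.exp(-gamma * (alpha - 1.0)))
```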
Referring back to FIG. 5, the soft masking filter {tilde over (M)}(τ,f) set as described above is applied to the emphasized signal Y(τ,f) in order to extract the target sound signal O(τ,f), as given by Equation 6 below, for example.
O(τ,f)={tilde over (M)}(τ,f)·Y(τ,f) Equation 6
Since the extracted target sound signal O(τ,f) (240) is a value in the time-frequency domain, it is inverse-FFTed into a value in the time domain.
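A sketch of Equation 6 followed by the inverse transform: the emphasized spectrogram is multiplied elementwise by the soft mask and then returned to the time domain. The STFT parameters are assumed to match those of the forward transform.

```python
import numpy as np
from scipy.signal import stft, istft

def extract_target(y, mask, fs, nperseg=512):
    """Apply the mask to the emphasized signal y; return a time-domain signal."""
    _, _, Y = stft(y, fs=fs, window="hamming", nperseg=nperseg)
    O = mask * Y  # Equation 6: mask and spectrogram must have the same shape
    _, o = istft(O, fs=fs, window="hamming", nperseg=nperseg)
    return o
```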
The apparatus for extracting a target sound signal when information regarding the direction of a target sound source is given has been described above with reference to FIGS. 2 through 5.
The apparatus for extracting a target sound signal when information regarding the direction of a target sound source is not given will now be described.
When information regarding the position of a target sound source is not given, the sound source search unit 223 searches for the position of the target sound source based on the mixed sound signal received through the microphone array 210, using various algorithms which will be described below. As described above, the sound signal having dominant signal characteristics, that is, the sound signal having the largest gain or sound pressure, from among a plurality of sound signals contained in a mixed sound signal is generally determined as a target sound source. Therefore, the sound source search unit 223 detects the direction or position of the target sound source based on the mixed sound signal which is input to the microphone array 210. In this case, the dominant signal characteristics of a sound signal may be identified based on objective measurement values, such as the signal-to-noise ratio (SNR) of the sound signal. Thus, the direction of the sound source which generated a sound signal having relatively higher measurement values may be determined as the direction in which the target sound source is located.
Various methods of searching for the position of a target sound source, such as time delay of arrival (TDOA), beam forming, and high-resolution spectral analysis, have been widely introduced and will be briefly described below.
In TDOA, the difference in the arrival times of a mixed sound signal at each pair of microphones of the microphone array 210 is measured, and the direction of a target sound source is estimated based on the measured difference. Then, the sound source search unit 223 estimates a spatial position, at which the estimated directions cross each other, to be the position of the target sound source.
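A minimal TDOA sketch for one microphone pair: the inter-microphone delay is estimated from the peak of the cross-correlation and converted to an arrival angle. Far-field propagation, the microphone gap d, and the function name are assumptions.

```python
import numpy as np

def estimate_direction(x1, x2, fs, d, c=340.0):
    """Estimate the arrival angle (radians from broadside) for one mic pair."""
    corr = np.correlate(x1, x2, mode="full")
    lag = np.argmax(corr) - (len(x2) - 1)        # delay in samples
    tau = lag / fs                               # delay in seconds
    sin_theta = np.clip(tau * c / d, -1.0, 1.0)  # clamp numerical overshoot
    return np.arcsin(sin_theta)
```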
In beam forming, the sound source search unit 223 delays the sound signals received at particular angles so as to scan space at each angle, selects the direction in which the scanned sound signal has the highest value as the direction of the target sound source, and estimates the position at which the scanned sound signal has the highest value to be the position of the target sound source.
The above methods of searching for the position of a target sound source can be easily understood by those of ordinary skill in the art to which the embodiments pertain, and thus a more detailed description thereof will be omitted (Juyang Weng, “Three-Dimensional Sound Localization from Compact Non-Coplanar Array of Microphones Using Tree-Based Learning,” JASA, vol. 110, no. 1, pp. 310-323, 2001).
After the sound source search unit 223 determines the direction of the target sound source according to the various embodiments of the present invention described above, it transmits the mixed sound signal to the emphasized signal beam former 221 and the suppressed signal beam former 222 based on the determined direction of the target sound source. The subsequent process is identical to the process described above with reference to FIGS. 2 through 5.
Referring to FIG. 8, a mixed sound signal generated by a plurality of sound sources is first received through a microphone array.
In operations 831 and 832, an emphasized signal having directivity toward the target sound source and a suppressed signal whose directivity toward the target sound source is suppressed are generated. These operations correspond to the operations performed by the emphasized signal beam former 221 and the suppressed signal beam former 222, which have been described above with reference to FIG. 2.
In operations 841 and 842, the emphasized signal and the suppressed signal generated in operations 831 and 832, respectively, are filtered using a window function. Each of operations 841 and 842 corresponds to a process of dividing a continuous signal into a plurality of individual frames of uniform size in order to perform a convolution operation on the continuous signal. The individual frames are FFTed into frames in the time-frequency domain. That is, the emphasized signal and the suppressed signal are transformed into those in the time-frequency domain in operations 841 and 842.
In operation 850, an amplitude ratio of the emphasized signal to the suppressed signal in the time-frequency domain is calculated. The amplitude ratio provides information regarding the ratio of the target sound to the interference noise contained in an individual frame of the sound signal.
In operation 860, a masking filter is set based on the calculated amplitude ratio. Two methods of setting a masking filter according to embodiments of the present invention have been suggested above: a method of setting a masking filter by using a binary masking filter and a masking threshold value, and a method of directly setting a soft masking filter by using a sigmoid function.
In operation 870, the set masking filter is applied to the emphasized signal. That is, the emphasized signal is multiplied by the masking filter so as to extract a target sound signal.
In operation 880, the extracted target sound signal is inverse FFT-ed into a target sound signal in the time domain. The target sound signal in the time domain is finally extracted in operation 890.
In addition to the above described embodiments, embodiments of the present invention can also be implemented through computer readable code/instructions in/on a medium, e.g., a computer readable medium, to control at least one processing element to implement any of the above described embodiments. The medium can correspond to any medium/media permitting the storing and/or transmission of the computer readable code.
The computer readable code can be recorded on a recording medium in a variety of ways, with examples including magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, or DVDs). The computer readable code can also be transferred on transmission media such as media carrying or including carrier waves, as well as elements of the Internet, for example. Thus, the medium may be such a defined and measurable structure including or carrying a signal or information, such as a device carrying a bitstream, for example, according to embodiments of the present invention. The media may also be a distributed network, so that the computer readable code is stored/transferred and executed in a distributed fashion. Still further, as only an example, the processing element could include a processor or a computer processor, and processing elements may be distributed and/or included in a single device.
While aspects of the present invention have been particularly shown and described with reference to differing embodiments thereof, it should be understood that these exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in the remaining embodiments.
Thus, although a few embodiments have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.