The present invention relates to the field of communications technologies, and in particular, to a method and a device for audio signal classification.
A voice encoder performs well when encoding voice-type audio signals at mid-to-low bit rates, but poorly when encoding music-type audio signals. An audio encoder is applicable to encoding both voice-type and music-type audio signals at a high bit rate, but has an unsatisfactory effect on encoding voice-type audio signals at mid-to-low bit rates. In order to achieve a satisfactory encoding effect on audio signals mixing voice and music at mid-to-low bit rates, an encoding process applicable to a voice/audio encoder at mid-to-low bit rates mainly includes: first judging a type of an audio signal by using a signal classification module, and then selecting a corresponding encoding method according to the judged type of the audio signal, namely, selecting a voice encoder for a voice-type audio signal and selecting an audio encoder for a music-type audio signal.
In the prior art, a method for judging the type of the audio signal mainly includes:
1. Divide an input signal into a series of overlapping frames by using a window function.
2. Calculate a spectral coefficient of each frame by using Fast Fourier Transform (FFT).
3. Calculate characteristic parameters in five aspects for each segment according to the spectral coefficient of each frame, namely, harmony, noise, tail, drag out and rhythm.
4. Divide the audio signal into six types based on values of the characteristic parameters, including a voice type, a music type, a noise type, a short segment, a segment to be determined, and a short segment to be determined.
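The framing step of the prior-art method above can be sketched as follows. The frame length of 1024 samples and the 50% overlap are illustrative assumptions for this sketch, not values fixed by the method described above.

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Split signal x into overlapping frames and apply a Hanning window.

    frame_len and hop (50% overlap) are illustrative assumptions.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    frames = np.empty((n_frames, frame_len))
    for i in range(n_frames):
        frames[i] = x[i * hop : i * hop + frame_len] * window
    return frames

# Each frame's spectral coefficients then come from an FFT, e.g.:
# spectra = np.fft.rfft(frames, axis=1)
```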
During implementation of judging the type of the audio signal, the inventor finds that the prior art at least has the following problem: in the method, characteristic parameters of multiple aspects need to be calculated during the classification process, which results in high complexity of the classification.
Embodiments of the present invention provide a method and a device for audio signal classification, so as to reduce complexity of audio signal classification and decrease a calculation amount.
In order to achieve the objectives, the embodiments of the present invention adopt the following technical solutions.
A method for audio signal classification includes:
obtaining a tonal characteristic parameter of an audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in at least one sub-band; and
determining, according to the obtained characteristic parameter, a type of the audio signal to be classified.
A device for audio signal classification includes:
a tone obtaining module, configured to obtain a tonal characteristic parameter of an audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in at least one sub-band; and
a classification module, configured to determine, according to the obtained characteristic parameter, a type of the audio signal to be classified.
The solutions provided in the embodiments of the present invention adopt a technical means of classifying the audio signal through a tonal characteristic of the audio signal, which overcomes a technical problem of high complexity of audio signal classification in the prior art, thus achieving technical effects of reducing complexity of the audio signal classification and decreasing a calculation amount required during the classification.
To illustrate the technical solutions according to the embodiments of the present invention more clearly, accompanying drawings required for describing the embodiments are introduced below briefly. Apparently, the accompanying drawings in the following descriptions are merely some embodiments of the present invention, and persons of ordinary skill in the art may obtain other drawings according to the accompanying drawings without creative efforts.
The technical solutions of the present invention are clearly and fully described in the following with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the embodiments to be described are only part of rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
Embodiments of the present invention provide a method and a device for audio signal classification. A specific execution process of the method includes: obtaining a tonal characteristic parameter of an audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in at least one sub-band; and determining, according to the obtained characteristic parameter, a type of the audio signal to be classified.
The method is implemented through a device including the following modules: a tone obtaining module and a classification module. The tone obtaining module is configured to obtain a tonal characteristic parameter of an audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in at least one sub-band; and the classification module is configured to determine, according to the obtained characteristic parameter, a type of the audio signal to be classified.
In the method and the device for audio signal classification according to the embodiments of the present invention, the type of the audio signal to be classified may be judged through obtaining the tonal characteristic parameter. Aspects of characteristic parameters that need to be calculated are few, and the classification method is simple, thus decreasing a calculation amount during a classification process.
This embodiment provides a method for audio signal classification. As shown in
Step 501: Receive a current frame audio signal, where the audio signal is an audio signal to be classified.
Specifically, it is assumed that a sampling frequency is 48 kHz, a frame length is N=1024 sample points, and the received current frame audio signal is a kth frame audio signal.
A process of calculating a tonal characteristic parameter of the current frame audio signal is described below.
Step 502: Calculate a power spectral density of the current frame audio signal.
Specifically, windowing processing of adding a Hanning window is performed on time-domain data of the kth frame audio signal.
Calculation may be performed through the following Hanning window formula:
where N represents the frame length, and h(l) represents Hanning window data of an lth sample point of the kth frame audio signal.
An FFT with a length of N is performed on the time-domain data of the kth frame audio signal after windowing (because the FFT is symmetrical about N/2, an FFT with a length of N/2 is actually calculated), and a k′th power spectral density in the kth frame audio signal is calculated by using an FFT coefficient.
The k′th power spectral density in the kth frame audio signal may be calculated through the following formula:
where s(l) represents an original input sample point of the kth frame audio signal, and X(k′) represents the k′th power spectral density in the kth frame audio signal.
The calculated power spectral density X(k′) is corrected, so that a maximum value of the power spectral density is a reference sound pressure level (96 dB).
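The windowing and power spectral density computation of steps 501 and 502 can be sketched as follows. The formula bodies referenced above are not reproduced in the text, so the exact scaling here is an assumption following a common psychoacoustic-model convention: a Hanning window, an FFT of length N, magnitude-squared in dB, and a shift so the maximum equals the 96 dB reference sound pressure level.

```python
import numpy as np

def power_spectral_density(s, ref_db=96.0):
    """Windowed power spectral density of one frame in dB, corrected so
    its maximum equals the reference sound pressure level (96 dB).

    The exact scaling of the patent's formula is not reproduced in the
    text above, so this convention is an assumption.
    """
    n = len(s)
    h = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(n) / n))  # Hanning window
    spec = np.fft.rfft(s * h)          # only about N/2 coefficients are needed
    psd = 10.0 * np.log10(np.abs(spec) ** 2 + 1e-12)
    return psd + (ref_db - psd.max())  # shift maximum to the 96 dB reference
```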
Step 503: Detect whether a tone exists in each sub-band of a frequency area by using the power spectral density, collect statistics about the number of tones existing in the corresponding sub-band, and use the number of tones as the number of sub-band tones in the sub-band.
Specifically, the frequency area is divided into four frequency sub-bands, which are respectively represented by sb0, sb1, sb2, and sb3. If the power spectral density X(k′) and a certain adjacent power spectral density meet a certain condition, where the certain condition in this embodiment may be a condition shown as the following formula (3), it is considered that a sub-band corresponding to the X(k′) has a tone. Collect statistics about the number of tones to obtain the number of sub-band tones NTki of the kth frame audio signal in a sub-band i.
X(k′−1)<X(k′)≦X(k′+1) and X(k′)−X(k′+j)≧7 dB (3)
where, values of j are stipulated as follows:
In this embodiment, it is known that the number of coefficients (namely the length) of the power spectral density is N/2. Corresponding to the stipulation of the values of j, a meaning of a value interval of k′ is further described below.
sb0: corresponding to an interval of 2≦k′<63; a corresponding power spectral density coefficient is 0th to (N/16−1)th, and a corresponding frequency range is [0 kHz, 3 kHz).
sb1: corresponding to an interval of 63≦k′<127; a corresponding power spectral density coefficient is N/16th to (N/8−1)th, and a corresponding frequency range is [3 kHz, 6 kHz).
sb2: corresponding to an interval of 127≦k′<255; a corresponding power spectral density coefficient is N/8th to (N/4−1)th, and a corresponding frequency range is [6 kHz, 12 kHz).
sb3: corresponding to an interval of 255≦k′<500; a corresponding power spectral density coefficient is N/4th to N/2th, and a corresponding frequency range is [12 kHz, 24 kHz).
sb0 and sb1 correspond to a low-frequency sub-band part; sb2 corresponds to a relatively high-frequency sub-band part; and sb3 corresponds to a high-frequency sub-band part.
A specific process of collecting statistics about the NTki is described below.
For the sub-band sb0, values of k′ are taken one by one from the interval of 2≦k′<63. For each value of k′, judge whether the value meets the condition of the formula (3). After the entire value interval of k′ is traversed, collect statistics about the number of values of k′ that meet the condition. The number of values of k′ that meet the condition is the number of sub-band tones NTk0 of the kth frame audio signal existing in the sub-band sb0.
For example, if the formula (3) is correct when k′=3, k′=5, and k′=10, it is considered that the sub-band sb0 has three sub-band tones, namely NTk0=3.
Similarly, for the sub-band sb1, values of k′ are taken one by one from the interval of 63≦k′<127. For each value of k′, judge whether the value meets the condition of the formula (3). After the entire value interval of k′ is traversed, collect statistics about the number of values of k′ that meet the condition. The number of values of k′ that meet the condition is the number of sub-band tones NTk1 of the kth frame audio signal existing in the sub-band sb1.
Similarly, for the sub-band sb2, values of k′ are taken one by one from the interval of 127≦k′<255. For each value of k′, judge whether the value meets the condition of the formula (3). After the entire value interval of k′ is traversed, collect statistics about the number of values of k′ that meet the condition. The number of values of k′ that meet the condition is the number of sub-band tones NTk2 of the kth frame audio signal existing in the sub-band sb2.
Statistics about the number of sub-band tones NTk3 of the kth frame audio signal existing in the sub-band sb3 are collected in the same manner.
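The tone detection of step 503 can be sketched as follows. Two points are assumptions: the table stipulating the values of j per sub-band is not reproduced in the text above, so MPEG-style neighbor offsets are used for illustration; and the first condition of formula (3) is interpreted here as a local-peak test.

```python
import numpy as np

# Sub-band k' intervals from the description: sb0, sb1, sb2, sb3.
SUBBANDS = [(2, 63), (63, 127), (127, 255), (255, 500)]

# Neighbor offset ranges j per sub-band: the stipulated table is not
# reproduced in the text, so these MPEG-style values are an assumption.
NEIGHBOR_J = [(-2, 2), (-3, 3), (-6, 6), (-12, 12)]

def count_subband_tones(X):
    """Count tones NTki per sub-band in a power spectral density X (dB)."""
    counts = []
    for (lo, hi), (jlo, jhi) in zip(SUBBANDS, NEIGHBOR_J):
        js = [j for j in range(jlo, jhi + 1) if abs(j) > 1]
        n = 0
        for k in range(max(lo, -jlo), min(hi, len(X) - jhi)):
            # Local spectral peak (interpretation of formula (3)'s
            # first condition): rises from the left, not to the right.
            if not (X[k - 1] < X[k] and X[k] >= X[k + 1]):
                continue
            # Peak exceeds its stipulated neighbors by at least 7 dB.
            if all(X[k] - X[k + j] >= 7.0 for j in js):
                n += 1
        counts.append(n)
    return counts
```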
Step 504: Calculate the total number of tones of the current frame audio signal.
Specifically, a sum of the number of sub-band tones of the kth frame audio signal in the four sub-bands sb0, sb1, sb2 and sb3 is calculated according to the NTki obtained in step 503.
The sum of the number of sub-band tones of the kth frame audio signal in the four sub-bands sb0, sb1, sb2 and sb3 is the number of tones in the kth frame audio signal, which may be calculated through the following formula (4):
NTksum=NTk0+NTk1+NTk2+NTk3 (4)
where NTksum represents the total number of tones of the kth frame audio signal, and NTki represents the number of sub-band tones of the kth frame audio signal in a sub-band i.
Step 505: Calculate an average value of the number of sub-band tones of the current frame audio signal in the corresponding sub-band among the stipulated number of frames.
Specifically, it is assumed that the stipulated number of frames is M, and the M frames include the kth frame audio signal and (M−1) frames audio signals before the kth frame. The average value of the number of sub-band tones of the kth frame audio signal in each sub-band of the M frames audio signals is calculated according to a relationship between a value of M and a value of k.
The average value of the number of sub-band tones may be calculated through the following formula (5):
where NTji represents the number of sub-band tones of a jth frame audio signal in a sub-band i, and ave_NTi represents the average value of the number of sub-band tones in the sub-band i. Particularly, it can be known from the formula (5) that a proper formula may be selected for calculation according to the relationship between the value of k and the value of M.
Particularly, in this embodiment, according to design requirements, it is unnecessary to calculate the average value of the number of sub-band tones in every sub-band; it is sufficient to calculate the average value ave_NT0 of the number of sub-band tones in the low-frequency sub-band sb0 and the average value ave_NT2 of the number of sub-band tones in the relatively high-frequency sub-band sb2.
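The averaging of steps 505 and 506 can be sketched together as a running average over the most recent M frames, using however many frames are available while fewer than M have arrived (the relationship between the value of k and the value of M described above). M=8 is an illustrative choice; the text leaves M as a stipulated number.

```python
from collections import deque

class TonalAverager:
    """Average per-sub-band tone counts and the total tone count over
    the most recent M frames; while fewer than M frames have arrived,
    the average is taken over the frames available so far."""

    def __init__(self, m=8):  # M = 8 is an illustrative assumption
        self.history = deque(maxlen=m)

    def update(self, subband_counts):
        """Feed one frame's NTki counts; return (ave_NTi list, ave_NTsum)."""
        self.history.append(list(subband_counts))
        n = len(self.history)
        ave_nt = [sum(frame[i] for frame in self.history) / n
                  for i in range(len(subband_counts))]
        ave_nt_sum = sum(sum(frame) for frame in self.history) / n
        return ave_nt, ave_nt_sum
```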
Step 506: Calculate an average value of the total number of tones of the current frame audio signal among the stipulated number of frames.
Specifically, it is assumed that the stipulated number of frames is M, and the M frames include the kth frame audio signal and (M−1) frames audio signals before the kth frame. The average value of the total number of tones of the kth frame audio signal in each frame audio signal among the M frames audio signals is calculated according to the relationship between the value of M and the value of k.
The total number of tones may be specifically calculated according to the following formula (6):
where NTjsum represents the total number of tones of a jth frame audio signal, and ave_NTsum represents the average value of the total number of tones. Particularly, it can be known from the formula (6) that a proper formula may be selected for calculation according to the relationship between the value of k and the value of M.
Step 507: Respectively use a ratio between the calculated average value of the number of sub-band tones in at least one sub-band and the average value of the total number of tones as a tonal characteristic parameter of the current frame audio signal in the corresponding sub-band.
The tonal characteristic parameter may be calculated through the following formula (7):
ave_NT_ratioi=ave_NTi/ave_NTsum (7)
where ave_NTi represents the average value of the number of sub-band tones in the sub-band i, ave_NTsum represents the average value of the total number of tones, and ave_NT_ratioi represents the ratio between the average value of the number of sub-band tones of the kth frame audio signal in the sub-band i and the average value of the total number of tones.
Particularly, in this embodiment, by using the average value ave_NT0 of the number of sub-band tones in the low-frequency sub-band sb0 and the average value ave_NT2 of the number of sub-band tones in the relatively high-frequency sub-band sb2 that are calculated in step 505, a tonal characteristic parameter ave_NT_ratio0 of the kth frame audio signal in the sub-band sb0 and a tonal characteristic parameter ave_NT_ratio2 of the kth frame audio signal in the sub-band sb2 are calculated through the formula (7), and ave_NT_ratio0 and ave_NT_ratio2 are used as the tonal characteristic parameters of the frame audio signal.
In this embodiment, the tonal characteristic parameters that need to be considered are the tonal characteristic parameters in the low-frequency sub-band and the relatively high-frequency sub-band. However, the design solution of the present invention is not limited to the one in this embodiment, and tonal characteristic parameters in other sub-bands may also be calculated according to the design requirements.
Step 508: Judge a type of the current frame audio signal according to the tonal characteristic parameter calculated in the foregoing process.
Specifically, judge whether the tonal characteristic parameter ave_NT_ratio0 in the sub-band sb0 and the tonal characteristic parameter ave_NT_ratio2 in the sub-band sb2 that are calculated in step 507 meet a certain relationship with a first coefficient and a second coefficient. In this embodiment, the certain relationship may be the following relational expression (12):
(ave_NT_ratio0>α) and (ave_NT_ratio2<β) (12)
where ave_NT_ratio0 represents the tonal characteristic parameter of the kth frame audio signal in the low-frequency sub-band, ave_NT_ratio2 represents the tonal characteristic parameter of the kth frame audio signal in the relatively high-frequency sub-band, α represents a first coefficient, and β represents a second coefficient.
If the relational expression (12) is met, it is determined that the kth frame audio signal is a voice-type audio signal; if the relational expression (12) is not met, it is determined that the kth frame audio signal is a music-type audio signal.
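The decision of step 508 can be sketched as follows. The values of the first coefficient α and the second coefficient β are not given in the text above, so the defaults here are placeholders, not values taken from the embodiment.

```python
def classify_frame(ave_nt_ratio0, ave_nt_ratio2, alpha=0.5, beta=0.1):
    """Voice/music decision per relational expression (12).

    alpha and beta stand in for the first and second coefficients;
    their values here are placeholders, not stipulated values.
    """
    if ave_nt_ratio0 > alpha and ave_nt_ratio2 < beta:
        return "voice"  # expression (12) met: voice-type audio signal
    return "music"      # expression (12) not met: music-type audio signal
```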
A process of smoothing processing on the current frame audio signal is described below.
Step 509: For the current frame audio signal with the type of the audio signal already judged, further judge whether a type of a previous frame audio signal of the current frame audio signal is the same as a type of a next frame audio signal of the current frame audio signal. If the two types are the same, execute step 510; if the two types are different, execute step 512.
Specifically, judge whether the type of the (k−1)th frame audio signal is the same as the type of the (k+1)th frame audio signal. If it is determined that the type of the (k−1)th frame audio signal is the same as the type of the (k+1)th frame audio signal, execute step 510; if it is determined that the type of the (k−1)th frame audio signal is different from the type of the (k+1)th frame audio signal, execute step 512.
Step 510: Judge whether the type of the current frame audio signal is the same as the type of the previous frame audio signal of the current frame audio signal; if it is determined that the type of the current frame audio signal is different from the type of the previous frame audio signal of the current frame audio signal, execute step 511; if it is determined that the type of the current frame audio signal is the same as the type of the previous frame audio signal of the current frame audio signal, execute step 512.
Specifically, judge whether the type of the kth frame audio signal is the same as the type of the (k−1)th frame audio signal. If the judgment result is that the type of the kth frame audio signal is different from the type of the (k−1)th frame audio signal, execute step 511; if the judgment result is that the type of the kth frame audio signal is the same as the type of the (k−1)th frame audio signal, execute step 512.
Step 511: Modify the type of the current frame audio signal to the type of the previous frame audio signal.
Specifically, the type of the kth frame audio signal is modified to the type of the (k−1)th frame audio signal.
During the smoothing processing on the current frame audio signal in this embodiment, when it is judged whether the smoothing processing needs to be performed on the current frame audio signal, a technical solution of knowing the types of the previous frame audio signal and the next frame audio signal is adopted. However, the method belongs to a process of knowing related information of previous and next frames, and is not limited by the descriptions of this embodiment: a solution of knowing types of at least one previous frame audio signal and at least one next frame audio signal is also applicable to the embodiments of the present invention.
Step 512: The process ends.
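The smoothing of steps 509 to 511 can be sketched over a sequence of already-judged frame types as follows; this is the single-neighbor variant described above, applied frame by frame.

```python
def smooth_types(types):
    """Inter-frame smoothing (steps 509-511): when a frame's previous
    and next frames share a type but the frame itself differs from the
    previous frame, the frame's type is modified to the previous type."""
    out = list(types)
    for k in range(1, len(out) - 1):
        if out[k - 1] == out[k + 1] and out[k] != out[k - 1]:
            out[k] = out[k - 1]  # isolated frame takes its neighbors' type
    return out
```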
In the prior art, five types of characteristic parameters need to be considered during type classification of audio signals. In the method provided in this embodiment, types of most audio signals may be judged by calculating only the tonal characteristic parameters of the audio signals. Compared with the prior art, the classification method is simple, and the calculation amount is small.
This embodiment discloses a method for audio signal classification. As shown in
Step 101: Receive a current frame audio signal, where the audio signal is an audio signal to be classified.
Step 102: Obtain a tonal characteristic parameter of the current frame audio signal, where the tonal characteristic parameter of the current frame audio signal is in at least one sub-band.
Generally, a frequency area is divided into four frequency sub-bands. In each sub-band, the current frame audio signal may obtain a corresponding tonal characteristic parameter. Certainly, according to design requirements, a tonal characteristic parameter of the current frame audio signal in one or two of the sub-bands may be obtained.
Step 103: Obtain a spectral tilt characteristic parameter of the current frame audio signal.
In this embodiment, an execution sequence of step 102 and step 103 is not restricted, and step 102 and step 103 may even be executed at the same time.
Step 104: Judge a type of the current frame audio signal according to at least one tonal characteristic parameter obtained in step 102 and the spectral tilt characteristic parameter obtained in step 103.
In the technical solution provided in this embodiment, a technical means of judging the type of the audio signal according to the tonal characteristic parameter of the audio signal and the spectral tilt characteristic parameter of the audio signal is adopted, which solves a technical problem of complexity in the classification method in which five types of characteristic parameters, such as harmony, noise and rhythm, are required for type classification of audio signals in the prior art, thus achieving technical effects of reducing complexity of the classification method and reducing a classification calculation amount during the audio signal classification.
This embodiment provides a method for audio signal classification. As shown in
Step 201: Receive a current frame audio signal, where the audio signal is an audio signal to be classified.
Specifically, it is assumed that a sampling frequency is 48 kHz, a frame length is N=1024 sample points, and the received current frame audio signal is a kth frame audio signal.
A process of calculating a tonal characteristic parameter of the current frame audio signal is described below.
Step 202: Calculate a power spectral density of the current frame audio signal.
Specifically, windowing processing of adding a Hanning window is performed on time-domain data of the kth frame audio signal.
Calculation may be performed through the following Hanning window formula:
where N represents the frame length, and h(l) represents Hanning window data of an lth sample point of the kth frame audio signal.
An FFT with a length of N is performed on the time-domain data of the kth frame audio signal after windowing (because the FFT is symmetrical about N/2, an FFT with a length of N/2 is actually calculated), and a k′th power spectral density in the kth frame audio signal is calculated by using an FFT coefficient.
The k′th power spectral density in the kth frame audio signal may be calculated through the following formula:
where s(l) represents an original input sample point of the kth frame audio signal, and X(k′) represents the k′th power spectral density in the kth frame audio signal.
The calculated power spectral density X(k′) is corrected, so that a maximum value of the power spectral density is a reference sound pressure level (96 dB).
Step 203: Detect whether a tone exists in each sub-band of a frequency area by using the power spectral density, collect statistics about the number of tones existing in the corresponding sub-band, and use the number of tones as the number of sub-band tones in the sub-band.
Specifically, the frequency area is divided into four frequency sub-bands, which are respectively represented by sb0, sb1, sb2, and sb3. If the power spectral density X(k′) and a certain adjacent power spectral density meet a certain condition, where the certain condition in this embodiment may be a condition shown as the following formula (3), it is considered that a sub-band corresponding to the X(k′) has a tone. Collect statistics about the number of tones to obtain the number of sub-band tones NTki of the kth frame audio signal in a sub-band i.
X(k′−1)<X(k′)≦X(k′+1) and X(k′)−X(k′+j)≧7 dB (3)
where, values of j are stipulated as follows:
In this embodiment, it is known that the number of coefficients (namely the length) of the power spectral density is N/2. Corresponding to the stipulation of the values of j, a meaning of a value interval of k′ is further described below.
sb0: corresponding to an interval of 2≦k′<63; a corresponding power spectral density coefficient is 0th to (N/16−1)th, and a corresponding frequency range is [0 kHz, 3 kHz).
sb1: corresponding to an interval of 63≦k′<127; a corresponding power spectral density coefficient is N/16th to (N/8−1)th, and a corresponding frequency range is [3 kHz, 6 kHz).
sb2: corresponding to an interval of 127≦k′<255; a corresponding power spectral density coefficient is N/8th to (N/4−1)th, and a corresponding frequency range is [6 kHz, 12 kHz).
sb3: corresponding to an interval of 255≦k′<500; a corresponding power spectral density coefficient is N/4th to N/2th, and a corresponding frequency range is [12 kHz, 24 kHz).
sb0 and sb1 correspond to a low-frequency sub-band part; sb2 corresponds to a relatively high-frequency sub-band part; and sb3 corresponds to a high-frequency sub-band part.
A specific process of collecting statistics about the NTki is described below.
For the sub-band sb0, values of k′ are taken one by one from the interval of 2≦k′<63. For each value of k′, judge whether the value meets the condition of the formula (3). After the entire value interval of k′ is traversed, collect statistics about the number of values of k′ that meet the condition. The number of values of k′ that meet the condition is the number of sub-band tones NTk0 of the kth frame audio signal existing in the sub-band sb0.
For example, if the formula (3) is correct when k′=3, k′=5, and k′=10, it is considered that the sub-band sb0 has three sub-band tones, namely NTk0=3.
Similarly, for the sub-band sb1, values of k′ are taken one by one from the interval of 63≦k′<127. For each value of k′, judge whether the value meets the condition of the formula (3). After the entire value interval of k′ is traversed, collect statistics about the number of values of k′ that meet the condition. The number of values of k′ that meet the condition is the number of sub-band tones NTk1 of the kth frame audio signal existing in the sub-band sb1.
Similarly, for the sub-band sb2, values of k′ are taken one by one from the interval of 127≦k′<255. For each value of k′, judge whether the value meets the condition of the formula (3). After the entire value interval of k′ is traversed, collect statistics about the number of values of k′ that meet the condition. The number of values of k′ that meet the condition is the number of sub-band tones NTk2 of the kth frame audio signal existing in the sub-band sb2.
Statistics about the number of sub-band tones NTk3 of the kth frame audio signal existing in the sub-band sb3 are collected in the same manner.
Step 204: Calculate the total number of tones of the current frame audio signal.
Specifically, a sum of the number of sub-band tones of the kth frame audio signal in the four sub-bands sb0, sb1, sb2 and sb3 is calculated according to the NTki obtained in step 203.
The sum of the number of sub-band tones of the kth frame audio signal in the four sub-bands sb0, sb1, sb2 and sb3 is the number of tones in the kth frame audio signal, which may be calculated through the following formula (4):
NTksum=NTk0+NTk1+NTk2+NTk3 (4)
where NTksum represents the total number of tones of the kth frame audio signal, and NTki represents the number of sub-band tones of the kth frame audio signal in a sub-band i.
Step 205: Calculate an average value of the number of sub-band tones of the current frame audio signal in the corresponding sub-band among the stipulated number of frames.
Specifically, it is assumed that the stipulated number of frames is M, and the M frames include the kth frame audio signal and (M−1) frames audio signals before the kth frame. The average value of the number of sub-band tones of the kth frame audio signal in each sub-band of the M frames audio signals is calculated according to a relationship between a value of M and a value of k.
The average value of the number of sub-band tones may be calculated through the following formula (5):
where NTji represents the number of sub-band tones of a jth frame audio signal in a sub-band i, and ave_NTi represents the average value of the number of sub-band tones in the sub-band i. Particularly, it can be known from the formula (5) that a proper formula may be selected for calculation according to the relationship between the value of k and the value of M.
Particularly, in this embodiment, according to design requirements, it is unnecessary to calculate the average value of the number of sub-band tones in every sub-band; it is sufficient to calculate the average value ave_NT0 of the number of sub-band tones in the low-frequency sub-band sb0 and the average value ave_NT2 of the number of sub-band tones in the relatively high-frequency sub-band sb2.
Step 206: Calculate an average value of the total number of tones of the current frame audio signal in the stipulated number of frames.
Specifically, it is assumed that the stipulated number of frames is M, and the M frames include the kth frame audio signal and (M−1) frames audio signals before the kth frame. The average value of the total number of tones of the kth frame audio signal in each frame audio signal among the M frames audio signals is calculated according to the relationship between the value of M and the value of k.
The total number of tones may be specifically calculated according to the following formula (6):
where NTjsum represents the total number of tones of a jth frame audio signal, and ave_NTsum represents the average value of the total number of tones. Particularly, it can be known from the formula (6) that a proper formula may be selected for calculation according to the relationship between the value of k and the value of M.
Step 207: Respectively use a ratio between the calculated average value of the number of sub-band tones in at least one sub-band and the average value of the total number of tones as a tonal characteristic parameter of the current frame audio signal in the corresponding sub-band.
The tonal characteristic parameter may be calculated through the following formula (7):
ave_NT_ratioi=ave_NTi/ave_NTsum (7)
where ave_NTi represents the average value of the number of sub-band tones in the sub-band i, ave_NTsum represents the average value of the total number of tones, and ave_NT_ratioi represents the ratio between the average value of the number of sub-band tones of the kth frame audio signal in the sub-band i and the average value of the total number of tones.
Particularly, in this embodiment, by using the average value ave_NT0 of the number of sub-band tones in the low-frequency sub-band sb0 and the average value ave_NT2 of the number of sub-band tones in the relatively high-frequency sub-band sb2 that are calculated in step 205, a tonal characteristic parameter ave_NT_ratio0 of the kth frame audio signal in the sub-band sb0 and a tonal characteristic parameter ave_NT_ratio2 of the kth frame audio signal in the sub-band sb2 are calculated through the formula (7), and ave_NT_ratio0 and ave_NT_ratio2 are used as the tonal characteristic parameters of the kth frame audio signal.
In this embodiment, the tonal characteristic parameters that need to be considered are the tonal characteristic parameters in the low-frequency sub-band and the relatively high-frequency sub-band. However, the design solution of the present invention is not limited to the one in this embodiment, and tonal characteristic parameters in other sub-bands may also be calculated according to the design requirements.
A process of calculating a spectral tilt characteristic parameter of the current frame audio signal is described below.
Step 208: Calculate a spectral tilt of one frame audio signal.
Specifically, calculate a spectral tilt of the kth frame audio signal.
The spectral tilt of the kth frame audio signal may be calculated through the following formula (8):
where s(n) represents an nth time-domain sample point of the kth frame audio signal, r represents an autocorrelation parameter, and spec_tiltk represents the spectral tilt of the kth frame audio signal.
Step 209: Calculate, according to the spectral tilt of one frame calculated above, a spectral tilt average value of the current frame audio signal in the stipulated number of frames.
Specifically, it is assumed that the stipulated number of frames is M, and the M frames include the kth frame audio signal and (M−1) frames audio signals before the kth frame. The average of the spectral tilts of the M frames audio signals, namely the spectral tilt average value, is calculated according to the relationship between the value of M and the value of k.
The spectral tilt average value may be calculated through the following formula (9):
where k represents a frame number of the current frame audio signal, M represents the stipulated number of frames, spec_tiltj represents the spectral tilt of the jth frame audio signal, and ave_spec_tilt represents the spectral tilt average value. Particularly, it can be known from the formula (9) that a proper formula may be selected for calculation according to the relationship between the value of k and the value of M.
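The averaging of formula (9), including the branch on the relationship between k and M noted above, may be sketched as follows. The handling of the start-up case (fewer than M earlier frames available) follows the description; the indexing convention is an assumption.

```python
def spectral_tilt_average(tilts, k, M):
    """Sketch of formula (9): average the spectral tilts over the last M
    frames up to frame k (1-based); when k < M, average over the k
    frames that are available instead."""
    n = min(k, M)             # branch on the relationship between k and M
    window = tilts[k - n:k]   # the n most recent frames up to frame k
    return sum(window) / n
```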
Step 210: Use a mean-square error between the spectral tilt of at least one frame audio signal and the calculated spectral tilt average value as a spectral tilt characteristic parameter of the current frame audio signal.
Specifically, it is assumed that the stipulated number of frames is M, and the M frames include the kth frame audio signal and the (M−1) frames of audio signals before the kth frame. The mean-square error between the spectral tilt of at least one frame audio signal and the spectral tilt average value is calculated according to the relationship between the value of M and the value of k. The mean-square error is the spectral tilt characteristic parameter of the current frame audio signal.
The spectral tilt characteristic parameter may be calculated through the following formula (10):
where k represents the frame number of the current frame audio signal, ave_spec_tilt represents the spectral tilt average value, and dif_spec_tilt represents the spectral tilt characteristic parameter. Particularly, it can be known from the formula (10) that a proper formula may be selected for calculation according to the relationship between the value of k and the value of M.
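The mean-square error of formula (10) may be sketched as below. As with formula (9), the window selection according to the relationship between k and M follows the description, while the exact normalization is an assumption.

```python
def spectral_tilt_feature(tilts, k, M, ave_spec_tilt):
    """Sketch of formula (10): the mean-square error between the spectral
    tilts in the window and the spectral tilt average value is used as
    the spectral tilt characteristic parameter dif_spec_tilt."""
    n = min(k, M)             # branch on the relationship between k and M
    window = tilts[k - n:k]
    return sum((t - ave_spec_tilt) ** 2 for t in window) / n
```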
An execution sequence of a process of calculating the tonal characteristic parameter (step 202 to step 207) and a process of calculating the spectral tilt characteristic parameter (step 208 to step 210) in the foregoing description of this embodiment is not restricted, and the two processes may even be executed at the same time.
Step 211: Judge a type of the current frame audio signal according to the tonal characteristic parameter and the spectral tilt characteristic parameter that are calculated in the foregoing processes.
Specifically, judge whether the tonal characteristic parameter ave_NT_ratio0 in the sub-band sb0 and the tonal characteristic parameter ave_NT_ratio2 in the sub-band sb2 that are calculated in step 207, and the spectral tilt characteristic parameter dif_spec_tilt calculated in step 210 meet a certain relationship with a first parameter, a second parameter and a third parameter. In this embodiment, the certain relationship may be the following relational expression (11):
(ave_NT_ratio0 > α) and (ave_NT_ratio2 < β) and (dif_spec_tilt > γ) (11)
where ave_NT_ratio0 represents the tonal characteristic parameter of the kth frame audio signal in the low-frequency sub-band, ave_NT_ratio2 represents the tonal characteristic parameter of the kth frame audio signal in the relatively high-frequency sub-band, dif_spec_tilt represents the spectral tilt characteristic parameter of the kth frame audio signal, α represents a first coefficient, β represents a second coefficient, and γ represents a third coefficient.
If the certain relationship, namely the relational expression (11), is met, it is determined that the kth frame audio signal is a voice-type audio signal; if the relational expression (11) is not met, it is determined that the kth frame audio signal is a music-type audio signal.
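The decision of step 211 based on the relational expression (11) may be sketched as follows. The function name is illustrative, and the coefficient values α, β and γ are left to the implementation, as the embodiment does not fix them.

```python
def classify_frame(ave_nt_ratio0, ave_nt_ratio2, dif_spec_tilt,
                   alpha, beta, gamma):
    """Relational expression (11): the frame is judged to be a voice-type
    audio signal when the low-frequency tonal parameter exceeds alpha,
    the relatively high-frequency tonal parameter is below beta, and the
    spectral tilt characteristic parameter exceeds gamma; otherwise it
    is judged to be a music-type audio signal."""
    if ave_nt_ratio0 > alpha and ave_nt_ratio2 < beta and dif_spec_tilt > gamma:
        return "voice"
    return "music"
```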
A process of smoothing processing on the current frame audio signal is described below.
Step 212: For the current frame audio signal whose type has already been judged, further judge whether the type of the previous frame audio signal of the current frame audio signal is the same as the type of the next frame audio signal of the current frame audio signal; if the two types are the same, execute step 213; if the two types are different, execute step 215.
Specifically, judge whether the type of the (k−1)th frame audio signal is the same as the type of the (k+1)th frame audio signal. If the judgment result is that the type of the (k−1)th frame audio signal is the same as the type of the (k+1)th frame audio signal, execute step 213; if the judgment result is that the type of the (k−1)th frame audio signal is different from the type of the (k+1)th frame audio signal, execute step 215.
Step 213: Judge whether the type of the current frame audio signal is the same as the type of the previous frame audio signal of the current frame audio signal; if the two types are different, execute step 214; if the two types are the same, execute step 215.
Specifically, judge whether the type of the kth frame audio signal is the same as the type of the (k−1)th frame audio signal. If the judgment result is that the type of the kth frame audio signal is different from the type of the (k−1)th frame audio signal, execute step 214; if the judgment result is that the type of the kth frame audio signal is the same as the type of the (k−1)th frame audio signal, execute step 215.
Step 214: Modify the type of the current frame audio signal to the type of the previous frame audio signal.
Specifically, the type of the kth frame audio signal is modified to the type of the (k−1)th frame audio signal.
During the smoothing processing on the current frame audio signal described in this embodiment, when the type of the current frame audio signal, namely the type of the kth frame audio signal, is judged in step 212, the next step 213 cannot be performed until the type of the (k+1)th frame audio signal is judged. It seems that a frame of delay is introduced here to wait for the type of the (k+1)th frame audio signal to be judged. However, generally, an encoder algorithm has a frame of delay when encoding each frame audio signal, and this embodiment happens to utilize that frame of delay to carry out the smoothing processing, which not only avoids misjudgment of the type of the current frame audio signal, but also prevents the introduction of an extra delay, so as to achieve a technical effect of real-time classification of the audio signal.
When requirements on delay are not strict, during the smoothing processing on the current frame audio signal in this embodiment, it may also be decided whether the smoothing processing needs to be performed on a current audio signal through judging the types of the previous three frames and the types of the next three frames of the current audio signal, or the types of the previous five frames and the types of the next five frames of the current audio signal. The specific number of the related previous and next frames that need to be known is not limited by the description in this embodiment. Because more information about the related previous and next frames is known, the effect of the smoothing processing may be better.
Step 215: The process ends.
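The smoothing logic of steps 212 to 215 may be sketched as a single function over three consecutive frame types. The function name and interface are illustrative; the decision structure follows the steps described above.

```python
def smooth_type(prev_type, cur_type, next_type):
    """Sketch of steps 212-215: when the previous and next frame types
    agree (step 212) but the current frame type differs from the
    previous one (step 213), the current frame type is modified to the
    previous frame type (step 214); otherwise it is kept (step 215)."""
    if prev_type == next_type and cur_type != prev_type:
        return prev_type  # step 214: correct the isolated misjudgment
    return cur_type       # step 215: no modification needed
```

For example, an isolated voice-type frame surrounded by music-type frames would be relabeled as music-type, matching the misjudgment correction described above.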
Compared with the prior art in which type classification of audio signals is implemented according to five types of characteristic parameters, the method for audio signal classification provided in this embodiment may implement the type classification of audio signals merely according to two types of characteristic parameters. The classification algorithm is simple; the complexity is low; and the calculation amount during the classification process is reduced. At the same time, in the solution of this embodiment, a technical means of performing smoothing processing on the classified audio signal is also adopted, so as to achieve beneficial effects of improving a recognition rate of the type of the audio signal, and giving full play to the functions of a voice encoder and an audio encoder during a subsequent encoding process.
Corresponding to the first embodiment, this embodiment specifically provides a device for audio signal classification. As shown in
The receiving module 40 is configured to receive a current frame audio signal, where the current frame audio signal is an audio signal to be classified. The tone obtaining module 41 is configured to obtain a tonal characteristic parameter of the audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in at least one sub-band. The classification module 43 is configured to determine, according to the tonal characteristic parameter obtained by the tone obtaining module 41, a type of the audio signal to be classified. The first judging module 44 is configured to judge whether a type of at least one previous frame audio signal of the audio signal to be classified is the same as a type of at least one corresponding next frame audio signal of the audio signal to be classified after the classification module 43 classifies the type of the audio signal to be classified. The second judging module 45 is configured to judge whether the type of the audio signal to be classified is different from the type of the at least one previous frame audio signal when the first judging module 44 determines that the type of the at least one previous frame audio signal of the audio signal to be classified is the same as the type of the at least one corresponding next frame audio signal of the audio signal to be classified. The smoothing module 46 is configured to perform smoothing processing on the audio signal to be classified when the second judging module 45 determines that the type of the audio signal to be classified is different from the type of the at least one previous frame audio signal. The first setting module 47 is configured to preset the stipulated number of frames for calculation.
In this embodiment, if the tonal characteristic parameter in at least one sub-band obtained by the tone obtaining module 41 is: a tonal characteristic parameter in a low-frequency sub-band and a tonal characteristic parameter in a relatively high-frequency sub-band, the classification module 43 includes a judging unit 431 and a classification unit 432.
The judging unit 431 is configured to judge whether the tonal characteristic parameter, in the low-frequency sub-band, of the audio signal to be classified is greater than a first coefficient, and whether the tonal characteristic parameter in the relatively high-frequency sub-band is smaller than a second coefficient. The classification unit 432 is configured to determine that the type of the audio signal to be classified is a voice type when the judging unit 431 determines that the tonal characteristic parameter in the low-frequency sub-band is greater than the first coefficient and the tonal characteristic parameter in the relatively high-frequency sub-band is smaller than the second coefficient, and determine that the type of the audio signal to be classified is a music type when the judging unit 431 determines that the tonal characteristic parameter in the low-frequency sub-band is not greater than the first coefficient or the tonal characteristic parameter in the relatively high-frequency sub-band is not smaller than the second coefficient.
The tone obtaining module 41 is configured to calculate the tonal characteristic parameter according to the number of tones of the audio signal to be classified, where the number of tones of the audio signal to be classified is in at least one sub-band, and the total number of tones of the audio signal to be classified.
Further, the tone obtaining module 41 in this embodiment includes a first calculation unit 411, a second calculation unit 412 and a tonal characteristic unit 413.
The first calculation unit 411 is configured to calculate an average value of the number of sub-band tones of the audio signal to be classified, where the number of sub-band tones of the audio signal to be classified is in at least one sub-band. The second calculation unit 412 is configured to calculate an average value of the total number of tones of the audio signal to be classified. The tonal characteristic unit 413 is configured to respectively use a ratio between the average value of the number of sub-band tones in at least one sub-band and the average value of the total number of tones as a tonal characteristic parameter of the audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in the corresponding sub-band.
The calculating, by the first calculation unit 411, the average value of the number of sub-band tones of the audio signal to be classified, where the number of sub-band tones of the audio signal to be classified is in at least one sub-band, includes: calculating the average value of the number of sub-band tones in one sub-band according to a relationship between the stipulated number of frames for calculation, where the stipulated number of frames for calculation is set by the first setting module 47, and a frame number of the audio signal to be classified.
The calculating, by the second calculation unit 412, the average value of the total number of tones of the audio signal to be classified includes: calculating the average value of the total number of tones according to the relationship between the stipulated number of frames for calculation, where the stipulated number of frames for calculation is set by the first setting module 47, and the frame number of the audio signal to be classified.
With the device for audio signal classification provided in this embodiment, a technical means of obtaining the tonal characteristic parameter of the audio signal is adopted, so as to achieve a technical effect of judging types of most audio signals, reducing complexity of a classification method for audio signal classification, and meanwhile decreasing a calculation amount during the audio signal classification.
Corresponding to the method for audio signal classification in the second embodiment, this embodiment discloses a device for audio signal classification. As shown in
The receiving module 30 is configured to receive a current frame audio signal. The tone obtaining module 31 is configured to obtain a tonal characteristic parameter of an audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in at least one sub-band. The spectral tilt obtaining module 32 is configured to obtain a spectral tilt characteristic parameter of the audio signal to be classified. The classification module 33 is configured to determine a type of the audio signal to be classified according to the tonal characteristic parameter obtained by the tone obtaining module 31 and the spectral tilt characteristic parameter obtained by the spectral tilt obtaining module 32.
In the prior art, multiple aspects of characteristic parameters of audio signals need to be considered during audio signal classification, which leads to high complexity of classification and a great calculation amount. However, in the solution provided in this embodiment, during the audio signal classification, the type of the audio signal may be recognized merely according to two characteristic parameters, namely the tonal characteristic parameter of the audio signal and the spectral tilt characteristic parameter of the audio signal, so that the audio signal classification becomes easy, and the calculation amount during the classification is also decreased.
This embodiment specifically provides a device for audio signal classification. As shown in
The receiving module 40 is configured to receive a current frame audio signal, where the current frame audio signal is an audio signal to be classified. The tone obtaining module 41 is configured to obtain a tonal characteristic parameter of the audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in at least one sub-band. The spectral tilt obtaining module 42 is configured to obtain a spectral tilt characteristic parameter of the audio signal to be classified. The classification module 43 is configured to judge a type of the audio signal to be classified according to the tonal characteristic parameter obtained by the tone obtaining module 41 and the spectral tilt characteristic parameter obtained by the spectral tilt obtaining module 42. The first judging module 44 is configured to judge whether a type of at least one previous frame audio signal of the audio signal to be classified is the same as a type of at least one corresponding next frame audio signal of the audio signal to be classified after the classification module 43 classifies the type of the audio signal to be classified. The second judging module 45 is configured to judge whether the type of the audio signal to be classified is different from the type of the at least one previous frame audio signal when the first judging module 44 determines that the type of the at least one previous frame audio signal of the audio signal to be classified is the same as the type of the at least one corresponding next frame audio signal of the audio signal to be classified. The smoothing module 46 is configured to perform smoothing processing on the audio signal to be classified when the second judging module 45 determines that the type of the audio signal to be classified is different from the type of the at least one previous frame audio signal. 
The first setting module 47 is configured to preset the stipulated number of frames for calculation during calculation of the tonal characteristic parameter. The second setting module 48 is configured to preset the stipulated number of frames for calculation during calculation of the spectral tilt characteristic parameter.
The tone obtaining module 41 is configured to calculate the tonal characteristic parameter according to the number of tones of the audio signal to be classified, where the number of tones of the audio signal to be classified is in at least one sub-band, and the total number of tones of the audio signal to be classified.
In this embodiment, if the tonal characteristic parameter in at least one sub-band, where the tonal characteristic parameter in at least one sub-band is obtained by the tone obtaining module 41, is: a tonal characteristic parameter in a low-frequency sub-band and a tonal characteristic parameter in a relatively high-frequency sub-band, the classification module 43 includes a judging unit 431 and a classification unit 432.
The judging unit 431 is configured to judge whether the spectral tilt characteristic parameter of the audio signal is greater than a third coefficient when the tonal characteristic parameter of the audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in the low-frequency sub-band, is greater than a first coefficient, and the tonal characteristic parameter in the relatively high-frequency sub-band is smaller than a second coefficient. The classification unit 432 is configured to determine that the type of the audio signal to be classified is a voice type when the judging unit determines that the spectral tilt characteristic parameter of the audio signal to be classified is greater than the third coefficient, and determine that the type of the audio signal to be classified is a music type when the judging unit determines that the spectral tilt characteristic parameter of the audio signal to be classified is not greater than the third coefficient.
Further, the tone obtaining module 41 in this embodiment includes a first calculation unit 411, a second calculation unit 412 and a tonal characteristic unit 413.
The first calculation unit 411 is configured to calculate an average value of the number of sub-band tones of the audio signal to be classified, where the average value of the number of sub-band tones of the audio signal to be classified is in at least one sub-band. The second calculation unit 412 is configured to calculate an average value of the total number of tones of the audio signal to be classified. The tonal characteristic unit 413 is configured to respectively use a ratio between the average value of the number of sub-band tones in at least one sub-band and the average value of the total number of tones as a tonal characteristic parameter of the audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in the corresponding sub-band.
The calculating, by the first calculation unit 411, the average value of the number of sub-band tones of the audio signal to be classified, where the average value of the number of sub-band tones of the audio signal to be classified is in at least one sub-band includes: calculating the average value of the number of sub-band tones in one sub-band according to a relationship between the stipulated number of frames for calculation, where the stipulated number of frames for calculation is set by the first setting module 47, and a frame number of the audio signal to be classified.
The calculating, by the second calculation unit 412, the average value of the total number of tones of the audio signal to be classified includes: calculating the average value of the total number of tones according to the relationship between the stipulated number of frames for calculation, where the stipulated number of frames for calculation is set by the first setting module 47, and the frame number of the audio signal to be classified.
Further, in this embodiment, the spectral tilt obtaining module 42 includes a third calculation unit 421 and a spectral tilt characteristic unit 422.
The third calculation unit 421 is configured to calculate a spectral tilt average value of the audio signal to be classified. The spectral tilt characteristic unit 422 is configured to use a mean-square error between the spectral tilt of at least one frame audio signal and the spectral tilt average value as the spectral tilt characteristic parameter of the audio signal to be classified.
The calculating, by the third calculation unit 421, the spectral tilt average value of the audio signal to be classified includes: calculating the spectral tilt average value according to the relationship between the stipulated number of frames for calculation, where the stipulated number of frames for calculation is set by the second setting module 48, and the frame number of the audio signal to be classified.
The calculating, by the spectral tilt characteristic unit 422, the mean-square error between the spectral tilt of at least one audio signal and the spectral tilt average value includes: calculating the spectral tilt characteristic parameter according to the relationship between the stipulated number of frames for calculation, where the stipulated number of frames for calculation is set by the second setting module 48, and the frame number of the audio signal to be classified.
The first setting module 47 and the second setting module 48 in this embodiment may be implemented through a program or a module, or the first setting module 47 and the second setting module 48 may even set the same stipulated number of frames for calculation.
The solution provided in this embodiment has the following beneficial effects: easy classification, low complexity and a small calculation amount; no extra delay is introduced to an encoder, and requirements of real-time encoding and low complexity of a voice/audio encoder during a classification process under mid-to-low bit rates are satisfied.
The embodiments of the present invention are mainly applied to the field of communications technologies, and implement fast, accurate and real-time type classification of audio signals. With the development of network technologies, the embodiments of the present invention may be applied to other scenarios in the field, and may also be used in other similar or close technical fields.
Through the description of the preceding embodiments, persons skilled in the art may clearly understand that the present invention may certainly be implemented by hardware, but in most cases is more preferably implemented by software on a necessary universal hardware platform. Based on such understanding, the technical solution of the present invention, or the part that makes contributions to the prior art, may be substantially embodied in the form of a software product. The computer software product may be stored in a readable storage medium, for example, a floppy disk, hard disk, or optical disk of a computer, and contains several instructions used to instruct an encoder to implement the method according to the embodiments of the present invention.
The foregoing is only the specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be easily figured out by persons skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
200910129157.3 | Mar 2009 | CN | national |
This application is a continuation of International Application No. PCT/CN2010/071373, filed on Mar. 27, 2010, which claims priority to Chinese Patent Application No. 200910129157.3, filed on Mar. 27, 2009, both of which are hereby incorporated by reference in their entireties.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN10/71373 | Mar 2010 | US |
Child | 13246485 | US |