1. Field of the Invention
The present invention relates to a method for segmenting an audio data stream that is broadcast or recorded on some medium, wherein the audio data stream is a sequence of digital samples or may be transformed into such a sequence. The goal of the segmentation is to divide the audio data stream into segments that correspond to different physical sources of the audio signal. When one or more sources and one or more background sources emit the audio signal, the parameters of the background source(s) do not change essentially within a single segment.
2. Description of the Related Art
Audio and video recordings have become commonplace with the advent of consumer-grade recording equipment. Unfortunately, both audio and video streams provide few clues to assist in accessing a desired section of the record. In books, indexing is provided by the table of contents at the front and the index at the end, which readers can browse to locate authors and references to authors. A similar indexing scheme would be useful in an audio stream, for example to help locate sections where specific speakers are talking. The limited amount of data associated with most audio records does not provide enough information to access desired points of interest confidently and easily, so the user has to peruse the contents of a record in sequential order to retrieve the desired information.
One solution to this problem is an automatic system for indexing audio events in the audio data stream. The indexing process consists of two sequential parts: segmentation and classification. Segmentation divides the audio stream into segments that are homogeneous in some sense; classification labels these segments with appropriate annotations. Segmentation is thus the first and a very important stage of the indexing process, and it is the primary concern of the present invention.
The basic audio events in an audio stream are conventionally taken to be speech, music, and noise (that is, non-speech and non-music). Most attention in the field is given to speech detection, segmentation, and indexing in audio streams such as broadcast news.
Broadcast news data arrive as long unsegmented streams, which not only contain speech with various speakers, backgrounds, and channels, but also contain a great deal of non-speech audio information. It is therefore necessary to chop the long stream into smaller segments. It is also important to make these smaller segments homogeneous (each segment containing data from one source only), so that the non-speech information can be discarded and segments from the same or similar sources can be clustered for speaker normalization and adaptation.
Zhan et al., “Dragon Systems' 1997 Mandarin Broadcast News System”, Proceedings of the Broadcast News Transcription and Understanding Workshop, Lansdowne, Va., pp. 25-27, 1998, produced segments by looking for sufficiently long silence regions in the output of a coarse recognition pass. This method generated a considerable number of multi-speaker segments, and no speaker change information was used in the segmentation.
In subsequent work, Wegmann et al., “Progress in Broadcast News Transcription at Dragon Systems”, Proceedings of ICASSP'99, Phoenix, Ariz., March 1999, used speaker change detection in the segmentation pass. Their automatic segmentation proceeds as follows (a simplified sketch of the first step appears after this list):
An amplitude-based detector was used to break the input into chunks that are 20 to 30 seconds long.
These chunks were further chopped into segments 2 to 30 seconds long, based on silences produced by a fast word recognizer.
These segments were further refined using a speaker change detector.
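To picture the first step, the following is a minimal stand-in, not the published detector: it assumes floating-point samples in [-1, 1], frames the signal, and cuts at the first sufficiently quiet frame once a chunk is long enough, forcing a cut at the maximum length. The names and threshold values are hypothetical.

```python
import numpy as np

def chunk_by_amplitude(samples, rate, frame_ms=20, silence_db=-40.0,
                       min_chunk_s=20.0, max_chunk_s=30.0):
    """Break audio into roughly 20-30 s chunks at low-energy frames (a
    simplified stand-in for an amplitude-based detector)."""
    samples = np.asarray(samples, dtype=np.float64)
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    # Per-frame energy in dB relative to full scale.
    energy_db = 10.0 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    boundaries, last_cut = [0], 0
    for i in range(n_frames):
        chunk_s = (i - last_cut) * frame_ms / 1000.0
        # Cut at the first quiet frame once the chunk is long enough,
        # or force a cut when the maximum chunk length is reached.
        if (chunk_s >= min_chunk_s and energy_db[i] < silence_db) or chunk_s >= max_chunk_s:
            boundaries.append(i * frame_len)
            last_cut = i
    boundaries.append(len(samples))
    return list(zip(boundaries[:-1], boundaries[1:]))
```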
Balasubramanian et al., U.S. Pat. No. 5,606,643, enables retrieval based on indexing an audio stream of a recording according to the speaker. In particular, the audio stream may be segmented into speaker events, and each segment labeled with the type of event, or speaker identity. When speech from individuals is intermixed, for example in conversational situations, the audio stream may be segregated into events according to speaker difference, with segments created by the same speaker identified or marked.
Creating an index in an audio stream, either in real time or in post-processing, may enable a user to locate particular segments of the audio data. For example, this may enable a user to browse a recording to select audio segments corresponding to a specific speaker, or “fast-forward” through a recording to the next speaker. In addition, knowing the ordering of speakers can also provide content clues about the conversation, or about the context of the conversation.
The ultimate goal of the segmentation is to produce a sequence of discrete segments with particular characteristics remaining constant within each one. The characteristics of choice depend on the overall structure of the indexation system.
Saunders, “Real-Time Discrimination of Broadcast Speech/Music”, Proc. ICASSP 1996, pp. 993-996, describes a speech/music discriminator based on zero-crossings. Its application is discrimination between advertisements and programs in radio broadcasts. Since it is intended to be incorporated in consumer radios, it must be low cost and simple. It is mainly designed to detect the characteristics of speech, described as: limited bandwidth, alternating voiced and unvoiced sections, a limited range of pitch, syllabic duration of vowels, and energy variations between high and low levels. It indirectly uses the amplitude, pitch, and periodicity of the waveform to carry out the detection, since zero-crossings give an estimate of the dominant frequency in the waveform.
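Since the zero-crossing rate is the core quantity in such a detector, a minimal computation of it may clarify the approach; the framing parameters below are illustrative, not Saunders's.

```python
import numpy as np

def zero_crossing_rate(samples, rate, frame_ms=20):
    """Per-frame zero-crossing rate in crossings per second, a cheap
    estimate of the dominant frequency of the waveform."""
    samples = np.asarray(samples, dtype=np.float64)
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    signs = np.where(frames >= 0, 1, -1)               # sign of each sample
    crossings = np.sum(signs[:, 1:] != signs[:, :-1], axis=1)
    return crossings / (frame_ms / 1000.0)             # crossings per second
```

A Saunders-style discriminator then examines the statistics of this rate over time (e.g., its variation across a few seconds), since speech alternates voiced and unvoiced sections while music generally does not.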
Zue and Spina, “Automatic Transcription of General Audio Data: Preliminary Analyses”, Proc. ICSLP 1996, pp. 594-597, use an average of the cepstral coefficients over a series of frames. This is shown to work well in distinguishing between speech and music when the speech is band-limited to 4 kHz and the music to 16 kHz, but less well when both signals occupy a 16 kHz bandwidth.
Scheirer and Slaney, “Construction and Evaluation of a Robust Multifeature Speech/Music Discriminator”, Proc. ICASSP 1997, pp. 1331-1334, use a variety of features. These are: 4 Hz modulation energy, low energy, spectral roll-off, variance of the spectral roll-off, spectral centroid, variance of the spectral centroid, spectral flux, variance of the spectral flux, zero-crossing rate, variance of the zero-crossing rate, cepstral residual, variance of the cepstral residual, and a pulse metric. The first two features are amplitude related. The next six features are derived from the fine spectrum of the input signal and are therefore related to the techniques described in the previous reference.
Carey et al., “A Comparison of Features for Speech, Music Discrimination”, Proc. ICASSP 1999, pp. 149-152, use a variety of features. These are: cepstral coefficients, delta cepstral coefficients, amplitude, delta amplitude, pitch, delta pitch, zero-crossing rate, and delta zero-crossing rate. The pitch and cepstral coefficients encompass the fine and broad spectral features, respectively. The zero-crossing parameters and the amplitude were considered worth investigating as a computationally inexpensive alternative to the other features.
The present invention provides a segmentation procedure that chunks an input audio stream into segments having homogeneous acoustic characteristics. The audio stream is a sequence of digital samples that is broadcast or recorded on some medium.
An object of the invention is to provide a fast segmentation procedure with relatively low numerical complexity.
The segmentation procedure comprises three stages: first-grade characteristic calculation, second-grade characteristic calculation, and decision-making. The first-grade characteristic calculation stage computes audio feature vectors from the input audio stream; these feature vectors define characteristics of the audio signal. The second-grade characteristic calculation stage forms a sequence of statistic feature vectors from the sequence of audio feature vectors; the statistic feature vectors describe the statistics of the first-grade features. The decision-making stage analyzes the variation of the second-grade features and defines the segment boundaries based on that analysis.
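To make the three-stage structure concrete, here is a minimal end-to-end toy, not the invention's actual feature set: it uses short-time log-energy as the only first-grade feature, per-window mean and standard deviation as the second-grade statistics, and a simple jump test as the decision rule. All parameter values are illustrative.

```python
import numpy as np

def toy_segment(samples, rate, frame_ms=20, window_frames=50, thresh=2.0):
    """End-to-end toy of the three stages, with short-time log-energy as the
    only first-grade feature and per-window mean/std as the statistics."""
    samples = np.asarray(samples, dtype=np.float64)

    # Stage 1: first-grade characteristics (one feature per frame).
    frame_len = int(rate * frame_ms / 1000)
    n = len(samples) // frame_len
    frames = samples[:n * frame_len].reshape(n, frame_len)
    feats = np.log(np.mean(frames ** 2, axis=1) + 1e-12)       # log-energy

    # Stage 2: second-grade characteristics over non-overlapping windows.
    m = len(feats) // window_frames
    wins = feats[:m * window_frames].reshape(m, window_frames)
    stats = np.stack([wins.mean(axis=1), wins.std(axis=1)], axis=1)

    # Stage 3: decision-making -- mark a boundary where the statistics jump.
    jumps = np.abs(np.diff(stats, axis=0)).max(axis=1)
    marker_windows = np.nonzero(jumps > thresh)[0] + 1
    return marker_windows * window_frames * frame_len          # sample offsets
```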
Thus, an essential aim of the invention is to provide a segmentation method that, firstly, can be used for a wide variety of applications and, secondly, can be manufactured at industrial scale as one relatively simple integrated circuit.
Other aspects of the present invention can be seen upon review of the figure, the detailed description and the claims, which follow.
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate an embodiment of the invention and together with the description serve to explain the principle of the invention.
Parameters of the autoregressive linear model, which is the foundation of LPC analysis, are reliable and may be computed with relatively low computational complexity. The following parameters form the coordinates of the audio features vector: K1, K2, and E0, which are obtained during the LPC analysis; the formant frequencies Λi, i = 1, . . . , 5; and the energy ratio E1.
Parameters K1, K2, and E0 are calculated simultaneously with the LPC analysis, according to Marple, Jr., “Digital Spectral Analysis”, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1987. After the LPC analysis, 10 Line Spectral Pair (LSP) coefficients are computed according to U.S. Pat. No. 4,393,272 or ITU-T, Study Group 15 Contribution—Q. 12/15, Draft Recommendation G.729, Jun. 8, 1995, Version 5.0. The formant frequencies Λi, i = 1 . . . 5, are calculated as half of the sum of the corresponding LSP coefficients. E1 is the ratio of the energy of the 6-dB pre-emphasized first-order difference of the audio signal to the energy of the regular audio signal, according to Campbell et al., “Voiced/Unvoiced Classification of Speech with Applications to the U.S. Government LPC-10E Algorithm”, Proceedings ICASSP'86, April, Tokyo, Japan, V. 1, pp. 473-476.
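Two of these features have fully explicit definitions in the text and can be sketched directly. The sketch below assumes the 10 LSP coefficients have already been obtained by the cited methods, and it interprets "corresponding LSP coefficients" as consecutive pairs; the helper names are ours.

```python
import numpy as np

def formant_frequencies(lsp):
    """Lambda_i, i = 1..5: half the sum of each consecutive pair of the
    10 LSP coefficients, per the specification."""
    lsp = np.asarray(lsp, dtype=np.float64)
    assert lsp.shape == (10,)
    return (lsp[0::2] + lsp[1::2]) / 2.0

def preemphasis_energy_ratio(frame):
    """E1: energy of the 6-dB pre-emphasized first-order difference signal
    divided by the energy of the regular signal."""
    frame = np.asarray(frame, dtype=np.float64)
    diff = np.diff(frame)                   # first-order difference
    return np.sum(diff ** 2) / (np.sum(frame ** 2) + 1e-12)
```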
As a result, there are audio feature vectors with nine characteristics in total. These vectors have a definite physical meaning and a dynamic range sufficient for precise segmentation of the audio stream. The remainder of the segmentation procedure is a statistical analysis of the obtained data. The statistical characteristics are calculated over non-overlapping second-grade windows, each consisting of a predefined number of frames (e.g., 20-100 frames per window). Thus, each window is described by some number of first-grade characteristic vectors. The division of the input sequence of audio feature vectors into windows is performed at step 31. At step 35, the sequence of those vectors is transformed into the statistic feature vectors.
The statistical features vector $\vec{V}$ consists of two sub-vectors: the first comprises ten components ($V_i$, $i = 1, \ldots, 10$) and the second comprises five components ($V_i$, $i = 11, \ldots, 15$), each component being a statistic computed over the M frames of one window.
As a result, there are statistic feature vectors with fifteen characteristics in total.
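The exact fifteen statistics are defined by expressions not reproduced in this text; what is clear is that they are per-window statistics of the nine first-grade features over M frames. The generic sketch below computes the window mean and variance of every feature, from which such a vector could be assembled; the particular selection of components is an assumption.

```python
import numpy as np

def window_statistics(features, window_frames):
    """Second-grade statistics: mean and variance of each first-grade
    feature over non-overlapping windows of M = window_frames frames.
    features: array of shape [n_frames, n_features]."""
    features = np.asarray(features, dtype=np.float64)
    m = features.shape[0] // window_frames
    wins = features[:m * window_frames].reshape(m, window_frames, -1)
    return np.concatenate([wins.mean(axis=1), wins.var(axis=1)], axis=1)
```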
The sub-stages of the decision-making 40 will be discussed in more detail below.
The sub-stage of initial segmentation 100 determines dividing markers, which correspond to segment boundaries, with an accuracy of one second-grade window. The sub-stage of improvement of segmentation precision 200 corrects the position of each dividing marker placed by the previous step with an accuracy of one frame and eliminates false segments. The sub-stage of internal markers definition 300 determines a stationary interval inside each segment. The resulting sequence of non-intersecting audio segments with their time boundaries is output at step 50.
Sub-Stage of Initial Segmentation
Let $\vec{V}[k]$, $\vec{V}[k+1]$, $\vec{V}[k+2]$, $\vec{V}[k+3]$ be four sequential statistical features vectors, taken 136 from the set of sequential statistical features vectors.
The differences $A_i^j = |V_i[k+j] - V_i[k+j+1]|$, $j = 0, 1, 2$, $i = 1, \ldots, 10$, are calculated for the first sub-vectors of the statistical features vectors 137. If at least one of these values is greater than the corresponding predefined threshold 138, a dividing marker is installed between the corresponding second-grade windows 139. In this case, the other steps of this sub-stage are not performed, and the next four vectors, the first of which is the first vector after the installed dividing marker, are taken from the set of sequential statistical features vectors for analysis 148.
Otherwise, the differences $A_i = |(V_i[k] + V_i[k+1]) - (V_i[k+2] + V_i[k+3])|$, $i = 11, \ldots, 15$, are calculated 140 for the second sub-vectors of the statistical features vectors. These values are compared with the predefined thresholds 141. The case when all of these values are smaller than the corresponding threshold values corresponds to the absence of a dividing marker 142. In this case, the remaining steps of this sub-stage are not performed, and the next four vectors, the first of which is the vector $\vec{V}[k+1]$, are taken from the set of sequential statistical features vectors for analysis 148. Otherwise, the differences $A_i^j = |V_i[k+j] - V_i[k+j+1]|$, $i = 11, \ldots, 15$, $j = 0, 1, 2$, are calculated 143 for the second sub-vectors of the statistical features vectors. If at least one of these values is greater than the corresponding predefined threshold 144, a dividing marker is installed between the corresponding second-grade windows 145. In this case, the other steps of this sub-stage are not performed, and the next four vectors, the first of which is the first vector after the installed dividing marker, are taken from the set of sequential statistical features vectors 148. Otherwise, the next four vectors, the first of which is the vector $\vec{V}[k+1]$, are taken from the set of sequential statistical features vectors for analysis 148. If a dividing marker has been installed at the decision step in diamond 147, the sub-stage of initial segmentation ends and the initial segmentation marker passes to the sub-stage of accurate segmentation.
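The branch structure above translates almost directly into code. The sketch below follows steps 136-148, assuming V is an array of 15-component statistic vectors (components 0-9 form the first sub-vector, 10-14 the second) and that each threshold is a per-component vector; the threshold values are tuning parameters the text does not give.

```python
import numpy as np

def initial_segmentation(V, thr_first, thr_pair, thr_second):
    """Sliding four-vector marker test (steps 136-148). Returns indices j
    meaning a dividing marker between window j and window j + 1."""
    markers, k = [], 0
    while k + 3 < len(V):
        quad = V[k:k + 4]
        # A_i^j over the first sub-vector: adjacent-window differences.
        d1 = np.abs(np.diff(quad[:, :10], axis=0))          # shape [3, 10]
        hit = np.any(d1 > thr_first, axis=1)
        if not hit.any():
            # Coarse test on the second sub-vector: sums of pairs (step 140).
            pair = np.abs((quad[0, 10:] + quad[1, 10:]) -
                          (quad[2, 10:] + quad[3, 10:]))
            if np.all(pair < thr_pair):
                k += 1                          # no marker here (step 142)
                continue
            d2 = np.abs(np.diff(quad[:, 10:], axis=0))      # shape [3, 5]
            hit = np.any(d2 > thr_second, axis=1)
            if not hit.any():
                k += 1
                continue
        j = int(np.argmax(hit))     # first adjacent pair exceeding a threshold
        markers.append(k + j)       # marker between windows k+j and k+j+1
        k = k + j + 1               # restart after the marker (step 148)
    return markers
```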
Sub-Stage of Accurate Segmentation
The argument J corresponding to the maximum value of $S_j$ is calculated at step 220:
At step 230, the new dividing marker μ′ is placed into the position corresponding to this J (between the shaded rectangles in the corresponding figure).
Sub-Stage of Internal Markers Definition
The sub-stage of internal markers definition of the final segmentation analyzes each segment in order to define two internal markers (μint, ηint) delimiting the most homogeneous interval inside the segment. This is done for the following reason: each placed dividing marker separates two audio events of different natures. As a rule, these events transition smoothly into one another and have no sharp border, so there is a time interval containing information about both events, which may hamper their correct classification.
As at the previous sub-stage, this task is solved by a precise statistical analysis of the sequence of formant frequency coefficients Λi, i = 1, . . . , 5, close to each dividing marker. Consider an arbitrary segment, limited by markers μ and η (so that η − μ = n + 1 frames) and composed of formant frequency coefficients (see the corresponding figure).
First, two difference statistics, $S_{1j}$ and $S_{2j}$, are evaluated:
Second, the arguments $J_1$ and $J_2$ corresponding to the maximum values of $S_{1j}$ and $S_{2j}$ are calculated:
Then, the new markers μint and ηint are placed into the positions corresponding to $J_1$ and $J_2$ (between the shaded rectangles in the corresponding figure).
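Because the defining expressions for $S_{1j}$ and $S_{2j}$ are not reproduced in this text, the sketch below is only a hedged reconstruction of the idea: scan the per-frame formant coefficients of a segment for the strongest local change near each end, and place μint and ηint just inside those points so that they bracket the most homogeneous interior. The scoring function and window size are assumptions.

```python
import numpy as np

def internal_markers(formants, half=5):
    """Hedged reconstruction of the internal-marker search. formants has
    shape [n_frames, 5] (the Lambda_i sequence of one segment). Returns
    (mu_int, eta_int) as frame indices inside the segment."""
    formants = np.asarray(formants, dtype=np.float64)
    n = len(formants)
    if n < 4 * half + 2:                   # segment too short to refine
        return 0, n - 1

    def change(j):
        # Distance between mean formant vectors of adjacent half-windows.
        left = formants[j - half:j].mean(axis=0)
        right = formants[j:j + half].mean(axis=0)
        return float(np.abs(right - left).sum())

    scores = np.array([change(j) for j in range(half, n - half)])
    mid = len(scores) // 2
    j1 = int(np.argmax(scores[:mid])) + half        # strongest change near mu
    j2 = int(np.argmax(scores[mid:])) + mid + half  # strongest change near eta
    return j1 + 1, j2 - 1    # markers placed just inside the change points
```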
Thus, the process of segmentation ends. As the result, a sequence of non-intersecting audio segments with their time boundaries is obtained.
Foreign Application Priority Data

| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 10-2002-0009209 | Feb 2002 | KR | national |
U.S. Patent Documents

| Number | Name | Date | Kind |
| --- | --- | --- | --- |
| 4393272 | Itakura et al. | Jul 1983 | A |
| 5606643 | Balasubramanian et al. | Feb 1997 | A |
| 6185527 | Petkovic et al. | Feb 2001 | B1 |
| 6567775 | Maali et al. | May 2003 | B1 |
| 6931373 | Bhaskar et al. | Aug 2005 | B1 |
Publication Data

| Number | Date | Country |
| --- | --- | --- |
| 20030171936 A1 | Sep 2003 | US |