Throughout this description, the embodiments and examples shown should be considered as exemplars, rather than limitations on the apparatus and methods disclosed or claimed.
Throughout this description, mathematical formulas will follow normal American typographical conventions. An italic font will be used for all letters representing variables, except for upper case Greek letters, which are in an upright font. Bold upper case letters represent matrices, and bold lower case letters represent vectors. Elements within matrices and vectors are represented by the corresponding non-bold letter. Thus Q represents a matrix, and Q(i,j) represents an element within the matrix Q. Similarly x represents a vector, and x(i) represents an element within the vector x.
Description of Methods
Refer now to FIG. 1.
A plurality of music samples may be selected (110). Each music sample may be all or a portion of a song or track. Each music sample may be a compilation of samples of different tracks, songs, or portions of a work, or a compilation of samples of work by the same group, artist, or composer. Each music sample may be converted into vector form (130). Within this application, the vector representation of each music sample will be referred to as a “music vector”. It must be understood that a “music vector” is not music in any conventional sense of the word, but is a numerical representation of the content of a music sample. The vectorization process 130, which may be any of a number of known processes, may attempt to pack the content of the corresponding music sample into the minimum number of elements possible while still retaining the essential features of the music necessary for understanding.
At 120, community metadata relating to the plurality of music samples may be retrieved. As used herein, “metadata” means text-based data relating to music, and “community metadata” is text-based data generated by the community of music listeners. Community metadata may be retrieved from the Internet or other sources. At 140, natural language processing techniques may be applied to the community metadata retrieved in step 120 to select salient terms. As used herein, “salient terms” are words or phrases relating to music that stand out from the mass of words comprising the community metadata. Methods for selecting salient terms will be described in detail subsequently.
At 150, a classifier may be trained to relate the salient terms selected at 140 to the content of the music vectors developed in 130. In general use, a “classifier” is an algorithm, which may be used with one or more supporting data structures, to determine if a data sample falls within one or more classes. As used herein, a “classifier” means an algorithm, which may be used with one or more data structures, to determine if a music sample is likely to be described by one or more salient terms selected from the community metadata. As used herein, a “classifier machine” is a vector, matrix, or other data structure that, when applied to a music sample by means of a related classifier algorithm, indicates if the music sample is likely to be described by a particular salient term. The classifier training 150 may include applying an algorithm to a plurality of music samples and a plurality of salient terms where the relationship (i.e. which terms have been used to describe which music samples) between the samples and terms is known. The result of the training of the classifier 150 may be a set of classifier machines that can be applied to determine which terms are appropriate to describe new music samples.
After training the classifier 150, the number of classes, or ranks, may be reduced by selecting semantic basis functions from the plurality of salient terms. As used herein, a “semantic basis function” is a word, group of words, or phrase that has been shown to be particularly useful or accurate for classifying music samples. The semantic basis functions, and classifier machines related to the semantic basis functions, may be used at 170 for music retrieval tasks that may include categorization of new music samples, recommendation of music based on listener-provided criteria, automated review of new music samples, and other related tasks.
At 240, a plurality of classifier machines may be trained using the first plurality of n music samples. Each of the plurality of classifier machines may relate to a corresponding one of the plurality of salient terms extracted at 230.
At 250, the plurality of classifier machines may be tested using the second plurality of m music vectors as test vectors. Testing the plurality of classifier machines may consist of applying each classifier machine to each test vector to predict which salient terms may be used to describe each test vector. These predictions may then be compared with the known set of terms describing the second plurality of music samples that were extracted from the community metadata at 230. The comparison of the predicted and known results may be converted to an accuracy metric for each salient term. The accuracy metric may be the probability that a salient term will be predicted correctly, or another metric, for each salient term.
At 260, a plurality of semantic basis functions may be selected from the plurality of salient terms. The semantic basis functions may be selected based on the accuracy metric for each salient term. A predetermined number of salient terms having the highest accuracy metrics may be selected as the semantic basis functions. Alternatively, the semantic basis functions may be all salient terms having an accuracy metric higher than a predetermined threshold. Other criteria may be used to select the semantic basis functions. For example, a filter may be applied to candidate semantic basis functions to minimize or eliminate redundant semantic basis functions having similar or identical meanings.
Having selected semantic basis functions, a set of semantic basis classifier machines may be computed at 270. The method used to compute the semantic basis classifier machines may be the same as the method initially used to train classifier machines at 240. The set of music samples used to train the semantic basis classifier machines at 270 may be larger than the first plurality of music samples, and may include all or part of the first plurality of music samples, all or part of the second plurality of music samples, and/or additional music samples.
The semantic basis classifier machines trained at 270 may be used at 280 for music retrieval tasks that may include categorization of new music samples, recommendation of music based on listener-provided criteria, automated review of new music samples, and other related tasks. Note that the method 200 has a start at 205, but does not have an end since 280 may be repeated indefinitely. Additionally, note that the method 200 may be repeated in whole or in part periodically to ensure that the semantic basis functions and semantic basis classifier machines reflect current musical styles and preferences.
A number of methods are known for converting music samples into music vectors or other numerical representations at 220. These methods may use time-domain analysis, frequency-domain analysis, cepstral analysis, or combinations of these methods.
A simple and popular method is colloquially known as a “beatogram” or, more formally, as a spectral autocorrelation. A digitized music sample is divided into a series of short time windows, and a Fourier transform is performed on each time window. The result of each Fourier transform is the power spectrum of the music signal divided into a plurality of frequency bins. A single FFT is then applied to the time history of each frequency bin. The intuition behind the beatogram is to capture both the frequency content and the time variation of the frequency content of music samples.
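As a concrete illustration, the beatogram computation might be sketched as follows. This is a minimal sketch, not the exact implementation contemplated above; the window length, hop size, and Hann weighting are illustrative assumptions.

```python
import numpy as np

def beatogram(signal, frame_len=1024, hop=512):
    """Sketch of a spectral autocorrelation ("beatogram").
    frame_len and hop are illustrative choices, not values
    specified in the description above."""
    # Divide the digitized sample into short, overlapping windows.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    # Power spectrum of each window, yielding frequency bins over time.
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1)) ** 2
    # A second FFT along the time axis of each frequency bin captures
    # how the energy in that bin varies over time.
    return np.abs(np.fft.rfft(spectra, axis=0))
```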
Cepstral analysis was derived from speech research. Cepstral analysis is computationally cheap, well studied, and a known method for music representation (see, for example, B. Logan, “Mel frequency cepstral coefficients for music modeling,” Proceedings of the International Symposium on Music Information Retrieval, Oct. 23-25, 2000). Mel-frequency cepstral coefficients (MFCCs) are defined as the mel-scaled cepstrum (the inverse Fourier transform of the logarithm of the power spectrum on a mel-scale axis) of the time-domain signal. The mel scale is a known non-linear pitch scale developed from a listener study of pitch perception. MFCCs are widely used in speech recognizers and other speech systems as they are an efficiently computable way of reducing the dimensionality of spectra while performing a psychoacoustic scaling of frequency response.
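For cepstral features, a minimal sketch using the librosa library (an assumption; any MFCC implementation would serve) might reduce a music sample to a fixed-length music vector by pooling MFCC statistics. The mean-and-standard-deviation pooling here is an illustrative choice, not a method prescribed above.

```python
import numpy as np
import librosa  # assumed available; any MFCC implementation would do

def mfcc_music_vector(path, n_mfcc=13):
    """Condense a music sample into a fixed-length "music vector"
    of MFCC statistics (the pooling choice is illustrative)."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```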
Another method for converting music samples into music vectors at 220 may be Modulation Cepstra (see B. Whitman and D. Ellis, “Automatic Record Reviews,” Proceedings of the 2004 International Symposium on Music Information Retrieval, 2004). Modulation Cepstra may be considered a cepstral analog to the previously described “beatogram”.
At 330, language processing techniques may be employed to extract terms from the downloaded text pages. The extracted terms may include n-grams (sequences of n ordered words) such as single words (n=1) and two-word groups (n=2). The extracted terms may also include adjectives (adj) and noun phrases (np). Known methods are available to extract these and other terms from the downloaded pages (see, for example, E. Brill, “A simple rule-based part-of-speech tagger,” Proceedings of the 3rd Conference on Applied Natural Language Processing, 1992, and L. Ramshaw and M. Marcus, “Text chunking using transformation-based learning,” Proceedings of the 3rd Workshop on Very Large Corpora, 1995).
At 340, the salience of each term may be computed. The salience of each term is an estimation of the usefulness of the term for understanding music samples. The salience of a term is very different from the occurrence of the term. For example, the word “the” is likely to be used in every downloaded document, but carries no information relevant to any music sample. At the other extreme, a word that appears only once in all of the downloaded Web pages is quite probably misspelled and equally irrelevant.
At 340, the salience of each term may be computed as the well-known Term Frequency-Inverse Document Frequency (TF-IDF) metric, which is given by:
s(t|M) = P(t|M) / P(t|M∞)
where s(t|M) is the salience of term t with respect to context (music sample) M; P(t|M) is the probability that a downloaded document within the document set for music sample M contains term t; and P(t|M∞) is the probability that any document of the documents downloaded for all music samples contains term t. The effect of the TF-IDF metric is to reduce, or down-weight, the salience of very common or infrequently used words.
To further down-weight very rare words, such as typographic errors and off-topic words, a Gaussian-like smoothing function may be used to compute salience:
s(t|M) = P(t|M)·e^(−(log(P(t|M∞))−μ)²)
where P(t|M∞) is normalized such that its maximum is equal to the total number of documents, and μ is a constant selected empirically. Other methods may be used to compute salience. The salience may be computed for each extracted term with respect to each of the plurality of music samples.
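The salience computation might be sketched as follows. This is a minimal sketch; the function name and the value of μ are illustrative assumptions, since the description says only that μ is selected empirically.

```python
import math

def salience(docs_with_t_for_M, docs_for_M, docs_with_t_total, mu=3.0):
    """Gaussian-smoothed salience s(t|M) per the formulas above.
    mu = 3.0 is purely illustrative."""
    p_t_M = docs_with_t_for_M / docs_for_M  # P(t|M)
    # P(t|M_inf), normalized so its maximum equals the total number of
    # documents, reduces to the raw count of documents containing t.
    return p_t_M * math.exp(-((math.log(docs_with_t_total) - mu) ** 2))
```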
At 350, a plurality of salient terms may be selected. The selected salient terms may be those terms having a salience exceeding a threshold value for at least one music sample or for at least a predetermined number of music samples. The selection of salient terms may also consider possible overlap or redundancy of terms having similar meanings. For example, the well-known Latent Semantic Analysis may be used to cluster terms into groups of similar meaning, such that only the highest-salience terms may be selected from each group. Note that 350 is optional and the subsequent processes may proceed using all terms.
At 360, a truth vector yt may be constructed for each salient term selected in 350. A truth vector yt is an l-element vector, where l is the number of music samples in a sample set. Each element yt(M) in the truth vector yt indicates if term t is salient to music sample M. Each element yt(M) in the truth vector yt may be equal to the salience s(t|M), scaled to span the range from −1 to +1. Alternately, a threshold may be applied such that a salience value above the threshold is set to +1, and a salience value below the threshold is set to −1. In this case, each element yt(M) in the truth vector yt may be either −1 or +1. A value of −1 may indicate that term t is not salient to music sample M, and a value of +1 may indicate the converse.
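A minimal sketch of the thresholded truth-vector construction follows; the threshold value is left to the practitioner, as stated above.

```python
import numpy as np

def truth_vector(salience_per_sample, threshold):
    """Build the l-element truth vector y_t for one term:
    +1 where term t is salient to music sample M, -1 otherwise."""
    return np.where(np.asarray(salience_per_sample) > threshold, 1.0, -1.0)
```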
While the method 300 has a start at 310 and a finish at 370, it should be understood that the method is at least partially recursive and that step 340 must be performed for every combination of music sample M and term t.
Various machine classification methods, including Support Vector Machines and Regularized Least Squares Classifiers (RLSC), may be used for music understanding. An RLSC is well suited to music understanding since the RLSC can be readily extended to a large number of classes. In the music understanding methods 100 and 200, each salient term represents a class, where the class definition is “music samples that can be appropriately described by this term”. Details of the RLSC method are well known (see, for example, Rifkin, Yeo, and Poggio, “Regularized Least Squares Classification,” Advances in Learning Theory: Methods, Models, and Applications, NATO Science Series III: Computer and Systems Science, Vol. 190, 2003).
At 430, a Gaussian-weighted kernel matrix K is computed from the l music vectors. K is an l×l matrix where each element is given by
K(i,j) = e^(−|xi−xj|²/(2σ²))
where |xi−xj| is the Euclidean distance between music vector xi and music vector xj, and σ is a standard deviation. The l music vectors may be normalized, in which case σ may be defined to equal 0.5. The l music vectors may not be normalized, in which case σ may be determined empirically.
Optionally, when the l music vectors are not normalized, σ may be determined at 420 by
σ = √(max(i,j) A(i,j))
where A is an l×d matrix containing the l music vectors, each of which has d dimensions or elements. In this case, σ is the square root of the largest element in any of the l music vectors.
At 440, a “support matrix” S is computed. The term support matrix is used herein since matrix S is analogous to the support vectors produced by a support vector machine. The calculation of matrix S proceeds through two steps. First, a regularization term I/C is added to the kernel matrix K to form a sum matrix, where I is the identity matrix and C is a constant. C may be initially set to 100 and tuned empirically to the input music vectors. The sum matrix is then inverted to form the support matrix, which is given by
S = (K + I/C)^(−1)
The inversion may be done by a conventional method, such as Gaussian elimination, which may be preceded by a factorization process such as the well-known Cholesky decomposition.
At 450, the method 400 may receive a plurality of t truth vectors, yt, for t salient terms. The truth vectors may be provided by the method 300 of FIG. 3. At 460, a classifier machine may be computed for each salient term as
ct = Syt
where S is the support matrix and ct and yt are the classifier machine and truth vector, respectively, for salient term t.
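Steps 430 through 460 might be sketched as follows; the helper name train_rlsc and the use of SciPy's Cholesky routines are assumptions for illustration, not the prescribed implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def train_rlsc(X, Y, sigma, C=100.0):
    """Compute classifier machines c_t = S y_t (steps 430-460).
    X: (l, d) music vectors; Y: (l, t) matrix whose columns are the
    truth vectors y_t. C = 100 follows the initial value suggested above."""
    # Gaussian-weighted kernel K(i,j) = exp(-|xi - xj|^2 / (2 sigma^2)).
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # S = (K + I/C)^(-1); a Cholesky factorization, as noted above,
    # avoids forming the inverse explicitly.
    factor = cho_factor(K + np.eye(len(X)) / C)
    return cho_solve(factor, Y)  # columns are the classifier machines c_t
```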
At 510, one of the m test music vectors may be selected and, at 520, one of t classifier machines may be selected. At 530, a function ft(x) may be computed as follows
ft(x) = Σ(i=1..l) ct(i)·K(x, xi)
where x is the test music vector, xi is one of the l music vectors used to train the classifier, K(x, xi) is the Gaussian-weighted kernel defined previously, and ct(i) is the i'th element of classifier machine ct for term t. ft(x) is a scalar value that may be considered as the probability that term t will be used to describe the music sample represented by music vector x.
At step 540, ft(x) is compared with the corresponding element of the ground truth vector for x. ft(x) may be considered to be correctly predicted if the numerical sign of ft(x) is the same as the sign of the corresponding element in the ground truth vector. Other criteria may be used to define if ft(x) has been correctly predicted.
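Steps 530 and 540 might be sketched as follows; the function names are illustrative.

```python
import numpy as np

def f_t(x, X_train, c_t, sigma):
    """Evaluate f_t(x) = sum_i c_t(i) K(x, x_i) for one test music vector."""
    k = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    return float(k @ c_t)

def correctly_predicted(x, X_train, c_t, sigma, y_true):
    """Step 540: the prediction counts as correct when f_t(x) has the
    same numerical sign as the corresponding ground-truth element."""
    return np.sign(f_t(x, X_train, c_t, sigma)) == np.sign(y_true)
```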
At step 550, a decision is made if all combinations of test music vectors and classifier machines have been evaluated. If not, steps 520-540 may be repeated recursively until all combinations are evaluated. A score for each classifier machine may be accumulated during the recursive process. After all combinations of test music vectors and classifier machines have been evaluated, the classifier machines and the associated salient terms may be ranked in step 560 and semantic basis functions may be selected from the higher ranking salient terms in step 570.
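The ranking and selection in steps 560 and 570 might look like this sketch, where scores[t] is the accumulated fraction of test vectors predicted correctly for term t; the cutoff k is an illustrative assumption.

```python
import numpy as np

def select_basis_terms(scores, terms, k=50):
    """Rank terms by accuracy metric (step 560) and keep the k
    highest-ranking terms as semantic basis functions (step 570)."""
    order = np.argsort(scores)[::-1]  # highest accuracy first
    return [terms[i] for i in order[:k]]
```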
At 660, the results of the previous steps may be combined to form a test sample description vector f(x) for the new music sample, as follows
f(x) = [ft1(x), ft2(x), . . . , ftb(x)]
where t1 through tb are the semantic basis functions selected previously.
The test sample description vector f(x) may be a powerful tool for understanding the similarities and differences between music samples.
For example, at 670 the test sample description vector f(x) may be compared with a descriptive query 675 received from a user. This query may take the form of one or more text expressions, such as “sad”, “soft” or “fast”. The query may be entered in free-form text. The query may be entered by selecting phrases from a menu, which may include or be limited to a set of predetermined semantic basis functions. The query may be entered by some other method or in some other format. The query may be converted into an ideal description vector to facilitate comparison. The comparison of the test sample description vector f(x) and the query may be made on an element-by-element basis, or may be made by calculating a Euclidean distance between the test sample description vector f(x) and the ideal description vector representing the query.
At 680, a determination may be made if the test music sample satisfies the query. The test music sample may be considered to satisfy the query if the Euclidean distance between the test sample description vector f(x) and the ideal description vector representing the query is below a predetermined threshold. The test music sample may be recommended to the user at 690 if the test music sample is sufficiently similar to the query, or may not be recommended at 695.
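A minimal sketch of the query comparison in steps 670 through 680 follows; building the ideal description vector as +1 for queried basis terms and 0 elsewhere is an assumption made for illustration.

```python
import numpy as np

def satisfies_query(f_x, query_terms, basis_terms, threshold):
    """Compare f(x) to an ideal description vector derived from the
    user's text query; recommend if the Euclidean distance is small."""
    ideal = np.array([1.0 if t in query_terms else 0.0 for t in basis_terms])
    return np.linalg.norm(np.asarray(f_x) - ideal) < threshold
```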
Alternatively, at 670, the test sample description vector f(x) may be compared with description vectors for one or more known target music samples 677. For example, a user may request a play list of music that is similar to one or more target music samples 677. In this case, a test music sample may be recommended to the user if the Euclidean distances between the test sample description vector and the description vectors of the target music samples are below a predetermined threshold.
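The target-sample variant might be sketched similarly; requiring every target to fall within the threshold is one possible interpretation of the comparison described above.

```python
import numpy as np

def similar_to_targets(f_x, target_description_vectors, threshold):
    """Recommend the test sample if its description vector lies within
    the distance threshold of each target's description vector."""
    return all(np.linalg.norm(np.asarray(f_x) - np.asarray(v)) < threshold
               for v in target_description_vectors)
```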
Song recommendation, as described above, is one example of an application of the method for understanding music. Other applications include song clustering (locating songs similar to a test sample song or determining if a test sample song is similar to a target set of songs), genre and style prediction, marketing classification, sales prediction, and fingerprinting (determining if a song with different audio characteristics “sounds like” a copy of itself).
Training the classifier over a large number of songs will result in very large kernel and support matrices. For example, training the classifier over 50,000 songs or music samples may require a 50,000×50,000-element kernel matrix. Such a large matrix may be impractical to store or to invert to form the equally-large support matrix.
At 720, a kernel sub-matrix Ki is calculated for each group of music vectors. At 730, a support sub-matrix Si is calculated from each of the kernel sub-matrices. At 735, t truth vectors, yt, corresponding to t terms (or t semantic basis functions) are introduced. At 740, each truth vector may be divided into g segments. Note that the elements of the truth vectors must be reordered to match the order of the music samples prior to segmentation. At 750, sub-classifier machines are trained for each group of music samples. Sub-classifier machine ct,i is a classifier machine for term t trained on music vector group i. A total of t×g sub-classifier machines are trained, each having l/g elements. The computational methods for forming the kernel sub-matrices, support sub-matrices, and sub-classifier machines may be essentially the same as described for 420-460 of method 400 shown in FIG. 4.
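Steps 720 through 750 might be sketched as follows, reusing the train_rlsc sketch given earlier; equal-sized contiguous groups are an illustrative choice, not a requirement stated above.

```python
import numpy as np

def train_partitioned(X, Y, sigma, g, C=100.0):
    """Train t x g sub-classifier machines by splitting the l music
    vectors into g groups (steps 720-750). Rows of the truth-vector
    matrix Y are sliced to match each group's music samples."""
    groups = np.array_split(np.arange(len(X)), g)
    # Each entry is the (l/g, t) matrix of sub-classifiers c_{t,i}
    # trained on music vector group i.
    return groups, [train_rlsc(X[idx], Y[idx], sigma, C) for idx in groups]
```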
At 760, each group of t sub-classifier machines may be used to compute a sub-description vector f(x)i for a test music vector x introduced at 755. f(x)i is a sub-description vector for test music vector x formed by the sub-classifiers trained on music vector group i. A total of g sub-description vectors may be computed at 760. The computational methods used in 760 may be essentially the same as 630-660 of method 600 of FIG. 6.
At 770, a final test sample description vector f(x) may be computed by combining the g sub-description vectors f(x)i from 760. The final test sample description vector f(x) may be computed by averaging the f(x)i from 760, or by some other method. At 780, the final test sample description vector f(x) may be input to music retrieval tasks such as 670 in FIG. 6.
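Steps 760 and 770 might then be sketched as averaging the per-group outputs, which is one of the combination methods mentioned above; the helper pairs with the train_partitioned sketch.

```python
import numpy as np

def describe(x, X, groups, sub_classifiers, sigma):
    """Form the final description vector f(x) (step 770) by averaging
    the g sub-description vectors f(x)_i from step 760."""
    subs = []
    for idx, C_sub in zip(groups, sub_classifiers):
        k = np.exp(-np.sum((X[idx] - x) ** 2, axis=1) / (2.0 * sigma ** 2))
        subs.append(k @ C_sub)  # sub-description vector f(x)_i
    return np.mean(subs, axis=0)
```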
Description of Apparatus
The computing device 800 may include or interface with a display device 840 and input device 850. The computing device 800 may also include an audio interface unit 860 which may include an analog to digital converter. The computing device 800 may also interface with one or more networks 870.
The storage device 830 may accept a storage medium containing instructions that, when executed, cause the computing device 800 to perform music understanding methods such as the methods 100 to 700 of FIGS. 1 through 7.
The foregoing is merely illustrative and not limiting, having been presented by way of example only. Although examples have been shown and described, it will be apparent to those having ordinary skill in the art that changes, modifications, and/or alterations may be made.
Although many of the examples presented herein involve specific combinations of method acts or system elements, it should be understood that those acts and those elements may be combined in other ways to accomplish the same objectives. With regard to flowcharts, additional and fewer steps may be taken, and the steps as shown may be combined or further refined to achieve the methods described herein. Acts, elements and features discussed only in connection with one embodiment are not intended to be excluded from a similar role in other embodiments.
For means-plus-function limitations recited in the claims, the means are not intended to be limited to the means disclosed herein for performing the recited function, but are intended to cover in scope any means, known now or later developed, for performing the recited function.
As used herein, “plurality” means two or more.
As used herein, a “set” of items may include one or more of such items.
As used herein, whether in the written description or the claims, the terms “comprising”, “including”, “carrying”, “having”, “containing”, “involving”, and the like are to be understood to be open-ended, and to mean “including but not limited to”. Only the transitional phrases “consisting of” and “consisting essentially of”, respectively, are closed or semi-closed transitional phrases with respect to claims.
Use of ordinal terms such as “first”, “second”, “third”, etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for the use of the ordinal term).
As used herein, “and/or” means that the listed items are alternatives, but the alternatives also include any combination of the listed items.
This application claims benefit of the filing date of provisional patent application Ser. No. 60/791,540, filed Apr. 12, 2006, which is incorporated herein by reference.