The field of Music Information Retrieval (MIR) concerns itself, among other things, with the analysis of music in its many facets, such as melody, timbre, or rhythm. Among those facets, popular western commercial music (i.e., “pop” music) is arguably characterized by an emphasis on the melody and accompaniment aspects of music. For purposes of simplicity, the melody, or main melodic line, is also referred to herein as the “foreground”, and the accompaniment is also referred to herein as the “background”. Typically, in pop music the melody is sung, whereas the accompaniment is performed by one or more instrumentalists, and possibly vocalists as well. Often, a singer delivers the lyrics, and the backing musicians provide harmony as well as genre and style cues.
A mashup is a fusion or mixture of disparate elements and, in media, can include, in one example, a recording created by digitally synchronizing and combining background tracks with vocal tracks from two or more different songs (although other types of tracks can be “mashed up” as well). Mashing up musical recordings may involve removing vocals from a first musical track and replacing those vocals with vocals from at least one second, musically compatible track, and/or adding vocals from the second track to the first track.
Listeners are more likely to enjoy mashups created from songs they already know and like. Some commercially available websites enable users to listen to playlists suited to the users' tastes, based on state-of-the-art machine learning techniques. However, the art of personalizing musical tracks themselves to users' tastes has not been perfected.
Also, a mashup typically does not combine two entire songs, because most songs are too different from each other for that to work well. Instead, a mashup typically starts with the instrumentals of one song as the foundation, and vocals are then inserted into those instrumentals one short segment at a time. Any number of vocal segments can be inserted into the instrumentals, and in any order that may be desired.
However, if a vocal segment and an instrumental segment are not properly aligned, then they will not sound good together.
It is with respect to these and other general considerations that embodiments have been described. Also, although relatively specific problems have been discussed, it should be understood that the embodiments should not be limited to solving the specific problems identified in the background.
The foregoing and other limitations are overcome by methods for determining musically compatible music tracks and segments and combining them, and by systems that operate in accordance with the methods, and by computer-readable storage media storing instructions which, when executed by one or more computer processors, cause the one or more computer processors to perform the methods.
One aspect includes a method for combining audio tracks, comprising: determining at least one music track that is musically compatible with a base music track; aligning the at least one music track and the base music track in time; separating the at least one music track into an accompaniment component and a vocal component; and adding the vocal component of the at least one music track to the base music track.
Another aspect includes the method according to the previous aspect, wherein the determining includes determining at least one segment of the at least one music track that is musically compatible with at least one segment of the base music track.
Another aspect includes the method according to any of the previous aspects, wherein the base music track and the at least one music track are music tracks of different songs.
Another aspect includes the method according to any of the previous aspects, wherein the determining is performed based on musical characteristics associated with at least one of the base music track and the at least one music track.
Another aspect includes the method according to any of the previous aspects, and further comprising: determining whether to keep a vocal component of the base music track, or replace the vocal component of the base music track with the vocal component of the at least one music track before adding the vocal component of the at least one music track to the base music track.
Another aspect includes the method according to any of the previous aspects, wherein the musical characteristics include at least one of an acoustic feature vector distance between tracks, a likelihood of at least one track including a vocal component, a tempo, or musical key.
Another aspect includes the method according to any of the previous aspects, wherein the base music track is an instrumental track and the at least one music track includes the accompaniment component and the vocal component.
Another aspect includes the method according to any of the previous aspects, wherein the at least one music track includes a plurality of music tracks, and the determining includes calculating a respective musical compatibility score between the base track and each of the plurality of music tracks.
Another aspect includes the method according to any of the previous aspects, and further comprising: transforming a musical key of at least one of the base track and a corresponding one of the plurality of music tracks, so that keys of the base track and the corresponding one of the plurality of music tracks are compatible.
Another aspect includes the method according to any of the previous aspects, wherein the determining includes determining at least one of: a vertical musical compatibility between segments of the base track and the at least one music track, and a horizontal musical compatibility among tracks.
Another aspect includes the method according to any of the previous aspects, wherein the vertical musical compatibility is based on at least one of a tempo compatibility, a harmonic compatibility, a loudness compatibility, vocal activity, beat stability, or a segment length.
Another aspect includes the method according to any of the previous aspects, wherein the at least one music track includes a plurality of music tracks, and wherein determining the horizontal musical compatibility includes determining at least one of: a distance between acoustic feature vectors among the plurality of music tracks, and a measure of a number of repetitions of a segment of one of the plurality of music tracks being selected as a candidate for being mixed with the base track.
Another aspect includes the method according to any of the previous aspects, wherein the determining further includes determining a compatibility score based on a key distance score associated with at least one of the tracks, an acoustic feature vector distance associated with at least one of the tracks, the vertical musical compatibility, and the horizontal musical compatibility.
Another aspect includes the method according to any of the previous aspects, and further comprising: refining at least one boundary of a segment of the at least one music track.
Another aspect includes the method according to any of the previous aspects, wherein the refining includes adjusting the at least one boundary to a downbeat temporal location.
Another aspect includes the method according to any of the previous aspects, and further comprising: determining a first beat before the adjusted at least one boundary in which a likelihood of containing vocals is lower than a predetermined threshold; and further refining the at least one boundary of the segment by moving the at least one boundary of the segment to a location of the first beat.
Another aspect includes the method according to any of the previous aspects, and further comprising: performing at least one of time-stretching, pitch shifting, applying a gain, fade in processing, or fade out processing to at least part of the at least one music track.
Another aspect includes the method according to any of the previous aspects, and further comprising: determining that at least one user has an affinity for at least one of the base music track or the at least one music track.
Another aspect includes the method according to any of the previous aspects, and further comprising: identifying music tracks for which a plurality of users have an affinity; and identifying those ones of the identified music tracks for which one of the plurality of users has an affinity, wherein at least one of the identified music tracks for which one of the plurality of users has an affinity is used as the base music track.
Another aspect includes the method according to any of the previous aspects, wherein at least another one of the identified music tracks for which one of the plurality of users has an affinity is used as the at least one music track.
Another aspect includes a system for combining audio tracks, comprising: a memory storing a computer program; and a computer processor, controllable by the computer program to perform a method comprising: determining at least one music track that is musically compatible with a base music track, based on musical characteristics associated with at least one of the base music track and the at least one music track; aligning the at least one music track and the base music track in time; separating the at least one music track into an accompaniment component and a vocal component; and adding the vocal component of the at least one music track to the base music track.
Another aspect includes the system according to the previous aspect, wherein the musical characteristics include at least one of an acoustic feature vector distance between tracks, a likelihood of at least one track including a vocal component, a tempo, or musical key.
Another aspect includes the system according to any of the previous aspects, wherein the determining includes determining at least one segment of the at least one music track that is musically compatible with at least one segment of the base music track.
Another aspect includes the system according to any of the previous aspects, wherein the method further comprises transforming a musical key of at least one of the base track and a corresponding one of the plurality of music tracks, so that keys of the base track and the corresponding one of the plurality of music tracks are compatible.
Another aspect includes the system according to any of the previous aspects, wherein the determining includes determining at least one of a vertical musical compatibility between segments of the base track and the at least one music track, or a horizontal musical compatibility among tracks.
Another aspect includes the system according to any of the previous aspects, wherein the vertical musical compatibility is based on at least one of a tempo compatibility, a harmonic compatibility, a loudness compatibility, vocal activity, beat stability, or a segment length.
Another aspect includes the system according to any of the previous aspects, wherein the at least one music track includes a plurality of music tracks, and wherein determining the horizontal musical compatibility includes determining at least one of a distance between acoustic feature vectors among the plurality of music tracks, and a measure of a number of repetitions of a segment of one of the plurality of music tracks being selected as a candidate for being mixed with the base track.
Another aspect includes the system according to any of the previous aspects, wherein the determining further includes determining a compatibility score based on a key distance score associated with at least one of the tracks, an acoustic feature vector distance associated with at least one of the tracks, the vertical musical compatibility, and the horizontal musical compatibility.
In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, specific embodiments or examples. These aspects may be combined, other aspects may be utilized, and structural changes may be made without departing from the present disclosure. Embodiments may be practiced as methods, systems, or devices.
Accordingly, embodiments may take the form of a hardware implementation, an entirely software implementation, or an implementation combining software and hardware aspects. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims and their equivalents.
Example aspects described herein can create new musical tracks that are a mashup of different, pre-existing audio tracks, such as, e.g., musical tracks. By example and without limitation, at least one component of a musical track, such as a vocal component, can be combined with at least part of another musical track, such as an instrumental or background track (also referred to as an “accompaniment track”), to form a mashup of those tracks. According to an example aspect herein, such a musical mashup can involve various procedures, including determining musical tracks that are musically compatible with one another, determining, from those tracks, segments that are compatible with one another, performing beat and downbeat alignment for the compatible segments, performing refinement of transitions between the segments, and mixing the segments of the tracks.
Before describing the foregoing procedures in more detail, examples of at least some types of information that can be used in the procedures will first be described. Example aspects of the present application can employ various different types of information. For example, the example aspects can employ various types of audio signals or tracks, such as mixed original signals, i.e., signals that include both an accompaniment (e.g., background instrumental) component and a vocal component, wherein the accompaniment component includes instrumental content such as one or more types of musical instrument content (although it may include vocal content as well), and the vocal component includes vocal content. Each of the tracks may be in the form of, by example and without limitation, audio files for each of the tracks (e.g., mp3, wav, or the like). Other types of tracks that can be employed include solely instrumental tracks (e.g., tracks that include only instrumental content, or only an instrumental component of a mixed original signal), and vocal tracks (e.g., tracks that include only vocal content, or only a vocal component of a mixed original signal). In one example embodiment herein, a ‘track’ may include an audio signal or recording of the applicable content, a file that includes an audio recording/signal of applicable content, a section of a medium (e.g., tape, wax, vinyl) on which a physical (or magnetic) track has been created due to a recording being made or pressed there, or the like. Also, for purposes of this description, the terms “background” and “accompaniment” are used interchangeably.
In one example embodiment herein, vocal and accompaniment/background (e.g., instrumental) tracks (or components) can be obtained from mixed, original tracks, although in other examples they may pre-exist and can be obtained from a database. In one example embodiment herein, vocal and instrumental tracks (or components) can be obtained from a mixed original track according to the method(s) described in the following U.S. patent application, although this example is not exclusive: U.S. patent application Ser. No. 16/055,870, filed Aug. 6, 2018, entitled “SINGING VOICE SEPARATION WITH DEEP U-NET CONVOLUTIONAL NETWORKS”, by A. Jansson et al. The foregoing Jansson application is hereby incorporated by reference in its entirety, as if set forth fully herein.
Example aspects of the present application also can employ song or track segmentation information for creating mashups. For example, song segmentation information can include the temporal positions of boundaries between sections of each track.
An additional type of information that can be employed to create mashups can include segment labelling information. Segment labelling information identifies (using, e.g., particular IDs) different types of track segments, and track segments may be labeled according to their similarity. By example and without limitation, segments that are included in a verse (which tends to be repeated) of a song may have a same label, segments that are included in a chorus of a song may have a same label, and the like. In one example, segments that are considered to be similar to one another (and which thus have a same label) are deemed to be within a same cluster.
Of course, the above examples given for how to obtain vocal and accompaniment tracks, song segmentation information, and segment labelling information, are intended to be representative in nature, and, in other examples, vocal and/or accompaniment tracks, song segmentation information, and/or segment labelling information may be obtained from any applicable source, or in any suitable manner known in the art.
Additional information that can be employed to create mashups also can include tempo(s) of each track, a representation of tonality of each track (e.g., a twelve-dimensional chroma vector), beat/downbeat positions in each track (e.g., temporal positions of beats and downbeats in each track), information about the presence of vocals (if any) in time in each track, energy of each of the segments in the vocal and accompaniment tracks, or the like. The foregoing types of information can be obtained from any applicable source, or in any suitable manner known in the art. In one example, at least some of the foregoing information is obtained for each track (including, e.g., separated tracks) using a commercially available audio analysis tool, such as the Echo Nest analyzer. In other examples, the aforementioned types of information may pre-exist and can be obtained from a database.
According to one example, determining information about the presence of vocals involves mining original-instrumental pairs from a catalogue of music content, extracting strong vocal activity signals between corresponding tracks, exploiting the signal(s) to train deep neural networks to detect singing voice, and recognizing the effects of this data source on resulting models. In other example embodiments herein, information (vx) about the presence of vocals can be obtained from the loudness of a vocal track obtained from a mixed, original signal, such as, e.g., a vocal track obtained according to the Jansson application identified above.
Additional information that can be employed to create mashups can include acoustic feature vector information, and loudness information (e.g., amplitude). An acoustic feature vector describes the acoustic and musical properties of a given recording. An acoustic feature vector can be created manually, by manually quantifying the amount of given properties, e.g. vibrato, distortion, presence of vocoder, energy, valence, etc. The vector can also be created automatically, such as by using the amplitude of the signal, the time-frequency progression, or more complex features.
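By way of a hedged, non-limiting illustration only, an automatically derived acoustic feature vector of the kind described above might be computed as follows (the particular feature set, the use of the librosa library, and the function name build_acoustic_vector are assumptions of this sketch, not requirements of the embodiments):

```python
import numpy as np
import librosa  # assumed third-party audio analysis library

def build_acoustic_vector(path):
    """Illustrative sketch: fixed-length acoustic feature vector for one track,
    built from amplitude and time-frequency statistics (assumed feature choice)."""
    y, sr = librosa.load(path, mono=True)
    rms = librosa.feature.rms(y=y)                       # amplitude (loudness) envelope
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # coarse time-frequency shape
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)      # 12-dimensional tonal content
    # Summarizing each feature series by its mean yields one fixed-length
    # vector per track, so tracks can be compared by vector distance.
    return np.concatenate([rms.mean(axis=1), mfcc.mean(axis=1), chroma.mean(axis=1)])
```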
Each of the above types of information associated with particular tracks and/or with particular segments of tracks, can be stored in a database in association with the corresponding tracks and/or segments. The database may be, by example and without limitation, one or more of main memory 1125, portable storage medium 1150, and mass storage device 1130 of the system 1100 of
In one example embodiment herein, with respect to tracks 110, 112, the content of track 112 is from a different song than the content from track(s) 110, although in other examples the content of at least some tracks 110, 112 may be from the same song(s). For purposes of this description, the track 110 also is referred to herein as a “target” or “candidate” track 110. Also, each track 110, 112 includes respective segments, wherein segments of the candidate or target track 110 are also referred to herein as “candidate segments” or “target segments”, and segments of the query track 112 also are referred to herein as “query segments”.
As represented in
The information represented by reference numerals 110a, 112a, 114, 114a, 114b, 116, 116a, 118, 118a and 118b is employed in an algorithm to perform an automashup that results in a mashup track 120, according to an example aspect herein. It should be noted that, although candidate track 110 is shown and described above for convenience as including instrumental content, in some cases it also may include at least some vocal content as well, depending on the application of interest.
A procedure 200 according to an example aspect herein, for determining whether individual segments of a query track (e.g., an accompaniment track) 112 under consideration are to be kept, or have content (e.g., vocal content) replaced or added thereto from one or more candidate (e.g., vocal) tracks 110, during an automashup of the tracks 110, 112, will now be described, with reference to
In one example embodiment herein, the procedure 200 employs at least some of the various types of information 1131 as described above, including, without limitation, information about the likelihood of a segment containing vocals (vx) (e.g., at beats of segments), downbeat positions, song segmentation information (including start and end positions of segments), and segment labelling information (e.g., IDs), and the like. As described above, each type of information may be stored in a database in association with corresponding tracks 110, 112 and/or segments 122, 124 associated with the information 1131.
Referring to
score_rep=Nrepet/(ideal_num_reps) (F1)
where Nrepet represents a number of segments 122 of the query track 112 that have the same segment labelling information (e.g., the same segment ID) as the currently considered query segment 122, score_rep represents the intermediate score, and ideal_num_reps represents the predetermined ideal number of repetitions.
If the value of score_rep is greater than value ‘1’ (“Yes” in sub-step 206b), then in sub-step 206c, the value of score_rep is set as follows, according to formula (F2):
score_rep=1/(score_rep) (F2).
On the other hand, if the value of score_rep is less than or equal to value ‘1’ (“No” in sub-step 206b), then the value of score_rep that was determined in step 206a is maintained.
In either case, after sub-step 206b, control passes to sub-step 206d, where a value for the second score (K_keep_rep) is determined according to the following formula (F3):
K_keep_rep=score_rep (F3).
Then, control passes to step 208 where a value of a “keep score” K_keep is determined according to the following formula (F3′), for the segment 122 under consideration:
K_keep=K_keep_rep*K_keep_vx (F3′).
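Purely as an illustrative, non-limiting sketch, the computation expressed by formulas (F1) through (F3′) may be written as follows (the function name, the example value of ideal_num_reps, and the use of the Python language are assumptions of this illustration only):

```python
def keep_score(segment_ids, current_id, k_keep_vx, ideal_num_reps=4.0):
    """Illustrative sketch of formulas (F1)-(F3'): repetition-based keep score
    for one query segment 122 (the default ideal_num_reps value is assumed)."""
    n_repet = sum(1 for sid in segment_ids if sid == current_id)  # segments with the same label
    score_rep = n_repet / ideal_num_reps        # (F1)
    if score_rep > 1.0:
        score_rep = 1.0 / score_rep             # (F2)
    k_keep_rep = score_rep                      # (F3)
    return k_keep_rep * k_keep_vx               # (F3')
```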
Next, control passes via connector A to step 210 of
In a next step 214, a mean K_keep score for each of the clusters (i.e., a mean of the K_keep score values for segments 122 from each respective cluster) is determined, and then control passes to step 216, where a set of segments 122 from the cluster with the greatest determined mean K_keep score is selected. Then, in step 218, it is determined which segments 122 have a length of less than a predetermined number of bars (e.g., 4 bars), and those segments are added to the selected set of segments, according to one example embodiment herein, to provide a combined set of segments 122. The combined set of segments 122 resulting from step 218 is deemed to be assigned to “S_keep”, and thus each segment 122 of the combined set will be maintained (kept) with its original content, whether the content includes vocal content, instrumental content, or both.
To determine segments “(S_subs)” for which the original vocal content included therein will be replaced, and to determine segments (S_add) to which vocals from other songs will be added (versus replaced), the remaining set of segments 122 that had not been previously assigned to S_keep are employed. More specifically, to determine segments S_add, those ones of the remaining segments 122 (i.e., those not resulting from step 218) that are deemed to not contain vocal content are identified. In one example embodiment herein, identification of such segments 122 is performed as described in the Humphrey application (and/or the identification may be based on information 1131 stored in the database), and can include determining a mean probability that respective ones of the segments 122 contain vocal content (at each of the beats) (step 220). Then, for each such segment 122, a determination is made as to whether the mean determined therefor is lower than a predetermined threshold (e.g., 0.1) in step 222. If the mean for respective ones of those segments 122 is not lower than the predetermined threshold (i.e., if the mean equals or exceeds the predetermined threshold) (“No” in step 222), then those respective segments 122 are deemed to be segments (S_subs) for which the original vocals thereof will be replaced (i.e., each such segment is assigned to “S_subs”) (step 224). If the mean calculated for respective ones of the segments 122 identified in step 220 is lower than the predetermined threshold (“Yes” in step 222), then those segments 122 are deemed to be segments (S_add) to which vocals from other, candidate tracks 110 will be added (i.e., each such segment is assigned to “S_add”) (step 226).
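A minimal sketch of the assignment just described is given below, assuming a per-beat vocal probability sequence is available for each segment and using the example threshold of 0.1 (all names are illustrative only):

```python
def assign_remaining_segments(remaining_segments, vocal_probs, threshold=0.1):
    """Illustrative sketch of steps 220-226: split segments 122 not assigned to
    S_keep into S_subs (vocals replaced) and S_add (vocals added).

    remaining_segments -- identifiers of the segments not assigned to S_keep
    vocal_probs        -- mapping from segment identifier to per-beat vocal probabilities
    """
    s_subs, s_add = [], []
    for seg in remaining_segments:
        probs = vocal_probs[seg]
        mean_prob = sum(probs) / len(probs)   # step 220: mean per-beat vocal probability
        if mean_prob < threshold:
            s_add.append(seg)                 # step 226: likely no vocals, so add vocals
        else:
            s_subs.append(seg)                # step 224: likely has vocals, so replace them
    return s_subs, s_add
```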
A procedure 300 to perform automashups using the segments (S_subs) and (S_add), according to an example aspect herein, will now be described, with reference to
Then, in step 304, beat and downbeat alignment is performed for the segment 122 under consideration and the candidate (e.g., vocal) segment(s) 124 determined to be compatible in step 302. In step 306, transition refinement is performed for the segment 122 under consideration and/or the candidate segment(s) 124 aligned in step 304, based on, for example, segmentation information, beat and downbeat information, and voicing information, such as that stored among information 1131 in association with the tracks 110, 112 and/or segments 122, 124 in the database. Then, in step 308, those segments 122, 124 are mixed. In one example, mixing includes a procedure involving time-stretching and pitch shifting using, for example, pysox or a library such as elastique. By example, in a case where that segment 122 was previously assigned to S_subs, mixing can include replacing vocal content of that segment 122 with vocal content of the aligned segment 124. Also by example, in a case where the segment 122 was previously assigned to S_add, mixing can include adding vocal content of the segment 124 to the segment 122.
In a next step 310, a determination is made as to whether a next segment 122 among segments (S_subs) and (S_add) exists in the query track 112, for being processed in the procedure 300. If “Yes” in step 310, then control passes back to step 302 where the procedure 300 is performed again for the next segment 122 of the track 112. If “No” in step 310, then the procedure ends in step 312. As such, the procedure 300 is performed (in one example embodiment) in sequential order, from a first segment 122 of the query track 112 until the last segment 122 of the query track 112. The procedure also can be performed multiple times, based on the query track 112 and multiple candidate tracks 110, such that a mashup is created based on multiple ones of the tracks 110. Also in a preferred embodiment herein, to reduce processing load and the amount of time required to perform procedure 300, the number of candidate tracks 110 that are employed can be reduced prior to the procedure 300, by selecting best options from among the candidate tracks 110. This is performed by determining a “song mashability score” (e.g., score 126 of
As a result of the procedure 300, a mashup track 120 (
Before describing how a song mashability score is determined, the song suggester procedure 400 according to an example aspect herein will first be described. In one example embodiment herein, the song suggester procedure 400 involves calculating a song mashability score defining song mashability. To do so, a number of different types of scores are determined or considered to determine song mashability, including, by example and without limitation, an acoustic feature vector distance, a likelihood of including vocals, closeness in tempo, and closeness in key.
An acoustic vector distance score is represented by “Ksong (acoustic)”. In one example embodiment herein, an ideal normalized distance between tracks can be predetermined such that segments under evaluation are not too distant from one another in terms of acoustic distance. The smaller the distance between the query and candidate (e.g., vocal) tracks, the higher is the score. Of course, in other example embodiments herein, the ideal normalized distance need not be predetermined in that manner. Also, it is within the scope of the invention for the ideal normalized distance to be specified by a user, and/or the ideal normalized distance may be such that the segments under evaluation are not close in space (i.e., and therefore the segments may be from songs of different genres) to achieve a desired musical effect, for example.
In one example embodiment herein, an acoustic feature vector distance score Ksong(acoustic) is determined according to the procedure 400 of
In another example embodiment herein, the predetermined algorithm is the Annoy (Approximate Nearest Neighbors Oh Yeah) algorithm, which can be used to find nearest neighbors. Annoy is a library with bindings for searching for points in space that are close to a particular query point. Annoy can create file-based data structures that can be mapped into memory so that various processes may share the same data. In one example, and as described above, the Annoy algorithm builds up a forest of binary trees, wherein, for each tree, all points are split recursively by random hyperplanes. The root of each tree is inserted into a priority queue. All trees are searched using the priority queue until there are search_k candidates. Duplicate candidates are removed, the distance to each remaining candidate is computed, the candidates are sorted by distance, and the top ones are returned.
In general, a nearest neighbor algorithm involves steps such as: (a) start on an arbitrary vertex as a current vertex, (b) find out a shortest edge connecting the current vertex with an unvisited vertex V, (c) set the current vertex to V, (d) mark V as visited, and (e) if all the vertices in domain are visited, then terminate. The sequence of the visited vertices is the output of the algorithm.
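Purely as an illustrative usage sketch of the Annoy library (the vector dimensionality, metric, tree count, and toy data below are assumptions, not part of the embodiments):

```python
import random
from annoy import AnnoyIndex   # Annoy: approximate nearest-neighbor library

DIM = 64                       # assumed acoustic feature vector dimensionality
index = AnnoyIndex(DIM, 'angular')

# Toy stand-ins for per-track acoustic feature vectors (assumed data).
candidate_vectors = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(1000)]
for i, vec in enumerate(candidate_vectors):
    index.add_item(i, vec)
index.build(10)                # build 10 random-hyperplane trees

query_vector = [random.gauss(0, 1) for _ in range(DIM)]
# Indices and angular distances of the 50 candidate tracks nearest the query track.
ids, dists = index.get_nns_by_vector(query_vector, 50, include_distances=True)
```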
Referring again to
Then, for a given candidate track 110 with index j, formula (F4) is performed in step 408 to determine a distance (“difference”) between the final vector of acoustic feature vector distances (Vdist) and an ideal normalized distance:
difference=Vdist[j]−ideal_norm_distance (F4),
where “Vdist[j]” is the final vector of acoustic feature vector distances for candidate track 110 with index j, and “ideal_norm_distance” is the ideal normalized distance. In one example embodiment herein, the ideal normalized distance ideal_norm_distance can be predetermined, and, in one example, is zero (‘0’), to provide a higher score to acoustically similar songs.
A value of “Ksong(acoustic)” (the acoustic feature vector distance score) is then determined in step 410 according to the following formula (F5):
Ksong(acoustic)=max(0.01,1−abs(difference)) (F5),
where “difference” is defined as in formula (F4).
In the foregoing manner, the acoustic feature vector score Ksong(acoustic) is determined.
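For illustration only, formulas (F4) and (F5) may be sketched as follows, assuming the normalized distance vector Vdist has already been computed as described above (the function name is an assumption):

```python
import numpy as np

def ksong_acoustic(vdist, ideal_norm_distance=0.0):
    """Illustrative sketch of (F4)/(F5): per-candidate acoustic feature vector
    distance score, given the normalized distance vector Vdist in [0, 1]."""
    difference = np.asarray(vdist) - ideal_norm_distance   # (F4)
    return np.maximum(0.01, 1.0 - np.abs(difference))      # (F5), one score per candidate
```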
As described above, another type of information that is used to determine a mashability score is information about the presence of vocals (if any) in time, or, in other words, information representing the likelihood that a segment in question contains vocals. As described above, information about the presence of vocals (if any) in time, for a candidate track 110, can be obtained according to the method described in the Humphrey application, although this example is not exclusive, and the information can be obtained from among the information 1131 stored in a database. For convenience, information representing the likelihood that a segment in question contains vocals is referred to herein as a “vocalness likelihood score”.
In one example embodiment herein, a greater likelihood of a track segment including vocals means a greater score. Such a relationship can be useful in situations where, for example, users would like to search for tracks 110 which contain vocals. In another example scenario (e.g., a DJ wanting to mix together songs) the vocalness likelihood score may be ignored.
In one example embodiment herein, a vocalness likelihood score can be determined according to procedure 500 of
Another type of information that is used to determine a mashability score is closeness in tempo. For determining a score for closeness in tempo, according to an example embodiment herein, that score, which is represented by “Ksong(tempo)”, is determined according to the following formula (F6):
Ksong(tempo)=np.max([0.01,1−abs(log2(tempo_cand/tempo_query)*K_tempo)]) (F6)
where tempo_cand and tempo_query are the tempi of the candidate and query tracks 110, 112, respectively (e.g., such tempi can be retrieved from the database), and K_tempo is a factor to control the penalty for the difference between tempi. Tempo can be determined in many ways. One example includes: tempo=60/median(durations), where durations are the durations of the beats in a song. In one example embodiment herein, the closer the candidate and query tracks 110, 112 are in beats-per-minute (bpm), the higher is the score Ksong(tempo) (in a logarithmic scale).
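A hedged sketch of the tempo score of formula (F6), together with the example tempo estimate described above, is given below (the default value of K_tempo is an assumption):

```python
import numpy as np

def tempo_from_beats(beat_durations):
    """tempo = 60 / median(beat durations), as described above."""
    return 60.0 / float(np.median(beat_durations))

def ksong_tempo(tempo_cand, tempo_query, k_tempo=1.0):
    """Illustrative sketch of (F6): tempo closeness score between candidate and query tracks."""
    return np.max([0.01, 1.0 - abs(np.log2(tempo_cand / tempo_query) * k_tempo)])
```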
Another type of information that is used to determine a mashability score is closeness in key, which is defined by a “closeness in key score” Ksong(key). The manner in which a closeness in key score Ksong(key) is determined according to an example embodiment herein will now be described. The closeness in key score Ksong(key) measures how close together tracks 110, 112 are in terms of musical key. In one example embodiment herein, “closeness” in key is measured by way of a difference in semitones between the keys of tracks 110, 112, although this example is non-limiting. Also in one example embodiment herein, the smaller the difference (in semitones) between the keys of tracks 110, 112, the greater is the score Ksong(key).
Referring again to step 604, if two tracks 110, 112 under consideration are not both in a major key, or are not both in a minor key (“No” in step 604), then, prior to determining the score Ksong(key), the relative key or pitch corresponding to the key or pitch, respectively, of one of those tracks 110, 112 is determined (step 606). For example, each major key in Western music is known to have an associated relative minor key, and each minor key is known to have an associated relative major key. Such relationships between relative majors and minors may be stored in a lookup table in a database (such as the database described above).
Step 608 will now be described. In step 608, a determination is made of the difference in semitones between the root notes of the keys received as a result of the performance of step 604 or 606, wherein the difference is represented by variable “n_semitones”. In one example herein, the difference n_semitones can be in a range between a minimum of zero “0” and a maximum of six “6”, although this example is not limiting.
By example, if a candidate track 110 under consideration is in a major key and has a root pitch class of A major, and the query track 112 under consideration also is in a major key and has a root pitch class of B major (“Yes” in step 604), then in step 608 a determination is made of the difference (in semitones) between those root pitch classes, which in the present example results in a determination of two (‘2’) semitones (i.e., n_semitones=2). In another example, in a case in which the candidate track 110 under consideration is in a major key and has a root pitch class of C major, and the query track 112 under consideration is in a minor key and has a root pitch class of G minor (“No” in step 604), then the relative minor of C major (e.g., A minor) is correlated to and accessed from the lookup table 1133 in step 606, and is provided to step 608 along with G minor. In step 608, a determination is made of the difference (in semitones) between those root pitch classes, which in the present example results in a determination of two (‘2’) semitones (i.e., n_semitones=2).
Step 610 will now be described. According to an example embodiment herein, step 610 is performed to determine the closeness in key score, using the following formula (F6′):
Ksong(key)=max(0,min(1,1−abs(n_semitones*K_semitone_change)−mode_change_score_penalty)) (F6′),
where the variable Ksong(key) represents the closeness in key score, variable n_semitones represents the difference determined in step 608, and mode_change_score_penalty is pre-set equal to ‘0’ if both songs are in a same key type (in the case of “Yes” in step 604), or is equal to a value of a constant K_mode_change_score, which represents a penalty for requiring a change in key type (in the case of “No” in step 604). In one example embodiment herein, constant K_mode_change_score is equal to a predetermined value, such as, by example and without limitation, 0.9. Also in formula (F6′), and according to one example embodiment herein, K_semitone_change is equal to a predetermined value, such as, by example and without limitation, 0.4. Which particular value is employed for the variable K_semitone_change depends on how much it is desired to penalize any transpositions that may be required to match both key types (i.e., in the case of “No” in step 604), and can depend on, for example, the quality of a pitch shifting algorithm used, the type (e.g., genre) of music used, the desired musical effect, etc.
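Purely as an illustrative sketch of formula (F6′) and of the relative-key handling of steps 604 to 608 (the pitch-class table, the helper functions, and the default constants are assumptions of this illustration):

```python
PITCH_CLASSES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def relative_key(root, mode):
    """Relative minor of a major key lies 9 semitones above its root (C major -> A minor);
    the relative major of a minor key lies 3 semitones above its root (G minor -> A# major)."""
    i = PITCH_CLASSES.index(root)
    if mode == 'major':
        return PITCH_CLASSES[(i + 9) % 12], 'minor'
    return PITCH_CLASSES[(i + 3) % 12], 'major'

def semitone_distance(root_a, root_b):
    """Smallest distance in semitones between two root pitch classes (0 to 6)."""
    d = abs(PITCH_CLASSES.index(root_a) - PITCH_CLASSES.index(root_b)) % 12
    return min(d, 12 - d)

def ksong_key(cand, query, k_semitone_change=0.4, k_mode_change_score=0.9):
    """Illustrative sketch of (F6'). cand and query are (root, mode) pairs, e.g. ('A', 'major')."""
    cand_root, cand_mode = cand
    if cand_mode == query[1]:
        mode_change_penalty = 0.0                    # same key type ("Yes" in step 604)
    else:
        mode_change_penalty = k_mode_change_score    # key type change required ("No" in step 604)
        cand_root, _ = relative_key(cand_root, cand_mode)  # step 606: map to the relative key
    n_semitones = semitone_distance(cand_root, query[0])   # step 608
    return max(0.0, min(1.0, 1.0 - abs(n_semitones * k_semitone_change) - mode_change_penalty))
```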
According to an example aspect herein, a song mashability score (represented by variable (Ksong[j])) between the query track 112, and each of the candidate tracks 110, can be determined. Reference is now made to
Ksong[j]=Ksong(key)[j]*Ksong(tempo)[j]*Ksong(vocalness)[j]*Ksong(acoustic)[j] (F7).
In one example embodiment herein, the resulting vector Ksong[j] has Nc components, where Nc corresponds to the number of candidate tracks. Steps 702 to 710 of procedure 700 can be performed with respect to each of the Nc candidate tracks 110 to yield a respective score Ksong[j] for each such track 110. Also in one example embodiment herein, the song mashability scores Ksong[j] determined for the candidate tracks 110 can be ordered in descending order (in step 710) from greatest score to least score (although in another example, they may be ordered in ascending order, from least score to greatest score).
In one example embodiment herein, to limit the number of tracks that may be employed for mashing up, certain ones of the j candidate tracks 110 can be eliminated based on predetermined criteria. As an example, respective mashability scores Ksong [j] determined for respective ones of the j candidate tracks 110 can be compared individually to a predetermined threshold value (step 712). If a score is less than the predetermined threshold value (“No” in step 712), then the respective candidate track 110 is discarded (step 714). If a score is equal to or greater than the predetermined threshold value (“Yes” in step 712), then the respective candidate track 110 under consideration is maintained (selected) in step 716 (for eventually being mashed up in step 308 of
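A non-limiting sketch of formula (F7), together with the ordering and threshold filtering of steps 710 to 716, is given below (the function name and the example threshold value are assumptions):

```python
import numpy as np

def song_mashability(key_s, tempo_s, vocal_s, acoustic_s, threshold=0.1):
    """Illustrative sketch of (F7) plus the filtering of steps 712-716.
    Each argument is a length-Nc array holding one score per candidate track."""
    ksong = (np.asarray(key_s) * np.asarray(tempo_s)
             * np.asarray(vocal_s) * np.asarray(acoustic_s))      # (F7)
    order = np.argsort(ksong)[::-1]                               # step 710: descending order
    selected = [int(j) for j in order if ksong[j] >= threshold]   # steps 712-716: keep or discard
    return ksong, selected
```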
Having described the manner in which song mashability is determined according to an example embodiment herein, a procedure for finding a segment, such as, e.g., a candidate (e.g., vocal) segment 124, with high mashability relative to a query track (e.g., an accompaniment track) 112 according to another example aspect herein, will now be described, with reference to
In one example embodiment herein, to enable a vertical mashability score to be calculated, a minimum length of segments (in terms of the number of beats thereof) is first determined in step 902, using the following formula (F8):
Nbeats=min(Nvoc,Nacc) (F8),
where variable Nbeats represents a minimum length of segments (in terms of number of beats), Nvoc represents the number of beats of the candidate (e.g., vocal) segment 124 under consideration, and variable Nacc represents the number of beats of the query segment 122 under consideration from the query track 112. In the initial performance of step 902, the segments under consideration include a first query segment 122 of the query track 112 and a first candidate segment 124 of the candidate track 110 under consideration.
In a next step 904, a tempo compatibility between the candidate segment 124 and the query segment 122 is determined (in one example, the closer the tempo, the higher is a tempo compatibility score K_seg_tempo, to be described below). In one example embodiment herein, step 904 can be performed according to procedure 1000 shown in
K_seg_tempo=max([min_score,1−abs(log2(tempo_candidate/tempo_query)*K)]) (F9),
where K_seg_tempo represents the tempo compatibility score, min_score represents a predetermined minimum value for that score (e.g., 0.0001), tempo_candidate represents the tempo value obtained for the candidate segment 124 in step 1006, tempo_query represents the tempo value obtained for the query segment 122 in step 1006, and K is a value to control a penalty due to tempo differences. K is a predetermined constant (e.g., 0.2). The higher the value of K, the lower the score for a given tempo difference; in other words, a higher K makes it more important that the query and candidate have similar tempi. It is noted that the closer the tempi of the segments 122, 124 are, the greater is the score.
Referring again to
K_seg_harm_prog=(1+med_corr)/2 (F10),
wherein K_seg_harm_prog represents the harmonic compatibility score, and med_corr represents the median value determined in step 1106′.
Another factor involved in vertical mashability is normalized loudness compatibility. Referring again to
K_seg_norm_loudness=min([target_loudness,query_loudness])/max([target_loudness,query_loudness]) (F11),
where K_seg_norm_loudness represents the normalized loudness compatibility score, target_loudness represents a loudness of the candidate (target) segment 124 (as determined in step 1206), and query_loudness represents a loudness of the query segment 122 (as also determined in step 1206).
Another factor involved in vertical mashability is vocal activity detection on the segment of the candidate (e.g., vocal) track 110 under consideration. Referring again to
Beat stability can be another factor involved in vertical mashability. Beat stability, for a candidate segment 124 under consideration, is the stability of the beat durations within that segment, wherein, in one example embodiment herein, a greater beat stability results in a higher score. Beat stability is determined in step 912 of
where i corresponds to the index of a beat, and delta_rel[i] is a vector representing a relative change between durations of consecutive beats in the candidate segment 124 under consideration. In one example embodiment herein, “dur” represents a duration, the vector (delta_rel[i]) has a size represented by (Nbeats−1), and formula (F12) provides a maximum value.
In step 1304, a beat stability score, K_seg_beat_stab, is determined according to the following formula (F13):
K_seg_beat_stab=max(0,1−max(delta_rel)) (F13).
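Purely as a hedged illustration (the exact form of delta_rel in formula (F12) is not reproduced above; a relative change between consecutive beat durations is assumed here), the beat stability score of formula (F13) may be sketched as:

```python
import numpy as np

def beat_stability_score(beat_durations):
    """Illustrative sketch of (F13) for one candidate segment 124.
    The definition of delta_rel below is an assumed reading of (F12)."""
    dur = np.asarray(beat_durations, dtype=float)
    delta_rel = np.abs(np.diff(dur)) / dur[:-1]        # assumed (F12): size Nbeats - 1
    return max(0.0, 1.0 - float(np.max(delta_rel)))    # (F13)
```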
Another factor involved in vertical mashability is harmonic change balance, which measures if there is a balance in a rate of change in time of harmonic content (chroma vectors) of both query and candidate (target) segments 122, 124. Briefly, if musical notes change often in one of the tracks (either query or candidate), the score is higher when the other track is more stable, and vice versa.
Harmonic change balance is determined in step 914 of
Change=(1−corr)/2 (F14).
As a result, a vector is obtained with (Nbeats−1) change rate values for both candidate and query tracks, 110, 112, wherein the change rate value for the candidate (e.g., vocal) track 110 is represented by “CRvoc”, and the change rate value for the query (accompaniment) track 112 is represented by “CRacc”.
A Harmonic Change Balance (HCB) vector is then determined in step 1408′ according to the following formula (F15):
HCB[i]=1−abs(CRacc[i]−(1−CRvoc[i])) (F15),
where HCB[i] represents the Harmonic Change Balance, the index [i] corresponds to each element of the change rate vectors, CRvoc is the change rate value for the candidate (e.g., vocal) track 110, and CRacc is the change rate value for the query track 112.
A Harmonic change balance score (K_harm_change_bal) is then determined in step 1410′ according to the following formula (F16):
K_harm_change_bal=median(HCB) (F16).
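A hedged sketch of formulas (F14) through (F16) is given below; the use of a per-beat correlation between consecutive chroma vectors for “corr” is an assumption about the omitted preceding step, not a statement of the embodiments:

```python
import numpy as np

def harmonic_change_balance(chroma_cand, chroma_query):
    """Illustrative sketch of (F14)-(F16). chroma_cand and chroma_query hold
    per-beat 12-dimensional chroma vectors (shape: Nbeats x 12)."""
    def change_rate(chroma):
        chroma = np.asarray(chroma, dtype=float)
        rates = []
        for a, b in zip(chroma[:-1], chroma[1:]):
            corr = np.corrcoef(a, b)[0, 1]        # assumed source of 'corr' for (F14)
            rates.append((1.0 - corr) / 2.0)      # (F14)
        return np.asarray(rates)                  # size Nbeats - 1

    cr_voc = change_rate(chroma_cand)             # CRvoc
    cr_acc = change_rate(chroma_query)            # CRacc
    hcb = 1.0 - np.abs(cr_acc - (1.0 - cr_voc))   # (F15)
    return float(np.median(hcb))                  # (F16)
```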
Another factor involved in vertical mashability is segment length. In one example embodiment herein, the closer the lengths of the query and candidate segments 122, 124 (measured in beats) are to each other, the greater is a segment length score K_len. Segment length is measured in step 916 of
K_len=min([Nvoc/Nacc,Nacc/Nvoc]) (F17),
wherein K_len represents the segment length score, Nvoc represents a length of a candidate segment 124 under consideration, and Nacc represents a length of a query segment 122 under consideration.
According to an example embodiment herein, vertical mashability is measured by a vertical mashability score (V), which is determined as the product of all the foregoing types of scores involved with determining vertical mashability. According to one example embodiment herein, the vertical mashability score (V) is determined according to the following formula (F18), in step 918:
V=(K_seg_harm_prog^(W_seg_harm_prog))*(K_seg_tempo^(W_seg_tempo))*(K_seg_vad^(W_seg_vad))*(K_seg_beat_stab^(W_seg_beat_stab))*(K_harm_change_bal^(W_harm_change_bal))*(K_len^(W_len)) (F18),
where the symbol ^ represents a power operator, the term W_seg_harm_prog represents a weight for the score K_seg_harm_prog, the term W_seg_tempo represents a weight for the score K_seg_tempo, the term W_seg_vad represents a weight for the term K_seg_vad, the term W_seg_beat_stab represents a weight for the term K_seg_beat_stab, the term W_harm_change_bal represents a weight for the term K_harm_change_bal, and the term W_len represents a weight for the term K_len.
The weights enable control of the impact or importance of each of the mentioned scores in the calculation of the overall vertical mashability score (V). In one example embodiment herein, one or more of the weights have a predetermined value, such as, e.g., ‘1’. Weights of lower value result in the applicable score having a lesser impact or importance on the overall vertical mashability score, relative to weights of higher value, and vice versa.
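For illustration only, formula (F18) may be sketched as follows (the dictionary-based interface and the default weight of 1 are assumptions of this sketch):

```python
def vertical_mashability(scores, weights=None):
    """Illustrative sketch of (F18): product of the component scores, each raised
    to its corresponding weight (all weights default to 1)."""
    weights = weights or {}
    v = 1.0
    for name, value in scores.items():
        v *= value ** weights.get(name, 1.0)
    return v

# Example with illustrative values only:
# vertical_mashability({'K_seg_harm_prog': 0.8, 'K_seg_tempo': 0.9, 'K_seg_vad': 0.7,
#                       'K_seg_beat_stab': 0.95, 'K_harm_change_bal': 0.6, 'K_len': 1.0})
```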
Horizontal mashability will now be described in detail. A horizontal mashability score (H) considers a closeness between consecutive tracks. In one example embodiment, to determine horizontal mashability, tracks from which vocals may be employed (i.e., candidate tracks 110) for a mashup are considered.
To determine horizontal mashability, a distance is computed between the acoustic feature vectors of the candidate track 110 whose segment 124 is a current candidate and a segment 124 (if any) that was previously selected as a best candidate for a mashup. The smaller the distance, the higher is the horizontal mashability score. Determining horizontal mashability also involves considering a repetition of the selected segment 124.
In one example embodiment herein, an acoustic feature vector distance is determined according to procedure 1500 of
A next step 1506 includes normalizing the distance vector (from step 1504) by its maximum value, to obtain a normalized distance vector. The resulting final vector of acoustic feature vector distances (Vsegdist) lies within the interval [0,1].
For a given candidate track 110 with index j, formula (F19) is performed in step 1508 to determine a distance (“difference”) between the final vector of acoustic feature vector distances (Vsegdist) and an ideal normalized distance:
difference=Vsegdist[j]−ideal_norm_distance (F19),
where Vsegdist[j] is the final vector of acoustic feature vector distances (determined in step 1506), and “ideal_norm_distance” is the ideal normalized distance. In one example embodiment herein, the ideal normalized distance ideal_norm_distance can be predetermined, and, in one example, is zero (‘0’), to provide a higher score for acoustically similar tracks (to allow smooth transitions between vocals in terms of style/genre).
A value of K_horiz_ac is then determined in step 1510 according to the following formula (F20):
K_horiz_ac=max(0.01,1−abs(difference)) (F20),
where K_horiz_ac represents a horizontal acoustic distance score of the candidate track 110 with index j.
The manner in which the number of repetitions of a given segment 124 is determined (e.g., to favor changing between vocals of different tracks/segments), will now be described with reference to the procedure 1600 of
K_repet=1/(1+num_repet) (F21),
where, as described above, num_repet is equal to the number of times the specific segment 124 has already been previously selected as the best candidate in searches of candidate segments 124 (e.g., vocal segments) for being mixed with previously considered query segments 122.
A procedure 1700 for determining a horizontal mashability score according to an example aspect herein will now be described, with reference to
H=(K_horiz_ac^W_horiz_ac)*(K_repet^W_repet) (F22),
where H represents the horizontal mashability score, and W_horiz_ac and W_repet are weights that allow control of an importance or impact of respective scores K_horiz_ac and K_repet in the determination of value H. In one example embodiment herein, W_horiz_ac=W_repet=1 by default.
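Purely as an illustrative sketch, formulas (F21) and (F22) may be combined as follows (the function name and default weights are assumptions):

```python
def horizontal_mashability(k_horiz_ac, num_repet, w_horiz_ac=1.0, w_repet=1.0):
    """Illustrative sketch of (F21)/(F22) for one candidate segment 124."""
    k_repet = 1.0 / (1.0 + num_repet)                          # (F21)
    return (k_horiz_ac ** w_horiz_ac) * (k_repet ** w_repet)   # (F22)
```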
Referring now to
M[j]=Ksong(key)[j]*Ksong(acoustic)[j]*V[j]*H[j] (F23)
where M[j] represents the total mashability score for a jth segment 124 under consideration, Ksong(key)[j] represents the key distance score for the segment 124, Ksong(acoustic)[j] represents the acoustic feature vector distance score calculated for the segment 124, V[j] represents the vertical mashability score for the segment 124, and H[j] represents the horizontal mashability score H for the segment 124. Steps 1802 to 1810 can be performed for each segment 124 of candidate track(s) 110 under consideration.
After computing the score (M) for all segments 124 of all candidate tracks 110 under consideration, the segment 124 with the highest total mashability score (M) is selected (step 1812), although in other example embodiments, a sampling among all possible candidate segments can be done with a probability which is proportional to their total mashability score. The above procedure can be performed with respect to all segments 122 that were assigned to S_subs and S_add of the query track 112 under consideration, starting from the start of the track 112 and finishing at the end of the track 112, to determine mashability between those segments 122 and individual ones of the candidate segments 124 of candidate tracks 110 that were selected as being compatible with the query track 112.
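A non-limiting sketch of formula (F23) and of the selection of step 1812 is given below (the array-based interface is an assumption):

```python
import numpy as np

def total_mashability(ksong_key, ksong_acoustic, v_scores, h_scores):
    """Illustrative sketch of (F23) plus step 1812. Each argument holds one value
    per candidate segment 124 under consideration."""
    m = (np.asarray(ksong_key) * np.asarray(ksong_acoustic)
         * np.asarray(v_scores) * np.asarray(h_scores))     # (F23)
    best = int(np.argmax(m))     # step 1812: segment with the highest total mashability
    return m, best
```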
As described above with respect to the procedure 300 of
Alignment in step 304 of procedure 300 involves properly aligning the candidate (e.g., vocal) segment 124 with the segment 122 under consideration from the query track 112 to ensure that, once mixing occurs, the mixed segments sound good together. As an example, if a beat of the candidate segment 124 is not aligned properly with a corresponding beat of the segment 122, then a mashup of those segments would not sound good together and would not be in an acceptable musical time. Proper alignment according to an example aspect herein avoids or substantially minimizes that possibility.
Also by example, another factor taken into consideration is musical phrasing. If the candidate segment 124 starts or ends in the middle of a musical phrase, then a mashup would sound incomplete. Take, for example, a song like “I Will Always Love You,” as recorded by Whitney Houston. If a mashup were to select a candidate (e.g., vocal) segment that starts in the middle of the vocal phrase “I will always love you,” (e.g., at “ . . . ays love you” and cut off “I will alw . . . ”), then the result would sound incomplete. Thus, in one example embodiment herein it is desired to analyze vocal content of the candidate segment 124 to determine whether the vocal content is present at the starting or ending boundary of the segment 124, and, if so, to attempt to shift the starting and/or ending boundaries to the start or end of the musical phrase so as not to cut the musical phrase off in the middle of the musical phrase.
In one example embodiment herein, segment refinement in step 306 is performed according to procedure 2100 of
Vocal activity in the candidate track 110 is then analyzed over a predetermined number of downbeats around the downbeat location (e.g., 4 beats, either before or after the location in time) (step 2108), based on the beat and downbeat information, and voicing information. For a preliminary starting boundary of the candidate (e.g., vocal) segment 124, a search is performed (step 2110) for the first beat in the candidate track before that segment boundary in which the likelihood of containing vocals is lower than a predetermined threshold (e.g., 0.5, on a scale from 0 to 1, where 0 represents full confidence that there are no vocals at that downbeat and 1 represents full confidence that there are vocals at that downbeat). The first downbeat before the starting boundary that meets that criterion is selected as the final starting boundary for the candidate segment 124 (step 2112). This is helpful to avoid cutting a melodic phrase at the start of the candidate segment 124, and alignment between candidate and query segments 122, 124 is maintained based on the refined downbeat location. Similarly, for the ending boundary of the candidate segment 124, a search is performed (step 2114) for the first beat in the candidate track after the segment boundary in which the likelihood of containing vocals is lower than the threshold (e.g., 0.5), and that downbeat is selected as the final ending boundary of the candidate segment 124 (step 2116). This also is helpful to avoid cutting a melodic phrase at the end of the segment 124.
As such, by virtue of procedure 2100, the boundaries of the candidate segment 124 are adjusted so that the starting and ending boundaries of a segment are aligned with a corresponding downbeat, and the starting and ending boundaries can be positioned before or after a musical phrase of vocal content (e.g., at a point at which there are no vocals). The procedure 2100 can be performed for more than one candidate track 110 with respect to the query track 112 under consideration, including for all segments selected (even segments from different songs) as being compatible.
It is helpful to align the starting and ending boundaries with the downbeats. For example, if the corresponding insertion point of the instrumentals is also selected at a downbeat (of the instrumentals), then, when the two are put together by aligning the starting boundary of the vocals with the insertion point of the instrumentals, the beats will automatically also be aligned.
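A hedged sketch of the starting-boundary refinement just described is given below; the data layout (parallel lists of beat times and per-beat vocal likelihoods) and the fallback behavior are assumptions of this illustration:

```python
def refine_start_boundary(start_time, downbeats, beats, vocal_probs, threshold=0.5):
    """Illustrative sketch of steps 2110-2112 for a candidate segment 124:
    snap the preliminary start boundary to the nearest downbeat, then move it back
    to the first earlier beat whose vocal likelihood is below the threshold, so a
    melodic phrase is not cut mid-phrase."""
    snapped = min(downbeats, key=lambda d: abs(d - start_time))     # nearest downbeat
    for beat_time, prob in sorted(zip(beats, vocal_probs), reverse=True):
        if beat_time < snapped and prob < threshold:
            return beat_time
    return snapped   # assumed fallback if no low-vocal beat precedes the boundary
```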
As described above, in procedure 300 of
The particular gain (in dB) that is applied to a segment in step 2204 can depend on the type of the segment, according to an example embodiment herein. Preferably, for query segments 122 that have been assigned to S_keep, the original loudness thereof is maintained (i.e., the gain=1). For segments 122 assigned to S_subs and S_add, on the other hand, a loudness of beats of the tracks 110, 112 is employed and a heuristically determined value is used for a gain (in dB).
Gain=max(Lvocal,Laccomp−2)−MaxLvocal (F24).
As a result of the “Gain” being determined for a particular candidate segment 124 (to be used in place of or to be added to a query segment 122 assigned to S_subs or S_add, respectively, in step 2510), that Gain is applied to the segment 124 in step 2204.
After step 2204, time-stretching is performed in step 2206. Preferably, time-stretching is performed to each beat of respective candidate (e.g., vocal) tracks 110 so that they conform to beats of the query track 112 under consideration, based on a time-stretching ratio (step 2206). In one example embodiment herein, the time-stretching ratio is determined according to procedure 2400 of
Step 2208 includes performing pitch shifting on each candidate (e.g., vocal) segment 124, as needed, based on a pitch-shifting ratio. In some embodiments, the pitch-shifting ratio is computed while computing the mashability scores discussed above. For example, the vocals are pitch-shifted by n_semitones, where n_semitones denotes the number of semitones of the shift. In some embodiments, the number of semitones is determined during example step 608 discussed in reference to
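For reference, a shift of n_semitones corresponds to scaling every frequency by 2^(n_semitones/12). The sketch below illustrates that relationship and, purely as an assumption not found in the description above, shows how an off-the-shelf routine such as librosa's pitch shifter could be used to apply the shift.

```python
import librosa  # third-party audio library, used here purely as an illustration

def semitones_to_ratio(n_semitones):
    """Frequency ratio corresponding to a shift of n_semitones
    (e.g., +2 semitones -> a ratio of about 1.122)."""
    return 2.0 ** (n_semitones / 12.0)

def pitch_shift_vocals(y, sr, n_semitones):
    """Pitch-shift a mono vocal signal `y` (sampled at `sr` Hz) by `n_semitones`."""
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_semitones)
```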
Then, the procedure 2200 can include applying fade-in and fade-out, and/or high-pass filtering or equalization around transition points, using the determined transitions (step 2210). In one example embodiment herein, the parts of each segment 124 (of a candidate track 110 under consideration) that are located temporally before the initial and after the final points of the refined boundaries (i.e., the transitions) can be rendered with a volume fade-in and a fade-out, respectively, so as to provide a smooth intro and outro and to reduce clashes between vocals of different tracks. The fade-in and fade-out can be performed in a manner known in the art. In another example embodiment herein, instead of performing a fade-in in step 2210, low-pass filtering can be performed with a filter cutoff that descends from, by example, 2 kHz at a transition position to 0 Hz at the section initial boundary, on a logarithmic scale (i.e., where no filtering is performed at the boundary). Similarly, instead of performing a fade-out in step 2210, low-pass filtering can be performed with a cutoff frequency that increases from, by example, 0 Hz to 2 kHz, on a logarithmic scale. Depending on the length of the transition (which depends on the refinement performed to avoid cutting vocal phrases), a faster or slower fade-in or fade-out can be provided (i.e., the longer the transition, the slower the fade-in or fade-out). In some embodiments, the transition zone is the zone between the boundary refined using vocal activity detection and the boundary refined only with downbeat positions.
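By way of illustration only, the fade-in/fade-out variant of step 2210 can be sketched as follows for a mono signal; the transition-zone lengths are assumed to have been converted to sample counts, and the function name is illustrative.

```python
import numpy as np

def apply_fades(segment, fade_in_samples, fade_out_samples):
    """Apply a linear fade-in over the leading transition zone and a linear
    fade-out over the trailing transition zone of a mono audio segment.

    The longer the transition zone (in samples), the slower the resulting
    fade, matching the behaviour described above."""
    out = np.array(segment, dtype=np.float64)  # work on a copy
    if fade_in_samples > 0:
        out[:fade_in_samples] *= np.linspace(0.0, 1.0, fade_in_samples)
    if fade_out_samples > 0:
        out[-fade_out_samples:] *= np.linspace(1.0, 0.0, fade_out_samples)
    return out
```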
Referring again to
Another example aspect herein will now be described. In accordance with this example aspect, an automashup can be personalized based on a user's personal taste profile. For example, users are more likely to enjoy mashups created from songs the users know and like. Accordingly, the present example aspect enables auto-mashups to be personalized to individual users' taste profiles. Also in accordance with this example aspect, depending on the application of interest, there may not be enough servers available to adequately examine how every track might mash up with every other track, particularly in situations where a catalog of many (e.g., millions of) tracks is involved. The present example aspect reduces the number of tracks that are searched for and considered/examined for possible mash-ups, thereby reducing the number of servers and the processing power required to perform mash-ups.
A procedure 2600 according to the present example aspect will now be described, with reference to the flow diagram shown in
Next, in step 2604, tracks that were determined in step 2602 are added to a set S1. In some example embodiments herein, there may be one set S1 for each user, or, in other example embodiments, there may be a single set S1 that includes all user tracks that were determined in step 2602. In the latter case, where tracks overlap across users, only a single version of each track is included in the set S1, thereby reducing the number of tracks.
Then, in step 2606, audio analysis algorithms are performed on the tracks from set S1, and the resulting output(s) are stored as information 1131 in the database. In one example embodiment herein, the audio analysis performed in step 2606 includes determining the various types of information 1131 in the manner described above. By example only and without limitation, step 2606 may include separating components (e.g., vocal, instrumental, and the like) from the tracks, determining segmentation information based on the tracks, determining segment labelling information, performing track segmentation, determining the tempo(s) of the tracks, determining beat/downbeat positions in the tracks, determining the tonality of the tracks, determining information about the presence of vocals (if any) over time in each track, determining the energy of each of the segments in the vocal and accompaniment tracks, determining acoustic feature vector information and loudness information (e.g., amplitude) associated with the tracks, and/or the like. In at least some cases, the algorithms employed to determine at least some of the foregoing types of information can be expensive to run and may require a high level of processing power and resources. However, according to an example aspect herein, by reducing the total available number of tracks to only those included in the set S1, a reduction of costs, processing power, and resources can be achieved.
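For concreteness only, the per-track output of step 2606 can be pictured as a single record collecting the analysis results listed above; the field names below are illustrative and do not imply any particular storage schema for the information 1131.

```python
def build_track_analysis_record(track_id, analysis):
    """Collect the analysis outputs for one track from set S1 into a single
    record suitable for storage (illustrative field names only)."""
    return {
        "track_id": track_id,
        "tempo_bpm": analysis.get("tempo_bpm"),
        "beats": analysis.get("beats"),                  # beat times in seconds
        "downbeats": analysis.get("downbeats"),          # downbeat times in seconds
        "tonality": analysis.get("tonality"),            # e.g., estimated key
        "segments": analysis.get("segments"),            # segment boundaries and labels
        "vocal_activity": analysis.get("vocal_activity"),
        "segment_energy": analysis.get("segment_energy"),
        "acoustic_feature_vector": analysis.get("acoustic_feature_vector"),
        "loudness": analysis.get("loudness"),
    }
```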
For each user for which the determination in step 2602 originally was made, a further determination is made in step 2608 of a predetermined number P2 (e.g., the top 100) of the respective user's most-liked mixed, original tracks. In one example embodiment herein, the determination in step 2608 can be made by making affinity determinations for the respective users, in the above-described manner. Next, in step 2610, tracks that were determined in step 2608 are added to a set S2, wherein, in one example embodiment herein, there is a set S2 for each user (although in other example embodiments, there may be a single set S2 that includes all user tracks that were determined in step 2608).
Then, in step 2612, an intersection of the tracks from the sets S1 and S2 is determined. In one example embodiment herein, step 2612 is performed to identify which tracks appear in both sets S1 and S2. According to an example embodiment herein, in a case where set S1 includes tracks determined in step 2602 for all users, and where each set S2 includes tracks determined in step 2608 for a respective one of the users, then step 2612 determines the intersection between the tracks that are in the set S1 and the set S2, and is performed for each set S2 vis-a-vis the set S1. In an illustrative, non-limiting example in which the predetermined numbers P1 and P2 are 10 and 100, respectively, the performance of step 2612 results in between 10 and 100 tracks being identified per user. The identified tracks for each respective user are then assigned to a corresponding set SU (step 2614).
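By way of illustration only, the per-user intersection of steps 2612 and 2614 can be sketched as follows, assuming that S1 and each user's S2 are available as sets of track identifiers; the names are illustrative.

```python
def per_user_intersections(s1_track_ids, s2_by_user):
    """Return, for each user, the set SU of track ids appearing both in the
    shared set S1 and in that user's set S2 (steps 2612-2614)."""
    s1 = set(s1_track_ids)
    return {user: s1 & set(s2) for user, s2 in s2_by_user.items()}

# Illustrative usage:
# su_by_user = per_user_intersections(
#     s1_track_ids={"t1", "t2", "t3"},
#     s2_by_user={"user_a": {"t2", "t9"}, "user_b": {"t1", "t3", "t4"}},
# )
# su_by_user == {"user_a": {"t2"}, "user_b": {"t1", "t3"}}
```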
In another example embodiment herein, step 2612 is performed based on multiple users. By example and without limitation, referring to
Referring again to
In some example embodiments herein, the results of more than one user's affinity determinations (in procedure 2600) can be employed as mashup candidates, and musical compatibility determinations and possible resulting mashups can be performed for those tracks as well, in the above-described manner, whether or not some tracks overlap across users. In still another example, only tracks for which a predetermined number of users are determined to have an affinity are employed in the musical compatibility determinations and possible mashups. In still another example, where more than one user's affinity determinations are employed as mashup candidates, the intersection between those results and each user's full collection of tracks is determined, and the intersecting tracks are employed in musical compatibility determinations and possible mashups. At least some of the results of the intersection also can be employed to generate a waveform.
By virtue of the above procedure 2600, the number of tracks that are searched for and considered/examined for possible mash-ups can be reduced based on user profile(s), thereby reducing the number of servers and the processing power required to perform mash-ups.
In accordance with another example embodiment herein, a collage can be created of images (e.g., album cover art) associated with musical tracks that are employed in a “mashup” of songs. In one example embodiment herein, each pixel of the collage is an album cover image associated with a corresponding musical track employed in a mashup, and the overall collage forms a profile photo of the user. A process according to this example aspect can include downloading a user's profile picture, and album art associated with various audio tracks, such as those used in mashups personalized for the user. Next, every album art image is resized to a single pixel. A next step includes obtaining the color (e.g., average color) of that pixel and placing it in a map from colors to the images with which they are associated. This gives the dominant color of each piece of album art. Next steps include cropping the profile picture into a series of 20×20-pixel patches, resizing each of these cropped patches to one pixel, and then finding the nearest color in the map of album art colors. A next step includes replacing the cropped part of the picture with the corresponding album art resized to, by example only, 20×20 pixels. As a result, a collage of the album art images is provided, and, in one example embodiment herein, the collage forms a profile image of the user.
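Purely by way of example, the collage process can be sketched with the Pillow imaging library roughly as follows; the 20×20-pixel tile size follows the example above, while the file handling and function names are illustrative assumptions rather than part of the described system.

```python
from PIL import Image  # Pillow imaging library

TILE = 20  # tile size in pixels, following the example above

def dominant_color(path):
    """Resize an album-art image down to one pixel and return its RGB color."""
    return Image.open(path).convert("RGB").resize((1, 1)).getpixel((0, 0))

def build_collage(profile_path, album_art_paths, out_path):
    """Replace each TILE x TILE patch of the profile picture with the album
    art whose dominant color is closest to that patch's average color."""
    color_to_art = {dominant_color(p): p for p in album_art_paths}
    palette = list(color_to_art)

    profile = Image.open(profile_path).convert("RGB")
    w, h = profile.size
    collage = Image.new("RGB", (w, h))

    for x in range(0, w - w % TILE, TILE):
        for y in range(0, h - h % TILE, TILE):
            patch = profile.crop((x, y, x + TILE, y + TILE))
            avg = patch.resize((1, 1)).getpixel((0, 0))
            # Nearest palette color by squared RGB distance.
            best = min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(c, avg)))
            art = Image.open(color_to_art[best]).convert("RGB").resize((TILE, TILE))
            collage.paste(art, (x, y))

    collage.save(out_path)
```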
According to still another example embodiment herein, titles are formulated based on the titles of songs that are mashed up. That is, titles of mashed-up tracks are combined in order to create a new title that includes at least some words from the titles of the mashed-up tracks. Prior to being combined, the words from each track title are categorized into different parts of speech using Natural Language Processing, such as by, for example, the Natural Language Toolkit (NLTK), which is a known collection of libraries and tools for natural language processing in Python. A custom derivation tree determines the word order so that the combined track names are syntactically correct. Various possible combinations of words forming respective titles can be provided. In one example embodiment herein, out of all the possible combinations, the top 20% are selected based on length. The final track name is then randomly chosen from that 20%. The track names can then be uploaded to a data storage system (e.g., such as BigTable), along with other metadata for each track. From the data storage system, the track names can be retrieved and served in real-time along with the corresponding song mashups. In an illustrative example, the following (four) track titles T are employed as inputs: T={Shine on Me, I Feel Fantastic, Rolling Down the Hill, Wish You Were Here}. An algorithm according to an example embodiment herein selects the following words W from those titles T: W={shine, feel, fantastic, rolling, down, hill, wish, you, were, here}. Based on those words, the following possible combined titles are generated: “Wish the Hill,” “The Shine was Rolling,” and “The Fantastic Shine”.
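As an illustrative sketch only, the title-generation step can be approximated with NLTK as follows. The custom derivation tree is not reproduced here; a simple adjective/noun/verb template stands in for it, so this only approximates the described behaviour, and the resource name passed to nltk.download can vary slightly across NLTK versions.

```python
import random
import nltk

# One-time download of the POS tagger model (name may differ by NLTK version).
nltk.download("averaged_perceptron_tagger", quiet=True)

def generate_mashup_title(titles, n_attempts=200):
    """Generate a combined title from the words of the mashed-up track titles,
    using POS tagging and a simple template, then selecting the top 20% of
    candidates by length and picking one at random."""
    words = [w for t in titles for w in t.split()]
    tagged = nltk.pos_tag(words)
    nouns = [w for w, tag in tagged if tag.startswith("NN")]
    verbs = [w for w, tag in tagged if tag.startswith("VB")]
    adjectives = [w for w, tag in tagged if tag.startswith("JJ")]
    if not nouns or not verbs:
        return " / ".join(titles)  # fallback if tagging yields too few words

    candidates = set()
    for _ in range(n_attempts):  # bounded attempts; duplicates are discarded
        adj = random.choice(adjectives) if adjectives else ""
        title = f"The {adj} {random.choice(nouns)} {random.choice(verbs)}"
        candidates.add(" ".join(title.split()).title())

    ranked = sorted(candidates, key=len, reverse=True)
    top_fifth = ranked[: max(1, len(ranked) // 5)]
    return random.choice(top_fifth)

# Illustrative usage with the titles from the example above:
# generate_mashup_title(["Shine on Me", "I Feel Fantastic",
#                        "Rolling Down the Hill", "Wish You Were Here"])
```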
As can be appreciated in view of the above description, at least some example aspects herein employ source separation to generate candidate (e.g., vocal) tracks and query (e.g., accompaniment) tracks, although in other example embodiments, stems can be used instead, or a multitrack can be employed, in which case separation is not needed. In other example embodiments herein, full tracks can be employed (without separation of vocals and accompaniment components).
Also, at least some example aspects herein can determine which segments of an original, mixed track to keep, which ones to replace with content (e.g., vocal content) from other tracks, and which ones to have content from other tracks added thereto. For those segments to which vocals from other songs/tracks are added, it can be determined whether source (e.g., vocal) separation needs to be performed on a query track (e.g., accompaniment track), by using vocal activity detection information from among the information 1131.
At least some example embodiments herein also employ a song mashability score, using global song features, including, by example only, acoustic features derived from collaborative filtering knowledge. At least some example embodiments herein also employ a segment mashability score, including various types of musical features as described above.
At least some example embodiments herein also at least implicitly use collaborative filtering information (i.e., using acoustic feature vectors for improving recommendations of content (e.g., vocals) to be mixed with query (e.g., instrumental) tracks, and for the selection of content in contiguous segments). Presumably, the more similar two tracks are, the more likely they are to work well together in a mashup. However, this is a configurable parameter, and, in other examples, users may elect to foster mixes of more different songs, instead of more similar ones.
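By way of illustration only, the similarity between two acoustic feature vectors referred to above could be measured with cosine similarity, as sketched below; the function name is illustrative and the specific metric is an assumption rather than part of the description above.

```python
import numpy as np

def cosine_similarity(feature_vec_a, feature_vec_b):
    """Cosine similarity between two acoustic feature vectors; values closer
    to 1 indicate tracks that, under the assumption above, are more likely
    to work well together in a mashup."""
    a = np.asarray(feature_vec_a, dtype=float)
    b = np.asarray(feature_vec_b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```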
At least some example aspects herein also employ refinement of transitions between lead (vocal) parts, by using section, downbeat, and vocal activity detection for finding ideal transition points, in order to avoid detrimentally cutting melodic phrases.
The computation system 1100 may include without limitation a processor device 1110, a main memory 1125, and an interconnect bus 1105. The processor device 1110 (410) may include without limitation a single microprocessor, or may include a plurality of microprocessors for configuring the system 1100 as a multi-processor acoustic attribute computation system. The main memory 1125 stores, among other things, instructions and/or data for execution by the processor device 1110. The main memory 1125 may include banks of dynamic random access memory (DRAM), as well as cache memory.
The system 1100 may further include a mass storage device 1130 (which, in the illustrated embodiment, has LUT 1133 and stored information 1131), peripheral device(s) 1140, portable non-transitory storage medium device(s) 1150, input control device(s) 1180, a graphics subsystem 1160, and/or an output display interface 1170. A digital signal processor (DSP) 1182 may also be included to perform audio signal processing. For explanatory purposes, all components in the system 1100 are shown in
Mass storage device 1130 additionally stores a song suggester engine 1188 that can determine musical compatibility between different musical tracks, a segment suggestion engine 1190 that can determine musical compatibility between segments of the musical tracks, a combiner engine 1194 that mixes or mashes up musically compatible tracks and segments, an alignment engine 1195 that aligns segments to be mixed/mashed up, and a boundary connecting engine 1196 that refines boundaries of such segments.
The portable storage medium device 1150 operates in conjunction with a nonvolatile portable storage medium, such as, for example, a solid state drive (SSD), to input and output data and code to and from the system 1100. In some embodiments, the software for storing information may be stored on a portable storage medium, and may be inputted into the system 1100 via the portable storage medium device 1150. The peripheral device(s) 1140 may include any type of computer support device, such as, for example, an input/output (I/O) interface configured to add additional functionality to the system 1100. For example, the peripheral device(s) 1140 may include a network interface card for interfacing the system 1100 with a network 1120.
The input control device(s) 1180 provide a portion of the user interface for a user of the computer 1100. The input control device(s) 1180 may include a keypad and/or a cursor control device. The keypad may be configured for inputting alphanumeric characters and/or other key information. The cursor control device may include, for example, a handheld controller or mouse, a trackball, a stylus, and/or cursor direction keys. In order to display textual and graphical information, the system 1100 may include the graphics subsystem 1160 and the output display 1170. The output display 1170 may include a display such as a CSTN (Color Super Twisted Nematic), TFT (Thin Film Transistor), TFD (Thin Film Diode), OLED (Organic Light-Emitting Diode), AMOLED (Active-matrix Organic Light-Emitting Diode), and/or LCD (liquid crystal display)-type display. The displays can also be touchscreen displays, such as capacitive and resistive-type touchscreen displays. The graphics subsystem 1160 receives textual and graphical information, and processes the information for output to the output display 1170.
The user interface 1400 also includes forward control 1406 and reverse control 1404 for scrolling through a track in either respective direction, temporally. According to an example aspect herein, the user interface 1400 further includes a volume control bar 1408 having a volume control 1409 (also referred to herein as a “karaoke slider”) that is operable by a user for attenuating the volume of at least one track. By example, assume that the play button 1402 is selected to playback a song called “Night”. According to one non-limiting example aspect herein, when the play button 1402 is selected, the “mixed” original track of the song, and the corresponding instrumental track of the same song (i.e., wherein the tracks may be identified as being a pair according to procedures described above), are retrieved from the mass storage device 1130. As a result, both tracks are simultaneously played back to the user, in synchrony. In a case where the volume control 1409 is centered at position 1410 in the volume control bar 1408, then, according to one example embodiment herein, the “mixed” original track and instrumental track both play at 50% of a predetermined maximum volume. Adjustment of the volume control 1409 in either direction along the volume control bar 1408 enables the volumes of the simultaneously played back tracks to be adjusted in inverse proportion, wherein, according to one example embodiment herein, the more the volume control 1409 is moved in a leftward direction along the bar 1408, the lesser is the volume of the instrumental track and the greater is the volume of the “mixed” original track. For example, when the volume control 1409 is positioned precisely in the middle between a leftmost end 1412 and the center 1410 of the volume control bar 1408, then the volume of the “mixed” original track is played back at 75% of the predetermined maximum volume, and the instrumental track is played back at 25% of the predetermined maximum volume. When the volume control 1409 is positioned all the way to the left end 1412 of the bar 1408, then the volume of the “mixed” original track is played back at 100% of the predetermined maximum volume, and the instrumental track is played back at 0% of the predetermined maximum volume.
Also according to one example embodiment herein, the more the volume control 1409 is moved in a rightward direction along the bar 1408, the greater is the volume of the instrumental track and the lesser is the volume of the “mixed” original track. By example, when the volume control 1409 is positioned precisely in the middle between the center position 1410 and rightmost end 1414 of the bar 1408, then the volume of the “mixed” original track is played back at 25% of the predetermined maximum volume, and the instrumental track is played back at 75% of the predetermined maximum volume. When the volume control 1409 is positioned all the way to the right along the bar 1408, at the rightmost end 1414, then the volume of the “mixed” original track is played back at 0% of the predetermined maximum volume, and the instrumental track is played back at 100% of the predetermined maximum volume.
In the above manner, a user can control the proportion of the volume levels between the “mixed” original track and the corresponding instrumental track.
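By way of illustration only, the inverse-proportion behaviour of the karaoke slider can be sketched as a linear crossfade. The position convention below (0.0 at the leftmost end 1412, 0.5 at the center 1410, 1.0 at the rightmost end 1414) matches the percentages given above; the function name is illustrative.

```python
def karaoke_slider_gains(position):
    """Map a slider position in [0.0, 1.0] to playback volumes (as fractions
    of the predetermined maximum) for the 'mixed' original track and the
    instrumental track.

    position 0.00 -> (1.00, 0.00)  # leftmost end 1412
    position 0.25 -> (0.75, 0.25)
    position 0.50 -> (0.50, 0.50)  # center 1410
    position 1.00 -> (0.00, 1.00)  # rightmost end 1414
    """
    position = min(max(position, 0.0), 1.0)
    mixed_volume = 1.0 - position
    instrumental_volume = position
    return mixed_volume, instrumental_volume
```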
Of course, the above example is non-limiting. By example, according to another example embodiment herein, when the play button 1402 is selected, the “mixed” original track of the song, as well as the vocal track of the same song (i.e., wherein the tracks may be identified as being a pair according to procedures described above), can be retrieved from the mass storage device 1130, wherein, in one example, the vocal track is obtained according to one or more procedures described above, such as that shown in
In still another example embodiment herein, when the play button 1402 is selected to play back a song, the instrumental track of the song, as well as the vocal track of the same song (wherein the tracks are recognized to be a pair) are retrieved from the mass storage device 1130. As a result, both tracks are simultaneously played back to the user, in synchrony. Adjustment of the volume control 1409 in either direction along the volume control bar 1408 enables the volume of the simultaneously played tracks to be adjusted in inverse proportion, wherein, according to one example embodiment herein, the more the volume control 1409 is moved in a leftward direction along the bar 1408, the lesser is the volume of the vocal track and the greater is the volume of the instrumental track, and, conversely, the more the volume control 1409 is moved in a rightward direction along the bar 1408, the greater is the volume of the vocal track and the lesser is the volume of the instrumental track.
Of course, the above-described directionalities of the volume control 1409 are merely representative in nature, and, in other example embodiments herein, movement of the volume control 1409 in a particular direction can control the volumes of the above-described tracks in a manner opposite to that described above, and/or the percentages may differ from those described above. Also, in one example embodiment herein, the particular type of combination of tracks (i.e., a mixed original track paired with either a vocal or instrumental track, or paired vocal and instrumental tracks) that is employed in the volume control technique described above can be predetermined according to pre-programming in the system 1100, or can be specified by the user by operating the user interface 1400.
Referring again to
Input control devices 1180 can control the operation and various functions of system 1100.
Input control devices 1180 can include any components, circuitry, or logic operative to drive the functionality of system 1100. For example, input control device(s) 1180 can include one or more processors acting under the control of an application.
Each component of system 1100 may represent a broad category of a computer component of a general and/or special purpose computer. Components of the system 1100 (400) are not limited to the specific implementations provided herein.
Software embodiments of the examples presented herein may be provided as a computer program product, or software, that may include an article of manufacture on a machine-accessible or machine-readable medium having instructions. The instructions on the non-transitory machine-accessible, machine-readable, or computer-readable medium may be used to program a computer system or other electronic device. The machine- or computer-readable medium may include, but is not limited to, floppy diskettes, optical disks, and magneto-optical disks or other types of media/machine-readable media suitable for storing or transmitting electronic instructions. The techniques described herein are not limited to any particular software configuration. They may find applicability in any computing or processing environment. The terms “computer-readable medium”, “machine-accessible medium”, or “machine-readable medium” used herein shall include any medium that is capable of storing, encoding, or transmitting a sequence of instructions for execution by the machine, and that causes the machine to perform any one of the methods described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, unit, logic, and so on), as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action to produce a result.
Some embodiments may also be implemented by the preparation of application-specific integrated circuits, field-programmable gate arrays, or by interconnecting an appropriate network of conventional component circuits.
Some embodiments include a computer program product. The computer program product may be a storage medium or media having instructions stored thereon or therein which can be used to control, or cause, a computer to perform any of the procedures of the example embodiments of the invention. The storage medium may include without limitation an optical disc, a ROM, a RAM, an EPROM, an EEPROM, a DRAM, a VRAM, a flash memory, a flash card, a magnetic card, an optical card, nanosystems, a molecular memory integrated circuit, a RAID, remote data storage/archive/warehousing, and/or any other type of device suitable for storing instructions and/or data.
Stored on any one of the computer-readable medium or media, some implementations include software for controlling both the hardware of the system and for enabling the system or microprocessor to interact with a human user or other mechanism utilizing the results of the example embodiments of the invention. Such software may include without limitation device drivers, operating systems, and user applications. Ultimately, such computer-readable media further include software for performing example aspects of the invention, as described above.
Included in the programming and/or software of the system are software modules for implementing the procedures described herein.
While various example embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein. Thus, the present invention should not be limited by any of the above described example embodiments, but should be defined only in accordance with the following claims and their equivalents.
In addition, it should be understood that the
Further, the purpose of the foregoing Abstract is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract is not intended to be limiting as to the scope of the example embodiments presented herein in any way. It is also to be understood that the procedures recited in the claims need not be performed in the order presented.
This application is a Continuation of U.S. application Ser. No. 16/728,953, filed Dec. 27, 2019, and titled METHOD, SYSTEM, AND COMPUTER-READABLE MEDIUM FOR CREATING SONG MASHUPS, the disclosure of which is hereby incorporated by reference in its entirety. To the extent appropriate, a claim of priority is made to the above-referenced application.
Relationship | Number | Date | Country
---|---|---|---
Parent | 16728953 | Dec 2019 | US
Child | 17930933 | | US