The disclosure relates to synchronizing multiple audio tracks using harmonics of the harmonic sound.
Multiple media recordings may be generated during the same live occurrence. The media recordings obtained from multiple media capture devices during the same live occurrence may be synchronized using harmonics of the harmonic sound of the media recordings. Harmonics may be obtained by transforming an audio track into a frequency space in which energy may be represented as a function of frequency.
One or more aspects of the present disclosure relate to synchronization of multiple media files using harmonics of the harmonic sound. Harmonics may include pitch of the harmonic sound, harmonic energy, and/or other features. For example, a transformed representation may be used to obtain one or more of pitch of the harmonic sound, harmonic energy of individual temporal windows partitioning an audio track, and/or other information. One or more transformed representations of one or more temporal windows of one or more temporal window lengths of one or more audio tracks may be compared to correlate pitch of the harmonic sound and harmonic energy of individual temporal windows to one another. The results of the correlation may be used to determine a temporal offset between multiple audio tracks. The temporal offset may be used to synchronize multiple audio tracks.
In some implementations, a system configured to synchronize multiple media files using harmonics of the harmonic sound may include one or more servers and/or other components. Server(s) may be configured to communicate with one or more client computing platforms according to a client/server architecture and/or other communication schemes. The users of the system may access the system via client computing platform(s). Server(s) may be configured to execute one or more computer program components. The computer program components may include one or more of an audio track component, a temporal window component, a transformation component, a pitch component, a harmonics component, a comparison component, a temporal alignment component, a synchronizing component, and/or other components.
A repository of media files may be available via the system (e.g., via an electronic storage and/or other storage location). The repository of media files may be associated with different users. In some implementations, the system and/or server(s) may be configured for various types of media files that may include video files that include audio content, audio files, and/or other types of files that include some audio content. Other types of media items may include one or more of audio files (e.g., music, podcasts, audio books, and/or other audio files), multimedia presentations, photos, slideshows, and/or other media files. The media files may be received from one or more storage locations associated with client computing platform(s), server(s), and/or other storage locations where media files may be stored. Client computing platform(s) may include one or more of a cellular telephone, a smartphone, a digital camera, a laptop, a tablet computer, a desktop computer, a television set-top box, a smart TV, a gaming console, and/or other client computing platforms. In some implementations, the plurality of media files may include audio files that may not contain video content.
The audio track component may be configured to obtain one or more audio tracks from one or more media files. By way of non-limiting illustration, a first audio track and/or other audio tracks may be obtained from a first media file and/or other media files. The audio track component may be configured to obtain a second audio track from a second media file. The first media file and the second media file may be available within the repository of media files available via the system and/or available on a third party platform, which may be accessible and/or available via the system.
One or more of the first media file, the second media file, and/or other media files may be media files captured by the same user via one or more client computing platform(s) and/or may be media files captured by other users. In some implementations, the first media file, the second media file, and/or other media files may be of the same live occurrence. As one example, the files may include files of the same event, such as videos of one or more of a sporting event, concert, wedding, and/or events taken from various perspectives by different users. In some implementations, the first media file, the second media file, and/or other media files may not be of the same live occurrence but may be of the same content. For example, the first media file may be a user-recorded file of a song performance and the second media file may be the same song performance by a professional artist.
The audio track component may be configured to obtain audio tracks from media files by extracting audio signals from media files, and/or by other techniques. By way of non-limiting illustration, the audio track component may be configured to obtain the first audio track by extracting an audio signal from the first media file. The audio track component may be configured to obtain the second audio track by extracting an audio signal from the second media file.
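By way of non-limiting illustration only, the following Python sketch shows one way such extraction might be performed; it assumes the ffmpeg command-line tool is available, and the function name, file names, and sample rate are placeholders rather than part of the disclosure.

```python
# Minimal sketch: extract an audio track from a media file.
# Assumes ffmpeg is installed; names and sample rate are illustrative.
import subprocess

def extract_audio(media_path, wav_path, sample_rate=48000):
    """Extract the audio signal of a media file into a mono WAV file."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", media_path,
         "-vn",                    # discard the video stream
         "-ac", "1",               # mix down to a single channel
         "-ar", str(sample_rate),  # resample to a common rate
         wav_path],
        check=True,
    )

# extract_audio("first_media_file.mp4", "first_audio_track.wav")
```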
The temporal window component may be configured to obtain one or more temporal window length values. The temporal window component may be configured to obtain one or more temporal window length values of different temporal window lengths. A temporal window length value may refer to a portion of an audio track duration. A temporal window length value may be expressed in time units including seconds, milliseconds, and/or other units. The temporal window component may be configured to obtain temporal window length values that may be generated by a user, randomly generated, and/or otherwise obtained. By way of non-limiting illustration, a first temporal window length may be obtained. The temporal window component may be configured to obtain a second temporal window length.
The temporal window component may be configured to partition one or more audio track durations of one or more audio tracks into multiple temporal windows of one or more temporal window lengths. Individual temporal windows may collectively span the entirety of the audio track, which comprises harmonic sound information obtained via the audio track component from the audio wave content of one or more audio tracks. By way of non-limiting illustration, the first audio track may be partitioned into multiple temporal windows of the first temporal window length and of the second temporal window length. The temporal window component may be configured to partition the second audio track into multiple temporal windows of the first temporal window length and of the second temporal window length.
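By way of non-limiting illustration, a minimal Python sketch of such partitioning is shown below; NumPy, the function name, the sample rate, and the example window lengths are illustrative assumptions and not part of the disclosure.

```python
import numpy as np

def partition_into_windows(audio, sample_rate, window_length_s):
    """Partition a 1-D audio signal into consecutive, non-overlapping
    temporal windows of window_length_s seconds (rows of the result)."""
    samples_per_window = int(window_length_s * sample_rate)
    n_windows = len(audio) // samples_per_window
    # Any trailing partial window is discarded in this simplified sketch.
    trimmed = audio[: n_windows * samples_per_window]
    return trimmed.reshape(n_windows, samples_per_window)

# Example: a 10-second track at 48 kHz partitioned at two temporal window lengths.
first_audio_track = np.random.randn(10 * 48000)
windows_first_length = partition_into_windows(first_audio_track, 48000, 0.5)
windows_second_length = partition_into_windows(first_audio_track, 48000, 2.0)
```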
The transformation component may be configured to determine one or more transformed representations of one or more audio tracks by transforming one or more audio energy tracks for one or more temporal windows into a frequency space in which energy may be represented as a function of frequency to generate a harmonic energy spectrum of the one or more audio tracks. By way of non-limiting illustration, a first transformed representation of the first audio track may be determined by transforming one or more temporal windows of the first temporal window length. The transformation component may be configured to determine a second transformed representation of the first audio track by transforming one or more temporal windows of the second temporal window length. The transformation component may be configured to determine a third transformed representation of the second audio track by transforming one or more temporal windows of the first temporal window length. The transformation component may be configured to determine a fourth transformed representation of the second audio track by transforming one or more temporal windows of the second temporal window length.
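By way of non-limiting illustration, such a transformed representation may be obtained with a short-time Fourier-style transform of each temporal window; the sketch below, which applies a Hann taper and a real FFT, is one possible simplified realization rather than the claimed transformation.

```python
import numpy as np

def transformed_representation(windows, sample_rate):
    """Transform each temporal window (a row of `windows`) into a
    frequency space in which energy is represented as a function of frequency."""
    taper = np.hanning(windows.shape[1])          # reduce spectral leakage
    spectra = np.abs(np.fft.rfft(windows * taper, axis=1))
    freqs = np.fft.rfftfreq(windows.shape[1], d=1.0 / sample_rate)
    return freqs, spectra                         # spectra[i]: spectrum of window i
```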
The pitch component may be configured to identify one or more pitches of the harmonic sound of one or more transformed representations for individual temporal windows of one or more temporal window lengths. By way of non-limiting illustration, a first pitch of the first transformed representation of one or more temporal windows of the first temporal window length of the first audio track may be identified. The pitch component may be configured to determine a second pitch of the second transformed representation of one or more temporal windows of the second temporal window length of the first audio track. The pitch component may be configured to determine a third pitch of the third transformed representation of one or more temporal windows of the first temporal window length of the second audio track. The pitch component may be configured to determine a fourth pitch of the fourth transformed representation of one or more temporal windows of the second temporal window length of the second audio track.
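By way of non-limiting illustration, one simple way to identify a pitch for an individual temporal window is to take the strongest spectral peak within a plausible fundamental-frequency band; the band limits below are assumptions of this sketch, and more robust pitch estimators may equally be used.

```python
import numpy as np

def identify_pitch(freqs, spectrum, fmin=50.0, fmax=1000.0):
    """Estimate the pitch of the harmonic sound in one transformed
    representation as the frequency of its strongest peak in [fmin, fmax]."""
    band = (freqs >= fmin) & (freqs <= fmax)
    band_freqs = freqs[band]
    band_mags = spectrum[band]
    return float(band_freqs[np.argmax(band_mags)])
```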
The harmonics component may be configured to determine magnitudes of harmonic energy at harmonics of the harmonic sound in one or more transformed representations for individual temporal windows of individual temporal window lengths of one or more audio tracks. Individual magnitudes of harmonic energy may be determined for the first harmonic and the second harmonic for individual temporal windows of individual temporal window lengths. A total magnitude of harmonic energy for individual temporal windows may be determined by finding an average of individual magnitudes, a sum of individual magnitudes, and/or otherwise determined. By way of non-limiting illustration, a first magnitude of harmonic energy may be determined for the first transformed representation of one or more temporal windows of the first temporal window length of the first audio track. The harmonics component may be configured to determine a second magnitude of harmonic energy of the second transformed representation of one or more temporal windows of the second temporal window length of the first audio track. The harmonics component may be configured to determine a third magnitude of harmonic energy of the third transformed representation of one or more temporal windows of the first temporal window length of the second audio track. The harmonics component may be configured to determine a fourth magnitude of harmonic energy of the fourth transformed representation of one or more temporal windows of the second temporal window length of the second audio track.
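By way of non-limiting illustration, magnitudes of harmonic energy at the first and second harmonics may be read from the transformed representation at the pitch and at twice the pitch and combined into a total magnitude by a sum or an average; the nearest-bin lookup below is a simplifying assumption of this sketch.

```python
import numpy as np

def harmonic_energy(freqs, spectrum, pitch, combine="sum"):
    """Return magnitudes at the first harmonic (the pitch), the second
    harmonic (twice the pitch), and a combined total magnitude."""
    def magnitude_at(frequency):
        return float(spectrum[np.argmin(np.abs(freqs - frequency))])
    first = magnitude_at(pitch)
    second = magnitude_at(2.0 * pitch)
    total = first + second if combine == "sum" else 0.5 * (first + second)
    return first, second, total
```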
The comparison component may be configured to compare one or more transformed representations of one or more temporal windows of one or more temporal window lengths of one or more audio tracks. Specifically, the comparison component may be configured to correlate pitch of the harmonic sound and harmonic energy of one or more temporal windows of one or more audio tracks. By way of non-limiting illustration, the first transformed representation of one or more temporal windows of the first temporal window length of the first audio track may be compared against the third transformed representation of one or more temporal windows of the first temporal window length of the second audio track to correlate individual pitch of the harmonic sound and harmonic energy of individual temporal windows.
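By way of non-limiting illustration, per-window feature sequences of two audio tracks (for example, pitch or total harmonic energy per temporal window) may be correlated as sketched below; the use of a normalized cross-correlation and the lag convention are assumptions of this sketch, not the claimed comparison.

```python
import numpy as np

def correlate_feature_sequences(features_a, features_b):
    """Correlate per-window feature sequences of two audio tracks and
    return the lag, in windows, with the highest cross-correlation,
    together with a rough correlation score."""
    a = np.asarray(features_a, dtype=float)
    b = np.asarray(features_b, dtype=float)
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")
    best = int(np.argmax(corr))
    lag_windows = best - (len(b) - 1)  # content matching b[n] appears near a[n + lag]
    return lag_windows, float(corr[best]) / len(a)
```

Multiplying the resulting lag by the temporal window length yields a candidate temporal offset between the two audio tracks.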
The process performed by the comparison component may be performed iteratively until a result of such comparison is determined. For example, after comparing individual transformed representations of individual temporal windows at the first temporal window length of the first audio track against individual transformed representations of individual temporal windows at the first temporal window length of the second audio track, multiple correlation results may be obtained. The correlation results may be transmitted to the system and a determination of the most accurate result may be made.
In some implementations, based on the results obtained from comparing audio tracks at a certain temporal window length, the comparison component may be configured to compare one or more transformed representations of one or more temporal windows of the second temporal window length of one or more audio tracks.
The process performed by the comparison component for the second temporal window length may be performed iteratively until a result of such comparison is determined. For example, after comparing individual transformed representations of individual temporal windows at the second temporal window length of the first audio track against individual transformed representations of individual temporal windows at the second temporal window length of the second audio track, multiple correlation results may be obtained. The correlation results may be transmitted to the system and a determination of the most accurate result may be made.
In various implementations, the comparison component may be configured to apply one or more constraint parameters to control the comparison process. The comparison constraint parameters may include one or more of limiting comparison time, limiting the energy portion, limiting frequency bands, limiting the number of comparison iterations, and/or other constraints.
The comparison component may be configured to determine the time it took to compare the first transformed representation of the first audio track against the first transformed representation of the second audio track at the first temporal window length. The time taken to correlate pitch of the harmonic sound and harmonic energy of individual temporal windows of the first audio track against pitch of the harmonic sound and harmonic energy of individual temporal windows of the second audio track may be transmitted to the system. The comparison component may utilize the time taken to correlate pitch of the harmonic sound and harmonic energy of individual temporal windows at a particular temporal window length in subsequent comparison iterations. For example, the time taken to compare transformed representations at a longer temporal window length may be equal to 5 seconds. The comparison component may be configured to limit the next comparison iteration, at a smaller temporal window length, to 5 seconds. In one implementation, the time taken to compare two transformed representations may be utilized in combination with other comparison constraint parameters and/or used as a constant value.
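By way of non-limiting illustration, the elapsed time of one comparison iteration might be measured and carried forward as a time budget for the next iteration; the callable interface and budget-handling policy in the sketch below are assumptions.

```python
import time

def timed_comparison(compare_fn, *args, time_budget_s=None):
    """Run one comparison iteration and report how long it took so the
    elapsed time can bound (or parameterize) the next iteration."""
    start = time.monotonic()
    result = compare_fn(*args)
    elapsed_s = time.monotonic() - start
    over_budget = time_budget_s is not None and elapsed_s > time_budget_s
    return result, elapsed_s, over_budget
```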
The comparison component may be configured to limit the audio track duration of one or more audio tracks during the comparison process by applying a comparison window set by a comparison window parameter. The comparison component may be configured to limit the audio track duration of one or more audio tracks being compared by applying the comparison window parameter (i.e., by setting a comparison window). The comparison window parameter may include a time of audio track duration to which the comparison may be limited, a position of the comparison window, including a start position and an end position, and/or other constraints. This value may be predetermined by the system, set by a user, and/or otherwise obtained.
In some implementations, the comparison component may be configured to limit the audio track duration such that the comparison window set by the comparison window parameter may not be greater than 50 percent of the audio track duration. For example, if an audio track is 500 seconds long, then the length of the comparison window set by the comparison window parameter may not be greater than 250 seconds.
The comparison window parameter may have a predetermined start position that may be generated by the system and/or may be based on user input. The system may generate a start position of the comparison window based on the audio track duration. For example, the start position may be randomly set within the initial one third of the audio track duration. In some implementations, the user may generate the start position of the comparison window based on specific audio features of the audio track. For example, a user may know that a first audio track and a second audio track may contain audio features that represent sound captured at the same football game, specifically the first touchdown of the game. Audio features associated with the touchdown may be used to generate the start position of the comparison window.
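By way of non-limiting illustration, a comparison window no longer than half of the audio track duration, with a start position drawn from the initial one third of the track, might be chosen as follows; the helper name and the random choice mirror the examples above and are assumptions of this sketch.

```python
import random

def choose_comparison_window(track_duration_s, window_length_s):
    """Pick a comparison window whose length is capped at 50 percent of
    the track duration and whose start lies in the initial third of the track."""
    length = min(window_length_s, 0.5 * track_duration_s)   # e.g., 500 s track -> at most 250 s
    start = random.uniform(0.0, track_duration_s / 3.0)
    end = min(start + length, track_duration_s)
    return start, end
```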
The comparison component may be configured to limit the comparison process to one or more portions of one or more audio tracks based on the comparison window parameter during every comparison iteration. The comparison component may be configured to limit the comparison process to the same portion of one or more audio tracks. Alternatively, in some implementations, the comparison component may be configured to limit the comparison process to different portions of one or more audio tracks based on the comparison window parameter during individual comparison iterations. For example, the comparison window parameter may be generated every time the comparison of the audio tracks at a specific temporal window length is performed. In other words, the start position of the comparison window parameter may be different with every comparison iteration irrespective of the start position of the comparison window parameter at the previous resolution level.
The comparison component may be configured to limit the number of comparison iterations based on a correlation threshold parameter. The comparison component may be configured to generate a correlation coefficient based on a result of a first comparison that may identify correlated pitch of the harmonic sound and harmonic energy of individual temporal windows. The comparison component may be configured to obtain a threshold value. The threshold value may be generated by the system, may be set by a user, and/or obtained by other means. The comparison component may be configured to compare the correlation coefficient against the threshold value. The comparison component may be configured to stop the comparison when the correlation coefficient falls below the threshold value.
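By way of non-limiting illustration, limiting the number of comparison iterations with a correlation threshold might be expressed as in the sketch below; the callable interface is an assumption.

```python
def compare_until_threshold(comparison_iterations, threshold):
    """Run comparison iterations in order and stop once an iteration's
    correlation coefficient falls below the threshold value."""
    results = []
    for run_comparison in comparison_iterations:   # each returns (offset, coefficient)
        offset, coefficient = run_comparison()
        results.append((offset, coefficient))
        if coefficient < threshold:
            break
    return results
```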
In some implementations, the comparison component may be configured to compare pitch of the harmonic sound and harmonic energy of individual temporal windows of the first audio track against pitch of the harmonic sound and harmonic energy of individual temporal windows of the second audio track within the multi-resolution framework, which is incorporated by reference.
The second comparison may be performed at a level of resolution that may be higher than the mid-resolution level. Pitch of the harmonic sound and harmonic energy of individual temporal windows of the first audio track at the higher resolution level may be compared against pitch of the harmonic sound and harmonic energy of individual temporal windows of the second audio track at the higher resolution level. The result of the second comparison may be transmitted to the system.
This process may be iterative such that the comparison component may compare pitch of the harmonic sound and harmonic energy of individual temporal windows of the first audio track against pitch of the harmonic sound and harmonic energy of individual temporal windows of the second audio track at every resolution level, thereby increasing the resolution with individual iterations until the highest level of resolution is reached. For example, if the number of resolution levels within individual energy tracks is finite, the comparison component may be configured to compare transformed representations at a mid-resolution level first; then, at the next iteration, the comparison component may be configured to compare frequency energy representations at a resolution level higher than the resolution level of the previous iteration, and so on. The last iteration may be performed at the highest resolution level. The system may accumulate a number of transmitted correlation results obtained from the comparison component. The correlation results may be transmitted to the system and a determination of the most accurate result may be made.
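By way of non-limiting illustration, the coarse-to-fine iteration might be organized as sketched below, starting at a mid-resolution level and proceeding to the highest level; the level ordering and the compare_fn interface are assumptions of this sketch.

```python
def multi_resolution_compare(levels_a, levels_b, compare_fn):
    """Compare two tracks level by level, beginning at a mid-resolution
    level and increasing the resolution each iteration until the highest
    level is reached. Both level lists are ordered from coarse to fine."""
    start_level = len(levels_a) // 2
    results = []
    for level in range(start_level, len(levels_a)):
        results.append(compare_fn(levels_a[level], levels_b[level]))
    return results   # the most accurate result may then be selected
```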
The temporal alignment component may be configured to determine a temporal alignment estimate between multiple audio tracks. By way of non-limiting illustration, the temporal alignment component may be configured to determine a temporal alignment estimate between multiple audio tracks based on the results of comparing, via the comparison component, one or more transformed representations generated by the transformation component to correlate pitch of the harmonic sound identified by the pitch component and harmonic energy determined by the harmonics component of individual temporal windows, and/or based on other techniques. The temporal alignment estimate may reflect an offset in time between a commencement of sound on one or more audio tracks.
The temporal alignment component may be configured to identify matching pitch of the harmonic sound and harmonic energy of transformed representations of one or more temporal windows of individual temporal window lengths of individual audio tracks. The temporal alignment component may identify matching pitch of the harmonic sound and harmonic energy from individual comparison iterations via the comparison component. The temporal alignment component may be configured to calculate a Δt, or time offset value, based on a position of the matching energy samples within the corresponding frequency energy representations.
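By way of non-limiting illustration, Δt may be computed from the positions of matching temporal windows within their respective tracks, for example as the difference of their start times; the function below is a minimal sketch under that assumption.

```python
def time_offset_s(matching_window_index_a, matching_window_index_b, window_length_s):
    """Δt between two tracks given the indices of matching temporal
    windows and the temporal window length in seconds."""
    return (matching_window_index_a - matching_window_index_b) * window_length_s
```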
In some implementations, the temporal alignment component may be configured to determine multiple temporal alignment estimates between the first audio track and the second audio track. Individual temporal alignment estimates may be based on comparing individual transformed representations of one or more temporal windows of individual audio tracks via the comparison component, as described above. The temporal alignment component may be configured to assign a weight to individual temporal alignment estimates. The temporal alignment component may be configured to determine a final temporal alignment estimate by computing weighted averages of multiple temporal alignment estimates and/or by performing other computations.
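By way of non-limiting illustration, a final temporal alignment estimate might be formed as a weighted average of the individual estimates; using each comparison's correlation coefficient as its weight is an assumption of this sketch.

```python
def final_alignment_estimate(estimates_s, weights):
    """Weighted average of multiple temporal alignment estimates,
    e.g., with weights taken from correlation coefficients."""
    total_weight = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates_s)) / total_weight
```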
In some implementations, the temporal alignment component may be configured to use individual playback rates associated with individual audio tracks when determining the temporal alignment estimate. Using individual playback rates as a factor in determining audio track alignment may correct a slight difference in sample clock rates associated with equipment on which audio tracks may have been recorded. For example, multiple individual temporal alignment estimates may be analyzed along with individual playback rates of each audio track. A final temporal alignment estimate may be computed by taking into account both individual temporal alignment estimates and playback rates and/or other factors. A linear correction approach and/or other approach may be taken.
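By way of non-limiting illustration, a linear correction for a small sample-clock mismatch might fit offset(t) = drift · t + offset0 to local alignment estimates taken at different times in the tracks; the least-squares fit below is one possible realization and is not the claimed correction.

```python
import numpy as np

def linear_drift_correction(times_s, offsets_s):
    """Fit offset(t) = drift * t + offset0 to local alignment estimates;
    drift approximates the relative playback-rate error between tracks."""
    drift, offset0 = np.polyfit(np.asarray(times_s, dtype=float),
                                np.asarray(offsets_s, dtype=float), 1)
    return float(drift), float(offset0)
```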
The synchronizing component may be configured to synchronize one or more audio tracks. By way of non-limiting illustration, the synchronizing component may be configured to use comparison results obtained via the comparison component from comparing one or more transformed representations of one or more temporal windows of one or more audio tracks, and/or use other techniques. The synchronizing component may be configured to synchronize the first audio track with the second audio track based on the temporal alignment estimate. In some implementations, the time offset between the audio tracks may be used to synchronize individual audio tracks by aligning the audio tracks based on the time offset calculation.
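By way of non-limiting illustration, synchronization may then amount to delaying the earlier track by the temporal alignment estimate; the sign convention below (a positive offset meaning the second track must be delayed to line up with the first) is an assumption of this sketch.

```python
import numpy as np

def synchronize(audio_a, audio_b, offset_s, sample_rate):
    """Align two audio tracks given a temporal alignment estimate.
    Positive offset_s delays audio_b; negative offset_s delays audio_a."""
    shift = int(round(abs(offset_s) * sample_rate))
    pad = np.zeros(shift, dtype=audio_a.dtype)
    if offset_s >= 0:
        audio_b = np.concatenate([pad, audio_b])
    else:
        audio_a = np.concatenate([pad, audio_a])
    length = min(len(audio_a), len(audio_b))
    return audio_a[:length], audio_b[:length]
```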
These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
A repository of media files may be available via system 100 (e.g., via electronic storage 122 and/or other storage location). The repository of media files may be associated with different users. In some implementations, system 100 and/or server(s) 102 may be configured for various types of media files that may include video files that include audio content, audio files, and/or other types of files that include some audio content. Other types of media items may include one or more of audio files (e.g., music, podcasts, audio books, and/or other audio files), multimedia presentations, photos, slideshows, and/or other media files. The media files may be received from one or more storage locations associated with client computing platform(s) 104, server(s) 102, and/or other storage locations where media files may be stored. Client computing platform(s) 104 may include one or more of a cellular telephone, a smartphone, a digital camera, a laptop, a tablet computer, a desktop computer, a television set-top box, a smart TV, a gaming console, and/or other client computing platforms. In some implementations, the plurality of media files may include audio files that may not contain video content.
Audio track component 106 may be configured to obtain one or more audio tracks from one or more media files. By way of non-limiting illustration, a first audio track and/or other audio tracks may be obtained from a first media file and/or other media files. Audio track component 106 may be configured to obtain a second audio track from a second media file. The first media file and the second media file may be available within the repository of media files available via system 100 and/or available on a third party platform, which may be accessible and/or available via system 100.
One or more of the first media file, the second media file, and/or other media files may be media files captured by the same user via one or more client computing platform(s) 104 and/or may be media files captured by other users. In some implementations, the first media file, the second media file, and/or other media files may be of the same live occurrence. As one example, the files may include files of the same event, such as videos of one or more of a sporting event, concert, wedding, and/or events taken from various perspectives by different users. In some implementations, the first media file, the second media file, and/or other media files may not be of the same live occurrence but may be of the same content. For example, the first media file may be a user-recorded file of a song performance and the second media file may be the same song performance by a professional artist.
Audio track component 106 may be configured to obtain audio tracks from media files by extracting audio signals from media files, and/or by other techniques. By way of non-limiting illustration, audio track component 106 may be configured to obtain the first audio track by extracting an audio signal from the first media file. Audio track component 106 may be configured to obtain the second audio track by extracting an audio signal from the second media file.
Temporal window component 108 may be configured to obtain one or more temporal window length values. Temporal window component 108 may be configured to obtain one or more temporal window length values of different temporal window lengths. A temporal window length value may refer to a portion of an audio track duration. A temporal window length value may be expressed in time units including seconds, milliseconds, and/or other units. Temporal window component 108 may be configured to obtain temporal window length values that may be generated by a user, randomly generated, and/or otherwise obtained. By way of non-limiting illustration, a first temporal window length may be obtained. Temporal window component 108 may be configured to obtain a second temporal window length.
Temporal window component 108 may be configured to partition one or more audio track durations of one or more audio tracks into multiple temporal windows of one or more temporal window lengths. Individual temporal windows may collectively span the entirety of the audio track, which comprises harmonic sound information obtained via audio track component 106 from the audio wave content of one or more audio tracks. By way of non-limiting illustration, the first audio track may be partitioned into multiple temporal windows of the first temporal window length and of the second temporal window length. Temporal window component 108 may be configured to partition the second audio track into multiple temporal windows of the first temporal window length and of the second temporal window length.
In some implementations, based on the results obtained from comparing audio tracks at a certain temporal window length, comparison component 116 may be configured to compare one or more transformed representations of one or more temporal windows of the second temporal window length of one or more audio tracks. For example, comparison process 524 may compare first transformed representation 518 of first temporal window 516 of second temporal window length 519 of first audio track 505 against first transformed representation 528 of first temporal window 526 of second temporal window length 519 of second energy track 520.
In various implementations, comparison component 116 may be configured to apply one or more constraint parameters to control the comparison process. The comparison constraint parameters may include one or more of limiting comparison time, limiting the energy portion, limiting frequency bands, limiting the number of comparison iterations, and/or other constraints.
Comparison component 116 may be configured to determine the time it took to compare the first transformed representation of the first audio track against the first transformed representation of the second audio track at the first temporal window length. The time taken to correlate pitch of the harmonic sound and harmonic energy of individual temporal windows of the first audio track against pitch of the harmonic sound and harmonic energy of individual temporal windows of the second audio track may be transmitted to system 100. Comparison component 116 may utilize the time taken to correlate pitch of the harmonic sound and harmonic energy of individual temporal windows at a particular temporal window length in subsequent comparison iterations. For example, the time taken to compare transformed representations at a longer temporal window length may be equal to 5 seconds. Comparison component 116 may be configured to limit the next comparison iteration, at a smaller temporal window length, to 5 seconds. In one implementation, the time taken to compare two transformed representations may be utilized in combination with other comparison constraint parameters and/or used as a constant value.
Comparison component 116 may be configured to limit the audio track duration of one or more audio tracks during the comparison process by applying a comparison window set by a comparison window parameter. Comparison component 116 may be configured to limit the audio track duration of one or more audio tracks being compared by applying the comparison window parameter (i.e., by setting a comparison window). The comparison window parameter may include a time of audio track duration to which the comparison may be limited, a position of the comparison window, including a start position and an end position, and/or other constraints. This value may be predetermined by system 100, set by a user, and/or otherwise obtained.
In some implementations, comparison component 116 may be configured to limit the audio track duration such that the comparison window set by the comparison window parameter may not be greater than 50 percent of the audio track duration. For example, if an audio track is 500 seconds long, then the length of the comparison window set by the comparison window parameter may not be greater than 250 seconds.
The comparison window parameter may have a predetermined start position that may be generated by system 100 and/or may be based on user input. System 100 may generate a start position of the comparison window based on the audio track duration. For example, the start position may be randomly set within the initial one third of the audio track duration. In some implementations, the user may generate the start position of the comparison window based on specific audio features of the audio track. For example, a user may know that a first audio track and a second audio track may contain audio features that represent sound captured at the same football game, specifically the first touchdown of the game. Audio features associated with the touchdown may be used to generate the start position of the comparison window.
Comparison component 116 may be configured to limit the comparison process to one or more portions of one or more audio tracks based on the comparison window parameter during every comparison iteration. Comparison component 116 may be configured to limit the comparison process to the same portion of one or more audio tracks. Alternatively, in some implementations, comparison component 116 may be configured to limit the comparison process to different portions of one or more audio tracks based on the comparison window parameter during individual comparison iterations. For example, the comparison window parameter may be generated every time the comparison of the audio tracks at a specific temporal window length is performed. In other words, the start position of the comparison window parameter may be different with every comparison iteration irrespective of the start position of the comparison window parameter at the previous resolution level.
Comparison component 116 may be configured to limit the number of comparison iterations based on a correlation threshold parameter. Comparison component 116 may be configured to generate a correlation coefficient based on a result of a first comparison that may identify correlated pitch of the harmonic sound and harmonic energy of individual temporal windows. Comparison component 116 may be configured to obtain a threshold value. The threshold value may be generated by system 100, may be set by a user, and/or obtained by other means. Comparison component 116 may be configured to compare the correlation coefficient against the threshold value. Comparison component 116 may be configured to stop the comparison when the correlation coefficient falls below the threshold value.
In some implementations, comparison component 116 may be configured to compare pitch of the harmonic sound and harmonic energy of individual temporal windows of the first audio track against pitch of the harmonic sound and harmonic energy of individual temporal windows of the second audio track within the multi-resolution framework, which is incorporated by reference.
For example, comparison component 116 may be configured to compare individual transformed representations of one or more temporal windows of the first audio track against individual transformed representations of one or more temporal windows of the second audio track at a mid-resolution level. Pitch of the harmonic sound and harmonic energy of individual temporal windows of the first audio track at the mid-resolution level may be compared against pitch of the harmonic sound and harmonic energy of individual temporal windows of the second audio track at the mid-resolution level to correlate pitch values and harmonic energy values between the first audio track and the second audio track. The result of a first comparison may identify correlated pitch and harmonic energy values from the first and second audio tracks that may represent energy in the same sound. The result of the first comparison may be transmitted to system 100 after the first comparison is completed.
The second comparison may be performed at a level of resolution that may be higher than the mid-resolution level. Pitch of the harmonic sound and harmonic energy of individual temporal windows of the first audio track at the higher resolution level may be compared against pitch of the harmonic sound and harmonic energy of individual temporal windows of the second audio track at the higher resolution level. The result of the second comparison may be transmitted to system 100.
This process may be iterative such that comparison component 116 may compare pitch of the harmonic sound and harmonic energy of individual temporal windows of the first audio track against pitch of the harmonic sound and harmonic energy of individual temporal windows of the second audio track at every resolution level, thereby increasing the resolution with individual iterations until the highest level of resolution is reached. For example, if the number of resolution levels within individual energy tracks is finite, comparison component 116 may be configured to compare transformed representations at a mid-resolution level first; then, at the next iteration, comparison component 116 may be configured to compare frequency energy representations at a resolution level higher than the resolution level of the previous iteration, and so on. The last iteration may be performed at the highest resolution level. System 100 may accumulate a number of transmitted correlation results obtained from comparison component 116. The correlation results may be transmitted to system 100 and a determination of the most accurate result may be made.
Temporal alignment component 118 may be configured to determine a temporal alignment estimate between multiple audio tracks. By way of non-limiting illustration, temporal alignment component 118 may be configured to determine a temporal alignment estimate between multiple audio tracks based on the results of comparing, via comparison component 116, one or more transformed representations generated by transformation component 110 to correlate pitch of the harmonic sound identified by pitch component 112 and harmonic energy determined by harmonics component 114 of individual temporal windows, and/or based on other techniques. The temporal alignment estimate may reflect an offset in time between a commencement of sound on one or more audio tracks.
Temporal alignment component 118 may be configured to identify matching pitch of the harmonic sound and harmonic energy of transformed representations of one or more temporal windows of individual temporal window lengths of individual audio tracks. Temporal alignment component 118 may identify matching pitch of the harmonic sound and harmonic energy from individual comparison iterations via comparison component 116. Temporal alignment component 118 may be configured to calculate a Δt, or time offset value, based on a position of the matching energy samples within the corresponding frequency energy representations.
In some implementations, temporal alignment component 118 may be configured to determine multiple temporal alignment estimates between the first audio track and the second audio track. Individual temporal alignment estimates may be based on comparing individual transformed representations of one or more temporal windows of individual audio tracks via comparison component 116, as described above. Temporal alignment component 118 may be configured to assign a weight to individual temporal alignment estimates. Temporal alignment component 118 may be configured to determine a final temporal alignment estimate by computing weighted averages of multiple temporal alignment estimates and/or by performing other computations.
In some implementations, temporal alignment component 118 may be configured to use individual playback rates associated with individual audio tracks when determining the temporal alignment estimate. Using individual playback rates as a factor in determining audio track alignment may correct a slight difference in sample clock rates associated with equipment on which audio tracks may have been recorded. For example, multiple individual temporal alignment estimates may be analyzed along with individual playback rates of each audio track. A final temporal alignment estimate may be computed by taking into account both individual temporal alignment estimates and playback rates and/or other factors. A linear correction approach and/or other approach may be taken.
Synchronizing component 120 may be configured to synchronize one or more audio tracks. By way of non-limiting illustration, synchronizing component 120 may be configured to use comparison results obtained via comparison component 116 from comparing one or more transformed representations of one or more temporal windows of one or more audio tracks, and/or use other techniques. Synchronizing component 120 may be configured to synchronize the first audio track with the second audio track based on the temporal alignment estimate. In some implementations, the time offset between the audio tracks may be used to synchronize individual audio tracks by aligning the audio tracks based on the time offset calculation.
In some implementations, system 100 may synchronize media files from three, four, five, or more media capture devices (not illustrated) capturing the same live occurrence. Users capturing the live occurrence simultaneously may be located near or away from each other and may make recordings from various perspectives.
In some implementations, the plurality of media files may be generated by the same user. For example, a user may place multiple media recording devices around himself to record himself from various perspectives. Similarly, a film crew may generate multiple media files during a movie shoot of the same scene.
A given client computing platform 104 may include one or more processors configured to execute computer program components. The computer program components may be configured to enable a producer and/or user associated with the given client computing platform 104 to interface with system 100 and/or external resources 120, and/or provide other functionality attributed herein to client computing platform(s) 104. By way of non-limiting example, the given client computing platform 104 may include one or more of a desktop computer, a laptop computer, a handheld computer, a NetBook, a Smartphone, a gaming console, and/or other computing platforms.
External resources 120 may include sources of information, hosts and/or providers of virtual environments outside of system 100, external entities participating with system 100, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 120 may be provided by resources included in system 100.
Server(s) 102 may include electronic storage 122, one or more processors 124, and/or other components. Server(s) 102 may include communication lines or ports to enable the exchange of information with a network and/or other computing platforms.
Electronic storage 122 may include electronic storage media that electronically stores information. The electronic storage media of electronic storage 122 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with server(s) 102 and/or removable storage that is removably connectable to server(s) 102 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage 122 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storage 122 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Electronic storage 122 may store software algorithms, information determined by processor(s) 124, information received from server(s) 102, information received from client computing platform(s) 104, and/or other information that enables server(s) 102 to function as described herein.
Processor(s) 124 may be configured to provide information processing capabilities in server(s) 102. As such, processor(s) 124 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information.
In some implementations, method 600 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 600 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 600.
At an operation 602, a first audio track may be partitioned into individual temporal windows of a first and a second temporal window length and/or a second audio track may be partitioned into individual temporal windows of the first and the second temporal window length. Operation 602 may be performed by one or more physical processors executing a temporal window component that is the same as or similar to temporal window component 108, in accordance with one or more implementations.
At an operation 604, a first and a second transformed representation for individual temporal windows of a first and a second temporal window length of the first audio track may be determined and/or a third and a fourth transformed representation for individual temporal windows of a first and a second temporal window length of the second audio track may be determined. Operation 604 may be performed by one or more physical processors executing a transformation component that is the same as or similar to transformation component 110, in accordance with one or more implementations.
At an operation 606, pitches of harmonic sound of the first and the second transformed representations may be identified and/or pitches of harmonic sound of the third and fourth transformed representations may be identified. Operation 606 may be performed by one or more physical processors executing a pitch component that is the same as or similar to pitch component 112, in accordance with one or more implementations.
At an operation 608, magnitudes of harmonic energy at a first and a second harmonic in the first and the second transformed representations may be identified and/or magnitudes of harmonic energy at the first and the second harmonic in the third and the fourth transformed representations may be identified. Operation 608 may be performed by one or more physical processors executing a harmonics component that is the same as or similar to harmonics component 114, in accordance with one or more implementations.
At an operation 610, pitches and harmonic energy of the first transformed representation may be compared to pitches and harmonic energy of the third transformed representation. At an operation 612, pitches and harmonic energy of the second transformed representation may be compared to pitches and harmonic energy of the fourth transformed representation. Operations 610 and 612 may be performed by one or more physical processors executing a comparison component that is the same as or similar to comparison component 116, in accordance with one or more implementations.
At an operation 614, a temporal alignment estimate between the first audio track and the second audio track may be determined based on the comparison of the first transformed representation to the third transformed representation and the second transformed representation to the fourth transformed representation. Operation 614 may be performed by one or more physical processors executing a temporal alignment component that is the same as or similar to temporal alignment component 118, in accordance with one or more implementations.
At an operation 616, a synchronization of the first audio track with the second audio track may be performed based on the temporal alignment estimate between the first audio track and the second audio track. Operation 616 may be performed by one or more physical processors executing a synchronizing component that is the same as or similar to synchronizing component 120, in accordance with one or more implementations.
Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.