The present invention generally relates to the synchronization of multimedia contents.
More particularly, the invention deals with the synchronization of different versions of a multimedia content like a video content, for example a movie.
Thus, the invention concerns a method and a device for synchronizing two versions of a multimedia content. It also concerns a computer program implementing the method of the invention.
The approaches described in this section could be pursued, but are not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
Nowadays, many versions of a video content, such as a movie, may coexist. An example is the successive DVD versions of a blockbuster, which can be found a couple of years after the theatrical release as an extended version or a director's cut. Other examples range from old movies brought up to date with new additional visual effects or in a colorized version, to “cleaned up” versions, due to local censorship, from which violent, religious, sexual or political scenes are removed. Temporal edits that can occur between those versions include frame addition or deletion and scene reordering.
Thus, there is a need for a movie synchronization method which aims at synchronizing multiple versions of the same movie, with the objective of transferring some metadata available in a first version into a second version where those metadata are absent. Such metadata may come from an artistic work, e.g. subtitles or chapters, but they may also be generated through a computational analysis of the audio-video content itself, e.g. characters present, scene analysis, etc. In both cases, transferring the metadata directly from one version to the other avoids a long and laborious task of metadata re-generation.
There exist in the literature methods related to the audio/video recording synchronization problem, for example in the paper of N. Bryan, P. Smaragdis, and G. J. Mysore, “Clustering and synchronizing multi-camera video via landmark cross-correlation,” Proc. ICASSP, 2012. In this paper, landmark-based audio fingerprinting is used to match multiple recordings of the same event together.
However, the teachings of the previously cited paper are not applicable to the synchronization problem considered here, as they take into account neither frame additions and deletions nor frame reordering, which commonly occur between different versions of a movie.
In order to deal efficiently with frame addition/deletion, Dynamic Time Warping (DTW) is typically applied to find the best alignment path between two audio pieces. This is described, for example, in the paper of R. Macrae, X. Anguera, and N. Oliver, “MUVISYNC: Realtime music video alignment,” Proc. ICME, 2010. However, the computational cost of DTW is very high for long signals and does not scale efficiently, and the frame reordering problem cannot be handled due to the monotonicity condition of DTW. Moreover, in order to estimate an optimal path, standard DTW requires knowledge of both the start point and the end point of the audio sequences to be aligned, which is not trivially available information.
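For illustration only, standard DTW can be sketched as follows (a minimal sketch, not part of the invention; numpy is assumed and the feature sequences are simplified to 1-D arrays):

```python
import numpy as np

def dtw_cost(x, y):
    """Accumulated-cost table of standard DTW between two 1-D feature
    sequences; the O(len(x) * len(y)) table is what makes DTW
    expensive for long signals."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            # Monotonic steps only: the warping path can never move
            # backwards in time, hence re-ordered scenes cannot match.
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # The optimal path is anchored at (0, 0) and (n, m), i.e. both
    # start and end points of the sequences must be known.
    return cost[n, m]
```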
The present invention proposes a solution for improving the situation.
Accordingly, the present invention provides a method for synchronizing two versions of a multimedia content, each version comprising a plurality of video frames, said method comprising steps of:
extracting audio fingerprints from each version of the multimedia content;
analyzing the extracted audio fingerprints in order to determine at least two temporal matching periods between both versions; and
exploiting the determined temporal matching periods to perform a mapping between the video frames of both versions.
By using only the audio streams, through an audio fingerprinting technique, the method of the present invention provides a robust, computationally inexpensive and easy-to-implement mechanism to perform frame-accurate synchronization of multiple versions of the same multimedia content, such as a movie.
Furthermore, the robustness of the audio fingerprinting technique permits an accurate synchronization even if the two versions have a different audio and/or video quality and/or have been coded and/or distorted differently.
Besides, the determination of at least two temporal matching periods between the versions makes it possible to detect the cases of frame addition, deletion and reordering, rendering the synchronization method robust in all situations.
Advantageously, the extracting step comprises a step of transforming time-domain audio signals of both versions into a time-frequency representation.
Preferably, the step of transforming uses short-time Fourier transform, STFT.
The use of STFT is advantageous as it permits a quick extraction of a robust feature, namely the location of the energy peaks in the time-frequency representation.
Advantageously, the determining step comprises a step of matching the extracted audio fingerprints of both versions using Shazam's algorithm.
Shazam's algorithm is well known for its robustness. It is described in the paper of A. L. Wang, “An Industrial-Strength Audio Search Algorithm,” Proc. Int. Sym. on Music Information Retrieval (ISMIR), pp. 1-4, 2003.
Advantageously, the step of matching comprises a step of computing a histogram representing a number of matches as a function of a difference of time offsets between both versions.
The computed histogram permits a good visualization of the matching between the versions.
Preferably, the temporal matching periods are determined using a thresholding of the computed histogram.
Such thresholding permits identification of the maximum peaks in the histogram, using either a threshold chosen heuristically depending on the fingerprint density, i.e. the approximate number of extracted fingerprints per second, and on the durations of the matching periods between the two versions, or a threshold learnt from training data. Contrary to Shazam's algorithm, which searches for only one maximum peak, i.e. only one matching period, more than one peak may be identified according to the present invention. The identification of a plurality of peaks enables the determination of more than one matching period, and consequently the detection of temporal alterations between the different versions of the multimedia content, like frame addition and/or deletion and/or reordering.
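By way of illustration, such a multi-peak selection can be sketched as follows (a minimal sketch; the names, the bin width and the threshold value are hypothetical, and numpy is assumed):

```python
import numpy as np

def matching_offsets(delta_ts, bin_width=0.1, threshold=50):
    """Histogram the differences of time offsets and keep every bin
    whose count exceeds the threshold Th, instead of only the single
    global maximum used by Shazam's algorithm."""
    edges = np.arange(min(delta_ts), max(delta_ts) + 2 * bin_width, bin_width)
    counts, edges = np.histogram(delta_ts, bins=edges)
    peak_bins = np.flatnonzero(counts > threshold)
    # Each selected offset corresponds to one temporal matching period.
    return [(edges[i] + edges[i + 1]) / 2.0 for i in peak_bins]
```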
Advantageously, the mapping step comprises a step of clustering the extracted audio fingerprints performed in each determined temporal matching period.
The step of clustering permits the elimination of outliers, i.e. frame locations that do not represent an actual match between two actual periods of the versions of the multimedia content.
Preferably, the clustering step uses hierarchical clustering or k-means clustering.
Advantageously, the clustering step uses a modified hierarchical clustering in which a distance between two clusters is computed between boundary points of said clusters.
According to a particular embodiment of the invention, the versions of the multimedia content are different recordings of a video content captured by different cameras.
The invention further provides a synchronization device able to synchronize two versions of a multimedia content, each version comprising a plurality of video frames, said device comprising:
an extraction module for extracting audio fingerprints from each version of the multimedia content;
an analysis module for analyzing the extracted audio fingerprints in order to determine at least two temporal matching periods between both versions; and
an exploitation module for exploiting the determined temporal matching periods to perform a mapping between the video frames of both versions.
Advantageously, the synchronization device is a communication terminal, particularly a smart-phone, a tablet or a set-top box.
The method according to the invention may be implemented in software on a programmable apparatus. It may be implemented solely in hardware or in software, or in a combination thereof.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like.
The invention thus provides a computer-readable program comprising computer-executable instructions to enable a computer to perform the method of the invention. The diagram of
The present invention is illustrated by way of examples, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements and in which:
Referring to
The synchronization device 2 is preferably a communication terminal, particularly a smart-phone, a tablet or a set-top box. It may also consist of a personal computer, a laptop, or any other terminal containing a processor for processing data.
The synchronization device 2 of the present invention is able to synchronize two different versions 4, 6 of a multimedia content such as a movie. Each version 4, 6 comprises a plurality of video frames. The frames of the first version 4 generally correspond to the frames of the second version 6, except that at least one frame is deleted from the first version 4 and/or at least one frame is added to the first version 4 and/or at least one frame is reordered between the first version 4 and the second version 6.
Of course, the synchronization device 2 is able to synchronize more than two versions of the multimedia content by processing the plurality of versions in a pair-wise manner or by synchronizing each different version with a reference version of the movie.
The synchronization device 2 comprises an extraction module 8 for extracting audio fingerprints from each version 4, 6 of the multimedia content. The extraction module 8 receives as inputs either the entire video frames of both versions 4, 6 or only the audio streams of the video frames of the versions 4, 6. In other words, it is not necessary that the whole audio or video content of said versions be present in the synchronization device, as the synchronization device only needs to access the audio streams of the video frames of the versions 4, 6 in order to process them according to the present invention.
The synchronization device 2 further comprises an analysis module 10 for analyzing the extracted audio fingerprints in order to determine at least two matching periods of time between both versions 4, 6.
Besides, the synchronization device 2 comprises an exploitation module 12 for exploiting the determined matching periods of time to perform a mapping between the video frames of both versions. For example, this mapping can be used to transfer some metadata available in the first version into the second version where those metadata are absent.
The operations implemented by the modules 8, 10, 12 will be detailed in the following with reference to
As shown on
The extraction of landmark-based audio fingerprints at step 20 comprises a step of transforming the time-domain audio signals of both versions 4, 6 into a time-frequency representation using the short-time Fourier transform (STFT). When performing the STFT, the extraction module 8 advantageously segments the audio signals into frames having a duration close to a typical video frame duration, for instance 16 ms, 32 ms, 40 ms or 64 ms. Preferably, the segmented audio frames correspond to the video frames that will be mapped by the exploitation module 12.
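A minimal sketch of this transformation and of the extraction of energy peaks follows (scipy is assumed; the neighbourhood size and the amplitude floor are hypothetical choices):

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import maximum_filter

def spectral_peaks(audio, sr, frame_ms=32):
    """STFT with windows of roughly one video frame duration, then
    local-maximum picking in the time-frequency representation."""
    nperseg = int(sr * frame_ms / 1000)
    _, _, spec = stft(audio, fs=sr, nperseg=nperseg)
    mag = np.abs(spec)
    # Keep points that dominate their local neighbourhood and are not
    # part of the near-silent background.
    is_peak = (maximum_filter(mag, size=(5, 5)) == mag) & (mag > np.median(mag))
    return np.argwhere(is_peak)          # (frequency_bin, time_frame) pairs
```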
An example of this time-frequency representation of the extracted audio fingerprints is shown in the graph of
At step 24, the landmark audio fingerprints extracted from the versions 4, 6 are compared to find a matching between them.
At step 24, when a matching landmark between both versions 4, 6 is found, occurring at time offset t1 in the first version 4 and at time offset t2 in the second version 6, only the time offset t1 and the difference of time offsets Δt = t2 − t1 between the versions 4, 6 are stored.
At step 26, the resulting differences of time offsets Δt of the matching landmarks are used to draw a histogram of the differences of time offsets. An example of such a histogram is shown in
Preferably, the above steps 20, 24, 26 of the synchronization method of the present invention use Shazam's algorithm.
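A minimal sketch of the landmark matching of step 24 follows (hypothetical representation: each fingerprint is a (hash, anchor_time) pair, the hash encoding a pair of spectral peaks as in Shazam's algorithm):

```python
from collections import defaultdict

def match_landmarks(fp1, fp2):
    """Return the (t1, delta_t) pairs kept at step 24, where t1 is the
    time offset of a landmark in the first version and delta_t = t2 - t1
    its difference of time offsets with the matching landmark of the
    second version."""
    index = defaultdict(list)
    for h, t2 in fp2:
        index[h].append(t2)
    pairs = []
    for h, t1 in fp1:
        for t2 in index.get(h, ()):
            pairs.append((t1, t2 - t1))   # only t1 and delta_t are stored
    return pairs
```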
At step 28, the numbers of matches in the histogram of the differences of time offsets are compared with a threshold Th to identify maximum peaks. The threshold Th may be either heuristically chosen or learnt from training data. In the example of
It is important to note that, at this step, Shazam's algorithm searches for only one maximum peak, as for example point PA in
At step 30, the differences of time offsets corresponding to the peaks in the histogram identified at step 28 are exploited in order to generate a scatterplot of matching landmark locations as shown in the graph of
The filtered scatterplot obtained at step 30 is however not optimal, as it contains outliers, i.e. points that accidentally lie on the diagonals but do not represent an actual match between the versions 4, 6 of the multimedia content. In the example scatterplot of
In a preferred embodiment of the invention, these outliers are eliminated at step 32 so that the resulting scatterplot, as shown in
In order to eliminate these outliers, step 32 comprises a step of clustering the points lying in each diagonal of the scatterplot, for example by using hierarchical clustering or a k-means algorithm.
A preferred implementation of the hierarchical clustering algorithm first considers each point in a diagonal of the filtered scatterplot as a cluster containing a single item, then computes the Euclidean distance between each pair of clusters and merges the clusters having a distance smaller than a pre-defined threshold D. This “bottom-up” process is repeated until either the distance between any pair of clusters is larger than D or only one cluster remains. The remaining clusters with a small number of points are considered to be outliers.
Contrary to conventional hierarchical clustering algorithms, the distance between clusters is defined, in a preferred embodiment of the invention, as the minimum distance between their boundary points, i.e. the two points in each cluster having the lowest and the highest time offsets, instead of as the distance between their centroids.
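A minimal sketch of this modified bottom-up clustering follows (hypothetical names; the points of one diagonal are (t1, t2) matches kept sorted by time offset, and the values of D and of the minimum cluster size are illustrative):

```python
import math

def boundary_distance(c1, c2):
    """Distance between two clusters, measured between their boundary
    points (lowest and highest time offsets) instead of their centroids."""
    pairs = [(c1[-1], c2[0]), (c2[-1], c1[0])]
    return min(math.dist(a, b) for a, b in pairs)

def cluster_diagonal(points, D=2.0, min_size=5):
    """Bottom-up merging of neighbouring clusters until no pair is
    closer than D; small leftover clusters are discarded as outliers."""
    clusters = [[p] for p in sorted(points)]
    merged = True
    while merged and len(clusters) > 1:
        merged = False
        for i in range(len(clusters) - 1):
            if boundary_distance(clusters[i], clusters[i + 1]) < D:
                clusters[i].extend(clusters.pop(i + 1))
                merged = True
                break
    return [c for c in clusters if len(c) >= min_size]
```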
Then, at step 34, the obtained scatterplots are exploited to specify the positions of frame addition and/or deletion and/or reordering in order to perform a frame mapping between the video frames of both versions 4, 6.
In the example of
In the same manner, the matching time period B is a segment comprised between t2 and t3 along the x-axis and between t′2 and t′3 along the y-axis, whereas the following matching time period C is a segment comprised between t4 and t5 along the x-axis and between t′3 and t′4 along the y-axis. As there is a “gap” between the matching periods B and C along the x-axis only, this clearly means that another frame deletion, between t3 and t4, has been performed from the first version 4 to the second version 6 of the multimedia content.
Similarly, the matching time period C is a segment comprised between t4 and t5 along the x-axis and between t′3 and t′4 along the y-axis, whereas the following matching time period D is a segment comprised between t5 and t6 along the x-axis and between t′5 and t′6 along the y-axis. As there is a “gap” between the matching periods C and D along the y-axis only, this clearly means that a frame addition, between t′4 and t′5, has been performed in the second version 6 of the multimedia content.
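A minimal sketch of this gap analysis follows (hypothetical representation: each matching period is a tuple (x_start, x_end, y_start, y_end) of its boundaries along the two time axes, and the tolerance is illustrative):

```python
def classify_gap(prev, nxt, tol=0.04):
    """Compare two consecutive matching periods: a gap only along the
    x-axis reveals frames deleted from the first version, a gap only
    along the y-axis reveals frames added in the second version."""
    gap_x = nxt[0] - prev[1] > tol     # x_start(next) - x_end(prev)
    gap_y = nxt[2] - prev[3] > tol     # y_start(next) - y_end(prev)
    if gap_x and not gap_y:
        return "frame deletion (frames of version 1 absent from version 2)"
    if gap_y and not gap_x:
        return "frame addition in version 2"
    if gap_x and gap_y:
        return "gap on both axes"
    return "contiguous periods"
```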
After this detection of frame additions and/or deletions, the exploitation module 12 performs the video frame mapping between both versions by mapping the frames of each matching period onto their counterparts and by taking into account the frames deleted or added in the detected gaps.
As there is a “gap” between the matching periods E and G along the x-axis only, and a “gap” between the matching periods G and H along the y-axis only, this clearly means that a frame reordering has occurred: a frame has been deleted between t2 and t3 in the first version 4 and added between t′3 and t′4 in the second version 6.
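Such a reordering can be recognized by pairing an x-axis-only gap with a y-axis-only gap of the same duration found elsewhere (a minimal sketch under the same hypothetical representation; gaps are (start, end) intervals and the tolerance is illustrative):

```python
def find_reorderings(x_gaps, y_gaps, tol=0.04):
    """Pair gaps of matching duration: the segment missing at one
    position of the first version reappears, as an addition, at
    another position of the second version."""
    return [((xs, xe), (ys, ye))
            for xs, xe in x_gaps
            for ys, ye in y_gaps
            if abs((xe - xs) - (ye - ys)) < tol]
```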
After this detection of frame reordering, the exploitation module 12 performs the video frame mapping between both versions by mapping the frames of each matching period onto their counterparts and by associating the reordered frames with their new positions in the second version 6.
Thus, the present invention remarkably ensures a frame-accurate synchronization between different versions of a multimedia content, as it is able to detect any temporal alteration performed between the considered versions.
While there has been illustrated and described what are presently considered to be the preferred embodiments of the present invention, it will be understood by those skilled in the art that various other modifications may be made, and equivalents may be substituted, without departing from the true scope of the present invention. Additionally, many modifications may be made to adapt a particular situation to the teachings of the present invention without departing from the central inventive concept described herein. Furthermore, an embodiment of the present invention may not include all of the features described above. Therefore, it is intended that the present invention is not limited to the particular embodiments disclosed, but that the invention includes all embodiments falling within the scope of the appended claims.
Expressions such as “comprise”, “include”, “incorporate”, “contain” and “have” are to be construed in a non-exclusive manner when interpreting the description and its associated claims, namely construed to allow for other items or components which are not explicitly defined also to be present. Reference to the singular is also to be construed as a reference to the plural and vice versa.
A person skilled in the art will readily appreciate that various parameters disclosed in the description may be modified and that various embodiments disclosed and/or claimed may be combined without departing from the scope of the invention.
Thus, even if the above description focused on the synchronization of multiple versions of a multimedia content like a movie, it can be advantageously applied to the synchronization of recordings captured by different cameras for either a personal or a professional use.
Priority application: EP 12306481.8, filed November 2012.
International filing: PCT/EP2013/074766, filed Nov. 26, 2013.