This invention relates generally to hardware/software systems for creating an audio track synchronized to a specified, i.e., target, video track.
A “video track”, as used herein, refers to an ordered sequence of visual events represented by any time-based visual media, where each such event (hereinafter, “video” event) can be specified by a timing offset from a video start time. A video event can constitute any moment deemed to be visually significant.
An “audio track”, as used herein, refers to an ordered sequence of audible events represented by any time-based audible media, where each such event (hereinafter, “audio” event) can be specified by a timing offset from an audio start time. An audio event can constitute any moment deemed to be audibly significant.
It is often desirable to produce an audio track, e.g., music, to accompany a video track, e.g., a TV commercial or full-length film. When bringing video and audio together, the significant events in the respective tracks must be well synchronized to achieve a satisfactory result.
When composing original music specifically for a video track, it is common practice to compile a list of timing offsets associated with important video events and for the composer to use the list to create music containing correspondingly offset music events. Composing original music to accompany a video is quite costly and time-consuming, and so it has become quite common to instead manipulate preexisting, i.e., prerecorded, music to synchronize with a video track.

The selection of appropriate prerecorded music is a critical step in the overall success of joining video and audio tracks. The genre, tempo, rhythmic character and many other musical characteristics are important when selecting music. But, beyond the initial selection, the difficulty of using prerecorded music is that its audio/music events will rarely align with the video events in the video track. Accordingly, a skilled human music editor is typically employed to select suitable music for the video, and he/she then uses a computer/workstation to edit the prerecorded music. Such editing typically involves interactively shifting music events in time, generally by removing selected music portions to cause desired music events to occur sooner or by adding music portions to cause desired music events to occur later. Multiple iterative edits may be required to alter the prerecorded music to sufficiently synchronize it to the video track, and great skill and care are required to ensure that the music remains aesthetically pleasing to a listener.

Various software applications (e.g., Avid Pro Tools, Apple Soundtrack, SmartSound Sonicfire Pro, Sony Vegas, Sync Audio Studios Musicbed) have been released to facilitate the editing of prerecorded music. Such applications generally provide a user interface offering the user a means to visualize the timing relationship between a video track and a proposed audio track while providing tools to move or transform items in the audio tracks. The standard approach is for the editor to repeatedly listen to the source music to become acquainted with its form while also listening for musical events that can be utilized to effectively enhance the video events in the video track. The process is largely one of trial and error, using a “razor blade” tool to cut the music into sections and subsequently slide the sections backwards or forwards to test the effectiveness of each section at that timing. Once a rough arrangement of sections is determined, additional manual trimming and auditioning of the sections is generally required to make the sections fit together in a continuous stream of music. The outlined manual process is very work intensive and requires professional skill to yield a musically acceptable soundtrack.
An alternative method utilized by a few software applications involves adjusting the duration of a musical composition, or of a user-defined sub-section, by increasing or decreasing the rate (i.e., tempo, beats per minute) at which the media is played. If the tempo is increased/decreased by a uniform amount for the entire musical composition, the timing at which a single musical event occurs can indeed be adjusted relative to the beginning of the music, but it is mathematically unlikely that multiple music events will align with multiple video events via a single tempo adjustment. Additionally, only small timing adjustments are practical if audible degradation of the recording is to be avoided.
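To see why, using notation introduced here purely for illustration: uniform tempo scaling by a factor s maps a music event at offset m to offset m/s, so aligning one event pair fixes s, and any further pair aligns only by coincidence:

```latex
\frac{m_1}{s} = v_1 \;\Rightarrow\; s = \frac{m_1}{v_1},
\qquad
\frac{m_2}{s} = v_2 \;\Rightarrow\; \frac{m_2}{v_2} = \frac{m_1}{v_1},
```

i.e., a second music/video event pair aligns only when the music offsets happen to be exactly proportional to the video offsets.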
The present invention is directed to an enhanced method and apparatus for automatically manipulating prerecorded audio data to produce an audio track synchronized to a target video track. For the sake of clarity of presentation, it will generally be assumed herein that “audio data” refers to music, but it should be understood that the invention is also applicable to other audio forms, e.g., speech, special effects, etc.
More particularly, the present invention is directed to a system which allows a user to select a music source from multiple music sources stored in a music library. Each music source includes multiple audio portions having block data and beat data associated therewith. The block data includes data blocks respectively specifying the durations of the associated audio portions. Each block preferably also includes interblock compatibility data and/or suitability data. The beat data, generally referred to as a “beatmap”, comprises timing information specifying the rhythmic pulse, or “beat”, for the associated music source portion.
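As a non-limiting illustration (all names below are hypothetical and not drawn from the specification), these structures might be represented in memory as:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataBlock:
    """One audio portion of a music source."""
    block_id: str
    duration: float                  # duration of the associated audio portion, in seconds
    compatible_next: List[str] = field(default_factory=list)  # interblock compatibility data
    suitability: float = 0.0         # suitability characteristic of the block

@dataclass
class MusicSource:
    """A prerecorded music source stored in the library."""
    name: str
    blocks: List[DataBlock]
    beatmap: List[float]             # beat data: timing offsets of beats, in seconds

@dataclass
class TimingSegment:
    """One segment of the video timing specification (VTS)."""
    start: float                     # offset from the video start time, in seconds
    duration: float
```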
A system in accordance with the invention is operable by a user to produce an audio track synchronized to a video timing specification (VTS) specifying successive timing segments delimited by successive video events. After the user selects a music source, the system generates a music segment for each defined timing segment. In a preferred embodiment, for each music segment to be generated, an “untrimmed” music segment is first generated by assembling an ordered sequence of compatible data blocks selected at least in part based on their suitability and/or compatibility characteristics. The assembled data blocks forming the untrimmed music segment represent audio portions having a duration at least equal to the duration of the associated timing segment. If necessary, the untrimmed music segment is then trimmed to produce a final music segment having a duration matching the duration of the associated timing segment.
In a preferred embodiment, trimming is accomplished by truncating the audio portion represented by at least one of the data blocks in the untrimmed music segment. Preferably, audio portions are truncated to coincide with a beat defined by the associated beatmap. After final music segments have been generated for all of the timing segments, they are assembled in an ordered sequence to form the audio track for the target video track.
For simplicity of explanation, reference herein will sometimes be made to trimming the duration of a data block but this should be understood to mean modifying a data block to adjust the duration of the associated audio portion.
In accordance with an optional but useful feature of a preferred embodiment of the invention, a video timing specification analyzer is provided for automatically analyzing each video timing specification to identify “best fit” music sources from the music library, i.e., sources having a tempo closely related to the timing of video events, for initial consideration by a user.
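While the specification does not detail the analyzer's algorithm, one plausible heuristic for “best fit” (the function names, the per-event distance measure, and the BPM representation below are all assumptions) is to measure how far each video event falls from a source's beat grid:

```python
def tempo_fit_score(event_offsets, bpm):
    """Score how closely video events fall on the beat grid of a source
    with the given tempo; smaller is better. Hypothetical heuristic."""
    beat_period = 60.0 / bpm
    error = 0.0
    for t in event_offsets:
        # distance from the event to the nearest beat of the grid
        phase = t % beat_period
        error += min(phase, beat_period - phase)
    return error / len(event_offsets)

def best_fit_sources(event_offsets, sources_bpm):
    """Rank (name, bpm) pairs by how well their tempo fits the VTS."""
    return sorted(sources_bpm, key=lambda s: tempo_fit_score(event_offsets, s[1]))

# e.g. best_fit_sources([4.0, 8.5, 12.0], [("SourceA", 120), ("SourceB", 98)])
```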
Attention is initially directed to
The system 8A includes a library 13 storing a plurality of prerecorded music sources 14. Each music source in accordance with the invention comprises multiple audio portions, each portion having a data block and beat data 16 associated therewith. Each data block (as will be discussed in greater detail in connection with
A music event constitutes an audibly significant moment of a music source. It can be subjective to a particular listener but primarily falls within two types:
As depicted in
A data display 64 preferably displays the timing segments to a user, and the user is able to interact with the timing segment data via input 66. In a preferred embodiment, the timing segment table can be displayed on the computer screen with the user controlling the keyboard and mouse to modify, add or remove timing segments. In an alternative embodiment, the timing segment data can be displayed and modified in a visual timeline form, presenting the user with a visualization of the relative start time and duration of each timing segment. User modifications will preferably be recalculated into the table 62 to ensure that timing segments remain successive.
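Purely as an illustration of that recalculation (the specification does not give the logic; this sketch reuses the hypothetical TimingSegment structure from earlier), successiveness could be restored after a user edit like so:

```python
def make_successive(segments):
    """Recompute start times so segments abut with no gaps or overlaps.
    'segments' is a list of TimingSegment from the earlier sketch."""
    if not segments:
        return segments
    segments = sorted(segments, key=lambda s: s.start)
    t = segments[0].start
    for seg in segments:
        seg.start = t          # snap each segment to the end of the previous one
        t += seg.duration
    return segments
```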
The first timing segment is passed in step 68 to the music segment generator 70 (MSG) (
Construction of a new music segment having a duration matching a timing segment request 100 received from step 68 in
The duration of the music segment under construction is evaluated at 110 by summing the durations of all data blocks in the segment. As long as the music segment is shorter in duration than the requested timing segment duration, additional data blocks 112 will be tried and evaluated for their compatibility with the previous data block in the segment 116. The process continues, effectively trying and testing all combinations of data blocks, until a combination is discovered that has a suitable duration 110 and is compatible with the timing segment request. If all blocks are tried and the music segment fails the compatibility or duration test 114, the final data block in the music segment is removed 120 to make room for trying other data blocks in that position. If all data blocks are removed from the music segment 122, it indicates that all combinations of data blocks have been tried and that the iterative process of the block sequence compiler 130 is complete.
A music segment that is evaluated in step 118 to successfully fulfill the timing segment request is retained in memory in a table of segment candidates 124. The entire process is continued by creating new segments 102 until all possible combinations of data blocks have been tried 126.
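The try, test, and backtrack loop of the block sequence compiler can be pictured as a depth-first search. The following simplified sketch builds on the earlier hypothetical DataBlock structure (it assumes every block has a positive duration, and is not the specification's actual implementation):

```python
def compile_segments(blocks, target_duration):
    """Enumerate ordered sequences of compatible data blocks whose total
    duration is at least target_duration (an 'untrimmed' music segment).
    'blocks' is a list of DataBlock from the earlier sketch."""
    candidates = []

    def extend(sequence, total):
        if total >= target_duration:           # duration test passed
            candidates.append(list(sequence))  # retain in table of candidates
            return
        for block in blocks:
            # compatibility test against the previous block in the segment
            if sequence and block.block_id not in sequence[-1].compatible_next:
                continue
            sequence.append(block)             # try this block...
            extend(sequence, total + block.duration)
            sequence.pop()                     # ...then remove it to try others

    extend([], 0.0)
    return candidates
```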
The collected music segment candidates 124 will vary from one another as each music segment represents a different combination and utilization of the available music data blocks. The music segments can therefore be ranked 128 based on criteria such as duration or data block usage. The ranked music segment candidate table is returned to the timing controller (
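As a hypothetical example of such ranking criteria (the specification names duration and data block usage but does not fix a formula), candidates might be ordered by least excess duration, breaking ties toward more varied block usage:

```python
def rank_candidates(candidates, target_duration):
    """Prefer segments with the least excess duration, breaking ties in
    favor of segments that use more distinct data blocks."""
    def key(segment):
        total = sum(b.duration for b in segment)
        excess = total - target_duration
        distinct = len({b.block_id for b in segment})
        return (excess, -distinct)
    return sorted(candidates, key=key)
```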
Attention is now directed to
Attention is now directed to
Attention is now directed to
Stage 1 depicts the exemplary data for a video timing specification (
Stage 3 begins when the music segment generator (MSG) 70 (
Stage 4 shows the music segment 210 after the segment trimming step 77 (
In stage 5, the three exemplary music segments 210, 212, 214 are connected to make a complete music sequence 216, for constructing the final audio track. In a preferred embodiment of the invention, construction of the final audio track can be enhanced by the selective application of an audio cross-fade between adjacent music segments that are non-contiguous in the source music. One skilled in the art can see how the exemplary scenario can be extended to build additional music segments to correspond with additional video events.
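One conventional way to realize such a cross-fade (a standard linear fade over sample arrays; not a technique the specification prescribes) is:

```python
import numpy as np

def crossfade(a, b, fade_samples):
    """Join two equal-rate audio sample arrays, linearly cross-fading the
    last fade_samples of 'a' into the first fade_samples of 'b'."""
    fade_out = np.linspace(1.0, 0.0, fade_samples)
    fade_in = 1.0 - fade_out
    overlap = a[-fade_samples:] * fade_out + b[:fade_samples] * fade_in
    return np.concatenate([a[:-fade_samples], overlap, b[fade_samples:]])
```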
Attention is now directed to
The line segment 252 displays the desired duration for the music segment as defined by timing segment S1. The segment trimmer will utilize various strategies to shorten the music segment to more closely adhere to the duration of S1. A user of the system will preferably be allowed to specify which strategy he/she prefers, or the timing controller may specify a strategy.
Alternative 1: Using the target duration 252, the nearest occurrence of any beat 257 (depicted as an ‘|’ in the figure) is located in the beatmap 256. The end of the music segment is shortened by trimming block E 258 to the beat occurring closest to the desired timing segment end time.
Alternative 2: Using the target duration 252, the nearest occurrence of a downbeat 259 (depicted as an ‘X’ in the figure) is located in the beatmap. The end of the music segment will be shortened to the location of a downbeat 260.
Alternative 3: An algorithm is employed to systematically remove beats just prior to downbeats until the segment has been sufficiently shortened. In this example a total of 5 beats have been removed. From block A 262, a single beat is removed from the end, falling immediately prior to the initial downbeat of block B. In block B a single beat is removed prior to the downbeat that occurs in the middle of the block, and an additional beat is removed from the end of the block. Block E 266 similarly has two beats removed, one from the middle and one from the end.
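Alternatives 1 and 2 amount to snapping the segment end to the nearest entry of a beat grid. A minimal sketch (the function name and arguments are illustrative, not from the specification):

```python
def trim_to_nearest_beat(beatmap, target_duration, downbeats=None):
    """Return the trim point nearest target_duration.
    beatmap   -- ascending beat offsets, in seconds
    downbeats -- optional subset of beatmap, used for Alternative 2
    """
    grid = downbeats if downbeats is not None else beatmap
    return min(grid, key=lambda t: abs(t - target_duration))

# Alternative 1: trim_to_nearest_beat(beatmap, s1_duration)
# Alternative 2: trim_to_nearest_beat(beatmap, s1_duration, downbeats)
```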
The foregoing describes a system operable by a user to produce an audio track synchronized to a video timing specification specifying successive timing segments. Although only a limited number of exemplary embodiments have been expressly described, it is recognized that many variations and modifications will readily occur to those skilled in the art which are consistent with the invention and which are intended to fall within the scope of the appended claims. One specific embodiment of the invention is included in the commercially available SmartSound Sonicfire Pro 5 product which contains a HELP file further explaining the operation and features of the system.