System and method for assembling a recorded composition

Information

  • Patent Grant
  • Patent Number
    11,314,936
  • Date Filed
    Thursday, October 15, 2015
  • Date Issued
    Tuesday, April 26, 2022
Abstract
A system and method for assembling segments of recorded music or video from among various versions or variations of a recording into a new version or composition, such that a first segment of a first version of a recorded work is attached to a second segment of a second version of the recorded work, to create a new version of the recorded work.
Description
FIELD OF THE INVENTION

The present invention generally relates to assembling a version of an audio or video recording, and may for example allow compilation of a version of a composition from segments of various recorded versions or variations of one or more compositions.


BACKGROUND OF THE INVENTION

Artists such as singers, film producers or videographers may record and make available more than one version of a particular composition, or multiple variations of a part of a composition. Such versions may include for example an acoustic version of a song, an electric or synthesized version of the same song, a hip-hop version, a classical version, etc. Similarly, various artists may record and make available their own cover versions of the same song. Other artists may wish to create a composition that may include certain variations of parts of the original composition, or of parts of variations of similar or different compositions.


SUMMARY OF THE INVENTION

An embodiment of the invention may include a system having a memory to store data representing a first version of a composition and a second version of the same composition, where each of such versions is divided into segments, and each of such segments includes a pre-defined portion of the composition, and the system also includes a processor to assemble a third version of the composition out of the first segment of the first version and the second segment of the second version. The third version may be stored in a memory that is associated with the processor, such that data representing the third version can be recalled to play the third version.
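
By way of illustration, the following is a minimal sketch of the data arrangement described above, with invented names and fields: versions of one composition are divided into parallel, pre-defined segments, and a third version is assembled by picking one segment per position.

    # Illustrative only; names and fields are not taken from the patent.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        version: str    # e.g., "acoustic" or "electric" mode of the recording
        index: int      # which pre-defined portion of the composition this is
        start_ms: int   # start of the portion within that version
        end_ms: int     # end of the portion within that version

    def assemble(selected):
        """Order the chosen segments by position to form the new version."""
        return sorted(selected, key=lambda s: s.index)

    first = Segment("acoustic", 0, 0, 32_000)
    second = Segment("electric", 1, 32_000, 61_000)
    third_version = assemble([second, first])   # acoustic first, electric second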


In some embodiments, the processor may issue a signal in advance, such as in advance of a time of completion of a display or playing of the first segment of the first version, where the signal alerts a user that the first segment is about to finish and that the user may select a second segment to be combined with, assembled onto or linked to the first segment. A signal may also be issued by the processor to indicate that the second segment of the second version may be linked to the first segment of the first version.


In some embodiments, a processor may present an indication, such as a visual indication, that the first segment of the new version was taken from the first segment of the first version, and that the second segment of the new version was taken from the second version. Such an indication, such as the visual indication, may include an indication of a mode of the first segment and a mode of the second segment.


In some embodiments, the linking or association of the segments may include linking a set of data that represent or embody the first version to a set of data that embody the second version. In some embodiments the linking of the segments may preserve in the new version a musical flow of the composition or work.


In some embodiments, a processor may modify a duration of the second version to approximate a duration of the first version.


Some embodiments of the invention may include a method that designates a segmentation break at a pre-defined point in each of several versions of a composition, and accepts an instruction from a user to alter, at the segmentation break, a display of a first of the versions of the composition and to continue from the segmentation break point a display of a second version of the composition.


In some embodiments, the instruction may be recorded to associate the instruction with the first version and the second version.


In some embodiments, the method may include presenting a visual or audio display of an indication of a segmentation break in advance of a display of the pre-defined point of the first segment; and displaying an indication of a second version that is suitable to be associated with the pre-defined point of the first version.


In some embodiments, a processor may modify a duration of the second version to match a duration of the first version.


Some embodiments of the invention may include a method that presents to a user an indication of recordings of a composition, where each of the recordings includes a segmentation indication at a pre-defined point of the composition; the method links, at the pre-defined point of the composition, a set of stored data that represents a first segment of a first recording of the composition to stored data that represents a second segment of a second recording of the composition, and stores, as a new recording, a set of data representing the first segment linked to the second segment.


In some embodiments, the method may issue to the user a signal in advance of the pre-defined point of the composition, where the signal indicates to the user possible selection of a second segment of one or more other versions of the work that may be linked to the first segment.


Some embodiments of the invention may include a method that presents a visual representation corresponding to first segments of various versions of a composition, where the method includes accepting a selection of a first segment from among the various first segments, presenting various possible immediately subsequent segments of the various versions of the work, accepting from a user a selection from among the presented immediately subsequent segments, appending the selected subsequent segment to the first segment, and repeating the process of presenting, selecting and appending subsequent segments of the composition until an entire duration of the composition is assembled. In some embodiments, a processor may append a default segment from one of the versions if a user fails to select a segment to be appended to the version being assembled.


Some embodiments of the invention may include a method that presents to a user possible pre-defined variations for a first segment of a work, that accepts a selection from among the presented first segments of the work, and that identifies a second set of pre-defined variations for a second segment of the work, where the second set is based on, or is a derivative of, the selection that was made by the user for the first segment. The method may accept a selection from among the second set of pre-defined variations for the second segment of the work, and may associate the selection from the first set with the selection from the second set.


Some embodiments of the invention may include a method that defines a start point and an end point for a segment in a work, that presents an indication of variations of the work that may be inserted as a segment in a new version of the work, that accepts a selection made by a user from among the possible variations of the work to be inserted as a segment in the new version of the work, and that inserts data representing the selected variation as a segment in the new version of the work.





BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with features and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings, in which:



FIG. 1 is a conceptual illustration of a system in accordance with an embodiment of the invention;



FIG. 2 is a conceptual illustration of segments of various versions of a composition and possible combinations of such segments into a created version of the composition in accordance with an embodiment of the invention;



FIG. 3 is a flow diagram of a method in accordance with an embodiment of the invention;



FIG. 4 is a flow diagram of a method in accordance with an embodiment of the invention;



FIG. 5 is a flow diagram of multiple variations of segments of a composition, and possible connections between the variations, in accordance with an embodiment of the invention;



FIG. 6 is a flow diagram of a method in accordance with an embodiment of the invention;



FIG. 7 is a flow diagram of a method in accordance with an embodiment of the invention;



FIG. 8 is a flow diagram of a method in accordance with an embodiment of the invention; and



FIG. 9 is a flow diagram of a method in accordance with an embodiment of the invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

In the following description, various embodiments of the invention will be described. For purposes of explanation, specific examples are set forth in order to provide a thorough understanding of at least one embodiment of the invention. However, it will also be apparent to one skilled in the art that embodiments of the invention are not limited to the examples described herein. Furthermore, well-known features may be omitted or simplified in order not to obscure embodiments of the invention described herein.


Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification, discussions utilizing terms such as “adding,” “associating,” “selecting,” “evaluating,” “processing,” “computing,” “calculating,” “determining,” “designating,” “allocating” or the like, refer to the actions and/or processes of a computer, computer processor or computing system, or similar electronic computing device, that manipulate, execute and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.


An embodiment of the invention may be practiced through the execution of instructions that may be stored on an article such as a disc, memory device or other mass data storage article. Such instructions may be for example loaded into a processor and executed. The instructions may be stored in a memory of a computer as a client-executed application. Such client-executed application may store links to segments of music or video, or may store data representing music or video. An application may be executed by a processor, and segments of data representing music, sound, images or video may be manipulated in accordance with instructions that may be stored in a memory and that may be associated with such segments.


When used in this paper, the terms “composition” or “work” may, in addition to their regular definitions, refer to a song, musical opus, video presentation, audio recording, film, movie, advertisement or other collection of audio and/or audio-plus-visual signals that are assembled into a work that has identifiable components. For example, a composition may refer to a song having stanzas and verses, or bars and phrases, where in general, stanzas are linked to or follow verses, and verses are linked to or follow stanzas. The terms “mode” or “version” of a composition may, in addition to their regular definitions, refer to a style or identifiable characteristic of a particular recording of a given composition, or a recording made or sung by a particular artist. For example, a given song, video, speech or film may be recorded in an acoustic version, an electric version, a hip-hop version, a jazz version or other versions. The same song or video may be recorded by various artists or combinations of artists in their own respective versions. In some embodiments, each of such versions may include all of the components of the particular composition, such as all or most of the stanzas, verses, notes or scenes of the composition. Typically, the information or data manipulated in embodiments of the invention is one or more audio recordings of compositions or works.


When used in this paper, a “segment” may, in addition to its regular meaning, refer to a pre-defined portion of a work or composition, or an interval of either a defined or undefined period during a work or composition, that may be set off with a start time at a certain point during the composition and/or an end time during the composition, at which point another segment of the composition may begin or a non-segmented portion of the composition may resume. In some embodiments, a segment may refer to a space or blank during a song or composition into which space or blank a variation may be inserted.


When used in this paper, a “progression of a recording” may refer to a scale or measure of the progress of a recording relative to its complete play. For example, a progression may refer to an elapsed time or period of a recording, as such time or period may be measured from a beginning or end of a recording. In some embodiments, a progression may refer to a point in a series of musical notes, lyrics, images or other known events or markers in each of two or more recordings of the composition. For example, if the notes or lyrics of a musical or audio composition are known, a progression of the recording may include a tracking of the notes played or heard in one or more versions of the recording. A progression may be consistent between two or more versions of a recording such that a point in a progression of a first version may be tracked and associated with a corresponding point on a second version.


When used in this paper, the term “variation” may, in addition to its regular meaning, mean a portion of a song, movie, clip, or advertisement that may be inserted into or combined with one or more other portions of a song, movie or clip at a pre-defined point in the song, movie or clip. A variation may include lyrics, music or images that are different from the original song, movie or clip into which the variation is being added, and that are different from the other variations. A variation may be differentiated from a version in that, while a version will generally be or include the same work sung or played in a different way, a variation may be or include a different lyric, song or beat that is related to the original song, or to the other segments to which the variation is added, by being musically similar or by creating a musically, lyrically or visually desired effect when combined with those other segments.


In some embodiments, various versions of the same composition, each assembled as discussed herein, may be recorded and made available for users or consumers to select from, depending on their taste, mood or other preference.


Reference is made to FIG. 1, a conceptual illustration of a system in accordance with an embodiment of the invention. In some embodiments, system 100 may include for example a memory 102 such as a magnetic storage device, flash, RAM or other electronic storage device suitable for mass storage of data such as digital or analog audio or video data. In some embodiments, one or more segments of memory 102 may be divided or structured into a database or other structured format that may associate one or more data entries in memory 102 with one or more other data entries in memory 102. In some embodiments, structured data may be stored or accessible by reference to a markup language such as, for example, XML (Extensible Markup Language) or other markup languages. System 100 may also include a processor 104 such as a processor suitable for digital signal processing, encoding and decoding of large data streams and for large-scale data manipulations such as image processing. Processor 104 may include more than one processor, such as for example a controller, CPU or a video processor, that may operate for example in parallel or in other configurations. System 100 may also include one or more display or output devices 106, such as speakers or a video display, and an input device 108 such as a keyboard, mouse, microphone, touch screen or other input device.
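
The paragraph above mentions structuring segment data with a markup language such as XML. The following is a hypothetical sketch of such a record (the element and attribute names are invented here, not taken from the patent), parsed with Python's standard library.

    # Hypothetical XML layout for segment markers; not the patent's actual schema.
    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <composition title="Example Song">
      <version mode="acoustic">
        <segment index="0" start="0.0" end="32.5"/>
        <segment index="1" start="32.5" end="61.2"/>
      </version>
      <version mode="electric">
        <segment index="0" start="0.0" end="30.9"/>
        <segment index="1" start="30.9" end="59.8"/>
      </version>
    </composition>
    """)

    # Parallel segments share an index, so a segment of one version can be
    # associated with the corresponding segment of another version.
    for seg in doc.iterfind(".//version[@mode='electric']/segment"):
        print(seg.get("index"), seg.get("start"), seg.get("end"))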


In operation, processor 104 may execute code such as music or video playback code (e.g., stored in memory 102), which inputs music or video data (e.g., also stored in memory 102) and causes music to be output from an output device 106 (e.g., a speaker) and/or video to be output from an output device 106 (e.g., a monitor or display). Processor 104 may execute code to carry out methods as disclosed herein.


In operation, memory 102 may be loaded with or store two or more versions of a composition such as a song or video. Each of the recorded and stored versions may be marked, or divided into segments, where each such segment represents or is associated with a known portion of the composition. The beginning or ending markings of such segments may not be visible or audible, but may designate or set off the start and/or end of the segment.


A user may be presented with a selection of versions of the composition, and may choose a first version that is to be played. At some point in the progression of the first chosen version, the user may select a segment of a second version of the recording that is to be inserted as part of a new version of the recording that the user is creating. Processor 104 may identify the segment most closely fitting the user's selection, and may copy or insert the selected segment of the second version into the version of the composition that the user is creating. This process may be repeated until all of the segments of the recording are included in the user's new version.


The user may in this way select a first stanza or segment of, for example, a song in an acoustic mode, a second stanza from an electric mode and a cadence from a jazz mode. In some embodiments the segments may be combined seamlessly, so that beat, rhythm, pitch and other musical characteristics are retained in the movement from a segment in one mode to a segment in another mode, and so that a complete, uninterrupted and seamless new version is created that includes a segment from the acoustic version, a segment from the electric version and a cadence from the jazz version.


In some embodiments, segments may divide all or some of the recorded versions of a composition, such that a first segment of each of the rock, acoustic and jazz versions of a composition may include only a first stanza or other pre-defined portion of the composition in each of the versions. The second segment in each of the same rock, acoustic and jazz versions may include only the second stanza of the composition. Subsequent segments may include for example subsequent stanzas or verses, instrumental portions, cadences or other portions of the composition. Parallel segments in each of the versions may thereby define particular portions of the composition. For example, a fifth segment in each of the rock and acoustic versions may point to and include for example the twelfth through fifteenth lines of the song or video that is the subject of both of the recorded versions. In some embodiments, the segment markers or set-off points may be loaded into for example a markup language such as an XML format, and the segments of many recorded versions may be associated with or linked to each other.


In some embodiments, a play speed of one or more versions of a recording may be altered so that the total durations of the various versions of the composition from which segments may be chosen are relatively or approximately uniform. Such alterations of play speed may be performed with tools such as Ableton Live™, Qbase™ software products or other suitable audio recording tools.
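
If each version is stretched toward a common target duration, the required playback-rate factor follows from simple arithmetic. Below is a minimal sketch under the assumption of a uniform rate change; dedicated tools such as those named above perform the stretch while preserving pitch.

    # Rate factor < 1 slows playback (lengthens it), > 1 speeds it up (shortens it).
    def rate_factor(version_duration_s, target_duration_s):
        return version_duration_s / target_duration_s

    # A 185-second version played at a factor of about 0.974 lasts the
    # 190-second target (185 / 0.974 ≈ 190).
    print(round(rate_factor(185.0, 190.0), 3))   # 0.974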


Each of the respective first, second, third, and nth markers, break points or segment set-off points of all of the recorded versions of a particular recording may therefore uniformly point to the identical or corresponding portions of the recorded work. Such uniform definition of the segments may allow the segments, when combined, to create a musically seamless or continuously flowing work without the need for a user to make further adjustments to links between the segments. For example, a user may select a first segment from a first version, a second through fourth segment from a second version and a final segment from the first version, and such segments may be combined by the processor to create a seamlessly flowing version of the recording.


In some embodiments, a version may contain many or even hundreds of defined segments so that a processor 104 may locate a segment point that is close to any point in the recording even if the user did not issue a signal to switch segments at the precise timing or location of a segmentation point.
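
A sketch of how such snapping might work, assuming the segmentation points are kept as a sorted list of times; the helper name is illustrative, not from the patent.

    # Snap a user's switch request to the nearest pre-defined segmentation point.
    import bisect

    def nearest_marker(markers_s, t_s):
        """markers_s: sorted segmentation times in seconds; t_s: user signal time."""
        i = bisect.bisect_left(markers_s, t_s)
        candidates = markers_s[max(0, i - 1):i + 1]
        return min(candidates, key=lambda m: abs(m - t_s))

    print(nearest_marker([0.0, 8.2, 16.4, 24.5, 32.7], 17.9))   # 16.4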


In some embodiments, a system may store the various segments (or pointers to such segments) that were selected by a user from two or more versions, and may replay the segments as a new version created by a user. In this way, users may create new versions of a known recording by assembling pieces of various versions of the recording.


Reference is made to FIG. 2, a conceptual illustration of a display of versions and segments of versions of a composition in accordance with an embodiment of the invention. A display may present a representation of a first version 202 and a second version 204 of a recording by way of for example a graphic user interface 205 (e.g., displayed on a monitor such as an output device 106), and may indicate graphically the mode of each of the displayed versions 202 and 204, and the location (by way of for example a graphic arrow or marker 211) in a progression of the recording of the various segments 206, 208 and 210 that are defined in the versions. For example, a particular version may be labeled with a name, icon 213 or avatar that may represent the version or the artist who performed the version.


A recording may begin to play by way of a video and/or audio output, and the display may indicate to a user the progress of the playing of the version of the recording on a display. In advance of reaching for example an end of a defined segment 208, the display may indicate an upcoming decision point wherein the user may decide which, if any, of the possible choices of segments 208 from other versions 204 may be inserted into the version that he is creating. For example, such advance notice may be displayed or presented to a user by way of a user interface or by a sound indication, for example a few seconds before the end of the segment that is then playing or being shown. In some embodiments, a display of a countdown may be added to indicate to the user the point in the recording by which he must make his selection during the course of the play of the then-current version. In advance of the decision point, a display of the possible alternative segments 208 from versions 204 and 214 that may be selected may be provided to the user, and such display may hover and then disappear when the decision point passes or a selection of a new segment 208 has been made.


In some embodiments, if no selection of an alternative segment is made by a user, the default action may be set to continue playing the version that is then progressing. Other defaults may be used such as for example randomly altering versions at one or more segment breaks. If a selection of a segment from another version 214 is made, the graphic display may indicate the new version then being played, and may for example highlight or otherwise show the path of the various segments that have been selected for inclusion in the new version and the current version being played.


In some embodiments, the path or segments from versions that have been selected may be displayed for the user, and stored to show and retain the new version created by the user. The segments may be joined to create an original version of the recording consisting of various segments of assorted versions of the composition.


In some embodiments, a user may download or otherwise import into a client or other application the versions from which selections of segments may be made. In some embodiments, no such downloading may be required, and instead a reference, such as an HTML (HyperText Markup Language) site, to segments of various versions that are stored remotely, may be presented to the user, and the user may store his newly created version by storing in a memory such references to the remotely stored versions. In some embodiments, the application may detect the bandwidth that is available on the user's computer as well as the speed of the recording, and may store or download the appropriate amount of data to facilitate smooth playback. In some embodiments, the user's created version 212 may also be stored remotely and made available to other users who may for example download version 212 to a computer or other device, use segments of such user's version 212 to create their own new versions, or other uses.
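
The buffering decision described above amounts to simple arithmetic over the recording's bit rate and how far ahead playback must stay smooth; a rough, illustrative sketch (numbers invented):

    # Bytes to preload so playback stays smooth.
    def preload_bytes(bitrate_kbps, seconds_ahead):
        return int(bitrate_kbps * 1000 / 8 * seconds_ahead)

    # A 192 kbps recording buffered 10 seconds ahead needs about 240 KB.
    print(preload_bytes(192, 10))   # 240000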


The client or application may include control functions such as for example play, pause, rewind, volume and other common controls for audio and video recording or playing.


Reference is made to FIG. 3, a flow diagram of a method in accordance with an embodiment of the invention. In block 300 a user may be presented with a start screen where for example the user may select the recording and two or more versions of the recording that may be available. In some embodiments, various characteristics, data and descriptions of the recording and the versions may be loaded into the application and may be displayed, played back or presented. In block 302, the player or client software that is stored in a memory may be pre-loaded with at least some of, or portions of, the initial segments of the various versions of the recording, as were selected by the user. In block 304 the user may select the version for the first segment from which the recording is to begin, and the first segment of such version may become the first segment in the user's new version. In block 306, the selected segment may be played for the user, and portions of the upcoming segments that may be selected by a user at the next decision point may be pre-loaded or buffered into the application. In block 308, if the segment then being played is not the last segment of the recording, one or more versions of the subsequent segment or segments may be presented to the user for his selection. In block 310, the process of presenting and selecting segments of a recording may continue until the last segment of the recording is reached. In block 312, an ending screen may be presented to a user where a summary of the selected and assembled segments is displayed or played, and the user may be prompted to save, share, upload or otherwise use the newly created version. In some embodiments, such final version may be stored in a memory associated or connected with a client that may run an application executing an embodiment of the invention. In some embodiments, the process of selecting segments and adding such selected segments to the song as it plays may be performed in real time and while the song is playing for the user.
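
A hedged sketch of the FIG. 3 loop, with invented names: offer the parallel candidates at each position, accept a selection, and fall back to the version already playing when no selection is made.

    # versions maps a version name to its ordered list of segments.
    def build_version(versions, choose, initial_version):
        n_segments = len(next(iter(versions.values())))
        assembled, current = [], initial_version
        for i in range(n_segments):
            candidates = {name: segs[i] for name, segs in versions.items()}
            # choose() returns a version name, or None to keep the current one.
            current = choose(i, candidates) or current
            assembled.append(candidates[current])
        return assembled

    versions = {"acoustic": ["A0", "A1", "A2"], "jazz": ["J0", "J1", "J2"]}
    print(build_version(versions, lambda i, c: "jazz" if i == 1 else None,
                        "acoustic"))   # ['A0', 'J1', 'J2']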


Reference is made to FIG. 4, a flow diagram of a method in accordance with an embodiment of the invention. In block 400, there is presented an indication of versions of a composition, where each such version includes segmentation marks at each of a number of pre-defined points. In block 402, a segment from a first version is joined at one of the pre-defined points to a segment from a second version. In block 404, there is stored or recorded an indication of the joined segments from each of the versions and an indication of the segmentation point at which such segments were joined.


In some embodiments, a signal, such as a displayed or audio signal on a user interface, may be issued in advance of the end of a segment, to alert the user that the current segment will soon be completed and that he will have an opportunity to change or alter the flow of his newly created version by substituting a segment from a different version than the one he is now using. If the user does not input a signal for such substitution, then the display may default to continue showing or playing the version then being played, or may choose a random version to add to the segments that were already assembled.


In some embodiments, there may be presented to a user an indication of which segments from among the various versions are suitable for being assembled onto the version then being played. For example, at a particular point in a song, a piano instrumental may be heard, and a display may show that another version of the song includes a guitar instrumental that can break up the piano instrumental and that can be inserted after the piano instrumental. The display may also indicate that an a cappella version of the song may not be suitable or appropriate for insertion at such point.


In some embodiments, a display may be presented that shows the origin of the various segments that have been assembled into the newly created version. For example, a graphic or icon of a guitar may be overlaid onto a graphical display representing a first segment of the user's newly created version to show that the source of the segment is an electric guitar version or a hip-hop mode or version of the recording. The icon or graphic of the segment as incorporated into the newly created version may be similar to or identical with the icon or graphic of the version that was the origin of the segment. An avatar of a particular singer may be overlaid onto a second segment to show that such second segment was taken from a version performed by the particular singer.


In some embodiments, a process of assembling the various segments may include linking an end of the first segment with a start of the second segment while maintaining a musical flow of the newly created version. For example, the segments may be linked to maintain a beat, key, tone, pitch or other characteristics of one or more of the original versions. In some embodiments the linking, moving, connecting or manipulating of segments may be accomplished by manipulating data that when processed through a player may reproduce music, sounds, images or video. Representations or links to such segments of data that represent the music or image may be stored, displayed, manipulated or processed.
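
The text does not fix a particular joining technique; one common way to link an end of one audio segment to the start of another without an audible seam is a short equal-power crossfade, sketched below on raw sample lists (an illustrative choice, not mandated by the text).

    # Illustrative equal-power crossfade; `a` and `b` are lists of samples.
    import math

    def crossfade(a, b, overlap):
        out = list(a[:len(a) - overlap])
        for k in range(overlap):
            t = k / max(overlap - 1, 1)
            gain_out = math.cos(t * math.pi / 2)   # fade the first segment out
            gain_in = math.sin(t * math.pi / 2)    # fade the second segment in
            out.append(a[len(a) - overlap + k] * gain_out + b[k] * gain_in)
        out.extend(b[overlap:])
        return out

    joined = crossfade([1.0] * 8, [0.5] * 8, overlap=4)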


In some embodiments, a processor may accept a signal from a user at various points in the course of the play or display of a version of the composition, even if such points are not associated with a defined break point or segmentation point. The processor may then select the closest or otherwise most suitable break point or segmentation point that can be used to alter the flow of the play and substitute a segment selected by the user for the then-current segment.


In some embodiments, a processor may modify a duration of various versions of a composition so that such durations are approximately the same.


In some embodiments, one or more artists or composers may record multiple variations of one or more segments of a song or music video. For example, a segment of a love song may be recorded in the masculine, as a man singing about a woman, or in the feminine, as a woman singing about a man, such that in the first variation of a segment the song is about “her eyes”, and in the second variation of the segment the song is about “his smile”. Another segment may be recorded in a first variation where a man and a woman break up and never see each other, in a second variation where the man and the woman break up but then get back together again, and in a third variation where the man and the woman break up and the woman returns to demolish the man's car. Other variations and permutations of segments may be recorded and presented to a user to create possible story lines that may be combined to weave different plots, song settings, genders or other factors in a song or music video. A user may select a first segment from among the first segment variations, combine that segment with a second segment from among the second segment variations, and may continue combining segments into a song that carries a different plot, setting, ending or one or more other factors that are unlike any of the existing songs that were recorded by the artist. All of the segment variations may be of pre-defined length or have pre-defined starting and/or ending points at which each of such segment variations may be joined with one or more other segments.


In some embodiments, a variation may be inserted at a pre-defined starting point or break point (n), but may end at one of several subsequent pre-defined ending points (n+2, n+3, etc.), rather than at the next break point (n+1). In this way, a long variation may be added in a spot that would otherwise have been filled with a shorter variation. In some embodiments, the various segments that may be combined need not share a particular melody, duration, tempo, musical arrangement or other pre-defined characteristics, except for a designation by the system of a pre-defined beginning and/or end of the particular segment, and a rule that an end of a first segment is to be followed by a beginning of one of various second or subsequent segments.


Reference is made to FIG. 5, a conceptual diagram of possible variations of a series of segments that may be constructed into a song or music video by, for example, an application in accordance with an embodiment of the invention. For example, in a first segment 502, a user may be presented with two variations from which he may choose: a first variation 504 is a stanza about a lonely boy, and a second variation 506 is a stanza about a lonely girl. If a user selects variation 504 as a first segment in the construction of his song or video, then the system will limit, define or present to the user only variations 510 through 516 in segment 2 (508) as suitable to follow selected variation 504 of segment 1 (502). In FIG. 5, the suitability of variations that may follow a selected variation is shown as solid lines 501. As shown in FIG. 5, variation 518 may not be suitable to follow variation 504, and a user will therefore not be presented with variation 518 as a possible variation to follow variation 504. If a user first selects variation 504 as his selection for segment 1 (502), and then selects variation 510 as his choice for segment 2 (508), the system may present variations 522 to 530 to the user for possible selection as segment 3 (520). This process of presentation, selection of possible variations and choice by the user may be continued until, for example, a variation has been selected for all of the segments. In some embodiments, a variation need not be chosen for each segment. For example, if a user chooses variation 506 for segment 1 (502), and then chooses variation 518 as a selection for segment 2 (508), the user may then be presented with variation 566 as a final selection for the user's song, such that the user will have selected only three segments that are to be constructed into a song or video. In some embodiments, a variation in a prior segment may be re-used or presented again as a possible choice in a subsequent segment. For example, variation 516 may be presented as a possible choice for segment 2 (508), and may be presented again as a possible choice for segment 4 (538), such that a variation may be re-used in multiple segments in a work. In some embodiments, a use of variation 516 in segment 4 (538) may be associated with different variations in segment 5 (560), to account for the use of variation 516 twice or to account for the placement of variation 516 near the end of the work.
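
The compatibility relationships of FIG. 5 can be thought of as a directed graph. A small sketch using the figure's numerals, encoding only the transitions described in the passage above:

    # Which variations may follow a selected variation, per the FIG. 5 example.
    follows = {
        504: [510, 512, 514, 516],        # per the text, 518 may not follow 504
        506: [518],                        # the text shows 506 followed by 518
        510: [522, 524, 526, 528, 530],
        518: [566],                        # a short, three-segment path
    }

    def next_choices(selected_variation):
        return follows.get(selected_variation, [])

    print(next_choices(504))   # [510, 512, 514, 516]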


In some embodiments, a user may be presented with a selection of variations for one or more segments, and may choose a first variation that is to be played or assembled. At a certain point during the segment or after the segment ends, the user may select a variation for the second segment as part of a new version of the recording that the user is creating. A processor may identify one or more segments that closely fit the user's selection and that match or are musically compatible with the then just-ended segment. The processor may assemble the selected or closely fitting segment after the then just-ended segment. This process may be repeated until some or all of the segments of the recording have been selected in the user's new version. As part of the selection process, the processor may match musical characteristics of one or more previously selected segments to the possible segments that may be selected by the user in subsequent segments. Such assistance by the processor may increase the musical quality of the assembled segments. In some embodiments, a user may be presented with the relative quality of the match between assembled segments and variations that are presented for possible assembly. For example, a processor may compare any or all of rhythm, pitch, timing or other characteristics of variations, and indicate to a user which of the variations includes characteristics that would match the segments already assembled.
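
A hedged sketch of such a comparison, with invented features and weights: each candidate is scored by how closely its characteristics match the segment just assembled, and the scores could be surfaced to the user as relative match quality.

    # Higher (less negative) score means a closer musical match; the features
    # and weights here are invented for illustration.
    def match_score(current, candidate):
        weights = {"tempo_bpm": 1.0, "pitch_midi": 0.5}
        return -sum(w * abs(current[f] - candidate[f]) for f, w in weights.items())

    current = {"tempo_bpm": 120.0, "pitch_midi": 60.0}
    options = [{"tempo_bpm": 118.0, "pitch_midi": 62.0},
               {"tempo_bpm": 96.0, "pitch_midi": 60.0}]
    best = max(options, key=lambda c: match_score(current, c))
    print(best)   # the 118-BPM candidate matches more closely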


In some embodiments, a user may select a variation to be inserted in a segment even after the pre-defined insertion point has passed in the playing of the song. In such case, the variation to be inserted may be stored and played at the point designated for insertion in a next playing of the composition. In some embodiments, a selected variation may be associated with one or more next variations from which a user may select a variation to follow the selected variation.


In some embodiments, a system may randomly select variations for insertion into some or all of the segments.


In some embodiments, segment 1 (502) may not be the start of a song, video, work or recording, but may represent the first spot or space in a recorded work that is available for insertion by a user of a selected variation. For example, a user may be presented with a first stanza of the song “Mary Had a Little Lamb”, where such first stanza includes the usual lyrics. The user may be presented with several variations of a first segment that is actually the second stanza of the work, where such variations include different music, lyrics, tempo, etc. Similarly, the user may be presented with multiple variations of a third stanza from which to choose. Finally, the system may insert a final stanza without giving the user a choice of variations from which to choose.


In another embodiment, a system may present to a user a recording of the song “Happy Birthday”, and may designate a start point for a segment that starts with the end of “Happy Birthday dear”. A user may be presented with an assortment of recordings of names from which may be selected a recording of a sung name that will be inserted into the segment. The end of the inserted segment may be the end of the recorded name, and the recorded work may continue with “Happy Birthday to you”. The newly created work may include the recorded first part, the selected segment, and the recorded ending.


In some embodiments, the assembled variation, or signals associated with the assembled variations, may be stored. The assembled variations in the form of a newly created work may be played, stored or distributed. In some embodiments, the assembled segments may constitute a newly created musical composition.


Reference is made to FIG. 6, a flow chart of a method in accordance with an embodiment of the invention. In block 600, there may be designated a segmentation break at a pre-defined point in each of several versions of a composition. In block 602, an instruction may be accepted from, for example, a user, to alter at the segmentation break an output of a first segment of the first version, and to continue from the segmentation point an output of a segment of the second version.


Reference is made to FIG. 7, a flow chart of a method in accordance with an embodiment of the invention. In block 700, there may be presented to a user an indication of a first segment for several versions of a work, where each of such first segments has a same pre-defined start point. In block 702, there may be accepted from a user, by for example a processor, a selection of a first segment from among the several first segments that were presented from the versions. In block 704, there may be presented to for example a user several versions of a next or subsequent segment for one or more of the versions that were presented for the first segment. In block 706, there may be accepted from the user a selection from among the next or subsequent segments that were presented to the user. In block 708, the selected first segment may be appended, assembled or attached to the second segment at a pre-defined point so that a musical quality of the combination of the two segments is maintained. In block 710 the process of presenting segments of versions, accepting a selection of a segment and appending the selected segment to the prior segment may be repeated until an entire duration of the composition is assembled from the segments.


Reference is made to FIG. 8, a flow chart of a method in accordance with an embodiment of the invention. In block 800, there may be presented to a user an indication of a pre-defined first segment of several variations of a work. In block 802, a processor may accept from for example a user a selection of one of the variations of the first segment. In block 804, the processor may select or identify several variations for inclusion as a second segment based on the selection that was made by the user for the first segment. For example, if a selected first segment is from a hip-hop version, the processor may present to the user various second segments that also have hip-hop sounds from different artists, or may include portions of different hip-hop songs from the same artist. In some embodiments, the processor may also present an indication of a relative suitability of the various presented second segments in light of the selected first segment.


Reference is made to FIG. 9, a flow chart of a method in accordance with an embodiment of the invention. In block 900, a start point and/or an end point for one or more segments of a work may be defined in a recording of the work. In block 902, an indication of several variations for one or more of the segments may be presented to a user. In block 904, a selection may be accepted for a variation from among the presented variations. In block 906, the selected variation may be added, combined or inserted into prior or subsequent variations to create a new work based on the selected variations.


It will be appreciated by persons skilled in the art that embodiments of the invention are not limited by what has been particularly shown and described hereinabove. Rather the scope of at least one embodiment of the invention is defined by the claims below.

Claims
  • 1. A method comprising: providing a video presentation comprising a plurality of predefined paths corresponding to different versions of a video, each predefined path comprising a plurality of seamlessly joined video segments, wherein a first video segment of the plurality of video segments comprises a decision period; during a first playback of the video during a first time period: during playback of the first video segment and within the decision period, providing a visual representation of a first set of options, each option in the first set of options being associated with a different video segment in a first subset of the video segments, the first subset of video segments corresponding to a first subset of predefined paths of the plurality of predefined paths; and receiving a first decision selecting one of the options in the first set of options; and during a second playback of the video during a second time period, later than the first time period, upon reaching the decision period of the first video segment: modifying the first set of options, based on the first decision selecting one of the options in the first set of options during the first playback of the video, to create a second set of options, the second set of options comprising at least one option included in the first set of options and at least one option not included in the first set of options; providing a visual representation of the second set of options, each option in the second set of options being associated with a different video segment in a second subset of the video segments, the second subset of video segments corresponding to a second subset of predefined paths of the plurality of predefined paths, the second subset of predefined paths being different than the first subset of predefined paths; receiving a second decision selecting one of the options in the second set of options, the selected option in the second set of options being associated with a next video segment to play following the first video segment; and seamlessly presenting the next video segment following the first video segment.
  • 2. The method of claim 1, wherein at least one of the seamlessly joined video segments is included in two or more of the predefined paths.
  • 3. The method of claim 1, wherein the decision period comprises a portion of playback time of the first video segment.
  • 4. The method of claim 3, wherein the decision period comprises a predefined start time and a predefined end time within the playback time of the first video segment.
  • 5. The method of claim 1, wherein the decision is one of received from a user and automatically determined.
  • 6. The method of claim 1, further comprising presenting each of the video segments in a particular one of the predefined paths to provide a seamless version of the video.
  • 7. The method of claim 1, further comprising providing, during playback of the first video segment and within the decision period, visual representations of one or more of the video segments in the first subset of video segments.
  • 8. The method of claim 7, further comprising removing from display one or more of the visual representations following an end of the decision period.
  • 9. The method of claim 1, further comprising storing decisions made by a user at a plurality of decision points during the first and/or second playback of the video presentation.
  • 10. The method of claim 9, further comprising making available the stored decisions of the user to other users.
  • 11. A system comprising: at least one memory for storing computer-executable instructions; and at least one processor for executing the instructions stored on the at least one memory, wherein execution of the instructions programs the at least one processor to perform operations comprising: providing a video presentation comprising a plurality of predefined paths corresponding to different versions of a video, each predefined path comprising a plurality of seamlessly joined video segments, wherein a first video segment of the plurality of video segments comprises a decision period; during a first playback of the video during a first time period: during playback of the first video segment and within the decision period, providing a visual representation of a first set of options, each option in the first set of options being associated with a different video segment in a first subset of the video segments, the first subset of video segments corresponding to a first subset of predefined paths of the plurality of predefined paths; and receiving a first decision selecting one of the options in the first set of options; and during a second playback of the video during a second time period, later than the first time period, upon reaching the decision period of the first video segment: modifying the first set of options, based on the first decision selecting one of the options in the first set of options during the first playback of the video, to create a second set of options, the second set of options comprising at least one option included in the first set of options and at least one option not included in the first set of options; providing a visual representation of the second set of options, each option in the second set of options being associated with a different video segment in a second subset of the video segments, the second subset of video segments corresponding to a second subset of predefined paths of the plurality of predefined paths, the second subset of predefined paths being different than the first subset of predefined paths; receiving a second decision selecting one of the options in the second set of options, the selected option in the second set of options being associated with a next video segment to play following the first video segment; and seamlessly presenting the next video segment following the first video segment.
  • 12. The system of claim 11, wherein at least one of the seamlessly joined video segments is included in two or more of the predefined paths.
  • 13. The system of claim 11, wherein the decision period comprises a portion of playback time of the first video segment.
  • 14. The system of claim 13, wherein the decision period comprises a predefined start time and a predefined end time within the playback time of the first video segment.
  • 15. The system of claim 11, wherein the decision is one of received from a user and automatically determined.
  • 16. The system of claim 11, wherein the operations further comprise presenting each of the video segments in a particular one of the predefined paths to provide a seamless version of the video.
  • 17. The system of claim 11, wherein the operations further comprise providing, during playback of the first video segment and within the decision period, visual representations of one or more of the video segments in the first subset of video segments.
  • 18. The system of claim 17, wherein the operations further comprise removing from display one or more of the visual representations following an end of the decision period.
  • 19. The system of claim 11, wherein the operations further comprise storing decisions made by a user at a plurality of decision points during the first and/or second playback of the video presentation.
  • 20. The system of claim 19, wherein the operations further comprise making available the stored decisions of the user to other users.
Priority Claims (1)
  • Number: 198717
  • Date: May 2009
  • Country: IL
  • Kind: national
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 12/706,721, filed on Feb. 17, 2010, and entitled “System and Method for Assembling a Recorded Composition,” which claims priority to Israeli Patent Application No. 198717, filed on May 12, 2009, the entireties of which are hereby incorporated by reference.

20100153885 Yates Jun 2010 A1
20100161792 Palm et al. Jun 2010 A1
20100162344 Casagrande et al. Jun 2010 A1
20100167816 Perlman et al. Jul 2010 A1
20100167819 Schell Jul 2010 A1
20100186032 Pradeep et al. Jul 2010 A1
20100186579 Schnitman Jul 2010 A1
20100199299 Chang et al. Aug 2010 A1
20100210351 Berman Aug 2010 A1
20100251295 Amento et al. Sep 2010 A1
20100257994 Hufford Oct 2010 A1
20100262336 Rivas et al. Oct 2010 A1
20100267450 McMain Oct 2010 A1
20100268361 Mantel et al. Oct 2010 A1
20100278509 Nagano et al. Nov 2010 A1
20100287033 Mathur Nov 2010 A1
20100287475 van Zwol et al. Nov 2010 A1
20100293455 Bloch Nov 2010 A1
20100325135 Chen et al. Dec 2010 A1
20100332404 Valin Dec 2010 A1
20110000797 Henry Jan 2011 A1
20110007797 Palmer et al. Jan 2011 A1
20110010742 White Jan 2011 A1
20110026898 Lussier et al. Feb 2011 A1
20110033167 Arling et al. Feb 2011 A1
20110041059 Amarasingham et al. Feb 2011 A1
20110069940 Shimy et al. Mar 2011 A1
20110078023 Aldrey et al. Mar 2011 A1
20110078740 Bolyukh et al. Mar 2011 A1
20110096225 Candelore Apr 2011 A1
20110126106 Ben Shaul May 2011 A1
20110131493 Dahl Jun 2011 A1
20110138331 Pugsley et al. Jun 2011 A1
20110163969 Anzures et al. Jul 2011 A1
20110169603 Fithian et al. Jul 2011 A1
20110182366 Frojdh et al. Jul 2011 A1
20110191684 Greenberg Aug 2011 A1
20110191801 Vytheeswaran Aug 2011 A1
20110193982 Kook et al. Aug 2011 A1
20110197131 Duffin et al. Aug 2011 A1
20110200116 Bloch et al. Aug 2011 A1
20110202562 Bloch et al. Aug 2011 A1
20110239246 Woodward et al. Sep 2011 A1
20110246661 Manzari et al. Oct 2011 A1
20110246885 Pantos et al. Oct 2011 A1
20110252320 Arrasvuori et al. Oct 2011 A1
20110264755 Salvatore De Villiers Oct 2011 A1
20110282745 Meoded et al. Nov 2011 A1
20110282906 Wong Nov 2011 A1
20110307786 Shuster Dec 2011 A1
20110307919 Weerasinghe Dec 2011 A1
20110307920 Blanchard et al. Dec 2011 A1
20110313859 Stillwell et al. Dec 2011 A1
20110314030 Burba et al. Dec 2011 A1
20120004960 Ma et al. Jan 2012 A1
20120005287 Gadel et al. Jan 2012 A1
20120011438 Kim et al. Jan 2012 A1
20120017141 Eelen et al. Jan 2012 A1
20120062576 Rosenthal et al. Mar 2012 A1
20120081389 Dilts Apr 2012 A1
20120089911 Hosking et al. Apr 2012 A1
20120094768 McCaddon et al. Apr 2012 A1
20120105723 van Coppenolle et al. May 2012 A1
20120110618 Kilar et al. May 2012 A1
20120110620 Kilar et al. May 2012 A1
20120120114 You et al. May 2012 A1
20120137015 Sun May 2012 A1
20120147954 Kasai et al. Jun 2012 A1
20120159541 Carton et al. Jun 2012 A1
20120179970 Hayes Jul 2012 A1
20120198412 Creighton et al. Aug 2012 A1
20120213495 Hafeneger et al. Aug 2012 A1
20120225693 Sirpal et al. Sep 2012 A1
20120233631 Geshwind Sep 2012 A1
20120246032 Beroukhim et al. Sep 2012 A1
20120263263 Olsen et al. Oct 2012 A1
20120308206 Kulas Dec 2012 A1
20120317198 Patton et al. Dec 2012 A1
20120324491 Bathiche et al. Dec 2012 A1
20130003993 Michalski Jan 2013 A1
20130021269 Johnson et al. Jan 2013 A1
20130024888 Sivertsen Jan 2013 A1
20130028446 Krzyzanowski Jan 2013 A1
20130028573 Hoofien et al. Jan 2013 A1
20130031582 Tinsman et al. Jan 2013 A1
20130298146 D'Alessandro Jan 2013 A1
20130033542 Nakazawa Feb 2013 A1
20130036200 Roberts et al. Feb 2013 A1
20130039632 Feinson Feb 2013 A1
20130046847 Zavesky et al. Feb 2013 A1
20130054728 Amir et al. Feb 2013 A1
20130055321 Cline et al. Feb 2013 A1
20130061263 Issa et al. Mar 2013 A1
20130094830 Stone et al. Apr 2013 A1
20130117248 Bhogal et al. May 2013 A1
20130125181 Montemayor et al. May 2013 A1
20130129304 Feinson May 2013 A1
20130129308 Karn et al. May 2013 A1
20130173765 Korbecki Jul 2013 A1
20130177294 Kennberg Jul 2013 A1
20130202265 Arrasvuori et al. Aug 2013 A1
20130204710 Boland et al. Aug 2013 A1
20130219425 Swartz Aug 2013 A1
20130235152 Hannuksela et al. Sep 2013 A1
20130235270 Sasaki et al. Sep 2013 A1
20130254292 Bradley Sep 2013 A1
20130259442 Bloch et al. Oct 2013 A1
20130282917 Reznik et al. Oct 2013 A1
20130290818 Arrasvuori et al. Oct 2013 A1
20130328888 Beaver et al. Dec 2013 A1
20130330055 Zimmermann et al. Dec 2013 A1
20130335427 Cheung et al. Dec 2013 A1
20140015940 Yoshida Jan 2014 A1
20140019865 Shah Jan 2014 A1
20140025620 Greenzeiger et al. Jan 2014 A1
20140025839 Marko et al. Jan 2014 A1
20140040273 Cooper et al. Feb 2014 A1
20140040280 Slaney et al. Feb 2014 A1
20140046946 Friedmann et al. Feb 2014 A2
20140078397 Bloch et al. Mar 2014 A1
20140082666 Bloch et al. Mar 2014 A1
20140085196 Zucker et al. Mar 2014 A1
20140086445 Brubeck et al. Mar 2014 A1
20140094313 Watson et al. Apr 2014 A1
20140101550 Zises Apr 2014 A1
20140105420 Lee Apr 2014 A1
20140126877 Crawford et al. May 2014 A1
20140129618 Panie et al. May 2014 A1
20140136186 Adami et al. May 2014 A1
20140152564 Gulezian et al. Jun 2014 A1
20140156677 Collins, III et al. Jun 2014 A1
20140178051 Bloch et al. Jun 2014 A1
20140186008 Eyer Jul 2014 A1
20140194211 Chimes et al. Jul 2014 A1
20140210860 Caissy Jul 2014 A1
20140219630 Minder Aug 2014 A1
20140220535 Angelone Aug 2014 A1
20140237520 Rothschild et al. Aug 2014 A1
20140245152 Carter et al. Aug 2014 A1
20140270680 Bloch et al. Sep 2014 A1
20140279032 Roever et al. Sep 2014 A1
20140282013 Amijee Sep 2014 A1
20140282642 Needham et al. Sep 2014 A1
20140298173 Rock Oct 2014 A1
20140314239 Meyer et al. Oct 2014 A1
20140380167 Bloch et al. Dec 2014 A1
20150007234 Rasanen et al. Jan 2015 A1
20150012369 Dharmaji et al. Jan 2015 A1
20150015789 Guntur et al. Jan 2015 A1
20150020086 Chen et al. Jan 2015 A1
20150033266 Klappert et al. Jan 2015 A1
20150046946 Hassell et al. Feb 2015 A1
20150058342 Kim et al. Feb 2015 A1
20150063781 Silverman et al. Mar 2015 A1
20150067596 Brown et al. Mar 2015 A1
20150067723 Bloch et al. Mar 2015 A1
20150070458 Kim et al. Mar 2015 A1
20150104155 Bloch et al. Apr 2015 A1
20150106845 Popkiewicz et al. Apr 2015 A1
20150124171 King May 2015 A1
20150154439 Anzue et al. Jun 2015 A1
20150160853 Hwang et al. Jun 2015 A1
20150179224 Bloch et al. Jun 2015 A1
20150181271 Onno et al. Jun 2015 A1
20150181301 Bloch et al. Jun 2015 A1
20150185965 Belliveau et al. Jul 2015 A1
20150195601 Hahm Jul 2015 A1
20150199116 Bloch et al. Jul 2015 A1
20150201187 Ryo Jul 2015 A1
20150256861 Oyman Sep 2015 A1
20150258454 King et al. Sep 2015 A1
20150293675 Bloch et al. Oct 2015 A1
20150294685 Bloch et al. Oct 2015 A1
20150304698 Redol Oct 2015 A1
20150318018 Kaiser et al. Nov 2015 A1
20150331485 Wilairat et al. Nov 2015 A1
20150331933 Tocchini, IV et al. Nov 2015 A1
20150331942 Tan Nov 2015 A1
20150348325 Voss Dec 2015 A1
20160009487 Edwards et al. Jan 2016 A1
20160021412 Majeed et al. Jan 2016 A1
20160037217 Seok et al. Jan 2016 A1
20160057497 Kim et al. Feb 2016 A1
20160062540 Yang et al. Mar 2016 A1
20160065831 Howard et al. Mar 2016 A1
20160066051 Caidar et al. Mar 2016 A1
20160086585 Sugimoto Mar 2016 A1
20160094875 Peterson et al. Mar 2016 A1
20160099024 Gilley Apr 2016 A1
20160100226 Sadler et al. Apr 2016 A1
20160104513 Bloch et al. Apr 2016 A1
20160132203 Seto et al. May 2016 A1
20160142889 O'Connor et al. May 2016 A1
20160162179 Annett et al. Jun 2016 A1
20160170948 Bloch Jun 2016 A1
20160173944 Kilar et al. Jun 2016 A1
20160192009 Sugio et al. Jun 2016 A1
20160217829 Bloch et al. Jul 2016 A1
20160224573 Shahraray et al. Aug 2016 A1
20160232579 Fahnestock Aug 2016 A1
20160277779 Zhang et al. Sep 2016 A1
20160303608 Jossick Oct 2016 A1
20160321689 Turgeman Nov 2016 A1
20160322054 Bloch et al. Nov 2016 A1
20160323608 Bloch et al. Nov 2016 A1
20160337691 Prasad et al. Nov 2016 A1
20160365117 Boliek et al. Dec 2016 A1
20160366454 Tatourian et al. Dec 2016 A1
20170006322 Dury et al. Jan 2017 A1
20170041372 Hosur Feb 2017 A1
20170062012 Bloch et al. Mar 2017 A1
20170142486 Masuda May 2017 A1
20170178409 Bloch et al. Jun 2017 A1
20170178601 Bloch et al. Jun 2017 A1
20170195736 Chai et al. Jul 2017 A1
20170264920 Mickelsen Sep 2017 A1
20170286424 Peterson Oct 2017 A1
20170289220 Bloch et al. Oct 2017 A1
20170295410 Bloch et al. Oct 2017 A1
20170326462 Lyons et al. Nov 2017 A1
20170337196 Goela et al. Nov 2017 A1
20170345460 Bloch et al. Nov 2017 A1
20180007443 Cannistraro et al. Jan 2018 A1
20180014049 Griffin et al. Jan 2018 A1
20180025078 Quennesson Jan 2018 A1
20180048831 Berwick et al. Feb 2018 A1
20180068019 Novikoff et al. Mar 2018 A1
20180115592 Samineni Apr 2018 A1
20180130501 Bloch et al. May 2018 A1
20180176573 Chawla et al. Jun 2018 A1
20180191574 Vishnia et al. Jul 2018 A1
20180254067 Elder Sep 2018 A1
20180262798 Ramachandra Sep 2018 A1
20180314959 Apokatanidis et al. Nov 2018 A1
20190069038 Phillips Feb 2019 A1
20190069039 Phillips Feb 2019 A1
20190075367 van Zessen et al. Mar 2019 A1
20190090002 Ramadorai et al. Mar 2019 A1
20190098371 Keesan Mar 2019 A1
20190132639 Panchaksharaiah et al. May 2019 A1
20190166412 Panchaksharaiah et al. May 2019 A1
20190182525 Steinberg et al. Jun 2019 A1
20190238719 Alameh et al. Aug 2019 A1
20190335225 Fang et al. Oct 2019 A1
20190354936 Deluca et al. Nov 2019 A1
20200023157 Lewis et al. Jan 2020 A1
20200037047 Cheung et al. Jan 2020 A1
20200344508 Edwards et al. Oct 2020 A1
Foreign Referenced Citations (20)
Number Date Country
2639491 Mar 2010 CA
4038801 Jun 1992 DE
10053720 Apr 2002 DE
0965371 Dec 1999 EP
1033157 Sep 2000 EP
2104105 Sep 2009 EP
2359916 Sep 2001 GB
2428329 Jan 2007 GB
2003-245471 Sep 2003 JP
2008005288 Jan 2008 JP
20040005068 Jan 2004 KR
20100037413 Apr 2010 KR
WO-1996013810 May 1996 WO
WO-2000059224 Oct 2000 WO
WO-2007062223 May 2007 WO
WO-2007138546 Dec 2007 WO
WO-2008001350 Jan 2008 WO
WO-2008052009 May 2008 WO
WO-2008057444 May 2008 WO
WO-2009125404 Oct 2009 WO
Non-Patent Literature Citations (115)
Entry
U.S. Appl. No. 15/356,913, Systems and Methods for Real-Time Pixel Switching, filed Nov. 21, 2016.
U.S. Appl. No. 15/703,462 Published as US2018/0130501, Systems and Methods for Dynamic Video Bookmarking, filed Sep. 13, 2017.
U.S. Appl. No. 14/700,845 Published as US2016/0323608, Systems and Methods for Nonlinear Video Playback Using Linear Real-Time Video Players, filed Apr. 30, 2015.
U.S. Appl. No. 14/835,857 Published as US2017/0062012, Systems and Methods for Adaptive and Responsive Video, filed Aug. 26, 2015.
U.S. Appl. No. 15/085,209 Published as US2017/0289220, Media Stream Rate Synchronization, filed Mar. 30, 2016.
U.S. Appl. No. 15/189,931 U.S. Pat. No. 10,218,760 Published as US2017/0374120, Dynamic Summary Generation for Real-time Switchable Videos, filed Jun. 22, 2016.
U.S. Appl. No. 15/395,477 Published as US2018/0191574, Systems and Methods for Dynamic Weighting of Branched Video Paths, filed Dec. 30, 2016.
U.S. Appl. No. 15/863,191, Dynamic Library Display, filed Jan. 5, 2018.
U.S. Appl. No. 15/481,916, the Office Actions dated Oct. 6, 2017, Aug. 6, 2018 and Mar. 8, 2019.
U.S. Appl. No. 14/249,665, now U.S. Pat. No. 9,792,026, the Office Actions dated May 16, 2016 and Feb. 22, 2017; and the Notice of Allowance dated Jun. 2, 2017.
U.S. Appl. No. 14/884,285, the Office Actions dated Oct. 5, 2017 and Jul. 26, 2018.
U.S. Appl. No. 14/534,626, the Office Actions dated Nov. 25, 2015, Jul. 5, 2016, Jun. 5, 2017, Mar. 2, 2018 and Sep. 26, 2018.
U.S. Appl. No. 14/700,845, the Office Actions dated May 20, 2016, Dec. 2, 2016, May 22, 2017, Nov. 28, 2017, Jun. 27, 2018 and Feb. 19, 2019.
U.S. Appl. No. 14/835,857, the Office Actions dated Sep. 23, 2016, Jun. 5, 2017 and Aug. 9, 2018; the Advisory Action dated Oct. 20, 2017; and the Notice of Allowance dated Feb. 25, 2019.
U.S. Appl. No. 12/706,721 U.S. Pat. No. 9,190,110 Published as US2010/0293455, System and Method for Assembling a Recorded Composition, filed Feb. 17, 2010.
U.S. Appl. No. 14/884,285 Published as US2016/0170948, System and Method for Assembling a Recorded Composition, filed Oct. 15, 2015.
U.S. Appl. No. 13/033,916 U.S. Pat. No. 9,607,655 Published as US2011/0200116, System and Method for Seamless Multimedia Assembly, filed Feb. 24, 2011.
U.S. Appl. No. 13/034,645 Published as US2011/0202562, System and Method for Data Mining Within Interactive Multimedia, filed Feb. 24, 2011.
U.S. Appl. No. 13/437,164 U.S. Pat. No. 8,600,220 Published as US2013/0259442, Systems and Methods for Loading More Than One Video Content at a Time, filed Apr. 2, 2012.
U.S. Appl. No. 14/069,694 U.S. Pat. No. 9,271,015 Published as US2014/0178051, Systems and Methods for Loading More Than One Video Content at a Time, filed Nov. 1, 2013.
U.S. Appl. No. 13/622,780 U.S. Pat. No. 8,860,882 Published as US2014/0078397, Systems and Methods for Constructing Multimedia Content Modules, filed Sep. 19, 2012.
U.S. Appl. No. 13/622,795 U.S. Pat. No. 9,009,619, Published as US2014/0082666, Progress Bar for Branched Videos, filed Sep. 19, 2012.
U.S. Appl. No. 14/639,579 U.S. Pat. No. 10,474,334 Published as US2015/0199116, Progress Bar for Branched Videos, filed Mar. 5, 2015.
U.S. Appl. No. 13/838,830 U.S. Pat. No. 9,257,148 Published as US2014/0270680, System and Method for Synchronization of Selectably Presentable Media Streams, filed Mar. 15, 2013.
U.S. Appl. No. 14/984,821 U.S. Pat. No. 10,418,066 Published as US2016/0217829, System and Method for Synchronization of Selectably Presentable Media Streams, filed Dec. 30, 2015.
U.S. Appl. No. 13/921,536 U.S. Pat. No. 9,832,516 Published as US2014/0380167, Systems and Methods for Multiple Device Interaction with Selectably Presentable Media Streams, filed Jun. 19, 2013.
U.S. Appl. No. 14/107,600 U.S. Pat. No. 10,448,119 Published as US2015/0067723, Methods and Systems for Unfolding Video Pre-Roll, filed Dec. 16, 2013.
U.S. Appl. No. 14/335,381 U.S. Pat. No. 9,530,454 Published as US2015/0104155, Systems and Methods for Real-Time Pixel Switching, filed Jul. 18, 2014.
U.S. Appl. No. 14/139,996 U.S. Pat. No. 9,641,898 Published as US2015/0181301, Methods and Systems for In-Video Library, filed Dec. 24, 2013.
U.S. Appl. No. 14/140,007 U.S. Pat. No. 9,520,155 Published as US2015/0179224, Methods and Systems for Seeking to Non-Key Frames, filed Dec. 24, 2013.
U.S. Appl. No. 14/249,627 U.S. Pat. No. 9,653,115 Published as US2015/0294685, Systems and Methods for Creating Linear Video From Branched Video, filed Apr. 10, 2014.
U.S. Appl. No. 15/481,916 Published as US2017/0345460, Systems and Methods for Creating Linear Video From Branched Video, filed Apr. 7, 2017.
U.S. Appl. No. 16/986,977 Published as US2020/0365187, Systems and Methods for Creating Linear Video From Branched Video, filed Aug. 6, 2020.
U.S. Appl. No. 14/249,665 U.S. Pat. No. 9,792,026 Published as US2015/0293675, Dynamic Timeline for Branched Video, filed Apr. 10, 2014.
U.S. Appl. No. 14/509,700 U.S. Pat. No. 9,792,957 Published as US2016/0104513, Systems and Methods for Dynamic Video Bookmarking, filed Oct. 8, 2014.
U.S. Appl. No. 15/703,462 Published as US2018/0130501, Systems and Methods for Dynamic Video Bookmarking, filed Sep. 13, 2017.
U.S. Appl. No. 14/534,626 Published as US2016/0105724, Systems and Methods for Parallel Track Transitions, filed Nov. 6, 2014.
U.S. Appl. No. 14/700,845 U.S. Pat. No. 10,582,265 Published as US2016/0323608, Systems and Methods for Nonlinear Video Playback Using Linear Real-Time Video Players, filed Apr. 30, 2015.
U.S. Appl. No. 16/752,193 Published as US2020/0404382, Systems and Methods for Nonlinear Video Playback Using Linear Real-Time Video Players, filed Jan. 24, 2020.
U.S. Appl. No. 14/700,862 U.S. Pat. No. 9,672,868 Published as US2016/0322054, Systems and Methods for Seamless Media Creation, filed Apr. 30, 2015.
U.S. Appl. No. 14/835,857 U.S. Pat. No. 10,460,765 Published as US2017/0062012, Systems and Methods for Adaptive and Responsive Video, filed Aug. 26, 2015.
U.S. Appl. No. 14/978,464 Published as US2017/0178601, Intelligent Buffering of Large-Scale Video, filed Dec. 22, 2015.
U.S. Appl. No. 14/978,491 U.S. Pat. No. 11,128,853 Published as US2017/0178409, Seamless Transitions in Large-Scale Video, filed Dec. 22, 2015.
U.S. Appl. No. 17/403,703, Seamless Transitions in Large-Scale Video, filed Aug. 16, 2021.
U.S. Appl. No. 15/085,209 U.S. Pat. No. 10,462,202 Published as US2017/0289220, Media Stream Rate Synchronization, filed Mar. 30, 2016.
U.S. Appl. No. 15/165,373 Published as US2017/0295410, Symbiotic Interactive Video, filed May 26, 2016.
U.S. Appl. No. 15/189,931 U.S. Pat. No. 10,218,760 Published as US2017/0374120, Dynamic Summary Generation for Real-time Switchable Videos, filed Jun. 22, 2016.
U.S. Appl. No. 15/395,477 U.S. Pat. No. 11,050,809 Published as US2018/0191574, Systems and Methods for Dynamic Weighting of Branched Video Paths, filed Dec. 30, 2016.
U.S. Appl. No. 15/997,284 Published as US2019/0373330, Interactive Video Dynamic Adaptation and User Profiling, filed Jun. 4, 2018.
U.S. Appl. No. 15/863,191 U.S. Pat. No. 10,257,578, Dynamic Library Display for Interactive Videos, filed Jan. 5, 2018.
U.S. Appl. No. 16/283,066 U.S. Pat. No. 10,856,049 Published as US2019/0349637, Dynamic Library Display for Interactive Videos, filed Feb. 22, 2019.
Google Scholar search, “Inserting metadata inertion advertising video”, Jul. 16, 2021, 2 pages.
International Preliminary Report and Written Opinion of PCT/IL2012/000080 dated Aug. 27, 2013, 7 pages.
Marciel, M. et al., “Understanding the Detection of View Fraud in Video Content Portals”, (Feb. 5, 2016), Cornell University, pp. 1-13.
U.S. Appl. No. 15/356,913, Systems and Methods for Real-Time Pixel Switching, filed Nov. 21, 2016.
U.S. Appl. No. 14/249,627 U.S. Pat. No. 9,653,115 Published as US2015/0294685, Systems and Methods for Creating Linear Video From Branched Video, filed Apr. 10, 2014.
U.S. Appl. No. 16/986,977, Systems and Methods for Creating Linear Video From Branched Video, filed Aug. 6, 2020.
U.S. Appl. No. 14/509,700 U.S. Pat. No. 9,792,957 Published as US2016/0104513, Systems and Methods for Dynamic Video Bookmarking, filed Oct. 8, 2014.
U.S. Appl. No. 16/865,896, Systems and Methods for Dynamic Video Bookmarking, filed May 4, 2020.
U.S. Appl. No. 14/534,626, Published as US2016/0105724, Systems and Methods for Parallel Track Transitions, filed Nov. 6, 2014.
U.S. Appl. No. 16/752,193, Systems and Methods for Nonlinear Video Playback Using Linear Real-Time Video Players, filed Jan. 24, 2020.
U.S. Appl. No. 16/559,082 Published as US2019/0392868, Systems and Methods for Adaptive and Responsive Video, filed Sep. 3, 2019.
U.S. Appl. No. 16/800,994, Systems and Methods for Adaptive and Responsive Video, filed Feb. 25, 2020.
U.S. Appl. No. 14/978,491 Published as US2017/0178409, Seamless Transitions in Large-Scale Video, filed Dec. 22, 2015.
U.S. Appl. No. 12/706,721, now U.S. Pat. No. 9,190,110, the Office Actions dated Apr. 26, 2012, Aug. 17, 2012, Mar. 28, 2013, Jun. 20, 2013, Jan. 3, 2014, Jul. 7, 2014, and Dec. 19, 2014; the Notices of Allowance dated Jun. 19, 2015, Jul. 17, 2015, Jul. 29, 2015, Aug. 12, 2015, and Sep. 14, 2015.
U.S. Appl. No. 14/884,284, the Office Actions dated Sep. 8, 2017; May 18, 2018; Dec. 14, 2018; Jul. 25, 2019; Nov. 18, 2019 and Feb. 21, 2020.
U.S. Appl. No. 13/033,916, now U.S. Pat. No. 9,607,655, the Office Actions dated Jun. 7, 2013, Jan. 2, 2014, Aug. 28, 2014, Jan. 5, 2015, Jul. 9, 2015, and Jan. 5, 2016; the Advisory Action dated May 11, 2016; and the Notice of Allowance dated Dec. 21, 2016.
U.S. Appl. No. 13/034,645, the Office Actions dated Jul. 23, 2012, Mar. 21, 2013, Sep. 15, 2014, Jun. 4, 2015, Jul. 5, 2016, Apr. 7, 2017, Oct. 6, 2017, Aug. 10, 2018, Apr. 5, 2019 and Dec. 26, 2019.
U.S. Appl. No. 13/437,164, now U.S. Pat. No. 8,600,220, the Notice of Allowance dated Aug. 9, 2013.
U.S. Appl. No. 14/069,694, now U.S. Pat. No. 9,271,015, the Office Actions dated Apr. 27, 2015 and Aug. 31, 2015; and the Notice of Allowance dated Oct. 13, 2015.
U.S. Appl. No. 13/622,780, now U.S. Pat. No. 8,860,882, the Office Action dated Jan. 16, 2014; and the Notice of Allowance dated Aug. 4, 2014.
U.S. Appl. No. 13/622,795, now U.S. Pat. No. 9,009,619, the Office Actions dated May 23, 2014 and Dec. 1, 2014; and the Notice of Allowance dated Jan. 9, 2015.
U.S. Appl. No. 14/639,579, now U.S. Pat. No. 10,474,334, the Office Actions dated May 3, 2017, Nov. 22, 2017 and Jun. 26, 2018; and the Notices of Allowance dated Feb. 8, 2019 and Jul. 11, 2019.
U.S. Appl. No. 13/838,830, now U.S. Pat. No. 9,257,148, the Office Action dated May 7, 2015; and the Notice of Allowance dated Nov. 6, 2015.
U.S. Appl. No. 14/984,821, now U.S. Pat. No. 10,418,066, the Office Actions dated Jun. 1, 2017, Dec. 6, 2017, and Oct. 5, 2018; the Notice of Allowance dated May 7, 2019.
U.S. Appl. No. 13/921,536, now U.S. Pat. No. 9,832,516, the Office Actions dated Feb. 25, 2015, Oct. 20, 2015, Aug. 26, 2016 and Mar. 8, 2017; the Advisory Action dated Jun. 21, 2017; and the Notice of Allowance dated Sep. 12, 2017.
U.S. Appl. No. 14/107,600, now U.S. Pat. No. 10,448,119, the Office Actions dated Dec. 19, 2014, Jul. 8, 2015, Jun. 3, 2016, Mar. 8, 2017, Oct. 10, 2017 and Jul. 25, 2018, and the Notices of Allowance dated Dec. 31, 2018 and Apr. 25, 2019.
U.S. Appl. No. 14/335,381, now U.S. Pat. No. 9,530,454, the Office Action dated Feb. 12, 2016; and the Notice of Allowance dated Aug. 24, 2016.
U.S. Appl. No. 14/139,996, now U.S. Pat. No. 9,641,898, the Office Actions dated Jun. 18, 2015, Feb. 3, 2016 and May 4, 2016; and the Notice of Allowance dated Feb. 23, 2016.
U.S. Appl. No. 14/140,007, now U.S. Pat. No. 9,520,155, the Office Actions dated Sep. 8, 2015 and Apr. 26, 2016; and the Notice of Allowance dated Oct. 11, 2016.
U.S. Appl. No. 14/249,627, now U.S. Pat. No. 9,653,115, the Office Actions dated Jan. 14, 2016 and Aug. 9, 2016; and the Notice of Allowance dated Jan. 13, 2017.
U.S. Appl. No. 15/481,916, the Office Actions dated Oct. 6, 2017, Aug. 6, 2018, Mar. 8, 2019, Nov. 27, 2019, and the Notice of Allowance dated Apr. 21, 2020.
U.S. Appl. No. 14/249,665, now U.S. Pat. No. 9,792,026, the Office Actions dated May 16, 2016 and Feb. 22, 2017; and the Notices of Allowance dated Jun. 2, 2017 and Jul. 24, 2017.
U.S. Appl. No. 14/509,700, now U.S. Pat. No. 9,792,957, the Office Action dated Oct. 28, 2016; and the Notice of Allowance dated Jun. 15, 2017.
U.S. Appl. No. 15/703,462, the Office Actions dated Jun. 21, 2019 and Dec. 27, 2019; and the Notices of Allowance dated Feb. 10, 2020 and May 14, 2020.
U.S. Appl. No. 14/534,626, the Office Actions dated Nov. 25, 2015, Jul. 5, 2016, Jun. 5, 2017, Mar. 2, 2018, Sep. 26, 2018, May 8, 2019, Dec. 27, 2019, and Aug. 19, 2020.
U.S. Appl. No. 14/700,845, now U.S. Pat. No. 10,582,265, the Office Actions dated May 20, 2016, Dec. 2, 2016, May 22, 2017, Nov. 28, 2017, Jun. 27, 2018 and Feb. 19, 2019; and the Notice of Allowance dated Oct. 21, 2019.
U.S. Appl. No. 14/700,862, now U.S. Pat. No. 9,672,868, the Office Action dated Aug. 26, 2016; and the Notice of Allowance dated Mar. 9, 2017.
U.S. Appl. No. 14/835,857, now U.S. Pat. No. 10,460,765, the Office Actions dated Sep. 23, 2016, Jun. 5, 2017 and Aug. 9, 2018; the Advisory Action dated Oct. 20, 2017; and the Notices of Allowance dated Feb. 25, 2019 and Jun. 7, 2019.
U.S. Appl. No. 16/559,082, the Office Actions dated Feb. 2, 2020 and Jul. 23, 2020.
U.S. Appl. No. 16/800,994, the Office Action dated Apr. 15, 2020.
U.S. Appl. No. 14/978,464, the Office Actions dated Sep. 8, 2017, May 18, 2018, Dec. 14, 2018, Jul. 25, 2019, Nov. 18, 2019 and Jul. 23, 2020.
U.S. Appl. No. 14/978,491, the Office Actions dated Sep. 8, 2017, May 25, 2018, Dec. 14, 2018, Aug. 12, 2019, Dec. 23, 2019 and Jul. 23, 2020.
U.S. Appl. No. 15/085,209, now U.S. Pat. No. 10,462,202, the Office Actions dated Feb. 26, 2018 and Dec. 31, 2018; and the Notice of Allowance dated Aug. 12, 2019.
U.S. Appl. No. 15/165,373, the Office Actions dated Mar. 24, 2017, Oct. 11, 2017, May 18, 2018, Feb. 1, 2019, Aug. 8, 2019, Jan. 3, 2020 and Jun. 11, 2020.
U.S. Appl. No. 15/189,931, now U.S. Pat. No. 10,218,760, the Office Action dated Apr. 6, 2018, and the Notice of Allowance dated Oct. 24, 2018.
U.S. Appl. No. 15/395,477, the Office Actions dated Nov. 2, 2018, Apr. 15, 2019 and Aug. 16, 2019.
U.S. Appl. No. 15/997,284, the Office Actions dated Aug. 1, 2019, Nov. 21, 2019, and Apr. 28, 2020.
U.S. Appl. No. 15/863,191, now U.S. Pat. No. 10,257,578, the Notices of Allowance dated Jul. 5, 2018 and Nov. 23, 2018.
U.S. Appl. No. 16/283,066, the Office Action dated Jan. 6, 2020.
U.S. Appl. No. 16/591,103, the Office Action dated Apr. 22, 2020.
An ffmpeg and SDL Tutorial, "Tutorial 05: Synching Video," retrieved from the Internet on Mar. 15, 2013: <http://dranger.com/ffmpeg/tutorial05.html>, (4 pages).
Archos Gen 5 English User Manual Version 3.0, Jul. 26, 2007, pp. 1-81.
Bartlett, Mitch, (Oct. 6, 2008) "iTunes 11: How to Queue Next Song," Technipages, pp. 1-8, retrieved on Dec. 26, 2013 from the Internet: http://www.technipages.com/itunes-queue-next-song.html.
International Search Report for International Application PCT/IL2010/000362 dated Aug. 25, 2010.
International Search Report and Written Opinion for International Patent Application PCT/IB2013/001000 dated Jul. 31, 2013 (11 pages).
International Search Report for International Patent Application PCT/IL2012/000080 dated Aug. 9, 2012 (4 pages).
International Search Report for International Patent Application PCT/IL2012/00081 dated Jun. 28, 2012 (4 pages).
iTunes 11: How to Queue Next Song, published Oct. 6, 2008, pp. 1-8.
Labs.byHook: "Ogg Vorbis Encoder for Flash: Alchemy Series Part 1," [Online] Internet Article, retrieved on Dec. 17, 2012 from the Internet: URL:http://labs.byhook.com/2011/02/15/ogg-vorbis-encoder-for-flash-alchemy-series-part-1/, 2011, (6 pages).
Miller, Gregor et al., (Sep. 3, 2009) "MiniDiver: A Novel Mobile Media Playback Interface for Rich Video Content on an iPhone™", Entertainment Computing - ICEC 2009, pp. 98-109.
Sodagar, I., (2011) “The MPEG-DASH Standard for Multimedia Streaming Over the Internet”, IEEE Multimedia, IEEE Service Center, New York, NY US, vol. 18, No. 4, pp. 62-67.
Supplemental Search Report for PCT/IL2010/000362 dated Jun. 28, 2012.
Supplemental European Search Report for EP13184145 dated Jan. 30, 2014 (5 pages).
Yang, H. et al., "Time Stamp Synchronization in Video Systems," Teletronics Technology Corporation, <http://www.ttcdas.com/products/daus_encoders/pdf_tech_papers/tp_2010_time_stamp_video_system.pdf>, Abstract, (8 pages).
Related Publications (1)
Number Date Country
20160170948 A1 Jun 2016 US
Continuations (1)
Number Date Country
Parent 12706721 Feb 2010 US
Child 14884285 US