Dynamic summary generation for real-time switchable videos

Information

  • Patent Grant
  • Patent Number
    10,218,760
  • Date Filed
    Wednesday, June 22, 2016
  • Date Issued
    Tuesday, February 26, 2019
Abstract
In viewing a media presentation having multiple streams or tracks running in parallel and synchronized to a common playback timeline, a user experiencing one of the streams will miss interesting events and other content occurring in other streams. Accordingly, upon receiving an instruction to switch from a first stream to a second stream, a summary of the second stream is dynamically generated based on content that the user missed while watching the first stream. The summary is presented to the user prior to transitioning to presentation of the second stream.
Description
FIELD OF THE INVENTION

The present disclosure relates generally to media presentations and, more particularly, to systems and methods for dynamically generating summarized versions of video content missed by a user while the user is viewing one or more different video streams.


BACKGROUND

Streaming media presentations, such as online videos, often include separate audio and video streams, or tracks. Some of these media presentations also include multiple video and/or audio streams that are played in parallel, so that a user can switch among the streams and continue playback of a new stream at the same point in time where he or she left off in the previous stream. However, if the user is viewing only one stream while multiple streams are playing in parallel, he or she will not see or hear the content in the other streams without watching every stream separately. The user must then spend unnecessary time and effort to feel that he or she has experienced all available content.


SUMMARY

Systems and methods for dynamically generating summaries of media content are disclosed. In one aspect, a plurality of video streams that are synchronized to a common playback timeline are simultaneously received at, e.g., a video player application. The video streams can include content related to each other, such as a common storyline. At some point during the presentation of one of the video streams to the user, an instruction to switch to a second stream is received. Prior to switching to the second stream, a video summary of the second video stream is generated and presented to the user. The video summary is generated based on content in the second video stream that the user missed while watching the first video stream.


The video summary can be presented to the user directly after presentation of the first stream is stopped, and directly before presentation of the second stream begins. The video summary can also include a summary of content that the user will miss in the second stream while watching the video summary. The video summary can generally include content not previously viewed by the user.


In some implementations, the video summary includes video content from the second video stream presented at an increased playback speed. In other implementations, the video summary includes a selection of still images of video content in the second video stream. In further implementations, the video summary includes content from the second video stream previously designated as important or interesting.


In one implementation, each video stream is composed of multiple segments, each having an associated segment summary video and segment idle video. When generating the video summary, the segment summary videos associated with missed content segments in the second stream can be included in the video summary. Where padding is needed, one or more idle videos associated with a missed segment can be included in the summary as well.


In yet another implementation, the video summary is switched away from while it is being presented. If and when the user switches back to the second stream, a newly generated summary can include a summary of content corresponding to the portion of the previous video summary that was not viewed by the user.


In one implementation, an indicator can be presented that informs the user when an interesting event is occurring or has occurred on a video stream not currently being viewed by the user.


Other aspects of the invention include corresponding systems and computer-readable media. The various aspects and advantages of the invention will become apparent from the following drawings, detailed description, and claims, all of which illustrate the principles of the invention, by way of example only.





BRIEF DESCRIPTION OF THE DRAWINGS

A more complete appreciation of the invention and many attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings. In the drawings, like reference characters generally refer to the same parts throughout the different views. Further, the drawings are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the invention.



FIG. 1 depicts a high-level system architecture according to an implementation.



FIG. 2 depicts an example method for dynamically generating a video summary.



FIG. 3 depicts an example summary generation over the timeline of a parallel track media presentation.



FIG. 4A depicts example segments and associated summary and idle content in a parallel track media presentation.



FIG. 4B depicts an example summary generation for the presentation of FIG. 4A.



FIGS. 5A-5D depict an example summary generation when switching tracks during presentation of a summary.



FIG. 6 depicts example visual indicators of interesting events on different tracks.





DETAILED DESCRIPTION

Described herein are various implementations of methods and supporting systems for dynamically generating summaries of content in media presentations. The techniques described herein can be implemented in any suitable hardware or software. If implemented as software, the processes can execute on a system capable of running one or more custom operating systems or commercial operating systems such as the Microsoft Windows® operating systems, the Apple OS X® operating systems, the Apple iOS® platform, the Google Android™ platform, the Linux® operating system and other variants of UNIX® operating systems, and the like. The software can be implemented on a general purpose computing device in the form of a computer including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit.



FIG. 1 depicts an example system for content summary generation according to an implementation. A media presentation having multiple video and/or audio streams can be presented to a user on a user device 110 having an application 112 capable of playing and/or editing the content. The user device 110 can be, for example, a smartphone, tablet, laptop, palmtop, wireless telephone, television, gaming device, virtual reality headset, music player, mobile telephone, information appliance, workstation, smart or dumb terminal, network computer, personal digital assistant, wireless device, minicomputer, mainframe computer, or other computing device that is operated as a general purpose computer or a special purpose hardware device that can execute the functionality described herein. The user device 110 can have output functionality (e.g., display monitor, touchscreen, image projector, etc.) and input functionality (e.g., touchscreen, keyboard, mouse, remote control, etc.).


The system can include a plurality of software modules stored in a memory and executed on one or more processors. The modules can be in the form of a suitable programming language, which is converted to machine language or object code to allow the processor or processors to read the instructions. The software can be in the form of a standalone application, implemented in any suitable programming language or framework.


The application 112 can be a video player and/or editor that is implemented as a native application, web application, or other form of software. In some implementations, the application 112 is in the form of a web page, widget, and/or Java, JavaScript, .Net, Silverlight, Flash, and/or other applet or plug-in that is downloaded to the user device 110 and runs in conjunction with a web browser. The application 112 and the web browser can be part of a single client-server interface; for example, the application 112 can be implemented as a plugin to the web browser or to another framework or operating system. Any other suitable client software architecture, including but not limited to widget frameworks and applet technology can also be employed.


Media content can be provided to the user device 110 by content server 102, which can be a web server, media server, a node in a content delivery network, or other content source. In some implementations, the application 112 (or a portion thereof) is provided by application server 106. For example, some or all of the described functionality of the application 112 can be implemented in software downloaded to or existing on the user device 110 and, in some instances, some or all of the functionality exists remotely. For example, certain video encoding and processing functions can be performed on one or more remote servers, such as application server 106. In some implementations, the user device 110 serves only to provide output and input functionality, with the remainder of the processes being performed remotely.


The user device 110, content server 102, application server 106, and/or other devices and servers can communicate with each other through communications network 114. The communication can take place via any media such as standard telephone lines, LAN or WAN links (e.g., T1, T3, 56 kb, X.25), broadband connections (ISDN, Frame Relay, ATM), wireless links (802.11, Bluetooth, GSM, CDMA, etc.), and so on. The network 114 can carry TCP/IP protocol communications and HTTP/HTTPS requests made by a web browser, and the connection between clients and servers can be communicated over such TCP/IP networks. The type of network is not a limitation, however, and any suitable network can be used.


Method steps of the techniques described herein can be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps can also be performed by, and apparatus of the invention can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Modules can refer to portions of the computer program and/or the processor/special circuitry that implements that functionality.


Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. One or more memories can store media assets (e.g., audio, video, graphics, interface elements, and/or other media files), configuration files, and/or instructions that, when executed by a processor, form the modules, engines, and other components described herein and perform the functionality associated with the components. The processor and the memory can be supplemented by, or incorporated in special purpose logic circuitry.


It should also be noted that the present implementations can be provided as one or more computer-readable programs embodied on or in one or more articles of manufacture. The article of manufacture can be any suitable hardware apparatus, such as, for example, a floppy disk, a hard disk, a CD-ROM, a CD-RW, a CD-R, a DVD-ROM, a DVD-RW, a DVD-R, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. In general, the computer-readable programs can be implemented in any programming language. The software programs can be further translated into machine language or virtual machine instructions and stored in a program file in that form. The program file can then be stored on or in one or more of the articles of manufacture.


The media presentations referred to herein can be structured in various forms. For example, a particular media presentation can be an online streaming video having one video stream and one audio stream. In other implementations, a media presentation includes multiple tracks or streams that a user can switch among in real-time or near real-time. For example, a media presentation can be structured using parallel audio and/or video tracks as described in U.S. patent application Ser. No. 14/534,626, filed on Nov. 6, 2014, and entitled “Systems and Methods for Parallel Track Transitions,” the entirety of which is incorporated by reference herein. For example, a playing video file or stream can have one or more parallel tracks that can be switched among in real-time automatically and/or based on user interactions. In some implementations, such switches are made seamlessly and substantially instantaneously, such that the audio and/or video of the playing content can continue without any perceptible delays, gaps, or buffering. In further implementations, switches among tracks maintain temporal continuity; that is, the tracks can be synchronized to a common timeline so that there is continuity in audio and/or video content when switching from one track to another (e.g., the same song is played using different instruments on different audio tracks; same storyline performed by different characters on different video tracks, and the like).


To facilitate near-instantaneous switching among parallel tracks, multiple media tracks (e.g., video streams) can be downloaded simultaneously to user device 110. When a user selects a streaming video to play, a video player typically buffers an upcoming portion of the video stream prior to commencing playback, and can continue buffering as the video plays. Accordingly, in one implementation, if an upcoming segment of a video presentation (including the beginning of the presentation) includes two or more parallel tracks, the application 112 (e.g., a video player) can initiate download of the upcoming parallel tracks substantially simultaneously. The application 112 can then simultaneously receive and/or retrieve video data portions of each track. The receipt and/or retrieval of upcoming video portions of each track can be performed prior to the playing of any particular parallel track as well as during the playing of a parallel track. The downloading of video data in parallel tracks can be achieved in accordance with smart downloading techniques such as those described in U.S. Pat. No. 8,600,220, issued on Dec. 3, 2013, and entitled “Systems and Methods for Loading More than One Video Content at a Time,” the entirety of which is incorporated by reference herein.
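As an illustration only (the patent does not prescribe an implementation), a player might kick off downloads for all parallel tracks at once so that any of them can be switched to without delay. The names `TrackManifest`, `bufferParallelTracks`, and `bufferAhead` below, and the segment-URL layout, are assumptions for the sketch:

```typescript
// Hypothetical sketch: prefetch upcoming segments for every parallel track
// simultaneously so a switch to any track can proceed without buffering.
interface TrackManifest {
  trackId: string;
  segmentUrls: string[]; // ordered upcoming segments on this track
}

async function bufferParallelTracks(
  trackManifests: TrackManifest[],
  bufferAhead: number // how many upcoming segments to prefetch per track
): Promise<Map<string, ArrayBuffer[]>> {
  const buffered = new Map<string, ArrayBuffer[]>();
  await Promise.all(
    trackManifests.map(async (track) => {
      const urls = track.segmentUrls.slice(0, bufferAhead);
      const data = await Promise.all(
        urls.map((url) => fetch(url).then((r) => r.arrayBuffer()))
      );
      buffered.set(track.trackId, data);
    })
  );
  return buffered;
}
```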


Upon reaching a segment of the video presentation that includes parallel tracks, the application 112 makes a determination of which track to play. The determination can be based on, for example, an interaction made or option selected by the user during a previous video segment, during a previous playback of a pre-recorded video presentation, prior to playing the video, and so on. Based on this determination, the current track either continues to play or the application 112 switches to a parallel track.


In some implementations, multiple users can view a media presentation simultaneously. For example, several users on different user devices can view a parallel track multimedia presentation in synchronization, and each user can have a unique experience by switching between tracks (e.g., content representing different plots, points of view, people, locations, commercials, etc.) at different times. In one example, three users (user 1, user 2, and user 3) view in sync a ten-minute media presentation that includes parallel tracks A, B, and C. Over the ten minutes, user 1 can view content from the tracks in the sequence B, A, B, C, B, A; while at the same time user 2 can view content from the tracks in the sequence A, C, B, A; while at the same time user 3 can view content only from track A. Ultimately, each user can simultaneously experience a presentation unique to them.


In other implementations, the media presentation takes the form of an interactive video based on a video tree, hierarchy, or other structure. Such interactive video structures are described in U.S. patent application Ser. No. 14/978,491, filed on Dec. 22, 2015, and entitled “Seamless Transitions in Large-Scale Video,” the entirety of which is incorporated by reference herein.



FIG. 2 depicts one implementation of a method for dynamic summary generation of media content. In STEP 202, multiple tracks or media streams (e.g., audiovisual content) are received in parallel by application 112. As described above, the streams can be parallel tracks that are synchronized to a common playback timeline. One of the streams (e.g., selected by the user or automatically) is presented to the user (STEP 204). The stream being presented can be visible to the user, while other parallel streams, though continuing to proceed according to the common timeline, are hidden/not heard. During playback of the currently viewed stream, application 112 can determine that the stream should be switched to a different parallel stream (STEP 206). The determination can be based on, for example, receiving an interaction or instruction from the user viewing the stream (e.g., interacting with the interface of the application 112 or content in the stream itself in order to make a selection resulting in the switch), or can be an automatic switch made by application 112 (e.g., at a predefined switchover time).


In STEP 208, a video summary is generated of content previously missed by the user in the new stream being switched to. Thus, for example, if the user had viewed Stream A between time 0 and time 10, and switched to Stream B at time 10, the generated summary would include Stream B content that had occurred between time 0 and time 10. The video summary can be generated on the client side or server side, for example, by application 112 and/or application server 106. To provide a seamless presentation, the video summary can be generated prior to a switch occurring. For example, video summaries of streams not being viewed can be proactively generated, with the expectation that the user may switch to one of those streams. The video summary can further be generated while one or more streams are being downloaded. In another implementation, the video summary can be generated in part or in full after the instruction for the switch is received. To increase responsiveness, a starting portion of the video summary can be generated and presented to the user and, while that portion is being presented, the remainder of the summary can be generated in real time. The generated summary is then presented in part or in full to the user (STEP 210) prior to switching to the new stream. In some implementations, the user can disable summary generation and proceed directly to the switched-to track without interruption.
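As a sketch of the bookkeeping this step implies, the player can record when each stream was last visible and derive the interval to summarize at switch time; `lastViewedAt` and `intervalToSummarize` are illustrative names, not from the patent:

```typescript
// Illustrative: the interval of the switched-to stream to summarize runs
// from the time the user last saw that stream (or its beginning) up to the
// time of the switch.
interface MissedInterval {
  start: number; // seconds on the common playback timeline
  end: number;
}

const lastViewedAt = new Map<string, number>(); // streamId -> last visible time

function intervalToSummarize(streamId: string, switchTime: number): MissedInterval {
  const start = lastViewedAt.get(streamId) ?? 0; // beginning if never viewed
  return { start, end: switchTime };
}

// Example: the user watched Stream A from time 0 to 10, then switches to
// Stream B. intervalToSummarize("B", 10) -> { start: 0, end: 10 }
```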


In some implementations, the user can switch back to the previously viewed stream (e.g., Stream A) or another stream (e.g., Stream C) while the video summary (e.g., of Stream B) is being presented (STEP 212). As will be described further below, this creates the need to track which portions of the summary have and have not been viewed (STEP 214) so that, if the user returns to that stream (e.g., Stream B), the newly generated summary will include the summarized content the user missed by switching away from the previous summary before it finished. If, on the other hand, the video summary plays in full, the switched-to stream (e.g., Stream B) then begins to be presented to the user (STEP 216). The process continues when the stream is next switched, if at all (return to STEP 206). When the presentation ends (or, in some instances, at any point in the presentation), the user can be provided with an opportunity to view summaries for some or all missed content.



FIG. 3 depicts an example parallel track video having two streams, Track A and Track B, each synchronized to a common timeline. For convenience, the common timeline is depicted as four timelines, representing the visible playback of video 300 as it transitions among video content from Track A, Track B, Track A Summary, and Track B Summary. In general, when generating a video summary for a particular stream, the summary covers the content occurring between the time when the user last viewed the stream (or the beginning of the stream, if it has not yet been viewed) and the time when the particular stream will start being presented to the user. The summary can have a minimum and/or maximum length of time. Of note, when the user switches to a different stream and a video summary is shown, the video summary overlaps content in the stream that would have been presented had the switch been immediate. Thus, in some implementations, the summary covers content in this overlapping portion as well. This situation is depicted in FIG. 3, where Track B summary 315 includes content 310 between B Sum Start (the starting time in Track B of content not seen by the user) and B Sum End (the time at which the Track B summary 315 will end).
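This overlap creates a small fixed-point problem: the summary must also cover the content that elapses while the summary itself plays. Under the simplifying assumption of a constant compression ratio r (one second of summary covering r seconds of stream content), the required summary length L satisfies L = (missed + L) / r, which gives L = missed / (r - 1). A minimal sketch:

```typescript
// Sketch under an assumed constant compression ratio: each second of summary
// covers r seconds of stream content, and the summary must cover the missed
// span plus its own playback time: L = (missed + L) / r => L = missed / (r - 1).
function summaryLength(missedSeconds: number, compressionRatio: number): number {
  if (compressionRatio <= 1) {
    throw new Error("compression ratio must exceed 1x for the summary to catch up");
  }
  return missedSeconds / (compressionRatio - 1);
}

// Example: 40 s of missed Track B content at 5x compression yields a
// 40 / (5 - 1) = 10 s summary, which also covers the 10 s of Track B that
// play out while the user watches it (50 s of content in total).
```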


In further detail, FIG. 3 depicts video content in Track A being viewed by the user from the beginning of the timeline to the point in time labeled “Switch from A to B,” at which point there is an intent to switch from viewing Track A to viewing Track B. At that time, then, the playing video 300 transitions to a video summary of unseen content 310 from Track B (Track B summary 315), which includes the content that occurred while the user was watching Track A, as well as content in Track B that will occur while the Track B summary 315 is being shown. The length of the video summary 315 can be determined by, for example, identifying and adding up the length of individual portions of the summary 315 that will be compiled into the full summary 315 for presentation to the user. In the event that more time is allocated to the summary than is needed, idle videos (discussed further below) or other filler can be included in the summary.


Following the completion of playback of Track B summary 315, the playing video 300 transitions to Track B. The time that Track B summary 315 is presented to the user is also time during which Track A is not viewed. Accordingly, upon the user switching back to Track A, the video summary (Track A Summary 325) created for Track A includes content 320 from A Sum Start to A Sum End, which includes content occurring in Track A during playback of Track B summary 315, content occurring while presenting Track B, and content occurring while the user views Track A summary 325. Then, following presentation of Track A summary 325, the playing video 300 continues with presentation of Track A.


Various techniques for generating a summary are contemplated. In one implementation, the generation of a video summary is fully automated. In this instance, the summary of missed video content can include a version of the missed content presented at an increased speed (i.e., visibly fast-forwarded at 1.5×, 2×, 3×, or some other multiplier). The playback multiplier that is applied to the missed content can be constant or can vary (e.g., slower during more interesting parts). In one implementation, the playback multiplier is calculated to be a value that results in the summary being a particular length (e.g., a multiplier of 5× can be used to condense a 10-second video segment to 2 seconds). In another instance, the video summary includes individual static images from the missed content (e.g., individual frames or screenshots). The images selected can be random or predefined (e.g., designated by the video content creator), and there can be a random or fixed number of images selected per time period. The length of time between pictures and the length of time a picture stays on screen, as well as the minimum and/or maximum length of the summary itself, can be set by a content creator or content provider, or can be configurable within the video player.
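For the fixed-length variant of the fast-forward approach, the multiplier follows directly from the missed and target durations (e.g., 10 seconds condensed into 2 seconds gives 5×). The clamping bounds in this sketch are illustrative assumptions echoing the example multipliers above:

```typescript
// Illustrative: choose the fast-forward multiplier so the missed content
// fits a target summary length, clamped to an assumed usable range.
function playbackMultiplier(
  missedSeconds: number,
  targetSummarySeconds: number,
  minMultiplier = 1.5, // assumed floor: below this it barely reads as a summary
  maxMultiplier = 8 // assumed ceiling: beyond this, still images may serve better
): number {
  const raw = missedSeconds / targetSummarySeconds; // e.g. 10 s / 2 s = 5x
  return Math.min(maxMultiplier, Math.max(minMultiplier, raw));
}
```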


In another implementation, the generation of a video summary is semi-automated. More specifically, interesting or important portions (e.g., still images, video clips, events, scenes, etc.) of the video content can be designated in advance by a content creator or other party. Then, when the video summary is generated, some or all of the designated portions are automatically compiled together for presentation to the user.


In a further implementation, video summary generation involves a more manual process, in which a specific summary is pre-constructed for each portion of a video stream. In one instance, for example, each video stream is divided into units referred to as “segments.” When the content for the video stream is created or compiled by a content creator, editor, or other third party, a separate summary video can also be provided and associated with each segment in the stream (or some of the segments). Thus, for example, a 30-second video segment can have an associated 3-second summary. Then, if missed content includes all or a portion of that 30-second segment, the corresponding segment summary can be included in the full video summary generated for the missed content. If the missed content includes multiple segments, multiple corresponding segment summaries can be combined together for the full summary.
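A sketch of that compilation step, assuming each segment carries optional pre-authored summary metadata; the shapes and names below are illustrative:

```typescript
// Illustrative: collect the pre-authored summary clips for every segment
// that overlaps the missed interval, in timeline order.
interface Segment {
  start: number; // segment start on the common timeline, in seconds
  end: number;
  summaryUrl?: string; // pre-authored segment summary, if one was provided
}

function collectSegmentSummaries(
  segments: Segment[],
  missedStart: number,
  missedEnd: number
): string[] {
  return segments
    .filter((s) => s.end > missedStart && s.start < missedEnd) // overlaps missed span
    .map((s) => s.summaryUrl)
    .filter((url): url is string => url !== undefined); // skip segments without summaries
}
```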


In some implementations, one or more segments also include a respective idle video that can serve as filler content. An idle video can include predefined video content showing a loop or other content in which nothing significant occurs. The idle content can be related to content in the segment. For example, if the segment includes video of a soldier in a battle, the idle content can depict the soldier taking cover behind a wall and breathing heavily for a period of time. Idle videos are useful where stream playback begins at segment boundaries with segments of fixed (though not necessarily equal) lengths. When a switch between streams occurs and a video summary is generated and presented, the summary may be of a length such that its end does not correspond with the end of a segment. In other words, the summary ends before the beginning of the next segment. In this instance, an idle video can be used to pad the length of the video summary (repetitively, if needed) such that the video stream commences with the start of the next segment directly after completion of the video summary. In other implementations, if the presentation of a summary will result in only a small intrusion into the next segment (e.g., less than 3 seconds), rather than padding the summary with idle content, the summary can nonetheless be shown, and the user will miss a small portion of the beginning of that segment's content.
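A sketch of the boundary alignment described above, treating the 3-second intrusion tolerance mentioned in the text as a configurable assumption:

```typescript
// Illustrative: land the compiled summary on the next segment boundary,
// either by padding with looped idle video or by accepting a small
// intrusion into the next segment.
interface SummaryPlan {
  idlePaddingSeconds: number; // idle filler to append (0 if none)
  intrusionSeconds: number; // start of the next segment the user will miss
}

function alignToBoundary(
  summaryEnd: number, // timeline position where the compiled summary ends
  nextSegmentStart: number,
  intrusionTolerance = 3 // assumed tolerance, from the example in the text
): SummaryPlan {
  const gap = nextSegmentStart - summaryEnd;
  if (gap >= 0) {
    // Summary ends early: loop idle video so the next segment begins
    // directly after the summary completes.
    return { idlePaddingSeconds: gap, intrusionSeconds: 0 };
  }
  if (-gap <= intrusionTolerance) {
    // Small overshoot: show the summary anyway; the user misses a few
    // seconds at the start of the next segment.
    return { idlePaddingSeconds: 0, intrusionSeconds: -gap };
  }
  // Larger overshoots are not addressed by the text; one option (an
  // assumption here) would be to trim the summary or target a later boundary.
  throw new Error("summary overshoots the segment boundary beyond tolerance");
}
```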


For audio content in a video summary, a predetermined or random selection of audio can be provided (e.g., background music, narration, dialogue or sound effects, etc.). In the case of manually generated summaries, the segment summary can include a prerecorded audio track. In other implementations, the audio for a particular portion of the summary is the same audio associated with the corresponding content in the full version of the content.



FIG. 4A depicts a video presentation with two parallel tracks, Track A and Track B. Track A includes three segments: A1, A2, and A3; and Track B includes four segments: B1, B2, B3, and B4. Each segment, except for A3, has associated, predefined summary and idle content. As depicted, summary A1 and idle A1 are associated with segment A1; summary B2 and idle B2 are associated with segment B2; and so on. In one example, segment A1 is a two-minute audiovisual presentation of a couple sitting at a table in a restaurant and having a conversation. In this case, summary A1 is a ten-second video clip highlighting the important parts of the discussion, and idle A1 depicts the couple eating without talking for one minute. As another example, segment B3 is a five-minute audiovisual presentation of a man walking down a crowded street, meeting and talking with various people. In this case, summary B3 includes twenty seconds of the man's encounters with notable persons and other interesting events, and idle B3 depicts the man continuously walking without speaking.



FIG. 4B depicts two examples of a user watching the video presentation of FIG. 4A and switching between Tracks A and B at different times during playback. In Example 1, playback begins with Track A as the selected stream and, thus, segment A1 is played first. At time 401, the end of segment A1, a user selection implicates Track B, and playback begins to switch to Track B. As shown in FIG. 4A, segment A1 plays for the entire length of segment B1, as well as part of segment B2. At time 401, the next segment start in Track B is segment B3; however, summary B1 and summary B2 combined are not long enough to reach the start of segment B3. Accordingly, to maintain a smooth experience for the user without gaps in playback, idle content (here, idle B1) is included at the beginning of the Track B combined summary as filler, so that the remaining time in segment B2 is filled and segment B3 begins directly after the conclusion of the Track B summary.


Example 2 in FIG. 4B begins with the presentation of segment A1. At time 402, prior to the completion of segment A1, a user selection implicates Track B. As segment B2 has not yet been reached, summary B1 is all that is shown prior to continuing playback on Track B. In this instance, there is minimal empty space between time 402 and the beginning of segment B2 after inserting summary B1 (e.g., <1.5 sec), so it is not necessary to include idle content. Playback on Track B continues through segments B2 and B3 until time 403, at which point a user selection implicates a return to Track A. When time 403 is reached, the user has missed a portion of segment A1 (from switching tracks prior to its completion) and a majority of segment A2. In this instance, because a large enough portion of segment A1 (e.g., >80%) was already seen by the user, no summary is shown for that segment. On the other hand, because a large enough portion of segment A2 (e.g., >80%) was not seen, summary A2 is shown before switching to Track A. In determining what portion of a segment was or was not seen, the time taken up by any summary shown in place of segment content can also be taken into account. Here, there is also sufficient time prior to the beginning of segment A3 to include idle content A2.


In some implementations, the present system intelligently determines whether a switch to a different stream should result in a summary being generated and shown for a particular portion of content. In general, the system can track whether a particular segment or portion of content was presented to a user, and use this status as an initial threshold for determining whether the segment or content portion is eligible to be included in a summary. For example, the system can consider only missed content (i.e., content that was played on a parallel track not visible to the user while the user was watching a different track) to be eligible for inclusion.


In one implementation, whether missed content is included in a summary can depend on how recently the user was viewing the stream (e.g., don't generate a summary for missed content on Track A if the user was previously viewing Track A within the last 10 seconds, or other threshold value). Another factor can be the amount of time the user has spent viewing a stream over a particular time period or running window (e.g., don't generate a summary for missed content on Track A if the user has spent at least four minutes out of the last five minutes viewing content on Track A). Yet another factor, useful in semi-automated summary generation systems, as described above, is whether the user has been presented with a threshold amount of designated interesting or important scenes or events in the content (e.g., don't generate a summary for missed content on Track A if the user has seen at least 80% of the important events in the missed content; generate a summary for missed content on Track B if the user has missed more than 20 seconds of designated important scenes in the missed content). For a manual summary generation system, for example, the summary for a particular segment can be shown to the user if at least a threshold amount of the segment was missed (e.g., show segment summary if more than 40%, 50%, 60%, or other amount of the segment was missed).
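These factors can be combined into a single eligibility check. The sketch below hard-codes the example threshold values quoted above (10 seconds of recency, four of the last five minutes, 20 seconds of important content); both the structure and the names are assumptions:

```typescript
// Illustrative gate deciding whether missed content on a track warrants a
// summary, using the example thresholds from the text.
interface TrackViewingState {
  secondsSinceLastViewed: number;
  secondsViewedInWindow: number; // seconds of this track seen in the window
  windowSeconds: number; // e.g. 300 for a five-minute running window
  missedImportantSeconds: number; // designated-important content missed
}

function shouldGenerateSummary(state: TrackViewingState): boolean {
  // Viewed too recently (example threshold: within the last 10 seconds).
  if (state.secondsSinceLastViewed < 10) return false;
  // User saw most of this track recently (example: 4 of the last 5 minutes).
  if (state.secondsViewedInWindow / state.windowSeconds >= 4 / 5) return false;
  // Summarize only if enough designated-important content was missed
  // (example threshold: more than 20 seconds).
  return state.missedImportantSeconds > 20;
}
```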


In some implementations, the system supports switching to another stream while the video summary is being presented and prior to the video summary being completed. In such circumstances, the system tracks which parts of the summary were viewed, determines the content in the switched-to stream corresponding to the viewed summary portions, and ensures that the corresponding content will be summarized and included in the summary the next time the user switches to that stream, if at all. In one implementation, for fully- and semi-automated systems, the point in time at which the user stops viewing the summary is correlated to a point in time in the corresponding content in the switched-to stream. Then, if the user returns again to that stream, the generated summary can include missed content starting from that point in time in the stream and going forward. In another implementation, for manual systems, if a user views more than a threshold amount (e.g., 50%, 60%, 70%, 80%, 90%, etc.) of a segment summary associated with a particular segment, then that segment is considered viewed and need not be summarized if the user returns to the switched-to stream. If, on the other hand, the user's viewing of the segment summary does not meet the threshold amount, the segment summary will be included in the summary the next time the user returns to the switched-to stream. In such a case, the segment summary can be restarted or shown from the point where the user left off.
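For the manual, segment-based variant, the bookkeeping might look like the following sketch, where the viewed threshold is one of the example values above and all names are illustrative:

```typescript
// Illustrative: track how much of each segment summary was watched. A
// segment counts as viewed once enough of its summary has been seen;
// otherwise its summary is re-queued, optionally from the leave-off point.
const summaryProgress = new Map<string, number>(); // segmentId -> fraction seen

function onSummaryInterrupted(segmentId: string, fractionSeen: number): void {
  summaryProgress.set(segmentId, fractionSeen);
}

function needsResummarizing(segmentId: string, viewedThreshold = 0.8): boolean {
  return (summaryProgress.get(segmentId) ?? 0) < viewedThreshold;
}

function resumeOffsetSeconds(segmentId: string, summarySeconds: number): number {
  // Resume the segment summary from where the user left off (0 = restart).
  return (summaryProgress.get(segmentId) ?? 0) * summarySeconds;
}
```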



FIGS. 5A-5D depict a graphical example of how the present system, in one implementation, addresses a stream switch during presentation of a summary. The figures show a two-track video presentation synchronized to a common timeline which, for convenience, is depicted as four corresponding timelines. Track A timeline 502 represents content in stream A over the common timeline. Likewise, Track B timeline 504 represents content in stream B over the common timeline. Summary timeline 512 represents content in Track A that will be summarized upon the user switching to Track A, and summary timeline 514 represents content in Track B that will be summarized upon the user switching to Track B.



FIG. 5A illustrates playing video (represented as a dashed line) that is being presented to a user. The Track A video is shown to the user between T0 and T1 on Track A timeline 502 (note that the video progresses according to the common timeline on Track B as well; however, that content is hidden from the user), at which point, at T1, a switch is made to Track B. As shown in summary timeline 514, the user missed content occurring on Track B between T0 and T1 and, accordingly, this is the content to be summarized. Further, because the user is switching away from Track A at T1, the content on Track A following T1 can be marked for summarization until the user returns to Track A.


In FIG. 5B, while the summary of Track B content is being presented between T1 and T2, the user switches back to Track A, at time T2, before the summary finishes. Accordingly, summary timeline 514 shows that only a portion of the missed Track B content (T0 to TX) was summarized and viewed by the user, with content between TX and T1 yet to be presented in a summary to the user. Further, in viewing the summary between T1 and T2, more Track B content was missed and, thus, the Track B content between T1 and T2 can be summarized as well. Similarly, while watching the Track B summary, the user missed Track A content between T1 and T2, causing that content to be marked for summarization as well (as shown in summary timeline 512).



FIG. 5C shows a switch back to Track B, at time T3, after the user has viewed further content in Track A. As shown previously in FIG. 5B, Track A had missed content between T1 and T2 not yet provided in a summary to the user. Track A timeline 502 in FIG. 5C shows that, on switching to Track A at T2, the missed Track A content is summarized and presented to the user, and there is no further missed Track A content to summarize. During the time that the user is viewing the Track A summary, and before the switch back to Track B (i.e., between Time T2 and T3), further missed Track B content has built up (as shown in summary timeline 514) and can be summarized on the return to Track B at T3.



FIG. 5D depicts a further switch back to Track A, at time T4, after viewing the Track B summary between T3 and TY, and further Track B content between TY and T4. Here, because the entire remaining Track B summary was able to be shown before the user switched from the track, there is no further Track B content to summarize, as shown by summary timeline 514. Further, as sufficient time was available, the summary was able to include content missed in real-time on Track B, between T3 and TY, as the user viewed the summary. On the other hand, between T3 and T4, the user missed content on Track A and, as shown by summary timeline 512, that missed content can be summarized and presented on the user's return to Track A at T4.


In one implementation, visual, audio, haptic, or other indicators are provided to inform the user that something interesting or otherwise notable has happened, is happening, or will happen on a track not currently being viewed by the user. Such indicators can include, but are not limited to, sound clips, graphics, video or audio thumbnails from the interesting content, and the like. In some instances, a numerical indicator is used to inform the user of the number of scenes (interesting or otherwise) missed on each track. In other instances, an indicator can include other information about missed content, such as the length of the missed content, its level of importance, its category, and so on. The foregoing indicators can be, for example, shown in a video player interface, overlaid on the video itself, and/or displayed on a player progress bar.
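As a sketch only, an indicator of this kind might carry a payload along the following lines; every field here is an assumption for illustration:

```typescript
// Illustrative payload for an indicator about notable content on a track
// the user is not currently watching, covering the variants described above.
interface MissedContentIndicator {
  trackId: string;
  missedSceneCount: number; // numerical indicator of missed scenes
  missedSeconds: number; // total length of missed content
  importance: "low" | "medium" | "high"; // level of importance
  timing: "occurred" | "occurring" | "upcoming"; // has happened / is happening / will happen
  thumbnailUrl?: string; // optional video or audio thumbnail of the event
}
```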



FIG. 6 depicts a parallel video presentation including Tracks A, B, and C. The dashed line represents the video seen by the user during the presentation as he views content on Track B and switches to Track C about halfway through. At Point 1 in Track B, video display window 602 includes graphical indicator 608 overlaid on the presentation. Graphical indicator 608 notifies the user that interesting content 620 is currently occurring in Track A. If desired, the user can switch to Track A and view the interesting content as it occurs. At Point 2 in Track C, video display window 612 shows graphical indicators 618, which indicate to the user that he missed interesting content 624 on Track B and interesting content 620, 622, and 626 on Track A. If the user then switches at Point 2 to Track A, he can see a summary of the three missed pieces of interesting content 620, 622, and 626. Likewise, if he switches at Point 2 to Track B, he can see a summary of interesting content 624. In some implementations, a numerical indicator is included in graphical indicators 618, showing the number of interesting scenes missed.


One will appreciate the near-limitless number of applications of the techniques described herein. As one illustrative example, a video presentation includes a story about two girls, each with a different camera following her. A user can switch from one following camera to the other by switching between tracks. When a switch is made from one girl to another, a summary of events that happened to that girl from the previous time that the user saw her can be shown, prior to switching to that girl's camera. In another example, a video presentation includes three soccer games being broadcast in parallel (live or prerecorded). Upon the user switching to a different match (different stream), a highlight summary of the match is dynamically generated and shown to the user based on what the user missed in that match.


Although the systems and methods described herein relate primarily to audio and video presentation, the invention is equally applicable to various streaming and non-streaming media, including animation, video games, interactive media, and other forms of content usable in conjunction with the present systems and methods. Further, there can be more than one audio, video, and/or other media content stream played in synchronization with other streams. Streaming media can include, for example, multimedia content that is continuously presented to a user while it is received from a content delivery source, such as a remote video server. If a source media file is in a format that cannot be streamed and/or does not allow for seamless connections between segments, the media file can be transcoded or converted into a format supporting streaming and/or seamless transitions.


While various implementations of the present invention have been described herein, it should be understood that they have been presented by example only. Where methods and steps described above indicate certain events occurring in certain order, those of ordinary skill in the art having the benefit of this disclosure would recognize that the ordering of certain steps can be modified and that such modifications are in accordance with the given variations. For example, although various implementations have been described as having particular features and/or combinations of components, other implementations are possible having any combination or sub-combination of any features and/or components from any of the implementations described herein.

Claims
  • 1. A computer-implemented method comprising: simultaneously receiving a plurality of video streams, wherein each of the video streams is synchronized to a common playback timeline; presenting a first one of the video streams to a user between a first time in the playback timeline and a second, later time in the playback timeline; receiving an instruction to switch to presentation of a second one of the video streams; generating a video summary of the second video stream based on content in the second video stream occurring between the first time and the second time; and presenting at least a portion of the video summary to the user prior to switching to the presentation of the second video stream, wherein the video summary is presented to the user between the second time and a third, later time in the playback timeline, and wherein the generating of the video summary is further based on content in the second video stream occurring between the second time and the third time, such that the video summary includes a summary of content in the second video stream that the user will miss during watching the video summary.
  • 2. The method of claim 1, wherein each video stream comprises content related to the other video streams.
  • 3. The method of claim 1, wherein the video summary comprises content not previously viewed by the user.
  • 4. The method of claim 1, wherein the video summary comprises at least one of (i) video content from the second video stream presented at an increased playback speed and (ii) a selection of still images of video content in the second video stream.
  • 5. The method of claim 1, wherein the video summary comprises content from the second video stream previously designated as important.
  • 6. The method of claim 1, wherein each video stream comprises a plurality of segments, each segment comprising a segment summary video and a segment idle video.
  • 7. The method of claim 6, wherein the generating of the video summary comprises including in the video summary at least one segment summary video from one or more segments occurring at least in part between the first time and the second time.
  • 8. The method of claim 6, wherein the generating of the video summary comprises including in the video summary at least one segment idle video from one or more segments occurring at least in part between the first time and the second time.
  • 9. The method of claim 1, further comprising: switching back to presentation of the first video stream while the video summary is playing; identifying content in the second video stream corresponding to a portion of the video summary not viewed by the user; and including a summary of at least some of the identified content in a second video summary generated for presentation to the user when next switching to the second video stream.
  • 10. The method of claim 1, further comprising providing an indicator informing the user when an interesting event is occurring or has occurred on a video stream not currently being viewed by the user.
  • 11. A system comprising: at least one memory for storing computer-executable instructions; and at least one processor for executing the instructions stored on the memory, wherein execution of the instructions programs the at least one processor to perform operations comprising: simultaneously receiving a plurality of video streams, wherein each of the video streams is synchronized to a common playback timeline; presenting a first one of the video streams to a user between a first time in the playback timeline and a second, later time in the playback timeline; receiving an instruction to switch to presentation of a second one of the video streams; generating a video summary of the second video stream based on content in the second video stream occurring between the first time and the second time; and presenting at least a portion of the video summary to the user prior to switching to the presentation of the second video stream, wherein the video summary is presented to the user between the second time and a third, later time in the playback timeline, and wherein the generating of the video summary is further based on content in the second video stream occurring between the second time and the third time, such that the video summary includes a summary of content in the second video stream that the user will miss during watching the video summary.
  • 12. The system of claim 11, wherein each video stream comprises content related to the other video streams.
  • 13. The system of claim 11, wherein the video summary comprises content not previously viewed by the user.
  • 14. The system of claim 11, wherein the video summary comprises at least one of (i) video content from the second video stream presented at an increased playback speed and (ii) a selection of still images of video content in the second video stream.
  • 15. The system of claim 11, wherein the video summary comprises content from the second video stream previously designated as important.
  • 16. The system of claim 11, wherein each video stream comprises a plurality of segments, each segment comprising a segment summary video and a segment idle video.
  • 17. The system of claim 16, wherein the generating of the video summary comprises including in the video summary at least one segment summary video from one or more segments occurring at least in part between the first time and the second time.
  • 18. The system of claim 16, wherein the generating of the video summary comprises including in the video summary at least one segment idle video from one or more segments occurring at least in part between the first time and the second time.
  • 19. The system of claim 11, wherein the operations further comprise: switching back to presentation of the first video stream while the video summary is playing; identifying content in the second video stream corresponding to a portion of the video summary not viewed by the user; and including a summary of at least some of the identified content in a second video summary generated for presentation to the user when next switching to the second video stream.
  • 20. The system of claim 11, wherein the operations further comprise providing an indicator informing the user when an interesting event is occurring or has occurred on a video stream not currently being viewed by the user.
US Referenced Citations (296)
Number Name Date Kind
4569026 Best Feb 1986 A
5161034 Klappert Nov 1992 A
5568602 Callahan et al. Oct 1996 A
5607356 Schwartz Mar 1997 A
5636036 Ashbey Jun 1997 A
5676551 Knight et al. Oct 1997 A
5734862 Kulas Mar 1998 A
5737527 Shiels et al. Apr 1998 A
5745738 Ricard Apr 1998 A
5754770 Shiels et al. May 1998 A
5818435 Kozuka et al. Oct 1998 A
5848934 Shiels et al. Dec 1998 A
5887110 Sakamoto et al. Mar 1999 A
5894320 Vancelette Apr 1999 A
6067400 Saeki et al. May 2000 A
6122668 Teng et al. Sep 2000 A
6128712 Hunt et al. Oct 2000 A
6191780 Martin et al. Feb 2001 B1
6222925 Shiels et al. Apr 2001 B1
6240555 Shoff et al. May 2001 B1
6298482 Seidman et al. Oct 2001 B1
6657906 Martin Dec 2003 B2
6698020 Zigmond et al. Feb 2004 B1
6728477 Watkins Apr 2004 B1
6801947 Li Oct 2004 B1
7155676 Land et al. Dec 2006 B2
7231132 Davenport Jun 2007 B1
7310784 Gottlieb et al. Dec 2007 B1
7379653 Yap et al. May 2008 B2
7444069 Bernsley Oct 2008 B1
7627605 Lamere et al. Dec 2009 B1
7669128 Bailey et al. Feb 2010 B2
7694320 Yeo et al. Apr 2010 B1
7779438 Davies Aug 2010 B2
7787973 Lambert Aug 2010 B2
7917505 van Gent et al. Mar 2011 B2
8024762 Britt Sep 2011 B2
8065710 Malik Nov 2011 B2
8151139 Gordon Apr 2012 B1
8176425 Wallace et al. May 2012 B2
8190001 Bernsley May 2012 B2
8276058 Gottlieb et al. Sep 2012 B2
8281355 Weaver et al. Oct 2012 B1
8600220 Bloch et al. Dec 2013 B2
8612517 Yadid et al. Dec 2013 B1
8650489 Baum et al. Feb 2014 B1
8826337 Issa et al. Sep 2014 B2
8860882 Bloch et al. Oct 2014 B2
8977113 Rumteen et al. Mar 2015 B1
9009619 Bloch et al. Apr 2015 B2
9021537 Funge et al. Apr 2015 B2
9082092 Henry Jul 2015 B1
9094718 Barton et al. Jul 2015 B2
9190110 Bloch Nov 2015 B2
9257148 Bloch et al. Feb 2016 B2
9268774 Kim et al. Feb 2016 B2
9271015 Bloch et al. Feb 2016 B2
9390099 Wang et al. Jul 2016 B1
9465435 Zhang et al. Oct 2016 B1
9520155 Bloch et al. Dec 2016 B2
9530454 Bloch et al. Dec 2016 B2
9607655 Bloch et al. Mar 2017 B2
9641898 Bloch et al. May 2017 B2
9653115 Bloch et al. May 2017 B2
9653116 Paulraj et al. May 2017 B2
9672868 Bloch et al. Jun 2017 B2
9792026 Bloch et al. Oct 2017 B2
9792957 Bloch et al. Oct 2017 B2
9826285 Mishra Nov 2017 B1
20020086724 Miyaki et al. Jul 2002 A1
20020091455 Williams Jul 2002 A1
20020105535 Wallace et al. Aug 2002 A1
20020106191 Betz et al. Aug 2002 A1
20020120456 Berg et al. Aug 2002 A1
20020124250 Proehl et al. Sep 2002 A1
20020129374 Freeman Sep 2002 A1
20020140719 Amir et al. Oct 2002 A1
20020144262 Plotnick et al. Oct 2002 A1
20020177914 Chase Nov 2002 A1
20020194595 Miller et al. Dec 2002 A1
20030007560 Mayhew et al. Jan 2003 A1
20030148806 Weiss Aug 2003 A1
20030159566 Sater et al. Aug 2003 A1
20030183064 Eugene et al. Oct 2003 A1
20030184598 Graham Oct 2003 A1
20030221541 Platt Dec 2003 A1
20040019905 Fellenstein et al. Jan 2004 A1
20040034711 Hughes Feb 2004 A1
20040091848 Nemitz May 2004 A1
20040125124 Kim et al. Jul 2004 A1
20040128317 Sull et al. Jul 2004 A1
20040138948 Loomis Jul 2004 A1
20040172476 Chapweske Sep 2004 A1
20040194128 McIntyre et al. Sep 2004 A1
20040194131 Ellis et al. Sep 2004 A1
20050019015 Ackley et al. Jan 2005 A1
20050055377 Dorey et al. Mar 2005 A1
20050091597 Ackley Apr 2005 A1
20050102707 Schnitman May 2005 A1
20050107159 Sato May 2005 A1
20050166224 Ficco Jul 2005 A1
20050198661 Collins et al. Sep 2005 A1
20050210145 Kim et al. Sep 2005 A1
20050251820 Stefanik et al. Nov 2005 A1
20060002895 McDonnell et al. Jan 2006 A1
20060024034 Filo et al. Feb 2006 A1
20060028951 Tozun et al. Feb 2006 A1
20060064733 Norton et al. Mar 2006 A1
20060150072 Salvucci Jul 2006 A1
20060155400 Loomis Jul 2006 A1
20060200842 Chapman et al. Sep 2006 A1
20060222322 Levitan Oct 2006 A1
20060224260 Hicken et al. Oct 2006 A1
20070003149 Nagumo et al. Jan 2007 A1
20070024706 Brannon et al. Feb 2007 A1
20070033633 Andrews et al. Feb 2007 A1
20070055989 Shanks Mar 2007 A1
20070099684 Butterworth May 2007 A1
20070101369 Dolph May 2007 A1
20070118801 Harshbarger et al. May 2007 A1
20070157261 Steelberg et al. Jul 2007 A1
20070162395 Ben-Yaacov et al. Jul 2007 A1
20070226761 Zalewski et al. Sep 2007 A1
20070239754 Schnitman Oct 2007 A1
20070253677 Wang Nov 2007 A1
20070253688 Koennecke Nov 2007 A1
20070263722 Fukuzawa Nov 2007 A1
20080019445 Aono et al. Jan 2008 A1
20080021874 Dahl et al. Jan 2008 A1
20080022320 Ver Steeg Jan 2008 A1
20080031595 Cho Feb 2008 A1
20080086456 Rasanen et al. Apr 2008 A1
20080086754 Chen et al. Apr 2008 A1
20080091721 Harboe et al. Apr 2008 A1
20080092159 Dmitriev et al. Apr 2008 A1
20080148152 Blinnikka et al. Jun 2008 A1
20080170687 Moors et al. Jul 2008 A1
20080177893 Bowra et al. Jul 2008 A1
20080178232 Velusamy Jul 2008 A1
20080276157 Kustka et al. Nov 2008 A1
20080300967 Buckley et al. Dec 2008 A1
20080301750 Silfvast et al. Dec 2008 A1
20080314232 Hansson et al. Dec 2008 A1
20090022015 Harrison Jan 2009 A1
20090022165 Candelore et al. Jan 2009 A1
20090024923 Hartwig et al. Jan 2009 A1
20090055880 Batteram et al. Feb 2009 A1
20090063681 Ramakrishnan et al. Mar 2009 A1
20090077137 Weda Mar 2009 A1
20090116817 Kim et al. May 2009 A1
20090191971 Avent Jul 2009 A1
20090195652 Gal Aug 2009 A1
20090199697 Lehtiniemi et al. Aug 2009 A1
20090228572 Wall et al. Sep 2009 A1
20090254827 Gonze et al. Oct 2009 A1
20090258708 Figueroa Oct 2009 A1
20090265746 Halen et al. Oct 2009 A1
20090297118 Fink et al. Dec 2009 A1
20090320075 Marko Dec 2009 A1
20100017820 Thevathasan et al. Jan 2010 A1
20100042496 Wang et al. Feb 2010 A1
20100077290 Pueyo Mar 2010 A1
20100088726 Curtis et al. Apr 2010 A1
20100146145 Tippin et al. Jun 2010 A1
20100153512 Balassanian et al. Jun 2010 A1
20100161792 Palm et al. Jun 2010 A1
20100162344 Casagrande et al. Jun 2010 A1
20100167816 Perlman et al. Jul 2010 A1
20100186032 Pradeep et al. Jul 2010 A1
20100186579 Schnitman Jul 2010 A1
20100210351 Berman Aug 2010 A1
20100262336 Rivas et al. Oct 2010 A1
20100267450 McMain Oct 2010 A1
20100268361 Mantel et al. Oct 2010 A1
20100278509 Nagano et al. Nov 2010 A1
20100287033 Mathur Nov 2010 A1
20100287475 van Zwol et al. Nov 2010 A1
20100293455 Bloch Nov 2010 A1
20100332404 Valin Dec 2010 A1
20110000797 Henry Jan 2011 A1
20110007797 Palmer et al. Jan 2011 A1
20110010742 White Jan 2011 A1
20110026898 Lussier et al. Feb 2011 A1
20110033167 Arling et al. Feb 2011 A1
20110041059 Amarasingham et al. Feb 2011 A1
20110078023 Aldrey et al. Mar 2011 A1
20110078740 Bolyukh et al. Mar 2011 A1
20110096225 Candelore Apr 2011 A1
20110126106 Ben Shaul et al. May 2011 A1
20110131493 Dahl Jun 2011 A1
20110138331 Pugsley et al. Jun 2011 A1
20110163969 Anzures et al. Jul 2011 A1
20110191684 Greenberg Aug 2011 A1
20110191801 Vytheeswaran Aug 2011 A1
20110197131 Duffin et al. Aug 2011 A1
20110200116 Bloch et al. Aug 2011 A1
20110202562 Bloch et al. Aug 2011 A1
20110238494 Park Sep 2011 A1
20110246885 Pantos et al. Oct 2011 A1
20110252320 Arrasvuori et al. Oct 2011 A1
20110264755 Salvatore De Villiers Oct 2011 A1
20110282745 Meoded et al. Nov 2011 A1
20110282906 Wong Nov 2011 A1
20110307786 Shuster Dec 2011 A1
20110307919 Weerasinghe Dec 2011 A1
20110307920 Blanchard et al. Dec 2011 A1
20120004960 Ma et al. Jan 2012 A1
20120005287 Gadel et al. Jan 2012 A1
20120017141 Eelen et al. Jan 2012 A1
20120062576 Rosenthal et al. Mar 2012 A1
20120081389 Dilts Apr 2012 A1
20120089911 Hosking et al. Apr 2012 A1
20120094768 McCaddon et al. Apr 2012 A1
20120110618 Kilar et al. May 2012 A1
20120110620 Kilar et al. May 2012 A1
20120134646 Alexander May 2012 A1
20120147954 Kasai et al. Jun 2012 A1
20120179970 Hayes Jul 2012 A1
20120198412 Creighton et al. Aug 2012 A1
20120263263 Olsen et al. Oct 2012 A1
20120308206 Kulas Dec 2012 A1
20130028573 Hoofien et al. Jan 2013 A1
20130031582 Tinsman et al. Jan 2013 A1
20130039632 Feinson Feb 2013 A1
20130046847 Zavesky et al. Feb 2013 A1
20130054728 Amir et al. Feb 2013 A1
20130055321 Cline et al. Feb 2013 A1
20130061263 Issa et al. Mar 2013 A1
20130097643 Stone et al. Apr 2013 A1
20130117248 Bhogal et al. May 2013 A1
20130125181 Montemayor et al. May 2013 A1
20130129308 Karn et al. May 2013 A1
20130177294 Kennberg Jul 2013 A1
20130188923 Hartley et al. Jul 2013 A1
20130204710 Boland et al. Aug 2013 A1
20130219425 Swartz Aug 2013 A1
20130254292 Bradley Sep 2013 A1
20130259442 Bloch et al. Oct 2013 A1
20130282917 Reznik et al. Oct 2013 A1
20130308926 Jang et al. Nov 2013 A1
20130328888 Beaver et al. Dec 2013 A1
20140019865 Shah Jan 2014 A1
20140025839 Marko et al. Jan 2014 A1
20140040273 Cooper et al. Feb 2014 A1
20140040280 Slaney et al. Feb 2014 A1
20140078397 Bloch et al. Mar 2014 A1
20140082666 Bloch et al. Mar 2014 A1
20140094313 Watson et al. Apr 2014 A1
20140101550 Zises Apr 2014 A1
20140126877 Crawford et al. May 2014 A1
20140129618 Panje et al. May 2014 A1
20140152564 Gulezian et al. Jun 2014 A1
20140178051 Bloch et al. Jun 2014 A1
20140186008 Eyer Jul 2014 A1
20140194211 Chimes et al. Jul 2014 A1
20140237520 Rothschild et al. Aug 2014 A1
20140245152 Carter et al. Aug 2014 A1
20140270680 Bloch et al. Sep 2014 A1
20140282013 Amijee Sep 2014 A1
20140282642 Needham et al. Sep 2014 A1
20140380167 Bloch et al. Dec 2014 A1
20150007234 Rasanen et al. Jan 2015 A1
20150012369 Dharmaji et al. Jan 2015 A1
20150015789 Guntur et al. Jan 2015 A1
20150046946 Hassell et al. Feb 2015 A1
20150058342 Kim et al. Feb 2015 A1
20150067723 Bloch et al. Mar 2015 A1
20150104155 Bloch et al. Apr 2015 A1
20150179224 Bloch et al. Jun 2015 A1
20150181301 Bloch et al. Jun 2015 A1
20150185965 Belliveau et al. Jul 2015 A1
20150195601 Hahm Jul 2015 A1
20150199116 Bloch et al. Jul 2015 A1
20150201187 Ryo Jul 2015 A1
20150258454 King et al. Sep 2015 A1
20150293675 Bloch et al. Oct 2015 A1
20150294685 Bloch et al. Oct 2015 A1
20150304698 Redol Oct 2015 A1
20150331942 Tan Nov 2015 A1
20160062540 Yang et al. Mar 2016 A1
20160104513 Bloch et al. Apr 2016 A1
20160105724 Bloch et al. Apr 2016 A1
20160132203 Seto et al. May 2016 A1
20160170948 Bloch Jun 2016 A1
20160173944 Kilar et al. Jun 2016 A1
20160192009 Sugio et al. Jun 2016 A1
20160217829 Bloch et al. Jul 2016 A1
20160224573 Shahraray et al. Aug 2016 A1
20160277779 Zhang et al. Sep 2016 A1
20160303608 Jossick Oct 2016 A1
20170062012 Bloch et al. Mar 2017 A1
20170142486 Masuda May 2017 A1
20170178409 Bloch et al. Jun 2017 A1
20170178601 Bloch et al. Jun 2017 A1
20170289220 Bloch et al. Oct 2017 A1
20170295410 Bloch et al. Oct 2017 A1
Foreign Referenced Citations (19)
Number Date Country
2639491 Mar 2010 CA
004038801 Jun 1992 DE
10053720 Apr 2002 DE
0965371 Dec 1999 EP
1033157 Sep 2000 EP
2104105 Sep 2009 EP
2359916 Sep 2001 GB
2428329 Jan 2007 GB
2008005288 Jan 2008 JP
20040005068 Jan 2004 KR
20100037413 Apr 2010 KR
WO-1996013810 May 1996 WO
WO-2000059224 Oct 2000 WO
WO-2007062223 May 2007 WO
WO-2007138546 Dec 2007 WO
WO-2008001350 Jan 2008 WO
WO-2008052009 May 2008 WO
WO-2008057444 May 2008 WO
WO-2009137919 Nov 2009 WO
Non-Patent Literature Citations (13)
Entry
An ffmpeg and SDL Tutorial, “Tutorial 05: Synching Video,” Retrieved from Internet on Mar. 15, 2013: <http://dranger.com/ffmpeg/tutorial05.html> (4 pages).
Archos Gen 5 English User Manual Version 3.0, Jul. 26, 2007, pp. 1-81.
Bartlett, “iTunes 11: How to Queue Next Song,” Technipages, Oct. 6, 2008, pp. 1-8, Retrieved from the Internet on Dec. 26, 2013, http://www.technipages.com/itunes-queue-next-song.html.
International Search Report and Written Opinion for International Patent Application PCT/IB2013/001000 dated Jul. 31, 2013 (11 pages).
International Search Report for International Application PCT/IL2010/000362 dated Aug. 25, 2010 (6 pages).
International Search Report for International Patent Application PCT/IL2012/000080 dated Aug. 9, 2012 (4 pages).
International Search Report for International Patent Application PCT/IL2012/000081 dated Jun. 28, 2012 (4 pages).
Labs.byHook: “Ogg Vorbis Encoder for Flash: Alchemy Series Part 1,” Retrieved from Internet on Dec. 17, 2012: URL:http://labs.byhook.com/2011/02/15/ogg-vorbis-encoder-for-flash-alchemy-series-part-1/, 2011, 6 pages.
Miller, Gregor et al., “MiniDiver: A Novel Mobile Media Playback Interface for Rich Video Content on an iPhone™”, Entertainment Computing - ICEC 2009, Sep. 3, 2009, pp. 98-109.
Sodagar, I., “The MPEG-DASH Standard for Multimedia Streaming Over the Internet”, IEEE Multimedia, IEEE Service Center, New York, NY US, (2011) 18(4): 62-67.
Supplemental European Search Report for EP13184145 dated Jan. 30, 2014 (5 pages).
Supplemental European Search Report for EP10774637.2 (PCT/IL2010/000362) dated Jun. 28, 2012 (6 pages).
Yang, H., et al., “Time Stamp Synchronization in Video Systems,” Teletronics Technology Corporation, <http://www.ttcdas.com/products/daus_encoders/pdf/_tech_papers/tp_2010_time_stamp_video_system.pdf>, Abstract, (8 pages).
Related Publications (1)
Number Date Country
20170374120 A1 Dec 2017 US