The present disclosure relates generally to video bookmarking and, more particularly, to systems and methods for tracking and saving the path of a user through an interactive video tree such that the choices made by the user in traversing the video tree can be recreated at a later time.
Common today are web-based and standalone video players that allow users to mark specific locations in a linear video and restart playback of the video at those locations without having to view the preceding content. For example, the video-sharing website YouTube recognizes a time-offset parameter in the uniform resource locator (URL) of a video, thereby allowing a user to start the video at the specified offset. Other known video bookmarking techniques operate similarly; that is, they essentially save a timestamp to return the user to a particular location in a video.
Systems and methods for dynamic bookmarking in interactive video are described. In one aspect, an interactive video is formed based on a video tree structure that is made up of video segments. Each video segment represents a predefined portion of one or more paths in the video tree, with each path being associated with a different video presentation. One of the paths in the video tree is traversed based on the decisions made by a user during playback of the video presentation associated with the path being traversed. A selection of a location in a video segment is made, and a bookmark of the selected location is stored for subsequent retrieval. The bookmark includes information identifying the sequence of video segments in the video tree that was traversed to reach the particular location. When the bookmark is later selected, the user is directed to the bookmarked location in the video segment and, based on the saved sequence of video segments, the decisions made by the user during playback of the video presentation are restored. The video tree structure can be modified, and the bookmark will be automatically updated, if necessary, based on the modified structure of the video tree.
In one implementation, the bookmark includes an offset of the location from the beginning of the first video segment and/or an offset of the location from a decision period. The decision period can be a period during which the user can choose from a plurality of options during playback of a video segment, where a following segment is determined based on a choice made by the user during the decision period. The bookmark can also include a video thumbnail associated with the location.
In another implementation, restoring the decisions includes providing a visual representation of the sequence of video segments and/or the decisions made by the user.
In a further implementation, a video player for playing the video segments is provided. The video player includes a video progress bar that a user can interact with to select a location in a video segment to create a bookmark.
In one implementation, a second bookmark of a location in the video tree is automatically provided based on historical data, user data, and/or content information. Historical data can include previous decisions made by the user in traversing the video tree, and previous decisions made by a group of users in traversing the video tree. User data can include demographics, geography, and social media information. Content information can include video presentation length, segment length, path length, and content subject matter.
In another implementation, a dynamic bookmark that references a tracked statistic is stored. Upon selection of the dynamic bookmark, a location in the video tree is identified based on the current state of the tracked statistic, and the user is directed to the identified location.
Other aspects of the invention include corresponding systems and computer-readable media. The various aspects and advantages of the invention will become apparent from the following drawings, detailed description, and claims, all of which illustrate the principles of the invention, by way of example only.
A more complete appreciation of the invention and many attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings. In the drawings, like reference characters generally refer to the same parts throughout the different views. Further, the drawings are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the invention.
Described herein are various implementations of methods and supporting systems for creating and retrieving bookmarks in interactive videos. In one implementation, the presentation of an interactive video is based on a video tree, hierarchy, or other structure. A video tree can be formed by nodes that are connected in a branching, hierarchical, or other linked form. Nodes can have an associated video segment, audio segment, graphical user interface elements, and/or other associated media. Users (e.g., viewers) can watch a video that begins from a starting node in the tree and proceeds along connected nodes. Upon reaching a point where multiple video segments branch off from a currently viewed segment, the user can interactively select the branch to traverse and, thus, the next video segment to watch. Branched video can include seamlessly assembled and selectably presentable multimedia content such as that described in U.S. patent application Ser. No. 13/033,916, filed on Feb. 24, 2011, and entitled “System and Method for Seamless Multimedia Assembly,” and U.S. patent application Ser. No. 14/107,600, filed on Dec. 16, 2013, and entitled “Methods and Systems for Unfolding Video Pre-Roll,” the entireties of which are hereby incorporated by reference.
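By way of illustration, the following is a minimal Python sketch of such a video tree; the class and field names are hypothetical and are not drawn from any particular player implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SegmentNode:
    """A node in the video tree (hypothetical names, for illustration only)."""
    segment_id: str                 # identifier for the node's video segment
    media_uri: str                  # location of the segment's media
    children: List["SegmentNode"] = field(default_factory=list)  # branch options

    def add_branch(self, child: "SegmentNode") -> "SegmentNode":
        self.children.append(child)
        return child

# A small tree: an intro segment that branches to two possible next segments.
intro = SegmentNode("intro", "media/intro.mp4")
verse_female = intro.add_branch(SegmentNode("verse_female", "media/verse_f.mp4"))
verse_male = intro.add_branch(SegmentNode("verse_male", "media/verse_m.mp4"))
```

Each root-to-leaf walk through such a structure corresponds to one of the content paths described below.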
The prerecorded video segments in a video tree can be selectably presentable multimedia content; that is, some or all of the video segments in the video tree can be individually or collectively played for a user based upon the user's selection of a particular video segment, an interaction with a previous or playing video segment, or other interaction that results in a particular video segment or segments being played. The video segments can include, for example, one or more predefined, separate multimedia content segments that can be combined in certain manners to create a continuous, seamless presentation such that there are no noticeable gaps, jumps, freezes, delays, or other visual or audible interruptions to video or audio playback between segments. In addition to the foregoing, “seamless” can refer to a continuous playback of content that gives the user the appearance of watching a single, linear multimedia presentation, as well as a continuous playback of multiple content segments that have smooth audio and/or video transitions (e.g., fadeout/fade-in, linking segments) between two or more of the segments.
In some instances, the user is permitted to make choices or otherwise interact in real-time at decision points or during decision periods interspersed throughout the multimedia content. Decision points and/or decision periods can occur at any time and in any number during a multimedia segment, including at or near the beginning and/or the end of the segment. Decision points and/or periods can be predefined, occurring at fixed points or during fixed periods in the multimedia content segments. Based at least in part on the user's choices made before or during playback of content, one or more subsequent multimedia segment(s) associated with the choices can be presented to the user. In some implementations, the subsequent segment is played immediately and automatically following the conclusion of the current segment, whereas in other implementations, the subsequent segment is played immediately upon the user's interaction with the video, without waiting for the end of the decision period or the segment itself.
If a user does not make a selection at a decision point or during a decision period, a default, previously identified selection, or random selection can be made by the system. In some instances, the user is not provided with options; rather, the system automatically selects the segments that will be shown based on information that is associated with the user, other users, or other factors, such as the current date. For example, the system can automatically select subsequent segments based on the user's IP address, location, time zone, the weather in the user's location, social networking ID, saved selections, stored user profiles, preferred products or services, and so on. The system can also automatically select segments based on previous selections made by other users, such as the most popular suggestion or shared selections. The information can also be displayed to the user in the video, e.g., to show the user why an automatic selection is made. As one example, video segments can be automatically selected for presentation based on the geographical location of three different users: a user in Canada will see a twenty-second beer commercial segment followed by an interview segment with a Canadian citizen; a user in the US will see the same beer commercial segment followed by an interview segment with a US citizen; and a user in France is shown only the beer commercial segment.
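A rough sketch of this selection logic, reusing the hypothetical SegmentNode above; the fallback order (user choice, then predefined default, then random) mirrors the behavior described:

```python
import random
from typing import Optional

def resolve_next_segment(node: SegmentNode,
                         user_choice: Optional[str] = None,
                         default_id: Optional[str] = None) -> SegmentNode:
    """Return the next segment to play: the user's choice if one was made
    during the decision period, else a predefined default, else a random
    branch (assumes the node has at least one child)."""
    options = {child.segment_id: child for child in node.children}
    if user_choice in options:
        return options[user_choice]
    if default_id in options:
        return options[default_id]
    return random.choice(node.children)
```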
Multimedia segment(s) selected automatically or by a user can be presented immediately following a currently playing segment, or can be shown after other segments are played. Further, the selected multimedia segment(s) can be presented to the user immediately after selection, after a fixed or random delay, at the end of a decision period, and/or at the end of the currently playing segment. Two or more combined segments form a seamless multimedia content path, and users can take multiple paths and experience a complete, start-to-finish, seamless presentation. Further, one or more multimedia segments can be shared among intertwining paths while still ensuring a seamless transition from a previous segment and to the next segment. The content paths can be predefined, with fixed sets of possible transitions in order to ensure seamless transitions among segments. There can be any number of predefined paths, each having any number of predefined multimedia segments. Some or all of the segments can have the same or different playback lengths, including segments branching from a single source segment.
Traversal of the nodes along a content path in a tree can be performed by selecting among options that appear on and/or around the video while the video is playing. In some implementations, these options are presented to users at a decision point and/or during a decision period in a content segment. The display of options can hover over the video and then disappear when the decision period ends or when an option has been selected. Further, a timer, countdown, or other visual, aural, or other sensory indicator can be presented during playback of a content segment to inform the user of the point by which he should (or in some cases must) make his selection. For example, the countdown can indicate when the decision period will end, which can be at a different time than when the currently playing segment will end. If a decision period ends before the end of a particular segment, the remaining portion of the segment can serve as a non-interactive seamless transition to one or more other segments. Further, during this non-interactive end portion, the next multimedia content segment (and other potential next segments) can be downloaded and buffered in the background for later playback (or potential playback).
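One way this might be organized, as a sketch only: the player starts a countdown for the decision period and, in parallel, buffers every candidate next segment so that whichever branch is taken can begin seamlessly. The prefetch and on_timeout callbacks here are hypothetical stand-ins for the player's downloading and default-selection machinery.

```python
import threading
from typing import Callable

def start_decision_period(node: SegmentNode,
                          duration_s: float,
                          prefetch: Callable[[str], None],
                          on_timeout: Callable[[], None]) -> threading.Timer:
    # Buffer each potential next segment in the background for later playback.
    for child in node.children:
        threading.Thread(target=prefetch, args=(child.media_uri,),
                         daemon=True).start()
    # When the countdown expires without a selection, fall back to the
    # default/automatic selection behavior described above.
    timer = threading.Timer(duration_s, on_timeout)
    timer.start()
    return timer
```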
The segment that is played after a currently playing segment can be determined based on an option selected or other interaction with the video. Each available option can result in a different video and audio segment being played. As previously mentioned, the transition to the next segment can occur immediately upon selection, at the end of the current segment, or at some other predefined or random point. Notably, the transition between content segments can be seamless. In other words, the audio and video can continue playing regardless of whether a segment selection is made, and no noticeable gaps appear in audio or video playback between any connecting segments. In some instances, the video continues on to another segment after a certain amount of time if none is chosen, or can continue playing in a loop.
In one example, the multimedia content is a music video in which the user selects options upon reaching segment decision points to determine subsequent content to be played. First, a video introduction segment is played for the user. Prior to the end of the segment, a decision point is reached at which the user can select the next segment to be played from a listing of choices. In this case, the user is presented with a choice as to who will sing the first verse of the song: a tall female performer, or a short male performer. The user is given an amount of time to make a selection (i.e., a decision period), after which, if no selection is made, a default segment will be automatically selected. The default can be a predefined or random selection. Of note, the media content continues to play during the time the user is presented with the choices. Once a choice is selected (or the decision period ends), a seamless transition occurs to the next segment, meaning that the audio and video continue on to the next segment as if there were no break between the two segments and the user cannot visually or audibly detect the transition. As the music video continues, the user is presented with other choices at other decision points, depending on which path of choices is followed. Ultimately, the user arrives at a final segment, having traversed a complete multimedia content path.
The techniques described herein can be implemented in any appropriate hardware or software. If implemented as software, the processes can execute on a system capable of running one or more commercial operating systems such as the Microsoft Windows® operating systems, the Apple OS X® operating systems, the Apple iOS® platform, the Google Android™ platform, the Linux® operating system and other variants of UNIX® operating systems, and the like. The software can be implemented on a general purpose computing device in the form of a computer including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit.
The described systems can include a plurality of software modules stored in a memory and executed on one or more processors. The modules can be in the form of a suitable programming language, which is converted to machine language or object code to allow the processor or processors to read the instructions. The software can be in the form of a standalone application, implemented in any suitable programming language or framework.
The application 112 can be a video player and/or editor that is implemented as a native application, web application, or other form of software. In some implementations, the application 112 is in the form of a web page, widget, and/or Java, JavaScript, .Net, Silverlight, Flash, and/or other applet or plug-in that is downloaded to the device and runs in conjunction with a web browser. The application 112 and the web browser can be part of a single client-server interface; for example, the application 112 can be implemented as a plugin to the web browser or to another framework or operating system. Any other suitable client software architecture, including but not limited to widget frameworks and applet technology can also be employed.
Multimedia content can be provided to the user device 110 by content server 102, which can be a web server, media server, a node in a content delivery network, or other content source. In some implementations, the application 112 (or a portion thereof) is provided by application server 106. For example, some or all of the described functionality of the application 112 can be implemented in software downloaded to or existing on the user device 110 and, in some instances, some or all of the functionality exists remotely. For example, certain video encoding and processing functions can be performed on one or more remote servers, such as application server 106. In some implementations, the user device 110 serves only to provide output and input functionality, with the remainder of the processes being performed remotely.
The user device 110, content server 102, application server 106, and/or other devices and servers can communicate with each other through communications network 114. The communication can take place via any media such as standard telephone lines, LAN or WAN links (e.g., T1, T3, 56 kb, X.25), broadband connections (ISDN, Frame Relay, ATM), wireless links (802.11, Bluetooth, GSM, CDMA, etc.), and so on. The network 114 can carry TCP/IP protocol communications and HTTP/HTTPS requests made by a web browser, and the connection between clients and servers can be communicated over such TCP/IP networks. The type of network is not a limitation, however, and any suitable network can be used.
Method steps of the techniques described herein can be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps can also be performed by, and apparatus of the invention can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Modules can refer to portions of the computer program and/or the processor/special circuitry that implements that functionality.
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. One or more memories can store media assets (e.g., audio, video, graphics, interface elements, and/or other media files), configuration files, and/or instructions that, when executed by a processor, form the modules, engines, and other components described herein and perform the functionality associated with the components. The processor and the memory can be supplemented by, or incorporated in special purpose logic circuitry.
It should also be noted that the present implementations can be provided as one or more computer-readable programs embodied on or in one or more articles of manufacture. The article of manufacture can be any suitable hardware apparatus, such as, for example, a floppy disk, a hard disk, a CD-ROM, a CD-RW, a CD-R, a DVD-ROM, a DVD-RW, a DVD-R, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. In general, the computer-readable programs can be implemented in any programming language. The software programs can be further translated into machine language or virtual machine instructions and stored in a program file in that form. The program file can then be stored on or in one or more of the articles of manufacture.
In one implementation, bookmarks can be automatically or manually added to branching multimedia content. As referred to herein, a “bookmark” refers to a designated location in a branching video or other multimedia presentation. A bookmark can be a “point bookmark,” which specifies a location in the video to which the application 112 can seek. A point bookmark can include a timestamp or a positive or negative offset specifying a location in the video with respect to the beginning of the video presentation, the beginning of a particular segment of the video presentation, a decision point, or other point in the video presentation.
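A point bookmark might be represented as follows; the field names and the player's seek interface are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PointBookmark:
    segment_id: str                 # segment containing the bookmarked location
    offset_s: float                 # positive or negative offset from the anchor
    anchor: str = "segment_start"   # or "presentation_start", "decision_point"
    thumbnail_uri: Optional[str] = None

def seek_to_point(player, bookmark: PointBookmark) -> None:
    # Resolve the anchor to an absolute time and seek; 'player' is assumed,
    # for illustration, to expose anchor_time() and seek() methods.
    base = player.anchor_time(bookmark.segment_id, bookmark.anchor)
    player.seek(base + bookmark.offset_s)
```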
A bookmark can also be a “path bookmark,” which represents a path of video segments taken or decisions made over a multimedia presentation based on a tree or other structure.
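Building on the PointBookmark sketch above, a path bookmark can be modeled as the ordered sequence of traversed segments, the decisions made along the way, and a terminal location:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PathBookmark:
    path: List[str]                                     # segment IDs traversed, in order
    decisions: List[str] = field(default_factory=list)  # choice made at each decision point
    point: Optional[PointBookmark] = None               # location within the final segment
```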
A bookmark can be stored in any suitable form. For example, a bookmark can be stored as or connected with a URL, bar code, Quick Response Code, electronic file, and the like. In some implementations, bookmarks can be shared with other users via email, file sharing, or social media (e.g., Facebook, LinkedIn, Twitter).
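As one illustration of storing a bookmark as a URL, a path bookmark could be serialized into query parameters; the parameter names are invented for this sketch, and any stable encoding (including a bar code or QR code wrapping the same string) would serve equally well.

```python
from urllib.parse import urlencode

def bookmark_to_url(base_url: str, bookmark: PathBookmark) -> str:
    """Encode a path bookmark as a shareable URL (hypothetical parameters;
    assumes bookmark.point is set)."""
    params = {
        "path": ",".join(bookmark.path),
        "seg": bookmark.point.segment_id,
        "t": f"{bookmark.point.offset_s:.3f}",
    }
    return f"{base_url}?{urlencode(params)}"

# e.g. bookmark_to_url("https://example.com/watch", bm)
#   -> "https://example.com/watch?path=intro%2Cverse_female&seg=verse_female&t=12.500"
```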
In STEP 310, the application 112 receives a selection of a location in one of the video segments in the traversed path. The selection can be made automatically (e.g., by the application 112) or by the user viewing or editing the video presentation.
In some implementations, bookmarks can be associated with a video presentation during its editing and/or creation. For example, the application 112 can automatically create a list of bookmarks representing points of interest in the video tree, such as particular segments, decision points, decision periods, locations of interactive multimedia (e.g., buttons or other interface elements appearing on the video). In one instance, the application 112 automatically creates a table of contents for the video. In other implementations, one or more video segments in the video tree can be analyzed by other software programs to automatically suggest other points of interest. For example, image analysis can be performed to locate all instances in which an advertised product occurs in a video, audio recognition can be used to create bookmarks when particular songs are playing, and so on.
In another implementation, the application 112 considers one or more video parameters, statistics, and/or other attributes in automatically creating a list of bookmarks. This can include, for example, data associated with the user, such as demographics, geography, and social media information (e.g., accounts, connections, likes, tweets, etc.); historical data, such as decisions or selections made in previous plays of a video presentation by the user, the user's friends, the user's social networking connections, and/or other users; content information, such as video length, segment length, path length, and content subject matter, and so on. For example, a bookmark can be created for a video segment option based on how popular that option is and/or how many times the option was chosen or not chosen by the user or by other users.
It is to be appreciated that a vast number of possibilities exist for the automatic creation of point and path bookmarks based on the video tree structure, segment content, and other parameters and statistics. Examples of automatically created bookmarks include, but are not limited to, the shortest path to reach a segment, the most popular segment among the user's friends from a selection of segment options, the most commonly reached final segment in a video tree in the user's country, the most popular first segment option selected by other users who expressed an interest in a particular product in a different video, the least popular path followed among all users, and so on.
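For instance, a popularity-driven bookmark suggestion can reduce to a simple frequency count over logged choices; a minimal sketch, assuming a log of the segment IDs chosen at a decision point:

```python
from collections import Counter
from typing import List

def most_popular_option(choice_log: List[str]) -> str:
    """Return the most frequently chosen option as a candidate for an
    automatically created bookmark (assumes a non-empty log)."""
    return Counter(choice_log).most_common(1)[0][0]

# most_popular_option(["soda", "beer", "soda"]) -> "soda"
```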
In one implementation, a dynamic bookmark can be created by a user or automatically created by the application 112. A “dynamic bookmark” can refer to a bookmark that includes a reference to a tracked statistic. The statistic can be updated in real-time or periodically based on the particular activity or activities that the statistic tracks. The application 112 can track activities and statistics locally and/or can communicate with a remote server, such as application server 106, for tracking statistics over a number of separate users. Statistics can be those described above, such as the popularity of a selection made during a video presentation.
In one example, the tracked statistic represents the most popular choice among all users of three drinks that a particular user can select from at a decision point in the video tree. The dynamic bookmark can be tied to the tracked statistic such that, when the dynamic bookmark is selected, the current state of the statistic determines where the bookmark will map to in the video tree. Referring to the drinks example, at time 1, soda may be the most popular drink according to the statistic and, when the dynamic bookmark is selected at time 1, the application 112 seeks to the segment in the video tree corresponding to the soda choice. Popular opinion then changes over time such that, at time 2, beer is the most popular drink. Accordingly, when the dynamic bookmark is selected at time 2, the application 112 seeks to the segment in the video tree corresponding to the beer choice. The statistic associated with a dynamic bookmark can be consulted every time the bookmark is selected or, alternatively, updated values of the statistic can be pushed to the dynamic bookmark periodically or as they change, avoiding the need to look up the statistic on each selection.
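A dynamic bookmark can thus be modeled as a bookmark carrying a reference to a statistic rather than a fixed target; a sketch, with a hypothetical player.jump_to_segment() method:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DynamicBookmark:
    # Callable returning the segment ID the tracked statistic currently maps
    # to, e.g. lambda: most_popular_option(current_drink_choices).
    statistic: Callable[[], str]

def select_dynamic_bookmark(player, bookmark: DynamicBookmark) -> None:
    # The mapping is resolved at selection time, so the same bookmark can lead
    # to the "soda" segment at time 1 and the "beer" segment at time 2.
    player.jump_to_segment(bookmark.statistic())
```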
After creation of the bookmark, it can be retrieved by the same user or a different user (e.g., if the bookmark is made available on a publicly accessible server or otherwise shared). The bookmark can be made accessible and/or displayed to one or more users in a menu, library, list, or other format, and can be visually represented as text and/or images (e.g., a thumbnail of the video frame at the bookmarked location). In one implementation, a visual indicator of a bookmark is displayed on a timeline or progress bar at a location representing the point in time in the video presentation or a particular segment corresponding to the bookmark.
In STEP 318, a selection of the bookmark is received and, based on the selection, the application 112 restores the bookmark in accordance with its form. The bookmark can be selected by the user, by another user, or automatically by the application 112. If the stored bookmark is a point bookmark, the application 112 can seek to the bookmarked location in the associated video segment (STEP 322). The application 112 can begin playing the video from this location or can stop or pause playback of the video after seeking. If, on the other hand, the stored bookmark is a path bookmark, the application 112 can seek to the bookmarked location as well as restore the path traversed and/or decisions made to reach the bookmarked location (STEP 324).
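The restoration step can be sketched as a dispatch on the bookmark's form, reusing the hypothetical types and player methods from the sketches above:

```python
def restore_bookmark(player, bookmark) -> None:
    """Restore a bookmark according to its form (cf. STEPs 322 and 324)."""
    if isinstance(bookmark, PathBookmark):
        # Re-establish the traversed segments and decisions first (STEP 324),
        # then seek within the final segment; set_history() is hypothetical.
        player.set_history(bookmark.path, bookmark.decisions)
        seek_to_point(player, bookmark.point)
    elif isinstance(bookmark, PointBookmark):
        # Seek only (STEP 322); playback can then resume or remain paused.
        seek_to_point(player, bookmark)
```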
In some implementations, the application 112 automatically selects the bookmark. For example, the application 112 can send the user to a particular bookmark at the start of or during playback of a video presentation. For instance, if in traversing a different video presentation the user selects a segment option indicating his gender, the application 112 can cause playback of the first video presentation to jump to a bookmark associated with the user's gender. In other implementations, when seeking forward or backward along a video, the application 112 can “snap” to a particular bookmarked location.
In one implementation, when a path bookmark is restored, a visual representation of the path traversed and/or decisions made to reach the bookmarked location is provided to the application user. The visual representation of the path and/or decisions can be provided, for example, in a tree format or on a progress bar or timeline.
Upon restoration of a bookmark, decisions that were made by the user along the path (e.g., decisions made at decision points or during decision periods) to reach the bookmarked location can also be visually represented as, for example, text and/or images associated with the decisions made. For example, if at a decision point in the video presentation, and prior to creating the bookmark, the user had selected a female character instead of a male character to continue the presentation with, that decision can be represented as a thumbnail image of the female character shown on the visual representation of the path provided on restoration of the bookmark.
In some implementations, a user is able to interact with the visual representation of the path/decisions (e.g., if the representation is provided on a progress bar or timeline) that is provided when a bookmark is restored. For example, the user can select a point in time on the path of segments leading up to the bookmarked location in order to seek to that point. However, if video segments in the path prior to the bookmarked location are not buffered or otherwise locally cached, they may need to be retrieved by the application 112 from the content server 102 prior to commencing playback of video at the point in time selected by the user.
In one implementation, path and/or point bookmarks are automatically updated upon a change in content or structure of the underlying media presentation, such as a change in segment length, alteration of the tree structure (e.g., addition or removal of a segment, addition or removal of a connection between segments, etc.), modification of the video, audio, or interface, and so on.
Updates to a bookmark can be automatically performed by the application 112 (e.g., a video editor, player or content authoring tool). In some instances, content segments are stored in a file and directory structure and, upon a change to the video tree and associated directory structure, the bookmark is updated, if necessary, to point to the new location of the segment. If the filename of the bookmarked segment is modified, the old name can be linked to the new name to allow the bookmark to correctly persist. In the case where a segment in the path of segments to the bookmarked location is added or removed (e.g., if the “Male” and “Female” option segments are removed from the tree), the path stored in the bookmark can be updated to reflect the modified sequence of segments.
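One possible shape for such an update pass, as a sketch: apply any old-name-to-new-name links, drop path entries whose segments no longer exist, and report whether the bookmarked segment itself survived (if not, the bookmark must be handled as described below).

```python
from typing import Dict, Set

def update_path_bookmark(bookmark: PathBookmark,
                         tree_segments: Set[str],
                         renamed: Dict[str, str]) -> bool:
    """Rewrite a path bookmark after the video tree changes; returns False if
    the bookmarked segment was deleted, in which case the bookmark needs
    manual repair or a default location."""
    bookmark.path = [renamed.get(s, s) for s in bookmark.path
                     if renamed.get(s, s) in tree_segments]
    target = renamed.get(bookmark.point.segment_id, bookmark.point.segment_id)
    bookmark.point.segment_id = target
    return target in tree_segments
```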
Prior to publishing a video presentation, the application 112 can verify that all bookmarks associated with the video presentation are valid, and update any bookmarks that need to be mapped to new locations based on changes to the video structure. If the application 112 encounters a bookmark that cannot be updated (e.g., if the bookmarked location has been deleted), the application 112 can notify the video editor and allow him to manually update the bookmark. In some implementations, bookmarks that are invalid or cannot be updated can point to a default location, such as the beginning of the first segment in the video structure.
One will appreciate the various uses of the techniques described herein. For example, an advertisement for shampoo can include various bookmarked locations, and a user can be provided with a link to an appropriate location in the advertisement (e.g., a video segment relating to a shampoo product for blondes) based on information known about or gathered from the user. In another example, a user can interact with a video presentation to select a desired product, and the user is redirected to a website where the user can buy the product. At the website, the user can view the product and other products, and can be provided with bookmarks to locations in the video where those products appear.
In a further example, a user watching an interactive educational video can dive into various topics and create one or more bookmarks saving the locations he visited in the video. At a later time, the user can return to the video and select the bookmarks to recreate the paths he took to reach each topic. Similarly, a news video can have a number of associated bookmarks that each point to a specific section of the news (e.g., weather, sports, etc.). Even if new content segments are added or the particular content the bookmark points to is changed (e.g., news content segments are updated each day), the bookmark can be configured to always direct the user to the same place in the corresponding video tree. In yet another example, an interactive video game is provided using a video tree such as those described herein. The user can create a bookmark to save his progress and, using the bookmark at a later time, restore the sequence of events and decisions he made in playing the game up to the bookmarked point. As another example, a bookmark can be created for a most-traversed path of songs or music videos in an interactive media presentation, and a user can select the bookmark to play the path on a television or radio. The bookmark can also be dynamically updated as the most-traversed path changes from time to time.
Although the systems and methods described herein relate primarily to audio and video playback, the invention is equally applicable to various streaming and non-streaming media, including animation, video games, interactive media, and other forms of content usable in conjunction with the present systems and methods. Further, there can be more than one audio, video, and/or other media content stream played in synchronization with other streams. Streaming media can include, for example, multimedia content that is continuously presented to a user while it is received from a content delivery source, such as a remote video server. If a source media file is in a format that cannot be streamed and/or does not allow for seamless connections between segments, the media file can be transcoded or converted into a format supporting streaming and/or seamless transitions.
While various implementations of the present invention have been described herein, it should be understood that they have been presented by example only. Where methods and steps described above indicate certain events occurring in certain order, those of ordinary skill in the art having the benefit of this disclosure would recognize that the ordering of certain steps can be modified and that such modifications are in accordance with the given variations. For example, although various implementations have been described as having particular features and/or combinations of components, other implementations are possible having any combination or sub-combination of any features and/or components from any of the implementations described herein.
This application is a continuation of U.S. patent application Ser. No. 14/509,700, filed on Oct. 8, 2014, and entitled “Systems and Methods for Dynamic Video Bookmarking,” the entirety of which is incorporated by reference herein.
U.S. Appl. No. 12/706,721, U.S. Pat. No. 9,190,110, Published as US2010/0293455, System and Method for Assembling a Recorded Composition, filed Feb. 17, 2010.
U.S. Appl. No. 13/033,916, U.S. Pat. No. 9,607,655, Published as US2011/0200116, System and Method for Seamless Multimedia Assembly, filed Feb. 24, 2011.
U.S. Appl. No. 13/034,645, Published as US2011/0202562, System and Method for Data Mining Within Interactive Multimedia, filed Feb. 24, 2011.
U.S. Appl. No. 14/884,285, Published as US2016/0170948, System and Method for Assembling a Recorded Composition, filed Oct. 15, 2015.
U.S. Appl. No. 13/437,164, U.S. Pat. No. 8,600,220, Published as US2013/0259442, Systems and Methods for Loading More Than One Video Content at a Time, filed Apr. 2, 2012.
U.S. Appl. No. 14/069,694, U.S. Pat. No. 9,271,015, Published as US2014/0178051, Systems and Methods for Loading More Than One Video Content at a Time, filed Nov. 1, 2013.
U.S. Appl. No. 13/622,780, U.S. Pat. No. 8,860,882, Published as US2014/0078397, Systems and Methods for Constructing Multimedia Content Modules, filed Sep. 19, 2012.
U.S. Appl. No. 13/622,795, U.S. Pat. No. 9,009,619, Published as US2014/0082666, Progress Bar for Branched Videos, filed Sep. 19, 2012.
U.S. Appl. No. 14/639,579, Published as US2015/0199116, Progress Bar for Branched Videos, filed Mar. 5, 2015.
U.S. Appl. No. 13/838,830, U.S. Pat. No. 9,257,148, Published as US2014/0270680, System and Method for Synchronization of Selectably Presentable Media Streams, filed Mar. 15, 2013.
U.S. Appl. No. 14/984,821, U.S. Pat. No. 10,418,066, Published as US2016/0217829, System and Method for Synchronization of Selectably Presentable Media Streams, filed Dec. 30, 2015.
U.S. Appl. No. 13/921,536, U.S. Pat. No. 9,832,516, Published as US2014/0380167, Systems and Methods for Multiple Device Interaction with Selectably Presentable Media Streams, filed Jun. 19, 2013.
U.S. Appl. No. 14/107,600, U.S. Pat. No. 10,448,119, Published as US2015/0067723, Methods and Systems for Unfolding Video Pre-Roll, filed Dec. 16, 2013.
U.S. Appl. No. 14/335,381, U.S. Pat. No. 9,530,454, Published as US2015/0104155, Systems and Methods for Real-Time Pixel Switching, filed Jul. 18, 2014.
U.S. Appl. No. 15/356,913, Systems and Methods for Real-Time Pixel Switching, filed Nov. 21, 2016.
U.S. Appl. No. 14/139,996, U.S. Pat. No. 9,641,898, Published as US2015/0181301, Methods and Systems for In-Video Library, filed Dec. 24, 2013.
U.S. Appl. No. 14/140,007, U.S. Pat. No. 9,520,155, Published as US2015/0179224, Methods and Systems for Seeking to Non-Key Frames, filed Dec. 24, 2013.
U.S. Appl. No. 14/249,627, U.S. Pat. No. 9,653,115, Published as US2015/0294685, Systems and Methods for Creating Linear Video From Branched Video, filed Apr. 10, 2014.
U.S. Appl. No. 15/481,916, Published as US2017/0345460, Systems and Methods for Creating Linear Video From Branched Video, filed Apr. 7, 2017.
U.S. Appl. No. 14/249,665, U.S. Pat. No. 9,792,026, Published as US2015/0293675, Dynamic Timeline for Branched Video, filed Apr. 10, 2014.
U.S. Appl. No. 14/509,700, U.S. Pat. No. 9,792,957, Published as US2016/0104513, Systems and Methods for Dynamic Video Bookmarking, filed Oct. 8, 2014.
U.S. Appl. No. 14/534,626, Published as US2016/0105724, Systems and Methods for Parallel Track Transitions, filed Nov. 6, 2014.
U.S. Appl. No. 14/700,845, Published as US2016/0323608, Systems and Methods for Nonlinear Video Playback Using Linear Real-Time Video Players, filed Apr. 30, 2015.
U.S. Appl. No. 14/700,862, U.S. Pat. No. 9,672,868, Published as US2016/0322054, Systems and Methods for Seamless Media Creation, filed Apr. 30, 2015.
U.S. Appl. No. 14/835,857, Published as US2017/0062012, Systems and Methods for Adaptive and Responsive Video, filed Aug. 26, 2015.
U.S. Appl. No. 16/559,082, Systems and Methods for Adaptive and Responsive Video, filed Sep. 3, 2019.
U.S. Appl. No. 14/978,464, Published as US2017/0178601, Intelligent Buffering of Large-Scale Video, filed Dec. 22, 2015.
U.S. Appl. No. 14/978,491, Published as US2017/0178409, Seamless Transitions in Large-Scale Video, filed Dec. 22, 2015.
U.S. Appl. No. 15/085,209, Published as US2017/0289220, Media Stream Rate Synchronization, filed Mar. 30, 2016.
U.S. Appl. No. 15/165,373, Published as US2017/0295410, Video Symbiotic Interactive, filed May 26, 2016.
U.S. Appl. No. 15/189,931, U.S. Pat. No. 10,218,760, Published as US2017/0374120, Dynamic Summary Generation for Real-time Switchable Videos, filed Jun. 22, 2016.
U.S. Appl. No. 15/395,477, Published as US2018/0191574, Systems and Methods for Dynamic Weighting of Branched Video Paths, filed Dec. 30, 2016.
U.S. Appl. No. 15/997,284, Interactive Video Dynamic Adaptation and User Profiling, filed Jun. 4, 2018.
U.S. Appl. No. 15/863,191, U.S. Pat. No. 10,257,578, Dynamic Library Display for Interactive Videos, filed Jan. 5, 2018.
U.S. Appl. No. 16/283,066, Dynamic Library Display for Interactive Videos, filed Feb. 22, 2019.