Dynamic library display for interactive videos

Information

  • Patent Grant
  • Patent Number
    11,528,534
  • Date Filed
    Friday, November 6, 2020
  • Date Issued
    Tuesday, December 13, 2022
Abstract
A video library interface provides a listing of interactive videos and information associated with the videos and is dynamically updated as a user views the videos and makes decisions that affect the playback of the episodes. More specifically, an interactive video that includes different traversable video paths is provided to and interacted with by a user. Based on user interactions received during presentation of the video, different video paths within the interactive video are traversed. In addition, a video library display including a visual depiction of information associated with a plurality of videos is provided. The video library display is dynamically modified based on one or more interactions made by the user with respect to the interactive video.
Description
FIELD OF THE INVENTION

The present disclosure relates generally to audiovisual presentations and, more particularly, to systems and methods for dynamically modifying the features of a video library display based on decisions made in interactive videos.


BACKGROUND

Online streaming and cable media services often present viewers with a library display on their computers, televisions, or other devices that allows the viewers to browse among television shows, movies, and other various forms of media content. Netflix, Amazon Video, and Hulu, for example, make it easy for a viewer to browse through a library of episodes for a television series and view information about each episode, such as the title, actors, episode length, and a representative image. This information is generally static and the same for all viewers, as it is representative of static media content. With interactive videos, however, static information may not adequately describe the videos for users having different individual experiences in the interactive videos.


SUMMARY

Systems and methods are described for implementing a video library interface/display having a listing of interactive videos and information associated therewith that is dynamically updated based on user decisions made within the interactive videos. In one aspect, a computer-implemented method includes the steps of providing an interactive video comprising a plurality of traversable video paths; receiving, during presentation of the interactive video to a user, a first interaction with the interactive video, the first interaction comprising a decision made by the user in the interactive video; traversing a particular video path in the interactive video in response to the first interaction; providing a video library display comprising a visual depiction of information associated with a plurality of videos; and dynamically modifying the video library display based on one or more interactions made by the user with respect to the interactive video, the one or more interactions including the first interaction. Other aspects of the foregoing include corresponding systems and computer programs on non-transitory storage media.


Various implementations can include one or more of the following features. The videos include individual episodes of a series. The visual depiction of information comprises a list of the videos, and dynamically modifying the video library display comprises removing one of the videos from the list, adding a video to the list, or changing an order of videos in the list. The visual depiction of information comprises at least one of metadata associated with a particular video, a thumbnail image of a particular video, and a summary of a particular video. Dynamically modifying the video library display comprises modifying the metadata, thumbnail image, or summary of a first one of the videos. The metadata, thumbnail image, or summary of the first video is modified to reflect one or more decisions made by the user in the first video. Dynamically modifying the video library display comprises including in the video library display supplemental content relating to one or more of the plurality of videos.


In one implementation, a selection of a first one of the videos in the video library display is received, and presentation of the first video is commenced at a first decision point in the first video, where a plurality of possible traversable video paths branch from the first decision point. The visually depicted information can include visual references to a plurality of traversable decision points in the first video including the first decision point, and presentation of the first video can be commenced based on receiving a selection of the first decision point in the visual references by the user.


In another implementation, a first one of the videos comprises an interactive video comprising a plurality of traversable video paths; the first video is presented a plurality of times, wherein in each presentation of the first video, at least one different video path is traversed; information relating to the different traversed video paths is aggregated over the plurality of times the first video is presented; and the video library display is dynamically modified by including in the visual depiction of information the aggregated information.


Further aspects and advantages of the invention will become apparent from the following drawings, detailed description, and claims, all of which illustrate the principles of the invention, by way of example only.





BRIEF DESCRIPTION OF THE DRAWINGS

A more complete appreciation of the invention and many attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings. In the drawings, like reference characters generally refer to the same parts throughout the different views. Further, the drawings are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the invention.



FIG. 1 depicts a high-level system architecture for providing interactive media content according to an implementation.



FIG. 2 depicts an example client-server system architecture for providing a dynamically updating video library interface.



FIG. 3 depicts example displays of a standard video library interface.



FIG. 4 depicts the progression of an episode listing in one implementation of a video library interface.



FIGS. 5A-5C depict various progressions of video information screens in one implementation of a video library interface.



FIG. 6 depicts an example video information screen with partial video metadata.



FIG. 7 depicts an example video information screen with decision point links.



FIG. 8 depicts a method for updating video indexes and episode data in a video library interface according to an implementation.





DETAILED DESCRIPTION

Described herein are various implementations of methods and supporting systems for dynamically modifying a video library display based on decisions made, paths traversed, or other events occurring in an interactive video. FIG. 1 depicts a high-level architecture of such a system according to an implementation. A media presentation having multiple video and/or audio streams can be presented to a user on a user device 110 having one or more application(s) 112 that together are capable of playing and/or editing the content and displaying a video library where information associated with videos can be browsed and videos can be selected for playback. The user device 110 can be, for example, a smartphone, tablet, laptop, desktop, palmtop, television, gaming device, virtual reality headset, smart glasses, smart watch, music player, mobile telephone, workstation, or other computing device configured to execute the functionality described herein. The user device 110 can have output functionality (e.g., display monitor, touchscreen, image projector, etc.) and input functionality (e.g., touchscreen, keyboard, mouse, remote control, etc.).


The application 112 can be a video player/editor and library browser that is implemented as a native application, web application, or other form of software. In some implementations, the application 112 is in the form of a web page, widget, and/or Java, JavaScript, .Net, Silverlight, Flash, and/or other applet or plug-in that is downloaded to the user device 110 and runs in conjunction with a web browser. The application 112 and the web browser can be part of a single client-server interface; for example, the application 112 can be implemented as a plugin to the web browser or to another framework or operating system. Any other suitable client software architecture, including but not limited to widget frameworks and applet technology, can also be employed.


Media content can be provided to the user device 110 by content server 102, which can be a web server, media server, a node in a content delivery network, or other content source. In some implementations, the application 112 (or a portion thereof) is provided by application server 106. For example, some or all of the described functionality of the application 112 can be implemented in software downloaded to or existing on the user device 110 and, in some instances, some or all of the functionality exists remotely. For example, certain video encoding and processing functions can be performed on one or more remote servers, such as application server 106. In some implementations, the user device 110 serves only to provide output and input functionality, with the remainder of the processes being performed remotely.


The user device 110, content server 102, application server 106, and/or other devices and servers can communicate with each other through communications network 114. The communication can take place via any media such as standard telephone lines, LAN or WAN links (e.g., T1, T3, 56 kb, X.25), broadband connections (ISDN, Frame Relay, ATM), wireless links (802.11, Bluetooth, GSM, CDMA, etc.), and so on. The network 114 can carry TCP/IP protocol communications and HTTP/HTTPS requests made by a web browser, and the connection between clients and servers can be communicated over such TCP/IP networks. The type of network is not a limitation, however, and any suitable network can be used.


More generally, the techniques described herein can be implemented in any suitable hardware or software. If implemented as software, the processes can execute on a system capable of running one or more custom operating systems or commercial operating systems such as the Microsoft Windows® operating systems, the Apple OS X® operating systems, the Apple iOS® platform, the Google Android™ platform, the Linux® operating system and other variants of UNIX® operating systems, and the like. The software can be implemented on a computer including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit.


The system can include a plurality of software modules stored in a memory and executed on one or more processors. The modules can be in the form of a suitable programming language, which is converted to machine language or object code to allow the processor or processors to read the instructions. The software can be in the form of a standalone application, implemented in any suitable programming language or framework.


Method steps of the techniques described herein can be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps can also be performed by, and apparatus of the invention can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Modules can refer to portions of the computer program and/or the processor/special circuitry that implements that functionality.


Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. One or more memories can store media assets (e.g., audio, video, graphics, interface elements, and/or other media files), configuration files, and/or instructions that, when executed by a processor, form the modules, engines, and other components described herein and perform the functionality associated with the components. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.


It should also be noted that the present implementations can be provided as one or more computer-readable programs embodied on or in one or more articles of manufacture. The article of manufacture can be any suitable hardware apparatus, such as, for example, a floppy disk, a hard disk, a CD-ROM, a CD-RW, a CD-R, a DVD-ROM, a DVD-RW, a DVD-R, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. In general, the computer-readable programs can be implemented in any programming language. The software programs can be further translated into machine language or virtual machine instructions and stored in a program file in that form. The program file can then be stored on or in one or more of the articles of manufacture.


The media presentations referred to herein can be structured in various forms. For example, a particular media presentation can be an online streaming video having multiple tracks or streams that a user can switch among in real-time or near real-time. For example, a media presentation can be structured using parallel audio and/or video tracks as described in U.S. patent application Ser. No. 14/534,626, filed on Nov. 6, 2014, and entitled “Systems and Methods for Parallel Track Transitions,” the entirety of which is incorporated by reference herein. More specifically, a playing video file or stream can have one or more parallel tracks that can be switched among in real-time automatically and/or based on user interactions. In some implementations, such switches are made seamlessly and substantially instantaneously, such that the audio and/or video of the playing content can continue without any perceptible delays, gaps, or buffering. In further implementations, switches among tracks maintain temporal continuity; that is, the tracks can be synchronized to a common timeline so that there is continuity in audio and/or video content when switching from one track to another (e.g., the same song is played using different instruments on different audio tracks; same storyline performed by different characters on different video tracks, and the like).


Such media presentations can also include interactive video structured in a video tree, hierarchy, or other form. A video tree can be formed by nodes that are connected in a branching, hierarchical, or other linked form. Nodes can each have an associated video segment, audio segment, graphical user interface (GUI) elements, and/or other associated media. Users (e.g., viewers) can watch a video that begins from a starting node in the tree and proceeds along connected nodes in a branch or path. Upon reaching a point during playback of the video where multiple video segments (child nodes) branch off from a segment (parent node), the user can interactively select the branch or path to traverse and, thus, the next video segment to watch.
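
The disclosure does not prescribe any particular in-memory representation for such a tree; purely as a non-authoritative sketch, a branching structure of the kind described above could be modeled as follows, where all type and field names are hypothetical:

```typescript
// Hypothetical sketch of a node-based video tree; names are illustrative only.
interface Choice {
  label: string;        // text shown on the in-video option selector
  nextNodeId: string;   // child node played if this choice is taken
}

interface VideoNode {
  id: string;
  videoUrl: string;     // video segment associated with this node
  choices: Choice[];    // empty for an ending (leaf) node
}

// Follow one branch of the tree from the starting node, asking the
// caller to resolve each decision point, and return the traversed path.
function traversePath(
  nodes: Map<string, VideoNode>,
  startId: string,
  pickChoice: (node: VideoNode) => Choice
): string[] {
  const path: string[] = [];
  let current = nodes.get(startId);
  while (current) {
    path.push(current.id);
    if (current.choices.length === 0) break;  // reached an ending node
    current = nodes.get(pickChoice(current).nextNodeId);
  }
  return path;
}
```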


As referred to herein, a particular branch or path in an interactive media structure, such as a video tree, can refer to a set of consecutively linked nodes between a starting node and ending node, inclusively, or can refer to some or all possible linked nodes that are connected subsequent to (e.g., sub-branches) or that include a particular node. Branched video can include seamlessly assembled and selectably presentable multimedia content such as that described in U.S. patent application Ser. No. 13/033,916, filed on Feb. 24, 2011, and entitled “System and Method for Seamless Multimedia Assembly” (the “Seamless Multimedia Assembly application”), and U.S. patent application Ser. No. 14/107,600, filed on Dec. 16, 2013, and entitled “Methods and Systems for Unfolding Video Pre-Roll,” the entireties of which are hereby incorporated by reference.


The prerecorded video segments in a video tree or other structure can be selectably presentable multimedia content; that is, some or all of the video segments in the video tree can be individually or collectively played for a user based upon the user's selection of a particular video segment, an interaction with a previous or playing video segment, or other interaction that results in a particular video segment or segments being played. The video segments can include, for example, one or more predefined, separate multimedia content segments that can be combined in various manners to create a continuous, seamless presentation such that there are no noticeable gaps, jumps, freezes, delays, or other visual or audible interruptions to video or audio playback between segments. In addition to the foregoing, “seamless” can refer to a continuous playback of content that gives the user the appearance of watching a single, linear multimedia presentation, as well as a continuous playback of multiple content segments that have smooth audio and/or video transitions (e.g., fadeout/fade-in, linking segments) between two or more of the segments.


In some instances, the user is permitted to make choices or otherwise interact in real-time at decision points or during decision periods interspersed throughout the multimedia content. Decision points and/or decision periods can occur at any time and in any number during a multimedia segment, including at or near the beginning and/or the end of the segment. Decision points and/or periods can be predefined, occurring at fixed points or during fixed periods in the multimedia content segments. Based at least in part on the user's choices made before or during playback of content, one or more subsequent multimedia segment(s) associated with the choices can be presented to the user. In some implementations, the subsequent segment is played immediately and automatically following the conclusion of the current segment, whereas in other implementations, the subsequent segment is played immediately upon the user's interaction with the video, without waiting for the end of the decision period or the end of the segment itself.


If a user does not make a selection at a decision point or during a decision period, a default, previously identified selection, or random selection can be made by the system. In some instances, the user is not provided with options; rather, the system automatically selects the segments that will be shown based on information that is associated with the user, other users, or other factors, such as the current date. For example, the system can automatically select subsequent segments based on the user's IP address, location, time zone, the weather in the user's location, social networking ID, saved selections, stored user profiles, preferred products or services, and so on. The system can also automatically select segments based on previous selections made by other users, such as the most popular suggestion or shared selections. The information can also be displayed to the user in the video, e.g., to show the user why an automatic selection is made. As one example, video segments can be automatically selected for presentation based on the geographical location of three different users: a user in Canada will see a twenty-second beer commercial segment followed by an interview segment with a Canadian citizen; a user in the US will see the same beer commercial segment followed by an interview segment with a US citizen; and a user in France is shown only the beer commercial segment.
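
As a minimal sketch of the fallback behavior described above, reusing the hypothetical VideoNode and Choice types from the earlier tree sketch, the selection at a decision point might resolve in order of preference: the user's pick, then a predefined default, then a random branch:

```typescript
// Hedged sketch: resolve a decision point using the user's selection,
// a predefined default, or a random branch, in that order of preference.
function resolveChoice(
  node: VideoNode,
  userPick?: Choice,
  defaultLabel?: string
): Choice {
  if (userPick) return userPick;
  const preset = node.choices.find((c) => c.label === defaultLabel);
  if (preset) return preset;
  return node.choices[Math.floor(Math.random() * node.choices.length)];
}
```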


Multimedia segment(s) selected automatically or by a user can be presented immediately following a currently playing segment, or can be shown after other segments are played. Further, the selected multimedia segment(s) can be presented to the user immediately after selection, after a fixed or random delay, at the end of a decision period, and/or at the end of the currently playing segment. Two or more combined segments can form a seamless multimedia content path or branch, and users can take multiple paths over multiple playthroughs, and experience different complete, start-to-finish, seamless presentations. Further, one or more multimedia segments can be shared among intertwining paths while still ensuring a seamless transition from a previous segment and to the next segment. The content paths can be predefined, with fixed sets of possible transitions in order to ensure seamless transitions among segments. The content paths can also be partially or wholly undefined, such that, in some or all instances, the user can switch to any known video segment without limitation. There can be any number of predefined paths, each having any number of predefined multimedia segments. Some or all of the segments can have the same or different playback lengths, including segments branching from a single source segment.


Traversal of the nodes along a content path in a tree can be performed by selecting among options that appear on and/or around the video while the video is playing. In some implementations, these options are presented to users at a decision point and/or during a decision period in a content segment. Some or all of the displayed options can hover and then disappear when the decision period ends or when an option has been selected. Further, a timer, countdown, or other visual, aural, or other sensory indicator can be presented during playback of a content segment to inform the user of the point by which he should (or, in some cases, must) make his selection. For example, the countdown can indicate when the decision period will end, which can be at a different time than when the currently playing segment will end. If a decision period ends before the end of a particular segment, the remaining portion of the segment can serve as a non-interactive seamless transition to one or more other segments. Further, during this non-interactive end portion, the next multimedia content segment (and other potential next segments) can be downloaded and buffered in the background for later playback (or potential playback).


A segment that is played after (immediately after or otherwise) a currently playing segment can be determined based on an option selected or other interaction with the video. Each available option can result in a different video and audio segment being played. As previously mentioned, the transition to the next segment can occur immediately upon selection, at the end of the current segment, or at some other predefined or random point. Notably, the transition between content segments can be seamless. In other words, the audio and video continue playing regardless of whether a segment selection is made, and no noticeable gaps appear in audio or video playback between any connecting segments. In some instances, the video continues on to another segment after a certain amount of time if none is chosen, or can continue playing in a loop.


In one example, the multimedia content is a music video in which the user selects options upon reaching segment decision points to determine subsequent content to be played. First, a video introduction segment is played for the user. Prior to the end of the segment, a decision point is reached at which the user can select the next segment to be played from a listing of choices. In this case, the user is presented with a choice as to who will sing the first verse of the song: a tall, female performer, or a short, male performer. The user is given an amount of time to make a selection (i.e., a decision period), after which, if no selection is made, a default segment will be automatically selected. The default can be a predefined or random selection. Of note, the media content continues to play during the time the user is presented with the choices. Once a choice is selected (or the decision period ends), a seamless transition occurs to the next segment, meaning that the audio and video continue on to the next segment as if there were no break between the two segments and the user cannot visually or audibly detect the transition. As the music video continues, the user is presented with other choices at other decision points, depending on which path of choices is followed. Ultimately, the user arrives at a final segment, having traversed a complete multimedia content path.



FIG. 2 depicts one implementation of a detailed architecture of client-side components in application 112 on user device 110, including inputs received from remote sources, such as content server 102 and application server 106. Client-side components include a video player component having a Choice Manager 216, Inputs Collector 244, GUI Manager 254, Loading Manager 262, and Video Appender 270, and a video library component having a Video Data Manager 261, List Manager 265, and Library GUI Module 269. In general, the video player component includes functionality to play the various forms of interactive videos described herein, and the video library component includes functionality to provide and manage a browseable library of media information, as further described below. Content server 102 can make available to the client Videos 225 and other media content, and Media Data 227 associated with the media content (e.g., titles, metadata, images, etc.). The server can also provide a Project Configuration File 230, as further described below.


Inputs Collector 244 receives user inputs 240 from input components such as a device display screen 272, keyboard, mouse, microphone, virtual reality headset, and the like. Such inputs 240 can include, for example, mouse clicks, keyboard presses, touchpad presses, eye movement, head movement, voice input, and other interactions. Inputs Collector 244 provides input information based on the inputs 240 to Choice Manager 216, which also receives information from a Project Configuration File 230 to determine which video segment should be currently played and which video segments may be played or presented as options to be played at a later time. Choice Manager 216 notifies Video Appender 270 of the video segment to be currently played, and Video Appender 270 seamlessly connects that video segment to the video stream being played in real time. Choice Manager 216 notifies Loading Manager 262 of the video segments that may be played or presented as options to be played at a later time.


Project Configuration File 230 can include information defining the media presentation, such as the video tree or other structure, and how video segments can be linked together in various manners to form one or more paths. Project Configuration File 230 can further specify which audio, video, and/or other media files correspond to each segment (e.g., node in a video tree), that is, which audio, video, and/or other media should be retrieved when application 112 determines that a particular segment should be played. Additionally, Project Configuration File 230 can indicate interface elements that should be displayed or otherwise presented to users, as well as when the elements should be displayed, such that the audio, video, and interactive elements of the media presentation are synchronized. Project Configuration File 230 can be stored on user device 110 or can be remotely accessed by Choice Manager 216.
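
The disclosure does not fix a concrete file format for Project Configuration File 230, so the following is a speculative sketch only, expressed as a TypeScript object literal; every field name is an assumption chosen to mirror the description above (segments, their media files, decision periods, options, and a default branch), and the segment contents echo the music-video example given earlier:

```typescript
// Hypothetical shape of a project configuration; illustrative only.
const projectConfig = {
  segments: [
    {
      id: "intro",
      media: { video: "intro.mp4", audio: "intro.aac" },
      decision: {
        startTime: 18.0,           // decision period opens (seconds)
        endTime: 24.0,             // decision period closes
        options: [
          { label: "Tall singer", next: "verse-tall", button: "btn_tall.png" },
          { label: "Short singer", next: "verse-short", button: "btn_short.png" },
        ],
        defaultNext: "verse-tall", // played if the viewer makes no selection
      },
    },
    { id: "verse-tall", media: { video: "verse_tall.mp4" } },
    { id: "verse-short", media: { video: "verse_short.mp4" } },
  ],
};
```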


In some implementations, Project Configuration File 230 is also used in determining which media files should be loaded or buffered prior to being played (or potentially played). Because decision points can occur near the end of a segment, it may be necessary to begin transferring one or more of the potential next segments to viewers prior to a selection being made. For example, if a viewer is approaching a decision point with three possible branches, all three potential next segments can be preloaded partially or fully to ensure a smooth transition upon conclusion of the current segment. Intelligent buffering and progressive downloading of the video, audio, and/or other media content can be performed as described in U.S. patent application Ser. No. 13/437,164, filed Apr. 2, 2012, and entitled “Systems and Methods for Loading More Than One Video Content at a Time,” the entirety of which is incorporated by reference herein.


Using information in Project Configuration File 230, Choice Manager 216 can inform GUI Manager 254 of which interface elements should be displayed to viewers on screen 272. Project Configuration File 230 can further indicate the specific timings for which actions can be taken with respect to the interface elements (e.g., when a particular element is active and can be interacted with). The interface elements can include, for example, playback controls (pause, stop, play, seek, etc.), segment option selectors (e.g., buttons, images, text, animations, video thumbnails, and the like, that a viewer can interact with during decision periods, the selection of which results in a particular multimedia segment being seamlessly played following the conclusion of the current segment), timers (e.g., a clock or other graphical or textual countdown indicating the amount of time remaining to select an option or next segment, which, in some cases, can be the amount of time remaining until the current segment will transition to the next segment), links, popups, an index (e.g., for browsing and/or selecting other multimedia content to view or listen to), and/or a dynamic progress bar such as that described in U.S. patent application Ser. No. 13/622,795, filed Sep. 19, 2012, and entitled “Progress Bar for Branched Videos,” the entirety of which is incorporated by reference herein. In addition to visual elements, sounds or other sensory elements can be presented. For example, a timer can have a “ticking” sound synchronized with the movement of a clock hand. The interactive interface elements can be shared among multimedia segments or can be unique to one or more of the segments.


In addition to reading information from Project Configuration File 230, Choice Manager 216 is notified of user interactions (e.g., mouse clicks, keyboard presses, touchpad presses, eye movements, etc.) from Inputs Collector 244, which interactions can be translated into actions associated with the playback of a media presentation (e.g., segment selections, playback controls, etc.). Based thereon, Choice Manager 216 notifies Loading Manager 262, which can process the actions as further described below. Choice Manager 216 can also interface with Loading Manager 262 and Video Appender 270. For example, Choice Manager 216 can listen for user interaction information from Inputs Collector 244 and notify Loading Manager 262 when an interaction by the viewer (e.g., a selection of an option displayed during the video) has occurred. In some implementations, based on its analysis of received events, Choice Manager 216 causes the presentation of various forms of sensory output, such as visual, aural, tactile, olfactory, and the like.


As earlier noted, Choice Manager 216 can also notify Loading Manager 262 of video segments that may be played at a later time, and Loading Manager 262 can retrieve the corresponding videos 225 (whether stored locally or on, e.g., content server 102) to have them prepared for potential playback through Video Appender 270. Choice Manager 216 and Loading Manager 262 can function to manage the downloading of hosted streaming media according to a loading logic. In one implementation, Choice Manager 216 receives information defining the media presentation structure from Project Configuration File 230 and, using information from Inputs Collector 244, determines which media segments to download and/or buffer (e.g., if the segments are remotely stored). For example, if Choice Manager 216 informs Loading Manager 262 that a particular segment A will or is likely to be played at an upcoming point in the presentation timeline, Loading Manager 262 can intelligently request the segment for download, as well as additional media segments X, Y and Z that can be played following segment A, in advance of playback or notification of potential playback thereof. The downloading can occur even if fewer than all of X, Y, Z will be played (e.g., if X, Y and Z are potential segment choices branching off segment A and only one will be selected for playback).


In some implementations, Loading Manager 262 ceases or cancels downloading of content segments or other media if it determines that it is no longer possible for a particular media content segment (or other content) to be presented on a currently traversed media path. Referring to the above example, a user interacts with the video presentation such that segment Y is determined to be the next segment that will be played. The interaction can be received by Choice Manager 216 and, based on its knowledge of the path structure of the video presentation, Loading Manager 262 is notified to stop active downloads or dequeue pending downloads of content segments no longer reachable now that segment Y has been selected.
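
Purely as a hedged sketch of the preloading and cancellation behavior described in the preceding two paragraphs, and again reusing the hypothetical VideoNode type from the earlier tree sketch, a loading manager might prefetch the children of a likely next segment and abort any downloads that become unreachable once a branch is chosen:

```typescript
// Hedged sketch only: prefetch likely next segments and cancel downloads
// that become unreachable after a branch is chosen. Reuses the
// hypothetical VideoNode type from the earlier tree sketch.
class LoadingManagerSketch {
  private pending = new Map<string, AbortController>();

  constructor(private nodes: Map<string, VideoNode>) {}

  // Partially or fully preload every segment branching off the given node.
  preloadChildren(nodeId: string): void {
    const node = this.nodes.get(nodeId);
    if (!node) return;
    for (const choice of node.choices) {
      const child = this.nodes.get(choice.nextNodeId);
      if (!child || this.pending.has(child.id)) continue;
      const controller = new AbortController();
      this.pending.set(child.id, controller);
      fetch(child.videoUrl, { signal: controller.signal }).catch(() => {});
    }
  }

  // After a selection, abort downloads of segments no longer reachable.
  pruneUnreachable(selectedId: string): void {
    const reachable = new Set<string>([selectedId]);
    const queue = [selectedId];
    while (queue.length > 0) {
      const node = this.nodes.get(queue.shift()!);
      for (const c of node?.choices ?? []) {
        if (!reachable.has(c.nextNodeId)) {
          reachable.add(c.nextNodeId);
          queue.push(c.nextNodeId);
        }
      }
    }
    for (const [id, controller] of this.pending) {
      if (!reachable.has(id)) {
        controller.abort();
        this.pending.delete(id);
      }
    }
  }
}
```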


Video Appender 270 receives media content from Loading Manager 262 and instructions from Choice Manager 216 on which media segments to include in a media presentation. Video Appender 270 can analyze and/or modify raw video or other media content, for example, to concatenate two separate media streams into a single timeline. Video Appender 270 can also insert cue points and other event markers, such as junction events, into media streams. Further, Video Appender 270 can form one or more streams of bytes from multiple video, audio or other media streams, and feed the formed streams to a video playback function such that there is seamless playback of the combined media content on display screen 272 (as well as through speakers for audio, for example).


The client-side video library component includes subcomponents that provide for the management of a browseable library of media information using Media Data 227 received from a server. Video Data Manager 261 receives Media Data 227 and, based on this information, loads and manages the various types of information associated with each available item of media content. List Manager 265 utilizes Media Data 227 to load and manage a listing of all available items of media content. Library GUI Module 269 receives the media information and listing constructed by Video Data Manager 261 and List Manager 265, respectively, and combines this data into a library interface for output to screen 272. A user can interact with the library interface by navigating through the library, viewing information associated with the library items, and selecting an item to play. Subsequently, using playback interfaces in the video player, the user can control the playing media using controls such as play, stop, pause, toggle subtitles, fast-forward, fast-backward, etc.


The video library and video player components also communicate through Choice Manager 216, which as earlier described receives user interactions with playing content through Inputs Collector 244. More specifically, based on the received user interactions, Choice Manager 216 informs List Manager 265 which items of media content should be included in or excluded from the media item listing generated by List Manager 265, and informs Video Data Manager 261 which media information (e.g., metadata, thumbnail images, etc.) can be presented in the video library user interface. In some implementations, List Manager 265 and Video Data Manager 261 save the listing and media information configurations locally and/or on the server for use in regenerating the video library interface at a later time.
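
The following non-authoritative sketch illustrates how a decision event might flow from the player to the library components. The interfaces shown are stand-ins for the Choice Manager, List Manager, and Video Data Manager interactions described above, and the episode identifiers and update rule are invented for illustration (they anticipate the forest/city example discussed below with FIG. 4):

```typescript
// Illustrative wiring only; component names mirror FIG. 2 but the
// interfaces and rule below are hypothetical.
type DecisionEvent = { videoId: string; nodeId: string; choiceLabel: string };

interface ListManagerLike {
  include(videoId: string): void;
  exclude(videoId: string): void;
}

interface VideoDataManagerLike {
  updateMetadata(videoId: string, patch: Record<string, string>): void;
}

// Choice Manager callback: propagate a decision to the library components.
function onDecision(
  event: DecisionEvent,
  list: ListManagerLike,
  data: VideoDataManagerLike
): void {
  // Invented rule: choosing the forest at the end of episode 1 unlocks
  // "Episode 2 - The Forest" and annotates episode 1's summary.
  if (event.videoId === "ep1" && event.choiceLabel === "forest") {
    list.include("ep2-forest");
    data.updateMetadata("ep1", {
      summary: "James left home and traveled toward the forest.",
    });
  }
}
```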


In some implementations, application 112 tracks data regarding user interactions, users, and/or player devices, and provides the data to an analytics server. Collected analytics can include, but are not limited to: the number, type, and/or location of a device; user data, such as login information, name, address, age, sex, and the like; user interactions, such as button/touchpad presses, mouse clicks, mouse/touchpad movements, interaction timings, and the like; decisions made by users or automatically (e.g., content segment user choices or default selections); and content paths followed in the presentation content structure. The analytics can include those described in U.S. patent application Ser. No. 13/034,645, entitled “System and Method for Data Mining within Interactive Multimedia,” and filed Feb. 24, 2011, the entirety of which is incorporated by reference herein.



FIG. 3 depicts a generic interface or display for a video library, in which media list 302 provides a list of episode titles for a video series (i.e., “Episode 1—The Quest”, “Episode 2—The Test,” and so on), as well as titles for supplemental content for the series (i.e., “Trailer” and “Behind the scenes”). Upon browsing to a particular item in the media list 302 (here, Episode 3), a video information display screen 304 is displayed that provides information about the selected item. For example, the video information display screen 304 can depict the episode title, a brief description of the episode, and a representative image of the episode, among other information (e.g., synopsis, actors, genre, tags, etc.). The user can also commence playback of selected media from the media list 302 or video information display screen 304.


In one implementation, media list 302 and/or video information display screen 304 dynamically change based on decisions made by a user or other events occurring within one or more interactive videos. Such interactive videos can include those shown in media list 302 and/or other videos not listed. One will appreciate the various ways in which the displays can change based on the decisions and events, including, but not limited to: including or excluding episodes or other media items in the media list 302; changing the order of the media items in the media list 302; providing different default information, or modifying information (e.g., metadata, thumbnail image, summary, etc.), in the video information display screen 304 for a particular media item; including supplemental content (e.g., trailers, behind-the-scenes videos, interviews, etc.) in the media list 302; and so on.


In one example, as shown in FIG. 4, the video library interface can provide a visual depiction of a listing of interactive episodes that dynamically changes as a user progresses through the episodes. Initially, listing 402 shows only “Episode 1—The Beginning,” prior to the user watching any episodes. The user can select Episode 1 to watch, and can interact with the video and make decisions during playback that affect how the video proceeds. Near the end of the presentation of interactive Episode 1, the user is provided with an in-video map and given the opportunity to proceed to a forest or a city. Depending on the choice the user makes, the video library interface is updated to reflect the user's decision. More specifically, the library interface changes from listing 402 to listing 404, and now includes “Episode 2—The Forest.” On the other hand, had the user decided to proceed to the city at the end of the first episode, the video listing would instead include “Episode 2—The City.” Similarly, at the end of Episode 2, the user is given the choice to travel to a castle or the sea. Upon selecting the sea, the library interface changes from listing 404 to listing 406, which displays the first two episodes representing the user's path thus far, and “Episode 3—The Sea” as the next episode in the series to watch. In some implementations, and as shown in listing 406, supplemental content relating to the user's decisions can be included. Here, a trailer is added, as well as behind-the-scenes footage that corresponds to “Episode 2—The Forest.” In other implementations, the video library interface displays three episodes prior to the user viewing any particular episode, and as the user progresses through the episodes, the names of the episodes in the interface change to reflect the user's choices (e.g., “Episode 2” can become “Episode 2—The City” if the user heads to the city at the end of Episode 1).


In one implementation, the user's progression through an interactive episodic series causes information about the episodes to change within the video library interface. Referring to FIGS. 5A-5C, an interactive series has three interactive episodes, and the user can choose his path through the episodes while viewing them. FIG. 5A depicts video information display screens 502, 504, and 506 for episodes 1-3, respectively, of the interactive series prior to the user watching any of the episodes. At this point in time, video information display screen 502 (for episode 1) includes metadata 510 (episode title and brief description) and a representative thumbnail image 508 of the episode. However, the video information display screens 504 and 506 for episodes 2 and 3, respectively, have yet to include any description or thumbnail images of those episodes because the user has not yet made any decision in episode 1 or later episodes (or, in some instances, other videos not in the series) that would determine what content episodes 2 and 3 would contain.


During presentation of the first interactive video, the user is given the option for the character, James, to travel to “the black forest” or “the islands of doom.” FIG. 5B depicts the video information display screens 522, 524, and 526 for episodes 1-3, respectively, following the user's decision for James to travel to the black forest in episode 1, but before starting episode 2. The metadata 530 in video information display screen 522 for episode 1 is updated to reflect that James left home and traveled toward the black forest. Likewise, video information display screen 524 for episode 2 is dynamically modified so that the metadata 536 refers to the black forest, and the thumbnail image 534 depicts an image of the black forest. Consequently, when the user selects interactive episode 2 for playback, he will begin his journey where episode 1 left off, i.e., traveling to the black forest. Because episode 2 has yet to be viewed, there are no changes to video information display screen 526 for episode 3.



FIG. 5C illustrates the state of the video information display screens 542, 544, and 546 for episodes 1-3, respectively, if the user decides to travel to the islands of doom instead of the black forest at the end of episode 1. Here, the metadata 550 for episode 1 is updated to state that James is on his way to the islands of doom. Likewise, video information display screen 544 for episode 2 includes a brief description 556 of the islands of doom episode and a representative thumbnail image 554 of the islands. Again, there is no change to video information display screen 546 for episode 3.


In some implementations, the metadata for a particular video is partially displayed in a video information display screen. For example, the summary description for an interactive video can have blank spaces that are filled in as a user progresses through that video or through other videos. Referring to FIG. 6, consider, for instance, an interactive murder mystery episode series, where each episode has a description that is initially only partially present, but becomes increasingly filled in as the user progresses through the episodes and makes decisions on behalf of a detective character that solve parts of the mystery. On the video information display screen 602 for episode 1, the description 604 for the episode can be, for example, “It was clear that ______ killed the officer, using a ______ that was hidden in the ______.” Based on decisions made by the user in watching episode 1 (e.g., exploring a particular area, examining certain items, etc.), the description 604 can be updated as the episode progresses or when it is completed. For example, if the path that the user takes through episode 1 results in the discovery of the murder weapon, but not the murderer, the new description for episode 1 can be, “It was still unclear who killed the officer, but the murder was committed with a candlestick that was hidden in the conservatory.” In other implementations, default values, images, or text can be used instead of partial metadata or blanks. For example, the default summary description for episode 1 prior to the user watching the episode can be, “The police officer was killed around midnight, but by whom and with what weapon still remain a mystery.”
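
One simple way to realize such partially displayed metadata, offered only as a sketch under the assumption that summaries are stored as templates with named slots, is to substitute each slot once the corresponding fact has been revealed and leave a blank otherwise (the disclosure's own example also rephrases the sentence after a playthrough, which this minimal version does not attempt):

```typescript
// Sketch: fill summary slots as decisions reveal facts; unknown slots
// render as blanks. The template syntax is an assumption.
const template =
  "It was clear that {killer} killed the officer, using a {weapon} " +
  "that was hidden in the {location}.";

function renderSummary(tpl: string, known: Record<string, string>): string {
  return tpl.replace(/\{(\w+)\}/g, (_match, slot: string) => known[slot] ?? "______");
}

// After a playthrough that uncovered the weapon and where it was hidden:
renderSummary(template, { weapon: "candlestick", location: "conservatory" });
// -> "It was clear that ______ killed the officer, using a candlestick
//     that was hidden in the conservatory."
```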


In one implementation, the video library interface allows the user to select a video for playback from, for example, the video title listing or a video information display screen. FIG. 7 depicts one such video information display screen 702 for an episode titled, “The Quest.” In this episode, the main character, Irene, progresses through a storyline that the user can affect by his decisions. The episode description 704 reflects the decisions of a user who has already watched the video and made decisions resulting in the storyline of, “Irene went to a party, she talked to Drew and then they went to the airport.” The decisions made by the user are the text portions of the description 704 in bold, including the choice of going to a party (versus a different destination), talking to Drew (instead of another person), and going to the airport (rather than another location).


Notably, not only can the user start playback of episode 1 from this screen 702, but the user can also easily navigate to a particular decision point in the interactive video. This allows the user to change his previous decision, if desired, and continue the video from that point (or start from a particular decision point on the first playback of the video). To facilitate this navigation process, the metadata (episode description 704) in the video information display screen 702 for the video includes links that the user can select. As noted above, the description 704 includes three bolded portions of text (“party”, “Drew”, and “airport”) that correspond to decision points in episode 1, and by selecting one of the text portions, the user can navigate to the corresponding decision point in the episode. Thus, for example, by selecting “Drew,” the user can start episode 1 at the point in time where Irene is deciding whom at the party to talk to, and can choose to speak with someone else. The episode will then continue based on the new decision, and the episode metadata can be updated accordingly. Similarly, if the user decides to restart the episode from the beginning and make different decisions within the episode, the metadata associated with all decisions in that episode can dynamically change as well. In further implementations, video summaries can exhibit similar behavior; e.g., when a user interacts with (clicks, taps, etc.) a particular video summary, he will be navigated to the part of the video where the decision reflected in the summary was made.
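
A hedged sketch of such decision-point links follows; it assumes, for illustration only, that a summary stores its linked text spans alongside the identifiers of the decision points they reference, so that selecting a span seeks playback to that point:

```typescript
// Hypothetical linked-summary structure; selecting a linked span seeks
// the player to the corresponding decision point.
interface SummaryLink {
  text: string;    // bolded span in the description, e.g. "Drew"
  nodeId: string;  // decision point the span navigates to
}

interface LinkedSummary {
  text: string;
  links: SummaryLink[];
}

const ep1Summary: LinkedSummary = {
  text: "Irene went to a party, she talked to Drew and then they went to the airport.",
  links: [
    { text: "party", nodeId: "decide-destination" },
    { text: "Drew", nodeId: "decide-conversation" },
    { text: "airport", nodeId: "decide-next-location" },
  ],
};

// Clicking "Drew" restarts playback where Irene chooses whom to talk to.
function onLinkClick(
  link: SummaryLink,
  player: { seekToNode(id: string): void }
): void {
  player.seekToNode(link.nodeId);
}
```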


In some implementations, interactions within one video can affect information associated with and displayed in the video library interface for that video as well as other videos in the same episodic series, or even other unrelated videos. FIG. 8 is a diagram depicting the interconnections between episodes in a series and how interactions of a user with one episode can influence an index of episodes and information associated with other episodes in the series. On entering a video library interface (802), the video index is updated and made available for display to the user (804). Updating the video index can include, for example, changing the listing of videos displayed to the user. In addition, the data associated with each episode (e.g., metadata, images, etc.) is also updated (806a, 806b, 806c, . . . , 806n). The updates to the index and episode data can occur prior to the user watching any particular episode, in which case the index and episode data can include default values, blanks, or other information. Following the viewing of an episode, the index and episode data can be dynamically updated to reflect any decisions made by the viewer during the episode. In some implementations, the index and episode data are updated in real-time as the user is watching the episode.


Referring to Episode 1 in FIG. 8, following the episode data update (806a), the user watches the interactive episode and makes decisions throughout the episode (808a and 810a). As earlier described with respect to FIG. 2, the Choice Manager 216 adapts the interactive video and causes different content to be shown to the user depending on the choices the user makes during the episode (812a). The Choice Manager 216 also sends the interaction information to the video library component (Video Data Manager 261 and List Manager 265) for updating the video library list and video information associated with the episode and, in some instances, other episodes or unrelated videos (814a). As shown in FIG. 8, a similar process occurs when the user watches any other episode in the series (Episode 2, Episode 3, . . . , Episode n).


The information associated with a video or other media content can be initialized or dynamically changed in different ways among various implementations. In one implementation, the information for a particular video is an aggregation or other form of combination of some or all versions of the video that the user has seen thus far (e.g., all different paths the user has taken through an interactive video). Consider, for example, an interactive video series based on the Sherlock Holmes character, in which the user needs to discover multiple clues in order to solve a mystery; however, not all clues can be found on a single playthrough of the episode. After watching the episode multiple times, making different decisions at different points in the episode, the user is able to discover the necessary clues. With each viewing of the episode, the video metadata can be updated to reflect each clue the user has found. For example, after the first playthrough of the episode, the episode summary can be, “Sherlock Holmes finds a spent bullet casing in the fireplace, his first clue!” After the second playthrough, the summary can be updated to, “After a second search of the house, Sherlock locates a gun in a locked chest that appears to have been recently fired and matches the caliber of the previously found bullet casing.” Similarly, video information can also be updated to reflect decisions yet to be made or paths not taken. For example, after the second playthrough, the summary can also include, “Sherlock still needs to search the house more closely to gather clues about who the killer might be.”
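
As a minimal sketch of this aggregation, assuming each playthrough yields a list of discovered clues, the union across playthroughs gives the set of facts the summary may mention; the clue names below are invented:

```typescript
// Sketch: the union of clues found across playthroughs.
function aggregateClues(playthroughs: string[][]): Set<string> {
  const clues = new Set<string>();
  for (const found of playthroughs) {
    for (const clue of found) clues.add(clue);
  }
  return clues;
}

aggregateClues([
  ["bullet casing"],         // first playthrough
  ["bullet casing", "gun"],  // second playthrough, different path
]);
// -> Set { "bullet casing", "gun" }; the summary can now mention both.
```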


In another implementation, the information associated with a video can be initialized or updated based on the decisions of others, instead of or in addition to those of an individual viewer. For example, the description of a video can reflect a community preference by including the most popular decisions made in an interactive video. Consider, for example, an interactive video in which a user can choose to drive a car or ride a motorcycle to a party. If the user decides to take the car but most users select the motorcycle, the video description can state, “Yoni breaks the mold and drives the Pinto to the party, but everyone else passes him on their motorcycles!” In further implementations, the information associated with a video can be initialized or updated based on known characteristics of the user (e.g., demographics, location, local weather or events, etc.).
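
A community-preference annotation of this kind could draw on aggregated selection counts; the following sketch, with invented data, is illustrative only:

```typescript
// Sketch: pick the most-selected option from aggregated counts.
function mostPopular(counts: Record<string, number>): string {
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0][0];
}

mostPopular({ car: 310, motorcycle: 942 }); // -> "motorcycle"
```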


In some implementations, the decisions the user makes in one video can also affect not only the information associated with the same video, related videos (e.g., other videos in a series), or unrelated videos, but also the content provided and/or decisions made available to the user in such video(s). For example, if the user consistently avoids violent interactions in one interactive video, the present system can remove options in further playbacks of that video that result in violent content being shown. Moreover, such options and content can be correspondingly disabled in other videos so that the user also will not be exposed to violent content when viewing those videos. This feature can also be used to dynamically adapt content for particular audiences (e.g., children).


Various examples of how the techniques described herein can be applied will now be described; however, it is to be appreciated that the uses are wide-ranging and nearly limitless. In one example, a user views a trailer video, and the video library list is dynamically populated with videos related to the trailer content. In another example, a user watches an episodic documentary about chefs from different countries, can select a different chef to watch after each episode, and the metadata associated with the episodes changes to describe which chefs the user has already observed. In yet another example, an interactive game has multiple episodes, and each episode is unlocked only when a mystery has been solved in the preceding episode. In a further example, an interactive video series depicts the ups and downs of a relationship between two people. At each decision point in an episode, the user can decide what will happen next, and each person in the video reacts accordingly. Depending on which choices are made in episodes the user has watched, different episodes will be provided for the user to watch next. So, for example, if the user decides that the couple will ski at the end of the second episode, then the next episode made available to the user will be one in which the couple is at a ski resort.


As another example, consider an interactive series based around the lives of several characters. Each character has at least one interactive episode centered around that character, and a user viewing the episode can make decisions within the episode that affect the life of the character as well as the world in which the characters exist (and thereby affect other episodes). So, for example, if the user chooses a path in one episode that causes the character to rob a bank, the other episodes in the series are dynamically modified to include the bank robbing event, and the robbery can be reflected in the description of the episode and other episodes. Ultimately, the choices made in one interactive episode can affect the content made available or shown in other episodes, as well as affect the information shown in the video library interface that is associated with the interactive episode and/or other episodes.


Although the systems and methods described herein relate primarily to audio and video playback, the invention is equally applicable to various streaming and non-streaming media, including animation, video games, interactive media, and other forms of content usable in conjunction with the present systems and methods. Further, there can be more than one audio, video, and/or other media content stream played in synchronization with other streams. Streaming media can include, for example, multimedia content that is continuously presented to a user while it is received from a content delivery source, such as a remote video server. If a source media file is in a format that cannot be streamed and/or does not allow for seamless connections between segments, the media file can be transcoded or converted into a format supporting streaming and/or seamless transitions.


While various implementations of the present invention have been described herein, it should be understood that they have been presented by way of example only. For example, one of skill in the art will appreciate that the techniques for creating seamless audio segments can be applied to creating seamless video segments and other forms of seamless media as well. Where methods and steps described above indicate certain events occurring in a certain order, those of ordinary skill in the art having the benefit of this disclosure would recognize that the ordering of certain steps can be modified and that such modifications are in accordance with the given variations. For example, although various implementations have been described as having particular features and/or combinations of components, other implementations are possible having any combination or sub-combination of any features and/or components from any of the implementations described herein.

Claims
  • 1. A computer-implemented method comprising: providing an interactive video comprising a plurality of traversable video paths and video metadata including a text summary of the interactive video; displaying, via a video information display graphic, the text summary of the interactive video to a user; receiving, during a first playthrough of the interactive video to the user, a first interaction with the interactive video, the first interaction comprising a decision made by the user in the interactive video; traversing a first video path in the interactive video in response to the first interaction; upon completion of the first playthrough of the interactive video, dynamically modifying the video metadata based on the traversal of the first video path, wherein dynamically modifying the video metadata includes modifying at least a portion of the text summary of the interactive video to include a description of the first interaction; and displaying, prior to a second playthrough of the interactive video, the modified text summary to the user via the video information display graphic to influence a second interaction during the second playthrough of the interactive video, wherein a second video path in the interactive video is traversed in response to the second interaction.
  • 2. The method of claim 1, wherein the video metadata further includes a thumbnail image of the interactive video.
  • 3. The method of claim 1, wherein the modified video metadata includes text and/or visual references to at least one traversable decision point in the interactive video associated with the second video path.
  • 4. The method of claim 3, wherein the text and/or visual references correspond to clues or hints associated with the at least one traversable decision point and content included in the interactive video.
  • 5. The method of claim 3, further comprising: providing the second playthrough of the interactive video; presenting the at least one traversable decision point associated with the second video path to the user; receiving, during the second playthrough of the interactive video to the user, the second interaction with the interactive video, the second interaction comprising a decision made by the user in the interactive video; and traversing the second video path in the interactive video in response to the second interaction.
  • 6. The method of claim 5, further comprising: upon completion of the second playthrough of the interactive video, dynamically modifying the video metadata based on the traversal of the second video path, wherein dynamically modifying the video metadata includes modifying at least a portion of the text summary of the interactive video to include a description of the second interaction; and displaying, prior to a third playthrough of the interactive video, the modified text summary to the user via the video information display graphic to influence a third interaction during the third playthrough of the interactive video, wherein a third video path in the interactive video is traversed in response to the third interaction, the third video path being different from the first video path and the second video path.
  • 7. The method of claim 1, wherein the interactive video corresponds to an episode of a series of episodes.
  • 8. A system comprising: at least one memory for storing computer-executable instructions; and at least one processor for executing the instructions stored on the memory, wherein execution of the instructions programs the at least one processor to perform operations comprising: providing an interactive video comprising a plurality of traversable video paths and video metadata including a text summary of the interactive video; displaying, via a video information display graphic, the text summary of the interactive video to a user; receiving, during a first playthrough of the interactive video to the user, a first interaction with the interactive video, the first interaction comprising a decision made by the user in the interactive video; traversing a first video path in the interactive video in response to the first interaction; upon completion of the first playthrough of the interactive video, dynamically modifying the video metadata based on the traversal of the first video path, wherein dynamically modifying the video metadata includes modifying at least a portion of the text summary of the interactive video to include a description of the first interaction; and displaying, prior to a second playthrough of the interactive video, the modified text summary to the user via the video information display graphic to influence a second interaction during the second playthrough of the interactive video, wherein a second video path in the interactive video is traversed in response to the second interaction.
  • 9. The system of claim 8, wherein the video metadata further includes a thumbnail image of the interactive video.
  • 10. The system of claim 8, wherein the modified video metadata includes text and/or visual references to at least one traversable decision point in the interactive video associated with the second video path.
  • 11. The system of claim 10, wherein the text and/or visual references correspond to clues or hints associated with the at least one traversable decision point and content included in the interactive video.
  • 12. The system of claim 8, wherein execution of the instructions programs the at least one processor to perform operations comprising: providing the second playthrough of the interactive video; presenting the at least one traversable decision point associated with the second video path to the user; receiving, during the second playthrough of the interactive video to the user, the second interaction with the interactive video, the second interaction comprising a decision made by the user in the interactive video; and traversing the second video path in the interactive video in response to the second interaction.
  • 13. The system of claim 12, wherein execution of the instructions programs the at least one processor to perform operations comprising: upon completion of the second playthrough of the interactive video, dynamically modifying the video metadata based on the traversal of the second video path, wherein dynamically modifying the video metadata includes modifying at least a portion of the text summary of the interactive video to include a description of the second interaction; and displaying, prior to a third playthrough of the interactive video, the modified text summary to the user via the video information display graphic to influence a third interaction during the third playthrough of the interactive video, wherein a third video path in the interactive video is traversed in response to the third interaction, the third video path being different from the first video path and the second video path.
  • 14. The system of claim 8, wherein the interactive video corresponds to an episode of a series of episodes.
  • 15. A computer-implemented method comprising: providing an interactive video comprising a plurality of traversable video paths and video metadata representing a summary of the interactive video, wherein the interactive video corresponds to an episode of a series of episodes; receiving, during a first playthrough of the interactive video to a user, a first interaction with the interactive video, the first interaction comprising a decision made by the user in the interactive video; traversing a first video path in the interactive video in response to the first interaction; dynamically modifying the video metadata based on the traversal of the first video path; displaying the modified video metadata to the user to influence a second interaction during a second playthrough of the interactive video, the second interaction corresponding to a second video path in the interactive video that is different from the first video path; providing a video library display comprising a visual depiction of information associated with the series of episodes; and dynamically modifying the video library display based on the first interaction and/or the second interaction.
  • 16. The method of claim 15, wherein the visual depiction of information comprises a list of the episodes, and wherein dynamically modifying the video library display includes removing one of the episodes from the list, adding an episode to the list, or changing an order of episodes in the list.
  • 17. The method of claim 15, wherein dynamically modifying the video library display includes modifying video metadata of another episode in the series of episodes based on the first interaction and/or the second interaction.
  • 18. A system comprising: at least one memory for storing computer-executable instructions; and at least one processor for executing the instructions stored on the memory, wherein execution of the instructions programs the at least one processor to perform operations comprising: providing an interactive video comprising a plurality of traversable video paths and video metadata representing a summary of the interactive video, wherein the interactive video corresponds to an episode of a series of episodes; receiving, during a first playthrough of the interactive video to a user, a first interaction with the interactive video, the first interaction comprising a decision made by the user in the interactive video; traversing a first video path in the interactive video in response to the first interaction; dynamically modifying the video metadata based on the traversal of the first video path; displaying the modified video metadata to the user to influence a second interaction during a second playthrough of the interactive video, the second interaction corresponding to a second video path in the interactive video that is different from the first video path; providing a video library display comprising a visual depiction of information associated with the series of episodes; and dynamically modifying the video library display based on the first interaction and/or the second interaction.
  • 19. The system of claim 18, wherein the visual depiction of information comprises a list of the episodes, and wherein dynamically modifying the video library display comprises removing one of the episodes from the list, adding an episode to the list, or changing an order of episodes in the list.
  • 20. The system of claim 18, wherein dynamically modifying the video library display includes modifying video metadata of another episode in the series of episodes based on the first interaction and/or the second interaction.
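
By way of illustration only, the metadata-update flow recited in claim 1 can be sketched as follows. This is a hypothetical Python sketch; the names (video_metadata, complete_playthrough, display_summary) are assumptions standing in for the claimed steps, not an actual implementation:

```python
# Hypothetical sketch of the claim-1 flow: a completed playthrough
# rewrites the episode's text summary to describe the decision taken,
# and the modified summary is displayed before the next playthrough.

video_metadata = {
    "summary": "A couple navigates the ups and downs of a new relationship.",
}

def complete_playthrough(decision_description: str) -> None:
    """After a playthrough, fold a description of the traversed decision
    into the text summary (the 'dynamically modifying' step)."""
    video_metadata["summary"] += f" Last time, {decision_description}."

def display_summary() -> None:
    """Stand-in for the 'video information display graphic'."""
    print(video_metadata["summary"])

complete_playthrough("you chose the ski trip at the final decision point")
display_summary()  # shown prior to the second playthrough
```
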
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of and claims priority to U.S. patent application Ser. No. 16/283,066, titled "Dynamic Library Display For Interactive Videos" and filed on Feb. 22, 2019, which is a continuation of U.S. patent application Ser. No. 15/863,191, titled "Dynamic Library Display For Interactive Videos" and filed on Jan. 5, 2018, now U.S. Pat. No. 10,257,578, issued on Apr. 9, 2019, both of which are hereby incorporated by reference herein in their entireties.

Related Publications (1)
  • US 2021/0306707 A1, published Sep. 2021 (US)
Continuations (2)
  • Parent application Ser. No. 16/283,066, filed Feb. 2019 (US); child application Ser. No. 17/091,149 (US)
  • Parent application Ser. No. 15/863,191, filed Jan. 2018 (US); child application Ser. No. 16/283,066 (US)