The present disclosure relates generally to interactive video and, more particularly, to systems and methods for intelligent buffering and seamless transitions in large scale video.
Immersive video experiences are offered in a variety of forms, including, but not limited to, 360-degree video presented on a two-dimensional display screen, virtual reality in simulated three-dimensional space, and augmented reality in physical three-dimensional space. One of the many challenges in providing such video experiences, particularly when video data is received over a network rather than being stored on local hardware, is ensuring that the viewer is able to visually navigate the video without encountering noticeable interruptions, such as pauses for buffering. Another challenge is transitioning among different videos within the three-dimensional space without drawing the user's attention to the changes and diminishing the immersive experience.
Systems and methods for providing intelligent buffering and seamless transitions in large scale video are described herein. In one aspect, a computer-implemented method includes storing at least a portion of a video presentation having multiple sub-videos, with each sub-video being associated with a particular field of view. A first field of view of a user viewing the video presentation is identified and, based thereon, a first set of sub-videos is loaded for presentation within the first field of view. In addition, a second set of sub-videos associated with fields of view proximate to the first field of view is loaded. A change in the user's field of view to a second field of view is then identified, and at least one sub-video from the second set is loaded for presentation within the second field of view.
Various implementations of the foregoing aspect can include one or more of the following features. A particular sub-video includes a plurality of video frames, with each frame including a portion of a frame of a larger video. A particular sub-video includes a plurality of video frames, with each frame including a first portion at a first resolution and a second portion at a second resolution lower than the first resolution. The change in the field of view is based on a user interaction (e.g., head movement, eye movement) with the video presentation. At least one of the second sub-videos is associated with the second field of view.
In another implementation, the method includes loading a third set of sub-videos associated with one or more fields of view proximate to the second field of view. This loading can be performed at an increased speed relative to further loading of the second sub-videos. The loading of the first set of sub-videos can also be stopped at this point.
In a further implementation, the first set of sub-videos includes a sub-video for immediate presentation within the first field of view and one or more different sub-videos for potential presentation within the first field of view. A transition in presentation within the first field of view from the sub-video for immediate presentation to one of the different sub-videos can be identified, where the transition is based on an interaction of the user. Such interactions can include a head movement, an eye movement, speech, a hand movement, an arm movement, and an input from a control device. The loading of the different sub-videos can be based on a current field of view of the user and/or a probability that the user will have a particular field of view. The second set of sub-videos can also include a sub-video for immediate presentation within the second field of view and one or more other sub-videos for potential presentation within the second field of view.
In yet another implementation, the method includes associating a weight with each sub-video being loaded based on a probability of that sub-video being viewed by the user. The loading speed of a particular sub-video can then be based at least in part on the weight associated therewith.
In another aspect, a computer-implemented method includes providing, for presentation to a user, a video having multiple sub-videos. A first distraction level based on content in the video and a second distraction level based on one or more actions of the user during the presentation of the video are tracked. Based on either or both distraction levels, a transition point in the video is identified during the presentation of the video. One of the sub-videos is then changed to a different sub-video at the transition point.
Various implementations of the foregoing aspect can include one or more of the following features. The video is presented in a simulated three-dimensional space. Two or more of the sub-videos are presented simultaneously to the user. The sub-video being changed is presented within a field of view of the user. A particular sub-video includes a plurality of video frames, each frame including a portion of a frame of a larger video. A particular sub-video includes a plurality of video frames, each frame including a plurality of distinct portions.
In one implementation, tracking the first level of distraction includes identifying object movement in a particular sub-video, an object occlusion in a particular sub-video, a distracting video scene, and/or distracting audio. Tracking the second level of distraction can include identifying a change in a field of view of the user, an eye focus of the user, and/or a movement direction of the user. The first distraction level can be determined prior to the presentation of the video and/or determined during the presentation of the video.
In another implementation, identifying the transition point includes determining, at a particular point in time, that a combination of the first distraction level and the second distraction level exceeds a threshold level of distraction, or that the first distraction level exceeds a threshold level of distraction, or that the second distraction level exceeds a threshold level of distraction. The change to the different sub-video can occur immediately upon identifying the transition point.
Other aspects of the invention include corresponding systems and computer-readable media. The various aspects and advantages of the invention will become apparent from the following drawings, detailed description, and claims, all of which illustrate the principles of the invention, by way of example only.
A more complete appreciation of the invention and many attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings. In the drawings, like reference characters generally refer to the same parts throughout the different views. Further, the drawings are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the invention.
Described herein are various implementations of methods and supporting systems for providing intelligent buffering and seamless transitions in multimedia content, such as large scale video. As referred to here, “large scale video” refers to video having a total display area that is larger than the viewing area for a user during a particular period (up to and including the entire duration) of a playing video. A large scale video can be composed of a single large area sub-video or multiple sub-videos each comprising a portion of the large scale video area (e.g., arranged in a grid). Some large scale videos, during playback or streaming, permit the viewer to change his field of view, or viewing area, to watch other portions of the video area. For example, a large scale video can be presented in a two-dimensional or three-dimensional representative space (e.g., projected on a spherical surface, virtual reality, augmented reality, or a form of spatial or immersive media using one or more of computer generated imagery, pre-recorded video, wide angle video, and the like), where the user can turn his head, move his eyes, or provide other input to change where he is looking, effectively moving his viewing area to another portion of the video. The viewing area can be representative of the area that a viewer would see within his cone of vision (e.g., from a first-person perspective).
The techniques described herein can be implemented in any appropriate hardware or software. If implemented as software, the processes can execute on a system capable of running one or more custom operating systems or commercial operating systems such as the Microsoft Windows® operating systems, the Apple OS X® operating systems, the Apple iOS® platform, the Google Android™ platform, the Linux® operating system and other variants of UNIX® operating systems, and the like. The software can be implemented on a general purpose computing device in the form of a computer including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit.
The system can include a plurality of software modules stored in a memory and executed on one or more processors. The modules can be written in a suitable programming language, which is converted to machine language or object code to allow the processor or processors to read the instructions. The software can be in the form of a standalone application, implemented in any suitable programming language or framework.
The application 112 can be a video player and/or editor that is implemented as a native application, web application, or other form of software. In some implementations, the application 112 is in the form of a web page, widget, and/or Java, JavaScript, .Net, Silverlight, Flash, and/or other applet or plug-in that is downloaded to the user device 110 and runs in conjunction with a web browser. The application 112 and the web browser can be part of a single client-server interface; for example, the application 112 can be implemented as a plugin to the web browser or to another framework or operating system. Any other suitable client software architecture, including but not limited to widget frameworks and applet technology can also be employed.
Multimedia content can be provided to the user device 110 by content server 102, which can be a web server, media server, a node in a content delivery network, or other content source. In some implementations, the application 112 (or a portion thereof) is provided by application server 106. For example, some or all of the described functionality of the application 112 can be implemented in software downloaded to or existing on the user device 110 and, in some instances, some or all of the functionality exists remotely. For example, certain video encoding and processing functions can be performed on one or more remote servers, such as application server 106. In some implementations, the user device 110 serves only to provide output and input functionality, with the remainder of the processes being performed remotely.
The user device 110, content server 102, application server 106, and/or other devices and servers can communicate with each other through communications network 114. The communication can take place via any media such as standard telephone lines, LAN or WAN links (e.g., T1, T3, 56 kb, X.25), broadband connections (ISDN, Frame Relay, ATM), wireless links (802.11, Bluetooth, GSM, CDMA, etc.), and so on. The network 114 can carry TCP/IP protocol communications and HTTP/HTTPS requests made by a web browser, and the connection between clients and servers can be communicated over such TCP/IP networks. The type of network is not a limitation, however, and any suitable network can be used.
Method steps of the techniques described herein can be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps can also be performed by, and apparatus of the invention can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Modules can refer to portions of the computer program and/or the processor/special circuitry that implements that functionality.
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. One or more memories can store media assets (e.g., audio, video, graphics, interface elements, and/or other media files), configuration files, and/or instructions that, when executed by a processor, form the modules, engines, and other components described herein and perform the functionality associated with the components. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It should also be noted that the present implementations can be provided as one or more computer-readable programs embodied on or in one or more articles of manufacture. The article of manufacture can be any suitable hardware apparatus, such as, for example, a floppy disk, a hard disk, a CD-ROM, a CD-RW, a CD-R, a DVD-ROM, a DVD-RW, a DVD-R, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. In general, the computer-readable programs can be implemented in any programming language. The software programs can be further translated into machine language or virtual machine instructions and stored in a program file in that form. The program file can then be stored on or in one or more of the articles of manufacture.
In one implementation, interactive large scale video is provided in a three-dimensional (virtual or actual) space, such as through a virtual reality device. The video within or outside a user's viewing area can transition (change seamlessly or non-seamlessly) in real-time, as the user interacts with the environment. As noted above, a large scale video can be a single large area video, or can be made of multiple videos or frame portions, each positioned as a portion of the area of a larger area video canvas. For a single large area video, the video is buffered to the player application 112 on the user device 110, and the entire video is switched to effect a transition in a particular area of the video. For video portions, one or more small videos or frame portions can be buffered to the player application 112 and can be copied in real-time, as the video is playing, to a canvas that is displayed to the user frame by frame. Changes in an area of the video can then be accomplished by switching only the video or frame portion or portions in the relevant area. The underlying canvas can be empty or can hold a default video for display to the user when other video is unavailable.
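By way of illustration only, the frame-copying approach described above might be sketched as follows using standard browser canvas and video APIs; the element identifier, grid geometry, and helper names are assumptions rather than part of this disclosure.

```typescript
// Composite each sub-video's current frame onto a single display canvas.
const canvas = document.getElementById("viewport") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

// Each grid cell is backed by its own <video> element (populated elsewhere);
// swapping one cell's element transitions only that area of the canvas.
const grid: HTMLVideoElement[][] = [];
const cellWidth = 640;
const cellHeight = 360;

function renderFrame(): void {
  for (let row = 0; row < grid.length; row++) {
    for (let col = 0; col < grid[row].length; col++) {
      // Copy the current frame of each playing sub-video into its cell.
      ctx.drawImage(grid[row][col], col * cellWidth, row * cellHeight, cellWidth, cellHeight);
    }
  }
  requestAnimationFrame(renderFrame); // redraw on every display frame
}
requestAnimationFrame(renderFrame);

// Transition one area by switching the video element backing that cell;
// the change takes effect on the next rendered frame.
function swapCell(row: number, col: number, next: HTMLVideoElement): void {
  grid[row][col] = next;
}
```

Because every cell is redrawn on each frame, replacing the video element backing a single cell changes only the relevant area of the larger canvas.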
Various video structuring techniques can be used to provide transitions in large scale video. In some implementations, a video presentation includes multiple tracks or streams that a user can switch among in real-time or near real-time. In one implementation, the video presentation is an interactive video based on a video tree, hierarchy, or other structure. A video tree can be formed by nodes that are connected in a branching, hierarchical, or other linked form. Nodes can have an associated video segment, audio segment, graphical user interface elements, and/or other associated media. Users (e.g., viewers) can watch a video that begins from a starting node in the tree and proceeds along connected nodes. Upon reaching a point where multiple video segments branch off from a currently viewed segment, the user interacts with the video in a manner that results in the selection of the branch to traverse and, thus, the next video segment to watch. Branched video can include seamlessly assembled and selectably presentable multimedia content such as that described in U.S. patent application Ser. No. 13/033,916, filed on Feb. 24, 2011, and entitled “System and Method for Seamless Multimedia Assembly,” and U.S. patent application Ser. No. 14/107,600, filed on Dec. 16, 2013, and entitled “Methods and Systems for Unfolding Video Pre-Roll,” the entireties of which are hereby incorporated by reference.
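A minimal sketch of such a video tree structure, with assumed field names, might look like the following.

```typescript
// A video tree node with assumed field names; a real node could also carry
// audio, graphical interface elements, and other associated media.
interface VideoNode {
  id: string;
  segmentUrl: string;      // the video segment associated with this node
  children: VideoNode[];   // branches selectable from this segment
  defaultChildId?: string; // branch taken if the user makes no selection
}

// Traversal proceeds from a starting node along connected nodes; at a branch,
// the user's choice (or a default) determines the next segment to play.
function nextNode(current: VideoNode, chosenId?: string): VideoNode | undefined {
  const targetId = chosenId ?? current.defaultChildId;
  return current.children.find((c) => c.id === targetId) ?? current.children[0];
}
```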
The video segments in a video tree can be selectably presentable multimedia content; that is, some or all of the video segments in the video tree can be individually or collectively played for a user based upon the user's selection of a particular video segment, an interaction with a previous or playing video segment, or other interaction that results in a particular video segment or segments being played. The video segments can include, for example, one or more predefined, separate multimedia content segments that can be combined in various manners to create a continuous, seamless presentation such that there are no noticeable gaps, jumps, freezes, delays, or other visual or audible interruptions to video or audio playback between segments. In addition to the foregoing, “seamless” can refer to a continuous playback of content that gives the user the appearance of watching a single, linear multimedia presentation or portion of a presentation, as well as a continuous playback of multiple content segments that have smooth audio and/or video transitions (e.g., fadeout/fade-in, linking segments) between two or more of the segments.
In some instances, the user is permitted to make choices or otherwise interact in real-time at decision points or during decision periods interspersed throughout the multimedia content. Decision points and/or decision periods can occur at any time and in any number during a multimedia segment, including at or near the beginning and/or the end of the segment. Decision points and/or periods can be predefined, occurring at fixed points or during fixed periods in the multimedia content segments. Based at least in part on the user's interactions before or during the playing of content, one or more subsequent multimedia segment(s) associated with the choices can be presented to the user. In some implementations, the subsequent segment is played immediately and automatically following the conclusion of the current segment, whereas, in other implementations, the subsequent segment is played immediately upon the user's interaction with the video, without waiting for the end of the decision period or the segment itself.
If a user does not make a selection at a decision point or during a decision period, a default, previously identified selection, or random selection can be made by the system. In some instances, the user is not provided with options; rather, the system automatically selects the segments that will be shown based on information that is associated with the user, other users, or other factors, such as the current date. For example, the system can automatically select subsequent segments based on the user's IP address, location, time zone, the weather in the user's location, social networking ID, saved selections, stored user profiles, preferred products or services, and so on. The system can also automatically select segments based on previous selections made by other users, such as the most popular suggestion or shared selections. The information can also be displayed to the user in the video, e.g., to show the user why an automatic selection is made. As one example, video segments can be automatically selected for presentation based on the geographical location of three different users: a user in Canada will see a twenty-second beer commercial segment followed by an interview segment with a Canadian citizen; a user in the U.S. will see the same beer commercial segment followed by an interview segment with a U.S. citizen; and a user in France is shown only the beer commercial segment.
Multimedia segment(s) selected automatically or by a user can be presented immediately following a currently playing segment, or can be shown after other segments are played. Further, the selected multimedia segment(s) can be presented to the user immediately after selection, after a fixed or random delay, at the end of a decision period, and/or at the end of the currently playing segment. Two or more combined segments form a seamless multimedia content path, and users can take multiple paths and experience a complete, start-to-finish, seamless presentation. Further, one or more multimedia segments can be shared among intertwining paths while still ensuring a seamless transition from a previous segment and to the next segment. The content paths can be predefined, with fixed sets of possible transitions in order to ensure seamless transitions among segments. There can be any number of predefined paths, each having any number of predefined multimedia segments. Some or all of the segments can have the same or different playback lengths, including segments branching from a single source segment.
Traversal of the nodes along a content path in a tree can be performed by selecting among options that appear on and/or around the video while the video is playing. In some implementations, these options are presented to users at a decision point and/or during a decision period in a content segment. The display can hover and then disappear when the decision period ends or when an option has been selected. Further, a timer, countdown or other visual, aural, or other sensory indicator can be presented during the playing of a content segment to inform the user of the point by which he should (or in some cases must) make his selection. For example, the countdown can indicate when the decision period will end, which can be at a different time than when the currently playing segment will end. If a decision period ends before the end of a particular segment, the remaining portion of the segment can serve as a non-interactive seamless transition to one or more other segments. Further, during this non-interactive end portion, the next multimedia content segment (and other potential next segments) can be downloaded and buffered in the background for later presentation (or potential presentation).
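The background buffering of potential next segments might be sketched as follows; the cache and URL handling are illustrative assumptions.

```typescript
// Prefetch the next segment and other potential next segments so that any
// branch can begin playing seamlessly; segmentCache is an assumed in-memory store.
const segmentCache = new Map<string, Blob>();

async function prefetchCandidates(candidateUrls: string[]): Promise<void> {
  await Promise.all(
    candidateUrls.map(async (url) => {
      if (segmentCache.has(url)) return;            // already buffered
      const response = await fetch(url);
      segmentCache.set(url, await response.blob()); // hold for later playback
    })
  );
}
```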
The segment that is played after a currently playing segment can be determined based on an option selected or other interaction with the video. Each available option can result in a different video and audio segment being played. As previously mentioned, the transition to the next segment can occur immediately upon selection, at the end of the current segment, or at some other predefined or random point. Notably, the transition between content segments can be seamless. In other words, the audio and video can continue playing regardless of whether a segment selection is made, and no noticeable gaps appear in audio or video presentation between any connecting segments. In some instances, the video continues on to another segment after a certain amount of time if none is chosen, or can continue playing in a loop.
In another implementation, transitions among videos can be performed using the techniques described in U.S. patent application Ser. No. 14/534,626, filed on Nov. 6, 2014, and entitled “Systems and Methods for Parallel Track Transitions,” the entirety of which is incorporated by reference herein. For example, a playing video file or stream can have one or more parallel tracks that can be switched to and from in real-time automatically and/or based on user interactions. In some implementations, such switches are made seamlessly and substantially instantaneously, such that the audio/video of the playing content can continue without any perceptible delays, gaps, or buffering.
To facilitate near-instantaneous switching among parallel tracks, multiple media tracks (e.g., video streams) can be downloaded simultaneously to user device 110. Upon selecting a streaming video to play, an upcoming portion of the video stream is typically buffered by a video player prior to commencing playing the video, and the video player can continue buffering as the video is playing. Accordingly, in one implementation, if an upcoming segment of a video presentation (including the beginning of the presentation) includes two or more parallel tracks, the application 112 (e.g., a video player) can initiate download of the upcoming parallel tracks substantially simultaneously. The application 112 can then simultaneously receive and/or retrieve video data portions of each track. The receipt and/or retrieval of upcoming video portions of each track can be performed prior to the playing of any particular parallel track as well as during the playing of a parallel track. The downloading of video data in parallel tracks can be achieved in accordance with smart downloading techniques such as those described in U.S. Pat. No. 8,600,220, issued on Dec. 3, 2013, and entitled “Systems and Methods for Loading More than One Video Content at a Time,” the entirety of which is incorporated by reference herein.
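By way of example only, simultaneous buffering of parallel tracks might be sketched as follows, assuming the content server supports HTTP byte-range requests; the function and parameter names are hypothetical.

```typescript
// Buffer the same upcoming window of every parallel track at once.
async function bufferParallelTracks(
  trackUrls: string[],
  rangeStart: number,
  rangeEnd: number
): Promise<ArrayBuffer[]> {
  // Issuing the range requests together keeps every track ready for a
  // near-instantaneous switch.
  return Promise.all(
    trackUrls.map(async (url) => {
      const response = await fetch(url, {
        headers: { Range: `bytes=${rangeStart}-${rangeEnd}` },
      });
      return response.arrayBuffer();
    })
  );
}
```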
Upon reaching a segment of the video presentation that includes parallel tracks, the application 112 makes a determination of which track to play. The determination can be based on, for example, an interaction made or option selected by the user during a previous video segment, during a previous playback of a pre-recorded video presentation, prior to playing the video, and so on. Based on this determination, the current track either continues to play or the application 112 switches to a parallel track.
The above video structuring concepts, among other techniques, can be applied to large scale video to effect transitions in all or a portion of the area of a large scale video (e.g., sub-videos) based on a user interaction. In particular, multiple sub-videos, or the entire large scale video itself, can transition to other videos or segments using these branching video or parallel track transition techniques. As one basic example, consider a large scale video presented in a sphere and allowing a user to look around in 360 degrees, which includes two sub-videos consisting of video played on opposite sides, or hemispheres, of the total viewing area. To transition to a new video in one hemisphere without the user noticing, the user device (e.g., a VR headset) can be used to determine when the user is viewing the opposite hemisphere. At that time, the playing video in the first hemisphere can be transitioned to the new video with minimal or no recognition of the change by the user.
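A sketch of this two-hemisphere example, assuming the headset reports yaw in degrees with 0° at the center of the front hemisphere, might look like the following; the names and the swap callback are illustrative.

```typescript
// Determine which hemisphere is visible from the reported yaw, and swap the
// video in the hidden hemisphere; the swap callback is an assumed player hook.
type Hemisphere = "front" | "back";

function visibleHemisphere(yawDegrees: number): Hemisphere {
  const yaw = ((yawDegrees % 360) + 360) % 360; // normalize to [0, 360)
  return yaw < 90 || yaw >= 270 ? "front" : "back";
}

function transitionHiddenHemisphere(
  yawDegrees: number,
  swap: (target: Hemisphere) => void
): void {
  const hidden: Hemisphere =
    visibleHemisphere(yawDegrees) === "front" ? "back" : "front";
  swap(hidden); // the change occurs outside the user's field of view
}
```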
Various types of user interactions and controls can affect the presentation of a large scale video and result in changes to the entire video or one or more sub-videos. Certain interactions are particularly useful in virtual reality environments. For example, the head movement of a user (determined by a VR headset, motion tracker, device with a gyroscope, or otherwise) and/or eye movement of a user (determined by image recognition or otherwise) can be used to set the field of view of the user and what he sees on his screen or other display device. Head and eye movement can also be used to determine which sub-video(s) to start playing or transition to. For example, in a video showing a person to the user's left and a person to the user's right, a movement of the user's head and/or eyes toward a particular person can be used to determine which person will begin talking to the user (i.e., which video with the person talking will be transitioned to). Similarly, head movement, eye movement, blinking, and other facial expressions can be used to indicate a user's selection of an option (e.g., nodding as “yes”, shaking head as “no”). With respect to eye movement, detection of such eye motion can also include detection of eye focus. For example, if it is determined that the user is staring at a particular object in the video, a transition can be made to a video in which the object performs an action (e.g., a bird flies away).
Other interactions that result in a change in a user's field of view, a change in video being played, or a choice being selected are contemplated. For example, human speech can be analyzed using voice recognition techniques, allowing a user to speak to characters in a video or make requests. Depending on what the user says, different sub-video(s) can be transitioned to in the large scale video. Other possible interactions affecting the video include general body movement, hand, finger, or arm movement, leg or foot movement, movement of a control device (e.g., gloves, sword, video game controller, watch), and other suitable methods of interaction.
In one implementation, large scale video is intelligently buffered on the user device 110 so that the user's field of view can change while videos in surrounding areas are seamlessly loaded, without noticeable interruption. To accomplish this, the application 112 on the user device 110 can intelligently request media content that will be played or is likely to be played, or the content server 102 can intelligently select what media content to transmit to the user device 110, or a combination of both techniques can occur. Different types of buffering are possible, including buffering content for a period of time in the future (e.g., a fixed period or a period that can vary given the likelihood of particular content being played or otherwise seen by a user); buffering video content directly within the user's field of view; buffering video content directly within the user's field of view and an area around the field of view; and buffering an entire video or all sub-videos that can possibly be viewed (which can vary given, e.g., a defined video tree or other structure).
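One possible way to represent and select among these buffering strategies is sketched below; the strategy names, threshold, and horizon values are illustrative assumptions, not prescribed by this disclosure.

```typescript
// Buffering strategies corresponding to the types listed above; the numeric
// horizons are assumptions for illustration.
type BufferStrategy =
  | { kind: "time-window"; seconds: number }               // period of time ahead
  | { kind: "field-of-view" }                              // only the visible area
  | { kind: "field-of-view-plus-margin"; degrees: number } // visible area + surround
  | { kind: "everything" };                                // all possibly viewed video

function chooseStrategy(likelihoodOfViewChange: number): BufferStrategy {
  // A user unlikely to look around can trade spatial margin for temporal depth.
  return likelihoodOfViewChange < 0.2
    ? { kind: "time-window", seconds: 30 }
    : { kind: "field-of-view-plus-margin", degrees: 45 };
}
```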
In STEP 204, the current field of view of the user is identified (e.g., by the user device 110 or using information provided to the content server 102 or other remote device). In parallel, one or more sub-videos within the current field of view of the user are loaded to user device 110 for display to the user (STEP 206), one or more sub-videos in fields of view proximate to the current field of view (which can include fields of view overlapping with the current field of view) are loaded to user device 110 for potential display to the user (STEP 208), and the sub-videos in the current field of view are presented to the user (STEP 210). In some implementations, STEPS 206, 208, and 210 need not be performed in parallel. Upon identifying a change in the field of view of the user (STEP 212), the process returns to STEP 204 to load sub-videos within and proximate to the new field of view to user device 110, and the sub-videos within the new field of view are presented to the user.
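A sketch of this flow, with hypothetical stubs standing in for the device- and server-specific logic of STEPS 204 through 212, might look like the following.

```typescript
// Hypothetical stubs standing in for device- and server-specific logic.
interface FieldOfView { yaw: number; pitch: number; }

const getFieldOfView = (): FieldOfView => ({ yaw: 0, pitch: 0 });  // STEP 204
const proximateViews = (v: FieldOfView): FieldOfView[] =>          // neighbors
  [-30, 30].map((offset) => ({ yaw: v.yaw + offset, pitch: v.pitch }));
const load = async (_views: FieldOfView[]): Promise<void> => {};   // STEPS 206/208
const present = (_view: FieldOfView): void => {};                  // STEP 210

// Runs once per identified field of view; a view change (STEP 212) calls it again.
async function onFieldOfView(): Promise<void> {
  const view = getFieldOfView();
  // Load in-view and proximate sub-videos in parallel, then present the in-view ones.
  await Promise.all([load([view]), load(proximateViews(view))]);
  present(view);
}
```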
When the user switches to a parallel track, Video B (whether as a result of a user interaction or an automatic transition), playback continues seamlessly on the newly selected track.
In one implementation, specific areas of video content can be loaded more or less quickly relative to other areas. As described above, while a user is viewing a large scale video, video content within his field of view as well as within proximate fields of view is buffered.
In one implementation, large scale videos are provided in multiple resolutions. For example, a large scale video presentation can be composed of individual, full-size (encompassing the full viewable area of the large scale video presentation) sub-videos, with each video frame of the sub-video being broken up into sub-areas of different resolution. Alternatively, the large scale video presentation can be composed of multiple sub-videos spatially arranged to form multiple full-size videos, with one or more of the sub-videos being encoded in a different resolution from other sub-videos.
Different sub-videos 810, 820 can have different sub-areas 812, 822 encoded in high resolution. Accordingly, when the user changes his field of view, a seamless transition is made to the sub-video having a high resolution area corresponding to the new field of view. In this manner, less data is buffered and a change in video quality is not observable to the user.
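By way of illustration, selecting the sub-video whose high-resolution sub-area covers a new field of view might be sketched as follows; the region boundaries and data structure are assumptions, with identifiers echoing the example numerals above.

```typescript
// Mixed-resolution sub-videos; each carries one high-resolution sub-area,
// expressed here as a yaw range in degrees (an assumed representation).
interface MixedResSubVideo {
  id: string;
  highResRegion: { yawStart: number; yawEnd: number };
}

const subVideos: MixedResSubVideo[] = [
  { id: "810", highResRegion: { yawStart: 0, yawEnd: 120 } },
  { id: "820", highResRegion: { yawStart: 120, yawEnd: 240 } },
];

// On a field-of-view change, transition to the sub-video whose high-resolution
// sub-area covers the new viewing direction.
function subVideoForView(yaw: number): MixedResSubVideo | undefined {
  return subVideos.find(
    (v) => yaw >= v.highResRegion.yawStart && yaw < v.highResRegion.yawEnd
  );
}
```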
In some implementations, the loading of video content is optimized by applying weights to the content based on the probability that the content will be presented to the user. The weights applied to video content can be predefined or dynamically adjusted according to the behavior of the user (or a population of users). Higher weighted content can then be loaded in preference to lower weighted content.
In one example of predefined weighting, historical statistics of user behavior, past interactions and selections, and other suitable data can be used to determine the probability that particular content will be viewed by a particular user. Weights can then be applied to the content based on the determined probabilities, such that highly-viewed content will be prioritized in buffering.
In one example of dynamic weighting, a user that makes minimal or no changes to his field of view over a period of time is likely to continue that behavior. Consequently, time is weighted more heavily than space. In other words, video content within the user's field of view and corresponding fields of view in parallel tracks or branching video options (and optionally within a small proximate area surrounding the user's field of view) is buffered at a higher rate and/or further into the future than video content in larger proximate areas surrounding the user's field of view. In contrast, if the user changes his field of view frequently, the loading of video content for larger areas proximate to the user's field of view is prioritized over the loading of more future video content for a smaller area. As another example, when the user switches to a new video, that video can be weighted more heavily for loading, with respect to both time and loading area. Thus, the user's observed behavior can dynamically affect which video content and how much of that video content is retrieved, relative to other video content.
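One possible realization of weight-based loading, assuming the weights are (possibly unnormalized) probabilities of content being viewed and that available bandwidth is divided in proportion to them, is sketched below.

```typescript
// Divide available bandwidth among content items in proportion to their
// weights (assumed to reflect the probability of each item being viewed).
interface WeightedContent { url: string; weight: number; }

function allocateBandwidth(
  items: WeightedContent[],
  totalKbps: number
): Map<string, number> {
  const totalWeight = items.reduce((sum, item) => sum + item.weight, 0);
  const rates = new Map<string, number>();
  for (const item of items) {
    // Higher-weighted content loads proportionally faster than lower-weighted.
    rates.set(item.url, totalWeight > 0 ? (item.weight / totalWeight) * totalKbps : 0);
  }
  return rates;
}
```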
With some presentations of large scale videos, such as in virtual reality experiences, the user is provided with a simulated three-dimensional environment. In addition to intelligently buffering media content as described above, it can therefore be preferable to perform transitions among videos in a manner that does not undesirably alert the user to the transitions.
Various techniques are contemplated for performing such seamless experience transitions. For example, a particular sub-video can be cut or changed to a different sub-video when it is outside (at least partially) of the field of view of the user.
In one example of the above technique, the large scale video being viewed by the user 902 represents the user's view when looking out the windows of a car, namely, sub-video area 914 is the view out of the windshield, and sub-video areas 916 and 918 are the views out of the left and right windows, respectively. While video content in sub-video area 914 is playing and within the user's field of view (e.g., simulating a driving experience), a choice can be presented to the user 902 as to what car he would like to see on the road, Car A or Car B. If the user chooses Car A, sub-video area 916 is changed to a sub-video that shows Car A driving by, and the user 902 is directed to look to his left. If, instead, the user chooses Car B, sub-video area 918 is changed to a sub-video that shows Car B driving by, and the user 902 is directed to look to his right.
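A sketch of this car example follows; the area identifiers, file names, and the directUser() cue are hypothetical stand-ins for whatever mechanism (e.g., audio or a character's gesture) directs the user's attention.

```typescript
// Hypothetical handler for the Car A / Car B choice; setAreaVideo and
// directUser are stand-ins for player- and presentation-specific calls.
type CarChoice = "A" | "B";

function onCarChoice(
  choice: CarChoice,
  setAreaVideo: (area: "left" | "right", url: string) => void,
  directUser: (direction: "left" | "right") => void
): void {
  if (choice === "A") {
    setAreaVideo("left", "car-a-drive-by.mp4");  // e.g., sub-video area 916
    directUser("left");                          // cue the user to look left
  } else {
    setAreaVideo("right", "car-b-drive-by.mp4"); // e.g., sub-video area 918
    directUser("right");                         // cue the user to look right
  }
}
```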
When a video within a user's field of view needs to be changed to a different video, it can be difficult to do so without the user noticing the transition. Accordingly, another technique for performing a seamless experience transition involves determining a point or period of time during the video, in a predefined and/or dynamic manner, at which the user's ability to notice a change in video is lower relative to other points in time.
Further, in this implementation, a distraction level based on actions of the user during presentation of the video is tracked (STEP 1006). This user action distraction level can be based on, for example, the user changing his field of view, the user focusing on a particular portion of the video, the user moving his head or other body part(s) in a particular direction, and/or other actions taken by the user that would tend to distract him from the playing video. When the user is engaging in such actions (or is expected to be engaging in such actions), the measurement of this distraction level rises higher relative to when the user is engaging in no such actions or less distracting actions. This distraction level can be preset by, e.g., a content editor who expects the user to be performing a particular action at a particular point in the video (e.g., looking in a particular direction or at a particular object), and/or can be determined automatically in real-time (e.g., using sensors on the user's device that track movement and other actions).
In STEP 1008, a transition point or period is identified in the video at which one sub-video likely can be changed to another sub-video without alerting the user to the change and, in STEP 1010, the transition is made at the point or during the period. If desirable, the transition can be made immediately at the identified transition point or within a specified time range of the point. In some implementations, the identification of a transition point is based on one or more distraction level measurements exceeding a threshold, which can be predefined. For example, if a particular distraction level has a measured value between 1 and 10, the system may require a distraction level to reach at least 7 before changing a video. As another example, the system may require the combination of two distraction levels (each having a possible measurement between 1 and 10, inclusive) to reach at least 15. In the event that a video must be changed during a particular time period, but the total distraction level does not exceed the threshold during that period, the video can, in some instances, be changed during the highest point of distraction during the period (if such knowledge is available), or at the end of the period, with the understanding that the user may notice the change.
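Using the example scales above (each distraction level measured between 1 and 10, a single-level threshold of 7, and a combined threshold of 15), the transition test might be sketched as follows.

```typescript
// Returns true when a transition point has been reached under the example
// thresholds above; levels are assumed to be measured on a 1-10 scale.
function shouldTransition(contentLevel: number, userLevel: number): boolean {
  const SINGLE_THRESHOLD = 7;    // either level alone may justify a change
  const COMBINED_THRESHOLD = 15; // or the two levels taken together
  return (
    contentLevel >= SINGLE_THRESHOLD ||
    userLevel >= SINGLE_THRESHOLD ||
    contentLevel + userLevel >= COMBINED_THRESHOLD
  );
}
```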
One of skill in the art will appreciate that the techniques disclosed herein are applicable to a wide variety of media presentations. Several examples are now provided; however, many other possible scenarios are contemplated. In one example immersive video presentation, a scene taking place in a large open space, such as a cityscape, includes visible “hotspots” scattered about the space (e.g., on the tops of buildings). By interacting with a particular hotspot (e.g., by focusing or pointing), the user is transported to a viewing point at the simulated physical location corresponding to the hotspot. In another example, a futuristic eye-scanner approaches the user and requests authentication. By focusing on the eye scanner for several seconds, the user is able to unlock a door. If the user does not complete the virtual eye scan, the door stays locked and the video unfolds in a different manner. In yet another example, a video presented in simulated three-dimensional space includes a character that beckons the user to follow the character's finger or select particular objects. Parts of the video can change depending on whether the user follows the character's instructions. In a further example, when the user moves around in a virtual space or zooms his view to get closer to an object emitting sound, the sound can increase in volume while other audio effects grow quieter.
Although the systems and methods described herein relate primarily to audio and video presentation, the invention is equally applicable to various streaming and non-streaming media, including animation, video games, interactive media, and other forms of content usable in conjunction with the present systems and methods. Further, there can be more than one audio, video, and/or other media content stream played in synchronization with other streams. Streaming media can include, for example, multimedia content that is continuously presented to a user while it is received from a content delivery source, such as a remote video server. If a source media file is in a format that cannot be streamed and/or does not allow for seamless connections between segments, the media file can be transcoded or converted into a format supporting streaming and/or seamless transitions.
While various implementations of the present invention have been described herein, it should be understood that they have been presented by example only. Where methods and steps described above indicate certain events occurring in certain order, those of ordinary skill in the art having the benefit of this disclosure would recognize that the ordering of certain steps can be modified and that such modifications are in accordance with the given variations. For example, although various implementations have been described as having particular features and/or combinations of components, other implementations are possible having any combination or sub-combination of any features and/or components from any of the implementations described herein.
Number | Name | Date | Kind |
---|---|---|---|
4569026 | Best | Feb 1986 | A |
5161034 | Klappert | Nov 1992 | A |
5568602 | Callahan et al. | Oct 1996 | A |
5568603 | Chen et al. | Oct 1996 | A |
5597312 | Bloom et al. | Jan 1997 | A |
5607356 | Schwartz | Mar 1997 | A |
5610653 | Abecassis | Mar 1997 | A |
5636036 | Ashbey | Jun 1997 | A |
5676551 | Knight et al. | Oct 1997 | A |
5715169 | Noguchi | Feb 1998 | A |
5734862 | Kulas | Mar 1998 | A |
5737527 | Shiels et al. | Apr 1998 | A |
5745738 | Ricard | Apr 1998 | A |
5754770 | Shiels et al. | May 1998 | A |
5818435 | Kozuka et al. | Oct 1998 | A |
5848934 | Shiels et al. | Dec 1998 | A |
5887110 | Sakamoto et al. | Mar 1999 | A |
5894320 | Vancelette | Apr 1999 | A |
5956037 | Osawa et al. | Sep 1999 | A |
6067400 | Saeki et al. | May 2000 | A |
6122668 | Teng et al. | Sep 2000 | A |
6128712 | Hunt et al. | Oct 2000 | A |
6191780 | Martin et al. | Feb 2001 | B1 |
6222925 | Shiels et al. | Apr 2001 | B1 |
6240555 | Shoff et al. | May 2001 | B1 |
6298482 | Seidman et al. | Oct 2001 | B1 |
6460036 | Herz | Oct 2002 | B1 |
6657906 | Martin | Dec 2003 | B2 |
6698020 | Zigmond et al. | Feb 2004 | B1 |
6728477 | Watkins | Apr 2004 | B1 |
6801947 | Li | Oct 2004 | B1 |
6947966 | Oko, Jr. et al. | Sep 2005 | B1 |
7085844 | Thompson | Aug 2006 | B2 |
7155676 | Land et al. | Dec 2006 | B2 |
7231132 | Davenport | Jun 2007 | B1 |
7310784 | Gottlieb et al. | Dec 2007 | B1 |
7379653 | Yap et al. | May 2008 | B2 |
7444069 | Bernsley | Oct 2008 | B1 |
7472910 | Okada et al. | Jan 2009 | B1 |
7627605 | Lamere et al. | Dec 2009 | B1 |
7669128 | Bailey et al. | Feb 2010 | B2 |
7694320 | Yeo et al. | Apr 2010 | B1 |
7779438 | Davies | Aug 2010 | B2 |
7787973 | Lambert | Aug 2010 | B2 |
7917505 | van Gent et al. | Mar 2011 | B2 |
8024762 | Britt | Sep 2011 | B2 |
8046801 | Ellis et al. | Oct 2011 | B2 |
8065710 | Malik | Nov 2011 | B2 |
8151139 | Gordon | Apr 2012 | B1 |
8176425 | Wallace et al. | May 2012 | B2 |
8190001 | Bernsley | May 2012 | B2 |
8276058 | Gottlieb et al. | Sep 2012 | B2 |
8281355 | Weaver et al. | Oct 2012 | B1 |
8600220 | Bloch et al. | Dec 2013 | B2 |
8612517 | Yadid et al. | Dec 2013 | B1 |
8650489 | Baum et al. | Feb 2014 | B1 |
8667395 | Hosogai et al. | Mar 2014 | B2 |
8750682 | Nicksay et al. | Jun 2014 | B1 |
8826337 | Issa et al. | Sep 2014 | B2 |
8860882 | Bloch et al. | Oct 2014 | B2 |
8930975 | Woods et al. | Jan 2015 | B2 |
8977113 | Rumteen et al. | Mar 2015 | B1 |
9009619 | Bloch et al. | Apr 2015 | B2 |
9021537 | Funge et al. | Apr 2015 | B2 |
9082092 | Henry | Jul 2015 | B1 |
9094718 | Barton et al. | Jul 2015 | B2 |
9190110 | Bloch | Nov 2015 | B2 |
9257148 | Bloch et al. | Feb 2016 | B2 |
9268774 | Kim et al. | Feb 2016 | B2 |
9271015 | Bloch et al. | Feb 2016 | B2 |
9367196 | Goldstein et al. | Jun 2016 | B1 |
9390099 | Wang et al. | Jul 2016 | B1 |
9456247 | Pontual et al. | Sep 2016 | B1 |
9465435 | Zhang et al. | Oct 2016 | B1 |
9473582 | Fraccaroli | Oct 2016 | B1 |
9520155 | Bloch et al. | Dec 2016 | B2 |
9530454 | Bloch et al. | Dec 2016 | B2 |
9607655 | Bloch et al. | Mar 2017 | B2 |
9641898 | Bloch et al. | May 2017 | B2 |
9653115 | Bloch et al. | May 2017 | B2 |
9653116 | Paulraj et al. | May 2017 | B2 |
9672868 | Bloch et al. | Jun 2017 | B2 |
9715901 | Singh et al. | Jul 2017 | B1 |
9792026 | Bloch et al. | Oct 2017 | B2 |
9792957 | Bloch et al. | Oct 2017 | B2 |
9826285 | Mishra et al. | Nov 2017 | B1 |
9967621 | Armstrong et al. | May 2018 | B2 |
10178304 | Tudor et al. | Jan 2019 | B1 |
10178421 | Thomas et al. | Jan 2019 | B2 |
10523982 | Oyman | Dec 2019 | B2 |
20020019799 | Ginsberg et al. | Feb 2002 | A1 |
20020053089 | Massey | May 2002 | A1 |
20020086724 | Miyaki et al. | Jul 2002 | A1 |
20020091455 | Williams | Jul 2002 | A1 |
20020105535 | Wallace et al. | Aug 2002 | A1 |
20020106191 | Betz et al. | Aug 2002 | A1 |
20020120456 | Berg et al. | Aug 2002 | A1 |
20020124250 | Proehl et al. | Sep 2002 | A1 |
20020129374 | Freeman et al. | Sep 2002 | A1 |
20020140719 | Amir et al. | Oct 2002 | A1 |
20020144262 | Plotnick et al. | Oct 2002 | A1 |
20020174430 | Ellis et al. | Nov 2002 | A1 |
20020177914 | Chase | Nov 2002 | A1 |
20020194595 | Miller et al. | Dec 2002 | A1 |
20030007560 | Mayhew et al. | Jan 2003 | A1 |
20030012409 | Overton et al. | Jan 2003 | A1 |
20030076347 | Barrett et al. | Apr 2003 | A1 |
20030148806 | Weiss | Aug 2003 | A1 |
20030159566 | Sater et al. | Aug 2003 | A1 |
20030183064 | Eugene et al. | Oct 2003 | A1 |
20030184598 | Graham | Oct 2003 | A1 |
20030221541 | Platt | Dec 2003 | A1 |
20040009813 | Wind | Jan 2004 | A1 |
20040019905 | Fellenstein et al. | Jan 2004 | A1 |
20040034711 | Hughes | Feb 2004 | A1 |
20040070595 | Atlas et al. | Apr 2004 | A1 |
20040091848 | Nemitz | May 2004 | A1 |
20040125124 | Kim et al. | Jul 2004 | A1 |
20040128317 | Sull et al. | Jul 2004 | A1 |
20040138948 | Loomis | Jul 2004 | A1 |
20040172476 | Chapweske | Sep 2004 | A1 |
20040194128 | McIntyre et al. | Sep 2004 | A1 |
20040194131 | Ellis et al. | Sep 2004 | A1 |
20040199923 | Russek | Oct 2004 | A1 |
20050019015 | Ackley et al. | Jan 2005 | A1 |
20050055377 | Dorey et al. | Mar 2005 | A1 |
20050091597 | Ackley | Apr 2005 | A1 |
20050102707 | Schnitman | May 2005 | A1 |
20050107159 | Sato | May 2005 | A1 |
20050120389 | Boss et al. | Jun 2005 | A1 |
20050132401 | Boccon-Gibod et al. | Jun 2005 | A1 |
20050166224 | Ficco | Jul 2005 | A1 |
20050198661 | Collins et al. | Sep 2005 | A1 |
20050210145 | Kim et al. | Sep 2005 | A1 |
20050251820 | Stefanik et al. | Nov 2005 | A1 |
20050251827 | Ellis et al. | Nov 2005 | A1 |
20060002895 | McDonnell et al. | Jan 2006 | A1 |
20060024034 | Filo et al. | Feb 2006 | A1 |
20060028951 | Tozun et al. | Feb 2006 | A1 |
20060064733 | Norton et al. | Mar 2006 | A1 |
20060150072 | Salvucci | Jul 2006 | A1 |
20060150216 | Herz et al. | Jul 2006 | A1 |
20060153537 | Kaneko et al. | Jul 2006 | A1 |
20060155400 | Loomis | Jul 2006 | A1 |
20060200842 | Chapman et al. | Sep 2006 | A1 |
20060222322 | Levitan | Oct 2006 | A1 |
20060224260 | Hicken et al. | Oct 2006 | A1 |
20060274828 | Siemens et al. | Dec 2006 | A1 |
20070003149 | Nagumo et al. | Jan 2007 | A1 |
20070024706 | Brannon et al. | Feb 2007 | A1 |
20070033633 | Andrews et al. | Feb 2007 | A1 |
20070055989 | Shanks et al. | Mar 2007 | A1 |
20070079325 | de Heer | Apr 2007 | A1 |
20070085759 | Lee et al. | Apr 2007 | A1 |
20070099684 | Butterworth | May 2007 | A1 |
20070101369 | Dolph | May 2007 | A1 |
20070118801 | Harshbarger et al. | May 2007 | A1 |
20070154169 | Cordray et al. | Jul 2007 | A1 |
20070157234 | Walker | Jul 2007 | A1 |
20070157260 | Walker | Jul 2007 | A1 |
20070157261 | Steelberg et al. | Jul 2007 | A1 |
20070162395 | Ben-Yaacov et al. | Jul 2007 | A1 |
20070220583 | Bailey et al. | Sep 2007 | A1 |
20070226761 | Zalewski et al. | Sep 2007 | A1 |
20070239754 | Schnitman | Oct 2007 | A1 |
20070253677 | Wang | Nov 2007 | A1 |
20070253688 | Koennecke | Nov 2007 | A1 |
20070263722 | Fukuzawa | Nov 2007 | A1 |
20080019445 | Aono et al. | Jan 2008 | A1 |
20080021187 | Wescott et al. | Jan 2008 | A1 |
20080021874 | Dahl et al. | Jan 2008 | A1 |
20080022320 | Ver Steeg | Jan 2008 | A1 |
20080031595 | Cho | Feb 2008 | A1 |
20080086456 | Rasanen et al. | Apr 2008 | A1 |
20080086754 | Chen et al. | Apr 2008 | A1 |
20080091721 | Harboe et al. | Apr 2008 | A1 |
20080092159 | Dmitriev et al. | Apr 2008 | A1 |
20080148152 | Blinnikka et al. | Jun 2008 | A1 |
20080161111 | Schuman | Jul 2008 | A1 |
20080170687 | Moors et al. | Jul 2008 | A1 |
20080177893 | Bowra et al. | Jul 2008 | A1 |
20080178232 | Velusamy | Jul 2008 | A1 |
20080276157 | Kustka et al. | Nov 2008 | A1 |
20080300967 | Buckley et al. | Dec 2008 | A1 |
20080301750 | Silfvast et al. | Dec 2008 | A1 |
20080314232 | Hansson et al. | Dec 2008 | A1 |
20090022015 | Harrison | Jan 2009 | A1 |
20090022165 | Candelore et al. | Jan 2009 | A1 |
20090024923 | Hartwig et al. | Jan 2009 | A1 |
20090029771 | Donahue | Jan 2009 | A1 |
20090055880 | Batteram et al. | Feb 2009 | A1 |
20090063681 | Ramakrishnan et al. | Mar 2009 | A1 |
20090077137 | Weda et al. | Mar 2009 | A1 |
20090079663 | Chang et al. | Mar 2009 | A1 |
20090083631 | Sidi et al. | Mar 2009 | A1 |
20090116817 | Kim et al. | May 2009 | A1 |
20090177538 | Brewer et al. | Jul 2009 | A1 |
20090191971 | Avent | Jul 2009 | A1 |
20090195652 | Gal | Aug 2009 | A1 |
20090199697 | Lehtiniemi et al. | Aug 2009 | A1 |
20090226046 | Shteyn | Sep 2009 | A1 |
20090228572 | Wall et al. | Sep 2009 | A1 |
20090254827 | Gonze et al. | Oct 2009 | A1 |
20090258708 | Figueroa | Oct 2009 | A1 |
20090265746 | Halen et al. | Oct 2009 | A1 |
20090297118 | Fink et al. | Dec 2009 | A1 |
20090320075 | Marko | Dec 2009 | A1 |
20100017820 | Thevathasan et al. | Jan 2010 | A1 |
20100042496 | Wang et al. | Feb 2010 | A1 |
20100050083 | Axen et al. | Feb 2010 | A1 |
20100069159 | Yamada et al. | Mar 2010 | A1 |
20100077290 | Pueyo | Mar 2010 | A1 |
20100088726 | Curtis et al. | Apr 2010 | A1 |
20100146145 | Tippin et al. | Jun 2010 | A1 |
20100153512 | Balassanian et al. | Jun 2010 | A1 |
20100153885 | Yates | Jun 2010 | A1 |
20100161792 | Palm et al. | Jun 2010 | A1 |
20100162344 | Casagrande et al. | Jun 2010 | A1 |
20100167816 | Perlman et al. | Jul 2010 | A1 |
20100167819 | Schell | Jul 2010 | A1 |
20100186032 | Pradeep et al. | Jul 2010 | A1 |
20100186579 | Schnitman | Jul 2010 | A1 |
20100210351 | Berman | Aug 2010 | A1 |
20100251295 | Amento et al. | Sep 2010 | A1 |
20100262336 | Rivas et al. | Oct 2010 | A1 |
20100267450 | McMain | Oct 2010 | A1 |
20100268361 | Mantel et al. | Oct 2010 | A1 |
20100278509 | Nagano et al. | Nov 2010 | A1 |
20100287033 | Mathur | Nov 2010 | A1 |
20100287475 | van Zwol et al. | Nov 2010 | A1 |
20100293455 | Bloch | Nov 2010 | A1 |
20100325135 | Chen et al. | Dec 2010 | A1 |
20100332404 | Valin | Dec 2010 | A1 |
20110000797 | Henry | Jan 2011 | A1 |
20110007797 | Palmer et al. | Jan 2011 | A1 |
20110010742 | White | Jan 2011 | A1 |
20110026898 | Lussier et al. | Feb 2011 | A1 |
20110033167 | Arling et al. | Feb 2011 | A1 |
20110041059 | Amarasingham et al. | Feb 2011 | A1 |
20110069940 | Shimy et al. | Mar 2011 | A1 |
20110078023 | Aldrey et al. | Mar 2011 | A1 |
20110078740 | Bolyukh et al. | Mar 2011 | A1 |
20110096225 | Candelore | Apr 2011 | A1 |
20110126106 | Ben Shaul et al. | May 2011 | A1 |
20110131493 | Dahl | Jun 2011 | A1 |
20110138331 | Pugsley et al. | Jun 2011 | A1 |
20110163969 | Anzures et al. | Jul 2011 | A1 |
20110169603 | Fithian et al. | Jul 2011 | A1 |
20110191684 | Greenberg | Aug 2011 | A1 |
20110191801 | Vytheeswaran | Aug 2011 | A1 |
20110193982 | Kook et al. | Aug 2011 | A1 |
20110197131 | Duffin et al. | Aug 2011 | A1 |
20110200116 | Bloch et al. | Aug 2011 | A1 |
20110202562 | Bloch et al. | Aug 2011 | A1 |
20110238494 | Park | Sep 2011 | A1 |
20110246885 | Pantos et al. | Oct 2011 | A1 |
20110252320 | Arrasvuori et al. | Oct 2011 | A1 |
20110264755 | Salvatore De Villiers | Oct 2011 | A1 |
20110282745 | Meoded et al. | Nov 2011 | A1 |
20110282906 | Wong | Nov 2011 | A1 |
20110307786 | Shuster | Dec 2011 | A1 |
20110307919 | Weerasinghe | Dec 2011 | A1 |
20110307920 | Blanchard et al. | Dec 2011 | A1 |
20110313859 | Stillwell et al. | Dec 2011 | A1 |
20110314030 | Burba et al. | Dec 2011 | A1 |
20120004960 | Ma et al. | Jan 2012 | A1 |
20120005287 | Gadel et al. | Jan 2012 | A1 |
20120017141 | Eelen et al. | Jan 2012 | A1 |
20120062576 | Rosenthal et al. | Mar 2012 | A1 |
20120081389 | Dilts | Apr 2012 | A1 |
20120089911 | Hosking et al. | Apr 2012 | A1 |
20120094768 | McCaddon et al. | Apr 2012 | A1 |
20120110618 | Kilar et al. | May 2012 | A1 |
20120110620 | Kilar et al. | May 2012 | A1 |
20120120114 | You et al. | May 2012 | A1 |
20120134646 | Alexander | May 2012 | A1 |
20120147954 | Kasai et al. | Jun 2012 | A1 |
20120159541 | Carton et al. | Jun 2012 | A1 |
20120179970 | Hayes | Jul 2012 | A1 |
20120198412 | Creighton et al. | Aug 2012 | A1 |
20120213495 | Hafeneger et al. | Aug 2012 | A1 |
20120263263 | Olsen et al. | Oct 2012 | A1 |
20120308206 | Kulas | Dec 2012 | A1 |
20120317198 | Patton et al. | Dec 2012 | A1 |
20120324491 | Bathiche et al. | Dec 2012 | A1 |
20130028446 | Krzyzanowski | Jan 2013 | A1 |
20130028573 | Hoofien et al. | Jan 2013 | A1 |
20130031582 | Tinsman et al. | Jan 2013 | A1 |
20130039632 | Feinson | Feb 2013 | A1 |
20130046847 | Zavesky et al. | Feb 2013 | A1 |
20130054728 | Amir et al. | Feb 2013 | A1 |
20130055321 | Cline et al. | Feb 2013 | A1 |
20130061263 | Issa et al. | Mar 2013 | A1 |
20130097643 | Stone et al. | Apr 2013 | A1 |
20130117248 | Bhogal | May 2013 | A1 |
20130125181 | Montemayor et al. | May 2013 | A1 |
20130129304 | Feinson | May 2013 | A1 |
20130129308 | Karn et al. | May 2013 | A1 |
20130173765 | Korbecki | Jul 2013 | A1 |
20130177294 | Kennberg | Jul 2013 | A1 |
20130188923 | Hartley et al. | Jul 2013 | A1 |
20130202265 | Arrasvuori et al. | Aug 2013 | A1 |
20130204710 | Boland et al. | Aug 2013 | A1 |
20130219425 | Swartz | Aug 2013 | A1 |
20130254292 | Bradley | Sep 2013 | A1 |
20130259442 | Bloch et al. | Oct 2013 | A1 |
20130282917 | Reznik et al. | Oct 2013 | A1 |
20130290818 | Arrasvuori et al. | Oct 2013 | A1 |
20130298146 | Conrad et al. | Nov 2013 | A1 |
20130308926 | Jang et al. | Nov 2013 | A1 |
20130328888 | Beaver | Dec 2013 | A1 |
20130335427 | Cheung et al. | Dec 2013 | A1 |
20140015940 | Yoshida | Jan 2014 | A1 |
20140019865 | Shah | Jan 2014 | A1 |
20140025839 | Marko et al. | Jan 2014 | A1 |
20140040273 | Cooper et al. | Feb 2014 | A1 |
20140040280 | Slaney et al. | Feb 2014 | A1 |
20140046946 | Friedmann et al. | Feb 2014 | A2 |
20140078397 | Bloch et al. | Mar 2014 | A1 |
20140082666 | Bloch et al. | Mar 2014 | A1 |
20140085196 | Zucker et al. | Mar 2014 | A1 |
20140094313 | Watson et al. | Apr 2014 | A1 |
20140101550 | Zises | Apr 2014 | A1 |
20140126877 | Crawford et al. | May 2014 | A1 |
20140129618 | Panje et al. | May 2014 | A1 |
20140136186 | Adami et al. | May 2014 | A1 |
20140152564 | Gulezian et al. | Jun 2014 | A1 |
20140156677 | Collins, III et al. | Jun 2014 | A1 |
20140178051 | Bloch et al. | Jun 2014 | A1 |
20140186008 | Eyer | Jul 2014 | A1 |
20140194211 | Chimes et al. | Jul 2014 | A1 |
20140210860 | Caissy | Jul 2014 | A1 |
20140219630 | Minder | Aug 2014 | A1 |
20140220535 | Angelone | Aug 2014 | A1 |
20140237520 | Rothschild et al. | Aug 2014 | A1 |
20140245152 | Carter et al. | Aug 2014 | A1 |
20140270680 | Bloch et al. | Sep 2014 | A1 |
20140279032 | Roever et al. | Sep 2014 | A1 |
20140282013 | Amijee | Sep 2014 | A1 |
20140282642 | Needham et al. | Sep 2014 | A1 |
20140298173 | Rock | Oct 2014 | A1 |
20140314239 | Meyer et al. | Oct 2014 | A1 |
20140380167 | Bloch et al. | Dec 2014 | A1 |
20150007234 | Rasanen et al. | Jan 2015 | A1 |
20150012369 | Dharmaji et al. | Jan 2015 | A1 |
20150015789 | Guntur et al. | Jan 2015 | A1 |
20150046946 | Hassell et al. | Feb 2015 | A1 |
20150058342 | Kim et al. | Feb 2015 | A1 |
20150067723 | Bloch et al. | Mar 2015 | A1 |
20150070458 | Kim et al. | Mar 2015 | A1 |
20150104155 | Bloch et al. | Apr 2015 | A1 |
20150160853 | Hwang et al. | Jun 2015 | A1 |
20150179224 | Bloch et al. | Jun 2015 | A1 |
20150181271 | Onno et al. | Jun 2015 | A1 |
20150181301 | Bloch et al. | Jun 2015 | A1 |
20150185965 | Belliveau et al. | Jul 2015 | A1 |
20150195601 | Hahm | Jul 2015 | A1 |
20150199116 | Bloch et al. | Jul 2015 | A1 |
20150201187 | Ryo | Jul 2015 | A1 |
20150258454 | King et al. | Sep 2015 | A1 |
20150293675 | Bloch et al. | Oct 2015 | A1 |
20150294685 | Bloch et al. | Oct 2015 | A1 |
20150304698 | Redol | Oct 2015 | A1 |
20150331933 | Tocchini, IV et al. | Nov 2015 | A1 |
20150331942 | Tan | Nov 2015 | A1 |
20150348325 | Voss | Dec 2015 | A1 |
20160021412 | Zito, Jr. | Jan 2016 | A1 |
20160037217 | Harmon et al. | Feb 2016 | A1 |
20160062540 | Yang et al. | Mar 2016 | A1 |
20160065831 | Howard et al. | Mar 2016 | A1 |
20160066051 | Caidar et al. | Mar 2016 | A1 |
20160094875 | Peterson et al. | Mar 2016 | A1 |
20160100226 | Sadler et al. | Apr 2016 | A1 |
20160104513 | Bloch et al. | Apr 2016 | A1 |
20160105724 | Bloch et al. | Apr 2016 | A1 |
20160132203 | Seto et al. | May 2016 | A1 |
20160142889 | O'Connor et al. | May 2016 | A1 |
20160162179 | Annett et al. | Jun 2016 | A1 |
20160170948 | Bloch | Jun 2016 | A1 |
20160173944 | Kilar et al. | Jun 2016 | A1 |
20160192009 | Sugio et al. | Jun 2016 | A1 |
20160217829 | Bloch et al. | Jul 2016 | A1 |
20160224573 | Shahraray et al. | Aug 2016 | A1 |
20160277779 | Zhang et al. | Sep 2016 | A1 |
20160303608 | Jossick | Oct 2016 | A1 |
20160322054 | Bloch et al. | Nov 2016 | A1 |
20160323608 | Bloch et al. | Nov 2016 | A1 |
20160365117 | Boliek et al. | Dec 2016 | A1 |
20170062012 | Bloch et al. | Mar 2017 | A1 |
20170142486 | Masuda | May 2017 | A1 |
20170178409 | Bloch et al. | Jun 2017 | A1 |
20170178601 | Bloch et al. | Jun 2017 | A1 |
20170195736 | Chai et al. | Jul 2017 | A1 |
20170264920 | Mickelsen | Sep 2017 | A1 |
20170289220 | Bloch et al. | Oct 2017 | A1 |
20170295410 | Bloch et al. | Oct 2017 | A1 |
20170345460 | Bloch et al. | Nov 2017 | A1 |
20180007443 | Cannistraro et al. | Jan 2018 | A1 |
20180014049 | Griffin et al. | Jan 2018 | A1 |
20180025078 | Quennesson | Jan 2018 | A1 |
20180068019 | Novikoff et al. | Mar 2018 | A1 |
20180130501 | Bloch et al. | May 2018 | A1 |
20180191574 | Vishnia et al. | Jul 2018 | A1 |
20180254067 | Elder | Sep 2018 | A1 |
20180262798 | Ramachandra | Sep 2018 | A1 |
20190075367 | van Zessen et al. | Mar 2019 | A1 |
20190090002 | Ramadorai et al. | Mar 2019 | A1 |
Number | Date | Country |
---|---|---|
2639491 | Mar 2010 | CA |
004038801 | Jun 1992 | DE |
10053720 | Apr 2002 | DE |
0965371 | Dec 1999 | EP |
1033157 | Sep 2000 | EP |
2104105 | Sep 2009 | EP |
2359916 | Sep 2001 | GB |
2428329 | Jan 2007 | GB |
2008005288 | Jan 2008 | JP |
2004-0005068 | Jan 2004 | KR |
2010-0037413 | Apr 2010 | KR |
WO-199613810 | May 1996 | WO |
WO-2000059224 | Oct 2000 | WO |
WO-2007062223 | May 2007 | WO |
WO-2007138546 | Dec 2007 | WO |
WO-2008001350 | Jan 2008 | WO |
WO-2008057444 | May 2008 | WO |
WO-2008052009 | May 2008 | WO |
WO-2009125404 | Oct 2009 | WO |
WO-2009137919 | Nov 2009 | WO |
Entry |
---|
An ffmpeg and SDL Tutorial, “Tutorial 05: Synching Video,” Retrieved from internet on Mar. 15, 2013: <http://dranger.com/ffmpeg/tutorial05.html>, (4 pages). |
Archos Gen 5 English User Manual Version 3.0, Jul. 26, 2007, pp. 1-81. |
Bartlett, Mitch, “iTunes 11: How to Queue Next Song,” Technipages, Oct. 6, 2008, pp. 1-8, retrieved on Dec. 26, 2013 from the internet: http://www.technipages.com/itunes-queue-next-song.html. |
Miller, Gregor, et al., “MiniDiver: A Novel Mobile Media Playback Interface for Rich Video Content on an iPhone™,” Entertainment Computing - ICEC 2009, Sep. 3, 2009, pp. 98-109. |
International Search Report for International Patent Application PCT/IL2010/000362 dated Aug. 25, 2010 (2 pages). |
International Search Report for International Patent Application PCT/IL2012/000080 dated Aug. 9, 2012 (4 pages). |
International Search Report for International Patent Application PCT/IL2012/000081 dated Jun. 28, 2012 (4 pages). |
International Search Report and Written Opinion for International Patent Application PCT/IB2013/001000 dated Jul. 31, 2013 (12 pages). |
Labs.byHook: “Ogg Vorbis Encoder for Flash: Alchemy Series Part 1,” [Online] Internet Article, Retrieved on Jun. 14, 2012 from the Internet: URL:http://labs.byhook.com/2011/02/15/ogg-vorbis-encoder-for-flash-alchemy-series-part-1/, 2011, (pp. 1-8). |
Sodagar, I., (2011) “The MPEG-DASH Standard for Multimedia Streaming Over the Internet”, IEEE Multimedia, IEEE Service Center, New York, NY US, vol. 18, No. 4, pp. 62-67. |
Supplemental European Search Report for EP10774637.2 (PCT/IL2010/000362) dated Jun. 20, 2012 (6 pages). |
Supplemental European Search Report for EP13184145 dated Jan. 30, 2014 (6 pages). |
Yang, H., et al., “Time Stamp Synchronization in Video Systems,” Teletronics Technology Corporation, <http://www.ttcdas.com/products/daus_encoders/pdf/_tech_papers/tp_2010_time_stamp_video_system.pdf>, Abstract, (8 pages). |
U.S. Appl. No. 13/033,916 Published as US2011/0200116, System and Method for Seamless Multimedia Assembly, filed Feb. 24, 2011. |
U.S. Appl. No. 14/884,285, System and Method for Assembling a Recorded Composition, filed Oct. 15, 2015. |
U.S. Appl. No. 14/984,821, System and Method for Synchronization of Selectably Presentable Media Streams, filed Dec. 30, 2015. |
U.S. Appl. No. 13/921,536 Published as US2014/0380167, Systems and Methods for Multiple Device Interaction with Selectably Presentable Media Streams, filed Jun. 19, 2013. |
U.S. Appl. No. 14/335,381 Published as US2015/0104155, Systems and Methods for Real-Time Pixel Switching, filed Jul. 18, 2014. |
U.S. Appl. No. 14/139,996 Published as US2015/0181301, Methods and Systems for In-Video Library, filed Dec. 24, 2013. |
U.S. Appl. No. 14/140,007 Published as US2015/0179224, Methods and Systems for Seeking to Non-Key Frames, filed Dec. 24, 2013. |
U.S. Appl. No. 14/249,627 Published as US2015/0294685, Systems and Methods for Creating Linear Video From Branched Video, filed Apr. 10, 2014. |
U.S. Appl. No. 14/249,665 Published as US2015/0293675, Dynamic Timeline for Branched Video, filed Apr. 10, 2014. |
U.S. Appl. No. 14/509,700 Published as US2016/0104513, Systems and Methods for Dynamic Video Bookmarking, filed Oct. 8, 2014. |
U.S. Appl. No. 14/700,845, Systems and Methods for Nonlinear Video Playback Using Linear Real-Time Video Players, filed Apr. 30, 2015. |
U.S. Appl. No. 14/700,862, Systems and Methods for Seamless Media Creation, filed Apr. 30, 2015. |
U.S. Appl. No. 14/835,857, Systems and Methods for Adaptive and Responsive Video, filed Aug. 26, 2015. |
U.S. Appl. No. 14/978,464, Intelligent Buffering of Large-Scale Video, filed Dec. 22, 2015. |
U.S. Appl. No. 15/085,209, Media Stream Rate Synchronization, filed Mar. 30, 2016. |
U.S. Appl. No. 12/706,721, now U.S. Pat. No. 9,190,110, the Office Actions dated Apr. 26, 2012, Aug. 17, 2012, Mar. 28, 2013, Jun. 20, 2013, Jan. 3, 2014, Jul. 7, 2014, and Dec. 19, 2014; the Notices of Allowance dated Jun. 19, 2015 and Jul. 17, 2015; the Notices of Allowability dated Jul. 29, 2015, Aug. 12, 2015 and Sep. 14, 2015. |
U.S. Appl. No. 13/033,916, the Office Actions dated Jun. 7, 2013, Jan. 2, 2014, Aug. 28, 2014, Jan. 5, 2015, Jul. 9, 2015, and Jan. 5, 2016. |
U.S. Appl. No. 13/034,645, the Office Actions dated Jul. 23, 2012, Mar. 21, 2013, Sep. 15, 2014, and Jun. 4, 2015. |
U.S. Appl. No. 13/921,536, the Office Actions dated Feb. 25, 2015 and Oct. 20, 2015. |
U.S. Appl. No. 14/107,600, the Office Actions dated Dec. 19, 2014 and Jul. 8, 2015. |
U.S. Appl. No. 14/335,381, the Office Action dated Feb. 12, 2016. |
U.S. Appl. No. 14/139,996, the Office Actions dated Jun. 18, 2015, Feb. 3, 2016 and May 4, 2016. |
U.S. Appl. No. 14/140,007, the Office Actions dated Sep. 8, 2015 and Apr. 26, 2016. |
U.S. Appl. No. 14/249,627, the Office Action dated Jan. 14, 2016. |
U.S. Appl. No. 14/249,665, the Office Action dated May 16, 2016. |
U.S. Appl. No. 14/534,626, the Office Action dated Nov. 25, 2015. |
U.S. Appl. No. 14/884,285 Published as US2016/0170948, System and Method for Assembling a Recorded Composition, filed Oct. 15, 2015. |
U.S. Appl. No. 14/639,579 Published as US2015/0199116, Progress Bar for Branched Videos, filed Mar. 5, 2015. |
U.S. Appl. No. 14/984,821 Published as US2016/0217829, System and Method for Synchronization of Selectably Presentable Media Streams, filed Dec. 30, 2015. |
U.S. Appl. No. 13/921,536 U.S. Pat. No. 9,832,516 Published as US2014/0380167, Systems and Methods for Multiple Device Interaction with Selectably Presentable Media Streams, filed Jun. 19, 2013. |
U.S. Appl. No. 14/107,600 Published as US2015/0067723, Methods and Systems for Unfolding Video Pre-Roll, filed Dec. 16, 2013. |
U.S. Appl. No. 15/703,462, Systems and Methods for Dynamic Video Bookmarking, filed Sep. 13, 2017. |
U.S. Appl. No. 14/700,845 Published as US2016/0323608, Systems and Methods for Nonlinear Video Playback Using Linear Real-Time Video Players, filed Apr. 30, 2015. |
U.S. Appl. No. 14/835,857 Published as US2017/0062012, Systems and Methods for Adaptive and Responsive Video, filed Aug. 26, 2015. |
U.S. Appl. No. 15/085,209 Published as US2017/0289220, Media Stream Rate Synchronization, filed Mar. 30, 2016. |
U.S. Appl. No. 15/189,931 Published as US2017/0374120, Dynamic Summary Generation for Real-time Switchable Videos, filed Jun. 22, 2016. |
U.S. Appl. No. 15/997,284, Interactive Video Dynamic Adaptation and User Profiling, filed Jun. 4, 2018. |
U.S. Appl. No. 15/863,191, Dynamic Library Display for Interactive Videos, filed Jan. 5, 2018. |
U.S. Appl. No. 16/283,066, Dynamic Library Display for Interactive Videos, filed Feb. 22, 2019. |
U.S. Appl. No. 13/034,645, the Office Actions dated Jul. 23, 2012, Mar. 21, 2013, Sep. 15, 2014, Jun. 4, 2015, Apr. 7, 2017, Oct. 6, 2017 and Aug. 10, 2018. |
U.S. Appl. No. 14/884,285, the Office Actions dated Oct. 5, 2017 and Jul. 26, 2018. |
U.S. Appl. No. 14/639,579, the Office Actions dated May 3, 2017, Nov. 22, 2017 and Jun. 26, 2018, the Notice of Allowance dated Feb. 8, 2019. |
U.S. Appl. No. 13/838,830, now U.S. Pat. No. 9,257,148, the Office Action dated May 7, 2015, the Notice of Allowance dated Nov. 6, 2015. |
U.S. Appl. No. 14/984,821, the Office Actions dated Jun. 1, 2017, Dec. 6, 2017, and Oct. 5, 2018. |
U.S. Appl. No. 14/107,600, the Office Actions dated Dec. 19, 2014, Jul. 8, 2015, Jun. 3, 2016, Mar. 8, 2017, Oct. 10, 2017 and Jul. 25, 2018; and the Notices of Allowance dated Dec. 31, 2018 and Apr. 25, 2019. |
U.S. Appl. No. 14/249,665, now U.S. Pat. No. 9,792,026, the Office Actions dated May 16, 2016 and Feb. 22, 2017; and the Notice of Allowance dated Jun. 2, 2017. |
U.S. Appl. No. 14/534,626, the Office Actions dated Nov. 25, 2015, Jul. 5, 2016, Jun. 5, 2017, Mar. 2, 2018 and Sep. 26, 2018. |
U.S. Appl. No. 14/700,845, the Office Actions dated May 20, 2016, Dec. 2, 2016, May 22, 2017, Nov. 28, 2017 and Jun. 27, 2018. |
U.S. Appl. No. 14/835,857, the Office Actions dated Sep. 23, 2016, Jun. 5, 2017 and Aug. 9, 2018, and the Advisory Action dated Oct. 20, 2017; and the Notice of Allowance dated Feb. 26, 2019. |
U.S. Appl. No. 14/978,464, the Office Actions dated Sep. 8, 2017, May 18, 2018 and Dec. 14, 2018. |
U.S. Appl. No. 15/085,209, the Office Actions dated Feb. 26, 2018 and Dec. 31, 2018. |
U.S. Appl. No. 15/165,373, the Office Actions dated Mar. 24, 2017, Oct. 11, 2017, May 18, 2018 and Feb. 1, 2019. |
U.S. Appl. No. 15/189,931, the Office Action dated Apr. 6, 2018; and the Notice of Allowance dated Oct. 24, 2018. |
U.S. Appl. No. 15/395,477, the Office Action dated Nov. 2, 2018. |
U.S. Appl. No. 15/863,191, the Notices of Allowance dated Jul. 5, 2018 and Nov. 23, 2018. |
U.S. Appl. No. 12/706,721 U.S. Pat. No. 9,190,110 Published as US2010/0293455, System and Method for Assembling a Recorded Composition, filed Feb. 17, 2010. |
U.S. Appl. No. 14/884,285 Published as US2016/0170948, System and Method for Assembling a Recorded Composition, filed Oct. 15, 2015. |
U.S. Appl. No. 13/033,916 U.S. Pat. No. 9,607,655 Published as US2011/0200116, System and Method for Seamless Multimedia Assembly, filed Feb. 24, 2011. |
U.S. Appl. No. 13/034,645 Published as US2011/0202562, System and Method for Data Mining Within Interactive Multimedia, filed Feb. 24, 2011. |
U.S. Appl. No. 13/437,164 U.S. Pat. No. 8,600,220 Published as US2013/0259442, Systems and Methods for Loading More Than One Video Content at a Time, filed Apr. 2, 2012. |
U.S. Appl. No. 14/069,694 U.S. Pat. No. 9,271,015 Published as US2014/0178051, Systems and Methods for Loading More Than One Video Content at a Time, filed Nov. 1, 2013. |
U.S. Appl. No. 13/622,780 U.S. Pat. No. 8,860,882 Published as US2014/0078397, Systems and Methods for Constructing Multimedia Content Modules, filed Sep. 19, 2012. |
U.S. Appl. No. 13/622,795 U.S. Pat. No. 9,009,619 Published as US2014/0082666, Progress Bar for Branched Videos, filed Sep. 19, 2012. |
U.S. Appl. No. 14/639,579 U.S. Pat. No. 10,474,334 Published as US2015/0199116, Progress Bar for Branched Videos, filed Mar. 5, 2015. |
U.S. Appl. No. 13/838,830 U.S. Pat. No. 9,257,148 Published as US2014/0270680, System and Method for Synchronization of Selectably Presentable Media Streams, filed Mar. 15, 2013. |
U.S. Appl. No. 14/984,821 U.S. Pat. No. 10,418,066 Published as US2016/0217829, System and Method for Synchronization of Selectably Presentable Media Streams, filed Dec. 30, 2015. |
U.S. Appl. No. 13/921,536 U.S. Pat. No. 9,832,516 Published as US2014/0380167, Systems and Methods for Multiple Device Interaction with Selectably Presentable Media Streams, filed Jun. 19, 2013. |
U.S. Appl. No. 14/107,600 U.S. Pat. No. 10,448,119 Published as US2015/0067723, Methods and Systems for Unfolding Video Pre-Roll, filed Dec. 16, 2013. |
U.S. Appl. No. 14/335,381 U.S. Pat. No. 9,530,454 Published as US2015/0104155, Systems and Methods for Real-Time Pixel Switching, filed Jul. 18, 2014. |
U.S. Appl. No. 15/356,913, Systems and Methods for Real-Time Pixel Switching, filed Nov. 21, 2016. |
U.S. Appl. No. 14/139,996 U.S. Pat. No. 9,641,898 Published as US2015/0181301, Methods and Systems for In-Video Library, filed Dec. 24, 2013. |
U.S. Appl. No. 14/140,007 U.S. Pat. No. 9,520,155 Published as US2015/0179224, Methods and Systems for Seeking to Non-Key Frames, filed Dec. 24, 2013. |
U.S. Appl. No. 14/249,627 U.S. Pat. No. 9,653,115 Published as US2015/0294685, Systems and Methods for Creating Linear Video From Branched Video, filed Apr. 10, 2014. |
U.S. Appl. No. 15/481,916 Published as US2017/0345460, Systems and Methods for Creating Linear Video From Branched Video, filed Apr. 7, 2017. |
U.S. Appl. No. 14/249,665 U.S. Pat. No. 9,792,026 Published as US2015/0293675, Dynamic Timeline for Branched Video, filed Apr. 10, 2014. |
U.S. Appl. No. 14/509,700 U.S. Pat. No. 9,792,957 Published as US2016/0104513, Systems and Methods for Dynamic Video Bookmarking, filed Oct. 8, 2014. |
U.S. Appl. No. 15/703,462 Published as US2018/0130501, Systems and Methods for Dynamic Video Bookmarking, filed Sep. 13, 2017. |
U.S. Appl. No. 14/534,626 Published as US2016/0105724, Systems and Methods for Parallel Track Transitions, filed Nov. 6, 2014. |
U.S. Appl. No. 14/700,845 U.S. Pat. No. 10,582,265 Published as US2016/0323608, Systems and Methods for Nonlinear Video Playback Using Linear Real-Time Video Players, filed Apr. 30, 2015. |
U.S. Appl. No. 16/752,193, Systems and Methods for Nonlinear Video Playback Using Linear Real-Time Video Players, filed Jan. 24, 2020. |
U.S. Appl. No. 14/700,862 U.S. Pat. No. 9,672,868 Published as US2016/0322054, Systems and Methods for Seamless Media Creation, filed Apr. 30, 2015. |
U.S. Appl. No. 14/835,857 U.S. Pat. No. 10,460,765 Published as US2017/0062012, Systems and Methods for Adaptive and Responsive Video, filed Aug. 26, 2015. |
U.S. Appl. No. 16/559,082 Published as US2019/0392868, Systems and Methods for Adaptive and Responsive Video, filed Sep. 3, 2019. |
U.S. Appl. No. 14/978,464 Published as US2017/0178601, Intelligent Buffering of Large-Scale Video, filed Dec. 22, 2015. |
U.S. Appl. No. 15/085,209 U.S. Pat. No. 10,462,202 Published as US2017/0289220, Media Stream Rate Synchronization, filed Mar. 30, 2016. |
U.S. Appl. No. 15/165,373 Published as US2017/0295410, Symbiotic Interactive Video, filed May 26, 2016. |
U.S. Appl. No. 15/189,931 U.S. Pat. No. 10,218,760 Published as US2017/0374120, Dynamic Summary Generation for Real-time Switchable Videos, filed Jun. 22, 2016. |
U.S. Appl. No. 15/395,477 Published as US2018/0191574, Systems and Methods for Dynamic Weighting of Branched Video Paths, filed Dec. 30, 2016. |
U.S. Appl. No. 15/997,284, Interactive Video Dynamic Adaptation and User Profiling, filed Jun. 4, 2018. |
U.S. Appl. No. 15/863,191 U.S. Pat. No. 10,257,578, Dynamic Library Display for Interactive Videos, filed Jan. 5, 2018. |
U.S. Appl. No. 16/283,066, Dynamic Library Display for Interactive Videos, filed Feb. 22, 2019. |
U.S. Appl. No. 16/591,103, Systems and Methods for Dynamically Adjusting Video Aspect Ratios, filed Oct. 2, 2019. |
U.S. Appl. No. 16/793,205, Dynamic Adaptation of Interactive Video Players Using Behavioral Analytics, filed Feb. 18, 2020. |
U.S. Appl. No. 16/793,201, System and Methods for Detecting Anomalous Activities for Interactive Videos, filed Feb. 18, 2020. |
U.S. Appl. No. 12/706,721, now U.S. Pat. No. 9,190,110, the Office Actions dated Apr. 26, 2012, Aug. 17, 2012, Mar. 28, 2013, Jun. 20, 2013, Jan. 3, 2014, Jul. 7, 2014, and Dec. 19, 2014; the Notices of Allowance dated Jun. 19, 2015 and Jul. 17, 2015; and the Notices of Allowability dated Jul. 29, 2015, Aug. 12, 2015 and Sep. 14, 2015. |
U.S. Appl. No. 14/978,464, the Office Actions dated Sep. 8, 2017, May 18, 2018, Dec. 14, 2018, Jul. 25, 2019, Nov. 18, 2019 and Feb. 21, 2020. |
U.S. Appl. No. 13/033,916, now U.S. Pat. No. 9,607,655, the Office Actions dated Jun. 7, 2013, Jan. 2, 2014, Aug. 28, 2014, Jan. 5, 2015, Jul. 9, 2015, and Jan. 5, 2016; the Advisory Action dated May 11, 2016; and the Notice of Allowance dated Dec. 21, 2016. |
U.S. Appl. No. 13/034,645, the Office Actions dated Jul. 23, 2012, Mar. 21, 2013, Sep. 15, 2014, Jun. 4, 2015, Jul. 5, 2016, Apr. 7, 2017, Oct. 6, 2017, Aug. 10, 2018, Apr. 5, 2019 and Dec. 26, 2019. |
U.S. Appl. No. 13/437,164, now U.S. Pat. No. 8,600,220, the Notice of Allowance dated Aug. 9, 2013. |
U.S. Appl. No. 14/069,694, now U.S. Pat. No. 9,271,015, the Office Actions dated Apr. 27, 2015 and Aug. 31, 2015, the Notice of Allowance dated Oct. 13, 2015. |
U.S. Appl. No. 13/622,780, now U.S. Pat. No. 8,860,882, the Office Action dated Jan. 16, 2014, the Notice of Allowance dated Aug. 4, 2014. |
U.S. Appl. No. 13/622,795, now U.S. Pat. No. 9,009,619, the Office Actions dated May 23, 2014 and Dec. 1, 2014, the Notice of Allowance dated Jan. 9, 2015. |
U.S. Appl. No. 14/639,579, the Office Actions dated May 3, 2017, Nov. 22, 2017 and Jun. 26, 2018; and the Notices of Allowance dated Feb. 8, 2019 and Jul. 11, 2019. |
U.S. Appl. No. 13/838,830, now U.S. Pat. No. 9,257,148, the Office Action dated May 7, 2015, and the Notice of Allowance dated Nov. 6, 2015. |
U.S. Appl. No. 14/984,821, now U.S. Pat. No. 10,418,066, the Office Actions dated Jun. 1, 2017, Dec. 6, 2017, and Oct. 5, 2018; the Notice of Allowance dated May 7, 2019. |
U.S. Appl. No. 13/921,536, now U.S. Pat. No. 9,832,516, the Office Actions dated Feb. 25, 2015, Oct. 20, 2015, Aug. 26, 2016 and Mar. 8, 2017, the Advisory Action dated Jun. 21, 2017, and the Notice of Allowance dated Sep. 12, 2017. |
U.S. Appl. No. 14/107,600, now U.S. Pat. No. 10,448,119, the Office Actions dated Dec. 19, 2014, Jul. 8, 2015, Jun. 3, 2016, Mar. 8, 2017, Oct. 10, 2017 and Jul. 25, 2018; and the Notices of Allowance dated Dec. 31, 2018 and Apr. 25, 2019. |
U.S. Appl. No. 14/335,381, now U.S. Pat. No. 9,530,454, the Office Action dated Feb. 12, 2016; and the Notice of Allowance dated Aug. 24, 2016. |
U.S. Appl. No. 14/139,996, now U.S. Pat. No. 9,641,898, the Office Actions dated Jun. 18, 2015, Feb. 3, 2016 and May 4, 2016; and the Notice of Allowance dated Dec. 23, 2016. |
U.S. Appl. No. 14/140,007, now U.S. Pat. No. 9,520,155, the Office Actions dated Sep. 8, 2015 and Apr. 26, 2016; and the Notice of Allowance dated Oct. 11, 2016. |
U.S. Appl. No. 14/249,627, now U.S. Pat. No. 9,653,115, the Office Actions dated Jan. 14, 2016 and Aug. 9, 2016; and the Notice of Allowance dated Jan. 13, 2017. |
U.S. Appl. No. 15/481,916, the Office Actions dated Oct. 6, 2017, Aug. 6, 2018, Mar. 8, 2019 and Nov. 27, 2019. |
U.S. Appl. No. 14/249,665, now U.S. Pat. No. 9,792,026, the Office Actions dated May 16, 2016 and Feb. 22, 2017; and the Notices of Allowance dated Jun. 2, 2017 and Jul. 24, 2017. |
U.S. Appl. No. 14/509,700, now U.S. Pat. No. 9,792,957, the Office Action dated Oct. 28, 2016, and the Notice of Allowance dated Jun. 15, 2017. |
U.S. Appl. No. 15/703,462, the Office Actions dated Jun. 21, 2019 and Dec. 27, 2019; and the Notice of Allowance dated Feb. 10, 2020. |
U.S. Appl. No. 14/534,626, the Office Actions dated Nov. 25, 2015, Jul. 5, 2016, Jun. 5, 2017, Mar. 2, 2018, Sep. 26, 2018, May 8, 2019 and Dec. 27, 2019. |
U.S. Appl. No. 14/700,845, now U.S. Pat. No. 10,582,265, the Office Actions dated May 20, 2016, Dec. 2, 2016, May 22, 2017, Nov. 28, 2017, Jun. 27, 2018 and Feb. 19, 2019; and the Notice of Allowance dated Oct. 21, 2019. |
U.S. Appl. No. 14/700,862, now U.S. Pat. No. 9,672,868, the Office Action dated Aug. 26, 2016; and the Notice of Allowance dated Mar. 9, 2017. |
U.S. Appl. No. 14/835,857, now U.S. Pat. No. 10,460,765, the Office Actions dated Sep. 23, 2016, Jun. 5, 2017 and Aug. 9, 2018, and the Advisory Action dated Oct. 20, 2017; and the Notices of Allowance dated Feb. 25, 2019 and Jun. 7, 2019. |
U.S. Appl. No. 14/978,464, the Office Actions dated Sep. 8, 2017, May 18, 2018, Dec. 14, 2018 and Jul. 25, 2019. |
U.S. Appl. No. 16/559,082, the Office Action dated Feb. 20, 2020; and the Notice of Allowance dated Feb. 20, 2020. |
U.S. Appl. No. 15/085,209, now U.S. Pat. No. 10,462,202, the Office Actions dated Feb. 26, 2018 and Dec. 31, 2018; the Notice of Allowance dated Aug. 12, 2019. |
U.S. Appl. No. 15/165,373, the Office Actions dated Mar. 24, 2017, Oct. 11, 2017, May 18, 2018, Feb. 1, 2019, Aug. 8, 2019 and Jan. 3, 2020. |
U.S. Appl. No. 15/189,931, now U.S. Pat. No. 10,218,760, the Office Action dated Apr. 6, 2018; and the Notice of Allowance dated Oct. 24, 2018. |
U.S. Appl. No. 15/395,477, the Office Actions dated Nov. 2, 2018, and Aug. 16, 2019. |
U.S. Appl. No. 15/997,284, the Office Actions dated Aug. 11, 2019 and Nov. 21, 2019. |
U.S. Appl. No. 15/863,191, now U.S. Pat. No. 10,257,578, the Notices of Allowance dated Jul. 5, 2018 and Nov. 23, 2018. |
U.S. Appl. No. 16/283,066, the Office Action dated Jan. 6, 2020. |
Number | Date | Country | |
---|---|---|
20170178409 A1 | Jun 2017 | US |