This disclosure relates to systems and methods that create video compositions.
High quality video content may be stored in cloud storage. A user may wish to create a video composition from the video content. However, downloading the video content from the cloud storage to review the video content may take a long time and consume a large amount of bandwidth/storage space. Additionally, only small segments of the video content may be of interest to the user for inclusion in the video composition.
This disclosure relates to creating video compositions. Video information defining video content may be accessed. One or more highlight moments in the video content may be identified. One or more video segments in the video content may be identified based on the one or more highlight moments. Derivative video information defining one or more derivative video segments may be generated based on the one or more video segments. The derivative video information may be transmitted over a network to a computing device. One or more selections of the derivative video segments may be received from the computing device. Video information defining one or more video segments corresponding to the one or more selected derivative video segments may be transmitted to the computing device. The computing device may generate video composition information defining a video composition based on the video information defining the one or more video segments corresponding to the one or more selected derivative video segments.
A system that creates video compositions may include one or more of physical storage media, processors, and/or other components. The physical storage media may store video information defining video content. Video content may refer to media content that may be consumed as one or more videos. Video content may include one or more videos stored in one or more formats/containers, and/or other video content. The video content may have a progress length. In some implementations, the video content may include one or more of spherical video content, virtual reality content, and/or other video content.
The processor(s) may be configured by machine-readable instructions. Executing the machine-readable instructions may cause the processor(s) to facilitate creating video compositions. The machine-readable instructions may include one or more computer program components. The computer program components may include one or more of an access component, a highlight moment component, a video segment component, a derivative video segment component, a communication component, and/or other computer program components.
The access component may be configured to access the video information defining one or more video content and/or other information. The access component may access video information from one or more storage locations. The access component may be configured to access video information defining one or more video content during acquisition of the video information and/or after acquisition of the video information by one or more image sensors.
The highlight moment component may be configured to identify one or more highlight moments in the video content. One or more highlight moments may include a first highlight moment, and/or other highlight moments. In some implementations, one or more highlight moments may be identified based on one or more highlight indications set during capture of the video content and/or other information. In some implementations, one or more highlight moments may be identified based on metadata characterizing capture of the video content. In some implementations, one or more highlight moments may be changed based on one or more user interactions with the computing device and/or other information.
The video segment component may be configured to identify one or more video segments in the video content. One or more video segments may be identified based on one or more highlight moments and/or other information. Individual video segments may comprise one or more portions of the video content including one or more highlight moments. One or more video segments may include a first video segment and/or other video segments. The first video segment may comprise a first portion of the video content including the first highlight moment, and/or other portions of the video content.
The derivative video segment component may be configured to generate derivative video information defining one or more derivative video segments. Derivative video information may be generated based on one or more video segments. Individual derivative video segments may correspond to and may be generated from individual video segments. Individual derivative video segments may be characterized by lower fidelity than the corresponding individual video segments. Lower fidelity may include one or more of lower resolution, lower framerate, higher compression, and/or other lower fidelity.
One or more derivative video segments may include a first derivative video segment and/or other derivative video segments. The first derivative video segment may correspond to and may be generated from the first video segment. The first derivative video segment may be characterized by lower fidelity than the first video segment.
The communication component may be configured to transmit information to and receive information from one or more computing devices over a network. The communication component may be configured to transmit over a network derivative video information defining one or more derivative video segments and/or other information to a computing device. In some implementations, transmitting the derivative video information to the computing device may include streaming derivative video segments to the computing device.
The communication component may be configured to receive over the network one or more selections of the derivative video segments and/or other information from the computing device. In some implementations, an ordering of one or more selected derivative video segments may be determined and/or changed based on one or more user interactions with the computing device and/or other information. In some implementations, one or more portions of the video content comprised in one or more video segments may be changed based on one or more user interactions with the computing device and/or other information.
The communication component may be configured to transmit over the network video information defining one or more video segments corresponding to one or more selected derivative video segments and/or other information to the computing device. The computing device may generate video composition information defining a video composition based on the video information defining one or more of the video segments corresponding to one or more selected derivative video segments. In some implementations, the video composition information may be generated by the computing device further based on the ordering of one or more selected derivative video segments. In some implementations, the video composition may be changed based on one or more user interactions with the computing device and the video information defining one or more video segments corresponding to one or more selected derivative video segments and transmitted to the computing device.
These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
Storage media 12 may include one or more electronic storage media that electronically store information. Storage media 12 may store software algorithms, information determined by processor 11, information received remotely, and/or other information that enables system 10 to function properly. For example, storage media 12 may store information relating to video information, derivative video information, video content, highlight moments, video segments, derivative video segments, computing devices, video compositions, and/or other information.
Storage media 12 may store video information 20 defining one or more video content. Video content may refer to media content that may be consumed as one or more videos. Video content may include one or more videos stored in one or more formats/containers, and/or other video content. A video may include a video clip captured by a video capture device, multiple video clips captured by a video capture device, and/or multiple video clips captured by separate video capture devices. A video may include multiple video clips captured at the same time and/or multiple video clips captured at different times. A video may include a video clip processed by a video application, multiple video clips processed by a video application, and/or multiple video clips processed by separate video applications.
Video content may have a progress length. A progress length may be defined in terms of time durations and/or frame numbers. For example, video content may include a video having a time duration of 60 seconds. Video content may include a video having 1800 video frames. Video content having 1800 video frames may have a play time duration of 60 seconds when viewed at 30 frames/second. Other time durations and frame numbers are contemplated.
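The frame-count arithmetic above can be sketched as follows (a minimal illustration; the function name is an assumption, not part of the disclosure):

```python
def play_duration_seconds(frame_count: int, frames_per_second: float) -> float:
    """Compute the play time duration of video content from its frame
    count and the rate at which it is viewed."""
    return frame_count / frames_per_second

# Video content having 1800 video frames viewed at 30 frames/second
# has a play time duration of 60 seconds.
print(play_duration_seconds(1800, 30))  # 60.0
```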
In some implementations, video content may include one or more of spherical video content, virtual reality content, and/or other video content. Spherical video content may refer to a video capture of multiple views from a single location. Spherical video content may include a full spherical video capture (360 degrees of capture) or a partial spherical video capture (less than 360 degrees of capture). Spherical video content may be captured through the use of one or more cameras/image sensors to capture images/videos from a location. The captured images/videos may be stitched together to form the spherical video content.
Virtual reality content may refer to content that may be consumed via a virtual reality experience. Virtual reality content may associate different directions within the virtual reality content with different viewing directions, and a user may view a particular direction within the virtual reality content by looking in a particular direction. For example, a user may use a virtual reality headset to change the user's direction of view. The user's direction of view may correspond to a particular direction of view within the virtual reality content. For example, a forward looking direction of view for a user may correspond to a forward direction of view within the virtual reality content.
Spherical video content and/or virtual reality content may have been captured at one or more locations. For example, spherical video content and/or virtual reality content may have been captured from a stationary position (e.g., a seat in a stadium). Spherical video content and/or virtual reality content may have been captured from a moving position (e.g., a moving bike). Spherical video content and/or virtual reality content may include video capture from a path taken by the capturing device(s) in the moving position. For example, spherical video content and/or virtual reality content may include video capture from a person walking around in a music festival.
Referring to
Access component 102 may be configured to access video information defining one or more video content and/or other information. Access component 102 may access video information from one or more storage locations. A storage location may include storage media 12, electronic storage of one or more image sensors (not shown in
Highlight moment component 104 may be configured to identify one or more highlight moments in the video content. A highlight moment may correspond to a moment or a duration within the video content. A highlight moment may indicate an occurrence of one or more events of interest. For example,
In some implementations, one or more highlight moments may be identified based on one or more highlight indications set during capture of the video content and/or other information. For example, during capture of video 500, a user of or near the camera that captured video 500 may indicate that a highlight moment has occurred, is occurring, or will occur via one or more inputs (e.g., voice command, use of physical interface such as a physical button or a virtual button on a touchscreen display, particular motion of the user and/or the camera) into the camera. In some implementations, one or more highlight moments may be identified based on metadata characterizing capture of the video content. For example, based on motion, location, and/or orientation data during the capture of video 500 by a camera, the camera may determine that a highlight moment has occurred, is occurring, or will occur. For example, a highlight moment may be identified based on the metadata indicating that a person has jumped or is accelerating. In some implementations, one or more highlight moments may be identified based on visual and/or audio analysis of the video content. For example, one or more highlight moments may be identified based on analysis of video 500 that looks for particular visuals and/or audio captured within video 500. In some implementations, a user may be presented with an option to confirm one or more highlight moments automatically detected within the video content.
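One heuristic for identifying highlight moments from capture metadata, such as the acceleration data described above, might look like the following sketch. The threshold, minimum-gap value, and sample format are illustrative assumptions, not values prescribed by the disclosure:

```python
def identify_highlight_moments(samples, threshold=20.0, min_gap=2.0):
    """Identify highlight moments from capture metadata.

    `samples` is a list of (timestamp_seconds, acceleration_magnitude)
    pairs. A highlight moment is flagged wherever the acceleration
    exceeds `threshold` (e.g., a person jumping or accelerating),
    keeping flagged moments at least `min_gap` seconds apart so a
    single event is not counted repeatedly.
    """
    highlights = []
    for t, accel in samples:
        if accel > threshold and (not highlights or t - highlights[-1] >= min_gap):
            highlights.append(t)
    return highlights

samples = [(0.0, 3.1), (4.2, 25.0), (4.5, 27.0), (9.0, 2.0), (12.0, 31.5)]
print(identify_highlight_moments(samples))  # [4.2, 12.0]
```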
Video segment component 106 may be configured to identify one or more video segments in the video content. One or more video segments may be identified based on one or more highlight moments and/or other information. Individual video segments may comprise one or more portions of the video content including one or more highlight moments. For example,
In
In
In some implementations, video segment component 106 may identify video segments for video content without highlight moments. For video content without highlight moments, video segment component 106 may divide the video content into video segments of equal duration. Providing to computing device(s) video segments for video content without highlight moments may enable users to set highlight moments for the video content/video segments.
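The segment identification described above — a portion of the video content around each highlight moment, with equal-duration division as a fallback for content without highlight moments — might be sketched as follows. The window size and part count are illustrative assumptions:

```python
def identify_video_segments(duration, highlights, window=5.0, equal_parts=4):
    """Identify video segments as (start, end) portions of the video content.

    Each highlight moment yields a segment extending `window` seconds on
    either side of the moment, clamped to the content's progress length.
    Video content without highlight moments is divided into `equal_parts`
    video segments of equal duration.
    """
    if not highlights:
        step = duration / equal_parts
        return [(i * step, (i + 1) * step) for i in range(equal_parts)]
    return [(max(0.0, h - window), min(duration, h + window)) for h in highlights]

print(identify_video_segments(60.0, [4.0, 12.0]))  # [(0.0, 9.0), (7.0, 17.0)]
print(identify_video_segments(60.0, []))
# [(0.0, 15.0), (15.0, 30.0), (30.0, 45.0), (45.0, 60.0)]
```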
Derivative video segment component 108 may be configured to generate derivative video information defining one or more derivative video segments. Derivative video information may be generated based on one or more video segments and/or other information. Individual derivative video segments may correspond to and may be generated from individual video segments. Individual derivative video segments may be characterized by lower fidelity than the corresponding individual video segments. Lower fidelity may include one or more of lower resolution, lower framerate, higher compression, and/or other lower fidelity.
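One way to generate such lower-fidelity derivative video segments is to re-encode each video segment with reduced resolution, reduced framerate, and increased compression, for example via an ffmpeg invocation. The sketch below only builds the command line; the default parameter values are illustrative assumptions, not values prescribed by the disclosure:

```python
def derivative_encode_args(src, dst, start, end, scale=0.5, fps=15, crf=32):
    """Build an ffmpeg command that generates a lower-fidelity derivative
    of one video segment: lower resolution (`scale`), lower framerate
    (`fps`), and higher compression (`crf`)."""
    return [
        "ffmpeg",
        "-ss", str(start), "-to", str(end),      # clip the segment's portion
        "-i", src,
        "-vf", f"scale=iw*{scale}:ih*{scale}",   # reduce resolution
        "-r", str(fps),                          # reduce framerate
        "-crf", str(crf),                        # increase compression
        dst,
    ]

# e.g. subprocess.run(derivative_encode_args("video.mp4", "deriv.mp4", 0.0, 9.0))
```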
For example, with respect to
With respect to
With respect
Communication component 110 may be configured to transmit information to and receive information from one or more computing devices over a network. Communication component 110 may be configured to transmit over a network derivative video information defining one or more derivative video segments and/or other information to one or more computing devices. A computing device may refer to a device including a processor and a display that can receive video information over a network and present the video segments/derivative video segments defined by the video information/derivative video information on the display. The computing device may enable one or more users to select one or more derivative video segments for inclusion in a video composition. The computing device may enable one or more users to change the highlight moments, the derivative video segments, and/or other properties of the video composition. In some implementations, transmitting the derivative video information to the computing device may include streaming derivative video segments to the computing device. Use of derivative video information to present previews of the video segments on the computing device may shorten the time needed for a user to view the video segments that may be selected for inclusion in a video composition. Use of derivative video information to present previews of the video segments on the computing device may enable a user to more quickly select the portion of the video content from which the user decides to make the video composition.
In some implementations, communication component 110 may be configured to receive over the network one or more changes to highlight moments and/or other information from the computing device. One or more highlight moments may be changed based on one or more user interactions with the computing device and/or other information. For example,
Video segment component 106 may identify video segments based on the changed highlight moments and/or change the identified video segments based on the changed highlight moments. For example,
In some implementations, one or more portions of the video content comprised in one or more video segments may be changed based on one or more user interactions with the computing device and/or other information. For example,
Communication component 110 may be configured to receive over the network one or more selections of the derivative video segments for inclusion in a video composition and/or other information from the computing device. For example,
In some implementations, the ordering of one or more selected derivative video segments may be determined and/or changed based on one or more user interactions with the computing device and/or other information. For example, a user may select derivative video segments 702, 704, 706 for inclusion in a video composition in the order of derivative video segment 702, derivative video segment 704, and derivative video segment 706. The user may change the ordering of the derivative video segments via interaction with one or more physical buttons (e.g., buttons on a camera, mobile device) or one or more areas of a touchscreen display (e.g., buttons on a touchscreen display, icons/previews representing derivative video segments). For example, referring to
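The reordering of selected derivative video segments described above might be sketched as follows (the identifiers 702, 704, 706 follow the example; the helper function is hypothetical):

```python
def reorder_selection(selection, segment_id, new_position):
    """Return a new ordering of selected derivative video segments with
    `segment_id` moved to `new_position`, as a user might do by moving
    an icon/preview representing the derivative video segment."""
    reordered = [s for s in selection if s != segment_id]
    reordered.insert(new_position, segment_id)
    return reordered

selection = [702, 704, 706]
print(reorder_selection(selection, 706, 0))  # [706, 702, 704]
```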
Communication component 110 may be configured to transmit over the network video information defining one or more video segments corresponding to one or more selected derivative video segments and/or other information to the computing device. For example, responsive to receiving the user's selection of derivative video segments 702, 704, 706 for inclusion in a video composition, communication component 110 may transmit over the network video information defining video segments 618, 620, 622. In some implementations, video information defining video segments may be transmitted to the computing device upon receiving selection of individual derivative video segments. In some implementations, video information defining video segments may be transmitted to the computing device upon receiving indication from the computing device that the selection of derivative video segments has been completed.
For example, referring to
The computing device (e.g., 301, 302) may generate video composition information defining a video composition based on the received video information. The received video information may define one or more video segments corresponding to one or more selected derivative video segments. The computing device may encode the video composition based on a system setting, a user setting, and/or other information. The video composition information may be generated by the computing device further based on the ordering of one or more selected derivative video segments. For example, the video composition information generated based on the ordering of the selected derivative video segments shown in
In some implementations, the video composition may include user-provided content. User provided content may refer to visual and/or audio content provided/selected by the user for inclusion in the video composition. For example, the user of the computing device may select one or more of text, image, video, music, sound, and/or other user-provided content for inclusion in the video composition.
In some implementations, the video composition/the video composition information may be changed based on one or more user interactions with the computing device. Changes to the video composition may include changes to the highlight moments (adding, removing, moving highlight moments), changes in the portion of the video content included in the video segments, changes in the ordering of the video segments, changes in the speed at which one or more portions of the video composition are played, changes in the inclusion of user-provided content, and/or other changes.
Because the video information defining the video segments within the video composition has been transmitted to the computing device, the computing device may make changes to the video composition using the received video information and without receiving additional/other video information. In some implementations, the computing device may request additional/other video information defining other portions of the video content based on the changes to the video composition requiring video information defining not yet received video segments. The computing device may generate changed video composition information using the previously received video information and newly received video information.
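The incremental-request behavior described above — requesting only video segments not already received — might be sketched as follows (identifiers follow the examples; the helper is hypothetical):

```python
def segments_to_request(needed_ids, received):
    """Determine which video segments the computing device must still
    request: only those needed for the changed video composition that
    have not already been received over the network."""
    return [sid for sid in needed_ids if sid not in received]

received = {618, 620, 622}             # video segments already transmitted
changed_composition = [618, 622, 630]  # the changed composition needs 630
print(segments_to_request(changed_composition, received))  # [630]
```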
Interface 800 may include highlight indicator 808 indicating the location of a highlight moment within the video content/video segment and highlight creator 810 which may be used to set a highlight moment within the video content/video segment. Highlight creator 810 may include a white line that extends vertically across the previews of video frames. A user may move the video frames of the video content/video segment behind highlight creator 810 and interact with highlight creator 810 to place a highlight moment within the particular frame behind the white line.
Interface 800 may include video composition arrangement section 812 displaying the order of the derivative video segments (e.g., 702, 704, 706) selected for inclusion in a video composition. A user may change the order of the selected derivative video segments by moving the individual derivative video segments within video composition arrangement section 812. A user may deselect one or more selected derivative video segments by removing the derivative video segment from video composition arrangement section 812.
Referring to
In some implementations, the computing device may set/change one or more portions of the video content comprised in one or more video segments included in the video composition. The computing device may determine the amount of the video content comprised in the video segments based on metadata of the video content and/or other information. For example, the computing device may set/change the amount of the video content comprised in the video segments based on the duration of the video content, the motion/orientation of the camera that captured the video content, the number of derivative video segments/highlight moments selected for inclusion in the video composition, the music/song to which the video composition is to be synced, and/or other information.
For example, a user may have viewed previews of and selected for inclusion in a video composition derivative video segments corresponding to video segments 602 (containing highlight A 502), 604 (containing highlight B 504), 606 (containing highlight C 506) shown in
As another example, a user may choose to sync a video composition to music having a duration of 12 seconds. A user may have selected three derivative video segments/highlight moments for inclusion in the video composition. The computing device may request video information defining the corresponding three video segments such that the three video segments have a total play duration of 12 seconds. In some implementations, the computing device may change the play speed of one or more portions of the video segments (e.g., slow down, speed up, speed ramp).
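The music-sync arithmetic above (12 seconds of music split evenly across three selected segments, with play-speed changes to fit each segment into its slot) can be sketched as follows (function names are illustrative assumptions):

```python
def synced_segment_durations(music_duration, segment_count):
    """Split a music duration evenly across the selected video segments."""
    return [music_duration / segment_count] * segment_count

def play_speed(source_duration, target_duration):
    """Speed factor that fits a segment into its allotted duration
    (>1 speeds the segment up, <1 slows it down)."""
    return source_duration / target_duration

# Three selected segments synced to 12 seconds of music: 4 seconds each.
print(synced_segment_durations(12.0, 3))  # [4.0, 4.0, 4.0]
# A 6-second segment played into a 4-second slot is sped up 1.5x.
print(play_speed(6.0, 4.0))  # 1.5
```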
Referring to
For example, video information defining video content may be accessed by server(s) 303. Server(s) 303 may identify one or more highlight moments in the video content. Server(s) 303 may identify one or more video segments in the video content based on the one or more highlight moments. Server(s) 303 may generate derivative video information defining one or more derivative video segments based on the one or more video segments. Server(s) 303 may transmit the derivative video information over network 304 to one or more computing devices 301, 302. One or more selections of the derivative video segments may be received from the computing device(s) 301, 302. Server(s) 303 may transmit video information defining one or more video segments corresponding to the one or more selected derivative video segments to the computing device(s) 301, 302. The computing device(s) 301, 302 may generate video composition information defining a video composition based on the received video information defining the one or more video segments corresponding to the one or more selected derivative video segments.
In some implementations, the computing device may set and/or change one or more highlight moment(s) 422 in the video content. The server may receive from the computing device the highlight moment(s) set and/or changed by the computing device and may identify highlight moment(s) 404 based on the highlight moment(s) set and/or changed by the computing device. In some implementations, the computing device may change the portion of the video content 424 that comprises the video segment(s). The server may receive from the computing device the change in the portion of the video content that comprises the video segment(s) and may generate derivative video information 408 based on the changed video segment(s).
The systems/methods disclosed herein may enable the user(s) to begin reviewing the video segments more quickly by receiving the derivative versions of the video segments than if the users received the video content. The systems/methods may enable the users to select video segments for inclusion in a video composition using the derivative versions of the video segments. The systems/methods may enable the user(s) to download just those portions of the video content that are selected for inclusion in the video composition.
Implementations of the disclosure may be made in hardware, firmware, software, or any suitable combination thereof. Aspects of the disclosure may be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a tangible computer readable storage medium may include read only memory, random access memory, magnetic disk storage media, optical storage media, flash memory devices, and others, and a machine-readable transmission media may include forms of propagated signals, such as carrier waves, infrared signals, digital signals, and others. Firmware, software, routines, or instructions may be described herein in terms of specific exemplary aspects and implementations of the disclosure, and as performing certain actions.
Although processor 11 and storage media 12 are shown to be connected to interface 13 in
Although processor 11 is shown in
It should be appreciated that although computer components are illustrated in
The description of the functionality provided by the different computer program components described herein is for illustrative purposes, and is not intended to be limiting, as any of computer program components may provide more or less functionality than is described. For example, one or more of computer program components 102, 104, 106, 108, and/or 110 may be eliminated, and some or all of its functionality may be provided by other computer program components. As another example, processor 11 may be configured to execute one or more additional computer program components that may perform some or all of the functionality attributed to one or more of computer program components 102, 104, 106, 108, and/or 110 described herein.
The electronic storage media of storage media 12 may be provided integrally (i.e., substantially non-removable) with one or more components of system 10 and/or removable storage that is connectable to one or more components of system 10 via, for example, a port (e.g., a USB port, a Firewire port, etc.) or a drive (e.g., a disk drive, etc.). Storage media 12 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EPROM, EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Storage media 12 may be a separate component within system 10, or storage media 12 may be provided integrally with one or more other components of system 10 (e.g., processor 11). Although storage media 12 is shown in
In some implementations, method 200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operation of method 200 in response to instructions stored electronically on one or more electronic storage mediums. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operation of method 200.
Referring to
At operation 202, one or more highlight moments in the video content may be identified. One or more highlight moments may include a first highlight moment. In some implementations, operation 202 may be performed by a processor component the same as or similar to highlight moment component 104 (Shown in
At operation 203, one or more video segments in the video content may be identified based on one or more highlight moments. Individual video segments may comprise a portion of the video content including one or more highlight moments. One or more video segments may include a first video segment. The first video segment may comprise a first portion of the video content including the first highlight moment. In some implementations, operation 203 may be performed by a processor component the same as or similar to video segment component 106 (Shown in
At operation 204, derivative video information defining one or more derivative video segments may be generated. Derivative video information may be generated based on the one or more video segments. Individual derivative video segments may correspond to and may be generated from individual video segments. Individual derivative video segments may be characterized by lower fidelity than the corresponding individual video segments. One or more derivative video segments may include a first derivative video segment. The first derivative video segment may correspond to and may be generated from the first video segment. The first derivative video segment may be characterized by lower fidelity than the first video segment. In some implementations, operation 204 may be performed by a processor component the same as or similar to derivative video segment component 108 (Shown in
At operation 205, the derivative video information defining one or more derivative video segments may be transmitted over a network to a computing device. In some implementations, operation 205 may be performed by a processor component the same as or similar to communication component 110 (Shown in
At operation 206, one or more selections of the derivative video segments may be received over the network from the computing device. In some implementations, operation 206 may be performed by a processor component the same as or similar to communication component 110 (Shown in
At operation 207, the video information defining one or more video segments corresponding to one or more selected derivative video segments may be transmitted over the network to the computing device. The computing device may generate video composition information defining a video composition based on the video information defining one or more video segments corresponding to one or more selected derivative video segments. In some implementations, operation 207 may be performed by a processor component the same as or similar to communication component 110 (Shown in
Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.
Number | Date | Country
---|---|---
62450882 | Jan 2017 | US