This disclosure relates to editing videos based on matching motion represented within the videos.
Different video clips may have portions with matching motion. Matching motion may include activity motion (motion of activity captured within the videos), capture motion (motion of image sensor(s) that captured the videos), and/or other motion. A user may wish to create a video composition that joins the different video clips at portions with matching motion. Manually identifying and editing the video clips to create such a video composition may be time consuming.
This disclosure relates to editing videos based on motion. First video information defining first video content and second video information defining second video content may be accessed. Motion within the first video content and/or motion within the second video content may be assessed. A match between the motion assessed within the first video content and the motion assessed within the second video content may be determined. The match may include a first set of video frames within the first video content and a second set of video frames within the second video content within which the matching motion is present. A first video portion of the first video content and a second video portion of the second video content may be identified based on the match. The first video portion may include one or more frames of the first set of video frames and the second video portion may include one or more frames of the second set of video frames. The first video portion and the second video portion may be concatenated such that the concatenation of the first video portion and the second video portion results in a transition between the first video portion and the second video portion in which continuity of motion may be achieved.
A system that edits videos based on motion may include one or more of physical storage media, processors, and/or other components. The physical storage media may store video information defining video content. Video content may refer to media content that may be consumed as one or more videos. Video content may include one or more videos stored in one or more formats/containers, and/or other video content. The video content may have a progress length. In some implementations, the video content may include one or more of spherical video content, virtual reality content, and/or other video content.
The processor(s) may be configured by machine-readable instructions. Executing the machine-readable instructions may cause the processor(s) to facilitate editing videos based on motion. The machine-readable instructions may include one or more computer program components. The computer program components may include one or more of an access component, a motion component, a match component, a video portion component, a concatenation component, and/or other computer program components.
The access component may be configured to access the video information defining one or more video content and/or other information. The access component may access first video information defining first video content, second video information defining second video content, and/or other video information defining other video content. The access component may access video information from one or more storage locations. The access component may be configured to access video information defining one or more video content during acquisition of the video information and/or after acquisition of the video information by one or more image sensors.
The motion component may be configured to assess motion within one or more video content. The motion component may assess motion within the first video content, the second video content, and/or other video content.
In some implementations, the motion assessed within one or more video content may include capture motion of the video content. In some implementations, capture motion may include one or more of zoom, pan, tilt, dolly, truck, and/or pedestal of capture of the video content by one or more image sensors. In some implementations, capture motion may include a direction of gravity on one or more image sensors during the capture of the video content.
In some implementations, the motion assessed within one or more video content may include activity motion within the video content. In some implementations, activity motion may include one or more of linear speed, angular speed, linear acceleration, angular acceleration, linear direction, and/or angular direction of one or more moving activities captured within the video content.
The match component may be configured to determine one or more matches between the motions assessed within two or more video content. The match component may determine one or more matches between the motion assessed within the first video content and the motion assessed within the second video content. A match may include a first set of video frames within the first video content and a second set of video frames within the second video content within which the matching motion is present.
In some implementations, the first video content may include a capture of a first event type and/or other event types and the second video content may include a capture of a second event type and/or other event types. The match component may determine the match(es) between the motion assessed within the first video content and the motion assessed within the second video content regardless of a match between the first event type and the second event type. The match component may determine the match(es) between the motion assessed within the first video content and the motion assessed within the second video content further based on a match between the first event type and the second event type.
The video portion component may be configured to identify one or more video portions based on the match. The video portion component may identify a first video portion of the first video content and a second video portion of the second video content based on the match. The first video portion may include one or more frames of the first set of video frames. The second video portion may include one or more frames of the second set of video frames.
The concatenation component may be configured to concatenate two or more video portions. The concatenation component may concatenate the first video portion and the second video portion such that the one or more frames of the first set of video frames are adjacent to the one or more frames of the second set of video frames. The concatenation of the first video portion and the second video portion may result in a transition between the first video portion and the second video portion in which continuity of motion may be achieved.
In some implementations, the concatenation component may be configured to determine an order of two or more video portions for the concatenation of the video portions. The concatenation component may determine an order of the first video portion and the second video portion for the concatenation of the first video portion and the second video portion.
These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
Electronic storage 12 may include electronic storage medium that electronically stores information. Electronic storage 12 may store software algorithms, information determined by processor 11, information received remotely, and/or other information that enables system 10 to function properly. For example, electronic storage 12 may store information relating to video information, video content, motion within video content, video frames, video portions, concatenation of video portions, and/or other information.
Electronic storage 12 may store video information 20 defining one or more video content. Video information 20 may include first video information 20A defining first video content, second video information 20B defining second video content, and/or other video information defining other video content. Video content may refer to media content that may be consumed as one or more videos. Video content may include one or more videos stored in one or more formats/containers, and/or other video content. A video may include a video clip captured by a video capture device, multiple video clips captured by a video capture device, and/or multiple video clips captured by separate video capture devices. A video may include multiple video clips captured at the same time and/or multiple video clips captured at different times. A video may include a video clip processed by a video application, multiple video clips processed by a video application, and/or multiple video clips processed by separate video applications.
Video content may have a progress length. A progress length may be defined in terms of time durations and/or frame numbers. For example, video content may include a video having a time duration of 60 seconds. Video content may include a video having 1800 video frames. Video content having 1800 video frames may have a play time duration of 60 seconds when viewed at 30 frames/second. Other time durations and frame numbers are contemplated.
Referring to
Access component 102 may be configured to access video information defining one or more video content and/or other information. Access component 102 may access video information from one or more storage locations. A storage location may include electronic storage 12, electronic storage of one or more image sensors (not shown in
Access component 102 may be configured to access video information defining one or more video content during acquisition of the video information and/or after acquisition of the video information by one or more image sensors. For example, access component 102 may access video information defining a video while the video is being captured by one or more image sensors. Access component 102 may access video information defining a video after the video has been captured and stored in memory (e.g., electronic storage 12).
Motion component 104 may be configured to assess motion within one or more video content and/or other information. Motion component 104 may assess motion within the first video content, the second video content, and/or other video content. Motion component 104 may assess motion within video content based on motion vector extraction and/or other information. Motion vectors may represent motion of one or more visuals captured within individual video frames. Motion may exist within video frames due to motion of image sensor(s) that captured the video frames and/or due to motion of a thing captured within the video frames. Motion vectors may be determined using one or more of block-matching algorithm, phase correlation and frequency domain methods, pixel recursive algorithms, optical flow, feature detection, and/or other criteria matching methods.
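By way of a hedged illustration only (not the disclosed implementation), the sketch below estimates per-frame motion direction and magnitude from dense optical flow between consecutive video frames, assuming the OpenCV (cv2) and NumPy libraries are available; the function name, Farneback parameter values, and the simple averaging of the flow field are illustrative assumptions.

```python
import cv2
import numpy as np

def assess_motion(video_path):
    """Return a list of (direction_degrees, magnitude) for each consecutive frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    motions = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow: a per-pixel motion vector field between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
        motions.append((float(np.degrees(np.arctan2(dy, dx))), float(np.hypot(dx, dy))))
        prev_gray = gray
    cap.release()
    return motions
```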
A motion vector may represent movement of one or more pixels and/or groupings of pixels between video frames of the video content. A motion vector may represent movement of an object captured within the video content from a location in a video frame to another location in another video frame (and to subsequent locations in subsequent frames). A motion vector may be characterized by direction(s) of motion (linear and/or angular) and magnitude(s) of motion.
Motion component 104 may assess motion within an entire video frame (e.g., combination of motion vectors associated with portions of the video frame) or motion within particular portion(s) of the video frame. For example, video frames of video content may be divided into multiple portions (e.g., macro blocks) and motion vectors of individual portions may be determined. Motion vectors of the individual portions may be combined (e.g., summed, square summed, averaged) to determine the motion for the entire video frame. Individual video frames of the video content may be associated with global motion (e.g., motion of the frame as a whole) and/or local motion (e.g., motion of a portion of the frame).
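As a hedged sketch of the macro-block approach described above, the following divides a dense flow field into fixed-size tiles, averages the motion vectors within each tile (local motion), and averages the tile vectors into a single global-motion estimate; the block size and the use of a plain mean are assumptions.

```python
import numpy as np

def block_motion(flow, block_size=16):
    """Split an (H, W, 2) flow field into macro blocks and aggregate their motion."""
    h, w = flow.shape[:2]
    local = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            tile = flow[y:y + block_size, x:x + block_size]
            local.append(tile.reshape(-1, 2).mean(axis=0))  # per-block (local) motion vector
    local = np.array(local)
    global_motion = local.mean(axis=0)  # combined (global) motion for the whole frame
    return local, global_motion
```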
Motion component 104 may assess motion within video content based on video compression and/or other information. Video compression of video content may result in video frames that include information for entire viewable dimensions of the video frames (e.g., I-frame) and video frames that include information for portions of the viewable dimensions of the video frames (e.g., P-frame, B-frame). A video frame may include information regarding changes in the video frames from prior frames, subsequent frames, or both. Information regarding changes in the video frames may characterize/be defined by the motion of the video content. Motion component 104 may use the information regarding changes in the video frame to assess the motion of the video content.
Motion assessed within video content may include capture motion, activity motion, and/or other motion. Capture motion may refer to motion/operation of image sensor(s) that captured the video content. Capture motion may include motion/operation of the image sensor(s) at a time, over a duration of time, at a location, or over a range of locations. As non-limiting examples, capture motion may include one or more of zoom, pan, tilt, dolly, truck, pedestal, and/or other capture motion of the video content by the image sensor(s). In some implementations, capture motion may include a direction of gravity on the image sensor(s) during the capture of the video content. The direction of gravity may indicate the positioning of the image sensor(s) with respect to gravity during capture of the video content (e.g., upright, tilted, flipped). In some implementations, capture motion of the video content may be assessed further based on movement sensors (e.g., accelerometer, gyroscope) that measure the motion of the image sensor(s).
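The following is a minimal, illustrative sketch (not the disclosed implementation) of estimating the direction of gravity from raw accelerometer samples with a simple low-pass filter; the sample format, filter coefficient, and function name are assumptions.

```python
import numpy as np

def gravity_direction(accel_samples, alpha=0.9):
    """Estimate a unit vector along gravity from an iterable of (x, y, z) samples in m/s^2."""
    g = np.asarray(accel_samples[0], dtype=float)
    for sample in accel_samples[1:]:
        # Low-pass filter: gravity changes slowly, while device motion averages out.
        g = alpha * g + (1.0 - alpha) * np.asarray(sample, dtype=float)
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g
```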
Activity motion may refer to motion of activity within the video content (activity captured within the field of view of image sensor(s)). Activity motion may include motion of one or more activities at a time, over a duration of time, at a location, or over a range of locations. As non-limiting examples, activity motion may include one or more of linear speed, angular speed, linear acceleration, angular acceleration, linear direction, and/or angular direction of one or more moving activities captured within the video content. In some implementations, activity motion may include a direction of gravity on the image sensor(s) during the capture of the video content. The direction of gravity may indicate the positioning of the image sensor(s) with respect to gravity during capture of the video content (e.g., upright, tilted, flipped).
Match component 106 may be configured to determine one or more matches between the motions assessed within two or more video content. Match component 106 may determine one or more matches between the motion assessed within one video content (e.g., first video content) and the motion assessed within another video content (e.g., second video content). In some implementations, match component 106 may determine one or more matches between the motions assessed within a single video content (matches between motions assessed within different portions of the video content). A match may include a set of video frames (one or more video frames) within one video content and a set of video frames (one or more video frames) within another video content within which the matching motion is present. For example, a match may include a first set of video frames within the first video content and a second set of video frames within the second video content within which the matching motion is present. In some implementations, a match may include a first set of video frames and a second set of video frames within one video content.
Match component 106 may determine match(es) between motions assessed within video content based on magnitude(s) and/or direction(s) of the assessed motions. For example, referring to
Match component 106 may determine match(es) between motions assessed within video content based on magnitude and/or direction of local motion and/or global motion. Match component 106 may determine match(es) based on exact match(es) in motion (e.g., exact matches in direction and/or magnitude) or based on proximate match(es) in motion (e.g., match determined based on direction and/or magnitude not deviating by a certain percentage or amount).
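A hedged sketch of a proximate match test follows: two assessed motions are treated as matching when their directions agree within an angular tolerance and their magnitudes agree within a relative tolerance; the specific tolerance values are illustrative assumptions.

```python
def motions_match(motion_a, motion_b,
                  max_direction_deg=15.0, max_magnitude_ratio=0.25):
    """Proximate match between two (direction_degrees, magnitude) motions."""
    dir_a, mag_a = motion_a
    dir_b, mag_b = motion_b
    # Smallest angular difference, accounting for wrap-around at 360 degrees.
    d = abs(dir_a - dir_b) % 360.0
    direction_ok = min(d, 360.0 - d) <= max_direction_deg
    larger = max(abs(mag_a), abs(mag_b), 1e-9)
    magnitude_ok = abs(mag_a - mag_b) / larger <= max_magnitude_ratio
    return direction_ok and magnitude_ok
```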
In some implementations, match component 106 may determine match(es) between motions assessed within video content based on matches in motion curves. Video content may include motion that changes/does not change over the range of duration/video frames. A motion curve may represent motion (direction and/or magnitude) over a range of duration/video frames. Match component 106 may determine match(es) between motions based on matches in changes/lack of changes of motions assessed within video content over a range of duration/video frames. For example, match component 106 may determine match(es) based on a motion curve of video content characterized by capture of an image sensor moving underneath and out of an arch and a motion curve of video content characterized by capture of an image sensor moving underneath and out of a different arch (or other structure).
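For illustration only, a brute-force sketch of motion-curve matching is shown below: a fixed-length window is slid over two motion-magnitude curves and the pair of offsets with the smallest mean absolute difference is reported; the window length and similarity measure are assumptions.

```python
import numpy as np

def best_curve_match(curve_a, curve_b, window=30):
    """Return (start_a, start_b, score) for the most similar windows of two motion curves."""
    a = np.asarray(curve_a, dtype=float)
    b = np.asarray(curve_b, dtype=float)
    best = (None, None, np.inf)
    for i in range(len(a) - window + 1):
        for j in range(len(b) - window + 1):
            score = np.mean(np.abs(a[i:i + window] - b[j:j + window]))
            if score < best[2]:
                best = (i, j, score)
    return best  # lower score means a closer match between the motion curves
```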
Video content may include a capture of a particular event type. An event type may refer to a type of scene, activity, and/or occurrence captured within the video content. An event type may correspond to a portion of video content or entire video content. For example, the first video content may include a capture of a first event type and/or other event types and the second video content may include a capture of a second event type and/or other event types. In some implementations, match component 106 may determine the match(es) between the motion assessed within the first video content and the motion assessed within the second video content further based on a match between the first event type and the second event type. For example, match component 106 may determine match(es) between motions assessed within two video content based on both video content including a capture of a skating activity, including a capture of a holiday party, and/or other matches in event type.
In some implementations, match component 106 may determine the match(es) between the motion assessed within the first video content and the motion assessed within the second video content based on/regardless of a non-match between the first event type and the second event type. For example, match component 106 may determine match(es) between motions assessed within two video content based on/despite difference in type of events captured within the two video content (e.g., boating activity versus holiday party). Determining matches regardless of the match between event types may provide for a different feel of the matched video content than video content matched based on matched event types.
Video content may include a capture of a particular object/thing. A capture of the particular thing/object may correspond to a portion of video content or entire video content. For example, the first video content may include a capture of a first object/thing and/or other objects/things and the second video content may include a capture of a second object/thing and/or other objects/things. In some implementations, match component 106 may determine the match(es) between the motion assessed within the first video content and the motion assessed within the second video content further based on a match between the first object/thing and the second object/thing. For example, match component 106 may determine match(es) between motions assessed within two video content based on both video content including a capture of a particular person, animal, building, and/or other objects/things.
In some implementations, match component 106 may determine the match(es) between the motion assessed within the first video content and the motion assessed within the second video content based on/regardless of a non-match between the first object/thing and the second object/thing. For example, match component 106 may determine match(es) between motions assessed within two video content based on/despite difference in type of object/thing captured within the two video content (e.g., palm tree versus lamp post). Determining matches regardless of the match between captured objects/thing may provide for a different feel of the matched video content than video content matched based on matched object/thing.
In some implementations, match component 106 may determine match(es) between motions assessed within video content based on the direction of gravity on one or more image sensors during the capture of the video content. For example, a direction of motion in one video content may be matched with a direction of motion in another video content based on match in the direction of gravity on the image sensor(s) that captured both video content. Determining matches based on the direction of gravity may enable matching of video content including same motions with respect to ground. Determining matches based on the direction of gravity may enable matching of video content captured using the same image sensor orientation of capture with respect to ground. In some implementations, match component 106 may determine match(es) between motions assessed within video content regardless of the direction of gravity on one or more image sensors during the capture of the video content. Determining matches regardless of the match between the direction of gravity on the image sensor(s) during the capture of the video content may provide for a different feel of the matched video content than video content matched based on the direction of gravity on the image sensor(s).
Video portion component 108 may be configured to identify one or more video portions based on the match and/or other information. The video portions identified by video portion component 108 may include one or more frames of the set of video frames within which the matching motion is present. For example, video portion component 108 may identify a first video portion of the first video content and a second video portion of the second video content based on the match and/or other information. The first video portion may include one or more frames of the first set of video frames and/or other frames. The second video portion may include one or more frames of the second set of video frames and/or other frames. In some implementations, the video portions identified by video portion component 108 may include a minimum number of video frames from the set of video frames within which the matching motion is present. For example, for video content captured at 30 frames per second, the video portions identified by video portion component 108 may include five or more frames from the set of video frames within which the matching motion is present. Other numbers of video frames are contemplated.
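As a minimal sketch (under the assumptions that the matched frame indices are contiguous and that some surrounding context is wanted), a video portion could be derived from a matched frame set as follows; the padding amount and helper name are illustrative.

```python
def identify_portion(matched_frames, min_frames=5, pad_frames=15):
    """Return (start_frame, end_frame) for a portion built around matched frames."""
    if len(matched_frames) < min_frames:
        return None  # too few matched frames to yield a usable portion
    start = max(matched_frames[0] - pad_frames, 0)
    end = matched_frames[-1] + pad_frames
    return start, end
```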
For example,
Concatenation component 110 may be configured to concatenate two or more video portions. Concatenation component 110 may concatenate two or more video portions such that one or more frames of the set of video frames within which the matching motion is present in one video content is adjacent to one or more frames of the set of video frames within which the matching motion is present in another video content. For example, concatenation component 110 may concatenate the first video portion and the second video portion such that the one or more frames of the first set of video frames are adjacent to the one or more frames of the second set of video frames. The concatenation of the first video portion and the second video portion may result in a transition between the first video portion and the second video portion in which continuity of motion may be achieved.
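A hedged usage sketch of such a concatenation follows, assuming the moviepy (1.x-style) library; the file names and the subclip boundaries (in seconds) are placeholders rather than values produced by the disclosed components.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

# Placeholder portions: the end of the first subclip and the start of the
# second subclip are where the matching motion is present.
first_portion = VideoFileClip("first_video.mp4").subclip(4.0, 7.0)
second_portion = VideoFileClip("second_video.mp4").subclip(12.0, 16.0)

# Matching frames of the first portion end up adjacent to matching frames
# of the second portion, so motion continues across the cut.
edited = concatenate_videoclips([first_portion, second_portion])
edited.write_videofile("motion_matched_edit.mp4")
```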
In
In some implementations, concatenation component 110 may be configured to determine an order of two or more video portions for the concatenation of the video portions. For example, concatenation component 110 may determine an order of the first video portion and the second video portion for the concatenation of the first video portion and the second video portion. Referring to
In some implementations, concatenation component 110 may be configured to modify one or more video portions for concatenation. Concatenation component 110 may modify one or more video portions such that the motion within the modified video portion(s) better matches the motion for concatenation. For example, a motion in one video portion may proximately match (e.g., in direction and/or magnitude) a motion in another video portion for concatenation. Concatenation component 110 may modify one or both of the video portions such that the motion within the two video portions is better matched. Modification of the video portion(s) may include one or more changes in perceived speed with which the video portion is presented during playback, changes in dimensional portions of the video portion that are presented during playback (e.g., change in dimensional area presented by zoom, crop, rotation), and/or other changes. Modification of the video portion(s) may include additions of one or more transitions (e.g., crossfade, masking) using frames from the two video portions such that the motion within the two video portions is better matched.
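One such modification, a crossfade transition, is sketched below in a hedged form: the last frames of one portion are blended with the first frames of the next, assuming OpenCV and frames of identical dimensions; the overlap length and blending ramp are assumptions.

```python
import cv2

def crossfade(frames_a, frames_b, overlap=10):
    """Blend the tail of one portion with the head of the next (same-size frames assumed)."""
    blended = []
    for k in range(overlap):
        alpha = (k + 1) / (overlap + 1)  # ramp from mostly first portion to mostly second
        blended.append(cv2.addWeighted(frames_a[-overlap + k], 1.0 - alpha,
                                       frames_b[k], alpha, 0.0))
    return frames_a[:-overlap] + blended + frames_b[overlap:]
```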
Referring to
Referring to
In some implementations, concatenation of video portions may be synchronized to one or more musical tracks. Video portions may be identified and/or concatenated such that transitions between different video portions/video contents occur with one or more particular sounds in the musical track(s). For example, different lengths of video portions may be identified so that the transitions between the video portions occur with the occurrence of one or more of a beat, a tempo, a rhythm, an instrument, a volume, a vocal, a chorus, a frequency, a style, a start, an end, and/or other sounds occurring within the musical track. Video portions may be identified, concatenated, and/or ordered such that video portions of differing motion intensity (amount of magnitude and/or direction) may occur with portions of a musical track having different musical intensity (e.g., energy, volume, amplitude). For example, for a high energy (e.g., loud volume, fast tempo) portion of the musical track, a video portion with high motion (e.g., large motion magnitude) may be presented. Other synchronization of video portions to musical tracks is contemplated.
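A hedged sketch of snapping a transition point to a detected beat follows, assuming the librosa library; the file name and the nominal transition time are placeholders.

```python
import librosa

audio, sr = librosa.load("music_track.mp3")
tempo, beat_frames = librosa.beat.beat_track(y=audio, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Snap a transition nominally planned near 8 seconds to the closest beat,
# so the cut between video portions lands on the music.
target = 8.0
cut_time = min(beat_times, key=lambda t: abs(t - target))
```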
The systems/methods disclosed herein may enable generation of a video summary including video portions that are connected by motion captured within the video portions. Such generation of a video summary may provide for continuation of motion from one video portion (video clip) to another. The identification and arrangement of video portions may be unrelated to the content captured within the video portions.
Implementations of the disclosure may be made in hardware, firmware, software, or any suitable combination thereof. Aspects of the disclosure may be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a tangible computer readable storage medium may include read only memory, random access memory, magnetic disk storage media, optical storage media, flash memory devices, and others, and a machine-readable transmission media may include forms of propagated signals, such as carrier waves, infrared signals, digital signals, and others. Firmware, software, routines, or instructions may be described herein in terms of specific exemplary aspects and implementations of the disclosure, and as performing certain actions.
In some implementations, video content may include one or more of spherical video content, virtual reality content, and/or other video content. Spherical video content may refer to a video capture of multiple views from a single location. Spherical video content may include a full spherical video capture (360 degrees of capture) or a partial spherical video capture (less than 360 degrees of capture). Spherical video content may be captured through the use of one or more cameras/image sensors to capture images/videos from a location. The captured images/videos may be stitched together to form the spherical video content.
Virtual reality content may refer to content that may be consumed via a virtual reality experience. Virtual reality content may associate different directions within the virtual reality content with different viewing directions, and a user may view a particular direction within the virtual reality content by looking in a particular direction. For example, a user may use a virtual reality headset to change the user's direction of view. The user's direction of view may correspond to a particular direction of view within the virtual reality content. For example, a forward-looking direction of view for a user may correspond to a forward direction of view within the virtual reality content.
Spherical video content and/or virtual reality content may have been captured at one or more locations. For example, spherical video content and/or virtual reality content may have been captured from a stationary position (e.g., a seat in a stadium). Spherical video content and/or virtual reality content may have been captured from a moving position (e.g., a moving bike). Spherical video content and/or virtual reality content may include video capture from a path taken by the capturing device(s) in the moving position. For example, spherical video content and/or virtual reality content may include video capture from a person walking around in a music festival.
Although processor 11 and electronic storage 12 are shown to be connected to interface 13 in
Although processor 11 is shown in
It should be appreciated that although computer components are illustrated in
The description of the functionality provided by the different computer program components described herein is for illustrative purposes, and is not intended to be limiting, as any of computer program components may provide more or less functionality than is described. For example, one or more of computer program components 102, 104, 106, 108, and/or 110 may be eliminated, and some or all of its functionality may be provided by other computer program components. As another example, processor 11 may be configured to execute one or more additional computer program components that may perform some or all of the functionality attributed to one or more of computer program components 102, 104, 106, 108, and/or 110 described herein.
The electronic storage media of electronic storage 12 may be provided integrally (i.e., substantially non-removable) with one or more components of system 10 and/or removable storage that is connectable to one or more components of system 10 via, for example, a port (e.g., a USB port, a Firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage 12 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EPROM, EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Electronic storage 12 may be a separate component within system 10, or electronic storage 12 may be provided integrally with one or more other components of system 10 (e.g., processor 11). Although electronic storage 12 is shown in
In some implementations, method 200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 200 in response to instructions stored electronically on one or more electronic storage mediums. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 200.
Referring to
At operation 202, motion within the first video content and motion within the second video content may be assessed. In some implementations, operation 202 may be performed by a processor component the same as or similar to motion component 104 (Shown in
At operation 203, a match between the motion assessed within the first video content and the motion assessed within the second video content may be determined. The match may include a first set of video frames within the first video content and a second set of video frames within the second video content within which the matching motion is present. In some implementations, operation 203 may be performed by a processor component the same as or similar to match component 106 (Shown in
At operation 204, a first video portion of the first video content and a second video portion of the second video content may be identified based on the match. The first video portion may include one or more frames of the first set of video frames. The second video portion may include one or more frames of the second set of video frames. In some implementations, operation 204 may be performed by a processor component the same as or similar to video portion component 108 (Shown in
At operation 205, the first video portion and the second video portion may be concatenated. The first video portion and the second video portion may be concatenated such that one or more frames of the first set of video frames are adjacent to one or more frames of the second set of video frames. The concatenation of the first video portion and the second video portion may result in a transition between the first video portion and the second video portion in which continuity of motion may be achieved. In some implementations, operation 205 may be performed by a processor component the same as or similar to concatenation component 110 (Shown in
Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.