This disclosure relates to systems and methods that classify activities captured within images.
A standard long short-term memory unit may be used to determine attention maps. The standard long short-term memory unit may use a one-dimensional hidden state, which does not preserve local information. Local information may be important for action recognition tasks.
This disclosure relates to classifying activities captured within images. An image including a visual capture of a scene may be accessed. The image may be processed through a convolutional neural network. The convolutional neural network may generate a set of two-dimensional feature maps based on the image. The set of two-dimensional feature maps may be processed through a contextual long short-term memory unit. The contextual long short-term memory unit may generate a set of two-dimensional outputs based on the set of two-dimensional feature maps. A set of attention-masks for the image may be generated based on the set of two-dimensional outputs and the set of two-dimensional feature maps. The set of attention-masks may define dimensional portions of the image. The scene may be classified based on the set of two-dimensional outputs.
A system that classifies activities captured within images may include one or more processors, and/or other components. The processor(s) may be configured by machine-readable instructions. Executing the machine-readable instructions may cause the processor(s) to facilitate classifying activities captured within images. The machine-readable instructions may include one or more computer program components. The computer program components may include one or more of an access component, a convolutional neural network component, a contextual LSTM unit component, an attention-mask component, a classification component, and/or other computer program components.
The access component may be configured to access one or more images and/or other information. The image(s) may include visual capture of one or more scenes. One or more images may be one or more video frames of a video. The access component may access one or more images and/or other information as input into a convolutional neural network.
The convolutional neural network component may be configured to process an image through the convolutional neural network. The convolutional neural network may include a plurality of convolution layers and/or other layers. The convolutional neural network may generate a set of two-dimensional feature maps based on the image and/or other information. In some implementations, the set of two-dimensional feature maps may be generated by a last convolution layer in the convolutional neural network. In some implementations, the set of two-dimensional feature maps may be obtained from the convolutional neural network before the set of two-dimensional feature maps is flattened.
The contextual LSTM unit component may be configured to process the set of two-dimensional feature maps through a contextual long short-term memory unit. The contextual long short-term memory unit may generate a set of two-dimensional outputs based on the set of two-dimensional feature maps and/or other information. In some implementations, the set of two-dimensional outputs may be used to visualize the dimensional portions of the image. In some implementations, the set of two-dimensional outputs may be used to constrain the dimensional portions of the image.
In some implementations, the contextual LSTM unit may include a loss function characterized by a non-overlapping loss, an entropy loss, a cross-entropy loss, and/or other losses. In some implementations, the non-overlapping loss, the entropy loss, and the cross-entropy loss may be combined into the loss function through a linear combination with a first hyperparameter for the non-overlapping loss, a second hyperparameter for the entropy loss, and a third hyperparameter for the cross-entropy loss. The loss function may discourage the set of attention-masks from defining the same dimensional portion of the image across multiple time-steps.
The attention-mask component may be configured to generate a set of attention-masks for the image based on the set of two-dimensional outputs, the set of two-dimensional feature maps, and/or other information. The set of attention-masks may define dimensional portions of the image.
The classification component may be configured to classify the scene based on the set of two-dimensional outputs and/or other information. In some implementations, the classification of the scene may be performed by a fully connected layer that takes as input the set of two-dimensional outputs and/or other information. Classifying the scene may include classifying one or more activities within the scene. In some implementations, the classification component may be configured to classify a video based on classifications of one or more video frames of the video.
These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
Electronic storage 12 may be configured to include electronic storage media that electronically store information. Electronic storage 12 may store software algorithms, information determined by processor 11, information received remotely, and/or other information that enables system 10 to function properly. For example, electronic storage 12 may store information relating to images, the convolutional neural network, the contextual long short-term memory unit, attention-masks, scene classification, the loss function, and/or other information.
Processor 11 may be configured to provide information processing capabilities in system 10. As such, processor 11 may comprise one or more of a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Processor 11 may be configured to execute one or more machine readable instructions 100 to facilitate classifying activities captured within images. Machine readable instructions 100 may include one or more computer program components. Machine readable instructions 100 may include one or more of access component 102, convolutional neural network component 104, contextual LSTM unit component 106, attention-mask component 108, classification component 110, and/or other computer program components.
Access component 102 may be configured to access one or more images and/or other information. Image(s) may include visual capture of one or more scenes. One or more images may be one or more video frames of a video. Access component 102 may access one or more images and/or other information as input into a convolutional neural network. Access component 102 may access one or more images and/or other information from one or more storage locations. A storage location may include electronic storage 12, electronic storage of one or more image sensors (not shown in
Convolutional neural network component 104 may be configured to process an image through the convolutional neural network. A convolutional neural network may refer to a neural network that receives an input and transforms the input through a series of layers. A convolutional neural network may include a plurality of convolution layers and/or other layers. For example, a convolutional neural network may include one or more of an input layer, an output layer, a convolution layer, a padding layer, a squeeze layer, an expand layer, a concatenation layer, a combine layer, a pooling layer, a normalization layer, a fully-connected layer, an activation layer, a dropout layer, a flatten layer, and/or other layers.
A convolutional neural network may generate a set of two-dimensional feature maps based on the image and/or other information. The set of two-dimensional feature maps may include a two-dimensional activation map. The set of two-dimensional feature maps may be generated by any convolution layer (e.g., 2D filter bank) within the convolutional neural network. The set of two-dimensional feature maps may be processed through other layers, such as an activation layer, a normalization layer, a downsample layer, and/or other layers. The set of two-dimensional feature maps may be generated by a last convolution layer in a convolutional neural network. The set of two-dimensional feature maps may be obtained from a convolutional neural network before the set of two-dimensional feature maps is flattened.
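By way of non-limiting illustration, the following sketch (written with the PyTorch library; the layer sizes, channel counts, and names are assumptions and not part of this disclosure) shows two-dimensional feature maps being taken from the last convolution layer of a small convolutional neural network before any flattening, so that the spatial structure is preserved:

    import torch
    import torch.nn as nn

    class SmallBackbone(nn.Module):
        """Toy convolutional backbone; the layer sizes are illustrative only."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                # Last convolution layer: its output is the set of
                # two-dimensional feature maps, one map per channel.
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            )

        def forward(self, image):
            # Returned before any flatten/fully-connected layer, so the
            # two-dimensional spatial structure of the image is preserved.
            return self.features(image)

    backbone = SmallBackbone()
    image = torch.randn(1, 3, 224, 224)     # one RGB video frame
    feature_maps = backbone(image)          # shape (1, 128, 56, 56)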
Contextual LSTM unit component 106 may be configured to process the set of two-dimensional feature maps through a contextual long short-term memory unit. A long short-term memory unit may refer to a recurrent neural network architecture used to remember/retain values for durations of time. A long short-term memory unit may be implemented in blocks containing several long short-term memory units. A contextual long short-term memory unit may refer to a long short-term memory unit that incorporates contextual features into its model.
For a long short-term memory unit, images (e.g., video frames) may be processed into fixed-length representations. Each representation may be fed into the long short-term memory unit one time-step at a time. Through training, the long short-term memory unit may learn which features to keep in memory so that a determination may be made as to what activity is being performed within the images/video. Use of fixed-length representations of images may result in loss of information as to the physical location of objects within the image, because the two-dimensional spatial structure of the image has been flattened out. An attention mechanism with a standard long short-term memory unit may use a one-dimensional hidden state. The one-dimensional hidden state may not preserve local information. Local information may be important for action recognition tasks.
A contextual long short-term memory unit may be used to generate attention masks (mapping a two-dimensional image representation to a two-dimensional attention map) that directly preserve local information. The image representations that are fed into a contextual long short-term memory unit may remain a two-dimensional activation map. By taking two-dimensional representations from a convolutional neural network (e.g., before it has been flattened), local information may be preserved.
A contextual long short-term memory unit may be formulated by replacing multiplication operations of a long short-term memory unit with convolutional operations.
In a contextual long short-term memory unit, the input, hidden state, forget gate, output gate, and cell memory may be three-dimensional with shape (NCELLS, MAP_WIDTH, MAP_HEIGHT). Different size kernels may be used to capture different scales of spatio-temporal behaviors.
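By way of non-limiting illustration, the following sketch (using the PyTorch library; the channel counts, kernel size, and names are assumptions) shows one way such a cell may be formed by replacing the matrix multiplications of a standard long short-term memory unit with two-dimensional convolutions, so that the input, hidden state, gates, and cell memory all retain a (NCELLS, MAP_HEIGHT, MAP_WIDTH) shape:

    import torch
    import torch.nn as nn

    class ContextualLSTMCell(nn.Module):
        """Minimal convolutional LSTM cell sketch: matrix multiplications are
        replaced by 2-D convolutions, so gates and cell memory stay spatial."""
        def __init__(self, in_channels, n_cells, kernel_size=3):
            super().__init__()
            padding = kernel_size // 2
            # A single convolution produces the input, forget, output, and
            # candidate activations; different kernel_size values may be used
            # to capture different spatio-temporal scales.
            self.gates = nn.Conv2d(in_channels + n_cells, 4 * n_cells,
                                   kernel_size, padding=padding)

        def forward(self, x, state):
            h, c = state                              # hidden state, cell memory
            i, f, o, g = torch.chunk(
                self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
            i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
            c = f * c + i * torch.tanh(g)             # update cell memory
            h = o * torch.tanh(c)                     # two-dimensional output
            return h, c

    cell = ContextualLSTMCell(in_channels=128, n_cells=64)
    x = torch.randn(1, 128, 56, 56)                   # 2-D feature maps, one time-step
    h = torch.zeros(1, 64, 56, 56)
    c = torch.zeros(1, 64, 56, 56)
    h, c = cell(x, (h, c))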
A contextual long short-term memory unit may generate a set of two-dimensional outputs based on the set of two-dimensional feature maps and/or other information. In some implementations, the set of two-dimensional outputs may be used to visualize the dimensional portions of the image. The dimensional portions of the image may refer to regions of the image the network is using to make its decisions. In some implementations, the set of two-dimensional outputs may be used to constrain the dimensional portions of the image. The dimensional portions of the image may be constrained through non-overlapping loss and entropy loss.
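By way of non-limiting illustration, the following sketch (using the PyTorch library; the tensor shapes and blending weights are assumptions) shows one way the set of two-dimensional outputs may be used to visualize the dimensional portions of an image, by upsampling an attention map to the frame size and overlaying it:

    import torch
    import torch.nn.functional as F

    frame = torch.randn(1, 3, 224, 224)        # one video frame
    lstm_output = torch.rand(1, 64, 56, 56)    # hypothetical 2-D output, one time-step

    # Collapse the channel dimension to one map, upsample to the frame size,
    # and normalize to [0, 1] so it can be overlaid on the frame.
    attn = lstm_output.mean(dim=1, keepdim=True)
    attn = F.interpolate(attn, size=frame.shape[-2:], mode='bilinear',
                         align_corners=False)
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    overlay = 0.5 * frame + 0.5 * attn * frame  # brighter where the model attends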
In some implementations, a contextual LSTM unit may include a loss function characterized by a non-overlapping loss, an entropy loss, a cross-entropy loss, and/or other losses. The non-overlapping loss may encourage the attention maps to not focus on the same region across multiple time-steps. The non-overlapping loss may discourage the model from relying too much on the background, or on any individual region of the image. The entropy (uniformness) loss may encourage the attention maps, within one time-step, to diffuse attention across the attention maps. The entropy (uniformness) loss may discourage the model from concentrating attention too strongly on any particular cell, and may encourage the model to diffuse attention over a larger region of the image. The cross-entropy loss may include a regular cross-entropy loss between the predicted per-frame class distribution and a one-hot vector representing the correct class.
The loss function may discourage the attention maps from defining the same dimensional portion of the image across multiple time-steps. The loss function may discourage the attention maps from concentrating on one region of the image across multiple time-steps. Instead, the loss function may encourage the attention maps to cover every region of the image across multiple time-steps.
In some implementations, the non-overlapping loss, the entropy loss, and the cross-entropy loss may be combined into the loss function through a linear combination with a first hyperparameter (λ1) for the non-overlapping loss, a second hyperparameter (λ2) for the entropy loss, and a third hyperparameter (λ3) for the cross-entropy loss. The hyperparameters may be determined empirically. Changing the value(s) of the hyperparameters changes the relative importance of each loss term in the loss function.
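By way of non-limiting illustration, the following sketch (using the PyTorch library) shows one plausible instantiation of the combined loss; the exact per-term formulas, hyperparameter values, and tensor shapes are assumptions rather than part of this disclosure:

    import torch
    import torch.nn.functional as F

    def combined_loss(attn, logits, target_class, lam1=1.0, lam2=0.1, lam3=1.0):
        # attn:   (T, H, W) attention maps over T time-steps, each summing to 1
        # logits: (T, num_classes) per-frame class scores
        # target_class: integer index of the correct class
        T = attn.shape[0]
        p = attn.flatten(1)                                   # (T, H*W)

        # Non-overlapping loss: penalize attention accumulated at any location
        # over all time-steps beyond a single full "look", discouraging focus
        # on the same region across time-steps.
        non_overlap = torch.clamp(p.sum(dim=0) - 1.0, min=0.0).sum()

        # Entropy (uniformness) loss: negative entropy per time-step, so that
        # minimizing it diffuses attention over a larger region of the image.
        entropy = (p * torch.log(p + 1e-8)).sum(dim=1).mean()

        # Regular cross-entropy between the per-frame class distribution and
        # the one-hot vector representing the correct class.
        target = torch.full((T,), target_class, dtype=torch.long)
        xent = F.cross_entropy(logits, target)

        # Linear combination with hyperparameters lam1, lam2, lam3.
        return lam1 * non_overlap + lam2 * entropy + lam3 * xent

    attn = torch.softmax(torch.randn(4, 7 * 7), dim=1).reshape(4, 7, 7)
    logits = torch.randn(4, 10)
    loss = combined_loss(attn, logits, target_class=3)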
Attention-mask component 108 may be configured to generate a set of attention-masks for the image based on the set of two-dimensional outputs, the set of two-dimensional feature maps, and/or other information. The set of attention-masks may define dimensional portions of the image. At a given time-step, the output of the contextual long short-term memory unit may be multiplied elementwise with the output of the last convolution layer in a feedforward convolutional neural network. This may result in an attention-masked convolutional layer, which may be fed into an additional fully connected layer with a number of outputs equal to the number of classes.
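By way of non-limiting illustration, the following sketch (using the PyTorch library; the number of classes and tensor shapes are assumptions) shows the elementwise masking and the additional fully connected layer for one time-step:

    import torch
    import torch.nn as nn

    num_classes = 10                                 # assumed number of activity classes
    conv_features = torch.randn(1, 128, 56, 56)      # output of the last convolution layer
    lstm_output = torch.rand(1, 128, 56, 56)         # contextual LSTM output at this time-step

    # Elementwise multiplication of the two produces the attention-masked
    # convolutional layer for this time-step.
    masked = conv_features * lstm_output

    # The attention-masked features feed an additional fully connected layer
    # with a number of outputs equal to the number of classes.
    fc = nn.Linear(128 * 56 * 56, num_classes)
    logits = fc(masked.flatten(1))                   # shape (1, num_classes)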
Classification component 110 may be configured to classify the scene based on the set of two-dimensional outputs and/or other information. The classification of the scene may be performed by a fully connected layer that takes as input the set of two-dimensional outputs and/or other information. The classification of the scene may be performed based on visuals within the region(s) of the image that are the focus of the model. Classifying the scene may include classifying one or more activities within the scene. In some implementations, classification component 110 may be configured to classify a video based on classifications of one or more video frames of the video. For example, classification component 110 may classify a video by combining classification of multiple video frames within the video.
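By way of non-limiting illustration, the following sketch (using the PyTorch library; the frame count, class count, and the averaging rule are assumptions) shows one simple way per-frame classifications may be combined into a video-level classification:

    import torch
    import torch.nn.functional as F

    # Hypothetical per-frame logits for a clip of 8 video frames and 10 classes.
    frame_logits = torch.randn(8, 10)
    frame_probs = F.softmax(frame_logits, dim=1)     # per-frame class distributions

    # Average the per-frame distributions and take the most likely class as
    # the video-level classification.
    video_probs = frame_probs.mean(dim=0)
    video_class = video_probs.argmax().item()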
Although processor 11 is shown in
It should be appreciated that although computer components are illustrated in
The description of the functionality provided by the different computer program components described herein is for illustrative purposes, and is not intended to be limiting, as any of the computer program components may provide more or less functionality than is described. For example, one or more of computer program components 102, 104, 106, 108, and/or 110 may be eliminated, and some or all of their functionality may be provided by other computer program components. As another example, processor 11 may be configured to execute one or more additional computer program components that may perform some or all of the functionality attributed to one or more of computer program components 102, 104, 106, 108, and/or 110 described herein.
The electronic storage media of electronic storage 12 may be provided integrally (i.e., substantially non-removable) with one or more components of system 10 and/or removable storage that is connectable to one or more components of system 10 via, for example, a port (e.g., a USB port, a Firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage 12 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EPROM, EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Electronic storage 12 may be a separate component within system 10, or electronic storage 12 may be provided integrally with one or more other components of system 10 (e.g., processor 11). Although electronic storage 12 is shown in
In some implementations, method 200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 200 in response to instructions stored electronically on one or more electronic storage media. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 200.
Referring to
At operation 202, the image may be processed through a convolutional neural network. The convolutional neural network may generate a set of two-dimensional feature maps based on the image. In some implementations, operation 202 may be performed by a processor component the same as or similar to convolutional neural network component 104 (Shown in
At operation 203, the set of two-dimensional feature maps may be processed through a contextual long short-term memory unit. The contextual long short-term memory unit may generate a set of two-dimensional outputs based on the set of two-dimensional feature maps. In some implementations, operation 203 may be performed by a processor component the same as or similar to contextual LSTM unit component 106 (Shown in
At operation 204, a set of attention-masks for the image may be generated based on the set of two-dimensional outputs and the set of two-dimensional feature maps. The set of attention-masks may define dimensional portions of the image. In some implementations, operation 204 may be performed by a processor component the same as or similar to attention-mask component 108 (Shown in
At operation 205, the scene may be classified based on the set of two-dimensional outputs. In some implementations, operation 205 may be performed by a processor component the same as or similar to classification component 110 (Shown in
Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.