This patent document relates generally to the field of machine learning. More particularly, the present document relates to using feature encoding for storing a video stream without redundant frames.
Machine learning is an application of artificial intelligence. In machine learning, a computer or computing device is programmed to think like a human being so that it may be taught to learn on its own. The development of neural networks has been key to teaching computers to think and understand the world in the way human beings do.
Stored video occupies a large amount of digital storage, and the problem is especially acute for video taken by surveillance cameras. For the majority of such a video stream, the frames within a continuous time slice are nearly identical, yet all of these frames are stored and take up a huge amount of storage. Although video compression methods have been introduced to address this problem, the achievable compression rate is still not satisfactory. The disadvantage becomes even more obvious when a person searches the video stream for an abnormal behavior or event. For most surveillance cameras, the camera position is fixed and the captured scene is relatively unchanged for the majority of the time. It would therefore be efficient to store only the frames that show an obvious scene change from the immediately prior frame and to skip the unchanged frames. When only the frames with scene changes are saved into storage, a large amount of digital storage resources is saved. In addition, it becomes more convenient to search the video stream later for events of interest.
This section is for the purpose of summarizing some aspects of the invention and briefly introducing some preferred embodiments. Simplifications or omissions in this section, as well as in the abstract and the title herein, may be made to avoid obscuring the purpose of the section. Such simplifications or omissions are not intended to limit the scope of the invention.
Methods and systems for using feature encoding for storing a video stream without redundant frames are disclosed. According to one aspect of the disclosure, a video stream containing a plurality of frames is received in a computing system. Each frame is converted to a resolution suitable as an input image to a deep learning model based on the VGG-16 model, ResNet, or MobileNet. Respective vectors of feature encoding values of the current and immediately prior frames are obtained by performing computations of the deep learning model. A difference metric between the current frame and the immediately prior frame is determined by comparing the respective vectors using a difference measurement technique. The current frame is stored in a to-be-kept video file only when the difference metric indicates that the current frame and the immediately prior frame are different in accordance with a predefined criterion.
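For illustration purposes only, the following is a minimal sketch of this aspect in Python, assuming a TensorFlow/Keras VGG-16 backbone and OpenCV for frame capture; the function names (e.g., encode, keep_changed_frames) and the distance threshold are illustrative assumptions rather than part of the disclosure.

```python
import cv2
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# VGG-16 without its classifier head; global average pooling yields a
# 512-value feature encoding vector for each 224x224 input image.
encoder = VGG16(weights="imagenet", include_top=False, pooling="avg")

def encode(frame_bgr):
    """Convert a frame to the model input resolution and return its feature encoding vector."""
    img = cv2.cvtColor(cv2.resize(frame_bgr, (224, 224)), cv2.COLOR_BGR2RGB)
    img = preprocess_input(img[np.newaxis].astype("float32"))
    return encoder.predict(img, verbose=0)[0]            # shape: (512,)

def keep_changed_frames(video_path, threshold=0.5):
    """Yield only frames whose feature encoding differs enough from the immediately prior frame."""
    cap = cv2.VideoCapture(video_path)
    prev_vec = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        vec = encode(frame)
        # Euclidean distance is one possible difference measurement technique.
        if prev_vec is None or np.linalg.norm(vec - prev_vec) > threshold:
            yield frame                                   # frame goes into the to-be-kept video file
        prev_vec = vec
    cap.release()
```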
According to another aspect of the disclosure, a video stream containing a plurality of frames is received in a computing system. Each frame is divided into sub-frames, with each sub-frame having a resolution suitable as an input image to a deep learning model based on the VGG-16 model, ResNet, or MobileNet. Respective vectors of feature encoding values of all sub-frames of the current and immediately prior frames are obtained by performing computations of the deep learning model. A difference metric between the current frame and the immediately prior frame is determined by comparing the respective vectors using a difference measurement technique. The current frame is stored in a to-be-kept video file only when the difference metric indicates that the current frame and the immediately prior frame are different in accordance with a predefined criterion.
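Similarly, for illustration purposes only, the following sketch shows one way the sub-frame variant could be realized, reusing the hypothetical encode helper above; the 3×3 grid (yielding 9×512=4608 feature encoding values per frame) is an assumption used only for the example.

```python
import numpy as np

def encode_subframes(frame_bgr, grid=3):
    """Divide a frame into grid x grid sub-frames and concatenate their feature encoding vectors."""
    h, w = frame_bgr.shape[:2]
    vectors = []
    for r in range(grid):
        for c in range(grid):
            sub = frame_bgr[r * h // grid:(r + 1) * h // grid,
                            c * w // grid:(c + 1) * w // grid]
            vectors.append(encode(sub))       # each sub-frame is resized to 224x224 inside encode()
    return np.concatenate(vectors)            # shape: (4608,) for a 3x3 grid of 512-value vectors
```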
Objects, features, and advantages of the invention will become apparent upon examining the following detailed description of an embodiment thereof, taken in conjunction with the attached drawings.
These and other features, aspects, and advantages of the invention will be better understood with regard to the following description, appended claims, and accompanying drawings as follows:
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will become obvious to those skilled in the art that the invention may be practiced without these specific details. The descriptions and representations herein are the common means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art. In other instances, well-known methods, procedures, and components have not been described in detail to avoid unnecessarily obscuring aspects of the invention.
Reference herein to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. As used herein, the terms “vertical”, “horizontal”, “diagonal”, “left”, “right”, “top”, “bottom”, “column”, “row”, and “diagonally” are intended to provide relative positions for the purposes of description and are not intended to designate an absolute frame of reference. Additionally, as used herein, the terms “character” and “script” are used interchangeably.
Embodiments of the invention are discussed herein with reference to the accompanying figures.
Referring first to the first example process, at action 102 a video stream 210 containing a plurality of frames is received in a computing system.
Feature encoding values are output at a certain stage of a deep learning model. The layer structure of an example deep learning model 300 is shown in the accompanying drawings.
At action 104, each frame of the video stream 210 is converted to a resolution suitable as an input image to the deep learning model 300.
Next, at action 106, respective vectors of feature encoding values for two consecutive frames (i.e., current frame and immediately prior frame) are obtained by performing computations of the deep learning model 300.
Then a difference metric between the current frame and the immediately prior frame is determined at action 108. The difference metric is obtained by comparing the respective vectors of feature encoding values using a difference measurement technique.
In one embodiment, the difference measurement technique is based on Euclidean distance between the respective vectors, each of which contains a multiple of 512 floating point numbers.
In another embodiment, the difference measurement technique is cosine similarity between the respective vectors.
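For illustration purposes only, these two difference measurement techniques may be computed as in the following sketch; the thresholds applied to the resulting values belong to the predefined criterion and are not specified here.

```python
import numpy as np

def euclidean_distance(v1, v2):
    """Larger value indicates the two frames are more different."""
    return float(np.linalg.norm(v1 - v2))

def cosine_similarity(v1, v2):
    """Value closer to 1.0 indicates the two frames are more similar."""
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))
```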
In yet another embodiment, the difference measurement technique is based on a CNN model for binary classification of “different” or “similar”. Details of the binary classification are described below.
To allow binary classification for determining the difference metric, the respective vectors of feature encoding values are written into a two-dimensional (2-D) symbol 600.
At action 110, the current frame is saved into a to-be-kept video file (e.g., file 240) only when the difference metric indicates that the current frame and the immediately prior frame are different in accordance with a predefined criterion.
Finally, at action 112, each frame of the to-be-kept video file is optionally compressed with known video compression schemes, for example, Moving Picture Experts Group (MPEG) MPEG-2, MPEG-4, H.264, and VC-1.
The previous convolution-to-pooling procedure is then repeated. The reduced set of imagery data 531 is processed with convolutions using a second set of filters 540. Similarly, each overlapped sub-region 535 is processed. Another activation can be conducted before a second pooling operation 540. The convolution-to-pooling procedures are repeated for several layers; the deep learning model 300 described above is one example of such a layered structure.
This repeated convolution-to-pooling procedure is trained using a known dataset or database. For image classification, the dataset contains the predefined categories. A particular set of filters, activation, and pooling can be tuned and obtained before use for classifying imagery data, for example, a specific combination of filter types, number of filters, order of filters, pooling types, and/or when to perform activation.
To create each portion of the 2-D symbol 600, each floating point value of the feature encoding values of a frame is converted to a corresponding color or grayscale value. Depending upon the number of feature encoding values for each frame, the color or grayscale value is stored in one or more pixels of the 2-D symbol 600. For example, when the number of feature encoding values is 512, each feature value may occupy 49 pixels in a 224×224 2-D symbol. When the number of feature encoding values is 4608 (i.e., when the frame is divided into nine smaller images), each feature value may occupy 4 pixels.
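For illustration purposes only, the following sketch writes two 512-value feature encoding vectors into a 224×224 2-D symbol, with the current frame occupying the upper portion and the immediately prior frame the lower portion, each value rendered as a 7×7 (i.e., 49-pixel) grayscale patch; the min-max scaling to the 0-255 range is an assumption made for the example.

```python
import numpy as np

def to_symbol(curr_vec, prev_vec, patch=7, cols=32):
    """Pack two 512-value vectors into a 224x224 grayscale 2-D symbol."""
    symbol = np.zeros((224, 224), dtype=np.uint8)
    for half, vec in enumerate((curr_vec, prev_vec)):          # 0: upper portion, 1: lower portion
        lo, hi = float(vec.min()), float(vec.max())
        gray = ((vec - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)
        for i, g in enumerate(gray):                           # 512 values -> 32 x 16 grid of 7x7 patches
            row = half * 112 + (i // cols) * patch
            col = (i % cols) * patch
            symbol[row:row + patch, col:col + patch] = g
    return symbol                                              # input image for the binary classifier
```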
The 2-D symbol 600 is then classified by a binary classification deep learning model described below.
Due to the huge amount of computation required in a deep learning model such as a CNN, a CNN based computing system 800 is preferred.
The image processing technique 738 includes predefining two categories 742 (e.g., “Similar” and “Different”). As a result of performing the image processing technique 738, respective probabilities 744 of the categories are determined, and the 2-D symbol 600 is associated with one of the categories 742 (e.g., “Different”). In other words, the current frame is deemed different from the immediately prior frame according to the classification result of the 2-D symbol 600 in a pre-trained deep learning model.
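For illustration purposes only, the following sketch shows how a pre-trained two-category classifier could be applied to the 2-D symbol; the small Keras architecture shown here is an assumption for the example and is not the binary classification deep learning model of the disclosure.

```python
import numpy as np
from tensorflow.keras import layers, models

CATEGORIES = ("Similar", "Different")           # the two predefined categories 742

def build_classifier():
    """An illustrative stand-in for a pre-trained binary classification model."""
    return models.Sequential([
        layers.Input(shape=(224, 224, 1)),
        layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(2, activation="softmax"),  # respective probabilities of the two categories
    ])

def is_different(classifier, symbol):
    """Return True when the 'Different' category receives the higher probability."""
    probs = classifier.predict(symbol[np.newaxis, ..., np.newaxis] / 255.0, verbose=0)[0]
    return probs[1] > probs[0]
```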
Referring back to the second example process, a video stream containing a plurality of frames is first received in a computing system, and each frame is divided into sub-frames, each sub-frame having a resolution suitable as an input image to a deep learning model.
At action 126, respective vectors of feature encoding values of all sub-frames of the current frame and the immediately prior frame are obtained via a deep learning model, for example, the deep learning model 300 based on the VGG-16 model described above.
An example hardware implementation of the deep learning models described herein is now described.
The CNN based computing system 800 may be implemented on integrated circuits as a digital semiconductor chip (e.g., a silicon substrate in a single semiconductor wafer) and contains a controller 810 and a plurality of CNN processing units 802a-802b operatively coupled to at least one input/output (I/O) data bus 820. Controller 810 is configured to control various operations of the CNN processing units 802a-802b, which are connected in a loop with a clock-skew circuit (e.g., clock-skew circuit 1540 described below).
In one embodiment, each of the CNN processing units 802a-802b is configured for processing imagery data, for example, the two-dimensional symbol 600 described above.
In another embodiment, the CNN based computing system is a digital integrated circuit that is extendable and scalable. For example, multiple copies of the digital integrated circuit may be implemented on a single semiconductor chip.
All of the CNN processing engines are identical. For illustration simplicity, only a few (i.e., CNN processing engines 822a-822h, 832a-832h) are referred to herein.
Each CNN processing engine 822a-822h, 832a-832h contains a CNN processing block 824, a first set of memory buffers 826, and a second set of memory buffers 828. The first set of memory buffers 826 is configured for receiving imagery data and for supplying the already received imagery data to the CNN processing block 824. The second set of memory buffers 828 is configured for storing filter coefficients and for supplying the already received filter coefficients to the CNN processing block 824. In general, the number of CNN processing engines on a chip is 2^n, where n is an integer (i.e., 0, 1, 2, 3, . . . ).
The first and second I/O data buses 830a-830b connect the CNN processing engines 822a-822h, 832a-832h in a sequential scheme. In another embodiment, the at least one I/O data bus may have a different connection scheme to the CNN processing engines to accomplish the same purpose of parallel data input and output for improving performance.
More details of a CNN processing engine 842 in a CNN based integrated circuit are described below.
In order to achieve faster computations, a few computational performance improvement techniques have been used and implemented in the CNN processing block 844. In one embodiment, the representation of imagery data uses as few bits as practical (e.g., a 5-bit representation). In another embodiment, each filter coefficient is represented as an integer with a radix point. Similarly, the integer representing the filter coefficient uses as few bits as practical (e.g., a 12-bit representation). As a result, 3×3 convolutions can then be performed using fixed-point arithmetic for faster computations.
Each 3×3 convolution produces one convolution operations result, Out(m, n), based on the following formula:

Out(m, n) = Σ(1≤i, j≤3) In(m, n, i, j)×C(i, j) + b    Formula (1)

where:
m, n are indices of the convolution operations results;
In(m, n, i, j) is the imagery data value at position (i, j) of the 3-pixel by 3-pixel area associated with output location (m, n);
C(i, j) represents one of the 3×3 filter coefficients; and
b is an offset or bias coefficient.
Each CNN processing block 844 produces Z×Z convolution operations results simultaneously, and all CNN processing engines perform simultaneous operations. In one embodiment, the 3×3 weight or filter coefficients are each 12-bit while the offset or bias coefficient is 16-bit or 18-bit.
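For illustration purposes only, a software sketch of Formula (1) with fixed-point arithmetic is shown below; the number of fractional bits and the rescaling step are assumptions for the example and do not reflect the exact bit layout of the circuit.

```python
import numpy as np

FRAC_BITS = 8                                    # assumed radix-point position of the filter coefficients

def conv3x3_fixed(block, coeffs, bias):
    """block: (Z+2)x(Z+2) integer imagery data; coeffs: 3x3 integer filter coefficients C(i, j);
    bias: integer offset b. Returns the Z x Z convolution operations results Out(m, n)."""
    z = block.shape[0] - 2
    out = np.zeros((z, z), dtype=np.int64)
    for m in range(z):
        for n in range(z):
            # Formula (1): sum of 3x3 products plus offset, in fixed-point arithmetic.
            acc = int(np.sum(block[m:m + 3, n:n + 3].astype(np.int64) * coeffs)) + bias
            out[m, n] = acc >> FRAC_BITS         # rescale the fixed-point result back to integer imagery data
    return out
```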
To perform 3×3 convolutions at each sampling location, an example data arrangement is used in which each Z-pixel by Z-pixel block of imagery data is accompanied by additional imagery data from its neighboring blocks, as described below.
Imagery data are stored in a first set of memory buffers 846, while filter coefficients are stored in a second set of memory buffers 848. Both imagery data and filter coefficients are fed to the CNN processing block 844 at each clock cycle of the digital integrated circuit. Filter coefficients (i.e., C(3×3) and b) are fed into the CNN processing block 844 directly from the second set of memory buffers 848. However, imagery data are fed into the CNN processing block 844 via a multiplexer MUX 845 from the first set of memory buffers 846. Multiplexer 845 selects imagery data from the first set of memory buffers based on a clock signal (e.g., pulse 852).
Otherwise, multiplexer MUX 845 selects imagery data from a first neighbor CNN processing engine (i.e., the neighbor on its left side) through the clock-skew circuit 860.
At the same time, a copy of the imagery data fed into the CNN processing block 844 is sent to a second neighbor CNN processing engine (i.e., the neighbor on its right side) through the clock-skew circuit 860.
After 3×3 convolutions for each group of imagery data are performed for a predefined number of filter coefficients, the convolution operations results Out(m, n) are sent to the first set of memory buffers via another multiplexer MUX 847 based on another clock signal (e.g., pulse 851). An example clock cycle 850 is drawn for demonstrating the time relationship between pulse 851 and pulse 852. As shown, pulse 851 is one clock cycle before pulse 852; as a result, the 3×3 convolution operations results are stored into the first set of memory buffers after a particular block of imagery data has been processed by all CNN processing engines through the clock-skew circuit 860.
After the convolution operations result Out(m, n) is obtained from Formula (1), an activation procedure may be performed. Any convolution operations result Out(m, n) that is less than zero (i.e., a negative value) is set to zero; in other words, only positive values of the output results are kept. For example, a positive output value of 10.5 is retained as 10.5, while -2.3 becomes 0. This activation (i.e., a rectified linear unit) introduces non-linearity into the CNN based integrated circuit.
If a 2×2 pooling operation is required, the Z×Z output results are reduced to (Z/2)×(Z/2). In order to store the (Z/2)×(Z/2) output results in corresponding locations in the first set of memory buffers, additional bookkeeping techniques are required to track proper memory addresses such that four (Z/2)×(Z/2) output results can be processed in one CNN processing engine.
To demonstrate a 2×2 pooling operation, each 2×2 group of output results is combined into a single value, for example, by retaining the largest of the four values (max pooling) or by computing their average (average pooling).
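For illustration purposes only, a 2×2 max pooling step over the Z×Z convolution operations results may be sketched as follows; average pooling would replace the max with a mean.

```python
import numpy as np

def pool2x2(out):
    """Reduce Z x Z convolution operations results to (Z/2) x (Z/2) by keeping the largest value of each 2x2 group."""
    z = out.shape[0]                              # Z is assumed to be even
    return out.reshape(z // 2, 2, z // 2, 2).max(axis=(1, 3))
```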
An input image generally contains a large amount of imagery data. In order to perform image processing operations, an example input image 1400 (e.g., the two-dimensional symbol 600 described above) is partitioned into Z-pixel by Z-pixel blocks of imagery data.
Although the invention does not require a specific characteristic dimension of an input image, the input image may need to be resized to fit a predefined characteristic dimension for certain image processing procedures. In one embodiment, a square shape of (2^L×Z) pixels by (2^L×Z) pixels is required, where L is a positive integer (e.g., 1, 2, 3, 4, etc.). When Z equals 14 and L equals 4, the characteristic dimension is 2^4×14=224. In another embodiment, the input image is a rectangular shape with dimensions of (2^I×Z) pixels and (2^J×Z) pixels, where I and J are positive integers.
In order to properly perform 3×3 convolutions at pixel locations around the border of a Z-pixel by Z-pixel block, additional imagery data from neighboring blocks are required.
When more than one CNN processing engine is configured on the integrated circuit, each CNN processing engine is connected to its first and second neighbor CNN processing engines via a clock-skew circuit. For illustration simplicity, only the CNN processing block and the memory buffers for imagery data are described. An example clock-skew circuit 1540 connects a group of example CNN processing engines.
The CNN processing engines are connected via the example clock-skew circuit 1540 to form a loop. In other words, each CNN processing engine sends its own imagery data to a first neighbor and, at the same time, receives a second neighbor's imagery data. Clock-skew circuit 1540 can be implemented in well-known manners; for example, each CNN processing engine is connected with a D flip-flop 1542.
Although the invention has been described with reference to specific embodiments thereof, these embodiments are merely illustrative of, and not restrictive of, the invention. Various modifications or changes to the specifically disclosed example embodiments will be suggested to persons skilled in the art. For example, whereas the two-dimensional symbol has been described and shown with a specific example of a matrix of 224×224 pixels, other sizes may be used for achieving substantially similar objectives of the invention, for example, 448×448, 896×896, etc. Furthermore, whereas the first and second portions in a 2-D symbol have been shown and described as upper and lower portions, other partition schemes can be used for achieving the same, for example, left and right portions or any other partitions. Finally, whereas the number of feature values has been shown and described as 512, other multiples of 512 may be used for achieving the same; for example, MobileNet contains 1024 feature encoding values. In summary, the scope of the invention should not be restricted to the specific example embodiments disclosed herein, and all modifications that are readily suggested to those of ordinary skill in the art should be included within the spirit and purview of this application and scope of the appended claims.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/822,042, entitled “Feature Encoding Based Video Compression and Storage” and filed Mar. 21, 2019, the contents of which are hereby incorporated by reference in their entirety for all purposes.