The present invention relates to a video processing apparatus and a video processing method, and more specifically, to video processing suitable for analyzing a mode of action of a moving body in a video.
Action analysis technology for a moving body in a video is expected to be applied in fields such as surveillance video analysis, healthcare, and life logs. Video information is 3D spatiotemporal information consisting of both 2D spatial information and 1D temporal information, and thus has high complexity.
A convolutional neural network, which is well known as an effective technique in the field of still-image analysis, is also applied to action analysis in videos. For example, JP 2018-206321 A described below discloses an image processing apparatus that calculates human posture information by applying a 2D convolution operation to a still image of each frame extracted from a video and estimates a human action class based on that information.
Further, a two-stream method is known in which features are separately modeled from the spatial information of a video and from optical-flow information representing the temporal motion change of the action of a moving body in the video, and the two sets of features are finally combined by ensemble (Karen Simonyan, et al., Two-stream convolutional networks for action recognition in videos, Proceedings of the 27th International Conference on Neural Information Processing Systems, 2014).
Furthermore, 3D convolution is also proposed in which an image processing system performs convolution processing on a plurality of frames acquired in a time-series manner (Shuiwang Ji, et al., 3D Convolutional Neural Networks for Human Action Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013).
In the conventional technology according to JP 2018-206321 A, since convolution processing is applied only to individual still-image frames, the temporal sequentiality that characterizes motion is lost. Thus, it is not well suited to analyzing a human action class.
Meanwhile, the technology of “3D Convolutional Neural Networks for Human Action Recognition”, in which convolution processing is applied to a plurality of frames sampled continuously in the temporal direction, is superior to the technology of “Two-stream convolutional networks for action recognition in videos” in extracting action features of an object. However, the convolution is performed on the plurality of frames regardless of the motion flow of the moving body, which leaves this technology insufficient as a means of modeling spatiotemporal action information.
Therefore, an object of the present invention is to provide a video processing technology capable of extracting a feature amount of action of a moving body with high accuracy for a video consisting of spatiotemporal information.
In order to achieve the above object, the present invention is a video processing apparatus including a controller configured to process a video of a moving body captured by a camera, and a memory that stores a program, wherein the controller is configured to, by executing the program in the memory, sample frames output from the camera at a predetermined rate, calculate a direction of motion of the moving body based on a sequence of a plurality of the frames, and extract a feature amount of the video by performing convolution processing on the plurality of the frames based on the calculated direction. Further, the present invention is a video processing method executed by the video processing apparatus.
According to the present invention, it is possible to extract a feature amount of action of a moving body with high accuracy for a video consisting of spatiotemporal information.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. A video processing system includes a (surveillance) camera for capturing a moving body and a video processing apparatus that analyzes a video taken by the camera. The camera is connected to a network, and the video processing apparatus imports images from the camera via the network into a memory at a predetermined frame rate.
The video processing apparatus includes a controller (CPU, GPU, etc.) and the memory. The controller executes a program in the memory to perform processing for analyzing an action of a moving body (object body) based on the taken video. A frame consists of a plurality of pixels, each of which stores color information. The memory stores a program for realizing the video processing system described later, and may be a non-portable recording medium (a hard disk, a flash memory, or other storage).
The modules are implemented by the controller executing a program and/or by hardware. A module may also be referred to as a means, a function, a circuit, or a unit. The camera serves as a video acquisition module.
In the first embodiment, an action is recognized and an action class is estimated for video data that is input from the camera to the controller and is delimited by a start and an end of the action. The dense sampling module 200 performs sampling on the video at a high frame rate so that the first convolution processing module 204 can extract the features of the motion of the moving body in the video. The first convolution processing module 204 performs convolution processing along a trajectory of the motion, in other words, in a temporal direction on a plurality of frames sampled continuously.
The sparse sampling module 202 conducts sampling at a low frame rate, instead of the high frame rate used by the dense sampling module 200, so that the second convolution processing module 206 can extract features of the non-moving body in each frame. Convolution processing on the spatiotemporal video is realized by combining the convolution processing in the temporal direction (3D convolution processing) by the first convolution processing module 204 and the convolution processing in a spatial direction (2D convolution processing) by the second convolution processing module 206.
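As a minimal sketch of the two sampling paths, the snippet below splits one decoded frame sequence into a dense path and a sparse path, assuming the dense path samples α times more frames than the sparse path; the function name, array shapes, and parameter values are illustrative rather than the apparatus's actual interfaces.

```python
import numpy as np

def sample_paths(frames, alpha=4, sparse_step=8):
    """Split one decoded frame sequence into a dense path and a sparse path.

    frames      : ndarray of shape (N, H, W, C) holding the video frames
    alpha       : ratio of the dense sampling rate to the sparse sampling rate
    sparse_step : keep every `sparse_step`-th frame on the sparse path
    """
    dense_step = max(sparse_step // alpha, 1)
    dense_path = frames[::dense_step]    # high frame rate -> motion features (modules 200/204)
    sparse_path = frames[::sparse_step]  # low frame rate  -> static appearance (modules 202/206)
    return dense_path, sparse_path
```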
In the convolution processing in the spatial direction by the second convolution processing module 206, a convolution matrix is created by multiplying the pixel values (weights) of a filter called a kernel (for example, 3 pixels × 3 pixels) by the pixel values of a frame while sliding the filter from the top-left pixel to the bottom-right pixel of the frame matrix on a pixel-by-pixel basis. The convolution processing in the temporal direction will be described later. The weights of the filter (the value of each pixel) may be decided by learning.
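As an illustration of this spatial convolution, a minimal numpy sketch follows; it slides a single 3×3 kernel over one channel of a frame with stride 1 and no padding (both assumptions) and, as in typical CNN implementations, omits the kernel flip of strict convolution.

```python
import numpy as np

def conv2d_single_channel(frame, kernel):
    """Slide `kernel` over `frame` one pixel at a time (stride 1, no padding)."""
    kh, kw = kernel.shape
    out_h, out_w = frame.shape[0] - kh + 1, frame.shape[1] - kw + 1
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for y in range(out_h):                 # from the top-left pixel ...
        for x in range(out_w):             # ... to the bottom-right pixel
            window = frame[y:y + kh, x:x + kw]
            out[y, x] = np.sum(window * kernel)   # weighted sum of pixel values
    return out

# A 3x3 kernel; in the apparatus the weights would be decided by learning.
kernel = np.ones((3, 3), dtype=np.float32) / 9.0
```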
The controller realizes a control method, referred to as a channel pyramid 220 for convenience, in which the number of channels of the convolution processing is hierarchically increased or decreased depending on the frame sampling rate of the video, for unified control of a plurality of sampling paths and of the convolution processing for each path.
Then, assuming that the number of channels of the convolution processing on the frames sampled at the low rate by the second convolution processing module 206 is “C”, the number of channels of the convolution processing on the frames sampled at the high rate by the first convolution processing module 204 is “βC (β=1/α)”, where α is the ratio of the high sampling rate to the low sampling rate. This shows that, in the convolution processing by the first convolution processing module 204, the number of channels is reduced in response to the increased number of frames.
In order to fully learn information including no spatial motion change, more kernel filters are required. However, there is a problem that, if the number of frames is large and the number of kernels is also large, the speed of the 3D convolution processing drops significantly. Thus, the first convolution processing module 204 proportionally reduces the number of channels in response to the increased number of frames. The number of channels may be read as the number of filters. A plurality of filters improves the accuracy in extracting features by the convolution processing in the spatial direction on a frame. Matrices 300 and 302 are obtained by the convolution processing.
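The channel relationship of the channel pyramid can be sketched as a small helper; the values of C and α below are illustrative, and the mapping simply applies β = 1/α as stated above.

```python
def pyramid_channels(C, alpha):
    """Channel counts for the two paths of the channel pyramid.

    C     : number of channels on the sparsely sampled (low-rate) path
    alpha : ratio of the dense sampling rate to the sparse sampling rate
    """
    beta = 1.0 / alpha
    return int(beta * C), C   # dense-path channels (beta*C), sparse-path channels (C)

# Example: C = 64 and alpha = 4 give 16 channels on the dense path.
```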
The first feature extraction module 204 extracts the moving body in the video from a sequence of frames sampled over time, and further, extracts a displacement degree (or displacement amount) of a region of the moving body such as trajectory direction (or displacement direction) and displacement magnitude from the sequence of frames (motion calculation module 400). The first feature extraction module 204 performs a convolution operation based on the displacement degree (convolution execution module 402). Note that, “extract” may be paraphrased with set, determine, calculate, estimate, judge, recognize, distinguish, or the like.
The motion calculation module 400 applies “optical flow” (for example, Fleet, David J.; Weiss, Yair (2006), “Optical Flow Estimation”, in Paragios, Nikos; Chen, Yunmei; Faugeras, Olivier D. (eds.), Handbook of Mathematical Models in Computer Vision, Springer, pp. 237-257, ISBN 978-0-387-26371-7) to the sequence of a plurality of frames to calculate at least the motion displacement direction of the moving body. In optical flow, the movement of an object portion that appears commonly in two or more images, or the overall movement, is estimated and represented as vectors, using the shared appearance of that portion across the images as a clue. The Lucas-Kanade method (LK method) and the like are known. Various other methods have been proposed, and estimation by deep learning is also possible.
The motion calculation module 400 applies the optical flow to frames having the same frame size to calculate the displacement amount (displacement degree) of the motion such as the displacement direction and the displacement magnitude of the motion for each pixel of the frames. The direction and the displacement amount are expressed as a vector, which is defined as a motion vector.
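One possible per-pixel computation of these motion vectors is sketched below with OpenCV's Farneback dense optical flow; this is one concrete estimator among the methods mentioned above, and the parameter values are typical defaults rather than values from the specification.

```python
import cv2

def motion_vectors(prev_gray, next_gray):
    """Per-pixel motion vectors between two grayscale frames of the same size."""
    # Farneback parameters: pyr_scale=0.5, levels=3, winsize=15, iterations=3,
    # poly_n=5, poly_sigma=1.2, flags=0 (typical values, not from the specification).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return flow, magnitude, angle  # displacement vectors, their magnitude and direction
```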
The motion calculation module 400 applies the optical flow to frames of the same scaling size to calculate the displacement of the motion of the moving body for each frame size. The motion calculation module 400 converts or corrects the motion vectors calculated between the frames having the ¼ frame size by upsampling to the ½ frame size, and integrates the converted motion vectors into the motion vectors calculated between the frames having the ½ frame size. The integration may be an operation of averaging a plurality of motion vectors.
Next, the motion calculation module 400 converts the motion direction in the frames having the ½ frame size by upsampling to the original frame size, and integrates the converted motion direction into the motion direction calculated between the frames having the original frame size. Then, a final value of the motion direction is obtained.
When the camera is fixed at a specific point like a surveillance camera, the size of the moving body in a frame changes depending on the distance from the camera to the moving body. The motion direction of a moving body whose size is small relative to the frame size can be calculated with high accuracy by the optical flow, but the motion direction of a moving body whose size is large relative to the frame size is calculated with lower accuracy. The influence of this accuracy difference, which depends on the size of the moving body relative to the frame size, can be removed by integrating the motion direction based on the frames having the small frame size with the motion direction based on the frames having the large original frame size as described above. As a result, the motion direction is calculated more correctly, and an appropriate value thereof can be obtained more reliably.
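The coarse-to-fine integration described above might look as follows; the factor-of-two scaling of the upsampled vectors and the plain averaging are assumptions about how the conversion and integration are carried out.

```python
import cv2

def integrate_multiscale_flow(flow_quarter, flow_half, flow_full):
    """Integrate optical flow computed at 1/4, 1/2 and full frame size.

    Each upsampled flow field is also multiplied by the resize factor so that
    the vectors are expressed in pixels of the target resolution (an assumption),
    and integration is a plain average of the two fields.
    """
    h2, w2 = flow_half.shape[:2]
    up_quarter = cv2.resize(flow_quarter, (w2, h2), interpolation=cv2.INTER_LINEAR) * 2.0
    flow_half = (flow_half + up_quarter) / 2.0        # integrate 1/4 size into 1/2 size

    h1, w1 = flow_full.shape[:2]
    up_half = cv2.resize(flow_half, (w1, h1), interpolation=cv2.INTER_LINEAR) * 2.0
    return (flow_full + up_half) / 2.0                # integrate 1/2 size into full size
```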
Next, the convolution execution module 402 will be described. Conventional 3D convolution processing in the temporal direction is performed by executing a filter-based convolution operation on each of time-series frames sampled from the camera video and by linearly combining results of the operations on the plurality of frames.
However, the conventional convolution is performed on pixels at the same position across the plurality of frames, even though the coordinates of the pixels related to the motion often differ significantly from frame to frame, which causes a failure to capture the change in the motion. Thus, the conventional 3D convolution processing has been unsuitable as a modeling means for a moving body having spatiotemporal action information.
Motion 700 is the motion of a moving body, and a motion displacement direction 702 is calculated by the optical flow. P_{t,k} represents the coordinates of the center point of a window having the same size as the kernel size S² (S×S pixels). k ≤ N, where N is the number of windows, which depends on the spatial stride when sliding a kernel from the top left to the bottom right. P_{t−Δt,k} and P_{t+Δt,k} represent the coordinates of the centers of the windows in the frames before and after the time t, corresponding to P_{t,k}, and are calculated according to the motion displacement direction.
A kernel 706 with center coordinates P_{t,k} is used for the convolution operation on the frame f_t, a kernel 708 with center coordinates P_{t−Δt,k} is used for the convolution operation on the frame f_{t−Δt}, and a kernel 710 with center coordinates P_{t+Δt,k} is used for the convolution operation on the frame f_{t+Δt}.
A relationship between the center coordinates of these three kernels is as follows.
P_{t−Δt,k} = P_{t,k} + w_{t−Δt} · P_{t,k}

P_{t+Δt,k} = P_{t,k} + w_{t+Δt} · P_{t,k}

where w is the displacement direction and degree of motion calculated from the optical flow.
In this way, when the moving body is displaced in a certain direction, the center coordinates of the kernel filters for the plurality of frames differ from each other in accordance with that displacement.
The center coordinates in the respective frames of the three kernels linked together by the motion 700 differ from each other according to the motion displacement direction 702.
The convolution execution module 402 performs 3D convolution on the frames f_{t−Δt}, f_t, and f_{t+Δt} each time the kernel 706 is slid by one pixel from the top left (P_{t,k=0}) to the bottom right of the frame f_t.
That is, based on the three kernels associated with each other by the motion direction 702, the convolution execution module 402 performs convolution on pixels of the frame f_{t−Δt} with the kernel 708 (center coordinates: P_{t−Δt,k}), convolution on pixels of the frame f_t with the kernel 706 (center coordinates: P_{t,k}), and convolution on pixels of the frame f_{t+Δt} with the kernel 710 (center coordinates: P_{t+Δt,k}), and linearly combines the results of the convolution operations to achieve the 3D convolution processing.
In this 3D convolution processing, the convolution operations are performed together on a plurality of frames sampled at a certain time and before and after the time. However, the 2D convolution by the second convolution processing module 206 is different in that the convolution operation is performed on one frame.
In this way, the convolution execution module 402 executes the convolution processing in the temporal direction for motion extraction on the plurality of frames based on pixels (frame pixels) at different positions according to the motion displacement direction. Therefore, it is possible to extract the feature amount for the motion according to the motion flow of the moving body with high accuracy. As a result, the accuracy in the action recognition, the action analysis, etc. for a moving person or the like is dramatically improved.
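A single-channel numpy sketch of this motion-guided 3D convolution follows. It reads the relationship above as adding the flow vector at P_{t,k} to obtain the displaced kernel centers in the preceding and following frames; the border clamping, the (row, column) layout of the displacement fields, and the function names are illustrative assumptions.

```python
import numpy as np

def trajectory_conv3d(f_prev, f_t, f_next, w_prev, w_next, kernels, S=3):
    """3D convolution guided by the motion displacement direction (one channel).

    f_prev, f_t, f_next : frames at t-dt, t, t+dt, each of shape (H, W)
    w_prev, w_next      : per-pixel displacement fields, shape (H, W, 2), giving the
                          (row, col) offset from a point in f_t to its position in
                          f_prev / f_next (one reading of the relation above)
    kernels             : three SxS weight arrays, one per frame (kernels 708/706/710)
    """
    H, W = f_t.shape
    r = S // 2
    out = np.zeros((H - 2 * r, W - 2 * r), dtype=np.float32)

    def window(frame, cy, cx):
        cy = int(np.clip(round(float(cy)), r, H - 1 - r))   # clamp at the borders
        cx = int(np.clip(round(float(cx)), r, W - 1 - r))
        return frame[cy - r:cy + r + 1, cx - r:cx + r + 1]

    for y in range(r, H - r):            # slide the kernel over f_t, one pixel at a time
        for x in range(r, W - r):
            p_prev = (y + w_prev[y, x, 0], x + w_prev[y, x, 1])   # displaced centers
            p_next = (y + w_next[y, x, 0], x + w_next[y, x, 1])
            acc = np.sum(window(f_prev, *p_prev) * kernels[0])
            acc += np.sum(window(f_t, y, x) * kernels[1])
            acc += np.sum(window(f_next, *p_next) * kernels[2])
            out[y - r, x - r] = acc      # linear combination of the three results
    return out
```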
Thus, it is necessary to convert the shape of the tensor of the dense path. The resizing module 208 applies 3D convolution processing with a temporal stride set to α to the tensor of the dense path so that the number of output channels is αβC (β=1/α), thereby converting the shape of the tensor to {T, S, αβC}. The lateral combining module 210 executes an ensemble operation, such as concatenation or summation, on the converted tensor and the tensor of the sparse path for each frame. The lateral combining module 210 performs average pooling for each frame on the combined tensor to acquire the feature amounts of the frames, and further performs global pooling on the feature amounts of the frames to acquire a feature amount of the video. The feature amount of the video is output to the video feature amount extraction module 212.
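A shape-level sketch of the resizing and lateral combination follows; a strided temporal average stands in for the learned 3D convolution with temporal stride α, so the channel expansion to αβC is not reproduced, and concatenation is used as the ensemble operation.

```python
import numpy as np

def combine_paths(dense, sparse, alpha):
    """Lateral combination of the dense and sparse paths (shape-level sketch).

    dense  : dense-path tensor, shape (alpha*T, S, S, beta*C)
    sparse : sparse-path tensor, shape (T, S, S, C)
    """
    T = sparse.shape[0]
    resized = dense.reshape(T, alpha, *dense.shape[1:]).mean(axis=1)  # (T, S, S, beta*C)
    combined = np.concatenate([sparse, resized], axis=-1)   # ensemble by concatenation
    per_frame = combined.mean(axis=(1, 2))                  # average pooling per frame
    video_feature = per_frame.mean(axis=0)                  # global pooling over the frames
    return video_feature
```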
The video feature amount extraction module 212 converts the combined tensor into a vector, resulting in extraction of the video feature amount.
The action estimation module 214 outputs an action class corresponding to the input video through a fully connected layer and softmax processing using the extracted video feature amount. Thus, it is possible to estimate the contents of an action for clipped video data (a video trimmed at the start and end times of the action) given from the camera to the video processing apparatus.
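The classification step can be sketched as a single fully connected layer followed by softmax; the weight matrix, bias, and label list below are hypothetical stand-ins for the learned parameters.

```python
import numpy as np

def estimate_action_class(video_feature, W, b, labels):
    """One fully connected layer followed by softmax over the action classes.

    W : (num_classes, feature_dim) weight matrix, b : (num_classes,) bias;
    both hypothetical stand-ins for the learned parameters.
    """
    scores = W @ video_feature + b
    scores = scores - scores.max()                    # for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()     # softmax
    return labels[int(np.argmax(probs))], probs       # action class and class scores
```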
An action detection system of the second embodiment includes an action start/end likelihood determination module 900.
The action start/end likelihood determination module 900, which is configured with a Gaussian mixture model (GMM) including K independent clusters, learns the start of an action and the end of an action in advance based on training frame data, learns weights based on a predictive coding method, and calculates the likelihood of the “start of action” and the “end of action” for each frame based on the results of the learning.
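A rough sketch of the per-frame likelihood computation follows, using scikit-learn's GaussianMixture as a stand-in for the module's model; the feature design, the value of K, and the use of log-likelihoods are assumptions, and the predictive-coding weight learning mentioned above is not reproduced here.

```python
from sklearn.mixture import GaussianMixture

def fit_start_end_models(start_frame_feats, end_frame_feats, K=4):
    """Fit one GMM with K clusters on 'start of action' training frame features
    and one on 'end of action' features (feature design and K are assumptions)."""
    start_gmm = GaussianMixture(n_components=K).fit(start_frame_feats)
    end_gmm = GaussianMixture(n_components=K).fit(end_frame_feats)
    return start_gmm, end_gmm

def start_end_likelihood(start_gmm, end_gmm, frame_feats):
    """Per-frame log-likelihoods of being a start frame or an end frame."""
    return start_gmm.score_samples(frame_feats), end_gmm.score_samples(frame_feats)
```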
The candidate movement interval generation module 902 compares the indices of the frames in the start frame list with the indices of the frames in the end frame list in each of a plurality of the clusters. When the index of an end frame is larger than the index of a start frame, the pair of the start frame and the end frame is regarded as the start and end of a candidate movement interval, and the indices of the start frame and the end frame are output.
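The pairing rule can be sketched as follows; the lists of start and end frame indices are assumed to come from thresholding the likelihoods computed by module 900, which is not shown here.

```python
def candidate_intervals(start_indices, end_indices):
    """Pair each start-frame index with every later end-frame index."""
    intervals = []
    for s in start_indices:
        for e in end_indices:
            if e > s:                        # the end frame must come after the start frame
                intervals.append((s, e))     # one candidate movement interval
    return intervals
```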
An action estimation module 214 estimates an action of the moving body for a video clip 904 corresponding to each candidate movement interval generated by the candidate movement interval generation module 902, based on the video feature amounts of the frames contained in the video clip 904, through a multi-layer perceptron (MLP) or the like. The action estimation module 214 calculates action class scores by softmax, and outputs the action label corresponding to the highest score among the action class scores. This estimation is performed for all the candidate movement intervals generated by the candidate movement interval generation module 902.
A redundant interval suppression module 910 performs non-maximum suppression (NMS) to filter out redundant intervals, using, for each video clip, the action label corresponding to argmax(P) and its probability taken from the probability list P over the action classes obtained by the estimation, together with the start and end times (frames) of the corresponding video clip. As a result, the most probable action label for the video clip is decided with redundant intervals removed.
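A minimal sketch of temporal non-maximum suppression over the candidate intervals follows; the proposal tuple layout and the IoU threshold are assumptions.

```python
def temporal_nms(proposals, iou_threshold=0.5):
    """Keep the most probable, non-overlapping candidate intervals.

    proposals : list of (start, end, action_label, probability) per video clip,
                where probability corresponds to argmax over the class scores.
    """
    def tiou(a, b):
        inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0

    kept = []
    for p in sorted(proposals, key=lambda x: x[3], reverse=True):
        if all(tiou(p, q) <= iou_threshold for q in kept):
            kept.append(p)   # discard proposals that overlap a more probable one
    return kept
```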
The embodiments described above are examples of the present invention and do not limit the technical scope of the present invention. For example, the above-described embodiments have two sampling paths, but may have three or more. Further, the above-described 3D convolution operation along the direction of motion is performed on three frames at a certain time and before and after the time, but may be performed on more frames. Furthermore, in the above-described embodiments, a video taken by the camera is processed in real time, but a video recorded in the storage may be processed in batch processing by the video processing apparatus. Furthermore, the video processing by the video processing apparatus may be provided to a user as a cloud service for analyzing a surveillance video possessed by the user.
Number | Date | Country | Kind |
---|---|---|---|
2020-083938 | May 2020 | JP | national |