The disclosed embodiments are directed to producing a video abstract from video using signed foreground extraction and fusion. More specifically, the disclosed embodiments are directed to producing a video abstract from surveillance video using signed foreground extraction and fusion with background update calculations being performed using a graphics processing unit.
In cities and other localities, there is an increasingly large number of surveillance cameras used in the streets, schools, hospitals, stadiums, and other public places. A large quantity of surveillance video is produced every day, which puts a great deal of pressure on facilities for storing the surveillance video and those who must study the surveillance video, e.g., law enforcement officials.
Conventionally, various processes may be used to generate a video summary from surveillance video. In the generation of a video summary, e.g., a video abstract, based on motive objects, extracting motion foreground objects may be done by first detecting the motion area and cutting the motive object out of the source frame. Then, the motive object picture is integrated into the corresponding background image by erasing the corresponding location on the background picture and putting the motive object image in its place. Because of the effects of light changes, foreground images tend not to blend well into the background, and they leave a clear border shadow, which negatively affects the quality of the generated summary video.
In surveillance video, the most influential factor in the background is light. To adapt a background to the actual environment, an algorithm may be used to update the background. However, due to the large amount of video data in typical installations, and the complexity of the applied algorithms, background updating requires a significant amount of time, which significantly affects the speed of video summary generation.
The disclosed embodiments are directed to systems and methods for merging foreground motive object images into background images without flaws such as border shadow to generate high quality summary videos (i.e., video abstracts).
In one aspect, the disclosed embodiments provide a system and method for producing a video abstract from video produced by a surveillance video camera. The method includes retrieving a frame of the video produced by the camera, updating a background frame based at least in part on the retrieved frame, and performing a video division process to separate static frames from motive frames. The video division process includes retrieving, if it is determined that the retrieved frame is a static frame, a next frame of the video produced by the camera, updating the background frame, and repeating the video division process. The video division process further includes extracting, if it is determined that the retrieved frame is a motive frame, foreground data from the retrieved frame after conversion of the retrieved frame to a signed data type. The method for producing the video abstract further includes determining whether an end of the video produced by the camera has been reached, retrieving, if it is determined that the end of the video produced by the camera has not been reached, a next frame of the video produced by the camera, updating the background frame, and repeating the video division process. The method further includes merging, if it is determined that the end of the video produced by the camera has been reached, the foreground data with the updated background data after conversion of the foreground data and the updated background data to the signed data type, and generating a video abstract with the merged foreground and background data.
The disclosed embodiments involve the generation of video abstracts, which are a form of video compression. This is an effective method of compressing the video, which will help to solve the video storage problem. As explained in further detail below, the disclosed techniques extract the pertinent contents of a long video and use them to generate a short video, so one can get the contents of concern quickly.
In general, surveillance video is composed of a static background and a dynamic foreground, so it can be divided into dynamic frames (i.e., frames containing motive objects) and static frames (i.e., frames without motive objects) based on the state of each frame. Most of the time, the static contents of a video are not of interest, so one can extract the motive objects in the dynamic frames and integrate these motive targets into the corresponding background to generate a video abstract.
The disclosed embodiments provide a solution based on signed foreground extraction and fusion (S-FEF) and use of a graphics processing unit (GPU). In this solution, S-FEF is used to extract and merge the foreground to reduce the influence of shadows, and a GPU is used to accelerate background updating.
A flow diagram for generating a surveillance video abstract based on S-FEF and using a GPU is shown in
First, an initial background is obtained by choosing a frame of background without any moving objects from the start of surveillance video (step 205). This background will be updated (step 215) with the necessary changes over time, e.g., changes due to variations in light throughout a day and night.
A frame of the surveillance video is read (step 210) using, e.g., an application programming interface (API) such as the Open Source Computer Vision Library ("OpenCV") to decode and read the surveillance video. OpenCV is an open-source, cross-platform computer vision library.
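By way of illustration only, a minimal sketch of this frame-reading step using the OpenCV C++ API might look as follows; the file name and loop structure are illustrative assumptions rather than part of the disclosed embodiments:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Open the surveillance video file (the path is a hypothetical example).
    cv::VideoCapture cap("surveillance.mp4");
    if (!cap.isOpened()) return -1;

    cv::Mat frame;
    while (cap.read(frame)) {   // step 210: read one frame per iteration
        // ... background updating and the video division process would be applied here ...
    }
    return 0;
}
```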
The next steps relate to a video division process 220 based on the gray-level average value of the pixels obtained using a continuous frame difference method. In the video division process, a video frame is converted from red-green-blue (RGB) space to gray space (e.g., 256 gray levels) (step 225). The nth gray scale frame is subtracted from the (n+1)th gray scale frame of the video and the absolute value of the result for each pixel is obtained (step 230), in accordance with formula (1). An average value is then determined for the gray scale pixels resulting from the difference calculation, as shown in formula (2).
$$\mathrm{diff\_fgray}_n[i][j] = \bigl|\,\mathrm{fgray}_{n+1}[i][j] - \mathrm{fgray}_n[i][j]\,\bigr| \qquad (1)$$

$$\mathrm{Ave}_n = \frac{\sum_{i=0}^{R-1}\sum_{j=0}^{C-1} \mathrm{diff\_fgray}_n[i][j]}{R \cdot C} \qquad (2)$$
In formula (1), fgray_n[i][j] represents the gray value of pixel (i,j) in the nth frame, fgray_{n+1}[i][j] represents the gray value of pixel (i,j) in the (n+1)th frame, and diff_fgray_n[i][j] represents the absolute value of the difference for pixel (i,j) between the nth frame and the (n+1)th frame. In formula (2), R represents the number of pixel rows of the video frame, C represents the number of pixel columns of the video frame, and Ave_n is the average value of the grayscale pixels resulting from the difference calculation between the nth frame and the (n+1)th frame.
The average grayscale differential computed in accordance with formula (2) is compared to a defined threshold (step 235). If the average grayscale differential does not exceed the threshold, then another frame of the surveillance video is read and subjected to the background updating and video division processes. If, on the other hand, the average grayscale differential exceeds the defined threshold, then the foreground data is extracted (step 240) and saved (step 245) before another frame of the surveillance video is read. The calculations necessary to perform the background update (step 215) require substantial computer resources. Therefore, as explained in further detail below, the graphics processing unit (GPU) may be used to perform these calculations to improve performance and efficiency.
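As an illustrative sketch of the video division test itself (steps 225 through 235), formulas (1) and (2) and the threshold comparison might be implemented with OpenCV as follows; the default threshold value is an assumed example, not a value prescribed by the disclosed embodiments:

```cpp
#include <opencv2/opencv.hpp>

// Returns true if the (n+1)th frame is a motive frame relative to the nth frame,
// per formulas (1) and (2) and the threshold comparison of step 235.
// The default threshold is an illustrative assumption.
bool isMotiveFrame(const cv::Mat& frameN, const cv::Mat& frameN1, double threshold = 5.0) {
    cv::Mat grayN, grayN1, diff;
    cv::cvtColor(frameN,  grayN,  cv::COLOR_BGR2GRAY);  // step 225: convert to gray (OpenCV stores frames as BGR)
    cv::cvtColor(frameN1, grayN1, cv::COLOR_BGR2GRAY);
    cv::absdiff(grayN1, grayN, diff);                    // formula (1): |fgray_{n+1} - fgray_n| per pixel
    double ave = cv::mean(diff)[0];                      // formula (2): average over the R*C pixels
    return ave > threshold;                              // step 235: motive frame if the average exceeds the threshold
}
```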
In disclosed embodiments, the background is updated using a single Gaussian model, with the GPU performing the calculations (step 215). The single Gaussian model is suitable for updating a relatively stable background. Under this model, it is assumed that each position in the image is independent of every other position and that the pixel values at each position follow a Gaussian distribution, e.g., as shown in formula (3):

$$p(x_{ij}) = \frac{1}{\sqrt{2\pi}\,\sigma_{ij}} \exp\!\left(-\frac{(x_{ij}-\mu_{ij})^{2}}{2\sigma_{ij}^{2}}\right) \qquad (3)$$
In formula (3), x_ij represents the gray value of the pixel whose coordinates are (i,j); μ_ij represents the average gray value of pixel (i,j); σ²_ij represents the variance of pixel (i,j); and p(x_ij) represents the probability of pixel (i,j). A pixel whose value is sufficiently probable under this background distribution is considered to be a background point; otherwise, it is considered to be a foreground point.
When the scene in a surveillance video changes, such as, for example, due to changes in lighting conditions, motive objects, etc., the Gaussian model can update the background based on such changes of condition using formula (4):
$$B_t = \alpha I_t + (1-\alpha) B_{t-1} \qquad (4)$$
In formula (4), α is the update coefficient, B_t represents the current background, B_{t-1} represents the background of the previous frame, and I_t represents the current input image.
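For illustration, the global form of the update in formula (4) can be expressed compactly with OpenCV, as sketched below; the value of α is an assumed example, and the per-pixel, thresholded form actually described in formulas (5) through (7) below updates only background points rather than the whole frame:

```cpp
#include <opencv2/opencv.hpp>

// Illustrative sketch of formula (4): B_t = alpha * I_t + (1 - alpha) * B_{t-1}.
// frame and background are assumed to have the same size and type; alpha is an example value.
void updateBackgroundGlobal(const cv::Mat& frame, cv::Mat& background, double alpha = 0.05) {
    cv::addWeighted(frame, alpha, background, 1.0 - alpha, 0.0, background);
}
```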
To update the background using the single Gaussian model (step 215), the following parameters are initialized: α, λ, and θ[i][j], where α represents the updating speed, λ represents a threshold parameter, and θ[i][j] represents the variance of each pixel between the frame and the background.
The video frame and the corresponding background frame are converted to a signed data type. Then, the absolute value of the difference between the video frame and the corresponding background is calculated using formula (5):
$$sub[i][j] = \bigl|\,f[i][j] - b[i][j]\,\bigr| \qquad (5)$$
In formula (5), f[i][j] represents the signed data value of each pixel in a video frame, b[i][j] represents the signed data value of each pixel in a corresponding background.
If the computed differential, sub[i][j], for a particular background pixel, b_{t-1}[i][j], is less than the threshold, in accordance with sub[i][j] < λ*θ[i][j], then the background is updated for that pixel as follows:
$$b_t[i][j] = \alpha \cdot f[i][j] + (1-\alpha) \cdot b_{t-1}[i][j] \qquad (6)$$
In formula (6), f[i][j] represents the signed data value of each pixel in the video frame, b_{t-1}[i][j] represents the signed data value of each pixel in the background before being updated, b_t[i][j] represents the signed data value of each pixel in the background after being updated, and α is the parameter representing the updating speed, as noted above. In this manner, the background is updated on a pixel-by-pixel basis.
If, on the other hand, the computed differential, sub[i][j], for a particular background pixel, b_{t-1}[i][j], is greater than or equal to the threshold, in accordance with sub[i][j] ≥ λ*θ[i][j], then the pixel is considered to be a foreground point.
Based on the updated background, the parameter representing the variance between frame and background pixels, θ[i][j], is also updated, as follows:
$$\theta_t[i][j] = \sqrt{(1-\alpha)\cdot\theta_{t-1}[i][j] + \alpha\cdot\bigl(f[i][j]-b_t[i][j]\bigr)^{2}} \qquad (7)$$
In formula (7), f[i][j] represents the signed data value of each pixel in the video frame, b_t[i][j] represents the signed data value of each pixel in the background after updating, θ_{t-1}[i][j] represents the value of the variance parameter before updating, and θ_t[i][j] represents the value of the variance parameter after updating.
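A CPU-side reference sketch of this per-pixel single Gaussian update (formulas (5) through (7)) is shown below; the values of α and λ are assumed examples, the frame and background are taken to be single-channel grayscale images of equal size, and signed arithmetic is approximated here with floating-point values:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Reference sketch of the per-pixel single Gaussian background update
// described by formulas (5)-(7). Parameter defaults are illustrative assumptions.
void updateBackgroundCPU(const cv::Mat& frame, cv::Mat& background, cv::Mat& theta,
                         float alpha = 0.05f, float lambda = 2.5f) {
    for (int i = 0; i < frame.rows; ++i) {
        for (int j = 0; j < frame.cols; ++j) {
            // Work with signed (here floating-point) values so negative differences are not lost.
            float f = static_cast<float>(frame.at<uchar>(i, j));
            float b = static_cast<float>(background.at<uchar>(i, j));
            float sub = std::fabs(f - b);                              // formula (5)
            if (sub < lambda * theta.at<float>(i, j)) {                // background point
                float bt = alpha * f + (1.0f - alpha) * b;             // formula (6)
                background.at<uchar>(i, j) = static_cast<uchar>(bt);
                theta.at<float>(i, j) = std::sqrt(                     // formula (7)
                    (1.0f - alpha) * theta.at<float>(i, j)
                    + alpha * (f - bt) * (f - bt));
            }
            // Otherwise the pixel is treated as a foreground point and the background is left unchanged.
        }
    }
}
```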
After the extraction (step 240) and saving (step 245) of the foreground data, it is determined whether the end of the video has been reached (step 250). If the end of the video has been reached, the foreground data is merged with the background (step 255) in a process discussed below (see
As noted above, the background update calculations require substantial computer resources. Therefore, the GPU is used to perform these calculations to improve performance and efficiency. Specifically, as depicted in
The memory of the GPU, which may be generally referred to as "GPU-side memory" or "device-side memory," is allocated to store particular data (step 305). For example, GPU-side memory is allocated for the frame-background variance parameter, θ[i][j], with a signed data type and an array of size Sz.
$$S_z = R \cdot C \qquad (8)$$
In formula (8), R represents the number of rows of the video frame in pixels and C represents the number of columns of the video frame in pixels.
GPU-side memory is allocated for the video frame, with the data type being “unsigned char” and the array size being Sz. GPU-side memory is similarly allocated for the background frame.
After the GPU memory has been allocated (step 305), the initial background is copied to its allocated portion of the device-side memory (step 310). This is done as part of the step of obtaining an initial background frame, as shown, e.g., in
After the initialization described above, the GPU is used to perform the background updating calculations. As shown, for example, in
A step of reading a frame of surveillance video (step 315) is presented in
A suitable number of GPU processing threads are allocated depending upon the size of the frame (step 325). For example, if a frame size is m columns by n rows, with c channels (e.g., the three color channels, red, green, and blue, of an RGB image), then the number of threads allocated may be m×n×c, which would provide for each pixel to have a thread for each of its RGB components. A kernel function is started to perform the background update calculations (step 330), which are described above.
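By way of illustration only, the device-side allocation, host-to-device copies, kernel launch, and copy back (steps 305 through 335) might be sketched in CUDA C++ as follows. For simplicity, this sketch assumes a single-channel (grayscale) frame with one thread per pixel and an assumed block size of 256 threads; the m×n×c thread allocation for RGB frames described above is handled analogously, and the α and λ values used here are illustrative assumptions:

```cpp
#include <cuda_runtime.h>
#include <cmath>

// Kernel function (step 330): each thread updates one background element
// according to formulas (5)-(7).
__global__ void updateBackgroundKernel(const unsigned char* frame, unsigned char* background,
                                        float* theta, int n, float alpha, float lambda) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;
    float f = static_cast<float>(frame[idx]);               // signed arithmetic on the device
    float b = static_cast<float>(background[idx]);
    float sub = fabsf(f - b);                                // formula (5)
    if (sub < lambda * theta[idx]) {                         // background point
        float bt = alpha * f + (1.0f - alpha) * b;           // formula (6)
        background[idx] = static_cast<unsigned char>(bt);
        theta[idx] = sqrtf((1.0f - alpha) * theta[idx]       // formula (7)
                           + alpha * (f - bt) * (f - bt));
    }
}

// Host-side sketch for one frame of Sz = R*C elements (steps 305-335).
void updateBackgroundGPU(const unsigned char* hostFrame, unsigned char* hostBackground,
                         float* hostTheta, int Sz) {
    unsigned char *dFrame, *dBackground;
    float* dTheta;
    cudaMalloc(&dFrame, Sz);                                 // step 305: device-side allocation
    cudaMalloc(&dBackground, Sz);
    cudaMalloc(&dTheta, Sz * sizeof(float));
    cudaMemcpy(dBackground, hostBackground, Sz, cudaMemcpyHostToDevice);       // step 310: copy background
    cudaMemcpy(dTheta, hostTheta, Sz * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dFrame, hostFrame, Sz, cudaMemcpyHostToDevice);                 // step 320: copy current frame

    int threadsPerBlock = 256;                               // step 325: thread allocation (assumed block size)
    int blocks = (Sz + threadsPerBlock - 1) / threadsPerBlock;
    updateBackgroundKernel<<<blocks, threadsPerBlock>>>(dFrame, dBackground, dTheta,
                                                        Sz, 0.05f, 2.5f);      // step 330

    cudaMemcpy(hostBackground, dBackground, Sz, cudaMemcpyDeviceToHost);       // step 335: copy back to host
    cudaMemcpy(hostTheta, dTheta, Sz * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dFrame); cudaFree(dBackground); cudaFree(dTheta);
}
```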
The resulting updated background is copied back to the host side (i.e., the memory of the CPU) (step 335) and the video division process (see
Experiments were conducted to confirm the acceleration of processing speed which is obtained by using the GPU to perform background updating calculations. As shown in
$$S_r = T_c / T_g \qquad (9)$$

In formula (9), S_r represents the speedup ratio, T_c represents the time required to perform the background update calculations using the CPU alone, and T_g represents the corresponding time when the GPU is used.
In general, only the dynamic parts of a surveillance video are of interest, i.e., frames which contain moving objects, which may be referred to as “motive frames”. Motive frames can be detected by determining whether a significant change has occurred with respect to the previous frame.
As shown in
After the surveillance video has been reduced to a series of motive segments, it is desirable to isolate the particular portions of the motive segments which include moving objects. In short, the extraction of the moving objects is done by subtracting from each motive frame its corresponding background. The isolated moving objects are then fused with, i.e., superimposed on, a particular background.
Typically, the data type of a video frame is unsigned char, i.e., values in the form of character data without an indication of positive or negative. In subtraction between foreground and background, values smaller than zero are dropped, which results in some information being lost. By contrast, in the disclosed embodiments, the extraction of the moving objects is done using signed foreground extraction and fusion (S-FEF) to preserve as much information as possible. When performing foreground/background subtraction, the data of the video frames is converted to a signed type, thereby preventing the loss of negative subtraction results. Preserving this additional information results in a higher-quality video abstract.
As shown in
$$fore_s[i][j] = f_s[i][j] - back_s[i][j] \qquad (10)$$
In formula (10), f_s[i][j] represents the signed value of the motive frame, back_s[i][j] represents the signed value of the corresponding background, and fore_s[i][j] represents the signed foreground value.
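As an illustrative sketch only, this signed foreground extraction might be expressed with OpenCV as follows; CV_16S is used here merely as one convenient signed representation:

```cpp
#include <opencv2/opencv.hpp>

// Sketch of formula (10): signed foreground extraction (S-FEF).
// The motive frame and its background are converted to a signed type before
// subtraction so that negative differences are preserved rather than clipped to zero.
cv::Mat extractSignedForeground(const cv::Mat& motiveFrame, const cv::Mat& background) {
    cv::Mat fs, bs, foreSigned;
    motiveFrame.convertTo(fs, CV_16S);
    background.convertTo(bs, CV_16S);
    cv::subtract(fs, bs, foreSigned);   // fore_s = f_s - back_s; values may be negative
    return foreSigned;
}
```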
As shown in
$$f_{as}[i][j] = fore_{as}[i][j] + back_{as}[i][j] \qquad (11)$$
In formula (11), fore_as[i][j] represents the signed pixel values of the foreground to be fused, back_as[i][j] represents the signed pixel values of the background to be fused, and f_as[i][j] represents the signed pixel values of the fused frame. The signed pixel values of the fused frame, f_as[i][j], are converted to the unsigned char data type to obtain a merged frame of the video abstract (step 935).
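By way of illustration, the fusion of formula (11) and the conversion back to unsigned char (step 935) might be sketched as follows, again using CV_16S as an assumed signed representation:

```cpp
#include <opencv2/opencv.hpp>

// Sketch of formula (11) and step 935: the signed foreground is added to the
// signed background, and the result is converted back to unsigned char.
cv::Mat fuseForeground(const cv::Mat& foreSigned, const cv::Mat& background) {
    cv::Mat bs, fusedSigned, merged;
    background.convertTo(bs, CV_16S);
    cv::add(foreSigned, bs, fusedSigned);   // f_as = fore_as + back_as
    fusedSigned.convertTo(merged, CV_8U);   // conversion saturates values to the 0-255 range
    return merged;
}
```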
As discussed above, the surveillance video is divided into motive segments, i.e., dynamic segments. These segments are combined according to particular rules to form the video abstract. In disclosed embodiments, a compression method may be used in which a compression factor, C, is set to an integer value: 1, 2, 3, . . . etc. In this method, C motive segments can be used to generate the abstract video at the same time. For example, if C=2, then a video abstract can be formed according to the process depicted in
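One hypothetical way to express such a composition in code is sketched below; the segment data structure, the rule that a group of C segments plays for the duration of its longest member, and the handling of overlapping objects by direct addition of their signed foregrounds are all illustrative assumptions rather than requirements of the disclosed embodiments:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Hypothetical composition sketch for a compression factor C: at each output time
// step, the signed foregrounds (CV_16S) of C motive segments are fused into the
// same background frame, and the result is converted back to unsigned char.
std::vector<cv::Mat> composeAbstract(const std::vector<std::vector<cv::Mat>>& segmentsSignedFg,
                                     const cv::Mat& background, int C) {
    std::vector<cv::Mat> abstractFrames;
    for (size_t s = 0; s < segmentsSignedFg.size(); s += C) {
        size_t groupEnd = std::min(segmentsSignedFg.size(), s + static_cast<size_t>(C));
        size_t len = 0;                               // the group lasts as long as its longest segment
        for (size_t k = s; k < groupEnd; ++k)
            len = std::max(len, segmentsSignedFg[k].size());
        for (size_t t = 0; t < len; ++t) {
            cv::Mat fusedSigned;
            background.convertTo(fusedSigned, CV_16S);
            for (size_t k = s; k < groupEnd; ++k)
                if (t < segmentsSignedFg[k].size())
                    cv::add(fusedSigned, segmentsSignedFg[k][t], fusedSigned);  // formula (11) per segment
            cv::Mat merged;
            fusedSigned.convertTo(merged, CV_8U);      // back to unsigned char (step 935)
            abstractFrames.push_back(merged);
        }
    }
    return abstractFrames;
}
```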
In view of the above, it can be seen that the disclosed embodiments use S-FEF to prevent border shadow and use the GPU to perform parallel computing for the background updating algorithm based on a single Gaussian model, substantially speeding up the calculations. Because the background updating usually takes up a significant amount of time, generation of the video abstract can be significantly accelerated using such methods. Thus, the disclosed embodiments allow one to obtain a surveillance video abstract that is generated faster and is of higher quality, which, in turn, can effectively reduce video storage requirements.
Embodiments described herein are solely for the purpose of illustration. Those skilled in the art will recognize that other embodiments may be practiced with modifications and alterations to that described above.