This non-provisional application claims priority under 35 U.S.C. § 119(a) on Patent Application No(s). 202010967908.5 filed in China on Sep. 15, 2020, the entire contents of which are hereby incorporated by reference.
The present disclosure relates to the field of image processing, and more particularly to a method for generating a loop video.
In recent years, social media has flourished, and sharing videos on the Internet has become the norm. With the rise of cloud storage services, users no longer have to worry about storage space. Users casually upload videos after shooting without caring whether those videos are ever reused. With so much new video content created and shared every day, people have little interest in older content. This is a waste given the meaning and cultural ingredients inside that content. It may be worthwhile to give new life to those videos.
The loop video is a medium which is in many ways intermediate between a photo and a video. The loop video may capture the dynamic information in the scene and represent the whole scenario in a looping form. The loop video brings immersion to the audience without being broken by a duration limitation the way a video is. In an era in which social media develops rapidly, filming and sharing short videos are already indispensable parts of life, such as filming a self-taught trumpet solo, recording a moment dancing with friends, or sharing the fantastic performance of a busker one happens to pass by. Such video contents are suitable for producing a loop video. However, most recent approaches did not take the properties of this kind of video into consideration. They aimed to produce a smooth loop video, which constrains people's movement into unnatural repetition, without considering the continuity of character motion.
In view of the above, the present disclosure proposes a method for generating a loop video from an input video based on artificial intelligence or a learning algorithm, so as to solve the problem that loop videos generated by conventional methods lack semantic consistency and visual variety.
According to one or more embodiments of this disclosure, a method for generating a loop video comprises: obtaining an input video including a plurality of frames, wherein a first frame is included in the plurality of frames, and each of the plurality of frames has a plurality of pixels; extracting a moving object from the input video, wherein the moving object corresponds to a moving pixel region in the first frame, and the moving pixel region includes at least two of the plurality of pixels of the first frame; inputting a plurality of candidate periods to a target function respectively to calculate a plurality of errors of the moving pixel region for respective ones of the candidate periods; determining a start frame and a loop period of the moving pixel region for each of the plurality of errors so as to obtain a plurality of start frames and a plurality of loop periods, wherein the loop period is associated with one of the plurality of candidate periods; generating a plurality of output frames according to the plurality of start frames and the plurality of loop periods; and generating an output frame sequence from the plurality of output frames according to a loop parameter, wherein the output frame sequence corresponds to the loop video.
In sum, the present disclosure proposes a method for generating a loop video. The proposed method creates a loop video consisting of context-aware segments based on spatiotemporal consistency and semantic constraints, and the present disclosure ensures the completeness of the moving object in the loop video. The present disclosure uses frame entropy to estimate the variety of the output video and keeps good variety in the output through the dynamic strategy and bounce point extraction.
The present disclosure will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only and thus are not limitative of the present disclosure and wherein:
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawings.
Please refer to
Overall, the method for generating a loop video of the present disclosure includes two stages: an analysis stage and a rendering stage.
The analysis stage includes steps S1-S5 of
The rendering stage includes step S6 in
Please refer to step S1, “obtaining an input video including a plurality of frames”. Specifically, the input video provided by the user includes multiple frames, and each of these frames has a plurality of pixels. The input video can be defined as a 3D volume V(x, t) with a 2D pixel position x and an input frame time t.
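For illustration only, a minimal Python sketch of holding the input volume V(x, t) as an array is given below. The sketch assumes OpenCV and NumPy are available, that the whole video fits in memory, and that the file name is only an example; none of these choices are limitations of the present disclosure.

```python
# Minimal sketch (assumptions: OpenCV and NumPy are available, the whole video
# fits in memory, and the file name "input.mp4" is only an example).
import cv2
import numpy as np

def load_volume(path):
    """Read a video file into a volume V indexed as V[t, y, x, channel]."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)                 # each frame: (height, width, 3), BGR
    cap.release()
    return np.stack(frames, axis=0)          # volume over time (plus color channels)

V = load_volume("input.mp4")
print(V.shape)                               # (number of frames, height, width, 3)
```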
Please refer to step S2, “extracting a moving object from the input video”. The moving object corresponds to a moving pixel region in each frame and has at least two pixels. One or more pixels which do not belong to any moving object form a fixed pixel region.
For example, please refer to
Please refer to
Please refer to step S21, “obtaining an attribute of each of the first pixels in the first frame”. For example, colors of the first pixels a1 and b1 in the first frame F1 are obtained in this step.
Please refer to step S22, “determining one of the second pixels in the second frame, wherein an attribute of said one of the second pixels corresponds to the attribute of the first pixel”. For example, regarding the pixel a1 of the moving object MO, if the moving object is red and the part outside the moving object MO is white, this step finds multiple pixels, such as a2 and b2, in the second frame F2, and the color of each of the pixels a2 and b2 is identical to the color of the pixel a1 in the first frame F1. Further, this step determines that the pixel corresponding to the pixel a1 is the pixel a2 rather than the pixel b2, according to the colors of the pixels next to the pixel a1 or the coordinate of the pixel a1. Regarding the pixel c1, which does not belong to the moving object MO, this step adopts the method described above to determine its corresponding second pixel c2, and so on for the other pixels such as b1, d1, and e1.
Please refer to step S23, “calculating a displacement between the first pixel and the second pixel whose attribute corresponds to the attribute of the first pixel”. In another embodiment, step S23 may calculate an optical flow between the first pixel a1 and the second pixel a2.
Please refer to step S24, “determining at least two first pixels as the moving pixel region, wherein the displacement of each of said at least two first pixels is in a specific range”. For example, since the moving object MO moves from the left side to the right side, all first pixels, such as a1 and b1, in
Please refer to step S25, “tracing a position of the moving pixel region in the second frame according to the displacement”. Specifically, after determining all first pixels of the moving pixel region in the first frame F1, this step uses the displacement or the optical flow information to trace the moving pixel region in the following frames.
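For illustration only, a simplified Python sketch of steps S21 to S25 is shown below. It uses dense optical flow (here the Farneback estimator) as the displacement of step S23, thresholds the displacement magnitude for step S24, and advects the region along the flow for step S25. The thresholds, flow parameters, and helper names are assumptions of this sketch, not limitations of the present disclosure.

```python
# Simplified sketch of steps S21-S25 (the thresholds, parameters, and helper
# names are assumptions of this sketch).
import cv2
import numpy as np

def moving_region_mask(frame1, frame2, min_disp=1.0, max_disp=50.0):
    """Steps S21-S24: mark first-frame pixels whose displacement toward the
    second frame lies in a specific range."""
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    disp = np.linalg.norm(flow, axis=2)      # per-pixel displacement magnitude
    return (disp >= min_disp) & (disp <= max_disp), flow

def trace_region(mask, flow):
    """Step S25: advect the moving pixel region into the next frame by
    following the per-pixel displacement."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    nx = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    ny = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    traced = np.zeros_like(mask)
    traced[ny, nx] = True
    return traced
```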
Please refer to
In another embodiment of the present disclosure, the moving pixel region is called a dynamic superpixel and the fixed pixel region is called a static superpixel. Each superpixel represents an object area. It should be further noticed that the detection and tracking flow as shown in
Please refer to step S3, “inputting a plurality of candidate periods to a target function respectively to calculate a plurality of errors of the moving pixel region for respective ones of the candidate periods”. Each candidate period may be a multiple of a basic period. For example, if the length of the basic period is four frames, the candidate periods include 4, 8, 12, 16, . . . frames. However, the present disclosure is not limited to the above example. In an embodiment, regarding each of the moving pixel region and the fixed pixel region, that is, regarding each dynamic superpixel and static superpixel, step S3 calculates multiple errors for these superpixels combined with the multiple candidate periods. The objective function configured to calculate these errors is given as equation 1.
E(p, s) = E_consistency(p, s) + E_static(p, s)   (Equation 1)
E(p, s) is the objective function, p is the loop period of the superpixel, and s is the start frame of the superpixel. E_consistency(p, s) is the term configured to determine the spatiotemporal consistency of the superpixel, and E_static(p, s) penalizes the assignment of static loop pixels except in regions of the input video that are truly static. The calculation of E_consistency(p, s) is introduced as follows.
E_consistency(p, s) of equation 1 is calculated as equation 2.
E_consistency(p, s) = E_spatial(p, s) + E_temporal(p, s)   (Equation 2)
According to equation 2, regarding each of the superpixels, the error of the superpixel x includes an error of spatial consistency and an error of temporal consistency. The error E_spatial(p, s) reflecting the spatial consistency is calculated as equation 3.
E_spatial(p, s) = Σ_{‖x−z‖=1} Ψ_spatial(x, z) Υ_s(x, z)   (Equation 3)
Regarding each superpixel x, equation 3 considers a superpixel z spatially adjacent to the superpixel x.
Υ_s(x, z) in equation 3 is calculated as equation 4.
In equation 4, λ_s is a constant and MAD represents the median absolute deviation. If the difference between two adjacent superpixels x and z is large in the input video, equation 4 reduces the consistency cost of these two superpixels x and z, so that the user is less likely to notice the inconsistency.
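Equation 4 itself appears in the drawings and is not reproduced here. Purely to illustrate the behavior just described (a larger MAD of the color difference between x and z giving a smaller consistency cost), a plausible attenuation is sketched below; its functional form is an assumption of the sketch, not the disclosure's equation 4.

```python
# Illustration only: the functional form below is an assumption, not equation 4.
import numpy as np

def upsilon_s(colors_x, colors_z, lambda_s=100.0):
    """colors_x, colors_z: per-frame colors of two adjacent superpixels in the
    input video, shape (number of frames, 3)."""
    diff = np.linalg.norm(colors_x - colors_z, axis=1)     # per-frame color difference
    mad = np.median(np.abs(diff - np.median(diff)))        # median absolute deviation
    return 1.0 / (1.0 + lambda_s * mad)                    # larger MAD -> smaller cost weight
```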
Please refer to equation 3. The spatial term Ψ_spatial(x, z), which dominates the error of spatial consistency, is calculated as equation 5.
V_out(x, t) represents an estimated color of the output video at the position of the superpixel x at time t. V_in(x, t) represents an estimated color of the input video at the position of the superpixel x at time t. The term Φ(x, t) in equation 5 is a time-mapping function and is calculated as equation 6.
Φ(x, t) = s_x + ((t − s_x) mod p_x)   (Equation 6)
Regarding a longer input video, the present disclosure uses equation 6 to map this input video to a shorter output video, where the output video has a start frame s_x and a loop period p_x. For example, suppose the input video is played from frame 0 to frame 9. If the start frame is the 7th frame and the loop period is 3 frames, the mapping result will be (7, 8, 9, 7, 8, 9, 7, 8, 9).
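Because equation 6 is simple, it can be illustrated directly in code; the snippet below reproduces the numeric example above.

```python
def phi(t, s_x, p_x):
    """Equation 6: map an output time t to an input frame index."""
    return s_x + (t - s_x) % p_x

# Start frame 7, loop period 3: output times 7..15 map to 7, 8, 9, 7, 8, 9, 7, 8, 9.
print([phi(t, 7, 3) for t in range(7, 16)])
```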
Please refer to equation 5. Regarding adjacent superpixels x and z, equation 5 calculates an L2 difference between the first color difference of these two superpixels in the output video and the second color difference of these two superpixels in the input video. For example, if the moving object of the input video is a human body, the value calculated with equation 5 reflects the consistency of this human body in the output video. If the value calculated with equation 5 is greater than a certain threshold, some part of the moving object will probably disappear in a certain frame of the generated loop video, and the user may notice such inconsistency when watching the video.
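Equation 5 itself is shown in the drawings. As a hedged illustration of the description above, the sketch below accumulates, over the output frames, the squared L2 distance between the output-side and input-side color differences of two adjacent superpixels; the exact form of equation 5 may differ.

```python
# Hedged reading of equation 5 (the exact equation appears in the drawings).
import numpy as np

def psi_spatial(Vx, Vz, s_x, p_x, s_z, p_z, out_len):
    """Vx, Vz: per-input-frame colors of superpixels x and z, shape (T, 3)."""
    cost = 0.0
    for t in range(out_len):
        out_diff = Vx[s_x + (t - s_x) % p_x] - Vz[s_z + (t - s_z) % p_z]  # looped output colors (equation 6)
        in_diff = Vx[t % len(Vx)] - Vz[t % len(Vz)]                        # colors in the input itself
        cost += float(np.sum((out_diff - in_diff) ** 2))
    return cost
```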
Please refer back to equation 2. The term E_temporal(p, s), reflecting the error of temporal consistency, is calculated as equation 7.
E_temporal(p, s) = Σ_x Ψ_temporal(x) Υ_t(x)   (Equation 7)
The dominant term Ψ_temporal(x) is calculated as equation 8.
Regarding the superpixel x, equation 8 calculates a first color difference between two consecutive frames of the output video, a second color difference between the next frame after the end of the loop and the start frame at the beginning of the loop, and a third color difference between the frame at the end of the loop and the previous frame before the beginning of the loop. The L2 distance between the first and second color differences and the L2 distance between the first and third color differences are then added. From a visual perspective, the error of temporal consistency reflects not only the temporal consistency of two consecutive output frames during playback of the loop video, but also the temporal consistency from the end of one playback to the beginning of the next loop.
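Equation 8 is likewise shown in the drawings. The sketch below is one hedged reading of the description above; in particular, the choice of the consecutive frame pair used for the first color difference is an assumption of the sketch.

```python
# Hedged reading of equation 8 (the exact equation appears in the drawings).
import numpy as np

def psi_temporal(Vx, s, p):
    """Vx: per-input-frame colors of superpixel x, shape (T, 3).  The loop
    plays input frames s .. s+p-1 and then wraps back to frame s; the caller
    must ensure 1 <= s and s + p < len(Vx)."""
    first = Vx[s + 1] - Vx[s]            # difference between two consecutive frames (assumed pair)
    second = Vx[s + p] - Vx[s]           # frame after the loop end vs. the loop start frame
    third = Vx[s + p - 1] - Vx[s - 1]    # loop end frame vs. the frame before the loop start
    return float(np.sum((first - second) ** 2) + np.sum((first - third) ** 2))
```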
It should be noticed that an embodiment of the present disclosure uses each of the moving pixel region and the fixed pixel region described in step S2 as the input of equations 1-8 related to error calculations. In another embodiment, equations 1-8 may use the moving pixel region only. In other words, an embodiment of the present disclosure takes the superpixel as the unit when evaluating the error of spatiotemporal consistency. The loop video generated based on the above concept not only preserves the temporal and spatial consistency in the pixel level, but also preserves the semantic consistency of the input video.
Please refer to step S4 of
Please refer to step S5 of
Please refer to step S6, “generating an output frame sequence from the plurality of output frames according to a loop parameter”. The output frame sequence corresponds to a loop video. Specifically, after generating the first output frame, an example for determining the next output frame is shown as equation 10.
P_{i,j} is the probability of the transition from the ith frame to the jth frame. D_{i,j} is a frame transition cost and is calculated as equation 11.
D_{i,j} = ‖V(·, i) − V(·, j)‖   (Equation 11)
According to equation 11, the frame transition cost is the cumulative color difference between each pixel in the ith frame and the corresponding pixel in the jth frame. In other words, the greater the color difference between two adjacent frames, the higher the frame transition cost.
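Equation 10 itself is shown in the drawings. The sketch below therefore assumes the common video-texture form P[i, j] ∝ exp(−D[i+1, j]/σ) together with the transition cost of equation 11, and should be read as an illustration rather than the exact formula of the present disclosure.

```python
# Assumed transition model: P[i, j] ∝ exp(-D[i + 1, j] / sigma), with D from equation 11.
import numpy as np

def transition_matrix(V, sigma):
    """V: video volume of shape (T, H, W, C).  Returns the cost matrix D of
    equation 11 and an assumed transition probability matrix P."""
    T = V.shape[0]
    flat = V.reshape(T, -1).astype(np.float64)
    D = np.zeros((T, T))
    for i in range(T):
        D[i] = np.linalg.norm(flat - flat[i], axis=1)      # D[i, j] = ||V(., i) - V(., j)||
    nxt = np.minimum(np.arange(T) + 1, T - 1)
    P = np.exp(-D[nxt] / sigma)                            # compare frame i+1 with candidate j
    P /= P.sum(axis=1, keepdims=True)                      # each row is a distribution over next frames
    return D, P

def sample_next_frame(P, i, rng=None):
    """Draw the next output frame index given the current frame index i."""
    rng = np.random.default_rng() if rng is None else rng
    return int(rng.choice(len(P), p=P[i]))
```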
In equation 10, the factor σ controls the mapping from pixel difference to probability. In general, a frame with higher temporal consistency has a higher probability of being selected. When the input video contains people, a smaller value of σ usually produces strange repetitions of human behavior, whereas a bigger value of σ results in less repetition but discontinuous motion in the output frame sequence. In an embodiment of the present disclosure, the value of the factor σ is set with a static strategy, that is, the factor σ is a fixed value. In another embodiment of the present disclosure, since there is a trade-off between high variety and high temporal consistency, the value of the factor σ is set with a dynamic strategy: a smaller σ is set when the output frame has poor temporal consistency, and a bigger σ is set to escape small loops when repetition occurs. The dynamic strategy proposes an adaptive function for the σ value as shown in equation 12.
Ĥ(t) is the frame entropy estimation. Ĥ′(t) = H(t+1) − H(t) represents the trend of the frame entropy, which is the difference between two consecutive frame entropies. D̂(t) is the short-term average pixel transition error, obtained from the differences between the current frame and each of the previous frames, where said previous frames include multiple frames of the already played output video. The factors α_entropy and α_diffentropy are constants: the factor α_entropy controls the degree of variety of the output frame sequence, and the factor α_diffentropy controls the sensitivity to the trend of the frame entropy estimation.
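Equation 12 likewise appears in the drawings. The sketch below only illustrates the described behavior of the dynamic strategy (raise σ when the entropy trend indicates repetition, lower σ when the short-term transition error indicates poor temporal consistency); its functional form is an assumption, not the adaptive function of equation 12.

```python
# Illustration of the dynamic strategy only; not the disclosure's equation 12.
def adaptive_sigma(sigma_base, entropy_trend, short_term_error,
                   alpha_entropy=1.0, alpha_diffentropy=1.0):
    """A flat or falling entropy trend suggests the output is stuck in a small
    repetition, so sigma is raised; a large short-term pixel transition error
    suggests poor temporal consistency, so sigma is lowered."""
    boost = alpha_diffentropy * max(0.0, -entropy_trend)   # repetition -> bigger sigma
    damp = alpha_entropy * short_term_error                # roughness -> smaller sigma
    return sigma_base * (1.0 + boost) / (1.0 + damp)
```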
The frame entropy is calculated as equation 13.
H(t) is the frame entropy at time t. N is the maximum of the frame entropy and is associated with the total number of input frames. p_x(t) is the probability of occurrence of each frame x. The frame entropy accumulates the occurrence probability of each frame output before the measurement time t, and can instantly reflect the degree of dispersion of the output frames.
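Equation 13 is shown in the drawings. Assuming the standard Shannon entropy of the frame-occurrence distribution, normalized by its maximum, a sketch of the frame entropy H(t) may look as follows.

```python
# Sketch of the frame entropy; the normalized Shannon entropy is an assumption.
import numpy as np
from collections import Counter

def frame_entropy(output_frame_indices, num_input_frames):
    """Entropy of the occurrence frequencies of the frames output so far,
    normalized by log(number of input frames), its maximum possible value."""
    counts = Counter(output_frame_indices)
    total = len(output_frame_indices)
    probs = np.array([c / total for c in counts.values()])
    entropy = -np.sum(probs * np.log(probs))
    return float(entropy / np.log(num_input_frames))

# An output that keeps repeating the same three frames has a low frame entropy.
print(frame_entropy([7, 8, 9] * 10, num_input_frames=30))
```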
The loop parameter comprises a frame entropy. Before generating the output frame sequence from the plurality of output frames according to the loop parameter, the present disclosure further comprises: determining a target frame and a historic frame, wherein the target frame is a next frame of the historic frame in a time domain; accumulating a probability of occurrence of each of the target frame and the historic frame to obtain an accumulated value; and selectively inserting the target frame after the historic frame into the output frame sequence according to the accumulated value and a historic frame entropy.
Please refer to
Please refer to
The above method of calculating the frame entropy can be a step independent of the steps of calculating the start frame and the loop period. In other words, regarding an output video, any method can be used to calculate the start frame and the loop period of each of multiple superpixels, and the result can be combined with the frame entropy calculation of an embodiment of the present disclosure, thereby improving the visual variety of the loop video.
A character in the video may often move back and forth, for example when pulling the bow of a cello back and forth. The present disclosure proposes the concept of a “bounce point” to further improve the frame utilization rate and the selection diversity of the next output frame.
Please refer to
The suitable bounce point lies in the middle of two symmetrical motions. In other words, when the rewinding version of the current action is similar to the normally played version of the next action, the output frame corresponding to the next action can be replaced with the output frame corresponding to the previous action. Taking
The loop parameter comprises a probability of a bounce cost. Before generating the output frame sequence from the plurality of output frames according to the loop parameter, the present disclosure further comprises: determining a target frame, a precedent frame and a subsequent frame; wherein the target frame is a next frame of the precedent frame in a time domain, and the subsequent frame is a next frame of the target frame in the time domain; calculating a first motion vector from the precedent frame to the target frame, a second motion vector from the target frame to the subsequent frame, and a motion similarity between the first motion vector and the second motion vector; converting the motion similarity to the probability of the bounce cost; and selectively inserting the precedent frame after the target frame into the output frame sequence according to the probability of the bounce cost.
Before step S6 determines the frame number of the next output frame, an embodiment of the present disclosure first calculates a motion cost, and then determines whether to use a bounce motion subsequence with bounce length L as the output after the playback time t, according to the probability corresponding to the motion cost. The motion cost is calculated as equation 14, wherein the bounce length is determined according to the user's requirement.
D_bounce(t, L) = Σ_{l=0}^{L} ω_bounce ‖Motion(t+l, t+l+1) − Motion(t−l, t−l−1)‖²   (Equation 14)
D_bounce(t, L) compares each frame of two symmetric motions, going backward and forward from the bounce point t. The term Motion(t1, t2) represents the dense motion vector from frame t1 to frame t2. The dense motion vector between two frames may be estimated by an optical flow method. There are many ways to calculate the optical flow between two adjacent frames, and the present disclosure is not limited in this regard. An embodiment of the present disclosure adopts the polynomial expansion method to estimate the optical flow of all superpixels in the frames.
The bounce weight ω_bounce is calculated as equation 15.
ω_bounce = exp(l − L) + exp(−l)   (Equation 15)
The bounce weight ω_bounce is designed to focus the calculation on the frames close to the bounce point t, because the playback at the bounce point t is a discontinuous part of the input sequence. Thus, the motion symmetry in the neighborhood of the bounce point is more important than in the other, continuous parts.
An embodiment of the present disclosure uses a simple exponential function to map the L2 distance to probability as shown in equation 16.
P_t = exp(−D_bounce(t) / σ_bounce)   (Equation 16)
The factor σ_bounce may be set to a small multiple of the average D_bounce value so that the likelihood of a bounce at a given frame is fairly low. Adjusting the value of the factor σ_bounce controls the possibility of triggering a bounce point in a certain frame.
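Putting equations 14 to 16 together, a Python sketch of the bounce-point evaluation is given below. The Farneback optical-flow call and its parameters are illustrative choices for the polynomial expansion method mentioned above, not the only possible estimator.

```python
# Sketch of the bounce-point evaluation of equations 14-16.
import cv2
import numpy as np

def dense_motion(frame_a, frame_b):
    """Motion(t1, t2): dense motion vectors between two frames (Farneback flow
    used here as one possible polynomial-expansion estimator)."""
    g_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    g_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(g_a, g_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)

def bounce_cost(V, t, L):
    """Equation 14: compare the forward motion after the candidate bounce point
    t with the backward motion before it.  The caller must ensure
    L + 1 <= t and t + L + 1 < len(V)."""
    cost = 0.0
    for l in range(L + 1):
        w = np.exp(l - L) + np.exp(-l)                     # equation 15
        fwd = dense_motion(V[t + l], V[t + l + 1])
        bwd = dense_motion(V[t - l], V[t - l - 1])
        cost += w * float(np.sum((fwd - bwd) ** 2))
    return cost

def bounce_probability(V, t, L, sigma_bounce):
    """Equation 16: map the bounce cost to a probability."""
    return float(np.exp(-bounce_cost(V, t, L) / sigma_bounce))
```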
Therefore, the loop parameter described in step S6 includes the probability of the bounce cost. When determining the next frame t+1, the present disclosure also considers whether the probability of the bounce cost P_t of the current frame t is greater than a certain default value, and then determines whether to use the current frame t as the bounce point to replay previous frames.
The method for generating a loop video proposed according to an embodiment of the present disclosure increases the possibility of selecting the current frame as the bounce point when determining the next output frame, so the loop video may improve the frame utilization of the input video.
In sum, the present disclosure proposes a method for generating a loop video. The proposed method creates a loop video consisting of context-aware segments based on spatiotemporal consistency and semantic constraints, and the present disclosure ensures the completeness of the moving object in the loop video. The present disclosure uses frame entropy to estimate the variety of the output video and keeps good variety in the output through the dynamic strategy and bounce point extraction.
Number | Date | Country | Kind |
---|---|---|---|
202010967908.5 | Sep 2020 | CN | national |