In one embodiment, the present invention provides a super resolution based deinterlacing method and apparatus for processing an interlaced video sequence. Such a super resolution based deinterlacing method includes the steps of, for each block of pixels in a video frame: applying block matching on that block to obtain a motion vector (MV); using the MV as the initial motion vector and applying optical flow on that block to obtain a sub-pixel resolution motion vector; and interpolating missing pixels in that block using motion compensation.
In order to systematically describe the deinterlacing problem and a deinterlacing method according to the present invention, in the following description let ft denote an incoming interlaced video field at time instant t, and ft(v,h) denote the associated value of the video signal at the geometrical location (v,h), where v represents the vertical location and h represents the horizontal location. In this description, a field comprises interlaced video information, a frame comprises progressive video information, an image represents one frame or field, and a block comprises a small area in a field/frame/image.
Based upon the above description of the interlaced video signal, a deinterlacing problem can be stated as a process to reconstruct or interpolate the unavailable signal values in each field. That is, the deinterlacing problem is to reconstruct the signal values of ft at odd lines (v=1, 3, 5, . . . ) for top field ft and to reconstruct the signal values of ft at even lines (v=0, 2, 4, . . . ) for bottom field ft.
For clarity of description herein and without limitation, the deinterlacing problem is simplified as a process which spatially reconstructs or interpolates the unavailable signal value of ft at the ith line where the signal values of the lines at i±1, i±3, i±5, . . . are available. More simply, deinterlacing is to interpolate the value of ft(i,h), which is not originally available.
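For illustration only (code is not part of the described apparatus), the parity rule above — reconstruct the odd lines v = 1, 3, 5, … for a top field and the even lines v = 0, 2, 4, … for a bottom field — can be sketched as follows; the function name is ours:

```python
def missing_lines(height, is_top_field):
    """Line indices of f_t that deinterlacing must reconstruct:
    odd lines (v = 1, 3, 5, ...) for a top field,
    even lines (v = 0, 2, 4, ...) for a bottom field."""
    start = 1 if is_top_field else 0
    return list(range(start, height, 2))
```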
Generally, super resolution technologies are used in image scaling to interpolate pixels with motion compensation. Optical flow, one of the super resolution motion estimation methods, is well known for estimating the global motion of a whole image. However, due to hardware limitations, optical flow cannot be applied to the whole image in a practical hardware implementation.
According to an embodiment of the present invention, optical flow is combined with a block matching motion estimation method, to search the sub-pixel resolution motion vector of each block of pixels in an interlaced image.
To improve the robustness of the motion estimation, the motion estimation can be applied on overlapped blocks. For example, as shown in
The motion vector MV is then used as the initial motion vector of optical flow applied to the external block B′. The optical flow refines the motion vector MV into sub-pixel resolution (i.e., a motion vector having a fractional pixel component), to obtain a sub-pixel resolution motion vector OF. There is no limitation on which motion model is used in optical flow. One example is a rigid model; other examples are possible.
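As a non-limiting illustration, this two-stage search can be sketched in Python/NumPy: a full-search block matching step yields an integer MV, and a single Lucas-Kanade-style gradient step then refines it to sub-pixel resolution under a pure-translation (rigid) model. The function names and the single-step refinement are our assumptions, not a prescribed implementation:

```python
import numpy as np

def block_match(cur, ref, top, left, size, radius):
    """Integer-pixel motion vector for the block of `cur` at (top, left),
    found by exhaustive search over a (2*radius+1)^2 window in `ref`,
    minimizing the sum of absolute differences (SAD)."""
    block = cur[top:top + size, left:left + size]
    best_sad, best_mv = None, (0, 0)
    for dv in range(-radius, radius + 1):
        for dh in range(-radius, radius + 1):
            t, l = top + dv, left + dh
            if t < 0 or l < 0 or t + size > ref.shape[0] or l + size > ref.shape[1]:
                continue
            sad = np.abs(block - ref[t:t + size, l:l + size]).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dv, dh)
    return best_mv

def refine_subpixel(cur, ref, top, left, size, mv):
    """One Lucas-Kanade-style step: refine the integer MV to sub-pixel
    accuracy from image gradients, assuming a pure-translation model."""
    t, l = top + mv[0], left + mv[1]
    b_cur = cur[top:top + size, left:left + size].astype(float)
    b_ref = ref[t:t + size, l:l + size].astype(float)
    gv, gh = np.gradient(b_ref)        # spatial gradients of the matched block
    dt = b_cur - b_ref                 # residual after the integer shift
    A = np.array([[(gv * gv).sum(), (gv * gh).sum()],
                  [(gv * gh).sum(), (gh * gh).sum()]])
    b = np.array([(gv * dt).sum(), (gh * dt).sum()])
    dv, dh = np.linalg.solve(A, b)     # least-squares sub-pixel correction
    return (mv[0] + dv, mv[1] + dh)
```

In practice the refinement would be applied to the external block B′ rather than B itself, per the overlapped-block scheme described above.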
Based on the obtained sub-pixel resolution motion vector OF, a matched block C in image fs can be interpolated. The matched block C is the block in the temporally neighboring image fs that is most similar to block B. Interpolating block C involves interpolating each pixel in block C from spatially neighboring pixels in fs, since block C may not be aligned with the pixel grid of fs and the pixels in block C are not originally available.
There is no limitation on which method is used to interpolate block C. It can be a bilinear, bi-cubic, or poly-phase filter. Other examples are possible.
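As one concrete (non-limiting) option, bilinear interpolation of block C at a fractional position in fs can be sketched as follows; the helper name is ours:

```python
import numpy as np

def bilinear_block(img, top, left, size):
    """Spatially interpolate a size x size block whose top-left corner sits
    at the fractional position (top, left) in `img`, using bilinear weights
    from the four surrounding pixels of each sample."""
    v0, h0 = int(np.floor(top)), int(np.floor(left))
    fv, fh = top - v0, left - h0           # fractional offsets
    w = img[v0:v0 + size + 1, h0:h0 + size + 1].astype(float)
    return ((1 - fv) * (1 - fh) * w[:-1, :-1] + (1 - fv) * fh * w[:-1, 1:]
            + fv * (1 - fh) * w[1:, :-1] + fv * fh * w[1:, 1:])
```

On a linear ramp image, bilinear interpolation is exact, which gives a simple sanity check of the weights.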
Thus, the missing pixels in block B are obtained from block C with motion compensation, since blocks B and C have motion (displacement) between them. Interpolation of block C reconstructs the missing pixels in block B of frame ft as follows: all of the pixels in block C are interpolated spatially; each missing pixel in block B has one matched pixel in block C; and the matched pixel is copied from block C to block B to obtain the missing pixel. After processing all the blocks of image ft, a deinterlaced image is obtained.
In the examples, both block matching and optical flow are applied on the same images. According to the present invention, this can be further extended to different images. In one example, block matching is applied between the current image ft and the neighboring (temporally) image fs. Optical flow is applied between the current image ft and another neighboring (temporally) image fr. The initial motion vector of optical flow can be obtained based on the block matching result and displacement of the three images ft, fs, fr in time axis.
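A sketch (names and signed frame-distance parameters are our illustration) of deriving the initial motion vector for optical flow from the block matching result and the temporal displacements of the three images, under the constant-motion assumption:

```python
def scale_initial_mv(mv_bm, dt_bm, dt_of):
    """Scale a block-matching MV estimated over the temporal gap dt_bm
    (f_t -> f_s) to an initial MV for optical flow over the gap dt_of
    (f_t -> f_r), assuming motion is constant over the interval."""
    s = dt_of / float(dt_bm)
    return (mv_bm[0] * s, mv_bm[1] * s)
```

For example, an MV estimated from ft to an image two fields away would be halved to initialize optical flow toward the adjacent field.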
In the following, example deinterlacing systems implementing super resolution based deinterlacing methods, according to the present invention, are described.
In system 100, assume input image ft is the current image, block B is the current block to be processed, and block B′ is the external block of B. The BMME unit 106 first searches the symmetric motion vector of block B between the previous image ft−1 and the next image ft+1, and generates motion vectors MV1 and MV2. MV1 is the motion vector from the current image ft to the previous image ft−1, and MV2 is the motion vector from the current image ft to the next image ft+1.
Based on the initial motion vector MV1, optical flow unit 108 is applied on block B′ between the current image ft and the previous image ft−1, to generate a sub-pixel resolution motion vector OF1. Based on the initial motion vector MV2, optical flow unit 110 is applied on block B′ between the current image ft and the next image ft+1, to generate a sub-pixel resolution motion vector OF2.
Thus, each missing pixel in block B has two motion compensated pixels: one is interpolated in the previous image ft−1 and the other is interpolated in the next image ft+1. Each missing pixel can be interpolated by taking the average of those two motion compensated pixels. There is no limitation on the method used to interpolate the missing pixel. SR-IPC 112 in
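The two-candidate combination described above can be sketched as follows (a plain average, which as noted is only one non-limiting option; the function name is ours):

```python
import numpy as np

def average_compensation(block_prev, block_next):
    """Average the two motion-compensated candidates for the missing pixels
    of block B: one block interpolated in f_{t-1} using OF1, the other
    interpolated in f_{t+1} using OF2."""
    return 0.5 * (np.asarray(block_prev, dtype=float)
                  + np.asarray(block_next, dtype=float))
```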
The symmetric motion estimation used in the system 100 is based on the assumption that motion is constant in a short time period.
The difference between the blocks C′ and A′, such as the mean absolute error or mean square error, can be calculated to determine whether blocks C′ and A′ are matched.
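For illustration, both matching criteria can be computed as follows (function name ours; a smaller value indicates a better match):

```python
import numpy as np

def matching_error(block_c, block_a, use_mse=False):
    """Matching difference between candidate blocks C' and A':
    mean absolute error by default, mean square error when use_mse=True."""
    d = np.asarray(block_c, dtype=float) - np.asarray(block_a, dtype=float)
    return float(np.mean(d * d)) if use_mse else float(np.mean(np.abs(d)))
```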
In the example in
In another example in
As such, the matching difference can be calculated between blocks C′ and A′ without using any pixel information in the current image ft. By trying all the motion vector candidates using a full search or other methods, the best matching blocks of B′ in the previous and next images, respectively, can be found. Accordingly, motion vectors MV1 and MV2 satisfying MV1=−MV2 can be obtained: the motion vector from the previous field to the next field is split symmetrically to obtain the motion vectors from the current field to the previous field and to the next field, respectively.
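A sketch of this symmetric search (names and MAE matching are our choices; a full search is shown): for each candidate displacement, the block offset by −(dv, dh) in the previous image is compared against the block offset by +(dv, dh) in the next image, without touching the current image ft:

```python
import numpy as np

def symmetric_search(prev_img, next_img, top, left, size, radius):
    """Symmetric full search for block B' at (top, left) of the current
    image, using only the previous and next images (assumed same shape).
    The best match gives MV1 = -(dv, dh) (current -> previous) and
    MV2 = +(dv, dh) (current -> next), so MV1 = -MV2 by construction."""
    best_err, best_d = None, (0, 0)
    for dv in range(-radius, radius + 1):
        for dh in range(-radius, radius + 1):
            tp, lp = top - dv, left - dh    # block A' in the previous image
            tn, ln = top + dv, left + dh    # block C' in the next image
            if min(tp, lp, tn, ln) < 0:
                continue
            if max(tp, tn) + size > prev_img.shape[0] or \
               max(lp, ln) + size > prev_img.shape[1]:
                continue
            err = np.mean(np.abs(
                prev_img[tp:tp + size, lp:lp + size].astype(float)
                - next_img[tn:tn + size, ln:ln + size]))
            if best_err is None or err < best_err:
                best_err, best_d = err, (dv, dh)
    return (-best_d[0], -best_d[1]), best_d   # (MV1, MV2)
```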
In system 200, assume input image ft is the current image, block B is the current block to be processed and block B′ is the external block of block B. The BMME 206 searches the motion vector of block B′ between the current image ft and the second previous image ft−2 using a block matching method to generate a motion vector as MV.
Using MV as the initial motion vector, the optical flow unit 208 is applied on the block B′ between the current image ft and the second previous image ft−2, to generate a sub-pixel resolution motion vector OF. Based upon the assumption that motion is constant in a short time period, the optical flow unit 208 calculates the sub-pixel resolution motion vector of block B′ from the current image ft to the previous image ft−1, as OF/2.
Accordingly, the missing pixels in block B can be compensated from the interpolated pixels in the previous image ft−1 based on the motion vector OF/2 as it is assumed that the motion is symmetric.
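The halving step used by system 200 is a direct consequence of the constant-motion assumption; as a small sketch (function name ours):

```python
def halve_motion_vector(of):
    """Under the constant-motion assumption, halve the sub-pixel MV
    estimated from f_t to f_{t-2} to obtain the MV from f_t to f_{t-1}."""
    return (of[0] / 2.0, of[1] / 2.0)
```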
SR-IPC 210 interpolates the missing pixel based on the obtained sub-pixel resolution motion vector, whereby f′t is a deinterlaced frame.
In system 300, assume input image ft is the current image, block B is the current block to be processed and block B′ is the external block of block B. The BMME unit 306 first searches the motion vector of block B′ between the current image ft and the second previous image ft−2 using a block matching method, to generate a motion vector MV. Based upon the assumption that motion is constant in a short time period, the BMME unit 306 calculates the motion vector of block B′ from the current image ft to the previous image ft−1, as MV/2.
Using MV/2 as the initial motion vector, the optical flow unit 308 is applied on the block B′ between the current image ft and the previous image ft−1 to generate a sub-pixel resolution motion vector OF.
Accordingly, the missing pixels in block B can be compensated from the interpolated pixels in the previous image ft−1 based on the motion vector OF using motion compensated interpolation. SR-IPC 310 interpolates the missing pixel based on the sub-pixel resolution motion vector, whereby f′t is a deinterlaced frame.
While the present invention is susceptible of embodiment in many different forms, there are shown in the drawings, and are herein described in detail, preferred embodiments of the invention, with the understanding that this description is to be considered an exemplification of the principles of the invention and is not intended to limit the broad aspects of the invention to the embodiments illustrated. The aforementioned example architectures according to the present invention can be implemented in many ways, such as program instructions for execution by a processor, as logic circuits, as an ASIC, as firmware, etc., as is known to those skilled in the art. Therefore, the present invention is not limited to the example embodiments described herein.
The present invention has been described in considerable detail with reference to certain preferred versions thereof; however, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein.