This application claims the benefit, under 35 U.S.C. §365 of International Application PCT/EP2005/056614, filed Dec. 8, 2005, which was published in accordance with PCT Article 21(2) on Jun. 15, 2006 in English and which claims the benefit of European patent application No. 04292937.2, filed Dec. 9, 2004.
The present invention relates to a method and an apparatus for interpolating a motion compensated picture from two consecutive source pictures.
It is known to generate interpolated pictures for converting a 50 Hz video signal into a 100 Hz video signal. These interpolated pictures should be motion compensated in order not to create visual artifacts.
Generally, a motion estimator is used to compute a motion vector for each pixel of a source picture in the 50 Hz signal. A motion vector represents the motion (expressed as a number of pixels) of a pixel between two consecutive source pictures. The motion vector of a pixel is then used to determine the position of this pixel in the interpolated picture.
The use of such a motion estimator is not satisfactory in some cases, notably when there is a zoom in or a zoom out between two consecutive source pictures. Holes appear in the picture regions which are zoomed in, and conflicts appear in the regions which are zoomed out. A conflict designates the presence of two motion vectors imposing two different displacements on a given pixel. A hole designates the absence of a motion vector for a given pixel.
The invention proposes a new method for interpolating a motion compensated picture from two consecutive source pictures without using a motion estimator.
The invention concerns a method for interpolating a motion compensated picture from two source pictures. It comprises the following steps for determining the pixel value of each pixel with coordinates (x,y) of the motion compensated picture:
In a particular embodiment, the value allocated to the pixel with coordinates (x,y) of the motion compensated picture is the weighted average of the arithmetic means of the pixel values of the pixels with coordinates (x−mx,y−my) in the first source picture and the pixels with coordinates (x+mx,y+my) in the second source picture, each arithmetic mean being weighted by the corresponding correlation coefficient.
Advantageously, the inventive method comprises the following steps for computing the correlation coefficient for a pair of motion values (mx,my):
If each source picture comprises at least two color components, the block difference value is computed based on the video levels of said color components.
The invention concerns also an apparatus for interpolating a motion compensated picture from two source pictures. For determining the pixel value of each pixel with coordinates (x,y) of the motion compensated picture, it comprises:
In a particular embodiment, the fuzzy interpolation block computes the average of the arithmetic mean values computed by the second generator, each arithmetic mean value being weighted by the corresponding correlation coefficient computed by the first generator.
Exemplary embodiments of the invention are illustrated in the drawings and are explained in more detail in the following description.
According to the invention, the method operates pixel by pixel on the interpolated picture. At the beginning of the processing, no pixel value is known for this picture. The values are determined sequentially, pixel by pixel: for example, the pixels of the first line are determined first, then the pixels of the second line, and so on for the rest of the interpolated picture. Other scanning orders are possible but, whatever the order, each pixel value is processed once and only once.
The interpolated picture (frame t+½) is generated from a current source picture (frame t), called first source picture, and the next one (frame t+1), called second source picture.
A motion range comprising motion values is defined for each spatial direction of the pictures. The motion values define the motion of the interpolated picture relative to the two source pictures. For example, in the horizontal direction the motion value, called mx, lies in the range [−Mx,+Mx], and in the vertical direction the motion value, called my, lies in the range [−My,+My]. These ranges are chosen in accordance with the maximum possible motion between two source pictures (2*Mx in the horizontal direction and 2*My in the vertical direction) and are limited in order to limit the number of computations.
For each pixel with coordinates (x,y) in the interpolated picture and each pair of motion values (mx,my), a pixel block B(x−mx,y−my,t) is defined in the first source picture around the pixel with coordinates (x−mx,y−my), and a pixel block B(x+mx,y+my,t+1) is defined in the second source picture around the pixel with coordinates (x+mx,y+my). These two blocks B(x−mx,y−my,t) and B(x+mx,y+my,t+1) are spatiotemporally symmetric with regard to the pixel (x,y). Different forms are possible for the blocks: a horizontal segment, a rectangle, etc. In the example of
Then, for each pixel with coordinates (x,y) in the interpolated picture, block differences are computed between the two spatiotemporally symmetric pixel blocks of the source pictures, except on the borders of the picture. This computation is made for each pair of motion values (mx,my).
This computation is made for each color component (Red, Green, Blue) and the results are summed.
For the motion values mx and my, the block difference could be defined by the following formula:
Another possible value for this block difference could be:
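The two candidate block differences can be sketched, for instance, as a sum of absolute differences or a sum of squared differences taken over the block and the three color components. This is a minimal Python sketch; the function name, the square block shape, and the `half` parameter are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def block_difference(frame_t, frame_t1, x, y, mx, my, half=1, mode="sad"):
    """Difference between the two spatiotemporally symmetric blocks
    B(x-mx, y-my, t) and B(x+mx, y+my, t+1).

    frame_t, frame_t1: H x W x 3 arrays (R, G, B components).
    half: block half-size, giving a (2*half+1) x (2*half+1) square block.
    mode: "sad" (sum of absolute differences) or "ssd" (sum of squares).
    Border pixels, where a block would fall outside the picture, are
    excluded from the computation in the patent and are not handled here.
    """
    b0 = frame_t[y - my - half : y - my + half + 1,
                 x - mx - half : x - mx + half + 1].astype(np.int64)
    b1 = frame_t1[y + my - half : y + my + half + 1,
                  x + mx - half : x + mx + half + 1].astype(np.int64)
    d = b0 - b1
    # Sum over the block area and over the three color components.
    return int(np.abs(d).sum()) if mode == "sad" else int((d * d).sum())
```
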
From each block difference computed for the current pixel of the interpolated picture, a correlation coefficient is then computed. This coefficient, called C(mx,my), is representative of the correlation between the two symmetric pixel blocks and varies inversely with the difference value: the coefficient is larger for a smaller difference and vice versa. For example,
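One mapping with this property can be sketched as a decaying exponential. This is only a plausible choice, since the description merely requires the coefficient to decrease as the block difference grows; the `sigma` tuning parameter is a hypothetical addition:

```python
import math

def correlation_coefficient(diff, sigma=256.0):
    """Map a block difference to a correlation coefficient C(mx,my).

    A zero difference gives the maximum coefficient 1.0, and the
    coefficient decays toward 0 as the difference grows; sigma
    controls how quickly it decays.
    """
    return math.exp(-diff / sigma)
```
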
Finally, the current pixel (x,y) in the interpolated picture could get for example the following value for the red, green and blue components:
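Putting the steps together, the value of one interpolated pixel can be sketched as a normalized weighted sum over all motion candidates. This sketch assumes the exponential coefficient and square blocks used above, and the function name and `sigma` parameter are illustrative, not the patent's:

```python
import numpy as np

def interpolate_pixel(frame_t, frame_t1, x, y, Mx, My, half=1, sigma=256.0):
    """Fuzzy motion-compensated value of pixel (x, y) in frame t+1/2.

    For every motion candidate (mx, my), average the symmetric pixels
    (x-mx, y-my) of frame t and (x+mx, y+my) of frame t+1, and weight
    that average by a coefficient that decreases with the difference
    of the two symmetric blocks. Border handling is omitted.
    """
    num = np.zeros(3)  # accumulated weighted R, G, B values
    den = 0.0          # accumulated weights, for normalization
    for my in range(-My, My + 1):
        for mx in range(-Mx, Mx + 1):
            b0 = frame_t[y - my - half : y - my + half + 1,
                         x - mx - half : x - mx + half + 1].astype(np.int64)
            b1 = frame_t1[y + my - half : y + my + half + 1,
                          x + mx - half : x + mx + half + 1].astype(np.int64)
            diff = np.abs(b0 - b1).sum()           # SAD over block and R, G, B
            c = np.exp(-diff / sigma)              # correlation coefficient
            mean = (frame_t[y - my, x - mx].astype(np.float64)
                    + frame_t1[y + my, x + mx]) / 2.0  # symmetric pixel average
            num += c * mean
            den += c
    return num / den
```

On two identical flat frames, every candidate agrees and the result reproduces the source value; where the two frames disagree, candidates whose symmetric blocks match best dominate the weighted sum.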
This motion compensation requires only one external frame memory. This frame memory can also be used for other purposes (e.g. for APL measurement in the case of a PDP).
To improve the performance or simplify the implementation, some pre- or post-processing like filtering, up or down sampling, can be done by a block 14 placed before the generator 12.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
04292937 | Dec. 2004 | EP | regional

PCT Information

Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/EP2005/056614 | Dec. 8, 2005 | WO | 00 | May 22, 2007

PCT Publication

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO 2006/061421 | Jun. 15, 2006 | WO | A

U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
5504531 | Knee et al. | Apr. 1996 | A
RE41196 | Selby | Apr. 2010 | E
20070297513 | Biswas et al. | Dec. 2007 | A1

Foreign Patent Documents

Number | Date | Country
---|---|---
8223536 | Aug. 1996 | JP
3022977 | Jan. 2000 | JP
WO 02056589 | Jul. 2002 | WO

Other Publications

S. S. Skrzypkowiak et al., "Affine Motion Estimation Using a Neural Network", Proceedings of the International Conference on Image Processing, Washington, Oct. 23-26, 1995, vol. 1, pp. 418-421.

M. Bierling et al., "Motion Compensating Field Interpolation Using a Hierarchically Structured Displacement Estimator", Signal Processing, Elsevier Science Publishers B.V., Amsterdam, NL, vol. 11, no. 4, 1986, pp. 387-404.

Search Report dated Jul. 4, 2006.

Prior Publication Data

Number | Date | Country
---|---|---
US 20080030613 A1 | Feb. 2008 | US