The invention is related to the field of video compression.
A temporal prediction filter is used in a video compression process to predict a target image from a set of previously decoded reference images. The temporal prediction process is effective at removing a significant amount of temporal redundancy, which generally results in a higher coding efficiency. The prediction process uses a set of motion vectors and a filter that operates on the motion vectors to predict the target image.
For example, the prediction method divides a reference image 110 into multiple fixed-size blocks 120, each of which is associated with a single motion vector.
Conventional temporal filters, which use a single motion vector to predict the location of an associated block, or rely on a filter defined for a regular motion vector pattern, need a regular distribution of motion vectors to perform temporal prediction. Therefore, they are unable to adapt the prediction process to an irregular pattern of motion vectors. There is a need for a filter that can locally adapt its tap and filter coefficients to the variations of an irregular pattern of motion vectors. There is also a need for a temporal filter that has flexibility to adapt to object boundaries and spatial textures.
A method of generating an adaptive temporal filter is performed by constructing a motion vector area cell around each of a plurality of motion vectors in a target image, selecting a pixel in the target image, constructing a pixel area cell around the selected pixel, determining an overlap area between the motion vector area cells and the pixel area cell, generating filter weights from the overlap area, and using the filter weights to filter the selected pixel.
The present invention is illustrated by way of example and may be better understood by referring to the following description in conjunction with the accompanying drawings.
In the following description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration a specific embodiment in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. For example, skilled artisans will understand that the terms field, frame, and image used to describe the various embodiments are generally interchangeable when referring to video data.
An adaptive area of influence (AAOI) temporal filter automatically adapts to an irregular pattern of motion vectors, object features, and spatial textures when predicting a target image. The AAOI filter operates in the time-domain over motion compensated signals, which is different from other methods that attempt to filter motion vectors directly (e.g., triangulation filtering in the motion vector domain). For example, because the AAOI filtering method operates in the time-domain, it is more amenable to adaptation to object and spatial textures. In one embodiment, the AAOI filter performs a two stage process to couple neighboring motion vectors during the prediction of a pixel. The first stage adapts the filter to an irregular sampling pattern of motion vectors, to object shapes, and to boundaries. The second stage adapts the filter to spatial textures of the image.
An example of an adaptive temporal filtering procedure proceeds as follows. At 210, an irregular sampling pattern of motion vectors is generated for the target image. At 220, the target image is partitioned into area of influence cells, each of which contains one of the motion vectors as its node. At 230, an initial value is estimated for each of the motion vectors.
At 240, the adaptive area of influence (AAOI) filter is applied to the area of influence cells to perform temporal prediction for the target image. The filter is applied in the time domain to generate a prediction result for the target image given the set of motion vector values and sampling pattern. The AAOI filter uses a filter tap and filter coefficients that are defined by an area of overlapping regions to capture the relevance of motion vectors neighboring a pixel to be predicted. At 250, the prediction results produced by the filter are used to re-estimate the values of the motion vectors, so as to improve the accuracy of the adaptive filter. At 260, in some embodiments, the process may return to 240 to decrease the prediction error generated by the adaptive area of influence filter. Otherwise, the process ends at 270.
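A minimal sketch of how the 240-270 loop could be organized follows, assuming caller-supplied `aaoi_predict` and `reestimate` routines standing in for the stages detailed below (the names are illustrative, not from the patent):

```python
import numpy as np

def adaptive_prediction_loop(mvs, target, ref, aaoi_predict, reestimate,
                             max_iters=3, tol=1e-3):
    """Sketch of the 240-270 loop.  `aaoi_predict` and `reestimate` are
    caller-supplied stand-ins for the stages detailed below; the names
    are illustrative, not from the patent."""
    pred = aaoi_predict(mvs, ref)                    # 240: apply the AAOI filter
    prev_err = float("inf")
    for _ in range(max_iters):
        err = float(np.mean((target - pred) ** 2))   # current prediction error
        if prev_err - err <= tol:                    # 260: stop when not improving
            break
        prev_err = err
        mvs = reestimate(mvs, target, ref)           # 250: refine motion vectors
        pred = aaoi_predict(mvs, ref)                # 240: re-apply the filter
    return pred, mvs                                 # 270: done
```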
Referring to the temporal prediction at 240, in one embodiment the AAOI filter predicts a target pixel x as a weighted sum of motion compensated pixels taken from the reference image:

$$\mathrm{Pred}(x) = \sum_{i \in S(x)} f_i \, I_{\mathrm{ref}}(x + v_i)$$
where {fi} is a set of filter coefficients, and x+vi is the motion compensated pixel when motion vector vi is applied to pixel x. The support or tap of the filter is defined by the set S(x). The tap support S(x) and the filter coefficients {fi} are, in general, functions of the pixel position x and its neighboring motion vectors. That is, the filter coefficients can change for each pixel, because the distribution of motion vectors changes throughout the image. Hence, the filter locally adapts to the changing motion vector pattern.
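As a sketch, the filter output for a single pixel can be evaluated directly from the tap set; the helper name, argument layout, and (row, column) vector convention are illustrative assumptions:

```python
def apply_temporal_filter(ref, x, taps):
    """Evaluate Pred(x) = sum over i of f_i * ref(x + v_i) for one pixel.
    `ref` is a 2-D array of reference pixels; `x` is a (row, col) pixel
    position; `taps` lists the (v_i, f_i) pairs for the motion vectors
    in S(x).  Names and conventions are illustrative only."""
    y, x0 = x
    pred = 0.0
    for (vy, vx), f in taps:
        pred += f * ref[y + vy, x0 + vx]   # motion compensated reference pixel
    return pred
```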
In one embodiment, the filter coefficients {fi} are computed using the two-stage method described below.
The first stage begins at 410 by receiving a local motion vector sampling pattern that contains the motion vectors in the neighborhood of a target pixel to be predicted. At 420, area of influence cells are constructed around each local motion vector, partitioning the local area of the target pixel into a set of AOI cells. At 430, in order to interpolate the pixel, the pixel is viewed as a new node, and a pixel area of influence cell is constructed around it. Then, at 440, the area of each neighboring AOI cell that overlaps the pixel area of influence cell is determined. The overlapping areas define a natural tap structure and set of filter weights: the tap structure is defined by each motion vector i whose AOI cell has a non-zero overlapping area Ai with the pixel area cell, and the filter weight of each motion vector in the tap structure is defined by the ratio Ai/A. That is, for some pixel location x:
$$f_i = \frac{A_i}{A}, \quad i \in S(x)$$

where S(x) is the set of local motion vectors in the neighborhood of pixel x, Ai is the overlapping area between the AOI cell of motion vector i in the set S(x) and the pixel influence cell, A is the total overlap area of the AOI cells with the pixel influence cell, and fi is the filter weight.
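The following sketch computes these first-stage weights on a discrete pixel grid, assuming a nearest-node partition for the area of influence cells (one plausible construction; the patent does not fix a particular one):

```python
import numpy as np

def aoi_labels(nodes, shape):
    """Partition the pixel grid into area of influence cells by nearest
    node (one plausible construction; the patent does not mandate one)."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    d = np.stack([(ys - ny) ** 2 + (xs - nx) ** 2 for ny, nx in nodes])
    return np.argmin(d, axis=0)

def overlap_filter_weights(mv_nodes, pixel, shape):
    """First-stage weights f_i = A_i / A: re-partition with the target
    pixel added as a new node, intersect its cell with each motion
    vector's original AOI cell, and normalize the overlap areas."""
    base = aoi_labels(mv_nodes, shape)            # AOI cells of the motion vectors
    with_pixel = aoi_labels(list(mv_nodes) + [pixel], shape)
    pixel_cell = with_pixel == len(mv_nodes)      # the pixel's own area cell
    areas = np.array([np.sum(pixel_cell & (base == i))
                      for i in range(len(mv_nodes))], dtype=float)
    A = areas.sum()
    return areas / A if A > 0 else areas          # tap = indices with A_i > 0
```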
At 450, the filter is adapted to image features, such as an object boundary of a moving object, for example. The shape of the area of influence cells in some embodiments changes to adapt to the boundary of the moving object. The area cells are adapted to an object boundary in the image by constraining the motion vector area cells and pixel area cell to include only pixels that belong to the same object. This generates modified AOI cells around the pixel to be predicted. Therefore, in one embodiment, the filter support and coefficients are expressed as:
$$f_i = \frac{\tilde{A}_i}{\tilde{A}}, \quad i \in S(x)$$

where Ãi is the modified overlap area for motion vector i, due to the object boundary, and Ã is the total modified overlap area. Each modified AOI cell includes pixels in the same motion layer as the pixel to be predicted, and excludes pixels in other motion layers. At the conclusion of this first stage, the filter has adapted to both the irregular pattern of motion vectors and the boundary of the moving object.
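A sketch of this boundary adaptation, reusing `aoi_labels` from the previous sketch and assuming a per-pixel motion layer label map as input:

```python
import numpy as np

def layer_constrained_weights(mv_nodes, pixel, shape, layer_map):
    """Object-boundary adaptation (sketch).  Keeps only motion vectors in
    the target pixel's motion layer and clips every area cell to that
    layer, yielding the modified overlap areas Ã_i.  `layer_map` is an
    assumed input: an integer motion-layer label per pixel."""
    target_layer = layer_map[pixel]
    keep = [i for i, n in enumerate(mv_nodes) if layer_map[n] == target_layer]
    weights = np.zeros(len(mv_nodes))
    if not keep:
        return weights
    nodes = [mv_nodes[i] for i in keep]
    base = aoi_labels(nodes, shape)                       # cells of surviving vectors
    with_pixel = aoi_labels(nodes + [pixel], shape)
    same_layer = layer_map == target_layer
    pixel_cell = (with_pixel == len(nodes)) & same_layer  # clipped pixel area cell
    areas = np.array([np.sum(pixel_cell & (base == i))
                      for i in range(len(nodes))], dtype=float)
    if areas.sum() > 0:
        weights[keep] = areas / areas.sum()               # f_i = Ã_i / Ã
    return weights
```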
After generating a prediction for each pixel in the image, the second stage of the filtering process is performed. In the second stage, at 460, the filter is adapted to spatial textures. Because the prediction output from the first stage of the AAOI filter takes the form of a regular pattern of sampled data, a least-squares (LS) trained filter is used in some embodiments in the second stage to adapt the filter to spatial textures. In another embodiment, a spatial adaptation process can directly modify the AOI cells in the first stage to include only those pixels that have a similar spatial texture.
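A minimal sketch of such a second-stage filter, fitting a generic k-by-k least-squares trained spatial filter between the first-stage prediction and the target (consistent with, but not mandated by, the text):

```python
import numpy as np

def train_ls_filter(first_stage_pred, target, k=3):
    """Second-stage sketch: fit a k-by-k spatial filter by least squares
    so that filtering the first-stage prediction best matches the target,
    adapting the predictor to local texture."""
    H, W = first_stage_pred.shape
    r = k // 2
    rows, rhs = [], []
    for y in range(r, H - r):
        for x in range(r, W - r):
            rows.append(first_stage_pred[y - r:y + r + 1, x - r:x + r + 1].ravel())
            rhs.append(target[y, x])
    coeffs, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return coeffs.reshape(k, k)   # apply with an ordinary 2-D correlation
```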
An example of the intermediate results produced during the first stage of the process follows. Area of influence cells are first constructed around each of the motion vectors (1) through (6) in the neighborhood of a pixel x to be predicted, and a pixel AOI cell is constructed around pixel x itself.
At 447, an overlap area between the AOI cell of each motion vector and the AOI cell of the pixel is determined. For example, the AOI cell for motion vector (1) overlaps the pixel AOI cell in overlap area A1. The tap and filter weights of the filter are determined by the overlap areas. The tap structure is defined by each motion vector i whose AOI cell has a non-zero overlapping area Ai with the pixel area cell. In this example, the AOI cell for motion vector (4) does not overlap with the pixel AOI cell. Therefore, the filter tap structure for pixel x is motion vectors (1), (2), (3), (5), and (6). The filter weight of each motion vector in the tap structure is defined by the ratio Ai/A. For example, in this case, f1=A1/A.
At 457, the filter is adapted to image features, such as an object boundary 451 of a moving object, for example. The moving object's object boundary 451 separates motion layers 453 and 455. To interpolate pixel x, the tap structure is modified to include motion vectors that are in the same motion layer as the pixel x. Because pixel x is in motion layer 455, the tap structure from 447 is modified to remove motion vectors (3) and (5), leaving motion vectors (1), (2) and (6) as the tap structure.
Furthermore, at 457, the filter weights are adapted to the shape of the object boundary 451. In this example, the shapes of the area of influence cells along object boundary 451 change to adapt to the boundary of the moving object. Object boundary 451 dissects the AOI cell for motion vector (2). To interpolate pixel x, which is in motion layer 455, the AOI cell for motion vector (2) is redefined to include only those pixels of its original cell that are in motion layer 455. This generates a modified AOI cell around motion vector (2). The shape of the AOI cell for motion vector (6) is also adapted to the object boundary 451. The area between the AOI cell for motion vector (6) and object boundary 451 is in motion layer 455. However, this area was initially included in the AOI cell for motion vector (5). Because motion vector (5) is no longer part of the tap structure for the filter, the pixels in this area now become part of the AOI cell for motion vector (6). The modified overlapping areas, Ã2 and Ã6, and overlapping area A1, are used to generate filter weights.
The operation of the filter produced by this two-stage method is illustrated by the following example.
The filter forms a prediction for pixel x in the target image 520 using a tap structure of local motion vectors v1 through v5. The motion vectors are local to pixel x because each of their respective AOI cells overlaps with at least a portion of the AOI cell for pixel x. Each motion vector {vi} in the tap structure maps to image data {Ii} in the reference image 510. The adaptive temporal prediction filter adjusts the reference data {Ii} by a filter weight {fi} to predict pixel x. In one embodiment, the prediction filter uses the tap structure and the filter weights to generate a prediction according to the following equation:
$$\text{Prediction} = I_1 f_1 + I_2 f_2 + I_3 f_3 + I_4 f_4 + I_5 f_5$$
where the filter tap, which is defined by the local motion vectors, and the filter coefficients {fi} are determined by the two-stage method described above.
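Tying the sketches together, a hypothetical five-vector example (all coordinates and motion vector values are assumed for illustration):

```python
import numpy as np

# All coordinates and motion vector values below are assumed for illustration.
ref = np.random.rand(64, 64)
mv_nodes = [(8, 8), (8, 24), (24, 8), (24, 24), (16, 40)]  # nodes of v1..v5
mvs = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1)]           # values of v1..v5
pixel = (14, 18)
f = overlap_filter_weights(mv_nodes, pixel, ref.shape)     # f_i = A_i / A
taps = [(mvs[i], f[i]) for i in range(len(mvs)) if f[i] > 0]
prediction = apply_temporal_filter(ref, pixel, taps)       # I1*f1 + ... + I5*f5
```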
After the initial prediction, the process re-estimates the values of the motion vectors, as shown in block 250 above, to improve the accuracy of the adaptive filter.
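One simple way to realize this re-estimation is a greedy local search per motion vector; the `prediction_error` closure is an assumed stand-in that re-runs the AAOI filter over the target image and returns its error:

```python
def reestimate_motion_vectors(mvs, prediction_error, search=2):
    """Block 250 (sketch): greedily refine each motion vector with a small
    local search that minimizes the overall prediction error.
    `prediction_error(mvs)` is an assumed closure that re-runs the AAOI
    filter over the target image and returns, e.g., its mean squared error."""
    mvs = list(mvs)
    for i, (vy, vx) in enumerate(mvs):
        best_v, best_err = (vy, vx), prediction_error(mvs)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                mvs[i] = (vy + dy, vx + dx)       # trial motion vector value
                err = prediction_error(mvs)
                if err < best_err:
                    best_v, best_err = mvs[i], err
        mvs[i] = best_v                           # keep the best refinement
    return mvs
```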
In one embodiment, the AAOI filter is used by a video coding system for encoding an image (or frame, or field) of video data. The encoder receives the target image and a set of previously decoded reference images, and generates an irregular sampling pattern of motion vectors for the target image.
At 740, a temporal prediction filtering process is applied to the irregular motion sampling pattern. This adaptive filtering process uses the motion vectors, irregular sampling pattern, and reference images to generate a prediction of the target image. At 750, the motion vector values are coded and sent to the decoder. At 760, a residual is generated, which is the actual target data of the target image minus the prediction from the adaptive filtering process. At 770, the residual is coded, and at 780 it is sent to the decoder.
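A sketch of this encoder flow, with every callable (`aaoi_predict`, `quantize`, `entropy_code`) an assumed, caller-supplied component:

```python
def encode_target(target, ref, mvs, aaoi_predict, quantize, entropy_code):
    """Encoder flow 740-780 (sketch).  Every callable here is an assumed,
    caller-supplied component, not a name taken from the patent."""
    pred = aaoi_predict(mvs, ref)                  # 740: adaptive temporal prediction
    mv_stream = entropy_code(mvs)                  # 750: code the motion vector values
    residual = target - pred                       # 760: actual target minus prediction
    res_stream = entropy_code(quantize(residual))  # 770/780: code and send the residual
    return mv_stream, res_stream
```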
In another embodiment, the AAOI filter is used in decoding an image (or frame, or field) of video data. The decoder receives the coded motion vector values and the coded residual, applies the adaptive temporal filtering process to generate the prediction of the target image from the reference images, and adds the decoded residual to the prediction to reconstruct the target image.
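A matching decoder sketch under the same assumptions:

```python
def decode_target(mv_stream, res_stream, ref, aaoi_predict, entropy_decode):
    """Decoder sketch: mirror the encoder.  Decode the motion vectors,
    re-run the identical AAOI prediction against the same reference
    data, and add the decoded residual back in."""
    mvs = entropy_decode(mv_stream)
    residual = entropy_decode(res_stream)
    pred = aaoi_predict(mvs, ref)                  # same prediction as the encoder
    return pred + residual                         # reconstructed target image
```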
While the invention is described in terms of embodiments in a specific system environment, those of ordinary skill in the art will recognize that the invention can be practiced, with modification, in other and different hardware and software environments within the spirit and scope of the appended claims.