The invention described and claimed hereinbelow is also described in PCT/DE 01/01018, filed on Mar. 16, 2001, and DE 100 22 331.1, filed May 10, 2000. This German Patent Application, whose subject matter is incorporated here by reference, provides the basis for a claim of priority of invention under 35 U.S.C. 119(a)-(d).
The invention is based on a method for transformation encoding of moving-image sequences in which motion vectors are estimated block-by-block between a reference image signal and a current image signal from the image sequence, and the motion compensation is carried out with these motion vectors.
In hybrid encoding concepts for moving-image sequences, a motion vector field is estimated block-by-block between a previously generated image signal (reference frame) and the current frame of the image sequence. This vector field is then used to perform a motion compensation. The motion vector field and the remaining prediction error are encoded and transmitted to the receiver. The prediction error is usually encoded using block transformations, typically a discrete cosine transformation (DCT) with 8×8 coefficients.
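The block-based hybrid loop described above can be summarized in a short sketch, assuming full-pel full-search block matching, a 64×64 test frame, and a separable 8×8 DCT applied to one residual block; the block size, search range, and all function names are assumptions made for this illustration rather than details of any of the cited standards.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    T = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    T[0, :] = np.sqrt(1.0 / n)
    return T

def motion_compensate(ref, cur, block=16, search=8):
    """Full-pel, full-search block matching; returns prediction and motion vectors."""
    h, w = cur.shape
    pred = np.zeros_like(cur)
    vectors = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            best_sad, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = ref[yy:yy + block, xx:xx + block]
                    sad = np.abs(cur[y:y + block, x:x + block] - cand).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_v = sad, (dy, dx)
            dy, dx = best_v
            pred[y:y + block, x:x + block] = ref[y + dy:y + dy + block,
                                                 x + dx:x + dx + block]
            vectors[(y, x)] = best_v
    return pred, vectors

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)      # reference frame
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))          # current frame (shifted copy)

pred, vectors = motion_compensate(ref, cur)
residual = cur - pred                                    # prediction error
T8 = dct_matrix(8)
coeff = T8 @ residual[:8, :8] @ T8.T                     # 8x8 DCT of one residual block
```

In a complete encoder, the coefficients would then be quantized and entropy-coded together with the motion vector field.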
An 8×8 DCT is used for transformation encoding in the previously standardized methods for moving-image encoding [1, 2]. For the motion compensation, blocks of 16×16 and 8×8 pixels are used; with MPEG-4, blocks of 16×8 pixels [1] are additionally used in the case of interlaced coding. The size of the block transformation remains constant at 8×8 coefficients.
In the test model for the new H.26L video coding standard [3], a 4×4 integer transformation based on the DCT is proposed. Compared to the DCT, this has the advantage that the pixel values, which are present as integers, are mapped onto integer transformation coefficients. On the one hand, this makes perfect reconstruction possible; on the other hand, it eliminates the transformation errors that are possible with the heretofore common floating-point DCT, which occur when the inverse DCT is implemented differently in the transmitter and in the receiver, e.g., when the float data type is used on one side and the double data type on the other. An integer transformation that approximates the transformation properties of the DCT and can be used in place of the floating-point DCT is presented in [4].
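One 4×4 integer transform with this property was used in early H.26L test models; whether it is identical to the transform of [3] or [4] is not asserted here, so the matrix below should be read purely as an illustration of the integer-in, integer-out behavior.

```python
import numpy as np

# A 4x4 integer transform whose rows are mutually orthogonal and of equal norm
# (T @ T.T = 676 * I); a transform of this form was used in early H.26L test
# models and is shown here only to illustrate the integer-in, integer-out idea.
T = np.array([[13,  13,  13,  13],
              [17,   7,  -7, -17],
              [13, -13, -13,  13],
              [ 7, -17,  17,  -7]], dtype=np.int64)

B = np.arange(16, dtype=np.int64).reshape(4, 4)     # integer pixel block
C = T @ B @ T.T                                     # integer coefficients, no rounding

assert C.dtype == np.int64
assert np.array_equal(T @ T.T, 676 * np.eye(4, dtype=np.int64))

# The inverse is exact up to the known scaling constant 676**2, so transmitter
# and receiver reconstruct bit-identically regardless of their float types.
B_rec = T.T @ C @ T
assert np.array_equal(B_rec, 676 ** 2 * B)
```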
In the H.26L test model, block sizes from 16×16 down to 4×4 pixels are used for the motion compensation. For the transformation encoding, however, the prediction error is always divided into 4×4 blocks in the test model.
With the measures described in claim 1 and the further developments of the dependent claims, the efficiency of encoding the prediction error can be enhanced, in particular in hybrid encoding methods in which different block sizes are used for the motion compensation.
The method according to the invention, i.e., coupling the block size of the transformation of the prediction error to the block size used in the motion compensation, is advantageous in particular when the blocks to be transformed are not limited to square shapes and rectangular blocks, e.g., of 4×8 or 16×8 pixels, are permitted as well.
Compared to conventional methods, the use of block sizes coupled to the motion compensation offers the advantage that the largest possible parts of the prediction error can be transformed jointly, without block boundaries contained therein diminishing the transformation gain with disturbing high-frequency components (blocking artifacts). Enhanced encoding efficiency is achieved as a result. The transformation of large blocks (16×16) and of blocks having a non-square shape, e.g., 8×4 or 16×8 pixels, results in encoding gains compared to known methods. As a result of the transformation, the energy of the transformed signal is concentrated on a few coefficients. The number of successive zeros within a block is increased by the use of larger blocks, which can be exploited to make the encoding more efficient (run-length encoding).
Since the selection of block sizes is already encoded in the bitstream for the motion compensation, no further signaling is needed to use the adapted transformations.
Exemplary embodiments of the invention are described in greater detail with reference to the drawings.
In the standardized coding methods and in H.26L, the frames of the image sequence are divided into macroblocks (MB) that are composed of a block of 16×16 pixels of luminance components and two corresponding chrominance blocks of 8×8 pixels each in the 4:2:0 YUV format [5]. Only the luminance components shall be considered hereinbelow; they are referred to as MB. The possible divisions of a macroblock MB proposed for H.26L are presented in the drawing.
In the case of the invention, the motion vectors are estimated block-by-block between a reference image signal, in particular a previously transmitted or previously determined image signal, and a current image signal of a moving-image sequence, and the motion compensation is carried out with these motion vectors. Different block sizes are used. The prediction error is transformation-encoded. The block size of the transformation encoding is coupled to the block size used in the motion compensation; in particular, the block size selected for the transformation encoding is the same as the block size that was used in the motion compensation. Square as well as rectangular blocks are permitted, so that the largest possible parts of the prediction error can be transformed jointly. This results in very efficient encoding, since the block sizes for the motion compensation are already encoded in the transmitted bitstream, and no further signaling is therefore required for the block sizes of the adaptive transformation encoding.
The number of successive zeros within the blocks can be exploited for efficient encoding, in particular run-length encoding.
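A minimal sketch of this effect, assuming a simple (run, level) representation of scanned coefficients with one end-of-block symbol per coded block; the coefficient scans and the symbol counting are illustrative and do not reproduce the entropy coding of any particular standard.

```python
from typing import List, Tuple

def run_level(scan: List[int]) -> List[Tuple[int, int]]:
    """(zero_run, level) pairs for one scanned block; trailing zeros are
    covered by a single end-of-block (EOB) symbol, counted separately."""
    pairs, run = [], 0
    for c in scan:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    return pairs

def symbol_count(blocks: List[List[int]]) -> int:
    # one EOB per coded block plus one symbol per (run, level) pair
    return sum(len(run_level(b)) + 1 for b in blocks)

# Illustrative coefficient scans: the same non-zero values either split over
# two 4x4 blocks or concentrated at the start of one larger 4x8 block.
two_small = [[9, 0, 0, 1] + [0] * 12,
             [5, 0, 2, 0] + [0] * 12]
one_large = [[9, 5, 0, 1, 2] + [0] * 27]

print(symbol_count(two_small))   # 6 symbols
print(symbol_count(one_large))   # 5 symbols
```

Collecting the same non-zero values in one larger block produces longer zero runs and only one end-of-block symbol, i.e., fewer symbols overall.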
The drawing shows the subdivision of the macroblocks that was determined for the motion compensation. Macroblock MB (aA), for example, is divided into four sub-blocks, to each of which a motion vector is assigned. Each of these sub-blocks is predicted independently of the others from the reference frame. MB (aB) has only one motion vector; in this case, the sub-block therefore corresponds to the entire macroblock MB. In the example MB (bA), there are eight sub-blocks that are predicted independently of each other, each with its own motion vector. The prediction error that remains after the motion compensation has the block structure shown.
For the transformations with adaptive block size, the information about the subdivision of the macroblocks that is known from the motion compensation is used. For each macroblock MB, the block transformation is selected that has the same block size as its sub-blocks. Therefore: in macroblock MB (aA), each of the four sub-blocks is transformed with an 8×8 transformation. Macroblock MB (aB) is given a 16×16 transformation, macroblock MB (aC) is given 8×16 transformations, etc. The block size of the transformations therefore corresponds to the block size of the motion compensation (the size of the sub-blocks).
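The coupling rule itself can be sketched as follows, assuming that each macroblock is split into equal-sized sub-blocks and that this split is already known from the decoded motion information; the partition parameters in the example are assumptions and do not reproduce the (aA), (aB), ... designations of the drawing.

```python
from typing import Tuple

MB_SIZE = 16  # luminance macroblock: 16x16 pixels

def transform_size(sub_blocks_x: int, sub_blocks_y: int) -> Tuple[int, int]:
    """Transform block size = motion-compensation sub-block size.

    The macroblock is split into sub_blocks_x * sub_blocks_y equal sub-blocks;
    the transform simply inherits their dimensions, so no signaling beyond the
    already transmitted motion information is needed."""
    return MB_SIZE // sub_blocks_x, MB_SIZE // sub_blocks_y

# A few illustrative partitions (width x height of the resulting transform):
print(transform_size(1, 1))   # one sub-block    -> 16x16 transform
print(transform_size(2, 2))   # four sub-blocks  -> 8x8 transforms
print(transform_size(2, 1))   # two sub-blocks   -> 8x16 transforms
print(transform_size(2, 4))   # eight sub-blocks -> 8x4 transforms
```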
Square Blocks
Separable transformations are used, i.e., the transformation matrix is applied in the horizontal and in the vertical direction. In the case of a square block,
C = T × B × T^T,
wherein B represents a block of n×n pixels, C represents the transformed block, and T is the transformation matrix of size n×n. T is orthogonal, i.e.,
T × T^T = T^T × T = constant × I_n,
where I_n denotes the n×n identity matrix. For orthonormal transformations, T × T^T = I_n applies, i.e., the constant equals 1.
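The two relations above can be checked numerically, here with the orthonormal DCT matrix taken as an example of T; the DCT construction is standard, while the block contents are random and purely illustrative.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix, so T @ T.T = I (constant = 1)."""
    k = np.arange(n)
    T = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    T[0, :] = np.sqrt(1.0 / n)
    return T

n = 8
T = dct_matrix(n)
B = np.random.default_rng(1).standard_normal((n, n))   # an n x n block

C = T @ B @ T.T                        # separable forward transform
B_rec = T.T @ C @ T                    # inverse transform

assert np.allclose(T @ T.T, np.eye(n))  # orthonormal: constant = 1
assert np.allclose(B_rec, B)            # perfect reconstruction
```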
Rectangular Blocks
Separable orthogonal transformations are also used for rectangular blocks of size n×m with n ≠ m. The transformation matrices for the rows and for the columns have different sizes, which is characterized by the indexing in the following equation:
C_{n,m} = T_{v,m,m} × B_{n,m} × T_{h,n,n}^T,
wherein T_h represents the transformation matrix for the rows, and T_v represents the transformation matrix for the columns.
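The same check for a rectangular block, here assumed to be 16 pixels wide and 8 pixels high, with orthonormal DCT matrices of the two sizes; T_h acts along the rows and T_v along the columns, as in the equation above.

```python
import numpy as np

def dct_matrix(size: int) -> np.ndarray:
    """Orthonormal DCT-II matrix of the given size."""
    k = np.arange(size)
    T = np.sqrt(2.0 / size) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * size))
    T[0, :] = np.sqrt(1.0 / size)
    return T

n, m = 16, 8                                   # block: n pixels wide, m pixels high
Th = dct_matrix(n)                             # row (horizontal) transform, n x n
Tv = dct_matrix(m)                             # column (vertical) transform, m x m
B = np.random.default_rng(2).standard_normal((m, n))   # stored as m rows x n columns

C = Tv @ B @ Th.T                              # transformed block, same n x m shape
B_rec = Tv.T @ C @ Th                          # inverse: perfect reconstruction

assert np.allclose(B_rec, B)
```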
Quantization
Scalar quantization is used as the basis hereinbelow. The following relationships must be modified accordingly for other quantizers.
The blocks of the prediction error are transformed. If orthonormal transformation matrices are used, i.e., T × T^T = I, a scalar quantization with a constant quantization step size qP for all transformation block sizes results in the same measure of distortion.
When whole-number, i.e., integer, transformations in particular are applied, it must be assumed that the transformation matrices are not normalized. In this case, a single generally valid quantization step size cannot be given. Since a uniform distortion is generally desired in all blocks of the encoded frame, quantizer tables must be compiled in which a corresponding qP_i is assigned, for every block shape that occurs, to the qP specified for the encoding.
If c_h and c_v are the scaling constants of the transformation matrices in the horizontal and in the vertical direction, i.e.,
T_h × T_h^T = c_h × I_n,
T_v × T_v^T = c_v × I_m,
wherein T_h is an n×n matrix and T_v is an m×m matrix, the quantizer step size qP_i for the n×m block B_i can then be determined from these scaling constants; one such rule is sketched below.
In the case of whole-number transformations, i.e., integer transformations, qP_i should be an integer. An allocation table that contains the correspondingly adapted integer qP_i for each block size must be compiled for this purpose.
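One plausible rule for deriving qP_i, assuming the goal of uniform distortion stated above and the fact that a transform with T_h × T_h^T = c_h × I_n and T_v × T_v^T = c_v × I_m scales every coefficient by sqrt(c_h × c_v) relative to an orthonormal transform, is qP_i = qP × sqrt(c_h × c_v), rounded to an integer. The sketch below implements exactly this assumption; the rule, the step size qP, and all scaling constants except the 676 of the 4×4 transform shown earlier are illustrative and are not taken from the claims.

```python
import math

def step_size(qP: float, c_h: float, c_v: float) -> float:
    """Assumed step-size rule: the non-normalized transform scales every
    coefficient by sqrt(c_h * c_v) relative to an orthonormal transform, so
    the step size is scaled by the same factor to keep distortion uniform."""
    return qP * math.sqrt(c_h * c_v)

def quantizer_table(qP: float, block_constants: dict) -> dict:
    """Integer qP_i per block shape, as needed for integer transforms."""
    return {shape: max(1, round(step_size(qP, ch, cv)))
            for shape, (ch, cv) in block_constants.items()}

# Illustrative scaling constants per block shape (width x height); the 676
# value belongs to the 4x4 integer transform shown earlier, the rest are made up.
constants = {
    (4, 4):   (676.0, 676.0),
    (8, 4):   (1024.0, 676.0),
    (8, 8):   (1024.0, 1024.0),
    (16, 16): (4096.0, 4096.0),
}
print(quantizer_table(qP=0.05, block_constants=constants))
# e.g. {(4, 4): 34, (8, 4): 42, (8, 8): 51, (16, 16): 205}
```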
A typical feature of the invention is that the basis functions or basis images of the underlying transformations become visible in the reconstructed frames when the quantization is very coarse. With conventional encoding methods, the block size of these basis functions is constant over the entire frame; when the adaptive block sizes are used, basis images of different sizes and, above all, of non-square shape, corresponding to the blocks of the motion compensation, can be made out.