The present invention relates to a moving picture encoding technique for encoding a moving picture, and a moving picture decoding technique for decoding a moving picture.
An encoding system such as an MPEG (Moving Picture Experts Group) system has been designed as a technique for converting a large volume of moving picture information into digital data for recording and transmission. Known standards include MPEG-1, MPEG-2, MPEG-4 and H.264/AVC (Advanced Video Coding).
In H.264/AVC, compression efficiency has been improved by using prediction encoding such as intra prediction encoding and inter prediction encoding. Various prediction directions are available in this prediction encoding, and they are selected per block for encoding. A problem arose in that a code indicating the prediction direction used in each target block had to be encoded separately, increasing the amount of coding.
Further, since prediction was performed for every macro block while switching among a plurality of pixel value prediction methods and block sizes at each prediction encoding, information on the pixel value prediction method and block size had to be encoded for every macro block.
To address this problem, Non Patent Literature 1 discloses that, when encoding the prediction direction at intra prediction encoding, the code representing the prediction direction is shortened for blocks at the edge of the image, where the number of available prediction directions is small, thereby reducing the amount of coding.
However, the technique described in Non Patent Literature 1 is applicable only to blocks at the edge of the image, so the improvement in compression efficiency is small.
The present invention has been made in view of the above problems. An object of the present invention is to further reduce the amount of coding in an encoding/decoding process of a moving picture.
In order to address the above problems, one embodiment of the present invention may be configured as described in Claims, for example.
The amount of coding can be further reduced in an encoding/decoding process of a moving picture.
Embodiments of the present invention will hereinafter be explained with reference to the accompanying drawings.
In H.264/AVC, an encoding process is performed on a target frame for encoding in raster scan order (501). A prediction process is performed on a target block for encoding using the decoded images of the encoded blocks adjacent to its left, upper left, upper and upper right sides. The prediction process uses the pixel values of 13 pixels included in these encoded blocks (502). Pixels lying on the same straight line whose gradient is the prediction direction vector are all predicted from the same reference pixel. For example, as designated at (503), pixels B, C, D and E of the target block are all prediction-encoded with reference to the same pixel: differences (residuals) b, c, d and e between the pixels B, C, D and E and the value A′ obtained by decoding the pixel directly above the pixel B are first calculated. Next, one prediction direction is selected per block from eight candidate prediction directions such as vertical, horizontal and diagonal directions, and the residuals and a prediction direction value indicating the selected direction are encoded. In H.264/AVC, "DC prediction", which predicts all pixels of the target block by the average value of the reference pixels, can also be used in addition to prediction along a specific direction (504).
The decoding process is also performed in raster scan order, in a manner similar to the encoding process (701). The inverse procedure of the encoding process is then performed using the decoded reference pixel and the residuals: the residual values are added to the reference pixel value along the prediction direction to obtain the decoded image. For example, reference numeral (702) designates the process of adding a decoded reference pixel A′ to decoded residuals b′, c′, d′ and e′ (the decoded values of b, c, d and e) to reconstruct the pixels of the block.
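As an illustration of the prediction and reconstruction described above, the following is a minimal Python sketch of the vertical-direction case (503)/(702): pixels B, C, D and E in one column of the target block are all predicted from the decoded pixel A′ directly above them, and the decoder adds the same reference back. The function names and the use of plain lists are illustrative, and transform and quantization of the residuals are omitted here, so the reconstruction is exact in this sketch.

```python
def vertical_predict_residuals(column_pixels, ref_above):
    """Residuals b, c, d, e for pixels B, C, D, E all predicted from A' (vertical mode)."""
    return [p - ref_above for p in column_pixels]

def vertical_reconstruct(residuals, ref_above):
    """Decoder side: add the same decoded reference pixel back to the residuals."""
    return [r + ref_above for r in residuals]

# Example: one column of a 4x4 target block and the decoded pixel directly above it.
B, C, D, E = 100, 102, 101, 99
A_dec = 98
residuals = vertical_predict_residuals([B, C, D, E], A_dec)   # [2, 4, 3, 1]
assert vertical_reconstruct(residuals, A_dec) == [B, C, D, E]
```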
As described above, the intra prediction encoding process of H.264/AVC adopts a unidirectional method in which pixels located along the prediction direction are predicted from the reference pixel. In this case, information indicating which prediction direction is used had to be added to the encoded video stream for every block taken as the unit of the prediction process.
H.264/AVC exploits the fact that the prediction direction of a target block is highly correlated with the prediction directions of adjacent blocks, and estimates the prediction direction of a target block for encoding from the prediction directions of encoded adjacent blocks. That is, as designated at reference numeral (601), reference is made to the prediction direction of a block A adjacent to the left side of the target block and the prediction direction of a block B adjacent to its upper side. Of these two prediction directions, the one with the smaller prediction direction value is taken as the predicted value (adjacent direction) of the prediction direction of the target block (602).
Reference numeral (603) designates the details of the bit pattern indicating the prediction method. In H.264/AVC, when the prediction direction for the target block and the prediction direction for the adjacent block are the same, information (1 bit) indicating that they correspond to the same prediction direction is encoded.
On the other hand, when the two are different, information indicating that the prediction direction for the target block and the prediction direction for the adjacent block differ is encoded. Thereafter, the actual prediction direction is encoded in 3 bits, as one of the eight remaining types obtained by excluding the adjacent block's prediction direction from the nine types of prediction directions (the eight directions plus DC prediction).
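As an illustration of this signalling, the following Python sketch reproduces the flag-plus-remainder pattern just described: the estimated mode is the smaller of the two neighbouring mode values, one bit indicates whether the actual mode equals the estimate, and otherwise the remaining eight of the nine modes are coded in three bits. The function names are illustrative and the bit handling is simplified relative to the actual H.264/AVC syntax (prev_intra4x4_pred_mode_flag / rem_intra4x4_pred_mode).

```python
def signal_mode(mode, predicted):
    """Bits for one intra prediction mode (0..8): 1-bit flag, plus a 3-bit remainder
    over the eight modes other than the predicted one when the flag is 0."""
    if mode == predicted:
        return [1]
    rem = mode if mode < predicted else mode - 1
    return [0] + [(rem >> i) & 1 for i in (2, 1, 0)]

def parse_mode(bits, predicted):
    if bits[0] == 1:
        return predicted
    rem = (bits[1] << 2) | (bits[2] << 1) | bits[3]
    return rem if rem < predicted else rem + 1

# H.264/AVC takes the smaller of the left and upper neighbours' modes as the estimate.
mode_left, mode_up = 5, 3
predicted = min(mode_left, mode_up)
for m in range(9):                      # round trip over all nine modes (8 directions + DC)
    assert parse_mode(signal_mode(m, predicted), predicted) == m
```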
In this case, a large number of bits is still required to represent the prediction directions. When intra prediction is performed in block units of a 4×4 pixel size, for example, up to 64 bits are generated per macro block (sixteen 4×4 blocks, each requiring up to 4 bits).
Embodiment 1 is an example in which the present invention is applied to the encoding and decoding of the prediction direction of a target block at intra prediction. In the present embodiment, it is determined, using the prediction direction data of blocks adjacent to the target block, whether the prediction direction of the target block is easy to estimate. The method of encoding and decoding the prediction direction data of the target block is switched between the case where the estimation is determined to be easy and the case where it is determined not to be easy.
The present embodiment will hereinafter be described in further detail.
As shown in an image block explanatory diagram (801), the prediction directions MA, MB, MC and MD of the encoded adjacent blocks A, B, C and D adjacent to the left, upper, upper left and upper right sides of the target block are used to determine whether the prediction direction of the target block is easy to estimate. That is, when N (where N is an integer greater than or equal to 2 and less than or equal to 4) or more of the prediction directions MA, MB, MC and MD are the same, the estimation of the prediction direction of the target block is determined to be easy, and the prediction direction data of the target block is encoded using a prediction direction encoding method A.
Also, when the prediction direction information of the adjacent blocks cannot be utilized, for example because the target block lies at the edge of a slice or the edge of the image, it is likewise determined that the prediction direction of the target block is easy to estimate, and the prediction direction data of the target block is encoded using the prediction direction encoding method A.
Incidentally, for blocks that fall under neither of the above cases, the estimation of the prediction direction is determined not to be easy, and a prediction direction encoding method B (803) is selected to perform variable length encoding.
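A minimal sketch of this determination follows, under the assumptions that unavailable neighbours are passed as None and that the threshold N is a parameter; the handling of partially available neighbours is not specified in the text and is an assumption of this sketch.

```python
from collections import Counter

def direction_estimation_is_easy(ma, mb, mc, md, n=3):
    """Decide whether the prediction direction of the target block is easy to estimate.

    ma..md: prediction directions of the left, upper, upper-left and upper-right
    neighbours, or None where a neighbour is unavailable (slice edge, image edge, ...).
    n: the threshold N (2 <= N <= 4) from the text.
    """
    dirs = [m for m in (ma, mb, mc, md) if m is not None]
    if not dirs:
        return True                      # no usable neighbour information: method A (see text)
    return Counter(dirs).most_common(1)[0][1] >= n

# Three neighbours agree on direction 1 -> easy -> prediction direction encoding method A.
assert direction_estimation_is_easy(1, 1, 1, 6, n=3)
# All different -> not easy -> prediction direction encoding method B.
assert not direction_estimation_is_easy(0, 3, 5, 7, n=3)
```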
When the prediction direction encoding method A is selected by the above determination, the process proceeds to prediction direction selection. That is, one of the prediction directions MA, MB, MC and MD of the encoded adjacent blocks A, B, C and D adjacent to the left, upper, upper left and upper right sides of the target block is selected by a prescribed method, and the prediction direction of the selected adjacent block is taken as the estimated prediction direction of the target block.
Any selection method may be used as the prescribed method as long as the same process can be carried out on both the encoding and decoding sides; for example, a method of selecting the smallest prediction direction value out of MA, MB, MC and MD, or a method of selecting the most frequently occurring prediction direction value of MA, MB, MC and MD, may be used.
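The two example selection rules could be sketched as below; any rule works as long as the encoder and decoder apply the same one. The DC fallback for the case where no neighbour direction is available is an assumption, not something stated in the text.

```python
from collections import Counter

DC_PREDICTION = 2   # H.264/AVC numbers DC prediction as mode 2; assumed fallback value

def select_estimated_direction(neighbour_dirs, rule="smallest"):
    """Pick the estimated prediction direction from the available neighbour directions."""
    dirs = [m for m in neighbour_dirs if m is not None]
    if not dirs:
        return DC_PREDICTION                       # assumed default when nothing is available
    if rule == "smallest":
        return min(dirs)                           # smallest prediction direction value
    return Counter(dirs).most_common(1)[0][0]      # most frequently occurring direction

assert select_estimated_direction([1, 1, 6, 4]) == 1
assert select_estimated_direction([6, 1, 6, 4], rule="most_frequent") == 6
```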
Further, in the encoding process where the prediction direction encoding method A is selected, the encoded adjacent blocks A, B, C and D adjacent to the left, upper, upper left and upper right sides of the target block may be used as the adjacent blocks, or, as is usual, only the encoded adjacent blocks A and B adjacent to the left and upper sides of the target block may be used.
The prediction direction encoding method A will next be explained in further detail. The prediction direction encoding method A determines an estimated prediction direction using the prediction direction information of the adjacent blocks and encodes the prediction direction data of the target block using that estimate.
The bit pattern diagram (802) shows the details of the bit pattern indicating the prediction method in the prediction direction encoding method A.
When the prediction direction of the target block and the estimated prediction direction obtained from the adjacent blocks are the same, information (1 bit) indicating that they are the same direction is encoded.
On the other hand, when the prediction direction of the target block differs from the estimated prediction direction, information indicating that they differ is encoded. Thereafter, the actual prediction direction is encoded in 3 bits, as one of the eight remaining types obtained by excluding the estimated prediction direction from the nine types of prediction directions (the eight directions plus DC prediction).
The prediction direction encoding method B will next be explained in further detail. The prediction direction encoding method B encodes the prediction direction data of the target block independently, without estimating it from the prediction direction data of the adjacent blocks.
The table (803) shows one example of a variable length coding table used in the prediction direction encoding method B; the prediction direction of the target block is variable-length encoded in accordance with such a table.
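Prediction direction encoding method B simply maps each of the nine directions to a fixed variable-length codeword. The table below is an invented unary-style example, not the actual table (803) of the embodiment; any prefix-free table shared by encoder and decoder would serve the same purpose.

```python
# Hypothetical prefix-free table for the nine prediction directions (0..7 plus DC).
# This is NOT the table (803) of the embodiment, only an illustration.
METHOD_B_TABLE = {
    0: "1",      1: "01",      2: "001",      3: "0001",
    4: "00001",  5: "000001",  6: "0000001",  7: "00000001",
    8: "000000001",
}
METHOD_B_INVERSE = {code: d for d, code in METHOD_B_TABLE.items()}

def method_b_encode(direction):
    return METHOD_B_TABLE[direction]

def method_b_decode(bitstring):
    """Read one codeword from the front of bitstring; return (direction, remaining bits)."""
    for i in range(1, len(bitstring) + 1):
        if bitstring[:i] in METHOD_B_INVERSE:
            return METHOD_B_INVERSE[bitstring[:i]], bitstring[i:]
    raise ValueError("no valid codeword found")

d, rest = method_b_decode(method_b_encode(5) + method_b_encode(0))
assert d == 5 and rest == "1"
```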
While the encoding process according to the present embodiment has been explained above, the decoding process is performed by carrying out the inverse of the corresponding encoding method. That is, in the decoding process according to the present embodiment, it is determined, using the prediction direction information of adjacent decoded blocks, whether the prediction direction of the target block for decoding is easy to estimate. When the estimation is determined to be easy, the prediction direction data of the target block is decoded in accordance with the bit pattern (802). When the estimation is determined not to be easy, the prediction direction data of the target block is decoded based on the variable length coding table (803).
A moving picture encoding device according to the present embodiment will next be explained.
The moving picture encoding device according to the present embodiment includes an input image memory (102) which holds an input original image (101), a block divide unit (103) which divides the input image into small areas, a motion estimation unit (104) which detects motion in block units, an intra prediction unit (106) which performs an intra prediction process in block units, an inter prediction unit (107) which performs an inter prediction process in block units, a mode selection unit (108) which selects a prediction method, a subtraction unit (109) which generates residual data, a transform unit (110) and a quantization unit (111) which transform and quantize the residual data, a variable length coding unit (112) which generates a coded video stream, an inverse quantization unit (113) and an inverse transform unit (114) which decode the residual data, an addition unit (115) which generates a decoded image, and a reference frame memory (116) which holds the decoded image.
The input image memory (102) holds one image of the original image (101) as the target frame for encoding. The block divide unit (103) divides this image into small blocks and outputs them to the motion estimation unit (104) and the intra prediction unit (106). The motion estimation unit (104) calculates the amount of motion of each block using the decoded image stored in the reference frame memory (116) and outputs it to the inter prediction unit (107) as motion vector data. The intra prediction unit (106) and the inter prediction unit (107) perform an intra prediction process and an inter prediction process in block units. The mode selection unit (108) selects the optimal prediction process out of the intra prediction process and the inter prediction process and outputs the predicted image of the selected prediction process to the subtraction unit (109). When the intra prediction process is selected, the mode selection unit (108) outputs the encoded prediction direction data described later to the variable length coding unit (112). The subtraction unit (109) generates residual data between the input image and the predicted image of the selected prediction process and outputs it to the transform unit (110). The transform unit (110) and the quantization unit (111) apply a transform such as DCT (Discrete Cosine Transformation) and quantization processing to the residual data in block units of a designated size and output the result to the variable length coding unit (112) and the inverse quantization unit (113). The variable length coding unit (112) variable-length encodes the residual information expressed by the transform coefficients, based on the probability of occurrence of symbols, together with the information necessary for prediction decoding, such as prediction directions at intra prediction encoding and motion vectors at inter prediction encoding, to generate a coded video stream. The inverse quantization unit (113) and the inverse transform unit (114) apply inverse quantization and an inverse transform such as IDCT (Inverse DCT) to the quantized transform coefficients to recover the residual and output it to the addition unit (115). The addition unit (115) generates a decoded image and outputs it to the reference frame memory (116), which stores the decoded image.
Here, for example, each of the images divided by the block divide unit (103) is input to the intra prediction unit (106). In the intra prediction unit (106), a prediction direction estimation cost calculation unit (203) reads the prediction direction information of peripheral encoded blocks from a prediction direction memory (206) and determines, based on the read information, whether the prediction direction of the block to be encoded is easy to estimate.
When the estimation of the prediction direction of the block to be encoded is determined to be easy, for example, the encoding of the prediction direction data is performed by a prediction direction prediction encoding unit (205). The prediction direction prediction encoding unit (205) encodes the prediction direction data using the prediction direction encoding method A described above.
On the other hand, when the estimation of the prediction direction of the block to be encoded is determined not to be easy, the encoding of the prediction direction data is performed by a prediction direction variable-length encoding unit (204). The prediction direction variable-length encoding unit (204) encodes the prediction direction using the prediction direction encoding method B described above.
The prediction direction variable-length encoding unit (204) or the prediction direction prediction encoding unit (205) outputs the prediction direction data encoded in the above-described manner to the mode selection unit (108). Incidentally, although the encoding of the prediction direction data is carried out by the intra prediction unit (106) in this example, the present invention is not limited to this configuration.
One example of a moving picture decoding device according to the present embodiment will next be explained.
The variable length decoding unit (302) variable-length decodes the video stream (301) to obtain the information necessary for the prediction process, such as residual transform coefficient components, prediction directions and motion vectors. The residual transform coefficient components are output to the inverse quantization unit (303), and the prediction directions, motion vectors and so on are output to the intra prediction unit (306) or the inter prediction unit (307) according to the prediction means used. Subsequently, the inverse quantization unit (303) and the inverse transform unit (304) apply inverse quantization and an inverse transform to the residual information to decode the residual data. The intra prediction unit (306) and the inter prediction unit (307) perform a prediction process based on the data input from the variable length decoding unit (302), referring to the decoded image stored in the reference frame memory (309). The addition unit (308) generates a decoded image, and the reference frame memory (309) stores the decoded image.
Here, in the intra prediction unit (306), a prediction direction cost calculation unit (401) reads the prediction direction information of peripheral decoded blocks from a prediction direction memory (405) and determines, based on the read information, whether the prediction direction of the block to be decoded is easy to estimate. As this determination method, the same method as on the encoding side, for example the determination described for the image block explanatory diagram (801), can be used.
For example, when the estimation of the prediction direction of the block to be decoded is determined to be easy, the decoding of the prediction direction data is performed by a prediction direction prediction decoding unit (403). The prediction direction prediction decoding unit (403) decodes the prediction direction data using a decoding method corresponding to the prediction direction encoding method A described above.
On the other hand, when the estimation of the prediction direction of the block to be decoded is determined not to be easy, the decoding of the prediction direction data is performed by a prediction direction variable-length decoding unit (402). The prediction direction variable-length decoding unit (402) decodes the prediction direction using a decoding method corresponding to the prediction direction encoding method B described above.
The prediction direction data decoded as described above is input to an intra prediction image creation unit (404) and is also stored in the prediction direction memory (405). The intra prediction image creation unit (404) outputs an intra prediction image to the addition unit (308) based on the pixel values of the decoded images of the adjacent blocks input from the reference frame memory (309) and the decoded prediction direction data.
Incidentally, although the decoding of the prediction direction data is carried out at the intra prediction unit (306) in this example, the present invention is not limited to this configuration.
A procedure for the encoding of one frame in the moving picture encoding device according to the present embodiment will next be explained.
The following processes are performed on every block in the frame to be encoded (901). That is, a prediction encoding process is performed once for every candidate encoding mode (combination of prediction method and block size) for the block, residuals are calculated, and the encoding mode with the highest coding efficiency is selected.
In the prediction encoding process, an intra prediction encoding process (904) or an inter prediction encoding process (907) is performed and the optimal prediction encoding process is selected, so that encoding is performed efficiently according to the properties of the image.
When the encoding mode with the best coding efficiency is selected from the many candidates (908), the RD-Optimization method, which determines the optimal encoding mode from the relationship between image-quality distortion and the amount of coding, can be used to encode efficiently, for example. The details of the RD-Optimization method are described in Reference Literature 1.
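RD-Optimization selects the candidate minimising a joint rate-distortion cost of the general form J = D + λ·R. The sketch below is a generic illustration under that assumption, not the specific formulation of Reference Literature 1, and the candidate values in the example are made up.

```python
def rd_select(candidates, lam):
    """Pick the candidate minimising J = D + lambda * R.

    candidates: iterable of (name, distortion, rate_bits) tuples.
    lam: the Lagrange multiplier trading distortion against rate.
    """
    return min(candidates, key=lambda c: c[1] + lam * c[2])

# Example: an intra candidate with higher distortion but far fewer bits can still win.
modes = [("intra_4x4", 120.0, 96), ("inter_16x16", 90.0, 240)]
assert rd_select(modes, lam=0.5)[0] == "intra_4x4"   # 120 + 48 < 90 + 120
```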
Subsequently, transformation (909) and quantization (910) are applied to the residual data generated with the selected encoding mode, and variable length encoding is performed to generate a video stream (911).
On the other hand, an inverse quantization process (912) and an inverse transform process (913) are applied to the quantized transform coefficients to decode the residual data, and a decoded image is generated and stored in the reference frame memory (914). When the above processing has been completed for all the blocks, the encoding of one frame is ended (915).
The details of the procedure for the intra prediction encoding process (904) will next be explained.
An intra prediction process (1002) is performed for all prediction directions (1001) for the block to be encoded, and the optimal prediction direction is selected from among them (1003). It is then determined from the information on the encoded peripheral blocks whether the prediction direction is easy to estimate (1004). If it is easy to estimate, the prediction direction is encoded using the prediction direction encoding method A (1005); if not, it is encoded using the prediction direction encoding method B (1006). The encoding of the prediction direction for one block is then ended (1007).
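Tying steps (1001) to (1007) together, the per-block flow might be sketched as follows. It reuses the hypothetical helpers from the earlier sketches (direction_estimation_is_easy, select_estimated_direction, signal_mode, method_b_encode), so it is not self-contained on its own, and the SAD-based choice of the best direction is an assumption; the text only requires that the optimal direction be selected.

```python
def encode_intra_direction_for_block(block, predictors, neighbour_dirs, n=3):
    """block: list of pixel values; predictors: dict mapping direction -> predicted pixels."""
    # (1001)-(1003): try every candidate direction and keep the one with the smallest SAD.
    def sad(d):
        return sum(abs(a - b) for a, b in zip(block, predictors[d]))
    best_dir = min(predictors, key=sad)
    # (1004): decide whether the direction is easy to estimate from the neighbours.
    if direction_estimation_is_easy(*neighbour_dirs, n=n):
        est = select_estimated_direction(neighbour_dirs)          # (1005): method A
        bits = signal_mode(best_dir, est)
    else:
        bits = [int(b) for b in method_b_encode(best_dir)]        # (1006): method B
    return best_dir, bits                                         # (1007)
```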
Incidentally, although the encoding of the prediction direction data is carried out in the intra prediction encoding process (904) in this example, the present invention is not limited to this configuration.
A procedure for the decoding of one frame in the moving picture decoding device according to the present embodiment will next be explained.
The following processes are performed on all blocks in one frame (1101). That is, a variable length decoding process is performed on the input stream (1102), and an inverse quantization process (1103) and an inverse transform process (1104) are performed to decode the residual data. Subsequently, the prediction mode in which the target block was prediction-encoded is determined based on information included in the video stream, and an intra prediction decoding process (1106) or an inter prediction decoding process (1109) is performed according to the result to generate a predicted image. The predicted image is added to the decoded residual data to create a decoded image, which is stored in the reference frame memory. When the above processing has been completed for all blocks in the frame, the decoding of one frame is ended (1110).
The details of the procedure for the intra prediction decoding process (1106) will next be explained.
First, it is determined from the prediction directions of the decoded blocks located around the target block whether the prediction direction of the target block is easy to estimate (1201). If it is easy to estimate, decoding corresponding to the prediction direction encoding method A is executed (1202); if not, decoding corresponding to the prediction direction encoding method B is executed (1203). A prediction decoding process is then performed based on the decoded prediction direction data (1204), and the intra prediction decoding process for one block is ended (1205).
Incidentally, although the decoding of the prediction direction data is carried out in the intra prediction decoding process (1106) in this example, the present invention is not limited to this configuration.
Although DCT has been taken as one example of the transform in the present embodiment, any transform such as DST (Discrete Sine Transformation), WT (Wavelet Transformation), DFT (Discrete Fourier Transformation) or KLT (Karhunen-Loeve Transformation) may be adopted as long as it removes inter-pixel correlation.
The residual itself may also be encoded without performing any transform. Further, variable length coding need not necessarily be performed.
Although the present embodiment has described the case where prediction is performed in block units of a 4×4 pixel size, the present invention may be applied to blocks of other sizes such as an 8×8 pixel size or a 16×16 pixel size, for example.
Although prediction is performed along the eight directions defined in H.264/AVC in the present embodiment, the number of directions may be increased or decreased.
According to the moving picture encoding device, moving picture encoding method, moving picture decoding device and moving picture decoding method of embodiment 1 described above, the amount of coding can be further reduced in the encoding/decoding process of a moving picture.
Embodiment 2 describes an example in which the selective encoding process described in embodiment 1 is applied to the encoding of prediction mode information used in prediction encoding, such as the macro block size and the prediction method (intra prediction or inter prediction).
In each frame, encoding is performed sequentially in raster scan order from the macro block at the upper left corner of the screen to the macro block at the lower right corner. A macro block can be divided into blocks of smaller sizes; the optimal one is selected from several sizes defined in advance for each type of prediction method and encoded. For intra prediction, two block sizes can be used: 16×16 pixels (I16×16 mode) and 4×4 pixels (I4×4 mode), whichever is more suitable. For inter prediction, sizes of 16×16 pixels (P16×16 mode), 16×8 pixels (P16×8 mode), 8×16 pixels (P8×16 mode) and 8×8 pixels (P8×8 mode) are prepared, and the 8×8 pixel size can be further divided into sub-macro blocks of 8×8, 8×4, 4×8 and 4×4 pixel sizes. Further, a PSkip mode in which no motion vector information is encoded is prepared for the 16×16 pixel block size, and a P8×8ref0 mode in which no reference frame number is encoded is prepared for the 8×8 pixel size.
The prediction method and block size described above are determined for each macro block, and this information is encoded. Each combination of prediction method (intra prediction or inter prediction) and block size illustrated above (e.g., the I16×16 mode or the I4×4 mode) is called a block type.
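For reference, the block types listed above can be collected into a small data structure; the Python constants below merely restate the modes named in the preceding paragraphs.

```python
# Block types (prediction method + block size) selectable per macro block, as listed above.
INTRA_BLOCK_TYPES = ("I16x16", "I4x4")
INTER_BLOCK_TYPES = ("P16x16", "P16x8", "P8x16", "P8x8",
                     "PSkip",      # 16x16, no motion vector information encoded
                     "P8x8ref0")   # 8x8, no reference frame number encoded
# A P8x8 partition may be split further into sub-macro-block sizes (width, height):
SUB_MACROBLOCK_SIZES = ((8, 8), (8, 4), (4, 8), (4, 4))
```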
A block type encoding method according to the present embodiment will now be explained.
As is apparent from the following, the block type is encoded by switching the encoding method depending on whether the block type of the target block is easy to estimate, in a manner similar to embodiment 1.
Specifically, as shown in the image block explanatory diagram (1401), the block type of a target block is estimated using information on the encoded adjacent blocks A, B, C and D adjacent to the left, upper, upper left and upper right sides of the target block. At this time, the encoding method of the block type is switched depending on whether the block type is easy to estimate. When the estimation is easy, a block type encoding method A is used, and the block type of the target block is encoded based on the result of estimation from the block types of the adjacent blocks. When the estimation is not easy, a block type encoding method B is used, and the block type is encoded on its own without being estimated from the adjacent blocks. This determination of the difficulty of estimation can be made, for example, by a majority decision over the block types MSA, MSB, MSC and MSD of the peripheral encoded adjacent blocks A, B, C and D: when N (where N is an integer greater than or equal to 2) or more of them are the same, the estimation of the block type is determined to be easy; otherwise it is determined not to be easy.
The bit pattern diagram (1402) shows the details of the bit pattern indicating the prediction method in the block type encoding method A. The block type encoding method A needs to decide the adjacent mode (estimated block type); this can be determined, for example, by taking the block type that appears most often among the block types of the peripheral blocks as the adjacent mode (estimated block type).
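A sketch of the determination described above and of the estimation of the adjacent mode, analogous to the prediction direction case of embodiment 1, follows; block types are represented as strings here, and how ties among equally frequent types are broken is not specified in the text, so the tie-breaking in this sketch is arbitrary.

```python
from collections import Counter

def block_type_estimation_is_easy(neighbour_types, n=2):
    """True when at least n of the available neighbouring block types are identical."""
    types = [t for t in neighbour_types if t is not None]
    return bool(types) and Counter(types).most_common(1)[0][1] >= n

def estimated_block_type(neighbour_types):
    """Adjacent mode: the block type appearing most often among the neighbours."""
    types = [t for t in neighbour_types if t is not None]
    return Counter(types).most_common(1)[0][0] if types else None

msa, msb, msc, msd = "I4x4", "I4x4", "P16x16", "I4x4"
assert block_type_estimation_is_easy([msa, msb, msc, msd], n=2)
assert estimated_block_type([msa, msb, msc, msd]) == "I4x4"
```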
The table (1403) shows one example of a variable length coding table used in the block type encoding method B. The block type encoding method B variable-length encodes the block type in accordance with such a variable length coding table as designated at (1403). The table (1403) is one example, and another pattern may be used.
While the encoding has been described above, decoding can be performed by carrying out a process opposite to the corresponding encoding method upon decoding.
That is, the difficulty of estimating the block type of the target block is determined using the block type information of the adjacent decoded blocks. When the estimation of the block type is easy, the block type is decoded in accordance with the bit pattern of the bit pattern diagram (1402); when it is not easy, the block type is decoded based on a coding table like the table (1403).
An image encoding device according to the present embodiment can be achieved by configuring the variable length coding unit (112) as described below, for example.
In the variable length coding unit (112), a block type cost calculation unit (1501) reads the block type information of peripheral encoded blocks from a block type memory (1505) and determines, based on the read information, whether the block type of the block to be encoded is easy to estimate.
For example, when the estimation of the block type is determined to be easy, the encoding of the block type is performed by a block type prediction encoding unit (1503). The block type prediction encoding unit (1503) encodes the block type using the method (block type encoding method A) shown in the bit pattern diagram (1402).
On the other hand, when the estimation of the block type is determined not to be easy, the encoding of the block type is carried out by a block type variable length coding unit (1502). The block type variable length coding unit (1502) encodes the block type using a variable length coding method (block type encoding method B) that uses the table (1403).
As described above, encoding is performed while selecting the method of encoding the block type for every block.
At the same time, a variable length coding unit (1504) for data other than the block type performs variable length coding of that data, and this result and the result of encoding the block type are output. Incidentally, although the encoding of the block type is performed by the variable length coding unit (112) in this example, the present invention is not limited to this configuration.
An image decoding device according to the present embodiment can be achieved by configuring the variable length decoding unit (302) as described below, for example.
In the variable length decoding unit (302), a block type cost calculation unit (1601) reads the block type information of peripheral decoded blocks from a block type memory (1605) and determines, based on the read information, whether the block type of the block to be decoded is easy to estimate.
For example, when the estimation of the block type is determined to be easy, the decoding of the block type is performed by a block type prediction decoding unit (1603). The block type prediction decoding unit (1603) performs decoding using a decoding method corresponding to the method (block type encoding method A) shown in the bit pattern diagram (1402).
When the estimation of the block type is determined not to be easy, the decoding of the block type is carried out by a block type variable length decoding unit (1602). The block type variable length decoding unit (1602) decodes the block type using a decoding method corresponding to the variable length coding method (block type encoding method B) that uses the table (1403).
Each block type decoded as described above is stored in the block type memory (1605).
A variable length decoding unit (1604) for data other than the block type performs variable length decoding of that data and outputs the result together with the result of decoding of the block type.
Although the decoding of the block type is performed by the variable length decoding unit (302) in this example, the present invention is not limited to this configuration.
In the procedure for the encoding of one frame in the moving picture encoding device according to the present embodiment, the variable length encoding process (911) is performed as follows.
That is, it is first determined from the block type information of the encoded peripheral blocks whether the block type of the target block is easy to estimate. If the estimation is easy, the block type is encoded using the block type encoding method A; if not, the block type is encoded using the block type encoding method B.
Although the encoding of the block type is performed in the variable length encoding process (911) in this example, the present invention is not limited to this configuration.
In the procedure for the decoding of one frame in the moving picture decoding device according to the present embodiment, the variable length decoding process (1102) is performed as follows.
That is, it is first determined from the block type information of the decoded peripheral blocks whether the block type of the target block is easy to estimate. If the estimation is easy, decoding corresponding to the block type encoding method A is executed; if not, decoding corresponding to the block type encoding method B is executed.
Although the decoding of the block type is performed in the variable length decoding process (1102) in this example, the present invention is not limited to this configuration.
Although DCT has been taken as one example of the transform in the present embodiment, any transform such as DST (Discrete Sine Transformation), WT (Wavelet Transformation), DFT (Discrete Fourier Transformation) or KLT (Karhunen-Loeve Transformation) may be adopted as long as it removes inter-pixel correlation. The residual itself may also be encoded without performing any transform.
Further, variable length coding need not necessarily be performed. Although prediction is performed along the eight directions defined in H.264/AVC in the present embodiment, the number of directions may be increased or decreased.
Although some block types have been illustrated by way of example in the present embodiment, other block types may be used.
Although the above two embodiments have shown examples in which the present invention is applied to the encoding and decoding of the prediction direction at intra prediction and to the encoding and decoding of the block type at prediction encoding, the present invention can also be applied to other information that must be encoded in block units, such as the CBP (Coded Block Pattern) indicating the presence or absence of frequency coefficients, or motion vectors.
The present invention is useful as a moving picture encoding technique for encoding a moving picture and a moving picture decoding technique for decoding a moving picture.
101 . . . original image, 102 . . . input image memory, 103 . . . block divide unit, 104 . . . motion estimation unit, 106 . . . intra prediction unit, 107 . . . inter prediction unit, 108 . . . mode selection unit, 109 . . . subtraction unit, 110 . . . transform unit, 111 . . . quantization unit, 112 . . . variable length coding unit, 113 . . . inverse quantization unit, 114 . . . inverse transform unit, 115 . . . addition unit, 116 . . . reference frame memory, 201 . . . direction-specific prediction unit, 202 . . . prediction direction determination unit, 203 . . . prediction direction estimation cost calculation unit, 204 . . . prediction direction variable-length encoding unit, 205 . . . prediction direction prediction encoding unit, 206 . . . prediction direction memory, 207 . . . intra prediction image creation unit, 301 . . . video stream, 302 . . . variable length decoding unit, 303 . . . inverse quantization unit, 304 . . . inverse transform unit, 306 . . . intra prediction unit, 307 . . . inter prediction unit, 308 . . . addition unit, 309 . . . reference frame memory, 401 . . . prediction direction cost calculation unit, 402 . . . prediction direction variable-length decoding unit, 403 . . . prediction direction prediction decoding unit, 404 . . . intra prediction image creation unit, 405 . . . prediction direction memory, 1501 . . . block type cost calculation unit, 1502 . . . block type variable length coding unit, 1503 . . . block type prediction encoding unit, 1504 . . . variable length coding unit for data other than block type, 1505 . . . block type memory, 1601 . . . block type cost calculation unit, 1602 . . . block type variable length decoding unit, 1603 . . . block type prediction decoding unit, 1604 . . . variable length decoding unit for data other than block type, 1605 . . . block type memory.
Priority application: No. 2008-313879, filed December 2008, Japan (national).
International filing: PCT/JP2009/006476, filed Nov. 30, 2009 (WO), 371(c) date Jun. 8, 2011.