The present invention relates to a video encoding device for and a video encoding method of encoding a moving image with a high degree of efficiency, and a video decoding device for and a video decoding method of decoding an encoded moving image with a high degree of efficiency.
Conventionally, in accordance with an international standard video encoding method, such as MPEG or ITU-T H.26×, after an inputted video frame is partitioned into macroblocks each of which consists of blocks of 16×16 pixels and a motion-compensated prediction is carried out on each of the macroblocks, information compression is carried out on the inputted video frame by carrying out orthogonal transformation and quantization on a prediction error signal on a per block basis. A problem is, however, that as the compression ratio becomes high, the compression efficiency is reduced because of degradation in the quality of a prediction reference image used when carrying out a motion-compensated prediction. To solve this problem, in accordance with an encoding method such as MPEG-4 AVC/H.264 (refer to nonpatent reference 1), a block distortion occurring in a prediction reference image and caused by quantization of orthogonal transformation coefficients is eliminated by carrying out an in-loop deblocking filtering process.
Particularly when carrying out a motion-compensated prediction between frames, a search for a motion vector is performed on each macroblock itself or on each of the subblocks into which the macroblock is further partitioned finely. Then, a motion-compensated prediction image is generated by carrying out a motion-compensated prediction on a reference image signal stored in a memory 107 by using the motion vector, and a prediction error signal is calculated by determining the difference between a prediction signal showing the motion-compensated prediction image and the image signal generated through the partitioning. Further, a predicting unit 102 outputs parameters for prediction signal generation which the predicting unit determines when acquiring the prediction signal to a variable length encoding unit 108. For example, the parameters for prediction signal generation include an intra prediction mode indicating how a spatial prediction is carried out within a frame, and a motion vector indicating an amount of motion between frames.
When receiving the prediction error signal from the predicting unit 102, a compressing unit 103 removes a signal correlation by carrying out a DCT (discrete cosine transform) process on the prediction error signal, and then quantizes this prediction error signal to acquire compressed data. When receiving the compressed data from the compressing unit 103, a local decoding unit 104 calculates a prediction error signal corresponding to the prediction error signal outputted from the predicting unit 102 by inverse-quantizing the compressed data and then carrying out an inverse DCT process on the compressed data.
When receiving the prediction error signal from the local decoding unit 104, an adding unit 105 adds the prediction error signal and the prediction signal outputted from the predicting unit 102 to generate a local decoded image. A loop filter 106 eliminates a block distortion superimposed on a local decoded image signal showing the local decoded image generated by the adding unit 105, and stores the local decoded image signal from which the distortion has been eliminated in a memory 107 as a reference image signal.
When receiving the compressed data from the compressing unit 103, a variable length encoding unit 108 entropy-encodes the compressed data and outputs a bitstream which is the encoded result. When outputting the bitstream, the variable length encoding unit 108 multiplexes the parameters for prediction signal generation outputted from the predicting unit 102 into the bitstream and outputs this bitstream.
In accordance with the encoding method disclosed by the nonpatent reference 1, the loop filter 106 determines a smoothing intensity for neighboring pixels at a boundary between DCT blocks on the basis of information including the granularity of the quantization, the coding mode, the degree of variation in the motion vector, etc., thereby reducing distortions occurring at block boundaries. As a result, the quality of the reference image signal can be improved, and the efficiency of the motion-compensated prediction in subsequent encoding processes can be improved.
In contrast, a problem with the encoding method disclosed by the nonpatent reference 1 is that the amount of high frequency components lost from the signal increases as the compression ratio increases, and this results in excessive smoothing over the entire screen, and hence the video image becomes blurred. In order to solve this problem, the following patent reference 1 proposes a technique of applying a Wiener filter as the loop filter 106, and forming the loop filter 106 in such a way that a squared error distortion between an image signal to be encoded, which is an original image signal, and a reference image signal corresponding to this original image signal is minimized.
The Wiener filter is defined as a filter to be applied to a signal s′, which is the original signal s with superimposed encoding distortion (noise) e, in such a way as to minimize this encoding distortion e in the sense of the squared error distortion. Generally, the filter coefficients w can be determined from the autocorrelation matrix R_{s′s′} of the signal s′ and the cross correlation vector R_{ss′} of the signals s and s′ according to the following equation (1). The sizes of the matrices R_{s′s′} and R_{ss′} correspond to the number of taps of the filter to be determined.
w = R_{s′s′}^{−1} · R_{ss′}    (1)
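As an illustration only, the following Python sketch estimates the autocorrelation matrix R_{s′s′} and the cross correlation vector R_{ss′} from sample signals and solves equation (1) for the filter coefficients w. The one-dimensional window, the tap count, and the boundary handling are simplifying assumptions and are not taken from the patent reference 1.

```python
import numpy as np

def design_wiener_filter(s, s_prime, num_taps=5):
    """Solve w = R_{s's'}^{-1} R_{ss'} for a 1-D FIR Wiener filter."""
    n = len(s_prime) - num_taps + 1
    X = np.stack([s_prime[i:i + num_taps] for i in range(n)])  # windows of s'
    d = s[num_taps // 2 : num_taps // 2 + n]       # aligned original samples
    R = X.T @ X / n                                # autocorrelation matrix R_{s's'}
    r = X.T @ d / n                                # cross correlation vector R_{ss'}
    return np.linalg.solve(R, r)                   # filter coefficients w

# Example: a ramp degraded by smoothing plus noise.
rng = np.random.default_rng(0)
s = np.linspace(0.0, 1.0, 256)
s_prime = np.convolve(s, np.ones(3) / 3, mode="same") + 0.01 * rng.standard_normal(256)
print(design_wiener_filter(s, s_prime))
```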
By applying the Wiener filter having the filter coefficients w, a signal ŝ whose quality is improved is acquired as a signal corresponding to the reference image signal. According to the technique disclosed by the patent reference 1, a region classification (class classification) is carried out on each frame according to the motion information and the local signal characteristics of the image, and an optimal Wiener filter is designed for each class, so that high accuracy distortion compensation according to the locality of the image is implemented.
Because the conventional video encoding device is constructed as above, high accuracy distortion compensation according to the locality of an image to be encoded can be implemented. A problem is, however, that because the class classification is carried out by using a method predetermined according to the motion information and the local signal characteristics of the image, the effect of the filtering process differs greatly depending upon the image to be encoded, and for some images the filtering process exerts hardly any distortion compensation effect.
The present invention is made in order to solve the above-mentioned problem, and it is therefore an object of the present invention to provide a video decoding device, a video encoding device, a video decoding method, and a video encoding method capable of improving the accuracy of improvement of image quality.
In accordance with the present invention, there is provided a video decoding device in which a filter carries out a pixel adaptive offset process of performing a class classification of each pixel in a decoded image generated by a decoded image generator into one class for each coding block having a largest size by using a class classifying method, and adding an offset of each class to the pixel value of each pixel belonging to the above-mentioned class.
In accordance with the present invention, the filter carries out the pixel adaptive offset process of performing a class classification of each pixel in the decoded image generated by the decoded image generator into one class for each coding block having the largest size by using a class classifying method, and adding the offset of each class to the pixel value of each pixel belonging to the above-mentioned class. Therefore, there is provided an advantage of being able to improve the accuracy of improvement of image quality.
(a) is an explanatory drawing showing a distribution of coding blocks and prediction blocks after partitioning.
The encoding controlling unit 2 carries out a process of determining the largest size of each of the coding blocks which is a unit to be processed at a time when a prediction process is carried out, and also determining the upper limit on the number of hierarchical layers at the time that each of the coding blocks having the largest size is hierarchically partitioned into blocks to determine the size of each of the coding blocks. The encoding controlling unit 2 also carries out a process of selecting a coding mode which is applied to each coding block outputted from the block partitioning unit 1 from one or more available coding modes (one or more intra coding modes in which the size or the like of each prediction block which is a unit for prediction process differs and one or more inter coding modes in which the size or the like of each prediction block differs). As an example of the method of selecting a coding mode, there is a method of selecting a coding mode which provides the highest degree of coding efficiency for each coding block outputted from the block partitioning unit 1 from the one or more available coding modes.
The encoding controlling unit 2 also carries out a process of, when the coding mode having the highest degree of coding efficiency is an intra coding mode, determining an intra prediction parameter which is used when carrying out an intra prediction process on the coding block in the intra coding mode for each prediction block which is a prediction unit shown by the above-mentioned intra coding mode, and, when the coding mode having the highest degree of coding efficiency is an inter coding mode, determining an inter prediction parameter which is used when carrying out an inter prediction process on the coding block in the inter coding mode for each prediction block which is a prediction unit shown by the above-mentioned inter coding mode. The encoding controlling unit 2 further carries out a process of determining prediction difference coding parameters which the encoding controlling unit provides for a transformation/quantization unit 7 and an inverse quantization/inverse transformation unit 8. Orthogonal transformation block partitioning information showing information about partitioning into orthogonal transformation blocks which are units for orthogonal transformation process on the coding block and a quantization parameter defining a quantization step size at the time of carrying out quantization on transform coefficients are included in the prediction difference coding parameters. The encoding controlling unit 2 constructs an encoding controller.
A select switch 3 carries out a process of, when the coding mode determined by the encoding controlling unit 2 is an intra coding mode, outputting the coding block outputted from the block partitioning unit 1 to an intra prediction unit 4, and, when the coding mode determined by the encoding controlling unit 2 is an inter coding mode, outputting the coding block outputted from the block partitioning unit 1 to a motion-compensated prediction unit 5.
The intra prediction unit 4 carries out a process of, when an intra coding mode is selected by the encoding controlling unit 2 as the coding mode corresponding to the coding block outputted from the select switch 3, performing an intra prediction process (intra-frame prediction process) on each prediction block, which is a unit for prediction process at the time of carrying out the prediction process on the coding block, by using the intra prediction parameter determined by the encoding controlling unit 2 while referring to a local decoded image stored in a memory 10 for intra prediction to generate an intra prediction image. An intra predictor is comprised of the intra prediction unit 4 and the memory 10 for intra prediction.
The motion-compensated prediction unit 5 carries out a process of, when an inter coding mode is selected by the encoding controlling unit 2 as the coding mode corresponding to the coding block outputted from the select switch 3, comparing the coding block with one or more frames of local decoded image stored in a motion-compensated prediction frame memory 12 for each prediction block which is a unit for prediction process to search for a motion vector, and performing an inter prediction process (motion-compensated prediction process) on each prediction block in the coding block by using the motion vector and the inter prediction parameter, such as a frame number which is determined by the encoding controlling unit 2 and which is referred to, to generate an inter prediction image. A motion-compensated predictor is comprised of the motion-compensated prediction unit 5 and the motion-compensated prediction frame memory 12.
A subtracting unit 6 carries out a process of subtracting the intra prediction image generated by the intra prediction unit 4 or the inter prediction image generated by the motion-compensated prediction unit 5 from the coding block outputted from the block partitioning unit 1, and outputting a prediction difference signal showing a difference image which is the result of the subtraction to the transformation/quantization unit 7. The transformation/quantization unit 7 carries out a process of performing an orthogonal transformation process (e.g., a DCT (discrete cosine transform), a DST (discrete sine transform), or an orthogonal transformation process, such as a KL transform, in which bases are designed for a specific learning sequence in advance) on each orthogonal transformation block of the prediction difference signal outputted from the subtracting unit 6 by referring to the orthogonal transformation block partitioning information included in the prediction difference coding parameters determined by the encoding controlling unit 2 to calculate transform coefficients, and also quantizing the transform coefficients of each orthogonal transformation block by referring to the quantization parameter included in the prediction difference coding parameters and then outputting compressed data which are the transform coefficients quantized thereby to the inverse quantization/inverse transformation unit 8 and a variable length encoding unit 13. The transformation/quantization unit 7 constructs an image compressor.
The inverse quantization/inverse transformation unit 8 carries out a process of inverse-quantizing the compressed data about each orthogonal transformation block outputted from the transformation/quantization unit 7 by referring to the quantization parameter and the orthogonal transformation block partitioning information included in the prediction difference coding parameters determined by the encoding controlling unit 2, and also performing an inverse orthogonal transformation process on the transform coefficients which are the compressed data inverse-quantized thereby to calculate a local decoded prediction difference signal corresponding to the prediction difference signal outputted from the subtracting unit 6. An adding unit 9 carries out a process of adding the local decoded prediction difference signal calculated by the inverse quantization/inverse transformation unit 8 and the intra prediction image generated by the intra prediction unit 4 or the inter prediction image generated by the motion-compensated prediction unit 5 to calculate a local decoded image corresponding to the coding block outputted from the block partitioning unit 1. A decoded image generator is comprised of the inverse quantization/inverse transformation unit 8 and the adding unit 9. The memory 10 for intra prediction is a recording medium for storing the local decoded image calculated by the adding unit 9.
A loop filter unit 11 carries out a process of performing a predetermined filtering process on the local decoded image calculated by the adding unit 9 to output the local decoded image filtering-processed thereby. Concretely, the loop filter unit performs a filtering (deblocking filtering) process of reducing a distortion occurring at a boundary between orthogonal transformation blocks and a distortion occurring at a boundary between prediction blocks, a process (pixel adaptive offset process) of adaptively adding an offset on a per pixel basis, an adaptive filtering process of adaptively switching among linear filters, such as Wiener filters, to perform the filtering process, and so on. The loop filter unit 11 can be constructed in such a way as to perform one of the above-mentioned deblocking filtering process, the above-mentioned pixel adaptive offset process, and the above-mentioned adaptive filtering process, or perform two or more of the processes, as shown in
In the pixel adaptive offset process, first, the case of not carrying out the offset process is defined as one class classifying method, and one class classifying method is selected for each largest coding block from among a plurality of class classifying methods which are prepared in advance. Next, by using the selected class classifying method, each pixel included in the block is classified into one of classes, and an offset minimizing the sum of squared errors of the brightness values between the image to be encoded and the local decoded image is calculated for each of the classes. Finally, the process of adding the offset to the brightness value of the local decoded image is carried out, thereby improving the image quality of the local decoded image. Therefore, in the pixel adaptive offset process, an index indicating the class classifying method for each largest coding block and the offset provided for each of the classes of each largest coding block are outputted to the variable length encoding unit 13 as a part of the filter parameters.
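The selection just described can be sketched as follows (Python, illustrative only): the case of not carrying out the offset process is candidate 0, each prepared class classifying method is evaluated, and the least-squares offset of each class is the mean difference between the image to be encoded and the local decoded image within that class. The candidate list, the function names, and the squared-error criterion over the block are assumptions for illustration, not the normative procedure.

```python
import numpy as np

def best_offsets(src, rec, labels, num_classes):
    """Least-squares offset per class: the mean of (src - rec) over the class."""
    offsets = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            offsets[c] = (src[mask] - rec[mask]).mean()
    return offsets

def select_sao_method(src_block, rec_block, classifiers):
    """classifiers: list of (num_classes, classify_fn); index 0 = no offset."""
    best_idx, best_offs = 0, None
    best_err = ((src_block - rec_block) ** 2).sum()
    for idx, (num_classes, classify) in enumerate(classifiers, start=1):
        labels = classify(rec_block)                 # one class per pixel
        offs = best_offsets(src_block, rec_block, labels, num_classes)
        err = ((src_block - (rec_block + offs[labels])) ** 2).sum()
        if err < best_err:
            best_idx, best_offs, best_err = idx, offs, err
    # The chosen index and offsets are signalled as part of the filter parameters.
    return best_idx, best_offs
```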
Further, in the adaptive filtering process, as shown in
Further, as the method of using both the pixel adaptive offset process and the adaptive filtering process, instead of using the structure shown in
The motion-compensated prediction frame memory 12 is a recording medium for storing the local decoded image on which the filtering process is performed by the loop filter unit 11. The variable length encoding unit 13 carries out a process of variable-length-encoding the compressed data outputted thereto from the transformation/quantization unit 7, the output signal of the encoding controlling unit 2 (the block partitioning information about the partition within each largest coding block, the coding mode, the prediction difference coding parameters, and the intra prediction parameter or the inter prediction parameter), the motion vector outputted from the motion-compensated prediction unit 5 (when the coding mode is an inter coding mode), and the filter parameters outputted from the loop filter unit 11 to generate a bitstream. The variable length encoding unit 13 constructs a variable length encoder.
In the example shown in
An inverse quantization/inverse transformation unit 32 carries out a process of inverse-quantizing the compressed data variable-length-decoded by the variable length decoding unit 31 for each orthogonal transformation block by referring to the quantization parameter and the orthogonal transformation block partitioning information which are included in the prediction difference coding parameters variable-length-decoded by the variable length decoding unit 31, and also performing an inverse orthogonal transformation process on the transform coefficients which are the compressed data inverse-quantized thereby to calculate a decoded prediction difference signal which is the same as the local decoded prediction difference signal outputted from the inverse quantization/inverse transformation unit 8 shown in
A select switch 33 carries out a process of, when the coding mode variable-length-decoded by the variable length decoding unit 31 is an intra coding mode, outputting the intra prediction parameter variable-length-decoded by the variable length decoding unit 31 to an intra prediction unit 34, and, when the coding mode variable-length-decoded by the variable length decoding unit 31 is an inter coding mode, outputting the inter prediction parameter and the motion vector which are variable-length-decoded by the variable length decoding unit 31 to a motion compensation unit 35.
The intra prediction unit 34 carries out a process of, when the coding mode associated with the decoding block specified from the block partitioning information variable-length-decoded by the variable length decoding unit 31 is an intra coding mode, performing an intra prediction process (intra-frame prediction process) using the intra prediction parameter outputted from the select switch 33 on each prediction block, which is a unit for prediction process at the time of carrying out the prediction process on the decoding block, while referring to a decoded image stored in a memory 37 for intra prediction to generate an intra prediction image. An intra predictor is comprised of the intra prediction unit 34 and the memory 37 for intra prediction.
The motion compensation unit 35 carries out a process of, when the coding mode associated with the decoding block specified from the block partitioning information variable-length-decoded by the variable length decoding unit 31 is an inter coding mode, performing an inter prediction process (motion-compensated prediction process) using the motion vector and the inter prediction parameter which are outputted from the select switch 33 on each prediction block, which is a unit for prediction process at the time of carrying out the prediction process on the above-mentioned decoding block while referring to a decoded image stored in a motion-compensated prediction frame memory 39 to generate an inter prediction image. A motion-compensated predictor is comprised of the motion compensation unit 35 and the motion-compensated prediction frame memory 39.
An adding unit 36 carries out a process of adding the decoded prediction difference signal calculated by the inverse quantization/inverse transformation unit 32 and the intra prediction image generated by the intra prediction unit 34 or the inter prediction image generated by the motion compensation unit 35 to calculate a decoded image which is the same as the local decoded image outputted from the adding unit 9 shown in
The loop filter unit 38 carries out a process of performing a predetermined filtering process on the decoded image calculated by the adding unit 36 to output the decoded image filtering-processed thereby. Concretely, the loop filter unit performs a filtering (deblocking filtering) process of reducing a distortion occurring at a boundary between orthogonal transformation blocks or a distortion occurring at a boundary between prediction blocks, a process (pixel adaptive offset process) of adaptively adding an offset on a per pixel basis, an adaptive filtering process of adaptively switching among linear filters, such as Wiener filters, to perform the filtering process, and so on. Although the loop filter unit 38 can be constructed in such a way as to perform one of the above-mentioned deblocking filtering process, the above-mentioned pixel adaptive offset process, and the above-mentioned adaptive filtering process, or perform two or more of the processes, as shown in
In the pixel adaptive offset process, the index indicating the class classifying method set for each largest decoding block, the index being included in the filter parameters variable-length-decoded by the variable length decoding unit 31, is referred to, and, when the index does not show "not carrying out the offset process", each pixel within each largest decoding block is classified into one of classes. Next, a process of adding the offset set for each class included in the filter parameters variable-length-decoded by the variable length decoding unit 31 to the brightness value of each pixel belonging to the class is carried out, thereby improving the image quality of the decoded image. Further, in the adaptive filtering process, the filter parameters variable-length-decoded by the variable length decoding unit 31 (the number of classes (groups), i.e., the number of filters, the filter for the decoded image belonging to each class (group), and the class number (filter number) which is identification information indicating the class (group) of each largest decoding block) are referred to, and, when the class (group) indicated by the class number does not show "no filtering process", the filtering process is carried out on the decoded image belonging to the class (group) by using the filter corresponding to the class number. The motion-compensated prediction frame memory 39 is a recording medium for storing the decoded image on which the filtering process is carried out by the loop filter unit 38.
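A decoder-side counterpart of the pixel adaptive offset process can be sketched under the same assumptions as the encoder-side sketch given earlier: the index and the per-class offsets are taken from the variable-length-decoded filter parameters, and the class classification must reproduce the encoder's classification exactly.

```python
import numpy as np

def apply_sao(rec_block, method_idx, offsets, classifiers):
    """Add the decoded per-class offset to each pixel of its class."""
    if method_idx == 0:          # index shows "not carrying out the offset process"
        return rec_block
    _, classify = classifiers[method_idx - 1]
    labels = classify(rec_block)       # must match the encoder's classification
    return rec_block + np.asarray(offsets)[labels]
```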
Further, in a case in which the loop filter unit 11 of the video encoding device is constructed in such a way as to optimally select either the pixel adaptive offset process or the adaptive filtering process for each largest coding block as the method of using both the pixel adaptive offset process and the adaptive filtering process while reducing the amount of computations of the entire filtering process of the loop filter unit 38, the selection information included in the filter parameters variable-length-decoded by the variable length decoding unit 31 and indicating which one of the pixel adaptive offset process on each largest decoding block and the adaptive filtering process on each largest decoding block is carried out is referred to and the pixel adaptive offset process or the adaptive filtering process is carried out on each largest decoding block.
In the example shown in
Next, the operations of the video encoding and decoding devices will be explained. In this Embodiment 1, a case in which the video encoding device receives each frame of a video as an inputted image, carries out an intra prediction from already-encoded neighborhood pixels or a motion-compensated prediction between adjacent frames, and performs a compression process with orthogonal transformation and quantization on an acquired prediction difference signal, and, after that, carries out variable length encoding to generate a bitstream, and the video decoding device decodes the bitstream outputted from the video encoding device will be explained.
The video encoding device shown in
Although a process of generating a prediction difference signal having small signal power and small entropy by using a temporal and spatial prediction, thereby reducing the whole code amount, is carried out in the encoding process, the code amount of the parameters used for the prediction can be reduced as long as the parameters can be applied uniformly to as large an image signal region as possible. On the other hand, because the amount of errors occurring in the prediction increases when the same prediction parameter is applied to a large image region in an image signal pattern having a large change in time and space, the code amount of the prediction difference signal increases. Therefore, for an image region having a large change in time and space, it is desirable to reduce the block size of a block which is subjected to the prediction process, thereby increasing the data volume of the parameters used for the prediction and reducing the power and entropy of the prediction difference signal.
In this Embodiment 1, in order to carry out encoding which is adapted for such typical characteristics of a video signal, a structure is provided which starts the prediction process and so on from a predetermined largest block size, hierarchically partitions the region of the video signal into blocks, and adapts the prediction process and the encoding process of encoding the prediction difference to each of the partitioned blocks.
A video signal having a format which is to be processed by the video encoding device shown in
In the following explanation, for the sake of convenience, a case will be described in which, unless otherwise specified, the video signal of the inputted image is a YUV signal having the 4:2:0 format, in which the two color difference components U and V are subsampled with respect to the luminance component Y. Further, a data unit to be processed which corresponds to each frame of the video signal is referred to as a "picture." In this Embodiment 1, although an explanation will be made in which a "picture" is a video frame signal on which progressive scanning is carried out, a "picture" can alternatively be a field image signal which is a unit which constructs a video frame when the video signal is an interlaced signal.
First, the processing carried out by the video encoding device shown in
The encoding controlling unit 2 also selects a coding mode corresponding to each of the coding blocks into which the inputted image is hierarchically partitioned from one or more available coding modes (step ST2). More specifically, the encoding controlling unit 2 hierarchically partitions each image region having the largest coding block size into coding blocks until the number of partitioned hierarchical layers reaches the predetermined upper limit on the number of hierarchical layers, and determines a coding mode for each of the coding blocks. The coding mode can be one of one or more intra coding modes (generically referred to as "INTRA") and one or more inter coding modes (generically referred to as "INTER"), and the encoding controlling unit 2 selects a coding mode corresponding to each of the coding blocks from among all the coding modes available in the picture currently being processed or a subset of these coding modes.
Each of the coding blocks into which the inputted image is hierarchically partitioned by the block partitioning unit 1, which will be mentioned below, is further partitioned into one or more prediction blocks each of which is a unit on which a prediction process is to be carried out, and the state of the partitioning into the one or more prediction blocks is also included as information in the coding mode information. More specifically, the coding mode information is an index identifying either an intra coding mode or an inter coding mode and what type of partitioning into prediction blocks the coding mode has. Although a detailed explanation of a selection method of selecting a coding mode for use in the encoding controlling unit 2 will be omitted hereafter because the selection method is a known technique, there is a method of carrying out an encoding process on each coding block by using arbitrary available coding modes to examine the encoding efficiency, and selecting a coding mode having the highest degree of coding efficiency from among the plurality of available coding modes, for example.
The encoding controlling unit 2 further determines a quantization parameter and an orthogonal transformation block partitioning state, which are used when a difference image is compressed, for each coding block, and also determines a prediction parameter (an intra prediction parameter or an inter prediction parameter) which is used when a prediction process is carried out. When each coding block is further partitioned into prediction blocks on each of which the prediction process is carried out, the encoding controlling unit can select a prediction parameter (an intra prediction parameter or an inter prediction parameter) for each of the prediction blocks. In addition, because already-encoded pixels adjacent to each prediction block are used when an intra prediction process is carried out on each prediction block in a coding block whose coding mode is an intra coding mode, it is necessary to carry out encoding on a per prediction block basis, and therefore selectable transformation block sizes are limited to the size of the prediction block or less, as will be described in detail below.
The encoding controlling unit 2 outputs prediction difference coding parameters including the quantization parameter and the transformation block size to the transformation/quantization unit 7, the inverse quantization/inverse transformation unit 8, and the variable length encoding unit 13. The encoding controlling unit 2 also outputs the intra prediction parameter to the intra prediction unit 4 as needed. The encoding controlling unit 2 further outputs the inter prediction parameter to the motion-compensated prediction unit 5 as needed.
When receiving the video signal as the inputted image, the block partitioning unit 1 partitions the inputted image into coding blocks each having the largest coding block size determined by the encoding controlling unit 2, and further partitions each of the largest coding blocks, into which the inputted image is partitioned, into coding blocks hierarchically, these coding blocks being determined by the encoding controlling unit 2, and outputs each of the coding blocks.
Hereafter, the coding block size determined by the encoding controlling unit 2 is defined as the size (L^n, M^n) in the luminance component of each coding block. Because quadtree partitioning is carried out, (L^{n+1}, M^{n+1}) = (L^n/2, M^n/2) is always established. In the case of a color video image signal (4:4:4 format) in which all the color components have the same sample number, such as an RGB signal, all the color components have the size of (L^n, M^n), while in the case of handling the 4:2:0 format, a corresponding color difference component has a coding block size of (L^n/2, M^n/2).
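The size relation (L^{n+1}, M^{n+1}) = (L^n/2, M^n/2) can be illustrated with a short sketch; the initial 64×64 largest coding block size and the depth of 3 are assumed values for the example, not values fixed by this Embodiment 1.

```python
def coding_block_sizes(l0=64, m0=64, max_depth=3):
    """List luma and 4:2:0 chroma block sizes for each hierarchical layer n."""
    sizes, l, m = [], l0, m0
    for n in range(max_depth + 1):
        sizes.append({"layer": n, "luma": (l, m), "chroma_4_2_0": (l // 2, m // 2)})
        l, m = l // 2, m // 2            # (L^{n+1}, M^{n+1}) = (L^n/2, M^n/2)
    return sizes

for s in coding_block_sizes():
    print(s)   # e.g. layer 0: luma (64, 64), chroma (32, 32)
```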
Hereafter, each coding block in the nth hierarchical layer is expressed as B^n, and a coding mode selectable for each coding block B^n is expressed as m(B^n). In the case of a color video signal which consists of a plurality of color components, the coding mode m(B^n) can be configured in such a way that an individual mode is used for each color component, or can be configured in such a way that a common mode is used for all the color components. Hereafter, an explanation will be made by assuming that, unless otherwise specified, the coding mode indicates the one for the luminance component of a coding block of a YUV signal having the 4:2:0 format.
Each coding block B^n is partitioned into one or more prediction blocks each showing a prediction unit by the block partitioning unit 1, as shown in
The encoding controlling unit 2 generates such a block partitioning state as shown in
When the coding mode m(B^n) determined by the encoding controlling unit 2 is an intra coding mode (in the case of m(B^n) ∈ INTRA), the select switch 3 outputs the coding block B^n outputted from the block partitioning unit 1 to the intra prediction unit 4. In contrast, when the coding mode m(B^n) determined by the encoding controlling unit 2 is an inter coding mode (in the case of m(B^n) ∈ INTER), the select switch outputs the coding block B^n outputted from the block partitioning unit 1 to the motion-compensated prediction unit 5.
When the coding mode m(B^n) determined by the encoding controlling unit 2 is an intra coding mode (in the case of m(B^n) ∈ INTRA), and the intra prediction unit 4 receives the coding block B^n from the select switch 3 (step ST3), the intra prediction unit 4 carries out the intra prediction process on each prediction block P_i^n in the coding block B^n by using the intra prediction parameter determined by the encoding controlling unit 2 while referring to the local decoded image stored in the memory 10 for intra prediction to generate an intra prediction image P_INTRAi^n (step ST4). Because the video decoding device needs to generate an intra prediction image which is completely the same as the intra prediction image P_INTRAi^n, the intra prediction parameter used for the generation of the intra prediction image P_INTRAi^n is outputted from the encoding controlling unit 2 to the variable length encoding unit 13, and is multiplexed into the bitstream. The details of the processing carried out by the intra prediction unit 4 will be mentioned below.
When the coding mode m(B^n) determined by the encoding controlling unit 2 is an inter coding mode (in the case of m(B^n) ∈ INTER), and the motion-compensated prediction unit 5 receives the coding block B^n from the select switch 3 (step ST3), the motion-compensated prediction unit 5 compares each prediction block P_i^n in the coding block B^n with the local decoded image which is stored in the motion-compensated prediction frame memory 12 and on which the filtering process is performed to search for a motion vector, and carries out the inter prediction process on each prediction block P_i^n in the coding block B^n by using both the motion vector and the inter prediction parameter determined by the encoding controlling unit 2 to generate an inter prediction image P_INTERi^n (step ST5). Because the video decoding device needs to generate an inter prediction image which is completely the same as the inter prediction image P_INTERi^n, the inter prediction parameter used for the generation of the inter prediction image P_INTERi^n is outputted from the encoding controlling unit 2 to the variable length encoding unit 13 and is multiplexed into the bitstream. The motion vector which is searched for by the motion-compensated prediction unit 5 is also outputted to the variable length encoding unit 13 and is multiplexed into the bitstream.
When receiving the coding block B^n from the block partitioning unit 1, the subtracting unit 6 subtracts the intra prediction image P_INTRAi^n generated by the intra prediction unit 4 or the inter prediction image P_INTERi^n generated by the motion-compensated prediction unit 5 from the prediction block P_i^n in the coding block B^n, and outputs a prediction difference signal e_i^n showing a difference image which is the result of the subtraction to the transformation/quantization unit 7 (step ST6).
When receiving the prediction difference signal e_i^n from the subtracting unit 6, the transformation/quantization unit 7 refers to the orthogonal transformation block partitioning information included in the prediction difference coding parameters determined by the encoding controlling unit 2, and carries out an orthogonal transformation process (e.g., a DCT (discrete cosine transform), a DST (discrete sine transform), or an orthogonal transformation process, such as a KL transform, in which bases are designed for a specific learning sequence in advance) on each orthogonal transformation block of the prediction difference signal e_i^n to calculate transform coefficients. The transformation/quantization unit 7 also refers to the quantization parameter included in the prediction difference coding parameters to quantize the transform coefficients of each orthogonal transformation block, and outputs compressed data which are the transform coefficients quantized thereby to the inverse quantization/inverse transformation unit 8 and the variable length encoding unit 13 (step ST7).
When receiving the compressed data from the transformation/quantization unit 7, the inverse quantization/inverse transformation unit 8 refers to the quantization parameter and the orthogonal transformation block partitioning information which are included in the prediction difference coding parameters determined by the encoding controlling unit 2 to inverse-quantize the compressed data about each orthogonal transformation block. The inverse quantization/inverse transformation unit 8 carries out an inverse orthogonal transformation process (e.g., an inverse DCT, an inverse DST, an inverse KL transform, or the like) on the transform coefficients which are the compressed data inverse-quantized for each orthogonal transformation block, and calculates a local decoded prediction difference signal corresponding to the prediction difference signal e_i^n outputted from the subtracting unit 6 and outputs the local decoded prediction difference signal to the adding unit 9 (step ST8).
When receiving the local decoded prediction difference signal from the inverse quantization/inverse transformation unit 8, the adding unit 9 calculates a local decoded image by adding the local decoded prediction difference signal and either the intra prediction image P_INTRAi^n generated by the intra prediction unit 4 or the inter prediction image P_INTERi^n generated by the motion-compensated prediction unit 5 (step ST9). The adding unit 9 outputs the local decoded image to the loop filter unit 11 while storing the local decoded image in the memory 10 for intra prediction. This local decoded image is an encoded image signal which is used at the time of subsequent intra prediction processes.
When receiving the local decoded image from the adding unit 9, the loop filter unit 11 carries out the predetermined filtering process on the local decoded image, and stores the local decoded image filtering-processed thereby in the motion-compensated prediction frame memory 12 (step ST10). The details of the process carried out by the loop filter unit 11 will be mentioned below.
The video encoding device repeatedly carries out the processes of steps ST3 to ST9 until the video encoding device completes the processing on all the coding blocks B^n into which the inputted image is partitioned hierarchically, and, when completing the processing on all the coding blocks B^n, shifts to a process of step ST13 (steps ST11 and ST12).
The variable length encoding unit 13 entropy-encodes the compressed data outputted thereto from the transformation/quantization unit 7, the block partitioning information about the partitioning of each largest coding block into blocks, which is outputted from the encoding controlling unit 2 (the quadtree information which is shown in
Next, the process carried out by the intra prediction unit 4 will be explained in detail.
The intra prediction unit 4 carries out the intra prediction process on each prediction block P_i^n by referring to the intra prediction parameter of the prediction block P_i^n to generate an intra prediction image P_INTRAi^n, as mentioned above. Hereafter, an intra process of generating an intra prediction signal of a prediction block P_i^n in the luminance signal will be explained.
It is assumed that the size of the prediction block P_i^n is l_i^n × m_i^n pixels.
When the index value indicating the intra prediction mode for the prediction block P_i^n is 0 (planar prediction), the intra prediction unit uses already-encoded pixels adjacent to the top of the prediction block P_i^n and already-encoded pixels adjacent to the left of the prediction block P_i^n to determine a value interpolated according to the distance between these pixels and the target pixel to be predicted in the prediction block P_i^n as a predicted value, and generates a prediction image. Further, when the index value indicating the intra prediction mode for the prediction block P_i^n is 2 (average (DC) prediction), the intra prediction unit determines the average of the already-encoded pixels adjacent to the top of the prediction block P_i^n and the already-encoded pixels adjacent to the left of the prediction block P_i^n as the predicted value of each pixel in the prediction block P_i^n and generates a prediction image. When the index value indicating the intra prediction mode is other than 0 (planar prediction) and 2 (average prediction), the intra prediction unit generates a predicted value of each pixel in the prediction block P_i^n on the basis of a prediction direction vector v_p = (dx, dy) shown by the index value. As shown in
where k is a negative scalar value.
When a reference pixel is at an integer pixel position, the value of the corresponding integer pixel is determined as the predicted value of the target pixel to be predicted, whereas when a reference pixel is not at an integer pixel position, the value of an interpolation pixel generated from the integer pixels which are adjacent to the reference pixel is determined as the predicted value of the target pixel to be predicted. In the example shown in
Through the process described above, the intra prediction unit generates prediction pixels for all the pixels of the luminance signal in the prediction block P_i^n, and outputs an intra prediction image P_INTRAi^n. The intra prediction parameter (intra prediction mode) used for the generation of the intra prediction image P_INTRAi^n is outputted to the variable length encoding unit 13 in order to multiplex the intra prediction parameter into the bitstream.
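For orientation, the following simplified sketch illustrates the average (DC) prediction and a directional prediction with linear interpolation at fractional reference positions, as described above. It assumes a single row of already-encoded pixels above the block and a column to its left, and it projects only onto the top reference row; the actual process also uses the left reference column and the planar mode, so this is not the normative prediction process of this Embodiment 1.

```python
import numpy as np

def predict_dc(top, left, size):
    """Index 2 (average prediction): mean of the already-encoded pixels
    adjacent to the top and to the left of the prediction block."""
    mean = (np.sum(top[:size]) + np.sum(left[:size])) / (2 * size)
    return np.full((size, size), mean)

def predict_directional(top, size, dx, dy):
    """Directional prediction along v_p = (dx, dy) with dy > 0, projecting
    each target pixel back onto the top reference row (y = -1)."""
    pred = np.empty((size, size))
    for y in range(size):
        for x in range(size):
            k = -(y + 1) / dy            # negative scalar k: y + k*dy = -1
            ref = x + k * dx             # possibly fractional reference position
            ref = min(max(ref, 0.0), len(top) - 1.0)
            i0 = int(ref)
            frac = ref - i0
            i1 = min(i0 + 1, len(top) - 1)
            # Integer position: the pixel value is used directly; fractional
            # position: linear interpolation between the adjacent integer pixels.
            pred[y, x] = (1 - frac) * top[i0] + frac * top[i1]
    return pred
```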
Like in the case of performing a smoothing process on a reference image at the time of carrying out an intra prediction on an 8×8-pixel block in an image which complies with MPEG-4 AVC/H.264 explained previously, even in a case in which an already-encoded pixel adjacent to the prediction block P_i^n on which a smoothing process has been carried out is provided as the reference pixel at the time of generating an intermediate prediction image of the prediction block P_i^n, the intra prediction unit 4 can carry out the same filtering process as in the above-mentioned example on the intermediate prediction image.
The intra prediction unit also carries out an intra prediction process based on the intra prediction parameter (intra prediction mode) on each of the color difference signals of the prediction block P_i^n according to the same procedure as that according to which the intra prediction unit carries out the intra prediction process on the luminance signal, and outputs the intra prediction parameter used for the generation of the intra prediction image to the variable length encoding unit 13. However, selectable intra prediction parameters (intra prediction modes) for each of the color difference signals can differ from those for the luminance signal. For example, in the case of the YUV 4:2:0 format, each of the color difference signals (U and V signals) is the one whose resolution is reduced to one-half that of the luminance signal (Y signal) both in a horizontal direction and in a vertical direction, and the complexity of each of the color difference signals is lower than that of the luminance signal, and hence a prediction can be carried out on each of the color difference signals more easily than on the luminance signal. Therefore, by reducing the number of selectable intra prediction parameters (intra prediction modes) for each of the color difference signals to be smaller than that for the luminance signal, a reduction in the code amount required to encode the intra prediction parameter (intra prediction mode) and a reduction in the amount of computations required to carry out the prediction process can be implemented.
Next, the process carried out by the loop filter unit 11 will be explained in detail. The loop filter unit 11 carries out a process of performing a predetermined filtering process on the local decoded image calculated by the adding unit 9 and outputting the local decoded image filtering-processed thereby. Concretely, the loop filter unit carries out a filtering (deblocking filtering) process of reducing a distortion occurring at a boundary between orthogonal transformation blocks and a distortion occurring at a boundary between prediction blocks, a process (pixel adaptive offset process) of adaptively adding an offset on a per pixel basis, an adaptive filtering process of adaptively switching among linear filters, such as Wiener filters, and performing the filtering process, and so on. The loop filter unit 11 can be constructed in such a way as to perform one of the above-mentioned deblocking filtering process, the above-mentioned pixel adaptive offset process, and the above-mentioned adaptive filtering process, or perform two or more of the processes, as shown in
In the pixel adaptive offset process, first, each pixel included in each largest coding block is classified into one of classes by using a predetermined method, and an offset minimizing the sum of squared errors of the brightness values between the image to be encoded and the local decoded image is calculated for each of the classes. Then, the process of adding the offset determined for each of the classes to the brightness value of each pixel belonging to the class (each pixel of the local decoded image) is carried out, thereby improving the image quality of the local decoded image.
As the predetermined method of carrying out the class classification, there are a method of classifying each pixel into one of classes according to the brightness value of the local decoded image, and a method of classifying each pixel into one of classes according to the state of a neighboring region around the pixel (e.g., whether or not the neighboring region is an edge portion) for each of the directions of edges as shown in
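The two families of class classifying methods mentioned above can be sketched as follows; the band count of 32 and the three-way valley/flat/peak categories for a single (horizontal) edge direction are illustrative assumptions rather than definitions taken from this Embodiment 1. Either function could serve as a classify_fn in the earlier selection sketch.

```python
import numpy as np

def classify_by_brightness(rec, num_bands=32, max_val=255):
    """Method 1: each pixel falls into one of num_bands equal brightness bands."""
    return np.minimum(rec.astype(int) * num_bands // (max_val + 1), num_bands - 1)

def classify_by_horizontal_edge(rec):
    """Method 2 (one edge direction): compare each pixel with its horizontal
    neighbours. 0: local minimum, 1: flat or monotonic, 2: local maximum.
    Border pixels are handled by wrap-around here purely for brevity."""
    left = np.roll(rec, 1, axis=1)
    right = np.roll(rec, -1, axis=1)
    labels = np.ones(rec.shape, dtype=int)
    labels[(rec < left) & (rec < right)] = 0
    labels[(rec > left) & (rec > right)] = 2
    return labels
```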
Next, the adaptive filtering process will be explained concretely.
The filter selection and the filtering process shown in steps ST102 to ST106 will be explained. First, a cost C_0 in the case of not carrying out the filtering process is calculated as the cost for filter number 0 for the largest coding block which is the current target to be processed (step ST102). As the definition of the cost, one example is shown below:
Cost=D+λ·R (3)
where D is the sum of squared errors between the image signal to be encoded within the block and the local decoded image signal on which the filtering process is carried out, and λ is a constant. Further, R is the code amount required when encoding the filter number for selecting the filter to be used, and the coefficients of the filter corresponding to the filter number.
In this embodiment, although the cost is expressed by the equation (3), the cost can alternatively be given only by the sum of squared errors D, for example. Although the accuracy of the cost drops when the cost is defined in this way, the load of calculating the cost decreases. As an alternative, D can be the sum of absolute errors, which has a lower arithmetic load, instead of the sum of squared errors. Which equation is used to define the cost can be determined according to the arithmetic load acceptable to the video encoding device. When calculating the cost C_0 in the case of not carrying out the filtering process, D is the sum of squared errors between the image signal to be encoded and the local decoded image signal on which no filtering process is carried out, and the code amount R required for the filter is 0.
Next, a Wiener filter minimizing the sum of squared errors between the image signal to be encoded and the local decoded image signal is calculated according to the equation (1) with each pixel in the largest coding block which is the target to be processed being set as a target (step ST103), and a cost C_new in the case of carrying out the filtering process on each pixel within the block is calculated (step ST104). In this case, the filtering process using the Wiener filter is expressed by the following equation (4).
S′(x_0) = a_0·S(x_0) + a_1·S(x_1) + … + a_{L−1}·S(x_{L−1}) + a_L    (4)
where x_0 shows the target pixel to be filtered, x_l (l = 0, 1, 2, …, L−1) shows the reference pixels of the filter (including the target pixel to be filtered), and L shows the number of reference pixels of the filter. Further, S(x) shows the brightness value of the pixel x, S′(x) shows the brightness value of the pixel x on which the filtering process is performed, and a_m (m = 0, 1, 2, …, L) shows the filter coefficients. a_L shows an offset coefficient, and the offset coefficient can always be disabled by setting a_L to 0. By doing so, although the filter performance drops, the number of filter coefficients to be encoded can be reduced, and the code amount required to encode the filter coefficients can be reduced.
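Equation (4) can be illustrated directly. In the sketch below, the reference pixel pattern is assumed to be a one-dimensional window centred on the target pixel, and the last entry of the coefficient array is the offset coefficient a_L.

```python
import numpy as np

def wiener_filter_pixel(S, x0, coeffs):
    """coeffs = [a_0, ..., a_{L-1}, a_L]; the last entry is the offset a_L.
    Reference pixels x_l form a window centred on the target pixel x_0."""
    a, offset = np.asarray(coeffs[:-1], dtype=float), float(coeffs[-1])
    L = len(a)
    refs = np.array([S[min(max(x0 - L // 2 + l, 0), len(S) - 1)] for l in range(L)])
    return float(a @ refs + offset)      # S'(x_0) as in equation (4)

# Setting a_L to 0 (and not encoding it) disables the offset term, trading a
# small drop in filter performance for fewer coefficients to encode.
```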
In addition, filters which can be used other than the filter designed for the largest coding block which is the current target to be processed (filters whose filter numbers are 1, 2, …, and numbl, respectively) are used to calculate the costs C_1, C_2, …, and C_numbl in the case of carrying out the filtering process (step ST105). Here, numbl shows the number of filters, other than the filter designed for the largest coding block which is the current target to be processed, which can be used. As examples of the filters, other than the filter designed for the largest coding block which is the current target to be processed, which can be used, there can be provided a filter used for a largest coding block on which the filtering process has already been carried out, filters prepared in advance before the encoding process is carried out, and so on.
Next, the smallest cost is specified from among the costs C_new, C_0, C_1, C_2, …, and C_numbl which have been calculated so far, and the filtering process associated with the smallest cost is determined as the filtering process for the largest coding block which is the current target to be processed (step ST106). At that time, when the determined filtering process is the filter designed for the largest coding block which is the current target to be processed (cost C_new), this designed filter is set as a filter with filter number numbl+1 which can be used for the largest coding block which is the next target to be processed, and the number numbl of usable filters is incremented by one. The video encoding device then shifts to a selection of a filter for the largest coding block which is the next target to be processed, and the filtering process on this coding block (step ST107). The filter number which is set as mentioned above is defined as the class number, and the number of filters is referred to as the number of classes.
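The selection in steps ST102 to ST106 can be sketched as follows using the cost of equation (3). The rate function, the filter design routine (for example, the Wiener design sketch given earlier), and the handling of the candidate filter list are assumptions for illustration.

```python
import numpy as np

def sse(a, b):
    """D: sum of squared errors between two image arrays."""
    return float(((a - b) ** 2).sum())

def select_filter(src, rec, candidates, apply_fn, design_fn, lam, rate_fn):
    """Pick the minimum-cost filtering for one largest coding block."""
    best_no, best_cost = 0, sse(src, rec)          # C_0: no filtering, R = 0
    new_filt = design_fn(src, rec)                 # Wiener filter for this block
    # C_new: newly designed filter (rate covers filter number + coefficients).
    cost_new = sse(src, apply_fn(rec, new_filt)) + lam * rate_fn(new_filt)
    if cost_new < best_cost:
        best_no, best_cost = len(candidates) + 1, cost_new
    # C_1 .. C_numbl: filters already available for reuse; in practice their
    # rate would cover only the filter number, simplified here via rate_fn.
    for no, filt in enumerate(candidates, start=1):
        cost = sse(src, apply_fn(rec, filt)) + lam * rate_fn(filt)
        if cost < best_cost:
            best_no, best_cost = no, cost
    if best_no == len(candidates) + 1:             # new filter chosen:
        candidates.append(new_filt)                # it becomes filter numbl+1
    return best_no, best_cost
```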
The video encoding device carries out the above-mentioned process on all the largest coding blocks, and finally compares the case of not carrying out the adaptive filtering process on the entire picture with the case of carrying out the adaptive filtering process determined as mentioned above on each largest coding block, and selects the better of the two cases. Therefore, in the adaptive filtering process, the enable or disable information indicating whether or not the adaptive filtering process at the picture level is enabled, and, when the enable or disable information indicates that "the process is enabled", the number of classes, the class number of each largest coding block including "no filtering process", and the filter coefficients of each class are outputted to the variable length encoding unit 13 as a part of the filter parameters.
As an example of the adaptive filtering process, besides the example shown in
As compared with the example shown in
Further, although an optimal filtering process is determined for each largest coding block in the above-mentioned example, when a region smaller than a largest coding block is considered, there is a possibility that the filtering process determined for the block increases the sum of squared errors of the brightness values between the image to be encoded and the local decoded image within that region. Therefore, as shown in
In addition, although a single optimal filtering process is determined for each largest coding block in the above-mentioned example, the smaller size blocks within each largest coding block can alternatively be classified into groups according to their local characteristics (the degree of scatter of the brightness values, such as the variance, and the directions of edges) or the like, a filter can be designed for each of the groups, and a filtering process can be carried out on each of the groups. By doing so, although the code amount required for the filters increases because it is necessary to encode a number of filters equal to the number of groups for each largest coding block, a high-accuracy filtering process can be implemented because an optimal filter can be used for each of the smaller size regions. In the case in which the video encoding device is constructed in such a way as to carry out this process, the video decoding device also needs to be constructed in such a way as to carry out the same group classification as that carried out by the video encoding device.
Further, as the method of using both the pixel adaptive offset process and the adaptive filtering process, instead of using the structure shown in
Further, although the example of carrying out a selection of a class classifying method in the pixel adaptive offset process and a selection of a class in the adaptive filtering process for each largest coding block is explained as above, the selections can be alternatively carried out for each block having an arbitrary block size. By doing this way, when encoding an image not having a complicated pattern, the selections are carried out on blocks having a larger block size than largest coding blocks. As a result, the selection information associated with each block to be encoded can be reduced without reducing the image quality improvement capability much. In contrast, when encoding an image having a complicated pattern, the selections are carried out on blocks having a smaller block size than largest coding blocks. As a result, although the selection information associated with each block to be encoded increases, processes following change in the complicated pattern can be implemented, and therefore a large image quality improvement effect can be achieved. When carrying out a selection of a class classifying method in the pixel adaptive offset process and a selection of a class in the adaptive filtering process for each coding block having an arbitrary block size, as mentioned above, the above-mentioned block size is outputted to the variable length encoding unit 13 as a part of the filter parameters.
Next, the details of the process of variable-length-encoding the filter parameters will be explained. The variable length encoding unit 13 variable-length-encodes the filter parameters outputted thereto from the loop filter unit 11. The filter parameters include, as the parameters necessary for the pixel adaptive offset process, the enable or disable information indicating whether or not the pixel adaptive offset process at the picture level is enabled, and the index indicating the class classifying method for each largest coding block and the offset provided for each class of each largest coding block when the enable or disable information indicates that “the process is enabled.” Further, the filter parameters include, as the parameters necessary for the adaptive filtering process, the enable or disable information indicating whether or not the adaptive filtering process at the picture level is enabled, and the number of classes, the class number showing the filter used for each largest coding block, and the filter coefficients corresponding to each class number when the enable or disable information indicates that “the process is enabled.”
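For reference, the filter parameters enumerated above can be pictured as the following container; the field names and types are assumptions made for this sketch and do not reflect the actual bitstream syntax.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PixelAdaptiveOffsetParams:
    enabled: bool                             # picture-level enable/disable flag
    class_method_index: List[int] = field(default_factory=list)  # one index per largest coding block
    offsets: List[List[int]] = field(default_factory=list)       # per block, one offset per class

@dataclass
class AdaptiveFilterParams:
    enabled: bool                             # picture-level enable/disable flag
    num_classes: int = 0
    class_number: List[int] = field(default_factory=list)        # per block; may mean "no filtering process"
    coefficients: List[List[int]] = field(default_factory=list)  # one coefficient set per class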
Particularly, in the variable length encoding of the class number of each largest coding block in the adaptive filtering process, an encoding process using Move-To-Front shown in
As the encoding method of encoding the class number (filter number) of each largest coding block, there can be a method of, as shown in
Further, also when the index indicating the class classifying method selected for each largest coding block in the pixel adaptive offset process is encoded, the use of Move-To-Front causes the occurrence probability of the index to be encoded to concentrate near zero, just as when the class number of each largest coding block in the above-mentioned adaptive filtering process is encoded. Therefore, by carrying out the variable length encoding according to this occurrence probability, the index can be encoded with a high degree of coding efficiency.
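A minimal sketch of the Move-To-Front behaviour relied on above (standard Move-To-Front, not code taken from the embodiment): frequently selected class numbers migrate to the front of the table, so the transformed indices cluster near zero and suit a variable length code whose short codewords lie at small values.

def move_to_front_encode(class_numbers, num_classes):
    table = list(range(num_classes))
    out = []
    for c in class_numbers:
        idx = table.index(c)                 # current position of this class
        out.append(idx)
        table.insert(0, table.pop(idx))      # move the class to the front
    return out

def move_to_front_decode(indices, num_classes):
    table = list(range(num_classes))
    out = []
    for idx in indices:
        c = table.pop(idx)                   # recover the class number
        out.append(c)
        table.insert(0, c)                   # mirror the encoder's update
    return out

For example, move_to_front_encode([2, 2, 2, 5, 5], 8) yields [2, 0, 0, 5, 0]: repeats of the same class number map to index 0 regardless of the class value.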
For both the encoding of the class number (class index) of each largest coding block in the adaptive filtering process and the encoding of the index indicating the class classifying method selected for each largest coding block in the pixel adaptive offset process, the following method can be used instead of the above-mentioned encoding method using Move-To-Front: a flag indicating whether or not the selection for the largest coding block is the same as that for a neighboring block (e.g., the block on top of or to the left of the largest coding block) is encoded; when the flag indicates "the same," an index identifying which neighboring block has the same selection is encoded; and when the flag indicates "not the same," the index itself is encoded. As an alternative, as shown in
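The flag-based method described above can be sketched as follows; the raster-scan grid, the symbol tuples, and the neighbour checking order are assumptions made for illustration, and the actual flag syntax is left to the embodiment.

def encode_indices_with_merge(idx_grid):
    # idx_grid: 2-D list of per-largest-coding-block indices in raster order.
    # Each emitted symbol is either ('merge', neighbour) or ('new', index);
    # a real encoder would entropy-code these symbols.
    symbols = []
    for r, row in enumerate(idx_grid):
        for c, v in enumerate(row):
            left = row[c - 1] if c > 0 else None
            top = idx_grid[r - 1][c] if r > 0 else None
            if left is not None and v == left:
                symbols.append(('merge', 'left'))   # same as the left block
            elif top is not None and v == top:
                symbols.append(('merge', 'top'))    # same as the block on top
            else:
                symbols.append(('new', v))          # encode the index itself
    return symbols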
Into the header information, the largest number of offsets in the pixel adaptive offset process which can be used on a per picture basis and the largest number of classes in the adaptive filtering process which can be used on a per picture basis can be encoded. Because the largest number of offsets and the largest number of filters which the decoding side has to decode per picture are then apparent, restrictions needed to implement a real-time decoding device can be defined by the header information.
Further, in both the adaptive filtering process and the pixel adaptive offset process, when the adaptive filtering process is defined to select whether or not to carry out the filtering process for each of coding blocks into which each largest coding block is partitioned, as shown in
Next, the processing carried out by the video decoding device shown in
The variable length decoding unit 31 also determines the largest coding block size and the upper limit on the number of hierarchical layers for partitioning, which are determined by the encoding controlling unit 2 of the video encoding device shown in
The variable length decoding unit 31 then decodes the coding mode assigned to each decoding block. The variable length decoding unit 31 partitions each decoding block into one or more prediction blocks each of which is a unit for prediction process on the basis of the information included in the coding mode, and decodes the prediction parameter assigned to each of the one or more prediction blocks (step ST24).
More specifically, when the coding mode assigned to a decoding block is an intra coding mode, the variable length decoding unit 31 decodes the intra prediction parameter for each of the one or more prediction blocks which are included in the decoding block and each of which is a unit for prediction process. In contrast, when the coding mode assigned to a decoding block is an inter coding mode, the variable length decoding unit decodes the inter prediction parameter and the motion vector for each of the one or more prediction blocks which are included in the decoding block and each of which is a unit for prediction process (step ST24). The variable length decoding unit 31 also decodes the compressed data (transformed and quantized transform coefficients) of each orthogonal transformation block on the basis of the orthogonal transformation block partitioning information included in the prediction difference coding parameters (step ST24).
When the coding mode m(Bn) variable-length-decoded by the variable length decoding unit 31 is an intra coding mode (when m(Bn)∈INTRA), the select switch 33 outputs the intra prediction parameter of each prediction block unit, which is variable-length-decoded by the variable length decoding unit 31, to the intra prediction unit 34. In contrast, when the coding mode m(Bn) variable-length-decoded by the variable length decoding unit 31 is an inter coding mode (when m(Bn)∈INTER), the select switch outputs the inter prediction parameter and the motion vector of each prediction block unit, which are variable-length-decoded by the variable length decoding unit 31, to the motion compensation unit 35.
When the coding mode m(Bn) variable-length-decoded by the variable length decoding unit 31 is an intra coding mode (m(Bn)∈INTRA) (step ST25), the intra prediction unit 34 receives the intra prediction parameter of each prediction block unit outputted from the select switch 33, and carries out an intra prediction process on each prediction block Pin in the decoding block Bn using the above-mentioned intra prediction parameter, while referring to the decoded image stored in the memory 37 for intra prediction, to generate an intra prediction image PINTRAin according to the same procedure as that which the intra prediction unit 4 shown in
When the coding mode m(Bn) variable-length-decoded by the variable length decoding unit 31 is an inter coding mode (m(Bn)∈INTER) (step ST25), the motion compensation unit 35 receives the motion vector and the inter prediction parameter of each prediction block unit which are outputted from the select switch 33, and carries out an inter prediction process on each prediction block Pin in the decoding block Bn using the above-mentioned motion vector and the above-mentioned inter prediction parameter, while referring to the filtering-processed decoded image stored in the motion-compensated prediction frame memory 39, to generate an inter prediction image PINTERin (step ST27).
When receiving the compressed data and the prediction difference coding parameters from the variable length decoding unit 31, the inverse quantization/inverse transformation unit 32 inverse-quantizes the compressed data about each orthogonal transformation block by referring to the quantization parameter and the orthogonal transformation block partitioning information which are included in the prediction difference coding parameters, according to the same procedure as that which the inverse quantization/inverse transformation unit 8 shown in
The adding unit 36 adds the decoded prediction difference signal calculated by the inverse quantization/inverse transformation unit 32 and either the intra prediction image PINTRAin generated by the intra prediction unit 34 or the inter prediction image PINTERin generated by the motion compensation unit 35 to calculate a decoded image, outputs the decoded image to the loop filter unit 38, and also stores the decoded image in the memory 37 for intra prediction (step ST29). This decoded image signal is used in subsequent intra prediction processes.
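The addition in step ST29 amounts to the following sketch, assuming 8-bit samples stored in numpy arrays (both are assumptions of this sketch):

import numpy as np

def reconstruct(prediction, decoded_difference, bit_depth=8):
    # Add the decoded prediction difference signal to the selected
    # prediction image and clip the result to the valid sample range.
    total = prediction.astype(np.int32) + decoded_difference
    return np.clip(total, 0, (1 << bit_depth) - 1).astype(prediction.dtype)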
When completing the processes of steps ST23 to ST29 on all the decoding blocks Bn (step ST30), the loop filter unit 38 carries out a predetermined filtering process on the decoded image outputted from the adding unit 36, and stores the decoded image filtering-processed thereby in the motion-compensated prediction frame memory 39 (step ST31). Concretely, the loop filter unit carries out a filtering (deblocking filtering) process of reducing a distortion occurring at a boundary between orthogonal transformation blocks and a distortion occurring at a boundary between prediction blocks, a process (pixel adaptive offset process) of adaptively adding an offset to each pixel, an adaptive filtering process of adaptively switching among linear filters, such as Wiener filters, and performing the filtering process, and so on. While the loop filter unit 38 can be constructed in such a way as to perform one of the above-mentioned deblocking filtering process, the above-mentioned pixel adaptive offset process, and the above-mentioned adaptive filtering process, or perform two or more of the processes, as shown in
In the pixel adaptive offset process, the loop filter unit refers to the index indicating the class classifying method selected for each largest decoding block, which is included in the filter parameters variable-length-decoded by the variable length decoding unit 31, specifies the class classifying method corresponding to that index from among the class classifying methods prepared in advance, and classifies each pixel within each largest decoding block into one of the classes by using that class classifying method. The loop filter unit then adds the offset provided for each class, which is included in the variable-length-decoded filter parameters, to the brightness value of each pixel (each pixel of the decoded image) belonging to that class.
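A sketch of this offset addition, under the assumptions that the signalled class classifying method is supplied as a per-pixel callback and that class 0 means "no offset" (both are choices of this sketch, not of the embodiment):

import numpy as np

def apply_pixel_adaptive_offset(block, classify, offsets, bit_depth=8):
    # block:    decoded pixels of one largest decoding block
    # classify: classify(block, y, x) -> class of the pixel at (y, x)
    # offsets:  decoded offset for each class, indexed by class number
    out = block.astype(np.int32)
    for y in range(block.shape[0]):
        for x in range(block.shape[1]):
            k = classify(block, y, x)
            if k > 0:                        # assumed: class 0 carries no offset
                out[y, x] += offsets[k]
    return np.clip(out, 0, (1 << bit_depth) - 1).astype(block.dtype)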
Further, in the adaptive filtering process, the loop filter unit refers to the filter parameters variable-length-decoded by the variable length decoding unit 31 (the number of classes (the number of filters), the filter for the local decoded image belonging to each class, and the class number (filter number) which is identification information identifying the class of each block), and, when the class shown by the class number does not indicate "the filtering process is not carried out," carries out the filtering process on the decoded image belonging to that class by using the filter corresponding to the class number.
First, on the basis of the class number of each largest decoding block included in the filter parameters variable-length-decoded by the variable length decoding unit 31, whether or not to carry out the filtering process on the largest decoding block which is the target to be processed is determined (step ST222). When the filtering process is carried out, it is carried out on each pixel within the block by using the filter corresponding to the above-mentioned class number (step ST223). The video decoding device then shifts to the next largest decoding block (step ST224). The video decoding device carries out the above-mentioned processes on all the largest decoding blocks, and stores the processed decoded images in the motion-compensated prediction frame memory 39.
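Steps ST222 to ST224 can be sketched as the loop below; filter_2d stands in for the actual application of the decoded (e.g., Wiener) filter, and class number 0 is assumed to mean "no filtering process":

def loop_filter_largest_blocks(lcu_list, class_numbers, filters, filter_2d):
    for n, lcu in enumerate(lcu_list):              # ST224: block-by-block scan
        cls = class_numbers[n]                      # ST222: filter this block?
        if cls != 0:
            lcu[:] = filter_2d(lcu, filters[cls])   # ST223: apply the class filter
    return lcu_list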
Further, in a case in which the loop filter unit 11 of the video encoding device is constructed in such a way as to select, for each largest coding block, an optimal one of the pixel adaptive offset process and the adaptive filtering process as the method of using both processes while reducing the amount of computation of the entire filtering process of the loop filter unit 38, the video decoding device refers to the selection information which is included in the filter parameters variable-length-decoded by the variable length decoding unit 31 and which indicates which of the pixel adaptive offset process and the adaptive filtering process is to be carried out for each largest decoding block, and carries out the selected process on each largest decoding block.
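Under this construction, the decoder-side dispatch might look like the following sketch; the string values of the selection and the callback arguments are placeholders for the real syntax, not details of the embodiment.

def apply_selected_process(lcu, selection, apply_offset, apply_filter):
    # selection: decoded per-largest-decoding-block choice between the
    # pixel adaptive offset process and the adaptive filtering process.
    if selection == 'offset':
        return apply_offset(lcu)
    return apply_filter(lcu)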
Further, although the example in which the information about the class classifying method selection for the pixel adaptive offset process and the information about the class selection for the adaptive filtering process are decoded for each largest decoding block and the loop filtering process is then carried out is explained above, in a case in which the video encoding device is constructed in such a way as to encode a block size, the information about the class classifying method selection for the pixel adaptive offset process on each block having that block size, and the information about the class selection for the adaptive filtering process, the variable length decoding unit 31 variable-length-decodes the above-mentioned block size and the pieces of selection information for each block having that block size, and the loop filter unit 38 carries out the loop filtering process on each block having that block size on the basis of these parameters.
In a case in which the information indicating whether or not to carry out the filtering process on each of decoding blocks into which each largest decoding block is partitioned is encoded into the slice header, the variable length decoding unit 31 decodes the above-mentioned information, and the filtering process in step ST223 is carried out only on decoding blocks each having the information indicating that the filtering process is carried out. The decoded image on which the filtering process is carried out by this loop filter unit 38 is provided as a reference image for motion-compensated prediction, and is determined as a reproduced image.
As can be seen from the above description, in accordance with this Embodiment 1, the loop filter unit 11 of the video encoding device is constructed in such a way as to classify the local decoded image generated by the adding unit 9 into one class for each coding block having the largest size determined by the encoding controlling unit 2, design, for each class, a filter that compensates for the distortion superimposed onto the local decoded images belonging to that class, and carry out the filtering process on the local decoded image by using that filter, and the variable length encoding unit 13 is constructed in such a way as to encode, as the filter parameters, the filter designed by the loop filter unit 11 for the local decoded images belonging to each class, and the class number of each largest coding block. Therefore, because the user is enabled to cause the video encoding device to carry out a class classification with an arbitrary number of classes on each block having a predetermined block size according to an arbitrary method, and to carry out the filtering process on each region belonging to each class, there is provided an advantage of being able to carry out a class classification suited to the image to be encoded, and hence improve the accuracy of the image quality improvement.
According to this Embodiment 1, because the video decoding device is constructed in such a way as to carry out the filtering process by using the filter belonging to each class while referring to the filter parameters variable-length-decoded by the variable length decoding unit 31, there is provided an advantage of being able to improve the accuracy of the image quality improvement when the video encoding device determines an optimal class classification and carries out the encoding accordingly.
While the invention has been described in its preferred embodiment, it is to be understood that various changes can be made to any component of the embodiment, and that any component of the embodiment can be omitted, within the scope of the invention.
The video encoding device and the video encoding method in accordance with the present invention are suitable for a video encoding device that needs to encode a video image with a high degree of efficiency, and the video decoding device and the video decoding method in accordance with the present invention are suitable for a video decoding device that needs to decode a video image which is encoded with a high degree of efficiency.
Priority claim: Japanese Patent Application No. 2012-009128, filed January 2012 (JP, national).
International filing: PCT/JP2013/050207, filed Jan. 9, 2013 (WO).