Embodiments described herein relate generally to methods for encoding and decoding a moving image and a still image.
Recently, a moving image coding method that greatly improves coding efficiency has been recommended as ITU-T Rec. H.264 and ISO/IEC 14496-10 (hereinafter referred to as H.264) jointly by ITU-T and ISO/IEC. In H.264, prediction processing, transform processing, and entropy coding processing are performed in rectangular block units (for example, 16-by-16-pixel and 8-by-8-pixel block units). In the prediction processing, motion compensation is performed on a rectangular block of an encoding target (hereinafter referred to as an encoding target block). In the motion compensation, a prediction in the temporal direction is performed by referring to an already-encoded frame (hereinafter referred to as a reference frame), and it is necessary to encode and transmit motion information including a motion vector to the decoding side. The motion vector is information on a spatial shift between the encoding target block and the block referred to in the reference frame. In the case that the motion compensation is performed using a plurality of reference frames, a reference frame number must be encoded in addition to the motion information. Therefore, the code amount related to the motion information and the reference frame number may increase.
A direct mode, in which the motion vector to be allocated to the encoding target block is derived from the motion vectors allocated to already-encoded blocks and a predicted image is generated based on the derived motion vector, is cited as an example of a method for deriving the motion vector in motion compensation prediction (see JP-B 4020789 and U.S. Pat. No. 7,233,621). In the direct mode, because the motion vector itself is not encoded, the code amount of the motion information can be reduced. The direct mode is adopted in H.264/AVC, for example.
In the direct mode, the motion vector of the encoding target block is predicted by a fixed method, namely, by calculating a median of the motion vectors of the already-encoded blocks adjacent to the encoding target block. Therefore, the motion vector calculation has a low degree of freedom.
A method for selecting one block from the already-encoded blocks and allocating its motion vector to the encoding target block has been proposed in order to enhance the degree of freedom of the motion vector calculation. In this method, selection information identifying the selected block must always be transmitted to the decoding side so that the decoding side can identify the selected already-encoded block. Accordingly, when the motion vector to be allocated to the encoding target block is decided by selecting one block from the already-encoded blocks, the code amount related to the selection information increases.
In general, according to one embodiment, an image encoding method includes selecting motion reference blocks from already-encoded pixel blocks including motion information. The method includes selecting available blocks from the motion reference blocks, each available block including a candidate of motion information applied to an encoding target block, the available blocks including pieces of motion information different from one another. The method includes selecting a selection block from the available blocks. The method includes generating a predicted image of the encoding target block using the motion information of the selection block. The method includes encoding a prediction error between the predicted image and an original image. The method includes encoding selection information identifying the selection block by referring to a code table decided according to the number of available blocks.
Embodiments provide image encoding and image decoding methods having a high encoding efficiency.
Hereinafter, image encoding and image decoding methods and apparatuses according to embodiments will be described with reference to the drawings. In the embodiments, like reference numbers denote like elements, and duplicated explanations will be avoided.
For example, an original image (input image signal) 10, which is a moving image or a still image, is input to the image encoder 100 in units of the pixel blocks into which the original image is divided. The image encoder 100 performs compression encoding of the input image signal 10 to generate encoded data 14. The generated encoded data 14 is temporarily stored in the output buffer 120, and transmitted to a storage system (a storage medium, not illustrated) or a transmission system (a communication line, not illustrated) at an output timing managed by the encoding controller 150.
The encoding controller 150 controls the entire encoding processing of the image encoder 100, namely, feedback control of a generated code amount, quantization control, prediction mode control, and entropy encoding control. Specifically, the encoding controller 150 provides encoding control information 50 to the image encoder 100, and properly receives feedback information 51 from the image encoder 100. The encoding control information 50 includes prediction information, motion information 18, and quantization parameter information. The prediction information includes prediction mode information and block size information. The motion information 18 includes a motion vector, a reference frame number, and a prediction direction (a unidirectional prediction or a bidirectional prediction). The quantization parameter information includes a quantization parameter, such as a quantization width (or a quantization step size), and a quantization matrix. The feedback information 51 includes the amount of code generated by the image encoder 100. For example, the feedback information 51 is used to decide the quantization parameter.
The image encoder 100 encodes the input image signal 10 in units of pixel blocks (for example, a macroblock, a sub-block, or one pixel) into which the original image is divided. Therefore, the input image signal 10 is sequentially input to the image encoder 100 in units of pixel blocks into which the original image is divided. In the present embodiment, the processing unit for encoding is set to the macroblock, and the pixel block (macroblock) of the encoding target corresponding to the input image signal 10 is simply referred to as an encoding target block. An image frame including the encoding target block, namely, the image frame of the encoding target, is referred to as an encoding target frame.
For example, the encoding target block may be a 16-by-16-pixel block as shown in
The encoding processing may be performed to each pixel block in the encoding target frame in any order. In the present embodiment, for the sake of convenience, it is assumed that, as illustrated in
The image encoder 100 in
In the image encoder 100, the input image signal 10 is provided to the predictor 101 and the subtractor 102. The subtractor 102 receives the input image signal 10, and receives a predicted image signal 11 from the predictor 101. The subtractor 102 calculates a difference between the input image signal 10 and the predicted image signal 11 to generate a prediction error image signal 12.
The transform/quantization module 103 receives the prediction error image signal 12 from the subtractor 102, and performs transform processing to the received prediction error image signal 12 to generate a transform coefficient. For example, the transform processing is an orthogonal transform such as a discrete cosine transform (DCT). In another embodiment, the transform/quantization module 103 may generate the transform coefficient using techniques such as a wavelet transform and an independent component analysis, instead of the discrete cosine transform. Then the transform/quantization module 103 quantizes the generated transform coefficient based on the quantization parameter provided by the encoding controller 150. A quantized transform coefficient (also called transform coefficient information) 13 is output to the variable length encoder 104 and the inverse-quantization/inverse-transform module 105.
The inverse-quantization/inverse-transform module 105 inversely quantizes the quantized transform coefficient 13 according to the quantization parameter provided by the encoding controller 150, namely, the quantization parameter identical to that of the transform/quantization module 103. Then the inverse-quantization/inverse-transform module 105 performs an inverse transform on the inversely-quantized transform coefficient to generate a decoded prediction error signal 15. The inverse transform processing performed by the inverse-quantization/inverse-transform module 105 is the inverse of the transform processing performed by the transform/quantization module 103. For example, the inverse transform processing is an inverse discrete cosine transform (IDCT) or an inverse wavelet transform.
The adder 106 receives the decoded prediction error signal 15 from the inverse-quantization/inverse-transform module 105, and receives the predicted image signal 11 from the predictor 101. The adder 106 adds the decoded prediction error signal 15 and the predicted image signal 11 to generate a locally-decoded image signal 16. The generated locally-decoded image signal 16 is stored as a reference image signal 17 in the frame memory 107. The reference image signal 17 stored in the frame memory 107 is read and referred to by the predictor 101 in encoding the encoding target block.
The predictor 101 receives the reference image signal 17 from the frame memory 107, and receives available block information 30 from the available-block acquiring module 109. The predictor 101 also receives reference motion information 19 from the motion information memory 108. The predictor 101 generates the predicted image signal 11, the motion information 18, and selection block information 31 of the encoding target block based on the reference image signal 17, the reference motion information 19, and the available block information 30. Specifically, the predictor 101 includes a motion information selector 118 that generates the motion information 18 and the selection block information 31 based on the available block information 30 and the reference motion information 19, and a motion compensator 113 that generates the predicted image signal 11 based on the motion information 18. The predicted image signal 11 is transmitted to the subtractor 102 and the adder 106. The motion information 18 is stored in the motion information memory 108 for the prediction processing performed on the subsequent encoding target block. The selection block information 31 is transmitted to the variable length encoder 104. The predictor 101 is described in detail later.
The motion information 18 is temporarily stored as the reference motion information 19 in the motion information memory 108.
The pieces of reference motion information 19 are retained in the motion information frame 25 in predetermined units of blocks (for example, units of 4-by-4-pixel blocks). The motion vector block 28 in
The motion information memory 108 is not limited to the example in which the pieces of reference motion information 19 are retained in units of 4-by-4-pixel blocks, and the pieces of reference motion information 19 may be retained in another pixel block unit. For example, the pixel block unit related to the reference motion information 19 may be one pixel or a 2-by-2-pixel block. The shape of the pixel block related to the reference motion information 19 is not limited to a square, and the pixel block may have any shape.
The available-block acquiring module 109 in
In addition to the transform coefficient information 13, the variable length encoder 104 receives the selection block information 31 from the predictor 101, receives the prediction information and encoding parameters, such as the quantization parameter, from the encoding controller 150, and receives the available block information 30 from the available-block acquiring module 109. The variable length encoder 104 performs entropy encoding (for example, fixed-length coding, Huffman coding, and arithmetic coding) to the quantized transform coefficient information 13, the selection block information 31, the available block information 30, and the encoding parameter to generate the encoded data 14. The encoding parameter includes the parameters necessary to decode the information on the transform coefficient, the information on the quantization, and the like in addition to the selection block information 31 and the prediction information. The generated encoded data 14 is temporarily stored in the output buffer 120, and then transmitted to the storage system (not illustrated) or the transmission system (not illustrated).
The transform/quantization module 103 performs the orthogonal transform and the quantization to the prediction error image signal 12 to generate transform coefficient information 13 (Step S503). The transform coefficient information 13 and the selection block information 31 are transmitted to the variable length encoder 104, and the variable length encoding is performed to the transform coefficient information 13 and the selection block information 31 to generate the encoded data 14 (Step S504). In Step S504, a code table is switched according to the selection block information 31 so as to have as many entries as available blocks, and the variable length encoding is also performed to the selection block information 31. A bit stream 20 of the encoded data is transmitted to the storage system (not illustrated) or the transmission line (not illustrated).
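For illustration, the switching of the code table so that it has as many entries as available blocks can be sketched with a truncated unary binarization; the function name and the binarization scheme here are assumptions for the sketch, not the code table actually defined by the embodiment.

```python
def encode_stds_idx(idx: int, n_available: int) -> str:
    """Sketch: binarize a selection index with a code table whose size
    depends on the number of available blocks (truncated unary code,
    an illustrative assumption)."""
    if n_available <= 1:
        return ''              # a single candidate needs no signaling
    bits = '1' * idx
    if idx < n_available - 1:  # the last index omits the terminating 0
        bits += '0'
    return bits

print(encode_stds_idx(0, 3))   # → '0'
print(encode_stds_idx(2, 3))   # → '11'
```

With fewer available blocks, the indices are shorter; when only one candidate exists, no selection information needs to be transmitted at all, which is the source of the code amount reduction described above.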
The inverse-quantization/inverse-transform module 105 inversely quantizes the transform coefficient information 13 generated in Step S503, and the inverse transform processing is performed to the inversely-quantized transform coefficient information 13 to generate a decoded prediction error signal 15 (Step S505). The decoded prediction error signal 15 is added to the reference image signal 17 used in Step S501 to create a locally-decoded image signal 16 (Step S506), and the locally-decoded image signal 16 is stored as the reference image signal in the frame memory 107 (Step S507).
Each element of the image encoder 100 according to the present embodiment will be described in detail below.
A plurality of prediction modes are prepared in the image encoder 100 in
The inter prediction is not limited to the example in which the reference frame in one frame earlier is used as illustrated in
In the inter prediction, the block size suitable for the encoding target block can be selected from a plurality of motion compensation blocks. That is, the encoding target block is divided into small pixel blocks, and the motion compensation may be performed in each small pixel block.
As described above, the small pixel block (for example, the 4-by-4-pixel block) in the reference frame used in the inter prediction has the motion information 18, so that the shape and the motion vector of the optimum motion compensation block can be used according to the local property of the input image signal 10. The macroblocks and the sub-macroblocks in
The motion reference block will be described below with reference to
The motion reference block is selected from the already-encoded regions (blocks) in the encoding target frame and in the reference frame according to the method decided by both the image encoding apparatus in
The spatial-direction motion reference block is not limited to the example in
As illustrated in
In the temporal-direction motion reference blocks, some of blocks TA to TE may be overlapped as illustrated in
In each case, as long as the numbers and the positions of the spatial-direction and temporal-direction motion reference blocks are decided in advance between the encoding apparatus and the decoding apparatus, the numbers and the positions of the motion reference blocks may be set in any manner. It is not always necessary that the size of the motion reference block be identical to that of the encoding target block. For example, as illustrated in
The motion reference block and the available block may be disposed only in one of the temporal direction and the spatial direction. The temporal-direction motion reference block and the available block may be disposed according to the kind of slice, such as P-slice and B-slice, or the spatial-direction motion reference block and the available block may be disposed according to the kind of slice.
As illustrated in
The available-block acquiring module 109 determines whether the motion reference block p has the motion information 18, namely, whether at least one motion vector is allocated to the motion reference block p (S801). When the motion reference block p does not have the motion vector, namely, when the temporal-direction motion reference block p is a block in an I-slice that does not have the motion information or when the intra prediction encoding is performed to all the small pixel blocks in the temporal-direction motion reference block p, the flow goes to Step S805. In Step S805, the available-block acquiring module 109 determines that the motion reference block p is an unavailable block.
When the motion reference block p has the motion information in Step S801, the flow goes to Step S802. The available-block acquiring module 109 selects a motion reference block q (available block q) that is already selected as the available block, where q is smaller than p. Then the available-block acquiring module 109 compares the motion information 18 on the motion reference block p to the motion information 18 on the available block q to determine whether the motion reference block p and the available block q have identical motion information (S803). When the motion information 18 on the motion reference block p is identical to the motion information 18 on the motion reference block q selected as the available block, the flow goes to Step S805, and the available-block acquiring module 109 determines that the motion reference block p is the unavailable block.
When the motion information 18 on the motion reference block p is not identical to any of the pieces of motion information 18 on the available blocks q satisfying q&lt;p in Step S803, the flow goes to Step S804. In Step S804, the available-block acquiring module 109 determines that the motion reference block p is an available block.
When determining that the motion reference block p is the available block or the unavailable block, the available-block acquiring module 109 determines whether the availability determination is made for all the motion reference blocks (S806). When a motion reference block for which the availability determination is not made yet exists, for example, in the case of p<M−1, the flow goes to Step S807. Then the available-block acquiring module 109 increments the index p by 1 (Step S807), and performs Steps S801 to S806 again. When the availability determination is made for all the motion reference blocks in Step S806, the availability determination processing is ended.
Whether each motion reference block is an available block or unavailable block is determined by performing the availability determination processing. The available-block acquiring module 109 generates the available block information 30 including the information on the available block. The amount of information on the available block information 30 is reduced by selecting the available block from the motion reference blocks, and therefore the amount of encoded data 14 can be reduced.
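The availability determination of Steps S801 to S806 can be sketched as follows; the `MotionInfo` representation and the example block list are illustrative assumptions, not structures defined by the embodiment.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class MotionInfo:
    mv: tuple       # motion vector (mv_x, mv_y)
    ref_idx: int    # reference frame number

def select_available_blocks(motion_refs: List[Optional[MotionInfo]]) -> List[int]:
    """Return the indices p of motion reference blocks judged available."""
    available: List[int] = []
    for p, info in enumerate(motion_refs):
        if info is None:                                # S801: no motion information
            continue                                    # S805: unavailable
        # S802-S803: compare with every already-selected available block q < p
        if any(motion_refs[q] == info for q in available):
            continue                                    # duplicate info: unavailable
        available.append(p)                             # S804: available
    return available

blocks = [MotionInfo((1, 0), 0), None, MotionInfo((1, 0), 0), MotionInfo((0, 2), 1)]
print(select_available_blocks(blocks))  # → [0, 3]
```

Block 2 duplicates the motion information of block 0 and is therefore unavailable, so only distinct candidates remain, which is what keeps the selection information compact.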
In the case that the intra prediction encoding is performed to at least one of the blocks in the temporal-direction motion reference block p in Step S801 in
Thus, whether the motion information 18 on the motion reference block p is identical to the motion information 18 on the available block q is determined in Step S803. In the examples in
The determination that the motion information on the motion reference block p is identical to the motion information on the available block q is not limited to the case that the motion vectors included in the pieces of motion information are exactly identical to each other. For example, when a norm of a difference between the two motion vectors falls within a predetermined range, the motion information on the motion reference block p may be regarded as identical to the motion information on the available block q.
The available block information 30 and the reference motion information 19 on the spatial-direction motion reference block are input to the spatial-direction-motion-information acquiring module 110. The spatial-direction-motion-information acquiring module 110 outputs motion information 18A including the motion information possessed by each available block located in the spatial direction and the index value of the available block. In the case that the information in
The available block information 30 and the reference motion information 19 on the temporal-direction motion reference block are input to the temporal-direction-motion-information acquiring module 111. The temporal-direction-motion-information acquiring module 111 outputs, as motion information 18B, the motion information 19, which is possessed by the available temporal-direction motion reference block identified by the available block information 30, and the index value of the available block. The temporal-direction motion reference block is divided into a plurality of small pixel blocks, and each small pixel block has the motion information 19. As illustrated in
The temporal-direction-motion-information acquiring module 111 may evaluate an average value or a representative value of the motion vectors included in the motion information 19 possessed by each small pixel block, and output the average value or the representative value of the motion vectors as the motion information 18B.
Based on the pieces of motion information 18A and 18B output from the spatial-direction-motion-information acquiring module 110 and the temporal-direction-motion-information acquiring module 111, the motion information selector switch 112 in
For example, the motion information selector switch 112 selects the available block, which minimizes an encoding cost derived by a cost equation indicated in the following mathematical formula (1), as the selection block.
J=D+λ×R (1)
where J indicates the encoding cost, and D indicates an encoding distortion expressing a sum of squared differences between the input image signal 10 and the reference image signal 17. R indicates a code amount estimated by temporary encoding, and λ indicates a Lagrange multiplier (undetermined coefficient) defined by the quantization width. The encoding cost J may be calculated using only the code amount R or only the encoding distortion D instead of the mathematical formula (1), and the cost function of the mathematical formula (1) may be produced using a value in which the code amount R or the encoding distortion D is approximated. The encoding distortion D is not limited to the sum of squared differences; the encoding distortion D may be a sum of absolute differences (SAD). Only the code amount related to the motion information 18 may be used as the code amount R. The selection block is not limited to the available block minimizing the encoding cost; one available block whose encoding cost is within a predetermined range of the minimum may be selected as the selection block.
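As a minimal sketch of selecting the block that minimizes the cost of formula (1), assuming hypothetical distortion and rate values for each candidate and a hypothetical λ:

```python
def encoding_cost(distortion: float, rate: float, lam: float) -> float:
    # formula (1): J = D + λ × R
    return distortion + lam * rate

def select_block(candidates, lam: float = 1.0):
    """candidates: list of (block_index, D, R) tuples (illustrative values);
    returns the index of the candidate minimizing the cost J."""
    return min(candidates, key=lambda c: encoding_cost(c[1], c[2], lam))[0]

# Hypothetical candidates: (index, distortion D, rate R)
candidates = [(0, 120.0, 40.0), (1, 100.0, 80.0), (2, 140.0, 10.0)]
print(select_block(candidates))  # costs 160, 180, 150 → 2
```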
The motion compensator 113 derives the position of the pixel block, in which the reference image signal 17 is taken out as the predicted image signal, based on the reference motion information (or the motion information group) possessed by the selection block selected by the motion information selector 118. In the case that the motion information group is input to the motion compensator 113, the motion compensator 113 acquires the predicted image signal 11 from the reference image signal 17 by dividing the pixel block, which is taken out of the reference image signal 17 as the predicted image signal, into small pixel blocks (for example, 4-by-4-pixel blocks) and applying the corresponding motion information to each small pixel block. For example, as illustrated in
The motion compensation processing identical to that of H.264 can be used as the motion compensation processing performed to the encoding target block. An interpolation technique of the ¼ pixel accuracy will specifically be described by way of example. In the interpolation of the ¼ pixel accuracy, the motion vector points out an integral pixel position in the case that each component of the motion vector is a multiple of 4. In other cases, the motion vector points out a predicted position corresponding to an interpolation position of fractional accuracy.
x_pos=x+(mv_x/4)
y_pos=y+(mv_y/4) (2)
where x and y indicate indexes in the horizontal and vertical directions of a beginning position (for example, an upper-left corner) of the prediction target block, and x_pos and y_pos indicate the corresponding predicted position in the reference image signal 17. (mv_x,mv_y) indicates the motion vector having the ¼ pixel accuracy. A predicted pixel is generated for the determined pixel position through processing of compensating or interpolating the corresponding pixel position of the reference image signal 17.
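The integer-position computation of formula (2) can be sketched as follows; this assumes non-negative vector components, since H.264 uses truncating division while Python's `//` is floor division for negative values.

```python
def predicted_position(x: int, y: int, mv_x: int, mv_y: int) -> tuple:
    # formula (2): each motion vector component is in quarter-pel units,
    # so dividing by 4 yields the integer-pel displacement
    return x + mv_x // 4, y + mv_y // 4

# Block top-left at (16, 8) with motion vector (8, 4) in quarter-pel units,
# i.e. an integral-pel shift of (2, 1):
print(predicted_position(16, 8, 8, 4))  # → (18, 9)
```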
b=(E−5×F+20×G+20×H−5×I+J+16)>>5
h=(A−5×C+20×G+20×M−5×R+T+16)>>5 (3)
The letters (for example, b, h, and C1) indicated in the mathematical formulae (3) and (4) indicate the value of the pixel to which the same letters are provided in
For example, in
a=(G+b+1)>>1
d=(G+h+1)>>1 (4)
Thus, the interpolation pixel at the ¼ pixel position is calculated with a two-tap average-value filter (tap coefficients: (1/2,1/2)). The ½ pixel interpolation corresponding to the letter j, located in the middle of four integral pixel positions, is generated with six taps in the vertical direction and six taps in the horizontal direction. For the other pixel positions, the interpolation pixel values are generated in a similar manner.
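The filters of the mathematical formulae (3) and (4) can be sketched as follows; the pixel values are illustrative, and the clipping of intermediate results to the valid pixel range (e.g., 0 to 255) performed in H.264 is omitted for brevity.

```python
def half_pel(e: int, f: int, g: int, h_: int, i: int, j: int) -> int:
    # formula (3): six-tap filter, b = (E − 5F + 20G + 20H − 5I + J + 16) >> 5
    return (e - 5 * f + 20 * g + 20 * h_ - 5 * i + j + 16) >> 5

def quarter_pel(p0: int, p1: int) -> int:
    # formula (4): two-tap average-value filter, a = (G + b + 1) >> 1
    return (p0 + p1 + 1) >> 1

b = half_pel(100, 100, 100, 100, 100, 100)
print(b)                    # a flat signal stays flat: → 100
print(quarter_pel(100, b))  # → 100
```

On a flat signal the tap coefficients (1, −5, 20, 20, −5, 1) sum to 32, so the `>> 5` normalization reproduces the input value, which is a quick sanity check on the filter.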
The interpolation processing is not limited to the examples of the mathematical formulae (3) and (4), and the interpolation pixel value may be generated using another interpolation coefficient. A fixed value provided from the encoding controller 150 may be used as the interpolation coefficient, or the interpolation coefficient may be optimized in each frame based on the encoding cost and generated using the optimized interpolation coefficient.
In the present embodiment, the motion vector block prediction processing is performed to the motion reference block in units of macroblocks (for example, 16-by-16-pixel blocks). Alternatively, the prediction processing may be performed in units of 16-by-8-pixel blocks, 8-by-16-pixel blocks, 8-by-8-pixel blocks, 8-by-4-pixel blocks, 4-by-8-pixel blocks, or 4-by-4-pixel blocks. In this case, the information on the motion vector block is derived in units of pixel blocks. The prediction processing may be performed in units of 32-by-32-pixel blocks, 32-by-16-pixel blocks, or 64-by-64-pixel blocks, which are larger than 16-by-16-pixel blocks.
When a reference motion vector in the motion vector block is substituted for the motion vector of the small pixel block in the encoding target block, (A) a negative value (inverted vector) of the reference motion vector may be substituted, or (B) a weighted average value, a median, a maximum value, or a minimum value of a reference motion vector corresponding to the small block and reference motion vectors adjacent to the reference motion vector may be substituted.
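The substitution variants (A) and (B) above can be sketched as follows; the mode names are assumptions introduced for the sketch.

```python
from statistics import median

def substitute_mv(ref_mv: tuple, neighbor_mvs: list, mode: str) -> tuple:
    """Derive the motion vector substituted for a small pixel block."""
    if mode == 'invert':                   # (A) negative (inverted) vector
        return (-ref_mv[0], -ref_mv[1])
    # (B) representative value of the reference vector and its neighbours
    mvs = [ref_mv] + neighbor_mvs
    xs = [m[0] for m in mvs]
    ys = [m[1] for m in mvs]
    if mode == 'median':
        return (median(xs), median(ys))
    if mode == 'max':
        return (max(xs), max(ys))
    if mode == 'min':
        return (min(xs), min(ys))
    raise ValueError(mode)

print(substitute_mv((4, -2), [], 'invert'))                # → (-4, 2)
print(substitute_mv((4, 0), [(2, 2), (6, -2)], 'median'))  # → (4, 0)
```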
In the case that the available block information 30 includes the index and the availability of the motion reference block corresponding to the index as illustrated in
Entropy encoding (for example, fixed length coding, Huffman coding, and arithmetic coding) can be applied to the encoders 114, 115, and 116, and the generated pieces of encoded data 14A, 14B, and 14C are multiplexed and output by the multiplexer 117.
In the present embodiment, the frame that is encoded one frame earlier than the encoding target frame is referred to as the reference frame by way of example. Alternatively, scaling (or normalization) of the motion vector may be performed using the motion vector and the reference frame number in the reference motion information 19 possessed by the selection block, and the scaled reference motion information 19 may be applied to the encoding target block.
The scaling processing will specifically be described with reference to
tc=Clip(−128,127,DiffPicOrderCnt(curPOC,colPOC)) (5)
tr[i]=Clip(−128,127,DiffPicOrderCnt(colPOC,refPOC)) (6)
where curPOC is the POC (Picture Order Count) of the encoding target frame, colPOC is the POC of the motion reference frame, and refPOC is the POC of the frame i referred to by the selection block. Clip(min,max,target) is a clip function. The clip function Clip(min,max,target) outputs min in the case that the target is smaller than min, outputs max in the case that the target is larger than max, and outputs the target in other cases. DiffPicOrderCnt(x,y) is a function that calculates a difference between the POCs.
Assuming that MVr=(MVr_x,MVr_y) is the motion vector of the selection block and that MV=(MV_x,MV_y) is the motion vector applied to the encoding target block, a motion vector MV is calculated by the following mathematical formula (7).
MV_x=(MVr_x×tc+Abs(tr[i]/2))/tr[i]
MV_y=(MVr_y×tc+Abs(tr[i]/2))/tr[i] (7)
where Abs(x) is a function that returns the absolute value of x. In the scaling of the motion vector, the motion vector MVr allocated to the selection block is transformed into the motion vector MV between the encoding target frame and the motion reference frame.
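The scaling of the mathematical formulas (5) to (7) can be sketched as follows; the POC values are hypothetical, and tr is assumed positive (C-style truncating division is approximated with Python's `//`).

```python
def clip(lo: int, hi: int, t: int) -> int:
    # Clip(min, max, target) of formulas (5) and (6)
    return max(lo, min(hi, t))

def scale_mv(mvr: tuple, cur_poc: int, col_poc: int, ref_poc: int) -> tuple:
    tc = clip(-128, 127, cur_poc - col_poc)  # formula (5): cur ↔ col distance
    tr = clip(-128, 127, col_poc - ref_poc)  # formula (6): col ↔ ref distance
    # formula (7): rounded division by the temporal distance tr
    mv_x = (mvr[0] * tc + abs(tr // 2)) // tr
    mv_y = (mvr[1] * tc + abs(tr // 2)) // tr
    return mv_x, mv_y

# Selection block points 2 frames back with MVr = (8, 4); scaling it to the
# 1-frame distance between the encoding target frame and the motion reference frame:
print(scale_mv((8, 4), 4, 3, 1))  # tc=1, tr=2 → (4, 2)
```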
Another example related to the scaling of the motion vector will be described below.
According to the following mathematical formula (8), a scaling coefficient (DistScaleFactor[i]) is evaluated in each slice or frame with respect to all the time distances tr that can be taken by the motion reference frame. The number of scaling coefficients is equal to the number of frames referred to by the selection block, namely, the number of reference frames.
tx=(16384+Abs(tr[i]/2))/tr[i]
DistScaleFactor[i]=Clip(−1024,1023,(tc×tx+32)>>6) (8)
A table may previously be prepared for the calculation of tx in the mathematical formula (8).
In the scaling in each encoding target block, using the following mathematical formula (9), the motion vector MV can be calculated by multiplication, addition, and shift operations alone.
MV_x=(DistScaleFactor[i]×MVr_x+128)>>8
MV_y=(DistScaleFactor[i]×MVr_y+128)>>8 (9)
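The precomputed-coefficient form of the mathematical formulas (8) and (9) can be sketched as follows; the temporal distances are hypothetical and tr is assumed positive.

```python
def clip(lo: int, hi: int, t: int) -> int:
    # the Clip function of formulas (5) and (6)
    return max(lo, min(hi, t))

def dist_scale_factor(tc: int, tr: int) -> int:
    # formula (8): computed once per slice or frame for each reference frame i
    tx = (16384 + abs(tr // 2)) // tr
    return clip(-1024, 1023, (tc * tx + 32) >> 6)

def scale_mv_fast(mvr: tuple, dsf: int) -> tuple:
    # formula (9): per-block scaling with multiplication, addition, and shift only
    return ((dsf * mvr[0] + 128) >> 8, (dsf * mvr[1] + 128) >> 8)

dsf = dist_scale_factor(1, 2)      # tc=1, tr=2: ratio 1/2 in fixed point
print(dsf)                         # → 128 (i.e., 0.5 × 256)
print(scale_mv_fast((8, 4), dsf))  # → (4, 2), matching the division of formula (7)
```

Since DistScaleFactor is computed once per slice or frame, the per-block work of formula (9) avoids the division in formula (7), which is the point of this variant.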
In the case that the scaling processing is performed, the post-scaling motion information 18 is applied to both the processing of the predictor 101 and the processing of the available-block acquiring module 109. In the case that the scaling processing is performed, the reference frame referred to by the encoding target block becomes the motion reference frame.
Each of the parts includes a further detailed syntax. The high-level syntax 901 includes sequence-level and picture-level syntaxes, such as a sequence-parameter-set syntax 902 and a picture-parameter-set syntax 903. The slice-level syntax 904 includes a slice header syntax 905 and a slice data syntax 906. The macroblock-level syntax 907 includes a macroblock-layer syntax 908 and a macroblock prediction syntax 909.
A syntax element that is not defined herein can be inserted in a line space of the table in
The information on the mb_type can be reduced using the information on the stds_idx.
At this point, in the case that the selection block indicated by the stds_idx is Spatial Left (i.e., the pixel block adjacent to the left side of the encoding target block), the motion information on the pixel block adjacent to the left side of the encoding target block is set as the motion information on the encoding target block. Therefore, the stds_idx has a meaning equivalent to performing the prediction on the encoding target block using the horizontally-long rectangular block indicated by mb_type=4, 6, 8, 10, 12, 14, 16, 18, and 20 in
The information on the stds_idx may be encoded while included in the information on the mb_type.
The order of the mb_type and the binarization method (bin) are not limited to the examples in
The first embodiment can also be applied to an extended macroblock in which the motion compensation prediction is collectively performed to the plurality of macroblocks. In the first embodiment, the encoding may be performed in any scan order. For example, a line scan and a Z-scan can be applied.
As described above, the image encoding apparatus of the first embodiment selects the available block from a plurality of motion reference blocks, generates the information identifying the motion reference block applied to the encoding target block according to the number of selected available blocks, and encodes the information. According to the image encoding apparatus of the first embodiment, the motion compensation is performed in units of pixel blocks, each of which is smaller than the encoding target block, while the code amount related to the motion vector information is reduced, so that a high encoding efficiency can be implemented.
The motion information acquiring module 205 may decide the optimum motion vector using a value obtained by transforming a difference between the predicted image signal 11 and the input image signal 10. The optimum motion vector may be decided in consideration of the magnitude of the motion vector and the code amounts of the motion vector and the reference frame number, or the optimum motion vector may be decided using the mathematical formula (1). The matching method may be performed based on search range information provided from the outside of the image encoding apparatus, or the matching method may hierarchically be performed in each pixel accuracy level. The motion information provided by the encoding controller 150 may be used as the output 21 of the motion information acquiring module 205 without performing search processing.
The predictor 101 in
A multiplexer 117 receives pieces of encoded data 14A, 14B, 14D, and 14E from a parameter encoder 114, a transform coefficient encoder 115, the selection block encoder 216, and a motion information encoder 217, and multiplexes the pieces of encoded data 14A, 14B, 14D, and 14E.
As described above, the image encoding apparatus of the second embodiment selectively switches between the first predictor 101 of the first embodiment and the second predictor 202 in which the prediction method, such as H.264, is used such that the encoding cost is reduced, and performs compression encoding of the input image signal. Accordingly, in the image encoding apparatus of the second embodiment, the encoding efficiency is improved compared with the image encoding apparatus of the first embodiment.
The image decoding apparatus in
In the third embodiment, the pixel block (for example, the macroblock) to be decoded is simply referred to as a decoding target block. An image frame including the decoding target block is referred to as a decoding target frame.
In the encoded sequence decoder 301, the decoding is performed in each frame or field by a syntax analysis based on the syntax. Specifically, the encoded sequence decoder 301 sequentially performs variable length decoding of an encoded sequence of each syntax, and decodes decoding parameters related to the decoding target block. The decoding parameters include transform coefficient information 33, selection block information 61, and the pieces of prediction information, such as the block size information and the prediction mode information.
In the third embodiment, the decoding parameters include the transform coefficient information 33, the selection block information 61, and the prediction information, as well as all the parameters necessary to decode the information on the transform coefficient, the information on the quantization, and the like. The prediction information, the information on the transform coefficient, and the information on the quantization are input as control information 71 to the decoding controller 350. The decoding controller 350 provides the decoding control information 70, which includes the parameters necessary to decode the prediction information, the quantization parameter, and the like, to each module of the image decoder 300.
The encoded sequence decoder 301 decodes the encoded data 80 to obtain the prediction information and the selection block information 61. The motion information 38 including the motion vector and the reference frame number need not be decoded.
The transform coefficient 33 decoded by the encoded sequence decoder 301 is transmitted to the inverse-quantization/inverse-transform module 302. Various pieces of information, namely, the quantization parameter and a quantization matrix which are decoded by the encoded sequence decoder 301 are provided to the decoding controller 350, and loaded on the inverse-quantization/inverse-transform module 302 during the inverse quantization. The inverse-quantization/inverse-transform module 302 inversely quantizes the transform coefficient information 33 according to the loaded information on the quantization, and performs the inverse transform processing (for example, the inverse discrete cosine transform) to generate a prediction error signal 34. The inverse transform processing performed by the inverse-quantization/inverse-transform module 302 in
The prediction error signal 34 restored by the inverse-quantization/inverse-transform module 302 is input to the adder 303. The adder 303 generates a decoded image signal 36 by adding the prediction error signal 34 and a predicted image signal 35 generated by the predictor 305. The generated decoded image signal 36 is output from the image decoder 300, and temporarily stored in the output buffer 308. Then the decoded image signal 36 is output in output timing managed by the decoding controller 350. The decoded image signal 36 is also stored as a reference image signal 37 in the frame memory 304. The reference image signal 37 is sequentially read in each frame or field from the frame memory 304 and input to the predictor 305.
The available-block acquiring module 307 receives reference motion information 39 from the motion information memory 306, and outputs available block information 60. An operation of the available-block acquiring module 307 is identical to that of the available-block acquiring module 109 (
The motion information memory 306 receives the motion information 38 output from the predictor 305, and temporarily stores the motion information 38 as the reference motion information 39.
The motion reference block and the available block of the third embodiment will be described below. The motion reference block is a candidate block that is selected from the already-decoded region according to a method previously defined by the image encoding apparatus and the image decoding apparatus.
The spatial-direction motion reference block is not limited to the example in
As illustrated in
Some of the temporal-direction motion reference blocks TA to TE may be overlapped as illustrated in
In the method for selecting the motion reference block, any number of motion reference blocks may be selected, and the motion reference block may be selected from any position, when both the image encoding apparatus and the image decoding apparatus share the pieces of information on the numbers and the positions of the spatial-direction and temporal-direction motion reference blocks. It is not always necessary that the size of the motion reference block be identical to that of the decoding target block. For example, as illustrated in
The available block will be described below. The available block is a pixel block that is selected from the motion reference blocks and whose motion information can be applied to the decoding target block. The available blocks have different pieces of motion information. For example, the available block is selected by performing the available block determination processing in
The available-block acquiring module 307 will be described below. The available-block acquiring module 307 has the same function as the available-block acquiring module 109 of the first embodiment, acquires the reference motion information 39 from the motion information memory 306, and outputs the available block information 60 indicating whether each motion reference block is an available block or an unavailable block.
An operation of the available-block acquiring module 307 will be described with reference to the flowchart in
When the motion reference block p has the motion information in Step S801, the available-block acquiring module 307 selects a motion reference block q (referred to as an available block q) that is already determined to be an available block (Step S802). At this point, q is smaller than p. The available-block acquiring module 307 then compares the motion information on the motion reference block p to the pieces of motion information on all the available blocks q to determine whether the motion reference block p and any available block q have identical motion information (S803). When the motion information on the motion reference block p is identical to that on some available block q, the flow goes to Step S805, and the available-block acquiring module 307 determines that the motion reference block p is an unavailable block. When the motion information on the motion reference block p is not identical to the motion information on any of the available blocks q, the available-block acquiring module 307 determines that the motion reference block p is an available block in Step S804.
Whether each motion reference block is the available block or the unavailable block is determined by performing the available block determination processing to all the motion reference blocks, and the available block information 60 is generated.
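The determination loop over Steps S801 to S805 can be sketched as follows. The list-of-tuples representation of the motion information is a hypothetical shape chosen for illustration; only the control flow (skip blocks without motion information, skip blocks whose motion information duplicates an already-selected available block) comes from the description above.

```python
def select_available_blocks(motion_ref_blocks):
    # motion_ref_blocks: per-block motion information, with None standing in
    # for a block that has no motion information (e.g. an intra-coded block).
    available = []                  # indices q already determined available
    for p, info_p in enumerate(motion_ref_blocks):
        if info_p is None:          # S801 fails -> S805: unavailable
            continue
        # S802/S803: compare against every available block q (q < p)
        if any(info_p == motion_ref_blocks[q] for q in available):
            continue                # duplicate motion information -> S805
        available.append(p)         # S804: block p is an available block
    return available
```

Running the sketch on four motion reference blocks where block 0 is intra-coded and block 2 duplicates block 1 leaves blocks 1 and 3 as the available blocks.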
In the case that the intra prediction encoding is performed to at least one of the blocks in the temporal-direction motion reference block p in Step S801 in
Thus, whether the motion information 38 on the motion reference block p is identical to the motion information 38 on the available block q is determined in Step S803. In the examples in
The determination that the motion information on the motion reference block p is identical to the motion information on the available block q is not limited to the case that the motion vectors included in the pieces of motion information are exactly identical to each other. For example, when a norm of a difference between the two motion vectors falls within a predetermined range, the two pieces of motion information may be regarded as identical.
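The norm-based relaxation of the identity test can be sketched as below. The choice of the L1 norm and the threshold value are assumptions; the text only requires that some norm of the motion-vector difference fall within a predetermined range.

```python
def nearly_identical(mv_p, mv_q, threshold=1):
    # Hypothetical relaxed identity test for Step S803: treat two motion
    # vectors as identical when the L1 norm of their difference is within
    # the predetermined range [0, threshold].
    return abs(mv_p[0] - mv_q[0]) + abs(mv_p[1] - mv_q[1]) <= threshold
```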
The parameter decoder 321 receives encoded data 80A including the parameters related to the block size information and the quantization from the separator, and decodes the encoded data 80A to generate the control information 71. The transform coefficient decoder 322 receives the encoded transform coefficient 80B from the separator 320, and decodes the encoded transform coefficient 80B to obtain the transform coefficient information 33. The encoded data 80C related to the selection block and the available block information 60 are input to the selection block decoder 323, and the selection block decoder 323 outputs the selection block information 61. As illustrated in
The predictor 305 will be described in detail with reference to
As illustrated in
The available block information 60, the selection block information 61, the reference motion information 39, and the reference image signal 37 are input to the predictor 305, and the predictor 305 outputs the predicted image signal 35 and the motion information 38. The spatial-direction-motion-information acquiring module 310 and the temporal-direction-motion-information acquiring module 311 have the same functions as the spatial-direction-motion-information acquiring module 110 and the temporal-direction-motion-information acquiring module 111 of the first embodiment, respectively. Using the available block information 60 and the reference motion information 39, the spatial-direction-motion-information acquiring module 310 generates motion information 38A including the motion information and index of each available block located in the spatial direction. Using the available block information 60 and the reference motion information 39, the temporal-direction-motion-information acquiring module 311 generates motion information 38B including the motion information and index of each available block located in the temporal direction.
The motion information selector switch 312 selects one of the motion information 38A from the spatial-direction-motion-information acquiring module 310 and the motion information (or the motion information group) 38B from the temporal-direction-motion-information acquiring module 311 according to the selection block information 61, and obtains the motion information 38. The selected motion information 38 is transmitted to the motion compensator 313 and the motion information memory 306. According to the selected motion information 38, the motion compensator 313 performs the same motion compensation prediction as the motion compensator 113 of the first embodiment to generate the predicted image signal 35.
Because the motion-vector scaling function of the motion compensator 313 is identical to that of the first embodiment, the description is omitted.
A syntax element that is not defined in the embodiment can be inserted in a line space of the table in
As described above, the image decoding apparatus of the third embodiment decodes the image that is encoded by the image encoding apparatus of the first embodiment. Accordingly, in the image decoding of the third embodiment, a high-quality decoded image can be reproduced from a relatively small amount of encoded data.
The predictor 405 of the fourth embodiment selectively switches between the prediction method (the first prediction method) in which the motion compensation is performed using the motion information possessed by the selection block and the prediction method (the second prediction method), such as H.264, in which the motion compensation is performed to the decoding target block using one motion vector, and generates a predicted image signal 35.
As to a syntax structure of the fourth embodiment, only differences from that of the third embodiment will mainly be described below.
As described above, the image decoding apparatus of the fourth embodiment decodes the image that is encoded by the image encoding apparatus of the second embodiment. Accordingly, in the image decoding of the fourth embodiment, a high-quality decoded image can be reproduced from a relatively small amount of encoded data.
According to at least one of the embodiments, the encoding efficiency can be improved.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
For example, the same effect is obtained in the following modifications of the first to fourth embodiments.
(1) In the first to fourth embodiments, by way of example, the processing target frame is divided into rectangular blocks, such as 16-by-16-pixel blocks, and the encoding or the decoding is sequentially performed from the upper-left pixel block on the screen in
(2) In the first to fourth embodiments, a luminance signal and a color-difference signal are not distinguished from each other, but a comprehensive description is made about a color signal component. The luminance signal may be different from the color-difference signal in the prediction processing, or the luminance signal may be identical to the color-difference signal in the prediction processing. In the case that different pieces of prediction processing are used, the prediction method selected for the color-difference signal is encoded and decoded by the same method as the luminance signal.
Various modifications can be made without departing from the scope of the embodiments.
This application is a continuation of U.S. application Ser. No. 16/890,734, filed Jun. 2, 2020, the entire contents of which are incorporated herein by reference. U.S. application Ser. No. 16/890,734 is a Continuation Application of U.S. application Ser. No. 15/698,336, filed Sep. 7, 2017, which is a Continuation Application of U.S. application Ser. No. 14/190,909, filed Feb. 26, 2014 which is a continuation of U.S. application Ser. No. 13/647,124, filed Oct. 8, 2012, which is a Continuation Application of PCT Application No. PCT/JP2010/056400, filed Apr. 8, 2010, the entire contents of each of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
7023916 | Pandel | Apr 2006 | B1 |
7233621 | Jeon | Jun 2007 | B2 |
7734151 | Park et al. | Jun 2010 | B2 |
8699562 | Park et al. | Apr 2014 | B2 |
10091525 | Shiodera et al. | Oct 2018 | B2 |
11889107 | Shiodera et al. | Jan 2024 | B2 |
20020006162 | Nakao et al. | Jan 2002 | A1 |
20020094028 | Kimoto | Jul 2002 | A1 |
20030020677 | Nakano | Jan 2003 | A1 |
20030081675 | Sadeh et al. | May 2003 | A1 |
20030206583 | Srinivasan | Nov 2003 | A1 |
20040001546 | Tourapis et al. | Jan 2004 | A1 |
20040008784 | Kikuchi et al. | Jan 2004 | A1 |
20040013308 | Jeon et al. | Jan 2004 | A1 |
20040013309 | Choi et al. | Jan 2004 | A1 |
20040047418 | Tourapis et al. | Mar 2004 | A1 |
20040057515 | Koto et al. | Mar 2004 | A1 |
20040151252 | Sekiguchi et al. | Aug 2004 | A1 |
20040223548 | Kato | Nov 2004 | A1 |
20040233990 | Sekiguchi | Nov 2004 | A1 |
20050041740 | Sekiguchi et al. | Feb 2005 | A1 |
20050117646 | Joch et al. | Jun 2005 | A1 |
20050123207 | Marpe | Jun 2005 | A1 |
20050147162 | Mihara | Jul 2005 | A1 |
20050201633 | Moon et al. | Sep 2005 | A1 |
20050207490 | Wang et al. | Sep 2005 | A1 |
20060013299 | Sato et al. | Jan 2006 | A1 |
20060045186 | Koto et al. | Mar 2006 | A1 |
20060188020 | Wang | Aug 2006 | A1 |
20060198444 | Wada | Sep 2006 | A1 |
20060209960 | Katayama et al. | Sep 2006 | A1 |
20060280253 | Tourapis et al. | Dec 2006 | A1 |
20070014358 | Tourapis et al. | Jan 2007 | A1 |
20070019726 | Cha et al. | Jan 2007 | A1 |
20070086525 | Asano | Apr 2007 | A1 |
20070121731 | Tanizawa et al. | May 2007 | A1 |
20070146380 | Nystad et al. | Jun 2007 | A1 |
20070160140 | Fujisawa et al. | Jul 2007 | A1 |
20070206679 | Lim | Sep 2007 | A1 |
20070211802 | Kikuchi et al. | Sep 2007 | A1 |
20080002770 | Ugur | Jan 2008 | A1 |
20080008242 | Lu et al. | Jan 2008 | A1 |
20080031328 | Kimoto | Feb 2008 | A1 |
20080037657 | Srinivasan | Feb 2008 | A1 |
20080043842 | Nakaishi | Feb 2008 | A1 |
20080101474 | Chiu et al. | May 2008 | A1 |
20080117976 | Lu et al. | May 2008 | A1 |
20080152000 | Kaushik | Jun 2008 | A1 |
20080159401 | Lee et al. | Jul 2008 | A1 |
20080181309 | Lee et al. | Jul 2008 | A1 |
20080273599 | Park et al. | Nov 2008 | A1 |
20090003446 | Wu et al. | Jan 2009 | A1 |
20090010553 | Sagawa | Jan 2009 | A1 |
20090022228 | Wang et al. | Jan 2009 | A1 |
20090034618 | Fu et al. | Feb 2009 | A1 |
20090052543 | Wu | Feb 2009 | A1 |
20090067504 | Zheludkov et al. | Mar 2009 | A1 |
20090074077 | Lakus-Becker | Mar 2009 | A1 |
20090110077 | Amano et al. | Apr 2009 | A1 |
20090245376 | Choi et al. | Oct 2009 | A1 |
20090290643 | Yang | Nov 2009 | A1 |
20090304084 | Hallapuro | Dec 2009 | A1 |
20090310682 | Chono | Dec 2009 | A1 |
20100027655 | Matsuo et al. | Feb 2010 | A1 |
20100061447 | Tu et al. | Mar 2010 | A1 |
20100080296 | Lee | Apr 2010 | A1 |
20100086052 | Park et al. | Apr 2010 | A1 |
20100098173 | Horiuchi et al. | Apr 2010 | A1 |
20100118939 | Shimizu et al. | May 2010 | A1 |
20100135387 | Divorra Escoda | Jun 2010 | A1 |
20100142617 | Koo et al. | Jun 2010 | A1 |
20100158129 | Lai | Jun 2010 | A1 |
20100177824 | Koo et al. | Jul 2010 | A1 |
20100195723 | Ikai et al. | Aug 2010 | A1 |
20100220790 | Jeon et al. | Sep 2010 | A1 |
20100239002 | Park et al. | Sep 2010 | A1 |
20100296582 | Shimizu et al. | Nov 2010 | A1 |
20110038420 | Lee et al. | Feb 2011 | A1 |
20110044550 | Tian et al. | Feb 2011 | A1 |
20110080954 | Bossen et al. | Apr 2011 | A1 |
20110129016 | Sekiguchi et al. | Jun 2011 | A1 |
20110135006 | Yamamoto et al. | Jun 2011 | A1 |
20110142132 | Tourapis et al. | Jun 2011 | A1 |
20110142133 | Takahashi | Jun 2011 | A1 |
20110176615 | Lee et al. | Jul 2011 | A1 |
20110194609 | Rusert | Aug 2011 | A1 |
20110206119 | Bivolarsky et al. | Aug 2011 | A1 |
20110206132 | Bivolarsky et al. | Aug 2011 | A1 |
20110211640 | Kim et al. | Sep 2011 | A1 |
20110222601 | Suzuki et al. | Sep 2011 | A1 |
20110249749 | Takahashi et al. | Oct 2011 | A1 |
20110286523 | Dencher | Nov 2011 | A1 |
20120044990 | Bivolarsky et al. | Feb 2012 | A1 |
20120128073 | Asaka et al. | May 2012 | A1 |
20120147966 | Lee et al. | Jun 2012 | A1 |
20120169519 | Ugur | Jul 2012 | A1 |
20120281764 | Lee et al. | Nov 2012 | A1 |
20130028328 | Shiodera et al. | Jan 2013 | A1 |
20130058415 | Lee et al. | Mar 2013 | A1 |
20130148737 | Tourapis et al. | Jun 2013 | A1 |
20130279593 | Lee et al. | Oct 2013 | A1 |
20130279594 | Lee et al. | Oct 2013 | A1 |
20140016705 | Lee et al. | Jan 2014 | A1 |
20140177727 | Asaka | Jun 2014 | A1 |
20140185685 | Asaka | Jul 2014 | A1 |
20170171558 | Huang et al. | Jun 2017 | A1 |
Number | Date | Country |
---|---|---|
2 969 723 | Dec 2010 | CA |
2 969 723 | Aug 2019 | CA |
1471320 | Jan 2004 | CN |
1615656 | May 2005 | CN |
1692653 | Nov 2005 | CN |
1750658 | Mar 2006 | CN |
1889687 | Jan 2007 | CN |
1898964 | Jan 2007 | CN |
101023672 | Aug 2007 | CN |
101083770 | Dec 2007 | CN |
101099394 | Jan 2008 | CN |
101361370 | Feb 2009 | CN |
101573984 | Nov 2009 | CN |
101631247 | Jan 2010 | CN |
101631248 | Jan 2010 | CN |
0 579 319 | Jan 1994 | EP |
2149262 | Feb 2010 | EP |
2 677 753 | Dec 2013 | EP |
2 677 753 | Dec 2013 | EP |
6-168330 | Jun 1994 | JP |
8-18976 | Jan 1996 | JP |
10-224800 | Aug 1998 | JP |
2000-50279 | Feb 2000 | JP |
2004-23458 | Jan 2004 | JP |
2004-040785 | Feb 2004 | JP |
2004-56823 | Feb 2004 | JP |
2004-104159 | Apr 2004 | JP |
2004-165703 | Jun 2004 | JP |
2004-208259 | Jul 2004 | JP |
2005-124001 | May 2005 | JP |
4020789 | Oct 2007 | JP |
2008-278091 | Nov 2008 | JP |
2010-010950 | Jan 2010 | JP |
2013-517669 | May 2013 | JP |
2013-517734 | May 2013 | JP |
5444497 | Dec 2013 | JP |
2014-90459 | May 2014 | JP |
2014-131293 | Jul 2014 | JP |
2014-131294 | Jul 2014 | JP |
2014-131295 | Jul 2014 | JP |
WO 2006052577 | May 2006 | WO |
WO 2008127597 | Oct 2008 | WO |
WO 2008133455 | Nov 2008 | WO |
WO 2010004939 | Jan 2010 | WO |
WO 2010146696 | Dec 2010 | WO |
WO 2011087321 | Jul 2011 | WO |
WO 2011090314 | Jul 2011 | WO |
WO 2011125211 | Oct 2011 | WO |
Entry |
---|
International Search Report dated Jul. 13, 2010 for PCT/JP2010/056400 filed on Apr. 8, 2010 (with English translation). |
International Written Opinion dated Jul. 13, 2010 for PCT/JP2010/056400 filed on Apr. 8, 2010. |
Takeshi Chujoh, Description of video coding technology proposal by Toshiba, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, JCTVC-A117r1, Apr. 15, 2010, pp. 4-6. |
ITU-T Rec.H.264 (Mar. 2005), Chap. 8.4.1 “Derivation process for motion components and reference indices”. |
Japanese Office Action dated Apr. 2, 2013 in Patent Application No. 2012-509253 (with English translation). |
English translation of the International Preliminary Report on Patentability dated Nov. 29, 2012, in PCT/JP2010/056400, filed Apr. 8, 2010. |
Written Opinion of the International Searching Authority dated Jul. 13, 2010, in PCT/JP2010/056400, filed Apr. 8, 2010 (submitting English-language translation only, previously filed Oct. 8, 2012). |
ITU-T Q-6/SG16 Document, VCEG-AC06, Joel Jung, Jul. 17, 2006, “Competition-Based Scheme for Motion Vector Selection and Coding”. |
Office Action dated Jan. 28, 2014 in Japanese Patent Application No. 2013-116884 (with English language translation). |
Office Action dated Feb. 25, 2014 in Japanese Patent Application No. 2014-010560 with English language translation. |
Office Action dated Feb. 25, 2014 in Japanese Patent Application No. 2014-010561 with English language translation. |
Office Action dated Feb. 25, 2014 in Japanese Patent Application No. 2014-010562 with English language translation. |
Extended European Search Report dated Jun. 17, 2014 in Patent Application No. 10849496.4. |
Guillaume Laroche, et al., “RD optimized coding for motion vector predictor selection”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, No. 9, XP011231739, Sep. 2008, pp. 1247-1257. |
“Video coding using extended block sizes”, Qualcomm Inc., International Telecommunications Union, Telecommunications Standardization Sector, COM 16-C 123-E, XP030003764, Jan. 2009, pp. 1-4. |
Sung Deuk Kim, et al., “An efficient motion vector coding scheme based on minimum bitrate prediction”, IEEE Transactions on Image Processing, vol. 8, No. 8, XP011026355, Aug. 1999, pp. 1117-1120. |
Japanese Office Action dated Jul. 29, 2014, in Japan Patent Application No. 2013-186629 (with English translation). |
Combined Chinese Office Action and Search Report dated Sep. 23, 2014 in Patent Application No. 201080066017.7 (with English language translation). |
Combined Office Action and Search Report dated Oct. 10, 2014 in Chinese Patent Application No. 201080066019.6 (with English translation). |
Combined Chinese Office Action and Search Report dated Sep. 6, 2015 in Patent Application No. 201310142052.8 (with English language translation). |
Kemal Ugur, et al., “Appendix to Description of Video Coding Technology Proposal by Tandberg Nokia Ericsson”, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, No. JCTVC-A119, Apr. 2010, pp. 1-55. |
Jungsun Kim, et al., “Encoding Complexity Reduction for Intra Prediction by Disabling NxN Partition”, LG Electronics, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, No. JCTVC-C218, Oct. 2010, pp. 1-5. |
Combined Chinese Office Action and Search Report dated Jun. 24, 2016 in Patent Application No. 201410051546.X (with unedited computer generated English translation and English translation of categories of cited documents). |
Combined Chinese Office Action and Search Report dated Jun. 28, 2016 in Patent Application No. 201410051029.2 (with unedited computer generated English translation and English translation of categories of cited documents). |
Combined Chinese Office Action and Search Report dated Jul. 22, 2016 in Patent Application No. 201410051514.X (with unedited computer generated English translation and English translation of categories of cited documents). |
Office Action dated Sep. 7, 2016 in U.S. Appl. No. 14/190,779. |
Office Action dated Dec. 13, 2016 in European Patent Application No. 10849496.4. |
“Test Model under Consideration”, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 2nd Meeting, Document: JCTVC-B205, Jul. 21-28, 2010, 189 pages. |
Seyoon Jeong et al., “TE11: Cross-check result of merge/skip (3.2c)”, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 3rd Meeting, Document: JCTVC-C191, Oct. 7-15, 2010, with enclosures: 1) JCTVC-C191-Cross Check Result; 2) JCTVC-C191 Decoding Time r1, and 3) JCTVC-C191 Time Comparison, 28 pages. |
U.S. Office Action dated Jan. 17, 2017, issued in U.S. Appl. No. 15/350,265. |
Joel Jung et al., “Competition-Based Scheme for Motion Vector Selection and Coding”, ITU—Telecommunications Standardization Sector, Study Group 16, Question 6, Jul. 17-18, 2006, 8 pages with cover page. |
Office Action dated Jan. 31, 2017 in Japanese Patent Application No. 2016-028133. |
“Series H: Audiovisual and Multimedia Systems: Infrastructure of audiovisual services—Coding of moving video: High efficiency video coding” ITU-T Telecommunication Standardization Sector of ITU, H.265, Apr. 2013, 26 pages. |
Takeshi Chujoh, et al., “Description of video coding technology proposal by Toshiba” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, Apr. 2010, 6 pages. |
Office Action dated Mar. 10, 2017 on co-pending U.S. Appl. No. 13/647,140. |
Detlev Marpe, et al., “Context-Based Adaptive Binary Arithmetic Coding in the H.264/AVC Video Compression Standard” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, No. 7, Jul. 2003, pp. 620-636. |
U.S. Office Action, dated Aug. 9, 2017 in U.S. Appl. No. 13/647,140. |
European Office Action dated May 8, 2019 in European Patent Application No. 18 152 576.7-1208. |
U.S. Office Action dated Jun. 3, 2019 in the related U.S. Appl. No. 14/190,779. |
U.S. Office Action dated Jul. 11, 2019 in the related U.S. Appl. No. 16/250,430. |
Office Action mailed Jul. 2, 2024 in co-pending U.S. Appl. No. 18/536,396. |
Number | Date | Country | |
---|---|---|---|
20210218986 A1 | Jul 2021 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16890734 | Jun 2020 | US |
Child | 17219065 | US | |
Parent | 15698336 | Sep 2017 | US |
Child | 16890734 | US | |
Parent | 14190909 | Feb 2014 | US |
Child | 15698336 | US | |
Parent | 13647124 | Oct 2012 | US |
Child | 14190909 | US | |
Parent | PCT/JP2010/056400 | Apr 2010 | WO |
Child | 13647124 | US |