High-efficiency video coding (HEVC) is a block-based hybrid spatial and temporal predictive coding scheme. Similar to other video coding standards, such as Moving Picture Experts Group (MPEG)-1, MPEG-2, and MPEG-4, HEVC supports intra-coded pictures, such as I pictures, and inter-coded pictures, such as B pictures. In HEVC, P and B pictures are consolidated into a general B picture that can be used as a reference picture.
Intra-picture is coded without referring to any other pictures. Thus, only spatial prediction is allowed for a coding unit (CU)/prediction unit (PU) inside an intra-picture. Inter-picture, however, supports both intra- and inter-prediction. A CU/PU in an inter-picture may be either spatially or temporally predictive coded. Temporal predictive coding may reference pictures that were previously coded.
Temporal motion prediction is an effective method to increase the coding efficiency and provides high compression. HEVC uses a translational model for motion prediction. According to the translational model, a prediction signal for a given block in a current picture is generated from a corresponding block in a reference picture. The coordinates of the reference block are given by a motion vector that describes the translational motion along horizontal (x) and vertical (y) directions that would be added/subtracted to/from the coordinates of the current block. A decoder needs the motion vector to decode the compressed video.
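As an illustration of the translational model for integer-pel motion, the prediction for a block is simply the block at the motion-vector-displaced location in the reference picture. The function name and the NumPy array representation below are illustrative, not part of HEVC:

```python
import numpy as np

def predict_block(reference, x, y, mvx, mvy, width, height):
    """Form the prediction for a width x height block whose top-left corner
    is (x, y) in the current picture by copying the block displaced by the
    integer motion vector (mvx, mvy) in the reference picture."""
    rx, ry = x + mvx, y + mvy
    return reference[ry:ry + height, rx:rx + width]

# A current block at (8, 8) with motion vector (-2, 1) is predicted from
# the reference block whose top-left corner is (6, 9).
ref = np.arange(32 * 32).reshape(32, 32)
pred = predict_block(ref, 8, 8, -2, 1, 4, 4)
```

The decoder performs the same copy, which is why the motion vector must be transmitted in the bitstream.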
The pixels in the reference frame are used as the prediction. In one example, the motion may be captured in integer pixels. However, not all objects move with the spacing of integer pixels (also referred to as pel). For example, since an object motion is completely unrelated to the sampling grid, sometimes the object motion is more like sub-pixel (fractional) motion than a full-pel one. Thus, HEVC allows for motion vectors with sub-pixel accuracy.
In order to estimate and compensate sub-pixel displacements, the image signal on these sub-pixel positions is generated by an interpolation process. In HEVC, sub-pixel interpolation is performed using finite impulse response (FIR) filters. Generally, the filter may have 8 taps to determine the sub-pixel values for sub-pixel positions, such as half-pel and quarter-pel positions. The taps of an interpolation filter weight the integer pixels with coefficient values to generate the sub-pixel signals. Different coefficients may produce different compression performance in signal distortion and noise.
The fractional-pel and half-pel pixels may be interpolated using the values of spatial neighboring full-pel pixels. For example, the half-pel pixel H may be interpolated using the values of full-pel pixels L3, L2, L1, L0, R0, R1, R2, and R3. Different coefficients may also be used to weight the values of the neighboring pixels and provide different characteristics of filtering.
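As a sketch of the interpolation just described, the half-pel sample H between L0 and R0 can be computed from the eight neighboring full-pel pixels. The coefficients shown are the 8-tap half-sample luma filter weights used by HEVC (normalized by 64); the function name is illustrative:

```python
def interpolate_half_pel(pixels):
    """Interpolate the half-pel sample H between full-pel pixels L0 and R0
    from the eight neighbors [L3, L2, L1, L0, R0, R1, R2, R3] with an
    8-tap FIR filter whose coefficients sum to 64."""
    taps = [-1, 4, -11, 40, 40, -11, 4, -1]
    acc = sum(c * p for c, p in zip(taps, pixels))
    return (acc + 32) >> 6  # round and normalize by 64

# A flat row of equal pixels interpolates to the same value.
print(interpolate_half_pel([100] * 8))  # 100
```

On a linear ramp of pixel values 0 through 7, the filter returns 4, the rounded midpoint between L0 = 3 and R0 = 4, showing that the large central taps dominate.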
A uniform sub-pixel spacing may be used. For example, sub-pixel phase offsets are allowed that correspond to quarter, half, and three-quarter pixel offsets. FIG. 2 is an example of a fixed, uniform, four position sub-pixel motion vector grid.
A motion vector (MV) is a two-dimensional vector (MVX, MVY) that is used for inter prediction that provides an offset from the coordinates in the decoded picture to the coordinates in a reference picture. The motion vector may be represented by integer numbers, but the accuracy may be at quarter-pel resolution. That is, if one component of the motion vector (either MVX or MVY) has a remainder of “0” when dividing by 4, it is an integer-pel motion vector component; if one component of the motion vector has a remainder of “1” when dividing by 4, it is a quarter-pel motion vector component; if one component of the motion vector has a remainder of “2” when dividing by 4, it is a half-pel motion vector component; and if one component of the motion vector has a remainder of “3” when dividing by 4, it is a three-quarter-pel motion vector component.
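The remainder-based classification above can be sketched as a small helper (the function name is illustrative):

```python
def mv_component_resolution(mv_comp):
    """Classify a quarter-pel motion vector component by its remainder
    when dividing by 4, per the rules in the text."""
    names = {0: "integer-pel", 1: "quarter-pel",
             2: "half-pel", 3: "three-quarter-pel"}
    return names[mv_comp % 4]

print(mv_component_resolution(4))  # integer-pel (an offset of 1 pel)
print(mv_component_resolution(6))  # half-pel (an offset of 1.5 pel)
```

Python's `%` operator returns a non-negative remainder even for negative components, so the classification also holds for motion vectors pointing left or up.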
Motion vectors are predictively coded with predictors chosen from motion vectors of spatial neighboring blocks and/or temporal collocated blocks. The motion vectors of these spatial neighboring blocks and temporal collocated blocks may point to different reference pictures that have a different temporal distance from the reference picture of a current block. To have the motion vector of the spatial neighboring blocks and temporal collocated blocks point to the reference picture of the current block, motion vector scaling is used to scale the motion vector to point to the reference picture of the current block. The scaling uses the differences in temporal distance.
On a uniform motion vector grid, scaling of the motion vector may be very close to scaling of the corresponding motion offset. For example, the motion vector scaling is performed according to temporal distance between the current picture and the reference pictures. Given a current block in a current picture, the motion vector scaling theoretically could be performed as:
MVPscaled=(TDref×MVP)/TDP (1)
where MVP is the motion vector predictor for the current block, TDref is the temporal distance between the current picture and the reference picture for the current block, and TDP is the temporal distance between the picture where the motion vector predictor MVP resides and the reference picture that MVP points to.
If infinite precision is allowed for motion vectors MVP and MVPscaled, the above equation is accurate. However, if the precision is only at quarter-pel, a good approximation is necessary. For example, assume a motion vector component has a value of 1 on a four position sub-pixel motion vector grid, and the temporal distances TDref and TDP are equal to 4 and 1, respectively. By using the scaling equation (1), the motion vector component of value 1 is scaled to:
MVPscaled=(TDref×MVP)/TDP=(4×1)/1=4
On a four position sub-pixel motion vector grid, a motion vector component of value 4 means a motion offset of 1 pel. On a uniform four position sub-pixel motion vector grid, scaling the corresponding motion offset gives:
MVPscaled=(TDref×MVP)/TDP=(4×(¼))/1=1 (pel)
As seen, for this example, the scaling of the motion vector component exactly matches the scaling of the motion offset, as both give a motion offset of 1 pel. However, the problem with a uniform distribution of sub-pixel positions is that it may not be optimal for a given set of filter restrictions, such as the number of taps or the power spectral density of the reference block.
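The quarter-pel scaling arithmetic of equation (1) can be sketched as follows; integer division stands in for the finite-precision approximation discussed above, and the function name is illustrative:

```python
def scale_mvp(mvp, td_ref, td_p):
    """Scale a motion vector predictor by the ratio of temporal distances,
    per equation (1): MVPscaled = (TDref x MVP) / TDP."""
    return (td_ref * mvp) // td_p

# The example from the text: a component value of 1 (a quarter-pel offset)
# with TDref = 4 and TDP = 1 scales to 4, i.e. a motion offset of 1 pel.
print(scale_mvp(1, 4, 1))  # 4
```

On a uniform grid, scaling the grid value and scaling the physical offset agree, which is exactly the property the non-uniform grid breaks.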
In one embodiment, a method determines a scaled motion vector for a first block. A motion vector for a second block is determined where values associated with the motion vector are represented on a non-uniform motion vector grid. The second block is a spatially neighboring block or a temporal co-located block to the first block, and the non-uniform motion vector grid has a first number of positions, the non-uniform motion vector grid having non-uniform sub-pixel phase offsets between integer pixels. The method then maps the motion vector values for the second block to a higher accuracy uniform motion vector grid and scales the motion vector values for the second block on the uniform motion vector grid. The uniform motion vector grid has a second number of positions greater than the first number of positions due to the presence of more sub-pixel positions between the integer pixels than the non-uniform motion vector grid and, unlike the non-uniform motion vector grid, provides a uniform distribution of sub-pixel positions between the integer pixels. The scaled motion vector values on the uniform motion vector grid are mapped to the non-uniform motion vector grid. The scaled motion vector values on the non-uniform motion vector grid are associated with the first block for a temporal prediction process.
In one embodiment, a method is provided that determines a scaled motion vector for a first block, the method comprising: receiving a bitstream from an encoder at a decoder; determining a motion vector for a second block that is a spatially neighboring block or a temporal co-located block to the first block using information in the bitstream, wherein values associated with the motion vector are represented on a non-uniform motion vector grid having a first number of positions, the non-uniform motion vector grid having non-uniform sub-pixel phase offsets between integer pixels; mapping, by the decoder, the motion vector values for the second block to a higher accuracy uniform motion vector grid, the uniform motion vector grid having a second number of positions greater than the first number of positions due to the presence of more sub-pixel positions between the integer pixels than the non-uniform motion vector grid and providing a uniform distribution of sub-pixel positions between the integer pixels; scaling, by the decoder, the motion vector values for the second block on the uniform motion vector grid to generate scaled motion vector values; and mapping, by the decoder, the scaled motion vector values on the uniform motion vector grid to the non-uniform motion vector grid, wherein the scaled motion vector values on the non-uniform motion vector grid are associated with the first block for a temporal prediction process to decode the bitstream.
In one embodiment, an apparatus determines a scaled motion vector for a first block, the apparatus comprising: one or more computer processors; and a non-transitory computer-readable storage medium. The medium comprises instructions, that when executed, control the one or more computer processors to determine a motion vector for a second block, wherein the second block is a spatially neighboring block or a temporal co-located block to the first block and values associated with the motion vector are represented on a non-uniform motion vector grid having a first number of positions, the non-uniform motion vector grid having non-uniform sub-pixel phase offsets between integer pixels. The instructions also control the one or more computer processors to map the motion vector values to a uniform motion vector grid that is of a higher accuracy than the non-uniform motion vector grid as it has a second number of positions greater than the first number of positions due to the presence of more sub-pixel positions between the integer pixels than the non-uniform motion vector grid and provides a uniform distribution of sub-pixel positions between the integer pixels; scale the motion vector values for the second block on the uniform motion vector grid to generate scaled motion vector values; and map the scaled motion vector values on the uniform motion vector grid to the non-uniform motion vector grid. The scaled motion vector values on the non-uniform motion vector grid represent a scaled motion vector that is associated with the first block for a temporal prediction process.
In one embodiment, an apparatus determines a scaled motion vector for a first block. The apparatus comprises one or more computer processors and a non-transitory computer-readable storage medium comprising instructions, that when executed, control the one or more computer processors to be configured for receiving a bitstream from an encoder at a decoder; determining a motion vector for a second block using information in the bitstream, wherein the second block is a spatially neighboring block or a temporal co-located block to the first block and the motion vector values are represented on a non-uniform motion vector grid having a first number of positions, the non-uniform motion vector grid having non-uniform sub-pixel phase offsets between integer pixels; mapping the motion vector values to a uniform motion vector grid having a second number of positions greater than the first number of positions due to the presence of more sub-pixel positions between the integer pixels than the non-uniform motion vector grid and providing a uniform distribution of sub-pixel positions between the integer pixels; scaling the motion vector values on the uniform motion vector grid; and mapping the resulting scaled motion vector values to the non-uniform motion vector grid. The scaled motion vector values represented on the non-uniform motion vector grid are associated with the first block for a temporal prediction process to decode the bitstream.
The following detailed description and accompanying drawings provide a better understanding of the nature and advantages of particular embodiments.
Described herein are techniques for a video compression system. In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of particular embodiments. Particular embodiments as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.
The temporal prediction allows for fractional (sub-pixel) picture accuracy. Sub-pixel prediction is used because motion during two instances of time (the current and reference frames' capture times) can correspond to a sub-pixel position in pixel coordinates and generation of different prediction data corresponding to each sub-pixel position allows for the possibility of conditioning the prediction signal to better match the signal in the current PU.
In the temporal prediction process, a motion vector scaling manager 506, either in encoder 502 or decoder 504, uses a motion vector scaling process for a non-uniform motion vector grid. A non-uniform motion vector grid allows non-uniform sub-pixel phase offsets between integer pixels. For example, sub-pixel phase offsets may be non-uniform in spacing and/or include a different number of phase offsets. A phase offset is an offset of a sub-pixel position from a full-pel position. For example, the non-uniform phase offsets may include phase offsets at a ⅛ pixel phase offset, a ½ pixel phase offset, and a ⅞ pixel phase offset in addition to a 0 phase filter that may use the samples without any filtering. Other non-uniform phase offsets may also be appreciated. Conventionally, a fixed resolution of offsets was used, such as phase offsets that correspond to quarter, half, and three-quarter pixel offsets. For example, uniform phase offsets may be ¼, ½, and ¾ offsets where the uniform spacing is ¼ pel. However, a problem with a uniform distribution of sub-pixel positions is that these uniform sub-pixel positions may not be optimal.
In one embodiment, the phase offsets for sub-pixel positions may be determined based on characteristics of the encoding or decoding process. For example, the characteristics may be statistical information from video content, such as broadcast video, being encoded or decoded. Additionally, the characteristics may be a coding condition, such as properties of an interpolation filter, a type of prediction (e.g., from one reference block or from many reference blocks), and/or the compression noise statistical characteristics. Also, optimal sub-pixel positions may require different phase offsets in a vertical dimension and/or a horizontal dimension. Thus, different phase offsets may be selected based on different characteristics of the encoding or decoding process.
As described above, motion vectors are predictively coded with predictors chosen from motion vectors of spatial neighboring blocks and/or temporal co-located blocks. Motion vector scaling is used to scale a motion vector for the spatial neighboring block and/or temporal co-located block to a scaled motion vector for the current block. However, when non-uniform phase offsets are used, the scaling as applied to uniform phase offsets may not be accurate for the scaling of the corresponding motion offset for the current block.
A temporal distance difference between the reference picture at 808 and the reference picture at 810 exists. For example, the temporal distance between the current picture and the reference picture at 808 is a distance TDP and the temporal distance between the reference picture at 810 and the current picture is a distance TDref. A scaled motion vector MVPscaled is then calculated for the current block using the temporal distances. Although temporal distance is described, other measures may be used for scaling, such as picture order.
When performing scaling as described above in the Background section on a non-uniform four-position sub-pixel motion vector grid, the scaling of the corresponding motion offset gives:
MVPscaled=(TDref×MVP)/TDP=(4×(3/16))/1=12/16 (pel)
The scaling of the motion vector component of the value “1” gives a motion offset of 1 pel, but the scaling of the motion offset component gives a motion offset of 12/16 pel. The 12/16 pel value is different from the value of 1 pel. This may not be an accurate scaling.
Accordingly, a motion vector scaling manager 506, either in encoder 502 or decoder 504, uses a motion vector scaling process for a non-uniform motion vector grid that is different from the scaling process for the uniform motion vector grid described above in the Background.
In the higher accuracy motion vector grid, additional sub-pixel positions are shown in the dotted lines. This increases the accuracy because more sub-pixel positions between pixels L0 and R0 are included in addition to the 3/16, ½, and 13/16 sub-pixel positions. In the higher accuracy motion vector grid, the higher accuracy motion vector components MVXHA, MVYHA have remainders of 0, 3, 8, and 13 when divided by 16, corresponding to the remainders of 0, 1, 2, and 3 on the non-uniform motion vector grid, respectively. This is shown at 1004, where the remainders 0, 3, 8, and 13 correspond to the L0 pixel, the 3/16 sub-pixel position, the ½ sub-pixel position, and the 13/16 sub-pixel position. Also, if the non-uniform motion vector grid were not used, the original motion vector components having remainders of 0, 1, 2, and 3 when dividing by 4 would be mapped to higher accuracy motion vector components having remainders of 0, 4, 8, and 12 when dividing by 16, respectively. Thus, the map up process has no effect when a uniform motion vector grid is used.
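The map up process can be sketched as follows, using the remainder mapping 0, 1, 2, 3 → 0, 3, 8, 13 from the text (the helper name is illustrative):

```python
# Non-uniform grid remainders (mod 4) to higher accuracy remainders (mod 16),
# per the 3/16, 1/2, and 13/16 phase offsets in the text.
MAP_UP = {0: 0, 1: 3, 2: 8, 3: 13}

def map_up(mv_comp):
    """Map a component from the four-position non-uniform grid to the
    sixteen-position higher accuracy grid (full pels scale by 16/4 = 4)."""
    pel, rem = divmod(mv_comp, 4)
    return 16 * pel + MAP_UP[rem]

print(map_up(1))  # 3, the 3/16 sub-pixel position
print(map_up(5))  # 19, i.e. 1 pel plus the 3/16 sub-pixel position
```

The integer-pel part of the component is preserved exactly; only the sub-pixel remainder is re-expressed on the finer grid.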
At 906, motion vector scaling manager 506 performs a map down process. In the map down process, motion vector scaling manager 506 maps the scaled higher accuracy motion vector MVXHAScaled, MVYHAScaled back down to the original non-uniform motion vector grid. This gives a final scaled motion vector MVXScaled, MVYScaled on the original non-uniform motion vector grid.
Different algorithms may be used to perform the map down process. In one example, quantization is performed that maps subsets of values from the higher accuracy motion vector grid to a smaller number of values on the non-uniform motion vector grid based on distance between values. For example, at 1106, if motion vector MVXHAScaled, MVYHAScaled has a remainder of 1 or 15 when dividing by 16, it is quantized to the nearest integer pixel L0 or R0, respectively. For example, the values of 0 and 1 on the higher accuracy motion vector grid are mapped to the value of 0 on the non-uniform motion vector grid. Also, at 1108, the value of 15 is mapped to the integer pixel R0. At 1110, the values of 2-5 on the higher accuracy motion vector grid are mapped to the value of 1 on the non-uniform motion vector grid. Also, the values of 6-10 on the higher accuracy motion vector grid are mapped to the value of 2 on the non-uniform motion vector grid and the values of 11-14 on the higher accuracy motion vector grid are mapped to the value of 3 on the non-uniform motion vector grid. Although these mappings are described, particular embodiments may use other mappings. For example, values 2-4 may be mapped to the value of 1. Other map down algorithms may also be used.
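The map down quantization described above can be sketched as follows (the helper name is illustrative; the interval boundaries are the ones given in the text):

```python
def map_down(mv_ha):
    """Quantize a component on the sixteen-position higher accuracy grid back
    to the four-position non-uniform grid: remainders 0-1 -> 0, 2-5 -> 1,
    6-10 -> 2, 11-14 -> 3, and 15 rounds up to the next integer pixel (R0)."""
    pel, rem = divmod(mv_ha, 16)
    if rem <= 1:
        return 4 * pel
    if rem <= 5:
        return 4 * pel + 1
    if rem <= 10:
        return 4 * pel + 2
    if rem <= 14:
        return 4 * pel + 3
    return 4 * (pel + 1)  # rem == 15 snaps to R0

print(map_down(12))  # 3, the 13/16 sub-pixel position
print(map_down(15))  # 4, the next integer pixel R0
```

Each higher accuracy value is assigned to the nearest of the four non-uniform grid positions, so the quantization error is bounded by half the widest gap between positions.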
In one example, the 3/16 phase offset in the higher accuracy motion vector grid corresponds to the value "3". When the scaling is performed using the temporal distances, the scaling is equal to (4×(3/16))/1=12/16 pel. The 12/16 pel value is then mapped down to the value of 3 on the non-uniform motion vector grid, which corresponds to the 13/16 phase offset. Thus, instead of determining the 1 pel value as described above in the Background, particular embodiments determine a value of 13/16 pel, which is closer to 12/16 pel than 1 pel is. Accordingly, the scaled motion vector using particular embodiments is more accurate.
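Putting the three steps together, the whole map up, scale, and map down pipeline for one component can be sketched as (all names illustrative; the mappings are those given in the text):

```python
MAP_UP = {0: 0, 1: 3, 2: 8, 3: 13}  # non-uniform (mod 4) -> 1/16-pel (mod 16)

def scale_on_non_uniform_grid(mv, td_ref, td_p):
    """Map a component up to the sixteen-position grid, scale it by the
    temporal-distance ratio per equation (1), then quantize it back down
    to the four-position non-uniform grid."""
    pel, rem = divmod(mv, 4)
    mv_ha = 16 * pel + MAP_UP[rem]           # map up
    mv_ha_scaled = (td_ref * mv_ha) // td_p  # scale on the uniform grid
    pel, rem = divmod(mv_ha_scaled, 16)      # map down by quantization
    if rem <= 1:
        return 4 * pel
    if rem <= 5:
        return 4 * pel + 1
    if rem <= 10:
        return 4 * pel + 2
    if rem <= 14:
        return 4 * pel + 3
    return 4 * (pel + 1)

# The worked example: value 1 (the 3/16 offset) with TDref = 4 and TDP = 1
# maps up to 3, scales to 12 (12/16 pel), and maps down to 3 (13/16 pel).
print(scale_on_non_uniform_grid(1, 4, 1))  # 3
```

Integer components pass through unchanged by the map up and map down steps, so only sub-pixel positions are affected by the non-uniform grid handling.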
Encoder and Decoder Examples
Particular embodiments may be used in both the encoding and decoding processes. In encoding, the motion vector predictor is determined for a current block. Then, motion vector scaling manager 506 determines the scaled motion vector. Encoder 502 may code the motion vector predictor to use for the current block in the bitstream sent to decoder 504. Decoder 504 receives the bitstream for decoding. For the current block, decoder 504 determines the motion vector predictor that was used in the encoding process. Then, motion vector scaling manager 506 determines the scaled motion vector. The following describes encoder 502 and decoder 504 in more detail.
For a current PU, x, a prediction PU, x′, is obtained through either spatial prediction or temporal prediction. The prediction PU is then subtracted from the current PU, resulting in a residual PU, e. A spatial prediction block 1204 may include different spatial prediction directions per PU, such as horizontal, vertical, 45-degree diagonal, 135-degree diagonal, DC (flat averaging), and planar.
A temporal prediction block 1206 performs temporal prediction through a motion estimation and motion compensation operation. The motion estimation operation searches for a best match prediction for the current PU over reference pictures. The best match prediction is described by a motion vector (MV) and associated reference picture (refIdx). The motion vector and associated reference picture are included in the coded bit stream.
Transform block 1207 performs a transform operation with the residual PU, e. Transform block 1207 outputs the residual PU in a transform domain, E.
A quantizer 1208 then quantizes the transform coefficients of the residual PU, E. Quantizer 1208 converts the transform coefficients into a finite number of possible values. Entropy coding block 1210 entropy encodes the quantized coefficients, which results in final compression bits to be transmitted. Different entropy coding methods may be used, such as context-adaptive variable length coding (CAVLC) or context-adaptive binary arithmetic coding (CABAC).
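As an illustrative sketch only, and not HEVC's actual rate-distortion-optimized quantizer, a uniform scalar quantizer shows how transform coefficients are reduced to a finite number of values (the step size and function names are assumptions):

```python
def quantize(coeffs, step):
    """Uniform scalar quantization: map each transform coefficient to one
    of a finite number of integer levels."""
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    """Reconstruct approximate coefficients from the quantized levels."""
    return [lvl * step for lvl in levels]

levels = quantize([100, -37, 12, 3], 10)
print(levels)                    # [10, -4, 1, 0]
print(dequantize(levels, 10))    # [100, -40, 10, 0]
```

The reconstruction error introduced here is the lossy part of the codec; the de-quantizer 1212 below performs the inverse mapping on the same levels.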
Also, in a decoding process within encoder 502, a de-quantizer 1212 de-quantizes the quantized transform coefficients of the residual PU. De-quantizer 1212 then outputs the de-quantized transform coefficients, E′. An inverse transform block 1214 receives the de-quantized transform coefficients, which are then inverse transformed resulting in a reconstructed residual PU, e′. The reconstructed PU, e′, is then added to the corresponding prediction, x′, either spatial or temporal, to form the new reconstructed PU, x″. A loop filter 1216 performs de-blocking on the reconstructed PU, x″, to reduce blocking artifacts. Additionally, loop filter 1216 may perform a sample adaptive offset process after the completion of the de-blocking filter process for the decoded picture, which compensates for a pixel value offset between reconstructed pixels and original pixels. Also, loop filter 1216 may perform adaptive filtering over the reconstructed PU, which minimizes coding distortion between the input and output pictures. Additionally, if the reconstructed pictures are reference pictures, the reference pictures are stored in a reference buffer 1218 for future temporal prediction.
Interpolation filter 1220 interpolates sub-pixel pixel values for temporal prediction block 1206. The phase offsets may be non-uniform. Temporal prediction block 1206 then uses the sub-pixel pixel values outputted by interpolation filter 1220 to generate a prediction of a current PU.
An entropy decoding block 1230 performs entropy decoding on input bits corresponding to quantized transform coefficients of a residual PU. A de-quantizer 1232 de-quantizes the quantized transform coefficients of the residual PU. De-quantizer 1232 then outputs the de-quantized transform coefficients of the residual PU, E′. An inverse transform block 1234 receives the de-quantized transform coefficients, which are then inverse transformed resulting in a reconstructed residual PU, e′.
The reconstructed PU, e′, is then added to the corresponding prediction, x′, either spatial or temporal, to form the new reconstructed PU, x″. A loop filter 1236 performs de-blocking on the reconstructed PU, x″, to reduce blocking artifacts. Additionally, loop filter 1236 may perform a sample adaptive offset process after the completion of the de-blocking filter process for the decoded picture, which compensates for a pixel value offset between reconstructed pixels and original pixels. Also, loop filter 1236 may perform an adaptive loop filter over the reconstructed PU, which minimizes coding distortion between the input and output pictures. Additionally, if the reconstructed pictures are reference pictures, the reference pictures are stored in a reference buffer 1238 for future temporal prediction.
The prediction PU, x′, is obtained through either spatial prediction or temporal prediction. A spatial prediction block 1240 may receive decoded spatial prediction directions per PU, such as horizontal, vertical, 45-degree diagonal, 135-degree diagonal, DC (flat averaging), and planar. The spatial prediction directions are used to determine the prediction PU, x′.
Interpolation filter 1224 interpolates sub-pixel pixel values for input into a temporal prediction block 1242. The phase offsets may be non-uniform as described above. Temporal prediction block 1242 performs temporal prediction using decoded motion vector information and interpolated sub-pixel pixel values outputted by interpolation filter 1224 in a motion compensation operation. Temporal prediction block 1242 outputs the prediction PU, x′.
Particular embodiments may be implemented in a non-transitory computer-readable storage medium for use by or in connection with the instruction execution system, apparatus, system, or machine. The computer-readable storage medium contains instructions for controlling a computer system to perform a method described by particular embodiments. The instructions, when executed by one or more computer processors, may be operable to perform that which is described in particular embodiments.
As used in the description herein and throughout the claims that follow, “a”, “an”, and “the” includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
The above description illustrates various embodiments along with examples of how aspects of particular embodiments may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of particular embodiments as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents may be employed without departing from the scope hereof as defined by the claims.
The present application claims priority to U.S. Provisional App. No. 61/556,147 for “Motion Vector Scaling for Non-Uniform Motion Vector Grid” filed Nov. 4, 2011, the contents of which are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
4924310 | von Brandt | May 1990 | A |
5148269 | de Haan et al. | Sep 1992 | A |
5337086 | Fujinami | Aug 1994 | A |
5398068 | Liu et al. | Mar 1995 | A |
5461708 | Kahn | Oct 1995 | A |
5512952 | Iwamura | Apr 1996 | A |
5550964 | Davoust | Aug 1996 | A |
5581678 | Kahn | Dec 1996 | A |
5610658 | Uchida et al. | Mar 1997 | A |
5611034 | Makita | Mar 1997 | A |
5729690 | Jeong et al. | Mar 1998 | A |
5731840 | Kikuchi et al. | Mar 1998 | A |
5742710 | Hsu et al. | Apr 1998 | A |
5886742 | Hibi et al. | Mar 1999 | A |
5905535 | Kerdranvat | May 1999 | A |
5978030 | Jung et al. | Nov 1999 | A |
5987180 | Reitmeier | Nov 1999 | A |
5991447 | Eifrig et al. | Nov 1999 | A |
6005980 | Eifrig et al. | Dec 1999 | A |
6011870 | Jeng et al. | Jan 2000 | A |
6014181 | Sun | Jan 2000 | A |
6058143 | Golin | May 2000 | A |
6272179 | Kadono | Aug 2001 | B1 |
6289049 | Kim et al. | Sep 2001 | B1 |
6359929 | Boon | Mar 2002 | B1 |
6381277 | Chun et al. | Apr 2002 | B1 |
6473460 | Topper | Oct 2002 | B1 |
6507617 | Karczewicz et al. | Jan 2003 | B1 |
6711211 | Lainema | Mar 2004 | B1 |
6735249 | Karczewicz et al. | May 2004 | B1 |
6876702 | Hui et al. | Apr 2005 | B1 |
6912255 | Drezner et al. | Jun 2005 | B2 |
7002580 | Aggala et al. | Feb 2006 | B1 |
7418147 | Kamaci et al. | Aug 2008 | B2 |
7463685 | Haskell et al. | Dec 2008 | B1 |
7580456 | Li et al. | Aug 2009 | B2 |
7581168 | Boon | Aug 2009 | B2 |
7606310 | Ameres et al. | Oct 2009 | B1 |
7705847 | Helfman et al. | Apr 2010 | B2 |
7978770 | Luo et al. | Jul 2011 | B2 |
8005144 | Ji et al. | Aug 2011 | B2 |
8006194 | Berger et al. | Aug 2011 | B2 |
8130840 | Mishima et al. | Mar 2012 | B2 |
8208540 | Cote | Jun 2012 | B2 |
8345758 | Jeon | Jan 2013 | B2 |
8351505 | Jeon | Jan 2013 | B2 |
8442117 | Lee et al. | May 2013 | B2 |
8451904 | Reznik et al. | May 2013 | B2 |
8559512 | Paz | Oct 2013 | B2 |
8594200 | Chang et al. | Nov 2013 | B2 |
8718144 | Reznik et al. | May 2014 | B2 |
8762441 | Reznik | Jun 2014 | B2 |
8787459 | Wang | Jul 2014 | B2 |
8818114 | Kim et al. | Aug 2014 | B2 |
8867618 | Pandit et al. | Oct 2014 | B2 |
8879634 | Reznik | Nov 2014 | B2 |
8885956 | Sato | Nov 2014 | B2 |
8891626 | Bankoski et al. | Nov 2014 | B1 |
8908767 | Holmer | Dec 2014 | B1 |
20020031272 | Bagni et al. | Mar 2002 | A1 |
20020064228 | Sethuraman et al. | May 2002 | A1 |
20020118754 | Choi | Aug 2002 | A1 |
20030072374 | Sohm | Apr 2003 | A1 |
20040001546 | Tourapis et al. | Jan 2004 | A1 |
20040028131 | Ye et al. | Feb 2004 | A1 |
20040066848 | Jeon | Apr 2004 | A1 |
20040218674 | Kondo et al. | Nov 2004 | A1 |
20040258155 | Lainema et al. | Dec 2004 | A1 |
20050117646 | Joch et al. | Jun 2005 | A1 |
20050123282 | Novotny et al. | Jun 2005 | A1 |
20050226333 | Suzuki et al. | Oct 2005 | A1 |
20050243925 | Bottreau | Nov 2005 | A1 |
20050243926 | Hubrich et al. | Nov 2005 | A1 |
20050254719 | Sullivan | Nov 2005 | A1 |
20060114989 | Panda | Jun 2006 | A1 |
20060209961 | Han et al. | Sep 2006 | A1 |
20060268166 | Bossen et al. | Nov 2006 | A1 |
20060294171 | Bossen et al. | Dec 2006 | A1 |
20070014358 | Tourapis et al. | Jan 2007 | A1 |
20070110156 | Ji et al. | May 2007 | A1 |
20070195881 | Hagiya | Aug 2007 | A1 |
20070286280 | Saigo et al. | Dec 2007 | A1 |
20080025390 | Shi et al. | Jan 2008 | A1 |
20080037639 | Jeon | Feb 2008 | A1 |
20080043845 | Nakaishi | Feb 2008 | A1 |
20080056354 | Sun et al. | Mar 2008 | A1 |
20080084931 | Kondo et al. | Apr 2008 | A1 |
20080111722 | Reznik | May 2008 | A1 |
20080159392 | Chiang et al. | Jul 2008 | A1 |
20080240242 | Lainema | Oct 2008 | A1 |
20080253459 | Ugur et al. | Oct 2008 | A1 |
20080291285 | Shimizu | Nov 2008 | A1 |
20080310514 | Osamoto et al. | Dec 2008 | A1 |
20080317127 | Lee et al. | Dec 2008 | A1 |
20090016439 | Thoreau et al. | Jan 2009 | A1 |
20090067497 | Jeon | Mar 2009 | A1 |
20090074062 | Jeon | Mar 2009 | A1 |
20090074067 | Jeon | Mar 2009 | A1 |
20090110077 | Amano et al. | Apr 2009 | A1 |
20090125538 | Rosenzweig et al. | May 2009 | A1 |
20090129474 | Pandit et al. | May 2009 | A1 |
20090290643 | Yang | Nov 2009 | A1 |
20100079624 | Miyasako | Apr 2010 | A1 |
20100284469 | Sato et al. | Nov 2010 | A1 |
20100322301 | Karkkainen | Dec 2010 | A1 |
20110026820 | Strom et al. | Feb 2011 | A1 |
20110096837 | Demos | Apr 2011 | A1 |
20110110428 | Chang et al. | May 2011 | A1 |
20110170597 | Shi et al. | Jul 2011 | A1 |
20110170602 | Lee et al. | Jul 2011 | A1 |
20110188583 | Toraichi et al. | Aug 2011 | A1 |
20110243229 | Kim et al. | Oct 2011 | A1 |
20110261886 | Suzuki et al. | Oct 2011 | A1 |
20110293010 | Jeong et al. | Dec 2011 | A1 |
20120014440 | Segall et al. | Jan 2012 | A1 |
20120075535 | Van Beek | Mar 2012 | A1 |
20120134415 | Lin et al. | May 2012 | A1 |
20120263231 | Zhou | Oct 2012 | A1 |
20120294363 | Lee et al. | Nov 2012 | A1 |
20120300845 | Endresen et al. | Nov 2012 | A1 |
20120307905 | Kim et al. | Dec 2012 | A1 |
20130003851 | Yu et al. | Jan 2013 | A1 |
20130022127 | Park et al. | Jan 2013 | A1 |
20130027230 | Marpe et al. | Jan 2013 | A1 |
20130089149 | Hayashi et al. | Apr 2013 | A1 |
20130089266 | Yang et al. | Apr 2013 | A1 |
20140092975 | Yu et al. | Apr 2014 | A1 |
20140098877 | Xu et al. | Apr 2014 | A1 |
20150055706 | Xu et al. | Feb 2015 | A1 |
Number | Date | Country |
---|---|---|
0634873 | Jan 1995 | EP |
0979011 | Feb 2000 | EP |
1091592 | Apr 2001 | EP |
1158806 | Nov 2001 | EP |
1672926 | Jun 2006 | EP |
2536146 | Dec 2012 | EP |
2477033 | Jul 2011 | GB |
WO9941912 | Aug 1999 | WO |
WO2010086041 | Aug 2010 | WO |
WO2012125178 | Sep 2012 | WO |
Entry |
---|
Patent Cooperation Treaty, PCT Search Report and Written Opinion of the International Searching Authority for International Application No. PCT/US2012/063434 dated Feb. 12, 2013, 15 pages. |
Lou J et al: “Motion vector scaling for non-uniform interpolation offset”, 7. JCT-VC Meeting; 98. MPEG Meeting; Nov. 21, 2011-Nov. 30, 2011; Geneva; (Joint Collaborative Team on Video Coding of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16); URL: http://wftp3.itu.int/av-arch/jctvc-site/, No. JCTVC-G699, Nov. 9, 2011, XP030110683. |
Lou J et al: “CE3: Fixed interpolation filter tests by Motorola Mobility”, 6. JCT-VC Meeting; 97. MPEG Meeting; Jul. 14, 2011-Jul. 22, 2011; Torino; (Joint Collaborative Team on Video Coding of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16); URL: http://wftp3.itu.int/av-arch/jctvc-site/, No. JCTVC-F574, Jul. 3, 2011, XP030009597. |
Karczewicz (Qualcomm) M et al: “Video coding technology proposal by Qualcomm”, 1. JCT-VC Meeting; Apr. 15, 2010-Apr. 23, 2010; Dresden; (Joint Collaborative Team on Video Coding of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16); URL: http://wftp3.itu.int/av-arch/jctvc-site/, Apr. 16, 2010, XP030007568, ISSN: 0000-0049. |
Bankoski et al. “Technical Overview of VP8, an Open Source Video Codec for the Web”. Dated Jul. 11, 2011. |
Bankoski et al. “VP8 Data Format and Decoding Guide; draft-bankoski-vp8-bitstream-02” Network Working Group. Internet-Draft, May 18, 2011, 288 pp. |
Bossen, F., “Common Test Conditions and Software Reference Configurations,” Joint Collaborative Team on Video Coding, JCTVC-D600, Jan. 2011. |
Chen, Michael C., et al.; “Design and Optimization of a Differentially Coded Variable Block Size Motion Compensation System”, IEEE 1996, 4 pp. |
Chen, Xing C., et al.; “Quadtree Based Adaptive Lossy Coding of Motion Vectors”, IEEE 1996, 4 pp. |
Ebrahimi, Touradj, et al.; “Joint motion estimation and segmentation for very low bitrate video coding”, SPIE vol. 2501, 1995, 12 pp. |
Guillotel, Philippe, et al.; “Comparison of motion vector coding techniques”, SPIE vol. 2308, 1994, 11 pp. |
Hong D et al.: “Scalability Support in HEVC”, 97. MPEG Meeting; Jul. 18, 2011-Jul. 22, 2011; Torino; (Motion Picture Experts Group or ISO/IEC JTC1/SC29/WG11), JCTVC-F290, Jul. 13, 2011, all pages. |
Implementors' Guide; Series H: Audiovisual and Multimedia Systems; Coding of moving video: Implementors Guide for H.264: Advanced video coding for generic audiovisual services. H.264. International Telecommunication Union. Version 12. Dated Jul. 30, 2010. |
ISR &amp; Written Opinion for International Application No. PCT/US2012/044726, dated Sep. 27, 2012. |
ISR and Written Opinion of the International Searching Authority for International Application No. PCT/US13/24773, dated Apr. 29, 2013, 13 pages. |
Zheng, Y et al., “Unified Motion Vector Predictor Selection for Merge and AMVP”, Mar. 2011. |
International Search Report and Written Opinion of the International Searching Authority for International Application No. PCT/US2013/060100, dated Nov. 21, 2013, 11 pages. |
Jianghong Guo et al., “A Novel Criterion for Block Matching Motion Estimation”, Oct. 1998, IEEE, vol. 1, pp. 841-844. |
Karczewicz, Marta, et al.; “Video Coding Using Motion Compensation With Polynomial Motion Vector Fields”, IEEE COMSOC EURASIP, First International Workshop on Wireless Image/Video Communications—Sep. 1996, 6 pp. |
Kim, Jong Won, et al.; “On the Hierarchical Variable Block Size Motion Estimation Technique for Motion Sequence Coding”, SPIE Visual Communication and Image Processing 1993, Cambridge, MA, Nov. 8, 1993, 29 pp. |
Liu, Bede, et al.; “A simple method to segment motion field for video coding”, SPIE vol. 1818, Visual Communications and Image Processing 1992, 10 pp. |
Liu, Bede, et al.; “New Fast Algorithms for the Estimation of Block Motion Vectors”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, No. 2, Apr. 1993, 10 pp. |
Luttrell, Max, et al.; “Simulation Results for Modified Error Resilient Syntax With Data Partitioning and RVLC”, ITU—Telecommunications Standardization Sector, Study Group 16, Video Coding Experts Group (Question 15), Sixth Meeting: Seoul, South Korea, Nov. 2, 1998, 34 pp. |
Martin, Graham R., et al.; “Reduced Entropy Motion Compensation Using Variable Sized Blocks”, SPIE vol. 3024, 1997, 10 pp. |
McCann et al., “Video Coding Technology Proposal by Samsung (and BBC),” Joint Collaborative Team on Video Coding, 1st Meeting, Dresden, Germany, JCTVC-A124, Apr. 15-23, 2010. |
Nicolas, H., et al.; “Region-based motion estimation using deterministic relaxation schemes for image sequence coding”, IEEE 1992, 4 pp. |
Nokia, Inc., Nokia Research Center, “MVC Decoder Description”, Telecommunication Standardization Sector, Study Period 1997-2000, Geneva, Feb. 7, 2000, 99 pp. |
Orchard, Michael T.; “Exploiting Scene Structure in Video Coding”, IEEE 1991, 5 pp. |
Orchard, Michael T.; “Predictive Motion-Field Segmentation for Image Sequence Coding”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, No. 1, Feb. 1993, 17 pp. |
Overview; VP7 Data Format and Decoder. Version 1.5. On2 Technologies, Inc. Dated Mar. 28, 2005. |
Park, Jun Sung, et al., “Selective Intra Prediction Mode Decision for H.264/AVC Encoders”, World Academy of Science, Engineering and Technology 13, (2006). |
Peng, Qiang, T. Yang, and C. Zhu, “Block-based temporal error concealment for video packet using motion vector extrapolation”, 2002 International Conference on Communications, Circuits and Systems and West Sino Exposition Proceedings, vol. 1, pp. 10-14. |
Schiller, H., et al.; “Efficient Coding of Side Information in a Low Bitrate Hybrid Image Coder”, Signal Processing 19 (1990) Elsevier Science Publishers B.V. 61-73, 13 pp. |
Schuster, Guido M., et al.; “A Video Compression Scheme With Optimal Bit Allocation Among Segmentation, Motion, and Residual Error”, IEEE Transactions on Image Processing, vol. 6, No. 11, Nov. 1997, 16 pp. |
Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services—Coding of moving video, Video coding for low bit rate communication, International Telecommunication Union, ITU-T Recommendation H.263, Feb. 1998, 167 pp. |
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video. H.264. Advanced video coding for generic audiovisual services. International Telecommunication Union. Version 11. Dated Mar. 2009. |
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video. H.264. Advanced video coding for generic audiovisual services. International Telecommunication Union. Version 12. Dated Mar. 2010. |
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video. H.264. Amendment 2: New profiles for professional applications. International Telecommunication Union. Dated Apr. 2007. |
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video. H.264. Advanced video coding for generic audiovisual services. Version 8. International Telecommunication Union. Dated Nov. 1, 2007. |
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services. H.264. Amendment 1: Support of additional colour spaces and removal of the High 4:4:4 Profile. International Telecommunication Union. Dated Jun. 2006. |
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services. H.264. Version 1. International Telecommunication Union. Dated May 2003. |
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services. H.264. Version 3. International Telecommunication Union. Dated Mar. 2005. |
Steliaros, Michael K., et al.; “Locally-accurate motion estimation for object-based video coding”, SPIE vol. 3309, 1997, 11 pp. |
Stiller, Christoph; “Motion-Estimation for Coding of Moving Video at 8 kbit/s with Gibbs Modeled Vectorfield Smoothing”, SPIE vol. 1360 Visual Communications and Image Processing 1990, 9 pp. |
Strobach, Peter; “Tree-Structured Scene Adaptive Coder”, IEEE Transactions on Communications, vol. 38, No. 4, Apr. 1990, 10 pp. |
VP6 Bitstream & Decoder Specification. Version 1.02. On2 Technologies, Inc. Dated Aug. 17, 2006. |
VP6 Bitstream & Decoder Specification. Version 1.03. On2 Technologies, Inc. Dated Oct. 29, 2007. |
VP8 Data Format and Decoding Guide. WebM Project. Google On2. Dated: Dec. 1, 2010. |
Wiegand, Thomas, et al.; “Long-Term Memory Motion-Compensated Prediction”, Publication Unknown, Date Unknown, 15 pp. |
Wiegand, Thomas, et al.; “Rate-Distortion Optimized Mode Selection for Very Low Bit Rate Video Coding and the Emerging H.263 Standard”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, No. 2, Apr. 1996, 9 pp. |
Wright, R. Glenn, et al.; “Multimedia—Electronic Technical Manual for ATE”, IEEE 1996, 3 pp. |
Zhang, Kui, et al.; “Variable Block Size Video Coding With Motion Prediction and Motion Segmentation”, SPIE vol. 2419, 1995, 9 pp. |
Zheng et al., “Extended Motion Vector Prediction for Bi Predictive Mode,” Joint Collaborative Team on Video Coding Geneva, Mar. 2011. |
International Search Report and Written Opinion for related application PCT/US2013/063723, mailed Feb. 2, 2012. |
IPRP and Written Opinion of the International Searching Authority for related International Application No. PCT/US2013/060100, mailed Apr. 16, 2015. |
Laroche G. et al.: “RD Optimized Coding for Motion Vector Predictor Selection”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, No. 9, Sep. 1, 2008. |
Li S et al.: “Direct Mode Coding for Bipredictive Slices in the H.264 Standard”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 15, Jan. 1, 2005. |
Steffen Kamp et al.: “Decoder side motion vector derivation for inter frame video coding” Image Processing, 2008, 15th IEEE International Conference, Oct. 12, 2008. |
Steffen Kamp et al.: “Improving AVC Compression performance by template matching with decoder-side motion vector derivation”, 84. MPEG Meeting; Feb. 5, 2008. |
Ueda M et al.: “TE1: Refinement Motion Compensation using Decoder-side Motion Estimation”, JCT-VC Meeting; Jul. 28, 2010. |
Wiegand et al., “WD3: Working Draft 3 of High-Efficiency Video Coding,” Mar. 29, 2011 JCTVC-E603, all pages. |
Yi-Jen Chiu et al.: “Self-derivation of motion estimation techniques to improve video coding efficiency”, Proceedings of SPIE, vol. 7798, Aug. 19, 2010. |
Yi-Jen Chiu et al.: “Fast Techniques to Improve Self Derivation of Motion Estimation”, JCT-VC Meeting, Jul. 28, 2010. |
Yue Wang et al.: “Advanced spatial and temporal direct mode for B picture coding”, Visual Communications and Image Processing (VCIP), 2011 IEEE, Nov. 6, 2011, pp. 1-4. |
Y-W Huang et al.: “Decoder-side Motion Vector Derivation with Switchable Template Matching”, JCT-VC Meeting, Jul. 28, 2010. |
Number | Date | Country | |
---|---|---|---|
20130114725 A1 | May 2013 | US |
Number | Date | Country | |
---|---|---|---|
61556147 | Nov 2011 | US |