A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
Techniques and tools for coding and decoding motion vector information are described. A video encoder uses an extended motion vector in a motion vector syntax for encoding predicted video frames.
Digital video consumes large amounts of storage and transmission capacity. A typical raw digital video sequence includes 15 or 30 frames per second. Each frame can include tens or hundreds of thousands of pixels (also called pels). Each pixel represents a tiny element of the picture. In raw form, a computer commonly represents a pixel with 24 bits. Thus, the number of bits per second, or bit rate, of a typical raw digital video sequence can be 5 million bits/second or more.
Most computers and computer networks lack the resources to process raw digital video. For this reason, engineers use compression (also called coding or encoding) to reduce the bit rate of digital video. Compression can be lossless, in which quality of the video does not suffer but decreases in bit rate are limited by the complexity of the video. Or, compression can be lossy, in which quality of the video suffers but decreases in bit rate are more dramatic. Decompression reverses compression.
In general, video compression techniques include intraframe compression and interframe compression. Intraframe compression techniques compress individual frames, typically called I-frames or key frames. Interframe compression techniques compress frames with reference to preceding and/or following frames, which are typically called predicted frames, P-frames, or B-frames.
Microsoft Corporation's Windows Media Video, Version 8 [“WMV8”] includes a video encoder and a video decoder. The WMV8 encoder uses intraframe and interframe compression, and the WMV8 decoder uses intraframe and interframe decompression.
A. Intraframe Compression in WMV8
To compress a frame, the encoder applies a discrete cosine transform [“DCT”] to each 8×8 block of pixels, producing an 8×8 block of DCT coefficients. The encoder then quantizes 120 the DCT coefficients, resulting in an 8×8 block of quantized DCT coefficients 125. For example, the encoder applies a uniform, scalar quantization step size to each coefficient. Quantization is lossy. The encoder then prepares the 8×8 block of quantized DCT coefficients 125 for entropy encoding, which is a form of lossless compression. The exact type of entropy encoding can vary depending on whether a coefficient is a DC coefficient (lowest frequency), an AC coefficient (other frequencies) in the top row or left column, or another AC coefficient.
The encoder encodes the DC coefficient 126 as a differential from the DC coefficient 136 of a neighboring 8×8 block, which is a previously encoded neighbor (e.g., top or left) of the block being encoded.
The entropy encoder can encode the left column or top row of AC coefficients as a differential from a corresponding column or row of the neighboring 8×8 block.
The encoder scans 150 the 8×8 block 145 of predicted, quantized AC DCT coefficients into a one-dimensional array 155 and then entropy encodes the scanned AC coefficients using a variation of run length coding 160. The encoder selects an entropy code from one or more run/level/last tables 165 and outputs the entropy code.
B. Interframe Compression in WMV8
Interframe compression in the WMV8 encoder uses block-based motion compensated prediction coding followed by transform coding of the residual error.
For example, the WMV8 encoder splits a predicted frame into 8×8 blocks of pixels. Groups of four 8×8 blocks form macroblocks. For each macroblock, a motion estimation process is performed. The motion estimation approximates the motion of the macroblock of pixels relative to a reference frame, for example, a previously coded, preceding frame.
The encoder then prepares the 8×8 block 355 of quantized DCT coefficients for entropy encoding. The encoder scans 360 the 8×8 block 355 into a one-dimensional array 365 with 64 elements, such that coefficients are generally ordered from lowest frequency to highest frequency, which typically creates long runs of zero values.
The encoder entropy encodes the scanned coefficients using a variation of run length coding 370. The encoder selects an entropy code from one or more run/level/last tables 375 and outputs the entropy code.
The amount of change between the original and reconstructed frame is termed the distortion and the number of bits required to code the frame is termed the rate for the frame. The amount of distortion is roughly inversely proportional to the rate. In other words, coding a frame with fewer bits (greater compression) will result in greater distortion, and vice versa.
C. Bi-Directional Prediction
Bi-directionally coded images (e.g., B-frames) use two images from the source video as reference (or anchor) images.
Some conventional encoders use five prediction modes (forward, backward, direct, interpolated and intra) to predict regions in a current B-frame. In intra mode, an encoder does not predict a macroblock from either reference image, and therefore calculates no motion vectors for the macroblock. In forward and backward modes, an encoder predicts a macroblock using either the previous or future reference frame, and therefore calculates one motion vector for the macroblock. In direct and interpolated modes, an encoder predicts a macroblock in a current frame using both reference frames. In interpolated mode, the encoder explicitly calculates two motion vectors for the macroblock. In direct mode, the encoder derives implied motion vectors by scaling the co-located motion vector in the future reference frame, and therefore does not explicitly calculate any motion vectors for the macroblock.
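For direct mode, the implied motion vectors can be derived by temporally scaling the co-located motion vector from the future reference frame. The following sketch uses MPEG-4-style scaling; the parameter names and the simplified rounding are assumptions, not part of the described techniques:

```c
/* Sketch: derive the implied forward/backward motion vectors of a direct-mode
 * B-frame macroblock by temporally scaling the co-located motion vector
 * (mv_x, mv_y) from the future reference frame.  trb is the temporal distance
 * from the past reference to the current B-frame; trd is the distance between
 * the two reference frames.  MPEG-4-style scaling; rounding is simplified. */
void derive_direct_mvs(int mv_x, int mv_y, int trb, int trd,
                       int *fwd_x, int *fwd_y, int *bwd_x, int *bwd_y)
{
    *fwd_x = (trb * mv_x) / trd;          /* implied forward motion vector  */
    *fwd_y = (trb * mv_y) / trd;
    *bwd_x = ((trb - trd) * mv_x) / trd;  /* implied backward motion vector */
    *bwd_y = ((trb - trd) * mv_y) / trd;
}
```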
D. Interlace Coding
A typical interlace video frame consists of two fields scanned at different times.
E. Standards for Video Compression and Decompression
Aside from WMV8, several international standards relate to video compression and decompression. These standards include the Moving Picture Experts Group [“MPEG”] 1, 2, and 4 standards and the H.261, H.262, and H.263 standards from the International Telecommunication Union [“ITU”]. Like WMV8, these standards use a combination of intraframe and interframe compression.
For example, advanced video compression or encoding techniques (including techniques in the MPEG, H.26x, and WMV8 standards) are based on the exploitation of temporal coherence of typical video sequences. Image areas are tracked as they move over time, and information pertaining to the motion of these areas is compressed as part of the bit stream. Traditionally, a standard P-frame is encoded by computing and storing motion information in the form of two-dimensional displacement vectors corresponding to regularly sized image tiles (e.g., macroblocks). For example, a macroblock may have one motion vector for the entire macroblock (a 1MV macroblock) or a motion vector for each of four blocks in the macroblock (a 4MV macroblock). Subsequently, the difference between the input frame and its motion-compensated prediction is compressed, usually in a suitable transform domain, and added to an encoded bit stream. Typically, the motion vector component makes up between 10% and 30% of the bit stream. Therefore, it can be appreciated that efficient motion vector coding is a key factor in efficient video compression.
Motion vector coding efficiency can be achieved in different ways. For example, motion vectors are often highly correlated between neighboring macroblocks. For efficiency, a motion vector of a given macroblock can be differentially coded from its prediction based on a causal neighborhood of adjacent macroblocks. A few exceptions to this general rule are observed in prior algorithms, such as those described in MPEG-4 and WMV8.
Given the critical importance of video compression and decompression to digital video, it is not surprising that video compression and decompression are richly developed fields. Whatever the benefits of previous video compression and decompression techniques, however, they do not have the advantages of the following techniques and tools.
In summary, the detailed description is directed to various techniques and tools for encoding and decoding motion vector information for video images. The various techniques and tools can be used in combination or independently.
In one aspect, a video encoder jointly codes for a set of pixels (e.g., block, macroblock, etc.) a switch code with motion vector information (e.g., a motion vector for an inter-coded block/macroblock, or a pseudo motion vector for an intra-coded block/macroblock). The switch code indicates whether a set of pixels is intra-coded.
In another aspect, a video encoder yields an extended motion vector code by jointly coding for a set of pixels a switch code, motion vector information, and a terminal symbol indicating whether subsequent data is encoded for the set of pixels. The subsequent data can include coded block pattern data and/or residual data for macroblocks. The extended motion vector code can be included in an alphabet or table of codes. In one aspect, the alphabet lacks a code that would represent a skip condition for the set of pixels.
In another aspect, an encoder/decoder selects motion vector predictors for current macroblocks (e.g., 1MV or mixed 1MV/4MV macroblocks) in a video image (e.g., an interlace or progressive P-frame or B-frame).
For example, an encoder/decoder selects a predictor from a set of candidates for a last macroblock of a macroblock row. The set of candidates comprises motion vectors from a set of macroblocks adjacent to the current macroblock. The set of macroblocks adjacent to the current macroblock consists of a top adjacent macroblock, a left adjacent macroblock, and a top-left adjacent macroblock. The predictor can be a motion vector for an individual block within a macroblock.
As another example, an encoder/decoder selects a predictor from a set of candidates comprising motion vectors from a set of blocks in macroblocks adjacent to a current macroblock. The set of blocks consists of a bottom-left block of a top adjacent macroblock, a top-right block of a left adjacent macroblock, and a bottom-right block of a top-left adjacent macroblock.
As another example, an encoder/decoder selects a predictor for a current top-left block in the first macroblock of a macroblock row from a set of candidates. The set of candidates comprises a zero-value motion vector and motion vectors from a set of blocks in an adjacent macroblock. The set of blocks consists of a bottom-left block of a top adjacent macroblock, and a bottom-right block of the top adjacent macroblock.
As another example, an encoder/decoder selects a predictor for a current top-right block of a current macroblock from a set of candidates. The current macroblock is the last macroblock of a macroblock row, and the set of candidates consists of a motion vector from the top-left block of the current macroblock, a motion vector from a bottom-left block of a top adjacent macroblock, and a motion vector from a bottom-right block of the top adjacent macroblock.
In another aspect, a video encoder/decoder calculates a motion vector predictor for a set of pixels (e.g., a 1MV or mixed 1MV/4MV macroblock) based on analysis of candidates, and compares the calculated predictor with one or more of the candidates (e.g., the left and top candidates). Based on the comparison, the encoder/decoder determines whether to replace the calculated motion vector predictor with a hybrid motion vector of one of the candidates. The set of pixels can be a skipped set of pixels (e.g., a skipped macroblock). The hybrid motion vector can be indicated by an indicator bit.
In another aspect, a video encoder/decoder selects a motion vector mode for a predicted image from a set of modes comprising a mixed one- and four-motion vector, quarter-pixel resolution, bicubic interpolation filter mode; a one-motion vector, quarter-pixel resolution, bicubic interpolation filter mode; a one-motion vector, half-pixel resolution, bicubic interpolation filter mode; and a one-motion vector, half-pixel resolution, bilinear interpolation filter mode. The mode can be signaled in a bit stream at various levels (e.g., frame-level, slice-level, group-of-pictures level, etc.). The set of modes also can include other modes, such as a four-motion vector, ⅛-pixel, six-tap interpolation filter mode.
In another aspect, for a set of pixels, a video encoder finds a motion vector component value and a motion vector predictor component value, each within a bounded range. The encoder calculates a differential motion vector component value (which is outside the bounded range) based on the motion vector component value and the motion vector predictor component value. The encoder represents the differential motion vector component value with a signed binary code in a bit stream. The signed binary code is operable to allow reconstruction of the differential motion vector component value. For example, the encoder performs rollover arithmetic to convert the differential motion vector component value into a signed binary code. The number of bits in the signed binary code can vary based on motion data (e.g., motion vector component direction (x or y), motion vector resolution, and motion vector range).
In another aspect, a video decoder decodes a set of pixels in an encoded bit stream by receiving an extended motion vector code for the set of pixels. The extended motion vector code reflects joint encoding of motion information together with information indicating whether the set of pixels is intra-coded or inter-coded and with a terminal symbol. The decoder determines whether subsequent data for the set of pixels is included in the encoded bit stream based on the extended motion vector code (e.g., by the terminal symbol in the code). For macroblocks (e.g., 4:2:0, 4:1:1, or 4:2:2 macroblocks), subsequent data can include a coded block pattern code and/or residual information for one or more blocks in the macroblock.
In the bit stream, the extended motion vector code can be preceded by, for example, header information or a modified coded block pattern code, and can be followed by other information for the set of pixels, such as a coded block pattern code. The decoder can receive more than one extended motion vector code for a set of pixels. For example, the decoder can receive two such codes for a bi-directionally predicted, or field-coded interlace macroblock. Or, the decoder can receive an extended motion vector code for each block in a macroblock.
In another aspect, a computer system includes means for decoding images, which comprises means for receiving an extended motion vector code and means for determining whether subsequent data for the set of pixels is included in the encoded bit stream based at least in part upon the received extended motion vector code.
In another aspect, a computer system includes means for encoding images, which comprises means for sending an extended motion vector code for a set of pixels as part of an encoded bit stream.
Additional features and advantages will be made apparent from the following detailed description of different embodiments that proceeds with reference to the accompanying drawings.
The present application relates to techniques and tools for coding motion information in video image sequences. Bit stream formats or syntaxes include flags and other codes to incorporate the techniques. Different bit stream formats can comprise different layers or levels (e.g., sequence level, frame/picture/image level, macroblock level, and/or block level).
The various techniques and tools can be used in combination or independently. Different embodiments implement one or more of the described techniques and tools.
I. Computing Environment
The computing environment 700 includes at least one processing unit and memory 720. The memory 720 stores software 780 implementing a video encoder or decoder.
A computing environment may have additional features. For example, the computing environment 700 includes storage 740, one or more input devices 750, one or more output devices 760, and one or more communication connections 770. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 700. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 700, and coordinates activities of the components of the computing environment 700.
The storage 740 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 700. The storage 740 stores instructions for the software 780 implementing the video encoder or decoder.
The input device(s) 750 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 700. For audio or video encoding, the input device(s) 750 may be a sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD-ROM or CD-RW that reads audio or video samples into the computing environment 700. The output device(s) 760 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 700.
The communication connection(s) 770 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The techniques and tools can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 700, computer-readable media include memory 720, storage 740, communication media, and combinations of any of the above.
The techniques and tools can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
For the sake of presentation, the detailed description uses terms like “predict,” “choose,” “compensate,” and “apply” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
II. Generalized Video Encoder and Decoder
The relationships shown between modules within the encoder and decoder indicate the main flow of information in the encoder and decoder; other relationships are not shown for the sake of simplicity.
The encoder 800 and decoder 900 are block-based and use a 4:2:0 macroblock format with each macroblock including four 8×8 luminance blocks and two 8×8 chrominance blocks, or a 4:1:1 macroblock format with each macroblock including four 8×8 luminance blocks and four 4×8 chrominance blocks. Alternatively, the encoder 800 and decoder 900 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration.
Depending on implementation and the type of compression desired, modules of the encoder or decoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules. In alternative embodiments, encoders or decoders with different modules and/or other configurations of modules perform one or more of the described techniques.
A. Video Encoder
The encoder system 800 compresses predicted frames and key frames.
A predicted frame (also called P-frame, B-frame, or inter-coded frame) is represented in terms of prediction (or difference) from one or more reference (or anchor) frames. A prediction residual is the difference between what was predicted and the original frame. In contrast, a key frame (also called I-frame, intra-coded frame) is compressed without reference to other frames.
If the current frame 805 is a forward-predicted frame, a motion estimator 810 estimates motion of macroblocks or other sets of pixels of the current frame 805 with respect to a reference frame, which is the reconstructed previous frame 825 buffered in a frame store (e.g., frame store 820). If the current frame 805 is a bi-directionally-predicted frame (a B-frame), a motion estimator 810 estimates motion in the current frame 805 with respect to two reconstructed reference frames. Typically, a motion estimator estimates motion in a B-frame with respect to a temporally previous reference frame and a temporally future reference frame. Accordingly, the encoder system 800 can comprise separate stores 820 and 822 for backward and forward reference frames. For more information on bi-directionally predicted frames, see U.S. patent application Ser. No. 10/622,378, entitled, “Advanced Bi-Directional Predictive Coding of Video Frames,” filed Jul. 18, 2003.
The motion estimator 810 can estimate motion by pixel, ½ pixel, ¼ pixel, or other increments, and can switch the resolution of the motion estimation on a frame-by-frame basis or other basis. The resolution of the motion estimation can be the same or different horizontally and vertically. The motion estimator 810 outputs as side information motion information 815 such as motion vectors. A motion compensator 830 applies the motion information 815 to the reconstructed frame(s) 825 to form a motion-compensated current frame 835. The prediction is rarely perfect, however, and the difference between the motion-compensated current frame 835 and the original current frame 805 is the prediction residual 845. Alternatively, a motion estimator and motion compensator apply another type of motion estimation/compensation.
A frequency transformer 860 converts the spatial domain video information into frequency domain (i.e., spectral) data. For block-based video frames, the frequency transformer 860 applies a discrete cosine transform [“DCT”] or variant of DCT to blocks of the pixel data or prediction residual data, producing blocks of DCT coefficients. Alternatively, the frequency transformer 860 applies another conventional frequency transform such as a Fourier transform or uses wavelet or subband analysis.
A quantizer 870 then quantizes the blocks of spectral data coefficients. The quantizer applies uniform, scalar quantization to the spectral data with a step-size that varies on a frame-by-frame basis or other basis. Alternatively, the quantizer applies another type of quantization to the spectral data coefficients, for example, a non-uniform, vector, or non-adaptive quantization, or directly quantizes spatial domain data in an encoder system that does not use frequency transformations. In addition to adaptive quantization, the encoder 800 can use frame dropping, adaptive filtering, or other techniques for rate control.
If a given macroblock in a predicted frame has no information of certain types (e.g., no motion information for the macroblock and/or no residual information), the encoder 800 may encode the macroblock as a skipped macroblock. If so, the encoder signals the skipped macroblock in the output bit stream of compressed video information 895.
When a reconstructed current frame is needed for subsequent motion estimation/compensation, an inverse quantizer 876 performs inverse quantization on the quantized spectral data coefficients. An inverse frequency transformer 866 then performs the inverse of the operations of the frequency transformer 860, producing a reconstructed prediction residual (for a predicted frame) or a reconstructed key frame. If the current frame 805 was a key frame, the reconstructed key frame is taken as the reconstructed current frame (not shown). If the current frame 805 was a predicted frame, the reconstructed prediction residual is added to the motion-compensated current frame 835 to form the reconstructed current frame. A frame store (e.g., frame store 820) buffers the reconstructed current frame for use in predicting another frame. In some embodiments, the encoder applies a deblocking filter to the reconstructed frame to adaptively smooth discontinuities in the blocks of the frame.
The entropy coder 880 compresses the output of the quantizer 870 as well as certain side information (e.g., motion information 815, spatial extrapolation modes, quantization step size). Typical entropy coding techniques include arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above. The entropy coder 880 typically uses different coding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular coding technique.
The entropy coder 880 puts compressed video information 895 in the buffer 890. A buffer level indicator is fed back to bit rate adaptive modules.
The compressed video information 895 is depleted from the buffer 890 at a constant or relatively constant bit rate and stored for subsequent streaming at that bit rate. Therefore, the level of the buffer 890 is primarily a function of the entropy of the filtered, quantized video information, which affects the efficiency of the entropy coding. Alternatively, the encoder system 800 streams compressed video information immediately following compression, and the level of the buffer 890 also depends on the rate at which information is depleted from the buffer 890 for transmission.
Before or after the buffer 890, the compressed video information 895 can be channel coded for transmission over the network. The channel coding can apply error detection and correction data to the compressed video information 895.
B. Video Decoder
The decoder system 900 decompresses predicted frames and key frames.
A buffer 990 receives the information 995 for the compressed video sequence and makes the received information available to the entropy decoder 980. The buffer 990 typically receives the information at a rate that is fairly constant over time, and includes a jitter buffer to smooth short-term variations in bandwidth or transmission. The buffer 990 can include a playback buffer and other buffers as well. Alternatively, the buffer 990 receives information at a varying rate. Before or after the buffer 990, the compressed video information can be channel decoded and processed for error detection and correction.
The entropy decoder 980 entropy decodes entropy-coded quantized data as well as entropy-coded side information (e.g., motion information 915, spatial extrapolation modes, quantization step size), typically applying the inverse of the entropy encoding performed in the encoder. Entropy decoding techniques include arithmetic decoding, differential decoding, Huffman decoding, run length decoding, LZ decoding, dictionary decoding, and combinations of the above. The entropy decoder 980 frequently uses different decoding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular decoding technique.
A motion compensator 930 applies motion information 915 to one or more reference frames 925 to form a prediction 935 of the frame 905 being reconstructed. For example, the motion compensator 930 uses a macroblock motion vector to find a macroblock in a reference frame 925. A frame buffer (e.g., frame buffer 920) stores previously reconstructed frames for use as reference frames. Typically, B-frames have more than one reference frame (e.g., a temporally previous reference frame and a temporally future reference frame). Accordingly, the decoder system 900 can comprise separate frame buffers 920 and 922 for backward and forward reference frames.
The motion compensator 930 can compensate for motion at pixel, ½ pixel, ¼ pixel, or other increments, and can switch the resolution of the motion compensation on a frame-by-frame basis or other basis. The resolution of the motion compensation can be the same or different horizontally and vertically. Alternatively, a motion compensator applies another type of motion compensation. The prediction by the motion compensator is rarely perfect, so the decoder 900 also reconstructs prediction residuals.
When the decoder needs a reconstructed frame for subsequent motion compensation, a frame buffer (e.g., frame buffer 920) buffers the reconstructed frame for use in predicting another frame. In some embodiments, the decoder applies a deblocking filter to the reconstructed frame to adaptively smooth discontinuities in the blocks of the frame.
An inverse quantizer 970 inverse quantizes entropy-decoded data. In general, the inverse quantizer applies uniform, scalar inverse quantization to the entropy-decoded data with a step-size that varies on a frame-by-frame basis or other basis. Alternatively, the inverse quantizer applies another type of inverse quantization to the data, for example, a non-uniform, vector, or non-adaptive quantization, or directly inverse quantizes spatial domain data in a decoder system that does not use inverse frequency transformations.
An inverse frequency transformer 960 converts the quantized, frequency domain data into spatial domain video information. For block-based video frames, the inverse frequency transformer 960 applies an inverse DCT [“IDCT”] or variant of IDCT to blocks of the DCT coefficients, producing pixel data or prediction residual data for key frames or predicted frames, respectively. Alternatively, the inverse frequency transformer 960 applies another conventional inverse frequency transform such as an inverse Fourier transform or uses wavelet or subband synthesis.
When a skipped macroblock is signaled in the bit stream of information 995 for a compressed sequence of video frames, the decoder 900 reconstructs the skipped macroblock without using information (e.g., motion information and/or residual information) normally included in the bit stream for non-skipped macroblocks.
III. Overview of Motion Vector Coding
The described techniques and tools improve compression efficiency for predicted images (e.g., frames) in video sequences. Described techniques and tools apply to a one-motion-vector-per-macroblock (1MV) model of motion estimation and compensation for predicted frames (e.g., P-frames). Described techniques and tools also employ specialized mechanisms to encode motion vectors in certain situations (e.g., four-motion-vectors-per-macroblock (4MV) models, mixed 1MV and 4MV models, B-frames, and interlace coding) that give rise to data structures that are not homogeneous with the 1MV model. For more information on interlace video, see U.S. patent application Ser. No. 10/622,284, entitled, “Intraframe and Interframe Interlace Coding and Decoding,” filed Jul. 18, 2003. Described techniques and tools are also extensible to future formats.
With an increased average number of motion vectors per frame (e.g., in 4MV and mixed 1MV and 4MV models), it is desirable to design a more efficient scheme to encode motion vector information. As in earlier standards, described techniques and tools use predictive coding to compress motion vector information. However, there are several key differences. The described techniques and tools, individually or in combination, include the features detailed in the sections below.
In some embodiments, an encoder derives motion vectors for chrominance planes from luminance motion vectors. However, the techniques and tools described herein are equally applicable to chrominance motion in other embodiments. For example, a video encoder may choose to explicitly send chrominance motion vectors as part of a bit stream, and can use techniques and tools similar to those described herein to encode/decode the chrominance motion vectors.
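For instance, with 4:2:0 subsampling, such a derivation might simply halve each luminance motion vector component. The sketch below makes that simplifying assumption; practical derivations apply more careful rounding (see the “Chrominance Motion Vector Rounding” application cited below):

```c
/* Sketch: derive a chrominance motion vector from a luminance motion vector
 * for 4:2:0 video, where the chrominance planes have half the resolution of
 * the luminance plane in each dimension. */
void derive_chroma_mv(int luma_mv_x, int luma_mv_y,
                      int *chroma_mv_x, int *chroma_mv_y)
{
    /* Simple halving; practical codecs apply rounding rules so that chroma
     * vectors avoid costly sub-pixel interpolation positions. */
    *chroma_mv_x = luma_mv_x / 2;
    *chroma_mv_y = luma_mv_y / 2;
}
```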
IV. Extended Motion Vector Alphabet
In some embodiments, an extended motion vector alphabet includes joint codes for jointly coding motion vector information with other information for a block, macroblock, or other set of pixels.
A. Signaling Intra Macroblocks and Blocks
The signaling of an intra-coded set of pixels (e.g., block, macroblock, etc.) can be achieved by extending the alphabet of motion vectors to allow for a symbol (e.g., an I/P switch) indicating an intra area. Intra macroblocks and blocks do not have a true motion vector associated with them. A motion vector (or in the case of an intra-coded set of pixels, a pseudo motion vector) can be appended to an intra symbol to yield a triple of the form <Intra, MVx, MVy> that indicates whether the set of pixels (e.g., macroblock or block) is coded as intra, and if not, what its motion vector should be. When the intra flag is set, MVx and MVy are “don't care” conditions. When the intra flag is zero, MVx and MVy correspond to computed motion vector components.
Joint coding of an intra symbol with motion vectors allows an elegant yet efficient implementation with the ability to switch blocks to intra when four extended motion vectors are used in a macroblock.
B. Signaling Residual Information
In addition to the intra symbol, some embodiments jointly code the presence or absence of subsequent residual symbols with a motion vector. For example, a “last” (or terminal) symbol indicates whether the joint code containing the motion vector or pseudo motion vector is a terminal symbol of a given macroblock, block or field, or if residual data follows (e.g., when last=1 (i.e. last is true), no subsequent data pertains to the area). This joint code can be referred to as an extended motion vector, and is of the form <intra, MVx, MVy, last>. In the syntax diagrams below, an extended motion vector is represented as MV*.
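As a minimal illustration, the quadruple can be represented as a small structure (the field names here are hypothetical):

```c
/* Sketch of the extended motion vector quadruple <intra, MVx, MVy, last>.
 * When intra is set, mvx and mvy are "don't care" values; when last is set,
 * no subsequent data (e.g., CBP or residuals) is coded for the area. */
typedef struct {
    int intra; /* I/P switch: 1 if the block/macroblock is intra-coded     */
    int mvx;   /* horizontal component (a pseudo motion vector when intra) */
    int mvy;   /* vertical component                                       */
    int last;  /* terminal symbol: 1 if no subsequent data follows         */
} ExtendedMV;
```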
In some embodiments, the extended motion vector symbol <inter, 0, 0, true> is an invalid symbol. The condition that would ordinarily lead to this symbol is instead treated as a special condition called a “skip” condition. Under the skip condition, the current set of pixels (e.g., macroblock) can be predicted (to within quantization error) from its motion vector. No additional data (e.g., residual data) is necessary to decode this area. For efficiency reasons, the skip condition can be signaled at the frame level. Therefore, in some embodiments, this symbol is not present in the bit stream. For example, skipped macroblocks have a motion vector such that the differential motion vector is (0, 0) or have no motion at all. In other words, in skipped macroblocks where some motion is present, the skipped macroblocks use the same motion vector as the predicted motion vector. Skipped macroblocks are also defined for 4MV macroblocks, and other cases. For more information on skipped macroblocks, see U.S. patent application Ser. No. 10/321,415, entitled, “Skip Macroblock Coding,” filed Dec. 16, 2002.
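A minimal sketch of detecting the condition that is signaled as a skip rather than coded as an extended motion vector symbol:

```c
/* Sketch: the condition that would produce the invalid symbol
 * <inter, 0, 0, true> -- inter coding, zero differential motion, and no
 * residual -- is signaled as a skip instead of being coded as an MV*. */
int is_skip_condition(int intra, int dmv_x, int dmv_y, int has_residual)
{
    return !intra && dmv_x == 0 && dmv_y == 0 && !has_residual;
}
```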
The last symbol applies to both intra signals and inter motion vectors. The way this symbol is used in different embodiments depends on many factors, including whether a macroblock is a 1MV or 4MV macroblock, or an interlace macroblock (e.g., a field-coded, 2MV macroblock). Moreover, in some embodiments, the last symbol is interpreted differently for interpolated mode B-frames. These concepts are covered in detail below.
V. Syntax for Coding Motion Vector Information
In some embodiments, a video encoder encodes video images using a sub-frame-level syntax (e.g., a macroblock-level syntax) including extended motion vectors. For example, for macroblocks in a video sequence having progressive and interlace P-frames and B-frames, each macroblock is coded with zero, one, two or four associated extended motion vector symbols. The specific number of motion vectors depends on the specifics of the coding mode (e.g., whether the frame is a P-frame or B-frame, progressive or interlace, 1MV or 4MV-coded, and/or skip coded). Coding modes also determine the order in which the motion vector information is sent. The following sections describe the syntax for each of these cases.
In the following sections and the corresponding figures, the symbol MBH denotes a macroblock header, a placeholder for any macroblock-level information other than a motion vector, I/P switch, or coded block pattern (CBP). Examples of elements in MBH are skip bit information, motion vector mode information, coding mode information for B-frames, and frame/field information for interlace frames.
A. 1MV Macroblock Syntax
CBP indicates which of the blocks making up a macroblock have attached residual information. For example, for a 4:2:0 macroblock with four luminance blocks and two chrominance blocks, CBP includes six bits; a corresponding CBP bit indicates whether residual information exists for each block. In MV*, the terminal symbol “last” is set to 1 if CBP is all zero, indicating that there are no residuals for any of the six blocks in the macroblock. In this case, CBP is not sent. If CBP is not all zero (which under many circumstances is the more likely case), the terminal symbol is set to 0, and the CBP is sent, followed by the residual data for blocks that have residuals.
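The resulting write order for a 1MV macroblock can be sketched as follows (the type and writer-function names are hypothetical, and entropy coding details are omitted):

```c
typedef struct { int intra, mvx, mvy, last; } MVStar;  /* MV* symbol   */
typedef struct { MVStar mv; int cbp; } MB1MV;          /* cbp: 6 bits  */

/* Writer primitives assumed to exist elsewhere (hypothetical names). */
void write_mb_header(const MB1MV *mb);
void write_extended_mv(const MVStar *mv);
void write_cbp(int cbp);
void write_block_residual(const MB1MV *mb, int block);

/* Emit one 1MV macroblock: MBH, then MV* with "last" derived from CBP, then
 * CBP and residuals only when at least one of the six blocks has residuals. */
void write_1mv_macroblock(MB1MV mb)
{
    write_mb_header(&mb);
    mb.mv.last = (mb.cbp == 0);   /* no residuals for any of the 6 blocks */
    write_extended_mv(&mb.mv);
    if (mb.cbp != 0) {
        write_cbp(mb.cbp);
        for (int i = 0; i < 6; i++)
            if (mb.cbp & (1 << i))
                write_block_residual(&mb, i);
    }
}
```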
B. 4MV Macroblock Syntax
In a 4MV macroblock, up to four extended motion vector symbols are sent, one for each luminance block in the macroblock.
C. 2MV Macroblock Syntax
In a field-coded interlace macroblock (a 2MV macroblock), two extended motion vector symbols are sent, one for each field.
D. Macroblock Syntax for Interlace Field-type Macroblocks in P-Frames and Forward/Backward Predicted Field-Type Macroblocks in B-Frames
E. Macroblock Syntax for Interlace Field-Type Interpolated Macroblocks in B-Frames
F. Simplified CBP and MV* Alphabets
In the syntax formats described above, the coded block pattern CBP=0 (i.e., all bits in CBP are equal to zero) does not occur in the bit stream. Accordingly, in some embodiments, for the sake of efficiency, this symbol is not present in the CBP alphabet. For example, for the six blocks in a 4:2:0 macroblock, the coded block pattern alphabet comprises 2^6−1=63 symbols. Moreover, as discussed earlier, the MV* symbol <intra switch, MVx, MVy, last>=<inter, 0, 0, true> is an invalid symbol. Occurrences of this symbol can be coded using skip bits, or in some cases, CBP.
VI. Generation of Motion Vector Predictors and Differential Motion Vectors
In some embodiments, to exploit continuity in motion vector information, motion vectors are differentially predicted and encoded from neighboring sets of pixels (e.g., blocks, macroblocks, etc.). For example, a video encoder/decoder uses three motion vectors in the neighborhood of a current block, macroblock or field for computing a prediction. The specific features of a predictor calculation technique depend on factors such as whether the sequence is interlace or progressive, and whether one, two, or four motion vectors are being generated for a given macroblock. For example, in a 1MV macroblock, the macroblock has one corresponding motion vector for the entire macroblock. In a 4MV macroblock, the macroblock has one corresponding motion vector for each block in the macroblock.
In the following sections, there is only one numerical prediction for a given motion vector, and this is calculated by analyzing candidates (which may also be referred to as predictors) for the motion vector predictor.
A. Motion Vector Candidates in 1MV P-frames
B. Motion Vector Candidates in Mixed-MV P-frames
C. Motion Vector Candidates in Interlace P-Frames
In some embodiments, for field-coded macroblocks, the motion vectors of corresponding fields of the neighboring macroblocks are used as candidates for predicting a motion vector for a top or bottom field.
D. Calculating a Predictor from Candidates
Given three motion vector predictor candidates, the following pseudocode illustrates the process for calculating the motion vector predictor.
The function cmedian3 is the component-wise median of three two-dimensional vectors.
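A minimal sketch of that computation, assuming the predictor is simply the component-wise median of the three candidates:

```c
typedef struct { int x, y; } MV;

/* Median of three scalars: clamp c into [min(a,b), max(a,b)]. */
static int median3(int a, int b, int c)
{
    int lo = a < b ? a : b;
    int hi = a < b ? b : a;
    return c < lo ? lo : (c > hi ? hi : c);
}

/* cmedian3: component-wise median of three two-dimensional vectors. */
MV cmedian3(MV a, MV b, MV c)
{
    MV pred;
    pred.x = median3(a.x, b.x, c.x);
    pred.y = median3(a.y, b.y, c.y);
    return pred;
}
```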
E. Pullback of Predictor
In some embodiments, after the predictor is computed, an encoder/decoder verifies whether the area of the image referenced by the predictor is within the frame. If the area is entirely outside the frame, it is pulled back to an area that overlaps the frame by one pixel width, overlapping the frame at the area closest to the original area.
In some embodiments, an encoder/decoder applies rules along the following lines when performing predictor pullbacks.
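A minimal sketch of such a pullback, assuming pixel-unit predictor components and a square block (the coordinate conventions and names here are assumptions, not specified rules):

```c
/* Sketch: pull a motion vector predictor back so that the block it references
 * overlaps the frame by at least one pixel.  (mb_x, mb_y) is the pixel
 * position of the current block; block_size is its width/height in pixels. */
void pull_back_predictor(int *pred_x, int *pred_y, int mb_x, int mb_y,
                         int frame_w, int frame_h, int block_size)
{
    int ref_x = mb_x + *pred_x;   /* top-left corner of the referenced area */
    int ref_y = mb_y + *pred_y;

    if (ref_x <= -block_size) ref_x = 1 - block_size;  /* keep 1-pel overlap */
    if (ref_x >= frame_w)     ref_x = frame_w - 1;
    if (ref_y <= -block_size) ref_y = 1 - block_size;
    if (ref_y >= frame_h)     ref_y = frame_h - 1;

    *pred_x = ref_x - mb_x;
    *pred_y = ref_y - mb_y;
}
```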
F. Hybrid Motion Vectors
In some embodiments, if a P-frame is 1MV or mixed-MV, a calculated predictor is tested relative to the A and C predictors (the left and top candidate predictors, respectively). This test determines whether the motion vector must be hybrid coded.
When the variance among the three motion vector candidates used in a prediction is high, the true motion vector is likely to be close to one of the candidate vectors, especially the vectors to the left and the top of the current macroblock or block (Predictors A and C, respectively). When the candidates are far apart, their component-wise median is often not an accurate predictor of motion in a current macroblock. Hence, in some embodiments, an encoder sends an additional bit indicating which candidate the true motion vector is closer to. For example, when the indicator bit indicates that the motion vector for Predictor A or C is the closer one, a decoder uses it as the predictor. The decoder must determine for each motion vector whether to expect a hybrid motion indicator bit, and this determination can be made from causal motion vector information.
The following pseudo-code illustrates this determination. In this example, when either Predictor A or Predictor C is intra-coded, the corresponding motion is deemed to be zero.
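A minimal sketch of that determination follows; the candidate-distance test and the threshold value are illustrative assumptions rather than specified parts of the algorithm:

```c
#include <stdlib.h>   /* abs() */

typedef struct { int x, y; } MV;

/* Sketch: decide whether a hybrid motion vector indicator bit is expected.
 * pred is the component-wise median predictor; a and c are the left and top
 * candidates (treated as zero motion when the corresponding neighbor is
 * intra-coded).  The L1 distance measure and threshold are assumptions. */
int expect_hybrid_bit(MV pred, MV a, int a_intra, MV c, int c_intra,
                      int threshold)
{
    if (a_intra) a.x = a.y = 0;
    if (c_intra) c.x = c.y = 0;
    int dist_a = abs(pred.x - a.x) + abs(pred.y - a.y);
    int dist_c = abs(pred.x - c.x) + abs(pred.y - c.y);
    return dist_a > threshold || dist_c > threshold;  /* candidates far apart */
}
```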
An advantage of the above approach is that it uses the computed predictor—and in the typical case when there is no hybrid motion, the additional computations are not expensive.
In some embodiments, in a bit stream syntax, the hybrid motion vector indicator bit is sent together with the motion vector itself. Hybrid motion vectors may occur even when a set of pixels (e.g., block, macroblock, etc.) is skipped, in which case the one bit indicates whether to use A or C as the true motion for the set of pixels. In such cases, in the bit stream syntax, the hybrid bit is sent where the motion vector would have been had it not been skipped.
Hybrid motion vector prediction can be enabled or disabled in different situations. For example, in some embodiments, hybrid motion vector prediction is not used for interlace pictures (e.g., field-coded P pictures). A decision to use hybrid motion vector prediction can be made at frame level, sequence level, or some other level.
VII. Motion Vector Modes
In some embodiments, motion vectors are specified to half-pixel or quarter-pixel accuracy. Frames can also be 1MV frames, or mixed 1MV/4MV frames, and can use bicubic or bilinear interpolation. These choices make up the motion vector mode. In some embodiments, the motion vector mode is sent at the frame level. Alternatively, an encoder chooses motion vector modes on some other basis, and/or sends motion vector mode information at some other level.
In some embodiments, an encoder uses one of four motion compensation modes. The frame-level mode indicates (a) the possible number of motion vectors per macroblock, (b) the motion vector sampling accuracy, and (c) the interpolation filter. The four modes (ranked in order of complexity/overhead cost) are: (1) mixed one- and four-motion vector, quarter-pixel resolution, bicubic interpolation filter; (2) one-motion vector, quarter-pixel resolution, bicubic interpolation filter; (3) one-motion vector, half-pixel resolution, bicubic interpolation filter; and (4) one-motion vector, half-pixel resolution, bilinear interpolation filter.
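A compact way to represent the frame-level mode signal is sketched below (the enumerator names are illustrative assumptions; the actual code words are not specified here):

```c
/* The four frame-level motion vector modes, in decreasing order of
 * complexity/overhead cost.  Each mode implies the possible number of motion
 * vectors per macroblock, the sampling accuracy, and the interpolation filter. */
typedef enum {
    MV_MODE_MIXED_QPEL_BICUBIC,  /* mixed 1MV/4MV, quarter-pel, bicubic */
    MV_MODE_1MV_QPEL_BICUBIC,    /* 1MV, quarter-pel, bicubic           */
    MV_MODE_1MV_HPEL_BICUBIC,    /* 1MV, half-pel, bicubic              */
    MV_MODE_1MV_HPEL_BILINEAR    /* 1MV, half-pel, bilinear             */
} MVMode;
```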
VIII. Motion Vector Ranges and Differential Coding
Some embodiments use motion vectors that are specified in dyadic (power of two) ranges, with the range of permissible motion vectors in the x-component being larger than the range in the y-component. The range in the x-component is generally larger because (a) high motion typically occurs in the horizontal direction and (b) the cost of motion compensation with a large displacement is typically much higher in the vertical direction.
Some embodiments specify a baseline motion vector range of −64 to 63.x pixels for the x-component, and −32 to 31.x pixels for the y-component. The “.x” fraction is dependent on motion vector resolution. For example, for half-pixel sampling, .x is 0.5, and for quarter-pixel accuracy, .x is 0.75. The total numbers of discrete motion vector components in the x and y directions are therefore 512 and 256, respectively, for bicubic filters (for bilinear filters, these numbers are 256 and 128). In other embodiments, the range is expanded to allow longer motion vectors in “broadcast modes.”
Table 1 shows different ranges for motion vectors (in addition to the baseline), signaled by the variable-length codeword MVRANGE.
Motion vectors are transmitted in the bit stream by encoding their differences from causal predictors. Since the ranges of both motion vectors and predictors are bounded (e.g., by one of the ranges described above), the range of the differences is also bounded. In order to maximize encoding efficiency, rollover arithmetic is used to encode the motion vector difference.
Rollover(v, K) retains the low K bits of v, interpreted as a signed K-bit value. For any A and B representable in K bits, B=Rollover(A+Rollover(B−A, K), K).
Replacing A with MVPx and B with MVx, the following relationship holds:
MVx=Rollover(MVPx+Rollover(MVx−MVPx, K), K)
where K is chosen as the logarithm to base 2 of the motion vector alphabet size, assuming the size is a power of 2. The differential motion vector ΔMVx is set to Rollover(MVx−MVPx, K), which is represented in K bits.
In some embodiments, rollover arithmetic is applied according to the following example.
Assume that the current frame is encoded using the baseline motion vector range, with quarter pixel accuracy motion vectors. The range of both the x-component of a motion vector of a macroblock (MVx) and the x-component of its predicted motion (MVPx) is (−64, 63.75). The alphabet size for each is 2^9=512. In other words, there are 512 distinct values each for MVx and MVPx.
The difference ΔMVx (MVx−MVPx) can be in the range (−127.75, 127.75). Therefore, the alphabet size for ΔMVx is 2^10−1=1023. However, using rollover arithmetic, 9 bits of precision are sufficient to transmit the difference signal and still uniquely recover MVx from MVPx.
Let MVx=−63 and MVPx=63 with K=log2(512)=9. At quarter-pixel motion resolution, with an alphabet size of 512, the fixed point hexadecimal representations of MVx and MVPx (−252 and 252 in quarter-pel units) are respectively 0xFFFFFF04 and 0x0FC, of which only the last 9 bits are unique. MVx−MVPx=0xFFFFFE08. The differential motion vector value is:
ΔMVx=Rollover(0xFFFFFE08,9)=0x008
which is a positive quantity, although the raw difference is negative. On the decoder side, MVx is recovered from MVPx:
MVx=Rollover(0x0FC+0x008, 9)=Rollover(0x104, 9)=0xFFFFFF04
which is the fixed point hexadecimal representation of −63.
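The rollover operation itself reduces to masking and sign extension. A minimal sketch that reproduces the worked example above (two's-complement arithmetic assumed; names are illustrative):

```c
#include <assert.h>

/* Rollover(v, k): keep the low k bits of v and interpret them as a signed
 * k-bit value (two's-complement arithmetic assumed). */
static int rollover(int v, int k)
{
    v &= (1 << k) - 1;
    if (v >= 1 << (k - 1))
        v -= 1 << k;
    return v;
}

int main(void)
{
    /* The worked example: quarter-pel units, K = 9.
     * MVx = -63 pixels = -252 units (0xFFFFFF04); MVPx = 63 pixels = 252. */
    int mvx = -252, mvpx = 252, k = 9;

    int dmv = rollover(mvx - mvpx, k);  /* encoder: 0x008, a positive value */
    assert(dmv == 0x008);

    int rec = rollover(mvpx + dmv, k);  /* decoder: recovers -252 (= -63.0) */
    assert(rec == mvx);
    return 0;
}
```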
The same technique is used for coding the y-component. For example, K is set to 8 for the baseline motion vector range at quarter-pixel resolution. In general, the value of K changes between x- and y-components, between motion vector resolutions, and between motion vector ranges.
IX. Extensions
In addition to the embodiments described above, and the previously described variations of those embodiments, the following is a list of possible extensions of some of the described techniques and tools. It is by no means exhaustive.
Having described and illustrated the principles of our invention with reference to various embodiments, it will be recognized that the various embodiments can be modified in arrangement and detail without departing from such principles. It should be understood that the programs, processes, or methods described herein are not related or limited to any particular type of computing environment, unless indicated otherwise. Various types of general purpose or specialized computing environments may be used with or perform operations in accordance with the teachings described herein. Elements of embodiments shown in software may be implemented in hardware and vice versa.
In view of the many possible embodiments to which the principles of our invention may be applied, we claim as our invention all such embodiments as may come within the scope and spirit of the following claims and equivalents thereto.
The present application is a continuation of U.S. patent application Ser. No. 10/622,841, entitled “Coding of Motion Vector Information,” filed Jul. 18, 2003, the disclosure of which is hereby incorporated by reference. The following U.S. patent applications relate to the present application and are hereby incorporated herein by reference: 1) U.S. patent application Ser. No. 10/622,378, entitled, “Advanced Bi-Directional Predictive Coding of Video Frames,” filed Jul. 18, 2003, now U.S. Pat. No. 7,609,763; 2) U.S. patent application Ser. No. 10/622,284, entitled, “Intraframe and Interframe Interlace Coding and Decoding,” filed Jul. 18, 2003, now U.S. Pat. No. 7,426,308; 3) U.S. patent application Ser. No. 10/321,415, entitled, “Skip Macroblock Coding,” filed Dec. 16, 2002, now U.S. Pat. No. 7,200,275; and 4) U.S. patent application Ser. No. 10/379,615, entitled “Chrominance Motion Vector Rounding,” filed Mar. 4, 2003, now U.S. Pat. No. 7,116,831.
20050036700 | Nakaya | Feb 2005 | A1 |
20050036759 | Lin et al. | Feb 2005 | A1 |
20050053137 | Holcomb | Mar 2005 | A1 |
20050053147 | Mukerjee et al. | Mar 2005 | A1 |
20050053149 | Mukerjee et al. | Mar 2005 | A1 |
20050053292 | Mukerjee et al. | Mar 2005 | A1 |
20050058205 | Holcomb et al. | Mar 2005 | A1 |
20050100093 | Holcomb | May 2005 | A1 |
20050226335 | Lee et al. | Oct 2005 | A1 |
20060013307 | Olivier et al. | Jan 2006 | A1 |
Number | Date | Country |
---|---|---|
0 279 053 | Aug 1988 | EP |
0 397 402 | Nov 1990 | EP |
0 526 163 | Feb 1993 | EP |
0 535 746 | Apr 1993 | EP |
0 540 350 | May 1993 | EP |
0 542 474 | May 1993 | EP |
0 588 653 | Mar 1994 | EP |
0 614 318 | Sep 1994 | EP |
0 625 853 | Nov 1994 | EP |
0 651 574 | May 1995 | EP |
0 676 900 | Oct 1995 | EP |
0 771 114 | May 1997 | EP |
0 786 907 | Jul 1997 | EP |
0 825 778 | Feb 1998 | EP |
0 830 029 | Mar 1998 | EP |
0 884 912 | Dec 1998 | EP |
0 944 245 | Sep 1999 | EP |
1 335 609 | Aug 2003 | EP |
1 411 729 | Apr 2004 | EP |
2328337 | Feb 1999 | GB |
2332115 | Jun 1999 | GB |
2343579 | May 2000 | GB |
61205086 | Sep 1986 | JP |
62 213 494 | Sep 1987 | JP |
3001688 | Jan 1991 | JP |
3 129 986 | Mar 1991 | JP |
05-199422 | Aug 1993 | JP |
6 078 295 | Mar 1994 | JP |
6 078 298 | Mar 1994 | JP |
06-276481 | Sep 1994 | JP |
06-276511 | Sep 1994 | JP |
6292188 | Oct 1994 | JP |
07-087331 | Mar 1995 | JP |
7-274171 | Oct 1995 | JP |
08-140099 | May 1996 | JP |
09-322163 | Dec 1997 | JP |
10 056 644 | Feb 1998 | JP |
10-271512 | Oct 1998 | JP |
2001-346215 | Dec 2001 | JP |
2004-048711 | Feb 2004 | JP |
2004-215229 | Jul 2004 | JP |
WO 9844743 | Oct 1998 | WO |
WO 0033581 | Aug 2000 | WO |
WO 03026296 | Mar 2003 | WO |
WO 03026315 | Mar 2003 | WO |
WO 03047272 | Jun 2003 | WO |
WO 03063503 | Jul 2003 | WO |
Entry |
---|
International Organization for Standardisation, ISO/IEC JTC1/SC29/WG11, MPEG 92/535, AVC-356, Coded Representation of Picture and Audio Information, “Draft Proposal for Test Model 2, revision 2,” Experts group on ATM Video Coding, Oct. 1992, 84 pages. |
ITU (Borgwardt), “Core Experiment on Interlaced Video Coding,” Study Group 16, Question 6, Video Coding Experts Group (VCEG), VCEG-O59, 15th Meeting: Pattaya, Thailand, Dec. 4-6, 2001, 10 pages. |
Joint Video Team (JVT) of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16 (Jeon and Tourapis), “B Pictures in JVT,” JVT-D155, JVT Meeting, MPEG Meeting, Klagenfurt, Austria, Jul. 22-26, 2002, 19 pages. |
Melanson, “VP3 Bitstream Format and Decoding Process,” v0.5, 21 pp. (document marked Dec. 8, 2004). |
On2 Technologies Inc., “On2 Introduces TrueMotion VP3.2,” 1 p., press release dated Aug. 16, 2000 (downloaded from the World Wide Web on Dec. 6, 2012). |
Wiegand, “H.26L Test Model Long-Term No. 9 (TML-9) draft 0,” ITU-Telecommunications Standardization Sector, Study Group 16, VCEG-N83, 74 pp. (Dec. 2001). |
Wikipedia, “Theora,” 10 pp. (downloaded from the World Wide Web on Dec. 6, 2012). |
Wikipedia, “VP3,” 4 pp. (downloaded from the World Wide Web on Dec. 6, 2012). |
Xiph.org Foundation, “Theora I Specification,” 206 pp. (Sep. 17, 2004). |
Xiph.org Foundation, “Theora Specification,” 206 pp. (Aug. 5, 2009). |
ITU (Kerofsky et al.), “Adaptive Syntax for MTYPE,” Study Group 16, Question 6, Video Coding Experts Group (VCEG), VCEG-M14, 13th Meeting: Austin, Texas, Apr. 2-4, 2001, 6 pages. |
ITU (Sullivan and Wiegand), “Meeting Report of the Thirteenth Meeting (Meeting M) of the ITU-T Video Coding Experts Group (VCEG) [Q.6/16],” Austin, Texas, Apr. 2-4, 2001, 34 pages. |
ISO/IEC, “MPEG-4 Video Verification Model Version 7.1,” ISO/IEC JTC1/SC29/WG11 MPEG97/M2249, 204 pp. (Ed. Ebrahimi) (Stockholm, Jul. 13, 1997). |
ISO/IEC, “Draft of 14496-2 Third Edition, Information Technology—Coding of Audio-Visual Objects—Part 2: Visual,” ISO/IEC JTC 1/SC 29/WG 11 M9477, 590 pp. (Pattaya, Mar. 3, 2003). |
U.S. Appl. No. 60/341,674, filed Dec. 17, 2001, Lee et al. |
U.S. Appl. No. 60/488,710, filed Jul. 18, 2003, Srinivasan et al. |
U.S. Appl. No. 60/501,081, filed Sep. 7, 2003, Srinivasan et al. |
U.S. Appl. No. 60/501,133, filed Sep. 7, 2003, Holcomb et al. |
Bartkowiak et al., “Color Video Compression Based on Chrominance Vector Quantization,” 7th Int'l Workshop on Systems, Signals and Image Processing (IWSSIP 2000), Maribor, Slovenia, Jun. 7-9, 2000, pp. 107-110. |
Benzler et al., “Improving multiresolution motion compensating hybrid coding by drift reduction,” Picture Coding Symposium, 4 pp. (Mar. 1996). |
Benzler et al., “Motion and aliasing compensating prediction with quarter-pel accuracy and adaptive overlapping blocks as proposal for MPEG-4 tool evaluation—Technical description,” ISO/IEC JTC1/SC29/WG11, MPEG 95/0552, 5 pp. (document marked Dec. 1995). |
Benzler, “Results of core experiments P8 (Motion and Aliasing Compensating Prediction),” ISO/IEC JTC1/SC29/WG11, MPEG 97/2625, 8 pp. (document marked Oct. 1997). |
Borman et al., “Block-matching Sub-pixel Motion Estimation from Noisy, Under-Sampled Frames—an Empirical Performance Evaluation,” SPIE Visual Comm. & Image Processing, 10 pp. (Jan. 1999). |
Conklin et al., “Multi-resolution Motion Estimation,” Proc. ICASSP '97, Munich, Germany, 4 pp. (Apr. 1997). |
Davis et al., “Equivalence of subpixel motion estimators based on optical flow and block matching,” Proc. IEEE Int'l Symposium on Computer Vision, pp. 7-12 (Nov. 1995). |
de Haan et al., “Sub-pixel motion estimation with 3-D recursive search block-matching,” Signal Processing: Image Communication, vol. 6, pp. 229-239 (Jun. 1994). |
“DivX Multi Standard Video Encoder,” 2 pp. (Downloaded from the World Wide Web on Jan. 24, 2006). |
Ericsson, “Fixed and Adaptive Predictors for Hybrid Predictive/Transform Coding,” IEEE Transactions on Comm., vol. COM-33, No. 12, pp. 1291-1302 (Dec. 1985). |
Flierl et al., “Multihypothesis Motion Estimation for Video Coding,” Proc. DCC, 10 pp. (Mar. 2001). |
Fogg, “Survey of Software and Hardware VLC Architectures,” SPIE, vol. 2186, pp. 29-37 (Feb. 9-10, 1994). |
Girod, “Efficiency Analysis of Multihypothesis Motion-Compensated Prediction for Video Coding,” IEEE Transactions on Image Processing, vol. 9, No. 2, pp. 173-183 (Feb. 2000). |
Girod, “Motion-Compensating Prediction with Fractional-Pel Accuracy,” IEEE Transactions on Comm., vol. 41, No. 4, pp. 604-612 (Apr. 1993). |
Girod, “Motion Compensation: Visual Aspects, Accuracy, and Fundamental Limits,” Motion Analysis and Image Sequence Processing, Kluwer Academic Publishers, pp. 125-152 (Mar. 1993). |
Horn et al., “Estimation of Motion Vector Fields for Multiscale Motion Compensation,” Proc. Picture Coding Symp. (PCS 97), pp. 141-144 (Sep. 1997). |
Hsu et al., “A Low Bit-Rate Video Codec Based on Two-Dimensional Mesh Motion Compensation with Adaptive Interpolation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, No. 1, pp. 111-117 (Jan. 2001). |
Huang et al., “Hardware architecture design for variable block size motion estimation in MPEG-4 AVC/JVT/ITU-T H.264,” Proc. of the 2003 Int'l Symposium on Circuits & Sys. (ISCAS '03), vol. 2, pp. 796-799 (May 2003). |
IBM Technical Disclosure Bulletin, “Advanced Motion Estimation for Moving Picture Expert Group Encoders,” vol. 39, No. 4, pp. 323-324 (Apr. 1996). |
ISO/IEC, “Information Technology—Coding of Audio-Visual Objects: Visual, ISO/IEC 14496-2, Committee Draft,” 330 pp. (Mar. 1998). |
ISO/IEC, “MPEG-4 Video Verification Model Version 10.0,” ISO/IEC JTC1/SC29/WG11, MPEG98/N1992 (ed. Ebrahimi) (document marked Feb. 1998). |
ISO/IEC, “ISO/IEC 11172-2, Information Technology—Coding of Moving Pictures and Associated Audio for Digital Storage Media at up to about 1.5 Mbit/s—Part 2: Video,” 112 pp. (Aug. 1993). |
ISO/IEC, “ISO/IEC CD 13818-2: Information technology—Generic coding of moving pictures and associated audio information—Part 2: video,” ISO/IEC JTC1/SC29 N659, 189 pp. (Dec. 1993). |
ISO/IEC, “Draft Text of Final Draft International Standard for Advanced Video Coding (ITU-T Rec. H.264 | ISO/IEC 14496-10 AVC),” ISO/IEC JTC 1/SC 29/WG 11 N5555, 242 pp. (Mar. 2003). |
ISO/IEC, “Text of Committee Draft of Joint Video Specification (ITU-T Rec. H.264 | ISO/IEC 14496-10 AVC),” ISO/IEC JTC1/SC29/WG11 MPEG02/N4810, 143 pp. (May 2002). |
ITU-T, “ITU-T Recommendation H.261, Video Codec for Audiovisual Services at p × 64 kbit/s,” 25 pp. (Mar. 1993). |
ITU-T, “ITU-T Recommendation H.262, Information Technology—Generic Coding of Moving Pictures and Associated Audio Information: Video,” 205 pp. (Jul. 1995). |
ITU-T, “ITU-T Recommendation H.263 Video Coding for Low Bit Rate Communication,” 167 pp. (Feb. 1998). |
ITU Q15-F-24, “MVC Video Codec—Proposal for H.26L,” Study Group 16, Video Coding Experts Group (Question 15), 28 pp. (document marked as generated in Oct. 1998). |
ITU-T, “H.26L Test Model Long Term Number 5 (TML-5) draft 0,” Study Group 16, Video Coding Experts Group (Question 15), Document Q15-K-59, 35 pp. (ed. Gisle Bjontegaard) (document dated Oct. 2000). |
ITU-T, “Adaptive Field/Frame Block Coding Experiment Proposal,” Study Group 16, Video Coding Experts Group (VCEG), VCEG-N76, Santa Barbara Meeting, Sep. 24-27, 2001. |
ITU (Borgwardt), “Core Experiment on Interlaced Video Coding,” Study Group 16, Question 6, Video Coding Experts Group (VCEG), VCEG-N85r1, 14th Meeting: Santa Barbara, California, Sep. 24-27, 2001, 10 pages. |
Iwahashi et al., “A Motion Compensation Technique for Downscaled Pictures in Layered Coding,” IEICE Transactions on Comm., vol. E77-B, No. 8, pp. 1007-1012 (Aug. 1994). |
Jang et al., “MPEG-4 Version 2 Visual Working Draft Rev 5.1,” ISO/IEC JTC1/SC29 WG11 M4257, 462 pp. (Dec. 1998). |
Jeong et al., “Adaptive Huffman Coding of 2-D DCT Coefficients for Image Sequence Compression,” Signal Processing: Image Communication, vol. 7, 11 pp. (Mar. 1995). |
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, “Joint Model Number 1, Revision 1 (JM-1r1),” JVT-A003r1, Pattaya, Thailand, 80 pp. (Dec. 2001) [document marked “Generated: Jan. 18, 2002”]. |
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, “Joint Committee Draft (CD),” JVT-C167, 3rd Meeting: Fairfax, Virginia, USA, 142 pp. (May 2002). |
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, “Final Joint Committee Draft of Joint Video Specification (ITU-T Recommendation H.264, ISO/IEC 14496-10 AVC),” JVT-D157, 218 pp. (Aug. 2002). |
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG (Tourapis), “Direct Prediction for Predictive (P) and Bidirectionally Predictive (B) frames in Video Coding,” JVT-C128 (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), 3rd Meeting: Fairfax, Virginia, USA, May 6-10, 2002, 12 pages. |
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, (Tourapis), “Timestamp Independent Motion Vector Prediction for P and B frames with Division Elimination,” JVT-D040 (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), 4th Meeting: Klagenfurt, Austria, Jul. 22-26, 2002, 18 pages. |
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, (Borgwardt), “Core Experiment on Macroblock Adaptive Frame/Field Encoding,” JVT-B117 (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), 2nd Meeting: Geneva, Switzerland, Jan. 29-Feb. 1, 2002, 8 pages. |
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, (Kondo), “Memory Reduction for Temporal Technique of Direct Mode,” JVT-E076 (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), 5th Meeting: Geneva, Switzerland, Oct. 9-17, 2002, 12 pages. |
Keys, “Cubic Convolution Interpolation for Digital Image Processing,” IEEE Transactions on Acoustics, Speech & Signal Processing, vol. ASSP-29, No. 6, pp. 1153-1160 (Dec. 1981). |
Konrad et al., “On Motion Modeling and Estimation for Very Low Bit Rate Video Coding,” Visual Comm. & Image Processing (VCIP '95), 12 pp. (May 1995). |
Kossentini et al., “Predictive RD Optimized Motion Estimation for Very Low Bit-rate Video Coding,” IEEE J. on Selected Areas in Communications, vol. 15, No. 9, pp. 1752-1763 (Dec. 1997). |
Lopes et al., “Analysis of Spatial Transform Motion Estimation with Overlapped Compensation and Fractional-pixel Accuracy,” IEEE Proc. Visual Image Signal Processing, vol. 146, No. 6, pp. 339-344 (Dec. 1999). |
Microsoft Corp., “Microsoft Debuts New Windows Media Player 9 Series, Redefining Digital Media on the PC,” 4 pp. (document marked Sep. 4, 2002) [Downloaded from the World Wide Web on May 14, 2004]. |
Mook, “Next-Gen Windows Media Player Leaks to the Web,” BetaNews, 17 pp. (Jul. 19, 2002) [Downloaded from the World Wide Web on Aug. 8, 2003]. |
Morimoto et al., “Fast Electronic Digital Image Stabilization,” Proc. ICPR, Vienna, Austria, 5 pp. (Aug. 1996). |
Odaka et al., “A motion compensated prediction method for interlaced image ‘Dual-prime’,” Information Processing Society of Japan (IPSJ) Technical Report, vol. 94, No. 53 (94-AVM-5), pp. 17-24, Jun. 24, 1994. |
“Overview of MPEG-2 Test Model 5,” 5 pp. [Downloaded from the World Wide Web on Mar. 1, 2006]. |
Patel et al., “Performance of a Software MPEG Video Decoder,” Proc. of the First ACM Int'l Conf. on Multimedia, pp. 75-82 (Aug. 1993). |
Printouts of FTP directories from http://ftp3.itu.ch, 8 pp. (downloaded from the World Wide Web on Sep. 20, 2005). |
Reader, “History of MPEG Video Compression—Ver. 4.0,” 99 pp. (document marked Dec. 16, 2003). |
Ribas-Corbera et al., “On the Optimal Block Size for Block-based Motion-Compensated Video Coders,” SPIE Proc. of Visual Comm. & Image Processing, vol. 3024, 12 pp. (Jan. 1997). |
Ribas-Corbera et al., “On the Optimal Motion Vector Accuracy for Block-based Motion-Compensated Video Coders,” Proc. SPIE Digital Video Compression, San Jose, CA, 13 pp. (Mar. 1996). |
Schultz et al., “Subpixel Motion Estimation for Super-Resolution Image Sequence Enhancement,” Journal of Visual Comm. & Image Representation, vol. 9, No. 1, pp. 38-50 (Mar. 1998). |
Srinivasan et al., “Windows Media Video 9: overview and applications,” Signal Processing: Image Communication, vol. 19, No. 9, pp. 851-875 (Oct. 2004). |
Suhring et al., “JM2.1 video coding reference software, extract from source code module lencod.c,” <http://iphome.hhi.de/suehring/tml/download/old_jm/jm21.zip>, May 27, 2002, 8 pages. |
Sullivan et al., “The H.264/AVC Advanced Video Coding Standard: Overview and Introduction to the Fidelity Range Extensions,” 21 pp. (Aug. 2004). |
Sun et al., “Improved TML Loop Filter with Lower Complexity,” ITU-T VCEG-N17, 8 pp. (Aug. 2001). |
“The TML Project WEB-Page and Archive,” (including pages of code marked “image.cpp for H.26L decoder, Copyright 1999” and “image.c”), 24 pp. [Downloaded from the World Wide Web on Jun. 1, 2005]. |
Tourapis et al., “Predictive Motion Vector Field Adaptive Search Technique (PMVFAST)—Enhancing Block Based Motion Estimation,” Proc. Visual Communications and Image Processing, 10 pp. (Jan. 2001). |
Triggs, “Empirical Filter Estimation for Subpixel Interpolation and Matching,” Int'l Conf. Computer Vision '01, Vancouver, Canada, 8 pp. (Jul. 2001). |
Triggs, “Optimal Filters for Subpixel Interpolation and Matching,” Int'l Conf. Computer Vision '01, Vancouver, Canada, 10 pp. (Jul. 2001). |
“Video Coding Using Wavelet Decomposition for Very Low Bit-Rate Networks,” 16 pp. (Month unknown 1997). |
Wang et al., “Interlace Coding Tools for H.26L Video Coding,” ITU-T SG16/Q.6 VCEG-O37, pp. 1-20 (Dec. 2001). |
Wedi, “Complexity Reduced Motion Compensated Prediction with 1/8-pel Displacement Vector Resolution,” ITU Study Group 16, Video Coding Experts Group (Question 6), Document VCEG-L20, 8 pp. (Document dated Dec. 2000). |
Weiss et al., “Real Time Implementation of Subpixel Motion Estimation for Broadcast Applications,” pp. 7/1-7/3 (Oct. 1990). |
Wiegand et al., “Long-term Memory Motion Compensated Prediction,” IEEE Transactions on Circuits & Systems for Video Technology, vol. 9, No. 1, pp. 70-84 (Feb. 1999). |
Wien, “Variable Block-Size Transforms for Hybrid Video Coding,” Dissertation, 182 pp. (Feb. 2004). |
Wu et al., “Joint estimation of forward and backward motion vectors for interpolative prediction of video,” IEEE Transactions on Image Processing, vol. 3, No. 5, pp. 684-687 (Sep. 1994). |
Yang et al., “Very High Efficiency VLSI Chip-pair for Full Search Block Matching with Fractional Precision,” Proc. ICASSP/IEEE Int'l Conf. on Acoustics, Speech & Signal Processing, Glasgow, pp. 2437-2440 (May 1989). |
Yu et al., “Two-Dimensional Motion Vector Coding for Low Bitrate Videophone Applications,” Proc. Int'l Conf. on Image Processing, Los Alamitos, US, pp. 414-417, IEEE Comp. Soc. Press (Oct. 1995). |
Zhang et al., “Adaptive Field/Frame Selection for High Compression Coding,” MERL TR-2003-29, 13 pp. (Jan. 2003). |
Zhu, “RTP Payload Format for H.263 Video Streams,” IETF Request for Comments 2190, 12 pp. (Sep. 1997). |
Number | Date | Country | |
---|---|---|
20120213280 A1 | Aug 2012 | US |
 | Number | Date | Country |
---|---|---|---|
Parent | 10622841 | Jul 2003 | US |
Child | 13455094 | US |