The invention pertains to a video system that compresses video data for transmission or storage and decompresses the video data for display. More particularly, the invention pertains to a video system and a method for intracoding video data.
Video systems transmit, process and store large quantities of video data. To create a video presentation, such as a video movie, a rendering video system displays the video data as a plurality of digital images, also referred to as “frames,” thereby simulating movement. In order to achieve a video presentation with an acceptable video quality, or to enable transmission and storage at all, a conventional video system modifies the video data prior to transmission or storage. For instance, the video system compresses and encodes the video data to reduce the bit rate for storage and transmission.
In a conventional video system a video encoder is used to compress and encode the video data and a video decoder is used to decompress and to decode the video data. The video encoder outputs video data that has a reduced bit rate and a reduced redundancy. That is, the technique of video compression removes spatial redundancy within a video frame or temporal redundancy between consecutive video frames.
The video encoder and video decoder may be configured to apply one of two types of coding to compress the video stream, namely intracoding and intercoding. These two types of coding are based on the statistical properties of the video frames. When the video frames are coded using intracoding, the compression is based on information contained in a single frame (the frame that is compressed) by using the spatial redundancy within the frame. Intracoding, thus, does not depend on any other frames. In contrast, intercoding uses at least one other frame as a reference and codes a difference between the frame to be compressed and the reference frame. Intercoding is thus based on a temporal redundancy between consecutive frames in the video data.
The field of video compression is subject to international standards, e.g., International Telecommunication Union (ITU) standard H.263, which defines uniform requirements for video coding and decoding. In addition, manufacturers of video coders and decoders modify or build upon the international standards and implement proprietary techniques for video compression.
Despite the existence of the international standards and the proprietary techniques, there is still a need for improved techniques for video compression. For example, as the quality of a displayed video movie depends directly on the technique used for video compression, any improvement of the video compression technique makes the video movie more pleasing for the viewer.
An aspect of the invention involves a method of coding a stream of video data including a stream of video frames. The method divides each video frame into a matrix of a plurality of subblocks, wherein each subblock includes a plurality of pixels. The method further defines nine prediction modes, wherein each prediction mode determines a mode according to which a present subblock is to be coded. The method further selects one of the nine prediction modes to encode the present subblock. The selected prediction mode provides for a minimum error value in the present subblock.
Another aspect of the invention involves a video system for coding and decoding a stream of video data that includes a stream of video frames. The video system includes a video encoder and a mode selector. The video encoder is configured to receive a stream of video data including a stream of video frames and to divide each video frame into a matrix of a plurality of subblocks, wherein each subblock includes a plurality of pixels. The mode selector is in communication with the video encoder and is configured to define nine prediction modes. Each prediction mode determines a mode according to which a present subblock is to be coded. The mode selector is further configured to select one of the nine prediction modes to encode the present subblock, wherein the selected prediction mode provides for a minimum error value in the present subblock.
Once the video system has selected the best prediction mode to encode the pixels of the present subblock, the video system encodes the minimum error value and transmits the encoded minimum error value within a compressed bitstream to the decoder. The minimum error value represents a difference between predicted pixels of the present subblock and the original pixels of the subblock. The decoder uses the predicted pixels and the difference from the original pixels to accurately reconstruct the video frame.
These and other aspects, advantages, and novel features of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings.
In the following description, reference is made to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. Where possible, the same reference numbers will be used throughout the drawings to refer to the same or like components. Numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one skilled in the art that the present invention may be practiced without the specific details or with certain alternative equivalent devices and methods to those described herein. In other instances, well-known methods, procedures, components, and devices have not been described in detail so as not to unnecessarily obscure aspects of the present invention.
The video sequence 20 input to the encoder apparatus 3 may be either a live signal, e.g., provided by a video camera, or a prerecorded sequence in a predetermined format. The video sequence 20 includes frames of a digital video, an audio segment consisting of digital audio, combinations of video, graphics, text, and/or audio (multimedia applications), or analog forms of the aforementioned. If necessary, conversions can be applied to various types of input signals such as analog video, or previously compressed and encoded video to produce an appropriate input to the encoder apparatus 3. In one embodiment, the encoder apparatus 3 may accept video in RGB or YUV formats. The encoder apparatus 3, however, may be adapted to accept any format of input as long as an appropriate conversion mechanism is supplied. Conversion mechanisms for converting a signal in one format to a signal in another format are well known in the art.
The medium 9 may be a storage device or a transmission medium. In one embodiment, the video system 1 may be implemented on a computer. The encoder apparatus 3 sends an encoded video stream (representation) to the medium 9 that is implemented as a storage device. The storage device may be a video server, a hard disk drive, a rewritable CD drive, a read/write DVD drive, or any other device capable of storing and allowing the retrieval of encoded video data. The storage device is connected to the decoder apparatus 5, which can selectively read from the storage device and decode the encoded video sequence. As the decoder apparatus 5 decodes a selected one of the encoded video sequences, it generates a reproduction of the video sequence 20, for example, for display on a computer monitor or screen.
In another embodiment, the medium 9 provides a connection to another computer, which may be a remote computer, that receives the encoded video sequence. The medium 9 may be a network connection such as a LAN, a WAN, the Internet, or the like. The decoder apparatus 5 within the remote computer decodes the encoded representations contained therein and may generate a reproduction of the video sequence 20 on a screen or a monitor of the remote computer.
Aspects of the video system 1 illustrated in
Pre-existing video encoding techniques typically break up a frame (picture) into smaller blocks of pixels called macroblocks. Each macroblock can consist of a matrix of pixels, typically a 16×16 matrix, defining the unit of information at which encoding is performed. The matrix of pixels is therefore referred to as a 16×16 macroblock. These video encoding techniques usually break each 16×16 macroblock further up into smaller matrices of pixels, for example, into 8×8 or 4×4 matrices of pixels. Such matrices are hereinafter referred to as subblocks. In one embodiment of the present invention, a 16×16 macroblock is divided into 16 4×4 subblocks. Those skilled in the art will appreciate that the present invention is equally applicable to systems that use 8×8 subblocks, 4×4 subblocks, or only 16×16 macroblocks without breaking them up into subblocks.
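For illustration, the following C sketch (not part of the original specification) visits a 16×16 macroblock as sixteen 4×4 subblocks; the row-major frame layout, the stride parameter, and the function names are assumptions made for the example.

```c
#include <stdint.h>

#define MB_SIZE 16
#define SB_SIZE  4

/* Visit each 4x4 subblock of the macroblock whose top-left pixel is at
 * (mb_x, mb_y) in a frame stored row-major with the given stride.
 * Illustrative sketch; layout and names are assumptions. */
static void for_each_subblock(const uint8_t *frame, int stride,
                              int mb_x, int mb_y,
                              void (*visit)(const uint8_t *sb, int stride))
{
    for (int sy = 0; sy < MB_SIZE; sy += SB_SIZE)
        for (int sx = 0; sx < MB_SIZE; sx += SB_SIZE)
            visit(frame + (mb_y + sy) * stride + (mb_x + sx), stride);
}
```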
Further, the pre-existing encoding techniques provide for motion compensation and motion estimation using motion vectors. The motion vectors describe the direction, expressed through an x-component and a y-component, and the amount of motion of the 16×16 macroblocks, or their respective subblocks, and are transmitted to the decoder as part of the bit stream. Motion vectors are used for bidirectionally encoded pictures (B-pictures) and predicted pictures (P-pictures) as known in the art.
The video encoder 2 performs a discrete cosine transform (DCT) to encode and compress the video sequence 20. Briefly, the video encoder 2 converts the video sequence 20 from the spatial/temporal domain into the frequency domain. The output of the video encoder 2 is a set of signal amplitudes, called "DCT coefficients." A quantizer receives the DCT coefficients and assigns each range (or step size) of DCT coefficient values a single value, such as a small integer, during encoding. Quantization allows data to be represented more compactly, but results in the loss of some data. Quantization on a finer scale results in a less compact representation (higher bit-rate), but also involves the loss of less data. Quantization on a coarser scale results in a more compact representation (lower bit-rate), but also involves more loss of data. The mode selector 14 communicates with the video encoder 2 and monitors and controls encoding of the video sequence 20. The mode selector 14 determines, in accordance with the present invention, the prediction modes according to which the video encoder 2 encodes the video sequence 20. The mode selector 14 may be a processor or a software module that is configured to operate in accordance with a method of the present invention.
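To make the trade-off concrete, the following C sketch shows a uniform quantizer for DCT coefficients; the rounding convention and the names quantize/dequantize are assumptions for the example, not the encoder's actual implementation.

```c
/* Illustrative uniform quantizer: a larger step gives a more compact
 * (lower bit-rate) but lossier representation; a smaller step keeps
 * more data at a higher bit-rate. */
static int quantize(int coeff, int step)
{
    /* Map a whole range of coefficient values onto one small integer,
     * rounding to the nearest level (assumed convention). */
    return (coeff >= 0 ? coeff + step / 2 : coeff - step / 2) / step;
}

static int dequantize(int level, int step)
{
    return level * step; /* the rounded-away remainder is lost */
}
```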
The buffer 8 of the encoder apparatus 3 receives the encoded and compressed video sequence (hereinafter “encoded video sequence”) from the video encoder 2 and adjusts the bit rate of the encoded video sequence before it is sent to the medium 9. Buffering may be required because individual video images may contain varying amounts of information, resulting in varying coding efficiencies from image to image. As the buffer 8 has a limited size, a feedback loop to the quantizer may be used to avoid overflow or underflow of the buffer 8. The bit-rate of the representation is the rate at which the representation data must be processed in order to present the representation in real time.
The decoder apparatus 5 performs the inverse function of the encoder apparatus 3. The buffer 10 also serves to adjust the bit rate of the incoming encoded video sequence. The video decoder 12, in combination with the mode selector 16, decodes and decompresses the incoming video sequence, reconstructing the video sequence. The mode selector 16 determines the prediction modes according to which the video encoder 2 encoded the incoming video sequence. The decoder apparatus 5 outputs a decoded and decompressed video sequence 24 illustrated as "VIDEO OUT" (hereinafter "decoded video sequence 24").
The video decoder 12 receives a bit stream that represents the encoded video sequence from the buffer 10 (
The macroblock 36a, as a representative for all macroblocks 36, 36a, 36b, 36c, 36d, is shown in greater detail below the video frame 30. The video encoding technique of the video system 1 breaks each macroblock 36, 36a, 36b, 36c, 36d further up into a matrix of pixels 38, hereinafter referred to as a subblock 38. In one embodiment, the subblock 38 is a 4×4 matrix of pixels, wherein the 16 pixels are labeled as a, b, c, . . . , p. Bordering pixels of an adjacent subblock of a neighboring macroblock 36b, which is located above the macroblock 36a, are labeled as A, B, C, D. Further, bordering pixels of a subblock located above and to the right of the macroblock 36a are labeled as E, F, G, H. Likewise, bordering pixels of an adjacent subblock of a neighboring macroblock 36c, which is located to the left of the macroblock 36a, are labeled as I, J, K, L. Bordering pixels of a subblock located below and to the left of the macroblock 36a are labeled as M, N, O, P. A bordering pixel of a subblock of a macroblock 36d, which is located above and to the left of the macroblock 36a, is labeled as Q.
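The labeling can be summarized in a small C sketch (illustrative only; the helper names and the indexing scheme are assumptions made for the example):

```c
#include <stdint.h>

/* Neighbour layout around the 4x4 subblock, per the description above:
 *
 *   Q  A  B  C  D  E  F  G  H     <- above-left, above, above-right
 *   I  a  b  c  d
 *   J  e  f  g  h
 *   K  i  j  k  l
 *   L  m  n  o  p
 *   M
 *   N                             <- below-left
 *   O
 *   P
 */
static uint8_t above(const uint8_t *sb, int stride, int i) /* A..H, i = 0..7 */
{
    return sb[-stride + i];
}

static uint8_t left(const uint8_t *sb, int stride, int j)  /* I..P, j = 0..7 */
{
    return sb[j * stride - 1];
}

static uint8_t above_left(const uint8_t *sb, int stride)   /* Q */
{
    return sb[-stride - 1];
}
```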
The video system 1 of the present invention codes each macroblock 36 as an intra macroblock. Intra macroblocks are transform encoded without motion compensated prediction. Thus, intra macroblocks do not reference decoded data from either previous or subsequent frames. An I-frame is a frame consisting completely of intra macroblocks. Thus, I-frames are encoded with no reference to previous or subsequent frames. I-frames are also known as “Intra-frames.”
Mode 0:
In this mode, each pixel a-p is predicted by the following equation:
It is contemplated that in this mode, as well as in the following modes, a "division" means to round the result down toward "minus infinity" (−∞). For instance, in mode 0, the term "+4" ensures that the division results in a rounding to the nearest integer. This also applies to the other modes.
If four of the pixels A-P are outside the picture (frame) that is currently encoded, the average of the remaining four pixels is used for prediction. If all eight pixels are outside the picture, the prediction for all pixels in this subblock is 128. A subblock may therefore always be predicted in mode 0.
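A minimal C sketch of this mode follows. Consistent with the "+4" term and the "all eight pixels" wording (and with the related H.26L test-model design), it assumes the eight averaged neighbors are A-D and I-L; the "+2" rounding of the four-pixel fallback and all names are likewise assumptions for the example.

```c
#include <stdint.h>

/* Mode 0 (DC) prediction for one 4x4 subblock. above4 holds A-D,
 * left4 holds I-L; the _ok flags say whether those neighbours lie
 * inside the current picture. */
static void predict_mode0(uint8_t pred[16],
                          const uint8_t above4[4], int above_ok,
                          const uint8_t left4[4],  int left_ok)
{
    int dc, sum = 0, i;

    if (above_ok && left_ok) {
        for (i = 0; i < 4; i++)
            sum += above4[i] + left4[i];
        dc = (sum + 4) / 8;        /* "+4" rounds to the nearest integer */
    } else if (above_ok || left_ok) {
        const uint8_t *n = above_ok ? above4 : left4;
        for (i = 0; i < 4; i++)
            sum += n[i];
        dc = (sum + 2) / 4;        /* average of the remaining four pixels */
    } else {
        dc = 128;                  /* all eight pixels outside the picture */
    }

    for (i = 0; i < 16; i++)
        pred[i] = (uint8_t)dc;     /* every pixel a-p receives the DC value */
}
```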
Mode 1:
If the pixels A, B, C, D are inside the current picture, the pixels a-p are predicted in the vertical direction as shown in
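A sketch of this vertical prediction, under the common reading that each column of the subblock is filled with the bordering pixel directly above it (names are illustrative):

```c
#include <stdint.h>

/* Mode 1 (vertical): column 0 takes A, column 1 takes B, and so on,
 * so that a, e, i, m are predicted from A, b, f, j, n from B, etc. */
static void predict_mode1(uint8_t pred[16], const uint8_t above4[4])
{
    for (int row = 0; row < 4; row++)
        for (int col = 0; col < 4; col++)
            pred[row * 4 + col] = above4[col];
}
```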
Mode 2:
If the pixels I, J, K, L are inside the current picture, the pixels a-p are predicted in the horizontal direction. That is, the pixels a-p are predicted as follows:
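A corresponding sketch of the horizontal prediction, under the analogous reading that each row of the subblock is filled with the bordering pixel directly to its left (names are illustrative):

```c
#include <stdint.h>

/* Mode 2 (horizontal): row 0 takes I, row 1 takes J, and so on,
 * so that a, b, c, d are predicted from I, e, f, g, h from J, etc. */
static void predict_mode2(uint8_t pred[16], const uint8_t left4[4])
{
    for (int row = 0; row < 4; row++)
        for (int col = 0; col < 4; col++)
            pred[row * 4 + col] = left4[row];
}
```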
Mode 3:
This mode is used if all pixels A-P are inside the current picture. This corresponds to a prediction in a diagonal direction as shown in
Mode 4:
This mode is used if all pixels A-P are inside the current picture. This is also a diagonal prediction.
Mode 5:
This mode is used if all pixels A-P are inside the current picture. This is also a diagonal prediction.
Mode 6:
This mode is used if all pixels A-P are inside the current picture. This is a diagonal prediction.
Mode 7:
This mode is used if all pixels A-P are inside the current picture. This is a diagonal prediction.
Mode 8:
This mode is used if all pixels A-P are inside the current picture. This is a diagonal prediction.
In one embodiment of the present invention, a mode selection algorithm applies a criterion to select one of the nine modes. The subblock 38 is then encoded in accordance with the selected mode. The mode selection algorithm is described in detail below.
In a step 28, e.g., when a user activates the video system 1, the procedure initializes the video system 1. The initialization procedure includes, for example, determining whether the encoder apparatus 3 is operating and properly connected to receive the stream of video frames.
In a step 30, the procedure receives the stream of video frames and divides each video frame into a matrix of a plurality of subblocks, wherein each subblock includes a plurality of pixels. The matrix of a plurality of subblocks may include 4×4 subblocks 38 that are part of a macroblock as described above.
In a step 32, the procedure defines the nine prediction modes Mode 0-8, wherein each prediction mode determines a mode according to which a present subblock is to be coded. For example, the procedure may execute a subroutine to calculate and define the modes Mode 0-8.
In a step 34, the procedure selects one of the nine prediction modes Mode 0-8 to encode the present subblock 38. In one embodiment, the procedure calculates for each mode an error value, determines which mode provides a minimum error value and selects that mode for encoding the present subblock 38.
Once the procedure has selected the "best" prediction mode to encode the pixels of the present subblock 38, the procedure encodes the minimum error value and transmits the encoded minimum error value within a compressed bitstream to the decoder. The minimum error value represents a difference between the predicted pixels of the present subblock and the original pixels of the subblock. The difference may be encoded using a DCT, coefficient quantization and variable length coding as known in the art. The decoder uses the predicted pixels and the difference from the original pixels to accurately reconstruct the video frame. The procedure ends at a step 36.
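The selection in steps 32-34 can be sketched in C as follows. Here predict_mode and subblock_error stand in for the nine mode predictors and for the uError measure detailed further below; the availability test and all names are assumptions for the example.

```c
#include <stdint.h>

/* Try every available prediction mode for the present 4x4 subblock
 * and return the mode with the smallest error value. */
static int select_mode(const uint8_t orig[16], const int mode_available[9],
                       void (*predict_mode)(int mode, uint8_t pred[16]),
                       unsigned (*subblock_error)(int mode,
                                                  const uint8_t pred[16],
                                                  const uint8_t orig[16]))
{
    int best_mode = 0;
    unsigned best_err = ~0u;
    uint8_t pred[16];

    for (int mode = 0; mode <= 8; mode++) {
        if (!mode_available[mode])   /* e.g. required pixels lie outside */
            continue;
        predict_mode(mode, pred);
        unsigned err = subblock_error(mode, pred, orig);
        if (err < best_err) {
            best_err = err;
            best_mode = mode;
        }
    }
    return best_mode;  /* the residual orig - pred is then transform coded */
}
```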
The procedure provides that each of the 4×4 subblocks 38 is coded in accordance with one of the nine prediction modes Mode 0-8. As this may require a considerable number of bits if coded directly, the video system 1 in accordance with the present invention may apply a more efficient way of coding the mode information. A prediction mode of a subblock is correlated with the prediction modes of adjacent subblocks.
For each combination of the prediction modes of the subblocks A and B, a sequence of nine numbers is given, one number for each of the nine Modes 0-8. For example, in Group 3, if the prediction modes for the subblock A and the subblock B are both Mode 1, a string "1 6 2 5 3 0 4 8 7" indicates that the Mode 1, i.e., the first number in the string, is the most probable mode for the subblock C. The Mode 6, i.e., the second number in the string, is the next most probable mode. In the exemplary string, the Mode 7 is the least probable since the number 7 is the last number in the string. The string will be part of the stream of bits that represents the encoded video sequence.
The stream of bits therefore includes information (Prob0=1 (see Table 1)) indicating the mode used for the subblock C. For example, the information may indicate that the next most probable intra prediction mode is Mode 6. Note that a "-" in the table indicates that this instance cannot occur. The term "outside" used in Table 1 indicates "outside the frame." If the subblock A or B is within the frame, but is not INTRA coded (e.g., in a P frame, the subblock C could be INTRA coded but either the subblock A or the subblock B may not be INTRA coded), there is no prediction mode. The procedure of the present invention assumes the Mode 0 for such subblocks.
The information about the prediction modes may be efficiently coded by combining prediction mode information of two subblocks 38 in one codeword. The stream of bits includes then the resulting codewords, wherein each codeword represents the prediction modes of the two subblocks. Table 2 lists exemplary binary codewords for code numbers (Code No.) between 0 and 80. The probability of a mode of the first subblock is indicated as Prob0 and the probability of a mode of the second subblock is indicated as Prob1.
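For illustration only, one way to pack the mode information of two subblocks into a single code number is sketched below. The 9*Prob0+Prob1 pairing is an assumption, consistent with 81 code numbers covering two probability ranks in the range 0-8; Table 2's actual mapping of code numbers to binary codewords is not reproduced here.

```c
/* Combine the probability ranks of two subblocks' modes into one code
 * number in [0, 80], which would then index a codeword table such as
 * Table 2. The pairing scheme is an assumption for the example. */
static int mode_pair_code(int prob0, int prob1) /* ranks 0..8 */
{
    return 9 * prob0 + prob1;
}
```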
With the nine prediction modes (Table 1) and the probabilities of the modes (Table 1, Table 2), a mode selection algorithm determines the mode according to which a particular subblock is predicted. In one embodiment of the present invention, the algorithm selects the mode using a sum of absolute differences (SAD) between the pixels a-p and the corresponding pixels in the original frame, and the above probabilities of the modes. The SAD and the probability table are used to select the mode for a particular subblock 38. The algorithm calculates a parameter uError for each of the nine possible modes Mode 0-8. The mode that provides the smallest uError is the mode selected for the subblock 38.
The uError is calculated as follows:
uError = SAD({a, . . . , p}, {original frame}) + rd_quant[uMBQP] * uProb,
where SAD({a, . . . , p}, {original frame}) is the sum of absolute differences between the pixels a-p and the corresponding pixels in the original frame;
where rd_quant[uMBQP] is a table of constant values indexed by a quantization parameter uMBQP and given by
const U8 rd_quant[32] = {1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 7, 8, 9, 11, 12, 13, 15, 17, 19, 21, 24, 27, 30}; and
where uProb is the probability of the mode occurring, provided by the position of the mode in the mode probability table (Table 1).
For example, the prediction mode for the subblock A is the Mode 1 and the prediction mode for the subblock B is the Mode 1. The string "1 6 2 5 3 0 4 8 7" indicates that the Mode 1 is also the most probable mode for the subblock C. The Mode 6 is the second most probable mode, etc. Thus, when the algorithm calculates uError for the Mode 0, the probability uProb is 5. Further, for the Mode 1 the probability uProb is 0, for the Mode 2 the probability uProb is 2, for the Mode 3 the probability uProb is 4, and so forth.
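Putting the pieces together, a C sketch of the uError computation for one candidate mode follows; rd_quant is the table quoted above, and the remaining names are illustrative.

```c
#include <stdint.h>
#include <stdlib.h>

static const uint8_t rd_quant[32] = {
    1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 5,
    5, 6, 7, 7, 8, 9, 11, 12, 13, 15, 17, 19, 21, 24, 27, 30
};

/* uError = SAD(pred, orig) + rd_quant[uMBQP] * uProb, where uProb is
 * the position of the candidate mode in the probability string for the
 * (A, B) mode pair, e.g. 5 for Mode 0 in "1 6 2 5 3 0 4 8 7". */
static unsigned uerror(const uint8_t pred[16], const uint8_t orig[16],
                       unsigned uMBQP, unsigned uProb)
{
    unsigned sad = 0;
    for (int i = 0; i < 16; i++)
        sad += (unsigned)abs((int)pred[i] - (int)orig[i]);
    return sad + rd_quant[uMBQP] * uProb;
}
```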
In addition to coding the luminance portion (Y) of the video frame, the video system 1 of the present invention may also predict the chrominance portions (U, V) of the video frame. The chrominance portions may be considered as chrominance planes (U and V-planes). Typically, the chrominance planes (U and V-planes) are a quarter of the size of a luminance plane. Thus, in a 16×16 macroblock a corresponding 8×8 block of pixels exists in both the U and V-planes. These 8×8 blocks are divided into 4×4 blocks. In general, separate prediction modes are not transmitted for chrominance blocks. Instead, the modes transmitted for the Y-plane blocks are used as prediction modes for the U and V-plane blocks.
While the above detailed description has shown, described and identified several novel features of the invention as applied to a preferred embodiment, it will be understood that various omissions, substitutions and changes in the form and details of the described embodiments may be made by those skilled in the art without departing from the spirit of the invention. Accordingly, the scope of the invention should not be limited to the foregoing discussion, but should be defined by the appended claims.
This is a continuation of U.S. patent application Ser. No. 15/245,975, filed on Aug. 24, 2016, now patented as U.S. Pat. No. 9,930,343, issued on Mar. 27, 2018, which is a continuation of U.S. patent application Ser. No. 14/140,349, filed on Dec. 24, 2013, now patented as U.S. Pat. No. 9,432,682, issued on Aug. 30, 2016, which is a divisional of U.S. patent application Ser. No. 13/679,957, filed on Nov. 16, 2012, now patented as U.S. Pat. No. 8,908,764, issued on Dec. 9, 2014, which is a continuation of U.S. patent application Ser. No. 12/767,744, filed on Apr. 26, 2010, now patented as U.S. Pat. No. 8,385,415, issued on Feb. 26, 2013, which is a continuation of U.S. patent application Ser. No. 10/848,992, filed on May 18, 2004, now patented as U.S. Pat. No. 7,706,444, issued on Apr. 27, 2010, which is a continuation of U.S. patent application Ser. No. 09/732,522, filed on Dec. 6, 2000, now patented as U.S. Pat. No. 6,765,964, issued on Jul. 20, 2004, which are hereby incorporated by reference in their entireties for all purposes.