The present invention relates to video coders and, more particularly, to predictive video encoders.
Coded video data consumes less bandwidth than uncoded video data. Encoders 100 employ coding techniques that exploit redundancy in the uncoded video data. A variety of coding techniques are known; they vary in terms of bandwidth conservation and computational complexity imposed upon the encoder and/or decoder.
One type of known video coder is the “predictive coder.” In a predictive coder, coded video data at one time may be coded using video data at another time as a reference. An example is shown in FIG. 2.
Predictive coders establish “prediction chains,” a series of frames in which reconstructed data of one frame is used to predict the reconstructed data of another. Frames 10-40 in FIG. 2 illustrate such a prediction chain.
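The chaining described above can be modeled in a short sketch. This is a toy model for illustration only: frames are represented as plain numbers, “prediction” is a simple difference from the previous reconstruction, and the function names are not drawn from any standard (real predictive coders use motion compensation and transform coding).

```python
# Toy model of a prediction chain: each frame after the first is "coded" as
# a difference from the reconstruction of the previous frame, so the
# reconstruction of one frame serves as the reference for the next.

def encode_chain(frames):
    coded = []
    reference = None                     # reconstructed data of the previous frame
    for frame in frames:
        if reference is None:
            coded.append(("I", frame))   # first frame: intra-coded, no reference
            reference = frame            # its reconstruction starts the chain
        else:
            residual = frame - reference
            coded.append(("P", residual))     # predicted from the reference
            reference = reference + residual  # decoder-side reconstruction
    return coded

def decode_chain(coded):
    frames = []
    reference = None
    for kind, payload in coded:
        frame = payload if kind == "I" else reference + payload
        frames.append(frame)
        reference = frame                # each frame becomes the next reference
    return frames
```

Because each reconstruction feeds the next prediction, an error introduced anywhere in the chain propagates to every later frame until a new I frame starts a fresh chain.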
As is known, predictive coders may code input video data on a block basis. Input video data may be broken down into “blocks” (also called “macroblocks” in some coding applications), arrays of video data of a predetermined size. Each block of a frame is coded independently of the other blocks in the frame.
In a P frame, a predictive coder may code a block of data as one of three types: (a) an intra-coded block, coded without reference to data of any other frame; (b) an inter-coded block without a residual, coded solely by reference to coded video data of another frame; or (c) an inter-coded block with a residual, coded by reference to coded video data of another frame plus a residual representing a difference between the reference data and the input data.
The block decoder 120 receives coded video data from the block encoder 110 and decodes it. The decoded video data of a frame is stored at the block decoder 120, ready to be input back to the block encoder 110 for use as a reference for predictive coding of later frames. In this regard, operation of the block decoder 120 is well known.
The block decoder 120 permits the encoder 100 to “see” the reconstructed video data that the decoder 200 will obtain when it decodes the coded video data.
Reconstruction of coded data is a computationally expensive task. Provision of the block decoder 120 in an encoder 100 increases its complexity and expense. There is a need in the art for an encoder 100 that is less expensive and simpler in implementation than known encoders.
An embodiment of the present invention provides a video coding method in which it is determined to code input video data according to a technique selected from the group of: (a) intra-coding, (b) inter-coding without a residual and (c) inter-coding with a residual, wherein inter-coding codes the input video data with reference to coded video data at another time. When it is determined to code the input video data according to type (c), coded video data of another frame is decoded and the input data is coded according to the selected technique.
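A per-block decision among types (a), (b) and (c) might be sketched as follows. The sum-of-squares cost measure and the threshold are hypothetical placeholders chosen for illustration; they are not taken from the invention or from any standard.

```python
# Sketch of a per-block coding-mode decision among the three types named
# above. Blocks are modeled as flat lists of sample values; the
# residual-energy measure and threshold are illustrative only.

def choose_block_type(block, reference_block, zero_residual_threshold=2.0):
    if reference_block is None:
        return "intra"                      # type (a): no reference available
    residual_energy = sum((b - r) ** 2 for b, r in zip(block, reference_block))
    if residual_energy <= zero_residual_threshold:
        return "inter_no_residual"          # type (b): prediction alone suffices
    return "inter_with_residual"            # type (c): prediction plus a residual
```

Only the third outcome requires reconstructed reference data, which is the observation the embodiments below exploit.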
In a first embodiment of the present invention, an encoder is populated by a block encoder and a block decoder. The block encoder and block decoder perform image processing as is done in prior art systems, except that the block encoder disables the block decoder unless coding a block according to type (c). In a second embodiment, the encoder is populated by a block encoder and a delay buffer. The block encoder performs image coding in a manner similar to that of prior systems except that, for blocks of type (c), the block encoder calculates a residual with reference to original video data rather than reconstructed video data.
According to an embodiment of the present invention, the block encoder 410 suspends the block decoder 420 unless it determines to code a block as a type (c) block. Reconstructed video data need not be input to the block encoder 410 for the block encoder to be able to code input video data according to type (a) or (b). If a block is to be coded as a type (c) block, the block encoder 410 engages the block decoder 420. The block decoder 420 decodes previously coded video data and outputs reconstructed video data of the block back to the block encoder 410. The block encoder 410 codes the type (c) block and thereafter disables the block decoder 420 until (and unless) it determines to code another block of input video data as a type (c) block.
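The engage/suspend behavior described above can be sketched as follows. The class and method names are hypothetical, blocks are plain numbers, and the trivial “reconstruction” is a placeholder; the point illustrated is only that the decoder runs for type (c) blocks and stays idle otherwise.

```python
# Sketch of the first embodiment: the block decoder runs only when a block
# is coded as type (c) and remains suspended for types (a) and (b).

class BlockDecoder:
    def __init__(self):
        self.invocations = 0          # counts how often decoding actually runs

    def decode(self, coded_block):
        self.invocations += 1
        return coded_block            # stand-in for real block reconstruction

class BlockEncoder:
    def __init__(self, decoder):
        self.decoder = decoder

    def encode_frame(self, blocks_with_types):
        output = []
        for block, block_type in blocks_with_types:
            if block_type == "c":
                # Engage the decoder: the type (c) residual is formed
                # against reconstructed reference data.
                reference = self.decoder.decode(block)
                output.append(("c", block - reference))
            else:
                # Types (a) and (b): the decoder remains suspended.
                output.append((block_type, block))
        return output
```

In a frame containing few type (c) blocks, the expensive decoding path executes correspondingly rarely, which is the source of the efficiency gain claimed below.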
Experiments indicate that only about 60-70% of an I frame is decoded during operation of the present invention. As compared to prior art encoders that decode every I frame in its entirety (i.e., 100%), this embodiment of the present invention improves encoder efficiency by about 30-40%.
As in the systems described above, the encoder 500 includes a block encoder 510. In place of a block decoder, however, the encoder 500 includes a delay element 520 that stores original input video data.
When the block encoder 510 determines to code a block as type (c), the block encoder 510 calculates a residual based upon delayed original video data received from delay element 520. This configuration eliminates the need for a block decoder 420 as in the encoder 400 of FIG. 4 and further reduces the complexity of the encoder 500. However, because the encoder 500 generates residuals of type (c) without reference to reconstructed video data, the encoder 500 codes data with less accuracy than may be obtained using, for example, the encoder 400 of FIG. 4.
According to an embodiment of the present invention, the encoder 500 may be used in applications that employ short prediction chains.
As noted above, the encoder 500 introduces coding errors in the reconstructed video data. Short prediction chains limit the propagation of these errors, because each new I frame terminates one chain and begins another.
Further, when used in combination with short prediction chains, the encoder 500 provides a relatively high percentage of I frames in the coded output data. The high percentage of I frames also mitigates the subjective effect of coding errors that may occur in type (c) coded blocks. Again, the encoder 500 codes I frames just as accurately as do the encoders 100 and 400 described above.
The embodiments of
Throughout this discussion, reference has been made to “I frames,” “P frames,” intra-coding and inter-coding, “blocks” and “macroblocks.” Such nomenclature may be found in certain video coding standards. It is used for illustrative purposes only and is not meant to limit the scope of the present invention to any coding standard or family of standards. The embodiments of the present invention herein described are applicable to predictive coders generally, not to any specific type of predictive coder.
As exemplary predictive coders, the present invention may be applied to coders operating in conformance with one or more of the following standards: “MPEG-2,” ISO/IEC 13818, Information Technology-Generic Coding of Moving Pictures and Associated Audio; “MPEG-4,” Requirements Version 4, ISO/IEC JTC1/SC29/WG11 N1716 (1997); ITU-T Recommendation H.261, “Video Codec for Audiovisual Services at p×64 kbit/s,” 1990; ITU-T H.263+ Video Group, “Draft 12 of ITU-T Recommendation H.263+,” 1997; and their successors. Accordingly, video data may be coded in items based on video frames, video objects or other structures as may be conventional to the predictive techniques used by the encoders of the prior art. Additionally, the intra-coding and inter-coding techniques (with or without a residual) may be performed on units of data such as blocks, macroblocks or other organizational units of video data as may be known. “Block,” as used herein, is used in a generic sense and is meant to encompass all of these organizational units of input video data. Such variances among coding techniques, items and units are consistent with the scope and spirit of the present invention.
Several embodiments of the present invention are specifically illustrated and described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.