Simplified predictive video encoder

Information

  • Patent Number
    6,904,174
  • Date Filed
    Friday, December 11, 1998
  • Date Issued
    Tuesday, June 7, 2005
Abstract
In a video coding method, it is determined to code input video data according to a technique selected from the group of: (a) intra-coding, (b) inter-coding without a residual and (c) inter-coding with a residual, wherein inter-coding codes the input video data with reference to coded video data at another time. When it is determined to code the input video data according to type (c), coded video data of another frame is decoded and the input data is coded according to the selected technique.
Description
BACKGROUND

The present invention relates to video coders and, more particularly, to predictive video encoders.



FIG. 1 illustrates a simplified video coding system. The system includes an encoder 100 provided in communication with a decoder 200 over a channel 300. The encoder 100 receives original video data at an input. It generates coded video data from the original video data and outputs the coded video data to the channel 300. The channel 300 may be a communication link, such as those provided by telecommunications networks or computer networks, or may be a memory such as an optical, electric or magnetic storage device. The decoder 200 retrieves the coded video data from the channel 300 and, by inverting the coding process performed by the encoder 100, reconstructs the original video data therefrom. Depending upon the coding/decoding techniques used, the reconstructed video data may be either an exact replica or merely a close approximation of the original video data.


Coded video data consumes less bandwidth than uncoded video data. Encoders 100 employ coding techniques that exploit redundancy in the uncoded video data. A variety of coding techniques are known; they vary in terms of bandwidth conservation and computational complexity imposed upon the encoder and/or decoder.


One type of known video coder is the “predictive coder.” In a predictive coder, video data at one time may be coded using video data at another time as a reference. An example is shown in FIG. 2. FIG. 2 illustrates several frames 10-50 of video information, each frame representing the video data at a different time. An intra-coded (I) frame 10, 50 is coded “from scratch.” That is, an I frame may be decoded based solely on the coded video data for that frame; the coded video data makes no reference to data of any other frame. By contrast, predictively coded (P) frames 20-40 are coded with reference to other frames. To decode a P frame (e.g., 20), a decoder 200 retrieves coded data for both the P frame and reconstructed data previously decoded for another frame (e.g., I frame 10). Prediction arrows 60-80 illustrate possible prediction directions. A frame usually occupies less bandwidth when coded as a P frame rather than as an I frame.


Predictive coders establish “prediction chains,” a series of frames wherein reconstructed data of one frame is used to predict the reconstructed data of another. Frames 10-40 in FIG. 2 illustrate a four frame prediction chain. An originating frame is coded as an I frame but all others in the chain are coded as P frames. Because the P frames achieve more bandwidth conservation than I frames, many coders extend prediction chains as far as possible. Some coding systems also force an encoder 100 to introduce an I frame even when unacceptable coding errors would not otherwise be present. By introducing I frames at regular intervals, decoders 200 may perform random access functions, akin to fast-forwarding and rewinding, that would not otherwise be possible.


As is known, predictive coders may code input video data on a block basis. Input video data may be broken down into “blocks” (also called “macroblocks” in some coding applications), arrays of video data of a predetermined size. Each block of a frame is coded independently of the other blocks in the frame.


In a P frame, a predictive coder may code a block of data as one of three types:

    • (a) An Intra-Coded Block: The block is coded without reference to any block in any other frame;
    • (b) An Inter-Coded Block Sans Residual: Image data for the block is copied from a block in another frame and displayed without modification; or
    • (c) An Inter-Coded Block Plus a Residual: Image data for the block is copied from a block in another frame and supplemented with an error term (called a “residual”) that is supplied in the channel 300.
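The three block types above amount to a per-block mode decision. The following sketch illustrates one way such a decision could look, assuming blocks are flat lists of pixel values and using a hypothetical error threshold; the patent does not prescribe any particular decision rule.

```python
def code_block(block, reference, threshold=5.0):
    """Illustrative mode decision for one block.

    Returns a (type, payload) pair:
      ("intra", block)              -- type (a): coded from scratch
      ("inter", None)               -- type (b): copied from reference as-is
      ("inter_residual", residual)  -- type (c): copied plus an error term

    'threshold' is an assumed tuning parameter, not taken from the patent.
    """
    if reference is None:
        return ("intra", block)                 # no prior frame to predict from
    residual = [b - r for b, r in zip(block, reference)]
    error = sum(abs(d) for d in residual) / len(residual)
    if error == 0:
        return ("inter", None)                  # exact match: no residual needed
    if error < threshold:
        return ("inter_residual", residual)     # close match: send residual only
    return ("intra", block)                     # poor prediction: intra-code instead
```

For example, a block identical to its reference would come back as type (b), while a slightly different block would yield a small type (c) residual.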



FIG. 3 is a block diagram that illustrates processing that may be performed by an encoder 100 to code video data predictively. The encoder 100 may include a block encoder 110 and a block decoder 120. The block encoder 110 receives a block of input video data at a first input and blocks of decoded video data from the block decoder 120. For a P frame, the block encoder 110 determines how to code the block (whether as type (a), (b) or (c) above). The block encoder 110 outputs coded video data to the channel 300. In this regard, operation of the block encoder 110 is well-known.


The block decoder 120 receives coded video data from the block encoder 110 and decodes it. The decoded video data of a frame is stored at the block decoder 120 ready to be input back to the block encoder 110 to be used as a reference for predictive coding of later frames. In this regard, operation of the block decoder 120 is well-known.


The block decoder 120 permits the encoder 100 to “see” the reconstructed video data that the decoder 200 (FIG. 1) will obtain by decoding coded video data from the channel 300. The block decoder 120 permits the block encoder 110 to generate accurate residuals when coding blocks according to type (c) above. The block decoder 120 thereby prevents the compounding of prediction errors over a series of frames that would otherwise occur when type (c) blocks are used to predict other type (c) blocks in a prediction chain.
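The prior-art feedback loop of FIG. 3 can be sketched as follows. This is an illustrative sketch only: the uniform quantizer, its step size, and the frame/block representation are assumptions, chosen to show the one essential point that residuals are computed against *reconstructed* data, which the always-on block decoder supplies.

```python
class PriorArtEncoder:
    """Sketch of FIG. 3: a block encoder whose residuals reference the
    block decoder's reconstruction, so encoder and decoder stay in sync."""

    def __init__(self):
        self.reference = None                 # reconstructed previous frame

    def encode_frame(self, frame, quant=4):
        coded, recon = [], []
        for i, block in enumerate(frame):
            if self.reference is None:
                levels = [p // quant for p in block]          # lossy intra-code
                coded.append(("intra", levels))
                recon.append([v * quant for v in levels])
            else:
                ref = self.reference[i]       # reconstructed, not original, data
                residual = [b - r for b, r in zip(block, ref)]
                levels = [d // quant for d in residual]
                coded.append(("inter_residual", levels))
                recon.append([r + v * quant for r, v in zip(ref, levels)])
        self.reference = recon                # block decoder output fed back
        return coded
```

Note that the reconstruction step runs for every block, which is precisely the cost the invention seeks to avoid.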


Reconstruction of coded data is a computationally expensive task. Provision of the block decoder 120 in an encoder 100 increases its complexity and expense. There is a need in the art for an encoder 100 that is less expensive and more simplified in implementation than known encoders.


SUMMARY

An embodiment of the present invention provides a video coding method in which it is determined to code input video data according to a technique selected from the group of: (a) intra-coding, (b) inter-coding without a residual and (c) inter-coding with a residual, wherein inter-coding codes the input video data with reference to coded video data at another time. When it is determined to code the input video data according to type (c), coded video data of another frame is decoded and the input data is coded according to the selected technique.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a video coding/decoding system.



FIG. 2 is an illustration of predictively coded frames.



FIG. 3 is a block diagram of a known predictive encoder.



FIG. 4 is a block diagram of processing performed in a predictive encoder according to an embodiment of the present invention.



FIG. 5 is a block diagram of processing performed in a predictive encoder according to another embodiment of the present invention.



FIG. 6 illustrates prediction chains that may be developed according to an embodiment of the present invention.





DETAILED DESCRIPTION

In a first embodiment of the present invention, an encoder is populated by a block encoder and a block decoder. The block encoder and block decoder perform image processing as is done in prior art systems except that the block encoder disables the block decoder unless coding a block according to type (c). In a second embodiment, the encoder is populated by a block encoder and a delay buffer. The block encoder performs image coding in a manner similar to those of prior systems except that, for blocks of type (c), the block encoder calculates a residual with reference to original video data rather than reconstructed video data.



FIG. 4 illustrates an encoder 400 constructed in accordance with an embodiment of the present invention. The encoder 400 includes a block encoder 410 and block decoder 420. The block encoder 410 determines to code and codes blocks of input video data as types (a), (b) and/or (c) according to processes that are known in the art. The block decoder 420 decodes coded video data output from the block encoder 410 also according to processes that are known in the art and outputs reconstructed video data back to the block encoder 410. The block encoder 410 may enable or disable the block decoder 420 via a control line 430.


According to an embodiment of the present invention, the block encoder 410 suspends the block decoder 420 unless it determines to code a block as a type (c) block. Reconstructed video data need not be input to the block encoder 410 for the block encoder to be able to code input video data according to type (a) or (b). If a block is to be coded as a type (c) block, the block encoder 410 engages the block decoder 420. The block decoder 420 decodes previously coded video data and outputs reconstructed video data of the block back to the block encoder 410. The block encoder 410 codes the type (c) block and thereafter disables the block decoder 420 until (and unless) it determines to code another block of input video data as a type (c) block.
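The gating described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the `classify` mode decision and its threshold are assumptions standing in for the encoder's known mode-selection logic, and `decode_block` stands in for the block decoder 420.

```python
def classify(block, ref, threshold=5.0):
    """Illustrative stand-in for the encoder's known mode decision."""
    if ref is None:
        return "intra"
    err = sum(abs(b - r) for b, r in zip(block, ref)) / len(block)
    if err == 0:
        return "inter"
    return "inter_residual" if err < threshold else "intra"

def encode_frame_gated(frame, reference, decode_block):
    """Sketch of the FIG. 4 idea: the block decoder (decode_block) runs
    only for type (c) blocks; for types (a) and (b) it stays suspended."""
    coded, decoder_calls = [], 0
    for i, block in enumerate(frame):
        mode = classify(block, reference[i] if reference else None)
        if mode == "inter_residual":
            recon = decode_block(i)          # engage the decoder: type (c) only
            decoder_calls += 1
            coded.append((mode, [b - r for b, r in zip(block, recon)]))
        elif mode == "inter":
            coded.append((mode, None))       # decoder stays suspended
        else:
            coded.append(("intra", block))   # decoder stays suspended
    return coded, decoder_calls
```

Running this over a frame whose blocks fall into all three types would invoke the decoder only once, for the single type (c) block.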


Experiments indicate that only about 60-70% of a frame's blocks are decoded during operation of the present invention. As compared to prior art encoders, which decode every block (i.e., 100%), this embodiment of the present invention improves encoder efficiency by about 30-40%.



FIG. 5 illustrates an encoder 500 constructed in accordance with another embodiment of the present invention. The encoder 500 includes a block encoder 510 and a delay element 520. It does not include a block decoder (such as 120 or 420 of FIG. 3 or 4). The block encoder 510 codes blocks of input video data and outputs coded video data to the channel 300 (FIG. 1). The delay element 520 receives the blocks of input video data and stores them. The delay element 520 inputs the blocks of input video data to the block encoder 510.


As in the system of FIG. 4, the block encoder 510 determines to code the blocks of input video data as type (a), (b) or (c). When it determines to code a block as type (a) or (b), the block encoder 510 codes the block as does the block encoder 410 of FIG. 4.


When the block encoder 510 determines to code a block as type (c), the block encoder 510 calculates a residual based upon delayed original video data received from delay element 520. This configuration eliminates the need for a block decoder 420 as in the encoder 400 of FIG. 4 and further reduces the complexity of the encoder 500. However, because the encoder 500 generates residuals of type (c) without reference to reconstructed video data, the encoder 500 codes data with less accuracy than may be obtained using, for example, the encoder 400 of FIG. 4.
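The FIG. 5 arrangement can be sketched as follows. Again this is an illustrative sketch under assumptions (list-of-lists frames, exact-match test for type (b)); the one point it shows is that the delay element buffers *original* pixels, which then serve as the type (c) prediction reference in place of a reconstruction.

```python
from collections import deque

class DelayLineEncoder:
    """Sketch of FIG. 5: no block decoder. A one-frame delay element stores
    original frames, and type (c) residuals reference the delayed original."""

    def __init__(self):
        self.delay = deque(maxlen=1)          # delay element 520 (one frame)

    def encode_frame(self, frame):
        ref_frame = self.delay[0] if self.delay else None
        coded = []
        for i, block in enumerate(frame):
            ref = ref_frame[i] if ref_frame else None
            if ref is None:
                coded.append(("intra", block))            # type (a)
            elif block == ref:
                coded.append(("inter", None))             # type (b)
            else:
                # type (c): residual against the delayed ORIGINAL block,
                # not a reconstruction -- the source of the accuracy loss
                coded.append(("inter_residual",
                              [b - r for b, r in zip(block, ref)]))
        self.delay.append(frame)              # buffer originals for next frame
        return coded
```

Because the decoder-side reference is a reconstruction while the encoder's is the original, the residuals here are slightly mismatched, which is exactly the accuracy trade-off the text describes.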


According to an embodiment of the present invention, the encoder 500 of FIG. 5 is used with short prediction chains. Exemplary prediction chains are shown in FIG. 6. There, a series of video frames 610-660 are shown coded in prediction chains 670-700 having only a single “link” each (two frames per chain). For each I frame 620, 650, there are twice as many P frames 610, 630-640, 660. Each I frame (e.g., 620) is used as a source for prediction of two P frames, one 610 immediately before the I frame 620 and one 630 immediately after the I frame 620. Slightly longer prediction chains may be used, such as two-link prediction chains.
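The FIG. 6 pattern, in which every third frame is an I frame that anchors the P frames on either side of it, can be sketched as a simple frame-type assignment. The function below is an illustrative reading of that figure, not a rule stated in the text.

```python
def frame_types(n_frames):
    """Assign frame types per the single-link chains of FIG. 6: I frames at
    every third position, each predicting the P frames immediately before
    and after it (pattern P I P, P I P, ...)."""
    return ["I" if i % 3 == 1 else "P" for i in range(n_frames)]
```

For six frames (610-660) this yields P, I, P, P, I, P, matching I frames 620 and 650 in the figure.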


As noted above, the encoder 500 introduces coding errors in the reconstructed video data. The embodiment of FIG. 5 represents a trade-off between image quality and bandwidth on the one hand and encoder complexity on the other. However, the added coding errors contribute little to subjective image quality degradation. It is believed that the coding errors are ameliorated for several reasons. First, the coding errors are introduced only when coding blocks of type (c). For blocks of type (a) and type (b), the coding performed by the encoder 500 is just as accurate as coding done by the encoders 100, 400 of FIGS. 3 and 4. Also, when the encoder 500 is used in combination with the short, one- or two-link prediction chains, coding errors do not accumulate over more than one or two frames (i.e., the length of the prediction chain).


Further, when used in combination with short prediction chains, the encoder 500 provides a relatively high percentage of I frames in the coded output data. The high percentage of I frames also mitigates the subjective effect of coding errors that may occur in type (c) coded blocks. Again, the encoder 500 codes I frames just as accurately as do the encoders 100, 400 of FIGS. 3 and 4. And, as is known, human beings tend to disregard frames having significant coding errors when those frames are interspersed among accurately coded frames. The accurately coded I frames are more perceptually significant than the type (c) blocks of P frames. Thus, any errors present in type (c) blocks are mitigated naturally.


The embodiments of FIGS. 4 and 5 may be implemented as hardware devices, software devices or hybrid hardware and software devices. As a hardware device, the embodiments of FIGS. 4 and 5 may be implemented in, for example, an application specific integrated circuit. As a software device, the embodiments of FIGS. 4 and 5 may be implemented in a general purpose processor or a digital signal processor executing program instructions that cause the processor to perform block encoding and, possibly, block decoding functions (depending upon the embodiment). Such program instructions would be stored, most likely, in a storage device such as an electric, optical or magnetic memory and loaded into the general purpose processor or digital signal processor for execution. The present invention facilitates software implementations by reducing the number of calculations required for coding video data.


Throughout this discussion, reference has been made to “I frames,” “P frames,” “intra-coding” and “inter-coding,” “blocks” and “macroblocks.” Such nomenclature may be found in certain video coding standards. It is used for illustrative purposes only and not meant to limit the scope of the present invention to any coding standard or family of standards. The embodiments of the present invention herein described are applicable to predictive coders generally, not to any specific type of predictive coder.


As exemplary predictive coders, the present invention may be applied to coders operating in conformance with one or more of the following standards: “MPEG-2,” ISO 13818, Information Technology-Generic Coding of Moving Pictures and Associated Audio; “MPEG-4,” Requirements Version 4, ISO/IEC JTC1/SC29/WG11 N 1716 (1997); ITU-T, Recommendation H.261, “Video Codec for Audio Visual Services at p×64 kbit/s,” 1990; ITU-T H.263+ Video Group, “Draft 12 of ITU-T Recommendation H.263+,” 1997, and their successors. Accordingly, video data may be coded in items based on video frames, video objects or other structures as may be conventional to the predictive techniques used by the encoders of the prior art. Additionally, the intra-coding and inter-coding techniques (with or without a residual) that are performed may be performed on units of data such as blocks, macroblocks or other organizational units of video data as may be known. “Block,” as used herein, is used in a generic sense and is meant to encompass all of these organizational units of input video data. Such variances among coding techniques, items and units are consistent with the scope and spirit of the present invention.


Several embodiments of the present invention are specifically illustrated and described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.

Claims
  • 1. A video coder, comprising: a block encoder receiving as an input blocks of input video data, and having outputs for a coded data signal and a control signal, the coded data signal being output to an external communication channel, and a block decoder to receive as inputs the coded data signal and the control signal from the block encoder, the block decoder being disabled based on a state of the control signal generated by the block encoder, the block decoder outputting decoded video data, which is input to the block encoder.
  • 2. The video coder of claim 1, wherein the block encoder determines to code the input blocks of input video data according to a technique selected from the group of: (a) intra-coding, (b) inter-coding without a residual, and (c) inter-coding with a residual.
  • 3. The video coder of claim 2, wherein the block encoder generates the control signal to disable the block decoder when it is determined to code the input video data according to type (a) intra-coding and type (b) inter-coding without a residual.
  • 4. The video coder of claim 1, wherein the coded data signal being output to the external communication channel includes prediction chains.
  • 5. The video coder of claim 4, wherein the prediction chains have a length of two.
  • 6. The video coder of claim 4, wherein the prediction chains have a length of three.
  • 7. A video coding method, comprising: on a block-by-block basis, determining to code blocks of input video data according to a technique selected from the group of: (a) intra-coding, (b) inter-coding without a residual and (c) inter-coding with a residual, decoding coded video data associated with another time, the decoding being disabled unless it is determined to code according to type (c) inter-coding with a residual, and coding the input data according to the selected technique, wherein for type (c) coding, an output of the decoding is used to determine the residual.
  • 8. The video coding method of claim 7, wherein the coding step generates prediction chains, each prediction chain beginning with intra-coded video data.
  • 9. The video coding method of claim 8, wherein the prediction chains have a length of two.
  • 10. The video coding method of claim 8, wherein the prediction chains have a length of three.
  • 11. The video coding method of claim 8, wherein each instance of intra-coded video data is a beginning of two prediction chains.
  • 12. A computer readable medium storing program instructions that, when executed by a processor, cause the processor to: determine, on a block-by-block basis, to code input video data according to a technique selected from the group of: (a) intra-coding, (b) inter-coding without a residual and (c) inter-coding with a residual, decode coded video data associated with another time when it is determined to code the input video data according to type (c) inter-coding with a residual, the decoding being disabled when it is determined to code the input video data according to types (a) intra-coding or (b) inter-coding without a residual, and code the input data according to the selected technique, wherein for type (c) coding, an output of the decoding is used to determine the residual.
  • 13. The computer readable medium of claim 12, wherein every third frame of video data is coded according to intra-coding.
  • 14. The computer readable medium of claim 12, wherein the coding step generates prediction chains, each prediction chain beginning with intra-coded video data.
  • 15. The computer readable medium of claim 14, wherein the prediction chains have a length of two.
  • 16. The computer readable medium of claim 14, wherein the prediction chains have a length of three.
  • 17. The computer readable medium of claim 14, wherein each instance of intra-coded video data is a beginning of two prediction chains.
  • 18. A video coding method, comprising: receiving frames of video data, each frame having a plurality of blocks of data, coding a first frame of video data, coding a second frame of video data with reference to the coded first frame, including: for each block within the second frame, determining to code the block according to a technique selected from the group of: (a) intra-coding, (b) inter-coding without a residual and (c) inter-coding with a residual, when it is determined to code the block in the second frame according to type (c) inter-coding with a residual, decoding a coded block in the first frame, and the decoding being disabled for other blocks, and coding the block according to the selected technique, wherein for type (c) coding, an output of the decoding is used to determine the residual.
  • 19. The video coding method of claim 18, wherein every third frame of video data is coded according to intra-coding.
  • 20. The video coding method of claim 18, wherein the coding step generates prediction chains, each prediction chain beginning with intra-coded video data.
  • 21. The video coding method of claim 20, wherein the prediction chains have a length of two.
  • 22. The video coding method of claim 20, wherein the prediction chains have a length of three.
  • 23. The video coding method of claim 20, wherein each instance of intra-coded video data is a beginning of two prediction chains.
US Referenced Citations (74)
Number Name Date Kind
3716851 Neumann Feb 1973 A
4023110 Oliver May 1977 A
4131765 Kahn Dec 1978 A
4394774 Wildergren et al. Jul 1983 A
4670851 Murakami et al. Jun 1987 A
4698672 Chen et al. Oct 1987 A
4760446 Ninomiya et al. Jul 1988 A
4864393 Harradine et al. Sep 1989 A
4901075 Vogel Feb 1990 A
5010401 Murakami et al. Apr 1991 A
5021879 Vogel Jun 1991 A
5068724 Krause et al. Nov 1991 A
5091782 Krause et al. Feb 1992 A
5093720 Krause et al. Mar 1992 A
5113255 Nagata et al. May 1992 A
5168375 Reisch et al. Dec 1992 A
5175618 Ueda et al. Dec 1992 A
5223949 Honjo Jun 1993 A
5260783 Dixit Nov 1993 A
5293229 Iu Mar 1994 A
5298991 Yagasaki et al. Mar 1994 A
5317397 Odaka et al. May 1994 A
5329318 Keith Jul 1994 A
5343248 Fujinami Aug 1994 A
5377051 Lane et al. Dec 1994 A
5412430 Nagata May 1995 A
RE34965 Sugiyama Jun 1995 E
5428396 Yagasaki et al. Jun 1995 A
RE35093 Wang et al. Nov 1995 E
5469208 Dea Nov 1995 A
5469212 Lee Nov 1995 A
RE35158 Sugiyama Feb 1996 E
5497239 Kwon Mar 1996 A
5510840 Yonemitsu et al. Apr 1996 A
5539466 Igarashi et al. Jul 1996 A
5543847 Kato Aug 1996 A
5557330 Astle Sep 1996 A
5559557 Kato Sep 1996 A
5565920 Lee et al. Oct 1996 A
5568200 Pearlstein et al. Oct 1996 A
5587806 Yamada et al. Dec 1996 A
5625355 Takeno et al. Apr 1997 A
5648733 Worrell et al. Jul 1997 A
5654706 Jeong Aug 1997 A
5666461 Igarashi et al. Sep 1997 A
5684534 Harney et al. Nov 1997 A
5703646 Oda Dec 1997 A
5711012 Bottoms et al. Jan 1998 A
5719986 Kato et al. Feb 1998 A
5831688 Yamada et al. Nov 1998 A
5841939 Takahashi et al. Nov 1998 A
5852664 Iverson et al. Dec 1998 A
5887111 Takahashi et al. Mar 1999 A
5917954 Girod et al. Jun 1999 A
5946043 Lee et al. Aug 1999 A
5949948 Krause et al. Sep 1999 A
5991447 Eifrig et al. Nov 1999 A
5991503 Miyasaka et al. Nov 1999 A
6052507 Niida et al. Apr 2000 A
6064776 Kikuchi et al. May 2000 A
6081296 Fukunaga et al. Jun 2000 A
6081551 Etoh Jun 2000 A
RE36761 Fujiwara Jul 2000 E
6088391 Auld et al. Jul 2000 A
6115070 Song et al. Sep 2000 A
6125146 Frencken et al. Sep 2000 A
6141383 Yu Oct 2000 A
6144698 Poon et al. Nov 2000 A
6167087 Kato Dec 2000 A
6169821 Fukunaga et al. Jan 2001 B1
6188725 Sugiyama Feb 2001 B1
6217234 Dewar et al. Apr 2001 B1
6256420 Sako et al. Jul 2001 B1
6563549 Sethuraman May 2003 B1