Analysis of compression decoded video image sequences

Information

  • Patent Grant
  • Patent Number
    6,895,049
  • Date Filed
    Monday, October 11, 1999
  • Date Issued
    Tuesday, May 17, 2005
Abstract
A method of detecting I-frames in a video signal which has previously been MPEG coded involves taking a DCT and analyzing the frequency of zero-value coefficients. An I-frame, which does not utilize prediction coding, is expected to have a higher number of zero coefficients than a predicted P- or B-frame.
Description
FIELD OF THE INVENTION

This invention relates to the analysis of compression decoded sequences and in the most important example to the analysis of video signals that have been decoded from an MPEG bitstream.


BACKGROUND OF THE INVENTION

It is now understood that processes such as re-encoding of a video signal can be significantly improved with knowledge of at least some of the coding decisions used in the original encoding.


Proposals have been made for making some or all of these coding decisions available explicitly. Examples of these proposals can be seen in previous patent applications [see EP 0 765 576 and EP 0 913 058]. These methods involve the use of an Information Bus which passes MPEG coding parameters from a decoder to a subsequent re-coder.


However, in certain situations, no such explicit information is available, and the only available information is that contained in the decoded video signal.


It will be well understood that in the MPEG-2 video compression standard, there are different categories of frames which differ in the degree to which they are coded using prediction, and that these categories are denoted by I-, P- and B-frames respectively. An important coding decision to be taken into consideration in a re-encoding process is accordingly the frame structure in terms of the I-, P- and B-frames.


SUMMARY OF THE INVENTION

Accordingly, it is an object of the invention to determine, by analysis of the video signal, information concerning the upstream coding and decoding process that is useful in minimizing degradation of picture quality in a subsequent, downstream coding and decoding process.


A further object of the present invention is to derive information from a decoded video signal concerning the categories of frames employed in the encoding process. It is a further object of this invention to assist in maintaining picture quality when cascading compression decoding and coding processes.


Accordingly, the present invention consists in a method of analysing a signal derived in coding and decoding processes which utilise a quantisation process having a set of possible quantisation values, and in which the coded signal contains categories of frames, which categories differ in the degree to which their frames are coded using prediction, the method comprising the steps of measuring the occurrence in the signal of values corresponding with the set of possible quantisation values, and inferring the category of a specific frame by testing the occurrence of said values against a threshold.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention will now be described by way of example with reference to the accompanying drawings, in which:



FIG. 1 is a block diagram illustrating the use of an Information Bus as in the prior art;



FIG. 2 is a block diagram showing one embodiment of the present invention;



FIG. 3 is a graph illustrating the operation of one embodiment of the present invention; and



FIG. 4 is a block diagram illustrating in more detail a specific part of the apparatus of the FIG. 2 embodiment.





DETAILED DESCRIPTION

Referring initially to FIG. 1, the MPEG decoder (100), adapted as shown in the above prior references, receives an MPEG bitstream. In addition to the output of a standard MPEG decoder, this adapted decoder then produces an Information Bus output conveying the coding decisions taken in the upstream encoder, which are of course inherent in the MPEG bitstream. The Information Bus is then passed to the dumb coder (102) along with the video signal. This dumb coder then follows the coding decisions made by the upstream coder (not shown) which are conveyed by the Information Bus.


This invention is related to the situation in which the adapted MPEG decoder cannot be used to produce the Information Bus because the MPEG bitstream has already been decoded by a standard decoder and the input bitstream is no longer available. The aim is to try to estimate as many as possible of the MPEG coding parameters by analysing the decoded video signal.


When the Information Bus is used, transparent cascading is only possible when all the relevant parameters at sequence, GOP, picture, slice and macroblock rate are carried. Clearly, it is not possible to estimate all these parameters from a decoded picture alone. However, a proportion of the benefit of the Information Bus can be obtained even if the only information carried from decoder to coder relates to the picture type (I-, P- or B-frame). In this case, the ‘dumb’ coder of FIG. 1 becomes a full MPEG coder except that it sets the picture type to that received in the Information Bus.


The purpose of the present invention is to estimate the picture type by analysing the decoded video signal. Such an estimate would be used as shown in FIG. 2. In this embodiment, the standard MPEG decoder (200) receives the bitstream input and decodes it, producing the usual video signal output. This is passed to the MPEG coder (206) and also to the picture type detector (202). The information from the picture type detector is passed to an Information Bus generator (204). The Information Bus so generated, relating to picture type only, is then passed to the MPEG coder, which follows the coding information available in the Information Bus when coding the video signal, producing an output bitstream.


The following description concentrates on one particular aspect of the picture type detector, the detection of I-frames. In this embodiment, the coding processes referred to are those of MPEG-2, in which the different categories of frames which differ in the degree to which they are coded using prediction are the picture types I, P, and B. The I-frames are coded with no prediction; the P-frames with only forward prediction and the B-frames with both forward and backward prediction.


The invention relies on the observation that intra coded blocks in MPEG-2 are the direct output of an inverse DCT function, whereas predicted macroblocks are the result of an inverse DCT function added to a prediction. If we take the forward DCT of a picture that has been decoded from an MPEG-2 bitstream, then we would expect the DCT coefficients of intra blocks to take only values that were in the set of quantizer reconstruction levels specified in the MPEG-2 standard. The DCT coefficients of predicted blocks might occasionally exhibit this property, but this would only be fortuitous. Unfortunately, without knowledge of the quantizer step size and weighting matrix used in the original encoder, the only quantizer reconstruction level that we know to exist is zero. However, because the distribution of DCT coefficients (even for intra blocks) is highly peaked around zero, we can still expect a large number of DCT coefficients of intra blocks to be equal to zero.
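As a minimal illustration of this observation only, the following sketch takes the forward DCT of a single 8×8 block of decoded pixels and counts the coefficients at or near zero. It assumes NumPy and SciPy are available and uses SciPy's orthonormal 2-D DCT-II as a stand-in for a bit-exact MPEG-2 transform; the function name and the tolerance are illustrative, the tolerance being needed because the decode and transform arithmetic will not in practice reproduce exact zeros.

```python
# Illustrative sketch only; not the patented apparatus itself.
import numpy as np
from scipy.fft import dctn


def count_zero_coefficients(block_8x8: np.ndarray, tol: float = 0.5) -> int:
    """Forward-DCT an 8x8 block of decoded pixels and count (near-)zero coefficients.

    For an intra-coded block the coefficients should fall on the quantiser
    reconstruction levels used by the original encoder; zero is the only level
    known without the quantiser step size, but it is also by far the most common.
    """
    coeffs = dctn(block_8x8.astype(np.float64), norm="ortho")
    return int(np.sum(np.abs(coeffs) < tol))
```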


If we count the number of zero DCT coefficients in each frame, we would expect a high number to indicate either that the frame is an I-frame and contains only intra blocks, or that it is a predicted frame in which a very high proportion of the blocks are intra coded. In either case, it would be acceptable to judge such a frame as an I-frame for the purposes of optimizing the performance of the re-coding step. One slight complication is that, for luminance blocks, there are two options for every macroblock for the input to the DCT process: dct_type can be either frame-based or field-based. This problem can be avoided by simply taking both kinds of DCT in parallel and including both in the count, accepting the fact that there will be a 2:1 ‘dilution’ of the result. While chrominance blocks do not have this problem in Main Profile MPEG coding, they are not used in the preferred form of the invention because of the ambiguities involved in transcoding between the 4:2:0 and 4:2:2 formats.
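By way of example only, a per-frame zero-coefficient count along these lines might be sketched as follows. For every 16×16 luminance macroblock, both the frame-based and the field-based block arrangements are transformed, accepting the 2:1 dilution just described, and chrominance is ignored; the function name and tolerance are again illustrative.

```python
# Illustrative sketch of a per-frame zero-coefficient count (ZCC).
import numpy as np
from scipy.fft import dctn


def zero_coefficient_count(luma: np.ndarray, tol: float = 0.5) -> int:
    """Count (near-)zero DCT coefficients over the luminance plane of one frame.

    Because the dct_type used by the original encoder is unknown, each
    macroblock is transformed in both the frame-based and the field-based
    arrangements, which doubles the number of blocks counted.
    """
    h, w = luma.shape
    zcc = 0
    for y in range(0, h - 15, 16):
        for x in range(0, w - 15, 16):
            mb = luma[y:y + 16, x:x + 16].astype(np.float64)
            # Field-based arrangement: top-field lines over bottom-field lines.
            mb_field = np.vstack((mb[0::2, :], mb[1::2, :]))
            for arrangement in (mb, mb_field):
                for by in (0, 8):
                    for bx in (0, 8):
                        block = arrangement[by:by + 8, bx:bx + 8]
                        zcc += int(np.sum(np.abs(dctn(block, norm="ortho")) < tol))
    return zcc
```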



FIG. 3 illustrates aspects of this particular embodiment of the invention. Graph B in FIG. 3 shows the number of zero coefficients encountered for each frame in a typical sequence in which the first and every 12th subsequent frame is an I-frame.


Clear peaks can be seen which indeed correspond to the existence of I-frames. In principle, we should be able to apply a threshold to the curve in order to detect I-frames. However, as the graph illustrates, a simple fixed threshold will not always work. The sequence has a change in scene content around frame 190, and this leads to a drop both in the counts of zero coefficients for I-frames and in the ‘background count’ for other frames. Some kind of adaptive threshold filter is therefore required. There now follows an example showing how the threshold could be adapted.


The filter sets an initial threshold which is then modified once per frame. If the zero-coefficient count (ZCC) exceeds the threshold, then an I-frame is deemed to have been detected and the threshold is reset to the ZCC value; otherwise the threshold is decreased by a factor, known here as the Threshold Modifier Factor (TMF):


Tn = (Tn−1*TMF + Vn)/(TMF + 1)


where Tn is the threshold at frame n and Vn is the ZCC value at frame n.


It can be seen that if the incoming ZCC data is constant, then the threshold will tend to that constant.
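One per-frame step of such a filter might be sketched as follows; the function name and signature are illustrative only.

```python
# Illustrative sketch of one step of the adaptive threshold filter.
def update_threshold(threshold: float, zcc: int, tmf: float) -> tuple[float, bool]:
    """Apply one frame of the threshold filter.

    If the zero-coefficient count exceeds the threshold, an I-frame is deemed
    detected and the threshold is reset to the ZCC value; otherwise the
    threshold decays towards the incoming ZCC values:
        Tn = (Tn-1 * TMF + Vn) / (TMF + 1)
    so that constant ZCC input drives the threshold to that constant.
    """
    if zcc > threshold:
        return float(zcc), True   # I-frame detected: reset threshold
    return (threshold * tmf + zcc) / (tmf + 1.0), False
```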


The TMF is determined using a confidence test. The confidence test looks at the position of the last I-frame found and the number of frames between the last two detected I-frames, and assumes that the I-frames are occurring at regular intervals. If there were no frames between the last two detected I-frames, then we assume that a false I-frame was detected, so the confidence is set to 0.5 until another I-frame is found; otherwise the confidence is determined as:

Confidence = exp((CurrentFramePosition − ExpectedIFramePosition)/5)


The confidence is clipped to the range 0 (no peak expected) to 1 (peak expected).
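This confidence test could be sketched as below; the placement of the division by five inside the exponent is the reading consistent with clipping the result to the range 0 to 1, and the helper name and its arguments are illustrative.

```python
# Illustrative sketch of the confidence test.
import math


def confidence(current_frame: int, last_iframe: int, prev_iframe: int) -> float:
    """Confidence that an I-frame peak is expected at the current frame.

    I-frames are assumed to recur at the spacing seen between the last two
    detections; if there were no frames between them, a false detection is
    assumed and the confidence is held at 0.5 until the next I-frame.
    """
    spacing = last_iframe - prev_iframe
    if spacing <= 1:                      # no frames between the last two I-frames
        return 0.5
    expected = last_iframe + spacing      # ExpectedIFramePosition
    c = math.exp((current_frame - expected) / 5.0)
    return min(max(c, 0.0), 1.0)          # clip to the range 0 to 1
```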


The TMF is determined as:

TMF=Max−Confidence*(Max−Min)

where Min and Max are the minimum and maximum values the TMF is allowed to take.


It can be seen that the greater the confidence, the smaller the TMF, hence the more rapidly the threshold decreases until an I-frame is detected.
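A corresponding sketch of the TMF calculation is given below; the minimum and maximum TMF values are free parameters that the description does not fix. In use, the confidence computed for the current frame is mapped to a TMF, which is then supplied to the per-frame threshold update sketched earlier.

```python
# Illustrative sketch of the TMF calculation.
def threshold_modifier_factor(conf: float, tmf_min: float, tmf_max: float) -> float:
    """TMF = Max - Confidence * (Max - Min).

    Higher confidence gives a smaller TMF, so the threshold decays more
    rapidly when an I-frame is expected soon.
    """
    return tmf_max - conf * (tmf_max - tmf_min)
```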


The above description is simply an example; other systems of adaptive thresholding could equally be used.


A refinement to the detection method could be to make an explicit and separate detection of scene changes and incorporate this into the confidence measure. This could be augmented by a priori knowledge of the strategy adopted by the original encoder to modify or reset the GOP structure at scene changes.


The performance of the complete detection method is illustrated in FIG. 3, where the four curves show: the threshold (graph A); the zero-coefficient count (ZCC) value (graph B); the confidence value (graph C); and the result, a downward-pointing spike indicating that an I-frame has been detected (graph D).


A further particular aspect of the picture type detector involves the detection of all the different categories of frames which make up the frame structure in the video signal and not merely the I-frames. The method here is similar to that employed in detecting the I-frames. In general, a particular P-frame is not necessarily noticeably more intra-coded than a particular B-frame, but if the analysis is performed as for the detection of I-frames, and an average of the results over a large number of frames is taken, it is found that the P-frames are, on average, slightly more intra-coded than the B-frames. This result can be used to determine the particular frame structure which was used in the original encoding.


Given a specific number of non-I-frames between two I-frames, there are only a few different types of frame structure which are used in conventional encoders. For example, given an I-frame every sixth frame, the structure will typically be I, B, B, P, B, B, I or I, B, P, B, P, B, I. Changes, if any, between frame structures in a given signal are also reasonably infrequent, allowing a large sample from which an average can be taken. If an average of the results of an analysis as above is taken, the subtle trends in the numbers of zero coefficients found for the different types of predicted frame can be used, along with the knowledge that there are only a certain known number of different frame structures available, to deduce the frame structure which was used in encoding the given signal. This information can be used alongside information regarding the position of the I-frames to give more detailed information to the MPEG coder (as in FIG. 2, (206)) via the Information Bus.
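One way in which the averaged counts might be applied is sketched below: the zero-coefficient counts are averaged by position between two I-frames over many GOPs, and the candidate structure whose P positions show the largest margin over its B positions is selected. The decision rule and the names used are assumptions made for this example rather than a rule specified in the description.

```python
# Illustrative sketch of choosing a GOP structure from averaged ZCC values.
import numpy as np


def infer_gop_structure(zcc_by_position: np.ndarray,
                        candidates: dict[str, list[str]]) -> str:
    """Pick the candidate structure best matching the averaged ZCC profile.

    zcc_by_position holds the ZCC averaged, over many GOPs, for each position
    between two I-frames.  P-frames are on average slightly more intra-coded
    than B-frames, so the chosen candidate is the one whose P positions show
    the largest ZCC margin over its B positions.
    """
    def margin(labels: list[str]) -> float:
        p = [zcc_by_position[i] for i, t in enumerate(labels) if t == "P"]
        b = [zcc_by_position[i] for i, t in enumerate(labels) if t == "B"]
        return float(np.mean(p) - np.mean(b)) if p and b else float("-inf")

    return max(candidates, key=lambda name: margin(candidates[name]))


# Example: five non-I-frames between I-frames (an I-frame every sixth frame).
candidates = {"I B B P B B": ["B", "B", "P", "B", "B"],
              "I B P B P B": ["B", "P", "B", "P", "B"]}
```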



FIG. 4 gives a block diagram of an I-frame detector using the method described above. The video signal is first converted to field blocks (400) and frame blocks (401). The signal from each of these is then passed first through a DCT (402), and the zero coefficients for the signal are then counted (404). The two signals are then added and the combined zero coefficient count is compared at 408 with a threshold. An I-frame is detected if the threshold is exceeded. The threshold is calculated at 406 utilising, as described above, information from the combined zero coefficient count and the location of the last detected I-frame.
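The following self-contained sketch condenses the pieces illustrated above into a single per-frame loop mirroring FIG. 4, assuming the decoded luminance frames are supplied as NumPy arrays; the tolerance, the TMF limits and the initialisation of the threshold are illustrative assumptions rather than values taken from the description.

```python
# Illustrative end-to-end sketch of the FIG. 4 I-frame detector.
import math
import numpy as np
from scipy.fft import dctn


def _zcc(luma: np.ndarray, tol: float) -> int:
    """Zero-coefficient count over frame-arranged and field-arranged 8x8 blocks."""
    count = 0
    # Tiling each field separately yields the same set of 8x8 blocks as the
    # per-macroblock field arrangement, so both dct_type options are covered.
    for plane in (luma, luma[0::2, :], luma[1::2, :]):
        h, w = plane.shape
        for y in range(0, h - 7, 8):
            for x in range(0, w - 7, 8):
                coeffs = dctn(plane[y:y + 8, x:x + 8].astype(np.float64), norm="ortho")
                count += int(np.sum(np.abs(coeffs) < tol))
    return count


def detect_iframes(luma_frames, tol=0.5, tmf_min=2.0, tmf_max=16.0):
    """Yield (frame_index, is_iframe) for a sequence of decoded luminance frames."""
    threshold = None
    prev_i = last_i = None
    for n, luma in enumerate(luma_frames):
        zcc = _zcc(luma, tol)
        if threshold is None:
            threshold = float(zcc)                    # initial threshold (assumed)
        # Confidence from the positions of previously detected I-frames.
        if prev_i is None or last_i is None or last_i - prev_i <= 1:
            conf = 0.5
        else:
            expected = last_i + (last_i - prev_i)     # assume regular I-frame spacing
            conf = min(max(math.exp((n - expected) / 5.0), 0.0), 1.0)
        tmf = tmf_max - conf * (tmf_max - tmf_min)
        if zcc > threshold:                           # I-frame detected
            prev_i, last_i = last_i, n
            threshold = float(zcc)
            yield n, True
        else:                                         # decay the threshold
            threshold = (threshold * tmf + zcc) / (tmf + 1.0)
            yield n, False
```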


It will be understood that this invention has been described by way of example only, and a wide variety of modifications is possible without departing from the scope of the invention.


Thus, whilst the described counting of zero coefficients has the advantage of not requiring prior knowledge of the full set of possible quantisation values, there will be circumstances in which it will be appropriate to measure the occurrence of other values. Also, although the example of MPEG-2 is of course very important, the invention is also applicable to other coding schemes which utilise categories of frames which differ in the degree to which frames are coded using prediction.

Claims
  • 1. A method of analysing a signal which has previously been coded and decoded, said previous coding utilising a quantisation process having a set of possible quantisation values and in which the coded signal contained categories of frames, which categories differ in the degree to which frames were coded using prediction, the method comprising the steps of measuring the occurrence in the signal of those values which correspond with the set of possible quantisation values, and inferring the category of a specific frame by testing the occurrence of said values against a threshold.
  • 2. A method according to claim 1, in which said threshold is varied in accordance with an expected pattern of said categories of frames in the coded signal.
  • 3. A method according to claim 2, in which said threshold varies with the number of frames since detection in the decoded signal of a particular category of frame.
  • 4. A method according to claim 3, in which said particular category of frame contains those frames which are coded with no prediction.
  • 5. A method according to claim 1, in which the occurrence of zero values is measured.
  • 6. A method according to claim 1, in which coding and decoding processes utilise frame types I, P and B; in which I frames are coded with no prediction, P frames are coded with only forward prediction and B frames are coded with both forward and backward prediction.
Priority Claims (1)
Number Date Country Kind
9822092 Oct 1998 GB national
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/GB99/03359 10/11/1999 WO 00 4/6/2001
Publishing Document Publishing Date Country Kind
WO0022831 4/20/2000 WO A
US Referenced Citations (25)
Number Name Date Kind
5086488 Kato et al. Feb 1992 A
5142380 Sakagami et al. Aug 1992 A
5249053 Jain Sep 1993 A
5438625 Klippel Aug 1995 A
5512956 Yan Apr 1996 A
5629779 Jeon May 1997 A
5642115 Chen Jun 1997 A
5671298 Markandey et al. Sep 1997 A
5748245 Shimizu et al. May 1998 A
5802218 Brailean Sep 1998 A
5812197 Chan et al. Sep 1998 A
5831688 Yamada et al. Nov 1998 A
5930398 Watney Jul 1999 A
5991456 Rahman et al. Nov 1999 A
6005952 Klippel Dec 1999 A
6151362 Wang Nov 2000 A
6163573 Mihara Dec 2000 A
6269120 Boice et al. Jul 2001 B1
6278735 Mohsenian Aug 2001 B1
6285716 Knee et al. Sep 2001 B1
6437827 Baudouin Aug 2002 B1
6539120 Sita et al. Mar 2003 B1
6570922 Wang et al. May 2003 B1
20010031009 Knee et al. Oct 2001 A1
20020118760 Knee et al. Aug 2002 A1
Foreign Referenced Citations (3)
Number Date Country
0509576 Oct 1992 EP
0710030 May 1996 EP
WO 9826602 Jun 1998 WO