Method of computing wavelets temporal coefficients of a group of pictures

Abstract
The invention relates to a method of computing wavelets temporal coefficients of a GOP (Group Of Pictures) whose length is 2^n. A controlled temporal transform is applied recursively, generating n decomposition levels. Each decomposition level comprises the mean and the mean difference of each couple of input signals. During the last n-1 decomposition levels, each transform block of a decomposition level is controlled by a control signal corresponding to the sum of two temporal mean differences output by the previous decomposition level. The corresponding temporal means of said previous decomposition level are the input signals of said transform block. When the said control signal is equal to zero, the output values of said transform block are the temporal mean value and the temporal mean difference of the input signals. When the control signal is different from zero, the output signals are the said input signals.
Description


BACKGROUND OF THE INVENTION

[0001] The present invention relates to a method of computing wavelets temporal coefficients of a GOP (Group Of Pictures) whose length is 2^n by applying recursively a temporal transform generating n decomposition levels, each decomposition level comprising a function M(ai,bi) and a function D(ai,bi) for each couple of input signals ai,bi. The technique used is based on wavelets, for digital video coding.


[0002] A picture is a matrix of numbers, each number representing the color intensity, the luminance, or a value computed by an algorithm, etc., of a pixel. If the image is encoded on one byte per value, each number lies between 0 and 255. A picture may be referred to as a frame when it represents a matrix of the color intensities or luminances of the pixels.


[0003] A digital video signal is composed of a sequence of frames. The appearance of motion is given to the observer by displaying a certain number of frames per second (usually 25 or 30). In this context, a digital video signal is indeed a 3-D (3-dimensional) signal, where two dimensions represent the image plane (a single frame) and the third dimension represents time (the successive frames of the sequence, FIG. 1).


[0004] In order to efficiently compress such 3-D signals it is necessary to exploit both the spatial (2-D) and the temporal (third dimension) redundancies. Wavelets have been widely used to exploit the spatial redundancies of the video sequence, by applying for example the Haar transform.


[0005] The Haar transform is well known in wavelet theory: it is the wavelet transform with one of the shortest supports. Given two input values A and B, the corresponding Haar coefficients are simply their half difference Δ and their mean μ.
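
The elementary step can be sketched as follows (illustrative Python, not part of the specification; the sign of the half difference is an arbitrary convention, since the text does not fix one):

```python
def haar_pair(a: float, b: float) -> tuple[float, float]:
    """Haar coefficients of two input values: their mean and their half difference."""
    return (a + b) / 2, (b - a) / 2

def haar_pair_inverse(mean: float, half_diff: float) -> tuple[float, float]:
    """Recover the original pair (A, B) from the mean and the half difference."""
    return mean - half_diff, mean + half_diff
```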


[0006] In the context of video coding, the Haar transform has been used both as a spatial and as a temporal transform. In this second case the temporal decomposition is applied on a GOP (Group Of Pictures) of size 2^n. The input data can be the raw images of the video sequence (luminance and chrominance values) or their spatial decomposition using any 2-D linear transform (transform coefficients).


[0007] When Haar is applied as a temporal transform we obtain the scheme reported in FIG. 2. Here the input is represented by four pictures (F1, F2, F3, F4) generated by a 2-D Haar transform of the frames of the input sequence.


[0008] Given F1 and F2, the corresponding temporal Haar transform is composed of a mean picture μ1 and a difference picture Δ1; μ1 and Δ1 represent the Haar transform of F1 and F2. It is possible to apply the Haar transform recursively so as to generate several decomposition levels. In the example of FIG. 3, we show two levels of decomposition obtained by applying Haar first on the pictures (F1, F2) and (F3, F4), then on the two mean pictures (μ1, μ2). The corresponding Haar transform is represented by Δ1, Δ2, Δ3 and μ3. Note that the Haar decomposition is extremely efficient for video coding applications when the input pictures are similar to each other. In this case, the difference pictures are close to zero and thus easy to compress with an entropy coder.
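
As an illustration of this recursion, the two-level decomposition of FIG. 3 can be sketched as follows (illustrative Python; the four constant pictures are placeholders, not data taken from the specification):

```python
import numpy as np

def haar_pair(a, b):
    return (a + b) / 2, (b - a) / 2        # (mean picture, half-difference picture)

# Placeholder pictures standing in for F1..F4 (any arrays of equal shape would do).
F1, F2, F3, F4 = (np.full((4, 4), v, dtype=float) for v in (10.0, 10.0, 12.0, 12.0))

mu1, d1 = haar_pair(F1, F2)     # level 1 on (F1, F2)
mu2, d2 = haar_pair(F3, F4)     # level 1 on (F3, F4)
mu3, d3 = haar_pair(mu1, mu2)   # level 2 on the two mean pictures

# For similar input pictures the difference pictures d1, d2, d3 stay close to zero,
# which is what makes them cheap to compress with an entropy coder.
```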


[0009] In the Haar transform, the decomposition is applied uniformly to all the inputs and at every decomposition level.


[0010] The present invention proposes a method of computing wavelets temporal coefficients of a GOP that efficiently exploits the temporal redundancies of the transform coefficients computed by any 2-D linear transform.


[0011] The method according to the invention is characterized in that, during the last n-1 decomposition levels, a decomposition level transform block is controlled by a control signal corresponding to the sum of two functions D(ai,bi) output by the previous decomposition level, while the corresponding functions M(ai,bi) of said previous decomposition level are the input signals for said transform block; in that when the said control signal is equal to zero the output values of said transform block are the functions M(M(ai,bi), M(ai+1,bi+1)) and D(M(ai,bi), M(ai+1,bi+1)) of the input signals M(ai,bi) and M(ai+1,bi+1); and in that when the control signal is different from zero the output signals are the said input signals.


[0012] According to a preferred embodiment, the function M(ai,bi) is the mean value of the signals ai,bi and the function D(ai,bi) is the quantization of the mean difference of the signals ai,bi.


[0013] The proposed technique efficiently exploits the temporal redundancies of the transform coefficients computed, preferably, by a 2-D linear transform, using the new method called by the inventors the “Dynamic Temporal Transform”. Unlike other solutions, this technique does not require a motion estimation procedure, which is a computationally expensive task.


[0014] According to an embodiment, 2^n frames of the input sequence are first independently transformed with a 2-D linear transform into a GOP of 2^n pictures, each picture of the said GOP containing the transform coefficients of one input frame. The said transform coefficients are passed to a temporal transform generating a first level of temporal decompositions, each decomposition comprising a temporal mean and a temporal difference. For the further n-1 decomposition levels, the control signal is the sum of the quantized versions of two temporal mean differences output by the previous decomposition level.


[0015] Note that, in this embodiment, the proposed “Dynamic Temporal Transform” is applied on the results of the 2-D linear transform and not on the raw images. This choice reduces the complexity of the encoder.


[0016] According to another embodiment, the input values for the first decomposition level are the raw images, and the control signal is the sum of the coded, quantized, dequantized and decoded image mean differences of the two signals output by the previous level.


[0017] According to a preferred embodiment, the 2-D linear transform is the 2-D 5.3 wavelet transform, where the low-pass filter is a 5-tap filter and the high-pass filter is a 3-tap filter, and the temporal transform is the Haar temporal transform.
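
For illustration only, a one-dimensional lifting sketch of a reversible 5/3 decomposition step is given below; it assumes that the 5.3 wavelet referred to here is the LeGall 5/3 filter pair used, for example, in JPEG 2000, which is an assumption and not a statement of the specification. A 2-D version would apply the same step to the rows and then to the columns.

```python
def lift_53(x):
    """One level of a reversible 5/3 lifting transform on an even-length integer signal.

    Returns (low_band, high_band). Assumption: the '5.3 wavelet' is the LeGall 5/3
    pair (5-tap low pass, 3-tap high pass) in its integer lifting form.
    """
    n = len(x)
    assert n >= 2 and n % 2 == 0

    def sym(i):                       # whole-sample symmetric boundary extension
        if i < 0:
            i = -i
        if i >= n:
            i = 2 * n - 2 - i
        return x[i]

    # Prediction step: high-pass (3-tap) coefficients at odd positions.
    high = [x[2 * k + 1] - (sym(2 * k) + sym(2 * k + 2)) // 2 for k in range(n // 2)]

    # Update step: low-pass (5-tap) coefficients at even positions.
    low = []
    for k in range(n // 2):
        d_prev = high[k - 1] if k > 0 else high[0]   # symmetric extension of the high band
        low.append(x[2 * k] + (d_prev + high[k] + 2) // 4)
    return low, high
```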







BRIEF DESCRIPTION OF THE DRAWINGS

[0018] The invention will be presented with the help of the accompanying drawings, in which the 2-D linear transform and the temporal transform are Haar transforms.


[0019]
FIG. 1 is a representation of video sequence seen as a 3-D signal;


[0020]
FIG. 2 is a scheme of the Haar temporal decomposition;


[0021]
FIG. 3 is an overview scheme of the present invention method;


[0022]
FIG. 4 is a scheme of the present invention method;


[0023]
FIG. 5 is a presentation of the controlled transform block;


[0024]
FIG. 6 is an illustrative decomposition according to the classic Haar transform and to the present invention transform.


[0025]
FIG. 7 is a scheme of the temporal Haar transform applied to the image domain.


[0026]
FIG. 8 is a presentation of the present invention method applied to the image domain.







DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0027] The overview scheme of the proposed technique according to the first embodiment is depicted in FIG. 3.


[0028] The proposed “Dynamic Temporal Transform” can be applied on any GOP with length 2^n. In FIG. 3, the GOP has a length of 4 (2^2). These images are first independently transformed with 2-D wavelets (a parallel implementation can be suitable for this step). As a result, we generate four spatial decompositions containing the wavelet coefficients of each input image. Note that each decomposition contains as many coefficients as there are pixels in the input image. These coefficients are then passed to the “Dynamic Temporal Transform”, which generates four temporal decompositions. Also in this case, each decomposition contains as many coefficients as there are pixels in the input image. One of these decompositions is a temporal mean and three are temporal differences, as will be explained in more detail in the next sections.
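
This data flow can be outlined as follows (illustrative Python; spatial_2d is a hypothetical placeholder for whatever 2-D linear transform is chosen, and the temporal transform is passed in as a function):

```python
import numpy as np

def spatial_2d(frame: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for the per-frame 2-D linear transform (wavelet, DCT, ...).

    The identity keeps the sketch runnable; a real encoder would return a coefficient
    picture with as many coefficients as the frame has pixels.
    """
    return frame.astype(float)

def encode_gop(frames, temporal_transform):
    """Step 1: independent spatial transforms (parallelizable); step 2: temporal transform."""
    n = len(frames)
    assert n > 0 and (n & (n - 1)) == 0, "GOP length must be a power of two (2**n)"
    coefficient_pictures = [spatial_2d(f) for f in frames]
    return temporal_transform(coefficient_pictures)
```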


[0029] The length of the GOP (2^n) defines the number of temporal decomposition levels (n). The higher n is, the more temporal redundancy can be exploited. However, n also defines the delay introduced by the encoder and thus it cannot be too large in practical applications.


[0030] The proposed method defines a procedure to dynamically decide which input coefficient will be further transformed and which coefficient will simply be propagated.


[0031] The defined scheme is described in FIG. 4, where the procedure is extended to eight input pictures It to It-7. The scheme shows that in the first level the classic Haar transform, described above (FIG. 2), is applied on all the inputs. The results are two pictures: on the top the mean μ and on the bottom the difference Δ. The quantized version, Q, of the difference picture is used as the control signal for the next decomposition level, while the mean picture is one of the inputs of the controlled transform block of the next decomposition level.


[0032] According to the control signal, the output of the controlled transform block can be either the mean and the difference of the input signals or directly the input signals. The detailed mechanisms of the controlled transform block are depicted in FIG. 5. The input signals are Cn(x,y,t1) and Cn(x,y,t2); they represent the wavelet coefficients generated at the n-th decomposition level at subband position (x, y), at times t1 and t2 respectively.


[0033] Two cases are depicted. In the first case the control signal Q is zero: the output signal is simply the Haar transform of the input (roughly speaking, the mean and the difference of the input signals). In the second case the control signal Q is different from zero: the output signals are directly the input signals.
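
The behaviour of the controlled transform block can be sketched as follows (illustrative Python; the half-difference convention is an assumption):

```python
def controlled_block(c1, c2, control):
    """Controlled transform block of FIG. 5 for one coefficient pair.

    c1 and c2 play the role of Cn(x,y,t1) and Cn(x,y,t2); 'control' is the sum of the
    two quantized differences coming from the previous decomposition level.
    """
    if control == 0:
        # Control signal is zero: output the temporal mean and the half difference.
        return (c1 + c2) / 2, (c2 - c1) / 2
    # Control signal is non-zero: propagate the input signals unchanged.
    return c1, c2
```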


[0034] In order to illustrate the mechanisms of the proposed “Dynamic Temporal Transform”, two examples are presented in FIG. 6. The first shows the mechanisms of the classic Haar transform. The input signal is piece-wise constant, simulating a strong discontinuity in the temporal domain (for example a moving object). The eight (2^3) input values are decomposed over 3 levels. At each level, from two input signals we obtain two coefficients: their mean and half of their difference (in the example, the input pair 0,0 gives 0,0 as mean and half difference, and another input pair 0,4 is transformed into 2,2). After the three levels of decomposition the eight input signals are represented by 8 coefficients. The first one is the last computed mean value (in this example 5) and the other 7 are all the computed differences (in this example 3,2,0,0,4,0,0). From these coefficients it is possible to reconstruct the input values.


[0035] On the same input signal we have applied the proposed “Dynamic Temporal Transform”. The difference with respect to the standard Haar transform appears at the second level of decomposition. Now the input pair 0,4 gives 0,4 instead of 2,2, because the control signal for this transform block was non-null (in FIG. 6 the non-null control signal is marked by a dotted circle). Because of this difference, after three levels of decomposition we have the following eight output coefficients: 0 is the last mean and 8,4,0,0,4,0,0 are all the differences. After applying a quantization step of 2 to the result obtained by the classic Haar transform, we obtain the coefficients 2,1,1,0,0,2,0,0, which after decoding give 0,0,0,8,6,6,6,6. Applying the same quantization to the coefficients obtained with the “Dynamic Temporal Transform”, we obtain 0,4,2,0,0,2,0,0 and, after decoding, 0,0,0,8,8,8,8,8, which is exactly the input. This means that, in the above example, the “Dynamic Temporal Transform” encodes better than the classic Haar transform.
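
The worked example above can be reproduced with the following illustrative Python sketch. The input values 0,0,0,8,8,8,8,8 are inferred from the decoded sequence quoted in this paragraph, and the quantizer is a plain uniform quantizer of step 2; both are assumptions made only for this illustration, and the decoder-side reconstruction quoted in the text is not implemented here.

```python
def quantize(v, step=2):
    """Uniform quantizer used only for this example (step 2, as in the text)."""
    return int(v / step)

def classic_haar(values):
    """Classic temporal Haar on 2**n values: final mean followed by all differences."""
    means, diffs = list(values), []
    while len(means) > 1:
        pairs = list(zip(means[0::2], means[1::2]))
        diffs = [(b - a) / 2 for a, b in pairs] + diffs   # coarser levels listed first
        means = [(a + b) / 2 for a, b in pairs]
    return means + diffs

def dynamic_haar(values):
    """Dynamic Temporal Transform: the first level is classic Haar; later blocks are
    controlled by the sum of the two quantized differences of the previous level."""
    pairs = list(zip(values[0::2], values[1::2]))
    means = [(a + b) / 2 for a, b in pairs]
    diffs = [(b - a) / 2 for a, b in pairs]
    out_diffs = list(diffs)
    while len(means) > 1:
        next_means, next_diffs = [], []
        for k in range(0, len(means), 2):
            control = quantize(diffs[k]) + quantize(diffs[k + 1])
            if control == 0:
                m, d = (means[k] + means[k + 1]) / 2, (means[k + 1] - means[k]) / 2
            else:                 # non-null control: propagate the inputs unchanged
                m, d = means[k], means[k + 1]
            next_means.append(m)
            next_diffs.append(d)
        out_diffs = next_diffs + out_diffs
        means, diffs = next_means, next_diffs
    return means + out_diffs

gop = [0, 0, 0, 8, 8, 8, 8, 8]                    # inferred from the decoded sequence
print(classic_haar(gop))                           # [5.0, 3.0, 2.0, 0.0, 0.0, 4.0, 0.0, 0.0]
print(dynamic_haar(gop))                           # [0.0, 8.0, 4.0, 0.0, 0.0, 4.0, 0.0, 0.0]
print([quantize(c) for c in classic_haar(gop)])    # [2, 1, 1, 0, 0, 2, 0, 0]
print([quantize(c) for c in dynamic_haar(gop)])    # [0, 4, 2, 0, 0, 2, 0, 0]
```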


[0036] The proposed “Dynamic Temporal Transform” significantly improves the performance of the classic Haar transform in the context of video coding. The main advantage is that, at a given rate, the proposed “Dynamic Temporal Transform” does not introduce annoying artifacts such as ghosts around moving objects, whereas the classic Haar transform does. The robustness against these artifacts makes it possible to increase the GOP's length and to exploit more of the temporal redundancy in the input signal. In the classic temporal Haar transform, the presence of artifacts when moving objects are in the scene limits the GOP's length to 2^4. This limitation has an impact on the coding performance of the classic temporal Haar transform.


[0037] Another important advantage of the proposed implementation lies in its reduced complexity compared to standard approaches such as MPEG-2/4. In fact, the proposed encoding process does not need a prior decoding procedure in the encoder to exploit the temporal redundancies.


[0038] No drawbacks are introduced by the proposed implementation of the “Dynamic Temporal Transform”. Since the transform is symmetric and reversible, there is no need to send any additional control signal to the decoder.


[0039] The proposed “Dynamic Temporal Transform” has a major domain of application in the compression of video surveillance signals. The reasons are:


[0040] 1. In security videos a large part of the scene remains fixed: thus the temporal redundancy exploited over long GOPs has a significant impact on the compression performance. The use of the classical temporal Haar transform on long GOPs is limited by the ghost artifacts. This is corrected by the use of the proposed “Dynamic Temporal Transform”.


[0041] 2. In security scenarios, real time constraints are very strong: the proposed “Dynamic Temporal Transform” has a very low computational complexity compared to MPEG-2/4 standard approaches and can be easily implemented in hardware.


[0042] However, the proposed Dynamic Haar transform can be used in other domains of application where high computational performance is required and where static scenes are compressed. Examples are video telephony, video forums, video conferences, etc.


[0043] The method described in the above embodiment is applied in the linear transform domain, i.e. it is applied on transform coefficients such as wavelet coefficients. However, according to another embodiment of the present invention, it can be extended and generalized to the image domain, and thus be applied on the color information of an image. In this context, instead of applying the “Dynamic Temporal Transform” in the transform domain (e.g. on the wavelet coefficients as depicted in FIG. 2), it is possible to apply the Dynamic Haar transform in the image domain (e.g. on the color intensities in whatever format: RGB, YUV, YCbCr, etc., as depicted in FIG. 7).


[0044] Note that this generalization requires that the encoder have access to the same information that will be available at the decoder. Because of this, the scheme of FIG. 4 must be generalized as shown in FIG. 8.


[0045] For clarity of the scheme, only one level of decomposition is reported as an example. The input It to It-3 is now represented by the raw input images and not by their 2-D transforms as it was in FIG. 4.


[0046] In this generalization the controlled transform block remains the one described in FIG. 5. As before, the controlled transform block of the first decomposition generates two pictures: the mean difference Δ and the mean μ. Δ is encoded using any possible coder (in FIG. 8 the result of Coder 1 is called δ). Then, δ is decoded using the corresponding Decoder 1. The result is an approximation of the input image difference, Δ′. We will see that this picture will be available at the decoder as well.


[0047] In the example of FIG. 8, the first level of decomposition gives us two pictures Δ′. Their sum at a pixel level is the control signal for the following level of decomposition. The controlled transform block will decide whether to encode the picture mean difference and mean, or to transmit the real values without temporal encoding, according to whether the picture difference, once coded, quantized, dequantized and decoded, is equal to or different from zero. This choice can be made at a pixel level and does not require sending any additional information from the encoder to the decoder, since both take their decisions from the same data. The pictures μ and Δ obtained are coded independently with any coder to generate two streams Ω and δ. Finally, the streams sent to the decoder are the Ω stream corresponding to the coded version of the final mean μ, and the three δ streams corresponding to the coded versions of the 3 mean differences. The decoder is thus able to reconstruct the control signals Δ′ from the corresponding δ's.
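
One step of this image-domain scheme can be sketched as follows (illustrative Python; code_image and decode_image are hypothetical stand-ins for “any possible coder” and its decoder, and the identity round-trip only marks where a real, possibly lossy, coder would sit):

```python
import numpy as np

def code_image(picture: np.ndarray) -> np.ndarray:
    """Hypothetical Coder 1: produces the stream delta from a difference picture."""
    return picture.copy()

def decode_image(stream: np.ndarray) -> np.ndarray:
    """Hypothetical Decoder 1: returns the approximation delta-prime of the difference."""
    return stream.copy()

def first_level_block(frame_a: np.ndarray, frame_b: np.ndarray):
    """First-level block of FIG. 8: mean picture, coded difference, decoded difference."""
    mean = (frame_a + frame_b) / 2
    diff = (frame_b - frame_a) / 2
    delta = code_image(diff)              # delta: sent to the decoder
    diff_dec = decode_image(delta)        # delta': reproducible at encoder and decoder
    return mean, delta, diff_dec

def control_signal(diff_dec_1: np.ndarray, diff_dec_2: np.ndarray) -> np.ndarray:
    """Per-pixel control for the next level: the sum of the two decoded difference
    pictures; a pixel is temporally transformed only where this sum is zero."""
    return diff_dec_1 + diff_dec_2
```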


[0048] Note that in the MPEG-4 standard a similar result is obtained by sending the information of whether a block is encoded as intra or inter. In that case the choice is made at block resolution and with the additional cost of sending the instructions to the decoder.


[0049] Although illustrative embodiments of the invention have been shown and described, a wide range of modification, change and substitution is contemplated in the foregoing disclosure and in some instances, some features of the present invention may be employed without a corresponding use of the other features. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the scope of the invention.


Claims
  • 1. Method of computing wavelets temporal coefficients of a GOP (Group Of Pictures) the length of which is 2^n by applying recursively a temporal transform generating n decomposition levels each decomposition level comprising a function M(ai,bi) and a function D(ai,bi) for each couple of input signals ai, bi, characterized in that during the last n-1 decomposition levels each decomposition level transform block is controlled by a control signal corresponding to the sum of two functions D(ai,bi) outputted from the previous decomposition level while the corresponding functions M(ai,bi) of said previous decomposition level are the input signals for said transform block, and in that when the said control signal is equal to zero the output values of said transform block are the functions M(M(ai,bi), M(ai+1,bi+1)) and D(M(ai,bi), M(ai+1,bi+1)) of the input signals M(ai,bi) and when the control signal is different from zero the output signals are the said input signals.
  • 2. Method according to claim 1, characterized in that the function M(ai,bi) is the mean value of signals ai,bi and the function D(ai,bi) is the quantization of the mean difference of the signals ai,bi.
  • 3. Method according to claim 2, characterized in that 2^n frames of the input sequence are first independently transformed with any 2-D linear transform (wavelet, DCT, . . . ) into a GOP of 2^n pictures each picture containing the transform coefficients of each input frame, in that the obtained spatial transform coefficients are passed to a temporal transform generating a first level of temporal decompositions each decomposition comprising temporal mean and temporal difference, and in that for the further n-1 decomposition levels the control signal is the sum of the quantized version of two temporal mean differences outputted from the previous decomposition level.
  • 4. Method according to claim 2, characterized in that the input values for the first decomposition level are the raw images, in that the control signal is the sum of the coded, quantized, dequantized and decoded image mean differences of the two signals outputted from the previous level.
  • 5. Method according to claim 3, characterized in that the 2-D linear transform is the 2-D 5.3 wavelet transform where the low pass filter is a 5 tap filter and the high pass filter is a 3 tap filter.
  • 6. Method according to claim 1, characterized in that the said temporal transform is the Haar temporal transform.
  • 7. Method according to claim 2, characterized in that the said temporal transform is the Haar temporal transform.
  • 8. Method according to claim 3, characterized in that the said temporal transform is the Haar temporal transform.
  • 9. Method according to claim 4, characterized in that the said temporal transform is the Haar temporal transform.
Priority Claims (1)
Number Date Country Kind
02024184.0 Oct 2002 EP