This is a U.S. national stage of application No. PCT/FR2006/050568, filed on Jun. 19, 2006.
This application claims the priority of French patent application no. 05/06693 filed Jun. 30, 2005, the content of which is hereby incorporated by reference.
The present invention relates to video coding techniques.
It applies to situations in which a coder producing a coded video signal stream sent to a video decoder has the benefit of a back channel on which the equipment at the decoder end provides information indicating, explicitly or implicitly, whether or not it has been possible to reconstruct the images of the video signal appropriately.
Many video coders support an inter-frame coding mode in which movement between successive images of a video sequence is estimated in order for the most recent image to be coded relative to one or more preceding images. Movement in the sequence is estimated, the estimation parameters being sent to the decoder, and the estimation error is converted, quantized, and sent to the decoder.
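The movement-estimation step described above can be illustrated with a minimal sketch. The example below performs exhaustive block matching under a sum-of-absolute-differences (SAD) criterion; the function name, image sizes, and search radius are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def best_match(ref, block, top, left, radius=2):
    """Exhaustive block matching: find the displacement (dy, dx) within
    +/-radius that minimizes the sum of absolute differences (SAD)
    between `block` and the reference image `ref`."""
    h, w = block.shape
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate window falls outside the reference image
            sad = int(np.abs(ref[y:y+h, x:x+w].astype(int) - block.astype(int)).sum())
            if best is None or sad < best[0]:
                best = (sad, dy, dx)
    return best

# Reference image with a bright 2x2 patch; the current image shifts it right by one pixel.
ref = np.zeros((8, 8), dtype=np.uint8)
ref[3:5, 3:5] = 200
cur = np.zeros((8, 8), dtype=np.uint8)
cur[3:5, 4:6] = 200

# The coder would transmit (dy, dx) and the (here zero) prediction residue
# instead of the block's pixel values.
sad, dy, dx = best_match(ref, cur[3:5, 4:6], top=3, left=4)
```

With a perfect match the residue is zero and only the displacement needs to be coded, which is why inter-frame coding is so much cheaper than coding raw pixels.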
Each image of the sequence can also be coded without reference to the others. This is known as intra-frame coding. This coding mode exploits spatial correlation within an image. For a given bit rate for transmission from the coder to the decoder, it achieves lower video quality than inter-frame coding because it does not make use of temporal correlation between the successive images of the video sequence.
A video sequence portion routinely has its first image coded in intra-frame mode and subsequent images coded in intra-frame mode or inter-frame mode. Information included in the output stream from the coder indicates the macroblocks coded in intra-frame mode and in inter-frame mode and, for inter-frame mode, the reference image(s) to use.
A problem with inter-frame coding is its behavior in the presence of transmission errors or loss of packets on the communication channel between the coder and the decoder. Deterioration or loss of an image propagates to subsequent images until a new intra-frame coded image arrives.
It is routine for the mode of transmission of the coded signal between the coder and the decoder to generate total or partial loss of certain images. For example, with transmission over a packet network having no guaranteed delivery, such as an IP (Internet Protocol) network, such losses result from the loss or the delayed arrival of certain data packets. Losses can also result from errors introduced by the transmission channel that exceed the correction capabilities of the error corrector codes employed.
In an environment subject to various signal losses, it is necessary to provide mechanisms for improving image quality in the decoder. One of these mechanisms uses a back channel from the decoder to the coder on which the decoder informs the coder that it has lost some or all of certain images.
Following reception of this information, the coder makes coding choices to correct or at least reduce the effects of the transmission errors. Current coders simply send an intra-frame coded image, i.e. one with no reference to the images previously coded in the stream that may contain errors.
These intra-frame coded images are used to refresh the display and to correct errors caused by transmission losses. However, they are not of such good quality as inter-frame coded images. Thus the usual mechanism for compensating image losses leads in any event to deterioration of the quality of the reconstructed signal for a certain time after the loss.
There are also known mechanisms in which the decoder is capable of signaling lost image portions to the coder in more detail, with finer spatial and temporal localization. For example, if during processing of an image N the decoder determines that the macroblocks i, j, and k of the image N have been lost, it informs the coder of the loss of those macroblocks. Such mechanisms are described in the following documents in particular:
VCEG-Y15, “Definition of a back channel for H.264: some results”, Baillavoine, Jung, January 2005.
The drawback of this type of mechanism is the absence of any reaction by the coder, and therefore of any corrective processing, after the coder receives the information that image portions have been lost.
One object of the present invention is to improve the quality of a video signal following transmission errors when there is a back channel from the decoder to the coder.
To attain this and other objects, one aspect of the present invention is directed to a video coding method, comprising the following steps:
a) coding successive images of a video sequence to generate coding parameters;
b) including the coding parameters in an output stream to be transmitted to a station including a decoder;
c) receiving from said station back channel information on reconstruction of the images of the video sequence by the decoder;
d) analyzing the back channel information in order to identify, where applicable, at least one lost portion of an image of the video sequence; and
e) coding the current image of the video sequence in a coding mode that is a function of the identification or non-identification of a lost portion in the step d).
This enables the adoption of the most appropriate coding mode (Intra-frame, 16×16 Inter-frame, 8×8 Inter-frame, etc.) in the coder as a function of the result of the analysis of the back channel information.
In particular this avoids the systematic choice of intra-frame coding in the coder in the presence of transmission errors.
Implementations of the method of the invention make use of one or more optional additional features.
Another aspect of the invention relates to a computer program to be installed in a video processing unit, comprising instructions for executing the steps of a video coding method as defined above upon execution of the program by a calculation unit of said unit.
A further aspect of the invention relates to a video coder adapted to implement a method as defined above. Embodiments of the coder of the invention make use of corresponding optional features.
Other features and advantages of the present invention become apparent in the course of the following description of non-limiting embodiments, which is given with reference to the appended drawings, in which:
FIGS. 4a to 4d show the analysis of back channel information in the first embodiment of the method; and
FIGS. 5a to 5d show the analysis of back channel information in the second embodiment of the method.
The coding method according to the invention is applicable to videoconferences between two stations A and B.
In a preliminary negotiation phase, using the ITU-T H.323 protocol well known in the IP videoconference field, for example, the stations A, B agree on a dialogue configuration and, using the ITU-T H.241 protocol, agree on an H.264 configuration with long-term marking and on setting up a back channel, for example of the ITU-T H.245 type.
In the example of application to videoconferences, each station A, B is naturally equipped both with a coder and a decoder (codec). It is assumed here that station A is the sender that contains the video coder 1 and that station B is the receiver that contains the video decoder 2.
The stations A, B consist of personal computers, for example.
In H.264, the video image reconstruction module of the decoder 2 is also included in the coder 1. This reconstruction module 5 is therefore seen in both the coder and decoder diagrams of the appended drawings.
An entropic coding module 9 constructs the output stream Φ of the coder 1, which includes the coding parameters of the successive images of the video sequence (prediction parameters and the converted, quantized residue) together with various control parameters obtained by a control module 10 of the coder 1.
Those control parameters indicate in particular which coding mode (inter-frame or intra-frame) is used for the current image and, with inter-frame coding, the reference image(s) to use.
At the decoder 2 end, the stream Φ received by the network interface 4 is passed to an entropic decoder 11 which recovers the coding parameters and the control parameters, the control parameters being supplied to a control module 12 of the decoder 2. The control modules 10 and 12 supervise the coder 1 and the decoder 2, respectively, feeding them the commands necessary for determining the coding mode employed, designating the reference images in inter-frame coding, configuring and setting the parameters of the conversion elements, quantization and filtering, etc.
For inter-frame coding, each usable reference image FR is stored in a buffer 51 of the reconstruction module 5. The buffer contains a window of N reconstructed images immediately preceding the current image (short-term images) and where appropriate one or more images that the coder has specifically marked (long-term images).
The number N of short-term images held in memory is controlled by the coder 1. It is usually limited so as not to occupy too much of the resources of the stations A, B. These short-term images are refreshed after N images of the video stream.
Each image marked long-term is retained in the buffer 51 of the decoder 2 (and in that of the coder 1) until the coder produces a corresponding unmarking command. Thus the control parameters obtained by the module 10 and inserted into the stream Φ also include commands for marking and unmarking the long-term images.
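The reference buffer just described, a sliding window of N short-term images plus explicitly marked long-term images retained until an unmarking command, can be sketched as follows. The class and method names are hypothetical, and real H.264 decoded-picture-buffer management is considerably more involved.

```python
from collections import deque

class ReferenceBuffer:
    """Sketch of the buffer 51: a window of N short-term reference images
    plus long-term images kept until explicitly unmarked by the coder."""
    def __init__(self, n_short):
        self.short = deque(maxlen=n_short)  # oldest short-term image is dropped automatically
        self.long = {}                      # label -> image, kept until an unmarking command

    def push(self, image_no):
        """Store a freshly reconstructed image as a short-term reference."""
        self.short.append(image_no)

    def mark_long_term(self, label, image_no):
        self.long[label] = image_no

    def unmark(self, label):
        self.long.pop(label, None)

    def references(self):
        """All images currently usable as inter-frame prediction references."""
        return list(self.short) + list(self.long.values())

buf = ReferenceBuffer(n_short=3)
for n in range(5):
    buf.push(n)               # after images 0..4, only 2, 3, 4 remain short-term
buf.mark_long_term("LT", 0)   # image 0 survives the sliding window as "LT"
```

The long-term marking is what lets the coder fall back on an old, known-good reference even after the short-term window has moved past it.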
A movement estimation module 15 calculates the prediction parameters for inter-frame coding by a known method as a function of the current image F and one or more reference images FR. The predicted image P is generated by a movement compensation module 52 on the basis of the reference image(s) FR and the prediction parameters calculated by the module 15.
The reconstruction module 5 includes a module 53 that recovers converted and quantized parameters from quantization indices produced by the quantization module 8. A module 54 effects the opposite conversion to the module 7 to recover a quantized version of the prediction residue. This is added to the blocks of the predicted image P by an adder 55 to supply the blocks of a preprocessed image PF′. The preprocessed image PF′ is finally processed by a deblocking filter 57 to supply the reconstituted image F′ delivered by the decoder and stored in its buffer 51.
In intra-frame mode, spatial prediction is effected by a known method as and when blocks of the current image F are coded. This prediction is effected by a module 56 on the basis of blocks of the preprocessed image PF′ already available.
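As a rough illustration of this spatial prediction, the sketch below implements a DC-style intra predictor that predicts a block as the mean of the already reconstructed pixels above and to its left. H.264 defines several directional intra modes beyond this simplified one, and the function name and pixel values are illustrative assumptions.

```python
import numpy as np

def intra_dc_predict(above, left):
    """DC-style intra prediction: predict every pixel of a block as the
    mean of the already reconstructed neighbours above and to the left."""
    neighbours = np.concatenate([above, left]).astype(float)
    return np.full((len(left), len(above)), neighbours.mean())

# Reconstructed neighbouring pixels (one row above, one column to the left).
above = np.array([100, 102, 98, 100])
left = np.array([101, 99, 100, 100])

pred = intra_dc_predict(above, left)
actual = np.full((4, 4), 100)
residue = actual - pred   # only this small residue is converted, quantized, and sent
```

In a smooth region the residue is near zero, which is why intra prediction exploits spatial correlation even though it never references another image.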
For a given coding quality, sending intra-frame coded parameters generally requires a higher bit rate than sending inter-frame coded parameters. In other words, for a given transmission bit rate, intra-frame coding of an image of a video sequence achieves lower quality than inter-frame coding.
Choosing between the intra-frame mode and the inter-frame mode for a current image is effected by the control module 10 of the coder 1.
A first embodiment is described next, mainly with reference to FIGS. 4a to 4d.
In the example shown, it is assumed that the lost portions of the image are the macroblocks of the image.
In this first embodiment, the control module 10 of the coder 1 cooperates with an update module 17.
In particular, the update module 17 contains an update table TMAJ which associates a given short-term or long-term reference image with one or more state parameters indicating whether or not the decoder 2 has identified one or more lost macroblocks in that image.
Assume that in the example shown in FIGS. 4a to 4d the reference images available in the buffer 51 are the short-term images n−1, n−2, and n−3 and a long-term image LT.
Assume now that at time t the coder 1 processes the image n and that the decoder 2 has not signaled any loss to the coder 1.
Consequently, referring to FIGS. 4a and 4b, at time t the correspondence table TC and the update table TMAJ register no lost macroblock for any of the reference images n−1, n−2, n−3, and LT.
On the basis of the content of the update table TMAJ at time t, the coder 1 then chooses to code the image n in inter-frame mode via the movement compensation module 52 because of the absence of deterioration of the preceding reference images n−1, n−2, n−3, and LT.
Assume now, with reference to FIGS. 4c and 4d, that at time t+1 the coder 1 processes the image n+1 and the decoder 2 processes the image n−1.
During processing, the decoder 2 identifies the loss of a macroblock of the image n−1, for example the macroblock MBi. The decoder 2 then sends this information to the coder 1 via the control module 12 and the network interface 4.
The control module 10 of the coder 1 analyzes this information and detects the indication of the loss of the macroblock MBi of the image n−1. As can be seen in FIG. 4c, the correspondence table TC is updated accordingly at time t+1.
At the same time t+1, the coder 1 determines that the image n+1 to be coded must exclude the macroblock MBi of the image n−1, but also all the macroblocks of the image n that refer to the macroblock MBi of the image n−1, such as the macroblocks MBi, MBi+1, MBi+8, and MBi+9, for example. To this end, the control module 10 activates the update module 17 to update the table TMAJ.
As can be seen in FIG. 4d, the update table TMAJ indicates at time t+1 that the macroblock MBi of the image n−1 and the macroblocks MBi, MBi+1, MBi+8, and MBi+9 of the image n are excluded as references.
On the basis of the content of the update table TMAJ at time t+1, the coder 1 then chooses to code the image n+1 in inter-frame mode via the movement compensation module 52, minimizing the deterioration of the quality of the image n+1 by acting on the loss of the macroblock MBi signaled by the decoder 2 and excluding the aforementioned macroblocks MBi, MBi+1, MBi+8, and MBi+9. In the event of transmission errors, the method therefore favors resuming coding not in intra-frame mode, as current coders do, but in inter-frame mode.
The tables TC and TMAJ are managed in this way for each image of the video sequence.
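The exclusion logic of this first embodiment (a lost macroblock contaminates every macroblock predicted from it) amounts to a transitive closure over a prediction-dependency map. The data layout and function name below are assumptions chosen for illustration; the patent describes the tables TC and TMAJ, not this exact structure.

```python
def excluded_macroblocks(loss, dependencies):
    """Given a reported loss {image: {macroblock indices}} and a map
    dependencies[image][mb] = set of (ref_image, ref_mb) pairs used for its
    prediction, return every (image, mb) pair the coder must no longer use
    as a reference, including indirect dependents."""
    excluded = {(img, mb) for img, mbs in loss.items() for mb in mbs}
    changed = True
    while changed:  # propagate through chains of inter-frame prediction
        changed = False
        for img, mbs in dependencies.items():
            for mb, refs in mbs.items():
                if (img, mb) not in excluded and refs & excluded:
                    excluded.add((img, mb))
                    changed = True
    return excluded

# The decoder reports the loss of macroblock 10 of image 4 (playing the role of n-1).
loss = {4: {10}}
# In image 5 (image n), macroblocks 10, 11, 18, 19 were predicted from that
# macroblock; macroblock 3 used a different, intact reference.
deps = {5: {10: {(4, 10)}, 11: {(4, 10)}, 18: {(4, 10)}, 19: {(4, 10)}, 3: {(4, 2)}}}
result = sorted(excluded_macroblocks(loss, deps))
```

Everything outside `result` remains a trustworthy reference, which is what allows the coder to keep using inter-frame mode instead of falling back to a costly intra-frame refresh.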
A second embodiment is described next, mainly with reference to FIGS. 5a to 5d.
In the example represented, it is again assumed that the lost portions of the image are the macroblocks of that image.
The second embodiment also differs from the first embodiment in the tables it maintains. To be more precise, for a given short-term or long-term reference image, a correspondence table TC′ identifies which of its macroblocks have been lost in the decoder 2 and which have been received by the decoder 2, and an update table TMAJ′ associates each macroblock of that reference image with a confidence index I.
This confidence index I is a real value in the range [0;1], where 1 expresses full confidence that the corresponding macroblock has been correctly reconstructed by the decoder 2 and lower values express reduced confidence following a signaled loss.
Assume that in the example represented in FIGS. 5a to 5d the reference images available in the buffer 51 are again the short-term images n−1, n−2, and n−3 and a long-term image LT.
Assume now that at time t the coder 1 processes the image n and the decoder 2 processes the image n−2.
Assume further that at this time t the decoder 2 does not signal any loss to the coder 1 in respect of the image n−2.
Consequently, at time t the correspondence table TC′ is as represented in FIG. 5a.
At time t, the update module 17 updates the table TMAJ′ so that for each reference image indicated in the “image number” field each macroblock of that reference image is associated with a confidence index I value in a “confidence index” field.
As shown in FIG. 5b, in the absence of any signaled loss each macroblock of each reference image is associated at time t with the maximum confidence index value.
On the basis of the content of the update table TMAJ′ at time t, the movement compensation module 52 and the intra-frame prediction module 56 respectively calculate, for each macroblock identified in the table TMAJ′, a cost criterion J = (1/I)·(D + λR), where D is the coding distortion, R is the coding rate, λ is a Lagrange multiplier, and I is the confidence index associated with the macroblock.
Whether to code the image n in inter-frame mode or intra-frame mode is chosen at time t as a function of the lowest cost criterion J evaluated.
In the example shown, the cost criterion J calculated by the movement compensation module 52 is the lowest. Consequently, the image n is coded in inter-frame mode.
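The confidence-weighted mode decision of this second embodiment can be sketched numerically, assuming the cost criterion takes the form J = (1/I)·(D + λR), i.e. the usual rate-distortion cost weighted by the reciprocal of the confidence index. The distortion, rate, and λ values below are purely illustrative; only the form of J and the rule of choosing the mode with the lowest J come from the description above.

```python
def mode_cost(distortion, rate, confidence, lam=0.5):
    """Confidence-weighted rate-distortion cost J = (1/I) * (D + lambda*R).
    All numeric values are illustrative assumptions."""
    return (distortion + lam * rate) / confidence

# Inter prediction from a fully trusted reference versus intra coding, which
# uses no reference and is therefore assigned full confidence here.
inter = mode_cost(distortion=10.0, rate=20.0, confidence=1.0)
intra = mode_cost(distortion=12.0, rate=60.0, confidence=1.0)
choose = "inter" if inter < intra else "intra"          # inter wins: cheaper and trusted

# After a signaled loss, confidence in the inter reference drops, inflating
# its cost until intra coding becomes the cheaper option.
inter_lost = mode_cost(distortion=10.0, rate=20.0, confidence=0.3)
choose_after_loss = "inter" if inter_lost < intra else "intra"
```

This reproduces the behavior described in the text: inter-frame mode is preferred while references are trusted, and intra-frame mode is selected only when the confidence weighting makes the inter cost prohibitive.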
Referring to FIGS. 5c and 5d, assume now that at time t+1 the coder 1 processes the image n+1 and the decoder 2 processes the image n−1.
During processing, the decoder 2 identifies the loss of a macroblock of the image n−1, for example the macroblock MB2. The decoder 2 then sends that information to the coder 1 via the control module 12 and the network interface 4.
The control module 10 of the coder 1 analyzes this information and detects the indication of the loss of the macroblock MB2. As can be seen in FIG. 5c, the correspondence table TC′ is updated accordingly at time t+1.
At the same time t+1, the coder 1 determines that the image n+1 to be coded must exclude the macroblock MB2 of the image n−1 and also all the macroblocks of the image n that refer to the macroblock MB2 of the image n−1, for example the macroblocks MB2 and MB1. To this end, the control module 10 activates the update module 17 to update the correspondence table TMAJ′.
As can be seen in FIG. 5d, the update table TMAJ′ then associates a reduced confidence index with the macroblock MB2 of the image n−1 and with the macroblocks MB1 and MB2 of the image n.
On the basis of the content of the update table TMAJ′ at time t+1, the movement compensation module 52 and the intra-frame prediction module 56 calculate the aforementioned cost criterion J for each macroblock identified in the table TMAJ′.
In the example shown, the cost criterion J calculated by the intra-frame prediction module 56 is the lowest. Consequently, the image n+1 is coded in intra-frame mode.
By weighting the cost criterion with the confidence index, the second embodiment therefore further optimizes the coding strategy in the coder 1, leading to the choice of the most appropriate coding mode.
This second embodiment achieves an advantageous compromise between coding efficiency and robustness (resistance to errors).
| Number | Date | Country | Kind |
|---|---|---|---|
| 05 06693 | Jun 2005 | FR | national |
| Filing Document | Filing Date | Country | Kind | 371c Date |
|---|---|---|---|---|
| PCT/FR2006/050568 | 6/19/2006 | WO | 00 | 4/22/2009 |
| Publishing Document | Publishing Date | Country | Kind |
|---|---|---|---|
| WO2007/003836 | 1/11/2007 | WO | A |
| Number | Name | Date | Kind |
|---|---|---|---|
| 6658618 | Gu et al. | Dec 2003 | B1 |
| 6744924 | Hannuksela et al. | Jun 2004 | B1 |
| 7253831 | Gu | Aug 2007 | B2 |
| Number | Date | Country |
|---|---|---|
| 0 753 968 | Jan 1997 | EP |
| Entry |
|---|
| Baillavoine et al., Definition of a back channel for H.264, Jan. 2005, ITU-T VCEG, VCEG-Y15, pp. 1-8. |
| Joerg Ott, et al., “Extended RTP Profile for RTCP-based Feedback (RTP/AVPF)”, IETF/AVT Draft Aug. 10, 2004, pp. 1-47. |
| Frederic Loras, et al., “Definition of a back channel for H.264”, ITU-T Video Coding Experts Group (ITU-T SG16 Q.6), 24th Meeting: Oct. 18-22, 2003, Palma de Mallorca, Document VCEG-X09, pp. 1-7. |
| Marc Baillavoine, et al., “Definition of a back channel for H.264: some results”, ITU-T Video Coding Experts Group (ITU-T SG16 Q.6), 25th Meeting: Jan. 16-21, 2005, Hong Kong, Document VCEG-Y15, 8 pgs. |
| ITU-T., “Control protocol for multimedia communication, Recommendation H.245”, Jul. 2003, pp. 1-13. |
| Number | Date | Country | |
|---|---|---|---|
| 20090213939 A1 | Aug 2009 | US |