Method and apparatus for motion dependent coding

Abstract
The gravity center coding shall be improved with respect to false contour effect disturbances on plasma display panels, for example. Therefore, there is provided a GCC code (gravity center code) and a motion amplitude of a picture or a part of a picture. Furthermore, there is provided at least one sub-set code of the GCC code. The video data are coded with the GCC code or the at least one sub-set code depending on the motion amplitude. Thus, it is possible to reduce the number of coding levels as the motion increases. A further improvement can be obtained by using texture information for selecting the GCC code.
Description

BRIEF DESCRIPTION OF THE DRAWINGS

Exemplary embodiments of the invention are illustrated in the drawings and described in more detail in the following description. The drawings show in:



FIG. 1 the composition of a frame period for the binary code;



FIG. 2 the centre of gravity of three video levels;



FIG. 3 the centre of gravity of sub-fields;



FIG. 4 the temporal gravity centre depending on the video level;



FIG. 5 chosen video levels for GCC;



FIG. 6 the centre of gravity for different sub-field arrangements for the video levels;



FIG. 7 time charts for several GCC codes with a different number of levels depending on the intensity of motion;



FIG. 8 a time chart showing hierarchical GCC codes;



FIG. 9 a cut-out of FIG. 8;



FIG. 10 a block diagram for implementing the inventive concept; and



FIG. 11 a logical block diagram for selecting an appropriate code depending on motion and skin tone.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

A preferred embodiment of the present invention relates to linear-motion coding for GCC.


The main idea behind this concept is to have a set of codes all based on the same skeleton. This is really important since, if the picture is divided into regions depending on the movement in each region, the border between two regions must stay invisible. If totally different code words were used in each region, the border would become visible in the form of false contour borders.


Therefore, a first GCC code is defined using a large number of levels and providing a good and almost noise-free grayscale for static areas. Then, based on this code, levels are suppressed to go step by step towards a coding that is more optimized for fast motion. Then, depending on the motion information obtained for each pixel, the appropriate sub-set of codes is used.


The motion information can be a simple frame difference (the stronger the difference between two frames is, the lower the number of levels being selected) or more advanced information coming from real motion detection or motion estimation.
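As a rough illustration only (not part of the original description), the following sketch shows how such a simple frame-difference motion amplitude could be derived per pixel; the function name and the use of 8-bit luminance frames are assumptions.

```python
import numpy as np

def motion_amplitude(prev_frame, cur_frame):
    """Simple motion measure mentioned above: per-pixel absolute difference
    between two consecutive frames. The stronger the difference, the higher
    the amplitude and the lower the number of levels that will be selected."""
    diff = cur_frame.astype(np.int16) - prev_frame.astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```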


In the following, it is assumed that at the beginning of the PDP video chain, motion information is given as a motion amplitude. This can be provided either by a motion detector/estimator located in the same chip or by a front-end chip having such a block inside.



FIG. 7 shows that, depending on the motion speed, various GCC modes are selected, ranging from a high number of discrete levels for a static pixel down to a low number of discrete levels for a fast moving pixel.


In the present example a GCC code having 255 discrete levels is used for a static picture as shown in the upper left picture of FIG. 7, a GCC code having 94 discrete levels is used for coding a low motion pixel as shown in the upper right picture, a GCC code having 54 discrete levels is used for coding a medium motion pixel as shown in the lower right picture, and a GCC code having 38 discrete levels is used for coding a fast motion pixel as shown in the lower left picture of FIG. 7. As the number of discrete levels decreases, the dithering noise level increases. This is only an example and many more sub-codes can be implemented.


However, one of the main ideas behind this concept is to get the best compromise between dithering noise level and motion rendition quality. Furthermore, a very important aspect is that all GCC modes are built in a hierarchical way, otherwise the concept will not work very well. This means that a mode k is automatically a subset of a mode k−1.
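The hierarchical requirement can be stated compactly: every discrete level of mode k must also be present in mode k−1. A minimal sketch of such a consistency check, assuming the mode tables are plain lists of selected video levels:

```python
def is_hierarchical(modes):
    """modes[k] is the list of discrete levels of GCC mode k (mode 0 = static).
    Returns True if every mode is a subset of the previous, less restrictive mode."""
    return all(set(modes[k]) <= set(modes[k - 1]) for k in range(1, len(modes)))
```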


The number of modes is flexible and depends on the targeted application. These modes can be either all stored in the chip in various tables or generated for each pixel. In the first case the choice between tables will be done depending on the motion amplitude information. In the second case, the motion amplitude information will be used to compute directly the correct GCC encoding value.


The global concept is illustrated in the following table for the same example as shown in FIG. 7.


[Table: selected discrete levels per GCC mode (embedded images in the original document)]

The table shows per column the selected levels for each mode. An empty cell means that the level has not been selected. For intermediate modes (for example between mode 0 and mode 1), the symbol “ . . . ” means that the code word can either be selected or not, depending on the optimization process.


As can be seen in the previous table, a mode l always contains fewer discrete levels than a mode k when k&lt;l. Furthermore, all discrete levels of mode l are always available in mode k.


The next paragraphs propose a possibility for defining the various modes. Specifically, a hierarchical mode construction will be shown.


In order to define all required modes in a linear way, so that they can be adapted linearly to the motion, a new concept has been developed based on the distance to the ideal GCC curve. For the illustration of this concept FIG. 8 presents three curves:

    • the curve of gray rhombs built with all discrete levels (e.g. 255 in our example) defined for static areas
    • the curve of white squares built with all discrete levels (e.g. 38 in our example) for fast moving areas
    • the black ideal curve to select gravity centres in order to minimize moving artifacts.


In order to define a motion dependent coding, a parameter called DTI (Distance To Ideal) is defined for each available discrete level of the static area code. This DTI describes the distance between the gravity centre of a code word and the ideal GCC curve (black curve). FIG. 9 shows DTIs for some levels of the curves of FIG. 8. The DTI has to be evaluated for each level (code word).


Then, the respective DTI will be associated with each code word. In order to obtain various codings depending on the movement, each DTI will be compared to a threshold derived from the motion amplitude. The higher the motion amplitude is, the lower the DTI must be for a code word to be selected. With this concept it is possible to define a large number of coding modes varying with the motion amplitude.
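A minimal sketch of this construction, assuming each code word is described by its video level, its sub-field arrangement and its temporal gravity centre, and that the ideal GCC curve is available as a per-level array; these names are illustrative and do not come from the original text.

```python
def distance_to_ideal(gravity_centre, ideal_curve, level):
    """DTI of one code word: distance between the temporal gravity centre of
    its sub-field arrangement and the ideal GCC curve at the same video level."""
    return abs(gravity_centre - ideal_curve[level])

def build_mode(code_words, ideal_curve, max_dti):
    """Keep only the code words whose DTI does not exceed max_dti.
    Lowering max_dti (i.e. higher motion amplitude) only removes levels,
    so the resulting modes are hierarchical by construction."""
    return [(level, sub_fields)
            for level, sub_fields, gc in code_words
            if distance_to_ideal(gc, ideal_curve, level) <= max_dti]
```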


Now, a concept of hardware implementation will be illustrated with reference to FIG. 10. As already said, the various codes with hierarchical structure can either be computed on the fly or be stored in different tables on-chip.


In the first case, only the DTI is computed by software and stored for each code word in a LUT on-chip. Then, for each incoming pixel, motion amplitude information is generated or provided. This information will be compared to the DTI information of each code word to determine whether the code word may be used or not.
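A sketch of this first case under the assumption that the motion amplitude is mapped to a maximum allowed DTI by a simple decreasing function; the constants below are placeholders and the real mapping would be tuned for the panel.

```python
def dti_limit(motion_amplitude, dti_max=16.0, slope=2.0):
    """Assumed mapping: the higher the motion amplitude, the lower the
    allowed distance to the ideal GCC curve (placeholder constants)."""
    return max(0.0, dti_max - slope * motion_amplitude)

def code_word_usable(dti_of_code_word, motion_amplitude):
    """Per-pixel test against the on-chip DTI LUT entry of one code word."""
    return dti_of_code_word <= dti_limit(motion_amplitude)
```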


In the second case, a number P of tables is stored in the chip. The DTI information could be used to define such tables but this is not absolutely mandatory. Additionally, some experimental fine-tuning of the tables can be adopted to further improve the behavior. In that case, the motion amplitude will determine which table must be used to code the current pixel.
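A sketch of this second case, assuming P = 4 stored tables as in the example of FIG. 7 and purely illustrative motion-amplitude thresholds:

```python
# The thresholds below are assumptions for illustration only.
MOTION_THRESHOLDS = (2, 8, 20)        # boundaries between the P = 4 tables

def select_table(motion_amplitude, tables):
    """tables[0] is the static (255-level) mode, tables[-1] the fast-motion
    (38-level) mode; the motion amplitude picks the table for the current pixel."""
    for index, threshold in enumerate(MOTION_THRESHOLDS):
        if motion_amplitude <= threshold:
            return tables[index]
    return tables[-1]
```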


According to FIG. 10, the input R, G, B picture is forwarded to the gamma block 1 performing a power function of the form






Out = 4095 × (Input / MAX)^γ






where γ is typically around 2.2 and MAX represents the highest possible input value. The output should have at least 12 bits to be able to render low levels correctly. The output of this gamma block 1 can be forwarded to an optional motion amplitude estimation block 2 (e.g. calculating a simple frame difference). However, in theory, it is also possible to perform the motion amplitude estimation before the gamma block 1.
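For illustration, a direct transcription of the gamma formula above; the 12-bit output range (0 to 4095) follows from the formula, while the 8-bit input (MAX = 255) is an assumption.

```python
def gamma_block(value, gamma=2.2, max_in=255, max_out=4095):
    """Out = 4095 * (Input / MAX) ** gamma, rounded to the 12-bit output grid."""
    return int(round(max_out * (value / max_in) ** gamma))
```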


In any case, motion amplitude information is mandatory for each incoming pixel. If there is no motion amplitude estimation inside the PDP IC, external motion information must be available (e.g. the output of a motion estimation used in the front-end part for up-conversion purposes).


The motion amplitude information is sent to a coding selection block 3, which will select the appropriate GCC coding to be used or which will generate the appropriate coding to be used for the current pixel. Based on this selected or generated mode, the rescaling LUT 4 and coding LUT 5 are updated. The rescaling unit 4 performs the GCC, whereas the coding unit 5 performs the usual sub-field coding. Between them, the dithering block 6 will add more than 4 bits of dithering to correctly render the video signal. It should be noticed that the output of the rescaling block 4 is p×8 bits where p represents the total number of GCC code words used (from 255 down to 38 in our example). The 8 additional bits are used for dithering purposes in order to have only p levels after dithering for the encoding block 5. The encoding block 5 delivers 3×16 bit sub-field data to the plasma display panel 7. All bits and dithering relevant numbers are only given as an example (more than 16 sub-fields can be available, more than 4 bits of dithering is also possible).
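A very simplified software model of this chain, only to make the data flow concrete; real hardware uses LUTs and a structured multi-bit dithering pattern, and all names, bit widths and the nearest-level handling below are assumptions (a crude random dither stands in for block 6).

```python
import bisect
import random

def encode_pixel(value_12bit, levels, subfield_codes):
    """levels: sorted list of the p GCC levels of the selected mode (12-bit scale).
    subfield_codes: dict mapping each level to its sub-field code word."""
    i = bisect.bisect_right(levels, value_12bit)        # rescaling (block 4)
    if i == 0:
        chosen = levels[0]
    elif i == len(levels):
        chosen = levels[-1]
    else:
        lo, hi = levels[i - 1], levels[i]
        frac = (value_12bit - lo) / (hi - lo)           # fractional part kept for dithering
        chosen = hi if random.random() < frac else lo   # dithering (block 6)
    return subfield_codes[chosen]                       # sub-field coding (block 5)
```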


A further improvement of the motion coding can be achieved by taking texture information into account. Such texture information relates to a skin tone texture, for example. The skin tone texture is very sensitive to motion rendition. Therefore, a more hierarchical decision concept could be used to improve the final picture quality, as described with reference to FIG. 11.


Accordingly, skin tone areas and normal areas are handled differently (cf. European Patent Application 04 291 674.2). In the case of skin tone, even static areas could be handled with a coding more optimized for motion compared to normal areas. As illustrated in FIG. 11, the input data before or after the gamma correction are analysed for a skin tone texture. If a skin tone is detected, codes with a lower number of levels are generally used (94 levels even for static pictures and 38 levels for fast motion pixels). Otherwise, if no skin tone is detected, codes with a higher number of levels are used (255 levels for static pixels and 54 for fast motion pixels).
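A minimal sketch of this decision, using only the level counts quoted above as an example; the binary static/fast classification is of course coarser than the real multi-mode selection.

```python
def levels_for_pixel(skin_detected, fast_motion):
    """Example level counts from the description: skin areas always use the
    more motion-optimized codes, normal areas keep more levels."""
    if skin_detected:
        return 38 if fast_motion else 94
    return 54 if fast_motion else 255
```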


In any case, the information of motion should have more impact on skin tone areas than on normal areas.


A possible implementation is either to use two different sets of multiple codes, which would however increase the on-chip memory too much if LUTs are used, or to use a transformation of the motion amplitude in the case of skin tone.


Such a transformation formula is given as follows:









|V|′ = a × |V| + b   if skin detected
|V|′ = |V|           else









where |V| represents the original motion amplitude and |V|′ the transformed amplitude used for the coding selection. The values a and b are correction coefficients used for skin areas. When both textures should have the same coding in static areas, b is chosen to be equal to 0.
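A direct sketch of this transformation; the coefficient values are not specified in the description and are therefore left as parameters.

```python
def transform_motion_amplitude(v, skin_detected, a, b):
    """|V|' = a * |V| + b when skin is detected, otherwise |V| is unchanged.
    a and b are the correction coefficients for skin areas; with b = 0 both
    textures share the same coding in static areas."""
    return a * v + b if skin_detected else v
```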

Claims
  • 1. Method for processing video data for display on a display device having a plurality of luminous elements corresponding to the pixels of a picture including the steps of providing a GCC code for coding video input data, evaluating or providing a motion amplitude of a picture or a part of the picture, providing at least one sub-set code of said GCC code, coding the video data with said GCC code or said at least one sub-set code depending on said motion amplitude.
  • 2. Method according to claim 1, wherein said motion amplitude is evaluated on the basis of the difference of two pictures or two corresponding parts of pictures.
  • 3. Method according to claim 1, wherein several sub-set codes with mutually different numbers of coding levels are provided and the more motion said motion amplitude indicates, the lower the number of coding levels of that sub-set code being used for coding is.
  • 4. Method according to claim 1, wherein said GCC code and said at least one sub-set code are stored in tables in a memory.
  • 5. Method according to claim 1, wherein said at least one sub-set code is generated for each pixel.
  • 6. Method according to claim 1, wherein a texture value within a picture or a part of a picture is determined and depending additionally on the determined texture value the code for coding the video data is varied.
  • 7. Method according to claim 6, wherein said texture value is a skin tone value and the code is varied by multiplying a value of said motion amplitude by a factor depending on the skin tone value, said value of said motion amplitude being used for generating or selecting the code used.
  • 8. Method according to claim 1, wherein a distance between the gravity center of a code word and a pre-given GCC curve is determined for each code word and wherein the code for coding said video data is selected on the basis of said distance.
  • 9. Apparatus for processing video data for display on a display device having a plurality of luminous elements corresponding to the pixels of a picture including coding means for coding video input data by means of a GCC code, the coded video data being usable for controlling said display device,
  • 10. Apparatus according to claim 9 including motion detection means for providing motion amplitude about said picture or said part of picture to said coding means.
  • 11. Apparatus according to claim 9 including texture measurement means for measuring a texture value, preferably a skin tone value within a picture or a part of a picture, so that said coding means is capable of varying the code used for coding the video data depending additionally on the determined texture value.
Priority Claims (1)
Number Date Country Kind
06290589.8 Apr 2006 EP regional