The present invention relates to a device for digital display of a video image using time-division modulation to display grey levels on the screen. The invention applies most particularly to projection and rear-projection appliances, televisions or monitors.
Among display devices, digital display devices are devices comprising one or more cells which can take a finite number of illumination values. Currently, this finite number of values is equal to two and corresponds to an on state and an off state of the cell. To obtain a larger number of grey levels, it is known to temporally modulate the state of the cells over the video frame so that the human eye, by integrating the pulses of light resulting from these changes of state, can detect intermediate grey levels.
Among the known digital display devices, there are those comprising a digital micromirror matrix or DMD matrix (DMD standing for Digital Micromirror Device). A DMD matrix is a component, conventionally used for video-projection, which is formed of a chip on which are mounted several thousand microscopic mirrors or micromirrors which, controlled on the basis of digital data, serve to project an image onto a screen, by pivoting in such a way as to reflect or to block the light originating from an external source. The technology based on the use of such micromirror matrices and consisting in a digital processing of light is known as “Digital Light Processing” or DLP.
The invention will be more particularly described within the framework of digital display devices comprising a digital micromirror matrix without this implying any limitation whatsoever on the scope of the invention to this type of device. The invention can for example also be applied to digital devices of the LCOS type.
In DLP technology, one micromirror per image pixel to be displayed is provided. The micromirror exhibits two operating positions, namely an active position and a passive position, on either side of a quiescent position. In the active position, the micromirror is tilted by a few degrees (around 10 degrees) with respect to its quiescent position so that the light originating from the external source is projected onto the screen through a projection lens. In the passive position, the micromirror is tilted by a few degrees in the opposite direction so that the light originating from the external source is directed towards a light absorber. A cell in the active (respectively passive) position corresponds to a pixel of the image in an on (respectively off) state. The periods of illumination of a pixel therefore correspond to the periods during which the associated micromirror is in the active position.
Thus, if the light supplied to the micromirror matrix is white light, the pixels corresponding to the micromirrors in the active position are white and those corresponding to the micromirrors in the passive position are black. The intermediate grey levels are obtained by time-division modulation of the light projected onto the screen, corresponding to a PWM modulation (PWM standing for Pulse Width Modulation). Specifically, each micromirror is capable of changing position several thousand times a second. The human eye does not detect these changes of position, nor the light pulses which result therefrom, but integrates these pulses together and therefore perceives the average light level. The grey level detected by the human eye is therefore directly proportional to the time for which the micromirror is in the active position in the course of a video frame.
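The following sketch illustrates this temporal integration; it is an illustrative example and not taken from the device itself: the 20 ms frame duration corresponds to the 50 Hz frame rate implied by the colour wheel described further on, and the pulse durations are hypothetical values.

```python
# Illustrative sketch: the grey level perceived by the eye is the
# time-weighted average of the on/off states of the micromirror over one frame.

def perceived_grey_level(on_durations_us, frame_duration_us=20000, max_level=255):
    """Grey level integrated by the eye over one video frame.

    on_durations_us: durations (in microseconds) during which the micromirror
    is in the active position during the frame (hypothetical values).
    """
    active_time = sum(on_durations_us)
    return round(max_level * active_time / frame_duration_us)

# A micromirror active for half of a 20 ms frame is perceived as mid-grey.
print(perceived_grey_level([5000, 5000]))  # -> 128
```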
To obtain 256 grey levels, the video frame is for example divided into eight consecutive sub-periods of different weights. These sub-periods are commonly called subfields. During each subfield, the micromirrors are either in an active position, or in a passive position. The weight of each subfield is proportional to its duration.
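As an illustration, a minimal sketch assuming eight binary-weighted subfields whose durations are proportional to the weights 1, 2, 4, ..., 128 (the ordering is an assumption): the on/off state of a cell during each subfield is then simply the binary decomposition of its grey level.

```python
# Minimal sketch: 256 grey levels from 8 subfields whose durations are
# proportional to the binary weights 1, 2, 4, ..., 128 (assumed ordering).

SUBFIELD_WEIGHTS = [1, 2, 4, 8, 16, 32, 64, 128]

def subfield_states(grey_level):
    """On/off state of the cell during each of the 8 subfields."""
    return [(grey_level >> bit) & 1 for bit in range(8)]

# The duration-weighted sum of the on-states restores the original grey level.
assert sum(w * s for w, s in zip(SUBFIELD_WEIGHTS, subfield_states(173))) == 173
```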
Furthermore, as in all video appliances, the displaying of a colour image requires the displaying of three images: one red, one blue and one green. In projectors with a single DMD matrix, these three images are displayed sequentially. Such projectors comprise for example a rotating wheel comprising red, green and blue filters through which the white light originating from the source of the projector is filtered before being transmitted to the DMD matrix. The DMD matrix is thus supplied sequentially with red, green and blue light during the video frame. The rotating wheel comprises for example six filters (two red, two green, two blue) and rotates at a frequency of 150 revs/second, i.e. three revolutions per video frame. The digital data of the R, G and B components of the video image are supplied to the DMD matrix in a manner which is synchronized with the red, green and blue light so that the R, G and B components of the image are displayed with the appropriate light. The video frame can therefore be divided into 18 time segments, 6 for each colour, as illustrated in
These digital display devices exhibit problems related to the temporal integration of the illumination periods. A contouring problem appears in particular when an object moves between two consecutive images. This problem, well known to the person skilled in the art, is manifested by the appearance of darker or lighter bands on grey level transitions which are normally almost imperceptible.
To limit these contouring effects, it is possible to employ so-called incremental coding of the grey levels, as is described in French patent application No. 02/03141 filed on 7 Mar. 2002.
According to this coding, the cells of the digital display device change state (on or off) at most once during each segment of the video frame. In the case of a DMD matrix, this implies that the micromirrors of the DMD matrix change position at most once during each segment of the video frame. Thus, if a micromirror is in an active position at the start of a segment and switches to a passive position in the course of this segment, it remains in this position until the end of the segment. More exactly, if a micromirror of the DMD matrix is in an active position at the start of a time segment and switches to a passive position at the start of a subfield of this time segment, it retains this position during the remaining subfields of the time segment.
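A minimal sketch of this constraint, assuming for illustration a code in which the micromirror starts each segment in the active position and switches off after a number of subfields given by its per-segment video level:

```python
# Hedged sketch of incremental coding: within one time segment of N subfields
# the micromirror changes position at most once, so the only admissible
# patterns are an initial run of one state followed by the opposite state.

def incremental_pattern(on_subfields, n_subfields):
    """Active during the first `on_subfields` subfields of the segment,
    then passive until the end of the segment (at most one transition)."""
    assert 0 <= on_subfields <= n_subfields
    return [1] * on_subfields + [0] * (n_subfields - on_subfields)

# For a segment of N subfields, only the N + 1 patterns below are possible.
N = 5
displayable_patterns = [incremental_pattern(k, N) for k in range(N + 1)]
```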
The main advantage of this coding is that it does not create any “time holes” in the segment, the said holes being generators of disturbances during the temporal integration. A time hole designates an “on” subfield (subfield during which the pixel exhibits a non-zero grey level) between two off subfields (subfields during which the pixel exhibits a zero grey level), or vice versa.
However, this coding allows the display of only a restricted number of possible grey levels, namely, for a segment comprising N subfields, a maximum of N+1 grey level values. Nevertheless, techniques of dithering or of noising, which are well known to the person skilled in the art, make it possible to compensate for this small number of grey levels. The principle of the “dithering” technique consists in decomposing each non-displayable grey level into a combination of displayable grey levels which, through temporal integration (these grey levels are displayed on several successive images) or through spatial integration (these grey levels are displayed in an area of the image encompassing the relevant pixel), restore on the screen a grey level close to the sought-after non-displayable grey level.
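By way of illustration, a sketch of a simple ordered spatial dither; this is only one possible variant and not necessarily the technique used in the device. A non-displayable level is approximated by alternating, over a small pixel neighbourhood, between the two nearest displayable levels so that the spatial average is preserved.

```python
# Illustrative ordered-dither sketch (assumed variant): pixel (x, y) receives
# one of the two displayable levels bracketing the target level, chosen so
# that the average over a 2x2 neighbourhood approximates the target level.

def dither_level(target, displayable_levels, x, y):
    lower = max(l for l in displayable_levels if l <= target)
    upper = min(l for l in displayable_levels if l >= target)
    if lower == upper:
        return lower
    fraction = (target - lower) / (upper - lower)
    # 2x2 threshold matrix, an assumption made for this example.
    threshold = [[0.25, 0.75], [1.00, 0.50]][y % 2][x % 2]
    return upper if fraction >= threshold else lower
```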
A digital display device implementing this incremental coding is represented in
The present invention proposes another way of saving the video image in the image memory of the device.
According to the invention, it is proposed that, for each subfield, information identifying the pixels changing state be saved in the image memory, rather than saving video levels.
Also, the invention relates to a digital display device serving to display a video image during a video frame comprising a plurality of consecutive subfields distributed within at least two separate identical time segments, each pixel of the video image being able selectively to take an on state or an off state during each subfield of the said video frame, the said device comprising
If the video frame comprises N subfields, the image memory comprises, for example, N memory areas each associated with a subfield and each memory area saves the coordinates of the pixels of the video image changing state during the subfield associated therewith.
If each time segment comprises P subfields arranged in the same order and if each pixel of the video image changes state during the subfields of same order in the said at least two time segments, the image memory advantageously comprises P memory areas each associated with one of the P subfields and saving the coordinates of the pixels of the video image changing state during the associated subfield. Each memory area of the said image memory is then read once per time segment to display the video image.
Other characteristics and advantages of the invention will become apparent from reading the detailed description which follows and which is given with reference to the appended drawings, in which:
According to the invention, it is envisaged that information identifying, for each subfield, the pixels changing state be saved in the image memory, instead of saving video levels. This image memory therefore now saves only information pertaining to pixels of the video image changing state in the course of the time segments of the video frame.
The image memory is divided into as many memory areas as there are subfields in the 18 time segments (6 per colour). Each memory area is associated with a subfield and stores the row and column coordinates of the pixels of the image changing state at the start of this subfield.
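A minimal data-structure sketch of this organisation; the dictionary-of-lists representation, the names and the figure of ten subfields per segment are assumptions made for illustration.

```python
# One memory area per subfield of the 18 time segments (6 per colour); each
# area holds the (row, column) coordinates of the pixels changing state at
# the start of that subfield.

SEGMENTS_PER_FRAME = 18      # 6 time segments per colour
SUBFIELDS_PER_SEGMENT = 10   # assumed value

image_memory = {
    (segment, subfield): []  # list of (row, column) pairs
    for segment in range(SEGMENTS_PER_FRAME)
    for subfield in range(SUBFIELDS_PER_SEGMENT)
}

def record_state_change(segment, subfield, row, column):
    """Store the coordinates of a pixel changing state at this subfield."""
    image_memory[(segment, subfield)].append((row, column))
```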
The content of an image memory according to the invention is shown in
The size of the memory areas Zi can be fixed. If one considers images comprising 768×1024 pixels, the coordinates of the pixels being moreover coded on 20 bits (10 bits for the row position and 10 bits for the column position), it is then necessary to envisage memory areas having a size equal to 1024×768×20=15.7 megabits. If the image is displayed with 60 subfields (ten subfields per time segment), for each of the three colours R, G, B, the total size required for the image memory is then equal to 1024×768×20×60×3=2.8 gigabits.
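The sizes quoted above can be checked with the following arithmetic, which uses no figures beyond those given in the text:

```python
# 768 x 1024 pixels, 20 bits per coordinate pair (10 bits row + 10 bits column).
ROWS, COLUMNS, BITS_PER_COORDINATE = 768, 1024, 20
area_size_bits = ROWS * COLUMNS * BITS_PER_COORDINATE
print(area_size_bits / 1e6)      # ~15.7 megabits per memory area

SUBFIELDS_PER_COLOUR, COLOURS = 60, 3
total_size_bits = area_size_bits * SUBFIELDS_PER_COLOUR * COLOURS
print(total_size_bits / 1e9)     # ~2.8 gigabits for the whole image memory
```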
This device comprises an incremental coding module 10 in accordance with that of
The video levels emanating from the incremental coding module 10 are subsequently supplied to a calculation module 20 responsible for generating, for each pixel (represented by its video level) of the image in the stream of video levels, its row and column coordinates as a function of its position in the said stream and for defining at least one address at which to record them in the image memory. The pixel coordinates are recorded several times in the memory if the relevant pixel changes state during several time segments of the video frame. To calculate the address at which to record, for each time segment, the coordinates of a given pixel P, the module 20 determines the subfield in the course of which the pixel P changes state (this subfield is dependent on the video level of the pixel P) and selects an unused address in the memory area associated with this subfield.
For example, if the pixel P is the third pixel of the video stream to change state during the third subfield of the first time segment, the module 20 assigns to it, in the memory area associated with this subfield, an address corresponding to the third memory location of the area.
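A hedged sketch of this addressing step; the helper switching_subfield() is hypothetical, the actual mapping from video level to switching subfield depending on the incremental coding used.

```python
# Sketch of module 20: for each pixel of the incoming stream, derive the
# subfield at which it switches from its video level, then append its
# coordinates at the next free location of the corresponding memory area
# (the n-th pixel to switch during a subfield occupies the n-th location).

def switching_subfield(per_segment_level, subfields_per_segment):
    """Hypothetical mapping from the per-segment video level to the subfield
    at which the pixel changes state."""
    return min(per_segment_level, subfields_per_segment - 1)

def record_frame(video_levels, image_width, subfields_per_segment, image_memory):
    """video_levels: stream of per-segment levels in raster order.
    image_memory: dict mapping a subfield index to a list of coordinates."""
    for index, level in enumerate(video_levels):
        row, column = divmod(index, image_width)
        subfield = switching_subfield(level, subfields_per_segment)
        image_memory.setdefault(subfield, []).append((row, column))
```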
The coordinates of the pixels changing state during the relevant video frame are thus recorded in the image memory 12 at the addresses determined by the module 20.
The image memory 12 is a very fast RAM, for example an SDRAM. It is read area by area so as to construct, for each subfield, a binary plane in a read buffer circuit 21. For each subfield, the read buffer circuit 21 creates a binary plane from the pixel coordinates recorded in the memory area associated with the relevant subfield. The circuit 21 sets for example to 1 the bits of the binary plane whose coordinates are present in the memory area read. The other bits of the binary plane do not change state. It should be noted that, at the start of the time segments, all the bits of the binary plane are in this case at zero.
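A minimal sketch of this read step; the array shape and the names are assumptions, and, following the example above, a change of state corresponds here to a bit being set to 1.

```python
# Sketch of the read buffer circuit 21: the binary plane is reset at the start
# of each time segment, and for each subfield only the bits whose coordinates
# are listed in the corresponding memory area are set; the others keep their
# previous state.

def start_segment(rows, columns):
    """All bits of the binary plane are at zero at the start of a segment."""
    return [[0] * columns for _ in range(rows)]

def apply_subfield(binary_plane, area_coordinates):
    """Set to 1 the bits listed in the memory area read for this subfield."""
    for row, column in area_coordinates:
        binary_plane[row][column] = 1
    return binary_plane
```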
The binary planes of the various subfields of the various time segments are subsequently supplied to the addressing mechanism 13 of the DMD matrix 14 to display the video image.
It is possible to operate the device with an image memory 12 of reduced size. To do this, it is necessary to determine, prior to recording the pixel coordinates in the image memory 12, the number of pixels to be recorded in each of the areas of the image memory 12. It will then be possible to determine the memory size required for each of them. This embodiment is illustrated by
In this embodiment, a module 30 for calculating the occurrences of video levels is inserted between the incremental coding module 10 and the module 20 for generating pixel coordinates and memory addresses. The module 30 is responsible for calculating the number of occurrences of each video level in the stream of data received during a video frame. These numbers of occurrences are used in the module 20 to calculate, for each subfield, the number of pixels which change state and to deduce therefrom the memory size of each area. It should be noted that pixels having different video levels may change state at the start of the same subfield of a given time segment (they will switch at different subfields in at least one of the other time segments). Each memory area may therefore contain the coordinates of pixels not having the same video level. The module 20 then totals the number of occurrences of these various video levels to determine the size of the memory area.
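A hedged sketch of this sizing step; level_to_subfield() is a hypothetical mapping standing for the incremental coding, and the cost of 20 bits per coordinate pair is the figure given in the text.

```python
# Module 30 builds a histogram of video levels over one frame; module 20 then
# totals the occurrences of all levels switching at the same subfield to
# obtain the size of the corresponding memory area.

from collections import Counter

def area_sizes_in_bits(video_levels, level_to_subfield, bits_per_coordinate=20):
    occurrences = Counter(video_levels)                 # role of module 30
    sizes = {}
    for level, count in occurrences.items():            # role of module 20
        subfield = level_to_subfield(level)
        sizes[subfield] = sizes.get(subfield, 0) + count * bits_per_coordinate
    return sizes
```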
Moreover, the stream of video levels which emanates from the incremental coding module 10 is still supplied to the calculation module 20, but with a delay of one video frame. This delay is effected by a delay module 31 placed between the module 10 and the module 20. The video data output by the module 10 has to be delayed by one frame so that the numbers of occurrences received by the module 20 correspond to the video data which it receives.
For the calculation of the memory addresses in the module 20, the device of
To limit the bandwidth of the image memory 12, the 6 time segments of the video frame advantageously comprise, for each pixel, the same item of video information (the video level is distributed uniformly over the 6 time segments). Each pixel then changes state in the course of the same subfield in the 6 time segments. The pixel coordinates can then be written just once to the image memory during the video frame and read 6 times (once per time segment). In this embodiment, the image memory 12 then comprises only P memory areas, P being the number of subfields per time segment for the three colours.
The bandwidth required for the operation of reading the image memory 12 is then equal to:
BPread=1024×768×20×3×6×50≈14.2 Gbit/s (20 bits of coordinates per pixel, 3 colours, 6 readings per video frame, 50 frames per second).
For the write operation, it is 6 times smaller, i.e.:
BPwrite=1024×768×20×3×50≈2.4 Gbit/s.
I.e. a total bandwidth BPtotal equal to 16.5 Gbit/s.
This value is very high but can be further reduced. Specifically, the row and column coordinates do not have to be written for all the pixels. Statistically, even if the image comprises a large number of different video levels, there is a very high probability of there being several pixels per row having the same video level. It is therefore proposed that the row coordinate be written to the memory area once only and that the column coordinates of the pixels of this row having the same video level be grouped with it, as in the example hereinbelow:
Row1:column20/column21/column22/
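A short illustrative sketch of this grouped encoding; the dictionary representation and the 10-bit cost per index are assumptions consistent with the 10-bit row and column positions mentioned earlier.

```python
# Within a memory area, the row coordinate is written once, followed by the
# column coordinates of all the pixels of that row switching at this subfield.

def group_by_row(area_coordinates):
    """area_coordinates: list of (row, column) pairs recorded for one area.
    Returns {row: [columns...]}, e.g. {1: [20, 21, 22]} for the example above."""
    grouped = {}
    for row, column in area_coordinates:
        grouped.setdefault(row, []).append(column)
    return grouped

def encoded_size_bits(grouped, bits_per_index=10):
    """One 10-bit row index per row, plus one 10-bit column index per pixel."""
    return sum((1 + len(columns)) * bits_per_index for columns in grouped.values())
```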
In the case where the 768 row positions are written to the 60 areas of the memory, we then obtain:
I.e. a total bandwidth BPtotal equal to 10.3 Gbit/s.
It is also conceivable to act on the number of possible video levels or on the number of subfields per time segment to further decrease the bandwidth.
Moreover, the duration of the subfields associated with the areas of the image memory is advantageously defined by a so-called inverse gamma correction curve. The inverse gamma correction designates the correction to be applied to an image from a camera in order for this image to be displayed correctly on the screen of a linear digital display device. Specifically, in contradistinction to cathode ray tubes, the relationship between the video levels at the input (original image) and the light levels at the output of the digital display device is in general linear. Now, given that a gamma correction is carried out on the source image at the camera level, an inverse gamma correction must be applied to the image from the camera in order to obtain correct display of the image on the screen. This inverse gamma correction is therefore implemented by acting on the duration of the subfields associated with the various areas of the image memory.
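A hedged sketch of how such a curve could set the subfield durations; the exponent value, the normalisation and the 1.1 ms segment duration are assumptions made for illustration.

```python
# The cumulative on-time after k subfields is made proportional to
# (k / N) ** gamma, so that the linear display reproduces the light levels
# expected after the gamma correction applied at the camera.

def subfield_durations_us(n_subfields, segment_duration_us, gamma=2.2):
    cumulative = [segment_duration_us * (k / n_subfields) ** gamma
                  for k in range(n_subfields + 1)]
    return [cumulative[k + 1] - cumulative[k] for k in range(n_subfields)]

# Example: ten subfields of increasing duration over a 1.1 ms time segment.
print(subfield_durations_us(10, 1100))
```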
The shape of an inverse gamma correction curve is shown in