The image display device DP1 is a projector. In the image display device DP1, light emitted from a light source unit 110 is converted by the liquid crystal panel 100 into light for displaying an image (image light). This image light is then imaged onto a projection screen SC by a projection optical system 120, and the image is projected onto the projection screen SC. The liquid crystal panel driver 70 can also be regarded not as part of an image data processing device, but rather as a block included within the image display device together with the liquid crystal panel 100. Each component of the image display device DP1 is described in turn below.
Through loading the control program and processing conditions recorded in the memory 90, the CPU 80 controls the actions of each block.
The signal conversion component 10 constitutes a processing circuit for converting image signals input from an external source into signals which can be processed by the memory write controller 30. For example, in cases in which image signals input from an external source are analog image signals, the signal conversion component 10 synchronizes with the synchronous signal included within the image signal, and converts the image signal into a digital image signal. Additionally, in cases in which image signals input from an external source are digital image signals, the signal conversion component 10 transforms the image signal into a form of signal which can be processed by the memory write controller 30, according to the type of image signal.
The digital image signal output from the signal conversion component 10 contains the video data WVDS of each frame. The memory write controller 30 sequentially writes the video data WVDS into the frame memory 20, in sync with the write synchronous signal WSNK corresponding to the image signal. Further, a write vertical synchronous signal, a write horizontal synchronous signal, and a write clock signal are included within the write synchronous signal WSNK.
The memory read-out controller 40 generates a read-out synchronous signal RSNK based on read control conditions provided from the memory 90 via the CPU 80. In sync with the read-out synchronous signal RSNK, the memory read-out controller 40 reads the image data stored in the frame memory 20. The memory read-out controller 40 then outputs the read-out video data signal RVDS and the read-out synchronous signal RSNK to the driving video data generator 50.
Further, a read vertical synchronous signal, a read horizontal synchronous signal, and a read clock signal are included within the read-out synchronous signal RSNK. In addition, the frequency of the read vertical synchronous signal is set to double the frequency (frame rate) of the write vertical synchronous signal WSNK of the image signal written into the frame memory 20. Therefore, the memory read-out controller 40, in sync with the read-out synchronous signal RSNK, reads the image data stored in the frame memory 20 twice within one frame cycle of the image signal written into the frame memory 20, and outputs the data to the driving video data generator 50.
Data which is read the first time from the frame memory 20 by the memory read-out controller 40 is called first field data. Data which is read the second time from the frame memory 20 by memory read-out controller 40 is called second field data. Image signals within the frame memory 20 are not overwritten between first and second reads; therefore, the first field data and the second field data are the same.
The driving video data generator 50 is supplied with read-out video data signal RVDS and read-out synchronous signal RSNK from memory read-out controller 40. In addition, the driving video data generator 50 is supplied with a mask parameter signal MPS from the movement detecting component 60. The driving video data generator 50 then generates a driving video data signal DVDS based on the read-out video data signal RVDS, the read-out synchronous signal RSNK, and the mask parameter signal MPS; and outputs this to the liquid crystal panel driver 70. The driving video data signal DVDS is a signal used to drive the liquid crystal panel 100 via the liquid crystal panel driver 70. The composition and actions of the driving video data generator 50 are described further below.
The movement detecting component 60 compares each frame of video data (hereinafter also called "frame video data") WVDS, sequentially written by the memory write controller 30 into the frame memory 20 in sync with the write synchronous signal WSNK, with the read-out video data RVDS read by the memory read-out controller 40 from the frame memory 20 in sync with the read-out synchronous signal RSNK. Based on the frame video data WVDS and the read-out video data RVDS, the movement detecting component 60 detects the movement between the two images and calculates the amount of movement. Note that the read-out video data RVDS constitutes the video data one frame prior to the frame video data WVDS targeted for comparison. The movement detecting component 60 determines the mask parameter signal MPS according to the calculated amount of movement, and outputs the mask parameter signal MPS to the driving video data generator 50. The composition and actions of the movement detecting component 60 are described further below.
The liquid crystal panel driver 70 converts the driving video data signal DVDS supplied from the driving video data generator 50 into a signal that can be supplied to liquid crystal panel 100, and supplies this signal to the liquid crystal panel 100.
The liquid crystal panel 100 emits image light, according to the driving video data signal supplied from the liquid crystal panel driver 70. As stated earlier, this image light is projected onto the projection screen SC, and the image is displayed.
The movement amount detecting component 62 divides the frame video data (target data) WVDS written into the frame memory 20 and the frame video data (reference data) read from the frame memory 20, respectively, into rectangular image blocks of p×q pixels (p and q being integers equal to or greater than 2). The movement amount detecting component 62 then obtains the image movement vector for each block pair, based on the corresponding blocks of these two frames of image data. The magnitude of this movement vector constitutes the amount of movement of each block pair, and the sum total over all block pairs constitutes the amount of image movement between the two frames.
The movement vector for each block pair can be easily obtained by, for example, obtaining the amount of movement of the center-of-gravity coordinate of the image data (brightness data) included within the block. "Pixel/frame" may be used as the unit for the amount of movement of the center-of-gravity coordinate. Because various general methods may be used to obtain the movement vector, their detailed explanation is omitted here. The obtained amount of movement is supplied as the movement amount data QMD from the movement amount detecting component 62 to the mask parameter determining component 66.
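To make the centroid-based estimate concrete, the following is a minimal sketch in Python/NumPy, not the device's actual circuit: the block size, array layout, and function names are illustrative assumptions, and the brightness data are assumed to be available as 2-D arrays.

```python
import numpy as np

def block_motion(target, reference, p=16, q=16):
    """Estimate per-block movement as the shift of the brightness centroid.

    target, reference: 2-D arrays of brightness data for two frames.
    Returns the summed magnitude of the per-block movement vectors
    (units: pixel/frame), i.e. the movement amount QMD described above.
    """
    h, w = target.shape
    total = 0.0
    ys, xs = np.mgrid[0:p, 0:q]               # pixel coordinates inside a block
    for y0 in range(0, h - p + 1, p):
        for x0 in range(0, w - q + 1, q):
            a = target[y0:y0 + p, x0:x0 + q].astype(np.float64)
            b = reference[y0:y0 + p, x0:x0 + q].astype(np.float64)
            if a.sum() == 0 or b.sum() == 0:
                continue                       # empty block: no centroid defined
            # brightness-weighted center of gravity of each block
            ca = np.array([(ys * a).sum(), (xs * a).sum()]) / a.sum()
            cb = np.array([(ys * b).sum(), (xs * b).sum()]) / b.sum()
            total += np.linalg.norm(ca - cb)   # movement of this block pair
    return total
```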
The mask parameter determining component 66 determines the value of the mask parameter MP, according to the movement amount data QMD supplied from the movement amount detecting component 62. Data showing the determined mask parameter MP value is output as mask parameter signal MPS from the movement detecting component 60 to the driving video data generator 50 (see
Table data is stored in advance within the mask parameter determining component 66. The table data relates a plurality of movement amounts Vm of the image to normalized values of the mask parameter MP. These table data are read from the memory 90 by the CPU 80, and are supplied to the mask parameter determining component 66 of the movement detecting component 60 (see
According to the table data, in cases where the movement amount Vm is less than a lower threshold value, the mask parameter MP value is 0. In such cases, as stated hereafter, mask data showing achromatic colors are generated.
On the other hand, in cases where the movement amount Vm exceeds the threshold value Vlmt2, the mask parameter MP value is 1. As stated hereafter, mask data showing the complementary colors of the colors of each pixel of the read-out video data signal RVDS1 are then generated.
Moreover, according to the table data, in cases where the movement amount Vm lies between the two threshold values, the mask parameter MP takes a value between 0 and 1 according to the movement amount.
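The exact table data appear in a figure not reproduced here; the sketch below merely illustrates one mapping consistent with the text, assuming MP is 0 below a lower threshold, 1 above Vlmt2, and transitions linearly in between. The threshold values and the linear transition are assumptions.

```python
def mask_parameter(vm, v_low=2.0, vlmt2=8.0):
    """Map a movement amount Vm to a normalized mask parameter MP in [0, 1].

    v_low and vlmt2 are illustrative threshold values; the actual table
    data are read from the memory 90 and may use a different shape.
    """
    if vm <= v_low:
        return 0.0                           # little movement: achromatic mask
    if vm >= vlmt2:
        return 1.0                           # large movement: full complementary mask
    return (vm - v_low) / (vlmt2 - v_low)    # assumed linear transition
```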
Further, in the present embodiment, the mask parameter determining component 66 is constituted as a portion of the movement detecting component 60 (see
The driving video data generating controller 510 is supplied with the read-out synchronous signal RSNK from the memory read-out controller 40, as well as with the moving area data signal MAS from the movement detecting component 60 (see
The driving video data generating controller 510 outputs a latch signal LTS, a selection control signal MXS, and an enable signal MES, based on a read vertical synchronous signal VS, a read horizontal synchronous signal HS, a read clock DCK, and a field selection signal FIELD contained within the read-out synchronous signal RSNK, as well as the moving area data signal MAS (see bottom right portion of
The latch signal LTS is output from the driving video data generating controller 510 to the first latch component 520 and the second latch component 540, and controls their actions.
The selection control signal MXS is output from the driving video data generating controller 510 to the multiplexer 550, and controls the actions of the multiplexer 550. The selection control signal MXS indicates the positions (pattern) of the pixels within the image at which the read-out image data are to be replaced with the mask data.
The enable signal MES is output to the mask data generator 530 from the driving video data generating controller 510, and controls the actions of the mask data generator 530. In other words, the enable signal MES constitutes a signal that directs the generation and non-generation of mask data. The driving video data generating controller 510 controls the driving video data signal DVDS by means of these signals.
In addition, the field selection signal FIELD, which is received by the driving video data generating controller 510 from the memory read-out controller 40, is a signal with the following characteristics. Specifically, the field selection signal FIELD shows whether the read-out video data signal RVDS corresponds to the first field data or to the second field data (see
The first latch component 520 sequentially latches the read-out video data signal RVDS supplied from the memory read-out controller 40, according to the latch signal LTS supplied from the driving video data generating controller 510. The first latch component 520 outputs the latched read-out image data as a read-out video data signal RVDS1 to the mask data generator 530 and the second latch component 540.
The mask data generator 530 is supplied with the mask parameter signal MPS from the movement detecting component 60, with the enable signal MES from the driving video data generating controller 510, and with the read-out video data signal RVDS1 from the first latch component 520. In cases where the generation of mask data is allowed by the enable signal MES, the mask data generator 530 generates mask data based on the mask parameter signal MPS and the read-out video data signal RVDS1. The mask data generator 530 outputs the generated mask data to the second latch component 540 as a mask data signal MDS1.
The mask data show pixel values determined according to the pixel value of each pixel included within the read-out video data RVDS1. More specifically, the mask data constitute pixel values that show the complementary colors of the colors of each pixel included within the read-out video data RVDS1, or colors obtained by mixing those complementary colors with achromatic colors. Here, "pixel value" refers to the parameters that indicate the color of each pixel. In the present embodiment, the read-out video data signal RVDS1 is designed to contain color information concerning each pixel as a combination of pixel values indicating the intensity of red (R), green (G), and blue (B) (tone values 0 to 255). Below, these red (R), green (G), and blue (B) tone value combinations are referred to as "RGB tone values."
In Step S10, the mask data generator 530 first converts the RGB tone values (R, G, B) of each pixel into YCrCb tone values (Y, Cr, Cb), according to the following formulae (1) to (3).
Y=(0.29891×R)+(0.58661×G)+(0.11448×B) (1)
Cr=(0.50000×R)−(0.41869×G)−(0.08131×B) (2)
Cb=−(0.16874×R)−(0.33126×G)+(0.50000×B) (3)
Additionally, the processes of Steps S10 to S40 of
In Step S20, according to the following formulae (4) and (5), the mask data generator 530 inverts the signs of the Cr and Cb tone values obtained by formulae (1) to (3) above, thereby obtaining the tone value (Y, Crt, Cbt). The tone value (Y, Crt, Cbt) shows the complementary color of the color indicated by the tone value (Y, Cr, Cb).
Crt=−Cr (4)
Cbt=−Cb (5)
The color indicated by the tone value (Y, Crt, Cbt) constitutes a color whose red and blue color differences are opposite in sign to those of the color shown by the tone value (Y, Cr, Cb). Specifically, when the colors indicated by the tone value (Y, Crt, Cbt) and the tone value (Y, Cr, Cb) are mixed, Cr and Crt, as well as Cb and Cbt, respectively cancel one another out, and the red-green component as well as the blue-yellow component both become 0. In other words, if the colors indicated by the tone value (Y, Crt, Cbt) and the tone value (Y, Cr, Cb) are mixed, the resulting color becomes achromatic. A color with this kind of relationship relative to another color is called a "complementary color."
In Step S30, the mask data generator 530 conducts a calculation utilizing the mask parameter MP with respect to the tone value (Y, Crt, Cbt), thereby obtaining the tone value (Yt2, Crt2, Cbt2).
In the calculations conducted in Step S30, it is possible to utilize various calculations, such as, for example, multiplication, bit shift calculation, etc. In the present embodiment, multiplication (C=A×B) of the tone values Crt, Cbt by the mask parameter MP is established as the calculation conducted in Step S30. Specifically, the formulae (6) to (8) below are followed to obtain the tone value (Yt2, Crt2, Cbt2) from the tone value (Y, Crt, Cbt).
Yt2=Y (6)
Crt2=Crt×MP (7)
Cbt2=Cbt×MP (8)
In Step S40, the mask data generator 530 converts the tone value obtained in Step S30 back into RGB tone values (Rt, Gt, Bt), according to the following formulae (9) to (11).
Rt=Y+(1.40200×Crt) (9)
Gt=Y−(0.34414×Cbt)−(0.71414×Crt) (10)
Bt=Y+(1.77200×Cbt) (11)
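Combining formulae (1) to (11), a per-pixel sketch of the mask-color computation of Steps S10 to S40 might look as follows in Python. The application of the back-conversion to the scaled values (Crt2, Cbt2) and the clamping to the 0 to 255 range are assumptions made for a self-contained example.

```python
def mask_color(r, g, b, mp):
    """Compute the mask-data RGB value for one pixel of RVDS1.

    Steps: S10 RGB->YCrCb (formulae 1-3), S20 invert Cr/Cb signs
    (formulae 4-5), S30 scale by the mask parameter MP (formulae 6-8),
    S40 convert back to RGB (formulae 9-11).
    """
    # S10: RGB -> YCrCb
    y  =  0.29891 * r + 0.58661 * g + 0.11448 * b
    cr =  0.50000 * r - 0.41869 * g - 0.08131 * b
    cb = -0.16874 * r - 0.33126 * g + 0.50000 * b
    # S20: complementary color (invert the color-difference signs)
    crt, cbt = -cr, -cb
    # S30: scale toward grey according to MP (MP=0 -> achromatic, MP=1 -> complement)
    crt2, cbt2 = crt * mp, cbt * mp
    # S40: YCrCb -> RGB (applied to the scaled values; an assumption)
    rt = y + 1.40200 * crt2
    gt = y - 0.34414 * cbt2 - 0.71414 * crt2
    bt = y + 1.77200 * cbt2
    clamp = lambda v: max(0, min(255, round(v)))   # added safeguard
    return clamp(rt), clamp(gt), clamp(bt)

# Example: a reddish pixel with MP=1 yields a cyan-ish complement.
# mask_color(200, 40, 40, 1.0)  ->  approximately (0, 136, 136)
```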
In Step S50, the mask data generator 530 outputs the data constituted by the converted RGB tone values as the mask data signal MDS1.
As described above, the mask data generator 530 conducts color conversion with regards to the read-out video data signal RVDS1, generates the mask data signal MDS1, and supplies this to the second latch component 540 (see
For example, in cases in which the value of the mask parameter MP is 0, the "red-green component" Crt2 and the "blue-yellow component" Cbt2 are both 0, according to formulae (7) and (8). Consequently, the colors of each pixel of the mask data are achromatic. In addition, in cases in which the value of the mask parameter MP is 1, Crt2=−Cr and Cbt2=−Cb, according to formulae (7) and (8). Therefore, mask data indicating the complementary colors (Y, −Cr, −Cb) of the colors of each pixel of the read-out video data signal RVDS1 are generated.
Additionally, when the mask parameter MP assumes a value greater than 0 and less than 1, the color of each pixel of the mask data possesses the same level of brightness as the color of the corresponding pixel of the read-out video data signal RVDS1. The sign of the "red-green component" of each pixel of the mask data becomes the opposite of that of the read-out video data signal RVDS1, and its absolute value becomes smaller. Likewise, the sign of the "blue-yellow component" of each pixel of the mask data becomes the opposite of that of the read-out video data signal RVDS1, and its absolute value becomes smaller. The saturation of such colors is reduced as compared with the "complementary colors" of the read-out video data signal RVDS1.
The above-described colors lie between the complementary colors of the colors of the pixels of the read-out video data signal RVDS1, and grey having a level of brightness that is the same as that of the colors of the pixels of the read-out video data signal RVDS1. Specifically, the colors of the pixels of the mask data are obtainable by mixing the complementary colors of the pixels of the read-out video data signal RVDS1 with achromatic colors of a prescribed brightness, at a predetermined proportion.
The second latch component 540 latches the read-out video data signal RVDS1 and the mask data signal MDS1 according to the latch signal LTS, and outputs these to the multiplexer 550 as a read-out video data signal RVDS2 and a mask data signal MDS2 (see
The multiplexer 550 receives read-out video data signal RVDS2 and the mask data signal MDS2 supplied from the second latch component 540. In addition, the multiplexer 550 receives the selection control signal MXS supplied from the driving video data generating controller 510. The multiplexer 550 selects either the read-out video data signal RVDS2, or the mask data signal MDS2, in accordance with the selection control signal MXS. The multiplexer 550 then generates a driving video data signal DVDS, based on the selected signal, and outputs this to the liquid crystal panel driver 70 (see
In addition, the selection control signal MXS is generated by the driving video data generating controller 510, based on the field selection signal FIELD, the read-out vertical synchronous signal VS, the read-out horizontal synchronous signal HS, and the read-out clock DCK, so that the pattern of the mask data substituted for the read-out image data constitutes a predetermined mask pattern as a whole (see
At this time, as mentioned previously, the frame video data stored in the frame memory 20 are read twice at the field cycle Tfi, whose cycle speed is double that of the frame cycle Tfr (see
Then, in the driving video data generator 50, the driving video data are generated based on the respective read-out image data (
The read-out image data FI1 (N) of the first field corresponding to the #N frame and read-out image data FI2 (N+1) of the second field corresponding to the #(N+1) frame constitute the driving video data DFI1 (N) and DFI2 (N+1) as is (see the columns on the left and right edges of
On the other hand, the read-out image data FI2 (N) and FI1 (N+1), which lie on the boundary of the #N and #(N+1) frames, are partly replaced with the mask data (see the row (b) of
More specifically, the even-numbered horizontal lines of the read-out image data FI2 (N) are replaced with the mask data to generate the driving video data DFI2 (N), and the odd-numbered horizontal lines of the read-out image data FI1 (N+1) are replaced with the mask data to generate the driving video data DFI1 (N+1) (shown by the crosshatching in the row (c) of
Alternatively, the odd-numbered horizontal lines of the read-out image data FI2 (N) may be replaced with the mask data to generate the driving video data DFI2 (N), and the even-numbered horizontal lines of the read-out image data FI1 (N+1) may be replaced with the mask data to generate the driving video data DFI1 (N+1).
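As a sketch of this line-parity replacement, the selection that the multiplexer 550 performs under the selection control signal MXS, the following assumes whole frames held as NumPy arrays; in the actual device the selection is made pixel by pixel as the data stream passes through.

```python
import numpy as np

def apply_line_mask(readout, mask, even_lines=True):
    """Replace every other horizontal line of the read-out image with mask data.

    readout, mask: (H, W, 3) arrays; even_lines selects which parity is
    replaced (even lines for DFI2(N), odd lines for DFI1(N+1)).
    """
    out = readout.copy()
    start = 0 if even_lines else 1
    out[start::2, :, :] = mask[start::2, :, :]   # line-parity multiplexer
    return out

# Complementary pattern across the frame boundary:
# dfi2_n  = apply_line_mask(fi2_n,  mask_n,  even_lines=True)
# dfi1_n1 = apply_line_mask(fi1_n1, mask_n1, even_lines=False)
```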
Further, for the sake of clarity, the image shown by the driving video data in
The video data signal DVDS (see
The image DFR (N) of driving video data DFI1 (N) constitutes the image of the frame video data FR (N) (see the left side of
As opposed to this, the image of the driving video data DFI2 (N) constitutes an image in which part of the image of the frame video data FR (N), for example the even-numbered horizontal lines, has been replaced with the image of the mask data. Likewise, the image of the driving video data DFI1 (N+1) constitutes an image in which part of the image of the frame video data FR (N+1), for example the odd-numbered horizontal lines, has been replaced with the image of the mask data.
When the moving image is reproduced, based on output of the video data signal DVDS from the driving video data generator 50 to the liquid crystal panel driver 70, the images of driving video data DFI2 (N) and of driving video data DFI1 (N+1) are consecutively displayed. As a result, the images of driving video data DFI2 (N) and of driving video data DFI1 (N+1) appear as a single synthesized image DFR (N+½) to persons viewing projection screen SC.
In the image DFR (N+½), the color of each pixel of the even-numbered horizontal lines appears as the color obtained as a result of a mixture of the color of the mask data of each pixel of the even-numbered horizontal lines of the driving video data DFI2 (N), and of the color of each pixel of the even-numbered horizontal lines of the driving video data DFI1 (N+1). Additionally, in the image DFR (N+½), the color of each pixel of the odd-numbered horizontal lines is seen as the color obtained as a result of a mixture of the color of each pixel of the odd-numbered horizontal lines of the driving video data DFI2 (N), and of the color of the mask data of each pixel of the odd-numbered horizontal lines of the driving video data DFI1 (N+1).
In the mask data, the color of each pixel is generated based on the complementary color of the color of the pixel corresponding to the read-out video data signal RVDS1 (see step S20 in
Specifically, the image DFR (N+½) possesses an intermediate pattern between that of the image DFR (N) of the frame video data FR (N) and that of the image DFR (N+1) of the frame video data FR (N+1), in which the saturation of each pixel is lower than in the image DFR (N) and the image DFR (N+1). In cases where the mask parameter MP is 1 (see
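A small numeric check of this cancellation, illustrative only, with MP = 1 and two nearly identical pixels:

```python
# With MP = 1 the mask of a pixel is its complement in YCrCb, so averaging
# it with the (similar) pixel of the next frame drives both color
# differences toward 0, i.e. toward an achromatic color.
def ycrcb(r, g, b):
    return (0.29891 * r + 0.58661 * g + 0.11448 * b,
            0.50000 * r - 0.41869 * g - 0.08131 * b,
            -0.16874 * r - 0.33126 * g + 0.50000 * b)

y, cr, cb = ycrcb(200, 40, 40)        # a reddish pixel of frame N
y2, cr2, cb2 = ycrcb(198, 42, 41)     # nearly the same pixel in frame N+1
mixed = ((y + y2) / 2, (-cr + cr2) / 2, (-cb + cb2) / 2)
# mixed[1] and mixed[2] are close to 0 -> the perceived color is near-achromatic
```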
In the present embodiment, when reproducing operations are conducted based on the video data signal DVDS, an image DFR (N+½) of a color brought into approximation with the above-described achromatic color is visible between the image DFR (N) of the frame video data FR (N) and the image DFR (N+1) of the frame video data FR (N+1) (see the row (c) of
In addition, in the present embodiment, the color of the mask data of each pixel of the driving video data DFI2 (N) is generated based on the complementary color of the corresponding pixel of the driving video data DFI1 (N), and the color of the mask data of each pixel of the driving video data DFI1 (N+1) is generated based on the complementary color of the corresponding pixel of the driving video data DFI2 (N+1). Therefore, the residual image can be negated more effectively than with constitutions that simply darken the colors of the adjacent pixels of the driving video data DFI1 (N) or DFI2 (N+1), or constitutions that utilize a monochromatic mask (black, white, grey, etc.).
Moreover, in cases where the residual image is to be strongly negated by utilizing a monochromatic black or grey mask, it has been necessary to utilize a mask that is close to black in color; as a result, there has been a risk that the screen will become dark. In the present embodiment, however, because the complementary color can be effectively utilized to negate the residual image, such darkening of the screen can be prevented.
In the present embodiment, the driving video data DFI2 (N) and the driving video data DFI1 (N+1) both constitute images in which portions (i.e., every other horizontal line) have been replaced with the mask data. The horizontal lines are formed at an extremely high density. Consequently, even when the viewer views each individual image, in which slightly different images are shown on alternate lines, the viewer is able to visually recognize the target within the image. In the present embodiment, monochromatic images that are entirely black, white, or grey (achromatic) are not inserted into the intervals between the frame images. Consequently, by means of the present embodiment, moving images may be reproduced in which it is difficult for the viewer to detect any flickering.
In the embodiment described above, as shown in
Further, the odd-numbered horizontal lines in the driving video data DFI2 (N) may be replaced with the mask data, and even-numbered horizontal lines in driving video data DFI1 (N+1) may be replaced with the mask data.
Also in the present modification example, due to the characteristic of human vision to perceive a residual image, the interpolation image DFR (N+½) is sensed by the viewer, by means of the image of the second driving video data DFI2 (N) of the #N frame and the image of the first driving video data DFI1 (N+1) of the #(N+1) frame. In reproducing moving images in this manner, it is possible to reduce moving image blur and flicker (screen flicker), as compared with cases in which the frame video data FR (N) and the frame video data FR (N+1) are displayed in succession without modification.
In particular, in cases such as the present modification example, in which read-out image data corresponding to the pixels forming the vertical lines are replaced with the mask data, the reduction of image blurring and flickering with respect to movement including movement in the horizontal direction is accomplished more effectively than by replacing read-out image data corresponding to the horizontal lines with the mask data, as in the first embodiment. Conversely, with respect to movement including movement in the vertical direction, the first embodiment is more effective.
The second modification example described a case in which read-out image data and mask data are alternately positioned on each of the vertical lines. However, it is also permissible for the read-out image data and the mask data to be alternately positioned at every n-th number (n being an integer equal to or greater than 1) of vertical lines. In such cases, as in the second modification example, the interval between the two frames can be interpolated in an effective manner through utilizing the nature of human vision. Consequently, in reproducing moving images, it is possible to reduce the blurring and flickering of such images, and to make the viewer feel that the images are moving in a smooth manner. In the present variation, the reduction of image blurring and flickering is particularly effective with respect to movement in the horizontal direction.
Moreover, in the example in
Additionally, with respect to the driving video data DFI1 (N), the read-out image data of the odd-numbered pixels on odd-numbered horizontal lines, as well as the even-numbered pixels on even-numbered horizontal lines, may be replaced with the mask data. With respect to the driving video data DFI2 (N), the read-out image data of the even-numbered pixels on odd-numbered horizontal lines, as well as the odd-numbered pixels on even-numbered horizontal lines, may be replaced with the mask data.
Also in the present modification example, the interpolation image DFR (N+½) is visually recognized by means of the second driving video data DFI2 (N) of the #N frame and the first driving video data DFI1 (N+1) of the #(N+1) frame. In reproducing moving images in this manner, it is possible to reduce moving image blurring and flickering (screen flickering), and to make the viewer feel that the images move in a smooth manner.
In particular, in the present modification example, in which the mask data are placed in a checkered (checkerboard) pattern within the image, the compensation effects for movement in both the vertical direction, as in the first embodiment, and the horizontal direction, as in Modification Example 2, can be achieved.
In addition, the fourth modification example described conditions in which read-out image data and mask data are alternately positioned in the horizontal and vertical directions in single-pixel units. However, the read-out image data and the mask data may also be alternately positioned in block units of r pixels (r being an integer equal to or greater than 1) in the horizontal direction and s pixels (s being an integer equal to or greater than 1) in the vertical direction. Even in such cases, the interval between two frames can be interpolated effectively by utilizing the nature of human vision. Consequently, compensation can be achieved so that the displayed moving image moves in a smooth manner. This constitution is also effective in compensating for movement in both the horizontal and vertical directions.
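The line-based and block-based patterns discussed above can be summarized as one parameterized selection pattern; the following Python sketch is illustrative only, with assumed parameter names.

```python
import numpy as np

def block_mask_pattern(h, w, r=1, s=1, phase=0):
    """Boolean selection pattern for the multiplexer 550.

    True marks pixels whose read-out data are replaced with mask data.
    r, s: block width/height in pixels; phase 0/1 yields the two
    complementary patterns used on either side of a frame boundary.
    r=s=1 gives the single-pixel checkerboard (Modification Example 4);
    r=w gives horizontal stripes s lines tall (the first embodiment when
    s=1); s=h gives vertical stripes r pixels wide (Modification Example 2
    when r=1).
    """
    ys, xs = np.mgrid[0:h, 0:w]
    return ((ys // s + xs // r) + phase) % 2 == 0
```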
In the first embodiment, a case was described in which the frame video data stored in the frame memory 20 are read twice at the cycle Tfi, which corresponds to twice the cycle speed of the frame cycle Tfr, thereby generating driving video data corresponding to the respective read-out image data. However, the frame video data stored in the frame memory 20 may also be read at a cycle speed that is 3 or more times the cycle speed of the frame cycle Tfr, thereby generating driving video data corresponding to the respective read-out image data.
In the second embodiment, the frame video data stored in the frame memory 20 are read at a cycle speed that is three times the cycle speed of the frame cycle Tfr (i.e., in ⅓ of the time). In this case, the first and third read-out image data are modified, but the second read-out image data are not. Other aspects of the second embodiment are identical to the first embodiment.
With this constitution, as shown in the row (b) of
Among the three sets of driving video data DFI1 to DFI3 generated in a single frame, portions of the read-out image data of the first and third read-outs, DFI1 and DFI3, are replaced with the mask data. In the row (c) of
Herein, the second driving video data DFI2 (N) in the frame cycle of the #N frame (N being an integer equal to or greater than 1) constitutes, as is, the read-out image data FI2 (N) of the frame video data FR (N) of the #N frame read from the frame memory 20, so the frame image DFR (N) of the #N frame is represented by this driving video data DFI2 (N).
Also, the second driving video data DFI2 (N+1) in the frame cycle of the #(N+1) frame constitutes, as is, the read-out image data FI2 (N+1) of the frame video data FR (N+1) of the #(N+1) frame read from the frame memory 20. Accordingly, the frame image DFR (N+1) of the #(N+1) frame is represented by this driving video data DFI2 (N+1).
The third driving video data DFI3 (N) in the frame cycle of the #N frame is generated based on the third read-out image data FI3 (N) of the #N frame. The first driving video data DFI1 (N+1) in the frame cycle of the #(N+1) frame is generated based on the first read-out image data FI1 (N+1) of the #(N+1) frame.
In the third driving video data DFI3 (N) in the frame cycle of the #N frame, mask data is placed on the even-numbered horizontal lines. In the first driving video data DFI1 (N+1) in the frame cycle of the #(N+1) frame, mask data is placed on the odd-numbered horizontal lines.
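A sketch of the per-field masking decision implied by this timing, with assumed function and parameter names; only the read-outs adjacent to a frame boundary receive mask data.

```python
def field_mask_plan(field_index, fields_per_frame=3):
    """Decide masking for one read-out field (Embodiment 2 timing).

    Returns None for fields left as-is, or the parity of the lines to be
    replaced. With three reads per frame, only the fields adjacent to a
    frame boundary are masked: the last read (even lines) and the first
    read (odd lines); middle reads pass through unmodified.
    """
    if field_index == fields_per_frame - 1:
        return "even"      # e.g. DFI3(N): mask placed on even-numbered lines
    if field_index == 0:
        return "odd"       # e.g. DFI1(N+1): mask placed on odd-numbered lines
    return None            # e.g. DFI2(N): displayed as read out
```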
The positional relationship of the mask data between the driving video data DFI3 (N) and the driving video data DFI1 (N+1) is complementary. Therefore, due to the characteristic of human vision to perceive a residual image, the interpolation image DFR (N+½) is sensed by the viewer, by means of the driving video data DFI3 (N) of the third read-out of the #N frame and the driving video data DFI1 (N+1) of the first read-out of the #(N+1) frame.
Moreover, interpolation between frames can be achieved in the same manner by means of a combination of the third driving video data DFI3 (N−1) of the #(N−1) frame (not shown) and the first driving video data DFI1 (N) of the #N frame, or a combination of the third driving video data DFI3 (N+1) of the #(N+1) frame and the first driving video data DFI1 (N+2) of the #(N+2) frame (not shown).
Accordingly, in reproducing images according to Embodiment 2, it is possible to reduce the blurring and flickering (screen flickering) of such images, and to make the viewer feel that the images move in a smooth manner.
In cases such as Embodiment 1, in which read-out is conducted at a doubled cycle speed, it is possible to compensate for movement in each group (pair) of two frames. In the present embodiment, however, it is possible to compensate for movement between every pair of adjacent frames; consequently, the effectiveness of the movement compensation is increased.
In addition, the present embodiment was described using an example in which every other horizontal line of the driving video data is replaced with the mask data, as in the first embodiment; however, the driving video data variations of Modification Examples 1 to 5 of the first embodiment may also be applied to the second embodiment.
Moreover, in the embodiment stated above, the case in which the frame video data are read three times at the cycle Tfi, which corresponds to three times the cycle speed of the frame cycle Tfr, was used as an example; however, read-out may also be conducted 4 or more times, at cycle speeds that are 4 or more times the cycle speed of the frame cycle Tfr. In such cases, the same effects can be obtained if, from among the multiple read-out image data of each frame, the read-out image data read at the boundaries of adjacent frames are modified and converted into driving video data, and at least one set of read-out image data other than those read at the boundaries is left as is as driving video data.
The present invention is not limited to the embodiments described above, and may be reduced to practice in various other forms without deviating from the spirit of the invention.
In Embodiment 1 described above, the entire area of the read-out image data FI2 (N) and the read-out image data FI1 (N+1) is targeted by the mask (see the lower part of
With such an embodiment, the movement detecting component 60 determines portions representing moving images within the frame images, based on the frame video data (target data) WVDS and the frame video data (reference data) RVDS (see
In the embodiments described above, the description was made on the assumption that the replacement of image data with the mask data is performed according to a predetermined pattern, thereby generating the driving video data (see
For example, in Embodiment 1, in cases where the movement vector in the horizontal direction (horizontal vector) in the video is greater than the movement vector in the vertical direction (vertical vector), it is possible to select any one of the patterns of Modification Examples 2 to 5 of the driving video data. In cases in which the vertical vector is greater than the horizontal vector, it is possible to select the pattern of the first embodiment or of Modification Example 1 or 2 of the driving video data modification examples. In addition, in cases where the vertical and horizontal vectors are equal, it is possible to select either of the patterns of Modification Examples 4 and 5 of the driving video data modification examples. The same is true for Embodiment 2 as well.
Moreover, in Embodiments 1 and 2, for example, this selection may be made by the driving video data generating controller 510, based on the direction and amount of movement shown by the movement vector detected by the movement amount detecting component 62. Alternatively, the CPU 80 may execute prescribed processing based on the direction and amount of movement shown by the detected movement vector, and supply the corresponding control information to the driving video data generating controller 510.
The movement vector can, for example, be determined as follows. Specifically, the centers of gravity of the two images are calculated by taking the weighted average of the positions of the pixels, weighted by the brightness of each pixel. The vector for which the centers of gravity of these two images serve as the beginning and end points is taken as the movement vector. Additionally, the images may be divided into multiple blocks, the above-described process conducted for each block, and the average values taken to determine the orientation and magnitude of the movement vector.
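A minimal Python/NumPy sketch of this centroid-based movement vector, with illustrative names; the block-wise averaging mentioned above would apply the same function per block.

```python
import numpy as np

def movement_vector(img_a, img_b):
    """Movement vector between two frames via brightness-weighted centroids.

    img_a, img_b: 2-D brightness arrays. The vector from the centroid of
    img_a to that of img_b is taken as the movement vector (pixel/frame).
    """
    def centroid(img):
        img = img.astype(np.float64)
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        return np.array([(xs * img).sum(), (ys * img).sum()]) / img.sum()
    return centroid(img_b) - centroid(img_a)

# Horizontal vs. vertical dominance can then steer the pattern selection:
# vx, vy = movement_vector(prev, cur)
# pattern = "vertical-line" if abs(vx) > abs(vy) else "horizontal-line"
```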
Further, the third embodiment can be modified such that, for example, the CPU 80 conducts selection of the pattern based on the desired direction and amount of movement indicated by the user, and supplies the corresponding control information to the driving video data generating controller 510.
In addition, the user's specification of the amount of image movement may be achieved, for example, by having the user select from among "large", "medium", or "small" amounts of movement. Any method may be used, so long as the user is allowed to specify the desired amount of movement. The table data may then contain the mask parameter MP corresponding to the specified amount of movement.
The driving video data generator 50 in the embodiments described above is constituted so that the read-out video data signals RVDS read from the frame memory 20 are sequentially latched by the first latch component 520. However, the driving video data generator 50 may be equipped with a frame memory on the upstream side of the first latch component 520. Such an embodiment may be designed so that the read-out video data signal RVDS is temporarily written to this frame memory, and the new read-out image data signals output from it are sequentially latched by the first latch component 520. In such a case, the movement detecting component 60 may receive, as input, the image data signals written to this frame memory and the image data signals read from it.
In the embodiments described above, mask data is generated for each pixel of the read-out image data. However, it is also possible that mask data are generated only for pixels that are to be replaced (see the crosshatch parts of
Further, in Embodiment 1 discussed above, the mask parameter MP takes a value between 0 and 1. With respect to the process for applying the mask parameter MP to the read-out image data, the mask parameter MP is multiplied by the pixel values Crt, Cbt of the complementary colors (see Step S30 in
For example, calculations utilizing the mask parameter MP may also be applied to all of the pixel values Y, Crt, and Cbt. Additionally, instead of conducting the conversion from the RGB tone values to the YCrCb tone values, calculations utilizing the mask parameter MP may be conducted directly on the RGB tone values of the read-out image data. Moreover, the process may be executed by referring to a look-up table, generated by utilizing the mask parameter MP, which associates the RGB tone values of the read-out image data (or the post-conversion YCrCb tone values) with the post-processing tone values.
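As an illustration of the look-up-table variant, the following is a memoized wrapper around the mask_color() function sketched after formulae (9) to (11) above; a hardware implementation would precompute the table over all tone values rather than fill it lazily, and a table indexed by the full color triple is an assumption.

```python
_mask_lut = {}

def mask_lut(rgb, mp):
    """Look-up-table variant: maps an RGB tone value of the read-out image
    data directly to the post-processing (mask) tone value, caching each
    conversion so that recurring colors skip the YCrCb arithmetic.
    """
    key = (rgb, mp)
    if key not in _mask_lut:
        _mask_lut[key] = mask_color(*rgb, mp)   # per-pixel sketch from above
    return _mask_lut[key]

# masked_pixel = mask_lut((200, 40, 40), mp=1.0)
```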
In Embodiment 1 described above, the complementary color of the color of a pixel of the read-out image data is obtained by conversion to YCrCb tone values. However, various other methods may also be utilized to obtain the complementary color of the color of a pixel of the read-out image data.
For example, when the red, green, and blue tone values of the read-out image data take the values 0 to Vmax, and the tone values of a certain pixel of the read-out image data constitute (R, G, B), the tone values (Rt, Gt, Bt) of the corresponding complementary colors may be calculated by means of the following formulae (12) to (14).
Rt=(Vmax+1)−R (12)
Gt=(Vmax+1)−G (13)
Bt=(Vmax+1)−B (14)
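A direct transcription of formulae (12) to (14) in Python; the clamping is an added assumption, since (Vmax+1)−R exceeds Vmax when R is 0.

```python
def complement_rgb(r, g, b, vmax=255):
    """Complementary color computed directly in RGB, per formulae (12)-(14).

    (vmax + 1) - v yields vmax + 1 when v is 0; a real implementation
    would presumably clamp to the valid range (an assumption, not stated
    in the text).
    """
    clamp = lambda v: max(0, min(vmax, v))
    return (clamp((vmax + 1) - r),
            clamp((vmax + 1) - g),
            clamp((vmax + 1) - b))
```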
In the embodiments described above, the application of a liquid crystal panel in a projector was explained as an example. However, the invention may also be applied to devices other than projectors, such as direct-view display devices. Besides a liquid crystal panel, the invention may also be applied to various other image display devices, such as a PDP (plasma display panel) or an ELD (electroluminescence display). In addition, the invention may be applied to projectors that utilize a DMD (Digital Micromirror Device, a trademark of Texas Instruments).
In the embodiments described above, the image data indicate the colors of each pixel at RGB tone values that show the intensity of each color composition of red, green, and blue. However, the image data may also indicate the colors of each pixel with other tone values. For example, the image data may also indicate the colors of each pixel with YCrCb tone values. In addition, the image data may also indicate the colors of each pixel with the tone values of other color systems, such as L*a*b*, or L*u*v*.
In such aspects, according to Step S40 of
In the embodiments described above, a case in which the blocks for generating the driving video data, namely the memory write controller, the memory read-out controller, the driving video data generator, and the movement detecting component, are constituted by hardware was described by way of example. However, some of the blocks could instead be constituted by software, implemented by means of the reading and execution of a computer program by the CPU.
The program product may be realized in many aspects. For example:
(i) A computer-readable medium, for example a flexible disk, an optical disk, or a semiconductor memory;
(ii) A data signal, which comprises a computer program and is embodied inside a carrier wave;
(iii) A computer including the computer-readable medium, for example a magnetic disk or a semiconductor memory; and
(iv) A computer temporarily storing the computer program in memory through data transferring means.
While the invention has been described with reference to preferred exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed embodiments or constructions. On the contrary, the invention is intended to cover various modifications and equivalent arrangements. In addition, while the various elements of the disclosed invention are shown in various exemplary combinations and configurations, other combinations and configurations, including more elements, fewer elements, or only a single element, are also within the spirit and scope of the invention.
Number | Date | Country | Kind
---|---|---|---
2006-218030 | Aug 2006 | JP | national