The disclosure of Japanese Patent Application No. 2011-144837 filed on Jun. 29, 2011 including the specification, drawings, and abstract is incorporated herein by reference in its entirety.
The present invention relates to a display and a display control circuit and, more specifically, to a display and a display control circuit configured to perform overdrive processing on image data.
One problem with recent displays is the increase in the volume of image data transferred to the display panel driver that drives the display panel. For example, in recent liquid crystal displays, resolution has improved and the frame rate has increased through the adoption of double-speed driving (for example, double to quadruple speed), so a large amount of image data must be transferred to the display panel driver. Transferring this large amount of image data requires a higher data transfer rate. However, if the data transfer rate is increased in order to transfer a large amount of image data, power consumption increases and EMI (electromagnetic interference) also increases.
In order to cope with the problem of the increased transfer volume of image data, the inventors have examined reducing the data transfer volume by compressing the image data before transferring it. Since this allows the data transfer rate to be lowered, it becomes easier to reduce power consumption and to take EMI countermeasures.
Another problem in displays is driving the pixels of the display panel at high speed. For example, in recent liquid crystal displays, the load capacitance of the liquid crystal display panel has grown because of larger screen sizes and higher resolution. At the same time, the frame rate has increased due to the adoption of double-speed driving, so the time available to charge the data lines of the liquid crystal display panel has shortened. For this reason, a technology for driving the pixels at high speed is required.
One technology for accelerating pixel driving is overdriving. Overdriving is a technique in which, when the gradation value of the image data changes, the pixel is driven so that the change in drive voltage becomes larger than the change corresponding to the original gradation value alone. This raises the response speed of the display panel.
One technique for realizing the overdriving is to correct the gradation value of the image data by data processing. Specifically, with reference to the gradation value of the image data of the previous frame, the gradation value of the original image data is corrected so that, when the gradation value of the image data increases relative to the previous frame, the corrected gradation value becomes still larger, and when it decreases, the corrected gradation value becomes still smaller. Hereinafter, such processing is called the overdrive processing.
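For illustration only, and not as a limitation of the processing described above, the overdrive processing can be sketched in software as follows; the gain value and the clamping range used here are assumptions chosen for the example.

```python
def overdrive(prev_gray: int, curr_gray: int, gain: float = 0.5) -> int:
    """Illustrative overdrive processing: when the gradation value rises,
    output a value larger than the current gradation; when it falls,
    output a smaller value. `gain` is an assumed example parameter."""
    corrected = curr_gray + gain * (curr_gray - prev_gray)
    return max(0, min(255, round(corrected)))  # clamp to an 8-bit gradation

# Example: a rise from 64 to 128 is overdriven above 128,
# a fall from 192 to 128 is overdriven below 128.
print(overdrive(64, 128))   # 160
print(overdrive(192, 128))  # 96
```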
The inventors consider that there is a technical advantage in providing a display that supports both the overdrive processing and compression processing. However, according to the inventors' findings, when the technology of transferring the image data after compressing it and the overdrive processing are used together, the following problems may arise. The first problem is that the overdriving may be performed in an improper overdrive direction for a pixel because of the effect of a compression error. Here, the compression error is the difference between the gradation value obtained by performing the compression processing and the uncompression processing on the original gradation value of the image data and that original gradation value.
As shown in the accompanying drawings, the compression error can cause a pixel to be overdriven in a direction opposite to the proper overdrive direction. The second problem is that, as shown in the accompanying drawings, the overdriving may be performed on a pixel even though it is originally unnecessary, because the compression processing is influenced by the gradation values of the surrounding pixels in the same block.
An image processing technology that performs both the overdrive processing and the compression processing is disclosed, for example, in Japanese Unexamined Patent Publication No. 2008-281734. In this technology, in order to reduce the capacity of the memory for storing the image data of the previous frame, compressed data obtained by compressing the image data of the previous frame is stored in the memory. The image data obtained by uncompressing the compressed data stored in the memory is used for the overdrive processing. Furthermore, in order to reduce the influence of the compression error, the compression processing and the uncompression processing are also performed on the image data of the current frame, and the resulting image data is used for the overdrive processing.
Furthermore, Japanese Unexamined Patent Application Publication No. 2009-109835 discloses a technology of performing the overdrive processing, performing the compression processing on the image data of the current frame read from a memory for display, and storing the result in a memory for overdrive.
However, it should be noted that in these technologies the compression processing is performed in order to reduce the capacity of the memory used for the overdrive processing. In other words, in these technologies the compression processing must be performed before the overdrive processing. Neither patent document suggests a technology in which compressed data obtained by performing the compression processing after the overdrive processing on the transmission side is transferred to the reception side, i.e., the display panel driver.
An object of the present invention is therefore to provide a technology that, in a display configured to compress the image data before transferring it to the driver and to perform the overdriving, prevents the overdriving from being performed improperly because of a compression error.
According to one aspect of the present invention, the display includes a display panel, the driver, and a display control circuit for supplying transfer compressed data generated from the image data to the driver. The display control circuit has: a first uncompression circuit for generating current frame uncompressed compressed data by performing uncompression processing on the compressed data corresponding to the image data of a current frame; a second uncompression circuit for generating previous frame uncompressed compressed data by performing the uncompression processing on the compressed data corresponding to the image data of a previous frame; an overdrive processing part for generating overdrive processed data by performing overdrive processing based on the current frame uncompressed compressed data and the previous frame uncompressed compressed data; an overdrive direction detection circuit for detecting a proper direction of the overdriving from the current frame uncompressed compressed data and the previous frame uncompressed compressed data; a correction part for generating post-correction overdrive processed data by correcting the overdrive processed data according to the detected proper direction; a first compression circuit for generating post-correction compressed data by compressing the post-correction overdrive processed data; and a transmission part for transmitting the post-correction compressed data as the transfer compressed data to the driver. The driver drives the display panel in response to display data obtained by uncompressing the transfer compressed data.
According to another aspect of the present invention, a display control circuit is provided that supplies transfer compressed data generated from the image data to the driver, the driver driving the display panel in response to display data obtained by uncompressing the transfer compressed data. The display control circuit has: a first uncompression circuit for generating the current frame uncompressed compressed data by performing the uncompression processing on the compressed data corresponding to the image data of the current frame; a second uncompression circuit for generating the previous frame uncompressed compressed data by performing the uncompression processing on the compressed data corresponding to the image data of the previous frame; the overdrive processing part for generating the overdrive processed data by performing the overdrive processing based on the current frame uncompressed compressed data and the previous frame uncompressed compressed data; an overdrive direction detection circuit for detecting a proper direction of the overdriving from the current frame uncompressed compressed data and the previous frame uncompressed compressed data; a correction part for generating post-correction overdrive processed data by correcting the overdrive processed data according to the detected proper direction; a first compression circuit for generating the post-correction compressed data by compressing the post-correction overdrive processed data; and a transmission part for transmitting the post-correction compressed data as the transfer compressed data to the driver.
According to these aspects of the present invention, it is possible to prevent the overdriving from being performed improperly because of the compression error in a display that compresses the image data before transferring it to a display panel driver and performs the overdriving.
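As a purely illustrative sketch of the data path described in the above aspects (the helper functions, the additive correction, and all names are assumptions, not part of the claimed configuration), the transmission-side flow might be modeled as follows.

```python
from typing import Callable, List

def control_circuit_path(
    curr_compressed: bytes,
    prev_compressed: bytes,
    uncompress: Callable[[bytes], List[int]],
    compress: Callable[[List[int]], bytes],
    overdrive: Callable[[int, int], int],
    max_error: int,
) -> bytes:
    """Sketch of the transmission-side flow: uncompress both frames,
    perform overdrive processing, detect the proper overdrive direction,
    correct the result so the direction survives compression, re-compress."""
    curr = uncompress(curr_compressed)    # current frame uncompressed compressed data
    prev = uncompress(prev_compressed)    # previous frame uncompressed compressed data
    out = []
    for p, c in zip(prev, curr):
        od = overdrive(p, c)              # overdrive processed data (no correction)
        direction_positive = c >= p       # proper overdrive direction
        od += max_error if direction_positive else -max_error  # correction
        out.append(max(0, min(255, od)))
    return compress(out)                  # transfer compressed data sent to the driver

# Usage with stand-in helpers (byte-wise "compression" and a simple overdrive rule):
if __name__ == "__main__":
    result = control_circuit_path(
        curr_compressed=bytes([100, 60]), prev_compressed=bytes([90, 70]),
        uncompress=lambda b: list(b), compress=bytes,
        overdrive=lambda p, c: c + (c - p) // 2, max_error=4)
    print(list(result))  # [109, 51]: raised for the rising subpixel, lowered for the falling one
```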
In this embodiment, the image data 6 is supplied as data that represents the gradations of the R subpixel, the G subpixel, and the B subpixel in eight bits each, i.e., data that represents the gradation of each pixel in 24 bits. However, the number of bits of the image data 6 is not limited to this. Moreover, each pixel is not limited to being composed of the R subpixel, the G subpixel, and the B subpixel. For example, each pixel may include a subpixel for representing white in addition to the R subpixel, the G subpixel, and the B subpixel, or may include a subpixel for representing yellow. In such cases, the format of the image data 6 is also changed to conform to the configuration of the pixel.
The liquid crystal display 1 includes a graphic processing circuit 3, a driver 4, and a gate line driving circuit 5. The driver 4 drives the data lines of the liquid crystal display panel 2, and the gate line driving circuit 5 drives the gate lines of the liquid crystal display panel 2. In this embodiment, the graphic processing circuit 3, the driver 4, and the gate line driving circuit 5 are mounted as separate ICs (integrated circuits). In this embodiment, multiple drivers 4 are provided in the liquid crystal display 1, and the graphic processing circuit 3 and each driver 4 are coupled peer-to-peer. Specifically, the graphic processing circuit 3 is coupled to each driver 4 through a serial signal line dedicated to that driver 4, and data transfer between the graphic processing circuit 3 and each driver 4 is performed by serial data transfer through the serial signal line. Although an architecture in which a graphic processing circuit and drivers are coupled by a bus is also conceivable for a liquid crystal display having multiple drivers, the peer-to-peer architecture of this embodiment is useful in that it reduces the transfer rate required for data transfer between the graphic processing circuit 3 and each driver 4.
The graphic processing circuit 3 includes a memory 11 and a timing control circuit 12. The memory 11 is used to temporarily store the image data used for the overdrive processing. The memory 11 has a capacity for storing one frame of image data and is used to supply the image data of the frame (the previous frame) immediately preceding the object frame (the current frame) of the overdrive processing to the timing control circuit 12. Below, the image data 6 of the current frame supplied to the timing control circuit 12 from the outside may be called current frame data 6a, and the image data 6 of the previous frame supplied to the timing control circuit 12 from the memory 11 may be called previous frame data 6b.
In response to a timing control signal supplied from the outside, the timing control circuit 12 controls the driver 4 and the gate line driving circuit 5 so that a desired image is displayed on the liquid crystal display panel 2. In addition, the timing control circuit 12 is configured so that an overdrive generation arithmetic circuit 13 inside it performs the overdrive processing and the compression processing. The overdrive generation arithmetic circuit 13 performs the overdrive processing on the current frame data 6a while referring to the previous frame data 6b stored in the memory 11, and further performs the compression processing on the data obtained by the overdrive processing to generate compressed data 7. The generated compressed data 7 is sent to each driver 4 by a data transmission circuit 14. The data transmission circuit 14 also has a function of sending timing control data to each driver 4.
The driver 4 drives the data lines of the liquid crystal display panel 2 in response to the received compressed data 7 and timing control data. In detail, the driver 4 includes an uncompression circuit 15, a display latch part 16, and a data line driving circuit 17. The uncompression circuit 15 uncompresses the received compressed data 7 to generate display data 8 and transfers the generated display data 8 sequentially to the display latch part 16. The display latch part 16 sequentially latches the display data 8 received from the uncompression circuit 15. The display latch part 16 of each driver 4 stores the display data 8 of the pixels, among the pixels of one pixel line, that correspond to that driver 4. In response to the display data 8 latched by the display latch part 16, the data line driving circuit 17 drives the data lines; in each horizontal synchronization period, the data line corresponding to each piece of the display data 8 stored in the display latch part 16 is driven. Incidentally, although only the configuration of one driver 4 is illustrated in the drawing, the other drivers 4 are configured in the same manner.
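A correspondingly minimal, purely illustrative sketch of the receiving side of the driver 4, under the same kind of hypothetical helper functions, might look like the following.

```python
from typing import Callable, List

def driver_path(
    compressed_data: bytes,
    uncompress: Callable[[bytes], List[int]],
    drive_data_line: Callable[[int, int], None],
) -> None:
    """Sketch of the driver side: uncompress the received compressed data 7
    into display data 8, latch it, and drive one data line per value."""
    display_data = uncompress(compressed_data)   # uncompression circuit 15
    latch = list(display_data)                   # display latch part 16
    for column, gray in enumerate(latch):        # data line driving circuit 17
        drive_data_line(column, gray)

# Usage with a stand-in uncompression routine and data-line driver:
if __name__ == "__main__":
    driver_path(bytes([12, 34, 56]), lambda b: list(b),
                lambda col, gray: print(f"data line {col} -> gradation {gray}"))
```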
Here, it should be noted that in this embodiment the memory 11 is provided on the transmission side, i.e., in the graphic processing circuit 3. Such a configuration is suitable for reducing the hardware of the liquid crystal display 1 as a whole. The graphic processing circuit 3 may use frame memory for various kinds of image processing, and the memory 11 for the overdrive processing can also serve as the frame memory for such other image processing. In addition, providing the memory 11 on the transmission side eliminates the need for memory in the driver 4. Making memory unnecessary in each of the multiple drivers 4 is advantageous for reducing hardware.
Below, a configuration and an operation of the overdrive generation arithmetic circuit 13 of the timing control circuit 12 will be explained. In this embodiment, the overdrive generation arithmetic circuit 13 performs the overdrive processing and the compression processing by handling a block comprised of four pixels belonging to the same pixel line as a unit.
The compression circuits 21, 22 perform the compression processing on the previous frame data 6b and the current frame data 6a, respectively. The uncompression circuits 23, 24 perform uncompression processing on the compressed data outputted from the compression circuits 21, 22, respectively. Here, the data outputted from the uncompression circuits 23, 24 is called previous frame uncompressed compressed data 23a and current frame uncompressed compressed data 24a, respectively. It should be noted that the compression circuits 21, 22 and the uncompression circuits 23, 24 each perform the compression processing and the uncompression processing using the block comprised of four pixels as a unit.
The overdrive arithmetic circuit 25 performs the overdrive processing on the previous frame uncompressed compressed data 23a and the current frame uncompressed compressed data 24a. What should be noted is that the overdrive arithmetic circuit 25 performs the overdrive processing on the previous frame uncompressed compressed data 23a and the current frame uncompressed compressed data 24a, which are obtained by performing the compression processing and the uncompression processing. As will be described later, by deciding the overdrive direction based on the previous frame uncompressed compressed data 23a and the current frame uncompressed compressed data 24a obtained by performing the compression processing and the uncompression processing on the previous frame data 6b and the current frame data 6a, and by performing the overdrive processing so that this direction is maintained, it is possible to prevent the overdrive processing from being performed in an improper overdrive direction due to the effect of a compression error.
The LUT arithmetic part 32 functions as an overdrive processing unit that, for each subpixel of each pixel of the object block, outputs the gradation value after the overdrive processing corresponding to the combination of the gradation values of the previous frame uncompressed compressed data 23a and the current frame uncompressed compressed data 24a. The gradation values after the overdrive processing outputted from the LUT arithmetic part 32 are generically named the no-correction overdrive processed data 25a. Here, "no-correction" means that the correction according to the overdrive direction described later is not performed. In one embodiment, the LUT arithmetic part 32 includes an LUT 32a and an interpolation circuit (not illustrated), and generates the no-correction overdrive processed data 25a by having the interpolation circuit interpolate the values obtained by table lookup according to the combination of the previous frame uncompressed compressed data 23a and the current frame uncompressed compressed data 24a. The no-correction overdrive processed data 25a is generated so that optimal overdrive processing is realized, that is, so that the drive voltage actually supplied to the data lines is quickly brought close to the desired drive voltage. Incidentally, the generation method of the no-correction overdrive processed data 25a may be modified in various ways. For example, instead of using the LUT 32a, an arithmetic formula that uses the gradation values of the previous frame uncompressed compressed data 23a and the current frame uncompressed compressed data 24a as variables may be used to generate the no-correction overdrive processed data 25a.
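The following is a purely illustrative sketch of an LUT-plus-interpolation scheme of the kind performed by the LUT arithmetic part 32; the table contents, its granularity, and the bilinear interpolation shown here are assumptions for the example and not the actual LUT 32a.

```python
def lut_overdrive(prev_gray: int, curr_gray: int, lut, step: int = 16) -> int:
    """Look up coarse LUT entries indexed by (previous, current) gradation and
    bilinearly interpolate between them; lut[i][j] holds the overdriven output
    for gradations (i*step, j*step)."""
    i, j = prev_gray // step, curr_gray // step
    fi, fj = (prev_gray % step) / step, (curr_gray % step) / step
    i2, j2 = min(i + 1, len(lut) - 1), min(j + 1, len(lut) - 1)
    v = (lut[i][j] * (1 - fi) * (1 - fj) + lut[i2][j] * fi * (1 - fj)
         + lut[i][j2] * (1 - fi) * fj + lut[i2][j2] * fi * fj)
    return max(0, min(255, round(v)))

# Example table (an assumption for illustration): identity plus half the frame-to-frame change.
N = 256 // 16 + 1
lut = [[min(255, c * 16 + (c * 16 - p * 16) // 2) if c * 16 >= p * 16 else
        max(0, c * 16 + (c * 16 - p * 16) // 2) for c in range(N)] for p in range(N)]
print(lut_overdrive(64, 128, lut))  # 160
```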
The no-correction overdrive processed data 25a generated for a specific subpixel of a specific pixel of the object block satisfies the following conditions, where the prescribed value α is an integer larger than or equal to zero: (a) when the gradation value of the current frame uncompressed compressed data 24a is larger than the sum of the gradation value of the previous frame uncompressed compressed data 23a and the prescribed value α, the gradation value of the no-correction overdrive processed data 25a is larger than the gradation value of the current frame uncompressed compressed data 24a; (b) when the gradation value of the current frame uncompressed compressed data 24a is smaller than the difference obtained by subtracting the prescribed value α from the gradation value of the previous frame uncompressed compressed data 23a, the gradation value of the no-correction overdrive processed data 25a is smaller than the gradation value of the current frame uncompressed compressed data 24a; and (c) when neither of the above conditions (a), (b) holds true, the gradation value of the no-correction overdrive processed data 25a is equal to the gradation value of the current frame uncompressed compressed data 24a (that is, overdriving is not performed). It should be noted that, with the prescribed value α equal to zero, condition (c) holds true only when the gradation value of the current frame uncompressed compressed data 24a is equal to the gradation value of the previous frame uncompressed compressed data 23a.
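The three conditions (a) to (c) can be captured in a small check, shown here purely for illustration; the example values of the prescribed value α and of the gradation values are assumptions.

```python
def satisfies_overdrive_conditions(prev_u: int, curr_u: int, od: int, alpha: int = 0) -> bool:
    """Check conditions (a)-(c) for one subpixel: prev_u/curr_u are the
    previous/current frame uncompressed compressed gradations, od is the
    no-correction overdrive processed gradation, alpha is the prescribed value."""
    if curr_u > prev_u + alpha:          # condition (a): rising change
        return od > curr_u
    if curr_u < prev_u - alpha:          # condition (b): falling change
        return od < curr_u
    return od == curr_u                  # condition (c): no overdrive

# With alpha = 0, condition (c) applies only when curr_u == prev_u.
print(satisfies_overdrive_conditions(64, 128, 160))   # True (rising, overdriven upward)
print(satisfies_overdrive_conditions(192, 128, 96))   # True (falling, overdriven downward)
print(satisfies_overdrive_conditions(100, 100, 100))  # True (no change, no overdrive)
```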
The overdrive direction detection part 33 detects the proper overdrive direction in the overdrive processing by comparing the previous frame uncompressed compressed data 23a and the current frame uncompressed compressed data 24a. The proper overdrive direction is detected for each subpixel of each pixel of the object block. When the gradation value of the current frame uncompressed compressed data 24a corresponding to a certain subpixel of a certain pixel of the object block is larger than or equal to the corresponding gradation value of the previous frame uncompressed compressed data 23a, the proper overdrive direction is detected as "positive"; when it is smaller, the proper overdrive direction is detected as "negative." The overdrive direction detection part 33 outputs drive direction data 25c indicating the overdrive direction for each subpixel of each pixel of the object block.
The correction part 34 corrects the no-correction overdrive processed data 25a according to the drive direction data 25c to generate post-correction overdrive processed data 25b. This correction is performed so that, when the compressed data generated by the compression circuit 27 from the post-correction overdrive processed data 25b is uncompressed by the uncompression circuit 15 of the driver 4 to generate the display data 8, the overdrive direction detected by the overdrive direction detection part 33 is also maintained in the display data 8. When the data line is driven in response to the display data 8 obtained by the uncompression processing in the uncompression circuit 15 of the driver 4, there is a possibility that the overdriving is performed in the direction opposite to the proper overdrive direction because of the compression error caused by the compression and uncompression processing. The correction part 34 therefore generates the post-correction overdrive processed data 25b by adding a correction value to, or subtracting it from, the gradation value of the no-correction overdrive processed data 25a according to the overdrive direction, so that the overdrive direction detected by the overdrive direction detection part 33 is reliably maintained in the display data 8. The generation of the post-correction overdrive processed data 25b by the correction part 34 will be explained in detail later.
Returning to the drawing, the compression circuits 26, 27 perform the compression processing on the no-correction overdrive processed data 25a and the post-correction overdrive processed data 25b to generate no-correction compressed data 26a and post-correction compressed data 27a, respectively.
The uncompression circuits 28, 29 perform the uncompression processing on the no-correction compressed data 26a and the post-correction compressed data 27a, respectively. The data outputted from the uncompression circuits 28, 29 are referred to as no-correction uncompressed compressed data 28a and post-correction uncompressed compressed data 29a, respectively.
The comparison circuit 30 selects one of the following as the compressed data 7 to be sent to the driver 4: the compressed data 22a outputted from the compression circuit 22 (that is, compressed data on which the overdrive processing is not performed), the no-correction compressed data 26a outputted from the compression circuit 26, or the post-correction compressed data 27a outputted from the compression circuit 27. This selection is performed based on: (1) the current frame uncompressed compressed data 24a outputted from the uncompression circuit 24, (2) the no-correction uncompressed compressed data 28a and the post-correction uncompressed compressed data 29a outputted from the uncompression circuits 28, 29, and (3) the drive direction data 25c. The selection of the compressed data 7 by the comparison circuit 30 will be explained in detail later. The selection circuit 31 outputs the compressed data (22a, 26a, or 27a) selected by the comparison circuit 30 as the compressed data 7.
Next, the overdrive processing and the compression processing in the overdrive generation arithmetic circuit 13 will be explained in detail. As described above, when the overdrive processing and the compression processing are used together, the overdriving may be performed on a pixel in an improper overdrive direction because of the compression error. Moreover, depending on the compression processing, the overdriving may be performed even though it is originally unnecessary, because of the influence of the gradation values of the surrounding pixels. For example, when the compression processing is performed using a block comprised of four pixels as a unit, as in this embodiment, the result for a given pixel is affected by the other pixels of the same block.
In order to resolve such a problem, the overdrive generation arithmetic circuit 13 of this embodiment performs the following two operations.
First, the overdrive generation arithmetic circuit 13 of this embodiment adopts overdrive processing that prioritizes performing the overdriving in the proper direction over the accuracy of the overdrive processing. That is, when it is determined that overdriving in an improper overdrive direction would be performed because of the compression error, the post-correction compressed data 27a generated by compressing the post-correction overdrive processed data 25b is selected as the compressed data 7 and sent to the driver 4. The driver 4 generates the display data 8 by uncompressing the compressed data 7 and drives the data lines according to the display data 8.
Here, the post-correction overdrive processed data 25b is data obtained by increasing or decreasing the gradation value of the no-correction overdrive processed data 25a, which is generated by ideal overdrive processing, according to the overdrive direction shown in the drive direction data 25c. Below, the generation of the post-correction overdrive processed data 25b will be explained in detail.
In the one embodiment, for a subpixel whose overdrive direction shown in the drive direction data 25c is “positive,” the gradation value of the post-correction overdrive processed data 25b is generated by adding a correction value to the gradation value of the no-correction overdrive processed data 25a. On the other hand, for the subpixel whose overdrive direction shown in the drive direction data 25c is “negative,” the gradation value of the post-correction overdrive processed data 25b is generated by subtracting the correction value from the gradation value of the no-correction overdrive processed data 25a.
The correction value to be added or subtracted may be set in various ways. However, the correction value is set so that, for a subpixel whose overdrive direction shown in the drive direction data 25c is "positive," the gradation value of the post-correction overdrive processed data 25b becomes larger than or equal to the sum of the corresponding gradation value of the current frame uncompressed compressed data 24a and the absolute value of the maximum compression error, and so that, for a subpixel whose overdrive direction shown in the drive direction data 25c is "negative," the gradation value of the post-correction overdrive processed data 25b becomes smaller than or equal to the value obtained by subtracting the absolute value of the maximum compression error from the corresponding gradation value of the current frame uncompressed compressed data 24a. With this setting, the correct overdrive direction is maintained even in the display data 8 obtained by uncompressing the post-correction compressed data 27a.
The simplest way to achieve this is to make the correction value to be added or subtracted equal to the absolute value of the maximum compression error caused by compression and uncompression. For example, when the overdrive direction shown in the drive direction data 25c is "positive" and a compression error of ±4 can occur due to compression and uncompression, the post-correction overdrive processed data 25b is generated by adding the constant value four to the gradation value of the no-correction overdrive processed data 25a. The display data 8 obtained by compressing and uncompressing the post-correction overdrive processed data 25b generated in this way reliably realizes the correct overdrive direction.
Alternatively, the post-correction overdrive processed data 25b may be generated as follows: (A) If the overdrive direction shown in the drive direction data 25c is “positive,” (A1) when the gradation value of the no-correction overdrive processed data 25a is larger than or equal to a value obtained by adding an absolute value of the maximum compression error to the gradation value of the current frame uncompressed compressed data 24a, the gradation value of the post-correction overdrive processed data 25b is decided to be identical to the gradation value of the no-correction overdrive processed data 25a (it is not corrected); (A2) when the gradation value of the no-correction overdrive processed data 25a is smaller than the value obtained by adding the absolute value of the maximum compression error to the gradation value of the current frame uncompressed compressed data 24a, the gradation value of the post-correction overdrive processed data 25b is set to a value obtained by adding the absolute value of the maximum compression error to the gradation value of the current frame uncompressed compressed data 24a.
(B) If the overdrive direction shown in the drive direction data 25c is "negative," (B1) when the gradation value of the no-correction overdrive processed data 25a is smaller than or equal to the value obtained by subtracting the absolute value of the maximum compression error from the gradation value of the current frame uncompressed compressed data 24a, the gradation value of the post-correction overdrive processed data 25b is decided to be identical to the gradation value of the no-correction overdrive processed data 25a (it is not corrected); (B2) when the gradation value of the no-correction overdrive processed data 25a is larger than the value obtained by subtracting the absolute value of the maximum compression error from the gradation value of the current frame uncompressed compressed data 24a, the gradation value of the post-correction overdrive processed data 25b is set to the value obtained by subtracting the absolute value of the maximum compression error from the gradation value of the current frame uncompressed compressed data 24a.
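A purely illustrative sketch covering both the simple constant-offset correction described earlier and the alternative rules (A1)/(A2) and (B1)/(B2) follows; the maximum compression error of ±4 used in the example is an assumed value.

```python
def correct_simple(od: int, positive: bool, max_err: int) -> int:
    """Simplest correction: add (or subtract) the absolute value of the
    maximum compression error according to the overdrive direction."""
    return od + max_err if positive else od - max_err

def correct_conditional(od: int, curr_u: int, positive: bool, max_err: int) -> int:
    """Alternative correction following rules (A1)/(A2) and (B1)/(B2):
    leave the value unchanged when it already clears the current-frame
    gradation by the maximum compression error, otherwise pin it to
    curr_u +/- max_err so the direction survives compression."""
    if positive:
        return od if od >= curr_u + max_err else curr_u + max_err
    return od if od <= curr_u - max_err else curr_u - max_err

# Example with a maximum compression error of +/-4 (an assumed value):
print(correct_simple(102, True, 4))             # 106
print(correct_conditional(102, 100, True, 4))   # 104 (pinned to 100 + 4)
print(correct_conditional(120, 100, True, 4))   # 120 (already large enough, not corrected)
```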
The post-correction compressed data 27a is generated by compressing the post-correction overdrive processed data 25b generated in this way, and is sent to the driver 4 as the compressed data 7. As a result, the overdrive direction detected by the overdrive direction detection part 33 is also maintained in the display data 8.
What should be noted is that the overdrive direction should be decided based on the gradation values after the compression and uncompression processing (that is, the gradation values of the previous frame uncompressed compressed data 23a and the current frame uncompressed compressed data 24a), and the no-correction overdrive processed data 25a should be generated by performing the overdrive processing on those values. When lossy compression processing is performed, there may be cases where a desired gradation is intended to be realized as a long-term temporal average. In such cases, if the overdrive direction is not decided on the basis of the gradation values after the uncompression processing, the proper overdrive direction cannot be obtained.
Second, when there is no change (or only a small change) in the gradation value of each subpixel of each pixel of the object block, the overdrive generation arithmetic circuit 13 of this embodiment determines that the overdrive processing is unnecessary, selects the compressed data 22a obtained by compressing the current frame data 6a as the compressed data 7, and transmits it to the driver 4. It should be noted that the overdrive processing is not performed on the compressed data 22a.
In order to realize the above two operations, the comparison circuit 30 and the selection circuit 31 select the compressed data 7 to be actually sent to the driver 4 as described below:
First, when the gradation value of the current frame uncompressed compressed data 24a and the gradation value of the no-correction overdrive processed data 25a are identical for all the subpixels of all the pixels of the object block, the comparison circuit 30 determines that the overdrive processing is unnecessary and selects the compressed data 22a outputted from the compression circuit 22 as the compressed data 7 to be actually sent to the driver 4. It should be noted that the gradation value of the current frame uncompressed compressed data 24a and the gradation value of the no-correction overdrive processed data 25a being identical means that there is no change, or only a small change, in the gradation value of each subpixel of each pixel of the object block. When the difference between the previous frame uncompressed compressed data 23a and the current frame uncompressed compressed data 24a is small, the gradation value of the current frame uncompressed compressed data 24a and the gradation value of the no-correction overdrive processed data 25a can become identical, depending on the details of the overdrive processing.
If the gradation value of the current frame uncompressed compressed data 24a and the gradation value of the no-correction overdrive processed data 25a differ for any subpixel of any pixel of the object block, the comparison circuit 30 determines, for each subpixel of each pixel of the object block, whether the overdrive direction realized with the no-correction compressed data 26a is proper. This determination is made by comparing the no-correction uncompressed compressed data 28a obtained by uncompressing the no-correction compressed data 26a (which agrees with the display data 8 that the driver 4 would obtain by uncompressing the no-correction compressed data 26a) with the current frame uncompressed compressed data 24a.
For example, consider a case where the overdrive direction shown in the drive direction data 25c for a specific subpixel of a certain specific pixel is “positive.” In this case, when a value of the no-correction uncompressed compressed data 28a of the specific subpixel of the specific pixel is larger than or equal to a value of the current frame uncompressed compressed data 24a of the specific subpixel of the specific pixel, the overdrive direction is determined to be proper; when it is not so, the overdrive direction is determined to be improper. Similarly, in the case where the overdrive direction shown in the drive direction data 25c for a specific subpixel of a certain specific pixel is “negative,” when the value of the no-correction uncompressed compressed data 28a of the specific subpixel of the specific pixel is smaller than the value of the current frame uncompressed compressed data 24a of the specific subpixel of the specific pixel, the overdrive direction is determined to be proper; when it is not so, the overdrive direction is determined to be improper.
If the overdrive direction realized with the no-correction compressed data 26a is proper for all the subpixels of all the pixels of the object block, the comparison circuit 30 selects the no-correction compressed data 26a as the compressed data 7 to be actually sent to the driver 4.
On the other hand, if the overdrive direction realized with the no-correction compressed data 26a is improper for at least one subpixel of the pixels included in the object block, the comparison circuit 30 selects the post-correction compressed data 27a as the compressed data 7 to be actually sent to the driver 4.
It should be noted that the above-mentioned selection is performed for every object block. Taking a look at a certain object block, the compressed data 22a outputted from the compression circuit 22 is selected for all the subpixels of all the pixels, or the no-correction compressed data 26a is selected for all the subpixels of all the pixels, or the post-correction compressed data 27a is selected for all the subpixels of all the pixels.
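The block-by-block selection performed by the comparison circuit 30 and the selection circuit 31 can be summarized roughly as follows, purely for illustration; the helper names and example values are assumptions.

```python
from typing import List

def select_compressed_data(
    curr_u: List[int],        # current frame uncompressed compressed data 24a
    od_nc: List[int],         # no-correction overdrive processed data 25a
    nc_roundtrip: List[int],  # no-correction uncompressed compressed data 28a
    positive: List[bool],     # drive direction data 25c (True = "positive")
) -> str:
    """Return which compressed data is sent as the compressed data 7 for this
    block: the plain current-frame data 22a, the no-correction data 26a, or
    the post-correction data 27a."""
    # 1) No change in any subpixel: overdrive processing is unnecessary.
    if all(c == o for c, o in zip(curr_u, od_nc)):
        return "22a (no overdrive)"
    # 2) Check whether the no-correction data keeps the proper direction
    #    for every subpixel of the block.
    for c, r, pos in zip(curr_u, nc_roundtrip, positive):
        if pos and not (r >= c):
            return "27a (post-correction)"
        if not pos and not (r < c):
            return "27a (post-correction)"
    return "26a (no-correction)"

# Example: one subpixel whose round-tripped value drops below the current
# frame value despite a "positive" direction forces the corrected data.
print(select_compressed_data([100, 50], [102, 60], [98, 62], [True, True]))  # 27a
```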
Consider, for example, a case where the gradation value of the current frame uncompressed compressed data 24a is 100, the gradation value of the no-correction overdrive processed data 25a is 102, and the maximum compression error is ±4. In this case, the gradation value of the no-correction uncompressed compressed data 28a obtained by performing the compression processing and the uncompression processing on the no-correction overdrive processed data 25a can take a value of not less than 98 and not more than 106. When the gradation value of the no-correction uncompressed compressed data 28a is larger than or equal to 100 (that is, when it is larger than or equal to the gradation value of the current frame uncompressed compressed data 24a), the overdrive direction is determined to be proper. In this case, the proper overdrive direction can be reliably realized by selecting the no-correction compressed data 26a as the compressed data 7 to be sent to the driver 4. On the other hand, when the gradation value of the no-correction uncompressed compressed data 28a is smaller than 100 (that is, when it is smaller than the gradation value of the current frame uncompressed compressed data 24a), the proper overdrive direction can be realized by selecting the post-correction compressed data 27a as the compressed data 7 to be sent to the driver 4. When the gradation value of the post-correction overdrive processed data 25b is 104, the display data 8 obtained by uncompressing the post-correction compressed data 27a can take a value of not less than 100 and not more than 108, and whichever value it takes, the overdrive direction is not reversed. Therefore, the overdriving is not performed in an improper overdrive direction.
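Using the assumed numbers of this example (a current frame gradation of 100, a no-correction overdrive gradation of 102, and a maximum compression error of ±4), the ranges quoted above can be verified directly.

```python
curr_u = 100          # current frame uncompressed compressed gradation (assumed)
od_nc = 102           # no-correction overdrive processed gradation (assumed)
max_err = 4           # maximum compression error of +/-4 (assumed)

# Round-tripping the no-correction value can land anywhere in 98..106,
# so the result may fall below 100 and the direction may be lost.
nc_range = (od_nc - max_err, od_nc + max_err)
print(nc_range)                      # (98, 106)
print(nc_range[0] >= curr_u)         # False: direction can be lost

# The post-correction value 104 (= 100 + 4) round-trips to 100..108,
# which never falls below the current frame gradation of 100.
od_pc = curr_u + max_err
pc_range = (od_pc - max_err, od_pc + max_err)
print(pc_range)                      # (100, 108)
print(pc_range[0] >= curr_u)         # True: the proper direction is kept
```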
By selecting the compressed data 7 in this way, the overdriving is prevented from being performed in an improper overdrive direction, and the overdriving is also prevented from being performed when it is originally unnecessary.
Incidentally, it should be noted that various well-known compression and uncompression schemes can be used for the compression processing performed in the compression circuits 21, 22, 26, and 27 and the uncompression processing performed in the uncompression circuits 15, 23, 24, 28, and 29.
Moreover, in the above-mentioned embodiment, when the gradation value of the current frame uncompressed compressed data 24a corresponding to a certain subpixel of a certain pixel of the object block is larger than or equal to the corresponding gradation value of the previous frame uncompressed compressed data 23a of the subpixel, the proper overdrive direction is detected as “positive”; when it is not so, the proper overdrive direction is detected as “negative.” However, the proper overdrive direction when the gradation value of the current frame uncompressed compressed data 24a corresponding to a certain subpixel of a certain pixel of the object block is equal to the corresponding gradation value of the previous frame uncompressed compressed data 23a of the subpixel may be different from this direction. That is, the following detection may be all right: when the gradation value of the current frame uncompressed compressed data 24a corresponding to a certain subpixel of a certain pixel of the object block exceeds the corresponding gradation value of the previous frame uncompressed compressed data 23a of the subpixel, the proper overdrive direction is detected as “positive”; when it is not so, the overdrive direction is detected as “negative.”
In this case, in the comparison circuit 30, in the case where the overdrive direction shown in the drive direction data 25c for a specific subpixel of a certain specific pixel is “positive,” when the value of the no-correction uncompressed compressed data 28a of the specific subpixel of the specific pixel exceeds the value of the current frame uncompressed compressed data 24a of the specific subpixel of the specific pixel, the overdrive direction is determined to be proper; when it is not so, the overdrive direction is determined to be improper. Moreover, in the case where the overdrive direction shown in the drive direction data 25c for a specific subpixel of a certain specific pixel is “negative,” when the value of the no-correction uncompressed compressed data 28a of the specific subpixel of the specific pixel is smaller than or equal to the value of the current frame uncompressed compressed data 24a of the specific subpixel of the specific pixel, the overdrive direction is determined to be proper; when it is not so, the overdrive direction is determined to be improper.
Furthermore, although in the above-mentioned embodiment the compressed data 7 is selected from among the no-correction compressed data 26a, the post-correction compressed data 27a, and the compressed data 22a (on which the overdrive processing is not performed), an operation in which the compressed data 22a is never selected as the compressed data 7, that is, in which either the no-correction compressed data 26a or the post-correction compressed data 27a is selected as the compressed data 7, is also possible. Even in this case, the effect of preventing the overdriving from being performed in an improper direction is obtained. Moreover, the post-correction compressed data 27a may always be used as the compressed data 7, with no selection being performed by the comparison circuit 30 and the selection circuit 31. In this case, since the liquid crystal display panel 2 is always driven in response to the display data 8 obtained by uncompressing the compressed data 7 generated from the post-correction overdrive processed data 25b, the driving deviates somewhat from ideal overdriving (the no-correction overdrive processed data 25a is preferable to the post-correction overdrive processed data 25b for realizing ideal overdriving). However, even this scheme at least prevents the overdriving from being performed in an improper overdrive direction. As described above, according to the inventors' examination, it is more important that the overdriving not be performed in an improper overdrive direction.
In this embodiment, where the compressed data 22a generated by the compression circuit 22 performing the compression processing on the current frame data 6a is stored in the memory 11A, the capacity of the memory 11A can be made smaller than that of the memory 11 used in the first embodiment. Moreover, the compression circuit 21 can be removed from the overdrive generation arithmetic circuit 13A. Thus, the configuration of the liquid crystal display 1A of the second embodiment has the advantage that the hardware can be made smaller.
In detail, in this embodiment, the overdrive generation arithmetic circuit 13B is configured to compress the image data 6 that it receives by any of the following six compression processing operations: lossless compression, (1×4) pixel compression, (2+1×2) pixel compression, (2×2) pixel compression, (3+1) pixel compression, and (4×1) pixel compression.
Here, the lossless compression is a scheme of compressing the image data 6 so that the original image data 6 can be completely restored from the compressed data 7. In this embodiment, it is used when the image data of the object block has a specific pattern. As described above, each block includes pixels of one row and four columns in this embodiment. The (1×4) pixel compression is a scheme of independently performing processing of reducing the number of bit planes (in this embodiment, dithering using a dither matrix) on each of the four pixels of the object block. The (1×4) pixel compression is suitable for a case where the correlation of the image data of the four pixels is low. The (2+1×2) pixel compression is a scheme of deciding a representative value that represents the image data of two pixels of the four pixels of the object block while performing the processing of reducing the number of bit planes on each of the other two pixels. The (2+1×2) pixel compression is suitable for a case where the correlation of the image data of two of the four pixels is high and the correlation of the image data of the other two pixels is low. The (2×2) pixel compression is a scheme in which the four pixels of the object block are divided into two sets of two pixels each, and a representative value representing the image data is determined for each set of two pixels to compress the image data. The (2×2) pixel compression is suitable for a case where the correlation of the image data of two of the four pixels is high and the correlation of the image data of the other two pixels is also high. The (3+1) pixel compression is a scheme in which a representative value representing the image data of three pixels of the four pixels of the object block is decided while the processing of reducing the number of bit planes is performed on the remaining one pixel. The (3+1) pixel compression is suitable for a case where the correlation among the image data of three pixels of the object block is high and the correlation between the image data of the remaining one pixel and the image data of the three pixels is low. The (4×1) pixel compression is a scheme in which a representative value representing the image data of the four pixels of the object block is decided to compress the image data. The (4×1) pixel compression is suitable for a case where the correlation among the image data of all four pixels of the object block is high.
The fact that the image data of the object block can be losslessly compressed when it has a specific pattern is useful for enabling the inspection of the liquid crystal display panel 2 to be performed appropriately. In the inspection of the liquid crystal display panel 2, the luminance characteristic and the color gamut characteristic are evaluated. In these evaluations, an image of a specific pattern is displayed on the liquid crystal display panel 2. In order to evaluate the luminance characteristic and the color gamut characteristic appropriately, it is necessary to display on the liquid crystal display panel 2 an image that reproduces colors faithfully to the inputted image data. If compression distortion exists, the luminance characteristic and the color gamut characteristic cannot be evaluated appropriately. Therefore, this embodiment is configured so that the overdrive generation arithmetic circuit 13B can perform the lossless compression.
Which of the six compression processing operations is used is decided according to whether the image data of the object block has a specific pattern and according to the correlation among the image data of the pixels of one row and four columns included in the object block. For example, when the correlation of the image data of all four pixels is high, the (4×1) pixel compression is used; when the correlation of the image data of two of the four pixels is high and the correlation of the image data of the other two pixels is also high, the (2×2) pixel compression is used. The selection among the six compression processing operations, and the compression processing and the uncompression processing of each, will be explained in detail later.
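Purely as an illustration of how one of the six compression schemes might be chosen from the correlation structure of a block, the following sketch uses an assumed per-pixel correlation test and threshold; it is not the actual selection logic of this embodiment, which is described later.

```python
from itertools import combinations
from typing import List, Tuple

def similar(p: Tuple[int, int, int], q: Tuple[int, int, int], thr: int = 8) -> bool:
    """Assumed correlation test: two pixels correlate when every subpixel
    differs by no more than an example threshold."""
    return all(abs(a - b) <= thr for a, b in zip(p, q))

def choose_scheme(block: List[Tuple[int, int, int]], specific_patterns=frozenset()) -> str:
    """Pick one of: lossless, (4x1), (3+1), (2x2), (2+1x2), (1x4)."""
    if tuple(block) in specific_patterns:
        return "lossless"                   # specific pattern -> lossless compression
    sim = {(i, j): similar(block[i], block[j]) for i, j in combinations(range(4), 2)}
    if all(sim.values()):
        return "(4x1)"                      # all four pixels correlate
    if any(all(sim[pair] for pair in combinations(trio, 2))
           for trio in combinations(range(4), 3)):
        return "(3+1)"                      # three pixels correlate, one does not
    if any(sim[a] and sim[b] for a, b in
           [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]):
        return "(2x2)"                      # two correlated pairs of two pixels
    if any(sim.values()):
        return "(2+1x2)"                    # one correlated pair, two isolated pixels
    return "(1x4)"                          # low correlation among all four pixels

print(choose_scheme([(10, 10, 10)] * 4))                                 # (4x1)
print(choose_scheme([(0, 0, 0), (0, 0, 0), (200, 0, 0), (0, 200, 0)]))   # (2+1x2)
```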
As a specific configuration, as illustrated in the drawing, the overdrive generation arithmetic circuit 13B includes a compression circuit 42, uncompression circuits 43 and 44, compression parts 46a to 46f and 47a to 47f, uncompression parts 48a to 48f and 49a to 49f, a comparison circuit 50, and a selection circuit 51.
The compression circuit 42 performs the compression processing on the image data 6 (that is, the current frame data 6a) to generate the compressed data.
Returning to
The uncompression circuit 43 includes a lossless uncompression part 43a, a (1×4) pixel uncompression part 43b, a (2+1×2) pixel uncompression part 43c, a (2×2) pixel uncompression part 43d, a (3+1) pixel uncompression part 43e, a (4×1) pixel uncompression part 43f, a shape recognition part 43g, and an uncompressed data selection part 43h. The lossless uncompression part 43a performs uncompression processing corresponding to the lossless compression on the received compressed data to generate lossless uncompressed data. The (1×4) pixel uncompression part 43b performs uncompression processing corresponding to the (1×4) pixel compression on the received compressed data to generate (1×4) uncompressed data. The (2+1×2) pixel uncompression part 43c performs uncompression processing corresponding to the (2+1×2) pixel compression on the received compressed data to generate (2+1×2) uncompressed data. The (2×2) pixel uncompression part 43d performs uncompression processing corresponding to the (2×2) pixel compression on the received compressed data to generate (2×2) uncompressed data. The (3+1) pixel uncompression part 43e performs uncompression processing corresponding to the (3+1) pixel compression on the received compressed data to generate (3+1) uncompressed data. The (4×1) pixel uncompression part 43f performs uncompression processing corresponding to the (4×1) pixel compression on the received compressed data to generate (4×1) uncompressed data. The shape recognition part 43g recognizes, from a compression type recognition bit included in the received compressed data, which compression processing was used for the compressed data, selects the uncompressed data corresponding to the recognized compression processing, and sends uncompressed data selection data indicating the selected uncompressed data to the uncompressed data selection part 43h. The uncompressed data selection part 43h outputs the uncompressed data specified by the uncompressed data selection data.
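The shape recognition and selection inside the uncompression circuit 43 amount to dispatching on the compression type recognition bit; the following is a minimal illustrative sketch under assumed (hypothetical) type codes and stand-in uncompression routines.

```python
from typing import Callable, Dict, List

# Hypothetical mapping from compression type recognition bits to the
# corresponding uncompression routine; the actual bit assignment of the
# embodiment is not specified here.
def build_dispatcher(parts: Dict[int, Callable[[bytes], List[int]]]):
    def uncompress(compressed: bytes) -> List[int]:
        type_bits = compressed[0] & 0x07      # assumed: low 3 bits select the scheme
        payload = compressed[1:]
        return parts[type_bits](payload)      # shape recognition + selection
    return uncompress

# Usage with stand-in uncompression parts (corresponding to 43a, 43b, ... in the text):
parts = {
    0: lambda p: list(p),                     # stand-in for lossless uncompression
    1: lambda p: [b & 0xF0 for b in p],       # stand-in for (1x4) uncompression
}
uncompress = build_dispatcher(parts)
print(uncompress(bytes([0, 10, 20, 30])))     # [10, 20, 30]
print(uncompress(bytes([1, 0x1F, 0x2F])))     # [16, 32]
```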
Returning to
The lossless compression part 46a, the (1×4) pixel compression part 46b, the (2+1×2) pixel compression part 46c, the (2×2) pixel compression part 46d, the (3+1) pixel compression part 46e, and the (4×1) pixel compression part 46f constitute a circuit group for performing the compression processing on the no-correction overdrive processed data 45a. In detail, the lossless compression part 46a performs the lossless compression on the no-correction overdrive processed data 45a to generate no-correction lossless compressed data. The (1×4) pixel compression part 46b performs the (1×4) pixel compression on the no-correction overdrive processed data 45a to generate no-correction (1×4) compressed data. The (2+1×2) pixel compression part 46c performs the (2+1×2) pixel compression on the no-correction overdrive processed data 45a to generate no-correction (2+1×2) compressed data. The (2×2) pixel compression part 46d performs the (2×2) pixel compression on the no-correction overdrive processed data 45a to generate no-correction (2×2) compressed data. The (3+1) pixel compression part 46e performs the (3+1) pixel compression on the no-correction overdrive processed data 45a to generate no-correction (3+1) compressed data. The (4×1) pixel compression part 46f performs the (4×1) pixel compression on the no-correction overdrive processed data 45a to generate no-correction (4×1) compressed data.
The lossless compression part 47a, the (1×4) pixel compression part 47b, the (2+1×2) pixel compression part 47c, the (2×2) pixel compression part 47d, the (3+1) pixel compression part 47e, and the (4×1) pixel compression part 47f constitute a circuit group that performs the compression processing on the post-correction overdrive processed data 45b. The lossless compression part 47a performs the lossless compression on the post-correction overdrive processed data 45b to generate post-correction lossless compressed data. The (1×4) pixel compression part 47b performs the (1×4) pixel compression on the post-correction overdrive processed data 45b to generate post-correction (1×4) compressed data. The (2+1×2) pixel compression part 47c performs the (2+1×2) pixel compression on the post-correction overdrive processed data 45b to generate post-correction (2+1×2) compressed data. The (2×2) pixel compression part 47d performs the (2×2) pixel compression on the post-correction overdrive processed data 45b to generate post-correction (2×2) compressed data. The (3+1) pixel compression part 47e performs the (3+1) pixel compression on the post-correction overdrive processed data 45b to generate post-correction (3+1) compressed data. The (4×1) pixel compression part 47f performs the (4×1) pixel compression on the post-correction overdrive processed data 45b to generate post-correction (4×1) compressed data.
The lossless uncompression part 48a, the (1×4) pixel uncompression part 48b, the (2+1×2) pixel uncompression part 48c, the (2×2) pixel uncompression part 48d, the (3+1) pixel uncompression part 48e, and the (4×1) pixel uncompression part 48f constitute a circuit group for uncompressing the compressed data generated by the compression processing on the no-correction overdrive processed data 45a. The lossless uncompression part 48a performs uncompression processing corresponding to the lossless compression on the no-correction lossless compressed data received from the lossless compression part 46a to generate no-correction lossless uncompressed compressed data. The (1×4) pixel uncompression part 48b performs uncompression processing corresponding to the (1×4) pixel compression on the no-correction (1×4) compressed data received from the (1×4) pixel compression part 46b to generate no-correction (1×4) uncompressed compressed data. The (2+1×2) pixel uncompression part 48c performs uncompression processing corresponding to the (2+1×2) pixel compression on the compressed data received from the (2+1×2) pixel compression part 46c to generate no-correction (2+1×2) uncompressed compressed data. The (2×2) pixel uncompression part 48d performs uncompression processing corresponding to the (2×2) pixel compression on the compressed data received from the (2×2) pixel compression part 46d to generate no-correction (2×2) uncompressed compressed data. The (3+1) pixel uncompression part 48e performs uncompression processing corresponding to the (3+1) pixel compression on the compressed data received from the (3+1) pixel compression part 46e to generate no-correction (3+1) uncompressed compressed data. The (4×1) pixel uncompression part 48f performs uncompression processing corresponding to the (4×1) pixel compression on the compressed data received from the (4×1) pixel compression part 46f to generate no-correction (4×1) uncompressed compressed data.
The lossless uncompression part 49a, the (1×4) pixel uncompression part 49b, the (2+1×2) pixel uncompression part 49c, the (2×2) pixel uncompression part 49d, the (3+1) pixel uncompression part 49e, and the (4×1) pixel uncompression part 49f constitute a circuit group for uncompressing the compressed data generated by the compression processing on the post-correction overdrive processed data 45b. The lossless uncompression part 49a performs uncompression processing corresponding to the lossless compression on the post-correction lossless compressed data received from the lossless compression part 47a to generate post-correction lossless uncompressed compressed data. The (1×4) pixel uncompression part 49b performs uncompression processing corresponding to the (1×4) pixel compression on the post-correction (1×4) compressed data received from the (1×4) pixel compression part 47b to generate post-correction (1×4) uncompressed compressed data. The (2+1×2) pixel uncompression part 49c performs uncompression processing corresponding to the (2+1×2) pixel compression on the compressed data received from the (2+1×2) pixel compression part 47c to generate post-correction (2+1×2) uncompressed compressed data. The (2×2) pixel uncompression part 49d performs uncompression processing corresponding to the (2×2) pixel compression on the compressed data received from the (2×2) pixel compression part 47d to generate post-correction (2×2) uncompressed compressed data. The (3+1) pixel uncompression part 49e performs uncompression processing corresponding to the (3+1) pixel compression on the compressed data received from the (3+1) pixel compression part 47e to generate post-correction (3+1) uncompressed compressed data. The (4×1) pixel uncompression part 49f performs uncompression processing corresponding to the (4×1) pixel compression on the compressed data received from the (4×1) pixel compression part 47f to generate post-correction (4×1) uncompressed compressed data.
The comparison circuit 50 selects one of the compressed data outputted from the compression circuit 42 and the compression parts 46a to 46f and 47a to 47f as the compressed data 7 to be sent to the driver 4. Here, the compressed data outputted from the compression circuit 42 is compressed data on which the overdrive processing is not performed. Moreover, each piece of the compressed data outputted from the compression parts 46a to 46f is obtained by performing the compression processing on data on which the overdrive processing has been performed by the LUT processing part but the correction processing by the correction part has not been performed, whereas each piece of the compressed data outputted from the compression parts 47a to 47f is obtained by performing the compression processing on data on which both the overdrive processing and the correction processing have been performed. The selection by the comparison circuit 50 is performed based on (1) the current frame uncompressed compressed data outputted from the uncompression circuit 44, (2) the data outputted from the uncompression parts 48a to 48f and 49a to 49f, and (3) the drive direction data 45c. The selection circuit 51 outputs the compressed data selected by the comparison circuit 50 as the compressed data 7 to be sent to the driver 4.
The selection in the comparison circuit 50 is performed as follows in the one embodiment: First, if the gradation value of the current frame uncompressed compressed data outputted from the uncompression circuit 44 and the gradation value of the no-correction overdrive processed data 45a are identical for all the subpixels of all the pixels of the object block, the comparison circuit 50 determines that the overdrive processing is unnecessary and selects the compressed data outputted from the compression circuit 42 as the compressed data 7 to be actually sent to the driver 4.
If the gradation value of the current frame uncompressed compressed data and the gradation value of the no-correction overdrive processed data 45a are different for any subpixel of any pixel of the object block, the comparison circuit 50 further selects the compressed data 7 that should be sent to the driver 4 from among pieces of the compressed data received from the lossless compression part 46a, the (1×4) pixel compression part 46b, the (2+1×2) pixel compression part 46c, the (2×2) pixel compression part 46d, the (3+1) pixel compression part 46e, the (4×1) pixel compression part 46f, the lossless compression part 47a, the (1×4) pixel compression part 47b, the (2+1×2) pixel compression part 47c, the (2×2) pixel compression part 47d, the (3+1) pixel compression part 47e, and the (4×1) pixel compression part 47f. The selection of the compressed data 7 that should be sent to the driver 4 is performed as follows:
First, the comparison circuit 50 determines whether the overdrive direction realized with pieces of the compressed data outputted from the lossless compression part 46a, the (1×4) pixel compression part 46b, the (2+1×2) pixel compression part 46c, the (2×2) pixel compression part 46d, the (3+1) pixel compression part 46e, and the (4×1) pixel compression part 46f is proper for each subpixel of each pixel of the object block. This determination is made by comparison of the no-correction uncompressed data obtained by uncompressing each piece of the compressed data (that is, pieces of the uncompressed data outputted from the lossless uncompression part 48a, the (1×4) pixel uncompression part 48b, the (2+1×2) pixel uncompression part 48c, the (2×2) pixel uncompression part 48d, the (3+1) pixel uncompression part 48e, and the (4×1) pixel uncompression part 48f) and the current frame uncompressed compressed data.
For example, consider a case where the overdrive direction shown in the drive direction data 45c for a specific subpixel of a certain specific pixel is “positive,” and an object of determination of the overdrive direction is the compressed data outputted from the lossless compression part 46a. In this case, when a value of the uncompressed data outputted from the lossless uncompression part 48a for the specific subpixel of the specific pixel is larger than or equal to the value of the current frame uncompressed compressed data of the specific subpixel of the specific pixel, the overdrive direction realized with the compressed data outputted from the lossless compression part 46a is determined to be proper; when it is not so, the overdrive direction is determined to be improper. Similarly, in the case where the overdrive direction shown in the drive direction data 45c for a specific subpixel of a certain specific pixel is “negative,” when the value of the uncompressed data outputted from the lossless uncompression part 48a for the specific subpixel of the specific pixel is smaller than the value of the current frame uncompressed compressed data of the specific subpixel of the specific pixel, the overdrive direction is determined to be proper; when it is not so, the overdrive direction is determined to be improper. Furthermore, the same determination is made on the compressed data outputted from the (1×4) pixel compression part 46b, the (2+1×2) pixel compression part 46c, the (2×2) pixel compression part 46d, the (3+1) pixel compression part 46e, and the (4×1) pixel compression part 46f. Thereby, for each piece of the compressed data outputted from the lossless compression part 46a, the (1×4) pixel compression part 46b, the (2+1×2) pixel compression part 46c, the (2×2) pixel compression part 46d, the (3+1) pixel compression part 46e, and the (4×1) pixel compression part 46f, whether the overdrive direction of all the subpixels of all the pixels of the object block is proper is determined.
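The direction determination for one subpixel can be summarized by the following sketch in C. This is only an illustrative model of the comparison described above, not a description of the actual circuit; the function names, the 8-bit gradation width, and the encoding of the drive direction data 45c as a signed integer are assumptions introduced here.

#include <stdint.h>
#include <stdbool.h>

/* Checks whether one candidate's uncompressed value preserves the
 * required overdrive direction for a single subpixel.
 * dir > 0 models the "positive" direction of the drive direction
 * data 45c, dir < 0 the "negative" direction.                       */
static bool direction_is_proper(int dir,
                                uint8_t candidate_value,
                                uint8_t current_frame_value)
{
    if (dir > 0)                     /* "positive" overdrive         */
        return candidate_value >= current_frame_value;
    if (dir < 0)                     /* "negative" overdrive         */
        return candidate_value < current_frame_value;
    return true;                     /* no overdrive requested       */
}

/* A candidate is acceptable only if the direction is proper for
 * every subpixel of every pixel of the object block
 * (4 pixels x 3 subpixels = 12 values).                             */
static bool block_direction_is_proper(const int dir[12],
                                      const uint8_t cand[12],
                                      const uint8_t cur[12])
{
    for (int i = 0; i < 12; i++)
        if (!direction_is_proper(dir[i], cand[i], cur[i]))
            return false;
    return true;
}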
If there is only one piece of the compressed data whose overdrive direction of all the subpixels of all the pixels of the object block is proper among pieces of the compressed data generated by the lossless compression part 46a, the (1×4) pixel compression part 46b, the (2+1×2) pixel compression part 46c, the (2×2) pixel compression part 46d, the (3+1) pixel compression part 46e, and the (4×1) pixel compression part 46f, the comparison circuit 50 will select the one piece of the compressed data as the compressed data 7 that should be sent to the driver 4.
If there are plural pieces of the compressed data whose overdrive direction of all the subpixels of all the pixels of the object block is proper, the compressed data whose uncompressed data obtained by uncompressing the compressed data is the closest to the no-correction overdrive processed data 45a will be selected from among the plural pieces of the compressed data. In the one embodiment, regarding each subpixel of each pixel of the object block, a difference absolute value of the value of the uncompressed data and the value of the no-correction overdrive processed data 45a is computed, and the compressed data corresponding to uncompressed data such that a sum of the difference absolute values of all the subpixels of all the pixels of the object block is the smallest is selected as the compressed data 7 that should be sent to the driver 4 from among pieces of the compressed data each of whose overdrive direction of all the subpixels of all the pixels of the object block is proper.
If the compressed data whose overdrive direction of all the subpixels of all the pixels of the object block is proper does not exist among pieces of the compressed data generated by the lossless compression part 46a, the (1×4) pixel compression part 46b, the (2+1×2) pixel compression part 46c, the (2×2) pixel compression part 46d, the (3+1) pixel compression part 46e, and the (4×1) pixel compression part 46f, the compressed data 7 that should be sent to the driver 4 will be selected from among pieces of the compressed data outputted from the lossless compression part 47a, the (1×4) pixel compression part 47b, the (2+1×2) pixel compression part 47c, the (2×2) pixel compression part 47d, the (3+1) pixel compression part 47e, and the (4×1) pixel compression part 47f.
In detail, the compressed data such that corresponding uncompressed data (that is, the uncompressed data outputted from each of the lossless uncompression part 49a, the (1×4) pixel uncompression part 49b, the (2+1×2) pixel uncompression part 49c, the (2×2) pixel uncompression part 49d, the (3+1) pixel uncompression part 49e, and the (4×1) pixel uncompression part 49f) is the closest to the no-correction overdrive processed data 45a in the pieces of the compressed data is selected as the compressed data 7 that should be sent to the driver 4. In the one embodiment, on each subpixel of each pixel of the object block, difference absolute values between the values of the uncompressed data outputted from the lossless uncompression part 49a, the (1×4) pixel uncompression part 49b, the (2+1×2) pixel uncompression part 49c, the (2×2) pixel uncompression part 49d, the (3+1) pixel uncompression part 49e, and the (4×1) pixel uncompression part 49f and the value of the no-correction overdrive processed data 45a are computed, and the compressed data corresponding to the uncompressed data such that a sum of the difference absolute values of all the subpixels of all the pixels of the object block is the smallest is selected as the compressed data 7 that should be sent to the driver 4. In this case, the compressed data 7 that should be sent to the driver 4 will be selected from among pieces of the compressed data outputted from the lossless compression part 47a, the (1×4) pixel compression part 47b, the (2+1×2) pixel compression part 47c, the (2×2) pixel compression part 47d, the (3+1) pixel compression part 47e, and the (4×1) pixel compression part 47f.
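The candidate selection described in this and the preceding paragraphs can be sketched as follows, continuing the C model above (block_direction_is_proper). The array layout, the return convention, and the integer arithmetic are assumptions for illustration only; the actual comparison circuit 50 operates on the compressed-data streams themselves.

#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>

/* Sum of absolute differences between a candidate's uncompressed
 * result and the no-correction overdrive processed data 45a over
 * the 12 subpixel values of the object block.                       */
static int block_sad(const uint8_t cand[12], const uint8_t target[12])
{
    int sad = 0;
    for (int i = 0; i < 12; i++)
        sad += abs((int)cand[i] - (int)target[i]);
    return sad;
}

/* Returns 0..5 to select one of the no-correction candidates
 * (compression parts 46a to 46f) and 6..11 to select one of the
 * post-correction candidates (compression parts 47a to 47f).        */
static int select_candidate(const uint8_t no_corr[6][12],
                            const uint8_t post_corr[6][12],
                            const uint8_t target45a[12],
                            const int     dir45c[12],
                            const uint8_t cur_frame[12])
{
    int best = -1, best_sad = 0;

    /* First pass: no-correction candidates whose overdrive
     * direction is proper for the whole block, smallest SAD wins.   */
    for (int k = 0; k < 6; k++) {
        if (!block_direction_is_proper(dir45c, no_corr[k], cur_frame))
            continue;
        int sad = block_sad(no_corr[k], target45a);
        if (best < 0 || sad < best_sad) { best = k; best_sad = sad; }
    }
    if (best >= 0)
        return best;

    /* Fallback: no proper candidate exists, so pick the
     * post-correction candidate with the smallest SAD.              */
    for (int k = 0; k < 6; k++) {
        int sad = block_sad(post_corr[k], target45a);
        if (best < 0 || sad < best_sad) { best = 6 + k; best_sad = sad; }
    }
    return best;
}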
Then, selection of the compression processing in the compression circuit 42 and details of each compression processing operation (the lossless compression, the (1×4) pixel compression, the (2+1×2) pixel compression, the (2×2) pixel compression, the (3+1) pixel compression, and the (4×1) pixel compression) will be explained. In the following explanation, the gradation values of the R subpixels of the pixels A, B, C and D are described as RA, RB, RC, and RD, respectively, the gradation values of the G subpixels of the pixels A, B, C and D are described as GA, GB, GC, and GD, respectively, and the gradation values of the B subpixels of the pixels A, B, C and D are described as BA, BB, BC, and BD, respectively.
In detail, if the gradation values of the image data of the four pixels of the object block correspond to one of the following four patterns (1) to (4), the lossless compression will be performed:
When the gradation values of the image data of the four pixels of the object block satisfy the following condition (1a), the lossless compression is performed. Condition (1a): RA=RB=RC=RD, GA=GB=GC=GD, and BA=BB=BC=BD. In this case, the gradation values of the image data of the four pixels of the object block are three kinds.
(2) Gradation Values of the R Subpixel, the G Subpixel, and the B Subpixel are Identical Among the Four Pixels
When the gradation values of the image data of the four pixels of the object block satisfy the following condition (2a), the lossless compression is performed. Condition (2a): RA=GA=BA, RB=GB=BB, RC=GC=BC, and RD=GD=BD. In this case, the gradation values of the image data of the four pixels of the object block are four kinds.
When any of the below-mentioned three conditions (3a) to (3c) is satisfied, the lossless compression is performed: Condition (3a): GA=GB=GC=GD=BA=BB=BC=BD. Condition (3b): BA=BB=BC=BD=RA=RB=RC=RD. Condition (3c): RA=RB=RC=RD=GA=GB=GC=GD. In this case, the gradation values of the image data of the four pixels of the object block are five kinds.
Also when any of the below-mentioned three conditions (4a) to (4c) is satisfied, the lossless compression is performed. Condition (4a): GA=GB=GC=GD, RA=BA, RB=BB, RC=BC, and RD=BD. Condition (4b): BA=BB=BC=BD, RA=GA, RB=GB, RC=GC, and RD=GD. Condition (4c): RA=RB=RC=RD, GA=BA, GB=BB, GC=BC, and GD=BD. In this case, the gradation values of the image data of the four pixels are five kinds.
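As a rough illustration, the four patterns can be tested as in the following C sketch; the block_t layout and the helper names are assumptions made here and are not part of the described circuit.

#include <stdint.h>
#include <stdbool.h>

/* Gradation values of the object block: index 0..3 = pixels A..D.  */
typedef struct { uint8_t r[4], g[4], b[4]; } block_t;

static bool all_equal(const uint8_t v[4])
{ return v[0] == v[1] && v[1] == v[2] && v[2] == v[3]; }

static bool per_pixel_equal(const uint8_t x[4], const uint8_t y[4])
{ return x[0] == y[0] && x[1] == y[1] && x[2] == y[2] && x[3] == y[3]; }

/* True when the block matches one of the patterns (1) to (4), so
 * that at most five distinct values occur and the lossless
 * compression can be used.                                          */
static bool lossless_applicable(const block_t *p)
{
    /* (1a): each color is constant over the four pixels            */
    if (all_equal(p->r) && all_equal(p->g) && all_equal(p->b)) return true;
    /* (2a): R = G = B within each pixel                            */
    if (per_pixel_equal(p->r, p->g) && per_pixel_equal(p->g, p->b)) return true;
    /* (3a)-(3c): two colors constant and equal to each other       */
    if (all_equal(p->g) && all_equal(p->b) && p->g[0] == p->b[0]) return true;
    if (all_equal(p->b) && all_equal(p->r) && p->b[0] == p->r[0]) return true;
    if (all_equal(p->r) && all_equal(p->g) && p->r[0] == p->g[0]) return true;
    /* (4a)-(4c): one color constant, the other two equal per pixel */
    if (all_equal(p->g) && per_pixel_equal(p->r, p->b)) return true;
    if (all_equal(p->b) && per_pixel_equal(p->r, p->g)) return true;
    if (all_equal(p->r) && per_pixel_equal(p->g, p->b)) return true;
    return false;
}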
When the lossless compression is not performed, the compression processing is selected according to the correlation among the four pixels. More specifically, the shape recognition part 42g of the compression circuit 42 determines to which case among the following cases the gradation value of each subpixel of the four pixels of the object block corresponds: Case A: A correlation among the image data of an arbitrary combination of pixels in the four pixels is low. Case B: A high correlation exists between the image data of two pixels, and the image data of the other two pixels have low correlations with the previous two pixels and have a low correlation with each other. Case C: A high correlation exists among the image data of the four pixels. Case D: A high correlation exists among the image data of three pixels, and the image data of the other one pixel has low correlations with the previous three pixels. Case E: A high correlation exists between the image data of two pixels and a high correlation exists between the image data of the other two pixels.
In detail, when the following condition (A) does not hold true for any combination of i and j such that i∈{A, B, C, D}, j∈{A, B, C, D}, i≠j, the shape recognition part 42g of the compression circuit 42 determines that the status corresponds to Case A (that is, a correlation among the image data of arbitrarily combined pixels from among the four pixels is low) (Step S02). Condition (A): |Ri-Rj|≦Th1, |Gi-Gj|≦Th1, and |Bi-Bj|≦Th1. When the status corresponds to Case A, the shape recognition part 42g selects the (1×4) pixel compression.
When it is determined that the status does not correspond to Case A, the shape recognition part 42g specifies two pixels of a first pair and two pixels of a second pair for the four pixels, and determines for all combinations thereof whether the following condition is satisfied: a difference of the image data between the two pixels of the first pair is smaller than the prescribed value and a difference of the image data between the two pixels of the second pair is smaller than the prescribed value (Step S03). More specifically, the shape recognition part 42g determines whether any of the following conditions (B1) to (B3) holds true (Step S03). Condition (B1): |RA-RB|≦Th2, |GA-GB|≦Th2, |BA-BB|≦Th2, |RC-RD|≦Th2, |GC-GD|≦Th2, and |BC-BD|≦Th2. Condition (B2): |RA-RC|≦Th2, |GA-GC|≦Th2, |BA-BC|≦Th2, |RB-RD|≦Th2, |GB-GD|≦Th2, and |BB-BD|≦Th2. Condition (B3): |RA-RD|≦Th2, |GA-GD|≦Th2, |BA-BD|≦Th2, |RB-RC|≦Th2, |GB-GC|≦Th2, and |BB-BC|≦Th2.
When none of the above-mentioned conditions (B1) to (B3) holds true, the shape recognition part 42g determines that the status corresponds to Case B (that is, a high correlation exists between the image data of the two pixels, and the image data of the other two pixels have a low correlation with each other). In this case, the shape recognition part 42g selects the (2+1×2) pixel compression.
When it is determined that the status corresponds to neither of Cases A, B, the shape recognition part 42g determines whether a condition that a difference between a maximum and a minimum of the gradation values of the four pixels is smaller than the prescribed value is satisfied for each of the colors of the four pixels. More specifically, the shape recognition part 42g determines whether the following condition (C) holds true (Step S04). Condition (C): max (RA, RB, RC, RD)−min (RA, RB, RC, RD)<Th3, max (GA, GB, GC, GD)−min (GA, GB, GC, GD)<Th3, and max (BA, BB, BC, BD)−min (BA, BB, BC, BD)<Th3.
If the condition (C) holds true, the shape recognition part 42g determines that the status corresponds to Case C (high correlations exist among the four-pixel image data). In this case, the shape recognition part 42g decides to perform the (4×1) pixel compression.
On the other hand, if the condition (C) does not hold true, the shape recognition part 42g determines whether a high correlation exists among the image data of any combination of three pixels of the four pixels and the image data of the other one pixel has low correlations with the three pixels (Step S05). More specifically, the shape recognition part 42g determines whether any of the following conditions (D1) to (D4) holds true (Step S05).
Condition (D1): |RA-RB|≦Th4, |GA-GB|≦Th4, |BA-BB|≦Th4, |RB-RC|≦Th4, |GB-GC|≦Th4, |BB-BC|≦Th4, |RC-RA|≦Th4, |GC-GA|≦Th4, and |BC-BA|≦Th4.
Condition (D2): |RA-RB|≦Th4, |GA-GB|≦Th4, |BA-BB|≦Th4, |RB-RD|≦Th4, |GB-GD|≦Th4, |BB-BD|≦Th4, |RD-RA|≦Th4, |GD-GA|≦Th4, and |BD-BA|≦Th4.
Condition (D3): |RA-RC|≦Th4, |GA-GC|≦Th4, |BA-BC|≦Th4, |RC-RD|≦Th4, |GC-GD|≦Th4, |BC-BD|≦Th4, |RD-RA|≦Th4, |GD-GA|≦Th4, and |BD-BA|≦Th4.
Condition (D4): |RB-RC|≦Th4, |GB-GC|≦Th4, |BB-BC|≦Th4, |RC-RD|≦Th4, |GC-GD|≦Th4, |BC-BD|≦Th4, |RD-RB|≦Th4, |GD-GB|≦Th4, and |BD-BB|≦Th4.
If any of the conditions (D1) to (D4) holds true, the shape recognition part 42g will determine that the status corresponds to Case D (that is, a high correlation exists among the image data of three pixels and the image data of the other one pixel has a low correlation with those three pixels). In this case, the shape recognition part 42g decides to perform the (3+1) pixel compression.
If none of the above-mentioned conditions (D1) to (D4) holds true, the shape recognition part 42g determines that the status corresponds to Case E (that is, a high correlation exists between the image data of two pixels and a high correlation exists between the image data of the other two pixels). In this case, the shape recognition part 42g decides to perform the (2×2) pixel compression.
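The decision flow from Case A to Case E can be summarized in the following C sketch, which reuses the block_t type from the lossless sketch above. The thresholds Th1 to Th4 are passed in as parameters; everything else (names and the internal ordering of the checks) is an illustrative assumption, not the circuit of the shape recognition part 42g itself.

#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>

typedef enum { CASE_A, CASE_B, CASE_C, CASE_D, CASE_E } corr_case_t;

/* True when the image data of pixels i and j differ by at most th
 * for every color (the comparison used in conditions (A) to (D4)). */
static bool close_pair(const block_t *p, int i, int j, int th)
{
    return abs(p->r[i] - p->r[j]) <= th &&
           abs(p->g[i] - p->g[j]) <= th &&
           abs(p->b[i] - p->b[j]) <= th;
}

static int span(const uint8_t v[4])          /* max - min of a color */
{
    int mn = v[0], mx = v[0];
    for (int i = 1; i < 4; i++) {
        if (v[i] < mn) mn = v[i];
        if (v[i] > mx) mx = v[i];
    }
    return mx - mn;
}

static corr_case_t recognize_shape(const block_t *p,
                                   int th1, int th2, int th3, int th4)
{
    /* Step S02, Case A: condition (A) holds for no pair of pixels.  */
    bool any_pair = false;
    for (int i = 0; i < 4; i++)
        for (int j = i + 1; j < 4; j++)
            if (close_pair(p, i, j, th1)) any_pair = true;
    if (!any_pair) return CASE_A;                /* (1x4) compression */

    /* Step S03, Case B: none of the pairings (B1) to (B3) holds.    */
    bool b1 = close_pair(p, 0, 1, th2) && close_pair(p, 2, 3, th2); /* AB|CD */
    bool b2 = close_pair(p, 0, 2, th2) && close_pair(p, 1, 3, th2); /* AC|BD */
    bool b3 = close_pair(p, 0, 3, th2) && close_pair(p, 1, 2, th2); /* AD|BC */
    if (!b1 && !b2 && !b3) return CASE_B;        /* (2+1x2) compression */

    /* Step S04, Case C: condition (C), max-min small for each color. */
    if (span(p->r) < th3 && span(p->g) < th3 && span(p->b) < th3)
        return CASE_C;                           /* (4x1) compression */

    /* Step S05, Case D: one of the triples (D1) to (D4) is close.    */
    static const int tri[4][3] = { {0,1,2}, {0,1,3}, {0,2,3}, {1,2,3} };
    for (int t = 0; t < 4; t++) {
        const int *q = tri[t];
        if (close_pair(p, q[0], q[1], th4) &&
            close_pair(p, q[1], q[2], th4) &&
            close_pair(p, q[2], q[0], th4))
            return CASE_D;                       /* (3+1) compression */
    }
    return CASE_E;                               /* (2x2) compression */
}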
The shape recognition part 42g selects any of the (1×4) pixel compression, the (2+1×2) pixel compression, the (2×2) pixel compression, the (3+1) pixel compression, or the (4×1) pixel compression based on the recognition result of correlation as described above. According to the selection result thus obtained, selection of the compressed data outputted from the compression circuit 42 and selection of the compressed data in the comparison circuit 50 are performed.
Then, regarding each of the lossless compression, the (1×4) pixel compression, the (2+1×2) pixel compression, the (2×2) pixel compression, the (3+1) pixel compression, and the (4×1) pixel compression, details of the compression processing and details of the uncompression processing will be explained.
In this embodiment, the lossless compression is performed by rearranging the gradation values of respective subpixels of the pixel of the object block.
The compression type recognition bit is data indicating a type of the compression processing used for compression and five bits are assigned to the compression type recognition bit for the lossless compressed data. In this embodiment, a value of the compression type recognition bit of the lossless compressed data is “11111.”
The color type data is data indicating to which pattern of the eight patterns corresponding to the conditions (1a), (2a), (3a) to (3c), and (4a) to (4c) described above the gradation values of the image data of the four pixels of the object block correspond.
The image data pieces #1 to #5 are data obtained by rearranging the data values of the image data of the pixels of the object block. Each of the image data pieces #1 to #5 is eight-bit data. As described above, since the data values of the image data of the four pixels of the object block are of five kinds or fewer, all the data values can be stored in the image data pieces #1 to #5.
Uncompression of the compressed data generated by the above-mentioned lossless compression is performed by rearranging the image data pieces #1 to #5 referring to the color type data. Since the color type data describes to which pattern among the eight patterns the gradation values of the object block correspond, the original gradation values of all the subpixels can be restored by the rearrangement without loss.
The RA, GA, and BA data pieces are bit plane reduction data obtained by performing processing of reducing the number of bit planes on the gradation values of R, G, and B subpixels of the pixel A. The RB, GB, and BB data pieces are bit plane reduction data obtained by performing processing of reducing the number of bit planes on the gradation values of the R, G, and B subpixels of the pixel B. Similarly, the RC, GC, and BC data pieces are bit plane reduction data obtained by performing processing of reducing the number of bit planes on the gradation values of the R, G, and B subpixels of the pixel C. The RD, GD, and BD data pieces are bit plane reduction data obtained by performing processing of reducing the number of bit planes on the gradation values of the R, G, and B subpixels of the pixel D.
In this embodiment, only the BD data piece corresponding to the B subpixel of the pixel D is three-bit data, and the other pieces of data are four-bit data. In this bit allocation, the total number of bits, including the compression type recognition bit, is 48 bits.
Furthermore, rounding is performed and, thereby, the RA, GA, and BA data pieces, the RB, GB, and BB data pieces, the RC, GC, and BC data pieces, and the RD, GD, and BD data pieces are generated. Here, the rounding means processing in which a value of 2^(n−1) is added to the data and then the lower n bits are omitted, where n is the number of omitted bits. On the gradation value of the B subpixel of the pixel D, processing of adding a value 16 and subsequently omitting the lower five bits is performed. A value “0” is added, as the compression type recognition bit, to the RA, GA, and BA data pieces, the RB, GB, and BB data pieces, the RC, GC, and BC data pieces, and the RD, GD, and BD data pieces that are generated as described above, whereby the (1×4) compressed data is generated.
Furthermore, subtraction of the error data α is performed and the uncompression of the (1×4) compressed data is completed. Thereby, the (1×4) uncompressed data showing the gradation of each subpixel of the pixels A to D is generated. The (1×4) uncompressed data is data in which the original image data is generally restored. If the gradation values of the subpixels of the pixels A to D of the (1×4) uncompressed data of
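The bit plane reduction of the (1×4) pixel compression and the corresponding restoration can be modeled with the two C helpers below. The saturation to the 8-bit range is an assumption added here; the specification above only defines the addition of the error data α, the rounding by 2^(n−1), the omission of the lower n bits, the n-bit bit advance, and the subtraction of α.

#include <stdint.h>

/* Bit plane reduction used in the (1x4) pixel compression:
 * add the error data alpha, round by adding 2^(n-1), then omit the
 * lower n bits (n = 4 for most data pieces, n = 5 for the BD data). */
static uint8_t reduce_bit_planes(uint8_t grad, int alpha, int n)
{
    int v = grad + alpha + (1 << (n - 1));
    if (v > 255) v = 255;            /* saturation: an assumption    */
    return (uint8_t)(v >> n);
}

/* Corresponding uncompression: n-bit bit advance followed by
 * subtraction of the same error data alpha.                          */
static uint8_t restore_bit_planes(uint8_t reduced, int alpha, int n)
{
    int v = ((int)reduced << n) - alpha;
    if (v < 0)   v = 0;              /* clamping: an assumption      */
    if (v > 255) v = 255;
    return (uint8_t)v;
}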
As shown in the figure, the (2+1×2) compressed data is comprised of the compression type recognition bit, the shape recognition data, the R representative value, the G representative value, the B representative value, the β comparison data, the size recognition data, the Ri, Gi, and Bi data pieces, and the Rj, Gj, and Bj data pieces.
The compression type recognition bit is data indicating the type of the compression processing used for compression, and two bits are assigned to the compression type recognition bit in the (2+1×2) compressed data. In this embodiment, a value of the compression type recognition bit of the (2+1×2) compressed data is “10.”
The shape recognition data is three-bit data indicating which two pixels have a high correlation between the image data thereof in the pixels A to D. When the (2+1×2) pixel compression is used, the correlation between the image data of two pixels from among the pixels A to D is high, and the remaining two pixels have a low correlation with the image data of other pixels. Therefore, combinations of two pixels whose correlation between the image data is high are the below-mentioned six cases: pixels A, C; pixels B, D; pixels A, B; pixels C, D; pixels B, C; and pixels A, D. The shape recognition data indicates to which combination in these six combinations the two pixels having a high correlation between the image data correspond by three bits.
The R representative value, the G representative value, and the B representative value are values that represent the gradation values of the R subpixels, the G subpixels, and the B subpixels of two pixels having a high correlation, respectively. As illustrated in
The β comparison data is data indicating whether a difference between the gradation values of the identical color subpixels of the two pixels having a high correlation is larger than the prescribed threshold β. In the (2+1×2) compressed data, the β comparison data indicates whether the difference of the gradation values of the R subpixels of the two pixels having a high correlation and the difference of the gradation values of the G subpixels of the two pixels having a high correlation are larger than the prescribed threshold β.
On the other hand, the size recognition data is data indicating which gradation value of the R subpixels of two pixels is larger than that of the other and which gradation value of the G subpixels of two pixels is larger than that of the other in the two pixels having a high correlation. The size recognition data corresponding to the R subpixel is generated only when the difference of the gradation values of the R subpixels of two pixels having a high correlation is larger than the threshold β; the size recognition data corresponding to the G subpixel is generated only when the difference of the gradation values of the G subpixels of two pixels having a high correlation is larger than the threshold β. Therefore, the size recognition data is zero-bit to two-bit data.
The Ri, Gi, and Bi data pieces and the Rj, Gj, and Bj data pieces are bit plane reduction data obtained by performing processing of reducing the number of bit planes on the gradation values of the R, G, and B subpixels of the two pixels having a low correlation. Each of the Ri, Gi, and Bi data pieces and the Rj, Gj, and Bj data pieces is four-bit data.
Below, the (2+1×2) pixel compression will be explained for an example in which the correlation between the image data of the pixels A, B is high and the image data of the pixels C, D have low correlations.
First, the compression processing of the image data of the pixels A, B (correlation is high) will be explained. First, an average of the gradation values is computed for each of the R subpixel, the G subpixel, and the B subpixel. Averages Rave, Gave, and Bave of the gradation values of the R subpixel, the G subpixel, and the B subpixel are computed by the following formulae: Rave=(RA+RB+1)/2, Gave=(GA+GB+1)/2, and Bave=(BA+BB+1)/2.
Furthermore, a comparison as to whether the difference |RA-RB| of the gradation values of the R subpixels and the difference |GA-GB| of the gradation values of the G subpixels of the pixels A, B are larger than the prescribed threshold β is made. These comparison results are described in the (2+1×2) compressed data as the β comparison data.
Furthermore, the size recognition data is created by the following procedure. When the difference |RA-RB| of the gradation values of the R subpixels of the pixels A, B is larger than the threshold β, which gradation value of the R subpixel is larger than that of the other between the pixels A, B is described in the size recognition data. When the difference |RA-RB| of the gradation values of the R subpixels of the pixels A, B is smaller than or equal to the threshold β, a size relation of the gradation values of the R subpixels of the pixels A, B is not described in the size recognition data. Similarly, when the difference |GA-GB| of the gradation values of the G subpixels of the pixels A, B is larger than the threshold β, which gradation value of the G subpixel is larger than that of the other between the pixels A, B is described in the size recognition data. When the difference |GA-GB| of the gradation values of the G subpixels of the pixels A, B is smaller than or equal to the threshold β, a size relation of the gradation values of the G subpixels of the pixels A, B is not described in the size recognition data.
In the example of
Then, the error data α is added to the averages Rave, Gave, and Bave of the gradation values of the R subpixel, the G subpixel, and the B subpixel. In this embodiment, the error data α is decided from the coordinates of two pixels of each combination using the basic matrix. Computation of the error data α will be described separately later. Below, in this embodiment, an explanation will be given assuming that the error data α determined for the pixels A, B is zero.
Furthermore, the rounding is performed to compute the R representative value, the G representative value, and the B representative value. A numerical value that is added in the rounding and the number of bits that are omitted by the round-down processing are decided according to the size relation between the differences |RA-RB|, |GA-GB|, and |BA-BB| of the gradation values and the threshold β, and the compressibility. Regarding the R subpixels, when the difference |RA-RB| of the gradation values of the R subpixels is larger than the threshold β, processing of adding a value four to the average Rave of the gradation values of the R subpixels of the pixels A, B and subsequently omitting the lower three bits is performed and, thereby, the R representative value is computed. When it is not so, processing of adding a value two to the average Rave and subsequently omitting the lower two bits is performed and, thereby, the R representative value is computed. Regarding the G subpixels, similarly, when the difference |GA-GB| of the gradation values is larger than the threshold β, processing of adding a value four to the average Gave of the gradation values of the G subpixels and subsequently omitting the lower three bits is performed and, thereby, the G representative value is computed. When it is not so, processing of adding a value two to the average Gave and subsequently omitting the lower two bits is performed and, thereby, the G representative value is computed. In the example of
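For the R and G representative values of the correlated pair, for which the β comparison data exists, the computation just described can be sketched as follows; the struct layout and the saturation are assumptions, and the handling of the B representative value, which has no β comparison data, is not reproduced here.

#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>

/* Representative value of one color of the correlated pixel pair in
 * the (2+1x2) pixel compression: average with carry, error data
 * alpha, then beta-dependent rounding (three bits are omitted when
 * the two gradation values differ by more than beta, otherwise two). */
typedef struct {
    uint8_t rep;          /* representative value after round-down   */
    bool    over_beta;    /* beta comparison data for this color     */
    bool    first_larger; /* size recognition data (meaningful only
                             when over_beta is true)                 */
} pair_rep_t;

static pair_rep_t make_pair_rep(uint8_t va, uint8_t vb, int alpha, int beta)
{
    pair_rep_t out;
    int ave = (va + vb + 1) / 2;            /* e.g. Rave=(RA+RB+1)/2 */
    out.over_beta    = abs((int)va - (int)vb) > beta;
    out.first_larger = va > vb;
    int n = out.over_beta ? 3 : 2;          /* bits to omit          */
    int v = ave + alpha + (1 << (n - 1));   /* rounding              */
    if (v > 255) v = 255;                   /* saturation: assumption */
    out.rep = (uint8_t)(v >> n);
    return out;
}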
On the other hand, on the image data of the pixels C, D (correlation is low), the same processing as the (1×4) pixel compression is performed. That is, for each of the pixels C, D, the dithering using the dither matrix is performed independently and, thereby, the numbers of bit planes of the image data of the pixels C, D are reduced. In detail, first, processing of adding the error data α to each of the image data of the pixels C, D is performed. As described above, the error data α of each pixel is computed from the coordinates of the pixel. Below, an explanation will be given assuming that the error data α determined for the pixels C, D are 10 and 15, respectively.
Furthermore, the rounding is performed to generate the RC, GC, and BC data pieces and the RD, GD, and BD data pieces. In detail, processing of adding a value eight to each set of the gradation values of the R, G, and B subpixels of each of the pixels C, D and subsequently omitting lower four bits is performed. Thereby, the RC, GC, and BC data pieces and the RD, GD, and BD data pieces are computed.
The (2+1×2) compressed data is generated by adding the compression type recognition bit and the shape recognition data to the R representative value, the G representative value, the B representative value, the size recognition data, the β comparison result data, the RC, GC, and BC data pieces, and the RD, GD, and BD data pieces all of which are generated as described above.
On the other hand, the uncompression processing of the (2+1×2) compressed data is performed as follows.
In uncompression of the (2+1×2) compressed data, first, bit advance processing is performed on the R representative value, the G representative value, and the B representative value. However, the number of bits of the bit advance processing is decided depending on the size relation, described in the β comparison data, between the differences |RA-RB| and |GA-GB| of the gradation values and the threshold β, and on the compressibility. When the difference |RA-RB| of the gradation values of the R subpixels is larger than the threshold β, three-bit bit advance processing is performed on the R representative value; when it is not so, two-bit bit advance processing is performed. Similarly, when the difference |GA-GB| of the gradation values of the G subpixels is larger than the threshold β, the three-bit bit advance processing is performed on the G representative value; when it is not so, the two-bit bit advance processing is performed. In the example of
After the above-mentioned bit advance processing is completed, subtraction of the error data α is performed on each of the R representative value, the G representative value, and the B representative value, and further processing of restoring the gradation values of the R, G, and B subpixels of the pixels A, B of the (2+1×2) uncompressed data from the R representative value, the G representative value, and the B representative value is performed.
In restoration of the gradation values of the R subpixels of the pixels A, B of the (2+1×2) uncompressed data, the β comparison data and the size recognition data are used. When the β comparison data describes that the difference |RA-RB| of the gradation values of the R subpixels is larger than the threshold β, a value obtained by adding a constant value five to the R representative value is restored as the gradation value of the R subpixel of the pixel that is described to be large in the size recognition data in the pixels A, B, and a value obtained by subtracting a constant value five from the R representative value is restored as the gradation value of the R subpixel of the pixel that is described to be small in the size recognition data. On the other hand, when the difference |RA-RB| of the gradation values of the R subpixels is smaller than the threshold β, the gradation values of the R subpixels of the pixels A, B are restored so as to agree with the R representative value. In the example of
However, since the β comparison data and the size recognition data do not exist for the B subpixels of the pixels A, B, restoration is performed assuming that values of the B subpixels of the pixels A, B both agree with the B representative value regardless of the β comparison data and the size recognition data.
By the above procedure, the restoration of the gradation values of the R subpixels, the G subpixels, and the B subpixels of the pixels A, B is completed.
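The restoration of the correlated pair can be sketched with the counterpart below, reusing pair_rep_t from the compression sketch; the constant value five and the selection by the size recognition data follow the description above, while the clamping is again an assumption.

/* Restores the two gradation values of one color of the correlated
 * pair of the (2+1x2) compressed data from its representative value. */
static void restore_pair(const pair_rep_t *in, int alpha,
                         uint8_t *va, uint8_t *vb)
{
    int n = in->over_beta ? 3 : 2;           /* bit advance width    */
    int base = ((int)in->rep << n) - alpha;  /* subtract error data  */
    if (base < 0)   base = 0;                /* clamping: assumption */
    if (base > 255) base = 255;
    if (!in->over_beta) {                    /* both agree with rep  */
        *va = *vb = (uint8_t)base;
        return;
    }
    int hi = base + 5, lo = base - 5;        /* constant value five  */
    if (hi > 255) hi = 255;
    if (lo < 0)   lo = 0;
    *va = (uint8_t)(in->first_larger ? hi : lo);
    *vb = (uint8_t)(in->first_larger ? lo : hi);
}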
On the other hand, in the uncompression processing on the image data of the pixels C, D (correlation is low), the same processing as the above-mentioned uncompression processing of the (1×4) compressed data is performed. In the uncompression processing on the image data of the pixels C, D, first, the four-bit bit advance processing is performed on each of the RC, GC, and BC data pieces, and the RD, GD, and BD data pieces. Furthermore, subtraction of the error data α is performed and, thereby, the gradation values of the R subpixels, the G subpixels, and the B subpixels of the pixels C, D are restored.
By the above procedure, the restoration of the gradation values of the R subpixels, the G subpixels, and the B subpixels of the pixels C, D is completed. The gradation values of the R subpixels, the G subpixels, and the B subpixels of the pixels C, D are restored as values of eight bits.
In this embodiment, the (2×2) compressed data is comprised of the compression type recognition bit, the shape recognition data, an R representative value #1, a G representative value #1, a B representative value #1, an R representative value #2, a G representative value #2, a B representative value #2, the size recognition data, and the β comparison result data.
The compression type recognition bit is data indicating the type of the compression processing used for compression and three bits are assigned to the compression type recognition bit in the (2×2) compressed data. In this embodiment, a value of the compression type recognition bit of the (2×2) compressed data is “110.”
The shape recognition data is two-bit data indicating which pair of two pixels from among the pixels A to D has a higher correlation between the image data thereof. When the (2×2) pixel compression is used, a high correlation exists between the image data of two pixels from among the pixels A to D, and a high correlation exists between the image data of the other two pixels. Therefore, combinations of two pixels whose correlation of the image data is high are following three cases: A correlation of the pixels A, B is high and a correlation of the pixels C, D is high; A correlation of the pixels A, C is high and a correlation of the pixels B, D is high; and A correlation of the pixels A, D is high and a correlation of the pixels B, C is high. The shape recognition data shows which one from among these three combinations exists by two bits.
The R representative value #1, the G representative value #1, and the B representative value #1 are values representing the gradation values of two pixels of the one pair, respectively, and the R representative value #2, the G representative value #2, and the B representative value #2 are values representing the gradation values of two pixels of the other pair, respectively. As illustrated in
The β comparison data is data indicating whether a difference of the gradation values of the R subpixels of the two pixels having a high correlation, a difference of the gradation values of the G subpixels of the two pixels having a high correlation, and a difference of the gradation values of the B subpixels of the two pixels having a high correlation are larger than the prescribed threshold β. In this embodiment, the β comparison data of the (2×2) compressed data is six-bit data such that three bits are assigned to each of the two pairs each having two pixels. On the other hand, the size recognition data is data indicating which pixel in the two pixels having a high correlation has a larger gradation value of the R subpixel, which pixel in the two pixels having a high correlation has a larger gradation value of the G subpixel, and which pixel in the two pixels having a high correlation has a larger gradation value of the B subpixel. The size recognition data corresponding to the R subpixel is generated only when the difference of the gradation values of the R subpixels of the two pixels having a high correlation is larger than the threshold β; the size recognition data corresponding to the G subpixel is generated only when the difference of the gradation values of the G subpixels of the two pixels having a high correlation is larger than the threshold β; and the size recognition data corresponding to the B subpixel is generated only when the difference of the gradation values of the B subpixels of the two pixels having a high correlation is larger than the threshold β. Therefore, the size recognition data of the (2×2) compressed data is zero- to six-bit data.
Below, the (2×2) pixel compression will be explained for an example in which the correlation between the image data of the pixels A, B is high and the correlation between the image data of the pixels C, D is high.
First, the average of the gradation values is computed for each of the R subpixel, the G subpixel, and the B subpixel. The averages Rave1, Gave1, and Bave1 of the gradation values of the R subpixel, the G subpixel, and the B subpixel of the pixels A, B and the averages Rave2, Gave2, and Bave2 of the gradation values of the R subpixel, the G subpixel, and the B subpixel of the pixels C, D are computed by the following formulae: Rave1=(RA+RB+1)/2, Gave1=(GA+GB+1)/2, Bave1=(BA+BB+1)/2, Rave2=(RC+RD+1)/2, Gave2=(GC+GD+1)/2, and Bave2=(BC+BD+1)/2.
Furthermore, a comparison is made as to whether the difference |RA-RB| of the gradation values of the R subpixels, the difference |GA-GB| of the gradation values of the G subpixels, and the difference |BA-BB| of the gradation values of the B subpixels of the pixels A, B are larger than the prescribed threshold β. Similarly, a comparison is made as to whether the difference |RC-RD| of the gradation values of the R subpixels, the difference |GC-GD| of the gradation values of the G subpixels, and the difference |BC-BD| of the gradation values of the B subpixels of the pixels C, D are larger than the prescribed threshold β. These comparison results are described in the (2×2) compressed data as the β comparison data.
Furthermore, the size recognition data is created for each of the combination of the pixels A, B and the combination of the pixels C, D.
In detail, when the difference |RA-RB| of the gradation values of the R subpixels of the pixels A, B is larger than the threshold β, it is described in the size recognition data which R subpixel of the pixels A, B has a larger gradation value. When the difference |RA-RB| of the gradation values of the R subpixels of the pixels A, B is smaller than or equal to the threshold β, the size relation of the gradation values of the R subpixels of the pixels A, B is not described in the size recognition data. Similarly, when the difference |GA-GB| of the gradation values of the G subpixels of the pixels A, B is larger than the threshold β, it is described in the size recognition data which G subpixel of the pixels A, B has a larger gradation value. When the difference |GA-GB| of the gradation values of the G subpixels of the pixels A, B is smaller than or equal to the threshold β, the size relation of the gradation values of the G subpixels of the pixels A, B is not described in the size recognition data. In addition, when the difference |BA-BB| of the gradation values of the B subpixels of the pixels A, B is larger than the threshold β, it is described in the size recognition data which B subpixel of the pixels A, B has a larger gradation value. When the difference |BA-BB| of the gradation values of the B subpixels of the pixels A, B is smaller than or equal to the threshold β, the size relation of the gradation values of the B subpixels of the pixels A, B is not described in the size recognition data.
Similarly, when the difference |RC-RD| of the gradation values of the R subpixels of the pixels C, D is larger than the threshold β, it is described in the size recognition data which R subpixel of the pixels C, D has a larger gradation value. When the difference |RC-RD| of the gradation values of the R subpixels of the pixels C, D is smaller than or equal to the threshold β, the size relation of the gradation values of the R subpixels of the pixels C, D is not described in the size recognition data. Similarly, when the difference |GC-GD| of the gradation values of the G subpixels of the pixels C, D is larger than the threshold β, it is described in the size recognition data which G subpixel of the pixels C, D has a larger gradation value. When the difference |GC-GD| of the gradation values of the G subpixels of the pixels C, D is smaller than or equal to the threshold β, the size relation of the gradation values of the G subpixels of the pixels C, D is not described in the size recognition data. In addition, when the difference |BC-BD| of the gradation values of the B subpixels of the pixels C, D is larger than the threshold β, it is described in the size recognition data which B subpixel of the pixels C, D has a larger gradation value. When the difference |BC-BD| of the gradation values of the B subpixels of the pixels C, D is smaller than or equal to the threshold β, the size relation of the gradation values of the B subpixels of the pixels C, D is not described in the size recognition data.
In the example of
Moreover, both of the gradation values of the R subpixels of the pixels C, D are 100. In this case, since the difference |RC-RD| of the gradation values is smaller than or equal to the threshold β, this fact is described in the β comparison data, and the size relation of the gradation values of the R subpixels of the pixels C, D is not described in the size recognition data. Moreover, the gradation values of the G subpixels of the pixels C, D are 80 and 85, respectively. In this case, since the difference |GC-GD| of the gradation values is larger than the threshold β, this fact is described in the β comparison data, and a fact that the gradation value of the G subpixel of the pixel D is larger than the gradation value of the G subpixel of the pixel C is described in the size recognition data. Furthermore, the gradation values of the B subpixels of the pixels C, D are 8 and 2, respectively. In this case, since the difference |BC-BD| of the gradation values is larger than the threshold β, this fact is described in the β comparison data, and a fact that the gradation value of the B subpixel of the pixel C is larger than the gradation value of the B subpixel of the pixel D is described in the size recognition data.
Furthermore, the error data α is added to the averages Rave1, Gave1, and Bave1 of the gradation values of the R subpixels, the G subpixels, and the B subpixels of the pixels A, B and the averages Rave2, Gave2, and Bave2 of the gradation values of the R subpixels, the G subpixels, and the B subpixels of the pixels C, D. In this embodiment, the error data α is decided from the coordinates of two pixels of each combination using the basic matrix that is the Bayer matrix. Computation of the error data α will be described separately later. Below, in this embodiment, an explanation will be given assuming that the error data α determined for the pixels A, B is zero.
Furthermore, the rounding and bit round-down processing are performed to compute the R representative value #1, the G representative value #1, the B representative value #1, the R representative value #2, the G representative value #2, and the B representative value #2. The rounding and the bit round-down processing are performed according to the compressibility. Regarding the pixels A, B, the number of bits that are omitted by the bit round-down processing is decided to be two bits or three bits, together with the numerical value that is added in the rounding, according to the size relation between the differences |RA-RB|, |GA-GB|, and |BA-BB| of the gradation values and the threshold β. Regarding the R subpixel, when the difference |RA-RB| of the gradation values of the R subpixels is larger than the threshold β, processing of adding a value four to the average Rave1 of the gradation values of the R subpixels and subsequently omitting the lower three bits is performed and, thereby, the R representative value #1 is computed. When it is not so, processing of adding a value two to the average Rave1 and subsequently omitting the lower two bits is performed and, thereby, the R representative value #1 is computed. As a result, the R representative value #1 becomes five bits or six bits. The computation is also the same for the G subpixel and the B subpixel. When the difference |GA-GB| of the gradation values is larger than the threshold β, processing of adding a value four to the average Gave1 of the gradation values of the G subpixels and subsequently omitting the lower three bits is performed and, thereby, the G representative value #1 is computed. When it is not so, processing of adding a value two to the average Gave1 and subsequently omitting the lower two bits is performed and, thereby, the G representative value #1 is computed. Furthermore, when the difference |BA-BB| of the gradation values is larger than the threshold β, processing of adding a value four to the average Bave1 of the B subpixels and subsequently omitting the lower three bits is performed and, thereby, the B representative value #1 is computed. When it is not so, processing of adding a value two to the average Bave1 and subsequently omitting the lower two bits is performed and, thereby, the B representative value #1 is computed.
In the example of
On a combination of the pixels C, D, the same processing is performed and, thereby, the R representative value #2, the G representative value #2, and the B representative value #2 are computed. However, regarding the G subpixels of the pixels C, D, the number of bits omitted by the bit round-down processing is one bit or two bits, with a corresponding numerical value added in the rounding. When the difference |GC-GD| of the gradation values is larger than the threshold β, processing of adding a value two to the average Gave2 of the G subpixels and subsequently omitting the lower two bits is performed and, thereby, the G representative value #2 is computed. When it is not so, processing of adding a value unity to the average Gave2 and subsequently omitting the lower one bit is performed and, thereby, the G representative value #2 is computed.
In the example of
By the above procedure, the compression processing by the (2×2) pixel compression is completed.
On the other hand, the uncompression processing of the (2×2) compressed data is performed as follows.
First, the bit advance processing is performed on the R representative value #1, the G representative value #1, and the B representative value #1. The number of bits of the bit advance processing is decided according to the size relation of the differences |RA-RB|, |GA-GB|, and |BA-BB| of the gradation values and the threshold β and the compressibility that are described in the β comparison data. When the difference |RA-RB| of the gradation values of the R subpixels of the pixels A, B is larger than the threshold β, the three-bit bit advance processing is performed on the R representative value #1; when it is not so, the two-bit bit advance processing is performed. Similarly, when the difference |GA-GB| of the gradation values of the G subpixels of the pixels A, B is larger than the threshold β, the three-bit bit advance processing is performed on the G representative value #1; when it is not so, the two-bit bit advance processing is performed. Furthermore, when the difference |BA-BB| of the gradation values of the B subpixels of the pixels A, B is larger than the threshold β, the three-bit bit advance processing is performed on the B representative value #1; when it is not so, the two-bit bit advance processing is performed. In the example of
The same bit advance processing is performed on the R representative value #2, the G representative value #2, and the B representative value #2. However, the number of bits of the bit advance processing of the G representative value #2 is selected from one bit and two bits. When the difference |GC-GD| of the gradation values of the G subpixels of the pixels C, D is larger than the threshold β, the two-bit bit advance processing is performed on the G representative value #2; when it is not so, one-bit bit advance processing is performed. In the example of
Furthermore, after the error data α is subtracted from each of the R representative value #1, the G representative value #1, the B representative value #1, the R representative value #2, the G representative value #2, and the B representative value #2, processing of restoring the gradation values of the R, G, and B subpixels of the pixels A, B and the gradation values of the R, G, and B subpixels of the pixels C, D from these representative values is performed.
In the restoration of the gradation values, the β comparison data and the size recognition data are used. In the β comparison data, when the difference |RA-RB| of the gradation values of the R subpixels of the pixels A, B is described to be larger than the threshold β, a value obtained by adding a constant value five to the R representative value #1 is restored as the gradation value of the R subpixel that is described to be large in the size recognition data in the pixels A, B, and a value obtained by subtracting a constant value five from the R representative value #1 is restored as the gradation value of the R subpixel that is described to be small in the size recognition data. When the difference |RA-RB| of the gradation values of the R subpixels of the pixels A, B is smaller than the threshold β, the restoration is performed assuming that the gradation values of the R subpixels of the pixels A, B agree with the R representative value #1. Similarly, the gradation values of the G subpixels and the B subpixels of the pixels A, B and the gradation values of the R subpixels, the G subpixels, and the B subpixels of the pixels C, D are also restored by the same procedure.
In the example of
By the above procedure, the restoration of the gradation values of the R subpixel, the G subpixel, and the B subpixel of the pixels A to D is completed. If the image data of pixels A to D in the right column of
The compression type recognition bit is data indicating the type of the compression processing used for compression, and five bits are assigned to the compression type recognition bit in the compressed data generated by the (3+1) pixel compression. In this embodiment, a value of the compression type recognition bit of the compressed data generated by the (3+1) pixel compression is “11110.”
The R representative value, the G representative value, and the B representative value are values that represent the gradation values of the R subpixels, the G subpixels, and the B subpixels of three pixels having a high correlation, respectively. The R representative value, the G representative value, and the B representative value are computed as averages of the gradation values of the R subpixels, the G subpixels, and the B subpixels of the three pixels having the high correlation, respectively. In the example of
On the other hand, the Ri, Gi, and Bi data pieces are bit plane reduction data obtained by performing processing of reducing the number of bit planes on the gradation values of the R, G, and B subpixels of the remaining one pixel. In this embodiment, each of the Ri, Gi, and Bi data pieces is six-bit data.
The padding data is added in order to make the compressed data generated by the (3+1) pixel compression have the same number of bits as the compressed data generated by the other compression processing. In this embodiment, the padding data is one-bit data.
Below, the (3+1) pixel compression will be explained for an example in which the correlation among the image data of the pixels A, B, and C is high and the image data of the pixel D has a low correlation with them.
First, an average of the gradation values of the R subpixels, an average of the gradation values of the G subpixels, and an average of the gradation values of the B subpixels of the pixels A, B, and C are computed, respectively, and the computed averages are decided as the R representative value, the G representative value, and the B representative value, respectively. The R representative value, the G representative value, and the B representative value are computed by the following formulae: Rave1=(RA+RB+RC)/3, Gave1=(GA+GB+GC)/3, and Bave1=(BA+BB+BC)/3.
On the other hand, on the image data of the pixel D (correlation is low), the same processing as the (1×4) pixel compression is performed. That is, the dithering using the dither matrix is performed on the pixel D independently and, thereby, the number of bit planes of the image data of the pixel D is reduced. In detail, first, processing of adding the error data α to each of the image data of the pixel D is performed. As described above, the error data α of each pixel is computed from the coordinates of the pixel. Below, an explanation will be given assuming that the error data α determined for the pixel D is three.
Furthermore, the rounding is performed to generate the RD, GD, and BD data pieces. In detail, processing in which a value two is added to each of the gradation values of the R, G, and B subpixels of the pixel D, and subsequently the lower two bits are omitted is performed. Thereby, the RD, GD, and BD data pieces are computed.
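For one color, the (3+1) pixel compression just described reduces to the small sketch below; the struct name and the saturation are assumptions, and the pixel D is taken as the uncorrelated pixel as in the example.

#include <stdint.h>

/* (3+1) pixel compression for one color: the representative value
 * is the average over the three correlated pixels A, B, C, and the
 * remaining pixel D is bit-plane reduced to six bits (error data
 * alpha, a value two added, the lower two bits omitted).             */
typedef struct { uint8_t rep; uint8_t rest; } three_plus_one_t;

static three_plus_one_t compress_3p1(uint8_t a, uint8_t b, uint8_t c,
                                     uint8_t d, int alpha_d)
{
    three_plus_one_t out;
    out.rep = (uint8_t)((a + b + c) / 3);   /* e.g. (RA+RB+RC)/3     */
    int v = d + alpha_d + 2;
    if (v > 255) v = 255;                   /* saturation: assumption */
    out.rest = (uint8_t)(v >> 2);
    return out;
}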
On the other hand, the uncompression processing of the compressed data generated by the (3+1) pixel compression is performed as follows.
In the uncompression processing of the compressed data compressed by the (3+1) pixel compression, the uncompressed data is generated on the assumption that the gradation value of the R subpixel of each of the pixels A, B, and C agrees with the R representative value, the gradation value of the G subpixel of each of the pixels A, B, and C agrees with the G representative value, and the gradation value of the B subpixel of each of the pixels A, B, and C agrees with the B representative value.
On the other hand, on the pixel D, the same processing as the above-mentioned uncompression processing of the (1×4) compressed data is performed. In the uncompression processing on the image data of the pixel D, first, the two-bit bit advance processing is performed on each of the RD, GD, and BD data pieces. Furthermore, subtraction of the error data α is performed and, thereby, the gradation values of the R subpixel, the G subpixel, and the B subpixel of the pixel D are restored.
By the above procedure, the restoration of the gradation values of the R subpixel, the G subpixel, and the B subpixel of the pixel D is completed. The gradation values of the R subpixel, the G subpixel, and the B subpixel of the pixel D are restored as eight-bit values.
By the above procedure, the restoration of the gradation values of the R subpixels, the G subpixels, and the B subpixels of the pixels A to D is completed. If the image data of the pixels A to D in the right column of
As shown in the figure, the (4×1) compressed data is comprised of the compression type recognition bit, Ymin, Ydist0 to Ydist2, the address data, Cb′, and Cr′.
The compression type recognition bit is data indicating the type of the compression processing used for compression, and four bits are assigned to the compression type recognition bit in this embodiment.
Ymin, Ydist0 to Ydist2, the address data, Cb′, and Cr′ are data obtained by converting the RGB image data of the four pixels of the object block into YUV data and further performing the compression processing on the YUV data. Here, Ymin and Ydist0 to Ydist2 are data obtained from luminance data among YUV data of the four pixels of the object block, and Cr′ and Cb′ are data obtained from chrominance data. Ymin, Ydist0 to Ydist2, Cb′, and Cr′ are the representative values of the image data of the four pixels of the object block. As shown in
Below, the (4×1) pixel compression will be explained. First, the RGB image data of the pixels A to D are converted into luminance data and chrominance data.
Here, Yk is luminance data of the pixel k and Crk, Cbk are chrominance data of the pixel k. Moreover, as described above, Rk, Gk, and Bk are the gradation values of the R subpixel, the G subpixel, and the B subpixel of the pixel k, respectively.
Furthermore, Ymin, Ydist0 to Ydist2, the address data, Cb′, and Cr′ are created from the luminance data Yk and the chrominance data Crk, Cbk of the pixels A to D.
Ymin is defined as the minimum data (minimum luminance data) among pieces of the luminance data YA to YD. Moreover, Ydist0 to Ydist2 are created by performing round-down processing of two bits on differences of pieces of the remaining luminance data and the minimum luminance data Ymin. The address data is generated as data indicating which luminance data of the pixels A to D is the minimum. In the example of
Furthermore, Cr′ is generated by performing one-bit round-down processing on a sum of CrA to CrD, and similarly Cb′ is generated by performing the one-bit round-down processing on a sum of CbA to CbD. In the example of
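The packing of Ymin, Ydist0 to Ydist2, the address data, Cr′, and Cb′ can be modeled roughly as below. The RGB-to-YCrCb conversion used in the sketch (Y=(R+2G+B)/4, Cr=R−G, Cb=B−G) is only a placeholder for the conversion given in the drawing, which is not reproduced here, and the struct layout is likewise an assumption; the sketch reuses block_t from the earlier sketches.

#include <stdint.h>

typedef struct {
    uint8_t ymin;        /* minimum luminance data                    */
    uint8_t ydist[3];    /* two-bit round-down of (Yk - Ymin)         */
    uint8_t address;     /* which of the pixels A..D gives Ymin       */
    int16_t cr, cb;      /* one-bit round-down of the chrominance sums */
} quad_packed_t;

static quad_packed_t pack_4x1(const block_t *p)
{
    int y[4], cr_sum = 0, cb_sum = 0;
    for (int k = 0; k < 4; k++) {
        y[k] = (p->r[k] + 2 * p->g[k] + p->b[k]) / 4;   /* placeholder */
        cr_sum += p->r[k] - p->g[k];                     /* placeholder */
        cb_sum += p->b[k] - p->g[k];                     /* placeholder */
    }
    quad_packed_t out;
    out.address = 0;
    for (int k = 1; k < 4; k++)
        if (y[k] < y[out.address]) out.address = (uint8_t)k;
    out.ymin = (uint8_t)y[out.address];
    int idx = 0;
    for (int k = 0; k < 4; k++)
        if (k != (int)out.address)
            out.ydist[idx++] = (uint8_t)((y[k] - y[out.address]) >> 2);
    out.cr = (int16_t)(cr_sum / 2);          /* one-bit round-down    */
    out.cb = (int16_t)(cb_sum / 2);
    return out;
}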
On the other hand, in the uncompression of the (4×1) compressed data, first, the luminance data YA′ to YD′ of the pixels A to D are restored from Ymin, Ydist0 to Ydist2, and the address data.
Furthermore, the gradation values of the R, G, and B subpixels of the pixels A to D are restored from the luminance data YA′ to YD′ and the chrominance data Cr′, Cb′ by the following matrix operation:
Here, “>>2” is an operator indicating processing of omitting two bits. As will be understood from the above-mentioned formulae, the chrominance data Cr′, Cb′ are used in common in the restoration of the gradation values of the R, G, and B subpixels of the pixels A to D.
By the above procedure, the restoration of the gradation values of the R subpixel, the G subpixel, and the B subpixel of the pixels A to D is completed. If the values of the (4×1) uncompressed data of the pixels A to D in the right column of
Below, computation of the error data α used in the (1×4) pixel compression, the (2+1×2) pixel compression, the (2×2) pixel compression, and the (3+1) pixel compression will be explained.
The error data α used in the bit plane reduction processing that is performed for each pixel in the (1×4) pixel compression, the (2+1×2) pixel compression, and the (3+1) pixel compression is computed from the basic matrix shown in
In detail, first, the basic value Q is extracted from among the matrix elements of the basic matrix, based on the lower two bits x1, x0 of the x-coordinate and the lower two bits y1, y0 of the y-coordinate of the object pixel. For example, in the case where the object of the bit plane reduction processing is the pixel A and the lower two bits of its coordinates are “00,” “15” is extracted as the basic value Q.
Furthermore, according to the number of bits of the bit round-down processing successively performed in the bit plane reduction processing, the following operations are performed on the basic value Q and, thereby, the error data α is computed: α=Q×2 (the number of bits of the bit round-down processing is five), α=Q (the number of bits of the bit round-down processing is four), α=Q/2 (the number of bits of the bit round-down processing is three), and α=Q/4 (the number of bits of the bit round-down processing is two).
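The selection of Q and the subsequent scaling can be summarized in the sketch below. The contents of the basic matrix and the row/column indexing convention are assumptions (only the element 15 at coordinate lower bits “00,” “00” is stated in the text), so the matrix is passed in as a parameter.

    /* Sketch of the error data computation for the bit plane reduction
     * processing.  The matrix elements are those of the omitted figure;
     * the (row = y, column = x) indexing is an assumption. */
    static int error_data_bit_plane(const int basic[4][4],
                                    unsigned x, unsigned y,
                                    int round_down_width)
    {
        int q = basic[y & 0x3][x & 0x3];  /* select Q from the lower two bits */

        switch (round_down_width) {
        case 5:  return q * 2;   /* alpha = Q x 2 */
        case 4:  return q;       /* alpha = Q     */
        case 3:  return q / 2;   /* alpha = Q / 2 */
        case 2:  return q / 4;   /* alpha = Q / 4 */
        default: return q;       /* widths outside 2-5 are not described */
        }
    }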
On the other hand, the error data α used in computation processing of the representative value of the image data of two pixels having a high correlation in the (2+1×2) pixel compression and the (2×2) pixel compression is computed from the basic matrix shown in
Furthermore, according to the second-lowest bits x1 and y1 of the x-coordinate and the y-coordinate of the object two pixels, the basic value Q corresponding to the Q extraction pixel is extracted from the basic matrix. For example, when the object two pixels are the pixels A, B, the Q extraction pixel is the pixel A. In this case, according to x1 and y1, the basic value Q to be finally used is decided as follows from among the four basic values Q that are associated, in the basic matrix, with the pixel A serving as the Q extraction pixel: Q=15 (x1=y1=“0”), Q=01 (x1=“1,” y1=“0”), Q=07 (x1=“0,” y1=“1”), and Q=13 (x1=y1=“1”).
Furthermore, according to the number of bits of the bit round-down processing successively performed in the computation processing of the representative value, the following operations are performed on the basic value Q and, thereby, the error data α used in the computation processing of the representative value of the image data of two pixels having a high correlation is computed: α=Q/2 (the number of bits of the bit round-down processing is three), α=Q/4 (the number of bits of the bit round-down processing is two), and α=Q/8 (the number of bits of the bit round-down processing is one).
For example, in the case where the object two pixels are the pixels A, B, x1=y1=“1,” and the number of bits of the bit round-down processing is three, the error data α is computed as follows: Q=13, so α=13/2=6 (the fractional part is rounded down).
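The same worked example can be written as a short sketch. The four basic values below are the ones quoted above for the case where the pixel A is the Q extraction pixel; the values for the other Q extraction pixels are defined by the omitted basic matrix, and the function name is hypothetical.

    /* Sketch of the error data computation for the representative value of
     * two highly correlated pixels when the Q extraction pixel is the
     * pixel A.  Entries for other Q extraction pixels come from the
     * omitted basic matrix. */
    static int error_data_pair_pixel_a(unsigned x1, unsigned y1,
                                       int round_down_width)
    {
        /* Basic values associated with the pixel A, indexed by (y1, x1). */
        static const int q_table[2][2] = {
            /*            x1=0  x1=1 */
            /* y1=0 */ {   15,    1 },
            /* y1=1 */ {    7,   13 },
        };
        int q = q_table[y1 & 1][x1 & 1];

        switch (round_down_width) {
        case 3:  return q / 2;   /* alpha = Q / 2 */
        case 2:  return q / 4;   /* alpha = Q / 4 */
        case 1:  return q / 8;   /* alpha = Q / 8 */
        default: return q;       /* widths outside 1-3 are not described */
        }
    }

    /* Worked example from the text: x1 = y1 = 1 and three round-down bits
     * give Q = 13 and alpha = 13 / 2 = 6 (rounded down). */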
Incidentally, the computation method of the error data α is not limited to what is described above. For example, a matrix other than the above-mentioned Bayer matrix may also be used as the basic matrix.
Although various embodiments of the present invention have been described above, the present invention shall not be interpreted as being limited to those embodiments. For example, although a liquid crystal display having a liquid crystal display panel is presented in the embodiment described above, it will be clear to a person skilled in the art that the present invention is also applicable to a display that drives a display panel other than a liquid crystal display panel, as long as that display panel requires its data lines (signal lines) to be charged at high speed.
Moreover, although the object block is defined as pixels of one row and four columns in the embodiment described above, the object block may be defined as four pixels in an arbitrary arrangement. For example, as illustrated in
Foreign application priority data: Japanese Patent Application No. 2011-144837, filed June 2011, Japan (national).