The present application relates to the field of display technology, and in particular, to a display adjustment method and a display device.
A display device can display different colors and images mainly because its display panel contains a large number of R (red), G (green), and B (blue) sub-pixels, which can display different colors at different brightnesses. However, since the wavelength of the B sub-pixel is short, the attenuation of the B sub-pixel is much smaller than that of the R sub-pixel and the G sub-pixel, so the displayed image of the display device tends to look cold. This cold color cast is not well suited to Asian viewers, so white tracking technology is used to increase the brightness of the R sub-pixels and reduce the brightness of the B sub-pixels, thereby weakening the coldness of the image. Current white tracking technology is implemented by converting 8-bit data into 10-bit data, but this approach occupies storage space inside the IC (chip) and increases the cost of the IC.
The main purpose of the present application is to provide a display adjustment method and a display device, which aim to save the storage space of the chip and reduce the cost of the chip.
In order to achieve the above purpose, the present application provides a display adjustment method comprising steps of:
obtaining a first pixel data of an image displayed by the display device;
converting the first pixel data into a second pixel data after the first pixel data enters a timing controller, wherein the amount of stored data of the second pixel data is greater than the amount of stored data of the first pixel data;
converting at least one of the second pixel data into sub-pixel data whose amount of stored data is reduced relative to the second pixel data to obtain a corresponding third pixel data; and
outputting the third pixel data.
In order to achieve the above purpose, the present application further provides a display adjustment method comprising steps of:
obtaining a first pixel data of an image displayed by the display device;
converting the first pixel data into a second pixel data after the first pixel data enters a timing controller, wherein the amount of stored data of the second pixel data is greater than the amount of stored data of the first pixel data;
calculating a gray-scale difference between the first sub-pixel data and the second sub-pixel data in the second pixel data, and a gray-scale difference between the second sub-pixel data and the third sub-pixel data;
obtaining a corresponding third pixel data according to each of the gray-scale differences and the third sub-pixel data; and
outputting the third pixel data.
In order to achieve the above purpose, the present application further provides a display device, wherein the display device comprises a memory, a processor, and a display adjustment program stored on the memory and operable on the processor, the processor executing the display adjustment program to implement the steps of:
obtaining a first pixel data of an image displayed by the display device;
converting the first pixel data into a second pixel data after the first pixel data enters a timing controller, wherein the amount of stored data of the second pixel data is greater than the amount of stored data of the first pixel data;
converting at least one of the second pixel data into sub-pixel data whose amount of stored data is reduced relative to the second pixel data to obtain a corresponding third pixel data; and
outputting the third pixel data.
In the technical solution of the present application, by reducing the amount of stored data of at least one of the second pixel data in the timing controller, the storage space of the chip may be effectively reduced, and the cost of the chip may be reduced.
To illustrate the technical solutions according to the embodiments of the present disclosure more clearly, the accompanying drawings for describing the embodiments are introduced briefly in the following. Apparently, the accompanying drawings in the following description are only about some embodiments of the present disclosure, and persons of ordinary skill in the art can derive other drawings from the accompanying drawings without creative efforts.
It should be understood that the specific embodiments described herein are only for illustrating but not for limiting the present application.
The main solution of the embodiment of the present application is: obtaining a first pixel data of an image displayed by the display device; converting the first pixel data into a second pixel data after the first pixel data enters a timing controller; converting at least one of the second pixel data into sub-pixel data whose amount of stored data is reduced relative to the second pixel data to obtain a corresponding third pixel data; outputting the third pixel data.
In the technical solution of the present application, by reducing the amount of stored data of the second pixel data in the timing controller, the storage space of the chip may be effectively reduced, and the cost of the chip may be reduced.
As an embodiment, the display device may be as shown in
The solution of the embodiment of the present application relates to a display device, where the display device includes: a processor 1001, such as a CPU, a communication bus 1002, and a memory 1003. The communication bus 1002 is used to realize connection and communication between these components.
The memory 1003 may be a high-speed RAM memory, and can also be a non-volatile memory, such as a magnetic disk memory. As shown in
obtaining a first pixel data of an image displayed by the display device;
converting the first pixel data into a second pixel data after the first pixel data enters a timing controller, wherein the amount of stored data of the second pixel data is greater than the amount of stored data of the first pixel data;
converting at least one of the second pixel data into sub-pixel data whose amount of stored data is reduced relative to the second pixel data to obtain a corresponding third pixel data;
outputting the third pixel data.
Optionally, the processor 1001 may be used to invoke the display adjustment program stored in the memory 1003 and perform the following operations:
treating one sub-pixel data in the second pixel data as a reference sub-pixel data, and treating other sub-pixel data as a target sub-pixel data;
calculating a gray-scale difference between each of the target sub-pixel data and the reference sub-pixel data, and obtaining a corresponding third pixel data according to the gray-scale difference and the reference sub-pixel data.
Optionally, the processor 1001 may be used to invoke the display adjustment program stored in the memory 1003 and perform the following operations:
calculating a difference between each of the target sub-pixel data and the reference sub-pixel data to obtain a corresponding plurality of gray-scale differences;
converting each of the gray-scale differences into a gray-scale difference whose amount of stored data is reduced relative to the second pixel data;
treating the converted plurality of gray-scale differences and the reference sub-pixel data as a third pixel data.
Optionally, the processor 1001 may be used to invoke the display adjustment program stored in the memory 1003 and perform the following operations:
treating two sub-pixel data in the second pixel data as a reference sub-pixel data, and treating other sub-pixel data as a target sub-pixel data;
calculating a gray-scale difference between the target sub-pixel data and one of the reference sub-pixel data, and obtaining a corresponding third pixel data according to the gray-scale difference and the reference sub-pixel data.
Optionally, the processor 1001 may be used to invoke the display adjustment program stored in the memory 1003 and perform the following operations:
calculating a difference between the target sub-pixel data and one of the reference sub-pixel data to obtain a corresponding gray-scale difference;
converting the gray-scale difference into a gray-scale difference whose amount of stored data is reduced relative to the second pixel data;
treating the converted gray-scale difference and the reference sub-pixel data as a third pixel data.
Optionally, the processor 1001 may be used to invoke the display adjustment program stored in the memory 1003 and perform the following operations:
obtaining a first pixel data of an image displayed by the display device;
converting the first pixel data into a second pixel data after the first pixel data enters a timing controller, wherein the amount of stored data of the second pixel data is greater than the amount of stored data of the first pixel data;
calculating a gray-scale difference between the first sub-pixel data and the second sub-pixel data in the second pixel data, and a gray-scale difference between the second sub-pixel data and the third sub-pixel data; wherein the gray-scale corresponding to the first sub-pixel data is greater than the gray-scale corresponding to the second sub-pixel data, and the gray-scale corresponding to the second sub-pixel data is greater than the gray-scale corresponding to the third sub-pixel data;
obtaining a corresponding third pixel data according to each of the gray-scale differences and the third sub-pixel data;
outputting the third pixel data.
Optionally, the processor 1001 may be used to invoke the display adjustment program stored in the memory 1003 and perform the following operations:
converting the gray-scale difference into a gray-scale difference whose amount of stored data is reduced relative to the second pixel data;
treating the converted gray-scale difference and the third sub-pixel data as a third pixel data.
Optionally, the gray-scale difference between the first sub-pixel data and the second sub-pixel data is non-negative, and the gray-scale difference between the second sub-pixel data and the third sub-pixel data is non-negative.
In the present embodiment, the display adjustment method comprises:
step S1, obtaining a first pixel data of an image displayed by the display device;
The display device may be a display device having a display panel, such as a television, a tablet, or a mobile phone. The image displayed by the display device is formed by a plurality of pixels that may display different colors. Each pixel includes an R (red) sub-pixel, a G (green) sub-pixel, and a B (blue) sub-pixel. Each gray-scale of the R sub-pixel is stored in 8 bits, each gray-scale of the G sub-pixel is stored in 8 bits, and each gray-scale of the B sub-pixel is stored in 8 bits, to obtain three corresponding groups of sub-pixel data represented by 8 bits, that is, the first pixel data.
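As a non-limiting illustration, the first pixel data of one pixel may be modeled as three 8-bit gray-scale values; the following minimal sketch uses hypothetical names that are not prescribed by the present application:

```python
from dataclasses import dataclass

@dataclass
class FirstPixelData:
    """Hypothetical container for one pixel of first pixel data:
    three 8-bit sub-pixel gray-scales (R, G, B), each in 0-255."""
    r: int
    g: int
    b: int

    def __post_init__(self):
        for name, value in (("r", self.r), ("g", self.g), ("b", self.b)):
            if not 0 <= value <= 255:
                raise ValueError(f"{name} must fit in 8 bits, got {value}")

# Example: a mid-gray pixel before it enters the timing controller.
pixel = FirstPixelData(r=128, g=128, b=128)
```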
Step S2, converting the first pixel data into a second pixel data after the first pixel data enters a timing controller;
The timing controller may be used to adjust the color temperature to reduce the coldness of the displayed image. The processing method includes increasing a gray-scale of the R sub-pixel in each pixel and/or decreasing a gray-scale of the B sub-pixel in each pixel. Specifically, after receiving the first pixel data composed of three groups of sub-pixel data represented by 8 bits, the timing controller re-adjusts the gray-scales of the R, G, and B sub-pixels in the first pixel data by increasing the gray-scale of the R sub-pixel and/or decreasing the gray-scale of the B sub-pixel in each pixel. For example, before entering the timing controller, the gray-scales of the R, G, and B sub-pixels corresponding to the gray-scale data 3 are 3, 3, and 3, respectively, and after entering the timing controller they are adjusted to 14, 13, and 12. For another example, the gray-scales of the R, G, and B sub-pixels corresponding to the gray-scale data 254 are 254, 254, and 254, respectively, and after entering the timing controller they are adjusted to 1014, 1012, and 968. Since 8 bits cannot store such re-adjusted gray-scales, each gray-scale of each sub-pixel is stored in 10 bits, thereby obtaining three groups of sub-pixel data represented by 10 bits, which are the second pixel data.
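As a non-limiting illustration of step S2, the following sketch expands 8-bit first pixel data into 10-bit second pixel data; the white-tracking adjustment is panel-specific, so the per-channel offsets used here are assumed placeholders rather than the exact mapping of the example values above:

```python
def to_second_pixel_data(r8: int, g8: int, b8: int) -> tuple[int, int, int]:
    """Expand 8-bit first pixel data to 10-bit second pixel data.

    The white-tracking correction (raising R and/or lowering B) is set by
    panel calibration; the linear offsets below are placeholder assumptions
    and do not reproduce the exact example values in the description.
    """
    r10 = min(1023, r8 * 4 + 2)  # scale to 10 bits, then raise R slightly
    g10 = min(1023, g8 * 4 + 1)  # assumed small correction for G
    b10 = max(0, b8 * 4 - 2)     # lower B slightly to warm the image
    return r10, g10, b10

# Example: an 8-bit gray-scale of 3 per channel becomes three 10-bit values.
print(to_second_pixel_data(3, 3, 3))  # -> (14, 13, 10) with these offsets
```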
Step S3, converting at least one of the second pixel data into sub-pixel data whose amount of stored data is reduced relative to the second pixel data to obtain a corresponding third pixel data;
After the adjusted second pixel data represented by 10 bits is obtained, in order to reduce the storage space of the chip, one group or two groups of the R, G, and B sub-pixel data represented by 10 bits in the second pixel data may be converted into sub-pixel data with a reduced amount of stored data, that is, sub-pixel data with fewer data bits. For example, one group of the three groups of R, G, and B sub-pixel data represented by 10 bits may be converted into sub-pixel data represented by 7 bits, or two groups of the three groups may be converted into sub-pixel data represented by 7 bits. The conversion may be implemented according to certain calculation rules. The converted sub-pixel data and the unconverted sub-pixel data are treated as a third pixel data.
Step S4, outputting the third pixel data.
After one or two groups of sub-pixel data in the second pixel data are converted into sub-pixel data whose amount of stored data is reduced, a corresponding third pixel data is obtained, and the timing controller outputs the corresponding third pixel data.
In the technical solution of the present application, by converting the second pixel data in the timing controller into a third pixel data whose amount of stored data is reduced, the storage space of the chip may be effectively reduced, the amount of computation inside the chip may be reduced and the cost of the chip may be effectively reduced.
Referring to
step S10, treating one sub-pixel data in the second pixel data as a reference sub-pixel data, and treating other sub-pixel data as a target sub-pixel data;
Based on the above embodiments, in the present embodiment, one sub-pixel data in the second pixel data may be treated as a reference sub-pixel data, and the other sub-pixel data may be treated as a target sub-pixel data; that is, one group of the three groups of sub-pixel data is treated as the reference sub-pixel data, and the other two groups of sub-pixel data are treated as the target sub-pixel data. For example, the group of R sub-pixel data is treated as the reference sub-pixel data, and the other two groups of sub-pixel data are treated as the target sub-pixel data; or, the group of G sub-pixel data is treated as the reference sub-pixel data, and the other two groups of sub-pixel data are treated as the target sub-pixel data; or, the group of B sub-pixel data is treated as the reference sub-pixel data, and the other two groups of sub-pixel data are treated as the target sub-pixel data.
Step S20, calculating a gray-scale difference between each of the target sub-pixel data and the reference sub-pixel data, and obtaining a corresponding third pixel data according to the gray-scale difference and the reference sub-pixel data;
After the reference sub-pixel data and the target sub-pixel data are determined, a gray-scale difference between each target sub-pixel and the reference sub-pixel corresponding to each level of gray-scale data is calculated to obtain a gray-scale difference between each target sub-pixel and the reference sub-pixel under each level of gray-scale data; the corresponding gray-scale difference is treated as a new gray-scale of the target sub-pixel under that gray-scale data, and the amount of stored data of each new gray-scale of each target sub-pixel is reduced; then, the target sub-pixel data whose amount of stored data is reduced and the reference sub-pixel data whose amount of stored data is unchanged are treated as the third pixel data. Specifically, referring to
step S201, calculating a difference between each of the target sub-pixel data and the reference sub-pixel data to obtain a corresponding plurality of gray-scale differences;
After a gray-scale difference between each target sub-pixel and the reference sub-pixel corresponding to each level of gray-scale data is calculated, the corresponding gray-scale difference may be treated as a new gray-scale of the target sub-pixel under that gray-scale data. For example, the gray-scales of the R, G, and B sub-pixels corresponding to the gray-scale data 1 are 8, 6, and 6, respectively. If the R sub-pixel data is treated as the reference sub-pixel data, the gray-scale difference between the G sub-pixel and the R sub-pixel under the gray-scale data 1 is −2, and −2 is treated as the new gray-scale of the G sub-pixel under the gray-scale data 1; likewise, the gray-scale difference between the B sub-pixel and the R sub-pixel under the gray-scale data 1 is −2, and −2 is treated as the new gray-scale of the B sub-pixel under the gray-scale data 1. Similarly, a new gray-scale corresponding to each target sub-pixel under each level of the gray-scale data is calculated.
Step S202, converting each of the gray-scale differences into a gray-scale difference whose amount of stored data is reduced relative to the second pixel data;
A plurality of new gray-scales of the target sub-pixels are obtained through the above steps. Since the new gray-scales of the target sub-pixels are small, it is not necessary to use 10 bits for storage, which would waste the storage space of the chip; therefore, each new gray-scale corresponding to a target sub-pixel may be represented by 7 bits, that is, the amount of stored data for each new gray-scale of the target sub-pixel is converted from 10 bits to 7 bits. Since the gray-scale difference may be negative, the first digit of the data may be used as a sign digit: for example, a gray-scale difference of −62 may be represented in 7 bits as 0111110, and a gray-scale difference of 62 may be represented as 1111110, where a first digit of 0 indicates a negative value and a first digit of 1 indicates a positive value. It should be noted that, since the first digit of the 7 bits is used as the sign digit, the present embodiment is only applicable to gray-scale differences in the range of −63 to 63. In practical applications, in order to avoid the displayed image of the display panel being too cold or too warm, the gray-scale difference between the R, G, and B sub-pixels corresponding to the same gray-scale data is generally small.
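As a non-limiting illustration, the 7-bit sign-magnitude encoding described above (a leading 0 for a negative difference, a leading 1 for a positive one, followed by a 6-bit magnitude) may be sketched as follows; the function names are illustrative assumptions:

```python
def encode_diff_7bit(diff: int) -> int:
    """Encode a gray-scale difference in -63..63 as a 7-bit code:
    first bit 0 for negative, 1 for non-negative, then 6 magnitude bits."""
    if not -63 <= diff <= 63:
        raise ValueError("gray-scale difference must be in -63..63")
    sign_bit = 0 if diff < 0 else 1
    return (sign_bit << 6) | abs(diff)

def decode_diff_7bit(code: int) -> int:
    """Recover the signed gray-scale difference from a 7-bit code."""
    magnitude = code & 0b0111111
    return magnitude if (code >> 6) & 1 else -magnitude

assert format(encode_diff_7bit(-62), "07b") == "0111110"
assert format(encode_diff_7bit(62), "07b") == "1111110"
assert decode_diff_7bit(encode_diff_7bit(-2)) == -2
```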
Step S203, treating the converted plurality of gray-scale differences and the reference sub-pixel data as a third pixel data;
After each new gray-scale of the target sub-pixels is represented by 7 bits, two groups of sub-pixel data represented by 7 bits and one group of sub-pixel data represented by 10 bits are obtained; then, the two groups of sub-pixel data represented by 7 bits and the one group of sub-pixel data represented by 10 bits are treated as the third pixel data.
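Putting this embodiment together, the following non-limiting sketch forms the 24-bit third pixel data (one 10-bit reference plus two 7-bit differences) from the 30-bit second pixel data, with R chosen as the reference as in the example above; the reconstruction is included only to verify that the encoding is lossless, and all names are illustrative:

```python
def encode_diff_7bit(diff: int) -> int:
    # Sign convention from the description: leading 0 = negative, 1 = positive.
    sign_bit = 0 if diff < 0 else 1
    return (sign_bit << 6) | abs(diff)

def decode_diff_7bit(code: int) -> int:
    magnitude = code & 0b0111111
    return magnitude if (code >> 6) & 1 else -magnitude

def to_third_pixel_data(r10: int, g10: int, b10: int):
    """R stays at 10 bits; G and B become 7-bit differences from R,
    so one pixel needs 10 + 7 + 7 = 24 bits instead of 30."""
    return r10, encode_diff_7bit(g10 - r10), encode_diff_7bit(b10 - r10)

def from_third_pixel_data(r10: int, g_code: int, b_code: int):
    """Reconstruct the 10-bit second pixel data (shown only for verification;
    the description itself only requires outputting the third pixel data)."""
    return r10, r10 + decode_diff_7bit(g_code), r10 + decode_diff_7bit(b_code)

# Example from the description: gray-scale data 1 adjusted to R=8, G=6, B=6.
third = to_third_pixel_data(8, 6, 6)
assert from_third_pixel_data(*third) == (8, 6, 6)
```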
In the technical solution of the present application, by converting the amount of stored data of the two groups of sub-pixel data of the second pixel data in the timing controller from 10 bits to 7 bits, the storage space of the chip may be significantly reduced and the cost of the chip may be effectively reduced.
Referring to
step S30, treating two sub-pixel data in the second pixel data as a reference sub-pixel data, and treating other sub-pixel data as a target sub-pixel data;
Based on the above embodiments, in the present embodiment, two sub-pixel data in the second pixel data are treated as reference sub-pixel data, and the other sub-pixel data is treated as a target sub-pixel data; that is, two groups of the three groups of sub-pixel data are treated as the reference sub-pixel data, and the remaining group of sub-pixel data is treated as the target sub-pixel data. For example, the group of R sub-pixel data and the group of G sub-pixel data are treated as the reference sub-pixel data, and the group of B sub-pixel data is treated as the target sub-pixel data; or, the group of G sub-pixel data and the group of B sub-pixel data are treated as the reference sub-pixel data, and the group of R sub-pixel data is treated as the target sub-pixel data; or, the group of R sub-pixel data and the group of B sub-pixel data are treated as the reference sub-pixel data, and the group of G sub-pixel data is treated as the target sub-pixel data.
Step S40, calculating a gray-scale difference between the target sub-pixel data and one of the reference sub-pixel data, and obtaining a corresponding third pixel data according to the gray-scale difference and the reference sub-pixel data.
After the reference sub-pixel data and the target sub-pixel data are determined, a gray-scale difference between the target sub-pixel and the reference sub-pixel corresponding to each level of gray-scale data is calculated, and the obtained gray-scale difference is treated as a new gray-scale of the target sub-pixel under that gray-scale data; after a new gray-scale of the target sub-pixel corresponding to each level of gray-scale data is obtained, the amount of stored data of the new gray-scale corresponding to the target sub-pixel is reduced, and the target sub-pixel data whose amount of stored data is reduced and the reference sub-pixel data whose amount of stored data is unchanged are treated as the third pixel data. Specifically, referring to
step S401, calculating a difference between the target sub-pixel data and one of the reference sub-pixel data to obtain a corresponding gray-scale difference;
A gray-scale difference between the target sub-pixel and the reference sub-pixel corresponding to each level of gray-scale data is calculated to obtain a gray-scale difference between the target sub-pixel and the reference sub-pixel under each level of gray-scale data, and the gray-scale difference is treated as a new gray-scale of the target sub-pixel under that gray-scale data. One of the two groups of reference sub-pixel data may be selected for calculating the gray-scale difference according to actual needs. Once one group of reference sub-pixel data has been determined, the same group of reference sub-pixel data is used to calculate the gray-scale difference under every level of gray-scale data, rather than alternating between the two groups. For example, in an embodiment, the gray-scales of the R, G, and B sub-pixels corresponding to the gray-scale data 1 are 8, 6, and 6, respectively. If the R sub-pixel data is treated as the reference sub-pixel data and the G sub-pixel data as the target sub-pixel data, the gray-scale difference between the G sub-pixel data and the R sub-pixel data under the gray-scale data 1 is −2, and −2 is treated as the new gray-scale of the G sub-pixel data under the gray-scale data 1; when the gray-scale difference between the target sub-pixel and the reference sub-pixel under the next level of gray-scale data is calculated, the new gray-scale of the G sub-pixel is still calculated by using the R sub-pixel as the reference sub-pixel. Similarly, a new gray-scale corresponding to the target sub-pixel under each level of the gray-scale data is calculated. In other embodiments, the B sub-pixel data may also be selected as the reference, which is not limited herein.
Step S402, converting the gray-scale difference into a gray-scale difference whose amount of stored data is reduced relative to the second pixel data;
After the new gray-scale corresponding to the target sub-pixel under each level of the gray-scale data is calculated, since the new gray-scale of the target sub-pixel data is small, it is not necessary to use 10 bits for storage, which would waste the storage space of the chip; therefore, each new gray-scale of the target sub-pixel may be represented by 7 bits, that is, each new gray-scale of the target sub-pixel is converted from 10 bits to 7 bits. Since the gray-scale difference may be negative, the first digit of the data may be used as a sign digit: for example, a gray-scale difference of −62 may be represented in 7 bits as 0111110, where a first digit of 0 indicates a negative value. Since the first digit of the 7 bits is used as the sign digit, the present embodiment is only applicable to gray-scale differences in the range of −63 to 63. In practical applications, in order to avoid the displayed image of the display panel being too cold or too warm, the gray-scale difference between the sub-pixels corresponding to the same gray-scale data is generally small.
Step S403, treating the converted gray-scale difference and the reference sub-pixel data as a third pixel data.
After each new gray-scale corresponding to the target sub-pixel data is represented by 7 bits, two groups of sub-pixel data represented by 10 bits and one group of sub-pixel data represented by 7 bits are obtained; then, two groups of sub-pixel data represented by 10 bits and one group of sub-pixel data represented by 7 bits are treated as the third pixel data.
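As a non-limiting sketch of this embodiment, the R and B sub-pixel data are kept as 10-bit references and only the G sub-pixel data is stored as a 7-bit difference from R, as in the example above, giving 10 + 10 + 7 = 27 bits per pixel; the names are illustrative:

```python
def encode_diff_7bit(diff: int) -> int:
    # Sign convention from the description: leading 0 = negative, 1 = positive.
    sign_bit = 0 if diff < 0 else 1
    return (sign_bit << 6) | abs(diff)

def to_third_pixel_data_two_refs(r10: int, g10: int, b10: int):
    """R and B stay at 10 bits; only G is stored as a 7-bit difference
    from R, the reference chosen in the example above."""
    return r10, encode_diff_7bit(g10 - r10), b10

# Example from the description: R=8, G=6, B=6 -> (8, code for -2, 6).
print(to_third_pixel_data_two_refs(8, 6, 6))  # -> (8, 2, 6)
```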
In the technical solution of the present application, by converting one group of sub-pixel data of the second pixel data in the timing controller from 10 bits to 7 bits, the storage space of the chip may be effectively reduced and the amount of computation inside the chip may be reduced, thereby effectively reducing the cost of the chip.
In the present embodiment, the display adjustment method comprises:
step S101, obtaining a first pixel data of an image displayed by the display device;
The display device may be a display device having a display panel, such as a television, a tablet, or a mobile phone. The image displayed by the display device is formed by a plurality of pixels that may display different colors. Each pixel includes an R (red) sub-pixel, a G (green) sub-pixel, and a B (blue) sub-pixel. Each gray-scale of the R sub-pixel is stored in 8 bits, each gray-scale of the G sub-pixel is stored in 8 bits, and each gray-scale of the B sub-pixel is stored in 8 bits, to obtain three corresponding groups of sub-pixel data represented by 8 bits, that is, the first pixel data.
Step S102, converting the first pixel data into a second pixel data after the first pixel data enters a timing controller;
The timing controller may be used to adjust the color temperature to reduce the coldness of the displayed image. The processing method includes increasing a gray-scale of the R sub-pixel in each pixel and/or decreasing a gray-scale of the B sub-pixel in each pixel. Specifically, after receiving the first pixel data composed of three groups of sub-pixel data represented by 8 bits, the timing controller re-adjusts the gray-scales of the R, G, and B sub-pixels in the first pixel data by increasing the gray-scale of the R sub-pixel and/or decreasing the gray-scale of the B sub-pixel in each pixel. For example, before entering the timing controller, the gray-scales of the R, G, and B sub-pixels corresponding to the gray-scale data 3 are 3, 3, and 3, respectively, and after entering the timing controller they are adjusted to 14, 13, and 12. For another example, the gray-scales of the R, G, and B sub-pixels corresponding to the gray-scale data 254 are 254, 254, and 254, respectively, and after entering the timing controller they are adjusted to 1014, 1012, and 968. Since 8 bits cannot store such re-adjusted gray-scales, the gray-scale of each sub-pixel is stored in 10 bits, thereby obtaining three groups of sub-pixel data represented by 10 bits, which are the second pixel data.
Step S103, calculating a gray-scale difference between the first sub-pixel data and the second sub-pixel data in the second pixel data, and a gray-scale difference between the second sub-pixel data and the third sub-pixel data;
In the present embodiment, the R sub-pixel data is treated as the first sub-pixel data, the G sub-pixel data is treated as the second sub-pixel data, and the B sub-pixel data is treated as the third sub-pixel data. In practical applications, after the pixel data is re-adjusted by the timing controller, the gray-scale of the R sub-pixel is the largest and the gray-scale of the B sub-pixel is the smallest under each level of gray-scale data. In the present embodiment, the gray-scale difference between the R sub-pixel and the G sub-pixel under each level of gray-scale data is calculated as a new gray-scale corresponding to the R sub-pixel, and the gray-scale difference between the G sub-pixel and the B sub-pixel under each level of gray-scale data is calculated as a new gray-scale corresponding to the G sub-pixel data. For example, in an embodiment, the gray scales of the R, G, and B sub-pixels under gray-scale data 5 are 21, 18, and 17, respectively; a difference between 21 and 18 is calculated to obtain a new gray-scale 3 for the R sub-pixel under gray-scale data 5; a difference between 18 and 17 is calculated to obtain a new gray-scale 1 for the G sub-pixel under gray-scale data 5; similarly, a new gray-scale corresponding to the first sub-pixel and a new gray-scale corresponding to the second sub-pixel under each level of gray-scale data are calculated.
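As a non-limiting sketch of step S103, the chained differences of the example above may be computed as follows (names are illustrative):

```python
def chained_differences(r10: int, g10: int, b10: int):
    """New gray-scales per step S103: R-G replaces R, G-B replaces G,
    and the B sub-pixel data is kept as-is."""
    return r10 - g10, g10 - b10, b10

# Example from the description: gray-scale data 5 adjusted to 21, 18, 17.
print(chained_differences(21, 18, 17))  # -> (3, 1, 17)
```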
Step S104, obtaining a corresponding third pixel data according to each of the gray-scale differences and the third sub-pixel data;
After the new gray-scales of the first sub-pixel and the second sub-pixel under each level of the gray-scale data are calculated, since the new gray-scales of the first sub-pixel and the second sub-pixel are small, it is not necessary to use 10 bits for storage, which would waste the storage space of the chip; therefore, each new gray-scale corresponding to the R sub-pixel may be represented by 7 bits, and each new gray-scale corresponding to the G sub-pixel may be represented by 7 bits. Since the gray-scale of the R sub-pixel is the largest and the gray-scale of the B sub-pixel is the smallest under each level of gray-scale data, each new gray-scale of the R sub-pixel and each new gray-scale of the G sub-pixel are non-negative, that is, 0 or a positive number, so that no sign digit needs to be reserved. Therefore, the solution of the present embodiment is applicable to gray-scale differences that do not exceed 127. After each new gray-scale of the R sub-pixel and the G sub-pixel in the second pixel data is converted from 10 bits to 7 bits, two groups of sub-pixel data represented by 7 bits and one group of sub-pixel data represented by 10 bits are obtained; then, the two groups of sub-pixel data represented by 7 bits and the one group of sub-pixel data represented by 10 bits are treated as a third pixel data.
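As a non-limiting sketch of step S104, the following forms the 24-bit third pixel data (two 7-bit non-negative differences plus the 10-bit third sub-pixel data) and reconstructs the original values; the reconstruction is included only to verify the encoding, and the names are illustrative:

```python
def to_third_pixel_data_chained(r10: int, g10: int, b10: int):
    """Third pixel data per this embodiment: two non-negative 7-bit
    differences (no sign bit) plus the 10-bit B sub-pixel data,
    i.e. 7 + 7 + 10 = 24 bits per pixel instead of 30."""
    d_rg, d_gb = r10 - g10, g10 - b10
    if not (0 <= d_rg <= 127 and 0 <= d_gb <= 127):
        raise ValueError("differences must be in 0..127 to fit in 7 bits")
    return d_rg, d_gb, b10

def from_third_pixel_data_chained(d_rg: int, d_gb: int, b10: int):
    """Recover the 10-bit gray-scales: G = B + (G-B), R = G + (R-G)."""
    g10 = b10 + d_gb
    return g10 + d_rg, g10, b10

third = to_third_pixel_data_chained(21, 18, 17)  # -> (3, 1, 17)
assert from_third_pixel_data_chained(*third) == (21, 18, 17)
```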
Step S105, outputting the third pixel data.
After a corresponding third pixel data is obtained, the timing controller outputs the corresponding third pixel data.
In the technical solution of the present application, by reducing the amount of stored data of the first sub-pixel data and the second sub-pixel data in the second pixel data in the timing controller from 10 bits to 7 bits, the storage space of the chip may be saved and the amount of computation inside the chip may be reduced, thereby effectively reducing the cost of the chip.
The present application further provides a display device, wherein the display device comprises a memory, a processor, and a display adjustment program stored on the memory and operable on the processor, the processor executing the display adjustment program to implement the steps of the display adjustment method as described above.
The display device of the present embodiment may be a display device having a display panel, such as a television, a tablet, or a mobile phone.
The descriptions above are only the alternative embodiments of the present application, but not intended to limit the patent scope of the present application. Any equivalent structural variations made by utilizing the specification and drawings of the present application under the inventive concept of the present application, or direct or indirect applications in other related technical fields should be concluded in the patent protection scope of the present application.
Number | Date | Country | Kind
--- | --- | --- | ---
201811395264.6 | Nov 2018 | CN | national
The present application is a Continuation Application of PCT Application No. PCT/CN2018/121829 filed on Dec. 18, 2018, which claims the benefit of Chinese Patent Application No. 201811395264.6 filed on Nov. 21, 2018. All the above are hereby incorporated by reference.
 | Number | Date | Country
--- | --- | --- | ---
Parent | PCT/CN2018/121829 | Dec 2018 | US
Child | 16289687 | | US