This application claims priority to Chinese Application No. 201910599326.3 filed on Jul. 4, 2019, the contents of which are incorporated herein by reference in their entirety.
The present disclosure relates to the field of image processing technology, and more particularly, to an image processing method and device, and a display device.
An organic light-emitting diode (OLED) display device is a display device made of organic electroluminescence diodes.
The OLED display device may comprise a display driving circuit and an array of light-emitting devices. In order to drive the array of light-emitting devices to emit light, various signal lines, such as power signal lines, data signal lines, and common power supply lines, are arranged in the OLED display device, so as to support a pixel compensation circuit that drives the array of light-emitting devices to emit light. Due to an IR voltage drop, the signal voltage transmitted by these signal lines changes gradually in a direction away from the signal terminals, which may result in uneven brightness of the light emitted by the array of light-emitting devices and thus poor brightness uniformity of the image displayed by the OLED display device.
The present disclosure provides an image processing method. The image processing method comprises:
acquiring image brightness information of an image to be displayed, the image brightness information comprising brightness information of M first sub-image areas arranged in a first direction, wherein the brightness information of each first sub-image area comprises brightness information of at least two band points, and M is an integer greater than 1;
obtaining a gray scale compensation parameter of the at least two band points of each first sub-image area, according to the brightness information of the at least two band points of each first sub-image area and a reference brightness;
obtaining gray scale compensation information of each first sub-image area, according to the gray scale compensation parameter of the at least two band points of each first sub-image area; and
acquiring image information of the image to be displayed, and performing a gray scale compensation process on the image information according to the gray scale compensation information of the M first sub-image areas.
The present disclosure provides an image processing device. The image processing device comprises:
an acquiring unit, configured to acquire image brightness information and image information, the image brightness information comprising brightness information of M first sub-image areas arranged in a first direction, wherein the brightness information of each first sub-image area comprises brightness information of at least two band points;
a compensation setting unit, configured to obtain a gray scale compensation parameter for the at least two band points according to the brightness information of the at least two band points of each first sub-image area and reference information, and to obtain the gray scale compensation information of each first sub-image area according to the gray scale compensation parameter of the at least two band points; and
a gray scale compensation unit, configured to perform the gray scale compensation process on the image information according to the gray scale compensation information of the M first sub-image areas.
The present disclosure provides an image processing device. The image processing device comprises: a memory having instructions stored thereon and a processor configured to execute the instructions so as to implement the image processing method of embodiments of the present disclosure.
The present disclosure also provides a display device comprising a display panel, a signal generation chip, and an image processing device according to an embodiment of the present disclosure.
The present disclosure also provides a computer storage medium. The computer storage medium stores instructions, and when the instructions are executed, the above image processing method is implemented.
The present disclosure also provides a display device. The display device comprises the above-described image processing device.
The drawings described herein are used to provide a more comprehensive understanding of the present disclosure and constitute a part of the present disclosure. The exemplary embodiments and descriptions of the present disclosure are used to explain the present disclosure, and do not constitute a limitation of the present disclosure. In the drawings:
In the following, specific implementations of the present disclosure are discussed in detail in combination with the figures and various embodiments. The following embodiments illustrate only a part of the embodiments of the present disclosure, rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort are intended to be included in the scope of the present disclosure.
As shown in
For an OLED display panel, the above-mentioned display panel may comprise display driving circuits stacked in layers and light emitting devices EL arranged in an array. The display driving circuit comprises pixel compensation circuits PDC arranged in an array. The pixel compensation circuits PDC are electrically connected to the light emitting devices EL in the array. The pixel compensation circuits PDC are also electrically connected to the scan driving unit 231 and the data driving unit 232 shown in
As shown in
When the light emitting device EL shown in
The pixel compensation circuit PDC shown in
Signal terminals such as the data signal terminal DATA, the common power supply terminal ELVSS, the gate signal terminal GATE, and the power supply signal terminal ELVDD may be located at the edges of the OLED display panel and lead out from a signal processor such as a signal generation chip. Each of the signal lines connected to these signal terminals may have an IR voltage drop, causing the signal voltage transmitted by the signal lines to change gradually along the extending direction of the signal lines. Especially for large-sized display panels, the change of the signal voltage along the extending direction of the signal lines caused by the IR voltage drop is more obvious. Since the brightness of the light emitting device in the pixel compensation circuit is related to the power signal voltage, the brightness of the screen displayed by the OLED display panel decreases gradually along the direction away from the power signal terminal, resulting in poor uniformity of the screen displayed by the OLED display panel. The IR voltage drop refers to a phenomenon in which voltages on the power and ground networks in integrated circuits decrease or increase. For example, the voltage at positions of the power signal line near the power signal terminal is higher than the voltage at positions far from the power signal terminal.
In response to the above problems, an embodiment of the present disclosure provides an image processing method. As shown in
In step S100, the image brightness information of the image is acquired. The image brightness information comprises brightness information of M first sub-image areas arranged in a first direction, and the brightness information of each first sub-image area comprises brightness information of at least two band points. The first direction may be a direction in which the signal line extends away from the signal generation chip. For example,
The image brightness information can be collected by a light collection device, such as a light sensor, for example, a charge coupled device (CCD) image sensor. Before the brightness information of the image is acquired, the display device may be divided into M first display areas along the first direction (such as the width direction), so that the image displayed by the display device is divided into M first sub-image areas. When an image is acquired, the image displayed by the display device may contain image information of at least two band points, so that the collected brightness information of each first sub-image area comprises brightness information of at least two band points.
In step S200A, a gray scale compensation parameter of the at least two band points of each first sub-image area is obtained, according to the brightness information of the at least two band points of each first sub-image area and a reference brightness L0.
In step S300A, gray scale compensation information of each first sub-image area is obtained, according to the gray scale compensation parameter of the at least two band points of each first sub-image area. The gray scale compensation parameters of the at least two band points of each first sub-image area can be data-fitted by using a fitting method, so as to obtain respective gray scale compensation parameters for the first sub-image area. For example, for an 8-bit image, the gray scale of each first sub-image area is between 0 and 255, and correspondingly has 256 gray scale compensation parameters. The gray scale compensation parameters of the first sub-image area may constitute the gray scale compensation information of the first sub-image area. There are various fitting methods, such as linear interpolation. The obtained gray scale compensation information of each first sub-image area may be stored in a memory. The memory may be a storage device or a collective term for multiple storage elements, and is used to store executable program code and the like. The memory may comprise random access memory (RAM) or non-volatile memory, such as disk memory, flash memory, and so on.
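By way of illustration, the fitting step above can be sketched as follows, assuming linear interpolation between band points as the fitting method. The function name and sample values are illustrative and not part of the disclosure:

```python
import numpy as np

def fit_gray_scale_compensation(band_gray_levels, band_compensation, num_levels=256):
    """Linearly interpolate gray scale compensation parameters measured at a few
    band points to all gray levels of an 8-bit image (0..255).

    Outside the measured band-point range, np.interp holds the end values
    constant, so every gray level receives a compensation parameter."""
    levels = np.arange(num_levels)
    return np.interp(levels, band_gray_levels, band_compensation)

# Hypothetical compensation parameters measured at band points 32, 128, and 224
# for one first sub-image area.
comp = fit_gray_scale_compensation([32, 128, 224], [4.0, 8.0, 6.0])
```

The 256 interpolated values together form the gray scale compensation information of that sub-image area.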
In step S400, the image information of the image is acquired, and a brightness uniformization process is performed on the image information.
In step S600A, the gray scale compensation process is performed on the image information according to the gray scale compensation information of the M first sub-image areas. Here, performing the gray scale compensation process on the image information according to the gray scale compensation information of the M first sub-image areas comprises performing a one-to-one gray scale compensation process on pixels of the M first sub-image areas included in the image information according to the gray scale compensation information of the M first sub-image areas.
The image brightness information comprises brightness information of M first sub-image areas arranged in a first direction. Since the brightness information of each first sub-image area comprises the brightness information of at least two band points, a gray scale compensation parameter of the at least two band points of each first sub-image area may be obtained according to the brightness information of the at least two band points of each first sub-image area and a reference brightness L0. The gray scale compensation information of each first sub-image area is obtained according to the gray scale compensation parameter of the at least two band points of each first sub-image area. If there is an IR voltage drop along the first direction, the gray scale compensation is performed on at least a portion of the M first sub-image areas by using the gray scale compensation information of the M first sub-image areas, thereby alleviating the poor uniformity of image brightness caused by the IR voltage drop. By using the above-mentioned method, poor uniformity of image brightness due to uneven film formation can also be alleviated.
In some embodiments, as shown in
In some embodiments, the above image processing method performs the gray scale compensation process on the M first sub-image areas included in the image information along the first direction. This gray scale compensation method is suitable for display devices with a large size in one direction and a small size in other directions, such as mobile phones.
For a large-sized display device, brightness unevenness exists in both the width direction and the height direction of the displayed screen. Since the signal generation chip of the display device is usually provided on the upper frame and the side frame of the display device, the image brightness information comprises brightness information of N second sub-image areas arranged in a second direction, and the brightness information of each second sub-image area comprises brightness information of at least two band points. Here, the second direction is perpendicular to the first direction. For example, the first direction refers to the direction indicated by the first arrow 01 in
The above image brightness information comprises brightness information of M first sub-image areas arranged in the first direction and brightness information of N second sub-image areas arranged in the second direction. Before the brightness information of the image is acquired, the display device may be divided into N second display areas along the second direction (such as the height direction), so that the image displayed by the display device is divided into N second sub-image areas along the second direction. In this manner, the brightness information of the N second sub-image areas arranged in the second direction can be collected by the light collection device.
When an image is acquired, the image displayed by the display device may contain image information of at least two band points, so that the collected brightness information of each second sub-image area comprises brightness information of at least two band points.
As shown in
In step S200B, a gray scale compensation parameter of the at least two band points of each second sub-image area is obtained, according to the brightness information of the at least two band points of each second sub-image area and the reference brightness L0.
In step S300B, gray scale compensation information of each second sub-image area is obtained, according to the gray scale compensation parameter of the at least two band points. The gray scale compensation parameters of the at least two band points of each second sub-image area can be data-fitted by using a fitting method, so as to obtain respective gray scale compensation parameters for the second sub-image area. Step S200B and step S200A may be executed simultaneously or sequentially. Step S300B and step S300A may be executed simultaneously or sequentially.
For example, taking an 8-bit image as an example, the gray scale of each second sub-image area is between 0 and 255, and correspondingly has 256 gray scale compensation parameters. The gray scale compensation parameters of the second sub-image area may constitute the gray scale compensation information of the second sub-image area. There are various fitting methods, such as linear interpolation. The obtained gray scale compensation information of each second sub-image area may be stored in the memory. The memory may be a storage device or a collective term for multiple storage elements, and is used to store executable program code and the like. The memory may comprise random access memory (RAM) or non-volatile memory, such as disk memory, flash memory, and so on.
Based on this, as shown in
In step S600B, the gray scale compensation process is performed on the image information according to the gray scale compensation information of the N second sub-image areas. Step S600B and step S600A may be executed simultaneously or sequentially, and their order may be set according to the actual situation.
In some embodiments, as shown in
In step S210A, a reference gray scale G0 is obtained according to the reference brightness L0 and a brightness-gray scale relationship. The brightness-gray scale relationship is L = A*(G/255)^Gamma, where Gamma is the display parameter, L is the brightness, and G is the gray scale.
In step S220A, an average brightness of each band point of each first sub-image area is obtained according to the brightness information of the at least two band points of each first sub-image area.
In step S230A, a gray scale compensation parameter of each band point included in each first sub-image area is obtained, according to the reference gray scale G0, the reference brightness L0, and the average brightness of each band point of each first sub-image area.
As shown in
In step S231A, an equivalent gray scale G1 of each band point of each first sub-image area is obtained, according to the reference gray scale G0, the reference brightness L0, and the average brightness of each band point of each first sub-image area. The equivalent gray scale may be expressed as G1 = G0*(L1/L0)^(1/Gamma), where L1 is the average brightness of the band point.

In step S232A, a gray scale compensation parameter ΔG1 of each band point of each first sub-image area is obtained, according to the reference gray scale G0 and the equivalent gray scale G1 of each band point of each first sub-image area. For each first sub-image area, the gray scale compensation parameter of each band point may be expressed as ΔG1 = α*(G0 − G1), where α is the first direction brightness modulation factor, α is greater than or equal to 0.5 and less than or equal to 1, and Gamma is the display parameter, for example, 2.2.
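Steps S231A and S232A can be sketched as follows. The formulas G1 = G0*(L1/L0)^(1/Gamma) and ΔG1 = α*(G0 − G1) are an interpretation inferred from the surrounding description (the original equations are not reproduced in this text), and the sample values are illustrative:

```python
def equivalent_gray_scale(G0, L0, L1_avg, gamma=2.2):
    """Equivalent gray scale G1 of a band point: the gray level that would
    produce the measured average brightness L1_avg, expressed relative to the
    reference point (G0, L0) under L = A * (G / 255) ** gamma."""
    return G0 * (L1_avg / L0) ** (1.0 / gamma)

def compensation_parameter(G0, G1, alpha=1.0):
    """First-direction gray scale compensation parameter, assumed here to be
    dG1 = alpha * (G0 - G1), with 0.5 <= alpha <= 1."""
    return alpha * (G0 - G1)

# With gamma = 1.0 for easy checking: a band point half as bright as the
# reference has equivalent gray scale 128 * 0.5 = 64.
G1 = equivalent_gray_scale(G0=128.0, L0=100.0, L1_avg=50.0, gamma=1.0)
dG1 = compensation_parameter(128.0, G1, alpha=0.5)  # 0.5 * (128 - 64)
```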
It should be noted that when both M and N are greater than or equal to 2, the first direction brightness modulation factor is related to the aspect ratio of the display panel and the degree of unevenness of the image brightness in the first direction, and is limited by the gray scale compensation parameters in the second direction. Therefore, α can be adjusted between 0.5 and 1 according to the display effect of the gray scale compensated image. When M≥2 and N=0, the above image brightness information does not comprise the brightness information of the N second sub-image areas arranged along the second direction; therefore, there is no need to set the first direction brightness modulation factor, that is, α=1.
It should be noted that, if M and N are both greater than or equal to 2, the gray scale compensation parameter of an image pixel is ΔGPIX = ΔGPIX1 + ΔGPIX2, where ΔGPIX1 is the gray scale compensation parameter of the pixel in the first direction, and ΔGPIX2 is the gray scale compensation parameter of the pixel in the second direction. In addition, when performing the image gray scale compensation, the pixels can be compensated by first using the gray scale compensation parameters of the first direction and then using the gray scale compensation parameters of the second direction. Of course, the pixels can also be compensated by first using the gray scale compensation parameters of the second direction and then using the gray scale compensation parameters of the first direction.
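The per-pixel combination ΔGPIX = ΔGPIX1 + ΔGPIX2 can be sketched as follows. As a simplification, each first sub-image area is represented here by one column and each second sub-image area by one row; in practice each directional parameter would cover a band of pixels. The function name and sample values are illustrative:

```python
import numpy as np

def compensate_image(image, dG1_per_column, dG2_per_row):
    """Apply first- and second-direction compensation to every pixel:
    dG_pix = dG_pix1 + dG_pix2, then round and clip to the valid 8-bit range.
    Because addition is commutative, applying the two directions in either
    order yields the same result."""
    img = image.astype(np.float64)
    img += dG1_per_column[np.newaxis, :]   # first direction (columns)
    img += dG2_per_row[:, np.newaxis]      # second direction (rows)
    return np.clip(np.rint(img), 0, 255).astype(np.uint8)

out = compensate_image(np.full((2, 3), 100, dtype=np.uint8),
                       np.array([0.0, 1.0, 2.0]),   # hypothetical dG_pix1
                       np.array([10.0, 20.0]))      # hypothetical dG_pix2
```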
In some embodiments, as shown in
In step S210B, based on the reference brightness L0 and the brightness-gray scale relationship, the reference gray scale G0 is obtained.
In step S220B, the average brightness of each band point of each second sub-image area is obtained according to the brightness information of the at least two band points of each second sub-image area.
In step S230B, a gray scale compensation parameter of each band point of each second sub-image area is obtained, according to the reference gray scale G0, the reference brightness L0, and the average brightness of each band point of each second sub-image area.
As shown in
In step S231B, an equivalent gray scale G2 of each band point of each second sub-image area is obtained, according to the reference gray scale G0, the reference brightness L0, and the average brightness of each band point included in each second sub-image area. The equivalent gray scale may be expressed as G2 = G0*(L2/L0)^(1/Gamma), where L2 is the average brightness of the band point.

In step S232B, a gray scale compensation parameter ΔG2 of each band point included in each second sub-image area is obtained, according to the reference gray scale G0 and the equivalent gray scale G2 of each band point included in each second sub-image area. For each second sub-image area, the gray scale compensation parameter of each band point may be expressed as ΔG2 = β*(G0 − G2), where β is the second direction brightness modulation factor, and β is greater than or equal to 0.5 and less than or equal to 1.
Herein, it should be noted that when both M and N are greater than or equal to 2, the second direction brightness modulation factor is related to the aspect ratio of the display panel and the degree of unevenness of the image brightness in the second direction, and is limited by the gray scale compensation parameters in the first direction. Therefore, β can be adjusted between 0.5 and 1 according to the display effect of the gray scale compensated image. When N≥2 and M=0, the above image brightness information does not comprise the brightness information of the M first sub-image areas arranged along the first direction; therefore, there is no need to set the second direction brightness modulation factor, that is, β=1.
When the above image brightness information comprises both the brightness information of the M first sub-image areas arranged along the first direction and the brightness information of the N second sub-image areas arranged along the second direction, the display device is divided into M first display areas along the first direction, and the brightness information of the M first sub-image areas arranged along the first direction is collected. The display device is divided into N second display areas along the second direction, and the brightness information of the N second sub-image areas arranged along the second direction is collected.
When the above image brightness information comprises both the brightness information of the M first sub-image areas arranged along the first direction and the brightness information of the N second sub-image areas arranged along the second direction, it is also possible to divide the display device into M first display areas along the first direction and into N second display areas along the second direction, and then collect the brightness information of the M first sub-image areas arranged along the first direction and the brightness information of the N second sub-image areas arranged along the second direction at one time. This image brightness information collection method is simple and convenient, and does not require multiple collections. Since the first direction and the second direction are different, the display device is divided into grid display areas after the division is completed, and the corresponding M first sub-image areas and N second sub-image areas form grid sub-image areas.
In some embodiments, the above-mentioned reference brightness L0 may be a default value, or may be selected from the average brightness of a first sub-image area. That is, the above-mentioned reference brightness L0 is the average brightness of each band point of a target first sub-image area, and the first direction is the direction away from the signal generation chip. Since the IR voltage drop near the signal generation chip is small, the brightness of the first sub-image area close to the signal generation chip may be high. If the average brightness of this first sub-image area is used as the reference brightness L0, a large amount of processing for the gray scale compensation will result. Based on this, the target first sub-image area is the kth first sub-image area along the direction away from the signal generation chip, wherein k is an integer greater than or equal to 2 and less than or equal to M. The value of k can be set according to the desired reference brightness L0. For example, when M is an even number, the target first sub-image area may be set to the (M/2)th first sub-image area. For another example, when M is an odd number, the target first sub-image area may be set to the ((M+1)/2)th first sub-image area.
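The selection of the target first sub-image area can be sketched as follows (the function name is illustrative; the index is 1-based, matching the "kth area" convention of the description):

```python
def target_area_index(M):
    """1-based index k of the target first sub-image area whose average
    brightness serves as the reference brightness L0: the (M/2)th area when M
    is even, the ((M+1)/2)th area when M is odd."""
    return M // 2 if M % 2 == 0 else (M + 1) // 2

target_area_index(6)  # the 3rd of 6 areas
target_area_index(5)  # the 3rd of 5 areas
```

In both cases this picks an area near the middle of the panel, away from the brightest region next to the signal generation chip.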
Exemplarily,
The three longitudinal sub-image areas are defined as a chip proximal image area ICJ, a chip intermediate image area ICZ, and a chip distal image area ICY, respectively. When collecting the image brightness information, the display device displays an image of one band point, and the light collection device is used to collect the brightness information of the chip proximal image area ICJ, the brightness information of the chip intermediate image area ICZ, and the brightness information of the chip distal image area ICY of the corresponding band point, wherein the average brightness obtained from the brightness information of the chip intermediate image area ICZ is defined as the reference brightness L0 of the current band point. Then, the image information of the next band point is displayed, and the light collection device is used to collect the brightness information of the chip proximal image area ICJ, the brightness information of the chip intermediate image area ICZ, and the brightness information of the chip distal image area ICY of this band point. The above process is repeated until the brightness information of the chip proximal image area ICJ, the chip intermediate image area ICZ, and the chip distal image area ICY of the desired number of band points has been collected, wherein the average brightness obtained from the brightness information of the chip intermediate image area ICZ is defined as the reference brightness L0 of the current band point.
Exemplarily,
The first horizontal sub-image area is defined as the chip left image area ICZC, the second horizontal sub-image area is defined as the chip corresponding image area ICD, and the third horizontal sub-image area is defined as the chip right image area ICYC. When collecting the image brightness information, the display device displays the image information of one band point, and the light collection device is used to collect the brightness information of the chip left image area ICZC, the brightness information of the chip corresponding image area ICD, and the brightness information of the chip right image area ICYC of this band point. Then, the image information of the next band point is displayed, and the light collection device is used to collect the brightness information of the chip left image area ICZC, the brightness information of the chip corresponding image area ICD, and the brightness information of the chip right image area ICYC of this band point. The above process is repeated until the brightness information of the chip left image area ICZC, the chip corresponding image area ICD, and the chip right image area ICYC of the desired number of band points has been collected.
The band points comprised in the brightness information of the second sub-image areas along the second direction may be the same as the band points comprised in the brightness information of the first sub-image areas along the first direction.
For example,
For example, the sub-image area G11 in the first row and first column, the sub-image area G12 in the first row and second column, and the sub-image area G13 in the first row and third column along the first direction are divided into the chip proximal image area ICJ shown in
As shown in
As shown in
As shown in
In some embodiments, as shown in
In step S500A, the gray scale compensation process is performed on the image information according to the gray scale compensation information of the M first sub-image areas, and the gray scale compensation process is performed on the image information according to the gray scale compensation information of the N second sub-image areas, in response to an average gray scale of at least one primary color included in the image information being greater than the gray scale threshold of the corresponding primary color.
For example, as shown in
In step S510A, based on the gray scales of a plurality of primary colors included in the image information, the average gray scales of the plurality of primary colors included in the image information are obtained.
In step S520A, it is determined whether the average gray scale of at least one primary color comprised in the image information is greater than the gray scale threshold of the primary color.
If yes, step S600A or step S600B is executed. Otherwise, the image process ends.
For example, if the pixels comprised in the display device are divided into red pixels for displaying red, green pixels for displaying green, and blue pixels for displaying blue, then the red average gray scale comprised in the image information can be obtained according to the gray scale values displayed by all red pixels comprised in the image information; the green average gray scale comprised in the image information can be obtained according to the gray scale values displayed by all green pixels comprised in the image information; and the blue average gray scale comprised in the image information can be obtained according to the gray scale values displayed by all blue pixels comprised in the image information.
When the red average gray scale comprised in the image information is less than the red gray scale threshold, the green average gray scale comprised in the image information is less than the green gray scale threshold, and the blue average gray scale comprised in the image information is less than the blue gray scale threshold, it indicates that the brightness of the image is relatively low and the image is a low gray scale image. Since the display panel displays a low gray scale image with small differences in brightness, and the human eye is not sensitive to low brightness images, the primary colors in the image information are no longer compensated. Otherwise, step S600A or step S600B is executed.
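The threshold check above can be sketched as follows, assuming one average gray scale and one threshold per primary color. The function name and the sample thresholds are illustrative:

```python
def needs_compensation(avg_gray_by_primary, thresholds):
    """Return True if the average gray scale of at least one primary color
    exceeds that color's threshold. Low gray scale images are skipped because
    the human eye is not sensitive to small brightness differences there."""
    return any(avg > thresholds[color]
               for color, avg in avg_gray_by_primary.items())

# One channel (red) above its threshold: compensation proceeds.
result = needs_compensation({"R": 80, "G": 10, "B": 10},
                            {"R": 32, "G": 32, "B": 32})
```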
In some embodiments, the image brightness information comprises the brightness information of the plurality of primary color images and white image brightness information.
Correspondingly, the gray scale compensation information of each first sub-image area comprises the gray scale compensation information of each first sub-image area corresponding to the brightness information of the plurality of primary color images and the gray scale compensation information of each first sub-image area corresponding to the white image brightness information. These can be obtained with reference to the above discussion.
Similarly, the gray scale compensation information of each second sub-image area comprises the gray scale compensation information of each second sub-image area corresponding to the brightness information of the plurality of primary color images and the gray scale compensation information of each second sub-image area corresponding to the white image brightness information. These can also be obtained with reference to the above discussion.
Using the white light compensation method to compensate a gray scale image requires fewer compensation parameters and occupies less storage space, and the IR voltage drop compensation of the gray scale image does not cause a color cast. However, if the white light compensation method is used to further perform gray scale compensation on a color image, a color cast problem will result.
In order to reduce the color cast problem caused by the pixel compensation, before performing the gray scale compensation process on the image information according to the gray scale compensation information of the M first sub-image areas and after the image information is acquired, as shown in
In step S500B, it is determined whether the image information is gray scale image information.
For example, as shown in
In step S510B, based on the image information, the average gray scales of the plurality of primary colors included in the image information are obtained.
In step S520B, it is determined whether the average gray scales of respective primary colors comprised in the image information are all the same.
If they are the same, step S530B is executed; otherwise, step S540B is executed.
In step S530B, it is determined that the image information is the gray scale information. In step S540B, it is determined that the image information is the color scale information.
For example, if the image information comprises a plurality of red pixel gray scales, a plurality of green pixel gray scales, and a plurality of blue pixel gray scales, then the average gray scale of the red pixels comprised in the image information may be obtained according to the plurality of red pixel gray scales comprised in the image information. Similarly, the average gray scale of the green pixels is obtained according to the plurality of green pixel gray scales comprised in the image information, and the average gray scale of the blue pixels is obtained according to the plurality of blue pixel gray scales comprised in the image information. Then, it is determined whether the average gray scale of the red pixels, the average gray scale of the green pixels, and the average gray scale of the blue pixels comprised in the image information are all the same. If they are the same, the image information is determined to be the gray scale image information; otherwise, the image information is determined to be the color image information.
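The per-channel averaging and comparison of steps S510B to S540B can be sketched as follows. This is a minimal illustration only; the function and parameter names are assumptions and not part of the present disclosure.

```python
def classify_image(red, green, blue):
    """Classify image information as gray scale or color (steps S510B-S540B).

    red, green, blue are flat lists of pixel gray scales for each primary
    color. Per the method above, the image information is treated as gray
    scale information only when the three per-channel average gray scales
    are all the same.
    """
    avg_r = sum(red) / len(red)
    avg_g = sum(green) / len(green)
    avg_b = sum(blue) / len(blue)
    if avg_r == avg_g == avg_b:
        return "gray scale"   # step S530B
    return "color"            # step S540B
```

For instance, an image whose red, green, and blue channels all average to the same gray scale would be classified as gray scale information, while any mismatch between the channel averages yields color information.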
As shown in
Step S600A1 comprises performing the gray scale compensation on the respective primary color pixels in the first direction included in the image information by using the gray scale compensation information of the M of first sub-image areas corresponding to the white image.
Step S600B1 comprises performing the gray scale compensation on the respective primary color pixels in the second direction included in the image information by using the gray scale compensation information of the N of second sub-image areas corresponding to the white image.
When the image information is the gray scale image information, the gray scale compensation method for the gray scale image is the white light compensation method. Since in the white light compensation method the red pixels, the green pixels, and the blue pixels included in the white image share the gray scale compensation information of the same sub-image areas (such as the first sub-image area and/or the second sub-image area), performing the gray scale compensation process on the gray scale image with the white light compensation method will not result in a color cast problem of the gray scale image.
As shown in
Step S600A2 comprises performing the gray scale compensation on the respective primary color pixels in the first direction included in the image information by using the gray scale compensation information of the M of first sub-image areas corresponding to the plurality of primary color images.
Step S600B2 comprises performing the gray scale compensation on the respective primary color pixels in the second direction included in the image information by using the gray scale compensation information of the N of second sub-image areas corresponding to the plurality of primary color images.
Thus, when the image information is the color scale image information, the gray scale compensation method for the color image is the primary color compensation method. That is, the gray scale of the corresponding primary color pixels of the color image is compensated with the gray scale compensation information of the corresponding sub-image area (such as at least one of the first sub-image area and the second sub-image area) of the primary color image.
It should be noted that if the image processing method according to the embodiment of the present disclosure comprises not only step S500A but also step S500B, then steps S500B and S500A may be executed in either order. For example, after performing step S500B, step S500A is performed, and then step S600A1 or step S600B1 is performed. For another example, after performing step S500A, step S500B is performed, and then step S600A1 or step S600B1 is performed.
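The branch between the white light compensation method and the primary color compensation method (steps S500B, S600A1/S600B1, and S600A2/S600B2) can be sketched as follows. The flat per-pixel table layout and all names here are illustrative assumptions, not the disclosed implementation.

```python
def apply_compensation(channels, white_offsets, primary_offsets):
    """Dispatch between white-light and primary-color compensation.

    channels: dict mapping 'R'/'G'/'B' to lists of pixel gray scales.
    white_offsets: gray scale compensation parameters shared by all
    primaries (one per pixel position, for simplicity).
    primary_offsets: dict of per-primary-color parameter lists.
    """
    avgs = {c: sum(v) / len(v) for c, v in channels.items()}
    is_gray = len(set(avgs.values())) == 1          # step S500B
    out = {}
    for color, pixels in channels.items():
        # gray scale image -> white-image table shared by all primaries
        # color image      -> table of the corresponding primary color image
        offsets = white_offsets if is_gray else primary_offsets[color]
        out[color] = [max(0, min(255, g + offsets[i]))   # clamp to 8-bit range
                      for i, g in enumerate(pixels)]
    return out
```

The design point illustrated is that the gray scale branch uses a single shared table, so the three primaries shift identically and no color cast is introduced, whereas the color branch compensates each primary with its own table.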
In some embodiments, the gray scale compensation information of each first sub-image area discussed above is obtained according to the brightness information of the M of first sub-image areas arranged in the first direction. Thus, when the difference between the gray scale compensation information of two neighboring first sub-image areas is relatively large, stripes are easily generated along the direction perpendicular to the first direction on the compensated image information. If the first direction is the length direction of the display device, the generated stripes are cross stripes along the width direction. Thus, a gray scale infiltration algorithm can be used to compensate the gray scale of the pixels along the first direction of the image information. For example, as shown in
In Step S610A, the gray scale infiltration compensation information of the image information along the first direction is obtained according to the gray scale compensation information of the M of first sub-image areas, so that the gray scale infiltration compensation information of the image information along the first direction gradually increases or decreases in the direction approaching the kth first sub-image area. With respect to the schematic diagram of the arrangement of the sub-image areas along the first direction shown in
In step S620A, the infiltration gray scale compensation on the image information is performed according to the gray scale infiltration compensation information of the image information along the first direction.
For example,
Using the gray scale infiltration algorithm to compensate the gray scale of the pixels along the first direction of the image information comprises:
acquiring the number NPIX of rows of pixels from the chip intermediate image area ICZ to the edge of the chip proximal image area according to the pixel area information of the chip intermediate image area ICZ and the pixel area information of the chip proximal image area, i.e. NPIX=65−1=64.
If the chip proximal image area has its gray scale compensation parameter of −8 in a gray scale interval from 246 to 251, since the reference brightness is the average brightness of each band point of the chip intermediate image area ICZ, the chip intermediate image area has its gray scale compensation parameter of 0 in the gray scale interval from 246 to 251. Based on this, the difference between the gray scale compensation parameters of the chip proximal image area and the chip intermediate image area in the gray scale interval from 246 to 251 is obtained as Δk=−8−0=−8.
According to the difference Δk between the gray scale compensation parameters and the number of rows of pixels NPIX from the chip intermediate image area ICZ to the chip proximal image area edge in the first direction, the number of rows of pixels occupied by a unit gray scale compensation difference is obtained as n=NPIX/|Δk|=64/8=8.
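The three quantities in the worked example above can be checked directly; the variable names are illustrative only.

```python
# Worked numbers from the example above (gray scale interval 246 to 251)
n_pix = 65 - 1               # rows of pixels from area ICZ to the ICJ edge
delta_k = -8 - 0             # compensation parameter difference, ICJ minus ICZ
n = n_pix // abs(delta_k)    # rows of pixels per unit of compensation difference
```

With 64 rows spanning a parameter difference of −8, the compensation parameter deepens by one unit every 8 rows.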
In the following, by taking the gray scale compensation information of every two rows of pixels changing once as an example, the gray scale compensation parameters (that is, gray scale infiltration compensation parameters) of the chip intermediate image area ICZ to the chip proximal image area ICJ in the gray scale interval of 246 to 251 are illustrated. The method for setting the gray scale compensation parameters of the chip intermediate image area ICZ to the chip distal image area ICY in the gray scale interval from 246 to 251 can refer to the method for setting the gray scale compensation parameters of the chip intermediate image area ICZ to the chip proximal image area ICJ in the gray scale interval from 246 to 251.
The pixels in the 64th row to the 57th row constitute the first-level gray scale compensation area. The first-level gray scale compensation area is divided into the 1st first-level gray scale compensation area, the 2nd first-level gray scale compensation area, the 3rd first-level gray scale compensation area, and the 4th first-level gray scale compensation area along the direction away from the chip intermediate image area ICZ.
The pixels in the 64th row and the pixels in the 63rd row constitute a 1st first-level gray scale compensation area. 25% of the pixels in the 1st first-level gray scale compensation area are selected to be compensated in the gray scale term with the gray scale compensation parameter of −1, while 75% of the pixels are not going to be compensated for gray scale, or in other words, to be compensated in the gray scale term with the gray scale compensation parameter of 0. The pixels in the 62nd row and the pixels in the 61st row constitute a 2nd first-level gray scale compensation area. 50% of the pixels in the 2nd first-level gray scale compensation area are selected to be compensated in the gray scale term with the gray scale compensation parameter of −1, while 50% of the pixels are not going to be compensated for gray scale, or in other words, to be compensated in the gray scale term with the gray scale compensation parameter of 0. The pixels in the 60th row and the pixels in the 59th row constitute a 3rd first-level gray scale compensation area. 75% of the pixels in the 3rd first-level gray scale compensation area are selected to be compensated in the gray scale term with the gray scale compensation parameter of −1, while 25% of the pixels are not going to be compensated for gray scale, or in other words, to be compensated in the gray scale term with the gray scale compensation parameter of 0. The pixels in the 58th row and the pixels in the 57th row constitute a 4th first-level gray scale compensation area. All or 100% of the pixels in the 4th first-level gray scale compensation area are selected to be compensated in the gray scale term with the gray scale compensation parameter of −1.
For each first-level gray scale compensation area, the selection of pixels in which the gray scale compensation is performed with the gray scale compensation parameter of −1 can follow the principle of a uniform arrangement, so as to ensure the uniformity of the gray scale of the pixels after the gray scale compensation. Since each first-level gray scale compensation area comprises two rows of pixels, in each first-level gray scale compensation area, the pixels for which the gray scale compensation is performed with the gray scale compensation parameter of −1 are arranged at intervals in each row of pixels. The pixels in the adjacent two rows of pixels for which the gray scale compensation is performed with the gray scale compensation parameter of −1 are staggered.
The pixels in the 56th row to the 49th row constitute the second-level gray scale compensation area. The second-level gray scale compensation area is divided into the 1st second-level gray scale compensation area, the 2nd second-level gray scale compensation area, the 3rd second-level gray scale compensation area, and the 4th second-level gray scale compensation area along the direction away from the chip intermediate image area ICZ.
The pixels in the 56th row and the pixels in the 55th row constitute a 1st second-level gray scale compensation area. 25% of the pixels in the 1st second-level gray scale compensation area is selected to be compensated in the gray scale term with the gray scale compensation parameter of −2, while the remaining 75% of the pixels are compensated in the gray scale term with the gray scale compensation parameter of −1. The pixels in the 54th row and the pixels in the 53rd row constitute a 2nd second-level gray scale compensation area. 50% of the pixels in the 2nd second-level gray scale compensation area is selected to be compensated in the gray scale term with the gray scale compensation parameter of −2, while the remaining 50% of the pixels are compensated in the gray scale term with the gray scale compensation parameter of −1. The pixels in the 52nd row and the pixels in the 51st row constitute a 3rd second-level gray scale compensation area. 75% of the pixels in the 3rd second-level gray scale compensation area is selected to be compensated in the gray scale term with the gray scale compensation parameter of −2, while the remaining 25% of the pixels are compensated in the gray scale term with the gray scale compensation parameter of −1. The pixels in the 50th row and the pixels in the 49th row constitute a 4th second-level gray scale compensation area. All or 100% of the pixels in the 4th second-level gray scale compensation area are selected to be compensated in the gray scale term with the gray scale compensation parameter of −2. For each second-level gray scale compensation area, the principle of selecting pixels that are gray-scale compensated with the gray scale compensation parameter of −2 can refer to the principle of selecting pixels that are gray-scale compensated with the gray scale compensation parameter of −1, so as to ensure the uniformity of the gray scale of the pixels after the gray scale compensation.
The gray scale compensation parameters of the pixels in the 48th row to the 41st row, the 40th row to the 33rd row, the 32nd row to the 25th row, the 24th row to the 17th row, the 16th row to the 9th row, and the 8th row to the 1st row can be selected according to the method for selecting the gray scale compensation parameters of the pixels in the 56th row to the 49th row.
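The level-by-level dithering scheme described above can be sketched as follows. The exact stagger pattern is an assumption; the disclosure only requires a uniform, staggered arrangement of the selected pixels within each two-row sub-area.

```python
def infiltration_offsets(n_rows=64, n_cols=8, delta_k=-8):
    """Per-pixel gray scale infiltration map for the ICZ-to-ICJ span.

    64 rows are split into 8 levels of 8 rows each; each level is split
    into four 2-row sub-areas in which 25%/50%/75%/100% of the pixels
    take the deeper compensation parameter, staggered between rows.
    Row index 0 is adjacent to the chip intermediate image area ICZ.
    """
    step = -1 if delta_k < 0 else 1
    rows_per_level = n_rows // abs(delta_k)    # 8 rows per compensation level
    offsets = [[0] * n_cols for _ in range(n_rows)]
    for r in range(n_rows):
        depth = r // rows_per_level            # level index: 0 (shallow) .. 7 (deep)
        sub = (r % rows_per_level) // 2        # 2-row sub-area index 0..3
        deeper = step * (depth + 1)            # e.g. -1 within the first level
        shallow = step * depth                 # e.g. 0 within the first level
        for c in range(n_cols):
            # (c + r) % 4 staggers the selected pixels between adjacent rows
            # and yields 25%, 50%, 75%, 100% coverage for sub = 0, 1, 2, 3
            if (c + r) % 4 <= sub:
                offsets[r][c] = deeper
            else:
                offsets[r][c] = shallow
    return offsets
```

In the resulting map, the first two rows mix parameters 0 and −1 at a 75/25 ratio, the last two rows of the first level are uniformly −1, and the rows nearest the chip proximal image area ICJ reach the full parameter of −8, matching the worked example.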
In some embodiments, since the gray scale compensation information of each second sub-image area discussed above is obtained according to the brightness information of the N of second sub-image areas arranged in the second direction, when the difference between the gray scale compensation information of two neighboring second sub-image areas is relatively large, stripes are easily generated along the direction perpendicular to the second direction on the compensated image information. If the second direction is the width direction of the display device, the generated stripes are longitudinal stripes along the length direction. Thus, a gray scale infiltration algorithm can be used to compensate the gray scale of the pixels of the plurality of second sub-image areas arranged along the second direction of the image information.
For example, as shown in
In step S610B, the gray scale infiltration compensation information of the image information along the second direction is obtained according to the gray scale compensation information of the N of second sub-image areas, so that the gray scale infiltration compensation information of the image information along the second direction gradually increases or decreases in the direction from the 1st second sub-image area to the tth second sub-image area. The kth first sub-image area has a geometric center positioned in the tth second sub-image area. With respect to the schematic diagram of the arrangement of the sub-image areas along the second direction shown in
In step S620B, the infiltration gray scale compensation on the image information is performed according to the gray scale infiltration compensation information of the image information along the second direction.
For example,
For example, the method for setting the gray scale compensation parameters of the chip left image area ICZC to the chip corresponding image area ICD in the gray scale interval from 246 to 251 can refer to the method for setting the gray scale compensation parameters of the chip intermediate image area ICZ to the chip proximal image area ICJ in the gray scale interval from 246 to 251. The method for setting the gray scale compensation parameters of the chip right image area ICYC to the chip corresponding image area ICD in the gray scale interval from 246 to 251 can refer to the method for setting the gray scale compensation parameters of the chip intermediate image area ICZ to the chip proximal image area ICJ in the gray scale interval from 246 to 251.
For example,
The pixel arrangement shown in
Assume that the image brightness information only comprises the brightness information of 3 of first sub-image areas arranged in the first direction, wherein an intermediate sub-image area is disposed between two adjacent first sub-image areas. The schematic diagram of the arrangement of sub-image areas along the first direction is shown in
Table 1 shows a lookup table of the gray scale compensation parameters of the chip proximal image area ICJ and the chip distal image area ICY.
Table 2 shows the LRU of the image information before and after the gray scale compensation.
It can be seen from Table 2 that when the compensation parameters shown in Table 1 are used for the image information, the LRU of the image after the compensation is greater than the LRU of the image before the compensation, and all of the LRU of the image after the compensation are greater than 90%. At the same time, it can be known from the analysis of the LRU of the image before and after the compensation with the standard deviation and the confidence interval before and after the compensation that, the image processing method according to the embodiments of the present disclosure is effective for gray scale compensation of the image.
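The uniformity criterion applied to Table 2 can be sketched as follows. The disclosure does not define LRU; the ratio of minimum to maximum measured brightness, used here, is an assumed common display-uniformity definition.

```python
def lru(brightness_samples):
    """Brightness uniformity figure for a set of measured points.

    Assumed definition (not from the disclosure): minimum measured
    brightness divided by the maximum, expressed as a percentage,
    so a perfectly uniform image scores 100%.
    """
    return 100.0 * min(brightness_samples) / max(brightness_samples)
```

Under this definition, compensation that narrows the brightness spread between the chip proximal and chip distal image areas raises the LRU, consistent with the post-compensation values in Table 2 exceeding 90%.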
The embodiments of the present disclosure provide an image processing device. As shown in
The imaging processing device further comprises a compensation setting unit 320 communicated with the transceiving unit 310 and configured to obtain the gray scale compensation parameter for the at least two band points comprised in each first sub-image area, according to the brightness information of the at least two band points comprised in the brightness information of each first sub-image area and a reference brightness L0, and obtain the gray scale compensation information of each first sub-image area according to the gray scale compensation parameter of the at least two band points comprised in each first sub-image area. The imaging processing device further comprises the gray scale compensation unit 330 communicated with the transceiving unit 310 and the compensation setting unit 320, and configured to perform the gray scale compensation process on the image information according to the gray scale compensation information of the M of first sub-image areas.
In some embodiments, as shown in
As shown in
In order to avoid setting the gray scale compensation information of each first sub-image area and the gray scale compensation information of each second sub-image area every time before the image processing, the imaging processing device further comprises a storage unit 340 communicated with the compensation setting unit 320 and the gray scale compensation unit 330, and configured to store the gray scale compensation information of each first sub-image area and the gray scale compensation information of each second sub-image area.
In some embodiments, as shown in
of each band point comprised in each first sub-image area, α is greater than or equal to 0.5 but less than or equal to 1, and Gamma is a display parameter.
As shown in
of each band point included in each second sub-image area is obtained, wherein β is greater than or equal to 0.5 but less than or equal to 1.
For example, when M≥2 and N=0, α=1. When M=0 and N≥2, β=1. When M≥2 and N≥2, α and β are both greater than or equal to 0.5 and less than or equal to 1.
In some embodiments, as shown in
As shown in
In some embodiments, as shown in
In some embodiments, the foregoing reference brightness L0 is the average brightness of the band points of the target first sub-image area, the first direction being the same as the direction away from the signal chip, the target first sub-image area is the kth first sub-image area along the direction away from the signal chip, and k is an integer greater than or equal to 2 but less than or equal to M.
As shown in
As shown in
In some embodiment, as shown in
The embodiments of the present disclosure also provide a display device. The display device comprises the above-described image processing device.
The display device may be any product or component having a display function, such as, a mobile phone, a tablet computer, a TV, a display, a notebook computer, a digital photo frame, a navigator, or the like.
The embodiments of the present disclosure also provide a computer storage medium. The computer storage medium stores instructions, and when the instructions are executed, the above image processing method is implemented.
The above instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the instructions may be transmitted from a website, a computer, a server, or a data center to another website, computer, server, or data center via a cable (such as a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wirelessly (such as by infrared, radio, or microwave). The computer storage medium may be any available medium that the computer can access, or a data storage device such as a server or a data center that integrates one or more available media. The available media may be magnetic media (for example, a floppy disk, a hard disk, or a magnetic tape), optical media (for example, a DVD), or semiconductor media (for example, a solid state disk (SSD)).
As shown in
As shown in
The processor 410 described in the embodiment of the present disclosure may be a single processor, or a collective term for multiple processing elements. For example, the processor 410 may be a central processing unit (CPU), an application specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present disclosure, such as one or more digital signal processors (DSPs) or one or more field programmable gate arrays (FPGAs).
The memory 420 may be a storage device, or a collective term for multiple storage elements, and is used to store executable program code and the like. The memory 420 may comprise random access memory (RAM), and may also comprise non-volatile memory, such as magnetic disk memory, flash memory (Flash), and so on.
The bus 440 may be an industry standard architecture (ISA) bus, a peripheral component interconnection (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus and so on. For ease of representation, the bus is only represented by a thick line in
The embodiments in this specification are described in a progressive manner. The same or similar parts between the embodiments can be referred to each other. Each embodiment focuses on the differences from other embodiments. In particular, since the device embodiment is basically similar to the method embodiment, the description of the device embodiment is relatively simple, and the relevant part can be referred to the description of the method embodiment.
The above description is only specific implementations of the present disclosure, but the scope of the present disclosure is not limited to this. Those skilled in the art can easily conceive changes or replacements within the technical scope disclosed in the present disclosure, which should be covered by the scope of the present disclosure. Therefore, the scope of the present disclosure should be defined by the scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
201910599326.3 | Jul 2019 | CN | national |
Number | Name | Date | Kind |
---|---|---|---|
10311807 | Chen | Jun 2019 | B2 |
10720088 | Li | Jul 2020 | B1 |
20140285533 | Chun et al. | Sep 2014 | A1 |
20150379922 | Zhang | Dec 2015 | A1 |
20160351101 | Lee | Dec 2016 | A1 |
20170069238 | Moon | Mar 2017 | A1 |
20180204529 | Chen | Jul 2018 | A1 |
20200286423 | Yu et al. | Sep 2020 | A1 |
Number | Date | Country |
---|---|---|
106023916 | Oct 2016 | CN |
108550345 | Sep 2018 | CN |
108694906 | Oct 2018 | CN |
109256101 | Jan 2019 | CN |
Entry |
---|
Office Action issued in corresponding Chinese Patent Application No. 201910599326.3, dated Dec. 3, 2020. |
Number | Date | Country | |
---|---|---|---|
20210005128 A1 | Jan 2021 | US |