The disclosure relates generally to display technologies, and more particularly, to system and method for calibrating a display panel.
In display technology, differences in manufacturing and calibration can result in differences in product performance. For example, these differences may exist in the backlight performance of liquid crystal display (LCD) panels, the light-emitting performance of organic light-emitting diode (OLED) display panels, and the performance of thin-film transistors (TFTs), resulting in differences in the maximum brightness level and variations in brightness levels and/or chrominance values. Meanwhile, different geographic locations, devices, and applications may require different display standards for display panels. For example, display standards for display panels in Asia and Europe may require different color temperature ranges. To satisfy different display standards, display panels are often calibrated to meet the desired display standards.
In one example, a system for display is provided. The system includes a display panel including a pixel array, and a processor. The processor is configured to, upon executing instructions: define a calibration vector with a source pixel, a vector volume, and a calibration range; calculate a distance between a pixel to be calibrated and the source pixel; calculate a calibration amount based on the distance and the vector volume; and calibrate the pixel to be calibrated based on the calibration amount and the calibration vector.
In some implementations, the source pixel is a three-dimensional parameter (Rscr, Gscr, Bscr) configured to determine a central point for calibration, where Rscr is a grayscale value of a red channel of the source pixel; Gscr is a grayscale value of a green channel of the source pixel; and Bscr is a grayscale value of a blue channel of the source pixel.
In some implementations, the vector volume is a three-dimensional parameter (VR, VG, VB) configured to determine a maximal volume for calibration, where VR is a calibration value of a red channel of the source pixel; VG is a calibration value of a green channel of the source pixel; and VB is a calibration value of a blue channel of the source pixel.
In some implementations, the calibration range is a four-dimensional parameter (VRange, SR, SG, SB) configured to determine a calibration scope, where pixels within the calibration scope are calibrated by the calibration vector, where VRange is a preset distance between the pixel to be calibrated and the source pixel, and the pixel to be calibrated is calibrated by the calibration vector when the distance between the pixel to be calibrated and the source pixel is smaller than VRange; SR is a calculation factor of a red channel when calculating the distance between the pixel to be calibrated and the source pixel; SG is a calculation factor of a green channel when calculating the distance between the pixel to be calibrated and the source pixel; and SB is a calculation factor of a blue channel when calculating the distance between the pixel to be calibrated and the source pixel.
In some implementations, the distance between a pixel to be calibrated and the source pixel is negatively correlated with the calibration amount applied on the pixel to be calibrated.
In some implementations, the processor is further configured to calculate a calibration weight based on the distance between a pixel to be calibrated and the source pixel and calculate the calibration amount based on the calibration weight and the vector volume, the calibration amount is a three-dimensional parameter (ΔVR, ΔVG, ΔVB), where ΔVR is a calibration amount of a red channel of the source pixel; ΔVG is a calibration amount of a green channel of the source pixel; and ΔVB is a calibration amount of a blue channel of the source pixel.
In some implementations, the processor is further configured to calibrate the pixel to be calibrated based on a plurality of calibration vectors and a plurality of calibration amounts corresponding to the calibration vectors.
In some implementations, the plurality of calibration vectors are placed in a sequential order.
In some implementations, the plurality of calibration vectors are placed in a parallel order.
In some implementations, the calibration vector comprises at least one of a marginal vector for calibrating pixels of the pixel array, a white balance vector for compensating the pixels of the pixel array, or a local vector for calibrating at least one of the pixels of the pixel array.
In some implementations, the system further includes a register configured to store the defined calibration vector. The calibration vector stored in the register can be retrieved by the processor repeatedly.
In another example, a method for calibrating a display having a pixel array is provided. The method includes four operations: defining a calibration vector with a source pixel, a vector volume, and a calibration range; calculating a distance between a pixel to be calibrated and the source pixel; calculating a calibration amount based on the distance and the vector volume; and calibrating the pixel to be calibrated based on the calibration amount and the calibration vector.
In some implementations, the source pixel is a three-dimensional parameter (Rscr, Gscr, Bscr) configured to determine a central point for calibration, where Rscr is a grayscale value of a red channel of the source pixel; Gscr is a grayscale value of a green channel of the source pixel; and Bscr is a grayscale value of a blue channel of the source pixel.
In some implementations, the vector volume is a three-dimensional parameter (VR, VG, VB) configured to determine a maximal volume for calibration, where VR is a calibration value of a red channel of the source pixel; VG is a calibration value of a green channel of the source pixel; and VB is a calibration value of a blue channel of the source pixel.
In some implementations, the calibration range is a four-dimensional parameter (VRange, SR, SG, SB) configured to determine a calibration scope, where pixels within the calibration scope are calibrated by the calibration vector, where VRange is a preset distance between the pixel to be calibrated and the source pixel, and the pixel to be calibrated is calibrated by the calibration vector when the distance between the pixel to be calibrated and the source pixel is smaller than VRange; SR is a calculation factor of a red channel when calculating the distance between the pixel to be calibrated and the source pixel; SG is a calculation factor of a green channel when calculating the distance between the pixel to be calibrated and the source pixel; and SB is a calculation factor of a blue channel when calculating the distance between the pixel to be calibrated and the source pixel.
In some implementations, the distance between a pixel to be calibrated and the source pixel is negatively correlated with the calibration amount applied on the pixel to be calibrated.
In some implementations, the calculating a calibration amount based on the distance and the vector volume includes calculating a calibration weight based on the distance between a pixel to be calibrated and the source pixel and calculating the calibration amount based on the calibration weight and the vector volume, where the calibration amount is a three-dimensional parameter (ΔVR, ΔVG, ΔVB), where ΔVR is a calibration amount of a red channel of the source pixel; ΔVG is a calibration amount of a green channel of the source pixel; and ΔVB is a calibration amount of a blue channel of the source pixel.
In some implementations, the pixel to be calibrated is calibrated based on a plurality of the calibration vectors and a plurality of calibration amounts corresponding to the calibration vectors.
In some implementations, the calibration vector comprises at least one of a marginal vector for calibrating pixels of the pixel array, a white balance vector for compensating the pixels of the pixel array, or a local vector for calibrating at least one of the pixels of the pixel array.
In yet another example, a processor for calibrating a display having a pixel array is provided. The processor includes a vector defining module configured to define a calibration vector with a source pixel, a vector volume, and a calibration range; a first calculator configured to calculate a distance between a pixel to be calibrated and the source pixel; a second calculator configured to calculate a calibration amount based on the distance and the vector volume; and a calibrating module configured to calibrate the pixel to be calibrated based on the calibration amount and the calibration vector.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosures. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well known methods, procedures, systems, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure.
Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase “in one embodiment/example” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment/example” as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of example embodiments in whole or in part.
In general, terminology may be understood at least in part from usage in context. For example, terms, such as “and”, “or”, or “and/or,” as used herein may include a variety of meanings that may depend at least in part upon the context in which such terms are used. Typically, “or” if used to associate a list, such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B or C, here used in the exclusive sense. In addition, the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
In the present disclosure, each pixel or subpixel of a display panel can be directed to assume a luminance/pixel value discretized to the standard set [0, 1, 2, . . . , (2N−1)], where N represents the bit number and is a positive integer. A triplet of such pixels/subpixels provides the red (R), green (G), and blue (B) components that make up an arbitrary color, which can be updated in each frame. Each of the pixel values corresponds to a different grayscale value. For ease of description, the grayscale value of a pixel is also discretized to a standard set [0, 1, 2, . . . , (2N−1)]. In the present disclosure, a pixel value and a grayscale value each represents the voltage applied on the pixel/subpixel. In the present disclosure, a grayscale mapping correlation lookup table (LUT) is employed to describe the mapping correlation between a grayscale value of a pixel and a set of mapped pixel values of subpixels. In the present disclosure, the display data of a pixel can be represented in the forms of different attributes. For example, display data of a pixel can be represented as (R, G, B), where R, G, and B each represents a respective pixel value of a subpixel in the pixel. In another example, the display data of a subpixel can be represented as (Y, x, y), where Y represents the luminance value, and x and y each represents a chrominance value. For illustrative purposes, the present disclosure only describes a pixel having three subpixels, each displaying a different color (e.g., R, G, and B colors). It should be appreciated that the disclosed methods can be applied to pixels having any suitable number of subpixels that can separately display various colors, such as 2 subpixels, 4 subpixels, 5 subpixels, and so forth. The number of subpixels and the colors displayed by the subpixels should not be limited by the embodiments of the present disclosure.
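The discretization above can be illustrated with a short sketch; the function name is illustrative only and not part of the disclosure:

```python
def grayscale_levels(n_bits: int) -> list[int]:
    """Return the standard set [0, 1, ..., 2^N - 1] of discrete pixel values
    for an N-bit channel, as described above."""
    return list(range(2 ** n_bits))

# For an 8-bit channel, pixel values span 0..255 (256 levels in total).
levels = grayscale_levels(8)
print(levels[0], levels[-1], len(levels))  # 0 255 256
```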
In the present disclosure, a numerical space is employed to illustrate the method for determining a set of mapped pixels mapped to a grayscale value based on a target luminance value and a plurality of target chrominance values. The numerical space has a plurality of axes extending from an origin. Each of the three axes represents the grayscale value of one color displayed by the display panel. For ease of description, the numerical space has three axes, each being orthogonal to one another and representing the pixel value of a subpixel in a pixel to display a color. In some embodiments, the numerical space is an RGB space having three axes, representing the pixel values for a subpixel to display a red (R) color, a green (G) color, and a blue (B) color. A point in the RGB space can have a set of coordinates. Each component (i.e., one of the coordinates) of the set of coordinates represents the pixel value (i.e., displayed by the respective subpixel) along the respective axis. For example, a point of (R0, G0, B0) represents a pixel having pixel values of R0,G0, and B0 applied respectively on the R, G, and B subpixels. The RGB space is employed herein to, e.g., determine different sets of pixel values for ease of description, and can be different from a standard RGB color space defined as a color space based on the RGB color model. For example, the RGB space employed herein represents the colors that can be displayed by the display panel. These colors may or may not be the same as the colors defined in a standard RGB color space.
In display technology, display panels are calibrated to have different input/output characteristics for various reasons. LUTs are widely used in common calibrations of display panels. LUTs are essentially conversion matrices of varying complexity, with the two main options being one-dimensional (1D) LUTs and three-dimensional (3D) LUTs. A LUT takes an input value and outputs a new value based on the data within the LUT. 1D LUTs can only re-map individual input values to new output values based on the LUT data, a simple one-input-to-one-output process, regardless of the actual RGB pixel value. 3D LUTs can re-map individual input values to any number of output values based on the LUT data and the other associated input RGB pixel data. Referring to
The disadvantages of 3D LUTs, however, cannot be ignored. If a 3D LUT were to have values for each and every input-to-output combination, the LUT would be far too large to use. A 3D LUT using every input-to-output value for 10-bit image workflows would be a 1024-point LUT and would have 1,073,741,824 points (1024³). So, most 3D LUTs use cubes in the range of 17³ to 64³. For a 17³ 3D LUT, this means there are only 17 input-to-output points per axis, and accuracy is sacrificed to reduce the amount of data storage. Further, as values between these points must be interpolated, and different systems do this with different levels of accuracy, the exact same 3D LUT used in two different systems will, in all probability, produce a subtly different result.
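The storage figures quoted above can be verified with a short calculation (a sketch for illustration; the function name is not from the disclosure):

```python
def lut_points(cube_size: int) -> int:
    """Number of entries in a 3D LUT with cube_size points per axis."""
    return cube_size ** 3

# A full 10-bit 3D LUT has 1024 points per axis.
print(lut_points(1024))  # 1073741824
# A common compact 3D LUT has 17 points per axis.
print(lut_points(17))    # 4913
```

The six-orders-of-magnitude gap between the two results is exactly why practical 3D LUTs interpolate between sparse points, trading accuracy for storage.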
To overcome the above-mentioned issues, a system and a method for calibrating a display panel are provided. One or more calibration vectors are used in place of a 3D LUT for calibration. A calibration vector has three parameters: a source pixel, a vector volume, and a calibration range. The source pixel is a three-dimensional parameter configured to determine a central point for calibration. The vector volume is a three-dimensional parameter configured to determine a maximal volume for calibration. The calibration range is a four-dimensional parameter configured to determine a calibration scope, and pixels within the calibration scope are calibrated by the calibration vector. By employing one or more calibration vectors, precise and complex calibration can be performed on specific colors within a specific scope with a small amount of data storage. The method can be used to calibrate any suitable types of display panels, such as LCDs and OLED displays. In some embodiments, the calibration is computed by a processor (or an application processor (AP)), and/or a control logic (or a display driver integrated circuit (DDIC)).
Additional novel features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by the production or operation of the examples. The novel features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities, and combinations set forth in the detailed examples discussed below.
Control logic 104 may be any suitable hardware, software, firmware, or combination thereof, configured to receive display data 106 (e.g., pixel data) and generate control signals 108 for driving the subpixels on display panel 102. Control signals 108 are used for controlling writing of display data to the subpixels and directing operations of display panel 102. For example, subpixel rendering (SPR) algorithms for various subpixel arrangements may be part of control logic 104 or implemented by control logic 104. Control logic 104 may include any other suitable components, such as an encoder, a decoder, one or more processors, controllers, and storage devices. Control logic 104 may be implemented as a standalone integrated circuit (IC) chip, such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). In some embodiments, control logic 104 may be manufactured in a chip-on-glass (COG) package, for example, when display panel 102 is a rigid display. In some embodiments, control logic 104 may be manufactured in a chip-on-film (COF) package, for example, when display panel 102 is a flexible display, e.g., a flexible OLED display.
Apparatus 100 may also include any other suitable component such as, but not limited to tracking devices 110 (e.g., inertial sensors, camera, eye tracker, GPS, or any other suitable devices for tracking motion of eyeballs, facial expression, head movement, body movement, and hand gesture) and input devices 112 (e.g., a mouse, keyboard, remote controller, handwriting device, microphone, scanner, etc.). Input devices 112 may transmit input instructions 120 to processor 114 to be processed and executed. For example, input instructions 120 may include computer programs and/or manual input to command processor 114 to perform a test and/or calibration operation on control logic 104 and/or display panel 102.
In this embodiment, apparatus 100 may be a handheld or a VR/AR device, such as a smart phone, a tablet, or a VR headset. Apparatus 100 may also include a processor 114 and memory 116. Processor 114 may be, for example, a graphics processor (e.g., graphics processing unit (GPU)), an application processor (AP), a general processor (e.g., APU, accelerated processing unit; GPGPU, general-purpose computing on GPU), or any other suitable processor. Memory 116 may be, for example, a discrete frame buffer or a unified memory. Processor 114 is configured to generate display data 106 in consecutive display frames and may temporarily store display data 106 in memory 116 before sending it to control logic 104. Processor 114 may also generate other data, such as but not limited to, control instructions 118 or test signals, and provide them to control logic 104 directly or through memory 116. Control logic 104 then receives display data 106 from memory 116 or directly from processor 114.
Display panel 102 may be, for example, a TN panel, an IPS panel, an AFFS panel, a VA panel, an ASV panel, or any other suitable display panel. In this example, the display panel 102 includes a filter substrate 220, an electrode substrate 224, and a liquid crystal layer 226 disposed between the filter substrate 220 and the electrode substrate 224. As shown in
As shown in
In this embodiment, display panel includes light emitting layer 270 and a driving circuit layer 272. As shown in
In this embodiment, driving circuit layer 272 includes a plurality of pixel circuits 260, 262, 264, and 268, each of which includes one or more thin film transistors (TFTs), corresponding to OLEDs 250, 252, 254, and 256 of subpixels 202, 204, 206, and 208, respectively. Pixel circuits 260, 262, 264, and 268 may be individually addressed by control signals 108 from control logic 104 and configured to drive corresponding subpixels 202, 204, 206, and 208, by controlling the light emitting from respective OLEDs 250, 252, 254, and 256, according to control signals 108. Driving circuit layer 272 may further include one or more drivers (not shown) formed on the same substrate as pixel circuits 260, 262, 264, and 268. The on-panel drivers may include circuits for controlling light emitting, gate scanning, and data writing, as described below in detail. Scan lines and data lines are also formed in driving circuit layer 272 for transmitting scan signals and data signals, respectively, from the drivers to each pixel circuit 260, 262, 264, and 268. Display panel 102 may include any other suitable component, such as one or more glass substrates, polarization layers, or a touch panel (not shown). Pixel circuits 260, 262, 264, and 268 and other components in driving circuit layer 272 in this embodiment are formed on a low-temperature polycrystalline silicon (LTPS) layer deposited on a glass substrate, and the TFTs in each pixel circuit 260, 262, 264, and 268 are p-type transistors (e.g., PMOS LTPS-TFTs). In some embodiments, the components in driving circuit layer 272 may be formed on an amorphous silicon (a-Si) layer, and the TFTs in each pixel circuit may be n-type transistors (e.g., NMOS TFTs). In some embodiments, the TFTs in each pixel circuit may be organic TFTs (OTFT) or indium gallium zinc oxide (IGZO) TFTs.
As shown in
Although
In some embodiments, control logic 104 is an integrated circuit (but may alternatively include a state machine made of discrete logic and other components), which provides an interface function between processor 114/memory 116 and display panel 102. Control logic 104 may provide various control signals 108 with suitable voltage, current, timing, and de-multiplexing, to control display panel 102 to show the desired text or image. Control logic 104 may be an application-specific microcontroller and may include storage units such as RAM, flash memory, EEPROM, and/or ROM, which may store, for example, firmware and display fonts. In this embodiment, control logic 104 includes a data interface and a control signal generating sub-module. The data interface may be any serial or parallel interface, such as but not limited to, display serial interface (DSI), display pixel interface (DPI), and display bus interface (DBI) by the Mobile Industry Processor Interface (MIPI) Alliance, unified display interface (UDI), digital visual interface (DVI), high-definition multimedia interface (HDMI), and DisplayPort (DP). The data interface in this embodiment is configured to receive display data 106 and any other control instructions 118 or test signals from processor 114/memory 116. The control signal generating sub-module may provide control signals 108 to on-panel drivers 302, 304, and 306. Control signals 108 control on-panel drivers 302, 304, and 306 to drive the subpixels in active region 300 by, in each frame, scanning the subpixels to update display data and causing the subpixels to emit light to present the updated display image.
Apparatus 100 can be configured to calibrate a mapping correlation between voltage (e.g., gate voltage) applied on a light-emitting element (e.g., an LCD or an OLED) of a pixel in display panel 102 and the grayscale values displayed by a pixel that includes the light-emitting element (e.g., when different gate voltages are applied on the light-emitting element). The calibration process may be performed by processor 114 (e.g., illustrated in
In this embodiment, processor 114 includes a vector defining module 402, a first calculator 404, a second calculator 406, and a calibrating module 408. Vector defining module 402 is configured to define one or more calibration vectors used for calibration. Unlike a 3D LUT, a calibration vector has three parameters: a source pixel, a vector volume, and a calibration range. The source pixel is a three-dimensional parameter configured to determine a central point for calibration. The vector volume is a three-dimensional parameter configured to determine a maximal volume for calibration. The calibration range is a four-dimensional parameter configured to determine a calibration scope, and pixels within the calibration scope are calibrated by the calibration vector.
A two-dimensional (2D) calibration vector is taken as an example to illustrate the principle of the calibration vectors.
Source pixel 610 is a two-dimensional parameter (Gscr, Bscr) configured to determine a central point for calibration, where Gscr is a grayscale value of a green channel of the source pixel and Bscr is a grayscale value of a blue channel of the source pixel. In the present implementation, the grayscale value of source pixel 610 is (100, 150). Source pixel 610 defines a start point for calibration, as shown in
Vector volume 620 is a two-dimensional parameter (VG, VB) configured to determine a maximal volume for calibration. VG is a calibration value of a green channel of the source pixel and VB is a calibration value of a blue channel of the source pixel. In the present implementation, vector volume 620 is (30, 0). Vector volume 620 defines a degree for calibration, as shown in
Calibration range 640 is a three-dimensional parameter (VRange, SG, SB) configured to determine a calibration scope, where pixels within the calibration scope are calibrated by the calibration vector. VRange is a preset distance between the pixel to be calibrated and the source pixel, and the pixel to be calibrated is calibrated by the calibration vector when the distance between the pixel to be calibrated and the source pixel is smaller than VRange. In the present implementation, VRange is 100, i.e., a pixel whose distance from source pixel 610 is greater than 100 will not be calibrated. For example, the distance between a first pixel 612 and source pixel 610 is smaller than 100, and first pixel 612 is located within calibration range 640. Thus, first pixel 612 will be calibrated. In contrast, the distance between a second pixel 614 and source pixel 610 is greater than 100, and second pixel 614 is located outside calibration range 640. Thus, second pixel 614 will not be calibrated.
In the present implementation, the green channel and the blue channel share the same VRange. SG is a calculation factor of a green channel when calculating the distance between the pixel to be calibrated and the source pixel, and SB is a calculation factor of a blue channel when calculating the distance between the pixel to be calibrated and the source pixel. SG and SB are preset based on a calibration target.
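The 2D range test described above can be sketched as follows. The weighted Euclidean distance form is an assumption for illustration; the disclosure leaves the exact distance model configurable, and the function and variable names are not from the source:

```python
import math

def in_calibration_range(pixel, source, v_range, s_g, s_b):
    """Check whether a (G, B) pixel falls within VRange of the source pixel,
    using an assumed weighted Euclidean distance with factors SG and SB."""
    dg, db = pixel[0] - source[0], pixel[1] - source[1]
    distance = math.sqrt(s_g * dg ** 2 + s_b * db ** 2)
    return distance < v_range

source = (100, 150)  # (Gscr, Bscr) from the example above, VRange = 100
print(in_calibration_range((120, 160), source, 100, 1, 1))  # True: will be calibrated
print(in_calibration_range((250, 20), source, 100, 1, 1))   # False: outside the scope
```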
The principle of how calibration vectors work in a display panel is illustrated in the above implementation in a 2D color space.
Source pixel 710 is a three-dimensional parameter (Rscr, Gscr, Bscr) configured to determine a central point for calibration, and Rscr is a grayscale value of a red channel of the source pixel. In the present implementation, the grayscale value of source pixel 710 is (100, 100, 150). Source pixel 710 defines a start point for calibration, as shown in
Calibration range 740 is a four-dimensional parameter (VRange, SR, SG, SB) configured to determine a calibration scope, where pixels within the calibration scope are calibrated by calibration vector V3D. VRange is a preset distance between the pixel to be calibrated and the source pixel 710, and the pixel to be calibrated is calibrated by the calibration vector when the distance between the pixel to be calibrated and the source pixel 710 is smaller than VRange. In the present implementation, VRange is 100, i.e., a pixel whose distance from source pixel 710 is greater than 100 will not be calibrated. For example, the distance between a first pixel 712 and source pixel 710 is smaller than 100, and first pixel 712 is located within calibration range 740. Thus, first pixel 712 will be calibrated. In contrast, the distance between a second pixel 714 and source pixel 710 is greater than 100, and second pixel 714 is located outside calibration range 740. Thus, second pixel 714 will not be calibrated. In the present implementation, the red channel, the green channel, and the blue channel share the same VRange, and thus the scope of calibration is a cube. SR is a calculation factor of a red channel when calculating the distance between the pixel to be calibrated and the source pixel. SR, SG, and SB are preset based on a calibration target.
Referring to
Taking first pixel 712 as an example, to perform calibration, it is necessary to calculate a distance 750 between first pixel 712 and source pixel 710 because distance 750 is negatively correlated with a calibration amount 722 applied on first pixel 712. Distance 750 is calculated based on the grayscale values of first pixel 712 and source pixel 710, and the calculation factors of the color space SR, SG, and SB, through a preset calculation model, for example, an ellipsoidal distance model, a cube distance model, a sphere distance model, etc. In an implementation, for each pixel to be calibrated, an ellipsoidal distance model is employed to calculate the distance between the pixel and source pixel 710 based on the following formula, in which (Ri, Gi, Bi) is the grayscale value of the pixel to be calibrated.
In the present implementation, the grayscale value of first pixel 712 is (50, 80, 110), the grayscale value of source pixel 710 is (100, 100, 150), SR=SG=SB=2, and distance 750 is 95. Source pixel 710, vector volume 720, and calibration range 740 can be designed and preset to any values to meet the needs of the display system. In some implementations, SR=SG=SB=1, and distance 750 is then 67, i.e., the Euclidean distance. In some implementations, SR, SG, and SB are different; for example, SR=0, SG=1, SB=2, which means no calibration in the red channel, and distance 750 is 20.
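The ellipsoidal formula itself is not reproduced here. A weighted squared-difference form, sqrt(SR·ΔR² + SG·ΔG² + SB·ΔB²), is one plausible reading: it reproduces the 95 (SR=SG=SB=2) and 67 (SR=SG=SB=1) values quoted above, though it is only an assumption for this sketch and the function name is illustrative:

```python
import math

def ellipsoidal_distance(pixel, source, s_r, s_g, s_b):
    """Assumed ellipsoidal distance: sqrt(SR*dR^2 + SG*dG^2 + SB*dB^2)."""
    dr, dg, db = (p - s for p, s in zip(pixel, source))
    return math.sqrt(s_r * dr ** 2 + s_g * dg ** 2 + s_b * db ** 2)

pixel, source = (50, 80, 110), (100, 100, 150)
print(round(ellipsoidal_distance(pixel, source, 2, 2, 2)))  # 95
print(round(ellipsoidal_distance(pixel, source, 1, 1, 1)))  # 67 (Euclidean)
```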
In some implementations, for each pixel to be calibrated, a cube distance model is employed to calculate the distance between the pixel and source pixel 710 based on the following formula in which (Ri, Gi, Bi) is the grayscale value of the pixel to be calibrated.
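The cube-distance formula is likewise not reproduced in this text. A weighted Chebyshev (maximum-coordinate) distance is a common choice whose iso-distance surfaces are cubes, and it is assumed in this sketch; the function name and factors are illustrative:

```python
def cube_distance(pixel, source, s_r, s_g, s_b):
    """Assumed cube distance: the maximum of the weighted per-channel
    absolute differences (iso-distance surfaces are cubes)."""
    dr, dg, db = (abs(p - s) for p, s in zip(pixel, source))
    return max(s_r * dr, s_g * dg, s_b * db)

# With unit factors this reduces to the Chebyshev distance.
print(cube_distance((50, 80, 110), (100, 100, 150), 1, 1, 1))  # 50
```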
For each pixel to be calibrated, the distance is closely related to the source pixel, the calibration range, and the calculation model. It is to be appreciated that the above implementations are provided for an exemplary purpose only and without limitation.
Referring to
Two steps are needed to calculate calibration amount 722.
First, calculate a calibration weight based on the distance between a pixel to be calibrated and the source pixel, i.e., distance 750. In the present implementation, the calibration weight is calculated based on the following formula:
Second, calculate calibration amount 722 based on the calibration weight and vector volume 720. Calibration amount 722 is a three-dimensional parameter (ΔVR, ΔVG, ΔVB), where ΔVR is a calibration amount of a red channel of the source pixel, ΔVG is a calibration amount of a green channel of the source pixel, and ΔVB is a calibration amount of a blue channel of the source pixel. In the present implementation, ΔVR=VR×Weight, ΔVG=VG×Weight, and ΔVB=VB×Weight. The calculation methods used to calculate the calibration weight and the calibration amount are provided for an exemplary purpose only and without limitation.
After calibration amount 722 is confirmed, calibrating module 408 is configured to calibrate first pixel 712 based on calibration amount 722 and calibration vector V3D. The grayscale value of a first calibrated pixel 732 is a three-dimensional parameter (RCal, GCal, BCal), which is determined by first pixel 712 and calibration amount 722, i.e., RCal=Ri+ΔVR, GCal=Gi+ΔVG, and BCal=Bi+ΔVB. In an implementation, the grayscale value of source pixel 710 is (100, 100, 150), and calibration range 740 is (100, 1, 1, 1). For first pixel 712, the grayscale value of first pixel 712 is (50, 80, 110), the Euclidean distance model is used to calculate distance 750, which is 67, and the grayscale value of first calibrated pixel 732 is calibrated to (17, 47, 51). In other implementations, the grayscale value of first calibrated pixel 732 changes with the parameters used for calibration. The implementations are provided for an exemplary purpose only and without limitations.
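The worked example above can be sketched end to end as follows. The weight formula appears only in the accompanying figure, so a linear falloff (Weight = 1 − distance/VRange, clamped at zero) is assumed here, and the vector volume (−100, −100, −180) is a hypothetical value chosen so that the sketch reproduces the (17, 47, 51) result; both are assumptions for illustration.

```python
import math

def euclidean_distance(pixel, source):
    return math.sqrt(sum((p - s) ** 2 for p, s in zip(pixel, source)))

def calibration_weight(distance, v_range):
    # Assumed linear falloff: full calibration at the source pixel,
    # no calibration at or beyond the calibration range VRange.
    return max(0.0, 1.0 - distance / v_range)

def calibrate(pixel, source, volume, v_range):
    # Calibration amount (dVR, dVG, dVB) = vector volume x weight,
    # added channel-wise to the pixel's grayscale value.
    weight = calibration_weight(euclidean_distance(pixel, source), v_range)
    return tuple(int(round(c + v * weight)) for c, v in zip(pixel, volume))

source_pixel = (100, 100, 150)   # source pixel 710
v_range = 100                    # from calibration range (100, 1, 1, 1)
volume = (-100, -100, -180)      # hypothetical vector volume

print(calibrate((50, 80, 110), source_pixel, volume, v_range))  # (17, 47, 51)
```

With a distance of about 67 and VRange of 100, the assumed weight is roughly 0.33, which scales the hypothetical volume into a per-channel calibration amount.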
Calibration vector V3D may be a marginal vector for calibrating pixels of the pixel array, a white balance vector for compensating the pixels of the pixel array, or a local vector for calibrating at least one of the pixels of the pixel array, etc.
Marginal vectors are used for global calibration of RGB space to adjust the overall color according to gamut space characteristics. Referring to Table 1, an example of a set of marginal vectors is illustrated. The set of marginal vectors includes six groups of marginal vectors; marginal vectors in the same group share the same source pixel and calibration range. Table 1 shows the source pixels and calibration ranges of the six groups of marginal vectors. Marginal vectors are global vectors as they are designed to calibrate the overall color of the display system, and thus the range of every marginal vector covers the whole RGB space, and the source pixels are located on vertices of the RGB space. In other implementations, eight groups of marginal vectors are provided to achieve a more precise calibration.
For each marginal vector, the source pixels are preset and fixed to define the start points for each vector, as shown in Table 1. The vector volume for each marginal vector can be tailor-made according to the needs of different calibrations. The vector volume for each marginal vector can be the same; for example, the vector volumes of the six vectors are all (10, 10, 10). The vector volume for each marginal vector can also be different; for example, the vector volume of V0 is (−10, 5, 0), the vector volume of V1 is (6, −4, −5), the vector volume of V2 is (−10, −5, 0), the vector volume of V3 is (0, 10, −5), the vector volume of V4 is (10, 5, 10), and the vector volume of V5 is (−5, −5, −15). Vector volume defines a degree for each vector and can be preset and adjusted according to the actual needs of each calibration.
VRange is a preset distance between the pixel to be calibrated and the source pixel. For each marginal vector, VRange is fixed at 255 to cover the whole RGB space. SR is a calculation factor of a red channel when calculating the distance between the pixel to be calibrated and the source pixel. SG is a calculation factor of a green channel when calculating the distance between the pixel to be calibrated and the source pixel. SB is a calculation factor of a blue channel when calculating the distance between the pixel to be calibrated and the source pixel. SR, SG, and SB are preset based on a calibration target. As an example, in the present implementation, SR=SG=SB=1, and the cube distance model is employed to calculate the distance between a pixel (Ri, Gi, Bi) and the source pixel based on the following formula.
For each marginal vector, after the distance is calculated, the calibration amount can be obtained based on the distance and the vector volume respectively. As described above, two steps are needed to calculate the calibration amount. First, calculate the calibration weight based on the distance between a pixel to be calibrated and the source pixel. Second, calculate the calibration amount based on the calibration weight and the vector volume. Detailed calculation methods are described above and will not be repeated here. The calculation methods for the calibration weight and the calibration amount are provided for an exemplary purpose only and without limitations.
As more than one vector is employed in the present implementation, they should be combined to complete the calibration. The six marginal vectors may be placed in a sequential order or a parallel order when combined.
For a group of calibration vectors V1 (R1, G1, B1), V2 (R2, G2, B2), . . . , VN (RN, GN, BN), a total calibration amount ΔVtotal and a final calibrated pixel VF can be calculated through the formula below:
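The combination formula itself appears only in the accompanying figure. As a sketch, a parallel combination computes each vector's calibration amount against the original pixel and sums the amounts into ΔVtotal, while a sequential combination feeds each intermediate result into the next vector; the linear weight, the dictionary structure, and the two example vectors below are assumptions for illustration.

```python
import math

def _distance(pixel, source):
    # Euclidean distance, one of the preset distance models.
    return math.sqrt(sum((p - s) ** 2 for p, s in zip(pixel, source)))

def _amount(pixel, vec):
    # Calibration amount of one vector, assuming a linear falloff weight.
    weight = max(0.0, 1.0 - _distance(pixel, vec["source"]) / vec["v_range"])
    return [v * weight for v in vec["volume"]]

def combine_parallel(pixel, vectors):
    # dV_total sums every vector's amount computed against the ORIGINAL
    # pixel; the final calibrated pixel VF = pixel + dV_total.
    total = [0.0, 0.0, 0.0]
    for vec in vectors:
        for i, a in enumerate(_amount(pixel, vec)):
            total[i] += a
    return tuple(int(round(c + t)) for c, t in zip(pixel, total))

def combine_sequential(pixel, vectors):
    # Each vector instead acts on the previous vector's output.
    current = tuple(float(c) for c in pixel)
    for vec in vectors:
        current = tuple(c + a for c, a in zip(current, _amount(current, vec)))
    return tuple(int(round(c)) for c in current)

# Hypothetical two-vector example (values are illustrative only):
vectors = [
    {"source": (0, 0, 0), "volume": (10, 10, 10), "v_range": 255},
    {"source": (255, 255, 255), "volume": (-5, -5, -15), "v_range": 255},
]
print(combine_parallel((50, 80, 110), vectors))  # (54, 84, 114)
```

In this particular example the second vector contributes nothing because the pixel lies beyond its range; in general the two orders can produce different results, since sequential combination recomputes each distance from an already-calibrated pixel.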
White balance vectors are used to correct the color profile of white in RGB space. Usually, the grayscale values of a source pixel of a white balance vector in the red channel, the green channel, and the blue channel are the same. Referring to Table 2, an example of a set of white balance vectors is illustrated. The set of white balance vectors includes nine groups of white balance vectors, and the grayscale values of a source pixel of every white balance vector in the red channel, the green channel, and the blue channel are the same. Table 2 shows the source pixels and calibration ranges of the nine groups of white balance vectors. White balance vectors are global vectors as they are designed to correct the color profile of white of the display system, and thus a sum of the range of every white balance vector covers the whole RGB space.
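Because the grayscale values of a white balance source pixel are equal across the three channels, such a set of source pixels can be sketched as gray levels on the neutral axis. The nine evenly spaced levels below are hypothetical; the actual source pixels are listed in Table 2.

```python
# Hypothetical set of nine gray source pixels, evenly spaced on the
# neutral axis; the actual values are listed in Table 2.
gray_levels = [round(i * 255 / 8) for i in range(9)]
white_balance_sources = [(g, g, g) for g in gray_levels]

# Every white balance source pixel has equal red, green, and blue
# grayscale values, matching the definition above.
assert all(r == g == b for r, g, b in white_balance_sources)
print(gray_levels)
```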
For each white balance vector, the source pixels are preset and fixed to define the start points for each vector, as shown in Table 2. The vector volume for each white balance vector can be tailor-made according to the needs of different calibrations. The vector volume for each white balance vector can be the same; for example, the vector volumes of the nine vectors are all (30, 30, 30). The vector volume for each white balance vector can also be different; for example, the vector volume of V0 is (−10, −5, −15), the vector volume of V1 is (−10, 5, −15), the vector volume of V2 is (10, −5, 0), the vector volume of V3 and V4 is (−4, −10, 7), and the vector volume of V5, V6, V7, and V8 is (−4, −5, 6). Vector volume defines a degree for each vector and can be preset and adjusted according to the actual needs of each calibration.
VRange is a preset distance between the pixel to be calibrated and the source pixel. For each white balance vector, VRange is designed and fixed to cover the whole RGB space. SR is a calculation factor of a red channel when calculating the distance between the pixel to be calibrated and the source pixel. SG is a calculation factor of a green channel when calculating the distance between the pixel to be calibrated and the source pixel. SB is a calculation factor of a blue channel when calculating the distance between the pixel to be calibrated and the source pixel. SR, SG, and SB are preset based on a calibration target. As an example, in the present implementation, SR=SG=SB=1, and the cube distance model is employed to calculate the distance between a pixel (Ri, Gi, Bi) and the source pixel based on the following formula.
For each vector, after the distance is calculated, the calibration amount can be obtained based on the distance and the vector volume respectively. As described above, two steps are needed to calculate the calibration amount. First, calculate the calibration weight based on the distance between a pixel to be calibrated and the source pixel. Second, calculate the calibration amount based on calibration weight and vector volume. Detailed calculation methods are described above and will not be repeated here. The calculation methods to calculate the calibration weight and the calibration amount are provided for an exemplary purpose only and without limitations.
As more than one vector is employed in the present implementation, they should be combined to complete the calibration. The nine white balance vectors may be placed in a sequential order or a parallel order when combined. The detailed calculation methods are described above and will not be repeated here.
Local vectors can be customized according to requirements, generally as a complement to the marginal vectors or white balance vectors. Local vectors can also be tailor-made for a specific color according to actual needs. In the implementations discussed above, a plurality of calibration vectors are combined to complete a specific calibration.
Referring to
Starting at operation 1102, a calibration vector with a source pixel, a vector volume, and a calibration range is defined by processor 114. The source pixel is a three-dimensional parameter configured to determine a central point for calibration. The source pixel defines a starting point for calibration. The vector volume is a three-dimensional parameter configured to determine a maximal volume for calibration. The vector volume defines a degree for calibration. The calibration range is a four-dimensional parameter configured to determine a calibration scope, and pixels within the calibration scope are calibrated by the calibration vector. If the distance between the pixel to be calibrated and the source pixel is smaller than the calibration range, the pixel will be calibrated; otherwise, it will not. The calibration vector may be a marginal vector for calibrating pixels of the pixel array, a white balance vector for compensating the pixels of the pixel array, or a local vector for calibrating at least one of the pixels of the pixel array, etc. One or more calibration vectors can be defined and used for calibration to meet the needs of the display system. Details for defining the calibration vectors have been described above and will not be repeated here.
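The parameters defined at operation 1102 can be sketched as a small data structure; the class and field names below are illustrative assumptions, not part of the original.

```python
from dataclasses import dataclass

@dataclass
class CalibrationVector:
    """Hypothetical container for the parameters defined at operation
    1102; class and field names are illustrative, not from the original."""
    source: tuple                  # (Rsrc, Gsrc, Bsrc): central point for calibration
    volume: tuple                  # (VR, VG, VB): maximal volume for calibration
    v_range: float                 # VRange: the calibration scope
    s_factors: tuple = (1, 1, 1)   # (SR, SG, SB): distance calculation factors

    def in_scope(self, distance):
        # A pixel is calibrated only if its distance to the source
        # pixel is smaller than the calibration range.
        return distance < self.v_range

v = CalibrationVector(source=(100, 100, 150), volume=(10, 10, 10), v_range=100.0)
print(v.in_scope(67))   # True
print(v.in_scope(120))  # False
```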
Method 1100 then proceeds to operation 1104, in which a distance between a pixel to be calibrated and the source pixel is calculated. To perform calibration, it is necessary to calculate the distance between the pixel to be calibrated and the source pixel because the distance is negatively correlated with the calibration amount applied to the pixel to be calibrated. The distance is calculated based on the grayscale values of the pixel to be calibrated, the source pixel, and the calibration range through a preset calculation model, for example, an ellipsoidal distance model, a cube distance model, a sphere distance model, etc.
Method 1100 then proceeds to operation 1106, in which a calibration amount based on the distance and the vector volume is calculated. Two steps are needed to calculate the calibration amount. First, calculate a calibration weight based on the distance between the pixel to be calibrated and the source pixel; the calibration weight can be calculated through a preset formula. Second, calculate the calibration amount based on the calibration weight and the vector volume. The detailed calculation methods for the calibration weight and the calibration amount are described above and will not be repeated here.
Method 1100 then proceeds to operation 1108, in which the pixel to be calibrated is calibrated based on the calibration amount and the calibration vector. As discussed above, a plurality of calibration vectors may be combined to complete a specific calibration, and the plurality of calibration vectors may be placed in a sequential order or a parallel order when combined. The above operations may be performed by processor 114 or control logic 104. By employing one or more calibration vectors, precise and complex calibration can be performed on specific colors within a specific scope with small data storage. Calibration vectors can achieve the same effects as a 3D LUT with negligible data storage.
The above detailed description of the disclosure and the examples described therein have been presented for the purposes of illustration and description only and not by limitation. It is therefore contemplated that the present disclosure covers any and all modifications, variations or equivalents that fall within the spirit and scope of the basic underlying principles disclosed above and claimed herein.
This application is a continuation of International Application No. PCT/CN2023/091928, filed on May 1, 2023, which is hereby incorporated by reference in its entirety.
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/CN2023/091928 | May 2023 | WO |
| Child | 18206138 | | US |