This U.S. non-provisional patent application claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2022-0103147 filed on Aug. 18, 2022, in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference in its entirety herein.
Embodiments of the present disclosure described herein relate to a display device, and more particularly, relate to a display device having a function to reduce power consumption.
A light emitting display device displays an image by using a light emitting diode that generates light through the recombination of electrons and holes. The light emitting display device has a fast response speed and operates with low power consumption.
The light emitting display device includes pixels connected to data lines and scan lines. Each of the pixels may include a light emitting element and a pixel circuit for controlling the amount of current flowing to the light emitting element. Power consumption of the light emitting display device may be further reduced using various techniques. However, flicker may be observed on a display panel of the light emitting display device when one of these techniques is used, thereby reducing image quality.
Embodiments of the present disclosure provide a display device that prevents a flicker phenomenon from occurring in a partial area during an operation of reducing power consumption.
According to an embodiment, a display device includes a display panel that displays an image, a panel driver that drives the display panel, and a driving controller that controls driving of the panel driver.
The driving controller compensates first image data, which corresponds to a first area of the display panel in which a still image is displayed during at least a predetermined time period, using a first compensation method to generate first compensation image data for the first area, and compensates second image data, which corresponds to a second area of the display panel different from the first area, using a second compensation method that uses a load calculated based on previous image data to generate second compensation image data for the second area.
According to an embodiment, a display device includes a display panel that displays an image, a panel driver that drives the display panel, and a driving controller that controls driving of the panel driver.
The driving controller receives image data during at least ‘k’ frames, where ‘k’ is an integer greater than or equal to 2, and, based on a predetermined reference grayscale, extracts from the image data first image data, which is maintained at a grayscale less than or equal to the reference grayscale during the at least ‘k’ frames, and second image data, which has a grayscale higher than the reference grayscale or which is not maintained at a grayscale less than or equal to the reference grayscale during the ‘k’ or more frames. The driving controller compensates the first image data using a first compensation method and compensates the second image data using a second compensation method different from the first compensation method.
According to an embodiment, a display device includes a display panel configured to display an image; a panel driver configured to drive the display panel; and a driving controller configured to control driving of the panel driver. The driving controller is configured to compensate first image data corresponding to a first area of the display panel corresponding to a still image during at least a predetermined time period, using a fixed scale factor independent of a load to generate first compensation image data for the first area. The driving controller is further configured to compensate second image data corresponding to a second area of the display panel corresponding to a moving image using the load to generate second compensation image data for the second area. The load may be based on previous image data.
The above and other objects and features of the present disclosure will become apparent by describing in detail embodiments thereof with reference to the accompanying drawings.
In the specification, the expression that a first component (or region, layer, part, portion, etc.) is “on”, “connected with”, or “coupled with” a second component means that the first component is directly on, connected with, or coupled with the second component or means that a third component is interposed therebetween.
The same reference numerals refer to the same components. Also, in the drawings, the thickness, ratio, and dimension of components may be exaggerated for effective description of the technical contents. The expression “and/or” includes any and all combinations of one or more of the associated components.
Although the terms “first”, “second”, etc. may be used to describe various components, the components should not be construed as being limited by the terms. The terms are only used to distinguish one component from another component. For example, without departing from the scope and spirit of the present disclosure, a first component may be referred to as a second component, and similarly, the second component may be referred to as the first component. The articles “a,” “an,” and “the” are singular in that they have a single referent, but the use of the singular form in the specification should not preclude the presence of more than one referent.
Also, the terms “under”, “below”, “on”, “above”, etc. are used to describe the correlation of components illustrated in drawings. The terms that are relative in concept are described based on a direction shown in drawings.
Hereinafter, embodiments of the present disclosure will be described with reference to accompanying drawings.
Referring to
In an embodiment, a front surface (or an upper/top surface) and a rear surface (or a lower/bottom surface) of each member are defined based on a direction in which the image IM is displayed. The front surface may be opposite to the rear surface in the third direction DR3, and a normal direction of each of the front surface and the rear surface may be parallel to the third direction DR3.
A separation distance between the front surface and the rear surface in the third direction DR3 may correspond to a thickness of the display device DD in the third direction DR3. Meanwhile, directions that the first, second, and third directions DR1, DR2, and DR3 indicate may be relative in concept and may be changed to different directions.
The display device DD may sense an external input applied from the outside. The external input may include various types of inputs that are provided from the outside of the display device DD. The display device DD according to an embodiment of the present disclosure may sense an external input of a user, which is applied from the outside. The external input of the user may be one of various types of external inputs, such as a part of the user's body, light, heat, the user's gaze, or pressure, or a combination thereof. Also, the display device DD may sense the external input of the user applied to a side surface or a rear surface of the display device DD depending on the structure of the display device DD, and is not limited to any one embodiment. As an example of the present disclosure, an external input may include an input entered through an input device (e.g., a stylus pen, an active pen, a touch pen, an electronic pen, or an E-pen).
The display surface IS of the display device DD may be divided into a display area DA and a non-display area NDA. The display area DA may be an area in which the image IM is displayed. A user perceives (or views) the image IM through the display area DA. In an embodiment, the display area DA is illustrated in the shape of a quadrangle whose vertexes are rounded. However, this is illustrated merely as an example. The display area DA may have various shapes and is not limited to a particular embodiment.
The non-display area NDA is adjacent to the display area DA. The non-display area NDA may have a given color. The non-display area NDA may surround the display area DA. Accordingly, a shape of the display area DA may be defined substantially by the non-display area NDA. However, this is illustrated merely as an example. The non-display area NDA may be disposed adjacent to only one side of the display area DA or may be omitted, but is not limited thereto.
As illustrated in
According to an embodiment of the present disclosure, the display panel DP may include a light emitting display panel. For example, the display panel DP may be an organic light emitting display panel, an inorganic light emitting display panel, or a quantum dot light emitting display panel. An emission layer of the organic light emitting display panel may include an organic light emitting material. An emission layer of the inorganic light emitting display panel may include an inorganic light emitting material. An emission layer of the quantum dot light emitting display panel may include a quantum dot, a quantum rod, or the like.
The display panel DP may output the image IM, and the image IM thus output may be displayed through the display surface IS.
The input sensing layer ISP may be disposed on the display panel DP to sense an external input. The input sensing layer ISP may be directly disposed on the display panel DP. According to an embodiment of the present disclosure, the input sensing layer ISP may be formed on the display panel DP by a subsequent process. That is, when the input sensing layer ISP is directly disposed on the display panel DP, no inner adhesive film (not illustrated) is interposed between the input sensing layer ISP and the display panel DP. Alternatively, an inner adhesive film may be interposed between the input sensing layer ISP and the display panel DP. In this case, the input sensing layer ISP is not manufactured together with the display panel DP through subsequent processes; rather, the input sensing layer ISP may be manufactured through a process separate from that of the display panel DP and may then be fixed on an upper surface of the display panel DP by the inner adhesive film.
The window WM may be formed of a transparent material capable of outputting the image IM. For example, the window WM may be formed of glass, sapphire, plastic, etc. It is illustrated that the window WM is implemented with a single layer. However, embodiments of the present disclosure are not limited thereto. For example, the window WM may include a plurality of layers.
Meanwhile, although not illustrated, the non-display area NDA of the display device DD described above may correspond to an area that is defined by printing a material including a given color on one area of the window WM. As an example of the present disclosure, the window WM may include a light blocking pattern for defining the non-display area NDA. The light blocking pattern that is a colored organic film may be formed using a coating process.
The window WM may be coupled to the display module DM through an adhesive film. As an example of the present disclosure, the adhesive film may include an optically clear adhesive (OCA) film. However, the adhesive film is not limited thereto. For example, the adhesive film may include a typical adhesive or sticking agent. For example, the adhesive film may include an optically clear resin (OCR) or a pressure sensitive adhesive (PSA) film.
An anti-reflection layer may be further disposed between the window WM and the display module DM. The anti-reflection layer decreases the reflectivity of external light incident from above the window WM. The anti-reflection layer according to an embodiment of the present disclosure may include a retarder and a polarizer. The retarder may be of a film type or a liquid crystal coating type and may include a λ/2 retarder and/or a λ/4 retarder. The polarizer may be of a film type or a liquid crystal coating type. The film type may include a stretch-type synthetic resin film, and the liquid crystal coating type may include liquid crystals arranged in a given direction. The retarder and the polarizer may be implemented with one polarization film.
As an example of the present disclosure, the anti-reflection layer may also include color filters. The arrangement of the color filters may be determined in consideration of colors of light generated from a plurality of pixels PX (see
The display module DM may display the image IM depending on an electrical signal and may transmit/receive information about an external input. The display module DM may be defined by an active area AA and an inactive area NAA. The active area AA may be defined as an area (i.e., an area where the image IM is displayed) through which the image IM is output from the display panel DP. Also, the active area AA may be defined as an area in which the input sensing layer ISP senses an external input applied from the outside. According to an embodiment, the active area AA of the display module DM may correspond to (or overlap with) at least part of the display area DA.
The inactive area NAA is adjacent to the active area AA. The inactive area NAA may be an area in which the image IM is not substantially displayed. For example, the inactive area NAA may surround the active area AA. However, this is illustrated merely as an example. The inactive area NAA may have various shapes but is not limited to a specific embodiment. According to an embodiment, the inactive area NAA of the display module DM corresponds to (or overlaps with) at least part of the non-display area NDA.
The display device DD may further include a plurality of flexible films FF connected to the display panel DP. As an example of the present disclosure, a data driver 230 (see
The display device DD may further include at least one circuit board PCB coupled to the plurality of flexible films FF. As an example of the present disclosure, four circuit boards PCB are provided in the display device DD, but the number of circuit boards PCB is not limited thereto. Two adjacent circuit boards among the circuit boards PCB may be electrically connected to each other by a connection film CF. Also, at least one of the circuit boards PCB may be electrically connected to a main board. A driving controller 100 (see
The input sensing layer ISP may be electrically connected to the circuit boards PCB through the flexible films FF. However, embodiments of the present disclosure are not limited thereto. That is, the display module DM may additionally include a separate flexible film for electrically connecting the input sensing layer ISP and the circuit boards PCB.
The display device DD further includes a housing EDC for accommodating the display module DM. The housing EDC may be coupled with the window WM to define the exterior appearance of the display device DD. The housing EDC may absorb external shocks and may prevent a foreign material/moisture or the like from infiltrating into the display module DM such that components accommodated in the housing EDC are protected. Meanwhile, as an example of the present disclosure, the housing EDC may be provided in the form of a combination of a plurality of accommodating members.
The display device DD according to an embodiment may further include an electronic module including various functional modules for operating the display module DM, a power supply module (e.g., a battery) for supplying power for overall operations of the display device DD, a bracket coupled with the display module DM and/or the housing EDC to partition an inner space of the display device DD, etc.
Referring to
The driving controller 100 receives an input image signal RGB and a control signal CTRL from a main controller (e.g., a microcontroller or a graphics controller). The driving controller 100 may generate image data by converting a data format of the input image signal RGB to a format in compliance with the specification for an interface with the data driver 230. The driving controller 100 may receive the input image signal RGB in units of frames. The image data may be referred to differently depending on the corresponding frame. That is, image data converted from the input image signal RGB received during a previous frame is referred to as “previous image data”, and image data converted from the input image signal RGB received during a current frame may be referred to as “current image data”.
The driving controller 100 may classify the input image signal RGB into first image data corresponding to a first area, in which a still image is displayed during a predetermined time or more, and second image data corresponding to a second area different from the first area. For example, a moving image may be displayed in the second area. The driving controller 100 generates first compensation image data C_DS1 by compensating the first image data in a first compensation method and generates second compensation image data C_DS2 by compensating the second image data in a second compensation method different from the first compensation method. As an example of the present disclosure, the second compensation method may be a compensation method using a load calculated based on the previous image data.
The driving controller 100 generates a scan control signal SCS and a data control signal DCS based on the control signal CTRL.
The data driver 230 receives the data control signal DCS from the driving controller 100. The data driver 230 receives the first and second compensation image data C_DS1 and C_DS2 from the driving controller 100. In an embodiment, the data driver 230 converts the first and second compensation image data C_DS1 and C_DS2 into data voltages (or data signals) based on a gamma reference voltage and outputs the data voltages to a plurality of data lines DL1 to DLm, which will be described later. The data voltages are analog voltages corresponding to grayscale values of the first and second compensation image data C_DS1 and C_DS2. The data voltages converted from the first compensation image data C_DS1 may be referred to as “first compensation data voltages”. The data voltages converted from the second compensation image data C_DS2 may be referred to as “second compensation data voltages”.
As an example of the present disclosure, the data driver 230 may be positioned in the driver chips DIC shown in
The scan driver 250 receives the scan control signal SCS from the driving controller 100. In response to the scan control signal SCS, the scan driver 250 may output first scan signals to first scan lines SCL1 to SCLn to be described later and may output second scan signals to second scan lines SSL1 to SSLn to be described later.
The display panel DP includes the first scan lines SCL1 to SCLn, the second scan lines SSL1 to SSLn, the data lines DL1 to DLm, and pixels PX. The display panel DP may be divided into the active area AA and the inactive area NAA. The pixels PX may be positioned in the active area AA. The scan driver 250 may be positioned in the inactive area NAA.
In an embodiment, the first scan lines SCL1 to SCLn and the second scan lines SSL1 to SSLn extend in parallel with the first direction DR1 and are arranged spaced from each other in the second direction DR2. In an embodiment, the data lines DL1 to DLm extend from the data driver 230 in parallel with the second direction DR2 and are arranged spaced from each other in the first direction DR1.
The plurality of pixels PX are electrically connected to the first scan lines SCL1 to SCLn, the second scan lines SSL1 to SSLn, and the data lines DL1 to DLm. For example, the first row of pixels may be connected to the scan lines SCL1 and SSL1, the second row of pixels may be connected to the scan lines SCL2 and SSL2, the third row of pixels may be connected to the scan lines SCL3 and SSL3, etc.
Each of the plurality of pixels PX includes a light emitting element ED (see
In an embodiment, the scan driver 250 is arranged on a first side of the display panel DP. The first scan lines SCL1 to SCLn and the second scan lines SSL1 to SSLn extend from the scan driver 250 in parallel with the first direction DR1. The scan driver 250 is positioned adjacent to a first side of the active area AA, but the present disclosure is not limited thereto. In an embodiment, the scan driver 250 may be positioned adjacent to the first side and the second side of the active area AA. For example, the scan driving circuit positioned adjacent to the first side of the active area AA may provide the first scan signals to the first scan lines SCL1 to SCLn, and the scan driving circuit positioned adjacent to the second side of the active area AA may provide the second scan signals to the second scan lines SSL1 to SSLn.
Each of the plurality of pixels PX receives a first driving voltage (or driving voltage) ELVDD, a second driving voltage ELVSS, and an initialization voltage VINT. The first driving voltage ELVDD may be higher than the second driving voltage ELVSS.
The voltage generator 300 generates voltages used to operate the display panel DP. In an embodiment of the present disclosure, the voltage generator 300 generates the first driving voltage ELVDD, the second driving voltage ELVSS, and the initialization voltage VINT, which are used for an operation of the display panel DP. The first driving voltage ELVDD, the second driving voltage ELVSS, and the initialization voltage VINT may be provided to the display panel DP through a first voltage line VL1 (or a driving voltage line), a second voltage line VL2, and a third voltage line VL3.
In addition to the first driving voltage ELVDD, the second driving voltage ELVSS, and the initialization voltage VINT, the voltage generator 300 may further generate various voltages (e.g., a gamma reference voltage, a data driving voltage, a gate-on voltage, and a gate-off voltage) used for operations of the data driver 230 and the scan driver 250.
As an example of the present disclosure, the driving controller 100 shown in
Each of the plurality of pixels PX shown in
The pixel circuit PXC may include at least one transistor, which is electrically connected to the light emitting element ED and which is used to provide a current corresponding to a data signal Di delivered from the i-th data line DLi to the light emitting element ED. As an example of the present disclosure, the pixel circuit PXC of the pixel PXij includes a first transistor T1, a second transistor T2, a third transistor T3, and a capacitor Cst. Each of the first to third transistors T1 to T3 may be an N-type transistor using an oxide semiconductor as a semiconductor layer. However, the present disclosure is not limited thereto. For example, each of the first to third transistors T1 to T3 may be a P-type transistor having a low-temperature polycrystalline silicon (LTPS) semiconductor layer. Alternatively, at least one of the first to third transistors T1 to T3 may be an N-type transistor and the others thereof may be P-type transistors.
Referring to
The first driving voltage ELVDD and the initialization voltage VINT may be delivered to the pixel circuit PXC through the first voltage line VL1 and the third voltage line VL3, respectively. The second driving voltage ELVSS may be delivered to a cathode (or a second terminal) of the light emitting element ED through the second voltage line VL2.
The first transistor T1 includes a first electrode connected to the first voltage line VL1, a second electrode electrically connected to an anode (or a first terminal) of the light emitting element ED, and a gate electrode connected to one end of the capacitor Cst. The first transistor T1 may supply an emission current to the light emitting element ED in response to the data signal Di delivered through the data line DLi depending on a switching operation of the second transistor T2.
The second transistor T2 includes a first electrode connected to the data line DLi, a second electrode connected to the gate electrode of the first transistor T1, and a gate electrode connected to the j-th first scan line SCLj. The second transistor T2 may be turned on in response to the first scan signal SCj received through the j-th first scan line SCLj so as to deliver the data signal Di delivered through the i-th data line DLi to the gate electrode of the first transistor T1.
The third transistor T3 includes a first electrode connected to the third voltage line VL3, a second electrode connected to the anode of the light emitting element ED, and a gate electrode connected to the j-th second scan line SSLj. The third transistor T3 may be turned on in response to the second scan signal SSj received through the j-th second scan line SSLj so as to deliver the initialization voltage VINT to the anode of the light emitting element ED.
As described above, one end of the capacitor Cst is connected to the gate electrode of the first transistor T1, and the other end of the capacitor Cst is connected to the second electrode of the first transistor T1. The structure of the pixel PXij according to an embodiment is not limited to the structure illustrated in
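Though not recited in the disclosure, the amount of emission current that the first transistor T1 supplies in saturation can be illustrated with the standard square-law transistor model (a textbook approximation; the symbols below are generic, not reference characters of this application):

$$I_{ED} = \tfrac{1}{2}\,\mu C_{ox}\,\frac{W}{L}\,\bigl(V_{GS} - V_{TH}\bigr)^{2}$$

where $V_{GS}$ is the gate-source voltage of the first transistor T1, held by the capacitor Cst after the data signal Di is written, $V_{TH}$ is its threshold voltage, $\mu$ is the carrier mobility, $C_{ox}$ is the gate capacitance per unit area, and $W/L$ is the channel width-to-length ratio. Under this model, the luminance of the light emitting element ED tracks the data voltage stored on Cst.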
Referring to
As an example of the present disclosure, because the video is displayed in the second area AR2, the brightness of the screen may change quickly in the second area AR2.
As shown in
Because a video is displayed in each of the plurality of second areas AR2_1 and AR2_2, the brightness of the screen may change quickly in the second areas AR2_1 and AR2_2. However, because the first area AR1 displays a still image, the luminance of the first area AR1 remains almost constant.
Referring to
The area determination unit 110 may receive image data I_DS during at least ‘k’ frames (or frame periods) and may determine the first area AR1, in which a still image is displayed, and the second area AR2, in which a video is displayed on the display screen SC1, based on the image data I_DS. The image data I_DS may be a signal converted from the input image signal RGB (see
The area determination unit 110 determines that an area having no change in the image data I_DS during ‘k’ frames is the first area AR1, and determines that an area having a change in the image data I_DS during the ‘k’ frames is the second area AR2. As such, when the first area AR1 and the second area AR2 are determined on the display screen SC1, the area determination unit 110 may generate coordinate information C_XY about at least one of the first and second areas AR1 and AR2. For example, the area determination unit 110 may determine first boundary coordinates of the first area AR1 and second boundary coordinates of the second area AR2 within the active area AA.
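The frame-over-frame comparison described above can be sketched as follows. This is an illustrative model only; the function name, the list-of-frames input, and the per-pixel boolean map are assumptions for the sketch, not the implementation of the area determination unit 110:

```python
def classify_areas(frames):
    """Mark each pixel as still (first area AR1) or changing (second area AR2).

    frames: a list of at least k >= 2 two-dimensional grayscale frames,
    each a list of rows. Returns a per-pixel map that is True where the
    value is unchanged across all given frames (candidate first area AR1)
    and False where any change occurred (candidate second area AR2).
    """
    first = frames[0]
    rows, cols = len(first), len(first[0])
    still = [[True] * cols for _ in range(rows)]
    for frame in frames[1:]:
        for y in range(rows):
            for x in range(cols):
                if frame[y][x] != first[y][x]:
                    still[y][x] = False  # any change over the frames -> AR2
    return still
```

Boundary coordinates such as the coordinate information C_XY could then be derived from this map, for example as the bounding box of the changed (False) region.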
The area determination unit 110 provides the coordinate information C_XY to the data extraction unit 120. As an example of the present disclosure, the coordinate information C_XY may be coordinate information about the first area AR1 and/or the second area AR2. For example, the coordinate information C_XY may include the first boundary coordinates of the first area AR1 and/or the second boundary coordinates of the second area AR2. The data extraction unit 120 extracts first image data I_DS1 corresponding to the first area AR1 and second image data I_DS2 corresponding to the second area AR2 from the image data I_DS based on the coordinate information C_XY. The data extraction unit 120 provides the first image data I_DS1 corresponding to the first area AR1 to the sub-current compensation unit 140 and provides the second image data I_DS2 corresponding to the second area AR2 to the main current compensation unit 130.
The driving controller 100 compensates for the first image data I_DS1 through the sub-current compensation unit 140 in a first compensation method and compensates for the second image data I_DS2 through the main current compensation unit 130 in a second compensation method different from the first compensation method. As an example of the present disclosure, the second compensation method may be a compensation method using a load LD calculated based on previous image data I_DS_P (see
Referring to
The load calculation block 131 may directly receive the input image signal RGB (see
The main storage block 133 may include a lookup table in which different scale factors SF are stored depending on the size of the load LD. The current control block 132 may select a scale factor SF corresponding to a size of the load LD from among the scale factors SF stored in the main storage block 133. The current control block 132 provides the selected scale factor SF to the main compensation block 134.
The main compensation block 134 may receive the scale factor SF from the current control block 132. Moreover, the main compensation block 134 may receive current image data, that is, the second image data I_DS2, and may generate the second compensation image data C_DS2 by compensating the second image data I_DS2 based on the scale factor SF. For example, the main compensation block 134 may determine a compensation scale based on the scale factor SF and may generate the second compensation image data C_DS2 by lowering the grayscale (or luminance) of the second image data I_DS2 by the compensation scale. Accordingly, an image displayed by using the second compensation image data C_DS2 may have lower luminance than an image displayed by using the second image data I_DS2. Thus, when an image is displayed by using the second compensation image data C_DS2, the driving current of the display panel DP may decrease, and the total power consumption of the display device DD may be reduced through the operation (or main current compensation operation) of the main current compensation unit 130.
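The load-dependent selection and scaling can be sketched as follows, assuming a lookup table keyed by load buckets of 10 % and a multiplicative scale factor; both the bucket granularity and the multiplicative form are assumptions for illustration, not details of the main current compensation unit 130:

```python
def compensate_moving_area(image_data, load_percent, lut):
    """Sketch of the second compensation method for the second area AR2.

    lut maps a load bucket (0, 10, ..., 100 percent; an assumed
    granularity) to a scale factor in (0, 1]; a higher load selects a
    smaller factor. Each grayscale of the second image data is lowered
    by that factor to produce the second compensation image data.
    """
    bucket = min(100, int(load_percent) // 10 * 10)  # snap load to a bucket
    sf = lut[bucket]
    return [[int(g * sf) for g in row] for row in image_data]
```

A fuller lookup table would carry one entry per bucket; the sketch only requires the buckets it is asked for.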
Referring to
When the load LD is 0%, the scale factor SF may have the highest value. Accordingly, the second compensation image data C_DS2 compensated based on the scale factor SF may have the highest luminance value B_max. On the other hand, when the load is 100%, the scale factor SF may have the lowest value. Accordingly, the second compensation image data C_DS2 compensated based on the scale factor SF may have the lowest luminance value B_min. As an example of the present disclosure, the highest luminance value B_max may be 1000 nits, and the lowest luminance value B_min may be 250 nits.
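With the example endpoints above (1000 nits at a 0 % load, 250 nits at a 100 % load), the load-to-luminance relationship can be illustrated numerically; the linear interpolation between the endpoints is an assumption for the sketch, as the text fixes only the two extremes:

```python
B_MAX, B_MIN = 1000.0, 250.0  # nits, example values from the text

def peak_luminance(load_percent):
    """Peak luminance of the compensated image for a given load (0..100 %).

    Linear between B_MAX (load 0 %) and B_MIN (load 100 %); the linearity
    itself is an illustrative assumption.
    """
    return B_MAX - (B_MAX - B_MIN) * (load_percent / 100.0)
```

Under this assumption, a half-loaded screen would be driven at 625 nits, midway between the two example extremes.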
A load (hereinafter, a first load LDa) of an image displayed on the display screen SC1 during a first frame as shown in
When the first scale factor Sa is changed to the second scale factor Sb, if the change in the scale factor SF is also applied to the first area AR1, the luminance of a black-grayscale image of the first area AR1 may be changed (i.e., lowered). This change in luminance of the black-grayscale image may be perceived as blinking.
Referring to
Alternatively, referring to
Referring to
Unlike the driving controller 100 illustrated in
The data extraction unit 120_a may receive the input image signal RGB, the control signal CTRL, and the coordinate information EC_XY from the main controller. As an example of the present disclosure, the control signal CTRL may include a vertical synchronization signal Vsync and a data enable signal DE.
A period in which the driving controller 100_a receives the input image signal RGB may be defined as an input frame IF1 (or an input frame period). The input frame IF1 includes a data reception section IP1 and a blank section IVP1. During the data reception section IP1, the driving controller 100_a may receive the input image signal RGB. The blank section IVP1 may be an idle section in which the input image signal RGB is not received. As an example of the present disclosure, the driving controller 100_a may receive the coordinate information EC_XY during the blank section IVP1.
During the blank section IVP1, the driving controller 100_a may further receive various display control signals for maximizing a contrast ratio of the second area AR2. As an example of the present disclosure, the coordinate information EC_XY may be included in a display control signal and may be transmitted to the driving controller 100_a.
On the basis of the coordinate information EC_XY, the data extraction unit 120_a extracts a first image signal RGB1 corresponding to the first area AR1 and a second image signal RGB2 corresponding to the second area AR2 from the input image signal RGB. The data extraction unit 120_a provides the first and second image signals RGB1 and RGB2 to the data conversion unit 125.
The data conversion unit 125 converts the first and second image signals RGB1 and RGB2 into the first and second image data I_DS1 and I_DS2, respectively. The data conversion unit 125 provides the first image data I_DS1 to the sub-current compensation unit 140 and provides the second image data I_DS2 to the main current compensation unit 130.
The driving controller 100_a compensates for the first image data I_DS1 through the sub-current compensation unit 140 in a first compensation method and compensates for the second image data I_DS2 through the main current compensation unit 130 in a second compensation method different from the first compensation method. As an example of the present disclosure, the second compensation method may be a compensation method using a load LD calculated based on the previous image data I_DS_P (see
Referring to
The data extraction unit 120_b receives the image data I_DS during at least ‘k’ frames and extracts first image data I_DSa and second image data I_DSb from the image data I_DS based on a predetermined reference grayscale. In detail, the data extraction unit 120_b extracts, as the first image data I_DSa, a set of data in the image data I_DS that is maintained at a grayscale less than or equal to the reference grayscale during ‘k’ frames or more. The data extraction unit 120_b extracts, as the second image data I_DSb, a set of data in the image data I_DS that has a grayscale higher than the reference grayscale or is not maintained at a grayscale less than or equal to the reference grayscale during ‘k’ frames or more. As an example of the present disclosure, when the image data I_DS is within a grayscale range of 0 to 255, the reference grayscale may be set to a grayscale of 32 or less. However, the reference grayscale is not limited thereto. Moreover, the reference grayscale may vary depending on the grayscale range expressed by the image data I_DS.
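The per-pixel split described above can be sketched as follows. This is an assumption-laden illustration, not the claimed circuit: the reference grayscale of 32 comes from the example in the text, while the frame count K and the list-based data layout are hypothetical.

```python
# Illustrative sketch of the data extraction unit's split (assumption).
REF_GRAY = 32  # example reference grayscale for an 8-bit (0-255) range
K = 4          # hypothetical value of 'k'; the disclosure does not fix it

def split_image_data(frames):
    """frames: list of at least K frames, each a list of per-pixel grayscales.
    Returns (first_mask, i_dsa, i_dsb): a mask of pixels held at or below
    REF_GRAY for the last K frames, plus the two extracted data sets."""
    n = len(frames[0])
    first_mask = [all(f[i] <= REF_GRAY for f in frames[-K:]) for i in range(n)]
    current = frames[-1]
    i_dsa = [g for g, m in zip(current, first_mask) if m]      # first image data I_DSa
    i_dsb = [g for g, m in zip(current, first_mask) if not m]  # second image data I_DSb
    return first_mask, i_dsa, i_dsb
```

A pixel that once exceeds the reference grayscale within the last K frames falls into the second set, matching the "not maintained" condition in the text.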
The data extraction unit 120_b provides the first and second image data I_DSa and I_DSb to the main current compensation unit 130_a.
The main current compensation unit 130_a may receive the first and second image data I_DSa and I_DSb and may output first and second compensation image data C_DSa and C_DSb having target luminance by compensating the image data I_DS based on the load LD (see
The sub-current compensation unit 140_a may receive the first compensation image data C_DSa and the scale factor SF from the main current compensation unit 130_a. The sub-current compensation unit 140_a may generate re-compensation data CC_DSa by compensating the first compensation image data C_DSa based on the change amount of the scale factor SF.
Referring to
In detail, the load change determination block 141 may receive the scale factor SF from the main current compensation unit 130_a. The load change determination block 141 stores a scale factor corresponding to the load of a previous frame and outputs a scale factor change amount ΔSab by comparing the scale factor corresponding to the load of the previous frame with a scale factor corresponding to the load of a current frame. For example, Sa may be the scale factor of the previous frame and Sb may be the scale factor of the current frame.
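The behavior of the load change determination block can be sketched as a small stateful object. This is an illustrative assumption: the block is modeled as computing ΔSab as the current scale factor minus the stored previous one, which matches the Sa-to-Sb example above but is not a literal description of the claimed hardware.

```python
# Illustrative model of the load change determination block 141 (assumption).
class LoadChangeDetector:
    def __init__(self, initial_sf: float = 1.0):
        self.prev_sf = initial_sf  # scale factor of the previous frame (Sa)

    def update(self, sf: float) -> float:
        """Return the change amount dSab = Sb - Sa for the current frame,
        then store the current scale factor for the next comparison."""
        d_sab = sf - self.prev_sf
        self.prev_sf = sf
        return d_sab
```

With this model, a load increase (scale factor dropping from Sa to a smaller Sb) yields a negative ΔSab, and a load decrease yields a positive ΔSab.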
As shown in
As shown in
The gamma compensation block 142 may receive the scale factor change amount ΔSab from the load change determination block 141. The gamma compensation block 142 may generate the re-compensation image data CC_DSa by correcting the gamma of the first compensation image data C_DSa based on the scale factor change amount ΔSab.
Referring to
Even though the first compensation image data C_DSa having a first grayscale Ga is input to the sub-current compensation unit 140_a, the re-compensation data CC_DSa output therefrom may have a different luminance in the first and second cases.
The main current compensation unit 130_a may convert the first image data I_DSa, which maintains a constant grayscale (particularly, a low grayscale) during the first and second frames, into the first compensation image data C_DSa whose luminance differs depending on the scale factor. However, the first compensation image data C_DSa is compensated by the sub-current compensation unit 140_a into the re-compensation data CC_DSa, whose gamma curve differs between the first and second cases. Since the image is displayed based on the re-compensation data CC_DSa, it is possible to prevent or reduce the perception of a flickering phenomenon in the low grayscale area due to the luminance compensation by the main current compensation unit 130_a.
Referring to
When the scale factor change amount ΔSab has a negative value, the sub-current compensation unit 140_a may compensate the first compensation image data C_DSa to have a high gamma curve (e.g., first and second high gamma curves C_GC11 and C_GC12) having a gamma higher than the reference gamma curve R_GC. In the meantime, when the scale factor change amount ΔSab has a positive value, the sub-current compensation unit 140_a may compensate the first compensation image data C_DSa to have a low gamma curve (e.g., first and second low gamma curves C_GC21 and C_GC22) having a gamma lower than the reference gamma curve R_GC.
Even though the first compensation image data C_DSa has a first grayscale I_G1, the first compensation image data C_DSa may be changed to the re-compensation data CC_DSa having a grayscale (i.e., the first or second compensation grayscale C_G11 or C_G12) that differs depending on the scale factor change amount ΔSab. Likewise, even though the first compensation image data C_DSa has a second grayscale I_G2, the first compensation image data C_DSa may be changed to the re-compensation data CC_DSa having a grayscale (i.e., the first or second compensation grayscale C_G21 or C_G22) that differs depending on the scale factor change amount ΔSab.
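The gamma-curve selection described above can be sketched in code. This is a hedged illustration, not the claimed gamma compensation block: the reference gamma of 2.2, the gamma step per unit of ΔSab, and the re-mapping formula are all hypothetical choices that merely reproduce the stated directions (negative ΔSab selects a curve above the reference, positive ΔSab a curve below it).

```python
# Illustrative sketch of gamma-curve re-mapping (all constants are assumptions).
G_MAX = 255        # top of the 0-255 grayscale range
REF_GAMMA = 2.2    # hypothetical reference gamma curve R_GC
GAMMA_STEP = 0.5   # hypothetical gamma shift per unit of dSab

def recompensate(grayscale: int, d_sab: float) -> int:
    """Map C_DSa through a gamma curve chosen from dSab; returns CC_DSa."""
    # Negative dSab (load rose, SF dropped) -> gamma above the reference;
    # positive dSab -> gamma below the reference; zero -> reference curve.
    gamma = REF_GAMMA - GAMMA_STEP * d_sab
    x = grayscale / G_MAX
    # A higher target gamma lifts low grayscales, consistent with the
    # positive lookup-table compensation values for negative dSab.
    return round(G_MAX * x ** (REF_GAMMA / gamma))
```

Under these assumed constants, a low grayscale such as 4 is raised when ΔSab is negative and lowered when ΔSab is positive, while ΔSab of zero leaves the grayscale on the reference curve.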
Referring to
The sub-compensation block 143 may receive the first compensation image data C_DSa and the scale factor SF from the main current compensation unit 130_a. The sub-compensation block 143 stores a scale factor corresponding to the load of a previous frame and generates the scale factor change amount ΔSab by comparing the scale factor corresponding to the load of the previous frame with a scale factor corresponding to the load of a current frame. The sub-compensation block 143 may output the re-compensation data CC_DSa by compensating the first compensation image data C_DSa with reference to the sub-storage block 144.
The sub-storage block 144 may include a lookup table C_LUT in which compensation values are stored according to the grayscale I_Ga of the first compensation image data C_DSa and the scale factor change amount ΔSab.
When the scale factor change amount ΔSab has a negative value, the compensation values may have positive values. For example, the lookup table C_LUT has a compensation value of +10 for a grayscale of 4 when the scale factor change amount ΔSab is −0.4, and a compensation value of +8 for a grayscale of 4 when the scale factor change amount ΔSab is −0.3. When the scale factor change amount ΔSab has a positive value, the compensation values may have negative values. For example, the lookup table C_LUT has a compensation value of −6 for a grayscale of 4 when the scale factor change amount ΔSab is 0.3, and a compensation value of −9 for a grayscale of 4 when the scale factor change amount ΔSab is 0.4. While various compensation values, scale factor change amounts ΔSab, and grayscales are shown in the lookup table C_LUT, embodiments of the present disclosure are not limited thereto. For example, the compensation values, scale factor change amounts ΔSab, and grayscales in the lookup table C_LUT may be changed to other values as needed.
The sub-compensation block 143 may read out a compensation value CV corresponding to the grayscale I_Ga of the first compensation image data C_DSa and the scale factor change amount ΔSab from the sub-storage block 144 and may output the re-compensation data CC_DSa by compensating the first compensation image data C_DSa based on the compensation value CV.
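The lookup-table readout can be sketched as follows. This is an illustrative sketch under stated assumptions: the table is populated only with the four example entries given in the text (all for a grayscale of 4); the default of 0 for missing entries and the clamping to the 0-255 range are hypothetical choices.

```python
# Illustrative model of the sub-storage block 144 and sub-compensation
# block 143 (assumption). Only the example entries from the text are filled in.
C_LUT = {
    # (grayscale I_Ga, dSab): compensation value CV
    (4, -0.4): +10,
    (4, -0.3): +8,
    (4,  0.3): -6,
    (4,  0.4): -9,
}

def read_compensation(grayscale: int, d_sab: float) -> int:
    """Look up CV for (I_Ga, dSab); return 0 when no entry applies (assumption)."""
    return C_LUT.get((grayscale, round(d_sab, 1)), 0)

def recompensate_with_lut(grayscale: int, d_sab: float) -> int:
    """CC_DSa = C_DSa + CV, clamped to the 0-255 grayscale range (assumption)."""
    return max(0, min(255, grayscale + read_compensation(grayscale, d_sab)))
```

Note the sign convention: a negative ΔSab (load rose, luminance was scaled down) adds a positive offset to the low grayscale, and a positive ΔSab subtracts one, mirroring the gamma-curve behavior described earlier.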
Since the image is displayed based on the re-compensation data CC_DSa, it is possible to prevent or reduce the perception of a flickering phenomenon in the low grayscale area caused by the luminance compensation of the main current compensation unit 130_a.
In an embodiment, power can be conserved without causing observable flicker by compensating areas of the display panel maintained at a low gray level for at least a certain time period differently from areas of the display panel that are not maintained at the low gray level for at least the certain time period.
According to an embodiment of the present disclosure, the flickering phenomenon in a first area may be removed while reducing the overall power consumption of a display device, by applying different luminance compensation methods to the first area, where a still image is displayed, and a second area where a video is displayed.
While the present disclosure has been described with reference to embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the present disclosure as set forth in the following claims.
Number | Date | Country | Kind
---|---|---|---
10-2022-0103147 | Aug 2022 | KR | national