This application claims priority to Korean Patent Application No. 10-2021-0123616, filed on Sep. 16, 2021, and all the benefits accruing therefrom under 35 U.S.C. § 119, the content of which in its entirety is herein incorporated by reference.
The present disclosure relates to a display device. More particularly, the present disclosure relates to a display device capable of improving an afterimage.
Among display devices, a light emitting type display device displays an image using a light emitting diode that emits light by recombination of holes and electrons. The light emitting type display device has advantages such as a fast response speed and low power consumption.
The display device includes a display panel for displaying an image, a scan driver for sequentially applying scan signals to scan lines arranged in the display panel, and a data driver for applying data signals to data lines arranged in the display panel.
The present disclosure provides a display device capable of improving an afterimage.
Embodiments of the invention provide a display device including a display panel displaying an image, a panel driver driving the display panel, and a driving controller controlling a drive of the panel driver.
The driving controller includes a compensation determination block and a data compensation block. The compensation determination block is activated after a still image is displayed for a predetermined time or more and generates a compensation value based on a final afterimage component calculated using an afterimage algorithm obtained by a combination of a first afterimage calculation equation and a second afterimage calculation equation. The data compensation block receives an image signal and reflects the compensation value to the image signal to generate a compensation image signal.
The compensation determination block generates the first and second afterimage calculation equations by reflecting a difference in grayscale between the still image and a target image and a time during which the still image is displayed. The target image corresponds to an original image of the image signal without an afterimage.
Embodiments of the invention provide a display device including a display panel, a panel driver, and a driving controller. The display panel includes a first display area and a second display area, displays a first image through the first and second display areas in a first mode, and displays a second image through the first display area in a second mode. The panel driver drives the display panel in the first mode or the second mode, and the driving controller controls a drive of the panel driver.
The driving controller includes a compensation determination block and a data compensation block. The compensation determination block is activated after the second mode is switched to the first mode and generates a compensation value based on a final afterimage component calculated using an afterimage algorithm obtained by a combination of a first afterimage calculation equation and a second afterimage calculation equation. The data compensation block receives a second image signal corresponding to the second display area and reflects the compensation value to the second image signal to generate a second compensation image signal.
According to the above, as the compensation value is generated based on the final afterimage component calculated using the afterimage algorithm and the image signal is compensated for based on the compensation value, the medium-term afterimage is removed within a period during which the medium-term afterimage occurs. Accordingly, a deterioration of a display quality, which is caused by the medium-term afterimage, is effectively prevented in the display device.
The above and other advantages of the present disclosure will become readily apparent by reference to the following detailed description when considered in conjunction with the accompanying drawings wherein:
In the present disclosure, it will be understood that when an element or layer is referred to as being “on”, “connected to” or “coupled to” another element or layer, it can be directly on, connected or coupled to the other element or layer or intervening elements or layers may be present.
Like numerals refer to like elements throughout. In the drawings, the thickness, ratio, and dimension of components are exaggerated for effective description of the technical content. As used herein, the term “and/or” may include any and all combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the present disclosure. As used herein, the singular forms, “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Spatially relative terms, such as “beneath”, “below”, “lower”, “above”, “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures.
It will be further understood that the terms “includes” and/or “including”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
“About”, “approximately”, and “substantially equal” as used herein are inclusive of the stated value and mean within an acceptable range of deviation for the particular value as determined by one of ordinary skill in the art, considering the measurement in question and the error associated with measurement of the particular quantity (i.e., the limitations of the measurement system). For example, “about” can mean within one or more standard deviations, or within ±30%, 20%, 10% or 5% of the stated value.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Hereinafter, the present disclosure will be explained in detail with reference to the accompanying drawings.
Referring to
The display device DD may include a display surface substantially parallel to each of a first direction DR1 and a second direction DR2 and may display an image through the display surface. The display surface may correspond to a front surface of the display device DD.
The display surface of the display device DD may include a display area DA and a non-display area NDA. The display area DA may be an area in which the image is displayed. A user may view the image through the display area DA. In the present embodiment, the display area DA has a quadrangular shape, however, this is merely one example. According to an embodiment, the display area DA may have a variety of shapes and should not be particularly limited.
The non-display area NDA may be defined adjacent to the display area DA. The non-display area NDA may have a predetermined color. The non-display area NDA may surround the display area DA. Accordingly, the shape of the display area DA may be defined by the non-display area NDA, however, this is merely one example. According to an embodiment, the non-display area NDA may be disposed adjacent to only one side of the display area DA or may be omitted. The display device DD may include various embodiments and should not be particularly limited.
Referring to
The driving controller 100 may receive an input image signal RGB and control signals CTRL. The driving controller 100 may convert a data format of the input image signal RGB to a data format appropriate to an interface between the data driver 200 and the driving controller 100 to generate image data DATA. The driving controller 100 may generate a first driving control signal SCS, a second driving control signal DCS, and a third driving control signal ECS based on the control signals CTRL.
The data driver 200 may receive the second driving control signal DCS and the image data DATA from the driving controller 100. The data driver 200 may convert the image data DATA to data signals and may output the data signals to a plurality of data lines DL1 to DLm described later. The data signals may be analog voltages corresponding to grayscale values of the image data DATA.
The scan driver 300 may receive the first driving control signal SCS from the driving controller 100. The scan driver 300 may output scan signals to scan lines in response to the first driving control signal SCS.
The voltage generator 400 may generate voltages required to operate the display panel DP. In the present embodiment, the voltage generator 400 may generate a first driving voltage ELVDD, a second driving voltage ELVSS, a first initialization voltage VINT, and a second initialization voltage AINT.
The display panel DP may include initialization scan lines SIL1 to SILn, compensation scan lines SCL1 to SCLn, write scan lines SWL1 to SWLn+1, light emission control lines EML1 to EMLn, the data lines DL1 to DLm, and pixels PX. The initialization scan lines SIL1 to SILn, the compensation scan lines SCL1 to SCLn, the write scan lines SWL1 to SWLn+1, the light emission control lines EML1 to EMLn, the data lines DL1 to DLm, and the pixels PX may overlap the display area DA. The initialization scan lines SIL1 to SILn, the compensation scan lines SCL1 to SCLn, the write scan lines SWL1 to SWLn+1, and the light emission control lines EML1 to EMLn may extend in the second direction DR2. The initialization scan lines SIL1 to SILn, the compensation scan lines SCL1 to SCLn, the write scan lines SWL1 to SWLn+1, and the light emission control lines EML1 to EMLn may be arranged in the first direction DR1 and may be spaced apart from each other. The data lines DL1 to DLm may extend in the first direction DR1 and may be arranged spaced apart from each other in the second direction DR2.
The pixels PX may be electrically connected to the initialization scan lines SIL1 to SILn, the compensation scan lines SCL1 to SCLn, the write scan lines SWL1 to SWLn+1, the light emission control lines EML1 to EMLn, and the data lines DL1 to DLm. Each of the pixels PX may be electrically connected to four scan lines. As an example, as shown in
The scan driver 300 may be disposed in the non-display area NDA of the display panel DP. The scan driver 300 may receive the first driving control signal SCS from the driving controller 100. Responsive to the first driving control signal SCS, the scan driver 300 may output initialization scan signals to the initialization scan lines SIL1 to SILn, may output compensation scan signals to the compensation scan lines SCL1 to SCLn, and may output write scan signals to the write scan lines SWL1 to SWLn+1. A circuit configuration and an operation of the scan driver 300 will be described in detail later.
The light emission driver 350 may receive the third driving control signal ECS from the driving controller 100. The light emission driver 350 may output light emission control signals to the light emission control lines EML1 to EMLn in response to the third driving control signal ECS. According to an embodiment, the scan driver 300 may be connected to the light emission control lines EML1 to EMLn. In this case, the scan driver 300 may output the light emission control signals to the light emission control lines EML1 to EMLn.
Each of the pixels PX may include a light emitting diode ED and a pixel circuit part PXC that controls a light emission of the light emitting diode ED. The pixel circuit part PXC may include a plurality of transistors and a capacitor. The scan driver 300 and the light emission driver 350 may include transistors that are formed through the same processes as those of the pixel circuit part PXC.
Each of the pixels PX may receive the first driving voltage ELVDD, the second driving voltage ELVSS, the first initialization voltage VINT, and the second initialization voltage AINT from the voltage generator 400.
The pixel PXij may include the light emitting diode ED and the pixel circuit part PXC. The pixel circuit part PXC may include first, second, third, fourth, fifth, sixth, and seventh transistors T1, T2, T3, T4, T5, T6, and T7 and one capacitor Cst. Each of the first to seventh transistors T1 to T7 may be a transistor including a low-temperature polycrystalline silicon (“LTPS”) semiconductor layer. All of the first to seventh transistors T1 to T7 may be P-type transistors, however, the present disclosure should not be limited thereto or thereby. As an example, all of the first to seventh transistors T1 to T7 may be N-type transistors. According to an embodiment, some transistors of the first to seventh transistors T1 to T7 may be P-type transistors, and the other transistors may be N-type transistors. As an example, each of the first, second, fifth, sixth, and seventh transistors T1, T2, T5, T6, and T7 among the first to seventh transistors T1 to T7 may be a P-type transistor, and each of the third and fourth transistors T3 and T4 may be an N-type transistor including an oxide semiconductor as its semiconductor layer. However, the configuration of the pixel circuit part PXC should not be limited to the embodiment shown in
The initialization scan line SILj, the compensation scan line SCLj, the first and second write scan lines SWLj and SWLj+1, and the light emission control line EMLj may transmit a j-th initialization scan signal SIj (hereinafter, referred to as an “initialization scan signal”), a j-th compensation scan signal SCj (hereinafter, referred to as a “compensation scan signal”), j-th and (j+1)th write scan signals SWj and SWj+1 (hereinafter, referred to as “first and second write scan signals”), and a j-th light emission control signal EMj (hereinafter, referred to as a “light emission control signal”) to the pixel PXij, respectively. The data line DLi may transmit a data signal Di to the pixel PXij. The data signal Di may have a voltage level corresponding to a grayscale of a corresponding image signal among the image signal RGB input to the display device DD (refer to
The first transistor T1 may include a first electrode connected to the first driving voltage line VL1 via the fifth transistor T5, a second electrode electrically connected to an anode of the light emitting diode ED via the sixth transistor T6, and a gate electrode connected to one end of the capacitor Cst. The first transistor T1 may receive the data signal Di transmitted via the data line DLi according to a switching operation of the second transistor T2 and may supply a driving current Id to the light emitting diode ED.
The second transistor T2 may include a first electrode connected to the data line DLi, a second electrode connected to the first electrode of the first transistor T1, and a gate electrode connected to the first write scan line SWLj. The second transistor T2 may be turned on in response to the first write scan signal SWj applied thereto via the first write scan line SWLj and may transmit the data signal Di applied thereto via the data line DLi to the first electrode of the first transistor T1.
The third transistor T3 may include a first electrode connected to the second electrode of the first transistor T1, a second electrode connected to the gate electrode of the first transistor T1, and a gate electrode connected to the compensation scan line SCLj. The third transistor T3 may be turned on in response to the compensation scan signal SCj applied thereto via the compensation scan line SCLj and may connect the gate electrode and the second electrode of the first transistor T1 to each other to allow the first transistor T1 to be connected in a diode configuration.
The fourth transistor T4 may include a first electrode connected to the gate electrode of the first transistor T1, a second electrode connected to the third driving voltage line VL3 to which the first initialization voltage VINT is transmitted, and a gate electrode connected to the initialization scan line SILj. The fourth transistor T4 may be turned on in response to the initialization scan signal SIj applied thereto via the initialization scan line SILj and may transmit the first initialization voltage VINT to the gate electrode of the first transistor T1 to perform an initialization operation that initializes a voltage of the gate electrode of the first transistor T1.
The fifth transistor T5 may include a first electrode connected to the first driving voltage line VL1, a second electrode connected to the first electrode of the first transistor T1, and a gate electrode connected to the light emission control line EMLj.
The sixth transistor T6 may include a first electrode connected to the second electrode of the first transistor T1, a second electrode connected to the anode of the light emitting diode ED, and a gate electrode connected to the light emission control line EMLj.
The fifth transistor T5 and the sixth transistor T6 may be substantially simultaneously turned on in response to the light emission control signal EMj applied thereto via the light emission control line EMLj. The first driving voltage ELVDD applied via the turned-on fifth transistor T5 may be compensated for by the first transistor T1 connected in the diode configuration and may be transmitted to the light emitting diode ED.
The seventh transistor T7 may include a first electrode connected to the second electrode of the sixth transistor T6, a second electrode connected to the fourth driving voltage line VL4 to which the second initialization voltage AINT is transmitted, and a gate electrode connected to the second write scan line SWLj+1.
As described above, the one end of the capacitor Cst may be connected to the gate electrode of the first transistor T1, and the other end of the capacitor Cst may be connected to the first driving voltage line VL1. A cathode of the light emitting diode ED may be connected to the second driving voltage line VL2 that transmits the second driving voltage ELVSS.
Referring to
Then, when the compensation scan signal SCj at low level is provided via the compensation scan line SCLj during a compensation period of the one frame F1, the third transistor T3 may be turned on. The compensation period may not overlap the initialization period. An activation period of the compensation scan signal SCj may be defined as a period in which the compensation scan signal SCj has the low level, and an activation period of the initialization scan signal SIj may be defined as a period in which the initialization scan signal SIj has the low level. The activation period of the compensation scan signal SCj may not overlap the activation period of the initialization scan signal SIj. The activation period of the initialization scan signal SIj may precede the activation period of the compensation scan signal SCj.
During the compensation period, the first transistor T1 may be connected in a diode configuration and may be forward biased by the turned-on third transistor T3. In addition, the compensation period may include a data write period in which the first write scan signal SWj is generated at low level. The second transistor T2 may be turned on in response to the first write scan signal SWj at low level during the data write period. Then, a compensation voltage Di-Vth reduced by a threshold voltage Vth of the first transistor T1 from the data signal Di provided via the data line DLi may be applied to the gate electrode of the first transistor T1. That is, an electric potential of the gate electrode of the first transistor T1 may be the compensation voltage Di-Vth.
The first driving voltage ELVDD and the compensation voltage Di-Vth may be applied to both ends (or opposite ends) of the capacitor Cst, respectively, and the capacitor Cst may be charged with electric charges corresponding to a difference in voltage between the both ends of the capacitor Cst.
Meanwhile, the seventh transistor T7 is turned on in response to the second write scan signal SWj+1 having the low level applied thereto via the second write scan line SWLj+1. A portion of the driving current Id may be bypassed as a bypass current Ibp via the seventh transistor T7.
In a case where the pixel PXij displays a black image, when the light emitting diode ED emits a light even though a minimum current of the first transistor T1 flows as the driving current Id, the pixel PXij may not properly display the black image. Therefore, the seventh transistor T7 of the pixel PXij according to the embodiment of the present disclosure may distribute a portion of the minimum current of the first transistor T1 to another current path as a bypass current Ibp rather than to a current path to the light emitting diode ED. In this case, the minimum current of the first transistor T1 means a current flowing to the first transistor T1 under a condition that a gate-source voltage Vgs of the first transistor T1 is less than the threshold voltage Vth and the first transistor T1 is turned off. In this way, when the minimum driving current that turns off the first transistor T1, for example, a current of less than about 10 picoamperes (pA), is transmitted to the light emitting diode ED, an image with a black grayscale may be displayed. In the case where the pixel PXij displays the black image, an influence of the bypass current Ibp is relatively large, however, in the case where an image such as a normal image or a white image is displayed, the influence of the bypass current Ibp with respect to the driving current Id may be negligible. Accordingly, when the black image is displayed, a current, i.e., a light emitting current Ied, reduced from the driving current Id by an amount of the bypass current Ibp bypassed through the seventh transistor T7 may be provided to the light emitting diode ED, and thus, the black image may be clearly displayed. Thus, the pixel PXij may display an accurate black grayscale image using the seventh transistor T7, and as a result, a contrast ratio may be improved.
Then, a level of the light emission control signal EMj provided from the light emission control line EMLj may be changed to a low level from a high level. The fifth transistor T5 and the sixth transistor T6 may be turned on in response to the light emission control signal EMj having the low level. As a result, the driving current Id may be generated due to a difference in voltage between the voltage of the gate electrode of the first transistor T1 and the first driving voltage ELVDD, the driving current Id may be supplied to the light emitting diode ED via the sixth transistor T6, and thus, the light emitting current Ied may flow through the light emitting diode ED.
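The threshold-voltage compensation described above can be illustrated numerically. The following sketch is a non-limiting illustration only and is not part of the disclosed pixel circuit: it assumes an idealized square-law saturation model for the P-type first transistor T1, with arbitrary example voltages and a hypothetical gain constant beta. It shows that, because the gate of the first transistor T1 holds the compensation voltage Di-Vth, the emission-period driving current depends on the difference between the first driving voltage ELVDD and the data signal Di but not on the threshold voltage Vth.

```python
# Minimal sketch, assuming an idealized square-law saturation model for the
# P-type first transistor T1; beta and the voltage values are illustrative only.

def emission_current(elvdd, data_v, vth, beta=1.0e-6):
    gate_v = data_v - vth                # compensation voltage Di - Vth held by the capacitor Cst
    overdrive = elvdd - gate_v - vth     # |Vgs| - |Vth|, with Vth treated as a magnitude
    return 0.5 * beta * overdrive ** 2   # saturation-region estimate of the driving current Id

# The same data voltage yields the same current for different threshold voltages,
# illustrating how the diode-connected write cancels Vth from the driving current.
print(emission_current(elvdd=4.6, data_v=3.0, vth=0.8))
print(emission_current(elvdd=4.6, data_v=3.0, vth=1.1))
```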
Referring to
In a case where the still image is displayed over a predetermined time and switched to another image, the medium-term afterimage may occur for a certain period of time. As an example, the predetermined time may be about 10 seconds or more and may be about 1 hour or less. In a case where the display of the still image ends and an image (hereinafter, referred to as a “target image”) having a target grayscale is displayed over the display area DA, the medium-term afterimage may occur in the first and second areas A1 and A2 for the certain period of time as shown in
In the case where the target image is the afterimage-causing image, a difference in luminance between the first and second areas A1 and A2 may be visible due to the medium-term afterimage for the certain period of time. After the certain period of time elapses, the target image from which the medium-term afterimage is removed may be displayed in the display area DA as shown in
The medium-term afterimage may occur in different ways depending on the grayscale of the still image, the grayscale of the target image, and the display time. As an example, when the first still image having the first low grayscale (e.g., 8 grayscale) is displayed in the first area A1 for about 10 seconds and then a target data signal having the target grayscale (e.g., 48 grayscale) higher than the first low grayscale is provided to the pixel PX (refer to
Meanwhile, when the second still image having the white grayscale is displayed in the second area A2 for about 10 seconds and then the target data signal having the target grayscale (e.g., 48 grayscale) lower than the white grayscale (e.g., 128 grayscale) is provided to the pixel PX of the second area A2, an image (hereinafter, referred to as a “second afterimage”) having a grayscale lower than the target grayscale may be visible in the second area A2 for the certain period of time.
As an example, the medium-term afterimage may be caused by a change in hysteresis characteristics of the transistors T1 to T7 (refer to
Due to the medium-term afterimage, although the same target image is supposed to be displayed in the first and second areas A1 and A2, the first afterimage and the second afterimage may be displayed in the first area A1 and the second area A2, respectively, for the certain period of time. The difference in luminance between the first and second afterimages may be visible in the display area DA by the medium-term afterimage for the certain period of time.
Referring to
The luminance ratio may be defined as a ratio of the luminance (or grayscale) of the afterimage to the target luminance. In this case, when the target luminance and the luminance of the afterimage are the same as each other, the luminance ratio of the afterimage luminance to the target luminance may be the same as the reference luminance ratio Rb.
The luminance ratio of the first afterimage luminance to the target luminance may be greater than the reference luminance ratio Rb. That is, when the luminance of the still image is lower than the target luminance, the medium-term afterimage having the luminance ratio greater than the reference luminance ratio Rb may occur. The luminance ratio of the second afterimage to the target luminance may be smaller than the reference luminance ratio Rb. That is, when the luminance of the still image is higher than the target luminance, the medium-term afterimage having the luminance ratio smaller than the reference luminance ratio Rb may occur.
The luminance ratio of the medium-term afterimage may be changed depending on the time during which the still image is displayed. In
According to the first to third graphs G1 to G3, afterimage characteristics of the first afterimage may be different depending on the display time of the first still image. In particular, as the time during which the first still image is displayed increases, the luminance ratio during an initial period (for example, within about 40 seconds) becomes higher.
In
According to the fourth to sixth graphs G4 to G6, afterimage characteristics of the second afterimage may be different depending on the display time of the second still image. In particular, as the time during which the second still image is displayed increases, the luminance ratio during an initial period (for example, within about 60 seconds) becomes lower.
As represented by the first to third graphs G1 to G3, the first afterimage may have a first tendency during a first period SP1 from a start point t0, at which the first still image is changed to the first afterimage, to a first midpoint t1. In addition, the first afterimage may have a second tendency during a second period LP1 from the first midpoint t1 to a time point t2 at which the medium-term afterimage is finished. The first tendency may be similar to a tendency of a temporary afterimage (i.e., short-term afterimage), and the second tendency may be similar to a tendency of a long-term burn-in (i.e., long-term afterimage). According to the present disclosure, the driving controller 100 (refer to
As represented by the fourth to sixth graphs G4 to G6, the second afterimage may have a third tendency during a third period SP2 from a start point t0, at which the second still image is changed to the second afterimage, to a second midpoint t3. In addition, the second afterimage may have a fourth tendency during a fourth period LP2 from the second midpoint t3 to a time point t4 at which the medium-term afterimage is finished. The third tendency may be similar to the tendency of the temporary afterimage (i.e., short-term afterimage), and the fourth tendency may be similar to a tendency of a long-term burn-in (i.e., long-term afterimage). According to the present disclosure, the driving controller 100 (refer to
The afterimage algorithm f(x) according to an embodiment of the present disclosure is defined by the following Equation 1.
f(x)=f1(x)+f2(x) [Equation 1]
In Equation 1, f1(x) denotes a first afterimage calculation equation, and f2(x) denotes a second afterimage calculation equation. As used herein, the afterimage algorithm f(x) may represent an afterimage, especially, a mid-term afterimage, the first afterimage calculation equation f1(x) may represent the short-term afterimage, and the second afterimage calculation equation f2(x) may represent the long-term afterimage.
The first afterimage calculation equation f1(x) is defined by the following Equation 2.
f1(x)=ae^(bx) [Equation 2]
The second afterimage calculation equation is defined by one of the following Equations 3, 4, and 5.
f2(x)=Rb+cx+d [Equation 3]
f2(x)=cx+d [Equation 4]
f2(x)=ce^(dx) [Equation 5]
In Equations 1 to 5, each of a, b, c, and d may be a constant. Rb may be the reference luminance ratio. In the condition in which the first afterimage is displayed, values of a and c may be positive values, and in the condition in which the second afterimage is displayed, the values of a and c may be negative values. The value of each of a, b, c, and d may be changed depending on the display time of the still image and a difference between the grayscale of the still image and the target grayscale.
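The afterimage algorithm of Equations 1 to 5 may be sketched as follows. The sketch is a non-limiting illustration: the constants a, b, c, and d, the reference luminance ratio Rb, and the selected form of the second afterimage calculation equation are placeholders, since, as noted above, their values depend on the display time of the still image and on the difference between the grayscale of the still image and the target grayscale.

```python
import math

# Minimal sketch of Equations 1 to 5. The constants a, b, c, d and the reference
# luminance ratio Rb are illustrative placeholders; their values depend on the
# display time of the still image and on the difference between the grayscale of
# the still image and the target grayscale, as described above.

RB = 1.0  # reference luminance ratio assumed for illustration

def f1(x, a, b):
    """First afterimage calculation equation (Equation 2): a*e^(bx)."""
    return a * math.exp(b * x)

def f2(x, c, d, form="eq3"):
    """Second afterimage calculation equation (one of Equations 3, 4, and 5)."""
    if form == "eq3":
        return RB + c * x + d
    if form == "eq4":
        return c * x + d
    return c * math.exp(d * x)  # Equation 5

def final_afterimage_component(x, a, b, c, d, form="eq3"):
    """Afterimage algorithm (Equation 1): f(x) = f1(x) + f2(x)."""
    return f1(x, a, b) + f2(x, c, d, form)

# Example evaluation over time for a first-afterimage condition (a and c positive).
for t in (0, 30, 120, 600):
    print(t, final_afterimage_component(t, a=0.2, b=-0.05, c=0.0001, d=-0.05))
```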
Referring to
Referring to
During the third period SP2, the first afterimage component may have the negative value, and the second afterimage component may have a value smaller than 1 and greater than 0. A sum of the second afterimage component and the first afterimage component may be calculated as the final afterimage component in the third period SP2. As an example, during the fourth period LP2, the first afterimage component may have a value of 0, and the second afterimage component may have a value smaller than 1 and greater than 0. Accordingly, the second afterimage component alone may be calculated as the final afterimage component during the fourth period LP2.
That is, the first afterimage component calculated by the first afterimage calculation equation f1(x) may converge to zero (0) as a time elapses. In addition, the second afterimage component calculated by the second afterimage calculation equation f2(x) may converge to 1 as a time elapses or may be maintained at 1 after the certain period of time elapses (that is, after the fourth period LP2 elapses).
In
In
As an example, the target grayscale Tg may be 16 grayscale, the first and second reference grayscales may be 8 grayscale and 0 grayscale, respectively, and the third and fourth reference grayscales may be 32 grayscale and 128 grayscale, respectively.
In the third section S3 of
According to
In the third section S3 of
According to
The driving controller 100 (refer to
Referring to
The data compensation block 120 may receive the image signal RGB and may reflect the compensation value Cv to the image signal RGB to generate a compensation image signal RGB′. The data compensation block 120 may be activated in response to a flag signal fg1. The flag signal fg1 may be enabled in the period when the still image is displayed and may be disabled in the period when the video (i.e., moving image) is displayed. Accordingly, the data compensation block 120 may be activated in response to the flag signal fg1 enabled in the period when the still image is displayed and may be deactivated in response to the flag signal fg1 disabled in the period when the video is displayed. In the period when the data compensation block 120 is deactivated, the driving controller 100 may output the image signal RGB without compensating for the image signal RGB, and in the period when the data compensation block 120 is activated, the driving controller 100 may output the compensation image signal RGB′.
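The flag-gated operation of the data compensation block 120 may be sketched as follows. Because the manner in which the compensation value Cv is reflected to the image signal RGB is not specified numerically above, the sketch assumes, purely for illustration, that Cv acts as a multiplicative gain applied to each 8-bit grayscale value and clipped to the valid range.

```python
# Minimal sketch of the flag-gated behavior of the data compensation block 120.
# Applying the compensation value Cv as a per-value multiplicative gain clipped
# to the 8-bit range is an assumption made only for this illustration.

def compensate_frame(rgb_frame, cv, flag_enabled):
    if not flag_enabled:                 # block deactivated: pass the image signal through
        return rgb_frame
    out = []
    for value in rgb_frame:              # rgb_frame: flat list of 8-bit grayscale values
        compensated = round(value * cv)  # reflect the compensation value Cv
        out.append(max(0, min(255, compensated)))
    return out

print(compensate_frame([48, 48, 128], cv=0.94, flag_enabled=True))   # compensation image signal RGB'
print(compensate_frame([48, 48, 128], cv=0.94, flag_enabled=False))  # image signal output without compensation
```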
As an example, the compensation determination block 110 may include the image determination block 111 and the compensation value generation block 112. The image determination block 111 may compare a previous image signal P_RGB with the image signal RGB (i.e., a current image signal) currently input to calculate a variation amount Df and may determine whether the image is changed based on the variation amount Df. The previous image signal P_RGB may be provided from a memory. The image determination block 111 may compare the previous image signal P_RGB with the current image signal RGB in units of one frame or may compare the previous image signal P_RGB with the current image signal RGB in units of one line. In a case that the previous image signal P_RGB is compared with the current image signal RGB in units of one frame, the previous image signal P_RGB may be an image signal of a previous frame, and the current image signal RGB may be an image signal of a current frame. In a case that the previous image signal P_RGB is compared with the current image signal RGB in units of one line, the previous image signal P_RGB may be an image signal corresponding to a previous line, and the current image signal RGB may be an image signal corresponding to a current line.
As shown in
The comparator 111b may compare the previous image signal P_RGB with the cumulative image signal A_RGB to calculate the variation amount Df. The comparator 111b may transmit the calculated variation amount Df to the determiner 111c. When the previous image signal P_RGB is compared with the cumulative image signal A_RGB, the comparator 111b may use only some bits of information among all bits. As an example, when the image signal is an 8-bit signal, only the upper 4 bits of information may be used to compare the previous image signal P_RGB with the cumulative image signal A_RGB.
The determiner 111c may compare the variation amount Df with a predetermined reference value (e.g., 0) to determine whether the image is changed. When the variation amount Df is the same as the reference value, the determiner 111c may determine that the image is the still image and may transmit an incremental value Rc to the counter 111d to count the display time of the still image. The counter 111d may receive the incremental value Rc and may accumulate the incremental value Rc. In a case where the incremental value Rc is not received from a predetermined unit (for example, the determiner 111c), the counter 111d may reset the accumulated value Ac (i.e., a cumulative value). When the reception of the incremental value Rc is finished and a request for the cumulative value Ac is received from the compensation value generation block 112, the counter 111d may transmit the cumulative value Ac to the compensation value generation block 112.
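The still-image detection path formed by the comparator 111b, the determiner 111c, and the counter 111d may be sketched as follows. The upper-4-bit comparison and the reference value of 0 follow the examples given above; treating a frame as a flat list of 8-bit values and resetting the cumulative value immediately when the image changes are simplifying assumptions made only for this illustration.

```python
# Minimal sketch of the still-image detection path (comparator 111b, determiner
# 111c, counter 111d). The upper-4-bit comparison and the reference value of 0
# follow the examples above; the flat-list frame format and the immediate reset
# of the cumulative value are simplifying assumptions.

REFERENCE_VALUE = 0

def variation_amount(previous_frame, current_frame):
    """Sum of upper-4-bit differences between the previous and current frames (Df)."""
    return sum(abs((p >> 4) - (c >> 4)) for p, c in zip(previous_frame, current_frame))

class StillImageCounter:
    def __init__(self):
        self.cumulative_value = 0          # Ac: count of consecutive still-image frames

    def update(self, previous_frame, current_frame):
        df = variation_amount(previous_frame, current_frame)
        if df == REFERENCE_VALUE:          # determined to be a still image
            self.cumulative_value += 1     # accumulate the incremental value Rc
        else:
            self.cumulative_value = 0      # incremental value no longer received: reset Ac
        return df, self.cumulative_value

counter = StillImageCounter()
print(counter.update([8, 8, 128], [8, 8, 128]))    # still image: Df == 0
print(counter.update([8, 8, 128], [48, 48, 48]))   # image changed: Df != 0
```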
In addition, when the variation amount Df is different from the reference value, the determiner 111c may output the variation amount Df to the compensation value generation block 112. After a time point at which it is determined first that the variation amount Df is different from the reference value (for example, a time point at which an image signal corresponding to a target image is input, which corresponds to the start point t0 of
The compensation value generation block 112 may receive the variation amount Df and the state signal Sc and may generate the afterimage algorithm. When the variation amount Df is the same as the reference value (for example, when the still image is displayed), the compensation value generation block 112 may be deactivated. That is, since the medium-term afterimage does not occur while the still image is being displayed, the compensation value generation block 112 may be deactivated.
However, when the variation amount Df is different from the reference value, the compensation value generation block 112 may be activated, and the compensation value generation block 112 may generate the compensation value Cv using the afterimage algorithm f(x).
As shown in
The afterimage component determiner 112b may generate the afterimage algorithm f(x) based on the variation amount Df and the accumulated state result Th and may calculate the final afterimage component AId using the afterimage algorithm f(x). As the afterimage component determiner 112b generates the afterimage algorithm f(x) using the variation amount Df, the afterimage component determiner 112b may generate the final afterimage component AId by taking into account the difference between the grayscale of the still image and the target grayscale. The afterimage component determiner 112b may generate the afterimage algorithm f(x) using the state result Th, and thus, the afterimage component determiner 112b may periodically generate the final afterimage component AId by taking into account the display time of the afterimage.
The afterimage component determiner 112b may further receive the cumulative value Ac from the image determination block 111, e.g., the counter 111d. The cumulative value Ac may also be used to generate the afterimage algorithm f(x). The afterimage component determiner 112b may calculate the final afterimage component AId by taking into account the time during which the still image is displayed, through the cumulative value Ac.
The afterimage component determiner 112b may receive information Ca, Cb, Cc, and Cd about the constant values of a, b, c, and d used in the first and second afterimage calculation equations f1(x) and f2(x). The information Ca, Cb, Cc, and Cd about the constant values of a, b, c, and d may be changed depending on the difference between the grayscale of the still image and the target grayscale, the display time of the still image, and the display time of the afterimage. As an example, a time interval in which the information Ca and Cb about the constant values of a and b is changed may be shorter than a time interval in which the information Cc and Cd about the constant values of c and d is changed. Alternatively, the information Ca and Cb about the constant values of a and b may have a fixed value regardless of the display time of the still image. In this case, only the information Cc and Cd about the constant values of c and d may have a value changed depending on the display time of the still image.
The compensation value determiner 112c may generate the compensation value Cv based on the final afterimage component AId. In
The final afterimage component AId at the start point t0 (that is, a case where x is 0) may be determined according to the constant values of a and d in the first and second afterimage calculation equations f1(x) and f2(x).
The compensation value determiner 112c may update the compensation value Cv in the units of a first compensation interval Pc1 during the third period SP2 or the first period SP1 (refer to
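The periodic update performed by the compensation value determiner 112c may be sketched as follows. The mapping from the final afterimage component AId to the compensation value Cv is not given numerically above, so the sketch assumes, for illustration only, that Cv is the inverse of the luminance ratio represented by AId and that the first compensation interval Pc1 corresponds to a fixed number of update ticks.

```python
# Minimal sketch of a periodic compensation-value update by the compensation
# value determiner 112c. Using Cv = Rb / AId and a fixed tick count for the
# first compensation interval Pc1 are assumptions made only for illustration.

RB = 1.0    # reference luminance ratio
PC1 = 10    # first compensation interval Pc1, expressed here as a number of ticks

def compensation_value(aid):
    return RB / aid if aid else RB

def run_compensation(aid_over_time, interval=PC1):
    cv = RB
    history = []
    for tick, aid in enumerate(aid_over_time):
        if tick % interval == 0:           # update Cv once per compensation interval
            cv = compensation_value(aid)
        history.append(cv)
    return history

# Example: a final afterimage component AId that decays from about 1.06 toward 1.0.
aids = [1.0 + 0.06 * (0.9 ** t) for t in range(40)]
print([round(v, 3) for v in run_compensation(aids)[::10]])
```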
As described above, as the compensation value Cv is generated based on the final afterimage component AId calculated by the afterimage algorithm and the image signal RGB is compensated for by the compensation value Cv, the medium-term afterimage may be removed even within the certain period of time during which the medium-term afterimage occurs, and as a result, the deterioration in display quality of the display device DD, which is caused by the medium-term afterimage, may be effectively prevented. In addition, as the medium-term afterimage is compensated for using the afterimage algorithm f(x) rather than a lookup table, an increase in the number of components of the display device DD due to a memory for storing the lookup table may be prevented. Since the afterimage algorithm f(x) is obtained by a combination of the first afterimage calculation equation f1(x) to extract the first afterimage component and the second afterimage calculation equation f2(x) to extract the second afterimage component, a final afterimage component similar to an actual measurement value of the medium-term afterimage may be calculated. Accordingly, the compensation for the medium-term afterimage may be performed more precisely.
Referring to
The display device DDa may be inwardly folded (in-folding) such that the first and second display areas DA1 and DA2 face each other as shown in
The display device DDa may be operated in a first mode in which an image is displayed using both the first and second display areas DA1 and DA2 or may be operated in a second mode (See
The first mode may be a normal mode in which both the first and second display areas DA1 and DA2 are normally operated. The second mode may be a partial operation mode in which only one area of the first and second display areas DA1 and DA2 is normally operated. In this case, the expression “the display area is normally operated” may mean that an operation to display an image including information for the user is performed.
The display device DDa may display the image using the first and second display areas DA1 and DA2 in the first mode, and the display device DDa may display the image using only one display area of the first and second display areas DA1 and DA2 in the second mode. As an example, in a case where the display device DDa displays the image using only the first display area DA1 in the second mode, the second display area DA2 may continuously display a reference image having a specific grayscale, for example, a black reference image having a black grayscale. In this case, the black reference image displayed in the second display area DA2 may be defined as an image displayed by a black data signal having the black grayscale, however, the present disclosure should not be limited thereto or thereby. According to another embodiment, the black reference image may be defined as an image displayed by a low grayscale data signal having a specific grayscale, e.g., a low grayscale.
Referring to
When the second mode MD2 is terminated and the operation mode of the display device DDa is switched to the first mode MD1, the medium-term afterimage may occur in the second display area DA2. In a case where the second mode MD2 is terminated and the image having the target grayscale is displayed on the entire display area DA in the first mode MD1, the medium-term afterimage may occur in the second display area DA2 during the certain period of time as shown in
Referring to
The signal extraction block 130 may receive an image signal RGB. The signal extraction block 130 may extract a first image signal RGB1 corresponding to the first display area DA1 and a second image signal RGB2 corresponding to the second display area DA2 from the image signal RGB. Since the medium-term afterimage does not occur in the first display area DA1, the first image signal RGB1 may not be provided to the compensation determination block 110a. Since the medium-term afterimage occurs in the second display area DA2, the second image signal RGB2 may be provided to the compensation determination block 110a.
The compensation determination block 110a may be activated after a time point at which the second mode MD2 is switched to the first mode MD1. The compensation determination block 110a may generate a compensation value Cv based on a final afterimage component calculated by using an afterimage algorithm f(x) obtained by a combination of a first afterimage calculation equation f1(x) and a second afterimage calculation equation f2(x).
The data compensation block 120a may receive the second image signal RGB2 and may reflect the compensation value Cv to the second image signal RGB2 to generate a second compensation image signal RGB2′. The data compensation block 120a may be activated in response to a flag signal fg2. The flag signal fg2 may be enabled in the first mode MD1 and may be disabled in the second mode MD2. Accordingly, the data compensation block 120a may be activated in response to the flag signal fg2 enabled in the first mode MD1 and may be deactivated in response to the flag signal fg2 disabled in the second mode MD2. In a period during which the data compensation block 120a is deactivated, the driving controller 100a may not compensate for the second image signal RGB2, and the driving controller 100a may compensate for the second image signal RGB2 in a period during which the data compensation block 120a is activated.
Operations of the compensation determination block 110a and the data compensation block 120a are substantially the same as those of
The synthesis block 140 may receive the first image signal RGB1 and the second compensation image signal RGB2′ and may synthesize the first image signal RGB1 and the second compensation image signal RGB2′ to output the final compensation signal RGB′.
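The per-area data path of the driving controller 100a may be sketched as follows. The sketch is a non-limiting illustration: treating each display area as one half of a flat list of grayscale values, and applying the compensation value Cv as a multiplicative gain, are assumptions made only for this example.

```python
# Minimal sketch of the per-area data path of the driving controller 100a:
# extraction of RGB1/RGB2, compensation of RGB2 only, and synthesis into RGB'.
# The half-and-half split and the multiplicative use of Cv are assumptions.

def extract(rgb, split_index):
    return rgb[:split_index], rgb[split_index:]    # RGB1 (first area), RGB2 (second area)

def synthesize(rgb1, rgb2_compensated):
    return rgb1 + rgb2_compensated                 # final compensation signal RGB'

def drive_frame(rgb, split_index, cv, first_mode):
    rgb1, rgb2 = extract(rgb, split_index)
    if first_mode:                                 # flag fg2 enabled only in the first mode
        rgb2 = [max(0, min(255, round(v * cv))) for v in rgb2]
    return synthesize(rgb1, rgb2)

print(drive_frame([48, 48, 48, 48], split_index=2, cv=0.94, first_mode=True))
```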
As used in connection with various embodiments of the disclosure, the term “block” may include a unit implemented in hardware, software, or firmware, and may be interchangeably used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry”. A block may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment of the disclosure, the compensation determination block, the data compensation block, the image determination block, the compensation value generation block, the signal extraction block, the synthesis block, the unit determiner, the comparator, the determiner, the counter, the state accumulator, the afterimage component determiner, or the compensation value determiner may be implemented in a form of an application-specific integrated circuit (ASIC).
Although the embodiments of the present disclosure have been described, it is understood that the present disclosure should not be limited to these embodiments, but various changes and modifications can be made by one of ordinary skill in the art within the spirit and scope of the present disclosure as hereinafter claimed. Therefore, the disclosed subject matter should not be limited to any single embodiment described herein, and the scope of the present invention shall be determined according to the attached claims.