This application claims priority to Korean Patent Application No. 10-2023-0033646, filed on Mar. 15, 2023, and all the benefits accruing therefrom under 35 U.S.C. § 119, the content of which in its entirety is herein incorporated by reference.
Embodiments of the present disclosure described herein relate to a display device and a method of driving the display device, and more particularly, relate to a display device including a display panel having an increased lifetime and a method of driving the display device.
Various types of display devices may be used to provide image information. For example, an organic light emitting display device, an inorganic light emitting display device, a quantum dot display device, a liquid crystal display device, and the like may be used as display devices. In particular, an organic light emitting display device includes organic light emitting elements having a limited lifetime. Therefore, lifespan differences may occur between pixels, and an afterimage may occur.
Embodiments of the present disclosure provide a display device including a display panel having an increased lifetime.
Embodiments of the present disclosure provide a method of driving a display device capable of improving the lifespan of a display panel.
According to an embodiment of the present disclosure, a display device includes a display panel that displays an image, and a controller that receives image data corresponding to a frame and drives the display panel. The controller divides the image data into a plurality of blocks and determines whether the image is a fixed image based on an inter-frame variation of a block of the plurality of blocks, and the controller changes an afterimage prevention operation mode when the number of blocks displaying the fixed image for a period equal to or greater than a first reference value among the plurality of blocks is equal to or greater than a threshold number.
According to an embodiment, when the controller determines that the inter-frame variation of a block of the plurality of blocks is equal to or less than a first threshold, the controller may be configured to determine that the block displays the fixed image.
According to an embodiment, when the controller determines that the block displays the fixed image, the controller may be configured to accumulate a stack value of the block, and when the controller determines that the block displays a moving image, the controller may be configured to subtract from the stack value of the block.
According to an embodiment, the controller may be configured to determine a state of each of the plurality of blocks, update a plurality of stack values corresponding to the plurality of blocks on a one-to-one basis based on the states of the plurality of blocks, where updating the plurality of stack values is every preset period, and change the afterimage prevention operation mode when a number of blocks having a stack value equal to or greater than the first reference value among the plurality of blocks is equal to or greater than the threshold number.
According to an embodiment, the state may be a first state of displaying a fixed image or a second state of displaying a moving image.
According to an embodiment, the controller may divide second image data corresponding to a second frame into a second plurality of blocks, where the second frame is subsequent or prior to the frame, calculate a difference between first block image information of each of the plurality of blocks of the frame and second block image information of each of the second plurality of blocks of the second frame, determine that a block of the plurality of blocks is in a first state when a ratio of the difference associated with the block is less than or equal to a state reference value, and determine that the block is in a second state when the ratio of the difference is greater than the state reference value.
According to an embodiment, each of the first block image information and the second block image information may include at least one of an average luminance, an average gray level, and data about a detected edge of each of the plurality of blocks and the second plurality of blocks.
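The block-state determination described above may be sketched, for illustration only, as follows. The choice of average luminance as the block image information, the specific state reference value, and the function names are assumptions for this sketch; the disclosure itself permits average gray level or edge data as well.

```python
# Illustrative sketch only: classify one block as displaying a fixed image
# (first state) or a moving image (second state) from the relative
# inter-frame variation of its average luminance. The 5% reference value
# is an assumed placeholder, not a value taken from the disclosure.
FIXED, MOVING = 0, 1  # first state, second state

def block_state(prev_avg_luma, curr_avg_luma, state_ref=0.05):
    """Return FIXED when the variation ratio is at or below the state
    reference value, MOVING when it exceeds the reference value."""
    denom = max(prev_avg_luma, 1e-9)  # guard against division by zero
    ratio = abs(curr_avg_luma - prev_avg_luma) / denom
    return FIXED if ratio <= state_ref else MOVING
```

For example, a block whose average luminance is unchanged between two frames would be classified in the first state, while a large luminance swing would be classified in the second state.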
According to an embodiment, the plurality of blocks may include a first block, and the plurality of stack values may include a first stack value corresponding to the first block, and when the first block is in the first state, the controller may accumulate a first value to the first stack value, and when the first block is in the second state, the controller may subtract a second value from the first stack value.
According to an embodiment, the first value may be based on an average luminance of the first block.
According to an embodiment, the first value when the average luminance is a first luminance may be less than the first value when the average luminance is a second luminance greater than the first luminance.
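The stack-value update described in the preceding paragraphs may be sketched as follows. The particular increment and decrement values, and the single luminance cutoff, are assumptions for illustration; the disclosure requires only that a block with higher average luminance accumulate a larger first value.

```python
# Illustrative sketch only: accumulate a first value to a block's stack
# value while the block is in the first (fixed-image) state, and subtract
# a second value while it is in the second (moving-image) state. The
# luminance cutoff of 128 and the values 1 and 2 are assumed placeholders.
def update_stack(stack, state, avg_luma, decrement=1):
    """Return the updated stack value for one block for one period."""
    if state == "fixed":
        increment = 2 if avg_luma >= 128 else 1  # brighter block stacks faster
        return stack + increment
    return max(stack - decrement, 0)  # clamp so the stack never goes negative
```

The clamp at zero is a design assumption of this sketch: a block that has long displayed moving images cannot bank "negative wear" against future fixed images.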
According to an embodiment, the afterimage prevention operation mode may include a first operation mode and a second operation mode different from the first operation mode, and the controller may change the afterimage prevention operation mode from the first operation mode to the second operation mode when the number of the blocks is equal to or greater than the threshold number.
According to an embodiment, in the first operation mode, the controller may reduce the luminance of the image data by a first ratio when the luminance of the image data is greater than or equal to a first reference luminance, in the second operation mode, the controller may reduce the luminance of the image data by a second ratio when the luminance of the image data is equal to or greater than a second reference luminance, and the first reference luminance may be greater than or equal to the second reference luminance.
According to an embodiment, the second ratio may be greater than or equal to the first ratio.
According to an embodiment, the controller may shift and display the image data by X number of pixels in a first period in the first operation mode, may shift and display the image data by Y number of pixels in a second period in the second operation mode, and the Y number may be greater than or equal to the X number.
According to an embodiment, the second period may be less than or equal to the first period.
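The two-mode scheme above may be sketched, under assumed parameter values, as follows. The reference luminances, reduction ratios, shift distances, and periods below are placeholders chosen only to satisfy the stated inequalities (first reference luminance ≥ second, second ratio ≥ first, Y ≥ X, second period ≤ first); none are values from the disclosure.

```python
# Illustrative sketch only: select the afterimage prevention operation mode
# from the per-block stack values. All numeric parameters are assumptions.
MODE1 = {"ref_luma": 200, "reduce_ratio": 0.10, "shift_px": 1, "period_s": 60}
MODE2 = {"ref_luma": 150, "reduce_ratio": 0.20, "shift_px": 2, "period_s": 30}

def select_mode(stacks, first_reference, threshold_count):
    """Change from the first to the second operation mode when the number
    of blocks whose stack value has reached the first reference value is
    equal to or greater than the threshold number."""
    n_fixed = sum(1 for s in stacks if s >= first_reference)
    return MODE2 if n_fixed >= threshold_count else MODE1
```

Under this sketch, the second mode intervenes earlier (lower reference luminance), dims more aggressively, and shifts the image farther and more often than the first mode.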
According to an embodiment of the present disclosure, a method of driving a display device includes receiving image data, dividing the image data into a plurality of blocks and determining a state of each of the plurality of blocks, updating a plurality of stack values corresponding to the plurality of blocks on a one-to-one basis based on the state of each of the plurality of blocks, wherein updating the plurality of stack values is at preset periods, detecting the number of blocks having a stack value greater than or equal to a first reference value among the plurality of blocks, and changing an afterimage prevention operation mode when the number of blocks having the stack value equal to or greater than the first reference value is greater than or equal to a threshold number.
According to an embodiment, the determining of the state may include calculating first block image information of each of the plurality of blocks of a first frame, dividing second image data corresponding to a second frame into a second plurality of blocks, where the second frame is subsequent or prior to the first frame, calculating second block image information of each of the second plurality of blocks of the second frame, calculating a difference between the first block image information and the second block image information, comparing the difference with a state reference value, determining that a block of the plurality of blocks is in a first state when a ratio of the difference associated with the block is less than or equal to the state reference value, and determining that the block is in a second state different from the first state when the ratio of the difference associated with the block exceeds the state reference value.
According to an embodiment, the plurality of blocks may include a first block, and the plurality of stack values may include a first stack value corresponding to the first block, and the updating of the plurality of stack values may include accumulating a first value to the first stack value when the first block is in the first state and subtracting a second value from the first stack value when the first block is in the second state.
According to an embodiment, the first value may be based on an average luminance of the first block, and the first value when the average luminance is a first luminance may be less than the first value when the average luminance is a second luminance higher than the first luminance.
According to an embodiment, each of the first block image information and the second block image information may include at least one of an average luminance, an average gray level, and data about a detected edge of each of the plurality of blocks and the second plurality of blocks.
A detailed description of each drawing is provided to facilitate a more thorough understanding of the drawings referenced in the detailed description of the present disclosure.
The invention now will be described more fully hereinafter with reference to the accompanying drawings, in which various embodiments are shown. This invention may, however, be embodied in many different forms, and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In the specification, when one component (or area, layer, part, or the like) is referred to as being “on”, “connected to”, or “coupled to” another component, it should be understood that the former may be directly on, connected to, or coupled to the latter, and also may be on, connected to, or coupled to the latter via a third intervening component.
Like reference numerals refer to like components. Also, in the drawings, the thickness, ratio, and dimension of components are exaggerated for effective description of the technical contents. The term “and/or” includes one or more combinations of the associated listed items.
The terms “first”, “second”, etc. are used to describe various components, but the components are not limited by the terms. The terms are used to differentiate one component from another component. For example, a first component may be named as a second component, and vice versa, without departing from the spirit or scope of the present disclosure. A singular form, unless otherwise stated, includes a plural form.
Also, the terms “under”, “beneath”, “on”, and “above” are used to describe a relationship between components illustrated in a drawing. The terms are relative and are described with reference to a direction indicated in the drawing. It will be understood that when an element is referred to as being “on” another element, it can be directly on the other element or intervening elements may be present therebetween. In contrast, when an element is referred to as being “directly on” another element, there are no intervening elements present.
It will be understood that the terms “include”, “comprise”, “have”, etc. specify the presence of features, numbers, steps, operations, elements, or components, described in the specification, or a combination thereof, not precluding the presence or additional possibility of one or more other features, numbers, steps, operations, elements, or components or a combination thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, “a,” “an,” “the,” and “at least one” do not denote a limitation of quantity, and are intended to include both the singular and plural, unless the context clearly indicates otherwise. For example, “an element” has the same meaning as “at least one element,” unless the context clearly indicates otherwise. “At least one” is not to be construed as limiting “a” or “an.” “Or” means “and/or.” As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” or “includes” and/or “including” when used in this specification, specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof.
It should be appreciated that various embodiments of the disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B”, “at least one of A and B”, “at least one of A or B”, “A, B, or C”, “at least one of A, B, and C”, and “at least one of A, B, or C”, may include any one of, or all possible combinations of the items enumerated together in a corresponding one of the phrases. As used herein, such terms as “1st” and “2nd”, or “first” and “second” may be used to simply distinguish a corresponding component from another, and does not limit the components in other aspect (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with”, “coupled to”, “connected with”, or “connected to” another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element.
Furthermore, relative terms, such as “lower” or “bottom” and “upper” or “top,” may be used herein to describe one element's relationship to another element as illustrated in the Figures. It will be understood that relative terms are intended to encompass different orientations of the device in addition to the orientation depicted in the Figures. For example, if the device in one of the figures is turned over, elements described as being on the “lower” side of other elements would then be oriented on “upper” sides of the other elements. The term “lower,” can therefore, encompasses both an orientation of “lower” and “upper,” depending on the particular orientation of the figure. Similarly, if the device in one of the figures is turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. The terms “below” or “beneath” can, therefore, encompass both an orientation of above and below.
The terms “part” and “unit” mean a software component or a hardware component that performs a specific function. The hardware component may include, for example, a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). The software component may refer to executable code and/or data used by executable code in an addressable storage medium. Thus, software components may be, for example, object-oriented software components, class components, and working components, and may include processes, functions, properties, procedures, subroutines, program code segments, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, or variables.
Unless defined otherwise, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. In addition, terms such as terms defined in commonly used dictionaries should be interpreted as having a meaning consistent with the meaning in the context of the related technology, and should not be interpreted as an ideal or excessively formal meaning unless explicitly defined in the present disclosure.
Hereinafter, embodiments of the present disclosure will be described with reference to accompanying drawings.
Referring to
The display device DD has a rectangular shape having a relatively longer side in a first direction DR1 and a relatively shorter side in a second direction DR2 intersecting the first direction DR1. However, the shape of the display device DD is not limited thereto, and the shape may be modified into various shapes. It is to be understood that the terms “longer” and “shorter,” when recited with respect to a shape of an object (e.g., “longer sides” and “shorter sides” of an object), are relative terms expressing dimensions of the object. The display device DD may display an image IM in a third direction DR3, on a display surface IS parallel to a plane corresponding to the first direction DR1 and the second direction DR2. The display surface IS on which the image IM is displayed may correspond to a front surface of the display device DD.
According to an embodiment, a front surface (or top surface) and a rear surface (or a bottom surface) of each of members of the display device DD are defined based on a direction that the image IM is displayed. The front surface and the rear surface may be opposite to each other in the third direction DR3, and a normal direction of each of the front surface and the rear surface may be parallel to the third direction DR3. The distance between the front surface and the rear surface in the third direction DR3 may correspond to the thickness of the display device DD in the third direction DR3. In one or more embodiments, the directions indicated by the first, second, and third directions DR1, DR2, and DR3 may be a relative concept (e.g., the directions are relative to one another) and may be changed to different directions.
The display device DD may detect an external input applied from the outside of the display device DD. The external input may include various types of inputs provided from the outside of the display device DD. According to an embodiment of the present disclosure, the display device DD may detect a user external input, which is applied from the outside of the display device DD. The user external input may be any one or a combination of various types of external inputs, such as, for example, a part of the user's body, light, heat, gaze, or pressure. In addition, the display device DD may detect the user external input TC (not illustrated) (also referred to herein as a touch input, a button input, or the like), which is applied to the side surface or the back surface of the display device DD depending on the structures of the display device DD, and is not limited to any one embodiment. As an example in accordance with the present disclosure, the user external input may include an input provided using an input device (e.g., a stylus pen, an active pen, a touch pen, an electronic pen, an e-pen, etc.).
The display surface IS of the display device DD may be divided into a display area DA and a non-display area NDA. The display area DA may be an area in which the image IM is displayed. A user visually perceives the image IM through the display area DA. In the example embodiment of
The non-display area NDA is adjacent to the display area DA. The non-display area NDA may have a given color. The non-display area NDA may surround the display area DA. Accordingly, the shape of the display area DA may be substantially defined by the non-display area NDA. However, the described aspects are illustrated by way of example, and the non-display area NDA may be disposed adjacent to a single side of the display area DA or may be omitted. According to aspects of the present disclosure, the display device DD may include various embodiments and is not limited to any one embodiment.
As illustrated in
According to an embodiment of the present disclosure, the display panel DP may be a light emitting display panel. For example, the display panel DP may be an organic light emitting display panel, an inorganic light emitting display panel, an organic-inorganic light emitting display panel, or a quantum dot light emitting display panel. A light emitting layer of the organic light emitting display panel may include an organic light emitting material. A light emitting layer of the inorganic light emitting display panel may include an inorganic light emitting material. A light emitting layer of the quantum dot light emitting display panel may include a quantum dot, a quantum rod, and the like. In accordance with one or more embodiments of the present disclosure, the following description is provided in which the display panel DP is an organic light emitting display panel.
The display panel DP outputs the image IM, and the output image IM may be displayed on the display surface IS.
The input sensing layer ISP may be disposed on the display panel DP to sense an external input. The input sensing layer ISP may be directly disposed on the display panel DP. According to an embodiment of the present disclosure, the input sensing layer ISP may be formed on the display panel DP through a process subsequent to the fabrication of the display panel DP. In detail, when the input sensing layer ISP is directly disposed on the display panel DP, an internal adhesive film (not illustrated) is not disposed between the input sensing layer ISP and the display panel DP. However, in an alternative example, the internal adhesive film may be disposed between the input sensing layer ISP and the display panel DP. In this case, the input sensing layer ISP is not fabricated together with the display panel DP through the subsequent processes. In other words, after fabricating the input sensing layer ISP through a process separate from a fabrication process of the display panel DP, the input sensing layer ISP may be fixed on a top surface of the display panel DP through the internal adhesive film. In an embodiment of the present disclosure, the input sensing layer ISP may be omitted.
The window WM may include a transparent material via which the image IM may be visible. For example, the window WM may be formed of glass, sapphire, plastic, etc. An example in which the window WM is implemented with a single layer is illustrated, but the present disclosure is not limited thereto. For example, the window WM may include a plurality of layers.
In one or more embodiments, although not illustrated, the non-display area NDA of the display device DD may be substantially provided by printing one area of the window WM with a material including a specific color. As an embodiment of the present disclosure, the window WM may further include a light shielding pattern for defining the non-display area NDA. The light shielding pattern, which has the form of a film having a color, may be, for example, formed through one or more coating techniques.
The window WM may be coupled to the display module DM through an adhesive film. As an embodiment of the present disclosure, the adhesive film may include an optically clear adhesive (OCA) film. However, the adhesive film is not limited thereto, and may include an adhesive agent or an adhesion agent. For example, the adhesive film may include optically clear resin (OCR) or a pressure sensitive adhesive (PSA) film.
An anti-reflective layer may be further interposed between the window WM and the display module DM. The anti-reflective layer reduces reflectance of external light incident from an upper portion of the window WM. According to an embodiment of the present disclosure, the anti-reflective layer may include a retarder and a polarizer. The retarder may be a retarder of a film type or a liquid crystal coating type and may include a λ/2 retarder and/or a λ/4 retarder. The polarizer may also be a film type or a liquid crystal coating type. The film type may include a stretch-type synthetic resin film, and the liquid crystal coating type may include liquid crystals arranged in a given direction. The retarder and the polarizer may be implemented with one polarization film.
As an example in accordance with the present disclosure, the anti-reflection layer may also include color filters. The arrangement of color filters may be determined in consideration of colors of light generated from a plurality of pixels PX (described with reference to
The display module DM may display the image IM according to electrical signals and may transmit/receive information about an external input. The display module DM may be divided into an effective area AA and a non-effective area NAA. The effective area AA may be defined as an area through which the image IM provided from the display module DM is output. The effective area AA may also be defined as an area in which the input sensing layer ISP senses an external input applied from the outside.
The non-effective area NAA is adjacent to the effective area AA. For example, the non-effective area NAA may surround the effective area AA. However, aspects of the non-effective area NAA and the effective area AA are illustrated by way of an example and are not limited thereto. The non-effective area NAA may be defined in various shapes and is not limited to any one embodiment. According to an embodiment, the effective area AA of the display module DM may correspond to at least a portion of the display area DA.
The display module DM may further include a main circuit board MCB, a plurality of flexible circuit films D-FCB, and a plurality of driving chips DIC. The main circuit board MCB may be connected to the flexible circuit films D-FCB to be electrically connected to the display panel DP. The flexible circuit films D-FCB are connected to the display panel DP to electrically connect the display panel DP and the main circuit board MCB. The main circuit board MCB may include a plurality of driving devices. The plurality of driving devices may include a circuit part (also referred to herein as circuitry or a circuit portion) to drive the display panel DP. The driving chips DIC may be mounted on the flexible circuit films D-FCB.
As an example in accordance with the present disclosure, the flexible circuit films D-FCB may include a first flexible circuit film D-FCB1, a second flexible circuit film D-FCB2, and a third flexible circuit film D-FCB3. The driving chips DIC may include a first driving chip DIC1, a second driving chip DIC2, and a third driving chip DIC3. The first to third flexible circuit films D-FCB1, D-FCB2, and D-FCB3 may be disposed to be spaced apart from each other in the first direction DR1 and may be connected to the display panel DP to electrically connect the display panel DP and the main circuit board MCB. The first driving chip DIC1 may be mounted on the first flexible circuit film D-FCB1. The second driving chip DIC2 may be mounted on the second flexible circuit film D-FCB2. The third driving chip DIC3 may be mounted on the third flexible circuit film D-FCB3.
However, embodiments of the present disclosure are not limited thereto. For example, the display panel DP may be electrically connected to the main circuit board MCB through one flexible circuit film, and a single driving chip may be mounted on the one flexible circuit film. In addition, the display panel DP may be electrically connected to the main circuit board MCB through four or more flexible circuit films, and driving chips may be respectively mounted on the flexible circuit films.
Although
The input sensing layer ISP may be electrically connected to the main circuit board MCB through the flexible circuit films D-FCB. However, embodiments of the present disclosure are not limited thereto. In detail, the display module DM may additionally include a separate flexible circuit film for electrically connecting the input sensing layer ISP to the main circuit board MCB.
The display device DD further includes an external case EDC accommodating the display module DM. The external case EDC may be combined with the window WM to define the appearance of the display device DD. The external case EDC absorbs shock applied from the outside and prevents foreign material/moisture or the like from infiltrating into the display module DM such that components accommodated in the external case EDC are protected. In one or more embodiments, in accordance with example aspects of the present disclosure, the external case EDC may be provided in a form in which a plurality of accommodating members are combined.
The display device DD according to an embodiment may further include an electronic module including various functional modules for operating the display module DM, a power supply module for supplying a power necessary for overall operations of the display device DD, and a bracket coupled with the display module DM and/or the external case EDC to partition an inner space of the display device DD.
Referring to
Each of the scan lines SL1 to SLn may extend in the first direction DR1, and the scan lines SL1 to SLn may be arranged to be spaced apart from each other in the second direction DR2. Each of the data lines DL1 to DLm may extend in the second direction DR2, and the data lines DL1 to DLm may be arranged to be spaced apart from each other in the first direction DR1.
The display driver 100C may include a controller 100C1, a scan driving circuit 100C2, and a data driving circuit 100C3.
The controller 100C1 may receive image data RGB and a control signal D-CS from a main driver. The controller 100C1 may generate corrected image data RGBc (described with reference to
The controller 100C1 may generate a driving signal DS by converting the data format of the image data RGB or the corrected image data RGBc to meet the interface specification with the data driving circuit 100C3 (e.g., such that the resulting data format satisfies the interface specification of the data driving circuit 100C3).
The control signal D-CS may include an input vertical synchronization signal, an input horizontal synchronization signal, a main clock, and a data enable signal. The controller 100C1 may generate a first control signal CONT1 and a vertical synchronization signal Vsync based on the control signal D-CS, and may output the first control signal CONT1 and the vertical synchronization signal Vsync to the scan driving circuit 100C2. The controller 100C1 may generate a second control signal CONT2 and a horizontal synchronization signal Hsync based on the control signal D-CS, and may output the second control signal CONT2 and the horizontal synchronization signal Hsync to the data driving circuit 100C3. The first control signal CONT1 and the second control signal CONT2 are signals associated with (e.g., necessary for) the operation of the scan driving circuit 100C2 and the data driving circuit 100C3, and control signals associated with the operation of the scan driving circuit 100C2 and the data driving circuit 100C3 are not particularly limited thereto.
The controller 100C1 may output the driving signal DS obtained by processing the image data RGB or the corrected image data RGBc to meet an operating condition of the display panel DP to the data driving circuit 100C3. The scan driving circuit 100C2 drives the plurality of scan lines SL1 to SLn in response to the first control signal CONT1 and the vertical synchronization signal Vsync.
In an embodiment of the present disclosure, the scan driving circuit 100C2 may be embedded in the display panel DP. For example, the scan driving circuit 100C2 may be formed in the same process as transistors in the pixel PX, but is not limited thereto. For example, the scan driving circuit 100C2 may be implemented as an integrated circuit (IC) and may be directly mounted on a predetermined area of the display panel DP or may be mounted on a separate printed circuit board in a chip-on-film (COF) manner to be electrically connected with the display panel DP.
The data driving circuit 100C3 may output a grayscale voltage to the data lines DL1 to DLm in response to the second control signal CONT2, the horizontal synchronization signal Hsync, and the driving signal DS provided from the controller 100C1. The data driving circuit 100C3 may be included in the driving chips DIC (described with reference to
Referring to
The term “afterimage condition” may refer to a condition in which an image is displayed that has the potential to cause burn-in of light emitting elements. For example, if a fixed still image is displayed for a long period of time, the light emitting elements of the pixels corresponding to that area may be burned in, causing an afterimage or a mura. The terms “period,” “temporal period,” “time period,” “temporal duration,” and “duration” may be used interchangeably herein. Example aspects of changing an afterimage prevention operation mode in accordance with aspects of the present disclosure will be described in detail below.
The controller 100C1 may include an image analyzer 100C11, an afterimage condition detector 100C12, an operating condition changer 100C13, and a processor 100C14. In an embodiment of the present disclosure, the controller 100C1 may further include a memory MM. Alternatively, the memory MM may be provided outside of and be electrically coupled to the controller 100C1.
Referring to
The block image information obtained by the image analyzer 100C11 may be stored in the memory MM. According to an embodiment of the present disclosure, the block image information for each of the blocks may be stored in the memory MM. Therefore, compared to the case where all information of a frame is stored, the amount of data may be reduced and the processing speed in subsequent operations may be improved. However, the present disclosure is not limited thereto. For example, entire information of a frame may be stored in the memory MM.
The afterimage condition detector 100C12 may detect an afterimage vulnerable condition (S300). The afterimage condition detector 100C12 may determine whether the image is an afterimage vulnerable image in a use environment lasting several minutes or more. The afterimage vulnerable condition detection operation of the afterimage condition detector 100C12 will be described in detail later herein.
The operating condition changer 100C13 may determine an afterimage prevention operating condition depending on the result detected by the afterimage condition detector 100C12 (S400). For example, the operating condition changer 100C13 may activate or deactivate the afterimage prevention operation. In some aspects, changing the afterimage prevention operating condition may include changing an afterimage prevention operation mode from a first operation mode to a second operation mode different from the first operation mode. Therefore, when the display device DD (described with reference to
The processor 100C14 may output the corrected image data RGBc obtained by correcting the image data RGB based on the determined operation mode (e.g., based on whether the afterimage prevention operation is activated or deactivated).
Referring to
The afterimage condition detector 100C12 determines whether an amount of change is less than or equal to a first threshold value (S320). When the amount of change is less than or equal to the first threshold, the corresponding block (hereinafter, referred to as a first block) is determined as a fixed image (S330-1), and when the amount of change exceeds the first threshold, the corresponding block (hereinafter, referred to as a second block) is determined as a moving image (S330-2). Expressed another way, when the afterimage condition detector 100C12 determines the amount of change of a given block is less than or equal to the first threshold, the afterimage condition detector 100C12 determines (at S330-1) that the given block is associated with a fixed image. Further, for example, when the afterimage condition detector 100C12 determines the amount of change of a given block is greater than the first threshold, the afterimage condition detector 100C12 determines (at S330-2) that the given block is associated with a moving image. The terms “fixed image,” “static image,” and “unchanged image” may be used interchangeably herein. The terms “moving image” and “changed image” may be used interchangeably herein.
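For illustration only, the classification step (S320 through S330-2) can be expressed as a minimal Python sketch. The function name and the threshold value below are hypothetical and are not part of the disclosure; the sketch only reflects the comparison of the inter-frame amount of change against the first threshold.

```python
def classify_block(change_amount, first_threshold):
    """Classify a block as displaying a fixed image or a moving image
    based on its inter-frame amount of change (steps S320/S330)."""
    if change_amount <= first_threshold:
        return "fixed"   # S330-1: the block displays a fixed image
    return "moving"      # S330-2: the block displays a moving image

# Example with a hypothetical first threshold of 5: a block whose
# amount of change is 3 is fixed; a block whose amount is 8 is moving.
state_a = classify_block(3, 5)
state_b = classify_block(8, 5)
```

A change exactly equal to the threshold is treated as a fixed image, matching the "less than or equal to" wording of S320.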
A first stack value of the first block determined as the fixed image may be accumulated (S340-1), and a second stack value of the second block determined as the moving image may be reduced (S340-2). For example, when the first block is in a first state (or displaying the fixed image), the afterimage condition detector 100C12 may accumulate a first value to the first stack value corresponding to the first block, and when the first block changes to a second state (or displaying the moving image), the afterimage condition detector 100C12 may subtract a second value from the stack value corresponding to the first block.
In an embodiment of the present disclosure, the first value may be added equally, without a weight, whenever the corresponding block is determined to be the fixed image. Alternatively, in an embodiment of the present disclosure, the first value may be a value to which a weight is applied based on data on the luminance or edge of the first block determined to be the fixed image. For example, a greater weight may be applied as the luminance is higher or as the data value of the edge is larger. Expressed another way, for a block determined to be in a first state (e.g., the block is associated with a fixed image), the afterimage condition detector 100C12 may add a first value associated with the block to the first stack value without applying a weighting factor to the first value. Additionally, or alternatively, for the block determined to be in the first state, the afterimage condition detector 100C12 may apply a weighting factor to the first value, and the afterimage condition detector 100C12 may increase or decrease the weighting factor based on the luminance or edge of the block.
In an embodiment of the present disclosure, when the afterimage condition detector 100C12 determines that a block that was displaying a fixed image has changed state to display a moving image, the stack value may be moved farther from a reference value (e.g., a second threshold value) associated with an afterimage vulnerable block. The first value accumulated to a stack value and the second value subtracted from a stack value may be the same, but are not particularly limited thereto. In an embodiment of the present disclosure, the second stack value of the second block determined as the moving image may be initialized to "0", and more generally the stack value may be decreased in various ways so as to move away from the second threshold.
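The stack update of S340-1 and S340-2, including the reset-to-zero variant, can be sketched as follows. The function name, the default increment and decrement of 1, and the clamping of the stack value at zero are assumptions for illustration, not requirements of the disclosure.

```python
def update_stack(stack_value, state, first_value=1, second_value=1, reset=False):
    """Update a block's stack value (steps S340-1 / S340-2).

    A block in the first state (fixed image) accumulates first_value;
    a block in the second state (moving image) either has second_value
    subtracted or, in the reset variant, is initialized to zero.
    Clamping at zero is an assumption of this sketch."""
    if state == "fixed":
        return stack_value + first_value
    if reset:
        return 0
    return max(stack_value - second_value, 0)

# Ten consecutive fixed-image determinations, then one moving-image one:
s = 0
for _ in range(10):
    s = update_stack(s, "fixed")
s_after_move = update_stack(s, "moving")
reset_value = update_stack(7, "moving", reset=True)
```

The decrement path moves the stack value away from the second threshold, while the reset path implements the initialization-to-"0" variant.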
The afterimage condition detector 100C12 compares the stack value with the second threshold value (S350). When the stack value exceeds the second threshold value, the corresponding block may be determined to be an afterimage vulnerable block (S360-1), and when the stack value is less than the second threshold value, the corresponding block may be determined not to be an afterimage vulnerable block (S360-2). The second threshold value may be referred to as a threshold stack value. Expressed another way, when the afterimage condition detector 100C12 determines the stack value of a given block exceeds the second threshold value (threshold stack value), the afterimage condition detector 100C12 determines (at S360-1) that the given block is an afterimage vulnerable block. Further, for example, when the afterimage condition detector 100C12 determines the stack value of a given block does not exceed the second threshold value, the afterimage condition detector 100C12 determines (at S360-2) that the given block is not an afterimage vulnerable block.
As an example, assume that the stack value is accumulated by "1" each time the first block is determined to be the fixed image and that the second threshold value is "10". Expressed another way, in this example, the afterimage condition detector 100C12 adds a value of "1" to the stack value when the afterimage condition detector 100C12 determines the first block is associated with a fixed image. In this case, when the afterimage condition detector 100C12 determines that the fixed image is continuously displayed at the first block for a time period exceeding 10 times a specified time period for calculating the amount of change, the afterimage condition detector 100C12 determines the first block to be an afterimage vulnerable block. The aforementioned second threshold value is an example and is not necessarily limited thereto. In some aspects, when a weight is applied to the accumulated value (e.g., the value of "1"), a block displaying a fixed image that is more vulnerable to an afterimage may be more quickly determined by the afterimage condition detector 100C12 to be an afterimage vulnerable block. For example, in the case of a block to which a weight of 2 is applied under the same conditions described herein, the afterimage condition detector 100C12 may determine the block to be an afterimage vulnerable block about twice as fast compared to a case in which a weight is not applied.
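The worked example above can be checked with a short sketch that counts the determinations needed for the accumulated stack value to reach the threshold stack value. The function name is hypothetical; the numbers (increment 1, threshold 10, weight 2) follow the example in the text.

```python
def frames_until_vulnerable(increment, threshold):
    """Count consecutive fixed-image determinations needed before the
    accumulated stack value reaches the threshold stack value."""
    stack, frames = 0, 0
    while stack < threshold:
        stack += increment
        frames += 1
    return frames

# Unweighted increment of 1 with threshold 10: flagged after 10
# determinations. A weight of 2 halves that, i.e. about twice as fast.
unweighted = frames_until_vulnerable(1, 10)
weighted = frames_until_vulnerable(2, 10)
```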
The afterimage condition detector 100C12 compares the number of afterimage vulnerable blocks with a third threshold value (S370). In an example, the third threshold value may be a threshold number of afterimage vulnerable blocks. When the number of afterimage vulnerable blocks is equal to or greater than the third threshold value, the afterimage condition detector 100C12 may determine that the image displayed on the display device DD (described with reference to
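The mode decision of S370 reduces to comparing the count of afterimage vulnerable blocks against the third threshold value. The function and mode names below are hypothetical labels for the first and second operation modes described herein.

```python
def select_operation_mode(vulnerable_block_count, third_threshold):
    """Decide the afterimage prevention operation mode (step S370).

    When the number of afterimage vulnerable blocks is equal to or
    greater than the threshold number, the displayed image is treated
    as vulnerable to afterimage and the strengthened second operation
    mode is selected; otherwise the first operation mode is kept."""
    if vulnerable_block_count >= third_threshold:
        return "second_mode"
    return "first_mode"

mode = select_operation_mode(12, 10)
```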
Referring to
Referring to
Referring to
The image data RGB may be input based on corresponding grayscale values (e.g., the average value of R, G, and B at a given block BL may be converted to a grayscale value). In an embodiment of the present disclosure, the first image information IFa may include an average grayscale value for each of the blocks BL, and the second image information IFb may include edge data with respect to each of the blocks BL calculated based on the grayscale values. The term “average grayscale” for a block BL may refer to the arithmetic mean of respective grayscale values of the pixels in the block BL. Alternatively or additionally, in an embodiment of the present disclosure, the image analyzer 100C11 may obtain the block image information IF of a block based on the luminance of the block after converting the grayscale value of the block into the luminance. For example, the first image information IFa may include an average luminance value for each of the blocks BL, and the second image information IFb may include the edge data with respect to each of the blocks BL calculated based on the luminance. The term “average luminance” for a block BL may refer to the arithmetic mean of respective luminance of the pixels in the block BL.
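A minimal sketch of computing block image information follows. The average is the arithmetic mean of the block's grayscale (or luminance) values, as stated above; the disclosure does not specify the edge operator, so the sum of absolute horizontal neighbor differences used here is only an illustrative stand-in, and the function name is hypothetical.

```python
def block_image_information(block):
    """Compute illustrative block image information: the average
    grayscale (first image information) and a simple edge metric
    (second image information) for a block given as a 2-D list.

    The edge metric here (sum of absolute horizontal neighbor
    differences) is an assumption of this sketch."""
    rows, cols = len(block), len(block[0])
    total = sum(sum(row) for row in block)
    average = total / (rows * cols)
    edge = sum(
        abs(block[r][c + 1] - block[r][c])
        for r in range(rows)
        for c in range(cols - 1)
    )
    return average, edge

# A 2x2 block with one brighter pixel: mean 15.0, edge metric 20.
avg, edge = block_image_information([[10, 10], [10, 30]])
```

Storing only these per-block values, rather than the full frame, is what reduces the amount of data held in the memory MM.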
Referring to
When the image IMex having a similar form to that illustrated in
According to an embodiment of the present disclosure, when the same image is continuously displayed on the display device DD (e.g., a similar type of image is continuously displayed), the controller 100C1 may determine whether the image IMex is vulnerable to afterimage and may determine an afterimage prevention operation mode. For example, when the controller 100C1 determines that the image IMex is vulnerable to afterimage, an afterimage prevention compensation operation may be further strengthened. Thus, the lifespan of the display panel DP (described with reference to
Referring to
Referring to
The afterimage condition detector 100C12 may determine that a block BL is in the first state when the difference DIF is less than a reference difference value or when a ratio of the difference DIF (example aspects of which are later described herein) is less than or equal to a state reference value, and the afterimage condition detector 100C12 may determine that the block BL is in the second state when the difference DIF exceeds the reference difference value or when the ratio of the difference DIF exceeds the state reference value. The first state may be a state of displaying a fixed image, and the second state may be a state of displaying a moving image.
In an embodiment of the present disclosure, the difference DIF may include a first difference between the average luminance (or average grayscale) of a block BL in the second frame FR2 and the average luminance (or average grayscale) of the block BL in the first frame FR1, and the difference DIF may include a second difference between the edge data of the block BL of the second frame FR2 and the edge data of the block BL of the first frame FR1. The ratio of the difference DIF may be the percentage of the first difference with respect to the average luminance (or average grayscale) of the block of the second frame FR2, or the percentage of the second difference with respect to the edge data of the block of the second frame FR2. The state reference value may be, for example, 5 percent. However, the described state reference value is an example and is not particularly limited thereto. The state reference value may correspond to the first threshold value described with reference to
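The ratio test can be sketched for the average-luminance case as follows. The 5 percent reference value follows the example in the text; the function name and the guard against a zero second-frame average are assumptions of this sketch.

```python
def is_fixed_state(prev_avg, curr_avg, reference_percent=5.0):
    """Determine the block state from the percentage change of the
    block's average luminance (or average grayscale) between frames.

    The percentage is taken with respect to the second (current)
    frame's average, as described in the text; the zero-average guard
    is an assumption of this sketch."""
    if curr_avg == 0:
        return prev_avg == 0
    ratio = abs(curr_avg - prev_avg) / curr_avg * 100.0
    return ratio <= reference_percent

# A change from 100 to 102 (about 2 percent) keeps the block in the
# first (fixed) state; a change from 100 to 150 does not.
stays_fixed = is_fixed_state(100, 102)
now_moving = is_fixed_state(100, 150)
```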
Referring to
In
Referring to
Referring to
Referring to
As described herein, for a given block BL, the afterimage condition detector 100C12 may compare the stack value of the block BL with the second threshold value, and when the stack value is greater than or equal to the second threshold value (threshold stack value described herein), the afterimage condition detector 100C12 may determine the corresponding block to be an afterimage vulnerable block.
Referring to
Referring to
For example, the controller 100C1 moves the image data RGB by ‘X’ number of pixels in a first period in the first operation mode and displays the image data RGB on the display panel DP. In addition, the controller 100C1 moves the image data RGB by ‘Y’ number of pixels in a second period in the second operation mode and displays the image data RGB on the display panel DP. Expressed another way, in the first operation mode and the second operation mode, the controller 100C1 may position the image data RGB according to pixel positions different from initial pixel positions associated with the image data RGB. In an example, the ‘Y’ number may be greater than or equal to the ‘X’ number. In some examples, the second period may be less than or equal to the first period.
In an example, the amount by which the image is shifted in the second operation mode may be greater than the amount by which the image is shifted in the first operation mode. Expressed another way, the amount by which the controller 100C1 moves the image data RGB may be greater in the second operation mode than in the first operation mode. In some examples, the period in which the image shifting operation occurs in the second operation mode may be equal to or less than the period in which the image shifting operation occurs in the first operation mode. Alternatively, the period in which the image shifting operation occurs in the second operation mode may be shorter than the period in which the image shifting operation occurs in the first operation mode, and the amount by which the image is shifted in the second operation mode may be equal to or greater than the amount by which the image is shifted in the first operation mode. In other words, the amount of compensation provided by the afterimage prevention compensation operation may be greater in the second operation mode than in the first operation mode.
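The pixel-shift parameters per operation mode can be sketched as follows. The numeric values (16 and 20 pixels, periods 60 and 45) are hypothetical; the disclosure only requires that the second-mode shift amount be greater than or equal to the first-mode amount and that the second period be less than or equal to the first period.

```python
def shift_parameters(mode, x_pixels=16, y_pixels=20,
                     first_period=60, second_period=45):
    """Return the (shift amount, shift period) pair for an operation
    mode of the image-shifting afterimage prevention operation.

    All numeric defaults are illustrative assumptions."""
    if mode == "second_mode":
        return y_pixels, second_period
    return x_pixels, first_period

amount1, period1 = shift_parameters("first_mode")
amount2, period2 = shift_parameters("second_mode")
# The second mode shifts farther and at least as often as the first.
assert amount2 >= amount1 and period2 <= period1
```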
A first graph GP1 is a graph of accumulated stress according to pixel positions when X is ±16 (e.g., when the controller 100C1 moves the image data RGB by ‘X’ number of pixels in a first period in the first operation mode), and a second graph GP2 is a graph of accumulated stress according to pixel positions when Y is ±20 (e.g., when the controller 100C1 moves the image data RGB by ‘Y’ number of pixels in a second period in the second operation mode). In this case, under a condition in which the accumulated stress is highest for the first graph GP1 and the second graph GP2, the width of the second graph GP2 is smaller than the width of the first graph GP1. Expressed another way, the second graph GP2 may have a smaller width than the first graph GP1 in the width of a portion receiving the maximum accumulated stress. Therefore, when the display device DD (described with reference to
Referring to
In an example of strengthening the afterimage prevention compensation operation, the luminance processing criterion may be lowered or the luminance processing ratio may be increased. Accordingly, when similar types of images are continuously input and the controller 100C1 changes the afterimage prevention compensation operation from the first operation mode APA1 to the second operation mode APA2, the area to be processed for luminance may be increased or the level of luminance processing may be increased.
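The luminance-processing example can be sketched as follows. The linear scaling, the function name, and the numeric criteria and ratios are assumptions of this sketch; the disclosure only states that the criterion may be lowered or the ratio increased in the second operation mode.

```python
def apply_luminance_processing(grayscale, criterion, ratio):
    """Apply an illustrative luminance-reduction step: pixel values at
    or above the processing criterion are scaled down by the
    processing ratio. The linear scaling is an assumption."""
    if grayscale >= criterion:
        return grayscale * (1.0 - ratio)
    return grayscale

# Hypothetical first mode: criterion 200, ratio 10 percent; a pixel of
# 190 is left unprocessed. Hypothetical second mode: criterion lowered
# to 180 and ratio raised to 20 percent; the same pixel is now reduced.
first_mode = apply_luminance_processing(190, 200, 0.10)
second_mode = apply_luminance_processing(190, 180, 0.20)
```

Lowering the criterion widens the area subject to processing, while raising the ratio deepens the level of processing, matching the two strengthening options described above.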
According to an embodiment of the present disclosure, when an image of a similar type is continuously displayed (expressed another way, at least a portion of the same image is continuously displayed), the controller may determine whether the image is vulnerable to afterimages and may determine an afterimage prevention operation mode. For example, when the image is determined to be vulnerable to afterimages, the controller may strengthen an afterimage prevention compensation operation as described herein. Thus, the lifetime of a display panel may be further improved. In addition, since a compensation operation is reinforced in an environment in which blocks of a predetermined ratio or more display a fixed image (or still image) for more than several minutes, the possibility of deterioration in image quality may be reduced.
Although embodiments of the present disclosure have been described for illustrative purposes, those skilled in the art will appreciate that various modifications and substitutions are possible without departing from the scope and spirit of the disclosure as disclosed in the accompanying claims. Accordingly, the technical scope of the present disclosure is not limited to the detailed description of this specification, but should be defined by the claims.
Number | Date | Country | Kind
---|---|---|---
10-2023-0033646 | Mar 2023 | KR | national