This application claims the benefit of priority from Japanese Patent Application No. 2020-152416 filed on Sep. 10, 2020, the entire contents of which are incorporated herein by reference.
The present invention relates to a display device and a display system.
A virtual reality (VR) system changes image display along with movement of the point of view to give the user a sense of virtual reality. As a display device for achieving such a VR system, a technique has been disclosed in which a head mounted display (hereinafter also referred to as an "HMD") is mounted on the head and displays images corresponding to the motion of the body or the like (see, for example, WO 2018/211672).
In the HMD used in the VR system, the displayed image is enlarged by an eyepiece, and thus the display panel is required to have higher definition. Because the displayed image is enlarged, gaps between pixels are likely to look like a grid. Thus, a liquid crystal display panel having a high pixel aperture ratio is used as the display panel of the HMD, which has the advantage of enabling image display with a reduced grid-like appearance. In lateral electric field mode liquid crystal display panels such as in-plane switching (IPS) panels, including fringe field switching (FFS) panels, however, as definition increases, the electric lines of force of adjacent pixels may influence each other, causing color shift and reducing the accuracy of displayed colors.
What is disclosed herein has been made in view of the above problem, and an object thereof is to provide a display device and a display system that can inhibit a reduction in the accuracy of displayed colors along with higher definition.
A display device according to an embodiment of the present disclosure includes a liquid crystal display panel having a display region, pixels provided in the display region and arranged in a matrix (row-column configuration) in a first direction and a second direction different from the first direction, and a pixel gradation corrector correcting a gradation value of a first pixel in accordance with gradation values of second pixels adjacent to the first pixel, the pixel gradation corrector multiplying a value indicating sensitivity with which the first pixel is influenced by the second pixels and a value indicating strength of influence that the second pixels exert on the first pixel together, and subtracting the multiplied value from an input gradation value of the first pixel to calculate an output gradation value to the first pixel.
A display system according to an embodiment of the present disclosure includes a display device including a liquid crystal display panel having a display region, and pixels provided in the display region and arranged in a matrix (row-column configuration) in a first direction and a second direction different from the first direction, and an image generation device including a pixel gradation corrector correcting a gradation value of a first pixel in accordance with gradation values of second pixels adjacent to the first pixel, the pixel gradation corrector multiplying a value indicating sensitivity with which the first pixel is influenced by the second pixels and a value indicating strength of influence that the second pixels exert on the first pixel together, and subtracting the multiplied value from an input gradation value of the first pixel to calculate an output gradation value to the first pixel.
The following describes aspects (embodiments) for carrying out the present disclosure in detail with reference to the accompanying drawings. The details described in the following embodiments do not limit the present disclosure. The components described in the following include those that can easily be conceived by those skilled in the art and those that are substantially the same. Further, the components described in the following can be combined with each other as appropriate. What is disclosed herein is only an example, and appropriate modifications that maintain the gist of the disclosure and can easily be conceived by those skilled in the art are naturally included in the scope of the present disclosure. For clearer description, the drawings may represent the width, thickness, shape, and the like of parts more schematically than in actual aspects; they are only examples and do not limit the interpretation of the present disclosure. In the present specification and drawings, components similar to those previously described with reference to earlier drawings are denoted by the same reference symbols, and detailed description thereof may be omitted as appropriate.
In the present embodiment, the display system 1 changes display in accordance with the motion of the user. For example, the display system 1 is a virtual reality (VR) system that stereoscopically displays a VR image representing a three-dimensional object or the like in a virtual space and changes the stereoscopic display along with the direction (position) of the head of the user to give the user a sense of virtual reality.
As illustrated in
In the present disclosure, the display device 100 is used as a head mounted type display device fixed to a mounting member 400 and mounted on the head of the user, for example. The display device 100 includes a display panel 110 for displaying an image generated by the image generation device 200. In the following, the aspect in which the display device 100 is fixed to the mounting member 400 is also referred to as a “head mounted display (HMD)”.
In the present disclosure, examples of the image generation device 200 include personal computers and electronic apparatuses such as game machines. The image generation device 200 generates a VR image corresponding to the position or the attitude of the head of the user and outputs the VR image to the display device 100. The image generated by the image generation device 200 is not limited to the VR image.
The display device 100 is fixed at a position such that, when the user wears the HMD, the display panel 110 is placed before both eyes of the user. Apart from the display panel 110, the display device 100 may include a voice output device such as a speaker at positions corresponding to both ears of the user wearing the HMD. As described below, the display device 100 may include a sensor detecting the position, the attitude, or the like of the head of the user wearing the display device 100, such as a gyro sensor, an acceleration sensor, or an azimuth sensor. The display device 100 may also incorporate the functions of the image generation device 200.
As illustrated in
In the present embodiment, the display panel 110 is assumed to be a lateral electric field mode liquid crystal display panel, such as an in-plane switching (IPS) panel including a fringe field switching (FFS) panel, in which each pixel includes a liquid crystal element.
In the display device 100 used in the VR system illustrated in
The display device 100 includes the two display panels 110. As to the two display panels 110, one is used as the display panel 110 for the left eye, whereas the other is used as the display panel 110 for the right eye.
Each of the two display panels 110 has a display region 111 and a display control circuit 112. The display panel 110 has a light source device (not illustrated) illuminating the display region 111 from behind.
In the display region 111, P0×Q0 (P0 in a row direction (an X direction) and Q0 in a column direction (a Y direction)) pixels Pix are arranged in a two-dimensional matrix (row-column configuration). In the present embodiment, a pixel density of the display region 111 is 806 ppi, for example.
The display panel 110 has scan lines extending in the X direction and signal lines extending in the Y direction crossing the X direction. In the display panel 110, the pixels Pix are each placed in an area surrounded by signal lines SL and scan lines GL. The pixels Pix each have a switching element (thin film transistor (TFT)) connected to a signal line SL and a scan line GL and a pixel electrode connected to the switching element. A plurality of pixels Pix arranged along an extension direction of the scan line GL are connected to one scan line GL. A plurality of pixels Pix arranged along an extension direction of the signal line SL are connected to one signal line SL.
Out of the two display panels 110, the display region 111 of one display panel 110 is for the right eye, whereas the display region 111 of the other display panel 110 is for the left eye. Although this example exemplifies a case in which the display device 100 has the two display panels 110 for the left eye and for the right eye, the display device 100 is not limited to the structure including two display panels 110. One display panel 110 may be employed, in which case the display region of the one display panel 110 may be divided into two so that an image for the right eye is displayed on the right half region, whereas an image for the left eye is displayed on the left half region, for example.
The display control circuit 112 includes a driver integrated circuit (IC) 115, a signal line connection circuit 113, and a scan line drive circuit 114. The signal line connection circuit 113 is electrically connected to the signal lines SL. The scan line drive circuit 114 is electrically connected to the scan lines GL. Via the scan line drive circuit 114, the driver IC 115 controls on and off of the switching elements (TFTs, for example) that control the operation (light transmittance) of the pixels Pix.
The sensor 120 detects information from which the orientation of the head of the user can be estimated. For example, the sensor 120 detects information indicating the motion of the display device 100, and the display system 1 estimates the orientation of the head of the user wearing the display device 100 based on that information.
The sensor 120 detects information from which the orientation of the line of sight can be estimated using at least one of the angle, the acceleration, the angular velocity, the azimuth, and the distance of the display device 100, for example. For the sensor 120, a gyro sensor, an acceleration sensor, or an azimuth sensor can be used, for example. The sensor 120 may detect the angle and the angular velocity of the display device 100 by the gyro sensor, for example. The sensor 120 may detect the direction and the magnitude of acceleration acting on the display device 100 by the acceleration sensor, for example.
The sensor 120 may detect the azimuth of the display device 100 by the azimuth sensor, for example. The sensor 120 may detect the movement of the display device 100 by a distance sensor or a Global Positioning System (GPS) receiver, for example. The sensor 120 may be another sensor such as a light sensor or may be a combination of a plurality of sensors so long as it is a sensor for detecting the orientation of the head, a change in the line of sight, the movement, or the like of the user. The sensor 120 is electrically connected to the image separation circuit 150 via the interface 160 described below.
The image separation circuit 150 receives image data for the left eye and image data for the right eye sent from the image generation device 200 via the cable 300, sends the image data for the left eye to the display panel 110 displaying the image for the left eye, and sends the image data for the right eye to the display panel 110 displaying the image for the right eye.
The interface 160 includes a connector to which the cable 300 (
Alternatively, the signal input from the sensor 120 may be output to a control circuit 230 of the image generation device 200 directly via the interface 160. The interface 160 may be a wireless communication device and may transmit and receive information to and from the image generation device 200 via wireless communication, for example.
The image generation device 200 includes an operator 210, a storage unit 220, the control circuit 230, and the interface 240.
The operator 210 receives operations by the user. The operator 210 can be an input device such as a keyboard, buttons, or a touch screen. The operator 210 is electrically connected to the control circuit 230. The operator 210 outputs information corresponding to the operations to the control circuit 230.
The storage unit 220 stores therein computer programs and data. The storage unit 220 temporarily stores therein processing results of the control circuit 230. The storage unit 220 includes a storage medium. The storage medium includes a read only memory (ROM), a random access memory (RAM), a memory card, an optical disc, or a magneto-optical disc, for example. The storage unit 220 may store therein the data of an image to be displayed on the display device 100.
The storage unit 220 stores therein a control program 211 and a VR application 212, for example. The control program 211 can provide functions related to various kinds of control for operating the image generation device 200, for example. The VR application 212 can provide a function to cause the display device 100 to display the VR image. The storage unit 220 can store therein various kinds of information input from the display device 100, such as data indicating a detection result of the sensor 120, for example.
The control circuit 230 includes a micro control unit (MCU) or a central processing unit (CPU), for example. The control circuit 230 can comprehensively control the operation of the image generation device 200. The various kinds of functions of the image generation device 200 are implemented based on the control of the control circuit 230.
The control circuit 230 includes a graphics processing unit (GPU) generating an image to be displayed, for example. The GPU generates an image to be displayed on the display device 100. The control circuit 230 outputs the image generated by the GPU to the display device 100 via the interface 240. Although the present embodiment describes a case in which the control circuit 230 of the image generation device 200 includes the GPU, this is not limiting. The GPU may be provided in the display device 100 or the image separation circuit 150 of the display device 100, for example. In this case, the display device 100 may acquire data from the image generation device 200, an external electronic apparatus, or the like, and the GPU may generate an image based on the data, for example.
The interface 240 includes a connector to which the cable 300 (refer to
Upon execution of the VR application 212, the control circuit 230 causes the display device 100 to display an image corresponding to the motion of the user (the display device 100). Upon detecting a change in the user (the display device 100) while the image is displayed on the display device 100, the control circuit 230 changes the image displayed on the display device 100 to an image of the changed direction. When starting to create an image, the control circuit 230 creates the image based on a standard point of view and a standard line of sight in a virtual space. When detecting a change in the user (the display device 100), the control circuit 230 changes the point of view or the line of sight used for creating the displayed image from the standard point of view or the standard line of sight direction in accordance with the motion of the user (the display device 100), and causes the display device 100 to display an image based on the changed point of view or line of sight.
The control circuit 230 detects the movement of the head of the user to a right direction based on the detection result of the sensor 120, for example. In this case, the control circuit 230 changes the image being currently displayed to an image when the line of sight is changed to the right direction. The user can visually recognize an image in the right direction of the image being displayed on the display device 100.
Upon detecting the movement of the display device 100 based on the detection result of the sensor 120, the control circuit 230 changes the image in accordance with the detected movement, for example. When detecting that the display device 100 has moved forward, the control circuit 230 changes the image being currently displayed to the image as seen after moving forward. When detecting that the display device 100 has moved rearward, the control circuit 230 changes the image being currently displayed to the image as seen after moving rearward. The user can thus visually recognize an image corresponding to the user's own moving direction.
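The view-direction update described above can be illustrated with a minimal sketch. This is a hypothetical example, not the implementation in the control circuit 230; the function name and the yaw-only rotation are assumptions for illustration, and a real system would use the full attitude (e.g., a quaternion) from the sensor 120.

```python
import math

# Hypothetical sketch: rotating the standard line of sight by a detected
# yaw angle (head turn about the vertical axis). The line of sight is a
# unit vector (x, z) in the horizontal plane; positive yaw turns it right.
def rotate_line_of_sight(standard_dir, yaw_rad):
    """Return the line-of-sight vector after a yaw rotation."""
    x, z = standard_dir
    cos_y, sin_y = math.cos(yaw_rad), math.sin(yaw_rad)
    return (x * cos_y + z * sin_y, -x * sin_y + z * cos_y)

# Turning the head 90 degrees to the right from "straight ahead" (0, 1)
# yields a line of sight along the +X axis, i.e., approximately (1, 0).
vx, vz = rotate_line_of_sight((0.0, 1.0), math.pi / 2)
```

A zero yaw angle leaves the standard line of sight unchanged, which matches the description that the image is created from the standard point of view and line of sight until a change is detected.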
As illustrated in
The pixels PixR, PixG, and PixB include the switching elements TrD1, TrD2, and TrD3, respectively, and each include a capacitance of a liquid crystal layer LC. The switching elements TrD1, TrD2, and TrD3 include thin film transistors and, in this example, include n-channel metal oxide semiconductor (MOS) TFTs. A sixth insulating film 16 (refer to
In color filters CFR, CFG, and CFB illustrated in
As illustrated in
The scan line drive circuit 114 is placed in the peripheral region between the substrate end side 110e1 and the display region 111 of the display panel 110. The signal line connection circuit 113 is placed in the peripheral region between the substrate end side 110e4 and the display region 111 of the display panel 110. The driver IC 115 is placed in the peripheral region between the substrate end side 110e4 and the display region 111 of the display panel 110. In the present embodiment, the substrate end sides 110e3 and 110e4 of the display panel 110 are parallel to the X direction. The substrate end sides 110e1 and 110e2 of the display panel 110 are parallel to the Y direction.
In the example illustrated in
The following describes a sectional structure of the display panel 110 with reference to
The first insulating film 11 is positioned on the first insulating substrate 10. The second insulating film 12 is positioned on the first insulating film 11. The third insulating film 13 is positioned on the second insulating film 12. The signal lines S1 to S3 are positioned on the third insulating film 13. The fourth insulating film 14 is positioned on the third insulating film 13 to cover the signal lines S1 to S3.
If necessary, wiring may be placed on the fourth insulating film 14. This wiring would be covered with the fifth insulating film 15; the present embodiment omits the wiring. The first insulating film 11, the second insulating film 12, the third insulating film 13, and the sixth insulating film 16 are formed of an inorganic material having translucency, such as silicon oxide or silicon nitride, for example. The fourth insulating film 14 and the fifth insulating film 15 are formed of a resin material having translucency and are thicker than the other insulating films formed of the inorganic material. The fifth insulating film 15 may, however, be formed of an inorganic material.
The common electrode COM is positioned on the fifth insulating film 15. The common electrode COM is covered with the sixth insulating film 16. The sixth insulating film 16 is formed of an inorganic material having translucency such as a silicon oxide or a silicon nitride, for example.
The pixel electrodes PE1 to PE3 are positioned on the sixth insulating film 16 and face the common electrode COM via the sixth insulating film 16. The pixel electrodes PE1 to PE3 and the common electrode COM are formed of a conductive material having translucency such as indium tin oxide (ITO) or indium zinc oxide (IZO), for example. The pixel electrodes PE1 to PE3 are covered with the first orientation film AL1. The first orientation film AL1 also covers the sixth insulating film 16.
The counter substrate SUB2 has a second insulating substrate 20 having translucency such as a glass substrate or a resin substrate as a base. The counter substrate SUB2 includes a light shielding layer BM, the color filters CFR, CFG, and CFB, an overcoat layer OC, a second orientation film AL2, and the like on a side of the second insulating substrate 20 facing the array substrate SUB1.
As illustrated in
The color filters CFR, CFG, and CFB are positioned on the side of the second insulating substrate 20 facing the array substrate SUB1, and each end thereof overlaps with the light shielding layer BM. The color filter CFR faces the pixel electrode PE1. The color filter CFG faces the pixel electrode PE2. The color filter CFB faces the pixel electrode PE3. As an example, the color filters CFR, CFG, and CFB are formed of resin materials colored in red, green, and blue, respectively.
The overcoat layer OC covers the color filters CFR, CFG, and CFB. The overcoat layer OC is formed of a resin material having translucency. The second orientation film AL2 covers the overcoat layer OC. The first orientation film AL1 and the second orientation film AL2 are formed of a material indicating horizontal orientation, for example.
As described in the foregoing, the counter substrate SUB2 includes the light shielding layer BM, the color filters CFR, CFG, and CFB, and the like. The light shielding layer BM is placed at regions facing wiring parts such as the scan lines G1, G2, and G3 and the signal lines S1, S2, and S3 illustrated in
Although in
Although in
The array substrate SUB1 and the counter substrate SUB2 described above are placed such that the first orientation film AL1 and the second orientation film AL2 face each other. The liquid crystal layer LC is enclosed between the first orientation film AL1 and the second orientation film AL2. The liquid crystal layer LC includes a negative liquid crystal material, the dielectric anisotropy of which is negative, or a positive liquid crystal material, the dielectric anisotropy of which is positive.
The array substrate SUB1 faces a backlight unit IL, whereas the counter substrate SUB2 is positioned on the display face side. Various types of backlight unit IL can be used; a detailed description of its structure is omitted.
A first optical element OD1 including a first polarizing plate PL1 is placed on the outer face of the first insulating substrate 10, that is, the face facing the backlight unit IL. A second optical element OD2 including a second polarizing plate PL2 is placed on the outer face of the second insulating substrate 20, that is, the face on the observation position side. A first polarization axis of the first polarizing plate PL1 and a second polarization axis of the second polarizing plate PL2 are in a crossed-Nicol positional relation in the X-Y plane, for example.
The first optical element OD1 and the second optical element OD2 may include other optical functional elements such as a phase plate.
When the liquid crystal layer LC is formed of the negative liquid crystal material and no voltage is applied to the liquid crystal layer LC, for example, the liquid crystal molecules LM are initially oriented such that their long axes are along the X direction within the X-Y plane. On the other hand, when voltage is applied to the liquid crystal layer LC, that is, in the on state in which electric fields are formed between the pixel electrodes PE1 to PE3 and the common electrode COM, the liquid crystal molecules LM are influenced by the electric fields and change their orientation state. In the on state, incident linearly polarized light changes its polarization state in accordance with the orientation state of the liquid crystal molecules LM when passing through the liquid crystal layer LC.
As illustrated in
When a pixel voltage is applied to the pixel electrodes PE1, PE2, and PE3 to cause a potential difference between the pixel electrodes PE1, PE2, and PE3 and the common electrode COM, the pixels PixS cause electric fields having electric lines of force emerging from the surface of the pixel electrodes PE1, PE2, and PE3 to reach the surface of the common electrode COM as indicated by the broken lines in
As the pixel density of the display region 111 becomes higher, influence by the mutual electric lines of force becomes larger between the pixels PixS illustrated in
Specifically, compare, for example, a case in which the pixel density of the display region 111 is 538 ppi (the distance Ph1 between the pixels PixS in the Y direction is 47.25 μm, and the distance Pw1 between the pixels PixS in the X direction is 15.75 μm) with a case in which the pixel density of the display region 111 is 806 ppi (the distance Ph1 between the pixels PixS in the Y direction is 31.5 μm, and the distance Pw1 between the pixels PixS in the X direction is 10.5 μm). The color shift caused by the electric lines of force between the pixels PixS influencing each other occurs more conspicuously in the case in which the pixel density is 806 ppi.
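The pitches quoted above follow from the pixel density. As an illustrative check (not from the source), the vertical pitch is roughly 25,400 μm per inch divided by the pixel density in ppi, and the horizontal pitch is one third of that, since three sub-pixels sit side by side in each pixel; the results round to approximately the stated values.

```python
# Illustrative arithmetic only; the function name is an assumption.
def pitches_um(ppi: int) -> tuple[float, float]:
    """Approximate vertical (Ph1) and horizontal (Pw1) pitches in micrometers."""
    pitch_y = 25_400 / ppi   # one inch = 25,400 um
    pitch_x = pitch_y / 3    # three sub-pixels per pixel in the X direction
    return pitch_y, pitch_x

for ppi in (538, 806):
    ph1, pw1 = pitches_um(ppi)
    print(f"{ppi} ppi: Ph1 = {ph1:.2f} um, Pw1 = {pw1:.2f} um")
```

At 806 ppi this gives approximately 31.5 μm and 10.5 μm, matching the values in the text; higher density means smaller pitches and hence stronger mutual influence of the electric lines of force.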
As illustrated in
In the present disclosure, the gradation value, which varies with the degree of influence of the electric lines of force between the pixels PixS, is corrected. The correction of the gradation value is preferably performed such that the pixels PixR, PixG, and PixB have the same relative intensity with respect to the respective gradations given to them for any combination of the respective gradations of the pixels PixR, PixG, and PixB. In the present disclosure, as an example, white display (that is, the intensity of the pixel PixR = the intensity of the pixel PixG = the intensity of the pixel PixB) is used as a reference, and for general display in which one or more of the intensities of the pixels PixR, PixG, and PixB do not match, the shift from the relative intensity of the pixels PixR, PixG, and PixB in the white display is corrected.
The following first describes the necessity of pixel gradation correction in the first example of the pixel configuration illustrated in
In the first example of the pixel configuration illustrated in
Specifically, take as an example the case mentioned above in which the color shift caused by the electric lines of force between the pixels PixS influencing each other occurs conspicuously, that is, the case in which the pixel density of the display region 111 is 806 ppi. The influence by the electric lines of force of the pixel PixSm−1,n and the pixel PixSm+1,n, which are adjacent in the X direction at a distance of 10.5 μm (=Pw1) to the pixel PixSm,n for which the pixel gradation will be corrected, is conspicuous, whereas the influence by the electric lines of force of the pixel PixSm,n−1 and the pixel PixSm,n+1, which are adjacent in the Y direction at a distance of 31.5 μm (=Ph1), is extremely small. That is to say, in the first example of the pixel configuration illustrated in
Output of the pixel gradation correction circuit 116 is converted from digital to analog by a DAC 117 and output to the display region 111. The DAC 117 is provided in the driver IC 115 illustrated in
The component provided with the pixel gradation correction circuit 116 and the DAC 117 is not limited to the driver IC 115; a component different from the driver IC 115 may be provided with the pixel gradation correction circuit 116 and the DAC 117 or the pixel gradation correction circuit 116 and the DAC 117 may be included as independent components, for example. Image correction processing such as gamma correction and white balance correction is preferably performed before the pixel gradation correction circuit 116.
The pixel gradation correction circuit 116 performs pixel gradation correction processing for each of the pixels PixS using Expression (1) below and Expression (2) below.
Vom,n = Vim,n − f(Vim,n) × {SL(Vim−1,n − Vim,n) + SR(Vim+1,n − Vim,n) + SU(Vim,n−1 − Vim,n) + SD(Vim,n+1 − Vim,n)}   (1)
f(Vim,n) = fq(x) = Aqx³ + Cqx² + Dqx + Eq   (2)
In Expression (1), Vim,n indicates a pixel gradation input value (an input gradation value) to the pixel gradation correction circuit 116 for the pixel PixSm,n for which the pixel gradation will be corrected. f(Vim,n) is a function indicating susceptibility with which the pixel PixSm,n for which the pixel gradation will be corrected is influenced by adjacent pixels, that is, sensitivity with which the pixel PixSm,n for which the pixel gradation will be corrected is influenced by the adjacent pixels. Vom,n indicates an output value (an output gradation value) of the pixel gradation correction circuit 116 having corrected the input gradation value Vim,n for the pixel PixSm,n for which the pixel gradation will be corrected, that is, a corrected gradation value as a gradation value to be output to the pixel PixSm,n for which the pixel gradation will be corrected.
That is to say, the correction amount for the pixel PixSm,n for which the pixel gradation will be corrected can be represented by the product of the following two terms. The first is the "value indicating sensitivity influenced by the adjacent pixels" (the term f(Vim,n) indicated in Expression (1)), which changes in accordance with the input gradation value Vim,n that the pixel PixSm,n should originally display. The second is the "value indicating the strength of influence that the adjacent pixels exert on the pixel PixSm,n" (the term {SL(Vim−1,n−Vim,n)+SR(Vim+1,n−Vim,n)+SU(Vim,n−1−Vim,n)+SD(Vim,n+1−Vim,n)} indicated in Expression (1)), that is, the sum over the adjacent pixels of the product of a coefficient and the difference between the input gradation value that the adjacent pixel should originally display and the input gradation value Vim,n. The product of these two values is subtracted from the input gradation value Vim,n that the pixel PixSm,n should originally display, and the output gradation value Vom,n (refer to Expression (1)) obtained as a result is given to the pixel PixSm,n. Thus, the display intensity that should originally be displayed in the pixel PixSm,n is obtained.
The following describes coefficients for calculating the “value indicating the strength of influence that the adjacent pixels exert on the pixel PixSm,n for which the pixel gradation will be corrected” (hereinafter may be referred to simply as “coefficients indicating the strength of influence that the adjacent pixels exert”).
SL is a coefficient indicating the strength of influence that the pixel PixSm−1,n adjacent to the pixel PixSm,n for which the pixel gradation will be corrected on the left side in the X direction exerts on the pixel PixSm,n. SR is a coefficient indicating the strength of influence that the pixel PixSm+1,n adjacent to the pixel PixSm,n for which the pixel gradation will be corrected on the right side in the X direction exerts on the pixel PixSm,n. SU is a coefficient indicating the strength of influence that the pixel PixSm,n−1 adjacent to the pixel PixSm,n for which the pixel gradation will be corrected on the upper side in the Y direction exerts on the pixel PixSm,n. SD is a coefficient indicating the strength of influence that the pixel PixSm,n+1 adjacent to the pixel PixSm,n for which the pixel gradation will be corrected on the down side in the Y direction exerts on the pixel PixSm,n. These coefficients SL, SR, SU, and SD are set in advance in accordance with the pixel arrangement of the display region 111, the shape and orientation of the pixel electrodes PE of the pixels PixS (the pixels PixR, PixG, and PixB), and the like.
Vim−1,n indicates an input gradation value, that is, an input value before pixel gradation correction to the pixel gradation correction circuit 116, for the pixel PixSm−1,n adjacent on the left side in the X direction to the pixel PixSm,n for which the pixel gradation will be corrected. Likewise, Vim+1,n indicates the input gradation value for the pixel PixSm+1,n adjacent on the right side in the X direction, Vim,n−1 indicates the input gradation value for the pixel PixSm,n−1 adjacent on the upper side in the Y direction, and Vim,n+1 indicates the input gradation value for the pixel PixSm,n+1 adjacent on the lower side in the Y direction.
In Expression (2), fq(x) is a function indicating the sensitivity with which the pixel PixSm,n for which the pixel gradation will be corrected is influenced by the adjacent pixels. The function fq(x) in Expression (2) is the same as the function f(Vim,n) indicated in Expression (1). In Expression (2), x indicates an input gradation value, that is, an input value before pixel gradation correction to the pixel gradation correction circuit 116 for the pixel PixSm,n for which the pixel gradation will be corrected. The input gradation value x in Expression (2) is the same as the input gradation value Vim,n in Expression (1). Aq, Cq, Dq, and Eq indicate coefficients set in advance in accordance with the sensitivity with which the pixel PixSm,n for which the pixel gradation will be corrected is influenced by the adjacent pixels.
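As a concrete illustration, the correction of Expression (1) combined with the cubic sensitivity function of Expression (2) can be sketched as follows. This is a minimal sketch only: the coefficient values used in the usage example below are hypothetical placeholders, since the actual values of SL, SR, SU, and SD and of Aq, Cq, Dq, and Eq are determined in advance from the pixel arrangement and the pixel electrodes as described above.

```python
# Sketch of Expressions (1) and (2): correct the gradation of the pixel at
# (m, n) using the input gradations of its four adjacent pixels.
# All coefficient values supplied by the caller are hypothetical examples.

def sensitivity(x, A, C, D, E):
    # Expression (2): f(x) = A*x^3 + C*x^2 + D*x + E
    return A * x ** 3 + C * x ** 2 + D * x + E

def correct_pixel(vi, left, right, up, down, S_L, S_R, S_U, S_D, A, C, D, E):
    # Expression (1): subtract (sensitivity x influence) from the input gradation.
    influence = (S_L * (left - vi) + S_R * (right - vi)
                 + S_U * (up - vi) + S_D * (down - vi))
    return vi - sensitivity(vi, A, C, D, E) * influence
```

For instance, with a constant sensitivity f(x) = 0.5 and all four strength coefficients equal to 0.1, a uniform neighborhood leaves the input gradation unchanged, while a single brighter neighbor pulls the corrected output slightly below the input gradation.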
As illustrated in
Pixel gradation correction processing for each of the pixels PixS is performed by Expression (1) and Expression (2) using the function fq(x) determined in advance as described above, whereby the shift of the gradation values of the pixels PixS (the pixels PixR, PixG, and PixB) from the gradation values obtained when white display is performed can be corrected. More precisely, taking the display intensities (tristimulus values) of white display (that is, when the gradation values of the pixels PixR, PixG, and PixB of an image to be displayed all match) as correct, when the display is not white display (that is, when any of the gradation values of the pixels PixR, PixG, and PixB does not match the others), the display intensities of the pixels PixR, PixG, and PixB can be compensated so as to be the intensities expected for the image to be displayed. The following describes pixel gradation correction expressions for the pixels PixR, PixG, and PixB.
In the first example of the pixel configuration illustrated in
Rom,n=Rim,n−f(Rim,n)×{SRL(Bim−1,n−Rim,n)+SRR(Gim+1,n−Rim,n)+SRU(Rim,n−1−Rim,n)+SRD(Rim,n+1−Rim,n)} (3)
f(Rim,n)=fR(x)=ARx3+CRx2+DRx+ER (4)
In the first example of the pixel configuration illustrated in
Gom,n=Gim,n−f(Gim,n)×{SGL(Rim−1,n−Gim,n)+SGR(Bim+1,n−Gim,n)+SGU(Gim,n−1−Gim,n)+SGD(Gim,n+1−Gim,n)} (5)
f(Gim,n)=fG(x)=AGx3+CGx2+DGx+EG (6)
In the first example of the pixel configuration illustrated in
Bom,n=Bim,n−f(Bim,n)×{SBL(Gim−1,n−Bim,n)+SBR(Rim+1,n−Bim,n)+SBU(Bim,n−1−Bim,n)+SBD(Bim,n+1−Bim,n)} (7)
f(Bim,n)=fB(x)=ABx3+CBx2+DBx+EB (8)
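Expressions (3) through (8) can be applied over a whole frame by noting that, in this stripe arrangement, only the color planes referenced by the horizontal neighbors change between Expressions (3), (5), and (7). The sketch below assumes this first pixel configuration; the border handling (pass-through) and the coefficient values used in the test are simplifications for illustration, not part of the disclosed method.

```python
# Sketch of Expressions (3)-(8) for the first pixel configuration
# (...R, G, B, R, G, B... stripes): the left neighbor of an R pixel is B and
# its right neighbor is G, while the vertical neighbors share its color.
# R, G, B are 2-D lists indexed [n][m] (row, column); f maps a color name to
# its sensitivity function (Expressions (4), (6), (8)); S maps a color name
# to its (S_L, S_R, S_U, S_D) coefficients.  Border pixels are passed through
# unchanged purely to keep the sketch short.

def correct_frame(R, G, B, f, S):
    H, W = len(R), len(R[0])
    left_plane = {"R": B, "G": R, "B": G}    # color plane of the left neighbor
    right_plane = {"R": G, "G": B, "B": R}   # color plane of the right neighbor
    out = {}
    for name, plane in (("R", R), ("G", G), ("B", B)):
        o = [row[:] for row in plane]
        for n in range(1, H - 1):
            for m in range(1, W - 1):
                v = plane[n][m]
                infl = (S[name][0] * (left_plane[name][n][m - 1] - v)
                        + S[name][1] * (right_plane[name][n][m + 1] - v)
                        + S[name][2] * (plane[n - 1][m] - v)
                        + S[name][3] * (plane[n + 1][m] - v))
                o[n][m] = v - f[name](v) * infl
        out[name] = o
    return out
```

When the three planes carry identical gradations (white display), every influence term vanishes and the frame is returned unchanged, consistent with the white display being treated as correct.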
As described in the first example of the pixel configuration illustrated in
Vom,n=Vim,n−f(Vim,n)×{SL(Vim−1,n−Vim,n)+SR(Vim+1,n−Vim,n)} (10)
Rom,n=Rim,n−f(Rim,n)×{SRL(Bim−1,n−Rim,n)+SRR(Gim+1,n−Rim,n)} (11)
Gom,n=Gim,n−f(Gim,n)×{SGL(Rim−1,n−Gim,n)+SGR(Bim+1,n−Gim,n)} (12)
Bom,n=Bim,n−f(Bim,n)×{SBL(Gim−1,n−Bim,n)+SBR(Rim+1,n−Bim,n)} (13)
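In this configuration only the terms for the horizontal neighbors remain, so Expressions (10) through (13) reduce to a one-dimensional version of the correction. A minimal sketch follows; the sensitivity function and coefficient values in the usage example are hypothetical.

```python
# Sketch of Expression (10): only the left and right neighbors contribute;
# the vertical terms of Expression (1) drop out.
def correct_horizontal(vi, left, right, S_L, S_R, f):
    return vi - f(vi) * (S_L * (left - vi) + S_R * (right - vi))
```

Expressions (11) through (13) have the same shape; only which color planes supply `left` and `right` differ (B and G for an R pixel, and so on).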
As described in the third example of the pixel configuration illustrated in
Rom,n=Rim,n (14)
Bom,n=Bim,n (15)
In the second example of the pixel configuration illustrated in
Rom,n=Rim,n−f(Rim,n)×{SRL(Bim−1,n−Rim,n)+SRR(Gim+1,n−Rim,n)+SRU(Gim,n−1−Rim,n)+SRD(Bim,n+1−Rim,n)} (16)
f(Rim,n)=fR(x)=ARx3+CRx2+DRx+ER (17)
In the second example of the pixel configuration illustrated in
Gom,n=Gim,n−f(Gim,n)×{SGL(Rim−1,n−Gim,n)+SGR(Bim+1,n−Gim,n)+SGU(Bim,n−1−Gim,n)+SGD(Rim,n+1−Gim,n)} (18)
f(Gim,n)=fG(x)=AGx3+CGx2+DGx+EG (19)
In the second example of the pixel configuration illustrated in
Bom,n=Bim,n−f(Bim,n)×{SBL(Gim−1,n−Bim,n)+SBR(Rim+1,n−Bim,n)+SBU(Rim,n−1−Bim,n)+SBD(Gim,n+1−Bim,n)} (20)
f(Bim,n)=fB(x)=ABx3+CBx2+DBx+EB (21)
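In the second pixel configuration, every one of the four neighbors of a pixel carries a color different from the pixel itself, and Expressions (16), (18), and (20) differ only in which color occupies each direction. That wiring can be tabulated as a sketch; the table merely restates the expressions above.

```python
# Neighbor colors (left, right, up, down) for each pixel color in the second
# pixel configuration, read off from Expressions (16), (18), and (20).
NEIGHBOR_COLORS = {
    "R": ("B", "G", "G", "B"),  # Expression (16)
    "G": ("R", "B", "B", "R"),  # Expression (18)
    "B": ("G", "R", "R", "G"),  # Expression (20)
}

def neighbor_color(pixel_color, direction):
    # direction is one of "left", "right", "up", "down"
    return NEIGHBOR_COLORS[pixel_color][("left", "right", "up", "down").index(direction)]
```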
As illustrated in
Vo1m,n=Vim,n−f(Vim,n)×{SL1(Vim−1,n−Vim,n)+SR1(Vim+1,n−Vim,n)+SU1(Vim,n−1−Vim,n)+SD1(Vim,n+1−Vim,n)} (22)
Vo2m,n=Vim,n−f(Vim,n)×{SL2(Vim−1,n−Vim,n)+SR2(Vim+1,n−Vim,n)+SU2(Vim,n−1−Vim,n)+SD2(Vim,n+1−Vim,n)} (23)
In Expression (22) and Expression (23), when the orientation of the pixel electrodes PE is inverted in the X direction between the odd row and the even row as illustrated in
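The selection between the two coefficient sets of Expressions (22) and (23) depends only on the row parity of the pixel, which can be sketched as follows. The concrete coefficient values are hypothetical placeholders; swapping the left/right coefficients between the parities is shown here only as one plausible consequence of inverting the electrode orientation in the X direction.

```python
# Two coefficient sets, selected per row: odd rows use the set of
# Expression (22), even rows the set of Expression (23).
# The numeric values below are hypothetical placeholders.
COEFF_SETS = {
    "odd": {"S_L": 0.10, "S_R": 0.12, "S_U": 0.05, "S_D": 0.06},   # Expression (22)
    "even": {"S_L": 0.12, "S_R": 0.10, "S_U": 0.05, "S_D": 0.06},  # Expression (23)
}

def coeffs_for_row(n):
    # Rows are counted from 1, so n = 1 is an odd row.
    return COEFF_SETS["odd"] if n % 2 == 1 else COEFF_SETS["even"]
```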
The following describes Expression (1) and Expression (2) in a generalized manner.
The notation is as follows: the input gradation value of the pixel PixSm,n for which the pixel gradation will be corrected (hereinafter referred to as a "first pixel") is V1i; the function indicating the sensitivity with which the first pixel is influenced by the pixel PixSm−1,n, the pixel PixSm+1,n, the pixel PixSm,n−1, and the pixel PixSm,n+1 adjacent to the first pixel (hereinafter referred to as "second pixels") is f(V1i); the number of the second pixels adjacent to the first pixel is N; the input gradation value of a second pixel is V2i; and the coefficient indicating the strength of influence that the second pixels exert is Sp. With this notation, Expression (1) can be indicated by Expression (24) below, where V1o is the output gradation value of the first pixel and the sum runs over the N second pixels.
V1o=V1i−f(V1i)×Σp=1N{Sp(V2i−V1i)} (24)
As to the function f(V1i) indicating the sensitivity with which the first pixel is influenced by the second pixels, when the input gradation value V1i of the first pixel is x, and the coefficients set in advance in accordance with the sensitivity with which the first pixel is influenced by the second pixels are A, C, D, and E, Expression (2) can be indicated by Expression (25) below.
f(V1i)=f(x)=Ax3+Cx2+Dx+E (25)
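The generalized Expressions (24) and (25) can be sketched compactly; in this form the correction no longer depends on the number or direction of the neighbors, only on the list of (Sp, V2i) pairs. The default coefficient values below are hypothetical placeholders.

```python
# Expression (25): cubic sensitivity of the first pixel.
# Default coefficients are hypothetical placeholders.
def f(x, A=0.0, C=0.0, D=0.0, E=0.2):
    return A * x ** 3 + C * x ** 2 + D * x + E

# Expression (24): subtract f(V1i) times the summed influence of the N
# second pixels from the first pixel's input gradation V1i.
def correct(v1i, second_pixels):
    # second_pixels: iterable of (S_p, V2i) pairs, one per adjacent second pixel.
    return v1i - f(v1i) * sum(s * (v2i - v1i) for s, v2i in second_pixels)
```

Expressions (1), (10), and (22)/(23) are all instances of this form, with N = 4, N = 2, and N = 4 with row-dependent Sp, respectively.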
By the pixel gradation correction processing for each of the pixels PixS described in the present embodiment, the shift of the gradation values of the pixels PixS (the pixels PixR, PixG, and PixB) from the gradation values obtained when white display is performed can be corrected. More precisely, taking the display intensities (tristimulus values) of white display (that is, when the gradation values of the pixels PixR, PixG, and PixB of an image to be displayed all match) as correct, when the display is not white display (that is, when any of the gradation values of the pixels PixR, PixG, and PixB does not match the others), the display intensities of the pixels PixR, PixG, and PixB can be compensated so as to be the intensities expected for the image to be displayed.
According to the present embodiment, the display device 100 and the display system 1 can inhibit a reduction in the accuracy of displayed colors along with higher definition.
As illustrated in
Output of the pixel gradation correction circuit 250 is digital-to-analog converted by the DAC 117 provided in a display device 100a and is output to the display region 111. The DAC 117 is provided in the driver IC 115 illustrated in
As illustrated in
According to the present embodiment, the display device 100a and a display system 1a can inhibit a reduction in the accuracy of displayed colors along with higher definition.
The preferred embodiments of the present disclosure have been described above; however, the present disclosure is not limited to such embodiments. The details disclosed in the embodiments are merely examples, and various modifications can be made within a range not departing from the gist of the present disclosure. Appropriate modifications made within that range also naturally belong to the technical scope of the present disclosure.
Number | Date | Country | Kind
---|---|---|---
2020-152416 | Sep 2020 | JP | national |