This invention relates to a display system.
A virtual reality (VR) system provides a virtual sense of reality to a user by changing image display along with viewpoint movement. Examples of a display device for achieving such a VR system include a disclosed technology of mounting a head mounted display (hereinafter also referred to as an “HMD”) on a head and displaying a video in accordance with body motion or the like (for example, Japanese Patent Application Laid-open Publication No. 2017-44768).
In an HMD used in a VR system, a displayed video is enlarged through an eyepiece lens, and accordingly, an image displayed on a display panel is distorted. Thus, it has typically been the practice to distort the original image in advance, taking into account the image distortion due to the lens, feed the image to the display panel, and perform image conversion processing such as resolution conversion processing and pixel value conversion processing on the display panel side in accordance with the resolution and the pixel arrangement of the display panel. However, when the resolution of the display panel is lower than the resolution of the fed image or when the display panel presupposes what is called subpixel rendering processing, the amount of fed data is larger than the amount of actually displayed data and waste occurs.
The present disclosure is made in view of the above-described problem and intended to provide a display system capable of performing transmission and reception in a data amount in accordance with a pixel arrangement of a display panel.
A display system according to an embodiment of the present disclosure includes a display device including a liquid crystal display panel including pixels, the pixels each including a plurality of sub pixels and being arranged in a matrix of rows and columns in a first direction and a second direction different from the first direction, and an image generation device including a control circuit configured to perform image deformation processing of an input image in accordance with a pixel structure of the liquid crystal display panel. The image generation device and the display device are coupled to each other through wired or wireless communication, and the control circuit generates pixel values of all the sub pixels of the liquid crystal display panel in the image deformation processing.
Aspects (embodiments) of the present disclosure will be described below in detail with reference to the accompanying drawings. Contents described below in the embodiments do not limit the present disclosure. Constituent components described below include those that could be easily thought of by a person skilled in the art and those identical in effect. Constituent components described below may be combined as appropriate. What is disclosed herein is merely exemplary, and any modification that could be easily thought of by a person skilled in the art as appropriate without departing from the gist of the present disclosure is contained in the scope of the disclosure. For clearer description, the drawings are schematically illustrated for the width, thickness, shape, and the like of each component as compared to an actual aspect in some cases, but the drawings are merely exemplary and do not limit interpretation of the present disclosure. In the present specification and the drawings, any component same as that already described with reference to an already described drawing is denoted by the same reference sign, and detailed description thereof is omitted as appropriate in some cases.
In the present embodiment, a display system 1 is a display system configured to change display along with motion of the user. For example, the display system 1 is a VR system configured to provide a virtual sense of reality to the user by stereoscopically displaying a virtual reality (VR) image illustrating a three-dimensional object or the like in a virtual space and changing the stereoscopic display along with the orientation (position) of the head of the user.
As illustrated in
In the present disclosure, the display device 100 is used as, for example, a head mounted display device fixed to a mounting member 400 and mounted on the head of the user. The display device 100 includes a display panel 110 for displaying an image generated by the image generation device 200. Hereinafter, the configuration in which the display device 100 is fixed to the mounting member 400 is also referred to as "head mounted display (HMD)".
In the present disclosure, the image generation device 200 is, for example, an electronic apparatus such as a personal computer or a game apparatus. The image generation device 200 generates a VR image in accordance with the position and posture of the head of the user and outputs the VR image to the display device 100. The image generated by the image generation device 200 is not limited to a VR image.
The display device 100 is fixed to such a position that the display panel 110 is disposed in front of the eyes of the user when the HMD is mounted on the user. The display device 100 may include, in addition to the display panel 110, voice output devices such as speakers at positions corresponding to the ears of the user when the HMD is mounted on the user. As described later, the display device 100 may include a sensor (for example, a gyro sensor, an acceleration sensor, or an orientation sensor) configured to detect, for example, the position and posture of the head of the user on which the display device 100 is mounted. The display device 100 may also encompass functions of the image generation device 200.
As illustrated in
In the present embodiment, the display panel 110 is assumed to be a liquid crystal display panel.
In the display device 100 used in the VR system as illustrated in
The display device 100 includes the two display panels 110. One of the two display panels 110 is used as a left-eye display panel 110, and the other is used as a right-eye display panel 110.
Each of the two display panels 110 includes a display region 111 and a display control circuit 112. Each display panel 110 includes a non-illustrated light source device configured to irradiate the display region 111 from behind. Each display region 111 includes a two-dimensional matrix of rows and columns of n×m arranged pixels Pix (n pixels in the row direction (X direction) and m pixels in the column direction (Y direction)). In the present embodiment, the pixel density in each display region 111 is, for example, 806 ppi.
Each display panel 110 includes scanning lines extending in the X direction and signal lines extending in the Y direction intersecting the X direction. In each display panel 110, the pixels Pix are disposed in regions surrounded by signal lines SL and scanning lines GL. Each pixel Pix includes a switching element (thin film transistor (TFT)) coupled to a signal line SL and a scanning line GL, and pixel electrodes coupled to the switching element. Each scanning line GL is coupled to a plurality of pixels Pix disposed in the direction in which the scanning line GL extends. Each signal line SL is coupled to a plurality of pixels Pix disposed in the direction in which the signal line SL extends.
The display region 111 of one of the two display panels 110 is for the right eye, and the display region 111 of the other display panel 110 is for the left eye. In this example, the display panels 110 include the two display panels 110 for the left and right eyes, but the display device 100 is not limited to a structure including two display panels 110. For example, one display panel 110 may be provided and the display region of the one display panel 110 may be divided into two to display a right-eye image in the right-half region and display a left-eye image in the left-half region.
Each display control circuit 112 includes a driver integrated circuit (IC) 115, a signal line coupling circuit 113, and a scanning line drive circuit 114. The signal line coupling circuit 113 is electrically coupled to the signal lines SL. The driver IC 115 controls the scanning line drive circuit 114 to turn on and off each switching element (for example, TFT) for controlling operation (light transmittance) of the corresponding pixel Pix. The scanning line drive circuit 114 is electrically coupled to the scanning lines GL.
The sensor 120 detects information based on which the orientation of the head of the user can be estimated. For example, the sensor 120 detects information indicating motion of the display device 100, and the display system 1 estimates the orientation of the head of the user on which the display device 100 is mounted based on the information indicating motion of the display device 100.
The sensor 120 detects information based on which the orientation of the line of sight can be estimated by using, for example, at least one of the angle, acceleration, angular velocity, orientation, and distance of the display device 100. As the sensor 120, for example, a gyro sensor, an acceleration sensor, and an orientation sensor can be used. For example, the sensor 120 may detect the angle and angular velocity of the display device 100 by using the gyro sensor. For example, the sensor 120 may detect the direction and magnitude of acceleration applied to the display device 100 by using the acceleration sensor.
For example, the sensor 120 may detect the orientation of the display device 100 by using the orientation sensor. The sensor 120 may detect movement of the display device 100 by using, for example, a distance sensor or a global positioning system (GPS) receiver. The sensor 120 may be any other sensor, such as a light sensor, for detecting the orientation of the head of the user, change of the line of sight, movement, or the like, or may be a combination of a plurality of sensors. The sensor 120 is electrically coupled to the image separation circuit 150 through the interface 160 to be described later.
The image separation circuit 150 receives left-eye image data and right-eye image data fed from the image generation device 200 through the cable 300, feeds the left-eye image data to the display panel 110 configured to display a left-eye image, and feeds the right-eye image data to the display panel 110 configured to display a right-eye image.
The interface 160 includes a connector to which the cable 300 (
The image generation device 200 includes an operation portion 210, a storage 220, the control circuit 230, and the interface 240.
The operation portion 210 receives an operation from the user. As the operation portion 210, for example, an input device such as a keyboard, a button, and a touch screen can be used. The operation portion 210 is electrically coupled to the control circuit 230. The operation portion 210 outputs information in accordance with the operation to the control circuit 230.
The storage 220 stores computer programs and data. The storage 220 temporarily stores results of processing by the control circuit 230. The storage 220 includes a storage medium. Examples of the storage medium include a ROM, a RAM, a memory card, an optical disk, and a magneto optical disc. The storage 220 may store data of images to be displayed on the display device 100.
The storage 220 stores, for example, a control program 211 and a VR application 212. The control program 211 can provide, for example, functions related to various kinds of control for operating the image generation device 200. The VR application 212 can provide a function to display a VR image on the display device 100. The storage 220 can store various kinds of information input from the display device 100, such as data indicating results of detection by the sensor 120.
The control circuit 230 includes, for example, a micro control unit (MCU) or a central processing unit (CPU). The control circuit 230 can collectively control operation of the image generation device 200. Various kinds of functions of the image generation device 200 are implemented based on control by the control circuit 230.
The control circuit 230 includes, for example, a graphics processing unit (GPU) configured to generate images to be displayed. The GPU generates an image to be displayed on the display device 100. The control circuit 230 outputs the image generated by the GPU to the display device 100 through the interface 240. The control circuit 230 of the image generation device 200 includes the GPU in description of the present embodiment but is not limited thereto. For example, the GPU may be provided in the display device 100 or the image separation circuit 150 of the display device 100. In this case, the display device 100 may acquire data from the image generation device 200, an external electronic apparatus, or the like, and the GPU may generate an image based on the data.
The interface 240 includes a connector to which the cable 300 (refer to
When the VR application 212 is executed, the control circuit 230 displays an image in accordance with motion of the user (display device 100) on the display device 100. When having detected a change in the position or orientation of the user (display device 100) while displaying the image, the control circuit 230 changes the image displayed on the display device 100 to an image corresponding to the direction of the change. At the start of image production, the control circuit 230 produces an image based on a reference viewpoint and a reference line of sight in a virtual space. When having detected such a change, the control circuit 230 shifts the viewpoint or line of sight used to produce the displayed image from the reference viewpoint or the reference line of sight in accordance with the motion of the user (display device 100) and displays an image based on the changed viewpoint or line of sight on the display device 100.
For example, the control circuit 230 detects rightward movement of the head of the user based on a result of detection by the sensor 120. In this case, the control circuit 230 changes the currently displayed image to the image obtained when the line of sight is shifted rightward. The user can thus visually recognize an image to the right of the previously displayed image on the display device 100.
For example, when having detected movement of the display device 100 based on a result of detection by the sensor 120, the control circuit 230 changes the image in accordance with the detected movement. When having detected frontward movement of the display device 100, the control circuit 230 changes the currently displayed image to the image obtained by moving toward the front side of the currently displayed scene. When having detected backward movement of the display device 100, the control circuit 230 changes the currently displayed image to the image obtained by moving toward the back side of the currently displayed scene. The user can thus visually recognize an image corresponding to the direction in which the user moves from the currently displayed image.
As illustrated in
The sub pixels SPixR, SPixG, and SPixB include the respective switching elements TrD1, TrD2, and TrD3 and capacitors of a liquid crystal layer LC. The switching elements TrD1, TrD2, and TrD3 are each constituted by a thin film transistor, and in this example, constituted by an n-channel metal oxide semiconductor (MOS) TFT. A sixth insulating film 16 (refer to
Color filters CFR, CFG, and CFB illustrated in
As illustrated in
The corresponding scanning line drive circuit 114 is disposed in the peripheral region between the side 110e1 of the substrate end part of the display panel 110 and the display region 111. The corresponding signal line coupling circuit 113 is disposed in the peripheral region between the side 110e4 of the substrate end part of the display panel 110 and the display region 111. The corresponding driver IC 115 is disposed in the peripheral region between the side 110e4 of the substrate end part of the display panel 110 and the display region 111. In the present embodiment, the side 110e3 and the side 110e4 of the substrate end part of the display panel 110 are parallel to the X direction. The side 110e1 and the side 110e2 of the substrate end part of the display panel 110 are parallel to the Y direction.
In the example illustrated in
The following describes a sectional structure of each display panel 110 with reference to
The first insulating film 11 is positioned on the first insulation substrate 10. The second insulating film 12 is positioned on the first insulating film 11. The third insulating film 13 is positioned on the second insulating film 12. The signal lines S1 to S3 are positioned on the third insulating film 13. The fourth insulating film 14 is positioned on the third insulating film 13 and covers the signal lines S1 to S3.
Wires may be disposed on the fourth insulating film 14 as necessary. The wires are covered by the fifth insulating film 15. The wires are omitted in the present embodiment. The first insulating film 11, the second insulating film 12, the third insulating film 13, and the sixth insulating film 16 are formed of a translucent inorganic material such as silicon oxide or silicon nitride. The fourth insulating film 14 and the fifth insulating film 15 are formed of a translucent resin material and have thicknesses larger than those of the other insulating films formed of the inorganic material. However, the fifth insulating film 15 may be formed of an inorganic material.
The common electrode COM is positioned on the fifth insulating film 15. The common electrode COM is covered by the sixth insulating film 16. The sixth insulating film 16 is formed of a translucent inorganic material such as silicon oxide or silicon nitride.
The pixel electrodes PE1 to PE3 are positioned on the sixth insulating film 16 and face the common electrode COM through the sixth insulating film 16. The pixel electrodes PE1 to PE3 and the common electrode COM are formed of a translucent conductive material such as indium tin oxide (ITO) or indium zinc oxide (IZO). The pixel electrodes PE1 to PE3 are covered by the first alignment film AL1. The first alignment film AL1 covers the sixth insulating film 16 as well.
The counter substrate SUB2 is based on a translucent second insulation substrate 20 such as a glass substrate or a resin substrate. The counter substrate SUB2 includes a light-shielding layer BM, the color filters CFR, CFG, and CFB, an overcoat layer OC, and a second alignment film AL2 on a side on which the second insulation substrate 20 faces the array substrate SUB1.
As illustrated in
The color filters CFR, CFG, and CFB are positioned on the side on which the second insulation substrate 20 faces the array substrate SUB1, and end parts of each color filter overlap the light-shielding layer BM. The color filter CFR faces the pixel electrode PE1. The color filter CFG faces the pixel electrode PE2. The color filter CFB faces the pixel electrode PE3. In an example, the color filters CFR, CFG, and CFB are formed of resin materials colored in red, green, and blue, respectively.
The overcoat layer OC covers the color filters CFR, CFG, and CFB. The overcoat layer OC is formed of a translucent resin material. The second alignment film AL2 covers the overcoat layer OC. The first alignment film AL1 and the second alignment film AL2 are formed of, for example, a material having horizontal orientation.
As described above, the counter substrate SUB2 includes the light-shielding layer BM and the color filters CFR, CFG, and CFB. The light-shielding layer BM is disposed in regions facing wire parts such as the scanning lines G1, G2, and G3, the signal lines S1, S2, and S3, contact portions PA1, PA2, and PA3, the switching elements TrD1, TrD2, and TrD3 illustrated in
The counter substrate SUB2 includes the color filters CFR, CFG, and CFB in three colors in
The color filters CF are provided in the counter substrate SUB2 in
The array substrate SUB1 and the counter substrate SUB2 described above are disposed such that the first alignment film AL1 and the second alignment film AL2 face each other. The liquid crystal layer LC is encapsulated between the first alignment film AL1 and the second alignment film AL2. The liquid crystal layer LC is made of a negative liquid crystal material having negative dielectric constant anisotropy or a positive liquid crystal material having positive dielectric constant anisotropy.
The array substrate SUB1 faces a backlight unit IL, and the counter substrate SUB2 is positioned on a display surface side. The backlight unit IL is applicable in various kinds of forms, but description of a detailed structure thereof is omitted.
A first optical element OD1 including a first polarization plate PL1 is disposed on the outer surface of the first insulation substrate 10 or its surface facing the backlight unit IL. A second optical element OD2 including a second polarization plate PL2 is disposed on the outer surface of the second insulation substrate 20 or its surface on an observation position side. A first polarization axis of the first polarization plate PL1 and a second polarization axis of the second polarization plate PL2 are in, for example, a cross Nicol positional relation on an X-Y plane. The first optical element OD1 and the second optical element OD2 may each include another optical functional element such as a wave plate.
For example, in a state in which no voltage is applied to the liquid crystal layer LC when the liquid crystal layer LC is a negative liquid crystal material, the long axis of each liquid crystal molecule LM is initially oriented in the X direction on an X-Y plane. In a state in which voltage is applied to the liquid crystal layer LC, in other words, in an "on" state in which an electric field is formed between the pixel electrodes PE1 to PE3 and the common electrode COM, the orientation state of the liquid crystal molecule LM changes due to influence of the electric field. In the "on" state, the polarization state of incident linearly polarized light changes in accordance with the orientation state of the liquid crystal molecule LM as the light passes through the liquid crystal layer LC.
Typically in the HMD, an image displayed on each display panel 110 is enlarged through the corresponding lens 410 and observed. Since the lens 410 is in proximity to the display panel 110, geometric image distortion (hereinafter also simply referred to as "lens distortion") due to aberrations of the lens 410 occurs in the image observed by the user. It is difficult to prevent aberrations of the lens 410 because of the weight and size limitations of an HMD mounted on the head. Thus, the display system 1 performs image deformation processing to compensate for the lens distortion on an image (hereinafter also referred to as an "input image") input to the display system 1, thereby generating an image to be displayed on each display panel 110.
In
The image deformation processing in the display system 1 is performed by using texture mapping typically used in image processing. The following briefly describes the image deformation processing based on texture mapping.
In the example illustrated in
The data definition points P(0, 0), P(1, 0), . . . , P(n−1, m−1) illustrated in
In the present disclosure, two-dimensional numbers indicating a data position on an image (texture) of image data constituting texture are referred to as "texture coordinates". The range of values in the texture coordinate system is normalized by dividing by n−1 in the X direction and by m−1 in the Y direction of the XY coordinate system illustrated in
A coordinate system that defines texture coordinates is assumed to be a uv coordinate system. When k represents the position of a texel in the u direction, l represents the position thereof in the v direction, q represents the number of texels in the u direction, and p represents the number of texels in the v direction, the correlation between a position in the uv coordinate system and the position of a texel can be expressed by Expressions (1) and (2) below.
k=qu (1)
l=pv (2)
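For illustration only, the relation in Expressions (1) and (2) can be written as the following minimal Python sketch; the function name, the clamping at the texture edge, and the example numbers are assumptions added here and are not part of the disclosure.

```python
def texel_position(u, v, q, p):
    """Map a normalized texture coordinate (u, v) to a texel position (k, l).

    Implements k = q * u and l = p * v from Expressions (1) and (2), where q and p
    are the numbers of texels in the u and v directions. Clamping to the valid
    texel range is an added assumption for illustration.
    """
    k = min(max(q * u, 0.0), q - 1)
    l = min(max(p * v, 0.0), p - 1)
    return k, l

# Example: a 1920 x 1080 texture sampled at the normalized coordinate (0.5, 0.25).
print(texel_position(0.5, 0.25, q=1920, p=1080))  # -> (960.0, 270.0)
```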
In the present disclosure, processing in accordance with the pixel structure of each display panel 110 is performed in the above-described image deformation processing. Specifically, the control circuit 230 of the image generation device 200 generates the pixel values of all sub pixels SPix of the display panel 110 and transmits the pixel values to the display device 100 through the interface 240. Accordingly, image conversion processing such as resolution conversion processing and pixel value conversion processing in accordance with the pixel structure of the display panel 110 can be omitted at the display panel 110.
The following describes the image deformation processing in the present disclosure in detail.
Through the above-described image deformation processing based on texture mapping, a texture coordinate (uc, vc) in the image Mn yet to be subjected to the image deformation processing is obtained, the texture coordinate corresponding to a pixel position (x, y) in the image MIg subjected to the image deformation processing. The texture coordinate (uc, vc) carries no notion of sub pixels and corresponds to a representative position (x, y) of a pixel Pix, whereas the positions of the sub pixels SPixR, SPixG, and SPixB differ slightly from that representative position. The texture coordinate of each sub pixel SPix of the display panel 110 can therefore be obtained by applying a correction for the sub pixel SPix to the texture coordinate (uc, vc).
A texture coordinate (uQ(x, y), vQ(x, y)) to which correction for each sub pixel SPix of the display panel 110 is applied can be expressed by Expressions (3) and (4) below. In Expressions (3) and (4) below, Q is R, G, and B. Specifically, a coordinate (uR(x, y), vR(x, y)) represents the texture coordinate of the sub pixel SPixR, a coordinate (uG(x, y), vG(x, y)) represents the texture coordinate of the sub pixel SPixG, and a coordinate (uB(x, y), vB(x, y)) represents the texture coordinate of the sub pixel SPixB.
uQ(x,y)=ucQ(x,y)+ΔusQ(x,y) (3)
vQ(x,y)=vcQ(x,y)+ΔvsQ(x,y) (4)
In Expressions (3) and (4) above, the texture coordinate (ucQ(x, y), vcQ(x, y)) is constant (uc(x, y)=ucR(x, y)=ucG(x, y)=ucB(x, y)) among the sub pixels SPix when correction of chromatic aberration due to the corresponding lens 410 is not performed at the sub pixels SPixR, SPixG, and SPixB. A texture coordinate difference value (ΔusQ(x, y), ΔvsQ(x, y)) represents the difference value of each of the sub pixels SPixR, SPixG, and SPixB from the texture coordinate (uc, vc).
Four coefficients kxp, kxm, kyp, and kym illustrated in
As in the case of Expressions (3) and (4) above, a texture coordinate (usQ(x, y), vsQ(x, y)) where Q=R, G, and B can be expressed by Expressions (5) and (6) below through two-dimensional linear interpolation using the above-described four coefficients kxp, kxm, kyp, and kym. In Expressions (5) and (6) below, Q is R, G, and B. Specifically, for example, the coefficient kxp for the sub pixel SPixR is expressed as kxpR, the coefficient kxp for the sub pixel SPixG is expressed as kxpG, and the coefficient kxp for the sub pixel SPixB is expressed as kxpB. The same applies to the other coefficients.
When the center coordinate of one of the sub pixels SPixR, SPixG, and SPixB or the pixel Pix is set as a coordinate (uc(x, y), vc(x, y)) as a representative value, the texture coordinate difference value (ΔusQ(x, y), ΔvsQ(x, y)) can be expressed by Expressions (7) and (8) below.
ΔusQ(x,y)=usQ(x,y)−uc(x,y) (7)
ΔvsQ(x,y)=vsQ(x,y)−vc(x,y) (8)
When the chromatic aberration due to the lens 410 is not corrected, Expressions (9) and (10) below are obtained for the sub pixel SPixG as a representative value.
uc(x,y)=ucG(x,y) (9)
vc(x,y)=vcG(x,y) (10)
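For illustration only, the following Python sketch applies Expressions (3), (4), and (7) to (10): the per-sub-pixel difference value is added to the representative texture coordinate of the pixel Pix, with SPixG used as the reference. The table layout and the numeric values are placeholders assumed for this sketch.

```python
def subpixel_texture_coordinate(uc, vc, delta_us, delta_vs):
    """Expressions (3) and (4): add the per-sub-pixel difference value
    (delta_us, delta_vs) to the representative texture coordinate (uc, vc)."""
    return uc + delta_us, vc + delta_vs

# Placeholder difference values for one pixel position (x, y); the numbers are
# illustrative, not values from the disclosure.
delta_table = {
    "R": (0.0004, -0.0002),
    "G": (0.0000, 0.0000),   # SPixG taken as the reference (Expressions (9) and (10))
    "B": (-0.0004, 0.0002),
}

uc, vc = 0.5120, 0.2480      # representative coordinate of the pixel Pix
for q, (dus, dvs) in delta_table.items():
    print(q, subpixel_texture_coordinate(uc, vc, dus, dvs))
```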
When the chromatic aberration due to the lens 410 is corrected, two methods can be employed. In the first method, the shape of the polygon mesh for obtaining the texture coordinate (uc, vc) is reflected onto and differentiated among the sub pixels SPixR, SPixG, and SPixB in accordance with the chromatic aberration due to the lens 410, and accordingly, a texture image is mapped with reduced chromatic aberration in accordance with the image magnification of each of the sub pixels SPixR, SPixG, and SPixB of the lens 410. In other words, the coordinate (ucQ(x, y), vcQ(x, y)) is obtained for each of the sub pixels SPixR, SPixG, and SPixB.
In the second method of correcting the chromatic aberration due to the lens 410, a correction coefficient of the image magnification is applied for each of the sub pixels SPixR, SPixG, and SPixB. The following describes the second method of correcting the chromatic aberration due to the lens 410.
When the chromatic aberration due to the lens 410 is considered, it is first necessary to obtain the pixel position at which the difference in the image magnification caused by the chromatic aberration of the lens 410 is to be compensated for.
The difference in the image magnification of the lens among colors can be expressed as, for example, a change amount Δr of the image magnification with a distance r from the optical axis of the lens as illustrated in
Typically, the change amount ΔrR of the image magnification for the sub pixel SPixR, the change amount ΔrG of the image magnification for the sub pixel SPixG, and the change amount ΔrB of the image magnification for the sub pixel SPixB are generated due to dispersion of the refractive index of the lens and increase as the distance r from the optical axis increases. As illustrated in
When a predetermined pixel Pix is considered and the position of the pixel Pix on the display panel 110 is denoted by xlens_c, ylens_c, a distance rc from a position xlens0, ylens0 of the optical axis of the lens 410 to the pixel on the display panel 110 can be expressed by Expressions (11), (12), and (13) below.
Δxlens_c=xlens_c−xlens0 (11)
Δylens_c=ylens_c−ylens0 (12)
rc=√(Δxlens_c²+Δylens_c²) (13)
As expressed in Expression (14) below, with the sub pixel SPixG of the pixel Pix as a reference (rG=rc), a distance rR from the position xlens0, ylens0 of the optical axis of the lens 410 on the display panel 110 to an image of the sub pixel SPixR and a distance rB from the position xlens0, ylens0 of the optical axis of the lens 410 on the display panel 110 to an image of the sub pixel SPixB can be expressed by Expressions (15) and (16) below, respectively.
rG=rc (14)
rR=rc×(1+ΔrR(rc)) (15)
rB=rc×(1+ΔrB(rc)) (16)
As described above, with the sub pixel SPixG as a reference (ΔrG=0), the change amount ΔrB of the image magnification for the sub pixel SPixB typically has a positive value, and the change amount ΔrR of the image magnification for the sub pixel SPixR typically has a negative value. Thus, the positions of the sub pixels SPixR, SPixG, and SPixB on the display panel 110 are shifted.
An on-image position at which sub-pixel data to be input to the sub pixel SPixR exists corresponds to the image data at the position of an image displayed with a shift due to the chromatic aberration of the lens. It can be approximated by a relative position Δxcomp_R, Δycomp_R obtained by correcting the distance from the optical axis of the lens in accordance with the magnification, as expressed in Expressions (17) and (18) below.
An on-image position xcomp_R, ycomp_R at which sub-pixel data to be input to the sub pixel SPixR on the display panel 110 exists is expressed by Expressions (19) and (20) below by using the position xlens0, ylens0 of the optical axis of the lens 410 on the display panel 110.
xcomp_R=Δxcomp_R+xlens0 (19)
ycomp_R=Δycomp_R+ylens0 (20)
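For illustration only, the following Python sketch walks through Expressions (11) to (20) for the sub pixel SPixR. Because the bodies of Expressions (17) and (18) are not reproduced above, the sketch assumes that the relative position is obtained by scaling the offset from the optical axis by the magnification change, consistent with the description but to be read as an assumption; the magnification curve delta_r_R is a placeholder.

```python
import math

def compensated_position_R(x_lens_c, y_lens_c, x_lens0, y_lens0, delta_r_R):
    """Approximate on-image position for the sub pixel SPixR.

    Expressions (11) to (13): offset and distance from the lens optical axis.
    Expression (15): the R-color radius rR = rc * (1 + delta_r_R(rc)), with rG = rc.
    Scaling the offset by (1 + delta_r_R(rc)) stands in for Expressions (17) and
    (18), whose bodies are not reproduced here (assumption).
    """
    dx = x_lens_c - x_lens0                  # Expression (11)
    dy = y_lens_c - y_lens0                  # Expression (12)
    rc = math.hypot(dx, dy)                  # Expression (13)
    scale = 1.0 + delta_r_R(rc)              # Expression (15) relative to rG = rc
    dx_comp_R = dx * scale                   # assumed form of Expression (17)
    dy_comp_R = dy * scale                   # assumed form of Expression (18)
    x_comp_R = dx_comp_R + x_lens0           # Expression (19)
    y_comp_R = dy_comp_R + y_lens0           # Expression (20)
    return x_comp_R, y_comp_R

# Placeholder magnification-change curve for R (negative and growing with rc).
delta_r_R = lambda rc: -1.0e-4 * rc
print(compensated_position_R(1200.0, 700.0, 960.0, 540.0, delta_r_R))
```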
A position xtR, ytR of the sub pixel SPixR, taking into account the above-described positional shift due to chromatic aberration, can be expressed by Expressions (21) to (24) below by using the coefficients kxpR, kxmR, kypR, and kymR indicating positions attributable to the configuration of sub pixels.
xtR=kxpR+Δxcomp_R+xlens0 (for kxmR=0) (21)
xtR=−kxmR+Δxcomp_R+xlens0 (for kxpR=0) (22)
ytR=kypR+Δycomp_R+ylens0 (for kymR=0) (23)
ytR=−kymR+Δycomp_R+ylens0 (for kypR=0) (24)
The values of the position xtR, ytR of the sub pixel SPixR, which are expressed by Expressions (21) to (24) above, cannot be applied in a case of separation from a position xc, yc of a reference pixel Pix by one or more. Thus, the values of the position xtR, ytR of the sub pixel SPixR each need to be disassembled into an integer part and a part after the decimal point by using Expressions (25) to (28) below. In Expressions (25) and (27) below, "floor" is a function that takes out an integer part by discarding digits after the decimal point.
xiR=floor(xtR) (25)
kxR=xtR−xiR (26)
yiR=floor(ytR) (27)
kyR=ytR−yiR (28)
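The decomposition in Expressions (25) to (28) (and, identically, Expressions (33) to (36) for the sub pixel SPixB) can be written directly as below; the function name is an assumption for illustration.

```python
import math

def split_integer_fraction(xt, yt):
    """Expressions (25) to (28): split a sub-pixel position into an integer part
    (xi, yi) and a fractional part (kx, ky) by using floor."""
    xi = math.floor(xt)
    kx = xt - xi
    yi = math.floor(yt)
    ky = yt - yi
    return xi, kx, yi, ky

print(split_integer_fraction(102.73, 55.18))  # approximately (102, 0.73, 55, 0.18)
```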
The position xtB, ytB of the sub pixel SPixB, taking into account the above-described positional shift due to chromatic aberration, can be expressed by Expressions (29) to (32) below by using the coefficients kxpB, kxmB, kypB, and kymB indicating positions attributable to the configuration of sub pixels.
xtB=kxpB+Δxcomp_B+xlens0 (for kxmB=0) (29)
xtB=−kxmB+Δxcomp_B+xlens0 (for kxpB=0) (30)
ytB=kypB+Δycomp_B+ylens0 (for kymB=0) (31)
ytB=−kymB+Δycomp_B+ylens0 (for kypB=0) (32)
The values of the position xtB, ytB of the sub pixel SPixB, which are expressed by Expressions (29) to (32) above, cannot be applied in a case of separation from the position xc, yc of the pixel Pix by one or more. Thus, the values of the position xtB, ytB of the sub pixel SPixB each need to be disassembled into an integer part and a part after the decimal point by using Expressions (33) to (36) below. In Expressions (33) and (35) below, "floor" is a function that takes out an integer part by discarding digits after the decimal point.
xiB=floor(xtB) (33)
kxB=xtB−xiB (34)
yiB=floor(ytB) (35)
kyB=ytB−yiB (36)
The texture coordinate (usQ(x, y), vsQ(x, y)) for each sub pixel SPix can be expressed by Expressions (37) and (38) below, in place of Expressions (5) and (6) above, by using the coefficients xiQ, kxQ, yiQ, and kyQ (Q=R, G, and B) calculated with Expressions (11) to (36) above.
The texture coordinate difference value (ΔusQ(x, y), ΔvsQ(x, y)) expressed in Expressions (39) and (40) below is obtained by subtracting the position uc(xc, yc), vc(xc, yc) of the pixel Pix from the texture coordinate (uc(xtQ, ytQ), vc(xtQ, ytQ)) for each sub pixel SPix, which is expressed in Expressions (37) and (38) above.
ΔusQ(x,y)=uc(xtQ,ytQ)−uc(xc,yc) (39)
ΔvsQ(x,y)=vc(xtQ,ytQ)−vc(xc,yc) (40)
Compensation of a positional shift due to chromatic aberration and processing in accordance with the pixel arrangement of the display panel 110 can be simultaneously performed by holding the texture coordinate (usQ(x, y), vsQ(x, y)), which is generated by using Expressions (5) and (6) above or Expressions (37) and (38) above, as a coordinate transform table in the storage 220 of the image generation device 200 and applying the coordinate transform table to the image deformation processing.
Alternatively, compensation of a positional shift due to chromatic aberration and processing in accordance with the pixel arrangement of the display panel 110 can be simultaneously performed by holding the texture coordinate difference value (ΔusQ(x, y), ΔvsQ(x, y)), which is generated by using Expressions (7) and (8) above or Expressions (39) and (40) above, as a coordinate transform table in the storage 220 of the image generation device 200 and applying the coordinate transform table after having obtained the position uc(xc, yc), vc(xc, yc) of the pixel Pix in the image deformation processing.
Alternatively, processing in accordance with the pixel arrangement of the display panel 110 can be simultaneously performed by holding the texture coordinate difference value (ΔusQ(x, y), ΔvsQ(x, y)), which is generated by using Expressions (7) and (8) above or Expression (7) above to which Expression (37) above is applied and Expression (8) above to which Expression (38) above is applied, as a coordinate transform table in the storage 220 of the image generation device 200 and applying the coordinate transform table to the image deformation processing.
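For illustration only, the following Python sketch shows how a coordinate transform table holding difference values might be applied together with a table of representative texture coordinates during the image deformation processing. The array layout, the nearest-texel sampling, and all names are assumptions for this sketch; the disclosure performs the processing on a GPU and samples with the area averaging or multipoint averaging methods described below.

```python
import numpy as np

def deform_image(texture, uc_table, vc_table, delta_tables):
    """Generate per-sub-pixel values for the display panel 110.

    texture      : (H, W, 3) input image normalized to [0, 1]
    uc_table     : (rows, cols) representative u coordinate per pixel Pix
    vc_table     : (rows, cols) representative v coordinate per pixel Pix
    delta_tables : {"R": (du, dv), "G": ..., "B": ...} coordinate transform
                   tables holding the difference values per sub pixel
    Returns a (rows, cols, 3) array of sub-pixel values; nearest-texel sampling
    is used here only to keep the sketch short.
    """
    h, w, _ = texture.shape
    rows, cols = uc_table.shape
    out = np.zeros((rows, cols, 3), dtype=texture.dtype)
    for ch, q in enumerate("RGB"):
        du, dv = delta_tables[q]
        u = np.clip(uc_table + du, 0.0, 1.0)
        v = np.clip(vc_table + dv, 0.0, 1.0)
        k = np.minimum((u * w).astype(int), w - 1)   # Expressions (1) and (2)
        l = np.minimum((v * h).astype(int), h - 1)
        out[..., ch] = texture[l, k, ch]
    return out

# Usage with placeholder tables (identity mapping, zero difference values):
tex = np.random.rand(1080, 1920, 3)
vs, us = np.meshgrid(np.linspace(0, 1, 1080), np.linspace(0, 1, 1920), indexing="ij")
zeros = (np.zeros_like(us), np.zeros_like(vs))
panel = deform_image(tex, us, vs, {"R": zeros, "G": zeros, "B": zeros})
print(panel.shape)
```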
The following describes processing of deriving the coordinate transform table tb2Q using a polygon mesh with reference to
In the example illustrated in
In the first example illustrated in
The polygon mesh corresponding to the sub pixel SPixR is produced in a predetermined shape that compensates for the R-color lens magnification ratio and the distortion of the lens 410. Subsequently, the control circuit 230 obtains the texture coordinate (ucR(x, y), vcR(x, y)) corresponding to the sub pixel SPixR by texture mapping and generates a texture coordinate table corresponding to the sub pixel SPixR (step S102).
Subsequently, the control circuit 230 generates a polygon mesh corresponding to the sub pixel SPixG (step S103).
The polygon mesh corresponding to the sub pixel SPixG is produced in a predetermined shape that compensates for the G-color lens magnification ratio and the distortion of the lens 410. Subsequently, the control circuit 230 obtains the texture coordinate (ucG(x, y), vcG(x, y)) corresponding to the sub pixel SPixG by texture mapping and generates a texture coordinate table corresponding to the sub pixel SPixG (step S104).
Subsequently, the control circuit 230 generates a polygon mesh corresponding to the sub pixel SPixB (step S105).
The polygon mesh corresponding to the sub pixel SPixB is produced in a predetermined shape that compensates for the B-color lens magnification ratio and the distortion of the lens 410. Subsequently, the control circuit 230 obtains the texture coordinate (ucB(x, y), vcB(x, y)) corresponding to the sub pixel SPixB by texture mapping and generates a texture coordinate table corresponding to the sub pixel SPixB (step S106).
Then, the control circuit 230 calculates the texture coordinate (usR(x, y), vsR(x, y)), the texture coordinate (usG(x, y), vsG(x, y)), and the texture coordinate (usB(x, y), vsB(x, y)) for the respective sub pixels SPixR, SPixG, and SPixB by using Expressions (5) and (6) above. Taking one of the sub pixels SPixR, SPixG, and SPixB as a representative (for example, the position ucG(x, y), vcG(x, y) of the sub pixel SPixG as representative values), the control circuit 230 obtains the texture coordinate difference value (ΔusQ(x, y), ΔvsQ(x, y)) for each of the sub pixels SPixR, SPixG, and SPixB by using Expressions (7) and (8) above and generates the coordinate transform table tb2Q for each of the sub pixels SPixR, SPixG, and SPixB (step S107). The control circuit 230 then discards the texture coordinate tables generated for the sub pixels SPixR, SPixG, and SPixB at steps S102, S104, and S106 (step S108).
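A compact summary of steps S101 to S108 might look as follows; the three callables are placeholders standing in for the GPU-side mesh generation and texture mapping, and the single-pixel usage values are illustrative only, not values of the disclosure.

```python
def derive_tb2(generate_mesh, map_pixel_coords, apply_subpixel_offset):
    """Sketch of deriving the coordinate transform table tb2Q (Q = R, G, B).

    generate_mesh(q)             : polygon mesh shaped to compensate for the lens
                                   magnification ratio and distortion of color q
    map_pixel_coords(mesh)       : per-pixel texture coordinates (ucQ, vcQ)
                                   obtained by texture mapping
    apply_subpixel_offset(uv, q) : interpolation with the coefficients kxp, kxm,
                                   kyp, kym of sub pixel q (Expressions (5) and (6))
    """
    uc, us = {}, {}
    for q in ("R", "G", "B"):
        mesh = generate_mesh(q)                   # steps S101, S103, S105
        uc[q] = map_pixel_coords(mesh)            # texture coordinate table for q
        us[q] = apply_subpixel_offset(uc[q], q)   # steps S102, S104, S106
    # Step S107: difference values relative to the representative sub pixel SPixG
    # (Expressions (7) to (10)).
    uc_ref_u, uc_ref_v = uc["G"]
    tb2 = {q: (u - uc_ref_u, v - uc_ref_v) for q, (u, v) in us.items()}
    # Step S108: the per-color texture coordinate tables are discarded at this point.
    return tb2

# Trivial usage with single-pixel placeholders and no lens correction:
tb2 = derive_tb2(
    generate_mesh=lambda q: None,
    map_pixel_coords=lambda mesh: (0.5, 0.5),
    apply_subpixel_offset=lambda uv, q: (uv[0] + {"R": -1e-4, "G": 0.0, "B": 1e-4}[q], uv[1]),
)
print(tb2)
```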
In the example illustrated in
In the second example illustrated in
First, the control circuit 230 of the image generation device 200 generates a polygon mesh corresponding to the sub pixel SPixG (step S201). The polygon mesh is produced in a predetermined shape that compensates the G-color lens magnification ratio and distortion of the lens 410.
Subsequently, the control circuit 230 obtains the texture coordinate (ucG(x, y), vcG(x, y)) corresponding to the sub pixel SPixG by texture mapping and generates a texture coordinate table corresponding to the sub pixel SPixG (step S202).
Then, the control circuit 230 calculates the coefficients xiQ, kxQ, yiQ, and kyQ (Q=R, G, and B) by using Expressions (11) to (36) above and calculates the texture coordinate (usR(x, y), vsR(x, y)), the texture coordinate (usG(x, y), vsG(x, y)), and the texture coordinate (usB(x, y), vsB(x, y)) for the respective sub pixels SPixR, SPixG, and SPixB by using Expressions (37) and (38) above. The control circuit 230 obtains the texture coordinate difference value (ΔusQ(x, y), ΔvsQ(x, y)) for each of the sub pixels SPixR, SPixG, and SPixB by using Expressions (39) and (40) above and generates the coordinate transform table tb2Q for each of the sub pixels SPixR, SPixG, and SPixB (step S203). The control circuit 230 then discards the texture coordinate table generated for the sub pixel SPixG at step S202 (step S204).
The above-described coordinate transform table tb1Q (Q=R, G, and B) for each sub pixel SPix or the above-described coordinate transform table tb2Q (Q=R, G, and B) for each sub pixel SPix may be stored in the storage 220 of the image generation device 200 in advance in accordance with the lens 410 of the HMD. Alternatively, the table may be held in the storage 220 in accordance with the lens 410 of the HMD by having the control circuit 230 of the image generation device 200 execute the processing of deriving the coordinate transform table tb1Q or the coordinate transform table tb2Q when the display system 1 is activated. Alternatively, the coordinate transform table may be held in the storage 220 in accordance with the executed VR application 212 by having the control circuit 230 of the image generation device 200 execute the processing of deriving the coordinate transform table tb1Q or the coordinate transform table tb2Q as appropriate.
The following describes a method of deriving a pixel value applied to each sub pixel SPix of the display panel 110 in display operation for each display frame by using the texture coordinate difference value (ΔusQ(x, y), ΔvsQ(x, y)) or the texture coordinate (usQ(x, y), vsQ(x, y)).
As described above, in the HMD, an image displayed on each display panel 110 is enlarged through the lens 410 and observed, and thus image deformation processing for compensating for the lens distortion is performed on the image displayed on the display panel 110. In the image deformation processing, display data at the position x, y of a pixel Pix is sampled from the texture coordinate (usQ(x, y), vsQ(x, y)) of the original image, and color data not only at one point that matches the texture coordinate (usQ(x, y), vsQ(x, y)) but also at one or more other data definition points are sampled to calculate the pixel value of each of the sub pixels SPixR, SPixG, and SPixB. In this case, the pixel value of each of the sub pixels SPixR, SPixG, and SPixB needs to be generated in accordance with the disposition of the data definition points after being moved by the image deformation processing.
When sampling is performed for each sub pixel SPix, the problem of coloring (false color) occurs due to aliasing noise. In the image deformation processing based on texture mapping, the spacing of the texture to be sampled does not necessarily match the spacing of sampling (in other words, the pixel spacing of the display panel 110), which is typically countered by multisampling and blurring processing. In the present disclosure, since sampling is performed for each sub pixel SPix, state differences between sub pixels SPix are easily noticeable as false colors.
The present disclosure describes examples in which an area averaging method and a multipoint averaging method are used as methods of sampling for each sub pixel SPix.
In the example illustrated in
In the example illustrated in
A pixel value fQ transmitted to the display panel 110 can be expressed by Expression (43) below that applies gamma correction to the pixel value FQ expressed by Expression (41) above or Expression (42) above.
The multipoint averaging method, which samples and averages values from a plurality of positions based on the texture coordinate (usQ, vsQ), can easily accommodate texture deformation and rotation. Since the above-described coordinate transform table includes a plurality of texture coordinates in accordance with the position of the sub pixel SPix in the XY coordinate system, the size of the plane-filling figure on a texel can be obtained from values in the coordinate transform table referred to at sampling.
When the values of the texture coordinate (usQ, vsQ) or the texture coordinate difference value (ΔusQ, ΔvsQ) acquired from a plurality of coordinate transform tables corresponding to a plurality of sub pixels SPix include two or more difference values that are not parallel, the vectors E1 and E2 representing the size and orientation of the plane-filling figure in the uv coordinate system can be calculated by treating the plurality of difference values as linearly independent vectors. When the coordinate transform tables do not include two or more difference values that are not parallel, the vectors E1 and E2 in the uv coordinate system corresponding to the vectors V1 and V2 in a predetermined XY coordinate system may be calculated, and a table holding the values of the vectors E1 and E2 may be produced in advance and referred to.
When a is set to 1 in Expression (44) above, the position of each sampling point spij becomes positions illustrated in
When the value of a in Expression (44) above is set to 1 or less, an aliasing noise reducing effect (hereinafter also referred to as an “anti-aliasing effect”) decreases and the resolution can be increased. When the value of a in Expression (44) above is increased, the anti-aliasing effect can be increased.
The central position V0 of the plane-filling figure in the XY coordinate system, which is expressed by Expression (46) below, may be added as the sampling point.
Sp20=V0 (46)
The pixel value fQ transmitted to the display panel 110 can be expressed by Expression (48) below that applies gamma correction to the pixel value FQ expressed by Expression (44) above. However, when an input value to the display panel 110 is linear data, this processing is unnecessary and FQ may be directly output.
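Because the bodies of Expressions (44) to (48) are not reproduced above, the following Python sketch only illustrates the general idea of the multipoint averaging method: several points of a plane-filling figure around the texture coordinate (usQ, vsQ), spanned by the vectors E1 and E2 scaled by a, are sampled and averaged, and gamma correction is applied when the display panel 110 expects gamma-coded data. The offset pattern and the gamma value 2.2 are assumptions for illustration.

```python
import numpy as np

def multipoint_average(texture_channel, us, vs, E1, E2, a=1.0, gamma=2.2,
                       include_center=True):
    """Illustrative multipoint averaging for one sub pixel.

    texture_channel : (H, W) linear-light data of one color channel
    (us, vs)        : texture coordinate of the sub pixel, normalized to [0, 1]
    E1, E2          : uv-space vectors spanning the plane-filling figure
    a               : scale factor; a larger a strengthens the anti-aliasing effect
    The offset pattern below is an assumption, not the pattern of Expression (45).
    """
    h, w = texture_channel.shape
    offsets = [(0.5, 0.0), (-0.5, 0.0), (0.0, 0.5), (0.0, -0.5)]
    if include_center:
        offsets.append((0.0, 0.0))           # adding the center, as with Expression (46)
    samples = []
    for c1, c2 in offsets:
        u = us + a * (c1 * E1[0] + c2 * E2[0])
        v = vs + a * (c1 * E1[1] + c2 * E2[1])
        k = min(max(int(u * w), 0), w - 1)   # nearest texel, Expressions (1) and (2)
        l = min(max(int(v * h), 0), h - 1)
        samples.append(texture_channel[l, k])
    F = float(np.mean(samples))              # averaged linear pixel value FQ
    return F ** (1.0 / gamma)                # gamma correction (assumed exponent 1/2.2)

tex_r = np.random.rand(1080, 1920)
print(multipoint_average(tex_r, 0.5, 0.5, E1=(1 / 1920, 0.0), E2=(0.0, 1 / 1080)))
```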
The following describes specific examples of the image deformation processing executed for each display frame with reference to
The first example illustrated in
If the pixel value generation has not ended for all pixels of one frame (No at step S302), the control circuit 230 refers to the coordinate transform table tb1R to acquire a texture coordinate to be sampled at the sub pixel SPixR of the pixel Pix set at step S301 (step S303) and calculates the pixel value of the sub pixel SPixR by using the above-described sampling method (step S304).
Subsequently, the control circuit 230 refers to the coordinate transform table tb1G to acquire a texture coordinate to be sampled at the sub pixel SPixG of the pixel Pix set at step S301 (step S305) and calculates the pixel value of the sub pixel SPixG by using the above-described sampling method (step S306).
Subsequently, the control circuit 230 refers to the coordinate transform table tb1B to acquire a texture coordinate to be sampled at the sub pixel SPixB of the pixel Pix set at step S301 (step S307) and calculates the pixel value of the sub pixel SPixB by using the above-described sampling method (step S308).
Subsequently, the control circuit 230 returns to step S301 and repeats the processing up to step S308. If the pixel value generation has ended for all pixels of one frame at step S302 (Yes at step S302), the image deformation processing for one frame ends.
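The loop of steps S301 to S308 could be summarized as in the sketch below; the table layout and the sample callable, which stands in for the area averaging or multipoint averaging method, are assumptions for illustration.

```python
def deform_frame_tb1(tb1, texture, rows, cols, sample):
    """First per-frame example (steps S301 to S308).

    tb1    : {"R": ..., "G": ..., "B": ...}; tb1[q][y][x] is the texture coordinate
             (usQ, vsQ) to be sampled for sub pixel q of the pixel Pix at (x, y)
    sample : sample(texture, q, (u, v)) -> pixel value of sub pixel q
    Returns frame[y][x] = (R, G, B) sub-pixel values for one frame.
    """
    frame = [[None] * cols for _ in range(rows)]
    for y in range(rows):                             # step S301: set the target pixel
        for x in range(cols):
            frame[y][x] = tuple(
                sample(texture, q, tb1[q][y][x])      # steps S303 to S308
                for q in ("R", "G", "B")
            )
    return frame                                      # step S302: all pixels processed

# Minimal usage with a flat 2 x 2 table and a dummy sampler:
tb1 = {q: [[(0.5, 0.5)] * 2 for _ in range(2)] for q in "RGB"}
print(deform_frame_tb1(tb1, texture=None, rows=2, cols=2,
                       sample=lambda tex, q, uv: round(uv[0], 2)))
```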
The second example illustrated in
If the pixel value generation has not ended for all pixels of one frame (No at step S402), the control circuit 230 acquires a texture coordinate (for example, the texture coordinate (ucG(x, y), vcG(x, y)) corresponding to the sub pixel SPixG) as a reference for the coordinate transform table tb2Q (Q=R, G, and B) of the pixel Pix set at step S401 by using a reference polygon mesh (for example, a polygon mesh corresponding to the sub pixel SPixG) (step S403).
The control circuit 230 refers to the coordinate transform table tb2R to acquire a texture coordinate corresponding to the sub pixel SPixR, which is to be sampled at the sub pixel SPixR of the pixel Pix set at step S401 (step S404). Specifically, the control circuit 230 calculates the texture coordinate (usR(x, y), vsR(x, y)) by using Expressions (49) and (50) below, which are transformed from Expressions (7) to (10) above.
usR(x,y)=ucG(x,y)+ΔusR(x,y) (49)
vsR(x,y)=vcG(x,y)+ΔvsR(x,y) (50)
Then, the control circuit 230 calculates the pixel value of the sub pixel SPixR by using the above-described sampling method (step S405).
Subsequently, the control circuit 230 refers to the coordinate transform table tb2G to acquire a texture coordinate corresponding to the sub pixel SPixG, which is to be sampled at the sub pixel SPixG of the pixel Pix set at step S401 (step S406). Specifically, the control circuit 230 calculates the texture coordinate (usG(x, y), vsG(x, y)) by using Expressions (51) and (52) below, which are transformed from Expressions (7) to (10) above.
usG(x,y)=ucG(x,y)+ΔusG(x,y) (51)
vsG(x,y)=vcG(x,y)+ΔvsG(x,y) (52)
Then, the control circuit 230 calculates the pixel value of the sub pixel SPixG by using the above-described sampling method (step S407).
Subsequently, the control circuit 230 refers to the coordinate transform table tb2B to acquire a texture coordinate corresponding to the sub pixel SPixB, which is to be sampled at the sub pixel SPixB of the pixel Pix set at step S401 (step S408). Specifically, the control circuit 230 calculates the texture coordinate (usB(x, y), vsB(x, y)) by using Expressions (53) and (54) below, which are transformed from Expressions (7) to (10) above.
usB(x,y)=ucG(x,y)+ΔusB(x,y) (53)
vsB(x,y)=vcG(x,y)+ΔvsB(x,y) (54)
Then, the control circuit 230 calculates the pixel value of the sub pixel SPixB by using the above-described sampling method (step S409).
Subsequently, the control circuit 230 returns to step S401 and repeats the processing up to step S409. If the pixel value generation has ended for all pixels of one frame at step S402 (Yes at step S402), the image deformation processing for one frame ends.
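The second per-frame example (steps S401 to S409) differs only in that the reference texture coordinate (ucG, vcG) is obtained for each pixel and the difference values of the coordinate transform table tb2Q are added to it, as in Expressions (49) to (54). In the sketch below, reference_uv stands in for the texture mapping with the reference polygon mesh, and the table layout is an assumption.

```python
def deform_frame_tb2(tb2, reference_uv, texture, rows, cols, sample):
    """Second per-frame example (steps S401 to S409).

    tb2          : {"R": ..., "G": ..., "B": ...}; tb2[q][y][x] is the difference
                   value (delta_usQ, delta_vsQ) of sub pixel q at pixel (x, y)
    reference_uv : reference_uv(x, y) -> (ucG, vcG), the reference texture
                   coordinate obtained with the reference polygon mesh (step S403)
    sample       : stands in for the area averaging or multipoint averaging method
    """
    frame = [[None] * cols for _ in range(rows)]
    for y in range(rows):                               # step S401
        for x in range(cols):
            ucG, vcG = reference_uv(x, y)               # step S403
            values = []
            for q in ("R", "G", "B"):                   # steps S404 to S409
                dus, dvs = tb2[q][y][x]
                values.append(sample(texture, q, (ucG + dus, vcG + dvs)))  # (49)-(54)
            frame[y][x] = tuple(values)
    return frame                                        # step S402
```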
The following describes specific examples of the pixel arrangement of each display panel 110.
A pixel Pix (0, 1) includes a sub pixel SPixR (1, 1), a sub pixel SPixG (0, 2), and a sub pixel SPixB (1, 2). A pixel Pix (1, 1) includes a sub pixel SPixR (2, 2), a sub pixel SPixG (3, 2), and a sub pixel SPixB (3, 1). A pixel Pix (2, 1) includes a sub pixel SPixR (5, 2), a sub pixel SPixG (5, 1), and a sub pixel SPixB (4, 2).
In the pixel configuration illustrated in
The following describes an example in which the display panel 110 according to the embodiment in the pixel configuration illustrated in
sx=x%3 (55)
sy=y%2 (56)
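Expressions (55) and (56) locate a logical pixel within the repeating unit of this pixel configuration (three columns by two rows); a direct sketch follows, with only the modulus values taken from the expressions above.

```python
def cell_position(x, y):
    """Expressions (55) and (56): position (sx, sy) of the pixel Pix (x, y)
    within the repeating unit of the pixel configuration."""
    return x % 3, y % 2

for y in range(2):
    print([cell_position(x, y) for x in range(6)])
# row 0: (0, 0) (1, 0) (2, 0) (0, 0) (1, 0) (2, 0)
# row 1: (0, 1) (1, 1) (2, 1) (0, 1) (1, 1) (2, 1)
```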
As described above, in the pixel configuration illustrated in
The area averaging method or the multipoint averaging method described above can be used as a method of sampling each pixel value in the pixel configuration illustrated in
V1=0.96x−(⅔)y (57)
V2=−0.5x−0.72y (58)
When a polygon mesh corresponding to the sub pixel SPixG is used as a reference polygon mesh to derive the coordinate transform table tb2Q (Q=R, G, and B), the coordinate transform table tb2G of the sub pixel SPixG contains no influence of chromatic aberration correction; only the difference between the virtual central position of the pixel Pix and the central position of the sub pixel SPixG is reflected. Thus, the vectors E1 and E2 in the uv coordinate system corresponding to the vectors V1 and V2 can be derived from values included in the coordinate transform table tb2G of the sub pixel SPixG.
As illustrated in
VG10=−0.25x+0.5y (59)
VG11=0.25x+(⅙)y (60)
The vectors V1 and V2 of the plane-filling figure illustrated in
V1=−1.96VG10+1.88VG11 (61)
V2=−0.58VG10−2.58VG11 (62)
Expressions (59), (60), (61), and (62) above approximately hold in the uv coordinate system as well; thus, in the processing of obtaining actual pixel values, a vector EG10 and a vector EG11 in the texture coordinate system can be produced for the vectors VG10 and VG11, and the vector E1 for the vector V1 and the vector E2 for the vector V2 can be calculated. Specifically, approximate vectors EG10 and EG11 are obtained by referring, in the coordinate transform table tb2G of the sub pixel SPixG, to the difference value corresponding to the sub pixel SPixG of the pixel Pix (1, 0) and the difference value corresponding to the sub pixel SPixG of the pixel Pix (1, 1), these pixels being positioned close to the pixel to be sampled. The vectors E1 and E2 can then be calculated by Expressions (61) and (62) above by using the vectors EG10 and EG11. The sampling point derivation by Expressions (44) to (48) above can be performed by using the vectors E1 and E2 thus calculated.
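The coefficients of Expressions (61) and (62) can be applied to the uv-space vectors EG10 and EG11 read from the coordinate transform table tb2G, as described above. The sketch below applies those coefficients; the vectors passed in are placeholders scaled from Expressions (59) and (60) for illustration, not values read from an actual table.

```python
import numpy as np

def plane_filling_vectors(EG10, EG11):
    """Apply the coefficients of Expressions (61) and (62) to the uv-space
    vectors EG10 and EG11 to obtain E1 and E2 of the plane-filling figure."""
    EG10, EG11 = np.asarray(EG10), np.asarray(EG11)
    E1 = -1.96 * EG10 + 1.88 * EG11   # Expression (61)
    E2 = -0.58 * EG10 - 2.58 * EG11   # Expression (62)
    return E1, E2

# Placeholder uv-space values mirroring Expressions (59) and (60), scaled by an
# assumed 1920 x 1080 texel pitch; illustrative only.
EG10 = (-0.25 / 1920, 0.5 / 1080)
EG11 = (0.25 / 1920, (1.0 / 6.0) / 1080)
print(plane_filling_vectors(EG10, EG11))
```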
In the pixel configuration illustrated in
In the pixel configuration illustrated in
The coordinates sx and sy of each pixel (sx, sy) can be expressed by Expressions (63) and (64) below, respectively. In Expressions (63) and (64) below, % represents calculation that obtains the remainder of division.
sx=x%3 (63)
sy=y%3 (64)
In the example illustrated in
A pixel Pix (0, 1) includes a sub pixel SPixB (0, 1) and a sub pixel SPixR (1, 1). A pixel Pix (1, 1) includes a sub pixel SPixG (2, 1) and a sub pixel SPixB (3, 1). A pixel Pix (2, 1) includes a sub pixel SPixR (4, 1) and a sub pixel SPixG (5, 1).
A pixel Pix (0, 2) includes a sub pixel SPixG (0, 2) and a sub pixel SPixB (1, 2). A pixel Pix (1, 2) includes a sub pixel SPixR (2, 2) and a sub pixel SPixG (3, 2). A pixel Pix (2, 2) includes a sub pixel SPixB (4, 2) and a sub pixel SPixR (5, 2).
In this example, the control circuit 230 of the image generation device 200 transmits pixel data including the pixel values of the sub pixels SPixR, SPixG, and SPixB for each pixel Pix (sx, sy) in the pixel configuration illustrated in
When pixel data including the pixel values of the sub pixels SPixR, SPixG, and SPixB for each pixel Pix (sx, sy) in the pixel configuration illustrated in
In order to eliminate transmission waste, for example, data is allocated to disposition of a pixel including the pixel values of the sub pixels SPixR, SPixG, and SPixB for each pixel Pix (sx, sy) in the pixel configuration illustrated in
A pixel configuration when display is performed by using two of the sub pixels SPixR, SPixG, and SPixB as one pixel unit is not limited to the pixel configuration illustrated in
The format of transmission of pixel data including the pixel values of the sub pixels SPixR, SPixG, and SPixB from the control circuit 230 of the image generation device 200 to each display panel 110 is not limited to transmission of pixel data including the pixel values of the sub pixels SPixR, SPixG, and SPixB for each pixel Pix (sx, sy) in the pixel configuration illustrated in
In the example illustrated in
As described above, the coordinate transform table tb1Q illustrated in
In the example illustrated in
In a case in which the pixel arrangement of each display panel 110 is the RGB stripe arrangement as illustrated in
According to the present embodiment, the display system 1 can transmit and receive image data in a data amount in accordance with the pixel arrangement of each display panel.
Preferable embodiments of the present disclosure are described above, but the present disclosure is not limited to such embodiments. Contents disclosed in the embodiments are merely exemplary, and various kinds of modifications are possible without departing from the scope of the present disclosure. Any modification performed as appropriate without departing from the scope of the present disclosure belongs to the technical scope of the present disclosure.
This application is a continuation of PCT international application Ser. No. PCT/JP2021/047139 filed on Dec. 20, 2021 which designates the United States, incorporated herein by reference, and which claims the benefit of priority from Japanese Patent Application No. 2021-014669, filed on Feb. 1, 2021, incorporated herein by reference.
References Cited

U.S. Patent Documents:
US 10,074,700 B2, Nakamura et al., Sep. 2018
US 2011/0285753 A1, Park et al., Nov. 2011
US 2017/0039911 A1, Guo et al., Feb. 2017
US 2018/0061307 A1, Inoue, Mar. 2018
US 2019/0156466 A1, Cho et al., May 2019
US 2022/0146828 A1, Ohba, May 2022

Foreign Patent Documents:
JP 2007-325043, Dec. 2007
JP 2011-242744, Dec. 2011
JP 2017-044768, Mar. 2017
WO 2020/170454, Aug. 2020

Other Publications:
International Search Report issued in International Patent Application No. PCT/JP2021/047139 on Mar. 1, 2022, and English translation of same, 6 pages.
Written Opinion issued in International Patent Application No. PCT/JP2021/047139 on Mar. 1, 2022, 4 pages.
Office Action issued in related Japanese Patent Application No. 2022-578149 on Mar. 19, 2024, and English translation of same, 7 pages.