This application is a national stage entry of International Application No. PCT/KR2017/000242, filed on Jan. 9, 2017, which is based on and claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2016-0146905, filed on Nov. 4, 2016, in the Korean Intellectual Property Office, and Russian Patent Application No. 2016100444, filed on Jan. 12, 2016, in the Russian Patent Office, the disclosures of which are incorporated herein by reference in their entireties.
The present disclosure relates to display technology, and more particularly, to a display device including a compound lens used to form images having different resolutions (i.e., different pixel densities), thereby enhancing a three-dimensional (3D) effect sensed by a viewer.
A virtual reality device generally includes a helmet or glasses and has an optical device in front of each eye of a user wearing the device. There is also a monocular device having a built-in micro display and having an optical device in front of only one eye of a user. In most cases, a related-art virtual reality helmet or pair of glasses includes a display and a lens located in front of each eye. The lens serves as an eyepiece that projects an image formed by the display onto the retina of the human eye.
A well-known device has an optical device worn on the head of a user and located in front of each eye, a display, and a frame having a mechanical part for mounting and moving the optical device and the display. The optical device refers to an eyepiece located in front of each eye of the user. A left-eye eyepiece including at least one lens projects the image formed by one half of the display onto the left eye of the user. A right-eye eyepiece operates in the same way for the right eye of the user.
However, the above lenses are designed to provide approximately the same resolution in the central image region and the peripheral image region. As a result, the resolution of the peripheral image region is higher than the resolution of the human eye there, while the resolution of the central image region is considerably lower than the resolution of the human eye. These lenses therefore provide excessive resolution in the peripheral image region and insufficient resolution in the central image region, which reduces the sharpness of displayed images and weakens the 3D impression perceived by the viewer.
Provided is a compound lens that may improve image quality by providing a higher resolution in a central image region and a lower resolution in a peripheral image region.
Provided is a display device that may improve image quality by providing a higher resolution in a central image region and a lower resolution in a peripheral image region.
According to an aspect of the present disclosure, a compound lens includes: a central lens portion having a first focal length; and at least one peripheral lens portion having a second focal length and surrounding the central lens portion, wherein the first focal length is greater than the second focal length.
The central lens portion and the at least one peripheral lens portion may have concentric focal planes.
The central lens portion and the at least one peripheral lens portion may include polymethyl methacrylate (PMMA), glass, or optical plastic.
The central lens portion may have a circular shape and the at least one peripheral lens portion may have an annular shape.
The central lens portion and the at least one peripheral lens portion may be concentrically arranged.
The central lens portion and the at least one peripheral lens portion may form a Fresnel lens together.
The central lens portion may include any one of a convex lens, a concave lens, a biconvex lens, a biconcave lens, a positive meniscus lens, a negative meniscus lens, and a lens having two arbitrarily curved surfaces.
The at least one peripheral lens portion may include any one of a biconvex lens, a biconcave lens, and a lens having two arbitrarily curved surfaces.
The central lens portion and the at least one peripheral lens portion may include an optical diffractive element or an optical holographic element.
At least one of the central lens portion and the at least one peripheral lens portion may be coated with an anti-reflection film configured to increase lens transmittance.
The at least one peripheral lens portion may include a plurality of peripheral lens portions surrounding the central lens portion, wherein the plurality of peripheral lens portions may have focal lengths fi, where i indexes the peripheral lens portions (i=1, 2, 3, . . . n, with n being the number of peripheral lens portions) and satisfies f0>f1>f2>f3> . . . >fn, where f0 is the first focal length of the central lens portion, and the central lens portion and the plurality of peripheral lens portions may have coincident focal planes.
According to another aspect of the present disclosure, a display device includes: a frame; a processor mounted in the frame and configured to select an image to be displayed to a viewer; a display attached to the frame and configured to display the image selected by the processor; and two compound lenses.
Each of the two compound lenses may include a central lens portion having a first focal length and at least one peripheral lens portion having a second focal length and surrounding the central lens portion, and the first focal length may be greater than the second focal length.
Each of the compound lenses may be installed at a position of the frame facing a corresponding eye of the viewer and may be configured to project one half of a displayed image onto that eye.
The display device may further include a first adjuster configured to adjust an interocular distance for the viewer by moving the compound lens perpendicularly to an optical axis of the compound lens.
The display device may further include a second adjuster configured to change a distance between the display and the compound lens to compensate for a refraction error of the viewer's eyes by moving the compound lens along an optical axis of the compound lens.
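A rough numerical check of this mechanism may be useful. The sketch below estimates the axial travel needed to compensate a given refractive error, assuming the thin-lens relation that shifting the lens-to-display distance by Δz changes the virtual-image vergence by about Δz/f′² diopters (f′ in meters); the focal length and error values are illustrative assumptions, not taken from this disclosure.

```python
# Hedged sketch: axial travel needed to compensate a viewer's refractive
# error with an eyepiece of focal length f'. Assumes the thin-lens relation
# delta_vergence [diopters] ~= delta_z / f'^2 (f' in meters); the values
# below are illustrative, not taken from this disclosure.
def axial_travel_mm(refractive_error_diopters: float, focal_length_mm: float) -> float:
    f_m = focal_length_mm / 1000.0
    return refractive_error_diopters * f_m ** 2 * 1000.0

# A -3 D myopic viewer with an assumed 46 mm eyepiece needs ~6.3 mm of travel.
print(axial_travel_mm(3.0, 46.0))
```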
The processor may be configured to compensate for image distortion provided by the compound lens by pre-distorting an image displayed by the display.
Hereinafter, example embodiments will be described in detail with reference to the accompanying drawings. However, embodiments of the present disclosure may be implemented in various other forms and should not be limited to any structure or function described below; rather, the embodiments are provided so that this disclosure will be thorough and complete. It will be apparent to those of ordinary skill in the art that the scope of the present disclosure covers the embodiments described herein, whether they are implemented independently or in combination with other embodiments of the present disclosure. For example, a device described herein may be implemented by using any number of the embodiments. In addition, any embodiment of the present disclosure may be implemented by using one or more of the elements set forth in the appended claims.
The term “example” is used herein to mean “serving as an example or illustration.” Any embodiment described herein as an “example” should not be construed as preferred or advantageous over other embodiments.
In addition, directional terms such as “central” and “peripheral” are used with reference to the orientation of the drawings being described. Because components of embodiments of the present disclosure may be positioned in various orientations, the directional terms are used for purposes of illustration and impose no limitation. It will be understood that other embodiments may be used and that structural or logical changes may be made without departing from the scope of the present disclosure.
The human eye has a relatively high resolution in a central region of the viewing angle and a relatively low resolution in a peripheral region of the viewing angle. A related-art lens is designed to have almost the same resolution in the peripheral region and the central region. The compound lens according to an example embodiment may be configured to have a relatively high resolution in a central region of the viewing angle and a relatively low resolution in a peripheral region of the viewing angle, so that its resolution distribution may be matched to that of the human eye.
A virtual reality device may have a viewing angle of 120 degrees or more, but the number of display pixels may not be sufficient to provide high resolution across the full viewing angle. As described above, the human eye has a higher resolution in the central image region than in the peripheral image region. Since a related-art single lens has approximately the same resolution (about 15 pixels/°) in the central and peripheral image regions, the resolution of the peripheral image region may exceed the resolution of the human eye. Also, as illustrated in
To understand this, consider the characteristics of a single lens for a virtual reality device. Referring to
Herein, D0 is the height of a paraxial chief ray on an image plane or a display plane, f′ is a focal length, and h is the height of the chief ray on the principal plane at the edge of the viewing angle (FOV), defined by the pupil position p.
h = p·tan α1 (2)
A signed viewing angle may be obtained by substituting tan α1′ from the first expression of Equation (1) into the second.
The chief ray heights in a paraxial lens and a real lens may be different from each other due to a distortion Δ defined by the following equation.
Herein, D is the chief ray height on the image plane. In this case, D is equal to half of the diagonal of a display 31.
Also, D0 may be obtained from Equation (4).
The following equation may be obtained by substituting Equation (5) in Equation (3).
In Equation (6), one parameter may be adjusted: the focal length f′. The distortion Δ in a lens depends mainly on the viewing angle (FOV) and the aperture stop position. In a modern virtual reality device, the FOV is about 90 degrees, and the aperture stop may coincide with the pupil. Based on anthropometric data, the pupil position p may be defined as being about 10 mm to about 20 mm away from the lens. This may lead to a barrel distortion of about −25% to about −45%. D may not be varied over a wide range: when the value of D is small, an optical system including several components may be required to correct the aberrations of the resulting high-power lens, and when the value of D is large, the overall dimensions of the device become large.
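For a feel for these numbers, the sketch below evaluates a simplified reading of Equation (6), assuming it rearranges to f′ ≈ D/((1 + Δ)·tan α1′) and neglecting the pupil-offset term h; the display half-diagonal is an illustrative assumption.

```python
import math

# Hedged sketch: a simplified reading of Equation (6), assuming
# f' ~= D / ((1 + delta) * tan(alpha')), with the pupil-offset term h
# neglected. D and delta below are illustrative assumptions.
D_half_diagonal_mm = 30.0            # half of the display diagonal (assumed)
delta = -0.35                        # barrel distortion inside the -25%..-45% range
alpha_half_fov = math.radians(45.0)  # 90-degree FOV -> 45-degree half angle

f_prime = D_half_diagonal_mm / ((1 + delta) * math.tan(alpha_half_fov))
print(f"f' ~= {f_prime:.1f} mm")     # roughly 46 mm under these assumptions
```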
The angular resolution averaged over the entire FOV may be defined as follows.
Herein, N denotes the number of pixels across the display diagonal. For a modern display with a suitable diagonal D, the pixel number N is 3,000 or less.
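Equation (7) itself is not reproduced above, but one reading consistent with the figure of about 15 pixels/° quoted earlier divides the pixels available to one eye (half the display) by the full viewing angle; this is an assumption, worked through below.

$$\Phi_0 \approx \frac{N/2}{\mathrm{FOV}} = \frac{3000/2}{100^\circ} = 15\ \text{pixels per degree}$$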
The angular resolution in a central region of the FOV (defined by α2) may be proportional to a central region (d) of the display.
As described above, the distortion and the focal length of a normal lens may be selected such that φ=Φ0.
The dependence between α2 and the display region is defined as Equation (9).
Herein, Δ0 is a distortion in the central image region.
Equations (6) to (9) show that increasing the FOV requires reducing the angular resolution, and vice versa. However, it is important for the user of a virtual reality device to have both a wide FOV and a high angular resolution at the same time. To obtain both, Equations (6) and (9) indicate that different values of f′ are needed in the central image region and the peripheral image region. In that case, however, one of the image regions may be out of focus and the image sensed by the eyes may be blurred. For example, referring to
The position of the principal plane H0H0′ may be determined by a meniscus lens (see
In a compound lens according to an example embodiment, a central lens portion and at least one peripheral lens portion may be formed by using a molding or processing technology.
Several such combinations of a central lens portion and a peripheral lens portion are illustrated in the accompanying drawings.
In addition, a central lens portion and a peripheral lens portion may each have two arbitrarily curved surfaces. Some of the lens combinations described above may be used as part of a compound optical system for aberration correction.
Meanwhile, the condition of an in-focus position may be derived from the geometric consideration illustrated in
s′0H′ + f′0 − t = f′ + s′H′ − Δf′ (10)
The compound lens 66 may include an incidence surface (IS) where light enters and an exit surface (ES) where light exits. The exit surface (ES) may include a first lens surface 61 of the central lens portion 64 and a second lens surface 62 of the peripheral lens portion 65. In Equation (10), s′0H′ and s′H′ denote the positions of the principal planes H0′ and H′ corresponding to the first lens surface 61 and the second lens surface 62, and t denotes the distance between the point where the first lens surface 61 intersects the optical axis 63 and the point where the second lens surface 62 intersects the optical axis 63. Here, the second lens surface 62 is extended toward the axis by a continuous virtual surface (VS) of the same curvature, and the intersection of this virtual surface (VS) with the optical axis 63 is used. Δf′ denotes an allowable defocus of the peripheral lens portion 65 with respect to the central lens portion 64 (see
Substituting f′0 − f′ = Δf′ into Equation (10) gives the following.
s′H′ = s′0H′ + 2Δf′ − t (11)
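Writing the substitution out: since f′0 − f′ = Δf′, Equation (10) rearranges in one step.

$$s'_{H'} = s'_{0H'} + (f'_0 - f') + \Delta f' - t = s'_{0H'} + 2\Delta f' - t$$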
Also, the following relations are known from geometrical optics.
s′H′ = f′(d0 + Δd)(1 − n)/(n·r1)
s′0H′ = f′0·d0·(1 − n0)/(n0·r01) (12)
Herein, n0 and n are the refractive indices of the central lens portion 64 and the peripheral lens portion 65, and d0 is the thickness of the central lens portion 64 along the optical axis 63. Δd denotes the thickness difference between the central lens portion 64 and the peripheral lens portion 65, where the thickness of the peripheral lens portion 65 is taken along the optical axis 63 under the assumption that the peripheral lens portion 65 extends to the optical axis 63. r01 is the radius of curvature of the first lens surface 61 of the central lens portion 64, and r1 is the radius of curvature of the second lens surface 62 of the peripheral lens portion 65. The first lens surface 61 may represent the exit surface of the central lens portion 64, and the second lens surface 62 may represent the exit surface of the peripheral lens portion 65.
When Equation (12) is substituted into Equation (11), the relationship between the curvature radius of the central lens portion 64 and the curvature radius of the peripheral lens portion 65 may be obtained.
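The algebra is mechanical, so a symbolic sketch may be clearer than writing it out; the following solves Equations (11) and (12) for the peripheral radius r1 using the parameter names from the text, with no numeric design values assumed.

```python
import sympy as sp

# Symbols from Equations (11) and (12): focal lengths f' and f'0, refractive
# indices n and n0, central thickness d0, thickness difference dd, axial step t,
# allowable defocus df, and radii of curvature r01 and r1.
f, f0, n, n0, d0, dd, t, df, r01, r1 = sp.symbols(
    "f f0 n n0 d0 dd t df r01 r1", real=True, nonzero=True
)

s_H = f * (d0 + dd) * (1 - n) / (n * r1)   # Equation (12), peripheral portion
s0_H = f0 * d0 * (1 - n0) / (n0 * r01)     # Equation (12), central portion

# Equation (11): s'H' = s'0H' + 2*df - t, solved for the peripheral radius r1.
r1_expr = sp.solve(sp.Eq(s_H, s0_H + 2 * df - t), r1)[0]
print(sp.simplify(r1_expr))
```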
In a compound lens having zero distortion, images may be formed at different scales and a gap may be formed between a central image region and a peripheral image region. For example, when a uniform grid is displayed, the image seen by the eyes will appear as in
When Equation (14) is satisfied, the image of
Another method of removing image doubling is to apply a hood or shield (see
The compound lens according to an example embodiment has no screen door effect. The screen door effect is a visual artifact in which the fine lines separating the pixels of a display become visible in the virtual reality image.
Referring to the drawings, the image pre-distortion algorithm operates as follows.
The coordinate cycle of a display image sets the coordinates (x;y). The coordinate (x_o;y_o) of the original image sets an ideal beam corresponding to a height h2 (see
The following parameters of the ideal beam and the real beam are known functions: φ, the viewing angle (FOV); h, the height of a beam on the display; φ_ideal(h), the ideal beam angle as a function of beam height; and h_real(φ), the real beam height on the display as a function of angle (see
In order to find a matrix l_t(x;y), it may be necessary to determine l_o(x_o;y_o) (where x_o and y_o are functions of x,y). That is, it may be necessary to determine l_o(x_o(x;y);y_o(x;y)).
The main cycle of the algorithm runs over the image display coordinates (x, y). When the coordinates (x, y), measured from the center of the new image, are known, a radius R may be calculated.
R(x; y) = √(x² + y²) (15)
α = arctan(y/x) (16)
Then, the following may be obtained.
h2(x; y) = R(x; y) (17)
The current ideal beam angle is as follows.
φ = φ_ideal(h2) (18)
φ_real = φ (19)
When the angle is known, the height may be determined as follows.
h1 = h_real(φ_real) (20)
The radius of the original image may be defined as follows.
R_o = h1 (21)
When the angle α is known, the coordinates (x_o; y_o) of the original image may be determined. Thus, the corresponding pixels of the original image and the transformed image may be obtained. Similarly, the mapping may be determined for each of the three colors, and an image may be formed with chromatic aberration correction. l_t(x, y) may be generated based on linear or cubic interpolation, or other operations (see
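A minimal sketch of this remapping is given below, assuming φ_ideal and h_real are available as one-dimensional functions characterizing the lens; the linear stand-ins here are placeholders rather than real lens data, and the function name predistort is hypothetical.

```python
import numpy as np

# Hedged sketch of the pre-distortion cycle described above. phi_ideal(h) and
# h_real(phi) characterize the actual lens; the linear stand-ins below are
# placeholders, not real lens data.
def phi_ideal(h):
    """Ideal beam angle as a function of display height (placeholder)."""
    return 0.03 * h

def h_real(phi):
    """Real beam height on the display for a given angle (placeholder)."""
    return phi / 0.028

def predistort(I_o: np.ndarray) -> np.ndarray:
    """Build the transformed image l_t(x, y) = l_o(x_o(x, y), y_o(x, y))."""
    H, W = I_o.shape[:2]
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    y, x = np.mgrid[0:H, 0:W]
    xr, yr = x - cx, y - cy        # coordinates about the image center
    R = np.hypot(xr, yr)           # Equation (15)
    alpha = np.arctan2(yr, xr)     # Equation (16), robust form of arctan(y/x)
    h1 = h_real(phi_ideal(R))      # Equations (17)-(20)
    R_o = h1                       # Equation (21)
    x_o = np.clip(cx + R_o * np.cos(alpha), 0, W - 1)
    y_o = np.clip(cy + R_o * np.sin(alpha), 0, H - 1)
    # Nearest-neighbour sampling for brevity; the text allows linear or cubic
    # interpolation instead.
    return I_o[y_o.round().astype(int), x_o.round().astype(int)]

I_t = predistort(np.random.rand(480, 640, 3))  # toy original image
```

Running the cycle once per color channel, with channel-specific h_real functions, would realize the chromatic aberration correction mentioned above.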
In some embodiments, a compound lens may include more than two lens portions having different focal lengths (corresponding to pixel densities of image regions or different colors). When a smooth reduction of the resolution from the central image region to the peripheral image region is required, the number of lens portions should be increased. For example,
In order to satisfy Equations (13) and (14), the lens surfaces 61 and 62 (see
Herein, c denotes the curvature (the reciprocal of the radius of curvature), r denotes the radial coordinate, k denotes a conic constant, α1 to αn-1 denote aspherical coefficients, and n denotes the order of the term.
When k = 0 and α1 = α2 = . . . = αn-1 = 0, the above surface equation represents a spherical surface.
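The surface equation itself is not reproduced above, but the symbols listed (curvature c, conic constant k, aspherical coefficients α1 to αn-1) match the standard even-asphere sag formula; the sketch below assumes that form, with the polynomial starting at r⁴ as suggested by the seven coefficients A to G mentioned later for Table 2.

```python
import math

# Hedged sketch assuming the standard even-asphere sag equation,
#   z(r) = c*r^2 / (1 + sqrt(1 - (1 + k)*c^2*r^2)) + A*r^4 + B*r^6 + ...,
# which matches the symbols defined in the text; this form is an assumption.
def asphere_sag(r: float, c: float, k: float, alphas: list[float]) -> float:
    base = c * r**2 / (1 + math.sqrt(1 - (1 + k) * c**2 * r**2))
    poly = sum(a * r ** (2 * (i + 2)) for i, a in enumerate(alphas))  # r^4, r^6, ...
    return base + poly

# With k = 0 and all aspherical coefficients zero, the surface reduces to a
# sphere: the sag of a 50 mm radius sphere at r = 5 mm is ~0.2506 mm.
print(asphere_sag(5.0, 1.0 / 50.0, 0.0, []))
```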
In a general case, for example, for better aberration correction, the lens surfaces 61 and 62 may have a free form that is described by an equation corresponding to an array of point coordinates.
Equation (10) may also be satisfied by applying a medium with a refractive index gradient, as well as by other surface forms.
n = n0 + n1r·r + n2r·r² + . . . + nur·r^u + n1z·z + n2z·z² + . . . + nvz·z^v (23)
Herein, n0 is a base refractive index, r is a radius, z is an axial coordinate, n1r to nur are radial coefficients, n1z to nvz are axial coefficients, and u and v are the numbers of radial terms and axial terms, respectively.
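As a direct transcription of Equation (23), the function below evaluates the index profile; the coefficient values one would pass in are design data not given in the text, so the example values are made up.

```python
# Hedged sketch: evaluating the gradient-index profile of Equation (23),
#   n(r, z) = n0 + n1r*r + n2r*r^2 + ... + n1z*z + n2z*z^2 + ...
# radial_terms[i] holds n_(i+1)r and axial_terms[j] holds n_(j+1)z.
def grin_index(r: float, z: float, n0: float,
               radial_terms: list[float], axial_terms: list[float]) -> float:
    n = n0
    n += sum(c * r ** (u + 1) for u, c in enumerate(radial_terms))
    n += sum(c * z ** (v + 1) for v, c in enumerate(axial_terms))
    return n

# Illustrative (made-up) coefficients: a weak parabolic radial gradient only.
print(grin_index(r=2.0, z=0.0, n0=1.49,
                 radial_terms=[0.0, -1e-4], axial_terms=[]))  # -> 1.4896
```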
Another method of satisfying Equation (10) is to use a diffractive optical element instead of a refractive surface.
Herein, n is a unit vector perpendicular to the hologram surface of a diffractive element at a ray intersection 193, r0 is a unit vector along a first structural beam, rr is a unit vector along a second structural beam, rr′ is a unit vector along an incident beam, r0′ is a unit vector along a refracted ray, λc is the structure wavelength, λ is the wavelength emitted by the display 31, and m is the diffraction order.
The display device 201 may further include an integrated processor 207, a wireless radio interface 208, an optical interface 209, and an audio interface 210. The integrated processor 207 may process interactive content for display to the user. The wireless radio interface 208 may transmit/receive interactive content through radio waves. The optical interface 209 may capture or relay the interactive content. The optical interface 209 may be mounted in the frame 202 and may be configured to capture and relay an image. The processor 207 may be configured to select an image captured by an optical interface for display to a viewer through the display. The optical interface 209 may be implemented by a camera, a camcorder, or a projection lens. The audio interface 210 may transmit/receive interactive content through sound waves. The audio interface 210 may be implemented by a microphone, a dynamic transducer, a bone conduction transducer, or the like.
An example display device may be made in the form of a helmet. The display may be a screen of a mobile device.
Such configurations are straightforward for the virtual reality device 201 with the compound lens 66 according to an example embodiment. When there is no built-in display, inserting a mobile phone into the headset may be sufficient. A user may start a software application to be immersed in virtual reality, and eye defects such as myopia and hyperopia may be compensated for as described above. The achievable viewing angle makes this experience possible whether the content is a game, a movie, or a training simulator. The absence of the screen door effect and the high achievable resolution enable a realistic sense of presence, since users may see minute details accurately, as in the real world. The user may be completely immersed in virtual reality when viewing a movie through the device 201 in 3D at theater scale and high resolution.
The proposed lenses may be used not only in virtual reality devices but also in any device that needs to redistribute resolution across the viewing angle (FOV). For example,
Consider an example design of a compound lens for a virtual reality device with the parameters given in Table 1.
A compound lens 66 according to an example embodiment is illustrated in
The first aspherical surface S1 may be an incidence surface of the compound lens 66 and may be represented as follows.
Herein, k denotes a conic constant, and c denotes a curvature.
The second aspherical surface S2 may be an exit surface of a central lens portion of the compound lens 66 and may be represented as follows.
The third aspherical surface S3 may be an exit surface of a peripheral lens portion of the compound lens 66 and may be represented as follows.
The coefficients c, k, A, B, C, D, E, F, and G are listed in Table 2.
The angular resolution provided by the compound lens 66 is illustrated in the accompanying drawing.
Herein, φ is the angular resolution (see
In an example embodiment, at least one of a central lens portion and at least one peripheral lens portion may be coated with a thin film to improve the performance of a compound lens. For example, the thin film may be an anti-reflection film used to increase the lens transmittance.
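As a rough illustration of why such a film raises transmittance, the sketch below compares the normal-incidence reflectance of an uncoated PMMA surface with that of an ideal single-layer quarter-wave coating; the film index (MgF2-like, n ≈ 1.38) is an illustrative assumption, not a material specified in this disclosure.

```python
# Hedged sketch: normal-incidence reflectance with and without an ideal
# single-layer quarter-wave anti-reflection film. Indices are illustrative
# assumptions (PMMA substrate ~1.49, MgF2-like film ~1.38).
def fresnel_reflectance(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

def quarter_wave_reflectance(n_air: float, n_film: float, n_sub: float) -> float:
    # Ideal quarter-wave layer evaluated at its design wavelength.
    return ((n_air * n_sub - n_film**2) / (n_air * n_sub + n_film**2)) ** 2

print(fresnel_reflectance(1.0, 1.49))             # ~3.9% uncoated
print(quarter_wave_reflectance(1.0, 1.38, 1.49))  # ~1.5% coated
```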
A compound lens according to an example embodiment may be used for image projection, video recording, and photographing.
A compound lens according to an example embodiment may include a central lens portion and at least one peripheral lens portion. The central lens portion and the at least one peripheral lens portion may have different focal lengths; that is, the focal length of the central lens portion may be greater than the focal length of the at least one peripheral lens portion. The central lens portion and the at least one peripheral lens portion may be arranged to have coincident focal planes. For example, the central lens portion and the at least one peripheral lens portion may have concentric focal planes. This arrangement may be achieved by combining different types of lenses as described above. The main idea of the compound lenses according to various embodiments is that the central lens portion, having a relatively greater focal length, provides a higher image resolution (or higher pixel density) in a central image region on the screen, while the at least one peripheral lens portion, having a relatively smaller focal length, provides a lower image resolution (or lower pixel density) in a peripheral image region on the screen. Thus, the viewer may feel present in the displayed scene.
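To see how this focal-length ordering translates into pixel density, a small-angle estimate may help: near the optical axis the image height on the display is approximately f′·tan α, so the number of pixels per degree scales with f′. The pixel pitch and focal lengths below are illustrative assumptions, not design values from this disclosure.

```python
import math

# Hedged small-angle estimate: near the optical axis the image height is
# about f' * tan(angle), so pixel density per degree is roughly
# f' * (pi / 180) / pixel_pitch. All values below are illustrative assumptions.
def pixels_per_degree(focal_length_mm: float, pixel_pitch_mm: float) -> float:
    return focal_length_mm * math.pi / 180.0 / pixel_pitch_mm

pitch_mm = 0.05                       # 50 um pixel pitch (assumed)
f_central, f_peripheral = 46.0, 38.0  # assumed focal lengths with f0 > f1
print(pixels_per_degree(f_central, pitch_mm))     # ~16 pixels/degree centrally
print(pixels_per_degree(f_peripheral, pitch_mm))  # ~13 pixels/degree peripherally
```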
A compound lens according to an example embodiment may include a central lens portion and a plurality of peripheral lens portions surrounding the central lens portion. The plurality of peripheral lens portions may have focal lengths fi (where i indexes the peripheral lens portions, i=1, 2, 3, . . . n, with n being the number of peripheral lens portions) satisfying f0>f1>f2>f3> . . . >fn (where f0 is the first focal length of the central lens portion), and the central lens portion and the plurality of peripheral lens portions may have coincident focal planes.
The compound lenses according to various embodiments may be applied to exhibitions, museums, movies, concerts, sports halls, stadiums, and the like. Also, the compound lenses may be applied wherever immersive presence simulation is needed, such as in the advertising industry, in cars, in games, and in virtual reality.
Although example embodiments of the present disclosure have been described above, various changes and modifications may be made therein without departing from the scope of protection defined by the following claims. In the appended claims, references to elements in singular form do not exclude the presence of a plurality of such elements unless explicitly stated otherwise.
Number | Date | Country | Kind |
---|---|---|---|
2016100444 | Jan 2016 | RU | national |
10-2016-0146905 | Nov 2016 | KR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/KR2017/000242 | 1/9/2017 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2017/122972 | 7/20/2017 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5448312 | Roffman et al. | Sep 1995 | A |
5708641 | Choi et al. | Jan 1998 | A |
5715031 | Roffman et al. | Feb 1998 | A |
5777803 | Ju et al. | Jul 1998 | A |
6409141 | Yamazaki et al. | Jun 2002 | B1 |
8256895 | Del Nobile | Sep 2012 | B2 |
9529194 | Yoo et al. | Dec 2016 | B2 |
20050046956 | Gruhlke | Mar 2005 | A1 |
20070019157 | Hillis et al. | Jan 2007 | A1 |
20120162486 | Asakura et al. | Jun 2012 | A1 |
20150185480 | Ouderkirk et al. | Jul 2015 | A1 |
Number | Date | Country |
---|---|---|
1154542 | Jul 1997 | CN |
0592578 | Sep 1999 | EP |
1168317 | Jan 2002 | EP |
3 370 099 | Sep 2018 | EP |
2304971 | Mar 1997 | GB |
6-46356 | Feb 1994 | JP |
3679601 | Aug 2005 | JP |
10-2015-0059085 | May 2015 | KR |
Entry |
---|
International Search Report dated Apr. 14, 2017, issued by the International Searching Authority in counterpart International Patent Application No. PCT/KR2017/000242 (PCT/ISA/210). |
Written Opinion dated Apr. 14, 2017, issued by the International Searching Authority in counterpart International Patent Application No. PCT/KR2017/000242 (PCT/ISA/237). |
Communication dated Oct. 31, 2018, issued by the European Patent Office in counterpart European Application No. 17738601.8. |
Communication dated Jan. 19, 2020 issued by the State Intellectual Property Office of P.R. China in counterpart Chinese Application No. 201780006425.5. |
Communication dated Sep. 2, 2020, issued by the State Intellectual Property Office of P.R. China in Chinese Application No. 201780006425.5. |
Number | Date | Country | Kind
---|---|---|---|
20190025475 | Jan 2019 | US | A1