The present invention relates to a display device including a display to generate a real image and an optical system and, more particularly, to an improved optical system with a plurality of lenslets each producing a ray pencil from each object pixel of a cluster.
Head mounted display (HMD) technology is a rapidly developing area. An ideal head mounted display combines a high resolution, a large field of view, a low and well-distributed weight, and a structure with small dimensions.
The embodiments disclosed herein refer to lenslet array based optics. This type of optics has been used in HMD technologies in the frame of Light Field Displays (LFD) to provide a solution to the vergence-accommodation conflict (VAC) appearing in most present HMDs. Thus far, LFDs solve this conflict at the expense of low resolution. The state of the art of an LFD of this type was described by Douglas Lanman, David Luebke, "Near-Eye Light Field Displays," ACM SIGGRAPH 2013 Emerging Technologies, July 2013 ("Lanman 2013"), and the way these prior art LFDs work is described next in
Following the nomenclature defined in [0008] (which will be further developed in the DETAILED DESCRIPTION below),
Therefore, LFDs based on microlens arrays such as "Lanman 2013" have a very limited resolution, as mentioned before, due to the distance between most a-pixels and the waist plane. Additionally, state of the art LFDs use arrays in which all microlenses have rotationally symmetric surfaces and are identical (except for translation), which makes the ones far from the center of the field of view perform very poorly in terms of image quality. Moreover, prior art designs are based on ideal lens raytracing, with rectilinear mapping following the tangent law, which further limits the achievable resolution with respect to other possible mapping functions with distortion. The embodiments herein overcome these three aspects that lead to the low resolution in the prior art.
U.S. Pat. No. 10,432,920, by inventors common to this application, discloses the use of pencils in a different way from the LFD just described, in which VAC exists but the perceived resolution is higher.
Designing an optic for virtual reality that is compact and produces a wide field of view and a high resolution virtual image is a challenging task. Refractive single channel optics are commonly used, but the difficulty in designing them arises from the fact that they must handle a significant etendue. In order to control all this light one needs a large number of degrees of freedom, which typically means using many optical surfaces, making the resulting optic complex and bulky. One possible alternative is to use folding optics, such as the pancake design. However, these tend to have very low efficiencies, which is a significant drawback in devices meant to be light and to run on batteries.
An alternative to these technologies is to use multiple channel optics. Now, each channel handles a much smaller etendue and is therefore easier to design, resulting in simpler, smaller and more efficient optical configurations. Multiple channel configurations, however, tend to have duplicated information on the display, which lowers the resolution that may be achieved.
This invention describes several strategies to overcome the limitations of multi-channel configurations, increasing resolution while reducing the size of the optics. Traditional multi-channel configurations (such as lens arrays combined with a display) create an eye box within which the eye may move and still be presented with a visible virtual image. These, however, are short focal length, low resolution configurations. One option to increase resolution is to increase the focal length of the lenses in the array. This reduces the eye box size and leads to the need to use eye tracking. It also increases the thickness of the device (due to the longer focal length). This strategy thus increases resolution at the cost of eye tracking and an increased device thickness. These configurations maintain duplicate information on the display, where the same information is shown through different channels in order to compose the virtual image.
One step further may be taken in which one eliminates the duplicate information in the display. As disclosed herein, this strategy permits an increased focal length, which in turn results in increased resolution. However, a longer focal length also leads to a larger device, which may be undesirable. In an alternative configuration, the lenses in the array may be split into families and the focal length reduced, reducing device size. Each family now generates a lower resolution virtual image, but the virtual images generated by the different families may be interlaced to recover a high resolution. These configurations combine the compactness of short focal devices with high image resolution.
Further improvements may be achieved by using polarization and/or time multiplexing. Also, the relative orientation of microlenses and their cluster may lead to some additional resolution improvements, as disclosed.
In an embodiment a display device is disclosed comprising a display, operable to generate a real image comprising a plurality of object pixels, and an optical system, comprising a plurality of lenslets, each lenslet having associated one cluster of object pixels. The assignation of object pixels to clusters may change periodically in time intervals, preferably a frame period. Each lenslet produces a ray pencil from each object pixel of its corresponding cluster, the pencils having corresponding waists lying close to a waist surface. Each lenslet projects its corresponding ray pencils towards an imaginary sphere at an eye position, the sphere being an approximation of the eyeball sphere and being in a fixed location relative to the user's skull. The ray pencils of each lenslet are configured to generate a partial virtual image from the real image of its corresponding cluster, and the partial virtual images of the lenslets combine to form a virtual image to be visualized through a pupil of an eye during use. At least two of the lenslets cannot be made to coincide by a simple rigid translation. The foveal rays are a subset of rays emanating from the lenslets during use that reach the eye and whose straight prolongation passes at a distance from the imaginary sphere center smaller than a value between 2 and 4 mm. The corresponding foveal lenslets of a given field point are those intercepted by the foveal rays of that field point. The directional magnification function is a ratio of distance on the display surface over distance between field points. For any field point of a gazeable region of a field of view, the values of the directional magnification function for the foveal lenslets corresponding to that field point differ by less than 10%.
The present invention may include various other optional elements and features, such as:
It is also contemplated that the display device further comprises a second display device, a mount to position the first and second display devices relative to one another such that their respective lenslets project the light towards two eyes of a human being, and a display driver operative to cause the display devices to display objects such that the two virtual images from the two display devices combine to form a single image when viewed by a human observer.
Furthermore in any of the embodiments, it is also contemplated that:
The foregoing and other features and advantages of the present invention will become more apparent in light of the following detailed description of the preferred embodiments, as illustrated in the accompanying figures. As will be realized, the invention is capable of modifications in various respects, all without departing from the scope of the invention. Accordingly, the drawings and the description are to be regarded as illustrative in nature, and not as restrictive.
The above and other aspects, features and advantages of the present invention will be apparent from the following more particular description thereof, presented in conjunction with the following drawings wherein:
A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description of the invention and accompanying drawings, which set forth illustrative embodiments in which the principles of the invention are utilized.
The embodiments of the present invention comprise a display device comprising one or more displays per eye, operable to generate a real image comprising a plurality of object pixels (or o-pixels for short); and an optical system, comprising a plurality of lenslets, each one having associated at a given instant a cluster of object pixels. Each lenslet produces a ray pencil from an object pixel of its corresponding cluster. We shall call a ray pencil (or just pencil) the set of straight lines that contain segments coincident with ray trajectories illuminating the eye, such that these rays carry the same information at any instant. The same information means the same (or similar) luminance, color and any other variable that modulates the light and can be detected by the human eye. In general, the color of the rays of the pencil is constant with time while the luminance changes with time. This luminance and color are a property of the pencil. The pencil must intersect the pupil range to be viewable at some of the allowable positions of the pupil. When the light of a pencil is the only one entering the eye's pupil, the eye accommodates at a point near the location of the pencil's waist, if it is being gazed and if the waist is far enough from the eye. The rays of a pencil are represented, in general, by a simply connected region of the phase space. The set of straight lines forming the pencil usually has a small angular dispersion and a small spatial dispersion at its waist. A straight line determined by a point of the central region of the pencil's phase space representation at the waist is usually chosen as representative of the pencil. This straight line is called the central ray of the pencil. The waist of a pencil may be substantially smaller than 1 mm² and its maximum angular divergence may be below ±10 mrad, a combination which may be close to the diffraction limit. The pencils intercept the eye sphere inside the pupil range in a well-designed system.
The light of a single o-pixel lights up several pencils of different lenslets, in general, but only one or none of these pencils may reach the eye's retina, otherwise there is undesirable cross-talk between lenslets. The o-pixel to lenslet cluster assignation may be dynamic because it may depend on the eye pupil position.
The waist of a pencil is the minimum-RMS region of a plane intersecting all the rays of the pencil. This flat region is in general normal to the pencil's central ray. In some embodiments the waists of some or all of the pencils can be grouped by their proximity to certain surfaces. These surfaces are called waist surfaces. Sometimes planes can approximate these surfaces. These planes are preferably normal to the frontward direction.
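This definition lends itself to a direct numeric illustration. The following Python sketch locates the waist of a pencil as the axial position minimizing the RMS transverse spread of the ray intersections; the rays themselves are entirely synthetic, and the pitch, count and convergence values are hypothetical, chosen only to show the computation.

```python
import numpy as np

def pencil_waist(origins, directions):
    """Locate the waist of a ray pencil: the axial position z where the
    RMS transverse spread of the ray intersections with the plane z=const
    is minimum. Rays are given as points `origins` (N x 3) and unit
    `directions` (N x 3); the central ray is assumed roughly along +z."""
    def rms_radius(z):
        # Point on ray i at axial coordinate z: o + t*d with o_z + t*d_z = z
        t = (z - origins[:, 2]) / directions[:, 2]
        pts = origins[:, :2] + t[:, None] * directions[:, :2]
        c = pts.mean(axis=0)
        return np.sqrt(((pts - c) ** 2).sum(axis=1).mean())
    # A dense scan suffices for this illustration; a real design tool
    # would refine the minimum analytically.
    zs = np.linspace(-50.0, 50.0, 2001)
    r = np.array([rms_radius(z) for z in zs])
    k = int(np.argmin(r))
    return zs[k], r[k]

# Synthetic pencil: rays leaving a 2 mm aperture at z = 0 and converging
# towards a small spot near z = 10 mm (all numbers are illustrative).
rng = np.random.default_rng(0)
n = 200
aperture = rng.uniform(-1.0, 1.0, (n, 2))          # mm, at z = 0
origins = np.hstack([aperture, np.zeros((n, 1))])
targets = np.hstack([0.01 * rng.uniform(-1, 1, (n, 2)),
                     np.full((n, 1), 10.0)])       # focus near z = 10 mm
d = targets - origins
directions = d / np.linalg.norm(d, axis=1, keepdims=True)

z_w, r_w = pencil_waist(origins, directions)
print(f"waist at z = {z_w:.2f} mm, RMS radius = {r_w:.4f} mm")
```

As expected, the waist is found near the convergence plane, with an RMS radius set by the residual spread of the rays there.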
Optic 401 is part of a lenslet array. Each element of said array may be a combination of deflective surfaces, so the drawing 401 is only illustrative of a lens with a positive optical power. Examples may include trains of lenses or "pancake" optical configurations as described by La Russa in U.S. Pat. No. 3,443,858. In particular, the train may include two or three lenses of equal or different materials, at least one with positive power and at least another with negative power, a combination providing chromatic aberration correction and/or other geometrical aberration corrections, such as field curvature. Alternatively, two materials can be used, one with higher dispersion than the other. Lens surfaces may therefore be convex or concave, or even have inflection points so that they are peanut-shaped.
Also shown in
The light of the ray pencils is projected towards an eye position and the pencils of each lenslet are configured to generate a partial virtual image from the real image on its corresponding cluster. The partial virtual images are viewable by a normal human eye located at the eye position. The partial virtual images of different lenslets combine to form a virtual image to be visualized by the eye. This combination may include, without limitation, overlapping, tessellation and interlacing between partial images.
We will use the term accommodation pixel (or a-pixel) for the small region in the image space where a single human eye accommodates when it gazes that region, at a sufficient distance. In this situation the eye's foveal region is illuminated (completely or partially) by a set of pencils carrying the same information (same luminance and color). This set of pencils, which may consist of one or more pencils, is said to form the a-pixel. That luminance and color become a property of the a-pixel at a given instant. The pencils are such that their principal rays meet near the a-pixel, which is also close to or coincident with the location of the waist of the union of those pencils. Nevertheless, this waist is not necessarily close to the individual waists of the different pencils forming the a-pixel. A given pencil is part of no more than one a-pixel during a given time interval (which is typically a frame period) but it may be part of different a-pixels at different time intervals. If a set of pencils always forms the same a-pixel, the a-pixel is said to be static. In this case, all the pencils of the a-pixel always carry the same luminance and color. Otherwise, the a-pixel is said to be dynamic. The eye perceives the a-pixel as an emitting region when its luminance is high enough and it is located at a sufficient distance from the eye. In some embodiments, accommodation pixels can be grouped by their proximity to certain surfaces. These surfaces are named accommodation surfaces (or a-surfaces). Sometimes they are approximated by spheres or even by planes, taking the names accommodation sphere or accommodation plane.
1. Interlacing
Lenslets in a square-like matrix configuration would preferably use square o-pixels. For instance, RGBW-square OLED microdisplays with high aperture ratio have, for each color, a fill factor of up to 25%, i.e., 25% of the display area may emit a given color. Therefore a lenslet array with interlacing factor k=2 may be conveniently designed so that the fill factor in the virtual image for each color is 100%. The resulting full-color a-pixels in the virtual image will have the four RGBW colors overlapped.
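The fill-factor arithmetic above can be illustrated with a small simulation. In the Python sketch below (the grid size is arbitrary, and the shift model is a simplified stand-in for the actual optical interlacing), each of the k×k lenslet families contributes a copy of one color's o-pixel pattern shifted by a fraction 1/k of the pixel pitch, and the union of the shifted copies covers the whole virtual image.

```python
import numpy as np

def interlaced_coverage(k, n=8):
    """Simulate one color of a square-pixel display on an n x n grid of
    full-color pixels. Each full-color pixel holds one o-pixel of the
    color in question, covering 1/(k*k) of its area (fill factor 25%
    for k = 2). The k*k lenslet families shift their partial virtual
    images by multiples of 1/k of the a-pixel pitch; the union of the
    shifted copies is the virtual-image coverage of that color."""
    sub = k  # subdivide each full-color pixel into k x k sub-cells
    covered = np.zeros((n * sub, n * sub), dtype=bool)
    for fx in range(k):
        for fy in range(k):
            # Family (fx, fy): copy of the o-pixel pattern shifted by
            # (fx, fy) sub-cells in the virtual image.
            covered[fx::sub, fy::sub] = True
    return covered.mean()

# With k = 2 (four families) a 25% per-color fill factor becomes 100%.
print(interlaced_coverage(2))
```

The same check with k=3 models the 1/9 fill-factor case discussed further below.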
If the red (R) o-pixels are off and the blue (B) o-pixels are on, a blue virtual image is formed by blue o-pixels 911 that has its corresponding a-pixels spaced by 910. A pixel formed by one blue and one red o-pixel has a size 910 defining the resolution of the system.
Accommodation pixels 1004 are visible through lens 904 and accommodation pixels 1003 are visible through lens 1001. The eye pupil 1006 must be large enough to capture light from both lenses 1001 and 904 so both images are overlapped on the retina.
When both red (R) and blue (B) o-pixels are on, the red and blue images overlap. The result is an image with double the resolution along the direction of the cross section shown in
When taken to three-dimensional geometry, displacement 1002 of the lenses is done in the horizontal direction and then in the vertical direction of
Referring back to
Lens 1301 has a combined aperture of size 1301A+1301B. In this case, the phase space representation of the pencils through this combined aperture may be non-simply connected from the topological point of view. These apertures may be made coherent. All we need for this purpose is that the apertures be parts of a single continuous original lens imaging the same cluster. Then, the diffraction limit of the lens is determined by the combined aperture, and since the combined aperture has a larger area than any of its parts, the diffraction limit is less restrictive than in the situation in which the different parts were emitting incoherently. Of course, the combined aperture determines the diffraction limit of the lens only when the light of both apertures passes through the pupil. Otherwise, pupil vignetting will introduce an additional effect.
Diffraction imposes a limit on the smallest aperture of the lenslets generating a pencil. Let D be the diameter of the lenslet aperture and let (θR)−1 be the resolution (in pixels per degree) of the image of the cluster imaged by that lenslet. Following the Rayleigh criterion, the pencil's waists of that image will be resolvable if D>1.22λ/sin(2θR), where λ is the wavelength of the light. For λ=550 nm and (θR)−1=52 ppd, D must be greater than 1 mm to be resolvable according to that criterion.
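As a check of this figure, the following short Python computation evaluates the Rayleigh bound for the stated values:

```python
import math

def min_lenslet_diameter(wavelength_m, ppd):
    """Smallest lenslet aperture D (meters) that resolves pencils at a
    cluster-image resolution of `ppd` pixels per degree, per the
    Rayleigh criterion D > 1.22 * lambda / sin(2 * theta_R)."""
    theta_r = math.radians(1.0 / ppd)   # angular pixel pitch in radians
    return 1.22 * wavelength_m / math.sin(2.0 * theta_r)

d = min_lenslet_diameter(550e-9, 52)
print(f"D > {d * 1e3:.2f} mm")  # close to 1 mm, as stated above
```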
On the other hand, a small lenslet aperture is required to have several pencils sending light to the eye's pupil with directions imaged in the fovea, and so to be able to overlap on the retina images from different lenslets. Additionally, we need several lenslets providing the same a-pixels on the fovea to minimize "scintillation". We may consider that a standard condition for the human pupil is that its diameter is at least 4 mm. Then it is evident that there is a trade-off for the lenslet diameter: on one side we need an aperture big enough to diminish the diffraction effects, and on the other side we need small lenslet apertures to be able to send images from different lenslets to the fovea through the human pupil. The strategy shown in
Display Pixel Configurations, Array Geometries and Interlacing Factor
The previous section has disclosed in detail the interlacing design for an RGBW-square o-pixel display with interlacing factor 2, so lenslets are grouped in 4 families. Notice that, since the blue o-pixel resolution can be lowered without affecting the resolution perceived by the user, and since its current density is preferably lower than for the other colors to increase its lifetime, an alternative embodiment would consist in eliminating the white o-pixel to allow the blue o-pixel to occupy its area, which is normally referred to as an RGB-π pixel display design.
Consider the case of an RGBW-square o-pixel display with lower fill factor, for instance such that the o-pixel side is ⅓ of the full-color pixel side. This display fits perfectly with a square-type lenslet array design with interlace factor 3, so lenslets are grouped in 9 families. This interlace factor could also be applied to the RGBW case with o-pixel side ½ of the full-color pixel side, but there will be some overlapping on the virtual image of o-pixels.
If o-pixels are rectangular with high aspect ratio, as occurs in RGB stripe pixel designs, interlacing may be done only in the direction perpendicular to the stripes, preferably with a higher directional magnification in that perpendicular direction. If in this case the orthogonal directions result in different virtual image resolutions, the headset can be configured so the left eye has its high resolution direction vertical while the right eye has it horizontal, for the user to perceive approximately high resolution both horizontally and vertically. Subpixel rendering may also be applied, in particular with RGBGRGB-type designs.
Further resolution increase can be achieved if the number of green o-pixels of the display is larger than that of the red and blue ones. In direct-view displays, this is used in the so-called pentile RGBG configurations, such as the one shown in
The o-pixel configuration disclosed in
Moreover, some embodiments with square-like lenslet arrays in which polarization is being used to avoid cross-talk between adjacent clusters, as disclosed in [0432], preferably require an interlacing factor of √2 or √8. The former can be achieved with an RGBW-square o-pixel display rotated 45 deg with respect to the lenslet array, while the latter would preferably use a display with the RGBW o-pixel arrangement 2701 as the one shown in
Interlacing by Rotation
In the interlacing description in [0136], lenslets were shifted, and in several embodiments in section [0143] the interlacing was achieved by adequate o-pixel positioning on the display. In this section we disclose a third option to produce interlacing, which consists in rotating the array relative to the display. This has practical interest to adjust a manufactured device, or even to dynamically modify the interlacing factor using actuators.
Rotating the array relative to the display is a way to convert an accommodation plane with interlacing factor 1 into an accommodation plane with an interlacing factor greater than one, as explained next. The rotation angle needed for interlacing is not unique. The minimum rotation angle α is related to the o-pixel pitch by the following formula:
α = a/kM, with kM = int(pL/pP), where int(x) is the integer part of x, pL is the lens pitch, pP is the display o-pixel pitch, and a is the inverse of the interlacing factor.
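The formula can be evaluated directly. In the Python sketch below the pitch values are illustrative only, not taken from any embodiment:

```python
def interlacing_rotation_angle(lens_pitch_um, opixel_pitch_um, interlacing_factor):
    """Minimum rotation angle (radians): alpha = a / k_M, where
    k_M = int(p_L / p_P) and a is the inverse of the interlacing factor.
    Pitches are given in micrometers."""
    k_m = int(lens_pitch_um // opixel_pitch_um)  # integer part of p_L / p_P
    a = 1.0 / interlacing_factor
    return a / k_m

# Illustrative pitches: 1000 um lens pitch, 10 um o-pixel pitch,
# interlacing factor 2 (so a = 1/2 and k_M = 100).
alpha = interlacing_rotation_angle(1000, 10, 2.0)
print(f"alpha = {alpha:.4f} rad")
```

With these hypothetical values the minimum rotation is 5 mrad, small enough to be applied as a fine adjustment of a manufactured device.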
All o-pixels lit to form the a-pixel 3409 should have the same information. This results in repeated information in display 3405.
In order to appreciate this increased resolution, the eye pupil 3505 of an observer must be large enough to capture light from four (in this example) lenslets.
Each o-pixel shown in
Let us now consider a display as shown in
In the interlacing process, as the lens array 3501 rotates and the interlacing unfolds, the red virtual images overlapped in a-pixel 3409 start to pull apart from each other, moving "over" the adjacent blue, white and green o-pixels that are off. This results in a virtual image that appears all red, i.e., without spaces between red o-pixels.
It is possible to do the same for the blue (B), white (W) and green (G) o-pixels resulting in virtual images that appear all blue, white and green respectively. When all o-pixels are on, the result is an image in the interlaced configuration of
2. Directional Magnification Function and Equi-Focal Lenslet Arrays
Consider an array of lenslets, each one of them approximately imaging a portion of the digital display onto a portion of a waist sphere of radius R∞ centered at the eye sphere's center. R∞ is much greater than the radius of the eye's sphere. Let r=(x,y) be a point of the digital display and let (θ,φ) be the two spherical coordinates of the point on that sphere (which we will call the field point (θ,φ), or the field (θ,φ) for short) where the rays issuing from (x,y) are virtually imaged by the lenslet (i,j), i.e., these rays are virtually coming from the field point (θ,φ) of the sphere when they intercept the eye. Let's call θ the polar angle and φ the azimuthal angle, and let's set θ=0 as the skull's frontward direction. Then r=(x,y) depends on θ,φ,i,j through the mapping function:
Let Δr be the change of r when the point of said sphere moves differentially from (θ,φ) to (θ+Δθ,φ+Δφ), such that tan α = sin θ Δφ/Δθ. Let's call α the direction angle. We define the directional magnification m of the lenslet (i,j) as the function of (θ,φ,α,i,j) given by:
where the subindices θ,φ indicate partial derivatives. This function is called the directional magnification along direction α at the field point (θ,φ). Notice that this magnification definition corresponds to ray trajectories reversed from the actual ones, i.e., from display to eye, since the magnification corresponds to a ratio of a distance between points on the display surface to the distance between the field points on the waist surface. This reverse operation is the usual one in Head Mounted Display (HMD) optics design, so this magnification m is the one commonly used in commercial software such as Zemax or Code V, while m−1 is the one normally used in magnifying instruments such as binoculars or microscopes. In the limit case when R∞ tends to infinity, it is preferable to use f=mR∞, which will be called the directional focal length along direction α at the field point (θ,φ). The directional magnification and the directional focal length are called the radial and sagittal magnification/focal length when α=0 and α=π/2, respectively.
Preferred embodiments of this invention have lens arrays with directional magnification functions independent of the lenslet i,j (which we will call equi-focal lens arrays), provided by a mapping function of the form
If all lenslets are identical (except for translation), the directional magnification function is trivially independent of the lenslet i,j. However, the pencils used in wide field of view designs are very different for the different lenslets. In this case, identical lenslets designed with an affordable number of surfaces are not able to provide good image quality and low cross-talk between lenslets for all the pencils throughout the array. When the field of view is large, the lenslets close to the center of the field of view operate with pencils whose central rays form moderate angles with the frontward direction, while lenslets close to the periphery typically operate with pencils whose central rays are very oblique with respect to the frontward direction. It is much more efficient to design different lenses, each one optimized for the pencils with which it must work to illuminate the pupil range, and the best global results can be achieved when those lenslets contain freeform surfaces, since rotationally symmetric surfaces impose undesired constraints for oblique operation. This is particularly true for lenslets far from the center of the array.
Since the eye accommodates based mainly on the information projected on the fovea, the condition of Equation [0176] is only needed for those lenslets sending foveal rays virtually coming from the field points (θ,φ). Foveal rays are rays focused onto the fovea for some position of the eye pupil. They can also be characterized as those reaching the eyeball sphere within the pupil range such that their straight prolongation passes at a distance from that sphere's center smaller than a value between 2 and 4 mm. Therefore, for each field point of the gazeable region of the field of view we can define its corresponding foveal lenslets as those intercepted by the foveal rays of that field point.
The condition of the directional magnification function being independent of the lenslet i,j (Equations [0173] and [0176]) guarantees that the x−xc=constant and y−yc=constant lines (which may correspond to the image rows and columns of o-pixels on the display) coincide on the sphere of radius R∞ when they are imaged by the different lenslets. This ensures that the overlapping or interlacing of their corresponding partial virtual images can be done properly. Without this condition, different pitches and orientations of the o-pixel image grids of the different lenslets on the waist surface would cause blurring and resolution loss. Such irregular spacing could even cause Moiré-type effects in the virtual image visualization. Deviations up to 10% from exact equality of the directional magnification function for a field point among its corresponding foveal lenslets may still be acceptable, although deviations smaller than 3% are desirable, especially for the field points of the gazeable region of the field of view.
As a particular case, we are interested in lenslets whose directional magnification function is rotationally symmetric, so it does not depend on the azimuthal angle φ. The mapping function of these lenslets is given by:
whose directional magnification function is given by:
The radial magnification, which corresponds to α=0, is therefore given by Gθ/R∞, while the sagittal magnification, which corresponds to α=π/2, is given by G/(R∞ sin θ). Notice that both magnifications coincide at θ=0.
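These expressions can be verified numerically. The Python sketch below assumes the rotationally symmetric mapping has the form x−xc = G(θ)cos φ, y−yc = G(θ)sin φ (an assumption, since the mapping equation itself is not reproduced in this text), and compares finite-difference directional magnifications against Gθ/R∞ and G/(R∞ sin θ). The sample G is the rectilinear f0 tan θ, with illustrative values of f0 and R∞.

```python
import math

R_INF = 1e4          # waist-sphere radius (mm); large versus the eye
F0 = 30.0            # sample focal parameter (mm); illustrative only

def G(theta):
    return F0 * math.tan(theta)   # rectilinear mapping, as an example

def mapping(theta, phi):
    # Assumed rotationally symmetric mapping: (x - xc, y - yc)
    return (G(theta) * math.cos(phi), G(theta) * math.sin(phi))

def directional_magnification(theta, phi, alpha, h=1e-6):
    """m = (display displacement) / (field-point displacement) for a
    differential step with tan(alpha) = sin(theta)*dphi / dtheta."""
    dtheta = h * math.cos(alpha)
    dphi = h * math.sin(alpha) / math.sin(theta)
    x0, y0 = mapping(theta, phi)
    x1, y1 = mapping(theta + dtheta, phi + dphi)
    dr = math.hypot(x1 - x0, y1 - y0)                       # on the display
    ds = R_INF * math.sqrt(dtheta**2 + (math.sin(theta) * dphi)**2)  # on the sphere
    return dr / ds

theta, phi = 0.4, 0.7
m_rad = directional_magnification(theta, phi, 0.0)
m_sag = directional_magnification(theta, phi, math.pi / 2)
# Analytic values: G'(theta)/R_inf and G(theta)/(R_inf*sin(theta))
print(m_rad, F0 / (math.cos(theta)**2 * R_INF))
print(m_sag, G(theta) / (R_INF * math.sin(theta)))
```

The finite-difference values match the analytic radial and sagittal expressions, supporting the stated formulas.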
When the lenslets are ideal optical systems with distortion-free rectilinear mapping, the function G is G(θ)=f0 tan θ. In this case, the radial magnification is given by Gθ/R∞ = f0/(R∞ cos² θ), which is minimum at θ=0. In this invention we are interested in having a high magnification at the center of the field of view at the expense of a reduced magnification for larger values of θ, i.e., a behavior opposite to that of rectilinear mapping. For that purpose, we will preferably choose functions G(θ) such that the associated directional magnification in the radial direction multiplied by the square of the cosine of the polar angle is a decreasing function of the polar angle.
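The criterion in the last sentence can be tested on candidate G(θ) functions. In the Python sketch below (illustrative parameters), the rectilinear mapping fails the decreasing test, since its product Gθ cos²θ/R∞ = f0/R∞ is constant, while an f-theta style mapping G(θ)=f0 θ, chosen here only as one example of a mapping with distortion, passes it.

```python
import math

R_INF, F0 = 1e4, 30.0   # waist-sphere radius and focal parameter (mm); illustrative

def radial_mag(G, theta, h=1e-6):
    # Radial directional magnification G'(theta)/R_inf by central difference.
    return (G(theta + h) - G(theta - h)) / (2 * h * R_INF)

rectilinear = lambda t: F0 * math.tan(t)   # distortion-free tangent-law mapping
f_theta = lambda t: F0 * t                 # one possible mapping with distortion

results = {}
for name, G in [("rectilinear", rectilinear), ("f-theta", f_theta)]:
    # Criterion: radial magnification times cos^2(theta) should decrease.
    vals = [radial_mag(G, t) * math.cos(t)**2 for t in (0.1, 0.3, 0.5)]
    results[name] = all(a > b + 1e-9 for a, b in zip(vals, vals[1:]))
    print(name, ["%.6f" % v for v in vals], "decreasing:", results[name])
```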
Lenslets can be designed as described herein to produce prescribed mapping functions such as [0176] or [0181]. However, it is of particular interest to use an additional conforming lens intercepting the path of every ray illuminating the pupil range from the digital display, since it can act as a field lens to increase the field of view for a given display. It can also allow for a thin lenslet array without steps from microlens to microlens. Unlike a lenslet array, a conforming lens cannot be divided into disjoint portions such that each one of them works solely for a single channel. A conforming lens may be placed between the eye and the rest of the optical system, or between lenslet arrays, or even between the digital display and the rest of the optical system. A conforming lens may have at least one surface with slope discontinuities, either to reduce its thickness as in a Fresnel lens, or to enable the use of two or more displays per eye, as shown in
Conforming lenses are not limited to just including refractive surfaces, but can also include reflective or diffractive surfaces. Particularly, a “pancake” architecture as described by La Russa U.S. Pat. No. 3,443,858 can also be used as a conforming lens to allow the design of embodiments herein with a very large field of view and relatively small displays.
Additionally, since the equality between directional magnification functions is only required for foveal rays, the image quality or the magnification (or both) of each lenslet can be designed to be lower for pencils with only non-foveal rays, that is, rays hitting the peripheral retina, where the human eye resolution is much lower. Therefore, the virtual image resolution can be further increased by making, for every direction angle α, the directional magnification of each lenslet maximum at its centered gazing field, which is defined by the ray passing through the lenslet aperture center whose straight prolongation passes through the center of the eyeball sphere. This strategy by itself can typically increase the virtual image resolution at the center of the field of view, relative to the rectilinear mapping function, by a factor of 1.2 or greater.
The lenses in
Prescribing the directional magnification function to be different from the rectilinear mapping implies that the lenslets present distortion, which must be corrected by software so that the virtual image appears undistorted, as is usually done in virtual reality optics.
Interlacing by Rotation for an Arbitrary G(θ) Function
We disclosed in [0151] how to interlace by rotation. The calculations there were done assuming a simplified situation corresponding to a rectilinear mapping function. However, the method is perfectly applicable to an arbitrary equi-focal lenslet array, as shown next. The mapping function of [0181] for a specific lenslet in a design is:
We have omitted the indices i,j of the lenslet, as those variables are going to be re-used herein with a different meaning. Assume the o-pixels of the display are in a square Cartesian grid parallel to the x and y axes with pitch po. Therefore, the o-pixel of indices (i,j) has x,y coordinates:
where we have set the indices (0,0) to correspond to the central pixel of the display. Therefore, inverting equation [0197], we find that the a-pixels associated with the o-pixel (i,j) for a given lenslet of center (xc,yc) are located at:
The condition for the a-pixels to be non-interlaced implies that any two lenslets, with centers (xc,yc) and (x′c,y′c), have the same a-pixels, which means that, for each (i,j), there exists an (i′,j′) such that:
θi′j′(x′c,y′c)=θij(xc,yc)
φi′j′(x′c,y′c)=φij(xc,yc)
Notice that, from equations [0197] and [0203] it is deduced that the condition for not being interlaced is equivalent to:
So the lenslet centers must fulfill:
Where i″ and j″ are integers. Thus the x and y distances between lenslet centers must be a multiple of the o-pixel pitch po. Since the central lens of the array has (xc,yc)=(0,0), we can therefore manufacture the lenslet array with no interlacing by making:
Therefore, for this choice, equation [0197] can be rewritten as:
where g(θ)=G(θ)/po. If we design the array so:
where D=Npo and N, ic and jc are integers, and adjacent lenslets differ by 1 in ic and in jc. So in this case:
Consider an emitting point of the o-plane (xint,yint) at an intermediate position different from the centers of the grid of o-pixels. This position is related to its direction of emission (θint,φint) through equation [0215] as:
where iint=xint/po and jint=yint/po are non-integer values. Let us consider now that we rotate the display an angle α, so the new coordinates of the centers of the o-pixels (xα,yα) are related with the original (non-rotated) ones (x,y) as:
Dividing by po, we can define the values of the indices of the rotated o-pixels by:
where i and j are integers, and iα=xα/po and jα=yα/po are, in general, non-integers. Applying equation [0216] to (iα, jα) and its corresponding direction (θα,φα), and substituting in [0220] for lenslet with center (ic,jc) we get:
When α is small, this equation can be approximated by:
So according to [0222] and [0225], the mapping of lenslet with center (ic,jc) when the display is rotated is given by:
This equation gives the relation between the o-pixel (i,j) and the direction of its corresponding a-pixel (θα,φα) through lenslet (ic,jc) when the display is rotated by α. The inverted expression of [0227] is:
An o-pixel (i′,j′) is projected through another lenslet (i′c,j′c) as indicated by:
Let us find whether there is an angle α (apart from α=0) such that the design is "perfectly coupled", that is, whether for any o-pixel (i,j) projected through lenslet (ic,jc) there exists an o-pixel (i′,j′) projected through lenslet (i′c,j′c) such that θ′α=θα and φ′α=φα. By subtracting equations [0227] and [0231], we get:
From [0233] it is clear that the condition is that Nα is an integer, and the smallest α is given by:
To obtain an interlacing factor 2 we just select:
From [0233] we find that the lenslet families having j′c−jc and i′c−ic even will be perfectly coupled, so 4 families appear:
We can compute the distance between the a-pixels of the different families, to see that they are truly interlaced, duplicating the a-pixel density. To make this calculation simpler, we will compute where the a-pixel of a given family would need to come from on the display to be produced by another family. That is, consider lenslets (ic,jc) of family A (so ic and jc are even) and an a-pixel (θα,φα) produced by o-pixel (iA,jA) (so iA,jA are integers). By [0229] with Nα=0.5:
For families B, C and D, the points of the o-plane that would correspond to this a-pixel (θ,φ) would be:
Subtracting these equations we get:
iαB=iαA+N
jαB=jαA−0.5
iαC=iαA+0.5
jαC=jαA+N
iαD=iαA+N+0.5
jαD=jαA+N−0.5
Since (iαA,jαA) are integers, and only integer values of iα and jα correspond to true pixels of the display, equation [0247] indicates that the a-pixels of the B family will be along the same iα=constant lines, but will lie in between the jα=constant lines of the A family. Analogously, the a-pixels of the C family will be along the same jα=constant lines, but will lie in between the iα=constant lines of the A family, and the a-pixels of the D family will be intermediate to the A family in both dimensions.
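The relations above can be illustrated numerically. The following sketch is not part of the design (the value of N and the sample pixel are assumptions); it shows that the fractional parts of the four families' indices occupy distinct half-pitch cells, doubling the sampling density in each dimension:

```python
# Illustrative check (assumed values): the four lenslet families of [0247]
# sample a half-pitch grid when N is an integer.
N = 7  # lenslet pitch in o-pixel units (assumed integer value)

def family_positions(iA, jA, N):
    """(i_alpha, j_alpha) for families A, B, C, D per equation [0247]."""
    return {
        "A": (iA, jA),
        "B": (iA + N, jA - 0.5),
        "C": (iA + 0.5, jA + N),
        "D": (iA + N + 0.5, jA + N - 0.5),
    }

pos = family_positions(0, 0, N)
# fractional parts: each family occupies a distinct half-pitch cell
frac = {k: (v[0] % 1.0, v[1] % 1.0) for k, v in pos.items()}
print(frac)  # A:(0,0), B:(0,0.5), C:(0.5,0), D:(0.5,0.5)
```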
3. Pupil Tracking and Underfilling Strategy
All the light from the lit area (cluster) 4909A of lens 4904, when deflected through lens 4907, will be emitted below edge ray 4912 and will cross the plane of the pupil 4915 below point 4913. Similarly, another lens will emit the light from the lit area of the lens above it below point 4913, and that light will not enter the pupil. This is because cluster 4909A does not correspond to lens 4907 but to lens 4904. Also, each lens will emit the light from clusters below its corresponding one above a point symmetrical to 4913 (not shown). Point 4913 is located below the bottom edge 4914 of eye pupil 4911. The point symmetrical to 4913 (not shown) is located above the top edge 4916 of the eye pupil.
Since both sets of lenses 5504 and 5506 generate overlapping virtual images 5502, said sets of lenses may be interlaced to increase the perceived resolution of virtual image 5502. Interlaced means here that the images of the green o-pixels of display 5503 on the viewer's retina when seen through lenses 5504 do not coincide with the images of the green o-pixels seen through lenses 5506, but are slightly shifted, so the resolution (in pixels per degree) of the addition of the two images is greater than that of either of them. The shift is on the order of the green o-pixel diameter. The same applies to the other colors.
Configuration 5801 includes lens 5802 that forms a virtual image at an infinite distance of cluster 5803 of display 5804. The light forming said virtual image enters eye pupil 5812 and is contained between the directions of bundles 5805 and 5806. Configuration 5801 also includes lens 5807 that forms a virtual image at an infinite distance of portion 5808 of the display 5804. The light forming said virtual image is contained between the directions of bundles 5809 and 5810. Since bundles 5806 and 5809 have the same direction, the set of two lenses 5802 and 5807 create a virtual image contained between the directions of bundles 5805 and 5810. Adding more lenses to the array 5811 concatenates portions of the virtual image, increasing its angular size. In general, some overlap of the virtual images formed by each lens may be allowed. The size of the lens 5802 or 5807 is essentially half the size of pupil 5812. Configuration 5801 has a focal length 5813.
Also shown in
Referring to configuration 5840, lens 5841 takes light from cluster 5842 of display 5823 and forms a partial virtual image at an infinite distance. The light forming said virtual image enters eye pupil 5812 and is contained between the directions of bundles 5844 and 5845. Lens 5846 takes light from cluster 5847 of display 5823 and forms a partial virtual image at an infinite distance. The light forming said virtual image enters eye pupil 5812 and is contained between the directions of bundles 5848 and 5849. Lens 5850 takes light from cluster 5851 of display 5823 and forms a partial virtual image at an infinite distance. The light forming said virtual image enters eye pupil 5812 and is contained between the directions of bundles 5852 and 5853. Similarly to what happened in configuration 5801, also here lenses 5841, 5846 and 5850 form a continuous virtual image by concatenating different partial virtual images created by the different lenses. The size of the eye pupil 5812 is essentially three times the size of lenses in lens array 5851.
Referring back to configuration 5801, one may see that it has a long focal length 5813 resulting in a high resolution virtual image.
The lens array in configurations 5820 and 5840 is composed of two families of lenses: 5821, 5826, 5830 and 5841, 5846, 5850. Each one of these families creates a full virtual image. The device in configurations 5820 and 5840 has a shorter focal length 5815 than the device in configuration 5801. This results in a more compact device but the virtual images created by the two families of lenses have a corresponding lower resolution. However, said lens families may be interlaced to increase the resolution of the device.
Also shown in
Also shown is lens 6206 with its cluster 6216 and ray 6206C from the bottom of cluster 6216 through the bottom of lens 6206; lens 6205 with its cluster 6215, ray 6205A from the bottom of cluster 6216 through the top of lens 6205, and ray 6205D from the top of cluster 6214 through the bottom of lens 6205; and lens 6204 with its cluster 6214 and ray 6204B from the top of cluster 6214 through the top of lens 6204. Rays 6205A and 6206C have the same ρ value. Rays 6206C and 6204B have the same θ value.
Comparing
We now refer to line 6705. The lens with mapping 6701 will emit light in directions below angle 6708 while the lens with mapping 6703 will emit light in directions above angle 6709. Therefore, there will be a gap between the images generated by the lenses of mappings 6701 and 6703. Said gap in the virtual image will range in directions from angle 6708 to angle 6709. This is not acceptable since this angular range would correspond to a dark area in the virtual image.
We now refer to line 6706. The lens with mapping 6701 will emit light in directions below angle 6709 while the lens with mapping 6703 will emit light in directions above angle 6708. Therefore, there will be an overlap between the images generated by the lenses of mappings 6701 and 6703. Said overlap in the virtual image will range in directions from angle 6708 to angle 6709. This is acceptable but may not be desirable.
As the pupil rotates, lines such as 6704, 6705 and 6706 will move up and down mapping curves 6701, 6702 and 6703 as was illustrated in
A similar effect can be achieved by introducing a synchronized beam-steering element in the optical path, which slightly deflects the light. Refractive or diffractive elements can be used, such as those described in the literature using LC materials, or a birefringent tapered plate, whose taper angle can be designed for the different refractive indices of ordinary and extraordinary rays so that o-pixels of different polarizations are emitted in slightly different directions, producing the desired image interlacing.
Display 7201 is paired with a lens array. There is one lens over each cluster shown.
The eye pupil has a periphery 7402 that corresponds to pupil size 6810 in
It may be seen that some pencil prints, such as 7403, fall completely inside pupil 7402. However, other pencil prints, such as 7405, may fall partially outside pupil 7402. If this happens, the brightness of the corresponding display o-pixel feeding that pencil must be increased to compensate for the power lost outside the pupil. This software adjustment of o-pixel brightness is a function of the pupil size and pupil position.
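As an illustration of this adjustment (a sketch, not taken from the text: the circular print and pupil shapes, the clamping value and all dimensions are assumptions), the fraction of a circular pencil print falling inside a circular pupil, and the corresponding brightness boost, could be computed as:

```python
import math

def overlap_area(d, R, r):
    """Area of intersection of two circles of radii R, r with center distance d."""
    if d >= R + r:
        return 0.0                       # print entirely outside the pupil
    if d <= abs(R - r):
        return math.pi * min(R, r) ** 2  # one circle inside the other
    a = r*r * math.acos((d*d + r*r - R*R) / (2*d*r))
    b = R*R * math.acos((d*d + R*R - r*r) / (2*d*R))
    c = 0.5 * math.sqrt((-d+r+R) * (d+r-R) * (d-r+R) * (d+r+R))
    return a + b - c

def brightness_gain(d, R_pupil, r_print, max_gain=4.0):
    """Factor by which to raise the o-pixel brightness (clamped, assumed limit)."""
    frac = overlap_area(d, R_pupil, r_print) / (math.pi * r_print**2)
    return max_gain if frac <= 1 / max_gain else 1.0 / frac

print(brightness_gain(0.0, 2.0, 0.3))  # print fully inside -> gain 1.0
print(brightness_gain(2.0, 2.0, 0.3))  # print centered on the rim -> about 2.07
```

The clamping avoids driving an o-pixel beyond its dynamic range when almost all of its pencil print misses the pupil.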
4. The G(θ)=sin(θ) Case
This section discloses a design example of a system using the underfilling strategy with foveal variable magnification given by the function G(θ)=sin(θ) (see the section starting in [0169]), and also using the interlacing strategy with k=2. The optics includes a continuous lens (called the conforming lens) common to all channels and an array of lenslets, each corresponding to a single channel. The lenslet array is made up of two arrays of microlenses. As always, each lenslet has a corresponding cluster. This cluster plus all the optics associated with it (its lenslet and the conforming lens) form a channel. To simplify the explanation, we will assume that the virtual image surface is at infinity and we will refer only to the 2D geometry problem. Interlacing with degree k=2 implies that there are two channel families, each of them imaging its clusters into the full Field of View. Extension to 3D geometry is straightforward.
Conforming Lens Calculation
Let's start with the calculations of the conforming lens mapping functions.
Let y(x, p) and q(x, p) be the spatial coordinate and its cosine director at the digital display 7504 of the ray arriving at a point x on the eye pupil with angle θ=arcsin(p). We look for a conforming lens such that y(x, p)=(p+P0(x))F, where F is a constant and P0(x) is an arbitrary function of x. Conservation of etendue implies dxdp=dydq, so
This is a first-order partial differential equation in the function q(x,p), whose solution can be found by the method of characteristics by solving the related Lagrange-Charpit equations. The solution is given by q(x,p)=−x/F+Q0(y(x,p)), where Q0(y) is an arbitrary function of y. This equation, together with the expression of y(x,p), gives the two mapping functions from the space x,p into the space y,q. These equations can also be written as p=y/F−P0(x) and q=−x/F+Q0(y).
With this result we can calculate the Hamilton's characteristic function of this lens l(x,y), i.e., the optical path length from the entry point x up to the exit point y along the ray linking both points. Then lx(x,y)=−p and ly(x,y)=q. Using the last expressions for p and q we get lx(x,y)=P0(x)−y/F and ly(x,y)=−x/F+Q0(y), whose integration gives the Hamilton's characteristic function of this lens: l(x,y)=P̂0(x)+Q̂0(y)−xy/F, where P̂0(x) and Q̂0(y) are functions of x and y respectively whose derivatives are P0(x) and Q0(y).
Let's choose P0(x)=x/G1 and Q0(y)=y/G2. Then the mapping functions p=y/F−P0(x) and q=−x/F+Q0(y) become p=y/F−x/G1 and q=−x/F+y/G2, which can be written as
and so
The Hamilton's characteristic function is l(x,y)=x2/(2G1)+y2/(2G2)−xy/F+l0, where l0 is an arbitrary constant.
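As a quick numerical sanity check of this mapping (illustrative only; the values of F, G1, G2 and the evaluation point are arbitrary assumptions), one can verify that the Jacobian of (x,p)→(y,q) is 1, i.e., that dxdp=dydq holds:

```python
# Etendue conservation check for the conforming-lens mapping
#   p = y/F - x/G1,  q = -x/F + y/G2   (i.e. y = F*(p + x/G1)).
F, G1, G2 = 20.0, 35.0, 50.0  # arbitrary assumed values

def forward(x, p):
    y = F * (p + x / G1)
    q = -x / F + y / G2
    return y, q

# central-difference Jacobian at an arbitrary phase-space point
x0, p0, h = 1.3, 0.17, 1e-6
y_x = (forward(x0 + h, p0)[0] - forward(x0 - h, p0)[0]) / (2 * h)
y_p = (forward(x0, p0 + h)[0] - forward(x0, p0 - h)[0]) / (2 * h)
q_x = (forward(x0 + h, p0)[1] - forward(x0 - h, p0)[1]) / (2 * h)
q_p = (forward(x0, p0 + h)[1] - forward(x0, p0 - h)[1]) / (2 * h)
jac = y_x * q_p - y_p * q_x
print(jac)  # ~1.0, confirming dx dp = dy dq
```

Analytically, the determinant is (F/G1)(F/G2) − F(−1/F + F/(G1·G2)) = 1 for any F, G1, G2.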
A 2-surface lens 7601 performing the mapping referred to in [0307], at least in the neighborhood of x=0 (point 7607 at x=x0 and point 7608 at x=−x0), can be designed with the Simultaneous Multiple Surface (SMS) technique (see
Once the lens is designed, the function z(x,p) in the conforming lens system can be calculated by tracing back from the point y(x,p) along the ray with direction cosine q(x,p). If the refractive index is n, the total slabs thickness is T and D is the distance of the axis z to the axis y (see
In this approximated case
where d=D−T(1−n−1).
Total Mapping Functions
Let's substitute each flat dielectric slab 7502 and 7503 of
The goal for the total mapping (i.e., the mapping generated by the system when the lenslet array substitutes the flat slabs of the initial conforming lens configuration) is to obtain the mapping function yi(p)≡fp+i·co, where f and co are constants and i is the lenslet number.
Again, conservation of etendue implies dxdp=dydq, so
This equation gives a condition on the other function giving the total mapping which can be expressed as qi(x,p)≡−x/f+A(p), where A(p) is an arbitrary function of p.
Note that the conforming lens mapping (i.e., that of the whole system with flat slabs instead of the lenslet array) is given by the equations in [0304], one of which is y=Fp+FP0(x).
Let's analyze a regular cluster structure where all the clusters 7805 of the display have the same size, as do the dark corridors 7806: cc is the width of a cluster and cd that of the dark corridor between clusters, so the pitch is ce=cc+cd (see
The y coordinate of the edges of the dark region i+½ (this is the dark region between cluster i and cluster i+1), yu,i and yd,i+1 are, when P0(xs)=0, yu,i=ice+cc/2 and yd,i+1=(i+1)ce−cc/2.
Each cluster edge defines two angular boundaries of the lenslet span. The upper edge of the dark region i+½ defines the smallest emission angle of cluster i+1 (pd,i+1) and the greatest emission angle of cluster i without cross-talk (πu,i). Then yd,i+1=f·pd,i+1+(i+1)co and yd,i+1=f·πu,i+i·co (see mapping in [0315]). The lower edge of the dark region i+½ defines the greatest emission angle of cluster i (pu,i) and the smallest emission angle of cluster i+1 without cross-talk (πd,i+1). Then yu,i=f·pu,i+i·co and yu,i=f·πd,i+1+(i+1)co.
Summarizing these results and referring all of them to the cluster i, we get πu,i/co=i/(F−f)+1/f, πd,i/co=i/(F−f)−1/f, pu,i/co=(i+1)/(F−f) and pd,i/co=(i−1)/(F−f).
Observe that the clusters of the same family (odd or even) tile the FOV completely, since pd,i+2=pu,i. The angular emission span of the lenslets changes with the lenslet position, but it is constant when expressed as a difference of the coordinate p at both edges, i.e., Δp=pu,i−pd,i=2co/(F−f), and it is Δπ=πu,i−πd,i=2co/f when the dark regions at both sides are also included. For both angular spans the midpoint is co·i/(F−f).
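These relations can be checked numerically; the following sketch uses assumed sample values of F, f and co and verifies the tiling and the constant spans for several clusters:

```python
# Illustrative check of the cluster-edge relations (assumed F, f, co):
F, f, co = 80.0, 20.0, 1.0

def edges(i):
    """Emission-angle edges of cluster i in the coordinate p."""
    pu = co * (i + 1) / (F - f)        # greatest emission angle
    pd = co * (i - 1) / (F - f)        # smallest emission angle
    piu = co * (i / (F - f) + 1 / f)   # greatest angle incl. dark regions
    pid = co * (i / (F - f) - 1 / f)   # smallest angle incl. dark regions
    return pu, pd, piu, pid

for i in range(-3, 3):
    pu_i, pd_i, piu_i, pid_i = edges(i)
    pd_i2 = edges(i + 2)[1]
    assert abs(pd_i2 - pu_i) < 1e-12                         # same-family tiling
    assert abs((pu_i - pd_i) - 2 * co / (F - f)) < 1e-12     # constant span Δp
    assert abs((piu_i - pid_i) - 2 * co / f) < 1e-12         # constant span Δπ
    assert abs((pu_i + pd_i) / 2 - co * i / (F - f)) < 1e-12 # common midpoint
print("same-family clusters tile the FOV with constant spans")
```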
Let's call z the coordinate on the exit plane of the lenslet array.
Eye Pupil Size
The preceding results determine the maximum size of the eye pupil free of cross-talk illumination. Consider the two rays issuing from a point of the z-axis with coordinate zi+1/2 such that they reach the pupil with angles p=πd,i+1 and p=πu,i (see
Observe that πd,i+1−πu,i does not depend on i, since πd,i+1−πu,i=Δπ−Δp, so neither does the eye pupil location (−xP<x<xP). Replacing πd,i+1−πu,i and πd,i+1+πu,i we get:
In a similar way it is possible to calculate the lenslet exit aperture size Δz=(zi+1/2−zi−1/2)=(1−d/G2) F Δp/2. The yi+1/2 corresponding to this zi+1/2, according to the Conforming lens mapping (x=0) (Eq. in [0308] and [0312]), is the mid-point of the dark region i+½, i.e., yi+1/2=(yd,i+1+yu,i)/2=ce(i+½)
Lenslet Spot Size on the Pupil
The edges of the lenslet i spot on the pupil plane are given by the coordinates xpu and xpd (see
These coordinates can be calculated with the mapping functions from z to x plane as follows:
xpu: can be calculated using Eq. in [0312], [0321] and [0326] for z=zi+1/2. The value obtained is xpu=xP/[2F/(3f)−1], which is independent of i.
xpd: can also be calculated using Eq. in [0312], [0321] and [0326], but for z=zi−1/2. The resulting value is xpd=−xpu.
In order to capture inside the pupil all the light sent by any lenslet from its corresponding cluster, we need this spot to be smaller than the pupil, i.e., xpu<xP, which implies F/f>3 or, equivalently, Δπ/Δp>2.
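A quick numeric illustration of this condition (the pupil half-size xP is an assumed value), using xpu=xP/[2F/(3f)−1]:

```python
# Spot half-size x_pu versus the pupil half-size x_P for several F/f ratios:
# x_pu < x_P requires 2F/(3f) - 1 > 1, i.e. F/f > 3.
xP = 2.0  # pupil half-size, mm (assumed)
for F_over_f in (2.5, 3.0, 6.0):
    xpu = xP / (2 * F_over_f / 3 - 1)
    print(F_over_f, round(xpu, 4), xpu < xP)
# 2.5 -> 3.0    (spot spills outside the pupil)
# 3.0 -> 2.0    (marginal case)
# 6.0 -> 0.6667 (comfortably inside)
```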
Other interesting points are the intersections with the x-axis (xcu and xcd) of the rays issuing from z=zi±1/2 with directions p=pu,i and p=pd,i, respectively (see
Non-Centered Eye-Pupil
Let's recall the fixed parameters of the design, i.e., the parameters not depending on the eye pupil position (within the pupil range). These are zi+1/2, F, f, co, and consequently (Eq. in [0318]) cc, ce. On the contrary, the y coordinates of the edges of the dark region i+½ (the dark region between cluster i and cluster i+1), yu,i and yd,i+1, slide according to the pupil position, because they depend on the value of P0(xs). The center of the pupil x=xs determines P0(xs) as shown in the equation in [0304]. In the conforming lens example [0306], P0(xs)=xs/G1, and in the example of [0313] xs=0, i.e., the pupil was centered. In this section we will analyze the performance of the design of the previous section with non-centered pupils, i.e., for xs≠0.
The total mapping functions (equations in [0315], [0316]) remain identical to the centered eye pupil case, since the optics has not moved. Nevertheless, the edges of the clusters have changed, because the eye-tracking control has provided the information about the pupil position and the contents shown on the display are modified accordingly. This means that the straight lines yi(p) 7802 shown in
Summarizing the previous results and referring all of them to the cluster i, we have now
Observe that again the clusters of the same family tile completely the FOV since pd,i+2=pu,i.
By applying the mapping functions to the rays issuing from a corner between lenslets zi+1/2, such that they reach the pupil with angles p=πd,i+1 and p=πu,i, it is possible to calculate the coordinates xPu and xPd of the points where they reach the pupil.
The pupil size is again xPu−xPd=2xP, which is independent of the lenslet number i and independent of the direction cosine shift. The pupil midpoint xPm=(xPu+xPd)/2 is also independent of i (see
The lenslet spot falls between xpu and xpd, which must be between the pupil edges xPu and xPd. To calculate xpu and xpd, apply the first equation of [0312] to z=zi±1/2 (using Eq. in [0327]) and p=pd,i or p=pu,i (using Eq. in [0342]), and solve for x. The resulting xpu and xpd are independent of i.
In order to capture inside the pupil all the light sent by the lenslet from its corresponding cluster, we need this spot to be smaller than the pupil, i.e., F/f>3.
Non Regular Clusters
Irregularities in the cluster and corridor sizes may be caused by irregularities in the mapping functions (for instance, when co in Eq. [0315] depends on i). That case is not considered here. We are going to consider the case in which the sizes of the clusters (and dark corridors) are not the same, but the mapping functions (equation in [0315]) are still equi-spaced straight lines (lines 8102 in the example of
Any set of boundary y coordinates {yu,i, yd,i} can be realized provided that for any i the following conditions are fulfilled: (1) yd,i−1<yu,i−1<yd,i<yu,i, and (2) yd,i+1−yu,i−1=2co.
The first condition establishes that any cluster or corridor must have a positive length, and the second one ensures the full tiling of the FOV by either of the two channel families. We can gain more room to accomplish this second condition by designing lenslets whose mapping functions (Eq. in [0315]) have a non-regular constant term (i.e., different from i·co), so the straight lines yi(p) in
Using the freedom to choose the set of cluster boundaries {yu,i, yd,i} we can modify the regular arrangement of
5. Preferable G(θ) for a Prescribed Conforming Lens
There may be geometrical constraints that impose restrictions on the conforming lens shape and thickness, making its design as described in [0300] not always adequate. For that reason, it is of interest to estimate the function G(θ) for a given rotationally symmetric conforming lens that allows obtaining a step-free lenslet array. To illustrate the calculation we will consider using eye tracking with the underfilling strategy and interlacing factor 2, but the procedure is not restricted to these conditions. We will consider that the design must illuminate the eye pupil when looking frontward (which we will refer to with subindex 0) and when looking at the edge of the pupil range (which we will refer to with subindex 20). In the radial direction (α=0), the mapping function can be written as:
ρ=G(θ)+ck
where G(θ) is a function and ck is a constant, both to be determined. We call θk the angle 8202 shown in
P(θk)=G(θk)+ck
From Equation [0359] ck is found, and thus the mapping can be rewritten as:
ρ=P(θk)+G(θ)−G(θk)
We consider here the case in which θk+1−θk=Δ=constant, so all lenslets have the same angular aperture Δ. As already shown in [0249], for the lenslets at the central part of the field of view (i.e., for k≤kt, with the transition value kt still to be found), the limiting tiling of the a-plane corresponds to the pupil looking frontwards, while for k>kt the limit is set by the pupil located at the rim of the pupil range. Therefore, for k≤kt we have, first, the conditions of no cross-talk with adjacent clusters, which are:
ρB0,k+1=P(θk)+G(αmax0,k)−G(θk)
ρT0,k−1=P(θk)+G(αmin0,k)−G(θk)
where αmin0,k, αmax0,k, ρB0,k+1 and ρT0,k−1 correspond to ray angles 8201 and 8203 from the pupil edges and to heights 8204 and 8206 of the rims of the clusters of the adjacent lenslets in
Second, the tiling conditions are:
ρB0,k+1=P(θk+1)+G(αmed0,k)−G(θk+1)
ρT0,k−1=P(θk−1)+G(αmed0,k)−G(θk−1)
where αmed0,k is the angle at which the tiling is produced. Subtracting [0364] from [0363] and [0367] from [0366]:
ρB0,k+1−ρT0,k−1=G(αmax0,k)−G(αmin0,k)
ρB0,k+1−ρT0,k−1=P(θk+1)−P(θk−1)−(G(θk+1)−G(θk−1))
Subtracting again [0369] and [0370] we obtain:
(G(αmax0,k)−G(αmin0,k))+(G(θk+1)−G(θk−1))=P(θk+1)−P(θk−1)
Considering that θk is intermediate between αmax0,k and αmin0,k, equation [0372] can be approximated by:
G′(θk)(αmax0,k−αmin0,k)+G′(θk)(θk+1−θk−1)=P′(θk)(θk+1−θk−1)
Since θk+1−θk−1=2Δ, we have:
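Substituting θk+1−θk−1=2Δ into [0375] and solving for G′(θk) gives (a reconstruction consistent with the surrounding equations):

```latex
G'(\theta_k)\,(\alpha_{\max 0,k}-\alpha_{\min 0,k}) + 2\Delta\, G'(\theta_k)
  = 2\Delta\, P'(\theta_k)
\;\Longrightarrow\;
G'(\theta_k) = \frac{2\Delta}{\,2\Delta + \alpha_{\max 0,k}-\alpha_{\min 0,k}\,}\,P'(\theta_k)
```

so G′ is proportional to P′ wherever the pupil subtense αmax0,k−αmin0,k is approximately constant with k.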
Therefore, for k≤kt, G′ is proportional to P′. Analogously, for k>kt, we arrive at:
G′(αk)(αmax20,k−αmin20,k)+G′(θk)(θk+1−θk−1)=P′(θk)(θk+1−θk−1)
where αk is angle 8301 in
(L−R tan θ20)tan αk≈L tan θk
So we can estimate:
Assuming that, for k>kt, αk<θk
Therefore, combining [0378] and [0384]:
Let us define:
And then from G(θ)=∫0θG′(ϑ)dϑ we obtain an estimate of the function G we were searching for.
kt can be estimated as
where θint is the angle at which the two expressions inside the bracket in [0388] are equal.
6. Mapping Implementation and Display Image Segmentation
The mapping functions give display coordinates r=(x,y) as a function of the coordinates on an a-pixel surface or a waist surface, such as (θ,φ) (see the section starting in [0169]). In this section we will use generic coordinates (H,V) instead of (θ,φ), and we will call the surface in the image space the virtual screen.
The underfilling strategy results in some o-pixels on the display being off, regardless of the real or virtual image being displayed. In particular, the o-pixels of the black corridors of 8401 in
Each lenslet imposes a continuous mapping between the x,y and the H,V coordinates, given by the functions (H,V)=(Aij(x,y), Bij(x,y)). The same mapping can also be expressed in terms of the functions h and v, as (H,V)=(Hij(h,v), Vij(h,v)). Observe that Hij(h,v) and Vij(h,v) are only defined for the cluster i,j, i.e., when i≤h≤i+m and j≤v≤j+n. Observe also that Aij(x,y) and Bij(x,y) do not depend on the pupil position p, but Hij(h,v) and Vij(h,v) do in general.
In an interlaced strategy, the lenslets (and their clusters) can be classified into k2 families, where k is the interlacing degree. For instance, when k=2 and there is a square configuration of lenslets, the 4 families are (00) i,j odd; (10) i odd, j even; (01) i even, j odd; and (11) i,j even. The mapping of any one of the families, for instance (01), can be written as (H,V)=(α01(x,y), β01(x,y)), where α01(x,y)=Aij(x,y) and β01(x,y)=Bij(x,y) if (x,y) belongs to the cluster i,j and this cluster belongs to the family 01. α01(x,y) and β01(x,y) are called the mapping functions of the lenslet family MF=01. The image space of any functions αMF(x,y) and βMF(x,y) is the whole virtual screen, while their object space is formed by the points (x,y) belonging to the clusters of the family MF. The functions αMF(x,y) and βMF(x,y) can also be written as functions of h and v when p is known: αMF(x,y)=HMF(h(x,y,p),v(x,y,p)) and βMF(x,y)=VMF(h(x,y,p),v(x,y,p)).
For a given lenslet design, the functions h(x,y,p), v(x,y,p), m(h), n(v), αMF(x,y) and βMF(x,y) are known. Then, for the mapping implementation in the rendering engine software, given (x,y,p), the values of H,V are obtained in these 5 steps:
1. Calculate h=h(x,y,p) and v=v(x,y,p).
2. Is there a couple i,j such that i≤h≤i+m and j≤v≤j+n? If not, turn off the pixel at (x,y).
3. Find the lenslet family MF containing the lenslet associated to the cluster i,j.
4. Calculate (H,V)=(αMF(x,y), βMF(x,y)).
5. Find the brightness corresponding to (H,V) and turn on the pixel (x,y) with that brightness.
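The 5 steps above can be sketched as follows. This is a minimal illustration, not the actual rendering engine: the mapping functions, the family classifier, the image sampler and the display coordinates are hypothetical placeholders, and the cluster widths m, n are taken as constants rather than functions:

```python
import math

def render_pixel(x, y, p, h, v, m, n, family_of, alpha_MF, beta_MF, image):
    """Sketch of the 5-step per-pixel mapping (all names are illustrative)."""
    hv, vv = h(x, y, p), v(x, y, p)               # step 1: cluster coordinates
    i, j = math.floor(hv), math.floor(vv)         # candidate cluster indices
    if not (hv <= i + m and vv <= j + n):         # step 2: inside a cluster?
        return 0.0                                # no: o-pixel stays off
    MF = family_of(i, j)                          # step 3: lenslet family
    H, V = alpha_MF[MF](x, y), beta_MF[MF](x, y)  # step 4: virtual-screen coords
    return image(H, V)                            # step 5: brightness to display

# toy usage with placeholder mappings (identity-like, pitch 1, m = n = 0.8):
h_ = lambda x, y, p: x
v_ = lambda x, y, p: y
fam = lambda i, j: (i % 2, j % 2)
A = {mf: (lambda x, y: x) for mf in [(0, 0), (0, 1), (1, 0), (1, 1)]}
B = {mf: (lambda x, y: y) for mf in [(0, 0), (0, 1), (1, 0), (1, 1)]}
img = lambda H, V: 1.0  # uniform virtual image

r_on = render_pixel(0.5, 0.5, None, h_, v_, 0.8, 0.8, fam, A, B, img)
r_off = render_pixel(0.9, 0.5, None, h_, v_, 0.8, 0.8, fam, A, B, img)
print(r_on, r_off)  # 1.0 0.0  (the second point lies in a dark corridor)
```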
Continuity and overlapping of the partial virtual images in the virtual screen. For a correct tessellation of the partial virtual images of every lenslet into a common virtual image, the lenslet mapping functions HMF(h,v) and VMF(h,v) have to fulfill HMF(i+m,v)=HMF(i+k,v), VMF(i+m,v)=VMF(i+k,v) and HMF(h,j+n)=HMF(h,j+k), VMF(h,j+n)=VMF(h,j+k). Remember that k is the interlacing degree. These last equations establish the conditions on the tiling of the image regions of the VR screen of the different lenslets of the family MF. This tiling must give a continuous image on the VR screen from the set of clusters of the family MF. In order to allow some tolerance for the position of these image regions, it is advisable to allow for some overlap of the partial virtual images. The contours of the display clusters will be dimmed smoothly and slightly extended to overlap slightly their neighbor virtual images. This overlap is preferably limited so that at least 80% of the waists of pencils containing foveal rays and associated with object pixels belonging to clusters do not overlap angularly as seen from the center of the eye pupil. Let 2Δ be the width of the overlapping regions in the variables h or v. Assume that Δ<<1. The new calculation process of H,V is:
1. Calculate h=h(x,y,p) and v=v(x,y,p).
2. Find the couple i,j such that i−Δ≤h≤i+m+Δ and j−Δ≤v≤j+n+Δ. If there is no solution, turn the (x,y) pixel off.
3. Find the lenslet family MF containing the lenslet i,j.
4. Calculate (H,V)=(αMF(x,y), βMF(x,y)). The functions Hij(h,v) and Vij(h,v) are now defined for a cluster i,j slightly bigger than in the preceding case, when there was no overlapping of partial virtual images: they are now defined for i−Δ≤h≤i+m+Δ and j−Δ≤v≤j+n+Δ.
5. Find the brightness corresponding to (H,V) and turn on the pixel (x,y) with that brightness times the weighting function w(h,v)=c(h)·d(v), where c(h) and d(v) can be calculated with the following routine: Set c(h)=1 if i+Δ≤h≤i+m−Δ. If i−Δ≤h≤i+Δ then c(h)=(1+(h−i)/Δ)/2. If i+m−Δ≤h≤i+m+Δ then c(h)=(1−(h−i−m)/Δ)/2. Set c(h)=0 otherwise. Set d(v)=1 if j+Δ≤v≤j+n−Δ. If j−Δ≤v≤j+Δ then d(v)=(1+(v−j)/Δ)/2. If j+n−Δ≤v≤j+n+Δ then d(v)=(1−(v−j−n)/Δ)/2. Set d(v)=0 otherwise. The rising and falling ramps are chosen so that the weights of overlapping clusters sum to 1 in the corridors.
This strategy smoothly dims the contours of the clusters. For this strategy to be correct, the mapping functions HMF(h,v) and VMF(h,v) have to fulfill HMF(h+m,v)=HMF(h+k,v), VMF(h+m,v)=VMF(h+k,v) for any couple h,v in the corridors i+m−Δ≤h≤i+m+Δ, i+k−Δ≤h≤i+k+Δ, and HMF(h,v+n)=HMF(h,v+k), VMF(h,v+n)=VMF(h,v+k) for the corridors j+n−Δ≤v≤j+n+Δ, j+k−Δ≤v≤j+k+Δ. This condition ensures that the weighting functions sum to 1 at any point of the overlapping regions. This condition on the mapping functions is more restrictive than the one found when there is no tolerance allowance for the tiling, which establishes the same equations but only for the curves h=i+m, h=i+k and v=j+n, v=j+k, which are in the middle of the abovementioned corridors.
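The one-dimensional weight of step 5 can be sketched as follows. This is an illustration, not the actual implementation: the sample values of i, m and Δ are assumptions, and the ramp signs are chosen so that the weights of two overlapping same-family clusters sum to 1 across a corridor:

```python
def c(h, i, m, delta):
    """Trapezoidal cluster weight in one variable (illustrative)."""
    if i + delta <= h <= i + m - delta:
        return 1.0                                # interior of the cluster
    if i - delta <= h < i + delta:
        return (1 + (h - i) / delta) / 2          # rising edge: 0 -> 1
    if i + m - delta < h <= i + m + delta:
        return (1 - (h - i - m) / delta) / 2      # falling edge: 1 -> 0
    return 0.0                                    # outside the extended cluster

# Partition of unity in a corridor: the falling edge of one cluster at
# h = i + m + t pairs with the rising edge of the next same-family cluster
# at h' = i' + t, for t in [-delta, delta].
i, m, delta = 0, 0.8, 0.05
for k in range(11):
    t = -delta + 2 * delta * k / 10
    assert abs(c(i + m + t, i, m, delta) + c(t, 0, m, delta) - 1.0) < 1e-12
print("weights sum to 1 across the overlap corridor")
```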
A more practical condition for the overlapping case is to require that:
1. HMF(h+m,v)=HMF(h+k,v), VMF(h+m,v)=VMF(h+k,v) and HMF,h(h+m,v)=HMF,h(h+k,v), VMF,h(h+m,v)=VMF,h(h+k,v) only for the curves h=i+m, h=i+k, where the subindex h denotes the partial derivative of the function with respect to h, i.e., ∂( )/∂h.
2. HMF(h,v+n)=HMF(h,v+k), VMF(h,v+n)=VMF(h,v+k) and HMF,v(h,v+n)=HMF,v(h,v+k), VMF,v(h,v+n)=VMF,v(h,v+k) only for the curves v=j+n, v=j+k, where the subindex v denotes the partial derivative of the function with respect to v, i.e., ∂( )/∂v.
This approximation assumes that the functions HMF and VMF can be approximated by their linear expansions at the points of the corridors. The approximation works for Δ small enough.
7. Lenslet Design
The number of freeform surfaces required for the optical design of the lenslets depends on the specific design parameters targeted: FOV, interlacing factor, virtual image resolution, display o-pixel size, virtual image resolution versus polar angle θ, etc. We disclose next an exemplary design, whose diagonal cross section is shown in
The design performs interlacing factor 2, eye tracking and underfilling strategy. The selected materials for this example are POG01 for the two arrays closer to the eye and POG12 for the element closer to the display, both UV curable resins used by Süss MicroTec.
This design has been done for a 1.78″ diagonal OLED RGBW square o-pixel microdisplay with about 3.2k×3.2k resolution (so the full-color pixel is about 10 microns). It achieves FOV-H=FOV-V=80 degs with an eye relief of 15 mm, a thickness of less than 9 mm and an eyebox of 16 mm (this includes ±25 deg eye rotations and ±2 mm eye shift), with the optics weighing only 4 grams per eye.
The lens designs are done taking into account the variable directional magnification desired, the eye rotations and the human vision angular acuity function. The lenslets have been specifically designed with a G(θ) function such that the resulting VR pixel radial resolution (proportional to G(θ)) matches with high accuracy the curve shown in
The design is done considering that cross-talk between the different channels must be avoided for the eye located in any position within the pupil range. Additionally, the surface shapes are constrained so the piecewise continuous intersection curve between one lens surface and its adjacent ones is contained in the clear aperture, so it can be manufactured more easily, without presenting steps between surfaces.
The design of the freeform surfaces may be done by multiparameter optimization with adequate constraints using commercial design software such as Code V or Zemax. Apart from the symmetries of the arrays in this example, most lenses in one octant are different from the others. Each lens is designed with plane symmetry with respect to a plane containing the frontward axis (z-axis). An efficient implementation of the design algorithm may incorporate the possibility of designing only the lenslets along a diagonal (of indices (i,i), i≥0) and obtaining the rest (which are at different radial distances from the z-axis) by interpolation of the diagonal lenslets' parameters. The optimization may be carried out with the following expression for the freeform surfaces:
and ceiling(x) gives the smallest integer greater than or equal to x. For a polynomial of degree d, the maximum monomial index N is given by N=(d+1)(d+2)/2. The following tables show the resulting parameters for exemplary lenslets along the diagonal, of indices (0,0), (3,3), and (6,6). The parameters not shown in the tables are 0.
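The closed form N=(d+1)(d+2)/2 simply counts the bivariate monomials x^m·y^n with m+n ≤ d. A small sketch of that count (the function names are illustrative, not from the patent):

```python
def monomial_count(d: int) -> int:
    """Number of bivariate monomials x^m * y^n with m + n <= d."""
    return (d + 1) * (d + 2) // 2

def monomials(d: int) -> list:
    """Enumerate exponent pairs (m, n) with m + n <= d, degree by degree."""
    return [(m, total - m) for total in range(d + 1) for m in range(total + 1)]

# The enumeration matches the closed form used in the text, e.g. for d = 4:
assert len(monomials(4)) == monomial_count(4) == 15
```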
As mentioned before, our design takes into account the eye rotations and the human vision angular acuity function, in order to make the best use of the available degrees of freedom and to avoid a lens that works better than needed in some circumstances while compromising performance in others.
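The diagonal-design-plus-interpolation scheme mentioned above (design only the lenslets of indices (i,i) and interpolate the rest by radial distance) can be sketched as follows. This is a minimal illustration, not the patented algorithm: the PITCH value, the placeholder coefficients, and the names diagonal_params and interpolate_params are all hypothetical.

```python
import math

# Hypothetical setup: coefficient vectors designed only for the diagonal
# lenslets (i, i), i >= 0, keyed by their radial distance from the z-axis.
PITCH = 1.0  # lenslet pitch in mm (made-up value)
diagonal_params = {
    i * PITCH * math.sqrt(2.0): [0.1 * i, 0.01 * i]  # placeholder coefficients
    for i in range(8)
}

def interpolate_params(i: int, j: int) -> list:
    """Obtain the parameters of lenslet (i, j) by linear interpolation
    of the designed diagonal lenslets, using radial distance from the z-axis."""
    r = PITCH * math.hypot(i, j)
    radii = sorted(diagonal_params)
    # Clamp to the designed radial range.
    if r <= radii[0]:
        return diagonal_params[radii[0]]
    if r >= radii[-1]:
        return diagonal_params[radii[-1]]
    # Find the bracketing diagonal lenslets and blend their parameters.
    for r0, r1 in zip(radii, radii[1:]):
        if r0 <= r <= r1:
            t = (r - r0) / (r1 - r0)
            p0, p1 = diagonal_params[r0], diagonal_params[r1]
            return [(1 - t) * a + t * b for a, b in zip(p0, p1)]
```

A lenslet on the diagonal recovers its designed parameters, while an off-diagonal lenslet blends the two nearest designed radii.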
Lens manufacturing can be done by replication on glass substrates by a UV-curing process, with molds manufactured by diamond turning. PMMA, Zeonex E48R, PC, and EP-5000 are also candidate materials that could have been used for this design, and can also be conformed by a thermal embossing process. If the edges of said microlenses are not perfect and occupy a certain non-negligible area (especially on the surface closest to the eye), they can produce undesirable scattering. To avoid such an effect, said edges may be covered by a mask.
Apart from manufacturing aspects, the imaging performance of these designs may be improved if a mask blocks the light from the corners of the quasi-rhomboidal apertures of the minilenses, which are usually their poorest-performing portions. An example of such a mask, limiting the aperture of the freeform surface closest to the eye, is shown in
8. Polarization Based Enhancements
Using polarized light permits further enhancements in this invention, which are disclosed next.
As a general rule, the greater the number of waist surfaces, the larger the number of candidate accommodation surfaces, which helps to reduce the VAC. When two are available, one is set closer to the eye than the other, and when more than two are used, they are preferably spaced by between 2 and 5 diopters along the frontwards direction. FIG. 94 illustrates this strategy, showing a stereoscopic system 9400 where eyes 9401 and 9402, with pupils 9403 and 9404, look into microlens arrays 9405 and 9406 facing displays 9407 and 9408. Both microlens arrays are multi-focal, i.e., they are able to form pencils with a waist plane selectable between 9409 and 9410, which are preferably designed to coincide with two accommodation planes (no interlacing is applied in the figure, but interlacing may be included too). Consider now that we want to show v-pixel 9415. It is closest to plane 9409, so we turn on the pencils that cross v-pixel 9415 and have waists at 9409. This configuration still shows VAC because the v-pixel is at position 9415 while eye 9401 accommodates at a-pixel 9417 and eye 9402 accommodates at a-pixel 9418, both at the waist plane 9409, which is also the accommodation plane of 9417 and 9418. However, by choosing an accommodation plane 9409 near the v-pixel 9415, the VAC is reduced with respect to the situation of
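The waist-plane selection rule just described (turn on the pencils whose waist plane is nearest the v-pixel) can be sketched as follows. Comparison is done in diopters, since accommodation error is naturally measured in vergence; the function name and example distances are illustrative, not from the text:

```python
def nearest_waist_plane(vpixel_distance_m: float, waists_m: list) -> float:
    """Pick the waist plane nearest to the v-pixel depth, comparing in
    diopters (vergence), which is how accommodation error is perceived."""
    v = 1.0 / vpixel_distance_m
    return min(waists_m, key=lambda d: abs(1.0 / d - v))

# Example: waist planes at 0.5 m (2 D) and 2 m (0.5 D); a v-pixel at 0.7 m
# (about 1.43 D) is nearer in vergence to the 0.5 m plane.
```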
In one embodiment, block 9611 may be made of a birefringent material, for instance calcite, quartz, or anisotropic polymers such as those that may be made from stretched polyester films. Unpolarized light emitted by display 9614 will be split into two polarizations (ordinary and extraordinary rays) that experience two different refractive indices as light crosses element 9611. As a consequence, two waist planes will be produced at two different distances. Pencils generated by this optical arrangement will be bifocal, i.e., with 2 waists: one at 9616 and the other at 9617.
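The order of magnitude of the waist split can be estimated with the paraxial apparent-depth approximation for a plane-parallel plate; this is a rough sketch under that assumption (the 2 mm thickness is a made-up value; n_o ≈ 1.658 and n_e ≈ 1.486 are the well-known calcite indices):

```python
def axial_image_shift(thickness_mm: float, n: float) -> float:
    """Paraxial axial image shift produced by a plane-parallel plate of
    refractive index n (apparent-depth approximation)."""
    return thickness_mm * (1.0 - 1.0 / n)

def waist_separation(thickness_mm: float, n_o: float, n_e: float) -> float:
    """Axial separation between the two waists seen by the ordinary and
    extraordinary polarizations crossing the same plate."""
    return abs(axial_image_shift(thickness_mm, n_o)
               - axial_image_shift(thickness_mm, n_e))

# A made-up 2 mm calcite plate (n_o ~ 1.658, n_e ~ 1.486) splits the
# object-side waists by roughly 0.14 mm; the lenslets then magnify this
# into a much larger depth difference of the virtual image.
```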
Alternatively, the display and/or some element of the optics array may be moved with an actuator between the two positions, and the display may be time-multiplexed in synchronization with those movements, so in the first half of the frame the waist is on one plane and in the second half on the other plane. This method is less efficient and requires faster displays. Another option consists in dividing the lenslet array into two or more groups and designing each one to provide pencils with a different waist position. This lowers the potential x-y resolution of the virtual image, since those groups could otherwise be used to do interlacing in a single waist plane. Finally, lenslets providing pencils with two or more waists, as in multifocal intraocular lenses, may certainly help too, although the MTF quality at the waist planes is lower than in single-waist pencils.
In another embodiment, display 9614 emits linearly polarized light and is covered by a liquid crystal panel 9618 (without any polarizing filter) that has the ability to rotate the polarization of the light by 90 degs when a voltage is applied. Different portions of the panel (i.e., the liquid crystal panel pixels) may produce different polarization rotations, even down to setting the polarization of each individual pixel; the display itself may even have that capability (so 9618 would not be needed). Said different polarizations will experience different refractive indices as the light crosses element 9611 and, as referred to above, will produce virtual image planes at different distances. This embodiment then allows different regions of the virtual image plane to be placed at different distances, reducing the vergence-accommodation mismatch. With this optical arrangement we may have bifocal pencils with 2 waists (one at 9616 and another at 9617) whose relative brightness weights may be controlled with the voltage applied to the liquid crystal pixel, going from a single-waist pencil at 9616 to a single-waist pencil at 9617, passing through bifocal pencils whose 2 waists have a variable brightness contribution depending on the applied voltage. This analogue behavior of the variable relative brightness can be used to fine-tune the perceived accommodation location of bifocal pencils between the two waists, and therefore to fine-tune a-pixels made of this type of pencils.
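The analogue split of power between the two waists can be sketched with a Malus-type cos²/sin² decomposition of the rotated linear polarization onto the two eigen-polarizations of the birefringent block. This is an idealized model (lossless panel, perfect rotation), and the function name is illustrative:

```python
import math

def waist_weights(rotation_deg: float) -> tuple:
    """Relative brightness of the two waists when the liquid crystal
    rotates the incoming linear polarization by rotation_deg: the two
    eigen-polarizations of the birefringent block receive cos^2 and
    sin^2 of the power (idealized, lossless model)."""
    th = math.radians(rotation_deg)
    return (math.cos(th) ** 2, math.sin(th) ** 2)

# 0 deg puts all the power in one waist, 90 deg in the other; intermediate
# rotations blend the two, fine-tuning the perceived accommodation.
```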
Liquid crystal panel 9705 has the ability to rotate the polarization of the light crossing it. Said light, as it crosses birefringent element 9704, will experience a refractive index n2A or n2B, depending on the polarization state of the light. Said light then crosses liquid crystal panel 9703, which again has the ability to rotate the polarization of said light. Again, as said light crosses birefringent element 9702, it will experience a refractive index n1A or n1B, depending on the polarization state of the light. This system therefore has four possible states, depending on the polarization rotations introduced by elements 9705 and 9703, which correspond to light experiencing a refractive index n2A or n2B at element 9704, and a refractive index n1A or n1B at element 9702. The crossing of element 9704 displaces virtual image 9707, as does the crossing of element 9702. This embodiment therefore has the ability to place the virtual image 9707 at four different distances from the display 9706.
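The four states can be enumerated explicitly; a sketch under the same apparent-depth approximation, with made-up thicknesses and indices (the names n1A … n2B follow the text, with n1A/n1B at element 9702 and n2A/n2B at element 9704):

```python
from itertools import product

def total_shift(t1_mm: float, t2_mm: float, n1: float, n2: float) -> float:
    """Combined axial image shift of two plane-parallel birefringent
    elements (apparent-depth approximation)."""
    return t1_mm * (1.0 - 1.0 / n1) + t2_mm * (1.0 - 1.0 / n2)

# Made-up thicknesses (2 mm and 3 mm) and indices.
n1A, n1B = 1.486, 1.658
n2A, n2B = 1.486, 1.658

shifts = sorted({round(total_shift(2.0, 3.0, n1, n2), 4)
                 for n1, n2 in product((n1A, n1B), (n2A, n2B))})
# With unequal thicknesses, the 2x2 polarization states yield four
# distinct shifts, i.e., four selectable virtual image distances.
```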
Also, different regions of panels 9705 or 9703 may be addressed separately, rotating the polarization of light differently. This results in an image split over different portions of image planes placed at different distances; that is, the waists of the pencils will be located at those four distances by adequate addressing of the LCD panels. Objects near the display 9706 may be represented on an image plane closer to 9706, while objects far from the display 9706 may be represented on an image plane further from 9706. This may be used to alleviate the vergence-accommodation mismatch. Alternatively, liquid crystal panel 9703 can be switched and the fast axis of element 9702 can be placed at 45 degs relative to the fast axis of 9704. As a consequence, 9702 will produce pencils with two waists, and those waist pairs may be jointly shifted according to the addressing of 9705.
Additionally, a similar analogue control of the relative brightness weight to the one explained for
Waist planes are preferably designed, as already mentioned, to coincide with accommodation planes. Alternatively, waist planes may be designed to be located between two consecutive accommodation planes of the selected ones, to provide a more uniform resolution between both accommodation planes. Typical positions of the two waist surfaces may be between 0.25 and 1 m for the one closest to the eye, and between 0.75 and 5 m for the farthest one.
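As a consistency check of these metric ranges against the 2-5 diopter spacing suggested earlier, distances can be converted to vergence; the example placement below is illustrative, not from the text:

```python
def diopters(distance_m: float) -> float:
    """Vergence, in diopters, of a plane at the given distance."""
    return 1.0 / distance_m

# Example placement inside the stated ranges: a near waist at 0.4 m (2.5 D)
# and a far waist at 2 m (0.5 D) are separated by 2 D, consistent with the
# 2-5 diopter spacing suggested earlier.
separation = diopters(0.4) - diopters(2.0)
```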
Using Adjacent Clusters of Orthogonal Polarizations
Notice that embodiments in
This application claims benefit of and priority to commonly invented and assigned U.S. Provisional Patent Application No. 62/944,105, filed on 5 Dec. 2019 for “Lenslet Based Ultra-High Resolution Optics For Virtual And Mixed Reality”, and U.S. Provisional Patent Application No. 63/090,795, filed on 13 Oct. 2020 for “Lenslet Based Freeform Optics”. Both of those provisional applications are incorporated herein by reference in their entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2020/063629 | 12/7/2020 | WO |
Number | Date | Country | |
---|---|---|---|
63090795 | Oct 2020 | US | |
62944105 | Dec 2019 | US |