The disclosed technology generally relates to three-dimensional (3D) displays, and more specifically to methods and frameworks for designing and optimizing high-performance light field displays.
Conventional stereoscopic three-dimensional displays (S3D) stimulate the perception of 3D spaces and shapes from a pair of two-dimensional (2D) perspective images at a fixed distance, one for each eye, with binocular disparities and other pictorial depth cues of a 3D scene seen from two slightly different viewing positions. A key limitation of S3D-type displays is the well-known vergence-accommodation conflict (VAC) problem. With binocular vision, when observing an object, the eyes must rotate around a horizontal axis so that the projection of the image is in the center of the retina in both eyes. Vergence is the simultaneous movement of both eyes in opposite directions to obtain or maintain single binocular vision. Accommodation is the process by which the eye changes optical power to maintain a clear image or focus on an object as its distance varies. Under normal viewing conditions, changing the focus of the eyes to look at an object at a different distance automatically drives both vergence and accommodation. In the context of 3D displays, the VAC occurs when the brain receives mismatching cues between the distance of a virtual 3D object (vergence) and the focusing distance (accommodation) required for the eyes to focus on that object. This issue stems from the inability to render correct focus cues, including accommodation and retinal blur effects, for 3D scenes. It causes several cue conflicts and is considered one of the key contributing factors to various visual artifacts associated with viewing S3D displays.
The disclosed embodiments relate to three-dimensional (3D) displays, and more specifically to methods and frameworks for designing and optimizing of high-performance light field displays, including but not limited to light field head-mounted displays.
In one example embodiment, a method is provided for designing an integral-imaging (InI) based three-dimensional (3D) display system. The system includes an arrayed optics, an arrayed display device capable of producing a plurality of elemental images, a first reference plane representing a virtual central depth plane (CDP) on which light rays emitted by a point source on the display converge to form an image point, a second reference plane representing a viewing window for viewing a reconstructed 3D scene, and an optical subsection representing a model of a human eye. The method for designing the system includes tracing a set of rays associated with a light field in the InI-based 3D system, where the tracing starts at the arrayed display device and is carried out through the arrayed optics and to the optical subsection for each element of the arrayed display device and arrayed optics. The method further includes adjusting one or more parameters associated with the InI-based 3D system to obtain at least a first metric value within a predetermined value or range of values. The first metric value corresponds to a ray directional sampling of the light field.
The disclosed InI-based 3D systems reconstruct the 4-D light field of a 3D scene by angularly sampling the directions of the light rays apparently emitted by the 3D scene. According to some embodiments, the optical design process includes optimizing the mapping of both ray positions and ray directions in 4-D light field rendering rather than simply optimizing the 2D mapping between object-image conjugate planes as in conventional HMD designs.
There have been several efforts to address the vergence-accommodation conflict (VAC) problem, among them an integral-imaging-based (InI-based) light field 3D (LF-3D) display. Integral imaging generally refers to a three-dimensional imaging technique that captures and reproduces a light field by using a two-dimensional array of microlenses or optical apertures. This configuration allows the reconstruction of a 3D scene by rendering the directional light rays apparently emitted by the scene via an array optics seen from a predesigned viewing window. However, a systematic design approach suitable for the optimization of 3D display systems, including InI-based LF-3D head-mounted displays (HMDs), is lacking. As such, designing high-performance 3D display systems remains a challenge. In particular, without a well-optimized design, an InI-HMD will not be able to correctly render the depth and accommodation cues of the reconstructed 3D scene, resulting in images of compromised quality and comfort.
In the description that follows, the InI-HMD is used as an example system to illustrate the above noted problems and the disclosed solutions. It is, however, understood that the disclosed embodiments are similarly applicable to other types of 3D display systems, such as non-head-worn, direct-view type light field displays, where the optical systems for rendering light fields are not directly worn on a viewer's head; 3D light field displays that only sample the light field in one angular direction, typically the horizontal direction (better known as displays rendering horizontal parallax only, i.e., multi-views arranged as vertical stripes on the viewing window); or super multi-view displays or autostereoscopic displays where the elemental views are generated by an array of projectors or imaging units.
In systems that utilize integral imaging, a 3D image is typically displayed by placing a microlens array (MLA) in front of the image, where the portion of the image seen through each lenslet of the MLA varies with the viewing angle. An InI-HMD system requires that different elemental views created by multiple elements of the MLA be rendered and seen through each of the eye pupils. Therefore, the light rays emitted by multiple spatially-separated pixels on these elemental views are received by the eye pupil and integrally summed up to form the perception of a 3D reconstructed point, which essentially is the key difference of an InI-HMD from a conventional HMD. In a conventional HMD system, the viewing optics simply project a 2D image on a microdisplay onto a 2D virtual display, and thus the light rays from a single pixel are imaged together by the eye optics to form the perception of a 2D point. Due at least to this inherent difference of the image formation process, the existing optical design methods for conventional HMDs become inadequate for designing a true 3D LF-HMD system, which requires the ray integration from multiple individual sources.
The disclosed embodiments, among other features and benefits, provide improved optical design methods that enable (1) producing an LF-3D HMD design that precisely executes real ray tracing, and (2) optimizing the design to precisely sample and render the light field of a reconstructed 3D scene, which is key to driving the accommodation status of the viewer's eye and thus solving the VAC problem. In embodiments of the disclosed technology, one or more new design constraints or metrics are established that facilitate the optimization of ray positional sampling of the light field and/or ray directional sampling of the light field. For instance, one constraint or metric for positional sampling accounts for global distortions (e.g., aberrations) related to the lateral positions of the virtual elemental images (EIs) with respect to the whole FOV of the reconstructed 3D scene. Another constraint or metric for directional sampling provides a measure of deviation or deformation of the ray footprints from their paraxial shapes.
The use of the disclosed constraints and metrics improves the optical design process, and allows a designer to assess the quality of the produced images (e.g., in terms of solving the VAC problem) and improve the design of the optical system. In some embodiments, by minimizing the disclosed metrics during the design process, an optimum design may be produced. The disclosed metrics further provide an assessment of achievable image quality, and thus, in some embodiments, a desired image quality goal may be achieved for a particular optical system based on target values (as opposed to minimization) of the disclosed metrics.
Owing to the nature of 2D image formation described above, the optical design process for such a system only needs to focus on the 2D mapping between the pixels on a microdisplay and their corresponding images on the virtual display; the optimization strategy concentrates on control of optical aberrations that degrade the contrast and resolution of the virtual display or distort the geometry of the virtual display. To this end, the rays from every single pixel on the display are imaged by a common optical path or sequence of optical elements. Therefore, the conventional HMD system can be modeled by a shared optical configuration. In such a system, the retinal image of a rendered point is the projection of the rays emitted by a single pixel on the microdisplay or the magnified virtual display, allowing the optical performance of a conventional 2D HMD system to be adequately evaluated by characterizing the 2D image patterns projected by the rays from a handful of field positions on the microdisplay.
In contrast, an LF-HMD reconstructs the 4-D light field of a 3D scene by angularly sampling the directions of the light rays apparently emitted by the 3D scene.
Each pixel on these EIs is considered as the image source defining the positional information, (s, t), of the 4-D light field function. Associated with the array of EIs is an array optics, such as an MLA, each element of which defines the directional information, (u, v), of the light field function (see, also
In a magnified-view configuration, an eyepiece is inserted to further magnify the miniature 3D scene into a large 3D volume with an extended depth in virtual space (e.g., A′ and B′ in
Owing to the nature of 3D image formation described above, light field reconstruction of a 3D point is the integral effect of the light rays emitted by multiple spatially separated pixels, each of which is located on a different elemental image and imaged by a different optics unit of an array optics. Each pixel provides a sample of a light field position and its corresponding unit of imaging optics provides a sample of a light field direction. Therefore, the optical design process requires optimizing the mapping of both ray positions and directions in 4-D light field rendering rather than simply optimizing the 2D mapping between object-image conjugate planes in conventional HMD designs. The optimization strategy needs to not only properly control and evaluate optical aberrations that degrade the contrast and resolution of the virtual display or distort the geometry of the 2D elemental images, which accounts for the ray position sampling aspects of the light field, but also requires methods and metrics to control and evaluate the optical aberrations that degrade the accuracy of the directional ray sampling.
As illustrated in
The retinal image of a rendered 3D point is the integral sum of the projected rays emitted by multiple pixels on different elemental images, and the appearance of the image varies largely with the states of the eye accommodation. Therefore, the optical performance of an LF-HMD system cannot be adequately evaluated by characterizing the 2D image patterns projected by the rays from a handful of field positions on the microdisplay alone, but needs to be evaluated by characterizing the integral images on the retina with respect to different states of eye accommodation. For this purpose, an eye model is a necessary part of the imaging system.
To achieve a good mapping, it is critical to obtain (1) a good control of the positional sampling mapping from (s, t) to (xc, yc) of the light field function so that each of the EIs rendered on the display panel is well imaged onto the virtual CDP, and (2) a good control of the directional sampling mapping from (u, v) to (xv, yv) of the light field function so that the ray bundles from each of the imaged EIs are projected onto the viewing window with the correct directions and footprints and thus the elemental views are well integrated without displacement from each other.
In particular, in optimizing the ray position mapping from (s, t) to (xc, yc), additional considerations regarding interactions between individual elements must be taken into account. Further, optimizing the ray direction mapping from (u, v) to (xv, yv) requires completely new design metrics capable of precisely evaluating the quality of the directional sampling of the reconstructed light field of a 3D scene and its effect upon the display system.
In conventional 2D HMD designs, HMD optical systems are commonly configured to trace rays reversely from a shared exit pupil (or the entrance pupil of the eye) toward the microdisplay, and no eye model is needed. In contrast, the sub-systems in accordance with the disclosed embodiments are configured such that the ray tracing starts from the microdisplay, or equivalently from the EI, toward the viewing window. In this way, ray tracing failures are avoided because the projections of the array of apertures of the optics units of the array optics on the viewing window do not form a commonly-shared exit pupil as in conventional HMDs. Additionally, an ideal lens emulating the eye optics of a viewer, or an established eye model (e.g., the Arizona eye model), is inserted with its entrance pupil coinciding with the viewing window for better optimization convergence and convenient assessment of the retinal image of a light field reconstruction. It should be noted that the use of a standard eye model is one non-limiting example that allows the design of the 3D system for mass-produced products. In some embodiments, an individualized or customized eye model may be used.
Referring back to
Although the microdisplay is also divided into an M by N array of EIs, one for each lenslet, the lateral position and size of each EI is more complex and dependent on several other specifications of the display system. For instance, the viewing window, which is not necessarily the optical conjugate of the MLA and can be shifted longitudinally along the optical axis according to a design requirement (see, e.g.,
In Equation (2), g is the gap between the display panel and MLA, l is the gap between the MLA and the intermediate CDP, zIC is the gap between the intermediate CDP and the eyepiece group, and z′xp is introduced to refer to the distance between the eyepiece group and the imaged viewing window by the eyepiece group, which can be further given as:
In Equation (3), fep is the equivalent focal length of the eyepiece group. Therefore, for a given sub-system unit indexed as (m, n), the lateral coordinates of the center of the corresponding EI can be expressed as:
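Since Equations (2) through (4) are referenced but not reproduced in this excerpt, the following Python sketch illustrates the underlying paraxial geometry under thin-lens assumptions. The function names, sign conventions, and the similar-triangles construction of the EI centers are illustrative assumptions only, not the patent's actual equations:

```python
def imaged_viewing_window_distance(f_ep, z_xp):
    """Hypothetical thin-lens sketch of z'_xp: the distance from the eyepiece
    group to the image of the viewing window formed by the eyepiece group.
    Assumes simple Gaussian imaging with equivalent focal length f_ep and an
    object-side viewing-window distance z_xp (not Equation (3) verbatim)."""
    return f_ep * z_xp / (z_xp - f_ep)


def elemental_image_center(m, n, lenslet_pitch, g, d_window):
    """Illustrative chief-ray sketch of the lateral center of the (m, n)-th EI.
    The EI center is placed so that the chief ray through the lenslet center
    crosses the optical axis at the (imaged) viewing window, located a distance
    d_window in front of the MLA; g is the display-to-MLA gap. The geometry is
    a hypothetical similar-triangles model, not Equation (4) from the text."""
    x_lens, y_lens = m * lenslet_pitch, n * lenslet_pitch  # lenslet center
    scale = 1.0 + g / d_window  # back-project the chief ray across the gap g
    return x_lens * scale, y_lens * scale
```

For example, with a unit lenslet pitch, a 2 mm gap, and a viewing window 20 mm away, the first off-axis EI center is shifted outward by 10% relative to its lenslet center, reflecting the converging chief-ray geometry described above.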
The footprint size, dv, of the ray bundle from a pixel of an EI projected through the whole optics on the viewing window, which determines the view density or equivalently the total number of views encircled by the eye pupil, can be determined by tracing the ray bundles emitted by the center point of the EI (e.g. the shaded ray bundles in
In Equations (5) and (6), | · | denotes the absolute value.
According to the paraxial geometry, both the footprint diameter and the viewing window size are the same for any of the sub-systems, and the footprints corresponding to the same field object of different sub-systems will intersect on the viewing window so that they share the same coordinates (xv, yv). For example, as mentioned above, the chief ray of the center of each EI will intersect with the optical axis at the center of the viewing window, so that xv0(m, n) and yv0(m, n) both equal zero for any of the sub-systems.
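As a rough illustration of how the footprint size dv relates to view density, the following sketch estimates the number of elemental views encircled by an eye pupil. The area-ratio approximation and the function names are assumptions for illustration; the text only states that dv determines the view density:

```python
import math


def views_in_pupil(d_pupil, d_footprint):
    """Approximate count of elemental views encircled by an eye pupil of
    diameter d_pupil, assuming ray footprints of diameter d_footprint tile
    the viewing window (an illustrative area-ratio estimate)."""
    if d_footprint <= 0:
        raise ValueError("footprint diameter must be positive")
    pupil_area = math.pi * (d_pupil / 2.0) ** 2
    # Treat each footprint as occupying a d_footprint-by-d_footprint cell.
    return math.floor(pupil_area / d_footprint ** 2)
```

Under this sketch, a 4 mm pupil with 2 mm footprints encircles about three views; if pupil aberrations enlarge the footprints enough that fewer than two views are encircled, the system degenerates toward a conventional stereoscopic display, as discussed later in the Ray Directional Sampling Considerations.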
In an InI-HMD system, the EIs are seen as an array of spatially displaced virtual images observed from the viewing window.
In Equations (7) and (8), ZCDP is the distance between the virtual CDP and the viewing window. For a given sub-system unit indexed as (m, n), the lateral coordinates of the paraxial center of the corresponding virtual EI can be expressed as:
Equations (7) and (9) essentially provide the paraxial calculations of the image dimension and center position for each of the sub-systems. The image size of the virtual EIs on the virtual CDP is usually much greater than the displacement of the neighboring virtual EIs so that the virtual EIs on the virtual CDP would partially overlap with each other. As illustrated in
The above steps demonstrate the disclosed methods of modeling an LF-HMD system and analytic methods of calculating the first-order relationships of the system parameters. These steps are different from modeling a conventional 2D HMD and are critical for developing proper optimization strategies.
As stated earlier, the optimization strategy for an LF-HMD needs to properly control and evaluate optical aberrations that degrade the contrast and resolution of the virtual display or distort the geometry of the 2D elemental images, both individually and collectively, to account for the ray position sampling aspects of the light field. The optimization strategy also requires methods and metrics to control and evaluate the optical aberrations that degrade the accuracy of the directional ray sampling.
Ray Positional Sampling Considerations: Optimizing the ray positional sampling of the light field function can be achieved by controlling optical aberrations that degrade the contrast and resolution of the virtual display or distort the geometry of the 2D elemental images. It is helpful to obtain well-imaged EIs on the virtual CDP from the display panel through their corresponding lenslets of the MLA and eyepiece group.
The optimization strategy in accordance with the disclosed embodiments for ray positional sampling is multi-fold, and includes optimizing the imaging process of each EI individually by each of the sub-systems. For example, optimization constraints and performance metrics available in optimizing the 2D image-conjugates for conventional HMDs can be used. The exact constraints and performance metrics vary largely from system to system, heavily depending on the complexity of the optical components utilized in the optical systems for an InI-HMD. Examples of typical constraints include, but are not limited to, the minimum and maximum values of element thickness or the spacings between adjacent elements, the total path length of the system, the allowable component sizes, shapes of each of the optical surfaces, surface sag departures from a reference surface, the type of optical materials to be used, the amount of tolerable aberrations, or the amount of optical power. Examples of performance metrics include, but are not limited to, root-mean-square (RMS) spot size, wavefront errors, the amount of residual aberrations, modulation transfer functions, or acceptable image contrast. With this initial step, the entire FOV of the LF display, composed of the individual EIs, is optimized piecewise instead of being treated as a whole as in conventional HMD designs. Such an individual optimization for each of the EIs, however, overlooks the corresponding connection between the neighboring EIs and, more importantly, the relative positions and sizes of virtual EIs with respect to the total FOV. For an InI-HMD, as shown in
To account for the effects of distortions induced to the EIs locally and globally, two different types of constraints can be applied during optimization. The first constraint is the control of the local distortion aberrations for each of the sub-systems representing a single EI, which can be readily implemented by adopting the distortion-related optimization constraints already available in the optical design software to each zoom configuration. The exact constraints for local distortion control vary largely from system to system, heavily depending on the complexity of the optical components utilized in the optical systems for an InI-HMD. Examples of typical controls on distortion include, but are not limited to, maximum allowable deviation of the image heights of the sampled object fields from their paraxial values, allowable percentile of image height and shape difference from a regular grid, allowable magnification differences of different object fields, or allowable shape deformation of the image from a desired shape. These controls are typically applied as constraints to each sub-system individually to ensure each sub-system forms an image with acceptable local distortion. These local controls of distortion in each sub-system ensure the dimensions and shapes of the virtual EIs remain within a threshold level in comparison to their paraxial non-distorted images.
The second constraint is the control of the global distortion, which is related to the lateral positions of the virtual EIs with respect to the whole FOV of the reconstructed 3D scene. To optimize for this global distortion, the chief ray of the center object field of each EI on the microdisplay ought to be specially traced, and its interception on the virtual CDP needs to be extracted and constrained within a threshold level compared to its paraxial position in global coordinates. For a given sub-system indexed as (m, n), the global deviation of the center position of the virtual EI on the virtual CDP from its paraxial position can be quantified by a metric, GD, along with the corresponding constraints, which can be expressed as:
The GD metric in Equation (11) examines the angular deviation between the real and theoretical position of the chief ray of the center object field measured from the viewing window. For example, as illustrated in
A constraint corresponding to the metric can therefore be created by obtaining the maximum values of the metric for all the EIs through all the sub-systems. By adding the constraint to the optimization process and modifying the value of the constraint, the maximally allowed global distortion can be adjusted and optimized.
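Equation (11) itself is not reproduced in this excerpt, but the description above states that GD measures the angular deviation, seen from the viewing window, between the real and paraxial chief-ray intercepts on the virtual CDP. A small-angle sketch of such a metric and its worst-case constraint follows; the function names and the small-angle form are assumptions:

```python
import math


def global_distortion_metric(real_xy, paraxial_xy, z_cdp):
    """Sketch of a GD-style metric: small-angle angular deviation (radians)
    between the real and paraxial center positions of a virtual EI on the
    virtual CDP, as measured from the viewing window a distance z_cdp away.
    Illustrative only; not Equation (11) verbatim."""
    dx = real_xy[0] - paraxial_xy[0]
    dy = real_xy[1] - paraxial_xy[1]
    return math.hypot(dx, dy) / z_cdp


def max_gd_within_threshold(gd_values, threshold):
    """Constraint sketch: the worst-case GD over all sub-systems (all EIs)
    must stay below a designer-chosen threshold, as described in the text."""
    return max(gd_values) <= threshold
```

For instance, a 5 mm lateral deviation on a virtual CDP placed 1000 mm from the viewing window corresponds to roughly 5 mrad of global distortion under this sketch; tightening the threshold during optimization reduces the maximally allowed global distortion.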
Ray Directional Sampling Considerations: Due to the unique properties of an LF-HMD, the ray directions of the light fields play a very important role in designing such a display system. Incorrect sampling of ray directions will not only affect the integration of the EIs but also potentially lead to an uneven number of elemental views for reconstructed light field targets and thus misrepresented focus cues. In the case of severe pupil aberration, it is even possible that the number of elemental views encircled by a viewer's eye pupil is reduced to fewer than two, so that the system becomes no different from a conventional stereoscopic display system and fails to properly render true light fields.
As noted above, the viewing window is where all the chief rays through the center pixels of all the EIs intersect with the optical axis, as shown in
To optimize the ray directional sampling of light fields in designing LF-HMDs, proper constraints for the footprints of each elemental view projected on the viewing window must be provided. To account for the effects of pupil aberration induced to the ray footprints and directions on the viewing window during optimization, the disclosed optimization strategies (1) extract the exact footprints of the ray bundles from any given pixel of a given EI on the viewing window; and (2) establish metric functions that properly quantify any deviations of the ray footprints from their paraxial shapes and positions so that constraints can be applied during the optimization process to control the deviations within a threshold level.
In some embodiments, for each given object field, four marginal rays are sampled through the lenslet aperture to avoid exhaustive computation time during the optimization process. The coordinates of these marginal rays on the viewing window define the envelope of the ray footprint of a sampled field on a given EI in a given sub-system. For a sampled object field indexed as (i, j) on a given EI corresponding to a sampled sub-system indexed as (m, n), the deformation of the ray footprint from its paraxial shape can be quantified by a metric function, PA, using, for example, the following:
In Equation (12), x′v and y′v are the real positions of the marginal rays on the viewing window obtained via real ray tracing horizontally and vertically, respectively, while xv and yv are their corresponding paraxial positions on the viewing window; k is the index of the four marginal rays for a sampled object field on a given EI corresponding to a sampled sub-system.
The metric PA in Equation (12) quantifies the deformation of the ray footprint of a given ray bundle from its paraxial shape by examining the relative ratio of the average deviated distance between the real and theoretical positions of the marginal rays on the viewing window to the diagonal width of the paraxial footprint. By adding the corresponding constraint to the optimization process and modifying its value, the maximally allowed deviation and deformation of the footprint, or equivalently, the pupil aberration affecting the ray directions of the light field of an InI-HMD, can be adjusted and optimized.
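Following the description of Equation (12) above, a PA-style metric can be sketched as the mean real-versus-paraxial deviation of the four marginal rays on the viewing window, normalized by the diagonal width of the paraxial footprint. The function name and input layout are illustrative; the exact form of Equation (12) is not reproduced in this excerpt:

```python
import math


def pupil_aberration_metric(real_pts, paraxial_pts):
    """Sketch of a PA-style metric for one sampled object field (i, j) in one
    sub-system (m, n): the ratio of the mean deviation between real and
    paraxial marginal-ray positions on the viewing window to the diagonal
    width of the paraxial footprint. real_pts and paraxial_pts are sequences
    of four (x, y) tuples, one per sampled marginal ray (index k)."""
    # Average Euclidean deviation of the k-th real ray from its paraxial twin.
    mean_dev = sum(
        math.hypot(rx - px, ry - py)
        for (rx, ry), (px, py) in zip(real_pts, paraxial_pts)
    ) / len(paraxial_pts)
    # Diagonal width of the paraxial footprint's bounding box.
    xs = [p[0] for p in paraxial_pts]
    ys = [p[1] for p in paraxial_pts]
    diagonal = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    return mean_dev / diagonal
```

A uniformly shifted footprint (pure displacement, no deformation) still yields a nonzero PA under this sketch, consistent with the text's goal of constraining both deviation and deformation of the footprints during optimization.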
In using the above noted metrics, the system design can be carried out to determine optimum (or generally, desired or target) designs that include determinations of distances and angular alignment of components, and sizes of components (e.g., pitch of lenslets, area of lenslets, surface profiles of lenslets and/or eyepiece, focal lengths, and apertures of the lenslet array and/or eyepiece, etc.). The system can further include additional optical elements, such as relay optics, elements for folding or changing the light path, and others, that can be subject to ray tracing and optimization as part of the system design.
At the outset of the design for the example configuration, the MLA and the relay-eyepiece group were optimized separately to obtain good starting points due to the complexity of the system. For the initial iteration of the MLA design, special attention was paid to the marginal rays that were constrained to not surpass the edge of the lenslet to prevent crosstalk among neighboring EIs. The two surfaces of the lenslet were optimized as aspheric polynomials with coefficients up to 6th order. In the initial iteration for the design of the relay and eyepiece group, the design was reversely set up by backward tracing rays from the viewing window toward the eyepiece and relay lenses. Each of the four freeform surfaces of the prism was described by x-plane symmetric XY-polynomials and was optimized with coefficients up to their 10th order.
After obtaining the initial designs of both the MLA and the relay-eyepiece group, the two parts were integrated and an array of 7 by 3 zoom configurations was created.
In experiments, test results of a prototype InI-HMD system designed in accordance with the disclosed technology were obtained by placing the camera at the viewing window and capturing real images of the displayed scene through the system. Test scenes included a slanted wall with water drop texture spanning a depth from around 500 mm (2 diopters) to 1600 mm (0.6 diopters) that was computationally rendered and displayed as the test target. The central 15 by 7 elemental views rendered on the microdisplay were obtained, as well as real captured images of the rendered light fields of such a continuous 3D scene, by adjusting the focal depth of the camera from the near side (˜600 mm), to the middle part (˜1000 mm), and the far side (˜1400 mm) of the scene, respectively, which simulates the adjustment of the eye accommodation from near to far distances. The virtual CDP of the prototype was shifted and fixed at a depth of 750 mm (1.33 diopters). The parts of the 3D scene within the same depth as the camera focus remained in sharp focus with high fidelity compared to the target. In contrast, the other parts of the 3D scene outside of the camera focal depth were blurry; the more the depth of the 3D scene deviated from the camera focus, the blurrier that part of the 3D scene became, which is similar to what we observe from a real-world scene. Such results clearly demonstrate the ability of the prototype designed in accordance with the disclosed embodiments to render high-quality light field contents, and more importantly, to render correct focus cues to drive the accommodation of the viewer's eye.
In one example embodiment, the first metric value quantifies a deformation of the ray footprint of a given ray bundle of the light field from its paraxial footprint. In another example embodiment, the first metric value is determined in accordance with a relative ratio of an average deviated distance between a real and a theoretical position of marginal rays on the second reference plane to a diagonal width of the paraxial footprint. In yet another example embodiment, the first metric value is determined in accordance with Equation (12). For example, the first metric value can be determined based on a difference between real positions of marginal rays on the viewing window obtained by ray tracing and their corresponding paraxial positions on the viewing window.
According to another example embodiment, in the above noted method, adjusting the one or more parameters associated with the InI-based 3D system is carried out to further obtain a second metric value within another predetermined value or range of values, where the second metric value corresponds to a ray positional sampling of the light field that accounts for deformations induced by neighboring elements of at least the arrayed optics. In one example embodiment, the second metric value is determined in accordance with an angular deviation between real and theoretical positions of a chief ray of a center object field measured from the second reference plane. In yet another example embodiment, the second metric value represents a global distortion measure. In still another example embodiment, the second metric value is determined in accordance with Equation (11). For example, the second metric value is computed as a deviation of a center position of a virtual elemental image of the plurality of the elemental images on the virtual CDP from a paraxial position thereof. In another example embodiment, adjusting the one or more parameters associated with the InI-based 3D system is carried out with respect to the ray positional sampling of the light field to additionally optimize imaging of each EI individually.
In one example embodiment, the InI-based 3D system further includes an eyepiece positioned between the arrayed optics and the second reference plane, and tracing the set of rays includes tracing the set of rays through the eyepiece. In some embodiments, the arrayed display device is a microdisplay device. In some embodiments, the arrayed optics comprises one or more lenslet arrays, each including a plurality of microlenses. In another embodiment, the InI-based 3D system is an InI-based head-mounted display (InI-based HMD) system.
In some embodiments, the predetermined value, or range of values, for one or both of the first or the second metric are selected to achieve a particular image quality. In some embodiments, the predetermined value, or range of values, for one or both of the first or the second metric represents a maximum or a minimum that provides an optimum design criterion with respect to the first or the second metric.
The processor(s) 1204 may include central processing units (CPUs) to control the overall operation of, for example, the host computer. In certain embodiments, the processor(s) 1204 accomplish this by executing software or firmware stored in memory 1202. The processor(s) 1204 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), graphics processing units (GPUs), or the like, or a combination of such devices.
The memory 1202 can be or can include the main memory of a computer system. The memory 1202 represents any suitable form of random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices. In use, the memory 1202 may contain, among other things, a set of machine instructions which, when executed by the processor(s) 1204, cause the processor(s) 1204 to perform operations to implement certain aspects of the presently disclosed technology.
It is understood that the various disclosed embodiments may be implemented individually, or collectively, in devices comprised of various optical components, electronics hardware and/or software modules and components. These devices, for example, may comprise a processor, a memory unit, and an interface that are communicatively connected to each other, and may range from desktop and/or laptop computers to mobile devices and the like. The processor and/or controller can perform various disclosed operations based on execution of program code that is stored on a storage medium. The processor and/or controller can, for example, be in communication with at least one memory and with at least one communication unit that enables the exchange of data and information, directly or indirectly, through a communication link with other entities, devices and networks. The communication unit may provide wired and/or wireless communication capabilities in accordance with one or more communication protocols, and therefore it may comprise the proper transmitter/receiver antennas, circuitry and ports, as well as the encoding/decoding capabilities that may be necessary for proper transmission and/or reception of data and other information.
Various information and data processing operations described herein may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVDs), etc. Therefore, the computer-readable media described in the present application comprise non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.
The foregoing description of embodiments has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit embodiments of the present invention to the precise forms disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. The embodiments discussed herein were chosen and described in order to explain the principles and the nature of various embodiments and their practical application, to enable one skilled in the art to utilize the present invention in various embodiments and with various modifications as are suited to the particular use contemplated. While operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. The features of the embodiments described herein may be combined in all possible combinations of methods, apparatus, modules, and systems.
This application claims priority to the U.S. provisional application with Ser. No. 62/885,460 titled “Optical Design and Optimization Techniques for 3D Light Field Displays,” filed Aug. 12, 2019. The entire contents of the above noted provisional application are incorporated by reference as part of the disclosure of this document.
This invention was made with government support under Grant No. 1422653, awarded by NSF. The government has certain rights in the invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2020/045921 | 8/12/2020 | WO |
Number | Date | Country
---|---|---
62885460 | Aug 2019 | US