The present invention relates generally to optical systems and, in particular but not exclusively, to head-mounted displays.
A head mounted display (“HMD”) is a display device worn on or about the head. HMDs usually incorporate some sort of near-to-eye optical system configured to form a virtual image located in front of the viewer. Displays configured for use with a single eye are referred to as monocular HMDs, while displays configured for use with both eyes are referred to as binocular HMDs.
A HMD is one of the key enabling technologies for virtual reality (VR) and augmented reality (AR) systems. HMDs have been developed for a wide range of applications. For instance, a lightweight “optical see-through” HMD (OST-HMD) enables optical superposition of two-dimensional (2D) or three-dimensional (3D) digital information onto a user's direct view of the physical world, and maintains see-through vision to the real world. OST-HMD is viewed as a transformative technology in the digital age, enabling new ways of accessing digital information essential to our daily life. In recent years significant advancements have been made toward the development of high-performance HMD products and several HMD products are commercially deployed.
Despite the progress with HMD technologies, one of the key limitations of the state-of-the-art is the low dynamic range (LDR) of a HMD. The dynamic range of a display or a display unit is commonly defined as the ratio between the brightest and the darkest luminance that the display can produce, or a range of luminance that a display unit can generate.
Most of the state-of-the-art color displays (including HMDs) are only capable of rendering images with 8-bit depth per color channel, or a maximum of 256 discrete intensity levels. Such low dynamic range is far below the broad dynamic range of real-world scenes, which can reach up to 14 orders of magnitude. Meanwhile, the perceivable luminance variation range of the human visual system is known to be above 5 orders of magnitude without adaptation. For immersive VR applications, images produced by, or associated with, LDR HMDs fall short of rendering scenes with large contrast variations. This, of course, may result in loss of fine structural details, and/or loss of high image fidelity, and/or loss of the sense of immersion as far as the user/viewer is concerned. For "optical see-through" AR applications, virtual images displayed by LDR HMDs may appear washed out, with highly compromised spatial details, when merged with a real-world scene, which likely contains a much wider dynamic range, possibly exceeding that of the LDR HMD by several orders of magnitude.
The most common method of displaying a high dynamic range (HDR) image on a conventional LDR display is to adopt a tone-mapping technique, which compresses the HDR image to fit the dynamic range of the LDR device while maintaining the image integrity. Although a tone-mapping technique can make HDR images accessible through conventional displays of nominal dynamic range, such accessibility comes at the cost of reduced image contrast (which is subject to the limit of the device's dynamic range), and it does not prevent the displayed images from being washed out in an AR display.
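For illustration only (the disclosure does not prescribe a particular operator), a global tone-mapping step can be sketched as follows, here using the well-known Reinhard operator L/(1+L) to compress HDR luminance into an 8-bit command range:

```python
import numpy as np

def reinhard_tonemap(luminance, bit_depth=8):
    """Compress HDR luminance into the [0, 2**bit_depth - 1] command range
    of an LDR display using the global Reinhard operator L / (1 + L)."""
    ldr = luminance / (1.0 + luminance)        # maps [0, inf) into [0, 1)
    levels = 2 ** bit_depth - 1
    return np.round(ldr * levels).astype(np.uint16)

# A scene spanning ~6 orders of magnitude is squeezed into 256 levels.
hdr = np.array([1e-3, 1.0, 1e3])
print(reinhard_tonemap(hdr))
```

Note how the six-orders-of-magnitude input collapses onto a handful of widely spaced command levels, which is precisely the contrast loss the passage above describes.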
Therefore, developing hardware solutions for HDR-HMD technologies becomes very important, especially for AR applications.
Accordingly, in one of its aspects the present invention may provide a display system having an axis and comprising first and second display layers, and an optical system disposed between said first and second display layers, the optical system configured to form an optical image of a first predefined area of the first display layer on a second predefined area of the second layer. As used in this context, "on a second predefined area of the second layer" may include that the optical system is configured to form an optical image of said second area on said first area, or that the second display layer is spatially separated from a plane that is optically-conjugate to a plane of the first display layer. The optical system may be configured to establish a unique one-to-one imaging correspondence between the first and second areas.
At least one of the first and second display layers may be a pixelated display layer, and the first area may include a first group of pixels of the first display layer, the second area may include a second group of pixels of the second display layer, where the first and second areas may be optical conjugates of one another. The first display layer may have a first dynamic range, and the second display layer may have a second dynamic range. The display system may have a system dynamic range whose value is the product of the values of the first and second dynamic ranges. Further, the optical system may be configured to image said first area onto said second area with a unit lateral magnification.
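As a numerical sketch of the product relationship (an illustration, not taken from the disclosure): cascading two 8-bit layers, each offering 256 addressable transmittance levels, yields 256 × 256 = 65,536 levels, i.e. an effective 16-bit system dynamic range:

```python
# Sketch: combined dynamic range of two cascaded modulation layers.
# Each layer attenuates light by a fraction quantized to 2**bits steps;
# the system transmittance is the product of the layer transmittances.

def system_levels(bits_layer1, bits_layer2):
    """Number of addressable intensity levels of the cascaded system."""
    return (2 ** bits_layer1) * (2 ** bits_layer2)

def system_bit_depth(bits_layer1, bits_layer2):
    """Equivalent bit depth of the cascaded system."""
    return bits_layer1 + bits_layer2

print(system_levels(8, 8))     # 65536 levels
print(system_bit_depth(8, 8))  # 16-bit effective depth
```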
The display system may be a head mounted display and may include a light source disposed in optical communication with the first display layer. The first display layer may be configured to modulate the light received from the source, and the second display layer may be configured to receive the modulated light from the first display layer, with the second display layer configured to further modulate the received light. The display system may also include an eyepiece for receiving the modulated light from the second display layer. One or both of the first and second display layers may include a reflective spatial light modulator, such as an LCoS. Alternatively, or additionally, one or both of the first and second display layers may include a transmissive spatial light modulator, such as an LCD. Further, the optical system may be telecentric at one or both of the first display layer and the second display layer. Typically, the optical system between the first and second display layers may be an optical relay system.
The foregoing summary and the following detailed description of exemplary embodiments of the present invention may be further understood when read in conjunction with the appended drawings, in which like elements are numbered alike throughout:
The present inventors recognize that art in the field of high dynamic range (HDR) displays for direct-view desktop applications has discussed some hardware solutions, and perhaps the most straightforward method of achieving an HDR display is to attempt to increase the maximum practically-displayable luminance level and to increase the addressable bit-depth for each of the color channels of the display pixels. The present inventors recognize that this approach, however, requires high-amplitude, high-resolution drive electronic circuits as well as light sources possessing high luminance, both of which are not easy to implement at practically-reasonable cost. In accordance with the present invention, another method may be employed—to combine two or more device layers—for example, layers of spatial light modulators (SLMs)—to be able to simultaneously control the light output produced by pixels. In the spirit of this approach, the present inventors have conceived of use of art relating to an HDR display schematic for direct-view desktop displays, which was based on a dual-layer spatial light-modulating scheme. Differing from conventional liquid-crystal displays (LCDs) that utilize uniform backlighting, this solution employed a projector to provide a spatially-modulated light source for a transmissive LCD, in order to achieve dual-layer modulation and a 16-bit dynamic range with two 8-bit SLMs. This solution also demonstrated an alternative implementation of the dual-layer modulation scheme, in which an LED array, driven by spatially-varying electrical signals, was used to replace the projector unit and provided a spatially-varying light source to an LCD.
While one could think that the aforementioned multi-layer modulation scheme developed specifically for direct-view desktop displays can be adopted to the design of an HDR-HMD system—for example, by directly stacking two or more miniature SLM layers (along with a backlight source and an eyepiece), the present inventors have discovered that practical attempts to do so convincingly prove that such “direct stacking of multiple layers of SLMs” exhibits several critical structural and operational shortcomings, which severely limit an HDR-HMD system, making the so-structured HDR-HMD system practically meaningless.
To illustrate the practical problem(s) that persist in related art, upon reviewing the teachings of the present patent application, a person of skill would appreciate that (in reference to
An LED array approach would also be readily understood as substantially impractical, not only because of the spatial separation between the layers, but also due to the limited resolution of an LED array. The common microdisplays used for HMDs are less than an inch diagonally (sometimes only a few millimeters) with very high pixel density, and thus only a few LEDs can fit within this size, which makes spatially-varying light-source modulation impractical.
Implementations of the idea of the present invention address these shortcomings, and, in contradistinction with related art, make the multi-layer configuration of the HDR-HMD system not only possible but functionally advantageous. Specifically, for example, in various of its aspects the present invention may address the following:
For the purposes of the following disclosure and unless expressly specified otherwise:
Generally, implementations of HMD optical systems 10, 15 according to the idea of the invention includes two sub-systems or parts—an HDR display engine 12 and an (optional) HMD viewing optics 14, 16 (such as an eyepiece or optical combiner),
The HDR display engine 12 can be optically coupled with different types and configurations of the viewing optics 14, 16. Following the classification of conventional head mounted displays, the HDR-HMD system 10, 15 can be generally classified under two types: the immersive (
Throughout this disclosure, for convenience and simplicity of illustration and discussion, the (optional) viewing optics sub-system of an HDR-HMD is shown in the following as a single lens element, while it is of course intended and appreciated that various complex configurations of the viewing optics can be employed. The basic principle implemented in the construction of an HDR display engine is to use one spatial light modulator (SLM) or layer to modulate another SLM or layer.
The most straightforward approach to achieving simultaneous multiple-layer modulation is to stack multiple transmissive SLMs 11 (or LCD1/LCD2) in front of an illumination source, such as backlight 13, as shown in
An advantage of the configuration of
However, the HDR display engine 17, 19 employing the simply-stacked LCDs possesses clear limitations. The basic structure of an LCD is known to include a liquid crystal layer between two glass plates with polarizing filters. The light-modulating mechanism of an LCD is to induce rotation of the polarization vector of the incident light by electrically driving the orientation of the liquid crystal molecules, and then to filter light of a certain state of polarization with the use of a linear and/or circular polarizer. The incident light is inevitably filtered and absorbed while transmitting through an LCD. The polarizing filters absorb at least half of the incident light during transmission, even in the "on" state of the device (characterized by maximum light transmittance), causing significant reduction of light throughput. The typical optical efficiency of an active matrix LCD is even smaller, less than 15%. In addition, the transmissive LCD has difficulties producing dark and very dark "gray levels", which leads to a relatively narrow range of contrast that the transmissive LCD can demonstrate. Although the setup of
In order to increase the light efficiency and contrast ratio of a multilayer HDR display engine in accordance with the present invention, a reflective SLM, such as a liquid crystal on Silicon (LCoS) panel or a digital micromirror device (DMD) panel, can be used in combination with a transmissive SLM, such as an LCD. LCoS is a reflective-type LC display, which uses a silicon wafer as a driving backplane and modulates light intensity in reflection. Specifically, a liquid crystal material can be used to form a coating over a silicon CMOS chip, in which case the CMOS chip acts as the reflective surface with a polarizer and liquid crystal on its top cover. The LCoS-based display has several advantages over the transmissive LCD-based one. First, a reflective-type microdisplay has higher modulation efficiency and higher contrast ratio as compared with the transmissive type (LCD-based), which loses a large portion of efficiency during the transmission of light. Second, due to the higher density of electronic circuitry in the back of the substrate, LCoS tends to have a relatively high fill factor, and typically has a smaller pixel size (that can be as small as a few microns). Besides, LCoS is easier and less expensive to manufacture than an LCD.
Due to the reflective nature of LCoS, the structure of stacked-together LCoS-based SLMs is no longer feasible. Indeed, LCoS is not a self-emissive microdisplay element and, therefore, efficient external illumination of this element is required for operation. Furthermore, light modulation with the use of LCoS is achieved by manipulating the light retardance through switching the orientation of the liquid crystal and then filtering the light with a polarizer. In order to obtain higher light efficiency and contrast ratio, a polarizer should be employed right after the light source, to obtain polarized illumination. Separating the incident and reflected light presents another practical issue. A polarized beam splitter (PBS) can be used in this embodiment to split the input light and the modulated light and redirect them along different paths.
Although implementations of
To further increase light efficiency and contrast ratio provided by multi-layer display units in accordance with the present invention, two reflective SLM layers, such as LCoS or DMD panels, can be adopted in a single HDR display. The schematic layout of the double LCoS configuration is shown in
Taking the HDR display engine 130 of
The optical path length within the HDR display engines 130, 140 of
HDR Display Engine: Two Modulation Layers with a Relay System In-Between
While the setups discussed above may be capable of displaying images with a dynamic range that exceeds the dynamic range corresponding to 8 bits, the limitation on the maximum dynamic range value that can be achieved with these setups is imposed by the finite distance between the two SLM layers, e.g., LCoS 114/LCD 116, LCoS1 114/LCoS2 115. In reference to
According to an idea of the invention, and to address the problems accompanying the embodiments of the above-discussed examples, e.g., those of
To make the operation of an LC of a display layer most efficient, it may be preferred to make the optical relay system of choice telecentric in both image space and object space, so that, in the geometrical approximation, the cone of light emitted by a point at SLM1 converges to one point on SLM2 and vice versa, thereby imaging SLM1 and SLM2 onto one another across the relay system. As a result, a one-to-one spatial mapping between the pixels of the display layers is achieved, which avoids modulation crosstalk. Further, as a result of the operation of such a telecentric configuration, when the "intermediate image" formed at SLM1 is optically relayed to a plane optically-conjugate with the plane of SLM1, the "intermediate image" plane is effectively repositioned towards and closer to the viewing optics, which reduces the required back focal distance (BFD) of the viewing optics.
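The double-telecentric, unit-magnification relay described above can be illustrated with a first-order (paraxial) ray-transfer sketch of a 4f relay; this is a hypothetical textbook example, not the disclosed design. In the resulting system matrix, B = 0 expresses the imaging condition, A = −1 the unit inverting magnification, and C = 0 means the system is afocal, hence telecentric in both spaces:

```python
import numpy as np

def thin_lens(f):
    """Ray-transfer (ABCD) matrix of a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def free_space(d):
    """Ray-transfer matrix of free-space propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def four_f_relay(f):
    """System matrix of a 4f relay: f - lens - 2f - lens - f.
    Matrices compose right-to-left along the propagation direction."""
    return (free_space(f) @ thin_lens(f) @ free_space(2 * f)
            @ thin_lens(f) @ free_space(f))

M = four_f_relay(50.0)
print(M)  # [[-1, 0], [0, -1]]: B = 0 (imaging), A = -1 (unit magnification)
```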
When the physical location of the SLM2 display layer is chosen to be at a plane that is optically-conjugate with the SLM1 layer, then, under the condition of one-to-one pixel imaging discussed above, the overall dynamic range of the display engine containing these SLM1 and SLM2 layers separated by the optical relay system is maximized and equal to the maximum dynamic range achievable in this situation—that is, the product of the dynamic ranges of the individual SLM1, SLM2 layers.
In further reference to
The above idea of
Just as the light engines mentioned in connection with Example 1, the light engines for Example 4 could include a complex illumination unit to provide uniform illumination, or just a single LED with a polarizer for system capacity, simplicity, low energy consumption, small size and long lifetime. For an LCoS-LCD HDR engine 150 in accordance with the present invention, light 112a emitted by an LED may be manipulated to be S-polarized, so that the illumination light would be reflected by a PBS 113 and incident onto LCoS 114,
By folding the light path twice, compact HDR display engines 150, 160 with reflective LCoS 114 and transmissive LCD 116 as the SLMs are provided in accordance with the present invention. Compared with the stacked-LCDs HDR engine, such as those of
To further increase the system light efficiency, two LCoS panels with a double-path relay architecture are provided in accordance with the present invention, with
The advantage of this configuration is that it does not require a long back focal distance for the eyepiece design, as the intermediate image is relayed to a location outside of the HDR display engine. The distance between the image and the viewing optics can be as small as a few millimeters. Nevertheless, although this configuration places loose requirements on the viewing optics, the relay optics needs to have superb performance, since the LCoS1 114 image needs to be reimaged twice, which introduces wavefront error twice along the doubled image path. Compared with all the former setups, the intermediate image quality would not be as good as in the other configurations, since the image of each SLM is relayed once more, which would introduce even more wavefront deformation. The residual aberrations would have to be corrected by the viewing optics if the relay optics does not have ideal performance.
However, although the system performance gets better with a single relay pass, the back focal distance of the viewing optics needs to be long, as the intermediate image is located on LCoS2, which is inside the HDR engine. The back focal distance highly depends on the dimensions of the PBS, as well as on the system NA. This limits the configuration of the viewing optics and increases the difficulty of the viewing-optics design.
The advantage of this setup is that the system can be quite compact, because it not only folds the light path with the cubic PBS, it also truncates the relay system to only half of its original length. However, the disadvantage of this configuration is that it requires a long back focal distance for the viewing optics (EYEPIECE), which, as previously mentioned, brings more difficulties for the viewing-optics design.
Table 1 summarizes the major characteristics of the different HDR display engine designs. One can see the tradeoff between the viewing-optics BFD and the HDR engine optics performance: although introducing relay optics can relocate the intermediate image position, it also brings in aberrations. The light efficiency is significantly improved by introducing reflective-type SLMs. The modulation ability, which represents the real contrast-ratio expansion, is traded off against the alignment precision: minimizing the diffraction effects of the microdisplays diminishes the overlapped diffraction area and improves the modulation ability, but it also requires high-precision alignment of the corresponding pixels on the two SLMs. Overall, each design has its own advantages and drawbacks. The selection of the HDR display engine for a specific HMD system should depend on the overall system specifications, such as system compactness, illumination type, FOV, etc.
Before showing the example of the disclosed embodiment of the invention in detail, it is worth noting that this invention is not limited to this particular application and arrangement, because this invention is also applicable to other embodiments.
It would be helpful to show the meaning of some words used herein:
HDR—high dynamic range
HMD—head mounted display
SLM—spatial light modulator
EFL—effective focal length
FOV—field of view
NA—numerical aperture, F/#—f-number
LCoS—liquid crystal on Silicon, LCD—liquid crystal display
PBS—polarized beam splitter, AR coating—anti-reflection coating
RGB LED—RGB light emitting diode, FLC—Ferroelectric liquid crystal
WGF—wire grid film
OPD—optical path difference
MTF—modulation transfer function
The SLMs used in this specific embodiment were FLCoS (Ferroelectric LCoS) manufactured by CITIZEN FINEDEVICE CO., LTD, having a Quad VGA format with a resolution of 1280×960. The panel active area was 8.127×6.095 mm, with 10.16 mm in diagonal. The pixel size was 6.35 μm. The ferroelectric liquid crystal used a liquid crystal with a chiral smectic C phase, which exhibits ferroelectric properties with very short switching time. Thus, it is capable of high-fidelity color-sequential operation at a very fast frame rate (60 Hz). A time-sequential RGB LED was synchronized with the FLCoS to offer sequential illumination. The WGF covered the top of the FLCoS panel with a certain curvature, to offer uniform illumination and to separate the illumination light from the emerging light.
A cubic PBS was used in the design.
A double-telecentric relay system with unit magnification was designed in the HDR display engine system,
The specification of the HDR display engine design can be determined based on all of the aforementioned analysis. The LCoS has a diagonal size of 10.16 mm, which corresponds to ±5.08 mm of full field. Object heights of 0 mm, 3.5 mm and 5.08 mm were sampled for optimization. The viewing angle of the LCoS is ±10°. The object-space NA was set to be 0.125 and can be enlarged to 0.176. The system magnification was set to be −1, with a double-telecentric configuration. The distortion was set to be less than 3.2%, and residual distortion can be corrected digitally thereafter. The sampled wavelengths were 656 nm, 587 nm and 486 nm with equal weighting factors. Table 3 shows the summary of the system design specification. Also, off-the-shelf lenses were preferred in this design.
To reduce aberrations even further, two meniscus-shaped off-the-shelf singlets were also provided between the PBS and doublet 2, and between the Stop and doublet 3 respectively, see Table 4. The shape, orientation and position of the singlets were nearly mirror symmetric with respect to the aperture stop, for the sake of controlling odd aberrations, like coma and distortion, of the system. The remaining five singlet elements were set to be variable in shape, thickness and radius as shown in Table 4. For the purpose of matching with stock lenses, these elements were constrained to have the most common shapes and materials during global optimization.
The opto-mechanical design for the HDR display engine was also proposed in this invention. A particular feature of the mechanical part was a tunable aperture at the location of the aperture stop. This part could easily be taken in and out of a groove with a handle. By adding a smaller or larger aperture onto this element, the system NA could be changed from 0.125 to 0.176, to seek an optimal balance between system throughput and performance. These mechanical parts were then manufactured by 3-D printing techniques.
After the HDR HMD system implementation, a HDR image rendering algorithm was developed,
The calibration and rendering algorithm of the radiant parameters is performed to pursue proper radiance distributions and pixel values. As an HDR image actually stores absolute luminance values rather than grayscale levels, the display tone-mapping curve needs to be calibrated to properly display the image. Furthermore, due to the optics and the illumination distribution, there might be some inherently uneven radiance distribution, which should be measured and corrected a priori. More importantly, the HDR raw image data should be separated into two individual images shown on the two FLCoS panels. Based on the configuration analysis of the prototype of
Although two LCoS of the prototype of
To fully understand how the image was distorted and deviated, we should first determine the image-forming light path for each LCoS.
C1=P1RRD1L1 and C2=P2RD2L2 (1)
where L1 and L2 are the undistorted original images; D1 and D2 are the distortions introduced along the whole image-forming light path; R is a reflection (the reflections need to be considered due to the parity change of the image); P1 and P2 are the projection relations from the 3-D global coordinates to the 2-D camera frame; and C1 and C2 are the images captured by the camera.
To optically overlap C1 and C2, the two equations above should be algebraically equivalent. We can conclude that, besides considering the parity change caused by reflection, the projection matrices P and distortion coefficients D of each LCoS should be calibrated, in order to obtain the 2-D projection equivalence.
The geometric calibration was based on the HMD calibration method of Lee S. and Hua H. (Journal of Display Technology, 2015, 11(10): 845-853). The distortion coefficients and intermediate image positions were calibrated with a machine vision camera placed at the exit pupil of the eyepiece, which should also be the position of the viewer's eye. To obtain the relationship between an original image point and the corresponding point distorted by the HMD optics, the camera intrinsic parameters and distortions should be calibrated first, for the sake of removing the influence of the camera. We calibrated these parameters by using the camera calibration toolbox discussed by Zhang Z. (Flexible camera calibration by viewing a plane from unknown orientations, Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, 1: 666-673), taking a series of checkerboard patterns at unknown orientations, extracting the corner-point positions and fitting them with the expected values. The rigid-body transformation should hold between the original sampled image and the distorted image after eliminating the effects of the camera distortion. The distortion coefficients and image coordinates could then be estimated based on the perspective projection model. The process of the HDR HMD geometric calibration is shown in
In order to get the viewed images perfectly overlapped, the HDR image alignment algorithm should be adopted to pre-warp the original images digitally, based on the calibrated results. The flow chart of how the algorithm works is shown in
To correct the projection position, the pinhole camera model was used for simplicity. In order to overlap projected images on camera position, the transformation matrix was derived based on at least four projection points in global coordinate system. For each LCoS2 point (l,n,p), the corresponding projection point on LCoS1 (xg, yg, zg) could be calculated by the parametric equation:
where (A, B, C) is the normal direction of LCoS1 with respect to the camera, and t is the projection parameter.
In the 2-D projection plane, the original and the projected positions are associated by the projective transformation matrix H:
Note that (x,y) and (x′,y′) are the local coordinates on the projection plane. Then, for the homogeneous solution of the homography, the elements of the 3-by-3 transformation matrix H should be calculated by:
where h11˜h32 are the elements of the transformation matrix and h33=1. The subscripts of (x,y) and (x′,y′) denote different sampled points. They all denote local coordinates in the projection plane, and can be calculated by coordinate transformation from their corresponding global coordinates. By using the transformation matrix and adopting an appropriate interpolation method, the projected images can be rendered, as in the image shown in the right column of
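The homography solve outlined above (eight unknowns h11˜h32 with h33 = 1, recovered from at least four point correspondences) can be sketched as follows; the least-squares solver and the point data here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate the 3x3 projective transformation H (with h33 = 1) mapping
    src_pts (x, y) to dst_pts (x', y') from at least 4 correspondences.

    Each correspondence contributes two linear equations in h11..h32:
        x' = (h11 x + h12 y + h13) / (h31 x + h32 y + 1)
        y' = (h21 x + h22 y + h23) / (h31 x + h32 y + 1)
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    """Map a point through H, dividing by the homogeneous coordinate."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

For instance, fitting four corners of a unit square to the same square shifted by (2, 3) recovers a pure translation, and any interior point maps accordingly.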
The second camera-based calibration was performed after the homography correction (
Since LCoS2 of
The residual alignment error of the prototype should be analyzed for evaluating the aligning performance. To do this, the local image projected coordinates on the camera view should be appropriately sampled and extracted for comparison. In this experiment, either the checkerboard pattern or circular pattern could be used in the error analysis, as shown in
Before discussing the radiance calibration and rendering algorithm for the HDR HMD, it should be noted that normal image formats with 8-bit depth no longer offer a wide enough dynamic range for rendering an HDR scene on the proposed HDR HMD, which has the capability to reproduce images with 16-bit depth. Thus, an HDR imaging technique should be employed for acquiring 16-bit-depth raw image data. One common method to generate an HDR image is to capture multiple low-dynamic-range images of the same scene at different exposure times or aperture stops. The extended-dynamic-range photograph is then generated from those images and stored in an HDR format, i.e., a format that stores absolute luminance values rather than 8-bit command levels. The HDR images used in the following were generated with this method. The HDR image production procedure is not the main part of the invention and thus will not be discussed in more detail.
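As an illustrative sketch of the multiple-exposure merging mentioned above (the disclosure does not prescribe a specific merging algorithm; the hat-shaped weighting and the assumption of a linear sensor response are choices made here for illustration):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge LDR captures of the same scene, taken at different exposure
    times, into a relative radiance map via a weighted average.

    images: list of float arrays with values in [0, 1], assumed to have a
    linear sensor response. Mid-tone pixels are trusted most; pixels near
    0 or 1 (underexposed/clipped) are down-weighted.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: peaks at 0.5
        num += w * img / t                   # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-12)
```

With a pixel of true radiance 0.5 captured at exposures 1.0 and 0.5 (readings 0.5 and 0.25), the weighted estimates agree and the merged value is 0.5.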
In order to display HDR images with the desired luminance, the tone response curve for each microdisplay should be calibrated, for converting absolute luminance to pixel values. A spectroradiometer was used in this step, which can analyze both spectrum and luminance within a narrow acceptance angle. It was positioned at the center of the exit pupil of the eyepiece so as to measure the radiance when viewing each microdisplay. In order to get the response plots for each LCoS, a series of pure-colored red, green and blue targets with equal grayscale differences were displayed on each microdisplay as the sampled grayscale values for the measurements. The XYZ tristimulus values for each grayscale could be calibrated by the spectroradiometer, then translated to RGB values, and normalized to get the response curve for each color, based on the equation:
To eliminate the effects of the background noise, the tristimulus value (X0,Y0,Z0) corresponding to [R G B]=[0 0 0] should be calibrated and subtracted from each data point, as per equation 5. The response curves for the two SLMs were calibrated separately, with target images shown on the LCoS under test while keeping the other at full reflection (maximum value [R G B]=[255 255 255]). The tone response curve was then interpolated from the sampled values using a piecewise cubic polynomial, as shown in
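A sketch of the dark-level subtraction and tone-curve inversion described above, with hypothetical measurement data; linear interpolation stands in here for the piecewise cubic polynomial used in the disclosure:

```python
import numpy as np

def calibrate_tone_curve(gray_samples, luminance_samples, dark_level):
    """Build a normalized tone-response curve from sampled (grayscale,
    measured-luminance) pairs, subtracting the background measured at
    [R G B] = [0 0 0] as the dark-level correction."""
    corrected = np.asarray(luminance_samples, float) - dark_level
    corrected = np.clip(corrected, 0.0, None)
    return corrected / corrected.max()          # normalize to [0, 1]

def luminance_to_gray(target, gray_samples, response):
    """Invert the tone curve: find the grayscale command producing the
    requested normalized luminance (linear interpolation between samples)."""
    return float(np.interp(target, response, gray_samples))

# Hypothetical 5-point measurement of one color channel; the 0.5 offset
# models the background reading at zero command level.
grays = np.array([0, 64, 128, 192, 255], float)
meas = np.array([0.5, 3.0, 12.0, 40.0, 100.5])
resp = calibrate_tone_curve(grays, meas, dark_level=0.5)
print(luminance_to_gray(0.25, grays, resp))
```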
In order to render the desired image grayscale, another requisite calibration was the HMD intrinsic field-dependent luminance calibration. Due to the effects of optics vignetting, camera sensation, and backlighting non-uniformity, the image radiance may not be evenly distributed over the whole field of view. Even when uniform values are shown on the microdisplays, it is practically not possible to see uniform brightness across the whole FOV because of these internal artifacts. Therefore, all these miscellaneous artifacts should be corrected during the image rendering procedure.
Directly measuring the radiance over the whole field was not feasible, since the acceptance angle of the spectroradiometer was narrow and it was hard to accurately control its direction during measurements. Thus, a camera-based field-dependent radiance calibration was adopted. The procedure is shown in
Before uniformity correction (
However, it should be noticed that the uniformity correction sacrifices the command level of the central-field pixels to improve the uniformity at the SLM panel (or panel display). The HDR engine might lose its effectiveness to some extent if the command levels are truncated too much. Thus, in the algorithm, a clipping factor may be provided to the user to select an appropriate tradeoff between uniformity and system dynamic range.
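The uniformity correction with a user-selectable clipping factor might be sketched as follows; the flat-field map and the exact clipping semantics are assumptions made for illustration:

```python
import numpy as np

def uniformity_correct(image, flat_field, clip_factor=0.5):
    """Divide out a measured flat-field (field-dependent radiance) map.

    flat_field is normalized to its maximum; dividing by it boosts dim
    field positions at the cost of headroom at the brightest (typically
    central) pixels. clip_factor floors the gain map, limiting the boost
    to 1/clip_factor and trading residual non-uniformity for preserved
    command-level range.
    """
    gain_map = flat_field / flat_field.max()          # in (0, 1]
    gain_map = np.maximum(gain_map, clip_factor)      # cap the boost
    return np.clip(image / gain_map, 0.0, 1.0)
```

For example, with a field twice as dim at the edge as allowed by a clip factor of 0.5, an edge pixel is boosted by the maximum factor of 2 while the center is untouched.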
As we split each pixel's modulation equally between the two SLMs, the command level of each pixel on the two LCoS panels needs to be re-calculated. However, even if we desire to distribute the pixel value equally between the two SLMs, this process is not simply a matter of taking the square root of the original image value. The microdisplay has a non-linear tone response curve, as we calibrated and as shown in
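The equal-split rendering described above can be sketched as follows; a gamma model stands in for the calibrated tone response curve (an assumption for illustration), so the square root is taken in the linear-luminance domain and each panel's response is then inverted separately:

```python
import numpy as np

def split_to_two_slms(hdr_linear, gamma1=2.2, gamma2=2.2):
    """Split a normalized linear-luminance HDR frame into command values
    for two cascaded SLMs.

    The cascaded linear transmittance is t1 * t2; distributing it equally
    means t1 = t2 = sqrt(L). The square root is taken in the LINEAR
    domain; each panel's (hypothetical, gamma-modeled) tone response is
    then inverted separately to obtain 8-bit command values.
    """
    t = np.sqrt(np.clip(hdr_linear, 0.0, 1.0))       # equal linear split
    cmd1 = np.round(255 * t ** (1.0 / gamma1))       # invert panel 1 response
    cmd2 = np.round(255 * t ** (1.0 / gamma2))       # invert panel 2 response
    return cmd1.astype(np.uint8), cmd2.astype(np.uint8)
```

Note that a naive square root of the 8-bit command value, rather than of the linear luminance, would double-count the panels' non-linear response, which is exactly the pitfall the passage above warns about.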
The LCoS2 image should be rendered as a compensation to the LCoS1 image. Because of the physical separation of the two microdisplay panels, the LCoS1 image plane would have some displacement from the system reference image plane, which was set at the position of LCoS2 in
Δz is the displacement between LCoS1 and the reference image position; r is the radial distance; λ is the wavelength; ρ is the normalized integral variable on exit pupil; a is the half angle of the diffraction cone. The actual LCoS1 defocused image at the reference image plane can be treated as the original image convolved with its point spread function (
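The defocus modeling above can be sketched as a convolution with a point spread function; here a Gaussian PSF is used as a hypothetical stand-in for the diffraction-integral PSF of the disclosure:

```python
import numpy as np

def gaussian_psf(radius_px, size=9):
    """Gaussian stand-in for the defocus point spread function (the
    disclosure uses a diffraction integral; a Gaussian of comparable
    blur radius is a common simplifying approximation)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * radius_px**2))
    return psf / psf.sum()                       # unit energy

def defocused_image(image, psf):
    """Model the defocused LCoS1 image at the reference plane as the
    original image convolved with the PSF (direct 2-D convolution with
    edge-replicated padding)."""
    ih, iw = image.shape
    ph, pw = psf.shape
    pad = np.pad(image, ((ph // 2,) * 2, (pw // 2,) * 2), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(ph):
        for dx in range(pw):
            out += psf[dy, dx] * pad[dy:dy + ih, dx:dx + iw]
    return out
```

Since the PSF integrates to one, a uniform image passes through unchanged, while edges and fine structure are blurred in proportion to the displacement-dependent blur radius.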
An optional rendering procedure may be used to redistribute the image spatial frequency. It is not necessary for the relayed HDR HMD system, as the pixel on each display has one-to-one imaging correspondence. However, distributing spatial frequencies with different weighting onto two microdisplays may leave more alignment tolerance. Moreover, for the non-relayed HDR display engine which has one SLM nearer to and another SLM farther from the nominal image plane, weighting higher spatial frequency information on the microdisplay closer to the image plane might increase the overall image quality.
A number of patent and non-patent publications are cited herein; the entire disclosure of each of these publications is incorporated by reference herein.
These and other advantages of the present invention will be apparent to those skilled in the art from the foregoing specification. Accordingly, it will be recognized by those skilled in the art that changes or modifications may be made to the above-described embodiments without departing from the broad inventive concepts of the invention. It should therefore be understood that this invention is not limited to the particular embodiments described herein, but is intended to include all changes and modifications that are within the scope and spirit of the invention as set forth in the claims.
This application claims the benefit of priority of U.S. Provisional Application No. 62/508,202, filed on May 18, 2017, the entire contents of which application(s) are incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US18/33430 | 5/18/2018 | WO | 00
Number | Date | Country
---|---|---
62508202 | May 2017 | US