Optical See-Through Head Worn Display

Abstract
An augmented reality head worn device comprising a curved combiner transparent to visible light and reflective in a selected infrared frequency range, a scanning light source defining a field of view, a holographic optical element adapted to provide pupil expansion to create an eye-box, and at least one projection system for providing high acuity at or near a center of the field of view.
Description
FIELD OF THE INVENTION

The present invention relates to head worn displays and in particular to optical see-through head worn devices.


BACKGROUND OF THE INVENTION

The most rapid transfer of information to humans is through vision. Head mounted displays are a modality of human-computer interface associated with vision. Head worn display devices are well known. They are display devices worn on the head, sometimes attached to a helmet. They may also be mounted on, or be a part of, a visor, goggles or eyeglasses. Head worn displays can operate in either of two modes. In “augmented reality” mode the display is see-through, and the display imagery is superimposed upon natural vision. In “virtual reality” mode, the display is occluding and blocks the view of the local scenery, entirely replacing it with displayed imagery. The performance of current head worn displays is limited compared to typical human visual capability, and current head worn display devices have serious ergonomic issues that significantly handicap the wearer. Examples of prior art designs for head worn display devices include a device with a goggle format proposed by William Schonlau and described in the SPIE papers “Personal Viewer; a wide field, low profile, see-through eyewear display”, SPIE Vol. 5443, 2004 and “Immersive Viewing Engine”, SPIE Vol. 6224, 2006. This device is a retinal scanning display based head mounted display with a curved primary mirror in front of the eye.


Augmented Reality

Augmented reality head worn displays generally fall into three classes:

    • 1. Video-relay augmented reality, in which a displayed scene is generated by cameras placed in front of the two eyes (or in close proximity to that location). The displayed scene is altered by superposition of augmented reality content.
    • 2. Digital night vision displayed in an optical see-through head-worn display, in which the see-through scene is overlaid with the same scenes imaged with head-mounted low-light imaging sensors and/or imaging sensors operating in visible and non-visible wavelength bands.
    • 3. Conventional augmented reality head-worn displays in which isolated objects, imagery, text, or symbology is superposed over the wearer's view of the real world. The superposed augmented reality content is often affixed to locations in the real world, but may be designed to be in a specific relative motion to the real world.


U.S. Pat. No. 9,529,191, awarded to two of the present inventors, describes prior art embodiments of both virtual reality and augmented reality HWDs including a special dynamic foveal vision display. The teachings of this patent are incorporated by reference in this provisional patent application.


The ultimate augmented reality (AR) head worn display (HWD) overlays digital image/video content onto the user's see-through vision of the real world. In this way, the experience includes a transparent, undisturbed view of the environment, as well as displayed wide field-of-view (FOV) color video content with which the user can interact. Interaction, while not necessary, can enhance and control the user's experience. This is by far the most commercially desirable class of HWDs but also the hardest to design and manufacture. It contrasts with video-relay AR displays, which merge a live video feed recorded from the user's point-of-view with augmented video content. In these less ideal systems, the user sees the merged video feed through a display and does not directly see the external world. This is very similar to virtual reality (VR), with the main difference that in VR the user can only see the virtual reality environment through the video display system. The drawbacks of video-relay AR limit the applications of that technology.


An ideal AR HWD has a light-weight goggle or eye-glass form-factor that seamlessly blends real world and artificial experiences without compromising the presentation quality of either. The user of such a system should not feel any discomfort, eye fatigue, or added weight. In fact, the high comfort level enables the user to become unaware they are wearing the device, allowing it to be worn for extended periods of time. The AR content interaction and comfortable form of the HWD should incline users toward its daily use.


The device will feature binocular color vision with automatic adjustment to the anatomical differences in head size, eye separation, eye alignment, and eye relief within a large population of users. Ideally, the device would compensate for a user's natural eye defocus/aberrations in order to eliminate the need for any additional prescription eyewear or contact lenses. In addition, the ideal HWD would include an eye tracking system to control the HWD standard operation, and if desired, to enable the control of content and/or the user's experience through hands-free graphical user interface (GUI) menus. The HWD should mimic the human visual system in order to efficiently allocate hardware resources only as needed. High-acuity narrow field-of-view (FOV) and lower-acuity wide FOV content is displayed to the user's fovea and the rest of the retina, respectively. Additionally, localized occlusion of see-through vision can assist in making the display content appear more vivid. Each of these systems would be tightly integrated into a small ergonomic package.


What is needed is a better optical see-through head worn display.


SUMMARY OF THE INVENTION

The present invention provides an augmented reality head worn device comprising a curved combiner transparent to visible light and reflective in a selected infrared frequency range, a scanning light source defining a field of view, a holographic optical element adapted to provide pupil expansion to create an eye-box, and at least one projection system for providing high acuity at or near a center of the field of view. Embodiments of the invention include a laser scanning projector, one or more lenses, one or two holographic optical elements (HOE) and a curved combiner. The laser scanning projector provides narrow laser beams in multiple wavelengths scanned over a field of view. The narrow laser beams can be collimated and can have a beam waist. The wavelengths are preferably red, green and blue for producing color images, but the number of colors is not limited to three. The projector provides a 2D image by scanning a mirror that deflects the narrow laser beams in two dimensions while the lasers are modulated in synchronization with image signals. In preferred embodiments the projector may provide a laser beam with controlled, variable divergence by placing a small focusing lens in the optical path prior to the scanning mirror. The small focusing lens can be a variable focusing lens. In addition a large focusing lens may be included after the scanning mirror, or the two lenses could be replaced by a single specially designed lens. The HOE can be placed at, or in the vicinity of, a focal plane of the focusing lens or lenses. The HOE generates multiple diffracted beams that are directed to respective positions on the curved combiner. The HOE can be a surface relief hologram, a grating, or a volume hologram. Preferably the HOE is equipped with designated sub-areas to generate diffracted beams for each of the three wavelengths. The radius of curvature of the outer (convex) surface of the combiner is larger than the radius of curvature of the inner (concave) surface of the combiner by an amount equal to the thickness of the combiner, to provide an undistorted see-through image. The inner surface is preferably coated with a notch filter coating to reflect only the red, green and blue beams from the HOE and to transmit light at other wavelengths. The combiner also preferably includes an anti-reflection coating on the outer surface.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a baseline optical design.



FIG. 2 is an example of a warping function.



FIG. 3 shows use of two holographic optical elements.



FIG. 4 shows a way to eliminate adjustments of inter-pupil eye distance accommodation.



FIG. 5 describes a technique for combining three lasers of different colors.



FIG. 6 shows the number of holograms required to generate particular eye-box dimensions.



FIG. 7 shows an exaggerated case of an effective amount of prism.



FIG. 8 shows use of a reflective coating to direct images into the eye.



FIG. 9 shows a coating providing more than 80% transmission.



FIG. 10 shows a coating with optical density exceeding 3.0.



FIG. 11 demonstrates a filter design.



FIG. 12 depicts a method for implementing opaqueness-in-the-lens.



FIG. 13 shows typical times for reflex for pilots' eyes.



FIG. 14 shows features related to beam diameters.



FIG. 15 shows a targeted IPD range.



FIG. 16 shows exit pupil array targets with a minimum pupil diameter of 3.0 mm.



FIG. 17 shows typical cases of horizontal pupil translations.



FIG. 18 shows information regarding eye-rotation.



FIG. 19 shows the result of data analysis.



FIG. 20 shows a simple model of eye rotation.



FIG. 21 shows the size of eye boxes needed in various situations.



FIG. 22 shows the LCOS naturally creating a collimated beam pattern.



FIG. 23 shows a fast vibrating MEMS mirror to produce a raster scan.



FIG. 24 shows a dichotomy of an optical design.



FIG. 25 shows a point source MEMS pico-projector.



FIG. 26 shows a technique for placing a display in the focal plane of an objective lens.



FIG. 27 shows how HOEs can be used to “clone” a single eye-box perspective into an array of spots.



FIG. 28 shows how to clone eye boxes by coupling two HOEs.



FIG. 29 shows a computer generated process.



FIG. 30 shows a way to mitigate stray light and cross talk noise.



FIG. 31 also shows a way to mitigate stray light and cross talk noise.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Applicants' approach to developing an optical see-through HWD attempts to follow a logical path toward the optimum solution. There are many desirable features in a HWD. The waveguide approach entails numerous, possibly insurmountable limitations, which prevent attainment of many of the key desirable features. The curved combiner approach appears to allow a solution providing all of the desirable features.


Desirable Features in a HWD

Desirable features in an optical see-through HWD include the following:

    • First and foremost does not harm natural see-through vision
      • Provide see-through transmission adequate for all conditions, including nighttime
      • Does not block peripheral vision (so as not to be hit by a car when crossing the street), does not block binocular overlap (to preserve depth perception) and does not block the view downwards (which would interfere with mobility)
      • Does not cause a discontinuity to see-through vision (a flat beam splitter in front of the eye introduces such a discontinuity at the edges)
    • Provide a wide display field-of-view (FOV), ideally approaching that of natural vision
      • For many applications such as training and gaming, the virtual objects should not disappear in peripheral vision when the user's head is not pointed in their direction
      • A wide FOV promotes a feeling of “immersion” and makes the experience more realistic
    • Provide high-acuity, ideally approaching that of natural vision.
      • A benchmark is 20/20, but in fact the average best corrected vision for young adults is better than 20/15
    • Incorporate a method of display foveation, similar to that inherent in human vision, to allow simultaneous provision of a wide FOV and high-acuity
      • Minimize data transmission bandwidth
      • Minimize power consumption
    • Provide an eye-box of adequate size
      • Must at least cover the comfortable range of eye rotations (±15°)
      • May additionally be used to cover slippage of the HWD with respect to the eyes
      • May additionally be used for fitting purposes, to cover the inter-pupillary distance (IPD) spectrum of the population
    • Adequate brightness for use in full sunlit conditions, adequate reduction in brightness in dark conditions, red astronomer mode which allows and preserves dark adaptation
    • Best possible color gamut, with uniform color balance and luminosity
    • Provide a variable focus display
    • Provide fully binocular vision and avoid displaying dichoptic imagery
      • A dichoptic display can cause luning, binocular rivalry, deficits in contrast threshold, and slower target acquisition
    • Provide occlusion for virtual objects
    • Does not induce “simulator sickness” by providing unnatural and inconsistent information (cues) to the brain; has low persistence to reduce image blur when the head rotates
    • Form factor of sunglasses or safety goggles


Limitations of the Waveguide Approach

The current mainstream approach to HWDs is known as the “waveguide” approach. A flat or slightly curved optical material (the waveguide) is placed in front of each eye. Light is injected into the material from near one of the edges, and extracted in front of the eye. There are four methods of coupling into and out of the waveguide: holographic, diffractive, mirror and polarization. If the waveguide approach solved all of the aforementioned challenges, there would be nothing further to discuss. In fact, however, the waveguide approach fails in numerous key areas. Some of the limitations include the following:

    • Introduced Discontinuity: Waveguides tend to block peripheral vision or binocular overlap. When the light source is placed above the eyebrow (best option) the edges of the waveguide present a discontinuity to see-through vision. This discontinuity is greater for the slightly curved version with increased display FOV because they are thicker along the optical axis.
    • Low Transmission: Waveguides based upon mirror or polarization coupling have low see-through transmission, creating a handicap at nighttime. Increasing transmission results in reduced brightness, which is required in sunlit conditions. All coupling methods have limited transmission when broadband light sources, such as organic light emitting diodes (OLEDs), are utilized, since coupling certain wavelengths limits their see-through transmission. OLEDs are too broadband to provide a red astronomer mode.
    • Limited FOV: Traditional flat waveguides are limited to something like a 40° FOV. The latest slightly curved waveguides have pushed this limit to 60° at the price of greatly increased thickness. With increased thickness, the edges of the waveguide present a greater discontinuity to see-through vision.
    • Low Brightness: Despite all of the research into the waveguide approach, no version has adequate brightness for fully sunlit conditions. At the 2015 Navy Opportunity Forum, Peter Squire stated that the forward observer training system under development for the military currently plans to utilize SA Photonics' latest model slightly-curved waveguide HWD, but that it still is not bright enough in fully sunlit conditions.
    • No Variable Focus Solution: Waveguides generally cannot provide a variable focus display. Without variable focus, the vergence-accommodation conflict cannot be solved, and provision of comfortable “complete-3D” imagery cannot be provided.


What the waveguide approach has solved is provision of exit pupil expansion and the creation of an eye-box adequate for eye rotations. It also provides a form factor approaching that of sunglasses or safety goggles except for the see-through blockage/discontinuity issue.


History & Potential of the “Curved Combiner or Transparent Mirror” (CTM) Approach

Applicants' prototype CTM is an ellipsoidal reflector, with two foci. The light source projector is placed at one of the two foci, located above the eyebrow. The second focus is located at the entrance pupil of the eye. All light emanating from the projector, regardless of where it bounces off of the ellipsoidal reflector, will reach the eye. CTMs allow for see-through vision whereas curved mirrors do not; however, both provide similar display imagery. If the light source projector has a limited FOV, it can be expanded by combining the ellipsoidal reflector with a hyperboloidal reflector. Therefore, in principle, a display FOV approaching that of natural vision can be provided. This overcomes the biggest limitation of the waveguide approach.


Applicants initially proposed using an ellipsoidal reflector, but it turned out that they were not the first to consider this possibility. The first two groups to investigate this idea produced prototypes with FOVs per eye of 60°×120°. However, they discovered two issues: aberrations from the curved mirror, and the unsolved provision of exit pupil expansion to create an adequate eye-box. These other groups did demonstrate that the CTM in front of the eye (with a projection system above the eyebrow) can in essence be formed into something akin to wrap-around sunglasses or safety goggles.


Applicants soon realized that the aberration problem could be dramatically reduced by combining retinal scanning display (RSD) technology with the curved transparent mirror approach. In RSD, scanning beams with good spatial coherence such as eye-safe laser beams are directed at the eye to produce imagery. The first commercial RSD display was the NOMAD device developed by Microvision although it utilized a flat fold-mirror in front of the eye. Applicants have viewed, tested and verified the NOMAD device firsthand. The footprint of such beams on a curved mirror is small and the effective f-number is high, resulting in low levels of aberrations. If the footprint of each pixel on the curved mirror is large, the aberration problem becomes intractable. Another way of visualizing the problem is that laser beams with diameter small compared to the radius of curvature of the mirror appear to first order more like a ray, rather than a bundle of rays. The end result is that there is a variation in defocus from top to bottom of the ellipsoid, but high-order aberrations are strongly suppressed. Furthermore, with the use of small diameter beams, the depth-of-focus of the display light is substantial.


The use of scanning laser beams provides a number of further benefits. The most electrically efficient sources of light known are laser diodes (although not all laser diodes are efficient). The progression from tungsten bulbs, to compact fluorescent bulbs, to LEDs, will likely culminate with laser light bulbs. Laser light bulbs are currently being investigated at Sandia National Laboratory. Laser light bulbs are already replacing mercury bulbs in projectors, and the first laser headlamps in high-end cars are nearing reality. Therefore, in principle, an optimal light generating mechanism in a display will utilize laser diodes. A second benefit is that narrowband light means the reflective coating on the CTM need only reflect a small proportion of the photopic band, so high photopic transmission can be provided. Rugate coatings can provide high reflection at chosen wavelengths while maintaining excellent transmission across the rest of the spectrum. They have been proven to reflect color imagery and provide photopic transmission exceeding 80%. In RSD there is no speckle whatsoever, provided that the beams do not impinge on a diffusing surface en route to the eye.


With the RSD approach, an electrically-activated variable lens placed just prior to the scanning mirror can be used to affect the divergence/convergence of the scanning beams, and thereby provide a display with variable focus. This lens can also be used to correct the vertical focus variation due to the ellipsoidal geometry. It can additionally be utilized to fix the accommodation-vergence conflict causing simulator sickness in persons watching conventional 3D movies. RSD utilizes scanning laser beams efficiently delivered to the eye. Providing adequate brightness outdoors for use in fully sunlit conditions is not a problem.


The remaining challenge is to provide exit pupil expansion sufficient to create an eye-box of adequate size. The NOMAD device utilized a novel refractive/diffractive device based upon dual micro-lens arrays. It has only been manufactured in a flat geometry, which is not directly applicable to the CTM approach. The first approach explored by Applicants was to utilize eye tracking to locate the entrance pupil of the eye, and to move the laser scanner so as to parry the relative movements of the eye. Alternatively, an optical method of making the scanner appear to move could be utilized. In the ellipsoidal design, when the scanner is moved away from the upper “focus” of the ellipsoid, the beams no longer intersect at a point near the lower focus near the eye. Instead, the further the exit pupil is caused to move from the lower focus of the ellipsoid, the larger the diameter of the “circle of least confusion” becomes (in analogy to the situation of spherical aberration in a conventional lens). This effect limits the potential size of the eye-box. Mechanically moving the scanner is possible but undesirable. To make the light projection source appear to move, Nagahara et al. proposed to utilize tilt generated in LCDs. However, the tilt levels possible are small unless the LCD is thick, in which case it is slow. Electro-wetting is another possibility, which is fast, but the levels of tilt possible again are not large. Applicants proposed using a pair of curved polarization gratings to make the scanner appear to move. Polarization gratings can scan large diameter beams over very large angles. In principle, this could be made to work in full color, and it has been demonstrated in two-color systems. However, the only company with experience making these devices, Boulder Nonlinear, has never made a compact version on thin glass substrates, let alone curved substrates. In addition, they wanted ≈$100 k to make a prototype of any sort. The consensus was that this technology has not matured sufficiently, since there is no commercial product utilizing it.


There was another challenge. The conventional optical design software packages (e.g. Zemax, Code-V, FRED) do not include provisions for easily working with off-axis conics. Applicants' optical designer resorted to importing the off-axis conics into Zemax via Matlab code. This made iterations on the basic design slow and limited the optimization space. This problem was resolved by utilizing the Mathematica-based software Optica.


The “moving scanner” approach to creating an eye-box essentially keeps etendue fixed, moving the optical system around in time. Etendue can never be decreased, which would represent a violation of the second law of thermodynamics. (Using only 100% reflectors and refractive lens elements, etendue is conserved.) If beam expansion is utilized to expand the eye-box while conserving etendue, the FOV is decreased in proportion. Therefore, this approach is not useful for a display with wide FOV. The conventional approach to exit pupil expansion is to increase etendue utilizing any of the following: beam-splitters, diffraction, holography or diffusers. For instance, a 50% beam splitter takes a pulse of light with some divergence and volume, and essentially doubles the volume and occupied angular space. A diffraction grating can be designed to create an un-diffracted zero-order beam and two first order (±1 order) diffracted beams, all of equal intensity, thereby tripling the volume and occupied angular space.
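To make the tradeoff concrete, a standard formulation (offered here as background, not taken from the present design) writes a display's etendue as the product of eye-box area and field-of-view solid angle. With etendue-conserving optics this product is fixed, so enlarging one term shrinks the other in proportion, while a 50% beam splitter doubles the product and a three-order grating triples it:

$$G \approx A_{\text{eye-box}} \cdot \Omega_{\text{FOV}}, \qquad G' = G \;\Rightarrow\; \frac{A'_{\text{eye-box}}}{A_{\text{eye-box}}} = \frac{\Omega_{\text{FOV}}}{\Omega'_{\text{FOV}}}$$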


Applicants are pursuing a novel holographic approach to generate exit pupil expansion without the use of waveguides. Hologram use for HWD applications is not a new idea; holograms have been used previously to generate exit pupil expansion in waveguide-based HWDs. The novelty of our approach is that the HOE can simultaneously create a large eye-box while maintaining a wide FOV, a combination not yet achieved in waveguide approaches.


The optical design software Optica, utilized for working with off-axis conics, can also be used to model holographic optical elements (HOEs). To clarify, the term hologram often implies it contains 3D images, whereas the term HOE does not. In particular, unlike display holograms, there is no image stored in an HOE. Instead the HOE behaves like a lens system, but without the limitations of a traditional lens (such as the conservation of etendue limitation).


For the purposes of this report, the terms “hologram” and “grating” are used to mean HOE. It turns out that HOEs can be utilized to correct for the “circle of least confusion” issue mentioned previously in regard to expanding the eye-box. One exciting possibility is that the size of the eye-box might be increased sufficiently to provide automatic fitting to a large fraction of the population. With HOEs, the ellipsoidal reflector can be replaced with a spherical reflector, as the functionality provided by the ellipsoid can be incorporated into the HOEs. A spherical curved transparent mirror is easier to manufacture and position correctly.


With HOEs a new method of providing foveation arises. Applicants' original concept was to have two projection systems, one to supply pixels for a high-acuity zone (which may be centered in the display FOV or movable to track the gaze) and a second projector to provide the wide display FOV image with reduced acuity. Applicants have already demonstrated a roving zone with enhanced acuity. However, the simplest version of foveation is to provide high-acuity in the center of the display FOV where the gaze is present most of the time, and low-acuity in the peripheral display FOV, which is also the peripheral FOV of the wearer most of the time. Whenever the wearer desires to look at a virtual object with high-acuity, they simply turn their head to point in the vicinity of the virtual object. Foveation using a single projector per eye can be accomplished when pixel density warping is implemented in the HOEs. Keystone warping correction of the overall image can also be implemented in the HOEs at the same time. Applicants are working on designs that show HOEs can be made to perform a number of extremely useful functions. Applicants are collaborating with Technicolor to manufacture computer generated HOEs, which have been used in HWD prototypes. Computer generated surface HOEs manufactured with state of the art technology will be utilized to program the volume HOEs utilized in the HWD.


Applicants' Holographic Baseline Design System Overview

Applicants have created a design that addresses many of the requirements for the highly sought HWD. Methods to extend the functionality of our design to meet the remaining requirements of the ideal HWD have been identified and effort to incorporate them into the baseline design has begun. A development path for each component and subsystem has been established in order to transition the TREX design into a fabricatable product. This includes identifying materials, components, modules, fabrication methods, and testing protocols that can be exercised within TREX or from local vendors.



FIG. 1 shows the baseline optical design, which uses an RSD comprising modulated red, green, and blue lasers and a microelectromechanical systems (MEMS) micro-mirror raster scanner to generate the display video content. This video illumination system passes through conditioning and focusing optics onto a hologram which corrects for optical aberrations, replicates the image signal to create an eye-box, and then redirects the light toward a CTM. This display content is then reflected off the CTM onto the retina of the user. TREX simulations show that this holographic design can be used to achieve an expanded eye-box size of greater than 10 millimeters and a field-of-view (FOV) that is 80 degrees horizontal by 45 degrees vertical; however, this is not the theoretical limit. A rugate coating will be deposited onto the CTM in order to be highly reflective at the projector laser wavelengths (450 nm, 530 nm, 635 nm) while allowing the mirror to remain optically transparent for the rest of the visible spectrum. As such, the mirror can be >80% transparent for see-through vision.


In addition, it is possible to add high-acuity to the optical system by warping the image from the projection system to create foveation. An example of such a warping function is shown in FIG. 2. FIG. 3 shows how two holographic optical elements can be used to map an equally spaced display into a foveated resolution that matches the non-spatially uniform acuity of the human eye. This approach will most efficiently address the needs of the HWD.
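As an illustration of how such a warping function concentrates pixel density (the specific function of FIG. 2 is not reproduced here; the power-law radial warp and the parameter gamma below are assumptions for this sketch):

```python
import numpy as np

def foveated_warp(x, y, gamma=2.0):
    """Map uniformly spaced projector pixel coordinates (x, y in [-1, 1])
    to warped display coordinates. With gamma > 1, uniformly spaced input
    radii crowd toward the center of the output, so more projector pixels
    per degree land near the fovea; gamma = 1 leaves the grid unchanged."""
    r = np.hypot(x, y)
    scale = np.where(r > 0, r ** (gamma - 1.0), 1.0)  # radially, r -> r**gamma
    return x * scale, y * scale

# Example: the inner half of the input grid occupies only a quarter of the
# warped display radius, doubling the linear pixel density near the center.
print(foveated_warp(np.array([0.25, 0.5, 1.0]), np.zeros(3)))
```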


A further improvement to the baseline design could be the use of volume-based hologram technology, such as photopolymer instead of surface hologram technology. This enables the eye-box size to increase substantially at the price of a more challenging mass production process. With the use of volume holograms, the eye-box size could be increased to eliminate any adjustment of the inter-pupil eye distance accommodation, see FIG. 4. Further discussion of the eye-box formation is provided below.


The following table expresses a crude estimate of cost of the subsystems and components, as well as the labor and infrastructure, needed to assemble the color binocular foveated HWD described in this report.









TABLE 1

Cost estimate analysis for 10 million unit volume production. See the end of each respective section for a more detailed discussion.

Component/System                              Volume production unit cost
Eye-head Measurement System                   $10
Ambient Light Measurement System              <$1
Automated Mechanical IPD Adjustment           $5
Digital Processing Unit ASIC                  $10
Video Projector Module                        $100
Hologram Optics                               $5
Curved Transparent Mirror                     <$1
Rugate Coating                                $100
Localized Opacity Control                     $10
Dynamic Focus Control                         <$1
Battery                                       $5
Assembly and Integration of HWD System        $20
Total                                         $268


Subsystem Overview

This section provides an overview of each HWD subsystem and describes how they interact in unison to enable the functionality sought by the prototype. The sections detailing each subsystem begin with a performance specification of the subsystem and a brief summary of the section's contents. This summary information is followed by a more in depth description of the findings which include: a performance limiting component analysis, roadmap to the 2020 production year, forecast of component development and emerging technology that will impact the HWD during this timeframe, top engineering challenges, and annual unit volume cost analysis. This component-by-component analysis is invaluable in estimating the technological course the HWD will take in the next 5 years. It will also serve to best identify the limiting components, alternate technologies, and where R&D resources should be invested. The subsystem cost analysis will aid in estimating the total volume production costs of a consumer grade foveated, color, binocular, augmented reality HWD system.


Eye-Head Measurement System (EHMS)

The EHMS tracks the wearer's gaze, eye focus, and head orientation and position. It also measures the ambient light level and IPD of the wearer. Many well developed commercial-off-the-shelf (COTS) eye-head measurement systems (EHMS) are readily available. They employ relatively simple hardware optics and electronics coupled with more sophisticated software algorithms. The EHMS addresses a problem that is deceivingly simple; erroneous measurements are easily caused by variations in the optical characteristics of the anatomy of the eyes and faces of the population of users. Considerable work has been done to produce EHMSs that provide accurate measurements for a wide range of users. It is beneficial to utilize COTS technology for any HWD system since the cost to reproduce their performance is non-negligible. In what follows, we discuss the hardware configuration of a typical EHMS. This section is included to show how the EHMS works and how its output measurement signals are used to control other subsystems. Volume costs of the eye-head measurement system include the infrared LED emitter, camera, and mechanical supports to maintain their position. Additionally, an inertial measurement unit (IMU) will be used to track the position of the user's head. Of great importance is the dedicated software and processing electronics needed to interpret the raw data in order to produce valid output signals. This software would need to be adopted from third party companies or developed in-house in order to make the EHMS function properly. The estimated volume cost for the EHMS subsystem is on the order of $10, with a significant part going to software and electronic processing.


Gaze Tracking

Gaze tracking has numerous potential applications in a HWD. These include:

    • Hands-free communication
    • Hands-free control of the display and or other machines
    • The above applied to disabled persons
    • Monitoring physiological and psychological aspects of the wearer (e.g. fatigue)
    • Foveation—displaying high-acuity only where the wearer is looking
    • To measure vergence and thereby infer focus—could be used to properly focus the virtual object being observed


Another application, in some HWD designs, is to correct display aberrations associated with the current gaze angle. Applicants have investigated several approaches to gaze tracking as follows:

    • Conventional glint+pupil eye tracking utilizing modern miniature cameras such as the OV6211
    • Corneal glint tracking with HWD slippage correction utilizing modern miniature cameras such as the OV6211
    • RSD tracking, which leverages the RSD architecture and adds a near IR diode laser and avalanche photodiode to image the retina and note the position of landmarks on the retina to infer gaze angle


Tobii is a world leader in conventional eye tracking. They offer glasses with two miniature cameras embedded in the frame below each eye. The drawback to their device is the price. When initially released a few years ago, they wanted $10K for the glasses and $10K for their software.


Applicants have developed a simplified version of their tracker, utilizing the miniature OV 6211 camera, which is designed for eye tracking applications. The method relies solely on the movement of the primary corneal glint, and has been utilized by numerous subjects to remotely control a robot. If Trex were to successfully develop a method to automatically compensate for slippage of the HWD on the head, this method would be of great interest, since the hardware is inexpensive.


Applicants have developed an RSD method of eye tracking in which the eye is “pinged” with near infrared light from all angles in the display. The infrared light reflected from the retina is returned to an avalanche photodiode (APD) which measures its intensity. Specular reflections are removed using polarization. Every frame the system essentially takes a picture of the retina. From the location of landmarks on the retina (either the fovea or the optic disc) the gaze angle is directly determined in the coordinate system of the display. The high voltage power supply required for operation of the APD is inconveniently large for incorporation in a HWD, and this matter is under investigation.


Vergence Measurements to Infer Accommodation Tracking

If the accommodative or focal state of the wearer were known, the HWD system would have the option of automatically focusing the display to be comfortably viewed by the wearer at all times. If the wearer looked at something close by, content in the display could be similarly focused. If the wearer looked in the distance, the display could automatically focus displayed content at infinity. A second application is in providing “true” or “complete” 3D imagery, in which both retinal disparity and correct focus are provided by the display. If there are virtual objects displayed at multiple ranges simultaneously, and the display focus system can only provide focus for a single range, one approach is for the display to match the current focus of the wearer. Another is to correctly focus the virtual object being looked at or closest to the line of sight.


One way to infer the focus of the wearer is to utilize eye tracking of both eyes to determine the vergence of the eyes. As the wearer looks at objects closer and closer the eyes rotate towards each other and these angles can be measured.
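A minimal sketch of this inference, assuming symmetric fixation on the midline (the function name is illustrative, and the default IPD is the ANSUR population mean discussed below):

```python
import math

def fixation_distance_m(vergence_deg, ipd_m=0.0634):
    """Distance to the fixation point given the total vergence angle
    (the sum of both eyes' inward rotations). Geometry: each eye rotates
    inward by vergence/2 toward a point on the midline, so
    tan(vergence/2) = (ipd/2) / distance."""
    if vergence_deg <= 0:
        return float("inf")  # parallel gaze: focused at infinity
    return (ipd_m / 2) / math.tan(math.radians(vergence_deg) / 2)

print(f"{fixation_distance_m(12.1):.2f} m")  # ~0.30 m at near-work vergence
```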


A second method for measuring focus of the wearer is to measure display light or near infrared light reflected from the retina, while varying the divergence of the light incident at the eye. The measured reflected light will be maximized at some divergence, and this divergence will have a unique correlation with the focal state of the eye.
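A sketch of how this second method could be processed, assuming a unimodal reflection-versus-divergence response (names and the sample data are illustrative):

```python
import numpy as np

def accommodation_from_sweep(divergences_diopters, reflected_intensities):
    """Estimate the eye's focal state from a divergence sweep: the retinal
    return is maximized when the incident beam divergence matches the eye's
    current accommodation."""
    return divergences_diopters[int(np.argmax(reflected_intensities))]

# Example: a return peaking at 2.0 D implies focus at 1/2.0 = 0.5 m.
print(accommodation_from_sweep(np.arange(0.0, 4.0, 0.5),
                               np.array([1, 2, 5, 9, 11, 9, 5, 2])))
```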


Interpupil Distance and 3D Location Measurement

Using camera-based eye tracking, the inter-pupillary distance or IPD could be measured. A mechanical mechanism could then automatically position the displays in front of the eyes at the optimum location. One method of accomplishing this would be to utilize “squiggle” motors, which are very compact and require zero power to maintain position.


The IPD range that is of interest can be estimated from the following data. Inter-pupillary distance data were taken on 3976 subjects aged 17 to 51 and reported in the 1988 ANSUR database. A review of the ANSUR and other IPD data can be found in a reference by Dodgson. The mean and standard deviation in the male data are 64.7 mm & 3.7 mm and in the female data are 62.3 mm & 3.6 mm. The mean and standard deviation for the combined data are 63.36 mm & 3.832 mm. The minimum value was 52 mm and the maximum value 78 mm. The distribution for the combined data is used in TABLE 2 to generate horizontal eye-box dimensions to fit various percentages of the ANSUR population.
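The horizontal eye-box figures in TABLE 2 can be reproduced from these statistics; a sketch (following the table's own convention of adding twice the allowed pupil translation to the IPD range about the mean):

```python
import math

MEAN_IPD_MM, SIGMA_MM = 63.36, 3.832  # combined ANSUR 1988 statistics

def eyebox_row(k, pupil_translation_mm):
    """One row of TABLE 2 for a +/- k*sigma band of the population."""
    ipd_range = 2 * k * SIGMA_MM                   # IPD range about the mean
    pct_fit = 100 * math.erf(k / math.sqrt(2))     # fraction within +/- k*sigma
    eyebox = ipd_range + 2 * pupil_translation_mm  # horizontal eye-box
    return ipd_range, pct_fit, eyebox

for k in (0.5, 1.0, 2.0, 3.0):
    r, p, e = eyebox_row(k, 4)
    print(f"+/-{k} sigma: range {r:.1f} mm, fits {p:.1f}%, eye-box {e:.1f} mm")
```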









TABLE 2

ANSUR 1988 IPD data and required horizontal eye-box

       IPD Range    Percentage of     Eye-Box w/ ±4 mm     Eye-Box w/ ±7 mm     Min IPD    Max IPD
  σ    about Mean   Population Fit    pupil translation    pupil translation    (mm)       (mm)
       (mm)         (%)               (mm)                 (mm)

 0.0    0            0                 8                    14                   63.4       63.4
 0.5    3.8          38.2              11.8                 17.8                 59.5       67.2
 1.0    7.7          68.2              15.7                 21.7                 55.7       71.0
 1.5    11.5         86.6              19.5                 25.5                 51.9       74.9
 2.0    15.3         95.4              23.3                 29.3                 48.0       78.7
 2.5    19.2         98.8              27.2                 33.2                 44.2       82.5
 3.0    23.0         99.7              31.0                 37.0                 40.4       86.4
 3.5    26.8         99.9              34.8                 40.8                 36.5       90.2


Head Tracking

In an optical see-through HWD, head tracking is required for several reasons. These include the following:

    • To keep displayed virtual objects embedded in the real world anchored in their correct location despite head motion
    • To compensate for the vestibulo-ocular reflex or VOR


If a Minecraft creation is created on a real table in a room, it should remain on top of the table when the head moves. This is only possible if the head motion is detected and a display location correction is made in real time.


When displaying content that is not intended to be anchored in the real world, but is intended to be readable despite head motion, the vestibulo-ocular reflex needs to be considered. If a sentence is displayed in a fixed location in a HWD and the head is rotated or translated suddenly, the sentence becomes unreadable. This is counter-intuitive, as the sentence is held steady in front of the eye. However, the VOR is designed to make imagery of the inertial horizon visible despite accelerations of the head. The driver of a car has no problem seeing the countryside as they drive around sharp curves or go over bumps, because their VOR automatically corrects for the accelerations. A person reading in the back seat may become nauseated, because their VOR is attempting the same corrections, but the reading material is moving with them, not staying fixed with the inertial horizon. Content simply displayed fixed to the head in a HWD works against the VOR correction and actually causes problems. The solution, however, is relatively simple. Anchor displayed content with the inertial horizon, and not with the head. When the head quickly rotates left, make the displayed content rotate right to keep it fixed in inertial space, and then let it slowly drift to the original relative location in the field-of-view.
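A minimal sketch of this counter-rotate-then-drift scheme (the time constant tau_s is an assumed tuning parameter, not a value from this report):

```python
import math

def update_content_offset(offset_deg, head_delta_deg, dt_s, tau_s=1.0):
    """One frame of inertially anchored display content: counter-rotate the
    content by the head rotation measured this frame so it stays fixed in
    inertial space, then let the offset decay exponentially back toward its
    nominal head-fixed position."""
    offset_deg -= head_delta_deg           # cancel this frame's head rotation
    offset_deg *= math.exp(-dt_s / tau_s)  # slow drift back to nominal
    return offset_deg
```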


Head tracking involves several layers. There are inertial measurement units (e.g. gyros and accelerometers) that measure rapid accelerations and movement. Then there are secondary measurements that keep track of slower motions and drifts (e.g. vision based tracking and magnetometers).


Ambient Light Measurement System (ALMS)

The ambient light measurement system is a very well developed COTS system that is used in countless consumer electronic applications. It is mentioned here for completeness since ambient light is used to control other systems such as the overall brightness of the video projection system. This is done so that the projector brightness coincides with the light entering the user's eyes from the environment. It can additionally be used by the EHMS to help prevent erroneous measurements due to stray light reflections from the eyes and face of the user. There are commercially available models that provide RGB ambient light values which can be used to white balance the display. The volume cost per unit of this component would likely be very inexpensive in comparison with the rest of the HWD. Our estimate per unit when purchasing 10 million units is well under $1.


Automatic Mechanical Inter-Pupillary Distance and 3D Location Adjustment

The IPD of the display can be set automatically using voltage data signals from the eye-head measurement system (EHMS). If the IR illumination source and the camera sensor are aligned along the vertical axis, the measured pupil position can automatically control an adjustment mechanism to move the two optical paths into alignment with the user's eyes. A screw drive or squiggle motor can serve as an actuation mechanism to physically move the optics into position. It is important to note that the optical components must have vertical, lateral and longitudinal mechanical control in order to universally adapt to the user's vision requirements. This 3D position information is fed in directly from the EHMS, which takes measurements of the user's eyes. The headband mechanical support of the entire system provides the freedom needed to accomplish this task with minimal hardware and interference. The 10 million unit volume cost for mechanical and electronic hardware needed to position the HWD to automatically accommodate the IPD of a given user is on the order of $5. This would have some impact on power consumption, which can in turn lead to an increase in the battery cost.


Display Processing

The 10 million unit volume cost for the digital image processing memory and application specific integrated circuit (ASIC) capable of performing the tasks described in the following three subsections is on the order of $5.


Foveated Display Software Compensation

Our primary design uses one projection system to illuminate both the peripheral and narrow FOV views. The density of equally spaced pixels within the commercially available projector is optically mapped to a non-uniform pixel density using holograms. This remapping is done to match the foveated human visual system. In this way the displayed resolution has much better correspondence between pixel and photoreceptor density.


The software foveation architecture necessarily works in tandem with the optical foveation system. The mapping introduced by the hologram must be compensated for in software prior to displaying the imagery so that the imagery appears undistorted. A look-up-table generated from the hologram data or a calibration test grid is used in the remapping algorithm.
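A sketch of this look-up-table compensation step (the nearest-neighbor sampling and LUT layout are assumptions for illustration):

```python
import numpy as np

def prewarp_frame(image, lut_x, lut_y):
    """Pre-distort a frame so it appears undistorted after the hologram's
    optical remapping. lut_x and lut_y hold, for every output pixel, the
    integer source-pixel coordinates derived from hologram design data or
    a calibration test grid."""
    return image[lut_y, lut_x]

# Example: an identity LUT leaves the frame unchanged.
h, w = 480, 640
lut_y, lut_x = np.indices((h, w))
frame = np.random.rand(h, w, 3)
assert np.array_equal(prewarp_frame(frame, lut_x, lut_y), frame)
```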


Additionally, the gaze tracking information from the eye measurement system (EHMS) will control what region of the display appears foveated. The circular zone where the user's gaze is fixed appears at the native resolution of the foveated display described above. Regions outside the foveated circular zone will be binned-up in software allowing for a considerable savings in battery storage, data transmission, and data processing requirements due to the large bandwidth reduction.


Position and Orientation of Displayed Content

Head-tracking plays an integral role in HWD operation by providing the azimuth, elevation and orientation of the user's head. The azimuth and elevation serial output of the head tracker can be in the form of digital pulse width modulated (PWM) voltage signals, which are compatible with current FPGA or ASIC technologies.


Head tracking is invaluable to the overall operation of such a system since it enables displayed content to be far more realistic than otherwise possible. The reason is that most of the objects a person interacts with in their everyday environment do not move when the person turns their head. Therefore the objects displayed to the user should remain fixed in position with respect to the environment, not the user's head. For this reason it is necessary to track the user's head movements in order to compensate for them in the displayed imagery. This allows the system to display content that appears to the user to remain fixed in position with respect to the environment.


Keystone Mapping and Image Overlapping for Binocular Vision

Vergence (used to infer the accommodation of the eye) movements (measured in degrees or prism diopters) provided by the EHMS enable binocular display data to be actively transformed so that the two display feeds going to each eye can be registered and aligned. This transformation may be implemented using an image geometry correction transform which can account for perspective and optical mapping corrections. This transformation updates automatically with the user's accommodation at a high refresh rate in a feedback loop. The baseline design has full overlap when the user is focused to a distance of 30 cm. This will work in conjunction with a variable focus lens (described below) so that the display information is presented at the focal depth at which the user's focus is accommodated. This reduces conflicting cues the human visual system uses to determine its surroundings. Cues that are not in agreement can cause the user to experience disorientation, sickness or nausea. The meter angle (MA) is used to get a comparable metric for accommodation and convergence. The meter angle equals the reciprocal of the viewing distance, and its value is approximately equal to the accommodative stimulus in diopters.
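As a small worked example of the meter angle relation just stated:

```python
def meter_angle(viewing_distance_m):
    """Meter angle (MA): the reciprocal of the viewing distance in meters,
    approximately equal to the accommodative stimulus in diopters."""
    return 1.0 / viewing_distance_m

# The baseline full-overlap distance of 30 cm corresponds to ~3.3 MA,
# i.e. roughly a 3.3 diopter accommodative stimulus.
print(f"{meter_angle(0.30):.2f} MA")
```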


Video Illumination and Modulation Projection Module

Microvision refers to their biaxial scanning MEMS units combined with modulated laser diodes as an “integrated photonics module” or IPM (see FIG. 5).


Microvision initially manufactured a WVGA format IPM (848×480) which was incorporated into their commercial scanning laser Pico projectors, such as the ShowWX and the ShowWX+HDMI units. Microvision later manufactured a 720p IPM (720×1280). Their magnetic drive technology utilizes low voltages, unlike electrostatic MEMS, which utilize much higher drive voltages. Microvision technology may currently offer the best high-frequency MEMS mirrors. Currently there are several manufacturers of miniature laser scanning projection modules. They include the following:

  • 1. Celluon is selling Pico projectors containing IPMs utilizing Microvision-licensed technology. It is believed that the IPMs are manufactured by Sony. These IPMs offer a “bastard” resolution of 720×1920: 720p vertical and 1080p horizontal. It is known that Microvision previously manufactured a 720p IPM. The logical inference is that they have no trouble modulating the diode lasers fast enough to provide 1920 pixels per line, but that they did not increase the horizontal resonant frequency of the MEMS mirror sufficiently beyond the minimum necessary for operation at 720p at 60 Hz: (720/2)×60 Hz=21.6 kHz, to reach the minimum required for 1080p at 60 Hz: (1080/2)×60 Hz=32.4 kHz. The Celluon IPMs utilize five lasers rather than three: two red, two green and one blue diode lasers. Looking with a simple handheld spectrometer, the two green lines are separated by perhaps 2 nm, as are the two red lines. A theory is that they are trying to minimize speckle in the projected imagery. The PicoAir model connects only wirelessly with Miracast-enabled Android and DLNA. The PicoPro model connects with HDMI cable, DLNA and Miracast.
  • 2. Microvision has informed us that IM Electronics in Korea is manufacturing 720p units very similar to their original 720p units. In August 2015 they informed us that units cannot be purchased directly from IM Electronics, but that we will be able to purchase them from Microvision.
  • 3. Maradin Technologies Ltd. located in Israel manufactures a biaxial MEMS laser scanning development kit with resolution of 600×854, an optical scan range of 36°×27°, an effective mirror size of 1.0×1.1 mm very similar in size to that of Microvision, drive electronics and the following lasers: 120 mW at 638 nm, 50 mW at 520 nm, and 80 mW at 450 nm. The frame rate is 30 Hz interlaced. The development kit is not compact like the Microvision IPMs. They have published a description of their approach.
  • 4. ST Microelectronics developed a chip for operating laser MEMS scanners. They announced the acquisition of an Israeli MEMS company bTendo and the development of a laser scanning projector module. It now appears that they are actively pursuing a laser scanner built into a smart phone. Their device may be more compact than the Microvision IPM. Their device may utilize two scanning mirrors rather than one biaxial mirror. The release date of an official product has not yet been officially announced.
  • 5. Mezmeriz in conjunction with Cornell University has developed miniature laser/MEMS projection units for use in cell phones based upon carbon-fiber MEMS. They show a video of one in action on their website. One application is to increase the size of the display in cell phones when projecting on nearby surfaces, such as the desktop. These operate with low drive voltages. Their claim is that current silicon-based MEMS devices are limited by material properties, and that carbon fiber can outperform silicon. They are working with mirrors of varying diameter and offer mirror diameters larger than 1 mm, and optical scan angles up to 180°. Their main product scans ±45° without additional optics. According to their specification sheets, they have not attained the horizontal resonant frequencies that Microvision technology has. According to correspondence they have matched Microvision resolution. According to correspondence, they are actively pursuing some forms of commercial products.


All Pico Projector devices are designed to project images on walls or other surfaces illuminated to some degree by additional light sources such as room light. In a consumer device it is possible that a user could point the scanner at the eye from a distance of 4″. In order to meet the IEC 60825-1 class 2 requirement in this condition the output is limited to approximately 17 lumens. The Celluon PicoPro projector is capable of 32 lumens and is class 3R (as are many modern day so-called “laser pointers”), and has lasers each with output capability of tens of milliwatts. By comparison, the power levels required for an RSD display are more modest.


RSD Laser Power Requirement

A benchmark for the brightness of a display useable outdoors is the luminance of a typical photographic scene in full sunlight, which is about 5000 cd/m2. It is also approximately the luminance of the full moon (maximum solar illumination) directly overhead (minimum atmospheric path).


The power into a bright sun adapted 2.0 mm diameter pupil of the eye is:










$$P_W = 5000\ \mathrm{cd/m^2} \cdot \frac{\pi \left(2.0\times 10^{-3}\ \mathrm{m}\right)^2}{4} \cdot \Omega \cdot \frac{1}{683\ \mathrm{lumens/watt}} \cdot \frac{1}{V(\lambda)} = 23.0\ \mu\mathrm{W} \cdot \frac{\Omega}{\mathrm{sr}} \cdot \frac{1}{V(\lambda)} \qquad \text{(Equation 1)}$$







In Equation 1, Ω is the solid angle of the scene and V is the luminous efficacy. The solid angle of a spherical rectangle is given by:














$$\Omega = (\theta_2 - \theta_1) \cdot \left(\sin(\varphi_2) - \sin(\varphi_1)\right) \qquad \text{(Equation 2)}$$







For a display with field-of-view of 80°×45°, the solid angle is given by:









$$\Omega = 80^\circ \cdot \frac{\pi}{180} \cdot \sin(45^\circ) = 0.987\ \mathrm{sr} \approx 1.0\ \mathrm{sr} \qquad \text{(Equation 3)}$$







Therefore to match a bright daytime scene using 555 nm (for which V(λ)=1.0) in a display with FOV of 1.0 steradian requires a power into the eye of 23 micro-watts. To reproduce this at 532 nm (for which V(λ)=0.8832) requires 26 μW. This is power into the eye.


For a static eye-box, the required power increases by a factor equal to the ratio of the pupil area to the eye-box area. In this case it should be noted that the pupil diameter is typically much larger than 2.0 mm as sunglasses are typically worn in bright sunlit conditions. If a 3 mm diameter is assumed, and a large eye-box per eye of 9 mm×20 mm, the factor is:











$$\text{eye-box power factor} = \frac{9\ \mathrm{mm} \times 20\ \mathrm{mm}}{\pi\,(1.5\ \mathrm{mm})^2} = 25.5 \qquad \text{(Equation 4)}$$







The total power requirement at 532 nm is then 0.66 mW.
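Equations 1 through 4 can be checked numerically; a short sketch for the 532 nm channel:

```python
import math

LUMINANCE = 5000.0    # cd/m^2, bright sunlit scene benchmark
PUPIL_D_M = 2.0e-3    # sun-adapted pupil diameter (Equation 1)
SOLID_ANGLE_SR = 1.0  # 80 x 45 degree FOV (Equation 3)
V_532 = 0.8832        # photopic luminous efficacy ratio at 532 nm

# Equation 1: power through the pupil needed to match the scene luminance.
p_eye_w = (LUMINANCE * math.pi * PUPIL_D_M**2 / 4
           * SOLID_ANGLE_SR / 683.0 / V_532)

# Equation 4: static eye-box penalty for a 3 mm pupil and 9 x 20 mm eye-box.
eyebox_factor = (9.0 * 20.0) / (math.pi * 1.5**2)

print(f"{p_eye_w * 1e6:.1f} uW into the eye")           # ~26 uW
print(f"{p_eye_w * eyebox_factor * 1e3:.2f} mW total")  # ~0.66 mW
```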


At least one independent researcher is critical of manufacturers' claims for laser scanning device resolution. The overall system performance may be limited by the projection module; therefore its performance capabilities and limitations to be verified include:

    • Update rate
    • Latency
    • Horizontal resolution
    • Vertical resolution
    • Color gamut
    • Brightness
    • Dynamic range
    • Contrast ratio


The biggest limitation is the horizontal frequency of the scanning mirror because the frame rate multiplied by the vertical resolution is directly limited by this value. High frame rate is used to combat various causes of simulator sickness, and vertical resolution is necessary for acuity and adequate field-of-view.
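This constraint is easy to state numerically (a sketch assuming a bidirectional raster scan, consistent with the 720p and 1080p figures quoted earlier):

```python
def required_mirror_hz(vertical_resolution, frame_rate_hz):
    """Minimum horizontal (fast-axis) resonant frequency for a bidirectional
    raster scan: each mirror period paints two lines, so the mirror must run
    at (lines per frame / 2) * frames per second."""
    return vertical_resolution / 2 * frame_rate_hz

assert required_mirror_hz(720, 60) == 21_600   # 21.6 kHz for 720p
assert required_mirror_hz(1080, 60) == 32_400  # 32.4 kHz for 1080p
```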


New scanning technologies may provide significant improvement in the horizontal scan frequency. In particular, the plasmonic scanner being developed by Ultimara and Stanford University may dramatically improve this parameter.


Smaller and faster lasers with less power are needed for RSD versus the wall projection application. Multiple independent groups are pursuing the smart phone format wall projection application. Alternative scanning approaches are being pursued in addition to maximizing the potential of MEMS. Currently the following applications and perhaps others are driving development of laser scanning technology: automobile HUD, embedded in smartphone for portable projection.


Volume production and manufacturing do not seem to be a problem, as Celluon is producing large numbers of these devices. The product is viable: the units are selling for $350, and the STM smartphone will be priced like a smart phone. These prices are acceptable for customers entering the mobile projector market. Our estimate for the 10 million unit volume cost for purchasing video projection modules is approximately $20.


Projection Optics

HOE for display, aberration mitigation and eye-box formation


Volume Grating Specification

    • Angular Selectivity: FWHM <4 degrees
    • Wavelength Selectivity: FWHM <50 nm
    • Diffraction Efficiency: >60 percent
    • Grating Pitch <5000 line pairs per mm
    • Thickness >10 microns


Surface Grating Specifications

    • Diffraction Efficiency: 20 percent
    • Grating Pitch <2500 line pairs per mm (>=400 nm peak to peak)
    • Hoxel Size >=1 micron


The baseline HWD design uses one or two holographic optical elements (HOEs) (labelled as H1 & H2 in FIG. 1) to create an enlarged eye-box that maintains a wide FOV, and to remove optical image aberrations from the system. The eye-box size is determined by the number of recordings present in each HOE. HOEs can overcome the normal conservation of etendue that limits classical optical systems. This is because holograms have the ability to simultaneously create more than one optical state in phase space, whereas conservation of etendue would normally limit the device to a single optical state in phase space. In the case of HWDs, the phase space state is given by the product between the eye-box size and its FOV. However, HOEs can present more than one eye-box simultaneously to the viewer without compromising the FOV. As such, HOEs can help create an expanded eye-box that exceeds the classical limits of etendue conserving systems. In particular, HOEs can be used to “clone” a single eye-box perspective into an array of eye-box spots at the pupil of each eye. By carefully choosing the placement of each “cloned” eye-box spot, it is possible to ensure that the eye pupil always receives a complete image across an expanded eye-box area. Eye-box spots are also known as exit pupils.


There are two fundamentally different types of holograms, namely surface and volume. A detailed discussion of thin vs. thick (surface vs. volume) holograms is provided below. Surface holograms are orders of magnitude easier to print directly from computer-generated data. Over the last seven years, Technicolor has developed an extremely effective method for using Blu-ray disk manufacturing to generate computer-generated metalized masters, from which high quality replicas are subsequently mass-produced through an embossing process. Blu-ray printing technology yields optical quality superior to that of volume holograms, with accurate representations of the computer-generated data. Technicolor's holographic grating consistency, substrate surface flatness, and manufacturing repeatability make their approach highly desirable.


Surface holograms produce multiple diffracted orders in addition to the one designed for the HWD operation. These additional diffraction orders generate noise in the system that can scatter into the environment or the eye. The uniformly scattered noise that does enter the eye leads to a loss in contrast of the displayed imagery. In addition to generating multiple diffraction orders, surface holograms have no ability to discriminate between different wavelengths or angles of incoming light. As a consequence, surface holograms cannot selectively operate on specific light while filtering out the rest. This makes them much more susceptible to generating background noise than volume holograms.


Volume holograms follow Bragg diffraction, which selectively diffracts light coming from specific wavelengths and directions. The rest of the light is strongly filtered (reflected), resulting in a very significant reduction in noise generation and propagation within the system. Grating information is stored within a volume instead of on a single surface, enabling volume holograms to contain much more information than their surface counterparts. In fact, it is possible to encode hundreds of different surface holograms into a single "multiplexed" volume hologram. While many types of materials can be used to store volume holograms, we will only consider photopolymer holographic material (PP), since it is best suited for mass production and has high performance. More specifically, we favor the newer Bayer photopolymer, since it has improved optical performance over the competing, and older, DuPont photopolymers. Unfortunately, volume holograms cannot be directly created from computer-generated data in the same manner as Technicolor's surface holograms. In addition, volume holograms cannot be mass-produced with the same simplicity, low cost, and ease as surface holograms embossed from metalized Blu-ray master holograms. Volume holograms are only viable when the performance of an equivalent surface hologram is found to be insufficient. The Bayer and DuPont photopolymers are typically 10-12 microns thick, sufficient to enable significant wavelength and angular selectivity.


Hexagonally packed eye-box spots of a given separation are used to most efficiently create an eye-box of a certain size. Light is split among the spots, so for a given eye-box size, a larger spot separation will increase the light entering each spot. The maximum spacing is constrained by the size of the eye's pupil. An eye-box centered on the optical axis of the eye needs to be large enough to accommodate the lateral movement of the eye that occurs when the user's gaze shifts to comfortable extremes. A first-order approximation of this value is derived below and is 9 mm in diameter. The eye-box size can also be used to accommodate interpupillary distance (IPD) variations among a population of users. An IPD range from 51 mm to 74 mm accounts for the majority of the population. If the eye-box is to account for both the gaze range of the eye and the IPD range of the population, then it must be at least 22.5 mm horizontal and 11 mm vertical. This is described more in depth below, as previously indicated.


Each exit pupil is created by an independent holographic recording in the physical medium. Ultimately the signal-to-noise ratio found in each spot limits the number of spots that can be useful. In particular, as the number of spots in the system is increased, there is a corresponding increase in the level of noise present. Unfortunately, with surface holograms in particular, this increase may not merely be linear with the number of spots but can be strongly nonlinear and disproportionate. Fortunately, volume holograms offer greater immunity to crosstalk noise, increasing the feasible number of spots.


There are two types of noise present in the head worn displays under consideration: crosstalk noise and scattering noise. Scattering noise is the result of optical perturbations from microscopic structures in the holographic material that cause unintended light scattering. Crosstalk noise is the result of signals from different holographic layers getting diffracted into unintended directions by subsequent holographic layers. If the total number of generated exit pupils (EPs) is small, the scattering noise is usually insignificant. However, as the number of spots is increased, the diffraction efficiency of each spot is reduced and the scattering noise grows. The maximum number of spots is reached when either the crosstalk noise or the scattering noise begins to out-compete the intended diffracted signals. For volume holograms with multiple exposures, crosstalk effects can be mitigated, and scattering noise is likely to be the limiting factor for the eye-box size. For surface holograms, however, crosstalk noise is dominant and determines the maximum number of EPs available.


While the surface hologram technology is the least expensive to create and the easiest to mass produce, it is limited in application since it is more susceptible to crosstalk noise. This is because it has no ability to discriminate between different angles of incidence and therefore cannot filter out the unwanted optical angles that cause the crosstalk noise. In the quest to determine the practical limits on the eye-box size, a baseline HWD design has been explored with surface hologram technology. Preliminary computer simulations of its crosstalk noise have suggested that at least seven EPs can be created in a surface hologram without unacceptable noise levels. As a result of these simulations, the baseline design has been set at 7 spots arranged in a hexagonal pattern with a spot interspacing of 2.5 mm. The resulting eye-box size of this design is ~10 mm in diameter. While this baseline design offers some insight into a potential eye-box size, further research is required to determine the maximum upper limit on the eye-box size achievable with surface hologram technology. To completely eliminate the crosstalk, color filters can be placed on a surface HOE to prevent, for instance, blue light interacting with a red grating.


These limitations are not a problem when using volume hologram technology; however, other problems arise. Volume holograms, such as those made using photopolymers, can discriminate between different angles of incidence and therefore offer a significantly reduced level of crosstalk noise for the same number of spots. At this point, more experimentation is needed to determine the maximum number of spots that can be sustained by either volume or surface holograms in this application. There is good evidence from the work of other researchers that a very large number of holograms can be successfully recorded ("multiplexed") within a single substrate. For example, one holographer, Steve Hart, has successfully demonstrated recording as many as 400 holographic exposures in a single volume hologram. In such a case the modulation depth of the photopolymer must be accounted for in order to preserve a high level of diffraction efficiency.


With this evidence, it appears that many more EPs may be possible for a volume hologram-based system than for a surface hologram one. With volume holograms, a sufficiently large number of spots, with a correspondingly larger eye-box size, could even eliminate the need for mechanical adjustment of interocular spacing. As an example, as shown in FIG. 4, 34 spots are sufficient to generate a single-color display eye-box of 22.5 mm×12 mm. If the display is full color instead, then 135 spots would be required for the same eye-box size. However, as stated above and derived below, the largest eye-box required is 22.5 mm×11 mm. Preliminary research has indicated that such eye-box sizes may indeed be possible with the volume hologram approach. FIG. 6 shows the number of holograms required to generate particular eye-box dimensions. As the eye-box size increases, so do the number of required holograms and the cost to create them.
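
The scaling of spot count with eye-box area can be sketched with a simple hexagonal-packing count (illustrative geometry only; it does not reproduce the design analysis behind FIG. 4 or FIG. 6):

    import math

    # Sketch: count hexagonally packed exit-pupil spots covering a W x H eye-box
    # at a given spot spacing. Rows are spaced by spacing*sqrt(3)/2 and alternate
    # rows are offset by half a spacing.
    def hex_spot_count(width_mm, height_mm, spacing_mm):
        row_pitch = spacing_mm * math.sqrt(3) / 2
        rows = int(height_mm // row_pitch) + 1
        count = 0
        for r in range(rows):
            offset = spacing_mm / 2 if r % 2 else 0.0
            count += int((width_mm - offset) // spacing_mm) + 1
        return count

    print(hex_spot_count(22.5, 12.0, 2.5))  # rough count for the example eye-box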


HOE materials and fabrication methods are discussed in what follows. There are five possible hologram formats for HOE fabrication methods under present consideration:

    • (Method 1) Standard Sinusoidal CGH
    • (Method 2) Blazed CGH
    • (Method 3) Standard CGH Transferred to PP
    • (Method 4) Blazed CGH Transferred to PP
    • (Method 5) Analog PP


(Method 1) Standard Sinusoidal CGH


The standard computer generated hologram (CGH) is the most basic surface hologram format available. Based on Blu-ray production technology from Technicolor, it is the easiest and cheapest hologram to mass produce. These holograms contain sinusoidal grating structures that result in three diffracted orders: plus one, zero, and minus one. Of these three orders, only the plus one order contributes to the desired signal and the remaining orders only contribute to noise. In the design of HWDs, the scattering noise of the Standard Sinusoidal CGH limits the eye-box size that can be supported by such holograms.


(Method 2) Blazed CGH


Blazed computer generated surface holograms use a sawtooth grating structure rather than the sinusoidal or binary structure of more standard computer generated holograms. This has the potential ability to generate only the plus one diffracted order, without the added noise of the minus one and even the zero diffracted orders. At this moment, blazed surface holograms are in developmental production stages at Technicolor, and the full capability for mass production of blazed surface holograms is not yet known. If Technicolor can be successful in mass-producing blazed surface holograms, this represents the best possible performance in a surface hologram technology. With the use of blazed surface holograms, it becomes possible to increase the size of the eye-box relative to the Standard Sinusoidal CGH.


(Method 3) Standard Sinusoidal CGH Transferred to PP


The previous two hologram formats under discussion have been surface holograms. Next we will consider the production of volume holograms in photopolymer (PP). As mentioned previously, volume holograms suffer from an increased difficulty in using computer generated data. Rather than attempting to write computer generated holograms directly into a volume hologram, it is proposed to first create a surface hologram that is optically transferred into the volume hologram by reconstruction of its wavefront. In fact, in the most general case, it is necessary to produce two surface holograms that correspondingly determine the reference and object recording wavefronts for the volume hologram. When the Standard Sinusoidal CGH from Technicolor is used, the zero and minus one diffracted orders of each surface hologram must be suppressed by some optical means such as locating the surface hologram some distance away from the volume hologram and blocking the unwanted diffracted orders.


(Method 4) Blazed CGH Transferred to PP


If a Blazed CGH can be used instead of the Standard Sinusoidal CGH, the minus one diffracted order is no longer present and it becomes possible to simplify the transfer of optical data into the photopolymer. In some cases, it even becomes possible to place the surface hologram in direct physical contact with the volume hologram and expose the light through the surface hologram into the volume hologram.


(Method 5) Analog PP


Computer generated holograms can offer unique advantages over traditional holograms. In particular, CGHs may offer solutions for problems that are impractical to solve in any other way. In addition, CGHs eliminate any uncertainty in the precision of the resulting hologram and may not require as much investment in optical hardware. In other cases, however, a conventional holographic recording set-up may be superior when the added cost in both time and expense of computer-generated holographic production places a high burden on the production process, provided a traditional hologram recording geometry can be found. In particular, for the maximum possible eye-box size, a large number of independent exposures must be incorporated into the volume hologram from uniquely created surface holograms. With such a large number of exposures, the required number of surface holograms can become a significant cost in the overall operation, and a more traditional hologram fabrication approach may be worth further consideration.


H1 and H2 play different roles in the optical HWD design, yet they are highly interdependent (see FIG. 1). While any of the five formats described above may be used for either H1 or H2, the nature of these holograms lends them to different solutions. In particular, H1 will generally experience much less crosstalk and background noise than H2. As such, it is more likely that H1 can be fabricated with a direct surface hologram solution (Methods 1 or 2), while H2 is better suited for a volume hologram solution in photopolymer (Methods 3, 4, or 5). In some cases, however, where the need for inexpensive mass production is great and the required eye-box size can be relaxed, it may be possible to directly fabricate both H1 and H2 as surface holograms (Methods 1 or 2).


The two holographic optical elements need to be tested as part of a system with the curved lens and projection optics. The best way to test these elements is on a bench-top test-rig station that includes both the curved lens and the projection optics. The same test rig will likely be used to align and bond the two holograms and the curved lens together. The projection optics on the bench-top test-rig station could either be those fitted to the final HWD assembly or be integrated into the test-rig station.


The testing process involves measurement of both the eye-box properties and the final image on the retina. In particular, the test rig needs to measure the size and brightness of each spot within the expanded eye-box of spots. In addition, the projected image properties of point spread function, modulation transfer function, contrast, field-of-view, spatial distortion, and sharpness through each eye-box spot should be measured. Both of these measurements may be accomplished initially and qualitatively by human inspection, in order to achieve coarse alignment of the components, and then quantitatively with machine vision methods to accomplish the final component alignment.


The eye-box properties are influenced by the thickness and dynamic range of the volume hologram used in the system. These material considerations ultimately limit the maximum eye-box size we are able to produce. Hologram quality can limit performance due to the introduction of aberrations in the display image formation. In particular if the emulsion shrinks or swells during fabrication the spatial resolution and diffraction efficiency performance suffers leading to image degradation. Mechanical registration and tolerance can limit the throughput of the volume hologram fabrication and assembly.


There are several emerging technologies that could dramatically improve the performance of the holographic optical elements. These include computer generated volume holograms and other new methods to fabricate custom graded index refractive elements. Both of these technologies could enable improved signal-to-noise performance and expanded eye-box sizes for the system. It appears likely that these new technologies would be the result of performance gains in 3D printing of optical elements. In particular, as 3D printing continues to improve in spatial resolution and material flexibility, it seems to be only a matter of time before 3D printed volume holograms and optical components in general become feasible.


By 2020, due to the loss of market share from video streaming services, it is highly likely that Blu-ray fabrication will wane unless it finds a new market such as holographic optical element manufacture. At the same time, with further increases in head worn device production, it is quite likely that holographic optical element manufacture will become more mainstream and perhaps commonplace.


Compared to other types of high technology, such as integrated circuit manufacture, the risks and challenges of holographic optical elements in either surface or volume format are quite minimal. With sufficient volume demand, hologram production could easily become mainstream. The biggest manufacturing risk/challenge is going to be the alignment and final assembly. In particular, tight mechanical tolerances could hinder large volume production. Extra care must be taken in the mechanical design to relax assembly tolerances as much as possible. At the same time, the manufacturing methods will require fully automated alignment and high speed processing in order to minimize costs and increase reliability.


Until now, there has not been a strong market for the mass production of HOEs. Most of the mass market in holography is centered on anti-counterfeiting measures, decorations, and product labeling. With the emerging market of head worn displays, this is likely to change. At present, surface hologram manufacturing based on Blu-ray technology has matured to the greatest degree. In particular, mass production of Blu-ray based holograms is fully realizable with today's manufacturing capabilities. Unfortunately, Blu-ray technology cannot directly support the mass manufacture of volume holograms. While excellent photopolymer materials now exist for mass production, the actual mass production methods for volume holograms are still emerging.


Blu-ray disk manufacturing, which can support surface hologram production, costs approximately $1 USD to produce 4 holograms in quantities of 10 million. This does not include initial non-recurring expenditures, which are in the tens of thousands of dollars. Our current design relies on volume holograms, which would require custom manufacturing and material testing. Our estimate to produce these 4 volume holograms (enough for 2 units) is approximately $2 USD in quantities of 10 million. Non-recurring costs would be on the order of $500,000. These costs depend on many factors, which include access to materials, investment or collaboration with suppliers, fabrication facilities, and process automation.


Curved Transparent Mirror (CTM)

The role of the curved transparent mirror (CTM) is to provide a reflective substrate from which the display video content can be reflected into the eye with high efficiency, while maintaining a nearly unnoticeable clear see-through vision experience. The curvature of the CTM plays a critical role and allows for extra degrees of freedom within the design.









TABLE 3

Current CTM design specifications (likely to change)

Radius of Curvature    Distance from entrance pupil    Mirror tip angle
50.8 mm                45 mm                           30 degrees









The baseline system has a spherical lens that is coated to be highly reflective at the three laser wavelengths of the projection system while maintaining a low reflectance (anti-reflectance coating) at all other wavelengths. This allows the lens to remain optically transparent across most of the visible spectrum. In addition, this lens has a constant wall thickness that contains no optical power and does not magnify or distort the see-through vision of the system. Although other curved surface shapes, such as elliptical or parabolic, have been considered, the baseline spherical lens shape has the least demanding mechanical tolerances and is the least expensive to obtain and mass-produce. The use of a spherical surface simplifies the aberration mitigating properties incorporated into the holographic optical elements. In addition, while elliptical and parabolic surfaces may behave ideally for a single off-axis design angle, these shapes have more severe aberrations at other angles. In contrast, the aberration properties of the spherical surface shape are far more relaxed, and performance is less sensitive to changes in angle.


The material of choice for applications requiring the ultimate in eye protection is polycarbonate. Polycarbonate is utilized in safety goggles, riot shields and motorcycle helmet visors because it is the toughest transparent material. There are bullet-proof transparent ceramics; however, these materials spall fragments from the surface opposite to that impacted, and therefore do not protect the eye from the more common but less severe impacts. The optical transparency of certain grades of polycarbonate exceeds that of many glasses. The material has one limitation: it has a relatively soft surface prone to scratching. To solve this problem, anti-scratch coatings have been developed in the ophthalmic industry. In addition, there are coatings to repel dirt and water, resist smudges (oleophobic), and reduce reflections (anti-reflection coatings). Polycarbonate lenses can be mass produced using injection molding techniques. This is relatively low cost, although initial NRE is required to manufacture the molds.


Looking through a flat window, if the window is rotated, the line-of-sight is displaced laterally but not deviated in angle. Looking through a curved window along the radius of curvature, there is no displacement or angular deviation. However, looking through a curved window other than along a radius of curvature, there is an effective amount of prism, and the line-of-sight is slightly deviated in angle. An exaggerated case is shown in FIG. 7 to explain the concept. The prism exists because the slope of the surface at the exit of the rays differs from the slope of the surface at the entrance of the rays. If the angular deviation is not the same for both eyes, then unless the brain performs extra work to correct the problem, the two eyes will not look in exactly the same direction. High-end sunglasses are designed to have "horizontal prism balance" to mitigate this issue. Cheap sunglasses do not have this consideration; they may feel less comfortable to wear, and provide vision that is less clear or creates more visual fatigue. The test is that two laser beams separated by a nominal IPD are made to converge at some distance (tens of meters), and the sunglasses are placed in front of the beams to determine their effect on the convergence. With curved transparent mirrors in front of the eyes, comfort in wearing the proposed HWD can be affected by this issue. When the eye-box is expanded to cover a range of IPDs, the entrance pupils of the eyes at the extreme IPDs will be displaced from the centers of curvature of the spherical lenses. It must be checked that the prism balance is within desirable limits. The inner concave surface will be coated to reflect the display light and therefore must maintain the required curvature. In principle the outer surface could be machined to provide personal ophthalmic prescriptions, so that spectacles would not have to be worn underneath the HWD lenses. Volume costs on the order of 10 million units will be relatively inexpensive compared to other components because we are using spherical optometrist blanks in our current design. The estimated volume cost is <$1. This proves beneficial not only in cost but also in the added safety it provides for the user.


Rugate Coatings

In the baseline design, a curved window with spherical surfaces (spherical meniscus lens without power) is placed in front of each eye. A reflective coating on the inner surface is utilized to direct images into the eye, see FIG. 8. A rugate coating is deposited onto the inner lens surface to make it a highly reflective mirror at the projector laser wavelengths (450 nm, 520 nm, 635 nm) while allowing the lens to remain optically transparent for the rest of the visible spectrum. As such, the coating can provide >80% transmission for see-through photopic vision, see FIG. 9. In addition, this lens has a constant wall thickness such that it has no optical power and does not magnify or distort see-through vision.


The purpose of having a curved lens in front of each eye is to reflect the displayed light to the eye, and to transmit all other wavelengths. Under the premise that the augmented reality display should first do no harm to the see-through vision, the photopic transmission must be as high as possible so vision is not impeded. This is particularly important in dark conditions, in which wearing a pair of sunglasses would represent a handicap.


Rugate coatings are an advanced type of thin film coating featuring layers with continuously variable index of refraction. These coatings were originally developed by Walter Johnson and Robert Crane at the Wright Laboratory at Wright-Patterson AFB beginning in about 1982. The technology is now commercially available. Rugate technology allows the production of very narrow notch filters without subsidiary peaks to hinder transmission at other wavelengths. Rugate coatings are often used to provide laser eye protection (LEP) with minimal compromise of normal photopic vision. Use in LEP implies that these coatings can provide very significant optical density at the required wavelengths. FIG. 10 shows a coating once offered in the Edmund Optics catalog, with optical density exceeding 3.0 at the design wavelength. FIG. 11 demonstrates a filter designed for LEP which offers an optical density exceeding 5.0 for 532 nm and NIR, with reflectivity exceeding 99.5%. The transmission is approximately 92% everywhere else. These filters are deposited on flat substrates.


Rugate coatings have a refractive index profile that varies continuously with depth. This differs from the discontinuous refractive index profile produced by conventional optical coatings comprised of a stack of layers. The continuously varying refractive index allows for the creation of coatings with very high reflectivity over extremely narrow wavelength bands, allowing for photopic transmission exceeding 80%. Conventional coatings have not been demonstrated to achieve as high a photopic transmission for a given optical density in the reflection band. The rugate coating would provide an anti-reflection coating for the transmitted wavelengths. The higher the optical density at the center wavelength and design angle of incidence, the higher the reflectivity at other nearby angles of incidence. Higher optical density at the center wavelength requires a thicker coating.


Optical coatings are in general most easily applied to glass, where higher temperatures and vacuum deposition techniques work well. However, lenses placed in front of the eye are no longer made of glass for safety reasons. Almost all ophthalmic lenses today are made of plastics. The ophthalmic plastic with the best rating for eye protection is polycarbonate, which is utilized for riot shields and motorcycle helmet visors. Optical grades of polycarbonate exist that actually have higher transmission and clarity than many types of glass. Polycarbonate is extremely tough but has a relatively soft surface and scratches more easily than some other plastics. Anti-scratch coatings have been developed to solve this problem.


The inner surface of the lens would be coated with the rugate or conventional coating to reflect the display light to the eye. Current designs specify the surface as spherical. The lens can therefore be manufactured using a standard ophthalmic lens blank, provided that the optical design is compatible with a spherical curvature equal to that of a standard base curve. A possible option would be to machine spectacle prescriptions into the outer surface of the lens. However, the lens would nominally be machined with no see-through power ("plano") and with a thickness as thin as possible, consistent with the required rigidity.


It might seem doubtful that a vendor for rugate coatings on curved polycarbonate could be located. However, there happens to be a defense-related application for just such a lens. Laser eye protection (LEP) is becoming important in the military, and a defense contractor has been located who deposits rugate coatings on curved polycarbonate lenses for this purpose. They have verbally quoted us in the past for designing a coating for our wavelengths and depositing it on lenses. Rugate coatings are ideal, as they can provide optical densities of OD5 or more at any set of laser wavelengths, yet at the same time provide high photopic transmission.


Applicants have searched for additional vendors and located one that is willing to try depositing rugate coatings on curved plastic, but they would first have to develop a process for deposition on plastics. Most coating vendors deposit on glass only. A vendor was found that deposits rugate at low temperatures and can deposit on plastics; however, they state that their process uses liquids and is incompatible with a curved surface.


The achievable photopic transmission can be estimated in an Excel spreadsheet given the widths of the reflectivity bands. There are several issues. The wavelength of laser diodes varies with part number and from projector module to projector module. The wavelength of any given laser diode varies with temperature. In Celluon Pico projectors utilizing Sony-made scanning engines, there are two green lasers (close but not equal in wavelength) and two red lasers (close but not equal in wavelength). The purpose of the two lasers could be to suppress speckle in imagery displayed on diffusely reflecting surfaces. Speckle is absent in the proposed HWD application, as the lasers are not reflected from a diffusely reflecting surface. However, if the Sony engines are utilized as made, the effective source bandwidth is at least tripled for the green and red wavelengths. One additional factor is that the angle of incidence varies from top to bottom of the lens, and if the same coating design is deposited on all portions of the lens, the width of the reflecting bandwidth must be increased to allow for this variation in angle of incidence. However, allowing for these factors still provides very high photopic transmission, exceeding 80%, as demonstrated by the curved combiner shown in FIG. 9.
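
A rough version of that spreadsheet estimate can be written directly, as sketched below; the Gaussian stand-in for the CIE photopic curve, the 92% out-of-band transmission, and the 20 nm notch widths are all assumptions chosen for illustration:

    import math

    # Sketch: photopic transmission of a coating that fully reflects inside
    # narrow notches and transmits ~92% elsewhere. V(lambda) is approximated
    # by a Gaussian centered at 555 nm.
    def photopic_V(lam_nm):
        return math.exp(-0.5 * ((lam_nm - 555.0) / 41.0) ** 2)

    def photopic_transmission(notches, t_pass=0.92, lo=400, hi=700):
        num = den = 0.0
        for lam in range(lo, hi + 1):
            v = photopic_V(lam)
            blocked = any(abs(lam - c) <= w / 2 for c, w in notches)
            num += 0.0 if blocked else v * t_pass
            den += v
        return num / den

    # 20 nm wide notches at the three projector wavelengths.
    print(photopic_transmission([(450, 20), (520, 20), (635, 20)]))  # ~0.76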


The coating would be tested for reflectivity at the display wavelengths over the surface area utilized. The correct angle of incidence would be required for each spot on the lens, which would require a fixture to hold the lens and scanning source in the correct configuration. To eliminate the effects of power drift, two power meters may be utilized, one monitoring a fraction of the output from the scanning engine (beam splitter used to sample) and the other looking at the reflected light. Small integrating spheres may be required to average the non-uniformities of the detectors.


The coating would be tested for photopic transmission at the angle of incidence utilized by the eye when the lens is worn. The angle of incidence would then be varied to cover the see-through field-of-view. A constant-power white light source of known effective temperature could be measured with a power meter after passing through the lens at the same angle of incidence utilized by the eye. A method to correct for the photopic curve would be required. Leakage of beams through the lens is probably not a concern, but could be measured. The amount of light reflected is correct for viewing, so any small fraction leaking through would hardly be visible.


Photopic transmission is given in percent, 100% meaning all external visible light (lumens) reaches the eye, and 0% meaning that no visible light reaches the eye. The requirement is a minimum of 35%, but with rugate coatings values in excess of 80% should be possible, and values exceeding 90% may be possible. This is made possible since polarizers are not required.


It is possible that the coating could have higher reflectivity in some regions than others, leading to a non-uniformity in reflection. If the non-uniformity of the light reaching the eye is less than 0.3%, it cannot be detected even with the best contrast sensitivity (of the order of 300) at high luminance; see FIG. 12. The luminance non-uniformity specification is better than 85% (≤15% non-uniformity). A fixed non-uniformity could be corrected for in software.
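
A software correction of that kind might look like the following sketch (the gain-map representation and the function are hypothetical; the text does not specify a correction pipeline):

    # Sketch: pre-scale pixel drive values by a measured reflectivity gain map
    # (1.0 = nominal) so that the luminance reaching the eye is uniform.
    def flat_field_correct(image, gain_map):
        return [[min(1.0, px / g) for px, g in zip(row, grow)]
                for row, grow in zip(image, gain_map)]

    frame = [[0.5, 0.5], [0.5, 0.5]]
    gains = [[1.00, 0.95], [0.90, 1.00]]   # hypothetical measured map
    print(flat_field_correct(frame, gains))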


There are emerging hard-coat technologies that can create notch filters like a rugate. In the past, Edmund Optics sold rugate notch filters. They still sell narrow notch filters, but it is unclear if they are rugate coatings. When contacted a few years ago, their process required high temperatures and vacuum, compatible with glass. The obstacle would be to develop a deposition process compatible with some optical plastic material. One idea is to first coat the plastic with a barrier layer such as silicon monoxide, and then to coat the barrier layer with the desired coating. Methods may be required to reduce outgassing of the plastic during deposition.


The 2020 roadmap of rugate or similar coatings may be trailblazed by efforts set forth by the military. Provision of laser eye protection while providing high see-through photopic transmission is becoming a larger need for the military and for pilots every day. The proliferation of very powerful, inexpensive and compact solid-state lasers is driving this need. Missiles are very expensive, while drones are inexpensive. It has become clear that shooting expensive bullets (missiles) at inexpensive targets (e.g. drones) is not an economically favorable way to fight battles. Therefore there is a substantial push to develop laser weapons, for which the cost of a "bullet" is small compared to the cost of a missile. This suggests that future battlefields will feature many deployed laser weapons, and will require all warfighters to wear appropriate laser eye protection. Therefore the necessary rugate-like technologies will be developed due to this military need.


Applicants are currently aware of only one vendor that deposits rugate coatings, which presents a risk. There is a second vendor willing to research how to accomplish it. Rugate coatings are thick and take longer to deposit than conventional coatings. The time required for deposition factors into the cost of the coating. However, significant thickness is only required for obtaining the ultimate laser eye protection, which is not required in the HWD application. Military personnel have mentioned that the increased thickness of rugate coatings makes them prone to delamination if the temperature varies significantly. However, when the LEP vendor was questioned on this topic, they denied that this was an issue for their coatings.


Large volume production of these coatings has yet to be exercised. The rugate coatings are currently produced by a sole-source vendor and are expensive. Rugate coatings might be made a commercial product, but it is likely this has yet to be attempted. Conventional coatings exist, with reduced photopic transmission. Anti-reflection, anti-scratch, and hydrophobic coatings are mass-produced for ophthalmic lenses. We are optimistic that, with the correct partners and process, the 10 million unit volume cost may be made as low as $2. Typically, special vacuum deposition runs cost on the order of $1000-$1500. The question becomes how many parts can be coated in one run. Repeating a regular process can be less expensive. This is possibly the one part with the largest uncertainty in mass production cost.


Localized Opaqueness Control

In real world scenes, most objects are opaque (solid) and occlude (block) light coming from their background. Display imagery in AR HWD systems does not behave this way and can appear unnatural. The AR objects displayed in see-through HWDs appear translucent rather than opaque, since light from the projected AR object mixes with light from the background scene. As a consequence, the virtual object appears ghostly, and the contrast is diminished. It also becomes difficult to determine the distance to the AR object, since occlusion is used by the brain as a cue for this task. The projector illumination can be increased so light from the AR object overcomes the background light, but this makes the AR objects unrealistically bright. In some situations where the background illumination is very bright, the displayed imagery can be washed out or unrecognizable.


The two conventional methods to ensure AR objects appear opaque and occlude their background are video relay and optical relay. In video relay there is no optical see-through path; rather, cameras are placed in front of the eyes and their imagery is presented in a display. Here software is used to replace portions of the scene with AR objects. No such display can reproduce the field-of-view, color, resolution, and visual fidelity of optical see-through vision. In optical relay, the scene is optically relayed to a window with localized opaqueness control, such as an LCD panel, and then optically relayed to the eye. This approach offers very limited see-through field-of-view and transmission, and the required hardware is physically large. Judged against the goal of "doing no harm to natural see-through vision," both approaches fail.


Applicants have proposed a novel approach for localized opaqueness control that does not suffer from any of the issues plaguing the optical and video relay schemes. Opaqueness is generated at spatially localized regions in the see-through lens of the HWD at video rates. It cannot, however, be in focus with the AR objects generated amongst the background scene, since it is physically located close to the eye. This essentially means that the resolution of the opaqueness is significantly reduced and typically less than that of the display. Nevertheless, the out-of-focus localized occlusion can dramatically improve the contrast of virtual objects and prevent washout. If the opaqueness is restricted to the interior of virtual objects rather than overfilling them, the objects appear significantly more solid and the presence of translucent edges is less noticeable. To obtain maximum opaqueness resolution with this concept, proper placement of the opaqueness requires knowledge of the location of the pupil of the eye within the eye-box, provided by the EHMS. This approach can be referred to as opaqueness-in-the-lens, since the opaqueness generating layer is placed on one of the two surfaces of the CTM substrate, which otherwise acts essentially as the see-through lens of a safety goggle. Applications include (1) provision for the occlusion of individual virtual objects, (2) provision to make regions behind text, maps, pictures and actionable indicators opaque, rendering the AR content easier to read against a bright background, and (3) provision for sunglasses of variable localized optical density.


The original method proposed for implementation of opaqueness-in-the-lens was to utilize fast photochromic dyes in the lens, activated by a scanning ultraviolet-A or Blu-ray (405 nm) beam, as depicted in FIG. 12. Fast photochromic materials exist, and this method has been demonstrated, but it utilizes significant power. The faster the photochromic dye, the greater the power consumption. There are alternative methods of implementation, as described in TABLE 4. LCD and electro-wetting approaches have greatly reduced power consumption. LCDs are a very mature technology, but due to the use of a polarizing filter, see-through transmission is limited to a theoretical maximum of 50%, and significantly less in practice. Pixelated electro-wetting can offer very high levels of opaqueness and very high levels of see-through transmission in the non-opaque state. Electro-wetting is therefore a method of considerable interest. However, there are currently no commercially available devices, although the technology has been demonstrated.


In TABLE 4, PC refers to "photo-chromic." BR refers to mutant "bacteriorhodopsin," which is a photochromic material produced in nature. It has excellent longevity; the sample Trex experimented with was prepared 15 years ago. Pseudogem-black is a nickname for a fast color-neutral photochromic material recently developed by a group in Japan. LCD refers to "liquid crystal display," which has the disadvantage that it requires polarizing filters to operate. Hence the transmission is a maximum of 50% in theory, and much less than that in practice. This would be a handicap when used in darkened conditions. Pixelated electro-wetting displays were considered as a replacement for LCDs with reduced power consumption, because the backlight would not have to suffer loss in a polarizer. However, the technology went up against a very mature LCD technology and manufacturing infrastructure and has been sidelined. Amazon has purchased AquaVista and may utilize the technology to produce a tablet-format reader. In the power column, "Low" refers to tens of milliwatts, "Medium" refers to hundreds of milliwatts, and "High" refers to watts of power consumption.









TABLE 4

Methods for implementation of Opaqueness-in-the-Lens
(PC = photo-chromic, LCD = liquid crystal display)

Method                Power      Curved Lens       Optical Density  Transmission  Other Issues
BR-PC                 Med        No issue          1.0+             Yellow tint   Power to clear; sunlight clears
Pseudogem black-PC    Very High  No issue          (0.5)            Good          Lifetime, fluorescence, laser λ, thermal decay
Pseudogem green-PC    High       No issue          (0.5)            Good          Lifetime, laser λ, thermal decay, not color neutral
Diffusing LCD         Low        Issue             --               Good          Blocks contrast not light; needs TFTs for lock-on; electrical connection
Monochrome LCD        Low        Issue             (0.8)            50% max       Needs TFTs for lock-on; electrical connection
Laser Written LCD     Med        Less of an issue  (0.8)            50% max       Polarized skylight activation?
Electro-wetting       Low        Issue             2.0+             Good          Needs TFTs for lock-on; electrical connection









The performance parameters to be tested and characterized include the following:

    • Photopic transmission in the non-opaque state (minimum optical density)
    • Maximum optical density in the opaque state
    • Uniformity of optical density in the opaque state
    • Number of grey scale values if optical density is variable
    • Spectral transmission of the film in the non-opaque state (color neutrality)
    • Spectral transmission of the film in the opaque state (color neutrality)
    • Turn-on and turn-off time constants
    • Opaqueness resolution in arc-minutes for given eye pupil sizes
    • Power consumption for full opaqueness normalized to solid angle


BR-Photochromic System Performance Specifications

    • Photopic transmission in non-opaque state >80%
    • Optical density 1.5 at 550 nm, less in the red, not color neutral
    • Yellow tint in transmitting state
    • Power consumption high to generate opaqueness
    • Power consumption required for non-opaque state (red LED embedded in lens)
    • Transition speed depends upon power consumption and ΔOD; 150 msec possible


Electro-Wetting System

    • Photopic transmission in non-opaque state >75% demonstrated
    • Optical density 2.0 color neutral
    • Color neutral in transmitting state
    • Power consumption low to generate opaqueness
    • No power consumption required for non-opaque state
    • Transition speed: 10 msec demonstrated


Pixelated electro-wetting is an emerging technology that offers perhaps the best overall performance, because of its greatly reduced power consumption and large optical density change.


The 2020 roadmap for localized opacity control includes pixelated electro-wetting devices that attempt to outperform the LCD approach due to electro-wetting's significantly reduced power consumption. This is due to the elimination of the polarizing filters. However, LCD has proliferated throughout the market and is well proven. Electro-wetting is newer and has not yet demonstrated longevity and reliability. However, the advantage of lower power consumption will eventually win in the wireless age.


Pixelated electro-wetting is currently being considered for e-readers. Amazon purchased the largest company working on pixelated electro-wetting devices (AquaVista). Gamma Dynamics is now a holding company licensing its technology. Gamma Dynamics devices had a limited lifetime, so further research on longevity is required. University groups are still working on electro-wetting displays. Varioptic in France is selling variable lenses based upon electro-wetting technology.


The fast photo-chromic "BR" material utilized in the demonstrated approach requires hermetic sealing to maintain the correct humidity value for optimum function. This is currently being developed.


The risk for this technology is that there is only one known source for BR: a company in Germany that uses it for security-related applications. They grow a mutant version of a bacterium and harvest the BR protein from the bacteria. However, the methods could be reproduced in other locations if necessary to support the new application. The photo-chromic approach generally has the challenge of high power consumption. The faster the PC material is driven, the more power is required. In the near UV there is no retinal hazard, and eye safety simply requires limiting the accumulated dose to the maximum permissible exposure published in the ANSI Z136.1-2014 standard. This is best accomplished by designing the system so that the small component reflected from the lens misses the cornea. Blu-ray (405 nm) is very affordable laser technology but unfortunately falls just beyond the 400 nm cutoff for retinal hazard. Laser diode technology for wavelengths below 400 nm is currently much more expensive, as there are currently no mass-produced devices for commercial products.


As stated earlier, Gamma Dynamics electro-wetting display devices quit working after a few years, so now that they are no longer being produced, no functioning devices are available for demonstration purposes. This indicates that longevity and reliability need work. It has become more difficult to get information from AquaVista since they were purchased by Amazon, but presumably they are actively looking at the possibility of an e-reader. However, Amazon purchased AquaVista from Samsung, who evidently decided not to pursue the technology for some reason. The challenge is to develop something slightly different from an e-reader. Fabrication on a curved surface with good transmission in the off state is required. Other groups have demonstrated both of these aspects. Regarding volume production, electro-wetting device manufacturing personnel claim that they can utilize existing LCD fabrication lines. BR is currently expensive, simply because it is not mass-produced for a commercial product. We hope that with strategic technology development the 10 million unit volume cost would be $10.


Dynamic Image Focus Control

The concept is that an electronically variable focus element placed just prior to the beam scanning mirror can be utilized to adjust or vary the divergence of the scanning beams. For focus at infinity, the beams are nominally collimated at the eye. In order to have the display in focus when the wearer of the HWD is focusing on something close, the beams must be divergent at the eye. Variable focus has several potential applications in the proposed HWD. These include the following:

    • To adjust focus of AR content such as text and diagrams to be in focus with near work or far work, optionally incorporating a mechanism of autofocus
    • To provide “true” or “complete” 3D imagery with both correct vergence and correct focus cues
    • To correct for defocus variations in the display field-of-view due to the optical design
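
The relation underlying the first of these applications is simply that the variable-focus element must supply a beam divergence equal to the vergence of the intended focus distance; a minimal sketch (the distances are illustrative):

    # Sketch: required lens power equals the vergence of the focus distance.
    def required_diopters(distance_m):
        return 1.0 / distance_m

    print(required_diopters(0.457))  # near work at ~18 inches -> ~2.2 diopters
    print(required_diopters(1e9))    # effectively infinity -> ~0 (collimated)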


Most HWDs are not variable focus, and many choose to have a constant focus at infinity. However, if the purpose of the HWD is, say, to provide schematics to a mechanic, the worker may prefer to have the schematics in focus with the parts being worked on, possibly to one side. It takes considerable time for a person to change focus from near to far, and back again. In one study of navy fighter pilots, subjects were required to recognize the orientation of a Landolt C optotype at 20/20 resolution, first at 18 inches and then at 18 feet. The minimum time for the pair of optotypes to be correctly recognized in succession was measured. A plot of the results is shown in FIG. 13. The minimum average time exceeds 500 msec for the youngest and fastest accommodating subjects. Other studies have measured much longer times for accommodation. A corollary is that if autofocus were to be employed, the variable lens would not have to be fast. An adjustment of several diopters can take half a second and still keep up with the fastest accommodating eyes.


It is suspected that visual depth cues must agree to prevent the nausea of simulator sickness in a substantial fraction of the population. Typically, to obtain 3D images, the retinal disparity is provided, but not the correct defocus corresponding to the vergence. For instance, in 3D movies presented on a flat screen, retinal disparity is provided to make objects appear to move close and far, but the eyes are always focused at a fixed distance (the screen). In cases of large motion in depth, a significant number of individuals will eventually experience nausea when only retinal disparity is provided to indicate range. It has been proposed that this is due to an evolutionary adaptation in which the brain decides that the only way such conflicting signals can arrive at the brain is if a poisonous substance has been consumed. Consequently an urge to throw up (nausea) is generated. The solution is to include a defocus adjuster in the display so that the focus of the virtual object agrees with the retinal disparity provided for it. If there is only one virtual object, or multiple virtual objects at essentially the same range, this would be straightforward. If there are multiple virtual objects distributed at significantly different ranges, an additional strategy must be employed. If the defocus adjuster is fast enough, all of the virtual objects can be given their proper focus. Another approach is to properly focus the virtual object being looked at (if gaze tracking is included in the HWD) or the one closest to where the head is pointing (e.g. in the center of the display FOV).


In some curved transparent mirror optical design architectures, there is a variation in defocus from top to bottom of the display. This variation can be corrected simply through a variation in the defocus lens power at the frame rate. At a 60 Hz frame rate the frame period is 17 msec. The response time of the Varioptic Arctic 316 electro-wetting lens is approximately 10 msec, and substantial optical power variation can be generated at 60 Hz.


When the scanning beams are smaller than the entrance pupil diameter of the eye, and the array of exit pupils does not fill the entrance pupil of the eye, then the depth-of-focus is increased, see FIG. 14. For instance, with 0.5 mm diameter beams and one exit pupil in the eye, even 5 diopters of defocus has little significant impact, and the display will be in focus when looking from infinity to 20 cm away.
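
A worked version of this depth-of-focus estimate, under the thin-lens approximation that angular blur is roughly the beam diameter at the eye times the defocus in diopters (our approximation, not a statement taken from the optical design):

    # Sketch: angular blur from defocus scales with beam diameter; a narrow
    # scanned beam therefore extends the depth of focus.
    def defocus_blur_arcmin(beam_diameter_mm, defocus_diopters):
        blur_rad = (beam_diameter_mm * 1e-3) * defocus_diopters
        return blur_rad * (180.0 / 3.141592653589793) * 60.0

    print(defocus_blur_arcmin(0.5, 5.0))  # ~8.6 arcmin for a 0.5 mm beam
    print(defocus_blur_arcmin(3.0, 5.0))  # ~51.6 arcmin for a 3 mm pupil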


Variable focus has been incorporated into three benchtop demonstration setups at Trex:

    • The roving high-acuity zone demonstration included an Optotune liquid lens, the power of which was controlled by varying a voltage adjusted (in the demonstration) by a knob.
    • One of the holographic demonstrators included a miniature electro-wetting Arctic 316 lens manufactured by Varioptic in France.
    • In a benchtop setup without a display, a variable LCD lens manufactured by LensVector has been tested and evaluated. It is one of the most compact lenses examined, and it was developed for cell phones.


The LensVector and Varioptic lenses were originally developed for cell phones. However, cell phones initially utilized mechanical lens adjusters, and mechanical adjustment currently dominates the "fancier" electro-optic technologies.


The results of testing and evaluation at Trex are the following. Optotune lenses are too large for a HWD. The optical quality of LCD lenses is not as good as that of electro-wetting lenses. Varioptic is willing to make a smaller and faster version of the Arctic 316, which is already small enough for a HWD. Their longevity appears to be good. Holochip claims that they can produce a variable lens that is faster than those made by Varioptic. However, they tend to make lenses for larger apertures.









TABLE 3

Variable Lens Options

Company     Technology       Model       Size (mm)        Weight  Dynamic Range (Diopters)  Speed  Power
Varioptic   Electro-Wetting  Arctic 316  8 × 2            300 mg  18                        10 ms  1 mW
LensVector  Liquid Crystal   LVAF        4.5 × 4.5 × 0.5  22 mg   >10                       20 ms  80 mW
Optotune    Liquid Lens      EL-6-18     18 × 19 × 9      6.7 g   18                        2 ms   350 mW









The performance parameters to be tested and characterized include the following:

    • Chromatic variation of defocus power, if any
    • Verify focus at infinity
    • Measure closest point of focus
    • Speed of focus change
    • Power consumption
    • Aberration level during rapid focus change
    • Verify autofocus performance, if applicable


Electro-Wetting Lens Performance Specifications

    • Clear aperture 2.5 mm
    • Power range 18 diopters (5 cm to ∞)
    • Transmission 97%
    • Operating temperature −20 to 60° C.
    • Silent
    • Power consumption 1 mW


Developing technologies that may improve performance include voice coil actuation, used commonly in cell phones. With the great infrastructure and investment in the cell phone industry, this technology is advancing rapidly, and it is unclear if the improvements have reached fundamental limits. It may be that a mechanical solution will be fast enough in the future and have less bulk, weight and power consumption than the alternatives. This industry will likely drive the 2020 roadmap of variable focus lens technology.


The risk lies in predicting whether the best solution is mechanical or another emerging technology. Every technology that has gone up against the mechanical solution in cell phones has essentially lost. However, the one useful parameter that may not be available in a cell phone focus adjuster is speed of focus. A fast enough lens might allow multiple virtual objects at varying range to all have appropriate focus. Volume production and cost on the order of 10 million units of variable lens technology has been accomplished by the cell phone industry. At these unit volumes the costs are less than $1 USD.


Battery Requirements

Power consumption in the Microvision IPM is 1-2 W at full video, which allows a full movie to be watched on a single Pico projector battery charge. The two major power draws are laser power and MEMS mirror drive power. Laser diodes are among the most electrically efficient light sources known, exceeding 50% at some wavelengths. The amount of light required to match the luminance of a CRT display is in the range of several hundred nanowatts, which is quite small. The lasers utilized in Pico projectors are designed to project a large bright image on walls, and have much higher average power than is necessary. Initially these engines will be utilized with fixed attenuation. But in the future the diode lasers will be replaced with smaller, less powerful diodes, more appropriate to the retinal scanning display (RSD) application. This will reduce power consumption and at the same time allow for even faster rates of modulation. The MEMS drive power essentially goes into air resistance, which can be reduced by hermetically sealing the devices in a reduced pressure atmosphere. Microvision has suggested that some air resistance is useful for providing stability to the scanning mirror and damping transients. Therefore a clear path exists for a significant reduction in power consumption.


The optical power actually required in the HWD is estimated as follows. The power P entering a pupil of area A from a display with luminance L and solid angle Ω is given by:










Peye = L [cd/m²] · A [m²] · Ω [sr] · (1/(683 lumens/watt)) · (1/V(λ))  Equation 5

In Equation 5, V(λ) is the luminous efficacy. If the eye-box has dimensions W × H, the required power to the eye-box is given by:










Peye-box = Peye · (W × H)/A = L [cd/m²] · (W × H) [m²] · Ω [sr] · (1/(683 lumens/watt)) · (1/V(λ))  Equation 6

A benchmark for the brightness of a display usable outdoors is the luminance of a typical photographic scene in full sunlight, which is about 5000 cd/m². This is also approximately the luminance of the full moon (maximum solar illumination) directly overhead (minimum atmospheric path). The power into a bright-sun-adapted 2.0 mm diameter pupil of the eye is:













Peye = 5000 cd/m² · (π(2.0×10⁻³ m)²/4) · Ω · (1/(683 lumens/watt)) · (1/V(λ)) = 23.0 μW · (Ω/sr) · (1/V(λ))  Equation 7

The solid angle of a spherical rectangle is given by:





Ω=(θ2−θ1)·(sin(φ2)−sin(φ1))  Equation 8


For a display with field-of-view of 80°×45°, the solid angle is given by:









Ω = 80°·(π/180)·sin(45°) = 0.987 sr ≈ 1.0 sr  Equation 9

With Ω ≈ 1.0 sr, matching a bright daytime scene at 555 nm (for which V(λ)=1.0) therefore requires a power into the eye of 23 μW. To reproduce this at 532 nm (for which V(λ)=0.8832) requires 23 μW/0.8832 = 26 μW. For reference, the luminous efficacy at various wavelengths is given in TABLE 5: Luminous Efficacy.









TABLE 5

Luminous Efficacy

λ (nm)   V(λ)
450      0.0380
473      0.1040
520      0.7100
530      0.8620
532      0.8832
555      1.0000
640      0.1750
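
Since the required power scales as 1/V(λ), the 23 μW benchmark at 555 nm converts directly to any of the wavelengths in TABLE 5. The following minimal Python sketch performs that conversion; the script and its names are illustrative only.

```python
# Sketch: required optical power at each wavelength to match the 23 uW
# benchmark at 555 nm, scaling by 1/V(lambda) from TABLE 5.

V = {450: 0.0380, 473: 0.1040, 520: 0.7100, 530: 0.8620,
     532: 0.8832, 555: 1.0000, 640: 0.1750}

P_555_uW = 23.0  # power into the eye for a 1.0 sr FOV at V(lambda) = 1.0

for wavelength, v in sorted(V.items()):
    print(f"{wavelength} nm: {P_555_uW / v:7.1f} uW")  # 532 nm gives ~26 uW
```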










The luminance of a typical computer monitor in an office is between 50 and 300 cd/m². At 200 cd/m² the pupil diameter is typically about 3.0 mm, and the power into the eye is approximately:













Peye = 200 cd/m² · (π(3.0×10⁻³ m)²/4) · Ω · (1/(683 lumens/watt)) · (1/V(λ)) = 2 μW · (Ω/sr) · (1/V(λ))  Equation 10

Suppose W=20 mm and H=9 mm. The power required in the eye-box is given by:










Peye-box = 53 μW · (1/V(λ)) at 200 cd/m² (3.0 mm pupil) = 1.3 mW · (1/V(λ)) at 5000 cd/m² (2.0 mm pupil)  Equation 11

The relevant number for a HWD is the 200 cd/m² case. Outside in bright sunlight the wearer should wear sunglasses, which decrease the apparent luminance and increase the pupil diameter to 3.0 mm, improving vision. A laptop screen, by contrast, is viewed through the sunglasses and must compete directly with the sun, so thousands of nits (cd/m²) are required. For a HWD, the display should be placed inside of the sun protection.


Therefore approximately 53 μW/eye is required at a luminous efficacy of 1.0. For the sake of argument, assume that the average luminous efficacy over the display colors is 0.3; the optical power requirement per eye is then 160 μW. Assuming the lasers are 10% efficient and the delivery optics are 10% efficient, the wall-plug power requirement is 16 mW/eye for full 100% video over 80°×45°.
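
To make this budget easy to reproduce, the following minimal Python sketch evaluates Equations 5 through 11 with the worked values from the text. The function and variable names are illustrative, not part of any referenced design.

```python
import math

LUMENS_PER_WATT = 683.0  # peak luminous efficacy at 555 nm

def solid_angle(h_fov_deg, v_fov_deg):
    """Solid angle as evaluated in Equation 9: (horizontal FOV in rad) * sin(vertical FOV)."""
    return math.radians(h_fov_deg) * math.sin(math.radians(v_fov_deg))

def power_into_pupil(luminance, pupil_diameter_m, omega_sr, V=1.0):
    """Equation 5: optical power (W) entering a circular pupil."""
    area = math.pi * pupil_diameter_m ** 2 / 4.0
    return luminance * area * omega_sr / (LUMENS_PER_WATT * V)

omega = solid_angle(80.0, 45.0)                      # ~0.987 sr, rounded to 1.0 sr
p_sun = power_into_pupil(5000.0, 2.0e-3, omega)      # ~23 uW  (Equation 7)
p_office = power_into_pupil(200.0, 3.0e-3, omega)    # ~2 uW   (Equation 10)

# Equations 6 and 11: scale the pupil power up to a 20 mm x 9 mm eye-box.
W, H = 20e-3, 9e-3
p_box_office = p_office * (W * H) / (math.pi * 3.0e-3 ** 2 / 4)  # ~53 uW
p_box_sun = p_sun * (W * H) / (math.pi * 2.0e-3 ** 2 / 4)        # ~1.3 mW

# Wall-plug estimate: average V(lambda) ~ 0.3, 10% laser efficiency,
# 10% delivery-optics efficiency (all per the text).
p_wall = (p_box_office / 0.3) / (0.1 * 0.1)          # ~16 mW per eye

print(f"Omega                = {omega:.3f} sr")
print(f"P_eye @ 5000 cd/m^2  = {p_sun * 1e6:.1f} uW")
print(f"P_eye @  200 cd/m^2  = {p_office * 1e6:.1f} uW")
print(f"P_box @  200 cd/m^2  = {p_box_office * 1e6:.0f} uW")
print(f"P_box @ 5000 cd/m^2  = {p_box_sun * 1e3:.2f} mW")
print(f"Wall-plug per eye    = {p_wall * 1e3:.0f} mW")
```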


Some estimates for overall system power consumption are presented in TABLE 6.









TABLE 6

Power Consumption Estimates (per eye)

Component/System           Power Consumption   Notes
Scanning Mirror(s)         500 mW              Full atmospheric pressure (less at lower P)
Diode Lasers               16 mW               If engineered for RSD rather than wall projection
Display Processing                             Depends on implementation
Variable Focus             15 mW               Varioptic 316 + driver
Ambient Light Control                          Piggyback on camera sensor
Night Attenuator           15 mW               Electro-chromic or photo-chromic + UV-LED
Automatic IPD Adjustment   340 mW              Squiggle motor + driver active
Opacity System             15 mW               Electro-wetting option w/o processing
Eye Tracking               85 mW               Ultra-miniature camera OV6211
Total                      630 mW              Not including processing, IPD & attenuation

Aside from processing, the major hardware power consumption is in the high-frequency scanning mirror, which dissipates power fighting air resistance; the 500 mW value is a rough estimate, and encapsulating the scanner at reduced atmospheric pressure would decrease it. Any active IPD system will be utilized infrequently, and is not needed at all if a sufficiently large eye-box is implemented. At 10 million unit volumes, the cost of a battery supplying the estimated 630 mW over a several-hour span is approximately $5.


Eye-Box Requirements to Accommodate IPD Variation Among the Population

The targeted IPD range is 54-71 mm (FIG. 15). With mounting via a headband, there is no issue with symmetry about the nose bridge. The exit pupil array targets a minimum pupil diameter of 3.0 mm (see FIG. 16). The pupil translates 1.0 mm for every 5° of rotation. Comfortable eye rotation is ±15°, and eye rotations of ±20° automatically induce head rotation so as to reduce further eye rotation requirements. Hence allowing for horizontal pupil translations of ±4.0 mm covers most typical cases (see FIG. 17).


Based upon the constraints imposed by the IPD range of the population, the pupil radius, and the eye rotation range, the required horizontal eye-box is given by:










eye-box = (74 mm − 51 mm)/2 + (2 × 4 mm) + 3 mm = 22.5 mm  Equation 12

Typical eye dimensions are shown in FIG. 18.


Eye-Box Requirement Experiments

To determine the eye-box requirements, several tests were performed and data available in the literature was analyzed. A homemade “bite bar” was constructed using the top of a chin/forehead rest fixture. For hygienic use a fresh freezer-type Ziploc bag was wrapped around the top bar of the chin/forehead rest and taped in place. The subject would bite down on the bar so that their head could not rotate during changes in gaze angle.


A series of targets was placed on the wall at a distance of 77.25″ from the eye. The zero-degree target was placed so that the subject was nominally looking straight ahead. Additional targets were placed every 5 degrees out to ±35 degrees in a horizontal plane containing the two eyes; see TABLE 7.









TABLE 7

Calculation of the target locations

Angle (degrees)   tan(θ)   77.25″ × tan(θ)
 0                0.000     0.00
 5                0.087     6.76
10                0.176    13.62
15                0.268    20.70
20                0.364    28.12
25                0.466    36.02
30                0.577    44.60
35                0.700    54.09
40                0.839    64.82
45                1.000    77.25
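
The target positions follow directly from the viewing distance and gaze angle. A minimal sketch that reproduces TABLE 7 (names illustrative):

```python
import math

# Reproduce TABLE 7: wall offset of each gaze-angle target at a
# viewing distance of 77.25 inches.
distance_in = 77.25
for angle in range(0, 50, 5):
    t = math.tan(math.radians(angle))
    print(f"{angle:2d} deg: tan = {t:.3f}, offset = {distance_in * t:.2f} in")
```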









The camera was mounted on a heavy-duty tripod so as not to move when activated, approximately 4.5 feet from the subject's face and just below the line of sight to the zero-degree target. The pupil diameter was measured by importing the zero-degree image into a drawing program, generating a line equal to the pupil diameter, and placing a line of the same length over the image of the ruler. The measurements were taken with the room lights on, and the pupil diameter of the subject was 4.0 mm.


Images were recorded for gaze at each of the targets, with the subject attempting not to move his head. From the perspective of the subject, the bite bar could not move significantly left or right, but it could move forward slightly and could twist to a small degree; the camera could move if the tripod moved. When the data were analyzed, it became apparent that the head did move a very small amount with respect to the camera. However, this effect could be calibrated out by referencing the pixel number at a certain position on the ruler. The positions of the two pupils with respect to the ruler were determined for each gaze angle using image processing software (MaximDL). The result of the analysis is shown in FIG. 19. The pupil translates significantly with gaze angle: the average translation is roughly 1 mm per 5° of rotation. This is the main result of the study.


Model of Displacement

The simplest model is that as the eye rotates, the iris rotates along a circular arc of radius R, which is the distance between the center-of-rotation of the eye (CR) and the iris, see FIG. 20. The displacement as a function of gaze angle is given by:





ΔX=R·sin(θ)  Equation 13


The corneal vertex is the point located at the intersection of the patient's line of sight (visual axis) and the outer corneal surface. The center of rotation (CR) of the eye is located approximately 13 mm behind the vertex of the cornea according to one reference, and 15 mm behind it according to a second reference. The anterior surface of the lens at the back of the iris is 3.8 mm behind the vertex of the cornea. Thus the distance R from the iris to the CR is in the range of 9.2 mm to 11.2 mm, with the midpoint of the range at 10.2 mm. R is a free parameter in the model because it varies from person to person. The best fit to the current data is R=11.6 mm, and this is the value used in the plotted model fit.


Upon further consideration, the value of R should be increased from the nominal value of 10.2 mm for the following reason. The iris is located approximately 4 mm behind the apex of the cornea. However, looking into the eye from the outside, the iris appears closer than this due to refraction within the eye. The phenomenon is the same as looking at the bottom of a swimming pool through the water filling it: the bottom appears closer by a factor equal to the inverse of the index of refraction of the water. The index of the aqueous component of the eye is 1.337, so the 4 mm actual distance to the iris from the apex of the cornea is reduced to an apparent distance of:












apparent distance = 4.0 mm/1.337 = 3.0 mm  Equation 14
Hence the iris appears from the outside to be 1 mm closer to the observer than it actually is, and thus the CR-to-iris distance appears 1 mm larger than it actually is. Instead of the nominal 10.2 mm distance from the CR to the iris, the apparent distance is 10.2 mm + 1.0 mm = 11.2 mm. This estimated nominal value is close to the best-fit value of 11.6 mm. In any event, the best fit of the data to the model for subject LS requires an effective value of R equal to 11.6 mm.
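
The fit of R is a one-parameter least-squares problem. The sketch below shows the computation; the (angle, displacement) pairs are hypothetical placeholders standing in for the bite-bar measurements described above.

```python
import math

# Fit R in the displacement model Delta_X = R * sin(theta) (Equation 13).
# The measurement pairs below are hypothetical placeholders; substitute
# the actual bite-bar data.
measurements = [  # (gaze angle in degrees, pupil translation in mm)
    (5, 1.0), (10, 2.0), (15, 3.0), (20, 3.9), (25, 4.9), (30, 5.8),
]

# Closed-form least squares for the single-parameter model y = R*sin(theta):
# R = sum(y_i * sin(theta_i)) / sum(sin(theta_i)^2)
num = sum(dx * math.sin(math.radians(a)) for a, dx in measurements)
den = sum(math.sin(math.radians(a)) ** 2 for a, _ in measurements)
R_fit = num / den

print(f"Best-fit R = {R_fit:.1f} mm")  # the text reports 11.6 mm for subject LS
for a, dx in measurements:
    print(f"{a:3d} deg: measured {dx:4.1f} mm, model {R_fit * math.sin(math.radians(a)):4.1f} mm")
```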


RSD Data:


A retinal scanning display was set up as part of the eye tracking development hardware. Several subjects placed their heads on the system's chin rest/forehead rest to minimize head motion and were asked to rotate their eyes (but not their heads) in a horizontal plane in both directions until the display became invisible. The points on the far wall at which each subject was looking when visibility was lost were noted, as was the distance from the eye to the far wall. Hence by simple trigonometry the angular spread over which the display was visible could be estimated. The results are shown in TABLE 8: Data from Trex RSD Vignetting Tests. Only vignetting in a horizontal plane was tested. Note that if the crossover point is not placed at the entrance pupil (EP) of the eye, but is instead placed in front of or behind the EP, then the apparent angular range free of complete vignetting will be increased at the loss of some instantaneous field-of-view. It is not clear whether the crossover point was placed at the EP when these data were recorded; variation in the position of the crossover point with respect to the EP could explain the variation in the recorded values. In addition, the pupil diameter was not measured and is unknown.









TABLE 8

Data from Trex RSD Vignetting Tests

Subject   ±Angle (degrees)   Room Lights
LS        9.2                Off
LS        9.2                On
KK        7.1                Off
KK        9.2                Off
LA        7.5                Off
LA        9.6                Off
KK        14.4               On
LA        16.4               On
LA        17.7               Off
KK        16.0               Off

In the future, placement of the crossing point with respect to the EP of the eye could be determined as follows. The display is moved toward the eye until vignetting of the edges of the FOV is noticed, and the axial position of the display is noted; the display is then moved away from the eye until vignetting of the edges of the FOV is noticed, and the axial position is again noted. If the display is placed at the center of the range just determined, the crossing point will be located at the EP of the eye. Placement of the crossing point at the EP is not necessarily the optimum location; allowing some vignetting as a means to increase the eye-box may be desirable.


Nagahara Data:


Nagahara et al. built an ellipsoidal/hyperboloidal display using an LCD image source relayed into the mirrors by a series of lenses. According to a ray trace image in their publication, the diameter of the collimated beams at the eye varies with elevation on the ellipsoidal mirror; they did not specify the beam diameters at the eye. They tested 10 subjects, asking them to determine at what gaze angles vignetting of the display image became noticeable. The results are shown in TABLE 9. Note that if the crossover point is not placed at the entrance pupil (EP) of the eye, but is instead placed in front of or behind the EP, then the apparent angular range free of vignetting will be increased at the loss of some instantaneous field-of-view. It is not clear whether Nagahara et al. ensured that the crossover point was placed at the EP when they recorded their data. The mean non-vignetted range is equal to the comfortable range of eye motion (±15°).









TABLE 9

Data from Nagahara et al. on vignetting

Direction    Non-Vignetted Range (degrees)   Standard Deviation (degrees)
Vertical     ±15.5°                          4.33°
Horizontal   ±16.6°                          5.13°

Nagahara et al. went further and compared two distinct display modes. In the first mode a static image was displayed, and the only way to look at any object in the image was to rotate the eyes appropriately. In the second mode the displayed image was registered to the head orientation, so rotating the head to see an object was also possible. In the first mode 60% of the subjects complained about the vignetting; in the second mode only 10% did. Hence the display mode clearly influences the acceptability of a small eye-box. It is probably useful to offer a mode in which the displayed imagery is "slaved" to the head orientation of the operator, and this is not expensive: the Oculus Rift developer's display, available for $300, includes full head tracking, and Trex has verified that it functions well.


Experimental Summary

As a rough approximation, the pupil of the eye translates 1 mm for every 5° of shift in gaze angle. The comfortable range of gaze angles is ±15°, corresponding to a pupil translation of approximately ±3 mm. Optimum vision is typically obtained with a pupil diameter of 3.0 mm; if the pupil is at its smallest diameter (2.0 mm), the subject should be wearing sunglasses to increase it to at least 3.0 mm and improve vision. With a pupil diameter of 3.0 mm, a translation of ±3 mm means that an eye-box of ±1.5 mm is required to keep at least 50% of the beam visible, ±2.0 mm is required to keep the entire beam visible, and ±3.0 mm is required to keep the beam centered in the pupil; see FIG. 21.


Optical Design of Head Worn Devices

There are two classes of head worn displays (HWD): (1) virtual reality and (2) augmented reality. Virtual reality systems completely replace normal vision with an electronic image display, while augmented reality systems add new content to the natural "see-through" perception of the user. One example of a virtual reality display is the Oculus Rift. The optical design of practical virtual reality systems is the easiest to implement and has consequently been supported by inexpensive commercial devices for the longest time. However, virtual reality systems become less useful when the wearer wishes to continue interacting with the world around her. As such, this report will not consider virtual reality systems any further but will instead focus on the second class: augmented reality. While some forms of augmented reality use a video camera link to create a digital overlay of the real world in a virtual reality display, we are only interested in see-through augmented reality, which does not impair the natural sight of the user but rather overlays digital image content on the real world. By far, this last form of display is the hardest to design and manufacture.


Before embarking on the display design, it is helpful to consider how the image content will be generated. While numerous forms of image displays have been developed over the years, today there are four common forms of miniature display devices in use: (1) LCOS (Liquid Crystal On Silicon), (2) OLED (Organic LED), (3) DLP (Texas Instruments), and (4) MEMS pico-projector (Microvision). (Omitted entirely from this discussion are the larger format displays used in e-books, laptop computers, or television sets.) The first three devices are two-dimensional planar arrays that support a spatially bit-mapped image across their surfaces. In contrast, the MEMS pico-projector behaves as a point source that raster-scans each image pixel sequentially. The OLED is the simplest in operation since it generates its own light internally and requires no additional optics to support the initial image creation. The remaining LCOS, DLP, and MEMS devices all require more sophisticated optical support and reflect the light off a mirrored surface during the image formation process.


While each of the aforementioned devices has its own advantages and disadvantages, we will primarily consider only the LCOS and MEMS devices for further study. In particular, both of these devices use narrow-band laser sources that are highly beneficial to see-through optical systems. While not essential, the use of such narrow-band light enables the use of narrow-band optical filters to reflect the projected image into the eye without impairing the see-through ambient light simultaneously reaching the observer. For this reason, we have ruled out OLED devices, which exhibit an extremely broad emission spectrum and do not permit the use of narrow-band optical filters. While DLP technology is based on an array of micro-mirrors and can support narrow-band illumination, it is used today primarily in large movie theater projectors rather than small display devices. As such, we have omitted further discussion of the DLP, although its optical behavior is somewhat analogous to LCOS devices.


As shown in FIG. 22, the LCOS naturally creates a collimated beam pattern. Here the polarization at each pixel on the LCOS is rotated in order to modulate the light intensity for that pixel at the polarizing beam splitter cube PBS. This system often employs a diode laser LD that operates at a constant power level.


The LCOS device has two big attributes: it can support the fastest frame rates at the highest possible resolution, up to 240 Hz with 1920×1080 pixels (with 4K devices under development at the time of this writing). However, some optical designs are only possible with the MEMS pico-projector.


Unlike the LCOS system, which works with collimated beam patterns, the MEMS pico-projector naturally creates a point-source-like light pattern that originates from the center of the mirror. As shown in FIG. 23, the MEMS micro-mirror vibrates extremely fast in orthogonal angular directions in order to raster-scan a laser beam across the sweep angles of a bit-mapped image over time. The intensity at each image pixel is determined by temporal modulation of the laser beam, usually by modulating the power of the laser diode itself, while the pixel position is determined by the MEMS mirror angle. This is not unlike the old-fashioned cathode ray picture tube of original television sets, except that the electron beam has been replaced with a laser beam. Current MEMS pico-projector devices can support 60 Hz frame rates with 1920×720 pixels of resolution and project an expanding image with a full-angle sweep of 27×47 degrees. With both the LCOS and MEMS devices, when the system is monochromatic only a single laser source is required; full color displays require three chromatic laser sources.


For most of this report, we will use thin lenses in place of actual optical components. This allows us to understand the first-order optical behavior without concern for lens aberrations, higher-order optical behavior, or the precise mechanical placement of the system. For much of this report, the term "objective lens" will refer to the optical element placed directly in front of the viewer's eyes. This term does not imply a glass lens; the element could in fact be a curved mirror, holographic optical element, Fresnel lens, or even a waveguide that operates as a focusing element.


Conservation of Etendue

Conservation of etendue in an optical system means that the volume in phase space occupied by the input rays at the input aperture must be the same as the volume in phase space occupied by the output rays at the output aperture. In other words, the space-bandwidth product is conserved at each optical surface, where "space" is the aperture size at the surface and "bandwidth" is the range of ray angles passing through the surface. Typically, within any optical system, the space-bandwidth product is limited by some particular element in the system. In the case of the MEMS pico-projector system, the space-bandwidth product is limited by the micro-mirror itself: the mirror has a spatial dimension of 1 mm, and its small aperture size becomes the limiting factor. Fortunately, the pico-projector mirror is capable of a much larger angular bandwidth, either when highly divergent light is projected onto the mirror or when the mirror is scanning through different angles. With the LCOS system, the LCOS surface has a spatial dimension of 18 millimeters, considerably larger than the MEMS micro-mirror, but its angular bandwidth has a natural divergence of only about 8 degrees, due to diffraction at the display's 6 micron pixel size. As such, one could argue that the space-bandwidth product of the LCOS device is potentially greater than that of the MEMS micro-mirror. However, this is not the complete story.


Overcoming Etendue

Conservation of etendue only applies at a single instant in time. However, if the phase space of the system changes over time, then its time-averaged etendue can be much larger than its instantaneous value. This is exactly the case for the MEMS pico-projector, since the micro-mirror scans very rapidly in time. In addition, when used in conjunction with a liquid lens that varies the beam divergence through the micro-mirror, its time-averaged etendue can actually be much larger than that of a comparable LCOS device, because the time-averaged behavior can reach a much larger volume in the phase space of the system.


Finally, there is a second way of overcoming the conservation of etendue. The conservation of etendue does not apply to diffraction gratings and holographic optical elements in the same way as it does to non-diffractive optical systems. In particular, holograms can directly increase the volume of the phase space of a given optical system, because holograms can generate multiple diffracted orders or even contain multiple sets of diffraction gratings. In essence, a hologram can generate time-averaged phase space results without time.


Dichotomy of Optical Design

There is an inherent dichotomy in the optical design of head worn devices (and possibly most imaging devices). In order to represent the entire phase space of a system, the optical design must support both planar and spherical wavefronts reciprocally through both the entrance and exit apertures, since the design must support both types of fields simultaneously. This makes the optical design of such imaging systems much more subtle. Consider the system shown in FIG. 24, which contains only three elements: the Eye, the objective lens L, and the Projector. In order for this design to work, the light that reaches the eye from the projector must be nearly collimated in order to place the observed image at infinity. However, the light must also reach the Eye over a range of angles that determines the field-of-view (FOV) for the observer. As such, the system must accommodate both a point source located at the projector position and multiple planar wavefronts originating from the Eye. Depending on whether an LCOS or MEMS pico-projector device is used, the projector behaves primarily as a point source or a planar source: the MEMS pico-projector is largely point-source-like, while the LCOS display acts chiefly as a planar source. This is discussed further in the next section.


The optical design for a head worn display begins by reverse propagating a point of light from the eye pupil center and considering two parameters: the desired field-of-view (FOV), given by θeye, and the desired eye relief distance d1. Next the objective lens L is placed at the eye relief position d1. For point source projectors such as the MEMS pico-projector, such a system is shown in FIG. 25. Here the objective lens must refocus light back onto the exit aperture of the projector. From the conservation of etendue, the task of the objective lens L is to impedance match the FOV, θeye, with the F-number of the projection system, given by θprojector, through the following relationship:










θeye = (d2/d1)·θprojector  Equation 15
and the focal length f of the lens L quickly follows









f = d1·d2/(d1 + d2) = d1·θeye/(θeye + θprojector)  Equation 16
When an LCOS system is used instead of the MEMS pico-projector, the same field-of-view, θeye, and eye relief, d1, constraints still apply. This time, however, the focal length f is given by the relationship:









f = p/tan(θeye)  Equation 17
and






d1=d2=f  Equation 18


Reverse Propagation with Collimated Beams

Once the focal length f and the positions of the objective lens L and the Projector have been determined, the eye-box dimension s can be determined by considering how plane waves reverse propagate from the exit pupil of the eye through the lens L to the projector aperture p. In the case of the MEMS pico-projector, from the conservation of etendue, the eye-box dimension s is given by:









s = d1·p/d2  Equation 19
With the pico-projector, the mirror dimension p is very small (on the order of 1 mm). As such, the eye-box s at the pupil is also tiny: for example, if p is 1 mm and d2 = 2·d1, then s = 0.5 mm, a very small eye-box indeed. In spite of this limitation, the pico-projector offers other important attributes that offset its limited eye-box dimension: its optics enable easier see-through capability and a wider field-of-view. The long working distance d2 is a benefit of the expanding beam geometry of the pico-projector, which allows it to be placed far from the focal plane. In addition, as a result of the conservation of etendue at the pupil, the small eye-box size results in a large field-of-view for the same system.
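
The first-order design relations of Equations 15, 16, and 19 are easy to exercise numerically. In the following minimal sketch the value d1 = 25 mm is an assumed placeholder for the eye relief; p = 1 mm and d2 = 2·d1 follow the worked example above.

```python
# Sketch: first-order design relations for the pico-projector layout
# (Equations 15, 16, and 19). All distances share one unit (mm here).

def design(d1_mm, d2_mm, mirror_mm):
    theta_ratio = d2_mm / d1_mm              # Equation 15: theta_eye / theta_projector
    f = d1_mm * d2_mm / (d1_mm + d2_mm)      # Equation 16: objective focal length
    s = d1_mm * mirror_mm / d2_mm            # Equation 19: eye-box dimension
    return theta_ratio, f, s

ratio, f, s = design(d1_mm=25.0, d2_mm=50.0, mirror_mm=1.0)
print(f"theta_eye = {ratio:.1f} x theta_projector")
print(f"focal length f = {f:.1f} mm")
print(f"eye-box s = {s:.2f} mm")  # 0.5 mm, matching the worked example
```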


Traditional Objective Lens Approach

For virtual reality systems, which do not require see-through vision, it is possible to create a quite large expanded eye-box. This is accomplished by placing the display of a diffused image directly in the focal plane of the objective lens, as shown in FIG. 26. Here each pixel in the diffused image emits light as a point source. For such a system the OLED display is commonly used, since it naturally emits light with a high angular spectrum. Unfortunately, as previously mentioned, the OLED display cannot be used in our application since it also has a broad emission spectrum. For the LCOS or MEMS micro-mirror display, a diffuser plate or lenticular array is placed in the focal plane of the lens. The resulting projected light from each diffuse pixel position is collimated by lens L and creates an expanded eye-box near the pupil of the eye. However, none of these methods can offer see-through vision, since the diffuse hardware is generally located directly in the viewer's line of sight, unless the field-of-view is made small so that the focal length of the lens can be large. Since one of our goals is a high field-of-view, this traditional approach is not suitable.


Wave-Guide Approach

Invariably, the optical design of most see-through augmented reality systems turns to the use of waveguides in order to achieve an expanded eye-box with see-through. Indeed, the allure of such waveguide technology has been almost universal for early developers of see-through augmented reality devices. Unfortunately, multiple complicating factors prevent waveguide technology from being highly effective: such waveguide devices inevitably suffer from either a limited FOV or a reduced see-through capability. These limitations ultimately stem from the conservation of etendue.


Next we consider a new approach with holographic optical elements that overcomes the etendue limit and enables a wide field-of-view without waveguides and without obscuring the line-of-sight to the natural world.


Overcoming Etendue with Holographic Eye-Boxes

Holographic optical elements (HOEs) can overcome the conservation of etendue that limits classical optical systems. This is because holograms can create more than one optical state simultaneously in phase space, whereas conservation of etendue would normally limit the device to a single optical state. (In the case of head worn displays, the phase space state is given by the product of the eye-box size s and the field-of-view.) In particular, HOEs can create more than one eye-box simultaneously for the viewer without compromising the field-of-view, and can therefore help create an expanded eye-box that exceeds classical limits. As shown in FIG. 27, HOEs can be used to "clone" a single eye-box perspective into an array of eye-box spots at the pupil of each eye. By carefully locating the placement of each "cloned" eye-box spot, it is possible to ensure that the eye pupil always receives a complete image across an expanded eye-box boundary. Next we consider how such an HOE system works.


In practice, the "cloned" eye-box is accomplished by coupling two HOEs in series with the projector and objective lens. The basic process is shown in FIG. 28. Here the projector illuminates the first hologram, H1, with a single projected image. (For the purposes of this discussion we consider only the MEMS micro-mirror projector, but an analogous design can be constructed for the LCOS device as well.) The job of the first hologram is to split this single perspective into multiple wavefronts that ultimately form the different Spots. H1 passes these split wavefronts to H2. Each wavefront created by H1 is destined for a particular eye-box spot, but must first propagate to H2 and then to L before being focused into the various cloned eye-box spots. The purpose of H2 is to act as a corrective element that matches each of the wavefronts from H1 to the optical characteristics of L in order to eliminate aberrations.


Holographic optical elements H1 & H2 are computer generated and hence must first be computed and subsequently "printed" into a physical holographic master for mass production. The two holograms are computer generated by first calculating their optical parameters through a ray-trace propagation algorithm. As shown in FIG. 29, the computer generation process starts with the optical layout of a display system that, without any holograms present, creates the central eye-box Spot 0 in the pupil aperture of the eye. The holograms H1 and H2 are then calculated in five steps:

    • (1) The locations and orientations of the two holograms are chosen to lie between the objective lens L and the Projector P, as shown in FIG. 29. Typically H1 & H2 are located with a small gap between them. However, this placement is not arbitrary but is carefully determined to keep stray light and cross-talk from reaching the eye-box and degrading the viewing experience. (Stray light and cross talk are examined in more detail later in this report.)
    • (2) A single wavefront is calculated at H1 by tracing rays from a point source at the Projector P to the H1 surface. This ray-trace information is converted into wavefront information.
    • (3) Additional wavefronts are calculated at H2 from the back propagation of the rays from each eye-box Spot position through the lens L to the H2 surface. Here each Spot is treated as a point source in the ray-trace calculation. Note that the angular spectrum is held fixed across all of the different “cloned” Spots. This means that the field-of-view at each Spot exactly matches the field-of-view of every other Spot location and enables the observer's pupil to overlap with multiple spot positions without experiencing double vision.
    • (4) The previous ray-trace information from (2) and (3) is used to calculate the joining sets of wavefronts that are required to couple H1 and H2 for each Spot position.
    • (5) Spatially dependent diffraction gratings are computed from the normal direction vectors of each wavefront pair found in steps 2-4, such that there are N separate diffraction gratings computed at holograms H1 & H2 for each of the N Spot locations; a minimal sketch of this local grating computation is given after this list. This grating information is blended at H1 and H2 either by interlacing each grating calculation within a single surface hologram, by stacking each grating in a series of holograms located close together in space, or by multiple exposure of a volume hologram. (Methods of blending are examined in more detail later in this report.)
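
The following is a minimal sketch of step (5) under simplified assumptions: a surface grating at one sample point, with the incoming and outgoing direction vectors and the wavelength chosen as illustrative placeholders. The local grating vector is the difference between the outgoing and incoming wave vectors, and the fringe pitch follows from its in-plane component.

```python
import numpy as np

# Local grating computation at one sample point on hologram H1:
# K = k_out - k_in, with the fringe pitch set by the in-plane part of K.
wavelength_um = 0.532                 # green laser line, in microns
k0 = 2 * np.pi / wavelength_um        # free-space wavenumber (rad/um)

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Illustrative placeholders: incoming ray from the projector and outgoing
# ray toward one eye-box Spot, both at one point on H1 (z = surface normal).
d_in = unit([0.0, 0.3, 1.0])
d_out = unit([0.2, -0.1, 1.0])

K = k0 * (d_out - d_in)               # grating vector at this sample point
K_inplane = K[:2]                     # component lying in the hologram plane
pitch_um = 2 * np.pi / np.linalg.norm(K_inplane)
lp_per_mm = 1000.0 / pitch_um

print(f"local fringe pitch = {pitch_um:.2f} um ({lp_per_mm:.0f} line-pair/mm)")
# For fabrication, the pitch must stay under the writer's limit
# (3300 lp/mm maximum, ideally < 2500 lp/mm, per the text).
```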


The optical geometry discussed in the previous section uses the projector directly in line with Spot 0, the central spot. Unfortunately, this can lead to an imbalance of energy distribution between the central spot and the side-spot eye-box locations. In particular, the side spots undergo much more loss than the central spot, since they are diffracted by H1 and H2 while Spot 0 experiences no losses from the additional diffracting steps. In addition, without further mitigation, the in-line system can suffer from increased stray light and cross-talk noise. One way to mitigate these issues is to use an off-axis geometry instead, as shown in FIGS. 30 and 31. Here the optical design places the Projector off-axis from L such that its projection through L misses the eye-box region entirely. The holograms H1 and H2 are then constructed by the same steps described in the previous section; this time, however, the central spot is also included in the calculation for H1 & H2.


Stray Light and Cross-Talk

Cross-talk noise in cascaded holographic systems occurs when the light from one hologram is diffracted by a second hologram in an unintended way. Such stray-light effects become a problem when the viewing experience of the user becomes overly compromised. The current optical design uses two cascaded holographic optical elements that, together, can diffract the light into a multiplicity of directions. In fact, most of the possible diffracted directions are not useful but rather generate unwanted cross-talk noise. For example, consider a single ray of light passing through an optical system of two HOE elements where each element contains 7 directions of diffraction (see FIG. 32). When a ray of light enters the first HOE, it is diffracted into one of seven different "first-order" directions before it reaches the second HOE, where it is diffracted a second time into one of seven additional "first-order" directions. For such an HOE system with two sets of "first-order" diffractions, there are a total of 49 "first-order" directions of diffraction. However, the situation gets considerably more complicated when, in addition to "first-order", the "zero-order" and "minus-one-order" terms are also included in the analysis. In this case, after passing through the two HOE plates, there are a total of 21² = 441 possible ray directions generated by the system, of which only 7 are intended (to form the 7 spot positions of the eye-box in this case). The remaining 434 directions can only contribute to cross-talk noise. However, such effects can be mitigated through careful design of the different elements in the system.
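
The counting above is simple combinatorics, made explicit in the short sketch below; it follows the text's convention of tallying three orders (-1, 0, +1) for each of the 7 gratings per plate.

```python
from itertools import product

# Count possible ray directions after two cascaded HOEs, per the example
# above: 7 gratings per plate, each contributing a -1, 0, or +1 order,
# i.e. 21 outcomes per plate (the text's counting convention).
N_GRATINGS = 7
orders = [(g, m) for g in range(N_GRATINGS) for m in (-1, 0, +1)]
combos = list(product(orders, orders))   # one outcome per HOE plate
intended = N_GRATINGS                    # the 7 designed eye-box spots

print(f"outcomes per plate:        {len(orders)}")              # 21
print(f"two-plate combinations:    {len(combos)}")              # 441
print(f"potential cross-talk dirs: {len(combos) - intended}")   # 434
```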


There are two important ways that the design can minimize the effects of cross-talk noise. The best method is to use an optical design that keeps most of the cross-talk noise outside the eye-box zone; this method is best because it has no impact on the viewing experience. For the remaining cross-talk noise that does enter the eye-box zone, it is important to ensure that this noise remains as diffuse as possible and does not become focused. In general, such diffuse noise is uncorrelated, whereas focused spots are the result of correlated behavior in the optics. Such correlated behavior occurs when rays undergo the same optical pathway, such as passing through a lens that concentrates the light. Diffuse noise degrades the image quality by reducing the image contrast, whereas correlated noise can generate false double images that strongly interfere with the viewing experience. As such, strongly correlated cross-talk noise must be avoided as much as possible. Of course, the most effective way to minimize cross-talk noise is through the use of thick (volume) holograms that Bragg-match each separate incoming wavefront with the desired separate outgoing wavefront.


Luminance Uniformity & Color Balance

Applicants' approach to manufacturing the volume holographic elements utilizes digital specification of the generating surface HOEs. It should be possible to correct for image distortions in the holograms digitally. Correction patterns to address color balance and luminance uniformity can be incorporated in the holograms as well as through software modifications controlling the light projection source. The color uniformity issue is very similar to that discussed in H. Mukawa et al., "A full color eyewear display using holographic planar waveguides," SID Digest (2008) 89-92.


Aspects to be measured and corrected if necessary are discussed in what follows.


Luminance Uniformity:

    • Brightness uniformity or variance across the image within one exit pupil
      • Affected by the diffraction efficiency of the HOEs with varying Bragg condition
    • Brightness uniformity or variance across the image, with varying numbers of exit pupils entering the pupil of the eye at the same time, and at varying location within the eye-box
      • Multi Variable Monte Carlo histogram to display the occurrence of various brightness intensities with varying pupil size
      • Effects of varying the density of exit pupils within the eye-box


Color Alignment & White Balance:

    • Efficacy of digital correction for varying diffraction efficiency due to varying Bragg condition for all three colors
    • Effects of exit pupil distortion varying with color
    • Histogram of Δu′v′ across the scene within one exit pupil as a result of RGB balance variations
    • Histogram of Δu′v′ across the scene with varying number of exit pupils entering the eye and varying position within the eye-box as a result of RGB balance variations


Measurements are required to assess luminance uniformity and color balance. One possible approach is to utilize the ISO 9241 standards. ISO 9241 is a multi-part standard covering ergonomics of human-computer interaction; in particular, ISO 9241-300 covers displays and display-related hardware. Within the 300 series the following part is of particular relevance:

    • Part 305: Optical laboratory test methods for electronic visual displays


This part of ISO 9241 describes optical test methods and expert observation techniques for evaluating a visual display against the requirements in ISO 9241-303. At 200 pages, it contains detailed instructions on taking display measurements. If luminance uniformity or color balance requires improvement, the digital correction in the HOEs can be updated and/or software corrections can be implemented.


Limitations in Grating Pitch

The holographic optical elements H1 & H2 are initially constructed by Technicolor from computer generated data and are initially fabricated as surface holograms. Some care must be taken not to exceed Technicolor's maximum pitch of 3300 line-pair/mm (ideally, the pitch should be less than 2500 lp/mm). Later these may be copied into a volume hologram format in order to achieve an expanded eye-box.


Thick Vs. Thin Holograms

There are two fundamentally different types of holographic media: “Thin” versus “Thick”. If we consider a hologram consisting of a single sinusoidal grating with grating planes normal to the surface of the emulsion, it behaves as a thick or thin grating depending on the value of the q parameter given by






q = λd/(nΛ²)  Equation 20


where λ is the vacuum wavelength of the light used during reconstruction, n is the refractive index of the emulsion, Λ is the grating period, and d is the emulsion thickness. The grating is considered “thick” when q>1, while the grating is “thin” when q<1. In this application, both thin and thick holograms may be employed. In particular, thin holograms are fabricated initially from computer generated data by Technicolor. Later these thin holograms may be optically transferred into a photopolymer material that has “thick” properties.
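
For concreteness, the following minimal sketch evaluates the q parameter for an example grating. The 10 μm thickness follows the photopolymer values cited later in this report; the 532 nm wavelength, n = 1.5, and 1 μm grating period are assumed placeholder values.

```python
# Sketch: classify a grating as "thick" or "thin" using Equation 20,
# q = lambda*d/(n*Lambda^2). Example values: 532 nm reconstruction
# wavelength, 10 um emulsion (per the text's photopolymer thickness),
# assumed index n = 1.5 and assumed 1 um grating period.

def q_parameter(wavelength_um, thickness_um, index, period_um):
    return wavelength_um * thickness_um / (index * period_um ** 2)

q = q_parameter(wavelength_um=0.532, thickness_um=10.0, index=1.5, period_um=1.0)
print(f"q = {q:.2f} ->", "thick (q > 1)" if q > 1 else "thin (q < 1)")  # q ~ 3.5
```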


There are two main types of holograms used in mass production: surface-embossed holograms and photopolymer volume holograms. Surface holograms are far easier and cheaper to mass-produce, at the cost of increased cross-talk noise and reduced diffraction efficiency. Volume holograms can exhibit very low cross-talk and very high diffraction efficiency, and photopolymers are the most effective form of volume hologram for mass production. Unfortunately, photopolymers are significantly more expensive to mass-produce and are more mechanically unstable, making their production quality more difficult to control. Until recently, mass-produced surface holograms always had inferior optical quality to photopolymers. Recently, however, the basic fabrication technology of Blu-ray disks has enabled the mass production of computer-generated surface holograms with such high quality that they can surpass the optical quality of photopolymers, which cannot be computer generated as easily. In some cases, when a computer generated hologram is required to exhibit the optical properties of a volume hologram, it is possible to create a computer generated Blu-ray master hologram and then transfer the hologram by optical projection into a photopolymer.


Thin (surface) holograms are significantly easier to record directly from computer generated data and then mass-produce. Over the past decade, Technicolor has developed extremely robust methods for the computer-generated mass production of embossed holograms based on the well-established methods of Blu-ray disk manufacturing. As a consequence, it is highly desirable to employ the computer generated surface hologram technology of Technicolor whenever possible. In addition, the Blu-ray based holograms retain superior optical quality relative to an equivalent volume hologram because the holographic grating consistency, substrate surface flatness, and manufacturing repeatability are very close to ideal. In this application, the holograms fabricated by Technicolor are clearly "thin", since the grating exists only at the surface; the effective thickness d in Equation 20 is essentially zero, and hence q ≈ 0.


Being thin, such surface holograms are very susceptible to producing multiple diffracted orders that can generate extra noise in the diffracted beam. In addition, surface holograms have no ability to discriminate between different incoming wavelengths and wavefront directions; they cannot selectively operate on some wavelengths, or some incoming wavefront directions, but not others. As a result, surface holograms are very susceptible to generating higher amounts of background noise than the volume holograms discussed next.


Thick (volume) holograms use Bragg diffraction to selectively diffract light arriving from specific incoming wavelengths and directions into other predetermined directions. As a consequence, thick holograms can act as tunable filters for very specific incoming optical wavefronts. Because thick holograms store the grating information over a volume instead of on a single surface, a thick hologram can hold much more information content than an equivalent thin hologram; in fact, it is possible to encode hundreds of different surface holograms into a single thick hologram. Per Equation 20, while thick holograms must always occupy a volume, not all volume holograms are "thick". In this application, however, the "thick" properties of volume holograms are of particular interest. While many types of materials can be used to store thick holograms, we will only consider photopolymer holographic material (PP), since it has the best overall properties for mass production and performance. More specifically, we favor the newer Bayer photopolymer, since it has improved optical performance over the competing older DuPont photopolymers. Unfortunately, volume holograms cannot be directly created from computer generated data in the manner of Technicolor's surface holograms, nor can they be mass-produced with the same simplicity, low cost, and ease. Volume holograms should only be considered when the performance of equivalent surface holograms is found to be insufficient. Even though the term "volume" is used here, the Bayer and DuPont photopolymers are typically only 10-12 microns thick; nevertheless, such a thickness can be sufficient to enable significant wavelength and angular selectivity.


Simulator Sickness in a HWD System

People involved in the development of head worn displays (HWDs) have discovered that it is not uncommon for the devices to make the wearer fatigued, uncomfortable or downright nauseated, a condition generically referred to as simulator sickness (SS). HWD development is spurring research into human vision, and dramatic improvements have been made in reducing SS, but not all aspects of the problem are understood at the present time. It is extremely important to correct for or minimize features known to cause fatigue or simulator sickness in the wearer of a HWD; generally these involve information delivered to the brain that is inconsistent with, or at odds with, normal visual experience. Simulator sickness is a very real phenomenon and can essentially make a HWD unusable. Known issues include the following:

  • 1. Motion Blur due to head motion—displayed objects appear blurry when the head is turned
  • 2. Judder due to eye motion—motion of displayed objects appears jerky, blurry or non-smooth when the eyes rotate
  • 3. Latency in updating imagery—delay in updating the displayed scene when the head is turned
  • 4. Focus cues incorrect—retinal disparity is provided but not proper focus cues, as in current 3D movies


The general scenario of interest is a HWD with live imagery supplied by cameras whose pointing is slaved to the orientation of the head. When the display is optical see-through, this is a form of augmented reality (AR). This is slightly more complicated than virtual reality (VR): in VR the scenery is already known, the camera is virtual rather than real, and the goal of the display is to provide the correct scenery for the current head orientation. Concrete examples are the following:

    • Unmanned Vehicle (UV) Control—A controller of an unmanned vehicle wears a HWD displaying live imagery from a camera mounted on the UV. Furthermore, the pointing of the camera(s) on the unmanned vehicle mimics the orientation of the operator's head, so that the operator can quickly look around as if he/she were actually present at the location of the UV.
    • Digital Night Vision—A warfighter wears both a HWD and imaging sensors designed to provide imagery, typically using wavelengths undetectable by the naked eye. The imaging sensors are mounted on the head so that their pointing tracks the head orientation, see FIG. 2.
    • Virtual Camera—Virtual objects are to be embedded in the real world, and their positions need to be updated continuously as the head moves.


A simplified block diagram of the processing steps between the imaging sensor and the head worn display is shown in FIG. 3. Latency due to head tracking components applies to the UV-Control and Virtual-Camera applications and is discussed elsewhere. The update rate of the inertial measurement unit in the Oculus Rift is 1000 Hz, and predictive models are utilized to minimize update errors.


Motion Blur from Head Motion


Motion blur caused by head motion is due to an excessive camera integration time and/or an excessive display persistence time. When the head is moving, the scene requires constant updating; however, the display is updated at the frame rate, which is finite. If the camera records data for the entire frame period, it essentially averages data over the angular range covered by head motion during that period. If the HWD displays each pixel for the entire frame period, it holds data, which at best can be correct at only one angular position, constant over the angular range covered by the head motion during the frame period, thus smearing it out. The acceptable durations for display persistence and camera integration depend upon the angular speed of head motion. In a VR situation without a camera, Oculus has demonstrated that a low-persistence display (≈2 msec) adequately suppresses the development of simulator sickness due to motion blur and judder for most people. Displays based upon retinal scanning display (RSD) technology have the lowest persistence of all current technologies. On the camera side, the maximum permissible integration time must be limited by some function of the current angular speed of head motion; head motion data is obtained by head tracking, which is now COTS technology.


Judder when the Eye Tracks a Moving Virtual Object

Judder refers to apparent motion that is jerky when the intention was to display smooth motion; generically, any non-smooth apparent motion is referred to as judder. According to one authority on vision issues with head worn displays, judder specifically refers to such effects associated with eye rotation while tracking or following a moving object. The generic situation is that the HWD is displaying a moving object, which the eye is following or tracking. In this case there are two issues: blurring and strobing. When the eye rotates, a display with persistence equal to the frame period holds the data at each pixel too long, causing the image to smear over a range of pixels. A low persistence display eliminates this blurring effect and its tendency to cause SS. However, with low persistence the image appears only briefly once per frame period, and "strobing" is possible if the object moves more than about 5-10 arc minutes per frame. Strobing is not believed to be an immediate cause of SS but is known to have other effects, which are under investigation. Strobing is reduced by increasing the frame rate.


Latency

Ideally the camera instantly records a brightness value at each pixel and the HWD instantly displays those values. In reality there is a time delay between the two, referred to as latency. For viewing still images with a stationary head, latency has no mechanism to cause SS. When dynamic interaction is involved, such as moving the head, the imagery must be continuously updated to remain consistent with the new head orientation; in this case excessive latency is known to cause discomfort or SS. The acceptable level of latency is not known, but many video game developers believe that a value below 20 msec is required at a minimum, and at least one DARPA RFP requests a value of 2 msec or less.


Some generic components of system latency include the following:

    • 1. Camera integration or light acquisition time—Data must be acquired before it can be delivered. The integration time is limited to the frame period. Long integration time is required for seeing in dark conditions, but increases this component of latency. The solution to this tradeoff is to limit the integration time based upon head motion. When the head is rapidly moving as determined by a head tracking device, integration time is limited to a sufficiently small value so that latency does not cause SS. When the head is still, latency is not an issue and the integration time can be increased (and frame rate decreased if necessary) to the optimum value for scene visibility. Denote the camera integration time by τint.
    • 2. Read out/processing time of the camera—A finite time is required to readout and possibly process a block of data. Typically this is 1/60th of a second for full frame readout of a 60 Hz camera, because use of higher frame rates requires operation over a sub-frame. In rolling shutter on one particular low-light CMOS camera, five lines at a time are processed as part of the non-uniformity correction or NUC. Therefore nothing can be transmitted from the camera until a minimum block of 5 lines has been read out and processed. However, at say 1080p60 the time for this minimum block is (5/1080)×(1/60)=77 μsec, which is small compared to the desired total latency. If the display requires processing of a minimum of 64 lines, the readout time increases to (64/1080)×(1/60)=1 msec+NUC processing delay=1.077 msec. The total delay at this point is 1.077 msec+τint.
    • 3. Data block transmit duration—The transmit bandwidth must be adequate to “keep up” with the data, which means that a frame of data must be transmitted in a frame period (or less). To make transmission feasible over available bandwidths, data compression at transmit and decompression at receive are employed. The chips that accomplish this can offer a total latency less than 1 msec for 1080p at 60 Hz. Thus for 64 lines of 1080p60 the time to receive data at the display is 1.077 msec+1.0 msec compress/decompress+τint=2.077 msec+τint.
    • 4. Display frame buffer—There is a minimum amount of data that must be present to allow any required processing prior to display. If this is an entire frame buffer, the latency is at least a frame period just to receive the data; if the minimum is several lines of data, the latency is the corresponding fraction of the frame period. Implementation of display warping correction and/or image fusion typically requires processing a given number of lines of data at a time. If 64 lines of data are required at 1080p60, the time to receive this data is 2.077 msec plus the camera integration time, as shown above. The processing time required in a processor performing something akin to an FFT is n×log(n)×(clock cycle). At 50 MHz and n=64 the processing time per 64×64 block is something like 5 μsec, and the time to serially process all blocks in the 64 lines is of the order of 5 μsec×(1920/64)=0.15 msec. The total delay at this point is 2.23 msec+τint (a short sketch of this budget follows this list). If the laser scanning is performed according to one of Microvision's patents, a full frame buffer could be required to interpolate and store the data prior to complete display; the format is more akin to 120 Hz interlaced rather than 60 Hz progressive. What the patent shows for the vertical motion of the biaxial mirror is a uniform ramp downwards followed by a uniform ramp upwards: half the frame is displayed on the downward ramp and the other half on the return upward ramp. The first of the two interlaced fields (downward ramp) could be designed to have the latency previously described, but the second field would have a variable additional latency of up to 16.7 msec at the top of the frame. If the camera were made to run at 120 Hz, the additional latency could be cut in half to a maximum of 8.3 msec. If the vertical mirror were made to snap back to the starting position each frame rather than slowly ramp upwards, this source of additional latency could be removed. Latency is not an issue for the wall display and automotive HUD applications currently employing Microvision scanners, but the method of scanning needs to be optimized for low latency for the HWD application.
    • 5. Display persistence—Light is generated at each pixel for some finite duration. Since the persistence is required to generate the correct perception of brightness, some fraction of the persistence is in fact latency. In a low-persistence display such as one based upon retinal scanning display (RSD) technology, this contribution to latency is zero.


The minimum latency is summarized in TABLE 10 assuming that the scanning method is corrected from interlaced to progressive.


TABLE 10

Minimum latency in the camera/HWD system assuming
progressive scanning is implemented

  Description            Latency (msec)
  Camera integration     τint
  NUC delay              0.077
  Min. block readout     1.0
  Compress/Decompress    1.0
  Display processor      0.15
  Total                  2.23 + τint

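
As a check on the arithmetic, the latency budget in TABLE 10 can be reproduced with a short script. This is a minimal sketch in Python; the block sizes, codec latency, and clock-rate estimate are the assumed values from the discussion above, not measured figures.

    # Minimal sketch of the TABLE 10 latency budget (assumed values only).
    LINES_PER_FRAME = 1080             # 1080p
    FRAME_RATE_HZ = 60                 # 60 Hz progressive
    BLOCKS_PER_BAND = 1920 // 64       # 64x64 blocks across a 1920-pixel band

    frame_period_s = 1.0 / FRAME_RATE_HZ

    # Camera readout: 5-line NUC block, then a 64-line display block.
    nuc_delay_s = (5 / LINES_PER_FRAME) * frame_period_s       # ~0.077 msec
    block_readout_s = (64 / LINES_PER_FRAME) * frame_period_s  # ~1.0 msec

    # Compression/decompression chips: taken as 1 msec total, per the text.
    codec_s = 1.0e-3

    # Display processing: ~5 usec per 64x64 block (the order-of-magnitude
    # n*log(n) estimate above at a 50 MHz clock), over 30 blocks.
    processing_s = 5e-6 * BLOCKS_PER_BAND                      # ~0.15 msec

    total_s = nuc_delay_s + block_readout_s + codec_s + processing_s
    print(f"NUC delay:         {nuc_delay_s * 1e3:.3f} msec")
    print(f"Block readout:     {block_readout_s * 1e3:.3f} msec")
    print(f"Compress/decomp:   {codec_s * 1e3:.3f} msec")
    print(f"Display processor: {processing_s * 1e3:.3f} msec")
    print(f"Total: {total_s * 1e3:.2f} msec + camera integration time")

With the block readout rounded up to 1.0 msec, the total matches the 2.23 msec + τint entry in the table.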
3D Defocus Incorrect

A display that only focuses at infinity will in general be less readable when viewing closer objects and may prove annoying in those situations. More importantly, visual cues must agree to prevent the nausea of simulator sickness. Typically, to obtain 3D images, retinal disparity is provided but not the correct defocus corresponding to the vergence. This creates a vergence-accommodation conflict in the brain. In cases of large motion in depth, a significant number of individuals will eventually experience nausea when only retinal disparity is provided to indicate range; a large percentage of people viewing 3D movies are adversely affected. It has been proposed that this is due to an evolutionary adaptation in which the brain decides that the only way such conflicting signals can arrive is if a dangerous substance has been consumed, and consequently an urge to vomit (nausea) is generated. The solution is to include a defocus adjuster in the display so that the display overlay is in focus with the background objects being viewed and so that vergence and defocus cues agree.
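
To make the conflict concrete, the following minimal Python sketch computes, for a given object distance, the binocular vergence angle and the accommodation demand that must accompany it; the interpupillary distance used is an assumed typical value.

    import math

    IPD_M = 0.063  # assumed typical interpupillary distance, meters

    def vergence_angle_deg(d_m):
        """Binocular convergence angle for an object at distance d_m."""
        return math.degrees(2.0 * math.atan(IPD_M / (2.0 * d_m)))

    def accommodation_demand_d(d_m):
        """Focus demand, in diopters, for an object at distance d_m."""
        return 1.0 / d_m

    # A display that drives vergence to 0.5 m while holding focus at
    # infinity (0 D) presents the conflicting cue pair shown here.
    for d in (0.5, 1.0, 3.0):
        print(f"d = {d} m: vergence {vergence_angle_deg(d):.2f} deg, "
              f"accommodation {accommodation_demand_d(d):.2f} D")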


In the proposed RSD architecture, it is possible to place a variable focus lens prior to the MEMS scanning mirror to adjust the divergence of the scanning beams, and thereby affect the focus of the displayed imagery. This has been implemented successfully in two separate benchtop setups.
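
The first-order rule for such a lens is simple. Assuming the scanned beam is nominally collimated (virtual image at infinity) before the variable focus lens, adding a power of -1/d diopters diverges the beam so that the displayed imagery appears at distance d. This sketch ignores the full relay geometry of a real RSD and only illustrates the thin-lens relation.

    def lens_power_for_image_distance(d_m):
        """Variable-lens power (diopters) that moves a collimated display
        beam's virtual image from infinity to distance d_m."""
        return -1.0 / d_m

    for d in (0.33, 1.0, 2.0):
        print(f"virtual image at {d} m -> {lens_power_for_image_distance(d):+.2f} D")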


There are multiple technologies providing relatively compact variable focus lenses:

    • 1. Liquid lenses with confining membrane curvature set by internal pressure (e.g. Optotune)
    • 2. Liquid crystal lenses (e.g. LensVector)
    • 3. Electro-wetting lenses (e.g. Varioptic)


Of the technologies available, liquid crystal lenses are the most compact, but the electro-wetting lenses are almost as compact and have superior optical quality, as well as color correction, due to the use of two complementary liquids.


None of the technologies is capable of changing focus on a pixel-by-pixel basis, but they can change focus to some degree on a line-by-line basis. The electro-wetting technology has a response time on the order of 10 msec, improving for smaller apertures.
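
The quoted response time bounds how finely focus can vary within a frame. The following minimal sketch, using assumed 1080p60 display parameters, shows that a 10 msec lens supports roughly one to two focus zones per 16.7 msec frame, consistent with band-by-band rather than pixel-by-pixel focus.

    FRAME_RATE_HZ = 60
    LINES = 1080
    LENS_RESPONSE_S = 10e-3                 # electro-wetting, order of magnitude

    frame_period_s = 1.0 / FRAME_RATE_HZ    # ~16.7 msec
    line_time_s = frame_period_s / LINES

    settings_per_frame = frame_period_s / LENS_RESPONSE_S
    lines_per_transition = LENS_RESPONSE_S / line_time_s

    print(f"focus settings per frame: {settings_per_frame:.1f}")
    print(f"lines swept during one lens transition: {lines_per_transition:.0f}")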


The focus adjuster has to be fast enough to keep up with the eye. Accommodation changes are actually quite slow compared to video rates, so the defocus adjuster is not required to have a high bandwidth. How fast can a person accommodate? In one study of Navy fighter pilots, subjects were required to recognize the orientation of a Landolt C optotype at 20/20 resolution, first at 18 inches and then at 18 feet. The minimum time for the pair of optotypes to be correctly recognized in succession was measured; the minimum average time exceeded 500 msec even for the youngest and fastest-accommodating subjects. Other studies have measured much longer accommodation times. An adjuster that changes several diopters in half a second can therefore keep up with the fastest-accommodating eyes.
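
One way to apply this result is as a slew-rate bound on the focus adjuster. The study above implies roughly a 2-diopter change (18 inches to 18 feet) in no less than about 500 msec, i.e. about 4 diopters per second for the fastest subjects. The minimal controller sketched below, with that assumed bound, would be sufficient to track any real eye.

    MAX_SLEW_D_PER_S = 4.0   # assumed bound from the accommodation study

    def step_focus(current_d, target_d, dt_s):
        """Advance lens focus toward target_d without exceeding the eye's
        accommodation slew rate; call once per frame (dt_s = frame period)."""
        max_step = MAX_SLEW_D_PER_S * dt_s
        error = target_d - current_d
        return current_d + max(-max_step, min(max_step, error))

    # Example: retarget from infinity (0 D) to 0.5 m (2 D) at 60 Hz;
    # the lens settles in about half a second, matching the eye.
    focus = 0.0
    for frame in range(35):
        focus = step_focus(focus, 2.0, 1.0 / 60.0)
    print(f"focus after 35 frames (~0.58 s): {focus:.2f} D")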


Global change of display focus is straightforward to implement. If there is more than one virtual object, with a distribution of intended ranges, the defocus technologies mentioned above probably cannot simultaneously provide different focus for the virtual objects unless they are present in the display at non-overlapping vertical positions. In the presence of multiple virtual objects at different ranges, eye tracking, if implemented, will allow the system to correctly focus the virtual object closest to the foveal gaze, as sketched below.
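
A minimal sketch of that selection logic follows; the object list format, gaze representation, and small-angle separation metric are illustrative assumptions, not a prescribed interface.

    import math

    # Virtual objects: direction as (azimuth, elevation) in degrees, range in meters.
    objects = [
        {"name": "waypoint", "dir_deg": (5.0, -2.0), "range_m": 10.0},
        {"name": "tool tip", "dir_deg": (-1.0, 0.5), "range_m": 0.6},
    ]

    def angular_sep_deg(a, b):
        """Small-angle approximation to angular separation, in degrees."""
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def focus_target_diopters(gaze_dir_deg):
        """Pick the object nearest the foveal gaze; return its focus demand."""
        nearest = min(objects,
                      key=lambda o: angular_sep_deg(o["dir_deg"], gaze_dir_deg))
        return nearest["name"], 1.0 / nearest["range_m"]

    name, diopters = focus_target_diopters((-0.5, 0.0))
    print(f"focusing on '{name}' at {diopters:.2f} D")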


Variations

While the above description contains many specifications and other details, the reader should not construe these as limitations on the scope of the invention but merely as exemplifications of preferred embodiments thereof. For example:


Applications of the present invention may include:

    • a. HWD with large FOV, expanded eye box, variable brightness and variable focus (military: ground soldiers; commercial: gaming)
    • b. Situation awareness HWD and next-generation night vision HWD: HWD integrated with low-light CMOS and LWIR cameras (night vision mode and no light leak for covert operation; improved situation awareness for ground soldiers and pilots; next-generation night vision as a replacement for NVGs)
    • c. AR/VR HMD for military and civilian training applications with opaqueness in the lens
    • d. HWD for hands-free robot control (ground vehicles); HWD integrated with an eye tracker
    • e. Dynamic Foveal Vision Display: high resolution combined with large FOV (two scanners per eye: foveal and peripheral)
    • f. AR HWD with variable brightness and light security for ground vehicles and other platforms (for SOCOM, Army)


Features of embodiments of the invention may include:

    • a. Unobscured see-through FOV with high transmission
      • i. Curved plano combiner (centered on the center of rotation of the eye; no prism effect)
      • ii. High see-through transmission due to the use of notch-filter reflective coatings
      • iii. Unobstructed binocular vision
      • iv. Thin polycarbonate combiner (1 mm-1.5 mm)
    • b. Large display FOV (70 degrees)
      • v. Small beam footprint (0.5 mm-2 mm), resulting in a small amount of aberration
      • vi. HOE design corrects for beam divergence
    • c. Expanded eye box
      • vii. HOE computer-generated surface holograms (low cost), Dammann grating, correction for spherical aberration of the lens
      • viii. Hoxel size, orientation, reflective vs. transmissive mode
      • ix. Thin, lightweight, polycarbonate
      • x. Color separation (hoxel dimensions)
      • xi. Spatial resolution
      • xii. Number of exit pupils
      • xiii. Volume HOE
    • d. Variable brightness
      • xiv. Large dynamic range (LCD attenuation)
      • xv. Color gamut
      • xvi. Night mode (>620 nm)
    • e. Variable focus
      • xvii. Electro-wetting or liquid crystal lens
      • xviii. Variable focus display (entire image)
      • xix. Aberration correction (line-by-line correction)
    • f. Cyber sickness mitigation
      • xx. Low persistence
      • xxi. Low latency
      • xxii. Vergence-focus agreement
    • g. Opaqueness in the lens (programmable opaqueness)
      • xxiii. LCD (flat)
      • xxiv. LCD on a curved surface
      • xxv. Guest-host LCD
    • h. Foveal vision HWD (high resolution with large FOV) (two scanners per eye)
      • xxvi. Static (Foveal display 20/20 at the optical axis, peripheral with low resolution in the peripheral view)
      • xxvii. Dynamic using input (Gaze direction) from the eye tracker.
      • xxviii. Pixel warping approach (using 2 HOEs)
    • i. Eye tracker
      • xxix. Camera based eye tracker
      • xxx. Laser-based eye tracker
    • j. Covertness (covert operation)
      • xxxi. Rugate coating, notch filter coating
    • k. Low SWaP and low cost (COTS components)
      • xxxii. $200/unit in production
    • l. Optically enhanced head tracker
      • xxxiii. Head tracking algorithm using data streams from the IMU and a vision-based head tracker.
    • m. Night mode (preserves dark eye adaptation, covert operation at night)


Key HWD Components:


Key components may include:

    • a. RSD pico projector engine;
    • b. Focusing lens;
    • c. HOE or HOEs;
    • d. Combiner;
    • e. LCD for opaqueness;
    • f. Eye tracker (camera-based and laser-based);
    • g. Cameras: low-light CMOS and LWIR; and
    • h. Head tracker: IMU plus optical tracker.


Also, for example, embodiments may include a camera imaging a field of view larger than the displayed field of view, so that a forward-predicted view of the background may be displayed when tracking a fast-moving object, as described in Ser. No. 62/603,160, in order to reduce latency effects.
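
A minimal sketch of the idea is given below: the camera over-renders a margin beyond the displayed field of view, the head angular rate forward-predicts where the view will point after the system latency, and the displayed crop is shifted accordingly. All field-of-view, resolution, and rate values here are illustrative assumptions.

    DISPLAY_FOV_DEG = (50.0, 30.0)   # displayed field of view (h, v), assumed
    CAMERA_FOV_DEG = (60.0, 40.0)    # over-rendered camera field of view, assumed
    CAMERA_RES = (1920, 1280)        # camera pixels (h, v), assumed
    LATENCY_S = 2.23e-3              # system latency from TABLE 10

    def crop_offset_px(yaw_rate_dps, pitch_rate_dps):
        """Pixel offset of the displayed crop inside the over-rendered frame,
        forward-predicting head orientation over the system latency."""
        px_per_deg_h = CAMERA_RES[0] / CAMERA_FOV_DEG[0]
        px_per_deg_v = CAMERA_RES[1] / CAMERA_FOV_DEG[1]
        return (yaw_rate_dps * LATENCY_S * px_per_deg_h,
                pitch_rate_dps * LATENCY_S * px_per_deg_v)

    # Example: a brisk 300 deg/s head turn shifts the crop by ~21 pixels,
    # well inside the total horizontal over-render margin computed here.
    margin_px_h = ((CAMERA_FOV_DEG[0] - DISPLAY_FOV_DEG[0])
                   * CAMERA_RES[0] / CAMERA_FOV_DEG[0])
    dx, dy = crop_offset_px(300.0, 0.0)
    print(f"crop shift: {dx:.1f} px (margin available: {margin_px_h:.0f} px)")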


Accordingly, the reader is requested to determine the scope of the invention by the appended claims and their legal equivalents and not by the examples given above.

Claims
  • 1. An augmented reality head worn device comprising:
    a. a curved combiner transparent to visible light and reflective in a selected infrared frequency range,
    b. a scanning light source defining a field of view,
    c. a holographic optical element adapted to provide exit pupil expansion to create an eye-box, and
    d. at least one projection system for providing high acuity at or near a center of the field of view.
  • 2. The device as in claim 1 wherein the curved combiner is coated with a rugate coating.
  • 3. The device as in claim 1 wherein the light source is a laser light source.
  • 4. The device as in claim 1 wherein the light source is a full color light source.
  • 5. A head mounted display system comprising at least one retinal display unit, said at least one display unit comprising:
    A) a curved reflector and a frame adapted to position the curved reflector in front of at least one eye of a wearer, said at least one eye defining a pupil, a retina, a fovea and a view direction,
    B) a first set of at least three visible light lasers, all lasers being co-aligned and adapted to provide a co-aligned, color foveal laser beam,
    C) a second set of at least three visible light lasers plus an infrared laser, all lasers being co-aligned and adapted to provide a co-aligned, color and infrared retinal laser beam,
    D) a first two-dimensional MEMS laser scanner unit adapted to provide both horizontal and vertical scanning of the co-aligned color laser beam across a portion of the curved reflector in directions so as to produce reflections of the horizontally and vertically scanned color foveal laser beam through the pupil of the eye onto a small portion of the retina, said small portion being less than 20 percent of the retina but large enough to encompass the fovea, said small portion defining a foveal region,
    E) a second two-dimensional MEMS laser scanner unit adapted to provide both horizontal and vertical scanning of the co-aligned color and infrared laser beam across a portion of said curved reflector in directions so as to produce a reflection of the horizontally and vertically scanned color and infrared retinal laser beam through the pupil of the same eye onto a portion of the retina corresponding to a field of view of at least 30 degrees×30 degrees,
    F) an infrared light detector adapted to detect infrared light reflected from the retina and the curved reflector and produce an infrared reflection signal,
    G) a video graphics input device adapted to provide color video graphics input signals,
    H) control electronics adapted to:
      1) determine the view direction of the eye based on the infrared reflection signal,
      2) modulate the first set of three visible light lasers based on the video graphics input signals and control the first scanner unit based on the infrared reflection signal to produce, with the scanned foveal laser beam, color images on the foveal region of the eye, and
      3) modulate the second set of three visible light lasers based on the video graphics input signals and control the second scanner unit based on the infrared reflection signal to produce, with the retinal color and infrared laser beam:
        a) color images on a region of the retina corresponding to a field of view of at least 30 degrees×30 degrees and
        b) infrared reflected light for determining the eye view direction,
  wherein the first scanner unit is adapted to produce a relatively high resolution image on the foveal region of the user's eye and the second scanner unit is adapted to produce a substantially larger image on a portion of the user's eye, providing the user a high resolution image of objects within less than 20 degrees of the center of his field of view and an overall field of view of at least 30 degrees.
  • 6. The display system as in claim 5 wherein the curved reflector is spherical.
  • 7. The display system as in claim 5 wherein the curved reflector is ellipsoidal.
  • 8. The display system as in claim 5 wherein the curved reflector is a partially reflecting lens.
  • 9. The display system as in claim 5 wherein each of the first and second sets of at least three visible light lasers comprise red, green and blue lasers.
  • 10. The display system as in claim 5 wherein each of the first and second sets of at least three visible light lasers is a set made up of a red, a green and a blue laser.
  • 11. The display system as in claim 5 wherein each of the first scanner unit and the second scanner unit is comprised of a MEMS scanner.
  • 12. The display system as in claim 11 wherein each of the first and second MEMS scanners includes a scanner axis that is operated in a resonant mode.
  • 13. The display system as in claim 11 wherein a horizontal scan for each of the first and second MEMS scanners is provided by the resonant scanner axis and vertical scans are provided by a ramping voltage applied with respect to one axis of the scanner.
  • 14. The display system as in claim 5 wherein the foveal region corresponds to an approximately 10 degree diameter field of view encompassing the fovea.
  • 15. The display system as in claim 5 wherein the second scanner unit is adapted to provide a reflection on the retina corresponding to a field of view of about 50 degrees×70 degrees.
  • 16. The display system as in claim 5 wherein the second scanner unit is adapted to provide a reflection on the retina corresponding to a field of view having one dimension as large as 120 degrees.
  • 17. The display system as in claim 5 wherein said at least one retinal display unit is two retinal display units and said at least one eye is both of the wearer's two eyes.
  • 18. The display system as in claim 7 wherein said display further comprises focus adjuster elements.
  • 19. The display system as in claim 18 wherein the focus adjuster elements comprise a variable focus lens and feedback electronics adapted to adjust focus of the variable focus lens to maximize reflection of infrared light detected by said infrared detector of each of the two retinal display units.
  • 20. The display system as in claim 19 wherein the focus adjuster elements comprise a variable focus lens and feedback electronics adapted to adjust focus of the variable focus lens and said control electronics are adapted to determine the focus of each of the two eyes by estimating the convergence angle of the two eyes.
  • 21. The display system as in claim 20 wherein the system is adapted to provide three dimensional viewing.
  • 22. The display system as in claim 21 wherein the system includes a wireless connection to a communication console.
  • 23. The display system as in claim 22 wherein the console is a television console.
  • 24. The display system as in claim 23 wherein the console is a computer console in communication with the Internet.
  • 25. The display system as in claim 24 wherein said system is adapted for computer gaming.
  • 26. The display system as in claim 5 wherein the system is adapted for operation in a virtual reality mode.
  • 27. The display system as in claim 5 wherein the system is adapted for operation in an augmented reality mode.
  • 28. The display system as in claim 5 wherein the curved reflector has a varying radius of curvature.
  • 29. The display system as in claim 5 wherein the system is adapted for implementation in the form of goggles.
  • 30. The display system as in claim 5 wherein the system is adapted for implementation in the form of a head mounted visor.
  • 31. The display system as in claim 5 wherein the system is adapted for implementation in a form wherein the curved reflector is a portion of a cockpit window.
  • 32. The display system as in claim 5 wherein the system is adapted for implementation in a form wherein the curved reflector is a portion of a motor vehicle window.
  • 33. The display system as in claim 5 wherein a field of view of at least 50×100 degrees is provided with a single pico projector and a single MEMS scanner.
  • 34. The display system as in claim 33 wherein the system also includes adjustable focus features.
  • 35. The system as in claim 5 and also comprising a mechanical eye box.
  • 36. The system as in claim 5 and also comprising a polarization-based separation of retinal and corneal reflection.
  • 37. The system as in claim 5 and also comprising optically created opaqueness in the lens for virtual reality applications.
  • 38. The system as in claim 37 wherein the optically created opaqueness is provided with a photochromic material or a guest-host liquid crystal material.
  • 39. The system as in claim 38 wherein the photochromic material is a diarylethene-type dye.
  • 40. The system as in claim 5 wherein the head worn device is adapted to provide true 3D renderings and elimination of simulation sickness by providing both the correct focus as well as the correct retinal disparity.
  • 41. The device as in claim 1 wherein pixelated color filters are placed on top of the HOE diffractive elements to reduce color cross-talk.
  • 42. The device as in claim 1 wherein imagery from cameras is displayed to the wearer, the camera images are over-rendered, the head orientation at the moment of display of the next frame is forward predicted, and the correct portion of the over-rendered camera image is displayed to the wearer in said next frame.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of Provisional Patent Application Ser. Nos. 62/495,181; 62/495,187; 62/495,286; 62/495,188; and 62/495,185, all filed Sep. 1, 2016, and the benefit of Provisional Patent Application Ser. No. 62/603,160, filed May 17, 2017. This application is also a Continuation in Part of Utility patent application Ser. No. 14/545,985, filed Jul. 13, 2015, and Ser. No. 15/257,883, filed Sep. 6, 2016.

Provisional Applications (6)
Number Date Country
62495181 Sep 2016 US
62495187 Sep 2016 US
62495286 Sep 2016 US
62495188 Sep 2016 US
62495185 Sep 2016 US
62603160 May 2017 US
Continuation in Parts (2)
Number Date Country
Parent 14545985 Jul 2015 US
Child 15731959 US
Parent 15257883 Sep 2016 US
Child 14545985 US