This application claims priority to Taiwan Application Serial Number 112148374, filed Dec. 12, 2023 which is herein incorporated by reference.
The present disclosure relates to a mixed reality display device.
Near-eye displays have become the mainstream of mobile displays in recent years. They can be integrated into portable devices such as glasses and head-mounted displays to increase display convenience. In the field of mixed reality (MR) display, exit pupil expansion (EPE) technology is crucial to the viewer's experience. The area of the image incident region is usually much smaller than that of the image exit region, because the incident region is limited by the size of the near-eye display. How to expand an image so that it can be clearly viewed by a human eye from different directions is therefore an important subject for designers of mixed reality near-eye displays.
One technical aspect of the present disclosure is a mixed reality display device.
According to an embodiment of the present disclosure, a mixed reality display device comprises a waveguide element, an image display device, an imaging lens, a first diffractive optical element, a first volume holographic optical element, a second volume holographic optical element, and a second diffractive optical element. The image display device is located on a first side of the waveguide element and is configured to emit a light field image. The imaging lens is located between the waveguide element and the image display device and is configured to image the light field image at infinity, that is, to transfer each single pixel of the image display device into a plane wave at a corresponding angle. The first diffractive optical element is located on a second side of the waveguide element opposite to the first side and is configured to guide the light field image into the waveguide element. The first volume holographic optical element is located on the second side of the waveguide element and is configured to guide the light field image. The second volume holographic optical element is located on the first side of the waveguide element and is configured to guide the light field image, wherein the first volume holographic optical element and the second volume holographic optical element have angular selectivity configured to extract a specific viewing angle, and the second volume holographic optical element has a lens array function to form a light field image source. The second diffractive optical element is located on the second side of the waveguide element and has a lens array function to transfer the light field image source into a three-dimensional image.
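The imaging-at-infinity function of the imaging lens can be sketched numerically: a pixel at lateral offset x in the focal plane of a lens with focal length f exits as a collimated plane wave at angle arctan(x / f). The pixel pitch and focal length below are illustrative assumptions, not values taken from the disclosure:

```python
import math

def pixel_to_plane_wave_angle(pixel_index, pixel_pitch_mm, focal_length_mm):
    """Map a display pixel (indexed from the optical axis) to the angle of
    the collimated plane wave produced by an imaging lens of the given
    focal length: a pixel at offset x exits at theta = arctan(x / f)."""
    x = pixel_index * pixel_pitch_mm
    return math.degrees(math.atan2(x, focal_length_mm))

# Illustrative values: 10 um pixel pitch, 20 mm focal length.
for idx in (0, 100, 500):
    angle = pixel_to_plane_wave_angle(idx, 0.01, 20.0)
    print(f"pixel {idx:>3}: plane wave at {angle:.2f} deg")
```

Each distinct pixel angle then propagates through the waveguide independently, which is what allows the downstream elements to treat the image as a set of plane waves.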
In an embodiment of the present disclosure, the first diffractive optical element comprises at least one of a surface structured grating, a liquid crystal grating, a volume holographic element, or a metamaterial.
In an embodiment of the present disclosure, the second diffractive optical element comprises at least one of a surface structured grating, a liquid crystal grating, a volume holographic element, or a metamaterial.
In an embodiment of the present disclosure, the lens array function of the second volume holographic optical element and the lens array function of the second diffractive optical element are configured to generate a viewing-angle magnification function.
In an embodiment of the present disclosure, the second volume holographic optical element comprises a plurality of holographic optical element lenses. The holographic optical element lenses are arranged in an array. Each of the holographic optical element lenses is configured to converge a ray of light.
In an embodiment of the present disclosure, each of the holographic optical element lenses is produced by recording the interference between a plane wave and a spherical wave. Repeating this recording at different positions on the same holographic photosensitive film produces the plurality of holographic optical element lenses, so that the second volume holographic optical element is composed in the form of an array.
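The plane-wave/spherical-wave recording step can be sketched as an interference calculation on the film plane; the recorded fringe pattern behaves as a holographic lens whose focal length equals the point-source distance. The patch size, wavelength, and source distance below are illustrative assumptions:

```python
import numpy as np

def lens_hologram_intensity(xs, ys, wavelength_um, source_dist_um):
    """Interference intensity on the film plane z = 0 between a normally
    incident unit-amplitude plane wave and a spherical wave diverging
    from a point source at `source_dist_um` behind the film.  The
    recorded fringes act as a holographic lens of that focal length."""
    k = 2.0 * np.pi / wavelength_um
    X, Y = np.meshgrid(xs, ys)
    r = np.sqrt(X**2 + Y**2 + source_dist_um**2)  # spherical-wave path length
    plane = np.exp(1j * k * source_dist_um)       # plane-wave phase at the film
    sphere = np.exp(1j * k * r)                   # spherical-wave phase (amplitude dropped)
    return np.abs(plane + sphere) ** 2            # recorded intensity, range [0, 4]

# One sub-aperture: a 1 mm square patch, 0.633 um recording wavelength,
# 10 mm point-source distance (all illustrative).
coords = np.linspace(-500.0, 500.0, 256)
patch = lens_hologram_intensity(coords, coords, 0.633, 10_000.0)
print(patch.shape, round(float(patch.max()), 2))
```

Repeating the computation (or, physically, the exposure) at shifted patch positions tiles the film into the lens array described above.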
In an embodiment of the present disclosure, the second diffractive optical element comprises a plurality of diffractive optical element lenses. The diffractive optical element lenses are arranged in an array. Each of the diffractive optical element lenses is configured to diverge or converge a ray of light.
In an embodiment of the present disclosure, each of the holographic optical element lenses in the second volume holographic optical element forms a beam-shrinking configuration together with the diffractive optical element lens at the corresponding position in the second diffractive optical element.
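In an afocal two-lens pairing of this kind, shrinking the beam diameter magnifies ray angles by the reciprocal factor, which is one way such a lens pair can yield a viewing-angle magnification. A minimal sketch of this relationship, with focal lengths that are illustrative assumptions rather than values from the disclosure:

```python
def angular_magnification(f_hoe_mm, f_doe_mm):
    """Afocal (telescope-like) pairing of a holographic lens and the
    diffractive lens at the corresponding array position: shrinking the
    beam diameter by f_hoe / f_doe magnifies ray angles by that factor."""
    return f_hoe_mm / f_doe_mm

def output_angle_deg(input_angle_deg, f_hoe_mm, f_doe_mm):
    """Small-angle approximation: theta_out ~ M * theta_in."""
    return angular_magnification(f_hoe_mm, f_doe_mm) * input_angle_deg

# Illustrative focal lengths: 6 mm holographic lens, 2 mm diffractive lens.
M = angular_magnification(6.0, 2.0)
print(f"beam shrunk {M:.0f}x; a 5 deg ray exits at {output_angle_deg(5.0, 6.0, 2.0):.0f} deg")
```
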
In an embodiment of the present disclosure, the first volume holographic optical element comprises a plurality of volume holographic gratings formed at different positions in the same photosensitive material by plane-wave interference recording at different angles. The volume holographic gratings are arranged in an array. Each of the volume holographic gratings corresponds to a different Bragg condition.
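The per-grating Bragg condition can be sketched with the standard first-order relation 2 n Λ sin θ = λ (angle measured from the grating planes, inside the medium); gratings recorded at different angles have different periods and therefore respond to different incidence angles. The periods, wavelength, and refractive index below are illustrative assumptions:

```python
import math

def bragg_angle_deg(grating_period_um, wavelength_um, refractive_index):
    """First-order Bragg angle inside the medium, measured from the
    grating planes: 2 * n * Lambda * sin(theta) = lambda."""
    s = wavelength_um / (2.0 * refractive_index * grating_period_um)
    if not 0.0 < s <= 1.0:
        raise ValueError("no real Bragg angle for these parameters")
    return math.degrees(math.asin(s))

# An array of gratings, each satisfying the Bragg condition at a
# different angle for 0.532 um light in a medium of index 1.5.
for period in (0.3, 0.4, 0.6):
    print(f"period {period} um -> Bragg angle {bragg_angle_deg(period, 0.532, 1.5):.1f} deg")
```

Because each grating diffracts efficiently only near its own Bragg angle, the array as a whole performs the angle-selective extraction described above.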
In an embodiment of the present disclosure, an air layer is provided between the second diffractive optical element and the waveguide element.
In an embodiment of the present disclosure, a mask element is further provided between the second diffractive optical element and the waveguide element. The mask element is configured to eliminate a phenomenon of crosstalk between the diffractive optical element lenses.
In an embodiment of the present disclosure, the mixed reality display device further comprises an eye movement tracking camera. The eye movement tracking camera is located on the first side of the waveguide element.
In an embodiment of the present disclosure, the mixed reality display device further comprises a beam splitter. The beam splitter is located between the waveguide element and the image display device, and is configured to transmit an image of a human eye to the eye movement tracking camera.
In an embodiment of the present disclosure, the mixed reality display device further comprises an eye movement tracking camera and a third volume holographic optical element. The eye movement tracking camera is located on the second side of the waveguide element. The third volume holographic optical element is located on the first side of the waveguide element, and is configured to transmit the image of the human eye to the eye movement tracking camera, wherein the third volume holographic optical element comprises a reflective volume holographic element.
In an embodiment of the present disclosure, the mixed reality display device further comprises an eye movement tracking camera and a third diffractive optical element. The eye movement tracking camera is located on the second side of the waveguide element. The third diffractive optical element is located on the first side of the waveguide element and does not overlap with the image display device in a first direction, and is configured to transmit the image of the human eye to the eye movement tracking camera.
In the above embodiments of the present disclosure, since the image emitted by the image display device of the mixed reality display device is a light field image, and the second diffractive optical element has a lens array function to transfer the light field image into a three-dimensional image, the effect of seeing a 3D image in the mixed reality display device can be achieved. Furthermore, the first volume holographic optical element and the second volume holographic optical element have angular selectivity, and the lens array function of the second volume holographic optical element and the lens array function of the second diffractive optical element are configured to generate the viewing-angle magnification function, so that the effects of image selection and viewing-angle magnification can be achieved, thereby improving the viewing experience of the mixed reality display device. By using the eye movement tracking camera to capture the image of the human eye passing through the lens array, deducing the position that the human eye gazes at through a regression algorithm, and feeding this position back to the light source to adjust the emitted light field image, the dizziness caused to the human eye by the vergence-accommodation conflict (VAC) can be effectively reduced.
Aspects of the present disclosure are best understood from the following description of embodiments when read in conjunction with the drawings. Note that in accordance with standard practice in this industry, various features are not drawn to scale. In fact, dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion.
The following disclosure of embodiments provides many different embodiments or examples for implementing different features of the provided subject matter. Specific examples of elements and arrangements are described below to simplify the present application. Of course, these examples are examples only and are not intended to be limiting. Additionally, reference symbols and/or letters may be repeated in various examples of the present application. This repetition is for simplicity and clarity, and does not by itself specify a relationship between various embodiments and/or configurations discussed.
Spatial relative terms, such as “below”, “under”, “lower”, “above”, “upper”, etc., may be used herein for convenience of description, to describe the relationship between one element or feature and another element or feature as illustrated in the drawings. Spatial relative terms are intended to encompass different orientations of a device in use or operation, in addition to the orientations illustrated in the drawings. The device may be otherwise oriented (rotated 90 degrees or at other orientations), and the spatial relative descriptive words used herein may also be interpreted accordingly.
Since the image emitted by the image display device 120 of the mixed reality display device 100 is a light field image, and the second diffractive optical element 160 has a lens array function to transfer the light field image into a three-dimensional image, the effect of seeing a 3D image in the mixed reality display device 100 can be achieved. Furthermore, the first volume holographic optical element 140 and the second volume holographic optical element 150 have angular selectivity, and the lens array function of the second volume holographic optical element 150 and the lens array function of the second diffractive optical element 160 are configured to generate the viewing-angle magnification function, so that the effects of image selection and viewing-angle magnification can be achieved, thereby improving the viewing experience of the mixed reality display device 100.
In some embodiments, the first diffractive optical element 130 includes a surface structured grating, a liquid crystal grating, a volume holographic element, a metamaterial, or other diffractive optical elements. In some embodiments, the second diffractive optical element 160 includes a surface structured grating, a liquid crystal grating, a volume holographic element, a metamaterial, a lens array with a see-through function, or any form of diffractive optical element.
The holographic optical element lenses 152 diffract light of a particular polarization to form diffraction light, and the polarization of the diffraction light of each holographic optical element lens is orthogonal to the polarization of the diffraction light from its neighboring holographic optical element lenses, so that the crosstalk between the holographic optical element lenses can be eliminated.
The diffractive optical element lenses 162 diffract light of a particular polarization to form diffraction light, and the polarization of the diffraction light of each diffractive optical element lens is orthogonal to the polarization of the diffraction light from its neighboring diffractive optical element lenses, so that the crosstalk between the diffractive optical element lenses can be eliminated.
By using the eye movement tracking camera 170 to capture the image of the human eye E passing through the lens array, deducing the position that the human eye E gazes at through a regression algorithm, and feeding this position back to the image display device 120 to adjust the emitted light field image, the dizziness caused to the human eye by the vergence-accommodation conflict (VAC) can be effectively reduced.
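The disclosure does not specify the regression algorithm; a minimal sketch is a least-squares linear (affine) map calibrated from eye-image features to on-display gaze coordinates. The feature choice (e.g. pupil-center coordinates) and the synthetic calibration data are assumptions for illustration only:

```python
import numpy as np

def fit_gaze_regressor(features, gaze_xy):
    """Least-squares affine map from eye-image feature vectors (e.g.
    pupil-center coordinates extracted from the tracking camera frames)
    to gaze coordinates.  Returns the weight matrix including bias."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias column
    W, *_ = np.linalg.lstsq(X, gaze_xy, rcond=None)
    return W

def predict_gaze(W, feature):
    """Apply the fitted map to one feature vector."""
    return np.append(feature, 1.0) @ W

# Synthetic calibration data: gaze is a known affine function of features,
# so the fit should recover it exactly.
rng = np.random.default_rng(0)
feats = rng.uniform(-1.0, 1.0, size=(50, 2))
true_W = np.array([[10.0, 0.0], [0.0, 8.0], [1.0, -2.0]])  # ground-truth map
targets = np.hstack([feats, np.ones((50, 1))]) @ true_W
W = fit_gaze_regressor(feats, targets)
print(np.round(predict_gaze(W, np.array([0.5, -0.5])), 3))  # recovers ~[6., -6.]
```

In a real device the predicted gaze position would drive the image display device 120 to re-render the light field for the gazed depth, which is the feedback loop described above.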
The foregoing outlines features of several embodiments, so that those skilled in the art may better understand the aspects of the present disclosure. It should be understood by those skilled in the art that they may readily use the present disclosure as a basis for designing or modifying other processes and structures, for realizing the same purposes and/or achieving the same advantages as the embodiments introduced herein. It should be recognized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they can be variously changed, substituted, and altered herein without departing from the spirit and scope of the present disclosure.
Number | Date | Country | Kind
---|---|---|---
112148374 | Dec 2023 | TW | national