The invention is in the field of image projection systems; more specifically, the invention relates to techniques for providing a virtual and/or augmented reality experience to a user.
Wearable, e.g. head-mounted, image projection systems for providing virtual and/or augmented reality to the user's eye(s) are becoming increasingly popular. Various systems are configured as glasses mountable onto a user's head and operable for projecting images to the user's eyes.
Some of the known systems are aimed at providing pure virtual reality image projections to the user's eyes, in which light from the external real-world scenery is blocked from reaching the eye(s). Other known systems are directed at providing an augmented reality perception, in which light from the external real-world scenery is allowed to pass to the eyes while images/video frames projected to the eyes by the image projection system are superposed on the external real-world scenery.
Depth and width of field are two of the parameters that should be considered in such virtual or augmented reality projection systems.
For example, WO06078177 describes a direct retinal display for displaying an image on the retina of an eye with a wide field of view. The direct retinal display comprises a scan source arranged to generate a scanned optical beam, modulated with an image, in two dimensions over a scan angle. The direct retinal display further comprises a diverging reflector in the path of the scanned optical beam, arranged to reflect the scanned optical beam incident on it outwardly with a magnified scan angle toward a converging reflector, which is arranged to reflect the scanned optical beam substantially toward a convergence spot at the pupil of the eye for reconstruction and display of the image on the retina with a wide field of view.
WO15081313 describes a system for presenting virtual reality and augmented reality experiences to users. The system may comprise an image-generating source to provide one or more frames of image data in a time-sequential manner, a light modulator configured to transmit light associated with the one or more frames of image data, a substrate to direct image information to a user's eye, wherein the substrate houses a plurality of reflectors, a first reflector of the plurality of reflectors to reflect transmitted light associated with a first frame of image data at a first angle to the user's eye, and a second reflector to reflect transmitted light associated with a second frame of the image data at a second angle to the user's eye.
WO15184412 discloses a system for presenting virtual reality and augmented reality experiences to users. The system may comprise a spatial light modulator operatively coupled to an image source for projecting light associated with one or more frames of image data, and a variable focus element for varying a focus of the projected light such that a first frame of image data is focused at a first depth plane, and a second frame of image data is focused at a second depth plane, and wherein a distance between the first depth plane and the second depth plane is fixed.
Virtual and augmented reality applications should provide a convincingly realistic as well as convenient experience to the user, as close as possible to three-dimensional real life. When contemplating the world in real life, a person sees objects either in focus or out of focus, depending on the person's gaze direction/focus and distance from the instantaneous focal plane. Every object we look at directly is in focus, because we accommodate our vision to focus on this gaze-centric object. Every object in the environment that we do not look at directly and that lies in a different focal plane, called a world-centric object, is out of focus and looks blurred, because light coming from it is not focused on the retina of our eyes, which are accommodated to focus light coming from the object that we are contemplating.
Unlike the technique of the present invention, as will be detailed below, some virtual and/or augmented reality systems utilize an "extended depth of focus" principle, where the user sees all objects in focus irrespective of their distance from the user and of the user's accommodation. This effect is achieved by reducing the exit pupil of the optical system to a level at which the depth of focus covers a significant accommodation diopter range.
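As a rough illustration of this principle (a purely geometric estimate with assumed representative values, not figures taken from the cited art): a defocus of $\Delta D$ diopters viewed through an exit pupil of diameter $p$ produces a retinal blur disc of angular diameter $\theta \approx p\,\Delta D$, so the total depth of focus for a blur tolerance $\theta_{\max}$ is

$$\Delta D_{\text{total}} \approx \frac{2\,\theta_{\max}}{p}.$$

For example, an exit pupil of $p = 0.5\,\text{mm}$ with $\theta_{\max} \approx 1\,\text{arcmin} \approx 2.9\times 10^{-4}\,\text{rad}$ gives $\Delta D_{\text{total}} \approx 1.2$ diopters, i.e. a significant slice of the eye's accommodation range appears in focus simultaneously.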
In some potential virtual or augmented reality applications, the virtual object/image should be projected at a fixed location with respect to the three-dimensional surrounding environment, whether a virtual or a real environment. The object of the virtual image should be in focus whenever the user looks directly towards the virtual object and should be blurred/out of focus whenever the user looks at a different location in the surrounding environment. For example, in construction projects, augmented reality can be usefully utilized to direct workers to the real-world locations of the different building elements, such that the building elements are superposed at fixed locations within the real-world environment watched by the workers, regardless of the workers' gaze focus.
In some other potential virtual or augmented reality applications, the virtual object/image should move with the gaze focus/direction of the user, i.e. it is projected at different locations with respect to the surrounding environment, corresponding to the gaze focus/direction of the user. In this case, the object of the virtual image should always be in focus. For example, in certain augmented reality games, the user follows a specific virtual character superimposed as moving in the surrounding real-world environment.
Conventional image projection systems for providing virtual or augmented reality to users are generally based on projecting an image towards the user's eye(s) by forming a focused image on an intermediate image plane, such that the image is perceived by the user as being located at a fixed distance (typically a few meters) in front of the user's eye(s). The depth of focus of such image projection systems is therefore large, and it is difficult to measure and accurately adjust the focal length (the distance to the intermediate plane). However, the eyes, having good accommodation functionality, remain sensitive to inaccuracies in the focal length of the image projection system; this is particularly problematic when the image is viewed with both eyes, since there may be a discrepancy between the respective focal planes at which the eyes look. In such image projection systems, the intermediate image plane has to be optically relayed to the user's eye(s), and as the intermediate image plane is typically placed at a certain finite distance in front of the eye, it is focused onto the eye's retina only when the eye focuses to that distance. Projecting images perceived at a certain finite distance from the user's eyes leads to the development of eye fatigue, and in many cases headaches, associated with the fact that while the objects in the projected image may be perceived at various distances from the eye, the image captured by the eye is actually focused at a fixed distance from the eye. This effect, known as the "vergence-accommodation conflict", generally confuses/distresses the visual sensory mechanisms in the brain, yielding eye fatigue and headaches. Furthermore, variations in the relative position and orientation of the eye with respect to the image projection system change the location at which the projected image is perceived by the user's eye and cause significant discomfort to persons using conventional virtual or augmented reality glasses.
There is a need in the art for adjusting the focus and/or location of the virtual object/image based on the specific application, such that the virtual object is in focus whenever the user is looking at it, whether it is static or mobile with regard to the surroundings, and is out of focus whenever the user is not looking directly at it, i.e. when the user is looking in a different direction and/or focusing on another spot in his field of view.
The present invention provides novel systems and methods that provide a natural and realistic virtual or augmented reality experience, in which the virtual object(s) is/are dynamically brought in and out of focus based on the specific application, as described above. Thus, a virtual object will be in focus whenever contemplated by the user and out of focus whenever it is not contemplated by the user.
The present invention also provides novel systems and methods that provide static or moving virtual objects, in and out of focus, with respect to the real/virtual surroundings, based on the specific application.
Further, the present invention provides novel systems and methods that provide real-time tracking of eye accommodation, enabling dynamic control over the focusing/blurring of the virtual object. Additionally or alternatively, the systems and methods presented may assume that the user's accommodation and vergence parameters are obtained from an eye tracking mechanism.
Thus, according to a broad aspect of the present invention there is provided an eye projection system, comprising:
an image projection system configured and operable for generating a light beam modulated to encode image data indicative of an image to be projected towards a subject's eye along a light beam propagation path;
an optical assembly being located in the light beam propagation path and configured and operable for directing the light beam between the image projection system and a retina of the subject's eye, the optical assembly comprising a light beam divergence assembly configured and operable for controllably varying focusing properties of the optical assembly and adjusting divergence of the light beam to thereby affect one or more focusing parameters of one or more portions of the image on the retina of the subject's eye.
In some embodiments, the light beam divergence assembly affects the one or more focusing parameters of one or more portions of the image by maintaining the one or more portions of the image in focus at every gaze distance and/or direction of the subject's eye.
In some embodiments, the light beam divergence assembly affects the one or more focusing parameters of one or more portions of the image by projecting the one or more portions of the image at a fixed spatial location in a field of view of the subject's eye.
In some embodiments, the eye projection system further comprises an eye focal point detection module configured and operable to continuously determine a focal length of the subject's eye and to generate eye focal point data for controlling the light beam divergence assembly. The eye focal point detection module may comprise: a light source arrangement configured and operable to illuminate the subject's eye with a collimated light beam; an optical sensor configured and operable to register the light beam reflected from the subject's retina and generate reflection data; and a camera configured and operable to capture images of the subject's eye pupil and generate pupil data, thereby enabling utilization of the reflection and pupil data to determine the focal length of the subject's eye and generate the eye focal point data. Accommodation parameters can also be obtained by various other methods, such as an auto-refractometer, gaze vector convergence point, retinal reflection parameter change detection, etc.
In some embodiments, the light beam divergence assembly comprises an optical element having a controllably variable focusing property.
In some embodiments, the optical assembly comprises a relay lens arrangement.
In some embodiments, the optical assembly comprises at least an input optical element and an output optical element, the light beam divergence assembly being configured and operable to modify a light beam effective distance between the input and output optical elements along the light beam propagation path.
In some embodiments, the light beam divergence assembly comprises an array of light beam deflectors configured and operable to direct the light beam between the input and output optical elements, the light beam divergence assembly being configured and operable to displace at least one light beam deflector of the array.
In some embodiments, at least part of the light beam divergence assembly is positioned before another optical element of the optical assembly along the light beam propagation path.
In some embodiments, the at least part of the light beam divergence assembly comprises at least two optical focusing elements displaceable with respect to each other.
In some embodiments, the at least part of the light beam divergence assembly comprises an optical focusing element having a controllably variable focusing property.
In some embodiments, the focusing element comprises a deformable membrane comprising piezoelectric material being configured and operable to converge or diverge the light beam.
In some embodiments, the at least part of the light beam divergence assembly comprises a beam splitter, a light polarizing element, a focusing element and a light beam deflector arranged sequentially along the light beam propagation path, at least one of the focusing element and light beam deflector being displaceable with respect to the other along the light beam propagation path.
In some embodiments, the eye focal point detection module comprises an eye tracking assembly configured and operable to measure gaze direction of the subject's eye and generate eye positioning data, a camera configured and operable to capture size of pupil of the subject's eye and generate pupil size data, and a controller configured and operable to utilize the eye positioning data and the pupil size data and generate the focal point data.
According to another broad aspect of the present invention, there is provided a method for determining one or more focusing parameters of one or more portions of an image on a retina of a subject's eye, the method comprising:
receiving image data input indicative of an image to be projected to a user's eye, the image data comprising information about color, intensity, distance and whether the image is gaze-centric or world-centric;
receiving, for each image datum of the image data, eye focal point data indicative of instant eye focal length;
generating, for each image datum of the image data, focusing and light beam divergence data;
generating, for each image datum of the image data, a light beam encoding each image datum based on the image data, the eye focal point data and focusing and light beam divergence data; and
projecting the light beams encoding the image data in a desired temporal or spatial order towards the subject's eye.
In order to better understand the subject matter that is disclosed herein and to exemplify how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Reference is made to
Generally, as shown, the eye projection system 100 includes: an image projection system 110 that generates light beams LB forming the image on the retina/fovea of the subject's eye; an optical assembly 120, with a light beam divergence assembly 130 included therein, which together transport the light beams to the eye and control the focus of the image; and one or more controllers 140 that control the operation of the image projection system 110 and/or the optical assembly 120, particularly the light beam divergence assembly 130, to produce the image with the required focusing on the retina/fovea of the subject's eye EYE.
The image projection system 110 is configured and operable for generating a light beam LB that is modulated by encoding image data indicative of an object/image to be projected towards a subject's eye EYE, specifically towards the retina and fovea, along a light beam propagation path LBPP. It is noted that, in general, the image projection system 110 produces one modulated light beam LB which is sequentially encoded with image data. The modulated light beam LB is then projected onto the user's eye via the optical assembly 120. The light beam LB can be configured as a laser beam with preconfigured properties, such as chromatic distribution (RGB) and intensity, in order to faithfully encode the image data indicative of the object/image to be projected. Generally, each instantaneous light beam is modulated by one image datum representing one pixel in the object/image to be projected. Therefore, for example, for projecting an image of 1280×720 pixels, at least 921,600 modulated light beams LB are encoded by the 921,600 image data pieces and projected towards the eye via an optical system that includes the optical assembly 120. The frame rate for projecting the whole image is chosen to be higher than the temporal resolution of the human eye. A detailed description of the generation of the object/image by the image projection system 110 is found in WO 15132775 and WO 17037708, both assigned to the assignee of the present invention and incorporated herein by reference.
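As a rough worked example of the data rates implied by such sequential pixel-by-pixel projection (illustrative numbers only; the 60 Hz refresh rate is an assumed value, not taken from the references):

```python
# Illustrative estimate of the sequential beam-modulation rate implied by
# pixel-by-pixel projection; the 60 Hz refresh rate is an assumed value.
width, height = 1280, 720            # image resolution from the example above
refresh_rate_hz = 60                 # assumed, above the eye's temporal resolution

pixels_per_frame = width * height    # 921,600 modulated light beams per frame
beams_per_second = pixels_per_frame * refresh_rate_hz

print(f"{pixels_per_frame:,} beams per frame")     # 921,600
print(f"{beams_per_second / 1e6:.1f} M beams/s")   # ~55.3
# Each beam must therefore be encoded within roughly 1/55.3e6 s, i.e. ~18 ns.
```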
As shown in the
The optical assembly 120 includes the light beam divergence assembly 130 which is configured and operable for controllably varying focusing properties of the optical assembly 120 such as by adjusting divergence of the light beam LB to thereby affect one or more focusing parameters of one or more portions of the image on the subject's eye EYE.
It is known that the human eye focuses on an object through eye accommodation, which involves adjusting the focal point/length of the eye such that light arriving from the object in focus converges at the focal point of the eye and produces a focused image on the retina/fovea. In other words, an observed object will be in focus if and only if the light emerging/reflected from it converges at the focal point of the subject's eye.
The light beam divergence assembly 130 affects the divergence/convergence of the light beam LB that carries the image to the subject's eye as it travels along the light beam propagation path LBPP, to dynamically cause focusing and defocusing of the image or portions thereof. Consequently, for the subject to see the image/object in focus, the light beam divergence assembly 130 is configured to maintain the one or more portions of the image/object in focus at every gaze distance and/or direction of the subject's eye, i.e. by converging the light beam LB at the focal point of the eye. Conversely, for the subject to see the image/object out of focus when not looking at it directly, exactly as in real life, the light beam divergence assembly 130 is configured to project the one or more portions of the image/object at a fixed spatial location in the field of view of the subject's eye and/or to converge the light beam LB at a location which is different from the focal point of the subject's eye. It should be noted that the image can be composed of RGB components, and the convergence of each color (R, G, B) can be controlled either simultaneously or separately.
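The following minimal control-logic sketch illustrates this gaze-centric/world-centric distinction (the function and parameter names are hypothetical and serve illustration only; they are not part of the actual controller):

```python
def target_beam_vergence_diopters(eye_accommodation_d: float,
                                  object_distance_m: float,
                                  gaze_centric: bool) -> float:
    """Vergence (in diopters) that the divergence assembly should impose on
    the beam carrying one pixel; a hypothetical illustrative helper."""
    if gaze_centric:
        # Converge the beam at the eye's instantaneous focal point:
        # the pixel stays sharp at every gaze distance/direction.
        return eye_accommodation_d
    # World-centric: keep the vergence corresponding to the object's fixed
    # distance, so the pixel blurs naturally whenever the eye accommodates
    # elsewhere, just as a real object would.
    return 1.0 / object_distance_m
```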
The one or more controllers 140 are configured and operable to generate control signals to the image projection system 110 and/or the light beam divergence assembly 130 in order to produce and direct each light beam LB that encodes each image datum, such that the image is projected on the retina/fovea with the required focus and depth of field as described above. It is noted that the eye projection system 100 can include one central controller in communication with all the elements/assemblies/subsystems included in the eye projection system 100, such that it controls the entire operation of the eye projection system 100. Alternatively, each element/assembly/subsystem, or a combination thereof, can include its own local controller that can receive input data from, or send output data to, other parts of the eye projection system 100. Therefore, whenever a controller action is mentioned throughout this application, it can originate either from a central controller or from a local controller, and even if no controller is specifically shown in a figure, it is assumed that every element/assembly/subsystem has its own local controller or is controlled by the central controller of the whole eye projection system 100. More details about the controller(s) 140 are described further below.
In the description below, various implementations are described for the optical assembly 120 and the light beam divergence assembly 130. It is noted that the specific embodiments are for illustration only and by no means limit the invention. Further, it is noted that, for simplicity of presentation, various simplifying assumptions are made, such as the assumption that the light beam LB is input to the optical assembly 120 as a collimated beam (from the image projection system 110); however, it will be appreciated by those versed in the art that the light beam LB can equally be input as a converging/diverging beam. Yet further, it should be understood that the figure-specific examples given with respect to the condition(s) of the output light beam exiting the optical assembly 120 and the eye projection system 100 towards the user's eye are illustrative and simplified, whereas any other possible condition of the output light beam can be practiced by the present invention without limitation. Moreover, it should be understood, though not necessarily or specifically shown, that the present invention is capable of producing an output light beam towards the subject's eye having any required property, such as specific divergence, convergence, frequency, amplitude, width, intensity, angle of incidence with the eye, or any combination thereof, in order to produce the required virtual or augmented reality experience, such as the three-dimensionality and focusing profile across the produced virtual image/object.
Turning to
As mentioned, the optical assembly 120 includes one or more optical elements configured and operable to transport the light beams LB indicative of the image to be projected to the subject's eye, between the exit of the image projection system 110 and the subject's eye EYE. In the described non-limiting example, the optical assembly 120 includes optical elements forming a relay lens system 122 that includes two consecutive converging lenses 122A and 124A. The lens 122A has a focal point F1, and the lens 124A has a variable focal point F2, with two positions F2A and F2B illustrated. The two lenses are arranged such that their optical axes coincide and lie along the optical axis X. It should be noted that the optical assembly 120 can include other optical elements, such as additional lenses, as required.
The optical assembly 120 includes the light beam divergence assembly 130, which includes/is formed by the second lens 124A. The second lens 124A, in this case the output lens, has a variable focusing property such that its focal point F2 can be altered and changed. As shown, the focal point F2 is shown in two positions, F2A and F2B, corresponding respectively to the two illustrated configurations 124A1 and 124A2 (dashed line) of the lens 124A. As can be understood, a lens having a variable/modifiable focal point/length can change the divergence/convergence of the light beam falling on it. The light beam divergence assembly 130 can therefore controllably adjust the focusing property(ies) of the optical assembly 120 such that the light beam passing therethrough is diverged/converged in a controllable manner. As mentioned above with respect to the configuration of the light beam LB, it should be noted that such a configuration can operate in various, not necessarily telecentric, modes.
As demonstrated, the light beam LB enters the optical assembly 120 from the side of the first lens 122A as a collimated beam parallel to the optical axis X, and therefore converges at the focal point F1 of the first lens 122A. If the second lens is in its configuration 124A1, its focal point is at F2A, which is coincident with the focal point F1, and the light beam LB will exit the second lens 124A as a collimated beam LB1 parallel to the optical axis X (as shown by the full lines). This means that the image produced by the light beam LB1 is in focus at infinity. In other words, the image produced by the light beam LB1 is going to be in focus if the subject is focusing his sight at infinity by looking at a faraway object, or out of focus if the subject is focusing his sight at a close distance. For the human eye, an object located about 6 meters or more from the subject can be considered to be at "infinity", i.e. the focus of the subject's eye does not change appreciably from about 6 meters onwards. When a human eye is focusing on infinity, the eye focal point is located on the retina at the maximal focal length of the eye, as illustrated by the eye focal point FE1. In the second illustrated situation, in which the focal point of the second lens is positioned at point F2B, the light beam LB will exit the second lens 124A as a converging beam LB2 (as shown by the dashed lines), which eventually converges at some point after the focal point F2B, for example at the focal point FE2 of the subject's eye EYE. This means that the image produced by the light beam LB2 is in focus at the location where the subject is focusing his sight. Accordingly, the light beam divergence assembly 130 of the present example includes an optical element having a controllably variable focusing property, thereby affecting the focusing parameters of one or more parts of the image. This way, it is possible to produce a realistic virtual or augmented reality scene, by producing focused images at the sight location of the subject (at the location where he focuses his sight, gaze-centric) and unfocused images at locations outside the sight location (world-centric). As can be understood from the description above, since each controlled light beam represents only part of the whole projected image, e.g. one pixel in the image, the eye projection system of the present invention enables producing images that include focused and unfocused (blurred) objects within the same projected image, i.e. simultaneously, so that a three-dimensional perception with a controlled depth of field is achievable.
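The relay behavior described above can be captured with a simple paraxial thin-lens calculation (a minimal sketch under idealized assumptions; the numerical focal lengths and lens gap are assumed values, not taken from the figures):

```python
def relay_output_image_distance_m(f1: float, f2: float, lens_gap: float) -> float:
    """Paraxial sketch of the two-lens relay: a collimated input converges at
    F1, a distance f1 behind the first lens; that point is the object for the
    second lens, at s_o = lens_gap - f1 in front of it. All values in meters.
    """
    s_o = lens_gap - f1                      # object distance for the second lens
    if abs(s_o - f2) < 1e-12:
        return float("inf")                  # F2 coincides with F1 -> collimated output
    return 1.0 / (1.0 / f2 - 1.0 / s_o)      # thin-lens image distance behind lens 2

# Assumed example: f1 = 20 mm, lens gap = 50 mm.
print(relay_output_image_distance_m(0.020, 0.030, 0.050))  # inf: case F2A, focus at infinity
print(relay_output_image_distance_m(0.020, 0.025, 0.050))  # 0.15 m: case F2B, converging beam
```

Shortening the effective focal length of the second lens thus shifts the beam's convergence point from infinity to a finite distance, which is precisely the degree of freedom the light beam divergence assembly 130 exploits.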
It should be noted that, although the above-described non-limiting examples utilize refractive optical elements, the same principles are also effective for reflective and diffractive optical elements with optical power. One of the benefits of this system is that, although it handles a plurality of light beams, its overall field is constant, which significantly simplifies the implementation requirements.
As mentioned above, the present invention also provides systems and methods for monitoring and detecting the focal point/length of the subject's eye. Such continuous detection makes it possible to control and operate the optical assembly, including the light beam divergence assembly, so as to focus or defocus the light beam produced by the image projection system at the detected focal point of the subject's eye, according to the desired result: the light beam is focused if the projected image needs to be in focus, e.g. when the subject is looking at the image, or defocused if it does not need to be in focus, e.g. when the subject is looking away from the projected image.
Reference is now made to
As shown in a first non-limiting example of
The light generated by the light beam source 152 has a spectrum that is not, or almost not, absorbed by the eye, specifically by the retina. For example, such light can be in the infrared range, so that, firstly, it does not disturb the subject even when he looks directly at the light source, because it is outside the visible spectrum, and secondly, it is not absorbed but rather scattered back out of the eye by the retina.
The optical sensor 154 included in the eye focal point detection system 150 is configured to collect and detect the light beam reflected from the subject's eye EYE. The sensor is located at a known distance SD from the pupil P of the subject's eye EYE.
In the illustrated example of
The camera 156 is configured and operable to capture images of the eye's pupil P at a predetermined rate, preferably as high as possible. The area SP of the pupil can be calculated from the pupil images.
Accordingly, the eye focal point detection system 150 is configured and operable to determine the focal point/length FL of the eye at each given time based on at least the following parameters: the area of the spot on the sensor (S1, S2), the area SP of the pupil, and the distance SD of the sensor from the pupil.
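A minimal geometric sketch of such an estimate is given below. It rests on an assumption of ours, not spelled out above: light scattered back from the retina leaves the pupil converging toward the eye's instantaneous accommodation point (as in retinoscopy), so the spot and pupil radii relate by similar triangles; the sketch also only handles the branch where the accommodation point lies beyond the sensor.

```python
import math

def eye_accommodation_distance_m(spot_area_m2: float,
                                 pupil_area_m2: float,
                                 sensor_distance_m: float) -> float:
    """Estimate the distance L at which the eye is accommodated.

    Assumed retinoscopy-like geometry: the beam reflected from the retina
    exits the pupil converging toward the accommodation point at distance L,
    so the spot radius on a sensor at distance SD obeys
        r_spot = r_pupil * (1 - SD / L),   for L > SD.
    """
    r_pupil = math.sqrt(pupil_area_m2 / math.pi)
    r_spot = math.sqrt(spot_area_m2 / math.pi)
    ratio = r_spot / r_pupil
    if ratio >= 1.0:
        # Spot as large as the pupil: the exiting beam is collimated,
        # i.e. the eye is accommodated at (effectively) infinity.
        return float("inf")
    return sensor_distance_m / (1.0 - ratio)

# Assumed example: 4 mm^2 spot, 9 mm^2 pupil, sensor 30 mm from the pupil.
print(eye_accommodation_distance_m(4e-6, 9e-6, 0.030))  # 0.09 m
```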
The controller 140A (or the central controller 140 of the eye projection system 100) can be configured for operating each or some of the light beam source 152, the light sensor 154, the camera 156 and the beam splitter/combiner 158. The controller 140A receives data from the camera 156 and the optical sensor 154 and calculates the instant eye focal length FL. The controller 140A generates output data indicative of the eye focal length FL and sends the output data to the central controller 140, or to other local controllers as the case may be, in order to control the light beam divergence assembly 130 and adjust the light beam divergence, so as to control the focusing properties of the optical assembly 120 and affect the focusing parameters of one or more parts of the image projected to the subject's eye EYE.
It should be noted that, in the example of
Turning now to
As shown in the figure, all of the elements in the system 150 have the same functionality as described in
In
Turning to
The horizontal error, ErrorH (the extent by which the spot is deviated horizontally from the center), and the vertical error, ErrorV (the extent by which the spot is deviated vertically from the center), can be expressed in terms of the sensor readings.
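For a four-quadrant detector, which is a common choice for measuring a spot's deviation from center (this layout is an assumption on our part, not stated above), the normalized error signals would typically take the form, with $Q_A, Q_B, Q_C, Q_D$ denoting the intensities read at the upper-left, upper-right, lower-left and lower-right quadrants:

$$\text{Error}_H = \frac{(Q_B + Q_D) - (Q_A + Q_C)}{Q_A + Q_B + Q_C + Q_D}, \qquad \text{Error}_V = \frac{(Q_A + Q_B) - (Q_C + Q_D)}{Q_A + Q_B + Q_C + Q_D}.$$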
It is possible to plot the ErrorH and ErrorV against the voltage read at the sensor 154, as shown in
Reference is now made to
Reference is now made to
At 410, the eye projection system 100 receives, via its image projection system 110, image data indicative of an image to be projected to the subject's eye EYE. For example, the image is composed of pixels (whether a one-, two- or three-dimensional image), and each pixel in the image is represented by an image datum. The image datum of each pixel includes information such as color, intensity, distance and nature of presentation (whether it should be projected as a gaze-centric image or a world-centric image). Optionally, the user chooses whether the system should operate in a gaze-centric mode or in a world-centric mode.
In the following steps, the image projection system 110 generates a series of light beams, encodes them with the corresponding image data and projects them towards the subject's eye EYE via the optical assembly 120. Accordingly, if the image is composed of Z pixels, Z corresponding encoded light beams are generated.
At 420, for each image datum, the eye projection system receives data from the eye focal point detection system 150 about the instant eye focal length.
At 430, for each image datum, the eye projection system generates data to control the light beam divergence assembly 130 in order to adjust the corresponding light beam's divergence and control its focusing on the subject's eye based on whether it represents a gaze-centric or a world-centric image.
At 440, for each image datum, the image projection system 110 generates a light beam encoding the image datum based on the image information (color, distance, etc.), the eye focal point data and the focusing and light beam divergence data.
At 450, the eye projection system 100 projects the light beams forming the image in a desired temporal or spatial order. Typically, the image data represent a sequential order of the pixels forming the image, and the image data is projected in this sequential order. However, the eye projection system can project light beams of parts of the image in an order different from the sequential order of the pixels forming the image.
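Putting steps 410 to 450 together, a minimal per-pixel control loop might look as follows (a sketch only; all class, function and attribute names are hypothetical placeholders for the modules 110, 130 and 150 described above):

```python
from dataclasses import dataclass

@dataclass
class PixelDatum:
    """One image datum (step 410): color, intensity, distance, presentation mode."""
    rgb: tuple[float, float, float]
    intensity: float
    distance_m: float      # intended distance of this pixel's content
    gaze_centric: bool     # True: follows the gaze; False: fixed in the world

def project_frame(pixels, focal_detector, divergence_assembly, projector):
    """Hypothetical per-frame loop over the projection pipeline."""
    for datum in pixels:
        # Step 420: instantaneous eye accommodation from the detection system 150.
        eye_d = focal_detector.read_accommodation_diopters()

        # Step 430: divergence setting -- gaze-centric pixels track the eye's
        # accommodation (always sharp); world-centric pixels keep the vergence
        # of their fixed distance (blurring when the eye looks elsewhere).
        vergence_d = eye_d if datum.gaze_centric else 1.0 / datum.distance_m
        divergence_assembly.set_vergence_diopters(vergence_d)

        # Steps 440-450: encode and project the light beam for this pixel.
        projector.emit_beam(datum.rgb, datum.intensity)
```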
Number | Date | Country | Kind
---|---|---|---
252585 | May 2017 | IL | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IL2018/050582 | 5/28/2018 | WO | 00