The present invention relates to a Smart Contact Lens (SCL) system. More specifically, the invention relates to an Augmented Reality (AR)/Virtual Reality (VR) Smart Contact Lens (SCL) system with an embedded waveguide that partly or completely shifts the responsibility of bringing different parts of an image into focus from the eye to the display, by shifting the image so that the sought part is positioned at the centre of the eye's focus/retina.
AR/VR devices normally carry a series of sensors that enable the said AR/VR device to detect and track an environment with depth sensors, CMOS/CCD sensors, image sensors, LIDAR, infrared, orientation sensors and so on. The AR/VR system creates a 3D model of the surrounding reality from which a virtual space is then created. Virtual spaces contain virtual objects associated with the 3D Models/3D Meshes of such objects, sourced from reality. These virtual objects are then placed into such virtual spaces to associate the 3D Model/3D Mesh with the surrounding reality. The virtual object is superimposed on the base virtual space layer. In the case of an AR device, the base layer may be the real-world view, whereas in the case of a VR device it is the virtual world. When the user's gaze is directed to the corresponding location of the environment, the virtual object shifts on the display into the appropriate position, corresponding to the external position with which said virtual object is associated in the 3D Model/3D Mesh of the environment.
Generally, for AR and VR platforms there are two modes of image projection/delivery utilized, and both are useful. In one mode, the image may be projected on an embedded display with regard to a 2-dimensional (2D) frame of reference on the display; that is, according to the display's x and y coordinates. This is the method of projection that projects a "static" image onto an NED display in an existing AR/VR head-mounted display irrespective of the position of the head and of where the user is looking. An example would be the Microsoft HoloLens running Windows 10, which projects a stopwatch or a timer in the bottom right corner of the peripheral vision on the display, while the "START" button is provided to the right of the bottom left corner. Regardless of the direction in which the user of the AR/VR headset turns their head and where the eyes saccade to, the position of an image on the display, a priori, does not change. Human vision can be discretely divided into "in focus" vision, "near focus" vision and "peripheral vision". Focused vision covers up to about 5 degrees of visual angle, up to about 20 degrees of visual angle represents near-focus vision, and the rest is peripheral vision.
With peripheral vision, the user registers interest in parts of observable reality and saccades to them, to bring the sought parts of observable reality from peripheral to focused vision; that is, by performing saccades, the user refocuses on different parts of a stationary image. When light reflected from an object enters the retina at a direct angle, the object is seen as "in focus". At each saccade fixation point, the visual cortex takes an image snapshot of the current "in focus" image and, after collecting several "in focus" 2D images, the visual cortex stitches the multitude of 2D images into the 3D totality of visual perception, thereby creating the visual experience.
Now, a 2D Frame of Reference (FoR) refers to the two-dimensional geometry of the embedded display and is defined by the x and y axes, that is, the vertical and horizontal meridians of the eye; a 3D Frame of Reference refers to real-world three-dimensional geometry and is defined by a Cartesian coordinate system consisting of X, Y, and Z axes, that is, the 3-dimensional geometry of the external environs of the user.
A 2-dimensional FOR is used in AR/VR applications where imagery needs to be superimposed onto the display with regard to the display's geometry; such imagery would normally (on an AR/VR headset display or laptop display) be stationary on the display irrespective of the external environment and of what the user is gazing at. For example, an AR or VR display application executing the Windows, Android or iOS operating system shows a frame on the display wherein the bottom left corner shows the "Start" button, and to the right of the "Start" button an up-and-down menu is accessible. There may be multiple application icons appearing, icons that could act as a UI (user interface) to specific applications, much like iPhone or Android phone application icons that could be selected to launch the application. A variety of UI components could be presented to the user; such UI components would be stationary relative to the display, and their disposition on the display would not change relative to the external 3D environs regardless of which part of the external environment the user is looking at.
On the other hand, a 3-dimensional (3D) FOR is used for AR and VR applications where virtual object imagery needs to be superimposed onto the display with regard to the external geometry of the environment. It has a Cartesian coordinate system consisting of X, Y, and Z axes, and imagery is spatially associated with the external environment and may be geometrically associated with external space. The image would be placed into a virtual space corresponding to the external environs. The following is a non-limiting and exemplary implementation provided here for illustration only; the process may consist of several steps: map the local environment, for example with the SLAM (Simultaneous Localization and Mapping) methodology, to map the current environs of the user, such as a room, street, etc. SLAM could be complemented by running the ML-RANSAC algorithm.
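The following is an exemplary, non-limiting sketch (in Python) of the 3D FOR idea described above: a virtual object anchored at a fixed world coordinate is re-projected onto the display from a SLAM-estimated pose, so that the overlay stays locked to the external 3D geometry rather than to the display. The pinhole projection model and the intrinsic values used here are illustrative assumptions only.

```python
# Minimal sketch (assumed pinhole model and hypothetical pose source): place a
# virtual object at a fixed world coordinate and recompute its display position
# from the SLAM-estimated eye/camera pose.
import numpy as np

def project_world_point(p_world, R_wc, t_wc, fx, fy, cx, cy):
    """Project a 3D world point into 2D display coordinates.

    R_wc, t_wc: world-to-camera rotation (3x3) and translation (3,), e.g. from SLAM.
    fx, fy, cx, cy: intrinsics of the assumed pinhole display model.
    Returns (u, v) in display pixels, or None if the point is behind the viewer.
    """
    p_cam = R_wc @ np.asarray(p_world, dtype=float) + t_wc
    if p_cam[2] <= 0:          # behind the eye: the object stays in virtual space only
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return float(u), float(v)

# Example: a virtual label anchored 2 m in front of the mapped room origin.
anchor = [0.0, 0.0, 2.0]
R = np.eye(3)                  # identity pose for illustration
t = np.zeros(3)
print(project_world_point(anchor, R, t, fx=500, fy=500, cx=320, cy=240))
```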
Normally, in conventional vision systems, whenever an image is displayed in front of the user, such as on a TV, a tablet or any other head-mounted display, the location of the display remains constant and only the eye gaze changes direction relative to a stationary part of the image on the display, to focus on other portions of the display and thereby ingest the full image data. The image position on external (not on-the-eye) displays is stationary and the eyes saccade to refocus on different parts of an image, so that the multitude of focused-vision image snapshots taken at different saccade fixation points can be stitched together by the brain's visual cortex into the totality of visual perception.
In 2D FOR, however, a transparent, semi-transparent or non-transparent display is embedded into the contact lens, and an image or a video is superimposed onto the real-world objects viewed in front of the user. Such an embedded display is naturally and spatially associated with, and locked in relative to, the position of the human eye. Because the embedded display shifts with every movement of the eye (saccade), only the part of the image present at the center of the embedded display will be in sharp focus, and the user will not be able to perceive other parts of the superimposed image in clear focus. Further, eye position adjustments cannot bring other parts of the image into focus, because the embedded display moves with the movement of the human eye and the image disposition on the display, a priori, does not change. This is a major problem for an "on the eye" display system, as the user cannot naturally refocus on different parts of an image by utilizing the natural saccading mechanism.
To overcome the aforementioned problem, an image on the embedded display may be shown at a center position in order to bring the entire image into focus. In order to display the image at the center position, the image needs to appear as being far away from the user. However, this approach presents a number of limitations: 1) the amount of information and the image size displayed in focus are minimal, and 2) there is no peripheral view available, which further limits the usefulness of such an approach. Moreover, with an on-the-eye display embedded into the contact lens, there is a problem: every time the user tries to saccade to bring parts of the superimposed image from the peripheral view into the "in focus" view, the user moves the eye to refocus. With every move of the eye, the contact lens moves and consequently the display moves in synchronicity and proportionality with the eye. Hence, the superimposed images also shift proportionally and in the same direction; as a result, the user cannot refocus and saccading fails.
Hence, to render an embedded contact lens display useful and practical in 2D FOR for a human, it is critically important that the above-described limitation be transcended. The solution propounded in the present invention is to partly or completely shift the responsibility of bringing different parts of the displayed image into focus from the eyes to the display. To be in focus, the section of an image must be situated in front of the retina of the eye, at the center of the display. The image has to shift on the display so as to position the section of the image of interest at the center of the display, bringing it into focus.
The present disclosure overcomes one or more shortcomings of the prior art and provides additional advantages discussed throughout the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.
To overcome the aforementioned problem for superimposed stationary images, the following solution is propounded in the present patent application: to partly or completely shift the responsibility of bringing different parts of the displayed image into focus from the eyes to the display. To be in focus, the section of an image must be situated in front of the retina of the eye, at the center of the display. The image has to shift on the display so as to position the section of the image of interest at the center of the display, bringing it into focus.
According to one embodiment, at the beginning a zero point reference is determined by tracking the eye's position for the current image overlaid onto the display. With every shift in the eye's position, the image overlay may be recomputed accordingly so that the part of the image sought by the eye is displayed at the center of the display, in front of the eye's retina, and therefore displayed in focus.
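A minimal, non-limiting sketch of this tracking loop follows (in Python). The degree-to-pixel constant and the sign convention are illustrative assumptions rather than parameters of the invention; the point is only that the overlay offset is recomputed from the gaze change relative to the zero point reference.

```python
# Minimal sketch of the zero-point-reference compensation idea, under assumed
# units: the eye tracker reports gaze angles in degrees, and PIXELS_PER_DEGREE
# is a hypothetical calibration constant of the embedded display.
PIXELS_PER_DEGREE = 24.0        # assumed display calibration constant

class OverlayCompensator:
    def __init__(self, gaze_zero_deg):
        self.zero = gaze_zero_deg            # (yaw, pitch) at the zero point reference
        self.offset_px = (0.0, 0.0)          # current image shift on the display

    def on_gaze_sample(self, gaze_deg):
        """Recompute the overlay offset from the gaze change since the zero point."""
        d_yaw = gaze_deg[0] - self.zero[0]
        d_pitch = gaze_deg[1] - self.zero[1]
        # Shift the image opposite to the saccade so the sought part moves
        # toward the centre of the display, in front of the retina.
        self.offset_px = (-d_yaw * PIXELS_PER_DEGREE, -d_pitch * PIXELS_PER_DEGREE)
        return self.offset_px

comp = OverlayCompensator(gaze_zero_deg=(0.0, 0.0))
print(comp.on_gaze_sample((5.0, -2.0)))      # e.g. (-120.0, 48.0) pixels
```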
According to another embodiment, a "base point reference" may be selected by the user with any detectable signal or by performing certain actions. For example, the user may trigger taking the base point reference by clapping his hands, performing any other hand gesture, blinking an eye, or giving any predefined signal that would be captured by the image capture feature and processed to identify the signal. By tracking changes in the focus of the eye, the system may determine whether the eye is focused on the image superimposed on the display or on the real-world objects in front of the eye.
According to another embodiment, the user may trigger taking the base point reference by tracking the focus of the eye, in real-time, to determine whether the eye is focusing on objects at a distance or is focused on the image on the display. This method may be used to switch between Frames of Reference and to register an anchor point at the same time. A variety of other detectors of a switch in gaze between an outside real object and the overlaid image are possible. The methods given above are exemplary only and should not be taken as limiting the scope of the invention.
According to one embodiment, the system may predefine, or may dynamically determine, where the base point reference should be and when the tracking against the said point reference should stop. The system may stop tracking the position of the eye and correlating changes of the eye's vector to the image disposition on the display at the stop point. The stop point may be signalled with hand gestures, voice or other signals. The stop signal may also be signalled by a change of focus from the image on the display to the real-world objects in front of the user. There may be a variety of other ways to detect a stop point. For 2D FOR, once a stop signal is identified, the image on the display may return to its original disposition on the display, regardless of the position of the eye.
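A minimal, non-limiting sketch of the base-point/stop-point logic is given below (Python), assuming a hypothetical focus-distance threshold as the stop signal; the threshold value and signal names are illustrative assumptions only.

```python
# Minimal sketch: tracking starts when a trigger is detected (clap, blink,
# gesture, focus switch, ...), and stops when the eye's focus leaves the
# overlaid image for the real world, at which point the otherwise static image
# returns to its original disposition on the display.
OVERLAY_FOCUS_MAX_M = 1.0       # assumed: focus nearer than this means "on overlay"

class TrackingSession:
    def __init__(self, original_offset=(0, 0)):
        self.original_offset = original_offset
        self.active = False

    def on_trigger(self, gaze_deg):
        """Base point reference taken; start correlating eye vector to image shift."""
        self.active = True
        self.base_gaze = gaze_deg

    def on_focus_sample(self, focus_distance_m):
        """Stop signal: the eye refocused on real-world objects in front of the user."""
        if self.active and focus_distance_m > OVERLAY_FOCUS_MAX_M:
            self.active = False
            return self.original_offset      # image returns to its original disposition
        return None

session = TrackingSession()
session.on_trigger(gaze_deg=(0.0, 0.0))
print(session.on_focus_sample(3.5))          # -> (0, 0): restore original layout
```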
According to one more embodiment, the contact lens system, running in 2D FOR mode, tracks focus and/or saccades (eye gaze vector changes) and determines what part of an overlaid image the user is attempting to focus on. In response to an attempted refocus (saccade), the system shifts the image on the screen so that the sought-after portion of the image comes into focus at the center of the on-the-eye display. This methodology allows natural use of the eyes with an on-the-eye display to browse 'static' images in 2D FOR. The present solution is also a prerequisite for an SCL-based system of control where UI components could be placed statically across the peripheral view, in 2D FOR, and could be selected and triggered by an eye saccade only.
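The following exemplary, non-limiting sketch illustrates how such statically placed UI components could be selected by a saccade alone; the component names, their layout in gaze-angle space, and the angular hit radius are illustrative assumptions, not features recited by the invention.

```python
# Minimal sketch of 2D-FOR UI selection by saccade alone: UI components sit at
# fixed display positions in the peripheral view; when a saccade endpoint lands
# close enough to one of them, that component is treated as selected/triggered.
HIT_RADIUS_DEG = 2.0            # assumed selection tolerance in visual degrees

UI_COMPONENTS = {               # hypothetical static layout in gaze-angle space
    "start_button": (-18.0, -12.0),
    "timer_widget": (18.0, -12.0),
}

def component_hit_by_saccade(saccade_end_deg):
    """Return the UI component, if any, at the saccade fixation point."""
    x, y = saccade_end_deg
    for name, (cx, cy) in UI_COMPONENTS.items():
        if (x - cx) ** 2 + (y - cy) ** 2 <= HIT_RADIUS_DEG ** 2:
            return name
    return None

print(component_hit_by_saccade((-17.5, -11.2)))   # -> "start_button"
```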
FIGS. 5 and 6 disclose a sequential "walk" of the eye's gaze through the overlaid, otherwise static image, according to 2D FOR, describing one embodiment of the invention.
The foregoing summary, as well as the following detailed description of certain embodiments of the subject matter set forth herein, will be better understood when read in conjunction with the appended drawings. As used herein, an element or step recited in the singular and preceded with the word “a” or “an” should be understood as not excluding the plural form of said elements or steps, unless such exclusion is explicitly stated. In this document, the term “or” is used to refer to a non-exclusive or, unless otherwise indicated. Furthermore, references to “one embodiment” are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments “comprising” or “having” an element or a plurality of elements having a particular property may include additional such elements not having that property.
As used herein, the terms "software", "firmware" and "algorithm" are interchangeable, and include any computer program stored in memory for execution by a computer, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory or any other type of memory. In one embodiment, memory may be implemented as a binary system; in another embodiment, memory may be implemented as a quantum system. The present disclosure should not be construed as being limited to any specific memory system or architecture; the presently disclosed system would work with any memory architecture and memory arrangement. The above memory types are exemplary only, and are thus not limiting as to the types of memory usable for storage of a computer program.
As used herein, the term image refers to a dataset containing color information representing an image. An image may be synthetically produced by computer software or hardware, or may be taken with an image sensor. The set of instructions may be in the form of a software program, which may form part of a tangible non-transitory computer readable medium or media. The software may be in various forms such as system software or application software. Further, the software may be in the form of a collection of separate programs or modules, a program module within a larger program or a portion of a program module. The software also may include modular programming in the form of object-oriented programming. The processing of input data by the processing machine may be in response to the operator's commands, or in response to results of previous processing, or in response to a request made by another processing machine.
The various embodiments and/or components, for example, the modules, elements, or components and controllers therein, also may be implemented as part of one or more computers or processors. The computer or processor may include a computing device, an input device, a display unit and an interface, for example, for accessing the Internet or Intranet. The computer or processor may include a microprocessor. The microprocessor may be connected to a communication bus. The computer or processor may also include a memory. The memory may include Random Access Memory (RAM) and Read Only Memory (ROM). The computer or processor further may include a storage device, which may be a hard disk drive or a removable storage drive such as an optical disk drive, solid state disk drive (e.g., flash RAM), and the like. The storage device can also be other similar means for loading computer programs or other instructions into the computer or processor. The processor may have onboard memory, or memory may be remotely situated.
As used herein, the term “computer” or “module” may include any processor-based or microprocessor-based system including systems using microcontrollers, reduced instruction set computers (RISC), application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), graphical processing units (GPUs), logic circuits, and any other circuit or processor capable of executing the functions described herein. The above examples are exemplary only, and are thus not intended to limit in any way the definition and/or meaning of the term “computer”.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific embodiments in which the subject matter disclosed herein may be practiced. In one embodiment the computer may be a binary system computer; in another embodiment the computer may be a quantum system computer. The present disclosure should not be construed as being limited to a specific type of computer architecture or system; instead, the terms computer or processor should be taken to mean any computing or processing capability of any kind. These embodiments, which are also referred to herein as "examples," are described in sufficient detail to enable those skilled in the art to practice the subject matter disclosed herein. It is to be understood that the embodiments may be combined or that other embodiments may be utilized, and that structural, logical, and electrical variations may be made without departing from the scope of the subject matter disclosed herein. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the subject matter disclosed herein is defined by the appended claims and their equivalents.
The terminology used in the present disclosure is for the purpose of describing exemplary embodiments and is not intended to be limiting. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, operations, elements, and/or components, but do not exclude the presence of other features, operations, elements, and/or components thereof. The method steps and processes described in the present disclosure are not to be construed as necessarily requiring their performance in the particular order illustrated, unless specifically identified as an order of performance.
In the event an element is referred to as being "on", "engaged to", "connected to" or "coupled to" another element, it may be directly on, engaged, connected or coupled to the other element, or intervening elements may be present. On the contrary, in the event an element is referred to as being "directly on," "directly engaged to", "directly connected to" or "directly coupled to" another element, there may be no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion. Further, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, and/or sections, these elements, components, regions, and/or sections should not be limited by these terms. These terms may only be used to distinguish one element, component, region, or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context.
The term SCL means Smart Contact Lens; a contact lens worn over the cornea of the eye with a variety of embedded electronic, electro-optical or optical components. The term Active Contact Lens should be taken to be synonymous with smart contact lens. For the purposes of the present patent application, the word "transceiver" shall be defined as any device that is capable of either both transmission and reception, or transmission only, or reception only of information signals. Multimedia source data stream—initial or source data set containing data of different types. AR—Augmented Reality, VR—Virtual Reality and MX—Mixed Reality.
Term Saccade means—a rapid movement of the eye between fixation points. Saccades are performed to refocus on different parts of observable reality in order to bring different parts of observable reality from peripheral vision into the focused vision (to the central part of the retina).
GC—grating coupler; may be an input grating coupler or an output grating coupler.
Input coupler grating (aka Input grating coupler) is an optical component that is generally integrated into an optical substrate and is designed to accept/receive light from a display or projector. It may be built of various materials with a variety of methods.
Out coupler grating (aka out grating coupler) is an optical component that is generally integrated into the optical substrate and is responsible for redirecting light rays into the eye of the wearer. It may be built of various materials with a variety of methods. It is also known as an output grating.
Multimedia data stream-data set containing data of different types, for example, an audio, an image, a video, a text and any other types of data. Multimedia stream—data set containing data of different types, for example, audio, image, video, text and other types of data.
Pre-processing—connotes a preliminary processing step performed in order to carry out initial analysis or classification of the dataset. Pre-processing may also refer to manipulation, dropping or enrichment of data before it is used, in order to ensure or enhance performance.
Image capture sensor—for the purposes of the present patent application the term should be interpreted to mean any sensor capable of registering light condition and to be used for either image or video capture. Term Image capture device—should be interpreted to mean to have identical meaning as term Image capture sensor. Term Image sensor for the purposes of present patent application should be interpreted to mean and to be semantically equivalent to image capture sensor.
Video capture sensor—for the purposes of present patent application the term should be interpreted to mean any sensor capable of registering light condition and to be used for either image or video capture. Term Video capture sensor—should be interpreted to mean to have identical meaning as term Video capture device. Term Video sensor for the purposes of present patent application should be interpreted to mean and to be semantically equivalent to video capture sensor.
The term electronic sensor, for the purposes of present patent, may include electro mechanical sensor or electronic sensors that are micro scale (MEMS) or nano scale (NEMS). The term electronic component(s), for the purposes of present patent, may include electro mechanical component(s) or electronic component(s) that are micro scale (MEMS) or nano scale (NEMS).
For the purpose of description of the present disclosure, the term “embedded display” may be used interchangeably with the terms “integrated display” and “embedded display component” and “embedded display module” and “display unit” and “display module”. For the purpose of description of the present disclosure, the term “display projector component” is used to describe one of display components, as one “display component”. Term “display component” may refer to one of components of display module.
For the purpose of description of the present disclosure, the term “waveguide module” may be used interchangeably with the terms “waveguide display” and “waveguide component” and “waveguide based display”.
For the purpose of description of the present disclosure, the term "processor" may be used interchangeably with the terms "processor component" and "processor module" and "processor unit" and "processing unit" and "processing module". The "processor" may be situated onboard the contact lens (integrated arrangement) or be located remotely on a paired mother device that performs the computation.
For the purpose of description of the present disclosure, the term "orientation module" may be used interchangeably with "orientation unit" and "orientation component".
For the purpose of description of the present disclosure, the term "power module" may be used interchangeably with "power unit" and "power component".
For the purpose of description of the present disclosure, the term "communication module" may be used interchangeably with "communication unit" or "communication device". The term "communication component" refers to a constituent part of a communication device. A communication module may consist of a number of communication components and devices. The term "communication component" may refer to an integrated, embedded component of a communication device and may consist of a transceiver coil and a controller. The term "communication component" may also refer to an off-board component of a communication device, integrated into the paired "parent" device that carries the computational power, and may consist of a transceiver coil and a controller. For the purpose of description of the present disclosure, the term "focal point sensor" may be used interchangeably with the terms "focus determination component", "focus determination device" or "focus determination module". For the purpose of description of the present disclosure, the term "focal point sensing" may be used interchangeably with the terms "focus determination", "focus tracking", "focal point determination" and "focal point tracking".
The term "focus tracking" refers to a multitude of methods that may be used to track focal changes in the eye, either by observing changes in the shape of the crystalline lens of the eye or by tracking Purkinje images, particularly Purkinje image P3, which provides the reflection of the outer (anterior) surface of the lens. It is also possible to track focal changes indirectly, by observing and tracking where the eye looks, determining the distance, and tracking changes in the distance of the objects looked at. Other methods may also be used.
For the purpose of description of the present disclosure, the term “eye position” may be used interchangeably with one of the terms from: “eyes gaze orientation”, “eyes orientation”, “eyes direction”, “eyes directional orientation”, “eyes vector”, or “gaze vector”.
For the purposes of description of the present disclosure, the term “shift factor” may be used interchangeably with the terms “shift adjustment factor” and “display adjustment factor”. The “shift factor” refers to the directional vector and extent of the shift of an image on the display.
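An exemplary, non-limiting sketch of how a shift factor (direction vector plus extent) might be derived from a change in gaze angle follows; the angular pixel pitch of the display and the sign convention are illustrative assumptions.

```python
# Minimal sketch of a "shift factor" as defined above: a directional unit vector
# plus an extent in display pixels, derived from the gaze-angle change.
import math

PIXELS_PER_DEGREE = 24.0        # assumed display calibration constant

def shift_factor(gaze_prev_deg, gaze_now_deg):
    """Return (direction_unit_vector, extent_px) of the required image shift."""
    dx = -(gaze_now_deg[0] - gaze_prev_deg[0]) * PIXELS_PER_DEGREE
    dy = -(gaze_now_deg[1] - gaze_prev_deg[1]) * PIXELS_PER_DEGREE
    extent = math.hypot(dx, dy)
    if extent == 0:
        return (0.0, 0.0), 0.0
    return (dx / extent, dy / extent), extent

direction, extent = shift_factor((0.0, 0.0), (3.0, 4.0))
print(direction, extent)        # (-0.6, -0.8) and 120.0 pixels
```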
For the purposes of description of the present disclosure, the term Frame of Reference (FoR) refers to an observer-centric coordinate system. A 2D Frame of Reference refers to the two-dimensional geometry of the embedded display and is defined by the x and y axes; a 3D Frame of Reference refers to real-world three-dimensional geometry and is defined by the x, y and z axes. A 3D Frame of Reference allows, for example, the system to track an external environment, creating a 3D model of reality, and to associate the superimposed image with the external (x, y, z) geometry in the virtual space represented by the 3D model of the external reality, allowing an image to stay stationary relative to the external geometry.
Generally speaking, a 2D Frame of Reference is used for stationary images that normally do not change position on the display. 2D FOR is a 2-dimensional frame of reference; 3D FOR is a 3-dimensional frame of reference. For the purposes of description of the present disclosure, the term Frame of Reference (FOR) refers to an observer-centric coordinate system, and there are two Frames of Reference to be discerned: the 2D frame of reference and the 3D frame of reference.
For the purposes of the present disclosure, the terms "base reference" and "base point reference" and "zero point reference" and "anchor point" refer to the relative position of the eye and the corresponding image disposition on the display that can be deemed the starting point for subsequent eye gaze orientation tracking and corresponding image position adjustments on the display.
Virtual Space—a perceived representational 3D or 2D space created by computer graphics software, usually characterized by a Cartesian coordinate system consisting of X, Y, and Z axes. It is a digital environment, frequently a multiuser environment, and is further characterized by interactivity. Objects can be created and placed inside the environment. For the purposes of the present disclosure, the terms 'Virtual Reality' and 'Virtual Environment' (VE) are used interchangeably to describe a computer-simulated place or environment with which users can interact via an interface.
In the present patent application, an AR/VR Smart Contact Lens (SCL) system with an embedded waveguide-based display is disclosed. The SCL further integrates an imaging system that enables the user to focus, refocus or accommodate on the imagery superimposed. It should be understood that the use of the term "waveguide based display" implies any type of micro display system where light is propagated via a waveguide. There are multiple implementations of waveguide display technology in existence, and new ones are currently being developed. Generally, a waveguide is used to conduct electromagnetic energy unidirectionally with minimal loss of signal strength, instead of letting the signal spread out and quickly attenuate. A waveguide represents a medium and a channel where a signal can propagate while staying within the confines of the waveguide, similar to the propagation of a sound wave in an organ pipe.
Presently, waveguide-based transparent, semi-transparent or non-transparent displays or projectors are realized with silicon photonics by integrating various electronic, electro-optical and optical components. In some implementations of waveguide displays there is an intermediate grating that performs a variation of light conversion or modulation; in some displays it is called a folding grating. Specifically for AR near-the-eye displays (NED) there is a dual requirement for the display: the optics needs to be transparent or semi-transparent to enable the user to see through the optical component and see what is in front of the user, as well as to be able to transmit light and reflect it back to the user. In AR optical systems, an optical combiner is used to combine a "see through" view and an overlay of an image from the AR display.
There are several possible implementations of the micro display/projector component being used or explored, both passive and active display technologies, namely: a micro OLED display, a micro LED panel, or indirect light illumination on a liquid-crystal-based display, the likes of a transmissive, reflective or transflective LCD or a reflective LCOS. Other possible implementations of the micro display are a digital micromirror device (DMD) and a laser beam scanner (LBS); other display technologies may also be utilized. Many other types of projector displays may be used.
There are multiple types of waveguides. In one exemplary, non-limiting embodiment an "array waveguide" may be used. Arrayed Waveguide Gratings (AWGs) are planar optical devices that are frequently used as multiplexers or demultiplexers. An array of waveguides has imaging and dispersive properties. AWGs are also known as Phased Arrays (PHASARs) or Waveguide Grating Routers (WGRs), as used in the telecom industry. An AWG may be implemented with a linear emitter coupled to a waveguide array where liquid-crystal (LC) switches are formed, giving line-by-line control of light. The process works in the following manner: a light signal emission by the emitter array, an injection into the waveguide array, and the extraction with the use of LC switches and the formation of an image with a multitude of LC switches.
In one exemplary, non-limiting embodiment a geometric waveguide may be used. In one exemplary, non-limiting embodiment a diffractive waveguide may be used. Advances in diffractive optics have made the design, fabrication and manufacture of diffractive optical elements (DOEs) economically viable and affordable. The technological parameters of DOEs that have been achieved so far, and are being further improved, are befitting for AR and VR use. Generally, a diffractive waveguide consists of several optical and electro-optical components: a waveguide display comprises a micro projector/display and an optical component comprising an input coupler (coupler-in), also known as an input grating coupler (GC), a waveguide, and an output coupler (coupler-out), also known as an out coupler grating. The input grating coupler receives rays of light from the display or projector and optically redirects and transfers the light via the waveguide towards the out coupling grating, whereas the output coupler redirects the light into the eye of the user.
A diffraction grating is a periodic optical structure, where the periodicity can be expressed by embossed peaks and valleys on the surface of the material, or alternatively could be described as bright or dark fringes. The fringe structure is achieved by laser interference in the holographic implementation. These formats afford periodicity of the refractive index n. With diffraction grating periods achievable, at the present time, close to or smaller than the optical wavelengths of the visual range (approximately 380-700 nm), efficient manipulation of light is enabled.
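For illustration only, the standard grating equation shows why grating periods near the visible wavelength allow light to be steered into a guided mode; the substrate index, grating pitch and wavelength below are assumed example values, not parameters of the invention.

```python
# Minimal illustration (standard grating equation, illustrative numbers): the
# first-order diffraction angle inside an assumed high-index substrate is
# computed and compared against the critical angle needed for total internal
# reflection along the waveguide.
import math

def diffraction_angle_deg(wavelength_nm, period_nm, n_in=1.0, n_out=1.7,
                          incidence_deg=0.0, order=1):
    """Grating equation: n_out*sin(theta_m) = n_in*sin(theta_i) + m*lambda/period."""
    s = (n_in * math.sin(math.radians(incidence_deg))
         + order * wavelength_nm / period_nm) / n_out
    if abs(s) > 1.0:
        return None                      # evanescent: no propagating diffraction order
    return math.degrees(math.asin(s))

theta = diffraction_angle_deg(532, 400)            # green light, 400 nm grating pitch
critical = math.degrees(math.asin(1 / 1.7))        # TIR threshold of the substrate
print(round(theta, 1), ">", round(critical, 1))    # e.g. 51.5 > 36.0 -> light is guided
```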
According to one embodiment, in diffractive waveguide displays an input coupler and an output coupler may be implemented as a planar diffraction grating. According to another embodiment, holographic, polarized thin layer or reflective outcoupling may be utilized. In another embodiment, other advanced waveguide methods may be used. It should be understood that, for the purposes of the present patent application, we are not limiting ourselves to the waveguide types and methods specified herein; these are rather exemplary. It should be further understood that other waveguide types and methods are included in the present patent application.
In the present patent application, we disclose a smart contact lens with an embedded AR/VR micro waveguide display arranged for in-focus image generation and projection into the retina of the eye. Some implementations of the waveguide display include an intermediate grating that performs a variation of light conversion or modulation; in some displays it is called a folding grating. Specifically for AR near-the-eye displays (NED) there is a dual requirement for the display: the optics needs to be transparent or semi-transparent to enable the user to see through the optical component and see what is in front of the user, as well as to be able to transmit light and reflect it back to the user. In AR optical systems, an optical combiner is used to combine a "see through" view and an overlay of an image from the AR display.
In one exemplary, non-limiting embodiment, as per
In one embodiment, the power supply module may comprise a micro solar panel or an array of solar panels. Other methods of generation or storage of electricity may be utilized, for example a piezoelectric component that transforms movement of the eye or an eye blink into electricity. According to one embodiment, the contact lens also comprises an electronic depth capture component 103. In one embodiment, the depth capture sensor may be implemented as any micro or nano scale electronic sensor or electro-mechanical sensor capable of determining and tracking depth information. The depth sensor component may comprise a variety of sensors reactive to electromagnetic radiation of various wavelengths. In one exemplary, non-limiting embodiment, the depth sensor component may comprise a non-monochrome CMOS or CCD sensor.
In another embodiment, the depth sensor component may comprise a monochrome IR CMOS or CCD sensor, optionally coupled with an IR emitter. Optionally, it could be implemented as a combination of an RGB CMOS or CCD sensor and monochrome CMOS or CCD sensors, the likes of the Kinect depth camera device. In another embodiment, the depth sensor component may comprise LIDAR. The depth sensor component may also be implemented with two RGB CMOS or CCD sensors, by utilizing stereo image information to derive depth information. Any other method could be used to determine or compute depth information.
According to one embodiment, depth information may be used to build a 3D Model/3D Mesh of the surrounding environment. Depth information may be used to build a 3D virtual space of the surrounding environment. Changes in depth information may be used to determine and track orientation and vector changes relative to the external environment. Depth information may be utilized to run SLAM or RANSAC or any other similar methodology.
According to one embodiment, the depth sensor component may be implemented with a time-of-flight methodology; for example, an LED or laser may be used as a modulated light source, and an associated sensor detects the reflected light or detects the phase change of the reflected light. In one embodiment, depth information may be used to determine the orientation and direction of the eye's gaze and the information needed to determine what should be presented to the user at any given period of time. Furthermore, the orientation of the eye may be determined by correlating the current depth image against a pre-built (possibly with SLAM) 3D Model/Mesh of the current environs, room, hall, street, etc.
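For illustration, the two common time-of-flight depth computations referred to above (direct pulse timing and phase shift of a modulated source) may be sketched as follows; the modulation frequency is an assumed example value.

```python
# Minimal sketch of direct and indirect (phase-based) time-of-flight depth.
import math

C = 299_792_458.0                       # speed of light, m/s

def depth_from_pulse(round_trip_time_s):
    """Direct ToF: light travels to the object and back, so distance is c*t/2."""
    return C * round_trip_time_s / 2.0

def depth_from_phase(phase_shift_rad, modulation_hz=20e6):
    """Indirect ToF: depth from the phase shift of the reflected modulated light."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_hz)

print(depth_from_pulse(13.3e-9))        # ~2.0 m
print(depth_from_phase(math.pi / 2))    # ~1.87 m at 20 MHz modulation
```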
According to one more embodiment, the system may optionally comprise a display controller component 104, which comprises electronics at micro (MEMS) or nano electronics scale and optionally comprises computing capability. The display controller component controls the display projector component 105 of the embedded micro waveguide display device. The display projector component 105 may be implemented as any micro or nano scaled display device. In another embodiment, the display projector component 105 may be situated externally to the contact lens and can be located on a pince-nez-like device worn on the nose or may be located on a glasses-like device worn over the face. In this exemplary embodiment, the display projector projects light directly onto the contact lens input grating for further transformations, if any, and transmission to the output grating.
According to one embodiment, the display projector component 105 may be implemented as a micro or nano Liquid Crystal on Silicon (LCoS) display/projector, as a micro or nano Light Emitting Diode (LED) display, as a micro or nano Liquid Crystal Display (LCD), or as a micro OLED display. In one embodiment, the display projector component 105 may be implemented as a digital micromirror device (DMD), as a laser beam scanner (LBS) device, or as any other display or projector device capable of projecting electromagnetic radiation. In one embodiment, the display projector component 105 may be implemented as any other active or passive micro display device. In another embodiment, there is a single display projector component 105 projecting the entire image. In one exemplary, non-limiting embodiment, the display projector component 105 may comprise multiple projectors.
According to one embodiment, if multiple projectors 105 are used, each of the projectors would project a part of the complete image, or each projector 105 may project an entire image. In one embodiment, the display component 106 is a specialized waveguide lens component comprising, at least, input coupler and output coupler sub-components. Component 106 can be implemented as any type of waveguide display. In one embodiment, the waveguide display may be implemented with macro optics in a traditional manner, such as with a freeform partially reflective lens, a freeform prism, a partially reflective mirror array, or any other type of macro optics. In one embodiment, the waveguide display may be implemented with micro optics, such as a surface relief grating (SRG), with the input coupler or output coupler implemented as an SRG.
According to one embodiment, the micro optics waveguide display may be implemented as a Volume Hologram grating (VHG), with the input coupling or output coupling implemented as a VHG. In one embodiment, the micro optics waveguide display may be implemented as a Polarization volume grating (PVG), with the input coupler or output coupler implemented as a PVG. In one embodiment, the micro optics waveguide display may be implemented as any other micro optics component. In one embodiment, the waveguide display may be implemented with nano optics, such as metalenses or metasurface reflectors or any other nano-scaled optics technology or method.
According to one embodiment, the orientation component 107 may be implemented as any micro or nano scale electronic sensor or electro-mechanical sensor capable of determining orientation and vector. The orientation component 107 may comprise an accelerometer, a compass, a gyroscope, an inertial measurement unit (IMU) or any other sensor capable of reacting to and tracking vector changes of the eye. The orientation component 107 determines the current orientation of the eye as well as tracking gaze vector changes during saccades. An orientation component may report the following three values in degrees:
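The three reported values are not enumerated in the text above; a common convention for such a report is yaw, pitch and roll in degrees, and the following non-limiting sketch adopts that convention purely as an illustrative assumption.

```python
# Minimal sketch (assumed yaw/pitch/roll convention) of an orientation report
# and of computing the gaze-vector change between two samples, e.g. during a saccade.
from dataclasses import dataclass

@dataclass
class OrientationSample:
    yaw_deg: float     # rotation about the vertical axis (assumed convention)
    pitch_deg: float   # rotation about the horizontal/lateral axis
    roll_deg: float    # rotation about the line of sight

def gaze_vector_change(prev: OrientationSample, curr: OrientationSample):
    """Per-axis change in degrees between two orientation samples."""
    return (curr.yaw_deg - prev.yaw_deg,
            curr.pitch_deg - prev.pitch_deg,
            curr.roll_deg - prev.roll_deg)

print(gaze_vector_change(OrientationSample(0, 0, 0), OrientationSample(4.5, -1.2, 0)))
```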
According to one embodiment, in response to an attempted accommodation or refocus, the SCL system, while in the 3D Frame of Reference, would shift an image from virtual space into the view of the user and into focus at the center of the display, or will bring a virtual object from virtual space into the peripheral view of the user. In one embodiment, in response to an attempted accommodation or refocus, the SCL system will alternatively remove the overlaid virtual object from the screen, and the object will remain in virtual space. It is important to note that normally the system will be redrawing the imagery visible to the user from virtual space, and as such the entire gamut of virtual objects existing in virtual space, where some virtual objects may be spatially associated with the 3D model of the virtual space, will be shifting proportionately depending on where the user saccades to relative to the virtual space. The technique is well known in the art of AR/VR image processing. There are a number of other tangential/similar methods that might be used to achieve the same result.
According to one exemplary embodiment, in response to an attempted accommodation or refocus, the SCL system, while in 2D FOR, would shift the image from its current position on the display into the view of the user and into focus. In one exemplary embodiment, eye gaze vector change information is used to determine an attempted saccade or an attempted accommodation or refocus. In order to determine an attempted refocus or accommodation, a focus determination and tracking component may optionally be utilized. In one embodiment, an on-the-eye focus determination/change component 110 may be used to measure the electric impulses of the ciliary muscles, or a sensor of actual deformation of the ciliary muscle may be used.
According to one embodiment, the Purkinje image corresponding to the reflection of the crystalline lens may be tracked to determine a focus change. There are 4 Purkinje images/reflections that are detectable and traceable; these are (P1) the reflection from the outer surface of the cornea, (P2) the reflection from the inner surface of the cornea, (P3) the reflection from the outer (anterior) surface of the lens, and (P4) the reflection from the inner (posterior) surface of the lens. For the purposes of detecting focus and changes in focus, the system would need to track the P3 and/or P4 Purkinje reflections. Also, the crystalline lens may be tracked with a variety of other means, for example, irradiation of the crystalline lens by a rear-facing IR emitter and registration of the IR response from the crystalline lens. Other methods may be used to determine and track focus changes in the eye.
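An exemplary, non-limiting sketch of detecting a focus change from Purkinje reflections follows; the sensor pixel coordinates and the threshold are hypothetical. The idea sketched is that accommodation changes the crystalline lens shape and therefore moves the P3/P4 reflections, while P1 from the cornea stays comparatively stable.

```python
# Minimal sketch: a P3 displacement relative to P1 beyond a threshold is taken
# as a focus (accommodation) change.
import math

FOCUS_CHANGE_THRESHOLD_PX = 1.5          # assumed sensor-specific threshold

def focus_changed(p1_prev, p3_prev, p1_curr, p3_curr):
    """Detect a focus change from the movement of P3 relative to P1."""
    prev_off = (p3_prev[0] - p1_prev[0], p3_prev[1] - p1_prev[1])
    curr_off = (p3_curr[0] - p1_curr[0], p3_curr[1] - p1_curr[1])
    shift = math.hypot(curr_off[0] - prev_off[0], curr_off[1] - prev_off[1])
    return shift > FOCUS_CHANGE_THRESHOLD_PX

print(focus_changed((10, 10), (14, 12), (10, 10), (17, 12)))   # True
```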
According to one embodiment, the communication module of the proposed AR SCL system comprises an onboard communication component 108 that is embedded into the contact lens. The communication component is configured to communicate with an external device or a paired contact lens. Communication is performed utilizing electromagnetic radiation, using one of the known or proprietary communication protocols. Component 108 may utilize the Bluetooth or Wi-Fi communication protocol to communicate with any number of paired devices. In one embodiment, the communication component communicates with a paired contact lens.
According to one embodiment, the communication component communicates with an external device. In one embodiment, the communication component may be implemented as an onboard antenna disposed on the peripheral part of the SCL device. The antenna may also play a dual role as an induction coil for the wireless power supply module. In one embodiment, the communication component 108 may transfer data, video, image or audio information. In one embodiment, the communication component 108 may transfer data in binary, analog or quantum information form. In one embodiment, the communication component 108 may be configured to transfer image data from a low or high resolution onboard forward-facing image capture device 109 to an external device that does 3D processing and prepares images for the contact lens to depict to the user. The visual information may flow back to the contact lens via the antenna of the communication component 108. In one embodiment, the communication component 108 may be configured to transfer depth image data from a low or high resolution onboard forward-facing depth capture device 109. In one embodiment, the image capture device 109 may be implemented as an image or depth image capture device.
According to one embodiment, the image capture device 109 may be implemented with a CMOS sensor. In one exemplary, non-limiting embodiment, the image capture device 109 may be implemented with a CCD sensor. In one embodiment, the depth image capture device 109 may be coupled with an IR emitter.
Now referring to
Now referring to
Now referring to
According to one embodiment, first, the system takes the base reference, that is, the system determines the position of the eye and current disposition of the image on display. Secondly, with the shift of an eye, as per
According to one embodiment, once a zero point reference is determined, the tracking of the eye's position begins for the current image overlaid onto the display. With every shift in the eye's position, the image overlay may be recomputed accordingly so that the part of the image sought by the eye is displayed at the center of the display, in front of the eye's retina, and is therefore displayed in focus. Here the entire overlaid image shifts accordingly. In one embodiment, the "base point reference" may be selected by the user with any detectable signal or triggering action. In one embodiment, the user may trigger taking the base point reference by clapping his hands. In another embodiment, the user may trigger taking the base point reference by an eye blink. In one embodiment, the user may trigger taking the base point reference by a predefined signal that would be captured by the image capture device and processed to identify the signal, for example a certain sequence and form of hand gestures.
According to another embodiment, by tracking changes in the focus of the eye, the system may determine whether the eye is focused on the image superimposed on the display or on the real-world objects in front of the eye. In one embodiment, the user may trigger taking the base point reference by tracking the focus of the eye, in real-time, to determine whether the eye is focusing on objects at a distance or is focused on the image on the display. This method may be used to switch between Frames of Reference and to register an anchor point at the same time. A variety of other detectors of a switch in gaze between an outside real object and the overlaid image are possible. The methods given above are exemplary only and should not be taken as limiting the scope of the invention.
According to one embodiment, the system may predefine or dynamically determine where the base point reference should be and when the tracking against said point reference should stop. The system may stop tracking the position of the eye and correlating changes of the eye's vector to the image disposition on the display at the stop point. The stop point may be signaled with hand gestures, voice or other signals. The stop signal may be signaled by a change of focus from the image on the display to the real-world objects in front of the user. There may be a variety of other ways to detect a stop point. For 2D FOR, once a stop signal is identified, the otherwise static image on the display may return to its original disposition on the display, regardless of the position of the eye.
Now referring to
Furthermore, at step 707 the system computes the per-pixel image matrix based on the shift adjustment factor. There are a variety of ways the computation may be achieved, for example with matrix mathematics, trigonometric models and so on. Further, the computed image is output to the display at step 708, so that the sought part of the image is displayed at the center and is situated at the center of the eye, against the eye's retina, and thus a new portion of the image comes into focus. At the same time, the portion of the image that was previously in focus shifts to the peripheral zone of the display. This process is repeated in a loop 706. Step 709 signifies the end of the process and may be triggered, for example, by a user command or by switching to another Frame of Reference. Step 709 may also be triggered by the eye changing focus from the overlaid image to the outside view. In one embodiment the SCL system may provide imagery to the user with respect to 3D FOR, that is, the imagery is superimposed with respect to the external geometry around the user and external objects are aligned and positioned relative to the virtual space.
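An exemplary, non-limiting sketch of one such matrix-mathematics computation is given below: a homogeneous 2D translation built from the shift adjustment factor is applied to pixel coordinates. The numeric values and the use of numpy are illustrative assumptions, not features of the claimed steps.

```python
# Minimal sketch of one way the "per pixel image matrix" of step 707 could be
# computed and applied with matrix mathematics.
import numpy as np

def translation_matrix(shift_px):
    """3x3 homogeneous translation matrix for a (dx, dy) display shift."""
    dx, dy = shift_px
    return np.array([[1.0, 0.0, dx],
                     [0.0, 1.0, dy],
                     [0.0, 0.0, 1.0]])

def shift_pixel(coord, shift_px):
    """Map one source pixel coordinate to its new position on the display."""
    x, y = coord
    out = translation_matrix(shift_px) @ np.array([x, y, 1.0])
    return float(out[0]), float(out[1])

print(shift_pixel((100, 80), (-72, -96)))      # -> (28.0, -16.0)
```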
Now referring to
According to one embodiment, the system may determine and rely on orientation relative to the external environment. In one embodiment, the system may use the SLAM (simultaneous localization and mapping) methodology to map the room, hallway, buildings, street, valley with trees, etc. where the user is located, in order to create a 3D model of external reality. In one exemplary, non-limiting embodiment, the SCL system may achieve that by utilizing at least one forward-facing image capture device or depth capture device, or a combination of the two, integrated into the contact lens substrate. The image capture device may comprise a CMOS or CCD sensor or any other type of sensor implemented as a MEMS or nano-scaled device. Consequently, orientation may be determined by correlating information derived from the image or depth capture device against the 3D Model that forms the virtual space. Orientation may be determined relative to a pre-mapped 3D Model of the current environment.
According to one embodiment, the system may determine orientation by combining information from orientation sensors and image or depth capture devices. In
According to one embodiment,
Now, hypothetically, the user performs a saccade down and to the right side, and whatever the user of AR/VR SCL platform has in front of himself/herself is demonstrated in
According to one embodiment, now the user performs a saccade to the left side, as is demonstrated in
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
This non-provisional Application claims priority from a prior-filed U.S. provisional Application No. 63/458,945, filed on Apr. 13, 2023, and hereby claims the benefit of the embodiment therein and the filing date thereof.