The present invention relates to augmented reality generally and to mixed reality glasses in particular.
Mobile personal electronics and wearables have become immensely popular around the globe. Augmented reality (AR) glasses are a form of wearable computer in which information is displayed on the glasses, typically as computer-generated images (CGIs). With these glasses one can see internet content, movies, TV and video games, or any other digital content. The displayed images are visible to the glasses-wearer but not to anyone else nearby. The technology for providing the images is built into the glasses worn on the head.
As opposed to virtual reality, which replaces the real world with a simulated one, augmented reality supplements a view of the real world with information provided by computer-generated sensory input such as sound, video, graphics or Global Positioning System (GPS) data.
Typically, augmented reality provides digital information about elements in the environment. For example, augmented reality might project sports scores onto the AR glasses while the user is viewing a sports match.
Mixed reality, on the other hand, adds capabilities to virtual objects to understand the real world and the relative position of the user and the virtual objects in the real world, thereby enabling virtual objects (generated as CGIs) to interact with the real world. In other words, artificial information about the environment and its objects can be overlaid on the real world.
There is therefore provided, in accordance with a preferred embodiment of the present invention, a system which enables near eye, see-through display of computer generated images (CGIs) to a user. The system includes see-through projection devices, an IMU device and a processor. The see-through projection devices are mounted within a frame of glasses worn on the head of the user. Each device includes a central field display to project the CGIs within the user's central field of view and a peripheral display to project the CGIs within the rest of a human field of view not including the central field of view. The peripheral display is disposed proximate the user's eye and the central field display is centrally disposed behind the peripheral display. The IMU device is mounted on the glasses to measure where the user's head is facing. The processor splits the CGIs between the central and peripheral displays based on where the virtual object is to be projected into the real world and where the user is facing.
Moreover, in accordance with a preferred embodiment of the present invention, the central field display includes a waveguide lens and the peripheral display includes a transparent display displaying towards the eyes of the user.
Further, in accordance with a preferred embodiment of the present invention, the peripheral display includes a first optical layer between the transparent display and the eyes of the user. The first optical layer includes a plurality of micro-lenses to focus light from the transparent display to the eyes of the user.
Still further, in accordance with a preferred embodiment of the present invention, the peripheral display includes projection elements and an optical layer outside of the peripheral projection elements to correct defocusing of real world objects by the projection elements.
Moreover, in accordance with a preferred embodiment of the present invention, the processor includes a rotation compensator to generate a display location for the CGIs, wherein the display location compensates for motion of the user's head.
Further, in accordance with a preferred embodiment of the present invention, the processor includes a splitter to split the CGIs according to the display location.
Still further, in accordance with a preferred embodiment of the present invention, the processor includes at least one alignment operator to align display attributes between the central display and the peripheral display.
Moreover, in accordance with a preferred embodiment of the present invention, the processor includes a spatial aligner and a color aligner to correct the spatial and color alignment, respectively, of the portion of the CGIs to be displayed on one of the displays. This may be, for example, the peripheral display.
Further, in accordance with a preferred embodiment of the present invention, the peripheral display is a transparent organic light emitting diode (OLED) display.
There is also provided, in accordance with a preferred embodiment of the present invention, a method for near eye, see-through display of CGIs to a user. The method includes having see-through projection devices mounted on a frame of glasses, the devices having central and peripheral displays; measuring where the user's head is facing using an IMU device mounted on the frame of glasses; splitting the CGIs between the central and peripheral displays based on where the virtual object is to be projected into the real world and where the user is facing; and projecting the split CGIs separately to the see-through projection devices. The projecting includes projecting the CGIs within the user's central field of view and projecting the CGIs within the rest of a human field of view not including the central field of view.
Moreover, in accordance with a preferred embodiment of the present invention, the method additionally includes generating a display location for the CGIs, wherein the display location compensates for a motion of the user's head.
Further, in accordance with a preferred embodiment of the present invention, the splitting includes dividing the CGIs according to the display location.
Still further, in accordance with a preferred embodiment of the present invention, the method additionally includes aligning display attributes between the central display and the peripheral display.
Finally, in accordance with a preferred embodiment of the present invention, the aligning includes correcting the spatial and color alignment of the portion of the CGIs to be displayed on one of the displays.
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.
Applicant has realized that, for mixed reality to be accepted by the public, the virtual objects should interact with real world objects “naturally”. However, as Applicant has realized, virtual images will not look real within the real world unless the system projecting them takes into account how the human cognitive perception system works. Otherwise, the human viewer will not ‘believe’ in what s/he is viewing.
To provide a comfortable view, Applicant has realized that virtual objects need to move through the human field of view like real objects do. This may require projecting glasses (or “near eye transparent displays”) that enable the user to see the real world with superimposed virtual images throughout the user's field of view.
As used in prior art augmented reality, near eye projection consists of a method of projecting an image onto the retina of a user through waveguide lenses, creating a visual effect perceived by the user as a hologram in the real world. This method is required because the human eye needs a minimum distance in order to focus. However, as Applicant has realized, this projection method is restricted to the limited field of view (FOV) of the waveguide lenses, and to an accordingly limited display frame of the virtual scene. Applicant has realized that since this FOV is much smaller than the natural human FOV, the virtual object will not seem real to the human viewer.
Applicant has realized that, to project virtual objects throughout a user's complete field of view, the near eye display may be formed of combination glasses, with separate projecting units for a central field of view and a peripheral field of view, both of which are transparent to enable the user to view the real world as well as the projected computer generated images (CGIs). For the central field of view, such glasses may include elements which project the virtual image onto the user's retina, thereby creating an illusion in the user's cognitive system of a hologram in the real world. For the peripheral field of view, the glasses may include elements which do not project directly to the user's retina. Moreover, the glasses may include processing elements which may move virtual objects between the central and peripheral fields of view by changing which element projects them.
In addition, Applicant has realized that for the user's comfort, the eyewear needs to be transparent, light and relatively thin and should be functional both indoors and outdoors.
Reference is now made to prior art
As can be seen in
Sectors 166, on each side of sector 162, provide two dimensional vision, in which one eye sees the object outside of focus vision sector 162 and the second eye sees the object at the edge of focus vision sector 162, near the area for the opposite eye. Finally, in sectors 168, to the sides of sectors 166, the object is seen by one eye only. As mentioned hereinabove, the present invention may take this division of field of view 150 into account when projecting visual objects, as CGIs, to the user.
Reference is now made to
Smartphone 210 may transmit CGI 212 to on-glasses computing device 240, which may split CGI 212 into two images relative to the desired location of the virtual image within the world, one image for central field display 202 and another image for peripheral field TD 201. As discussed in more detail hereinbelow, computing device 240 may correct the initial location of the CGI as received from smartphone 210 to compensate for the motion and location of the user's head as measured by 3D orientation sensor 232, such as an inertial measurement unit (IMU) 232.
Reference is now briefly made to
Reference is now made to
As can be seen by arrows 224 of
In addition, processor 240 may provide different versions of CGI 212 to the two devices 204, one to the user's left eye and one to the user's right eye.
As mentioned hereinabove, central visual field lens 202 does not provide the full human field of view.
In
Unfortunately, as Applicant has realized, display 260, being a near eye display, is too close to the user's eyes for the user to focus on it. Moreover, the light from these pixels 262 is diffuse and therefore is not focused onto left eye 165.
In accordance with a preferred embodiment of the present invention and as shown in
It will be appreciated that transparent display 201 may extend behind central visual field lens 202, as shown in
Reference is now made to
Applicant has realized that, for a mixed reality system to interact with the real world in a natural way, virtual objects 300 and 302 should seem to maintain their locations in space. To do so, system 200 may measure how the user moves his/her head. For example, system 200 may comprise IMU sensor 232 whose data may be utilized to define where the user is looking. Processor 240 may then comprise elements to determine how to move virtual objects 300 and 302 in the ‘opposite’ direction to the motion of the head, as described in more detail hereinbelow, such that virtual objects 300 and 302 appear to remain in their locations in the real world. As a result of this and when virtual objects move around, as discussed hereinbelow with respect to
Applicant has realized that switching displays requires handling the changes in the resolution of displays 201 and 202. Reference is now made to
Processor 240 comprises a rotation compensator 300, a splitter 302, and at least one aligner, such as a spatial aligner 304 and a color aligner 306. Rotation compensator 300 may process data from IMU sensor 232 to determine how to move each CGI 212 to match the current location of the user's head. As described in U.S. Pat. No. 9,210,413, assigned to the common assignee of the present application and incorporated herein by reference, CGI 212 may be projected as an inverse movement to the head movement measured by IMU sensor 232. The inverse movement, which may include translation and/or rotation, may be determined, using a known compensation calculation formula, and may be applied to CGI 212 to generate a compensated CGI, labeled CGI′. This may enable the user to perceive the visual object projected by compensated CGI′ 212 as anchored at a certain point in space within the user's field of view.
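As a rough illustration, the inverse-movement compensation described above can be sketched as follows. This is a simplified yaw/pitch model with hypothetical function names; it is not the compensation formula of U.S. Pat. No. 9,210,413, and roll is omitted for brevity:

```python
import numpy as np

def compensate_rotation(cgi_position, head_yaw, head_pitch):
    """Rotate the CGI's display-space position by the inverse of the
    measured head rotation so the virtual object appears anchored in
    the real world. Angles are in radians."""
    # Inverse rotation: negate the angles measured by the IMU.
    cy, sy = np.cos(-head_yaw), np.sin(-head_yaw)
    cp, sp = np.cos(-head_pitch), np.sin(-head_pitch)
    yaw = np.array([[cy, 0.0, sy],
                    [0.0, 1.0, 0.0],
                    [-sy, 0.0, cy]])
    pitch = np.array([[1.0, 0.0, 0.0],
                      [0.0, cp, -sp],
                      [0.0, sp, cp]])
    return pitch @ yaw @ np.asarray(cgi_position, dtype=float)

# A 30-degree head turn shifts the object -30 degrees in display
# coordinates, so it is perceived as fixed in space.
anchored = compensate_rotation([0.0, 0.0, 1.0], np.radians(30.0), 0.0)
```

The same idea extends to translation by subtracting the measured head displacement before projecting the compensated CGI′.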
Splitter 302 may receive compensated CGI′ 212 and its compensated location from rotation compensator 300 and may split CGI′ 212 between displays 201 and 202 as a function of the compensated display location in the real world of the virtual object. Splitter 302 may also generate the two stereoscopic versions of CGI′ 212, one for each eye.
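The splitting step can be sketched as below. The 20-degree half-angle for the central field of view and the function name are illustrative assumptions, not values taken from the specification:

```python
def split_pixels(pixel_angles_deg, central_half_fov_deg=20.0):
    """Partition a CGI's pixels, given by their horizontal view angles,
    into the portion for the central display (CGI'C) and the portion
    for the peripheral display (CGI'P)."""
    central = [a for a in pixel_angles_deg if abs(a) <= central_half_fov_deg]
    peripheral = [a for a in pixel_angles_deg if abs(a) > central_half_fov_deg]
    return central, peripheral
```

An object straddling the boundary would thus be rendered partly by each display, which is why the alignment described next matters.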
Applicant has realized that the final appearance of CGI′ 212 across the two displays 201 and 202 should be seen to be uniform, continuous and homogeneous. To that end, processor 240 may comprise at least one alignment operator to align display attributes between displays 201 and 202. For example, processor 240 may comprise spatial aligner 304 and color aligner 306.
Spatial aligner 304 and color aligner 306 together may provide alignment of both displays 201 and 202 to each other. As the alignment may be relative, central field display 202 may be set to be the constant display and the parameters of peripheral display 201 may be aligned to match central field display 202 via spatial aligner 304 and color aligner 306. Accordingly, splitter 302 may provide a portion CGI′C to waveguide 202 as is and may provide a portion CGI′P to spatial aligner 304.
Spatial aligner 304 may align curvatures and lines so that they may seem continuous when going from one display to the other. Color aligner 306 may ensure that a given color may appear the same on both displays 201 and 202.
Spatial aligner 304 may utilize a matrix K defining display 202, given by the following equation:

        | f·mx   s    Cx |
    K = |  0    f·my  Cy |        Equation 1
        |  0     0     1 |

where f is the focal length of system 200, mx and my are scale parameters relating pixels to distance, and Cx and Cy represent the principal point, which is the center of the image. s is a skew parameter between the x and y axes and is almost always 0 (the x and y axes are usually perpendicular to each other).
The focal length, f, may typically be constant for each system 200. At manufacture, the scale parameters (mx and my) and principal point (Cx and Cy) for peripheral display 201 may be changed to spatially align the particular peripheral display 201 to its associated central field display 202. To do so, a static image may be displayed which may stretch over both displays 201 and 202. A technician may adjust these four parameters for peripheral display 201 until the static image is spatially aligned. At that point, the parameters may be set and spatial aligner 304 may align all incoming CGI′Ps.
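A minimal sketch of building the matrix K and mapping a display coordinate through it, assuming the conventional camera-matrix layout; all numeric values are illustrative, since real parameters are set per device at manufacture:

```python
import numpy as np

def intrinsic_matrix(f, mx, my, cx, cy, s=0.0):
    """Build the display matrix K: focal length f, pixel-scale
    parameters mx and my, principal point (cx, cy) and skew s,
    which is almost always 0."""
    return np.array([[f * mx, s,      cx],
                     [0.0,    f * my, cy],
                     [0.0,    0.0,    1.0]])

def to_pixels(K, point):
    """Map a normalized display coordinate onto pixel coordinates."""
    x, y, w = K @ np.array([point[0], point[1], 1.0])
    return x / w, y / w

# Illustrative parameters only.
K = intrinsic_matrix(f=1.0, mx=100.0, my=100.0, cx=320.0, cy=240.0)
```

Adjusting mx, my, Cx and Cy for the peripheral display shifts and stretches its image until features line up with the central display.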
Spatial aligner 304 may provide its output, a corrected CGI′P, to color aligner 306. Color aligner 306 may comprise a set of color alignment parameters for aligned corrected CGI′P.
At manufacture, an image of colored stripes, such as that shown in
A technician may adjust an appropriate set of parameters to adjust the colors of peripheral display 201 until the colors match the colors in central display 202. The parameters may be defined as follows:

    (R,G,B)final = C·(R,G,B)initial + (R,G,B)base        Equation 2

In equation 2, R is the red channel, G is the green channel and B is the blue channel, (R,G,B)initial are the initial color values of peripheral display 201 and (R,G,B)final are its final values. A technician may adjust the initial colors (R,G,B)initial by applying correlation matrix C and may also add baseline values to give the final color values. There are a total of 12 parameters (the 9 entries of correlation matrix C and the 3 baseline values (R,G,B)base) that may be adjusted to optimize the color adjustment. It should be noted that the correlation matrix may be close to the identity matrix and the baseline values may be near zero.
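The color adjustment of Equation 2 can be sketched as follows; the function name and sample values are illustrative:

```python
import numpy as np

def align_color(rgb_initial, C, rgb_base):
    """Apply the 3x3 correlation matrix C and baseline offsets to map
    peripheral-display colors onto the central display's response.
    In practice C is near identity and rgb_base near zero."""
    return C @ np.asarray(rgb_initial, dtype=float) + np.asarray(rgb_base, dtype=float)

# With the identity matrix and zero baseline, colors pass through unchanged.
out = align_color([0.5, 0.2, 0.1], np.eye(3), [0.0, 0.0, 0.0])
```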
In addition, a non-linear correction may be applied to the intensity of each color channel using the following power law expression, known as “Gamma correction”:

    Vfinal = Vinitial^γ        Equation 3
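A minimal sketch of Equation 3, with illustrative values:

```python
def gamma_correct(v_initial, gamma):
    """Per-channel gamma correction: V_final = V_initial ** gamma.
    v_initial is a normalized channel intensity in [0, 1]."""
    return v_initial ** gamma

# gamma > 1 darkens mid-tones; gamma < 1 brightens them.
```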
Once the technician has matched the colors in peripheral display 201, the parameters may be set and color aligner 306 may correct the colors of all CGI′Ps produced by spatial aligner 304 and may provide a corrected CGI″P to peripheral display 201.
It will be appreciated that processor 240 may provide other alignments as well. For example, processor 240 may include a flicker aligner (not shown), operative at manufacture, to balance the flicker of peripheral display 201 with the flicker of central display 202. The flicker aligner may change the refresh rate of display 201 until it matches the refresh rate of display 202.
It will be appreciated that processor 240 may comprise other elements to synchronize the image display between central field display 202 and peripheral display 201. The result may be an efficient and satisfying comprehensive virtual image superimposed on reality, and therefore an efficient and satisfying integrated, full range visual scenario of reality and the CGI.
It will be appreciated that the final CGI 212 to be displayed in peripheral display 201 may be sized to match the image projected to infinity by central display 202. Since the size of the image projected to infinity is given, the projection size for peripheral display 201 may be derived from the simple and consistent formula of Equation 1.
It will be appreciated that system 200 may project CGIs within a wide FOV, thereby to match the FOV that a human user views. Moreover, within this wide FOV, virtual objects may move fluidly from the different sectors of the human FOV to approximate a natural motion.
Unless specifically stated otherwise, as apparent from the preceding discussions, it is appreciated that, throughout the specification, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a general purpose computer of any type, such as a client/server system, mobile computing devices, smart appliances, cloud computing infrastructure or similar electronic computing devices that manipulate and/or transform data within the computing system's registers and/or memories into other data within the computing system's memories, registers or other such information storage, transmission or display devices.
Embodiments of the present invention may include apparatus for performing the operations herein. This apparatus may be specially constructed for the desired purposes, or it may comprise a computing device or system typically having at least one processor and at least one memory, selectively activated or reconfigured by a computer program stored in the computer. The resultant apparatus when instructed by software may turn the general purpose computer into inventive elements as discussed herein. The instructions may define the inventive device in operation with the computer platform for which it is desired. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk, including optical disks, magneto-optical disks, read-only memories (ROMs), volatile and non-volatile memories, random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), field-programmable gate array (FPGA), application-specific integrated circuit (ASIC), system on chip (SOC), magnetic or optical cards, Flash memory, disk-on-key or any other type of media suitable for storing electronic instructions and capable of being coupled to a computer system bus. The computer readable storage medium may also be implemented in cloud storage.
Some general purpose computers may comprise at least one communication element to enable communication with a data network and/or a mobile communications network.
The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the desired method. The desired structure for a variety of these systems will appear from the description below. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages, frameworks or technologies may be used to implement the teachings of the invention as described herein.
While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
This application claims priority from U.S. provisional patent application 62/711,632, filed Jul. 30, 2018, which is incorporated herein by reference.