The present invention relates to near eye displays generally and to virtual reality headsets in particular.
Images displayed on large computer and TV screens are known. When viewing such images, the distance between the display and the viewer's eye is typically between 30 cm and 3 m. Viewing images on personal, near eye displays (NEDs) brings the display closer to the viewer's eyes. This allows users to view images privately, and also allows for an immersive experience, as the eyes see only the images displayed on the near eye display. Such NEDs can be linked to inertial positioning systems to allow the image to ‘move’ with the movement of the user. This may make users feel as if they are ‘in’ the image. This immersive experience has application to movies, gaming, and real-time interaction with remote, camera-equipped machines, e.g., for hazardous environment operations, telemedicine, and undersea exploration.
Virtual Reality (VR) headsets and compatible computer-generated imagery (CGI) are known in the art.
There is therefore provided, in accordance with a preferred embodiment of the present invention, a system including a plurality of stacked optical channels and a channel image adapter. Each optical channel includes at least a portion of a lens and at least a portion of a display and handles a portion of a phase space of the system. The channel image adapter adapts an input image into image portions for projection from the displays, one per optical channel. The input image includes data pixels each having a pixel display angle. The channel image adapter places copies of each data pixel into the image portions for those of the optical channels whose phase space includes the pixel display angle of the data pixel.
There is also provided, in accordance with a preferred embodiment of the present invention, a near eye display system including, per eye, a compound lens formed of multiple lens portions of short effective focal length (EFL) lenses, a display unit including multiple displays, one per lens portion, and an image adapter to adapt an input image into image portions, one per display. The compound lens, display unit and image adapter operate to provide a field of view of over 60 degrees and an eyebox at least covering the range of pupil motion of the eye.
Moreover, in accordance with a preferred embodiment of the present invention, the system includes a housing useful for virtual reality or augmented reality.
Further, in accordance with a preferred embodiment of the present invention, the system also includes a plurality of channel correctors, one per optical channel, each to provide compensation to its associated image portion in order to correct imaging errors of its associated lens and to display its corrected image portion on its associated display.
Still further, in accordance with a preferred embodiment of the present invention, the system has optical axes which are tilted with respect to each other.
Moreover, in accordance with a preferred embodiment of the present invention, the system has at least one display which is off-center with respect to an optical axis of its lens or lens portion.
Further, in accordance with a preferred embodiment of the present invention, at least one lens or lens portion is cut from a donor lens.
Still further, in accordance with a preferred embodiment of the present invention, the cut is asymmetric about an optical axis of the donor lens.
Moreover, in accordance with a preferred embodiment of the present invention, the system also includes optical separators between neighboring channels and between neighboring lenses or lens portions.
Further, in accordance with a preferred embodiment of the present invention, the imaging errors include at least one of color aberration and image distortion.
Still further, in accordance with a preferred embodiment of the present invention, the lenses from the optical channels are formed into a compound lens.
Moreover, in accordance with a preferred embodiment of the present invention, the displays from the optical channels are formed into a single display.
Further, in accordance with a preferred embodiment of the present invention, the displays from the optical channels are separated from each other by empty display areas.
Still further, in accordance with a preferred embodiment of the present invention, each optical channel has an eye-display distance of no more than 30 mm.
There is also provided, in accordance with a preferred embodiment of the present invention, a near eye display system including an optical system, a processor and a housing on which the optical system and processor are mounted close to a pair of human eyes. The optical system includes, per eye, a plurality of stacked optical channels, each optical channel including at least a lens and at least a portion of a display. Each optical channel handles a portion of a phase space of the optical system. The processor includes a channel image adapter and a plurality of channel correctors, one per optical channel. The channel image adapter adapts an input image into image portions, one per optical channel. The input image includes data pixels each having a pixel display angle. The channel image adapter places copies of each data pixel into the image portions for those of the optical channels whose phase space includes the pixel display angle of the data pixel. Each channel corrector provides compensation to its associated image portion to correct imaging errors of its associated lens and to display its corrected image portion on its associated display.
There is also provided, in accordance with a preferred embodiment of the present invention, a compound lens including a plurality of lens portions, each portion cut from a donor lens having a short EFL. The lens portions are glued together in a stacked arrangement.
There is also provided, in accordance with a preferred embodiment of the present invention, a method including stacking optical channels, each optical channel including at least an optical element such as a lens and at least a portion of a display, each optical channel handling a portion of a phase space of the optical device, and adapting an input image into image portions for projection from the displays, one per optical channel, the input image including data pixels each having a pixel display angle. The adapting includes placing copies of each data pixel into the image portions for those of the optical channels whose phase space includes the pixel display angle of the data pixel.
Moreover, in accordance with a preferred embodiment of the present invention, the method also includes providing per-optical-channel compensation to each associated image portion in order to correct imaging errors of its associated lens, thereby to produce a per-channel corrected image portion, and displaying each per-channel corrected image portion on its associated display.
Further, in accordance with a preferred embodiment of the present invention, the method also includes tilting optical axes of the optical channels with respect to each other.
Still further, in accordance with a preferred embodiment of the present invention, the method also includes positioning at least one display off-center with respect to an optical axis of its lens.
Moreover, in accordance with a preferred embodiment of the present invention, the method also includes cutting at least one lens from a donor lens.
Further, in accordance with a preferred embodiment of the present invention, the cutting is asymmetric about an optical axis of the donor lens.
Still further, in accordance with a preferred embodiment of the present invention, the method also includes placing optical separators between neighboring optical channels.
Finally, in accordance with a preferred embodiment of the present invention, the imaging errors include at least one of color aberration and image distortion.
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.
Applicant has realized that users may prefer smaller and less bulky virtual reality (VR) headsets, such as headsets held close to or at the position where eyeglasses are held, reducing the eye-display distance (EDD) accordingly. Unfortunately, significantly reducing the EDD decreases the size of the optical components of the relevant VR headset which, in turn, reduces the optical quality of the image.
To understand this, consider
It will be appreciated that
It is also to be understood that diagram 11, as well as all phase-space diagrams and all ray tracing diagrams described hereafter, depicts pupil position and field of view angles along a single one-dimensional axis. It should be understood that the same considerations are applicable for both two-dimensional lateral axes of pupil position and scene angles.
Prior art VR systems, like VR headset 1, respond to such a large phase space requirement with large optical systems, as shown in
Ray tracing diagram 13 shows light rays from a pixel in the upper portion of prior art display 4 as they diverge towards lens 5. Note that, since the human brain identifies objects at a distance by the fact that the light coming from the object is collimated (i.e., parallel rays), lens 5 collimates the light from display 4 into a beam 19. In
Compare this to
Note that the phase space 24 of the smaller system has, per eye, the same FOV (from −40 degrees to +40 degrees) as phase space 22 of the larger system. However, its “eyebox”, the range of positions of pupil 21 that is covered, is half the size. This can also be seen in ray tracing diagram 13′, where a beam width BW of beam 19 of prior art lens 5 is 20 mm while a beam width BW′ of lens 20 is only 10 mm. Thus, the smaller lens 20 has a narrower beam 23. The result is that, for some positions of pupil 21, pupil 21 will be within beam 19 but not within the narrower beam 23 and therefore will not see the displayed data. The resultant smaller eyebox means either that users cannot move their eyes or that they will see only part of the displayed data.
However, Applicant has realized that, by dividing the optical components into multiple, stacked optical channels, quality images, with a full-sized eyebox and an acceptably wide field of view (FOV), may be achieved for a near eye display (NED).
Reference is now made to
Mounted on frame 31 may be multiple reduced-size displays 34 and multiple reduced-size lenses 20 per eye, as well as a processing unit 36. In accordance with a preferred embodiment of the present invention, each display 34 and lens 20 may be sized to match eye-display distance EDD 32 and may comprise a separate optical channel 33 to which processor 36 may separately provide images. It will be appreciated that each optical channel 33 may also include other optical elements as necessary.
As mentioned hereinabove, multiple, stacked optical channels may provide a full-sized eyebox. This is illustrated in
Note that, while each per-channel phase space 42 is smaller than prior art phase space 22, their combined phase space is the same size and covers the same area as prior art phase space 22. Moreover, while each beam width BW′ may be smaller than prior art beam width BW, the combined beam width is the same and covers the same range of angles of incidence. Thus, the eyebox of VR glasses 30 is the same as that for prior art headset 1 (
It will be appreciated that graphs 43 in
PA = tan⁻¹(PP/EFL)  (Equation 1)

where pixel position PP is the displacement of pixel P from the optical axis of its lens, EFL is the effective focal length of that lens, and pixel angle PA is the angle of the collimated beam 23 providing the light from pixel P.
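By way of an illustrative sketch only (not part of the original disclosure; the function name and the numeric values are assumptions), Equation 1 may be computed as follows:

```python
import math

def pixel_display_angle(pp_mm: float, efl_mm: float) -> float:
    """Equation 1: PA = tan^-1(PP / EFL), returned in degrees.

    pp_mm  -- displacement PP of pixel P from the optical axis, in mm
    efl_mm -- effective focal length EFL of the lens, in mm
    """
    return math.degrees(math.atan(pp_mm / efl_mm))

# Assumed example: a pixel 2.5 mm off-axis behind a 6 mm EFL lens.
print(pixel_display_angle(2.5, 6.0))  # ~22.6 degrees
```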
It is noted that the human eye/brain system sees collimated beams having the same angle as coming from a single object. Applicant has realized that, as long as a piece of data is displayed such that its pixel angle is the same from all of the displays 34 through which it is displayed, the human eye/brain system interprets all of the beams from the different displays 34 as coming from the same location in space. It is this fact which enables the eyebox recovery discussed hereinabove, even if the data is projected from different displays 34.
This is illustrated in
Note that
As can be seen in phase space diagram 41, phase spaces 50A, 50B and 50C, for channels 33A, 33B and 33C, respectively, each fill only part of phase space 22 of prior art lens 5. However, as opposed to phase spaces 42 of
As a result, tilted channels 33A-33C may, overall, cover a wider field of view than the non-tilted channels of
It will be appreciated that, due to the tilt, VR glasses 30 may have a slightly smaller EDD 52 than the non-tilted EDD 32, which may be advantageous. It will also be appreciated that the overall phase space of tilted channels 33A-33C may cover the same amount of rectangle 14 as prior art phase space 22 but may extend significantly less outside of rectangle 14 and thus, may waste significantly less power projecting data to locations not seen by the user.
Furthermore, phase spaces 50A-50C may be utilized to determine where on each display 34 to display each piece of data, since each channel 33 may handle only certain angles of incidence. Note that phase spaces 50A-50C have areas of overlap and areas that do not overlap. For example, channels 33C and 33B both handle overlap area 54, the range of angles from −30 to +5 degrees, while channel 33C is the only channel which handles the range of angles from −40 to −30 degrees. Channel image adapter 37 may provide image data to displays 34 of the overlapped channels 33 for those angles of incidence in overlap areas, such as overlap area 54.
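By way of illustration (a sketch, not the patented adapter: the interval representation, the names, and all bounds other than the −40 to −30 and −30 to +5 degree figures quoted above are assumptions), each channel's phase space may be reduced to an angular interval and each pixel copied to every channel whose interval contains its pixel angle:

```python
# Assumed per-channel angular coverage, in degrees; the bounds of channel
# 33C and its overlap with 33B follow the example in the text, the rest
# are illustrative.
CHANNEL_RANGES = {
    "33A": (-5.0, 40.0),
    "33B": (-30.0, 10.0),
    "33C": (-40.0, 5.0),
}

def channels_for_angle(pixel_angle_deg: float) -> list[str]:
    """Return every optical channel whose phase space covers this pixel angle."""
    return [name for name, (lo, hi) in CHANNEL_RANGES.items()
            if lo <= pixel_angle_deg <= hi]

print(channels_for_angle(-35.0))  # ['33C'] -- only channel 33C covers -40..-30
print(channels_for_angle(-10.0))  # ['33B', '33C'] -- within overlap area 54
```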
The number of lenses 20 and displays 34 may be selected to provide the desired optical phase space for the desired physical dimensions of VR glasses 30. Applicant has realized that, to further reduce physical dimensions, lenses 20 may be cut into lens sections. This may provide optical performance improvements by using portions of lenses 20 where optical performance may be generally better.
As is known, with any optical system, image quality drops towards the edges of the beams. As a result, prior art optical systems utilize wide lenses to avoid the beam edges. However, Applicant has realized that stacked channels 33A-33C, utilized to compensate for beam width reduction, may provide a further advantage by compensating for any distortions caused by removing lens edges. Moreover, the cut need not be symmetric around the center of the lens. Instead, as described below, there may be a displacement between the center of the lens and the center of the cut. This may allow the displays to be adjusted to the lens angle so that the displays may be placed more efficiently.
Reference is now made to
It will be appreciated that any suitable number of lenses 20 and/or lens sections 60 may be combined together, for example with a suitable glue, into a single compound lens 80. Lenses 20 and/or lens sections 60 may be arranged in either a one-dimensional or a two-dimensional array.
In an alternative embodiment, lens sections 60 may be tilted with respect to each other, as discussed with respect to
Reference is now made to
Channel image adapter 37 may place copies of each data pixel into image segments Ii for those optical channels whose phase space includes pixel angle PA of the data pixel. Channel image adapter 37 may comprise a pixel angle locator 82 which may determine upon which display(s) 34 to display each pixel. To do so, pixel angle locator 82 may slide a window 84 across image I, moving window 84 by an amount related to the amount of overlap between phase spaces 50. Channel image adapter 37 may then associate the portion of the image within window 84 as image segment Ii. Window 84 may be the size of each display 34 or a portion thereof.
Note that, due to the work of pixel angle locator 82, parts of the image of the playing cards are repeated.
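A minimal sketch of this sliding-window segmentation, assuming a one-dimensional stack of channels and illustrative window and stride values (the specification states only that the stride is related to the overlap between phase spaces 50):

```python
import numpy as np

def segment_image(image: np.ndarray, window: int, stride: int) -> list[np.ndarray]:
    """Slide window 84 across input image I, emitting one image segment Ii
    per optical channel; with stride < window, pixels in overlap areas are
    copied into more than one segment, which is why parts of the image
    repeat across displays."""
    cols = image.shape[1]
    return [image[:, start:start + window]
            for start in range(0, cols - window + 1, stride)]

# Assumed numbers: 300-pixel-wide segments overlapping by 60 pixels.
frame = np.zeros((300, 1020, 3), dtype=np.uint8)
segments = segment_image(frame, window=300, stride=240)
print(len(segments), segments[0].shape)  # 4 (300, 300, 3)
```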
Once channel image adapter 37 has placed each pixel of input image I in the correct locations and segmented the image according to channels 33, each channel corrector 38 may compensate for the optical distortion its channel 33 introduces to its image segment Ii.
Reference is now made to
The primary type of imaging error 47 may be distortion which, for lens sections 60, may be barrel distortion. To compensate for barrel distortion, channel corrector 38 may add a compensation 46 known as "pincushion" distortion; however, it will be appreciated that other types of distortions may be introduced by each channel 33.
Each channel corrector 38 may utilize the results of any suitable lens characterization operation, which may be performed a priori, such as after manufacture of each lens section 60 or lens 20. The per-segment distortion may be defined by predefined parameters such as form, color and other factors of a lens 20 or lens section 60.
Correction factors for each lens 20 or lens section 60 may then be stored in its associated channel corrector 38 and the appropriate compensating distortion calculation may then be implemented in the relevant channel corrector 38. One suitable compensation calculation may be that described in the article by K. T. Gribbon, C. T. Johnston, and D. G. Bailey entitled “A Real-time FPGA Implementation of a Barrel Distortion Correction Algorithm with Bilinear Interpolation”, published online at http://sprg.massey.ac.nz/pdfs/2003_IVCNZ_408.pdf and discussed in the Wikipedia article on Distortion (optics).
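For illustration, the following is a simplified sketch of such a compensating "pincushion" pre-distortion, using a one-parameter radial model with bilinear interpolation in the spirit of Gribbon et al.; the coefficient k is an assumed per-lens calibration value, and the function is a stand-in, not the patented channel corrector 38:

```python
import numpy as np

def predistort(img: np.ndarray, k: float) -> np.ndarray:
    """Warp an H x W x 3 image segment with a pincushion stretch so that a
    lens's barrel distortion cancels it.  The output pixel at normalized
    radius r samples the source at r * (1 - k * r^2), pushing content
    outwards; k > 0 is an assumed barrel-distortion strength per lens."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    nx, ny = (xs - cx) / cx, (ys - cy) / cy      # offsets from optical center
    scale = 1.0 - k * (nx * nx + ny * ny)        # radial compensation factor
    sx, sy = nx * scale * cx + cx, ny * scale * cy + cy  # source coordinates
    # Bilinear interpolation at the sub-pixel source coordinates.
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 2)
    fx = np.clip(sx - x0, 0.0, 1.0)[..., None]
    fy = np.clip(sy - y0, 0.0, 1.0)[..., None]
    p = img.astype(np.float32)
    out = (p[y0, x0] * (1 - fx) * (1 - fy) + p[y0, x0 + 1] * fx * (1 - fy)
           + p[y0 + 1, x0] * (1 - fx) * fy + p[y0 + 1, x0 + 1] * fx * fy)
    return out.astype(img.dtype)
```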
Another suitable correction may be that of color aberration, which consists of locally shifting the red (R), green (G) and blue (B) image layers relative to one another. The amount of relative shifting is calibrated such that it cancels the different displacements that the R, G and B color layers undergo when projected through the optical system.
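A sketch of that per-color shifting under assumed whole-pixel calibration values (an actual channel corrector would use per-lens, typically sub-pixel, calibrated shifts):

```python
import numpy as np

def correct_color_aberration(img: np.ndarray, shifts: dict) -> np.ndarray:
    """Shift the R, G and B layers of an H x W x 3 image relative to one
    another so the shifts cancel the per-color displacement of the lens.
    `shifts` maps color index -> (dy, dx) in whole pixels; np.roll wraps
    at the edges, which a real implementation would replace with padding."""
    out = np.empty_like(img)
    for c, (dy, dx) in shifts.items():
        out[..., c] = np.roll(img[..., c], (dy, dx), axis=(0, 1))
    return out

# Hypothetical calibration: red pre-shifted up-left, blue down-right.
calibration = {0: (-1, -1), 1: (0, 0), 2: (1, 1)}
corrected = correct_color_aberration(np.zeros((32, 32, 3), np.uint8), calibration)
```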
In an alternate embodiment of the present invention, there may be a single channel corrector 38 which may store correction factors for each channel and may implement the same functionality for each channel, but with that channel's correction factors.
It will be appreciated that the present invention may provide a comfortable set of VR glasses 30 whose physical dimensions are those of a pair of eyeglasses. With its multiple, stacked optical channels 33 and processor 36, it provides a full field of view and a full range eyebox.
In addition, Applicant has realized that the “stacked channels” approach of the present invention may reduce the amount of power that VR glasses 30 may utilize. This may be because, in VR glasses 30, each display 34 may cover a smaller portion of the pupil of each eye 8. As a result, the total amount of projected brightness in VR glasses 30 may be less for the same user experience.
Moreover, the combined multiple, stacked optical channels 33 and processor 36 may be adjusted and configured for a large set of optical systems. For example, it may be adapted for use in an augmented reality (AR) glasses system such as that shown in
As mentioned hereinabove, VR glasses 30 may be implemented with a single combined display 34′ and with compound lens 80. This is shown in
For example, display 34′ may be a 10.5 mm by 17.5 mm display and display segments 35 may each be 3 mm×3 mm. Empty segments 1020 may provide 1-3 mm between adjacent sides of neighboring display segments 35 and 0.5 mm around the outer edges. As shown in
As previously shown in
Lens sections 1010 are shown in
As mentioned, display segments 35 may be associated with their lens section 1010. Thus, each display segment 35 may be displaced from the center of compound lens 80. Moreover, each display segment 35 may display its associated image portion (not shown). Thus, for each channel, its lens section 1010, display segment 35 and image portion are all aligned with each other.
It is noted that compound lens 80 may be used in combination with additional optical elements, which may be separated for each channel 33. It is also noted that compound lens 80 may be used with multiple displays 34 where the multiple displays 34 are arranged such that each lens section 1010 of the compound lens 80 projects towards the eye from a different display 34.
Each display segment 35 may be displaced from the center of its donor lens 1025; the amount of displacement is indicated by an arrow 1030A or 1030B, associated with the two types of display segments 35, the inner segments 35A and the outer segments 35B, respectively. The centers X for inner segments 35A are located equidistantly around a center O of combined display 34′, at the relevant corner of each inner segment 35A, while the centers X for outer segments 35B are located at the center of each inner surface of each inner segment 35A, also equidistantly around center O.
For each lens section 1010, its associated arrow 1030 may extend from its associated donor lens center X to a center Os of its associated display segment 35. Accordingly, each display segment 35 may be off-center with respect to the optical center X of its lens section 1010. Moreover, each lens section 1010 may be asymmetrically cut from its donor lens 1025.
Note that, in
Unless specifically stated otherwise, as apparent from the preceding discussions, it is appreciated that, throughout the specification, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a general purpose computer of any type, such as a client/server system, mobile computing devices, smart appliances, cloud computing units or similar electronic computing devices that manipulate and/or transform data within the computing system's registers and/or memories into other data within the computing system's memories, registers or other such information storage, transmission or display devices.
Embodiments of the present invention may include apparatus for performing the operations herein. This apparatus may be specially constructed for the desired purposes, or it may comprise a computing device or system typically having at least one processor and at least one memory, selectively activated or reconfigured by a computer program stored in the computer. The resultant apparatus, when instructed by software, may turn the general-purpose computer into inventive elements as discussed herein. The instructions may define the inventive device in operation with the computer platform for which it is desired. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk, including optical disks, magneto-optical disks, read-only memories (ROMs), volatile and non-volatile memories, random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, Flash memory, disk-on-key or any other type of media suitable for storing electronic instructions and capable of being coupled to a computer system bus. The computer readable storage medium may also be implemented in cloud storage.
Some general-purpose computers may comprise at least one communication element to enable communication with a data network and/or a mobile communications network.
The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the desired method. The desired structure for a variety of these systems will appear from the description below. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
This application claims priority from U.S. patent application 62/948,845, filed Dec. 17, 2019, U.S. patent application 62/957,320, filed Jan. 6, 2020, U.S. patent application 62/957,321, filed Jan. 6, 2020, U.S. patent application 62/957,323, filed Jan. 6, 2020, U.S. patent application 62/957,325, filed Jan. 6, 2020, and U.S. patent application 63/085,224, filed Sep. 30, 2020, all of which are incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IL2020/051305 | 12/17/2020 | WO |
Number | Date | Country
---|---|---
62948845 | Dec 2019 | US
62957320 | Jan 2020 | US
62957321 | Jan 2020 | US
62957323 | Jan 2020 | US
62957325 | Jan 2020 | US
63085224 | Sep 2020 | US