System and method for displaying an object with depths

Information

  • Patent Grant
  • Patent Number
    12,231,613
  • Date Filed
    Friday, November 6, 2020
  • Date Issued
    Tuesday, February 18, 2025
Abstract
An object displaying system includes a right light signal generator, a left light signal generator, a right combiner, and a left combiner. The right light signal generator generates right light signals for an object. The right combiner receives and redirects the right light signals towards one retina of a viewer to display multiple right pixels of the object. The left light signal generator generates left light signals for the object. The left combiner receives and redirects the left light signals towards the other retina of the viewer to display multiple left pixels of the object. A first redirected right light signal and a corresponding first redirected left light signal are perceived by the viewer to display a first virtual binocular pixel of the object with a first depth that is related to a first angle between the first redirected right light signal and the corresponding first redirected left light signal.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The present disclosure relates generally to a method and a system for displaying an object with depths and, in particular, to a method and a system for displaying an object by generating and redirecting multiple right light signals and left light signals respectively to a viewer's retinas.


Description of Related Art

In conventional virtual reality (VR) and augmented reality (AR) systems that implement stereoscopic technology, a three-dimensional virtual image is produced by concurrently projecting two parallax images with different view angles to the left and right display panels, which are positioned proximate to the viewer's eyes. The view angle difference between the two parallax images is interpreted by the brain and translated into depth perception, while the viewer's eyes actually focus (fixate) on the display panels, which lie at a depth different from the depth perceived through the parallax images. Vergence-accommodation conflict (VAC) occurs when the accommodation for focusing on an object mismatches the convergence of the eyes based on the perceived depth. The VAC causes the viewer to experience dizziness or headaches. Moreover, when parallax images are used in a mixed reality (MR) setting, the user cannot focus on a real object and a virtual image at the same time ("focal rivalry"). Furthermore, displaying motion of virtual images via parallax imaging technology places a heavy burden on graphics hardware.


SUMMARY

An object of the present disclosure is to provide a system and a method for displaying an object with depths in space. Because the depth of the object is the same as the location on which both eyes of a viewer fixate, vergence-accommodation conflict (VAC) and focal rivalry can be avoided. The object displaying system has a right light signal generator, a right combiner, a left light signal generator, and a left combiner. The right light signal generator generates multiple right light signals for an object. The right combiner receives and redirects the multiple right light signals towards one retina of a viewer to display multiple right pixels of the object. The left light signal generator generates multiple left light signals for the object. The left combiner receives and redirects the multiple left light signals towards the other retina of the viewer to display multiple left pixels of the object. In addition, a first redirected right light signal and a corresponding first redirected left light signal are perceived by the viewer to display a first virtual binocular pixel of the object with a first depth that is related to a first angle between the first redirected right light signal and the corresponding first redirected left light signal. In one embodiment, the first depth is determined by the first angle between light path extensions of the first redirected right light signal and the corresponding first redirected left light signal.


The object is perceived with multiple depths when, in addition to the first virtual binocular pixel of the object, a second redirected right light signal and a corresponding second redirected left light signal are perceived by the viewer to display a second virtual binocular pixel of the object with a second depth that is related to a second angle between the second redirected right light signal and the corresponding second redirected left light signal.


Furthermore, the first redirected right light signal is not a parallax of the corresponding first redirected left light signal. Both the right eye and the left eye receive the image of the object from the same view angle, rather than a pair of parallax images respectively from the right-eye view angle and the left-eye view angle, which are conventionally used to generate a 3D image.


In another embodiment, the first redirected right light signal and the corresponding first redirected left light signal are directed to approximately the same height on the retinas of both of the viewer's eyes.


In another embodiment, the multiple right light signals generated from the right light signal generator are reflected only once before entering the retina of the viewer, and the multiple left light signals generated from the left light signal generator are reflected only once before entering the other retina of the viewer.


In one embodiment, the right combiner receives and redirects the multiple right light signals towards a right retina of a viewer to display multiple right pixels of the object, and the left combiner receives and redirects the multiple left light signals towards a left retina of the viewer to display multiple left pixels of the object. In another embodiment, the right combiner receives and redirects the multiple left light signals towards a right retina of a viewer to display multiple right pixels of the object, and the left combiner receives and redirects the multiple right light signals towards a left retina of the viewer to display multiple left pixels of the object.


In an application of augmented reality (AR) or mixed reality (MR), the right combiner and the left combiner are transparent to ambient light.


Also in the application of AR and MR, an object displaying system further includes a support structure that is wearable on a head of the viewer. The right light signal generator, the left light signal generator, the right combiner, and the left combiner are carried by the support structure. In one embodiment, the system is a head wearable device, in particular a pair of glasses. In this circumstance, the support structure may be a frame with or without lenses of the pair of glasses. The lenses may be prescription lenses used to correct nearsightedness, farsightedness, etc.


In the embodiment of smart glasses, the right light signal generator may be carried by the right temple of the frame and the left light signal generator may be carried by the left temple of the frame. In addition, the right combiner may be carried by the right lens and the left combiner may be carried by the left lens. The carrying can be implemented in various manners. The combiner may be attached to or incorporated into the lens by either a removable or a non-removable means. In addition, the combiner may be made integrally with the lens, including a prescription lens.


The present invention uses retinal scanning to project right light signals and left light signals onto the viewer's retinas for displaying virtual images, instead of a near-eye display usually placed in close proximity to the viewer's eyes.


Additional features and advantages of the disclosure will be set forth in the descriptions that follow, and in part will be apparent from the descriptions, or may be learned by practice of the disclosure. The objectives and other advantages of the disclosure will be realized and attained by the structure and method particularly pointed out in the written description and claims thereof as well as the appended drawings. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram illustrating an embodiment of an object displaying system in accordance with the present invention.



FIG. 2 is a schematic diagram illustrating a relationship between a virtual binocular pixel and the corresponding pair of the right pixel and left pixel in accordance with the present invention.



FIG. 3 is a schematic diagram illustrating the light path from a light signal generator to a combiner, and to a retina of a viewer in accordance with the present invention.



FIG. 4 is a schematic diagram illustrating the virtual binocular pixels formed by right light signals and left light signals in accordance with the present invention.



FIG. 5 is a table illustrating an embodiment of a look-up table in accordance with the present invention.



FIG. 6 is a schematic diagram illustrating displaying an object by various virtual binocular pixels in accordance with the present invention.



FIG. 7 is a flow chart illustrating an embodiment of processes for displaying an object in accordance with the present invention.



FIG. 8 is a schematic diagram illustrating the position of a light signal generator with respect to the combiner in accordance with the present invention.



FIG. 9 is a schematic diagram illustrating an embodiment of object displaying system with optical duplicators in accordance with the present invention.



FIG. 10 is a schematic diagram illustrating an embodiment of an object displaying system in accordance with the present invention.



FIG. 11 is a schematic diagram illustrating a united combiner in accordance with the present invention.



FIG. 12 is a schematic diagram illustrating an object displaying system carried by a pair of glasses in accordance with the present invention.



FIG. 13 is a schematic diagram illustrating a dioptric unit and a combiner in accordance with the present invention.



FIGS. 14A-I are schematic diagrams illustrating displaying a moving object in accordance with the present invention.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The terminology used in the description presented below is intended to be interpreted in its broadest reasonable manner, even though it is used in conjunction with a detailed description of certain specific embodiments of the technology. Certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be specifically defined as such in this Detailed Description section.


The present invention relates to systems and methods for displaying an object with a depth in space. Because the depth of the object is the same as the location on which both eyes of a viewer fixate, vergence-accommodation conflict (VAC) and focal rivalry can be avoided. The described embodiments concern one or more methods, systems, apparatuses, and computer readable mediums storing processor-executable process steps to display an object with depths in space for a viewer. An object displaying system has a right light signal generator, a right combiner, a left light signal generator, and a left combiner. The right light signal generator generates multiple right light signals for an object. The right combiner receives and redirects the multiple right light signals towards one retina of a viewer to display multiple right pixels of the object. The left light signal generator generates multiple left light signals for the object. The left combiner receives and redirects the multiple left light signals towards the other retina of the viewer to display multiple left pixels of the object. In addition, a first redirected right light signal and a corresponding first redirected left light signal are perceived by the viewer to display a first virtual binocular pixel of the object with a first depth that is related to a first angle between the first redirected right light signal and the corresponding first redirected left light signal. In one embodiment, the first depth is determined by the first angle between light path extensions of the first redirected right light signal and the corresponding first redirected left light signal.


The object is perceived with multiple depths when, in addition to the first virtual binocular pixel of the object, a second redirected right light signal and a corresponding second redirected left light signal are perceived by the viewer to display a second virtual binocular pixel of the object with a second depth that is related to a second angle between the second redirected right light signal and the corresponding second redirected left light signal.


Furthermore, the first redirected right light signal is not a parallax of the corresponding first redirected left light signal. Both the right eye and the left eye receive the image of the object from the same view angle, rather than a pair of parallax images respectively from the right-eye view angle and the left-eye view angle, which are conventionally used to generate a 3D image.


In another embodiment, the first redirected right light signal and the corresponding first redirected left light signal are directed to approximately the same height on the retinas of both of the viewer's eyes.


In another embodiment, the multiple right light signals generated from the right light signal generator are reflected only once before entering the retina of the viewer, and the multiple left light signals generated from the left light signal generator are reflected only once before entering the other retina of the viewer.


In one embodiment, the right combiner receives and redirects the multiple right light signals towards a right retina of a viewer to display multiple right pixels of the object, and the left combiner receives and redirects the multiple left light signals towards a left retina of the viewer to display multiple left pixels of the object. In another embodiment, the right combiner receives and redirects the multiple left light signals towards a right retina of a viewer to display multiple right pixels of the object, and the left combiner receives and redirects the multiple right light signals towards a left retina of the viewer to display multiple left pixels of the object.


In an application of augmented reality (AR) or mixed reality (MR), the right combiner and the left combiner are transparent to ambient light.


Also in the application of AR and MR, an object displaying system further includes a support structure that is wearable on a head of the viewer. The right light signal generator, the left light signal generator, the right combiner, and the left combiner are carried by the support structure. In one embodiment, the system is a head wearable device, in particular a pair of glasses. In this circumstance, the support structure may be a frame with or without lenses of the pair of glasses. The lenses may be prescription lenses used to correct nearsightedness, farsightedness, etc.


In the embodiment of smart glasses, the right light signal generator may be carried by the right temple of the frame and the left light signal generator may be carried by the left temple of the frame. In addition, the right combiner may be carried by the right lens and the left combiner may be carried by the left lens. The carrying can be implemented in various manners. The combiner may be attached to or incorporated into the lens by either a removable or a non-removable means. In addition, the combiner may be made integrally with the lens, including a prescription lens.


As shown in FIG. 1, an object displaying system includes a right light signal generator 10 to generate multiple right light signals (RLS) such as 12 for RLS_1, 14 for RLS_2 and 16 for RLS_3, a right combiner 20 to receive and redirect the multiple right light signals 12′, 14′, and 16′ towards the right retina 54 of a viewer, a left light signal generator 30 to generate multiple left light signals (LLS) such as 32 for LLS_1, 34 for LLS_2, and 36 for LLS_3, and a left combiner 40 to receive and redirect the multiple left light signals 32′, 34′, and 36′ towards a left retina 64 of the viewer. The viewer has a right eye 50 containing a right pupil 52 and a right retina 54, and a left eye 60 containing a left pupil 62 and a left retina 64. The diameter of a human pupil generally ranges from 2 to 8 mm, in part depending on the environmental light. The normal pupil size in adults varies from 2 to 4 mm in diameter in bright light and from 4 to 8 mm in the dark. The multiple right light signals are redirected by the right combiner 20, pass through the right pupil 52, and are eventually received by the right retina 54. The right light signal RLS_1 is the light signal farthest to the right the viewer's right eye 50 can see on a specific horizontal plane. The right light signal RLS_2 is the light signal farthest to the left the viewer's right eye 50 can see on the same horizontal plane. Upon receipt of the redirected right light signals, the viewer would perceive multiple right pixels for the object in the area A bounded by the extensions of the redirected right light signals RLS_1 and RLS_2. The area A is referred to as the field of view (FOV) for the right eye 50. Likewise, the multiple left light signals are redirected by the left combiner 40, pass through the center of the left pupil 62, and are eventually received by the left retina 64. The left light signal LLS_1 is the light signal farthest to the right the viewer's left eye 60 can see on the specific horizontal plane. The left light signal LLS_2 is the light signal farthest to the left the viewer's left eye can see on the same horizontal plane. Upon receipt of the redirected left light signals, the viewer would perceive multiple left pixels for the object in the area B bounded by the extensions of the redirected left light signals LLS_1 and LLS_2. The area B is referred to as the field of view (FOV) for the left eye 60. When both multiple right pixels and left pixels are displayed in the area C, where area A and area B overlap, at least one right light signal displaying one right pixel and a corresponding left light signal displaying one left pixel are fused to display a virtual binocular pixel with a specific depth in the area C. The depth is related to the angle between the redirected right light signal and the redirected left light signal. Such an angle is also referred to as a convergence angle.


As shown in FIGS. 1 and 2, the viewer perceives a virtual image of the dinosaur object 70 with multiple depths in the area C in front of the viewer. The image of the dinosaur object 70 includes a first virtual binocular pixel 72 displayed at a first depth D1 and a second virtual binocular pixel 74 displayed at a second depth D2. The first angle between the first redirected right light signal 16′ and the corresponding first redirected left light signal 36′ is θ1. The first depth D1 is related to the first angle θ1. In particular, the first depth of the first virtual binocular pixel of the object can be determined by the first angle θ1 between the light path extensions of the first redirected right light signal and the corresponding first redirected left light signal. As a result, the first depth D1 of the first virtual binocular pixel 72 can be calculated approximately by the following formula:







tan(θ/2) = IPD / (2D)







The distance between the right pupil 52 and the left pupil 62 is the interpupillary distance (IPD). Similarly, the second angle between the second redirected right light signal 18′ and the corresponding second redirected left light signal 38′ is θ2. The second depth D2 is related to the second angle θ2. In particular, the second depth D2 of the second virtual binocular pixel of the object can be determined approximately by the second angle θ2 between the light path extensions of the second redirected right light signal and the corresponding second redirected left light signal, using the same formula. Since the second virtual binocular pixel 74 is perceived by the viewer to be further away from the viewer (i.e. with a larger depth) than the first virtual binocular pixel 72, the second angle θ2 is smaller than the first angle θ1.
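

As an illustrative aid only (not part of the claimed method), the relation above can be sketched in Python; the 64 mm IPD value and the use of radians are assumptions for the example:

    import math

    def depth_from_angle(theta, ipd=0.064):
        """Depth D implied by a convergence angle theta (radians),
        from tan(theta/2) = IPD / (2 * D)."""
        return ipd / (2 * math.tan(theta / 2))

    def angle_from_depth(depth, ipd=0.064):
        """Convergence angle theta (radians) of a virtual binocular
        pixel perceived at the given depth (same unit as ipd)."""
        return 2 * math.atan(ipd / (2 * depth))

    # A deeper virtual binocular pixel subtends a smaller angle,
    # matching the statement that theta2 < theta1 when D2 > D1.
    assert angle_from_depth(2.0) < angle_from_depth(1.0)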


Furthermore, although the redirected right light signal 16′ for RLS_3 and the corresponding redirected left light signal 36′ for LLS_3 together display a first virtual binocular pixel 72 with the first depth D1, the redirected right light signal 16′ for RLS_3 is not a parallax of the corresponding redirected left light signal 36′ for LLS_3. Conventionally, a parallax between the image received by the right eye and the image received by the left eye is used for a viewer to perceive a 3D image with depth because the right eye sees the same object from a view angle different from that of the left eye. However, in the present invention, the right light signal and the corresponding left light signal for a virtual binocular pixel display an image of the same view angle. Thus, the intensity of red, green, and blue (RGB) color and/or the brightness of the right light signal and the left light signal are approximately the same. In other words, the right pixel and the corresponding left pixel are approximately the same. However, in another embodiment, one or both of the right light signal and the left light signal may be modified to present some 3D effects such as shadow. In general, both the right eye and the left eye receive the image of the object from the same view angle in the present invention, rather than a pair of parallax images respectively from the right-eye view angle and the left-eye view angle, conventionally used to generate a 3D image.


As described above, the multiple right light signals are generated by the right light signal generator, redirected by the right combiner, and then directly scanned onto the right retina to form a right retina image on the right retina. Likewise, the multiple left light signals are generated by the left light signal generator, redirected by the left combiner, and then scanned onto the left retina to form a left retina image on the left retina. In an embodiment shown in FIG. 2, a right retina image 80 contains 36 right pixels in a 6×6 array and a left retina image 90 also contains 36 left pixels in a 6×6 array. In another embodiment, a right retina image 80 contains 921,600 right pixels in a 1280×720 array and a left retina image 90 also contains 921,600 left pixels in a 1280×720 array. The object displaying system is configured to generate multiple right light signals and corresponding multiple left light signals which respectively form the right retina image on the right retina and the left retina image on the left retina. As a result, the viewer perceives a virtual binocular object with specific depths in the area C because of image fusion.


With reference to FIG. 2, the first right light signal 16 from the right light signal generator 10 is received and reflected by the right combiner 20. The first redirected right light signal 16′, through the right pupil 52, arrives at the right retina of the viewer to display the right pixel R34. The corresponding left light signal 36 from the left light signal generator 30 is received and reflected by the left combiner 40. The first redirected left light signal 36′, through the left pupil 62, arrives at the left retina of the viewer to display the left retina pixel L33. As a result of image fusion, a viewer perceives the virtual binocular object with multiple depths, where the depths are determined by the angles of the multiple redirected right light signals and the corresponding multiple redirected left light signals for the same object. The angle between a redirected right light signal and a corresponding left light signal is determined by the relative horizontal distance of the right pixel and the left pixel. Thus, the depth of a virtual binocular pixel is inversely correlated to the relative horizontal distance between the right pixel and the corresponding left pixel forming the virtual binocular pixel. In other words, the deeper a virtual binocular pixel is perceived by the viewer, the smaller the relative horizontal distance at the X axis between the right pixel and the left pixel forming such a virtual binocular pixel. For example, as shown in FIG. 2, the second virtual binocular pixel 74 is perceived by the viewer to have a larger depth (i.e. further away from the viewer) than the first virtual binocular pixel 72. Thus, the horizontal distance between the second right pixel and the second left pixel is smaller than the horizontal distance between the first right pixel and the first left pixel on the retina images. Specifically, the horizontal distance between the second right pixel R41 and the second left pixel L51 forming the second virtual binocular pixel is four pixels long. However, the distance between the first right pixel R43 and the first left pixel L33 forming the first virtual binocular pixel is six pixels long.
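

A minimal sketch of this inverse relation, under the simplifying assumption (not from the patent) that each pixel of horizontal offset contributes a fixed angular pitch to the convergence angle:

    import math

    def depth_from_disparity(disparity_px, pixel_pitch_rad=1e-3, ipd=0.064):
        """Map the relative horizontal distance (in pixels) between a
        right pixel and its corresponding left pixel to a depth, via
        tan(theta/2) = IPD / (2 * D). pixel_pitch_rad is an assumed
        angular pitch per pixel; ipd is an assumed 64 mm."""
        theta = disparity_px * pixel_pitch_rad
        return ipd / (2 * math.tan(theta / 2))

    # The six-pixel pair (first virtual binocular pixel) is perceived
    # closer than the four-pixel pair (second virtual binocular pixel):
    # depth is inversely correlated with the horizontal distance.
    assert depth_from_disparity(6) < depth_from_disparity(4)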


In one embodiment shown in FIG. 3, the light paths of multiple right light signals and multiple left light signals from a light signal generator to a retina are illustrated. The multiple right light signals generated from the right light generator are projected onto the right combiner 20 to form a right combiner image (RCI) 82. These multiple right light signals are redirected by the right combiner 20 and converge into a small right pupil image (RPI) 84 to pass through the right pupil 52, and then eventually arrive at the right retina 54 to form a right retina image (RRI) 86. Each of the RCI, RPI, and RRI comprises i×j pixels. Each right light signal RLS(i,j) travels through the same corresponding pixels from RCI(i,j), to RPI(i,j), and then to RRI(x,y). For example, RLS(5,3) travels from RCI(5,3), to RPI(5,3), and then to RRI(2,4). Likewise, the multiple left light signals generated from the left light generator 30 are projected onto the left combiner 40 to form a left combiner image (LCI) 92. These multiple left light signals are redirected by the left combiner 40 and converge into a small left pupil image (LPI) 94 to pass through the left pupil 62, and then eventually arrive at the left retina 64 to form a left retina image (LRI) 96. Each of the LCI, LPI, and LRI comprises i×j pixels. Each left light signal LLS(i,j) travels through the same corresponding pixels from LCI(i,j), to LPI(i,j), and then to LRI(x,y). For example, LLS(3,1) travels from LCI(3,1), to LPI(3,1), and then to LRI(4,6). The (0, 0) pixel is the top and left most pixel of each image. Pixels in the retina image are left-right inverted and top-bottom inverted relative to the corresponding pixels in the combiner image. Based on appropriate arrangements of the relative positions and angles of the light signal generators and combiners, each light signal has its own light path from a light signal generator to a retina. The combination of one right light signal displaying one right pixel on the right retina and one corresponding left light signal displaying one left pixel on the left retina forms a virtual binocular pixel with a specific depth perceived by a viewer. Thus, a virtual binocular pixel in the space can be represented by a pair of right pixel and left pixel or a pair of right combiner pixel and left combiner pixel.
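

A sketch of this combiner-to-retina correspondence (illustrative only; it assumes 1-based pixel indices, which is what makes the worked examples RCI(5,3)→RRI(2,4) and LCI(3,1)→LRI(4,6) come out):

    def retina_pixel(i, j, n=6):
        """Map combiner-image pixel (i, j) to its retina-image pixel on an
        n x n image. The retina image is left-right and top-bottom
        inverted, so with 1-based indices (i, j) -> (n + 1 - i, n + 1 - j)."""
        return (n + 1 - i, n + 1 - j)

    assert retina_pixel(5, 3) == (2, 4)  # RLS(5,3): RCI(5,3) -> RRI(2,4)
    assert retina_pixel(3, 1) == (4, 6)  # LLS(3,1): LCI(3,1) -> LRI(4,6)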


A virtual object perceived by a viewer in area C includes multiple virtual binocular pixels. To precisely describe the location of a virtual binocular pixel in the space, each location in the space is provided a three-dimensional (3D) coordinate, for example an XYZ coordinate. Other 3D coordinate systems can be used in other embodiments. As a result, each virtual binocular pixel has a 3D coordinate—a horizontal direction, a vertical direction, and a depth direction. A horizontal direction (or X axis direction) is along the direction of the interpupillary line. A vertical direction (or Y axis direction) is along the facial midline and perpendicular to the horizontal direction. A depth direction (or Z axis direction) is normal to the frontal plane and perpendicular to both the horizontal and vertical directions.



FIG. 4 illustrates the relationship between pixels in the right combiner image, pixels in the left combiner image, and the virtual binocular pixels. As described above, pixels in the right combiner image are in one-to-one correspondence with pixels in the right retina image (right pixels). Pixels in the left combiner image are in one-to-one correspondence with pixels in the left retina image (left pixels). However, pixels in the retina image are left-right inverted and top-bottom inverted relative to the corresponding pixels in the combiner image. For a right retina image comprising 36 (6×6) right pixels and a left retina image comprising 36 (6×6) left pixels, there are 216 (6×6×6) virtual binocular pixels (shown as dots) in the area C, assuming all light signals are within the FOV of both eyes of the viewer. The light path extension of one redirected right light signal intersects the light path extension of each redirected left light signal on the same row of the image. Likewise, the light path extension of one redirected left light signal intersects the light path extension of each redirected right light signal on the same row of the image. Thus, there are 36 (6×6) virtual binocular pixels on one layer and 6 layers in the space. There is usually a small angle between two adjacent lines representing light path extensions that intersect to form virtual binocular pixels, although they are shown as parallel lines in FIG. 4. A right pixel and a corresponding left pixel at approximately the same height of each retina (i.e. the same row of the right retina image and left retina image) tend to fuse earlier. As a result, right pixels are paired with left pixels on the same row of the retina image to form virtual binocular pixels.


As shown in FIG. 5, a look-up table is created to facilitate identifying the right pixel and left pixel pair for each virtual binocular pixel. For example, 216 virtual binocular pixels, numbered from 1 to 216, are formed by 36 (6×6) right pixels and 36 (6×6) left pixels. The first (1st) virtual binocular pixel VBP(1) represents the pair of right pixel RRI(1,1) and left pixel LRI(1,1). The second (2nd) virtual binocular pixel VBP(2) represents the pair of right pixel RRI(2,1) and left pixel LRI(1,1). The seventh (7th) virtual binocular pixel VBP(7) represents the pair of right pixel RRI(1,1) and left pixel LRI(2,1). The thirty-seventh (37th) virtual binocular pixel VBP(37) represents the pair of right pixel RRI(1,2) and left pixel LRI(1,2). The two hundred and sixteenth (216th) virtual binocular pixel VBP(216) represents the pair of right pixel RRI(6,6) and left pixel LRI(6,6). Thus, in order to display a specific virtual binocular pixel of an object in the space for the viewer, it is determined which pair of the right pixel and left pixel can be used for generating the corresponding right light signal and left light signal. In addition, each row of a virtual binocular pixel on the look-up table includes a pointer which leads to a memory address that stores the perceived depth (z) of the VBP and the perceived position (x,y) of the VBP. Additional information, such as scale of size, number of overlapping objects, and depth in sequence, can also be stored for the VBP. Scale of size may be the relative size information of a specific VBP compared against a standard VBP. For example, the scale of size may be set to be 1 when an object is displayed at a standard VBP that is 1 m in front of the viewer. As a result, the scale of size may be set to be 1.2 for a specific VBP that is 90 cm in front of the viewer. Likewise, the scale of size may be set to be 0.8 for a specific VBP that is 1.5 m in front of the viewer. The scale of size can be used to determine the size of the object for displaying when the object is moved from a first depth to a second depth. The number of overlapping objects is the number of objects that are overlapped with one another so that one object is completely or partially hidden behind another object. The depth in sequence provides information about the sequence of depths of various overlapping objects. For example, assume three objects overlap with each other. The depth in sequence of the first object in the front may be set to be 1 and the depth in sequence of the second object hidden behind the first object may be set to be 2. The number of overlapping objects and the depth in sequence may be used to determine which objects, and what portions of them, need to be displayed when various overlapping objects are moving.
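

The numbering above is regular enough to sketch in code. The following is a hypothetical reconstruction (the pairing order is inferred from the examples VBP(1), VBP(2), VBP(7), VBP(37), and VBP(216); the metadata fields are the ones named in the text, left unfilled):

    def build_vbp_lookup(n=6):
        """Pair right and left pixels on the same row y of the two n x n
        retina images, yielding n*n*n virtual binocular pixels (216 for
        n = 6), numbered as in FIG. 5."""
        table, vbp = {}, 1
        for y in range(1, n + 1):                # shared row
            for x_left in range(1, n + 1):       # left-pixel column
                for x_right in range(1, n + 1):  # right-pixel column
                    table[vbp] = {
                        "right_pixel": (x_right, y),
                        "left_pixel": (x_left, y),
                        # per-VBP metadata named in the text:
                        "meta": {"x": None, "y": None, "z": None,
                                 "scale_of_size": None,
                                 "overlap_count": None,
                                 "depth_in_sequence": None},
                    }
                    vbp += 1
        return table

    lut = build_vbp_lookup()
    assert lut[2]["right_pixel"] == (2, 1) and lut[2]["left_pixel"] == (1, 1)
    assert lut[37]["right_pixel"] == (1, 2) and lut[37]["left_pixel"] == (1, 2)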


As shown in FIG. 6, a virtual object with multiple depths, such as a dinosaur, can be displayed in the area C for a viewer by projecting the predetermined right pixels and left pixels onto the retinas of the viewer's eyes. In one embodiment, the location of the object is determined by a reference point and the view angle of the object is determined by a rotation angle. As shown in FIG. 7, at step 710, create an object image with a reference point. In one embodiment, the object image may be created by 2D or 3D modeling. The reference point may be the center of gravity of the object. At step 720, determine a virtual binocular pixel for the reference point. With a 3D coordinate of the reference point, a designer can directly determine, for example via a software GUI, the closest virtual binocular pixel by its number, such as VBP(145). At step 730, identify the pair of right pixel and left pixel corresponding to the virtual binocular pixel. A designer can then use the look-up table to identify the corresponding pair of the right pixel and left pixel. A designer can also use the predetermined depth of the reference point to calculate the convergence angle and then identify the corresponding right pixel and left pixel, assuming the reference point is in front of the middle of the viewer's both eyes. The designer can move the reference point on the XY plane to the predetermined X and Y coordinates and then identify the final corresponding right pixel and left pixel. At step 740, project a right light signal and a corresponding left light signal to respectively display the right pixel and the corresponding left pixel for the reference point. Once the pair of the right pixel and the left pixel corresponding to the virtual binocular pixel for the reference point is determined, the whole virtual object can be displayed using its 2D or 3D modeling information.
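

Steps 720-740 can be strung together in the same illustrative style; build_vbp_lookup is the earlier sketch, and the nearest-VBP search below assumes the per-VBP (x, y, z) metadata has already been filled in during calibration:

    def nearest_vbp(point_xyz, lut):
        """Step 720: pick the virtual binocular pixel whose stored
        (x, y, z) metadata is closest to the reference point."""
        filled = {n: e for n, e in lut.items() if e["meta"]["z"] is not None}
        def dist2(e):
            m = e["meta"]
            return sum((m[k] - p) ** 2 for k, p in zip("xyz", point_xyz))
        return min(filled, key=lambda n: dist2(filled[n]))

    def pixels_for_reference_point(point_xyz, lut):
        """Steps 730-740: look up the right/left pixel pair whose light
        signals display the chosen virtual binocular pixel."""
        entry = lut[nearest_vbp(point_xyz, lut)]
        return entry["right_pixel"], entry["left_pixel"]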


The look-up table may be created by the following processes. At the first step, obtain an individual virtual map based on the viewer's IPD, created by the system during initiation or calibration, which specifies the boundary of the area C where the viewer can perceive an object with depths because of the fusion of the right retina image and the left retina image. At the second step, for each depth along the Z axis (each point on the Z-coordinate), calculate the convergence angle to identify the pair of right pixel and left pixel respectively on the right retina image and the left retina image, regardless of the X-coordinate and Y-coordinate location. At the third step, move the pair of right pixel and left pixel along the X axis to identify the X-coordinate and Z-coordinate of each pair of right pixel and left pixel at a specific depth, regardless of the Y-coordinate location. At the fourth step, move the pair of right pixel and left pixel along the Y axis to determine the Y-coordinate of each pair of right pixel and left pixel. As a result, the 3D coordinates, such as XYZ, of each pair of right pixel and left pixel respectively on the right retina image and the left retina image can be determined to create the look-up table. In addition, the third step and the fourth step are interchangeable.


In another embodiment, a designer may determine each of all the necessary virtual binocular pixels to form the virtual object, and then use the look-up table to identify each corresponding pair of the right pixel and the left pixel. The right light signals and the left light signals can then be generated accordingly. The right retina image and the left retina image are of the same view angle. Parallax is not used to present 3D images. As a result, very complicated and time-consuming graphics computation can be avoided. The relative location of the object on the right retina image and the left retina image determines the depth perceived by the viewer.


The light signal generators 10 and 30 may use laser, light emitting diode ("LED") including mini and micro LED, organic light emitting diode ("OLED"), superluminescent diode ("SLD"), LCOS (Liquid Crystal on Silicon), liquid crystal display ("LCD"), or any combination thereof as the light source. In one embodiment, the light signal generator 10 and 30 is a laser beam scanning projector (LBS projector) which may comprise the light source, including a red color light laser, a green color light laser, and a blue color light laser, a light color modifier, such as a dichroic combiner and a polarizing combiner, and a two-dimensional (2D) adjustable reflector, such as a 2D microelectromechanical system ("MEMS") mirror. The 2D adjustable reflector can be replaced by two one-dimensional (1D) reflectors, such as two 1D MEMS mirrors. The LBS projector sequentially generates and scans light signals one by one to form a 2D image at a predetermined resolution, for example 1280×720 pixels per frame. Thus, one light signal for one pixel is generated and projected at a time towards the combiners 20 and 40. For a viewer to see such a 2D image from one eye, the LBS projector has to sequentially generate light signals for each pixel, for example 1280×720 light signals, within the time period of persistence of vision, for example 1/18 second. Thus, the time duration of each light signal is about 60.28 nanoseconds.
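

The per-signal duration follows from the figures just given, as this quick arithmetic check shows:

    pixels_per_frame = 1280 * 720        # 921,600 light signals per frame
    frame_period_s = 1 / 18              # persistence-of-vision window
    signal_duration_ns = frame_period_s / pixels_per_frame * 1e9
    print(round(signal_duration_ns, 2))  # ~60.28 ns per light signal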


In another embodiment, the light signal generators 10 and 30 may be digital light processing projectors ("DLP projectors") which can generate a 2D color image at one time. Texas Instruments' DLP technology is one of several technologies that can be used to manufacture the DLP projector. The whole 2D color image frame, which for example may comprise 1280×720 pixels, is simultaneously projected towards the combiners 20 and 40.


The combiner 20, 40 receives and redirects multiple light signals generated by the light signal generator 10, 30. In one embodiment, the combiner 20, 40 reflects the multiple light signals so that the redirected light signals are on the same side of the combiner 20, 40 as the incident light signals. In another embodiment, the combiner 20, 40 refracts the multiple light signals so that the redirected light signals are on a different side of the combiner 20, 40 from the incident light signals, i.e. the combiner 20, 40 functions as a refractor. The reflection ratio can vary widely, such as 20%-80%, in part depending on the power of the light signal generator. People with ordinary skill in the art know how to determine the appropriate reflection ratio based on characteristics of the light signal generators and the combiners. Besides, in one embodiment, the combiner 20, 40 is optically transparent to the ambient (environmental) light from the opposite side of the incident light signals. The degree of transparency can vary widely depending on the application. For AR/MR applications, the transparency is preferred to be more than 50%, such as about 75% in one embodiment. In addition to redirecting the light signals, the combiner 20, 40 may converge the multiple light signals forming the combiner images so that they can pass through the pupils and arrive at the retinas of both of the viewer's eyes.


The combiner 20, 40 may be made of glass or plastic materials, like a lens, coated with certain materials such as metals to make it partially transparent and partially reflective. One advantage of using a reflective combiner, instead of a wave guide as in the prior art, for directing light signals to the viewer's eyes is to eliminate the problem of undesirable diffraction effects, such as multiple shadows, color displacement, etc. The combiner 20, 40 may be a holographic combiner, but this is not preferred because the diffraction effects can cause multiple shadows and RGB displacement. In some embodiments, the use of a holographic combiner may therefore be avoided.


In one embodiment, the combiner 20, 40 is configured to have an ellipsoid surface. In addition, the light signal generator and a viewer's eye are respectively positioned at the two focal points of the ellipsoid. As illustrated in FIG. 8, for the right combiner with an ellipsoid surface, the right light signal generator is positioned at the right focal point and the right eye of the viewer is positioned at the left focal point of the ellipsoid. Similarly, for the left combiner with an ellipsoid surface, the left light signal generator is positioned at the left focal point and the left eye of the viewer is positioned at the right focal point of the ellipsoid. Due to the geometric property of the ellipsoid, all the light beams projected from one focal point to the surface of the ellipsoid will be reflected to the other focal point. In this case, all the light beams projected from the light signal generators to the surfaces of the ellipsoid-shaped combiners will be reflected to the eyes of the viewer. Thus, in this embodiment, the FOV can be extended to the maximum, as large as the surface of the ellipsoid allows. In another embodiment, the combiner 20, 40 may have a flat surface with a holographic film designed to reflect light in a manner similar to the ellipsoid.
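

This focal property is standard conic geometry and can be verified numerically. The sketch below (illustrative only; the semi-axes are arbitrary) checks, in 2D cross-section, that the normal at any point of an ellipse bisects the angle subtended by the two foci, which is what sends a ray from one focus to the other:

    import math

    a, b = 2.0, 1.0                     # assumed semi-axes of the ellipse
    c = math.sqrt(a * a - b * b)        # focal distance from the center
    f1, f2 = (-c, 0.0), (c, 0.0)        # light source at f1, eye at f2

    def unit(v):
        m = math.hypot(*v)
        return (v[0] / m, v[1] / m)

    def reflects_focus_to_focus(t):
        """At P(t) on the ellipse, the bisector of the directions toward
        the two foci must be parallel to the surface normal (x/a^2, y/b^2);
        a ray from one focus then reflects through the other."""
        p = (a * math.cos(t), b * math.sin(t))
        n = (p[0] / a ** 2, p[1] / b ** 2)
        u1 = unit((f1[0] - p[0], f1[1] - p[1]))
        u2 = unit((f2[0] - p[0], f2[1] - p[1]))
        bis = unit((u1[0] + u2[0], u1[1] + u2[1]))
        return abs(bis[0] * n[1] - bis[1] * n[0]) < 1e-9  # parallel check

    assert all(reflects_focus_to_focus(t) for t in (0.3, 1.0, 2.5))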


The object displaying system may further include a right collimator and a left collimator to narrow the light beams of the multiple light signals, for example to cause the directions of motion to become more aligned in a specific direction or to cause the spatial cross section of the light beam to become smaller. The right collimator may be positioned between the right light signal generator and the right combiner, and the left collimator may be positioned between the left light signal generator and the left combiner. The collimator may be a curved mirror or a lens.


As shown in FIG. 9, the object displaying system may further include a right optical duplicator and a left optical duplicator. The optical duplicator can be positioned between the light signal generator 10, 30 and the combiner 20, 40 to duplicate incident light signals. As a result, the optical duplicator can generate multiple instances of the incident light signals to enlarge the eye box of the viewer. The optical duplicator may be a beam splitter, a polarizing splitter, a half-silvered mirror, a partially reflective mirror, a dichroic mirrored prism, or dichroic or dielectric optical coatings. The optical duplicator 110, 120 may comprise at least two optical components to duplicate the incident light signal into at least two instances. Each of the optical components may be a lens, a reflector, a partial reflector, a prism, a mirror, or a combination of the aforementioned.


The object displaying system may further include a control unit having all the necessary circuitry for controlling the right light signal generator and the left light signal generator. The control unit provides electronic signals for the light signal generators to generate the multiple light signals. In one embodiment, the position and angle of the right light signal generator and the left light signal generator can be adjusted to modify the incident angles of the right light signals and the left light signals and their receiving locations on the right combiner and the left combiner. Such adjustment may be implemented by the control unit. The control unit may communicate with a separate image signal provider via a wired or wireless means. The wireless communication includes telecommunication such as 4G and 5G, Wi-Fi, Bluetooth, near field communication, and the internet. The control unit may include a processor, a memory, and an I/O interface to communicate with the image signal provider and the viewer. The object displaying system further comprises a power supply. The power supply may be a battery and/or a component that can be wirelessly charged.


There are at least two options for arranging the light path from the light signal generator to the viewer's retina. The first option, described above, is that the right light signals generated by the right light signal generator are redirected by the right combiner to arrive at the right retina and the left light signals generated by the left light signal generator are redirected by the left combiner to arrive at the left retina. As shown in FIG. 10, the second option is that the right light signals generated by the right light signal generator are redirected by the left combiner to arrive at the left retina and the left light signals generated by the left light signal generator are redirected by the right combiner to arrive at the right retina.


In another embodiment shown in FIG. 11, the right combiner and the left combiner can be integrated into one united combiner with a specific curvature for both the right light signals and the left light signals. With this large combiner, the right light signals generated by the right signal generator are reflected to arrive at the left retina and the left light signals generated by the left signal generator are reflected to arrive at the right retina. By extending the width of the combiner to create a relatively large reflective surface, the FOV and the size of the area C for binocular fusion may be expanded.


The object displaying system may include a support structure wearable on a head of the viewer to carry the right light signal generator, the left light signal generator, the right combiner, and the left combiner. The right combiner and the left combiner are positioned within a field of view of the viewer. Thus, in this embodiment, the object displaying system is a head wearable device (HWD). In particular, as shown in FIG. 12, the object displaying system is carried by a pair of glasses, which is referred to as smart glasses. In this situation, the support structure may be a frame of a pair of glasses with or without lenses. The lenses may be prescription lenses used to correct nearsightedness, farsightedness, etc. The right light signal generator is carried by a right temple of the frame. The left light signal generator is carried by a left temple of the frame. The right combiner may be carried by the right lens and the left combiner may be carried by the left lens. The carrying can be implemented in various manners. The combiner may be attached to or incorporated into the lens by either a removable or a non-removable means. The combiner may be made integrally with the lens, including a prescription lens. When the support structure does not include lenses, the right combiner and the left combiner may be directly carried by the frame or rims.


All components and variations in the embodiments of the object displaying system described above may be applied to the HWD. Thus, the HWD, including smart glasses, may further carry other components of the object displaying system, such as a control unit, a right collimator, and a left collimator. The right collimator may be positioned between the right light signal generator and the right combiner, and the left collimator may be positioned between the left light signal generator and the left combiner. In addition, the combiner may be replaced by a beam splitter and a convergent lens. The function of the beam splitter is to reflect light signals and the function of the convergent lens is to converge light signals so that they can pass through the pupils to arrive at the viewer's retinas.


When the object displaying system is implemented on smart eyeglasses, the lenses of the smart eyeglasses may have both a dioptric property for correcting the viewer's eyesight and the function of a combiner. The smart eyeglasses may have lenses with prescribed degrees to fit the needs of individuals who are nearsighted or farsighted, to correct their eyesight. In these circumstances, each of the lenses of the smart eyeglasses may comprise a dioptric unit and a combiner. The dioptric unit and the combiner can be integrally manufactured as one piece with the same or different types of material. The dioptric unit and the combiner can also be separately manufactured in two pieces and then assembled together. These two pieces can be attached to each other but remain separable, for example with built-in magnetic material, or may be attached to each other permanently. In either situation, the combiner is provided on a side of the lens which is closer to the eyes of the viewer. If the lens is one piece, the combiner forms an inner surface of the lens. If the lens has two portions, the combiner forms the inner portion of the lens. The combiner both allows ambient light to pass through and reflects light signals generated by the light signal generators to the viewer's eyes to form virtual images in the real environment. The combiner is designed to have an appropriate curvature to reflect and converge all the light signals from the light signal generators into the pupils and then onto the retinas of the eyes.


In some embodiments, the curvature of one of the surfaces of the dioptric unit is determined based on the viewer's dioptric prescription. If the lens is one piece, the prescribed curvature is the outer surface of the lens. If the lens has two portions, the dioptric unit forms the outer portion of the lens. In this situation, the prescribed curvature may be either the inner surface or the outer surface of the dioptric unit. To better match the dioptric unit and the combiner, in one embodiment, the dioptric unit can be categorized into three groups based on its prescribed degrees: over +3.00 (farsighted), between −3.00 and +3.00, and under −3.00 (nearsighted). The combiner can be designed according to the category of a dioptric unit, as sketched below. In another embodiment, the dioptric unit can be categorized into five or ten groups, each of which has a smaller range of prescribed degrees. As shown in FIG. 13, when the outer surface of a dioptric unit is used to provide the curvature for the prescribed degrees, the inner surface of the dioptric unit may be designed to have the same curvature as the outer surface of the combiner. As a result, the dioptric unit may be better fitted to the combiner. As an example, the inner surface of the dioptric unit and the outer surface of the combiner may be the same spherical or ellipsoidal surface. In other embodiments, when the inner surface of the dioptric unit is used to provide the curvature for the prescribed degree, the outer surface of the combiner may be designed to have the same or similar curvature as the inner surface of the dioptric unit for facilitating the coupling between the two. However, when the outer surface of the combiner does not have the same curvature as the inner surface of the dioptric unit, the outer surface of the combiner and the inner surface of the dioptric unit can be combined via mechanical means such as magnets, adhesive materials, or other coupling structures. Another option is that an intermediary material may be applied to assemble the dioptric unit and the combiner. Alternatively, the combiner may be coated on the inner surface of the lens.
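

A trivial sketch of that three-way grouping (the cut-offs come from the text; the labels and the treatment of the boundary values are assumptions):

    def dioptric_group(prescribed_degree):
        """Assign a dioptric unit to a combiner-curvature category."""
        if prescribed_degree > 3.0:
            return "farsighted (over +3.00)"
        if prescribed_degree < -3.0:
            return "nearsighted (under -3.00)"
        return "between -3.00 and +3.00"

    assert dioptric_group(-4.5) == "nearsighted (under -3.00)"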


In addition to a still virtual object in an image frame in space, the object displaying system can display the object in motion. When the right light signal generator 10 and the left light signal generator 30 can generate light signals at a high speed, for example 30, 60 or more frames per second, the viewer can see the object moving smoothly in a video due to persistence of vision. Various embodiments of the processes to display a moving virtual object for the viewer are described below. FIGS. 14A-I respectively illustrate an object moving in examples 1-9. The objects shown in the right combiner image 82 and the left combiner image 92 in these figures may not precisely reflect the locations of the corresponding right light signals and left light signals displaying the object. In addition, the examples set the middle point of the viewer's interpupillary line as the origin of the XYZ coordinate system. Furthermore, the RCI(10, 10) and the LCI(10, 10) are set to be respectively the center of the right combiner image and the left combiner image. Likewise, the RRI(10, 10) and the LRI(10, 10) are set to be respectively the center of the right retina image and the left retina image. The (0, 0) pixel is the topmost and leftmost pixel of each image.


Example 1 shown in FIG. 14A illustrates a virtual object moving only in the X axis direction (to the right) on the same depth plane from a first virtual binocular pixel to a second virtual binocular pixel. To do so, the locations of the right light signals and the corresponding left light signals respectively on the right combiner image and the left combiner image need to be moved (to the right) by an equal distance (in pixels) in the X axis direction. As a result, the locations of the right light signals and the corresponding left light signals respectively on the right retina image and the left retina image forming the virtual object are moved to the left by an equal distance in the X axis direction. In other words, such right light signals and corresponding left light signals from the light signal generators have to be projected on a different X-coordinate location of the combiner images. However, since the Y-coordinate and Z-coordinate (the depth direction) of the virtual object remain the same, the right light signals and the corresponding left light signals are projected on the same location of the combiner images with respect to the Y-coordinate and Z-coordinate. For example, when the XYZ coordinate of the virtual object moves from (0, 0, 100) to (10, 0, 100), the right light signal on the right combiner image moves from RCI(10, 10) to RCI(12,10) and the left light signal on the left combiner image moves from LCI(10, 10) to LCI(12, 10). As a result, the right light signal on the right retina image moves from RRI(10, 10) to RRI(8,10) and the left light signal on the left retina image moves from LRI(10, 10) to LRI(8, 10).


Example 2 shown in FIG. 14B illustrates a virtual object moving only along the Y axis direction (to a lower position) on the same depth plane from a first virtual binocular pixel to a second virtual binocular pixel. To do so, the locations of the right light signals and the corresponding left light signals respectively on the right combiner image and the left combiner image need to be moved down by an equal distance (in pixels) along the Y axis direction. As a result, the locations of the right light signals and the corresponding left light signals respectively on the right retina image and the left retina image forming the virtual object are moved up by an equal distance in the Y axis direction. In other words, such right light signals and corresponding left light signals from the light signal generators have to be projected on a different Y-coordinate location of the combiner images. However, since the X-coordinate and Z-coordinate (the depth direction) of the virtual object remain the same, the right light signals and corresponding left light signals are projected on the same location of the combiner images with respect to the X-coordinate and Z-coordinate. For example, when the XYZ coordinate of the virtual object moves from (0, 0, 100) to (0, −10, 100), the right light signal on the right combiner image moves from RCI(10, 10) to RCI(10,12) and the left light signal on the left combiner image moves from LCI(10, 10) to LCI(10, 12). As a result, the right light signal on the right retina image moves from RRI(10, 10) to RRI(10, 8) and the left light signal on the left retina image moves from LRI(10, 10) to LRI(10, 8).
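

Examples 1 and 2 can be sketched together. This is illustrative only; it assumes 10 object-space units map to 2 combiner pixels (0.2 pixels per unit), which is the scale that reproduces the worked numbers:

    PX_PER_UNIT = 0.2  # assumed scale: 10 object units -> 2 combiner pixels

    def move_on_depth_plane(rci, lci, dx, dy):
        """Same-depth move (Examples 1 and 2): both combiner-image pixels
        (col, row) shift by the same amount; the retina images then shift
        the opposite way because of the combiner-to-retina inversion.
        dx is + to the viewer's right; dy is + upward."""
        dc = round(dx * PX_PER_UNIT)   # column shift on both combiners
        dr = round(-dy * PX_PER_UNIT)  # moving up lowers the row index
        return ((rci[0] + dc, rci[1] + dr), (lci[0] + dc, lci[1] + dr))

    # Example 1: (0,0,100) -> (10,0,100): RCI/LCI (10,10) -> (12,10)
    assert move_on_depth_plane((10, 10), (10, 10), 10, 0) == ((12, 10), (12, 10))
    # Example 2: (0,0,100) -> (0,-10,100): RCI/LCI (10,10) -> (10,12)
    assert move_on_depth_plane((10, 10), (10, 10), 0, -10) == ((10, 12), (10, 12))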


Example 3 shown in FIG. 14C illustrates a virtual object moving only along the Z axis direction (closer to the viewer) and thus from an original depth plane to a new depth plane. To do so, the locations of the right light signals and the corresponding left light signals respectively on the right combiner image and the left combiner image need to be moved closer to each other in the X axis direction, depending on the extent to which the convergence angle between the light path extensions of the right light signals and the corresponding left light signals enlarges. As a result, the locations of the right light signals and the corresponding left light signals respectively on the right retina image and the left retina image forming the virtual object are moved farther away from each other in the X axis direction. In sum, when the virtual object moves closer to the viewer, the relative distance between the locations of the right light signals and the corresponding left light signals on the combiner images decreases, while the relative distance between the locations of the right light signals and the corresponding left light signals on the retina images increases. In other words, such right light signals and corresponding left light signals from the light signal generators have to be projected on two different X-coordinate locations of the combiner images that are closer to each other. However, since the Y-coordinate of the virtual object remains the same, the right light signals and corresponding left light signals are projected on the same Y-coordinate location of the combiner images. For example, when the XYZ coordinate of the virtual object moves from (0, 0, 100) to (0, 0, 50), the right light signal on the right combiner image moves from RCI (10, 10) to RCI (5,10) and the left light signal on the left combiner image moves from LCI (10, 10) to LCI (15, 10). As a result, the right light signal on the right retina image moves from RRI (10, 10) to RRI (15, 10) and the left light signal on the left retina image moves from LRI (10, 10) to LRI (5, 10).


However, to move a virtual object closer to the viewer when the X-coordinate of the virtual object is not at the center (middle point) of the interpupillary line (the X-coordinate equal to zero in one embodiment), the locations of the right light signals and corresponding left light signals on the right combiner image and the left combiner image, respectively, need to be moved closer to each other based on a ratio. The ratio is that of the distance between the location of the right light signal on the right combiner image and its left edge (close to the center of both eyes), to the distance between the location of the left light signal on the left combiner image and its right edge (close to the center of both eyes). For example, assume that the location of the right light signal on the right combiner image is 10 pixels from its left edge and the location of the left light signal on the left combiner image is 5 pixels from its right edge; the ratio of the right-location-to-center distance to the left-location-to-center distance is then 2:1 (10:5). To move the object closer, if the right location on the right combiner image and the left location on the left combiner image have to move closer to each other by 3 pixels in total, the right location needs to move towards its left edge by 2 pixels and the left location needs to move towards its right edge by 1 pixel because of the 2:1 ratio.
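
A minimal sketch of the ratio rule just described; the function name and the rounding choice are assumptions for illustration.

```python
# Split a total inward shift between the two combiner images in
# proportion to each location's distance from its inner edge.

def split_inward_shift(right_dist, left_dist, total_shift):
    """right_dist/left_dist: pixels from each combiner location to its
    inner edge (the edge nearest the nose). total_shift: total pixels
    the two locations must move toward each other."""
    total = right_dist + left_dist
    right_move = round(total_shift * right_dist / total)
    left_move = total_shift - right_move
    return right_move, left_move

print(split_inward_shift(10, 5, 3))  # (2, 1): the 2:1 split in the text
```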


Example 4 shown in FIG. 14D illustrates the method for moving a virtual object along the X axis (to the right) and the Y axis (to a higher position) on the same depth plane, from a first virtual binocular pixel to a second virtual binocular pixel. To do so, the locations of the right light signals and the corresponding left light signals on the right combiner image and the left combiner image, respectively, need to be moved to the right of and higher than the original location. As a result, the locations of the right light signals and the corresponding left light signals on the right retina image and the left retina image forming the virtual object are moved to the left of and lower than the original location. In other words, the right light signals and the corresponding left light signals from the light signal generators need to be projected onto a new location of the right combiner image and the left combiner image to the right of and higher than the original location, while the convergence angle between the light path extensions of the right light signals and the corresponding left light signals remains the same. For example, when the XYZ coordinate of the virtual object moves from (0, 0, 100) to (10, 10, 100), the right light signal on the right combiner image moves from RCI(10, 10) to RCI(12, 8) and the left light signal on the left combiner image moves from LCI(10, 10) to LCI(12, 8). As a result, the right light signal on the right retina image moves from RRI(10, 10) to RRI(8, 12) and the left light signal on the left retina image moves from LRI(10, 10) to LRI(8, 12).


Example 5 shown in FIG. 14E illustrates a virtual object moving along the Y axis (to a lower position) and the Z axis (closer to the viewer), and thus from an original depth plane to a new depth plane. To do so, the locations of the right light signals and the corresponding left light signals on the right combiner image and the left combiner image, respectively, need to be moved down along the Y axis and closer to each other along the X axis to produce a larger convergence angle. As a result, the locations of the right light signals and the corresponding left light signals on the right retina image and the left retina image forming the virtual object move up along the Y axis and farther apart along the X axis. In other words, such right light signals and corresponding left light signals from the light signal generators have to be projected onto a different Y-coordinate location and two different X-coordinate locations (closer to each other) of the combiner images. For example, when the XYZ coordinate of the virtual object moves from (0, 0, 100) to (0, −10, 50), the right light signal on the right combiner image moves from RCI(10, 10) to RCI(5, 12) and the left light signal on the left combiner image moves from LCI(10, 10) to LCI(15, 12). As a result, the right light signal on the right retina image moves from RRI(10, 10) to RRI(15, 8) and the left light signal on the left retina image moves from LRI(10, 10) to LRI(5, 8).


When the X-coordinate of the virtual object remains the same while the virtual object moves closer to the viewer, the locations of the right light signals and corresponding left light signals on the right combiner image and the left combiner image, respectively, are likewise moved closer to each other based on the ratio described above, that is, the ratio of the distance between the right light signal's location on the right combiner image and its left edge (close to the center of both eyes) to the distance between the left light signal's location on the left combiner image and its right edge (close to the center of both eyes). In the 2:1 example above, a total inward move of 3 pixels is split as 2 pixels for the right location and 1 pixel for the left location.


Example 6 shown in FIG. 14F illustrates a virtual object moving along the X axis (to the right) and the Z axis (closer to the viewer), and thus from an original depth plane to a new depth plane. To do so, the locations of the right light signals and the corresponding left light signals on the right combiner image and the left combiner image, respectively, need to be moved to the right along the X axis and closer to each other along the X axis to produce a larger convergence angle. As a result, the locations of the right light signals and the corresponding left light signals on the right retina image and the left retina image forming the virtual object move to the left along the X axis and farther apart along the X axis. In other words, such right light signals and corresponding left light signals from the light signal generators have to be projected onto two different X-coordinate locations (to the right and closer to each other) of the combiner images. Since the Y-coordinate of the virtual object remains the same, the right light signals and corresponding left light signals are projected onto the same Y-coordinate location of the combiner images. For example, when the XYZ coordinate of the virtual object moves from (0, 0, 100) to (10, 0, 50), the right light signal on the right combiner image moves from RCI(10, 10) to RCI(7, 10) and the left light signal on the left combiner image moves from LCI(10, 10) to LCI(17, 10). As a result, the right light signal on the right retina image moves from RRI(10, 10) to RRI(13, 10) and the left light signal on the left retina image moves from LRI(10, 10) to LRI(3, 10).


Example 7 shown in FIG. 14G illustrates a virtual object moving along the X axis (to the right), the Y axis (to a lower position), and the Z axis (closer to the viewer), and thus from an original depth plane to a new depth plane. To do so, the locations of the right light signals and the corresponding left light signals on the right combiner image and the left combiner image, respectively, need to be moved to the right along the X axis, to a lower position along the Y axis, and closer to each other along the X axis to produce a larger convergence angle. As a result, the locations of the right light signals and the corresponding left light signals on the right retina image and the left retina image forming the virtual object move to the left along the X axis, to a higher position along the Y axis, and farther apart along the X axis. In other words, such right light signals and corresponding left light signals from the light signal generators have to be projected onto two different X-coordinate locations (to the right and closer to each other) and a different Y-coordinate location of the combiner images. For example, when the XYZ coordinate of the virtual object moves from (0, 0, 100) to (10, −10, 50), the right light signal on the right combiner image moves from RCI(10, 10) to RCI(7, 12) and the left light signal on the left combiner image moves from LCI(10, 10) to LCI(17, 12). As a result, the right light signal on the right retina image moves from RRI(10, 10) to RRI(13, 8) and the left light signal on the left retina image moves from LRI(10, 10) to LRI(3, 8).
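
The three per-axis rules of Examples 2 through 7 compose. The sketch below reproduces Example 7's numbers using illustrative gains read off the toy coordinates; these gains, and the tabulated depth-to-shift mapping, are assumptions for illustration only, not values from the disclosure. A real system would derive the inward shift from the convergence angle (see Examples 8 and 9) or a look-up table.

```python
# Illustrative composition of the per-axis rules in Examples 2-7: a +10
# object X step shifts both combiner X-coordinates by +2 pixels, a -10
# object Y step shifts both combiner Y-coordinates by +2 pixels, and
# halving the depth from 100 to 50 pulls each X-coordinate 5 pixels in.

GX = 0.2   # assumed combiner pixels per unit of object X
GY = -0.2  # assumed combiner pixels per unit of object Y (image Y grows down)

def inward_shift(z_old, z_new):
    """Toy stand-in for the angle-based depth relation: only the
    100 -> 50 step from the examples is tabulated here."""
    return {(100, 50): 5}.get((z_old, z_new), 0)

def move_object(rci, lci, dx_obj, dy_obj, z_old, z_new):
    """Compose the X, Y, and Z rules to get new combiner locations."""
    dx, dy = GX * dx_obj, GY * dy_obj
    s = inward_shift(z_old, z_new)
    return (rci[0] + dx - s, rci[1] + dy), (lci[0] + dx + s, lci[1] + dy)

# Example 7: the object moves from (0, 0, 100) to (10, -10, 50).
print(move_object((10, 10), (10, 10), 10, -10, 100, 50))
# -> ((7.0, 12.0), (17.0, 12.0)), matching RCI and LCI in the text
```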


Example 8 shown in FIG. 14H illustrates a method of moving a virtual object along the Z axis from 1 m depth to 10 m depth away from the viewer, and thus from an original depth plane to a new depth plane in space. When the space in area C includes a sufficiently large number of virtual binocular pixels, the virtual object can be moved smoothly through many intermediate virtual binocular pixels. In other words, when the right retina image and the left retina image include a sufficiently large number of right pixels and left pixels, the viewer is able to perceive a very large number of virtual binocular pixels in the space. In FIG. 14H, the object is represented by a round dot moving from a first virtual binocular pixel at 1 m depth to a second virtual binocular pixel at 10 m depth through various intermediate virtual binocular pixels. First, the convergence angle of the first virtual binocular pixel at 1 m depth, between the light path extensions of the first redirected right light signal and the first redirected left light signal, is calculated to be 3.4 degrees.








tan(θ/2) = IPD/(2D) = 60 mm/(2 × 1 m) = 0.03. With IPD = 60 mm, θ ≈ 3.4 degrees.

Second, the convergence angle of the second virtual binocular pixel at 10 m depth, between the light path extensions of the second redirected right light signal and the second redirected left light signal, is calculated to be 0.34 degrees.








tan(θ/2) = IPD/(2D) = 60 mm/(2 × 10 m) = 0.003. With IPD = 60 mm, θ ≈ 0.34 degrees.

Third, the intermediate virtual binocular pixels are calculated and identified. The number of intermediate virtual binocular pixels may be calculated based on the difference between the convergence angles of the first virtual binocular pixel and the second virtual binocular pixel, and the number of pixels in the X axis direction for every degree of FOV. The difference between the convergence angle of the first virtual binocular pixel (3.4 degrees) and that of the second virtual binocular pixel (0.34 degrees) equals 3.06 degrees. The number of pixels in the X axis direction for every degree of FOV is 32, assuming that the total width of a scanned retina image is 1280 pixels covering 40 degrees of field of view (FOV) in total. Thus, when the virtual object moves from a first virtual binocular pixel at 1 m depth to a second virtual binocular pixel at 10 m depth, there are approximately 98 (32 × 3.06) virtual binocular pixels in between that can be used to display such a movement. These 98 virtual binocular pixels may be identified through the look-up table described above. Fourth, the movement is displayed through the 98 intermediate virtual binocular pixels, i.e., 98 small steps in this example. The right light signals and the corresponding left light signals for these 98 virtual binocular pixels are respectively generated by the right light signal generator and the left light signal generator and projected onto the right retina and the left retina of the viewer. As a result, the viewer perceives the virtual object moving smoothly through 98 intermediate positions from 1 m to 10 m.
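
The arithmetic of this example (and of Example 9 below) can be reproduced with a short sketch, assuming IPD = 60 mm and 32 retina-image pixels per degree as stated; the helper names are illustrative. Note that carrying the angles without intermediate rounding gives 99 and 436 rather than the 98 and 435 obtained from the rounded angle differences in the text.

```python
import math

IPD_M = 0.060                  # assumed interpupillary distance, meters
PIXELS_PER_DEGREE = 1280 / 40  # = 32, per the example's assumption

def convergence_angle_deg(depth_m, ipd_m=IPD_M):
    """Full convergence angle (degrees) of a virtual binocular pixel at
    the given depth, from tan(theta/2) = IPD / (2 * D) as above."""
    return 2.0 * math.degrees(math.atan(ipd_m / (2.0 * depth_m)))

def intermediate_pixels(depth_from_m, depth_to_m):
    """Approximate count of intermediate virtual binocular pixels
    between two depths: convergence-angle difference times the number
    of X-axis pixels per degree."""
    d_theta = abs(convergence_angle_deg(depth_from_m)
                  - convergence_angle_deg(depth_to_m))
    return round(PIXELS_PER_DEGREE * d_theta)

print(convergence_angle_deg(1.0))      # ~3.44 degrees (the text's 3.4)
print(convergence_angle_deg(10.0))     # ~0.34 degrees
print(convergence_angle_deg(0.20))     # ~17.06 degrees (the text's 17)
print(intermediate_pixels(1.0, 10.0))  # ~99, vs. 98 = 32 x 3.06 above
print(intermediate_pixels(1.0, 0.20))  # ~436, vs. 435 = 32 x 13.6 below
```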


Example 9 shown in FIG. 14I illustrates a method of moving a virtual object along the Z axis from 1 m depth to 20 cm depth, closer to the viewer, and thus from an original depth plane to a new depth plane in space. When the space in area C includes a sufficiently large number of virtual binocular pixels, the virtual object can be moved smoothly through many intermediate virtual binocular pixels. In other words, when the right retina image and the left retina image include a sufficiently large number of right pixels and left pixels, the viewer is able to perceive a very large number of virtual binocular pixels in the space. In FIG. 14I, the object is represented by a round dot moving from a first virtual binocular pixel at 1 m depth to a second virtual binocular pixel at 20 cm depth through various intermediate virtual binocular pixels. First, the convergence angle of the first virtual binocular pixel at 1 m depth, between the light path extensions of the first redirected right light signal and the first redirected left light signal, is calculated to be 3.4 degrees.








tan(θ/2) = IPD/(2D) = 60 mm/(2 × 1 m) = 0.03. With IPD = 60 mm, θ ≈ 3.4 degrees.

Second, the convergence angle of the second virtual binocular pixel at 20 cm depth, between the light path extensions of the second redirected right light signal and the second redirected left light signal, is calculated to be 17 degrees.








tan(θ/2) = IPD/(2D) = 60 mm/(2 × 20 cm) = 0.15. With IPD = 60 mm, θ ≈ 17 degrees.

Third, the intermediate virtual binocular pixels are calculated and identified. The number of intermediate virtual binocular pixels may be calculated based on the difference between the convergence angles of the first virtual binocular pixel and the second virtual binocular pixel, and the number of pixels in the X axis direction for every degree of FOV. The difference between the convergence angle of the first virtual binocular pixel (3.4 degrees) and that of the second virtual binocular pixel (17 degrees) equals 13.6 degrees. The number of pixels in the X axis direction for every degree of FOV is 32, assuming that the total width of a scanned retina image is 1280 pixels covering 40 degrees of field of view (FOV) in total. Thus, when the virtual object moves from a first virtual binocular pixel at 1 m depth to a second virtual binocular pixel at 20 cm depth, there are approximately 435 (32 × 13.6) virtual binocular pixels in between that can be used to display such a movement. These 435 virtual binocular pixels may be identified through the look-up table described above. Fourth, the movement is displayed through the 435 intermediate virtual binocular pixels, i.e., 435 small steps in this example. The right light signals and the corresponding left light signals for these 435 virtual binocular pixels are respectively generated by the right light signal generator and the left light signal generator and projected onto the right retina and the left retina of the viewer. As a result, the viewer perceives the virtual object moving smoothly through 435 intermediate positions from 1 m to 20 cm.


The foregoing description of embodiments is provided to enable any person skilled in the art to make and use the subject matter. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the novel principles and subject matter disclosed herein may be applied to other embodiments without the use of inventive faculty. The claimed subject matter set forth in the claims is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. It is contemplated that additional embodiments are within the spirit and true scope of the disclosed subject matter. Thus, it is intended that the present invention covers modifications and variations that come within the scope of the appended claims and their equivalents.

Claims
  • 1. A system for displaying an object with depths comprising: a right light signal generator generating multiple right light signals for an object, each right light signal being a collimated narrow light beam of a right pixel;a right combiner receiving and redirecting the multiple right light signals towards one retina of a right eye of a viewer to display multiple right pixels of the object;a left light signal generator generating multiple left light signals for the object, each left light signal being a collimated narrow light beam of a left pixel;a left combiner receiving and redirecting the multiple left light signals towards another retina of a left eye of the viewer to display multiple left pixels of the object; andwherein a first redirected right light signal and a corresponding first redirected left light signal are respectively collimated and enter the right eye and the left eye of the viewer as collimated narrow light beams and are perceived by the viewer to display a first virtual binocular pixel of the object with a first depth that is related to a first angle between the first redirected right light signal and the corresponding first redirected left light signal,wherein a second redirected right light signal and a corresponding second redirected left light signal are respectively collimated and enter the right eye and the left eye of the viewer as collimated narrow light beams and are perceived by the viewer to display a second virtual binocular pixel of the object with a second depth that is related to a second angle between the second redirected right light signal and the corresponding second redirected left light signal, andwherein the first angle and the second angle are not equal.
  • 2. The system of claim 1, wherein the first depth is determined by the first angle between light path extensions of the first redirected right light signal and the corresponding first redirected left light signal.
  • 3. The system of claim 1, wherein the first redirected right light signal and the corresponding first redirected left light signal are directed to approximately the same height on the retinas of both eyes of the viewer.
  • 4. The system of claim 1, wherein the first redirected right light signal has approximately the same view angle as that of the corresponding first redirected left light signal.
  • 5. The system of claim 1, wherein the multiple right light signals generated from the right light signal generator are reflected only once before entering the retina of the viewer, and the multiple left light signals generated from the left light signal generator are reflected only once before entering the other retina of the viewer.
  • 6. The system of claim 1, wherein the right light signal generator is a right laser beam scanning projector (LBS projector) and the multiple right light signals generated from the right LBS projector are reflected only once by the right combiner before entering the retina of the viewer, and the left light signal generator is a left LBS projector and the multiple left light signals generated from the left LBS projector are reflected only once by a left combiner before entering the other retina of the viewer.
  • 7. The system of claim 1, wherein the right combiner and the left combiner are transparent for ambient lights.
  • 8. The system of claim 1, wherein the right combiner receives and redirects the multiple left light signals towards a right retina of the viewer to display multiple right pixels of the object and the left combiner receives and redirects the multiple right light signals towards a left retina of the viewer to display multiple left pixels of the object.
  • 9. The system of claim 1, wherein the right combiner and the left combiner are ellipsoid-shaped, and the right light signal generator is positioned on one focus of the right combiner and the left light signal generator is positioned on one focus of the left combiner.
  • 10. The system of claim 1, wherein a right projecting angle of the right light signal generator is adjustable to modify an incident angle of the multiple right light signals to the right combiner and a left projecting angle of the left light signal generator is adjustable to modify an incident angle of the multiple left light signals to the left combiner.
  • 11. The system of claim 1, further comprising: a support structure wearable on a head of the viewer;wherein the right light signal generator and the left light signal generator are carried by the support structure; andwherein the right combiner and the left combiner are carried by the support structure and positioned within a field of view of the viewer.
  • 12. The system of claim 11, wherein the support structure is a pair of glasses.
  • 13. The system of claim 12, wherein the pair of glasses has a prescription lens which carries the right combiner or the left combiner.
  • 14. The system of claim 12, wherein the pair of glasses has a prescription lens integratedly made with the right combiner or the left combiner.
  • 15. The system of claim 12, wherein the prescription lens and either the right combiner or the left combiner are attached with each other but separable.
  • 16. The system of claim 11, wherein the right combiner and the left combiner are integrated into one united combiner.
  • 17. The system of claim 1, further comprising: a control unit to provide electronic signals for the right light signal generator and the left light signal generator to respectively control a direction of the first right light signal and the first left light signal so that the viewer perceives the first virtual binocular pixel of the object at a predetermined 3D coordinate based on a look up table.
  • 18. The system of claim 17, wherein the look up table comprises a pair of a 2D right pixel coordinate and a corresponding 2D left pixel coordinate for the predetermined 3D coordinate of the first virtual binocular pixel.
  • 19. The system of claim 18, wherein the look up table further comprises a scale of size, a number of overlapping objects, and/or a depth in sequence for a virtual binocular pixel.
  • 20. The system of claim 17, wherein when an XYZ coordinate is used, a Z coordinate of the first virtual binocular pixel is determined by a horizontal distance between a right pixel X coordinate and a left pixel X coordinate.
  • 21. The system of claim 17, wherein the look up table is calibrated based on an interpupillary distance of the viewer.
  • 22. The system of claim 1, wherein the first depth is determined by the relative horizontal distance between the first redirected right light signal and the corresponding first redirected left light signal.
  • 23. The system of claim 1, wherein the viewer perceives the first virtual binocular pixel at a first predetermined 3D coordinate related to a first angle between the first redirected right light signal and the corresponding first redirected left light signal at a first time, and a second virtual binocular pixel at a second predetermined 3D coordinate related to a second angle between a second redirected right light signal and a corresponding second redirected left light signal at a second time.
  • 24. The system of claim 23, wherein the first virtual binocular pixel displays approximately the same part of the object as the second virtual binocular pixel and the first time and the second time are within the time period of persistence of vision so that the viewer perceives the object in moving.
  • 25. The system of claim 23, wherein the first virtual binocular pixel displays a different part of the object as the second virtual binocular pixel and the first time and the second time are within the time period of persistence of vision so that the viewer perceives different parts of the object concurrently.
  • 26. The system of claim 23, wherein when the second virtual binocular pixel is perceived to be closer to the viewer than the first virtual binocular pixel, the second angle is larger than the first angle, and a second relative horizontal distance between the second redirected right light signal and the corresponding second redirected left light signal is larger than a first relative horizontal distance between the first redirected right light signal and the corresponding first redirected left light signal.
  • 27. The system of claim 1, wherein the first right light signal is projected onto a predetermined location of one retina of the viewer and the corresponding left light signal is projected onto a predetermined location of the other retina of the viewer concurrently.
  • 28. A method for displaying an object with depths comprising: generating multiple right light signals for the object from a right light signal generator, each right light signal being a collimated narrow light beam of a right pixel;redirecting the multiple right light signals to one retina of a right eye of a viewer;generating multiple left light signals for the object from a left light signal generator, each left light signal being a collimated narrow light beam of a left pixel;redirecting the multiple left light signals to another retina of a left eye of the viewer;wherein a first redirected right light signal and a corresponding first redirected left light signal are respectively collimated and enter the right eye and the left eye of the viewer as collimated narrow light beams and are perceived by the viewer to display a first virtual binocular pixel of the object with a first depth that is related to a first angle between the first redirected right light signal and the corresponding first redirected left light signal,wherein a second redirected right light signal and a corresponding second redirected left light signal are respectively collimated and enter the right eye and the left eye of the viewer as collimated narrow light beams and are perceived by the viewer to display a second virtual binocular pixel of the object with a second depth that is related to a second angle between the second redirected right light signal and the corresponding second redirected left light signal, andwherein the first angle and the second angle are not equal.
  • 29. The method of claim 28, wherein the first depth is determined by the first angle between light path extensions of the first redirected right light signal and the corresponding first redirected left light signal.
  • 30. The method of claim 28, wherein the first redirected right light signal and the corresponding first redirected left light signal are directed to approximately the same height on the retinas of both eyes of the viewer.
  • 31. The method of claim 28, wherein the first redirected right light signal has approximately the same view angle as that of the corresponding first redirected left light signal.
  • 32. The method of claim 28, wherein the multiple right light signals generated from the right light signal generator are reflected only once before entering the retina of the viewer, and the multiple left light signals generated from the left light signal generator are reflected only once before entering the other retina of the viewer.
  • 33. The method of claim 28, wherein the right light signal generator is a right laser beam scanning projector (LBS projector) and the multiple right light signals generated from the right LBS projector are reflected only once by the right combiner before entering the retina of the viewer, and the left light signal generator is a left LBS projector and the multiple left light signals generated from the left LBS projector are reflected only once by a left combiner before entering the other retina of the viewer.
  • 34. The method of claim 28, wherein the right combiner and the left combiner are transparent for ambient lights.
  • 35. The method of claim 28, wherein the right combiner receives and redirects the multiple left light signals towards a right retina of a viewer to display multiple right pixels of the object and the left combiner receives and redirects the multiple right light signals towards a left retina of the viewer to display multiple left pixels of the object.
  • 36. The method of claim 28, wherein the right combiner and the left combiner are ellipsoid-shaped, and the right light signal generator is positioned on one focus of the right combiner and the left light signal generator is positioned on one focus of the left combiner.
  • 37. The method of claim 28, wherein a right projecting angle of the right light signal generator is adjustable to modify an incident angle of the multiple right light signals to the right combiner and a left projecting angle of the left light signal generator is adjustable to modify an incident angle of the multiple left light signals to the left combiner.
  • 38. The method of claim 28: wherein the right light signal generator and the left light signal generator are carried by a support structure wearable on a head of the viewer; andwherein the right combiner and the left combiner are carried by the support structure and positioned within a field of view of the viewer.
  • 39. The method of claim 38, wherein the support structure is a pair of glasses having a prescription lens which carries the right combiner or the left combiner.
  • 40. The method of claim 38, wherein the right combiner and the left combiner are integrated into one united combiner.
  • 41. A system for displaying an object with depths comprising: a right light signal generator generating multiple right light signals for an object, each right light signal being a collimated narrow light beam of a right pixel;a left light signal generator generating multiple left light signals for the object, each left light signal being a collimated narrow light beam of a left pixel; andwherein a first right light signal and a corresponding first left light signal are respectively collimated and enter a right eye and a left eye of a viewer as collimated narrow light beams and are respectively projected onto a right retina and a left retina of the right eye and left eye of the viewer, and are perceived by the viewer to display a first virtual binocular pixel of the object with a first depth that is related to a first angle between the first right light signal before projecting onto the right retina and the corresponding first left light signal before projecting onto the left retina,wherein a second right light signal and a corresponding second left light signal are respectively collimated and enter the right eye and the left eye of the viewer as collimated narrow light beams and are respectively projected onto the right retina and the left retina of the right eye and left eye of the viewer, and are perceived by the viewer to display a second virtual binocular pixel of the object with a second depth that is related to a second angle between the second right light signal before projecting onto the right retina and the corresponding second left light signal before projecting onto the left retina, andwherein the first angle and the second angle are not equal.
RELATED APPLICATION

This application claims the benefit of provisional application 62/931,228, filed on Nov. 6, 2019, titled “SYSTEM AND METHOD FOR PROJECTING BINOCULAR 3D IMAGES WITH DEPTHS”, provisional application 62/978,322, filed on Feb. 19, 2020, titled “HEAD WEARABLE DEVICE WITH INWARD AND OUTWARD CAMERA”, provisional application 63/041,740, filed on Jun. 19, 2020, titled “METHODS AND SYSTEMS FOR EYEBOX EXPANSION”, and provisional application 63/085,172, filed on Sep. 30, 2020, titled “SYSTEMS AND METHODS FOR PROJECTING VIRTUAL IMAGES WITH MULTIPLE DEPTHS”, all of which are incorporated herein by reference in their entirety.

PCT Information
Filing Document: PCT/US2020/059317; Filing Date: 11/6/2020; Country: WO
Publishing Document: WO 2021/092314; Publishing Date: 5/14/2021; Country: WO; Kind: A
Related Publications (1)
20220311992 A1, Sep. 2022, US
Provisional Applications (4)
  • 63/085,172, Sep. 2020, US
  • 63/041,740, Jun. 2020, US
  • 62/978,322, Feb. 2020, US
  • 62/931,228, Nov. 2019, US