The present disclosure relates generally to a method and a system for displaying an object with depths and, in particular, to a method and a system for displaying an object by generating and redirecting multiple right light signals and left light signals respectively to a viewer's retinas.
In conventional virtual reality (VR) and augmented reality (AR) systems that implement stereoscopic technology, a three-dimensional virtual image is produced by projecting two parallax images with different view angles concurrently to the left and right display panels, which are positioned proximate to the viewer's eyes. The view angle difference between the two parallax images is interpreted by the brain and translated into depth perception, while the viewer's eyes are actually focusing (fixating) on the display panels, which lie at a depth different from the depth the viewer perceives from the parallax images. Vergence-accommodation conflict (VAC) occurs when the accommodation required to focus on an object mismatches the convergence of the eyes implied by the perceived depth. The VAC causes the viewer to experience dizziness or headache. Moreover, when parallax images are used in a mixed reality (MR) setting, the viewer cannot focus on a real object and a virtual image at the same time ("focal rivalry"). Furthermore, displaying motion of virtual images via parallax imaging technology places a heavy burden on graphics hardware.
An object of the present disclosure is to provide a system and a method for displaying an object with depths in space. Because the depth of the object is the same as the location at which both eyes of a viewer fixate, vergence-accommodation conflict (VAC) and focal rivalry can be avoided. The object displaying system has a right light signal generator, a right combiner, a left light signal generator, and a left combiner. The right light signal generator generates multiple right light signals for an object. The right combiner receives and redirects the multiple right light signals towards one retina of a viewer to display multiple right pixels of the object. The left light signal generator generates multiple left light signals for the object. The left combiner receives and redirects the multiple left light signals towards the other retina of the viewer to display multiple left pixels of the object. In addition, a first redirected right light signal and a corresponding first redirected left light signal are perceived by the viewer to display a first virtual binocular pixel of the object with a first depth that is related to a first angle between the first redirected right light signal and the corresponding first redirected left light signal. In one embodiment, the first depth is determined by the first angle between the light path extensions of the first redirected right light signal and the corresponding first redirected left light signal.
The object is perceived with multiple depths when, in addition to the first virtual binocular pixel of the object, a second redirected right light signal and a corresponding second redirected left light signal are perceived by the viewer to display a second virtual binocular pixel of the object with a second depth that is related to a second angle between the second redirected right light signal and the corresponding second redirected left light signal.
Furthermore, the first redirected right light signal is not a parallax of the corresponding first redirected left light signal. Both the right eye and the left eye receive the image of the object from the same view angle, rather than two parallax images respectively from the right eye view angle and the left eye view angle, which are conventionally used to generate a 3D image.
In another embodiment, the first redirected right light signal and the corresponding first redirected left light signal are directed to approximately the same height on the retinas of the viewer's two eyes.
In another embodiment, the multiple right light signals generated by the right light signal generator are reflected only once before entering one retina of the viewer, and the multiple left light signals generated by the left light signal generator are reflected only once before entering the other retina of the viewer.
In one embodiment, the right combiner receives and redirects the multiple right light signals towards a right retina of a viewer to display multiple right pixels of the object, and the left combiner receives and redirects the multiple left light signals towards a left retina of the viewer to display multiple left pixels of the object. In another embodiment, the right combiner receives and redirects the multiple left light signals towards a right retina of a viewer to display multiple right pixels of the object, and the left combiner receives and redirects the multiple right light signals towards a left retina of the viewer to display multiple left pixels of the object.
In an application of augmented reality (AR) or mixed reality (MR), the right combiner and the left combiner are transparent to ambient light.
Also in the application of AR and MR, an object displaying system further includes a support structure that is wearable on a head of the viewer. The right light signal generator, the left light signal generator, the right combiner, and the left combiner are carried by the support structure. In one embodiment, the system is a head wearable device, in particular a pair of glasses. In this circumstance, the support structure may be a frame with or without lenses of the pair of glasses. The lenses may be prescription lenses used to correct nearsightedness, farsightedness, etc.
In the embodiment of smart glasses, the right light signal generator may be carried by the right temple of the frame and the left light signal generator may be carried by the left temple of the frame. In addition, the right combiner may be carried by the right lens and the left combiner may be carried by the left lens. The carrying can be implemented in various manners. The combiner may be attached to or incorporated into the lens by either a removable or a non-removable means. In addition, the combiner may be integrally made with the lens, including a prescription lens.
The present invention uses retinal scanning to project right light signals and left light signals onto the viewer's retinas for displaying virtual images, instead of a near-eye display usually placed in close proximity to the viewer's eyes.
Additional features and advantages of the disclosure will be set forth in the descriptions that follow, and in part will be apparent from the descriptions, or may be learned by practice of the disclosure. The objectives and other advantages of the disclosure will be realized and attained by the structure and method particularly pointed out in the written description and claims thereof as well as the appended drawings. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
The terminology used in the description presented below is intended to be interpreted in its broadest reasonable manner, even though it is used in conjunction with a detailed description of certain specific embodiments of the technology. Certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be specifically defined as such in this Detailed Description section.
The present invention relates to systems and methods for displaying an object with a depth in space. Because the depth of the object is the same as the location at which both eyes of a viewer fixate, vergence-accommodation conflict (VAC) and focal rivalry can be avoided. The described embodiments concern one or more methods, systems, apparatuses, and computer readable media storing processor-executable process steps to display an object with depths in space for a viewer. An object displaying system has a right light signal generator, a right combiner, a left light signal generator, and a left combiner. The right light signal generator generates multiple right light signals for an object. The right combiner receives and redirects the multiple right light signals towards one retina of a viewer to display multiple right pixels of the object. The left light signal generator generates multiple left light signals for the object. The left combiner receives and redirects the multiple left light signals towards the other retina of the viewer to display multiple left pixels of the object. In addition, a first redirected right light signal and a corresponding first redirected left light signal are perceived by the viewer to display a first virtual binocular pixel of the object with a first depth that is related to a first angle between the first redirected right light signal and the corresponding first redirected left light signal. In one embodiment, the first depth is determined by the first angle between the light path extensions of the first redirected right light signal and the corresponding first redirected left light signal.
The object is perceived with multiple depths when, in addition to the first virtual binocular pixel of the object, a second redirected right light signal and a corresponding second redirected left light signal are perceived by the viewer to display a second virtual binocular pixel of the object with a second depth that is related to a second angle between the second redirected right light signal and the corresponding second redirected left light signal.
Furthermore, the first redirected right light signal is not a parallax of the corresponding first redirected left light signal. Both the right eye and the left eye receive the image of the object from the same view angle, rather than two parallax images respectively from the right eye view angle and the left eye view angle, which are conventionally used to generate a 3D image.
In another embodiment, the first redirected right light signal and the corresponding first redirected left light signal are directed to approximately the same height on the retinas of the viewer's two eyes.
In another embodiment, the multiple right light signals generated by the right light signal generator are reflected only once before entering one retina of the viewer, and the multiple left light signals generated by the left light signal generator are reflected only once before entering the other retina of the viewer.
In one embodiment, the right combiner receives and redirects the multiple right light signals towards a right retina of a viewer to display multiple right pixels of the object, and the left combiner receives and redirects the multiple left light signals towards a left retina of the viewer to display multiple left pixels of the object. In another embodiment, the right combiner receives and redirects the multiple left light signals towards a right retina of a viewer to display multiple right pixels of the object, and the left combiner receives and redirects the multiple right light signals towards a left retina of the viewer to display multiple left pixels of the object.
In an application of augmented reality (AR) or mixed reality (MR), the right combiner and the left combiner are transparent to ambient light.
Also in the application of AR and MR, an object displaying system further includes a support structure that is wearable on a head of the viewer. The right light signal generator, the left light signal generator, the right combiner, and the left combiner are carried by the support structure. In one embodiment, the system is a head wearable device, in particular a pair of glasses. In this circumstance, the support structure may be a frame with or without lenses of the pair of glasses. The lenses may be prescription lenses used to correct nearsightedness, farsightedness, etc.
In the embodiment of smart glasses, the right light signal generator may be carried by the right temple of the frame and the left light signal generator may be carried by the left temple of the frame. In addition, the right combiner may be carried by the right lens and the left combiner may be carried by the left lens. The carrying can be implemented in various manners. The combiner may be attached to or incorporated into the lens by either a removable or a non-removable means. In addition, the combiner may be integrally made with the lens, including a prescription lens.
As shown in
As shown in
The distance between the right pupil 52 and the left pupil 62 is the interpupillary distance (IPD). Similarly, the second angle between the second redirected right light signal 18′ and the corresponding second redirected left light signal 38′ is θ2. The second depth D2 is related to the second angle θ2. In particular, the second depth D2 of the second virtual binocular pixel of the object can be determined approximately by the second angle θ2 between the light path extensions of the second redirected right light signal and the corresponding second redirected left light signal, using the same formula. Since the second virtual binocular pixel 74 is perceived by the viewer to be further away from the viewer (i.e., with larger depth) than the first virtual binocular pixel 72, the second angle θ2 is smaller than the first angle θ1.
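The relation between convergence angle and perceived depth can be illustrated with a simple triangulation sketch. This is a minimal example, not a formula quoted from the disclosure: it assumes the standard symmetric-triangle relation D ≈ IPD / (2·tan(θ/2)) and an illustrative IPD of 60 mm, which is consistent with the convergence angles quoted in the examples later in this description (about 3.4° at 1 m, 0.34° at 10 m, and 17° at 20 cm).

```python
import math

def depth_from_convergence_angle(theta_deg: float, ipd_m: float = 0.06) -> float:
    """Approximate perceived depth (m) from the convergence angle (degrees).

    Assumes the symmetric triangulation relation D = IPD / (2 * tan(theta / 2)),
    an illustrative model rather than a formula quoted from the disclosure.
    """
    return ipd_m / (2.0 * math.tan(math.radians(theta_deg) / 2.0))

def convergence_angle_from_depth(depth_m: float, ipd_m: float = 0.06) -> float:
    """Inverse relation: convergence angle (degrees) for a given depth (m)."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * depth_m)))

# Reproduces the angles used in Examples 8 and 9 below (about 3.4, 0.34, and 17 degrees),
# and shows that a larger depth corresponds to a smaller convergence angle.
for d in (1.0, 10.0, 0.2):
    print(f"depth {d:5.2f} m -> convergence angle {convergence_angle_from_depth(d):5.2f} deg")
```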
Furthermore, although the redirected right light signal 16′ for RLS_3 and the corresponding redirected left light signal 36′ for LLS_2 together display a first virtual binocular pixel 72 with the first depth D1, the redirected right light signal 16′ for RLS_3 is not a parallax of the corresponding redirected left light signal 36′ for LLS_3. Conventionally, a parallax between the image received by the right eye and the image received by the left eye is used for a viewer to perceive a 3D image with depth, because the right eye sees the same object from a view angle different from that of the left eye. However, in the present invention, the right light signal and the corresponding left light signal for a virtual binocular pixel display an image of the same view angle. Thus, the intensity of red, green, and blue (RGB) color and/or the brightness of the right light signal and the left light signal are approximately the same. In other words, the right pixel and the corresponding left pixel are approximately the same. However, in another embodiment, one or both of the right light signal and the left light signal may be modified to present some 3D effects, such as shadow. In general, in the present invention both the right eye and the left eye receive the image of the object from the same view angle, rather than the parallax images from the right eye view angle and the left eye view angle that are conventionally used to generate a 3D image.
As described above, the multiple right light signals are generated by the right light signal generator, redirected by the right combiner, and then directly scanned onto the right retina to form a right retina image on the right retina. Likewise, the multiple left light signals are generated by the left light signal generator, redirected by the left combiner, and then scanned onto the left retina to form a left retina image on the left retina. In an embodiment shown in
With reference to
In one embodiment shown in
A virtual object perceived by a viewer in area C includes multiple virtual binocular pixels. To precisely describe the location of a virtual binocular pixel in space, each location in space is assigned a three-dimensional (3D) coordinate, for example an XYZ coordinate. Other 3D coordinate systems can be used in other embodiments. As a result, each virtual binocular pixel has a 3D coordinate with a horizontal direction, a vertical direction, and a depth direction. The horizontal direction (or X axis direction) is along the direction of the interpupillary line. The vertical direction (or Y axis direction) is along the facial midline and perpendicular to the horizontal direction. The depth direction (or Z axis direction) is normal to the frontal plane and perpendicular to both the horizontal and vertical directions.
As shown in
As shown in
The look up table may be created by the following processes. At the first step, obtain an individual virtual map based on the viewer's IPD, created by the system during initiation or calibration, which specifies the boundary of area C where the viewer can perceive an object with depths because of the fusion of the right retina image and the left retina image. At the second step, for each depth in the Z axis direction (each point on the Z-coordinate), calculate the convergence angle to identify the pair of right pixel and left pixel respectively on the right retina image and the left retina image, regardless of the X-coordinate and Y-coordinate location. At the third step, move the pair of right pixel and left pixel along the X axis direction to identify the X-coordinate and Z-coordinate of each pair of right pixel and left pixel at a specific depth, regardless of the Y-coordinate location. At the fourth step, move the pair of right pixel and left pixel along the Y axis direction to determine the Y-coordinate of each pair of right pixel and left pixel. As a result, the 3D coordinate, such as XYZ, of each pair of right pixel and left pixel respectively on the right retina image and the left retina image can be determined to create the look up table. In addition, the third step and the fourth step are exchangeable.
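The table-building process above can be sketched in code. The following is an illustrative sketch only, under simplifying assumptions that are not part of the disclosure: an IPD of 60 mm, a 1280×720 retina image covering 40 degrees of horizontal field of view, a pixel disparity proportional to the convergence angle, and the same row index used for both eyes (the disclosure only requires approximately the same height). In practice only the coordinates actually needed would be stored.

```python
import math

IPD_M = 0.06                 # assumed interpupillary distance
WIDTH, HEIGHT = 1280, 720    # assumed retina-image resolution
FOV_X_DEG = 40.0             # assumed horizontal field of view
PX_PER_DEG = WIDTH / FOV_X_DEG

def build_lookup_table(depths_m):
    """Map (x_px, y_px, depth_index) -> ((right_x, right_y), (left_x, left_y)).

    Step 2: convert each depth to a convergence angle and a horizontal pixel
    offset between the right and left retina images.
    Steps 3 and 4: sweep the pixel pair along X and then Y to fill in the table.
    """
    table = {}
    for zi, depth in enumerate(depths_m):
        theta = math.degrees(2.0 * math.atan(IPD_M / (2.0 * depth)))
        offset = int(round(theta * PX_PER_DEG))      # pixel disparity for this depth
        for x in range(WIDTH - offset):              # step 3: move along X
            for y in range(HEIGHT):                  # step 4: move along Y
                right_pixel = (x, y)
                left_pixel = (x + offset, y)         # same height on both retinas
                table[(x, y, zi)] = (right_pixel, left_pixel)
    return table
```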
In another embodiment, a designer may determine each of the virtual binocular pixels needed to form the virtual object, and then use the look-up table to identify the corresponding pair of right pixel and left pixel for each of them. The right light signals and the left light signals can then be generated accordingly. The right retina image and the left retina image are of the same view angle. Parallax is not used to present 3D images. As a result, very complicated and time-consuming graphics computation can be avoided. The relative locations of the object on the right retina image and the left retina image determine the depth perceived by the viewer.
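That designer workflow then reduces, roughly, to one table lookup per virtual binocular pixel. The helper below is a hypothetical sketch; the function name, the frame representation, and the pixel-pair convention are carried over from the previous sketch and are assumptions, not the disclosure's API.

```python
def frames_for_object(object_pixels, table, width=1280, height=720):
    """Produce one right frame and one left frame for a virtual object given as a
    list of virtual binocular pixels, each a tuple (x_px, y_px, depth_index, color)."""
    right_frame = [[None] * width for _ in range(height)]
    left_frame = [[None] * width for _ in range(height)]
    for x, y, zi, color in object_pixels:
        (rx, ry), (lx, ly) = table[(x, y, zi)]
        # Same view angle and same color for both eyes; no parallax rendering is involved.
        right_frame[ry][rx] = color
        left_frame[ly][lx] = color
    return right_frame, left_frame
```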
The light signal generators 10 and 30 may use a laser, a light emitting diode ("LED") including mini and micro LEDs, an organic light emitting diode ("OLED"), a superluminescent diode ("SLD"), liquid crystal on silicon ("LCoS"), a liquid crystal display ("LCD"), or any combination thereof as the light source. In one embodiment, the light signal generator 10, 30 is a laser beam scanning projector (LBS projector) which may comprise a light source including a red color light laser, a green color light laser, and a blue color light laser, a light color modifier, such as a dichroic combiner and a polarizing combiner, and a two-dimensional (2D) adjustable reflector, such as a 2D microelectromechanical system ("MEMS") mirror. The 2D adjustable reflector can be replaced by two one-dimensional (1D) reflectors, such as two 1D MEMS mirrors. The LBS projector sequentially generates and scans light signals one by one to form a 2D image at a predetermined resolution, for example 1280×720 pixels per frame. Thus, one light signal for one pixel is generated and projected at a time towards the combiner 20, 40. For a viewer to see such a 2D image from one eye, the LBS projector has to sequentially generate the light signals for each pixel, for example 1280×720 light signals, within the time period of persistence of vision, for example 1/18 second. Thus, the time duration of each light signal is about 60.28 nanoseconds.
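The per-pixel dwell time quoted above follows directly from the frame resolution and the persistence-of-vision window; the short check below simply reproduces that arithmetic.

```python
pixels_per_frame = 1280 * 720          # predetermined resolution per frame
frame_period_s = 1 / 18                # persistence-of-vision window quoted above

dwell_time_ns = frame_period_s / pixels_per_frame * 1e9
print(f"{dwell_time_ns:.2f} ns per light signal")   # about 60.28 ns
```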
In another embodiment, the light signal generator 10, 30 may be a digital light processing projector ("DLP projector") which can generate a 2D color image at one time. Texas Instruments' DLP technology is one of several technologies that can be used to manufacture the DLP projector. The whole 2D color image frame, which for example may comprise 1280×720 pixels, is simultaneously projected towards the combiner 20, 40.
The combiner 20, 40 receives and redirects multiple light signals generated by the light signal generator 10, 30. In one embodiment, the combiner 20, 40 reflects the multiple light signals so that the redirected light signals are on the same side of the combiner 20, 40 as the incident light signals. In another embodiment, the combiner 20, 40 refracts the multiple light signals so that the redirected light signals are on a different side of the combiner 20, 40 from the incident light signals. When the combiner 20, 40 functions as a reflector, the reflection ratio can vary widely, such as 20%-80%, in part depending on the power of the light signal generator. People with ordinary skill in the art know how to determine the appropriate reflection ratio based on the characteristics of the light signal generators and the combiners. Besides, in one embodiment, the combiner 20, 40 is optically transparent to the ambient (environmental) light from the opposite side of the incident light signals. The degree of transparency can vary widely depending on the application. For an AR/MR application, the transparency is preferably more than 50%, such as about 75% in one embodiment. In addition to redirecting the light signals, the combiner 20, 40 may converge the multiple light signals forming the combiner images so that they can pass through the pupils and arrive at the retinas of both of the viewer's eyes.
The combiner 20, 40 may be made of glass or plastic material like a lens, coated with certain materials, such as metals, to make it partially transparent and partially reflective. One advantage of using a reflective combiner, instead of a wave guide as in the prior art, for directing light signals to the viewer's eyes is the elimination of undesirable diffraction effects, such as multiple shadows and color displacement. The combiner 20, 40 may be a holographic combiner, but this is not preferred because the diffraction effects can cause multiple shadows and RGB displacement. In some embodiments, the use of a holographic combiner may therefore be avoided.
In one embodiment, the combiner 20, 40 is configured to have an ellipsoid surface. In addition, the light signal generator and a viewer's eye are respectively positioned at the two focal points of the ellipsoid. As illustrated in
The object displaying system may further include a right collimator and a left collimator to narrow the light beams of the multiple light signals, for example to cause the directions of motion to become more aligned in a specific direction or to cause the spatial cross section of the light beam to become smaller. The right collimator may be positioned between the right light signal generator and the right combiner, and the left collimator may be positioned between the left light signal generator and the left combiner. The collimator may be a curved mirror or a lens.
As shown in
The object displaying system may further include a control unit having all the necessary circuitry for controlling the right light signal generator and the left light signal generator. The control unit provides electronic signals for the light signal generators to generate the multiple light signals. In one embodiment, the position and angle of the right light signal generator and the left light signal generator can be adjusted to modify the incident angles of the right light signals and the left light signals and their receiving locations on the right combiner and the left combiner. Such adjustment may be implemented by the control unit. The control unit may communicate with a separate image signal provider via a wired or wireless means. The wireless communication includes telecommunication such as 4G and 5G, Wi-Fi, Bluetooth, near field communication, and the Internet. The control unit may include a processor, a memory, and an I/O interface to communicate with the image signal provider and the viewer. The object displaying system further comprises a power supply. The power supply may be a battery and/or a component that can be wirelessly charged.
There are at least two options for arranging the light path from the light signal generator to the viewer's retina. The first option, described above, is that the right light signals generated by the right light signal generator are redirected by the right combiner to arrive at the right retina, and the left light signals generated by the left light signal generator are redirected by the left combiner to arrive at the left retina. As shown in
In another embodiment shown in
The object displaying system may include a support structure wearable on a head of the viewer to carry the right light signal generator, the left light signal generator, the right combiner, and the left combiner. The right combiner and the left combiner are positioned within a field of view of the viewer. Thus, in this embodiment, the object displaying system is a head wearable device (HWD). In particular, as shown in
All components and variations in the embodiments of the object displaying system described above may be applied to the HWD. Thus, the HWD, including smart glasses, may further carry other components of the object displaying system, such as a control unit, a right collimator, and a left collimator. The right collimator may be positioned between the right light signal generator and the right combiner, and the left collimator may be positioned between the left light signal generator and the left combiner. In addition, the combiner may be replaced by a beam splitter and a convergent lens. The function of the beam splitter is to reflect light signals, and the function of the convergent lens is to converge the light signals so that they can pass through the pupils and arrive at the viewer's retinas.
When the object displaying system is implemented on smart eyeglasses, the lenses of the smart eyeglasses may have both a dioptric property for correcting the viewer's eyesight and the function of a combiner. The smart eyeglasses may have lenses with prescribed degrees to fit the needs of individuals who are near-sighted or far-sighted, to correct their eyesight. In these circumstances, each of the lenses of the smart eyeglasses may comprise a dioptric unit and a combiner. The dioptric unit and the combiner can be integrally manufactured as one piece with the same or different types of material. The dioptric unit and the combiner can also be separately manufactured as two pieces and then assembled together. These two pieces can be attached to each other but remain separable, for example with built-in magnetic material, or may be attached to each other permanently. In either situation, the combiner is provided on a side of the lens which is closer to the eyes of the viewer. If the lens is one piece, the combiner forms the inner surface of the lens. If the lens has two portions, the combiner forms the inner portion of the lens. The combiner both allows ambient light to pass through and reflects the light signals generated by the light signal generators to the viewer's eyes to form virtual images in the real environment. The combiner is designed to have an appropriate curvature to reflect and to converge all the light signals from the light signal generators into the pupils and then onto the retinas of the eyes.
In some embodiments, the curvature of one of the surfaces of the dioptric unit is determined based on the viewer's dioptric prescription. If the lens is one piece, the prescribed curvature is on the outer surface of the lens. If the lens has two portions, the dioptric unit forms the outer portion of the lens. In this situation, the prescribed curvature may be on either the inner surface or the outer surface of the dioptric unit. To better match the dioptric unit and the combiner, in one embodiment, the dioptric unit can be categorized into three groups based on its prescribed degrees: over +3.00 (farsighted), between −3.00 and +3.00, and under −3.00 (nearsighted). The combiner can be designed according to the category of a dioptric unit. In another embodiment, the dioptric unit can be categorized into five or ten groups, each of which covers a smaller range of prescribed degrees. As shown in
In addition to displaying a still virtual object in an image frame in space, the object displaying system can display the object in motion. When the right light signal generator 10 and the left light signal generator 30 generate light signals at a high speed, for example 30, 60, or more frames per second, the viewer can see the object moving smoothly in a video due to persistence of vision. Various embodiments of the processes to display a moving virtual object for the viewer are described below.
Example 1 shown in
Example 2 shown in
Example 3 shown in
However, to move a virtual object closer to the viewer, if the X-coordinate of the virtual object is not at the center (middle point) of the interpupillary line (where the X-coordinate equals zero in one embodiment), the locations of the right light signals and the corresponding left light signals, respectively on the right combiner image and the left combiner image, need to be moved closer to each other based on a ratio. The ratio is that of the distance between the location of the right light signal on the right combiner image and its left edge (close to the center of both eyes), to the distance between the location of the left light signal on the left combiner image and its right edge (close to the center of both eyes). For example, assume that the location of the right light signal on the right combiner image is 10 pixels from its left edge (close to the center of both eyes) and the location of the left light signal on the left combiner image is 5 pixels from its right edge (close to the center of both eyes). The ratio of the right-location-to-center distance to the left-location-to-center distance is then 2:1 (10:5). To move the object closer, if the right location on the right combiner image and the left location on the left combiner image have to move closer to each other by a distance of 3 pixels, the right location needs to move towards its left edge by 2 pixels and the left location needs to move towards its right edge by 1 pixel because of the 2:1 ratio.
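The ratio-based split described above can be expressed as a small helper. This is an illustrative sketch only; the function name and the convention of measuring distances in pixels toward the inner edges of the two combiner images are assumptions made for the example.

```python
def split_disparity_change(right_to_inner_edge_px: int,
                           left_to_inner_edge_px: int,
                           total_change_px: int) -> tuple[int, int]:
    """Split a change in separation between the right and left locations
    in proportion to their current distances from the inner edges.

    Returns (right_move_px, left_move_px); both locations move toward the
    inner edges when the object moves closer to the viewer.
    """
    total = right_to_inner_edge_px + left_to_inner_edge_px
    right_move = round(total_change_px * right_to_inner_edge_px / total)
    left_move = total_change_px - right_move
    return right_move, left_move

# The example in the text: 10 px and 5 px from the inner edges, moving 3 px closer.
print(split_disparity_change(10, 5, 3))   # -> (2, 1)
```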
Example 4 shown in
Example 5 shown in
However, since the X-coordinate of the virtual object remains the same while the virtual object moves closer to the viewer, the locations of the right light signals and the corresponding left light signals, respectively on the right combiner image and the left combiner image, need to be moved closer to each other based on a ratio. The ratio is that of the distance between the location of the right light signal on the right combiner image and its left edge (close to the center of both eyes), to the distance between the location of the left light signal on the left combiner image and its right edge (close to the center of both eyes). For example, assume that the location of the right light signal on the right combiner image is 10 pixels from its left edge (close to the center of both eyes) and the location of the left light signal on the left combiner image is 5 pixels from its right edge (close to the center of both eyes). The ratio of the right-location-to-center distance to the left-location-to-center distance is then 2:1 (10:5). To move the object closer, if the right location on the right combiner image and the left location on the left combiner image have to move closer to each other by a distance of 3 pixels, the right location needs to move towards its left edge by 2 pixels and the left location needs to move towards its right edge by 1 pixel because of the 2:1 ratio.
Example 6 shown in
Example 7 shown in
Example 8 shown in
Second, the convergence angle of the second virtual binocular pixel with 10 m depth is calculated to be 0.34 degrees between the light path extension of the second redirected right light signal and the second redirected left light signal.
Third, the intermediate virtual binocular pixels are calculated and identified. The number of intermediate virtual binocular pixels may be calculated based on the difference between the convergence angles of the first virtual binocular pixel and the second virtual binocular pixel, and the number of pixels in the X axis direction for every degree of FOV. The difference between the convergence angle of the first virtual binocular pixel (3.4 degrees) and that of the second virtual binocular pixel (0.34 degrees) equals 3.06 degrees. The number of pixels in the X axis direction for every degree of FOV is 32, assuming that the total width of a scanned retina image is 1280 pixels, which cover 40 degrees of field of view (FOV) in total. Thus, when the virtual object is moving from a first virtual binocular pixel with 1 m depth to a second virtual binocular pixel with 10 m depth, there are approximately 98 (32×3.06) virtual binocular pixels in between that can be used to display such a movement. These 98 virtual binocular pixels may be identified through the look up table described above. Fourth, display the movement through the 98 intermediate virtual binocular pixels, i.e., 98 small steps in between in this example. The right light signals and the corresponding left light signals for these 98 virtual binocular pixels are respectively generated by the right light signal generator and the left light signal generator and projected onto the right retina and the left retina of the viewer. As a result, the viewer can perceive a virtual object moving smoothly through 98 intermediate positions from 1 m to 10 m.
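The count of intermediate virtual binocular pixels used in this example (and in Example 9 below) follows from the convergence-angle difference and the 32 pixels per degree assumed above; the sketch below simply reproduces that arithmetic, with the convergence angles taken as given from the text.

```python
PX_PER_DEG = 1280 / 40   # 1280-pixel-wide retina image over a 40-degree FOV

def intermediate_pixel_count(angle_start_deg: float, angle_end_deg: float) -> int:
    """Number of intermediate virtual binocular pixels between two depths,
    given their convergence angles in degrees."""
    return round(abs(angle_start_deg - angle_end_deg) * PX_PER_DEG)

print(intermediate_pixel_count(3.4, 0.34))   # Example 8: about 98 steps (1 m to 10 m)
print(intermediate_pixel_count(3.4, 17.0))   # Example 9: about 435 steps (1 m to 20 cm)
```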
Example 9 shown in
Second, the convergence angle of the second virtual binocular pixel with 20 cm depth is calculated to be 17 degrees between the light path extension of the second redirected right light signal and the second redirected left light signal.
Third, the intermediate virtual binocular pixels are calculated and identified. The number of intermediate virtual binocular pixels may be calculated based on the difference between the convergence angles of the first virtual binocular pixel and the second virtual binocular pixel, and the number of pixels in the X axis direction for every degree of FOV. The difference between the convergence angle of the first virtual binocular pixel (3.4 degrees) and that of the second virtual binocular pixel (17 degrees) equals 13.6 degrees. The number of pixels in the X axis direction for every degree of FOV is 32, assuming that the total width of a scanned retina image is 1280 pixels, which cover 40 degrees of field of view (FOV) in total. Thus, when the virtual object is moving from a first virtual binocular pixel with 1 m depth to a second virtual binocular pixel with 20 cm depth, there are approximately 435 (32×13.6) virtual binocular pixels in between that can be used to display such a movement. These 435 virtual binocular pixels may be identified through the look up table described above. Fourth, display the movement through the 435 intermediate virtual binocular pixels, i.e., 435 small steps in between in this example. The right light signals and the corresponding left light signals for these 435 virtual binocular pixels are respectively generated by the right light signal generator and the left light signal generator and projected onto the right retina and the left retina of the viewer. As a result, the viewer can perceive a virtual object moving smoothly through 435 intermediate positions from 1 m to 20 cm.
The foregoing description of embodiments is provided to enable any person skilled in the art to make and use the subject matter. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the novel principles and subject matter disclosed herein may be applied to other embodiments without the use of the innovative faculty. The claimed subject matter set forth in the claims is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. It is contemplated that additional embodiments are within the spirit and true scope of the disclosed subject matter. Thus, it is intended that the present invention covers modifications and variations that come within the scope of the appended claims and their equivalents.
This application claims the benefit of provisional application 62/931,228, filed on Nov. 6, 2019, titled "SYSTEM AND METHOD FOR PROJECTING BINOCULAR 3D IMAGES WITH DEPTHS", provisional application 62/978,322, filed on Feb. 19, 2020, titled "HEAD WEARABLE DEVICE WITH INWARD AND OUTWARD CAMERA", provisional application 63/041,740, filed on Jun. 19, 2020, titled "METHODS AND SYSTEMS FOR EYEBOX EXPANSION", and provisional application 63/085,172, filed on Sep. 30, 2020, titled "SYSTEMS AND METHODS FOR PROJECTING VIRTUAL IMAGES WITH MULTIPLE DEPTHS", which are incorporated herein by reference in their entirety.