Compact variable focus configurations

Information

  • Patent Grant
  • Patent Number
    11,204,491
  • Date Filed
    Thursday, May 30, 2019
  • Date Issued
    Tuesday, December 21, 2021
Abstract
One embodiment is directed to a head-wearable viewing component for presenting virtual image information to a user, comprising: a head wearable frame; a left optical element for a left eye of the user, the left optical element coupled to the head wearable frame and comprising a left fluid/membrane lens configured to have an electromechanically adjustable focal length for the left eye of the user; a right optical element for a right eye of the user, the right optical element coupled to the head wearable frame and comprising a right fluid/membrane lens configured to have an electromechanically adjustable focal length for the right eye of the user; and a controller operatively coupled to the left optical element and right optical element and configured to provide one or more commands thereto to modify the focal lengths of the left optical element and right optical element.
Description
FIELD OF THE INVENTION

This invention relates to viewing optics assemblies, and more specifically to compact variable focus configurations.


BACKGROUND

It is desirable that mixed reality, or augmented reality, near-eye displays be lightweight, low-cost, have a small form factor, have a wide virtual image field of view, and be as transparent as possible. In addition, it is desirable to have configurations that present virtual image information in multiple focal planes (for example, two or more) in order to be practical for a wide variety of use-cases without exceeding an acceptable allowance for vergence-accommodation mismatch.

Referring to FIG. 1, an augmented reality system is illustrated featuring a head-worn viewing component (2), a hand-held controller component (4), and an interconnected auxiliary computing or controller component (6) which may be configured to be worn as a belt pack or the like on the user. Each of these components may be operatively coupled (10, 12, 14, 16, 17, 18) to each other and to other connected resources (8), such as cloud computing or cloud storage resources, via wired or wireless communication configurations, such as those specified by IEEE 802.11, Bluetooth®, and other connectivity standards and configurations. Various aspects of such components are described, for example, in U.S. patent application Ser. Nos. 14/555,585, 14/690,401, 14/331,218, 15/481,255, and 62/518,539, each of which is incorporated by reference herein in its entirety, including various embodiments of the two depicted optical elements (20) through which the user may see the world around them along with visual components which may be produced by the associated system components, for an augmented reality experience. In some variations, true variable focus components may be utilized as components of the optical elements (20) to provide not only one or two focal planes, but a spectrum thereof, selectable or tunable by an integrated control system.

Referring to FIGS. 2A-2C and FIG. 3, one category of variable focus configurations comprises a fluid type of lens coupled to a membrane and adjustably housed such that upon rotation of a motor (24), an associated mechanical drive assembly (26) rotationally drives a cam member (28) against a lever assembly (30), which causes two opposing perimetric plates (38, 40) to rotate (48, 46) relative to a main housing assembly (41) about associated rotation pin joints (32, 34), such that the fluid/membrane lens (36) is squeezed (44/42; or released, depending upon the motor 24 / cam 28 direction and positioning), as shown in FIG. 3. This squeezing/releasing and reorientation of the opposing perimetric plates (38, 40) relative to each other changes the focus of the fluid/membrane lens (36), thus providing an electromechanically adjustable variable focus assembly.

One of the challenges with such a configuration is that it is relatively bulky from a geometric perspective for integration into a head-wearable type of system component (2). Another challenge is that, because the system re-orients the opposing perimetric plates (38, 40) relative to each other as each of them pivots at the bottom relative to the frame that couples the assembly, there is a concomitant change in image position as the focus is varied; this introduces another undesirable complicating variable which must be dealt with in calibration or other steps or configurations. There is a need for compact variable focus lens systems and assemblies which are optimized for use in wearable computing systems.
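For orientation only (these relationships are not stated in the patent text), the reason that squeezing or releasing such a fluid/membrane lens changes its focus can be sketched with the thin-lens approximation for a plano-convex fluid lens of fill-fluid refractive index n and membrane radius of curvature R, together with the usual dioptric expression for the vergence-accommodation mismatch between a vergence distance d_v and an accommodation (focal-plane) distance d_a:

P \approx \frac{n - 1}{R}, \qquad f = \frac{1}{P}, \qquad \Delta_{VA} = \left| \frac{1}{d_v} - \frac{1}{d_a} \right|

Under this approximation, deforming the membrane to reduce R increases the optical power P (shortens the focal length f), and keeping \Delta_{VA} small is what motivates offering two or more selectable focal planes.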


SUMMARY OF THE INVENTION

One embodiment is directed to a head-wearable viewing component for presenting virtual image information to a user, comprising: a head wearable frame; a left optical element for a left eye of the user, the left optical element coupled to the head wearable frame and comprising a left fluid/membrane lens configured to have an electromechanically adjustable focal length for the left eye of the user; a right optical element for a right eye of the user, the right optical element coupled to the head wearable frame and comprising a right fluid/membrane lens configured to have an electromechanically adjustable focal length for the right eye of the user; and a controller operatively coupled to the left optical element and right optical element and configured to provide one or more commands thereto to modify the focal lengths of the left optical element and right optical element. At least one of the left and right optical elements may comprise an actuation motor intercoupled between two frame members. The actuation motor may be configured to provide linear actuation. The actuation motor may be configured to provide rotational actuation. The two frame members may be coupled to the left fluid/membrane lens and configured to change the focal length for the user by moving relative to each other. The two frame members may be rotatable relative to each other to modify the focal length for the user. The two frame members may be displaceable relative to each other in a non-rotational manner. The actuation motor may comprise a stepper motor. The actuation motor may comprise a servo motor. The actuation motor may comprise a piezoelectric actuator. The actuation motor may comprise an ultrasonic motor. The actuation motor may comprise an electromagnetic actuator. The actuation motor may comprise a shape memory metal alloy actuator. The controller may be configured to command the left and right optical elements to adjust to one of two selectable predetermined focal lengths. The controller may be configured to command the left and right optical elements to adjust to one of three or more selectable predetermined focal lengths.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a system configuration featuring a head wearable component with left and right optical elements in accordance with the present invention.



FIGS. 2A-2C and FIG. 3 illustrate various aspects of a fluid lens system.



FIGS. 4A-4B illustrate various aspects of a single-motor compact fluid lens configuration in accordance with the present invention.



FIGS. 5, 6, and 7A-7B illustrate various aspects of multi-motor compact fluid lens configurations in accordance with the present invention.



FIGS. 8A and 8B illustrate various aspects of system configurations featuring a head wearable component with left and right optical elements in accordance with the present invention.





DETAILED DESCRIPTION

Referring to FIG. 4A, two main elements of one inventive variable focus assembly (50) are a fluid/membrane lens (36) interposed between two relatively rigid perimetric frame members (70, 72). In the depicted embodiment, between the fluid/membrane lens (36) and each of the rigid perimetric frame members (70, 72) is a rotatable adjustment perimetric member (52, 54) which may be controllably and rotatably adjusted relative to the rigid perimetric frame members (70, 72) using a compact actuation motor (64), such as a stepper motor, servo motor, ultrasonic motor (such as those comprising a plurality of piezoelectric material components comprising one or more piezoelectric materials, such as lead zirconate titanate, lithium niobate, or other single crystal materials, configured in a substantially circular arrangement and operatively coupled to a stator and rotor to produce rotary ultrasonic motor activation, or operatively coupled to a stator and slider to produce linear translation ultrasonic motor activation), or other electromechanical actuator. The motor may be coupled to the rigid perimetric frame members (56, 58) and also coupled to the rotatable adjustment perimetric members (52, 54) using a coupling assembly such as that depicted in FIG. 4B, featuring a shaft (62) coupled to a barrel member (60) which is coupled to a pin (61) that interfaces with the rotatable adjustment perimetric members (52, 54) as shown.

In one embodiment, the motor (64) may be configured to produce controlled linear motion of the shaft (62) and intercoupled barrel member (60) relative to the depicted cylindrical housing (63) of the motor (64), such that by virtue of the intercoupled pin (61), the rotatable adjustment perimetric members (52, 54) are rotated relative to the rigid perimetric frame members (70, 72) about an axis substantially parallel with a central axis (65) that is perpendicular to the center of the intercoupled fluid/membrane lens (36). In another embodiment, the motor (64) may be configured to produce rotational motion of the shaft (62) relative to the depicted cylindrical housing (63) of the motor (64), and the mechanical coupling between the shaft (62) and barrel member (60) may comprise a threaded interface, such that by virtue of the intercoupled pin (61), the rotatable adjustment perimetric members (52, 54) are likewise rotated relative to the rigid perimetric frame members (70, 72) about an axis substantially parallel with the central axis (65).

The mechanical interface between the rotatable adjustment perimetric members (52, 54) and the rigid perimetric frame members (70, 72) may be configured to comprise perimetrically located features, such as ramps, bumps, or step-ups, which cause the intercoupled fluid/membrane lens (36) to be squeezed or loosened with a substantially even perimetric loading, such as by three or more interfacial feature groupings (e.g., one at every 120 degrees around the 360 degree perimetric interfaces between the rotatable adjustment perimetric members (52, 54) and the rigid perimetric frame members (70, 72)). In other words, the fluid/membrane lens (36) may be loosened or tightened relatively evenly, preferably without substantial movement or reorientation of the image position relative to the plane of the lens. Further, the mechanical perimetric interfaces may be configured such that sequenced levels of tightening or loosening of the fluid/membrane lens (36) may be predictably obtained.
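As a rough geometric sketch only (the ramp pitch and the helper below are hypothetical and not taken from the patent), such a ramp-type interface can be pictured as converting a commanded rotation of an adjustment perimetric member into an axial compression of the fluid/membrane lens:

def axial_compression_mm(rotation_deg, ramp_rise_mm, ramp_span_deg):
    """Axial travel produced by rotating the adjustment member along a linear ramp feature.

    rotation_deg: commanded rotation of the adjustment perimetric member
    ramp_rise_mm: total rise of one ramp feature over its angular span
    ramp_span_deg: angular span of one ramp feature (e.g., repeated every 120 degrees)
    """
    rotation_deg = max(0.0, min(rotation_deg, ramp_span_deg))  # stay within one ramp feature
    return ramp_rise_mm * (rotation_deg / ramp_span_deg)

# Example with hypothetical numbers: a 0.5 mm ramp rise spread over 60 degrees of rotation.
print(axial_compression_mm(30.0, ramp_rise_mm=0.5, ramp_span_deg=60.0))  # 0.25 (mm)

The practical point is that discrete, repeatable amounts of compression, and therefore repeatable focal lengths, may be reached by commanding the motor to known positions.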
For example, in one embodiment the motor may be operatively coupled to a controller, such as a microcontroller or microprocessor, such that a desired or commanded tightening or loosening of the fluid/membrane lens (36), which may be correlated with a predetermined focal length for the fluid/membrane lens (36), may be reliably obtained, preferably with relatively low latency, via commands to the motor from the controller. One advantage of such a configuration as shown and described in reference to FIGS. 4A and 4B is that a single motor may be utilized to control the focal length of the fluid/membrane lens (36).
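A minimal control sketch, assuming a hypothetical calibration in which each selectable predetermined focal length has been mapped to an absolute motor position (the step values and the send_steps interface are illustrative, not part of the patent):

# Calibration table mapping selectable focal lengths (meters) to absolute motor positions (steps).
FOCAL_LENGTH_TO_STEPS = {
    0.5: 180,          # near focal plane
    2.0: 60,           # mid focal plane
    float("inf"): 0,   # far / relaxed lens
}

def command_focal_length(target_focal_length_m, send_steps):
    """Pick the nearest calibrated focal plane (compared in diopters) and command the motor."""
    target_diopters = 0.0 if target_focal_length_m == float("inf") else 1.0 / target_focal_length_m
    nearest = min(
        FOCAL_LENGTH_TO_STEPS,
        key=lambda f: abs((0.0 if f == float("inf") else 1.0 / f) - target_diopters),
    )
    steps = FOCAL_LENGTH_TO_STEPS[nearest]
    send_steps(steps)  # a stand-in for whatever driver call moves the actuation motor
    return steps

# Usage with a stand-in driver:
command_focal_length(0.6, send_steps=lambda s: print("move motor to step", s))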


Referring to FIGS. 5-7B, other embodiments are illustrated which are configured to provide substantially even perimetric loading (and thus focus adjustment without substantial movement or reorientation of image position) for a compact variable focus configuration featuring an intercoupled fluid/membrane lens (36).


Referring to FIG. 5, a compact variable focus assembly (68) features two rigid perimetric frame members (70, 72) and an intercoupled fluid/membrane lens (36), with substantially even perimetric loading of the fluid/membrane lens (36) provided by a plurality of electromagnetic actuators (76, 77, 78), which may be utilized to controllably urge or repel the two rigid perimetric frame members (70, 72) relative to each other to provide controllable focal adjustment. The electromagnetic actuators (76, 77, 78) preferably are placed equidistantly from each other perimetrically (i.e., about 120 degrees from each other) to provide even loading with a 3-actuator configuration as shown. Other embodiments may include more actuators, such as four actuators at 90 degrees apart, etc. In one embodiment, each of the electromagnetic actuators (76, 77, 78) may be operatively coupled between the perimetric frame members (70, 72) such that upon actuation, they urge or repel the perimetric frame members (70, 72) relative to each other with linear actuation; in another embodiment each of the electromagnetic actuators (76, 77, 78) may be operatively coupled between the perimetric frame members (70, 72) such that upon actuation, they cause rotational motion of an intercoupling member, such as an intercoupling member similar to the shaft member (62) of the assembly of FIG. 4B, which may be interfaced with a threaded member, such as a threaded member similar to the barrel member (60) of the assembly of FIG. 4B which may be coupled to one of the perimetric frame members (70, 72), for example, to be converted to linear motion to urge or repel the perimetric frame members (70, 72) relative to each other. In other words, the electromagnetic actuators (76, 77, 78) may be configured to produce either linear or rotational actuation motion, and this linear or rotational actuation motion may be utilized to urge or repel the two rigid perimetric frame members (70, 72) relative to each other to provide controllable focal adjustment.


Preferably, one or more predictable levels of tightening or loosening of the fluid/membrane lens (36) may be obtained through operation of the electromagnetic actuators (76, 77, 78). For example, in one embodiment the electromagnetic actuators (76, 77, 78) may be operatively coupled to a controller, such as a microcontroller or microprocessor, such that a desired or commanded tightening or loosening of the fluid/membrane lens (36), which may be correlated with a predetermined focal length for the fluid/membrane lens (36), may be reliably obtained, preferably with relatively low latency, via commands to the electromagnetic actuators (76, 77, 78) from the controller.
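One way to picture the even perimetric loading is that the controller issues the same displacement command to each of the three actuators at once; the sketch below is a hypothetical illustration of that idea (the actuator interface is assumed, not defined by the patent):

from typing import Callable, Sequence

def set_even_loading(actuators: Sequence[Callable[[float], None]], displacement_um: float) -> None:
    """Drive every perimetric actuator to the same axial displacement so the
    fluid/membrane lens is squeezed or released evenly around its perimeter."""
    for drive in actuators:
        drive(displacement_um)

# Three stand-in actuators spaced roughly 120 degrees apart around the lens perimeter.
actuators = [lambda um, i=i: print(f"actuator {i}: {um} um") for i in range(3)]
set_even_loading(actuators, 12.5)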


Referring to FIG. 6, a compact variable focus assembly (74) features two rigid perimetric frame members (70, 72) and an intercoupled fluid/membrane lens (36), with substantially even perimetric loading of the fluid/membrane lens (36) provided by a plurality of shape memory metal alloy actuators (80, 82, 84), which may be utilized to controllably urge or repel the two rigid perimetric frame members (70, 72) relative to each other to provide controllable focal adjustment. The shape memory metal alloy actuators (80, 82, 84) preferably are placed equidistantly from each other perimetrically (i.e., about 120 degrees from each other) to provide even loading with a 3-actuator configuration as shown. Other embodiments may include more actuators, such as four actuators at 90 degrees apart, etc. In one embodiment, each of the shape memory metal alloy actuators (80, 82, 84) may be operatively coupled between the perimetric frame members (70, 72) such that upon actuation, they urge or repel the perimetric frame members (70, 72) relative to each other with linear actuation; in another embodiment each of the shape memory metal alloy actuators (80, 82, 84) may be operatively coupled between the perimetric frame members (70, 72) such that upon actuation, they cause rotational motion of an intercoupling member, such as an intercoupling member similar to the shaft member (62) of the assembly of FIG. 4B, which may be interfaced with a threaded member, such as a threaded member similar to the barrel member (60) of the assembly of FIG. 4B which may be coupled to one of the perimetric frame members (70, 72), for example, to be converted to linear motion to urge or repel the perimetric frame members (70, 72) relative to each other. In other words, the shape memory metal alloy actuators (80, 82, 84) may be configured to produce either linear or rotational actuation motion, and this linear or rotational actuation motion may be utilized to urge or repel the two rigid perimetric frame members (70, 72) relative to each other to provide controllable focal adjustment.


Preferably, one or more predictable levels of tightening or loosening of the fluid/membrane lens (36) may be obtained through operation of the shape memory metal alloy actuators (80, 82, 84). For example, in one embodiment the shape memory metal alloy actuators (80, 82, 84) may be operatively coupled to a controller, such as a microcontroller or microprocessor, such that a desired or commanded tightening or loosening of the fluid/membrane lens (36), which may be correlated with a predetermined focal length for the fluid/membrane lens (36), may be reliably obtained, preferably with relatively low latency, via commands to the shape memory metal alloy actuators (80, 82, 84) from the controller.


Referring to FIGS. 7A and 7B, a compact variable focus assembly (76) features two rigid perimetric frame members (70, 72) and an intercoupled fluid/membrane lens (36), with substantially even perimetric loading of the fluid/membrane lens (36) provided by a plurality of piezoelectric actuators (86, 88, 90), which may be utilized to controllably urge or repel the two rigid perimetric frame members (70, 72) relative to each other to provide controllable focal adjustment. Each of the piezoelectric actuators (86, 88, 90) may comprise one or more piezoelectric cells configured to produce a given load and displacement change upon actuation, or may comprise a so-called “ultrasound” or “ultrasonic” actuator configuration (such as those comprising a plurality of piezoelectric material components comprising one or more piezoelectric materials, such as lead zirconate titanate, lithium niobate, or other single crystal materials, configured in a substantially circular arrangement and operatively coupled to a stator and rotor to produce rotary ultrasonic motor activation, or operatively coupled to a stator and slider to produce linear translation ultrasonic motor activation). The piezoelectric actuators (86, 88, 90) preferably are placed equidistantly from each other perimetrically (i.e., about 120 degrees from each other) to provide even loading with a 3-actuator configuration as shown. Other embodiments may include more actuators, such as four actuators at 90 degrees apart, etc. In one embodiment, each of the piezoelectric actuators (86, 88, 90) may be operatively coupled between the perimetric frame members (70, 72) such that upon actuation, they urge or repel the perimetric frame members (70, 72) relative to each other with linear actuation; in another embodiment each of the piezoelectric actuators (86, 88, 90) may be operatively coupled between the perimetric frame members (70, 72) such that upon actuation, they cause rotational motion of an intercoupling member, such as an intercoupling member similar to the shaft member (62) of the assembly of FIG. 4B, which may be interfaced with a threaded member, such as a threaded member similar to the barrel member (60) of the assembly of FIG. 4B which may be coupled to one of the perimetric frame members (70, 72), for example, to be converted to linear motion to urge or repel the perimetric frame members (70, 72) relative to each other. In other words, the piezoelectric actuators (86, 88, 90) may be configured to produce either linear or rotational actuation motion, and this linear or rotational actuation motion may be utilized to urge or repel the two rigid perimetric frame members (70, 72) relative to each other to provide controllable focal adjustment.


Preferably, one or more predictable levels of tightening or loosening of the fluid/membrane lens (36) may be obtained through operation of the piezoelectric actuators (86, 88, 90). For example, in one embodiment the piezoelectric actuators (86, 88, 90) may be operatively coupled to a controller, such as a microcontroller or microprocessor, such that a desired or commanded tightening or loosening of the fluid/membrane lens (36), which may be correlated with a predetermined focal length for the fluid/membrane lens (36), may be reliably obtained, preferably with relatively low latency, via commands to the piezoelectric actuators (86, 88, 90) from the controller.


Referring to FIG. 7B, depending upon how much mechanical throw is needed in each of the piezoelectric actuators for a given variable focus lens configuration, each of the piezoelectric actuators may comprise an assembly of a series of individual piezoelectric devices (92, 94, etc.) intercoupled such that the activation of each provides a given mechanical throw, which is added to those of the others in the assembly to produce an overall assembly throw suitable for the application.
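Since the throws of series-coupled piezoelectric devices add, sizing such a stack reduces to dividing the required throw by the per-device throw; the numbers below are placeholders rather than values from the patent:

import math

def devices_needed(required_throw_um, per_device_throw_um):
    """Number of series-coupled piezoelectric devices whose summed throw meets the requirement."""
    return math.ceil(required_throw_um / per_device_throw_um)

# Example: a hypothetical 40 um required throw with 3 um available per device.
n = devices_needed(40.0, 3.0)
print(n, "devices give", n * 3.0, "um of total throw")  # 14 devices -> 42.0 um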


Referring to FIG. 8A, an assembly configuration is illustrated featuring componentry such as discussed above in reference to FIGS. 4A and 4B, with a head wearable component (2) comprising a frame (130) mountable on a user's head so that the user's left (100) and right (102) eyes are exposed to the optical elements (20; here a left optical element 110 and right optical element 112 are separately labeled; these optical elements feature left and right fluid/membrane lenses 36 and 37, respectively). Left (114) and right (116) motors are configured to electromechanically adjust the focal length of each optical element, as described above in reference to FIGS. 4A and 4B, for example. A controller (108), such as a microcontroller or microprocessor, may be utilized to issue commands to the motors (114, 116) to adjust the focal lengths. In various embodiments, cameras (104, 106) may be coupled to the frame (130) and configured to capture data pertaining to the positions of each of the eyes (100, 102); this information may be utilized by the controller (108) in determining how to command the motors (114, 116) in terms of desired focal length. For example, if it is determined that the user is focused on a close-in object relative to the wearable component (2), the system may be configured to have the controller utilize the motors to switch to a closer focal length. FIG. 8B illustrates a configuration analogous to that of FIG. 8A, but with an electromechanical actuation configuration akin to those described in reference to FIGS. 5-7B, wherein a plurality of motors or actuators (118, 120, 122; 124, 126, 128) may be operatively coupled to a controller (108) and utilized to adjust the focal length of the optical elements (110, 112).
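A hedged sketch of the kind of logic the controller (108) might run with the eye-position data from the cameras (104, 106): estimate the fixation distance from the convergence of the two eyes and then select between predetermined focal lengths. The interpupillary distance, threshold, and function names below are assumptions for illustration only:

import math

IPD_M = 0.063  # assumed interpupillary distance in meters

def fixation_distance_m(left_gaze_deg, right_gaze_deg):
    """Estimate fixation distance from the inward (convergence) angles of the two eyes,
    measured from straight ahead; returns infinity for parallel gaze."""
    convergence = math.radians(left_gaze_deg + right_gaze_deg)
    if convergence <= 0.0:
        return float("inf")
    # Triangulation: gaze rays from eyes separated by IPD meet at this distance on the midline.
    return (IPD_M / 2.0) / math.tan(convergence / 2.0)

def choose_focal_plane(distance_m, near_threshold_m=1.0):
    """Two-plane policy: switch to the near focal length when the user fixates close in."""
    return "near" if distance_m < near_threshold_m else "far"

d = fixation_distance_m(3.0, 3.0)   # both eyes rotated ~3 degrees inward
print(round(d, 2), choose_focal_plane(d))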


Various example embodiments of the invention are described herein. Reference is made to these examples in a non-limiting sense. They are provided to illustrate more broadly applicable aspects of the invention. Various changes may be made to the invention described and equivalents may be substituted without departing from the true spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process act(s) or step(s) to the objective(s), spirit or scope of the present invention. Further, as will be appreciated by those with skill in the art, each of the individual variations described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the present inventions. All such modifications are intended to be within the scope of claims associated with this disclosure.


The invention includes methods that may be performed using the subject devices. The methods may comprise the act of providing such a suitable device. Such provision may be performed by the end user. In other words, the “providing” act merely requires the end user obtain, access, approach, position, set-up, activate, power-up or otherwise act to provide the requisite device in the subject method. Methods recited herein may be carried out in any order of the recited events which is logically possible, as well as in the recited order of events.


Example aspects of the invention, together with details regarding material selection and manufacture have been set forth above. As for other details of the present invention, these may be appreciated in connection with the above-referenced patents and publications as well as generally known or appreciated by those with skill in the art. The same may hold true with respect to method-based aspects of the invention in terms of additional acts as commonly or logically employed.


In addition, though the invention has been described in reference to several examples optionally incorporating various features, the invention is not to be limited to that which is described or indicated as contemplated with respect to each variation of the invention. Various changes may be made to the invention described and equivalents (whether recited herein or not included for the sake of some brevity) may be substituted without departing from the true spirit and scope of the invention. In addition, where a range of values is provided, it is understood that every intervening value, between the upper and lower limit of that range and any other stated or intervening value in that stated range, is encompassed within the invention.


Also, it is contemplated that any optional feature of the inventive variations described may be set forth and claimed independently, or in combination with any one or more of the features described herein. Reference to a singular item includes the possibility that there are plural of the same items present. More specifically, as used herein and in claims associated hereto, the singular forms “a,” “an,” “said,” and “the” include plural referents unless specifically stated otherwise. In other words, use of the articles allows for “at least one” of the subject item in the description above as well as in claims associated with this disclosure. It is further noted that such claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as “solely,” “only,” and the like in connection with the recitation of claim elements, or use of a “negative” limitation.


Without the use of such exclusive terminology, the term “comprising” in claims associated with this disclosure shall allow for the inclusion of any additional element—irrespective of whether a given number of elements are enumerated in such claims, or the addition of a feature could be regarded as transforming the nature of an element set forth in such claims. Except as specifically defined herein, all technical and scientific terms used herein are to be given as broad a commonly understood meaning as possible while maintaining claim validity.


The breadth of the present invention is not to be limited to the examples provided and/or the subject specification, but rather only by the scope of claim language associated with this disclosure.

Claims
  • 1. A head-wearable viewing component for presenting virtual image information to a user, comprising: a. a head wearable frame; b. a left optical element for a left eye of the user, the left optical element coupled to the head wearable frame and comprising two left perimetric frame members, a left electromechanical device connected between the two left perimetric frame members and a left fluid/membrane lens between the two left perimetric frame members and having an electromechanically adjustable focal length for the left eye of the user while maintaining image stability when the two left perimetric frame members are moved relative to one another by the left electromechanical device; c. a right optical element for a right eye of the user, the right optical element coupled to the head wearable frame and comprising two right perimetric frame members, a right electromechanical device connected between the two right perimetric frame members and a right fluid/membrane lens between the two right perimetric frame members and having an electromechanically adjustable focal length for the right eye of the user while maintaining image stability when the two right perimetric frame members are moved relative to one another by the right electromechanical device; and d. a controller operatively coupled to the left electromechanical device and the right electromechanical device and configured to provide one or more commands thereto to modify the focal lengths of the left fluid/membrane lens and the right fluid/membrane lens.
  • 2. The head-wearable viewing component of claim 1, wherein at least one of the left and right electromechanical devices comprises an actuation motor intercoupled between the two perimetric frame members of the respective optical element.
  • 3. The head-wearable viewing component of claim 2, wherein the actuation motor is configured to provide linear actuation.
  • 4. The head-wearable viewing component of claim 2, wherein the actuation motor is configured to provide rotational actuation.
  • 5. The head-wearable viewing component of claim 2, wherein the two perimetric frame members of the respective optical element are coupled to the fluid/membrane lens of the respective optical element and configured to change the focal length for the user by linearly moving relative to each other.
  • 6. The head-wearable viewing component of claim 2, wherein the two perimetric frame members of the respective optical element are rotatable relative to each other to modify the focal length for the user.
  • 7. The head-wearable viewing component of claim 2, wherein the two perimetric frame members of the respective optical element are displaceable relative to each other in a non-rotational manner.
  • 8. The head-wearable viewing component of claim 2, wherein the actuation motor comprises a stepper motor.
  • 9. The head-wearable viewing component of claim 2, wherein the actuation motor comprises a servo motor.
  • 10. The head-wearable viewing component of claim 2, wherein the actuation motor comprises a piezoelectric actuator.
  • 11. The head-wearable viewing component of claim 2, wherein the actuation motor comprises an ultrasonic motor.
  • 12. The head-wearable viewing component of claim 2, wherein the actuation motor comprises an electromagnetic actuator.
  • 13. The head-wearable viewing component of claim 2, wherein the actuation motor comprises a shape memory metal alloy actuator.
  • 14. The head-wearable viewing component of claim 1, wherein the controller is configured to command the left and right optical elements to adjust to one of two selectable predetermined focal lengths.
  • 15. The head-wearable viewing component of claim 1, wherein the controller is configured to command the left and right optical elements to adjust to one of three or more selectable predetermined focal lengths.
RELATED APPLICATION DATA

The present application claims the benefit under 35 U.S.C. § 119 to U.S. Provisional Application Ser. No. 62/678,234 filed May 30, 2018. The foregoing application is hereby incorporated by reference into the present application in its entirety.

Related Publications (1)
Number Date Country
20190369383 A1 Dec 2019 US
Provisional Applications (1)
Number Date Country
62678234 May 2018 US