Radar head pose localization

Information

  • Patent Grant
  • Patent Number
    11,885,871
  • Date Filed
    Friday, May 24, 2019
  • Date Issued
    Tuesday, January 30, 2024
Abstract
An augmented reality device has a radar system that generates radar maps of locations of real-world objects. An inertial measurement unit detects measurement values such as acceleration, gravitational force and inclination ranges. The values from the measurement unit drift over time. The radar maps are processed to determine fingerprints and the fingerprints are combined with the values from the measurement unit to form a pose estimate that is stored. Pose estimates at different times are compared to determine drift of the measurement unit. A measurement unit filter is adjusted to correct for the drift.
Description
BACKGROUND OF THE INVENTION
1). Field of the Invention

This invention relates to an augmented reality device and to a method of displaying rendered content.


2). Discussion of Related Art

Modern computing and display technologies have facilitated development of visual perception devices such as “virtual reality” viewing devices. A virtual reality viewing device may be a wearable device that presents the user with two images, one for the left eye and one for the right eye. Objects in the images may differ from one another in a manner that allows the brain to process the objects as a three-dimensional object. When the images constantly change, movement in three dimensions can be simulated. A virtual reality viewing device typically involves presentation of digital or virtual image information without transparency to other real-world objects.


Other visual perception devices, so-called “augmented reality” viewing devices, usually include technology that allows for the presentation of digital and virtual image information as an augmentation to visualization of the actual world around the user. An augmented reality viewing device may, for example, have one or more transparent eyepieces that allow the user to see real-world objects behind the eyepieces. Such an eyepiece can serve as a waveguide through which laser light propagates from a laser projector towards an eye of the user. A laser light pattern created by the projector becomes visible on the retina of the eye. The retina of the eye then receives light from the real-world objects behind the eyepiece and laser light from the projector. In the perception of the user, real-world objects are thus augmented with image data from the projector.


Augmented reality devices often have technology that permits an object to remain in a stationary position relative to real-world objects, as perceived by the user, even as the user moves their head. If the user, for example, rotates their head to the right, the rendered object has to rotate to the left within the view of the user together with the real-world objects. Movement of the augmented reality device may be tracked through a measurement device such as an inertial measurement unit (IMU) so that the position of the object can be adjusted via the projector.


SUMMARY OF THE INVENTION

In some embodiments, the invention provides an augmented reality device including a head-mountable frame, a radar system, a measurement unit, a measurement unit filter, a sensor fusion module, a rendering module, an eyepiece and a projector. The radar system generates first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times in a slow domain. The measurement unit may be secured to the frame and detects first and second measurement values at the first and second times in the slow domain, each measurement value being indicative of a position and movement of the measurement unit. The measurement unit filter may be connected to the measurement unit. The sensor fusion module may be connected to the radar system and the measurement unit and may be operable to (i) determine first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, (ii) determine drift of the measurement unit by comparing the first pose estimate with the second pose estimate, and (iii) adjust the measurement unit filter to correct for the drift. The rendering module determines a desired position of a rendered object based on the second pose estimate. The eyepiece may be secured to the frame. The projector may be secured to the frame and may be operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.


In some embodiments, the invention also provides another augmented reality device including a head-mountable frame, a radar system, a measurement unit, a measurement unit filter, a sensor fusion module, a rendering module, an eyepiece, and a projector. The radar system includes at least a first radar device that has a first radar transmitter secured to the frame and transmitting a radio wave at first and second times in a slow domain and a first radar receiver secured to the frame and detecting the radio waves after the radio waves are reflected from a surface, a radar tracking module connected to the first radar receiver and determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively, a radar mapping module connected to the radar tracking module and generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain, and an image processing module connected to the radar mapping module and calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively. The measurement unit may be secured to the frame and may detect first and second measurement values at the first and second times in the slow domain, each measurement value being indicative of a position and movement of the measurement unit. The measurement unit filter may be connected to the measurement unit. The sensor fusion module may be connected to the image processing module and may be operable to (i) determine first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, (ii) determine drift of the measurement unit by comparing the first pose estimate with the second pose estimate, and (iii) adjust the measurement unit filter to correct for the drift. The rendering module may determine a desired position of a rendered object based on the second pose estimate. The eyepiece may be secured to the frame. The projector may be secured to the frame and may be operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.


In some embodiments, the invention further provides a further augmented reality device including a head-mountable frame, a radar system, a measurement unit, a measurement unit filter, a sensor fusion module, a rendering module, an eyepiece, and a projector. The radar system may be secured to the frame and may include a radar transmitter secured to the frame that initiates execution of a first radar cycle by transmitting a first radio wave, a radar receiver secured to the frame and detecting the first radio wave after the first radio wave is reflected from a surface, a radar tracking module connected to the radar receiver and determining a first time between the transmission and the detection of the first radio wave, a radar mapping module connected to the radar tracking module and generating a first radar map of locations of real-world objects relative to the user based at least on the first time between the transmission and the detection of the first radio wave, and an image processing module connected to the radar mapping module and calculating a first set of radar fingerprints based on the locations of the real-world objects in the first radar map to complete the first radar cycle. The measurement unit may be secured to the frame and may detect a first measurement value indicative of a position and movement of the measurement unit. The sensor fusion module is connected to the image processing module. The sensor fusion module may determine a first pose estimate of the first set of radar fingerprints relative to the first measurement value. The radar system executes a second radar cycle, which may include transmitting a second radio wave, detecting the second radio wave after the second radio wave is reflected from the surface, determining a second time between the transmission and the detection of the second radio wave, generating a second radar map of locations of real-world objects relative to the user based at least on the second time between the transmission and the detection of the second radio wave and calculating a second set of radar fingerprints based on the locations of the real-world objects in the second radar map. The measurement unit detects a second measurement value indicative of a position and movement of the measurement unit. The sensor fusion module may determine a second pose estimate of the second set of radar fingerprints relative to the second measurement value, may determine drift of the measurement unit by comparing the first pose estimate with the second pose estimate, and may adjust a measurement unit filter connected to the measurement unit to correct for the drift. The rendering module may determine a desired position of a rendered object based on the second pose estimate. The eyepiece may be secured to the frame. The projector may be secured to the frame and may be operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.


In some embodiments, the invention also provides a method of displaying rendered content. A head-mountable frame may be attached to a head of a user. A plurality of radar cycles may be executed to generate first and second radar maps of locations of real-world objects relative to the user at first and second times in a slow domain, and first and second sets of radar fingerprints may be calculated from the first and second radar maps, respectively. First and second measurement values may be detected at the first and second times in the slow domain with a measurement unit secured to the frame, each measurement value being indicative of a position and movement. First and second pose estimates may be determined, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value. A drift of the measurement unit may be determined by comparing the first pose estimate with the second pose estimate. A measurement unit filter that may be connected to the measurement unit may be adjusted to correct for the drift. A desired position of a rendered object may be determined based on the second pose estimate. Data may be converted into light to generate the rendered object and the rendered object may be displayed in the desired position to the user through an eyepiece secured to the head-mountable frame.


In some embodiments, the invention provides another method of displaying rendered content. A head-mountable frame may be attached to a head of a user. A plurality of radar cycles may be executed, including transmitting a radio wave at first and second times in a slow domain, detecting the radio waves after the radio waves are reflected from a surface, determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively, generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain and calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively. First and second measurement values may be detected at the first and second times in the slow domain with a measurement unit secured to the frame, each measurement value being indicative of a position and movement. First and second pose estimates may be determined, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value. Drift of the measurement unit may be determined by comparing the first pose estimate with the second pose estimate, and a measurement unit filter that may be connected to the measurement unit may be adjusted to correct for the drift. A desired position of a rendered object may be determined based on the second pose estimate. Data may be converted into light to generate the rendered object and the rendered object may be displayed in the desired position to the user through an eyepiece secured to the head-mountable frame.


In some embodiments, the invention provides a further method of displaying rendered content. A head-mountable frame may be attached to a head of a user. A first radar cycle may be executed that may include transmitting a first radio wave, detecting the first radio wave after the first radio wave is reflected from a surface, determining a first time between the transmission and the detection of the first radio wave, generating a first radar map of locations of real-world objects relative to the user based at least on the first time between the transmission and the detection of the first radio wave and calculating a first set of radar fingerprints based on the locations of the real-world objects in the first radar map. A first measurement value indicative of a position and movement may be detected with a measurement unit secured to the frame. A first pose estimate may be determined of the first set of radar fingerprints relative to the first measurement value. A second radar cycle may be executed that may include transmitting a second radio wave, detecting the second radio wave after the second radio wave is reflected from the surface, determining a second time between the transmission and the detection of the second radio wave, generating a second radar map of locations of real-world objects relative to the user based at least on the second time between the transmission and the detection of the second radio wave and calculating a second set of radar fingerprints based on the locations of the real-world objects in the second radar map. A second measurement value indicative of a position and movement may be detected with the measurement unit secured to the frame. A second pose estimate may be determined of the second set of radar fingerprints relative to the second measurement value. Drift of the measurement unit may be determined by comparing the first pose estimate with the second pose estimate. A measurement unit filter that may be connected to the measurement unit may be adjusted to correct for the drift. A desired position of a rendered object may be determined based on the second pose estimate. Data may be converted into light to generate the rendered object and the rendered object may be displayed in the desired position to the user through an eyepiece secured to the head-mountable frame.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention is further described by way of example with reference to the accompanying drawings, wherein:



FIG. 1 is a top plan view of an augmented reality device, according to an embodiment of the invention, and a user wearing the augmented reality device;



FIG. 2 is a perspective view illustrating the capturing of visual data for purposes of generating a visual map;



FIG. 3 is a block diagram of further components of the augmented reality device;



FIG. 4 is a flow chart illustrating how a radar system of the augmented reality device is used to correct for drift of a measurement unit of the augmented reality device;



FIG. 5 is a perspective view illustrating the use of a radar system to detect surfaces around the augmented reality device;



FIG. 6 is a time chart illustrating how a radar map is created;



FIG. 7 is a radar map that illustrates objects that are detected to determine fingerprints;



FIG. 8 is a block diagram illustrating how corrections are made to measurement unit data and radar sensor array data;



FIG. 9 is a flow chart illustrating an initiation process of the augmented reality device;



FIG. 10 illustrates a rendering, as seen by the user using the augmented reality device; and



FIG. 11 is a block diagram of a machine in the form of a computer that can find application in the system of the present invention, in accordance with one embodiment of the invention.





DETAILED DESCRIPTION OF THE INVENTION

An augmented reality device has a radar system that generates radar maps of locations of real-world objects. An inertial measurement unit detects measurement values such as acceleration, gravitational force and inclination ranges. The values from the measurement unit drift over time. The radar maps are processed to determine fingerprints and the fingerprints may be combined with the values from the measurement unit to form a pose estimate that is stored. Pose estimates at different times may be compared to determine drift of the measurement unit. A measurement unit filter may be adjusted to correct for the drift.


Positive terms such as “is”, “are”, “have”, etc. are preferred herein as opposed to optional terms such as “may be” or “may have”, etc. Positive terms are used to comply with the requirements for (i) describing the best mode, (ii) providing a description that enables one of ordinary skill in the art to make the invention and (iii) providing an example. It should however be understood that the specific details that are described using positive terms may be modified without departing from the scope and spirit of the invention as more broadly defined in the claims.



FIG. 1 illustrates an augmented reality device 10, according to an embodiment of the invention, and a head of a user 12. The augmented reality device 10 may include a head-mountable frame 14, a radar system 16, a measurement unit 18, first and second eyepieces 20 and 22, first and second projectors 24 and 26, and a visual camera 28 that may be mounted to the frame 14.


The radar system 16 may include first, second and third radar devices 30, 32 and 34. Each radar device 30, 32 and 34 may be mounted to the frame 14. Each radar device 30, 32 and 34 has a particular orientation that allows the radar device 30, 32 or 34 to transmit and receive radio waves in a desired direction.


A radar system may comprise one or more antennas for transmitting and/or receiving signals. The radar system may have a chip with a fixed number of antennas attached to the chip. In order to obtain the desired signal number and/or directionality, more than one chip may be used, and each chip may be placed as needed to direct the signal to the desired location. Alternatively, the radar system may have a single chip with one or more antennas that may be placed in different locations and pointed in different directions in order to obtain the desired radar signals.


The measurement unit 18 may be an inertial measurement unit (IMU). The measurement unit 18 may include one or more accelerometers, a gyroscope and a magnetometer. As will be commonly understood by one of ordinary skill in the art, an accelerometer measures acceleration, the acceleration may be integrated to determine velocity, and the velocity may be integrated to determine position. The gyroscope may determine changes in angular orientation. The magnetometer may determine the direction of gravitational force and may determine an “attitude” of the measurement unit relative to the direction of gravitational force.
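By way of illustration only, the following Python sketch shows why this double integration drifts. The sampling rate, bias value and function names are assumptions for the example, not values from the patent.

```python
import numpy as np

def integrate_imu(accel, dt, v0=0.0, p0=0.0):
    # Dead-reckon velocity and position from one axis of gravity-corrected
    # accelerometer samples (m/s^2). Any constant sensor bias grows linearly
    # in velocity and quadratically in position -- the drift that the radar
    # fingerprints are later used to correct.
    velocity = v0 + np.cumsum(accel) * dt       # first integration
    position = p0 + np.cumsum(velocity) * dt    # second integration
    return velocity, position

# A small 0.05 m/s^2 bias alone produces roughly 2.5 m of position error
# after 10 s of dead reckoning at 100 Hz (hypothetical numbers):
bias_only = np.full(1000, 0.05)
_, position = integrate_imu(bias_only, dt=0.01)
print(f"position error after 10 s: {position[-1]:.2f} m")
```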


The projectors 24 and 26 may be operable to convert data into light (e.g. laser light, or LED light) and to generate a rendered object. The projectors 24 and 26 may have lasers that are oriented to direct laser light into the eyepieces 20 and 22, respectively. The eyepieces 20 and 22 may be waveguides that may be also transparent.


Each one of the radar devices 30, 32 or 34 has a field of view of approximately 30° and an operating range of 0.1 to 4 meters. The radar devices 30, 32 and 34 are typically operated to transmit and receive radio waves. The radio waves may be frequency modulated continuous waves. The radio waves may be millimeter waves (e.g. in the 60 GHz band). Each radar device 30, 32 and 34 may have more than one transmission (TX) channel (e.g., two TX channels) and more than one reception (RX) channel (e.g., four RX channels), with some channels measuring redundant signals.


The visual camera 28 captures grayscale images and depth maps at approximately 60 Hz. The visual camera 28 has a field-of-view of approximately 90° and an operating range of approximately 1 to 4 meters.



FIG. 2 illustrates the use of the visual system in more detail. The visual system captures objects 38 and a depth sensor may determine distances to the objects 38 from the augmented reality device 10. The visual camera 28 captures images of the real world objects 38 on a continual basis. The user's head pose and position can be determined by processing imagery from the visual system using a Simultaneous Localization and Mapping (SLAM) and visual odometry procedure. The dashed lines represent further processing of the images on a continual basis. Such continual processing of the images provides data that indicates movement of the augmented reality device 10 relative to the real world objects 38. Because the depth sensor and the gravity sensor determine the locations of the real world objects 38 relative to gravitational force, and the visual camera 28 detects movement of the augmented reality device 10 relative to the real world objects 38, the movement of the augmented reality device 10 relative to gravitational force can also be calculated. Other methods of mapping a three-dimensional environment may be employed, for example using one or more cameras that are located in stationary positions within a room. However, the integration of the depth sensor and the visual system within the augmented reality device 10 provides for a more mobile application.


As shown in FIG. 3, the augmented reality device 10 may include an on-device software stack 40, a cloud database 42 and head-mounted device hardware 44.


The head-mounted device hardware 44 includes the first, second and third radar devices 30, 32 and 34, the measurement unit 18, and the visual camera 28. Each radar device 30, 32 or 34 has a respective radar transmitter 46 and a respective radar receiver 48. The measurement unit 18 may include a number of measurement devices, including an accelerometer 50, a gyroscope 52 and a magnetometer 54.


All the components of the head-mounted device hardware 44 are directly or indirectly mounted to the frame 14 and are therefore stationary relative to the frame 14. The on-device software stack 40 may include a user interface 60, an application interface 62, a rendering module 64, a real-time tracking module 66, a mapping and map management module 68, a sensor fusion module 70, an image processing module 72 and a hardware abstraction layer (HAL) 74.


The real-time tracking module 66 may include a radar tracking module 76, a radar mapping module 78, a visual tracking module 80 and a visual mapping module 82. The mapping and map management module 68 may include at least one radar map 84, a visual map 86, and a map merge and optimization module 88.


The components of the on-device software stack 40 are shown as separate modules. The modules may, however, be connected to one another in the form of calls and subroutines within a computer program. The components of the on-device software stack 40 may be connected through the hardware abstraction layer 74 to components of the head-mounted device hardware 44. Of significance is that the radar tracking module 76 may be connected to the radar transmitters 46 and radar receivers 48 of the radar devices 30, 32 and 34 and that the radar mapping module 78 may be connected to the radar tracking module 76. Of significance also is that the visual tracking module 80 may be connected to the visual camera 28 and the visual mapping module 82 may be connected to the visual tracking module 80.


The radar mapping module 78 creates the radar map 84 and the visual tracking module 80 creates the visual map 86. The image processing module 72 reads the radar map 84 and the visual map 86 to determine features within the maps. Further radar maps 90 and visual maps 92 may be located within the cloud database 42. The cloud database 42 may be connected to the mapping and map management module 68 so that the radar maps 90 and the visual maps 92 can be stored in or be downloaded from the cloud database 42. The sensor fusion module 70 may be connected to the image processing module 72 and to the measurement devices of the measurement unit 18.


In use, the user 12 secures the frame 14 to their head. The frame 14 has a bridge portion 96 that rests on a nose of the user 12 and temple pieces 98 that extend over the ears of the user 12 and secure to the ears of the user 12 or secure around the back of their head. The eyepieces 20 and 22 may be located in front of eyes of the user 12. All components mounted to the frame 14, including the radar system 16 and the measurement unit 18 may be stationary relative to the head of the user 12 and move together with the head of the user 12 when the user 12 moves their head.


The visual camera 28 continually captures a grayscale image of objects in front of the user 12. The visual tracking module 80 controls the visual camera 28. The visual mapping module 82 generates the visual map 86 based on the grayscale image received from the visual camera 28. The image processing module 72 processes the visual map 86 to determine objects within the visual map 86. The latest visual map 86 may be stored as the visual map 86 within the mapping and map management module 68 and earlier maps may be stored as the visual maps 92 within the cloud database 42. A “visual system” may be thus provided by the visual camera 28, visual tracking module 80, visual mapping module 82, the visual maps 86 and 92 and the image processing module 72.



FIG. 4 illustrates subsequent functioning of the augmented reality device 10 of FIGS. 1 and 3. At 110, the world may be illuminated via the TX channels. Referring to FIG. 3, each one of the radar transmitters 46 transmits a radio wave. The radio wave may be reflected from one or more surfaces and may be then detected by the respective radar receiver 48. The radio waves have frequencies on the order of 60 GHz, which allows for very accurate distance measurements to the surface or surfaces. In addition, multiple radio waves may be transmitted and detected at a frequency in the slow domain of approximately 100 Hz to 10 kHz. The frequency in the slow domain may be high enough to ensure a very fast sampling rate of distance measurements.


As shown in FIG. 1, the radar device 32 may be directed to the front of the user 12 to detect surfaces in front of the user 12. The radar device 30 may be directed to the left of the user 12. Although not clearly shown in FIG. 1, the radar device 34 may be directed in a vertical direction, i.e. out of the paper, to detect the location of a ceiling above the user 12. In the given embodiment, the radar devices 30, 32 and 34 are all located in the same plane, although, in another embodiment, they may be located in different planes. Another embodiment may use more than three radar devices to reduce search complexity, although three radar devices are optimal because they provide for a large number of degrees of freedom to be detected without unnecessary complexity in design.


The radar devices 30, 32 and 34 may be under the control of the radar tracking module 76. The radar tracking module 76 provides the frequency in the slow domain to the radar transmitters 46. The radar tracking module 76 also samples the radio waves through the radar receivers 48 in the fast domain. As is commonly understood in the field of radar engineering, a radio wave will take longer between transmission and reception when it is reflected from a surface that is farther away than from a closer surface. The time between transmission and reception may be in the fast domain and may be an indicator of the distance of the surface from which the radio wave is reflected. Each radar receiver 48 has an array of detectors that can detect the distances to surfaces over a two-dimensional area. In addition, the radar receivers 48 detect back scatter. Different types of surfaces have different back scatter properties and the back scatter is thus a measure of the type of surface, e.g. surface roughness or texture. The distance of each surface, its location in two-dimensional space, its size and the type of surface are all eventually used to provide a fingerprint that may include the individual surfaces and the surfaces in combination.
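For illustration, the fast-domain relationship between round-trip time and distance can be sketched as follows. This is a plain time-of-flight view; an FMCW system such as the one described would in practice recover the interval from a beat frequency, and the names here are ours.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(t_fast_s):
    # The radio wave travels the path twice (out and back), so the one-way
    # distance to the reflecting surface is half the round-trip time
    # multiplied by the speed of light.
    return SPEED_OF_LIGHT * t_fast_s / 2.0

# A surface about 2 m away returns the wave after roughly 13.3 ns:
print(range_from_round_trip(13.34e-9))  # ~2.0 m
```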



FIG. 5 illustrates the transmission and reception of radio waves to the front, left and above the augmented reality device 10 that may be worn by the user 12. Each TX channel may transmit a respective radio wave 110A representing a respective TX signal and each RX channel may receive a respective radio wave 112A representing a respective RX signal. Surfaces are thus detected to the left and right of the augmented reality device 10 and a distance to a ceiling above the augmented reality device 10.


In FIG. 4, at 114, all IMU data may be read. The sensor fusion module 70 in FIG. 3 reads an output of the measurement unit 18, including the accelerometer 50, the gyroscope 52 and the magnetometer 54. At 116 in FIG. 4, a three-dimensional (3D) orientation of the device may be determined. In FIG. 3, the sensor fusion module 70 calculates a 3D orientation of the augmented reality device 10 based on the readings from the accelerometer 50, gyroscope 52 and the magnetometer 54.
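The patent does not specify the fusion algorithm used at 116; a complementary filter is one conventional way to blend gyroscope and accelerometer readings into an orientation estimate, sketched here for a single axis with illustrative names.

```python
import numpy as np

def tilt_from_accel(ax, az):
    # Inclination of one axis relative to gravity, from accelerometer
    # readings: noisy, but free of long-term drift.
    return np.arctan2(ax, az)

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    # The integrated gyroscope rate is smooth but drifts; blending it with
    # the gravity-derived angle keeps the estimate anchored. alpha trades
    # short-term smoothness against long-term drift rejection.
    gyro_angle = angle_prev + gyro_rate * dt
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```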


In FIG. 4, at 118, normalized range-Doppler maps may be created. In FIG. 3, the radar mapping module 78 creates a map of all surfaces that are detected by the radar tracking module 76 and stores the map as the radar map 84. The radar map 84 may be a two-dimensional map that may include the surfaces, their locations and textures. The Doppler maps may be normalized to compensate for motion of the user (ego motion) based on data from the measurement unit 18.


Range-Doppler maps are well known in the art, and many radar sensor systems automatically create one or more range-Doppler maps. In general, the system sends out one or more TX signals and one or more objects reflect the signal back (RX). The received signal may be converted to a range-Doppler map using one or more manipulations, including conversion to a slow time/fast time chart, a windowed fast Fourier transform (FFT), and background subtraction. Other suitable methods of creating a range-Doppler map may be used, as long as the standard form is range on the y-axis and velocity on the x-axis.



FIG. 6 illustrates how a range-Doppler map may be created. FIG. 5 shows three radar sensors sending (110A) and receiving (112A) signals. In FIG. 6, each TX signal (110A) can be represented by a frequency domain graph 113 and each RX signal (112A) may be represented by a time domain graph 115. The frequency domain graph 113 shows that the respective TX channel may transmit a series of signals represented by pulses.


The time domain graph 115 shows the RX signal that is received in response to the pulses in the frequency domain graph 113. Reflected signals from objects that are farther away take a longer time to travel. Different voltage levels indicate different distances, due to the design of the RX circuitry.


The signal from the RX channel is divided into smaller sample pieces. The sample pieces of the RX signal are divided in time in accordance with the pulses in the TX signal.


Four matrices represent subsequent processing of the RX signal. The first matrix shows that the time domain graph 115 is transformed into slow time and fast time, each being represented on a respective axis. Each sample of the RX signal (between the pulses of the TX signal) is entered into its own column in the matrix. The second matrix shows the result of a fast Fourier transform along the fast time. The fast time domain provides data regarding the range of different surfaces. Background subtraction in the third matrix allows for objects in the foreground to be isolated so that a moving object can be identified. The fourth matrix shows a fast Fourier transform along the slow time to create a range-Doppler map that shows range and velocity on different axes.
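A minimal NumPy rendering of the four-matrix pipeline above might look as follows; the reshape layout and the choice of mean subtraction for the background step are assumptions, since the patent does not fix them.

```python
import numpy as np

def range_doppler_map(rx_samples, n_pulses, samples_per_pulse):
    # Matrix 1: one column per TX pulse -> axes are (fast time, slow time).
    m = rx_samples.reshape(n_pulses, samples_per_pulse).T

    # Matrix 2: FFT along fast time resolves range.
    m = np.fft.fft(m, axis=0)

    # Matrix 3: background subtraction (here, the slow-time average) so
    # that moving objects in the foreground can be identified.
    m = m - m.mean(axis=1, keepdims=True)

    # Matrix 4: FFT along slow time resolves velocity (Doppler).
    m = np.fft.fftshift(np.fft.fft(m, axis=1), axes=1)

    # Rows are range bins (y-axis), columns are velocity bins (x-axis).
    return np.abs(m)
```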


After the range-Doppler maps have been calculated as discussed with reference to FIG. 6, the system then performs a calculation to compensate for movement of the augmented reality device 10 due to movement of the user 12. When the correction for ego motion has been made, the system can determine whether the user 12 is moving past an object, or if the object is moving past the user 12.


The range-Doppler map only goes so far as to make a determination of the relative location of objects. For example, the range-Doppler map may show that there is a wall three feet away. A wall should not move if a correction for ego motion is made, i.e. the velocity of a wall should be zero. Without ego correction, if the user should walk towards the wall with a certain velocity, it would appear from the range-Doppler map that the wall is moving. Once the system makes a correction to adjust for the ego motion, the wall will remain stationary within the range-Doppler map. The IMU data that is read at 114 in FIG. 4 is used to calculate a velocity of the user 12, which is used to correct for movement of the user.
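In code, the ego-motion correction can be pictured as shifting the velocity axis of the range-Doppler map by the user's own radial velocity; the bin-granular roll and the sign convention here are simplifying assumptions.

```python
import numpy as np

def compensate_ego_motion(rd_map, user_velocity, velocity_bin_width):
    # rd_map: range bins x velocity bins, with zero velocity in the center
    # column. user_velocity is the radial speed of the user toward the
    # scene, computed from the IMU data read at 114. After the shift, a
    # stationary wall the user is walking toward falls back into the
    # zero-velocity column. Sub-bin interpolation would be more faithful
    # than an integer roll.
    shift = int(round(user_velocity / velocity_bin_width))
    return np.roll(rd_map, -shift, axis=1)
```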


In FIG. 4, at 120, radar fingerprints may be calculated. FIGS. 6 and 7 illustrate how radar fingerprints may be identified for one channel. Radar fingerprinting is an approach that renders a set of values that may be characteristic of given positions in a room. Of particular importance are the distances and angles to static objects. The radar fingerprint is the range-Doppler map after the range-Doppler map has been corrected for ego motion. The only objects that are included in the radar fingerprint are the ones with zero velocity. By way of example, the stationary wall is included in the fingerprint, but not a hand of the user 12 or any effects due to movement of the user 12.
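A fingerprint extraction along these lines could keep only the near-zero-velocity returns of the ego-corrected map; the one-bin tolerance is an assumption for the sketch.

```python
import numpy as np

def radar_fingerprint(rd_map_ego_corrected, tol_bins=1):
    # After ego-motion correction, static structure (walls, the ceiling)
    # sits in the central velocity bins; a moving hand falls elsewhere and
    # is excluded. The result is a range profile of the static surfaces.
    center = rd_map_ego_corrected.shape[1] // 2
    lo, hi = center - tol_bins, center + tol_bins + 1
    return rd_map_ego_corrected[:, lo:hi].sum(axis=1)
```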


In FIG. 3, the image processing module 72 may be used to calculate the fingerprints. A radar system is thus provided that may include the radar devices 30, 32 and 34, the radar tracking module 76, the radar mapping module 78, the radar maps 84 and 90 and the image processing module 72.


In FIG. 4, the IMU data that may be read at 114 and the radar fingerprints that are calculated at 120 allow for a pose estimate to be determined at 122. Pose may be determined using geometric error minimization, although other error minimization methods may be used. The pose estimate is a first pose estimate at a first time, t1. The IMU data is thus related to the fingerprints at t1. The pose estimate may be stored for later retrieval and later correction of IMU data at a second time, t2, in the slow domain.
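As a toy example of geometric error minimization, the position that best reproduces the measured ranges to a few known static planes can be found by least squares. The plane model, the SciPy solver, and the omission of orientation are all simplifications; the patent combines the fingerprints with the measurement values to form the full pose estimate.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_position(plane_normals, plane_offsets, measured_ranges, x0):
    # Each static surface is modeled as a plane n . x = d; from position x
    # the radar should measure a range of d - n . x along the normal.
    n = np.asarray(plane_normals, dtype=float)
    d = np.asarray(plane_offsets, dtype=float)
    r = np.asarray(measured_ranges, dtype=float)

    def residuals(x):
        return (d - n @ x) - r  # predicted minus measured ranges

    return least_squares(residuals, x0).x

# Front wall at y=3, left wall at x=-2, ceiling at z=2.5 (all hypothetical):
normals = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]
offsets = [3.0, 2.0, 2.5]
ranges = [2.0, 1.5, 1.7]
print(estimate_position(normals, offsets, ranges, x0=np.zeros(3)))
# -> approximately [-0.5, 1.0, 0.8]
```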


In FIG. 4, at 124, the pose estimate can optionally be refined with fingerprints against the visual map 86. In FIG. 3, the map merge and optimization module 88 compares the fingerprints in the radar map 84 with the fingerprints in the visual map 86. The fingerprints in the radar map 84 are then updated with the fingerprints in the visual map 86. The latest radar map may be stored as the radar map 84 within the mapping and map management module 68 and earlier radar maps may be stored as the radar maps 90 in the cloud database 42. The radar maps 90 in the cloud database 42 can be used for additional calculations, e.g. for predictive calculations.


Referring again to FIG. 4, at 126, a determination may be made whether a match is found with an earlier pose estimate. For purposes of discussion, it may be assumed that at t1, there is no earlier pose estimate and the system does not proceed to 128 and 130, but instead proceeds to 132 to collect further radar data.


The sequence hereinbefore described with reference to FIG. 4 represents a first cycle at a first time t1 in the slow domain. Each radar device 30, 32 and 34 has captured surfaces and the surfaces may be used to determine a first set of radar fingerprints, and a first pose may be calculated by combining the first fingerprints with data from the measurement unit 18.


At a second time, t2, in the slow domain, the system again proceeds to illuminate the world via the TX channels at 110. The process continues through 112 where the RX channels are read, 114 where the IMU data may be read, 116 where a 3D orientation of the device may be determined, 118 where normalized range-Doppler maps are created, 120 where radar fingerprints are calculated, 122 where a pose estimate may be determined, and 124 where the pose estimate may be refined with fingerprints from the visual map. At 126, a determination may be made whether an earlier set of radar fingerprints is available. In the present example, an earlier set of radar fingerprints was previously calculated at t1. The system then proceeds to 128 to update the pose estimate. Values that are provided by the measurement unit 18 drift over time and become inaccurate. The drift is primarily the result of double integration of acceleration to obtain position. The amount of drift can be determined by comparing the first pose estimate with the second pose estimate. Necessary adjustments are then made to correct for the drift. The adjustments thus form an updated pose estimate. Fingerprints may only be stored long enough to correct for IMU drift and may then be discarded. Fingerprints may be stored in a circular buffer or the like. More than one fingerprint can be retained, for example the last five fingerprints, so that a path of the drift can be calculated. Five or so fingerprints may also be useful for predicting IMU drift, calculating time warping, or extrapolation purposes.
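The circular buffer and drift comparison might be sketched as follows; holding the last five samples matches the example above, while the linear drift-rate fit is one simple choice among many.

```python
from collections import deque
import numpy as np

class DriftCorrector:
    def __init__(self, maxlen=5):
        # Fingerprint-based poses are held only briefly and age out of the
        # buffer once they are no longer needed for drift correction.
        self.samples = deque(maxlen=maxlen)

    def add(self, t, imu_pose, radar_pose):
        self.samples.append((t, np.asarray(imu_pose), np.asarray(radar_pose)))

    def drift_rate(self):
        # Per-axis error growth per second, from a least-squares line fit
        # through the IMU-minus-radar pose differences. With several
        # samples, the same fit supports extrapolation (prediction of
        # future drift) as mentioned above.
        if len(self.samples) < 2:
            return None
        t = np.array([s[0] for s in self.samples])
        err = np.array([s[1] - s[2] for s in self.samples])
        return np.array([np.polyfit(t, err[:, i], 1)[0]
                         for i in range(err.shape[1])])
```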


At 130, the radar map 84 is updated. The radar map 84 may need updating because it may be based on an incorrect set of data from the measurement unit 18. A comparison of the fingerprint with the map allows the system to do absolute localization. “Relative localization” can be described by way of example to refer to adjusting the display of content relative to a wall that the user 12 is walking towards. “Absolute localization” can be described by way of example to refer to keeping a display of content on a table even if the user rotates their head from left to right, and/or when others can share in the experience with their own augmented reality devices.


By looking at consecutive sets of fingerprints, a unique 3D pose in a room can be obtained. The accuracy of the estimate quickly converges over time due to a high temporal frequency of the measurement unit 18 and the radar devices 30, 32 and 34. In FIG. 3, the sensor fusion module 70 may be responsible for determining each pose estimate, including a first pose estimate at t1 and a second pose estimate at t2. The sensor fusion module 70 also determines drift of the measurement unit 18 by comparing the first pose estimate with the second pose estimate, and makes any necessary adjustments for correcting for the drift.



FIG. 8 illustrates the measurement unit 18 and a radar sensor array 121. The radar sensor array 121 may include the radar devices 30, 32 and 34 illustrated in FIGS. 1 and 3. Measurements taken by the measurement unit 18 pass through a measurement unit filter 134. Measurements taken by the radar sensor array 121 pass through a radar Kalman filter 136. An estimate 138 from the measurement unit filter 134 and an estimate 140 from the radar Kalman filter 136 are combined to calculate a pose estimate 142. At 144, the pose estimate 142 may be used to adjust the measurement unit filter 134. The adjustment corrects for the drift in the measurement unit 18. Adjustments that may be made at 144 include acceleration bias and gyroscopic drift corrections, and position, velocity and attitude corrections.
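One textbook way to combine the two filter outputs into the pose estimate 142 is inverse-variance weighting of independent estimates; the device's actual fusion rule is not detailed in the patent, so this is an illustrative sketch.

```python
def fuse_estimates(imu_est, imu_var, radar_est, radar_var):
    # Weight each estimate by the inverse of its variance: the IMU
    # dominates in the short term (between radar updates), the radar in
    # the long term, where the IMU has drifted.
    w_imu = 1.0 / imu_var
    w_radar = 1.0 / radar_var
    fused = (w_imu * imu_est + w_radar * radar_est) / (w_imu + w_radar)
    fused_var = 1.0 / (w_imu + w_radar)
    return fused, fused_var
```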


At 146, the pose estimate 142 may be used to adjust the radar Kalman filter 136. The adjustments that may be made at 146 include position, velocity and attitude corrections.


At 148, a pose resulting from the pose estimate 142 and the visual map 86 may be combined to refine the estimate with the fingerprints against the visual map 86. The refinements result in a pose that may be more accurate than the pose estimate 142, due to the refinements provided by the visual map 86.


The measurements taken in FIG. 4, including measurements by the measurement unit 18, the radar sensor array 121 and the visual map 86, are all at the same time, e.g. t2. At a later time, t3, a new set of measurements can be taken, including a new set of measurements from the measurement unit 18, the radar sensor array 121 and a new visual map 86. Alternatively, all measurements may not be taken at exactly the same time. For example, the measurement unit 18 and the radar sensor array 121 may take measurements at slightly different times. An interpolation algorithm may be used to modify the measurements to obtain sets of measurements that represent the measurements as if they were taken at the same time. Furthermore, the visual map 86 may be a visual map that may be taken at an earlier time than the measurements of the measurement unit 18 and the radar sensor array 121. For example, the visual map 86 may be taken at t1 and the measurements from the measurement unit 18 and the radar sensor array 121 may be taken at t2.
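A minimal version of such an alignment, assuming linear interpolation between neighboring samples is adequate (the patent does not name the algorithm used):

```python
import numpy as np

def align_measurement(t_target, t_samples, values):
    # Estimate what a sensor would have reported at t_target by
    # interpolating between its two neighboring samples, so that IMU and
    # radar readings can be treated as simultaneous.
    return np.interp(t_target, t_samples, values)

# IMU sampled at 0.000 s and 0.010 s; radar frame timestamped 0.004 s:
print(align_measurement(0.004, [0.000, 0.010], [1.00, 1.20]))  # 1.08
```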



FIG. 9 illustrates initialization that takes place when the augmented reality device 10 may be powered on or put on the head of the user 12. At 150, the augmented reality device 10 carries out coarse localization. The augmented reality device 10 utilizes a global positioning system (GPS) and/or Wi-Fi and/or Bluetooth™ signals of the environment to carry out coarse geo-location of the device 10. At 152, the device 10 searches whether a visual map may be locally available or available from the cloud database 42. At 154, the system determines whether a map is available. If no map is available, then the device 10 proceeds at 156 to create a new map.


If a map is available, then the device 10 proceeds at 158 to load the map. The device 10 then proceeds to 160 to initiate relocalization. During relocalization, the device 10 uses a circular buffer of radar sensors and IMU fingerprints to determine coarse localization. The device 10 then further refines a starting position by using depth sensor input. At 162, the device 10 starts tracking its movement using IMU data.


Referring again to FIG. 3, the rendering module 64 receives computer data representing an object that has to be rendered. The rendering module 64 also determines a location of the object based on the pose resulting from FIG. 8. The rendering module 64 then provides the data of the image and its locations based on the pose to the projectors 24 and 26 in FIG. 1. The projectors 24 and 26 convert the data into laser or other light and insert the light into the eyepieces 20 and 22. The projectors 24 and 26 each generate a pattern of laser light that renders the object at a particular location relative to the pose. The laser light then reflects within the eyepieces 20 and 22 and then exits the eyepieces 20 and 22 towards the user 12. The laser light then enters eyes of the user 12 and the user perceives the laser light on retinas of their eyes. The user 12 can also see through the eyepieces 20 and 22. The user 12 can thus see real-world objects behind the eyepieces 20 and 22 and the real-world objects are augmented with the rendered object. The rendered object may be a stationary object relative to the real-world objects. When the user 12 moves their head, the measurement unit 18 detects such movement and the pose may be updated so that the pose changes relative to the real-world objects. Because the rendered object remains stationary relative to the pose, the rendered object remains stationary relative to the real-world objects in the view of the user 12. In alternate embodiments, the rendered object may be stationary relative to the user, or the rendered object may move relative to the user or real-world objects.


In FIG. 3, the user interface 60 and the application interface 62 allow a user 12 to interact with other modules of the on-device software stack 40. For example, the user 12 may modify the frequency at which a visual map or a radar map is created or IMU data is captured.



FIG. 10 illustrates a rendering, as seen by the user 12 using the augmented reality device 10. In the given example, the rendered object is a globe 170 and its position may be fixed relative to the real-world objects 38 when the user 12 moves their head. In FIG. 3, the rendering module 64 continues to track a change in the second pose by receiving measurement values from the measurement unit 18 after the object is displayed in the desired location and updates the desired location of the rendered object in response to the tracking of the change in the second pose.


As discussed above, a person may be wearing the augmented reality device 10 and, in order for the augmented reality content to be displayed correctly, the location of the augmented reality device 10 must be known. An IMU may provide good short-term position estimation, but may suffer from drift problems over time. Radar data may be more accurate in the long term (but not in the short term), so it is combined with IMU data to produce improved position data for both the long and short term. The combined data is only accurate for determining relative location. The combined data can, for example, determine that the device is 3 feet away from a wall, but the device still does not know where in the world that wall or the user is. To provide accurate absolute position data of the device, a radar fingerprint may be created from the radar and IMU data and may be compared with data from an outward-facing visual camera.



FIG. 11 shows a diagrammatic representation of a machine in the exemplary form of a computer system 900 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.


The exemplary computer system 900 includes a processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 904 (e.g., read only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), and a static memory 906 (e.g., flash memory, static random access memory (SRAM), etc.), which communicate with each other via a bus 908 and a laser driver chip 912 or other light source driver.


The computer system 900 may further include a disk drive unit 916, and a network interface device 920.


The disk drive unit 916 includes a machine-readable medium 922 on which is stored one or more sets of instructions 924 (e.g., software) embodying any one or more of the methodologies or functions described herein. The software may also reside, completely or at least partially, within the main memory 904 and/or within the processor 902 during execution thereof by the computer system 900, the main memory 904 and the processor 902 also constituting machine-readable media.


The software may further be transmitted or received over a network 928 via the network interface device 920.


While the machine-readable medium 922 is shown in an exemplary embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.


A laser driver chip 912 includes a data store 161 and its own processor 162. The data store 161 is used to store instructions and data structures that are specific to the operation of a laser source. The processor 162 retrieves the instructions from the data store and has access to the data structures to execute routines that drive the laser source so that the laser source generates laser light. The laser source forms part of a projector that receives data such as video data. A scanner forms part of the projector to allow the projector to display the laser light over a two-dimensional area, and in some instances in three-dimensional space, with any patterns, color, saturation and other light qualities that are created by the projector being based on values in the video data.


Although a laser source and a laser driver chip 912 have been illustrated and discussed, it may be possible to use other display systems. Other display systems may for example include displays that make use of light-emitting diode (LED) technology, organic light-emitting diode (OLED) technology, superluminescent light-emitting diode (SLED), or the like.


Example Embodiments

In some embodiments, the invention provides an augmented reality device including a head-mountable frame, a radar system that generates first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times, a measurement unit, secured to the frame, and detecting first and second measurement values at the first and second times, each measurement value being indicative of at least one of position and movement of the measurement unit, a measurement unit filter connected to the measurement unit, a sensor fusion module connected to the radar system and the measurement unit and operable to (i) determine first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, (ii) determine a drift of the measurement unit by comparing the first pose estimate with the second pose estimate, and (iii) adjust the measurement unit filter to correct for the drift, a rendering module to determine a desired position of a rendered object based on the second pose estimate, an eyepiece secured to the frame and a projector secured to the frame and operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.


In some embodiments, the augmented reality device may include that the radar system includes at least a first radar device having a first radar transmitter secured to the frame and transmitting a radio wave at first and second times in a slow domain, a first radar receiver secured to the frame and detecting the radio waves after the radio waves are reflected from a surface, a radar tracking module connected to the first radar receiver and determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively, a radar mapping module connected to the radar tracking module and generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain and an image processing module connected to the radar mapping module and calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively.


In some embodiments, the augmented reality device may include that the radar system includes at least a second radar device having a second radar transmitter secured to the frame and transmitting a radio wave at first and second times in a slow domain and a second radar receiver secured to the frame and detecting the radio waves transmitted by the second radar transmitter after the radio waves are reflected from a surface, wherein the radar tracking module is connected to the second radar receiver and determines first and second time intervals in the fast domain between the transmission and the detection of the respective radio waves transmitted by the second radar transmitter.


In some embodiments, the augmented reality device may include that the radar system includes at least a third radar device having a third radar transmitter secured to the frame and transmitting a radio wave at first and second times in a slow domain; and a third radar receiver secured to the frame and detecting the radio waves transmitted by the third radar transmitter after the radio waves are reflected from a surface, wherein the radar tracking module is connected to the third radar receiver and determines first and second time intervals in the fast domain between the transmission and the detection of the respective radio waves transmitted by the third radar transmitter.


In some embodiments, the augmented reality device may include that the first radar receiver detects back scatter from the surface and the image processing module calculates a texture of the surface based on the back scatter.


In some embodiments, the augmented reality device may include a visual system that includes a visual camera mounted to the frame to capture first and second visual images of the real-world object, a visual tracking module connected to the visual camera and storing the first and second visual images at the first and second times in the slow domain, respectively, a visual mapping module connected to the visual tracking module and generating first and second visual maps of locations of real-world objects relative to the user based on the first and second visual images and a map merge and optimization module that refines the locations of the first and second sets of fingerprints based on the first and second visual maps, respectively.


In some embodiments, the augmented reality device may include a radar filter connected to the radar system, wherein the sensor fusion module adjusts the radar filter based on the second pose estimate.


In some embodiments, the augmented reality device may include that the rendering module tracks a change in the second pose by receiving measurement values from the measurement unit after the object is displayed in the desired location and updates the desired location of the rendered object in response to the tracking of the change in the second pose.


In some embodiments, the augmented reality device may include that the sensor fusion module performs a coarse localization based on a wireless signal.


In some embodiments, the invention also provides an augmented reality device including a head-mountable frame, a radar system that includes at least a first radar device having a first radar transmitter secured to the frame and transmitting a radio wave at first and second times in a slow domain, a first radar receiver secured to the frame and detecting the radio waves after the radio waves are reflected from a surface, a radar tracking module connected to the first radar receiver and determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively, a radar mapping module connected to the radar tracking module and generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain, an image processing module connected to the radar mapping module and calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively, a measurement unit, secured to the frame, and detecting first and second measurement values at the first and second times in the slow domain, each measurement value being indicative of at least one of position and movement of the measurement unit, a measurement unit filter connected to the measurement unit, a sensor fusion module connected to the image processing module and operable to (i) determine first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, (ii) determine drift of the measurement unit by comparing the first pose estimate with the second pose estimate, and (iii) adjust the measurement unit filter to correct for the drift, a rendering module to determine a desired position of a rendered object based on the second pose estimate, an eyepiece secured to the frame; and a projector secured to the frame and operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.


In some embodiments, the invention further provides an augmented reality device including a head-mountable frame, a radar system secured to the frame that includes a radar transmitter secured to the frame and initiating execution of a first radar cycle by transmitting a first radio wave, a radar receiver secured to the frame and detecting the first radio wave after the first radio wave is reflected from a surface, a radar tracking module connected to the radar receiver and determining a first time between the transmission and the detection of the first radio wave, a radar mapping module connected to the radar tracking module and generating a first radar map of locations of real-world objects relative to the user based at least on the first time between the transmission and the detection of the first radio wave, and an image processing module connected to the radar mapping module and calculating a first set of radar fingerprints based on the locations of the real-world objects in the first radar map to complete the first radar cycle, a measurement unit secured to the frame and detecting a first measurement value indicative of at least one of position and movement of the measurement unit, and a sensor fusion module connected to the image processing module and the measurement unit and determining a first pose estimate of the first set of radar fingerprints relative to the first measurement value, wherein the radar system executes a second radar cycle including transmitting a second radio wave, detecting the second radio wave after the second radio wave is reflected from the surface, determining a second time between the transmission and the detection of the second radio wave, generating a second radar map of locations of real-world objects relative to the user based at least on the second time between the transmission and the detection of the second radio wave, and calculating a second set of radar fingerprints based on the locations of the real-world objects in the second radar map, wherein the measurement unit detects a second measurement value indicative of position and movement of the measurement unit, and wherein the sensor fusion module determines a second pose estimate of the second set of radar fingerprints relative to the second measurement value, determines drift of the measurement unit by comparing the first pose estimate with the second pose estimate, and adjusts a measurement unit filter that is connected to the measurement unit to correct for the drift, the augmented reality device further including a rendering module to determine a desired position of a rendered object based on the second pose, an eyepiece secured to the frame, and a projector secured to the frame and operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.


In some embodiments, the invention also provides an augmented reality device including a head-mountable frame, a radar system that generates first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times, a visual system that includes a visual camera mounted to the frame to capture first and second visual images of the real-world objects, a visual tracking module connected to the visual camera and storing the first and second visual images at the first and second times, respectively, a visual mapping module connected to the visual tracking module and generating first and second visual maps of locations of real-world objects relative to the user based on the first and second visual images, and a map merge and optimization module that refines the locations of the first and second sets of fingerprints based on the first and second visual maps, respectively, a rendering module to determine a desired position of a rendered object based on the first and second sets of fingerprints as refined by the map merge and optimization module, an eyepiece secured to the frame, and a projector secured to the frame and operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.


In some embodiments, the invention further provides a method of displaying rendered content including attaching a head-mountable frame to a head of a user, executing a plurality of radar cycles to generate first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times in a slow domain, detecting first and second measurement values at the first and second times in the slow domain, each measurement value being indicative of at least one of position and movement, with a measurement unit secured to the frame, determining first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, determining drift of the measurement unit by comparing the first pose estimate with the second pose estimate, adjusting a measurement unit filter that is connected to the measurement unit to correct for the drift, determining a desired position of a rendered object based on the second pose estimate, converting data into light to generate the rendered object, and displaying the rendered object in the desired position to the user through an eyepiece secured to the head-mountable frame.
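
To show the drift computation in miniature, the sketch below treats each pose estimate as a small vector of position and orientation parameters (an assumed parameterization) and folds the measured drift rate back into the filter's bias term; the gain and the function names are illustrative assumptions.

```python
import numpy as np

def estimate_drift(imu_pose_1, radar_pose_1, imu_pose_2, radar_pose_2):
    # Drift is the growth, between the two slow-domain samples, of the
    # discrepancy between the IMU-propagated pose and the radar-anchored
    # pose estimate.
    error_1 = np.asarray(imu_pose_1, dtype=float) - np.asarray(radar_pose_1, dtype=float)
    error_2 = np.asarray(imu_pose_2, dtype=float) - np.asarray(radar_pose_2, dtype=float)
    return error_2 - error_1

def adjust_filter_bias(bias, drift, dt, gain=0.5):
    # Fold a fraction of the measured drift rate into the measurement
    # unit filter's bias so subsequent readings are corrected.
    return np.asarray(bias, dtype=float) + gain * (drift / dt)
```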


In some embodiments, the method may include transmitting a radio wave at first and second times in the slow domain, detecting the radio waves after the radio waves are reflected from a surface, determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively, generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain, and calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively.


In some embodiments, the method may include detecting back scatter from the surface and calculating a texture of the surface based on the back scatter.
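
One plausible texture measure (illustrative only, not claimed by the patent): rough surfaces scatter radio energy diffusely while smooth surfaces reflect specularly, so the spread of received backscatter power across samples can stand in for surface texture.

```python
import numpy as np

def texture_from_backscatter(backscatter_powers):
    # Normalize received power and use its variance across samples as a
    # simple roughness score: smooth surfaces give a low spread, rough
    # surfaces a high spread.
    p = np.asarray(backscatter_powers, dtype=float)
    return float(np.var(p / p.mean()))
```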


In some embodiments, the method may include that the radar cycles include transmitting a radio wave at first and second times in the slow domain, detecting the radio waves after the radio waves are reflected from a surface, determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively, generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain and calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively.


In some embodiments, the invention also provides a method of displaying rendered content including attaching a head-mountable frame to a head of a user, executing a plurality of radar cycles, including transmitting a radio wave at first and second times in a slow domain, detecting the radio waves after the radio waves are reflected from a surface, determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively, generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain, and calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively, detecting first and second measurement values at the first and second times in the slow domain, each measurement value being indicative of at least one of position and movement, with a measurement unit secured to the frame, determining first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, determining drift of the measurement unit by comparing the first pose estimate with the second pose estimate, adjusting a measurement unit filter that is connected to the measurement unit to correct for the drift, determining a desired position of a rendered object based on the second pose estimate, converting data into light to generate the rendered object, and displaying the rendered object in the desired position to the user through an eyepiece secured to the head-mountable frame.


In some embodiments, the invention further provides a method of displaying rendered content including attaching a head-mountable frame to a head of a user, executing a first radar cycle, including transmitting a first radio wave, detecting the first radio wave after the first radio wave is reflected from a surface, determining a first time between the transmission and the detection of the first radio wave, generating a first radar map of locations of real-world objects relative to the user based at least on the first time between the transmission and the detection of the first radio wave, and calculating a first set of radar fingerprints based on the locations of the real-world objects in the first radar map, detecting a first measurement value indicative of at least one of position and movement with a measurement unit secured to the frame, determining a first pose estimate of the first set of radar fingerprints relative to the first measurement value, executing a second radar cycle, including transmitting a second radio wave, detecting the second radio wave after the second radio wave is reflected from the surface, determining a second time between the transmission and the detection of the second radio wave, generating a second radar map of locations of real-world objects relative to the user based at least on the second time between the transmission and the detection of the second radio wave, and calculating a second set of radar fingerprints based on the locations of the real-world objects in the second radar map, detecting a second measurement value indicative of at least one of position and movement with the measurement unit secured to the frame, determining a second pose estimate of the second set of radar fingerprints relative to the second measurement value, determining drift of the measurement unit by comparing the first pose estimate with the second pose estimate, adjusting a measurement unit filter that is connected to the measurement unit to correct for the drift, determining a desired position of a rendered object based on the second pose estimate, converting data into light to generate the rendered object, and displaying the rendered object in the desired position to the user through an eyepiece secured to the head-mountable frame.


In some embodiments, the invention also provides a method of displaying rendered content including attaching a head-mountable frame to a head of a user, executing a plurality of radar cycles to generate first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times in a slow domain, capturing first and second visual images of the real-world objects with a visual camera mounted to the frame, storing the first and second visual images at the first and second times in the slow domain, respectively, generating first and second visual maps of locations of real-world objects relative to the user based on the first and second visual images, refining the locations of the first and second sets of fingerprints based on the first and second visual maps, respectively, determining a desired position of a rendered object based on the first and second sets of fingerprints as refined, converting data into light to generate the rendered object, and displaying the rendered object in the desired position to the user through an eyepiece secured to the head-mountable frame.


In some embodiments, the invention further provides an augmented reality device including a head-mountable frame, a radar system that generates first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times in a slow domain, a measurement unit secured to the frame and detecting first and second measurement values at the first and second times in the slow domain, each measurement value being indicative of at least one of position and movement of the measurement unit, a measurement unit filter connected to the measurement unit, a processor, a computer-readable medium connected to the processor, and a set of instructions on the computer-readable medium executable by the processor to (i) determine first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, (ii) determine a drift of the measurement unit by comparing the first pose estimate with the second pose estimate, (iii) adjust the measurement unit filter to correct for the drift, and (iv) determine a desired position of a rendered object based on the second pose, the augmented reality device further including an eyepiece secured to the frame and a projector secured to the frame and operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.


While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive of the invention, and that this invention is not restricted to the specific constructions and arrangements shown and described, since modifications may occur to those ordinarily skilled in the art.

Claims
  • 1. An augmented reality device comprising:
a head-mountable frame;
a radar system that generates first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times;
a measurement unit, secured to the frame, and detecting first and second measurement values at the first and second times, each measurement value being indicative of at least one of position and movement of the measurement unit;
a measurement unit filter connected to the measurement unit;
a sensor fusion module connected to the radar system and the measurement unit and operable to (i) determine first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, (ii) determine a drift of the measurement unit by comparing the first pose estimate with the second pose estimate, and (iii) adjust the measurement unit filter to correct for the drift;
a rendering module to determine a desired position of a rendered object based on the second pose;
an eyepiece secured to the frame; and
a projector secured to the frame and operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.

  • 2. The augmented reality device of claim 1, wherein the radar system includes at least a first radar device having:
a first radar transmitter secured to the frame and transmitting a radio wave at first and second times in the slow domain;
a first radar receiver secured to the frame and detecting the radio waves after the radio waves are reflected from a surface;
a radar tracking module connected to the first radar receiver and determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively;
a radar mapping module connected to the radar tracking module and generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain; and
an image processing module connected to the radar mapping module and calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively.

  • 3. The augmented reality device of claim 2, wherein the radar system includes at least a second radar device having:
a second radar transmitter secured to the frame and transmitting a radio wave at first and second times in the slow domain; and
a second radar receiver secured to the frame and detecting the radio waves transmitted by the second radar transmitter after the radio waves are reflected from a surface, wherein the radar tracking module is connected to the second radar receiver and determines first and second time intervals in the fast domain between the transmission and the detection of the respective radio waves transmitted by the second radar transmitter.

  • 4. The augmented reality device of claim 3, wherein the radar system includes at least a third radar device having:
a third radar transmitter secured to the frame and transmitting a radio wave at first and second times in the slow domain; and
a third radar receiver secured to the frame and detecting the radio waves transmitted by the third radar transmitter after the radio waves are reflected from a surface, wherein the radar tracking module is connected to the third radar receiver and determines first and second time intervals in the fast domain between the transmission and the detection of the respective radio waves transmitted by the third radar transmitter.

  • 5. The augmented reality device of claim 2, wherein the first radar receiver detects back scatter from the surface and the image processing module calculates a texture of the surface based on the back scatter.

  • 6. The augmented reality device of claim 1, further comprising a visual system that includes:
a visual camera mounted to the frame to capture first and second visual images of the real-world objects;
a visual tracking module connected to the visual camera and storing the first and second visual images at the first and second times in the slow domain, respectively;
a visual mapping module connected to the visual tracking module and generating first and second visual maps of locations of real-world objects relative to the user based on the first and second visual images; and
a map merge and optimization module that refines the locations of the first and second sets of fingerprints based on the first and second visual maps, respectively.

  • 7. The augmented reality device of claim 1, further comprising a radar filter connected to the radar system, wherein the sensor fusion module adjusts the radar filter based on the second pose estimate.

  • 8. The augmented reality device of claim 1, wherein the rendering module tracks a change in the second pose by receiving measurement values from the measurement unit after the object is displayed in the desired location and updates the desired location of the rendered object in response to the tracking of the change in the second pose.

  • 9. The augmented reality device of claim 1, wherein the sensor fusion module performs a coarse localization based on a wireless signal.

  • 10. An augmented reality device comprising:
a head-mountable frame;
a radar system that includes at least a first radar device having:
a first radar transmitter secured to the frame and transmitting a radio wave at first and second times in a slow domain;
a first radar receiver secured to the frame and detecting the radio waves after the radio waves are reflected from a surface;
a radar tracking module connected to the first radar receiver and determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively;
a radar mapping module connected to the radar tracking module and generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain; and
an image processing module connected to the radar mapping module and calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively;
a measurement unit, secured to the frame, and detecting first and second measurement values at the first and second times in the slow domain, each measurement value being indicative of at least one of position and movement of the measurement unit;
a measurement unit filter connected to the measurement unit;
a sensor fusion module connected to the image processing module and operable to (i) determine first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, (ii) determine drift of the measurement unit by comparing the first pose estimate with the second pose estimate, and (iii) adjust the measurement unit filter to correct for the drift;
a rendering module to determine a desired position of a rendered object based on the second pose;
an eyepiece secured to the frame; and
a projector secured to the frame and operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.

  • 11. An augmented reality device comprising:
a head-mountable frame;
a radar system secured to the frame that includes:
a radar transmitter secured to the frame that initiates execution of a first radar cycle by transmitting a first radio wave;
a radar receiver secured to the frame and detecting the first radio wave after the first radio wave is reflected from a surface;
a radar tracking module connected to the radar receiver and determining a first time between the transmission and the detection of the first radio wave;
a radar mapping module connected to the radar tracking module and generating a first radar map of locations of real-world objects relative to the user based at least on the first time between the transmission and the detection of the first radio wave; and
an image processing module connected to the radar mapping module and calculating a first set of radar fingerprints based on the locations of the real-world objects in the first radar map to complete the first radar cycle;
a measurement unit, secured to the frame, and detecting a first measurement value indicative of at least one of position and movement of the measurement unit;
a sensor fusion module connected to the image processing module and the measurement unit and determining a first pose estimate of the first set of radar fingerprints relative to the first measurement value;
wherein the radar system executes a second radar cycle, including:
transmitting a second radio wave;
detecting the second radio wave after the second radio wave is reflected from the surface;
determining a second time between the transmission and the detection of the second radio wave;
generating a second radar map of locations of real-world objects relative to the user based at least on the second time between the transmission and the detection of the second radio wave; and
calculating a second set of radar fingerprints based on the locations of the real-world objects in the second radar map;
wherein the measurement unit detects a second measurement value indicative of position and movement of the measurement unit;
wherein the sensor fusion module:
determines a second pose estimate of the second set of radar fingerprints relative to the second measurement value;
determines drift of the measurement unit by comparing the first pose estimate with the second pose estimate; and
adjusts a measurement unit filter that is connected to the measurement unit to correct for the drift;
a rendering module to determine a desired position of a rendered object based on the second pose;
an eyepiece secured to the frame; and
a projector secured to the frame and operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.

  • 12. An augmented reality device comprising:
a head-mountable frame;
a radar system that generates first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times;
a visual system that includes:
a visual camera mounted to the frame to capture first and second visual images of the real-world objects;
a visual tracking module connected to the visual camera and storing the first and second visual images at the first and second times, respectively;
a visual mapping module connected to the visual tracking module and generating first and second visual maps of locations of real-world objects relative to the user based on the first and second visual images; and
a map merge and optimization module that refines the locations of the first and second sets of fingerprints based on the first and second visual maps, respectively;
a rendering module to determine a desired position of a rendered object based on the first and second sets of fingerprints as refined by the map merge and optimization module;
an eyepiece secured to the frame; and
a projector secured to the frame and operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.

  • 13. A method of displaying rendered content comprising:
attaching a head-mountable frame to a head of a user;
executing a plurality of radar cycles to generate first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times in a slow domain;
detecting first and second measurement values at the first and second times in the slow domain, each measurement value being indicative of at least one of position and movement, with a measurement unit secured to the frame;
determining first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value;
determining drift of the measurement unit by comparing the first pose estimate with the second pose estimate;
adjusting a measurement unit filter that is connected to the measurement unit to correct for the drift;
determining a desired position of a rendered object based on the second pose estimate;
converting data into light to generate the rendered object; and
displaying the rendered object in the desired position to the user through an eyepiece secured to the head-mountable frame.

  • 14. The method of claim 13, further comprising:
transmitting a radio wave at first and second times in the slow domain;
detecting the radio waves after the radio waves are reflected from a surface;
determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively;
generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain; and
calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively.

  • 15. The method of claim 14, further comprising:
detecting back scatter from the surface; and
calculating a texture of the surface based on the back scatter.

  • 16. The method of claim 13, wherein the radar cycles include:
transmitting a radio wave at first and second times in the slow domain;
detecting the radio waves after the radio waves are reflected from a surface;
determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively;
generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain; and
calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively.

  • 17. A method of displaying rendered content comprising:
attaching a head-mountable frame to a head of a user;
executing a plurality of radar cycles, including:
transmitting a radio wave at first and second times in a slow domain;
detecting the radio waves after the radio waves are reflected from a surface;
determining first and second time intervals in a fast domain between the transmission and the detection of the radio waves, respectively;
generating first and second radar maps of locations of real-world objects relative to the user based at least on the respective times in the fast domain; and
calculating first and second sets of radar fingerprints based on the locations of the real-world objects in the first and second radar maps, respectively;
detecting first and second measurement values at the first and second times in the slow domain, each measurement value being indicative of at least one of position and movement, with a measurement unit secured to the frame;
determining first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value;
determining drift of the measurement unit by comparing the first pose estimate with the second pose estimate;
adjusting a measurement unit filter that is connected to the measurement unit to correct for the drift;
determining a desired position of a rendered object based on the second pose estimate;
converting data into light to generate the rendered object; and
displaying the rendered object in the desired position to the user through an eyepiece secured to the head-mountable frame.

  • 18. A method of displaying rendered content comprising:
attaching a head-mountable frame to a head of a user;
executing a first radar cycle, including:
transmitting a first radio wave;
detecting the first radio wave after the first radio wave is reflected from a surface;
determining a first time between the transmission and the detection of the first radio wave;
generating a first radar map of locations of real-world objects relative to the user based at least on the first time between the transmission and the detection of the first radio wave; and
calculating a first set of radar fingerprints based on the locations of the real-world objects in the first radar map;
detecting a first measurement value indicative of at least one of position and movement with a measurement unit secured to the frame;
determining a first pose estimate of the first set of radar fingerprints relative to the first measurement value;
executing a second radar cycle, including:
transmitting a second radio wave;
detecting the second radio wave after the second radio wave is reflected from the surface;
determining a second time between the transmission and the detection of the second radio wave;
generating a second radar map of locations of real-world objects relative to the user based at least on the second time between the transmission and the detection of the second radio wave; and
calculating a second set of radar fingerprints based on the locations of the real-world objects in the second radar map;
detecting a second measurement value indicative of position and movement with the measurement unit secured to the frame;
determining a second pose estimate of the second set of radar fingerprints relative to the second measurement value;
determining drift of the measurement unit by comparing the first pose estimate with the second pose estimate;
adjusting a measurement unit filter that is connected to the measurement unit to correct for the drift;
determining a desired position of a rendered object based on the second pose estimate;
converting data into light to generate the rendered object; and
displaying the rendered object in the desired position to the user through an eyepiece secured to the head-mountable frame.

  • 19. A method of displaying rendered content comprising:
attaching a head-mountable frame to a head of a user;
executing a plurality of radar cycles to generate first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times in a slow domain;
capturing first and second visual images of the real-world objects with a visual camera mounted to the frame;
storing the first and second visual images at the first and second times in the slow domain, respectively;
generating first and second visual maps of locations of real-world objects relative to the user based on the first and second visual images;
refining the locations of the first and second sets of fingerprints based on the first and second visual maps, respectively;
determining a desired position of a rendered object based on the first and second sets of fingerprints as refined;
converting data into light to generate the rendered object; and
displaying the rendered object in the desired position to the user through an eyepiece secured to the head-mountable frame.

  • 20. An augmented reality device comprising:
a head-mountable frame;
a radar system that generates first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times in a slow domain;
a measurement unit, secured to the frame, and detecting first and second measurement values at the first and second times in the slow domain, each measurement value being indicative of at least one of position and movement of the measurement unit;
a measurement unit filter connected to the measurement unit;
a processor;
a computer-readable medium connected to the processor;
a set of instructions on the computer-readable medium and executable by the processor to (i) determine first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, (ii) determine a drift of the measurement unit by comparing the first pose estimate with the second pose estimate, (iii) adjust the measurement unit filter to correct for the drift, and (iv) determine a desired position of a rendered object based on the second pose;
an eyepiece secured to the frame; and
a projector secured to the frame and operable to convert data into light to generate the rendered object and to display the rendered object in the desired position to the user through the eyepiece.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a National Phase of International Application No. PCT/US2019/033987, filed on May 24, 2019, which claims priority from U.S. Provisional Patent Application No. 62/678,621, filed on May 31, 2018, each of which is incorporated herein by reference in its entirety.

PCT Information
Filing Document: PCT/US2019/033987; Filing Date: May 24, 2019; Country: WO
Publishing Document: WO2019/231850; Publishing Date: Dec. 5, 2019; Country: WO; Kind: A
US Referenced Citations (497)
Number Name Date Kind
4344092 Miller Aug 1982 A
4652930 Crawford Mar 1987 A
4810080 Grendol et al. Mar 1989 A
4997268 Dauvergne Mar 1991 A
5007727 Kahaney et al. Apr 1991 A
5074295 Willis Dec 1991 A
5240220 Elberbaum Aug 1993 A
5251635 Dumoulin et al. Oct 1993 A
5410763 Bolle May 1995 A
5455625 Englander Oct 1995 A
5495286 Adair Feb 1996 A
5497463 Stein et al. Mar 1996 A
5682255 Friesem et al. Oct 1997 A
5689669 Lynch Nov 1997 A
5826092 Flannery Oct 1998 A
5854872 Tai Dec 1998 A
5864365 Sramek et al. Jan 1999 A
5937202 Crosetto Aug 1999 A
6002853 De Hond Dec 1999 A
6012811 Chao et al. Jan 2000 A
6016160 Coombs et al. Jan 2000 A
6064749 Hirota et al. May 2000 A
6076927 Owens Jun 2000 A
6079982 Meader Jun 2000 A
6117923 Amagai et al. Sep 2000 A
6119147 Toomey et al. Sep 2000 A
6124977 Takahashi Sep 2000 A
6179619 Tanaka Jan 2001 B1
6191809 Hori et al. Feb 2001 B1
6219045 Leahy et al. Apr 2001 B1
6243091 Berstis Jun 2001 B1
6271843 Lection et al. Aug 2001 B1
6362817 Powers et al. Mar 2002 B1
6375369 Schneider et al. Apr 2002 B1
6385735 Wilson May 2002 B1
6396522 Vu May 2002 B1
6414679 Miodonski et al. Jul 2002 B1
6538655 Kubota Mar 2003 B1
6541736 Huang et al. Apr 2003 B1
6570563 Honda May 2003 B1
6573903 Gantt Jun 2003 B2
6590593 Robertson et al. Jul 2003 B1
6621508 Shiraishi et al. Sep 2003 B1
6690393 Heron et al. Feb 2004 B2
6757068 Foxlin Jun 2004 B2
6784901 Harvfey et al. Aug 2004 B1
6961055 Doak Nov 2005 B2
7046515 Wyatt May 2006 B1
7051219 Hwang May 2006 B2
7076674 Cervantes Jul 2006 B2
7111290 Yates, Jr. Sep 2006 B1
7119819 Robertson et al. Oct 2006 B1
7219245 Raghuvanshi May 2007 B1
7382288 Wilson Jun 2008 B1
7414629 Santodomingo Aug 2008 B2
7431453 Hogan Oct 2008 B2
7467356 Gettman et al. Dec 2008 B2
7542040 Templeman Jun 2009 B2
7573640 Nivon et al. Aug 2009 B2
7653877 Matsuda Jan 2010 B2
7663625 Chartier et al. Feb 2010 B2
7724980 Shenzhi May 2010 B1
7746343 Charaniya et al. Jun 2010 B1
7751662 Kleemann Jul 2010 B2
7758185 Lewis Jul 2010 B2
7788323 Greenstein et al. Aug 2010 B2
7804507 Yang et al. Sep 2010 B2
7814429 Buffet et al. Oct 2010 B2
7817150 Reichard et al. Oct 2010 B2
7844724 Van Wie et al. Nov 2010 B2
8060759 Arnan et al. Nov 2011 B1
8120851 Iwasa Feb 2012 B2
8214660 Capps, Jr. Jul 2012 B2
8246408 Elliot Aug 2012 B2
8353594 Lewis Jan 2013 B2
8360578 Nummela et al. Jan 2013 B2
8508676 Silverstein et al. Aug 2013 B2
8547638 Levola Oct 2013 B2
8605764 Rothaar et al. Oct 2013 B1
8619365 Harris et al. Dec 2013 B2
8696113 Lewis Apr 2014 B2
8698701 Margulis Apr 2014 B2
8733927 Lewis May 2014 B1
8736636 Kang May 2014 B2
8759929 Shiozawa et al. Jun 2014 B2
8793770 Lim Jul 2014 B2
8823855 Hwang Sep 2014 B2
8847988 Geisner et al. Sep 2014 B2
8874673 Kim Oct 2014 B2
9010929 Lewis Apr 2015 B2
9015501 Gee Apr 2015 B2
9086537 Iwasa et al. Jul 2015 B2
9095437 Boyden et al. Aug 2015 B2
9239473 Lewis Jan 2016 B2
9244293 Lewis Jan 2016 B2
9244533 Friend et al. Jan 2016 B2
9383823 Geisner et al. Jul 2016 B2
9489027 Ogletree Nov 2016 B1
9519305 Wolfe Dec 2016 B2
9581820 Robbins Feb 2017 B2
9582060 Balatsos Feb 2017 B2
9658473 Lewis May 2017 B2
9671566 Abovitz et al. Jun 2017 B2
9671615 Vallius et al. Jun 2017 B1
9696795 Marcolina et al. Jul 2017 B2
9798144 Sako et al. Oct 2017 B2
9874664 Stevens et al. Jan 2018 B2
9880441 Osterhout Jan 2018 B1
9918058 Takahasi et al. Mar 2018 B2
9955862 Freeman et al. May 2018 B2
9978118 Ozgumer et al. May 2018 B1
9996797 Holz et al. Jun 2018 B1
10018844 Levola et al. Jul 2018 B2
10082865 Raynal et al. Sep 2018 B1
10151937 Lewis Dec 2018 B2
10185147 Lewis Jan 2019 B2
10218679 Jawahar Feb 2019 B1
10241545 Richards et al. Mar 2019 B1
10317680 Richards et al. Jun 2019 B1
10436594 Belt et al. Oct 2019 B2
10516853 Gibson et al. Dec 2019 B1
10551879 Richards et al. Feb 2020 B1
10578870 Kimmel Mar 2020 B2
10698202 Kimmel et al. Jun 2020 B2
10856107 Mycek et al. Oct 2020 B2
10825424 Zhang Nov 2020 B2
10987176 Poltaretskyi et al. Apr 2021 B2
11190681 Brook et al. Nov 2021 B1
11209656 Choi et al. Dec 2021 B1
11236993 Hall et al. Feb 2022 B1
20010010598 Aritake et al. Aug 2001 A1
20010018667 Kim Aug 2001 A1
20020007463 Fung Jan 2002 A1
20020108064 Nunally Feb 2002 A1
20020063913 Nakamura et al. May 2002 A1
20020071050 Homberg Jun 2002 A1
20020095463 Matsuda Jul 2002 A1
20020113820 Robinson et al. Aug 2002 A1
20020122648 Mule′ et al. Sep 2002 A1
20020140848 Cooper et al. Oct 2002 A1
20030028816 Bacon Feb 2003 A1
20030048456 Hill Mar 2003 A1
20030067685 Niv Apr 2003 A1
20030077458 Korenaga et al. Apr 2003 A1
20030115494 Cervantes Jun 2003 A1
20030218614 Lavelle et al. Nov 2003 A1
20030219992 Schaper Nov 2003 A1
20030226047 Park Dec 2003 A1
20040001533 Tran et al. Jan 2004 A1
20040021600 Wittenberg Feb 2004 A1
20040025069 Gary et al. Feb 2004 A1
20040042377 Nikoloai et al. Mar 2004 A1
20040073822 Greco Apr 2004 A1
20040073825 Itoh Apr 2004 A1
20040111248 Granny et al. Jun 2004 A1
20040113887 Pair et al. Jun 2004 A1
20040174496 Ji et al. Sep 2004 A1
20040186902 Stewart Sep 2004 A1
20040193441 Altieri Sep 2004 A1
20040201857 Foxlin Oct 2004 A1
20040238732 State et al. Dec 2004 A1
20040240072 Schindler et al. Dec 2004 A1
20040246391 Travis Dec 2004 A1
20040268159 Aasheim et al. Dec 2004 A1
20050001977 Zelman Jan 2005 A1
20050034002 Flautner Feb 2005 A1
20050093719 Okamoto et al. May 2005 A1
20050128212 Edecker et al. Jun 2005 A1
20050157159 Komiya et al. Jul 2005 A1
20050177385 Hull Aug 2005 A1
20050231599 Yamasaki Oct 2005 A1
20050273792 Inohara et al. Dec 2005 A1
20060013435 Rhoads Jan 2006 A1
20060015821 Jacques Parker et al. Jan 2006 A1
20060019723 Vorenkamp Jan 2006 A1
20060038880 Starkweather et al. Feb 2006 A1
20060050224 Smith Mar 2006 A1
20060090092 Verhulst Apr 2006 A1
20060126181 Levola Jun 2006 A1
20060129852 Bonola Jun 2006 A1
20060132914 Weiss et al. Jun 2006 A1
20060179329 Terechko Aug 2006 A1
20060221448 Nivon et al. Oct 2006 A1
20060228073 Mukawa et al. Oct 2006 A1
20060250322 Hall et al. Nov 2006 A1
20060259621 Ranganathan Nov 2006 A1
20060268220 Hogan Nov 2006 A1
20070058248 Nguyen et al. Mar 2007 A1
20070103836 Oh May 2007 A1
20070124730 Pytel May 2007 A1
20070159673 Freeman et al. Jul 2007 A1
20070188837 Shimizu et al. Aug 2007 A1
20070198886 Saito Aug 2007 A1
20070204672 Huang et al. Sep 2007 A1
20070213952 Cirelli Sep 2007 A1
20070283247 Brenneman et al. Dec 2007 A1
20080002259 Ishizawa et al. Jan 2008 A1
20080002260 Arrouy et al. Jan 2008 A1
20080030429 Hailpern Feb 2008 A1
20080043334 Itzkovitch et al. Feb 2008 A1
20080046773 Ham Feb 2008 A1
20080063802 Maula et al. Mar 2008 A1
20080068557 Menduni et al. Mar 2008 A1
20080125218 Collins May 2008 A1
20080146942 Dala-Krishna Jun 2008 A1
20080173036 Willaims Jul 2008 A1
20080177506 Kim Jul 2008 A1
20080205838 Crippa et al. Aug 2008 A1
20080215907 Wilson Sep 2008 A1
20080225393 Rinko Sep 2008 A1
20080235570 Sawada et al. Sep 2008 A1
20080246693 Hailpern et al. Oct 2008 A1
20080316768 Travis Dec 2008 A1
20090076791 Rhoades et al. Mar 2009 A1
20090091583 McCoy Apr 2009 A1
20090153797 Allon et al. Jun 2009 A1
20090224416 Laakkonen et al. Sep 2009 A1
20090245730 Kleemann Oct 2009 A1
20090287728 Martine et al. Nov 2009 A1
20090300528 Stambaugh Dec 2009 A1
20090310633 Ikegami Dec 2009 A1
20100005326 Archer Jan 2010 A1
20100019962 Fujita Jan 2010 A1
20100056274 Uusitalo et al. Mar 2010 A1
20100063854 Purvis et al. Mar 2010 A1
20100070378 Trotman et al. Mar 2010 A1
20100079841 Levola Apr 2010 A1
20100115428 Shuping et al. May 2010 A1
20100153934 Lachner Jun 2010 A1
20100194632 Raento et al. Aug 2010 A1
20100205541 Rappaport et al. Aug 2010 A1
20100214284 Rieffel et al. Aug 2010 A1
20100232016 Landa et al. Sep 2010 A1
20100232031 Batchko et al. Sep 2010 A1
20100244168 Shiozawa et al. Sep 2010 A1
20100274567 Carlson et al. Oct 2010 A1
20100274627 Carlson Oct 2010 A1
20100277803 Pockett et al. Nov 2010 A1
20100284085 Laakkonen Nov 2010 A1
20100296163 Sarikko Nov 2010 A1
20110010636 Hamilton, II et al. Jan 2011 A1
20110021263 Anderson et al. Jan 2011 A1
20110022870 Mcgrane Jan 2011 A1
20110041083 Gabai et al. Feb 2011 A1
20110050640 Lundback et al. Mar 2011 A1
20110050655 Mukawa Mar 2011 A1
20110122240 Becker May 2011 A1
20110145617 Thomson et al. Jun 2011 A1
20110170801 Lu et al. Jul 2011 A1
20110218733 Hamza et al. Sep 2011 A1
20110286735 Temblay Nov 2011 A1
20110291969 Rashid et al. Dec 2011 A1
20120011389 Driesen Jan 2012 A1
20120050535 Densham et al. Mar 2012 A1
20120075501 Oyagi et al. Mar 2012 A1
20120081392 Arthur Apr 2012 A1
20120089854 Breakstone Apr 2012 A1
20120113235 Shintani May 2012 A1
20120127062 Bar-Zeev et al. May 2012 A1
20120154557 Perez et al. Jun 2012 A1
20120218301 Miller Aug 2012 A1
20120246506 Knight Sep 2012 A1
20120249416 Maciocci et al. Oct 2012 A1
20120249741 Maciocci et al. Oct 2012 A1
20120260083 Andrews Oct 2012 A1
20120307075 Margalitq Dec 2012 A1
20120307362 Silverstein et al. Dec 2012 A1
20120314959 White et al. Dec 2012 A1
20120320460 Levola Dec 2012 A1
20120326948 Crocco et al. Dec 2012 A1
20130021486 Richardon Jan 2013 A1
20130050642 Lewis et al. Feb 2013 A1
20130050833 Lewis et al. Feb 2013 A1
20130051730 Travers et al. Feb 2013 A1
20130502058 Liu et al. Feb 2013
20130061240 Yan et al. Mar 2013 A1
20130077049 Bohn Mar 2013 A1
20130077170 Ukuda Mar 2013 A1
20130094148 Sloane Apr 2013 A1
20130129282 Li May 2013 A1
20130162940 Kurtin et al. Jun 2013 A1
20130169923 Schnoll et al. Jul 2013 A1
20130205126 Kruglick Aug 2013 A1
20130222386 Tannhauser et al. Aug 2013 A1
20130268257 Hu Oct 2013 A1
20130278633 Ahn et al. Oct 2013 A1
20130314789 Saarikko et al. Nov 2013 A1
20130318276 Dalal Nov 2013 A1
20130336138 Venkatraman et al. Dec 2013 A1
20130342564 Kinnebrew et al. Dec 2013 A1
20130342570 Kinnebrew et al. Dec 2013 A1
20130342571 Kinnebrew et al. Dec 2013 A1
20130343408 Cook Dec 2013 A1
20140002329 Nishimaki et al. Jan 2014 A1
20140013098 Yeung Jan 2014 A1
20140016821 Arth et al. Jan 2014 A1
20140022819 Oh et al. Jan 2014 A1
20140078023 Ikeda et al. Mar 2014 A1
20140082526 Park et al. Mar 2014 A1
20140119598 Ramachandran et al. May 2014 A1
20140126769 Reitmayr et al. May 2014 A1
20140140653 Brown et al. May 2014 A1
20140149573 Tofighbakhsh et al. May 2014 A1
20140168260 O'Brien et al. Jun 2014 A1
20140266987 Magyari Sep 2014 A1
20140267419 Ballard et al. Sep 2014 A1
20140274391 Stafford Sep 2014 A1
20140282105 Nordstrom Sep 2014 A1
20140313228 Kasahara Oct 2014 A1
20140340449 Plagemann et al. Nov 2014 A1
20140359589 Kodsky et al. Dec 2014 A1
20140375680 Ackerman et al. Dec 2014 A1
20150005785 Olson Jan 2015 A1
20150009099 Queen Jan 2015 A1
20150077312 Wang Mar 2015 A1
20150097719 Balachandreswaran et al. Apr 2015 A1
20150123966 Newman May 2015 A1
20150130790 Vazquez, II et al. May 2015 A1
20150134995 Park et al. May 2015 A1
20150138248 Schrader May 2015 A1
20150155939 Oshima et al. Jun 2015 A1
20150168221 Mao et al. Jun 2015 A1
20150205126 Schowengerdt Jul 2015 A1
20150235427 Nobori et al. Aug 2015 A1
20150235431 Schowengerdt Aug 2015 A1
20150253651 Russell et al. Sep 2015 A1
20150256484 Cameron Sep 2015 A1
20150269784 Miyawaki et al. Sep 2015 A1
20150294483 Wells et al. Oct 2015 A1
20150301955 Yakovenko et al. Oct 2015 A1
20150310657 Eden Oct 2015 A1
20150338915 Publicover et al. Nov 2015 A1
20150355481 Hilkes et al. Dec 2015 A1
20160004102 Nisper et al. Jan 2016 A1
20160015470 Border Jan 2016 A1
20160027215 Burns et al. Jan 2016 A1
20160033770 Fujimaki et al. Feb 2016 A1
20160077338 Robbins et al. Mar 2016 A1
20160085285 Mangione-Smith Mar 2016 A1
20160085300 Robbins et al. Mar 2016 A1
20160091720 Stafford et al. Mar 2016 A1
20160093099 Bridges Mar 2016 A1
20160093269 Buckley et al. Mar 2016 A1
20160123745 Cotier et al. May 2016 A1
20160139402 Lapstun May 2016 A1
20160155273 Lyren et al. Jun 2016 A1
20160180596 Gonzalez del Rosario Jun 2016 A1
20160187654 Border et al. Jun 2016 A1
20160191887 Casas Jun 2016 A1
20160202496 Billetz et al. Jul 2016 A1
20160217624 Finn et al. Jul 2016 A1
20160266412 Yoshida Sep 2016 A1
20160267708 Nistico et al. Sep 2016 A1
20160274733 Hasegawa et al. Sep 2016 A1
20160287337 Aram et al. Oct 2016 A1
20160300388 Stafford et al. Oct 2016 A1
20160321551 Priness et al. Nov 2016 A1
20160327798 Xiao et al. Nov 2016 A1
20160334279 Mittleman et al. Nov 2016 A1
20160357255 Lindh et al. Dec 2016 A1
20160370404 Quadrat et al. Dec 2016 A1
20160370510 Thomas Dec 2016 A1
20170038607 Camara Feb 2017 A1
20170060225 Zha et al. Mar 2017 A1
20170061696 Li et al. Mar 2017 A1
20170064066 Das et al. Mar 2017 A1
20170100664 Osterhout et al. Apr 2017 A1
20170102544 Vallius et al. Apr 2017 A1
20170115487 Travis Apr 2017 A1
20170122725 Yeoh et al. May 2017 A1
20170123526 Trail et al. May 2017 A1
20170127295 Black et al. May 2017 A1
20170131569 Aschwanden et al. May 2017 A1
20170147066 Katz et al. May 2017 A1
20170160518 Lanman et al. Jun 2017 A1
20170161951 Fix et al. Jun 2017 A1
20170185261 Perez et al. Jun 2017 A1
20170192239 Nakamura et al. Jul 2017 A1
20170201709 Igarashi et al. Jul 2017 A1
20170205903 Miller et al. Jul 2017 A1
20170206668 Poulos et al. Jul 2017 A1
20170213388 Margolis et al. Jul 2017 A1
20170214907 Lapstun Jul 2017 A1
20170219841 Popovich et al. Aug 2017 A1
20170232345 Rofougaran et al. Aug 2017 A1
20170235126 DiDomenico Aug 2017 A1
20170235129 Kamakura Aug 2017 A1
20170235142 Wall et al. Aug 2017 A1
20170235144 Piskunov et al. Aug 2017 A1
20170235147 Kamakura Aug 2017 A1
20170243403 Daniels et al. Aug 2017 A1
20170246070 Osterhout et al. Aug 2017 A1
20170254832 Ho et al. Sep 2017 A1
20170256096 Faaborg et al. Sep 2017 A1
20170258526 Lang Sep 2017 A1
20170266529 Reikmoto Sep 2017 A1
20170270712 Tyson et al. Sep 2017 A1
20170281054 Stever et al. Oct 2017 A1
20170287376 Bakar et al. Oct 2017 A1
20170293141 Schowengerdt et al. Oct 2017 A1
20170307886 Stenberg et al. Oct 2017 A1
20170307891 Bucknor et al. Oct 2017 A1
20170312032 Amanatullah et al. Nov 2017 A1
20170322418 Lin et al. Nov 2017 A1
20170322426 Tervo Nov 2017 A1
20170329137 Tervo Nov 2017 A1
20170332098 Rusanovskyy et al. Nov 2017 A1
20170336636 Amitai et al. Nov 2017 A1
20170357332 Balan et al. Dec 2017 A1
20170363871 Vallius Dec 2017 A1
20170371394 Chan Dec 2017 A1
20170371661 Sparling Dec 2017 A1
20180014266 Chen Jan 2018 A1
20180024289 Fattal Jan 2018 A1
20180044173 Netzer Feb 2018 A1
20180052007 Teskey et al. Feb 2018 A1
20180052501 Jones, Jr. et al. Feb 2018 A1
20180059305 Popovich et al. Mar 2018 A1
20180067779 Pillalamarri et al. Mar 2018 A1
20180070855 Eichler Mar 2018 A1
20180082480 White et al. Mar 2018 A1
20180084245 Lapstun Mar 2018 A1
20180088185 Woods et al. Mar 2018 A1
20180102981 Kurtzman et al. Apr 2018 A1
20180108179 Tomlin et al. Apr 2018 A1
20180114298 Malaika et al. Apr 2018 A1
20180129112 Osterhout May 2018 A1
20180131907 Schmirier et al. May 2018 A1
20180136466 Ko May 2018 A1
20180144691 Choi et al. May 2018 A1
20180150971 Adachi et al. May 2018 A1
20180151796 Akahane May 2018 A1
20180172995 Lee et al. Jun 2018 A1
20180188115 Hsu et al. Jul 2018 A1
20180189568 Powderly et al. Jul 2018 A1
20180190017 Mendez et al. Jul 2018 A1
20180191990 Motoyama et al. Jul 2018 A1
20180218545 Garcia et al. Aug 2018 A1
20180250589 Cossairt et al. Sep 2018 A1
20180284877 Klein Oct 2018 A1
20180292654 Wall et al. Oct 2018 A1
20180299678 Singer et al. Oct 2018 A1
20180357472 Dreessen Dec 2018 A1
20190005069 Filgueiras de Araujo et al. Jan 2019 A1
20190011691 Peyman Jan 2019 A1
20190056591 Tervo et al. Feb 2019 A1
20190087015 Lam et al. Mar 2019 A1
20190101758 Zhu et al. Apr 2019 A1
20190107723 Lee et al. Apr 2019 A1
20190137788 Suen May 2019 A1
20190155034 Singer et al. May 2019 A1
20190155439 Mukherjee et al. May 2019 A1
20190158926 Kang et al. May 2019 A1
20190162950 Lapstun May 2019 A1
20190167095 Krueger Jun 2019 A1
20190172216 Ninan et al. Jun 2019 A1
20190178654 Hare Jun 2019 A1
20190182415 Sivan Jun 2019 A1
20190196690 Chong et al. Jun 2019 A1
20190206116 Xu et al. Jul 2019 A1
20190219815 Price et al. Jul 2019 A1
20190243123 Bohn Aug 2019 A1
20190287270 Nakamura et al. Sep 2019 A1
20190318502 He et al. Oct 2019 A1
20190318540 Piemonte et al. Oct 2019 A1
20190321728 Imai et al. Oct 2019 A1
20190347853 Chen et al. Nov 2019 A1
20190380792 Poltaretskyi et al. Dec 2019 A1
20190388182 Kumar et al. Dec 2019 A1
20200066045 Stahl et al. Feb 2020 A1
20200098188 Bar-Zeev et al. Mar 2020 A1
20200100057 Galon et al. Mar 2020 A1
20200110928 Al Jazaery et al. Apr 2020 A1
20200117267 Gibson et al. Apr 2020 A1
20200117270 Gibson et al. Apr 2020 A1
20200184217 Faulkner Jun 2020 A1
20200184653 Faulker Jun 2020 A1
20200202759 Ukai et al. Jun 2020 A1
20200242848 Ambler et al. Jul 2020 A1
20200309944 Thoresen et al. Oct 2020 A1
20200356161 Wagner Nov 2020 A1
20200368616 Delamont Nov 2020 A1
20200391115 Leeper et al. Dec 2020 A1
20200409528 Lee Dec 2020 A1
20210008413 Asikainen et al. Jan 2021 A1
20210033871 Jacoby et al. Feb 2021 A1
20210041951 Gibson et al. Feb 2021 A1
20210053820 Gurin et al. Feb 2021 A1
20210093391 Poltaretskyi et al. Apr 2021 A1
20210093410 Gaborit et al. Apr 2021 A1
20210093414 Moore et al. Apr 2021 A1
20210097886 Kuester et al. Apr 2021 A1
20210132380 Wieczorek May 2021 A1
20210142582 Jones et al. May 2021 A1
20210158627 Cossairt et al. May 2021 A1
20210173480 Osterhout et al. Jun 2021 A1
20220366598 Azimi et al. Nov 2022 A1
Foreign Referenced Citations (96)
Number Date Country
101449270 Jun 2009 CN
104040410 Sep 2014 CN
104603675 May 2015 CN
106662754 May 2017 CN
107683497 Feb 2018 CN
105190427 Nov 2019 CN
0504930 Mar 1992 EP
0535402 Apr 1993 EP
0632360 Jan 1995 EP
1215522 Jun 2002 EP
1494110 Jan 2005 EP
1938141 Jul 2008 EP
1943556 Jul 2008 EP
2290428 Mar 2011 EP
2350774 Aug 2011 EP
1237067 Jan 2016 EP
3139245 Mar 2017 EP
3164776 May 2017 EP
3236211 Oct 2017 EP
2723240 Aug 2018 EP
2896986 Feb 2021 EP
2499635 Aug 2013 GB
2542853 Apr 2017 GB
938DEL2004 Jun 2006 IN
H03-036974 Apr 1991 JP
H10-333094 Dec 1998 JP
2002-529806 Sep 2002 JP
2003-029198 Jan 2003 JP
2003-141574 May 2003 JP
2003-228027 Aug 2003 JP
2003-329873 Nov 2003 JP
2005-303843 Oct 2005 JP
2007-012530 Jan 2007 JP
2007-86696 Apr 2007 JP
2007-273733 Oct 2007 JP
2008-257127 Oct 2008 JP
2009-090689 Apr 2009 JP
2009-244869 Oct 2009 JP
2010-014443 Jan 2010 JP
2011-033993 Feb 2011 JP
2011-257203 Dec 2011 JP
2012-015774 Jan 2012 JP
2012-235036 Nov 2012 JP
2013-525872 Jun 2013 JP
2014-500522 Jan 2014 JP
2014-192550 Oct 2014 JP
2015-191032 Nov 2015 JP
2016-502120 Jan 2016 JP
2016-85463 May 2016 JP
2016-516227 Jun 2016 JP
2017-015697 Jan 2017 JP
2017-153498 Sep 2017 JP
2017-531840 Oct 2017 JP
6232763 Nov 2017 JP
6333965 May 2018 JP
2005-0010775 Jan 2005 KR
10-2006-0059992 Jun 2006 KR
10-1372623 Mar 2014 KR
201219829 May 2012 TW
201803289 Jan 2018 TW
1991000565 Jan 1991 WO
2000030368 Jun 2000 WO
2002071315 Sep 2002 WO
2004095248 Nov 2004 WO
2006132614 Dec 2006 WO
2007037089 May 2007 WO
2007085682 Aug 2007 WO
2007102144 Sep 2007 WO
2008148927 Dec 2008 WO
2009101238 Aug 2009 WO
2014203440 Dec 2010 WO
2012030787 Mar 2012 WO
2013049012 Apr 2013 WO
2013062701 May 2013 WO
2014033306 Mar 2014 WO
2015143641 Oct 2015 WO
2015143641 Oct 2015 WO
2016054092 Apr 2016 WO
2017004695 Jan 2017 WO
2017044761 Mar 2017 WO
2017049163 Mar 2017 WO
2017120475 Jul 2017 WO
2017176861 Oct 2017 WO
2017203201 Nov 2017 WO
2018008232 Jan 2018 WO
2018031261 Feb 2018 WO
2018022523 Feb 2018 WO
2018044537 Mar 2018 WO
2018039273 Mar 2018 WO
2018057564 Mar 2018 WO
2018085287 May 2018 WO
2018087408 May 2018 WO
2018097831 May 2018 WO
2018166921 Sep 2018 WO
2019148154 Aug 2019 WO
2020010226 Jan 2020 WO
Non-Patent Literature Citations (235)
Entry
“Decision of Rejection dated Jan. 5, 2023 with English translation”, Chinese Patent Application No. 201880079474.6, (10 pages).
“Extended European Search Report dated Dec. 14, 2022”, European Patent Application No. 20886547.7, (8 pages).
“Final Office Action dated Dec. 29, 2022”, U.S. Appl. No. 17/098,059, (32 pages).
“First Office Action dated Dec. 22, 2022 with English translation”, Chinese Patent Application No. 201980061450.2, (11 pages).
“First Office Action dated Jan. 24, 2023 with English translation”, Japanese Patent Application No. 2020-549034, (7 pages).
“Non Final Office Action dated Dec. 7, 2022”, U.S. Appl. No. 17/357,795, (63 pages).
“Non Final Office Action dated Feb. 3, 2023”, U.S. Appl. No. 17/429,100, (16 pages).
“Non Final Office Action dated Feb. 3, 2023”, U.S. Appl. No. 17/497,965, (32 pages).
“Non Final Office Action dated Jan. 24, 2023”, U.S. Appl. No. 17/497,940, (10 pages).
“Non Final Office Action dated Mar. 1, 2023”, U.S. Appl. No. 18/046,739, (34 pages).
“Office Action dated Nov. 24, 2022 with English Translation”, Japanese Patent Application No. 2020-533730, (11 pages).
Molchanov, Pavlo et al., “Short-range FMCW monopulse radar for hand-gesture sensing”, 2015 IEEE Radar Conference (RadarCon) (2015), pp. 1491-1496.
Communication Pursuant to Article 94(3) EPC dated Jan. 4, 2022, European Patent Application No. 20154070.5, (8 pages).
Communication Pursuant to Article 94(3) EPC dated Oct. 21, 2021, European Patent Application No. 16207441.3, (4 pages).
Communication Pursuant to Rule 164(1) EPC dated Jul. 27, 2021, European Patent Application No. 19833664.6, (11 pages).
Extended European Search Report dated Jun. 30, 2021, European Patent Application No. 19811971.1, (9 pages).
Extended European Search Report dated Jan. 4, 2022, European Patent Application No. 19815085.6, (9 pages).
Extended European Search Report dated Jul. 16, 2021, European Patent Application No. 19810142.0, (14 pages).
Extended European Search Report dated Jul. 30, 2021, European Patent Application No. 19839970.1, (7 pages).
Extended European Search Report dated Oct. 27, 2021, European Patent Application No. 19833664.6, (10 pages).
Extended European Search Report dated Sep. 20, 2021, European Patent Application No. 19851373.1, (8 pages).
Extended European Search Report dated Sep. 28, 2021, European Patent Application No. 19845418.3, (13 pages).
Final Office Action dated Jun. 15, 2021, U.S. Appl. No. 16/928,313, (42 pages).
Final Office Action dated Sep. 17, 2021, U.S. Appl. No. 16/938,782, (44 pages).
Multi-core processor, TechTarget, 2013, (1 page).
Non Final Office Action dated Aug. 4, 2021, U.S. Appl. No. 16/864,721, (51 pages).
Non Final Office Action dated Jul. 9, 2021, U.S. Appl. No. 17/002,663, (43 pages).
Non Final Office Action dated Jul. 9, 2021, U.S. Appl. No. 16/833,093, (47 pages).
Non Final Office Action dated Jun. 10, 2021, U.S. Appl. No. 16/938,782, (40 pages).
Non Final Office Action dated Jun. 29, 2021, U.S. Appl. No. 16/698,588, (58 pages).
Non Final Office Action dated May 26, 2021, U.S. Appl. No. 16/214,575, (19 pages).
Non Final Office Action dated Sep. 20, 2021, U.S. Appl. No. 17/105,848, (56 pages).
Non Final Office Action dated Sep. 29, 2021, U.S. Appl. No. 16/748,193, (62 pages).
Giuseppe, Donato, et al., Stereoscopic helmet mounted system for real time 3D environment reconstruction and indoor ego-motion estimation, Proc. SPIE 6955, Head- and Helmet-Mounted Displays XIII: Design and Applications, 69550P.
Mrad, et al., A framework for System Level Low Power Design Space Exploration, 1991.
Sheng, Liu, et al., Time-multiplexed dual-focal plane head-mounted display with a liquid lens, Optics Letters, Optical Society of America, US, vol. 34, No. 11, Jun. 1, 2009 (Jun. 1, 2009), XP001524475, ISSN: 0146-9592, pp. 1642-1644.
“Extended European Search Report dated Aug. 24, 2022”, European Patent Application No. 20846338.0, (13 pages).
“Extended European Search Report dated Aug. 8, 2022”, European Patent Application No. 19898874.3, (8 pages).
“Extended European Search Report dated Sep. 8, 2022”, European Patent Application No. 20798769.4, (13 pages).
“Extended European Search Report dated Nov. 3, 2022”, European Patent Application No. 20770244.0, (23 pages).
“First Examination Report dated Jul. 27, 2022”, Chinese Patent Application No. 201980036675.2, (5 pages).
“First Examination Report dated Jul. 28, 2022”, Indian Patent Application No. 202047024232, (6 pages).
“First Office Action dated Sep. 16, 2022 with English translation”, Chinese Patent Application No. 201980063642.7, (7 pages).
“FS_XR5G: Permanent document, v0.4.0”, Qualcomm Incorporated, 3GPP TSG-SA 4 Meeting 103, retrieved from the Internet: URL:http://www.3gpp.org/ftp/Meetings%5F3GPP%5FSYNC/SA4/Docs/S4%2DI90526%2Ezip [retrieved on Apr. 12, 2019], Apr. 12, 2019, (98 pages).
“Non Final Office Action dated Jul. 26, 2022”, U.S. Appl. No. 17/098,059, (28 pages).
“Non Final Office Action dated Sep. 19, 2022”, U.S. Appl. No. 17/263,001, (14 pages).
“Notice of Reason for Rejection dated Oct. 28, 2022 with English translation”, Japanese Patent Application No. 2020-531452, (3 pages).
“Second Office Action dated Jul. 13, 2022 with English Translation”, Chinese Patent Application No. 201880079474.6, (10 pages).
“Second Office Action dated Jun. 20, 2022 with English Translation”, Chinese Patent Application No. 201880089255.6, (14 pages).
Anonymous, “Koi Pond: Top iPhone App Store Paid App”, https://web.archive.org/web/20080904061233/https://www.iphoneincanada.ca/reviews/koi-pond-top-iphone-app-store-paid-app/ [retrieved on Aug. 9, 2022], (2 pages).
Chittineni, C., et al., “Single filters for combined image geometric manipulation and enhancement”, Proceedings of SPIE vol. 1903, Image and Video Processing, Apr. 8, 1993, San Jose, CA. (Year: 1993), pp. 111-121.
Communication according to Rule 164(1) EPC dated Feb. 23, 2022, European Patent Application No. 20753144.3, (11 pages).
Extended European Search Report dated Jan. 28, 2022, European Patent Application No. 19815876.8, (9 pages).
Extended European Search Report dated Jun. 19, 2020, European Patent Application No. 20154750.2, (10 pages).
Extended European Search Report dated Mar. 22, 2022, European Patent Application No. 19843487.0, (14 pages).
Final Office Action dated Feb. 23, 2022, U.S. Appl. No. 16/748,193, (23 pages).
Final Office Action dated Feb. 3, 2022, U.S. Appl. No. 16/864,721, (36 pages).
First Office Action dated Mar. 14, 2022 with English translation, Chinese Patent Application No. 201880079474.6, (11 pages).
Non Final Office Action dated Apr. 1, 2022, U.S. Appl. No. 17/256,961, (65 pages).
Non Final Office Action dated Apr. 11, 2022, U.S. Appl. No. 16/938,782, (52 pages).
Non Final Office Action dated Apr. 12, 2022, U.S. Appl. No. 17/262,991, (60 pages).
Non Final Office Action dated Feb. 2, 2022, U.S. Appl. No. 16/783,866, (8 pages).
Non Final Office Action dated Mar. 31, 2022, U.S. Appl. No. 17/257,814, (60 pages).
Non Final Office Action dated Mar. 9, 2022, U.S. Appl. No. 16/870,676, (57 pages).
“Communication Pursuant to Article 94(3) EPC dated Apr. 25, 2022”, European Patent Application No. 18885707.2, (5 pages).
“Communication Pursuant to Article 94(3) EPC dated May 30, 2022”, European Patent Application No. 19768418.6, (6 pages).
“Extended European Search Report dated May 16, 2022”, European Patent Application No. 19871001.4, (9 pages).
“Extended European Search Report dated May 30, 2022”, European Patent Application No. 20753144.3, (10 pages).
“Final Office Action dated Jul. 13, 2022”, U.S. Appl. No. 17/262,991, (18 pages).
“First Examination Report dated May 13, 2022”, Indian Patent Application No. 202047026359, (8 pages).
“Non Final Office Action dated May 10, 2022”, U.S. Appl. No. 17/140,921, (25 pages).
“Non Final Office Action dated May 17, 2022”, U.S. Appl. No. 16/748,193, (11 pages).
Extended European Search Report dated Jan. 22, 2021, European Patent Application No. 18890390.0, (11 pages).
Extended European Search Report dated Mar. 4, 2021, European Patent Application No. 19768418.6, (9 pages).
Final Office Action dated Mar. 1, 2021, U.S. Appl. No. 16/214,575, (29 pages).
Final Office Action dated Mar. 19, 2021, U.S. Appl. No. 16/530,776, (25 pages).
International Search Report and Written Opinion dated Feb. 12, 2021, International Application No. PCT/US20/60555, (25 pages).
International Search Report and Written Opinion dated Feb. 2, 2021, International PCT Patent Application No. PCT/US20/60550, (9 pages).
Non Final Office Action dated Jan. 26, 2021, U.S. Appl. No. 16/928,313, (33 pages).
Non Final Office Action dated Jan. 27, 2021, U.S. Appl. No. 16/225,961, (15 pages).
Non Final Office Action dated Mar. 3, 2021, U.S. Appl. No. 16/427,337, (41 pages).
Altwaijry, et al., “Learning to Detect and Match Keypoints with Deep Architectures”, Proceedings of the British Machine Vision Conference (BMVC), BMVA Press, Sep. 2016, [retrieved on Jan. 8, 2021 (Jan. 8, 2021)] <URL: http://www.bmva.org/bmvc/2016/papers/paper049/index.html>, entire document, especially Abstract, pp. 1-6 and 9.
Lee, et al., “Self-Attention Graph Pooling”, Cornell University Library/Computer Science/Machine Learning, Apr. 17, 2019 [retrieved on Jan. 8, 2021 from the Internet <URL: https://arxiv.org/abs/1904.08082>], entire document.
Libovicky, et al., “Input Combination Strategies for Multi-Source Transformer Decoder”, Proceedings of the Third Conference on Machine Translation (WMT), vol. 1: Research Papers, Belgium, Brussels, Oct. 31-Nov. 1, 2018; retrieved on Jan. 8, 2021 (Jan. 8, 2021) from <URL: https://doi.org/10.18653/v1/W18-64026>, entire document, pp. 253-260.
Sarlin, et al., “SuperGlue: Learning Feature Matching with Graph Neural Networks”, Cornell University Library/Computer Science/Computer Vision and Pattern Recognition, Nov. 26, 2019 [retrieved on Jan. 8, 2021 from the Internet <URL: https://arxiv.org/abs/1911.11763>], entire document.
“Extended European Search Report dated Apr. 5, 2023”, European Patent Application No. 20888716.6, (11 pages).
“Final Office Action dated Mar. 10, 2023”, U.S. Appl. No. 17/357,795, (15 pages).
“First Office Action dated Apr. 21, 2023 with English translation”, Japanese Patent Application No. 2021-509779, (26 pages).
“First Office Action dated Apr. 13, 2023 with English Translation”, Japanese Patent Application No. 2020-567766, (7 pages).
“First Office Action dated Jan. 30, 2023 with English translation”, Chinese Patent Application No. 201980082951.9, (5 pages).
“First Office Action dated Mar. 27, 2023 with English translation”, Japanese Patent Application No. 2020-566617, (6 pages).
“First Office Action dated Mar. 6, 2023 with English translation”, Korean Patent Application No. 10-2020-7019685, (7 pages).
“Non Final Office Action dated Apr. 13, 2023”, U.S. Appl. No. 17/098,043, (7 pages).
“Non Final Office Action dated May 11, 2023”, U.S. Appl. No. 17/822,279, (24 pages).
“Office Action dated Apr. 13, 2023 with English translation”, Japanese Patent Application No. 2020-533730, (13 pages).
“Office Action dated Mar. 30, 2023 with English translation”, Japanese Patent Application No. 2020-566620, (10 pages).
“Second Office Action dated May 2, 2023 with English Translation”, Japanese Patent Application No. 2020-549034, (6 pages).
Li, Yujia , et al., “Graph Matching Networks for Learning the Similarity of Graph Structured Objects”, arxiv.org, Cornell University Library, 201 Olin Library Cornell University Ithaca, NY 14853, XP081268608, Apr. 29, 2019.
Luo, Zixin , et al., “ContextDesc: Local Descriptor Augmentation With Cross-Modality Context”, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, XP033686823, DOI: 10.1109/CVPR.2019.00263 [retrieved on Jan. 8, 2020], Jun. 15, 2019, pp. 2522-2531.
Zhang, Zhen, et al., “Deep Graphical Feature Learning for the Feature Matching Problem”, 2019 IEEE/CVF International Conference on Computer Vision (ICCV), IEEE, XP033723985, DOI: 10.1109/ICCV.2019.00519 [retrieved on Feb. 24, 2020], Oct. 27, 2019, pp. 5086-5095.
“ARToolKit: Hardware”, https://web.archive.org/web/20051013062315/http://www.hitl.washington.edu:80/artoolkit/documentation/hardware.htm (downloaded Oct. 26, 2020), Oct. 13, 2005, (3 pages).
Communication Pursuant to Article 94(3) EPC dated Sep. 4, 2019, European Patent Application No. 10793707.0, (4 pages).
Examination Report dated Jun. 19, 2020, European Patent Application No. 20154750.2, (10 pages).
Extended European Search Report dated May 20, 2020, European Patent Application No. 20154070.5, (7 pages).
Extended European Search Report dated Jun. 12, 2017, European Patent Application No. 16207441.3, (8 pages).
Final Office Action dated Aug. 10, 2020, U.S. Appl. No. 16/225,961, (13 pages).
Final Office Action dated Aug. 24, 2020, U.S. Appl. No. 16/435,933, (44 pages).
Final Office Action dated Dec. 4, 2019, U.S. Appl. No. 15/564,517, (15 pages).
Final Office Action dated Feb. 19, 2020, U.S. Appl. No. 15/552,897, (17 pages).
International Search Report and Written Opinion dated Mar. 12, 2020, International PCT Patent Application No. PCT/US19/67919, (14 pages).
International Search Report and Written Opinion dated Aug. 15, 2019, International PCT Patent Application No. PCT/US19/33987, (20 pages).
International Search Report and Written Opinion dated Jun. 15, 2020, International PCT Patent Application No. PCT/US2020/017023, (13 pages).
International Search Report and Written Opinion dated Oct. 16, 2019, International PCT Patent Application No. PCT/US19/43097, (10 pages).
International Search Report and Written Opinion dated Oct. 16, 2019, International PCT Patent Application No. PCT/US19/36275, (10 pages).
International Search Report and Written Opinion dated Oct. 16, 2019, International PCT Patent Application No. PCT/US19/43099, (9 pages).
International Search Report and Written Opinion dated Jun. 17, 2016, International PCT Patent Application No. PCT/FI2016/050172, (9 pages).
International Search Report and Written Opinion dated Oct. 22, 2019, International PCT Patent Application No. PCT/US19/43751, (9 pages).
International Search Report and Written Opinion dated Dec. 23, 2019, International PCT Patent Application No. PCT/US19/44953, (11 pages).
International Search Report and Written Opinion dated May 23, 2019, International PCT Patent Application No. PCT/US18/66514, (17 pages).
International Search Report and Written Opinion dated Sep. 26, 2019, International PCT Patent Application No. PCT/US19/40544, (12 pages).
International Search Report and Written Opinion dated Aug. 27, 2019, International PCT Application No. PCT/US2019/035245, (8 pages).
International Search Report and Written Opinion dated Dec. 27, 2019, International Application No. PCT/US19/47746, (16 pages).
International Search Report and Written Opinion dated Sep. 30, 2019, International Patent Application No. PCT/US19/40324, (7 pages).
International Search Report and Written Opinion dated Sep. 4, 2020, International Patent Application No. PCT/US20/31036, (13 pages).
International Search Report and Written Opinion dated Jun. 5, 2020, International Patent Application No. PCT/US20/19871, (9 pages).
International Search Report and Written Opinion dated Aug. 8, 2019, International PCT Patent Application No. PCT/US2019/034763, (8 pages).
International Search Report and Written Opinion dated Oct. 8, 2019, International PCT Patent Application No. PCT/US19/41151, (7 pages).
International Search Report and Written Opinion dated Jan. 9, 2020, International Application No. PCT/US19/55185, (10 pages).
International Search Report and Written Opinion dated Feb. 28, 2019, International Patent Application No. PCT/US18/64686, (8 pages).
International Search Report and Written Opinion dated Feb. 7, 2020, International PCT Patent Application No. PCT/US2019/061265, (11 pages).
International Search Report and Written Opinion dated Jun. 11, 2019, International PCT Application No. PCT/US19/22620, (7 pages).
Invitation to Pay Additional Fees dated Aug. 15, 2019, International PCT Patent Application No. PCT/US19/36275, (2 pages).
Invitation to Pay Additional Fees dated Sep. 24, 2020, International Patent Application No. PCT/US2020/043596, (3 pages).
Invitation to Pay Additional Fees dated Oct. 22, 2019, International PCT Patent Application No. PCT/US19/47746, (2 pages).
Invitation to Pay Additional Fees dated Apr. 3, 2020, International Patent Application No. PCT/US20/17023, (2 pages).
Invitation to Pay Additional Fees dated Oct. 17, 2019, International PCT Patent Application No. PCT/US19/44953, (2 pages).
Non Final Office Action dated Nov. 19, 2019, U.S. Appl. No. 16/355,611, (31 pages).
Non Final Office Action dated Aug. 21, 2019, U.S. Appl. No. 15/564,517, (14 pages).
Non Final Office Action dated Jul. 27, 2020, U.S. Appl. No. 16/435,933, (16 pages).
Non Final Office Action dated Jun. 17, 2020, U.S. Appl. No. 16/682,911, (22 pages).
Non Final Office Action dated Jun. 19, 2020, U.S. Appl. No. 16/225,961, (35 pages).
Non Final Office Action dated Nov. 5, 2020, U.S. Appl. No. 16/530,776, (45 pages).
Non Final Office Action dated Oct. 22, 2019, U.S. Appl. No. 15/859,277, (15 pages).
Non Final Office Action dated Sep. 1, 2020, U.S. Appl. No. 16/214,575, (40 pages).
Notice of Allowance dated Mar. 25, 2020, U.S. Appl. No. 15/564,517, (11 pages).
Notice of Allowance dated Oct. 5, 2020, U.S. Appl. No. 16/682,911, (27 pages).
Notice of Reasons for Refusal dated Sep. 11, 2020 with English translation, Japanese Patent Application No. 2019-140435, (6 pages).
“Phototourism Challenge”, CVPR 2019 Image Matching Workshop, https://image-matching-workshop.github.io, (16 pages).
Summons to attend oral proceedings pursuant to Rule 115(1) EPC mailed on Jul. 15, 2019, European Patent Application No. 15162521.7, (7 pages).
Aarik, J. et al., “Effect of crystal structure on optical properties of TiO2 films grown by atomic layer deposition”, Thin Solid Films; Publication [online]. May 19, 1998 [retrieved Feb. 19, 2020]. Retrieved from the Internet: <URL: https://www.sciencedirect.com/science/article/pii/S0040609097001351?via%3Dihub>; DOI: 10.1016/S0040-6090(97)00135-1; see entire document, (2 pages).
Arandjelović, Relja et al., “Three things everyone should know to improve object retrieval”, CVPR, 2012, (8 pages).
Azom, “Silica - Silicon Dioxide (SiO2)”, AZO Materials; Publication [Online]. Dec. 13, 2001 [retrieved Feb. 19, 2020]. Retrieved from the Internet: <URL: https://www.azom.com/article.aspx?ArticleID=1114>, (6 pages).
Azuma, Ronald T. , “A Survey of Augmented Reality”, Presence: Teleoperators and Virtual Environments 6, 4 (Aug. 1997), 355-385; https://web.archive.org/web/20010604100006/http://www.cs.unc.edu/˜azuma/ARpresence.pdf (downloaded Oct. 26, 2020).
Azuma, Ronald T. , “Predictive Tracking for Augmented Reality”, Department of Computer Science, Chapel Hill NC; TR95-007, Feb. 1995, 262 pages.
Battaglia, Peter W. et al., “Relational inductive biases, deep learning, and graph networks”, arXiv:1806.01261, Oct. 17, 2018, pp. 1-40.
Berg, Alexander C et al., “Shape matching and object recognition using low distortion correspondences”, In CVPR, 2005, (8 pages).
Bian, Jiawang et al., “GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence.”, In CVPR (Conference on Computer Vision and Pattern Recognition), 2017, (10 pages).
Bimber, Oliver et al., “Spatial Augmented Reality: Merging Real and Virtual Worlds”, https://web.media.mit.edu/˜raskar/book/BimberRaskarAugmentedRealityBook.pdf; published by A K Peters/CRC Press (Jul. 31, 2005); eBook (3rd Edition, 2007), (393 pages).
Brachmann, Eric et al., “Neural-Guided RANSAC: Learning Where to Sample Model Hypotheses”, In ICCV (International Conference on Computer Vision ), arXiv:1905.04132v2 [cs.CV] Jul. 31, 2019, (17 pages).
Caetano, Tibério S et al., “Learning graph matching”, IEEE TPAMI, 31(6):1048-1058, 2009.
Cech, Jan et al., “Efficient sequential correspondence selection by cosegmentation”, IEEE TPAMI, 32(9):1568-1581, Sep. 2010.
Cuturi, Marco , “Sinkhorn distances: Lightspeed computation of optimal transport”, NIPS, 2013, (9 pages).
Dai, Angela et al., “ScanNet: Richly-annotated 3d reconstructions of indoor scenes”, In CVPR, arXiv:1702.04405v2 [cs.CV] Apr. 11, 2017, (22 pages).
Deng, Haowen et al., “PPFnet: Global context aware local features for robust 3d point matching”, In CVPR, arXiv:1802.02669v2 [cs.CV] Mar. 1, 2018, (12 pages).
Detone, Daniel et al., “Deep image homography estimation”, In RSS Work-shop: Limits and Potentials of Deep Learning in Robotics, arXiv:1606.03798v1 [cs.CV] Jun. 13, 2016, (6 pages).
Detone, Daniel et al., “Self-improving visual odometry”, arXiv:1812.03245, Dec. 8, 2018, (9 pages).
Detone, Daniel et al., “SuperPoint: Self-supervised interest point detection and description”, In CVPR Workshop on Deep Learning for Visual SLAM, arXiv:1712.07629v4 [cs.CV] Apr. 19, 2018, (13 pages).
Dusmanu, Mihai et al., “D2-net: A trainable CNN for joint detection and description of local features”, CVPR, arXiv:1905.03561v1 [cs.CV] May 9, 2019, (16 pages).
Ebel, Patrick et al., “Beyond cartesian representations for local descriptors”, ICCV, arXiv:1908.05547v1 [cs.CV] Aug. 15, 2019, (11 pages).
Fischler, Martin A et al., “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography”, Communications of the ACM, 24(6): 1981, pp. 381-395.
Gilmer, Justin et al., “Neural message passing for quantum chemistry”, In ICML, arXiv:1704.01212v2 [cs.LG] Jun. 12, 2017, (14 pages).
Goodfellow, “Titanium Dioxide - Titania (TiO2)”, AZO Materials; Publication [online]. Jan. 11, 2002 [retrieved Feb. 19, 2020]. Retrieved from the Internet: <URL: https://www.azom.com/article.aspx?ArticleID=1179>, (9 pages).
Hartley, Richard et al., “Multiple View Geometry in Computer Vision”, Cambridge University Press, 2003, pp. 1-673.
Jacob, Robert J., “Eye Tracking in Advanced Interface Design”, Human-Computer Interaction Lab, Naval Research Laboratory, Washington, D.C., circa 2003, pp. 1-50.
Lee, Juho et al., “Set transformer: A frame-work for attention-based permutation-invariant neural networks”, ICML, arXiv:1810.00825v3 [cs.LG] May 26, 2019, (17 pages).
Leordeanu, Marius et al., “A spectral technique for correspondence problems using pairwise constraints”, Proceedings of (ICCV) International Conference on Computer Vision, vol. 2, pp. 1482-1489, Oct. 2005, (8 pages).
Levola, T., “Diffractive Optics for Virtual Reality Displays”, Journal of the SID EuroDisplay 14/05, 2005, XP008093627, chapters 2-3, Figures 2 and 10, pp. 467-475.
Levola, Tapani , “Invited Paper: Novel Diffractive Optical Components for Near to Eye Displays—Nokia Research Center”, SID 2006 Digest, 2006 SID International Symposium, Society for Information Display, vol. XXXVII, May 24, 2005, chapters 1-3, figures 1 and 3, pp. 64-67.
Li, Yujia et al., “Graph matching networks for learning the similarity of graph structured objects”, ICML, arXiv:1904.12787v2 [cs.LG] May 12, 2019, (18 pages).
Li, Zhengqi et al., “Megadepth: Learning single-view depth prediction from internet photos”, In CVPR, arXiv:1804.00607v4 [cs.CV] Nov. 28, 2018, (10 pages).
Loiola, Eliane M. et al., “A survey for the quadratic assignment problem”, European journal of operational research, 176(2): 2007, pp. 657-690.
Lowe, David G. , “Distinctive image features from scale-invariant keypoints”, International Journal of Computer Vision, 60(2): 91-110, 2004, (28 pages).
Luo, Zixin et al., “ContextDesc: Local descriptor augmentation with cross-modality context”, CVPR, arXiv:1904.04084v1 [cs.CV] Apr. 8, 2019, (14 pages).
Memon, F. et al., “Synthesis, Characterization and Optical Constants of Silicon Oxycarbide”, EPJ Web of Conferences; Publication [online]. Mar. 23, 2017 [retrieved Feb. 19, 2020]. <URL: https://www.epj-conferences.org/articles/epjconf/pdf/2017/08/epjconf_nanop201700002.pdf>; DOI: 10.1051/epjconf/201713900002, (8 pages).
Munkres, James , “Algorithms for the assignment and transportation problems”, Journal of the Society for Industrial and Applied Mathematics, 5(1): 1957, pp. 32-38.
Ono, Yuki et al., “LF-Net: Learning local features from images”, 32nd Conference on Neural Information Processing Systems (NIPS 2018), arXiv:1805.09662v2 [cs.CV] Nov. 22, 2018, (13 pages).
Paszke, Adam et al., “Automatic differentiation in Pytorch”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, (4 pages).
Peyré, Gabriel et al., “Computational Optimal Transport”, Foundations and Trends in Machine Learning, 11(5-6):355-607, 2019; arXiv:1803.00567v4 [stat.ML] Mar. 18, 2020, (209 pages).
Qi, Charles R. et al., “Pointnet++: Deep hierarchical feature learning on point sets in a metric space.”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA., (10 pages).
Qi, Charles R et al., “Pointnet: Deep Learning on Point Sets for 3D Classification and Segmentation”, CVPR, arXiv:1612.00593v2 [cs.CV] Apr. 10, 2017, (19 pages).
Radenović, Filip et al., “Revisiting Oxford and Paris: Large-Scale Image Retrieval Benchmarking”, CVPR, arXiv:1803.11285v1 [cs.CV] Mar. 29, 2018, (10 pages).
Raguram, Rahul et al., “A comparative analysis of ransac techniques leading to adaptive real-time random sample consensus”, Computer Vision—ECCV 2008, 10th European Conference on Computer Vision, Marseille, France, Oct. 12-18, 2008, Proceedings, Part I, (15 pages).
Ranftl, René et al., “Deep fundamental matrix estimation”, European Conference on Computer Vision (ECCV), 2018, (17 pages).
Revaud, Jerome et al., “R2D2: Repeatable and Reliable Detector and Descriptor”, In NeurIPS, arXiv:1906.06195v2 [cs.CV] Jun. 17, 2019, (12 pages).
Rocco, Ignacio et al., “Neighbourhood Consensus Networks”, 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada, arXiv:1810.10510v2 [cs.CV] Nov. 29, 2018, (20 pages).
Rublee, Ethan et al., “ORB: An efficient alternative to SIFT or SURF”, Proceedings of the IEEE International Conference on Computer Vision, pp. 2564-2571, 2011; 10.1109/ICCV.2011.612654, (9 pages).
Sattler, Torsten et al., “SCRAMSAC: Improving RANSAC's efficiency with a spatial consistency filter”, ICCV, 2009: 2090-2097., (8 pages).
Schonberger, Johannes L. et al., “Pixelwise view selection for un-structured multi-view stereo”, Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, Oct. 11-14, 2016, Proceedings, Part III, pp. 501-518, 2016.
Schonberger, Johannes L. et al., “Structure-from-motion revisited”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4104-4113, (11 pages).
Sinkhorn, Richard et al., “Concerning nonnegative matrices and doubly stochastic matrices.”, Pacific Journal of Mathematics, 1967, pp. 343-348.
Spencer, T. et al., “Decomposition of poly(propylene carbonate) with UV sensitive iodonium salts”, Polymer Degradation and Stability; [online]. Dec. 24, 2010 [retrieved Feb. 19, 2020]. <URL: http://kohl.chbe.gatech.edu/sites/default/files/linked_files/publications/2011Decomposition%20of%20poly(propylene%20carbonate)%20with%20UV%20sensitive%20iodonium%20salts.pdf>; DOI: 10.1016/j.polymdegradstab.2010.12.003, (17 pages).
Tanriverdi, Vildan et al., “Interacting With Eye Movements in Virtual Environments”, Department of Electrical Engineering and Computer Science, Tufts University; Proceedings of the SIGCHI conference on Human Factors in Computing Systems, Apr. 2000, pp. 1-8.
Thomee, Bart et al., “YFCC100m: The new data in multimedia research”, Communications of the ACM, 59(2):64-73, 2016; arXiv:1503.01817v2 [cs.MM] Apr. 25, 2016, (8 pages).
Torresani, Lorenzo et al., “Feature correspondence via graph matching: Models and global optimization”, Computer Vision—ECCV 2008, 10th European Conference on Computer Vision, Marseille, France, Oct. 12-18, 2008, Proceedings, Part II, (15 pages).
Tuytelaars, Tinne et al., “Wide baseline stereo matching based on local, affinely invariant regions”, BMVC, 2000, pp. 1-14.
Ulyanov, Dmitry et al., “Instance normalization: The missing ingredient for fast stylization”, arXiv:1607.08022v3 [cs.CV] Nov. 6, 2017, (6 pages).
Vaswani, Ashish et al., “Attention is all you need”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA; arXiv:1706.03762v5 [cs.CL] Dec. 6, 2017, (15 pages).
Veličković, Petar et al., “Graph attention networks”, ICLR, arXiv:1710.10903v3 [stat.ML] Feb. 4, 2018, (12 pages).
Villani, Cédric , “Optimal transport: old and new”, vol. 338. Springer Science & Business Media, Jun. 2008, pp. 1-998.
Wang, Xiaolong et al., “Non-local neural networks”, CVPR, arXiv:1711.07971v3 [cs.CV] Apr. 13, 2018, (10 pages).
Wang, Yue et al., “Deep Closest Point: Learning representations for point cloud registration”, ICCV, arXiv:1905.03304v1 [cs.CV] May 8, 2019, (10 pages).
Wang, Yue et al., “Dynamic Graph CNN for learning on point clouds”, ACM Transactions on Graphics, arXiv:1801.07829v2 [cs.CV] Jun. 11, 2019, (13 pages).
Weissel, et al., “Process cruise control: event-driven clock scaling for dynamic power management”, Proceedings of the 2002 international conference on Compilers, architecture, and synthesis for embedded systems, Oct. 11, 2002 (Oct. 11, 2002). Retrieved on May 16, 2020 (May 16, 2020) from <URL: https://dl.acm.org/doi/pdf/10.1145/581630.581668>, pp. 238-246.
Yi, Kwang M. et al., “Learning to find good correspondences”, CVPR, arXiv:1711.05971v2 [cs.CV] May 21, 2018, (13 pages).
Yi, Kwang Moo et al., “Lift: Learned invariant feature transform”, ECCV, arXiv:1603.09114v2 [cs.CV] Jul. 29, 2016, (16 pages).
Zaheer, Manzil et al., “Deep Sets”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA; arXiv:1703.06114v3 [cs.LG] Apr. 14, 2018, (29 pages).
Zhang, Jiahui et al., “Learning two-view correspondences and geometry using order-aware network”, ICCV; arXiv:1908.04964v1 [cs.CV] Aug. 14, 2019, (11 pages).
Zhang, Li et al., “Dual graph convolutional net- work for semantic segmentation”, BMVC, 2019; arXiv:1909.06121v3 [cs.CV] Aug. 26, 2020, (18 pages).
European Search Report dated Oct. 15, 2020, European Patent Application No. 20180623.9, (10 pages).
Extended European Search Report dated Nov. 3, 2020, European Patent Application No. 18885707.2, (7 pages).
Extended European Search Report dated Nov. 4, 2020, European Patent Application No. 20190980.1, (14 pages).
International Search Report and Written Opinion dated Dec. 3, 2020, International Patent Application No. PCT/US20/43596, (25 pages).
Butail, et al., “Putting the fish in the fish tank: Immersive VR for animal behavior experiments”, In: 2012 IEEE International Conference on Robotics and Automation, May 18, 2012 (May 18, 2012). Retrieved on Nov. 14, 2020 (Nov. 14, 2020) from <http://cdcl.umd.edu/papers/icra2012.pdf>, entire document, (8 pages).
“Communication Pursuant to Article 94(3) EPC dated May 23, 2023”, European Patent Application No. 18890390.0, (5 pages).
“First Examination Report dated Aug. 8, 2023”, Australian Patent Application No. 2018379105, (3 pages).
“First Office Action dated Jul. 4, 2023 with English translation”, Japanese Patent Application No. 2021-505669, (6 pages).
“First Office Action dated Jun. 13, 2023 with English translation”, Japanese Patent Application No. 2020-567853, (7 pages).
“First Office Action dated May 26, 2023 with English translation”, Japanese Patent Application No. 2021-500607, (6 pages).
“First Office Action dated May 30, 2023 with English translation”, Japanese Patent Application No. 2021-519873, (8 pages).
“Non Final Office Action dated Aug. 2, 2023”, U.S. Appl. No. 17/807,600, (25 pages).
“Non Final Office Action dated Jul. 20, 2023”, U.S. Appl. No. 17/650,188, (11 pages).
“Non Final Office Action dated Jun. 14, 2023”, U.S. Appl. No. 17/516,483, (10 pages).
“Notice of Allowance dated Jul. 27, 2023 with English translation”, Korean Patent Application No. 10-2020-7019685, (4 pages).
“Office Action dated Jul. 20, 2023 with English translation”, Japanese Patent Application No. 2021-505884, (6 pages).
“Office Action dated Jun. 8, 2023 with English translation”, Japanese Patent Application No. 2021-503762, (6 pages).
Related Publications (1)
Number Date Country
20210141076 A1 May 2021 US
Provisional Applications (1)
Number Date Country
62678621 May 2018 US