Apparatus for a near-eye display

Information

  • Patent Grant
  • Patent Number
    11,347,960
  • Date Filed
    Friday, November 27, 2020
  • Date Issued
    Tuesday, May 31, 2022
Abstract
An apparatus for providing gaze tracking in a near-eye display. Certain examples provide an apparatus including a light modulator configured to receive light of a first range of wavelengths and generate an image beam therefrom. The light modulator is further configured to receive light of a second range of wavelengths and generate a probe beam therefrom. The apparatus also includes one or more light guides including one or more in-coupling diffractive element areas and one or more out-coupling diffractive element areas. The one or more in-coupling diffractive element areas are configured to receive and in-couple the image beam and the probe beam into the one or more light guides. The one or more out-coupling diffractive element areas are configured to out-couple, from the one or more light guides: the image beam to a user's eye for user viewing, and the probe beam to the user's eye for detection of reflection therefrom.
Description
TECHNOLOGICAL FIELD

Examples of the present disclosure relate to an apparatus for a near-eye display. Some examples, though without prejudice to the foregoing, relate to an apparatus for providing gaze tracking in a near-eye display.


BACKGROUND

Gaze tracking, namely the process of determining a point of gaze of a user's eye so as to determine a line of sight associated with the user's eye or to determine where the user is looking (and thus what the user is looking at), typically relies on capturing video images of a user's eye(s). Such video-based gaze tracking typically uses infrared (IR) LEDs or infrared lasers to illuminate the eye and detects reflections/glints of the infrared light from the eye (e.g. its cornea/surface). A determination of a user's gaze may be calculated based on the detected IR reflections and detected eye features such as detected pupil position. Conventional near-eye displays with integrated gaze tracking functionality are not always optimal, not least in view of the additional components, and the increased complexity, size and weight, necessary to incorporate both gaze tracking functionality and display functionality in a near-eye display.


The listing or discussion of any prior-published document or any background in this specification should not necessarily be taken as an acknowledgement that the document or background is part of the state of the art or is common general knowledge. One or more aspects/examples of the present disclosure may or may not address one or more of the background issues.


BRIEF SUMMARY

An aspect of the present invention is set out in the claims.


According to at least some but not necessarily all examples of the disclosure there is provided an apparatus comprising:

    • a light modulator configured to receive light of a first range of wavelengths and generate an image beam therefrom, wherein the light modulator is further configured to receive light of a second range of wavelengths and generate a probe beam therefrom;
    • one or more light guides comprising:
      • one or more in-coupling diffractive element areas, and
      • one or more out-coupling diffractive element areas;
    • wherein the one or more in-coupling diffractive element areas are configured to receive and in-couple the image beam and the probe beam into the one or more light guides; and
    • wherein the one or more out-coupling diffractive element areas are configured to out-couple, from the one or more light guides:
      • the image beam to a user's eye for user viewing, and
      • the probe beam to the user's eye for detection of reflection therefrom.


According to at least some but not necessarily all examples of the disclosure there is provided an apparatus comprising:

    • means configured to receive light of a first range of wavelengths and generate an image beam therefrom, wherein the means is further configured to receive light of a second range of wavelengths and generate a probe beam therefrom;
    • one or more means for guiding light comprising:
      • one or more in-coupling diffractive means, and
      • one or more out-coupling diffractive means;
    • wherein the one or more in-coupling diffractive means are configured to receive and in-couple the image beam and the probe beam into the one or more means for guiding light; and wherein the one or more out-coupling diffractive means are configured to out-couple, from the one or more means for guiding light:
      • the image beam to a user's eye for user viewing, and
      • the probe beam to the user's eye for detection of reflection therefrom.


According to at least some but not necessarily all examples of the disclosure there is provided an apparatus comprising:

    • means configured to generate an image beam of light at a first range of wavelengths, wherein the means is further configured to generate a probe beam of light of a second range of wavelengths;
    • one or more means for guiding light comprising:
      • one or more in-coupling diffractive means, and
      • one or more out-coupling diffractive means;
    • wherein the one or more in-coupling diffractive means are configured to receive and in-couple the image beam and the probe beam into the one or more means for guiding light; and wherein the one or more out-coupling diffractive means are configured to out-couple, from the one or more means for guiding light:
      • the image beam to a user's eye for user viewing, and
      • the probe beam to the user's eye for detection of reflection therefrom.


Certain examples of the apparatus may be provided as a module for a device or as a device itself. The device may be configured for at least one of: portable use, wearable use, head mountable use. Certain examples of the apparatus are configured for use with a Near Eye Display (NED) for providing both display and gaze tracking functionality.


The examples of the present disclosure and the accompanying claims may be suitably combined in any manner apparent to one of ordinary skill in the art.





BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of various examples of the present disclosure that are useful for understanding the detailed description and certain embodiments of the invention, reference will now be made by way of example only to the accompanying drawings in which:



FIG. 1 schematically illustrates an example of an apparatus according to the present disclosure;



FIG. 2 schematically illustrates a further example of an apparatus according to the present disclosure;



FIG. 3 schematically illustrates an example of diffractive light guides suitable for use with examples of the present disclosure;



FIG. 4 schematically illustrates a yet further example of an apparatus according to the present disclosure; and



FIGS. 5A and 5B schematically illustrate examples of light modulators suitable for use with examples of the present disclosure.





DETAILED DESCRIPTION

Examples of apparatuses according to the present disclosure will now be described with reference to the Figures. Similar reference numerals are used in the Figures to designate similar features. For clarity, not all reference numerals are necessarily displayed in all figures.



FIG. 1 schematically illustrates a block diagram of an apparatus 100 according to an example of the present disclosure. FIG. 1 focuses on the functional components necessary for describing the operation of the apparatus.


The apparatus 100 comprises a light modulator 101 configured to receive light of a first range of wavelengths 102, and generate an image beam 103 therefrom. The light modulator 101 is further configured so as to receive light of a second range of wavelengths 104 and generate a probe beam 105 therefrom.


The apparatus 100 further comprises one or more light guides 106 comprising one or more in-coupling diffractive element areas 107, and one or more out-coupling diffractive element areas 108. The one or more in-coupling diffractive element areas 107 are configured to receive and in-couple the image beam 103 and the probe beam 105 into the one or more light guides 106. The one or more out-coupling diffractive element areas 108 are configured to out-couple, from the one or more light guides 106:

    • the image beam 109 to a user's eye 110 for user viewing, and
    • the probe beam 111 to the user's eye 110 for detection of reflection therefrom.


Each of the components described above may be one or more of any element, device, mechanism or means configured to perform the corresponding functions of the respective components as described in greater detail below. The component blocks of FIG. 1 are functional, and the functions described may or may not be performed by a single physical entity; for example, the light modulator 101 may correspond to an assembly/arrangement of components, not least for example as shown in FIG. 5A or FIG. 5B. Accordingly, the blocks support combinations of means for performing the specified functions.


The light modulator 101 may comprise a light modulating/modifying means configured to modulate/modify the incident light 102, 104 of the first and second ranges of wavelengths so as to impart an image or pattern thereon and generate one or more collimated beams 103, 105 comprising a variable image or pattern. The light modulator 101 may comprise one or more of: an optical engine, a light engine, a micro display, optics (e.g. enlarging optics and collimating optics), a projector, a digital light processing (DLP) system, a liquid crystal on silicon (LCoS) display, a retinal scan display, a laser scanning system, a microelectromechanical systems (MEMS) device (e.g. for providing scanning/raster scanning). The light modulator 101, in certain examples, may be: a reflective-based display, a transmissive-based display or an emissive-based display.


In some examples, instead of receiving light of a first and a second range of wavelengths and generating an image beam and probe beam therefrom, the light modulator may be configured to generate by itself an image beam of light of a first range of wavelengths and a probe beam of light of a second range of wavelengths. For example, the light modulator may comprise one or more display elements which themselves create a pixelated image/probe pattern that is then projected through an optical setup, e.g. an OLED display or an LED array display, for in-coupling to the one or more light guides.


The image beam 103 may comprise a collimated beam of light that may be expanded and guided to the user's eye 110 for viewing and perceiving the image which is imparted, by the light modulator 101, to the light 102 that forms the image beam 103. Where the light 102 of the first range of wavelengths comprises one or more colour channels of visible light, e.g. one or more of red (R), green (G) and blue (B), the image beam 103 may correspondingly comprise light within the visible range of the electromagnetic spectrum, e.g. one or more of R, G and B. In certain examples, one or more image beams may be generated corresponding to the image in differing colour channels, e.g. one or more of R, G and B, from light received in the respective colour channels.


The probe beam 105 may comprise a collimated beam of light that may be expanded and guided to the user's eye 110 for reflection therefrom. In certain examples (see FIG. 2), such reflection is detected and used in part to determine a gaze or direction of view of the eye 110. The light modulator 101 may impart a variable pattern, image, shape or size to the light 104 that forms the probe beam 105. Where the light of the second range of wavelengths comprises infrared light, the probe beam may correspondingly comprise light within the infrared part of the electromagnetic spectrum.


The one or more light guides 106 may comprise light guiding means comprising one or more means for diffracting beams into and out of the light guide, for example a diffractive optical element. The light guides may, for example, be one or more substantially planar substrates comprising one or more areas of diffractive elements/gratings/grooves that are disposed on lower or upper surfaces of the substrate or even located internally of the substrate. The light guide may be an exit pupil expander configured to expand an incident beam of light 103, 105 in one or more directions. The light guides may be transparent and the apparatus may be configured such that the user can see the real world through the apparatus/light guides whilst also seeing a virtual image/world via the apparatus/light guides.


Certain examples of the apparatus may reduce the complexity of a combined display and gaze tracking device, may provide improved integration and require fewer components, and may thus also reduce the weight and size of the apparatus by enabling the sharing/reutilisation of various components. This may enable the provision of a miniaturised and efficient apparatus for integrated NED and gaze tracking. For example, the light modulator 101 which generates the image beam 103 is also used to generate the probe beam 105. In other examples, a light source for emitting the light for the image beam 103 is also used to generate the light for the probe beam 105.


The use of the light modulator 101 to generate the probe beam 105 may enable a pattern, shape or size of the probe beam to be dynamically varied. Such control of the probe beam may enable the creation of complex variable shapes, sizes, patterns/images of the probe beam. Moreover, the probe beam could be dynamically adjusted during use so as to achieve optimal detection and measurement results thereby enabling more robust gaze tracking as well as simpler gaze tracking calibration.


Furthermore, certain examples (e.g. FIG. 1) provide the ability to share/utilise one or more of in-coupling diffractive elements 107 and/or one or more of the out-coupling diffractive elements 108 to in-couple and/or out-couple both the image beam 103 and the probe beam 105 and may thereby reduce the number of optical components and the complexity of the arrangements of such components that may otherwise have been required to direct each of the image beam and probe beam separately and independently.


Furthermore, certain examples (see e.g. FIG. 5A) may enable the use of a combined light source 512 which simultaneously generates both the light 102 for the image beam 103 as well as light 104 for the probe beam 105 and may thereby reduce the number of components (and thus the weight, size and complexity) of the apparatus.


In the example of FIG. 1, the light guide 106 comprises a substrate of an optical material having first and second opposing surfaces. The substrate comprises an area of in-coupling diffractive elements 107 and an area of out-coupling diffractive elements 108 which may be laterally spaced apart from one another on the planar substrate for example at either end of the substrate. The in-coupling diffractive element area is configured to receive one or more input optical beams, i.e. either the image beam 103 (“image display beam”) or the probe beam 105 (“gaze tracking probe beam”), and diffract the input optical beam substantially within the first and second surfaces to provide a diffracted optical beam within the substrate which is coupled via total internal reflection to the out-coupling diffractive element area 108. The out-coupling diffractive element area is configured to further diffract the diffracted optical beam out of the substrate to provide an output optical beam, namely an output of the image display beam 109 or the output gaze tracking probe beam 111.


The out-coupling diffractive element 108 may be configured not only to output the diffracted optical beam from the substrate but also to expand the diffracted optical beam in one direction. Further diffractive elements may be provided (for example 316a as shown in FIG. 3) for expanding the diffracted optical beam in another direction so as to provide an output beam 109, 111 that is expanded in two directions.


In certain examples, the in-coupling and out-coupling elements may be based on optical methods other than diffraction gratings and grooves, for example volume holograms or gratings, or semi-transparent mirror structures.


The apparatus of FIG. 1 shows a schematic illustration of a monocular apparatus. However, it is to be appreciated that the apparatus could be configured in a binocular form. Such a binocular form could correspond to providing two apparatuses 100, one for each of the user's eyes, for example providing the apparatus 100 shown in FIG. 1 for a user's right eye and a mirror image version of the apparatus 100 for the user's left eye (similar to that of FIG. 2). As an alternative to using two such laterally adjacent light guides, a single light guide could be provided having an in-coupling diffractive element area between two out-coupling diffractive element areas located at either end of the light guide, wherein the in-coupling diffractive element area is configured to provide two diffracted input optical beams: one coupled to one of the out-coupling diffractive element areas, for example at a left hand end of the light guide, and the other coupled to the other of the out-coupling diffractive element areas at the other end, e.g. a right hand end of the light guide.



FIG. 1 shows a single diffractive light guide 106 provided with one in-coupling diffractive element area 107 which is configured to diffract light at the first range of wavelengths (e.g. in the visible part of the electromagnetic spectrum) as well as light at the second range of wavelengths (e.g. the IR part of the electromagnetic spectrum). Thus, the diffractive element area 107 may be configured to in-couple both the image display beam and the gaze tracking probe beam, i.e. the diffractive element may be configured so as to diffract (or at least have an acceptable efficiency for diffracting) each of the infrared, red, green and blue regions of the electromagnetic spectrum, or to diffract a portion of a colour channel. It should be appreciated that other colour channels/apportionments of the visible spectrum may also be envisaged. Likewise, the single out-coupling diffractive element 108 may be similarly configured to diffract and out-couple a corresponding wide range of wavelengths.


As an alternative to using a single in-coupling diffractive element area for in-coupling a wide range of wavelengths of input beams 103, 105, a plurality of in-coupling diffractive element areas could be provided on a light guide, each area spatially distinct from another and each being configured and optimised to diffract a particular/narrower range of wavelengths, for example one or more colour channels, infrared, red, green or blue, to provide one or more diffractive optical beams of such wavelengths of light within the substrate for out-coupling from the substrate by one or more out-coupling diffractive element areas.
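
The wavelength dependence that motivates such per-range optimisation can be illustrated with the grating equation. The sketch below is not taken from the patent: the grating period and substrate refractive index are assumed, purely illustrative values, and the code simply shows how the first-order diffraction angle inside the substrate varies with wavelength and whether the diffracted beam exceeds the critical angle needed for total internal reflection.

```python
import math

# Illustrative assumptions only (not values from the patent):
GRATING_PERIOD_NM = 400.0   # assumed in-coupling grating period d
SUBSTRATE_INDEX = 1.7       # assumed refractive index n of the light guide substrate

WAVELENGTHS_NM = {"blue": 460, "green": 530, "red": 630, "infrared": 850}

def first_order_angle_deg(wavelength_nm, period_nm=GRATING_PERIOD_NM, n=SUBSTRATE_INDEX):
    """First-order diffraction angle inside the substrate for normal incidence,
    from the grating equation n * sin(theta) = wavelength / period."""
    s = wavelength_nm / (n * period_nm)
    if s >= 1.0:
        return None  # evanescent order: not coupled into the guide
    return math.degrees(math.asin(s))

critical_angle = math.degrees(math.asin(1.0 / SUBSTRATE_INDEX))

for name, wl in WAVELENGTHS_NM.items():
    theta = first_order_angle_deg(wl)
    if theta is None:
        print(f"{name:8s} {wl} nm: not diffracted into the guide with this period")
    else:
        print(f"{name:8s} {wl} nm: {theta:5.1f} deg, guided by TIR: {theta > critical_angle} "
              f"(critical angle {critical_angle:.1f} deg)")
```

With these assumed numbers the blue, green and red beams are diffracted beyond the critical angle and are guided, whereas the 850 nm infrared light is not coupled at all, which illustrates why a grating or light guide separately optimised for the infrared probe beam may be useful.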


Likewise, a plurality of out-coupling diffractive elements could be provided on the light guide, each configured and optimised to diffract a particular narrow range of wavelengths of light and out-couple them from the light guide.


Yet further alternatively, instead of having a single light guide (with one or more in-coupling diffractive element areas and one or more out-coupling diffractive element areas) a plurality of light guides may be provided, e.g. vertically aligned and stacked on top of each other. Each of the stacked light guides could be provided with one or more in/out-coupling diffractive element areas configured and optimised to in-couple and out-couple one or more particular colour channels. Thus, it is to be appreciated that a variety of possible combinations of: numbers of in- and out-coupling diffractive element areas per light guide, as well as number of stacked light guides are envisaged.



FIG. 2 shows an apparatus 200 in a binocular form comprising a right hand side set of stacked light guides 106a and 106a′ along with a left hand side set of stacked light guides 106b and 106b′. For the sake of simplicity, the below discussion focuses just on the right hand side components of the apparatus. It is to be appreciated that similar components may be provided on the left hand side of the apparatus (with equivalent reference numerals but designated with a “b” instead of “a”).


Of the two stacked light guides 106a and 106a′, one is configured and optimised to in-couple, expand and out-couple visible light for example in the green and blue parts of the spectrum, whereas the other light guide is configured and optimised to in-couple, expand and out-couple infrared light and red light.


It is to be appreciated that other permutations may readily be envisaged, not least for example a plurality of stacked light guides each optimised for one of: blue, green, red and infrared respectively, or, similarly to FIG. 1, a single light guide may be provided for in- and out-coupling all of IR, R, G and B wavelengths of light beams. Also, one light guide optimised for RGB could be provided with another optimised for IR. Such an IR optimised light guide could be made to be thinner than an RGB light guide. It could further be configured for other integrated uses, such as: being used as a protective element, or a capacitive measurement surface for face/eye movement detection, as well as an LC shutter and hot mirror as discussed in further detail below.


A single light source 212a emits light 102, 104 both for the image display beam 103 (i.e. visible light) and for the gaze tracking probe beam 105 (i.e. infrared light). Such a light source, generating both visible light and infrared light simultaneously, may correspond to a laser crystal on silicon based light source as discussed further with respect to FIG. 5A. Alternatively, instead of having a light source that generates both the visible and infrared light simultaneously, separate light sources may be provided which independently generate visible light and infrared light, not least for example separate red, green, blue and infrared LEDs positioned adjacent to one another which can be separately and independently controlled/switched on, such that only visible light is emitted and received by the light modulator when generating the display image and, likewise, only infrared light is emitted and received at the light modulator when generating the gaze tracking probe beam.


The light source 212a generates the light of the first range of wavelengths 102 and also generates the light of the second range of wavelengths 104, each of which is incident on a light modulator 201a which generates an image display beam 103 and a gaze tracking probe beam 105, respectively, from such incident light. Additional optics, e.g. lenses, mirrors or a MEMS device, may be provided, as well as other mechanisms for focusing and collimating the source light into beams of light that may further be scanned/rastered onto the in-coupling diffractive element.


The image display beam from the light modulator 201a is incident on a first in-coupling diffractive element area 107a and in-coupled into the first light guide 106a and then out-coupled via out-coupling diffractive element 108a (propagating through the underlying light guide 106a′) so as to be directed towards a user's eye 110. The first light guide may be configured and optimised so as to in-couple, expand and out-couple light of the image display beam in the green and blue parts of the visible spectrum. The further light guide 106a′ may be provided with an in-coupling diffractive element area 107a′ and an out-coupling diffractive element area 108a′ configured and optimised to in-couple, expand and out-couple visible light of the image display beam in the red part of the spectrum. The second light guide 106a′ may further be configured to in-couple, expand and out-couple infrared light to provide an output gaze tracking probe beam 111 which is incident on the user's eye to be reflected therefrom.


A detector 213, such as an IR detector/sensor or camera, may be provided to detect reflections 214 of the gaze tracking probe beam from the user's eye, i.e. images/video of the user's eye in IR. The detection and measurement of such reflections 214 of the infrared gaze tracking probe beam may be used in part by a controller 215 to calculate and determine a user's gaze.


In some examples, other features of the eye are also measured and captured, for example related to a location of the user's eye pupil. In some examples, the IR detector/eye camera is used to capture and measure not only the reference reflections/glints 214 but is also used to capture and measure a location of the eye pupil. A determination may be made of the detected location of the reference reflection 214 relative to the detected location of the pupil. Differences of the relative movement between the pupil and the reference reflections 214 can be used for detecting changes in the gaze direction.
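
As an illustration of this pupil-glint approach, the following minimal sketch maps a detected pupil-to-glint vector to a gaze point through a calibration mapping. The function, the affine mapping and the coordinate values are hypothetical assumptions introduced for illustration; they are not taken from the patent.

```python
import numpy as np

def gaze_from_pupil_and_glint(pupil_center, glint_center, mapping):
    """Estimate a gaze point from the pupil-glint vector.

    pupil_center, glint_center: (x, y) image coordinates of the detected pupil
    and of the detected reference reflection (glint) of the IR probe beam.
    mapping: 2x3 affine calibration matrix (assumed; a real system would fit
    this during calibration, e.g. while the user fixates known targets).
    """
    dx, dy = np.subtract(pupil_center, glint_center)
    v = np.array([dx, dy, 1.0])   # homogeneous pupil-glint vector
    return mapping @ v            # (gx, gy) gaze point in display coordinates

# Example usage with assumed values
calibration = np.array([[12.0, 0.0, 640.0],
                        [0.0, 12.0, 360.0]])
print(gaze_from_pupil_and_glint(pupil_center=(312, 240),
                                glint_center=(305, 236),
                                mapping=calibration))
```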


The determination of the gaze may also be dependent on the generated gaze tracking probe beam, i.e. taking into account one or more characteristics of the infrared probe beam as output, such as its initial shape, size and intensity prior to being reflected from the user's eye, as compared to the shape, size and intensity of the detected reflected gaze tracking probe beam.


Implementation of the controller 215 can be in hardware alone (e.g. circuitry such as processing circuitry comprising one or more processors and memory circuitry comprising one or more memory elements), have certain aspects in software including firmware alone or can be a combination of hardware and software (including firmware).


The controller 215 may be used to control one or more of the light source 212a and the light modulator 201a so as to control each of the image display beam and the gaze tracking probe beam. The generated gaze tracking probe beam may be modified, e.g. its size or shape or intensity, dependent upon the detected reflected gaze tracking probe beam. Such feedback from the detection of the reflected gaze tracking probe beam can assist in the calibration of the apparatus and enable the gaze tracking probe beam's pattern, shape or size to be optimised for the prevailing circumstances of use.
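
A hypothetical sketch of such a feedback loop is given below. The modulator and detector interfaces (capture_ir_frame, find_glints, set_probe_pattern) and the thresholds are invented for illustration; the patent does not specify any particular API.

```python
def adjust_probe_beam(modulator, detector, min_snr=3.0, max_iterations=10):
    """Iteratively adapt the IR probe pattern based on the detected reflections."""
    for _ in range(max_iterations):
        frame = detector.capture_ir_frame()      # image of the eye in IR
        glints = detector.find_glints(frame)     # detected reflections 214
        if glints and min(g.snr for g in glints) >= min_snr:
            return glints                        # reflections strong enough: keep this pattern
        # Otherwise make the probe pattern larger and slightly brighter, then retry.
        modulator.set_probe_pattern(
            size=modulator.probe_size * 1.2,
            intensity=min(1.0, modulator.probe_intensity * 1.1),
        )
    return None  # could not obtain usable reflections; fall back to a default pattern
```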


Whilst FIG. 2 shows a single detector 213 for detecting the gaze of one eye, i.e. the user's right eye, the arrangement could be modified to, alternatively or additionally, determine the gaze of the user's other eye, or to determine the gaze of an eye using two or more detectors. Whilst FIG. 2 shows two light sources 212a and 212b, and two light modulators 201a and 201b, a single light source and/or a single light modulator could be provided to generate image display and probe beams that are incident on each of the two in-coupling diffractive element areas 107a and 107b.


In the apparatus of FIG. 2, where a combined visible and infrared light source is used, i.e. that simultaneously generates both visible and infrared light (as opposed to separately and independently generating visible and IR light such that the emission of visible and IR light can be separately and independently controlled), the apparatus may further comprise a selectively controllable filter to selectively filter out one of: the visible light (or a sub colour channel thereof) or the infrared light.


In some examples, e.g. where the IR optimised light guide is below (and disposed closer to the eye than) the RGB optimised light guide, a passive IR pass filter could be placed between the in-coupling area of the RGB optimised light guide and the in-coupling area of the IR optimised light guide, e.g. so as to reduce in-coupling of RGB light to the IR light guide. Alternatively, in other examples, e.g. where the RGB optimised light guide is below (and placed closer to the eye than) the IR optimised light guide, an IR blocking filter could be placed between the respective in-coupling areas, e.g. so as to reduce in-coupling of IR light to the RGB light guide.


In some examples a Liquid Crystal (LC) shutter could be placed between the light guides and the outside environment/world/reality. For example the LC shutter could form part of a selectively transparent part of the external housing of the apparatus which is configured to selectively enable a real world view through the apparatus. Adjusting the shutter's transmissivity would control how much ambient light gets to the eye through the shutter and through the transparent light guides. The LC shutter could, in some examples, be integrated into an IR optimised light guide, in which case such a light guide may be placed between the RGB optimised light guide(s) and the outside environment/world/reality so that it would not block the RGB light from the light guide reaching the eye.


In some examples, the light guide 106a′ which in-couples, expands and out-couples the infrared gaze tracking probe beam may be configured so as to selectively filter the transmission of infrared light therethrough. The light guide 106a′ may be configured to act as a liquid crystal shutter that can selectively block the transmission of infrared light therethrough whilst still permitting the transmission of visible light therethrough.


When an image display beam is being generated and output, a selectively controllable filter may be used to block infrared light such that only visible light is incident on the user's eye during periods of outputting the image display beam for user viewing. Likewise, in other examples, visible light may be selectively filtered/blocked/switched off such that only the infrared gaze tracking probe beam is generated and incident on the user's eye when outputting an infrared gaze tracking probe beam.
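
One way to realise this is to time-multiplex display and probe phases under control of the selectively controllable filter. The sketch below is illustrative only: the modulator, filter and detector objects, their method names and the slot durations are assumptions, not interfaces described in the patent.

```python
import time

DISPLAY_SLOT_S = 0.012  # assumed duration of a visible-light display slot
PROBE_SLOT_S = 0.004    # assumed duration of an infrared probe slot

def run_frame(modulator, ir_filter, detector):
    """Alternate a visible display phase and an IR probe phase within one frame."""
    # Display phase: block IR so only visible light reaches the user's eye.
    ir_filter.block_infrared(True)
    modulator.show_image_frame()
    time.sleep(DISPLAY_SLOT_S)

    # Probe phase: block visible light, output the IR probe pattern and
    # capture its reflection from the eye.
    ir_filter.block_infrared(False)
    ir_filter.block_visible(True)
    modulator.show_probe_pattern()
    time.sleep(PROBE_SLOT_S)
    reflections = detector.capture_ir_frame()
    ir_filter.block_visible(False)
    return reflections
```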



FIG. 3 schematically illustrates an example of light guides 306a and 306b suitable for use with examples of the present disclosure. For example, the light guides 306a and 306b may be configured for use as light guides 106a and 106b, or 106a′ and 106b′ of FIG. 2.


The light guide 306a comprises in-coupling diffractive area 307a which in-couples an input beam 305a to the substrate, which beam is then expanded via diffractive element area 316a in a y direction and then expanded in an x direction and out-coupled out of the light guide via an out-coupling diffractive element 308a so as to produce output beam 311a. The input optical beam 305a may correspond to an image display beam (or a component of the image display beam such as a red, green or blue component thereof) or an infrared gaze tracking probe beam.


In the apparatus of FIG. 2, the reflected gaze tracking probe beam 214 needs to propagate through each of the stacked light guides 106a′ and 106a to reach the detector 213 for detection, which constrains where the detector can be placed. FIG. 4 schematically shows an apparatus 400 that avoids such issues.



FIG. 4 schematically shows an apparatus in the form of glasses/goggles with support members/arms 418 for holding the apparatus in place on a user's head. In this example, the detector 413 is not disposed on the light guides. Instead, a reflector element 417, such as a mirror or hot mirror configured to reflect infrared light, is provided, e.g. on the light guide, namely on an inner/lower surface of the lower light guide. It is to be appreciated that the reflector element could be located almost anywhere in front of the eye, even on an opposite side of the light guide stack.


The reflector is configured to further reflect 419 the reflected gaze tracking beam to a detector 413, thereby providing increased flexibility as to the location of the detector. In this example, the detector is disposed on a part of a housing of the apparatus such as support arm 418. The reflector 417 may be further configured so as to provide optical power so as to focus the reflected infrared gaze tracking probe beam 214 towards the detector 413.



FIG. 5A schematically illustrates an example of an apparatus 500 comprising a combined/highly integrated visible and infrared light source 512 which may be used as a laser projector engine. A laser diode 521 generates infrared light which is focused by lens 522 into a non-linear crystal 523, such as periodically poled lithium niobate (PPLN), comprising a surface Bragg grating whereby the infrared light is converted into red, green and blue visible wavelengths. However, some infrared light will pass through the non-linear crystal without undergoing such conversion. A conventional laser projection engine would typically block such non-converted infrared light, deeming it to be an undesired noise signal. However, such non-converted infrared light may advantageously be used in examples of the present disclosure as a source of infrared illumination. Thus, the light source 512 provides the simultaneous generation of both visible light and infrared light that can be incident on a light modulator 501, under control of a controller 515, so as to sequentially generate an image display beam and a gaze tracking probe beam from the incident visible and infrared beams respectively. A selectively controllable filter or shutter may be used to selectively filter/block IR light when the image display beam is being generated and likewise selectively filter/block visible light when the gaze tracking probe beam is being generated.


The light modulator 501 comprises a two-axis scanning mirror which can be used to impart an image to the visible light from the light source to generate the image display beam. Also, the light modulator can be used to impart a pattern on the infrared light from the light source to generate the gaze tracking probe beam.
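
Purely as an assumed illustration of how a two-axis scanning mirror can impart both an image and a probe pattern, the sketch below rasterises an RGB image and an IR pattern pixel by pixel. The mirror and laser driver objects and their method names are hypothetical, not interfaces from the patent.

```python
def raster_frame(mirror, laser, image_rgb, probe_ir):
    """Scan one frame, modulating the visible and IR sources per pixel."""
    rows = len(image_rgb)
    cols = len(image_rgb[0])
    for y in range(rows):
        for x in range(cols):
            mirror.point_to(x / cols, y / rows)   # normalised scan angles
            r, g, b = image_rgb[y][x]
            laser.set_visible(r, g, b)            # pixel of the image display beam
            laser.set_infrared(probe_ir[y][x])    # pixel of the gaze tracking probe pattern
```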



FIG. 5B schematically illustrates an apparatus 530 with an alternative light source 512 and an alternative light modulator 501. The light source 512 is configured to independently and separately generate visible light (and/or sub-component colour channels thereof) and infrared light. The light source 512 may correspond to one or more separate light sources, e.g. LEDs, such as red, green, blue and infrared LEDs which are independently operable under control of a controller 515. Light from the LEDs is directed to a reflective microdisplay 531 via beam splitter 532. The modulated/modified light reflected from the microdisplay is then guided to a prism 533 and reflected therefrom to an in-coupling diffractive element area 507 of a light guide 506 to be out-coupled to a user's eye 110 via out-coupling diffractive element area 508. A detector 513 is provided that is configured to detect the out-coupled light which is reflected from the user's eye.


The apparatuses as variously described above may be provided in a module. As used here ‘module’ refers to a unit or apparatus that excludes certain parts/components that would be added by an end manufacturer or a user.


In certain examples, the apparatus may be provided as a device, wherein a device is configured for at least one of portable use, wearable use and head mountable use. The device may also be configured to provide functionality in addition to display and gaze tracking. For example, the device may additionally be configured to provide one or more of: audio/text/video communication functions (e.g. tele-communication, video-communication, and/or text transmission (Short Message Service (SMS)/Multimedia Message Service (MMS)/emailing) functions), interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. Moving Picture Experts Group-1 Audio Layer 3 (MP3) or other format and/or (frequency modulation/amplitude modulation) radio broadcast recording/playing), downloading/sending of data functions, image capture function (e.g. using a (e.g. in-built) digital camera), and gaming functions.


The apparatus may be a part of an NED device, for example, glasses or goggles. It should be understood, however, that glasses or goggles are merely illustrative of an NED device that would benefit from examples of implementations of the present disclosure and, therefore, should not be taken to limit the scope of the present disclosure to the same. For example, the apparatus may take other forms such as a visor or helmet, or may be implemented in other electronic devices, not least hand-held devices or other portable devices.


Although examples of the apparatus have been described above in terms of comprising various components, it should be understood that the components may be embodied as or otherwise controlled by a corresponding processing element, processor or circuitry of the apparatus.


As used in this application, the term ‘circuitry’ refers to all of the following:

    • (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry) and
    • (b) to combinations of circuits and software (and/or firmware), such as (as applicable): (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and
    • (c) to circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.


This definition of ‘circuitry’ applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term “circuitry” would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.


Features described in the preceding description may be used in combinations other than the combinations explicitly described.


Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not. Although features have been described with reference to certain examples, those features may also be present in other examples whether described or not. Although various examples of the present disclosure have been described in the preceding paragraphs, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as set out in the claims.


The term ‘comprise’ is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising Y indicates that X may comprise only one Y or may comprise more than one Y. If it is intended to use ‘comprise’ with an exclusive meaning then it will be made clear in the context by referring to “comprising only one . . . ” or by using “consisting”.


In this description, wording such as ‘couple’, ‘connect’ and ‘communication’ and their derivatives mean operationally coupled/connected/in communication. It should be appreciated that any number or combination of intervening components can exist (including no intervening components).


In this description, reference has been made to various examples. The description of features or functions in relation to an example indicates that those features or functions are present in that example. The use of the term ‘example’ or ‘for example’ or ‘may’ in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some or all other examples. Thus ‘example’, ‘for example’ or ‘may’ refers to a particular instance in a class of examples. A property of the instance can be a property of only that instance or a property of the class or a property of a sub-class of the class that includes some but not all of the instances in the class.


In this description, references to “a/an/the” [feature, element, component, means . . . ] are to be interpreted as “at least one” [feature, element, component, means . . . ] unless explicitly stated otherwise.


The above description describes some examples of the present disclosure however those of ordinary skill in the art will be aware of possible alternative structures and method features which offer equivalent functionality to the specific examples of such structures and features described herein above and which for the sake of brevity and clarity have been omitted from the above description. Nonetheless, the above description should be read as implicitly including reference to such alternative structures and method features which provide equivalent functionality unless such alternative structures or method features are explicitly excluded in the above description of the examples of the present disclosure.


Whilst endeavouring in the foregoing specification to draw attention to those features of examples of the present disclosure believed to be of particular importance it should be understood that the applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not particular emphasis has been placed thereon.

Claims
  • 1. An apparatus comprising: one or more light sources configured to generate simultaneously light of a first range of wavelengths and light of a second range of wavelengths; a light modulator configured to receive the light of the first range of wavelengths and generate an image beam therefrom, wherein the light modulator is further configured to receive the light of the second range of wavelengths and generate a probe beam therefrom; first and second light guides, each light guide comprising: one or more in-coupling areas, and one or more out-coupling areas; wherein the one or more in-coupling areas are configured to receive and in-couple the image beam into the first light guide and the probe beam into the second light guide; wherein the one or more out-coupling areas are configured to out-couple, from the first and second light guides: the image beam to a user's eye for user viewing, and the probe beam to the user's eye for detection of reflection therefrom; and wherein the apparatus further comprises: a detector configured to detect reflections of the probe beam; a controller to determine a user's gaze based on the detected reflected probe beam.
  • 2. An apparatus as claimed in claim 1, wherein the light modulator is further configured to: control a pattern, shape and/or size of the probe beam; and/or dynamically vary a pattern of the probe beam.
  • 3. An apparatus as claimed in claim 1, further comprising a controller to modify the probe beam dependent upon the detection of the reflected probe beam.
  • 4. An apparatus as claimed in claim 1, further comprising a reflector configured to further reflect the reflected probe beam to the detector.
  • 5. An apparatus as claimed in claim 1, wherein the light of the first range of wavelengths comprises visible light and the light of the second range of wavelengths comprises infrared light.
  • 6. An apparatus as claimed in claim 1, further comprising a selectively controllable filter to selectively filter one of: the light of the first range of wavelengths and the light of the second range of wavelengths.
  • 7. An apparatus as claimed in claim 1, further comprising a light source configured to generate sequentially the light of the first range of wavelengths and the light of the second range of wavelengths.
  • 8. An apparatus as claimed in claim 1, wherein the one or more in-coupling areas and one or more out-coupling areas are comprised in a single light guide.
  • 9. An apparatus as claimed in claim 1, wherein the one or more light guides is configured as an exit pupil expander.
  • 10. An apparatus as claimed in claim 1, wherein the apparatus is configured for use as a near eye display and a gaze tracker.
  • 11. An apparatus as claimed in claim 1, wherein the in-coupling areas and out-coupling areas are diffractive elements respectively.
  • 12. A module comprising the apparatus of claim 1.
  • 13. A device comprising the apparatus of claim 1, wherein the device is configured for at least one of: portable use, wearable use, head mountable use, wireless communications.
Priority Claims (1)
Number Date Country Kind
15156666 Feb 2015 EP regional
CROSS-REFERENCE TO RELATED APPLICATIONS

This is a continuation of U.S. patent application Ser. No. 15/552,897, filed on Aug. 23, 2017, which is a national stage of International Patent Application No. PCT/FI2016/050072, filed on Feb. 5, 2016, which claims priority from European Patent Application No. 15156666.8, filed on Feb. 26, 2015, each of which is incorporated herein by reference in their entirety.

US Referenced Citations (319)
Number Name Date Kind
4344092 Miller Aug 1982 A
4652930 Crawford Mar 1987 A
4810080 Grendol et al. Mar 1989 A
4997268 Dauvergne Mar 1991 A
5007727 Kahaney et al. Apr 1991 A
5074295 Willis Dec 1991 A
5240220 Elberbaum Aug 1993 A
5251635 Dumoulin et al. Oct 1993 A
5410763 Bolle May 1995 A
5455625 Englander Oct 1995 A
5495286 Adair Feb 1996 A
5497463 Stein et al. Mar 1996 A
5682255 Friesem et al. Oct 1997 A
5854872 Tai Dec 1998 A
5864365 Sramek et al. Jan 1999 A
6012811 Chao et al. Jan 2000 A
6016160 Coombs et al. Jan 2000 A
6076927 Owens Jun 2000 A
6117923 Amagai et al. Sep 2000 A
6124977 Takahashi Sep 2000 A
6191809 Hori et al. Feb 2001 B1
6375369 Schneider et al. Apr 2002 B1
6538655 Kubota Mar 2003 B1
6541736 Huang et al. Apr 2003 B1
6757068 Foxlin Jun 2004 B2
7119819 Robertson et al. Oct 2006 B1
7431453 Hogan Oct 2008 B2
7542040 Templeman Jun 2009 B2
7573640 Nivon et al. Aug 2009 B2
7724980 Shenzhi May 2010 B1
7751662 Kleemann Jul 2010 B2
7758185 Lewis Jul 2010 B2
8060759 Arnan et al. Nov 2011 B1
8120851 Iwasa Feb 2012 B2
8246408 Elliot Aug 2012 B2
8353594 Lewis Jan 2013 B2
8360578 Nummela Jan 2013 B2
8508676 Silverstein et al. Aug 2013 B2
8547638 Levola Oct 2013 B2
8605764 Rothaar et al. Oct 2013 B1
8619365 Harris et al. Dec 2013 B2
8696113 Lewis Apr 2014 B2
8698701 Margulis Apr 2014 B2
8733927 Lewis May 2014 B1
8736636 Kang May 2014 B2
8759929 Shiozawa et al. Jun 2014 B2
8793770 Lim Jul 2014 B2
8823855 Hwang Sep 2014 B2
8847988 Geisner et al. Sep 2014 B2
8874673 Kim Oct 2014 B2
9010929 Lewis Apr 2015 B2
9086537 Iwasa et al. Jul 2015 B2
9095437 Boyden et al. Aug 2015 B2
9239473 Lewis Jan 2016 B2
9244293 Lewis Jan 2016 B2
9244533 Friend et al. Jan 2016 B2
9383823 Geisner et al. Jul 2016 B2
9489027 Ogletree Nov 2016 B1
9581820 Robbins Feb 2017 B2
9658473 Lewis May 2017 B2
9671566 Abovitz et al. Jun 2017 B2
9671615 Vallius et al. Jun 2017 B1
9696795 Marcolina et al. Jul 2017 B2
9874664 Stevens et al. Jan 2018 B2
9918058 Takahasi et al. Mar 2018 B2
9955862 Freeman et al. May 2018 B2
9978118 Ozgumer et al. May 2018 B1
9996797 Holz et al. Jun 2018 B1
10018844 Levola et al. Jul 2018 B2
10082865 Raynal et al. Sep 2018 B1
10151937 Lewis Dec 2018 B2
10185147 Lewis Jan 2019 B2
10218679 Jawahar Feb 2019 B1
10241545 Richards et al. Mar 2019 B1
10317680 Richards et al. Jun 2019 B1
10436594 Belt et al. Oct 2019 B2
10516853 Gibson et al. Dec 2019 B1
10551879 Richards et al. Feb 2020 B1
10578870 Kimmel Mar 2020 B2
10698202 Kimmel et al. Jun 2020 B2
10856107 Mycek et al. Oct 2020 B2
10825424 Zhang Nov 2020 B2
11190681 Brook et al. Nov 2021 B1
11209656 Choi et al. Dec 2021 B1
11236993 Hall et al. Feb 2022 B1
20010010598 Aritake et al. Aug 2001 A1
20020063913 Nakamura et al. May 2002 A1
20020071050 Homberg Jun 2002 A1
20020122648 Mule' et al. Sep 2002 A1
20020140848 Cooper et al. Oct 2002 A1
20030048456 Hill Mar 2003 A1
20030067685 Niv Apr 2003 A1
20030077458 Korenaga et al. Apr 2003 A1
20030219992 Schaper Nov 2003 A1
20040001533 Tran et al. Jan 2004 A1
20040021600 Wittenberg Feb 2004 A1
20040025069 Gary et al. Feb 2004 A1
20040042377 Nikoloai et al. Mar 2004 A1
20040174496 Ji et al. Sep 2004 A1
20040186902 Stewart Sep 2004 A1
20040201857 Foxlin Oct 2004 A1
20040238732 State et al. Dec 2004 A1
20040240072 Schindler et al. Dec 2004 A1
20040246391 Travis Dec 2004 A1
20040268159 Aasheim et al. Dec 2004 A1
20050001977 Zelman Jan 2005 A1
20050157159 Komiya et al. Jul 2005 A1
20050273792 Inohara et al. Dec 2005 A1
20060013435 Rhoads Jan 2006 A1
20060015821 Jacques Parker et al. Jan 2006 A1
20060038880 Starkweather et al. Feb 2006 A1
20060050224 Smith Mar 2006 A1
20060126181 Levola Jun 2006 A1
20060132914 Weiss et al. Jun 2006 A1
20060221448 Nivon et al. Oct 2006 A1
20060228073 Mukawa et al. Oct 2006 A1
20060250322 Hall et al. Nov 2006 A1
20060268220 Hogan Nov 2006 A1
20070058248 Nguyen et al. Mar 2007 A1
20070159673 Freeman et al. Jul 2007 A1
20070188837 Shimizu et al. Aug 2007 A1
20070204672 Huang et al. Sep 2007 A1
20070213952 Cirelli Sep 2007 A1
20070283247 Brenneman et al. Dec 2007 A1
20080002259 Ishizawa et al. Jan 2008 A1
20080002260 Arrouy et al. Jan 2008 A1
20080043334 Itzkovitch et al. Feb 2008 A1
20080063802 Maula et al. Mar 2008 A1
20080068557 Menduni et al. Mar 2008 A1
20080146942 Dala-Krishna Jun 2008 A1
20080205838 Crippa et al. Aug 2008 A1
20080316768 Travis Dec 2008 A1
20090153797 Allon et al. Jun 2009 A1
20090224416 Laakkonen et al. Sep 2009 A1
20090245730 Kleemann Oct 2009 A1
20090310633 Ikegami Dec 2009 A1
20100019962 Fujita Jan 2010 A1
20100056274 Uusitalo et al. Mar 2010 A1
20100063854 Purvis et al. Mar 2010 A1
20100079841 Levola Apr 2010 A1
20100153934 Lachner Jun 2010 A1
20100232016 Landa et al. Sep 2010 A1
20100232031 Batchko et al. Sep 2010 A1
20100244168 Shiozawa et al. Sep 2010 A1
20100296163 Sarikko Nov 2010 A1
20110050655 Mukawa Mar 2011 A1
20110122240 Becker May 2011 A1
20110145617 Thomson et al. Jun 2011 A1
20110170801 Lu et al. Jul 2011 A1
20110218733 Hamza et al. Sep 2011 A1
20110286735 Temblay Nov 2011 A1
20110291969 Rashid et al. Dec 2011 A1
20120050535 Densham et al. Mar 2012 A1
20120075501 Oyagi et al. Mar 2012 A1
20120081392 Arthur Apr 2012 A1
20120113235 Shintani May 2012 A1
20120127062 Bar-Zeev et al. May 2012 A1
20120154557 Perez et al. Jun 2012 A1
20120218301 Miller Aug 2012 A1
20120246506 Knight Sep 2012 A1
20120249416 Maciocci et al. Oct 2012 A1
20120249741 Maciocci et al. Oct 2012 A1
20120307075 Margalitq Dec 2012 A1
20120314959 White et al. Dec 2012 A1
20120320460 Levola Dec 2012 A1
20120326948 Crocco et al. Dec 2012 A1
20130050833 Lewis et al. Feb 2013 A1
20130051730 Travers et al. Feb 2013 A1
20130502058 Liu et al. Feb 2013
20130077049 Bohn Mar 2013 A1
20130077170 Ukuda Mar 2013 A1
20130094148 Sloane Apr 2013 A1
20130129282 Li May 2013 A1
20130169923 Schnoll et al. Jul 2013 A1
20130222386 Tannhauser et al. Aug 2013 A1
20130278633 Ahn et al. Oct 2013 A1
20130314789 Saarikko et al. Nov 2013 A1
20130318276 Dalal Nov 2013 A1
20130336138 Venkatraman et al. Dec 2013 A1
20130342564 Kinnebrew et al. Dec 2013 A1
20130342570 Kinnebrew et al. Dec 2013 A1
20130342571 Kinnebrew et al. Dec 2013 A1
20140016821 Arth et al. Jan 2014 A1
20140022819 Oh et al. Jan 2014 A1
20140078023 Ikeda et al. Mar 2014 A1
20140082526 Park et al. Mar 2014 A1
20140119598 Ramachandran et al. May 2014 A1
20140126769 Reitmayr et al. May 2014 A1
20140140653 Brown et al. May 2014 A1
20140149573 Tofighbakhsh et al. May 2014 A1
20140168260 O'Brien et al. Jun 2014 A1
20140267419 Ballard et al. Sep 2014 A1
20140274391 Stafford Sep 2014 A1
20140282105 Nordstrom Sep 2014 A1
20140359589 Kodsky et al. Dec 2014 A1
20140375680 Ackerman et al. Dec 2014 A1
20150005785 Olson Jan 2015 A1
20150009099 Queen Jan 2015 A1
20150077312 Wang Mar 2015 A1
20150097719 Balachandreswaran et al. Apr 2015 A1
20150123966 Newman May 2015 A1
20150130790 Vazquez, II et al. May 2015 A1
20150134995 Park et al. May 2015 A1
20150138248 Schrader May 2015 A1
20150155939 Oshima et al. Jun 2015 A1
20150205126 Schowengerdt Jul 2015 A1
20150235431 Schowengerdt Aug 2015 A1
20150253651 Russell et al. Sep 2015 A1
20150256484 Cameron Sep 2015 A1
20150269784 Miyawaki et al. Sep 2015 A1
20150294483 Wells et al. Oct 2015 A1
20150301955 Yakovenko et al. Oct 2015 A1
20150338915 Publicover et al. Nov 2015 A1
20150355481 Hilkes et al. Dec 2015 A1
20160004102 Nisper et al. Jan 2016 A1
20160027215 Burns et al. Jan 2016 A1
20160077338 Robbins et al. Mar 2016 A1
20160085300 Robbins et al. Mar 2016 A1
20160091720 Stafford et al. Mar 2016 A1
20160093099 Bridges Mar 2016 A1
20160093269 Buckley et al. Mar 2016 A1
20160123745 Cotier et al. May 2016 A1
20160155273 Lyren et al. Jun 2016 A1
20160180596 Gonzalez del Rosario Jun 2016 A1
20160191887 Casas Jun 2016 A1
20160202496 Billetz et al. Jul 2016 A1
20160217624 Finn et al. Jul 2016 A1
20160266412 Yoshida Sep 2016 A1
20160267708 Nistico et al. Sep 2016 A1
20160274733 Hasegawa et al. Sep 2016 A1
20160300388 Stafford et al. Oct 2016 A1
20160321551 Priness et al. Nov 2016 A1
20160327798 Xiao et al. Nov 2016 A1
20160334279 Mittleman et al. Nov 2016 A1
20160357255 Lindh et al. Dec 2016 A1
20160370404 Quadrat et al. Dec 2016 A1
20160370510 Thomas Dec 2016 A1
20170038607 Camara Feb 2017 A1
20170061696 Li et al. Mar 2017 A1
20170100664 Osterhout et al. Apr 2017 A1
20170115487 Travis Apr 2017 A1
20170122725 Yeoh et al. May 2017 A1
20170123526 Trail et al. May 2017 A1
20170127295 Black et al. May 2017 A1
20170131569 Aschwanden et al. May 2017 A1
20170147066 Katz et al. May 2017 A1
20170160518 Lanman et al. Jun 2017 A1
20170161951 Fix et al. Jun 2017 A1
20170185261 Perez et al. Jun 2017 A1
20170192239 Nakamura et al. Jul 2017 A1
20170205903 Miller et al. Jul 2017 A1
20170206668 Poulos et al. Jul 2017 A1
20170213388 Margolis et al. Jul 2017 A1
20170219841 Popovich et al. Aug 2017 A1
20170232345 Rofougaran et al. Aug 2017 A1
20170235126 DiDomenico Aug 2017 A1
20170235142 Wall et al. Aug 2017 A1
20170235144 Piskunov et al. Aug 2017 A1
20170235147 Kamakura Aug 2017 A1
20170243403 Daniels et al. Aug 2017 A1
20170254832 Ho et al. Sep 2017 A1
20170256096 Faaborg et al. Sep 2017 A1
20170281054 Stever et al. Oct 2017 A1
20170287376 Bakar et al. Oct 2017 A1
20170293141 Schowengerdt et al. Oct 2017 A1
20170307886 Stenberg et al. Oct 2017 A1
20170307891 Bucknor et al. Oct 2017 A1
20170312032 Amanatullah et al. Nov 2017 A1
20170322426 Tervo Nov 2017 A1
20170329137 Tervo Nov 2017 A1
20170332098 Rusanovskyy et al. Nov 2017 A1
20170357332 Balan et al. Dec 2017 A1
20180014266 Chen Jan 2018 A1
20180052501 Jones, Jr. et al. Feb 2018 A1
20180059305 Popovich et al. Mar 2018 A1
20180067779 Pillalamarri et al. Mar 2018 A1
20180070855 Eichler Mar 2018 A1
20180082480 White et al. Mar 2018 A1
20180088185 Woods et al. Mar 2018 A1
20180102981 Kurtzman et al. Apr 2018 A1
20180108179 Tomlin et al. Apr 2018 A1
20180114298 Malaika et al. Apr 2018 A1
20180131907 Schmirier et al. May 2018 A1
20180136466 Ko May 2018 A1
20180144691 Choi et al. May 2018 A1
20180189568 Powderly et al. Jul 2018 A1
20180190017 Mendez et al. Jul 2018 A1
20180191990 Motoyama et al. Jul 2018 A1
20180250589 Cossairt et al. Sep 2018 A1
20180357472 Dreessen Dec 2018 A1
20190005069 Filgueiras de Araujo et al. Jan 2019 A1
20190011691 Peyman Jan 2019 A1
20190056591 Tervo et al. Feb 2019 A1
20190087015 Lam et al. Mar 2019 A1
20190101758 Zhu et al. Apr 2019 A1
20190158926 Kang et al. May 2019 A1
20190167095 Krueger Jun 2019 A1
20190172216 Ninan et al. Jun 2019 A1
20190178654 Hare Jun 2019 A1
20190196690 Chong et al. Jun 2019 A1
20190219815 Price et al. Jul 2019 A1
20190243123 Bohn Aug 2019 A1
20190318540 Piemonte et al. Oct 2019 A1
20190321728 Imai et al. Oct 2019 A1
20190347853 Chen et al. Nov 2019 A1
20200110928 Al Jazaery et al. Apr 2020 A1
20200117267 Gibson et al. Apr 2020 A1
20200117270 Gibson et al. Apr 2020 A1
20200202759 Ukai et al. Jun 2020 A1
20200309944 Thoresen et al. Oct 2020 A1
20200356161 Wagner Nov 2020 A1
20200368616 Delamont Nov 2020 A1
20200409528 Lee Dec 2020 A1
20210008413 Asikainen et al. Jan 2021 A1
20210033871 Jacoby et al. Feb 2021 A1
20210041951 Gibson et al. Feb 2021 A1
20210142582 Jones et al. May 2021 A1
20210158627 Cossairt et al. May 2021 A1
20210173480 Osterhout et al. Jun 2021 A1
Foreign Referenced Citations (34)
Number Date Country
107683497 Feb 2018 CN
0535402 Apr 1993 EP
1215522 Jun 2002 EP
1938141 Jul 2008 EP
1943556 Jul 2008 EP
2290428 Mar 2011 EP
3164776 May 2017 EP
3236211 Oct 2017 EP
2723240 Aug 2018 EP
2499635 Aug 2013 GB
2003-029198 Jan 2003 JP
2007-012530 Jan 2007 JP
2009-244869 Oct 2009 JP
2012-015774 Jan 2012 JP
2016-85463 May 2016 JP
6232763 Nov 2017 JP
201803289 Jan 2018 TW
2002071315 Sep 2002 WO
2006132614 Dec 2006 WO
2007085682 Aug 2007 WO
2007102144 Sep 2007 WO
2008148927 Dec 2008 WO
2009101238 Aug 2009 WO
2013049012 Apr 2013 WO
2015143641 Oct 2015 WO
2016054092 Apr 2016 WO
2017004695 Jan 2017 WO
2017120475 Jul 2017 WO
2018044537 Mar 2018 WO
2018087408 May 2018 WO
2018097831 May 2018 WO
2018166921 Sep 2018 WO
2019148154 Aug 2019 WO
2020010226 Jan 2020 WO
Non-Patent Literature Citations (165)
Entry
“ARToolKit: Hardware”, https://web.archive.org/web/20051013062315/http://www.hitl.washington.edu:80/artoolkit/documentation/hardware.htm (downloaded Oct. 26, 2020), Oct. 13, 2005, (3 pages).
Communication Pursuant to Article 94(3) EPC dated Sep. 4, 2019, European Patent Application No. 10793707.0, (4 pages).
European Search Report dated Oct. 15, 2020, European Patent Application No. 20180623.9, (10 pages).
Examination Report dated Jun. 19, 2020, European Patent Application No. 20154750.2, (10 pages).
Extended European Search Report dated May 20, 2020, European Patent Application No. 20154070.5, (7 pages).
Extended European Search Report dated Nov. 3, 2020, European Patent Application No. 18885707.2, (7 pages).
Extended European Search Report dated Nov. 4, 2020, European Patent Application No. 20190980.1, (14 pages).
Extended European Search Report dated Jun. 12, 2017, European Patent Application No. 16207441.3, (8 pages).
Final Office Action dated Aug. 10, 2020, U.S. Appl. No. 16/225,961, (13 pages).
Final Office Action dated Dec. 4, 2019, U.S. Appl. No. 15/564,517, (15 pages).
Final Office Action dated Feb. 19, 2020, U.S. Appl. No. 15/552,897, (17 pages).
Final Office Action dated Nov. 24, 2020, U.S. Appl. No. 16/435,933, (44 pages).
International Search Report and Written Opinion dated Mar. 12, 2020, International PCT Patent Application No. PCT/US19/67919, (14 pages).
International Search Report and Written Opinion dated Aug. 15, 2019, International PCT Patent Application No. PCT/US19/33987, (20 pages).
International Search Report and Written Opinion dated Jun. 15, 2020, International PCT Patent Application No. PCT/US2020/017023, (13 pages).
International Search Report and Written Opinion dated Oct. 16, 2019, International PCT Patent Application No. PCT/US19/43097, (10 pages).
International Search Report and Written Opinion dated Oct. 16, 2019, International PCT Patent Application No. PCT/US19/36275, (10 pages).
International Search Report and Written Opinion dated Oct. 16, 2019, International PCT Patent Application No. PCT/US19/43099, (9 pages).
International Search Report and Written Opinion dated Jun. 17, 2016, International PCT Patent Application No. PCT/FI2016/050172, (9 pages).
International Search Report and Written Opinion dated Oct. 22, 2019, International PCT Patent Application No. PCT/US19/43751, (9 pages).
International Search Report and Written Opinion dated Dec. 23, 2019, International PCT Patent Application No. PCT/US19/44953, (11 pages).
International Search Report and Written Opinion dated May 23, 2019, International PCT Patent Application No. PCT/US18/66514, (17 pages).
International Search Report and Written Opinion dated Sep. 26, 2019, International PCT Patent Application No. PCT/US19/40544, (12 pages).
International Search Report and Written Opinion dated Aug. 27, 2019, International PCT Application No. PCT/US2019/035245, (8 pages).
International Search Report and Written Opinion dated Dec. 27, 2019, International Application No. PCT/US19/47746, (16 pages).
International Search Report and Written Opinion dated Sep. 30, 2019, International Patent Application No. PCT/US19/40324, (7 pages).
International Search Report and Written Opinion dated Sep. 4, 2020, International Patent Application No. PCT/US20/31036, (13 pages).
International Search Report and Written Opinion dated Jun. 5, 2020, International Patent Application No. PCT/US20/19871, (9 pages).
International Search Report and Written Opinion dated Aug. 8, 2019, International PCT Patent Application No. PCT/US2019/034763, (8 pages).
International Search Report and Written Opinion dated Oct. 8, 2019, International PCT Patent Application No. PCT/US19/41151, (7 pages).
International Search Report and Written Opinion dated Jan. 9, 2020, International Application No. PCT/US19/55185, (10 pages).
International Search Report and Written Opinion dated Feb. 28, 2019, International Patent Application No. PCT/US18/64686, (8 pages).
International Search Report and Written Opinion dated Feb. 7, 2020, International PCT Patent Application No. PCT/US2019/061265, (11 pages).
International Search Report and Written Opinion dated Jun. 11, 2019, International PCT Application No. PCT/US19/22620, (7 pages).
Invitation to Pay Additional Fees dated Aug. 15, 2019, International PCT Patent Application No. PCT/US19/36275, (2 pages).
Invitation to Pay Additional Fees dated Sep. 24, 2020, International Patent Application No. PCT/US2020/043596, (3 pages).
Invitation to Pay Additional Fees dated Oct. 22, 2019, International PCT Patent Application No. PCT/US19/47746, (2 pages).
Invitation to Pay Additional Fees dated Apr. 3, 2020, International Patent Application No. PCT/US20/17023, (2 pages).
Invitation to Pay Additional Fees dated Oct. 17, 2019, International PCT Patent Application No. PCT/US19/44953, (2 pages).
Non Final Office Action dated Nov. 19, 2019, U.S. Appl. No. 16/355,611, (31 pages).
Non Final Office Action dated Aug. 21, 2019, U.S. Appl. No. 15/564,517, (14 pages).
Non Final Office Action dated Jul. 27, 2020, U.S. Appl. No. 16/435,933, (16 pages).
Non Final Office Action dated Jun. 17, 2020, U.S. Appl. No. 16/682,911, (22 pages).
Non Final Office Action dated Jun. 19, 2020, U.S. Appl. No. 16/225,961, (35 pages).
Non Final Office Action dated Nov. 5, 2020, U.S. Appl. No. 16/530,776, (45 pages).
Non Final Office Action dated Oct. 22, 2019, U.S. Appl. No. 15/859,277, (15 pages).
Non Final Office Action dated Sep. 1, 2020, U.S. Appl. No. 16/214,575, (40 pages).
Notice of Allowance dated Mar. 25, 2020, U.S. Appl. No. 15/564,517, (11 pages).
Notice of Allowance dated Oct. 5, 2020, U.S. Appl. No. 16/682,911, (27 pages).
Notice of Reasons for Refusal dated Sep. 11, 2020 with English translation, Japanese Patent Application No. 2019-140435, (6 pages).
“Phototourism Challenge”, CVPR 2019 Image Matching Workshop, https://image-matching-workshop.github.io, (16 pages).
Summons to attend oral proceedings pursuant to Rule 115(1) EPC mailed on Jul. 15, 2019, European Patent Application No. 15162521.7, (7 pages).
Aarik, J. et al., “Effect of crystal structure on optical properties of TiO2 films grown by atomic layer deposition”, Thin Solid Films; Publication [online], May 19, 1998 [retrieved Feb. 19, 2020], Retrieved from the Internet: <URL: https://www.sciencedirect.com/science/article/pii/S0040609097001351?via%3Dihub>; DOI: 10.1016/S0040-6090(97)00135-1; see entire document, (2 pages).
Arandjelović, Relja et al., “Three things everyone should know to improve object retrieval”, CVPR, 2012, (8 pages).
Azom, “Silica—Silicon Dioxide (SiO2)”, AZO Materials; Publication [Online], Dec. 13, 2001 [retrieved Feb. 19, 2020]. Retrieved from the Internet: <URL: https://www.azom.com/article.aspx?ArticleID=1114>, (6 pages).
Azuma, Ronald T. , “A Survey of Augmented Reality”, Presence: Teleoperators and Virtual Environments 6, (Aug. 4, 1997), 355-385; https://web.archive.org/web/20010604100006/http://www.cs.unc.edu/˜azuma/ARpresence.pdf (downloaded Oct. 26, 2020).
Azuma, Ronald T. , “Predictive Tracking for Augmented Reality”, Department of Computer Science, Chapel Hill NC; TR95-007, Feb. 1995, 262 pages.
Battaglia, Peter W. et al., “Relational inductive biases, deep learning, and graph networks”, arXiv:1806.01261, Oct. 17, 2018, pp. 1-40.
Berg, Alexander C et al., “Shape matching and object recognition using low distortion correspondences”, In CVPR, 2005, (8 pages).
Bian, Jiawang et al., “GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence.”, In CVPR (Conference on Computer Vision and Pattern Recognition), 2017, (10 pages).
Bimber, Oliver et al., “Spatial Augmented Reality: Merging Real and Virtual Worlds”, https://web.media.mit.edu/˜raskar/book/BimberRaskarAugmentedRealityBook.pdf; published by A K Peters/CRC Press (Jul. 31, 2005); eBook (3rd Edition, 2007), (393 pages).
Brachmann, Eric et al., “Neural-Guided RANSAC: Learning Where to Sample Model Hypotheses”, In ICCV (International Conference on Computer Vision ), arXiv:1905.04132v2 [cs.CV] Jul. 31, 2019, (17 pages).
Caetano, Tibério S et al., “Learning graph matching”, IEEE TPAMI, 31(6):1048-1058, 2009.
Cech, Jan et al., “Efficient sequential correspondence selection by cosegmentation”, IEEE TPAMI, 32(9):1568-1581, Sep. 2010.
Cuturi, Marco , “Sinkhorn distances: Lightspeed computation of optimal transport”, NIPS, 2013, (9 pages).
Dai, Angela et al., “ScanNet: Richly-annotated 3d reconstructions of indoor scenes”, In CVPR, arXiv:1702.04405v2 [cs.CV] Apr. 11, 2017, (22 pages).
Deng, Haowen et al., “PPFnet: Global context aware local features for robust 3d point matching”, In CVPR, arXiv:1802.02669v2 [cs.CV] Mar. 1, 2018, (12 pages).
Detone, Daniel et al., “Deep image homography estimation”, In RSS Work-shop: Limits and Potentials of Deep Learning in Robotics, arXiv:1606.03798v1 [cs.CV] Jun. 13, 2016, (6 pages).
Detone, Daniel et al., “Self-improving visual odometry”, arXiv:1812.03245, Dec. 8, 2018, (9 pages).
Detone, Daniel et al., “SuperPoint: Self-supervised interest point detection and description”, In CVPR Workshop on Deep Learning for Visual SLAM, arXiv:1712.07629v4 [cs.CV] Apr. 19, 2018, (13 pages).
Dusmanu, Mihai et al., “D2-net: A trainable CNN for joint detection and description of local features”, CVPR, arXiv:1905.03561v1 [cs.CV] May 9, 2019, (16 pages).
Ebel, Patrick et al., “Beyond cartesian representations for local descriptors”, ICCV, arXiv:1908.05547v1 [cs.CV] Aug. 15, 2019, (11 pages).
Fischler, Martin A et al., “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography”, Communications of the ACM, 24(6): 1981, pp. 381-395.
Gilmer, Justin et al., “Neural message passing for quantum chemistry”, In ICML, arXiv:1704.01212v2 [cs.LG] Jun. 12, 2017, (14 pages).
Goodfellow, “Titanium Dioxide—Titania (TiO2)”, AZO Materials; Publication [online], Jan. 11, 2002 [retrieved Feb. 19, 2020], Retrieved from the Internet: <URL: https://www.azom.com/article.aspx?ArticleID=1179>, (9 pages).
Hartley, Richard et al., “Multiple View Geometry in Computer Vision”, Cambridge University Press, 2003, pp. 1-673.
Jacob, Robert J. , “Eye Tracking in Advanced Interface Design”, Human-Computer Interaction Lab, Naval Research Laboratory, Washington, D.C., date unknown. 2003, pp. 1-50.
Lee, Juho et al., “Set transformer: A framework for attention-based permutation-invariant neural networks”, ICML, arXiv:1810.00825v3 [cs.LG] May 26, 2019, (17 pages).
Leordeanu, Marius et al., “A spectral technique for correspondence problems using pairwise constraints”, Proceedings of (ICCV) International Conference on Computer Vision, vol. 2, pp. 1482-1489, Oct. 2005, (8 pages).
Levola, T. , “Diffractive Optics for Virtual Reality Displays”, Journal of the SID Eurodisplay May 14, 2005, XP008093627, chapters 2-3, Figures 2 and 10, pp. 467-475.
Levola, Tapani , “Invited Paper: Novel Diffractive Optical Components for Near to Eye Displays—Nokia Research Center”, SID 2006 Digest, 2006 SID International Symposium, Society for Information Display, vol. XXXVII, May 24, 2005, chapters 1-3, figures 1 and 3, pp. 64-67.
Li, Yujia et al., “Graph matching networks for learning the similarity of graph structured objects”, ICML, arXiv:1904.12787v2 [cs.LG] May 12, 2019, (18 pages).
Li, Zhengqi et al., “Megadepth: Learning single-view depth prediction from internet photos”, In CVPR, arXiv:1804.00607v4 [cs.CV] Nov. 28, 2018, (10 pages).
Loiola, Eliane M. et al., “A survey for the quadratic assignment problem”, European journal of operational research, 176(2): 2007, pp. 657-690.
Lowe, David G., “Distinctive image features from scale-invariant keypoints”, International Journal of Computer Vision, 60(2): 91-110, 2004, (28 pages).
Luo, Zixin et al., “ContextDesc: Local descriptor augmentation with cross-modality context”, CVPR, arXiv:1904.04084v1 [cs.CV] Apr. 8, 2019, (14 pages).
Memon, F. et al., “Synthesis, Characterization and Optical Constants of Silicon Oxycarbide”, EPJ Web of Conferences; Publication [online], Mar. 23, 2017 [retrieved Feb. 19, 2020], <URL: https://www.epj-conferences.org/articles/epjconf/pdf/2017/08/epjconf_nanop2017_00002.pdf> DOI: 10.1051/epjconf/201713900002, (8 pages).
Munkres, James , “Algorithms for the assignment and transportation problems”, Journal of the Society for Industrial and Applied Mathematics, 5(1): 1957, pp. 32-38.
Ono, Yuki et al., “LF-Net: Learning local features from images”, 32nd Conference on Neural Information Processing Systems (NIPS 2018), arXiv:1805.09662v2 [cs.CV] Nov. 22, 2018, (13 pages).
Paszke, Adam et al., “Automatic differentiation in Pytorch”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, (4 pages).
Peyré, Gabriel et al., “Computational Optimal Transport”, Foundations and Trends in Machine Learning, 11(5-6):355-607, 2019; arXiv:1803.00567v4 [stat.ML] Mar. 18, 2020, (209 pages).
Qi, Charles R. et al., “Pointnet++: Deep hierarchical feature learning on point sets in a metric space.”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA., (10 pages).
Qi, Charles R. et al., “Pointnet: Deep Learning on Point Sets for 3D Classification and Segmentation”, CVPR, arXiv:1612.00593v2 [cs.CV] Apr. 10, 2017, (19 pages).
Radenović, Filip et al., “Revisiting Oxford and Paris: Large-Scale Image Retrieval Benchmarking”, CVPR, arXiv:1803.11285v1 [cs.CV] Mar. 29, 2018, (10 pages).
Raguram, Rahul et al., “A comparative analysis of ransac techniques leading to adaptive real-time random sample consensus”, Computer Vision—ECCV 2008, 10th European Conference on Computer Vision, Marseille, France, Oct. 12-18, 2008, Proceedings, Part I, (15 pages).
Ranftl, René et al., “Deep fundamental matrix estimation”, European Conference on Computer Vision (ECCV), 2018, (17 pages).
Revaud, Jerome et al., “R2D2: Repeatable and Reliable Detector and Descriptor”, In NeurIPS, arXiv:1906.06195v2 [cs.CV] Jun. 17, 2019, (12 pages).
Rocco, Ignacio et al., “Neighbourhood Consensus Networks”, 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, Canada, arXiv:1810.10510v2 [cs.CV] Nov. 29, 2018, (20 pages).
Rublee, Ethan et al., “ORB: An efficient alternative to SIFT or SURF”, Proceedings of the IEEE International Conference on Computer Vision. 2564-2571.2011; 10.1109/ICCV.2011.612654, (9 pages).
Sattler, Torsten et al., “SCRAMSAC: Improving RANSAC's efficiency with a spatial consistency filter”, ICCV, 2009: 2090-2097., (8 pages).
Schonberger, Johannes L. et al., “Pixelwise view selection for un-structured multi-view stereo”, Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, Oct. 11-14, 2016, Proceedings, Part III, pp. 501-518, 2016.
Schonberger, Johannes L. et al., “Structure-from-motion revisited”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4104-4113, (11 pages).
Sinkhorn, Richard et al., “Concerning nonnegative matrices and doubly stochastic matrices.”, Pacific Journal of Mathematics, 1967, pp. 343-348.
Spencer, T. et al., “Decomposition of poly(propylene carbonate) with UV sensitive iodonium salts”, Polymer Degradation and Stability [online], Dec. 24, 2010 [retrieved Feb. 19, 2020], <URL: http://kohl.chbe.gatech.edu/sites/default/files/linked_files/publications/2011Decomposition%20of%20poly(propylene%20carbonate)%20with%20UV%20sensitive%20iodonium%20salts.pdf>; DOI: 10.1016/j.polymdegradstab.2010.12.003, (17 pages).
Tanriverdi, Vildan et al., “Interacting With Eye Movements in Virtual Environments”, Department of Electrical Engineering and Computer Science, Tufts University Proceedings of the SIGCHI conference on Human Factors in Computing Systems, Apr. 2000, pp. 1-8.
Thomee, Bart et al., “YFCC100m: The new data in multimedia research”, Communications of the ACM, 59(2):64-73, 2016; arXiv:1503.01817v2 [cs.MM] Apr. 25, 2016, (8 pages).
Torresani, Lorenzo et al., “Feature correspondence via graph matching: Models and global optimization”, Computer Vision—ECCV 2008, 10th European Conference on Computer Vision, Marseille, France, Oct. 12-18, 2008, Proceedings, Part II, (15 pages).
Tuytelaars, Tinne et al., “Wide baseline stereo matching based on local, affinely invariant regions”, BMVC, 2000, pp. 1-14.
Ulyanov, Dmitry et al., “Instance normalization: The missing ingredient for fast stylization”, arXiv:1607.08022v3 [cs.CV] Nov. 6, 2017, (6 pages).
Vaswani, Ashish et al., “Attention is all you need”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA; arXiv:1706.03762v5 [cs.CL] Dec. 6, 2017, (15 pages).
Veličković, Petar et al., “Graph attention networks”, ICLR, arXiv:1710.10903v3 [stat.ML] Feb. 4, 2018, (12 pages).
Villani, Cédric, “Optimal transport: old and new”, vol. 338. Springer Science & Business Media, Jun. 2008, pp. 1-998.
Wang, Xiaolong et al., “Non-local neural networks”, CVPR, arXiv:1711.07971v3[cs.CV] Apr. 13, 2018, (10 pages).
Wang, Yue et al., “Deep Closest Point: Learning representations for point cloud registration”, ICCV, arXiv:1905.03304v1 [cs.CV] May 8, 2019, (10 pages).
Wang, Yue et al., “Dynamic Graph CNN for learning on point clouds”, ACM Transactions on Graphics, arXiv:1801.07829v2 [cs.CV] Jun. 11, 2019, (13 pages).
Weissel, et al., “Process cruise control: event-driven clock scaling for dynamic power management”, Proceedings of the 2002 international conference on Compilers, architecture, and synthesis for embedded systems, Oct. 11, 2002, Retrieved on May 16, 2020 from <URL: https://dl.acm.org/doi/pdf/10.1145/581630.581668>, pp. 238-246.
Yi, Kwang M. et al., “Learning to find good correspondences”, CVPR, arXiv:1711.05971v2 [cs.CV] May 21, 2018, (13 pages).
Yi, Kwang Moo et al., “Lift: Learned invariant feature transform”, ECCV, arXiv:1603.09114v2 [cs.CV] Jul. 29, 2016, (16 pages).
Zaheer, Manzil et al., “Deep Sets”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA; arXiv:1703.06114v3 [cs.LG] Apr. 14, 2018, (29 pages).
Zhang, Jiahui et al., “Learning two-view correspondences and geometry using order-aware network”, ICCV; arXiv:1908.04964v1 [cs.CV] Aug. 14, 2019, (11 pages).
Zhang, Li et al., “Dual graph convolutional network for semantic segmentation”, BMVC, 2019; arXiv:1909.06121v3 [cs.CV] Aug. 26, 2020, (18 pages).
Extended European Search Report dated Jan. 22, 2021, European Patent Application No. 18890390.0, (11 pages).
Extended European Search Report dated Mar. 4, 2021, European Patent Application No. 19768418.6, (9 pages).
Final Office Action dated Mar. 1, 2021, U.S. Appl. No. 16/214,575, (29 pages).
Final Office Action dated Mar. 19, 2021, U.S. Appl. No. 16/530,776, (25 pages).
International Search Report and Written Opinion dated Feb. 12, 2021, International Application No. PCT/US20/60555, (25 pages).
International Search Report and Written Opinion dated Feb. 2, 2021, International PCT Patent Application No. PCT/US20/60550, (9 pages).
International Search Report and Written Opinion dated Dec. 3, 2020, International Patent Application No. PCT/US20/43596, (25 pages).
Non Final Office Action dated Jan. 26, 2021, U.S. Appl. No. 16/928,313, (33 pages).
Non Final Office Action dated Jan. 27, 2021, U.S. Appl. No. 16/225,961, (15 pages).
Non Final Office Action dated Mar. 3, 2021, U.S. Appl. No. 16/427,337, (41 pages).
Non Final Office Action dated May 26, 2021, U.S. Appl. No. 16/214,575, (19 pages).
Altwaijry, et al., “Learning to Detect and Match Keypoints with Deep Architectures”, Proceedings of the British Machine Vision Conference (BMVC), BMVA Press, Sep. 2016, [retrieved on Jan. 8, 2021] <URL: http://www.bmva.org/bmvc/2016/papers/paper049/index.html>, entire document, especially Abstract, pp. 1-6 and 9.
Butail, et al., “Putting the fish in the fish tank: Immersive VR for animal behavior experiments”, In: 2012 IEEE International Conference on Robotics and Automation, May 18, 2012, Retrieved on Nov. 14, 2020 from <http://cdcl.umd.edu/papers/icra2012.pdf>, entire document, (8 pages).
Lee, et al., “Self-Attention Graph Pooling”, Cornell University Library/Computer Science/Machine Learning, Apr. 17, 2019 [retrieved on Jan. 8, 2021] from the Internet <URL: https://arxiv.org/abs/1904.08082>, entire document.
Libovicky, et al., “Input Combination Strategies for Multi-Source Transformer Decoder”, Proceedings of the Third Conference on Machine Translation (WMT), vol. 1: Research Papers, Belgium, Brussels, Oct. 31-Nov. 1, 2018; retrieved on Jan. 8, 2021 from <URL: https://doi.org/10.18653/v1/W18-64026>, entire document, pp. 253-260.
Molchanov, Pavlo et al., “Short-range FMCW monopulse radar for hand-gesture sensing”, 2015 IEEE Radar Conference (RadarCon) (2015), pp. 1491-1496.
Sarlin, et al., “SuperGlue: Learning Feature Matching with Graph Neural Networks”, Cornell University Library/Computer Science/Computer Vision and Pattern Recognition, Nov. 26, 2019 [retrieved on Jan. 8, 2021] from the Internet <URL: https://arxiv.org/abs/1911.11763>, entire document.
Communication Pursuant to Article 94(3) EPC dated Oct. 21, 2021, European Patent Application No. 16207441.3, (4 pages).
Communication Pursuant to Rule 164(1) EPC dated Jul. 27, 2021, European Patent Application No. 19833664.6, (11 pages).
Extended European Search Report dated Jun. 30, 2021, European Patent Application No. 19811971.1, (9 pages).
Extended European Search Report dated Jul. 16, 2021, European Patent Application No. 19810142.0, (14 pages).
Extended European Search Report dated Jul. 30, 2021, European Patent Application No. 19839970.1, (7 pages).
Extended European Search Report dated Oct. 27, 2021, European Patent Application No. 19833664.6, (10 pages).
Extended European Search Report dated Sep. 20, 2021, European Patent Application No. 19851373.1, (8 pages).
Extended European Search Report dated Sep. 28, 2021, European Patent Application No. 19845418.3, (13 pages).
Final Office Action dated Jun. 15, 2021, U.S. Appl. No. 16/928,313, (42 pages).
Final Office Action dated Sep. 17, 2021, U.S. Appl. No. 16/938,782, (44 pages).
Non Final Office Action dated Aug. 4, 2021, U.S. Appl. No. 16/864,721, (51 pages).
Non Final Office Action dated Jul. 9, 2021, U.S. Appl. No. 17/002,663, (43 pages).
Non Final Office Action dated Jul. 9, 2021, U.S. Appl. No. 16/833,093, (47 pages).
Non Final Office Action dated Jun. 10, 2021, U.S. Appl. No. 16/938,782, (40 pages).
Non Final Office Action dated Jun. 29, 2021, U.S. Appl. No. 16/698,588, (58 pages).
Non Final Office Action dated Sep. 29, 2021, U.S. Appl. No. 16/748,193, (62 pages).
Giuseppe, Donato, et al., “Stereoscopic helmet mounted system for real time 3D environment reconstruction and indoor ego-motion estimation”, Proc. SPIE 6955, Head- and Helmet-Mounted Displays XIII: Design and Applications, 69550P.
Sheng, Liu, et al., “Time-multiplexed dual-focal plane head-mounted display with a liquid lens”, Optics Letters, Optical Society of America, US, vol. 34, No. 11, Jun. 1, 2009, XP001524475, ISSN: 0146-9592, pp. 1642-1644.
Communication according to Rule 164(1) EPC, European Patent Application No. 20753144.3, (11 pages).
Extended European Search Report dated Jan. 28, 2022, European Patent Application No. 19815876.8, (9 pages).
Final Office Action dated Feb. 23, 2022, U.S. Appl. No. 16/748,193, (23 pages).
Final Office Action dated Feb. 3, 2022, U.S. Appl. No. 16/864,721, (36 pages).
Non Final Office Action dated Feb. 2, 2022, U.S. Appl. No. 16/783,866, (8 pages).
Non Final Office Action dated Apr. 1, 2022, U.S. Appl. No. 17/256,961, (65 pages).
Non Final Office Action dated Mar. 31, 2022, U.S. Appl. No. 17/257,814, (60 pages).
Communication Pursuant to Article 94(3) EPC dated Jan. 4, 2022, European Patent Application No. 20154070.5, (8 pages).
Extended European Search Report dated Jan. 4, 2022, European Patent Application No. 19815085.6, (9 pages).
Related Publications (1)
Number Date Country
20210081666 A1 Mar 2021 US
Continuations (1)
Number Date Country
Parent 15552897 US
Child 17105848 US