This description relates to capturing optical data.
Head-mounted devices can have a camera that captures images of objects external to the head-mounted device and another camera that captures images of an eye of a user who is wearing the head-mounted device.
An apparatus, such as a head-mounted device, includes a camera that captures both visible light that passes through a lens and infrared light that reflects off of the lens. The lens reflects the infrared light from an interior side of the lens and passes visible light.
According to an example, a head-mounted device comprises a frame; a lens coupled to the frame, the lens being configured to reflect infrared light from an interior side of the lens and pass visible light; and a camera configured to capture the infrared light reflected from the interior side of the lens and to capture the visible light passing through the lens.
According to an example, a method performed by a head-mounted device comprises capturing, by a camera included in the head-mounted device, infrared light reflected off of a lens included in the head-mounted device, the infrared light including an image of an eye; determining, by a processor included in the head-mounted device based on the image of the eye, a direction of a gaze of the eye; capturing, by the camera included in the head-mounted device, visible light passing through the lens included in the head-mounted device, the visible light including an image of an object beyond the lens included in the head-mounted device; and determining motion of the head-mounted device based on the image of the object.
According to an example, a non-transitory computer-readable storage medium comprises instructions stored thereon. When executed by at least one processor, the instructions are configured to cause a head-mounted device to capture, by a camera included in the head-mounted device, infrared light reflected off of a lens included in the head-mounted device, the infrared light including an image of an eye; determine, based on the image of the eye, a direction of a gaze of the eye; capture, by the camera included in the head-mounted device, visible light passing through the lens included in the head-mounted device, the visible light including an image of an object beyond the lens included in the head-mounted device; and determine motion of the head-mounted device based on the image of the object.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
Like reference numbers refer to like elements.
Head-mounted devices, such as augmented reality glasses, can include an external camera that captures images of objects in front of a user wearing the head-mounted device as well as a gaze-tracking camera that captures images of an eye of the user. A technical problem with including both the external camera and the gaze-tracking camera in the head-mounted device is that two cameras add weight and expense, and the external camera occupies space on a front portion of the head-mounted device. A technical solution to the technical problem with including two cameras is to capture images of the objects in front of the user and images of the eye with a single camera. A lens included in the head-mounted device reflects infrared light and passes visible light. The single camera captures infrared light images of the eye that are reflected off of the lens and captures visible light images of external objects that pass through the lens. Technical benefits of capturing the images of external objects and the eye with a single camera include reduced weight, reduced occupied space, and reduced cost.
The head-mounted device 100 includes one or more lenses, such as a left lens 104A supported by and/or coupled to the frame and/or the left rim 102A and a right lens 104B supported by and/or coupled to the frame and/or right rim 102B. The one or more lenses are configured to reflect infrared light from an interior side of the lens and to pass visible light through the lens. Infrared light can be electromagnetic radiation in a spectral band between microwaves and visible light. Visible light can be electromagnetic radiation that can be perceived by a human eye, and can have wavelengths between infrared and ultraviolet.
The head-mounted device 100 includes a camera 108. In some implementations, the camera 108 is coupled to one of the temple arms 106A, 106B, such as to the right temple arm 106B. While the camera 108 is shown extending from the right temple arm 106B, in other implementations the camera 108 can be supported by and/or coupled to other portions of the head-mounted device 100.
The camera 108 captures the infrared light reflected from the interior side of the lens 104B. Capturing the infrared light reflected from the interior side of the lens 104B enables the camera 108 to capture one or more images of the eye 110. The camera 108 captures visible light passing through the lens 104B. Capturing visible light passing through the lens 104B enables the camera 108 to capture one or more images of one or more objects beyond and/or external to the head-mounted device 100.
In some examples, the head-mounted device 100 includes an illuminator 112. While the illuminator 112 is shown attached to the camera 108 and interior to the camera 108, in other examples the head-mounted device 100 can include one or more illuminators supported by and/or coupled to other portions of the head-mounted device 100.
The one or more illuminators, such as the illuminator 112, can be an infrared light source. The illuminator 112 projects and/or transmits infrared light onto the interior portion of the lens 104B. The infrared light projected and/or transmitted onto the interior portion of the lens 104B reflects off of the interior portion of the lens 104B and onto the eye 110. The infrared light reflected onto the eye 110 scatters off of the eye 110 onto the lens 104B, and reflects off of the interior portion of the lens 104B onto the camera 108. The camera 108 is thereby able to capture one or more infrared images of the eye 110.
In some examples, the head-mounted device 100 includes a processor 114. The processor 114 can perform operations based on data captured by the camera 108. In some examples, the processor 114 can determine a gaze direction of the eye 110 based on the infrared light images of the eye and/or infrared light captured by the camera 108. In some examples, the processor 114 can determine an orientation and/or motion of the head-mounted device 100 based on visible light images of objects and/or visible light captured by the camera 108. In some examples, the processor 114 is near the camera 108, such as supported by and/or coupled to the same right temple arm 106B as the camera 108.
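By way of a non-limiting illustration, a gaze determination of this kind could be sketched as follows. The sketch assumes a dark-pupil approach using the OpenCV library; the threshold value and the PIXELS_PER_DEGREE calibration constant are illustrative assumptions rather than part of this description.

    import cv2
    import numpy as np

    PIXELS_PER_DEGREE = 12.0  # hypothetical calibration constant

    def estimate_gaze(ir_frame: np.ndarray) -> tuple[float, float]:
        """Return (yaw, pitch) of the gaze, in degrees, from an IR eye image."""
        # Under infrared illumination the pupil appears as the darkest region.
        _, mask = cv2.threshold(ir_frame, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            raise ValueError("no pupil candidate found")
        pupil = max(contours, key=cv2.contourArea)
        moments = cv2.moments(pupil)
        if moments["m00"] == 0:
            raise ValueError("degenerate pupil contour")
        cx = moments["m10"] / moments["m00"]
        cy = moments["m01"] / moments["m00"]
        # The pupil center's offset from the image center maps to a gaze angle.
        height, width = ir_frame.shape
        yaw = (cx - width / 2) / PIXELS_PER_DEGREE
        pitch = (cy - height / 2) / PIXELS_PER_DEGREE
        return yaw, pitch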
In some examples, the head-mounted device 100 includes an accelerometer and/or gyroscope, which can be included in an inertial measurement unit (IMU) 116. The IMU 116 can determine a specific force, angular rate, and/or orientation of the head-mounted device 100 and/or a portion of the head-mounted device 100 that the IMU 116 is supported by and/or coupled to (such as the right temple arm 106B). In some examples, the IMU 116 is supported by and/or coupled to the same portion of the head-mounted device 100 as the camera 108 and/or processor 114, such as to the right temple arm 106B. In some examples, the processor 114 determines the orientation and/or motion of the head-mounted device 100 based on visible light images of objects and/or visible light captured by the camera 108 as well as the specific force, angular rate, and/or orientation determined by the IMU 116.
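One way such a combination could be sketched is a complementary filter, shown below for a single yaw axis. The filter weight ALPHA is an illustrative tuning assumption, and the camera-derived yaw input stands in for an orientation estimate obtained from the visible light images; a real implementation might instead fuse a full six-degree-of-freedom state.

    ALPHA = 0.98  # weight on the high-rate gyro path; hypothetical tuning value

    def fuse_yaw(prev_yaw_deg, gyro_rate_dps, dt_s, visual_yaw_deg):
        """Blend an integrated gyro rate with a camera-derived yaw estimate."""
        # Dead-reckon from the previous estimate using the IMU angular rate.
        gyro_yaw = prev_yaw_deg + gyro_rate_dps * dt_s
        # The visual estimate corrects the slow drift of gyro integration.
        return ALPHA * gyro_yaw + (1.0 - ALPHA) * visual_yaw_deg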
In some examples, the camera 108 and/or a processor in communication with the camera 108 (such as the processor 114) crops a portion of the image captured by the camera 108. The camera 108 and/or processor can crop the portion (or portions) of the image captured by the camera 108 that does not include an image of the eye 110. Cropping a portion (or portions) of the image captured by the camera 108 that does not include the image of the eye 110 reduces memory consumption, reduces processing complexity, and/or enables an increase of a frame rate of capturing and/or processing images of the eye 110.
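Cropping to an eye region of interest could be sketched as follows; the ROI bounds are hypothetical and would in practice come from a prior eye-detection step or from calibration.

    import numpy as np

    EYE_ROI = (120, 80, 200, 160)  # (x, y, width, height); hypothetical bounds

    def crop_to_eye(frame: np.ndarray) -> np.ndarray:
        """Keep only the eye region, discarding pixels that would waste memory."""
        x, y, w, h = EYE_ROI
        # .copy() lets the full-resolution frame buffer be released immediately.
        return frame[y:y + h, x:x + w].copy()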
In some examples, the camera 108 adjusts a focus distance between a first distance, between the camera 108 and the eye 110, and a second distance, between the camera 108 and the object 302. The first distance can be a sum of a distance that the scattered reflected light 212B travels from the eye 110 to the interior side 206 of the lens 104B and a distance that the reflected infrared light 214 travels from the interior side 206 of the lens 104B to the camera 108. The second distance can be a distance that the visible light 304 travels from the object 302 to the camera 108. The second distance is greater than the first distance. The camera 108 can alternate and/or adjust the focus distance between the first distance, while the camera 108 is capturing infrared light, and the second distance, while the camera 108 is capturing visible light. The alternation and/or adjustment of the focus distance can be implemented by a geometric phase lens included in the camera 108 that electronically switches between near focus (to capture images of the eye 110) and far focus (to capture images of the object 302). In some implementations, the camera 108 alternates between a first frame rate for capturing images of the eye 110 and a second frame rate for capturing images of the object 302. The first frame rate can be higher than the second frame rate. The first frame rate can be between 80 Hertz and 100 Hertz, such as 90 Hertz. The second frame rate can be between 5 Hertz and 15 Hertz, such as 10 Hertz.
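Interleaving the two focus states and frame rates could be sketched as follows, using the example rates above (90 Hertz for the eye 110, 10 Hertz for the object 302); set_focus() and grab_frame() are hypothetical camera driver calls, not an actual API.

    import itertools
    import time

    NEAR, FAR = 0, 1  # focus states of the switchable geometric phase lens

    def capture_frames(camera):
        """Yield ('infrared', frame) and ('visible', frame) pairs, interleaved."""
        # 100 capture slots per second: 9 eye frames for every 1 world frame,
        # matching the example 90 Hertz / 10 Hertz rates above.
        for tick in itertools.count():
            if tick % 10 == 9:
                camera.set_focus(FAR)                   # far focus for the scene
                yield ("visible", camera.grab_frame())
            else:
                camera.set_focus(NEAR)                  # near focus for the eye
                yield ("infrared", camera.grab_frame())
            time.sleep(0.01)                            # crude 100 Hertz pacing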
The camera 108 can also include a grid of photosensors corresponding to the grid of filters included in the filter 400. Photosensors included in the camera 108 that are aligned with and/or correspond to infrared-pass filters that pass infrared light and block visible light can detect infrared light, such as light scattering off of the eye 110 and reflecting off of the interior side 206. Photosensors included in the camera 108 that are aligned with and/or correspond to visible-pass filters that pass visible light and block infrared light can detect visible light, such as light scattering and/or reflecting off of an object such as the object 302 and passing through the lens 104B. In some examples, the photosensors included in the camera 108 that are aligned with and/or correspond to visible-pass filters are divided into four color channels corresponding to cyan, magenta, yellow, and black, and the photosensors can sequentially alternate between the four color channels. In other examples, the photosensors aligned with and/or corresponding to visible-pass filters are divided into three color channels corresponding to red, green, and blue, and the photosensors can sequentially alternate between the three color channels.
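Separating the infrared-pass and visible-pass photosensor outputs could be sketched as follows, assuming a hypothetical two-by-two tile layout with one infrared site and one red, green, and blue site each; the actual layout of the filter 400 may differ.

    import numpy as np

    def split_mosaic(raw: np.ndarray):
        """Separate infrared-pass and visible-pass photosensor sites."""
        ir = raw[0::2, 0::2]  # infrared-pass sites: the eye 110 via reflection
        r = raw[0::2, 1::2]   # visible-pass sites: the scene through the lens
        g = raw[1::2, 0::2]
        b = raw[1::2, 1::2]
        visible = np.stack([r, g, b], axis=-1)  # quarter-resolution color image
        return ir, visible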
In some implementations, the head-mounted device 100 may include a see-through near-eye display. The displays 704A, 704B may be configured to project light from a display source onto a portion of teleprompter glass functioning as a beamsplitter seated at an angle (e.g., 30-45 degrees). The beamsplitter may have reflection and transmission values that allow the light from the display source to be partially reflected while the remaining light is transmitted through. Such an optic design may allow a user to see both physical items in the world through the lenses 104A, 104B, alongside content (such as digital images, user interface elements, virtual content, and the like) generated by the displays 704A, 704B. In some implementations, waveguide optics may be used to depict content on the displays 704A, 704B via outcoupled light 720A, 720B. The images projected by the displays 704A, 704B onto the lenses 104A, 104B may be translucent, allowing the user to see the images projected by the displays 704A, 704B as well as physical objects beyond the head-mounted device 100. The camera 108 and illuminator 112 are coupled to the right temple arm 106B.
In some examples, the head-mounted device 100 performs the determinations and/or calculations of eye gaze direction and motion independently, based on the captured images of the eye 110 and the measurements of the IMU 116. In some examples, the head-mounted device 100 performs measurements and/or gathers data, such as measurements performed by the IMU 116 and images captured by the camera 108, and sends the measurements and/or data to the computing device 800; the computing device 800 performs calculations and/or determinations, such as gaze direction and/or motion, and sends the calculations and/or determinations to the head-mounted device 100. In some examples, the head-mounted device 100 determines the direction of the gaze based on the infrared images and sends IMU measurements and visible images captured by the camera 108 to the computing device 800; the computing device 800 determines the motion of the head-mounted device 100 based on the IMU measurements and visible images and sends the determined motion to the head-mounted device 100.
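The third arrangement could be sketched as follows; the wire format, port, and host name are illustrative assumptions only.

    import json
    import socket

    def offload_motion(imu_samples, visible_jpeg, host="companion.local",
                       port=9000):
        """Send IMU data and a visible image; receive a motion estimate back."""
        payload = json.dumps({"imu": imu_samples}).encode() + b"\n" + visible_jpeg
        with socket.create_connection((host, port)) as conn:
            # Length-prefix the payload so the receiver can frame the message.
            conn.sendall(len(payload).to_bytes(4, "big") + payload)
            reply = conn.recv(4096)  # motion estimate from the computing device 800
        return json.loads(reply)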
In some examples, the method 900 further includes transmitting the infrared light onto the lens.
In some examples, the determining motion of the head-mounted device is based on the image of the object and inertial measurement data detected by an inertial measurement unit included in the head-mounted device.
In some examples, the method 900 further includes adjusting a focus distance of the camera from a first distance while capturing the infrared light to a second distance while capturing the visible light, the second distance being greater than the first distance.
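Tying the steps of the method 900 together, a minimal orchestration sketch follows; every helper shown (transmit_ir, set_focus, capture_ir, capture_visible, read, and the two estimator callables) is a hypothetical stand-in for the operations described above.

    def run_method_900(camera, illuminator, imu, estimate_gaze, estimate_motion):
        """One pass through the capture-and-determine steps described above."""
        illuminator.transmit_ir()                # transmit infrared onto the lens
        camera.set_focus("near")                 # first, shorter focus distance
        eye_image = camera.capture_ir()          # infrared reflected off the lens
        gaze = estimate_gaze(eye_image)          # direction of the gaze of the eye
        camera.set_focus("far")                  # second, greater focus distance
        object_image = camera.capture_visible()  # visible light through the lens
        motion = estimate_motion(object_image, imu.read())
        return gaze, motion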
Implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
Method steps may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, implementations may be implemented on a computer having a display device, e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Implementations may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end components. Components may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes, and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the embodiments of the invention.