The present disclosure relates to optical components, and in particular to light sources for visual display devices and 3D imaging.
Visual displays provide information to one or more viewers, including still images, video, and data. Visual displays have applications in diverse fields including entertainment, education, engineering, science, professional training, and advertising, to name just a few examples. Some visual displays, such as TV sets, display images to several users, while some visual display systems, such as head-mounted displays (HMDs) and near-eye displays (NEDs), are intended for individual users.
An artificial reality system generally includes an HMD, a NED, or the like (e.g., a headset or a pair of glasses) configured to present content to a user. A NED or an HMD may display virtual objects or combine images of real objects with virtual objects, as in virtual reality (VR), augmented reality (AR), or mixed reality (MR) applications. The displayed VR/AR/MR content can be three-dimensional (3D) to enhance the experience and, possibly, to match virtual objects to real objects when observable by the user. An HMD or NED for VR applications may include ranging means to orient the user in a surrounding environment, as the surrounding scenery may not be directly visible to the user.
Because HMDs and NEDs are usually worn on the head of a user, they can benefit from a compact and efficient configuration, including efficient light sources and illuminators for eye tracking and ranging.
Exemplary embodiments will now be described in conjunction with the drawings, which are not to scale, in which like elements are indicated with like reference numerals, and in which:
While the present teachings are described in conjunction with various embodiments and examples, it is not intended that the present teachings be limited to such embodiments. On the contrary, the present teachings encompass various alternatives and equivalents, as will be appreciated by those of skill in the art. All statements herein reciting principles, aspects, and embodiments of this disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
As used herein, the terms “first”, “second”, and so forth are not intended to imply sequential ordering, but rather are intended to distinguish one element from another, unless explicitly stated. Similarly, sequential ordering of method steps does not imply a sequential order of their execution, unless explicitly stated.
Embodiments described herein relate to an apparatus and related method for object imaging, including three-dimensional (3D) imaging of various objects. The imaging may include illuminating an object with structured light of different wavelengths, and detecting reflections of the structured light with a camera to obtain a set of 2D images at different illumination wavelengths. The apparatus may include a structured light projector, or illuminator, that includes a liquid crystal (LC) tunable filter (LCTF) in series with a spatial filter, e.g. an LC spatial light modulator (SLM). The illuminator may be configured to project, upon the object or scene, an illumination pattern composed of alternating bright and dark regions, e.g. a sequence of spaced apart bright stripes or fringes.
The method may include tuning the SLM to provide different illumination patterns, e.g. of different periods and/or duty cycles, and capturing the reflected light for the different patterns to obtain two or more sets of the 2D images. Each of these sets may include one or more images captured at different illumination wavelengths. The plurality of images so obtained may then be used, e.g., to generate an enhanced image of the object that has more information about the object than any of the individually captured images. For example, compositionally different parts of the object (e.g. a human eye) may have different reflectivity at different illumination wavelengths, and those parts may be identified by comparing the images taken at different illumination wavelengths.
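By way of a non-limiting illustration only, the capture sequence outlined above may be organized as in the following Python sketch. The slm, lctf, and camera objects and their methods (set_pattern, set_center_wavelength, capture) are hypothetical interfaces assumed for this sketch and are not part of the present disclosure.

    # Illustrative sketch: step a hypothetical SLM driver through illumination
    # patterns and a hypothetical LCTF driver through center wavelengths,
    # collecting one 2D image per combination.
    def capture_image_sets(slm, lctf, camera, patterns, wavelengths_nm):
        """Return {pattern_index: {wavelength: image}} for all combinations."""
        image_sets = {}
        for pattern_index, pattern in enumerate(patterns):
            slm.set_pattern(pattern)            # e.g. stripe period and/or duty cycle
            images_at_pattern = {}
            for wl in wavelengths_nm:
                lctf.set_center_wavelength(wl)  # tune the illumination pass band
                images_at_pattern[wl] = camera.capture()
            image_sets[pattern_index] = images_at_pattern
        return image_sets

Each inner dictionary of the returned structure is one of the image sets described above, i.e. images of the same illumination pattern captured at different illumination wavelengths.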
In some embodiments, images of the object using illumination patterns with alternating bright and dark regions may be used to approximately determine a 3D shape of the object, e.g. based on apparent deformation of the bright regions of the illumination pattern after reflection from the object. The illuminator may be conveniently embodied using a compact and low-weight multi-layer LC structure, which may be useful e.g. in an HMD or a NED, e.g. for eye tracking and/or for ranging and 3D imaging of the surroundings.
An aspect of the present disclosure relates to an illuminator comprising a light source for emitting light spanning a wavelength band, a liquid crystal tunable filter (LCTF) for spectrally filtering the light and for tuning a center wavelength of the light within the wavelength band, and a spatial filter for spatially modulating the light to provide a patterned light beam comprising a pattern of alternating bright and dark regions in a cross-section of the patterned light beam.
Some implementations may comprise a camera and a display system for projecting images to a viewer. In some of such implementations, a controller operably coupled to the LCTF and the camera may be provided. In some of such implementations, the patterned light beam illuminates an eye of the viewer, and the camera is configured to receive a portion of the patterned light beam reflected from the eye. In some of such implementations the wavelength band of the light source may be in an infrared portion of the spectrum. The controller may be configured to wavelength-tune the LCTF, obtain eye images captured by the camera in coordination with the wavelength tuning, and process the eye images to obtain eye tracking information, wherein different ones of the eye images correspond to different eye illuminating wavelengths. In some of the above implementations, the patterned light beam may illuminate at least a part of an environment surrounding the viewer, the camera may be configured to receive a portion of the patterned light beam reflected from one or more objects in the environment, and the controller may be configured to process the images to obtain depth information for the environment.
In any of the above implementations the spatial filter may be configured to vary the pattern of alternating bright and dark regions. The controller may be configured to obtain the eye images captured by the camera at different patterns of the alternating bright and dark regions of the patterned light beam illuminating the eye, and to process the images to obtain eye tracking information.
In any of the above implementations, the light may be incident on the spatial filter as a polarized light beam, and the spatial filter may comprise a patterned waveplate configured to spatially modulate a polarization state of the polarized light beam, such that the polarization state at an output of the patterned waveplate alternates between first and second orthogonal polarization states in a cross-section of the polarized light beam. The spatial filter may further comprise a polarizer downstream of the patterned waveplate for blocking light in the first polarization state while propagating light in the second polarization state to provide the patterned light beam. In some of such implementations the patterned waveplate may comprise an array of alternating first and second waveplate segments having differing retardance. The first waveplate segments may have e.g. an approximately half-wave retardance, and the second waveplate segments may have e.g. an approximately zero retardance or an approximately full-wave retardance. In some implementations the spatial filter may comprise a spatial light modulator (SLM). The SLM may comprise e.g. an array of liquid crystal pixels having individually tunable retardance, and a polarizer downstream of the array of liquid crystal pixels.
In any of the above implementations the LCTF may comprise a sequence of serially coupled blocks, each block comprising a tunable waveplate upstream of a polarizer. Each tunable waveplate may comprise, e.g., an LC layer having a tunable retardance, wherein retardance values of different tunable waveplates of the LCTF are in a binary relationship with one another.
An aspect of the present disclosure provides a display system for projecting images to a viewer. The display system comprises an illuminator for illuminating an eye of the viewer with a patterned light beam comprising a pattern of alternating bright and dark regions in a cross-section of the patterned light beam, the illuminator comprising a spatial filter operable to spatially filter light incident thereon to output the patterned light beam, and a tunable wavelength filter upstream or downstream of the spatial filter. The display system further comprises a camera disposed to receive a portion of the patterned light beam reflected from the eye to capture a plurality of eye images, and a controller configured to receive the eye images captured by the camera. The controller is further configured to wavelength-tune the tunable wavelength filter, and to process the eye images for obtaining eye tracking information, different ones of the eye images being captured at different eye illuminating wavelengths.
In some implementations of the display system, the spatial filter may be configured to vary the pattern of bright regions, and the controller may be configured to receive a set of eye images captured by the camera for different patterns of the bright regions, and to process the set of eye images for obtaining the eye tracking information.
An aspect of the present disclosure provides a method for imaging an object. The method comprises: filtering light spanning a wavelength band with a liquid crystal tunable filter (LCTF), and spatially modulating the light to provide a patterned light beam comprising a pattern of alternating bright and dark regions in a cross-section of the patterned light beam.
In some implementations, the method may further comprise illuminating the object with the patterned light beam, receiving, by an image-capturing camera, a portion of the patterned light beam reflected from the object, wavelength-tuning the LCTF to vary a center wavelength of the patterned light beam, capturing, with the camera, images of the object in coordination with the wavelength tuning, the images comprising at least two images captured for two different center wavelengths of the patterned light beam, and processing the images to obtain 3D information about the object.
In some implementations, the method may further comprise varying the pattern of alternating bright and dark regions in coordination with the capturing, so that the images comprise at least two images captured for two different patterns of the patterned light beam.
In some implementations, the method may be used in a display system for displaying images to a viewer, the object may comprise an eye of the viewer, and the processing may include obtaining eye tracking information. In some implementations of the method, the illuminating may comprise illuminating a scene comprising the object and the processing includes obtaining a 3D image of the scene.
The wavelength-tunable light source 110 may include an optical emitter 112 for emitting light 111 spanning a wavelength band Δλ (e.g. a wavelength band 311).
Referring to
Turning to
In the example embodiment shown, each one of the waveplates 420 is half as thick as the preceding waveplate 420. The thinnest one of the waveplates 420 defines the free spectral range of the LCTF 400, while the thickest one defines the wavelength selectivity, e.g. the spectral bandwidth of the filtering function of the LCTF 400. The filter sections do not necessarily have to be disposed in the order of decreasing or increasing thickness or retardance; it is generally sufficient that the retardance values of different tunable waveplates of the LCTF are in a binary relationship with one another.
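By way of a numerical illustration (not part of the disclosure), the combined transmission of such a binary stack may be estimated with the following Python sketch. The per-block transmission cos²(πR/λ) of a retarder of retardance R between parallel polarizers is a standard result; the retardance R0, the number of blocks, and the wavelength range below are assumed example values.

    import numpy as np

    wavelengths = np.linspace(800e-9, 1000e-9, 2000)   # scanned band, meters (assumed)
    R0 = 20e-6                                         # retardance of the thickest waveplate (assumed)
    num_blocks = 4

    T = np.ones_like(wavelengths)
    for k in range(num_blocks):
        R_k = R0 / (2 ** k)                            # binary relationship between blocks
        T *= np.cos(np.pi * R_k / wavelengths) ** 2    # one waveplate-plus-polarizer block

    # The thinnest block sets the spacing of the pass bands (free spectral range);
    # the thickest block sets their width (wavelength selectivity).
    peak_wl = wavelengths[np.argmax(T)]
    print(f"strongest pass band near {peak_wl * 1e9:.1f} nm")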
Each of the waveplates 420 may include an LC layer 430, the retardance of which may be tuned by varying a voltage applied thereto, e.g. from a control voltage source 450. In some embodiments, one or more of the waveplates 420 may further include a fixed-retardance waveplate, e.g. a quartz waveplate 440. Although the LCTF 400 as shown in
The patterned LC waveplate 510 is composed of first waveplate regions 511 alternating with second waveplate regions 512. In the illustrated embodiment, the regions 511 and 512 have the shape of stripes, but they may be shaped differently in other embodiments. The first waveplate regions 511 may have approximately half-wave retardance at the operating wavelengths λ, i.e. generally (2m+1)λ/2 retardance, where m is an integer. In other words, the first waveplate regions 511 may have a retardance of an odd number of half-waves of the input light beam 501. The second waveplate regions 512 may have approximately zero-wave or full-wave retardance at the operating wavelengths λ, i.e. generally i·λ retardance, where i is an integer. In other words, the second waveplate regions 512 may have a retardance of an integer number of waves of the input light beam 501. Here “approximately” means to within ±0.1λ.
The input light beam 501 may be polarized, e.g. linearly polarized in a plane defined by the orientation of an output polarizer of the LCTF 120, e.g. the polarizer 410₄ of the LCTF 400 described above.
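The effect of the patterned waveplate 510 followed by a polarizer may be checked with a short Jones-calculus sketch in Python; the 45-degree fast-axis orientation and the x/y polarizer axes below are assumptions made for illustration, not disclosed parameters.

    import numpy as np

    def waveplate(retardance_waves, axis_angle_rad):
        """Jones matrix of a linear retarder with its fast axis at axis_angle_rad."""
        g = np.pi * retardance_waves                   # half the retardation phase
        c, s = np.cos(axis_angle_rad), np.sin(axis_angle_rad)
        rot = np.array([[c, -s], [s, c]])
        ret = np.array([[np.exp(-1j * g), 0], [0, np.exp(1j * g)]])
        return rot @ ret @ rot.T

    x_polarized = np.array([1.0, 0.0])                 # input beam, x-polarized
    y_polarizer = np.array([[0.0, 0.0], [0.0, 1.0]])   # downstream polarizer, y only

    half_wave = waveplate(0.5, np.pi / 4)              # first waveplate regions 511
    full_wave = waveplate(1.0, np.pi / 4)              # second waveplate regions 512

    for name, plate in (("regions 511", half_wave), ("regions 512", full_wave)):
        out = y_polarizer @ (plate @ x_polarized)
        print(name, "transmitted intensity:", round(float(np.sum(np.abs(out) ** 2)), 3))

In this sketch the half-wave regions 511 rotate the x-polarization to the orthogonal y-polarization, which the downstream polarizer transmits (intensity near 1), while the integer-wave regions 512 leave the polarization unchanged and their light is blocked (intensity near 0), producing the alternating bright and dark stripes.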
In an example embodiment, the LC molecules 618 may be tilted relative to a normal to the electrodes 615 in the absence of the electric field, at an angle that may be defined by alignment layers (not shown) at one or both sides of the LC layer 612. The tilted orientation of the LC molecules 618 provides the LC layer with a non-zero retardance, e.g. approximately a half-wave retardance. By applying a suitable voltage between the electrodes 615, the LC molecules may be forced to align approximately along the electric field, as schematically illustrated at 619, which may correspond to an approximately zero retardance.
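By way of a simplified toy model only (the disclosure does not specify a particular voltage response), the decrease of the LC layer retardance with drive voltage may be pictured as follows; the threshold, scale, and curve shape are assumptions, and real LC cells follow material-specific response curves.

    def lc_retardance_waves(voltage_v, v_threshold=1.0, v_scale=1.5, max_waves=0.5):
        """Toy curve: retardance falls from ~half-wave toward zero above a threshold."""
        v = max(voltage_v - v_threshold, 0.0)
        return max_waves / (1.0 + (v / v_scale) ** 2)

    for v in (0.0, 1.0, 2.5, 5.0, 10.0):
        print(f"{v:4.1f} V -> {lc_retardance_waves(v):.3f} waves")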
Referring to
In embodiments of the SLM 600 including the 2D-patterned waveplate 610B, the SLM 600 may be operated to provide a variable 2D pattern of bright and dark regions in a cross-section of the output light beam 503.
When a non-flat object or a plurality of objects located at different depths is illuminated by an embodiment of the wavelength-tunable optical pattern projector 100 (“illuminator 100”) generating the patterned light beam 131 or 503, the bright regions in the reflected light, e.g. regions 231 or 722, become distorted, i.e. change their shape, with the visible distortion being dependent on the 3D shape of the object or the difference in the location depth of the objects. By receiving the reflections of the patterned light beam from the object(s) with a camera comprising an array of photo-sensitive elements, and processing the corresponding images of the object(s) captured by the camera to detect and analyze the reflection-induced distortions of various illumination patterns, a 3D image or model of the object(s), e.g. a 3D point cloud, may be generated using a suitably programmed computer. By processing the images obtained for different center wavelengths λc of the patterned light beam, object(s) or parts thereof having different material composition may be discerned, and in some embodiments identified, e.g. based on known features of reflectance spectra thereof.
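As a non-limiting numerical illustration of recovering depth from stripe distortion, the following Python sketch assumes the classic fringe-projection geometry, with the camera and the projector separated by a baseline d and a flat reference plane at a distance L; the relation h = Δx·L/(d + Δx) and all numbers are example assumptions rather than disclosed parameters.

    import numpy as np

    def height_from_stripe_shift(dx_m, L_m, d_m):
        """Height above the reference plane from a lateral stripe shift dx."""
        return dx_m * L_m / (d_m + dx_m)           # small-shift approximation, h << L

    # Stripe centroids detected on the flat reference plane and on the object:
    x_ref = np.array([0.010, 0.020, 0.030])        # meters (assumed data)
    x_obj = np.array([0.0105, 0.0212, 0.0309])     # same stripes reflected by the object
    heights = height_from_stripe_shift(x_obj - x_ref, L_m=0.5, d_m=0.1)
    print(np.round(heights * 1e3, 2), "mm")        # per-stripe height estimates

Repeating such estimates over many stripes and over several illumination patterns yields the 3D point cloud mentioned above.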
Referring now to
The patterned light beam obtained in this way may then be used to illuminate the object, to capture one or more sets of images of the object at various illumination wavelengths and/or illumination patterns. The set(s) of these images may then be processed to obtain an enhanced image of the object, i.e. an image of the object that includes more information about the object than any single one of the images. In some embodiments, the set(s) of these two-dimensional (2D) images may be used to obtain a three-dimensional (3D) model or 3D image of the object.
Referring to
Images of the object are captured (
In some embodiments, the wavelength tuning 950 and capturing 960 of the method 900B of
Turning to
In a typical embodiment, the patterned light beam 1111 is invisible to the eye 1150, e.g. a beam of infrared light. The display system 1100 may further include an image projecting module 1140 for projecting images to the viewer using visible light. The image projecting module 1140 may include, for example, one or more image conveying waveguides for conveying image-carrying light from an image source (not shown), e.g. a display panel or a scanning projector, to the eye 1150.
In an embodiment, the display system may implement an embodiment of the methods 900A and 900B as described above, for illuminating the eye 1150. In such an embodiment, the controller 1130 is configured to perform, in cooperation with the camera 1120 and the illuminator 1110, the method 900A or 900B described above.
By way of an illustrative non-limiting example, the images 1001, 1002, 1003 (
The controller may then process the sets of images captured for two or more of the patterns to obtain the eye tracking information, such as the gaze direction and the spatial position of the eye 1150, e.g. of the pupil of the eye 1150. It is to be appreciated that the number of center wavelengths λc of the beam 1111 and the number of different spatial illumination patterns 710 in the description above are by way of example only, and the numbers of illumination wavelengths and illumination patterns used in various embodiments may be different from the above description, e.g. in a range from 2 to 20 wavelengths λc and/or from 2 to 20 illumination patterns 710.
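By way of a hypothetical illustration of processing eye images captured at different center wavelengths λc, the Python sketch below flags pixels by a per-pixel reflectance ratio between two co-registered captures; the example wavelengths (850 nm and 940 nm), the threshold, and the helper name are assumptions for the sketch only.

    import numpy as np

    def classify_by_spectral_ratio(img_wl1, img_wl2, ratio_threshold=1.3, eps=1e-6):
        """Mask of pixels reflecting at wavelength 1 at least ratio_threshold times
        more strongly than at wavelength 2 (e.g. candidate iris vs. sclera pixels)."""
        ratio = img_wl1.astype(np.float64) / (img_wl2.astype(np.float64) + eps)
        return ratio > ratio_threshold

    # Stand-in data for two co-registered captures (same pattern, two wavelengths):
    img_850 = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
    img_940 = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
    mask = classify_by_spectral_ratio(img_850, img_940)
    print("pixels flagged:", int(mask.sum()))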
Referring now to
In some embodiments, the patterned light beam 1211 is invisible to the eye 1150, e.g. a beam of infrared light. In some embodiments, the patterned light beam 1211 is a beam of visible light. The display system 1200 may further include an image projecting module 1240 for projecting images to the eye 1150 of the viewer using visible light. The image projecting module 1240 may include, for example, one or more image conveying waveguides for conveying image-carrying light from an image source (not shown), e.g. a display panel or a scanning projector, to the eye 1150.
In an embodiment, the display system 1200 may be configured to implement an embodiment of the method 900A or 900B described above.
Referring to
Turning to
In some embodiments, the front body 1402 includes an inertial measurement unit (IMU) 1410 for tracking acceleration of the HMD 1400, and position sensors 1412 for tracking position of the HMD 1400. The IMU 1410 is an electronic device that generates data indicating a position of the HMD 1400 based on measurement signals received from one or more of the position sensors 1412, which generate one or more measurement signals in response to motion of the HMD 1400. Examples of position sensors 1412 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, one or more other suitable types of sensors that detect motion, a type of sensor used for error correction of the IMU 1410, or some combination thereof. The position sensors 1412 may be located external to the IMU 1410, internal to the IMU 1410, or some combination thereof.
The HMD 1400 may further include an illuminator 1411 and a depth camera assembly (DCA) 1408. The illuminator 1411 may be an embodiment of the illuminator 100 described above.
The HMD 1400 may further include an eye tracking system 1414 for determining orientation and position of the user's eyes in real time. The eye tracking system 1414 may include a tunable patterned-beam illuminator for illuminating an eye of the user wearing the HMD 1400 with a sequence of illumination wavelengths and/or illumination patterns, and a camera for capturing a sequence of corresponding eye images, e.g. as described above.
The functions of the various elements described above as “processors” and/or “controllers” may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.
Embodiments of the present disclosure may include, or be implemented in conjunction with, an artificial reality system. An artificial reality system adjusts sensory information about the outside world obtained through the senses, such as visual information, audio, touch (somatosensation) information, acceleration, balance, etc., in some manner before presentation to a user. By way of non-limiting examples, artificial reality may include virtual reality (VR), augmented reality (AR), mixed reality (MR), hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include entirely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, somatic or haptic feedback, or some combination thereof. Any of this content may be presented in a single channel or in multiple channels, such as in a stereo video that produces a three-dimensional effect for the viewer. Furthermore, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in artificial reality and/or are otherwise used in (e.g., perform activities in) artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a wearable display such as an HMD connected to a host computer system, a standalone HMD, a near-eye display having a form factor of eyeglasses, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
The present disclosure is not to be limited in scope by the specific embodiments described herein. Other various embodiments and modifications, in addition to those described herein, may be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Such other embodiments and modifications are intended to fall within the scope of the present disclosure. Further, although the present disclosure has been described herein in the context of a particular implementation in a particular environment for a particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present disclosure may be beneficially implemented in any number of environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present disclosure as described herein.