Polarimetric imaging can generate an image of a scene that reveals details that are difficult to discern, or simply not visible, in regular monochromatic, color, or infrared (IR) images, which rely only on the intensity or wavelength properties of light. By extracting information relating to the polarization of the received light, additional insights can potentially be obtained from the scene. For example, a polarimetric image of an object may uncover details such as surface features, shape, shading, and roughness with high contrast. Although polarimetric imaging has seen some industrial adoption (e.g., using a wire grid polarizer on a sensor chip), it has mainly been used in scientific settings and has required expensive, specialized equipment. Even when such equipment is available, existing techniques for polarimetric imaging can involve time-division or space-division image capture, which can introduce blurring in the time or space domain, respectively. There exists a significant need for an improved system for polarimetric imaging.
Illustrative examples are described with reference to the following figures.
The figures depict examples of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative examples of the structures and methods illustrated may be employed without departing from the principles, or benefits touted, of this disclosure.
In the appended figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
In addition, graph 104 shows a linearly polarized electromagnetic field 106 represented as a combination of a horizontal component of the electric field, $\vec{E}_X$ 108, and a vertical component of the electric field, $\vec{E}_Y$ 110, with no phase offset between the two fields. Such a linearly polarized electromagnetic wave can be expressed as $\vec{E}_X+\vec{E}_Y$ with phase offset $\phi=0$. For ease of illustration, the magnitudes of the horizontal and vertical components of the electric field are presented as being equal, i.e., $E_X=E_Y$, which results in a linearly polarized wave oscillating along a 45-degree line between the X axis and the Y axis. If the magnitudes of the horizontal and vertical components of the electric field were not equal, the resulting linearly polarized electromagnetic field 106 would oscillate along a line that forms an angle of $\arctan(E_Y/E_X)$ relative to the X axis.
Further, graph 120 shows a circularly polarized electromagnetic field 122 represented as the combination of the horizontal component of the electric field 108 and the vertical component of the electric field 110, with a 90-degree phase offset between the two components of the electric field. The circularly polarized electromagnetic field 122 can be expressed as $\vec{E}_X+\vec{E}_Y$ with phase offset $\phi=90^\circ$.
More generally, an elliptically polarized electromagnetic wave is generated if a different phase offset is applied. To be precise, “elliptical” polarization is the most general term used to describe an electromagnetic wave expressed as $\vec{E}_X+\vec{E}_Y$ with an arbitrary phase offset $\phi$. “Linear” polarization can be viewed as a special case of elliptical polarization, with $\phi$ taking on the value of 0. “Circular” polarization can be viewed as a special case of elliptical polarization, with $\phi$ taking on the value of 90 degrees.
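These three cases can be summarized compactly (the explicit time-domain form below is a standard convention rather than an expression reproduced from this disclosure):

$$\vec{E}(t) = \hat{x}\,E_X\cos(\omega t) + \hat{y}\,E_Y\cos(\omega t + \phi),\qquad
\begin{cases}
\phi = 0 & \text{linear polarization},\\
\phi = 90^\circ,\ E_X = E_Y & \text{circular polarization},\\
\text{otherwise} & \text{elliptical polarization}.
\end{cases}$$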
In diagram 300, the same linear polarizer is rotated 90 degrees, such that it is now vertically oriented. The vertically oriented linear polarizer acts as a filter, to let vertically polarized light through but filter out horizontally polarized light. The vertically oriented linear polarizer blocks the glare (horizontally polarized light) coming off of the surface of the water. With the glare removed, the viewer can now see the light reflecting off of the stones submerged beneath the surface of the water. In other words, the stones are now visible to the viewer. Diagrams 200 and 300 thus illustrate the operation of polarizers to block light and/or let light pass through, depending on orientation of polarization. While only linear polarizers are illustrated in diagrams 200 and 300, other types of polarizers such as circular or elliptical polarizers can also operate to filter light based on polarization.
The Stokes vector S, which characterizes the polarization state of a beam of light, can be defined as:
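The expression itself is not reproduced in this excerpt; assembled from the parameter definitions in the following paragraphs, it takes the standard form:

$$S = \begin{bmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \end{bmatrix}
  = \begin{bmatrix} I_H + I_V \\ I_H - I_V \\ I_{+45} - I_{-45} \\ I_{RHC} - I_{LHC} \end{bmatrix}$$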
The Stokes vector S consists of four separate Stokes parameters, including an intensity Stokes parameter S0 and three polarization Stokes parameters S1, S2, and S3. Each of the four Stokes parameters can be expressed as a particular combination of one or more of six distinct polarization intensity values, which represent six distinct states of polarization (SoPs). The six SoP intensity values include: (1) IH, the intensity of the light along the direction of horizontal polarization (e.g., along the X-axis); (2) IV, the intensity of the light along the direction of vertical polarization; (3) I+45, the intensity of the light along the positive 45-degree linear polarization; (4) I−45, the intensity of the light along the negative 45-degree linear polarization; (5) IRHC, the intensity of the right-handed circularly polarized light; and (6) ILHC, the intensity of the left-handed circularly polarized light.
The first Stokes parameter, S0, expressed as IH+IV, is the overall intensity parameter and represents the total intensity of the light. The second Stokes parameter, S1, expressed as IH−IV, is a measure of the relative strength of the intensity of the light along the horizontal polarization over the vertical polarization. The third Stokes parameter, S2, expressed as I+45−I−45, is a measure of the relative strength of the intensity of the light along the positive 45-degree linear polarization over the negative 45-degree linear polarization. The fourth Stokes parameter, S3, expressed as IRHC−ILHC, is a measure of the relative strength of the intensity of the light along the right-handed circular polarization over the left-handed circular polarization. There are other representations of the Stokes vector S and corresponding Stokes parameters S0, S1, S2, and S3. Whatever format is used, the Stokes vector S serves to characterize the polarization state of a beam of light.
Different measures of degree of polarization can be expressed as functions of various Stokes parameters discussed above. The total degree of polarization (DoP) may be expressed as:
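The equation itself is not reproduced in this excerpt; a standard form consistent with the description that follows is:

$$\mathrm{DoP} = \frac{\sqrt{S_1^2 + S_2^2 + S_3^2}}{S_0} \qquad \text{(Equation 1)}$$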
In Equation 1, DoP can represent the ratio of the combined magnitude of all three polarization Stokes parameters, S1, S2, and S3 as compared to the magnitude of the intensity Stokes parameter S0.
The degree of linear polarization (DoPL) may be expressed as:
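Again reconstructed from the surrounding description rather than reproduced from the disclosure, Equation 2 takes the form:

$$\mathrm{DoPL} = \frac{\sqrt{S_1^2 + S_2^2}}{S_0} \qquad \text{(Equation 2)}$$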
In Equation 2, DoPL represents the ratio of the combined magnitude of the two linear polarization Stokes parameters, S1 and S2 as compared to the magnitude of the intensity Stokes parameter S0.
The degree of circular polarization (DoPC) may be expressed as:
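Likewise reconstructed from the description below:

$$\mathrm{DoPC} = \frac{\sqrt{S_3^2}}{S_0} \qquad \text{(Equation 3)}$$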
In Equation 3, DoPC represents the ratio of the magnitude of the circular polarization Stokes parameter, S3, as compared to the magnitude of the intensity Stokes parameter S0. These three different types of “degree of polarization” are useful measures that represent the degree to which the light beam in question is polarized (DoP), linearly polarized (DoPL), or circularly polarized (DoPC).
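As an illustrative sketch of these relationships (the function and variable names are illustrative only and not part of this disclosure), the Stokes parameters and the three degree-of-polarization measures can be computed from the six SoP intensity values as follows:

```python
import math

def stokes_parameters(i_h, i_v, i_p45, i_m45, i_rhc, i_lhc):
    """Stokes parameters from the six state-of-polarization intensity values."""
    s0 = i_h + i_v
    s1 = i_h - i_v
    s2 = i_p45 - i_m45
    s3 = i_rhc - i_lhc
    return s0, s1, s2, s3

def degrees_of_polarization(s0, s1, s2, s3):
    """Total, linear, and circular degrees of polarization (Equations 1-3)."""
    dop = math.sqrt(s1**2 + s2**2 + s3**2) / s0
    dopl = math.sqrt(s1**2 + s2**2) / s0
    dopc = math.sqrt(s3**2) / s0
    return dop, dopl, dopc

# Example: light that is mostly horizontally polarized
s0, s1, s2, s3 = stokes_parameters(0.9, 0.1, 0.5, 0.5, 0.55, 0.45)
print(degrees_of_polarization(s0, s1, s2, s3))  # DoP ≈ 0.81, DoPL = 0.8, DoPC = 0.1
```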
The Angle of Circular Polarization (AoCP) may be expressed as:
In addition, image 402 can represent a degree of linear polarization (DOLP) image, showing the distribution of DOLP values among the pixels of image 402. Here, the degree of linear polarization is expressed as:
$\mathrm{DOLP} = \sqrt{S_1^2 + S_2^2}$ (Equation 5)
In Equation 5, DOLP combines the second Stokes parameter S1 and the third Stokes parameter S2. Note that Equation 5 is slightly different from the degree of linear polarization (DoPL) of Equation 2. Nevertheless, the DOLP expression provides a representation of the degree to which the light received from the scene is linearly polarized. In image 402, the value of each pixel can measure the DOLP value of the light associated with that pixel. A measure of the degree of polarization, such as DOLP, is particularly useful in extracting information regarding reflections of light. For example, as shown in
Image 404 can include an angle of linear polarization (AOLP) image. Here, the angle of linear polarization is expressed as:
$\mathrm{AOLP} = \arctan(S_2/S_1)$ (Equation 6)
A measure of the angle of linear polarization is useful in computing shape information of the eye (e.g., using a further algorithm to calculate the spherical shape of the eye). RGB image 400, DOLP image 402, and AOLP image 404 demonstrate examples of how various measures of polarization of light can reveal different types of information about a scene.
To obtain DOLP image 402 and AOLP image 404, an image sensor can include a polarizer to separate out linearly polarized light components from non-polarized light components of incident light. The image sensor can also include multiple light sensing elements (e.g., photodiodes) to obtain a spatial distribution of intensities of the linearly polarized light components, from which the Stokes parameters S1 and S2 can be obtained and DOLP and/or AOLP pixel values can be generated based on Equations 5 and 6. The polarizer can extract, for example, horizontally polarized light to obtain intensity value IH, vertically polarized light to obtain intensity value IV, as well as light having positive and negative 45-degree linear polarization to obtain intensity values I+45 and I−45.
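A minimal per-pixel sketch of this computation, assuming four registered intensity images captured through horizontal, vertical, positive 45-degree, and negative 45-degree polarization channels; the use of NumPy and the array names are illustrative only:

```python
import numpy as np

def dolp_aolp(i_h: np.ndarray, i_v: np.ndarray,
              i_p45: np.ndarray, i_m45: np.ndarray):
    """Per-pixel DOLP (Equation 5) and AOLP (Equation 6) from four polarization channels."""
    s1 = i_h - i_v
    s2 = i_p45 - i_m45
    dolp = np.sqrt(s1**2 + s2**2)   # Equation 5
    aolp = np.arctan2(s2, s1)       # Equation 6; arctan2 preserves the full angular range
    return dolp, aolp

# Example with random arrays standing in for the four measured channel images
rng = np.random.default_rng(0)
shape = (480, 640)
i_h, i_v, i_p45, i_m45 = (rng.random(shape) for _ in range(4))
dolp_img, aolp_img = dolp_aolp(i_h, i_v, i_p45, i_m45)
```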
The polarizer can be implemented using various techniques. One example technique is using a wire grid having a pre-determined orientation, which can transmit polarized light components that are orthogonal to the wire grid while reflecting/absorbing polarized light components that are parallel to the wire grid.
An image sensor can include multiple wire grid polarizers, each having a different orientation, to separate out light of different polarization directions (e.g., horizontally polarized light, vertically polarized light, and light of 45-degree and 135-degree linear polarization) for intensity measurements. For example, referring to
While the wire grid polarizers in
Another example technique to implement a polarizer is using a birefringent crystal. Birefringence generally refers to the optical property of a material having a refractive index that depends on the polarization and propagation direction of light. Based on the birefringence property, a birefringent crystal can refract orthogonal states of polarized light components by different refraction angles, and project the two light components to different light sensing elements of the image sensor. As the image sensor can still receive the full power of incident light, the signal-to-noise ratio of the image sensor can be maintained.
The permittivity tensor of a medium relates the electric field E and the displacement vector D of light propagating in the medium according to Equation 7:
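Equation 7 itself is not reproduced in this excerpt. In the principal-axis frame of an anisotropic medium, this relation is conventionally written with a diagonal permittivity tensor (the specific form shown here is a standard convention offered as an assumption, not a reproduction of Equation 7):

$$\vec{D} = \varepsilon\,\vec{E} = \begin{bmatrix} \varepsilon_x & 0 & 0 \\ 0 & \varepsilon_y & 0 \\ 0 & 0 & \varepsilon_z \end{bmatrix}\vec{E},$$

where, for a uni-axial crystal, $\varepsilon_x = \varepsilon_y \neq \varepsilon_z$.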
In some examples, anisotropic medium 602 can include a uni-axial crystal which has one axis, such as optical axis 610, along which D and E are parallel. Examples of uni-axial crystal may include calcite and rutile. Referring to
Referring to
In Equation 8, ne is the refractive index for the extraordinary ray, no is the refractive index for the ordinary ray, whereas φ is the angle between the optical axis (e.g., optical axis 610) and a surface normal of the crystal, which depends on the crystal cut. In order to maximize separation between the ordinary and extraordinary rays, the crystal can be cut so that the optical axis is oriented at 45° to the surface normal.
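The exact form of Equation 8 is likewise not reproduced here; one standard expression for the separation (walk-off) angle θ of a uni-axial crystal, consistent with the variables described in this paragraph and offered only as an assumption, is:

$$\tan\theta = \frac{(n_o^2 - n_e^2)\,\tan\varphi}{\,n_e^2 + n_o^2\tan^2\varphi\,},$$

which is maximized near $\varphi = 45^\circ$, consistent with the crystal cut described above.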
As the directions of the electric fields of extraordinary ray 614 and ordinary ray 616 are perpendicular to each other, extraordinary ray 614 and ordinary ray 616 are orthogonally linearly polarized. The relative intensities of the ordinary and extraordinary rays can reflect the orientation of linear polarization of the incident radiation with respect to the principal plane. For example, if the incident light only includes linearly polarized light having electric fields parallel to a principal plane, all of the incident light can pass through the crystal as extraordinary rays. On the other hand, if the incident light only includes linearly polarized light having electric fields perpendicular to the principal plane, all of the incident light can pass through the crystal as ordinary rays. Therefore, with the property of birefringence, orthogonal linearly polarized light components of different polarization directions can be separated out and projected to different light sensing elements of the image sensor, to measure the composition of an incident ray in terms of orthogonal linearly polarized components.
Pixel cell 702a can include a plurality of photodiodes 712 including, for example, photodiodes 712a, 712b, 712c, and 712d, one or more charge sensing units 714, and one or more analog-to-digital converters 716. The plurality of photodiodes 712 can convert different components of incident light to charge. The components can include, for example, orthogonal linearly polarized light of different polarization directions, light components of different frequency ranges, etc. For example, photodiodes 712a-712d can detect the intensities of, respectively, linearly polarized light having electric fields parallel to a principal plane, linearly polarized light having electric fields perpendicular to the principal plane, linearly polarized light having electric fields that form positive 45 degrees from the principal plane, and linearly polarized light having electric fields that form negative 45 degrees from the principal plane. As another example, photodiodes 712a and 712b can detect the intensities of two orthogonal linearly polarized light components of two different polarization directions, photodiode 712c can detect the intensity of unpolarized visible light, whereas photodiode 712d can detect the intensity of infrared light. Each of the one or more charge sensing units 714 can include a charge storage device and a buffer to convert the charge generated by photodiodes 712a-712d to voltages, which can be quantized by one or more ADCs 716 into digital values. Although
In some examples, image sensor 700 may also include an illuminator 722, an array of optical elements 724, an imaging module 728, and a sensing controller 740. Illuminator 722 may be a near infrared illuminator, such as a laser, a light emitting diode (LED), etc., that can project near infrared light for 3D sensing. The projected light may include, for example, structured light, polarized light, light pulses, etc. Array of optical elements 724 can include an optical element overlaid on the plurality of photodiodes 712a-712d of each pixel cell including pixel cell 702a. The optical element can select the polarization/wavelength property of the light received by each of photodiodes 712a-712d in a pixel cell.
In addition, image sensor 700 further includes an imaging module 728. Imaging module 728 may further include a 2D imaging module 732 to perform 2D imaging operations and a 3D imaging module 734 to perform 3D imaging operations. 2D imaging module 732 may further include an RGB imaging module 732a and a polarized light imaging module 732b. The operations can be based on digital values provided by ADCs 716. In one example, based on the digital values from each of photodiodes 712a-712d, polarized light imaging module 732b can obtain polarimetric information, such as intensity values IH, IV, I+45, and I−45, to determine Stokes parameters S1 and S2, and then generate DOLP and/or AOLP pixel values based on Equations 5 and 6. In another example, RGB imaging module 732a can also determine intensity values of visible incident light (which can contain polarized and unpolarized light) from photodiode 712c of each pixel cell, and generate an RGB pixel value for that pixel. Moreover, 3D imaging module 734 can generate a 3D image based on the digital values from photodiode 712d. Image sensor 700 further includes a sensing controller 740 to control different components of image sensor 700 to perform 2D and 3D imaging of an object.
A shared optical element, such as a microlens 752 which can be part of the array of optical elements 724, may be positioned between the scene and photodiodes 712a, 712b, 712c, and 712d. In some examples, each super-pixel may have its own microlens. Microlens 752 may be significantly smaller in size than camera lens 706, which serves to accumulate and direct light for the entire image frame toward pixel cell array 702. Microlens 752 is a “shared” optical element, in the sense that it is shared among photodiodes 712a, 712b, 712c, and 712d. Microlens 752 directs light from a particular location in the scene to photodiodes 712a-712d. In this manner, the sub-pixels of a super-pixel can simultaneously sample light from the same spot of a scene, and each sub-pixel can generate a corresponding pixel value in an image frame. In some examples, microlens 752 can be positioned over and shared by multiple pixels as well. On pixel cell array 702, there can be an array of microlenses 752, which are between the camera lens 750 and the photodiodes of the pixel cell array 702.
As shown in
Birefringent crystal 810 can separate out orthogonal linearly polarized light components of different polarization directions in light 802, and project the different polarized light components to sub-pixels 812a and 812b. Specifically, as light 802 enters birefringent crystal 810, an ordinary ray component 814 of light 802 can be refracted by birefringent crystal 810 according to the refractive index no, whereas an extraordinary ray component 816 of light 802 can be refracted by birefringent crystal 810 according to the refractive index ne. As a result, the propagation directions of the two ray components can be separated by an angle θ based on Equation 8 above. Ordinary ray component 814 can propagate to and be measured by sub-pixel 812b, whereas extraordinary ray component 816 can propagate to and be measured by sub-pixel 812a. With such arrangements, sub-pixel 812a and sub-pixel 812b can measure the intensities of the two orthogonal linearly polarized light components, providing measurements of intensities IH and IV.
In some examples, pixel cell 702a may include a wave plate 819 sandwiched between birefringent crystal 810 and sub-pixel layer 812. Wave plate 819 can act as a half-wave retarder that rotates the state of linear polarization of ray component 814 and/or ray component 816, so that sub-pixels 812a and 812b can provide measurements of intensities at the new linear polarization states.
In addition, pixel cell 702a may include insulation structures to reduce cross-talk. For example, birefringent crystal 810 may include one or more metallic-based insulation structures, such as a backside metallization (BSM) structure 820, to prevent light from propagating to the birefringent crystal 810 of a neighboring pixel cell, thereby reducing cross-talk between pixel cells. The BSM structure may include an absorptive metal material to avoid undesired reflections. In addition, deep trench isolations (DTI) 822 can prevent the different polarized light components 814 and 816 from propagating between sub-pixels 812a and 812b, thereby reducing cross-talk between sub-pixels. DTI 822, together with absorption structure 813, can also cause total internal reflection of the incident light to increase the effective light travel distance within the silicon and to enhance light absorption by sub-pixels 812a and 812b. In some examples, an anti-reflection coating can be applied to the DTI to reduce undesired light reflections.
In some examples, to further reduce cross-talk between sub-pixels, microlens top layer 804 can be shaped to facilitate the propagation of different polarized light components to their target sub-pixels.
The asymmetric curvature of microlens top layer 804 can guide/focus light 802 towards a point 840 directly below apex point 830 and above sub-pixel 812b. Such arrangements can focus ordinary ray component 814 and extraordinary ray component 816 to the intended sub-pixels (sub-pixels 812b and 812a respectively), while diverting these ray components away from the unintended sub-pixels.
Referring to
$p = H\tan(\theta)$ (Equation 9)
The target walk-off distance p can be determined based on, for example, the pitch of a sub-pixel, the separation between the center points of two sub-pixels, etc. The refraction angle θ can be determined based on Equation 8 (reproduced below) as well as ne (the refractive index for the extraordinary ray), no (the refractive index for the ordinary ray), and φ (the angle between the optical axis and a surface normal of the crystal).
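As a minimal sketch of how Equation 9 can be used in such a design calculation (assuming H in micrometers and θ in degrees; the function names are illustrative only):

```python
import math

def walk_off_distance(depth_um: float, theta_deg: float) -> float:
    """Walk-off distance p = H * tan(theta), per Equation 9."""
    return depth_um * math.tan(math.radians(theta_deg))

def required_angle(p_um: float, depth_um: float) -> float:
    """Refraction angle (degrees) needed to reach a target walk-off p for a crystal of depth H."""
    return math.degrees(math.atan(p_um / depth_um))

# Example consistent with the text: H = 10 um and theta = 10 degrees give p ≈ 1.76 um (~1.8 um)
print(walk_off_distance(10.0, 10.0))   # ≈ 1.763
# Angle needed for a target walk-off of 1.8 um (e.g., one sub-pixel pitch) with a 10 um crystal
print(required_angle(1.8, 10.0))       # ≈ 10.2 degrees
```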
The refractive indices no and ne can be determined based on the wavelength of light 802 according to the Sellmeier Equation, as follows:
$$n_o^2 = 2.69705 + \frac{0.0192064}{\lambda^2 - 0.01820} - 0.0151624\,\lambda^2 \qquad \text{(Equation 10)}$$

$$n_e^2 = 2.18438 + \frac{0.0087309}{\lambda^2 - 0.01018} - 0.0024411\,\lambda^2 \qquad \text{(Equation 11)}$$
The following table describes different refractive indices no and ne for different wavelengths:
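The table itself is not reproduced in this excerpt. As a sketch, representative values can be evaluated directly from Equations 10 and 11; the coefficients correspond to published Sellmeier fits for calcite, for which λ is conventionally expressed in micrometers (an assumption, as the units are not stated here):

```python
import math

def birefringent_indices(wavelength_um: float) -> tuple[float, float]:
    """Return (n_o, n_e) from the Sellmeier fits of Equations 10 and 11 (λ in micrometers)."""
    lam2 = wavelength_um ** 2
    n_o2 = 2.69705 + 0.0192064 / (lam2 - 0.01820) - 0.0151624 * lam2
    n_e2 = 2.18438 + 0.0087309 / (lam2 - 0.01018) - 0.0024411 * lam2
    return math.sqrt(n_o2), math.sqrt(n_e2)

for lam in (0.532, 0.633, 0.850, 0.940):   # visible to near-infrared wavelengths, in um
    n_o, n_e = birefringent_indices(lam)
    print(f"{lam:.3f} um: n_o = {n_o:.4f}, n_e = {n_e:.4f}")
```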
Graph 850 illustrates a distribution of walk-off distance p (between 0 and 1.8 um) for different combinations of depth (H) and refraction angle (θ). For example, a walk-off distance p of approximately 1.8 um can be achieved with a depth H of 10 um and a refraction angle θ of 10°.
The multiple layers of device 800 and the devices fabricated therein are built on a common semiconductor die using one or more semiconductor processing techniques such as lithography, etching, deposition, chemical mechanical planarization, oxidation, ion implantation, diffusion, etc. (e.g., photodiodes are formed in a semiconductor substrate). This is in contrast to building the layers as separate components, then aligning and assembling the components together in a stack. Such alignment and assembly may cause significant precision and manufacturing defect issues, especially as the physical dimensions of the sensor device are reduced to the scale of single-digit micrometers. The design of the super-pixel as a multi-layer semiconductor sensor device 800 allows components such as sub-pixels, wavelength filters, the birefringent crystal, and the microlens to be precisely aligned, as controlled by semiconductor fabrication techniques, and avoids issues of misalignment and imprecision that may be associated with micro-assembly.
In
In addition, pixel cell 910 can include four sub-pixels 812a, 812b, 812c, and 812d.
Microlens 912 over pixel cell 910 can also have an asymmetric curvature. In such an example, microlens 912 can have an asymmetric curvature along the Y-axis as well.
Although not shown in
HMD 1000 includes a frame 1005 and a display 1010. Frame 1005 is coupled to one or more optical elements. Display 1010 is configured for the user to see content presented by HMD 1000. In some examples, display 1010 comprises a waveguide display assembly for directing light from one or more images to an eye of the user.
HMD 1000 further includes image sensors 1020a, 1020b, 1020c, and 1020d. Each of image sensors 1020a, 1020b, 1020c, and 1020d may include a pixel cell array configured to generate image data representing different fields of view along different directions. Such a pixel cell array may incorporate a polarimetric sensor array described in the present disclosure. For example, sensors 1020a and 1020b may be configured to provide image data representing two fields of view towards a direction A along the Z axis, whereas sensor 1020c may be configured to provide image data representing a field of view towards a direction B along the X axis, and sensor 1020d may be configured to provide image data representing a field of view towards a direction C along the X axis.
In some examples, HMD 1000 may further include one or more active illuminators 1030 to project light into the physical environment. The light projected can be associated with different frequency spectrums (e.g., visible light, infrared light, ultra-violet light, etc.), and can serve various purposes. For example, illuminator 1030 may project light in a dark environment (or in an environment with low intensity of infrared light, ultra-violet light, etc.) to assist sensors 1020a-1020d in capturing images of different objects within the dark environment to, for example, enable location tracking of the user. Illuminator 1030 may project certain markers onto the objects within the environment, to assist the location tracking system in identifying the objects for map construction/updating.
HMD 1000 may include sensors (e.g., sensors similar to sensors 1020) that are inward facing (e.g., sensors for eye/pupil tracking of the user).
In step 1208, refracted light from the microlens is separated into a first component of light and a second component of light, using the birefringent crystal. For example, the ordinary ray component 814 of light 802 is separated from the extraordinary ray component 816 of light 802 in
In step 1212, the first component of light is detected using the first photodiode. For example, sub-pixel 812b measures the ordinary ray component 814 of light 802 in
In step 1216, the second component of light is detected using the second photodiode. For example, sub-pixel 812a measures the extraordinary ray component 816 of light 802 in
In some embodiments, the first component of light has a first linear polarization and the second component of light has a second linear polarization, wherein the second linear polarization is orthogonal to the first linear polarization; the microlens is part of an optical element, and the method further comprises generating image frames using an array of optical elements (e.g., as described in conjunction with
The disclosed techniques may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some examples, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, e.g., create content in an artificial reality and/or are otherwise used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
Some portions of this description describe the examples of the disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, and/or hardware.
Steps, operations, or processes described may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In some examples, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
Examples of the disclosure may also relate to an apparatus for performing the operations described. The apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
Examples of the disclosure may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any example of a computer program product or other data combination described herein.
The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the examples is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.
This application claims the benefit of U.S. Provisional Application No. 63/109,704, filed Nov. 4, 2020, entitled “POLARIMETRIC IMAGING CAMERA,” which is incorporated herein by reference in its entirety.