This patent application relates generally to display technologies, and more specifically, to implementing structured polarized illumination techniques to determine and reconstruct a shape and a material property of an object.
In augmented reality (AR), virtual reality (VR), and mixed reality (MR) applications, it may often be necessary to identify various aspects of a physical object. Also, in some instances, an object in the physical world may need to be imported into a virtual world (e.g., for a virtual reality (VR) application).
To properly identify an object, a primary aspect of an object that may need to be determined may include a shape of the object. However, conventional methods for determining a shape of an object may come with drawbacks, such as burdens associated with coordinated use of a plurality of cameras and acquisition times that may be excessive.
In addition, to properly identify an object, another aspect of an object that may need to be determined may be material properties associated with the object. However, conventional methods for deriving material properties of an object may require rotating elements, which may require larger form factors, more space, and more power consumption, and may lead to longer acquisition times.
Features of the present disclosure are illustrated by way of example and not limited in the following figures, in which like numerals indicate like elements. One skilled in the art will readily recognize from the following that alternative examples of the structures and methods illustrated in the figures can be employed without departing from the principles described herein.
For simplicity and illustrative purposes, the present application is described by referring mainly to examples thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. It will be readily apparent, however, that the present application may be practiced without limitation to these specific details. In other instances, some methods and structures readily understood by one of ordinary skill in the art have not been described in detail so as not to unnecessarily obscure the present application. As used herein, the terms “a” and “an” are intended to denote at least one of a particular element, the term “includes” means includes but not limited to, the term “including” means including but not limited to, and the term “based on” means based at least in part on.
In augmented reality (AR), virtual reality (VR), and mixed reality (MR) applications, it may often be necessary to determine various aspects of a physical object. For example, in some instances, while a user may be wearing a pair of augmented reality (AR)-enhanced smartglasses, an object may enter the user's field of view (FOV), and it may be relevant to determine, among other things, what the object is, how the object fits into the user's environment, and various information about the object that may be relevant to the user.
In addition, in some instances, an object in the physical world may need to be imported into a virtual world (e.g., for a virtual reality (VR) application). In these cases, it may also be important to determine aspects of an object in order to properly import the object from the physical world to the virtual world. Also, for example, in the case of augmented reality (AR), it may be important to detect and determine aspects of an object in the physical world in order to properly overlay virtual content.
To properly identify an object, a primary aspect of an object that may need to be determined may include a shape of the object. Typically, a number of conventional methods may be utilized to determine a shape of an object. However, as discussed further below, each of these conventional methods may come with drawbacks.
In some instances, a first method that may be utilized to determine a shape of an object may be stereo depth reconstruction. In some examples, implementation of stereo depth reconstruction may include use of a plurality of cameras to capture a plurality of images, whereby pixels from the plurality of cameras may be matched to retrieve a relative depth based on the positions of the plurality of cameras. However, in some instances, a drawback of utilizing stereo depth reconstruction may be that coordinated use of the plurality of cameras may be burdensome.
Another method that may be utilized to determine a shape of an object may be time of flight. In some examples, implementation of time of flight may include sending light from an illumination source to scan an environment or scene, wherein a time of propagation may be recorded at a detector after the light is returned (e.g., reflected) from an object. In some instances, the time of propagation may provide information related to depth and/or distance of an object based on a location of an illumination source and a detector. However, a drawback of utilizing the time of flight method may be that, typically, acquisition times may be excessive and it may be necessary to scan with respect to a direction of illumination in order to provide scene depth.
Yet another method that may be utilized to determine a shape of an object may be structured illumination via intensity modulation (or variation). In some examples, implementation of structured illumination may include illuminating an object with a predetermined, intensity-structured illumination pattern having fringes (e.g., sinusoidal fringes). Implementation of structured illumination may typically include utilization of a sinusoidal intensity distribution that may vary in one direction and may be uniform in other directions.
In some examples, upon projection of the structured illumination pattern having a plurality of fringes, a deformation of the plurality of fringes may be measured. Specifically, in some examples, a deformation of the plurality of fringes may be proportional to a shape of the illuminated object and relative to a position of an illumination source and a corresponding detector.
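For illustration only, the following minimal sketch (in Python, with assumed parameters such as the fringe period and a simplified linear phase-to-depth model) shows how a sinusoidal fringe pattern may be deformed by the shape of an illuminated object:

```python
import numpy as np

# Minimal sketch of intensity-based structured illumination; all parameters
# below (image size, fringe period, sensitivity) are illustrative assumptions.
H, W = 480, 640            # image size
period = 32.0              # fringe period in pixels
x = np.arange(W)

# Projected pattern: I(x) = 0.5 * (1 + cos(2*pi*x / period))
pattern = 0.5 * (1.0 + np.cos(2.0 * np.pi * x / period))
projected = np.tile(pattern, (H, 1))

# A surface height map shifts the observed phase; here the phase shift is
# modeled as proportional to depth (a common first-order approximation).
depth = np.zeros((H, W))
depth[160:320, 200:440] = 10.0      # a raised rectangular "object"
sensitivity = 0.2                   # radians of phase per unit depth (assumed)
observed = 0.5 * (1.0 + np.cos(2.0 * np.pi * x / period + sensitivity * depth))

# The deformation (phase difference) between projected and observed fringes
# encodes the shape of the object.
```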
It may be appreciated that a benefit of utilizing structured illumination may be that only one camera may be necessary, and a result may often be achieved via a single iteration. However, a drawback may be that structured illumination may be limited in that, during implementation, an object may need to remain stationary. As a result, it may not necessarily be effectively implemented in most augmented reality (AR), mixed reality (MR), and virtual reality (VR) settings, where a user may typically utilize a camera (e.g., on a smartphone) that may be continuously moving. Also, in some instances, another drawback of structured illumination may be that it may not always provide shape-related information to a sufficient resolution.
Also, in order to properly identify an object, another aspect of an object that may need to be determined may be material properties associated with the object. In some examples, information about material properties of an object may typically be gathered by determining a Mueller matrix of an object.
In some examples, a Mueller matrix of an object may describe a polarized response (or reflectance) of an object to possible polarized and unpolarized illumination settings, for a number of possible angles of incidence and scattering angles. In some examples, for a given material, a Mueller matrix may depend on incoming and scattering angles with respect to a surface normal.
In some examples, a Mueller matrix M may be calculated for given incident and scattering angles. In some examples, the Mueller matrix M may be an N×N (e.g., 4×4) matrix. In some examples, a Mueller matrix M may be associated with at least one Stokes vector S. In particular, in some examples, a Stokes vector S may describe a response to a polarized state of light. In some examples, the polarization state of the light may be described as a polarization angle (e.g., zero (0) degrees, forty-five (45) degrees or π/4 radians, ninety (90) degrees or π/2 radians, one hundred thirty-five (135) degrees or −π/4 radians, etc.). In some examples, a Mueller matrix M may be associated with an input Stokes vector Si and an output Stokes vector So, wherein the input Stokes vector Si and the output Stokes vector So may be related as follows:

So = M·Si
So, in some examples, an input Stokes vector Si may be described as:

Si = (Si0; Si1; Si2; Si3)
In some examples, a non-polarized light beam may only have an S0 component. Also, in some examples, a fully horizontally-linear polarized light beam may have S0=S1. Additionally, in some examples, a fully vertically-linear polarized light beam may have S0=−S1. So, in some examples, an output Stokes vector So may be described as:

So = (So0; So1; So2; So3)
In some examples, characteristics of an input Stokes vector Si may be predetermined, and upon interaction of illumination with an object, an output Stokes vector So may be determined as well. In some examples, a Mueller matrix M may be calculated using the input Stokes vector Si and the output Stokes vector So as follows:

So = M·Si

such that, for a set of (e.g., four) independent input states, M may be recovered by stacking the input and output Stokes vectors as columns and inverting, e.g., M = [So(1) So(2) So(3) So(4)]·[Si(1) Si(2) Si(3) Si(4)]^−1.
As will be discussed in greater detail below, utilizing this relationship between an input Stokes vector, an output Stokes vector, and a Mueller matrix, the input Stokes vector and the output Stokes vector may be utilized to determine a Mueller matrix that may describe material properties of an object.
In some examples, determination of a Mueller matrix may be referred to as Mueller matrix ellipsometry. In some examples, implementation of Mueller matrix ellipsometry may include use of at least one rotating polarizer, at least one rotating analyzer, a camera and at least one waveplate located in front of an unpolarized illumination source. In some examples, the rotating analyzer and the at least one waveplate may be located in front of the camera.
In some examples, the at least one rotating polarizer may be utilized to generate various polarization input states with respect to the at least one waveplate located in front of the camera, which may then be analyzed to determine corresponding Mueller matrix elements. In particular, in some examples, the rotating polarizer may be rotated to implement four (4) input Stokes vectors and generate four (4) output Stokes vectors, which may then be used to determine the sixteen (16) elements needed to generate a Mueller matrix.
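For illustration only, the following sketch (in Python) shows how a 4×4 Mueller matrix may be recovered from four independent input Stokes vectors and their corresponding outputs; the test matrix and input states here are assumed values, not values prescribed by the present disclosure:

```python
import numpy as np

# Sketch: recover a 4x4 Mueller matrix from four independent input Stokes
# vectors and their measured outputs, per S_o = M @ S_i (assumed noiseless).
S_in = np.array([
    [1.0,  1.0, 0.0, 0.0],   # horizontal linear
    [1.0, -1.0, 0.0, 0.0],   # vertical linear
    [1.0,  0.0, 1.0, 0.0],   # +45 degree linear
    [1.0,  0.0, 0.0, 1.0],   # right circular
]).T                          # columns are input Stokes vectors

# Hypothetical measured outputs (here generated from a known test matrix).
M_true = np.diag([1.0, 0.8, 0.8, 0.6])   # an idealized partial depolarizer
S_out = M_true @ S_in

# Since the four inputs are linearly independent, S_in is invertible:
M_est = S_out @ np.linalg.inv(S_in)
assert np.allclose(M_est, M_true)
```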
However, implementation of Mueller matrix ellipsometry may come with various difficulties and limitations. That is, in some examples, and even using state-of-the-art methods, deriving material properties for an arbitrary object (e.g., an object that may not have substantially flat surfaces) may be difficult without knowing a shape of the object. Also, even using state-of-the-art methods, deriving material properties of an object may require rotating elements, which may require larger form factors, more space, and more power consumption, and may lead to longer acquisition times.
Systems and methods as described may implement structured polarized illumination techniques to provide determination and reconstruction of a shape and material properties of an object. As used herein, "structured polarized illumination" may include illumination settings and/or characteristics where a polarization state may be spatially varying.
In some examples and as will be discussed further below, the systems and methods may implement a polarization camera that may provide fringe patterns (e.g., via a variation of polarization angles/orientations). In some examples, these fringe patterns may be utilized to determine one or more of shape and material properties information relating to an object.
In some examples, the systems and methods may image an object with a polarization camera that may include wire grids located in front of camera pixels. In some examples, because a polarization state (e.g., rotating along the horizontal axis) prior to reaching camera pixels may be known, the wire grids may only transmit (fully) for a polarization state that may match a particular orientation. In some examples, the camera pixels may be used to determine fringe patterns by converting a polarization orientation into an intensity corresponding to the fringe patterns. It may be appreciated that implementation of the structured polarized illumination techniques described herein may not require varying intensities.
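For illustration only, the following sketch (in Python, with assumed values for the grid orientations and the spatial rotation of the polarization angle) shows how the four wire grid orientations of a super-pixel may convert a spatially rotating polarization orientation into shifted intensity fringes via Malus's law:

```python
import numpy as np

# Sketch: a wire grid polarizer at angle a transmits a fully linearly
# polarized beam at angle theta with intensity I = I0 * cos^2(theta - a)
# (Malus's law). A polarization camera super-pixel uses four orientations.
grid_angles = np.array([0.0, np.pi / 4, np.pi / 2, -np.pi / 4])

def superpixel_intensities(theta, I0=1.0):
    """Intensities seen by the four pixels of one super-pixel for fully
    linearly polarized light at orientation theta (radians)."""
    return I0 * np.cos(theta - grid_angles) ** 2

# As the incoming polarization angle rotates spatially along x, each of the
# four pixel types records a sinusoidal fringe, shifted relative to the others.
x = np.linspace(0.0, 4.0 * np.pi, 512)
theta_x = 0.5 * x                    # spatially rotating polarization angle (assumed)
fringes = np.stack([superpixel_intensities(t) for t in theta_x])  # (512, 4)
```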
As will be discussed in further detail below, the systems and methods may be directed to virtual reality (VR) and/or augmented reality (AR) applications, where it may be desirable to detect, decipher, characterize, and/or represent (e.g., digitally) an object located in the physical world. In some examples, the systems and methods may be directed to, among other things, object classification applications, simultaneous localization and mapping (SLAM), and importation and overlay of physical objects into augmented reality (AR) and virtual reality (VR) content.
In some examples, the systems and methods described may provide various benefits related to implementation as well. As discussed above, in traditional Mueller matrix ellipsometry, it may be necessary to implement multiple input Stokes vectors sequentially in order to gather the necessary components required to determine a Mueller matrix. However, in some examples, the systems and methods described may implement structured polarized illumination, and because a polarization state of the input Stokes vector may be varied (spatially), additional information associated with material properties of an object may be derived with a single implementation (or a "single shot"). Specifically, and as will be discussed further below, structured polarized illumination may enable implementation of a plurality of (varying) polarization phases with one emission, which may provide multiple (shifted) patterns in association with a single base pattern. As a result, greater resolution in determining material properties may be achieved.
Consequently, in some examples, the systems and methods may be more compact (e.g., easily integrated into augmented reality (AR) and virtual reality (VR) devices), and may enable faster acquisition times. In addition, in some examples, the systems and methods may also be applied in dynamic settings (e.g., where an object may be moving), and may require lower power consumption as well.
In some examples, the systems and methods described herein may include a system, comprising a light source to transmit an original beam of light, a first grating to receive and diffract the original beam of light into a first light beam having a first circular polarization and a second light beam having an opposite circular polarization, a second grating to receive the first light beam and the second light beam and emit overlapping light beams towards an object, a polarization camera to capture light reflected from the object, and a computer system. In some examples, the computer system may comprise a processor and a non-transitory computer-readable storage medium having an executable stored thereon, which when executed instructs the processor to: analyze, utilizing a plurality of illumination objects, the light reflected from the object to determine a fringe projection analysis associated with the object, determine, based on the fringe projection analysis associated with the object, a shape of the object, and determine, based on the fringe projection analysis associated with the object, a Mueller matrix to describe material properties of the object. In some examples, the executable when executed further instructs the processor to determine an input Stokes vector associated with the overlapping light beams and determine, based on the fringe projection analysis associated with the object, an output Stokes vector, wherein the Mueller matrix to describe material properties of the object is determined further based on the input Stokes vector and the output Stokes vector. In some examples, the processor and the non-transitory computer-readable storage medium may be comprised in a processing unit. In some examples, the first grating and the second grating are Pancharatnam-Berry (PB) gratings and the polarization camera comprises at least one unit cell of pixels comprising at least one pixel, wherein each of the at least one pixel implements a particular wire grid orientation of a polarizer array. In some examples, a first pixel of the at least one pixel implements a wire grid orientation of 0 degrees, a second pixel of the at least one pixel implements a wire grid orientation of π/4 radians, a third pixel of the at least one pixel implements a wire grid orientation of π/2 radians, and a fourth pixel of the at least one pixel implements a wire grid orientation of −π/4 radians. In some examples, the executable when executed further instructs the processor to determine a group of pixels of the polarization camera having a same incident and scattering angle with respect to a surface normal. In some examples, the plurality of illumination objects may comprise one or more illumination cards, including a black illumination card having a smaller angle of incidence (AOI), a black illumination card having a larger angle of incidence (AOI), a white illumination card having the smaller angle of incidence (AOI), and a white illumination card having the larger angle of incidence (AOI).
In some examples, the systems and methods described herein may include a method for implementing structured polarized illumination techniques to determine and reconstruct a shape and a material property of an object, comprising transmitting an original beam of light, diffracting, utilizing a first grating, the original beam of light into a first light beam having a first circular polarization and a second light beam having an opposite circular polarization, receiving the first light beam and the second light beam at a second grating and emitting overlapping light beams towards the object, capturing, utilizing a polarization camera, light reflected from the object, analyzing, utilizing a plurality of illumination objects, the light reflected from the object to determine a fringe projection analysis associated with the object, determining, based on the fringe projection analysis associated with the object, the shape of the object, and determining, based on the fringe projection analysis associated with the object, a Mueller matrix to describe the material properties of the object.
In some examples, the systems and methods described may include an apparatus, comprising a processor, and a non-transitory computer-readable storage medium having an executable stored thereon, which when executed instructs the processor to determine an input Stokes vector associated with the overlapping light beams from a grating, implement a polarization camera to capture light reflected from an object, analyze, utilizing a plurality of illumination objects, the light reflected from the object to determine a fringe projection analysis associated with the object, determine, based on the fringe projection analysis associated with the object, a shape of the object, determine, based on the fringe projection analysis associated with the object, an output Stokes vector and determine, based on the fringe projection analysis associated with the object, the input Stokes vector, and the output Stokes vector, a Mueller matrix to describe material properties of the object.
In some examples, implementing structured polarized illumination may include providing an arrangement of a plurality of Pancharatnam-Berry (PB) gratings. In particular, in some examples, two Pancharatnam-Berry (PB) gratings may be arranged to create an illumination setting with uniform intensity and linearly polarized light. In addition, the illumination setting may further include an angle of polarization that may be spatially rotating along one axis. It may be appreciated that although examples herein may describe Pancharatnam-Berry (PB) gratings, other grating types may be employed as well.
In some examples, the first Pancharatnam-Berry (PB) grating 102 may be used to take the light beam 101a and create two virtual point sources of light 102a-102b. In some examples, the first Pancharatnam-Berry (PB) grating 102 may receive the light beam 101a and may diffract the light beam 101a into two opposite-handed point sources. In particular, the light beam 101a may be diffracted into light from a first point source 102a having a right-handed circular polarization (RCP) (e.g., +1) and light from a second point source 102b having a left-handed circular polarization (LCP) (e.g., −1). In some examples, light from the first point source 102a and light from the second point source 102b may deviate from each other as they travel toward the second Pancharatnam-Berry (PB) grating 103.
In some examples, the second Pancharatnam-Berry (PB) grating 103 may receive the two virtual point sources of light 102a-102b, and may emit them (e.g., recombine them) as overlapping light beams 103a-103b, thereby enabling "two-beam interference" and creating an "interference pattern." In some examples, the second Pancharatnam-Berry (PB) grating 103 may perform a steering function that may rotate an angle of a beam by exactly an opposite amount of (e.g., to cancel out) a rotation/deviation provided via the first Pancharatnam-Berry (PB) grating 102. In some examples, the second Pancharatnam-Berry (PB) grating 103 may create an illumination with uniform intensity and fully linearly polarized light, wherein an angle of polarization may be spatially rotating along one axis.
In some examples, to overcome the angle deviation provided, the second Pancharatnam-Berry (PB) grating 103 may be similar to the first Pancharatnam-Berry (PB) grating 102, but rotated one hundred eighty (180) degrees. That is, in some examples, the second Pancharatnam-Berry (PB) grating 103 may diffract the light a second time by exactly the opposite amount as the first Pancharatnam-Berry (PB) grating 102 so that the exiting beams may leave collinearly, overlapping each other. In some examples, the steering function may be provided based on a density of the second Pancharatnam-Berry (PB) grating 103.
In some examples, an angle deviation provided via the first Pancharatnam-Berry (PB) grating 102 may be predetermined (or "programmed"), and further an extent to which light from the first point source 102a and the second point source 102b may be collimated (e.g., steered) may be determined with respect to an angle of rotation. As a result, and in some examples, the collimated beam may exhibit a spatially rotating linear polarization.
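For illustration only, the following sketch (in Python, using Jones calculus with assumed sign conventions and an illustrative rotation period) verifies numerically that superposing opposite-handed circularly polarized beams carrying opposite spatial phases may yield uniform intensity and a spatially rotating linear polarization angle:

```python
import numpy as np

# Sketch (Jones calculus, assumed conventions): equal-amplitude right- and
# left-circularly polarized beams with opposite spatial phases superpose to
# a linear polarization whose orientation rotates with position x.
RCP = np.array([1.0, -1.0j]) / np.sqrt(2.0)
LCP = np.array([1.0, +1.0j]) / np.sqrt(2.0)

period = 100.0                          # polarization rotation period (illustrative)
x = np.linspace(0.0, 200.0, 400)
phi = np.pi * x / period                # opposite phases carried by the two beams

E = np.exp(1j * phi)[:, None] * RCP + np.exp(-1j * phi)[:, None] * LCP

intensity = np.sum(np.abs(E) ** 2, axis=1)   # uniform everywhere
angle = 0.5 * np.arctan2(                    # polarization orientation (mod pi)
    2.0 * np.real(E[:, 0] * np.conj(E[:, 1])),
    np.abs(E[:, 0]) ** 2 - np.abs(E[:, 1]) ** 2,
)
assert np.allclose(intensity, intensity[0])  # uniform intensity check
```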
In some examples, a linear polarizer may only transmit light where the polarization state may match an orientation of the wire grid. In some examples, the wire grids may be implemented in conjunction with rotating filters. So, in some examples, because a polarization state (e.g., rotating along the horizontal axis) prior to reaching the linear polarizer may be known, the linear polarizer may only transmit (fully) for a polarization state that may match a particular orientation, thereby resulting in the fringes 400. As a result, and as will be discussed further below, in some examples, based on the uniform intensity and the rotating polarization provided by the point sources of light, a set of fringes (e.g., similar to the fringes 400) may be generated for each wire grid/pixel combination of a polarization camera.
In some examples, a first point source 504a and a second point source 504b may emit light that may be collimated and overlapping and may be directed towards the object 501. In some examples, light from the first point source 504a and the second point source 504b may display spatially rotating, linear polarization (as described above). In some examples, upon reflection from the object 501, light from the first point source 504a and the second point source 504b may be directed towards and captured by a polarization camera 505.
In some examples, a Stokes vector, such as an input Stokes vector, may be calculated as a function of intensity. Specifically, in some examples, an intensity of a light beam, such as the collimated, overlapping light beams emanating from the (virtual) point sources 504a-504b described above, may be described as uniform, with a linear polarization angle θ(x) that may rotate spatially along one axis (e.g., θ(x) = πx/Λ, where Λ may denote an assumed period of the polarization rotation).
In some examples, based on the intensity provided above, a Stokes vector S (e.g., an input Stokes vector) may be represented as follows:

S = (1; cos(2θ(x)); sin(2θ(x)); 0)
In some examples, for the Stokes vector S, an intensity component S0 may be uniform (equal to one), and S1 and S2 may be sinusoidal functions that may vary based on a rotation of a polarization angle. In some examples, S0 may represent a total light intensity that may be based on intensity of four pixels (e.g., each having a different theta angle 0, π/4, π/2, and −π/4). In some examples, S1 may represent how light may be horizontally (+1) or vertically (−1) polarized, and S2 may represent how the light may be polarized at π/4 (+1) or at −π/4 (−1).
In some examples, to implement structured polarized illumination, light reflected from an object may be imaged on to a plurality of illumination objects (e.g., illumination cards). In some examples, an "illumination object" may include any object that may have a known shape and material properties, and may be utilized to determine and reconstruct shape and material properties of a (target) object having unknown shape and material properties. One example of such an illumination object may include an illumination card.
It may be appreciated that, in some instances, reflection of light may involve a “specular” reflection process, wherein light may be (immediately) reflected on a surface of an object. In some examples, this process may involve higher reflectiveness, and may be associated with a higher degree of polarization.
It may be appreciated that any object may exhibit specular characteristics (e.g., direct reflection), which may be associated with a higher degree of polarization, but a lesser intensity. In such instances, the transmitted light may be substantially absorbed by darker objects, while brighter objects may not fully absorb the transmitted light, and some of the transmitted light may be reflected back after some amount of depolarization. In some instances, this may be referred to as the "diffuse" process. In some instances, light may be partially absorbed by the surface of the object, and then reflected back (e.g., due to scattering). In some examples, this process may include lesser reflectiveness (e.g., the light may be absorbed), and may be associated with a lower degree of polarization. In some instances, due to the diffuse process, an intensity of reflected light may increase, but a degree of polarization may decrease because of the (aforementioned) depolarization inside the object (during absorption).
Accordingly, it may be appreciated that objects that are bright (e.g., objects that display more light contribution) may exhibit both specular and diffuse characteristics. Moreover, because the diffuse contribution may be depolarized, objects that are bright may often exhibit a lower overall degree of polarization. On the other hand, in some examples, objects that are dark (e.g., where transmitted light is not reflected back) may only display specular characteristics, but minimal diffuse characteristics. Also, objects that are dark may largely maintain the polarization of the reflected light, since the specular contribution may dominate, while light that enters the object may be depolarized to an extent (during absorption) and may not be reflected back.
In some examples, a pair of the plurality of illumination cards may be white, and may have a (predetermined) controlled roughness. In particular, in some examples, a first white illumination card of the pair of white illumination cards may be directed to a smaller angle of incidence (AOI), while a second white illumination card of the pair of white illumination cards may be directed to a larger angle of incidence (AOI). In some examples, the pair of white illumination cards may be said to have specular and diffuse characteristics. It may be appreciated that while in some examples at least one illumination card may be utilized (e.g., to determine and reconstruct an object's shape and material properties), in other examples use of an illumination card may not be necessary at all.
It may be appreciated that, in some examples, a plurality of illumination cards may be replaced by any object to be reconstructed. That is, in some examples, a plurality of illumination cards may be utilized to calibrate a system directed to determining and reconstructing a shape and a material property of an object, since a shape and material properties of the plurality of illumination cards may be known. However, it should be appreciated that use of the plurality of illumination cards may not be required to determine and reconstruct a shape and a material property of an object, as described herein.
So, in some examples, a pair of the plurality of illumination cards may be black, and may have a (predetermined) controlled roughness. In particular, in some examples, a first black illumination card of the pair of black illumination cards may be directed to a smaller angle of incidence (AOI), while a second black illumination card of the pair of black illumination cards may be directed to a larger angle of incidence (AOI). In some examples, the pair of black illumination cards may be said to have specular characteristics, but minimal diffuse characteristics.
In some examples, at an angle of incidence (AOI) of five (5) degrees, the black illumination card 701 may provide a first image 701a, a second image 701b, a third image 701c, and a fourth image 701d. In some examples, each of the images 701a-701d may also be referred to as “quadrants.” Also, in some examples, at an angle of incidence (AOI) of forty (40) degrees, the black illumination card 702 may provide a first image 702a, a second image 702b, a third image 702c, and a fourth image 702d.
In some examples, at an angle of incidence (AOI) of five (5) degrees, the white illumination card 703 may provide a first image 703a, a second image 703b, a third image 703c, and a fourth image 703d. In some examples, each of the images 703a-703d may also be referred to as "quadrants." Also, in some examples, at an angle of incidence (AOI) of forty (40) degrees, the white illumination card 704 may provide a first image 704a, a second image 704b, a third image 704c, and a fourth image 704d.
In some examples, each of the images (or quadrants) on the black illumination cards 701-702 and the images (or quadrants) on the white illumination cards 703-704 may correspond to a pixel of a super-pixel (e.g., the super-pixel 603).
In some examples, and as will be discussed further below, the plurality of illumination cards 700 may be utilized to analyze the light reflected from an object to determine a "fringe projection analysis" associated with the object.
In some examples, for a white illumination card where the light may be minimally polarized or fully depolarized, a fully homogenized image may be produced. So, in some examples, if the light may be minimally polarized or fully depolarized, a fully homogenized image for all four quadrants may appear, where a contrast may be lower and fringes may be relatively compressed. That is, in some examples, a white illumination card may depolarize light more, and since (after reflection) the polarization of the light from the white illumination card may be lower, more unpolarized light may reach a pixel through its wire grid. In some instances, this may create an offset that may degrade a degree of contrast in the fringes.
So, in some examples, where a polarization orientation may match an orientation of the wire grid, it may lead to full or nearly full transmission with minimal contrast. However, when a polarization orientation may be orthogonal to an orientation of the wire grid, it may lead to minimal or no transmission, resulting in a sharper contrast.
It may be appreciated that, in some examples, an object may produce different reflections based on an input polarization state of projected light. In some examples, fringes produced by the reflection may be based on a polarization state that may be changing, which may produce a (corresponding) contrast in the fringes.
In some examples, a plurality of illumination cards having multiple angles of incidence (e.g., as illustrated above) may be utilized.
Also, in some examples and as discussed above, structured illumination may require utilizing a plurality (e.g., four) of separate projections at varying intensities. However, in implementing structured polarized illumination, because the polarization states may be varied (e.g., shifted by a quarter of a period, as discussed above) in a single shot to generate a plurality of images (e.g., the images 701a-701d discussed above), a plurality of separate projections may not be necessary.
In some examples, each image associated with a pixel of a super-pixel may exhibit distinct shifted fringes. In some examples, an object may provide a different reflection based on an input polarization state that may be varying (as discussed above).
In addition, in some examples, each pixel of a super-pixel may exhibit a distinct fringe contrast based on a varying input polarization state. Specifically, along these fringes, the input polarization state may be changing (or shifting), resulting in a contrast between the fringes.
Also, and as discussed above, in some examples, after reflection on the object, the returned light may see its polarization state change, which may correspond to these shifted fringes. In some examples, this may also further provide a contrast in the fringes that may be utilized to determine and reconstruct a shape of an object. In some examples, these shifted fringes may improve resolution in a shape reconstruction process, for example when compared to a process that may utilize only one set of fringes.
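For illustration only, the following sketch (in Python, with synthetic fringe data) shows a standard four-step phase-shifting recovery, which is one way such quarter-period-shifted fringes may be combined to estimate a wrapped phase for shape reconstruction; it is not presented as the specific reconstruction prescribed by the present disclosure:

```python
import numpy as np

# Sketch: standard four-step phase shifting. Given four fringe images shifted
# by a quarter period each (as the four super-pixel images may provide in a
# single shot), the wrapped phase follows from an arctangent relation. All
# values below (size, period, offset, modulation) are synthetic assumptions.
H, W = 240, 320
x = np.arange(W)
phase = 2.0 * np.pi * x / 40.0 + 0.3          # true phase (illustrative)
A, B = 0.5, 0.4                                # offset and modulation

I = [np.tile(A + B * np.cos(phase + k * np.pi / 2.0), (H, 1)) for k in range(4)]

wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])  # wrapped phase in (-pi, pi]
# Unwrapping (e.g., np.unwrap along rows) then yields a quantity proportional
# to depth under a calibrated triangulation model.
assert np.allclose(np.cos(wrapped), np.cos(np.tile(phase, (H, 1))))
```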
In some examples, it may be appreciated that a shifting of fringes and/or a contrast in fringes may be (also) based on material properties of an object. As a result, it may be appreciated that, in some examples, based on (among other things) a known polarization state variation, a shift in fringes, and a contrast in the (shifted) fringes, material properties of an object may be determined.
In particular, in some examples, fringes may vary more at larger angles of incidence (AOI) for a black illumination card. In some examples, this may be a result of Brewster's law. For a larger angle of incidence (AOI), an s polarization may be reflected more than a p polarization. In some instances, this may be because certain wire grid orientations may be more suited to s polarization detection, and thus may produce higher maximum fringe intensities.
Referring back to the images discussed above, for the black illumination card 701 (e.g., at five (5) degrees angle of incidence) and for the white illumination card 703 (e.g., at five (5) degrees angle of incidence), it may appear that fringes may be shifted while the contrast exhibited by the images may remain comparable.
However, for the black illumination card 702 (e.g., at forty (40) degrees angle of incidence) and for the white illumination card 704 (e.g., at forty (40) degrees angle of incidence), it may appear that fringes may be shifted but the contrast exhibited by the images may differ significantly. In some instances, this may be a result of Brewster's law, wherein at a larger angle of incidence (AOI), there may be a polarization state that may be highly reflective and another polarization state that may be less reflective. As a result, for one wire grid, there may be fringes that may be exhibited, but for another wire grid, the fringes may be compressed.
It should be noted that, in some examples, the black illumination card may exhibit a rotating angle of linear polarization (AOLP) after reflection, whereas the white illumination card, which may (often) be more depolarized, may show a more constant angle of linear polarization (AOLP).
Also, in some examples, the black illumination card may exhibit more fringes, because the white illumination card may be more depolarized. Furthermore, in some examples, for a larger angle of incidence (AOI), the black illumination card may exhibit a significant difference in fringe construction, which may not be present in the case of the smaller angle of incidence (AOI).
As discussed above, in some examples, a Stokes vector, such as an output Stokes vector Scamera (or Soutput) to be determined in implementation of structured polarized illumination techniques described herein, may be calculated as a function of intensity. Specifically, in some examples, intensity of light having an angle theta (θ) (e.g., as exhibited by a pixel located below a wire grid filter at orientation θ), may be described as:

I(θ) = (S0 + S1·cos(2θ) + S2·sin(2θ)) / 2
As discussed above, in some examples, a super-pixel may include four pixels, wherein each pixel may be associated with a different theta (θ) angle (e.g., 0, π/4, π/2, and −π/4). As also discussed above, in some examples, a quadrant image (e.g., the quadrant images described above) may be associated with each pixel of a super-pixel.
In some examples, S0 may represent a total light intensity that may be based on intensity of four pixels (e.g., each having a different theta angle 0, π/4, π/2, and −π/4) of a super-pixel. In some examples, S1 may represent how light may be horizontally (+1) or vertically (−1) polarized, and S2 may represent how the light may be polarized at +45 degrees (+1) or at −45 degrees (−1).
It may be appreciated that the equations for S0, S1, and S2 may be rewritten as follows:

S0 = (I0 + Iπ/4 + Iπ/2 + I−π/4) / 2
S1 = I0 − Iπ/2
S2 = Iπ/4 − I−π/4
In some examples, based on the relationships indicated above, an intensity of each pixel (e.g., I0, Iπ/4, Iπ/2, and I−π/4) may be provided with respect to a Stokes vector component (e.g., S0, S1, and S2). Specifically, in some examples, where four (4) wire grid polarizers may be implemented having theta (θ) angles of 0, π/4, π/2, and −π/4, their intensities as functions of Stokes components S0, S1, and S2 may be rewritten as:

I0 = (S0 + S1) / 2
Iπ/4 = (S0 + S2) / 2
Iπ/2 = (S0 − S1) / 2
I−π/4 = (S0 − S2) / 2
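For illustration only, the following sketch (in Python) inverts the relations above, recovering the first three Stokes components from the four super-pixel intensities; the round-trip test values are assumed:

```python
import numpy as np

# Sketch: recover (S0, S1, S2) from the four intensities recorded behind
# wire grids at theta = 0, pi/4, pi/2, and -pi/4, per the relations above.
def stokes_from_superpixel(I0, I45, I90, I135):
    S0 = 0.5 * (I0 + I45 + I90 + I135)   # total intensity (four pixels)
    S1 = I0 - I90                        # horizontal (+) vs. vertical (-)
    S2 = I45 - I135                      # +45 degree (+) vs. -45 degree (-)
    return np.array([S0, S1, S2])

# Round trip for fully linearly polarized light at 30 degrees:
theta = np.deg2rad(30.0)
grid = np.array([0.0, np.pi / 4, np.pi / 2, -np.pi / 4])
I = 0.5 * (1.0 + np.cos(2.0 * (theta - grid)))     # Malus's law per pixel
print(stokes_from_superpixel(*I))                  # ~ [1, cos(60deg), sin(60deg)]
```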
Determination of a Mueller Matrix Associated with Material Properties: First Approach
In some examples, implementing structured polarized illumination techniques as described herein may include determining a Mueller matrix for an object. As discussed above, in some examples the Mueller matrix for the object may be used to describe material properties of the object. Specifically, in some examples, structured polarized illumination techniques as described herein may be utilized to provide multiple, different (incident) polarization states that may be used to determine material properties of the object.
In some examples, a Mueller matrix may be determined with respect to an incident angle and a scattering angle (also referred to as an "outgoing angle"). In some examples, and as discussed further below, fringe information associated with an object and a shape of the object may be utilized to determine Mueller matrix elements for the object for a range of incident angles and scattering angles with respect to a surface normal. In some examples, for an object of arbitrary shape, projection of fringes may be utilized to determine a shape of (along with associated depths for) the object, which may then be used to determine a map of surface normals that may be associated with the object.
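For illustration only, the following sketch (in Python, with an assumed pixel pitch and an orthographic approximation) shows one way a map of surface normals may be estimated from a depth map obtained via fringe projection:

```python
import numpy as np

# Sketch: estimate per-pixel surface normals from a depth map z(x, y) via
# its gradients; the pixel pitch and orthographic model are assumptions.
def normals_from_depth(z, pixel_pitch=1.0):
    dz_dy, dz_dx = np.gradient(z, pixel_pitch)
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(z)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# Example: a tilted plane has a constant normal everywhere.
y, x = np.mgrid[0:64, 0:64]
normals = normals_from_depth(0.1 * x)        # plane z = 0.1 * x
print(normals[0, 0])                         # ~ [-0.0995, 0, 0.995]
```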
According to examples, and as will be discussed further below, a Mueller matrix for an object may be determined in multiple ways. In a first method, three parameters (e.g., associated with an incident angle and a scattering angle) may be utilized to determine a Mueller matrix. In some examples, the three parameters may include theta_i (or θi), which may represent a zenith incident angle with respect to a normal, theta_o (or θo), which may represent a zenith viewing angle, and phi (ϕ), which may represent an azimuth difference between incident and scattered rays. As will be discussed further below, in some examples, these three parameters may be utilized to group similarly situated pixels.
In some examples, utilizing these parameters, and based on a shape of an object, a position of an illumination source and a position of a camera (e.g., a polarization camera), a Mueller matrix for an object may be determined for a given incident and scattering angle. That is, in some instances, a particular Mueller matrix may be determined for each pair of incident angle and scattering angle.
In some examples, based on a position of an illumination source and a position of a camera, pixels that may share a same incident and scattering angle with respect to a surface normal may be determined. It should be appreciated that because a surface normal may be changing over a surface of an object, there may be different (corresponding) pixels for which an incident angle and scattering angle may be the same.
In some examples, upon determining at least one pixel (e.g., three (3) independent pixels) that may share a same incident and scattering angle with respect to a surface normal, a rotation matrix technique (e.g., rotation from source to surface normal coordinates, rotation from surface normal to camera coordinates, etc.) may be implemented to generate a linear system that may include Mueller matrix elements intrinsic to the material.
In some examples, the input Stokes vector may be referred to as Ssource, and may be determined according to a predetermined implementation of gratings, as discussed above. Also, in some examples, the output Stokes vector may be referred to as Scamera, and may be determined based on the fringe projection analysis discussed above. In some examples, a Mueller matrix may be referred to as Mobject, and may be represented by the following equation:

Scamera = Mobject·Ssource
In some examples, incorporating one or more rotation matrices (e.g., rotation from source to surface normal coordinates, and from surface normal to camera coordinates), this equation may be rewritten as:

Scamera = R(normal→camera)·Mobject·R(source→normal)·Ssource
In some examples, multiple (e.g., three) independent pixels that may share a same incident and scattering angle may be grouped (also known as "classification") to create a group (or groups) of pixels where, within each of these groups, the incident and scattering angles may be similar or the same with respect to a surface normal. In some examples, the classification may provide a matrix that includes the pixels in this group. In some examples, where a group may include three pixels having a same incident and scattering angle with respect to a surface normal (e.g., based on θi, θo, and ϕ), a classification may be represented as a matrix that stacks the measurements of the grouped pixels.
In some examples, components of a Mueller matrix may be determined for a group (e.g., three) of pixels where the incident and scattering angles may be similar or the same with respect to a surface normal by solving the resulting linear system.
In some examples, and as indicated above, the Mueller matrix may be expressed as a vector with the following components (m00, m01, m02, m10, m11, m12, m20, m21, m22). In the above example, each pixel where the incident and scattering angles may be similar or the same may include three components a, b, c that may correspond to the parameters discussed above (e.g., based on θi, θo, and ϕ). In the example above, the three components for each of the three pixels may provide the nine elements in the Mueller matrix in vector form. However, it may be appreciated that the Mueller matrix may be calculated in this manner with more than three pixels as well.
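For illustration only, the following sketch (in Python) shows the shape of such a grouped linear solve; the per-pixel coefficient matrices below are random placeholders standing in for the geometry-dependent rotation and input-Stokes terms, which in practice would be derived from the actual setup:

```python
import numpy as np

# Sketch: solve for the 3x3 Mueller block (m00..m22, vectorized) from a
# group of pixels sharing the same incident/scattering geometry. Each pixel
# contributes rows A_px such that A_px @ m = measured Stokes components; the
# A_px values here are hypothetical stand-ins for the rotation/input terms.
rng = np.random.default_rng(0)
m_true = rng.normal(size=9)                 # unknown Mueller elements (test)

rows, rhs = [], []
for _ in range(3):                          # e.g., three independent pixels
    A_px = rng.normal(size=(3, 9))          # placeholder coefficient rows
    rows.append(A_px)
    rhs.append(A_px @ m_true)               # "measured" output components

A = np.vstack(rows)                         # (9, 9) stacked linear system
b = np.concatenate(rhs)
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(m_est, m_true)
```

With more than three pixels in a group, the same least-squares call may simply be given the additional stacked rows.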
Furthermore, it may be appreciated that, in some examples, the structured polarized illumination techniques described herein to generate a Mueller matrix for an object may not provide access to a fully polarized bi-directional reflectance function (pBRDF). In some examples, a fully polarized bi-directional reflectance function (pBRDF) may provide Mueller matrices for all possible incoming and scattering angled rays. Instead, in some examples, the structured polarized illumination techniques may provide access to an angle range that may be offered by a particular setting (e.g., a location of camera, a location of illumination source, etc.) and a particular object shape.
Determination of a Mueller Matrix Associated with Material Properties: Second Approach
As discussed above, a second approach to implementing structured polarized illumination to determine a Mueller matrix for an object may be implemented as well. In some examples, the second approach may include utilizing two parameters to represent a relationship between an incident angle and a scattering angle. In some examples, the two parameters may include a "halfway vector", which may represent an average of an incident vector and an outgoing vector, and a "difference angle", which may represent an angle between an incident vector and a halfway vector. In some examples, these two parameters may be utilized to group similarly situated pixels. In some examples, the halfway vector may be represented as:

h = (ωi + ωo) / |ωi + ωo|

where ωi and ωo may denote unit vectors along the incident and outgoing directions, respectively.
In some examples, the difference angle may be represented as:

θd = arccos(ωi·h)
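For illustration only, the following sketch (in Python, with assumed unit direction vectors) computes the halfway vector and the difference angle for a pair of incident and outgoing directions:

```python
import numpy as np

# Sketch of the two-parameter grouping basis: the halfway vector h averages
# the incident and outgoing directions, and the difference angle measures
# the angle between the incident direction and h (vector names assumed).
def halfway_and_difference(w_i, w_o):
    h = (w_i + w_o) / np.linalg.norm(w_i + w_o)
    theta_d = np.arccos(np.clip(np.dot(w_i, h), -1.0, 1.0))
    return h, theta_d

w_i = np.array([0.0, 0.5, np.sqrt(0.75)])   # unit incident direction (example)
w_o = np.array([0.0, -0.5, np.sqrt(0.75)])  # unit outgoing direction (example)
h, theta_d = halfway_and_difference(w_i, w_o)
print(h, np.rad2deg(theta_d))               # h = [0, 0, 1], theta_d = 30 degrees
```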
It may be appreciated that, in some examples, any representation basis for a polarized bi-directional reflectance function (pBRDF) may be utilized for the structured polarized illumination techniques described herein. Indeed, in some examples, this may include grouping pixels (e.g., pixels sharing same angles in this basis) before inverting a linear system for a Mueller matrix.
As discussed above, in some examples, structured polarized illumination techniques as described herein may include spatially varying a polarization state of the input Stokes vector with a single implementation (or a “single shot”). Furthermore, it may be appreciated that implementation of techniques for structured polarized illumination to determine shape and material properties of an object may be extended to more than one sinusoidal modulation component present for an illumination pattern at different angles. As a result, in some examples, the techniques for structured polarized illumination may extend to several projection systems.
In addition, it may be appreciated that wavelength multiplexing may be added to increase robustness of an approach (e.g., one projector in red, one in green, one in blue). In some examples, this may be utilized to recover more material information as additional variations may be implemented based on input wavelength(s).
Furthermore, in some examples, a combination of liquid crystal (LC) elements and switchable Pancharatnam-Berry (PB) elements may be used to switch between structured polarized light and linearly (or circularly) polarized light (e.g., to implement a dynamic pattern) in every second frame to capture both a higher quality measurement that may capture a Mueller matrix at lower resolution and a reduced quality image at higher resolution.
Reference is now made to a computer system for implementing the structured polarized illumination techniques described herein. While the servers, systems, subsystems, and/or other computing devices described herein may be shown as single components or elements, it should be appreciated that these single components or elements may represent multiple components or elements, and that these components or elements may be connected via one or more networks. As shown, the computer system may include a processor 1001 and a memory 1002.
In some examples, the memory 1002 may have stored thereon machine-readable instructions (which may also be termed computer-readable instructions) that the processor 1001 may execute. The memory 1002 may be an electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. The memory 1002 may be, for example, random access memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, an optical disc, or the like. The memory 1002, which may also be referred to as a computer-readable storage medium, may be a non-transitory machine-readable storage medium, where the term "non-transitory" does not encompass transitory propagating signals.
It should be appreciated that, and as described further below, the processing performed via the instructions on the memory 1002 may or may not be performed, in part or in total, with the aid of other information and data. Moreover, and as described further below, it should be appreciated that the processing performed via the instructions on the memory 1002 may or may not be performed, in part or in total, with the aid of or in addition to processing provided by other devices.
In some examples, and as discussed further below, the instructions 1003-1009 on the memory 1002 may be executed alone or in combination by the processor 1001 for implementing structured polarized illumination techniques to determine and reconstruct a shape and a material property of an object.
In some examples, the instructions 1003 may implement an illumination source (e.g., a laser diode) to emit light towards an arrangement of at least one grating (e.g., one or more Pancharatnam-Berry (PB) gratings).
In some examples, the instructions 1004 may determine an input Stokes vector. In some examples, the instructions 1004 may determine the input Stokes vector based on the light emitted towards the arrangement of the at least one Pancharatnam-Berry (PB) grating.
In some examples, the instructions 1005 may implement a polarization camera to capture light reflected from an object whose shape and material properties are to be determined. In some examples, the polarization camera may include a plurality of super-pixels having a plurality of pixels. In some examples, each of the plurality of pixels may be associated with a wire grid having a particular wire grid orientation.
In some examples, the instructions 1006 may implement a plurality of illumination cards for imaging light captured (e.g., by a polarization camera) upon reflection from an object. In some examples, the instructions 1006 may implement a projection of fringes and an associated fringes analysis utilizing the plurality of illumination cards. In some examples, the plurality of illumination cards may include a black illumination card having a smaller angle of incidence (AOI), a black illumination card having a larger angle of incidence (AOI), a white illumination card having a smaller angle of incidence (AOI), and a white illumination card having a larger angle of incidence (AOI).
In some examples, the instructions 1007 may determine a shape of an object. In particular, in some examples, the instructions 1007 may utilize a projection of fringes and an associated fringes analysis (e.g., as implemented via the instructions 1006) to determine the shape of the object using structured polarized illumination. In some examples, the instructions 1007 may determine a map of surface normals that may be associated with a shape of the object.
In some examples, the instructions 1008 may determine an output Stokes vector. In particular, in some examples, the instructions 1008 may determine the output Stokes vector based on a projection of fringes and an associated fringes analysis (e.g., as implemented via the instructions 1006).
In some examples, the instructions 1009 may determine a Mueller matrix for an object that may describe material properties of the object. In some examples, the instructions 1009 may utilize an input Stokes vector (e.g., as determined via the instructions 1004) and an output Stokes vector (e.g., as determined via the instructions 1008) to determine the Mueller matrix for the object.
Reference is now made with respect to a method for implementing structured polarized illumination techniques to determine and reconstruct a shape and a material property of an object. In some examples, at 1110, a plurality of gratings (e.g., Pancharatnam-Berry (PB) gratings) may be arranged to provide structured polarized illumination.
At 1120, a light beam may be emitted toward a plurality of gratings (e.g., the plurality of Pancharatnam-Berry (PB) gratings arranged at 1110), and further toward an object whose shape and material properties may be determined.
At 1130, an input Stokes vector may be determined for a light beam emitted toward an object whose shape and material properties may be determined. In some examples, the input Stokes vector may be determined based on the intensity of the light beam.
At 1140, a polarization camera may receive light reflected from an object whose shape and material properties may be determined. In some examples, the polarization camera may include at least one pixel (e.g., a plurality of pixels) that may have a varied wire grid orientation. In some examples, a first pixel may have a wire grid orientation α=0 degrees, a second pixel may have a wire grid orientation α=45 degrees (or π/4 radians), a third pixel 603c may have a wire grid orientation α=90 degrees (or π/2 radians), and a fourth pixel 603d may have a wire grid orientation α=135 degrees (or −π/4 radians).
At 1150, a plurality of illumination cards may be utilized to image light captured (e.g., by a polarization camera) upon reflection from an object whose shape and material properties may be determined. In some examples, the plurality of illumination cards may be utilized to generate a plurality of images that may each be associated with a particular pixel and an associated wire grid of the polarization camera.
At 1160, a shape of an object may be determined. In particular, in some examples, the shape of the object may be determined based on structured polarized illumination (e.g., as provided via 1110-1150) and analysis of fringes produced based on varying polarization states, which may produce a (corresponding) contrast in the fringes. In some examples, these images having shifted patterns with associated distinct fringes may be utilized to reconstruct a shape of an object.
At 1170, an output Stokes vector may be determined. In particular, in some examples, based on (among other things) intensities of light captured by a plurality of pixels on a polarization camera, a known polarization state variation, a shift in fringes, and a contrast in the (shifted) fringes, an output Stokes vector may be determined.
At 1180, a Mueller matrix associated with material properties of an object may be determined. In some examples, the determination of the Mueller matrix may be based on three parameters including theta_i (or θi), which may represent a zenith incident angle with respect to a normal, theta_o (or θo), which may represent a zenith viewing angle, and phi (ϕ), which may represent an azimuth difference between incident and scattered rays. In some examples, the three parameters may be associated with a given incident and scattering angle. In some examples, a Mueller matrix may be determined based on an input Stokes vector associated with an object (e.g., as determined at 1130) and an output Stokes vector associated with the object (e.g., as determined at 1170).
In some examples, an anti-reflective (AR) coating may be applied to one or more sides of a glass substrate. In some examples and as used herein, an anti-reflection (AR) coating may be a type of optical coating that may be applied to a surface of a lens or other optical element to reduce reflection. In some examples, the anti-reflective (AR) coating may function in visible range of the electromagnetic spectrum, while in other examples, the anti-reflective (AR) coating may function in the ultraviolet (UV) light range of the electromagnetic spectrum.
So, in some examples, an anti-reflective (AR) coating may improve efficiency of an optical component, as less light may be lost due to reflection. In addition, in some instances, use of an anti-reflective (AR) coating may improve an image contrast via elimination of stray light.
In some examples, a glass substrate having an anti-reflective (AR) coating may be utilized for waveguide fabrication. In particular, the glass substrate may include an anti-reflective (AR) coating on one side but not an opposite side.
It may be appreciated that mistakenly utilizing a wrong side (e.g., where an anti-reflective (AR) coating may be on an opposite side) may lead to an excess amount of light accumulating on and/or emanating from a sample (e.g., a glass substrate), and may lead to errors on a following sample during fabrication.
Specifically, in some instances, if an anti-reflective (AR) coating may be on a wrong (or opposite) side, then this error may be magnified during a duplication portion of a fabrication process, and may even be carried to other liquid crystal (LC) elements of a display component. In some instances, this may significantly lower image quality and device performance. Accordingly, it may be beneficial to be able to quickly determine which side an anti-reflective (AR) coating may be provided on.
It may also be appreciated that a determination of which side may include an anti-reflective (AR) coating may not be easy and quick. For example, in some cases, a spectrum measurement may be utilized, but may not necessarily work for a single anti-reflective (AR) coating substrate, since the spectrum measurement technique may require multiple measurements utilizing two anti-reflective (AR) coating substrates.
Also, in some examples, a typical method that may be used to determine which side of a glass substrate may include an anti-reflective (AR) coating may include utilizing a spectrometer. In particular, in some examples, this may include attaching two pieces of anti-reflective (AR) coating substrate together (e.g., next to each other) and sandwiching a layer of liquid (e.g., oil, isopropyl alcohol, water, etc.) for index matching. In some examples, this may require four (4) measurements (e.g., front to front, front to back, back to front, and back to back) with an ultraviolet (UV) spectrometer. As a result, in some instances, the process may take excessive time (e.g., fifteen (15) minutes or more), and may lead to a higher risk of damaging an anti-reflective (AR) coating.
For testing of visible light anti-reflective (AR) coatings, in some instances, another method may be implemented. Such a method is illustrated in the accompanying figure.
In some examples, and as discussed above, an amount of visible light that may be reflected may be based upon a difference between the refractive indices of the air and the glass substrate. In particular, this may be seen in the example illustrated in the accompanying figure.
Also, in some examples, and as indicated by the green boxes in the accompanying figure, one arrangement may provide a transition from “low” (e.g., air having a refractive index of 1.0) to “medium” (e.g., an anti-reflective (AR) coating having a refractive index of 1.3) to “high” (e.g., a glass substrate having a refractive index of 1.5). As a result, in some examples, this may result in less reflection.
Also, in some examples, and as indicated by green boxes, this arrangement may also provide a transition from “low” (e.g., air having a refractive index of 1.0) to “high” (e.g., a glass substrate having a refractive index of 1.5) to “medium” (e.g., anti-reflective (AR) coating having a refractive index of 1.3). As a result, in some examples, this may result in greater reflection, and it may be determined that the visible light anti-reflective (AR) coating may be located on a front-facing side (or top) of the glass substrate.
Accordingly, it may be appreciated that, in some examples, if the solvent may be applied on a side that may not have a visible light anti-reflective (AR) coating, a surface may appear dimmer. However, in some examples, if the solvent may be applied on a side that may have a visible light anti-reflective (AR) coating, a surface may appear brighter.
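A rough way to see why these index transitions brighten or dim a surface is the normal-incidence Fresnel reflectance for a single interface. The sketch below is standard optics, not measurements from the disclosure, and it ignores thin-film interference (which a real anti-reflective (AR) coating also exploits).

```python
# A minimal sketch: normal-incidence Fresnel reflectance at a single
# interface, illustrating why the "low/medium/high" refractive index
# transitions described above change perceived brightness.

def reflectance(n1: float, n2: float) -> float:
    """Fraction of light reflected at an interface between indices n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

print(reflectance(1.0, 1.5))  # air -> bare glass: ~0.040 (4% reflected)
print(reflectance(1.0, 1.3))  # air -> AR coating: ~0.017 (dimmer return)
print(reflectance(1.3, 1.5))  # AR coating -> glass: ~0.005
```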
It may be appreciated that, in some instances, a similar process may be utilized to determine a change in reflectiveness for ultraviolet (UV) coating as well. Specifically, depending on a transitioning of refractive index between multiple layers, a reflectiveness associated with a transition may be utilized to determine a sequence of layers.
However, an issue may arise as reflectiveness (e.g., a change in brightness) associated with a large portion of the ultraviolet (UV) spectrum may generally not be visible to the human eye. Indeed, in some instances, an ultraviolet (UV) anti-reflective (AR) coating may not function as anti-reflective (AR) with respect to a visible light wavelength range.
Systems and methods described herein may utilize a change in reflected color associated with a short-wavelength range of visible light to determine a side (or surface) of a sample having an ultraviolet (UV) anti-reflective (AR) coating. In some examples, the systems and methods may determine the side having an ultraviolet (UV) anti-reflective (AR) coating using a single sample.
As will be discussed in further detail below, in some examples, when an ultraviolet (UV) anti-reflective (AR) coating may be in effect (that is, when the solvent may be applied on a glass surface), the sample may appear to display a yellowish/reddish hue. In some instances, this may be because the yellow and red portions of the visible light spectrum may be (mostly) reflected, but the blue portion of the visible light spectrum may mostly not be reflected. As a result, in these instances, an existing color profile may remain, and there may be no color change when compared to before application of the solvent.
However, when the ultraviolet (UV) anti-reflective (AR) coating may not be in effect (e.g., when the solvent may be applied on the ultraviolet (UV) anti-reflective (AR) coating), the blue portion may be reflected along with the remaining portions of the spectrum, so a color profile in this instance may appear to be a “cold” blue.
It may be appreciated that a wavelength range associated with a visible light portion of electromagnetic spectrum may be from four hundred (400) to seven hundred (700) nanometers (nm).
In some examples and as will be discussed further below, the systems and methods described herein may utilize this transition as a basis for a “visible marker” to determine a side (or surface) of a sample having an ultraviolet (UV) anti-reflective (AR) coating. That is, in some examples, the systems and methods may, instead of determining a change (e.g., an increase or decrease) in reflectiveness of visible light (as discussed above), utilize a change in color of the reflected light to determine the side (or surface) of a sample having an ultraviolet (UV) anti-reflective (AR) coating.
In some examples, the systems and methods may utilize a change in (reflected) color to enable a visual determination of a side (or surface) of a single sample having an ultraviolet (UV) anti-reflective (AR) coating. That is, based only on a change (or no change) in color discernable by the human eye, it may be determined on which side of a single sample an ultraviolet (UV) anti-reflective (AR) coating may be located. In particular, in some examples, since a blue portion of the spectrum may be absorbed (e.g., anti-reflected) and passed through the sample, the sample may appear to have a reddish or yellowish hue (e.g., under visible light in typical room conditions). As a result, when an ultraviolet (UV) anti-reflective (AR) coating may be in effect (or working), the sample may appear to include a reddish and/or yellowish hue, and may lack a blue coloring or hue. Conversely, where the ultraviolet (UV) anti-reflective (AR) coating may not be in effect (or working), blue light may be reflected and the sample may appear to include an appearance of blue.
In some examples, the systems and methods may include a method for determining a presence of an ultraviolet (UV) anti-reflective (AR) coating on a substrate, comprising: applying a solvent on a surface of a substrate, the substrate having an ultraviolet (UV) anti-reflective (AR) coating on a first surface and no coating on an opposite surface; determining if there is a change in color for the surface of the substrate based on the application of the solvent; and determining if the solvent is applied on the first surface or on the opposite surface based on the change in color for the surface of the substrate. In some examples, the change in color for the surface of the substrate may include a reddish or yellowish hue. Also, in some examples, the solvent has a refractive index similar to a refractive index of the substrate, the substrate is a glass substrate and the solvent is toluene, and the solvent has a refractive index of approximately 1.5.
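As a hypothetical illustration of the determination described above, the decision rule may be reduced to a simple check on the observed color shift. The function name and the encoding of the color shift below are assumptions for illustration only, not part of the disclosure.

```python
# A hypothetical sketch of the decision rule described above.

def solvent_on_coated_side(color_shift: str) -> bool:
    """Report whether the solvent was applied on the UV AR coated surface.

    color_shift: "none" if the reddish/yellowish hue is unchanged,
                 "blue" if the reflection turns a cold blue.
    """
    if color_shift == "none":
        return False  # coating still in effect -> solvent is on bare glass
    if color_shift == "blue":
        return True   # AR effect negated -> solvent is on the coating
    raise ValueError("expected 'none' or 'blue'")
```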
In some examples, and as indicated by the green box in the accompanying figure, this arrangement may provide a transition from “high” (e.g., a glass substrate having a refractive index of 1.5) to “medium” (e.g., an ultraviolet (UV) anti-reflective (AR) coating having a refractive index of 1.3-1.4) to “low” (e.g., air having a refractive index of 1.0). In some examples, because the refractive index of the ultraviolet (UV) anti-reflective (AR) coating falls between that of the glass substrate and the air, the ultraviolet (UV) anti-reflective (AR) coating may be effective for a certain (e.g., shortest wavelength) blue portion of the spectrum. As a result, in some examples, less blue light is likely to reflect (e.g., more blue light is likely to be absorbed by the ultraviolet (UV) anti-reflective (AR) coating), and the overall reflected (e.g., visible) light may retain the same or a similar reddish and/or yellowish appearance under typical ambient room lighting conditions.
Also, in some examples, and as indicated by the green box in the accompanying figure, this arrangement may provide a transition from “medium” (e.g., an ultraviolet (UV) anti-reflective (AR) coating having a refractive index of 1.3-1.4) to “high” (e.g., a glass substrate having a refractive index of 1.5) to “low” (e.g., air having a refractive index of 1.0). As a result, in some examples, the anti-reflective (AR) characteristics of the ultraviolet (UV) anti-reflective (AR) coating may be negated, so the surface may appear to have an additional blue hue compared to before application of the toluene.
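For context, a textbook single-layer anti-reflective (AR) design places the coating index near the geometric mean of the surrounding media, with a quarter-wave optical thickness at the target wavelength. The sketch below applies that standard rule to the example indices above; it is illustrative only, not a description of the disclosed coating.

```python
import math

# Textbook single-layer AR design rule (illustrative only): the ideal
# coating index is the geometric mean of the surrounding media, with
# quarter-wave optical thickness at the target wavelength.

n_air, n_glass = 1.0, 1.5
n_ideal = math.sqrt(n_air * n_glass)    # ~1.22, near the 1.3-1.4 range above
thickness_nm = 380.0 / (4.0 * n_ideal)  # quarter wave at 380 nm (UV edge)
print(round(n_ideal, 2), round(thickness_nm, 1))  # 1.22, ~77.6 nm
```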
Accordingly, in some examples, when an ultraviolet (UV) anti-reflective (AR) coating may be in effect (that is, when the solvent may be applied on the glass), the sample may appear to display a yellowish/reddish hue, since the yellow and red portions of the visible light spectrum may be mostly reflected but the blue portion of the visible light spectrum may mostly not be reflected. In these instances, a color profile may remain and there may be no color change when compared to before application of the solvent. However, when the ultraviolet (UV) anti-reflective (AR) coating may not be in effect (e.g., when the solvent may be applied on the ultraviolet (UV) anti-reflective (AR) coating), the blue portion may be reflected along with the remaining portion of the spectrum, so a color profile may appear to be a “cold” blue.
The systems and methods described herein may provide various advantages. In some examples, the systems and methods described may utilize a single sample (e.g., a glass substrate having an anti-reflective (AR) coating), and may only require a visual inspection to provide an associated conclusion (e.g., which side the anti-reflective (AR) coating may be on). Moreover, in some instances, the systems and methods may be implemented and/or conducted within a minimal time period (e.g., one (1) minute), and therefore may be substantially more efficient and effective than prevailing alternatives.
Reference is now made with respect to the accompanying figure, which illustrates a flow diagram of a method for determining which side of a substrate may include an ultraviolet (UV) anti-reflective (AR) coating, according to an example.
In some examples, the solvent may have a refractive index similar to that of the substrate. So, in an instance where the substrate may be glass having a refractive index of one point five (1.5), a solvent with a similar index may be selected. In some examples, the solvent may be toluene.
At 1920, the method may include applying a solvent on a surface of a substrate (e.g., a glass substrate) having an ultraviolet (UV) anti-reflective (AR) coating. So, in an example where the solvent may be toluene, the toluene may be applied to one surface of the glass substrate.
At 1930, the method may include determining whether the solvent may have been applied on top of the substrate or on the ultraviolet (UV) anti-reflective (AR) coating. In particular, in some examples, the method may include determining a color change (e.g., a change in reflectiveness characteristics) of the surface on which the solvent may have been applied.
So, in an example where the solvent may have been applied on a surface of the substrate (e.g., a glass substrate), also referred to as the “non-anti-reflective” side, a top portion may exhibit no change in reflectiveness characteristics. However, in some examples, for a bottom portion (e.g., wherein the ultraviolet (UV) anti-reflective (AR) coating may be provided), the ultraviolet (UV) anti-reflective (AR) coating may function to reflect less blue light (e.g., the blue light may be absorbed). As a result, in some examples, a reflection of visible light from the surface may appear reddish and/or yellowish.
Also, in an example where the solvent may have been applied on a surface where an ultraviolet (UV) anti-reflective (AR) coating may be provided, a top portion may exhibit no change in reflectiveness characteristics (e.g., blue light may still be reflected), and reflected light may appear the same as incident light. Also, in some examples, for a bottom portion (e.g., wherein the glass substrate may be provided), there may be no change in reflectiveness characteristics (e.g., blue light may still be reflected), and reflected light may appear the same as incident light.
What has been described and illustrated herein are examples of the disclosure along with some variations. The terms, descriptions, and figures used herein are set forth by way of illustration only and are not meant as limitations. Many variations are possible within the scope of the disclosure, which is intended to be defined by the following claims—and their equivalents—in which all terms are meant in their broadest reasonable sense unless otherwise indicated.
In some examples, conventional methods of fabricating polarization gratings (e.g., liquid crystal (LC) gratings) may be limited to providing one grating per side of a substrate. As used herein, in some instances, the term “polarization grating” may be used interchangeably with “volume hologram” or “polarization volume hologram.” In particular, in some examples, a polarization grating may include a single grating per side having a single surface pattern.
In some examples, as illustrated in the accompanying figure, an alignment layer 2002 may be exposed to polarized light 2003.
In some examples, the alignment layer 2002 may be aligned in a manner that may be perpendicular to an orientation of the polarized light 2003. It may be appreciated that, in some examples, the alignment layer 2002 may be patterned using any number of polarization lithography techniques.
In some examples, as illustrated in the accompanying figure, an anisotropic (e.g., liquid crystal (LC)) layer may be provided on top of the alignment layer 2002, wherein the anisotropic layer may acquire the alignment (e.g., orientation) of the alignment layer 2002.
It may be appreciated that, in some instances, a grating structure may be comprised of a plurality of anisotropic layers (e.g., liquid crystal (LC) layers). In some examples, a grating structure may be utilized, for example, to provide a larger field-of-view (FOV) and enhanced color uniformity. However, typically, a subsequent (e.g., top) anisotropic layer that may be coupled to a previous (e.g., bottom) anisotropic layer may acquire a same alignment (e.g., pitch or orientation) as the previous layer.
In some instances, such limiting of pitch or orientation of one or more anisotropic layers may limit degrees of freedom and functionality of a grating structure. For example, in some instances, this may require coating a plurality of grating layers onto a plurality of (corresponding) substrates (e.g., one grating on each side of a substrate), and then gluing the substrates together to create a grating structure. It may be appreciated that this may lead to additional bulk for an optical component, and may create alignment issues among the plurality of substrates of the grating structure.
Systems and methods described herein may be directed to stacking multiple deposited anisotropic layers of a polarization grating structure using a barrier layer. In some examples, and as will be discussed further below, the systems and methods may provide a grating structure comprised of multiple grating layers stacked on top of each other that may exhibit independent pitch or orientation for each of a plurality of anisotropic (e.g., liquid crystal (LC)) layers.
In some examples, the systems and methods may implement an intermediary (or “barrier”) layer that may be provided on top of a first (e.g., bottom) polarization grating that may enable subsequent processing of a second (e.g., top) polarization grating in a manner that may be unaffected by the first polarization grating.
In particular, in some examples, the systems and methods may provide a plurality of anisotropic (e.g., liquid crystal (LC)) layers that may each exhibit independent pitch or orientation. As a result, in some examples, by providing a barrier layer on top of a first polarization grating, this may enable independent processing of a following layer such that the following layer may exhibit alignment properties (e.g., pitch or orientation) independent of a previous layer.
In some examples, a barrier layer as provided herein may be comprised of an isotropic material. In particular, in some examples, the barrier layer may be comprised of the isotropic material to enable a decoupling (or canceling) of alignment properties present in a previous (e.g., anisotropic) grating layer.
In some examples, by providing the (e.g., isotropic) barrier layer on top of an anisotropic (e.g., liquid crystal) layer of a grating layer, the barrier layer may enable a new polarization arrangement with no preexisting (or associated) alignment(s). In particular, in some examples, this may enable a second grating structure to be provided on top of a first grating structure, wherein a second alignment layer (or photo-aligned material (PAM)) may be provided on top of the barrier layer, the second alignment layer may be exposed to polarized (e.g., ultraviolet (UV)) light to create an orientation or pitch that may be different than that exhibited by the first grating structure, and an anisotropic (e.g., liquid crystal (LC)) layer may be implemented that may mirror the (different) orientation of the second alignment layer.
In some examples, a method for depositing anisotropic layers using a barrier layer may comprise: depositing a first alignment layer on top of a substrate, exposing the first alignment layer to an ultraviolet (UV) light source to provide a first polarization orientation to the first alignment layer, providing a first anisotropic layer on top of the first alignment layer, wherein the first anisotropic layer is to acquire the first polarization orientation of the first alignment layer, providing an isotropic barrier layer on top of the first anisotropic layer, depositing a second alignment layer on top of the isotropic barrier layer, exposing the second alignment layer to the ultraviolet (UV) light source to provide a second polarization orientation to the second alignment layer, and providing a second anisotropic layer on top of the second alignment layer, wherein the second anisotropic layer is to acquire the second polarization orientation of the second alignment layer. In some examples, the second polarization orientation is different than the first polarization orientation, and the first anisotropic layer and the second anisotropic layer are comprised of liquid crystal (LC). In some examples, the isotropic barrier layer is comprised of silicon dioxide (SiO2), and the isotropic barrier layer is to absorb specified optical wavelength ranges.
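The following sketch models the stacking sequence described above as data, to make the role of the barrier layer concrete: each anisotropic layer copies the orientation of its alignment layer, and an isotropic barrier allows the next alignment layer to be set independently. The class and function names are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

# A minimal sketch, assuming a simple stack model (not the disclosed process).

@dataclass
class Layer:
    kind: str                            # "alignment", "anisotropic", "barrier"
    orientation: Optional[float] = None  # degrees; None for isotropic layers

def build_stack(orientations: List[float]) -> List[Layer]:
    """One alignment/anisotropic pair per requested orientation, separated
    by isotropic barrier layers so each pitch is set independently."""
    stack: List[Layer] = []
    for i, theta in enumerate(orientations):
        if i > 0:
            stack.append(Layer("barrier"))         # decouples prior alignment
        stack.append(Layer("alignment", theta))    # photo-aligned, UV-exposed
        stack.append(Layer("anisotropic", theta))  # LC acquires the alignment
    return stack

for layer in build_stack([0.0, 45.0, 90.0]):
    print(layer)
```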
In some examples, upon imparting of alignment properties from the alignment layer 2102 to the anisotropic layer 2103, a barrier layer 2104, having isotropic characteristics, may be provided (e.g., coated) on top of the anisotropic layer 2103. As will be discussed further below, the barrier layer 2104 may, in some examples, enable implementation of an additional anisotropic (e.g., liquid crystal (LC)) layer that may exhibit a different polarization orientation than the anisotropic layer 2103.
In some examples, the barrier layer 2104 may be added to the top of the anisotropic layer 2103 via a deposition process. Examples of such a deposition process may include, but are not limited to, sputtering, evaporation, and spinning. As discussed above, in some examples, upon depositing the barrier layer 2104 on top of the anisotropic layer 2103, the alignment properties of the anisotropic layer 2103 may be decoupled with respect to any subsequent layers (e.g., added on top).
As a result, and as illustrated in the accompanying figure, a second anisotropic (e.g., liquid crystal (LC)) layer 2202, having alignment properties independent of the first anisotropic layer 2201, may be implemented. In some examples, the barrier layer 2203 may serve to de-couple properties of the first anisotropic layer 2201 from the second anisotropic layer 2202. Indeed, it may be appreciated that, in this manner and in some examples, any number of anisotropic layers (e.g., three, four, etc.) having independent properties may be implemented on top of each other.
It may be appreciated that, in some examples, a barrier layer as described herein may have a sufficient and/or minimum thickness that may be necessary to de-couple alignment properties of a first anisotropic layer from a second anisotropic layer. In some examples, a sufficient and/or minimum thickness of the barrier layer may be based on the material comprising the barrier layer.
In some examples, a barrier layer as described may be comprised of various materials. It may be appreciated that, in some examples, a barrier layer may be made of any material that may exhibit isotropic characteristics. In some examples, the barrier layer may be made of a transparent material as well. For example, in some instances, the barrier layer may be comprised of an inorganic material, such as silicon dioxide (SiO2), silicon nitride (SixNy), or magnesium fluoride (MgF2), that may be deposited, for example, via a sputtering process. In another example, the barrier layer may be comprised of an organic material, such as a (e.g., spin-coated) SU-8 polymer layer, a photoresist layer, or a layer of curable glue. It may be appreciated that, in general, the barrier layer may be comprised of any material that may have optical properties that may be desired for the application.
In some examples, the barrier layer may be capable of absorbing certain optical wavelength ranges to enable the barrier layer to function as a color filter. Also, in some examples, the barrier layer may be capable of reflecting certain optical wavelength ranges as well.
It may be appreciated that the systems and methods described herein may provide various benefits, advantages, and applications. Indeed, in some instances, the systems and methods may be applied in any setting or context where multiple polarization gratings or polarization volume holograms may be stacked together. For example, as discussed above, in some examples, the systems and methods described may enable a plurality (e.g., three or more) of gratings on one substrate. In some examples, the plurality of gratings may enable multiple pupil location mechanisms within a retinal projection system in a display device (e.g., augmented reality (AR) eyeglasses).
In some examples, the systems and methods described may be utilized to provide grating structures that may exhibit additional bandwidth, such that the grating structures may be able to implement broader wavelength ranges (e.g., additional colors). In addition, in some examples, a grating structure comprised of multiple grating layers stacked on top of each other exhibiting independent pitch or orientation may broaden a field of view (FOV) for waveguide applications, and may provide a larger eyebox (e.g., area) as well. Indeed, in some examples, the systems and methods described herein may enable fabrication of a two-dimensional (2D) grating, and may, in some instances, enable elimination of an intermediary (or “folding”) grating.
Additionally, in some examples, the systems and methods may enable multiple grating structures to be located on one side of a substrate (e.g., user-side or eyeside), such that coating procedures (e.g., anti-reflective (AR) coating) may be more efficiently and effectively conducted. Moreover, as a result, an opposing side may be made available for other purposes as well. It may be appreciated that barrier layers as described herein may also facilitate double-sided processing, wherein one or more grating structures may be located on an opposing side (e.g., world-side) as well.
In some examples, a barrier layer as described herein may be implemented as a dual-purpose entity, wherein a first purpose of the barrier layer may be to enable a following layer to exhibit alignment properties (e.g., pitch or orientation) independent of a previous layer, and a second purpose of the barrier layer may be to function as, for example, an anti-reflective (AR) coating layer. In particular, in some examples, a material or materials of the barrier layer may be selected and/or provided to enable both the first purpose (e.g., providing independent alignment properties) and the second purpose (e.g., anti-reflective (AR) coating).
Reference is now made to the accompanying figure, which illustrates a block diagram of an apparatus that may provide stacked anisotropic layers using a barrier layer, according to an example.
While the servers, systems, subsystems, and/or other computing devices shown in the accompanying figure may be depicted by way of example, it should be appreciated that the functionalities described herein may be provided by additional, fewer, and/or different computing devices.
As shown in the accompanying figure, the apparatus may include a processor 2301 and a memory 2302.
In some examples, the memory 2302 may have stored thereon machine-readable instructions (which may also be termed computer-readable instructions) that the processor 2301 may execute. The memory 2302 may be an electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. The memory 2302 may be, for example, random access memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, an optical disc, or the like. The memory 2302, which may also be referred to as a computer-readable storage medium, may be a non-transitory machine-readable storage medium, where the term “non-transitory” does not encompass transitory propagating signals. It should be appreciated that the memory 2302 depicted in the accompanying figure may be provided as an example.
It should be appreciated that, and as described further below, the processing performed via the instructions on the memory 2302 may or may not be performed, in part or in total, with the aid of other information and data. Moreover, and as described further below, it should be appreciated that the processing performed via the instructions on the memory 2302 may or may not be performed, in part or in total, with the aid of or in addition to processing provided by other devices.
In some examples, and as discussed further below, the instructions on the memory 2302 may be executed alone or in combination by the processor 2301 for stacking multiple anisotropic layers of a polarization grating structure using a barrier layer.
In some examples, the instructions 2303 may provide a first anisotropic layer on top of a substrate having an alignment layer. In some examples, the anisotropic layer may have a first polarization orientation. In some examples, the anisotropic layer may be a liquid crystal (LC) layer.
In some examples, the instructions 2304 may provide a barrier layer on top of an (existing) anisotropic layer. In some examples, the barrier layer may be isotropic. In some examples, the barrier layer may be comprised of silicon dioxide (SiO2).
In some examples, the instructions 2305 may provide a second anisotropic layer on top of the barrier layer. In some examples, the anisotropic layer may have a second polarization orientation that may be different from a first polarization orientation of a (e.g., underlying) anisotropic layer that the barrier layer may rest on. In some examples, the anisotropic layer may be a liquid crystal (LC) layer.
Reference is now made with respect to the accompanying figure, which illustrates a flow diagram of a method for stacking anisotropic layers using a barrier layer, according to an example. At 2410, a first anisotropic layer may be provided on top of a substrate having an alignment layer. In some examples, the first anisotropic layer may have a first polarization orientation, and may be a liquid crystal (LC) layer.
At 2420, a barrier layer may be provided on top of an (existing) anisotropic layer. In some examples, the barrier layer may be isotropic. In some examples, the barrier layer may be comprised of silicon dioxide (SiO2).
At 2430, a second anisotropic layer may be provided on top of the barrier layer. In some examples, the anisotropic layer may have a second polarization orientation that may be different from a first polarization orientation of a (e.g., underlying) anisotropic layer that the barrier layer may rest on. In some examples, the anisotropic layer may be a liquid crystal (LC) layer.
What has been described and illustrated herein are examples of the disclosure along with some variations. The terms, descriptions, and figures used herein are set forth by way of illustration only and are not meant as limitations. Many variations are possible within the scope of the disclosure, which is intended to be defined by the following claims—and their equivalents—in which all terms are meant in their broadest reasonable sense unless otherwise indicated.
In the foregoing description, various examples are described, including devices, systems, methods, and the like. For the purposes of explanation, specific details are set forth in order to provide a thorough understanding of examples of the disclosure. However, it will be apparent that various examples may be practiced without these specific details. For example, devices, systems, structures, assemblies, methods, and other components may be shown as components in block diagram form in order not to obscure the examples in unnecessary detail. In other instances, well-known devices, processes, systems, structures, and techniques may be shown without necessary detail in order to avoid obscuring the examples.
The figures and description are not intended to be restrictive. The terms and expressions that have been employed in this disclosure are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding any equivalents of the features shown and described or portions thereof. The word “example” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “example” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
A scanning micro-electromechanical systems (MEMS) mirror with a laser illuminator is a promising optical display system because of its small size, relatively low power consumption, fast response times, and precise control. A potential issue with the use of MEMS mirror optical display systems is that stray light paths may exist in the optical display systems, which may negatively affect display contrast and display image quality. Some possible techniques to reduce the stray light paths may include the use of polymer or liquid crystals. However, these techniques may not be viable because they are not easy to implement with a MEMS fabrication process due to thermal issues and material differences. That is, polymer and liquid crystals may not be able to withstand the relatively high temperature applied to MEMS wafers during the MEMS fabrication process.
Another technique is to include a quarter wave plate (QWP) in the optical display systems, which may reduce stray light paths in the optical display systems and may thus improve display contrast and display image quality. Particularly, QWPs may manipulate the polarization state of light that passes through the QWPs, such as by converting linearly polarized light into circularly polarized light or vice versa. QWPs may convert the polarized light by introducing a phase difference of a quarter of a wavelength between the two orthogonal components of the incoming linearly polarized light, in which the phase difference results in the rotation of the polarization direction.
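The quarter wave plate (QWP) behavior described above may be illustrated with standard Jones calculus (textbook optics, not the disclosed device): a QWP with its fast axis at forty-five (45) degrees converts horizontally polarized light into circularly polarized light.

```python
import numpy as np

# Standard Jones-calculus illustration: a QWP with its fast axis at 45
# degrees converts horizontally polarized light into circular polarization.

def qwp(theta: float) -> np.ndarray:
    """Jones matrix of a quarter wave plate with fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    retarder = np.array([[1, 0], [0, 1j]])  # 90-degree lag on the slow axis
    return rot @ retarder @ rot.T

horizontal = np.array([1, 0], dtype=complex)
out = qwp(np.pi / 4) @ horizontal
print(out)  # ~[0.5+0.5j, 0.5-0.5j]: equal amplitudes, 90 deg apart -> circular
```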
A MEMS fabrication friendly QWP design is a wire grid QWP, which may be patterned to nanometer precision. Typically, the wire grid QWPs are made from metals. However, metals, such as silver and alloys, that have good optical performance in the visible spectrum, e.g., high reflectivity and broadband operation, are typically hard to pattern and oxidize quickly.
Disclosed herein are methods for fabricating MEMS mirror devices having a wire grid QWP, in which the wire grid QWP is formed of crystalline silicon. As a result, the wire grid QWP may be patterned with high precision and may be able to withstand the relatively high temperatures applied to the MEMS mirror devices during the MEMS mirror device fabrication process. In other words, crystalline silicon may provide substantially better wire grid QWP performance as compared with conventional materials used for wire grid QWPs in MEMS mirror devices. Crystalline silicon may also provide better performance than amorphous silicon.
When silicon (Si), as a dielectric material, is used to form the wire grid QWP, both the real and imaginary parts of the dielectric constant may be important. Particularly, the imaginary part of the dielectric constant of crystalline silicon is much lower than the imaginary part of the dielectric constant of amorphous silicon, which leads to better QWP performance. Crystalline silicon may also have a better polarization response as compared with amorphous silicon.
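A back-of-the-envelope sketch may help illustrate why a lower imaginary part matters: the absorption coefficient scales with the extinction coefficient k (the imaginary part of the complex refractive index) as α = 4πk/λ. The k values below are illustrative placeholders, not measured values from the disclosure.

```python
import math

# Sketch of why a lower imaginary index matters: absorption coefficient
# alpha = 4*pi*k / wavelength. The k values are illustrative placeholders.

def surviving_fraction(k: float, wavelength_nm: float, thickness_nm: float) -> float:
    """Fraction of light transmitted through the given material thickness."""
    alpha = 4.0 * math.pi * k / wavelength_nm
    return math.exp(-alpha * thickness_nm)

print(surviving_fraction(0.03, 550.0, 100.0))  # "crystalline-like" k: ~0.93
print(surviving_fraction(0.30, 550.0, 100.0))  # "amorphous-like" k:  ~0.50
```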
Through implementation of the features of the present disclosure, a MEMS mirror device may be fabricated to include a crystalline silicon QWP. In addition, the crystalline silicon QWP may be formed across an entire wafer, which contains many MEMS mirror devices, and this may simplify fabrication of the MEMS mirror device. That is, instead of forming a crystalline silicon or other material quarter wave plate on individual MEMS mirror devices, the crystalline silicon QWPs as disclosed herein, may be simultaneously created on every MEMS mirror device on a single wafer.
Reference is first made to the accompanying figures, which illustrate a method of fabricating a MEMS mirror device having a crystalline silicon wire grid QWP, according to an example.
As shown in the accompanying figure, at block 2502, a silicon dioxide layer 2600 may be formed on a crystalline silicon layer 2602 of a crystalline silicon wafer. In one example, the silicon dioxide layer 2600 may be deposited onto the crystalline silicon layer 2602.
As another example, the silicon dioxide layer 2600 may be formed on the crystalline silicon layer 2602 through an oxidation reaction, in which silicon atoms, at elevated temperatures, at the surface of the crystalline silicon substrate react with the oxygen or water vapor, forming silicon dioxide (SiO2). As the oxidation reaction continues, a layer of silicon dioxide starts to grow on the surface of the crystalline silicon substrate. After the desired thickness of the silicon dioxide layer 2600 is achieved, the crystalline silicon wafer may be subjected to an annealing step, in which any defects may be removed and the silicon dioxide layer 2600 may be stabilized. Additional manners in which the silicon dioxide layer 2600 may be formed on the crystalline silicon layer 2602 may include chemical vapor deposition (CVD), plasma-enhanced chemical vapor deposition (PECVD), and atomic layer deposition (ALD).
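For context, thermal oxide growth of the kind described above is commonly approximated by the classic Deal-Grove model, x² + Ax = B(t + τ), solved for the oxide thickness x after oxidation time t. The constants below are illustrative textbook values for dry oxidation, not parameters from the disclosure.

```python
import math

# A hedged sketch of the classic Deal-Grove model for thermal oxide growth
# (standard textbook kinetics, not parameters from the disclosure):
# x^2 + A*x = B*(t + tau), solved for oxide thickness x after time t.

def oxide_thickness_um(t_hr: float, A_um: float = 0.165,
                       B_um2_hr: float = 0.0117, tau_hr: float = 0.37) -> float:
    """Oxide thickness in microns; default constants are illustrative
    values for dry oxidation near 1000 C (orders of magnitude only)."""
    disc = A_um ** 2 + 4.0 * B_um2_hr * (t_hr + tau_hr)
    return (-A_um + math.sqrt(disc)) / 2.0

print(oxide_thickness_um(1.0))  # ~0.07 um after one hour (illustrative)
```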
At block 2504, hydrogen ions may be implanted into the crystalline silicon layer 2602 to create a hydrogen-rich zone layer 2604. The hydrogen ions may be implanted into the crystalline silicon layer 2602 through use of a technique called ion implantation, for instance, by introducing specific dopant elements or creating specific structures within the crystalline silicon layer 2602. In some examples, the hydrogen ions may be generated using an ion source, such as a plasma source or an ion accelerator, and a focused ion beam may be directed towards the crystalline silicon layer 2602. The high energy hydrogen ions penetrate the surface of the crystalline silicon layer 2602 and become embedded within the crystalline silicon layer 2602. The depth of the penetration may depend on the ion energy and an intended implantation profile. Thus, for instance, the hydrogen ions may be implanted at an intended depth in the crystalline silicon layer 2602 such that the hydrogen-rich zone layer 2604 may be formed at an intended distance from the silicon dioxide layer 2600.
At block 2506, the silicon dioxide layer 2600 may be trimmed, as shown in the accompanying figure.
The silicon dioxide layer 2600 may be trimmed through any suitable trimming process including, wet etching, dry etching, laser trimming, chemical-mechanical polishing, etc. The choice of the trimming method may depend on factors such as the desired precision and the area of trimming.
At block 2508, a MEMS mirror wafer 2610 may be bonded onto the silicon dioxide layer 2600, as shown in the accompanying figure. In some examples, the MEMS mirror wafer 2610 may include its own silicon dioxide layer 2612.
At block 2510, thermal energy may be applied onto the MEMS mirror wafer 2610, the silicon dioxide layers 2600, 2612, and the crystalline silicon layer 2602. For instance, the stack of components shown in the accompanying figure may be heated such that the crystalline silicon layer 2602 may cleave along the hydrogen-rich zone layer 2604, leaving a relatively thin layer of crystalline silicon bonded over the MEMS mirror wafer 2610.
At block 2512, the crystalline silicon layer 2602 may be etched into a wire grid pattern 2620, as shown in the accompanying figure.
According to examples, the crystalline silicon layer 2602 may be etched to have a predefined configuration and dimensions. The predefined configuration and dimensions may correspond to a configuration and dimensions at which performance of the wire grid QWP 2620 formed from the crystalline silicon layer 2602 may be maximized or optimized. The predefined configuration and dimension may also vary depending on various aspects of the system into which the MEMS mirror device is to be employed.
According to examples, the wire grid QWP 2620 may be formed across an entire or a substantial portion of the MEMS mirror wafer 2610 and may thus be formed to function as a wire grid QWP 2620 for the mirrors across the entire or a substantial portion of the MEMS mirror wafer 2610. For instance, the wire grid QWP 2620 may be formed across the entire MEMS mirror wafer 2610, which contains many MEMS mirror devices 2630, and this may simplify fabrication of the MEMS mirror device 2630. That is, instead of forming or depositing a crystalline silicon or other material wire grid QWP 2620 on individual MEMS mirror devices, the crystalline silicon wire grid QWPs described herein may simultaneously be created on every MEMS mirror device 2630 on a single wafer.
In some examples, following the formation of the wire grid pattern 2620 as shown in the accompanying figure, the MEMS mirror wafer 2610 may be separated (e.g., diced) into individual MEMS mirror devices 2630, which may be employed in a wearable device 2700.
The wearable device 2700 may include a display 2704 and a temple arm 2706. The inside of a left temple arm 2706 and a left display 2704 are shown. Although not shown, it should be understood that the wearable device 2700 may also include a right temple arm and a right display. In addition, the temple arms and the displays may be mounted to a frame to enable a user to wear the wearable device 2700 on the user's face such that the displays 2704 may be positioned in front of the user's eyes. Display components 2702 may similarly be provided on the right temple arm and the right display 2704 may be configured to receive and display images from the display components 2702 on the right temple arm.
The display components 2702 may include a light source 2708, such as a laser beam source, that may direct light onto the MEMS mirror device 2630 having the wire grid QWP 2620. The MEMS mirror device 2630 may direct the light onto the display 2704. Particularly, the display 2704 may include optical elements that enable the light from the MEMS mirror device 2630 to be displayed on the display 2704. The optical elements may include lenses, optical waveguides, mirrors, and/or the like. In some examples, the display 2704 may be transparent such that a user may see through the display 2704. In these examples, the display 2704 may display augmented reality objects on the display 2704.
In some examples, the wearable device 2800 may include a frame 2806 and displays 2808, 2810. In some examples, the displays 2808, 2810 may be configured to present media or other content to a user. In some examples, the display components may output light onto the displays 2808, 2810 to cause objects to be displayed on the displays 2808, 2810. In some examples, the displays 2808, 2810 may also include any number of optical components, such as waveguides, gratings, lenses, mirrors, etc.
In some examples, the wearable device 2800 may further include various sensors 2812a-2812e on or within a frame 2806. In some examples, the various sensors 2812a-2812e may include any number of depth sensors, motion sensors, position sensors, inertial sensors, and/or ambient light sensors, as shown. In some examples, the various sensors 2812a-2812e may include any number of image sensors configured to generate image data representing different fields of views in one or more different directions. In some examples, the various sensors 2812a-2812e may be used as input devices to control or influence the displayed content of the wearable device 2800, and/or to provide an interactive virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) experience to a user of the wearable device 2800. In some examples, the various sensors 2812a-2812e may also be used for stereoscopic imaging or other similar application.
In some examples, the wearable device 2800 may further include one or more illuminators 2814 to project light into a physical environment. The projected light may be associated with different frequency bands (e.g., visible light, infra-red light, ultra-violet light, etc.), and may serve various purposes. In some examples, the one or more illuminator(s) 2814 may be used as locators.
In some examples, the wearable device 2800 may also include a camera 2816 or other image capture unit. The camera 2816 may capture images of the physical environment in the field of view of the camera 2816. In some instances, the captured images may be processed, for example, by a virtual reality engine (not shown) to add virtual objects to the captured images or modify physical objects in the captured images, and the processed images may be displayed to the user by the displays 2808, 2810 for augmented reality (AR) and/or mixed reality (MR) applications.
As shown, the head-mounted display (HMD) device 2900 may include a body 2902 and a head strap 2904. The HMD device 2900 is also depicted as including a bottom side 2906, a front side 2908, and a left side 2910 of the body 2902 in the perspective view. In some examples, the head strap 2904 may have an adjustable or extendible length. In particular, in some examples, there may be sufficient space between the body 2902 and the head strap 2904 of the HMD device 2900 to allow a user to mount the HMD device 2900 onto the user's head. For example, the length of the head strap 2904 may be adjustable to accommodate a range of user head sizes. In some examples, the HMD device 2900 may include additional, fewer, and/or different components.
In some examples, the HMD device 2900 may present, to a user, media or other digital content including virtual and/or augmented views of a physical, real-world environment with computer-generated elements. Examples of the media or digital content presented by the HMD device 2900 may include images (e.g., two-dimensional (2D) or three-dimensional (3D) images), videos (e.g., 2D or 3D videos), audio, or any combination thereof. In some examples, the images and videos may be presented to each eye of the user by one or more display assemblies (not shown in the accompanying figure).
In some examples, the HMD device 2900 may include various sensors (not shown), such as depth sensors, motion sensors, position sensors, and/or eye tracking sensors. Some of these sensors may use any number of structured or unstructured light patterns for sensing purposes. In some examples, the HMD device 2900 may include an input/output interface (not shown) for communicating with a console. In some examples, the HMD device 2900 may include a virtual reality engine (not shown), that may execute applications within the HMD device 2900 and receive depth information, position information, acceleration information, velocity information, predicted future positions, or any combination thereof of the HMD device 2900 from the various sensors.
In the foregoing description, various inventive examples are described, including devices, systems, methods, and the like. For the purposes of explanation, specific details are set forth in order to provide a thorough understanding of examples of the disclosure. However, it will be apparent that various examples may be practiced without these specific details. For example, devices, systems, structures, assemblies, methods, and other components may be shown as components in block diagram form in order not to obscure the examples in unnecessary detail. In other instances, well-known devices, processes, systems, structures, and techniques may be shown without necessary detail in order to avoid obscuring the examples.
The figures and description are not intended to be restrictive. The terms and expressions that have been employed in this disclosure are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding any equivalents of the features shown and described or portions thereof. The word “example” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “example” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
Although the methods and systems as described herein may be directed mainly to digital content, such as videos or interactive media, it should be appreciated that the methods and systems as described herein may be used for other types of content or scenarios as well. Other applications or uses of the methods and systems as described herein may also include social networking, marketing, content-based recommendation engines, and/or other types of knowledge or data-driven systems.
Disclosed herein are apparatuses and methods for syncing data across multiple apparatuses in a secure and efficient manner. Particularly, a processor may determine that an input data has been received from or collected by an input device of an apparatus and may determine that the input data is to be synced with data on a remote apparatus. The processor may also encrypt the input data based on a determination that the input data is to be synced with data on the remote apparatus and may output the encrypted input data. The processor may output the encrypted input data directly to the remote apparatus via a local connection, such as a Bluetooth™ connection, a WiFi connection, or the like. In addition, or alternatively, the processor may output the encrypted input data to a server from which the remote apparatus may retrieve the encrypted input data.
In one regard, the processor may determine whether the input data is to be synced and may output the input data based on a determination that the input data is to be synced. As a result, the processor may not output all of the input data, but instead, may output certain types of input data. This may reduce the amount of data outputted, which may improve the battery life of the apparatus. In addition, the processor may encrypt the input data in a manner that may prevent the server from being able to decrypt the input data and thus, the input data may be passed through the server to the remote apparatus without the server identifying the contents of the input data.
As also shown in the accompanying figure, the apparatus 3000 may include interface components 3010 that may enable the apparatus 3000 to communicate with the remote apparatuses 3020a-3020n via a local connection 3022, such as a Bluetooth™ connection, a WiFi connection, or the like.
In addition, the interface components 3010 may enable the apparatus 3000 to communicate to a server 3030 via a network 3040, which may be the Internet and/or a combination of the Internet and other networks. The communication between the apparatus 3000 and the network 3040 is denoted by the dashed line 3024. In these examples, the interface components 3010 may enable the apparatus 3000 to communicate to access points, gateways, etc., through a Bluetooth™ connection, a WiFi connection, an Ethernet connection, etc. The interface components 3010 may additionally or alternatively include hardware and/or software to enable the apparatus 3000 to communicate to the network 3040 via a cellular connection.
The remote apparatuses 3020a-3020n may each be a smart device, such as smartglasses, a VR headset, an AR headset, a smartwatch, a smartphone, a tablet computer, and/or the like. In addition, each of the remote apparatuses 3020a-3020n may include components similar to those discussed with respect to the apparatus 3000. In this regard, the remote apparatuses 3020a-3020n may communicate with other remote apparatuses 3020a-3020n and/or the server 3030 in manners similar to those discussed above with respect to the apparatus 3000.
The input device 3002 may additionally or alternatively include any suitable device that may track a feature of the apparatus 3000 and/or a user of the apparatus 3000. For instance, the input device 3002 may include a sensor, a global positioning system device, a step counter, etc. By way of particular example, the input device 3002 may track a health-related condition of the user, such as the user's heartrate, blood pressure, movements, steps, etc.
The processor 3004 may perform various processing operations and may control operations of various components of the apparatus 3000. The processor 3004 may be a semiconductor-based microprocessor, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and/or other hardware device. The processor 3004 may be programmed with software and/or firmware that the processor 3004 may execute to control operations of the components of the apparatus 3000.
The memory 3006, which may also be termed a computer-readable medium 3006, may be an electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions 3100-3106. The memory 3006 may be, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, or an optical disc. For instance, the memory 3006 may have stored thereon instructions that the processor 3004 may fetch and execute. The data store 3008 may also be a computer-readable medium similar to the memory 3006. In some examples, the data store 3008 and the memory 3006 may be the same component.
The processor 3004 may execute the instructions 3100 to determine that an input data 3012 has been received from or collected by the input device 3002. The input data 3012 may be an instruction that a user has inputted to the apparatus 3000 through the input device 3002. For instance, the input data 3012 may include an instruction by a user to change a setting on the apparatus 3000, such as a language used by the apparatus 3000 to communicate with the user, a gender of a user assistant voice on the apparatus 3000, a background color scheme of images displayed on the apparatus 3000, a volume setting of the apparatus 3000, and/or the like. The input data 3012 may also or alternatively include inputs to another device, such as taking an action from one device to another device, synchronizing alerts between devices, identifying locations of devices on other devices, etc.
By way of particular example, the apparatus 3000 may be a smartphone and the remote apparatus 3020a may be a pair of smartglasses. In this example, a user of the apparatus 3000 may set a theme or interface setting of the apparatus 3000 to a certain type of theme or interface setting. For instance, the user may set the apparatus 3000 to use a female voice when communicating with the user via audio. In addition, by syncing such input data 3012 on the remote apparatus 3020a, the remote apparatus 3020a may provide a backup location for storage of the input data 3012.
In addition, or as other examples, the input data 3012 may be data pertaining to a tracked feature of the apparatus 3000 and/or a user of the apparatus 3000. The tracked feature may include a geographic location of the apparatus 3000 as determined by a GPS device, a number of steps that a user of the apparatus 3000 has taken over a certain time period, the user's heartrate, the user's blood pressure, the estimated amount of calories that the user has burned over a certain time period, etc. As an example, the remote apparatus 3020a may be used to identify the location of the apparatus 3000 using the input data 3012. As another example, the input data 3012 may be data pertaining to well-being content, such as meditation music/videos, meditation podcasts, success stories, etc., that the user may listen to or watch on their devices. As a further example, the input data 3012 may be data pertaining to a user's audio journey, e.g., a user may say something as a way to reflect on their day on the apparatus 3000.
The processor 3004 may execute the instructions 3102 to determine that the input data 3012 is to be synced with data on a remote apparatus 3020a. The processor 3004 may determine whether the input data 3012 includes a type of data that is to be synced with the data on the remote apparatus 3020a. For instance, a list of the types of data that are to be synced with the data on a remote apparatus 3020a or on multiple remote apparatuses 3020a-3020n may be stored in the data store 3008. The types of data that are to be synced with the data on the remote apparatus(es) 3020a-3020n may include any of the types of data discussed above and may be stored in a look-up table or other suitable format. The types of data that are not to be synced may include types of data that may be specific to the apparatus 3000 and that may thus not be applicable to the remote apparatus(es). In some examples, instead of the types of data that are to be synced being stored in the data store 3008, the types of data that are not to be synced may be stored in the data store 3008.
In addition, the processor 3004 may determine the type of the input data 3012 and may compare that type against the list of the types of data. Based on the comparison, the processor 3004 may determine that the input data 3012 is or is not to be synced with the data on the remote apparatus 3020a. That is, based on a determination that the input data 3012 includes a type of data that is to be synced with the data on the remote apparatus 3020a, the processor 3004 may determine that the input data 3012 is to be synced with the data on the remote apparatus 3020a. However, based on a determination that the input data 3012 does not include a type of data that is to be synced with the data on the remote apparatus 3020a, the processor 3004 may determine that the input data 3012 is not to be synced with the data on the remote apparatus 3020a.
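As a hypothetical sketch of this type-based check, the list of syncable types may be held in a simple look-up structure. The type names and the set-based look-up below are assumptions for illustration only.

```python
# A hypothetical sketch of the type-based sync check described above.

SYNCED_TYPES = {"voice_setting", "theme", "volume", "step_count", "location"}

def should_sync(input_type: str) -> bool:
    """Report whether this type of input data is configured to be synced."""
    return input_type in SYNCED_TYPES

assert should_sync("voice_setting")
assert not should_sync("device_specific_calibration")
```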
In addition, or alternatively, to determine whether the input data 3012 is to be synced with the data on the remote apparatus 3020a, the processor 3004 may determine whether the input data 3012 includes data that has been changed from a prior synchronization event. In other words, the processor 3004 may determine whether the input data 3012 matches a previously stored input data 3012 or differs from a previously stored input data 3012. By way of example in which the input data 3012 is a voice assistant setting, the processor 3004 may determine whether the input data 3012 is a change to the voice assistant setting.
The processor 3004 may determine that the input data 3012 is to be synced with the remote apparatus 3020a based on a determination that the input data 3012 includes data that has been changed from the prior synchronization event. In some examples, the processor 3004 may determine that the input data 3012 is to be synced based on both a determination that the input data 3012 has been changed and is a type of data that is to be synced. However, based on a determination that the input data 3012 is not data that has been changed and/or does not match a type of data that is to be synced, the processor 3004 may determine that the input data 3012 is not to be synced with the remote apparatus 3020a.
The processor 3004 may execute the instructions 3104 to encrypt the input data 3012 based on a determination that the input data 3012 is to be synced with the data on the remote apparatus 3020a. The processor 3004 may employ any suitable encryption technique to encrypt the input data 3012. For instance, the processor 3004 may employ an end-to-end encryption (E2EE) method, such as RSA, AES, or Elliptic Curve Cryptography (ECC). In addition, in employing E2EE, the processor 3004 may combine both symmetric and asymmetric encryption by using a secure key exchange protocol, such as Diffie-Hellman, to generate and share a symmetric key. This symmetric key may be used to encrypt and decrypt the actual input data 3012. In one regard, by using E2EE to encrypt the input data 3012, the input data 3012 may be communicated to the server 3030 without the server 3030 being able to decrypt and view the input data 3012. As a result, the input data 3012 may be kept private from the server 3030.
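One possible realization of the end-to-end encryption (E2EE) flow described above, sketched with the Python cryptography library, uses X25519 (a Diffie-Hellman variant) for the key exchange, HKDF to derive a shared symmetric key, and AES-GCM to encrypt the input data. This is an illustrative assumption, not the disclosed implementation; the payload and info label are placeholders.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(own_private, peer_public) -> bytes:
    """Derive a shared 256-bit AES key from an X25519 exchange via HKDF."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"device-sync").derive(shared)

# Each device generates a key pair; only public keys are exchanged.
local, remote = X25519PrivateKey.generate(), X25519PrivateKey.generate()
key = derive_key(local, remote.public_key())

# Encrypt the input data; a relaying server never holds the key.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b'{"voice": "female"}', None)

# The remote apparatus derives the same key and decrypts.
remote_key = derive_key(remote, local.public_key())
assert AESGCM(remote_key).decrypt(nonce, ciphertext, None) == b'{"voice": "female"}'
```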
The processor 3004 may execute the instructions 3106 to output the encrypted input data 3012. According to examples, the processor 3004 may automatically output the encrypted input data 3012 to the network 3040 via the connection 3024 when the connection 3024 between the apparatus 3000 and the network 3040 is established. In addition, the processor 3004 may address the encrypted input data 3012, which may be IP packets, to be delivered to the server 3030. In these examples, the remote apparatus(es) 3020a may obtain the encrypted input data 3012 from the server 3030 via the network 3040. As discussed herein, the input data 3012 may be encrypted using E2EE in which the remote apparatus(es) 3020a may be the opposite end of the encryption pair and thus, the server 3030 may not be able to decrypt and view the input data 3012.
In other examples, the processor 3004 may determine whether the apparatus 3000 has connected to the remote apparatus 3020a via a local connection, e.g., a Bluetooth™ or WiFi connection, and may communicate the encrypted input data 3012 to the remote apparatus 3020a based on a determination that the apparatus 3000 is connected to the remote apparatus 3020a. In these examples, the processor 3004 may wait to output the encrypted input data 3012 to the remote apparatus 3020a until the connection has been established. In other words, the processor 3004 may not broadcast the encrypted input data 3012 until and unless the connection has been established, which may conserve a battery life of the apparatus 3000.
In any of the examples discussed herein, the remote apparatus(es) 3020a-3020n that obtains the encrypted input data 3012 may update the data stored on the remote apparatus(es) 3020a-3020n with the input data 3012. Thus, for instance, the remote apparatus(es) 3020a-3020n may have settings and/or themes in common with the apparatus 3000. By way of particular example, the remote apparatus(es) 3020a-3020n may have the same voice assistant settings as the apparatus 3000. As a result, the user experiences may be consistent across the apparatus 3000 and the remote apparatuses 3020a-3020n without requiring that the user manually change the settings across all of the apparatuses 3000, 3020a-3020n. Additionally, other types of data, such as a user's health condition, may automatically be shared across the apparatuses 3000, 3020a-3020n.
The processor 3004 may execute the instructions 3200 to receive remote data 3014 from a remote apparatus 3020a. The remote data 3014 may be similar to the input data 3012, but may have been collected or received by the remote apparatus 3020a and communicated to the apparatus 3000. According to examples, the apparatus 3000 may directly receive the remote data 3014 from a remote apparatus 3020a via the local connection 3022. In other examples, the apparatus 3000 may receive the remote data 3014 from the server 3030 via the connection 3024 with the network 3040. In either of these examples, the remote apparatus 3020a may have received or collected the remote data 3014 on the remote apparatus 3020a.
In some examples, the remote apparatus 3020a may have encrypted the remote data 3014 prior to communicating the remote data 3014 to the apparatus 3000. For instance, the remote apparatus 3020a may have encrypted the remote data 3014 using an E2EE scheme. In these examples, the processor 3004 may have the decryption key and may thus decrypt the encrypted remote data 3014 using the decryption key. The processor 3004 may also analyze the remote data 3014.
Particularly, the processor 3004 may execute the instructions 3202 to determine whether the remote data 3014 includes a type of data that is to be synced with data stored on the apparatus 3000, e.g., in the data store 3008. For instance, the processor 3004 may determine a type of the remote data 3014 and may compare that type with the types of data to be synced that are stored on the data store 3008. The processor 3004 may determine that the remote data 3014 is to be synced based on a determination that the type of the remote data 3014 matches a type of data that is to be synced.
As another example, the processor 3004 may determine whether the remote data 3014 includes data that has been changed from a prior synchronization event. In other words, the processor 3004 may determine whether the remote data 3014 is an updated version of the data in a prior synchronization event. The processor 3004 may determine that the remote data 3014 is to be synced with the data stored on the apparatus 3000 based on a determination that the remote data 3014 includes data that has been changed. In some examples, the processor 3004 may compare timestamps associated with the remote data 3014 and the stored data and may determine that the remote data 3014 is to be synced if the timestamp of the remote data 3014 is later than the timestamp of the stored data. In addition, the processor 3004 may determine that the remote data 3014 is not to be synced with the stored data based on a determination that the remote data 3014 includes data that has not been changed, e.g., is the same as the stored data.
The processor 3004 may execute the instructions 3204 to sync the remote data 3014 with the data stored on the apparatus 3000 based on a determination that the remote data 3014 is to be synced. The processor 3004 may sync the remote data 3014 with the stored data by, for instance, replacing the stored data with the remote data 3014 or adding the remote data 3014 to the stored data. According to examples, the remote apparatuses 3020a-3020n may execute similar operations to sync the input data 3012 when the remote apparatuses 3020a-3020n receive the input data 3012 from the apparatus 3000.
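For illustration, the sketch below combines the timestamp comparison and the replace-or-add sync described above; the Record shape (payload plus timestamp) is a hypothetical simplification of the remote data 3014 and of the data held in the data store 3008.

```python
# A minimal sketch of syncing remote data into the local store by timestamp.
from dataclasses import dataclass

@dataclass
class Record:
    payload: bytes
    timestamp: float  # e.g., seconds since the epoch

def sync_remote(store: dict[str, Record], key: str, remote: Record) -> bool:
    """Replace or add stored data when the remote copy is newer and changed;
    return whether a sync occurred."""
    stored = store.get(key)
    if stored is None:
        store[key] = remote  # add: no local copy exists yet
        return True
    if remote.timestamp > stored.timestamp and remote.payload != stored.payload:
        store[key] = remote  # replace: remote copy is later and different
        return True
    return False  # unchanged or stale; not synced
```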
As shown, the features of the apparatus 3000 may integrate with schema management modules to use the storage and sync schemas 402 to create a local storage layer (load store 3304) for the input data 3012 that is to be synced with a remote apparatus 3020a or to sync remote data 3014 that is to be synced with data on the apparatus 3000. The processor 3004 may use a mixture of code generation and custom code to create an object-relational mapping (ORM) layer (in the SDK 3306) that clients may use to securely access their data and to ensure forward and backward compatibility. The processor 3004 may also integrate with a privacy module 3308, a transport module 3310, and a sync module 3312 to enable end-to-end encrypted data syncs across the apparatuses 3000, 3020a-3020n. The processor 3004 may further integrate with a tethered device (remote apparatus 3020a) and an untethered device (server 3030) through tethered and remote transports, respectively, and may ensure that the input data 3012 is eventually synced. The processor 3004 may still further integrate with the server 3030, which may manage data, sync metadata, and E2EE keys. The processor 3004 may still further integrate with a store and sync logger module 3316 to ensure consistent logging across all storage and sync use cases.
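As a hypothetical illustration of what a generated accessor in such an ORM layer might look like, the Python sketch below shows one way defaulted, versioned fields may keep older and newer schema versions mutually readable; the class name and fields are invented for illustration and are not taken from the disclosure.

```python
# A hypothetical, generated ORM-style accessor; names are illustrative only.
from dataclasses import dataclass

@dataclass
class VoiceAssistantSettings:
    schema_version: int = 2
    wake_word: str = "hello"
    language: str = "en-US"
    speech_rate: float = 1.0  # added in version 2; default lets v1 rows load

    @classmethod
    def from_row(cls, row: dict) -> "VoiceAssistantSettings":
        """Drop unknown columns (forward compatibility) and default missing
        ones (backward compatibility) when loading from the local store."""
        known = set(cls.__dataclass_fields__)
        return cls(**{k: v for k, v in row.items() if k in known})
```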
Schemas may enable consistency for cross-platform data access. As a result, schema definitions of each data type may be available on the apparatuses 3000, 3020a-3020n. The storage ORM schemas may be used to define the data models, data access queries, etc., that a use case may need. The configuration schemas may capture the storage and sync policies for those data types, such as those illustrated in the hypothetical sketch below.
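The policy names and values below are purely hypothetical and are offered only to suggest the kind of storage and sync policies a configuration schema may capture.

```python
# A purely hypothetical configuration schema; none of these policy names or
# values are taken from the disclosure.
CONFIG_SCHEMAS = {
    "voice_assistant_settings": {
        "sync": True,           # a type of data that is to be synced
        "encryption": "e2ee",   # encrypt before leaving the apparatus
        "retention_days": 365,  # how long the local store keeps the data
    },
    "health_metrics": {
        "sync": True,
        "encryption": "e2ee",
        "retention_days": 30,
    },
    "diagnostic_logs": {
        "sync": False,          # kept local; never leaves the apparatus
        "encryption": "at_rest",
        "retention_days": 7,
    },
}
```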
According to examples, the privacy module 3308 may enable the privacy of data types that are getting synced via the server 3030 to the remote apparatuses 3020a-3020n. The privacy module 3308 may support a number of privacy modes for doing so.
Various manners in which the processor 3004 of the apparatus 3000 may operate are discussed in greater detail with respect to the method 3400 depicted in FIG. 34.
At block 3402, the processor 3004 may determine that input data 3012 has been received from or collected by the input device 3002. The input data 3012 may be an instruction that a user has inputted to the apparatus 3000 through the input device 3002 and/or data pertaining to a tracked feature of the apparatus 3000 and/or a user of the apparatus 3000.
At block 3404, the processor 3004 may determine that the input data 3012 is to be synced with data on a remote apparatus 3020a. As discussed herein, the processor 3004 may determine that the input data 3012 is to be synced based on the type of the input data 3012 matching a certain type of data and/or based on the input data 3012 being different from data at a previous synchronization event.
At block 3406, the processor 3004 may encrypt the input data 3012 based on a determination that the input data 3012 is to be synced. The processor 3004 may encrypt the input data 3012 in any suitable manner, such as by using an end-to-end encryption (E2EE) scheme as discussed herein.
At block 3408, the processor 3004 may output the encrypted input data 3012. The processor 3004 may output the encrypted input data 3012 directly to the remote apparatus 3020a and/or to a server 3030 via a network 3040.
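Tying blocks 3402 through 3408 together, the compact sketch below reduces the determination, encryption, and output steps to a single flow; the 32-byte key is assumed to come from a Diffie-Hellman exchange such as the one sketched earlier, and `send` is a hypothetical transport callable standing in for either the direct path or the server path.

```python
# A compact, self-contained sketch of the flow of method 3400.
import os
from typing import Callable, Optional

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def method_3400(payload: bytes, prior: Optional[bytes], key: bytes,
                send: Callable[[bytes], None]) -> bool:
    """Return True if the input data was synced."""
    # Blocks 3402/3404: received input data is synced only if it has changed.
    if payload == prior:
        return False
    # Block 3406: encrypt with the shared E2EE key.
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, payload, None)
    # Block 3408: output directly to the remote apparatus or via the server.
    send(nonce + ciphertext)
    return True
```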
Some or all of the operations set forth in the method 3400 may be included as a utility, program, or subprogram in any desired computer accessible medium. In addition, the method 3400 may be embodied by a computer program, which may exist in a variety of forms, both active and inactive. For example, the computer program may exist as machine-readable instructions, including source code, object code, executable code, or other formats. Any of the above may be embodied on a non-transitory computer readable storage medium.
Examples of non-transitory computer readable storage media include computer system RAM, ROM, EPROM, EEPROM, and magnetic or optical disks or tapes. It is therefore to be understood that any electronic device capable of executing the above-described functions may perform those functions enumerated above.
In the foregoing description, various inventive examples are described, including devices, systems, methods, and the like. For the purposes of explanation, specific details are set forth in order to provide a thorough understanding of examples of the disclosure. However, it will be apparent that various examples may be practiced without these specific details. For example, devices, systems, structures, assemblies, methods, and other components may be shown as components in block diagram form in order not to obscure the examples in unnecessary detail. In other instances, well-known devices, processes, systems, structures, and techniques may be shown without necessary detail in order to avoid obscuring the examples.
The figures and description are not intended to be restrictive. The terms and expressions that have been employed in this disclosure are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding any equivalents of the features shown and described or portions thereof. The word “example” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “example” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
What has been described and illustrated herein are examples of the disclosure along with some variations. The terms, descriptions, and figures used herein are set forth by way of illustration only and are not meant as limitations. Many variations are possible within the scope of the disclosure, which is intended to be defined by the following claims—and their equivalents—in which all terms are meant in their broadest reasonable sense unless otherwise indicated.
Although the methods and systems as described herein may be directed mainly to digital content, such as videos or interactive media, it should be appreciated that the methods and systems as described herein may be used for other types of content or scenarios as well. Other applications or uses of the methods and systems as described herein may also include social networking, marketing, content-based recommendation engines, and/or other types of knowledge or data-driven systems.
This patent application claims priority to U.S. Provisional Patent Application No. 63/501,051, entitled “Sync Data Across Apparatuses,” filed on May 9, 2023, and U.S. Provisional Patent Application No. 63/499,081, entitled “Determining a Presence of an Ultraviolet (UV) Anti-reflective (AR) Coating on a Substrate,” filed on Apr. 28, 2023.
| Number | Date | Country |
| --- | --- | --- |
| 63/501,051 | May 2023 | US |
| 63/499,081 | Apr 2023 | US |