The present disclosure generally relates to optical systems and methods and, more specifically, to a system and a method for separating volumetric scattering and surface scattering of an optical component.
Typical optical components or elements scatter light. Light scattering can include volumetric scattering and surface scattering. Volumetric scattering is often intrinsic to the material, while surface scattering often depends on the smoothness or roughness of an interface between the optical component and the environment (e.g., air). Quantifying the scattering of optical components is often key for component- or system-level metrology to meet certain optical application requirements. To provide guidance for improving manufacturing processes, it is desirable to identify the scattering sources (e.g., whether the scattering arises from a volume and/or a surface of the optical component), and to provide information on the relative contributions of volumetric scattering and surface scattering to the overall scattering of the optical component. For example, if the surface scattering is identified as the dominant scattering in an overall scattering of the optical component, then in order to improve the quality of the optical component, one could polish (or provide other types of treatment for) the surfaces of the optical component. If the volumetric scattering is identified as the dominant scattering in the overall scattering of the optical component, then to improve the quality of the optical component, one could adjust the material formulation or composition to reduce the intrinsic material scattering.
Consistent with an aspect of the present disclosure, a system is provided. The system includes a light source configured to emit a probing beam to illuminate an optical element. The system also includes a rotating structure to which the optical element is mounted. The system also includes a controller configured to control the rotating structure to rotate to change a tilt angle of the optical element with respect to a propagation direction of the probing beam within a predetermined tilting range. The system also includes an image sensor configured to receive one or more scattered beams output from the optical element illuminated by the probing beam, and generate a plurality of sets of speckle pattern image data when the optical element is arranged at a plurality of tilt angles within the predetermined tilting range. The controller is configured to process the plurality of sets of speckle pattern image data to determine respective weights of volumetric scattering and surface scattering in an overall scattering of the optical element.
Consistent with another aspect of the present disclosure, a method is provided. The method includes illuminating, by a light source, an optical element mounted to a rotating structure with a probing beam. The method also includes controlling, by a controller, rotation of the rotating structure to change a tilt angle of the optical element with respect to a propagation direction of the probing beam within a predetermined tilting range. The method also includes generating, by an image sensor, a plurality of sets of speckle pattern image data when the optical element is arranged at a plurality of tilt angles within the predetermined tilting range. The method also includes processing, by the controller, the plurality of sets of speckle pattern image data to determine respective weights of volumetric scattering and surface scattering in an overall scattering of the optical element.
Other aspects of the present disclosure can be understood by those skilled in the art in light of the description, the claims, and the drawings of the present disclosure. The foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the claims.
The following drawings are provided for illustrative purposes according to various disclosed embodiments and are not intended to limit the scope of the present disclosure. In the drawings:
Embodiments consistent with the present disclosure will be described with reference to the accompanying drawings, which are merely examples for illustrative purposes and are not intended to limit the scope of the present disclosure. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or similar parts, and a detailed description thereof may be omitted.
Further, in the present disclosure, the disclosed embodiments and the features of the disclosed embodiments may be combined. The described embodiments are some but not all of the embodiments of the present disclosure. Based on the disclosed embodiments, persons of ordinary skill in the art may derive other embodiments consistent with the present disclosure. For example, modifications, adaptations, substitutions, additions, or other variations may be made based on the disclosed embodiments. Such variations of the disclosed embodiments are still within the scope of the present disclosure. Accordingly, the present disclosure is not limited to the disclosed embodiments. Instead, the scope of the present disclosure is defined by the appended claims.
As used herein, the terms “couple,” “coupled,” “coupling,” or the like may encompass an optical coupling, a mechanical coupling, an electrical coupling, an electromagnetic coupling, or any combination thereof. An “optical coupling” between two optical elements refers to a configuration in which the two optical elements are arranged in an optical series, and a light output from one optical element may be directly or indirectly received by the other optical element. An optical series refers to optical positioning of a plurality of optical elements in a light path, such that a light output from one optical element may be transmitted, reflected, diffracted, converted, modified, or otherwise processed or manipulated by one or more of other optical elements. In some embodiments, the sequence in which the plurality of optical elements are arranged may or may not affect an overall output of the plurality of optical elements. A coupling may be a direct coupling or an indirect coupling (e.g., coupling through an intermediate element).
The phrase “at least one of A or B” may encompass all combinations of A and B, such as A only, B only, or A and B. Likewise, the phrase “at least one of A, B, or C” may encompass all combinations of A, B, and C, such as A only, B only, C only, A and B, A and C, B and C, or A and B and C. The phrase “A and/or B” may be interpreted in a manner similar to that of the phrase “at least one of A or B.” For example, the phrase “A and/or B” may encompass all combinations of A and B, such as A only, B only, or A and B. Likewise, the phrase “A, B, and/or C” has a meaning similar to that of the phrase “at least one of A, B, or C.” For example, the phrase “A, B, and/or C” may encompass all combinations of A, B, and C, such as A only, B only, C only, A and B, A and C, B and C, or A and B and C.
When a first element is described as “attached,” “provided,” “formed,” “affixed,” “mounted,” “secured,” “connected,” “bonded,” “recorded,” or “disposed,” to, on, at, or at least partially in a second element, the first element may be “attached,” “provided,” “formed,” “affixed,” “mounted,” “secured,” “connected,” “bonded,” “recorded,” or “disposed,” to, on, at, or at least partially in the second element using any suitable mechanical or non-mechanical manner, such as depositing, coating, etching, bonding, gluing, screwing, press-fitting, snap-fitting, clamping, etc. In addition, the first element may be in direct contact with the second element, or there may be an intermediate element between the first element and the second element. The first element may be disposed at any suitable side of the second element, such as left, right, front, back, top, or bottom.
When the first element is shown or described as being disposed or arranged “on” the second element, the term “on” is merely used to indicate an example relative orientation between the first element and the second element. The description may be based on a reference coordinate system shown in a figure, or may be based on a current view or example configuration shown in a figure. For example, when a view shown in a figure is described, the first element may be described as being disposed “on” the second element. It is understood that the term “on” may not necessarily imply that the first element is over the second element in the vertical, gravitational direction. For example, when the assembly of the first element and the second element is turned 180 degrees, the first element may be “under” the second element (or the second element may be “on” the first element). Thus, it is understood that when a figure shows that the first element is “on” the second element, the configuration is merely an illustrative example. The first element may be disposed or arranged at any suitable orientation relative to the second element (e.g., over or above the second element, below or under the second element, to the left of the second element, to the right of the second element, behind the second element, in front of the second element, etc.).
When the first element is described as being disposed “on” the second element, the first element may be directly or indirectly disposed on the second element. The first element being directly disposed on the second element indicates that no additional element is disposed between the first element and the second element. The first element being indirectly disposed on the second element indicates that one or more additional elements are disposed between the first element and the second element.
The term “processor” used herein may encompass any suitable processor, such as a central processing unit (“CPU”), a graphics processing unit (“GPU”), an application-specific integrated circuit (“ASIC”), a programmable logic device (“PLD”), or any combination thereof. Other processors not listed above may also be used. A processor may be implemented as software, hardware, firmware, or any combination thereof.
The term “controller” may encompass any suitable electrical circuit, software, or processor configured to generate a control signal for controlling a device, a circuit, an optical element, etc. A “controller” may be implemented as software, hardware, firmware, or any combination thereof. For example, a controller may include a processor, or may be included as a part of a processor.
The term “non-transitory computer-readable medium” may encompass any suitable medium for storing, transferring, communicating, broadcasting, or transmitting data, signal, or information. For example, the non-transitory computer-readable medium may include a memory, a hard disk, a magnetic disk, an optical disk, a tape, etc. The memory may include a read-only memory (“ROM”), a random-access memory (“RAM”), a flash memory, etc.
The term “film,” “layer,” “coating,” or “plate” may include rigid or flexible, self-supporting or free-standing film, layer, coating, or plate, which may be disposed on a supporting substrate or between substrates. The terms “film,” “layer,” “coating,” and “plate” may be interchangeable. The term “film plane” refers to a plane in the film, layer, coating, or plate that is perpendicular to the thickness direction. The film plane may be a plane in the volume of the film, layer, coating, or plate, or may be a surface plane of the film, layer, coating, or plate. The term “in-plane” as in, e.g., “in-plane orientation,” “in-plane direction,” “in-plane pitch,” etc., means that the orientation, direction, or pitch is within the film plane. The term “out-of-plane” as in, e.g., “out-of-plane direction,” “out-of-plane orientation,” or “out-of-plane pitch” etc., means that the orientation, direction, or pitch is not within a film plane (i.e., non-parallel with a film plane). For example, the direction, orientation, or pitch may be along a line that is perpendicular to a film plane, or that forms an acute or obtuse angle with respect to the film plane. For example, an “in-plane” direction or orientation may refer to a direction or orientation within a surface plane, an “out-of-plane” direction or orientation may refer to a thickness direction or orientation non-parallel with (e.g., perpendicular to) the surface plane.
The term “orthogonal” as used in “orthogonal polarizations” or the term “orthogonally” as used in “orthogonally polarized” means that an inner product of two vectors representing the two polarizations is substantially zero. For example, two lights or beams with orthogonal polarizations (or two orthogonally polarized lights or beams) may be two linearly polarized lights (or beams) with two orthogonal polarization directions (e.g., an x-axis direction and a y-axis direction in a Cartesian coordinate system) or two circularly polarized lights with opposite handednesses (e.g., a left-handed circularly polarized light and a right-handed circularly polarized light).
The wavelength ranges, spectra, or bands mentioned in the present disclosure are for illustrative purposes. The disclosed optical device, system, element, assembly, and method may be applied to a visible wavelength band, as well as other wavelength bands, such as an ultraviolet (“UV”) wavelength band, an infrared (“IR”) wavelength band, or a combination thereof. The term “substantially” or “primarily” used to modify an optical response action, such as transmit, reflect, diffract, block or the like that describes processing of a light means that a major portion, including all, of a light is transmitted, reflected, diffracted, or blocked, etc. The major portion may be a predetermined percentage (greater than 50%) of the entire light, such as 100%, 98%, 90%, 85%, 80%, etc., which may be determined based on specific application needs.
The present disclosure provides a system and a method for measuring an overall scattering of an optical component, and determining respective weights of volumetric scattering and surface scattering in the measured overall scattering based on a phenomenon known as the optical memory effect.
The correlation function shown to the left of image (a) corresponds to the cross-correlation coefficients between the reference speckle pattern (shown in the image (a)) and itself (which can be treated as a shifted speckle pattern with a shift being zero). That is, the correlation function shown to the left of image (a) is the autocorrelation function. Thus, the maximum correlation coefficient is 1. The correlation function shown to the left of each of images (b), (c), and (d) corresponds to the cross-correlation coefficients of the reference speckle pattern (shown in the image (a)) with the corresponding shifted speckle pattern shown in image (b), (c), or (d). The correlation coefficient is plotted as a function of the pattern shift in pixels, and the maximum (or the peak value) of the correlation function represents the maximum degree of correlation (e.g., overlap) between the reference speckle pattern (shown in the image (a)) and the corresponding shifted speckle pattern.
In the images (b) and (c), as the input laser beam is rotated around the sample by a small angle, the speckle patterns slowly change as the incidence angle of the input laser beam changes. The correlation function shown to the left of the image (b) or the image (c) tracks this motion, with the maximum correlation coefficient of the correlation function (or the peak value) decreasing as the correlation becomes weaker. The shifted speckle pattern shown in image (d) is unrelated to the reference speckle pattern shown in image (a), and the correlation function shown to the left of the image (d) shows small statistical fluctuations around zero. The correlation function tracks the speckle pattern which, in turn, “remembers” and tracks the propagation direction or the incidence angle of the input laser beam onto the scattering medium 105.
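The correlation tracking described above can be sketched numerically. The helper below computes a normalized cross-correlation coefficient at zero shift between a reference speckle image and a second image, reproducing the three regimes of images (a), (b)/(c), and (d). The synthetic image data and the `correlation_coefficient` helper are illustrative assumptions, not part of the disclosed system.

```python
import numpy as np

def correlation_coefficient(ref, other):
    """Normalized cross-correlation coefficient at zero shift between
    two equal-size speckle images (hypothetical helper; the disclosure
    does not prescribe a particular implementation)."""
    a = ref - ref.mean()
    b = other - other.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Synthetic stand-in for recorded speckle pattern image data.
rng = np.random.default_rng(0)
reference = rng.random((128, 128))

# Image (a) case: autocorrelation at zero shift is exactly 1.
print(correlation_coefficient(reference, reference))

# Images (b)/(c) case: a slightly decorrelated pattern gives a peak
# coefficient between 0 and 1.
partly = 0.8 * reference + 0.2 * rng.random((128, 128))
print(correlation_coefficient(reference, partly))

# Image (d) case: a statistically independent pattern fluctuates
# around zero.
unrelated = rng.random((128, 128))
print(correlation_coefficient(reference, unrelated))
```

In a real measurement, the coefficient would be evaluated over a range of pattern shifts and the peak value tracked as the incidence angle changes.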
Referring to
When the distance s of the scattering layer from the screen is fixed, the memory effect range may be directly proportional to the value of (λ/L). When the incidence wavelength λ is fixed, as the thickness L of the scattering layer decreases, the memory effect range may increase. Thus, a thinner scattering layer (e.g., a surface scattering layer) may have a much greater memory effect range than a thicker scattering layer (e.g., a volumetric scattering layer). The thickness of a surface scattering layer is often at the same scale as the incidence wavelength λ (e.g., hundreds of nanometers), while the thickness of a volumetric scattering layer is often much greater than the incidence wavelength λ (e.g., several micrometers to several tens of micrometers). Thus, the memory effect range of a surface scattering layer having a thickness of 500 nm may be about 100 times the memory effect range of a volumetric scattering layer having a thickness of 50 μm. For example, the memory effect range of a volumetric scattering layer having a thickness of 50 μm may be about 0.01 rad (or about 0.57 degrees), while the memory effect range of a surface scattering layer having a thickness of 500 nm may be about 1 rad (or about 57 degrees).
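The scaling above can be checked with a short calculation. The sketch below takes the memory effect range as approximately λ/L and assumes a visible-band probe wavelength of 500 nm; both the unit proportionality constant and the wavelength are illustrative assumptions.

```python
import math

# Memory effect range taken as delta_theta ≈ lam / L (rough estimate;
# the proportionality constant is assumed to be 1 for illustration).
lam = 500e-9        # assumed visible-band probe wavelength, m

L_surface = 500e-9  # surface scattering layer thickness, m
L_volume = 50e-6    # volumetric scattering layer thickness, m

range_surface = lam / L_surface   # ~1 rad
range_volume = lam / L_volume     # ~0.01 rad

print(f"surface: {range_surface:.3f} rad "
      f"({math.degrees(range_surface):.1f} deg)")
print(f"volume:  {range_volume:.3f} rad "
      f"({math.degrees(range_volume):.2f} deg)")
print(f"ratio:   {range_surface / range_volume:.0f}x")
# surface: 1.000 rad (57.3 deg); volume: 0.010 rad (0.57 deg); ratio: 100x
```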
When the scattering medium 105 includes multiple scattering sources with different thicknesses (e.g., a surface scattering source and a volumetric scattering source), there may be multiple different memory effect ranges. Through determining the multiple memory effect ranges, the scattering contributions of the surface scattering and the volumetric scattering may be separately identified. For example, the scattering contributions from the volumetric scattering within a relatively thick scattering layer and the surface scattering within a relatively thin scattering layer may be separately identified.
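One way to sketch this separation: if the measured correlation peak versus tilt angle is modeled as a weighted sum of two decay curves with well-separated memory effect ranges, the weights of the two scattering sources follow from linear least squares. The Gaussian decay shape and all numeric values below are illustrative assumptions, not a prescribed implementation of the disclosed processing.

```python
import numpy as np

# Assumed two-component model: correlation peak vs. tilt angle is a
# weighted sum of a narrow decay (thick volumetric layer) and a broad
# decay (thin surface layer).
theta = np.linspace(0.0, 0.5, 50)    # tilt angles, rad
theta_v, theta_s = 0.01, 1.0         # assumed memory effect ranges, rad

decay_v = np.exp(-(theta / theta_v) ** 2)   # volumetric component
decay_s = np.exp(-(theta / theta_s) ** 2)   # surface component

# Simulated "measured" correlation with true weights 0.3 / 0.7.
measured = 0.3 * decay_v + 0.7 * decay_s

# With the two ranges known, the weights follow from linear least squares.
A = np.column_stack([decay_v, decay_s])
weights, *_ = np.linalg.lstsq(A, measured, rcond=None)
w_volume, w_surface = weights
print(f"volumetric weight: {w_volume:.2f}")   # ~0.30
print(f"surface weight:    {w_surface:.2f}")  # ~0.70
```

Because the two decays differ by roughly two orders of magnitude in angular width, the fit is well conditioned even with measurement noise.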
Based on the optical memory effect, the present disclosure provides a system and a method for measuring an overall scattering of an optical component, and determining the relative contributions of the volumetric scattering and the surface scattering in the measured overall scattering of the optical component. The determined relative contributions of the volumetric scattering and the surface scattering in the measured overall scattering may provide guidance for improving manufacturing processes of the optical component. For example, if the surface scattering is identified as the dominant scattering in the measured overall scattering of the optical component, then improving the surface smoothness may reduce the overall scattering. Surface smoothness enhancement may include polishing the surface or applying other types of surface treatment to the optical component. If the volumetric scattering is identified as the dominant scattering in the measured overall scattering of the optical component, then adjusting the material formulation or composition to reduce the intrinsic material scattering may reduce the overall scattering.
The system and the method disclosed herein may be applied to an optical component (or optical element) that may include a single layer or a plurality of layers of films or plates stacked together (referred to as a layered structure). The optical component with the layered structure may include at least two layers of different materials and/or structures. For example, the optical component with the layered structure may include a substrate, one or more optical films disposed on the substrate, and a protecting film disposed on the optical films. In some embodiments, the optical component with the layered structure may include other elements, such as an alignment structure (or layer) disposed between the substrate and the optical film, a cover glass disposed on the protecting film, etc. The optical film may be configured with a predetermined optical function. For example, the optical film may function as a transmissive or reflective optical element, such as a grating, a lens or lens array, a prism, a polarizer, a compensation plate, or a phase retarder, etc.
In some embodiments, the optical element may include a birefringent medium. The optical element may also be referred to as a birefringent medium layer. In some embodiments, an optic axis of the birefringent medium layer may be configured with a spatially varying orientation in at least one in-plane direction of the optical film. In some embodiments, the optical element may include a photo-polymer layer. In some embodiments, the photo-polymer layer may be a liquid crystal polymer (“LCP”) layer that includes polymerized (or cross-linked) liquid crystals (“LCs”), polymer-stabilized LCs, a photo-sensitive LC polymer, or any combination thereof. The LCs may include nematic LCs, twist-bend LCs, chiral nematic LCs, smectic LCs, or any combination thereof. In some embodiments, the photo-polymer layer may include a birefringent photo-refractive holographic material other than LCs, such as an amorphous polymer. In some embodiments, the optical element may function as a Pancharatnam-Berry phase (“PBP”) element, a polarization volume hologram (“PVH”) element, or a volumetric Bragg grating element. The optical element may be implemented in systems or devices for beam steering, display, imaging, sensing, communication, biomedical applications, etc. In some embodiments, the optical element may include a photosensitive material that provides a refractive index modulation based on an exposure light pattern. In some embodiments, the photo-polymer layer may be configured with a refractive index modulation in the photo-polymer layer. Hence, the photo-polymer layer may be referred to as a photosensitive index modulation polymer.
For example, the optical element may function as a beam steering device, which may be implemented in various systems for augmented reality (“AR”), virtual reality (“VR”), and/or mixed reality (“MR”) applications, e.g., near-eye displays (“NEDs”), head-up displays (“HUDs”), head-mounted displays (“HMDs”), smart phones, laptops, televisions, vehicles, etc. For example, the beam steering devices may be implemented in displays and optical modules to enable pupil steered AR, VR, and/or MR display systems, such as holographic near-eye displays, retinal projection eyewear, and wedged waveguide displays. Pupil steered AR, VR, and/or MR display systems have features such as compactness, large fields of view (“FOVs”), high system efficiencies, and small eye-boxes. The beam steering device may be implemented in the pupil steered AR, VR, and/or MR display systems to enlarge the eye-box spatially and/or temporally. In some embodiments, the beam steering device may be implemented in AR, VR, and/or MR sensing modules to detect objects in a wide angular range to enable other functions. In some embodiments, the beam steering device may be implemented in AR, VR, and/or MR sensing modules to extend the FOV (or detecting range) of the sensors in space constrained optical systems, increase detecting resolution or accuracy of the sensors, and/or reduce the signal processing time. In some embodiments, the beam steering device may be used in Light Detection and Ranging (“Lidar”) systems in autonomous vehicles. In some embodiments, the beam steering device may be used in optical communications, e.g., to provide fast speeds (e.g., speeds at the level of Gigabyte/second) and long ranges (e.g., ranges at kilometer levels). In some embodiments, the beam steering device may be implemented in microwave communications, 3D imaging and sensing (e.g., Lidar), lithography, and 3D printing, etc.
In some embodiments, the optical element may function as an imaging device, which may be implemented in various systems for AR, VR, and/or MR applications, enabling light-weight and ergonomic designs for AR, VR, and/or MR devices. For example, the imaging device may be implemented in displays and optical modules to enable smart glasses for AR, VR, and/or MR applications, compact illumination optics for projectors, and light-field displays. In some embodiments, the imaging device may replace conventional objective lenses having a high numerical aperture in microscopes. In some embodiments, the imaging device may be implemented into light source assemblies to provide a polarized structured illumination to a sample, for identifying various features of the sample. In some embodiments, the imaging device may enable polarization patterned illumination systems that add a new degree of freedom for sample analysis.
Some exemplary applications in the AR, VR, or MR fields, or some combination thereof, will be explained below.
The right-eye and left-eye display systems 210R and 210L may include image display components configured to generate computer-generated virtual images, and direct the virtual images into left and right display windows 215L and 215R in a field of view (“FOV”). For illustrative purposes,
As shown in
The object tracking system 290 may include an IR light source 291 configured to illuminate the eye 260 and/or the face, and an optical sensor 293 (e.g., a camera) configured to receive the IR light reflected by the eye 260 and generate a tracking signal relating to the eye 260 (e.g., an image of the eye 260). In some embodiments, the object tracking system 290 may also include an IR deflecting element (not shown) configured to deflect the IR light reflected by the eye 260 toward the optical sensor 293. In some embodiments, the object tracking system 290 may include one or more optical components with a layered structure (e.g., including a substrate, one or more optical films, and a protecting film, etc.). In some embodiments, the NED 200 may include an adaptive dimming element which may dynamically adjust the transmittance of lights reflected by real-world objects, thereby switching the NED 200 between a VR device and an AR device or between a VR device and an MR device. In some embodiments, along with switching between the AR/MR device and the VR device, the adaptive dimming element may be used in the AR and/or MR device to mitigate differences in brightness of lights reflected by real-world objects and virtual image lights.
The light source assembly 305 may include a light source 320 and a light conditioning system 325. In some embodiments, the light source 320 may be configured to generate a coherent or partially coherent light. The light source 320 may include, e.g., a laser diode, a vertical cavity surface emitting laser, a light emitting diode, or a combination thereof. In some embodiments, the light source 320 may be a display panel, such as a liquid crystal display (“LCD”) panel, a liquid-crystal-on-silicon (“LCoS”) display panel, an organic light-emitting diode (“OLED”) display panel, a micro light-emitting diode (“micro-LED”) display panel, a digital light processing (“DLP”) display panel, a laser scanning display panel, or a combination thereof. In some embodiments, the light source 320 may be a self-emissive panel, such as an OLED display panel or a micro-LED display panel. In some embodiments, the light source 320 may be a display panel that is illuminated by an external source, such as an LCD panel, an LCoS display panel, or a DLP display panel. Examples of an external source may include a laser, an LED, an OLED, or a combination thereof. The light conditioning system 325 may include one or more optical components configured to condition the image light output from the light source 320, e.g., a collimating lens configured to transform or convert a linear distribution of the pixels in the display panel into an angular distribution of the pixels at the input side of the light guide 310.
The light guide 310 may receive the image light 330 at the in-coupling element 335 located at the first portion of the light guide 310. In some embodiments, the in-coupling element 335 may couple the image light 330 into a total internal reflection (“TIR”) path inside the light guide 310. The image light 330 may propagate inside the light guide 310 via TIR toward an out-coupling element 345 located at a second portion of the light guide 310. The out-coupling element 345 may be configured to couple the image light 330 out of the light guide 310 as a plurality of output lights 332 propagating toward the eye-box region 259. Each of the plurality of the output lights 332 may present substantially the same image content as the image light 330. Thus, the out-coupling element 345 may be configured to replicate the image light 330 received from the light source assembly 305 at an output side of the light guide 310 to expand an effective pupil of the light guide display system 300, e.g. in an x-axis direction shown in
The light guide 310 may include a first surface or side 310-1 facing the real-world environment and an opposing second surface or side 310-2 facing the eye-box region 259. Each of the in-coupling element 335 and the out-coupling element 345 may be disposed at the first surface 310-1 or the second surface 310-2 of the light guide 310. In some embodiments, as shown in
In some embodiments, each of the in-coupling element 335 and the out-coupling element 345 may have a designed operating wavelength band that includes at least a portion of the visible wavelength band. In some embodiments, the designed operating wavelength band of each of the in-coupling element 335 and the out-coupling element 345 may not include the IR wavelength band. For example, each of the in-coupling element 335 and the out-coupling element 345 may be configured to deflect a visible light, and transmit an IR light without deflection or with negligible deflection.
In some embodiments, each of the in-coupling element 335 and the out-coupling element 345 may include one or more diffraction gratings, one or more cascaded reflectors, one or more prismatic surface elements, and/or an array of holographic reflectors, or any combination thereof. In some embodiments, each of the in-coupling element 335 and the out-coupling element 345 may include one or more diffractive structures, e.g., diffraction gratings. The diffraction grating may include a surface relief grating, a volume hologram grating, or a polarization hologram grating, etc. For discussion purposes, the in-coupling element 335 and the out-coupling element 345 may also be referred to as the in-coupling grating 335 and the out-coupling grating 345, respectively. In some embodiments, a period of the in-coupling grating 335 may be configured to enable TIR of the image light 330 within the light guide 310. In some embodiments, a period of the out-coupling grating 345 may be configured to couple the image light 330 propagating inside the light guide 310 through TIR out of the light guide 310 via diffraction.
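As a rough illustration of how a grating period may be chosen to enable TIR, the sketch below applies the standard grating equation and the TIR condition inside the light guide. All numeric values (wavelength, refractive index, grating period) are illustrative assumptions rather than values from the disclosure.

```python
import math

# Grating equation for the first diffraction order inside the guide:
#   n * sin(theta_d) = sin(theta_i) + m * lam / period
# TIR condition at the guide/air interface: theta_d > asin(1 / n).
lam = 520e-9        # assumed green image light wavelength, m
n = 1.8             # assumed light guide refractive index
period = 380e-9     # assumed in-coupling grating period, m
m = 1               # diffraction order
theta_i = 0.0       # normal incidence, rad

s = (math.sin(theta_i) + m * lam / period) / n
theta_d = math.asin(s)           # diffraction angle inside the guide
theta_c = math.asin(1.0 / n)     # critical angle for TIR

print(f"diffraction angle: {math.degrees(theta_d):.1f} deg")
print(f"critical angle:    {math.degrees(theta_c):.1f} deg")
print("guided by TIR:", theta_d > theta_c)
```

For these assumed values the first-order diffraction angle exceeds the critical angle, so the in-coupled light propagates by TIR; a longer period would diffract at a shallower angle and eventually fail the TIR condition.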
The light guide 310 may include one or more materials configured to facilitate the total internal reflection of the image light 330. The light guide 310 may include, for example, a plastic, a glass, and/or polymers. The light guide 310 may have a relatively small form factor. The light guide 310 coupled with the in-coupling element 335 and the out-coupling element 345 may also function as an image combiner (e.g., AR or MR combiner). The light guide 310 may combine the image light 332 representing a virtual image and a light 334 from the real world environment (or a real world light 334), such that the virtual image may be superimposed with real-world images. With the light guide display system 300, the physical display and electronics may be moved to a side of a front body of the NED 200. A substantially fully unobstructed view of the real world environment may be achieved, which enhances the AR or MR user experience.
In some embodiments, the light guide 310 may include additional elements configured to redirect, fold, and/or expand the pupil of the light source assembly 305. For example, in some embodiments, the light guide display system 300 may include a redirecting element 340 coupled to the light guide 310, and configured to redirect the image light 330 to the out-coupling element 345, such that the image light 330 is coupled out of the light guide 310 via the out-coupling element 345. In some embodiments, the redirecting element 340 may be arranged at a location of the light guide 310 opposing the location of the out-coupling element 345. For example, in some embodiments, the redirecting element 340 may be integrally formed as a part of the light guide 310 at the corresponding surface. In some embodiments, the redirecting element 340 may be separately formed and disposed at (e.g., affixed to) the corresponding surface of the light guide 310.
In some embodiments, the redirecting element 340 and the out-coupling element 345 may have a similar structure. In some embodiments, the redirecting element 340 may include one or more diffraction gratings, one or more cascaded reflectors, one or more prismatic surface elements, and/or an array of holographic reflectors, or any combination thereof. In some embodiments, the redirecting element 340 may include one or more diffractive structures, e.g., diffraction gratings. The diffraction grating may include a surface relief grating, a volume hologram grating, a polarization hologram grating (e.g., a liquid crystal polarization hologram grating), or any combination thereof. For discussion purposes, the redirecting element 340 may also be referred to as the redirecting grating 340.
In some embodiments, the redirecting element 340 and the out-coupling element 345 may be configured to replicate the image light 330 received from the light source assembly 305 at the output side of the light guide 310 in two different directions, thereby providing a two-dimensional (“2D”) expansion of the effective pupil of the light guide display system 300. For example, the out-coupling element 345 may be configured to replicate the image light 330 received from the light source assembly 305 at the output side of the light guide 310 to expand the effective pupil of the light guide display system 300, e.g., in the x-axis direction shown in
In some embodiments, one of the redirecting grating 340 and the out-coupling grating 345 may be disposed at the first surface 310-1 of the light guide 310, and the other one of the redirecting grating 340 and the out-coupling grating 345 may be disposed at the second surface 310-2 of the light guide 310. In some embodiments, the redirecting grating 340 and the out-coupling grating 345 may have different orientations of grating fringes (or grating vectors), thereby expanding the input image light 330 in two different directions. For example, the out-coupling grating 345 may expand the image light 330 along the x-axis direction, and the redirecting grating 340 may expand the image light 330 along the y-axis direction. The out-coupling grating 345 may further couple the expanded input image light out of the light guide 310. Accordingly, the light guide display system 300 may provide 2D pupil replication (or pupil expansion) at a light output side of the light guide display system 300. In some embodiments, the redirecting grating 340 and the out-coupling grating 345 may be disposed at the same surface of the light guide 310. In addition, to expand the exit pupil (or effective pupil) of the light guide display system 300 in more than two directions, more than two gratings (or layers of diffractive structures) may be disposed at the light output region of the light guide 310.
In some embodiments, multiple functions, e.g., redirecting, folding, and/or expanding the pupil of the light generated by the light source assembly 305 may be combined into a single element, e.g., the out-coupling element 345. For example, the out-coupling element 345 itself may be configured to provide a 2D expansion of the effective pupil of the light guide display system 300. For example, the out-coupling grating 345 may be a 2D grating including a single grating layer or a single layer of diffractive structure.
The light guide 310, the in-coupling grating 335, the out-coupling grating 345, and/or the redirecting grating 340 may be designed to be substantially transparent in the visible spectrum. The in-coupling grating 335, the out-coupling grating 345, and/or the redirecting grating 340 may be optical films functioning as gratings. For example, the in-coupling grating 335, the out-coupling grating 345, and/or the redirecting grating 340 may be a polymer layer, e.g., a photo-polymer film, or a liquid crystal polymer film, etc. In some embodiments, protecting films may be disposed at the in-coupling grating 335, the out-coupling grating 345, and/or the redirecting grating 340 for protection purposes. In some embodiments, the light guide 310 may also be coupled to one or more additional optical films that are substantially transparent in the visible spectrum. For example, the in-coupling grating 335, the out-coupling grating 345, and/or the redirecting grating 340 may be coupled to an additional optical film.
The light guide 310 disposed with the in-coupling grating 335, the out-coupling grating 345, and/or the redirecting grating 340 may be an example of an optical component with a layered structure disclosed in the present disclosure. Such an optical component with the layered structure may be substantially optically transparent at least in the visible wavelength range (e.g., about 380 nm to about 700 nm). The optical component with the layered structure may scatter a light when the light propagates through the layered structure of the optical component. When the light scattering is inelastic, for example, Raman scattering, the light scattering may provide information of the chemical (or material) composition of the multiple layers in the optical component. When the light scattering is elastic, the light scattering may disclose structure information of the multiple layers in the optical component at different spatial scales: much smaller than the wavelength of the light (Rayleigh scattering), comparable to the wavelength of the light (Mie scattering), or much larger than the wavelength of the light (Geometric scattering). Some elastic scattering behaviors may cause haze, which is a measurement of clarity or the “see through quality” of the optical component with the layered structure based on a reduction of sharpness. Thus, it is highly desirable to measure the light scattering of the optical component in relevant spectral ranges, ensuring that the haze of the optical component is within a predetermined range, and the optical component meets design specifications and customer expectations.
It may be desirable to identify and/or visualize the sources of scattering in the optical component, e.g., whether the scattering is from a volume of the optical component (i.e., volumetric scattering), or from a surface of the optical component (i.e., surface scattering at an interface between two neighboring layers, and/or at an interface between a layer and an outside environment (e.g., air)). Moreover, it may be desirable to identify the relative scattering contributions from the volumetric scattering and the surface scattering in the overall measured scattering. The identification and/or visualization of the scattering may provide guidance for the design (e.g., structures, materials, compositions, etc.) and the fabrication (e.g., manufacturing process improvements) of an optical component with reduced scattering. For example, when the surface scattering is identified as the dominant scattering, then polishing or other types of surface treatments of the optical component may reduce the overall scattering. When the volumetric scattering is identified as the dominant scattering, then adjusting the material formulation to reduce the intrinsic material scattering may reduce the overall scattering. The disclosed system and method for measuring the overall scattering and identifying the relative contributions of the surface and volumetric scattering in the overall scattering may be low-cost, highly sensitive, and highly efficient, and may be used in quality control process of mass production of the optical components.
The sample 450 may include a light input surface 450-1 and a light output surface 450-2 located at opposite sides of the sample 450. In some embodiments, the light input surface 450-1 and the light output surface 450-2 may be parallel with one another. In some embodiments, the sample 450 may have at least one curved surface, and the light input surface 450-1 may be unparallel with the light output surface 450-2. The light input surface 450-1 may receive the probing beam 465 output from the light source 405. The probing beam 465 may propagate through the sample 450, and exit the sample 450 at the light output surface 450-2 as a plurality of transmitted and scattered beams 430, e.g., including a directly transmitted beam 430a (also considered as a scattered beam with a scattering angle of 0°) and a plurality of scattered beams 430b-430e with non-zero scattering angles. The beams 430b-430e output from the light output surface 450-2 may be referred to as forwardly scattered beams, and the corresponding scattering may be referred to as forward scattering.
Accordingly, the sample 450 may scatter an input wavefront (e.g., a planar wavefront) 467 of the probing beam 465 at an input plane of the sample 450 (e.g., at plane A) as an overall scattered wavefront 437 at an output plane of the sample 450 (e.g., at plane B). The overall scattered wavefront 437 may propagate toward the detection assembly 403. In some embodiments, the positions of the plane A and the plane B may be fixed. In some embodiments, when the sample 450 includes both of surface scattering sources and volumetric scattering sources, the overall scattered wavefront 437 may be a result of the interference of multiple scattered wavefronts including, e.g., one or more scattered wavefronts generated by one or more surface scattering sources and one or more scattered wavefronts generated by one or more volumetric scattering sources. In other words, the overall scattered wavefront 437 may be a superposition of the multiple scattered wavefronts.
The detection assembly 403 may be configured to aim toward the light output surface 450-2 to receive one or more of the scattered beams 430b-430e output from the light output surface 450-2 (or a portion of the overall scattered wavefront 437). For example, the detection assembly 403 may be tilted with respect to the optical axis 425 of the system 400 to detect one or more of the scattered beams 430b-430e (e.g., may not detect the directly transmitted beam 430a). In some embodiments, the position of the detection assembly 403 may be fixed with respect to the optical axis 425 of the system 400. Here the position being fixed means that the orientation of the imaging device 410 with respect to the optical axis 425 (or with respect to the propagation direction of the probing beam 465, or with respect to the sample 450) is also fixed. The detection assembly 403 may include an imaging device 410, and a diaphragm (or an iris) (not shown) disposed in front of the imaging device 410. The diaphragm may define an aperture with a predetermined size (e.g., a predetermined circular hole), through which a beam can reach the imaging device 410. In other words, the diaphragm may define an area (or size of an aperture) through which the imaging device 410 can receive the beams. The diaphragm may also reduce the stray lights that may be received by the image sensor 411.
The imaging device 410 may include an image sensor 411. In some embodiments, the imaging device 410 may also include a lens or lens array (not shown) disposed in front of the image sensor 411. The lens (or lens array) may focus the beams onto the image sensor 411. In some embodiments, the imaging device 410 may be a camera, and the image sensor 411 may also be referred to as a camera sensor. The image sensor 411 may be any suitable 2D image sensor including a 2D array of pixels, such as a charge-coupled device (“CCD”) image sensor, a complementary metal-oxide-semiconductor (“CMOS”) image sensor, an N-type metal-oxide-semiconductor (“NMOS”) image sensor, a pixelated polarized image sensor, or any other image sensors.
The scattering of the sample 450 may generate a far-field intensity speckle pattern (that is, a far-field speckle pattern) at a distance s behind the sample 450. The far-field intensity speckle pattern may be projected onto the image sensor 411 (e.g., onto a chip of the image sensor 411 that is disposed at a plane C at the distance s behind the sample 450). In some embodiments, the position of the plane C may be fixed. The image sensor 411 may record the speckle pattern, and generate an image of the speckle pattern. A speckle pattern is a fine granular pattern of light obtained by the scattering of a coherent light beam, and results from the interference of multiple scattered wavefronts (e.g., a surface scattered wavefront generated by the surface scattering and a scattered wavefront generated by the volumetric scattering). The speckle pattern may include multiple speckles or speckle spots. The diaphragm and the lens (or lens array) may also control the size of the speckle pattern.
During the scattering measurement of the sample 450, the controller 455 may control the rotation of the rotating structure 413 to cause the sample 450 to rotate from its initial position, thereby tilting a reference axis 427 (shown in
The tilt angle θ of the sample 450 may be positive or negative. For example, a counter-clockwise direction 420 may be defined as a positive direction, and then a clockwise direction 422 may be defined as a negative direction.
Referring to
Referring to
Referring to
At each angular position (or at each tilt angle) as the sample 450 is tilted within the predetermined tilting range, the image sensor 411 may detect one of the scattered beams 430b-430e, and record a speckle pattern as speckle pattern image data. In some embodiments, the image sensor 411 may generate an image of the speckle pattern based on the recorded speckle pattern image data. In some embodiments, the image sensor 411 may transmit the recorded speckle pattern image data to the controller 455, and the controller 455 may generate the image of the speckle pattern. In some embodiments, the controller 455 may further process the sets of the speckle pattern image data via suitable data processing or image processing algorithms to determine or identify the relative contributions of the volumetric scattering and the surface scattering in the overall measured scattering of the sample 450. The details of processing of the recorded speckle patterns (or sets of speckle pattern image data) will be explained in connection with
Although the above descriptions use forward scattering as an example, a system similar to those shown in
In the disclosed embodiments, the image sensor 411 may enhance the scattering measurement performance as compared to a single photodiode or a photodiode array used in conventional technology for measuring the light intensity of scattered beams. For example, the image sensor 411 may provide a wider dynamic measurement range as compared to the single photodiode and the photodiode array. In some embodiments, the exposure time of the image sensor 411 may range from about 1 μs (microsecond) to 10 s (seconds), providing 7 orders of magnitude of adjustment. In some embodiments, the image sensor 411 may provide 8-bit depth, corresponding to a range of 0 to 2⁸ (=256), providing about 2 orders of magnitude. The number of pixels may be about 10⁷, providing 7 orders of magnitude. In total, the image sensor 411 can support a measurement dynamic range of about 16 orders of magnitude. In some embodiments, the image sensor 411 may provide 10-bit depth, 12-bit depth, or an even higher bit depth, further increasing the dynamic range of the image sensor 411. In addition, the image sensor 411 has the advantage of high measurement sensitivity due to its higher light collection efficiency. The image sensor 411 may have an active light collection area of at least 2×2 cm², whereas a typical photodiode has an active light collection area of about 1×1 mm². Thus, the light collection area of the image sensor 411 is at least 400 times that of a typical photodiode, which translates into an approximately 400 times higher light collection efficiency.
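The dynamic-range arithmetic above can be verified with a short calculation. The sketch below is illustrative only; `dynamic_range_orders` is a hypothetical helper, and the input figures (exposure range, bit depth, pixel count) are the example values from the description:

```python
import math

def dynamic_range_orders(exposure_min_s, exposure_max_s, bit_depth, n_pixels):
    """Estimate the measurement dynamic range (in orders of magnitude)
    by combining the exposure-time range, the bit depth, and the
    pixel count of an image sensor."""
    exposure_orders = math.log10(exposure_max_s / exposure_min_s)
    depth_orders = math.log10(2 ** bit_depth)
    pixel_orders = math.log10(n_pixels)
    return exposure_orders + depth_orders + pixel_orders

# Example figures from the text: 1 us to 10 s exposure (7 orders),
# 8-bit depth (~2 orders), ~1e7 pixels (7 orders) -> ~16 orders total.
total = dynamic_range_orders(1e-6, 10, 8, 1e7)
```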
In some embodiments, the processes for performing the scattering measurement using the system 400 shown in
As shown in
Various methods may be used to pre-set the exposure time of the image sensor 411 for each angular position. In some embodiments, the histogram captured by the image sensor 411 at each angular position may be analyzed by the controller 455 using a suitable algorithm to determine an exposure time. In some embodiments, the exposure time may be determined according to a working range of the image sensor 411 within which usable light intensity data can be extracted, and the light intensity detected by a pixel (i.e., the pixel value) of the image sensor 411. The working range of the image sensor 411 may be a range between a maximum intensity value and a minimum intensity value that can be acquired by the image sensor 411. When the pixel value is at or above the maximum intensity value (saturation), the pixel in the captured image may appear white, whereas when the pixel value is at or below the minimum intensity value, the pixel in the captured image may appear black. The pixel value may be determined, in part, by the number of photons received by the pixel, the energy of a single photon, and the exposure time. In some embodiments, at an angular position of the sample 450, the exposure time may be set such that the light intensities detected by the pixels of the image sensor 411 are limited to a predetermined smaller sub-range of the total working range of the image sensor 411 (referred to as a predetermined detection range). In other words, at an angular position of the sample 450, the exposure time may be set such that the pixel values of the image sensor 411 are limited to a predetermined pixel value range.
When a detected light intensity is within the predetermined detection range (or the pixel value is within the predetermined pixel value range), the image sensor 411 may provide a contrast ratio and a signal-to-noise ratio that are above predetermined thresholds. For example, a lower limit of the predetermined detection range may be greater than the minimum intensity value, and equal to or greater than a first percentage (e.g., 30%, 35%, 40%, or 45%, etc.) of the maximum intensity value. An upper limit of the predetermined detection range may be equal to or smaller than a second percentage (e.g., 70%, 65%, 60%, or 55%, etc.) of the maximum intensity value. The second percentage is greater than the first percentage.
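One possible exposure-setting heuristic consistent with the above could scale the exposure time so that the brightest pixel lands within the predetermined detection range. This is a hedged sketch, not the claimed algorithm; `adjust_exposure` is a hypothetical helper, and the 40%-60% band is one of the example percentage pairs listed above:

```python
def adjust_exposure(exposure_s, peak_fraction, lower=0.4, upper=0.6):
    """Scale the exposure time so that the brightest pixel value lands
    within [lower, upper] of the sensor's saturation value.

    peak_fraction: brightest pixel value divided by the saturation
    value at the current exposure, assumed proportional to exposure.
    """
    if lower <= peak_fraction <= upper:
        return exposure_s                      # already inside the target range
    target = (lower + upper) / 2               # aim for the middle of the range
    return exposure_s * target / peak_fraction

# e.g., at 10 ms the peak pixel sits at 10% of saturation -> scale toward 50%
new_exp = adjust_exposure(0.010, 0.10)
```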
After initial exposure times are determined, in some embodiments, the actual scattering measurement may be preliminarily performed to check for irregularities, i.e., whether any exposure time is so short or so long that it causes irregular or undesirable exposure in the image generated based on the received beams 430b-430e, or any irregular data in the captured image data. If any irregularity is detected in the generated image or the captured image data, the processes of determining the exposure times may be repeated to refine or adjust the exposure times, until a satisfactory set of exposure times is determined for the subsequent actual scattering measurement of the sample 450. In some embodiments, the processes of checking for irregularities may be omitted, and the initial exposure times may be directly used as the final exposure times. The first step of exposure time pre-setting may also be automated by the controller 455 based on predetermined algorithms or programs.
After the exposure times are determined and pre-set in the image sensor 411 for each angular position (or tilt angle) of the sample 450, the second step of dark frame characterization may be performed for removing the ambient light in the environment and intrinsic noise of the image sensor 411. Referring to
The third step of data acquisition, dark frame subtraction, and data processing may be performed after the second step of dark frame characterization is performed. In the third step, an actual scattering measurement of the sample 450 may be performed. Still referring to
Thus, after the sample 450 is tilted at the predetermined increment throughout the predetermined tilting range (or a measurement angular range), a series of speckle patterns may be recorded as a plurality of sets of speckle pattern image data. In some embodiments, the image sensor 411 or the controller 455 may generate a series of images of speckle patterns based on the acquired speckle pattern image data. The controller 455 may process the plurality of sets of speckle pattern image data to obtain a plurality of sets of scattering intensities per time unit (or a plurality of fourth sets of intensity data) for the respective tilt angles of the sample 450. The controller 455 may further process the plurality of fourth sets of intensity data to determine the relative contributions of the volumetric scattering and the surface scattering in the overall measured scattering of the sample 450.
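The dark-frame subtraction and per-time-unit normalization described in the second and third steps can be sketched as follows. This is a minimal illustration under stated assumptions, not the full pipeline of the disclosure; `scattering_intensity_per_unit_time` is a hypothetical helper, and the clipping of negative residuals is an added convention:

```python
import numpy as np

def scattering_intensity_per_unit_time(raw_frame, dark_frame, exposure_s):
    """Subtract the dark frame recorded at the same exposure time,
    clip negative noise residuals to zero, and normalize by the
    exposure time so that frames captured at different tilt angles
    (with different exposure times) are directly comparable."""
    corrected = raw_frame.astype(np.float64) - dark_frame.astype(np.float64)
    corrected = np.clip(corrected, 0.0, None)  # remove negative residuals
    return corrected / exposure_s              # scattering intensity per time unit

# toy 2x2 frames; a 0.5 s exposure doubles the per-second intensity values
raw = np.array([[10, 200], [5, 120]])
dark = np.array([[8, 10], [6, 9]])
result = scattering_intensity_per_unit_time(raw, dark, 0.5)
```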
For example, the controller 455 may process each fourth set of intensity data to generate a correlation function via a suitable data or image processing algorithm. The correlation function may quantitatively indicate the degree of overlap of a corresponding speckle pattern with respect to the reference speckle pattern. In some embodiments, the correlation function may correspond to the cross-correlation coefficients of the reference speckle pattern with the corresponding speckle pattern. Based on the generated correlation function, the controller 455 may determine a maximum correlation coefficient (or a peak) of the correlation function. The maximum correlation coefficient of the correlation function may represent the maximum degree of correlation between the reference speckle pattern and the corresponding speckle pattern obtained at a specific tilt angle.
That is, as the sample 450 is tilted to different angular positions within the predetermined tilting range, at each angular position the image sensor 411 may detect one of the scattered beams 430b-430e, and record a speckle pattern as a set of speckle pattern image data. Based on the speckle pattern, the controller 455 may calculate the correlation function of the speckle pattern with respect to the reference speckle pattern, and determine a maximum correlation coefficient of the correlation function. Thus, for the predetermined tilting range, the controller 455 may determine respective maximum correlation coefficients corresponding to the respective angular positions (or tilt angles θ) of the sample 450. Based on the maximum correlation coefficients determined at the various tilt angles, the controller 455 may generate a plot showing a relationship between the maximum correlation coefficients and the tilt angles θ of the sample 450.
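The maximum correlation coefficient between a tilted-angle speckle pattern and the reference pattern can be computed with an FFT-based normalized cross-correlation. The sketch below is one common way to implement such a correlation (assuming circular shifts), not necessarily the algorithm of the disclosure; `max_correlation_coefficient` is a hypothetical helper:

```python
import numpy as np

def max_correlation_coefficient(ref, img):
    """Normalized cross-correlation between a reference speckle pattern
    and a pattern recorded at a tilt angle, computed over all relative
    (circular) shifts via the FFT; the peak value quantifies the degree
    of overlap between the two patterns (1.0 = identical)."""
    a = ref - ref.mean()
    b = img - img.mean()
    # circular cross-correlation of a and b for every relative shift
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    norm = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return xcorr.max() / norm

rng = np.random.default_rng(0)
pattern = rng.random((64, 64))   # stand-in for a recorded speckle pattern
other = rng.random((64, 64))     # an unrelated pattern
c_same = max_correlation_coefficient(pattern, pattern)  # perfectly correlated
c_diff = max_correlation_coefficient(pattern, other)    # largely uncorrelated
```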
In some embodiments, when the sample 450 includes a volumetric scattering source and a surface scattering source, each type of scattering may be presumed to be associated with a correlation profile (indicating a relationship between the maximum correlation coefficients and the tilt angles). To explain the principles, the following descriptions use a simplified situation where the sample 450 includes a single volumetric scattering layer and a single surface scattering layer. Hence, there is one volumetric scattering correlation profile and one surface scattering correlation profile, and the overall correlation profile of the sample 450 (similar to the plot shown in
The optical film (an example of the second layer 454) may be configured with a predetermined optical function. The optical film may function as a transmissive or reflective optical element, such as a prism, a lens or lens array, a grating, a polarizer, a compensation plate, or a phase retarder, etc. The optical film may include one or more layers of films. The thickness of the optical film may be within a range from several micrometers (“μm”) to several hundreds of micrometers. For example, the thickness of the optical film may be within a range from 5 μm to 50 μm, 5 μm to 60 μm, 5 μm to 70 μm, 5 μm to 80 μm, 5 μm to 90 μm, 5 μm to 100 μm, 10 μm to 50 μm, 10 μm to 60 μm, 10 μm to 70 μm, 10 μm to 80 μm, 10 μm to 90 μm, 10 μm to 100 μm, or 5 μm to 200 μm, etc.
The substrate (an example of the first layer 452) may provide support and protection to various layers, films, and/or structures formed thereon. In some embodiments, the substrate may be at least partially transparent in the visible wavelength range (e.g., about 380 nm to about 700 nm). In some embodiments, the substrate may be at least partially transparent in at least a portion of the infrared (“IR”) band (e.g., about 700 nm to about 2 mm). The substrate may include a suitable material that is at least partially transparent to lights of the above-listed wavelength ranges, such as, a glass, a plastic, a sapphire, or a combination thereof, etc. The substrate may be rigid, semi-rigid, flexible, or semi-flexible. The substrate may include a flat surface or a curved surface, on which the different layers or films may be formed. In some embodiments, the substrate may be a part of another optical element or device (e.g., another opto-electrical element or device), e.g., the substrate may be a solid optical lens, a part of a solid optical lens, or a light guide (or waveguide), etc. For example, the substrate (an example of the first layer 452) may be the light guide 310 shown in
In some embodiments, the sample 450 may include more than three layers. For example, in some embodiments, the sample 450 may include an alignment structure (not shown) disposed between the first layer 452 (e.g., substrate) and the second layer 454 (e.g., optical film). The alignment structure may provide a predetermined alignment pattern to align the molecules in the optical film. The alignment structure may include any suitable alignment structure, such as a photo-alignment material (“PAM”) layer, a mechanically rubbed alignment layer, an alignment layer with anisotropic nanoimprint, an anisotropic relief, or a ferroelectric or ferromagnetic material layer, etc.
For discussion purposes, in the sample 450 having a layered structure, the first layer (e.g., substrate) 452 is presumed to be substantially scattering-free, the second layer (e.g., optical film) 454 is presumed to be a volumetric scattering source, and the third layer (e.g., TAC film) 456 is presumed to be a surface scattering source. The thickness of the third layer (e.g., surface scattering layer) 456 may be at the same scale as the incidence wavelength λ (e.g., hundreds of nanometers), while the thickness of the second layer (e.g., volumetric scattering layer) 454 may be much greater than the incidence wavelength λ, e.g., several micrometers to several tens of micrometers. Thus, the memory effect range of the third layer (e.g., surface scattering layer) 456 may be much greater than (e.g., ten times to one hundred times) the memory effect range of the second layer (e.g., volumetric scattering layer) 454.
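The inverse relationship between layer thickness and memory effect range can be illustrated with a commonly cited scaling for scattering layers, δθ ≈ λ/(2πL); the exact prefactor depends on the scattering regime, so this is only an order-of-magnitude sketch with assumed example values (532 nm probing wavelength, 500 nm and 20 μm layer thicknesses):

```python
import math

def memory_effect_range_rad(wavelength_m, thickness_m):
    """Order-of-magnitude estimate of the angular memory-effect range
    of a scattering layer, using the scaling dtheta ~ lambda/(2*pi*L);
    thinner layers retain speckle correlation over wider tilt ranges."""
    return wavelength_m / (2 * math.pi * thickness_m)

lam = 532e-9                                       # assumed probing wavelength
thin = memory_effect_range_rad(lam, 500e-9)        # surface layer ~ lambda thick
thick = memory_effect_range_rad(lam, 20e-6)        # volumetric layer ~ 20 um thick
ratio = thin / thick   # thin layer's range is ~40x wider in this example
```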
Referring to
It is noted that the correlation profiles 510, 520, and 530 shown in
As the overall correlation profile 530 of the sample 450 is a weighted coherent sum of the correlation profile 510 of the second layer 454 and the correlation profile 520 of the third layer 456, through determining respective weights of the hypothetical correlation profile 510 (or maximum correlation coefficient S1) and the hypothetical correlation profile 520 (or maximum correlation coefficient S2) in the overall correlation profile 530 (or maximum correlation coefficient S0), the relative contributions of the volumetric scattering from the second layer 454 and the surface scattering from the third layer 456 in the overall scattering of the sample 450 may be determined. For example, in some embodiments, based on the overall correlation profile 530 of the sample 450, the controller 455 may first determine respective memory effect ranges of the third layer 456 and the second layer 454 via a suitable data or image processing algorithm, and then determine the respective weights for the correlation profile 510 and the correlation profile 520 from the overall correlation profile 530 via a pre-built model, algorithm, or prior knowledge of the sample 450. For example, the controller 455 may first determine at least one tilt angle representing one or more respective memory effect ranges of the third layer 456 and/or the second layer 454 via a suitable data or image processing algorithm.
For illustrative purposes, the correlation profile 530 in
In some embodiments, the controller 455 may determine the tilt angle θ4 (corresponding to θ2 in
In some embodiments, based on at least one of the determined tilt angles θ3 and θ4, a value C3 may be determined as the weight of the correlation profile 520 (or maximum correlation coefficient S2) in the overall correlation profile 530 (or maximum correlation coefficient S0). The weight for the correlation profile 510 (or maximum correlation coefficient S1) in the overall correlation profile 530 (or maximum correlation coefficient S0) may be calculated as (C0−C3), e.g., (1−C3) when C0=1. In some embodiments, when the maximum correlation coefficients S0 are not normalized maximum correlation coefficients, C0 may be equal to a maximum value of the plurality of the maximum correlation coefficients S0 (or a peak value of the overall correlation profile 530). After the weight for the correlation profile 520 is determined as C3, the weight for the correlation profile 510 may be determined as a difference between the maximum value of the plurality of the maximum correlation coefficients in the overall correlation profile 530 and the weight for the correlation profile 520, i.e., C0−C3. Example methods for determining the value C3 will be discussed in connection with
The weights for the correlation profile 510 and the correlation profile 520 in the overall correlation profile 530 indicate the relative contributions of the volumetric scattering from the second layer 454 and the surface scattering from the third layer 456 that resulted in the overall scattering of the sample 450, respectively. For example, if the overall scattering of the sample 450 is presumed to be 1, the weight of the volumetric scattering from the second layer 454 may be determined as the weight of the correlation profile 510 in the overall correlation profile 530, and the weight of the surface scattering from the third layer 456 may be determined as the weight of the correlation profile 520 in the overall correlation profile 530. Thus, the relative contribution of the surface scattering in the overall scattering of the sample 450 may be C3, and the relative contribution of the volumetric scattering in the overall scattering of the sample 450 may be C0−C3 (or 1−C3 if C0=1).
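The weight extraction described above can be sketched as reading off the plateau of the overall correlation profile at tilt angles beyond the volumetric layer's memory-effect range but within the surface layer's. This is a hedged illustration, not the disclosed model-fitting method; `scattering_weights`, the tail-averaging plateau estimate, and the toy profile values are all assumptions:

```python
def scattering_weights(profile, c0=1.0):
    """Given the overall correlation profile as (tilt_angle_deg,
    max_correlation_coefficient) pairs sorted by angle, estimate the
    plateau value C3 at large tilt angles as the surface-scattering
    weight, and C0 - C3 as the volumetric-scattering weight."""
    tail = [c for _, c in profile[-3:]]   # crude plateau estimate from the tail
    c3 = sum(tail) / len(tail)
    return {"surface": c3, "volumetric": c0 - c3}

# toy normalized profile: sharp volumetric decay, broad surface plateau ~0.3
profile = [(0.0, 1.00), (0.5, 0.65), (1.0, 0.40),
           (2.0, 0.31), (4.0, 0.30), (8.0, 0.29)]
w = scattering_weights(profile)
```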
The determined weights of the volumetric scattering from the second layer 454 and the surface scattering from the third layer 456 from the overall scattering of the sample 450 may provide guidance for improving the fabrication quality of the sample 450. Based on the determined weights of the volumetric scattering and the surface scattering, the controller 455 may determine a dominant scattering of the sample 450. For example, when the weight of the surface scattering from the third layer 456 is greater than 0.5 (e.g., when C3>0.5), the controller 455 may determine that the surface scattering is the dominant scattering of the sample 450. Thus, polishing or other types of surface treatment of the third layer 456 may be recommended to reduce the overall scattering of the sample 450. When the weight of the volumetric scattering from the second layer 454 is greater than 0.5 (e.g., 1−C3>0.5), the controller 455 may determine that the volumetric scattering is the dominant scattering of the sample 450. Thus, adjusting the material formulation for fabricating the second layer 454 may be recommended to reduce the overall scattering of the sample 450.
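The decision rule above reduces to a simple threshold comparison. A minimal sketch, with `recommend_treatment` as a hypothetical helper and the returned messages as illustrative stand-ins for the recommended process improvements:

```python
def recommend_treatment(surface_weight, volumetric_weight):
    """Apply the dominance rule described above: whichever weight
    exceeds 0.5 identifies the dominant scattering, which in turn
    determines the recommended fabrication improvement."""
    if surface_weight > 0.5:
        return "surface dominant: polish or otherwise treat the surface"
    if volumetric_weight > 0.5:
        return "volumetric dominant: adjust the material formulation"
    return "no dominant source: contributions are comparable"

msg = recommend_treatment(0.3, 0.7)
```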
After the optical component 600 is tilted at a predetermined increment throughout the predetermined tilting range, the image sensor 411 may record a plurality of speckle patterns (in the form of a plurality of sets of speckle pattern image data) associated with a plurality of tilt angles of the sample 450. In some embodiments, the image sensor 411 may generate a plurality of images of the speckle patterns. It is understood that in some embodiments, the image sensor 411 may not generate the images. The image sensor 411 may provide the sets of speckle pattern image data to the controller 455, which may analyze the speckle pattern image data and determine the relative contributions of the volumetric scattering and the surface scattering based on the processes disclosed herein. For illustrative purposes,
In the example of
Based on the correlation profile 630 of the optical component 600, the controller 455 may determine the weights for the volumetric scattering from the photo-polymer layer 603 and the surface scattering from the TAC layer 605 in the overall scattering of the optical component 600. For example, the overall scattering of the optical component 600 is presumed to be 1. According to
To corroborate that the volumetric scattering from the photo-polymer layer 603 is the dominant scattering in the optical component 600, the light scattering of the optical component 600 is measured by another type of mechanism using a light-sheet microscope.
The present disclosure also provides a method for separately identifying the contributions of the volumetric scattering and the surface scattering from an overall scattering of an optical element or component based on “optical memory effect.” The method may be performed by one or more components included in the disclosed system. Descriptions of the components, structures, and/or functions can refer to the above descriptions rendered in connection with
As shown in
The method 700 may include generating, by an image sensor, a plurality of sets of speckle pattern image data when the optical element is arranged at a plurality of tilt angles within the predetermined tilting range (step 730). Each set of speckle pattern image data may represent a speckle pattern of the scattered beams output from the optical element. Each set of speckle pattern image data may be generated by the image sensor based on the scattered beams received by the image sensor. In some embodiments, the image sensor may be a camera sensor that includes a 2D array of pixels for imaging.
The optical element may include a surface scattering source and a volumetric scattering source. The method 700 may include processing, by the controller, the plurality of sets of speckle pattern image data to determine weights of volumetric scattering and surface scattering in an overall scattering of the optical element (step 740). An example process of determining the weights is discussed above in connection with
In some embodiments, the step 740 may include, for each tilt angle, determining a correlation function of a corresponding speckle pattern with respect to a reference speckle pattern. In some embodiments, the reference speckle pattern image data for the reference speckle pattern may be recorded or generated by the image sensor when the tilt angle of the optical element is zero degrees. The determination of the correlation function may be based on the speckle pattern image data for the tilt angle and the reference speckle pattern image data for the reference speckle pattern. A plurality of correlation functions may be determined for the plurality of tilt angles.
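The per-tilt-angle correlation function may, for instance, be computed as a normalized cross-correlation of the speckle pattern with the reference pattern. The sketch below is one possible implementation, assuming circular (FFT-based) cross-correlation of mean-subtracted images; the function name is hypothetical and not part of the disclosed system.

```python
import numpy as np

def correlation_map(pattern, reference):
    """Normalized cross-correlation of a speckle pattern with a reference
    speckle pattern, computed in the Fourier domain.

    Returns a 2D map of correlation coefficients over all relative shifts;
    the maximum of this map is the maximum correlation coefficient for the
    corresponding tilt angle.
    """
    a = pattern - pattern.mean()
    b = reference - reference.mean()
    # circular cross-correlation via the convolution theorem
    corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    # normalize so that a pattern correlated with itself peaks at 1
    norm = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return corr / norm
```

Correlating a pattern with itself yields a map whose maximum is 1 (at zero shift); as the tilt angle increases and the speckle pattern decorrelates from the reference, the maximum of the map decreases.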
In some embodiments, the step 740 may include, for each correlation function, determining a maximum correlation coefficient for the specific tilt angle. Thus, a plurality of maximum correlation coefficients may be determined for the plurality of tilt angles. In some embodiments, the step 740 may include normalizing the plurality of determined maximum correlation coefficients to obtain a plurality of normalized maximum correlation coefficients.
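The extraction and normalization of the maximum correlation coefficients described above can be sketched as follows; this is an illustrative helper with a hypothetical name, assuming one correlation map per tilt angle and normalization by the largest maximum (typically the zero-tilt reference).

```python
import numpy as np

def normalized_max_coefficients(correlation_maps):
    """For each tilt angle's correlation map, take the maximum correlation
    coefficient, then normalize the sequence so its largest value equals 1."""
    maxima = np.array([m.max() for m in correlation_maps])
    return maxima / maxima.max()
```

The resulting sequence of normalized maximum correlation coefficients, indexed by tilt angle, forms the tilt angle dependent correlation profile used in the subsequent steps.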
In some embodiments, the step 740 may include determining respective memory effect ranges of the surface scattering source and the volumetric scattering source based on the plurality of tilt angles and the plurality of normalized maximum correlation coefficients. Determining each memory effect range may include determining a tilt angle that reflects or that is representative of the memory effect range. For example, in some embodiments, the step 740 may include determining a first tilt angle (e.g., θ4 shown in
In some embodiments, the step 740 may include, based on at least one tilt angle reflecting (or representative of) at least one of the memory effect ranges of the surface scattering source and the volumetric scattering source, determining respective weights of surface scattering and volumetric scattering in an overall scattering of the optical element. For example, in some embodiments, the weight of the volumetric scattering in the overall scattering of the optical element may be determined to be a normalized maximum correlation coefficient corresponding to the second tilt angle (e.g., θ3). In some embodiments, the weight of the volumetric scattering in the overall scattering of the optical element may be determined as an average of the normalized maximum correlation coefficients corresponding to tilt angles within a sub-range of a range from the second tilt angle (e.g., θ3) to the first tilt angle (e.g., θ4). In some embodiments, determining the weight of the surface scattering in the overall scattering of the optical element may include determining the weight of the surface scattering in the overall scattering as a difference between 1 and the weight of the volumetric scattering in the overall scattering of the optical element.
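The plateau-based weight determination described above can be sketched as follows. The function names and the θ3/θ4 parameter names are illustrative assumptions; the sketch implements the variant in which the volumetric weight is the average of the normalized maximum correlation coefficients over a sub-range between the second tilt angle (θ3) and the first tilt angle (θ4), with the surface weight as its complement.

```python
import numpy as np

def volumetric_weight(tilt_angles, normalized_coeffs, theta3, theta4):
    """Average the normalized maximum correlation coefficients over the
    plateau between theta3 (onset of the plateau) and theta4 (where the
    correlation falls below the predetermined coefficient value)."""
    angles = np.asarray(tilt_angles)
    coeffs = np.asarray(normalized_coeffs)
    mask = (angles >= theta3) & (angles <= theta4)
    return coeffs[mask].mean()

def surface_weight(w_volumetric):
    """Surface-scattering weight as the complement of the volumetric weight."""
    return 1.0 - w_volumetric
```

For a profile that decays to a plateau of about 0.3 between θ3 and θ4, this yields a volumetric weight of about 0.3 and a surface weight of about 0.7.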
The method 700 may include other steps or processes not shown in
In some embodiments, the method 700 may also include determining a plurality of exposure times of the image sensor for the plurality of tilt angles of the optical element. In some embodiments, the method 700 may also include, based on the determined exposure times for the respective tilt angles, pre-setting the exposure times in the image sensor for the plurality of tilt angles. In some embodiments, the step of pre-setting the exposure times in the image sensor may be omitted. In some embodiments, these steps may be performed prior to step 730.
After the respective exposure times of the image sensor for the respective tilt angles of the optical element are determined, the step 730 may include generating, by the image sensor, a plurality of sets of speckle pattern image data representing the plurality of speckle patterns, using the respective determined exposure times for the respective tilt angles of the optical element. Each set of speckle pattern image data may include a first set of intensity data relating to the overall scattering detected at a specific tilt angle. For example, the optical element may be tilted from a first angular position corresponding to a first tilt angle to a second angular position corresponding to a second tilt angle. At the first angular position, the image sensor may generate a first set of speckle pattern image data representing a first speckle pattern using a first determined exposure time for the first tilt angle. At the second angular position, the image sensor may generate a second set of speckle pattern image data representing a second speckle pattern using a second determined exposure time for the second tilt angle.
In some embodiments, after the exposure times for the tilt angles are determined, the method 700 may also include, with the light source turned off, moving the rotating structure to tilt the optical element to each of the plurality of tilt angles to generate a plurality of sets of dark frame speckle pattern image data representing a plurality of dark frame patterns, using the same determined exposure times at the specific tilt angles. Each of the plurality of sets of dark frame speckle pattern image data representing one of the plurality of dark frame patterns may include a second set of intensity data relating to the ambient light in the environment and the intrinsic noise of the image sensor. For example, with the light source turned off, the optical element may be tilted from the first angular position to the second angular position. At the first angular position, the image sensor may record a first dark frame pattern, using the first determined exposure time for the first tilt angle. At the second angular position, the image sensor may record a second dark frame pattern, using the second determined exposure time for the second tilt angle.
In some embodiments, the step 740 may include subtracting the second sets of intensity data from the corresponding first sets of intensity data to obtain a plurality of third sets of intensity data for the plurality of tilt angles. The step 740 may include normalizing the third sets of intensity data by the corresponding exposure times for the plurality of tilt angles. For example, a second set of intensity data representing the first dark frame pattern may be subtracted from a first set of intensity data representing the first speckle pattern to obtain a third set of intensity data for the first tilt angle. The third set of intensity data for the first tilt angle may be normalized by the first determined exposure time for the first tilt angle. A second set of intensity data representing the second dark frame pattern may be subtracted from a first set of intensity data representing the second speckle pattern to obtain a third set of intensity data for the second tilt angle. The third set of intensity data for the second tilt angle may be normalized by the second determined exposure time for the second tilt angle. In some embodiments, the step 740 may include processing the normalized third sets of intensity data to obtain a correlation profile of the optical element that depends on the tilt angle, i.e., a relationship between the normalized third sets of intensity data and tilt angles.
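The dark-frame subtraction and exposure-time normalization described above can be sketched as follows; this is an illustrative helper with a hypothetical name, assuming one speckle frame, one dark frame, and one exposure time per tilt angle.

```python
import numpy as np

def correct_speckle_frames(speckle_frames, dark_frames, exposure_times):
    """For each tilt angle: subtract the dark frame (ambient light plus
    intrinsic sensor noise) from the recorded speckle frame, then divide by
    the exposure time used at that angle, so that frames recorded with
    different exposure times become comparable."""
    corrected = []
    for speckle, dark, t in zip(speckle_frames, dark_frames, exposure_times):
        corrected.append((speckle - dark) / t)
    return corrected
```

The corrected (third) sets of intensity data can then be fed into the correlation analysis to obtain the tilt angle dependent correlation profile.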
As shown in
In some embodiments, the optical film 800 may be a polymer layer (or film). For example, in some embodiments, the optical film 800 may be a liquid crystal polymer (“LCP”) layer. In some embodiments, the LCP layer may include polymerized (or cross-linked) LCs, polymer-stabilized LCs, photo-reactive LC polymers, or any combination thereof. The LCs may include nematic LCs, twist-bend LCs, chiral nematic LCs, smectic LCs, or any combination thereof. In some embodiments, the optical film 800 may be a polymer layer including a birefringent photo-refractive holographic material other than LCs, such as an amorphous polymer. The optical film 800 may have a first surface 815-1 on one side and a second surface 815-2 on an opposite side. The first surface 815-1 and the second surface 815-2 may be surfaces along the light propagating path of the incident light 802. In some embodiments, the first surface 815-1 may be an interface between the optical film 800 and a substrate (e.g., the substrate may be the first layer 452 shown in
The birefringent medium 815 in the optical film 800 may include optically anisotropic molecules (e.g., LC molecules) configured with a three-dimensional (“3D”) orientational pattern. In some embodiments, an optic axis of the birefringent medium 815 or optical film 800 may be configured with a spatially varying orientation in at least one in-plane direction. For example, the optic axis of the LC material may periodically or non-periodically vary in at least one in-plane linear direction, in at least one in-plane radial direction, in at least one in-plane circumferential (e.g., azimuthal) direction, or a combination thereof. The LC molecules may be configured with an in-plane orientation pattern, in which the directors of the LC molecules may periodically or non-periodically vary in the at least one in-plane direction. In some embodiments, the optic axis of the LC material may also be configured with a spatially varying orientation in an out-of-plane direction. The directors of the LC molecules may also be configured with spatially varying orientations in an out-of-plane direction. For example, the optic axis of the LC material (or directors of the LC molecules) may twist in a helical fashion in the out-of-plane direction.
As shown in
In addition, within a film plane of the optical film 800, the orientations of the directors of the LC molecules 812 may exhibit a rotation in a predetermined rotation direction, e.g., a clockwise direction or a counter-clockwise direction. Accordingly, the rotation of the orientations of the directors of the LC molecules 812 within a film plane of the optical film 800 may exhibit a handedness, e.g., right handedness or left handedness. In the embodiment shown in
Although not shown, in some embodiments, within the film plane of the optical film 800, the orientations of the directors of the LC molecules 812 may exhibit a rotation in a counter-clockwise direction. Accordingly, the rotation of the orientations of the directors of the LC molecules 812 within the film plane of the optical film 800 may exhibit a right handedness. Although not shown, in some embodiments, within the film plane of the optical film 800, domains in which the orientations of the directors of the LC molecules 812 exhibit a rotation in a clockwise direction (referred to as domains DL) and domains in which the orientations of the directors of the LC molecules 812 exhibit a rotation in a counter-clockwise direction (referred to as domains DR) may be alternatingly arranged in at least one in-plane direction, e.g., in x-axis and y-axis directions.
As shown in
As shown in
The in-plane orientation patterns of the LC directors shown in
In the embodiment shown in
As shown in
As shown in
In the embodiment shown in
In the embodiment shown in
In the embodiment shown in
In some embodiments, the alignment structure 910 may be a PAM layer, and the alignment pattern provided by the PAM layer may be formed via any suitable approach, such as holographic interference, laser direct writing, ink-jet printing, or various other forms of lithography. The PAM layer may include a polarization sensitive material (e.g., a photo-alignment material) that can have a photo-induced optical anisotropy when exposed to a polarized light irradiation. Molecules (or fragments) and/or photo-products of the polarization sensitive material may be configured to generate an orientational ordering under the polarized light irradiation. For example, the polarization sensitive material may be dissolved in a solvent to form a solution. The solution may be dispensed on the substrate 905 using any suitable solution dispensing process, e.g., spin coating, slot coating, blade coating, spray coating, or jet (ink-jet) coating or printing. The solvent may be removed from the coated solution using a suitable process, e.g., drying, or heating, thereby leaving the polarization sensitive material on the substrate 905.
The polarization sensitive material may be optically patterned via the polarized light irradiation, to form the alignment pattern corresponding to a predetermined in-plane orientation pattern. In some embodiments, the polarization sensitive material may include elongated anisotropic photo-sensitive units (e.g., small molecules or fragments of polymeric molecules). After being subjected to a sufficient exposure of the polarized light irradiation, local alignment directions of the anisotropic photo-sensitive units may be induced in the polarization sensitive material, resulting in an alignment pattern (or in-plane modulation) of an optic axis of the polarization sensitive material.
In some embodiments, an entire layer of the polarization sensitive material may be formed on the substrate via a single dispensing process, and the layer of the polarization sensitive material may be subjected to the polarized light irradiation that has a substantially uniform intensity and spatially varying orientations (or polarization directions) of linear polarizations in a predetermined space in which the entire layer of the polarization sensitive material is disposed. In some embodiments, an entire layer of the polarization sensitive material may be formed on the substrate via a plurality of dispensing processes. For example, during a first time period, a first predetermined amount of the polarization sensitive material may be dispensed at a first location of the substrate 905, and exposed to a first polarized light irradiation. During a second time period, a second predetermined amount of the polarization sensitive material may be dispensed at a second location of the substrate 905, and exposed to a second polarized light irradiation. The first polarized light irradiation may have a first uniform intensity, and a first linear polarization direction in a space in which the first predetermined amount of the polarization sensitive material is disposed. The second polarized light irradiation may have a second uniform intensity, and a second linear polarization direction in a space in which the second predetermined amount of the polarization sensitive material is disposed. The first uniform intensity and the second uniform intensity may be substantially the same. The first linear polarization direction and the second linear polarization direction may be substantially the same or different from one another. The processes may be repeated until a PAM layer that provides a desirable alignment pattern is obtained.
The substrate 905 may provide support and protection to various layers, films, and/or structures formed thereon. In some embodiments, the substrate 905 may also be transparent in the visible wavelength band (e.g., about 380 nm to about 900 nm). In some embodiments, the substrate 905 may also be at least partially transparent in at least a portion of the infrared (“IR”) band (e.g., about 900 nm to about 1 mm). The substrate 905 may include a suitable material that is at least partially transparent to lights of the above-listed wavelength ranges, such as, a glass, a plastic, a sapphire, or a combination thereof, etc. The substrate 905 may be rigid, semi-rigid, flexible, or semi-flexible. The substrate 905 may include a flat surface or a curved surface, on which the different layers or films may be formed. In some embodiments, the substrate 905 may be a part of another optical element or device (e.g., another opto-electrical element or device). For example, the substrate 905 may be a solid optical lens, a part of a solid optical lens, or a light guide (or waveguide), etc. In some embodiments, the substrate 905 may be a part of a functional device, such as a display screen.
After the alignment structure 910 is formed on the substrate 905, as shown in
In some embodiments, the birefringent medium may also include other ingredients, such as solvents, initiators (e.g., photo-initiators or thermal initiators), chiral dopants, or surfactants, etc. In some embodiments, the birefringent medium may not have an intrinsic or induced chirality. In some embodiments, the birefringent medium may have an intrinsic or induced chirality. For example, in some embodiments, the birefringent medium may include a host birefringent material and a chiral dopant doped into the host birefringent material at a predetermined concentration. The chirality may be introduced by the chiral dopant doped into the host birefringent material, e.g., chiral RMs doped into achiral RMs. In some embodiments, the birefringent medium may include a birefringent material having an intrinsic molecular chirality, and chiral dopants may not be doped into the birefringent material. The chirality of the birefringent medium may result from the intrinsic molecular chirality of the birefringent material. For example, the birefringent material may include chiral liquid crystal molecules, or molecules having one or more chiral functional groups.
In some embodiments, a birefringent medium may be dissolved in a solvent to form a solution. A suitable amount of the solution may be dispensed (e.g., coated, or sprayed, etc.) on the alignment structure 910 to form the birefringent medium layer 915, as shown in
In some embodiments, when the alignment structure 910 is the PAM layer, the RM molecules in the birefringent medium may be at least partially aligned along the local alignment directions of the anisotropic photo-sensitive units in the PAM layer to form the predetermined in-plane orientation pattern. Thus, the alignment pattern formed in the PAM layer (or the in-plane orientation pattern of the optic axis of the PAM layer) may be transferred to the birefringent medium layer 915. Such an alignment procedure may be referred to as a surface-mediated photo-alignment. The photo-alignment material for a surface-mediated photo-alignment may also be referred to as a surface photo-alignment material.
In some embodiments, after the optically anisotropic molecules (e.g., RM molecules) in the birefringent medium layer 915 are aligned by the alignment structure 910, the birefringent medium layer 915 may be heat treated (e.g., annealed) in a temperature range corresponding to a nematic phase of the RMs to enhance the alignments (or orientation pattern) of the RMs (not shown in
In some embodiments, after the RMs are aligned by the alignment structure 910, the RMs may be polymerized, e.g., thermally polymerized or photo-polymerized, to solidify and stabilize the orientational pattern of the optic axis of the birefringent medium layer 915. In some embodiments, as shown in
After the photo-sensitive polymer layer 1010 is formed on the substrate 905, as shown in
Molecules of the photo-sensitive polymer may include one or more polarization sensitive photo-reactive groups embedded in a main polymer chain or a side polymer chain. During the polarized light irradiation process of the photo-sensitive polymer layer 1010, a photo-alignment of the polarization sensitive photo-reactive groups may occur within (or in, inside) a volume of the photo-sensitive polymer layer 1010. Thus, a 3D polarization field provided by the polarized light irradiation 1020 may be directly recorded within (or in, inside) the volume of the photo-sensitive polymer layer 1010. In other words, the photo-sensitive polymer layer 1010 may be optically patterned to form a patterned photo-sensitive polymer layer (referred to as 1017 in
In some embodiments, the photo-sensitive polymer included in the photo-sensitive polymer layer 1010 may include an amorphous polymer, an LC polymer, etc. The molecules of the photo-sensitive polymer may include one or more polarization sensitive photo-reactive groups embedded in a main polymer chain or a side polymer chain. In some embodiments, the polarization sensitive photo-reactive group may include an azobenzene group, a cinnamate group, or a coumarin group, etc. In some embodiments, the photo-sensitive polymer may be an amorphous polymer, which may be initially optically isotropic prior to undergoing the polarized light irradiation 1020, and may exhibit an induced (e.g., photo-induced) optical anisotropy after being subjected to the polarized light irradiation 1020. In some embodiments, the photo-sensitive polymer may be an LC polymer, in which the birefringence and in-plane orientation pattern may be recorded due to an effect of photo-induced optical anisotropy. In some embodiments, the photo-sensitive polymer may be an LC polymer with a polarization sensitive cinnamate group embedded in a side polymer chain. In some embodiments, when the photo-sensitive polymer layer 1010 includes an LC polymer, the patterned photo-sensitive polymer layer 1017 may be heat treated (e.g., annealed) in a temperature range corresponding to a liquid crystalline state of the LC polymer to enhance the photo-induced optical anisotropy of the LC polymer (not shown in
Referring to
After the optical component 900 or 1000 is fabricated, the scattering property of the optical component 900 or 1000 may be tested using the system 400 disclosed herein. Based on the measured overall scattering and the methods described above, the respective contributions of the volumetric scattering and the surface scattering provided by various components of the optical component 900 or 1000 may be determined. Guidance may be provided based on the respective contributions. For example, when the volumetric scattering constitutes the primary contribution to the overall scattering, guidance may be provided to adjust the material formulation of the optical component 900 or 1000 to reduce the overall scattering. When the surface scattering constitutes the primary contribution to the overall scattering, guidance may be provided to polish the surfaces or apply other types of surface treatment to the surfaces to reduce the overall scattering.
In some embodiments, the present disclosure provides a system that includes a light source configured to emit a probing beam to illuminate an optical element. The system also includes a rotating structure to which the optical element is mounted. The system also includes a controller configured to control the rotating structure to rotate to change a tilt angle of the optical element with respect to a propagation direction of the probing beam within a predetermined tilting range. The system also includes an image sensor configured to receive one or more scattered beams output from the optical element illuminated by the probing beam, and generate a plurality of sets of speckle pattern image data when the optical element is arranged at a plurality of tilt angles within the predetermined tilting range. The controller is configured to process the plurality of sets of speckle pattern image data to determine respective weights of volumetric scattering and surface scattering in an overall scattering of the optical element.
In some embodiments, the image sensor is a camera sensor. In some embodiments, a position of the image sensor is fixed as the optical element is rotated to the respective tilt angles. In some embodiments, the controller is configured to process the plurality of sets of speckle pattern image data to determine a first weight of the surface scattering in the overall scattering of the optical element, and determine a second weight of the volumetric scattering in the overall scattering of the optical element based on the first weight of the surface scattering.
In some embodiments, for each set of speckle pattern image data associated with each tilt angle, the controller is configured to determine a correlation function of the set of speckle pattern image data with respect to a set of reference speckle pattern image data. In some embodiments, for each correlation function associated with each tilt angle, the controller is configured to determine a maximum correlation coefficient of the correlation function. In some embodiments, the controller is configured to determine, based on a plurality of maximum correlation coefficients associated with the plurality of tilt angles of the optical element, a tilt angle dependent correlation profile of the optical element, the tilt angle dependent correlation profile representing a relationship between the maximum correlation coefficients and the tilt angles.
In some embodiments, the optical element includes a surface scattering source that generates the surface scattering and a volumetric scattering source that generates the volumetric scattering, and the controller is configured to determine, based on the tilt angle dependent correlation profile, at least one tilt angle that is representative of a memory effect range of at least one of the surface scattering source or the volumetric scattering source.
In some embodiments, the controller is configured to determine, based on the at least one tilt angle that is representative of the memory effect range, respective weights of the surface scattering and the volumetric scattering in an overall scattering of the optical element.
In some embodiments, based on a plurality of maximum correlation coefficients associated with the plurality of tilt angles of the optical element, the controller is configured to: determine a first tilt angle starting from which the maximum correlation coefficient is smaller than a predetermined coefficient value.
In some embodiments, the controller is configured to: determine, based on a plurality of maximum correlation coefficients associated with the plurality of tilt angles of the optical element, a tilt angle dependent correlation profile of the optical element, the tilt angle dependent correlation profile representing a relationship between the maximum correlation coefficients and the tilt angles; and determine a second tilt angle based on the tilt angle dependent correlation profile of the optical element.
In some embodiments, the controller is configured to: determine a first weight of the surface scattering as the maximum correlation coefficient corresponding to the first tilt angle or as an average of the maximum correlation coefficients corresponding to a sub-range of tilt angles selected between the first tilt angle and the second tilt angle; and determine a second weight of the volumetric scattering as a difference between a maximum value of the plurality of maximum correlation coefficients and the first weight.
In some embodiments, the controller is configured to determine a plurality of exposure times of the image sensor for the plurality of tilt angles of the optical element. In some embodiments, with a light source that emits the probing beam turned on, the controller is configured to rotate the rotating structure to change the tilt angle of the optical element within the predetermined tilting range, and the image sensor is configured to generate each set of speckle pattern image data using an exposure time associated with each tilt angle.
In some embodiments, the present disclosure provides a method. The method includes illuminating, by a light source, an optical element mounted to a rotating structure with a probing beam. The method also includes controlling, by a controller, rotation of the rotating structure to change a tilt angle of the optical element with respect to a propagation direction of the probing beam within a predetermined tilting range. The method also includes generating, by an image sensor, a plurality of sets of speckle pattern image data when the optical element is arranged at a plurality of tilt angles within the predetermined tilting range. The method also includes processing, by the controller, the plurality of sets of speckle pattern image data to determine respective weights of volumetric scattering and surface scattering in an overall scattering of the optical element.
In some embodiments, processing, by the controller, the plurality of sets of speckle pattern image data to determine the respective weights of volumetric scattering and surface scattering in the overall scattering of the optical element includes: determining a first weight of the surface scattering in the overall scattering of the optical element; and determining a second weight of the volumetric scattering in the overall scattering of the optical element based on the first weight of the surface scattering.
In some embodiments, processing, by the controller, the plurality of sets of speckle pattern image data to determine the respective weights of volumetric scattering and surface scattering in the overall scattering of the optical element includes: for each set of speckle pattern image data corresponding to each tilt angle, determining a correlation function of the set of speckle pattern image data with respect to a set of reference speckle pattern image data; for each correlation function associated with each tilt angle, determining a maximum correlation coefficient of the correlation function, thereby obtaining a plurality of maximum correlation coefficients for the plurality of tilt angles; and determining a first tilt angle starting from which the maximum correlation coefficient is smaller than a predetermined coefficient value.
In some embodiments, the method also includes determining, based on the plurality of maximum correlation coefficients associated with the plurality of tilt angles of the optical element, a tilt angle dependent correlation profile of the optical element, the tilt angle dependent correlation profile representing a relationship between the maximum correlation coefficients and the tilt angles; and determining a second tilt angle based on the tilt angle dependent correlation profile of the optical element.
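The tilt angle dependent correlation profile is simply the sequence of maximum correlation coefficients ordered by tilt angle. The disclosure leaves open how the second tilt angle is extracted from that profile; the sketch below assumes, purely for illustration, a plateau-onset criterion (the smallest angle after which successive coefficients stop changing), with `second_plateau_angle` and `slope_tol` as hypothetical names.

```python
import numpy as np

def second_plateau_angle(tilt_angles, max_coeffs, slope_tol=1e-3):
    """Illustrative choice of a 'second tilt angle' from the tilt angle
    dependent correlation profile: the smallest tilt angle from which
    the profile has flattened, i.e., every subsequent coefficient-to-
    coefficient change is below slope_tol."""
    coeffs = np.asarray(max_coeffs, dtype=float)
    diffs = np.abs(np.diff(coeffs))
    for i in range(len(diffs)):
        if np.all(diffs[i:] < slope_tol):
            return tilt_angles[i]
    return tilt_angles[-1]  # profile never flattens within the range
```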
In some embodiments, processing, by the controller, the plurality of sets of speckle pattern image data to determine the respective weights of volumetric scattering and surface scattering in the overall scattering of the optical element also includes determining the first weight of the surface scattering as the maximum correlation coefficient corresponding to the first tilt angle or as an average of the maximum correlation coefficients corresponding to a sub-range of tilt angles selected between the first tilt angle and the second tilt angle; and determining the second weight of the volumetric scattering as a difference between a maximum value of the plurality of maximum correlation coefficients and the first weight.
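The weight determination above can likewise be sketched directly from its description: the surface-scattering weight is the average of the maximum correlation coefficients over a sub-range of tilt angles between the first and second tilt angles, and the volumetric weight is the difference between the peak coefficient and that surface weight. This is a minimal NumPy sketch with a hypothetical name (`scattering_weights`), not the disclosed implementation.

```python
import numpy as np

def scattering_weights(tilt_angles, max_coeffs, first_angle, second_angle):
    """Surface weight: mean of the maximum correlation coefficients over
    the tilt-angle sub-range [first_angle, second_angle].
    Volumetric weight: peak coefficient minus the surface weight."""
    tilt_angles = np.asarray(tilt_angles, dtype=float)
    max_coeffs = np.asarray(max_coeffs, dtype=float)
    lo, hi = sorted((first_angle, second_angle))
    in_range = (tilt_angles >= lo) & (tilt_angles <= hi)
    surface_weight = max_coeffs[in_range].mean()
    volumetric_weight = max_coeffs.max() - surface_weight
    return surface_weight, volumetric_weight
```

Because the two weights are defined to partition the peak coefficient, they sum to the maximum of the profile, consistent with treating surface and volumetric contributions as the two components of the overall scattering.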
Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware and/or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product including a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. In some embodiments, a hardware module may include hardware components such as a device, a system, an optical element, a controller, an electrical circuit, a logic gate, etc.
Further, when an embodiment illustrated in a drawing shows a single element, it is understood that the embodiment or an embodiment not shown in the figures but within the scope of the present disclosure may include a plurality of such elements. Likewise, when an embodiment illustrated in a drawing shows a plurality of such elements, it is understood that the embodiment or an embodiment not shown in the figures but within the scope of the present disclosure may include only one such element. The number of elements illustrated in the drawing is for illustration purposes only, and should not be construed as limiting the scope of the embodiment. Moreover, unless otherwise noted, the embodiments shown in the drawings are not mutually exclusive, and they may be combined in any suitable manner. For example, elements shown in one figure/embodiment but not shown in another figure/embodiment may nevertheless be included in the other figure/embodiment. In any optical device disclosed herein including one or more optical layers, films, plates, or elements, the numbers of the layers, films, plates, or elements shown in the figures are for illustrative purposes only. In other embodiments not shown in the figures, which are still within the scope of the present disclosure, the same or different layers, films, plates, or elements shown in the same or different figures/embodiments may be combined or repeated in various manners to form a stack.
Various embodiments have been described to illustrate the exemplary implementations. Based on the disclosed embodiments, a person having ordinary skill in the art may make various other changes, modifications, rearrangements, and substitutions without departing from the scope of the present disclosure. Thus, while the present disclosure has been described in detail with reference to the above embodiments, the present disclosure is not limited to the above-described embodiments. The present disclosure may be embodied in other equivalent forms without departing from the scope of the present disclosure. The scope of the present disclosure is defined in the appended claims.
This application claims the benefit of priority to U.S. Provisional Application No. 63/320,568, filed on Mar. 16, 2022. The content of the above-mentioned application is incorporated herein by reference in its entirety.
Number | Date | Country
---|---|---
63320568 | Mar 2022 | US