This disclosure relates to optical systems such as optical systems in electronic devices having displays.
Electronic devices can include displays that provide images near the eyes of a user. Such electronic devices are often virtual or augmented reality headsets whose displays have optical elements that allow users to view the displays.
If care is not taken, images presented by the displays can be washed out by bright ambient light. It can also be difficult to provide the displays with structures that meet desired levels of optical and mechanical performance.
An aspect of the disclosure provides an electronic device. The electronic device may include a waveguide configured to propagate first light, an optical coupler on the waveguide and configured to couple the first light out of the waveguide, and a tint layer overlapping the optical coupler and configured to pass second light to the waveguide. The tint layer may include a first substrate, a second substrate, an electrochromic gel between the first substrate and the second substrate, and a glass ring spacer between the first substrate and the second substrate, wherein the glass ring spacer extends around a lateral periphery of the electrochromic gel.
An aspect of the disclosure provides a display. The display may include a waveguide configured to propagate first light, an optical coupler on the waveguide and configured to couple the first light out of the waveguide, and an electrically adjustable tint layer overlapping the optical coupler and configured to pass second light to the waveguide. The electrically adjustable tint layer may include a first substrate, a second substrate, an electrochromic gel between the first substrate and the second substrate, a first edge seal between the first substrate and the second substrate and surrounding a lateral periphery of the electrochromic gel, and a second edge seal between the first and the second substrate and surrounding the lateral periphery of the electrochromic gel, the second edge seal being interposed between the first edge seal and the electrochromic gel.
An aspect of the disclosure provides an electronic device. The electronic device may include a waveguide configured to propagate first light, an optical coupler on the waveguide and configured to couple the first light out of the waveguide, and an electrically adjustable tint layer overlapping the optical coupler and configured to pass second light to the waveguide. The electrically adjustable tint layer can include a first substrate, a second substrate, an electrochromic gel between the first substrate and the second substrate, and an edge seal between the first substrate and the second substrate and surrounding a lateral periphery of the electrochromic gel, wherein the edge seal has a non-uniform width along the lateral periphery of the electrochromic gel.
An aspect of the disclosure provides an electronic device. The electronic device may include a waveguide configured to propagate first light, an optical coupler on the waveguide and configured to couple the first light out of the waveguide, and an electrically adjustable tint layer overlapping the optical coupler and configured to pass second light to the waveguide. The electrically adjustable tint layer can include a first substrate having a first lateral surface, a second substrate having a second lateral surface facing the first lateral surface, a ring of adhesive that couples the first lateral surface to the second lateral surface, a cavity in the first lateral surface, and an electrochromic gel between the first substrate and the second substrate and at least partially disposed within the cavity.
An aspect of the disclosure provides a display. The display can include a waveguide configured to propagate first light, an optical coupler on the waveguide and configured to couple the first light out of the waveguide, and an electrically adjustable tint layer overlapping the optical coupler and configured to pass second light to the waveguide. The electrically adjustable tint layer can include a first substrate having a first lateral surface, a first electrode on the first lateral surface, a second substrate having a second lateral surface facing the first lateral surface, a second electrode on the second lateral surface, an electrochromic gel between the first electrode and the second electrode, the electrochromic gel comprising a first redox species with a first optical absorptivity, a second redox species with a second optical absorptivity greater than the first optical absorptivity, and a third redox species with a third optical absorptivity less than the first optical absorptivity and less than the second optical absorptivity, the third redox species being configured to perform a same type of redox reaction as the second redox species, and a peripheral edge seal that couples the first lateral surface to the second lateral surface and that extends around a lateral periphery of the electrochromic gel.
System 10 of
The operation of system 10 may be controlled using control circuitry 16. Control circuitry 16 may include storage and processing circuitry for controlling the operation of system 10. Control circuitry 16 may include storage such as hard disk drive storage, nonvolatile memory (e.g., electrically programmable read-only memory configured to form a solid state drive), volatile memory (e.g., static or dynamic random-access memory), etc. Processing circuitry in control circuitry 16 may include one or more processors (e.g., microprocessors, microcontrollers, digital signal processors, baseband processors, etc.), power management units, audio chips, graphics processing units, application specific integrated circuits, and other integrated circuits. Software code may be stored in storage in control circuitry 16 and run on processing circuitry in control circuitry 16 to implement operations for system 10 (e.g., data gathering operations, operations involving the adjustment of components using control signals, image rendering operations to produce image content to be displayed for a user, etc.).
System 10 may include input-output circuitry such as input-output devices 12. Input-output devices 12 may be used to allow data to be received by system 10 from external equipment (e.g., a tethered computer, a portable device such as a handheld device or laptop computer, or other electrical equipment) and to allow a user to provide head-mounted device 10 with user input. Input-output devices 12 may also be used to gather information on the environment in which system 10 (e.g., head-mounted device 10) is operating. Output components in devices 12 may allow system 10 to provide a user with output and may be used to communicate with external electrical equipment. Input-output devices 12 may include sensors and other components 18 (e.g., image sensors for gathering images of real-world objects that are digitally merged with virtual objects on a display in system 10, accelerometers, depth sensors, light sensors, haptic output devices, speakers, batteries, wireless communications circuits for communicating between system 10 and external electronic equipment, etc.).
Projectors 26 may include liquid crystal displays, organic light-emitting diode displays, laser-based displays, or displays of other types. Projectors 26 may include light sources, emissive display panels, transmissive display panels that are illuminated with illumination light from light sources to produce image light, reflective display panels such as digital micromirror display (DMD) panels and/or liquid crystal on silicon (LCOS) display panels that are illuminated with illumination light from light sources to produce image light 30, etc.
Optical systems 22 may form lenses that allow a viewer (see, e.g., a viewer's eyes at eye box 24) to view images on display(s) 20. There may be two optical systems 22 (e.g., for forming left and right lenses) associated with respective left and right eyes of the user. A single display 20 may produce images for both eyes or a pair of displays 20 may be used to display images. In configurations with multiple displays (e.g., left and right eye displays), the focal length and positions of the lenses formed by system 22 may be selected so that any gap present between the displays will not be visible to a user (e.g., so that the images of the left and right displays overlap or merge seamlessly).
If desired, optical system 22 may contain components (e.g., an optical combiner formed from reflective components, diffractive components, a waveguide, a direct view optical combiner, etc.) to allow real-world light (sometimes referred to as world light) from real-world (external) objects such as real-world (external) object 28 to be combined optically with virtual (computer-generated) images such as virtual images in image light 30. In this type of system, which is sometimes referred to as an augmented reality system, a user of system 10 may view both real-world content (e.g., world light from object 28) and computer-generated content that is overlaid on top of the real-world content. Camera-based augmented reality systems may also be used in device 10 (e.g., in an arrangement in which a camera captures real-world images of object 28 and this content is digitally merged with virtual content at optical system 22).
System 10 may, if desired, include wireless circuitry and/or other circuitry to support communications with a computer or other external equipment (e.g., a computer that supplies display 20 with image content). During operation, control circuitry 16 may supply image content to display 20. The content may be remotely received (e.g., from a computer or other content source coupled to system 10) and/or may be generated by control circuitry 16 (e.g., text, other computer-generated content, etc.). The content that is supplied to display 20 by control circuitry 16 may be viewed by a viewer at eye box 24.
If desired, system 10 may include an optical sensor. The optical sensor may be used to gather optical sensor data associated with a user's eyes at eye box 24. The optical sensor may, for example, be a gaze tracking sensor that gathers optical sensor data such as gaze image data (gaze tracking image data or gaze tracking sensor data) from a user's eye at eye box 24. Control circuitry 16 may process the optical sensor data to identify and track the direction of the user's gaze in real time. Control circuitry 16 may perform any desired operations based on the tracked direction of the user's gaze over time.
As shown in
Infrared emitter(s) 8 may direct light 4 towards optical system 22. Optical system 22 may direct the light 4 emitted by infrared emitter(s) 8 towards eye box 24. Light 4 may reflect off portions (regions) of the user's eye at eye box 24 as reflected light 4R (sometimes referred to herein as reflected sensor light 4R, which is a reflected version of light 4). Optical system 22 may receive reflected light 4R and may direct reflected light 4R towards infrared sensor(s) 6. Infrared sensor(s) 6 may receive reflected light 4R from optical system 22 and may gather (e.g., generate, measure, sense, produce, etc.) optical sensor data in response to the received reflected light 4R. Infrared sensor(s) 6 may include an image sensor or camera (e.g., an infrared image sensor or camera), for example. Infrared sensor(s) 6 may include, for example, one or more image sensor pixels (e.g., arrays of image sensor pixels). The optical sensor data may include image sensor data (e.g., image data, infrared image data, one or more images, etc.). Infrared sensor(s) 6 may pass the optical sensor data to control circuitry 16 for further processing. Infrared sensor(s) 6 and infrared emitter(s) 8 may be omitted if desired.
If desired, waveguide 32 may also include one or more layers of holographic recording media (sometimes referred to herein as holographic media, grating media, or diffraction grating media) on which one or more diffractive gratings are recorded (e.g., holographic phase gratings, sometimes referred to herein as holograms, surface relief gratings, etc.). A holographic recording may be stored as an optical interference pattern (e.g., alternating regions of different indices of refraction) within a photosensitive optical material such as the holographic media. The optical interference pattern may create a holographic phase grating that, when illuminated with a given light source, diffracts light to create a three-dimensional reconstruction of the holographic recording. The holographic phase grating may be a non-switchable diffractive grating that is encoded with a permanent interference pattern or may be a switchable diffractive grating in which the diffracted light can be modulated by controlling an electric field applied to the holographic recording medium. Multiple holographic phase gratings (holograms) may be recorded within (e.g., superimposed within) the same volume of holographic medium if desired. The holographic phase gratings may be, for example, volume holograms or thin-film holograms in the grating medium. The grating medium may include photopolymers, gelatin such as dichromated gelatin, silver halides, holographic polymer dispersed liquid crystal, or other suitable holographic media.
Diffractive gratings on waveguide 32 may include holographic phase gratings such as volume holograms or thin-film holograms, meta-gratings, or any other desired diffractive grating structures. The diffractive gratings on waveguide 32 may also include surface relief gratings (SRGs) formed on one or more surfaces of the substrates in waveguide 32 (e.g., as modulations in thickness of a SRG medium layer). The diffractive gratings may, for example, include multiple multiplexed gratings (e.g., holograms) that at least partially overlap within the same volume of grating medium (e.g., for diffracting different colors of light and/or light from a range of different input angles at one or more corresponding output angles). Other light redirecting elements such as louvered mirrors may be used in place of diffractive gratings in waveguide 32 if desired.
As shown in
Optical system 22 may include one or more optical couplers (e.g., light redirecting elements) such as input coupler 34, cross-coupler 36, and output coupler 38. In the example of
Waveguide 32 may guide image light 30 down its length via total internal reflection. Input coupler 34 may be configured to couple image light 30 from projector 26 into waveguide 32 (e.g., within a total-internal reflection (TIR) range of the waveguide within which light propagates down the waveguide via TIR), whereas output coupler 38 may be configured to couple image light 30 from within waveguide 32 (e.g., propagating within the TIR range) to the exterior of waveguide 32 and towards eye box 24 (e.g., at angles outside of the TIR range). Input coupler 34 may include an input coupling prism, an edge or face of waveguide 32, a lens, a steering mirror or liquid crystal steering element, diffractive grating structures (e.g., volume holograms, SRGs, etc.), partially reflective structures (e.g., louvered mirrors), or any other desired input coupling elements.
As an example, projector 26 may emit image light 30 in direction +Y towards optical system 22. When image light 30 strikes input coupler 34, input coupler 34 may redirect image light 30 so that the light propagates within waveguide 32 via total internal reflection towards output coupler 38 (e.g., in direction +X within the TIR range of waveguide 32). When image light 30 strikes output coupler 38, output coupler 38 may redirect image light 30 out of waveguide 32 towards eye box 24 (e.g., back along the Y-axis). In implementations where cross-coupler 36 is formed on waveguide 32, cross-coupler 36 may redirect image light 30 in one or more directions as it propagates down the length of waveguide 32 (e.g., towards output coupler 38 from a direction of propagation as coupled into the waveguide by the input coupler). In redirecting image light 30, cross-coupler 36 may also perform pupil expansion on image light 30 in one or more directions. In expanding pupils of the image light, cross-coupler 36 may, for example, help to reduce the vertical size of waveguide 32 (e.g., in the Z direction) relative to implementations where cross-coupler 36 is omitted. Cross-coupler 36 may therefore sometimes also be referred to herein as pupil expander 36 or optical expander 36. If desired, output coupler 38 may also expand image light 30 upon coupling the image light out of waveguide 32.
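The TIR range referenced above is bounded by the waveguide's critical angle, given by Snell's law. As a brief illustrative sketch (the substrate index of 1.5 and an air surround are assumptions for illustration, not values from this disclosure):

```python
import math

def critical_angle_deg(n_waveguide: float, n_outside: float = 1.0) -> float:
    """Critical angle (degrees from the surface normal) beyond which light
    propagating inside the waveguide is totally internally reflected."""
    if n_outside >= n_waveguide:
        raise ValueError("TIR requires the waveguide index to exceed the outside index")
    return math.degrees(math.asin(n_outside / n_waveguide))

# Example: a glass-like waveguide (n ~ 1.5) surrounded by air.
theta_c = critical_angle_deg(1.5)  # roughly 41.8 degrees
```

Rays striking the waveguide surfaces at angles steeper than this (relative to the surface normal) remain guided, which is why input coupler 34 must redirect image light 30 into this angular range and output coupler 38 must redirect it back out of that range.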
Input coupler 34, cross-coupler 36, and/or output coupler 38 may be based on reflective and refractive optics or may be based on diffractive (e.g., holographic) optics. In arrangements where couplers 34, 36, and 38 are formed from reflective and refractive optics, couplers 34, 36, and 38 may include one or more reflectors (e.g., an array of micromirrors, partial mirrors, louvered mirrors, or other reflectors). In arrangements where couplers 34, 36, and 38 are based on diffractive optics, couplers 34, 36, and 38 may include diffractive gratings (e.g., volume holograms, surface relief gratings, etc.).
The example of
The operation of optical system 22 on image light 30 is shown in
Image light 30 may include images of virtual objects, sometimes referred to herein as virtual object images or simply as virtual objects. Projector 26 may receive image data that includes the virtual object images (e.g., pixels of image data at different pixel locations that form the virtual object images). Output coupler 38 may serve to overlay the virtual object images with world light from real-world object 28 within the field of view (FOV) of eye box 24. The control circuitry for system 10 may provide image data to projector 26 that places the virtual object images at desired locations within the FOV at eye box 24 (e.g., such that the virtual object images are overlaid with desired real-world objects in the scene/environment in front of system 10).
Optical system 22 may include one or more lenses 40 that overlap output coupler 38. For example, optical system 22 may include at least a first lens 40A and a second lens 40B. Lens 40B may be interposed between waveguide 32 and real-world object 28. Lens 40A may be interposed between waveguide 32 and eye box 24. Lenses 40 are transparent and allow world light from real-world object 28 to pass to eye box 24 for viewing by the user. At the same time, the user can view virtual object images directed out of waveguide 32 and through lens 40A to eye box 24. Lenses 40A and 40B may sometimes also be referred to herein as lens elements.
The strength (sometimes referred to as the optical power, power, or diopter) of lens 40A can be selected to place virtual object images in image light 30 at a desired image distance (depth) from eye box 24 (sometimes referred to herein as a virtual object distance, virtual object image distance, virtual image distance (VID), virtual object depth, virtual image depth, or image depth). For example, it may be desirable to place virtual objects (virtual object images) such as text, icons, moving images, characters, effects, or other content or features at a certain virtual image distance (e.g., to integrate the virtual object image within, onto, into, or around the real-world objects in front of system 10). The placement of the virtual object at that distance can be accomplished by appropriate selection of the strength of lens 40A. Lens 40A may be a negative lens for users whose eyes do not have refraction errors. The strength of lens 40A (e.g., a larger net negative power) can therefore be selected to adjust the distance (depth) of the virtual object. Lens 40A may therefore sometimes be referred to herein as bias lens 40A or bias− (B−) lens 40A.
If desired, lens 40B may have a complementary power value (e.g., a positive power with a magnitude that matches the magnitude of the negative power of lens 40A). Lens 40B may therefore sometimes be referred to herein as bias+ (B+) lens 40B, complementary lens 40B, or compensation lens 40B. For example, if lens 40A has a power of −2.0 diopter, lens 40B may have an equal and opposite power of +2.0 diopter. In this type of arrangement, the positive power of lens 40B cancels the negative power of lens 40A. As a result, the overall power of lenses 40A and 40B taken together will be 0 diopter. This allows a viewer to view real-world objects such as real-world object 28 without optical influence from lenses 40A and 40B. For example, a real-world object 28 located far away from system 10 (effectively at infinity) may be viewed as if lenses 40A and 40B were not present.
For a user with satisfactory uncorrected vision, this type of complementary lens arrangement therefore allows virtual objects to be placed in close proximity to the user (e.g., at a virtual image distance of 0.5-5 m, at least 0.1 m, at least 1 m, at least 2 m, less than 20 m, less than 10 m, less than 5 m, or other suitable near-to-midrange distance from device 10) while simultaneously allowing the user to view real-world objects without modification by the optical components of the optical system. For example, a real-world object located at a distance of 2 m from device 10 (e.g., a real-world object being labeled by a virtual text label at a virtual image distance of 2 m) will optically appear to be located 2 m from device 10. This is merely illustrative and, if desired, lenses 40A and 40B need not be complementary lenses (e.g., lenses 40A and 40B may have any desired optical powers).
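The relationship between bias-lens power and virtual image distance described above can be sketched numerically. The thin-lens approximation and the assumption that the waveguide delivers collimated image light are simplifications for illustration, not design details from this disclosure:

```python
def bias_power_for_vid(vid_meters: float) -> float:
    """Negative bias-lens power (diopters) that places collimated image light
    at the given virtual image distance, under the thin-lens approximation."""
    return -1.0 / vid_meters

def compensation_power(bias_power: float) -> float:
    """Equal-and-opposite power for the world-facing compensation lens, so
    that world light sees zero net power through the lens pair."""
    return -bias_power

# Example consistent with the text: a -2.0 diopter bias lens places the
# virtual image at 0.5 m, and a +2.0 diopter compensation lens leaves
# real-world objects unaffected (net 0 diopter).
p_bias = bias_power_for_vid(0.5)     # -2.0 diopters
p_comp = compensation_power(p_bias)  # +2.0 diopters
net = p_bias + p_comp                # 0.0 diopters for world light
```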
In addition, some users may require vision correction. Vision correction may be provided using tunable lenses, fixed (e.g., removable) lenses (sometimes referred to as supplemental lenses, vision correction lenses, removable lenses, or clip-on lenses), and/or by adjusting the optical power of lens 40A and/or lens 40B to implement the desired vision correction. In general, the vision correction imparted to the lens(es) may include corrections for ametropia (eyes with refractive errors) such as corrections for nearsightedness (myopia), corrections for farsightedness (hyperopia), corrections for astigmatism, corrections for skewed vision, corrections to help accommodate age-related reductions in the range of accommodation exhibited by the eyes (sometimes referred to as presbyopia), and/or corrections for other vision disorders.
Lenses 40A and 40B may be provided with any desired optical powers and any desired shapes (e.g., may be plano-convex lenses, plano-concave lenses, plano-freeform lenses, freeform-convex lenses, freeform-concave lenses, convex-concave lenses, freeform-freeform lenses, etc.). Implementations in which the optical power(s) of lenses 40A and/or 40B are fixed (e.g., upon manufacture) are described herein as an example. If desired, one or both of lenses 40A and/or 40B may be electrically adjustable to impart different optical powers or power profiles over time (e.g., lenses 40A and/or 40B may be adjustable/tunable liquid crystal lenses).
In some operating conditions, such as when system 10 is operated outdoors, in rooms with bright lighting, or in other environments having relatively high light levels, world light from real-world objects 28 can overpower or wash out virtual objects presented to eye box 24 in image light 30, thereby limiting the contrast and visibility of the virtual objects when viewed at eye box 24. To reduce the brightness of the world light and maximize the contrast of the images (virtual objects) in image light 30 when viewed at eye box 24, optical system 22 may include a light-absorbing layer such as tint layer 42. Tint layer 42 may be disposed within the optical path between real-world objects 28 and output coupler 38. The world light from real-world objects 28 may pass through tint layer 42 prior to reaching eye box 24 (e.g., tint layer 42 may transmit the world light without transmitting image light 30). Tint layer 42 may absorb some of the real-world light, thereby reducing its brightness and increasing the contrast of virtual objects in image light 30 at eye box 24. If desired, the tint layer may also absorb real-world light even when the virtual image is turned off, functioning like switchable sunglasses.
Tint layer 42 may be a fixed tint layer or may be a dynamically adjustable tint layer. When implemented as a fixed tint layer, tint layer 42 has a fixed transmission profile that absorbs the same amount of incident world light over time. Fixed tint layers may be formed from a polymer film containing dye and/or pigment (as an example). When implemented as a dynamically (electrically) adjustable tint layer, tint layer 42 has a dynamically (electrically) adjustable transmission profile. In these implementations, tint layer 42 may be controlled by control signals from control circuitry 16. Implementations in which tint layer 42 is a dynamically adjustable tint layer are described herein as an example. However, in general, tint layer 42 as described herein may be replaced with a fixed tint layer.
Electrically adjustable tint layers (sometimes referred to as electrically adjustable light modulators or electrically adjustable light modulator layers) may be formed from an organic or inorganic electrochromic light modulator layer or a guest-host liquid crystal light modulator layer. When implemented using organic electrochromic tint materials, the active tint materials in the tint layer may be formed from one or more polymer layers which change their absorption upon being oxidized or reduced by charge from adjacent electrodes, or the active tint materials in the tint layer may be made from one or more species of organic small molecules, which diffuse in a liquid or gel medium and change their absorption upon being oxidized or reduced by charge from adjacent electrodes. When implemented using inorganic electrochromic tint materials, the active tint materials may be formed from one or more metal oxides, which change their absorption upon being oxidized or reduced by charge from adjacent electrodes, and may include counter-ions. Implementations in which tint layer 42 includes electrochromic tint material such as a layer of cured electrochromic gel are described herein as an example.
During operation of system 10, the electrically adjustable tint layer may be dynamically placed in a high transmission mode (sometimes referred to herein as a clear state) when it is desired to enhance the visibility of real-world objects or in a lower transmission mode (sometimes referred to herein as a dark state) when it is desired to reduce scene brightness and thereby help enhance the viewability of image light from projector 26 (e.g., to allow virtual objects such as virtual objects in image light 30 to be viewed without being overwhelmed by bright environmental light). If desired, tint layer 42 may also be controlled to exhibit intermediate levels of transmission and/or transmission levels that vary across the field of view of eye box 24.
Tint layer 42 may be planar (e.g., having a lateral surface that lies in a flat plane) or may be curved (e.g., having a lateral surface that is curved and non-planar). Tint layer 42 may be disposed at any desired location within optical system 22 between real-world objects 28 (e.g., the scene in front of system 10) and output coupler 38 on waveguide 32. Device 10 may include multiple overlapping tint layers if desired.
Substrate 50B may overlap substrate 50A and may be mounted to substrate 50A. When mounted together, substrates 50A and 50B may define a cavity between substrate 50A and substrate 50B. The cavity may be filled with a layer of electrochromic tint material such as electrochromic gel 78. Electrochromic gel 78 may form the active area 56 of tint layer 42. Tint layer 42 may transmit light to waveguide 32 through active area 56 of tint layer 42 (e.g., while absorbing some of the light, providing the transmitted light with a desired color response, etc.).
Electrochromic gel 78 may be cured and/or solidified during manufacture of tint layer 42. Electrochromic gel 78 may sometimes also be referred to herein as electrochromic layer 78, electrochromic tint material 78, electrochromic material 78, or tint material 78. A peripheral ring of adhesive such as peripheral edge seal 58 may be used to laterally contain electrochromic gel 78 within active area 56 while helping to space substrate 50A apart from substrate 50B. Peripheral edge seal 58 may also serve to mount or adhere substrates 50A and 50B together.
As shown in
Tint layer 42 may include first and second transparent conductive layers (not shown in
Flexible printed circuit 60 may receive control signals such as different control voltages from control circuitry 16 (
In an illustrative configuration, electrochromic gel 78 and tint layer 42 may exhibit a variable amount of light transmission ranging continuously between a minimum level of TMIN and a maximum level of TMAX. The value of TMIN may be 5%, 10%, 15%, 20%, 2-15%, 3-25%, 5-40%, 10-30%, 10-25%, at least 3%, at least 6%, at least 15%, at least 20%, less than 35%, less than 25%, less than 15%, or other suitable minimum level sufficient to help reduce environmental (real-world) light during viewing of computer-generated images from projectors 26 in bright environmental lighting conditions. The value of TMAX may be at least 50%, at least 60%, 60-99%, 40-99.9%, 80-99%, 70-99%, 80-97%, at least 70%, at least 80%, at least 85%, at least 90%, at least 95%, less than 99.99%, less than 99%, or other suitable maximum level sufficiently transparent to allow a viewer to comfortably view real-world objects through tint layer 42 during situations where projectors 26 (
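Control circuitry 16 might select a transmission setpoint within [TMIN, TMAX] by scaling inversely with measured ambient brightness. The linear mapping and the illuminance thresholds below are illustrative assumptions rather than values from this disclosure:

```python
def tint_transmission(ambient_lux: float,
                      t_min: float = 0.10,
                      t_max: float = 0.95,
                      clear_lux: float = 100.0,
                      dark_lux: float = 10_000.0) -> float:
    """Map measured ambient illuminance to a tint-layer transmission
    setpoint clamped to the achievable range [t_min, t_max]. Bright scenes
    drive the layer toward its dark state (t_min); dim scenes toward its
    clear state (t_max). Thresholds here are hypothetical."""
    if ambient_lux <= clear_lux:
        return t_max
    if ambient_lux >= dark_lux:
        return t_min
    # Linear interpolation between the clear and dark thresholds.
    frac = (ambient_lux - clear_lux) / (dark_lux - clear_lux)
    return t_max - frac * (t_max - t_min)
```

Intermediate transmission levels, as mentioned below, fall out naturally from the interpolated region between the two thresholds.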
If desired, anti-reflective coatings (not shown) may be disposed on one or both of substrates 50A and 50B. In implementations where tint layer 42 is curved, substrates 50A and 50B may be curved. The example of
In general, it may be desirable for tint layer 42 to be as flat as possible (e.g., within the X-Z plane of
If desired, peripheral edge seal 58 may have a single uniform thickness T1 (e.g., as measured in the X-Z plane) along its length around the periphery of electrochromic gel 78. In general, configuring thickness T1 to be as small as possible may serve to minimize the amount of warpage imparted to tint layer 42 upon curing of peripheral edge seal 58. However, if desired, peripheral edge seal 58 may have multiple different thicknesses around the periphery (e.g., circumference) of active area 56. For example, peripheral edge seal 58 may have one or more thicker portions (regions or segments) 59 having a thickness T2 that is greater than thickness T1 (e.g., peripheral edge seal 58 may have an asymmetric amount of material along its length or about the periphery of electrochromic gel 78). Thicker portions 59 (sometimes referred to as edge seal tabs 59 or edge seal reservoirs 59) may, for example, help to counteract epoxy shrinkage on different sides of tint layer 42 during the manufacture and curing process of tint layer 42 (e.g., to balance the stress profile of tint layer 42 across its lateral area in the X-Z plane), thereby ensuring that tint layer 42 is as flat as possible. In addition, thicker portions 59 may help to increase the mechanical integrity with which substrates 50A and 50B are adhered together, for example.
Tint layer 42 may include a first electrode layer such as electrode 76B that is layered onto lateral surface 72 and that is interposed between substrate 50A and electrochromic gel 78. Tint layer 42 may also include a second electrode layer such as electrode 76A that is layered onto lateral surface 73 and that is interposed between substrate 50B and electrochromic gel 78. Flexible printed circuit 60 (
Tails 74A and 74B may provide voltages across electrodes 76A and 76B (through terminals 62) that cause materials in electrochromic gel 78 to perform an oxidation-reduction (redox) reaction. The redox reaction may configure electrochromic gel 78 to exhibit a desired level of optical transmission and/or to exhibit a desired color profile. The voltage may be changed over time to change the level of optical transmission and/or the color profile over time.
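The voltage needed across electrodes 76A and 76B to reach a given steady-state transmission is device-specific, so a driver might interpolate a per-unit calibration table. The table values and the helper below are hypothetical, shown only to illustrate the voltage-to-transmission mapping:

```python
# Hypothetical calibration measured during manufacture: each entry is
# (drive voltage in volts, steady-state transmission). Voltages ascend
# and transmission falls as the redox reaction darkens the gel.
CALIBRATION = [(0.0, 0.95), (0.4, 0.70), (0.8, 0.40), (1.2, 0.15)]

def voltage_for_transmission(target_t: float, calibration=CALIBRATION) -> float:
    """Linearly interpolate the calibration table to find the drive voltage
    expected to settle the electrochromic gel at the target transmission."""
    hi_t = calibration[0][1]   # most transmissive state (lowest voltage)
    lo_t = calibration[-1][1]  # least transmissive state (highest voltage)
    target_t = max(lo_t, min(hi_t, target_t))  # clamp to achievable range
    for (v0, t0), (v1, t1) in zip(calibration, calibration[1:]):
        if t1 <= target_t <= t0:
            frac = (t0 - target_t) / (t0 - t1)
            return v0 + frac * (v1 - v0)
    return calibration[-1][0]
```

In practice the control circuitry would update this voltage over time, since the gel's transmission settles gradually after each redox step.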
As shown in
Peripheral edge seal 58 may have thickness T (e.g., thickness T1 or T2 of
The example of
In these configurations, the material used to form peripheral edge seal 58B may be selected to exhibit maximal chemical compatibility with electrochromic gel 78, whereas the material used to form peripheral edge seal 58A may be selected to form a maximal barrier to oxygen (e.g., O2 gas) and/or water. Both peripheral edge seals 58A and 58B may exhibit relatively low cure shrinkage. In this way, the peripheral edge seals may be optimized to protect electrochromic gel 78 even if there is no single material that exhibits both adequate levels of chemical compatibility with electrochromic gel 78 and adequate levels of oxygen and water protection. As one example, peripheral edge seal 58A may be formed from polyisobutylene whereas peripheral edge seal 58B is formed from epoxy.
The example of
As shown in
Additionally or alternatively, substrate 50B may include a cavity for electrochromic gel 78.
As shown in
Lateral surface 73 (e.g., outside of cavity 88) may be mounted to lateral surface 72 of substrate 50A using peripheral edge seal 58 (e.g., a ring of adhesive, epoxy, polyisobutylene, a glass ring spacer, etc.). Peripheral edge seal 58 may be relatively thin in this configuration (e.g., 10 microns, 1-20 microns, 10-50 microns, etc.). Electrochromic gel 78 may fill cavity 88. Cavity 88 may serve to minimize the amount of adhesive (e.g., epoxy) required to adhere substrates 50A and 50B together (e.g., by increasing the thickness of substrate 50B around the lateral periphery of electrochromic gel 78), thereby minimizing cure warpage and thus maximizing the flatness of tint layer 42.
The example of
As shown in
In other configurations, spacer beads 90 may be formed from a different material than electrochromic gel 78. In these configurations, spacer beads 90 may remain present between substrates 50A and 50B after curing. The material of spacer beads 90 may be selected to exhibit a refractive index as close as possible to that of electrochromic gel 78 to minimize the visibility of spacer beads 90 to the user. In general, the arrangements of
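The benefit of index-matching spacer beads 90 to electrochromic gel 78 can be illustrated with the normal-incidence Fresnel reflectance at a bead/gel boundary; reflected light at that boundary is what makes a bead visible. The index values below are assumptions for illustration, not values from this disclosure.

```python
# Illustrative sketch: normal-incidence Fresnel reflectance at the interface
# between a spacer bead and the surrounding electrochromic gel. A matched
# index gives zero boundary reflection, minimizing bead visibility.

def normal_incidence_reflectance(n_bead, n_gel):
    """Fraction of light reflected at a bead/gel boundary at normal incidence."""
    return ((n_bead - n_gel) / (n_bead + n_gel)) ** 2

# Assumed example indices (hypothetical values):
r_mismatched = normal_incidence_reflectance(1.59, 1.50)  # mismatched bead
r_matched = normal_incidence_reflectance(1.50, 1.50)     # index-matched bead
```

Even a modest mismatch produces a small but nonzero reflection at every bead surface, which is why the passage calls for matching the bead index to the gel index as closely as possible.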
If desired, various manufacturing techniques may be employed to maximize the flatness of tint layer 42 after curing of electrochromic gel 78 and peripheral edge seal(s) 58. For example, tint layer 42 may be overfilled with electrochromic gel 78 at a relatively high temperature prior to curing. This may cause substrates 50A and 50B to bend outwards away from electrochromic gel 78 prior to curing. The electrochromic gel may then be cured and cooled, which may cause electrochromic gel 78 to shrink, reversing the bending of substrates 50A and 50B and leaving the substrates with a flat (planar) shape at room temperature after curing. As another example, during manufacture, electrochromic gel 78 may first be deposited as a freestanding layer onto substrate 50B (e.g., using a screen print process, an inkjet process, a slot die process, etc.). Peripheral edge seal 58 may then be deposited as a high-viscosity freestanding material onto substrate 50B, laterally surrounding electrochromic gel 78. Substrate 50A may then be placed on top of the electrochromic gel and the peripheral edge seal, and the peripheral edge seal may then be cured to form a flat tint layer 42.
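The overfill-and-shrink technique above amounts to simple volume bookkeeping: dispense enough extra gel at the elevated fill temperature that cure shrinkage and thermal contraction leave the cell exactly filled. A minimal sketch, with hypothetical shrinkage fractions (the disclosure gives no numeric values):

```python
# Illustrative arithmetic (assumed values): estimate the overfill volume
# needed so that post-cure shrinkage leaves the cell exactly filled and the
# substrates relax to a flat shape at room temperature.

def overfill_volume(cell_volume_ul, cure_shrinkage, thermal_contraction):
    """Volume (in microliters) to dispense at the fill temperature.

    cure_shrinkage and thermal_contraction are fractional volume losses
    from curing and from cooling to room temperature, respectively.
    """
    remaining_fraction = (1.0 - cure_shrinkage) * (1.0 - thermal_contraction)
    return cell_volume_ul / remaining_fraction

# Hypothetical example: a 100 uL cell, 3% cure shrinkage, 1% contraction.
needed = overfill_volume(100.0, cure_shrinkage=0.03, thermal_contraction=0.01)
```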
In general, electrochromic gel 78 may be formed from any desired electrochromic material. The electrochromic material may include at least a first redox active species A and a second redox active species B. If desired, a third redox species C may be added to the electrochromic material to tune the color response of the tint layer.
As shown in
In configurations where electrochromic gel 78 only includes two redox active species, A and B, the color response of tint layer 42 in the dark state (e.g., a state of minimal light transmission) is dictated by the relative molar absorptivities, the diffusion coefficients, and the total concentrations of species A and B. If the redox states of A and B in the dark state of tint layer 42 have very different molar absorptivities, then the dark state color of tint layer 42 will be dominated by the species with higher optical absorptivity. This can lead to an undesirable (e.g., non-neutral) color response in the dark state.
Such an undesirable color response cannot be resolved by simply reducing the concentration of the more-absorbing species. This is because every electron used to generate one of the species in the redox reaction must be used to generate the opposing species in a 1:1 ratio. To mitigate these issues and to produce a more neutral color response in the dark state, electrochromic gel 78 may further include a third redox species C. Species C may undergo the same type of redox reaction (oxidation or reduction) as whichever of species A or B absorbs more light at similar electric potentials (e.g., species C may undergo reduction if species A is more absorbing than species B or may undergo oxidation if species B is more absorbing than species A).
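The 1:1 charge-balance constraint and the role of species C can be illustrated with the Beer-Lambert law (A = ε·c·l). All molar absorptivities, concentrations, and the path length below are assumed values for illustration, not taken from this disclosure.

```python
# Illustrative sketch of the charge-balance argument above. Every electron
# that generates a colored form of A also generates a colored form of B, so
# the two appear at equal concentrations; adding species C (which shares the
# same reaction type as the more-absorbing species) splits that charge.

PATH_CM = 0.02  # assumed optical path length through the gel, in cm

def absorbance(epsilon, conc_molar, path_cm=PATH_CM):
    """Beer-Lambert absorbance A = epsilon * c * l."""
    return epsilon * conc_molar * path_cm

charge_molar = 0.05  # assumed concentration of colored forms from the redox charge

# Two-species gel: A (more absorbing) dominates the dark-state color.
a_only = absorbance(4000.0, charge_molar)  # colored form of species A
b_only = absorbance(1000.0, charge_molar)  # colored form of species B

# With species C sharing A's reaction type, the charge on that side is split,
# reducing the contribution of the strongly absorbing species:
a_with_c = absorbance(4000.0, charge_molar * 0.5)
c_with_c = absorbance(1500.0, charge_molar * 0.5)
```

In this sketch the combined A-side absorbance drops from 4.0 to 2.75, moving the dark state toward a more neutral balance against species B without violating the 1:1 electron constraint.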
In the example of
As used herein, the term “concurrent” means at least partially overlapping in time. In other words, first and second events are referred to herein as being “concurrent” with each other if at least some of the first event occurs at the same time as at least some of the second event (e.g., if at least some of the first event occurs during, while, or when at least some of the second event occurs). First and second events can be concurrent if the first and second events are simultaneous (e.g., if the entire duration of the first event overlaps the entire duration of the second event in time) but can also be concurrent if the first and second events are non-simultaneous (e.g., if the first event starts before or after the start of the second event, if the first event ends before or after the end of the second event, or if the first and second events are partially non-overlapping in time). As used herein, the term “while” is synonymous with “concurrent.”
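The definition above maps directly onto an interval-overlap test. A minimal sketch, with the event representation (numeric start/end times) assumed for illustration:

```python
# Illustrative sketch of the "concurrent" definition: two events are
# concurrent if their time intervals overlap at any point; they are
# simultaneous only if their entire durations coincide.

def concurrent(start1, end1, start2, end2):
    """True if the two events overlap at least partially in time."""
    return start1 < end2 and start2 < end1

def simultaneous(start1, end1, start2, end2):
    """True only if the events span the same entire duration."""
    return start1 == start2 and end1 == end2
```

Note that simultaneous events are always concurrent, but concurrent events need not be simultaneous, matching the definition in the text.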
As described above, one aspect of the present technology is the gathering and use of information such as information from input-output devices. The present disclosure contemplates that in some instances, data may be gathered that includes personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, Twitter IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, username, password, biometric information, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to deliver targeted content that is of greater interest to the user. Accordingly, use of such personal information data enables users to have control of the delivered content. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA), whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can select not to provide certain types of user data. In yet another example, users can select to limit the length of time user-specific data is maintained. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an application (“app”) that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Therefore, although the present disclosure broadly covers use of information that may include personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data.
Physical environment: A physical environment refers to a physical world that people can sense and/or interact with without aid of electronic systems. Physical environments, such as a physical park, include physical articles, such as physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment, such as through sight, touch, hearing, taste, and smell.
Computer-generated reality: in contrast, a computer-generated reality (CGR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system. In CGR, a subset of a person's physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the CGR environment are adjusted in a manner that comports with at least one law of physics. For example, a CGR system may detect a person's head turning and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to characteristic(s) of virtual object(s) in a CGR environment may be made in response to representations of physical motions (e.g., vocal commands). A person may sense and/or interact with a CGR object using any one of their senses, including sight, sound, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a 3D or spatial audio environment that provides the perception of point audio sources in 3D space. In another example, audio objects may enable audio transparency, which selectively incorporates ambient sounds from the physical environment with or without computer-generated audio. In some CGR environments, a person may sense and/or interact only with audio objects. Examples of CGR include virtual reality and mixed reality.
Virtual reality: A virtual reality (VR) environment refers to a simulated environment that is designed to be based entirely on computer-generated sensory inputs for one or more senses. A VR environment comprises a plurality of virtual objects with which a person may sense and/or interact. For example, computer-generated imagery of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in the VR environment through a simulation of the person's presence within the computer-generated environment, and/or through a simulation of a subset of the person's physical movements within the computer-generated environment.
Mixed reality: In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a mixed reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or a representation thereof, in addition to including computer-generated sensory inputs (e.g., virtual objects). On a virtuality continuum, a mixed reality environment is anywhere between, but not including, a wholly physical environment at one end and a virtual reality environment at the other end. In some MR environments, computer-generated sensory inputs may respond to changes in sensory inputs from the physical environment. Also, some electronic systems for presenting an MR environment may track location and/or orientation with respect to the physical environment to enable virtual objects to interact with real objects (that is, physical articles from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary with respect to the physical ground. Examples of mixed realities include augmented reality and augmented virtuality.

Augmented reality: an augmented reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment, or a representation thereof. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present virtual objects on the transparent or translucent display, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. Alternatively, a system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment, which are representations of the physical environment.
The system composites the images or video with virtual objects, and presents the composition on the opaque display. A person, using the system, indirectly views the physical environment by way of the images or video of the physical environment, and perceives the virtual objects superimposed over the physical environment. As used herein, a video of the physical environment shown on an opaque display is called “pass-through video,” meaning a system uses one or more image sensor(s) to capture images of the physical environment, and uses those images in presenting the AR environment on the opaque display. Further alternatively, a system may have a projection system that projects virtual objects into the physical environment, for example, as a hologram or on a physical surface, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing pass-through video, a system may transform one or more sensor images to impose a select perspective (e.g., viewpoint) different than the perspective captured by the imaging sensors. As another example, a representation of a physical environment may be transformed by graphically modifying (e.g., enlarging) portions thereof, such that the modified portions may be representative but not photorealistic versions of the originally captured images. As a further example, a representation of a physical environment may be transformed by graphically eliminating or obfuscating portions thereof.

Augmented virtuality: an augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer-generated environment incorporates one or more sensory inputs from the physical environment.
The sensory inputs may be representations of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but people with faces photorealistically reproduced from images taken of physical people. As another example, a virtual object may adopt a shape or color of a physical article imaged by one or more imaging sensors. As a further example, a virtual object may adopt shadows consistent with the position of the sun in the physical environment.
Hardware: there are many different types of electronic systems that enable a person to sense and/or interact with various CGR environments. Examples include head mounted systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head mounted system may have one or more speaker(s) and an integrated opaque display. Alternatively, a head mounted system may be configured to accept an external opaque display (e.g., a smartphone). The head mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head mounted system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person's eyes. The display may utilize digital light projection, OLEDs, LEDs, μLEDs, liquid crystal on silicon, laser scanning light sources, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In one embodiment, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person's retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.
The foregoing is merely illustrative and various modifications can be made to the described embodiments. The foregoing embodiments may be implemented individually or in any combination.
This application claims the benefit of U.S. Provisional Patent Application No. 63/511,585, filed Jun. 30, 2023, which is hereby incorporated by reference herein in its entirety.
Number | Date | Country
---|---|---
63511585 | Jun 2023 | US