This disclosure relates to optical systems such as optical systems in electronic devices having displays.
Electronic devices can include displays that provide images near the eyes of a user. Such electronic devices include virtual and augmented reality headsets having displays with optical elements that allow users to view the displays. If care is not taken, components used to display images can be bulky and might not exhibit desired levels of optical performance. For example, coherent light paths in the display can produce destructive interference that reduces the efficiency of the display.
An electronic device may include a display having a waveguide that directs image light to an eye box. The waveguide may include an optical coupler that redirects and replicates the image light. The optical coupler may include one or more surface relief gratings (SRGs). The SRG may have a pitch that varies continuously along an axis orthogonal to the ridges of the SRG. The pitch may vary sinusoidally, linearly, parabolically, or according to other continuous and differentiable functions of position along the axis.
The SRG may diffract the image light. Upon diffracting the image light, the SRG may impart a phase to the image light. The phase may vary continuously as a function of position along a first axis and may, if desired, vary continuously as a function of position along a second axis orthogonal to the first axis. The SRG may, for example, exhibit a parabolic or paraboloid phase map. If desired, the ridges and troughs of the SRG may follow sinusoidal paths. The SRG may prevent formation of coherent light paths after replication, thereby maximizing the efficiency of the system. Continuously varying the pitch or phase may prevent the formation of smear artifacts in the image light associated with sharp boundaries between regions of different pitch or phase.
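As one illustrative way to express such a continuously varying pitch (a sketch only; the symbols P0, ΔP, R0, and L0 below are assumed here for illustration and are not limiting), the pitch P at position R along the axis may follow, for example, a sinusoidal, linear, or parabolic profile:

```latex
% Illustrative continuous pitch profiles along position R
% (P_0: nominal pitch, \Delta P: chirp amplitude, L_0: lateral extent, R_0: offset -- assumed symbols)
\begin{aligned}
P_{\text{sinusoidal}}(R) &= P_0 + \tfrac{1}{2}\,\Delta P\,\sin\!\left(\tfrac{2\pi R}{L_0}\right),\\
P_{\text{linear}}(R)     &= P_0 + \Delta P\,\tfrac{R}{L_0},\\
P_{\text{parabolic}}(R)  &= P_0 + \Delta P\left(\tfrac{R - R_0}{L_0}\right)^{2}.
\end{aligned}
```

Each of these profiles is continuous and differentiable at every position R, consistent with the continuous chirp described above.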
System 10 of
The operation of system 10 may be controlled using control circuitry 16. Control circuitry 16 may include storage and processing circuitry for controlling the operation of system 10. Circuitry 16 may include storage such as hard disk drive storage, nonvolatile memory (e.g., electrically-programmable read-only memory configured to form a solid state drive), volatile memory (e.g., static or dynamic random-access memory), etc. Processing circuitry in control circuitry 16 may be based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio chips, graphics processing units, application specific integrated circuits, and other integrated circuits. Software code may be stored on storage in circuitry 16 and run on processing circuitry in circuitry 16 to implement operations for system 10 (e.g., data gathering operations, operations involving the adjustment of components using control signals, image rendering operations to produce image content to be displayed for a user, etc.).
System 10 may include input-output circuitry such as input-output devices 12. Input-output devices 12 may be used to allow data to be received by system 10 from external equipment (e.g., a tethered computer, a portable device such as a handheld device or laptop computer, or other electrical equipment) and to allow a user to provide head-mounted device 10 with user input. Input-output devices 12 may also be used to gather information on the environment in which system 10 (e.g., head-mounted device 10) is operating. Output components in devices 12 may allow system 10 to provide a user with output and may be used to communicate with external electrical equipment. Input-output devices 12 may include sensors and other components 18 (e.g., image sensors for gathering images of real-world objects that are digitally merged with virtual objects on a display in system 10, accelerometers, depth sensors, light sensors, haptic output devices, speakers, batteries, wireless communications circuits for communicating between system 10 and external electronic equipment, etc.).
Projectors 26 may include liquid crystal displays, organic light-emitting diode displays, laser-based displays, or displays of other types. Projectors 26 may include light sources, emissive display panels, transmissive display panels that are illuminated with illumination light from light sources to produce image light, reflective display panels such as digital micromirror display (DMD) panels and/or liquid crystal on silicon (LCOS) display panels that are illuminated with illumination light from light sources to produce image light 30, etc.
Optical systems 22 may form lenses that allow a viewer (see, e.g., a viewer's eyes at eye box 24) to view images on display(s) 20. There may be two optical systems 22 (e.g., for forming left and right lenses) associated with respective left and right eyes of the user. A single display 20 may produce images for both eyes or a pair of displays 20 may be used to display images. In configurations with multiple displays (e.g., left and right eye displays), the focal length and positions of the lenses formed by system 22 may be selected so that any gap present between the displays will not be visible to a user (e.g., so that the images of the left and right displays overlap or merge seamlessly).
If desired, optical system 22 may contain components (e.g., an optical combiner, etc.) to allow real-world light 31 (sometimes referred to herein as world light 31 or ambient light 31) produced and/or reflected from real-world objects 28 (sometimes referred to herein as external objects 28) to be combined optically with virtual (computer-generated) images such as virtual images in image light 30. In this type of system, which is sometimes referred to as an augmented reality system, a user of system 10 may view both real-world content and computer-generated content that is overlaid on top of the real-world content. Camera-based augmented reality systems may also be used in device 10 (e.g., in an arrangement in which a camera captures real-world images of external objects and this content is digitally merged with virtual content at optical system 22).
System 10 may, if desired, include wireless circuitry and/or other circuitry to support communications with a computer or other external equipment (e.g., a computer that supplies display 20 with image content). During operation, control circuitry 16 may supply image content to display 20. The content may be remotely received (e.g., from a computer or other content source coupled to system 10) and/or may be generated by control circuitry 16 (e.g., text, other computer-generated content, etc.). The content that is supplied to display 20 by control circuitry 16 may be viewed by a viewer at eye box 24.
If desired, waveguide 32 may also include one or more layers of holographic recording media (sometimes referred to herein as holographic media, grating media, or diffraction grating media) on which one or more diffractive gratings are recorded (e.g., holographic phase gratings, sometimes referred to herein as holograms, surface relief gratings, etc.). A holographic recording may be stored as an optical interference pattern (e.g., alternating regions of different indices of refraction) within a photosensitive optical material such as the holographic media. The optical interference pattern may create a holographic phase grating that, when illuminated with a given light source, diffracts light to create a three-dimensional reconstruction of the holographic recording. The holographic phase grating may be a non-switchable diffractive grating that is encoded with a permanent interference pattern or may be a switchable diffractive grating in which the diffracted light can be modulated by controlling an electric field applied to the holographic recording medium. Multiple holographic phase gratings (holograms) may be recorded within (e.g., superimposed within) the same volume of holographic medium if desired. The holographic phase gratings may be, for example, volume holograms or thin-film holograms in the grating medium. The grating medium may include photopolymers, gelatin such as dichromated gelatin, silver halides, holographic polymer dispersed liquid crystal, or other suitable holographic media.
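As a brief illustration of how such a holographic phase grating stores an interference pattern (a standard textbook description rather than a requirement of this disclosure; n0, Δn, K, and φ are assumed symbols), the refractive index of the recorded medium may be modeled as a sinusoidal modulation about a background index:

```latex
% Sinusoidal index modulation of a recorded volume phase grating
% (n_0: background index, \Delta n: index modulation depth, \vec{K}: grating vector, \phi: recording phase)
n(\vec{r}) = n_0 + \Delta n\,\cos\!\left(\vec{K}\cdot\vec{r} + \phi\right)
```

The modulation depth Δn and the thickness of the grating medium together determine how strongly light satisfying the corresponding Bragg condition is diffracted.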
Diffractive gratings on waveguide 32 may include holographic phase gratings such as volume holograms or thin-film holograms, meta-gratings, or any other desired diffractive grating structures. The diffractive gratings on waveguide 32 may also include surface relief gratings (SRGs) formed on one or more surfaces of the substrates in waveguide 32 (e.g., as modulations in thickness of a SRG medium layer), gratings formed from patterns of metal structures, etc. The diffractive gratings may, for example, include multiple multiplexed gratings (e.g., holograms) that at least partially overlap within the same volume of grating medium (e.g., for diffracting different colors of light and/or light from a range of different input angles at one or more corresponding output angles). Other light redirecting elements such as louvered mirrors may be used in place of diffractive gratings in waveguide 32 if desired.
As shown in
Optical system 22 may include one or more optical couplers (e.g., light redirecting elements) such as input coupler 34, cross-coupler 36, and output coupler 38. In the example of
Waveguide 32 may guide image light 30 down its length via total internal reflection. Input coupler 34 may be configured to couple image light 30 from projector 26 into waveguide 32 (e.g., within a total-internal reflection (TIR) range of the waveguide within which light propagates down the waveguide via TIR), whereas output coupler 38 may be configured to couple image light 30 from within waveguide 32 (e.g., propagating within the TIR range) to the exterior of waveguide 32 and towards eye box 24 (e.g., at angles outside of the TIR range). Input coupler 34 may include an input coupling prism, an edge or face of waveguide 32, a lens, a steering mirror or liquid crystal steering element, diffractive grating structures (e.g., volume holograms, SRGs, etc.), partially reflective structures (e.g., louvered mirrors), or any other desired input coupling elements.
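As a rough sketch of the in-coupling condition (using the standard one-dimensional grating equation; the symbols θin, θd, Λ, m, λ, nair, and nwg are assumed here for illustration and the sign convention is arbitrary), a diffractive input coupler may redirect incident image light into an angle that lies within the TIR range of the waveguide:

```latex
% Grating equation for the m-th diffracted order at the input coupler, and the TIR condition
% (n_{air}\approx 1: index outside the waveguide, n_{wg}: waveguide index,
%  \Lambda: grating period, \lambda: free-space wavelength, angles measured from the surface normal)
n_{wg}\,\sin\theta_{d} = n_{air}\,\sin\theta_{in} + m\,\frac{\lambda}{\Lambda},
\qquad
\sin\theta_{d} > \frac{1}{n_{wg}}\ \ \text{(total internal reflection)}
```

In words, the grating period Λ may be chosen so that the diffracted angle θd exceeds the critical angle of the waveguide for the wavelengths and field angles of interest.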
As an example, projector 26 may emit image light 30 in direction +Y towards optical system 22. When image light 30 strikes input coupler 34, input coupler 34 may redirect image light 30 so that the light propagates within waveguide 32 via total internal reflection towards output coupler 38 (e.g., in direction +X within the TIR range of waveguide 32). When image light 30 strikes output coupler 38, output coupler 38 may redirect image light 30 out of waveguide 32 towards eye box 24 (e.g., back along the Y-axis). In implementations where cross-coupler 36 is formed on waveguide 32, cross-coupler 36 may redirect image light 30 in one or more directions as it propagates down the length of waveguide 32 (e.g., towards output coupler 38 from a direction of propagation as coupled into the waveguide by the input coupler). In redirecting image light 30, cross-coupler 36 may also perform pupil expansion on image light 30 in one or more directions. In expanding pupils of the image light, cross-coupler 36 may, for example, help to reduce the vertical size of waveguide 32 (e.g., in the Z direction) relative to implementations where cross-coupler 36 is omitted. Cross-coupler 36 may therefore sometimes also be referred to herein as pupil expander 36 or optical expander 36. If desired, output coupler 38 may also expand image light 30 upon coupling the image light out of waveguide 32.
Input coupler 34, cross-coupler 36, and/or output coupler 38 may be based on reflective and refractive optics or may be based on diffractive (e.g., holographic) optics. In arrangements where couplers 34, 36, and 38 are formed from reflective and refractive optics, couplers 34, 36, and 38 may include one or more reflectors (e.g., an array of micromirrors, partial mirrors, louvered mirrors, or other reflectors). In arrangements where couplers 34, 36, and 38 are based on diffractive optics, couplers 34, 36, and 38 may include diffractive gratings (e.g., volume holograms, surface relief gratings, etc.).
The example of
Waveguide 32 may be provided with a surface relief grating (SRG) such as surface relief grating 74. SRG 74 may be included in cross-coupler 36 or as part of an optical coupler that performs the operations of both cross-coupler 36 and output coupler 38 (e.g., a diamond expander or interleaved coupler), for example. SRG 74 may be formed within a substrate such as a layer of SRG substrate 76 (sometimes referred to herein as medium 76, medium layer 76, SRG medium 76, or SRG medium layer 76). While only a single SRG 74 is shown in SRG substrate 76 in
SRG 74 may include peaks 78 and troughs 80 in the thickness of SRG substrate 76. Peaks 78 may sometimes also be referred to herein as ridges 78 or maxima 78. Troughs 80 may sometimes also be referred to herein as notches 80, slots 80, grooves 80, or minima 80. In the example of
The example of
If desired, multiple SRGs 74 may be distributed across multiple layers of SRG substrate, as shown in the example of
If desired, waveguide 32 may include one or more substrates having regions that include diffractive gratings for input coupler 34, cross-coupler 36, and/or output coupler 38 and having regions that are free from diffractive gratings.
As shown in
For example, substrate(s) 89 may include a first diffractive grating structure 88A (sometimes referred to herein as grating structure 88A or grating(s) 88A) formed from a first set of one or more overlapping SRGs 74 (
Diffractive grating structures 88A, 88B, and 88C may each form respective optical couplers for waveguide 32. For example, diffractive grating structure 88A may form input coupler 34 for waveguide 32. Diffractive grating structure 88B may form cross-coupler (e.g., pupil expander) 36 on waveguide 32. Diffractive grating structure 88C may form output coupler 38 for waveguide 32. Diffractive grating structure 88A may therefore couple a beam 92 of image light 30 into waveguide 32 and towards diffractive grating structure 88B. Diffractive grating structure 88B may redirect image light 30 towards diffractive grating structure 88C and may optionally perform pupil expansion on image light 30 (e.g., may split image light 30 into multiple paths to form a larger beam that covers the eye pupil and forms a more uniform image). Diffractive grating structure 88C may couple image light 30 out of waveguide 32 and towards the eye box. If desired, diffractive grating structure 88C may also perform pupil expansion on image light 30.
Substrate(s) 89 and thus waveguide 32 may also include one or more regions 90 that are free from diffractive grating structures 88, diffractive gratings, or optical couplers. Regions 90 may, for example, be free from ridges 78 and troughs 80 of any SRGs (
Each diffractive grating structure 88 in substrate(s) 89 may span a corresponding lateral area of substrate(s) 89. The lateral area spanned by each diffractive grating structure 88 is defined (bounded) by the lateral edge(s) 94 of that diffractive grating structure 88. Lateral edges 94 may separate or divide the portions of substrate(s) 89 that include thickness modulations used to form one or more SRG(s) in diffractive grating structures 88 from the non-diffractive regions 90 on substrate(s) 89. In other words, lateral edges 94 may define the boundaries between diffractive grating structures 88 and non-diffractive regions 90. Diffractive grating structures 88A, 88B, and 88C may have any desired lateral shapes (e.g., as defined by lateral edges 94).
The example of
The magnitude of grating vector K1 corresponds to the widths and spacings (e.g., the period) of the ridges 78 and troughs 80 (fringes) in SRG 74A, as well as to the wavelengths of light diffracted by the SRG. The magnitude of grating vector K2 corresponds to the widths and spacings (e.g., the period) of the ridges 78 and troughs 80 in SRG 74B, as well as to the wavelengths of light diffracted by the SRG. Surface relief gratings generally have a wide bandwidth. The bandwidth of SRGs 74A and 74B may encompass each of the wavelengths in image light 30, for example (e.g., the entire visible spectrum, a portion of the visible spectrum, portions of the infrared or near-infrared spectrum, some or all of the visible spectrum and a portion of the infrared or near-infrared spectrum, etc.). The magnitude of grating vector K2 may be equal to the magnitude of grating vector K1 or may be different from the magnitude of grating vector K1. While illustrated within the plane of the page of
SRG 74A at least partially overlaps SRG 74B in optical coupler 109 (e.g., at least some of the ridges and troughs of each SRG spatially overlap or are superimposed within the same volume of SRG substrate). If desired, the strength of SRG 74A and/or SRG 74B may be modulated in the vertical direction (e.g., along the Z-axis) and/or in the horizontal direction (e.g., along the X-axis). If desired, one or both of SRGs 74A and 74B may have a strength (e.g., modulation depth) that decreases to zero within peripheral regions 108A and 108B of the field of view, which may help to mitigate the production of rainbow artifacts.
Input coupler 34 (
SRG 74A and SRG 74B may be formed in the same layer of SRG substrate 76 or may be disposed in separate layers of SRG substrate that are disposed on opposing lateral surfaces of waveguide 32 or substrate 89. In another suitable implementation, waveguide 32 or substrate 89 includes a first layer of SRG substrate on a first lateral surface and a second layer of SRG substrate on a second lateral surface opposite the first lateral surface, where the first layer of SRG substrate includes SRGs 74A and 74B and the second layer of SRG substrate includes additional overlapping/crossed SRGs (e.g., SRGs similar to SRGs 74A and 74B).
One or more of the SRG(s) on waveguide 32 (e.g., SRGs 74A and 74B of
As shown in
SRG 74 is characterized by a corresponding grating pitch P. Pitch P is defined by the lateral distance/separation between adjacent ridges 78 at a surface of the corresponding SRG substrate. SRG 74 is also characterized by a grating vector oriented in direction r, orthogonal to ridges 78 (e.g., orthogonal to the lines of constant SRG substrate thickness in SRG 74). Spatial position along direction r is sometimes denoted herein as position R.
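For reference (a standard relationship rather than a limitation of this disclosure), the magnitude of the grating vector is inversely proportional to the local pitch:

```latex
% Grating vector magnitude as a function of the local pitch P(R) along direction r
\left|\vec{K}(R)\right| = \frac{2\pi}{P(R)}
```

A spatial variation in pitch P is therefore equivalently a spatial variation in the magnitude of the grating vector along direction r.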
In some implementations, pitch P is constant across the lateral area of SRG 74. In these implementations, the diffraction of image light 30 by SRG 74 causes image light 30 to follow two optical/light paths of nearly the same path length that are then recombined (e.g., a first path from point 114 to point 116 via an arrow 110 and then an arrow 112 and a second path from point 114 to point 116 via an arrow 112 and then an arrow 110). This effectively creates a network of Mach-Zehnder interferometers across the lateral area of SRG 74. If the phase relationship between the image light following the first path and the image light following the second path is not tightly controlled, phase differences between the first and second paths can produce destructive interference when the light from the first path is recombined with the light from the second path at point 116. The destructive interference can reduce the amount of image light 30 that reaches the eye box, thereby limiting the modulation transfer function (MTF) and/or efficiency of the display. It would therefore be desirable to be able to control the relative phases between the light paths in SRG 74 in a manner that minimizes destructive interference and thus maximizes spatial uniformity and angular image uniformity at the eye box.
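The sensitivity of the recombined light to the relative phase between the two paths can be illustrated with the standard two-beam interference relation (I1, I2, and Δφ are assumed symbols for the two path intensities and their accumulated phase difference):

```latex
% Intensity at the recombination point for two mutually coherent paths
I_{\text{out}} = I_{1} + I_{2} + 2\sqrt{I_{1} I_{2}}\,\cos(\Delta\varphi),
\qquad \Delta\varphi = \varphi_{1} - \varphi_{2}
```

When Δφ approaches an odd multiple of π, the cosine term subtracts from the sum and the recombined beam is dimmed, which is the destructive interference described above.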
To maximize spatial uniformity and angular image uniformity at the eye box, SRG 74 may be configured to cause the first and second path lengths (e.g., from point 114 to point 116) to be different (e.g., across all of the region(s) of the SRG 74 that are used in pupil replication). One way of achieving this is by perturbing (e.g., chirping) the phase of the SRG and thus the phase imparted to the image light diffracted by the SRG over the pupil replication region(s). SRG 74 may, for example, have a pitch P and/or an angle that is spatially varied (chirped) across its lateral area by a small percentage that causes the ridges of the SRG to slip in and out of phase relative to a constant pitch SRG.
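One way to quantify this slipping (a sketch under the assumption of a small fractional chirp; P(r') denotes the local pitch and P0 a nominal constant pitch) is the accumulated grating phase difference relative to an unchirped grating:

```latex
% Accumulated phase slip of a chirped SRG relative to a constant-pitch SRG with pitch P_0
\Delta\phi(R) = 2\pi \int_{0}^{R} \left(\frac{1}{P(r')} - \frac{1}{P_{0}}\right) dr'
```

Even a chirp of a small percentage, accumulated over many grating periods, is sufficient for Δφ(R) to sweep through a full 2π, so replicated light paths that traverse different portions of the SRG no longer remain phase-matched everywhere.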
SRG 74 has a variable pitch P that varies from a maximum pitch P2 to a minimum pitch P1 at spatial positions R along direction r. Portion 120 in
Plot 122 of
To mitigate these issues, SRG 74 may be provided with a pitch P that varies continuously as a function of position R along direction r and grating vector K (oriented perpendicular to ridges 78), e.g., that varies smoothly without discrete or non-differentiable jumps in pitch P from R=0 to R=L0. In other words, the pitch P of SRG 74 is continuously varied/changed or continuously chirped as a function of spatial position. If desired, the pitch P of SRG 74 may continuously vary in a periodic manner between pitch P1 and pitch P2 as a function of position R. For example, pitch P may vary sinusoidally between pitch P1 and pitch P2 from R=0 to R=L0, as shown by curve 128. Unlike square waveform 126, curve 128 is continuous and differentiable at all points between R=0 and R=L0, thereby mitigating the formation of smear artifacts and maximizing MTF.
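The difference between a discretely stepped pitch and a continuously varying (e.g., sinusoidal) pitch can be sketched numerically as follows (a minimal illustration only; the pitch values, lateral extent, and number of chirp periods below are hypothetical and are not taken from this disclosure):

```python
# Minimal numerical sketch (illustration only; P1, P2, L0, and N_PERIODS are hypothetical).
# It compares a square-wave pitch profile, which jumps abruptly between P1 and P2,
# with a sinusoidal profile that varies continuously between the same limits.
import numpy as np

P1 = 0.38e-6       # assumed minimum pitch P1 (meters)
P2 = 0.42e-6       # assumed maximum pitch P2 (meters)
L0 = 5.0e-3        # assumed lateral extent L0 of the SRG along direction r (meters)
N_PERIODS = 4      # assumed number of chirp periods across L0

R = np.linspace(0.0, L0, 10001)   # positions R along direction r
dR = R[1] - R[0]
P0 = 0.5 * (P1 + P2)              # nominal (mean) pitch
dP = 0.5 * (P2 - P1)              # chirp amplitude

# Square-wave chirp: discrete, non-differentiable jumps between P2 and P1.
P_square = np.where(np.sin(2.0 * np.pi * N_PERIODS * R / L0) >= 0.0, P2, P1)

# Sinusoidal chirp: continuous and differentiable at every position R.
P_sine = P0 + dP * np.sin(2.0 * np.pi * N_PERIODS * R / L0)

def phase_slip(pitch_profile):
    """Accumulated grating phase relative to a constant-pitch grating with pitch P0."""
    return 2.0 * np.pi * np.cumsum(1.0 / pitch_profile - 1.0 / P0) * dR

slip_square = phase_slip(P_square)
slip_sine = phase_slip(P_sine)

# The square-wave profile yields slope discontinuities (kinks) in the accumulated phase
# at every pitch boundary; the sinusoidal profile yields a smooth phase curve.
print("peak |phase slip|, square-wave chirp: %.2f rad" % np.max(np.abs(slip_square)))
print("peak |phase slip|, sinusoidal chirp:  %.2f rad" % np.max(np.abs(slip_sine)))
```

In this sketch, the abrupt pitch boundaries of the square-wave profile appear as kinks in the accumulated phase, which correspond to the sharp boundaries associated with smear artifacts, whereas the sinusoidal profile produces a smooth phase variation.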
Ideally, the percentage by which pitch P is spatially chirped causes the SRG to slip in and out of phase relative to the constant pitch P0 (see portion 120 of
The example of
The example of
In the example of
Portion 120 of
In the example of
Curve 136 of
If desired, SRG 74 may exhibit a parabolic phase profile along the Z-axis in addition to along the X-axis (e.g., by also parabolically varying pitch P as shown by curves 136 or 137 of
Curve 146 plots the phase imparted to image light 30 upon diffraction by the SRG at different positions along axis Z when the SRG has a pitch P with a parabolic variation along the axis Z and centered along axis Z (e.g., at L2/2). Curve 148 plots the phase imparted to image light 30 upon diffraction by the SRG at different positions along axis Z when the SRG has a pitch P with the parabolic variation along the axis Z that is offset from the center of width L2. In other words, the parabolic chirping of pitch P across an axis orthogonal to direction r in SRG 74 configures SRG 74 to exhibit a corresponding parabolic phase map along axis Z (e.g., imparting the diffracted image light 30 with different phases or phase shifts as given by curve 146). When combined with the parabolic phase map along axis X (e.g., as given by curve 138 of
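One simple way to express the combined result (an illustrative parameterization only; the symbols φ0, a, b, x0, and z0 are assumed here for illustration) is a paraboloid phase map whose curvatures along the two axes are set by the parabolic pitch chirps:

```latex
% Illustrative paraboloid phase map produced by parabolic chirp along both axes
% (a, b: curvature coefficients; (x_0, z_0): vertex location; \phi_0: constant offset)
\phi(x, z) \approx \phi_{0} + a\,(x - x_{0})^{2} + b\,(z - z_{0})^{2}
```

where the vertex (x0, z0) may be centered within the grating area or offset from its center, as in the centered and offset variations described above.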
Physical environment: A physical environment refers to a physical world that people can sense and/or interact with without aid of electronic systems. Physical environments, such as a physical park, include physical articles, such as physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment, such as through sight, touch, hearing, taste, and smell.
Computer-generated reality: In contrast, a computer-generated reality (CGR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system. In CGR, a subset of a person's physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the CGR environment are adjusted in a manner that comports with at least one law of physics. For example, a CGR system may detect a person's head turning and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to characteristic(s) of virtual object(s) in a CGR environment may be made in response to representations of physical motions (e.g., vocal commands). A person may sense and/or interact with a CGR object using any one of their senses, including sight, sound, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a 3D or spatial audio environment that provides the perception of point audio sources in 3D space. In another example, audio objects may enable audio transparency, which selectively incorporates ambient sounds from the physical environment with or without computer-generated audio. In some CGR environments, a person may sense and/or interact only with audio objects. Examples of CGR include virtual reality and mixed reality.
Virtual reality: A virtual reality (VR) environment refers to a simulated environment that is designed to be based entirely on computer-generated sensory inputs for one or more senses. A VR environment comprises a plurality of virtual objects with which a person may sense and/or interact. For example, computer-generated imagery of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in the VR environment through a simulation of the person's presence within the computer-generated environment, and/or through a simulation of a subset of the person's physical movements within the computer-generated environment.
Mixed reality: In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a mixed reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or a representation thereof, in addition to including computer-generated sensory inputs (e.g., virtual objects). On a virtuality continuum, a mixed reality environment is anywhere between, but not including, a wholly physical environment at one end and a virtual reality environment at the other end. In some MR environments, computer-generated sensory inputs may respond to changes in sensory inputs from the physical environment. Also, some electronic systems for presenting an MR environment may track location and/or orientation with respect to the physical environment to enable virtual objects to interact with real objects (that is, physical articles from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary with respect to the physical ground. Examples of mixed realities include augmented reality and augmented virtuality.
Augmented reality: An augmented reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment, or a representation thereof. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present virtual objects on the transparent or translucent display, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. Alternatively, a system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment, which are representations of the physical environment. The system composites the images or video with virtual objects, and presents the composition on the opaque display. A person, using the system, indirectly views the physical environment by way of the images or video of the physical environment, and perceives the virtual objects superimposed over the physical environment. As used herein, a video of the physical environment shown on an opaque display is called “pass-through video,” meaning a system uses one or more image sensor(s) to capture images of the physical environment, and uses those images in presenting the AR environment on the opaque display. Further alternatively, a system may have a projection system that projects virtual objects into the physical environment, for example, as a hologram or on a physical surface, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing pass-through video, a system may transform one or more sensor images to impose a select perspective (e.g., viewpoint) different than the perspective captured by the imaging sensors. As another example, a representation of a physical environment may be transformed by graphically modifying (e.g., enlarging) portions thereof, such that the modified portions may be representative but not photorealistic versions of the originally captured images. As a further example, a representation of a physical environment may be transformed by graphically eliminating or obfuscating portions thereof.
Augmented virtuality: An augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer-generated environment incorporates one or more sensory inputs from the physical environment. The sensory inputs may be representations of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but people with faces photorealistically reproduced from images taken of physical people. As another example, a virtual object may adopt a shape or color of a physical article imaged by one or more imaging sensors. As a further example, a virtual object may adopt shadows consistent with the position of the sun in the physical environment.
Hardware: there are many different types of electronic systems that enable a person to sense and/or interact with various CGR environments. Examples include head mounted systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head mounted system may have one or more speaker(s) and an integrated opaque display. Alternatively, a head mounted system may be configured to accept an external opaque display (e.g., a smartphone). The head mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head mounted system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person's eyes. The display may utilize digital light projection, OLEDs, LEDs, μLEDs, liquid crystal on silicon, laser scanning light sources, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In one embodiment, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person's retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.
As used herein, the term “concurrent” means at least partially overlapping in time. In other words, first and second events are referred to herein as being “concurrent” with each other if at least some of the first event occurs at the same time as at least some of the second event (e.g., if at least some of the first event occurs during, while, or when at least some of the second event occurs). First and second events can be concurrent if the first and second events are simultaneous (e.g., if the entire duration of the first event overlaps the entire duration of the second event in time) but can also be concurrent if the first and second events are non-simultaneous (e.g., if the first event starts before or after the start of the second event, if the first event ends before or after the end of the second event, or if the first and second events are partially non-overlapping in time). As used herein, the term “while” is synonymous with “concurrent.”
System 10 may gather and/or use personally identifiable information. It is well understood that the use of personally identifiable information should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. In particular, personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users.
The foregoing is merely illustrative and various modifications can be made to the described embodiments. The foregoing embodiments may be implemented individually or in any combination.
This application claims the benefit of U.S. Provisional Patent Application No. 63/583,085, filed Sep. 15, 2023, which is hereby incorporated by reference herein in its entirety.