The present disclosure relates generally to augmented reality (AR) eyewear, which fuses a view of the real world with a heads-up display overlay. Wearable heads-up displays (WHUDs) are wearable electronic devices that use optical combiners to combine real world and virtual images. The optical combiner may be integrated with one or more lenses to provide a combiner lens that may be fitted into a support frame of a WHUD. In operation, the combiner lens provides a virtual display that is viewable by a user when the WHUD is worn on the head of the user. One class of optical combiner uses a waveguide (also termed a lightguide) to transfer light. In general, light from a projector of the WHUD enters the waveguide of the combiner through an incoupler, propagates along the waveguide via total internal reflection (TIR), and exits the waveguide through an outcoupler. If the pupil of the eye is aligned with one or more exit pupils provided by the outcoupler, at least a portion of the light exiting through the outcoupler will enter the pupil of the eye, thereby enabling the user to see a virtual image. Because the combiner lens is transparent, the user will also be able to see the real world.
Embodiments are described herein in which a virtual image (a projected image of a virtual object) is displayed to a user via a system that includes a light engine to generate a display light representing the virtual image, a diffractive waveguide, and one or more processors communicatively coupled to the light engine. A first component light of the generated display light is converged via the diffractive waveguide at a first focal distance from an eye of the user, and one or more additional component lights of the generated display light are converged at one or more distinct other focal distances from the eye of the user. The virtual image is modified to compensate for a perceived distortion of at least one component light of the virtual image resulting from the disparate focal distances.
In certain embodiments, a system for displaying a virtual image to a user comprises a light engine to generate a display light representing the virtual image; a diffractive waveguide to converge a first component light of the generated display light at a first focal distance from an eye of the user, and to converge one or more additional component lights of the generated display light at one or more distinct other focal distances from the eye of the user; and one or more processors communicatively coupled to the light engine and configured to modify the virtual image in order to compensate for a perceived distortion of at least one additional component light of the one or more additional component lights.
The first component light of the generated display light may include a first color component light. The one or more additional component lights may include at least one of a group that includes a second color component light of the generated display light or a third color component light of the generated display light.
The first color component light of the generated display light may include a green component light having a wavelength of between 495 nm and 570 nm.
To modify the virtual image may include to perform one or more deconvolution operations on the virtual image. The one or more deconvolution operations may be performed utilizing a blur kernel associated with the at least one additional component light. The blur kernel may be generated based on one or more measurements of an angular spread of the at least one additional component light at the first focal distance. The blur kernel may be generated based on modeling an angular spread of the at least one additional component light at the first focal distance.
The one or more additional component lights may include a blue component light and a red component light, such that a first blur kernel may be generated based on an angular spread of the blue component light at the first focal distance, and such that a second blur kernel may be generated based on a second angular spread of the red component light at the first focal distance.
The system may further comprise an incoupler optically coupled to the diffractive waveguide, the incoupler to receive the display light from the light engine and to direct the received display light to the diffractive waveguide; and an outcoupler optically coupled to the diffractive waveguide, the outcoupler to direct at least a portion of the display light from the diffractive waveguide to an eye of the user.
The system may further comprise a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to modify the virtual image prior to the light engine generating the display light.
In certain embodiments, the system may be included within a wearable heads-up display (WHUD).
In certain embodiments, a method for displaying a virtual image to a user may comprise receiving, by one or more processors, the virtual image; modifying, by the one or more processors, the virtual image via one or more preprocessing operations to generate a modified virtual image; generating, by a light engine, a display light representing the modified virtual image; receiving the display light from the light engine and directing the received display light to a diffractive waveguide; and directing at least a portion of the display light from the diffractive waveguide to an eye of the user. The directing of the portion of the display light may include converging a first component light of the generated display light at a first focal distance from the eye of the user, and converging one or more additional component lights of the generated display light at one or more distinct other focal distances from the eye of the user.
The first component light of the generated display light may include a first color component light, such that the one or more additional component lights includes at least one of a group that comprises a second color component light of the generated display light or a third color component light of the generated display light. The first color component light of the generated display light may comprise a green component light having a wavelength of between 495 nm and 570 nm.
Modifying the virtual image via the one or more preprocessing operations may include performing one or more deconvolution operations on the virtual image. Performing the one or more deconvolution operations may include utilizing a blur kernel associated with the at least one additional component light.
The method may further include generating the blur kernel based on one or more measurements of an angular spread of the at least one additional component light at the first focal distance.
The method may further include generating the blur kernel based on modeling an angular spread of the at least one additional component light at the first focal distance.
The one or more additional component lights may include a blue component light and a red component light, such that the method further comprises generating a first blur kernel based on an angular spread of the blue component light at the first focal distance, and generating a second blur kernel based on an angular spread of the red component light at the first focal distance.
The method may further comprise receiving, at an incoupler optically coupled to the diffractive waveguide, display light from the light engine; directing the received display light to the diffractive waveguide; and directing, via an outcoupler optically coupled to the diffractive waveguide, at least a portion of the display light from the diffractive waveguide to an eye of the user.
The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items. It will be appreciated that unless specifically indicated, aspects of the accompanying drawings are not presented to scale and are not to be assumed to be so presented.
At least in part because diffractive waveguide architecture typically relies on light entering and exiting the waveguide being collimated, a WHUD using such waveguides is typically designed to display an image that appears to a user's eye to exist at an infinite distance from the user (as opposed to real world objects located closer to the user), such as how stars appear when viewing the night sky. Although this is a relaxed position for the eye, the infinite distance presents a problem when trying to overlay a virtual image upon the user's perceived image of the surrounding real world, as the eye attempts to simultaneously focus on and interpret real world objects being perceived at a finite distance and the virtual image being presented at an infinite distance.
Previous attempts to remedy this issue involve placing a physical lens with positive optical power between the eye and the waveguide, thereby causing the image to display at a finite distance, typically around two meters. (As used herein, optical power refers to a degree to which a lens, mirror, or other optical system converges or diverges light.) However, in order for the perceived image of the real world to be unaffected by the corresponding distortion, an additional compensating physical lens (with an equal but opposite optical power as the first lens) may be placed on the opposite side of the waveguide. While the resulting architecture typically succeeds in “distance shifting” the virtual display, it utilizes correspondingly larger and heavier components, which is typically disfavored for WHUD and other wearable devices.
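The two-meter figure follows from the thin-lens relation between optical power and image distance. The following is a minimal sketch of that relation, not part of the disclosure; it assumes the thin-lens approximation, and the function names are illustrative:

```python
import math

# Sketch (thin-lens approximation): a lens of optical power P diopters
# forms the distance-shifted virtual image at d = 1/P meters.
# Function names here are illustrative, not from the disclosure.

def focal_distance_m(optical_power_diopters: float) -> float:
    """Perceived image distance for a given display-side optical power."""
    if optical_power_diopters <= 0.0:
        return math.inf  # collimated light appears at infinity
    return 1.0 / optical_power_diopters

def compensating_power(display_side_power: float) -> float:
    """Equal-but-opposite world-side lens power that leaves the
    perceived real-world view undistorted."""
    return -display_side_power
```

Under this model, a +0.5 diopter display-side element shifts the virtual image to roughly the two meters mentioned above, and the world-side compensating lens carries the equal but opposite -0.5 diopters.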
Alternatively, optical power may be directly applied via a waveguide exit pupil expander's diffractive outcoupler. In certain scenarios and embodiments, such optical power may be applied to the outcoupling grating by introducing a slight curvature to an otherwise linear diffraction grating. However, due to the manner in which the grating interacts with and affects light exiting the outcoupler, the resulting optical power does not equally affect light of different wavelengths. In particular, because the curvature of such an outcoupler grating disparately affects the angle at which the individual red, green, and blue (RGB) components of a displayed virtual image exit the outcoupler, those components will be perceived by an eye of the user as occurring at different focal distances unless the incorporating device includes multiple distinct waveguides (e.g., one or more for each of the red, green, and blue spectra). As with approaches that utilize multiple physical lenses to accomplish the desired distance shift of the virtual display, incorporating multiple distinct waveguides generally corresponds to larger and heavier devices, which as noted is disfavored for wearable devices.
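The wavelength dependence noted above can be illustrated with the first-order grating equation, sin(theta) = m * wavelength / pitch. This is a sketch under the assumptions of normal incidence and a hypothetical 800 nm grating pitch, not parameters from the disclosure:

```python
import math

# Sketch: the grating equation sin(theta) = m * wavelength / pitch
# shows why a single grating cannot redirect red, green, and blue
# light at the same angle. The 800 nm pitch is an illustrative
# assumption, not a disclosed parameter.

def diffraction_angle_deg(wavelength_nm: float, pitch_nm: float,
                          order: int = 1) -> float:
    """Diffraction angle for normally incident light of the given order."""
    return math.degrees(math.asin(order * wavelength_nm / pitch_nm))

PITCH_NM = 800.0  # hypothetical grating pitch
red_deg = diffraction_angle_deg(650.0, PITCH_NM)
green_deg = diffraction_angle_deg(555.0, PITCH_NM)
blue_deg = diffraction_angle_deg(450.0, PITCH_NM)
```

Because longer wavelengths diffract at steeper angles, curvature added to the grating effectively applies wavelength-dependent optical power, yielding the disparate focal distances described above.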
Embodiments of techniques presented herein provide optical power (such as for distance shift or other purposes) via an outcoupler grating of only a single waveguide. In certain embodiments, parameters of such an outcoupler grating may be selected in order to tune a focal distance of a full-color virtual image (one with red, green, and blue components) specifically for the peak of human photopic response, which is typically centered around green light having a wavelength of approximately 555 nm—the wavelength of light that the human visual system predominantly uses to perceive detail in an image.
From a radiometric standpoint, the red and blue light components of a virtual image provided via a single waveguide tuned in this manner may appear defocused or blurry. However, when considering the full human visual system (including the cognitive image processing of the brain), in many circumstances the image will appear sharp because the human visual system relies predominantly upon green light to determine sharpness and resolve detail. Thus, in this manner, optical power may be applied to the outcoupling region of a single-waveguide exit pupil expansion system while retaining a high degree of visual acuity for the user perceiving a resultant virtual image. In addition, embodiments of techniques herein may provide digital preprocessing operations to at least mitigate the perceived distortions resulting from the red light components and blue light components of a virtual image being converged at disparate focal distances. For example, a degree of defocus may be determined and used as the basis for generating one or more blur kernels for each of the red light components and blue light components. As used herein, a blur kernel refers to a two- or three-dimensional diffraction pattern generated by a collimated source of light passing through an optical path of a display system. The generated blur kernel(s) can then be used to modify those red light components and/or blue light components of a virtual image prior to that virtual image being provided to the user via the light engine and waveguide.
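The dominance of green in perceived brightness and sharpness can be illustrated with the standard Rec. 709 luma weights. These are video-standard values offered as an analogy, not values from this disclosure:

```python
# Sketch: Rec. 709 luma weights show how heavily human brightness
# perception is dominated by green, consistent with tuning the
# waveguide for the ~555 nm photopic peak. (Video-standard values,
# used here only as an illustration.)

REC709_LUMA = {"red": 0.2126, "green": 0.7152, "blue": 0.0722}

def luma(red: float, green: float, blue: float) -> float:
    """Perceived brightness of a linear RGB triple under Rec. 709."""
    return (REC709_LUMA["red"] * red
            + REC709_LUMA["green"] * green
            + REC709_LUMA["blue"] * blue)
```

Green alone carries more than twice the perceptual weight of red and blue combined, which is consistent with defocus confined to the red and blue components often going unnoticed.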
In the example of
In various embodiments, aspects of the example wearable display device may be modified from the depicted example in various ways. For example, in certain embodiments the orientation of the wearable display device 100 may be reversed, such that the display is presented to a left eye of a user instead of the right eye. The second arm 120 could carry a light engine similar to the light engine 111 carried by the first arm 110, and the front frame 130 could also carry another lens structure similar to the lens structure 135, such that wearable display device 100 presents a binocular display to both a right eye and a left eye of a user.
The light engine 111 and the display optics 131 include any appropriate display architecture for outputting light and redirecting the light to form a display to be viewed by a user. For example, in some embodiments, the light engine 111 and any of the light engines discussed herein include one or more instances of components selected from a group that includes at least one of: a projector, a scanning laser projector, a micro-display, a white-light source, or any other display technology as appropriate for a given application. The display optics 131 include one or more instances of optical components selected from a group that includes at least one of: a waveguide (references to which, as used herein, include and encompass both light guides and waveguides), a holographic optical element, a prism, a diffraction grating, a light reflector, a light reflector array, a light refractor, a light refractor array, or any other light-redirection technology as appropriate for a given application, positioned and oriented to redirect the AR content from the light engine 111 towards the eye of the user.
The lens structure 135 may include multiple lens layers, each of which may be disposed closer to an eye of the user than the display optics 131 (eye side) or further from the eye of the user than the display optics 131 (world side). A lens layer can for example be molded or cast, may include a thin film or coating, and may include one or more transparent carriers. A transparent carrier as described herein may refer to a material which acts to carry or support an optical redirector. As one example, a transparent carrier may be an eyeglasses lens or lens assembly. In addition, in certain embodiments one or more of the lens layers may be implemented as a contact lens.
Non-limiting example display architectures could include scanning laser projector and holographic optical element combinations, side-illuminated optical light guide displays, pin-light displays, or any other wearable heads-up display technology as appropriate for a given application. Various example display architectures are described in at least U.S. Provisional Patent Application No. 62/754,339, U.S. Provisional Patent Application Ser. No. 62/782,918, U.S. Provisional Patent Application Ser. No. 62/789,908, U.S. Provisional Patent Application Ser. No. 62/845,956, and U.S. Provisional Patent Application Ser. No. 62/791,514. The term light engine as used herein is not limited to referring to a singular light source but can also refer to a plurality of light sources, and can also refer to a light engine assembly. A light engine assembly may include some components which enable the light engine to function, or which improve operation of the light engine. As one example, a light engine may include a light source, such as a laser or a plurality of lasers. The light engine assembly may additionally include electrical components, such as driver circuitry to power the at least one light source. The light engine assembly may additionally include optical components, such as collimation lenses, a beam combiner, or beam shaping optics. The light engine assembly may additionally include beam redirection optics, such as at least one MEMS mirror, which can be operated to scan light from at least one laser light source, such as in a scanning laser projector. In the above example, the light engine assembly includes a light source and also components that take the output from at least one light source and produce conditioned display light to convey AR content.
All of the components in the light engine assembly may be included in a housing of the light engine assembly, affixed to a substrate of the light engine assembly, such as a printed circuit board or similar, or separately mounted components of a wearable heads-up display (WHUD). Certain light engine assemblies are discussed in U.S. Provisional Patent Application No. 62/916,297.
In the example of
In
The light engine 211 can output a display light 290 (simplified for this example) representative of AR content or other display content to be viewed by a user. The display light 290 can be redirected by the diffractive waveguide 235 towards an eye 291 of the user, such that the user can see the AR content. The display light 290 from the light engine 211 impinges on the incoupler 231 and is redirected to travel in a volume of the diffractive waveguide 235, where the display light 290 is guided through the waveguide, such as by total internal reflection (TIR) or surface treatments such as holograms or reflective coatings. Subsequently, the display light 290 traveling in the volume of the diffractive waveguide 235 impinges on the outcoupler 233, which redirects the display light 290 out of the diffractive waveguide 235 and towards the eye 291 of the user. Example WHUD display architectures are described in at least U.S. Provisional Patent Application No. 62/754,339, U.S. Provisional Patent Application Ser. No. 62/782,918, U.S. Provisional Patent Application Ser. No. 62/789,908, U.S. Provisional Patent Application Ser. No. 62/845,956, and U.S. Provisional Patent Application Ser. No. 62/791,514.
The wearable display device 200 may include a processor (not shown) that is communicatively coupled to each of the electrical components in the wearable display device 200, including but not limited to the light engine 211. The processor can be any suitable component which can execute instructions or logic, including but not limited to a micro-controller, microprocessor, multi-core processor, integrated circuit, ASIC, FPGA, programmable logic device, or any appropriate combination of these components. The wearable display device 200 can include a non-transitory processor-readable storage medium, which may store processor-readable instructions thereon, which when executed by the processor can cause the processor to execute any number of functions, including causing the light engine 211 to output the light 290 representative of display content to be viewed by a user, receiving user input, managing user interfaces, generating display content to be presented to a user, receiving and managing data from any sensors carried by the wearable display device 200, receiving and processing external data and messages, and any other functions as appropriate for a given application. The non-transitory processor-readable storage medium can be any suitable component that can store instructions, logic, or programs, including but not limited to non-volatile or volatile memory, read only memory (ROM), random access memory (RAM), FLASH memory, registers, magnetic hard disk, optical disk, or any combination of these components.
As noted elsewhere herein, additional waveguides may be associated with a generally undesirable increase in mass, size, and manufacturing complexity associated with an incorporating WHUD device. However, in certain embodiments it may be useful to include multiple waveguides tuned in the manner described above with respect to green light components of a virtual image. For example, the wearable display device 200 may in certain embodiments include a distinct waveguide for each of multiple focal planes desired to be viewed by a user of the WHUD device, such as to provide a first virtual image at a first focal distance from a user, and a second virtual image at a distinct second focal distance from the user. Thus, while various examples may be discussed herein with respect to a single waveguide and outcoupler grating for providing a virtual image at a single focal distance, it will be appreciated that in various embodiments multiple waveguides (and corresponding outcoupler gratings) may be utilized, such as each corresponding to a distinct focal distance.
In contrast to the outcoupler grating 401 of
It will be appreciated that the red light components and blue light components of the virtual image 610 are not actually blurry or otherwise distorted—they are merely perceived to be blurred (out of focus) due to the photopic response of the human visual system, which relies primarily on green light to detect detail and therefore automatically focuses on the focal plane at which the green light component of that virtual image appears. Therefore, the focal distance at which the resulting virtual image is perceived typically coincides with the focal plane at which its green light component appears sharpest.
In certain embodiments, the perceived blurriness or other distortion of red light components and blue light components of a virtual image that is tuned for green light wavelengths may be mitigated or effectively eliminated using image preprocessing techniques. As a non-limiting example, a processor of an incorporating WHUD device may compensate for a larger perceived profile of one or more objects in a virtual image comprising red and blue components by effectively modifying a size of the object(s) prior to the provision of (with reference to
The degree to which a user of a display system (such as wearable display device 100 of
A perfectly focused waveguide for a collimated light source causes no angular spread for that collimated light source—optimally, there is no difference between an angular width of the source light components entering the incoupler of the waveguide and the angular width of those light components exiting the outcoupler of the waveguide. Thus, in certain embodiments, a component-specific blur kernel may be generated for each of red light components 805 and blue light components 815 with respect to a particular waveguide (e.g., diffractive waveguide 235) by measuring the respective angular spread associated with each of those red light components and blue light components at the green light focal distance 890. In other embodiments, a similar component-specific blur kernel for each of red light components 805 and blue light components 815 may be generated via modeling, such that the angular spread of those light components may be estimated based on the physical parameters of the diffractive waveguide 235.
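One way the kernel generation described above might be sketched is shown below. The Gaussian blur profile and the pixels-per-radian display scale are modeling assumptions for illustration; the disclosure does not specify either:

```python
import math

# Sketch: converting a measured or modeled angular spread into a
# normalized 1-D blur kernel for one color component. The Gaussian
# profile and the pixels-per-radian scale are assumptions, not the
# disclosed measurement procedure.

def blur_kernel_1d(angular_spread_rad: float,
                   pixels_per_radian: float,
                   size: int = 9) -> list[float]:
    """Normalized Gaussian kernel whose width tracks the angular
    spread of a defocused color component."""
    sigma = max(angular_spread_rad * pixels_per_radian, 1e-6)
    half = size // 2
    weights = [math.exp(-((i - half) ** 2) / (2.0 * sigma ** 2))
               for i in range(size)]
    total = sum(weights)
    return [w / total for w in weights]

# Green (in focus) collapses to a near-delta; red/blue spread wider.
green_k = blur_kernel_1d(0.0, 2000.0)
red_k = blur_kernel_1d(0.001, 2000.0)
```

A component with zero angular spread yields a near-delta kernel (no perceived blur), while a defocused component yields a wider kernel that spreads its energy across neighboring pixels.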
The generated blur kernels can be used to establish a deconvolution algorithm for application to the red light components 805 and/or blue light components 815 of some or all of any AR content to be presented to the user. By applying this deconvolution algorithm based on the generated blur kernels to the virtual image to be projected by the light engine, the resulting modified virtual image preemptively compensates for the degree of defocus respectively corresponding to those red light components and/or blue light components.
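One common choice for such a deconvolution algorithm is a Wiener-style frequency-domain filter; the following is a sketch of that approach, in which the kernel, image size, and regularization constant are illustrative assumptions rather than disclosed values:

```python
import numpy as np

# Sketch: Wiener-style deconvolution of a single color channel using
# its blur kernel, one way to preemptively compensate for defocus.
# The kernel, image size, and noise_ratio are illustrative.

def _pad_kernel(kernel: np.ndarray, shape: tuple) -> np.ndarray:
    """Zero-pad `kernel` to `shape` and center it at the origin."""
    padded = np.zeros(shape, dtype=float)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    return np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def wiener_deconvolve(channel: np.ndarray, kernel: np.ndarray,
                      noise_ratio: float = 1e-4) -> np.ndarray:
    """Pre-sharpen `channel` so that, after the optical blur modeled
    by `kernel`, it lands near the intended image."""
    H = np.fft.fft2(_pad_kernel(kernel, channel.shape))
    G = np.fft.fft2(channel.astype(float))
    F = G * np.conj(H) / (np.abs(H) ** 2 + noise_ratio)
    return np.real(np.fft.ifft2(F))

def optical_blur(channel: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Forward model: circular convolution with the blur kernel."""
    H = np.fft.fft2(_pad_kernel(kernel, channel.shape))
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * H))

# Round trip: deconvolving and then blurring approximately recovers
# the intended image, i.e., the defocus has been precompensated.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
kernel = np.outer([0.2, 0.6, 0.2], [0.2, 0.6, 0.2])
precompensated = wiener_deconvolve(image, kernel)
recovered = optical_blur(precompensated, kernel)
```

In this sketch the waveguide's blur acts as the forward model, so feeding the precompensated channel through the optical path yields an image much closer to the original than an unmodified channel would appear.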
The generation of the blur kernels and the application of the resulting deconvolution algorithm may be performed as at least part of digital preprocessing operations, such as in order to at least partially compensate for the perceived distortion of the red light components and/or blue light components. Such digital preprocessing operations may be performed at any time prior to the AR content being presented to the user. For example, digital preprocessing operations to compensate for the perceived distortion of the red and/or blue light components may be performed via software instructions for one or more general processors (such as hardware processors 1002 of
As one example, the AR content to be presented to the user may include a geometric line with a finite width. As noted, the disparate respective focal distances (and resulting angular spread) associated with each of red light components 805, green light components 810, and blue light components 815 will cause the line to appear broader than originally intended for the AR content. However, the digital preprocessing operations described above may result in modifying the geometric line to be commensurately narrower. In this manner, the line will appear closer to its originally intended width, despite the perceived distortions of its red light components 805 and/or blue light components 815 resulting from their respectively associated focal distances 885 and 895.
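The line-width example above can be sketched in one dimension, assuming a hypothetical 3-tap blur standing in for the optical defocus of a red or blue component:

```python
# Toy 1-D illustration, assuming a hypothetical 3-tap blur: an
# unmodified line of intended width 3 spreads to width 5 after blur,
# while a pre-narrowed line ends up at the intended width.

def blur_1d(signal, kernel=(0.25, 0.5, 0.25)):
    """Apply a small symmetric blur kernel with zero boundaries."""
    half = len(kernel) // 2
    out = [0.0] * len(signal)
    for i in range(len(signal)):
        for j, w in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                out[i] += w * signal[idx]
    return out

def lit_width(signal, threshold=0.1):
    """Count samples bright enough to be perceived as part of the line."""
    return sum(1 for v in signal if v > threshold)

line = [0, 0, 0, 1, 1, 1, 0, 0, 0]      # intended width 3
narrowed = [0, 0, 0, 0, 1, 0, 0, 0, 0]  # pre-narrowed to width 1
```

After the blur, the unmodified line is perceived as 5 samples wide, while the pre-narrowed line is perceived at the originally intended width of 3.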
Moreover, in certain scenarios, the perceived distortion of red light components 805 and/or blue light components 815 may be perceived by the user as a color distortion rather than a loss of detail. Therefore, in certain scenarios and embodiments, the generated blur kernel may provide one or more aspects of color correction as well as blur compensation.
The routine begins at block 905, in which an angular spread for each of one or more light components (e.g., red light components 805 and/or blue light components 815 of
At block 910, a blur kernel is generated for each of one or more light components of a virtual image based on the determined angular spread of those light components and the respective focal distances at which individual light components of the virtual image are to be converged. The routine proceeds to block 915.
At block 915, the display device receives the virtual image for display to a user. The routine proceeds to block 920, in which the display device modifies the received virtual image based on the generated blur kernels. In certain embodiments, for example, modifying the virtual image may include one or more preprocessing operations (e.g., one or more deconvolution operations) utilizing the generated blur kernels.
After block 920, the routine proceeds to block 925, in which display light representing the modified virtual image is generated by a light engine of the display device (e.g., light engine 211 of
At block 930, the generated display light is directed to an eye of the user via waveguide (e.g., diffractive waveguide 235 of
Examples, as described herein, may include, or may operate by, logic or a number of components, or mechanisms. Circuitry is a collection of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership may be flexible over time and underlying hardware variability. Circuitries include members that may, alone or in combination, perform specified operations when operating. In an example, hardware of the circuitry may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuitry may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuitry in hardware via the variable connections to carry out portions of the specific operation when in operation. Accordingly, the computer readable medium is communicatively coupled to the other components of the circuitry when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuitry. For example, under operation, execution units may be used in a first circuit of a first circuitry at one point in time and reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry at a different time.
The WHUD computing system 1000 may include one or more hardware processors 1002 (e.g., a central processing unit (CPU), a hardware processor core, or any combination thereof), a main memory 1004, and a graphics processing unit (GPU) 1006, some or all of which may communicate with each other via an interlink (e.g., bus) 1008. The WHUD computing system 1000 may further include a display unit 1010 (such as a display monitor or other display device), an alphanumeric input device 1012 (e.g., a keyboard or other physical or touch-based actuators), and a user interface (UI) navigation device 1014 (e.g., a mouse or other pointing device, such as a touch-based interface). In one example, the display unit 1010, input device 1012, and UI navigation device 1014 may include a touch screen display. The WHUD computing system 1000 may additionally include a storage device (e.g., drive unit) 1016, a signal generation device 1018 (e.g., a speaker), a network interface device 1020, and one or more sensors 1021, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The WHUD computing system 1000 may include an output controller 1028, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
The storage device 1016 may include a computer readable medium 1022 on which is stored one or more sets of data structures or instructions 1024 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 1024 may also reside, completely or at least partially, within the main memory 1004, within GPU 1006, or within the hardware processor 1002 during execution thereof by the WHUD computing system 1000. In an example, one or any combination of the hardware processor 1002, the main memory 1004, the GPU 1006, or the storage device 1016 may constitute computer readable media.
While the computer readable medium 1022 is illustrated as a single medium, the term “computer readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 1024.
The term “computer readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the WHUD computing system 1000 and that cause the WHUD computing system 1000 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting computer readable medium examples may include solid-state memories, and optical and magnetic media. In an example, a massed computer readable medium includes a computer readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed computer readable media are not transitory propagating signals. Specific examples of massed computer readable media may include non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 1024 may further be transmitted or received over a communications network 1026 using a transmission medium via the network interface device 1020 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone Service (POTS) networks, wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, the IEEE 802.16 family of standards known as WiMax®, and the IEEE 802.15.4 family of standards), and peer-to-peer (P2P) networks, among others. In an example, the network interface device 1020 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 1026. In an example, the network interface device 1020 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the WHUD computing system 1000, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.
A computer readable storage medium may include any storage medium, or combination of storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).
Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. For example, in certain embodiments, portions of the routine 900 described above may be performed externally to the display device, such as when determining the angular spread associated with one or more light components and/or generating the blur kernels associated with those light components is performed as part of an initialization or configuration of the display device (e.g., as part of manufacture or initial configuration). Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.
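As one non-limiting illustration of the external precomputation described above, per-channel blur kernels could be generated once (e.g., at manufacture or initial configuration) from each light component's angular spread and stored for later use in compensating the virtual image. This is only a sketch under stated assumptions: the disclosure does not specify a kernel model, so a Gaussian kernel is assumed here, and the function names and angular-spread values are hypothetical.

```python
import numpy as np

def gaussian_blur_kernel(angular_spread: float, size: int = 5) -> np.ndarray:
    """Build a normalized 2-D Gaussian blur kernel whose width scales with
    the angular spread of one light component.
    (Hypothetical helper; the disclosure does not specify a kernel shape.)"""
    sigma = max(angular_spread, 1e-6)  # guard against a zero-width Gaussian
    ax = np.arange(size) - size // 2   # e.g., [-2, -1, 0, 1, 2] for size=5
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()       # normalize so total energy is preserved

# Performed once, externally to the display device: one kernel per color
# channel, keyed by that channel's angular spread (illustrative values only).
ANGULAR_SPREADS = {"red": 0.8, "green": 0.5, "blue": 1.1}
PRECOMPUTED_KERNELS = {
    channel: gaussian_blur_kernel(spread)
    for channel, spread in ANGULAR_SPREADS.items()
}
```

At run time, the display device would then need only to look up the stored kernel for each component light rather than regenerating it per frame.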
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.
Number | Date | Country
---|---|---
63290083 | Dec 2021 | US