This application claims the benefit of U.S. Provisional Application No. 63/105,320, filed 25 Oct. 2020, and U.S. Provisional Application No. 63/194,821, filed 28 May 2021, the disclosures of each of which are incorporated, in their entirety, by this reference. Co-pending U.S. application Ser. No. 17/506,640, filed 20 Oct. 2021, is incorporated, in its entirety, by this reference.
The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
Compiling shaders (e.g., for applications that make use of graphics processing units (GPUs), such as many video games) can be expensive in terms of time and computing resources, potentially impacting both loading time and in-game performance. Ordinarily, shader caches may accumulate over time, reducing the need for repetitive operations to prepare the shaders. However, in the cloud gaming context, each game session may be cleared when complete, meaning the shader cache may be lost after each session. The present disclosure is directed to providing shader cache information for use in cloud gaming sessions. When users or testers play a game, shader cache information may be collected. The shader cache information across the various sessions may be collected and aggregated. The aggregated cache information can then be distributed for use in new game sessions that would not otherwise have access to a previously built shader cache.
The present disclosure is generally directed to distributing compiled shaders within a cloud gaming environment. By providing shader information to instances of video games executing within a cloud gaming environment, embodiments of the present disclosure may improve the field of cloud gaming by improving performance, reducing load times, and/or conserving computing resources. In addition, embodiments of the present disclosure may improve the functioning of virtualization systems by making the benefits of shader caching more available in these contexts. Moreover, embodiments of the present disclosure may improve the functioning of servers that execute instances of video games within virtual containers by enabling the servers to perform the task of streaming video games with greater performance, reduced load times, and/or fewer computing resources. Furthermore, these embodiments may improve the functioning of client systems to which video games are streamed by reducing the average initialization latency for a new streaming session.
Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The following will provide, with reference to the accompanying drawings, detailed descriptions of systems and methods for distributing shader information within a cloud gaming environment.
Regarding the network 104, any suitable network 104 may be used. In certain examples, the network 104 is the Internet, a LAN, a WAN, or the like. Furthermore, any suitable remote device 106 may be used and may include, without limitation, a mobile device such as a smartphone or tablet, a PC, an artificial-reality system, or the like. The remote device 106 may be a client device that interacts with, and/or presents content provided by, the cloud application platform 102 via a web browser or other application on the remote device 106. Furthermore, the remote device 106 may be in communication with an input device 108 to provide input to the remote device 106. The remote device 106 may, in turn, transmit signals to the cloud application platform 102 to control the application based, in part, on input received from the input device 108. The input device 108 may be any suitable device for providing input and may include, without limitation, a device embodied separately from the remote device 106, such as an external mouse, a keyboard, a game controller, or the like, or a device integrated and/or included with the remote device 106 such as an integrated mouse, a touchscreen, an onboard microphone, or the like.
The cloud application platform 102 may provide an environment with which to execute an application (e.g., a video game) for delivery across the Internet. Specifically, in certain examples, the cloud application platform 102 may provide a server-side hosted environment in which to execute the application. In certain examples, the term “server-side” may refer to a classification of resources that run on a server or other suitable platform to generate and/or deliver content over a network to a remote device 106. In some examples, the cloud application platform 102 may provide various optimizations to allow for enhanced execution of an application such as, for example, applications not designed to operate on the server-side hosted environment, in a cloud-hosted infrastructure, and the like.
In some examples, the cloud application platform 102 may optimize graphics processing of an application executing in the server-side hosted environment, such that an application that is non-native to the server-side hosted environment may execute in such an environment without performance degradation.
In certain examples, and as described in more detail below, the application may be a video game. Furthermore, the video game may be an existing video game designed to execute natively on a local device, on a particular operating system, in a particular environment, or the like. Therefore, the system 100 may host and provide cloud delivery for an existing game, designed for a different platform, to allow for an end-user to play on the end-user's device without performance degradation and without the need for substantial modifications to the game.
The cloud application platform 102 may have any suitable architecture to allow execution of an application in a server-side hosted environment.
The operating system 202 may be any suitable operating system. In one example, the operating system 202 supports the basic functioning of the cloud application platform 102 such as hardware and software management, access to resources, task management, and the like. The operating system 202 may include an operating system (“OS”) virtualization layer 208 to provide operating system virtualization capabilities allowing the cloud application platform 102 to support multiple, isolated virtual environments. Any suitable OS virtualization layer 208 may be used. In some examples, the OS virtualization layer 208 is a Kernel-based Virtual Machine (“KVM”).
The operating system 202 and OS virtualization layer 208 may support one or more virtual containers 210. The cloud application platform 102 may use any suitable virtual container 210. A virtual container 210, in certain examples, is a virtualized software unit providing an isolated environment for software execution and is described in greater detail below.
A virtual container 210 may provide a sandboxed environment to support and execute a server-side hosted environment 212. Likewise, the server-side hosted environment 212 may, in turn, execute an application 214. As will be described in greater detail below, the server-side hosted environment 212 may be any suitable environment for executing the application 214. In certain examples, the server-side hosted environment 212 may be an operating system, an emulator that emulates a specific operating system, an operating system virtual machine, and the like.
Although a particular number of virtual containers 210 is depicted, the cloud application platform 102 may support any suitable number of virtual containers 210.
As illustrated in FIG. 3, at step 310 one or more of the systems described herein may identify a video game configured to be available to stream from within a cloud gaming environment.
The systems described herein may identify the video game in any suitable context. For example, the systems described herein may identify the video game as one of multiple video games digitally catalogued by and/or for execution within the cloud gaming environment. In some examples, the systems described herein may identify the video game by monitoring a storage location where a graphics processing unit stores cached shader information and determining that new information has been stored at the location. Additionally or alternatively, the systems described herein may identify the video game during a deallocation and/or clean-up phase in which an instance of the video game is about to be deallocated or in the process of being deallocated. Accordingly, the systems described herein may identify cached shader information generated by the instance of the video game before the cached shader information is lost in the deallocation process.
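By way of a non-limiting illustration, the following Python sketch shows one way such monitoring of a shader-cache storage location could work; the polling approach and all names are hypothetical and not prescribed by this disclosure:

    import os
    import time

    def watch_shader_cache(cache_dir: str, poll_seconds: float = 5.0):
        """Poll a (hypothetical) GPU shader-cache directory and yield the
        paths of files that are new or rewritten since last observed."""
        seen = {}  # path -> last observed modification time
        while True:
            for entry in os.scandir(cache_dir):
                if entry.is_file():
                    mtime = entry.stat().st_mtime
                    if seen.get(entry.path) != mtime:
                        seen[entry.path] = mtime
                        yield entry.path
            time.sleep(poll_seconds)

Each path yielded by this generator could then be treated as newly cached shader information associated with the executing video game.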
Returning to FIG. 3, at step 320 one or more of the systems described herein may save, to a cache, compiled shader information generated from executing an instance of the video game within the cloud gaming environment.
As used herein, the term “shader” may refer to any set of instructions and/or specifications for rendering, generating, enhancing, and/or modifying graphics effects for an application. In some examples, a shader may include instructions to be executed on a graphics processing unit as part of the execution of a corresponding application. In some examples, a shader may be compiled upon execution of the application (e.g., as part of the loading process of the application) and/or during use of the application (e.g., on an as-needed basis). Compiling shaders in association with execution of the application may enable shaders to be appropriately compiled and re-compiled for particular GPUs (and, e.g., particular drivers and/or versions of drivers for GPUs). In some examples, a collection of shaders may be defined with respect to a particular application for a particular graphics processing unit. Thus, for example, a video game may use thousands of shaders to utilize a graphics processing unit to render graphics in the game. However, not all shaders may be needed for all parts of the game. Accordingly, in some examples, a shader may be compiled on a just-in-time basis (e.g., during gameplay), potentially impacting game performance. In various examples, a GPU may cache compiled shaders for an application so that they can be reused instead of requiring that the shaders be recompiled each time the application is executed. However, if the application executes within a virtual container, the GPU may cache the compiled shaders in a location that may not be preserved when the virtual container is destroyed.
The systems described herein may save the compiled shader information to a cache in any suitable manner. For example, these systems may save the compiled shader information to a cache outside of a virtual container in which the instance of the video game executed (e.g., such that the compiled shader information is preserved after the virtual container is destroyed). In some examples, these systems may identify a location where a GPU caches the compiled shader information and copy the compiled shader information from the GPU cache to another location (e.g., to a persistent repository on a server on which the instance of the video game executed and/or to a central server used for collecting compiled shader information).
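As a minimal sketch of this copy-out step (assuming a hypothetical directory layout; the per-game repository organization is one possible design, not the required one):

    import shutil
    from pathlib import Path

    def preserve_shader_cache(container_cache: Path, repository: Path,
                              game_id: str) -> None:
        """Copy a container's GPU shader cache into a persistent per-game
        repository so it survives destruction of the virtual container."""
        dest = repository / game_id
        dest.mkdir(parents=True, exist_ok=True)
        for f in container_cache.rglob("*"):
            if f.is_file():
                target = dest / f.relative_to(container_cache)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)  # copy2 also preserves timestamps

Running such a routine before the container is torn down preserves the compiled shader information outside the container's ephemeral storage.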
In some examples, systems described herein may aggregate the compiled shader information with additional compiled shader information. For example, these systems may aggregate the compiled shader information with additional compiled shader information generated from executing one or more additional instances of the video game. The aggregation may be performed in any suitable manner. For example, compiled shader information for a video game may be added to a repository. When new compiled shader information is available, unique portions of the new compiled shader information may be added to the repository (e.g., specific compiled shaders that are not yet present in the repository).
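One plausible implementation of this dedupe-on-ingest step keys each shader blob by a hash of its content (the .bin extension and flat layout below are assumptions for illustration):

    import hashlib
    from pathlib import Path

    def aggregate_unique(new_cache: Path, repository: Path) -> int:
        """Add to the repository only shader blobs whose content hash is
        not already present; return the number of blobs added."""
        existing = {p.stem for p in repository.glob("*.bin")}
        added = 0
        for blob in new_cache.glob("*.bin"):
            data = blob.read_bytes()
            digest = hashlib.sha256(data).hexdigest()
            if digest not in existing:
                (repository / (digest + ".bin")).write_bytes(data)
                existing.add(digest)
                added += 1
        return added

Content-addressing the repository in this way makes aggregation idempotent: re-ingesting the same session's cache adds nothing new.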
In some examples, compiled shader information from one instance of a video game executed on a server (e.g., within a virtual container on the server) may be preserved for use by another instance of the video game later executed on the server (e.g., within a different virtual container). Furthermore, compiled shader information from multiple instances of the video game executed on the server (e.g., within different virtual containers at various times) may be preserved and aggregated for use by future instances of the video game executed on the server.
Additionally or alternatively, in some examples compiled shader information may be gathered, saved, and aggregated from across servers. For example, a first instance of the video game may execute on a first server and a second instance of the video game may execute on a second server. In this example, systems described herein may aggregate the compiled shader information from the first instance on the first server with the compiled shader information from the second instance on the second server. This aggregated compiled shader information may be used for future instances of the video game executing on the first server, on the second server, and/or on other servers.
In addition, in some examples compiled shader information may be gathered, saved, and aggregated from video game streaming sessions across user accounts. For example, a first instance of the video game may be allocated to a first user account for streaming and a second instance of the video game may be allocated to a second user account. In this example, systems described herein may aggregate the compiled shader information from the first instance allocated to the first user account with the compiled shader information from the second instance allocated to the second user account. This aggregated compiled shader information may be used for future instances of the video game allocated to the first user account, the second user account, and/or other user accounts. In some examples, most information associated with different user accounts may be kept separate and/or private. However, because compiled shader information may not represent information personal to a user, and because, in some examples, identifying information may not accompany compiled shader information used in aggregation, systems described herein may use compiled shader information generated by an instance of a video game allocated to a user account while protecting the privacy of the user account.
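As a sketch of one privacy-preserving measure consistent with the above (the metadata field names are hypothetical), collected cache metadata could be reduced to an explicit whitelist of non-identifying fields before entering the shared repository:

    ALLOWED_FIELDS = {"game_id", "gpu_model", "driver_version", "shader_hash"}

    def anonymize_metadata(meta: dict) -> dict:
        """Drop any field (e.g., a user or session identifier) that is not
        on the whitelist of non-identifying fields."""
        return {k: v for k, v in meta.items() if k in ALLOWED_FIELDS}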
As discussed earlier, compiled shader information for an application may be specific to a GPU (and, in some examples, to a specific driver version used for the GPU). Accordingly, systems described herein may, when saving the compiled shader information, label or otherwise designate the compiled shader information as corresponding to the GPU (and, e.g., the specific driver version) for which the compiled shader information was generated. In these examples, compiled shader information generated from the execution of various instances of the video game may be aggregated separately by GPU details. Thus, instances of the video game executing with the same GPU details (e.g., GPU model and driver version) may be aggregated by systems described herein separately from instances of the video game executing with different GPU details.
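A simple way to realize this per-GPU separation is to encode the GPU details in the repository path under which shader information is saved (the directory names below are illustrative only):

    from pathlib import Path

    def labeled_path(repo_root: Path, game_id: str,
                     gpu_model: str, driver_version: str) -> Path:
        """Return the repository location for caches aggregated separately
        per (game, GPU model, driver version) combination."""
        return repo_root / game_id / gpu_model / driver_version

    # Hypothetical resulting layout:
    #   shader_repo/space_racer/gpu_x100/driver_510_47/<sha256>.bin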
In some examples, as described above, systems described herein may harvest compiled shader information from instances of the video game that executed as a part of a cloud streaming service to stream the video game to a client system operated by an end user. Thus, the use of the cloud streaming service to play games may contribute to the compiled shader information that is used for future streaming of the game via the service. While the instance of the video game from which the compiled shader information is harvested may execute as part of the ordinary use of the service, in some examples the instance of the video game may be executed within the cloud gaming environment in a testing process of the video game within the cloud gaming environment. Thus, shader compilation work performed to enable the operation of the video game within the cloud gaming environment to be tested may be reused (e.g., for future instances used for testing and/or for future instances used for actual gameplay). The testing process may include a human testing process and/or an automated testing process. In some examples, the automated testing process may be configured to systematically test parts of the video game to elicit over time the compilation and/or use of target shaders (or, e.g., all shaders) within the video game.
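A hedged sketch of such an automated harness (the scenario-script mechanism and all names are assumptions, not part of this disclosure) might iterate over scripted gameplay segments and harvest the cache after each:

    from typing import Callable, Iterable

    def run_shader_coverage(scenarios: Iterable[Callable[[], None]],
                            harvest_cache: Callable[[], int]) -> int:
        """Execute each scripted scenario to trigger shader compilation,
        then harvest newly compiled shaders; return total shaders collected."""
        total = 0
        for scenario in scenarios:
            scenario()                # drive the game through one segment
            total += harvest_cache()  # e.g., aggregate_unique(...) above
        return total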
Returning to FIG. 3, at step 330 one or more of the systems described herein may receive a request to load a new instance of the video game within the cloud gaming environment.
Systems described herein may receive the request in any of a variety of contexts. For example, a client system may transmit a request by a user to stream the video game (e.g., the user may select the video game to play on the client system). In some examples, the client system may communicate the selection of the video game to a cloud gaming service. For example, the client system may communicate the selection to a server of the cloud gaming service (e.g., the server discussed with respect to steps 310 and 320 above and/or another server deployed for use of the cloud gaming service). In some examples, the client system may communicate the selection to an allocation management system that matches user requests to a server. Thus, in some examples, the systems described herein may receive the request to load the new instance of the video game as part of an allocation process.
In some examples, systems described herein may pre-load instances of the video game on one or more servers (e.g., to be ready for play more quickly when a user makes a request). In these examples, systems described herein may receive the request to load the new instance of the video game in response to determining that the number of pre-loaded instances of the video game falls short of a target threshold.
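A minimal sketch of this replenishment logic (the pool representation and launch callable are hypothetical):

    from typing import Callable, List

    def replenish_pool(pool: List[object], target: int,
                       launch_instance: Callable[[], object]) -> None:
        """Treat any shortfall of pre-loaded instances relative to the
        target threshold as implicit requests to load new instances."""
        while len(pool) < target:
            pool.append(launch_instance())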
In some examples, the process of selecting the video game on the client system may be seamless from the user's perspective; that is, the interface presented to the user on the client system may show one or more video games available to be launched by the user, and the user may select the video game to launch. From the user's perspective, the video game may quickly begin streaming to the user, partly owing to less time being spent compiling shaders while loading the video game.
Returning to FIG. 3, at step 340 one or more of the systems described herein may load the new instance of the video game at least in part by reusing the compiled shader information from the cache.
As discussed earlier, because the new instance of the video game may be separate from other instances of the video game (whether executed on the same server or other servers), e.g., by being executed in a separate virtual container, the new instance of the video game may not initially have access to any cached shader information. However, systems described herein may reuse the compiled shader information that was copied, saved, and/or cached outside of any of the individual virtual containers used to execute instances of the video game. For example, these systems may make the compiled shader information accessible to a virtual container. Specifically, these systems may store the compiled shader information in a logical location within the virtual container at which cached shader information is stored for applications executing within the virtual container.
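By way of illustration, the injection step could copy the preserved shaders into whatever path the GPU driver inside the container treats as its cache (the subdirectory name below is a placeholder, as the actual location is driver-specific):

    import shutil
    from pathlib import Path

    def inject_shader_cache(repository: Path, container_root: Path,
                            cache_subdir: str = "gpu_shader_cache") -> None:
        """Copy aggregated compiled shaders to the logical location inside
        the container where the driver expects its cache, so the new game
        instance starts with an already-warm cache."""
        dest = container_root / cache_subdir
        dest.mkdir(parents=True, exist_ok=True)
        for blob in repository.glob("*.bin"):
            shutil.copy2(blob, dest / blob.name)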
In some examples, as discussed earlier, the compiled shader information may have been aggregated with additional compiled shader information from additional executing instances of the video game, potentially on different servers and/or for different user accounts. In these examples, reusing the compiled shader information from the cache may entail providing the aggregated compiled shader information for use by the new instance of the video game (e.g., by the GPU executing shaders for the new instance of the video game).
Furthermore, as discussed earlier, in some examples the compiled shader information may have been designated as corresponding to a particular type of GPU (e.g., model and/or driver version). Accordingly, systems described herein may identify the type of GPU used for the new instance of the video game and then identify the cached compiled shader information based on the type of GPU. Upon determining that the cached compiled shader information was generated from the same type of GPU, the systems described herein may reuse the cached compiled shader information for the new instance of the video game.
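Continuing the illustrative repository layout sketched earlier, the lookup side of this GPU-type check might be (again with hypothetical path conventions):

    from pathlib import Path
    from typing import Optional

    def select_cache(repo_root: Path, game_id: str,
                     gpu_model: str, driver_version: str) -> Optional[Path]:
        """Return the aggregated cache only if it was generated on the same
        GPU model and driver version; otherwise return None to signal a
        fallback to ordinary just-in-time shader compilation."""
        candidate = repo_root / game_id / gpu_model / driver_version
        return candidate if candidate.is_dir() else None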
Once client 480 terminates the streaming session, virtual container 430 may be deallocated (and, e.g., data stored by virtual container 430 may be discarded). However, before virtual container 430 is deallocated and compiled shader 460 is lost, shader information distribution system 490 may collect shader information 462 from compiled shader 460. In one example, shader information 462 may include compiled shader 460 itself and/or information describing compiled shader 460.
Later, server 422 may host a virtual container 432 that includes a server-side hosted environment 452 and a game instance 442. Server 422 may be configured to stream game instance 442 to a client 482. The execution of game instance 442 may ordinarily compile a shader for use by a graphics processing unit 472 in executing game instance 442. However, compiling the shader may be time- and resource-intensive. Thus, compiling the shader may slow load times, reduce performance during gameplay, and/or prevent server 422 from performing other tasks (such as hosting additional virtual containers with additional game instances). Accordingly, shader information distribution system 490 may inject shader information 462 into virtual container 432 so that game instance 442 and/or graphics processing unit 472 find shader information 462 ready for use by graphics processing unit 472, preventing the superfluous compilation of the shader.
Systems described herein may later use aggregated compiled shader repository 620 to provide already compiled shaders for use by an instance of a video game executing for the cloud-based game streaming service. In addition, in some examples additional compiled shaders for the video game that are still missing from aggregated compiled shader repository 620 may be added in the future, such that aggregated compiled shader repository 620 continues to improve by becoming more complete over time through the repeated use of the cloud-based game streaming service.
Shader information distribution system 750 may provide aggregated shader information 780 to server 724. A game instance 742 may be hosted by server 724. Because game instance 742 uses GPU 714, systems described herein may use aggregated shader information 780 for running game instance 742 instead of, e.g., compiling shaders 766, 768, and 770.
As described above, cloud gaming may present a challenge wherein each game session is cleaned up at the end, causing the shader cache from previous game play to be lost. Preparing the shader cache may be a processing-heavy task and may impact loading time of a game and/or frames-per-second stability of the game. Accordingly, systems described herein provide for automated shader collection, aggregation, and distribution within a cloud gaming environment.
Example 1: A computer-implemented method for distributing shader information may include identifying a video game configured to be available to stream from within a cloud gaming environment; saving, to a cache, compiled shader information generated from executing an instance of the video game within the cloud gaming environment; receiving a request to load a new instance of the video game within the cloud gaming environment; and loading the new instance of the video game at least in part by reusing the compiled shader information from the cache.
Example 2: The computer-implemented method of Example 1, further including aggregating the compiled shader information with additional compiled shader information generated from executing at least one additional instance of the video game, resulting in aggregated compiled shader information; and where reusing the compiled shader information from the cache comprises using the aggregated compiled shader information.
Example 3: The computer-implemented method of any of Examples 1 and 2, where aggregating the compiled shader information with the additional compiled shader information comprises adding at least one unique portion of information from the compiled shader information to a repository that comprises the additional compiled shader information.
Example 4: The computer-implemented method of any of Examples 1-3, where the instance of the video game executed on a first server; the additional instance of the video game executed on a second server; and aggregating the compiled shader information with the additional compiled shader information comprises collecting the compiled shader information and the additional compiled shader information from the first and second servers, respectively.
Example 5: The computer-implemented method of any of Examples 1-4, where the instance of the video game was allocated to a first user account; the additional instance of the video game was allocated to a second user account; and aggregating the compiled shader information with the additional compiled shader information includes collecting the compiled shader information and the additional compiled shader information generated from the gameplay of the first and second user accounts, respectively.
Example 6: The computer-implemented method of any of Examples 1-5, where the instance of the video game is executed within the cloud gaming environment in a testing process of the video game within the cloud gaming environment.
Example 7: The computer-implemented method of any of Examples 1-6, where reusing the compiled shader information from the cache comprises copying the compiled shader information to a location where a computing system executing the new instance of the video game is configured to store and retrieve compiled shader information for the video game.
Example 8: The computer-implemented method of any of Examples 1-7, where reusing the compiled shader information from the cache includes identifying a virtual container that has been provisioned for running the new instance of the video game; and making the compiled shader information accessible to the virtual container.
Example 9: The computer-implemented method of any of Examples 1-8, where saving the compiled shader information to the cache includes designating the saved compiled shader information as corresponding to a type of graphics processing unit used in execution of the instance of the video game.
Example 10: The computer-implemented method of any of Examples 1-9, where reusing the compiled shader information from the cache is in response, at least in part, to determining that the new instance of the video game is configured to use an additional graphics processing unit of the same type as the graphics processing unit used in execution of the instance of the video game.
Embodiments of the present disclosure may include or be implemented in conjunction with various types of artificial-reality systems. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, for example, a virtual reality, an augmented reality, a mixed reality, a hybrid reality, or some combination and/or derivative thereof. Artificial-reality content may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The artificial-reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an artificial reality and/or are otherwise used in (e.g., to perform activities in) an artificial reality.
Artificial-reality systems may be implemented in a variety of different form factors and configurations. Some artificial-reality systems may be designed to work without near-eye displays (NEDs). Other artificial-reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented-reality system 800 in FIG. 8) or that visually immerses a user in an artificial reality (such as, e.g., virtual-reality system 900 in FIG. 9).
Turning to FIG. 8, augmented-reality system 800 may include an eyewear device 802 with a frame 810 configured to hold a left display device 815(A) and a right display device 815(B) in front of a user's eyes. Display devices 815(A) and 815(B) may act together or independently to present an image or series of images to a user.
In some embodiments, augmented-reality system 800 may include one or more sensors, such as sensor 840. Sensor 840 may generate measurement signals in response to motion of augmented-reality system 800 and may be located on substantially any portion of frame 810. Sensor 840 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof. In some embodiments, augmented-reality system 800 may or may not include sensor 840 or may include more than one sensor. In embodiments in which sensor 840 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 840. Examples of sensor 840 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
In some examples, augmented-reality system 800 may also include a microphone array with a plurality of acoustic transducers 820(A)-820(J), referred to collectively as acoustic transducers 820. Acoustic transducers 820 may represent transducers that detect air pressure variations induced by sound waves. Each acoustic transducer 820 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format). The microphone array in FIG. 8 may include, for example, ten acoustic transducers: 820(A) and 820(B), which may be designed to be placed inside a corresponding ear of the user, acoustic transducers 820(C), 820(D), 820(E), 820(F), 820(G), and 820(H), which may be positioned at various locations on frame 810, and/or acoustic transducers 820(I) and 820(J), which may be positioned on a corresponding neckband 805.
In some embodiments, one or more of acoustic transducers 820(A)-(J) may be used as output transducers (e.g., speakers). For example, acoustic transducers 820(A) and/or 820(B) may be earbuds or any other suitable type of headphone or speaker.
The configuration of acoustic transducers 820 of the microphone array may vary. While augmented-reality system 800 is shown in FIG. 8 as having ten acoustic transducers 820, the number of acoustic transducers 820 may be greater or less than ten.
Acoustic transducers 820(A) and 820(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Alternatively, there may be additional acoustic transducers 820 on or surrounding the ear in addition to acoustic transducers 820 inside the ear canal. Having an acoustic transducer 820 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 820 on either side of a user's head (e.g., as binaural microphones), augmented-reality system 800 may simulate binaural hearing and capture a 3D stereo sound field around a user's head. In some embodiments, acoustic transducers 820(A) and 820(B) may be connected to augmented-reality system 800 via a wired connection 830, and in other embodiments acoustic transducers 820(A) and 820(B) may be connected to augmented-reality system 800 via a wireless connection (e.g., a BLUETOOTH connection). In still other embodiments, acoustic transducers 820(A) and 820(B) may not be used at all in conjunction with augmented-reality system 800.
Acoustic transducers 820 on frame 810 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 815(A) and 815(B), or some combination thereof. Acoustic transducers 820 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented-reality system 800. In some embodiments, an optimization process may be performed during manufacturing of augmented-reality system 800 to determine relative positioning of each acoustic transducer 820 in the microphone array.
In some examples, augmented-reality system 800 may include or be connected to an external device (e.g., a paired device), such as neckband 805. Neckband 805 generally represents any type or form of paired device. Thus, the following discussion of neckband 805 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wrist bands, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external compute devices, etc.
As shown, neckband 805 may be coupled to eyewear device 802 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 802 and neckband 805 may operate independently without any wired or wireless connection between them. While FIG. 8 illustrates the components of eyewear device 802 and neckband 805 in example locations, these components may be located elsewhere and/or distributed differently on eyewear device 802 and/or neckband 805.
Pairing external devices, such as neckband 805, with augmented-reality eyewear devices may enable the eyewear devices to achieve the form factor of a pair of glasses while still providing sufficient battery and computation power for expanded capabilities. Some or all of the battery power, computational resources, and/or additional features of augmented-reality system 800 may be provided by a paired device or shared between a paired device and an eyewear device, thus reducing the weight, heat profile, and form factor of the eyewear device overall while still retaining desired functionality. For example, neckband 805 may allow components that would otherwise be included on an eyewear device to be included in neckband 805 since users may tolerate a heavier weight load on their shoulders than they would tolerate on their heads. Neckband 805 may also have a larger surface area over which to diffuse and disperse heat to the ambient environment. Thus, neckband 805 may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone eyewear device. Since weight carried in neckband 805 may be less invasive to a user than weight carried in eyewear device 802, a user may tolerate wearing a lighter eyewear device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavy standalone eyewear device, thereby enabling users to more fully incorporate artificial-reality environments into their day-to-day activities.
Neckband 805 may be communicatively coupled with eyewear device 802 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 800. In the embodiment of FIG. 8, neckband 805 may include two acoustic transducers (e.g., 820(I) and 820(J)) that are part of the microphone array (or potentially form their own microphone subarray). Neckband 805 may also include a controller 825 and a power source 835.
Acoustic transducers 820(I) and 820(J) of neckband 805 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital). In the embodiment of FIG. 8, acoustic transducers 820(I) and 820(J) may be positioned on neckband 805, thereby increasing the distance between them and the acoustic transducers 820 positioned on eyewear device 802. In some cases, increasing the distance between acoustic transducers 820 of the microphone array may improve the accuracy of beamforming performed via the microphone array.
Controller 825 of neckband 805 may process information generated by the sensors on neckband 805 and/or augmented-reality system 800. For example, controller 825 may process information from the microphone array that describes sounds detected by the microphone array. For each detected sound, controller 825 may perform a direction-of-arrival (DOA) estimation to estimate a direction from which the detected sound arrived at the microphone array. As the microphone array detects sounds, controller 825 may populate an audio data set with the information. In embodiments in which augmented-reality system 800 includes an inertial measurement unit, controller 825 may compute all inertial and spatial calculations from the IMU located on eyewear device 802. A connector may convey information between augmented-reality system 800 and neckband 805 and between augmented-reality system 800 and controller 825. The information may be in the form of optical data, electrical data, wireless data, or any other transmittable data form. Moving the processing of information generated by augmented-reality system 800 to neckband 805 may reduce weight and heat in eyewear device 802, making it more comfortable to the user.
Power source 835 in neckband 805 may provide power to eyewear device 802 and/or to neckband 805. Power source 835 may include, without limitation, lithium ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 835 may be a wired power source. Including power source 835 on neckband 805 instead of on eyewear device 802 may help better distribute the weight and heat generated by power source 835.
As noted, some artificial-reality systems may, instead of blending an artificial reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual-reality system 900 in FIG. 9, that mostly or completely covers a user's field of view.
Artificial-reality systems may include a variety of types of visual feedback mechanisms. For example, display devices in augmented-reality system 800 and/or virtual-reality system 900 may include one or more liquid crystal displays (LCDs), light emitting diode (LED) displays, microLED displays, organic LED (OLED) displays, digital light projector (DLP) micro-displays, liquid crystal on silicon (LCoS) micro-displays, and/or any other suitable type of display screen. These artificial-reality systems may include a single display screen for both eyes or may provide a display screen for each eye, which may allow for additional flexibility for varifocal adjustments or for correcting a user's refractive error. Some of these artificial-reality systems may also include optical subsystems having one or more lenses (e.g., conventional concave or convex lenses, Fresnel lenses, adjustable liquid lenses, etc.) through which a user may view a display screen. These optical subsystems may serve a variety of purposes, including to collimate (e.g., make an object appear at a greater distance than its physical distance), to magnify (e.g., make an object appear larger than its actual size), and/or to relay (to, e.g., the viewer's eyes) light. These optical subsystems may be used in a non-pupil-forming architecture (such as a single lens configuration that directly collimates light but results in so-called pincushion distortion) and/or a pupil-forming architecture (such as a multi-lens configuration that produces so-called barrel distortion to nullify pincushion distortion).
In addition to or instead of using display screens, some of the artificial-reality systems described herein may include one or more projection systems. For example, display devices in augmented-reality system 800 and/or virtual-reality system 900 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both artificial-reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Artificial-reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.
The artificial-reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented-reality system 800 and/or virtual-reality system 900 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An artificial-reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.
The artificial-reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.
In some embodiments, the artificial-reality systems described herein may also include tactile (i.e., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floormats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independent of other artificial-reality devices, within other artificial-reality devices, and/or in conjunction with other artificial-reality devices.
By providing haptic sensations, audible content, and/or visual content, artificial-reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, artificial-reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Artificial-reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's artificial-reality experience in one or more of these contexts and environments and/or in other contexts and environments.
As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.
In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.”